contest_id: string (length 1–4)
index: string (43 values)
title: string (length 2–63)
statement: string (length 51–4.24k)
tutorial: string (length 19–20.4k)
tags: list (length 0–11)
rating: int64 (800–3.5k)
code: string (length 46–29.6k)
2037
G
Natlan Exploring
You are exploring the stunning region of Natlan! This region consists of $n$ cities, and each city is rated with an attractiveness $a_i$. A directed edge exists from City $i$ to City $j$ if and only if $i < j$ and $\gcd(a_i,a_j)\neq 1$, where $\gcd(x, y)$ denotes the greatest common divisor (GCD) of integers $x$ and $y$. Starting from City $1$, your task is to determine the total number of distinct paths you can take to reach City $n$, modulo $998\,244\,353$. Two paths are different if and only if the set of cities visited is different.
Denote $dp[i]=$ the number of ways to get to city $i$. Brute-forcing over all possible previous cities is out of the question, as it takes $O(n^2\cdot\log({\max{a_i}}))$ time. What else can we do? Instead, consider casework on what the greatest common factor can be. Let's keep track of an array $count$, where $count[i]$ stores the sum of $dp$ values over all previous cities whose attractiveness has $i$ as a factor. Say the current city has attractiveness $a_i$. We can almost recover $dp[i]$ by adding up the $count$ values of all factors of $a_i$. Unfortunately, this fails as it overcounts many instances. For example, if $\gcd(a_i, a_j)=12$, the $dp$ state from city $j$ will be counted five times: once each for the common factors $2, 3, 4, 6, 12$. Note that we don't actually care what the greatest common factor is, since the only requirement is that it is not $1$. This also means that repeated appearances of the same prime in the factorization of $a_i$ don't matter at all - we can assume each prime factor occurs exactly once. Now, if $\gcd(a_i, a_j)=12$, it is only counted three times: for $2,3,6$. Now, instead of blindly adding the $count$ values from all previous states, let's apply the Principle of Inclusion-Exclusion on the prime factors: first add the $count$ values for the single prime factors, then subtract the $count$ values for the squarefree factors with two prime factors, then add those with three prime factors, and so on. It can be checked that every value is now counted exactly once. So what's the time complexity of this solution? Precomputing the set of all primes with a sieve takes $O(\max(a_i)\log(\max(a_i)))$ time (by the harmonic series $\frac{n}{1}+\frac{n}{2}+\ldots+\frac{n}{n}\approx n\log(n)$). For each number $a_i$, we have to consider all $2^{f(a_i)}$ subsets of prime factors, where $f(a_i)$ is the number of distinct prime factors of $a_i$. 
The number with the most distinct prime factors within the constraints is $510510=2\cdot3\cdot5\cdot7\cdot11\cdot13\cdot17$, so in the worst case $2^7=128$ operations are needed per number. This gives a total operation count of approximately $128\cdot n$, which passes within the time limit. Note that we may also use the Möbius function to compute the answer: its sign encodes exactly this inclusion-exclusion over squarefree divisors. The time complexity of this solution is $O(\max(a_i)\log(\max(a_i))+n\cdot\max(d(a_i)))$, where $d(a_i)$ is the number of squarefree divisors of $a_i$. This can be shown to match the time complexity above.
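As a sanity check of the inclusion-exclusion argument, here is a small self-contained Python sketch (the helper names `prime_factors` and `pie_terms` are illustrative, not taken from the model solution). It builds, for a number $x$, the signed list of products of nonempty subsets of its distinct prime factors, and verifies that summing the signs over the terms dividing another number $y$ yields exactly $1$ when $\gcd(x, y) > 1$ and $0$ otherwise:

```python
from itertools import combinations
from math import gcd

def prime_factors(x):
    """Distinct prime factors of x, by trial division."""
    ps = []
    d = 2
    while d * d <= x:
        if x % d == 0:
            ps.append(d)
            while x % d == 0:
                x //= d
        d += 1
    if x > 1:
        ps.append(x)
    return ps

def pie_terms(x):
    """Signed squarefree divisors (> 1) of x: sign +1 for a product of an
    odd number of prime factors, -1 for an even number."""
    ps = prime_factors(x)
    terms = []
    for k in range(1, len(ps) + 1):
        for sub in combinations(ps, k):
            d = 1
            for p in sub:
                d *= p
            terms.append((d, 1 if k % 2 == 1 else -1))
    return terms

# The net contribution of a previous city j to dp[i] is the signed sum of
# count values over the squarefree divisors of a_i that also divide a_j:
# exactly 1 when gcd(a_i, a_j) > 1 and 0 when the pair shares no prime.
for x in range(2, 60):
    for y in range(2, 60):
        net = sum(s for d, s in pie_terms(x) if y % d == 0)
        assert net == (1 if gcd(x, y) > 1 else 0)
```

For example, `pie_terms(12)` yields the terms $(2, +1)$, $(3, +1)$, $(6, -1)$, matching the "counted three times, then corrected by PIE" discussion above.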
[ "bitmasks", "combinatorics", "data structures", "dp", "math", "number theory" ]
2,000
import sys
from itertools import combinations

def input():
    return sys.stdin.buffer.readline().strip()

MOD = 998244353
ma = int(1e6 + 5)

# P[j] ends up holding the largest prime factor of j (P[j] == 1 for j < 2).
P = [1] * ma
D = [[] for _ in range(ma)]
for i in range(2, ma):
    if P[i] == 1:
        for j in range(i, ma, i):
            P[j] = i

F = [0] * ma   # F[d]: accumulated dp sums for squarefree divisor d
LU = [0] * ma  # LU[d]: index of the last test case that touched F[d]
BMS = [[] for _ in range(ma)]  # cached signed squarefree divisors
RES = []

def getBMS(x):
    # Products of all nonempty subsets of the distinct prime factors of x,
    # paired with (subset size - 1) to drive the inclusion-exclusion sign.
    if not BMS[x]:
        y = help(x)
        for i in range(len(y)):
            p = combinations(y, i + 1)
            for a in p:
                xx = 1
                for j in a:
                    xx *= j
                BMS[x].append((xx, i))
    return BMS[x]

def helps(x):
    # Store the distinct prime factors of x in D[x].
    y = x
    while x != 1:
        s = P[x]
        D[y].append(s)
        while x % s == 0:
            x //= s

def help(x):
    if not D[x]:
        helps(x)
    return D[x]

for yy in range(int(input())):
    n = int(input())
    A = list(map(int, input().split()))
    for xx, i in getBMS(A[0]):
        F[xx] = 1
        LU[xx] = yy
    for i in range(1, n - 1):
        tot = 0
        for xx, j in getBMS(A[i]):
            if LU[xx] != -1 and LU[xx] != yy:
                F[xx] = 0
                LU[xx] = yy
            if F[xx]:
                if j % 2:
                    tot -= F[xx]
                else:
                    tot += F[xx]
                tot %= MOD
        for xx, j in getBMS(A[i]):
            F[xx] += tot
            F[xx] %= MOD
    S = 0
    for xx, i in getBMS(A[-1]):
        if LU[xx] != -1 and LU[xx] != yy:
            LU[xx] = yy
            F[xx] = 0
        if F[xx]:
            if i % 2:
                S -= F[xx]
            else:
                S += F[xx]
            S %= MOD
    RES.append(str(S))

print("\n".join(RES))
2039
A
Shohag Loves Mod
Shohag has an integer $n$. Please help him find an \textbf{increasing} integer sequence $1 \le a_1 \lt a_2 \lt \ldots \lt a_n \le 100$ such that $a_i \bmod i \neq a_j \bmod j$ $^{\text{∗}}$ is satisfied over all pairs $1 \le i \lt j \le n$. It can be shown that such a sequence always exists under the given constraints. \begin{footnotesize} $^{\text{∗}}$$a \bmod b$ denotes the remainder of $a$ after division by $b$. For example, $7 \bmod 3 = 1, 8 \bmod 4 = 0$ and $69 \bmod 10 = 9$. \end{footnotesize}
THOUGHT: A general approach to ad-hoc problems is to play around with the conditions and see if we can add constraints that limit the search space. ACTION: Let's analyze the modular condition $a_i \bmod i$. We know that $a_i \bmod i < i$, and all $a_i \bmod i$ values must be distinct. Let's explore this step by step: $a_1 \bmod 1$ is always $0$. $a_2 \bmod 2$ can be $0$ or $1$, but since all $a_i \bmod i$ values must be distinct, $a_2 \bmod 2$ must be $1$ (otherwise it would equal $a_1 \bmod 1$). $a_3 \bmod 3$ can be $0$, $1$, or $2$; similarly, $a_3 \bmod 3$ must be $2$ (to avoid colliding with $a_1 \bmod 1$ or $a_2 \bmod 2$). $\ldots$ $a_n \bmod n$ must be $n-1$. OBSERVATION: This leads to the constraint $a_i \bmod i = i - 1$. THOUGHT: Next, let's consider the increasing-sequence condition. OBSERVATION: Since the sequence must be increasing, $a_i \ge a_1 + (i - 1) \ge i$, so we add the constraint $a_i \ge i$. THOUGHT: To further limit the search space, note that $n$ can be up to $50$, and $a_i$ must be $\le 100$. OBSERVATION: This suggests restricting $a_i$ to values up to $2 \cdot n$. THOUGHT: Let's compile the constraints: $a_i \bmod i = i - 1$, $a_i \ge i$, and $a_i \le 2 \cdot n$. Now we need to build a sequence that satisfies them. ACTION: Let's build the sequence starting from the end. $a_n \bmod n = n-1$, so $a_n$ can be $n-1$ or $2 \cdot n - 1$; since $a_n \ge n$, $a_n$ must be $2 \cdot n - 1$. $a_{n-1} \bmod (n-1) = n-2$, so $a_{n-1}$ can be $n-2$ or $(n - 2) + (n - 1) = 2 \cdot n - 3$; since $a_{n-1} \ge n-1$, $a_{n-1}$ must be $2 \cdot n - 3$. $\ldots$ $a_i \bmod i = i - 1$, so $a_i$ can be $i - 1$ or $2 \cdot i - 1$ or $3 \cdot i - 1$, $\ldots$. The candidate $i - 1$ is too small since $a_i \ge i$, and $3 \cdot i - 1$ and beyond are too big since $a_i < a_{i+1} = 2 \cdot (i + 1) - 1 = 2 \cdot i + 1$; so $a_i$ must be $2 \cdot i - 1$. OBSERVATION: If we limit the elements to $2 \cdot n$, there is exactly one sequence that satisfies all conditions. 
CONCLUSION: The sequence is $a_i = 2 \cdot i - 1$, which is $1, 3, 5, \ldots, 2 \cdot n - 1$. VALIDATION: We can validate the solution by checking if it satisfies all the constraints. $a_i \bmod i = (2 \cdot i - 1) \bmod i = i - 1$. So all $a_i \bmod i$ are distinct and values are under $2 \cdot n$. So the sequence is valid. Time Complexity: $\mathcal{O}(n)$.
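The validation step can be done mechanically. This is an illustrative brute-force check (not part of the model solution) that the sequence $a_i = 2 \cdot i - 1$ is increasing, stays within $[1, 100]$, and has pairwise distinct $a_i \bmod i$ for every $n$ up to $50$:

```python
# For every n up to the problem limit, build a_i = 2*i - 1 and check:
# increasing, within [1, 100], and a_i mod i = i - 1 (hence all distinct).
for n in range(1, 51):
    a = [2 * i - 1 for i in range(1, n + 1)]
    assert all(a[i] < a[i + 1] for i in range(n - 1))  # strictly increasing
    assert 1 <= a[0] and a[-1] <= 100                  # within bounds
    mods = [a[i - 1] % i for i in range(1, n + 1)]
    assert mods == list(range(n))                      # 0, 1, ..., n - 1
```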
[ "constructive algorithms", "number theory" ]
800
#include <bits/stdc++.h>
using namespace std;
const int N = 3e5 + 9;
using ll = long long;

void solve() {
  int n;
  cin >> n;
  for (int i = 1; i <= n; i++) {
    cout << 2 * i - 1 << ' ';
  }
  cout << '\n';
}

int32_t main() {
  ios_base::sync_with_stdio(0);
  cin.tie(0);
  int t = 1;
  cin >> t;
  while (t--) {
    solve();
  }
  return 0;
}
2039
B
Shohag Loves Strings
For a string $p$, let $f(p)$ be the number of distinct non-empty substrings$^{\text{∗}}$ of $p$. Shohag has a string $s$. Help him find a non-empty string $p$ such that $p$ is a substring of $s$ and $f(p)$ is even or state that no such string exists. \begin{footnotesize} $^{\text{∗}}$A string $a$ is a substring of a string $b$ if $a$ can be obtained from $b$ by deletion of several (possibly, zero or all) characters from the beginning and several (possibly, zero or all) characters from the end. \end{footnotesize}
THOUGHT: The condition seems hard to track, so a good idea is to play around with small cases and see if we can make some observations. ACTION: Let's start with the smallest string. When $s=$ a, the number of distinct substrings is $f(s) = 1$, which is odd, so it's not valid. OBSERVATION: No string of length one is valid. ACTION: Let's try the next smallest strings. When $s=$ aa, $f(s) = 2$, which is even and valid. When $s=$ ab, $f(s) = 3$, which is odd and not valid. OBSERVATION: A string of length two is valid exactly when its two characters are equal. THOUGHT: So if $s$ contains two consecutive equal characters, we can print them right away. All that remains is to consider strings without two consecutive equal characters. ACTION: Let's try the next smallest strings with adjacent characters different. When $s=$ aba, $f(s) = 5$, odd and not valid. When $s=$ abc, $f(s) = 6$, even and valid. OBSERVATION: A string of length three (with adjacent characters different) is valid exactly when all its characters are different. THOUGHT: So if $s$ contains three consecutive pairwise distinct characters, we can print them right away. All that remains is to consider strings with no two adjacent equal characters and no three consecutive pairwise distinct characters. All such strings are of the form $s =$ abababababa... Let's see what we can observe about these strings. ACTION: Let's count the distinct substrings of a string of the form $s =$ abababababa... of length $n$. There are exactly $2$ distinct substrings of length $1$: a and b. There are exactly $2$ distinct substrings of length $2$: ab and ba. There are exactly $2$ distinct substrings of length $3$: aba and bab. $\ldots$ There are exactly $2$ distinct substrings of every length up to $n - 1$, and exactly $1$ of length $n$ (the whole string). OBSERVATION: The total number of distinct substrings is $2(n - 1) + 1 = 2n - 1$, which is always odd! So there is no solution for these strings. 
THOUGHT: We have covered all the cases. CONCLUSION: If there are two adjacent equal characters, print those two characters right away. If there are three consecutive pairwise distinct characters, print those three. Otherwise there is no solution. Time Complexity: $\mathcal{O}(n)$.
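The small-case observations above are easy to verify by brute force. A short illustrative sketch (the helper name `f` mirrors the statement's notation, the rest is assumed):

```python
def f(p):
    """Number of distinct non-empty substrings of p (brute force)."""
    return len({p[i:j] for i in range(len(p)) for j in range(i + 1, len(p) + 1)})

assert f("a") == 1                       # odd: length-one strings never work
assert f("aa") == 2 and f("ab") == 3     # equal pair works, unequal pair doesn't
assert f("aba") == 5 and f("abc") == 6   # three distinct characters work

# Alternating strings "abab..." of length n have exactly 2n - 1 distinct
# substrings, which is always odd, so they never work.
for n in range(1, 12):
    s = ("ab" * n)[:n]
    assert f(s) == 2 * n - 1
```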
[ "constructive algorithms", "greedy", "implementation", "strings" ]
1,000
#include <bits/stdc++.h>
using namespace std;
const int N = 3e5 + 9;
using ll = long long;

void solve() {
  string s;
  cin >> s;
  int n = s.size();
  // Two adjacent equal characters: f = 2, even.
  for (int i = 0; i + 1 < n; i++) {
    if (s[i] == s[i + 1]) {
      cout << s.substr(i, 2) << '\n';
      return;
    }
  }
  // Three consecutive pairwise distinct characters: f = 6, even.
  for (int i = 0; i + 2 < n; i++) {
    if (s[i] != s[i + 1] and s[i] != s[i + 2] and s[i + 1] != s[i + 2]) {
      cout << s.substr(i, 3) << '\n';
      return;
    }
  }
  cout << -1 << '\n';
}

int32_t main() {
  ios_base::sync_with_stdio(0);
  cin.tie(0);
  int t = 1;
  cin >> t;
  while (t--) {
    solve();
  }
  return 0;
}
2039
C1
Shohag Loves XOR (Easy Version)
\textbf{This is the easy version of the problem. The differences between the two versions are highlighted in bold. You can only make hacks if both versions of the problem are solved.} Shohag has two integers $x$ and $m$. Help him count the number of integers $1 \le y \le m$ such that $\mathbf{x \neq y}$ and $x \oplus y$ is \textbf{a divisor$^{\text{∗}}$ of} either $x$, $y$, or both. Here $\oplus$ is the bitwise XOR operator. \begin{footnotesize} $^{\text{∗}}$The number $b$ is a divisor of the number $a$ if there exists an integer $c$ such that $a = b \cdot c$. \end{footnotesize}
THOUGHT: Here $x > 0$ and $y > 0$, so $x \oplus y$ is equal to neither $x$ nor $y$. Hence if $x \oplus y$ is a divisor of $x$ or $y$, it is a proper divisor, and $x \oplus y < x$ or $x \oplus y < y$. OBSERVATION: For any divisor $d$ of $p$ with $d < p$, we have $d \le \lfloor \frac{p}{2} \rfloor$; in particular, the highest bits of $d$ and $p$ are different. THOUGHT: But $x \oplus y$ has the same highest bit as $y$ if $y \ge 2 \cdot x$. CONCLUSION: So if $y \ge 2 \cdot x$, then $x \oplus y$ can not be a divisor of $y$. THOUGHT: But can it be a divisor of $x$? OBSERVATION: If $y \ge 2 \cdot x$, then $x \oplus y > x$, because the highest bit of $x \oplus y$ is higher than that of $x$. So $x \oplus y$ can not be a divisor of $x$ either. CONCLUSION: If $y \ge 2 \cdot x$, then $x \oplus y$ divides neither $x$ nor $y$, so there is no solution in this case. THOUGHT: Now we only need to consider $y < 2 \cdot x$. But $x$ is small in this problem, making it feasible to iterate over all such $y$. ACTION: Iterate over all values $y < 2 \cdot x$ (with $y \le m$) and check whether $x \oplus y$ is a divisor of $x$ or $y$. Time Complexity: $\mathcal{O}(x)$.
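The claim that the search can be cut off at $y < 2 \cdot x$ can be double-checked exhaustively for small inputs. An illustrative sketch (the helper names `count_all` and `count_small` are assumptions, not from the editorial):

```python
def count_all(x, m):
    """Full brute force: y in [1, m], x != y, x^y divides x or y."""
    ans = 0
    for y in range(1, m + 1):
        if y == x:
            continue
        d = x ^ y  # nonzero since x != y
        if x % d == 0 or y % d == 0:
            ans += 1
    return ans

def count_small(x, m):
    """Same count, scanning only y < 2x as the editorial argues suffices."""
    ans = 0
    for y in range(1, min(2 * x - 1, m) + 1):
        if y == x:
            continue
        d = x ^ y
        if x % d == 0 or y % d == 0:
            ans += 1
    return ans

# The two counts agree: no y >= 2x ever contributes.
for x in range(1, 40):
    for m in range(1, 120):
        assert count_all(x, m) == count_small(x, m)
```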
[ "bitmasks", "brute force", "math", "number theory" ]
1,200
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

void solve() {
  int x;
  ll m;
  cin >> x >> m;
  int ans = 0;
  // Only y < 2x can work, so brute force over that range.
  for (int y = 1; y <= min(2LL * x, m); y++) {
    if (x != y and ((x % (x ^ y)) == 0 or (y % (x ^ y) == 0))) {
      ++ans;
    }
  }
  cout << ans << '\n';
}

int32_t main() {
  ios_base::sync_with_stdio(0);
  cin.tie(0);
  int t = 1;
  cin >> t;
  while (t--) {
    solve();
  }
  return 0;
}
2039
C2
Shohag Loves XOR (Hard Version)
\textbf{This is the hard version of the problem. The differences between the two versions are highlighted in bold. You can only make hacks if both versions of the problem are solved.} Shohag has two integers $x$ and $m$. Help him count the number of integers $1 \le y \le m$ such that $x \oplus y$ is \textbf{divisible$^{\text{∗}}$ by} either $x$, $y$, or both. Here $\oplus$ is the bitwise XOR operator. \begin{footnotesize} $^{\text{∗}}$The number $a$ is divisible by the number $b$ if there exists an integer $c$ such that $a = b \cdot c$. \end{footnotesize}
THOUGHT: Consider separately the three cases where $x \oplus y$ is divisible by $x$, by $y$, or by both. Case 1: $x \oplus y$ is divisible by $x$. THOUGHT: Let $p = x \oplus y$, so $y = p \oplus x$. We can rephrase the problem as counting the integers $p$ divisible by $x$ with $1 \le p \oplus x \le m$. OBSERVATION: $p \oplus x \le p + x$, because xor is addition without carries. THOUGHT: So it feels like almost all values of $p \le m$ might work! And indeed: if $p + x \le m$, i.e. $p \le m - x$, then $p \oplus x \le p + x \le m$ always holds. CONCLUSION: All multiples of $x$ up to $m - x$ work! So they contribute $\lfloor \frac{m - x}{x} \rfloor$. THOUGHT: But what about $p > m - x$? OBSERVATION: If $p > x$, then $p \oplus x \ge p - x$, because xor is subtraction without borrows. THOUGHT: So if $p - x > m$, i.e. $p > m + x$, then $p \oplus x \ge p - x > m$ always holds. CONCLUSION: No value of $p$ works when $p > m + x$. THOUGHT: And there are exactly two multiples of $x$ in the range $(m - x, m + x]$, so we can just check them manually. CONCLUSION: The answer for this case is $\lfloor \frac{m - x}{x} \rfloor$ plus the result of manually checking the two multiples of $x$ in $(m - x, m + x]$. Case 2: $x \oplus y$ is divisible by $y$. THOUGHT: We already know $x \oplus y \le x + y$, and when $x < y$ we get $x \oplus y \le x + y < y + y = 2 \cdot y$. Since $x > 0$ we have $x \oplus y \neq y$, and since $x \neq y$ here we have $x \oplus y \neq 0$; so the smallest multiple of $y$ that could work is $2 \cdot y$. But $x \oplus y < 2 \cdot y$, so no solution here. CONCLUSION: No solution when $x < y$. And as $x$ is small in this problem, we can iterate over all $y \le x$ and manually check whether $x \oplus y$ is divisible by $y$. Case 3: $x \oplus y$ is divisible by both $x$ and $y$. THOUGHT: Then $x \oplus y$ is divisible by $\text{lcm}(x, y)$. 
In particular $x \oplus y$ would have to be a multiple of $\max(x, y)$. But when $x \neq y$, we have $0 < x \oplus y < 2 \cdot \max(x, y)$ and $x \oplus y \neq \max(x, y)$, so there is no solution here. CONCLUSION: This case only works when $y = x$ (then $x \oplus y = 0$, which is divisible by both). FINAL CONCLUSION: So we implement the cases above, and the answer is the sum of the counts from case $1$ and case $2$ minus the count from case $3$, since $y = x$ is counted by both of the first two cases. Time Complexity: $\mathcal{O}(x)$.
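The two xor bounds used throughout this editorial, and the case-2 conclusion, can be confirmed by a small exhaustive search. An illustrative sketch over small ranges (nothing here is from the model solution):

```python
# Check the bounds "xor is addition without carries / subtraction without
# borrows", and that for y > x, x^y is never a multiple of y.
for x in range(1, 64):
    for p in range(1, 256):
        assert (p ^ x) <= p + x           # xor never exceeds the true sum
        if p > x:
            assert (p ^ x) >= p - x       # xor never dips below the difference
    for y in range(x + 1, 256):
        # 0 < x^y < 2y and x^y != y, so it cannot be a multiple of y.
        assert (x ^ y) % y != 0
```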
[ "bitmasks", "brute force", "math", "number theory" ]
1,800
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

void solve() {
  int x;
  ll m;
  cin >> x >> m;
  // Case 1: x ^ y divisible by x
  ll p = m - m % x;  // largest multiple of x that is <= m
  ll ans = p / x - (x < p);
  if ((x ^ p) >= 1 and (x ^ p) <= m) ++ans;
  p += x;
  if ((x ^ p) >= 1 and (x ^ p) <= m) ++ans;
  // Case 2: x ^ y divisible by y
  for (int y = 1; y <= min(1LL * x, m); y++) {
    ll cur = x ^ y;
    if (cur % y == 0) {
      ++ans;
    }
  }
  // Case 3: divisible by both (only y == x, counted in both cases above)
  if (x <= m) {
    --ans;
  }
  cout << ans << '\n';
}

int32_t main() {
  ios_base::sync_with_stdio(0);
  cin.tie(0);
  int t = 1;
  cin >> t;
  while (t--) {
    solve();
  }
  return 0;
}
2039
D
Shohag Loves GCD
Shohag has an integer $n$ and a set $S$ of $m$ unique integers. Help him find the lexicographically largest$^{\text{∗}}$ integer array $a_1, a_2, \ldots, a_n$ such that $a_i \in S$ for each $1 \le i \le n$ and $a_{\operatorname{gcd}(i, j)} \neq \operatorname{gcd}(a_i, a_j)$$^{\text{†}}$ is satisfied over all pairs $1 \le i \lt j \le n$, or state that no such array exists. \begin{footnotesize} $^{\text{∗}}$An array $a$ is lexicographically larger than an array $b$ of the same length if $a \ne b$, and in the first position where $a$ and $b$ differ, the array $a$ has a larger element than the corresponding element in $b$. $^{\text{†}}$$\gcd(x, y)$ denotes the greatest common divisor (GCD) of integers $x$ and $y$. \end{footnotesize}
THOUGHT: For problems where we need to construct something under given conditions, a good idea is to first study the nature of the sequences that satisfy the conditions; to find their properties, we can look for necessary conditions that must be met. ACTION: The given condition is that $a_{\text{gcd}(i, j)} \neq \text{gcd}(a_i, a_j)$ for all $i < j$. As the given operation is a gcd on both indices and values, it's hard to directly read off properties of the sequence. But what happens when $i$ divides $j$? Then $a_{\text{gcd}(i, j)} = a_i$, so the condition becomes $a_i \neq \text{gcd}(a_i, a_j)$, which says that $a_i$ must not divide $a_j$ (otherwise $\text{gcd}(a_i, a_j) = a_i$). So we have found a necessary condition: for any pair $i < j$ where $i$ divides $j$, $a_i$ does not divide $a_j$. THOUGHT: But is this condition sufficient? One way to check sufficiency is to suppose it fails and derive a contradiction. ACTION: Imagine two indices $i$ and $j$ where $i$ does not divide $j$ but the condition is violated: $a_{\text{gcd}(i, j)} = \text{gcd}(a_i, a_j)$. Let $g = \text{gcd}(i, j)$; then $g$ is a divisor of both $i$ and $j$, and $a_g$ divides both $a_i$ and $a_j$. So for the pair $(g, i)$ we have $a_{\text{gcd}(g, i)} = a_g$ and $\text{gcd}(a_g, a_i) = a_g$, because $a_g$ divides $a_i$ - so the condition is violated for this pair too! Hence if the condition is violated for some pair $(i, j)$, it is also violated for the pairs $(g, i)$ and $(g, j)$, where $g = \text{gcd}(i, j)$ divides both indices. So if no pair $(i, j)$ with $i$ dividing $j$ violates the condition, then no pair at all violates it. This proves the sufficiency of the condition. THOUGHT: So we have found a necessary and sufficient condition. Now we need to construct the lexicographically largest sequence that satisfies it. Solution 1: Consider a multiple chain $i_1 < i_2 < \ldots < i_k$. 
Such that $i_1$ divides $i_2$, $i_2$ divides $i_3$, $\ldots$, $i_{k-1}$ divides $i_k$. Then we have to put distinct values on all the indices of the multiple chain, otherwise one value would divide another (being equal to it), violating the condition. And as we are aiming for the lexicographically largest sequence, it makes sense to put the values in decreasing order along a multiple chain, i.e. $a_{i_1} > a_{i_2} > \ldots > a_{i_k}$. This way we don't have to worry about the divisibility condition at all, since a larger number never divides a smaller one. Now, we will certainly try to put the largest number possible at each index. So what is the largest number we can put at index $x$? Consider a directed graph with an edge from $i$ to $j$ whenever $i$ divides $j$. The question becomes: what is the length of the longest path in this graph ending at $x$? You can find it using a simple DP on this directed acyclic graph in $O(n \log n)$ time. But if you think about it, the answer is actually simple. Let $p(x)$ be the number of prime factors of $x$ counted with multiplicity. Then the longest chain ending at $x$ has $p(x) + 1$ vertices. For example, if $x = 2 \cdot 3^2 \cdot 5$, then one of the longest chains ending at $x$ is $1 \rightarrow 2 \rightarrow 2 \cdot 3 \rightarrow 2 \cdot 3^2 \rightarrow 2 \cdot 3^2 \cdot 5$, which has $p(x) + 1 = 5$ vertices. We can precalculate the values $p(x)$ for all $1 \leq x \leq n$ in $O(n \log n)$ time using a sieve. So, writing $s_1 < s_2 < \ldots < s_m$ for the elements of $S$, at index $i$ we put the $(p(i) + 1)$-th largest number of $S$. The largest value of $p(i)$ for $1 \leq i \leq n$ is $\lfloor \log_2 n \rfloor$, attained by the chain $1 \rightarrow 2 \rightarrow 2^2 \rightarrow \ldots \rightarrow 2^{\lfloor \log_2 n \rfloor}$. So if $m < \lfloor \log_2 n \rfloor + 1$, then we can't construct the sequence and the answer is $-1$. Otherwise, set $a_i = s_{m - p(i)}$ for all $i$. Also, unless you have noticed already, the actual numbers don't matter! 
Time Complexity: $O(n \log n)$ Solution 2: As we are trying to construct the lexicographically largest sequence, it's always better to take larger values first. Let $s_i$ be the $i$-th smallest number in the set $S$. So, initially set $a_1 = s_m$. Then we can't use $s_m$ for any other index as it will violate the condition. Then set $a_2 = s_{m - 1}$ (because we can't use $s_m$). Then we can't use $s_{m - 1}$ for any other index $j$ where $j$ is divisible by $2$ as it will violate the condition. Then set $a_3 = s_{m - 1}$ (because we can't use $s_{m}$). Then we can't use $s_{m - 1}$ for any other index $j$ where $j$ is divisible by $3$ as it will violate the condition. Now set $a_4 = s_{m - 2}$ (because we can't use $s_{m - 1}$ or $s_m$). Then we can't use $s_{m - 2}$ for any other index $j$ where $j$ is divisible by $4$ as it will violate the condition. Then for $a_5$ we can actually use $s_{m-1}$ as $5$ is not divisible by $2, 3,$ or $4$ so the only constraint is that $a_5 \neq a_1$. ... Notice that this is a sieve-like process where we are using the maximum number that we can use for the current index and then we are remembering that in the multiples of the current index, the current number can't be used. We can use sets to simulate the process. So this process will always construct a valid lexicographically largest sequence. If it is not possible to construct the sequence, then the answer is $-1$. Also, if you notice the construction process carefully, the actual numbers don't matter! Time Complexity: $O(n \log^2 n)$
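The Solution-1 construction can be checked end-to-end against the necessary-and-sufficient condition on a small instance. An illustrative Python sketch (the helper name `build` and the sample set are assumptions, not from the editorial):

```python
from math import gcd

def build(n, s):
    """Solution-1 construction: a_i = s[m - p(i)], where p(i) counts prime
    factors of i with multiplicity and s is sorted ascending (1-indexed in
    the editorial; 0-indexed here). Returns None if m < floor(log2 n) + 1."""
    m = len(s)
    p = [0] * (n + 1)
    for i in range(2, n + 1):
        if p[i] == 0:  # i is prime: add one per power of i dividing each j
            pw = i
            while pw <= n:
                for j in range(pw, n + 1, pw):
                    p[j] += 1
                pw *= i
    if max(p[1:]) + 1 > m:
        return None
    return [s[m - p[i] - 1] for i in range(1, n + 1)]

a = build(12, [2, 5, 7, 11])
assert a is not None
# Check the full condition a_{gcd(i,j)} != gcd(a_i, a_j) over all pairs.
for i in range(1, 13):
    for j in range(i + 1, 13):
        assert a[gcd(i, j) - 1] != gcd(a[i - 1], a[j - 1])
```

With $n = 12$ we have $\max p(i) = 3$ (at $i = 8$ and $i = 12$), so any set of $4$ elements suffices, consistent with the $m \ge \lfloor \log_2 n \rfloor + 1$ bound.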
[ "constructive algorithms", "greedy", "math", "number theory" ]
1,700
#include <bits/stdc++.h>
using namespace std;
const int N = 1e5 + 9;
using ll = long long;
vector<int> d[N];  // d[i] = proper divisors of i

void solve() {
  int n, m;
  cin >> n >> m;
  vector<int> s(m + 1);
  for (int i = 1; i <= m; i++) {
    cin >> s[i];
  }
  vector<int> a(n + 1, -1);
  for (int i = 1; i <= n; i++) {
    // a[i] must differ from a[j] for every proper divisor j of i.
    set<int> banned;
    for (int j : d[i]) {
      banned.insert(a[j]);
    }
    // Greedily take the largest unbanned value.
    for (int k = m; k >= 1; k--) {
      if (banned.find(s[k]) == banned.end()) {
        a[i] = s[k];
        break;
      }
    }
    if (a[i] == -1) {
      cout << -1 << '\n';
      return;
    }
  }
  for (int i = 1; i <= n; i++) {
    cout << a[i] << ' ';
  }
  cout << '\n';
}

int32_t main() {
  ios_base::sync_with_stdio(0);
  cin.tie(0);
  // Sieve-like precomputation of proper divisors.
  for (int i = 1; i < N; i++) {
    for (int j = i + i; j < N; j += i) {
      d[j].push_back(i);
    }
  }
  int t = 1;
  cin >> t;
  while (t--) {
    solve();
  }
  return 0;
}
2039
E
Shohag Loves Inversions
Shohag has an array $a$ of integers. Initially $a = [0, 1]$. He can repeatedly perform the following operation any number of times: - Let $k$ be the number of inversions$^{\text{∗}}$ in the current array $a$. - Insert $k$ at any position in $a$, including the beginning or the end. For example, if $a = [4, 6, 2, 4]$, then the number of inversions is $k = 3$. So Shohag can obtain the following arrays after the operation: $[\textbf{3}, 4, 6, 2, 4]$, $[4, \textbf{3}, 6, 2, 4]$, $[4, 6, \textbf{3}, 2, 4]$, $[4, 6, 2, \textbf{3}, 4]$, and $[4, 6, 2, 4, \textbf{3}]$. Given an integer $n$, help Shohag count, modulo $998\,244\,353$, the number of distinct arrays of length $n$ that can be obtained after performing the operations. \begin{footnotesize} $^{\text{∗}}$The number of inversions in an array $a$ is the number of pairs of indices ($i$, $j$) such that $i < j$ and $a_i > a_j$. \end{footnotesize}
It's hard to track the array when we insert new inversion counts, as the number of inversions can quickly become very large. The key observation is what happens once the inversion count exceeds $1$: since the initial array contains only the elements $0$ and $1$, once we insert an inversion count larger than $1$, the inversion count is larger than every element of the array. And this lets us control everything! Let $\text{dp}_i$ be the number of final arrays of length $n$ we can reach from a current array of length $i$ whose inversion count is larger than any of its elements. Let $k$ be the number of inversions in the current array, with $k > \max(a)$. Then: if we insert $k$ anywhere but at the end, the new inversion count is larger than $k$, so we are in the same situation and move to $\text{dp}_{i+1}$. If we insert $k$ at the end, the number of inversions remains $k$. So if we insert $k$ at the end $j$ times and then once somewhere else ($i$ ways to do so), we reach the situation of $\text{dp}_{i+j+1}$. Therefore $\text{dp}_i = (i \cdot \sum_{j > i} \text{dp}_j) + 1$, where the $1$ accounts for finishing the array by inserting $k$ at the end $(n - i)$ times. This can be computed with a simple dp in $O(n)$ time using suffix sums. Now we just need to handle the starting arrays, before they have more than $1$ inversion. There are $(n - 1)$ ways to finish the sequence while keeping at most $1$ inversion, and they are of the form $0, 0, \ldots, 0, [0, 1, 0], 1, \ldots, 1, 1$: we first insert $0$ at the beginning some number of times, the inversion count becomes $1$ the first time we insert $0$ at the end, and after that we must insert $1$ at the end every time. 
And to count the ways to get a starting array of length $m$ with more than $1$ inversion, notice that these are exactly the arrays obtained by inserting a $1$ before the first $1$ in a sequence of the form above, $0, 0, \ldots, 0, [0, 1, 0], 1, \ldots, 1, 1$. If the position of the first $1$ is $j$, then there are $(j - 1)$ ways to do so; counting over the possible positions $j$, the total works out to $\frac{(m - 2) \cdot (m - 1)}{2} - 1$. So the answer is $n - 1 + \sum_{m=3}^{n} \left( \frac{(m - 2) \cdot (m - 1)}{2} - 1 \right) \cdot \text{dp}_m$. Time Complexity: $O(n)$. Note that there are ways to set up the dp so that the starting arrays don't need separate handling. Also, in this problem we limited the total sum of $n$ over all test cases, but there exist solutions that work even without that limit; we decided to let both kinds pass. Also, I am extremely sorry that during the contest we found out that some people found the second difference/derivative of the sequence on OEIS. We searched OEIS before the contest but couldn't find it, otherwise we would have modified the problem. Again, sorry for this issue.
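The dp recurrence and the final formula fit in a few lines. An illustrative Python sketch (the function name `count_arrays` is an assumption); the results for $n = 2, 3, 4$ match a hand enumeration of the reachable arrays ($1$, $2$, and $5$ respectively):

```python
MOD = 998244353

def count_arrays(n):
    """Number of distinct length-n arrays, via dp_i = i * (sum of dp_j, j > i) + 1
    plus the handling of the low-inversion starting arrays."""
    dp = [0] * (n + 2)
    suf = 0  # running suffix sum: dp[i+1] + ... + dp[n]
    for i in range(n, 2, -1):
        dp[i] = (i * suf + 1) % MOD
        suf = (suf + dp[i]) % MOD
    ans = (n - 1) % MOD  # arrays that keep at most 1 inversion to the end
    for m in range(3, n + 1):
        ways = ((m - 2) * (m - 1) // 2 - 1) % MOD  # starting arrays of length m
        ans = (ans + ways * dp[m]) % MOD
    return ans

assert count_arrays(2) == 1  # only [0, 1]
assert count_arrays(3) == 2  # [0, 0, 1], [0, 1, 0]
assert count_arrays(4) == 5
```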
[ "combinatorics", "dp", "implementation", "math" ]
2,200
#include <bits/stdc++.h>
#include <chrono>

std::mt19937 eng(std::chrono::steady_clock::now().time_since_epoch().count());
int rnd(int l, int r) { return std::uniform_int_distribution<int>(l, r)(eng); }

namespace FastIO {
    // char buf[1 << 21], *p1 = buf, *p2 = buf;
    // #define getchar() (p1 == p2 && (p1 = buf, p2 = (p1 + fread(buf, 1, 1 << 21, stdin))) == p1 ? EOF : *p1++)
    template <typename T> inline T read() {
        T x = 0, w = 0;
        char ch = getchar();
        while (ch < '0' || ch > '9') w |= (ch == '-'), ch = getchar();
        while ('0' <= ch && ch <= '9') x = x * 10 + (ch ^ '0'), ch = getchar();
        return w ? -x : x;
    }
    template <typename T> inline void write(T x) {
        if (!x) return;
        write<T>(x / 10), putchar((x % 10) ^ '0');
    }
    template <typename T> inline void print(T x) {
        if (x > 0) write<T>(x);
        else if (x < 0) putchar('-'), write<T>(-x);
        else putchar('0');
    }
    template <typename T> inline void print(T x, char en) { print<T>(x), putchar(en); }
    // inline char rChar() { char ch = getchar(); while (!isalpha(ch)) ch = getchar(); return ch; }
};
using namespace FastIO;

using i32 = int32_t;
using u32 = uint32_t;
using u64 = uint64_t;

template <uint32_t MOD> struct mint {  // Montgomery-form modular integer
    static constexpr u32 get_r() {
        u32 ret = MOD;
        for (i32 i = 0; i < 4; ++i) ret *= 2 - MOD * ret;
        return ret;
    }
    static constexpr u32 r = get_r();
    static constexpr u32 n2 = -u64(MOD) % MOD;
    static_assert(r * MOD == 1, "invalid, r * MOD != 1");
    static_assert(MOD < (1 << 30), "invalid, MOD >= 2 ^ 30");
    static_assert((MOD & 1) == 1, "invalid, MOD % 2 == 0");
    u32 a;
    constexpr mint() : a(0) {}
    constexpr mint(const int64_t &b) : a(reduce(u64(b % MOD + MOD) * n2)){};
    static constexpr u32 reduce(const u64 &b) { return (b + u64(u32(b) * u32(-r)) * MOD) >> 32; }
    constexpr mint &operator += (const mint &b) { if (i32(a += b.a - 2 * MOD) < 0) a += 2 * MOD; return *this; }
    constexpr mint &operator -= (const mint &b) { if (i32(a -= b.a) < 0) a += 2 * MOD; return *this; }
    constexpr mint &operator *= (const mint &b) { a = reduce(u64(a) * b.a); return *this; }
    constexpr mint &operator /= (const mint &b) { *this *= b.inverse(); return *this; }
    constexpr mint operator + (const mint &b) const { return mint(*this) += b; }
    constexpr mint operator - (const mint &b) const { return mint(*this) -= b; }
    constexpr mint operator * (const mint &b) const { return mint(*this) *= b; }
    constexpr mint operator / (const mint &b) const { return mint(*this) /= b; }
    constexpr bool operator == (const mint &b) const { return (a >= MOD ? a - MOD : a) == (b.a >= MOD ? b.a - MOD : b.a); }
    constexpr bool operator != (const mint &b) const { return (a >= MOD ? a - MOD : a) != (b.a >= MOD ? b.a - MOD : b.a); }
    constexpr mint operator-() const { return mint() - mint(*this); }
    constexpr mint pow(u64 n) const {
        mint ret(1), mul(*this);
        while (n > 0) {
            if (n & 1) ret *= mul;
            mul *= mul, n >>= 1;
        }
        return ret;
    }
    constexpr mint inverse() const { return pow(MOD - 2); }
    friend std::ostream &operator<< (std::ostream &os, const mint &b) { return os << b.get(); }
    friend std::istream &operator>> (std::istream &is, mint &b) { int64_t t; is >> t; b = mint<MOD>(t); return (is); }
    constexpr u32 get() const { u32 ret = reduce(a); return ret >= MOD ? ret - MOD : ret; }
    static constexpr u32 get_MOD() { return MOD; }
    explicit operator u32() const { return get(); }
};
using modint = mint<998244353>;

// Let's write some brute first
// dp[i][j] := current length is i, current number of inversions is j (not inserted)
// dp[i][j] -> dp[>= i + 1][[j + 1, j + i]]
// this is true for j >= 1, so let's do something when j = 0
// we can generate [0, (0 ... ), 1, 0] -> dp[>= 3][1]
// this is still kinda annoying because 1 > 1 does not hold, we process it till j >= 2
// [0, 0, ..., 0, 1, 0] -> [0, 0, ..., 0, 1, 0, 1, ..., 1]
// after that we insert an 1 before some numbers of 0 and we get dp[i][1] -> dp[>= i + 1][[j + 1, j + i - 1]]
// the answer is sum dp[i][j] for all 1 <= i <= n, j >= 1, plus 1 ([0, 0, 0 ... 1])
// actually we care nothing 'bout j, so let's say f[i] = sum dp[i][j]
// (f[i] * i - 1) -> f[i + 1], f[i + 2], ..., f[n]
#define MAXN 1000001
modint f[MAXN];

void solve() {
    int n = read<int>();
    modint ans = 1, pre = 2;
    f[3] = 1;
    for (int i = 4; i <= n; ++i) f[i] = pre + modint(1), pre += f[i] * modint(i) - modint(1);
    for (int i = 3; i <= n; ++i) ans += f[i];
    // f[3] : [0, 1, 0]
    // f[4] : [0, 0, 1, 0] (+1), [0, 1, 1, 0], [1, 0, 1, 0] (dp[3][1] * 2)
    print<int>(ans.get(), '\n');
}

int main() {
    int T = read<int>();
    while (T--) solve();
    return 0;
}
2039
F1
Shohag Loves Counting (Easy Version)
\textbf{This is the easy version of the problem. The only differences between the two versions of this problem are the constraints on $t$, $m$, and the sum of $m$. You can only make hacks if both versions of the problem are solved.} For an integer array $a$ of length $n$, define $f(k)$ as the greatest common divisor (GCD) of the maximum values of all subarrays$^{\text{∗}}$ of length $k$. For example, if the array is $[2, 1, 4, 6, 2]$, then $f(3) = \operatorname{gcd}(\operatorname{max}([2, 1, 4]), \operatorname{max}([1, 4, 6]), \operatorname{max}([4, 6, 2])) = \operatorname{gcd}(4, 6, 6) = 2$. An array is good if $f(i) \neq f(j)$ is satisfied over all pairs $1 \le i \lt j \le n$. Shohag has an integer $m$. Help him count the number, modulo $998\,244\,353$, of non-empty good arrays of arbitrary length such that each element of the array is an integer from $1$ to $m$. \begin{footnotesize} $^{\text{∗}}$An array $d$ is a subarray of an array $c$ if $d$ can be obtained from $c$ by deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end. \end{footnotesize}
Let $s_k$ be the sequence of $k$ length subarray maximums of the array. Then $s_{k + 1}$ is just the adjacent maximum sequence of $s_k$. Also, let $g_k$ be the GCD of the elements of $s_k$. Then notice that every element of $s_{k + 1}$ is also divisible by $g_k$. That is, $g_k$ divides $g_{k + 1}$. For the array to be good, $g_k$ must be different for all $k$. So $g_k < g_{k + 1}$ and $g_k$ divides $g_{k + 1}$. This means if the length of the array is $n$, then $n \le \lfloor \log_2 m \rfloor + 1$. Now consider a non-decreasing sequence of integers $a$ of length $n$ such that $1 \le a_i \le m$ for all $i$. Then the $k$ length subarray maximums of $a$ are just the last $k$ elements of $a$. So $g_k$ is the GCD of the last $k$ elements of $a$. Then for $g_k$ to be different for all $k$, all the elements of $a$ must be distinct. So the condition for $a$ to be good is that the elements are distinct and all suffix GCDs are distinct as well. The next question is: how many permutations of this increasing sequence $a$ are also good? To count this, let's start from $s_n$. $s_n$ is just $[a_n]$. Now consider $s_{n - 1}$. We need to put $a_{n - 1}$ in the sequence such that the adjacent maximum sequence of $s_{n-1}$ becomes $s_n$. For this we clearly have $2$ ways: $[a_{n - 1}, a_n]$ and $[a_n, a_{n - 1}]$. Now consider $s_{n - 2}$. We need to put $a_{n - 2}$ in the sequence such that the adjacent maximum sequence of $s_{n-2}$ becomes $s_{n-1}$. For this we again have $2$ ways because $a_{n - 2}$ can be inserted in $2$ places: before $a_{n - 1}$ or after $a_{n - 1}$. Similarly for all other $s_k$ we have $2$ ways to insert it: putting it before $a_{k + 1}$ or after $a_{k + 1}$. So the total number of good permutations of $a$ is $2^{n - 1}$. So our problem reduces to the following: Select a length $n$ such that $1 \le n \le \lfloor \log_2 m \rfloor + 1$. Count the number of strictly increasing sequences of length $n$ such that all suffix GCDs are distinct. 
Multiply the answer by $2^{n - 1}$. Sum up the answer for all valid $n$. For a fixed $n$, let's count the number of strictly increasing sequences of length $n$ such that all suffix GCDs are distinct. Let $\text{dp}_{i, g}$ be the number of strictly increasing sequences of length $n$ such that the starting element is $i$ and the GCD of the elements is $g$. Now iterate from $i = m$ to $1$. Then the transition is to iterate over the next suffix GCD $h$ such that $g$ divides $h$, $g < h \le m$ and $g = \text{gcd}(i, h)$ and then add $\text{dp}_{*, h}$ to $\text{dp}_{i, g}$. Here $\text{dp}_{*, h}$ is the sum of all $\text{dp}_{j, h}$ for all $j > i$. Another way to look at the transition is that for a fixed $i$, we iterate over all $h$ and if $\text{gcd}(i, h) < h$, then we add $\text{dp}_{*, h}$ to $\text{dp}_{i, \text{gcd}(i, h)}$. But doing everything like this would still be $O(m^2 \log m)$ which is too slow. Notice that all $g$ are divisors of $i$. The main difficulty here is that we need to update the index $\text{gcd}(i, h)$ over all $h$, but it is hard to track the exact GCD; what is easier is to track a multiple of the GCD. So for each $g$, let's say we know the sum of all $\text{dp}_{*, h}$ over all $h$ such that $g$ divides $h$. So this sums up all $\text{dp}_{*, h}$ such that $g$ divides $\text{gcd}(i, h)$. Then, using inclusion-exclusion on the divisors of $i$, we can get the sum of all $\text{dp}_{*, h}$ for all $h$ such that $g$ is exactly $\text{gcd}(i, h)$. This will take $O(\sigma(i)^2)$ time for each $i$ where $\sigma(i)$ is the number of divisors of $i$. And once we calculate the $\text{dp}_{i, g}$ for some $i$ and $g$, then before transitioning to $i - 1$, we can add the value of $\text{dp}_{i, g}$ to all divisors of $g$ to get the value of $\text{dp}_{*, h}$ faster in the future. To keep track of this, we can use a separate array. So for a fixed $n$, the time complexity is $O(\sum_{i = 1}^{m} \sigma(i)^2)$. 
And we need to do this for all $n$ from $1$ to $\lfloor \log_2 m \rfloor + 1$. So the overall time complexity is $O(\log m \cdot \sum_{i = 1}^{m} \sigma(i)^2)$. We actually allowed this to pass in F1. We can make the time complexity much better with a simple modification in the dp. Note that we don't need to use the length of array in the dp state. As we need to sum up after multiplying by $2^{\text{length of array} - 1}$ at the end, we can modify the dp to directly store the sum of $2^{\text{length of array} - 1}$. So we can just multiply the dp by $2$ during each transition. So the time complexity becomes $O(\sum_{i = 1}^{m} \sigma(i)^2)$. This is very fast for F1.
[ "combinatorics", "dp", "math", "number theory" ]
2,800
#include<bits/stdc++.h> using namespace std; const int N = 2e5 + 9, mod = 998244353; using ll = long long; int add(int a, int b){ a += b; if(a >= mod) a -= mod; if(a < 0) a += mod; return a; } // dp[i][j] = number of arrays where starting element is i and gcd of the array is j int dp[N], cur[N], uni[N]; int sum[N]; vector<int> d[N]; void solve() { int m; cin >> m; for (int i = 1; i <= m; i++) { dp[i] = cur[i] = 0; uni[i] = 0; sum[i] = 0; } int ans = 0; for (int i = m; i >= 1; i--) { for (int j: d[i]) { cur[j] = 0; } int sz = d[i].size(); for(int idj = sz-1; idj >= 0; idj--){ int j = d[i][idj]; uni[j] = add(sum[j],sum[j]); for(int idk = idj+1; idk < sz; idk++){ int k = d[i][idk]; if(k%j) continue; uni[j] = add(uni[j],-uni[k]); } cur[j] = add(uni[j], - add(dp[j],dp[j])); } cur[i] = add(cur[i], 1); for (int j : d[i]) { dp[j] = add(dp[j],cur[j]); for(auto k : d[j]){ sum[k] = add(sum[k],cur[j]); } ans = add(ans,cur[j]); } } cout << ans << '\n'; } int32_t main() { ios_base::sync_with_stdio(0); cin.tie(0); for (int i = 1; i < N; i++) { for (int j = i; j < N; j += i) { d[j].push_back(i); } } int t = 1; cin >> t; while (t--) { solve(); } return 0; }
2039
F2
Shohag Loves Counting (Hard Version)
\textbf{This is the hard version of the problem. The only differences between the two versions of this problem are the constraints on $t$, $m$, and the sum of $m$. You can only make hacks if both versions of the problem are solved.} For an integer array $a$ of length $n$, define $f(k)$ as the greatest common divisor (GCD) of the maximum values of all subarrays$^{\text{∗}}$ of length $k$. For example, if the array is $[2, 1, 4, 6, 2]$, then $f(3) = \operatorname{gcd}(\operatorname{max}([2, 1, 4]), \operatorname{max}([1, 4, 6]), \operatorname{max}([4, 6, 2])) = \operatorname{gcd}(4, 6, 6) = 2$. An array is good if $f(i) \neq f(j)$ is satisfied over all pairs $1 \le i \lt j \le n$. Shohag has an integer $m$. Help him count the number, modulo $998\,244\,353$, of non-empty good arrays of arbitrary length such that each element of the array is an integer from $1$ to $m$. \begin{footnotesize} $^{\text{∗}}$An array $d$ is a subarray of an array $c$ if $d$ can be obtained from $c$ by deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end. \end{footnotesize}
First, check the editorial of F1. Note that for F2 there is no limit on the sum of $m$, so we need to change the approach a bit. And for F2 you need to remove the length from the dp state (which I described at the end of the editorial of F1). Now instead of iterating $i$ from $m$ to $1$, we iterate from $1$ to $m$. And reformulate the dp as follows. Let's say we are building the strictly increasing sequence $a$ from left to right and we are fixing what the suffix GCD of the final array $a$ starting from each element will be. Let $\text{dp}_{j, h}$ be the sum of $2^{\text{length of array so far} - 1}$ for all $a$ such that we are at element $j$ and the suffix GCD of the final array $a$ starting from element $j$ is $h$. Then the transition is to iterate over the previous suffix GCD $g$ at element $i$ such that $g$ divides $h$, $g < h$ and $g = \text{gcd}(i, h)$ and then add $\text{dp}_{i, g}$ to $\text{dp}_{j, h}$. Just like F1, we can speed up the transitions by tracking some prefix sums and doing inclusion-exclusion on the divisors of $i$. We can use the Mobius Inversion Formula to do it in $O(\sum_{d | i} \sigma(d))$. Another way to make it faster is to do SOS DP on the divisors of $i$ which will take $O(\sigma(i) \cdot p(i))$ where $p(i)$ is the number of unique prime factors of $i$. It is hard to describe all the little details of the implementation here, please refer to the code for more details. The overall time complexity is $O(\sum_{i = 1}^M \sigma(i) \cdot p(i))$ or $O(\sum_{i = 1}^M \sum_{d | i} \sigma(d))$ where $M$ is the maximum value of $m$. Both work fast enough.
[ "dp", "number theory" ]
3,200
#include<bits/stdc++.h> using namespace std; const int N = 1e6 + 9, mod = 998244353; inline void add(int &x, int y) { x = x + y >= mod ? x + y - mod : x + y; } int spf[N]; void sieve() { vector<int> p; for(int i = 2; i < N; i++) { if (spf[i] == 0) spf[i] = i, p.push_back(i); int sz = p.size(); for (int j = 0; j < sz && i * p[j] < N && p[j] <= spf[i]; j++) { spf[i * p[j]] = p[j]; } } } int mob[N]; void mobius() { mob[1] = 1; for (int i = 2; i < N; i++){ mob[i]--; for (int j = i + i; j < N; j += i) { mob[j] -= mob[i]; } } for (int i = 1; i < N; i++) { mob[i] = (mob[i] % mod + mod) % mod; } } int c[N]; vector<int> divs[N]; void gen_divs(int n) { // not sorted int id = 1, x = n; divs[n][0] = 1; while (n > 1) { int k = spf[n]; int cur = 1, sz = id; while (n % k == 0) { cur *= k; n /= k; for (int i = 0; i < sz; i++) { divs[x][id++] = divs[x][i] * cur; } } } } void prec() { sieve(); // generate divisors without using push_back as its really slow on Codeforces for (int i = 1; i < N; i++) { for (int j = i; j < N; j += i) { c[j]++; } divs[i].resize(c[i]); gen_divs(i); } mobius(); } int dp[N]; int f[N]; int tmp[N], ans[N]; void solve() { for (int i = 1; i < N; i++) { for (int d: divs[i]) { tmp[d] = (mod - f[d]) % mod; for (int c: divs[d]) { add(tmp[d], dp[c]); } tmp[d] = (2 * tmp[d] + 1) % mod; } // apply mobius inversion formula for (int d: divs[i]) { for (int c: divs[d]) { add(dp[d], 1LL * mob[c] * tmp[d / c] % mod); } add(f[d], tmp[d]); } ans[i] = ans[i - 1]; add(ans[i], f[i]); } } int32_t main() { ios_base::sync_with_stdio(0); cin.tie(0); prec(); solve(); int t = 1; cin >> t; while (t--) { int m; cin >> m; cout << ans[m] << '\n'; } return 0; }
2039
G
Shohag Loves Pebae
Shohag has a tree with $n$ nodes. Pebae has an integer $m$. She wants to assign each node a value — an integer from $1$ to $m$. So she asks Shohag to count the number, modulo $998\,244\,353$, of assignments such that the following conditions are satisfied: - For each pair $1 \le u \lt v \le n$, the least common multiple (LCM) of the values of the nodes in the unique simple path from $u$ to $v$ is \textbf{not} divisible by the number of nodes in the path. - The greatest common divisor (GCD) of the values of all nodes from $1$ to $n$ is $1$. But this problem is too hard for Shohag to solve. As Shohag loves Pebae, he has to solve the problem. Please save Shohag!
Let's say we assign $a_u$ to the node $u$. Let $h_u$ be the maximum length of a simple path that passes through $u$. Then a necessary condition is that $a_u$ cannot be a multiple of any number $\le h_u$. Because if $a_u$ is a multiple of $k \le h_u$ and $v$ is a node such that the unique simple path from $u$ to $v$ has length $k$, then the LCM of the values of the nodes from $u$ to $v$ is a multiple of $k$, which is a contradiction. The condition also means that $a_u$ cannot be a multiple of any prime number $p \le h_u$. Is this a sufficient condition? Yes, and the proof is also simple. So now the problem is to count the number of assignments such that for each node $u$, $a_u$ is not a multiple of any prime number $p \le h_u$ and $\text{gcd}(a_1, a_2, \ldots, a_n) = 1$. Let $f_{w, p}$ be the count of numbers from $1$ to $w$ that are not divisible by any prime $\le p$, let $D$ be the diameter of the tree, call a number $x$ good if $x$ is not divisible by any prime $\le D$, let $\mu(g)$ be the Mobius function, and let $\pi(x)$ be the number of primes $\le x$. Then the answer to our problem is $\sum_{g = 1}^m \mu(g) \cdot [g \text{ is good}] \cdot \prod_{i = 1}^n f_{\lfloor \frac{m}{g} \rfloor, h_i}$. As $\lfloor \frac{m}{g} \rfloor$ is a non-increasing function of $g$ and has at most $2 \sqrt{m}$ distinct values, we can iterate over $\lfloor \frac{m}{g} \rfloor$ and calculate range sums of $\mu(g) \cdot [g \text{ is good}]$. Calculating prefix sums of a multiplicative function (like $\mu(g)$) is a standard task and can be solved using Dirichlet convolution, the Min25 sieve or multiple other methods. Here, we need a slight variant of the method as we need the prefix sums of $\mu(g) \cdot [g \text{ is good}]$. This can be achieved using Dirichlet convolution in $\mathcal{O}(m^{2 / 3})$ if we just imagine the prime numbers $\le D$ do not exist in the number system. Refer to my code for more details. 
But for each fixed $\lfloor \frac{m}{g} \rfloor$, how do we calculate $\prod_{i = 1}^n f_{\lfloor \frac{m}{g} \rfloor, h_i}$ fast enough? Trivially doing it will make the total complexity around $\mathcal{O}(n \sqrt{m})$ which is too slow. The key observation is to not forget that the values of $h_i$ are not random, they are the maximum length of a simple path that passes through the node $i$. So $h_i \ge \lceil \frac{D}{2} \rceil$ for all $i$ because from each node, the endpoints of the diameter are at least $\lceil \frac{D}{2} \rceil$ away. So now consider two cases: Case 1: $D > 2 \sqrt{m}$ In this case, all $h_i \ge \lceil \frac{D}{2} \rceil \ge \sqrt{m}$ for all $i$. So only primes or $1$ are the good numbers. So instead of going with the mobius route, we can just directly solve it by calculating the total number of ways and subtracting the number of ways where the gcd is a prime. We can calculate the total number of ways by first calculating the number of primes $\le m$ and then $f_{m, h_i}$ is just $\pi(m) - \pi(h_i) + 1$. And the number of ways where the gcd is a prime is just $1$ for all primes $> D$ and $0$ otherwise. Counting primes under $m$ is also a standard task and can be done in $\mathcal{O}(m^{2 / 3} \log m)$ or faster. Case 2: $D \le 2 \sqrt{m}$ We can convert each $h_i$ to the maximum prime $\le h_i$ and then group $h_i$ by their values. Then the maximum number of groups will be $\mathcal{O}(\pi(\sqrt{m}))$. So for each fixed $k = \lfloor \frac{m}{g} \rfloor$, if the sum of the mobius function in the range $(\lfloor \frac{m}{k + 1} \rfloor, \lfloor \frac{m}{k} \rfloor]$ is non-zero (keep in mind that when all numbers in the range are bad numbers, then the sum will definitely be $0$), then we can calculate the product of $f_{k, h_i}$ directly. Then the upper bound of the complexity will be around $\mathcal{O}(\frac{m}{\log^2 m} \cdot \log \left( \frac{n}{\pi(\sqrt[3]{m})} \right))$. The proof will be added later. This works fast enough.
[ "math", "number theory" ]
3,500
#include<bits/stdc++.h> using namespace std; #include<ext/pb_ds/assoc_container.hpp> #include<ext/pb_ds/tree_policy.hpp> using namespace __gnu_pbds; struct custom_hash { static uint64_t splitmix64(uint64_t x) { x += 0x9e3779b97f4a7c15; x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9; x = (x ^ (x >> 27)) * 0x94d049bb133111eb; return x ^ (x >> 31); } size_t operator()(uint64_t x) const { static const uint64_t FIXED_RANDOM = chrono::steady_clock::now().time_since_epoch().count(); return splitmix64(x + FIXED_RANDOM); } }; const int N = 1e6 + 9, T = 1e7 + 9, RT = 33333, mod = 998244353; using ll = long long; int power(int n, long long k) { int ans = 1 % mod; while (k) { if (k & 1) ans = (long long) ans * n % mod; n = (long long) n * n % mod; k >>= 1; } return ans; } int SQRT(int n) { int x = sqrt(n); while (x * x < n) ++x; while (x * x > n) --x; return x; } int spf[T], id[T], DIAMETER, mu[T]; vector<int> primes; // 1 indexed int prefix_prime_count[T], prefix_sum_mu[T]; void init() { mu[1] = 1; for(int i = 2; i < T; i++) { if (spf[i] == 0) spf[i] = i, mu[i] = i <= DIAMETER ? 0 : -1, primes.push_back(i); int sz = primes.size(); for (int j = 0; j < sz && i * primes[j] < T && primes[j] <= spf[i]; j++) { spf[i * primes[j]] = primes[j]; if (i % primes[j] == 0) mu[i * primes[j]] = 0; else mu[i * primes[j]] = mu[i] * (primes[j] <= DIAMETER ? 
0 : -1); } } primes.insert(primes.begin(), 0); for (int i = 1; i < primes.size(); i++) { id[primes[i]] = i; } for (int i = 2; i < T; i++) { prefix_prime_count[i] = prefix_prime_count[i - 1] + (spf[i] == i); } for (int i = 1; i < T; i++) prefix_sum_mu[i] = prefix_sum_mu[i - 1] + mu[i]; } int cnt[N]; // count of nodes having each diameter int m; namespace GoodNumbers { // numbers which aren't divisible by the first k primes gp_hash_table<int, int, custom_hash> mp[RT << 1]; int count_num(int n, int k) { // n is a floor value, returns good numbers <= n if (k == 0 or n == 0) return n; if (primes[k] >= n) return 1; if (n < T and 1LL * primes[k] * primes[k] > n) { return 1 + prefix_prime_count[n] - k; } if (mp[k].find(n) != mp[k].end()) return mp[k][n]; int ans; if (1LL * primes[k] * primes[k] > n) { int x = upper_bound(primes.begin(), primes.begin() + k, (int)SQRT(n)) - primes.begin() - 1; ans = count_num(n, x) - (k - x); } else ans = count_num(n, k - 1) - count_num(n / primes[k], k - 1); mp[k][n] = ans; return ans; } }; vector<pair<int, int>> v; namespace Dirichlet { // good number = numbers that aren't divisible by any prime <= DIAMETER // we will run dirichlet imagining there exists no prime <= DIAMETER gp_hash_table<int, int, custom_hash> mp; int p_c(int n) { return n < 1 ? 
0 : 1; } int p_g(int n) { return GoodNumbers::count_num(n, v.back().first); } int solve (int x) { // sum of mob[i] over 1 <= i <= x and i is a good number if (x < T) return prefix_sum_mu[x]; if (mp.find(x) != mp.end()) return mp[x]; int ans = 0; for (int i = 2, last; i <= x; i = last + 1) { last = x / (x / i); ans += solve(x / i) * (p_g(last) - p_g(i - 1)); } ans = p_c(x) - ans; return mp[x] = ans; } }; int count_primes(int n) { if (n < T) return prefix_prime_count[n]; int x = SQRT(n); int k = upper_bound(primes.begin(), primes.end(), x) - primes.begin() - 1; return GoodNumbers::count_num(n, k) + k - 1; } // diameter > 2 * sqrt(m) void solve_large() { // only primes are good, so count total ways // and subtract where gcd is prime (means all nodes have a fixed prime) int total_ways = 1; int primes_under_m = count_primes(m); for (auto [k, c]: v) { if (m <= primes[k]) break; total_ways = 1LL * total_ways * power((primes_under_m - k + 1) % mod, c) % mod; // 1 or a prime > k } int bad_ways = (max(0, primes_under_m - v.back().first)) % mod; int ans = (total_ways - bad_ways + mod) % mod; cout << ans << '\n'; } // diameter <= 2 * sqrt(m) void solve_small() { int ans = 0; for (int l = 1, r; l <= m; l = r + 1) { int x = m / l; r = m / x; int cur = ((Dirichlet::solve(r) - Dirichlet::solve(l - 1)) % mod + mod) % mod; if (cur) { int mul = 1; for (auto [k, c]: v) { if (x <= primes[k]) break; mul = 1LL * mul * power(GoodNumbers::count_num(x, k) % mod, c) % mod; } ans += 1LL * cur * mul % mod; ans %= mod; } } cout << ans << '\n'; } vector<int> g[N]; int dp[N], up[N]; void dfs(int u, int p = 0) { dp[u] = 0; if (p) g[u].erase(find(g[u].begin(), g[u].end(), p)); for (auto v: g[u]) { if (v ^ p) { dfs(v, u); dp[u] = max(dp[u], dp[v] + 1); } } } int pref[N], suf[N]; void dfs2(int u) { int sz = g[u].size(); for (int i = 0; i < sz; i++) { int v = g[u][i]; pref[i] = dp[v] + 1; if (i) pref[i] = max(pref[i], pref[i - 1]); } for (int i = sz - 1; i >= 0; i--) { int v = g[u][i]; suf[i] = dp[v] 
+ 1; if (i + 1 < sz) suf[i] = max(suf[i], suf[i + 1]); } for (int i = 0; i < sz; i++) { int v = g[u][i]; int cur = up[u]; if (i) cur = max(cur, pref[i - 1]); if (i + 1 < sz) cur = max(cur, suf[i + 1]); up[v] = cur + 1; } for (auto v: g[u]) { dfs2(v); } } int mx_d[N]; int32_t main() { ios_base::sync_with_stdio(0); cin.tie(0); int n; cin >> n >> m; for (int i = 1; i < n; i++) { int u, v; cin >> u >> v; g[u].push_back(v); g[v].push_back(u); } dfs(1); dfs2(1); for (int u = 1; u <= n; u++) { vector<int> vec; if (u != 1) vec.push_back(up[u]); for (auto v: g[u]) { vec.push_back(dp[v] + 1); } sort(vec.rbegin(), vec.rend()); mx_d[u] = vec[0]; if (vec.size() > 1) { mx_d[u] += vec[1]; } mx_d[u] += 1; } for (int i = 1; i <= n; i++) { cnt[mx_d[i]]++; DIAMETER = max(DIAMETER, mx_d[i]); } init(); int last_prime = 0; for (int i = 2; i <= DIAMETER; i++) { if (spf[i] == i) last_prime = i; if (cnt[i]) { int k = id[last_prime]; if (!v.empty() and v.back().first == k) { v.back().second += cnt[i]; } else { v.push_back({k, cnt[i]}); } } } if (DIAMETER > 2 * SQRT(m)) solve_large(); else solve_small(); return 0; }
2039
H1
Cool Swap Walk (Easy Version)
\textbf{This is the easy version of the problem. The only difference is the maximum number of operations you can perform. You can only make hacks if both versions are solved.} You are given an array $a$ of size $n$. A cool swap walk is the following process: - In an $n \times n$ grid, we note the cells in row $i$ and column $j$ as $(i, j)$. You need to walk from $(1,1)$ to $(n,n)$, taking only steps to the right or down. - Formally, if you are in $(x,y)$ currently, you can step to either $(x+1,y)$ or $(x,y+1)$, but you can not step beyond the boundaries of the grid. - When you step in $(i,j)$, you \textbf{must} swap $a_i$ and $a_j$ when $i \neq j$. You can perform at most $2n+4$ cool swap walks. Sort the array $a_1, a_2, \ldots, a_n$ in non-decreasing order. We can show that it's always possible to do so.
We can observe that the following kind of path is important: when we are in $(x, x)$, we only perform one of the following two kinds of moves: Move 1 $(x, x) \rightarrow (x, x+1) \rightarrow (x+1, x+1)$ This move transforms $[\ldots,a_x, a_{x+1},\ldots]$ into $[\ldots,a_{x+1}, a_{x},\ldots]$. Move 2 $(x, x) \rightarrow (x, x+1) \rightarrow (x, x+2) \rightarrow (x+1, x+2) \rightarrow (x+2, x+2)$ This move transforms $[\ldots,a_x, a_{x+1},a_{x+2},\ldots]$ into $[\ldots,a_{x+2}, a_{x+1},a_{x},\ldots]$. Summary of the path: Denote the arrays before and after the path as $a$ and $a'$, respectively. We can see $a'_n=a_1$, and $[a'_1,\ldots,a'_{n-1}]$ can be obtained from $[a_2,\ldots,a_{n}]$ through the following transformation: Swap any two adjacent numbers of $[a_2,\ldots,a_{n}]$, but each number can be swapped at most once. This inspires us to use the Odd-Even Sort algorithm. Steps to Achieve the Sorted Array: Step $1$: Initialize $a_1 = mn$: If $a_1 \neq mn$, where $mn$ is the minimum of the array, let $p_1$ be its position and use the following path: $(1, 1) \rightarrow (1, p_1) \rightarrow (p_1, p_1) \rightarrow (p_1, n) \rightarrow (n, n)$ This sequence ensures that $a_1 = mn$. Then, repeat steps $2$ and $3$ until the array is sorted. Step $2$: Perform Odd-Even Sorting: Perform an Odd-Even Sort (a round of comparison) using the key path above on the subarray $a_2, \dots, a_n$. Step $3$: Maintain the orderliness of $[a_{2}, \dots ,a_{n}]$ while repeatedly making $a_1 = mn$: After step $2$, we want $mn$ back at the head of the array. To achieve this, perform the following operations: $(1, 1) \rightarrow (1, n) \rightarrow (n, n)$ This sequence transforms the array as follows: $[a_1, a_2, \dots, a_n(a_n=mn)] \rightarrow [a'_1, a'_2, \dots, a'_n]=[a_n, a_{n-1}, a_1, a_2, \dots, a_{n-2}]$ When this is performed after an odd-even sort, it ensures that: $mn$ is back at the head of the array. The subarray $a_1, \dots, a_{n-1}$ has been cyclically shifted. 
Handling Continuous Cyclic Shifts in Odd-Even Sort: Even Length ($n-1$ is even): Cyclic shifting does not affect the odd-even sort. You can continue applying the sort as usual. Odd Length ($n-1$ is odd): A small modification is needed. Specifically, first compare $(a_3,a_4),(a_5,a_6),\ldots$ instead of $(a_2,a_3),(a_4,a_5),\ldots$ This adjustment ensures that the odd-even sort operates correctly despite the continuous cyclic shifts. Overall, we obtain a sorted array using $2n$ walks.
[ "constructive algorithms", "implementation", "sortings" ]
3,500
#include <map> #include <set> #include <cmath> #include <ctime> #include <queue> #include <stack> #include <cstdio> #include <cstdlib> #include <vector> #include <cstring> #include <algorithm> #include <iostream> #include <bitset> using namespace std; typedef double db; typedef long long ll; typedef unsigned long long ull; const int N=2010; int T,n,mn,tot; int a[N]; vector<int> X[N],Y[N]; void path1(int num) //(1,1)->(1,2)->(2,2)->(2,3)->(3,3)->... { for(int i=1;i<=n;i++) { X[num].push_back(i),Y[num].push_back(i); if(i!=n) { X[num].push_back(i),Y[num].push_back(i+1); swap(a[i],a[i+1]); } } } void path2(int num) //(1,1)->(1,n)->(n,n) { for(int i=1;i<=n;i++) { X[num].push_back(1),Y[num].push_back(i); swap(a[1],a[i]); } for(int i=2;i<=n;i++) { X[num].push_back(i),Y[num].push_back(n); swap(a[i],a[n]); } } void walk1(int j) { X[tot].push_back(j-1),Y[tot].push_back(j); X[tot].push_back(j-1),Y[tot].push_back(j+1); X[tot].push_back(j),Y[tot].push_back(j+1); X[tot].push_back(j+1),Y[tot].push_back(j+1); swap(a[j-1],a[j+1]); } void walk2(int j) { X[tot].push_back(j-1),Y[tot].push_back(j); X[tot].push_back(j),Y[tot].push_back(j); X[tot].push_back(j),Y[tot].push_back(j+1); X[tot].push_back(j+1),Y[tot].push_back(j+1); swap(a[j-1],a[j]); swap(a[j],a[j+1]); } int main() { scanf("%d",&T); while(T--) { scanf("%d",&n); for(int i=1;i<=n;i++) scanf("%d",&a[i]); mn=n;tot=0; for(int i=1;i<=n;i++) mn=min(mn,a[i]); for(int i=1;i<=3*n;i++) X[i].clear(),Y[i].clear(); int p1; for(int i=1;i<=n;i++) if(a[i]==mn) p1=i; if(p1!=1) { tot++; for(int i=1;i<=p1;i++) X[tot].push_back(1),Y[tot].push_back(i),swap(a[1],a[i]); for(int i=2;i<=p1;i++) X[tot].push_back(i),Y[tot].push_back(p1),swap(a[i],a[p1]); for(int i=p1+1;i<=n;i++) X[tot].push_back(p1),Y[tot].push_back(i),swap(a[p1],a[i]); for(int i=p1+1;i<=n;i++) X[tot].push_back(i),Y[tot].push_back(n),swap(a[i],a[n]); } for(int i=2;i<=n;i++) { tot++; X[tot].push_back(1),Y[tot].push_back(1); if(n&1) { if(i&1) { for(int j=2;j<=n;j+=2) { if(j+1==i) 
walk2(j); else if(a[j]>a[j+1]) walk1(j); else walk2(j); } } else { for(int j=2;j<=n;j+=2) { if(a[j]>a[j+1]) walk1(j); else walk2(j); } } } else { if(i&1) { for(int j=2;j<=n;j+=2) { if(j==i-1) { X[tot].push_back(j-1),Y[tot].push_back(j); X[tot].push_back(j),Y[tot].push_back(j); swap(a[j-1],a[j]); j--; } else if(a[j]>a[j+1]) walk1(j); else walk2(j); } } else { for(int j=2;j<=n;j+=2) { if(j==i) { X[tot].push_back(j-1),Y[tot].push_back(j); X[tot].push_back(j),Y[tot].push_back(j); swap(a[j-1],a[j]); j--; } else if(a[j]>a[j+1]) walk1(j); else walk2(j); } } } path2(++tot); } printf("%d\n",tot); for(int i=1;i<=tot;i++) { for(int j=1;j<2*n-1;j++) { if(X[i][j]==X[i][j-1]) printf("R"); else printf("D"); } printf("\n"); } } return 0; }
2039
H2
Cool Swap Walk (Hard Version)
\textbf{This is the hard version of the problem. The only difference is the maximum number of operations you can perform. You can only make hacks if both versions are solved.} You are given an array $a$ of size $n$. A cool swap walk is the following process: - In an $n \times n$ grid, we note the cells in row $i$ and column $j$ as $(i, j)$. You need to walk from $(1,1)$ to $(n,n)$, taking only steps to the right or down. - Formally, if you are in $(x,y)$ currently, you can step to either $(x+1,y)$ or $(x,y+1)$, but you can not step beyond the boundaries of the grid. - When you step in $(i,j)$, you \textbf{must} swap $a_i$ and $a_j$ when $i \neq j$. You can perform at most $n+4$ cool swap walks. Sort the array $a_1, a_2, \ldots, a_n$ in non-decreasing order. We can show that it's always possible to do so.
First, read the editorial of the easy version. We can see that the bottleneck lies in the fact that after every round of odd-even sorting, we need to perform a walk operation to ensure that $a_1 = mn$. The following method can break through this bottleneck: for simplicity, let's assume $n$ is even. Define the numbers smaller than or equal to $\frac{n}{2}$ as $S$, and the numbers bigger than $\frac{n}{2}$ as $B$. If we have $a = [S, \ldots, S, B, \ldots, B]$, we can repeatedly perform key path operations to get the following sequence: $[S, \ldots, S, B, \ldots, B] \to [S, \ldots, S, B, \ldots, B, S] \to [S, \ldots, S, B, \ldots, B, S, S] \to \ldots \to [B, \ldots, B, S, \ldots, S]$ In this process, we only perform odd-even sorting for the subarray $[B, \ldots, B]$. $[B, \ldots, B, S, \ldots, S] \to [B, \ldots, B, S, \ldots, S, B] \to [B, \ldots, B, S, \ldots, B, B] \to \ldots \to [S, \ldots, S, B, \ldots, B]$ In this process, we only perform odd-even sorting for the subarray $[S, \ldots, S]$. After that, the array is sorted. Finally, the only remaining problem is how to arrange $a = [S, \ldots, S, B, \ldots, B]$. Assume we have $k$ positions $p_1, p_2, \ldots, p_k$ such that $1 < p_1 < p_2 < \ldots < p_k \leq n$. Consider what the following operations are doing: $(1, 1) \to (1, p_1)\to (2, p_1) \to (2, p_2)\to (3, p_2) \to \ldots \to (k, p_k)$ If we ignore the other numbers, these operations correspond to: $\text{swap}(a_1, a_{p_1}), \text{swap}(a_2, a_{p_2}), \ldots$ Then, we can take any path from $(k, p_k)$ to $(n, n)$. First, we perform one operation to set $a_1 = n$, then choose $\frac{n}{2}$ positions $p_1, p_2, \ldots, p_{\frac{n}{2}}$ to obtain $a = [S, \ldots, S, B, \ldots, B]$. When $n$ is odd, we need two additional operations for some small adjustments. Overall, we obtain a sorted array using $n + 4$ walks.
[ "constructive algorithms", "implementation", "sortings" ]
3,500
#include <map>
#include <set>
#include <cmath>
#include <ctime>
#include <queue>
#include <stack>
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <cstring>
#include <algorithm>
#include <iostream>
#include <bitset>
using namespace std;
typedef double db;
typedef long long ll;
typedef unsigned long long ull;

const int N = 2010;
int T, n, tot;
int a[N];
vector<int> X[N], Y[N];

void path1(int num) // (1,1)->(1,2)->(2,2)->(2,3)->(3,3)->...
{
    for (int i = 1; i <= n; i++) {
        X[num].push_back(i), Y[num].push_back(i);
        if (i != n) {
            X[num].push_back(i), Y[num].push_back(i + 1);
            swap(a[i], a[i + 1]);
        }
    }
}

void path2(int num) // (1,1)->(1,n)->(n,n)
{
    for (int i = 1; i <= n; i++) {
        X[num].push_back(1), Y[num].push_back(i);
        swap(a[1], a[i]);
    }
    for (int i = 2; i <= n; i++) {
        X[num].push_back(i), Y[num].push_back(n);
        swap(a[i], a[n]);
    }
}

void path3(int num, vector<int> p) // swap(1,p[0]),(2,p[1]),... note p[0]!=1
{
    for (int i = 1; i <= p[0]; i++) {
        X[num].push_back(1), Y[num].push_back(i);
        swap(a[1], a[i]);
    }
    for (int i = 1; i < p.size(); i++) {
        for (int j = p[i - 1]; j <= p[i]; j++) {
            X[num].push_back(i + 1), Y[num].push_back(j);
            swap(a[i + 1], a[j]);
        }
    }
    int x = p.size(), y = p.back();
    while (x != n) {
        x++;
        X[num].push_back(x), Y[num].push_back(y);
        swap(a[x], a[y]);
    }
    while (y != n) {
        y++;
        X[num].push_back(x), Y[num].push_back(y);
        swap(a[x], a[y]);
    }
}

void walk1(int j) {
    X[tot].push_back(j - 1), Y[tot].push_back(j);
    X[tot].push_back(j - 1), Y[tot].push_back(j + 1);
    X[tot].push_back(j), Y[tot].push_back(j + 1);
    X[tot].push_back(j + 1), Y[tot].push_back(j + 1);
    swap(a[j - 1], a[j + 1]);
}

void walk2(int j) {
    X[tot].push_back(j - 1), Y[tot].push_back(j);
    X[tot].push_back(j), Y[tot].push_back(j);
    X[tot].push_back(j), Y[tot].push_back(j + 1);
    X[tot].push_back(j + 1), Y[tot].push_back(j + 1);
    swap(a[j - 1], a[j]);
    swap(a[j], a[j + 1]);
}

void walk3(int j) {
    X[tot].push_back(j - 1), Y[tot].push_back(j);
    X[tot].push_back(j), Y[tot].push_back(j);
    swap(a[j - 1], a[j]);
}

void init() {
    scanf("%d", &n);
    for (int i = 1; i <= n; i++) scanf("%d", &a[i]);
    tot = 0;
    for (int i = 1; i <= 3 * n; i++) X[i].clear(), Y[i].clear();
    vector<pair<int, int> > pr;
    for (int i = 1; i <= n; i++) pr.push_back(make_pair(a[i], i));
    sort(pr.begin(), pr.end());
    for (int i = 1; i <= n; i++) a[pr[i - 1].second] = i;
}

void step1() {
    int p1, pn;
    vector<int> p;
    for (int i = 1; i <= n; i++)
        if (a[i] == 1) p1 = i;
    if (p1 != 1) {
        p.push_back(p1);
        path3(++tot, p);
    }
    if (n == 2) return;
    tot++;
    X[tot].push_back(1), Y[tot].push_back(1);
    for (int j = 2; j <= n; j += 2) {
        if (j + 1 > n) walk3(j);
        else if (a[j] == n) walk1(j);
        else walk2(j);
    }
    p1 = n;
    for (int i = 1; i <= n; i++)
        if (a[i] == n) pn = i;
    p.clear();
    p.push_back(pn); p.push_back(p1);
    path3(++tot, p);
    p.clear();
    for (int i = 1; i <= n; i++)
        if (a[i] <= (n + 1) / 2) p.push_back(i);
    path3(++tot, p);
}

void step2() {
    int head;
    if (n & 1) {
        for (int t = 1; t <= 2; t++) {
            head = n / 2 + 2;
            for (int i = 1; i <= n / 2 + (t == 1); i++) {
                tot++;
                X[tot].push_back(1), Y[tot].push_back(1);
                for (int j = 2; j <= n; j++) {
                    if (!(head <= j && j <= head + n / 2 - 1)) walk3(j);
                    else if (j == head && (head & 1)) walk3(j);
                    else {
                        if (!(head <= j + 1 && j + 1 <= head + n / 2 - 1)) walk3(j);
                        else if (a[j] > a[j + 1]) walk1(j), j++;
                        else walk2(j), j++;
                    }
                }
                head--;
            }
        }
    } else {
        for (int t = 1; t <= 2; t++) {
            head = n / 2 + 1;
            for (int i = 1; i <= n / 2; i++) {
                tot++;
                X[tot].push_back(1), Y[tot].push_back(1);
                for (int j = 2; j <= n; j++) {
                    if (!(head <= j && j <= head + n / 2 - 1)) walk3(j);
                    else if (j == head && (head & 1)) walk3(j);
                    else {
                        if (!(head <= j + 1 && j + 1 <= head + n / 2 - 1)) walk3(j);
                        else if (a[j] > a[j + 1]) walk1(j), j++;
                        else walk2(j), j++;
                    }
                }
                head--;
            }
        }
    }
}

void output() {
    printf("%d\n", tot);
    for (int i = 1; i <= tot; i++) {
        for (int j = 1; j < 2 * n - 1; j++) {
            if (X[i][j] == X[i][j - 1]) printf("R");
            else printf("D");
        }
        printf("\n");
    }
}

int main() {
    scanf("%d", &T);
    while (T--) {
        init();
        step1();
        step2();
        output();
    }
    return 0;
}
2040
A
Game of Division
You are given an array of integers $a_1, a_2, \ldots, a_n$ of length $n$ and an integer $k$. Two players are playing a game. The first player chooses an index $1 \le i \le n$. Then the second player chooses a different index $1 \le j \le n, i \neq j$. The first player wins if $|a_i - a_j|$ is not divisible by $k$. Otherwise, the second player wins. We play as the first player. Determine whether it is possible to win, and if so, which index $i$ should be chosen. The absolute value of a number $x$ is denoted by $|x|$ and is equal to $x$ if $x \ge 0$, and $-x$ otherwise.
$|x - y|$ is divisible by $k$ if and only if $x \bmod k = y \bmod k$. Let's split all numbers into groups according to the value of $x \bmod k$. The second player wins if he can choose a number from the same group as the first player's pick. This means that the first player must choose a number that is the only one in its group.
[ "games", "math" ]
800
for _ in range(int(input())):
    n, k = map(int, input().split())
    a = list(map(int, input().split()))
    b = [[] for _ in range(k)]
    for i in range(0, n):
        x = a[i]
        b[x % k].append(i + 1)
    res = -1
    for i in range(k):
        if len(b[i]) == 1:
            res = b[i][0]
            break
    if res == -1:
        print("NO")
    else:
        print("YES\n" + str(res))
2040
B
Paint a Strip
You have an array of \textbf{zeros} $a_1, a_2, \ldots, a_n$ of length $n$. You can perform two types of operations on it: - Choose an index $i$ such that $1 \le i \le n$ and $a_i = 0$, and assign $1$ to $a_i$; - Choose a pair of indices $l$ and $r$ such that $1 \le l \le r \le n$, $a_l = 1$, $a_r = 1$, $a_l + \ldots + a_r \ge \lceil\frac{r - l + 1}{2}\rceil$, and assign $1$ to $a_i$ for all $l \le i \le r$. What is the minimum number of operations of the \textbf{first type} needed to make all elements of the array equal to one?
At each moment of time, the array contains a number of non-intersecting segments consisting only of ones. An operation of the first type can increase the number of these segments by $1$. An operation of the second type decreases the number of these segments by $x - 1$, where $x$ is the number of segments that the operation covers. Therefore, the number of operations of the second type is at most the number of operations of the first type minus $1$. The optimal strategy is to perform one operation of the first type, and then alternate operations of the first and second types, increasing the number of ones from $x$ to $2 \cdot (x + 1)$ with each such pair of operations. There is no point in doing more operations of the first type on a prefix of the operation sequence, since each operation of the second type should cover at most two segments of ones; otherwise, we reduce the possible number of operations of the second type. At some point in the development of this problem, the following alternative statement appeared: minimize the total number of operations of both types. How would you solve that version?
[ "constructive algorithms", "greedy", "math" ]
1,000
tt = int(input())
for _ in range(tt):
    n = int(input())
    ans = 1
    cur = 1
    while True:
        if cur >= n:
            print(ans)
            break
        ans += 1
        cur = cur * 2 + 2
2040
C
Ordered Permutations
Consider a permutation$^{\text{∗}}$ $p_1, p_2, \ldots, p_n$ of integers from $1$ to $n$. We can introduce the following sum for it$^{\text{†}}$: $$S(p) = \sum_{1 \le l \le r \le n} \min(p_l, p_{l + 1}, \ldots, p_r)$$ Let us consider all permutations of length $n$ with the maximum possible value of $S(p)$. Output the $k$-th of them in lexicographical$^{\text{‡}}$order, or report that there are less than $k$ of them. \begin{footnotesize} $^{\text{∗}}$A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array). $^{\text{†}}$For example: - For the permutation $[1, 2, 3]$ the value of $S(p)$ is equal to $\min(1) + \min(1, 2) + \min(1, 2, 3) + \min(2) + \min(2, 3) + \min(3) =$ $1 + 1 + 1 + 2 + 2 + 3 = 10$ - For the permutation $[2, 4, 1, 3]$ the value of $S(p)$ is equal to $\min(2) + \min(2, 4) + \min(2, 4, 1) + \min(2, 4, 1, 3) \ +$ $ \min(4) + \min(4, 1) + \min(4, 1, 3) \ +$ $\min(1) + \min(1, 3) \ +$ $\min(3) =$ $2 + 2 + 1 + 1 + 4 + 1 + 1 + 1 + 1 + 3 = 17$. $^{\text{‡}}$An array $a$ is lexicographically smaller than an array $b$ if and only if one of the following holds: - $a$ is a prefix of $b$, but $a \ne b$; or - in the first position where $a$ and $b$ differ, the array $a$ has a smaller element than the corresponding element in $b$. \end{footnotesize}
These permutations are generated as follows. We greedily go through the numbers in order from $1$ to $n$, and put each one either in the first free cell or in the last one. For example, if we want to put $4$ in the permutation $1, 3, \circ, \circ, \dots, \circ, 2$, we can put it either in the third cell or in the second cell from the end. That is, the permutation first increases and then decreases. We can prove that the greedy algorithm works as follows. Suppose we now want to place the number $i$. When we place it, we can immediately say that the minimum on every segment having one end at the chosen position is equal to $i$ (we do not take into account segments containing already placed numbers smaller than $i$). The number of such segments is $n - i + 1$. The answer we get is the sum of this fixed quantity and what we get in the future. Assume we put the number $i$ not at an end of the free part. Consider the optimal continuation: $[\dots, x_j, \dots] i [\dots, x_j, \dots]$. Now put $i$ at the end instead and keep the order of the following elements unchanged. Segments whose ends lie on elements larger than $i$ may no longer cover the position of $i$, but the sets of numbers larger than $i$ that they cover have not changed, so the answer does not get worse. Since we choose one of the two ends independently for each number, there are $2^{n - 1}$ such permutations, and we can find the $k$-th one with a simple loop, similar to converting a number into binary notation.
[ "bitmasks", "combinatorics", "constructive algorithms", "greedy", "math", "two pointers" ]
1,600
tt = int(input())
for _ in range(tt):
    n, k = map(int, input().split())
    a, b = [], []
    if n <= 60 and (1 << (n - 1)) < k:
        print(-1)
        continue
    k -= 1
    d = []
    while k:
        d.append(k % 2)
        k //= 2
    while len(d) < n - 1:
        d.append(0)
    a, b = [], []
    j = 1
    for i in range(n - 2, -1, -1):
        if d[i] == 0:
            a.append(j)
        else:
            b.append(j)
        j += 1
    b.reverse()
    print(*a, n, *b)
2040
D
Non Prime Tree
You are given a tree with $n$ vertices. You need to construct an array $a_1, a_2, \ldots, a_n$ of length $n$, consisting of \textbf{unique} integers from $1$ to $2 \cdot n$, and such that for each edge $u_i \leftrightarrow v_i$ of the tree, the value $|a_{u_i} - a_{v_i}|$ is not a prime number. Find any array that satisfies these conditions, or report that there is no such array.
There are many array construction tactics that can be devised here; we will show two of them.

First: perform a depth-first traversal of the tree and write into each next vertex a number $1$ greater than the previous one in traversal order. When we write a number into a vertex other than the root, some number has already been written into its parent, which may violate the condition "$|a_{u_i} - a_{v_i}|$ is not a prime number". If the difference is even and not equal to $2$, the condition is satisfied. Otherwise, we enforce an even difference not equal to $2$: if the difference is odd, first add $1$; if the difference then becomes $2$, add another $2$. It can be shown that if we added this extra $2$, then we did not add it at the previous two vertices of the traversal, so the values stay within the allowed range.

Second: write the values $2, 4, 6, \dots$ into the vertices of even depth in breadth-first order, and the values $2 \cdot n, 2 \cdot n - 2, 2 \cdot n - 4, \dots$ into the vertices of odd depth in breadth-first order. Since all values are even, every difference is even, so the condition can be violated only for a pair with difference exactly $2$; with this traversal, at most one such pair occurs, and one of its vertices is a leaf. We change the value of this leaf to the value of its parent minus $1$.

There are many possible solutions to this problem, and almost all testers implemented a unique one. There are solutions that we could not prove correct, but we could not hack them either.
[ "brute force", "constructive algorithms", "dfs and similar", "greedy", "number theory", "trees", "two pointers" ]
1,900
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

void dfs(int v, vector<vector<int>>& g, vector<int>& h, int p) {
    h[v] = h[p] + 1;
    for (int u : g[v]) {
        if (u == p) continue;
        dfs(u, g, h, v);
    }
}

int main() {
    ios::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
    int tt;
    cin >> tt;
    while (tt--) {
        int n;
        cin >> n;
        vector<vector<int>> g(n);
        for (int i = 0; i < n - 1; i++) {
            int u, v;
            cin >> u >> v;
            u--, v--;
            g[u].push_back(v);
            g[v].push_back(u);
        }
        vector<int> h(n);
        dfs(0, g, h, 0);
        vector<vector<int>> hs(n + 1);
        for (int i = 0; i < n; i++) hs[h[i]].push_back(i);
        int l = 2, r = 2 * n;
        int cur = 0;
        vector<int> ans(n);
        for (int i = 1; i <= n; i++) {
            if (cur) {
                for (int v : hs[i]) { ans[v] = r; r -= 2; }
            } else {
                for (int v : hs[i]) { ans[v] = l; l += 2; }
            }
            cur ^= 1;
        }
        bool found = false;
        for (int i = 0; i < n; i++) {
            for (int v : g[i]) {
                if (h[v] < h[i]) continue;
                if (abs(ans[v] - ans[i]) == 2) {
                    ans[v] = ans[i] - 1;
                    found = true;
                    break;
                }
            }
            if (found) break;
        }
        for (int i = 0; i < n; i++) cout << ans[i] << ' ';
        cout << '\n';
    }
    return 0;
}
2040
E
Control of Randomness
You are given a tree with $n$ vertices. Let's place a robot in some vertex $v \ne 1$, and suppose we initially have $p$ coins. Consider the following process, where in the $i$-th step (starting from $i = 1$): - If $i$ is odd, the robot moves to an adjacent vertex in the direction of vertex $1$; - Else, $i$ is even. You can either pay one coin (if there are some left) and then the robot moves to an adjacent vertex in the direction of vertex $1$, or not pay, and then the robot moves to an adjacent vertex chosen \textbf{uniformly at random}. The process stops as soon as the robot reaches vertex $1$. Let $f(v, p)$ be the minimum possible expected number of steps in the process above if we spend our coins optimally. Answer $q$ queries, in the $i$-th of which you have to find the value of $f(v_i, p_i)$, modulo$^{\text{∗}}$ $998\,244\,353$. \begin{footnotesize} $^{\text{∗}}$ Formally, let $M = 998\,244\,353$. It can be shown that the answer can be expressed as an irreducible fraction $\frac{p}{q}$, where $p$ and $q$ are integers and $q \not \equiv 0 \pmod{M}$. Output the integer equal to $p \cdot q^{-1} \bmod M$. In other words, output such an integer $x$ that $0 \le x < M$ and $x \cdot q \equiv p \pmod{M}$. \end{footnotesize}
To begin with, let's solve the problem without queries and forced movements. Consider the nature of the path. Let the current vertex $v$ have a grandparent $u$ (the parent of its parent). Suppose the current move is odd, so the robot goes to the parent of $v$. If we're lucky, on the even move it continues to $u$; otherwise, it goes back to $v$ or to a sibling of $v$, and nothing changes - on the next step the robot does the same thing again. For vertex $v$ and all its siblings, the answer is the same. Let $d[v]$ be the answer for vertex $v$, and let $x$ be the number of siblings of vertex $v$, including itself. Then $d[v] = 2 + \frac{1}{x + 1} \cdot d[u] + \frac{x}{x + 1} \cdot d[v]$, whence $d[v] = d[u] + 2 \cdot (x + 1)$. We can see that the path consists of blocks of height $2$ - the robot tries to overcome the next block until it succeeds, and then proceeds to the next one. We are now ready to answer the queries. Paying a coin essentially skips a block - the robot passes it on the first try, spending $2$ steps on it instead of $2 \cdot (x + 1)$. Therefore, we greedily remove the blocks with the largest $x$. We traverse the graph, storing two multisets of degrees of the vertices on the path to the root - one for vertices at odd depth, the other for vertices at even depth. We answer the queries offline: upon encountering a query, we run through the $p$ largest elements of the corresponding multiset and subtract them. The asymptotics of the trivial implementation, in which we walk up to the root for each query, is $O(n \cdot q)$. The asymptotics of a possible offline solution, which maintains the multisets while traversing the graph, is $O((n + \sum_i{p_i}) \cdot \log n)$. This problem originally had the following constraints: $1 \le n, q \le 2 \cdot 10^5$, with the sum of $p$ over all queries not greater than $2 \cdot 10^5$. How to solve the problem under those constraints?
Could you solve this problem without the second constraint? It turns out not to be hard, thanks to a recent blog.
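The recurrence $d[v] = d[u] + 2 \cdot (x + 1)$ can be sanity-checked numerically (an illustrative sketch, not the editorial's solution; the helper name is ours). For $p = 0$ the exact expectations satisfy a small linear system, which we solve by value iteration on a tiny path; note that $x + 1$ equals the degree of the parent of $v$.

```python
def expected_steps_no_coins(parent, sweeps=20000):
    """Value-iterate the exact expected step counts for p = 0 coins.

    parent[v] is the parent of vertex v (1-indexed, root = 1, parent[1] = 0).
    g[v] = expected steps from v when the next move is odd (forced to parent);
    h[v] = expected steps from v when the next move is even (uniform random).
    """
    n = len(parent) - 1
    adj = [[] for _ in range(n + 1)]
    for v in range(2, n + 1):
        adj[v].append(parent[v])
        adj[parent[v]].append(v)
    g = [0.0] * (n + 1)
    h = [0.0] * (n + 1)
    for _ in range(sweeps):
        for v in range(2, n + 1):
            g[v] = 1.0 + h[parent[v]]  # forced move towards the root
        for v in range(2, n + 1):
            # random neighbour; reaching the root (vertex 1) stops the process
            h[v] = 1.0 + sum(g[w] if w != 1 else 0.0 for w in adj[v]) / len(adj[v])
    return g

# Path 1-2-3-4: d[2] = 1, d[3] = d[1] + 2*deg(2) = 4, d[4] = d[2] + 2*deg(3) = 5.
g = expected_steps_no_coins([0, 0, 1, 2, 3])
assert abs(g[2] - 1) < 1e-6
assert abs(g[3] - 4) < 1e-6
assert abs(g[4] - 5) < 1e-6
```

The iteration is monotone and bounded by the true expectations, so it converges; for larger trees the closed-form recurrence is, of course, the practical route.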
[ "combinatorics", "dfs and similar", "dp", "graphs", "greedy", "math", "probabilities", "trees" ]
2,100
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n, q;
    cin >> n >> q;
    vector < vector <int> > g(n);
    for (int i = 0; i < n - 1; i++) {
        int u, v;
        cin >> u >> v;
        u--, v--;
        g[u].push_back(v);
        g[v].push_back(u);
    }
    vector <int> depth(n);
    vector <int> d(n);
    vector < vector < pair <int,int> > > qrs(n); // <p, idx>
    vector <int> res(q);
    for (int i = 0; i < q; i++) {
        int v, p;
        cin >> v >> p;
        v--;
        qrs[v].push_back({p, i});
    }
    // store negative numbers to be able to use the usual foreach loop
    multiset <int> st[2];
    function <void(int, int, int)> dfs = [&](int v, int p, int pp) {
        if (depth[v] == 1) d[v] = 1;
        if (depth[v] > 1) d[v] = d[pp] + 2 * (int)g[p].size();
        for (pair <int, int> qr : qrs[v]) {
            int p = qr.first, idx = qr.second;
            int ans = d[v];
            for (int i : st[1 - depth[v] % 2]) {
                if (p == 0) break;
                ans -= (-i - 1) * 2;
                p--;
            }
            res[idx] = ans;
        }
        if (depth[v] != 0) st[depth[v] % 2].insert(-(int)g[v].size());
        for (int to : g[v]) {
            if (to == p) continue;
            depth[to] = depth[v] + 1;
            dfs(to, v, p);
        }
        if (depth[v] != 0) st[depth[v] % 2].erase(st[depth[v] % 2].find(-(int)g[v].size()));
    };
    dfs(0, 0, 0);
    for (int i = 0; i < q; i++) cout << res[i] << '\n';
}

int main() {
    int tt;
    cin >> tt;
    while (tt--) {
        solve();
    }
    return 0;
}
2040
F
Number of Cubes
Consider a rectangular parallelepiped with sides $a$, $b$, and $c$, that consists of unit cubes of $k$ different colors. We can apply cyclic shifts to the parallelepiped in any of the three directions any number of times$^{\text{∗}}$. There are $d_i$ cubes of the $i$-th color ($1 \le i \le k$). How many different parallelepipeds (with the given sides) can be formed from these cubes, no two of which can be made equal by some combination of cyclic shifts? \begin{footnotesize} $^{\text{∗}}$On the image: - Top left shows the top view of the original parallelepiped. Lower layers will shift in the same way as the top layer. - Top right shows the top view of a parallelepiped shifted to the right by $1$. - Bottom left shows the top view of a parallelepiped shifted down by $2$. - Bottom right shows the top view of a parallelepiped shifted to the right by $1$ and down by $2$. \end{footnotesize}
Recall Burnside's lemma: the number of objects, counted up to a group action, is $\frac{1}{|G|} \cdot \sum_{g \in G} {\sum_{x \in X}{[g x = x]}}$, where $[P] = 1$ if the statement $P$ is true and $[P] = 0$ otherwise.

Let's iterate over the elements of the group - all triplets of numbers from $[0, a) \times [0, b) \times [0, c)$. When a fixed action $(i, j, l)$ is applied, every cell is shifted by this vector. Choose a cell and keep adding the vector to it until we reach the starting cell again. We have traced a cycle - all cells on it must be of the same color. An example of such a decomposition for $(a, b, c) = (9, 4, 1)$, $(i, j, l) = (3, 2, 0)$, where each distinct number in the table corresponds to a cycle:

123123123
456456456
123123123
456456456

You can count the cycles by traversal, or you can derive a formula: all cycles have the same length, equal to $N = \operatorname{lcm}(\frac{a}{\gcd(a, i)}, \frac{b}{\gcd(b, j)}, \frac{c}{\gcd(c, l)})$.

What's good about the equal cycle lengths? The formula for the number of stationary parallelepipeds becomes simple. First, all $d_i$ must be divisible by $N$. Then we distribute the colors among the cycles - this is the multinomial coefficient for $(\frac{d_1}{N}, \frac{d_2}{N}, \dots, \frac{d_k}{N})$.

The current total time is $O(a \cdot b \cdot c \cdot k)$: we iterated over $a \cdot b \cdot c$ shift vectors, calculated $N$, checked divisibility for all $k$ numbers, and, if successful, calculated a multinomial coefficient of size $k$.

Let's speed up the solution. Calculate $G = \gcd(d_1, d_2, \dots, d_k)$. Since all $d_i$ are divisible by $N$, $G$ is also divisible by $N$. There are no more distinct values of $N$ than the number of divisors of $a \cdot b \cdot c$. Let's count how many triplets give each value of $N$, and at the end calculate the multinomial coefficient once for each distinct value of $N$.
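The cycle-length formula above can be sanity-checked with a tiny script (an illustrative sketch, not part of the editorial; helper names are ours): follow one cell under repeated shifts and compare with the closed form.

```python
from math import gcd, lcm

def cycle_length_by_walk(a, b, c, i, j, l):
    # Repeatedly shift the cell (0, 0, 0) by (i, j, l) until it returns.
    x, steps = (0, 0, 0), 0
    while True:
        x = ((x[0] + i) % a, (x[1] + j) % b, (x[2] + l) % c)
        steps += 1
        if x == (0, 0, 0):
            return steps

def cycle_length_by_formula(a, b, c, i, j, l):
    return lcm(a // gcd(a, i), b // gcd(b, j), c // gcd(c, l))

# The editorial's example: (a, b, c) = (9, 4, 1), shift (3, 2, 0).
print(cycle_length_by_walk(9, 4, 1, 3, 2, 0))     # 6
print(cycle_length_by_formula(9, 4, 1, 3, 2, 0))  # 6
```

Since all cycles of a fixed shift have equal length, checking the orbit of a single cell suffices.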
The total time is $O(a \cdot b \cdot c \cdot \log C + d(a \cdot b \cdot c) \cdot k)$, where $d(x)$ is the number of divisors of $x$, and the $\log$ appears due to the calculation of $\gcd$. Let's continue to speed up the solution; there are two ways to do so.

Solution 1. Look again at the formula $N = \operatorname{lcm}(\frac{a}{\gcd(a, i)}, \dots)$. For convenience, focus on the first component. Say we want it to equal $x = \frac{a}{\gcd(a, i)}$. Then $x$ divides $a$ and $\gcd(a, i) = \frac{a}{x}$. Since $\gcd(a, i)$ divides $i$, the suitable $i$ are of the form $p \cdot \frac{a}{x}$, where $1 \le p \le x$. For such $i$ we have $\gcd(a, i) = \frac{a}{x} \cdot \gcd(x, p)$, so the required equality $\gcd(a, i) = \frac{a}{x}$ holds exactly when $p$ is coprime to $x$. Therefore, the number of suitable $i$ is $\varphi(x)$, where $\varphi$ is Euler's totient function.

So, let's enumerate triplets of divisors of $a$, $b$ and $c$; the number of ways to obtain a triple $(x, y, z)$ equals $\varphi(x) \cdot \varphi(y) \cdot \varphi(z)$. Calculating $\varphi$ for all values with a sieve of Eratosthenes, we get a solution in $O(d(a) \cdot d(b) \cdot d(c) \cdot \log C + d(a \cdot b \cdot c) \cdot k)$, plus $O(a \cdot b \cdot c \cdot \log \log (a \cdot b \cdot c))$ for the pre-calculation.

Solution 2. Calculate the same values of $N$ using dynamic programming. For convenience, denote the dimensions $a$, $b$, $c$ by the array $a_i$. Let $dp[i][j]$ be the number of ways, having processed the first $i$ dimensions, to obtain an $\operatorname{lcm}$ equal to $j$. The transitions are as follows: iterate over pairs of the previous $\operatorname{lcm}$ $t_1$ and a divisor $t_2$ of the size of the next dimension $a_i$.
Then the new $\operatorname{lcm}$ equals $\operatorname{lcm}(t_1, t_2)$, and we make the transition $dp[i + 1][\operatorname{lcm}(t_1, t_2)] += dp[i][t_1] \cdot cnt[i][t_2]$, where $cnt[i][j]$ is the number of $x$ such that $\frac{a_i}{\gcd(a_i, x)} = j$.

How do we calculate the array $cnt[i][j]$? The trivial computation takes $O((a + b + c) \cdot \log C)$, which is too slow. For simplicity, first calculate the array $cnt2[i][j]$, the number of $x$ such that $\gcd(a_i, x) = j$. Iterate over the divisors of $a_i$ in descending order; let the current divisor be $d_1$. Add $\frac{a_i}{d_1}$ to $cnt2[i][d_1]$, since that many $x$ are divisible by $d_1$ (by this point, the $x$ whose $\gcd$ with $a_i$ is strictly larger have already been subtracted). Then iterate over the proper divisors $d_2$ of $d_1$ and subtract $cnt2[i][d_1]$ from $cnt2[i][d_2]$: for those $x$ the $\gcd$ is actually not $d_2$ but $d_1$. Finally, $cnt[i][j] = cnt2[i][\frac{a_i}{j}]$.

If we pre-calculate the divisors of all numbers and compress their "coordinates", we get a solution in $O(d(a \cdot b \cdot c)^2 \cdot \log C + d(a \cdot b \cdot c) \cdot k)$.
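Solution 1 can be verified on small inputs with a brute-force sketch (helper names are ours, not from the editorial): weighting each divisor triple $(x, y, z)$ by $\varphi(x) \cdot \varphi(y) \cdot \varphi(z)$ must reproduce, for every $N$, the exact number of shift vectors whose cycle length is $N$.

```python
from math import gcd, lcm
from collections import Counter

def phi(x):
    """Euler's totient via trial division (fine for small x)."""
    res, n, p = x, x, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            res -= res // p
        p += 1
    if n > 1:
        res -= res // n
    return res

def divisors(x):
    return [d for d in range(1, x + 1) if x % d == 0]

def counts_brute(a, b, c):
    # Count shift vectors (i, j, l) by their common cycle length N.
    cnt = Counter()
    for i in range(a):
        for j in range(b):
            for l in range(c):
                cnt[lcm(a // gcd(a, i), b // gcd(b, j), c // gcd(c, l))] += 1
    return cnt

def counts_phi(a, b, c):
    # Solution 1: a divisor triple (x, y, z) is produced by exactly
    # phi(x) * phi(y) * phi(z) shift vectors.
    cnt = Counter()
    for x in divisors(a):
        for y in divisors(b):
            for z in divisors(c):
                cnt[lcm(x, y, z)] += phi(x) * phi(y) * phi(z)
    return cnt

print(counts_brute(6, 4, 2) == counts_phi(6, 4, 2))  # True
```

Summing either counter over all $N$ gives $a \cdot b \cdot c$, the total number of group elements.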
[ "combinatorics", "dp", "math", "number theory" ]
2,700
#include <bits/stdc++.h>
#define int long long
using namespace std;

const int N = 3000010;
const int mod = 998244353;

int fact[N], ifact[N];
int pos[N];

int powmod(int a, int n) {
    int res = 1;
    while (n) {
        if (n % 2 == 0) {
            a = (a * a) % mod;
            n /= 2;
        } else {
            res = (res * a) % mod;
            n--;
        }
    }
    return res;
}

int inv(int a) { return powmod(a, mod - 2); }

void prepare() {
    fact[0] = 1;
    for (int i = 1; i < N; i++) {
        fact[i] = (fact[i - 1] * i) % mod;
    }
    ifact[N - 1] = inv(fact[N - 1]);
    for (int i = N - 2; i >= 0; i--) {
        ifact[i] = (ifact[i + 1] * (i + 1)) % mod;
    }
}

int C(int n, int k) { return ((fact[n] * ifact[k]) % mod * ifact[n - k]) % mod; }

int MC(vector <int> &a) {
    int sum = 0;
    for (int i : a) sum += i;
    int res = fact[sum];
    for (int i : a) {
        res = (res * ifact[i]) % mod;
    }
    return res;
}

int lcm(int a, int b) { return a / __gcd(a, b) * b; }

vector <int> all_divs(int x) {
    vector <int> d1, d2;
    for (int i = 1; i * i <= x; i++) {
        if (x % i == 0) {
            d1.push_back(i);
            if (i * i != x) {
                d2.push_back(x / i);
            }
        }
    }
    reverse(d2.begin(), d2.end());
    for (int i : d2) d1.push_back(i);
    return d1;
}

void solve() {
    int a, b, c, k;
    cin >> a >> b >> c >> k;
    vector <int> v(k);
    for (int &i : v) cin >> i;
    int g = v[0];
    for (int i : v) g = __gcd(g, i);
    vector <int> divs_g = all_divs(g);
    set <int> divs;
    for (int i : all_divs(a)) divs.insert(i);
    for (int i : all_divs(b)) divs.insert(i);
    for (int i : all_divs(c)) divs.insert(i);
    for (int i : all_divs(g)) divs.insert(i);
    int D = divs.size();
    int i = 0;
    for (int j : divs) {
        pos[j] = i;
        i++;
    }
    int n = max({a, b, c}) + 1;
    vector < vector <int> > tmp(3, vector <int> (D));
    vector < vector <int> > cnt(3, vector <int> (D));
    for (int t = 0; t < 3; t++) {
        int x;
        if (t == 0) x = a;
        if (t == 1) x = b;
        if (t == 2) x = c;
        vector <int> divs_x = all_divs(x);
        for (int i = (int)divs_x.size() - 1; i >= 0; i--) {
            tmp[t][pos[divs_x[i]]] += x / divs_x[i];
            for (int j = 0; j < i; j++) {
                if (divs_x[i] % divs_x[j] == 0) {
                    tmp[t][pos[divs_x[j]]] -= tmp[t][pos[divs_x[i]]];
                }
            }
            cnt[t][pos[x / divs_x[i]]] = tmp[t][pos[divs_x[i]]];
        }
    }
    vector < vector <int> > dp(4, vector <int> (D));
    dp[0][0] = 1;
    for (int i = 0; i < 3; i++) {
        for (int t1 : divs_g) {
            for (int t2 : divs_g) {
                int new_pos = lcm(t1, t2);
                if (t2 < n) {
                    dp[i + 1][pos[new_pos]] = (dp[i + 1][pos[new_pos]] + dp[i][pos[t1]] * cnt[i][pos[t2]]) % mod;
                }
            }
        }
    }
    int sum = 0;
    i = 0;
    for (int j : divs) {
        if (g % j != 0) continue;
        int N = j, cnt = dp[3][pos[j]];
        vector <int> u;
        for (int t : v) u.push_back(t / N);
        sum = (sum + (MC(u) * cnt) % mod) % mod;
    }
    sum = (sum * inv(a * b * c)) % mod;
    cout << sum << endl;
}

int32_t main() {
    prepare();
    int tt;
    cin >> tt;
    while (tt--) {
        solve();
    }
    return 0;
}
2042
A
Greedy Monocarp
There are $n$ chests; the $i$-th chest initially contains $a_i$ coins. For each chest, you can choose any non-negative ($0$ or greater) number of coins to add to that chest, with one constraint: the total number of coins in all chests must become \textbf{at least $k$}. After you've finished adding coins to the chests, greedy Monocarp comes, who wants the coins. He will take the chests one by one, and since he is greedy, he will always choose the chest with the maximum number of coins. Monocarp will stop as soon as the total number of coins in chests he takes is \textbf{at least $k$}. You want Monocarp to take as few coins as possible, so you have to add coins to the chests in such a way that, when Monocarp stops taking chests, he will have \textbf{exactly $k$} coins. Calculate the minimum number of coins you have to add.
Consider the chests Monocarp takes first if we don't add any coins: this is the set of largest chests whose sum $s$ satisfies $s \le k$, while taking the next chest would make the sum exceed $k$. For this set, the minimum number of coins to add is $k - s$, and we can add exactly that amount to the maximum element: the set of taken chests then consists of exactly the same elements, but their sum equals $k$. Now consider the case where we add coins to some chest that is not part of this set. Suppose Monocarp took some chest $i$, which initially had $a_i$ coins, but did not take chest $j$ with $a_j > a_i$. For chest $i$ to be taken, its final value must be at least $a_j$, since Monocarp always selects the maximum of the available chests. Suppose $x$ coins were added to chest $i$, so that $a_i + x \ge a_j$. Instead, we could have increased chest $j$ to the value $a_i + x$, which requires fewer coins, since $a_j > a_i$. Thus, we have shown that it is never optimal to "change" the order of the chests, so we can always assume that Monocarp takes several chests that were the largest in the original order.
[ "greedy", "sortings" ]
800
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n, k;
        cin >> n >> k;
        vector<int> a(n);
        for (auto& x : a) cin >> x;
        sort(a.begin(), a.end(), greater<int>());
        int sum = 0;
        for (auto& x : a) {
            if (sum + x <= k) sum += x;
            else break;
        }
        cout << k - sum << '\n';
    }
}
2042
B
Game with Colored Marbles
Alice and Bob play a game. There are $n$ marbles, the $i$-th of them has color $c_i$. The players take turns; Alice goes first, then Bob, then Alice again, then Bob again, and so on. During their turn, a player \textbf{must} take \textbf{one} of the remaining marbles and remove it from the game. If there are no marbles left (all $n$ marbles have been taken), the game ends. Alice's score at the end of the game is calculated as follows: - she receives $1$ point for every color $x$ such that she has taken at least one marble of that color; - additionally, she receives $1$ point for every color $x$ such that she has taken \textbf{all} marbles of that color (of course, only colors present in the game are considered). For example, suppose there are $5$ marbles, their colors are $[1, 3, 1, 3, 4]$, and the game goes as follows: Alice takes the $1$-st marble, then Bob takes the $3$-rd marble, then Alice takes the $5$-th marble, then Bob takes the $2$-nd marble, and finally, Alice takes the $4$-th marble. Then, Alice receives $4$ points: $3$ points for having at least one marble for colors $1$, $3$ and $4$, and $1$ point for having all marbles of color $4$. \textbf{Note that this strategy is not necessarily optimal for both players}. Alice wants to maximize her score at the end of the game. Bob wants to minimize it. Both players play optimally (i. e. Alice chooses a strategy which allows her to get as many points as possible, and Bob chooses a strategy which minimizes the amount of points Alice can get). Calculate Alice's score at the end of the game.
It's fairly intuitive that if there is at least one unique marble available (a marble is unique if no other marble has the same color), taking it is optimal: if Alice takes that marble, she gets $2$ points, and if Bob takes it, he denies $2$ points to Alice. So, initially, both players take unique marbles one by one until none are left. Let's denote the number of unique marbles as $u$; then Alice takes $\lceil \frac{u}{2} \rceil$ unique marbles and gets $2$ points for each of them. After that, all remaining marbles are non-unique; for each remaining color, there are at least two marbles. Let's denote the number of remaining colors as $k$. We can show that Alice can get $k$ more points no matter how Bob plays, but she can't get more if Bob plays optimally. Each player has a symmetric strategy: if during the previous turn your opponent took the first marble of some color, respond by taking a marble of the same color; otherwise, play any legal move. This symmetric strategy ensures that Alice gets exactly $1$ point for each remaining color, since each color ends up shared between the two players. So, Alice can always achieve $k$ points, and Bob can make sure she doesn't get more than $k$. To solve the problem, count the marbles of each color. Let the number of colors with exactly one marble be $u$, and the number of colors with more than one marble be $k$. Alice's score is $2 \cdot \lceil \frac{u}{2} \rceil + k$.
[ "games", "greedy" ]
900
#include<bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    for (int _ = 0; _ < t; _++) {
        int n;
        scanf("%d", &n);
        vector<int> c(n);
        for (int i = 0; i < n; i++) {
            scanf("%d", &c[i]);
            --c[i];
        }
        vector<int> cnt(n);
        for (auto x : c) cnt[x]++;
        int exactly1 = 0, morethan1 = 0;
        for (auto x : cnt)
            if (x == 1) exactly1++;
            else if (x > 1) morethan1++;
        printf("%d\n", morethan1 + (exactly1 + 1) / 2 * 2);
    }
}
2042
C
Competitive Fishing
Alice and Bob participate in a fishing contest! In total, they caught $n$ fishes, numbered from $1$ to $n$ (the bigger the fish, the greater its index). Some of these fishes were caught by Alice, others — by Bob. Their performance will be evaluated as follows. First, an integer $m$ will be chosen, and all fish will be split into $m$ \textbf{non-empty} groups. The first group should contain several (at least one) smallest fishes, the second group — several (at least one) next smallest fishes, and so on. Each fish should belong to exactly one group, and each group should be a contiguous subsegment of fishes. Note that the groups are numbered in exactly that order; for example, the fishes from the second group cannot be smaller than the fishes from the first group, since the first group contains the smallest fishes. Then, each fish will be assigned a value according to its group index: each fish in the first group gets value equal to $0$, each fish in the second group gets value equal to $1$, and so on. So, each fish in the $i$-th group gets value equal to $(i-1)$. The score of each contestant is simply the total value of all fishes that contestant caught. You want Bob's score to exceed Alice's score by \textbf{at least} $k$ points. What is the minimum number of groups ($m$) you have to split the fishes into? If it is impossible, you should report that.
The main idea needed to solve this problem is the following. For each fish, its value equals the number of group borders before its group; so, each "border" between two groups increases the value of every fish after the border by $1$. Let $s_i$ be the number of Bob's fishes minus the number of Alice's fishes among the fishes with indices $i, i+1, \dots, n-1, n$; also, let $a_j$ be the index of the fish at which the $j$-th group starts. Then the difference in scores between Bob and Alice is equal to $0 \cdot (s_{a_1} - s_{a_2}) + 1 \cdot (s_{a_2} - s_{a_3}) + \cdots + (m - 1) \cdot s_{a_m}$, where $m$ is the number of groups. This sum can be rewritten as $0 \cdot s_{a_1} + (2 - 1) \cdot s_{a_2} + (3 - 2) \cdot s_{a_3} + \cdots + s_{a_m}$, i.e. $s_{a_2} + s_{a_3} + \cdots + s_{a_m}$. So, $s_i$ is exactly how much the difference between Bob's and Alice's scores changes if we put the $(i-1)$-th fish and the $i$-th fish into different groups. From this, it is clear that the final difference is a sum of certain elements of the array $s$. Since we have to minimize the number of groups (the number of selected elements of $s$), it is optimal to choose the maximum elements. So, the solution is the following: construct the array $s$, sort it, and keep taking the largest remaining element while the accumulated sum is less than $k$. The answer to the problem is the number of selected elements plus $1$.
[ "greedy" ]
1,800
#include <bits/stdc++.h> using namespace std; int main() { int t; cin >> t; while (t--) { int n, k; string s; cin >> n >> k >> s; vector<int> vals; int sum = 0; for (int i = n - 1; i > 0; --i) { sum += (s[i] == '1' ? 1 : -1); if (sum > 0) vals.push_back(sum); } sort(vals.begin(), vals.end()); int ans = 1; while (k > 0 && !vals.empty()) { k -= vals.back(); vals.pop_back(); ++ans; } cout << (k > 0 ? -1 : ans) << '\n'; } }
2042
D
Recommendations
Suppose you are working in some audio streaming service. The service has $n$ active users and $10^9$ tracks users can listen to. Users can like tracks and, based on likes, the service should recommend them new tracks. Tracks are numbered from $1$ to $10^9$. It turned out that tracks the $i$-th user likes form a segment $[l_i, r_i]$. Let's say that the user $j$ is a predictor for user $i$ ($j \neq i$) if user $j$ likes all tracks the $i$-th user likes (and, possibly, some other tracks too). Also, let's say that a track is strongly recommended for user $i$ if the track is not liked by the $i$-th user yet, but it is liked by \textbf{every} predictor for the $i$-th user. Calculate the number of strongly recommended tracks for each user $i$. If a user doesn't have any predictors, then print $0$ for that user.
Firstly, if several segments are equal, then the answer for all of them is zero. Now let's move to the problem where all segments are distinct. User $j$ is a predictor for user $i$ iff $l_j \le l_i \le r_i \le r_j$. Also, a track is strongly recommended if it is in all predictor segments, i. e. the track belongs to the intersection $[L, R]$ of all predictors. Since the segment $[l_i, r_i]$ also belongs to $[L, R]$, then the tracks we need to find form two intervals $[L, l_i)$ and $(r_i, R]$. Let's focus on finding interval $(r_i, R]$. Since the right border of the intersection is the minimum among right borders, then our task is to find the minimum among $r_j$-s such that $r_j \ge r_i$ while $l_j \le l_i$. Let's do it in the following way: let's sort all segments by $l_i$ in increasing order; in case of equal $l_i$-s, sort them by $r_i$ in decreasing order. If we process segments in the given order, then by the moment we process the $i$-th segment, all its predictors will be already processed. Let's keep $r_i$-s of all processed segments so far in an "ordered set" $S$ (std::set, for example). Suppose we process segment $i$. Since the right borders of all predictors are already in $S$ and their $r_j \ge r_i$, then finding the minimum among them is equivalent to just taking $R = S.\mathrm{lower\_bound}(r_i)$. Then we can add $R - r_i$ to the answer for the $i$-th segment. In order to calculate intervals $[L, l_i)$ we can just reflect all segments and solve the same problem. The complexity of the solution is $O(n \log{n})$.
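The same sweep can be sketched in Python, using a sorted list with `bisect` in place of `std::set` (insertion into a Python list is $O(n)$ in the worst case, so this is a reference sketch rather than the $O(n \log n)$ solution; representing segments as a list of `(l, r)` tuples is an assumed convention):

```python
import bisect
from collections import Counter

def strongly_recommended(segments):
    """For each segment [l, r], count tracks liked by every predictor but not by the user."""
    n = len(segments)
    ans = [0] * n
    segs = list(segments)
    for _ in range(2):                      # second pass handles the left side via reflection
        order = sorted(range(n), key=lambda i: (segs[i][0], -segs[i][1]))
        rights = []                         # sorted right borders of processed segments
        for i in order:
            l, r = segs[i]
            pos = bisect.bisect_left(rights, r)   # lower_bound: min r_j >= r among predictors
            if pos < len(rights):
                ans[i] += rights[pos] - r
            bisect.insort(rights, r)
        segs = [(-r, -l) for (l, r) in segs]      # reflect all segments for the next pass
    cnt = Counter(segments)                       # equal segments always answer 0
    return [0 if cnt[seg] > 1 else a for seg, a in zip(segments, ans)]
```

For `[(1, 5), (2, 4)]`, user $2$ has user $1$ as a predictor, and tracks $1$ and $5$ are strongly recommended for them.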
[ "data structures", "implementation", "sortings", "two pointers" ]
1,900
#include<bits/stdc++.h> using namespace std; #define fore(i, l, r) for(int i = int(l); i < int(r); i++) #define sz(a) int((a).size()) struct Seg { int l, r; bool operator< (const Seg &oth) const { if (l != oth.l) return l < oth.l; return r < oth.r; }; }; void solve() { int n; cin >> n; vector<Seg> seg(n); for (int i = 0; i < n; i++) cin >> seg[i].l >> seg[i].r; vector<int> ans(n, 0); for (int k = 0; k < 2; k++) { vector<int> ord(n); iota(ord.begin(), ord.end(), 0); sort(ord.begin(), ord.end(), [&seg](int i, int j){ if (seg[i].l != seg[j].l) return seg[i].l < seg[j].l; return seg[i].r > seg[j].r; }); set<int> bounds; for (int i : ord) { auto it = bounds.lower_bound(seg[i].r); if (it != bounds.end()) ans[i] += *it - seg[i].r; bounds.insert(seg[i].r); } for (auto &s : seg) { s.l = -s.l; s.r = -s.r; swap(s.l, s.r); } } map<Seg, int> cnt; for (auto s: seg) cnt[s]++; for (int i = 0; i < n; i++) if (cnt[seg[i]] > 1) ans[i] = 0; for (int a : ans) cout << a << '\n'; } int main() { ios_base::sync_with_stdio(false); cin.tie(0); int t; cin >> t; while (t--) solve(); return 0; }
2042
E
Vertex Pairs
You are given a tree consisting of $2n$ vertices. Recall that a tree is a connected undirected graph with no cycles. Each vertex has an integer from $1$ to $n$ written on it. Each value from $1$ to $n$ is written on \textbf{exactly two} different vertices. Each vertex also has a cost — vertex $i$ costs $2^i$. You need to choose a subset of vertices of the tree such that: - the subset is connected; that is, from each vertex in the subset, you can reach every other vertex in the subset by passing only through the vertices in the subset; - each value from $1$ to $n$ is written on at least one vertex in the subset. Among all such subsets, you need to find the one with the smallest total cost of the vertices in it. Note that you are not required to minimize the number of vertices in the subset.
Note that the cost function of a subset actually states the following: you are asked to choose the minimum lexicographic subset if the vertices are ordered in descending order. This can be shown by looking at the binary representations of the subset costs. Intuitively, we want to implement the following process. Iterate over the vertices in descending order and check whether we have to take the current vertex in the subset, or we can skip it. If we can, we skip it; otherwise, we take it. How to write this checking function? I think that it is easier to do this if we root the tree by a vertex that will definitely be in the answer. Finding such a vertex is easy - out of two vertices with the value $1$, at least one will definitely be in the answer. Iterate over it and take the best answer from the two options. We can compare the answers by their binary representations. In the rooted tree, where the root is always taken, it is easier to check connectivity. If a vertex is taken in the subset, then its parent must also be taken. Otherwise, the subset will definitely be disconnected. A vertex must be taken in the subset if there is a value such that both vertices with this value are in its subtree. However, sometimes it happens that one of the two vertices with some value has already been prohibited from being taken. In this case, for this value, we need to check that only the non-prohibited vertex is in the subtree. Let's maintain the state of each vertex: the vertex must be taken in the subset; the vertex must not be taken in the subset; it has not been determined whether to take the vertex or not. Initially, we know that for each value, we must take the vertices that are on both paths from the vertices with this value to the root. So, on the path from their LCA (lowest common ancestor) to the root. If at least one such vertex is not taken, then that value will not appear in the subset. A vertex can be skipped if its state is not determined. 
When we decide not to take a vertex, the following happens. The vertices in its subtree must also not be taken. And if for some vertex it is marked that it must not be taken in the subset, then another vertex with the same value must now be taken. We will write two auxiliary functions. The first function marks the state as "must take". So, it takes a vertex $v$ and jumps over the parents of vertex $v$ until it either reaches the root or an already marked vertex. In total, this function will make $O(n)$ iterations, as with each successful iteration of the loop, another vertex gets marked. The second function marks the state as "must not take". That is, it takes a vertex $v$ and traverses the subtree of vertex $v$, marking all descendants as "must not take". I chose to implement this function using a breadth-first search. Again, we can stop the traversal when we see that a vertex is already marked as "must not take". In total, there will also be $O(n)$ iterations. When we mark a vertex, we call the first function from another vertex with its value. Overall complexity: $O(n \log n)$ (where everything except LCA works in $O(n)$).
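A sketch of the two marking helpers described above, assuming the tree is already rooted and given by `parent`/`children` arrays, and `twin[v]` is the other vertex carrying the same value (the LCA preprocessing and the outer loop over roots are omitted):

```python
from collections import deque

UNDECIDED, TAKE, SKIP = 0, 1, -1

def mark_take(v, parent, state):
    """Force v and all its ancestors into the subset; stop at an already-taken vertex."""
    while v != -1 and state[v] != TAKE:
        state[v] = TAKE
        v = parent[v]

def mark_skip(root, children, parent, state, twin):
    """Forbid the whole subtree of `root` (BFS); each forbidden vertex forces its twin in."""
    state[root] = SKIP
    q = deque([root])
    while q:
        v = q.popleft()
        mark_take(twin[v], parent, state)   # the other vertex with v's value must be taken
        for u in children[v]:
            if state[u] == UNDECIDED:
                state[u] = SKIP
                q.append(u)
```

Both helpers stop as soon as they hit an already-marked vertex, which is why the total work across all calls stays $O(n)$.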
[ "binary search", "brute force", "data structures", "dfs and similar", "divide and conquer", "greedy", "implementation", "trees" ]
2,900
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; vector<vector<int>> g; struct LCA { vector<vector<pair<int, int>>> st; vector<int> pw; void build(vector<pair<int, int>> a) { int n = a.size(); int lg = 32 - __builtin_clz(n); st.resize(lg, vector<pair<int, int>>(n)); st[0] = a; for (int j = 1; j < lg; ++j) { for (int i = 0; i < n; ++i) { st[j][i] = st[j - 1][i]; if (i + (1 << (j - 1)) < n) st[j][i] = min(st[j][i], st[j - 1][i + (1 << (j - 1))]); } } pw.resize(n + 1); for (int i = 2; i <= n; ++i) pw[i] = pw[i / 2] + 1; } vector<int> d, fst, par; vector<pair<int, int>> ord; int lca(int v, int u) { int l = fst[v], r = fst[u]; if (l > r) swap(l, r); ++r; int len = pw[r - l]; assert(len < int(st.size())); return min(st[len][l], st[len][r - (1 << len)]).second; } void init(int v, int p = -1) { if (fst[v] == -1) fst[v] = ord.size(); ord.push_back({ d[v], v }); for (int u : g[v]) if (u != p) { par[u] = v; d[u] = d[v] + 1; init(u, v); ord.push_back({ d[v], v }); } } LCA(int r = 0) { int n = g.size(); d.resize(n); fst.assign(n, -1); par.assign(n, -1); ord.clear(); init(r); build(ord); } }; int main() { cin.tie(0); ios::sync_with_stdio(false); int n; cin >> n; vector<int> a(2 * n); forn(i, 2 * n){ cin >> a[i]; --a[i]; } g.resize(2 * n); forn(i, 2 * n - 1){ int v, u; cin >> v >> u; --v, --u; g[v].push_back(u); g[u].push_back(v); } vector<int> l(n, -1), r(n, -1); forn(i, 2 * n){ if (l[a[i]] == -1) l[a[i]] = i; else r[a[i]] = i; } vector<char> res(2 * n, 1); forn(rt, 2 * n) if (a[rt] == 0){ LCA d(rt); vector<int> state(2 * n, 0); auto mark = [&](int v){ while (v != -1 && state[v] != 1){ state[v] = 1; v = d.par[v]; } }; auto markdel = [&](int v){ queue<int> q; q.push(v); state[v] = -1; while (!q.empty()){ int v = q.front(); q.pop(); mark(l[a[v]] ^ r[a[v]] ^ v); for (int u : g[v]) if (u != d.par[v] && state[u] == 0){ state[u] = -1; q.push(u); } } }; forn(i, n) mark(d.lca(l[i], r[i])); for (int i = 2 * n - 1; i >= 0; --i) if (state[i] == 0) markdel(i); vector<char> cur(2 * n, 0); for (int i = 0; i < 2 * n; ++i) if (state[i] == 1) cur[i] = 1; reverse(cur.begin(), cur.end()); res = min(res, cur); } reverse(res.begin(), res.end()); cout << count(res.begin(), res.end(), 1) << '\n'; forn(i, 2 * n) if (res[i]) cout << i + 1 << " "; cout << '\n'; return 0; }
2042
F
Two Subarrays
You are given two integer arrays $a$ and $b$, both of size $n$. Let's define the cost of the subarray $[l, r]$ as $a_l + a_{l + 1} + \cdots + a_{r - 1} + a_r + b_l + b_r$. If $l=r$, then the cost of the subarray is $a_l + 2 \cdot b_l$. You have to perform queries of three types: - "$1$ $p$ $x$" — assign $a_{p} := x$; - "$2$ $p$ $x$" — assign $b_{p} := x$; - "$3$ $l$ $r$" — find \textbf{two non-empty non-overlapping subarrays} within the segment $[l, r]$ with the maximum total cost and print their total cost.
To begin with, let's understand how to calculate the answer if we consider only one query of the third type. For this, we can use the following dynamic programming: $dp_{i, k}$ - the maximum result if we have considered the first $i$ elements and chosen $k$ boundaries of subsegments (i.e., $k=0$ - the first segment has not started yet, $k=1$ - the first segment has started, $k=2$ - the first segment is finished, but the second segment hasn't started yet, and so on). The transitions in this dynamic programming are quite straightforward: we can select the current element as the next boundary of the subsegment (increase $k$ by $1$), in which case we have to increase the value of dynamic programming by $b_{l+i}$; or we can keep the current value of $k$. Additionally, the dynamic programming value needs to be increased by $a_{l+i}$ if the value of $k$ corresponds to an open segment ($k=1$ or $k=3$). Note that an element can be both the left and the right border of the segment at the same time; so we also need a transition from $k=0$ to $k=2$ and from $k=2$ to $k=4$. Note that the transition from $dp_{i}$ to $dp_{i + 1}$ requires only $7$ numbers: $a_i$, $b_i$, $dp_{i, 0}, dp_{i, 1}, \dots, dp_{i, 4}$. Therefore, this dynamic programming can be easily packaged within some data structure, for example, a segment tree. In each vertex of the segment tree, let's store a transition matrix $mat_{i, j}$ of size $5 \times 5$ - the maximum result if we started in this segment in state with $k = i$ and must end in state with $k =j$. This matrix is easy to update when changing values in the arrays $a$ and $b$, and it is also easy to merge (to merge two such matrices, it is necessary to consider triples of the form $0 \le i \le j \le k \le 4$, and there are only $35$ of them). Thus, we know how to perform one query in $O(35\log{n})$. 
Note that you should keep a static size array in each vertex of the segment tree (for example, you can use std::array) in C++; if you use something like a std::vector, it will consume much more memory (static arrays require about $200$ bytes for each vertex of the segment tree, which is already a lot). This also works faster in practice, even though the asymptotic complexity of the solution is the same.
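The $5 \times 5$ max-plus matrices can be sketched as follows (a static answer for a whole array computed by a plain left-to-right fold; the segment tree is omitted, and the helper names are assumptions):

```python
from functools import reduce

NEG = float('-inf')

def leaf(a, b):
    """5x5 max-plus transition matrix for one position with values a (array A) and b (array B)."""
    m = [[NEG] * 5 for _ in range(5)]
    m[0][0] = m[2][2] = m[4][4] = 0      # stay outside / between / after the segments
    m[1][1] = m[3][3] = a                # stay inside an open segment
    m[0][1] = m[2][3] = a + b            # open a segment here (pay b for the left border)
    m[1][2] = m[3][4] = a + b            # close a segment here (pay b for the right border)
    m[0][2] = m[2][4] = a + 2 * b        # one-element segment: open and close at once
    return m

def combine(x, y):
    """Max-plus product over triples k <= j <= i: best way across the concatenation."""
    c = [[NEG] * 5 for _ in range(5)]
    for k in range(5):
        for j in range(k, 5):
            if x[k][j] == NEG:
                continue
            for i in range(j, 5):
                if y[j][i] != NEG:
                    c[k][i] = max(c[k][i], x[k][j] + y[j][i])
    return c

def best_two_subarrays(a, b):
    """Maximum total cost of two non-empty non-overlapping subarrays of the whole array."""
    return reduce(combine, (leaf(x, y) for x, y in zip(a, b)))[0][4]
```

Since `combine` is associative, the same matrices can be stored in segment-tree nodes to answer range queries and point updates, exactly as described above.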
[ "data structures", "dp", "implementation", "matrices" ]
2,600
#include <bits/stdc++.h> using namespace std; #define forn(i, n) for (int i = 0; i < int(n); ++i) const int N = 200 * 1000 + 13; const int K = 5; using li = long long; using mat = array<array<li, K>, K>; const li INF = 1e18; int n, q; li a[N], b[N]; mat t[4 * N]; mat init(li a, li b) { mat c; forn(i, K) forn(j, i + 1) c[j][i] = -INF; c[0][0] = c[2][2] = c[4][4] = 0; c[0][1] = c[2][3] = a + b; c[0][2] = c[2][4] = a + b + b; c[1][1] = c[3][3] = a; c[1][2] = c[3][4] = a + b; return c; } mat combine(mat a, mat b) { mat c = init(-INF, -INF); forn(i, K) forn(j, i + 1) forn(k, j + 1) c[k][i] = max(c[k][i], a[k][j] + b[j][i]); return c; } void build(int v, int l, int r) { if (l + 1 == r) { t[v] = init(a[l], b[l]); return; } int m = (l + r) / 2; build(v * 2 + 1, l, m); build(v * 2 + 2, m, r); t[v] = combine(t[v * 2 + 1], t[v * 2 + 2]); } void upd(int v, int l, int r, int p) { if (l + 1 == r) { t[v] = init(a[l], b[l]); return; } int m = (l + r) / 2; if (p < m) upd(v * 2 + 1, l, m, p); else upd(v * 2 + 2, m, r, p); t[v] = combine(t[v * 2 + 1], t[v * 2 + 2]); } mat get(int v, int l, int r, int L, int R) { if (L >= R) return init(-INF, -INF); if (l == L && r == R) return t[v]; int m = (l + r) / 2; return combine( get(v * 2 + 1, l, m, L, min(m, R)), get(v * 2 + 2, m, r, max(m, L), R) ); } int main() { ios::sync_with_stdio(false); cin.tie(0); cin >> n; forn(i, n) cin >> a[i]; forn(i, n) cin >> b[i]; build(0, 0, n); cin >> q; forn(_, q) { int t, x, y; cin >> t >> x >> y; --x; if (t == 1) { a[x] = y; upd(0, 0, n, x); } else if (t == 2) { b[x] = y; upd(0, 0, n, x); } else { auto res = get(0, 0, n, x, y); cout << res[0][4] << '\n'; } } }
2043
A
Coin Transformation
Initially, you have a coin with value $n$. You can perform the following operation any number of times (possibly zero): - transform one coin with value $x$, where $x$ is \textbf{greater than $3$} ($x>3$), into two coins with value $\lfloor \frac{x}{4} \rfloor$. What is the maximum number of coins you can have after performing this operation any number of times?
Let's try to solve this problem "naively": obviously, while we have at least one coin with value $>3$, we should transform it, since it increases the number of coins we get. We can simulate this process, but the number of transformations we get might be really large, so we need to speed this up. Let's make it faster the following way: instead of transforming just one coin, we will transform all coins at once. So, after one operation, we will have $2$ coins with value $\lfloor \frac{x}{4} \rfloor$; after two operations, we will have $4$ coins with value $\lfloor \frac{x}{16} \rfloor$ each, and so on. This can be implemented using a simple while-loop: while the value of our coins is greater than $3$, we divide it by $4$ and double the number of coins. This solution works in $O(\log n)$. It is also possible to derive a formula for the answer: the number of times we need to divide a number by $4$ so it becomes less than $4$ is $\lfloor \log_4 n \rfloor$, and the number of coins we will get is $2$ to the power of that expression. However, you must be very careful with this approach, because it can have severe precision issues due to the fact that standard logarithm functions work with floating-point numbers, so they are imprecise. You should use some way to calculate $\lfloor \log_4 n \rfloor$ without floating-point calculations; for example, iterating (or binary searching) on the power of $4$ you need to divide the number by so that it becomes less than $4$.
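A sketch of the formula-based variant that computes $\lfloor \log_4 n \rfloor$ with integer arithmetic only, avoiding the floating-point precision issues mentioned above:

```python
def max_coins(n: int) -> int:
    """Number of coins after greedy transforming: 2 ** floor(log4 n), computed with integers."""
    divisions = 0
    power = 4
    while power <= n:        # smallest d with 4**(d+1) > n equals floor(log4 n)
        divisions += 1
        power *= 4
    return 2 ** divisions
```

This agrees with the simulation loop: each of the `divisions` steps replaces every coin with two coins of a quarter of the value.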
[ "brute force", "math" ]
800
t = int(input()) for i in range(t): n = int(input()) ans = 1 while n > 3: n //= 4 ans *= 2 print(ans)
2043
B
Digits
Artem wrote the digit $d$ on the board exactly $n!$ times in a row. So, he got the number $dddddd \dots ddd$ (exactly $n!$ digits). Now he is curious about which \textbf{odd} digits from $1$ to $9$ divide the number written on the board.
There are several ways to solve this problem. I will describe two of them. Using divisibility rules (a lot of math involved): We can try divisibility rules for all odd integers from $1$ to $9$ and find out whether they work for our numbers: $1$ is always the answer, since every integer is divisible by $1$; a number is divisible by $3$ iff its sum of digits is divisible by $3$. Since our number consists of $n!$ digits $d$, then either $n!$ or $d$ should be divisible by $3$; so, $n \ge 3$ or $d \bmod 3 = 0$; a number is divisible by $9$ iff its sum of digits is divisible by $9$. This is a bit trickier than the case with $3$, because it is possible that both $n!$ and $d$ are divisible by $3$ (not $9$), and it makes the sum of digits divisible by $9$; a number is divisible by $5$ iff its last digit is $5$ or $0$. Just check that $d=5$, and that's it; probably the trickiest case: a number is divisible by $7$ iff, when this number is split into blocks of $3$ digits (possibly with the first block shorter than $3$ digits), the sign-alternating sum of these blocks is divisible by $7$. Like, $1234569$ is divisible by $7$ because $(1-234+569)$ is divisible by $7$. If we apply this rule to our numbers from the problem, we can use the fact that when $n \ge 3$, the number can be split into several blocks of length $6$, and each such block changes the alternating sum by $0$. So, if $n \ge 3$ or $d = 7$, our number is divisible by $7$. Almost brute force (much less math involved): First, we actually need a little bit of math. If you take a number consisting of $n!$ digits equal to $d$, it is always divisible by the number consisting of $(n-1)!$ digits equal to $d$. This is because, if you write some integer repeatedly, the resulting number will be divisible by the original number, like, for example, $424242$ is divisible by $42$. So, if for some $n = k$, the number is divisible by some digit, then for $n = k+1$, the number will also be divisible by that digit. 
This means that there exists an integer $m$ such that for all integers $n \ge m$, the results are the same if you use the same digit $d$. So, we can set $n = \min(n, m)$, and if $m$ is small enough, use brute force. What is the value of $m$? The samples tell us that the number consisting of $7!$ ones is divisible by $1$, $3$, $7$ and $9$ (and divisibility by $5$ depends only on $d$), so you can actually use $m=7$. It is also possible to reduce $m$ to $6$, but this is not required. So, the solution is: reduce $n$ to something like $7$ if it is greater than $7$, then use brute force. You can either calculate the remainder of a big number modulo small number using a for-loop, or, if you code in Java or Python, use built-in big integers (just be careful with Python, modern versions of it forbid some operations with integers longer than $4300$ digits, you might need to override that behavior).
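The capped brute force can be sketched as follows (the helper names are assumptions; raising Python's integer-to-string digit limit mirrors the note above, since $7! = 5040$ digits exceeds the default limit of $4300$):

```python
import sys

if hasattr(sys, "set_int_max_str_digits"):   # Python >= 3.11 limits str/int conversions
    sys.set_int_max_str_digits(10000)

def fact(x: int) -> int:
    r = 1
    for i in range(2, x + 1):
        r *= i
    return r

def odd_divisors(n: int, d: int):
    """Odd digits 1..9 dividing the number made of n! copies of digit d."""
    n = min(n, 7)                    # for n >= 7 the answer no longer depends on n
    big = int(str(d) * fact(n))      # Python big integers handle the 5040-digit number
    return [i for i in range(1, 10, 2) if big % i == 0]
```

For instance, the number made of $2! = 2$ fives is $55$, which among the odd digits is divisible only by $1$ and $5$.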
[ "math", "number theory" ]
1,100
import sys sys.set_int_max_str_digits(6000) def fact(x): if x == 0: return 1 return x * fact(x - 1) t = int(input()) for i in range(t): n, k = map(int, input().split()) n = min(n, 7) s = int(str(k) * fact(n)) for i in range(1, 10, 2): if s % i == 0: print(i, end = ' ') print()
2043
C
Sums on Segments
You are given an array $a$ of $n$ integers, where all elements except for \textbf{at most one} are equal to $-1$ or $1$. The remaining element $x$ satisfies $-10^9 \le x \le 10^9$. Find all possible sums of subarrays of $a$, including the empty subarray, whose sum is defined as $0$. In other words, find all integers $x$ such that the array $a$ has at least one subarray (possibly empty) with sum equal to $x$. A subarray is a contiguous subsegment of an array. Output these sums in ascending order. Each sum should be printed only once, even if it is achieved by multiple subarrays.
What could the answer to the problem be if all elements were equal to $1$ or $-1$? Let's consider all segments with a fixed left boundary $l$. The empty segment $[l; l-1]$ has a sum of $0$. As we move the right boundary to the right, the sum will change by $\pm1$. That is, we can obtain all sums from the minimum sum to the maximum one. To find the sums for the entire array, we need to find the union of the segments. Since all segments include $0$, their union is also a segment. Therefore, the possible sums are all sums from the minimum sum to the maximum sum in the entire array. Now let's apply this reasoning to the given problem. The segments that do not include the strange element still form a segment of possible sums that includes $0$. As for the segments that include the strange element, we can look at them this way. We will remember these segments and remove the strange element. Then the resulting sums will also form a segment that includes $0$. If we return the element to its place, all sums will increase exactly by this element. Thus, it will remain a segment, however, not necessarily including $0$ now. Then the solution could be as follows. We will find the minimum and maximum sum among the segments that do not contain the strange element. We will find the minimum and maximum sum among the segments that do contain it. Then we will output the union of the obtained sum segments. Next, you need to adapt your favorite algorithm for finding the maximum sum segment for this problem. My favorite is reducing it to prefix sums. The sum of the segment $[l; r]$ is equal to $\mathit{pref}_{r+1} - \mathit{pref}_l$. We fix the right boundary of the segment $r$. Since the first term for all segments is now the same, the maximum sum segment with this right boundary is the one with the minimum possible prefix sum at the left boundary. We will then iterate over $r$ in increasing order and find the maximum sum among all right boundaries. 
The minimum prefix sum on the left can be maintained on the fly. For a fixed right boundary, we have two options: for some prefix of left boundaries, the strange element is inside the segment, and for some suffix, it is outside. This suffix may be empty if the boundary $r$ is to the left of the strange element. Therefore, we will maintain two values on the fly: the minimum prefix sum before the strange element and after it. Finally, we need to find the possible sums in the union of the two segments. There are two options here. If the segments intersect, then it includes all sums from the minimum of the left boundaries to the maximum of the right ones. If they do not intersect, then it is simply two segments. Overall complexity: $O(n)$ per testcase.
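The structural claim — that the achievable sums form two intervals, one from segments avoiding the strange element (always containing $0$ via the empty segment) and one from segments containing it — can be checked with a quadratic reference sketch (the actual solution computes the same four boundary values in $O(n)$ with prefix sums):

```python
def sum_ranges(a, strange_idx):
    """(min, max) of sums over segments avoiding / containing position strange_idx."""
    n = len(a)
    INF = float('inf')
    lo1, hi1 = 0, 0            # segments without the strange element; empty segment gives 0
    lo2, hi2 = INF, -INF       # segments containing the strange element
    for l in range(n):
        s = 0
        for r in range(l, n):
            s += a[r]
            if l <= strange_idx <= r:
                lo2, hi2 = min(lo2, s), max(hi2, s)
            else:
                lo1, hi1 = min(lo1, s), max(hi1, s)
    return (lo1, hi1), (lo2, hi2)
```

For `a = [1, -1, 5, 1]` with the strange element at index $2$, the segments avoiding it give the interval $[-1, 1]$ and the segments containing it give $[4, 6]$, so the full answer is the union of those two integer ranges.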
[ "binary search", "brute force", "data structures", "dp", "greedy", "math" ]
1,600
for _ in range(int(input())): n = int(input()) a = list(map(int, input().split())) l1, r1 = 0, 0 l2, r2 = 2*10**9, -2*10**9 pr = 0 mnl, mxl = 0, 0 mnr, mxr = 2*10**9, -2*10**9 for i in range(n): pr += a[i] if a[i] != -1 and a[i] != 1: mnr, mxr = mnl, mxl mnl, mxl = pr, pr l1 = min(l1, pr - mxl) r1 = max(r1, pr - mnl) l2 = min(l2, pr - mxr) r2 = max(r2, pr - mnr) mnl = min(mnl, pr) mxl = max(mxl, pr) res = [] if l2 > r1: res = list(range(l1, r1 + 1)) + list(range(l2, r2 + 1)) elif r2 < l1: res = list(range(l2, r2 + 1)) + list(range(l1, r1 + 1)) else: res = list(range(min(l1, l2), max(r1, r2) + 1)) print(len(res)) print(*res)
2043
D
Problem about GCD
Given three integers $l$, $r$, and $G$, find two integers $A$ and $B$ ($l \le A \le B \le r$) such that their greatest common divisor (GCD) equals $G$ and the distance $|A - B|$ is maximized. If there are multiple such pairs, choose the one where $A$ is minimized. If no such pairs exist, output "-1 -1".
First, let's try to solve this problem with $G=1$. We can check the pair $(l, r)$. If its greatest common divisor is not $1$, then we should check $(l, r-1)$ and $(l+1, r)$, i. e. the pairs on the distance $(r-l-1)$. If these don't work, we can check $(l, r-2)$, $(l+1, r-1)$ and $(l+2, r)$, and so on, and the answer will be located fast enough (more about that in the third paragraph of the editorial). So, we get a solution in $O(K^2 \log A)$ per test case, where $A$ is the bound on the integers on the input, and $K$ is the decrease in distance we had to make to find this pair (i. e. if the answer has distance $|A-B|$, then $K = |r - l| - |A - B|$). What to do if $G \ne 1$? Almost the same, but first, we need to ensure that the first pair we try has both integers divisible by $G$. So, if $l \bmod G \ne 0$, let's shift it to the next closest integer which is divisible by $G$; and if $r \bmod G \ne 0$, let's subtract $r \bmod G$ from $r$ to make it divisible. Then, we make the same process, but instead of trying pairs like $(l, r-1)$, $(l+1, r)$, $(l, r-2)$ and so on, we try $(l, r-G)$, $(l+G, r)$, $(l, r - 2G)$, and so on. Okay, now let's talk about why this works fast, i. e. why this $K$ in the complexity formula is not that big. All the following paragraphs will assume $G=1$, but we can use the same reasoning with $G>1$ if we divide everything by $G$. Intuitively, we can think about it in terms of prime gaps: as soon as $r-K$ becomes a prime number, we get our result. Average gap between two primes is about $ln A$, but there can be pretty big gaps, more than $1000$. If you're bold and brave, you can stop here and submit, but let's find a better bound. Instead of thinking about the gap between two primes, let's think about the gap between two numbers which are coprime with $l$. Let's assume that $l$ is the product of several first prime numbers (if it is not, integers which are coprime with $l$ will appear even more often). 
$\frac{1}{2}$ of all integers are not divisible by $2$; $\frac{2}{3}$ of them are not divisible by $3$; $\frac{4}{5}$ of them are not divisible by $5$, and so on. If we repeat this process until the product of primes we considered becomes too large, we can get that, on average, $1$ in $7$ or $8$ integers is coprime with $l$. This is a better bound, but it still uses "average" gaps. However, this should be enough to try to submit the solution. Okay, now let's show a rigorous proof (which you really shouldn't need during the contest) of some reasonable bound. We can prove that if you consider all possible pairs of integers from intervals $[l, l+30)$ and $(r-30, r]$, you will find at least one coprime pair. There are $30$ integers in each interval, so there are $900$ pairs to consider. Suppose in some of them, both integers are divisible by $2$. There will be at most $225$ such pairs, so we are left with $675$ pairs. Suppose in some of the remaining pairs, both integers are divisible by $3$. There will be at most $10$ integers divisible by $3$ in each segment, so at most $100$ pairs. We are left with $575$ pairs. Suppose in some of the remaining pairs, both integers are divisible by $5$. There will be at most $6$ integers divisible by $5$ in each segment, so at most $36$ pairs. We are left with $539$ pairs. If we repeat this until some prime number like $37$, we will still have to "fix" more than $450$ pairs, so at least one number has $15$ or more pairs to "fix". This should mean that it is divisible by at least $15$ primes which are greater than $37$, and it means it's greater than $10^{18}$. So, in every pair of intervals $[l, l+30)$ and $(r-30, r]$ such that the numbers are not greater than $10^{18}$, there will be at least one coprime pair, and this proves that $K \le 60$. 
In practice, it is much lower since, in our proof, we didn't consider the fact that a lot of pairs will have more than $1$ prime which they are divisible by; if we take this into account (or repeat the same process until primes become much greater), we can prove tighter bounds, for example, $K = 40$.
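A direct Python sketch of this search (align both ends to multiples of $G$, then try pairs in order of decreasing distance, preferring the smaller $A$; the function name is an assumption):

```python
from math import gcd

def best_pair(l, r, g):
    """Max-distance pair (A, B), l <= A <= B <= r, gcd(A, B) == g, minimal A; (-1, -1) if none."""
    L = l + (-l) % g           # smallest multiple of g that is >= l
    R = r - r % g              # largest multiple of g that is <= r
    steps = (R - L) // g
    for i in range(steps + 1):             # i = total distance decrease, in units of g
        for j in range(i + 1):             # j = how far the left end is shifted up
            A, B = L + j * g, R - (i - j) * g
            if gcd(A, B) == g:
                return A, B
    return -1, -1
```

If `L > R` (no multiple of $G$ fits in the range), `steps` is negative, the loops never run, and the function reports that no pair exists.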
[ "brute force", "flows", "math", "number theory" ]
1,800
#include<bits/stdc++.h> using namespace std; long long gcd(long long x, long long y) { if(x == 0) return y; else return gcd(y % x, x); } void solve() { long long l, r, g; scanf("%lld %lld %lld", &l, &r, &g); long long L = l + (l % g == 0 ? 0 : g - (l % g)); long long R = r - r % g; for(int i = 0; i <= (R - L) / g; i++) for(int j = 0; j <= i; j++) if(gcd(L + j * g, R - (i - j) * g) == g) { printf("%lld %lld\n", L + j * g, R - (i - j) * g); return; } puts("-1 -1"); } int main() { int t; scanf("%d", &t); for(int i = 0; i < t; i++) solve(); }
2043
E
Matrix Transformation
You are given two matrices $A$ and $B$ of size $n \times m$, filled with integers between $0$ and $10^9$. You can perform the following operations \textbf{on matrix $A$} in any order and any number of times: - &=: choose two integers $i$ and $x$ ($1 \le i \le n$, $x \ge 0$) and replace each element in row $i$ with the result of the bitwise \textbf{AND} operation between $x$ and that element. Formally, for every $j \in [1, m]$, the element $A_{i,j}$ is replaced with $A_{i,j} \text{ & } x$; - |=: choose two integers $j$ and $x$ ($1 \le j \le m$, $x \ge 0$) and replace each element in column $j$ with the result of the bitwise \textbf{OR} operation between $x$ and that element. Formally, for every $i \in [1, n]$, the element $A_{i,j}$ is replaced with $A_{i,j} \text{ | } x$. The value of $x$ may be chosen differently for different operations. Determine whether it is possible to transform matrix $A$ into matrix $B$ using the given operations any number of times (including zero).
Every operation which affects multiple bits can be split into several operations which only affect one bit. For example, if you make an |= operation with $x=11$, it is the same as making three |= operations with $x=1$, $x=2$ and $x=8$. So, let's solve the problem for each bit separately. If we consider only one bit, the problem becomes the following one: You are given a binary matrix $A$, and you have to check if it is possible to transform it into another binary matrix $B$. The operations you can make are "set all elements in some row to $0$" and "set all elements in some column to $1$". Let's find all cells where $A_{i,j} \ne B_{i,j}$. If $A_{i,j} = 0$ and $B_{i,j} = 1$, then it means that we definitely have to apply an operation to the $j$-th column. Otherwise, if $A_{i,j} = 1$ and $B_{i,j} = 0$, then we definitely have to apply an operation to the $i$-th row. That way, we can find the operations we definitely need to apply. It's pretty obvious that we don't need to do the same operation twice - if we do the same operation to the same row/column twice, we can discard the first operation, since it will be overwritten by the last operation. But these are not all operations we need to apply. Suppose $B_{i,j} = 0$, and we change everything in the $j$-th column to $1$. Then, we have to make an operation with the $i$-th row to set $B_{i,j}$ back to $0$. This means that after you apply an operation to the $j$-th column, you have to do an operation with the $i$-th row. Same when $B_{i,j} = 1$: after you apply an operation to the $i$-th row, you have to do an operation with the $j$-th column. Let's build a graph where every operation will be represented by a vertex, and a directed edge $x \rightarrow y$ means that, if you apply the operation $x$, the operation $y$ must be applied after that. Some vertices of this graph represent operations which we definitely need to apply. 
If there is a cycle reachable from one of these vertices, the transformation is impossible, since we will apply operations forever and never get the matrix we need. But if there is no cycle reachable from any operation we need to apply, then we can "mark" all operations reachable from the operations we need, and apply them in the order of topological sorting, and we will get exactly the matrix $B$ (for every cell that was changed, the last operation applied to it will make it correct; and we enforced that we need to apply operations to all cells that must change). Searching for a cycle in a directed graph can be done with "three-colored DFS" - a modification of DFS which, for each vertex, maintains one of three states - either this vertex wasn't reached by DFS at all, or it was already reached and is on the DFS stack, or this vertex was already fully processed (both reached by DFS and deleted from the DFS stack). This way, we can find a back-edge that creates the cycle as follows: if we try to go from a vertex to another vertex which is on the DFS stack, it means that we have found a cycle. There are other methods of cycle detection, but this one is the most common. If for every bit, we can make the transformations we need, the answer is Yes. Otherwise, the answer is No. The solution works in $O(n m \log A)$.
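The three-colored DFS described above can be sketched as follows (a minimal Python illustration with my own function name; the attached reference solution is the C++ version):

```python
# Colors: 0 = unvisited, 1 = on the DFS stack, 2 = fully processed.
def reaches_cycle(adj, start, color):
    """Return True if some cycle is reachable from `start`."""
    if color[start] == 1:      # back-edge onto the DFS stack: cycle found
        return True
    if color[start] == 2:      # already fully explored from here
        return False
    color[start] = 1
    found = any(reaches_cycle(adj, y, color) for y in adj[start])
    color[start] = 2
    return found

# 0 -> 1 -> 2 -> 1 contains a cycle reachable from 0.
adj = {0: [1], 1: [2], 2: [1], 3: [0]}
print(reaches_cycle(adj, 0, [0] * 4))   # True
```

As in the C++ solution, the `color` array may be shared across calls as long as the caller aborts on the first `True`.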
[ "bitmasks", "brute force", "data structures", "dfs and similar", "graphs", "greedy", "implementation" ]
2,300
#include <bits/stdc++.h>
using namespace std;

struct graph {
    int V;
    vector<vector<int>> g;
    vector<int> color; // 0 = unvisited, 1 = on DFS stack, 2 = processed

    bool dfs(int v) {
        if (color[v] != 0) return false;
        color[v] = 1;
        bool res = false;
        for (auto y : g[v]) {
            if (color[y] == 2) continue;
            else if (color[y] == 0) res |= dfs(y);
            else res = true; // back-edge to a vertex on the stack: cycle
        }
        color[v] = 2;
        return res;
    }

    void add_edge(int x, int y) { g[x].push_back(y); }

    graph(int V) {
        this->V = V;
        this->g.resize(V);
        this->color.resize(V);
    }
};

int get_bit(int x, int y) { return (x >> y) & 1; }

bool check(const vector<vector<int>>& a, const vector<vector<int>>& b, int k) {
    int n = a.size();
    int m = a[0].size();
    vector<bool> must_row(n);
    vector<bool> must_col(m);
    auto G = graph(n + m);
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++) {
            if (get_bit(a[i][j], k) != get_bit(b[i][j], k)) {
                if (get_bit(b[i][j], k) == 0) must_row[i] = true;
                else must_col[j] = true;
            }
            if (get_bit(b[i][j], k) == 0) G.add_edge(j + n, i);
            else G.add_edge(i, j + n);
        }
    for (int i = 0; i < n; i++)
        if (must_row[i] && G.dfs(i)) return false;
    for (int j = 0; j < m; j++)
        if (must_col[j] && G.dfs(j + n)) return false;
    return true;
}

void solve() {
    int n, m;
    scanf("%d %d", &n, &m);
    vector<vector<int>> a(n, vector<int>(m));
    auto b = a;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            scanf("%d", &a[i][j]);
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            scanf("%d", &b[i][j]);
    for (int i = 0; i < 30; i++) {
        if (!check(a, b, i)) {
            puts("No");
            return;
        }
    }
    puts("Yes");
}

int main() {
    int t;
    scanf("%d", &t);
    for (int i = 0; i < t; i++) solve();
}
2043
F
Nim
Recall the rules of the game "Nim". There are $n$ piles of stones, where the $i$-th pile initially contains some number of stones. Two players take turns choosing a non-empty pile and removing any positive (strictly greater than $0$) number of stones from it. The player unable to make a move loses the game. You are given an array $a$, consisting of $n$ integers. Artem and Ruslan decided to play Nim on segments of this array. Each of the $q$ rounds is defined by a segment $(l_i, r_i)$, where the elements $a_{l_i}, a_{l_i+1}, \dots, a_{r_i}$ represent the sizes of the piles of stones. Before the game starts, Ruslan can remove any number of piles from the chosen segment. However, at least \textbf{one pile must remain}, so in a single round he can remove at most $(r_i - l_i)$ piles. He is allowed to remove $0$ piles. After the removal, the game is played on the remaining piles within the segment. \textbf{All rounds are independent}: the changes made in one round do not affect the original array or any other rounds. Ruslan wants to remove as many piles as possible so that Artem, who always makes the first move, loses. For each round, determine: - the maximum number of piles Ruslan can remove; - the number of ways to choose the \textbf{maximum} number of piles for removal. Two ways are considered different if there exists an index $i$ such that the pile at index $i$ is removed in one way but not in the other. Since the number of ways can be large, output it modulo $998\,244\,353$. If Ruslan cannot ensure Artem's loss in a particular round, output -1 for that round.
Let's recall the condition for the second player to win in the game of "Nim". The XOR of the sizes of the piles must be equal to $0$. That is, we are asked to remove as many piles as possible so that the XOR becomes $0$. Notice the following fact. Suppose there are $c$ piles of size $x$ on a segment. If we remove an even number of piles, the XOR does not change. If we remove an odd number, it changes by $x$. Therefore, there is no point in keeping more than $2$ piles. If we keep $t > 2$ piles, we can remove another $2$ piles, and it will not change anything at all. We will answer the queries independently. Let's find the count of each of the $51$ elements in the given segment. For example, we can precompute how many times each element appears in each prefix. Now we can write the following dynamic programming solution. $\mathit{dp}[i][j][f]$ represents a pair of (maximum amount of removed elements, number of ways to remove that amount), where we have considered the first $i$ of the $51$ values, the current XOR among the non-removed values is $j$, and $f = 1$ if at least one element is not removed and $0$ otherwise. Let the amount of the current value be $c$. From each state, there are at most three transitions: remove all elements with that value - $1$ way; keep $1$ element - $c$ ways; keep $2$ elements - $\frac{c(c-1)}{2}$ ways. The base case is $\mathit{dp}[0][0][0] = (0, 1)$, the rest are filled with $(-1, -1)$, for example. The final state is $\mathit{dp}[51][0][1]$. If it's equal to $(-1, -1)$, there's no answer. Since the XOR cannot exceed $63$, the number of states in the dynamic programming solution is $51 \cdot 64 \cdot 2$. From each state, there are three transitions, which fits within the time constraints.
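The per-segment DP above can be sketched as follows (a minimal Python illustration; the state layout and names are my own, and the reference C++ solution below processes all queries):

```python
# cnt[v] = number of piles of size v on the segment; values are < 51, XORs < 64.
MOD = 998244353

def solve_segment(cnt):
    NEG = (-1, 0)
    # dp[x][f] = (max piles removed, ways); f = 1 iff at least one pile is kept
    dp = [[NEG, NEG] for _ in range(64)]
    dp[0][0] = (0, 1)
    for v, c in enumerate(cnt):
        ndp = [[NEG, NEG] for _ in range(64)]
        for x in range(64):
            for f in range(2):
                mx, w = dp[x][f]
                if mx < 0:
                    continue
                def relax(x2, f2, mx2, w2):
                    cur = ndp[x2][f2]
                    if mx2 > cur[0]:
                        ndp[x2][f2] = (mx2, w2 % MOD)
                    elif mx2 == cur[0]:
                        ndp[x2][f2] = (mx2, (cur[1] + w2) % MOD)
                relax(x, f, mx + c, w)                          # remove all c piles
                if c >= 1:
                    relax(x ^ v, 1, mx + c - 1, w * c)          # keep one pile
                if c >= 2:
                    relax(x, 1, mx + c - 2, w * (c * (c - 1) // 2))  # keep two
        dp = ndp
    return dp[0][1]  # mx == -1 means there is no answer (output -1)

# Segment with piles [1, 1]: keep both (XOR 0), so (0 removed, 1 way).
print(solve_segment([0, 2]))  # (0, 1)
```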
[ "bitmasks", "brute force", "combinatorics", "dp", "games", "greedy", "implementation", "shortest paths" ]
2,700
#include <bits/stdc++.h>
using namespace std;

#define forn(i, n) for (int i = 0; i < int(n); i++)

const int MOD = 998244353;

int add(int a, int b) {
    a += b;
    if (a >= MOD) a -= MOD;
    return a;
}

int mul(int a, int b) {
    return a * 1ll * b % MOD;
}

struct state {
    int mx, cnt;
};

void merge(state &a, int mx, int cnt) {
    if (a.mx > mx) return;
    if (a.mx < mx) a.mx = mx, a.cnt = 0;
    a.cnt = add(a.cnt, cnt);
}

state dp[52][64][2];

int main() {
    cin.tie(0);
    ios::sync_with_stdio(false);
    int n, q;
    cin >> n >> q;
    vector<int> a(n);
    forn(i, n) cin >> a[i];
    int mx = *max_element(a.begin(), a.end());
    vector<vector<int>> cnt(n + 1, vector<int>(mx + 1));
    forn(i, n) {
        cnt[i + 1] = cnt[i];
        ++cnt[i + 1][a[i]];
    }
    forn(_, q) {
        int l, r;
        cin >> l >> r;
        --l;
        memset(dp, -1, sizeof(dp));
        dp[0][0][0] = {0, 1};
        forn(i, mx + 1) {
            int c = cnt[r][i] - cnt[l][i];
            int c2 = c * 1ll * (c - 1) / 2 % MOD;
            forn(val, 64) forn(fl, 2) if (dp[i][val][fl].cnt >= 0) {
                if (c > 0) merge(dp[i + 1][val ^ i][true], dp[i][val][fl].mx + c - 1, mul(dp[i][val][fl].cnt, c));
                if (c > 1) merge(dp[i + 1][val][true], dp[i][val][fl].mx + c - 2, mul(dp[i][val][fl].cnt, c2));
                merge(dp[i + 1][val][fl], dp[i][val][fl].mx + c, dp[i][val][fl].cnt);
            }
        }
        auto ans = dp[mx + 1][0][1];
        if (ans.cnt == -1) cout << -1 << '\n';
        else cout << ans.mx << ' ' << ans.cnt << '\n';
    }
}
2043
G
Problem with Queries
You are given an array $a$, consisting of $n$ integers. Your task is to process $q$ queries of two types: - $1~p~x$ — set the value of the element at index $p$ equal to $x$; - $2~l~r$ — count the number of pairs of indices $(i, j)$ such that $l \le i < j \le r$ and $a_i \ne a_j$. Note that the queries in this task are \textbf{encoded}; each subsequent query can only be decoded after calculating the answer to the preceding query of the second type.
First, let's reformulate the problem. Instead of counting the number of pairs of distinct elements in a segment, we will count the number of pairs of identical elements and subtract it from the total number of pairs. To solve this problem, we will use square root decomposition on the array. Let's divide the original array into blocks of size $B$ and learn how to answer queries $(l, r)$ that satisfy the following conditions: $l = L \cdot B$ - this means that the left boundary of the query coincides with the beginning of some block of the square root decomposition; $r = R \cdot B - 1$ - this means that the right boundary of the query coincides with the last element of some block of the square root decomposition. Due to this restriction on the problem, we can utilize certain properties that will be necessary for our solution: The order of elements within a block does not matter; we can treat each block of our square root decomposition as an unordered multiset; Changing the value of an element in terms of a multiset can be reformulated as removing the old value and adding the new value. In general, the modified queries can be rewritten in the following format: add $ind~x$ - add an element with value $x$ to the block $ind$. del $ind~x$ - remove an element with value $x$ from the block $ind$. get $L~R$ - count the number of pairs of positions $(i, j)$, where $L \cdot B \le i < j \le R \cdot B$ and $a_i = a_j$. Let's assume we have an array $blocks[i][j]$, which stores the answer to the third type of query for all possible segments of blocks $0 \le i \le j \le K$, where $K$ is the number of blocks in the square root decomposition of our array. Initially, we fill the $blocks$ array with zeros and then add all elements using the add operation. Let's observe how the first type of query modifies this array; the second type of query is considered similarly. 
When an element with value $x$ is added to the block $ind$, it affects all elements of the array $blocks[i][j]$ such that $0 \le i \le ind \le j$. For a specific element of the array $blocks[i][j]$, its value increases by the number of occurrences of the element $x$ in the blocks $i, \dots, j$. This happens because the added element can form a pair with all existing elements equal to the specified value that belong to this segment. Formally, this can be described as $blocks[i][j]~=blocks[i][j] + count(i, j, x)$, where $count(i, j, x)$ is a function that returns the number of occurrences of the element $x$ in the blocks $i, \dots, j$. The function $count(i, j, x)$ can be maintained by storing an array $pref[val][i]$, where for each element $val$, it will store how many times it appears in the segment of blocks $[0, i]$. When adding an element, the number of values that need to be recalculated will be about $\frac{K \cdot (K - 1)}{2}$. If we can do this in $O(1)$, we can achieve a solution in $O((Q+N) \cdot N^{\frac{2}{3}})$, taking $B = N^{\frac{2}{3}}$. Unfortunately, it is practically impossible to fit this within time limits, so let's try to improve the solution. Returning to the description of how to modify an arbitrary element of the array $blocks[i][j]$, let's rewrite and transform the formula: $blocks[i][j]~=blocks[i][j] + count(i, j, x)$ into $blocks[i][j]~=blocks[i][j] + count(i, ind, x) + count(ind + 1, j, x)$. The second term in this expression does not use the right boundary $j$, and the third term does not use the left boundary $i$. Suppose we want to add the second term to the relevant elements $blocks[i][j]$; we can iterate over the left boundary $i$ and add $count(i, ind, x)$ for all right boundaries where $j \ge ind$. 
To perform such additions in $O(1)$, we will use something like a difference array: we will create an array $update\_right[i][j]$, where for a fixed left boundary, we will store how the suffix of right boundaries will change. When iterating over the left boundary $i$, for the suffix of right boundaries $j \ge ind$, we need to add the same amount $count(i, ind, x)$; but we will handle this addition on the suffix in one position of the array $update\_right[i][ind] = update\_right[i][ind] + count(i, ind, x)$. Then, to account for these additions in the third type of queries $get(L,~ R)$, we will need to take the sum of the elements of the array $update\_right[L][L] + update\_right[L][L + 1] + \dots + update\_right[L][R]$. The third term from the sum $blocks[i][j]~=blocks[i][j] + count(i, ind, x) + count(ind + 1, j, x)$ is processed similarly: we will iterate over the right boundary and form an array $update\_left$, where the additions will be on the prefix. Thus, each modification query will be executed in $O(K)$. A query of the third type requires us to iterate over one of the fixed boundaries and find the sum, which will require $O(K)$ operations. By choosing $B = \sqrt N$, each of these queries can be executed in $O(\sqrt N)$. Returning to the original problem, let's describe how to find the answer to a query whose boundaries do not coincide with the boundaries of the blocks: First, find the answer for the problem we learned to solve above for the largest segment of blocks that is completely contained within the query; Now, we need to add the contribution of $O(B)$ elements from the blocks that are not fully included in the query. Each of these $O(B)$ elements can form a pair with all elements in our block segment, so we will need to take the number of occurrences of this element from the $pref$ array. 
Additionally, these $O(B)$ elements can form pairs with each other; to handle this, we can maintain an array of size $N$, where for each value, we keep track of the number of additional elements with that value. When processing some additional element $x$, we will add the count of these elements from the array to the answer and increase this count by $1$; at the end of processing the query for all additional elements, we will reset their count back to $0$. Thus, we have obtained a solution in $O((N+Q) \cdot \sqrt N)$.
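Since no reference implementation is attached for this problem, here is a minimal Python sketch of the block-pair bookkeeping described above, covering only the add and get operations; the class and field names (`BlockPairs`, `upd_right`, `upd_left`) are my own, and the block-prefix counts are simply rebuilt in $O(K)$ per add (which fits the $O(K)$ budget of a modification):

```python
class BlockPairs:
    # K blocks, values in [0, V)
    def __init__(self, K, V):
        self.K = K
        # pref[x][i] = occurrences of value x in blocks 0..i-1
        self.pref = [[0] * (K + 1) for _ in range(V)]
        # upd_right[i][p]: contributions for queries with left block i and
        # right block >= p; upd_left[j][p]: left block <= p and right block j
        self.upd_right = [[0] * K for _ in range(K)]
        self.upd_left = [[0] * K for _ in range(K)]

    def count(self, i, j, x):
        # occurrences of value x in blocks i..j (0 when j < i)
        return self.pref[x][j + 1] - self.pref[x][i]

    def add(self, ind, x):
        # pairs formed with the copies of x already present, split at block ind
        for i in range(ind + 1):
            self.upd_right[i][ind] += self.count(i, ind, x)
        for j in range(ind + 1, self.K):
            self.upd_left[j][ind] += self.count(ind + 1, j, x)
        for t in range(ind + 1, self.K + 1):
            self.pref[x][t] += 1

    def get(self, L, R):
        # pairs of equal values with both positions in blocks L..R
        return sum(self.upd_right[L][L:R + 1]) + sum(self.upd_left[R][L:R + 1])

bp = BlockPairs(K=2, V=3)
for blk, val in [(0, 1), (0, 1), (1, 1), (1, 2)]:
    bp.add(blk, val)
print(bp.get(0, 1), bp.get(0, 0))  # 3 1  (three 1s overall, two 1s in block 0)
```

A del operation would mirror add with subtractions, decrementing `pref` before updating the difference arrays.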
[ "brute force", "data structures", "implementation" ]
3,000
null
2044
A
Easy Problem
Cube is given an integer $n$. She wants to know how many ordered pairs of positive integers $(a,b)$ there are such that $a=n-b$. Since Cube is not very good at math, please help her!
For any $n$, Cube can set $a$ = any integer between $1$ and $n-1$ inclusive, and set $b = n - a$. $a$ cannot be less than $1$, because then it would be non-positive, and $a$ cannot be greater than $n-1$, because then $b$ would be less than $1$, which would make it non-positive. Therefore the answer is just $n-1$ for all $n$.
[ "brute force", "math" ]
800
import sys
input = sys.stdin.readline

for _ in range(int(input())):
    print(int(input()) - 1)
2044
B
Normal Problem
A string consisting of only characters 'p', 'q', and 'w' is painted on a glass window of a store. Ship walks past the store, standing directly in front of the glass window, and observes string $a$. Ship then heads inside the store, looks directly at the same glass window, and observes string $b$. Ship gives you string $a$. Your job is to find and output $b$.
The letters she reads that comprise string $b$ are just the letters that comprise string $a$, flipped left-to-right. This means that 'p' becomes 'q', 'q' becomes 'p', and 'w' stays 'w', since it is vertically symmetrical. The order in which the letters are read is also reversed, because what used to be the left side of string $a$ gets flipped over to the right side of string $b$, and vice versa. We now have an algorithm for constructing string $b$, which is to iterate from right-to-left on string $a$, outputting 'p' when there is a 'q', 'q' when there is a 'p', and 'w' when there is a 'w'.
[ "implementation", "strings" ]
800
#include <bits/stdc++.h>
using namespace std;
#define ll long long
#define pll pair<ll, ll>

int t;

int main() {
    cin.tie(0)->sync_with_stdio(0);
    cin >> t;
    while (t--) {
        string s;
        cin >> s;
        reverse(s.begin(), s.end());
        for (char &c : s)
            if (c == 'q') c = 'p';
            else if (c == 'p') c = 'q';
        cout << s << '\n';
    }
}
2044
C
Hard Problem
Ball is the teacher in Paperfold University. The seats of his classroom are arranged in $2$ rows with $m$ seats each. Ball is teaching $a + b + c$ monkeys, and he wants to assign as many monkeys to a seat as possible. Ball knows that $a$ of them only want to sit in row $1$, $b$ of them only want to sit in row $2$, and $c$ of them have no preference. Only one monkey may sit in each seat, and each monkey's preference must be followed if it is seated. What is the maximum number of monkeys that Ball can seat?
Let $A$, $B$, $C$ be three sets of monkeys, such that monkeys in $A$ can only sit in row $1$, $B$ in row $2$, and $C$ can sit anywhere. It is clear that if there is free space in row $1$, and there are monkeys left in set $A$, it is optimal to seat a monkey from set $A$ onto row $1$. This is because a monkey from set $C$ can be seated on either row, and there might be space left on the other row for that same monkey in set $C$ after you've already seated the monkey from set $A$. However, this is not the case if you start by seating the monkeys in set $C$ in the front row, since you might now leave empty seats at the back, but then have monkeys from set $A$ still left unseated. Therefore, the strategy is as follows: seat as many monkeys from set $A$ as you can in the front row, then seat as many monkeys from set $B$ as you can in the back row, then seat as many monkeys from set $C$ as you can, and that yields the answer.
[ "greedy", "math" ]
800
#include <bits/stdc++.h>
#define ll long long
using namespace std;

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    int tt;
    cin >> tt;
    while (tt--) {
        int m, a, b, c;
        cin >> m >> a >> b >> c;
        int ans = 0, rem = 0;
        ans += min(m, a); rem += m - min(m, a);
        ans += min(m, b); rem += m - min(m, b);
        ans += min(rem, c);
        cout << ans << '\n';
    }
    return 0;
}
2044
D
Harder Problem
Given a sequence of positive integers, a positive integer is called a mode of the sequence if it occurs the maximum number of times that any positive integer occurs. For example, the mode of $[2,2,3]$ is $2$. Any of $9$, $8$, or $7$ can be considered to be a mode of the sequence $[9,9,8,8,7,7]$. You gave UFO an array $a$ of length $n$. To thank you, UFO decides to construct another array $b$ of length $n$ such that $a_i$ is a mode of the sequence $[b_1, b_2, \ldots, b_i]$ for all $1 \leq i \leq n$. However, UFO doesn't know how to construct array $b$, so you must help her. Note that $1 \leq b_i \leq n$ must hold for your array for all $1 \leq i \leq n$.
Observe that if you have an array where all elements are unique, they will all have frequency $1$, therefore they can all be classified as the mode. Therefore, it follows that the strategy for the construction is to build an array $b$ with all distinct elements such that, for each prefix, its last element $a_i$ appears somewhere in $b_1, \ldots, b_i$. An easy way of doing this is as follows: for each element $a_i$, if this value has appeared previously in $a$ (you can use a set to check this), set $b_i$ equal to some integer in $[1, n]$ that does not appear in $a$ and has not been used yet, and keep going. Otherwise, set $b_i = a_i$.
[ "constructive algorithms", "greedy", "math" ]
1,100
#include <bits/stdc++.h>
#define ll long long
using namespace std;

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    int tt;
    cin >> tt;
    while (tt--) {
        int n;
        cin >> n;
        vector<int> a(n + 1), b(n);
        for (int i = 0; i < n; i++) {
            int x;
            cin >> x;
            if (!a[x]) {
                b[i] = x;
                a[x] = 1;
            }
        }
        queue<int> q;
        for (int i = 1; i <= n; i++)
            if (!a[i]) q.push(i);
        for (int i = 0; i < n; i++) {
            if (!b[i]) {
                b[i] = q.front();
                q.pop();
            }
        }
        for (int i = 0; i < n; i++)
            cout << b[i] << " \n"[i == n - 1];
    }
    return 0;
}
2044
E
Insane Problem
Wave is given five integers $k$, $l_1$, $r_1$, $l_2$, and $r_2$. Wave wants you to help her count the number of ordered pairs $(x, y)$ such that all of the following are satisfied: - $l_1 \leq x \leq r_1$. - $l_2 \leq y \leq r_2$. - There exists a non-negative integer $n$ such that $\frac{y}{x} = k^n$.
Clearly, trying to bruteforce over all possible values of $x$ or $y$ is too slow, because the bounds are $1 \leq l_1 \leq r_1 \leq 10^9$. However, there is another variable that you can actually bruteforce over - and that is $n$. This is because exponentiation famously makes numbers very big very quickly - and if we set $k$ as small as possible (i.e. $2$), we only need to check $0 \leq n \leq 32$. This is because $2^{32} > 10^9$, so there cannot possibly be any solutions for $n > 32$ for any $k$. Now, let's rephrase the problem. We need to find pairs $(x, y)$ such that $x \cdot k^n = y$. We can check every value of $n$ from $0$ to $32$, and for each, find the smallest and the largest valid $x$; subtracting these two values gives the count to add to the answer. Note that we do not need to care about more than $33$ different values of $k^n$, because obviously $k^{33} \ge 2^{33} > 10^9$. From here on, we focus on solving for only one value of $k^n$. When $k^n$ is fixed and you are given $\frac{y}{x}=k^n$, notice $y$ is fixed as $x k^n$. Therefore, if we count the values $x$ such that $y$ is in the given interval as well, we will be properly counting the ordered pairs. Formally, this condition can be written as: $l_2 \le x k^n \le r_2$ $\frac{l_2}{k^n} \le x \le \frac{r_2}{k^n}$ Because $x$ is an integer, $\left \lceil {\frac{l_2}{k^n}} \right \rceil \le x \le \left \lfloor {\frac{r_2}{k^n}} \right \rfloor$ Thus, when we intersect the two intervals, we get the following interval at last. $\max \left({l_1,\left \lceil {\frac{l_2}{k^n}} \right \rceil}\right) \le x \le \min \left({r_1,\left \lfloor {\frac{r_2}{k^n}} \right \rfloor}\right)$ Compute the size of this interval for all $k^n$ (at most $33$ values) and the answer can be found. Do note the following details while implementing: When $r < l$, the size of the interval is $0$, not negative. Beware of overflows. 
Using big-integer arithmetic can help avoid overflow, but it may make your solution slow. Do not round up a fraction using the floating-point ceil function; use integer arithmetic such as $\left \lceil \frac{a}{b} \right \rceil = \left \lfloor \frac{a-1}{b} \right \rfloor + 1$ for positive integers instead. This has been a recurring issue in almost every Div. 4!
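The interval intersection for one fixed power can be sketched as follows (hypothetical function names; assumes $k \ge 2$ and uses integer ceiling division rather than a floating-point ceil):

```python
def count_for_power(p, l1, r1, l2, r2):
    # values x in [l1, r1] with x * p in [l2, r2], for a fixed power p = k^n
    lo = max(l1, -(-l2 // p))   # ceil(l2 / p) via integer arithmetic
    hi = min(r1, r2 // p)       # floor(r2 / p)
    return max(0, hi - lo + 1)  # empty interval counts as 0, not negative

def count_pairs(k, l1, r1, l2, r2):
    total, p = 0, 1
    while l1 * p <= r2:         # k^n can still map some x into [l2, r2]
        total += count_for_power(p, l1, r1, l2, r2)
        p *= k
    return total

print(count_pairs(2, 1, 10, 1, 10))   # 18 (pairs with y = x * 2^n)
```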
[ "binary search", "greedy", "implementation", "math", "number theory" ]
1,300
#include <bits/stdc++.h>
#define ll long long
using namespace std;

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    int tt;
    cin >> tt;
    while (tt--) {
        ll k, l1, r1, l2, r2;
        cin >> k >> l1 >> r1 >> l2 >> r2;
        ll kn = 1, ans = 0;
        for (int n = 0; r2 / kn >= l1; n++) {
            ans += max(0ll, min(r2 / kn, r1) - max((l2 - 1) / kn + 1, l1) + 1ll);
            kn *= k;
        }
        cout << ans << '\n';
    }
    return 0;
}
2044
F
Easy Demon Problem
For an arbitrary grid, Robot defines its beauty to be the sum of elements in the grid. Robot gives you an array $a$ of length $n$ and an array $b$ of length $m$. You construct a $n$ by $m$ grid $M$ such that $M_{i,j}=a_i\cdot b_j$ for all $1 \leq i \leq n$ and $1 \leq j \leq m$. Then, Robot gives you $q$ queries, each consisting of a single integer $x$. For each query, determine whether or not it is possible to perform the following operation \textbf{exactly} once so that $M$ has a beauty of $x$: - Choose integers $r$ and $c$ such that $1 \leq r \leq n$ and $1 \leq c \leq m$ - Set $M_{i,j}$ to be $0$ for all ordered pairs $(i,j)$ such that $i=r$, $j=c$, or both. Note that queries are \textbf{not persistent}, meaning that you do not actually set any elements to $0$ in the process — you are only required to output if it is possible to find $r$ and $c$ such that if the above operation is performed, the beauty of the grid will be $x$. Also, note that you must perform the operation for each query, even if the beauty of the original grid is already $x$.
This is an anti-hash test for python sets and dictionaries. Before you call us evil, we saved you from getting hacked in open hack phase. Beware! Let's denote the beauty of the matrix as $B$, and denote $\text{SumA}$ as the sum of all the elements in the array $a$, and $\text{SumB}$ as the sum of all the elements in the array $b$. Before applying an operation, the beauty of the matrix can be expressed as: $B = b_1 \cdot a_1 + b_1 \cdot a_2 + b_1 \cdot a_3 + b_2 \cdot a_1 + b_2 \cdot a_2 + \ldots$ After factoring, this simplifies to: $B = b_1 \cdot (a_1 + a_2 + a_3 + \ldots) + b_2 \cdot (a_1 + a_2 + a_3 + \ldots) + \ldots$ Further factoring gives: $B = (a_1 + a_2 + a_3 + a_4 + \ldots) \cdot (b_1 + b_2 + b_3 + \ldots)$ This can be written as: $B = \text{SumA} \cdot \text{SumB}$ Now, consider the effect of clearing row $r$: since $M_{r,j} = a_r \cdot b_j$, the beauty decreases by $a_r \cdot \text{SumB}$. Similarly, clearing column $c$ decreases the beauty by $b_c \cdot \text{SumA}$. An important observation is that the element at position $(r, c)$ is subtracted twice this way, so we must add it back once. After considering this, let the beauty after the operation be denoted as $X$. Using the observations above: $X = B - (a_r \cdot \text{SumB} + b_c \cdot \text{SumA} - a_r \cdot b_c)$ Simplifying further: $X = \text{SumA} \cdot \text{SumB} - a_r \cdot \text{SumB} - b_c \cdot \text{SumA} + a_r \cdot b_c$ Factoring terms, we obtain: $X = \text{SumB} \cdot (\text{SumA} - a_r) - b_c \cdot (\text{SumA} - a_r)$ Finally: $X = (\text{SumA} - a_r) \cdot (\text{SumB} - b_c)$ At this stage, it is sufficient to iterate over the divisors of $X$. For each ordered pair of divisors whose product is $X$, we check whether the required values of $\text{SumA} - a_r$ and $\text{SumB} - b_c$ can be achieved. This can be implemented using a simple map or boolean vector for faster computation, although such optimization is not required for this problem.
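The per-query divisor check can be sketched as follows (a minimal Python illustration with my own names; note that the attached C++ solution instead precomputes all achievable products up to a bound):

```python
def build(a):
    # achievable values of sum(a) minus one element
    s = sum(a)
    return {s - v for v in a}

def possible(x, reach_a, reach_b):
    """Can x be written as p * q with p achievable from a and q from b?"""
    if x == 0:
        return 0 in reach_a or 0 in reach_b
    for d in range(1, int(abs(x) ** 0.5) + 1):  # trial division (sketch only)
        if x % d == 0:
            e = x // d
            # try every sign/order assignment of the factor pair (d, e)
            for p, q in ((d, e), (e, d), (-d, -e), (-e, -d)):
                if p in reach_a and q in reach_b:
                    return True
    return False

reach_a = build([1, 2, 3])      # SumA = 6 -> {5, 4, 3}
reach_b = build([2, 2])         # SumB = 4 -> {2}
print(possible(8, reach_a, reach_b))   # True, since 4 * 2 = 8
```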
[ "binary search", "brute force", "data structures", "math", "number theory" ]
1,900
#include <bits/stdc++.h>
using namespace std;
#define FOR(i, a, b) for (int i = (a); i < (b); ++i)
#define F0R(i, a) FOR(i, 0, a)
#define int long long
#define vt vector
#define endl "\n"

const int N = 4e5 + 5;
bool apos[N], aneg[N], bpos[N], bneg[N], posspos[N], possneg[N];

signed main() {
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    int n, m, q;
    cin >> n >> m >> q;
    vector<int> a(n), b(m);
    int asum = 0, bsum = 0;
    F0R(i, n) { cin >> a[i]; asum += a[i]; }
    F0R(i, m) { cin >> b[i]; bsum += b[i]; }
    F0R(i, n) {
        if (abs(asum - a[i]) < N) {
            if (asum - a[i] < 0) aneg[a[i] - asum] = true;
            else apos[asum - a[i]] = true;
        }
    }
    F0R(i, m) {
        if (abs(bsum - b[i]) < N) {
            if (bsum - b[i] < 0) bneg[b[i] - bsum] = true;
            else bpos[bsum - b[i]] = true;
        }
    }
    FOR(i, 1, N) {
        FOR(j, 1, N) {
            if (i * j >= N) break; // keep i * j inside the arrays
            if (apos[i] && bpos[j]) posspos[i * j] = true;
            if (apos[i] && bneg[j]) possneg[i * j] = true;
            if (aneg[i] && bpos[j]) possneg[i * j] = true;
            if (aneg[i] && bneg[j]) posspos[i * j] = true;
        }
    }
    while (q--) {
        int x;
        cin >> x;
        if (x > 0) {
            if (posspos[x]) cout << "YES" << endl;
            else cout << "NO" << endl;
        } else {
            if (possneg[-x]) cout << "YES" << endl;
            else cout << "NO" << endl;
        }
    }
    return 0;
}
2044
G1
Medium Demon Problem (easy version)
\textbf{This is the easy version of the problem. The key difference between the two versions is highlighted in bold.} A group of $n$ spiders has come together to exchange plushies. Initially, each spider has $1$ plushie. Every year, if spider $i$ has at least one plushie, he will give exactly one plushie to spider $r_i$. Otherwise, he will do nothing. Note that all plushie transfers happen at the same time. \textbf{In this version, if any spider has more than $1$ plushie at any point in time, they will throw all but $1$ away.} The process is stable in the current year if each spider has the same number of plushies (before the current year's exchange) as he did the previous year (before the previous year's exchange). Note that year $1$ can never be stable. Find the first year in which the process becomes stable.
This problem deals with a specific subclass of graphs called "functional graphs", also known as "successor graphs". The key feature that they have is that each node has exactly one successor. Therefore, the graph in the problem will necessarily split into $k \geq 1$ components, where each component contains exactly one cycle, and each node will either be in the cycle, or it will be on a path leading towards the cycle. Observe that if a node that is not on a cycle currently has a plushie, this plushie will cause the arrangement to be unstable until the plushie reaches the cycle. Proof: suppose node $u$ has the plushie in year $i$. In the next year, $u$ will no longer have this plushie, because it was passed down to $r_u$; therefore, the arrangement has changed. This continues inductively until the plushie reaches the cycle of its component. From this, we know that the answer is at least the maximum distance $d$ from any node to its cycle. Now, since every node in the cycle already has a plushie, these plushies just get passed round and round, so nodes within the cycle cannot change the answer. Therefore, the answer is exactly $d + 2$: after $d$ years every off-cycle plushie has reached a cycle, so the configuration before year $d + 2$ is the first one that matches the configuration of the previous year.
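The layer-by-layer peeling that realizes this can be sketched as follows (a minimal Python illustration, my own naming, with $0$-indexed successors; the reference C++ solution below uses an ordered set instead):

```python
from collections import deque

def first_stable_year(r):
    """r[i] = successor of node i. The number of peeling layers equals the
    longest distance from any node to its cycle; the answer is that plus 2."""
    n = len(r)
    deg = [0] * n
    for v in r:
        deg[v] += 1
    layer = deque(i for i in range(n) if deg[i] == 0)
    ans = 2
    while layer:
        nxt = deque()
        for u in layer:
            deg[r[u]] -= 1
            if deg[r[u]] == 0:
                nxt.append(r[u])
        layer = nxt
        ans += 1
    return ans

# Chain 0 -> 1 -> 2 with a self-loop at 2: depth 2, so year 4 is stable.
print(first_stable_year([1, 2, 2]))   # 4
```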
[ "dfs and similar", "graph matchings", "graphs", "implementation", "trees" ]
1,700
#include <bits/stdc++.h>
#define ll long long
using namespace std;

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    int tt;
    cin >> tt;
    while (tt--) {
        int n;
        cin >> n;
        vector<int> r(n + 1), d(n + 1);
        for (int i = 1; i <= n; i++) {
            cin >> r[i];
            d[r[i]]++;
        }
        set<pair<int, int>> s;
        for (int i = 1; i <= n; i++) s.insert({d[i], i});
        int ans = 2;
        queue<int> q;
        while (!s.empty() && (*s.begin()).first == 0) {
            while (!s.empty() && (*s.begin()).first == 0) {
                int k = (*s.begin()).second;
                auto it = s.find({d[r[k]], r[k]});
                d[r[k]]--;
                if (it != s.end()) {
                    s.erase(it);
                    q.push(r[k]);
                }
                s.erase(s.begin());
            }
            while (!q.empty()) s.insert({d[q.front()], q.front()}), q.pop();
            ans++;
        }
        cout << ans << '\n';
    }
    return 0;
}
2044
G2
Medium Demon Problem (hard version)
\textbf{This is the hard version of the problem. The key difference between the two versions is highlighted in bold.} A group of $n$ spiders has come together to exchange plushies. Initially, each spider has $1$ plushie. Every year, if spider $i$ has at least one plushie, he will give exactly one plushie to spider $r_i$. Otherwise, he will do nothing. Note that all plushie transfers happen at the same time. \textbf{In this version, each spider is allowed to have more than 1 plushie at any point in time.} The process is stable in the current year if each spider has the same number of plushies (before the current year's exchange) as he did the previous year (before the previous year's exchange). Note that year $1$ can never be stable. Find the first year in which the process becomes stable.
Note that similarly to G1, once all plushies end up in the hands of spiders who are in a loop, the process becomes stable. Let's model the input as a collection of rooted forests. For each spider $i$, if $i$ is part of a loop, then let's compress the loop into a single node and use that as the root of a tree. Otherwise, if spider $i$ gives a present to spider $r_i$, then let's draw an edge from $i$ to $r_i$. Now, let $i$ be any node that is not part of a loop. How long will it take until spider $i$ runs out of presents? We can see that it is the subtree size of $i$, as one present leaves the subtree each year. Thus, our challenge now is to process the nodes in an efficient order such that we can find the subtree size of all nodes. This can be done with topological sorting, which gives us an order that processes all nodes starting from the leaves upwards. After the topological sort, we may do dynamic programming to find subtree sizes of all nodes. Let $dp[i]$ be the number of years until spider $i$ runs out of presents. We initialize it to $1$ for all nodes, since each spider starts with a present; then, when node $i$ is processed, we add $dp[i]$ to $dp[r_i]$. The process stabilizes once every off-cycle spider is empty, so taking the maximum $dp$ value over all nodes that are not part of a loop, plus $2$ (as in the easy version), yields the answer.
[ "dfs and similar", "dp", "dsu", "graphs", "implementation", "trees" ]
1,900
#include<bits/stdc++.h> #define ll long long using namespace std; int main() { ios::sync_with_stdio(false); cin.tie(nullptr); int tt; cin>>tt; while(tt--) { int n; cin>>n; vector<int> r(n+1),d(n+1),v(n+1,1); for(int i=1;i<=n;i++) { cin>>r[i]; d[r[i]]++; } set<pair<int,int> > s; for(int i=1;i<=n;i++) { s.insert({d[i],i}); } int ans=2; queue<int> q; while(!s.empty()&&(*s.begin()).first==0) { while(!s.empty()&&(*s.begin()).first==0) { int k=(*s.begin()).second; ans=max(ans,v[k]+2);v[r[k]]+=v[k]; auto it=s.find({d[r[k]],r[k]}); d[r[k]]--; if(it!=s.end()) { s.erase(it); q.push(r[k]); } s.erase(s.begin()); } while(!q.empty()) s.insert({d[q.front()],q.front()}),q.pop(); } cout<<ans<<'\n'; } return 0; }
2044
H
Hard Demon Problem
Swing is opening a pancake factory! A good pancake factory must be good at flattening things, so Swing is going to test his new equipment on 2D matrices. Swing is given an $n \times n$ matrix $M$ containing positive integers. He has $q$ queries to ask you. For each query, he gives you four integers $x_1$, $y_1$, $x_2$, $y_2$ and asks you to flatten the submatrix bounded by $(x_1, y_1)$ and $(x_2, y_2)$ into an array $A$. Formally, $A = [M_{(x_1,y_1)}, M_{(x_1,y_1+1)}, \ldots, M_{(x_1,y_2)}, M_{(x_1+1,y_1)}, M_{(x_1+1,y_1+1)}, \ldots, M_{(x_2,y_2)}]$. The following image depicts the flattening of a submatrix bounded by the red dotted lines. The orange arrows denote the direction that the elements of the submatrix are appended to the back of $A$, and $A$ is shown at the bottom of the image. Afterwards, he asks you for the value of $\sum_{i=1}^{|A|} A_i \cdot i$ (sum of $A_i \cdot i$ over all $i$).
Consider translating the sum back onto the matrix. For simplicity, we first discuss querying the whole matrix. The sum we would like to find is $\sum_i i\cdot A_i$. Here, $A_i$ corresponds to some $M_{(x,y)}$, so we translate this to $\sum_{x,y} i\cdot M_{(x,y)}$. The remaining issue is the index $i$ each entry is multiplied by. Remember that we index the entries in increasing order of $x$, and within a row in increasing order of $y$. Assuming $x$ and $y$ are $0$-indexed, entry $(x,y)$ corresponds to index $x\cdot n + y$ (also $0$-indexed), which matches the order we defined. Then, what we want to find is $\sum_{x,y} (x \cdot n + y + 1)\cdot M_{(x,y)}$. Notice that $x \cdot n$, $y$, and $1$ are independent terms, so we can split the sum into $\sum_{x,y} x \cdot n \cdot M_{(x,y)}$, $\sum_{x,y} y \cdot M_{(x,y)}$, and $\sum_{x,y} M_{(x,y)}$. Each of these three sums can be precomputed entry by entry, and a 2D prefix sum answers the query for the entire matrix. A query for a submatrix is very similar; you only have to account for two things: we have $y_2-y_1+1$ columns instead of $n$, and the precomputed values might not start from $0$ at the first row/column of the query. Both issues can be fixed using the same three precomputed sums. The time complexity is $\mathcal{O}(n^2+q)$.
[ "constructive algorithms", "data structures", "dp", "implementation", "math" ]
2,100
#include <bits/stdc++.h> using namespace std; using ll = long long; using vll = vector <ll>; using ii = pair <ll, ll>; using vii = vector <ii>; void tc () { ll n, Q; cin >> n >> Q; vector <vll> mat(n, vll(n)); for (vll &ve : mat) { for (ll &i : ve) cin >> i; } vector <vll> psR(n, vll(n+1)), psRr(n, vll(n+1)), psRc(n+1, vll(n+1)), ps(n+1, vll(n+1)), psRrc(n+1, vll(n+1)); for (ll i = 0; i < n; i++) { for (ll j = 0; j < n; j++) { psR[i][j+1] = psR[i][j] + mat[i][j]; } } for (ll i = 0; i < n; i++) { for (ll j = 0; j < n; j++) { psRr[i][j+1] = psRr[i][j] + mat[i][j]*(j+1); } } for (ll i = 0; i < n; i++) { for (ll j = 0; j <= n; j++) { psRc[i+1][j] = psRc[i][j] + psR[i][j]*(i+1); } } for (ll i = 0; i < n; i++) { for (ll j = 0; j <= n; j++) { psRrc[i+1][j] = psRrc[i][j] + psRr[i][j]; } } for (ll i = 0; i < n; i++) { for (ll j = 0; j <= n; j++) { ps[i+1][j] = ps[i][j] + psR[i][j]; } } while (Q--) { ll x1, y1, x2, y2; cin >> x1 >> y1 >> x2 >> y2; x1--; y1--; x2--; y2--; ll ans = 0; ans += -(ps[x2+1][y2+1]-ps[x2+1][y1]-ps[x1][y2+1]+ps[x1][y1])*x1*(y2-y1+1); ans += (psRc[x2+1][y2+1] - psRc[x1][y2+1] - (ps[x2+1][y2+1]-ps[x1][y2+1]))*(y2-y1+1); ans += (psRc[x2+1][y1] - psRc[x1][y1] - (ps[x2+1][y1]-ps[x1][y1]))*-(y2-y1+1); ans += (ps[x2+1][y2+1]-ps[x1][y2+1])*-y1; ans += (ps[x2+1][y1]-ps[x1][y1])*y1; ans += psRrc[x2+1][y2+1] - psRrc[x1][y2+1]; ans +=-(psRrc[x2+1][y1] - psRrc[x1][y1]); cout << ans << ' '; } cout << '\n'; } int main () { cin.tie(nullptr) -> sync_with_stdio(false); ll T; cin >> T; while (T--) { tc(); } return 0; }
2046
A
Swap Columns and Find a Path
There is a matrix consisting of $2$ rows and $n$ columns. The rows are numbered from $1$ to $2$ from top to bottom; the columns are numbered from $1$ to $n$ from left to right. Let's denote the cell on the intersection of the $i$-th row and the $j$-th column as $(i,j)$. Each cell contains an integer; initially, the integer in the cell $(i,j)$ is $a_{i,j}$. You can perform the following operation any number of times (possibly zero): - choose two columns and swap them (i. e. choose two integers $x$ and $y$ such that $1 \le x < y \le n$, then swap $a_{1,x}$ with $a_{1,y}$, and then swap $a_{2,x}$ with $a_{2,y}$). After performing the operations, you have to choose a path from the cell $(1,1)$ to the cell $(2,n)$. For every cell $(i,j)$ in the path except for the last, the next cell should be either $(i+1,j)$ or $(i,j+1)$. Obviously, the path cannot go outside the matrix. The cost of the path is the sum of all integers in all $(n+1)$ cells belonging to the path. You have to perform the operations and choose a path so that its cost is \textbf{maximum} possible.
We can divide the columns in the matrix into three different groups: the columns where we go through the top cell; the columns where we go through the bottom cell; the columns where we go through both cells. There should be exactly one column in the $3$-rd group - this will be the column where we shift from the top row to the bottom row. However, all other columns can be redistributed between groups $1$ and $2$ as we want: if we want to put a column into the $1$-st group, we put it before the column where we go down; otherwise, we put it after the column where we go down. So, we can get any distribution of columns between the $1$-st and the $2$-nd group. Now let's consider the contribution of each column to the answer. Columns from the $1$-st group add $a_{1,i}$ to the answer, columns from the $2$-nd group add $a_{2,i}$, and the column from the $3$-rd group adds both of these values. So, we can iterate on the index of the column where we go down, take $a_{1,i} + a_{2,i}$ for it, and take $\max(a_{1,j}, a_{2,j})$ for every other column. This works in $O(n^2)$, which is enough under the constraints of the problem. However, it is possible to solve the problem in $O(n)$. To do so, calculate the sum of $\max(a_{1,j}, a_{2,j})$ over all columns. Then, if we pick the $i$-th column as the column where we go down, the answer will be equal to this sum plus $\min(a_{1,i}, a_{2,i})$, since this is the only column where we visit both cells.
[ "greedy", "sortings" ]
1,200
for _ in range(int(input())):
    n = int(input())
    a = []
    for i in range(2):
        a.append(list(map(int, input().split())))
    best = [max(a[0][i], a[1][i]) for i in range(n)]
    full = [a[0][i] + a[1][i] for i in range(n)]
    sum_best = sum(best)
    ans = -10 ** 19
    for i in range(n):
        ans = max(ans, sum_best + full[i] - best[i])
    print(ans)
2046
B
Move Back at a Cost
You are given an array of integers $a$ of length $n$. You can perform the following operation zero or more times: - In one operation choose an index $i$ ($1 \le i \le n$), assign $a_i := a_i + 1$, and then move $a_i$ to the back of the array (to the rightmost position). For example, if $a = [3, 5, 1, 9]$, and you choose $i = 2$, the array becomes $[3, 1, 9, 6]$. Find the lexicographically smallest$^{\text{∗}}$ array you can get by performing these operations. \begin{footnotesize} $^{\text{∗}}$An array $c$ is lexicographically smaller than an array $d$ if and only if one of the following holds: - $c$ is a prefix of $d$, but $c \ne d$; or - in the first position where $c$ and $d$ differ, the array $c$ has a smaller element than the corresponding element in $d$. \end{footnotesize}
The first idea is to notice that each element is moved to the back at most once. Indeed, if we fix the subset of elements that we ever move to the back, we can perform the operation once on each of them in any order we like, and that becomes their final order with the smallest possible increase. The optimal order is, of course, the increasing order. The question is how to select this subset of elements to move to the back. Since we need the lexicographically smallest array, we are looking for a greedy approach that chooses the smallest possible element for the next position, one by one, left to right. What's the smallest number our resulting array can start with? Of course, the minimum. That means all the elements in front of the minimum have to be moved to the back and increased by one. What's the smallest number we can have in the second place, given that we have the minimum in the first position? Either the smallest element to the right of the minimum, or the smallest element among those already moved to the back. Analysing this approach, we see that as we go left to right, we keep picking elements from the suffix minima sequence, and keep growing the set of elements we have to move to the back to ''extract'' this sequence from the initial array. At some point, the next smallest element comes not from the suffix minima sequence, but from the pile of integers we moved to the back. At this point, all the remaining elements have to be moved to the back as well (that is, increased by $1$), and then listed in sorted order. So, the answer is always several first elements of the suffix minima sequence, starting from the global minimum, followed by all other elements, increased by $1$ and sorted in increasing order. To find the point where we switch from the suffix minima sequence to the moved elements, it is convenient to precompute the suffix minima and keep a set of the elements we have already moved to the back. This solution runs in $O(n \log n)$, along with many others.
[ "binary search", "data structures", "greedy", "sortings" ]
1,600
#include <bits/stdc++.h> using namespace std; #ifdef LOCAL #define eprintf(...) fprintf(stderr, __VA_ARGS__) #else #define eprintf(...) 42 #endif using ll = long long; using ld = long double; using D = double; using uint = unsigned int; template<typename T> using pair2 = pair<T, T>; #define pb push_back #define mp make_pair #define all(x) (x).begin(),(x).end() #define fi first #define se second mt19937_64 rng(chrono::steady_clock::now().time_since_epoch().count()); void solve() { int n; scanf("%d", &n); vector<int> a(n); for (int i = 0; i < n; i++) scanf("%d", &a[i]); vector<int> front(n); int frontfront = n; int frontback = n; multiset<int> back; for (int i = n - 1; i >= 0; i--) { if (frontfront >= frontback || a[i] <= front[frontfront]) front[--frontfront] = a[i]; else back.insert(a[i] + 1); while (frontfront < frontback && !back.empty() && front[frontback - 1] > *back.begin()) { back.insert(front[frontback - 1] + 1); frontback--; } } vector<int> answer; for (int i = frontfront; i < frontback; i++) answer.pb(front[i]); for (auto t : back) answer.pb(t); for (auto t : answer) printf("%d ", t); printf(" "); } int main() { int NT = 1; scanf("%d", &NT); for (int T = 1; T <= NT; T++) { solve(); } return 0; }
2046
C
Adventurers
Once, four Roman merchants met in a Roman mansion to discuss their trading plans. They faced the following problem: they traded the same type of goods, and if they traded in the same city, they would inevitably incur losses. They decided to divide up the cities between them where they would trade. The map of Rome can be represented in this problem as a plane with certain points marked — the cities of the Roman Empire. The merchants decided to choose a certain dividing point $(x_0, y_0)$. Then, in a city with coordinates $(x_i, y_i)$, - the first merchant sells goods if $x_0 \le x_i$ and $y_0 \le y_i$; - the second merchant sells goods if $x_0 > x_i$ and $y_0 \le y_i$; - the third merchant sells goods if $x_0 \le x_i$ and $y_0 > y_i$; - the fourth merchant sells goods if $x_0 > x_i$ and $y_0 > y_i$. The merchants want to choose $(x_0, y_0)$ in such a way as to \textbf{maximize the smallest number of cities} that any of them gets (i. e., as fair as possible). Please find such a point for them.
First, we will use the idea of binary search on the answer. Let's assume we are currently checking whether we can achieve the answer $k$. We will iterate over the possible positions of the vertical line. Note that it is enough to consider only vertical lines passing through the X-coordinate of at least one of the points, so there are only $O(n)$ candidate vertical lines. While sweeping the vertical line, we will maintain two segment trees for sums: one for the left half-plane and the other for the right. In each of the segment trees, $t_y$ represents how many points with a Y-coordinate of $y$ lie in the corresponding half-plane. In each of the segment trees, we check whether it is possible to divide the half-plane into two parts by a line parallel to the x-axis so that both parts contain at least $k$ points; if this is possible, we find the set of suitable lines - it is obviously some segment $[y_{l}, \, y_{r}]$. If the intersection of the segments obtained from the two trees is non-empty, we have found an answer for this $k$. This check can be done by descending the tree in $O(\log n)$ time. Thus, the total time complexity of the described solution is $O(n \log^2 n)$.
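For intuition (and for stress-testing the tree-based solution), here is an illustrative brute force over candidate dividing points. Only coordinates of the points themselves, plus one value past the maximum, produce distinct partitions; this is far too slow for the real constraints but easy to trust. The function name is an assumption of this sketch.

```python
def best_dividing_point(pts):
    # candidate thresholds: each distinct coordinate, plus one past the max
    xs = sorted({x for x, _ in pts} | {max(x for x, _ in pts) + 1})
    ys = sorted({y for _, y in pts} | {max(y for _, y in pts) + 1})
    best_k, best_pt = -1, None
    for x0 in xs:
        for y0 in ys:
            q = [0, 0, 0, 0]  # city counts of merchants 1..4, as defined
            for x, y in pts:
                q[2 * (y0 > y) + (x0 > x)] += 1
            k = min(q)
            if k > best_k:
                best_k, best_pt = k, (x0, y0)
    return best_k, best_pt
```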
[ "binary search", "data structures", "greedy", "sortings", "ternary search", "two pointers" ]
2,100
#include <math.h> #include <unordered_set> #include <unordered_map> #include <map> #include <iostream> #include <fstream> #include <vector> #include <set> #include <array> #include <cstring> #include <ctime> #include <cassert> #include <string_view> #include <functional> #include <sstream> #include <numeric> #include <cmath> #include <deque> #include <list> #include <algorithm> #include <iomanip> using namespace std; using i64 = long long; using ui32 = unsigned int; using ui64 = unsigned long long; #define all(a) (a).begin(), (a).end() struct Tree; Tree* NewNode(); struct Count { int left = 0; int right = 0; void Add(int delta) { left += delta; if (delta < 0) { right -= delta; } } void operator += (const Count& c) { left += c.left; right += c.right; } int GetMin() { return min(left, right); } }; struct Tree { Count count; Tree* left = nullptr; Tree* right = nullptr; void Add(int l, int r, int p, int delta = 1) { count.Add(delta); if (r - l == 1) { return; } int mid = (l + r) / 2; if (p < mid) { if (left == nullptr) { left = NewNode(); } left->Add(l, mid, p, delta); } else { if (right == nullptr) { right = NewNode(); } right->Add(mid, r, p, delta); } } Count LeftCount() { if (left) { return left->count; } return {}; } Count RightCount() { if (right) { return right->count; } return {}; } void Remove(int l, int r, int p) { Add(l, r, p, -1); } }; Tree nodes[7000000]; Tree* NewNode() { static Tree* nextNode = nodes; return nextNode++; } struct Solver { struct Point { int x, y; bool operator < (const Point& p) const { return x < p.x; } }; int max = 0; int bx = 0; int by = 0; bool FindBest(int l, int r, Tree* node, const Count& left = {}, const Count& right = {}) { if (node == nullptr) { return false; } int mid = (l + r) / 2; Count newLeft = node->LeftCount(); Count newRight = node->RightCount(); newLeft += left; newRight += right; int mn = min(newLeft.GetMin(), newRight.GetMin()); bool updated = false; if (mn > max) { max = mn; by = mid; updated = true; } if (mn == 
newLeft.GetMin()) { if (FindBest(mid, r, node->right, newLeft, right)) { return true; } } else { if (FindBest(l, mid, node->left, left, newRight)) { return true; } } return updated; } void Solve(istream& cin, ostream& cout) { int n; cin >> n; vector<Point> a(n); int t9 = 1000000001; #ifdef pperm t9 = 100; #endif Tree* tree = NewNode(); for (Point& p : a) { cin >> p.x >> p.y; tree->Add(-t9, t9, p.y); } sort(all(a)); for (int i = 0; i < n;) { int j = i + 1; while (j < n && a[j].x == a[i].x) { ++j; } if (FindBest(-t9, t9, tree)) { bx = a[i].x; } for (;i < j; ++i) { tree->Remove(-t9, t9, a[i].y); } } cout << max << ' ' << bx << ' ' << by << endl; } }; int main(int argc, char* args[]) { #ifdef pperm ifstream cin("/home/pperm86/My/Codeforces/input.txt"); #endif ios_base::sync_with_stdio(false); cin.tie(NULL); #ifndef pperm srand(time(0)); #endif int T = 1; cin >> T; for (int iTest = 1; iTest <= T; ++iTest) { Solver solver{}; solver.Solve(cin, cout); } #ifdef pperm cout << clock() / static_cast<double>(CLOCKS_PER_SEC) << endl; #endif return 0; }
2046
D
For the Emperor!
In Ancient Rome, a plan to defeat the barbarians was developed, but for its implementation, each city must be informed about it. The northern part of the Roman Empire consists of $n$ cities connected by $m$ one-way roads. Initially, the $i$-th city has $a_i$ messengers, and each messenger can freely move between cities following the existing roads. A messenger can carry a copy of the plan with him and inform the cities he visits, and can make unlimited copies for other messengers in the city he is currently in. At the start, you will produce some number of plans and deliver them to messengers of your choice. Your goal is to make sure that every city is visited by a messenger with a plan. Find the smallest number of plans you need to produce originally so that the messengers can deliver them to every city, or determine that it is impossible to do so at all.
We will compress the graph into strongly connected components. Inside a component, each runner can move between any pair of vertices and can thus visit all of them. We will now solve the problem on the resulting directed acyclic graph. First, we check for the existence of an answer. Suppose all runners already know the winning plan, and we try to send them out so that each vertex is visited at least once. To do this, it is necessary to decompose the graph into paths such that: each vertex belongs to at least one path; no more than $a[i]$ paths start at each vertex (where $a[i]$ is the number of runners initially located at the vertex); paths may intersect. Take the source $s$ and the sink $t$; now a path can be represented as a sequence $s$, $u_1$, ..., $u_k$, $t$. We split each vertex $u$ into $u_{in}$ and $u_{out}$ and consider that a vertex $u$ belongs to a path iff the path uses at least one transition from $u_{in}$ to $u_{out}$. We draw the following edges: from $s$ to $u_{in}$ with capacity $cap = a[u]$; from $u_{in}$ to $u_{out}$ with $cap = inf$ and the condition that the flow through the edge satisfies $f \geq 1$; from $u_{out}$ to $t$ with $cap = inf$; from $u_{out}$ to $v_{in}$ with $cap = inf$ for all edges $(u, v)$ of the original graph. This is equivalent to finding a flow with lower-bound constraints in this graph. To handle the constraints, we create a dummy source $s'$ and sink $t'$, and for each edge $(u_{in}, u_{out})$ with flow bounds $1 \le f \le inf$, we make the following replacement, drawing edges: from $u_{in}$ to $u_{out}$ with $cap = inf - 1$; from $s'$ to $u_{out}$ with $cap = 1$; from $u_{in}$ to $t'$ with $cap = 1$; from $t$ to $s$ with $cap = inf$. Finding a flow that satisfies the constraints is equivalent to finding the maximum flow from $s'$ to $t'$, and if it equals the number of vertices, then the answer exists; otherwise, we can output $-1$ at this step. Now we minimize the number of runners who initially know the plan.
Note that: it makes no sense for runners to move until they know the plan; it makes no sense to pass the plan to more than one runner from one city. We take this into account in the construction. Instead of adding edges from the source to $u_{in}$, we do the following: create a vertex $u_{cnt}$ to control the number of runners and draw edges: from $s$ to $u_{cnt}$ with $cap = a[u]$; from $u_{cnt}$ to $u_{out}$ with $cap = a[u]$; from $u_{cnt}$ to $u_{in}$ with $cap = 1$, $cost = 1$. All other edges get zero cost. This is equivalent to ensuring that all $a[i]$ runners learn the plan, but if no one comes to our city, we personally tell exactly one runner out of this number, paying $cost=1$ for that person. The answer to the problem is the minimum cost of the maximum flow from $s'$ to $t'$ in this construction. The proof follows from the graph construction. MCMF runs in $O(n^2m^2)$ or $O(n^3m)$ on arbitrary graphs, but it is worth noting that the flow value $f$ is limited by $n$, so we have a solution in $O(fnm)$ using Ford-Bellman or $O(fm\log n)$ using Dijkstra's algorithm with potentials. Note that the resulting network has $3n$ vertices and $m + 7n$ edges. With careful implementation, MCMF fits well within the TL.
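To make the construction concrete, here is an illustrative sketch that only builds the edge list of the network described above for the DAG of SCCs. The node numbering and the helper name are assumptions of this sketch; it would still need an MCMF solver run from $s'$ to $t'$.

```python
INF = 10 ** 9  # "infinite" capacity for this sketch

def build_flow_edges(n, adj, a):
    """Edge list (u, v, cap, cost) for the network above. Per component i:
    in_i = 3i, out_i = 3i + 1, cnt_i = 3i + 2; then s, t, s', t'."""
    s, t, s2, t2 = 3 * n, 3 * n + 1, 3 * n + 2, 3 * n + 3
    edges = []
    for i in range(n):
        inn, out, cnt = 3 * i, 3 * i + 1, 3 * i + 2
        edges.append((s, cnt, a[i], 0))       # runners available in component i
        edges.append((cnt, inn, 1, 1))        # personally tell one runner: cost 1
        edges.append((cnt, out, a[i], 0))     # the rest pick the plan up locally
        edges.append((inn, out, INF - 1, 0))  # lower bound 1, rewritten as:
        edges.append((s2, out, 1, 0))         #   forced unit from dummy source
        edges.append((inn, t2, 1, 0))         #   forced unit into dummy sink
        edges.append((out, t, INF, 0))        # a path may end here
        for j in adj[i]:
            edges.append((out, 3 * j, INF, 0))  # DAG edge u_out -> v_in
    edges.append((t, s, INF, 0))              # close the circulation
    return edges, s2, t2
```

Counting edges confirms the editorial's $m + 7n$ figure (plus the single $t \to s$ edge).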
[ "flows", "graphs" ]
3,100
#define _CRT_SECURE_NO_WARNINGS #include<iostream> #include<fstream> #include<vector> #include<stack> #include<queue> #include<set> #include<map> #include<array> #include<unordered_set> #include<unordered_map> #include<cstring> #include<string> #include<memory> #include<iomanip> #include<cassert> #include<cmath> #include<random> #include<algorithm> #include<chrono> #pragma GCC optimize("O3,unroll-loops") #pragma GCC target("avx2,bmi,bmi2,lzcnt,popcnt") #define int long long #define ld long double #define endl ' ' using namespace std; constexpr int N = 500; vector<int> g[N], gr[N], gcmp[N]; int a[N], acmp[N], used[N], color[N], cur_color = 0; vector<int> order, cmp; void dfs1(int u) { for (int v : g[u]) { if (used[v] == 0) { used[v] = 1; dfs1(v); } } order.push_back(u); } void dfs2(int u) { for (int v : gr[u]) { if (used[v] == 0) { used[v] = 1; dfs2(v); } } color[u] = cur_color; } inline array<int, 3> getInd(int u) { return { 3 * u, 3 * u + 1, 3 * u + 2 }; } constexpr int INF = 0x3f3f3f3f; struct MCMF { struct rib { int b, u, c, f; size_t back; }; MCMF(int size) : n(size), g_mcmf(size) {}; int n; vector<vector<rib>> g_mcmf; void rebuild(int sz) { n = sz, g_mcmf.clear(); g_mcmf.resize(sz); } void add_rib(int a, int b, int u, int c) { rib r1 = { b, u, c, 0, g_mcmf[b].size() }; rib r2 = { a, 0, -c, 0, g_mcmf[a].size() }; g_mcmf[a].push_back(r1); g_mcmf[b].push_back(r2); } pair<int, int> get_flow(int s, int t, int maxflow = INF) { int flow = 0, cost = 0; while (flow < maxflow) { vector<int> id(n, 0); vector<int> d(n, INF); vector<int> q(n); vector<int> p(n); vector<size_t> p_rib(n); int qh = 0, qt = 0; q[qt++] = s; d[s] = 0; while (qh != qt) { int v = q[qh++]; id[v] = 2; if (qh == n) qh = 0; for (size_t i = 0; i < g_mcmf[v].size(); ++i) { rib& r = g_mcmf[v][i]; if (r.f < r.u && d[v] + r.c < d[r.b]) { d[r.b] = d[v] + r.c; if (id[r.b] == 0) { q[qt++] = r.b; if (qt == n) qt = 0; } else if (id[r.b] == 2) { if (--qh == -1) qh = n - 1; q[qh] = r.b; } id[r.b] = 1; p[r.b] = v; 
p_rib[r.b] = i; } } } if (d[t] == INF) break; int addflow = maxflow - flow; for (int v = t; v != s; v = p[v]) { int pv = p[v]; size_t pr = p_rib[v]; addflow = min(addflow, g_mcmf[pv][pr].u - g_mcmf[pv][pr].f); } for (int v = t; v != s; v = p[v]) { int pv = p[v]; size_t pr = p_rib[v], r = g_mcmf[pv][pr].back; g_mcmf[pv][pr].f += addflow; g_mcmf[v][r].f -= addflow; cost += g_mcmf[pv][pr].c * addflow; } flow += addflow; } return { flow, cost }; } }; void solve() { for (int i =0 ; i < N; i++) { g[i] = {}; gr[i] = {}; gcmp[i] = {}; } memset(a, 0, sizeof a); memset(acmp, 0, sizeof acmp); memset(used, 0, sizeof used); memset(color, 0, sizeof color); cur_color = 0; order = {}; cmp = {}; int n, m; cin >> n >> m; for (int i = 0; i < n; i++) { g[i].clear(); gr[i].clear(); } for (int i = 0; i < n; i++) cin >> a[i]; for (int i = 0; i < m; i++) { int u, v; cin >> u >> v; u--, v--; g[u].push_back(v); gr[v].push_back(u); } order.clear(); memset(used, 0, sizeof(used[0]) * n); for (int i = 0; i < n; i++) { if (used[i] == 0) { used[i] = 1; dfs1(i); } } memset(used, 0, sizeof(used[0]) * n); for (int i = 0; i < n; i++) { int u = order[n - i - 1]; if (used[u] == 0) { used[u] = 1; dfs2(u); cur_color++; } } // for (int i = 0; i < n; i++) cout << color[i] << " "; cout << endl; memset(acmp, 0, sizeof(acmp[0]) * cur_color); for (int i = 0; i < n; i++) { acmp[color[i]] += a[i]; for (int v : g[i]) { if (color[i] != color[v]) { gcmp[color[i]].push_back(color[v]); } } } for (int i = 0; i < cur_color; i++) { auto& e = gcmp[i]; sort(e.begin(), e.end()); e.erase(unique(e.begin(), e.end()), e.end()); // cout << i << " " << acmp[i] << ": "; // for (int v : e) cout << v << " "; cout << endl; } int s = 3 * cur_color, t = s + 1, so = t + 1, to = so + 1; MCMF gg(to + 1); for (int i = 0; i < cur_color; i++) { auto [uin, uout, ucnt] = getInd(i); gg.add_rib(s, ucnt, acmp[i], 0); gg.add_rib(ucnt, uin, 1, 1); gg.add_rib(ucnt, uout, INF, 0); gg.add_rib(uin, uout, INF, 0); gg.add_rib(uout, t, INF, 0); 
gg.add_rib(so, uout, 1, 0); gg.add_rib(uin, to, 1, 0); } gg.add_rib(t, s, INF, 0); for (int i = 0; i < cur_color; i++) { auto [uin, uout, ucnt] = getInd(i); for (int v : gcmp[i]) { auto [vin, vout, vcnt] = getInd(v); gg.add_rib(uout, vin, INF, 0); } } auto [flow, cost] = gg.get_flow(so, to); // cout << flow << " " << cost << endl; if (flow < cur_color) { cout << -1 << endl; return; } cout << cost << endl; } signed main() { // freopen("input.txt", "r", stdin); ios_base::sync_with_stdio(false); cin.tie(0); cout.tie(0); int q; cin >> q; while (q--) solve(); }
2046
E2
Cheops and a Contest (Hard Version)
\textbf{This is the hard version of the problem. The difference between the versions is that in this version, $m$ is arbitrary. You can hack only if you solved all versions of this problem.} There is a problem-solving competition in Ancient Egypt with $n$ participants, numbered from $1$ to $n$. Each participant comes from a certain city; the cities are numbered from $1$ to $m$. There is at least one participant from each city. The $i$-th participant has strength $a_i$, specialization $s_i$, and wisdom $b_i$, so that $b_i \ge a_i$. Each problem in the competition will have a difficulty $d$ and a unique topic $t$. The $i$-th participant will solve the problem if - $a_i \ge d$, i.e., their strength is not less than the problem's difficulty, or - $s_i = t$, and $b_i \ge d$, i.e., their specialization matches the problem's topic, and their wisdom is not less than the problem's difficulty. Cheops wants to choose the problems in such a way that each participant from city $i$ will solve strictly more problems than each participant from city $j$, for all $i < j$. Please find a set of at most $5n$ problems, where the \textbf{topics of all problems are distinct}, so that Cheops' will is satisfied, or state that it is impossible.
For convenience, we will speak of groups, each of which includes certain cities. The key condition is that each specialization can correspond to no more than one task. Thus, cities from different groups must have "practically sorted" strength values as the group order increases. Let $[l_i, r_i]$ denote the segment that includes all the strengths of the cities in group $i$, where $r_i$ is the maximum strength of a city from this group and $l_i$ is the minimum. If $l_i \le r_{i+2}$, it is impossible to build a contest, as there must be a difference of at least 2 tasks between the cities corresponding to $l_i$ and $r_{i+2}$, but they can differ by at most 1 task via specialization. After this, the problem reduces to considering pairs of neighboring groups and subsequently checking the task difficulties for compliance with the linear conditions of each group. Let's consider two cases. If $l_i \ge r_{i+1}$, it is sufficient to add a few tasks (specifically, just 2) with difficulty from the segment $[r_{i+1} + 1, l_i]$ to maintain order. Otherwise, each city in the $i$-th group that lies in the intersection must solve a task with its specialization. In this case, no task can have difficulty in the segment $[l_i + 1, r_{i+1}]$, as otherwise the city with the highest strength in group $i+1$ would solve at least as many tasks as the weakest city in group $i$, which we cannot allow. To maintain order with the other groups, it is sufficient to add 2 tasks of difficulty $r_{i+1} + 1$ (unlike the first case, since $l_i \le r_{i+2}$ cannot occur, we can immediately determine the appropriate task difficulty; these tasks can be arranged using the same algorithm as in the first case). Note that when adding these pairs of tasks, we need to choose unused specializations (we do not rely on the specialization in principle, but reusing one may cause collisions).
We conclude that we need to arrange tasks with a specified specialization that a city from the intersection of groups can solve, meaning the difficulty must not exceed the wisdom of that city, and also arrange 2 tasks of certain difficulties to maintain order between groups. The difficulties here arise from possible intersections with other groups when using $r_{i+1} + 1$ as the task difficulty, and from possible specialization collisions between cities, which may allow a city from a weaker group to solve a task that we did not intend for it, thus violating the strictness conditions between groups. Most of these problems cannot arise when $m=2$, so what has been said above is already a solution (up to minor adjustments) for the easy version of the problem. First, we decide how to choose the difficulty of tasks with a specific specialization. To do this, we prohibit using the task specialization corresponding to cities that form the second case (i.e., from the $(i+1)$-th group) with difficulty in $[l_i, b_j]$, where $j$ is the index of such a city. Also, as mentioned, we cannot have a task with difficulty in the segment $[l_i + 1, r_{i+1}]$ regardless of its topic. To determine the difficulty, it is sufficient to set it to the maximum possible value (equal to the wisdom) and then gradually decrease it until we reach an allowed difficulty. To resolve the problem from the first case, we place barriers of 2 tasks with difficulty equal to the strength of a certain city wherever possible. If an answer exists for the given configuration, we are guaranteed to construct it this way, using no more than $3 \cdot n$ tasks. After all calculations, we need to check whether the contest we constructed meets the problem's conditions and output -1 otherwise.
[ "greedy", "implementation" ]
3,500
/** * author: tourist * created: 01.12.2024 18:36:51 **/ #undef _GLIBCXX_DEBUG #include <bits/stdc++.h> using namespace std; #ifdef LOCAL #include "algo/debug.h" #else #define debug(...) 42 #endif int main() { ios::sync_with_stdio(false); cin.tie(nullptr); int tt; cin >> tt; while (tt--) { int n, m; cin >> n >> m; vector<int> a(n), b(n), c(n); set<int> s; for (int i = 0; i < n; i++) { cin >> a[i] >> b[i] >> c[i]; s.insert(c[i]); } vector<vector<int>> vs(m); for (int i = 0; i < m; i++) { int foo; cin >> foo; vs[i].resize(foo); for (int j = 0; j < foo; j++) { cin >> vs[i][j]; --vs[i][j]; } } const int inf = int(1.01e9); vector<int> min_a(m, inf); vector<int> max_a(m, -1); for (int i = 0; i < m; i++) { for (int j : vs[i]) { min_a[i] = min(min_a[i], a[j]); max_a[i] = max(max_a[i], a[j]); } } auto Unify = [&](vector<pair<int, int>>& bad) { sort(bad.begin(), bad.end()); int ptr = 0; for (int i = 1; i < int(bad.size()); i++) { if (bad[i].first <= bad[ptr].second + 1) { bad[ptr].second = max(bad[ptr].second, bad[i].second); } else { bad[++ptr] = bad[i]; } } bad.resize(ptr + 1); }; vector<pair<int, int>> bad; for (int id = 0; id < m - 1; id++) { if (min_a[id] < max_a[id + 1]) { bad.emplace_back(min_a[id] + 1, max_a[id + 1]); } } Unify(bad); vector<int> ctr(n); for (int i = 0; i < m; i++) { for (int x : vs[i]) { ctr[x] = i; } } vector<pair<int, int>> tasks; int unused = 0; vector<int> order(n); iota(order.begin(), order.end(), 0); sort(order.begin(), order.end(), [&](int i, int j) { return a[i] > a[j]; }); multiset<int> before, after; for (int i = 0; i < n; i++) { after.insert(ctr[i]); } { int beg = 0; while (beg < n) { int end = beg; while (end + 1 < n && a[order[end + 1]] == a[order[end]]) { end += 1; } for (int i = beg; i <= end; i++) { before.insert(ctr[order[i]]); after.erase(after.find(ctr[order[i]])); } if (!before.empty() && !after.empty() && *prev(before.end()) <= *after.begin()) { for (int i = 0; i < 2; i++) { do { unused += 1; } while (s.find(unused) != s.end()); 
tasks.emplace_back(a[order[end]], unused); } } beg = end + 1; } } map<int, int> add; for (int id = 0; id < m - 1; id++) { for (int i : vs[id]) { if (a[i] <= max_a[id + 1]) { if (add.find(c[i]) == add.end()) { add[c[i]] = b[i]; } else { add[c[i]] = min(add[c[i]], b[i]); } } } } map<int, vector<pair<int, int>>> kill; for (int id = 1; id < m; id++) { for (int i : vs[id]) { if (a[i] >= min_a[id - 1]) { kill[c[i]].push_back({min_a[id - 1] + 1, b[i]}); } } } for (auto& [type, x] : add) { auto& k = kill[type]; Unify(k); int dif = x; while (true) { bool changed = false; { auto it = lower_bound(bad.begin(), bad.end(), make_pair(dif + 1, -1)); if (it != bad.begin()) { it = prev(it); if (it->second >= dif) { dif = it->first - 1; changed = true; } } } { auto it = lower_bound(k.begin(), k.end(), make_pair(dif + 1, -1)); if (it != k.begin()) { it = prev(it); if (it->second >= dif) { dif = it->first - 1; changed = true; } } } if (!changed) { break; } } tasks.emplace_back(dif, type); } debug(tasks); vector<int> all; map<int, int> spec; for (auto& [x, y] : tasks) { all.push_back(x); assert(spec.find(y) == spec.end()); spec[y] = x; } sort(all.begin(), all.end()); vector<int> solved(n); for (int i = 0; i < n; i++) { solved[i] = int(upper_bound(all.begin(), all.end(), a[i]) - all.begin()); if (spec.find(c[i]) != spec.end() && spec[c[i]] > a[i] && spec[c[i]] <= b[i]) { solved[i] += 1; } } vector<int> min_solved(m, inf); vector<int> max_solved(m, -1); for (int i = 0; i < m; i++) { for (int j : vs[i]) { min_solved[i] = min(min_solved[i], solved[j]); max_solved[i] = max(max_solved[i], solved[j]); } } bool win = true; for (int id = 0; id < m - 1; id++) { if (min_solved[id] <= max_solved[id + 1]) { win = false; break; } } if (win) { cout << tasks.size() << ' '; for (auto& [x, y] : tasks) { cout << x << " " << y << ' '; } } else { cout << -1 << ' '; } } return 0; }
2046
F1
Yandex Cuneiform (Easy Version)
\textbf{This is the easy version of the problem. The difference between the versions is that in this version, there are no question marks. You can hack only if you solved all versions of this problem.} For a long time, no one could decipher Sumerian cuneiform. However, it has finally succumbed to pressure! Today, you have the chance to decipher Yandex cuneiform. Yandex cuneiform is defined by the following rules: - An empty string is a Yandex cuneiform. - If you insert exactly one copy of each of the three letters 'Y', 'D', and 'X' into a Yandex cuneiform in such a way that no two adjacent letters become equal after the operation, you obtain a Yandex cuneiform. - If a string can't be obtained using the above rules, it is not a Yandex cuneiform. You are given a template. A template is a string consisting of the characters 'Y', 'D', 'X', and '?'. You need to check whether there exists a way to replace each question mark with 'Y', 'D', or 'X' to obtain a Yandex cuneiform, and if it exists, output any of the matching options, as well as a sequence of insertion operations to obtain the resulting cuneiform. In this version of the problem, there are \textbf{no question marks} in the template.
It can be noted that if the string contains the correct number of symbols 'Y', 'D', 'X' and does not have two consecutive identical symbols, then it can be obtained using the given operations. To demonstrate this, we will provide an algorithm that constructs a sequence of operations leading to the desired string. We will start from the end (from the string we want to obtain to the empty string), gradually removing symbols. Look at the current string; without loss of generality, let its first symbol be 'Y'. Then the string must contain the substring 'DX' and/or the substring 'XD'. Indeed, if neither 'DX' nor 'XD' occurs, then between any pair of adjacent symbols not equal to 'Y' there must be at least one 'Y'. But a balanced string of length $3n$ contains $2n$ symbols not equal to 'Y' and only $n$ symbols 'Y', which can separate at most $n+1$ of them, and $2n>n+1$ whenever $n>1$ (for $n=1$ the string is 'YDX' or 'YXD', which itself contains such a substring). Thus, the string has the form 'Y...ADXB...'. If A $\neq$ B, we can remove the symbols 'Y', 'D', 'X' and nothing will break. If A $=$ B, then A $=$ B $=$ 'Y', since A $\neq$ 'D' and B $\neq$ 'X'. Then the string has the form 'Y...YDXY...', and we can remove the substring 'YDX' and still have an unbroken string 'Y...Y...'.
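The criterion stated at the start of the editorial can be sketched as a quick validity check (a minimal illustration only, not the full reconstruction of the insertion sequence; the function name is ours):

```python
def is_yandex_cuneiform(s: str) -> bool:
    # The editorial's criterion: length divisible by 3, equal counts of
    # 'Y', 'D', 'X', and no two adjacent symbols equal.
    if len(s) % 3 != 0:
        return False
    if not all(s.count(c) == len(s) // 3 for c in "YDX"):
        return False
    return all(a != b for a, b in zip(s, s[1:]))
```

Per the editorial's argument, any string passing this check can be disassembled (and hence assembled) by repeatedly removing one 'Y', 'D', 'X' triple.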
[ "constructive algorithms", "data structures", "greedy" ]
3,300
null
2046
F2
Yandex Cuneiform (Hard Version)
\textbf{This is the hard version of the problem. The difference between the versions is that in this version, there is no restriction on the number of question marks. You can hack only if you solved all versions of this problem.} For a long time, no one could decipher Sumerian cuneiform. However, it has finally succumbed to pressure! Today, you have the chance to decipher Yandex cuneiform. Yandex cuneiform is defined by the following rules: - An empty string is a Yandex cuneiform. - If you insert exactly one copy of each of the three letters 'Y', 'D', and 'X' into a Yandex cuneiform in such a way that no two adjacent letters become equal after the operation, you obtain a Yandex cuneiform. - If a string can't be obtained using the above rules, it is not a Yandex cuneiform. You are given a template. A template is a string consisting of the characters 'Y', 'D', 'X', and '?'. You need to check whether there exists a way to replace each question mark with 'Y', 'D', or 'X' to obtain a Yandex cuneiform, and if it exists, output any of the matching options, as well as a sequence of insertion operations to obtain the resulting cuneiform. In this version of the problem, the number of question marks in the template can be arbitrary.
We will isolate substrings from the string that consist entirely of '?'. For each, we will denote its length as $len_i$ and the two neighboring symbols (standing to the left and right) as $l_i$, $r_i$. (If the right and/or left symbol is absent, we will take it as '#', a placeholder symbol that matches none of 'Y', 'D', 'X'.) We will write trivial constraints on the number of available symbols of each type on the segment. It is obvious that the number of symbols of each type cannot exceed $\frac{len_i+1}{2}$: at best, we can place one in every other position. But if, for example, we had a symbol 'Y' on the left, the number of positions where we can place 'Y' decreases by $1$. In total, we can place no more than $\frac{len_i+1-[l_i=b]-[r_i=b]}{2}$ symbols of type $b$, where $[\cdot]$ denotes the Iverson bracket. We will write such constraints for each type of symbol and denote them as $Y, D, X$ respectively. It turns out that we can arrange $y$ symbols 'Y', $d$ symbols 'D' and $x$ symbols 'X' if and only if $y+d+x=len_i$ and $0\leq y \leq Y, 0 \leq d \leq D, 0 \leq x \leq X$. This can be proven by induction or checked with stress tests on small values. From this, we conclude that in order to be able to arrange $y, d, x$ symbols of each type, the inequalities above must hold. From the equality $y+d+x=len_i$, we express $d=len_i - (x+y)$. This results in three constraints: $0\leq x \leq X, 0 \leq y \leq Y, 0 \leq len_i - (x+y) \leq D$. The first two form a rectangle, and the third cuts off two corners at a $45$ degree angle. This is a convex polygon. Thus, if for each segment of '?' we write such a polygon and sum them using the Minkowski sum, we will obtain constraints for the entire string. Since the Minkowski sum (after merging collinear consecutive segments) has at most $6$ vertices, we can write a greedy algorithm that sequentially tries to match the required symbol to each '?'. The final asymptotic complexity is $O(n)$ with a small constant.
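The per-segment capacity bound can be sketched as follows (a minimal illustration of the counting argument only; it does not build the polygons or the Minkowski sum, and the function name is ours):

```python
def segment_caps(length: int, left: str, right: str) -> dict:
    # For a run of '?' of the given length with neighboring symbols
    # left/right ('#' if absent), at most
    # (length + 1 - [left == b] - [right == b]) // 2
    # copies of symbol b fit without creating two equal adjacent symbols.
    return {b: (length + 1 - (left == b) - (right == b)) // 2 for b in "YDX"}
```

For example, a run of three '?' with 'Y' on the left admits at most one more 'Y' but two 'D' and two 'X'.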
[ "constructive algorithms", "data structures", "greedy", "implementation" ]
3,500
#pragma GCC optimize("Ofast") #include <iostream> #include <cmath> #include <cstdint> #include <vector> #include <string> #include <iomanip> #include <set> #include <map> #include <unordered_map> #include <unordered_set> #include <algorithm> #include <functional> #include <queue> #include <fstream> #include <random> //#include <numbers> #include <optional> #include <deque> #include <sstream> #include <list> #include <chrono> #include <thread> #include <cassert> using namespace std; using i64 = int64_t; using ui64 = uint64_t; #define YN(b) if (b) cout << "YES "; else cout << "NO "; template<typename T> istream& operator>>(istream& is, vector<T>& v) { for (auto &e: v) { is >> e; } return is; } template<typename T> ostream& operator<<(ostream& os, vector<T>& v) { for (auto &e: v) { os << e << " "; } return os; } template<typename T, typename V> istream& operator>>(istream& is, pair<T, V>& v) { return is >> v.first >> v.second; } template<typename T, typename V> ostream& operator<<(ostream& os, pair<T, V>& v) { return os << v.first << " " << v.second; } const int64_t md = 1e9+7; int dx[4] = {-1, 0, 1, 0}; inline int64_t sqr(int64_t x) { return x * x; } inline int popcount(int64_t x) { int c = 0; while (x) c += x & 1, x >>= 1; return c; } pair<int64_t, int64_t> inter(pair<int64_t, int64_t> a, pair<int64_t, int64_t> b) { return {max(a.first, b.first), min(a.second, b.second)}; } struct pt { using T = int; using V = int64_t; T x = 0, y = 0; pt() = default; pt(T x, T y) : x(x), y(y) {} inline pt& operator-=(const pt& other) { x -= other.x; y -= other.y; return *this; } inline pt operator-(const pt& other) const { pt copy(*this); copy -= other; return copy; } inline pt& operator+=(const pt& other) { x += other.x; y += other.y; return *this; } inline pt operator+(const pt& other) const { pt copy(*this); copy += other; return copy; } V operator^(const pt& other) const { return (V)x * other.y - (V)y * other.x; } V operator*(const pt& other) const { return (V)x * other.x + (V)y 
* other.y; } void print() const { cerr << x << " " << y << " "; } bool operator==(const pt&) const = default; }; inline bool in_triangle(const pt& a, const pt& b, const pt& c, const pt& point) { int64_t s1 = abs((b - a) ^ (c - a)); int64_t s2 = abs((a - point) ^ (b - point)) + abs((b - point) ^ (c - point)) + abs((c - point) ^ (a - point)); return s1 == s2; } struct mi { size_t i = 0; size_t ni = 1; const size_t m; mi(size_t m) : m(m) {} void add() { ++i; if (++ni == m) ni = 0; } }; int64_t orientation(pt a, pt b, pt c) { return -((int64_t)a.x * (b.y - c.y) + (int64_t)b.x * (c.y - a.y) + (int64_t)c.x * (a.y - b.y)); } inline bool comp(pt a, pt b) { auto cp = a ^ b; if (!cp) { return a * a < b * b; } return (pt::V)a.x * b.x < 0 ? (a.x > b.x) : cp > 0; } struct NMP { array<pt, 13> pts; int len = 0; template<typename... Args> void emplace_back(Args&& ...p) { new (&pts[len++]) pt(p...); } void push_back(const pt& x) { pts[len++] = x; } pt& operator[](size_t x) { return pts[x]; } size_t size() const { return len; } const pt& operator[](size_t x) const { return pts[x]; } pt* begin() { return &pts[0]; } pt* end() { return &pts[len]; } void resize(size_t x) { len = x; } }; struct poly { using P = pt; NMP pts; void add(const P& p) { pts.push_back(p); } template<typename... 
Args> void emplace(Args&& ...p) { pts.emplace_back(p...); } void sort() { size_t min_id = 0; for (size_t i = 1; i < pts.size(); ++i) { if (pts[i].y < pts[min_id].y || (pts[i].y == pts[min_id].y && pts[i].x < pts[min_id].x)) { min_id = i; } } std::rotate(pts.begin(), pts.begin() + min_id, pts.end()); auto p0 = pts[0]; std::sort(pts.begin(), pts.end(), [&](const P& a, const P& b) { auto o = orientation(p0, a, b); if (!o) return (int64_t)(p0.x-a.x)*(p0.x-a.x) + (int64_t)(p0.y-a.y)*(p0.y-a.y) < (int64_t)(p0.x-b.x)*(p0.x-b.x) + (int64_t)(p0.y-b.y)*(p0.y-b.y); return o < 0; }); } void no_coll() { if (pts.size() < 3) { return; } size_t ptr = 1; for (size_t j = 2; j < pts.size(); ++j) { if ((pts[ptr] - pts[ptr - 1]) ^ (pts[j] - pts[ptr])) { pts[++ptr] = pts[j]; continue; } pts[ptr] = pts[j]; } pts.resize(ptr + 1); } poly operator+(const poly& other) const { if (pts.size() == 1) { poly c(other); for (auto& p: c.pts) p += pts[0]; return c; } if (other.pts.size() < pts.size()) { return other + *this; } // size at least 2s poly res; mi i(pts.size()), j(other.pts.size()); while (i.i < pts.size() && j.i < other.pts.size()) { res.emplace(pts[i.i] + other.pts[j.i]); auto a = (pts[i.ni] - pts[i.i]); auto b = (other.pts[j.ni] - other.pts[j.i]); auto cp = a ^ b; if (cp < 0) { j.add(); } else if (cp > 0) { i.add(); } else if ((P::V)a.y * b.y >= 0) { i.add(); j.add(); } else if (a.y > b.y) { i.add(); } else { j.add(); } } while (i.i < pts.size()) { res.emplace(pts[i.i] + other.pts[0]); i.add(); } while (j.i < other.pts.size()) { res.emplace(pts[0] + other.pts[j.i]); j.add(); } res.no_coll(); return res; } void print() { for (auto &e: pts) { cout << "(" << e.x << " " << e.y << ") "; } cout << " "; } bool in(const pt& p) const { if (pts.size() == 1) { return pts[0] == p; } if (pts.size() == 2) { return ((pts[0] - p) ^ (pts[1] - p)) == 0 && min(pts[0].x, pts[1].x) <= p.x && p.x <= max(pts[0].x, pts[1].x) && min(pts[0].y, pts[1].y) <= p.y && p.y <= max(pts[0].y, pts[1].y); } for (int i = 
2; i < pts.size(); ++i) { if (in_triangle(pts[i], pts[i - 1], pts[0], p)) { return true; } } return false; } }; struct co { array<int, 3> cst {}; poly p; co() { p.emplace(0, 0); } co(int l, int a, int b) { cst = { max(0, 1 + l - (a == 0) - (b == 0)) / 2, max(0, 1 + l - (a == 1) - (b == 1)) / 2, max(0, 1 + l - (a == 2) - (b == 2)) / 2 }; p.emplace(min(cst[0], l - cst[2]), max(0, l - cst[0] - cst[2])); p.emplace(min(cst[0], l), max(0, l - cst[0] - cst[2])); p.emplace(cst[0], min(cst[1], l - cst[0])); p.emplace(min(l - cst[1], cst[0]), cst[1]); p.emplace(max(0, l - cst[1] - cst[2]), min(l, cst[1])); p.emplace(max(0, l - cst[1] - cst[2]), min(cst[1], l - cst[2])); p.sort(); p.no_coll(); } co operator+(const co& rhs) { co res; res.p = p + rhs.p; for (int j = 0; j < 3; ++j) res.cst[j] = cst[j] + rhs.cst[j]; return res; } bool ok(int x, int y, int z) { return x <= cst[0] && y <= cst[1] && z <= cst[2]; } bool in(int x, int y, int z) { return ok(x, y, z) && p.in(pt(x, y)); } }; char idx[128]{}; string ydx = "YDX"; struct fenwick { fenwick(int x) : t(x), n(x) {} vector<int> t; int n; int sum (int r) { int result = 0; for (; r >= 0; r = (r & (r+1)) - 1) result += t[r]; return result; } void inc (int i, int delta) { for (; i < n; i = (i | (i+1))) t[i] += delta; } }; using S = std::list<pair<short, int>>; struct cmp { bool operator()(S::iterator a, S::iterator b) const { return a->second < b->second; }; }; auto encode1(auto a, auto b) { return a * 256 + b; }; auto encode(auto it) { auto prev = it++; return encode1(prev->first, it->first); }; auto erase_char(auto& curr_pos, S& str, fenwick& alive, auto& pos, auto it) { auto curr = it; auto prev = --it; it = curr; auto next = ++it; if (next != str.end()) { if (!pos[encode(curr)].count(curr)) { exit(-1); } pos[encode(curr)].erase(curr); } if (prev != str.end()) { if (!pos[encode(prev)].count(prev)) { exit(-1); } pos[encode(prev)].erase(prev); } curr_pos.emplace_back(curr->first, alive.sum(curr->second) - 1); 
alive.inc(curr->second, -1); str.erase(curr); if (next != str.end() && prev != str.end()) { auto p = prev; ++p; if (p != next) { exit(-1); } pos[encode(prev)].emplace(prev); } }; bool is_ok(S::iterator x, S::iterator end) { auto curr = x; auto prev = --x; auto next = ++curr; if (prev != end && next != end) { return prev->first != next->first; } return true; } auto got(auto& curr_pos, auto& str, auto& alive, auto& pos, vector<S::iterator> t) { for (auto &e: t) { erase_char(curr_pos, str, alive, pos, e); } }; auto relax(auto& curr_pos, auto& str, auto& alive, auto& pos, short sym, short co) { auto &v = pos[co]; auto it = *v.begin(); auto A = it; auto prev = --it; it = A; auto B = ++it; auto next = ++it; if (prev != str.end() && next != str.end() && prev->first == next->first) { if (prev->first != sym) { exit(-1); } got(curr_pos, str, alive, pos, {prev, A, B}); return false; } got(curr_pos, str, alive, pos, {A, B}); return true; }; vector<array<pair<char, int>, 3>> solve_str(const string& s) { S str; fenwick alive(s.size()); for (int i = 0; i < s.size(); ++i) { str.emplace_back(s[i], i); alive.inc(i, 1); } auto it = str.begin(); map<int, set<decltype(it), cmp>> pos; while (it != str.end()) { auto prev = it++; if (it == str.end()) break; pos[encode(prev)].emplace(prev); } vector<pair<char, int>> curr_pos; vector<decltype(it)> curr_t; vector<array<pair<char, int>, 3>> res; while (str.size()) { it = str.begin(); auto curr = it++; if (it == str.end()) { exit(-1); } short code = curr->first; erase_char(curr_pos, str, alive, pos, curr); array<int, 2> candidates; if (code == 'Y') { candidates[0] = encode1('D', 'X'); candidates[1] = encode1('X', 'D'); } else if (code == 'X') { candidates[0] = encode1('D', 'Y'); candidates[1] = encode1('Y', 'D'); } else { candidates[0] = encode1('Y', 'X'); candidates[1] = encode1('X', 'Y'); } bool ok = 0; while (!ok) { for (auto &c: candidates) { if (!pos[c].size()) { continue; } if (relax(curr_pos, str, alive, pos, code, c)) { ok = 1; 
res.emplace_back(); res.back()[0] = curr_pos[2]; res.back()[1] = curr_pos[1]; res.back()[2] = curr_pos[0]; curr_pos.clear(); break; } // A...ABCA, ABC removed res.emplace_back(); for (int j = 0; j < 3; ++j) { res.back()[j] = curr_pos.back(); // removed res.back()[j].second += 1; curr_pos.pop_back(); } break; } } } reverse(res.begin(), res.end()); return res; } bool check(const string& s) { array<int, 3> cnt{}; char prev = -1; for (auto &e: s) { if (e == '?') { return false; } if (e != 'Y' && e != 'D' && e != 'X') { return false; } if (e == prev) return false; prev = e; ++cnt[idx[e]]; } return cnt[0] == s.size() / 3 && cnt[1] == s.size() / 3 && cnt[2] == s.size() / 3; } optional<string> solve_fast1(string v) { auto sv = v; int N = v.size() / 3; array<int, 3> rem{N, N, N}; for (auto &e: v) { if (e == '?') continue; --rem[idx[e]]; } if (rem[0] < 0 || rem[1] < 0 || rem[2] < 0) return {}; vector<co> mink; vector<pair<int, int>> segs; for (int i = 0; i < v.size(); ++i) { if (v[i] != '?') continue; int j = i + 1; while (j < v.size() && v[j] == '?') ++j; mink.emplace_back(j - i, i > 0 ? idx[v[i - 1]] : 3, j < v.size() ? idx[v[j]] : 3); segs.emplace_back(i, j - 1); i = j - 1; } reverse(mink.begin(), mink.end()); reverse(segs.begin(), segs.end()); for (int i = 1; i < mink.size(); ++i) { mink[i] = mink[i - 1] + mink[i]; } if (mink.size()) { mink.pop_back(); } while (segs.size()) { auto seg = segs.back(); segs.pop_back(); auto mk = mink.size() ? mink.back() : co(); if (mink.size()) { mink.pop_back(); } char prev = idx[seg.first ? v[seg.first - 1] : '?']; char next_r = idx[seg.second + 1 < v.size() ? v[seg.second + 1] : '?']; while (seg.first <= seg.second) { char next = seg.first == seg.second ? next_r : -1; bool found = 0; for (int j = 0; j < 3; ++j) { if (prev == j || next == j || !rem[j]) { continue; } --rem[j]; auto cs = co(seg.second - seg.first, j, seg.second + 1 < v.size() ? 
idx[v[seg.second + 1]] : 3); auto vv = mk + cs; if (!vv.in(rem[0], rem[1], rem[2])) { ++rem[j]; continue; } v[seg.first++] = ydx[j]; prev = j; found = 1; break; } if (!found) { return {}; } } } return v; } optional<pair<vector<array<pair<char, int>, 3>>, string>> solve(string s) { auto ww = solve_fast1(s); if (!ww || !check(*ww)) { return {}; } return pair{solve_str(*ww), *ww}; } signed main() { ios::sync_with_stdio(0); cin.tie(0); cout.tie(0); idx['Y'] = 0; idx['D'] = 1; idx['X'] = 2; idx['?'] = 99; int t; cin >> t; while (t--) { string s; cin >> s; auto r = solve(s); if (r) { cout << "YES " << r->second << " "; for (auto &x: r->first) { for (auto &e: x) { cout << e.first << " " << e.second << " "; } cout << " "; } } else { cout << "NO "; } } }
2047
A
Alyona and a Square Jigsaw Puzzle
Alyona assembles an unusual square Jigsaw Puzzle. She does so in $n$ days in the following manner: - On the first day, she starts by placing the central piece in the center of the table. - On each day after the first one, she places a certain number of pieces around the central piece in clockwise order, always finishing each square layer completely before starting a new one. For example, she places the first $14$ pieces in the following order: \begin{center} {\small The colors denote the layers. The third layer is still unfinished.} \end{center} Alyona is happy if at the end of the day the assembled part of the puzzle does not have any started but unfinished layers. Given the number of pieces she assembles on each day, find the number of days Alyona is happy on.
Alyona is happy when there are no unfinished layers - that is, in front of her is a perfect square with odd side length. Since the order of pieces is fixed, it is enough to keep track of the total current size $s$ of the puzzle, and check after each day whether $s$ is the square of an odd number. The easiest way to do that is to create an additional array containing $1^2, 3^2, 5^2, \ldots, 99^2$, and check after each day whether $s$ is in this array.
[ "implementation", "math" ]
800
NT = int(input())
sqs = set()
k = 1
while k * k <= 100 * 1000:
    sqs.add(k * k)
    k += 2
for T in range(NT):
    n = int(input())
    a = list(map(int, input().split()))
    answer = 0
    cursum = 0
    for t in a:
        cursum += t
        if cursum in sqs:
            answer += 1
    print(answer)
2047
B
Replace Character
You're given a string $s$ of length $n$, consisting of only lowercase English letters. You must do the following operation exactly once: - Choose any two indices $i$ and $j$ ($1 \le i, j \le n$). You can choose $i = j$. - Set $s_i := s_j$. You need to minimize the number of distinct permutations$^\dagger$ of $s$. Output any string with the smallest number of distinct permutations after performing \textbf{exactly one} operation. $^\dagger$ A permutation of the string is an arrangement of its characters into any order. For example, "bac" is a permutation of "abc" but "bcc" is not.
The number of distinct permutations of $s$ equals $\frac{n!}{c_1!\,c_2!\cdots c_{26}!}$, where $c_j$ is the number of occurrences of the $j$-th letter, so we want to maximize the product of factorials of the counts. Find the character which appears the lowest number of times - if tied, take the earlier character in the alphabet. Find the character which appears the highest number of times - if tied, take the later character in the alphabet. Then, replace any of the lowest-occurrence characters with the highest-occurrence character: moving one occurrence from the rarest letter to the most frequent one (counts $a$ and $b$) multiplies the denominator by $\frac{b+1}{a}$, which is largest exactly for this choice.
[ "brute force", "combinatorics", "greedy", "strings" ]
900
#include <bits/stdc++.h>
using namespace std;

int main() {
    cin.tie(0)->sync_with_stdio(0);
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        string s;
        cin >> s;
        vector<int> occ(26);
        for (int i = 0; i < n; i++) occ[s[i] - 'a'] += 1;
        pair<pair<int, char>, int> low, high;
        low = high = {{occ[s[0] - 'a'], s[0]}, 0};
        for (int i = 1; i < n; i++) {
            low = min(low, {{occ[s[i] - 'a'], s[i]}, i});
            high = max(high, {{occ[s[i] - 'a'], s[i]}, i});
        }
        s[low.second] = s[high.second];
        cout << s << " ";
    }
    return 0;
}
2048
A
Kevin and Combination Lock
Kevin is trapped in Lakeside Village by Grace. At the exit of the village, there is a combination lock that can only be unlocked if Kevin solves it. The combination lock starts with an integer $ x $. Kevin can perform one of the following two operations zero or more times: - If $ x \neq 33 $, he can select two consecutive digits $ 3 $ from $ x $ and remove them simultaneously. For example, if $ x = 13\,323 $, he can remove the second and third $ 3 $, changing $ x $ to $ 123 $. - If $ x \geq 33 $, he can change $ x $ to $ x - 33 $. For example, if $ x = 99 $, he can choose this operation to change $ x $ to $ 99 - 33 = 66 $. When the value of $ x $ on the combination lock becomes $ 0 $, Kevin can unlock the lock and escape from Lakeside Village. Please determine whether it is possible for Kevin to unlock the combination lock and escape.
When we transform $\overline{x33y}$ into $\overline{xy}$ (where $x$ and $y$ are decimal numbers), the actual value changes from $10^{p+2} \cdot x + 33 \cdot 10^p + y$ to $10^p \cdot x + y$. The decrease is $99 \cdot 10^p \cdot x + 33 \cdot 10^p$. It is easy to see that $33 \mid (99 \cdot 10^p \cdot x + 33 \cdot 10^p)$. Therefore, we can replace the operation of removing two consecutive $3$s with a series of $-33$ operations. Hence, we only need to determine whether $x$ can be reduced to $0$ using a series of $-33$ operations, which is equivalent to checking whether $x \bmod 33$ equals zero.
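Since both operations preserve the value modulo $33$, the whole problem reduces to a one-line check; a minimal sketch (the function name is ours):

```python
def can_unlock(x: int) -> bool:
    # Removing a "33" substring decreases x by 99 * 10^p * x' + 33 * 10^p,
    # which is divisible by 33, so both operations preserve x mod 33.
    # x can reach 0 iff it is divisible by 33.
    return x % 33 == 0
```

The invariance can be seen on the statement's example: $13\,323$ and $123$ leave the same remainder modulo $33$.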
[ "brute force", "greedy", "implementation", "math", "number theory" ]
800
null
2048
B
Kevin and Permutation
Kevin is a master of permutation-related problems. You are taking a walk with Kevin in Darkwoods, and during your leisure time, he wants to ask you the following question. Given two positive integers $ n $ and $ k $, construct a permutation$^{\text{∗}}$ $ p $ of length $ n $ to minimize the sum of the minimum values of all subarrays$^{\text{†}}$ of length $ k $. Formally, you need to minimize $$ \sum_{i=1}^{n-k+1}\left( \min_{j=i}^{i+k-1} p_j\right). $$ \begin{footnotesize} $^{\text{∗}}$A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array). $^{\text{†}}$An array $a$ is a subarray of an array $b$ if $a$ can be obtained from $b$ by the deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end. Two subarrays are considered different if the sets of \textbf{positions} of the deleted elements are different. \end{footnotesize}
In the entire permutation, at most $k$ of the length-$k$ subarrays can contain $1$. Similarly, at most $k$ of them can contain $2, 3, \ldots$. To let the smallest values serve as the minimum of as many subarrays as possible, we use the following construction: $p_k=1,p_{2k}=2,\dots,p_{\lfloor{\frac{n}{k}}\rfloor\cdot k}=\left\lfloor\frac{n}{k}\right\rfloor$. For the remaining positions, we can fill them arbitrarily with all values from $\lfloor \frac{n}{k} \rfloor + 1$ to $n$. It is easy to prove that this construction minimizes the answer.
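The construction can be sketched as follows (a minimal illustration; the function name and the particular order of the remaining values are ours):

```python
def build_permutation(n: int, k: int) -> list:
    # Place value v at position v*k (1-indexed) for v = 1..n//k, so each
    # small value v is the minimum of exactly k windows of length k; fill
    # the remaining positions with the larger values in any order.
    p = [0] * (n + 1)          # 1-indexed; p[0] unused
    small = n // k
    for v in range(1, small + 1):
        p[v * k] = v
    rest = iter(range(small + 1, n + 1))
    for i in range(1, n + 1):
        if p[i] == 0:
            p[i] = next(rest)
    return p[1:]
```

For $n=6$, $k=2$ this yields, e.g., $[4,1,5,2,6,3]$, whose window minima sum to $1+1+2+2+3=9$, matching the lower bound of $k$ windows per small value.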
[ "constructive algorithms", "greedy" ]
900
null
2048
C
Kevin and Binary Strings
Kevin discovered a binary string $s$ that \textbf{starts with 1} in the river at Moonlit River Park and handed it over to you. Your task is to select two non-empty substrings$^{\text{∗}}$ of $s$ (which can be overlapped) to maximize the XOR value of these two substrings. The XOR of two binary strings $a$ and $b$ is defined as the result of the $\oplus$ operation applied to the two numbers obtained by interpreting $a$ and $b$ as binary numbers, with the leftmost bit representing the highest value. Here, $\oplus$ denotes the bitwise XOR operation. The strings you choose may have leading zeros. \begin{footnotesize} $^{\text{∗}}$A string $a$ is a substring of a string $b$ if $a$ can be obtained from $b$ by the deletion of several (possibly, zero or all) characters from the beginning and several (possibly, zero or all) characters from the end. \end{footnotesize}
To maximize the XOR of the two substrings, we aim to maximize the number of binary digits in the XOR result. To achieve this, the substring $[1,n]$ must always be selected. We may assume the first character of the other substring is $1$; if it is not, we can remove its leading zeros. Next, find the position of the first $0$ in the string from left to right. We want this position to be flipped to $1$, while ensuring that the $1$s earlier in the string are not changed to $0$s. Therefore, let the position of the first $0$ be $p$. The length of the other substring must be $n-p+1$. By enumerating the starting position of the other substring and calculating the XOR of the two substrings linearly, we can take the maximum value. The time complexity of this approach is $O(n^2)$. If the entire string consists only of $1$s, selecting $[1,n]$ and $[1,1]$ can be proven to yield the maximum XOR among all possible choices. Interesting fact: this problem can actually be solved in $O(n)$ time. Specifically, observe that the other substring needs to satisfy the following conditions: its length is $n-p+1$, and its first character is $1$. Thus, its starting position must be less than $p$. This implies that the length of the prefix of $1$s in the other substring can be chosen from the range $[1, p-1]$. We aim to flip the first segment of $0$s in the original string to $1$s, while ensuring that the $1$ immediately after this segment of $0$s remains unchanged. Let the length of the first segment of $0$s be $q$. Then, the length of the prefix of $1$s in the other substring must be $\min(p-1, q)$, and the starting position can be determined efficiently. When preparing the contest and selecting problems, we determined that the $O(n)$ solution would be too difficult for a Problem C. Therefore, the problem was designed with an $O(n^2)$ data range to make it more accessible.
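The $O(n^2)$ procedure can be sketched as follows (Python big integers stand in for binary-string arithmetic; the function name is ours):

```python
def max_xor(s: str) -> int:
    # One substring is always the whole string [1, n]; the other has
    # length n - p + 1 (p = 1-indexed position of the first '0') and
    # starts at one of the positions before p.
    n = len(s)
    full = int(s, 2)
    p = s.find('0')                  # 0-indexed position of the first '0'
    if p == -1:                      # all ones: pair [1, n] with [1, 1]
        return full ^ 1
    m = n - p                        # required length of the other substring
    return max(full ^ int(s[i:i + m], 2) for i in range(p))
```

For example, on "11000" the best choice pairs the whole string with "110", giving $11000_2 \oplus 00110_2 = 11110_2 = 30$.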
[ "bitmasks", "brute force", "greedy", "implementation", "strings" ]
1,200
null
2048
D
Kevin and Competition Memories
Kevin used to get into Rio's Memories, and in Rio's Memories, a series of contests was once held. Kevin remembers all the participants and all the contest problems from that time, but he has forgotten the specific rounds, the distribution of problems, and the exact rankings. There are $ m $ problems in total, with the $ i $-th problem having a difficulty of $ b_i $. Let each contest consist of $ k $ problems, resulting in a total of $ \lfloor \frac{m}{k} \rfloor $ contests. This means that you select exactly $ \lfloor \frac{m}{k} \rfloor \cdot k $ problems for the contests in any combination you want, with each problem being selected at most once, and the remaining $m\bmod k$ problems are left unused. For example, if $m = 17$ and $k = 3$, you should create exactly $5$ contests consisting of $3$ problems each, and exactly $2$ problems will be left unused. There are $ n $ participants in the contests, with Kevin being the $1$-st participant. The $ i $-th participant has a rating of $ a_i $. During the contests, each participant solves all problems with a difficulty not exceeding their rating, meaning the $ i $-th participant solves the $ j $-th problem if and only if $ a_i \geq b_j $. In each contest, Kevin's rank is one plus the number of participants who solve more problems than he does. For each $ k = 1, 2, \ldots, m $, Kevin wants to know the minimum sum of his ranks across all $ \lfloor \frac{m}{k} \rfloor $ contests. In other words, for some value of $k$, after selecting the problems for each contest, you calculate the rank of Kevin in each contest and sum up these ranks over all $ \lfloor \frac{m}{k} \rfloor $ contests. Your goal is to minimize this value. Note that contests for different values of $k$ are independent. It means that for different values of $k$, you can select the distribution of problems into the contests independently.
Read all the hints. First, remove all contestants with a rating lower than yours, making you the contestant with the lowest rating. Since any problem you can solve can also be solved by everyone else, this does not affect your ranking; these problems can effectively be treated as having infinite difficulty. At this point, you cannot solve any problem, so your ranking in a contest is $(1 +$ the number of contestants who solve at least one problem in that contest$)$. Therefore, we only need to focus on the easiest problem in each contest. Precompute, for each problem, the number of contestants who can solve it, denoted $c_i$. This can be done by sorting the contestants by rating and the problems by difficulty, then using a two-pointer or binary search approach. The remaining task is: given the $c_i$, divide them into $\lfloor \frac{m}{k} \rfloor$ groups of $k$ elements each (discarding the remaining $m \bmod k$ values) and minimize the sum of the maximum values of the groups. This can be solved greedily: sort the $c_i$ in ascending order; for a given $k$, the answer is $(c_k+1) + (c_{2k}+1) + \dots$. The brute-force calculation over all $k$ is bounded by the harmonic series, and combined with sorting, the time complexity is $O(n \log n + m \log m)$.
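A compact Python sketch of this pipeline (function name mine; `a[0]` is assumed to be Kevin's rating, per the statement):

```python
import bisect

def min_rank_sums(a, b):
    """For each k = 1..m, the minimum total rank, following the greedy above."""
    kevin = a[0]
    ratings = sorted(x for x in a if x >= kevin)   # drop weaker contestants
    c = []
    for d in b:
        if d <= kevin:
            c.append(0)   # Kevin solves it too: treat as infinitely hard
        else:             # contestants (rating >= kevin) who can solve it
            c.append(len(ratings) - bisect.bisect_left(ratings, d))
    c.sort()
    m = len(b)
    # group maxima are c[k-1], c[2k-1], ... after sorting ascending
    return [sum(c[j] + 1 for j in range(k - 1, (m // k) * k, k))
            for k in range(1, m + 1)]
```

For a faithful $O(n \log n + m \log m)$ implementation the inner sum would be the harmonic-series loop shown; the sketch keeps it as-is since that is already the claimed bound.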
[ "binary search", "brute force", "data structures", "greedy", "sortings", "two pointers" ]
1,600
null
2048
E
Kevin and Bipartite Graph
The Arms Factory needs a poster design pattern and finds Kevin for help. A poster design pattern is a bipartite graph with $ 2n $ vertices in the left part and $ m $ vertices in the right part, where there is an edge between each vertex in the left part and each vertex in the right part, resulting in a total of $ 2nm $ edges. Kevin must color each edge with a positive integer in the range $ [1, n] $. A poster design pattern is good if there are no monochromatic cycles$^{\text{∗}}$ in the bipartite graph. Kevin needs your assistance in constructing a good bipartite graph or informing him if it is impossible. \begin{footnotesize} $^{\text{∗}}$A monochromatic cycle refers to a simple cycle in which all the edges are colored with the same color. \end{footnotesize}
The graph has a total of $2nm$ edges, and each color class must form a forest. Therefore, for any given color there are at most $2n + m - 1$ edges, so the total number of edges cannot exceed $(2n + m - 1)n$. This gives the condition $(2n+m-1)n\ge 2nm$, which simplifies to $m \leq 2n - 1$. Next, we only need to construct a valid case for $m = 2n - 1$ to solve the problem. In fact, this is easy. Since each right-side vertex has degree $2n$ and there are $n$ colors in total, let each color have exactly $2$ edges at every right-side vertex. For any given color, this is equivalent to choosing two left-side vertices to connect (ignoring the existence of the right-side vertices). After $2n - 1$ connections, the left-side vertices need to form a tree, and it turns out that connecting the left-side vertices into a chain suffices. During construction, we can cycle the colors from one right-side vertex to the next. A simple construction is as follows: for left-side vertex $i$ and right-side vertex $j$, the color of the edge connecting them is $\left\lfloor\dfrac{(i+j)\bmod 2n}2\right\rfloor+1$. For example, for $n = 4$ and $m = 7$, listing for each right-side vertex $j = 1,\dots,7$ the colors of its edges to left-side vertices $i = 1,\dots,8$, this formula yields: 2 2 3 3 4 4 1 1; 2 3 3 4 4 1 1 2; 3 3 4 4 1 1 2 2; 3 4 4 1 1 2 2 3; 4 4 1 1 2 2 3 3; 4 1 1 2 2 3 3 4; 1 1 2 2 3 3 4 4.
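A minimal Python sketch of this construction (function name mine): it colors the edge between left vertex $i$ and right vertex $j$ with the formula above, and reports impossibility when $m > 2n - 1$.

```python
def build_poster(n: int, m: int):
    """Color K_{2n,m} with colors 1..n so that no color class has a cycle;
    possible iff m <= 2n - 1. Row i = left vertex i, column j = right vertex j."""
    if m > 2 * n - 1:
        return None
    return [[((i + j) % (2 * n)) // 2 + 1 for j in range(1, m + 1)]
            for i in range(1, 2 * n + 1)]
```

The accompanying test verifies the claim directly: for each color, adding its edges one by one to a disjoint-set union over the $2n + m$ vertices must never close a cycle.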
[ "constructive algorithms", "graphs", "greedy" ]
2,000
null
2048
F
Kevin and Math Class
Kevin is a student from Eversleeping Town, currently attending a math class where the teacher is giving him division exercises. On the board, there are two rows of positive integers written, each containing $ n $ numbers. The first row is $ a_1, a_2, \ldots, a_n $, and the second row is $ b_1, b_2, \ldots, b_n $. For each division exercise, Kevin can choose any segment $ [l, r] $ and find the smallest value $ x $ among $ b_l, b_{l+1}, \ldots, b_r $. He will then modify each $ a_i $ for $ l \leq i \leq r $ to be the ceiling of $ a_i $ divided by $ x $. Formally, he selects two integers $ 1 \leq l \leq r \leq n $, sets $ x = \min_{l \leq i \leq r} b_i $, and changes all $ a_i $ for $ l \leq i \leq r $ to $ \lceil \frac{a_i}{x} \rceil $. Kevin can leave class and go home when all $ a_i $ become $ 1 $. He is eager to leave and wants to know the minimum number of division exercises required to achieve this.
Construct a min Cartesian tree for the sequence $b$. It is easy to observe that we will only operate on the intervals defined by this Cartesian tree. Proof: For any $b_x$, find the last $b_p \leq b_x$ on its left and the first $b_q < b_x$ on its right. If we want to divide an interval by $b_x$, the operation interval cannot include both $p$ and $q$. At the same time, choosing the largest possible interval is always optimal. Hence, the operation interval must be $[p+1, q-1]$. All such intervals are exactly the intervals corresponding to the Cartesian tree. To solve the problem, we can use DP on the Cartesian tree. Let $f_{u,i}$ represent the minimum possible maximum value of $a_x$ within the subtree rooted at $u$ after performing $i$ operations on all positions within that subtree. When merging, suppose the two child subtrees of $u$ are $ls$ and $rs$. The transition can be written as: $f_{u,k}=\min_{i+j=k}\left(\max(f_{ls,i},f_{rs,j},a_u)\right)$ Then consider division at the position corresponding to $b_u$, which updates the DP state: $f_{u,k+1}\leftarrow\left\lceil\frac{f_{u,k}}{b_u}\right\rceil$ Since repeatedly operating on the entire sequence shows that $\log_2(\max(a_i)) \leq 63$ operations always suffice, the second dimension of the DP state only needs the values $0 \sim 63$. Thus, the time complexity of this approach is $O(n \log^2 a)$. The bottleneck lies in merging the DP states of the two subtrees. Observing that $f_{u,i}$ is monotonically non-increasing, the $(\min,\max)$ convolution of $f_{ls,i}$ and $f_{rs,i}$ is equivalent to merging two sorted sequences as in merge sort. This reduces the merging cost to $O(\log a)$ per node, so the overall complexity becomes $O(n \log a)$, which is optimal.
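A Python sketch of the $O(n \log^2 a)$ variant (quadratic merge over the $64$-entry DP arrays; names are mine). It assumes $b_i \ge 2$, so that $63$ whole-sequence operations always bring everything to $1$, and it relies on $\lceil\lceil a/x\rceil/y\rceil = \lceil a/(xy)\rceil$, which makes the order of nested operations irrelevant.

```python
def min_divisions(a, b):
    """Cartesian-tree DP: f[u][k] = minimum achievable maximum in u's
    subtree after k operations. Assumes all b_i >= 2."""
    n = len(a)
    LOG = 63
    left, right, stack = [-1] * n, [-1] * n, []
    for i in range(n):                      # build the min Cartesian tree of b
        last = -1
        while stack and b[stack[-1]] > b[i]:
            last = stack.pop()
        left[i] = last
        if stack:
            right[stack[-1]] = i
        stack.append(i)
    root = stack[0]

    def solve(u):
        if u == -1:
            return [1] * (LOG + 1)          # empty subtree: maximum already 1
        fl, fr = solve(left[u]), solve(right[u])
        f = [min(max(fl[i], fr[k - i], a[u]) for i in range(k + 1))
             for k in range(LOG + 1)]
        for k in range(LOG):                # one more op on u's whole interval
            f[k + 1] = min(f[k + 1], -(-f[k] // b[u]))   # ceiling division
        return f

    f = solve(root)
    return next(k for k in range(LOG + 1) if f[k] == 1)
```

The sequential `f[k+1] = min(f[k+1], ceil(f[k]/b_u))` updates chain naturally, so applying the interval operation several times is covered without extra bookkeeping.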
[ "brute force", "data structures", "divide and conquer", "dp", "implementation", "math", "trees" ]
2,500
null
2048
G
Kevin and Matrices
Kevin has been transported to Sacred Heart Hospital, which contains all the $ n \times m $ matrices with integer values in the range $ [1,v] $. Now, Kevin wants to befriend some matrices, but he is willing to befriend a matrix $ a $ if and only if the following condition is satisfied: $$ \min_{1\le i\le n}\left(\max_{1\le j\le m}a_{i,j}\right)\le\max_{1\le j\le m}\left(\min_{1\le i\le n}a_{i,j}\right). $$ Please count how many matrices in Sacred Heart Hospital can be friends with Kevin. Since Kevin is very friendly, there could be many matrices that meet this condition. Therefore, you only need to output the result modulo $998\,244\,353$.
Let us assume the left-hand side attains its value at position $L$ (so $a_L$ is the maximum of its row), and the right-hand side attains its value at position $R$ (so $a_R$ is the minimum of its column). For any $L, R$, let $P$ be the position in the same row as $L$ and the same column as $R$. Then we have $a_L \geq a_P \geq a_R$, which implies $a_L \geq a_R$. Hence, we only need to count the cases where $a_L = a_R$. Now, consider the set $S$ of positions where both sides attain this common value. Positions in $S$ are the maximum in their respective row and the minimum in their respective column. For any two positions $P, Q$ in $S$ that are not in the same row or column, we can observe that the position in the same row as $P$ and the same column as $Q$, and vice versa, also attain this value. By induction, we conclude that $S$ forms a subrectangle. Next, enumerate the common value $k$ and the size of $S$, denoted $i \times j$. The constraints are as follows: all remaining elements in the rows of $S$ must be $\leq k$, and all remaining elements in the columns of $S$ must be $\geq k$. Using the principle of inclusion-exclusion, we derive: $ans=\sum_{k=1}^v\sum_{i=1}^n\sum_{j=1}^m(-1)^{i+j}\binom ni\binom mjk^{i(m-j)}(v-k+1)^{(n-i)j}v^{(n-i)(m-j)}$ The naive evaluation is $O(nmv)$, which is too expensive. Let us simplify: $=\sum_{k=1}^v\sum_{i=1}^n(-1)^i\binom ni\sum_{j=1}^m(-1)^{j}\binom mj\left(k^i\right)^{m-j}\left((v-k+1)^{n-i}\right)^j\left(v^{n-i}\right)^{m-j}\\ =\sum_{k=1}^v\sum_{i=1}^n(-1)^i\binom ni\sum_{j=1}^m\binom mj\left(-(v-k+1)^{n-i}\right)^j\left(k^iv^{n-i}\right)^{m-j}$ This resembles the Binomial Theorem. 
To simplify further, add and subtract the term for $j=0$: $=\sum_{k=1}^v\sum_{i=1}^n(-1)^i\binom ni\left(\sum_{j=0}^m\binom mj\left(-(v-k+1)^{n-i}\right)^j\left(k^iv^{n-i}\right)^{m-j}-\left(k^iv^{n-i}\right)^m\right)\\ =\sum_{k=1}^v\sum_{i=1}^n(-1)^i\binom ni\left(\left(-(v-k+1)^{n-i}+k^iv^{n-i}\right)^m-\left(k^iv^{n-i}\right)^m\right)$ Thus, the problem can be solved in $O(nv\log m)$ time.
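The final closed form can be evaluated directly; here is a hedged Python sketch (function name mine) computing it modulo $998\,244\,353$, relying on Python's three-argument `pow` accepting a negative base.

```python
from math import comb

MOD = 998244353

def count_friend_matrices(n: int, m: int, v: int) -> int:
    """Evaluate ans = sum_k sum_i (-1)^i C(n,i) *
    ((-(v-k+1)^(n-i) + k^i v^(n-i))^m - (k^i v^(n-i))^m)  (mod 998244353)."""
    ans = 0
    for k in range(1, v + 1):
        for i in range(1, n + 1):
            base = pow(k, i, MOD) * pow(v, n - i, MOD) % MOD
            term = (pow((base - pow(v - k + 1, n - i, MOD)) % MOD, m, MOD)
                    - pow(base, m, MOD)) % MOD
            ans = (ans + (-1) ** i * comb(n, i) % MOD * term) % MOD
    return ans % MOD
```

The double loop over $k$ and $i$ with a modular exponentiation inside matches the claimed $O(nv\log m)$ bound.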
[ "brute force", "combinatorics", "dp", "math" ]
2,800
null
2048
H
Kevin and Strange Operation
Kevin is exploring problems related to binary strings in Chinatown. When he was at a loss, a stranger approached him and introduced a peculiar operation: - Suppose the current binary string is $ t $, with a length of $ \vert t \vert $. Choose an integer $ 1 \leq p \leq \vert t \vert $. For all $ 1 \leq i < p $, \textbf{simultaneously} perform the operation $ t_i = \max(t_i, t_{i+1}) $, and then delete $ t_p $. For example, suppose the current binary string is 01001, and you choose $ p = 4 $. Perform $ t_i = \max(t_i, t_{i+1}) $ for $t_1$, $t_2$, and $ t_3 $, transforming the string into 11001, then delete $ t_4 $, resulting in 1101. Kevin finds this strange operation quite interesting. Thus, he wants to ask you: Given a binary string $ s $, how many distinct non-empty binary strings can you obtain through any number of operations (possibly zero)? Since the answer may be very large, you only need to output the result modulo $998\,244\,353$.
Assume that after performing several operations on the $01$ string $s$, we get the $01$ string $t$. It's not hard to notice that each element in $t$ corresponds to the $\max$ of a subset of elements from $s$. Further observation shows that this subset must form a continuous segment, so we can express $t_i$ as $\max\limits_{k=l_i}^{r_i} s_k$. Initially, $t = s$, so all $l_i = r_i = i$. Suppose the current length of string $t$ is $m$, corresponding to two sequences $l$ and $r$. If an operation is performed at position $p$ where $1 \le p \le m$, the new sequence $t'$ will correspond to two sequences $l'$ and $r'$. Then, since for $1 \le i < p$, we have $t'_i=\max(t_i,t_{i+1})$, and for $p \le i < m$, $t'_i=t_{i+1}$, it can be observed that for $1 \le i < p$, we have $l'_i=l_i,r'_i=r_{i+1}$, and for $p \le i < m$, $l'_i=l_{i+1},r'_i=r_{i+1}$. If we only focus on the change of sequences $l$ and $r$ to $l'$ and $r'$, it is equivalent to deleting the values $l_p$ and $r_1$. Thus, performing $k$ operations starting from the sequence $s$, the resulting sequence $t$ will correspond to the sequences $l$ and $r$, where $l$ is obtained by deleting any $k$ values from $1$ to $n$, and $r$ is the sequence from $k+1$ to $n$. Now, let's consider how to determine if the $01$ string $t$ can be generated. By reversing $t$ to get $t'$, the task becomes finding $n \ge p_1 > p_2 > \dots > p_k \ge 1$ such that for all $1 \le i \le k$, we have $t'_i=\max\limits_{k=p_i}^{n-i+1}s_k$. A clearly correct greedy strategy is to choose $p_i$ in the order $i=1 \sim k$, always selecting the largest possible value. Now consider performing DP. Let $dp_{i,j}$ represent how many length-$i$ $01$ strings $t$ can be generated such that after running the above greedy algorithm, $p_i$ exactly equals $j$. We assume that $p_0 = n+1$ and the boundary condition is $dp_{0,n+1} = 1$. 
We now consider the transitions from $dp_{i-1,j}$ to $dp_{i,*}$: If $s[j-1, n-i+1]$ contains a $1$, then the $i$-th position of the reversed $t$ must be $1$, and the greedy choice is $p_i = j-1$, so we add $dp_{i-1,j}$ to $dp_{i,j-1}$. If $s[j-1, n-i+1]$ contains no $1$, the $i$-th position of the reversed $t$ can be $0$; in that case the greedy choice is again $p_i = j-1$, and we add $dp_{i-1,j}$ to $dp_{i,j-1}$. If instead we want the $i$-th position of the reversed $t$ to be $1$, we find the largest $pos \le n-i+1$ such that $s_{pos} = 1$, set $p_i = pos$, and add $dp_{i-1,j}$ to $dp_{i,pos}$. Both types of transitions can thus be summarized as: add $dp_{i-1,j}$ to $dp_{i,j-1}$ for every $j$; then, with $pos$ the largest position $\le n-i+1$ where $s_{pos} = 1$, for all $j$ with $j-1 > pos$ (i.e., $j \ge pos+2$), add $dp_{i-1,j}$ to $dp_{i,pos}$. The first type of transition is a global shift of the DP array, while the second requires a suffix sum over a segment of the DP array followed by a point update. This can be maintained efficiently with a segment tree, giving $O(n \log n)$ for all transitions. The final answer is the sum of $dp_{i,j}$ over all $1 \le i \le n$, $1 \le j \le n$; while maintaining the structure we can also keep the running total of each $dp_i$ in $O(1)$ per step (zeroing the entry that shifts out of range). Since the transitions have such nice structure, it is actually possible to solve the problem cleverly using prefix sums in $O(n)$ time without any complex data structures, but that is not strictly necessary.
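For small strings, the set of reachable strings can be enumerated directly from the operation's definition; this exponential brute-force Python sketch (names mine) is handy for validating the DP on tiny cases.

```python
def count_reachable(s: str) -> int:
    """Brute-force the number of distinct non-empty strings obtainable from s.
    Operation (0-indexed p): t_i = max(t_i, t_{i+1}) for i < p, then delete t_p."""
    seen = set()
    stack = [s]
    while stack:
        t = stack.pop()
        if not t or t in seen:
            continue
        seen.add(t)
        for p in range(len(t)):
            # the prefix updates are simultaneous, so read from the original t
            stack.append(''.join(max(t[i], t[i + 1]) for i in range(p)) + t[p + 1:])
    return len(seen)
```

As a sanity check against the statement's example, applying the operation to `01001` at $p = 4$ (1-indexed) inside this routine yields `1101`.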
[ "data structures", "dp" ]
3,100
null
2048
I1
Kevin and Puzzle (Easy Version)
\textbf{This is the easy version of the problem. The difference between the versions is that in this version, you need to find any one good array. You can hack only if you solved all versions of this problem.} Kevin is visiting the Red Church, and he found a puzzle on the wall. For an array $ a $, let $ c(l,r) $ indicate how many distinct numbers are among $ a_l, a_{l+1}, \ldots, a_r $. In particular, if $ l > r $, define $ c(l,r) = 0 $. You are given a string $ s $ of length $ n $ consisting of letters $ L $ and $ R $ only. Let a non-negative array $ a $ be called good, if the following conditions hold for $ 1 \leq i \leq n $: - if $s_i=\verb!L!$, then $c(1,i-1)=a_i$; - if $s_i=\verb!R!$, then $c(i+1,n)=a_i$. If there is a good array $a$, print any of the good arrays. Otherwise, report that no such array exists.
Lemma: Suppose the largest value filled is $mx$, and the number of distinct values is $c$. Let $d = c - mx$. Then, $d = 0$ or $d = 1$. Proof: Clearly, $c \le mx + 1$. If $c < mx$, observe where $mx$ is placed, and a contradiction arises. Now, consider the leftmost and rightmost characters in sequence $s$: If they are L and R, we can see that both positions must be filled with $0$, and no other position can be filled with $0$. For internal positions, whether L or R, $0$ counts as a distinct number. Therefore, we can remove these two positions, recursively solve for the remaining part, add $1$ to all numbers, and then place a $0$ at both ends. If both are L, the leftmost L must be $0$. Suppose the rightmost L is filled with $x$. It is easy to prove that $x$ cannot be placed in internal positions. For internal positions, whether L or R, either $0$ or $x$ is counted as a distinct number. So, as in the previous case, we remove the two positions, recursively solve for the remaining part, add $1$ to all numbers, and then add $0$ and $x$ at both ends. The value of $x$ must be the number of distinct values inside plus 1. This condition is equivalent to the internal region satisfying $d = 1$. If both are R, the analysis is the same as for the L and L case. If the leftmost character is R and the rightmost character is L, a simple construction is to fill everything with $1$. In this case, no $0$ will appear, so this case can only correspond to $d = 0$. We recursively remove the leftmost and rightmost characters, solve for the inner region, add $1$ to all of them, and place $0$ and $x$ at both ends. Now, consider how $d$ changes: For the LR case, $d$ remains unchanged. For LL or RR, the internal region must satisfy $d = 1$. For RL, the entire region results in $d = 0$. Therefore, consider the outermost RL. If it contains LL or RR inside, there is no solution. 
Otherwise, a solution always exists and can be easily constructed based on the above process; the time complexity is $O(n)$.
[ "constructive algorithms" ]
3,500
null
2048
I2
Kevin and Puzzle (Hard Version)
\textbf{This is the hard version of the problem. The difference between the versions is that in this version, you need to count the number of good arrays. You can hack only if you solved all versions of this problem.} Kevin is visiting the Red Church, and he found a puzzle on the wall. For an array $ a $, let $ c(l,r) $ indicate how many distinct numbers are among $ a_l, a_{l+1}, \ldots, a_r $. In particular, if $ l > r $, define $ c(l,r) = 0 $. You are given a string $ s $ of length $ n $ consisting of letters $ L $ and $ R $ only. Let a non-negative array $ a $ be called good, if the following conditions hold for $ 1 \leq i \leq n $: - if $s_i=\verb!L!$, then $c(1,i-1)=a_i$; - if $s_i=\verb!R!$, then $c(i+1,n)=a_i$. You need to count the number of good arrays $a$. Since the answer may be large, you only need to output the answer modulo $998\,244\,353$.
According to the easy version, we can see that most cases have very few solutions, because for LR, LL, and RR, once the inner part is filled, the outer layer has exactly one way to be filled. Therefore, if there is no RL layer, the answer must be $1$. Next, consider the case where RL is present, and assume that this RL is the outermost pair of characters. It can be proved that the numbers at the R and L positions of this RL must be the same; the specific proof is omitted here, but readers can easily verify it. Let this common value be $m$. Then, enumerate the rightmost R position $x$ filled with $m$ and the leftmost L position $y$ filled with $m$, and discuss the relationship between $x$ and $y$. If $x > y$: all Ls to the right of $x$ must be filled with $m$, and all Rs to the right of $x$ have only one way to be filled; the same applies around $y$. For this case, we can directly enumerate $m$, determine the unique positions of $x$ and $y$, and check whether $x > y$. If $x < y$: all Rs to the left of $x$ must be filled with $m$, and the Ls to the left of $x$ have only one way to be filled; similarly for the right side of $y$. Now, consider the section between $(x, y)$. Clearly, $m$ must not appear in the middle, so we can delete $x$ and all Rs to its left, as well as $y$ and all Ls to its right (i.e., remove all positions where the value is $m$). The resulting sequence is called the remaining sequence. After removing these characters, we solve for the remaining sequence, add $1$ to all the numbers obtained, and finally add back all positions filled with $m$. (The original editorial illustrates this with a figure: the red characters mark $x$ and $y$, the omitted part is the section between $(x, y)$, and $*$ marks the original positions of $x$ and $y$ in the remaining sequence.)
After filling the remaining sequence, we need to analyze the conditions for adding back the positions filled with $m$. Divide the remaining sequence into three parts, "left", "middle", and "right", with the positions of $x$ and $y$ as boundaries, where the left part contains only Ls and the right part contains only Rs. Let the total number of distinct values in the "left + middle" part be $c_1$, and in the "middle + right" part be $c_2$; the condition to be satisfied is $m = c_1 + 1 = c_2 + 1$. Additionally, $m$ must not appear in the remaining sequence; this restriction is equivalent to requiring both the "left + middle" and "middle + right" parts to satisfy $d = 1$, and it follows that the remaining sequence itself satisfies $d = 1$. For the condition $c_1 = c_2$, it is easy to see that it is equivalent to: letting $z$ be the larger of the sizes of the left and right parts, the first $z$ characters of the remaining sequence must all be L and the last $z$ characters must all be R. The final necessary and sufficient condition is: let $x$ have $a$ Ls to its left and $y$ have $b$ Rs to its right, and assume without loss of generality that $a \ge b$. Take the substring strictly between $x$ and $y$. Its last $a-b$ characters must all be R, and after removing these $a-b$ Rs, the resulting string must satisfy $d = 1$, meaning its first and last characters never form RL. Since $d = 1$, if the remaining sequence satisfies this condition, there is exactly one way to fill it.
Finally, we only need to count the cases separately. The case $x > y$ can be counted in $O(n)$. For the case $x < y$, assume without loss of generality that $a \ge b$; we enumerate $y$ and let $cnt$ be the length of the longest run of consecutive Rs before $y$, so the restriction on $a$ becomes $b \le a \le b + cnt$. When $cnt = 0$, the value of $a$ is fixed, but the corresponding $x$ could a priori be any position in a consecutive block of Rs; however, since $cnt = 0$ means the position before $y$ is an L, the position after $x$ cannot be an R, so $x$ can only be the last R of its block. When $cnt > 0$, we only need to enumerate the values of $x$; it is easy to see that each $x$ is counted at most $2$ times. Through the above observations, we only need to enumerate $O(n)$ pairs $(x, y)$ and check whether each increases the answer by $1$; in particular, the answer is at the $O(n)$ level. The last subproblem is: for each given interval $[l, r]$, check whether taking the first and last characters of this string ever produces RL. A simple solution uses a bitset: for example, scan $r$ from small to large while maintaining, for each value of $l + r$, whether an RL pair exists. The time complexity of this approach is $O(\frac{n^2}{\omega})$, which is sufficient for this problem. Using block convolution, a more optimal complexity can be achieved. In fact, by further exploring the properties, the time complexity can be reduced to $O(n \log^2 n)$ (requiring convolution), which you can explore on your own if you are interested.
[ "bitmasks", "fft", "math" ]
3,500
null
2049
A
MEX Destruction
Evirir the dragon snuck into a wizard's castle and found a mysterious contraption, and their playful instincts caused them to play with (destroy) it... Evirir the dragon found an array $a_1, a_2, \ldots, a_n$ of $n$ non-negative integers. In one operation, they can choose a non-empty subarray$^{\text{∗}}$ $b$ of $a$ and replace it with the integer $\operatorname{mex}(b)$$^{\text{†}}$. They want to use this operation any number of times to make $a$ only contain zeros. It can be proven that this is always possible under the problem constraints. What is the minimum number of operations needed? \begin{footnotesize} $^{\text{∗}}$An array $c$ is a subarray of an array $d$ if $c$ can be obtained from $d$ by the deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end. $^{\text{†}}$The minimum excluded (MEX) of a collection of integers $f_1, f_2, \ldots, f_k$ is defined as the smallest non-negative integer $x$ which does not occur in the collection $f$. \end{footnotesize}
Case 1: All elements are $0$. Then the answer is $0$. Case 2: Some element is non-zero, and all non-zero elements form a contiguous subarray. Then the answer is $1$ since we can choose that subarray and replace it with a $0$. Case 3: Otherwise, the answer is $2$. We can replace the entire array with a non-zero element (since $0$ is in the array), then replace the entire array again with a $0$ (since the only element left is non-zero). $1$ operation is not enough. If we only use $1$ operation, the selected subarray must contain all non-zero elements. Since the non-zero elements do not form a subarray, the selected subarray must contain a $0$, thus the $\operatorname{MEX}$ will be non-zero.
[ "greedy", "implementation" ]
800
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n;
    cin >> n;
    vector<int> a(n);
    for (int i = 0; i < n; i++) cin >> a[i];
    // Strip trailing and leading zeros.
    while (!a.empty() && a.back() == 0) a.pop_back();
    reverse(a.begin(), a.end());
    while (!a.empty() && a.back() == 0) a.pop_back();
    reverse(a.begin(), a.end());
    if (a.empty()) {                     // Case 1: all elements are 0
        cout << 0 << '\n';
        return;
    }
    bool hasZero = false;
    for (const auto x : a) hasZero |= x == 0;
    if (hasZero) cout << 2 << '\n';      // Case 3: a 0 lies between non-zeros
    else cout << 1 << '\n';              // Case 2: non-zeros are contiguous
}

int main() {
    int t;
    cin >> t;
    for (int i = 0; i < t; i++) solve();
    return 0;
}
2049
B
pspspsps
Cats are attracted to pspspsps, but Evirir, being a dignified dragon, is only attracted to pspspsps with oddly specific requirements... Given a string $s = s_1s_2\ldots s_n$ of length $n$ consisting of characters p, s, and . (dot), determine whether a permutation$^{\text{∗}}$ $p$ of length $n$ exists, such that for all integers $i$ ($1 \le i \le n$): - If $s_i$ is p, then $[p_1, p_2, \ldots, p_i]$ forms a permutation (of length $i$); - If $s_i$ is s, then $[p_i, p_{i+1}, \ldots, p_{n}]$ forms a permutation (of length $n-i+1$); - If $s_i$ is ., then there is no additional restriction. \begin{footnotesize} $^{\text{∗}}$A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array). \end{footnotesize}
Since the entire $p$ must be a permutation, if $s_1 =$s, we can set $s_1 =$., and if $s_n =$p, we can set $s_n =$.. After that, the answer is YES if and only if the non-dot characters in $s$ are all p or all s. If all non-dot characters are p, we can choose the permutation $p = [1, 2, \ldots, n]$. If all non-dot characters are s, we can choose $p = [n, n - 1, \ldots, 1]$. Otherwise, there exist both a p and an s. Suppose for contradiction that there is a solution. Let $a$ and $b$ be the prefix and suffix corresponding to the p and the s, respectively, and without loss of generality suppose $a$ is the shorter of the two. Since $a$ and $b$ are both permutations (of $1..|a|$ and $1..|b|$ with $|a| \le |b|$), every element of $a$ must also occur in $b$. However, $b$ cannot contain $p_1$: since $b$ is not the entire $p$, $b$ does not include position $1$, and $p$ is a permutation, so the value $p_1$ occurs nowhere else. But $a$ contains $p_1$. Contradiction.
[ "brute force", "constructive algorithms", "graph matchings", "implementation" ]
1,300
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n;
    cin >> n;
    string s;
    cin >> s;
    // The whole array is always a permutation, so these constraints are void.
    if (s[0] == 's') s[0] = '.';
    if (s.back() == 'p') s.back() = '.';
    bool found_p = false;
    bool found_s = false;
    for (const auto c : s) {
        switch (c) {
            case 'p': found_p = true; break;
            case 's': found_s = true; break;
        }
    }
    cout << (found_p && found_s ? "NO" : "YES") << '\n';
}

int main() {
    int t;
    cin >> t;
    for (int i = 0; i < t; i++) solve();
    return 0;
}
2049
C
MEX Cycle
Evirir the dragon has many friends. They have 3 friends! That is one more than the average dragon. You are given integers $n$, $x$, and $y$. There are $n$ dragons sitting in a circle. The dragons are numbered $1, 2, \ldots, n$. For each $i$ ($1 \le i \le n$), dragon $i$ is friends with dragon $i - 1$ and $i + 1$, where dragon $0$ is defined to be dragon $n$ and dragon $n + 1$ is defined to be dragon $1$. Additionally, dragons $x$ and $y$ are friends with each other (if they are already friends, this changes nothing). Note that all friendships are mutual. Output $n$ non-negative integers $a_1, a_2, \ldots, a_n$ such that for each dragon $i$ ($1 \le i \le n$), the following holds: - Let $f_1, f_2, \ldots, f_k$ be the friends of dragon $i$. Then $a_i = \operatorname{mex}(a_{f_1}, a_{f_2}, \ldots, a_{f_k})$.$^{\text{∗}}$ \begin{footnotesize} $^{\text{∗}}$The minimum excluded (MEX) of a collection of integers $c_1, c_2, \ldots, c_m$ is defined as the smallest non-negative integer $t$ which does not occur in the collection $c$. \end{footnotesize}
There are many possible solutions. The simplest one we can find (thanks to Kaey) is as follows: Set $a_x = 0, a_{x+1} = 1, a_{x+2} = 0, \ldots$, alternating between 0 and 1, wrapping around accordingly. Formally, using 0-based indexing, set $a_{(x+i) \bmod n} = i \bmod 2$ for all $i$ ($0 \le i \le n - 1$). If $n$ is odd or if $x - y$ is even, set $a_x = 2$. Why this works: If $n$ is even and $x - y$ is odd, all $0$'s are only friends with $1$'s and vice versa. If $n$ is odd, $a_x$ will be adjacent to $0$ and $1$, so we set $a_x = 2$. Now $a$ is valid ignoring the extra friendship. Adding in the extra friendship, $a$ is still valid since $a_x = 2 > a_y$, so it will not affect $a_y$. If $n$ is even and $x - y$ is even, the extra friendship connects two $0$ or two $1$. Setting $a_x = 2$ works because dragon $x$'s friends still have another neighbor to maintain their $\operatorname{MEX}$.
[ "brute force", "constructive algorithms", "greedy", "implementation" ]
1,500
#include <iostream>
#include <vector>
using namespace std;

void solve() {
    int n, x, y;
    cin >> n >> x >> y;
    --x; --y;
    vector<int> ans(n);
    // Alternate 0 and 1 starting from position x, wrapping around the circle.
    for (int i = 0; i < n; ++i) ans[(x + i) % n] = i % 2;
    // If n is odd or x - y is even, dragon x needs the value 2.
    if (n % 2 || (x - y) % 2 == 0) ans[x] = 2;
    for (auto v : ans) cout << v << ' ';
    cout << endl;
}

int main() {
    int T;
    cin >> T;
    while (T--) solve();
}
2049
D
Shift + Esc
After having fun with a certain contraption and getting caught, Evirir the dragon decides to put their magical skills to good use — warping reality to escape fast! You are given a grid with $n$ rows and $m$ columns of non-negative integers and an integer $k$. Let $(i, j)$ denote the cell in the $i$-th row from the top and $j$-th column from the left ($1 \le i \le n$, $1 \le j \le m$). For every cell $(i, j)$, the integer $a_{i, j}$ is written on the cell $(i, j)$. You are initially at $(1, 1)$ and want to go to $(n, m)$. You may only move down or right. That is, if you are at $(i, j)$, you can only move to $(i+1, j)$ or $(i, j+1)$ (if the corresponding cell exists). Before you begin moving, you may do the following operation any number of times: - Choose an integer $i$ between $1$ and $n$ and cyclically shift row $i$ to the left by $1$. Formally, simultaneously set $a_{i,j}$ to $a_{i,(j \bmod m) + 1}$ for all integers $j$ ($1 \le j \le m$). Note that you may not do any operation after you start moving. After moving from $(1, 1)$ to $(n, m)$, let $x$ be the number of operations you have performed before moving, and let $y$ be the sum of the integers written on visited cells (including $(1, 1)$ and $(n, m)$). Then the cost is defined as $kx + y$. Find the minimum cost to move from $(1, 1)$ to $(n, m)$.
Let $f(i,j)$ be the minimum cost to reach cell $(i,j)$, counting both the values of visited cells and $k$ times the number of shifts performed on rows $1$ through $i$. For simplicity's sake, we will add a row with all zeros above the first row, so the base case is $f(0,1) = 0$ and $f(0,j) = \infty$ for $j \neq 1$. Also note that all operations on states denoting columns are under modulo $m$; in particular, shifting a row $m$ times returns it to its original state, so it never helps to shift a row more than $m-1$ times. The transitions are as follows. For row $i$, fix the number of shifts $s$ ($0 \le s \le m-1$) applied to it; after shifting, the cell at column $j$ holds $a_{i,\,((j-1+s) \bmod m)+1}$. Let $g_s(j)$ be the minimum cost to stand on cell $(i,j)$ with this shift: either we enter the row from above, $g_s(j) = f(i-1,j) + ks + a_{i,\,((j-1+s) \bmod m)+1}$, or we arrive from the left, $g_s(j) = g_s(j-1) + a_{i,\,((j-1+s) \bmod m)+1}$, taking the minimum of the two. Then $f(i,j) = \min_s g_s(j)$. In each row we try $m$ shifts, each processed in $O(m)$, so the overall complexity is $O(nm^2)$. The final answer is $f(n,m)$.
[ "brute force", "dp" ]
1,900
#include <bits/stdc++.h>
using namespace std;
typedef long long int ll;

ll dp[511][511], a[511][511];

void solve() {
    int n, m, k;
    cin >> n >> m >> k;
    for (int i = 1; i <= n; i++)
        for (int j = 0; j < m; j++) cin >> a[i][j];
    for (int i = 0; i <= n; i++)
        for (int j = 0; j < m; j++) dp[i][j] = 1e18;
    dp[0][0] = 0; // virtual all-zero row above the grid
    for (int i = 1; i <= n; i++) {
        for (int shift = 0; shift < m; shift++) {
            vector<ll> tmp(m, 1e18);
            // Enter row i at column j from above, paying k per shift of this row.
            for (int j = 0; j < m; j++)
                tmp[j] = dp[i - 1][j] + a[i][(j + shift) % m] + k * 1LL * shift;
            // Relax moving right within the row over the cyclic index.
            for (int j = 0; j < m; j++)
                tmp[j] = min(tmp[j], tmp[(j + m - 1) % m] + a[i][(j + shift) % m]);
            for (int j = 0; j < m; j++)
                tmp[j] = min(tmp[j], tmp[(j + m - 1) % m] + a[i][(j + shift) % m]);
            for (int j = 0; j < m; j++) dp[i][j] = min(dp[i][j], tmp[j]);
        }
    }
    cout << dp[n][m - 1] << endl;
}

int main() {
    int t;
    cin >> t;
    for (int i = 0; i < t; i++) solve();
}
2049
E
Broken Queries
You, a wizard whose creation was destroyed by a dragon, are determined to hunt it down with a magical AOE tracker. But it seems to be toyed with... This is an interactive problem. There is a hidden binary array $a$ of length $n$ ($\mathbf{n}$ \textbf{is a power of 2}) and a hidden integer $k\ (2 \le k \le n - 1)$. The array $a$ contains \textbf{exactly one 1} (and all other elements are 0). For two integers $l$ and $r$ ($1 \le l \le r \le n$), define the range sum $s(l, r) = a_l + a_{l+1} + \cdots + a_r$. You have a magical device that takes ranges and returns range sums, but it returns the opposite result when the range has length at least $k$. Formally, in one query, you can give it a pair of integers $[l, r]$ where $1 \le l \le r \le n$, and it will return either $0$ or $1$ according to the following rules: - If $r - l + 1 < k$, it will return $s(l, r)$. - If $r - l + 1 \ge k$, it will return $1 - s(l, r)$. Find $k$ using at most $33$ queries. The device is \textbf{not} adaptive. It means that the hidden $a$ and $k$ are fixed before the interaction and will not change during the interaction.
Make 2 queries: $[1, n/4]$ and $[n/4+1, n/2]$. The two ranges have the same length, so the device either flips both answers or neither. Hence the two answers differ if and only if the single $1$ lies in the first half $[1, n/2]$. Without loss of generality, assume that the 1 is in the first half (otherwise, mirror every range below with respect to the center). Make 1 query: query $[n/2+1, n]$, an all-zero range of length $n/2$; the device answers $1$ exactly when $n/2 \ge k$, which tells us whether $k \le n/2$. If $k \le n/2$, query prefixes of the all-zero half: for a query of length $q$ on an all-zero range, the result is $0$ if $q < k$ and $1$ otherwise, so we can binary search for the largest $q$ with result $0$, and then $k = q + 1$. If $k > n/2$, query prefixes $[1, q]$ with $q \ge n/2$, which always contain the 1: the result is $0$ if $q \ge k$ and $1$ otherwise, so we binary search for the largest $q$ with result $1$, and again $k = q + 1$. In both cases, the binary search takes $\log_2 \frac{n}{2}$ queries, so the total is $3 + \log_2 \frac{n}{2} = \log_2 n + 2$ queries, which fits within the limit of $33$.
[ "binary search", "bitmasks", "brute force", "constructive algorithms", "implementation", "interactive" ]
2,400
#include <bits/stdc++.h> using namespace std; int qry(int l, int r, bool rev = 0, int n = 0) { if (rev) { int t = n - l; l = n - r; r = t; } cout << "? " << l + 1 << ' ' << r << endl; cin >> r; return r; } void solve() { int n; cin >> n; int a = qry(0, n / 4); int b = qry(n / 4, n / 2); bool kSmall = 1; bool firstHalf = 1; if (a == b) firstHalf = 0; int bs = 0; if (qry(0, n / 2, firstHalf, n) == 0) kSmall = 0; if (kSmall) { for (int k = n / 4; k; k /= 2) if (qry(0, bs + k, firstHalf, n) == 0) bs += k; } else { bs = n / 2 - 1; for (int k = n / 4; k; k /= 2) if (qry(0, bs + k, 1-firstHalf, n) == 1) bs += k; } cout << "! " << bs + 1 << endl; } int main() { int T = 1; cin >> T; while (T--) solve(); return 0; }
2049
F
MEX OR Mania
An integer sequence $b_1, b_2, \ldots, b_n$ is good if $\operatorname{mex}(b_1, b_2, \ldots, b_n) - (b_1 | b_2 | \ldots | b_n) = 1$. Here, $\operatorname{mex(c)}$ denotes the MEX$^{\text{∗}}$ of the collection $c$, and $|$ is the bitwise OR operator. Shohag has an integer sequence $a_1, a_2, \ldots, a_n$. He will perform the following $q$ updates on $a$: - $i$ $x$ — increase $a_i$ by $x$. After each update, help him find the length of the longest good subarray$^{\text{†}}$ of $a$. \begin{footnotesize} $^{\text{∗}}$The minimum excluded (MEX) of a collection of integers $c_1, c_2, \ldots, c_k$ is defined as the smallest non-negative integer $y$ which does not occur in the collection $c$. $^{\text{†}}$An array $d$ is a subarray of an array $f$ if $d$ can be obtained from $f$ by the deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end. \end{footnotesize}
Let's figure out when a sequence is good. Let $m$ be the maximum element of the sequence. Notice that the bitwise OR of the sequence is at least $m$, so the MEX has to be at least $m + 1$. On the other hand, the MEX is at most $m + 1$, because every element is at most $m$. Therefore the MEX is exactly $m + 1$ and the bitwise OR is exactly $m$. The MEX being $m + 1$ means that every integer from $0$ to $m$ occurs in the sequence. Now we need to check for which $m$ the bitwise OR of $0, 1, \ldots, m$ can equal $m$: this happens if and only if $m = 2^k - 1$ for some integer $k \ge 0$, because the OR of $0, 1, \ldots, m$ has every bit up to the highest bit of $m$ set. So, a sequence is good if the maximum element is $2^k - 1$ and every integer from $0$ to $2^k - 1$ occurs in it. Now, let's see how to answer the queries without any updates. To find the longest good subarray, we can use a two-pointers approach. But a better way to do this is to fix the power $k (0 \leq k \leq \log_2 n)$ and find the longest good subarray with maximum element $2^k - 1$. To do this, ignore the elements greater than $2^k - 1$: the array splits into maximal segments of elements smaller than $2^k$, and such a segment is good if and only if it contains every integer from $0$ to $2^k - 1$. So to sum it up, for each power $k$ we maintain the maximal segments of elements smaller than $2^k$ together with the number of distinct values in each segment, and track the length of the longest good segment. Now regarding the updates, it is hard to track everything if we do the updates normally. But it's easier if we look at them in reverse order! Then each update will be decreasing the value of $a_i$ by $x$, so for every power $k$ elements are only ever inserted into segments and adjacent segments only merge; this is exactly what a DSU with small-to-large merging of the value counts handles. Please check my code for more details. Overall complexity is $O((n + q) \log^2 n)$ or $O((n + q) \log^3 n)$, depending on the implementation.
[ "bitmasks", "brute force", "data structures", "dsu", "implementation" ]
2,700
#include<bits/stdc++.h> using namespace std; const int N = 1e5 + 9, Q = 3e5 + 9; using ll = long long; struct GoodSet { // insert, erase and track distinct and total elements map<int, int> mp; int size; int k; GoodSet() {} GoodSet(int _k): k(_k), size(0) { }; void insert(int x, int c = 1) { mp[x] += c; size += c; } void erase(int x) { if (mp[x] == 1) { mp.erase(x); } else { mp[x]--; } size -= 1; } void merge(GoodSet oth) { for (auto [x, c]: oth.mp) { insert(x, c); } } bool is_good() { // check if all elements from 0 to 2^k - 1 exists in the set return (int) mp.size() == (1 << k); } int get_value() { if (is_good()) return size; return 0; } }; struct MaxSet { // insert, erase and track max element map<int, int> mp; MaxSet() {} void insert(int x) { mp[x]++; } void erase(int x) { mp[x]--; if (mp[x] == 0) mp.erase(x); } int get_max() { return mp.rbegin() -> first; } }; struct DSU { // DSU for each power of 2 int n; int k; vector<int> par; vector<GoodSet> comp; MaxSet good_lengths; DSU() {} DSU(int _n, int _k): n(_n), k(_k) { par.resize(n + 1); comp.resize(n + 1); for (int i = 1; i <= n; i++) { par[i] = i; comp[i] = GoodSet(k); good_lengths.insert(comp[i].get_value()); } } int find(int u) { return par[u] = (par[u] == u ? 
u : find(par[u])); } void merge(int u, int v) { u = find(u); v = find(v); if (u == v) return; good_lengths.erase(comp[u].get_value()); good_lengths.erase(comp[v].get_value()); // small to large merging if (comp[u].mp.size() < comp[v].mp.size()) { comp[u].mp.swap(comp[v].mp); swap(comp[u].size, comp[v].size); } comp[u].merge(comp[v]); comp[v].mp.clear(); // clear to save up memory good_lengths.insert(comp[u].get_value()); par[v] = u; } // insert or erase an element from the component that u belongs to void update_in_component(int u, int x, bool insert = true) { u = find(u); good_lengths.erase(comp[u].get_value()); if (insert) comp[u].insert(x); else comp[u].erase(x); good_lengths.insert(comp[u].get_value()); } }; DSU f[18]; ll a[N]; // make it long long as total sum can be huge int id[Q], x[Q], ans[Q]; void solve() { int n, q; cin >> n >> q; for (int i = 1; i <= n; i++) { cin >> a[i]; } for (int i = 1; i <= q; i++) { cin >> id[i] >> x[i]; a[id[i]] += x[i]; } MaxSet se; for (int k = 0; (1 << k) <= n; k++) { f[k] = DSU(n, k); for (int i = 1; i <= n; i++) { if (a[i] < (1 << k)) { f[k].update_in_component(i, a[i], true); } } for (int i = 2; i <= n; i++) { if (a[i] < (1 << k) and a[i - 1] < (1 << k)) { f[k].merge(i - 1, i); } } se.insert(f[k].good_lengths.get_max()); } for (int qid = q; qid >= 1; qid--) { ans[qid] = se.get_max(); int i = id[qid], sub = x[qid]; for (int k = 0; (1 << k) <= n; k++) { se.erase(f[k].good_lengths.get_max()); if (a[i] < (1 << k)) f[k].update_in_component(i, a[i], false); if (a[i] - sub < (1 << k)) f[k].update_in_component(i, a[i] - sub, true); if (a[i] >= (1 << k) and a[i] - sub < (1 << k)) { if (i > 1 and a[i - 1] < (1 << k)) { f[k].merge(i - 1, i); } if (i + 1 <= n and a[i + 1] < (1 << k)) { f[k].merge(i, i + 1); } } se.insert(f[k].good_lengths.get_max()); } a[i] -= sub; } for (int i = 1; i <= q; i++) { cout << ans[i] << '\n'; } } int32_t main() { ios_base::sync_with_stdio(0); cin.tie(0); int t = 1; cin >> t; while (t--) { solve(); } return 
0; }
2050
A
Line Breaks
Kostya has a text $s$ consisting of $n$ words made up of Latin alphabet letters. He also has two strips on which he must write the text. The first strip can hold $m$ characters, while the second can hold as many as needed. Kostya must choose a number $x$ and write the first $x$ words from $s$ on the first strip, while all the remaining words are written on the second strip. To save space, the words are written without gaps, but each word must be entirely on one strip. Since space on the second strip is very valuable, Kostya asks you to choose the maximum possible number $x$ such that all words $s_1, s_2, \dots, s_x$ fit on the first strip of length $m$.
An important condition in the problem: we can only take the first $x$ words onto the first strip, and we cannot skip any word. The main idea is to accumulate the total length of the words as we add them one by one, and stop as soon as adding the next word would exceed the capacity of the first strip (which equals $m$). It is important to take into account the cases when all words fit on the first strip or none of them do, but the solution above handles these naturally.
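The counting loop can be sketched in a few lines of Python (the function name `max_words` is ours); it returns the largest $x$ such that the first $x$ words fit:

```python
def max_words(words, m):
    """Largest x such that the first x words fit into m characters."""
    total = 0
    for x, w in enumerate(words):
        total += len(w)
        if total > m:
            return x        # the (x+1)-th word no longer fits
    return len(words)       # everything fits
```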
[ "implementation" ]
800
def solve(): n, m = [int(i) for i in input().split()] ans = 0 for i in range(n): l = input() if len(l) <= m: m -= len(l) ans += 1 else: for i in range(i + 1, n): input() break print(ans) t = int(input()) for i in range(t): solve()
2050
B
Transfusion
You are given an array $a$ of length $n$. In one operation, you can pick an index $i$ from $2$ to $n-1$ inclusive, and do one of the following actions: - Decrease $a_{i-1}$ by $1$, then increase $a_{i+1}$ by $1$. - Decrease $a_{i+1}$ by $1$, then increase $a_{i-1}$ by $1$. After each operation, all the values must be non-negative. Can you make all the elements equal after any number of operations?
The main idea of this problem is that these operations only change elements on the positions with the same parity. So, we can solve for elements on odd and even positions independently. Let's make two arrays $od$ and $ev$ - the first one will consist of all the elements on the odd positions, and the second one will consist of all the elements on the even positions. Now we can rewrite given operations as: pick any array $od$ or $ev$, after that pick any two adjacent elements and subtract $1$ from one of these elements and add $1$ to another. In order to make all the elements in array $od$ equal, the sum of all elements in $od$ must be divisible by $|od|$, and also the sum of all the elements in $ev$ must be divisible by $|ev|$, where $|a|$ is the length of array $a$. And also to make all the elements of the initial array equal, $\frac{sum(od)}{|od|} = \frac{sum(ev)}{|ev|}$ must be satisfied. If all these conditions are satisfied, the answer is "YES", otherwise "NO".
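A minimal Python sketch of the three checks (the function name is ours; $n \ge 3$ is assumed so both parity classes are non-empty):

```python
def can_equalize(a):
    """True iff the array can be made constant with the allowed operations."""
    od = a[0::2]   # elements on odd 1-indexed positions
    ev = a[1::2]   # elements on even 1-indexed positions
    # each class must split evenly, and both averages must coincide
    if sum(od) % len(od) or sum(ev) % len(ev):
        return False
    return sum(od) // len(od) == sum(ev) // len(ev)
```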
[ "brute force", "greedy", "math" ]
1,100
#include <bits/stdc++.h> using namespace std; void solve() { int n; cin >> n; vector<int> a(n); for (int &x : a) cin >> x; long long ods = 0, evs = 0; for (int i = 0; i < n; i++) { if (i & 1) ods += a[i]; else evs += a[i]; } int odc = n / 2, evc = n / 2; if (n & 1) evc++; if (ods % odc != 0 || evs % evc != 0 || ods / odc != evs / evc) { cout << "NO"; return; } cout << "YES"; } int main() { int TESTS; cin >> TESTS; while (TESTS --> 0) { solve(); cout << '\n'; } return 0; }
2050
C
Uninteresting Number
You are given a number $n$ with a length of no more than $10^5$. You can perform the following operation any number of times: choose one of its digits, square it, and replace the original digit with the result. The result must be a digit (that is, if you choose the digit $x$, then the value of $x^2$ must be less than $10$). Is it possible to obtain a number that is divisible by $9$ through these operations?
The requirement that a digit must remain a digit imposes the following restrictions on transformations: we can transform $0$ into $0$, $1$ into $1$, $2$ into $4$, and $3$ into $9$. Any other digit squared will exceed $9$; therefore, it cannot be transformed. Transformations involving $0$ and $1$ are useless, leaving us with two possible actions: squaring the digit $2$ or the digit $3$. We will use the divisibility rule for $9$. It states that a number is divisible by $9$ if and only if the sum of its digits is divisible by $9$. Let's see how the sum of the digits will change with the possible transformations. If we square $2$, the sum of the digits increases by $2^2 - 2 = 2$, and if we square $3$, the sum of the digits increases by $3^2 - 3 = 6$. We will count the number of digits $2$ in the number and the number of digits $3$ in the number. We can choose how many of the available digits $2$ and $3$ we will transform. Transforming more than $8$ twos or more than $8$ threes is pointless, because the remainders modulo $9$ that these transformations add to the sum repeat with period $9$. Thus, the final solution looks like this: we calculate the sum of the digits in the number and count the number of digits $2$ and $3$. We will iterate over how many digits $2$ we change (possibly $0$, but no more than $8$), and how many digits $3$ we change (possibly $0$, but also no more than $8$). Let's say we changed $x$ digits $2$ and $y$ digits $3$; then the sum of the digits in the number increased by $x \cdot 2 + y \cdot 6$. If the new sum is divisible by $9$, the answer is "YES". If such a situation was never reached during the iteration, then the answer is "NO".
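The whole check fits in a few lines of Python (the function name is ours); it mirrors the iteration over at most $9 \times 9$ choices of $x$ and $y$:

```python
def divisible_by_9_possible(s):
    """Can the decimal string s be made divisible by 9 via the digit-squaring moves?"""
    total = sum(int(c) for c in s)
    twos, threes = s.count('2'), s.count('3')
    # trying more than 8 of either kind only repeats residues mod 9
    return any((total + 2 * x + 6 * y) % 9 == 0
               for x in range(min(twos, 8) + 1)
               for y in range(min(threes, 8) + 1))
```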
[ "brute force", "dp", "math" ]
1,200
def solve(): s = [int(x) for x in list(input())] sm = sum(s) twos = s.count(2) threes = s.count(3) for i in range(min(10, twos + 1)): for j in range(min(10, threes + 1)): if (sm + i * 2 + j * 6) % 9 == 0: print('YES') return print('NO') t = int(input()) for _ in range(t): solve()
2050
D
Digital string maximization
You are given a string $s$, consisting of digits from $0$ to $9$. In one operation, you can pick any digit in this string, except for $0$ or the leftmost digit, decrease it by $1$, and then swap it with the digit left to the picked. For example, in one operation from the string $1023$, you can get $1103$ or $1022$. Find the lexicographically maximum string you can obtain after any number of operations.
Let's look at digit $s_i$. We can see that we can't move it to the left more than $s_i$ times because it will be $0$ after. So, we can say that only digits on indices from $i$ to $i+9$ can stand on index $i$, because the maximum digit $9$ can be moved to the left no more than $9$ times. Thus, for each $i$ we can brute force all digits from $s_i$ to $s_{i+9}$ and pick such $j$ that $s_j - (j - i)$ is maximum; if we have multiple maximum options, we will minimize $j$. After that, we will move $s_j$ to the left until it is on index $i$.
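A direct Python sketch of this greedy (the function name is ours). Moving digit $s_j$ to position $i$ costs $j - i$ decrements, which is modeled by deleting it and re-inserting the decreased value:

```python
def maximize(s):
    """Lexicographically maximum string reachable by the decrease-and-swap moves."""
    s = list(map(int, s))
    n = len(s)
    for i in range(n):
        best, pos = s[i], i
        # only digits within the next 9 positions can reach index i
        for j in range(i + 1, min(i + 10, n)):
            if s[j] - (j - i) > best:
                best, pos = s[j] - (j - i), j
        del s[pos]           # equivalent to bubbling s[pos] left to index i
        s.insert(i, best)
    return ''.join(map(str, s))
```

For instance, `maximize("19")` performs one move ($9$ becomes $8$ and swaps left).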
[ "brute force", "greedy", "math", "strings" ]
1,300
#include <bits/stdc++.h> using namespace std; void solve() { string s; cin >> s; for (int i = 0; i < s.size(); i++) { int best = s[i] - '0', pos = i; for (int j = i; j < min(i + 10, (int) s.size()); j++) { if (s[j] - '0' - (j - i) > best) { best = s[j] - '0' - (j - i); pos = j; } } while (pos > i) { swap(s[pos], s[pos - 1]); pos--; } s[i] = char(best + '0'); } cout << s; } int main() { int TESTS = 1; cin >> TESTS; while (TESTS --> 0) { solve(); cout << '\n'; } return 0; }
2050
E
Three Strings
You are given three strings: $a$, $b$, and $c$, consisting of lowercase Latin letters. The string $c$ was obtained in the following way: - At each step, either string $a$ or string $b$ was randomly chosen, and the first character of the chosen string was removed from it and appended to the end of string $c$, until one of the strings ran out. After that, the remaining characters of the non-empty string were added to the end of $c$. - Then, a certain number of characters in string $c$ were randomly changed. For example, from the strings $a=\textcolor{red}{\text{abra}}$ and $b=\textcolor{blue}{\text{cada}}$, without character replacements, the strings $\textcolor{blue}{\text{ca}}\textcolor{red}{\text{ab}}\textcolor{blue}{\text{d}}\textcolor{red}{\text{ra}}\textcolor{blue}{\text{a}}$, $\textcolor{red}{\text{abra}}\textcolor{blue}{\text{cada}}$, $\textcolor{red}{\text{a}}\textcolor{blue}{\text{cada}}\textcolor{red}{\text{bra}}$ could be obtained. Find the minimum number of characters that could have been changed in string $c$.
Let's use the idea of dynamic programming. Let $dp[i][j]$ be the answer to the problem when considering string $a$ as its own prefix of length $i$, string $b$ as its own prefix of length $j$, and string $c$ as its own prefix of length $i+j$. Then the dynamic programming recurrence is easy: we need to iterate over where we took the next ($(i+j)$-th) character of string $c$. If the character is taken from string $a$, the answer is $dp[i - 1][j]$ if $a_i = c_{i+j}$, and $dp[i - 1][j] + 1$ otherwise (since we need to replace character $a_i$ with $c_{i+j}$). If it is taken from string $b$, the answer is calculated similarly: $dp[i][j - 1]$ if $b_j = c_{i+j}$, and $dp[i][j - 1] + 1$ otherwise. Thus, to obtain the minimum value of the current dynamic programming state, we need to take the minimum of the two obtained values. To get the answer, we need to take the value of the dynamic programming table at $dp[n][m]$, where $n$ is the length of string $a$ and $m$ is the length of string $b$. The final time complexity of the solution is $\mathcal{O}(n \cdot m)$ per test case.
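The recurrence translates directly into Python (names are ours); here `dp` is filled by pushing transitions forward from each reachable state, which is equivalent to the pull formulation above:

```python
def min_changes(a, b, c):
    """Minimum number of changed characters, assuming len(c) == len(a) + len(b)."""
    n, m = len(a), len(b)
    INF = float('inf')
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0
    for i in range(n + 1):
        for j in range(m + 1):
            if dp[i][j] == INF:
                continue
            if i < n:   # take the next character of c from a
                dp[i + 1][j] = min(dp[i + 1][j], dp[i][j] + (a[i] != c[i + j]))
            if j < m:   # take it from b
                dp[i][j + 1] = min(dp[i][j + 1], dp[i][j] + (b[j] != c[i + j]))
    return dp[n][m]
```

The string $\text{caabdraa}$ from the statement is reachable from $\text{abra}$ and $\text{cada}$ without any changes.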
[ "dp", "implementation", "strings" ]
1,500
#include <iostream> #include <algorithm> static const int inf = 1e9; void solve() { std::string a, b, res; std::cin >> a >> b >> res; int n = (int) a.size(), m = (int) b.size(); int dp[n + 1][m + 1]; std::fill(&dp[0][0], &dp[0][0] + (n + 1) * (m + 1), inf); dp[0][0] = 0; for (int i = 0; i < n; i++) { dp[i + 1][0] = dp[i][0] + (a[i] != res[i]); } for (int j = 0; j < m; j++) { dp[0][j + 1] = dp[0][j] + (b[j] != res[j]); } for (int i = 1; i <= n; i++) { for (int j = 1; j <= m; j++) { dp[i][j] = std::min(dp[i - 1][j] + (a[i - 1] != res[i + j - 1]), dp[i][j - 1] + (b[j - 1] != res[i + j - 1])); } } std::cout << dp[n][m] << std::endl; } int main() { int tests; std::cin >> tests; while (tests--) { solve(); } }
2050
F
Maximum modulo equality
You are given an array $a$ of length $n$ and $q$ queries $l$, $r$. For each query, find the maximum possible $m$, such that all elements $a_l$, $a_{l+1}$, ..., $a_r$ are equal modulo $m$. In other words, $a_l \bmod m = a_{l+1} \bmod m = \dots = a_r \bmod m$, where $a \bmod b$ — is the remainder of division $a$ by $b$. In particular, when $m$ can be infinite, print $0$.
Let's look at two arbitrary integers $x$ and $y$. Now we want to find the maximum $m$ which satisfies $x\bmod m = y\bmod m$. If $x\bmod m = y\bmod m$, then $|x - y|\bmod m = 0$, because $x$ and $y$ have the same remainder modulo $m$. That means that any $m$ which is a divisor of $|x - y|$ will satisfy the required condition. Now let's generalize the idea we've obtained to the segment: $a_l\bmod m = a_{l+1}\bmod m = \dots = a_r\bmod m$ means that $a_l\bmod m = a_{l+1}\bmod m$, and $a_{l+1}\bmod m = a_{l+2}\bmod m$, and ..., and $a_{r-1}\bmod m = a_r\bmod m$. So, $m$ must be a divisor of $|a_l - a_{l+1}|$, $|a_{l+1} - a_{l+2}|$, ..., $|a_{r-1} - a_r|$ at the same time. That means that $m$ should be GCD($|a_l - a_{l+1}|$, $|a_{l+1} - a_{l+2}|$, ..., $|a_{r-1} - a_r|$), where GCD is the greatest common divisor. $m = 0$ when all the elements on the segment [$l; r$] are equal. Let's build an array consisting of the differences of adjacent elements; now we can use a sparse table to find the GCD on segments efficiently.
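The reduction itself can be checked with a per-query linear scan before adding any data structure (Python, names ours); the sparse table in the full solution only removes this scan:

```python
from math import gcd

def max_modulus(a, l, r):
    """Answer one query on the 1-based segment [l, r]:
    gcd of adjacent differences, or 0 when all elements are equal."""
    g = 0
    for i in range(l - 1, r - 1):
        g = gcd(g, abs(a[i] - a[i + 1]))
    return g
```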
[ "data structures", "divide and conquer", "math", "number theory" ]
1,700
#include <bits/stdc++.h> using namespace std; const int LOGN = 20; vector<vector<int>> stGCD; int get_gcd(int l, int r) { int k = __lg(r - l + 1); return __gcd(stGCD[k][l], stGCD[k][r - (1 << k) + 1]); } void solve() { stGCD.clear(); int n, q; cin >> n >> q; vector<int> a(n); for (int &x : a) cin >> x; vector<int> b; for (int i = 1; i < n; i++) b.push_back(abs(a[i - 1] - a[i])); stGCD.resize(LOGN, vector<int>(b.size(), 1)); for (int i = 0; i < b.size(); i++) stGCD[0][i] = b[i]; for (int i = 1; i < LOGN; i++) for (int j = 0; j + (1 << (i - 1)) < b.size(); j++) stGCD[i][j] = __gcd(stGCD[i - 1][j], stGCD[i - 1][j + (1 << (i - 1))]); while (q--) { int l, r; cin >> l >> r; if (l == r) { cout << 0 << " "; continue; } l--; r -= 2; int gcd = get_gcd(l, r); cout << gcd << " "; } } int main() { int TESTS = 1; cin >> TESTS; while (TESTS --> 0) { solve(); cout << "\n"; } return 0; }
2050
G
Tree Destruction
Given a tree$^{\text{∗}}$ with $n$ vertices. You can choose two vertices $a$ and $b$ once and remove all vertices on the path from $a$ to $b$, including the vertices themselves. If you choose $a=b$, only one vertex will be removed. Your task is to find the maximum number of connected components$^{\text{†}}$ that can be formed after removing the path from the tree. \begin{footnotesize} $^{\text{∗}}$A tree is a connected graph without cycles. $^{\text{†}}$A connected component is a set of vertices such that there is a path along the edges from any vertex to any other vertex in the set (and it is not possible to reach vertices not belonging to this set) \end{footnotesize}
Let's choose some vertices $a$ and $b$, between which there are $k$ edges. Then, when removing this path, the tree will split into $s - 2 \cdot k$ connected components, where $s$ is the sum of the degrees of the vertices on the path (this is exactly how many edges leave the chosen path, since each of the $k$ path edges is counted twice in $s$). Let's suspend the tree from vertex $1$, and for each vertex $v$ of the given tree, we will calculate two values: $\text{dp[v].x}$ - the best answer if the path starts at vertex $v$ and ends in its subtree, and $\text{dp[v].y}$ - the best answer if the path passes through vertex $v$ from one of its children to another. The recalculations of the dynamic programming will be similar to those used in finding the diameter of the tree using dynamic programming. The answer will be the largest value among all $\text{dp[v].x}$ and $\text{dp[v].y}$.
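The counting formula $s - 2k$ can be sanity-checked by brute force on small trees (Python, helper names ours):

```python
def components_after_removing_path(n, edges, path):
    """Count connected components left after deleting the vertices of `path`."""
    removed = set(path)
    adj = {v: [] for v in range(1, n + 1)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, comps = set(removed), 0
    for s in range(1, n + 1):
        if s in seen:
            continue
        comps += 1                      # flood-fill one remaining component
        stack = [s]
        seen.add(s)
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
    return comps

def path_formula(edges, path):
    """s - 2k: s = sum of degrees on the path, k = number of path edges."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sum(deg[v] for v in path) - 2 * (len(path) - 1)
```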
[ "dfs and similar", "dp", "trees" ]
1,900
#include <bits/stdc++.h> #define int long long #define x first #define y second using namespace std; void dfs(int v, int p, vector<vector<int>> &sl, vector<pair<int, int>> &dp){ dp[v].x = sl[v].size(); int m1 = -1, m2 = -1; for(int u: sl[v]){ if(u == p){ continue; } dfs(u, v, sl, dp); dp[v].x = max(dp[v].x, dp[u].x + (int)sl[v].size() - 2); m2 = max(m2, dp[u].x); if(m1 < m2) swap(m1, m2); } dp[v].y = dp[v].x; if(m2 != -1){ dp[v].y = m1 + m2 + sl[v].size() - 4; } } void solve(int tc){ int n; cin >> n; vector<vector<int>> sl(n); for(int i = 1; i < n; ++i){ int u, v; cin >> u >> v; sl[--u].emplace_back(--v); sl[v].emplace_back(u); } vector<pair<int, int>> dp(n); dfs(0, 0, sl, dp); int ans = 0; for(int i = 0; i < n; ++i){ ans = max(ans, max(dp[i].x, dp[i].y)); } cout << ans; } bool multi = true; signed main() { int t = 1; if (multi)cin >> t; for (int i = 1; i <= t; ++i) { solve(i); cout << "\n"; } return 0; }
2051
A
Preparing for the Olympiad
Monocarp and Stereocarp are preparing for the Olympiad. There are $n$ days left until the Olympiad. On the $i$-th day, if Monocarp plans to practice, he will solve $a_i$ problems. Similarly, if Stereocarp plans to practice on the same day, he will solve $b_i$ problems. Monocarp can train on any day he wants. However, Stereocarp watches Monocarp and follows a different schedule: if Monocarp trained on day $i$ and $i < n$, then Stereocarp will train on day $(i+1)$. Monocarp wants to organize his training process in a way that the difference between the number of problems he solves and the number of problems Stereocarp solves is as large as possible. Formally, Monocarp wants to maximize the value of $(m-s)$, where $m$ is the number of problems he solves, and $s$ is the number of problems Stereocarp solves. Help Monocarp determine the maximum possible difference in the number of solved problems between them.
Let's consider what contribution each day that Monocarp trains makes to the difference. For each day except the last one, if Monocarp trains on that day, then the number of problems he has solved will increase by $a_i$, and the number of problems solved by Stereocarp will increase by $b_{i+1}$. Therefore, if $a_i - b_{i+1} > 0$, it is beneficial for Monocarp to train on the $i$-th day; otherwise, it is not beneficial. On the last day, it is always beneficial to train, as Stereocarp will not solve anything on the day following it.
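The resulting greedy is essentially one line of Python (function name ours):

```python
def max_diff(a, b):
    """Maximum (m - s): always train the last day; day i only if a[i] > b[i+1]."""
    return a[-1] + sum(max(0, a[i] - b[i + 1]) for i in range(len(a) - 1))
```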
[ "greedy" ]
800
#include <bits/stdc++.h> using namespace std; int main() { int t; cin >> t; while (t--) { int n; cin >> n; vector<int> a(n), b(n); for (auto &x : a) cin >> x; for (auto &x : b) cin >> x; int ans = a[n - 1]; for (int i = 0; i < n - 1; ++i) ans += max(0, a[i] - b[i + 1]); cout << ans << '\n'; } }
2051
B
Journey
Monocarp decided to embark on a long hiking journey. He decided that on the first day he would walk $a$ kilometers, on the second day he would walk $b$ kilometers, on the third day he would walk $c$ kilometers, on the fourth day, just like on the first, he would walk $a$ kilometers, on the fifth day, just like on the second, he would walk $b$ kilometers, on the sixth day, just like on the third, he would walk $c$ kilometers, and so on. Monocarp will complete his journey on the day when he has walked at least $n$ kilometers in total. Your task is to determine the day on which Monocarp will complete his journey.
Processing every day separately is too slow. Instead, we will use the fact that every three days, the number of kilometers Monocarp walks repeats, and process days in "triples". During every three days, Monocarp walks exactly $(a+b+c)$ kilometers, so we can do the following: while $n \ge a + b + c$, subtract $(a+b+c)$ from $n$ and increase the answer by $3$; and finally, process the remaining days, since there will be at most $3$ of them. However, this works in $O(n)$ per test case, so it is still too slow. We need to improve the part when we subtract $a+b+c$ from $n$ until $n$ becomes less than this sum. Does this sound familiar? The number of times we need to subtract $(a+b+c)$ from $n$ is exactly $\lfloor \frac{n}{a+b+c} \rfloor$, and the number we get after that is $n \bmod (a+b+c)$ by definition of integer division and remainder. This allows us to process all "triples" in $O(1)$, instead of running a loop in $O(n)$. The solution we get works in $O(1)$ per test case.
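The $O(1)$ computation in Python (function name ours):

```python
def journey_day(n, a, b, c):
    """Day on which the cumulative distance first reaches n."""
    s = a + b + c
    full, rem = divmod(n, s)   # whole triples of days, then the leftover distance
    day = full * 3
    if rem == 0:
        return day
    if rem <= a:
        return day + 1
    if rem <= a + b:
        return day + 2
    return day + 3
```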
[ "binary search", "math" ]
800
t = int(input()) for i in range(t): n, a, b, c = map(int, input().split()) sum = a + b + c d = n // sum * 3 if n % sum == 0: print(d) elif n % sum <= a: print(d + 1) elif n % sum <= a + b: print(d + 2) else: print(d + 3)
2051
C
Preparing for the Exam
Monocarp is preparing for his first exam at the university. There are $n$ different questions which can be asked during the exam, numbered from $1$ to $n$. There are $m$ different lists of questions; each list consists of exactly $n-1$ different questions. Each list $i$ is characterized by one integer $a_i$, which is the index of the only question which is \textbf{not present} in the $i$-th list. For example, if $n = 4$ and $a_i = 3$, the $i$-th list contains questions $[1, 2, 4]$. During the exam, Monocarp will receive one of these $m$ lists of questions. Then, the professor will make Monocarp answer all questions from the list. So, Monocarp will pass only if he knows all questions from the list. Monocarp knows the answers for $k$ questions $q_1, q_2, \dots, q_k$. For each list, determine if Monocarp will pass the exam if he receives that list.
For every question list, we should check if Monocarp knows all questions from the list, i. e. all numbers $1, 2, \dots, a_i - 1, a_i + 1, \dots, n$ appear in the list $[q_1, q_2, \dots, q_k]$. Searching for every number in the list $q$ naively is too slow; instead, we can make a boolean array such that the $j$-th element in it is true if and only if Monocarp knows the $j$-th question. That way, we can check if an integer appears in the list $q$ in $O(1)$. However, that is not enough, since every list of questions contains $O(n)$ questions, and there are $O(n)$ lists. We need to use the fact that every list contains exactly $n-1$ questions somehow. If Monocarp knows all $n$ questions, he can answer any question list (since he knows everything). If Monocarp knows $n-2$ questions or less, he cannot pass at all, since every question list contains more questions than he knows. The only case that's left is when $k=n-1$, i. e. Monocarp knows all questions except for one. Let's analyze it in more detail (the next two paragraphs will assume that $k=n-1$). Since every question list has the same size as the set of questions known by Monocarp, then in order for Monocarp to pass the exam, these two sets of questions must be equal. However, checking that they are equal by iterating on their contents is too slow; instead, we will check that two sets of questions are different by using the elements which are absent from them. Let's check if Monocarp knows the question $a_i$. If he does, then the $i$-th list of questions is different from the set of questions he knows, so he can't pass. But if Monocarp doesn't know the $a_i$-th question, then he knows every question which is not $a_i$, so he can pass. So, Monocarp can pass with the $i$-th question list if and only if he does not know the $a_i$-th question, and this can be checked in $O(1)$. This way, we get a solution working in $O(n)$ on each test case.
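The case analysis translates directly into Python (names ours); `known` is the set $\{q_1, \dots, q_k\}$ and `lists` holds the values $a_i$:

```python
def pass_list(n, known, lists):
    """Return 1/0 per list: can Monocarp pass with that list?"""
    k = len(known)
    if k == n:              # knows everything
        return [1] * len(lists)
    if k < n - 1:           # every list is larger than what he knows
        return [0] * len(lists)
    # k == n - 1: passes iff the missing question is exactly a_i
    return [0 if a in known else 1 for a in lists]
```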
[ "constructive algorithms", "implementation" ]
1,000
for _ in range(int(input())): n, m, k = map(int, input().split()) a = list(map(int, input().split())) q = list(map(int, input().split())) used = [False for i in range(n + 1)] for i in q: used[i] = True l = len(q) for i in range(m): if l == n or (l == n-1 and not used[a[i]]): print(1, end='') else: print(0, end='') print()
2051
D
Counting Pairs
You are given a sequence $a$, consisting of $n$ integers, where the $i$-th element of the sequence is equal to $a_i$. You are also given two integers $x$ and $y$ ($x \le y$). A pair of integers $(i, j)$ is considered interesting if the following conditions are met: - $1 \le i < j \le n$; - if you simultaneously remove the elements at positions $i$ and $j$ from the sequence $a$, the sum of the remaining elements is at least $x$ and at most $y$. Your task is to determine the number of interesting pairs of integers for the given sequence $a$.
There is a common trick in problems of the form "count something on segment $[l, r]$": calculate the answer for $[0, r]$, and then subtract the answer for $[0, l-1]$. We can use this trick in our problem as follows: calculate the number of pairs $i,j$ such that the sum of all other elements is less than $y+1$, and subtract the number of pairs such that the sum is less than $x$. Now we need to solve the following problem: given an array and an integer $x$, calculate the number of ways to choose $i,j$ ($1 \le i < j \le n$) so that the sum of all elements, except for $a_i$ and $a_j$, is less than $x$. Naive solution (iterate on the pair, calculate the sum of remaining elements) works in $O(n^3)$. It can be improved to $O(n^2)$ if, instead of calculating the sum of remaining elements in $O(n)$, we do it in $O(1)$: if we remove $a_i$ and $a_j$, the remaining elements sum up to $s - a_i - a_j$, where $s$ is the sum of all elements. However, $O(n^2)$ is still too slow. For every $i$, let's try to calculate the number of elements $j$ which "match" it faster. If we sort the array, the answer won't change; but in a sorted array, for every $i$, all possible values of $j$ form a suffix of the array (if $s - a_i - a_j < x$ and $a_{j+1} \ge a_j$, then $s - a_i - a_{j+1} < x$). So, for every $i$, let's find the minimum $j'$ such that $s - a_i - a_{j'} < x$; all $j \ge j'$ are possible "matches" for $i$. This can be done with two pointers method: when we decrease $i$, the index $j'$ won't decrease. Unfortunately, this method has an issue. We need to calculate only pairs where $i<j$, but this method doesn't maintain this constraint. However, this issue can be easily resolved. First, let's get rid of pairs where $i=j$. To do so, simply calculate the number of indices $i$ such that $s - 2a_i < x$. Then, let's get rid of pairs where $i>j$. 
For every such pair, there is a pair with $i<j$ where these two indices are swapped (and vice versa), so we just need to divide the number of pairs by $2$. Now we have a solution working in $O(n \log n)$ for each test case. Instead of two pointers, you can use binary search, the complexity will be the same.
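The core helper — counting pairs whose remaining sum is below a bound with two pointers, then removing the $i = j$ cases and halving — can be sketched in Python (the function name is ours):

```python
def pairs_sum_less(a, x):
    """Number of pairs i < j such that sum(a) - a[i] - a[j] < x."""
    a = sorted(a)
    n, s = len(a), sum(a)
    j, cnt = 0, 0
    for i in range(n - 1, -1, -1):       # a[i] decreases, so j only moves right
        while j < n and s - a[i] - a[j] >= x:
            j += 1
        cnt += n - j                     # ordered pairs, possibly with i == j
    cnt -= sum(1 for v in a if s - 2 * v < x)   # drop i == j
    return cnt // 2                      # each unordered pair was counted twice
```

The answer to the original problem is then `pairs_sum_less(a, y + 1) - pairs_sum_less(a, x)`.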
[ "binary search", "sortings", "two pointers" ]
1,200
def calcLessThanX(a, x): n = len(a) s = sum(a) j = 0 ans = 0 for i in range(n-1, -1, -1): while j < n and s - a[i] - a[j] >= x: j += 1 ans += (n - j) for i in range(n): if s - a[i] - a[i] < x: ans -= 1 return ans // 2 for _ in range(int(input())): n, x, y = map(int, input().split()) a = list(map(int, input().split())) a = sorted(a) print(calcLessThanX(a, y+1) - calcLessThanX(a, x))
2051
E
Best Price
A batch of Christmas trees has arrived at the largest store in Berland. $n$ customers have already come to the store, wanting to buy them. Before the sales begin, the store needs to determine the price for one tree (the price is the same for all customers). To do this, the store has some information about each customer. For the $i$-th customer, two integers $a_i$ and $b_i$ are known, which define their behavior: - if the price of the product is at most $a_i$, the customer will buy a tree and leave a positive review; - otherwise, if the price of the product is at most $b_i$, the customer will buy a tree but leave a negative review; - otherwise, the customer will not buy a tree at all. Your task is to calculate the maximum possible earnings for the store, given that it can receive no more than $k$ negative reviews.
First, let's design a solution in $O(n^2)$. We can solve the problem in $O(n \cdot \max b_i)$ if we iterate on the price $p$ and, for every price, calculate the number of trees bought and the number of negative reviews. However, we don't need to check every possible price from $1$ to $\max b_i$: it is enough to check every integer in the union of $a$ and $b$ (i.e. check every $a_i$, and then every $b_i$). Why is this always optimal? Suppose some integer price $p$ which is not present in the union of $a$ and $b$ is optimal. Then, if we use $p+1$ instead of $p$, the status of each customer stays the same, but we get more money for each tree we sell. So, it is enough to check the elements of $a$ and the elements of $b$ as possible prices. This works in $O(n^2)$; we need to speed it up. I will explain two different methods that allow us to check every price faster. Event processing (or sweep line): in short, we process all possible prices in ascending order, and when we go from one price to the next, we update the customers who no longer want to buy a tree at the new price, and the customers who will leave a negative review if the price is increased. One way to implement it is as follows. For every customer, create two "events": "when the price exceeds $a_i$, the customer will leave a negative review" and "when the price exceeds $b_i$, the customer will neither buy a tree nor leave a negative review". These events can be implemented as pairs of integers $(a_i, 1)$ and $(b_i, 2)$. Then, we can sort the events and process them from left to right in sorted order, maintaining the number of trees sold and the number of negative reviews. When we process the event with price $p$, the change it makes comes into effect only when the price exceeds $p$, so we should first update the answer, then apply the change from the event. 
Furthermore, all events with the same price value should be processed at the same time (so if there are multiple events with the same price value, you don't update the answer after processing only some of them). All of this is a bit complicated to implement, which is why I would like to show you an alternative approach. For every price $p$, we need to calculate two values: the number of trees bought, i.e. the number of customers $i$ such that $b_i \ge p$; and the number of negative reviews, i.e. the number of customers $i$ such that $a_i < p \le b_i$. The first one can be calculated in $O(\log n)$ with binary search if we sort the array $b$. The second one is a bit trickier. Let's calculate it as follows: take the number of trees bought, and subtract the number of trees bought without a negative review (which is the number of customers $i$ such that $a_i \ge p$). If we sort both arrays $a$ and $b$, this value can also be computed in $O(\log n)$ with binary search. So, we spend $O(\log n)$ time to check one possible price, and the number of different prices we have to check is up to $2n$, so this solution works in $O(n \log n)$.
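The alternative binary-search approach can be sketched in a few lines of Python (names are mine; the assumption $a_i \le b_i$ comes from the statement):

```python
from bisect import bisect_left

def max_earnings(a, b, k):
    """For each candidate price p (an element of a or b):
       bought   = #customers with b_i >= p
       negative = bought - #customers with a_i >= p   (those with a_i < p <= b_i)
    Keep the best p * bought among prices with negative <= k."""
    n = len(a)
    sa, sb = sorted(a), sorted(b)
    best = 0
    for p in set(a) | set(b):
        bought = n - bisect_left(sb, p)        # customers with b_i >= p
        no_review = n - bisect_left(sa, p)     # customers with a_i >= p
        negative = bought - no_review
        if negative <= k:
            best = max(best, p * bought)
    return best
```

For example, with customers $(a, b) = (1, 3)$ and $(2, 5)$: allowing no negative reviews forces price $1$ (earnings $2$), while allowing one negative review permits price $5$ (earnings $5$).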
[ "binary search", "brute force", "data structures", "greedy", "sortings" ]
1,600
#include <bits/stdc++.h>
using namespace std;

int main() {
    ios::sync_with_stdio(false);
    cin.tie(0);
    int t;
    cin >> t;
    while (t--) {
        int n, k;
        cin >> n >> k;
        vector<int> a(n), b(n);
        for (auto &x : a) cin >> x;
        for (auto &x : b) cin >> x;
        vector<pair<int, int>> ev;
        for (int i = 0; i < n; ++i) {
            ev.emplace_back(a[i], 1);
            ev.emplace_back(b[i], 2);
        }
        sort(ev.begin(), ev.end());
        long long ans = 0;
        int cnt = n, bad = 0;
        for (int i = 0; i < 2 * n;) {
            auto [x, y] = ev[i];
            if (bad <= k) ans = max(ans, x * 1LL * cnt);
            while (i < 2 * n && ev[i].first == x) {
                bad += (ev[i].second == 1);
                bad -= (ev[i].second == 2);
                cnt -= (ev[i].second == 2);
                ++i;
            }
        }
        cout << ans << '\n';
    }
}
2051
F
Joker
Consider a deck of $n$ cards. The positions in the deck are numbered from $1$ to $n$ from top to bottom. A joker is located at position $m$. $q$ operations are applied sequentially to the deck. During the $i$-th operation, you need to take the card at position $a_i$ and move it either to the beginning or to the end of the deck. For example, if the deck is $[2, 1, 3, 5, 4]$, and $a_i=2$, then after the operation the deck will be either $[1, 2, 3, 5, 4]$ (the card from the second position moved to the beginning) or $[2, 3, 5, 4, 1]$ (the card from the second position moved to the end). Your task is to calculate the number of distinct positions where the joker can be after each operation.
Let's represent the positions where the joker can be as a set of non-overlapping segments $[l_1, r_1]$, $[l_2, r_2]$, .... Let's consider what happens to a segment $[l, r]$ after applying the $i$-th operation: if $a_i < l$, the segment becomes $[l - 1, r]$ (moving the $a_i$-th card to the front does not change the joker's position, while moving it to the back shifts the position up by $1$); if $a_i > r$, the segment becomes $[l, r + 1]$ (moving the $a_i$-th card to the front shifts the position down by $1$, while moving it to the back does not change it); if $l \le a_i \le r$, consider the $3$ subsegments where the joker can be located: positions from the subsegment $[l, a_i - 1]$ move to $[l, a_i]$ (similarly to the case $a_i > r$); positions from the subsegment $[a_i + 1, r]$ move to $[a_i, r]$ (similarly to the case $a_i < l$); the joker at position $a_i$ moves to one of two positions: $1$ or $n$. Thus, in this case, the segment $[l, r]$ remains, but we need to add two new segments ($[1, 1]$ and $[n, n]$) to the set. Note that when $l = r = a_i$, the current segment disappears. At first glance, it seems that this solution works in $O(nq)$, since the number of segments can be $O(n)$, and we need to update each of them. However, it is not difficult to notice that there cannot be more than $3$ segments. Specifically: the initial segment $[m, m]$, which expands to the left and right; the segment $[1, 1]$, which expands only to the right; and the segment $[n, n]$, which expands only to the left.
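The case analysis above can be cross-checked on tiny decks with a direct set simulation (this is my illustration, not the intended constant-segment solution; it stores positions explicitly, so it is only suitable for small $n$):

```python
def joker_positions(n, m, ops):
    """Track the exact set of possible joker positions on a deck of size n,
    joker initially at m, applying the three cases from the editorial."""
    cur = {m}
    out = []
    for a in ops:
        nxt = set()
        for p in cur:
            if a == p:
                nxt.add(1)        # the joker itself is moved to the front...
                nxt.add(n)        # ...or to the back
            elif a < p:
                nxt.add(p)        # card before the joker moved to the front
                nxt.add(p - 1)    # card before the joker moved to the back
            else:
                nxt.add(p)        # card after the joker moved to the back
                nxt.add(p + 1)    # card after the joker moved to the front
        out.append(len(nxt))
        cur = nxt
    return out
```

For instance, with $n = 5$, $m = 3$: after operation $a_1 = 2$ the joker can be at $\{2, 3\}$, and after a further $a_2 = 4$ at $\{2, 3, 4\}$.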
[ "brute force", "greedy", "implementation", "math" ]
2,000
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n, m, q;
        cin >> n >> m >> q;
        vector<pair<int, int>> segs({{1, -q}, {m, m}, {n + q + 1, n}});
        while (q--) {
            int x;
            cin >> x;
            bool ins = false;
            for (auto& [l, r] : segs) {
                if (x < l) l = max(1, l - 1);
                else if (x > r) r = min(n, r + 1);
                else {
                    ins = true;
                    if (l == r) l = n + q, r = -q;
                }
            }
            if (ins) {
                segs[0] = {1, max(segs[0].second, 1)};
                segs[2] = {min(segs[2].first, n), n};
            }
            int lf = 0, rg = -1, ans = 0;
            for (auto [l, r] : segs) {
                if (l > r) continue;
                if (l > rg) {
                    ans += max(0, rg - lf + 1);
                    lf = l;
                    rg = r;
                }
                rg = max(rg, r);
            }
            ans += max(0, rg - lf + 1);
            cout << ans << ' ';
        }
        cout << '\n';
    }
}
2051
G
Snakes
Suppose you play a game where the game field looks like a strip of $1 \times 10^9$ square cells, numbered from $1$ to $10^9$. You have $n$ snakes (numbered from $1$ to $n$) you need to place into some cells. Initially, each snake occupies exactly one cell, and you can't place more than one snake into one cell. After that, the game starts. The game lasts for $q$ seconds. There are two types of events that may happen each second: - snake $s_i$ enlarges: if snake $s_i$ occupied cells $[l, r]$, it enlarges to a segment $[l, r + 1]$; - snake $s_i$ shrinks: if snake $s_i$ occupied cells $[l, r]$, it shrinks to a segment $[l + 1, r]$. Each second, exactly one of the events happens. If at any moment of time, any snake runs into some obstacle (either another snake or the end of the strip), you lose. Otherwise, you win with the score equal to the maximum cell occupied by any snake so far. What is the minimum possible score you can achieve?
Note that when you place snakes on the strip in some order, they form a permutation. And when you fix that permutation, you can place them greedily. In other words, when you know in what order you'll place the snakes, it's always optimal to place them as close to each other as possible: the bigger the initial distances, the bigger the final position of the farthest snake (and thus the final score). We can even calculate that final score precisely: it's equal to $1 + \texttt{sum of distances} + \texttt{number of times the last snake enlarges}$. So, we can solve the task in two steps. First, let's calculate $\mathrm{minDist}[i][j]$: the minimum possible distance between snakes $i$ and $j$ if we plan to place snake $j$ right after snake $i$. Suppose the initial gap between these snakes is $x$. Let's scan through all events: each time the $i$-th snake enlarges, the gap decreases ($x' = x - 1$); each time the $j$-th snake shrinks, the gap increases ($x' = x + 1$); if at any moment $x'$ becomes negative, we lose. In other words, we needed a bigger initial $x$. More formally: for each event $t$ let $e_t = 1$ if the gap increases, $e_t = -1$ if it decreases, and $e_t = 0$ otherwise. Then after the $t$-th event the current gap equals $x' = x + \sum_{u=1}^{t}{e_u}$. The following inequality should hold for each $t$: $x + \sum_{u=1}^{t}{e_u} \ge 0$, i.e. $x \ge -\sum_{u=1}^{t}{e_u}$. So, if we find the minimum $\min\limits_{1 \le t \le q}{\sum_{u=1}^{t}{e_u}}$, we can set the initial distance to minus this minimum plus one: $\mathrm{minDist}[i][j] = -\min\limits_{1 \le t \le q}{\sum_{u=1}^{t}{e_u}} + 1$. Now we know the minimum distances between neighboring snakes, so we can find the optimal order. Let's do it with a bitmask dp $d[mask][lst]$, since all we need to know in each state is the set of already placed snakes $mask$ and the last snake $lst$. 
Transitions are straightforward: let's just choose the next snake to place and place it at distance $\mathrm{minDist}$. The initial states are $d[2^i][i] = 1$ for each $i$. The answer is $\min\limits_{1 \le i \le n}{d[2^n - 1][i] + \texttt{number of times snake $i$ enlarges}}$, i. e. we just choose the last snake. The time complexity is $O(n^2 q)$ for the first part (or $O(n q)$ if written more optimally) plus $O(2^n n^2)$ for the second part.
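The prefix-sum formula for the minimum distance can be sketched as follows (a minimal Python illustration; event encoding and names are mine):

```python
def min_gap(events, i, j):
    """Minimum initial distance so that snake j can be placed right after snake i.
    events: list of (snake_id, '+' | '-').  The gap loses 1 when i enlarges and
    gains 1 when j shrinks; the answer is -(minimum prefix sum) + 1."""
    pref = cmin = 0
    for s, c in events:
        if s == i and c == '+':
            pref -= 1
        elif s == j and c == '-':
            pref += 1
        cmin = min(cmin, pref)
    return -cmin + 1
```

For example, if snake `'A'` enlarges twice before snake `'B'` shrinks once, the prefix sums are $-1, -2, -1$, so `min_gap` returns $3$.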
[ "bitmasks", "dp", "dsu", "graphs" ]
2,100
#include <bits/stdc++.h>
using namespace std;

#define fore(i, l, r) for (int i = int(l); i < int(r); i++)
#define sz(a) int((a).size())

const int INF = int(1e9);

int n, q;
vector<int> id, ch;

bool read() {
    if (!(cin >> n >> q))
        return false;
    id.resize(q);
    ch.resize(q);
    fore (i, 0, q) {
        char c;
        cin >> id[i] >> c;
        id[i]--;
        ch[i] = c == '+' ? 1 : -1;
    }
    return true;
}

int getDist(int s, int t) {
    int pSum = 0, cMin = 0;
    fore (e, 0, q) {
        if (id[e] == t) pSum += ch[e] < 0;
        if (id[e] == s) pSum -= ch[e] > 0;
        cMin = min(cMin, pSum);
    }
    return -cMin + 1;
}

inline void solve() {
    vector<vector<int>> minDist(n, vector<int>(n, INF));
    fore (i, 0, n) fore (j, 0, n) minDist[i][j] = getDist(i, j);
    vector<int> len(n, 0);
    fore (e, 0, q) len[id[e]] += ch[e] > 0;
    vector<vector<int>> d(1 << n, vector<int>(n, INF));
    fore (i, 0, n) d[1 << i][i] = 1;
    fore (mask, 1, 1 << n) fore (lst, 0, n) {
        if (d[mask][lst] == INF) continue;
        fore (nxt, 0, n) {
            if ((mask >> nxt) & 1) continue;
            int nmask = mask | (1 << nxt);
            d[nmask][nxt] = min(d[nmask][nxt], d[mask][lst] + minDist[lst][nxt]);
        }
    }
    int ans = INF;
    fore (lst, 0, n) ans = min(ans, d[(1 << n) - 1][lst] + len[lst]);
    cout << ans << endl;
}

int main() {
#ifdef _DEBUG
    freopen("input.txt", "r", stdin);
    int tt = clock();
#endif
    ios_base::sync_with_stdio(false);
    if (read()) {
        solve();
#ifdef _DEBUG
        cerr << "TIME = " << clock() - tt << endl;
        tt = clock();
#endif
    }
    return 0;
}
2053
A
Tender Carpenter
\begin{quote} I would use a firework to announce, a wave to bid farewell, and a bow to say thanks: bygones are bygones; not only on the following path will I be walking leisurely and joyfully, but also the footsteps won't halt as time never leaves out flowing; for in the next year, we will meet again. \hfill — Cocoly1990, Goodbye 2022 \end{quote} In his dream, Cocoly would go on a long holiday with no worries around him. So he would try out for many new things, such as... being a carpenter. To learn it well, Cocoly decides to become an apprentice of Master, but in front of him lies a hard task waiting for him to solve. Cocoly is given an array $a_1, a_2,\ldots, a_n$. Master calls a set of integers $S$ stable if and only if, for any possible $u$, $v$, and $w$ from the set $S$ (note that $u$, $v$, and $w$ do not necessarily have to be pairwise distinct), sticks of length $u$, $v$, and $w$ can form a non-degenerate triangle$^{\text{∗}}$. Cocoly is asked to partition the array $a$ into several (possibly, $1$ or $n$) \textbf{non-empty} continuous subsegments$^{\text{†}}$, such that: for each of the subsegments, the set containing all the elements in it is stable. Master wants Cocoly to partition $a$ in \textbf{at least two} different$^{\text{‡}}$ ways. You have to help him determine whether it is possible. \begin{footnotesize} $^{\text{∗}}$A triangle with side lengths $x$, $y$, and $z$ is called non-degenerate if and only if: - $x + y > z$, - $y + z > x$, and - $z + x > y$. $^{\text{†}}$A sequence $b$ is a subsegment of a sequence $c$ if $b$ can be obtained from $c$ by the deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end. 
$^{\text{‡}}$Two partitions are considered different if and only if at least one of the following holds: - the numbers of continuous subsegments split in two partitions are different; - there is an integer $k$ such that the lengths of the $k$-th subsegment in two partitions are different. \end{footnotesize}
Can you find a partition that is always valid? Suppose you are given a set $S$. How do you judge whether $S$ is stable in $\mathcal O(\lvert S\rvert)$? Are very large sets really necessary? Note that we always have the partition $[a_1], [a_2], \dots, [a_n]$, since $(x, x, x)$ always forms a non-degenerate (equilateral) triangle. So we focus on finding a second partition in which not all continuous subsegments have length $1$. One can also note that if a set $S$ is stable, then every $T \subsetneq S$ ($T \neq \varnothing$) is also stable. Short proof: if there exist $u, v, w \in T$ such that $(u, v, w)$ doesn't form a non-degenerate triangle, then $u, v, w \in S$ as well, so $S$ is not stable, a contradiction! Hence, if such a partition exists, we can always split long continuous subsegments into shorter parts while the partition remains valid. Therefore, it's enough to check the case in which there is one subsegment of length $2$ and the rest have length $1$. So, we should output NO if and only if for all $1 \leq i < n$, $2\min(a_i, a_{i+1}) \leq \max(a_i, a_{i+1})$.
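The final criterion fits in one line of Python (a sketch; the function name is mine):

```python
def can_partition_two_ways(a):
    """YES iff some adjacent pair (a_i, a_{i+1}) satisfies 2*min > max,
    i.e. the two-element set {a_i, a_{i+1}} is stable."""
    return any(2 * min(x, y) > max(x, y) for x, y in zip(a, a[1:]))
```

For instance, $[2, 3, 5, 7]$ admits a second partition (pair $(2, 3)$ is stable), while $[1, 2, 4, 8]$ does not.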
[ "dp", "geometry", "greedy", "math" ]
800
#include <bits/stdc++.h>

#define MAXN 1001
int a[MAXN];

void solve() {
    int n;
    std::cin >> n;
    for (int i = 1; i <= n; ++i) std::cin >> a[i];
    for (int i = 1; i < n; ++i)
        if (2 * std::min(a[i], a[i + 1]) > std::max(a[i], a[i + 1])) {
            std::cout << "YES\n";
            return;
        }
    std::cout << "NO\n";
}

int main() {
    std::ios::sync_with_stdio(false);
    std::cin.tie(nullptr), std::cout.tie(nullptr);
    int t;
    std::cin >> t;
    while (t--) solve();
    return 0;
}
2053
B
Outstanding Impressionist
\begin{quote} If it was so, then let's make it a deal... \hfill — MayDay, Gentleness \end{quote} Even after copying the paintings from famous artists for ten years, unfortunately, Eric is still unable to become a skillful impressionist painter. He wants to forget something, but the white bear phenomenon just keeps hanging over him. Eric still remembers $n$ pieces of impressions in the form of an integer array. He records them as $w_1, w_2, \ldots, w_n$. However, he has a poor memory of the impressions. For each $1 \leq i \leq n$, he can only remember that $l_i \leq w_i \leq r_i$. Eric believes that impression $i$ is unique if and only if there exists a possible array $w_1, w_2, \ldots, w_n$ such that $w_i \neq w_j$ holds for all $1 \leq j \leq n$ with $j \neq i$. Please help Eric determine whether impression $i$ is unique for every $1 \leq i \leq n$, \textbf{independently} for each $i$. Perhaps your judgment can help rewrite the final story.
What if for all $1 \leq i \leq n$, $l_i \ne r_i$ holds? How do you prove it? Use prefix sums or similar to optimize your solution. For each $1 \leq i \leq n$ and each $l_i \leq x \leq r_i$, we want to check whether impression $i$ can be unique at the value $x$. Note that for each $j \neq i$, we can always switch $w_j$ to a value different from $x$ if $l_j \neq r_j$, since there are at least two options. Therefore, it is impossible if and only if there exists a $1 \leq j \leq n$ with $j \neq i$ such that $l_j = r_j = x$. Let $a_v$ be the number of indices $k$ with $1 \leq k \leq n$ and $l_k = r_k = v$. If $l_i \neq r_i$, impression $i$ cannot be made unique if and only if $a_v \geq 1$ for all $l_i \leq v \leq r_i$; otherwise ($l_i = r_i$), it cannot be unique if and only if $a_{l_i} \geq 2$. This can all be checked quickly with prefix sums, so the overall time complexity is $\mathcal O(\sum n)$.
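The prefix-sum check can be sketched in Python as follows (variable names are mine; values are assumed to fit in $[1, 2n]$ as in the constraints):

```python
def unique_flags(ranges):
    """ranges: list of (l_i, r_i).  Returns a bool per impression:
    True iff it can be made unique."""
    n = len(ranges)
    m = 2 * n  # values fit in [1, 2n]
    fixed = [0] * (m + 1)        # fixed[v] = #impressions pinned to value v
    for l, r in ranges:
        if l == r:
            fixed[l] += 1
    # pref[v] = #values in [1, v] occupied by some pinned impression
    pref = [0] * (m + 1)
    for v in range(1, m + 1):
        pref[v] = pref[v - 1] + (1 if fixed[v] > 0 else 0)
    res = []
    for l, r in ranges:
        if l == r:
            res.append(fixed[l] <= 1)                  # only itself is pinned there
        else:
            res.append(pref[r] - pref[l - 1] < r - l + 1)  # some value in [l, r] is free
    return res
```

For example, two impressions both pinned to value $1$ block each other, while a flexible impression can dodge a single pinned one.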
[ "binary search", "brute force", "data structures", "greedy" ]
1,200
#include <bits/stdc++.h>

#define MAXN 400001
int l[MAXN], r[MAXN], sum[MAXN], cnt[MAXN];

void solve() {
    int n;
    std::cin >> n;
    for (int i = 1; i <= 2 * n; ++i) sum[i] = cnt[i] = 0;
    for (int i = 1; i <= n; ++i) {
        std::cin >> l[i] >> r[i];
        if (l[i] == r[i]) sum[l[i]] = 1, ++cnt[l[i]];
    }
    for (int i = 2; i <= 2 * n; ++i) sum[i] += sum[i - 1];
    for (int i = 1; i <= n; ++i)
        std::cout << ((l[i] == r[i] ? cnt[l[i]] <= 1
                                    : sum[r[i]] - sum[l[i] - 1] < r[i] - l[i] + 1)
                          ? "1"
                          : "0");
    std::cout << '\n';
}

int main() {
    std::ios::sync_with_stdio(false);
    std::cin.tie(nullptr), std::cout.tie(nullptr);
    int t;
    std::cin >> t;
    while (t--) solve();
    return 0;
}
2053
C
Bewitching Stargazer
\begin{quote} I'm praying for owning a transparent heart; as well as eyes with tears more than enough... \hfill — Escape Plan, Brightest Star in the Dark \end{quote} Iris looked at the stars and a beautiful problem emerged in her mind. She is inviting you to solve it so that a meteor shower is believed to form. There are $n$ stars in the sky, arranged in a row. Iris has a telescope, which she uses to look at the stars. Initially, Iris observes stars in the segment $[1, n]$, and she has a lucky value of $0$. Iris wants to look for the star in the middle position for each segment $[l, r]$ that she observes. So the following recursive procedure is used: - First, she will calculate $m = \left\lfloor \frac{l+r}{2} \right\rfloor$. - If the length of the segment (i.e. $r - l + 1$) is even, Iris will divide it into two equally long segments $[l, m]$ and $[m+1, r]$ for further observation. - Otherwise, Iris will aim the telescope at star $m$, and her lucky value will increase by $m$; subsequently, if $l \neq r$, Iris will continue to observe two segments $[l, m-1]$ and $[m+1, r]$. Iris is a bit lazy. She defines her laziness by an integer $k$: as the observation progresses, she will not continue to observe any segment $[l, r]$ with a length \textbf{strictly less than} $k$. In this case, please predict her final lucky value.
Process many segments simultaneously. What kind of segments do we process at a time? Those of the same length. The key observation: if we call the process of splitting a large segment into two smaller segments a round, then during the $i$-th round of observation all segments have the same length, and the number of rounds does not exceed $\mathcal O(\log n)$. The $k$ restriction is equivalent to saying that only a certain prefix of the rounds is computed. Here are some different approaches: (Most succinct) The distribution of segments after round 1 is centrally symmetric; moreover, $x$ and $y$ being centrally symmetric implies $x + y = n + 1$, so the answer is simple to compute by directly simulating the number of segments and their length. If a segment $[l, r]$ is split into $[l, m - 1]$ and $[m + 1, r]$, its left-endpoint sum changes from $l$ to $2l + \frac{r - l}{2} + 1$, and since $(r - l)$ is fixed within a round, the sum of the left endpoints of all segments can be maintained similarly. The following recursive method also works: the answer for $n$ can be recovered from the answer for $\lfloor \frac{n}{2} \rfloor$. The time complexity is $\mathcal O(t\log n)$. This is written by _TernaryTree_.
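The halving simulation can be written directly and cross-checked against the recursive definition from the statement (a Python sketch; names are mine):

```python
def lucky(n, k):
    """Simulate rounds: all segments in a round share one length; symmetric
    middles pair up to n + 1, so the total is (n + 1) * S / 2, where S counts
    the middles taken, weighted by the number of segments in that round."""
    s = 0
    segs = 1
    length = n
    while length >= k:
        if length & 1:      # odd-length segments contribute their middles
            s += segs
        length //= 2
        segs *= 2
    return (n + 1) * s // 2

def brute(l, r, k):
    """Direct recursion following the statement, for cross-checking."""
    if r - l + 1 < k:
        return 0
    m = (l + r) // 2
    if (r - l + 1) % 2 == 0:
        return brute(l, m, k) + brute(m + 1, r, k)
    res = m
    if l != r:
        res += brute(l, m - 1, k) + brute(m + 1, r, k)
    return res
```

For example, `lucky(3, 1)` gives $6$ (stars $2$, then $1$ and $3$), matching `brute(1, 3, 1)`.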
[ "bitmasks", "divide and conquer", "dp", "math" ]
1,500
#include <bits/stdc++.h>
#define int long long
using namespace std;

int T;
int n, k;

signed main() {
    cin >> T;
    while (T--) {
        cin >> n >> k;
        int mul = n + 1, sum = 0, cur = 1;
        while (n >= k) {
            if (n & 1) sum += cur;
            n >>= 1;
            cur <<= 1;
        }
        cout << mul * sum / 2 << endl;
    }
    return 0;
}
2053
D
Refined Product Optimality
\begin{quote} As a tester, when my solution has a different output from the example during testing, I suspect the author first. \hfill — Chris, a comment \end{quote} Although Iris occasionally sets a problem where the solution is possibly wrong, she still insists on creating problems with her imagination; after all, everyone has always been on the road with their stubbornness... And like ever before, Iris has set a problem to which she gave a wrong solution, but Chris is always supposed to save it! You are going to play the role of Chris now: - Chris is given two arrays $a$ and $b$, both consisting of $n$ integers. - Iris is interested in the \textbf{largest} possible value of $P = \prod\limits_{i=1}^n \min(a_i, b_i)$ after an arbitrary rearrangement of $b$. Note that she only wants to know the maximum value of $P$, and \textbf{no} actual rearrangement is performed on $b$. - There will be $q$ modifications. Each modification can be denoted by two integers $o$ and $x$ ($o$ is either $1$ or $2$, $1 \leq x \leq n$). If $o = 1$, then Iris will increase $a_x$ by $1$; otherwise, she will increase $b_x$ by $1$. - Iris asks Chris the maximum value of $P$ for $q + 1$ times: once before any modification, then after every modification. - Since $P$ might be huge, Chris only needs to calculate it modulo $998\,244\,353$. Chris soon worked out this problem, but he was so tired that he fell asleep. Besides saying thanks to Chris, now it is your turn to write a program to calculate the answers for given input data. \textbf{Note}: since the input and output are large, you may need to optimize them for this problem. For example, in C++, it is enough to use the following lines at the start of the main() function: \begin{verbatim} int main() { std::ios::sync_with_stdio(false); std::cin.tie(nullptr); std::cout.tie(nullptr); } \end{verbatim}
What if $q = 0$? How do you keep the array sorted? The problem makes no difference when both $a$ and $b$ can be rearranged. Let the rearranged versions of $a$ and $b$ be $c$ and $d$ respectively. If $q = 0$, we can take $c = \operatorname{SORTED}(a_1, a_2, \ldots, a_n)$ and $d = \operatorname{SORTED}(b_1, b_2, \ldots, b_n)$. It can be proved that this reaches the maximum value: if not, then there must be some pair $(i, j)$ such that $c_i < c_j$ and $d_i > d_j$. Since $\min(c_i, d_i) \cdot \min(c_j, d_j) = c_i \cdot \min(c_j, d_j) \leq c_i \cdot \min(c_j, d_i) = \min(c_i, d_j) \cdot \min(c_j, d_i)$, we can swap $d_i$ and $d_j$ without decreasing the product. Consider a modification, which increments a single element by $1$. Without loss of generality, let $c_x$ be increased by $1$ (the processing for $d$ is the same). If $c_x < c_{x+1}$, then after the modification $c_x \leq c_{x+1}$ still holds, which is fine. Otherwise, we can fix the array $c$ with a single round of insertion sort: we repeatedly swap $c_x$ and $c_{x+1}$ and set $x \gets x + 1$ until $c_x < c_{x+1}$ (or $x = n$), and thus the array remains sorted after the increment. In fact, the swap operation does nothing in the above process: in these cases, $c_x = c_{x+1}$ holds! So we can just set $x'$ to the maximum $k$ such that $c_k = c_x$, and then increase $c_{x'}$ by $1$, after which $c$ is still sorted. Such $x'$ can be found with a simple binary search, so the problem is solved in $\mathcal O(n\log n + q(\log p + \log n))$ per test case.
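One update step can be sketched in Python (my names; `pow(x, MOD - 2, MOD)` computes the modular inverse since the modulus is prime):

```python
from bisect import bisect_right

MOD = 998244353

def apply_increment(c, other, res, val):
    """One modification.  c: sorted copy of the array being modified (in place),
    other: the other sorted array, res: current product mod MOD, val: the
    pre-increment value a_x.  Increments the LAST copy of val in c so c stays
    sorted, dividing out the old factor of the product when it changes."""
    p = bisect_right(c, val) - 1        # last position holding `val`
    if c[p] < other[p]:                 # the factor min(c[p], other[p]) changes
        res = res * pow(c[p], MOD - 2, MOD) % MOD * (c[p] + 1) % MOD
    c[p] += 1
    return res
```

For example, with `c = [1, 2, 2]` and `other = [2, 2, 3]` the product of mins is $1 \cdot 2 \cdot 2 = 4$; incrementing a value $2$ bumps the last copy, giving `c = [1, 2, 3]` and product $1 \cdot 2 \cdot 3 = 6$.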
[ "binary search", "data structures", "greedy", "math", "schedules", "sortings" ]
1,700
#include <bits/stdc++.h>

constexpr int MOD = 998244353;

int qpow(int a, int x = MOD - 2) {
    int res = 1;
    for (; x; x >>= 1, a = 1ll * a * a % MOD)
        if (x & 1) res = 1ll * res * a % MOD;
    return res;
}

#define MAXN 200001
int a[MAXN], b[MAXN], c[MAXN], d[MAXN];

void solve() {
    int n, q, res = 1;
    std::cin >> n >> q;
    for (int i = 1; i <= n; ++i) std::cin >> a[i], c[i] = a[i];
    for (int i = 1; i <= n; ++i) std::cin >> b[i], d[i] = b[i];
    std::sort(c + 1, c + n + 1), std::sort(d + 1, d + n + 1);
    for (int i = 1; i <= n; ++i) res = 1ll * res * std::min(c[i], d[i]) % MOD;
    std::cout << res << " \n"[q == 0];
    for (int i = 1, op, x; i <= q; ++i) {
        std::cin >> op >> x;
        if (op == 1) {
            int p = std::upper_bound(c + 1, c + n + 1, a[x]) - c - 1;
            if (c[p] < d[p]) res = 1ll * res * qpow(c[p]) % MOD * (c[p] + 1) % MOD;
            ++a[x], ++c[p];
        } else {
            int p = std::upper_bound(d + 1, d + n + 1, b[x]) - d - 1;
            if (d[p] < c[p]) res = 1ll * res * qpow(d[p]) % MOD * (d[p] + 1) % MOD;
            ++b[x], ++d[p];
        }
        std::cout << res << " \n"[i == q];
    }
}

int main() {
    std::ios::sync_with_stdio(false);
    std::cin.tie(nullptr), std::cout.tie(nullptr);
    int t;
    std::cin >> t;
    while (t--) solve();
    return 0;
}
2053
E
Resourceful Caterpillar Sequence
\begin{quote} Endless Repeating 7 Days \hfill — r-906, Panopticon \end{quote} There is a tree consisting of $n$ vertices. Let a caterpillar be denoted by an integer pair $(p, q)$ ($1 \leq p, q \leq n$, $p \neq q$): its head is at vertex $p$, its tail is at vertex $q$, and it dominates all the vertices on the simple path from $p$ to $q$ (including $p$ and $q$). The caterpillar sequence of $(p, q)$ is defined as the sequence consisting only of the vertices on the simple path, sorted in the ascending order of the distance to $p$. Nora and Aron are taking turns moving the caterpillar, with Nora going first. Both players will be using his or her own optimal strategy: - They will play to make himself or herself win; - However, if it is impossible, they will play to prevent the other person from winning (thus, the game will end in a tie). In Nora's turn, she must choose a vertex $u$ adjacent to vertex $p$, which is not dominated by the caterpillar, and move all the vertices in it by one edge towards vertex $u$$^{\text{∗}}$. In Aron's turn, he must choose a vertex $v$ adjacent to vertex $q$, which is not dominated by the caterpillar, and move all the vertices in it by one edge towards vertex $v$. Note that the moves allowed to the two players are different. Whenever $p$ is a leaf$^{\text{†}}$, Nora wins$^{\text{‡}}$. Whenever $q$ is a leaf, Aron wins. If either initially both $p$ and $q$ are leaves, or after $10^{100}$ turns the game has not ended, the result is a tie. Please count the number of integer pairs $(p, q)$ with $1 \leq p, q \leq n$ and $p \neq q$ such that, if the caterpillar is initially $(p, q)$, Aron wins the game. \begin{footnotesize} $^{\text{∗}}$In other words: Let the current caterpillar sequence be $c_1, c_2, \ldots, c_k$, then after the move, the new caterpillar sequence becomes $d(u, c_1), d(u, c_2), \ldots, d(u, c_k)$. Here, $d(x, y)$ is the next vertex on the simple path from $y$ to $x$. 
$^{\text{†}}$In a tree, a vertex is called a leaf if and only if its degree is $1$. $^{\text{‡}}$Therefore, Nora never fails to choose a vertex $u$ when the game has not ended. The same goes for Aron. \end{footnotesize}
Suppose somebody wins. In which round does he or she win? A player can always undo what the opponent did in the previous turn. Can you find the necessary and sufficient condition for $(p, q)$ to be a caterpillar that makes Aron win? Denote Nora's first move as round $1$, Aron's first move as round $2$, and so on. Suppose a player does not have a winning strategy in the $k$-th round, but has one in the $(k + 2)$-th round: this can be shown impossible, because the other player can always undo the last move, so that the status is the same as it was before the $k$-th round. Therefore: if a player wins in the $k$-th round, we claim that $k \leq 2$. Given $p, q$, let's determine who will eventually win the game. If both $p$ and $q$ are leaves, the result is a tie. If $p$ is a leaf while $q$ is not, Nora wins. If $q$ is a leaf while $p$ is not, Aron wins. If neither $p$ nor $q$ is a leaf: Can $k = 1$? Nora wins if and only if $p$ is adjacent to a leaf. Can $k = 2$? Aron wins if and only if $p$ is not adjacent to a leaf, and $f(p, q)$ is adjacent to a leaf. Otherwise, the result is a tie. The counting part can also be solved easily in $\mathcal O(n)$. Denote $c$ as the number of leaves. The initial answer is $c \cdot (n - c)$, covering the third case. For the fourth case, we can enumerate $m = f(p, q)$, which is adjacent to at least one leaf. Given $m$, $q$ must be a non-leaf neighbor of $m$; let the number of such $q$ be $k$. Each potential $p$ (a non-leaf vertex whose neighbors are all non-leaf vertices too) is counted exactly $k - 1$ times over the $k$ candidates of $q$ (since $m$ must be on the simple path from $p$ to $q$), so the extra contributions are easy to calculate. 
(If you do not think that much, you can use some simple DP, which I will not elaborate here.)
[ "dfs and similar", "dp", "games", "graphs", "greedy", "trees" ]
1,900
#include <bits/stdc++.h>

#define MAXN 200001
std::vector<int> g[MAXN];
inline int deg(int u) { return g[u].size(); }
int d[MAXN];

void solve() {
    int n;
    std::cin >> n;
    long long ans = 0;
    for (int i = 1, u, v; i < n; ++i) {
        std::cin >> u >> v;
        g[u].push_back(v), g[v].push_back(u);
    }
    int c1 = 0, c2 = 0;
    for (int i = 1; i <= n; ++i) c1 += (deg(i) == 1);
    ans += 1ll * c1 * (n - c1);
    for (int i = 1; i <= n; ++i)
        if (deg(i) > 1) {
            for (int v : g[i]) d[i] += (deg(v) > 1);
            c2 += (d[i] == deg(i));
        }
    for (int m = 1; m <= n; ++m)
        if (deg(m) > 1 && d[m] != deg(m)) ans += 1ll * c2 * (d[m] - 1);
    std::cout << ans << '\n';
    for (int i = 1; i <= n; ++i) (std::vector<int>()).swap(g[i]), d[i] = 0;
}

int main() {
    std::ios::sync_with_stdio(false);
    std::cin.tie(nullptr), std::cout.tie(nullptr);
    int t;
    std::cin >> t;
    while (t--) solve();
    return 0;
}
2053
F
Earnest Matrix Complement
\begin{quote} 3, 2, 1, ... We are the — RiOI Team! \hfill — Felix & All, Special Thanks 3 \end{quote} - Peter: Good news: My problem T311013 is approved! - $\delta$: I'm glad my computer had gone out of battery so that I wouldn't have participated in wyrqwq's round and gained a negative delta. - Felix: [thumbs_up] The problem statement concerning a removed song! - Aquawave: Do I mourn my Chemistry? - E.Space: ahh? - Trine: Bread. - Iris: So why am I always testing problems? Time will pass, and we might meet again. Looking back at the past, everybody has lived the life they wanted. Aquawave has a matrix $A$ of size $n\times m$, whose elements can only be integers in the range $[1, k]$, inclusive. In the matrix, some cells are already filled with an integer, while the rest are currently not filled, denoted by $-1$. You are going to fill in all the unfilled places in $A$. After that, let $c_{u,i}$ be the number of occurrences of element $u$ in the $i$-th row. Aquawave defines the beauty of the matrix as $$\sum_{u=1}^k \sum_{i=1}^{n-1} c_{u,i} \cdot c_{u,i+1}.$$ You have to find the maximum possible beauty of $A$ after filling in the blanks optimally.
What are we going to fill into the matrix? In other words, is there any relationship between the filled numbers? Try to come up with a naive DP solution with a large time complexity, such as $\mathcal O(nk^2)$. For many different numbers between two consecutive rows, however, the transition is almost the same. If $x' = \max(a, x + b)$ and $x'' = \max(c, x' + d)$, then $x'' = \max(\max(a + d, c), x + b + d)$. Conclusion: For each row, an optimal solution exists such that the newly filled-in numbers are all the same. Proof: Consider fixing the rows $i - 1$ and $i + 1$, and observe all the newly filled-in numbers at row $i$. Then a new number $u$ brings a contribution of $c_{u, i-1} + c_{u, i+1}$, and it is clear that there exists a scheme taking the maximum value such that all the $u$ filled in are equal. Adjusting row by row leads to the above conclusion. Consider DP. Let $f_{i,j}$ denote the maximum contribution that can be achieved among the first $i$ rows (ignoring the initial contribution) when the empty elements in the $i$-th row are filled with $j$. Let $c_i$ be the number of $-1$ entries in the $i$-th row, and $d_{i,j}$ denote the number of elements $j$ in the $i$-th row initially. The transfer is as follows: $f_{i, j} = \max(\max\limits_{1 \leq w \leq k}(f_{i-1, w} + c_i \cdot d_{i-1, w} + c_{i-1} \cdot d_{i-1, j}), f_{i-1, j} + (d_{i, j} + c_i) \cdot (d_{i-1, j} + c_{i-1}) - d_{i, j}d_{i-1, j}).$ In addition to being able to optimize the above transition to $\mathcal O(nk)$, the present problem in a matrix has a good property. Specifically, for the same $i$, there are only $\mathcal O(m)$ values of $j$ such that $d_{i,j} \neq 0$! If $d_{i,j} = 0$ and $d_{i-1, j} = 0$, the original transfer can be viewed as $f_{i,j} = \max(\max\limits_{1 \leq w \leq k}(f_{i-1, w}), f_{i-1, j} + c_i \cdot c_{i-1}).$ This can be seen as a global modification in the form of $x \gets \max(a, x + b)$.
The tags are composable in $\mathcal O(1)$; otherwise, we can update the new $dp_j$ by brute force for the $\mathcal O(m)$ special positions. Therefore, this problem is solved in $\mathcal O(nm)$. We decided to let every segment tree solution pass comfortably, so we set small constraints and a large TL. Bonus hint for implementation: always use $\max(a, dp_j + b)$ to recover the real value.
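The global modification $x \gets \max(a, x + b)$ and its $\mathcal O(1)$ composition can be sketched as follows (an illustrative sketch; the struct and function names are ours):

```cpp
#include <algorithm>

// A lazy tag representing the map x -> max(a, x + b).
struct Tag {
    long long a, b;
};

// Apply the tag to a value.
long long apply(const Tag &t, long long x) {
    return std::max(t.a, x + t.b);
}

// Compose two tags: first apply `f`, then `g`.  If x' = max(a, x + b)
// and x'' = max(c, x' + d), then x'' = max(max(a + d, c), x + b + d).
Tag compose(const Tag &g, const Tag &f) {
    return Tag{std::max(f.a + g.b, g.a), f.b + g.b};
}
```

Maintaining one composed tag over all untouched $j$ is what lets the per-row work stay proportional to the number of special positions.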
[ "brute force", "data structures", "dp", "greedy", "implementation", "math" ]
2,600
#include <bits/stdc++.h> namespace FastIO { char buf[1 << 21], *p1 = buf, *p2 = buf; #define getchar() (p1 == p2 && (p1 = buf, p2 = (p1 + fread(buf, 1, 1 << 21, stdin))) == p1 ? EOF : *p1++) template <typename T> inline T read() { T x = 0, w = 0; char ch = getchar(); while (ch < '0' || ch > '9') w |= (ch == '-'), ch = getchar(); while ('0' <= ch && ch <= '9') x = x * 10 + (ch ^ '0'), ch = getchar(); return w ? -x : x; } template <typename T> inline void write(T x) { if (!x) return; write<T>(x / 10), putchar((x % 10) ^ '0'); } template <typename T> inline void print(T x) { if (x > 0) write<T>(x); else if (x < 0) putchar('-'), write<T>(-x); else putchar('0'); } template <typename T> inline void print(T x, char en) { print<T>(x), putchar(en); } #undef getchar }; using namespace FastIO; using ll = long long; void solve() { int n = read<int>(), m = read<int>(), k = read<int>(); ll cntP = 0, cntQ = 0; std::vector<int> vep(m), veq(m), cntp(k + 1), cntq(k + 1), vis(k + 1); std::vector<ll> dp(k + 1); ll a = 0, b = 0, v = 0, ext = 0; // max(a, x + b). cntp[0] = cntq[0] = m; auto get = [&](int x) -> int { return (~x) ? 
x : 0; }; auto read_q = [&]() -> void { for (int i = 0; i < m; ++i) --cntq[get(veq[i])]; for (int i = 0; i < m; ++i) ++cntq[get(veq[i] = read<int>())]; cntQ = cntq[0]; }; auto roll = [&]() -> void { std::swap(vep, veq), std::swap(cntp, cntq), std::swap(cntP, cntQ); }; auto chkmax = [&](ll &a, ll b) -> void { a = std::max(a, b); }; read_q(), roll(); for (int i = 2; i <= n; ++i) { read_q(); ll max_dp = std::max(a, v + b); for (int k : vep) if (~k) chkmax(max_dp, std::max(a, dp[k] + b) + cntP * cntq[k]); for (int k : veq) if (~k) chkmax(max_dp, std::max(a, dp[k] + b) + cntP * cntq[k]); for (int k : vep) if ((~k) && vis[k] != i) { vis[k] = i, ext += 1ll * cntp[k] * cntq[k]; dp[k] = std::max(a, dp[k] + b) + cntP * cntq[k] + cntQ * cntp[k] - b; chkmax(dp[k], max_dp + cntp[k] * cntQ - b - cntP * cntQ); chkmax(v, dp[k]); } for (int k : veq) if ((~k) && vis[k] != i) { vis[k] = i; dp[k] = std::max(a, dp[k] + b) + cntP * cntq[k] + cntQ * cntp[k] - b; chkmax(dp[k], max_dp + cntp[k] * cntQ - b - cntP * cntQ); chkmax(v, dp[k]); } a = std::max(max_dp, a + cntP * cntQ), b += cntP * cntQ; roll(); } print<ll>(std::max(a, v + b) + ext, '\n'); } int main() { int T = read<int>(); while (T--) solve(); return 0; }
2053
G
Naive String Splits
\begin{quote} And I will: love the world that you've adored; wish the smile that you've longed for. Your hand in mine as we explore, please take me to tomorrow's shore. \hfill — Faye Wong, As Wished \end{quote} Cocoly has a string $t$ of length $m$, consisting of lowercase English letters, and he would like to split it into parts. He calls a pair of strings $(x, y)$ beautiful if and only if there exists a sequence of strings $a_1, a_2, \ldots, a_k$, such that: - $t = a_1 + a_2 + \ldots + a_k$, where $+$ denotes string concatenation. - For each $1 \leq i \leq k$, at least one of the following holds: $a_i = x$, or $a_i = y$. Cocoly has another string $s$ of length $n$, consisting of lowercase English letters. Now, for each $1 \leq i < n$, Cocoly wants you to determine whether the pair of strings $(s_1s_2 \ldots s_i, \, s_{i+1}s_{i+2} \ldots s_n)$ is beautiful. \textbf{Note}: since the input and output are large, you may need to optimize them for this problem. For example, in C++, it is enough to use the following lines at the start of the main() function: \begin{verbatim} int main() { std::ios::sync_with_stdio(false); std::cin.tie(nullptr); std::cout.tie(nullptr); } \end{verbatim}
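The "beautiful" condition itself can be checked by a small brute-force DP, useful for validating a fast solution on tiny inputs (an illustrative sketch with our naming, far slower than the intended solution):

```cpp
#include <string>
#include <vector>

// Can t be written as a concatenation of copies of x and y?
// O(|t| * (|x| + |y|)) reachability DP over prefix lengths.
bool beautiful(const std::string &t, const std::string &x, const std::string &y) {
    int m = t.size();
    std::vector<char> ok(m + 1, 0);  // ok[i]: prefix of length i is splittable
    ok[0] = 1;
    for (int i = 1; i <= m; ++i) {
        if (i >= (int)x.size() && ok[i - x.size()] &&
            t.compare(i - x.size(), x.size(), x) == 0) ok[i] = 1;
        if (i >= (int)y.size() && ok[i - y.size()] &&
            t.compare(i - y.size(), y.size(), y) == 0) ok[i] = 1;
    }
    return ok[m];
}
```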
(If we do not have to be deterministic) We do not need any hard string algorithm. In what cases won't greedy work? Why? Is your brute force actually faster (maybe you can explain it by the harmonic series)? Let's call the prefix $s_1$ the short string and the suffix $s_2$ the long string. If $\lvert s_1\rvert\gt \lvert s_2\rvert$, swapping the order doesn't affect the answer. Consider a greedy approach. We first match a few short strings until we can't match anymore. If we can't match, we try discarding a few short strings and placing one long string. We enumerate how many short strings to discard. Find the first position where we can place a long string. We call one such placement a "matching process." The starting position of each "matching process" is the next position after the previous "matching process." However, this approach has a flaw. Below, I will explain the situation where this flaw occurs. It's not difficult to view the long string as several short strings concatenated with a "tail" at the end. Why only find the first position? Why wouldn't it be better to backtrack and discard a few more $s_1$'s before placing the long string? The diagram above is an example. The red line indicates the cutoff position for matching $s_1$. The three orange boxes represent the possible choices for attempting to match the long string. The last one is invalid, so skip it. Replace it with the "first valid position to place the long string" box. This box is correct. The difference between choosing the second box and the first box is that when we end the matching and proceed to the next round, the content we need to match is such that a suffix of $s_1$ has been moved to the beginning of $s_1$, as shown in the diagram. The alignment results in the following diagram. In the diagram, the case where $\lvert pre\rvert \gt \lvert suf\rvert$ is shown. The other case follows similarly.
$pre+suf=suf+pre$, and as can be seen from the diagram, here $pre+suf$ needs to be "misaligned but equal." This means that $s_1$ must have a period, and the "tail" must also be formed from this period. Additionally, the $\lvert s_1 \rvert$ characters at the beginning of $s_2$ must still be formed from the period. In summary, it is equivalent to $s_1$ and $s_2$ having a common period. In other words, backtracking twice can lead to a solution, but backtracking once may not always work. It is necessary for $s_1$ and $s_2$ to have a common period. Having a common period is too specific. First, the common period of these two strings must indeed be a period of the string $s$. By finding this period in advance, it essentially becomes a problem of solving an equation. Let $a=\lvert s_1\rvert$ and $b=\lvert s_2\rvert$; we need to check whether there are non-negative integer solutions $x$ and $y$ satisfying the equation $xa+yb=\lvert t \rvert$. This equation-solving part is easy: we enumerate $y$, of which there are $\mathcal O\left(\frac{m}{n}\right)$ candidate values, and we solve at most $n$ such equations, so the overall complexity is $\mathcal O(m)$. If we directly implement this greedy, the worst-case complexity is related to the harmonic series. Consider a case like $s_1 = \texttt{a}, s_2 = \texttt{aaaaaaaab}$, and $t = \texttt{aaaaaaaaaaaaaaaaaaaaaaaaab}$. In this case, we would need $\mathcal O\left(\frac{m}{\lvert s_1\rvert }\right)$ steps to finish matching the short string. Each backtrack requires at most $\mathcal O\left(\frac{m}{\lvert s_2\rvert }\right)$ attempts, each of which takes $\mathcal O\left(\frac{\lvert s_2\rvert }{\lvert s_1\rvert }\right)$. Thanks to orzdevinwang and crazy_sea for pointing it out! Yes, this problem can indeed be solved in linear time. First, let the prefix of the long string contain $c$ short strings.
Then, the first "valid position to place the long string" we backtrack to can only be obtained by backtracking either $c$ short strings or $c+1$ short strings. This is easy to understand. Backtracking $c$ short strings means matching the prefix of the long string, then adding the "tail" of the long string. Backtracking $c + 1$ short strings only happens when the "tail" part of the long string is a prefix of the short string. For example, the hack in the CF comment section: https://codeforces.com/blog/entry/136455?#comment-1234262 The correct output should be: When $k = 3$, $s_1 = \texttt{aba}$, $s_2 = \texttt{abaab}$, we first filled in 2 short strings, but if we backtrack only $c = 1$ short string, it will be judged as unsolvable. This is how the error in the std occurs. This situation only arises when the "tail" part of $s_2$ is a prefix of $s_1$. So, each time we calculate, we first use binary search to find this $c$, and the complexity is $\mathcal O\left(\sum\limits_{i=1}^n\log(\frac n i)\right)=\mathcal O(n)$. After that, backtracking only requires two checks. The backtracking process becomes linear. Next, let's accelerate the process of filling in short strings. The hack for brute-force filling has already been given above. If we directly use the method that everyone initially thought was correct, the "binary search for the number of short string occurrences" method, then this set of hacks forces an extra $\log$ factor. crazy_sea points out an alternative solution. Similar to block division, let $B = \frac{n}{\lvert s_1\rvert}$. We first try to match $B$ occurrences of $s_1$ at a time, meaning that each batch moves forward by $n$ positions. After that, the remaining part will not exceed $B$ occurrences, and we binary search to find how many are left. For the part where we moved forward by $B$ occurrences, we may have to backtrack at most once, and then the binary search takes $\mathcal O(\log B)$.
Each time, we advance by at least $\lvert s_2\rvert$ positions. The time complexity for calculating the answer becomes $\mathcal O\left(\frac{m}{\lvert s_2\rvert }\log B\right)$, and the total complexity, due to $\mathcal O\left(\sum\limits_{i=1}^n \log(\frac{n}{i})\right) = \mathcal O(n)$, is $\mathcal O(m)$. Thus, the problem is solved in linear time. During testing, we found that jqdai0815's solution is actually more brute-force than we thought (we also received something similar in contest). He specially handled the case in which $X$ and $T$ share a common shortest period, and for the rest of the $i$, he just used two queues to optimize the BFS progress (actually, changing it to std::priority_queue<int> or similar also works), and it passed, just seemingly with a larger constant factor. I'm curious whether anybody can hack it or prove that it is correct. Thanks in advance! This is written by Caylex.
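The equation-checking step in the common-period case ($xa + yb = \lvert t\rvert$ over non-negative integers) can be sketched as follows (illustrative, with our naming; enumerating $y$ takes $\mathcal O(L / b)$ steps):

```cpp
// Check whether x*a + y*b == L has a solution in non-negative
// integers x, y, assuming a, b >= 1.
bool hasNonNegativeSolution(long long a, long long b, long long L) {
    for (long long y = 0; y * b <= L; ++y)
        if ((L - y * b) % a == 0) return true;  // x = (L - y*b) / a >= 0
    return false;
}
```

Since $b = \lvert s_2\rvert \geq n / 2$, the loop runs $\mathcal O(m / n)$ times per split, which is the bound quoted above.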
[ "binary search", "brute force", "greedy", "hashing", "math", "number theory", "strings" ]
3,400
#include <bits/stdc++.h> using namespace std; template <int P> class mod_int { using Z = mod_int; private: static int mo(int x) { return x < 0 ? x + P : x; } public: int x; int val() const { return x; } mod_int() : x(0) {} template <class T> mod_int(const T &x_) : x(x_ >= 0 && x_ < P ? static_cast<int>(x_) : mo(static_cast<int>(x_ % P))) {} bool operator==(const Z &rhs) const { return x == rhs.x; } bool operator!=(const Z &rhs) const { return x != rhs.x; } Z operator-() const { return Z(x ? P - x : 0); } Z pow(long long k) const { Z res = 1, t = *this; while (k) { if (k & 1) res *= t; if (k >>= 1) t *= t; } return res; } Z &operator++() { x < P - 1 ? ++x : x = 0; return *this; } Z &operator--() { x ? --x : x = P - 1; return *this; } Z operator++(int) { Z ret = x; x < P - 1 ? ++x : x = 0; return ret; } Z operator--(int) { Z ret = x; x ? --x : x = P - 1; return ret; } Z inv() const { return pow(P - 2); } Z &operator+=(const Z &rhs) { (x += rhs.x) >= P && (x -= P); return *this; } Z &operator-=(const Z &rhs) { (x -= rhs.x) < 0 && (x += P); return *this; } Z operator-() { return -x; } Z &operator*=(const Z &rhs) { x = 1ULL * x * rhs.x % P; return *this; } Z &operator/=(const Z &rhs) { return *this *= rhs.inv(); } #define setO(T, o) \ friend T operator o(const Z &lhs, const Z &rhs) \ { \ Z res = lhs; \ return res o## = rhs; \ } setO(Z, +) setO(Z, -) setO(Z, *) setO(Z, /) #undef setO friend istream & operator>>(istream &is, mod_int &x) { long long tmp; is >> tmp; x = tmp; return is; } friend ostream &operator<<(ostream &os, const mod_int &x) { os << x.val(); return os; } }; typedef long long ll; typedef unsigned long long ull; mt19937 rnd(chrono::system_clock::now().time_since_epoch().count()); constexpr int p = 993244853; using Hash = mod_int<p>; // using Hash = ull; const Hash base = rnd() % 20091119 + 30; string s, t; Hash st; int mnlen; Hash S[5000020]; Hash T[5000020]; Hash pw[5000020]; // int doit; inline Hash SUM(const int l, const int r, const Hash *s) { return 
s[r] - s[l - 1] * pw[r - l + 1]; } int n, m; inline bool check(const int l, const int r, const Hash x, const int len, const Hash *S) { // doit++; return r - l + 1 >= len && SUM(l, l + len - 1, S) == x && SUM(l, r - len, S) == SUM(l + len, r, S); } inline bool calc(const int L1, const int R1, const int L2, const int R2, const int l1, const int l2) { if (mnlen && check(L1, R1, st, mnlen, S) && check(L2, R2, st, mnlen, S)) { if (check(1, m, st, mnlen, T)) { for (int i = 0; i <= m; i += l2) { if (!((m - i) % l1)) return 1; } return 0; } } const Hash s1 = SUM(L1, R1, S); const Hash s2 = SUM(L2, R2, S); int l = 1, r = l2 / l1, okcnt = 0; while (l <= r) { int mid = l + r >> 1; if (check(L2, L2 + mid * l1 - 1, s1, l1, S)) l = (okcnt = mid) + 1; else r = mid - 1; } const int tt = (n / l1); int ed = 0; while (ed <= m) { int L = 1, R = (m - ed) / l1, cnt = 0; int tcnt = 0; while (tcnt + tt <= R && check(ed + 1, ed + (tcnt + tt) * l1, s1, l1, T)) tcnt += tt; L = max(L, tcnt); R = min(R, tcnt + tt - 1); // cerr << ed << ' ' << L << ' ' << R << '\n'; while (L <= R) { int mid = L + R >> 1; if (check(ed + 1, ed + mid * l1, s1, l1, T)) L = (cnt = mid) + 1; else R = mid - 1; } if (ed + cnt * l1 + 1 > m) return 1; // int st = ed + (cnt - okcnt) * l1 + 1; // if (st > ed && st + l2 - 1 <= m && SUM(st, st + l2 - 1, T) == s2) // { // ed = st + l2 - 1; // continue; // } // return 0; bool found = 0; for (int st = ed + (cnt - okcnt) * l1 + 1, cnt = 1; cnt <= 2 && st > ed; st -= l1, cnt++) { if (st + l2 - 1 <= m && SUM(st, st + l2 - 1, T) == s2) { ed = st + l2 - 1; found = 1; break; } } if (!found) return 0; } return 1; } void solve() { // build(); mnlen = 0; cin >> n >> m >> s >> t; s = ' ' + s; t = ' ' + t; for (int i = 1; i <= n; i++) S[i] = S[i - 1] * base + Hash(s[i] - 'a' + 1); for (int i = 1; i <= m; i++) T[i] = T[i - 1] * base + Hash(t[i] - 'a' + 1); for (int i = 1; i <= n; i++) { if (n % i == 0 && check(1, n, S[i], i, S)) { st = S[i]; mnlen = i; break; } } for (int i = 1; i < n; 
i++) putchar('0' ^ (i <= n - i ? calc(1, i, i + 1, n, i, n - i) : calc(i + 1, n, 1, i, n - i, i))); putchar('\n'); // cerr << doit << '\n'; } int main() { // freopen("out.txt", "r", stdin); pw[0] = Hash(1); for (int i = 1; i <= 5'000'000; i++) pw[i] = pw[i - 1] * base; ios::sync_with_stdio(0), cin.tie(0), cout.tie(0); int t; cin >> t; while (t--) solve(); // cerr << fixed << setprecision(10) << 1.0 * clock() / CLOCKS_PER_SEC << '\n'; return 0; }
2053
H
Delicate Anti-monotonous Operations
\begin{quote} I shall be looking for you who would be out of Existence. \hfill — HyuN, Disorder \end{quote} There are always many repetitive tasks in life. Iris always dislikes them, so she refuses to repeat them. However, time cannot be turned back; we only have to move forward. Formally, Iris has an integer sequence $a_1, a_2, \ldots, a_n$, where each number in the sequence is between $1$ and $w$, inclusive. It is guaranteed that $w \geq 2$. Iris defines an operation as selecting two numbers $a_i, a_{i+1}$ satisfying $a_i = a_{i+1}$, and then changing them to two arbitrary integers within the range $[1, w]$. Iris does not like equality, so she must guarantee that $a_i \neq a_{i+1}$ after the operation. Two identical pairs $a_i, a_{i+1}$ can be selected multiple times. Iris wants to know the maximum possible sum of all elements of $a$ after several (possibly, zero) operations, as well as the minimum number of operations required to achieve this maximum value.
What if $w = 2$? Is it optimal to increase $\sum\limits_{i=0}^n [a_i \neq a_{i+1}]$ (suppose $a_0 = a_{n+1} = 2$)? If $w \geq 3$, what's the answer to the first question? If $a_i = a_{i+1}$, after the operation we can obtain $a_{i-1} = a_i$ or $a_{i+1} = a_{i+2}$ (and possibly, both). Try to think of the whole process in reverse. If $w \geq 3$ and $1 \leq a_i \leq w - 1$, can you solve the problem? How many extra operations are required for each $1 \leq i \leq n$ with $a_i = w$, in the scheme you use for $a_i \leq w - 1$? Read the Hints. $w = 2$: After any operation, $k = \sum\limits_{i=0}^n [a_i \neq a_{i+1}]$ won't decrease (suppose $a_0 = a_{n+1} = 2$). For a fixed $k$, the maximal $\sum a_i = 2n - \frac{1}{2}k$, and it can be reached by each time turning a $[2, 1, 1]$ into $[2, 2, 1]$ (or symmetrically, $[1, 1, 2] \rightarrow [1, 2, 2]$). $w \geq 3$, no initial operations can be conducted, or $\min(a_i) = w$: This case is trivial. $w \geq 3$, some initial operations can be conducted: We claim that the answer to the first question is $nw - 1$. For the second question, let's study some rather easier cases below. $w \geq 3$, some initial operations can be conducted, $a_i \neq w$: We pretend that the final sequence is $[w, w, \dots, w, (w-1), w, w, \dots, w]$; then, since $(a_i, a_{i+1})$ must be different after the operation, the last operation can only occur on $[w, (w-1)]$ (or $[(w-1), w]$). And since initially $a_i \neq w$, each position must have been operated on at least once. This gives us states such as $[w, \dots, w, x, x, w, \dots, w]$, $[w, \dots, w, y, y, x, w, \dots, w]$, etc. Propagating to the leftmost positions, we get $a_1 = a_2$ (essentially based on Hint 3). Do we also need $a_{n-1} = a_n$? This is the case if and only if the initial $[w, (w - 1)]$ is neither at the beginning nor at the end.
If the initial $[w, (w - 1)]$ is at the beginning, we only need $a_{n-1} = a_n$ to achieve the goal, and symmetrically the same. Less is more. Obviously, we only need to satisfy either $a_1 = a_2$ or $a_{n-1} = a_n$, and then use $n - 1$ more operations to reach the target situation. How do we get to $a_1 = a_2$ (symmetrically the same)? Based on Hint 3, we find the smallest $x$ that satisfies $a_x = a_{x+1}$, and then, following the example above, use $x - 1$ operations to propagate the equality to $a_1 = a_2$. Lemma 1: We can never choose an index with $a_i = w$, fix it (i.e. avoid changing $a_i$ in the following operations), and then use some operations to reach $\sum a_i = nw - 1$, unless $[a_1, \ldots, a_i] = [w, \ldots, w]$ or $[a_i, \ldots, a_n] = [w, \ldots, w]$. Proof: If not so, the array is split into two parts: $[a_1, \ldots, a_{i-1}]$ and $[a_{i+1}, \ldots, a_n]$. After some operations, the maximum $\sum a_i$ we can get for each part is respectively $(i-1)w - 1$ and $(n-i)w - 1$; adding them up, we get $nw - 2$, which is less than $nw - 1$, so it is never optimal. Lemma 2: Look at the final array $a$, consisting of $n - 1$ elements $w$ and $1$ element $(w - 1)$. Obtain an array $a'$ by keeping the elements with the value of $w$. Denote $t_i$ as the last round in which ${a'}_i$ was changed to $w$ (and then became fixed). Then, there exists some $k$ such that $t_1 < t_2 < \dots < t_k > t_{k+1} > \dots > t_{n-1}$. Proof: This follows from Lemma 1. According to Lemma 2, we can see that the pattern we used above is optimal. $w \geq 3$, some initial operations can be conducted, $a_1 \neq w, a_n \neq w$: Basically, the idea remains to reach $a_1 = a_2$ in the same way first (symmetrically the same), and then to extend to all positions.
However, at this point, in the second stage, some problems may arise as follows: $[\dots \underline{a_k}\ a_{k+1}\ w\ a_{k+3} \dots]$ $[\dots \underline{\color{red}{a_{k+1}}\ a_{k+1}}\ w\ a_{k+3} \dots]$ $[\dots \color{red}s\ \underline{\color{red}w\ w}\ a_{k+3} \dots]$ $[\dots \color{red}{\underline{s\ s}}\ t\ a_{k+3} \dots]$ $[\dots \color{red}w\ \underline{\color{red}t\ t}\ a_{k+3} \dots]$ $[\dots \color{red}{w\ w}\ \underline{\color{red}{a_{k+3}}\ a_{k+3}} \dots]$ in which $s, t, w$ are three distinct integers in the range $[1, w]$ (this also explains why we need to deal with the $w = 2$ case specially). Since we cannot fix $a_{k+2} = w$ at the beginning (refer to the lemmas above), we have to first change $a_{k+2}$ into something not equal to $w$, and that costs at least $2$ extra operations, as shown here. Do we always need $2$ extra operations? One may note that if $a_i = a_{i+1} = w$, where the two elements are both in the way of expansion, we can eliminate them with only $1$ operation. Formally, if there is a maximal continuous subsegment of $w$ in the way of expansion, let its length be $L$; then we will spend $\lceil \frac{L}{2} \rceil + [L = 1]$ extra operations. Suppose in the first stage, we pick $a_x = a_{x+1}$ and keep operating on it until $a_1 = a_2$. Then after it, $\forall 3 \leq k \leq x + 1$, $a_k$ can be an arbitrary number which is different from the initial $a_{k-1}$, thus we can always force it to be $\neq w$. In the above case, only the elements $[a_{x+2}, \ldots, a_n]$ are considered "in the way of expansion", and symmetrically the same. $w \geq 3$, some initial operations can be conducted, no additional constraints: It may come to you that if $a_1 = w$, we can ignore $a_1$; if $a_1 = a_2 = w$, we can ignore $a_2$. Symmetrically the same. And so on... Then we boil the problem down to the case above.
This is correct except in some rare cases: when we ignore all the prefixes and suffixes, there may be no remaining $a_i = a_{i+1}$; or picking any pair $a_i = a_{i+1}$ as the starting pair in the remaining array may not be optimal compared to picking an $a_k = a_{k+1} = w$ (with $a_{k+2} \neq w$) (symmetrically the same). So, we have to handle the deleted prefix and suffix specially once it has a length greater than $2$. In summary, the problem can be solved in $\mathcal O(n)$.
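The extra-operation charge $\lceil \frac{L}{2} \rceil + [L = 1]$ for maximal runs of $w$ in the way of expansion can be sketched as follows (an illustrative helper with our naming, not the official implementation):

```cpp
#include <vector>

// Sum the editorial's charge ceil(L / 2) + [L == 1] over every maximal
// run of value w of length L in the given segment.
long long extraCost(const std::vector<int> &a, int w) {
    long long cost = 0;
    for (std::size_t i = 0; i < a.size(); ) {
        if (a[i] != w) { ++i; continue; }
        std::size_t j = i;
        while (j < a.size() && a[j] == w) ++j;  // run is a[i .. j-1]
        long long L = j - i;
        cost += (L + 1) / 2 + (L == 1 ? 1 : 0);
        i = j;
    }
    return cost;
}
```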
[ "constructive algorithms", "implementation" ]
3,500
#include <bits/stdc++.h> #define MAXN 200005 int a[MAXN]; void solve() { int N, w; scanf("%d%d", &N, &w); for (int i = 1; i <= N; ++i) scanf("%d", a + i); if (N == 1) return (void)printf("%d 0\n", a[1]); if (*std::min_element(a + 1, a + N + 1) == w) return (void)printf("%lld 0\n", 1ll * w * N); if (w == 2) { int ans = N * 2, pans = 0; for (int i = 1, j = 1; i <= N; i = ++j) if (a[i] == 1) { --ans; while (j < N && a[j + 1] == 1) ++j; pans += j - i; } return (void)printf("%d %d\n", ans, pans); } bool flag = true; for (int i = 1; i < N; ++i) if (a[i] == a[i + 1]) flag = false; if (flag) return (void)printf("%lld 0\n", std::accumulate(a + 1, a + N + 1, 0ll)); printf("%lld ", 1ll * w * N - 1); if (std::accumulate(a + 1, a + N + 1, 0ll) == 1ll * w * N - 1) return (void)puts("0"); int ans = 0x3f3f3f3f, l = (a[1] == w ? 2 : 1), r = (a[N] == w ? N - 1 : N); if ((a[1] == w && a[2] == w) || (a[N] == w && a[N - 1] == w)) { int Lw = 0, Rw = N + 1; while (a[Lw + 1] == w) ++Lw; while (a[Rw - 1] == w) --Rw; int pans = Rw - Lw; for (int i = Lw + 1, j = i; i < Rw; i = ++j) if (a[i] == w) { while (j + 1 < Rw && a[j + 1] == w) ++j; pans += (i == j ? 2 : ((j - i) >> 1) + 1); } ans = pans, l = Lw + 1, r = Rw - 1; } for (int d = 0; d < 2; std::reverse(a + l, a + r + 1), ++d) for (int i = l - 1, pre = 0, len = 0; i + 2 <= r; ) { if (a[i + 1] == a[i + 2]) ans = std::min(r - (i + 1) + r - l - 1 + pre - ((len == 1) && i + 2 < r && a[i + 3] != w ? 1 : 0), ans); ++i; if (a[i] == w) ++len, pre += (len == 1 ? 2 : len == 2 ? -1 : (len & 1)); else len = 0; } printf("%d\n", ans); } int main() { int T; scanf("%d", &T); while (T--) solve(); return 0; }
2053
I1
Affectionate Arrays (Easy Version)
\begin{quote} You are the beginning of the letter, the development of a poem, and the end of a fairy tale. \hfill — ilem, Pinky Promise \end{quote} \textbf{This is the easy version of the problem. The difference between the versions is that in this version, you need to compute the minimum length of the arrays. You can hack only if you solved all versions of this problem.} Iris treasures an integer array $a_1, a_2, \ldots, a_n$. She knows this array has an interesting property: the maximum absolute value of all elements is less than or equal to the sum of all elements, that is, $\max(\lvert a_i\rvert) \leq \sum a_i$. Iris defines the boredom of an array as its maximum subarray$^{\text{∗}}$ sum. Iris's birthday is coming, and Victor is going to send her another array $b_1, b_2, \ldots, b_m$ as a gift. For some seemingly obvious reasons, he decides the array $b_1, b_2, \ldots, b_m$ should have the following properties. - $a_1, a_2, \ldots, a_n$ should be a subsequence$^{\text{†}}$ of $b_1, b_2, \ldots, b_m$. - The two arrays have the same sum. That is, $\sum\limits_{i=1}^n a_i = \sum\limits_{i=1}^m b_i$. - The boredom of $b_1, b_2, \ldots, b_m$ is the smallest possible. - Among the arrays with the smallest boredom, the length of the array $b$ (i.e., $m$) is the smallest possible. And in this case, Iris will understand his regard as soon as possible! Even constrained as above, there are still too many possible gifts. So Victor asks you to \textbf{compute the value of $\boldsymbol{m}$} of any array $b_1, b_2, \ldots, b_m$ satisfying all the conditions above. He promises you: if you help him successfully, he will share a bit of Iris's birthday cake with you. \textbf{Note}: since the input is large, you may need to optimize it for this problem. 
For example, in C++, it is enough to use the following lines at the start of the main() function: \begin{verbatim} int main() { std::ios::sync_with_stdio(false); std::cin.tie(nullptr); std::cout.tie(nullptr); } \end{verbatim} \begin{footnotesize} $^{\text{∗}}$An array $c$ is a subarray of an array $d$ if $c$ can be obtained from $d$ by the deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end. $^{\text{†}}$A sequence $c$ is a subsequence of a sequence $d$ if $c$ can be obtained from $d$ by the deletion of several (possibly, zero or all) elements from arbitrary positions. \end{footnotesize}
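The boredom defined above (maximum subarray sum) can be computed with a standard Kadane scan; a minimal sketch for reference (the function name is ours, and this version considers non-empty subarrays, which suffices here since the arrays have positive sum):

```cpp
#include <algorithm>
#include <vector>

// Maximum subarray sum over non-empty subarrays (Kadane's algorithm).
long long boredom(const std::vector<long long> &a) {
    long long best = a[0], cur = 0;
    for (long long x : a) {
        cur = std::max(cur + x, x);   // best subarray ending at x
        best = std::max(best, cur);
    }
    return best;
}
```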
What is the minimized boredom? How do you prove that it is an achievable lower bound? Go for a brute-force DP first. We claim that the minimized boredom (maximum subarray sum) is $\sum a_i$. Let $p$ be $\sum a_i$. Since it is required that the boredom of $b$ equals $p$ while $\sum b_i = p$, we point out that this is equivalent to each prefix sum of the sequence lying in $[0, p]$. Sufficiency: every subarray sum is a difference $X - Y$ of two prefix sums with $X \in [0, p], Y \in [0, p]$, so $X - Y \leq p$. Also, we can pick $X = p, Y = 0$ (the whole array), so the maximum is exactly $p$. Necessity: proof by contradiction. If a prefix sum is $< 0$, then choose the whole array except for this prefix; if a prefix sum is $> p$, then choose this prefix. Both yield a subarray with sum greater than $p$, a contradiction. Consider DP. Let $f_{i,j}$ denote, after considering the first $i$ numbers, the minimum extra sequence length (i.e. the actual length minus $i$), when the current prefix sum is $j$. The initial states are $f_{0,j} = [j \neq 0]$. The transfer is simple too: $f_{i,j} = \min\limits_{0 \leq k + a_i \leq p} (f_{i-1,k} + [k + a_i \neq j])$ It is possible to optimize the transfer to $\mathcal O(np)$, since for each $j$, the contribution from at most one $k$ is special ($+0$). We can calculate the prefix and suffix $\min$ of $f_{i-1}$, and it will be fast to get the DP array in the new row. Then, let's focus on optimizing it to $\mathcal O(n)$. We call the set of $0 \leq k \leq p$ satisfying $0 \leq k + a_i \leq p$ the legal interval of $i$ (denoted as $L_i$). It is concluded that the range of $f_{i, 0\dots p}$ (its maximum minus its minimum) is at most $1$, and this can be proven by considering the transfer to each of the $j$ from the $k \in L_i$ which has the least $f_{i-1, k}$. Let $v_i = \min_{0 \leq j \leq p}(f_{i, j})$. We also have: the $j$ with $f_{i,j} = v_i$ form a consecutive segment of integers. Let the segment be $[l_i, r_i]$. Inductive proof.
The essence of the transfer is that it shifts all the DP values $= v_{i-1}$ by $a_i$ units, and all the other values are updated to $v_{i-1} + 1$. Then it truncates the $j \in (-\infty, 0) \cup (p, +\infty)$ part. The consecutive segment remains consecutive. In particular, if $[l_{i-1}, r_{i-1}] \cap L_i = \varnothing$, then $\min_{k \in L_i}(f_{i-1, k}) = v_{i-1} + 1$, hence we need to set $v_i = v_{i-1} + 1$, and $l_i, r_i$ as the range of $j = k + a_i$ over $k \in L_i$. Otherwise, $v_i = v_{i-1}$, and $l_i, r_i$ can be calculated by shifting $l_{i-1}, r_{i-1}$ by $a_i$ units. In fact, we only need to maintain three variables $l, r, v$ to represent the current consecutive segment and the current value. Therefore, this problem can be easily solved in $\mathcal O(n)$.
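The three-variable maintenance described above can be sketched as follows (an illustrative restatement of the transfer; the naming is ours):

```cpp
#include <algorithm>
#include <vector>

// DP state: among prefix sums j in [0, p], the minimum extra length is v
// on the segment [l, r] and v + 1 elsewhere.  step() feeds one element,
// shifting the segment by a and clamping it to [0, p].
struct SegmentDP {
    long long p, l = 0, r = 0, v = 0;  // initially only j = 0 has cost 0
    void step(long long a) {
        if (a >= 0) {
            l += a;
            r = std::min(r + a, p);
            if (l > r) { ++v; l = a; r = p; }  // segment shifted out; pay 1
        } else {
            r += a;
            l = std::max(l + a, 0LL);
            if (l > r) { ++v; l = 0; r = p + a; }  // p + a >= 0 since |a| <= p
        }
    }
};

// Run the whole array and return the minimum length m of b.
long long minLength(const std::vector<long long> &a) {
    long long p = 0;
    for (long long x : a) p += x;
    SegmentDP d{p};
    for (long long x : a) d.step(x);
    // One more element is needed at the end if the final prefix sum
    // cannot already reach p.
    return d.v + (long long)a.size() + (d.r != p ? 1 : 0);
}
```

For $a = [2, 2, -1]$ (sum $p = 3$), the prefix sum $4 > p$ forces one insertion mid-array and one at the end, giving $m = 5$, e.g. $b = [2, -1, 2, -1, 1]$.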
[ "data structures", "dp", "greedy" ]
2,800
#include <bits/stdc++.h> namespace FastIO { template <typename T> inline T read() { T x = 0, w = 0; char ch = getchar(); while (ch < '0' || ch > '9') w |= (ch == '-'), ch = getchar(); while ('0' <= ch && ch <= '9') x = x * 10 + (ch ^ '0'), ch = getchar(); return w ? -x : x; } template <typename T> inline void write(T x) { if (!x) return; write<T>(x / 10), putchar((x % 10) ^ '0'); } template <typename T> inline void print(T x) { if (x > 0) write<T>(x); else if (x < 0) putchar('-'), write<T>(-x); else putchar('0'); } template <typename T> inline void print(T x, char en) { print<T>(x), putchar(en); } }; using namespace FastIO; #define MAXN 3000001 int a[MAXN]; void solve() { int N = read<int>(); long long p = 0, l = 0, r = 0, v = 0; for (int i = 1; i <= N; ++i) p += (a[i] = read<int>()); for (int i = 1; i <= N; ++i) if (a[i] >= 0) { l += a[i], r = std::min(r + a[i], p); if (l > r) ++v, l = a[i], r = p; } else { a[i] = -a[i]; r -= a[i], l = std::max(l - a[i], 0ll); if (l > r) ++v, l = 0, r = p - a[i]; } print<int>(v + N + (int)(r != p), '\n'); } int main() { int T = read<int>(); while (T--) solve(); return 0; }
2053
I2
Affectionate Arrays (Hard Version)
\textbf{Note that this statement is different from the version used in the official round. The statement has been corrected to a solvable version. In the official round, all submissions to this problem have been removed.} \textbf{This is the hard version of the problem. The difference between the versions is that in this version, you need to compute the sum of values of different arrays. You can hack only if you solved all versions of this problem.} Iris treasures an integer array $a_1, a_2, \ldots, a_n$. She knows this array has an interesting property: the maximum absolute value of all elements is less than or equal to the sum of all elements, that is, $\max(\lvert a_i\rvert) \leq \sum a_i$. Iris defines the boredom of an array as its maximum subarray$^{\text{∗}}$ sum. Iris's birthday is coming, and Victor is going to send her another array $b_1, b_2, \ldots, b_m$ as a gift. For some seemingly obvious reasons, he decides the array $b_1, b_2, \ldots, b_m$ should have the following properties. - $a_1, a_2, \ldots, a_n$ should be a subsequence$^{\text{†}}$ of $b_1, b_2, \ldots, b_m$. - The two arrays have the same sum. That is, $\sum\limits_{i=1}^n a_i = \sum\limits_{i=1}^m b_i$. - The boredom of $b_1, b_2, \ldots, b_m$ is the smallest possible. - Among the arrays with the smallest boredom, the length of the array $b$ (i.e., $m$) is the smallest possible. And in this case, Iris will understand his regard as soon as possible! For a possible array $b_1, b_2, \ldots, b_m$ satisfying all the conditions above, Victor defines the value of the array as the \textbf{number of occurrences} of array $a$ as subsequences in array $b$. That is, he counts the number of arrays $c_1, c_2, \ldots, c_n$ such that $1\le c_1< c_2< \ldots< c_n\le m$ and $b_{c_i}=a_i$ for every integer $1\le i\le n$, and lets this be the value of array $b$. Even constrained as above, there are still too many possible gifts.
So Victor asks you to \textbf{calculate the sum of values of all possible arrays} $b_1, b_2, \ldots, b_m$. Since the answer may be large, Victor only needs the number modulo $998\,244\,353$. He promises you: if you help him successfully, he will share a bit of Iris's birthday cake with you. \begin{footnotesize} $^{\text{∗}}$An array $c$ is a subarray of an array $d$ if $c$ can be obtained from $d$ by the deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end. $^{\text{†}}$A sequence $c$ is a subsequence of a sequence $d$ if $c$ can be obtained from $d$ by the deletion of several (possibly, zero or all) elements from arbitrary positions. \end{footnotesize}
Try inserting numbers into the spaces between the elements of $a$ (including the beginning and the end). Note that an array $b$ can be formed in multiple ways (for example, you can insert a number $x$ to the left or to the right of an original occurrence of $x$ in array $a$). What's the number of different ways? It is actually the "value", that is, the number of occurrences of $a$ in $b$. Use the fact that "only one number can be inserted into each space" to prove this. Read the editorial for I1 and the hints first. According to the hint, you can count the number of ways to insert the new numbers into the spaces, so the dp and transitions are similar to those in problem I1. We can add an additional dp array, $g_{i, j}$, representing the number of different ways when everything meets the description of $f_{i, j}$. Let's see from which values it can be transferred: If $f_{i, j} = v_i$, then $f_{i, j}$ can only be transferred from $f_{i-1, j-a_i}$; that is, $g_{i, j} = g_{i-1, j-a_i}$. If $f_{i, j} = v_i + 1$, then $g_{i, j} = \sum_{k \in L_i,\, f_{i-1, k} = f_{i, j} - 1} g_{i-1, k}$, plus the shifted term $g_{i-1, j-a_i}$ when $j - a_i \in L_i$ and $f_{i-1, j-a_i} = f_{i, j}$. We can use lazy tags and a deque to maintain the whole process, so the amortized time complexity is $\mathcal O(n)$. Try to solve the initial problem: count the number of valid arrays $b$ for $n\le 500$. Consider $f_{i,j,k}$: the number of ways to fill the first $i$ elements of $b$ that contain at most the first $j$ elements of $a$ as a subsequence, such that the sum of the first $i$ elements is $k$. Try to optimize this from $O(n^2s^2)$ to $O(n^2s)$, and then to $O(\mathrm{poly}(n))$, and then to $O(n^3)$.
[ "data structures", "dp", "graphs", "greedy", "math", "shortest paths", "two pointers" ]
3,500
#include <bits/stdc++.h>

namespace FastIO {
    template <typename T> inline T read() {
        T x = 0, w = 0; char ch = getchar();
        while (ch < '0' || ch > '9') w |= (ch == '-'), ch = getchar();
        while ('0' <= ch && ch <= '9') x = x * 10 + (ch ^ '0'), ch = getchar();
        return w ? -x : x;
    }
    template <typename T> inline void write(T x) {
        if (!x) return;
        write<T>(x / 10), putchar((x % 10) ^ '0');
    }
    template <typename T> inline void print(T x) {
        if (x > 0) write<T>(x);
        else if (x < 0) putchar('-'), write<T>(-x);
        else putchar('0');
    }
    template <typename T> inline void print(T x, char en) { print<T>(x), putchar(en); }
};
using namespace FastIO;

const int MOD = 998244353;

namespace Modint {
    inline int add(int x, int y) { return (x += y) >= MOD ? x - MOD : x; }
    inline int sub(int x, int y) { return x < y ? x - y + MOD : x - y; }
    inline int mul(int x, int y) { return 1ll * x * y % MOD; }
    inline int pow(int x, int y) {
        int r = 1;
        for (; y; y >>= 1, x = mul(x, x))
            if (y & 1) r = mul(r, x);
        return r;
    }
    inline int inv(int x) { return pow(x, MOD - 2); }
};

struct modint {
    int v;
    modint (int x = 0) : v(x) { /* for debug only. assert(0 <= x && x < MOD); */ }
    inline int get() { return v; }
    modint operator - () const { return v ? MOD - v : 0; }
    modint operator + (const modint &k) const { return Modint::add(v, k.v); }
    modint operator - (const modint &k) const { return Modint::sub(v, k.v); }
    modint operator * (const modint &k) const { return Modint::mul(v, k.v); }
    modint operator / (const modint &k) const { return Modint::mul(v, Modint::inv(k.v)); }
    modint pow(const modint &k) const { return Modint::pow(v, k.v); }
    modint inverse() const { return Modint::inv(v); }
    modint operator += (const modint &k) { (v += k.v) >= MOD && (v -= MOD); return *this; }
    modint operator -= (const modint &k) { (v -= k.v) < 0 && (v += MOD); return *this; }
    modint operator *= (const modint &k) { return v = Modint::mul(v, k.v); }
    modint operator /= (const modint &k) { return v = Modint::mul(v, Modint::inv(k.v)); }
};

struct Node {
    int f;
    modint g;
    long long len;
    Node () {}
    Node (int F, int G, long long L) : f(F), g(G), len(L) {}
    Node (int F, modint G, long long L) : f(F), g(G), len(L) {}
};

std::deque<Node> Q;

#define MAXN 3000001
int a[MAXN];

void solve() {
    int N = read<int>();
    long long p = 0, l = 0, r = 0, v = 0;
    for (int i = 1; i <= N; ++i) p += (a[i] = read<int>());
    for (int i = 1; i <= N; ++i) assert(std::abs(a[i]) <= p);
    Q.clear();
    Q.push_back(Node(0, 1, 1)), Q.push_back(Node(1, 1, p));
    modint gv = 1, gz = p % MOD, tv = 0, tz = 0;
    for (int i = 1; i <= N; ++i)
        if (a[i] >= 0) {
            for (long long ls = a[i]; ls > 0; ) {
                Node k = Q.back(); Q.pop_back();
                if (ls < k.len) Q.push_back(Node(k.f, k.g, k.len - ls)), k.len = ls;
                (k.f == v ? gv : gz) -= k.g * (k.len % MOD), ls -= k.len;
            }
            l += a[i], r = std::min(r + a[i], p);
            if (l > r) ++v, l = a[i], r = p, gv = gz, tv = tz, gz = tz = 0;
            gz += -tz * (a[i] >= MOD ? a[i] - MOD : a[i]);
            if (a[i] > 0) Q.push_front(Node(v + 1, -tz, a[i]));
            tz += gv + tv * ((r - l + 1) % MOD);
        } else if (a[i] < 0) {
            a[i] = -a[i];
            for (long long ls = a[i]; ls > 0; ) {
                Node k = Q.front(); Q.pop_front();
                if (ls < k.len) Q.push_front(Node(k.f, k.g, k.len - ls)), k.len = ls;
                (k.f == v ? gv : gz) -= k.g * (k.len % MOD), ls -= k.len;
            }
            r -= a[i], l = std::max(l - a[i], 0ll);
            if (l > r) ++v, l = 0, r = p - a[i], gv = gz, tv = tz, gz = tz = 0;
            gz += -tz * (a[i] >= MOD ? a[i] - MOD : a[i]),
            Q.push_back(Node(v + 1, -tz, a[i])),
            tz += gv + tv * ((r - l + 1) % MOD);
        }
    print<int>((Q.back().g + (r == p ? tv : tz)).get(), '\n');
}

int main() {
    int T = read<int>();
    while (T--) solve();
    return 0;
}
2055
A
Two Frogs
\begin{center} \begin{tabular}{c} \hline {\small Roaming through the alligator-infested Everglades, Florida Man encounters a most peculiar showdown.} \\ \hline \end{tabular} \end{center} There are $n$ lilypads arranged in a row, numbered from $1$ to $n$ from left to right. Alice and Bob are frogs initially positioned on distinct lilypads, $a$ and $b$, respectively. They take turns jumping, starting with Alice. During a frog's turn, it can jump either one space to the left or one space to the right, as long as the destination lilypad exists. For example, on Alice's first turn, she can jump to either lilypad $a-1$ or $a+1$, provided these lilypads are within bounds. It is important to note that each frog \textbf{must jump} during its turn and cannot remain on the same lilypad. However, there are some restrictions: - The two frogs cannot occupy the same lilypad. This means that Alice cannot jump to a lilypad that Bob is currently occupying, and vice versa. - If a frog cannot make a valid jump on its turn, it loses the game. As a result, the other frog wins. Determine whether Alice can guarantee a win, assuming that both players play optimally. It can be proven that the game will end after a finite number of moves if both players play optimally.
Look at the distance between the frogs. Notice that each jump changes the difference between the two frogs' positions by exactly $1$, so its parity alternates move by move; in particular, at the start of each of Alice's turns the parity is the same as in the starting configuration. Then, the key observation is that exactly one player has the following winning strategy: walk towards the other player, and do not stop until the opponent is forced onto lilypad $1$ or $n$. For instance, if Alice and Bob start on lilypads of the same parity, Bob cannot stop Alice from advancing towards him: at the start of Alice's turn, the distance between them is even and therefore at least $2$, so there is always a free lilypad for her to jump to. This eventually forces Bob onto lilypad $1$ or $n$, where he cannot move, causing him to lose. In the case that they start on lilypads of different parity, analogous logic shows that Bob wins. Therefore, we only need to check the parity of the two starting lilypads, giving a constant-time solution.
[ "constructive algorithms", "games", "greedy", "math" ]
800
#include <bits/stdc++.h>
using namespace std;

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    int T;
    cin >> T;
    while (T--) {
        int N, A, B;
        cin >> N >> A >> B;
        if ((A ^ B) & 1) cout << "NO\n";
        else cout << "YES\n";
    }
    cout.flush();
    return 0;
}