Dataset columns: contest_id (string, 1–4 chars), index (43 classes), title (string, 2–63 chars), statement (string, 51–4.24k chars), tutorial (string, 19–20.4k chars), tags (list, 0–11 items), rating (int64, 800–3.5k), code (string, 46–29.6k chars).
2001
E1
Deterministic Heap (Easy Version)
\textbf{This is the easy version of the problem. The difference between the two versions is the definition of deterministic max-heap, time limit, and constraints on $n$ and $t$. You can make hacks only if both versions of the problem are solved.} Consider a perfect binary tree with size $2^n - 1$, with nodes numbered from $1$ to $2^n-1$ and rooted at $1$. For each vertex $v$ ($1 \le v \le 2^{n - 1} - 1$), vertex $2v$ is its left child and vertex $2v + 1$ is its right child. Each node $v$ also has a value $a_v$ assigned to it. Define the operation $\mathrm{pop}$ as follows: - initialize variable $v$ as $1$; - repeat the following process until vertex $v$ is a leaf (i.e. until $2^{n - 1} \le v \le 2^n - 1$); - among the children of $v$, choose the one with the larger value on it and denote such vertex as $x$; if the values on them are equal (i.e. $a_{2v} = a_{2v + 1}$), you can choose any of them; - assign $a_x$ to $a_v$ (i.e. $a_v := a_x$); - assign $x$ to $v$ (i.e. $v := x$); - assign $-1$ to $a_v$ (i.e. $a_v := -1$). Then we say the $\mathrm{pop}$ operation is deterministic if there is a unique way to do such operation. In other words, $a_{2v} \neq a_{2v + 1}$ would hold whenever choosing between them. A binary tree is called a max-heap if for every vertex $v$ ($1 \le v \le 2^{n - 1} - 1$), both $a_v \ge a_{2v}$ and $a_v \ge a_{2v + 1}$ hold. A max-heap is deterministic if the $\mathrm{pop}$ operation is deterministic to the heap when we do it \textbf{for the first time}. Initially, $a_v := 0$ for every vertex $v$ ($1 \le v \le 2^n - 1$), and your goal is to count the number of different deterministic max-heaps produced by applying the following operation $\mathrm{add}$ exactly $k$ times: - Choose an integer $v$ ($1 \le v \le 2^n - 1$) and, for every vertex $x$ on the path between $1$ and $v$, add $1$ to $a_x$. Two heaps are considered different if there is a node which has different values in the heaps. Since the answer might be large, print it modulo $p$.
Consider the decision version of the problem (i.e. the yes/no problem): "Given a sequence $a$ resulting from $k$ operations, check whether $a$ is a deterministic max-heap." What properties must $a$ have? Try fixing the position of the leaf being popped: what properties are needed on the path from the root to it? Does it really matter where the popped leaf lies? Try to count the number of $a$ where the first popped leaf is fixed at one position, using DP to track the path of popped elements. In many problems, the decision version is a useful subtask to consider, and it is especially useful in counting problems that give you some operation and ask for the number of possible outcomes. Since there may be multiple ways to apply operations that produce the same heap, we consider the decision version instead of counting operation sequences directly, and try to figure out which properties are needed. Let $v = 2^{n - 1}$ be the only leaf such that, after popping one element from the top, every element on the path between $1$ and $v$ is moved upward. The condition of a deterministic max-heap can then be rephrased as follows: 1. $a_1 = k$; 2. $a_{v} \ge a_{2v} + a_{2v + 1}$ for any $v$ ($1 \le v < 2^{n - 1}$); 3. $a_{2\cdot 2^k} > a_{2 \cdot 2^k + 1}$ for any $k$ ($0 \le k \le n - 2$). So for any $k$ ($0 \le k \le n - 2$), the number of operations done in the subtree of $2\cdot 2^k$ must be greater than the number done in the subtree of $2 \cdot 2^k + 1$, and we don't care much about how those operations are distributed inside the subtree of $2 \cdot 2^k + 1$ as long as condition 3 holds. Let's call such a subtree "uncritical".
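The decision version above can be sketched as a direct checker. This is an illustrative helper (the name `is_deterministic_heap` and 1-indexed layout are our assumptions, not from the problem): it verifies $a_1 = k$, the subtree-count condition $a_v \ge a_{2v} + a_{2v+1}$, and that one simulated pop never meets a tie.

```cpp
#include <vector>

// Hypothetical helper: a is 1-indexed with size 2^n - 1, produced by k add
// operations. Checks the three rephrased conditions from the analysis.
bool is_deterministic_heap(int n, const std::vector<long long>& a, long long k) {
    int first_leaf = 1 << (n - 1);
    if (a[1] != k) return false;          // condition 1: root counts all k operations
    for (int v = 1; v < first_leaf; v++)  // condition 2: ops in subtree >= ops in child subtrees
        if (a[v] < a[2 * v] + a[2 * v + 1]) return false;
    for (int v = 1; v < first_leaf; ) {   // simulate one pop; a tie makes it non-deterministic
        if (a[2 * v] == a[2 * v + 1]) return false;
        v = a[2 * v] > a[2 * v + 1] ? 2 * v : 2 * v + 1;
    }
    return true;
}
```

For $n = 2$, doing both operations on leaf $2$ gives $a = (2, 2, 0)$, which is deterministic, while one operation on each leaf gives $a = (2, 1, 1)$, which ties at the root.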
For any uncritical subtree of size $sz$, if we do $x$ operations under it, there are $\binom{x + (sz - 1)}{x}$ ways to do so (stars and bars). Now we can use DP to consider all valid $a$, enumerating the number of operations done on each vertex of the path from $1$ to $2^{n - 1}$ and on each uncritical subtree. Let $dp[i][j]$ be the number of different $a$ we can get by doing $j$ operations on the subtree of size $2^i - 1$. Base case: $dp[1][j] = 1$ for all $j \in [0, k]$. Transition: $dp[i + 1][l] \mathrel{+}= dp[i][j] \times \binom{x + (2^i - 2)}{x}$ for all $x \in [0, j)$ and $l \in [j + x, k]$. Using prefix sums, we can optimize this DP to $O(nk^2)$, and the number of deterministic max-heaps where vertex $2^{n - 1}$ is pulled up is $dp[n][k]$. Finally, by the symmetry of a perfect binary tree, the number of deterministic max-heaps where $v$ is pulled up is the same for every possible $v$, so the final answer is $dp[n][k] \times 2^{n - 1}$.
[ "combinatorics", "dp", "math", "trees" ]
2,400
#include <iostream>
#include <vector>
using namespace std;
using ll = long long;

int binpow(ll a, ll b, ll p) {
    ll c = 1;
    while (b) {
        if (b & 1) c = c * a % p;
        a = a * a % p, b >>= 1;
    }
    return c;
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n, k, p;
        cin >> n >> k >> p;
        vector<ll> fac(k + 1, 1);
        for (int i = 1; i <= k; i++) fac[i] = fac[i - 1] * i % p;
        vector<ll> faci(k + 1);
        faci[k] = binpow(fac[k], p - 2, p);
        for (int i = k - 1; i >= 0; i--) faci[i] = faci[i + 1] * (i + 1) % p;
        vector<ll> pow2(n + 1, 1);
        for (int i = 1; i <= n; i++) pow2[i] = pow2[i - 1] * 2 % p;
        vector<ll> dp(k + 1, 1);
        for (int i = 1; i < n; i++) {
            vector<ll> binom(k + 1, 1);
            for (int j = 1; j <= k; j++) binom[j] = binom[j - 1] * (pow2[i] + j - 2 + p) % p;
            for (int j = 0; j <= k; j++) binom[j] = binom[j] * faci[j] % p;
            vector<ll> nxt(k + 1);
            for (int j = 0; j <= k; j++)
                for (int x = 0; x < j and j + x <= k; x++)
                    nxt[j + x] = (nxt[j + x] + dp[j] * binom[x] % p) % p;
            for (int j = 1; j <= k; j++) nxt[j] = (nxt[j - 1] + nxt[j]) % p;
            dp.swap(nxt);
        }
        cout << dp[k] * pow2[n - 1] % p << '\n';
    }
}
2001
E2
Deterministic Heap (Hard Version)
\textbf{This is the hard version of the problem. The difference between the two versions is the definition of deterministic max-heap, time limit, and constraints on $n$ and $t$. You can make hacks only if both versions of the problem are solved.} Consider a perfect binary tree with size $2^n - 1$, with nodes numbered from $1$ to $2^n-1$ and rooted at $1$. For each vertex $v$ ($1 \le v \le 2^{n - 1} - 1$), vertex $2v$ is its left child and vertex $2v + 1$ is its right child. Each node $v$ also has a value $a_v$ assigned to it. Define the operation $\mathrm{pop}$ as follows: - initialize variable $v$ as $1$; - repeat the following process until vertex $v$ is a leaf (i.e. until $2^{n - 1} \le v \le 2^n - 1$); - among the children of $v$, choose the one with the larger value on it and denote such vertex as $x$; if the values on them are equal (i.e. $a_{2v} = a_{2v + 1}$), you can choose any of them; - assign $a_x$ to $a_v$ (i.e. $a_v := a_x$); - assign $x$ to $v$ (i.e. $v := x$); - assign $-1$ to $a_v$ (i.e. $a_v := -1$). Then we say the $\mathrm{pop}$ operation is deterministic if there is a unique way to do such operation. In other words, $a_{2v} \neq a_{2v + 1}$ would hold whenever choosing between them. A binary tree is called a max-heap if for every vertex $v$ ($1 \le v \le 2^{n - 1} - 1$), both $a_v \ge a_{2v}$ and $a_v \ge a_{2v + 1}$ hold. A max-heap is deterministic if the $\mathrm{pop}$ operation is deterministic to the heap when we do it \textbf{for the first and the second time}. Initially, $a_v := 0$ for every vertex $v$ ($1 \le v \le 2^n - 1$), and your goal is to count the number of different deterministic max-heaps produced by applying the following operation $\mathrm{add}$ exactly $k$ times: - Choose an integer $v$ ($1 \le v \le 2^n - 1$) and, for every vertex $x$ on the path between $1$ and $v$, add $1$ to $a_x$. Two heaps are considered different if there is a node which has different values in the heaps. 
Since the answer might be large, print it modulo $p$.
Fix the positions of the first and second leaves being popped: what property does $a$ need to be deterministic? Does the position of the first popped leaf really matter? Having fixed the first popped leaf, do we really need to consider every possible position of the second popped leaf? Making good use of symmetry can reduce the complexity of the problem significantly! For convenience, let $\text{det\_heap}()$ denote the number of twice-deterministic heaps, $\text{det\_heap}(v)$ the number of twice-deterministic heaps where the first pop comes from leaf $v$, and $\text{det\_heap}(v, u)$ the number of twice-deterministic heaps where the first pop comes from leaf $v$ and the second pop comes from leaf $u$. Similarly to the easy version, we have $\text{det\_heap}() = 2^{n - 1}\cdot\text{det\_heap}(2^{n - 1})$ by symmetry. Then, enumerating $u$, the second popped leaf, we have $\text{det\_heap}(2^{n - 1}) = \sum\limits_{u = 2^{n - 1} + 1}^{2^n - 1}\text{det\_heap}(2^{n - 1}, u)$. Again by symmetry, for any $u_1 \neq u_2$ with $LCA(2^{n - 1}, u_1) = LCA(2^{n - 1}, u_2)$, we have $\text{det\_heap}(2^{n - 1}, u_1) = \text{det\_heap}(2^{n - 1}, u_2)$, so for each LCA we only need to enumerate the leftmost leaf under it as the second popped leaf and multiply the answer by the number of leaves in that subtree.
Then consider the relations between the values of vertices (drawn as a graph in the editorial's figure): a black edge from $x$ to $y$ means $a_x \le a_y$, which comes from the nature of the $\mathrm{add}$ operation; a red edge from $x$ to $y$ means $a_x < a_y$, which comes from the fact that $v$ should be the first leaf being pulled; a blue edge from $x$ to $y$ means $a_x < a_y$, which comes from the fact that $u$ should be the second leaf being pulled. Observe that if we have a directed triangle consisting of edges $(x, y), (y, z), (x, z)$, edge $(x, z)$ brings no extra relation, so we can delete it. After that, we can use the pair of values of the two vertices at the same depth as the DP state, enumerating such pairs in increasing order of height without breaking the relations: let $dp[h][l][r]$ be the number of twice-deterministic heaps when the left vertex has value $l$, the right vertex has value $r$, and the current height is $h$. First, to handle the Z-shaped relation formed by the red/blue arrows and enumerate the values $(L, R, R')$, consider the corresponding picture (the red subtree there is a once-deterministic heap, while the other can be any heap). Also let $f(h, k)$ denote the number of height-$h$ heaps that are deterministic for one pop after $k$ operations where the leftmost leaf is pulled, and $g(h, k)$ the number of height-$h$ max-heaps after $k$ operations (we already know how to compute both from the easy version). Fix the values $L$ and $R$ and push them to some $dp[h][L'][R']$; we need $L' > R' > L > R$ and $L' \ge L + R$. In that case, enumerate all pairs $(L, R)$ with $L > R$, and for all $L' \ge L + R$, $R' \ge L + 1$, add $f(h - 1, L) \cdot g(h - 1, R)$ to $dp[h][L'][R']$. After that, for every $L' \le R'$ eliminate its contribution, while for every $L' > R'$ multiply its contribution by $f(h, R') \cdot 2^h$. This can be done in $O(k^2)$ with the help of prefix sums.
Second, to handle the relation formed by the black/blue edges and enumerate the value $R'$, consider the corresponding picture: for each $(L, R)$, we need to push it to some $dp[h][L'][R']$ where $L' \ge L > R'$ and $L' \ge L + R$ hold. We can handle this by first adding $dp[h - 1][L][R]$ to $dp[h][L'][R']$ for all $L' \ge L + R$, $R' \le L - 1$. Then, for all $L' \le R'$, eliminate the contribution, while for the others multiply the contribution by $g(h, R')$; this can also be done with prefix sums in $O(k^2)$. The answer is then $2^{n - 1} \cdot \sum\limits_{L + R \le k}dp[n - 2][L][R]$, and the problem is solved in $O(nk^2)$.
[ "combinatorics", "dp", "trees" ]
2,900
#include <iostream>
#include <vector>
using namespace std;
using ll = long long;

int binpow(ll a, ll b, ll p) {
    ll c = 1;
    while (b) {
        if (b & 1) c = c * a % p;
        a = a * a % p, b >>= 1;
    }
    return c;
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n, k, p;
        cin >> n >> k >> p;
        vector<ll> fac(k + 1, 1);
        for (int i = 1; i <= k; i++) fac[i] = fac[i - 1] * i % p;
        vector<ll> faci(k + 1);
        faci[k] = binpow(fac[k], p - 2, p);
        for (int i = k - 1; i >= 0; i--) faci[i] = faci[i + 1] * (i + 1) % p;
        vector<ll> pow2(n + 1, 1);
        for (int i = 1; i <= n; i++) pow2[i] = pow2[i - 1] * 2 % p;
        vector<ll> dp(k + 1, 1);
        vector dp2(k + 1, vector<ll>(k + 1));
        for (int l = 0; l <= k; l++)
            for (int r = 0; r < l; r++) dp2[l][r] = 1;
        for (int i = 1; i + 1 < n; i++) {
            vector<ll> binom(k + 1, 1);
            for (int j = 1; j <= k; j++) binom[j] = binom[j - 1] * (pow2[i] + j - 2 + p) % p;
            for (int j = 0; j <= k; j++) binom[j] = binom[j] * faci[j] % p;
            vector<ll> binom2(k + 1, 1);
            for (int j = 1; j <= k; j++) binom2[j] = binom2[j - 1] * (pow2[i + 1] + j - 2 + p) % p;
            for (int j = 0; j <= k; j++) binom2[j] = binom2[j] * faci[j] % p;
            vector<ll> nxt(k + 1);
            for (int j = 0; j <= k; j++)
                for (int x = 0; x < j and j + x <= k; x++)
                    nxt[j + x] = (nxt[j + x] + dp[j] * binom[x] % p) % p;
            for (int j = 1; j <= k; j++) nxt[j] = (nxt[j - 1] + nxt[j]) % p;
            vector nxt2(k + 1, vector<ll>(k + 1));
            {
                vector tmp(k + 1, vector<ll>(k + 1));
                for (int l = 0; l < k; l++)
                    for (int r = 0; r < l and l + r <= k; r++)
                        tmp[l + r][l + 1] = (tmp[l + r][l + 1] + dp[l] * binom[r]) % p;
                for (int l = 0; l <= k; l++)
                    for (int r = 1; r <= k; r++) tmp[l][r] = (tmp[l][r] + tmp[l][r - 1]) % p;
                for (int l = 1; l <= k; l++)
                    for (int r = 1; r <= k; r++) tmp[l][r] = (tmp[l][r] + tmp[l - 1][r]) % p;
                for (int l = 0; l <= k; l++)
                    for (int r = 0; r < l; r++)
                        nxt2[l][r] = (nxt2[l][r] + tmp[l][r] * nxt[r] % p * pow2[i]) % p;
            }
            {
                vector tmp(k + 1, vector<ll>(k + 1));
                for (int l = 1; l <= k; l++)
                    for (int r = 0; l + r <= k; r++)
                        tmp[l + r][l - 1] = (tmp[l + r][l - 1] + dp2[l][r]) % p;
                for (int l = 0; l <= k; l++)
                    for (int r = k - 1; r >= 0; r--) tmp[l][r] = (tmp[l][r] + tmp[l][r + 1]) % p;
                for (int l = 1; l <= k; l++)
                    for (int r = 0; r <= k; r++) tmp[l][r] = (tmp[l][r] + tmp[l - 1][r]) % p;
                for (int l = 0; l <= k; l++)
                    for (int r = 0; r <= k; r++)
                        nxt2[l][r] = (nxt2[l][r] + tmp[l][r] * binom2[r]) % p;
            }
            dp.swap(nxt);
            dp2.swap(nxt2);
        }
        ll ans = 0;
        for (int l = 0; l <= k; l++)
            for (int r = 0; l + r <= k; r++) ans = (ans + dp2[l][r]) % p;
        cout << ans * pow2[n - 1] % p << '\n';
    }
}
2002
A
Distanced Coloring
You received an $n\times m$ grid from a mysterious source. The source also gave you a magic positive integer constant $k$. The source told you to color the grid with some colors, satisfying the following condition: - If $(x_1,y_1)$, $(x_2,y_2)$ are two distinct cells with the same color, then $\max(|x_1-x_2|,|y_1-y_2|)\ge k$. You don't like using too many colors. Please find the minimum number of colors needed to color the grid.
Consider the case with $n=m=k$, then generalize the solution for all $n,m,k$. It can be shown that within any $k\times k$ subgrid, the colors we use must be pairwise distinct. Thus, we have a lower bound of $\min(n,k)\cdot\min(m,k)$. This lower bound is achievable: color the upper-left $\min(n,k)\times\min(m,k)$ subgrid with distinct colors, and copy-paste it to fill the rest of the grid. Time complexity: $O(1)$.
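The copy-paste construction can be written as an explicit coloring function. This is a sketch with names of our own choosing (`color`, 0-indexed cells): each cell takes the color of its position inside the tiled $\min(n,k)\times\min(m,k)$ block.

```cpp
#include <algorithm>

// Hypothetical helper: color of cell (x, y), 0-indexed, in the tiling
// construction. Two cells share a color iff their coordinates agree modulo
// a = min(n,k) and b = min(m,k), so equal-colored cells are at Chebyshev
// distance >= min of the block dimensions >= ... >= k when n, m >= k.
int color(int x, int y, int n, int m, int k) {
    int a = std::min(n, k), b = std::min(m, k);
    return (x % a) * b + (y % b);
}
```

For $n=m=3$, $k=2$, cells $(0,0)$ and $(2,2)$ get the same color and are at Chebyshev distance $2 \ge k$, and $4 = \min(3,2)^2$ colors are used in total.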
[ "constructive algorithms", "implementation", "math" ]
800
#include <bits/stdc++.h>
using namespace std;

int t, n, m, k;

int main() {
    scanf("%d", &t);
    while (t--) {
        scanf("%d%d%d", &n, &m, &k);
        printf("%d\n", min(n, k) * min(m, k));
    }
    return 0;
}
2002
B
Removals Game
Alice got a permutation $a_1, a_2, \ldots, a_n$ of $[1,2,\ldots,n]$, and Bob got another permutation $b_1, b_2, \ldots, b_n$ of $[1,2,\ldots,n]$. They are going to play a game with these arrays. In each turn, the following events happen in order: - Alice chooses either the first or the last element of her array and removes it from the array; - Bob chooses either the first or the last element of his array and removes it from the array. The game continues for $n-1$ turns, after which both arrays will have exactly one remaining element: $x$ in the array $a$ and $y$ in the array $b$. If $x=y$, Bob wins; otherwise, Alice wins. Find which player will win if both players play optimally.
Find some conditions that Bob needs to win. The general idea is that it is very difficult for Bob to win, so we make some observations about the case where Bob wins. It is intuitive that for any subarray $[A_l,A_{l+1},\cdots,A_r]$, the elements' positions in $B$ must also form an interval: if $A[l:r]$ does not form an interval in $B$, Alice can simply remove elements until only $A[l:r]$ is left; no matter how Bob plays, there must now be an element remaining in $A$ that is not among $B$'s remaining elements, and Alice can then remove everything other than this element to win. From here, it is easy to prove by induction that for Bob to win, either $A=B$ or $A=\textrm{rev}(B)$. Time complexity: $O(n)$.
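The criterion derived above reduces the whole game to two array comparisons. A minimal sketch (the helper name `winner` is ours): Bob wins iff $b$ equals $a$ or its reverse.

```cpp
#include <vector>
#include <string>
#include <algorithm>

// Illustrative helper implementing the O(n) criterion: Bob wins iff
// b == a or b == reverse(a); otherwise Alice wins.
std::string winner(std::vector<int> a, const std::vector<int>& b) {
    if (a == b) return "Bob";
    std::reverse(a.begin(), a.end());
    return a == b ? "Bob" : "Alice";
}
```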
[ "constructive algorithms", "games" ]
1,000
#include <bits/stdc++.h>
#define ll long long
#define lc(x) ((x) << 1)
#define rc(x) ((x) << 1 | 1)
#define ru(i, l, r) for (int i = (l); i <= (r); i++)
#define rd(i, r, l) for (int i = (r); i >= (l); i--)
#define mid ((l + r) >> 1)
#define pii pair<int, int>
#define mp make_pair
#define fi first
#define se second
#define sz(s) (int)s.size()
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace __gnu_pbds;
#define ordered_set tree<int, null_type, less<int>, rb_tree_tag, tree_order_statistics_node_update>
using namespace std;

mt19937 Rand(chrono::steady_clock::now().time_since_epoch().count());

int read() {
    int x = 0, w = 0;
    char ch = getchar();
    while (!isdigit(ch)) w |= ch == '-', ch = getchar();
    while (isdigit(ch)) x = x * 10 + ch - '0', ch = getchar();
    return w ? -x : x;
}

int main() {
    int T = read();
    while (T--) {
        int n = read();
        vector<int> a, b;
        ru(i, 1, n) a.push_back(read());
        ru(i, 1, n) b.push_back(read());
        if (a == b) { printf("Bob\n"); continue; }
        reverse(a.begin(), a.end());
        if (a == b) { printf("Bob\n"); continue; }
        printf("Alice\n");
    }
    return 0;
}
2002
C
Black Circles
There are $n$ circles on a two-dimensional plane. The $i$-th circle is centered at $(x_i,y_i)$. Initially, all circles have a radius of $0$. The circles' radii increase at a rate of $1$ unit per second. You are currently at $(x_s,y_s)$; your goal is to reach $(x_t,y_t)$ without touching the circumference of any circle (\textbf{including the moment you reach $(x_t,y_t)$}). You can move in any direction you want. However, your speed is limited to $1$ unit per second. Please determine whether this is possible.
The problem can't be that hard; find some simple strategy. We consider a simple strategy: walk towards the goal in a straight line. If some circle reaches the goal no later than we do, it is obvious that we have no chance of succeeding, no matter what path we take. Otherwise, it can be proven that we will not touch any circle on our way to the goal. Suppose we start at $A$, our goal is $B$, and we get intercepted by some circle centered at $C$ at the point $D$; then $CD=AD$, since the radius grows at the same speed we walk. By the triangle inequality, $CB\le CD+DB=AD+DB=AB$, which contradicts the assumption that the circle does not reach the goal first (i.e. $CB>AB$). Time complexity: $O(n)$.
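The strategy above reduces to one comparison per circle: we succeed iff every center is strictly farther from the goal than the start is. A sketch using squared distances to stay in integers (helper names `dist2` and `reachable` are ours):

```cpp
#include <vector>
#include <utility>

// Squared Euclidean distance; avoids floating point entirely.
long long dist2(long long x1, long long y1, long long x2, long long y2) {
    return (x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1);
}

// Reachable iff every circle center is strictly farther from the goal
// (xt, yt) than the start (xs, ys) is.
bool reachable(const std::vector<std::pair<long long, long long>>& centers,
               long long xs, long long ys, long long xt, long long yt) {
    for (auto [x, y] : centers)
        if (dist2(xt, yt, x, y) <= dist2(xt, yt, xs, ys)) return false;
    return true;
}
```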
[ "brute force", "geometry", "greedy", "math" ]
1,200
#include <bits/stdc++.h>
#define ll long long
using namespace std;

int t, n, x[100011], y[100011], xs, ys, xt, yt;

ll dis(int x1, int y1, int x2, int y2) {
    return 1ll * (x2 - x1) * (x2 - x1) + 1ll * (y2 - y1) * (y2 - y1);
}

int main() {
    scanf("%d", &t);
    while (t--) {
        scanf("%d", &n);
        for (int i = 1; i <= n; ++i) scanf("%d%d", x + i, y + i);
        scanf("%d%d%d%d", &xs, &ys, &xt, &yt);
        bool ok = 1;
        for (int i = 1; i <= n; ++i) {
            if (dis(xt, yt, x[i], y[i]) <= dis(xt, yt, xs, ys)) {
                ok = 0;
                break;
            }
        }
        printf(ok ? "YES\n" : "NO\n");
    }
    fclose(stdin);
    fclose(stdout);
    return 0;
}
2002
D2
DFS Checker (Hard Version)
\textbf{This is the hard version of the problem. In this version, you are given a generic tree and the constraints on $n$ and $q$ are higher. You can make hacks only if both versions of the problem are solved.} You are given a rooted tree consisting of $n$ vertices. The vertices are numbered from $1$ to $n$, and the root is the vertex $1$. You are also given a permutation $p_1, p_2, \ldots, p_n$ of $[1,2,\ldots,n]$. You need to answer $q$ queries. For each query, you are given two integers $x$, $y$; you need to swap $p_x$ and $p_y$ and determine if $p_1, p_2, \ldots, p_n$ is a valid DFS order$^\dagger$ of the given tree. Please note that the swaps are \textbf{persistent} through queries. $^\dagger$ A DFS order is found by calling the following $dfs$ function on the given tree. \begin{verbatim} dfs_order = [] function dfs(v): append v to the back of dfs_order pick an arbitrary permutation s of children of v for child in s: dfs(child) dfs(1) \end{verbatim} Note that the DFS order is not unique.
Try to find some easy checks that can be maintained. The problem revolves around finding a check for DFS orders that is easy to maintain. We have discovered several such checks; two of them and their proofs are described below, and any one of them suffices to tell whether a DFS order is valid. Check 1: for every $u$, all of its children $v$ satisfy $[pos_v,pos_v+siz_v-1]\subseteq[pos_u,pos_u+siz_u-1]$. We can maintain this check by keeping track of the number of $u$ that violate the condition, checking each $u$ using sets; when checking, we only need the child with the minimum $pos_v$ and the child with the maximum $pos_v+siz_v-1$. Proof: we prove by induction. When $u$'s children are all leaves, it is easy to see that this check ensures $[pos_u,pos_u+siz_u-1]$ is a valid DFS order of the subtree of $u$. Then we can merge the subtree of $u$ into one large node of size $siz_u$ and continue the analysis above. Check 2: first check $p_1=1$; then, for each pair of adjacent elements $p_i,p_{i+1}$, $fa(p_{i+1})$ must be an ancestor of $p_i$, where $fa(u)$ denotes the father of node $u$. We can maintain this check by keeping track of the number of positions that violate the condition, checking each $i$ by testing whether $p_i$ is in the subtree of $fa(p_{i+1})$. Proof: for any subtree $u$, take any $p_i,p_{i+1}$ such that $p_i$ does not belong to subtree $u$ but $p_{i+1}$ does. It follows that $p_{i+1}=u$: otherwise $fa(p_{i+1})$ would lie inside subtree $u$, while no ancestor of $p_i$ lies inside subtree $u$. From this, we can gather that each subtree is entered at most once (and thus forms a contiguous interval), and that its first visited node is $u$, which is sufficient for $p$ to be a DFS order. Time complexity: $O((n+q)\log n)$ / $O(n+q)$.
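Check 2 relies on one $O(1)$ primitive: "is $u$ an ancestor of (or equal to) $w$?", answered with Euler-tour timestamps. A minimal self-contained sketch with names of our own choosing (`EulerTour`, `is_ancestor`); the tin/tout arrays are filled by a standard DFS.

```cpp
#include <vector>

// Hypothetical sketch: 1-indexed rooted tree with parent->child edges.
// After dfs(root), u is an ancestor of (or equal to) w iff
// tin[u] <= tin[w] and tout[w] <= tout[u].
struct EulerTour {
    std::vector<std::vector<int>> g;
    std::vector<int> tin, tout;
    int timer = 0;
    explicit EulerTour(int n) : g(n + 1), tin(n + 1), tout(n + 1) {}
    void add_edge(int parent, int child) { g[parent].push_back(child); }
    void dfs(int u) {
        tin[u] = timer++;
        for (int v : g[u]) dfs(v);
        tout[u] = timer++;
    }
    bool is_ancestor(int u, int w) const {
        return tin[u] <= tin[w] && tout[w] <= tout[u];
    }
};
```

The full solution below keeps a counter of positions passing this test and updates only the $O(1)$ positions affected by each swap.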
[ "binary search", "data structures", "dfs and similar", "graphs", "hashing", "trees" ]
2,300
#include <bits/stdc++.h>
using namespace std;
#define pb push_back
#define mp make_pair
#define fi first
#define se second
#define int long long

void dbg_out() { cerr << endl; }
template <typename H, typename... T> void dbg_out(H h, T... t) { cerr << ' ' << h; dbg_out(t...); }
#define dbg(...) { cerr << #__VA_ARGS__ << ':'; dbg_out(__VA_ARGS__); }

using ll = long long;
mt19937_64 rng(chrono::steady_clock::now().time_since_epoch().count());

const int MAXN = 3e5 + 5;
const int MOD = 1e9 + 7; // 998244353;
const int INF = 0x3f3f3f3f;
const ll INF64 = ll(4e18) + 5;

vector<int> g[MAXN];
int tin[MAXN];
int tout[MAXN];
int id[MAXN];
int par[MAXN];
int T = 0;

void dfs(int u, int p) {
    id[u] = tin[u] = tout[u] = T++;
    for (auto v : g[u]) if (v != p) {
        dfs(v, u);
        par[v] = u;
        tout[u] = tout[v];
    }
}

void solve() {
    int n, q;
    cin >> n >> q;
    vector<int> p(n + 1);
    for (int i = 0; i <= n; i++) g[i].clear();
    T = 0;
    for (int i = 2; i <= n; i++) {
        int pi;
        cin >> pi;
        g[pi].push_back(i);
    }
    for (int i = 1; i <= n; i++) cin >> p[i];
    dfs(1, 1);
    int cnt = 0;
    auto ok = [&](int i) {
        if (p[i] == 1) {
            if (i == 1) return 1;
            return 0;
        }
        int ant = p[i - 1];
        if (par[p[i]] == ant) return 1;
        if (tin[ant] != tout[ant]) return 0;
        int pa = par[p[i]];
        if (tin[ant] < tin[pa] || tin[ant] > tout[pa]) return 0;
        return 1;
    };
    for (int i = 1; i <= n; i++) cnt += ok(i);
    for (int qw = 0; qw < q; qw++) {
        int x, y;
        cin >> x >> y;
        set<int> in;
        in.insert(x);
        in.insert(y);
        if (x - 1 >= 1) in.insert(x - 1);
        if (x + 1 <= n) in.insert(x + 1);
        if (y - 1 >= 1) in.insert(y - 1);
        if (y + 1 <= n) in.insert(y + 1);
        for (auto v : in) cnt -= ok(v);
        swap(p[x], p[y]);
        for (auto v : in) cnt += ok(v);
        cout << (cnt == n ? "YES" : "NO") << '\n';
    }
}

signed main() {
    ios::sync_with_stdio(false);
    cin.tie(NULL);
    int t = 1;
    cin >> t;
    while (t--) solve();
    return 0;
}
2002
E
Cosmic Rays
Given an array of integers $s_1, s_2, \ldots, s_l$, every second, cosmic rays will cause all $s_i$ such that $i=1$ or $s_i\neq s_{i-1}$ to be deleted simultaneously, and the remaining parts will be concatenated together in order to form the new array $s_1, s_2, \ldots, s_{l'}$. Define the \textbf{strength} of an array as the number of seconds it takes to become empty. You are given an array of integers compressed in the form of $n$ pairs that describe the array left to right. Each pair $(a_i,b_i)$ represents $a_i$ copies of $b_i$, i.e. $\underbrace{b_i,b_i,\cdots,b_i}_{a_i\textrm{ times}}$. For each $i=1,2,\dots,n$, please find the \textbf{strength} of the sequence described by the first $i$ pairs.
Consider an incremental solution. Use stacks. The problem asks for the answer for every prefix, which hints at an incremental solution. To add a new pair to the current prefix, we need to process the new block merging with old ones, so we should use some structure to store information about every block that was, at some point in time, the last block. Namely, we use a stack to keep track of all such blocks. For each block, we keep two values: its color $c$ and its lifetime $l$ (the time it takes for the block to disappear). When inserting a new block, we pop all blocks that would be shadowed by the current one (i.e. with lifetime shorter than the current block's), merging blocks of the same color. When merging two blocks with lifetimes $x$ and $z$, where the maximum lifetime of the blocks between them is $y$ ($y\le\min(x,z)$ must hold), the merged block has lifetime $x+z-y$. For more details, please refer to the solution code. There also exist $O(n\log n)$ solutions using ordered sets or heaps. Time complexity: $O(n)$.
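To make the deletion process itself concrete, here is a direct simulation of one second at a time (the helper name `strength` is ours). It is far too slow for the real constraints, since it expands the compressed runs, but it is handy for sanity-checking a fast solution on small arrays.

```cpp
#include <vector>
#include <cstddef>

// Brute-force strength: each second, delete s[0] and every s[i] != s[i-1],
// keep the rest in order; count seconds until the array is empty.
int strength(std::vector<int> s) {
    int sec = 0;
    while (!s.empty()) {
        std::vector<int> nxt;
        for (std::size_t i = 1; i < s.size(); i++)
            if (s[i] == s[i - 1]) nxt.push_back(s[i]);  // survivors equal their left neighbor
        s.swap(nxt);
        sec++;
    }
    return sec;
}
```

For example, a uniform run of length $a$ has strength $a$, while an array of alternating values vanishes in a single second.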
[ "brute force", "data structures", "dp" ]
2,300
#include <bits/stdc++.h>
#define ll long long
#define N 3000011
#define pii pair<ll, int>
#define s1 first
#define s2 second
using namespace std;

int t, n, prv[N], nxt[N], b[N];
ll a[N];
priority_queue<pair<ll, int>> pq, del;
ll sum = 0, ans[N];

int main() {
    scanf("%d", &t);
    while (t--) {
        scanf("%d", &n);
        for (int i = 1; i <= n; ++i) scanf("%lld%d", a + i, b + i);
        while (!pq.empty()) pq.pop();
        while (!del.empty()) del.pop();
        nxt[0] = 1;
        prv[n + 1] = n;
        for (int i = 1; i <= n; ++i) pq.push({-a[i], i}), prv[i] = i - 1, nxt[i] = i + 1;
        ll tim = 0;
        int lst = 1;
        for (int _ = 1; _ <= n; ++_) {
            while (!del.empty() && pq.top() == del.top()) pq.pop(), del.pop();
            pii p = pq.top();
            pq.pop();
            int id = p.s2;
            tim = -p.s1;
            if (nxt[id] <= n && b[id] == b[nxt[id]]) {
                a[nxt[id]] += tim;
                pq.push({-a[nxt[id]], nxt[id]});
            }
            if (prv[id] && nxt[id] <= n && b[prv[id]] == b[nxt[id]]) {
                del.push({-a[nxt[id]], nxt[id]});
                a[nxt[id]] -= tim;
            }
            prv[nxt[id]] = prv[id];
            nxt[prv[id]] = nxt[id];
            while (lst < nxt[0]) ans[lst++] = tim;
        }
        for (int i = 1; i <= n; ++i) printf("%lld ", ans[i]);
        putchar(10);
    }
    return 0;
}
2002
F1
Court Blue (Easy Version)
\textbf{This is the easy version of the problem. In this version, $n=m$ and the time limit is lower. You can make hacks only if both versions of the problem are solved.} In the court of the Blue King, Lelle and Flamm are having a performance match. The match consists of several rounds. In each round, either Lelle or Flamm wins. Let $W_L$ and $W_F$ denote the number of wins of Lelle and Flamm, respectively. The Blue King considers a match to be \textbf{successful} if and only if: - after every round, $\gcd(W_L,W_F)\le 1$; - at the end of the match, $W_L\le n, W_F\le m$. Note that $\gcd(0,x)=\gcd(x,0)=x$ for every non-negative integer $x$. Lelle and Flamm can decide to stop the match whenever they want, and the final score of the performance is $l \cdot W_L + f \cdot W_F$. Please help Lelle and Flamm coordinate their wins and losses such that the performance is \textbf{successful}, and the total score of the performance is maximized.
Primes are powerful. Prime gaps are small. We view the problem as a walk on a grid, starting at $(1,1)$. WLOG suppose $l>f$, so only cells $(a,b)$ with $a<b$ need to be considered. Notice that when $n$ is large enough, the largest prime $p\le n$ satisfies $2p>n$; as such, all cells $(p,i)$ with $i<p$ are unblocked and reachable. However, this only bounds one side of the final result. We take it a step further: let $q$ be the second-largest prime $\le n$. By the same logic, we may assume $2q>n$, so all cells $(i,q)$ with $p\le i\le n$ are unblocked and reachable. Thus we have constructed an area where the optimal solution must lie, with dimensions bounded by $n-p$ and $n-q$. We just need to run any brute-force solution (DFS with memoization, or DP) on this area to find the result. If we assume the asymptotic of the prime gap is $O(P(n))$, this yields an $O(n\cdot P(n)^2\cdot\log P(n))$ solution, where the $\log P(n)$ factor comes from taking the gcd of two numbers that differ by $O(P(n))$. This can be optimized to $O(n\cdot P(n)^2)$ by preprocessing gcds. We added the constraint that the $n$'s are pairwise distinct to avoid forcing participants to write memoization. In fact, under the constraints of the problem, the maximum area is $39201$, and the sum of the $10^3$ largest areas is $2.36\times10^7$. Time complexity: $O(P(n)^2\log P(n))$ / $O(P(n)^2)$.
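The "prime gaps are small" fact the bound relies on is easy to verify empirically. A quick sieve (helper name `max_prime_gap` is ours) shows the largest gap between consecutive primes below $10^6$ is only $114$, so the brute-force area stays tiny.

```cpp
#include <vector>
#include <algorithm>

// Sieve of Eratosthenes; returns the largest gap between consecutive
// primes not exceeding n.
int max_prime_gap(int n) {
    std::vector<bool> comp(n + 1, false);
    int last = 2, best = 0;
    for (int i = 2; i <= n; i++) {
        if (comp[i]) continue;          // i is prime
        best = std::max(best, i - last);
        last = i;
        for (long long j = 1LL * i * i; j <= n; j += i) comp[j] = true;
    }
    return best;
}
```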
[ "brute force", "dfs and similar", "dp", "math", "number theory" ]
2,600
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

const int N = 2e7 + 5;
bool ntp[N];
int di[405][405];
bool bad[405][405];
int pos[N], g = 0, m = 3;
int prime[2000005], cnt = 0;

void sieve(int n) {
    int i, j;
    for (i = 2; i <= n; i++) {
        if (!ntp[i]) prime[++cnt] = i;
        for (j = 1; j <= cnt && i * prime[j] <= n; j++) {
            ntp[i * prime[j]] = 1;
            if (!(i % prime[j])) break;
        }
    }
}

int gcd(int x, int y) {
    if (!x) return y;
    if (x <= g && y <= g) return di[x][y];
    return gcd(y % x, x);
}

int main() {
    sieve(2e7);
    int i, j = 1, T, n, a, b, c, u, v;
    ll ans;
    for (i = 3; i <= 2e7; i++) {
        while (j < cnt - 1 && prime[j + 2] <= i) j++;
        g = max(g, i - prime[j] + 1);
        if (prime[j] * 2 <= i) m = i;
        pos[i] = j;
    }
    for (i = 0; i <= g; i++)
        for (j = 0; j <= g; j++)
            if (!i) di[i][j] = j;
            else di[i][j] = di[j % i][i];
    scanf("%d", &T);
    while (T--) {
        scanf("%*d%d%d%d", &n, &a, &b);
        if (n <= m) u = v = 1;
        else {
            u = prime[pos[n] + 1];
            v = prime[pos[n]];
        }
        ans = 0;
        for (i = u; i <= n; i++)
            for (j = v; j <= i; j++) {
                bad[i - u + 1][j - v + 1] = (gcd(i, j) > 1 || (bad[i - u][j - v + 1] && bad[i - u + 1][j - v]));
                if (!bad[i - u + 1][j - v + 1]) ans = max(ans, max((ll)a * i + (ll)b * j, (ll)a * j + (ll)b * i));
            }
        printf("%lld\n", ans);
    }
    return 0;
}
2002
F2
Court Blue (Hard Version)
\textbf{This is the hard version of the problem. In this version, it is not guaranteed that $n=m$, and the time limit is higher. You can make hacks only if both versions of the problem are solved.} In the court of the Blue King, Lelle and Flamm are having a performance match. The match consists of several rounds. In each round, either Lelle or Flamm wins. Let $W_L$ and $W_F$ denote the number of wins of Lelle and Flamm, respectively. The Blue King considers a match to be \textbf{successful} if and only if: - after every round, $\gcd(W_L,W_F)\le 1$; - at the end of the match, $W_L\le n, W_F\le m$. Note that $\gcd(0,x)=\gcd(x,0)=x$ for every non-negative integer $x$. Lelle and Flamm can decide to stop the match whenever they want, and the final score of the performance is $l \cdot W_L + f \cdot W_F$. Please help Lelle and Flamm coordinate their wins and losses such that the performance is \textbf{successful}, and the total score of the performance is maximized.
Try generalizing the solution of F1. Write anything and pray that it will pass because of number theory magic. We generalize the solution in F1. Let $p$ be the largest prime $\le m$ and $q$ be the largest prime $\le\min(n,p-1)$. The problem is that there might be $\gcd(q,i)\neq1$ for some $p+1\le i\le m$, thus invalidating our previous analysis. To solve this, we simply choose $q$ to be the largest integer such that $q\le n$ and $\gcd(q,i)=1$ for all $p+1\le i\le m$. An asymptotic analysis of this solution is as follows: the length of $[p+1,m]$ is $O(P(m))$, and each of these integers has at most $O(\log_nm)$ prime divisors of $O(m)$ magnitude, which means that if we restrict $q$ to primes, we will have to skip at most $O(P(m)\log_nm)$ primes to find the largest $q$. As the density of primes is $O(\frac1{P(n)})$, the asymptotic of $n-q$ will be $O(P(m)\log_nm\cdot P(n))=O(P(m)^2)$; our actual $q$ (which is not restricted to primes) will not be worse than this. Thus, our total area will be $O(P(m)^3)$, and multiplying by the gcd complexity gives an $O(P(m)^3\log m)$ solution. However, the actual area is much lower than this. Under the constraints of the problem, when forcing $p,q$ to take primes, the maximum area is $39960$, and the sum of the $10^3$ largest areas is $3.44\times10^7$. The actual solution will not be worse than this. Because we only need to check whether $\gcd(x,y)=1$, the complexity can actually be optimized to $O(P(m)^3)$ with some sieves: namely, iterating over the prime divisors $d$ of the integers in $[p,m]$ and $[q,n]$ and marking all cells which have $d$ as a common divisor. This solution is far from optimal. We invite you to continue optimizing your solutions and try to minimize the number of cells visited in each query :) Time complexity: $O(P(m)^3\log m)$. Keep $p$ the same, set $q$ to $p-L$ and only keep reachable cells in $[p,n]\times [q,m]$, where $L$ is some constant ($100$ should work). We found this solution during testing, tried, and failed to hack it.
Keep $p$ the same, do dfs from each cell $(n,p),(n-1,p),\cdots$, prioritizing increasing $W_L$ over increasing $W_F$, and stop the process the first time you reach any cell $(x,m)$; take the maximum of all cells visited. This should not be worse than the intended solution, and actually runs quite fast. Simply take all cells in $[n-L,n]\times [m-L,m]$ and mark everything outside as reachable. $L=50$ works. We found this solution the day before the round; we don't know how to hack it either. UPD: $L=50$ was hacked. Hats off to the hacker. Do dfs with pruning: run dfs starting at $(n,m)$, and return when the cell is $(p,i)$ (i.e. obviously reachable because of primes), or when the value of the cell is smaller than the current answer. Add some memoization and it passes.
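As a concrete illustration of the $p$/$q$ selection described above, here is a naive sketch (our own, written for exposition; the function names are ours, and the editorial's sieve-based approach is faster): it picks the largest prime $p \le m$, then the largest integer $q \le n$ coprime to every integer in $[p+1, m]$.

```cpp
#include <cassert>
#include <numeric>  // std::gcd (C++17)

// Naive sketch of the p/q selection described above (illustration only):
// p is the largest prime <= m; q is the largest integer <= n that is
// coprime to every integer in [p + 1, m].
bool is_prime(long long x) {
    if (x < 2) return false;
    for (long long d = 2; d * d <= x; ++d)
        if (x % d == 0) return false;
    return true;
}
long long largest_prime_le(long long m) {
    while (!is_prime(m)) --m;  // assumes m >= 2
    return m;
}
long long choose_q(long long n, long long m) {
    long long p = largest_prime_le(m);
    for (long long q = n; q >= 1; --q) {
        bool ok = true;
        for (long long i = p + 1; i <= m && ok; ++i)
            if (std::gcd(q, i) != 1) ok = false;
        if (ok) return q;
    }
    return 1;  // q = 1 is coprime to everything
}
```

For example, with $n = m = 10$ we get $p = 7$, and since each of $8, 9, 10$ shares a factor with every larger candidate, $q = 7$ as well.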
[ "brute force", "dp", "math", "number theory" ]
2,800
#include <bits/stdc++.h> #define ll long long #define N 20000011 using namespace std; int t, n, m, a, b; bool is[N]; int pr[N / 10]; int gcd(int a, int b) { while (b) a %= b, swap(a, b); return a; } ll ans = 0; bool vis[1011][1011]; pair<int, int> vv[200011]; int vn, c; bool flg = 0; inline ll V(int i, int j) { return i <= n ? 1ll * max(i, j) * max(a, b) + 1ll * min(i, j) * min(a, b) : 1ll * i * b + 1ll * j * a; } void dfs(int i, int j) { ++c; bool mk = gcd(i, j) == 1; if (!mk) return; ans = max(ans, V(i, j)); vis[m - i][n - j] = 1; vv[++vn] = {i, j}; if (j < n && !vis[m - i][n - j - 1]) dfs(i, j + 1); if (i == m || flg) { flg = 1; return; } if (i < m && !vis[m - i - 1][n - j]) dfs(i + 1, j); } int main() { is[0] = is[1] = 1; for (int i = 2; i < N; ++i) { if (!is[i]) pr[++pr[0]] = i; for (int j = 1; j <= pr[0] && i * pr[j] < N; ++j) { is[i * pr[j]] = 1; if (i % pr[j] == 0) break; } } scanf("%d", &t); while (t--) { scanf("%d%d%d%d", &n, &m, &a, &b); int p; if (m <= 10) p = 1; else { p = m; while (is[p]) --p; } vn = 0; ans = 0; flg = 0; c = 0; for (int i = min(n, p - (p > 1));; --i) { assert(i > 0); ans = max(ans, V(p, i)); if (p < m) dfs(p + 1, i); else break; if (flg) break; } for (int i = 1; i <= vn; ++i) vis[m - vv[i].first][n - vv[i].second] = 0; printf("%lld\n", ans); } return 0; }
2002
G
Lattice Optimizing
Consider a grid graph with $n$ rows and $n$ columns. Let the cell in row $x$ and column $y$ be $(x,y)$. There exists a directed edge from $(x,y)$ to $(x+1,y)$, with non-negative integer value $d_{x,y}$, for all $1\le x < n, 1\le y \le n$, and there also exists a directed edge from $(x,y)$ to $(x,y+1)$, with non-negative integer value $r_{x,y}$, for all $1\le x \le n, 1\le y < n$. Initially, you are at $(1,1)$, with an empty set $S$. You need to walk along the edges and eventually reach $(n,n)$. Whenever you pass an edge, its value will be inserted into $S$. Please maximize the MEX$^{\text{∗}}$ of $S$ when you reach $(n,n)$. \begin{footnotesize} $^{\text{∗}}$The MEX (minimum excluded) of an array is the smallest non-negative integer that does not belong to the array. For instance: - The MEX of $[2,2,1]$ is $0$, because $0$ does not belong to the array. - The MEX of $[3,1,0,1]$ is $2$, because $0$ and $1$ belong to the array, but $2$ does not. - The MEX of $[0,3,1,2]$ is $4$, because $0, 1, 2$, and $3$ belong to the array, but $4$ does not. \end{footnotesize}
We apologize for unintended solutions passing, and intended solutions failing with large constants. Brute force runs very fast on $n=18$, which forced us to increase constraints. Meet in the middle with a twist. Union of sets is hard. Disjoint union of sets is easy. The problem hints strongly at meet in the middle; on each side, there will be $O(2^n)$ paths. The twist is on merging: we are given two sequences of sets $S_1,S_2,\cdots$ and $T_1,T_2,\cdots$, and we have to find the maximum $\textrm{MEX}(S_i\cup T_j)$. If we enumerate over the maximum MEX $P$, we only need to check whether there exist two sets such that $\{ 0,1,\cdots,P-1\} \subseteq S_i\cup T_j$. Instead of meeting in the middle at $n$, we split the path into parts of length $Bn$ and $(2-B)n$. For each $S_i$, we put all its subsets $S^\prime_i$ into a hashmap. When checking MEX $P$, we iterate over all $T_j$, and check whether $\{ 0,1,\cdots,P-1\} \setminus T_j$ is in the hashmap. This gives us an $O((2^{2Bn}+2^{(2-B)n})\cdot n)$ solution, which is optimal at $B=\frac23$, where the complexity is $O(2^{\frac43n}\cdot n)$. We can substitute enumerating $P$ with binary search, yielding $O(2^{\frac43n}\cdot\log n)$, or with checking whether $P$ can be increased each time, yielding $O(2^{\frac43n})$. Instead of hashmaps, you can also insert all $S$ and $T$'s into a trie, and run a brute force dfs to search for MEX $P$. It can be proven that the complexity is also $O(2^{\frac43n}\cdot n)$. The intended solution is $O(2^{\frac43 n})$; however, solutions with slightly higher complexity can pass with good optimizations. Time complexity: $O(2^{\frac43n})$.
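A minimal sketch of the hashmap merge described above (our own illustration, not the contestant code below; it uses the incremental "can $P$ be increased" check, and assumes values fit in a 64-bit mask):

```cpp
#include <cassert>
#include <unordered_set>
#include <vector>

// Sketch of the merging step above (illustration only). Given the
// value-sets of the two halves as bitmasks, find the largest P such that
// some S_i together with some T_j covers {0, 1, ..., P - 1}.
int best_mex(const std::vector<unsigned long long>& S,
             const std::vector<unsigned long long>& T) {
    // Put every subset of every S_i into a hash set.
    std::unordered_set<unsigned long long> subs;
    for (unsigned long long s : S)
        for (unsigned long long x = s;; x = (x - 1) & s) {  // submask trick
            subs.insert(x);
            if (x == 0) break;
        }
    int best = 0;
    for (unsigned long long t : T) {
        // Monotone in P, so we only ever try to improve on the best so far.
        int P = best;
        while (P < 62 && subs.count(((1ULL << (P + 1)) - 1) & ~t))
            ++P;  // {0..P} minus the values covered by t is some subset S'_i
        best = P;
    }
    return best;
}
```

For instance, with $S = \{\{0,1\}\}$ and $T = \{\{2\}\}$ the union covers $\{0,1,2\}$, so the answer is $3$.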
[ "bitmasks", "brute force", "hashing", "meet-in-the-middle" ]
3,400
#include <bits/stdc++.h> using namespace std; #define pb push_back #define mp make_pair #define fi first #define se second #define int long long void dbg_out() { cerr << endl; } template <typename H, typename... T> void dbg_out(H h, T... t) { cerr << ' ' << h; dbg_out(t...); } #define dbg(...) { cerr << #__VA_ARGS__ << ':'; dbg_out(__VA_ARGS__); } using ll = long long; mt19937_64 rng(chrono::steady_clock::now().time_since_epoch().count()); const int MAXN = 3e7 + 5; const int MOD = 1e9 + 7; //998244353; const int INF = 0x3f3f3f3f; const ll INF64 = ll(4e18) + 5; int D[40][40]; int R[40][40]; int L; int32_t trie[MAXN][2]; int n; int li = 1; void add(int x) { int at = 0; int K = 2 * n - 2; for (int i = 0; i <= K; i++) { int id = (x >> i) & 1; if (trie[at][id] == 0) { trie[at][id] = li++; } at = trie[at][id]; } } void limpa() { for (int i = 0; i < li; i++) { trie[i][0] = trie[i][1] = 0; } li = 1; } int query(int at, int x, int dep) { int id = (x >> dep) & 1; if (id == 0) { if (trie[at][1]) { return query(trie[at][1], x, dep + 1); } return dep; } int ans = dep; if (trie[at][0]) ans = max(ans, query(trie[at][0], x, dep + 1)); if (trie[at][1]) ans = max(ans, query(trie[at][1], x, dep + 1)); while ((x >> ans) & 1) ans++; return ans; } void rec_in(int i, int j, int tar1, int tar2, int v) { if (i > tar1 || j > tar2) return; if (i == tar1 && j == tar2) { add(v); } int right = v | (1ll << R[i][j]); int down = v | (1ll << D[i][j]); rec_in(i + 1, j, tar1, tar2, down); rec_in(i, j + 1, tar1, tar2, right); } int ANS = 0; void rec_out(int i, int j, int tar1, int tar2, int v) { if (i < tar1 || j < tar2) return; if (i == tar1 && j == tar2) { ANS = max(ANS, query(0, v, 0)); return; } int left = v | (1ll << R[i][j - 1]); int up = v | (1ll << D[i - 1][j]); rec_out(i - 1, j, tar1, tar2, up); rec_out(i, j - 1, tar1, tar2, left); } void solve() { ANS = 0; for (int i = 1; i <= li; i++) trie[i][0] = trie[i][1] = 0; li = 1; cin >> n; L = 4 * n / 3; for (int i = 0; i < n - 1; i++) { for (int j 
= 0; j < n; j++) { cin >> D[i][j]; } } if (L > n) { L--; } if (L > n) { L--; } if (L > n) { L--; } for (int i = 0; i < n; i++) { for (int j = 0; j < n - 1; j++) { cin >> R[i][j]; } } for (int i = 0; i < n; i++) for (int j = 0; j < n; j++) if (i + j == L) { limpa(); rec_in(0, 0, i, j, 0); rec_out(n - 1, n - 1, i, j, 0); } cout << ANS << '\n'; } signed main() { ios::sync_with_stdio(false); cin.tie(NULL); int t = 1; cin >> t; while (t--) { solve(); } return 0; }
2002
H
Counting 101
It's been a long summer's day, with the constant chirping of cicadas and the heat which never seemed to end. Finally, it has drawn to a close. The showdown has passed, the gates are open, and only a gentle breeze is left behind. Your predecessors had taken their final bow; it's your turn to take the stage. Sorting through some notes that were left behind, you found a curious statement named \textbf{Problem 101}: - Given a positive integer sequence $a_1,a_2,\ldots,a_n$, you can operate on it any number of times. In an operation, you choose three consecutive elements $a_i,a_{i+1},a_{i+2}$, and merge them into one element $\max(a_i+1,a_{i+1},a_{i+2}+1)$. Please calculate the maximum number of operations you can do without creating an element greater than $m$. After some thought, you decided to propose the following problem, named \textbf{Counting 101}: - Given $n$ and $m$. For each $k=0,1,\ldots,\left\lfloor\frac{n-1}{2}\right\rfloor$, please find the number of integer sequences $a_1,a_2,\ldots,a_n$ with elements in $[1, m]$, such that when used as input for \textbf{Problem 101}, the answer is $k$. As the answer can be very large, please print it modulo $10^9+7$.
Try to solve Problem 101 with any approach. Can you express the states in the solution for Problem 101 with some simple structures? Maybe dp of dp? Try finding some unnecessary states and pruning them, to knock an $n$ off the complexity. We analyse Problem 101 first. A dp solution: let $f_{l,r,v}$ be the minimum number of elements left after some operations, if we only consider elements in $a[l:r]$, without creating any elements $>v$. If $\max(a[l:r])<v$, then the answer is $1$ if $r-l+1$ is odd, and $2$ otherwise: we simply repeatedly perform operations with the middle element as the center, and it is easy to see that this is optimal. If $\max(a[l:r])=v$, then $a[l:r]$ will be partitioned into blocks by the elements equal to $v$. Denote the lengths of the blocks as $y_1,y_2,\cdots,y_l$. Let $b_i$ be the number of operations that have the $i$-th element with value $v$ as their center. We can perform a dp as follows: let $g_{i,j}$ be the minimum number of elements left if we have already considered the first $i$ blocks, and $b_i=j$. When adding a new block, we need to know two things: $[l^\prime,r^\prime]$, the minimum/maximum number of elements of the new block after some operations, without creating an element greater than $v-1$; and $j_2$, the value of $b_{i+1}$. The transition of the dp will be: $g_{i+1,j_2}=g_{i,j_1}+cost(j_1+j_2,l^\prime,r^\prime)$, where $cost(j,l,r)$ is defined as follows: $cost(j,l,r)=\left\{ \begin{aligned} +\infty,&&j>r\\ 0,&&l\le j\le r, j\equiv l\pmod2\\ 1,&&j\le r,j\not\equiv l\pmod2\\ 2,&&j<l, j\equiv l\pmod2\\ \end{aligned} \right.$ The $cost$ function follows from the case where $\max(a)<v$. Now we have a solution to Problem 101, and we consider generalizing it to Counting 101. We make some observations regarding the structure of $g$.
If you printed a table of $g$, you would discover that for each $g_i$, if you subtract the minimum element from all $g_{i,j}$ in this row, you will get something like: $[\cdots,2,1,2,1,2,1,0,1,0,1,0,1,0,1,2,1,2,\cdots]$. We can prove this by induction and some casework, which is omitted for brevity. If a $g_i$ satisfies this property, then we can express it with a triple $(l,r,mn)$, where $mn$ is the minimum element of $g_i$, and $l,r$ are the minimum/maximum $j$ such that $g_{i,j}=mn$. Now that we can express the structure of each $g_i$ with a triple $(l,r,mn)$, we can run a dp of dp. Our state will be $dp_{i,v,l,r,mn}$, where $i$ is the current length, $v$ is the upper bound, and $(l,r,mn)$ describes the current $g$. For each transition, we need to enumerate $l^\prime,r^\prime$, and the new $(l,r,mn)$ can be found in $O(1)$ with some casework. This gives us an $O(n^6m)$ solution. Now to optimize the solution, we reduce the number of viable states. Suppose we have a state $+(l,r,mn)$; we claim that the answer will remain the same if we replace it with $+(l,\infty,mn)+(l\bmod2,r,mn)-(l\bmod2,\infty,mn)$. We prove this by examining their behaviour when we add any number of blocks after them: if the answer remains unchanged for any blocks we can add, then the claim stands. Suppose we are at $g_i$, and we will add a sequence of blocks $S$ after $g_i$. The final answer can be found by running a backwards dp similar to $g$ on $\text{rev}(S)$, and merging with $g_i$ in the middle. If we denote the backwards dp as $g^\prime$, then the answer will be $\min(g_j+g^\prime_j)$ over all $j$. It can be shown after some brief casework (using $g$'s structure) that there exists an optimal $j$ where $g_{i,j}$ is the minimum of $g_i$; thus the answer is just a range (with a certain parity) min of $g^\prime$, plus $mn$. Because $g^\prime$ also follows $g$'s structure, any range will be first decreasing, then increasing.
It can be shown from these properties that $\min(g^\prime[l:r])=\min(g^\prime[l:\infty])+\min(g^\prime[l\bmod2:r])-\min(g^\prime[l\bmod2:\infty])$. Thus, our answer will not change when substituting the states. After substituting, the total number of different $l,r$ pairs will be $O(n)$, which yields an $O(n^5m)$ solution. The solution can also be optimized to $O(n^4m)$ using Lagrange interpolation, but that actually runs slower than the $O(n^5m)$ solution because of large constants (or bad implementation by the setters).
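The $cost$ function from the transition above translates directly into code; a small self-contained version (our own, with `INF` standing in for $+\infty$):

```cpp
#include <cassert>

// The cost function from the dp transition above, case by case.
// j > r is impossible; otherwise the penalty depends on whether j lies in
// [l, r] and whether its parity matches l's.
const int INF = 1 << 30;
int cost(int j, int l, int r) {
    if (j > r) return INF;                     // impossible
    if (l <= j && (j - l) % 2 == 0) return 0;  // l <= j <= r, same parity
    if ((j - l) % 2 != 0) return 1;            // j <= r, different parity
    return 2;                                  // j < l, same parity
}
```

Note the parity tests `(j - l) % 2 == 0` / `!= 0` are sign-safe in C++ even when `j < l`.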
[ "greedy" ]
3,500
#include <bits/stdc++.h> #pragma comment(linker, "/stack:200000000") #pragma GCC optimize("Ofast") #pragma GCC target("sse,sse2,sse3,ssse3,sse4,popcnt,abm,mmx,avx,tune=native") #define L(i, j, k) for(int i = (j); i <= (k); ++i) #define R(i, j, k) for(int i = (j); i >= (k); --i) #define ll long long #define vi vector <int> #define sz(a) ((int) (a).size()) #define me(f, x) memset(f, x, sizeof(f)) #define ull unsigned long long #define pb emplace_back using namespace std; const int N = 133, mod = 1e9 + 7; struct mint { int x; inline mint(int o = 0) { x = o; } inline mint &operator = (int o) { return x = o, *this; } inline mint &operator += (mint o) { return (x += o.x) >= mod && (x -= mod), *this; } inline mint &operator -= (mint o) { return (x -= o.x) < 0 && (x += mod), *this; } inline mint &operator *= (mint o) { return x = (ll) x * o.x % mod, *this; } inline mint &operator ^= (int b) { mint w = *this; mint ret(1); for (; b; b >>= 1, w *= w) if (b & 1) ret *= w; return x = ret.x, *this; } inline mint &operator /= (mint o) { return *this *= (o ^= (mod - 2)); } friend inline mint operator + (mint a, mint b) { return a += b; } friend inline mint operator - (mint a, mint b) { return a -= b; } friend inline mint operator * (mint a, mint b) { return a *= b; } friend inline mint operator / (mint a, mint b) { return a /= b; } friend inline mint operator ^ (mint a, int b) { return a ^= b; } }; inline mint qpow(mint x, int y = mod - 2) { return x ^ y; } mint fac[N], ifac[N], inv[N]; void init(int x) { fac[0] = ifac[0] = inv[1] = 1; L(i, 2, x) inv[i] = (mod - mod / i) * inv[mod % i]; L(i, 1, x) fac[i] = fac[i - 1] * i, ifac[i] = ifac[i - 1] * inv[i]; } mint C(int x, int y) { return x < y || y < 0 ? 0 : fac[x] * ifac[y] * ifac[x - y]; } inline mint sgn(int x) { return (x & 1) ? 
mod - 1 : 1; } int n, m; mint f[N][N]; mint g[N][N]; mint h1[N][N][N]; mint h2[N][N][N]; mint pw[N][N]; int ans[N][N][N]; int main() { ios :: sync_with_stdio(false); cin.tie(0); cout.tie(0); n = 130, m = 30; // n = m = 10; L(i, 0, max(n, m)) { pw[i][0] = 1; L(j, 1, max(n, m)) pw[i][j] = pw[i][j - 1] * i; } f[0][0] = 1; int c1 = 0, c2 = 0; L(test, 1, m) { me(g, 0); int up = n; L(i, 0, up + 1) L(j, 0, i) L(l, 0, i) h1[i][j][l] = h2[i][j][l] = 0; h2[0][0][0] = 1; L(i, 0, up + 1) { L(j, 0, i) L(l, 0, i) h1[i][j][l & 1] -= h1[i][j][l]; L(j, 0, i) { for (int l = (i - j) & 1; l <= i; l += 2) if (h1[i][j][l].x) { L(k, 0, up - i) { if (l > k) { int tr = k - !(l & 1); if (tr < 0) h2[i + k + 1][j + 3][0] += h1[i][j][l] * pw[test - 1][k]; else h2[i + k + 1][j + 2][tr] += h1[i][j][l] * pw[test - 1][k]; } else { h2[i + k + 1][j + 1][k - l] += h1[i][j][l] * pw[test - 1][k]; } } } for (int r = (i - j) & 1; r <= i; r += 2) if (h2[i][j][r].x) { int l = r & 1; L(k, 0, up - i) { if (l > k) { int tr = k - !(r & 1); if (tr < 0) h2[i + k + 1][j + 3][0] += h2[i][j][r] * pw[test - 1][k]; else h2[i + k + 1][j + 2][tr] += h2[i][j][r] * pw[test - 1][k]; } else { h2[i + k + 1][j + 1][k - l] += h2[i][j][r] * pw[test - 1][k]; L(d, r + 1, k) if (f[k][d].x) h1[i + k + 1][j + 1][d - r] += h2[i][j][r] * f[k][d]; } } } } } // cout << "H = " << h[1][1][0][0].x << endl; L(i, 1, up + 1) L(j, 1, i) { L(l, 0, i) if (h1[i][j][l].x) { if (!l) g[i - 1][j - 1] += h1[i][j][l]; else g[i - 1][j - 1 + (l % 2 == 0 ? 2 : 1)] += h1[i][j][l]; } L(r, 0, i) if (h2[i][j][r].x) { if (!(r & 1)) g[i - 1][j - 1] += h2[i][j][r]; else g[i - 1][j - 1 + (r % 2 == 0 ? 2 : 1)] += h2[i][j][r]; } } L(i, 1, up) { mint vl = 0; if (i == 1) vl = test; else if (i % 2 == 1) { vl += pw[test - 1][i]; L(p, 1, i - 2) { int l = p, r = i - p - 1; if (l > r) swap(l, r); L(j, 1, l) { vl += f[r][j] * pw[test - 1][l]; } } } g[i][1] = vl; mint sum = 0; L(j, 1, i) sum += g[i][j]; int pos = (i & 1) ? 
3 : 2; g[i][pos] += pw[test][i] - sum; } swap(f, g); for (int i = 0; i <= n; i++) { for (int j = 0; j <= n; j++) { ans[test][i][j] = f[i][j].x; } } } // cout << "clock = " << clock() << endl; int T; cin >> T; while (T--) { int N, M; cin >> N >> M; L(i, 0, (N - 1) / 2) cout << ans[M][N][N - i * 2] << " "; cout << '\n'; } return 0; }
2003
A
Turtle and Good Strings
Turtle thinks a string $s$ is a good string if there exists a sequence of strings $t_1, t_2, \ldots, t_k$ ($k$ is an arbitrary integer) such that: - $k \ge 2$. - $s = t_1 + t_2 + \ldots + t_k$, where $+$ represents the concatenation operation. For example, $abc = a + bc$. - For all $1 \le i < j \le k$, the first character of $t_i$ \textbf{isn't equal to} the last character of $t_j$. Turtle is given a string $s$ consisting of lowercase Latin letters. Please tell him whether the string $s$ is a good string!
A necessary condition for $s$ to be good is that $s_1 \ne s_n$: taking $i = 1$ and $j = k$ in the third condition, the first character of $t_1$ is $s_1$ and the last character of $t_k$ is $s_n$, so they must differ. For a string $s$ where $s_1 \ne s_n$, let $t_1$ be the string composed of just the single character $s_1$, and let $t_2$ be the string composed of the $n - 1$ characters from $s_2$ to $s_n$. In this way, the condition is satisfied. Therefore, if $s_1 \ne s_n$, the answer is "Yes"; otherwise, the answer is "No". Time complexity: $O(n)$ per test case.
[ "greedy", "strings" ]
800
#include <cstdio> const int maxn = 110; int n; char s[maxn]; void solve() { scanf("%d%s", &n, s + 1); puts(s[1] != s[n] ? "Yes" : "No"); } int main() { int T = 1; scanf("%d", &T); while (T--) { solve(); } return 0; }
2003
B
Turtle and Piggy Are Playing a Game 2
Turtle and Piggy are playing a game on a sequence. They are given a sequence $a_1, a_2, \ldots, a_n$, and Turtle goes first. Turtle and Piggy alternate in turns (so, Turtle does the first turn, Piggy does the second, Turtle does the third, etc.). The game goes as follows: - Let the current length of the sequence be $m$. If $m = 1$, the game ends. - If the game does not end and it's Turtle's turn, then Turtle must choose an integer $i$ such that $1 \le i \le m - 1$, set $a_i$ to $\max(a_i, a_{i + 1})$, and remove $a_{i + 1}$. - If the game does not end and it's Piggy's turn, then Piggy must choose an integer $i$ such that $1 \le i \le m - 1$, set $a_i$ to $\min(a_i, a_{i + 1})$, and remove $a_{i + 1}$. Turtle wants to \textbf{maximize} the value of $a_1$ in the end, while Piggy wants to \textbf{minimize} the value of $a_1$ in the end. Find the value of $a_1$ in the end if both players play optimally. You can refer to notes for further clarification.
The problem can be rephrased as follows: the first player can remove a number adjacent to a larger number, while the second player can remove a number adjacent to a smaller number. To maximize the value of the last remaining number, in the $i$-th round of the first player's moves, he will remove the $i$-th smallest number in the sequence, and this is always achievable. Similarly, in the $i$-th round of the second player's moves, he will remove the $i$-th largest number. Thus, the answer is the $\left\lfloor\frac{n}{2}\right\rfloor + 1$-th smallest number in the original sequence. Time complexity: $O(n \log n)$ per test case.
[ "games", "greedy", "sortings" ]
800
#include <bits/stdc++.h> using namespace std; const int maxn = 100100; int n, a[maxn]; void solve() { scanf("%d", &n); for (int i = 1; i <= n; ++i) { scanf("%d", &a[i]); } sort(a + 1, a + n + 1, greater<int>()); printf("%d\n", a[(n + 1) / 2]); } int main() { int T = 1; scanf("%d", &T); while (T--) { solve(); } return 0; }
2003
C
Turtle and Good Pairs
Turtle gives you a string $s$, consisting of lowercase Latin letters. Turtle considers a pair of integers $(i, j)$ ($1 \le i < j \le n$) to be a pleasant pair if and only if there exists an integer $k$ such that $i \le k < j$ and \textbf{both} of the following two conditions hold: - $s_k \ne s_{k + 1}$; - $s_k \ne s_i$ \textbf{or} $s_{k + 1} \ne s_j$. Besides, Turtle considers a pair of integers $(i, j)$ ($1 \le i < j \le n$) to be a good pair if and only if $s_i = s_j$ \textbf{or} $(i, j)$ is a pleasant pair. Turtle wants to reorder the string $s$ so that the number of good pairs is \textbf{maximized}. Please help him!
Partition the string $s$ into several maximal contiguous segments of identical characters, denoted as $[l_1, r_1], [l_2, r_2], \ldots, [l_m, r_m]$. For example, the string "aabccc" can be divided into $[1, 2], [3, 3], [4, 6]$. We can observe that a pair $(i, j)$ is a good pair if and only if $i$ and $j$ do not lie in two adjacent segments (pairs within a single segment are good, since $s_i = s_j$). Let $a_i = r_i - l_i + 1$. Then, the number of good pairs is $\frac{n(n - 1)}{2} - \sum\limits_{i = 1}^{m - 1} a_i \cdot a_{i + 1}$. Hence, the task reduces to minimizing $\sum\limits_{i = 1}^{m - 1} a_i \cdot a_{i + 1}$. If $s$ consists of only one character, simply output the original string. Otherwise, for $m \ge 2$, it can be inductively proven that $\sum\limits_{i = 1}^{m - 1} a_i \cdot a_{i + 1} \ge n - 1$, and this minimum value of $n - 1$ can be achieved. As for the construction, let the maximum frequency of any character be $x$, and the second-highest frequency be $y$. Begin by placing $x - y$ instances of the most frequent character at the start. Then, there are $y$ remaining instances of both the most frequent and the second most frequent characters. Next, sort all characters by frequency in descending order and alternate between them when filling the positions (ignoring any character once it's fully used). This guarantees that, except for the initial segment of length $\ge 1$, all other segments have a length of exactly $1$. Time complexity: $O(n + |\Sigma| \log |\Sigma|)$ per test case, where $|\Sigma| = 26$.
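To make the counting observation above concrete, here is a small helper (our own illustration, not part of the solution code) that computes the number of good pairs from the segment decomposition, i.e. $\frac{n(n-1)}{2}$ minus the sum of products of adjacent segment lengths:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Number of good pairs of a string, counted via its maximal equal-character
// segments: total pairs minus pairs lying in two adjacent segments.
long long good_pairs_via_segments(const std::string& s) {
    long long n = s.size();
    std::vector<long long> a;  // lengths of maximal equal segments
    for (size_t i = 0; i < s.size();) {
        size_t j = i;
        while (j < s.size() && s[j] == s[i]) ++j;
        a.push_back(j - i);
        i = j;
    }
    long long bad = 0;
    for (size_t i = 0; i + 1 < a.size(); ++i) bad += a[i] * a[i + 1];
    return n * (n - 1) / 2 - bad;
}
```

For "aabccc" the segment lengths are $2, 1, 3$, so the count is $15 - (2 \cdot 1 + 1 \cdot 3) = 10$.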
[ "constructive algorithms", "greedy", "sortings", "strings" ]
1,200
#include <bits/stdc++.h> #define pb emplace_back #define fst first #define scd second #define mkp make_pair #define mems(a, x) memset((a), (x), sizeof(a)) using namespace std; typedef long long ll; typedef double db; typedef unsigned long long ull; typedef long double ldb; typedef pair<int, int> pii; const int maxn = 1000100; int n; pii a[99]; char s[maxn]; void solve() { scanf("%d%s", &n, s + 1); for (int i = 0; i < 26; ++i) { a[i] = mkp(0, i); } for (int i = 1; i <= n; ++i) { ++a[s[i] - 'a'].fst; } sort(a, a + 26, greater<pii>()); queue<pii> q; while (a[0].fst > a[1].fst) { putchar('a' + a[0].scd); --a[0].fst; } for (int i = 0; i < 26; ++i) { q.push(a[i]); } while (q.size()) { pii p = q.front(); q.pop(); if (!p.fst) { continue; } putchar('a' + p.scd); --p.fst; q.push(p); } putchar('\n'); } int main() { int T = 1; scanf("%d", &T); while (T--) { solve(); } return 0; }
2003
D1
Turtle and a MEX Problem (Easy Version)
\textbf{The two versions are different problems. In this version of the problem, you can choose the same integer twice or more. You can make hacks only if both versions are solved.} One day, Turtle was playing with $n$ sequences. Let the length of the $i$-th sequence be $l_i$. Then the $i$-th sequence was $a_{i, 1}, a_{i, 2}, \ldots, a_{i, l_i}$. Piggy gave Turtle a problem to solve when Turtle was playing. The statement of the problem was: - There was a non-negative integer $x$ at first. Turtle would perform an arbitrary number (possibly zero) of operations on the integer. - In each operation, Turtle could choose an integer $i$ such that $1 \le i \le n$, and set $x$ to $\text{mex}^{\dagger}(x, a_{i, 1}, a_{i, 2}, \ldots, a_{i, l_i})$. - Turtle was asked to find the answer, which was the \textbf{maximum} value of $x$ after performing an arbitrary number of operations. Turtle solved the above problem without difficulty. He defined $f(k)$ as the answer to the above problem when the initial value of $x$ was $k$. Then Piggy gave Turtle a non-negative integer $m$ and asked Turtle to find the value of $\sum\limits_{i = 0}^m f(i)$ (i.e., the value of $f(0) + f(1) + \ldots + f(m)$). Unfortunately, he couldn't solve this problem. Please help him! $^{\dagger}\text{mex}(c_1, c_2, \ldots, c_k)$ is defined as the smallest \textbf{non-negative} integer $x$ which does not occur in the sequence $c$. For example, $\text{mex}(2, 2, 0, 3)$ is $1$, $\text{mex}(1, 2)$ is $0$.
Let the smallest non-negative integer that does not appear in the $i$-th sequence be denoted as $u_i$, and the second smallest non-negative integer that does not appear as $v_i$. In an operation choosing index $i$, if $x = u_i$, then $x$ changes to $v_i$; otherwise, $x$ changes to $u_i$. It can be observed that if $x$ is operated on at least once, its final value will not exceed $\max\limits_{i = 1}^n v_i$. To achieve this maximum value, one can select the index corresponding to this maximum value twice. Let $k = \max\limits_{i = 1}^n v_i$; then the answer is $\sum\limits_{i = 0}^m \max(i, k)$. If $k \ge m$, the answer is $(m + 1) \cdot k$; otherwise, the answer is $\sum\limits_{i = 0}^{k - 1} k + \sum\limits_{i = k}^m i = k^2 + \frac{(m + k)(m - k + 1)}{2}$. Time complexity: $O(\sum l_i)$ per test case.
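A quick sanity check (ours, for exposition) of the closed form above against direct summation of $\sum_{i=0}^{m}\max(i, k)$:

```cpp
#include <algorithm>
#include <cassert>

// Closed form for sum_{i=0}^{m} max(i, k), as derived above.
long long sum_f(long long m, long long k) {
    return k >= m ? (m + 1) * k : k * k + (m + k) * (m - k + 1) / 2;
}

// Direct summation, for checking the formula on small inputs.
long long sum_f_brute(long long m, long long k) {
    long long s = 0;
    for (long long i = 0; i <= m; ++i) s += std::max(i, k);
    return s;
}
```

For example, $m = 10$, $k = 3$ gives $3^2 + \frac{13 \cdot 8}{2} = 61$, matching the direct sum.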
[ "greedy", "math" ]
1,500
#include <bits/stdc++.h> #define pb emplace_back #define fst first #define scd second #define mkp make_pair #define mems(a, x) memset((a), (x), sizeof(a)) using namespace std; typedef long long ll; typedef double db; typedef unsigned long long ull; typedef long double ldb; typedef pair<ll, ll> pii; const int maxn = 200100; ll n, m, a[maxn]; bool vis[maxn]; void solve() { scanf("%lld%lld", &n, &m); ll k = 0; while (n--) { int t; scanf("%d", &t); for (int i = 0; i <= t + 5; ++i) { vis[i] = 0; } for (int i = 1; i <= t; ++i) { scanf("%lld", &a[i]); if (a[i] <= t + 4) { vis[a[i]] = 1; } } ll mex = 0; while (vis[mex]) { ++mex; } vis[mex] = 1; while (vis[mex]) { ++mex; } k = max(k, mex); } printf("%lld\n", k >= m ? (m + 1) * k : k * k + (m + k) * (m - k + 1) / 2); } int main() { int T = 1; scanf("%d", &T); while (T--) { solve(); } return 0; }
2003
D2
Turtle and a MEX Problem (Hard Version)
\textbf{The two versions are different problems. In this version of the problem, you can't choose the same integer twice or more. You can make hacks only if both versions are solved.} One day, Turtle was playing with $n$ sequences. Let the length of the $i$-th sequence be $l_i$. Then the $i$-th sequence was $a_{i, 1}, a_{i, 2}, \ldots, a_{i, l_i}$. Piggy gave Turtle a problem to solve when Turtle was playing. The statement of the problem was: - There was a non-negative integer $x$ at first. Turtle would perform an arbitrary number (possibly zero) of operations on the integer. - In each operation, Turtle could choose an integer $i$ such that $1 \le i \le n$ and \textbf{$i$ wasn't chosen before}, and set $x$ to $\text{mex}^{\dagger}(x, a_{i, 1}, a_{i, 2}, \ldots, a_{i, l_i})$. - Turtle was asked to find the answer, which was the \textbf{maximum} value of $x$ after performing an arbitrary number of operations. Turtle solved the above problem without difficulty. He defined $f(k)$ as the answer to the above problem when the initial value of $x$ was $k$. Then Piggy gave Turtle a non-negative integer $m$ and asked Turtle to find the value of $\sum\limits_{i = 0}^m f(i)$ (i.e., the value of $f(0) + f(1) + \ldots + f(m)$). Unfortunately, he couldn't solve this problem. Please help him! $^{\dagger}\text{mex}(c_1, c_2, \ldots, c_k)$ is defined as the smallest \textbf{non-negative} integer $x$ which does not occur in the sequence $c$. For example, $\text{mex}(2, 2, 0, 3)$ is $1$, $\text{mex}(1, 2)$ is $0$.
Please first read the solution for problem D1. Construct a directed graph: for the $i$-th sequence, add a directed edge from $u_i$ to $v_i$. In terms of this graph, choosing index $i$ when $x = u_i$ walks along the edge $u_i \to v_i$, while choosing it when $x \ne u_i$ jumps to $u_i$ and consumes that edge; so in each operation, you can either move along an outgoing edge of $x$, or move to any vertex $u$ while removing one of its outgoing edges. Let $f_i$ be the maximum vertex number that can be reached starting from vertex $i$ by continuously selecting one outgoing edge in the directed graph. This can be computed with DP over the vertices in reverse order, from large to small. Firstly, the answer for $x$ is at least $f_x$. Additionally, the answer for every $x$ is at least $\max\limits_{i=1}^n u_i$. Moreover, for any vertex $i$ with outdegree greater than $1$, the answer for every $x$ is at least $f_i$, since you can disconnect any outgoing edge from $i$ except the one leading to $f_i$. Let $k = \max\limits_{i=1}^n v_i$. For initial values $\le k$, we can directly enumerate and calculate; initial values $> k$ are never improved by an operation. Time complexity: $O(\sum l_i)$ per test case.
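The reverse dp for $f$ can be sketched as follows (our own illustration; it relies on $u_i < v_i$, so every edge points from a smaller vertex to a larger one and processing vertices from large to small is a valid order):

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// f[u] = largest vertex reachable from u by repeatedly walking edges
// u_i -> v_i. Since u_i < v_i, a single pass from large to small suffices.
std::vector<int> compute_f(int k,
                           const std::vector<std::pair<int, int>>& edges) {
    std::vector<std::vector<int>> G(k + 1);
    for (auto [u, v] : edges) G[u].push_back(v);  // assumes 0 <= u < v <= k
    std::vector<int> f(k + 1);
    for (int u = k; u >= 0; --u) {
        f[u] = u;
        for (int v : G[u]) f[u] = std::max(f[u], f[v]);
    }
    return f;
}
```

For edges $0 \to 2$, $2 \to 5$, $1 \to 3$: $f_0 = 5$ (via $0 \to 2 \to 5$), $f_1 = 3$, and an isolated vertex like $4$ has $f_4 = 4$.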
[ "dfs and similar", "dp", "graphs", "greedy", "implementation", "math" ]
2,100
#include <bits/stdc++.h> #define pb emplace_back #define fst first #define scd second #define mkp make_pair #define mems(a, x) memset((a), (x), sizeof(a)) using namespace std; typedef long long ll; typedef double db; typedef unsigned long long ull; typedef long double ldb; typedef pair<ll, ll> pii; const int maxn = 200100; ll n, m, a[maxn], f[maxn]; pii b[maxn]; bool vis[maxn]; vector<int> G[maxn]; inline ll calc(ll l, ll r) { return (l + r) * (r - l + 1) / 2; } void solve() { scanf("%lld%lld", &n, &m); ll t = -1, ans = 0, k = 0; for (int i = 1, l; i <= n; ++i) { scanf("%d", &l); for (int j = 1; j <= l; ++j) { scanf("%lld", &a[j]); if (a[j] < maxn) { vis[a[j]] = 1; } } ll u = 0; while (vis[u]) { ++u; } t = max(t, u); ll v = u; vis[u] = 1; while (vis[v]) { ++v; } b[i] = mkp(u, v); k = max(k, v); vis[u] = 0; for (int j = 1; j <= l; ++j) { if (a[j] < maxn) { vis[a[j]] = 0; } } } for (int i = 0; i <= k; ++i) { vector<int>().swap(G[i]); } for (int i = 1; i <= n; ++i) { G[b[i].fst].pb(b[i].scd); } for (int u = k; ~u; --u) { f[u] = u; for (int v : G[u]) { f[u] = max(f[u], f[v]); } if ((int)G[u].size() >= 2) { t = max(t, f[u]); } } for (int i = 0; i <= min(k, m); ++i) { ans += max(t, f[i]); } if (k < m) { ans += calc(k + 1, m); } printf("%lld\n", ans); } int main() { int T = 1; scanf("%d", &T); while (T--) { solve(); } return 0; }
2003
E1
Turtle and Inversions (Easy Version)
\textbf{This is an easy version of this problem. The differences between the versions are the constraint on $m$ and $r_i < l_{i + 1}$ holds for each $i$ from $1$ to $m - 1$ in the easy version. You can make hacks only if both versions of the problem are solved.} Turtle gives you $m$ intervals $[l_1, r_1], [l_2, r_2], \ldots, [l_m, r_m]$. He thinks that a permutation $p$ is interesting if there exists an integer $k_i$ for every interval ($l_i \le k_i < r_i$), and if he lets $a_i = \max\limits_{j = l_i}^{k_i} p_j, b_i = \min\limits_{j = k_i + 1}^{r_i} p_j$ for every integer $i$ from $1$ to $m$, the following condition holds: $$\max\limits_{i = 1}^m a_i < \min\limits_{i = 1}^m b_i$$ Turtle wants you to calculate the maximum number of inversions of all interesting permutations of length $n$, or tell him if there is no interesting permutation. An inversion of a permutation $p$ is a pair of integers $(i, j)$ ($1 \le i < j \le n$) such that $p_i > p_j$.
Divide all numbers into two types: small numbers ($0$) and large numbers ($1$), such that any small number is less than any large number. In this way, a permutation can be transformed into a $01$ sequence. A permutation is interesting if and only if there exists a way to divide all numbers into two types such that, for every given interval $[l, r]$, all $0$s appear before all $1$s, and there is at least one $0$ and one $1$ in the interval. Such a $01$ sequence is called interesting. If an interesting $01$ sequence is fixed, we can greedily arrange all $0$s in descending order and all $1$s in descending order. Let the number of $0$s be $x$ and the number of $1$s be $y$. Let $z$ be the number of index pairs $(i, j)$ with $i < j$ such that the $i$-th number is $1$ and the $j$-th number is $0$ (such pairs are called $10$ pairs). Then, the maximum number of inversions is $\frac{x(x - 1)}{2} + \frac{y(y - 1)}{2} + z$. In this version, the intervals are non-overlapping, so DP can be applied directly. Let $f_{i, j}$ represent the maximum number of $10$ pairs when considering all numbers from $1$ to $i$, where there are $j$ $1$s. For transitions, if $i$ is the left endpoint of an interval whose right endpoint is $p$, we can enumerate the number of $0$s in the interval as $k$ and the number of $1$s as $p - i + 1 - k$ for the transition. Otherwise, we consider whether position $i$ is $0$ or $1$. The answer is $\max\limits_{i = 0}^n \left(f_{n, i} + \frac{i(i - 1)}{2} + \frac{(n - i)(n - i - 1)}{2}\right)$. Time complexity: $O(n^2 + m)$ per test case.
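As a sanity check, the formula $\frac{x(x - 1)}{2} + \frac{y(y - 1)}{2} + z$ can be verified by brute force on small $01$ patterns (the helper names and the test patterns are mine):

```python
from itertools import permutations

def max_inversions(pattern):
    """Try every permutation consistent with the 01 pattern (0s get the
    x smallest values, 1s the y largest) and return the maximum number
    of inversions."""
    n = len(pattern)
    x = pattern.count(0)
    zero_pos = [i for i in range(n) if pattern[i] == 0]
    one_pos = [i for i in range(n) if pattern[i] == 1]
    best = 0
    for zp in permutations(range(1, x + 1)):
        for op in permutations(range(x + 1, n + 1)):
            p = [0] * n
            for i, v in zip(zero_pos, zp):
                p[i] = v
            for i, v in zip(one_pos, op):
                p[i] = v
            inv = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
            best = max(best, inv)
    return best

def formula(pattern):
    """x(x-1)/2 + y(y-1)/2 + z, where z counts the 10 pairs."""
    n = len(pattern)
    x, y = pattern.count(0), pattern.count(1)
    z = sum(pattern[i] == 1 and pattern[j] == 0
            for i in range(n) for j in range(i + 1, n))
    return x * (x - 1) // 2 + y * (y - 1) // 2 + z
```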
[ "brute force", "divide and conquer", "dp", "greedy", "math" ]
2,600
#include <bits/stdc++.h> #define pb emplace_back #define fst first #define scd second #define mkp make_pair #define mems(a, x) memset((a), (x), sizeof(a)) using namespace std; typedef long long ll; typedef double db; typedef unsigned long long ull; typedef long double ldb; typedef pair<ll, ll> pii; const int maxn = 5050; int n, m, f[maxn][maxn], a[maxn], b[maxn]; void solve() { scanf("%d%d", &n, &m); for (int i = 0; i <= n + 1; ++i) { for (int j = 0; j <= n + 1; ++j) { f[i][j] = -1e9; } a[i] = 0; } for (int i = 1, l, r; i <= m; ++i) { scanf("%d%d", &l, &r); a[l] = r; } f[0][0] = 0; for (int i = 1; i <= n; ++i) { if (a[i]) { int p = a[i]; for (int j = 0; j < i; ++j) { for (int k = 1; k <= p - i; ++k) { f[p][j + k] = max(f[p][j + k], f[i - 1][j] + (p - i + 1 - k) * j); } } i = p; } else { for (int j = 0; j <= i; ++j) { f[i][j] = f[i - 1][j] + j; if (j) { f[i][j] = max(f[i][j], f[i - 1][j - 1]); } } } } int ans = 0; for (int i = 0; i <= n; ++i) { ans = max(ans, f[n][i] + i * (i - 1) / 2 + (n - i) * (n - i - 1) / 2); } printf("%d\n", ans); } int main() { int T = 1; scanf("%d", &T); while (T--) { solve(); } return 0; }
2003
E2
Turtle and Inversions (Hard Version)
\textbf{This is a hard version of this problem. The differences between the versions are the constraint on $m$ and $r_i < l_{i + 1}$ holds for each $i$ from $1$ to $m - 1$ in the easy version. You can make hacks only if both versions of the problem are solved.} Turtle gives you $m$ intervals $[l_1, r_1], [l_2, r_2], \ldots, [l_m, r_m]$. He thinks that a permutation $p$ is interesting if there exists an integer $k_i$ for every interval ($l_i \le k_i < r_i$), and if he lets $a_i = \max\limits_{j = l_i}^{k_i} p_j, b_i = \min\limits_{j = k_i + 1}^{r_i} p_j$ for every integer $i$ from $1$ to $m$, the following condition holds: $$\max\limits_{i = 1}^m a_i < \min\limits_{i = 1}^m b_i$$ Turtle wants you to calculate the maximum number of inversions of all interesting permutations of length $n$, or tell him if there is no interesting permutation. An inversion of a permutation $p$ is a pair of integers $(i, j)$ ($1 \le i < j \le n$) such that $p_i > p_j$.
Compared to problem E1, this version allows intervals to overlap. Consider a pair of overlapping intervals $[l_1, r_1]$ and $[l_2, r_2]$ (assuming $l_1 \le l_2$). The range $[l_1, l_2 - 1]$ must consist entirely of $0$s, and the range $[r_1 + 1, r_2]$ must consist entirely of $1$s. Then, you can remove the intervals $[l_1, r_1]$ and $[l_2, r_2]$, add a new interval $[l_2, r_1]$, and keep track of which numbers are guaranteed to be $0$ and which are guaranteed to be $1$. After processing in this way, the intervals no longer overlap, reducing the problem to problem E1. Time complexity: $O(n^2 + m \log m)$ per test case. There also exists an $O(n + m)$ solution to this problem.
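A minimal sketch of merging one overlapping pair as described (the function name is mine; it assumes a genuinely overlapping, non-nested pair, i.e. $l_1 \le l_2 \le r_1 < r_2$):

```python
def merge_pair(l1, r1, l2, r2):
    """Merge overlapping intervals [l1, r1] and [l2, r2] with l1 <= l2:
    positions [l1, l2-1] are forced to 0, positions [r1+1, r2] are forced
    to 1, and the pair is replaced by the single interval [l2, r1]."""
    assert l1 <= l2 <= r1 < r2
    forced_zeros = list(range(l1, l2))        # must all be 0
    forced_ones = list(range(r1 + 1, r2 + 1)) # must all be 1
    return (l2, r1), forced_zeros, forced_ones
```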
[ "brute force", "data structures", "divide and conquer", "dp", "greedy", "math", "two pointers" ]
2,700
#include <bits/stdc++.h> #define pb emplace_back #define fst first #define scd second #define mkp make_pair #define mems(a, x) memset((a), (x), sizeof(a)) using namespace std; typedef long long ll; typedef double db; typedef unsigned long long ull; typedef long double ldb; typedef pair<int, int> pii; const int maxn = 5050; const int inf = 0x3f3f3f3f; int n, m, b[maxn], c[maxn], f[maxn][maxn]; struct node { int l, r; } a[maxn]; void solve() { scanf("%d%d", &n, &m); for (int i = 1; i <= n; ++i) { c[i] = -1; b[i] = 0; } for (int i = 1; i <= m; ++i) { scanf("%d%d", &a[i].l, &a[i].r); } if (!m) { printf("%d\n", n * (n - 1) / 2); return; } sort(a + 1, a + m + 1, [&](const node &a, const node &b) { return a.l < b.l; }); int mnl = a[1].l, mxl = a[1].l, mnr = a[1].r, mxr = a[1].r; for (int i = 2; i <= m + 1; ++i) { if (i == m + 1 || a[i].l > mxr) { if (mxl >= mnr) { puts("-1"); return; } for (int j = mnl; j < mxl; ++j) { c[j] = 0; } for (int j = mnr + 1; j <= mxr; ++j) { c[j] = 1; } b[mxl] = mnr; mnl = mxl = a[i].l; mnr = mxr = a[i].r; } else { mnl = min(mnl, a[i].l); mxl = max(mxl, a[i].l); mnr = min(mnr, a[i].r); mxr = max(mxr, a[i].r); } } for (int i = 0; i <= n; ++i) { for (int j = 0; j <= n; ++j) { f[i][j] = -inf; } } f[0][0] = 0; for (int i = 1; i <= n; ++i) { if (b[i]) { int p = b[i]; for (int j = 0; j < i; ++j) { for (int k = 1; k <= p - i; ++k) { f[p][j + k] = max(f[p][j + k], f[i - 1][j] + (p - i + 1 - k) * j); } } i = p; } else if (c[i] == 1) { for (int j = 1; j <= i; ++j) { f[i][j] = f[i - 1][j - 1]; } } else if (c[i] == 0) { for (int j = 0; j < i; ++j) { f[i][j] = f[i - 1][j] + j; } } else { for (int j = 0; j <= i; ++j) { f[i][j] = max(f[i - 1][j] + j, j ? f[i - 1][j - 1] : -inf); } } } int ans = -1; for (int i = 0; i <= n; ++i) { ans = max(ans, f[n][i] + i * (i - 1) / 2 + (n - i) * (n - i - 1) / 2); } printf("%d\n", ans); } int main() { int T = 1; scanf("%d", &T); while (T--) { solve(); } return 0; }
2003
F
Turtle and Three Sequences
Piggy gives Turtle three sequences $a_1, a_2, \ldots, a_n$, $b_1, b_2, \ldots, b_n$, and $c_1, c_2, \ldots, c_n$. Turtle will choose a subsequence of $1, 2, \ldots, n$ of length $m$, let it be $p_1, p_2, \ldots, p_m$. The subsequence should satisfy the following conditions: - $a_{p_1} \le a_{p_2} \le \cdots \le a_{p_m}$; - All $b_{p_i}$ for all indices $i$ are \textbf{pairwise distinct}, i.e., there don't exist two different indices $i$, $j$ such that $b_{p_i} = b_{p_j}$. Help him find the maximum value of $\sum\limits_{i = 1}^m c_{p_i}$, or tell him that it is impossible to choose a subsequence of length $m$ that satisfies the conditions above. Recall that a sequence $a$ is a subsequence of a sequence $b$ if $a$ can be obtained from $b$ by the deletion of several (possibly, zero or all) elements.
If the number of possible values for $b_i$ is small, you can use bitmask DP. Define $f_{i, S, j}$ as the maximum result considering all numbers up to $i$, where $S$ is the current set of distinct $b_i$ values in the subsequence, and $j$ is the value of $a$ for the last element of the subsequence. The transitions can be optimized using a Fenwick tree (Binary Indexed Tree). When $b_i$ has many possible values, consider using color-coding, where each $b_i$ is randomly mapped to a number in $[1, m]$, and then apply bitmask DP. The probability of obtaining the optimal solution in one attempt is $\frac{m!}{m^m}$, and the probability of not obtaining the optimal solution after $T$ attempts is $(1 - \frac{m!}{m^m})^T$. When $T = 300$, the probability of not obtaining the optimal solution is approximately $7.9 \cdot 10^{-6}$, which is acceptable. Time complexity: $O(Tn2^m \log n)$.
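The quoted probabilities are easy to check numerically (a sketch for $m = 5$, $T = 300$):

```python
import math

m, T = 5, 300
# one random coloring maps the m chosen elements to m distinct colors
# with probability m!/m^m
p_success = math.factorial(m) / m ** m
# probability that all T independent attempts miss the optimal solution
p_fail = (1 - p_success) ** T
```

For $m = 5$ this gives $p_{\text{fail}} \approx 7.9 \cdot 10^{-6}$, matching the editorial's estimate.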
[ "brute force", "data structures", "dp", "greedy", "math", "probabilities", "two pointers" ]
2,800
#include <bits/stdc++.h> #define pb emplace_back #define fst first #define scd second #define mkp make_pair #define mems(a, x) memset((a), (x), sizeof(a)) using namespace std; typedef long long ll; typedef double db; typedef unsigned long long ull; typedef long double ldb; typedef pair<ll, ll> pii; const int maxn = 3030; int n, m, a[maxn], b[maxn], c[maxn], d[maxn], e[maxn]; struct BIT { int c[maxn]; inline void init() { mems(c, -0x3f); } inline void update(int x, int d) { for (int i = x; i <= n; i += (i & (-i))) { c[i] = max(c[i], d); } } inline int query(int x) { int res = -1e9; for (int i = x; i; i -= (i & (-i))) { res = max(res, c[i]); } return res; } } T[32]; void solve() { scanf("%d%d", &n, &m); for (int i = 1; i <= n; ++i) { scanf("%d", &a[i]); } for (int i = 1; i <= n; ++i) { scanf("%d", &b[i]); } for (int i = 1; i <= n; ++i) { scanf("%d", &c[i]); } mt19937 rnd(chrono::system_clock::now().time_since_epoch().count()); int ans = 0; for (int _ = 0; _ < 300; ++_) { for (int i = 1; i <= n; ++i) { d[i] = rnd() % m; } for (int i = 1; i <= n; ++i) { e[i] = d[b[i]]; } for (int i = 0; i < (1 << m); ++i) { T[i].init(); } T[0].update(1, 0); for (int i = 1; i <= n; ++i) { for (int S = 0; S < (1 << m); ++S) { if (S & (1 << e[i])) { continue; } T[S | (1 << e[i])].update(a[i], T[S].query(a[i]) + c[i]); } } ans = max(ans, T[(1 << m) - 1].query(n)); } printf("%d\n", ans > 0 ? ans : -1); } int main() { int T = 1; // scanf("%d", &T); while (T--) { solve(); } return 0; }
2004
A
Closest Point
Consider a set of points on a line. The distance between two points $i$ and $j$ is $|i - j|$. The point $i$ from the set is \textbf{the closest} to the point $j$ from the set, if there is no other point $k$ in the set such that the distance from $j$ to $k$ is \textbf{strictly less} than the distance from $j$ to $i$. In other words, all other points from the set have distance to $j$ greater or equal to $|i - j|$. For example, consider a set of points $\{1, 3, 5, 8\}$: - for the point $1$, the closest point is $3$ (other points have distance greater than $|1-3| = 2$); - for the point $3$, there are two closest points: $1$ and $5$; - for the point $5$, the closest point is $3$ (but not $8$, since its distance is greater than $|3-5|$); - for the point $8$, the closest point is $5$. You are given a set of points. You have to add an \textbf{integer} point into this set in such a way that \textbf{it is different from every existing point in the set}, and \textbf{it becomes the closest point to every point in the set}. Is it possible?
This problem can be solved "naively" by iterating on the point we add and checking that it becomes the closest point to all elements from the set. However, there is another solution which is faster and easier to implement. When we add a point, it can become the closest only to two points from the set: one to the left, and one to the right. If there are, for example, two points to the left of the point we add, then the condition on the leftmost point won't be met (the point we add won't be the closest to it). So, if $n>2$, there is no way to add a point. If $n=2$, we want to insert a new point between the two existing points - so we need to check that they are not adjacent.
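The whole check fits in a few lines (a sketch assuming $n \ge 2$ and a sorted list of distinct points, as in the problem):

```python
def can_add(points):
    """True iff an integer point can be inserted that becomes the
    closest point to every existing one (points sorted, distinct)."""
    if len(points) > 2:
        return False                      # some endpoint's condition fails
    return points[-1] - points[0] > 1     # need room strictly between them
```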
[ "implementation", "math" ]
800
t = int(input()) for i in range(t): n = int(input()) x = list(map(int, input().split())) if n > 2 or x[0] + 1 == x[-1]: print('NO') else: print('YES')
2004
B
Game with Doors
There are $100$ rooms arranged in a row and $99$ doors between them; the $i$-th door connects rooms $i$ and $i+1$. Each door can be either locked or unlocked. Initially, all doors are unlocked. We say that room $x$ is reachable from room $y$ if all doors between them are unlocked. You know that: - Alice is in some room from the segment $[l, r]$; - Bob is in some room from the segment $[L, R]$; - Alice and Bob are in different rooms. However, you don't know the exact rooms they are in. You don't want Alice and Bob to be able to reach each other, so you are going to lock some doors to prevent that. What's the smallest number of doors you have to lock so that Alice and Bob cannot meet, regardless of their starting positions inside the given segments?
Let's find the segment of rooms where both Alice and Bob can be. This segment is $[\max(L, l), \min(R, r)]$. If it is empty, then it is sufficient to simply block any door between Alice's segment and Bob's segment, so the answer is $1$. If it is not empty, let's denote by $x$ the number of doors in this segment: $\min(R, r) - \max(L, l)$. Each of these doors definitely needs to be blocked so that Alice and Bob cannot meet if they are in adjacent rooms within this segment. Therefore, the answer is at least $x$. However, sometimes we also need to restrict this segment on the left and/or right. If either segment extends beyond the intersection on the left (i.e., the left boundaries of the segments do not coincide), we need to block the door to the left of the intersection (and increase the answer by $1$). Similarly for the right boundary.
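The case analysis above transcribes directly into code (a sketch; `min_doors` is my name):

```python
def min_doors(l, r, L, R):
    """Minimum number of doors to lock, following the case analysis."""
    inter = min(r, R) - max(l, L) + 1   # size of the common segment
    if inter <= 0:
        return 1                        # disjoint segments: one door suffices
    ans = inter - 1                     # doors strictly inside the intersection
    ans += (l != L)                     # extra door on the left if needed
    ans += (r != R)                     # extra door on the right if needed
    return ans
```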
[ "brute force", "greedy" ]
1,000
#include <bits/stdc++.h> using namespace std; int main() { int t; cin >> t; while (t--) { int l, r, L, R; cin >> l >> r >> L >> R; int inter = min(r, R) - max(l, L) + 1; int ans = inter - 1; if (inter <= 0) { ans = 1; } else { ans += (l != L); ans += (r != R); } cout << ans << '\n'; } }
2004
C
Splitting Items
Alice and Bob have $n$ items they'd like to split between them, so they decided to play a game. All items have a cost, and the $i$-th item costs $a_i$. Players move in turns starting from Alice. In each turn, the player chooses one of the remaining items and takes it. The game goes on until no items are left. Let's say that $A$ is the total cost of items taken by Alice and $B$ is the total cost of Bob's items. The resulting score of the game then will be equal to $A - B$. Alice wants to maximize the score, while Bob wants to minimize it. Both Alice and Bob will play optimally. But the game will take place tomorrow, so today Bob can modify the costs a little. He can increase the costs $a_i$ of several (possibly none or all) items by an integer value (possibly, by the same value or by different values for each item). However, the total increase must be less than or equal to $k$. Otherwise, Alice may suspect something. Note that Bob \textbf{can't decrease} costs, only increase. What is the minimum possible score Bob can achieve?
Let's sort the array in descending order and consider the first two moves of the game. Since Alice goes first, she takes the maximum element in the array ($a_1$). Then, during his turn, Bob takes the second-largest element ($a_2$) to minimize the score difference. Bob can increase his element in advance; however, he cannot make it larger than $a_1$ (since Alice always takes the maximum element in the array). This means that Bob needs to increase his element to $\min(a_1, a_2 + k)$. Then the game moves to a similar situation, but without the first two elements and with an updated value of $k$. Thus, to solve the problem, we can sort the array and then iterate through it while maintaining several values. If the current index is odd (corresponding to Alice's turn), increase the answer by $a_i$. If the current index is even (corresponding to Bob's turn), increase the current value of $a_i$ by $\min(k, a_{i - 1} - a_i)$, decrease $k$ by that same value, and also subtract the new value of $a_i$ from the answer.
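A sketch of this greedy (sort descending; on Bob's turns, raise his item as close to the previous one as the remaining budget allows):

```python
def min_score(a, k):
    """Minimum score A - B Bob can achieve with total increase <= k."""
    a = sorted(a, reverse=True)
    score = 0
    for i, v in enumerate(a):
        if i % 2 == 0:                  # Alice's turn: she takes the maximum
            score += v
        else:                           # Bob's turn: raise his item, then take it
            inc = min(k, a[i - 1] - v)  # cannot exceed the previous element
            k -= inc
            score -= v + inc
    return score
```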
[ "games", "greedy", "sortings" ]
1,100
fun main() { repeat(readln().toInt()) { val (n, k) = readln().split(" ").map { it.toInt() } val a = readln().split(" ").map { it.toInt() }.sortedDescending().toIntArray() var score = 0L var rem = k for (i in a.indices) { if (i % 2 == 0) { score += a[i] } else { val needed = minOf(rem, a[i - 1] - a[i]) a[i] += needed rem -= needed score -= a[i] } } println(score) } }
2004
D
Colored Portals
There are $n$ cities located on a straight line. The cities are numbered from $1$ to $n$. Portals are used to move between cities. There are $4$ colors of portals: blue, green, red, and yellow. Each city has portals of two different colors. You can move from city $i$ to city $j$ if they have portals of the same color (for example, you can move between a "blue-red" city and a "blue-green" city). This movement costs $|i-j|$ coins. Your task is to answer $q$ independent queries: calculate the minimum cost to move from city $x$ to city $y$.
If cities $x$ and $y$ have a common portal, then the cost of the path is $|x-y|$. Otherwise, we have to move from city $x$ to some intermediate city $z$. It's important to note that the type of city $z$ should be different from the type of city $x$; otherwise, we could skip city $z$ and go directly from $x$ to the next city in the path. Since the type of city $z$ doesn't match the type of city $x$, it must have a common portal with city $y$. Therefore, the optimal path looks like this: $x \rightarrow z \rightarrow y$. Additional cities won't decrease the cost of the path. Now we have to figure out how to choose city $z$ to minimize the cost of the path. Without loss of generality, let's say that $x \le y$. Then there are three possible locations for city $z$: $z < x$, where the cost of the path is $(x-z) + (y-z)$; $x \le z \le y$, where the cost of the path is $(z-x) + (y-z) = y - x$; and $y < z$, where the cost of the path is $(z-x) + (z-y)$. It is easy to see that in each case, to minimize the cost of the path, we should choose $z$ as close to $x$ as possible. Let's define two arrays: $\mathrm{lf}_{i, j}$ - the index of the nearest city to the left of $i$ with type $j$, and similarly $\mathrm{rg}_{i, j}$ - the index of the nearest city to the right of $i$ with type $j$. Then the answer to the problem is the minimum value among $|x - \mathrm{lf}_{x, j}| + |\mathrm{lf}_{x, j} - y|$ and $|x - \mathrm{rg}_{x, j}| + |\mathrm{rg}_{x, j} - y|$ over all types $j$ that share a portal with both city $x$ and city $y$.
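Building $\mathrm{lf}$ takes a single left-to-right pass ($\mathrm{rg}$ is symmetric); a sketch with city types encoded as small integers (function name and example mine):

```python
def nearest_left(types, num_types):
    """lf[i][j] = largest index <= i whose city has type j,
    or -INF if no such city exists."""
    INF = 10 ** 9
    last = [-INF] * num_types
    lf = []
    for i, t in enumerate(types):
        last[t] = i
        lf.append(last[:])   # snapshot of the nearest occurrence of each type
    return lf
```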
[ "binary search", "brute force", "data structures", "graphs", "greedy", "implementation", "shortest paths" ]
1,600
#include <bits/stdc++.h> using namespace std; const int INF = 1e9; const string vars[] = {"BG", "BR", "BY", "GR", "GY", "RY"}; int main() { ios::sync_with_stdio(false); cin.tie(0); int t; cin >> t; while (t--) { int n, q; cin >> n >> q; vector<int> a(n); for (int i = 0; i < n; ++i) { char s[5]; cin >> s; a[i] = find(vars, vars + 6, s) - vars; } vector<vector<int>> lf(n), rg(n); for (int o = 0; o < 2; ++o) { vector<int> last(6, -INF); for (int i = 0; i < n; ++i) { last[a[i]] = (o ? n - i - 1 : i); (o ? rg[n - i - 1] : lf[i]) = last; } reverse(a.begin(), a.end()); } while (q--) { int x, y; cin >> x >> y; --x; --y; int ans = INF; for (int j = 0; j < 6; ++j) { if (a[x] + j != 5 && j + a[y] != 5) { ans = min(ans, abs(x - lf[x][j]) + abs(lf[x][j] - y)); ans = min(ans, abs(x - rg[x][j]) + abs(rg[x][j] - y)); } } if (ans > INF / 2) ans = -1; cout << ans << '\n'; } } }
2004
E
Not a Nim Problem
Two players, Alice and Bob, are playing a game. They have $n$ piles of stones, with the $i$-th pile initially containing $a_i$ stones. On their turn, a player can choose any pile of stones and take any positive number of stones from it, with one condition: - let the current number of stones in the pile be $x$. It is not allowed to take from the pile a number of stones $y$ such that the greatest common divisor of $x$ and $y$ is not equal to $1$. The player who cannot make a move loses. Both players play optimally (that is, if a player has a strategy that allows them to win, no matter how the opponent responds, they will win). Alice goes first. Determine who will win.
So, okay, this is actually a problem related to the Nim game and the Sprague-Grundy theorem. If you're not familiar with it, I recommend studying it here. The following tutorial expects you to understand how Grundy values are calculated, which you can read there. The first (naive) solution to this problem would be to calculate the Grundy values for all integers from $1$ to $10^7$ via dynamic programming. That would work in $O(A^2 \log A)$ or $O(A^2)$, where $A$ is the maximum value in the input, so it's not fast enough. However, coding a naive solution to this problem might still help. There is a method that allows you to (sometimes) approach these kinds of problems where you can calculate Grundy values slowly, and you need to do it faster. This method usually consists of four steps: code a solution that calculates Grundy values naively; run it and generate the first several values of the Grundy function; try to observe some patterns and formulate a faster way to calculate Grundy values using them; verify your patterns on Grundy values of larger numbers and/or try to prove them. The third step is probably the most difficult, and generating too few Grundy values might lead you to wrong conclusions. For example, the first $8$ values of the Grundy function are $[1, 0, 2, 0, 3, 0, 4, 0]$, which may lead to an assumption that $g(x) = 0$ if $x$ is even, or $g(x) = \frac{x+1}{2}$ if $x$ is odd. However, this pattern breaks as soon as you calculate $g(9)$, which is not $5$, but $2$. You can pause reading the editorial here and try formulating a different pattern which would explain that; the next paragraph contains the solution. Okay, we claim that: $g(1) = 1$; $g(x) = 0$ if $x$ is even; $g(x) = p(x)$ if $x$ is an odd prime number, where $p(x)$ is the index of $x$ in the sequence of prime numbers; $g(x) = g(d(x))$ if $x$ is an odd composite number, where $d(x)$ is the minimum prime divisor of $x$.
We can prove it by induction on $x$: $g(1) = 1$ can simply be verified by calculation; if $x$ is even, there are no transitions to other even values of pile size from the current state of the game (because that would imply that both $x$ and $x-y$ are divisible by $2$, so $\gcd(x, y)$ is at least $2$). Since even values are the only states having Grundy values equal to $0$, the MEX of all transitions is equal to $0$; if $x$ is an odd prime number, there are transitions to all states of the game from $1$ to $x-1$. The set of Grundy values of these transitions contains all integers from $0$ to $g(x')$, where $x'$ is the previous odd prime number; so, the MEX of this set is $g(x') + 1 = p(x') + 1 = p(x)$; if $x$ is an odd composite number and $d(x)$ is its minimum prime divisor, then there are transitions to all states from $1$ to $d(x)-1$ (having Grundy values from $0$ to $g(d(x))-1$); but every $x'$ such that $g(x') = g(d(x))$ is divisible by $d(x)$; so, there are no transitions to those states (otherwise $\gcd(x, y)$ would be at least $d(x)$). So, the MEX of all states reachable in one step is $g(d(x))$. This is almost the whole solution. However, we need to calculate $p(x)$ and $d(x)$ for every $x$ up to $10^7$. Probably the easiest way to do it is using the linear prime sieve.
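The naive calculation mentioned in the first step is only a few lines, and it confirms both the first $8$ values and $g(9) = 2$ (a sketch; far too slow for $10^7$, which is the whole point):

```python
from math import gcd

def grundy(limit):
    """g[x] for a single pile of x stones: a move removes y stones with
    gcd(x, y) = 1, so g[x] = mex over those g[x - y]; g[0] = 0."""
    g = [0] * (limit + 1)
    for x in range(1, limit + 1):
        seen = {g[x - y] for y in range(1, x + 1) if gcd(x, y) == 1}
        mex = 0
        while mex in seen:
            mex += 1
        g[x] = mex
    return g
```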
[ "brute force", "games", "math", "number theory" ]
2,100
#include<bits/stdc++.h> using namespace std; const int MOD = 998244353; const int N = int(1e7) + 43; int lp[N + 1]; vector<int> pr; int idx[N + 1]; void precalc() { for (int i = 2; i <= N; i++) { if (lp[i] == 0) { lp[i] = i; pr.push_back(i); } for (int j = 0; j < pr.size() && pr[j] <= lp[i] && i * 1ll * pr[j] <= N; ++j) lp[i * pr[j]] = pr[j]; } for (int i = 0; i < pr.size(); i++) idx[pr[i]] = i + 1; } int get(int x) { if(x == 1) return 1; x = lp[x]; if(x == 2) return 0; else return idx[x]; } void solve() { int n; scanf("%d", &n); int res = 0; for(int i = 0; i < n; i++) { int x; scanf("%d", &x); res ^= get(x); } if(res) puts("Alice"); else puts("Bob"); } int main() { precalc(); int t; scanf("%d", &t); for(int i = 0; i < t; i++) solve(); }
2004
F
Make a Palindrome
You are given an array $a$ consisting of $n$ integers. Let the function $f(b)$ return the minimum number of operations needed to make an array $b$ a palindrome. The operations you can make are: - choose two adjacent elements $b_i$ and $b_{i+1}$, remove them, and replace them with a single element equal to $(b_i + b_{i + 1})$; - or choose an element $b_i > 1$, remove it, and replace it with two positive integers $x$ and $y$ ($x > 0$ and $y > 0$) such that $x + y = b_i$. For example, from an array $b=[2, 1, 3]$, you can obtain the following arrays in one operation: $[1, 1, 1, 3]$, $[2, 1, 1, 2]$, $[3, 3]$, $[2, 4]$, or $[2, 1, 2, 1]$. Calculate $\displaystyle \left(\sum_{1 \le l \le r \le n}{f(a[l..r])}\right)$, where $a[l..r]$ is the subarray of $a$ from index $l$ to index $r$, inclusive. In other words, find the sum of the values of the function $f$ for all subarrays of the array $a$.
To begin with, let's find out how to solve the problem for a single subarray in linear time. If equal numbers are at both ends, we can remove them and reduce the problem to a smaller size (thus, we have decreased the length of the array by $2$ in $0$ operations, or by $1$ if there is only one element). Otherwise, we have to do something with one of these numbers: either merge the smaller one with its neighbor or split the larger one. It can be shown that it doesn't matter which of these actions we take, so let's split the larger of the two numbers at the ends in such a way that the numbers at the ends become equal, and then we can remove the equal numbers at the ends (thus, we reduce the length of the array by $1$ in $1$ operation). In this case, the answer for the array depends on the number of situations where the ends of the array have equal numbers and the number of situations where they are different. Now let's understand how to quickly calculate how many times, when processing a subarray, we encounter a situation where the ends have equal numbers. After removing these equal numbers at the ends, we obtain a subarray such that numbers with equal sums have been removed from the prefix and suffix compared to the original subarray. We can determine how many such subarrays can be obtained from the original one by removing a prefix and a suffix with equal sums. For this, we can use prefix sums. Let the prefix sum at the beginning of the segment be $p_l$, and at the end be $p_r$. If we remove a prefix and a suffix with sum $s$, the sums become $p_l + s$ and $p_r - s$. At the same time, the sum $p_l + p_r$ for the smaller subarray remains the same. Therefore, for each such subarray that results from the situation "the elements on the right and left were equal, we removed them" the sum $p_l + p_r$ is the same as that of the original segment. 
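The linear procedure for a single array can be written with two pointers: subtracting the smaller end value from the larger one is the same as "split the larger so the ends match, then remove both equal ends", at a cost of one operation (a sketch; the name `f` follows the statement):

```python
def f(b):
    """Minimum number of operations to make b a palindrome."""
    b = list(b)
    l, r, ops = 0, len(b) - 1, 0
    while l < r:
        if b[l] == b[r]:        # equal ends: remove both for free
            l += 1
            r -= 1
        elif b[l] < b[r]:       # split the larger right end, cost 1
            b[r] -= b[l]
            l += 1
            ops += 1
        else:                   # split the larger left end, cost 1
            b[l] -= b[r]
            r -= 1
            ops += 1
    return ops
```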
To calculate the number of shorter segments with the same sum $p_l + p_r$ for each segment, we can iterate through all segments in order of increasing length and store in a map the count of subarrays with each sum $p_l + p_r$. This leads us to solve the problem in $O(n^2 \log{n})$.
[ "binary search", "brute force", "data structures", "greedy", "math" ]
2,600
#include <bits/stdc++.h> using namespace std; int main() { int t; cin >> t; while (t--) { int n; cin >> n; vector<int> a(n); for (auto& x : a) cin >> x; vector<int> p(n + 1); for (int i = 0; i < n; ++i) p[i + 1] = p[i] + a[i]; map<int, int> cnt; long long ans = 0; for (int len = 0; len <= n; ++len) { for (int i = 0; i <= n - len; ++i) { int s = p[i] + p[i + len]; ans += len; ans -= 2 * cnt[s]; ans -= (s % 2 == 1 || !binary_search(p.begin(), p.end(), s / 2)); cnt[s] += 1; } } cout << ans << '\n'; } }
2004
G
Substring Compression
Let's define the operation of compressing a string $t$, consisting of at least $2$ digits from $1$ to $9$, as follows: - split it into an \textbf{even} number of non-empty substrings — let these substrings be $t_1, t_2, \dots, t_m$ (so, $t = t_1 + t_2 + \dots + t_m$, where $+$ is the concatenation operation); - write the string $t_2$ $t_1$ times, then the string $t_4$ $t_3$ times, and so on. For example, for a string "12345", one could do the following: split it into ("1", "23", "4", "5"), and write "235555". Let the function $f(t)$ for a string $t$ return the minimum length of the string that can be obtained as a result of that process. You are given a string $s$, consisting of $n$ digits from $1$ to $9$, and an integer $k$. Calculate the value of the function $f$ for all contiguous substrings of $s$ of length exactly $k$.
Let's start with learning how to solve the problem for a single string with any polynomial complexity. One immediately wants to use dynamic programming. For example, let $\mathit{dp}_i$ be the minimum length of the compressed string from the first $i$ characters. Then, for the next transition, we could iterate over the next odd block first, and then over the next even block. This results in $O(n^3)$ complexity ($O(n)$ states and $O(n^2)$ for transitions). Note that there is no point in taking very long odd blocks. For instance, we can always take the first digit in the first block and the rest of the string in the second block. In this case, the answer will not exceed $9n$. Therefore, there is no point in taking more than $6$ digits in any odd block, as that would exceed our estimate. Can we take even fewer digits? Ideally, we would like to never take more than $1$ digit. Let's show that this is indeed possible. Consider two blocks: an odd block where we took a number with at least $2$ digits, and the even block after it. Let's try to change them so that the first block consists only of the first digit, and all the others go into the second block. Let's estimate how the length has changed. The digits that were previously in the first block and are now in the second block will now appear in the answer $9$ times in the worst case. The digits in the second block previously appeared $x$ times, and now they appear the following number of times: If the length was $2$, then $\displaystyle \lfloor\frac{x}{10}\rfloor$ times. In the best case, $11 \rightarrow 1$ times, so $-10$. Thus, since the length was $2$, one digit now appears $+9$ times, and at least one appears $-10$ times. The answer has decreased. If the length was $3$, then $\displaystyle \lfloor\frac{x}{100}\rfloor$ times. In the best case, $111 \rightarrow 1$ times, so $-110$. Thus, since the length was $3$, two digits now appear $+9$ times (a total of $+18$), and at least one appears $-110$ times. 
The answer has decreased. And so on. Now we can write the same dynamic programming in $O(n^2)$. There is no need to iterate over the odd block anymore, as it always has a length of $1$. The iteration over the even block still takes $O(n)$. We can try to make the states of the dp more complicated. What do the transitions look like now? Let $j$ be the end of the even block that we are iterating over. Then $\mathit{dp}_j = \min(\mathit{dp}_j, \mathit{dp}_i + \mathit{ord(s_i)} \cdot (j - i - 1))$. That is, essentially, for each character of the string, we choose how many times it should be written in the answer. Let's put this coefficient (essentially the digit in the last chosen odd block) into the state. Let $\mathit{dp}[i][c]$ be the minimum length of the compressed string from the first $i$ characters, if the last odd block contains the digit $c$. Then the transitions are as follows: $\mathit{dp}[i + 1][c] = \min(\mathit{dp}[i + 1][c], \mathit{dp}[i][c] + c)$ - append the next character to the end of the last even block; $\mathit{dp}[i + 2][s_i] = \min(\mathit{dp}[i + 2][s_i], \mathit{dp}[i][c] + s_i)$ - start a new odd block at position $i$ and immediately take another character in the subsequent even block. If we do not take another character, the dynamic programming will start accumulating empty even blocks, which is prohibited by the problem. Thus, we have learned to solve the problem in $O(n)$ for a fixed string. The remaining task is to figure out how to solve it for all substrings of length $k$. Let's represent the entire set of the query substrings as a queue. That is, to transition from the substring $s_{i,i+k-1}$ to $s_{i+1,i+k}$, we need to append a character to the end and remove a character from the beginning. There is a known technique for solving problems on queues. It is called "minimum on a queue". 
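A sketch of the $O(10 n)$ dynamic programming described above, for a single string (the representation is mine; it assumes $|t| \ge 2$ and digits '1'-'9', as in the statement):

```python
def f(t):
    """Minimum compressed length of t. dp[i][c] = best length of a
    compression of t[:i] whose last odd block is the single digit c;
    each character appended to the current even block costs c."""
    d = [int(ch) for ch in t]
    n = len(d)
    INF = float("inf")
    dp = [[INF] * 10 for _ in range(n + 1)]
    # the first odd block is t[0]; its even block must take t[1],
    # which is then written d[0] times
    dp[2][d[0]] = d[0]
    for i in range(2, n):
        for c in range(1, 10):
            if dp[i][c] == INF:
                continue
            # extend the current even block with t[i]
            dp[i + 1][c] = min(dp[i + 1][c], dp[i][c] + c)
            # start a new odd block t[i]; its even block immediately
            # takes t[i+1], written d[i] times
            if i + 2 <= n:
                dp[i + 2][d[i]] = min(dp[i + 2][d[i]], dp[i][c] + d[i])
    return min(dp[n])
```

For "12345" this returns $4$, achieved by the split ("1", "2345").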
If we figure out how to solve the problem on a stack (that is, with operations to append a character to the end and remove a character from the end), then we can solve it on a queue using two stacks. Our dynamic programming has only one drawback that slightly hinders this. One transition is at $i + 1$, and the other is at $i + 2$. Let's make both transitions at $i + 1$. To do this, we can introduce a dummy state that indicates that we wrote out an odd block in the last step. That is, let $\mathit{dp}[i][c]$ still mean the same when $c > 0$, while the state $c = 0$ is dummy. For convenience, I suggest thinking about these transitions this way. We have a vector of values $\mathit{dp}[i]$ with $10$ elements. We want to be able to convert it into a vector $\mathit{dp}[i + 1]$ with the same $10$ elements, knowing the value of $s_i$. Now, this sounds like multiplying a vector by a matrix. Yes, the matrix consists of operations not $*/+$, but $+/\min$, yet it still works. Then the solution on the stack is as follows. For each digit from $1$ to $9$, we pre-build the transition matrix of dynamic programming for that digit. We maintain one stack of transition matrices and another stack of the dp vectors. When we append a digit, we add elements to both stacks, multiplying the last vector by the matrix. When we remove, we pop the last elements from both stacks. If you are not familiar with the minimum on a queue, you can read this article: https://cp-algorithms.com/data_structures/stack_queue_modification.html. However, we have a slight difference from the standard technique. The minimum is a commutative operation, unlike matrix multiplication. But fixing this is not very difficult. In the first stack, we will multiply by the new matrix on the right. In the second stack, we will multiply on the left. Since the complexity of that queue is linear, we get an algorithm with a complexity of $n \cdot 10^3$. 
It can be noted that, in the transition matrix, there is a linear number of useful values (not equal to $\infty$). Thus, multiplying by such a matrix can be done in $10^2$.
[ "data structures", "dp", "matrices" ]
3,200
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; const int D = 10; const int INF = 1e9; typedef array<array<int, D>, D> mat; mat mul(const mat &a, const mat &b, bool fl){ mat c; forn(i, D) forn(j, D) c[i][j] = INF; if (fl){ forn(i, D) forn(j, D){ c[j][i] = min(c[j][i], min(a[j][0] + b[0][i], a[j][i] + b[i][i])); c[j][0] = min(c[j][0], a[j][i] + b[i][0]); } } else{ forn(i, D) forn(j, D){ c[i][j] = min(c[i][j], min(a[i][0] + b[0][j], a[i][i] + b[i][j])); c[0][j] = min(c[0][j], a[0][i] + b[i][j]); } } return c; } struct minqueue{ vector<pair<mat, mat>> st1, st2; void push(const mat &a){ if (!st1.empty()) st1.push_back({a, mul(st1.back().second, a, true)}); else st1.push_back({a, a}); } void pop(){ if (st2.empty()){ st2 = st1; reverse(st2.begin(), st2.end()); st1.clear(); assert(!st2.empty()); st2[0].second = st2[0].first; forn(i, int(st2.size()) - 1) st2[i + 1].second = mul(st2[i + 1].first, st2[i].second, false); } st2.pop_back(); } int get(){ if (st1.empty()) return st2.back().second[0][0]; if (st2.empty()) return st1.back().second[0][0]; int ans = INF; forn(i, D) ans = min(ans, st2.back().second[0][i] + st1.back().second[i][0]); return ans; } }; mat tran[D]; void init(int d){ forn(i, D) forn(j, D) tran[d][i][j] = INF; for (int i = 1; i <= 9; ++i){ tran[d][i][i] = i; tran[d][i][0] = i; } tran[d][0][d] = 0; } int main() { cin.tie(0); ios::sync_with_stdio(false); for (int i = 1; i <= 9; ++i) init(i); int n, k; cin >> n >> k; string s; cin >> s; minqueue q; forn(i, n){ q.push(tran[s[i] - '0']); if (i - k >= 0) q.pop(); if (i - k >= -1) cout << q.get() << ' '; } cout << '\n'; return 0; }
2005
A
Simple Palindrome
Narek has to spend 2 hours with some 2-year-old kids at the kindergarten. He wants to teach them competitive programming, and their first lesson is about palindromes. Narek found out that the kids only know the vowels of the English alphabet (the letters $\mathtt{a}$, $\mathtt{e}$, $\mathtt{i}$, $\mathtt{o}$, and $\mathtt{u}$), so Narek needs to make a string that consists of vowels only. After making the string, he'll ask the kids to count the number of subsequences that are palindromes. Narek wants to keep it simple, so he's looking for a string such that the amount of palindrome subsequences is minimal. Help Narek find a string of length $n$, consisting of \textbf{lowercase} English \textbf{vowels only} (letters $\mathtt{a}$, $\mathtt{e}$, $\mathtt{i}$, $\mathtt{o}$, and $\mathtt{u}$), which \textbf{minimizes} the amount of \textbf{palindrome$^{\dagger}$ subsequences$^{\ddagger}$} in it. $^{\dagger}$ A string is called a palindrome if it reads the same from left to right and from right to left. $^{\ddagger}$ String $t$ is a subsequence of string $s$ if $t$ can be obtained from $s$ by removing several (possibly, zero or all) characters from $s$ and concatenating the remaining ones, without changing their order. For example, $\mathtt{odocs}$ is a subsequence of $c{\textcolor{red}{od}}ef{\textcolor{red}{o}}r{\textcolor{red}{c}}e{\textcolor{red}{s}}$.
If the numbers of vowels are fixed, how should we arrange them to get the best possible answer? How should we choose the numbers of vowels? Should they be as close as possible? Let's denote the numbers of vowels by $a_0, \cdots, a_4$ and assume we have fixed them. Obviously, $a_0 + \cdots + a_4 = n$. For now, let's not count the empty string, as it doesn't change anything. Then, the number of palindrome subsequences will be at least $A = 2^{a_0} + \cdots + 2^{a_4} - 5$ (every subsequence consisting of a single repeated letter, minus the five empty strings). Now, notice that if we put the same characters consecutively, the answer would be exactly $A$, and that would be the best possible answer for those fixed numbers (there cannot be other palindrome subsequences, because if the first and last characters of a subsequence are the same, then the whole middle part must be the same as well). Now, we need to find the best array $a$. To do this, assume there are two numbers $x$ and $y$ in the array such that $x + 2 \leq y$. Then, $2^x + 2^y > 2 \cdot 2^{y-1} \geq 2^{x+1} + 2^{y-1}$. This means that replacing $x$ and $y$ with $x+1$ and $y-1$ does not change the sum of the array $a$ but makes the number of palindrome subsequences smaller. We can repeat this replacement until no two numbers in $a$ differ by more than $1$. There is only one such array (up to permutation), and it contains only $\lfloor n / 5 \rfloor$-s and $(\lfloor n / 5 \rfloor + 1)$-s.
[ "combinatorics", "constructive algorithms", "greedy", "math" ]
900
#include <bits/stdc++.h> using namespace std; const string VOWELS = "aeiou"; int main() { ios_base::sync_with_stdio(false); cin.tie(0); cout.tie(0); int t; cin >> t; // test cases while (t--) { int n; cin >> n; // string length vector<int> v(5, n / 5); // n - (n % 5) numbers should be (n / 5) for (int i = 0; i < n % 5; i++) v[i]++; // and the others should be (n / 5) + 1 for (int i = 0; i < 5; i++) for (int j = 0; j < v[i]; j++) cout << VOWELS[i]; // output VOWELS[i] v[i] times cout << "\n"; } }
2005
B2
The Strict Teacher (Hard Version)
\textbf{This is the hard version of the problem. The only differences between the two versions are the constraints on $m$ and $q$. In this version, $m, q \le 10^5$. You can make hacks only if both versions of the problem are solved.} Narek and Tsovak were busy preparing this round, so they have not managed to do their homework and decided to steal David's homework. Their strict teacher noticed that David has no homework and now wants to punish him. She hires other teachers to help her catch David. And now $m$ teachers together are chasing him. Luckily, the classroom is big, so David has many places to hide. The classroom can be represented as a one-dimensional line with cells from $1$ to $n$, inclusive. At the start, all $m$ teachers and David are in \textbf{distinct} cells. Then they make moves. During each move - David goes to an adjacent cell or stays at the current one. - Then, each of the $m$ teachers simultaneously goes to an adjacent cell or stays at the current one. This continues until David is caught. David is caught if any of the teachers (possibly more than one) is located in the same cell as David. \textbf{Everyone sees others' moves, so they all act optimally.} Your task is to find how many moves it will take for the teachers to catch David if they all act optimally. Acting optimally means the student makes his moves in a way that maximizes the number of moves the teachers need to catch him; and the teachers coordinate with each other to make their moves in a way that minimizes the number of moves they need to catch the student. Also, as Narek and Tsovak think this task is easy, they decided to give you $q$ queries on David's position.
There are only a few cases. Try finding them and handling them separately. For the easy version, there are three cases, and they can be considered separately. Case 1: David is to the left of both teachers. In this case, it is obvious that he needs to go as far left as possible, which is cell $1$. Then, the time needed to catch David will be $b_1-1$. Case 2: David is to the right of both teachers. In this case, similarly, David needs to go as far right as possible, which is cell $n$. Then, the time needed to catch David will be $n-b_2$. Case 3: David is between the two teachers. In this case, David needs to stay in the middle of the two teachers (if there are two middle cells, it doesn't matter which one is picked), so both of them have to come closer to him simultaneously. So, they will need the same amount of time, which is $(b_2-b_1) / 2$ (integer division). Notice that David can always get to the middle cell, regardless of his starting cell. What changes in Case 3 if there are more than two teachers? For this version, there are three cases, too. Case 1 and Case 2 from the above solution are still correct, but the last one should be changed a bit, because now it matters between which two consecutive teachers David is. To find those teachers, we can use binary search (after sorting $b$, of course). After finding that David is between teachers $i$ and $i+1$, the answer is $(b_{i+1}-b_i) / 2$, just like in the easy version.
[ "binary search", "greedy", "math", "sortings" ]
1,200
#include <bits/stdc++.h> using namespace std; int main() { ios_base::sync_with_stdio(false); cin.tie(0); cout.tie(0); int t; cin >> t; // test cases while (t--) { int n, m, q; cin >> n >> m >> q; vector<int> a(m); for (int i = 0; i < m; i++) cin >> a[i]; sort(a.begin(), a.end()); for (int i = 1; i <= q; i++) { int b; cin >> b; int k = upper_bound(a.begin(), a.end(), b) - a.begin(); // finding the first teacher at right if (k == 0) cout << a[0] - 1 << ' '; // case 1 else if (k == m) cout << n - a[m - 1] << ' '; // case 2 else cout << (a[k] - a[k - 1]) / 2 << ' '; // case 3 } cout << "\n"; } }
2005
C
Lazy Narek
Narek is too lazy to create the third problem of this contest. His friend Artur suggests that he should use ChatGPT. ChatGPT creates $n$ problems, each consisting of $m$ letters, so Narek has $n$ strings. To make the problem harder, he combines the problems by selecting some of the $n$ strings \textbf{possibly none} and concatenating them \textbf{without altering their order}. His chance of solving the problem is defined as $score_n - score_c$, where $score_n$ is Narek's score and $score_c$ is ChatGPT's score. Narek calculates $score_n$ by examining the selected string (he moves from left to right). He initially searches for the letter $"n"$, followed by $"a"$, $"r"$, $"e"$, and $"k"$. Upon finding all occurrences of these letters, he increments $score_n$ by $5$ and resumes searching for $"n"$ again (he doesn't go back, and he just continues from where he left off). After Narek finishes, ChatGPT scans through the array and increments $score_c$ by $1$ for each letter $"n"$, $"a"$, $"r"$, $"e"$, or $"k"$ that Narek fails to utilize (note that if Narek fails to complete the last occurrence by finding all of the $5$ letters, then all of the letters he used are counted in ChatGPT's score $score_c$, and Narek doesn't get any points if he doesn't finish finding all the 5 letters). Narek aims to maximize the value of $score_n - score_c$ by selecting the most optimal subset of the initial strings.
A greedy approach doesn't work because the end of one string can connect to the beginning of the next. Try thinking in terms of dynamic programming. Before processing the next string, we only need to know which letter we reached from the previous selections. Let's loop through the $n$ strings and define $dp_i$ as the maximal answer if we are currently looking for the $i$-th letter in the word "Narek". Initially, $dp_0 = 0$, and $dp_1 = \cdots = dp_4 = -\infty$. For the current string, we brute force over all five letters we could have previously ended on. Let's say the current letter is the $j$-th, where $0 \leq j < 5$. If $dp_j$ is not $-\infty$, we can simulate choosing this string for our subset and count the score difference it produces. Eventually, we will reach some $k$-th letter in the word "Narek". If the value obtained by reaching $dp_k$ from $dp_j$ is bigger than the previous value of $dp_k$, we update $dp_k$ with $dp_j + counted\texttt{_}score$. Finally, the answer is the maximum of $dp_i - 2 \cdot i$ over all $i$. This is because if $i$ is not $0$, we didn't fully complete the word: the problem states that in this case these letters are counted in ChatGPT's score, so we subtract them from our score and add them to ChatGPT's (each of the $i$ trailing matched letters contributed $+1$ to $counted\texttt{_}score$ but should actually contribute $-1$, hence $-2 \cdot i$). Note: updating the array $dp$ in place is incorrect, because we may update some $dp_i$ while processing a string and then use that updated $dp_i$ for the same string. To avoid this, we can use two arrays and copy one into the other after each string. Time complexity: $O(n*m*5)$ or $O(n*m*25)$, depending on the implementation.
[ "dp", "implementation", "strings" ]
1,800
#include <bits/stdc++.h> using namespace std; const string narek = "narek"; int main() { ios_base::sync_with_stdio(false); cin.tie(0); cout.tie(0); int t; cin >> t; // test cases while (t--) { int n, m; cin >> n >> m; vector<string> s(n); for (int i = 0; i < n; i++) cin >> s[i]; vector<int> dp(5, int(-1e9)), ndp; dp[0] = 0; for (int i = 0; i < n; i++) { ndp = dp; // overwriting dp for (int j = 0; j < 5; j++) { if (dp[j] == int(-1e9)) continue; int counted_score = 0, next = j; for (int k = 0; k < m; k++) { int ind = narek.find(s[i][k]); if (ind == -1) continue; // if s[i][k] is not a letter of "narek" if (next == ind) { // if s[i][k] is the next letter next = (next + 1) % 5; counted_score++; } else counted_score--; } ndp[next] = max(ndp[next], dp[j] + counted_score); } dp = ndp; // overwriting dp back } int ans = 0; for (int i = 0; i < 5; i++) ans = max(ans, dp[i] - 2 * i); // checking all letters cout << ans << "\n"; } }
2005
D
Alter the GCD
You are given two arrays $a_1, a_2, \ldots, a_n$ and $b_1, b_2, \ldots, b_n$. You must perform the following operation \textbf{exactly once}: - choose any indices $l$ and $r$ such that $1 \le l \le r \le n$; - swap $a_i$ and $b_i$ for all $i$ such that $l \leq i \leq r$. Find the maximum possible value of $\text{gcd}(a_1, a_2, \ldots, a_n) + \text{gcd}(b_1, b_2, \ldots, b_n)$ after performing the operation exactly once. Also find the number of distinct pairs $(l, r)$ which achieve the maximum value.
Find the maximal sum of $\gcd$-s first. Try checking whether it is possible to get some fixed $\gcd$-s for $a$ and $b$ (i.e. fix the $\gcd$-s and try finding a suitable range to swap). Choosing a range is equivalent to choosing a prefix and a suffix. Which prefixes and suffixes need to be checked to find the maximal sum of $\gcd$-s? There are only a few different values among the $\gcd$-s of prefixes and suffixes. Try finding a bound for them. After finding the maximal sum of $\gcd$-s, try dynamic programming to find the number of ways. Let $M$ be the maximal value in $a$. Finding the maximal sum of $\gcd$-s. Let's define $p_i$ and $q_i$ as the $\gcd$-s of the $i$-th prefixes of $a$ and $b$, respectively. Then, for each index $i$ such that $1 < i \leq n$, $p_i$ is a divisor of $p_{i-1}$, so either $p_i=p_{i-1}$ or $p_i \leq p_{i-1} / 2$ (the same holds for $q$). This means that there are at most $\log(M)$ different values in each of $p$ and $q$. The same holds for suffixes. Now, assume we have fixed the $\gcd$-s of $a$ and $b$ after the operation as $A$ and $B$, and consider the shortest of the longest prefixes of $a$ and $b$ having $\gcd$ $A$ and $B$, respectively. If the swapped range intersects that prefix, we can simply not swap the intersection, as this cannot worsen the answer. The same holds for the suffix. This means that the swapped range should lie inside the middle part left between that prefix and suffix. Moreover, the first and last elements of the middle part have to be swapped, because they are what prevented us from taking a longer prefix or suffix. So, the part that has to be swapped is exactly that middle part. Then, we can see that the only prefix lengths worth checking are the places where one of the prefix $\gcd$-s (in array $a$ or $b$) changes (i.e. $i$ is a suitable prefix end if $p_i \neq p_{i+1}$ or $q_i \neq q_{i+1}$, because otherwise we could have taken a longer prefix). 
So, we can brute force through the suitable prefixes and suffixes (the number of each is at most $\log(M)$). All we are left with is calculating the $\gcd$-s of the middle parts to update the answer, which can be done with a sparse table. Now, let's brute force through the suitable prefixes. Assume we are considering the prefix ending at $i$. This means the left border of the range will be $i+1$. Then, we can brute force the right border starting from $i+1$. In each iteration, we keep the $\gcd$-s of the range and update the answer. To proceed to the next border, we only need to update the $\gcd$-s with the next elements of $a$ and $b$. Finding the number of ways. Let's fix the $\gcd$-s of $a$ and $b$ and compute $dp_{i,j}$ ($1 \leq i \leq n$ and $0 \leq j < 3$), which shows: the number of ways to get the fixed $\gcd$-s in the interval $[1, i]$ without swapping a range, if $j=0$; the number of ways to get the fixed $\gcd$-s in the interval $[1, i]$ with a range whose swap has started but not ended yet, if $j=1$; the number of ways to get the fixed $\gcd$-s in the interval $[1, i]$ with an already swapped range, if $j=2$. In all $dp$-s we assume we don't swap $a_1$ and $b_1$. Then, we calculate the $dp$ with $i$ going from $1$ upwards. In each iteration, we check whether the pair $(a_i, b_i)$ is suitable (we consider them swapped if $j=1$). If it is not suitable, we don't do anything. 
Otherwise: $dp_{i,0} = dp_{i-1,0}$; $dp_{i,1} = dp_{i-1,0}+dp_{i-1,1}$; $dp_{i,2} = dp_{i-1,0}+dp_{i-1,1}+dp_{i-1,2}$. After the calculations, the answer is $dp_{n,0}+dp_{n,1}+dp_{n,2}$. But we have to subtract $n$ from the answer if not swapping any range gives the expected sum of $\gcd$-s, because we counted that case once in $dp_{n, 0}$ and $n-1$ times when adding $dp_{i-1,0}$ to $dp_{i,2}$. We also check the swaps of prefixes separately (the $dp$ assumes $a_1$ and $b_1$ are not swapped, so ranges starting at index $1$ are not counted by it). Finally, the only thing left is to brute force through the $\gcd$-s of $a$ and $b$ in a smart way. As $a_1$ and $b_1$ are not swapped, the $\gcd$-s of the arrays must be their divisors. Then, it is enough to brute force through the divisors of $a_1$ only (the $\gcd$ of $a$), as the other $\gcd$ is derived from their sum. And since $a_1$ has at most about $\sqrt[3]{a_1}$ ($1344$, to be precise) divisors, the solution will be fast enough (actually, there are very few cases where the derived $\gcd$ of $b$ is a divisor of $b_1$). Finalizing. This solution is not fast enough when all $a_1$-s are around $10^9$, because while they have about $\sqrt[3]{a_1}$ divisors, finding them takes $\sqrt{a_1}$ checks. This means the time complexity is $O(t * \sqrt{a_1})$, which is too slow. To handle this, we can write another, slower solution which doesn't depend on $a_1$. One such solution works in $O(n^2* \log(M))$, where $\log(M)$ comes from $\gcd$, which is actually amortized and faster. We brute force through the left border, and then through the right border of the swapping range, keeping its $\gcd$-s and checking each of them. The reason we need this slow solution is that we can use it for $n\leq20$. But for bigger $n$-s, the first solution will work, because there are at most $t/20$ such test cases. 
So the time complexity for them will be $O(n*\log^2(M))$ for the sum of the $\gcd$-s and $O(t/20*\sqrt{M} + \sqrt[3]{M} * n)$ for the number of ways. For the small $n$-s, the time complexity will be $O(n*\log(M)*20)$. See the formal proof below. Let's say there are $c_i$ $n$-s equal to $i$. We know, that $\displaystyle\sum_{i=1}^{20} (c_i\cdot i) \leq n$. Then the time complexity will be $O(\displaystyle\sum_{i=1}^{20} (c_i\cdot i^2* \log(M))) \leq O((\displaystyle\sum_{i=1}^{20} (c_i\cdot i))*\log(M)*20) \leq O(n*\log(M)*20)$. Time complexity: $O(n*\log^2(M) + t/20*\sqrt{M} + \sqrt[3]{M} * n + n*\log(M)*20) \approx O(10^9)$. It is actually a very rough estimation with amortized $\log(M)$.
[ "binary search", "brute force", "data structures", "divide and conquer", "implementation", "number theory" ]
2,400
#include <bits/stdc++.h> using namespace std; typedef long long ll; const int N = 200005; int a[N], b[N]; int pa[N], pb[N], sa[N], sb[N]; int n; ll dp[N][3]; ll solve_one(ll ga, ll gb) { if (ga <= 0 || gb <= 0) return 0; // invalid fixed gcd(s) if (b[1] % gb) return 0; // invalid gcd of b for (int i = 0; i <= n; i++) for (int j = 0; j < 3; j++) dp[i][j] = 0; // initializing dp[1][0] = 1; // base case for (int i = 2; i <= n; i++) { for (int j = 0; j < 3; j++) { for (int k = 0; k <= j; k++) { bool flag = 1; if (j == 1) swap(a[i], b[i]); // if swapping if (a[i] % ga) flag = 0; // checking suitability if (b[i] % gb) flag = 0; // checking suitability if (j == 1) swap(a[i], b[i]); // swapping back if (flag) dp[i][j] += dp[i - 1][k]; /* if j is 0, we can use only k=0 (not started swapping yet) if j is 1, we can use both k=0 (starting swapping) and k=1 (currently swapping) if j is 2, we can use k=0 (no swaps at all), k=1 (finishing swapping), k=2 (already finished swapping) */ } } } return dp[n][0] + dp[n][1] + dp[n][2]; } int main() { ios::sync_with_stdio(0); cin.tie(0); int t; cin >> t; // test cases while (t--) { cin >> n; for (int i = 1; i <= n; i++) cin >> a[i]; for (int i = 1; i <= n; i++) cin >> b[i]; sa[n + 1] = sb[n + 1] = 0; // initializing for (int i = 1; i <= n; i++) { // calculating the gcd-s of prefixes and suffixes of a and b pa[i] = gcd(pa[i - 1], a[i]); pb[i] = gcd(pb[i - 1], b[i]); sa[n - i + 1] = gcd(sa[n - i + 2], a[n - i + 1]); sb[n - i + 1] = gcd(sb[n - i + 2], b[n - i + 1]); } if (n <= 300) { // slower solution int max_gcd = 0, ways = 0; for (int i = 1; i <= n; i++) { int ga = 0, gb = 0; // current gcd-s of the range for (int j = i; j <= n; j++) { ga = gcd(ga, a[j]); gb = gcd(gb, b[j]); int ca = gcd(pa[i - 1], gcd(sa[j + 1], gb)); // gcd of a after the operation int cb = gcd(pb[i - 1], gcd(sb[j + 1], ga)); // gcd of b after the operation if (ca + cb > max_gcd) { max_gcd = ca + cb; ways = 1; } else if (ca + cb == max_gcd) ways++; } } cout << max_gcd << ' ' << ways << "\n"; continue; } int max_gcd = pa[n] + pb[n]; for (int i = 2; i <= n; i++) { if (pa[i] == pa[i - 1] && pb[i] == pb[i - 1]) continue; // checking whether the prefix is suitable int ga = 0, gb = 0; // current gcd-s of the range for (int j = i; j <= n; j++) { ga = gcd(ga, a[j]); // updating gcd of the range of a gb = gcd(gb, b[j]); // updating gcd of the range of b int ca = gcd(pa[i - 1], gcd(sa[j + 1], gb)); // gcd of a after the operation int cb = gcd(pb[i - 1], gcd(sb[j + 1], ga)); // gcd of b after the operation max_gcd = max(max_gcd, ca + cb); // updating the answer } } ll ans = 0; for (int ga = 1; ga * ga <= a[1]; ga++) { if (a[1] % ga) continue; ans += solve_one(ga, max_gcd - ga); // counting for fixed gcd-s if (ga * ga == a[1]) continue; ans += solve_one(a[1] / ga, max_gcd - a[1] / ga); // counting for fixed gcd-s } if (pa[n] + pb[n] == max_gcd) ans -= n; // n-1 times updated with j=2 and k=0 and one time with no swaps at all (dp[n][0]) for (int i = 1; i <= n; i++) { if (gcd(pa[i], sb[i + 1]) + gcd(pb[i], sa[i + 1]) == max_gcd) ans++; // checking prefixes } cout << max_gcd << ' ' << ans << "\n"; } return 0; }
2005
E2
Subtangle Game (Hard Version)
\textbf{This is the hard version of the problem. The differences between the two versions are the constraints on all the variables. You can make hacks only if both versions of the problem are solved.} Tsovak and Narek are playing a game. They have an array $a$ and a matrix $b$ of integers with $n$ rows and $m$ columns, numbered from $1$. The cell in the $i$-th row and the $j$-th column is $(i, j)$. They are looking for the elements of $a$ in turns; Tsovak starts first. Each time a player looks for a cell in the matrix containing the current element of $a$ (Tsovak looks for the first, then Narek looks for the second, etc.). Let's say a player has chosen the cell $(r, c)$. The next player has to choose his cell in the submatrix starting at $(r + 1, c + 1)$ and ending in $(n, m)$ (the submatrix can be empty if $r=n$ or $c=m$). If a player cannot find such a cell (or the remaining submatrix is empty) or the array ends (the previous player has chosen the last element), then he loses. Your task is to determine the winner if the players play optimally. \textbf{Note: since the input is large, you may need to optimize input/output for this problem.} For example, in C++, it is enough to use the following lines at the start of the main() function: \begin{verbatim} int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); cout.tie(NULL); } \end{verbatim}
What happens when some number appears in $a$ more than once? How can we use the constraint on the numbers of the array and the matrix? Let's say some number $x$ occurs in $a$ more than once. Let's define $u$ and $v$ as the first two indexes of $a$ such that $u<v$ and $a_u=a_v=x$. When a player reaches $a_u = x$, it is always optimal to choose a cell such that afterwards no $x$ can be chosen (there is no other $x$ in the remaining submatrix). This is true because if the player would win by choosing an "inner" $x$ (let's call it $x_2$), he would also win with the bigger remaining matrix. But if he would lose, that means there is some number inside the submatrix left after $x_2$ that is winning for the other player, so the latter can choose it regardless of the opponent's choice. So, when a player reaches $a_v$, he will lose, which is the same as if the array $a$ ended there. This gives us the chance to "stop" array $a$ at the first index where some number appears for the second time. Now, as all the numbers are not greater than $7$, we can shorten array $a$ to at most $7$ elements. Then, we can keep $dp_{k,i,j}$, which shows whether the player starting from the $k$-th position of the array $a$ (i.e. considering only the suffix starting at $k$; not necessarily Tsovak) will win or not. To calculate the $dp$, we go from the bottom-right cell of the matrix to the upper-left cell. $dp_{k,i,j}$ wins if at least one of these happens: $i<n$ and $dp_{k,i+1,j}$ wins; $j<m$ and $dp_{k,i,j+1}$ wins; $b_{i,j}=a_k$ and ($k=l$ or $i=n$ or $j=m$ or $dp_{k+1,i+1,j+1}$ loses). Finally, if $dp_{1,1,1}$ wins, Tsovak wins; otherwise, Narek wins. Time complexity: $O(n*m)$. Consider the same $dp$ as in E1. Can you eliminate one of the states? Instead of $dp_{k,i,j}$, eliminate $k$ from the state. 
Instead, put some array index into the $dp$ value to keep information about the array. Keep $dp0_{i,j}$, which shows the smallest even $k$ such that $dp_{k,i,j}$ wins. Do the same for odd ones. Let's define $dp0_{i,j}$ as the minimal even index $k$ ($1 \le k \le l$) such that $dp_{k,i,j}$ wins. Define $dp1_{i,j}$ similarly for odd indexes. First, fill all of them with $\infty$. Then, compute them from the bottom-right cell of the matrix to the top-left cell. $dp0_{i,j}$ will be the minimum of the following three values: $dp0_{i+1, j}$ (if $i=n$ there will be no submatrix left, so $dp0_{i+1,j}$ will be $\infty$); $dp0_{i,j+1}$; and the following candidate. Let's define $ind$ as the index of $b_{i,j}$ in $a$ (there are no duplicates in $a$, and if $b_{i,j}$ does not appear there, assign $\infty$ to $ind$). If $dp1_{i+1,j+1}>ind+1$, which means that $dp_{ind+1,i+1,j+1}$ loses and thus ensures the current player's win, then the value of this part will be $ind$ (for $dp0$, only even $ind$ is considered; for $dp1$, only odd). Otherwise, $\infty$. This is because, in other cases, the opponent either had a winning position starting from $ind+1$ or even earlier in the game, so we can't win from that index. We compute $dp1$ similarly (and we compute $dp0$ and $dp1$ simultaneously for every $(i,j)$). Lastly, if $dp1_{1,1}=1$, then Tsovak wins; otherwise, Narek wins.
[ "data structures", "dp", "games", "greedy", "implementation" ]
2,500
#include <bits/stdc++.h> using namespace std; const int L = 1505, N = 1505, M = 1505; int a[L], b[N][M]; int l, n, m; int ind[N * M]; int dp0[N][M], dp1[N][M]; int main() { ios_base::sync_with_stdio(false); cin.tie(0); cout.tie(0); int t; cin >> t; // test cases while (t--) { int i, j; cin >> l >> n >> m; for (i = 1; i <= l; i++) cin >> a[i]; for (i = 1; i <= l; i++) { if (ind[a[i]]) { // checking if a[i] has already appeared l = i - 1; break; } ind[a[i]] = i; } for (i = 1; i <= n; i++) for (j = 1; j <= m; j++) cin >> b[i][j]; for (i = 0; i <= n + 1; i++) for (j = 0; j <= m + 1; j++) dp0[i][j] = dp1[i][j] = int(1e9); // initializing dp0 and dp1 for (i = n; i >= 1; i--) { for (j = m; j >= 1; j--) { dp0[i][j] = min(dp0[i + 1][j], dp0[i][j + 1]); // default update dp1[i][j] = min(dp1[i + 1][j], dp1[i][j + 1]); // default update if (!ind[b[i][j]]) continue; // if there is no b[i][j] in a then it cannot be taken by a player if (ind[b[i][j]] % 2 == 0 && ind[b[i][j]] + 3 <= dp1[i + 1][j + 1]) dp0[i][j] = min(dp0[i][j], ind[b[i][j]]); // if dp1 cannot win after dp0 choosing b[i][j] if (ind[b[i][j]] % 2 == 1 && ind[b[i][j]] + 3 <= dp0[i + 1][j + 1]) dp1[i][j] = min(dp1[i][j], ind[b[i][j]]); // if dp0 cannot win after dp1 choosing b[i][j] } } if (dp1[1][1] == 1) cout << "T\n"; else cout << "N\n"; for (i = 1; i <= l; i++) ind[a[i]] = 0; // changing used ind[i]-s back to 0-s for the next test case } }
2006
A
Iris and Game on the Tree
Iris has a tree rooted at vertex $1$. Each vertex has a value of $\mathtt 0$ or $\mathtt 1$. Let's consider a leaf of the tree (the vertex $1$ is never considered a leaf) and define its weight. Construct a string formed by the values of the vertices on the path starting at the root and ending in this leaf. Then the weight of the leaf is the difference between the number of occurrences of $\mathtt{10}$ and $\mathtt{01}$ substrings in it. Take the following tree as an example. Green vertices have a value of $\mathtt 1$ while white vertices have a value of $\mathtt 0$. - Let's calculate the weight of the leaf $5$: the formed string is $\mathtt{10110}$. The number of occurrences of substring $\mathtt{10}$ is $2$, the number of occurrences of substring $\mathtt{01}$ is $1$, so the difference is $2 - 1 = 1$. - Let's calculate the weight of the leaf $6$: the formed string is $\mathtt{101}$. The number of occurrences of substring $\mathtt{10}$ is $1$, the number of occurrences of substring $\mathtt{01}$ is $1$, so the difference is $1 - 1 = 0$. The score of a tree is defined as the number of leaves with non-zero weight in the tree. But the values of some vertices haven't been decided and will be given to you as $?$. Filling the blanks would be so boring, so Iris is going to invite Dora to play a game. On each turn, one of the girls chooses any of the remaining vertices with value $?$ and changes its value to $\mathtt{0}$ or $\mathtt{1}$, \textbf{with Iris going first}. The game continues until there are no vertices with value $\mathtt{?}$ left in the tree. Iris aims to maximize the score of the tree, while Dora aims to minimize that. Assuming that both girls play optimally, please determine the final score of the tree.
Consider a formed string. Let's delete the useless part that doesn't contribute to the number of $\tt{01}$ and $\tt{10}$ substrings: we obtain a string in which every pair of adjacent bits differs. For example, $\tt{110001}\rightarrow\tt{101}$. The weight of a leaf then depends only on the parity of the length of this string, and the weight is non-zero if and only if the value of the root differs from the value of the leaf. If the value of the root is already decided, the strategy is simple: Iris fills an undecided leaf with the value opposite to the root, while Dora fills one with the value equal to the root; the answer is then easy to calculate. If the value of the root has not yet been decided, it seems optimal to fill it first, because some leaf values are already decided: by colouring the root at the very beginning, Iris secures the larger of the counts of $\tt 0$ and $\tt 1$ among the decided leaves. However, this can go wrong when those two counts are equal: whoever colours the root first loses the advantage of moving first (and when the number of $\tt ?$ among the initial leaves is odd, Iris colours one leaf fewer). In this situation the optimal choice is to colour the unimportant nodes, i.e. the nodes that are neither the root nor a leaf. If the number of $\tt ?$ among the unimportant nodes is odd, Dora will eventually be forced to colour the root (after the $\tt ?$ unimportant nodes are filled one by one), which lets Iris colour the leaves first. If Dora colours a leaf instead, Iris can answer by colouring another leaf with the opposite colour while at least one leaf remains, and by colouring the root with the opposite colour if none remains; so Dora never benefits from colouring a leaf first in this case. To judge whether a node is a leaf, record the degrees of the nodes. The time complexity is $\mathcal O(n)$.
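The two claims above, that a leaf's weight is non-zero exactly when its value differs from the root's, and the resulting case analysis, can be sanity-checked by a brute-force minimax over all assignments of the $\tt ?$ vertices on a tiny star tree. This is only an illustrative sketch (0-indexed strings, star tree with root at index 0), not the intended solution:

```python
def game_value(vals, iris_turn=True):
    # Star tree: vals[0] is the root, vals[1:] are leaves.
    # Minimax over the remaining '?' positions; Iris maximizes, Dora minimizes.
    qs = [i for i, c in enumerate(vals) if c == '?']
    if not qs:
        # score = leaves whose value differs from the root's value
        return sum(1 for c in vals[1:] if c != vals[0])
    outcomes = []
    for i in qs:
        for c in '01':
            nxt = vals[:i] + c + vals[i + 1:]
            outcomes.append(game_value(nxt, not iris_turn))
    return max(outcomes) if iris_turn else min(outcomes)

def formula(vals):
    # x/y/z: counts of '0'/'1'/'?' among the leaves; a star has no internal '?', so w = 0
    x = vals[1:].count('0')
    y = vals[1:].count('1')
    z = vals[1:].count('?')
    if vals[0] == '0':
        return y + (z + 1) // 2
    if vals[0] == '1':
        return x + (z + 1) // 2
    return max(x, y) + z // 2
```

For example, `game_value("?01?")` and `formula("?01?")` agree on a star with an undecided root and one undecided leaf.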
[ "constructive algorithms", "dfs and similar", "games", "graphs", "greedy", "trees" ]
1,700
T = int(input())
for _ in range(T):
    n = int(input())
    x, y, z, w = 0, 0, 0, 0
    deg = [0] * (n + 1)
    for __ in range(n - 1):
        u, v = map(int, input().split())
        deg[u] += 1
        deg[v] += 1
    s = " " + input()
    for i in range(2, n + 1):
        if deg[i] == 1:
            if s[i] == '?':
                z += 1
            elif s[i] == '0':
                x += 1
            else:
                y += 1
        elif s[i] == '?':
            w += 1
    if s[1] == '0':
        print(y + (z + 1) // 2)
    elif s[1] == '1':
        print(x + (z + 1) // 2)
    else:
        print(max(x, y) + (z + (w % 2 if x == y else 0)) // 2)
2006
B
Iris and the Tree
Given a rooted tree with the root at vertex $1$. For any vertex $i$ ($1 < i \leq n$) in the tree, there is an edge connecting vertices $i$ and $p_i$ ($1 \leq p_i < i$), with a weight equal to $t_i$. Iris does not know the values of $t_i$, but she knows that $\displaystyle\sum_{i=2}^n t_i = w$ and each of the $t_i$ is a \textbf{non-negative integer}. The vertices of the tree are numbered in a special way: the numbers of the vertices in each subtree are consecutive integers. In other words, the vertices of the tree are numbered in the order of a depth-first search. \begin{center} {\small The tree in this picture satisfies the condition. For example, in the subtree of vertex $2$, the vertex numbers are $2, 3, 4, 5$, which are consecutive integers.} \end{center} \begin{center} {\small The tree in this picture does not satisfy the condition, as in the subtree of vertex $2$, the vertex numbers $2$ and $4$ are not consecutive integers.} \end{center} We define $\operatorname{dist}(u, v)$ as the length of the simple path between vertices $u$ and $v$ in the tree. Next, there will be $n - 1$ events: - Iris is given integers $x$ and $y$, indicating that $t_x = y$. After each event, Iris wants to know the maximum possible value of $\operatorname{dist}(i, i \bmod n + 1)$ \textbf{independently} for each $i$ ($1\le i\le n$). She only needs to know the sum of these $n$ values. Please help Iris quickly get the answers. Note that when calculating the maximum possible values of $\operatorname{dist}(i, i \bmod n + 1)$ and $\operatorname{dist}(j, j \bmod n + 1)$ for $i \ne j$, the unknown edge weights \textbf{may be different}.
The nodes are numbered in DFS order, so the vertex numbers within one subtree are always consecutive. Consider an edge connecting vertex $i$ and $p_i$, and suppose the size of subtree $i$ is $s_i$, so its vertices are numbered in $[i,i+s_i-1]$. For each $j$ with $i\le j<i+s_i-1$, the path between nodes $j$ and $j+1$ stays inside the subtree, so it doesn't pass the edge. The only two paths that pass edge $(i,p_i)$ are the path between $i-1$ and $i$ and the path between $i+s_i-1$ and $i+s_i$. Let's calculate the maximum value of $\text{dist}(i,(i\bmod n)+1)$. If the weights of all edges on the path have been determined, it is already fixed. Otherwise, it is optimal to assign one of the edges with undetermined weight the weight $w-(\text{sum of the known edge weights})$; the answer is then $w-(\text{sum of the known edge weights outside the path})$. How do we maintain this process? Each time we learn the weight of an edge, we specially check whether the two paths that pass this edge become uniquely determined. For all other paths that are not yet uniquely determined, the contribution of the edge is $-y$ ($y$ is its weight); we can use addition tags to handle this. The time complexity is $O(n)$. Can you solve the problem if the nodes are not numbered in DFS order?
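The key fact, that edge $(i,p_i)$ lies on exactly the two paths between $i-1,i$ and between $i+s_i-1,i+s_i$ (with $n$ wrapping to $1$), can be checked directly on a small DFS-ordered tree. A sketch with illustrative names (`path_edges` names each edge by its child endpoint):

```python
def path_edges(u, v, par):
    # edges on the simple path u..v, each edge identified by its child endpoint
    anc = {u}
    x = u
    while x != 1:
        x = par[x]
        anc.add(x)
    edges, x = [], v
    while x not in anc:            # climb from v to the lowest common ancestor
        edges.append(x)
        x = par[x]
    lca, y = x, u
    while y != lca:                # climb from u to the LCA
        edges.append(y)
        y = par[y]
    return set(edges)

n = 5
par = {2: 1, 3: 2, 4: 2, 5: 1}     # DFS-ordered: subtree of 2 is {2, 3, 4}
size = {c: 1 for c in range(1, n + 1)}
for c in range(n, 1, -1):          # subtree sizes, children before parents
    size[par[c]] += size[c]
for c in range(2, n + 1):
    # which of the n paths (j, j mod n + 1) pass through edge (c, par[c])?
    using = {j for j in range(1, n + 1) if c in path_edges(j, j % n + 1, par)}
    assert using == {c - 1, c + size[c] - 1}
```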
[ "brute force", "data structures", "dfs and similar", "dsu", "math", "trees" ]
1,800
#include <bits/stdc++.h>
namespace FastIO {
    template <typename T> inline T read() {
        T x = 0, w = 0;
        char ch = getchar();
        while (ch < '0' || ch > '9') w |= (ch == '-'), ch = getchar();
        while ('0' <= ch && ch <= '9') x = x * 10 + (ch ^ '0'), ch = getchar();
        return w ? -x : x;
    }
    template <typename T> inline void write(T x) {
        if (!x) return;
        write<T>(x / 10), putchar(x % 10 ^ '0');
    }
    template <typename T> inline void print(T x) {
        if (x < 0) putchar('-'), x = -x;
        else if (x == 0) putchar('0');
        write<T>(x);
    }
    template <typename T> inline void print(T x, char en) {
        if (x < 0) putchar('-'), x = -x;
        else if (x == 0) putchar('0');
        write<T>(x), putchar(en);
    }
};
using namespace FastIO;
#define MAXN 200001
int fa[MAXN], dep[MAXN];
int c1[MAXN], c2[MAXN], len[MAXN];
void solve() {
    int N = read<int>();
    long long w = read<long long>();
    for (int i = 2; i <= N; ++i) fa[i] = read<int>();
    for (int i = 2; i <= N; ++i) dep[i] = dep[fa[i]] + 1;
    for (int i = 1; i <= N; ++i) len[i] = c1[i] = 0;
    for (int i = 1, x, y; i <= N; ++i) {
        x = i, y = (i == N ? 1 : i + 1);
        while (x != y) {
            if (dep[x] < dep[y]) std::swap(x, y);
            (c1[x] ? c2[x] : c1[x]) = i, x = fa[x], ++len[i];
        }
    }
    long long sum = 0, sur = N;
    for (int i = 1, x; i < N; ++i) {
        x = read<int>(), sum += read<long long>();
        if ((--len[c1[x]]) == 0) --sur;
        if ((--len[c2[x]]) == 0) --sur;
        print<long long>(sum * 2 + sur * (w - sum), " \n"[i == N - 1]);
    }
}
int main() {
    int T = read<int>();
    while (T--) solve();
    return 0;
}
2006
C
Eri and Expanded Sets
Let there be a set that contains \textbf{distinct} positive integers. To expand the set to contain as many integers as possible, Eri can choose two integers $x\neq y$ from the set such that their average $\frac{x+y}2$ is still a positive integer and isn't contained in the set, and add it to the set. The integers $x$ and $y$ remain in the set. Let's call the set of integers consecutive if, after the elements are sorted, the difference between any pair of adjacent elements is $1$. For example, sets $\{2\}$, $\{2, 5, 4, 3\}$, $\{5, 6, 8, 7\}$ are consecutive, while $\{2, 4, 5, 6\}$, $\{9, 7\}$ are not. Eri likes consecutive sets. Suppose there is an array $b$, then Eri puts all elements in $b$ into the set. If after a finite number of operations described above, the set can become consecutive, the array $b$ will be called brilliant. Note that if the same integer appears in the array multiple times, we only put it into the set \textbf{once}, as a set always contains distinct positive integers. Eri has an array $a$ of $n$ positive integers. Please help him to count the number of pairs of integers $(l,r)$ such that $1 \leq l \leq r \leq n$ and the subarray $a_l, a_{l+1}, \ldots, a_r$ is brilliant.
Let's consider the elements in the final set $a$, and take $b_i=a_i-a_{i-1}$ as its difference array. Observation 1: every $b_i$ is odd. Otherwise we could split $b_i$ into two differences of $\frac{b_i}{2}$. Observation 2: adjacent $b_i$ are equal. Suppose $b_i$ and $b_{i+1}$ are different (both odd); then $a_{i+1}-a_{i-1}$ is even and $\frac{a_{i+1}+a_{i-1}}{2}\neq a_i$, so the set can still be expanded. Thus, the final set $a$ is an arithmetic progression with an odd common difference $b_i$. If you notice this, and can also notice the monotonicity, you can maintain a triple $(c,d,len)$ for a range of numbers, meaning the final set is an arithmetic progression starting from $c$, consisting of $len$ elements, with common difference $d$. It's amazing that two pieces of information like this can be merged, so we can use a sparse table to maintain them. However, there's a better way to solve it. As in the Euclidean algorithm, the common difference is equal to the largest odd divisor of the gcd of the difference array of $a$, that is, of $\gcd\{a_i-a_{i-1}\}$. The restriction then means that $\gcd\{a_i-a_{i-1}\}$ is a power of $2$. For a fixed point $l$, find the smallest $r$ such that the interval $[l,r]$ is good. A divisor of a power of $2$ is still a power of $2$, so the property is monotone, which makes it possible to use two pointers to maintain it in $O(n\log nV)$ or $O(n\log V)$. Using a sparse table or binary lifting reaches the same complexity. Note that adjacent equal numbers should be dealt with carefully. There's a harder but similar version of the problem. Can you solve it? There are $q$ queries $L,R,k$: find the number of pairs $(l,r)$ with $L\le l\le r\le R$ and $(\lvert Q({a_l\dots a_r}) \rvert-1) \cdot k\ge \max(a_l\dots a_r) - \min(a_l\dots a_r)$, where $Q(\cdot)$ denotes the result of putting the array into the set.
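The characterization, that the closure under integer averages becomes consecutive exactly when the gcd of the differences is a power of $2$, can be verified by brute force on tiny sets. A sketch with illustrative names:

```python
from math import gcd
from functools import reduce

def closure(s):
    # repeatedly insert integer averages of pairs until no change
    s = set(s)
    while True:
        extra = {(x + y) // 2 for x in s for y in s
                 if x != y and (x + y) % 2 == 0} - s
        if not extra:
            return s
        s |= extra

def is_consecutive(s):
    return max(s) - min(s) + 1 == len(s)

def predicted_consecutive(s):
    # claim: the closure is consecutive iff gcd of the differences is a power of 2
    if len(s) == 1:
        return True
    g = reduce(gcd, (x - min(s) for x in s))
    return g != 0 and (g & (g - 1)) == 0

for s in [{1, 5}, {1, 4}, {3, 6, 10}, {2, 6, 10}, {7}, {4, 10, 20}]:
    assert is_consecutive(closure(s)) == predicted_consecutive(s)
```

For instance, $\{1,5\}$ (difference $4$) expands to $\{1,2,3,4,5\}$, while $\{1,4\}$ (difference $3$) cannot be expanded at all.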
[ "data structures", "divide and conquer", "math", "number theory", "two pointers" ]
2,300
#include<bits/stdc++.h>
using namespace std;
#define int long long
#define pii pair<int,int>
#define all(v) v.begin(),v.end()
#define pb push_back
#define REP(i,b,e) for(int i=(b);i<(int)(e);++i)
#define over(x) {cout<<x<<endl;return;}
#define cntbit(x) __builtin_popcountll(x) // values are long long
int n;
int a[400005], res[400005], lsteq[400005];
int st[400005][20];
int query(int l, int r) { // gcd over st[l..r] via sparse table
    int s = __lg(r - l + 1);
    return __gcd(st[l][s], st[r - (1 << s) + 1][s]);
}
void Main() {
    cin >> n;
    REP(i, 0, n) cin >> a[i];
    --n;
    if (!n) over(1);
    REP(i, 0, n) st[i][0] = llabs(a[i] - a[i + 1]);
    REP(j, 0, __lg(n)) {
        REP(i, 0, n - (1 << (j + 1)) + 1) {
            st[i][j + 1] = __gcd(st[i][j], st[i + (1 << j)][j]);
        }
    }
    lsteq[n] = n;
    for (int i = n - 1; i >= 0; --i) lsteq[i] = (a[i] == a[i + 1] ? lsteq[i + 1] : i);
    int ans = 1;
    int l = 0, r = 0;
    REP(i, 0, n) {
        l = max(l, lsteq[i]);
        r = max(r, l);
        while (r < n && cntbit(query(l, r)) > 1) ++r;
        ans += n - r;
        ans += lsteq[i] - i + 1;
    }
    cout << ans << endl;
}
void TC() {
    int tc = 1;
    cin >> tc;
    while (tc--) {
        Main();
        cout.flush();
    }
}
signed main() {
    return cin.tie(0), cout.tie(0), ios::sync_with_stdio(0), TC(), 0;
}
2006
D
Iris and Adjacent Products
Iris has just learned multiplication in her Maths lessons. However, since her brain is unable to withstand too complex calculations, she could not multiply two integers with the product greater than $k$ together. Otherwise, her brain may explode! Her teacher sets a difficult task every day as her daily summer holiday homework. Now she is given an array $a$ consisting of $n$ elements, and she needs to calculate the product of each two adjacent elements (that is, $a_1 \cdot a_2$, $a_2 \cdot a_3$, and so on). Iris wants her brain to work safely, and in order to do that, she would like to modify the array $a$ in such a way that $a_i \cdot a_{i + 1} \leq k$ holds for every $1 \leq i < n$. There are two types of operations she can perform: - She can rearrange the elements of the array $a$ in an arbitrary way. - She can select an arbitrary element of the array $a$ and change its value to an arbitrary integer from $1$ to $k$. Iris wants to minimize the number of operations of \textbf{type $2$} that she uses. However, that's completely not the end of the summer holiday! Summer holiday lasts for $q$ days, and on the $i$-th day, Iris is asked to solve the Math homework for the subarray $b_{l_i}, b_{l_i + 1}, \ldots, b_{r_i}$. Help Iris and tell her the minimum number of type $2$ operations she needs to perform for each day. Note that the operations are \textbf{independent} for each day, i.e. the array $b$ is not changed.
Read the hints. Let's consider how to reorder the array. Let the sorted array be $a_1,a_2,\dots,a_n$. We can prove that it is optimal to reorder it as $a_n,a_1,a_{n-1},a_2,a_{n-2},\dots$ (the proof is at the end of the tutorial). From the proof we can see that the restriction is: for all $i$ with $1\le i\le \frac n2$, $a_i\cdot a_{n-i+1}\le k$. Let $cnt(l,r)$ denote the number of elements with values in range $[l,r]$. We can rewrite the restriction as: for each $i$ with $1\le i\le \sqrt k$, it must hold that $\min(cnt(1,i),\lfloor\frac n2\rfloor)\ge \min(cnt(\lfloor\frac k{i+1}\rfloor+1,k),\lfloor\frac n2\rfloor)$. This means that the number of $a_j$ that cannot be paired with $i+1$ must not exceed the number of $a_j\le i$, and only $\lfloor\frac n2\rfloor$ elements of each kind matter. Note that there are only $O(\sqrt k)$ range sums over the value range to consider, which is acceptable. For a type-$2$ operation, it is clearly optimal to change the maximum value into $1$. This increases every $cnt(1,i)$ by $1$ and decreases every non-zero $cnt(\lfloor\frac k{i+1}\rfloor+1,k)$ by $1$, so the answer is easy to calculate. Just watch the situation where $cnt(1,\sqrt k)$ is too small and the length of the interval is too short. How do we maintain $cnt(1,i)$ for all subintervals? Plain prefix sums work, but the constant factor of a 2-dimensional array seems too large; we lowered the memory limit to discourage such solutions. An easy way to handle this: solve the problem offline, and for each $i$ compute the prefix sums of $cnt(1,i)$ and $cnt(\lfloor\frac k{i+1}\rfloor+1,k)$, then answer all the queries. This way the constant factor becomes much smaller and the memory becomes $O(n+q+k)$. Another way is Mo's algorithm, which uses $O(n+q+k)$ memory and $O(n\sqrt q)$ time. The time complexity is $O(\sqrt k\sum(n+q))$. Proof of the optimal reordering: if $n=1$, it's correct. 
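The count-based reformulation can be cross-checked against the pairing condition on exhaustive small inputs. A sketch with illustrative names, comparing the direct condition $a_i \cdot a_{n-i+1} \le k$ on the sorted array with the $\mathcal O(\sqrt k)$ condition on range counts:

```python
from itertools import product
from math import isqrt

def pairing_ok(a, k):
    # direct condition on the sorted array: a_i * a_{n-i+1} <= k for i <= n/2
    a = sorted(a)
    n = len(a)
    return all(a[i] * a[n - 1 - i] <= k for i in range(n // 2))

def count_ok(a, k):
    # reformulation via O(sqrt k) range counts over the value range
    n = len(a)
    cnt = lambda lo, hi: sum(lo <= x <= hi for x in a)
    half = n // 2
    return all(min(cnt(1, i), half) >= min(cnt(k // (i + 1) + 1, k), half)
               for i in range(1, isqrt(k) + 1))

k = 10
for n in range(1, 5):
    for a in product(range(1, k + 1), repeat=n):
        assert pairing_ok(a, k) == count_ok(a, k)
```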
First, it is optimal to put $a_n$ at the first position. Suppose there's an ordering $a_{p_1},a_{p_2},\dots,a_{p_k},\dots$ with $p_k=n$ in which $a_n$ is not at the first position; we can reverse the prefix $a_{p_1},a_{p_2},\dots,a_{p_k}$ so that the condition is still satisfied and $a_n$ is first. Then, it is optimal to put $a_1$ at the second position. Suppose there's an ordering with $a_n$ first and some $a_t$ ($t\neq 1$) second. Since $a_t$ and $a_1$ can both be placed next to $a_n$, and any number can be placed beside either of them, we can swap $a_t$ and $a_1$. If $a_1\cdot a_n>k$, the array isn't good. Otherwise, any number can be put beside $a_1$, so reordering $a_2,a_3,\dots,a_{n-1}$ reduces to the same problem with $n'=n-2$. The conclusion follows by induction.
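The reordering claim can be brute-force checked on small arrays: some permutation keeps all adjacent products at most $k$ iff the interleaved order $a_n,a_1,a_{n-1},a_2,\dots$ does. A sketch with illustrative names:

```python
from itertools import permutations, product

def any_order_ok(a, k):
    # does any permutation keep all adjacent products <= k?
    return any(all(p[i] * p[i + 1] <= k for i in range(len(p) - 1))
               for p in permutations(a))

def zigzag_ok(a, k):
    # the claimed optimal order: a_n, a_1, a_{n-1}, a_2, ...
    a = sorted(a)
    order, lo, hi = [], 0, len(a) - 1
    while lo <= hi:
        order.append(a[hi]); hi -= 1
        if lo <= hi:
            order.append(a[lo]); lo += 1
    return all(order[i] * order[i + 1] <= k for i in range(len(order) - 1))

k = 12
for n in range(1, 5):
    for a in product(range(1, 8), repeat=n):
        assert any_order_ok(a, k) == zigzag_ok(a, k)
```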
[ "data structures", "greedy", "implementation", "math" ]
2,600
#include<bits/stdc++.h>
using namespace std;
#define all(v) v.begin(),v.end()
#define pb push_back
#define REP(i,b,e) for(int i=(b);i<(int)(e);++i)
#define over(x) {cout<<(x)<<endl;return;}
int n, q, r;
int a[300005];
int sum[300005];
int ql[300005], qr[300005];
int ans[300005];
int cnt[300005], cnt2[300005];
void Main() {
    cin >> n >> q >> r;
    REP(i, 0, n) cin >> a[i];
    sum[0] = 0;
    int B = sqrt(r);
    REP(i, 0, n) sum[i + 1] = sum[i] + (a[i] <= B);
    REP(i, 0, q) {
        int x, y;
        cin >> x >> y;
        --x, --y;
        ql[i] = x; qr[i] = y;
        ans[i] = 0;
        if (sum[y + 1] - sum[x] < (y - x + 1) / 2) ans[i] = (y - x + 1) / 2 - sum[y + 1] + sum[x];
    }
    REP(i, 0, B) {
        REP(j, 0, n + 1) cnt[j] = cnt2[j] = 0;
        REP(j, 0, n) {
            cnt[j + 1] = cnt[j] + (a[j] <= i);
            cnt2[j + 1] = cnt2[j] + (a[j] <= r / (i + 1));
        }
        REP(j, 0, q) {
            int x = ql[j], y = qr[j], l = y - x + 1;
            int c1 = cnt2[y + 1] - cnt2[x], c2 = cnt[y + 1] - cnt[x];
            ans[j] = max(ans[j], min((l - c1 - c2 + 1) / 2, l / 2 - c2));
        }
    }
    REP(i, 0, q) cout << ans[i] << ' ';
    cout << endl;
}
void TC() {
    int tc = 1;
    cin >> tc;
    while (tc--) {
        Main();
        cout.flush();
    }
}
signed main() {
    return cin.tie(0), cout.tie(0), ios::sync_with_stdio(0), TC(), 0;
}
/*
1. CLEAR the arrays (ESPECIALLY multitests)
2. DELETE useless output
*/
2006
E
Iris's Full Binary Tree
Iris likes full binary trees. Let's define the depth of a rooted tree as the maximum number of \textbf{vertices} on the simple paths from some vertex to the root. A full binary tree of depth $d$ is a binary tree of depth $d$ with exactly $2^d - 1$ vertices. Iris calls a tree a $d$-binary tree if some vertices and edges can be \textbf{added} to it to make it a full binary tree of depth $d$. Note that \textbf{any vertex} can be chosen as the root of a full binary tree. Since performing operations on large trees is difficult, she defines the binary depth of a tree as the minimum $d$ satisfying that the tree is $d$-binary. Specifically, if there is no integer $d \ge 1$ such that the tree is $d$-binary, the binary depth of the tree is $-1$. Iris now has a tree consisting of only vertex $1$. She wants to add $n - 1$ more vertices to form a larger tree. She will add the vertices one by one. When she adds vertex $i$ ($2 \leq i \leq n$), she'll give you an integer $p_i$ ($1 \leq p_i < i$), and add a new edge connecting vertices $i$ and $p_i$. Iris wants to ask you the binary depth of the tree formed by the first $i$ vertices for each $1 \le i \le n$. Can you tell her the answer?
Here are two lemmas that will be used in the solution. Lemma 1: among all points of a tree at the greatest distance from a given point, at least one is an endpoint of some diameter. Lemma 2: after merging two trees, at least one new diameter has both endpoints among the four endpoints of the two original diameters. Consider the subtree formed by the first $k$ points and suppose it embeds into a full binary tree of depth $d$; it must have exactly one node of minimum depth (otherwise it would not be connected). If that minimum depth is greater than $1$, we can decrease the depth of all nodes by $1$; it is easy to see this causes no conflicts, and $d$ decreases by $1$. From this we obtain: Conclusion 1: some node corresponds to the root of the full binary tree. Meanwhile, from the degrees of the nodes of a full binary tree, we obtain: Conclusion 2: the node corresponding to the root must have degree $\leq 2$; and if any node has degree $\gt 3$, all subsequent answers are $d = -1$. We can consider each point as a candidate root: by the lemmas, $d$ for that candidate is the maximum distance from the point to the two diameter endpoints, plus $1$. Finally, select the point with degree $\leq 2$ that minimizes this value as the root. A brute-force implementation achieves $\mathcal O(n^2\log n)$ time. Consider optimization. According to Lemma 2, each time a point is added, the diameter length either remains unchanged or increases by $1$. When it increases by $1$, as mentioned earlier, we track the maximum distance of all points (temporarily ignoring their degrees). 
If we maintain these changes in real time: when the new diameter length is even, the maximum distance increases by $1$ for all points except a subtree rooted at the midpoint of the new diameter; otherwise, the maximum distance increases by $1$ exactly for the points of the subtree at the midpoint of the original diameter. The subtrees here are taken with respect to a variable root, but this is not essential: number all nodes in DFS order, so that each rooted subtree is an interval in this order; a subtree with respect to an arbitrary root can then still be represented by $\mathcal O(1)$ intervals. A segment tree supports the range $+1$ operations. To account for degrees, simply add $+\infty$ to the maximum distance of every node of degree $3$. Ultimately, at the end of each operation, we only need the global minimum value. The time complexity is $\mathcal O(n\log n)$.
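Lemma 2 is what makes the incremental maintenance work: when a new vertex is attached, the new diameter is obtained by comparing the old one with the distances from the new vertex to the two old endpoints. A brute-force cross-check on a small fixed tree (illustrative sketch, not the intended $\mathcal O(n\log n)$ solution):

```python
from collections import deque

def bfs(adj, s):
    # distances from s by breadth-first search
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def true_diameter(adj):
    return max(max(bfs(adj, u).values()) for u in adj)

parents = [0, 0, 1, 1, 3, 4, 4, 2, 7]   # vertex i (1-based) attaches to parents[i-1]
adj = {0: []}
x = y = 0                                # maintained diameter endpoints
diam = 0
for i, p in enumerate(parents, start=1):
    adj[i] = [p]
    adj[p].append(i)
    d = bfs(adj, i)                      # distances from the new vertex
    dx, dy = d[x], d[y]
    if max(dx, dy) > diam:               # per Lemma 2: a longer diameter ends at the new vertex
        if dx >= dy:
            x, y, diam = i, x, dx
        else:
            x, y, diam = i, y, dy
    assert diam == true_diameter(adj)    # incremental value matches brute force
```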
[ "brute force", "data structures", "dfs and similar", "trees" ]
3,100
#include<bits/stdc++.h>
using namespace std;
#define int long long
#define pii pair<int,int>
#define all(v) v.begin(),v.end()
#define pb push_back
#define REP(i,b,e) for(int i=(b);i<(int)(e);++i)
#define over(x) {cout<<(x)<<endl;return;}
struct ds {
    int seg[2000005], tag[2000005];
    void build(int l, int r, int p) {
        seg[p] = (l == 0 ? 0 : 1e18), tag[p] = 0;
        if (l == r) return;
        int m = (l + r) >> 1;
        build(l, m, p * 2 + 1);
        build(m + 1, r, p * 2 + 2);
    }
    void pushdown(int p) {
        if (!tag[p]) return;
        tag[p * 2 + 1] += tag[p]; tag[p * 2 + 2] += tag[p];
        seg[p * 2 + 1] += tag[p]; seg[p * 2 + 2] += tag[p];
        tag[p] = 0;
    }
    void add(int l, int r, int s, int t, int p, int val) {
        if (l <= s && t <= r) {
            seg[p] += val; tag[p] += val;
            return;
        }
        int m = (s + t) >> 1;
        pushdown(p);
        if (m >= l) add(l, r, s, m, p * 2 + 1, val);
        if (m < r) add(l, r, m + 1, t, p * 2 + 2, val);
        seg[p] = min(seg[p * 2 + 1], seg[p * 2 + 2]);
    }
    void update(int pos, int l, int r, int p, int val) {
        if (l == r) {
            seg[p] = val;
            return;
        }
        int m = (l + r) >> 1;
        pushdown(p);
        if (m >= pos) update(pos, l, m, p * 2 + 1, val);
        else update(pos, m + 1, r, p * 2 + 2, val);
        seg[p] = min(seg[p * 2 + 1], seg[p * 2 + 2]);
    }
} seg;
int n;
vector<int> v[500005];
int fa[500005], an[500005][21];
int dep[500005], dfn[500005], rev[500005];
int tot, deg[500005], ls[500005];
void dfs(int x, int pre, int d) {
    dep[x] = d; fa[x] = pre; an[x][0] = fa[x]; ls[x] = 1;
    REP(i, 0, __lg(n + 1)) if (an[x][i] == -1) an[x][i + 1] = -1; else an[x][i + 1] = an[an[x][i]][i];
    rev[tot] = x; dfn[x] = tot++;
    for (auto i : v[x]) dfs(i, x, d + 1), ls[x] += ls[i];
}
int getlca(int x, int y) {
    if (dep[x] < dep[y]) swap(x, y);
    int d = dep[x] - dep[y];
    for (int i = __lg(d); i >= 0; --i) if ((d >> i) & 1) x = an[x][i];
    if (x == y) return x;
    d = dep[x];
    for (int i = __lg(d); i >= 0; --i) if ((1 << i) <= dep[x] && an[x][i] != an[y][i]) x = an[x][i], y = an[y][i];
    return fa[x];
}
int getan(int x, int y) {
    if (dfn[y] >= dfn[x] && dfn[y] <= ls[x]) {
        int d = dep[y] - dep[x] - 1;
        for (int i = 0; (1 << i) <= d; ++i) if ((d >> i) & 1) y = an[y][i];
        return y;
    } else return fa[x];
}
void updside(int x, int y) {
    if (dfn[y] >= dfn[x] && dfn[y] <= ls[x]) {
        seg.add(0, n - 1, 0, n - 1, 0, 1);
        x = getan(x, y);
        seg.add(dfn[x], ls[x], 0, n - 1, 0, -1);
    } else seg.add(dfn[x], ls[x], 0, n - 1, 0, 1);
}
int getdist(int x, int y) { return dep[x] + dep[y] - 2 * dep[getlca(x, y)]; }
struct diameter {
    int x, y, len;
    int update(int z) {
        if (x == y) {
            int d = getdist(x, z);
            if (d <= len) return d + len;
            updside(y, z);
            y = getan(y, z);
            return d + len;
        } else {
            int d1 = getdist(x, z), d2 = getdist(y, z), d3 = min(d1, d2);
            if (d3 <= len) return d3 + len + 1;
            if (d1 > d2) swap(x, y);
            updside(y, z);
            y = x; ++len;
            return len * 2;
        }
    }
    int query(int z) {
        if (x == y) return len + getdist(x, z);
        else return len + min(getdist(x, z), getdist(y, z)) + 1;
    }
};
void Main() {
    cin >> n;
    REP(i, 0, n) v[i].clear();
    REP(i, 1, n) {
        cin >> fa[i]; --fa[i];
        v[fa[i]].pb(i);
    }
    tot = 0; dfs(0, -1, 0); seg.build(0, n - 1, 0);
    REP(i, 0, n) deg[i] = 0, ls[i] += dfn[i] - 1;
    diameter d = {0, 0, 0};
    cout << "1 ";
    REP(i, 1, n) {
        ++deg[fa[i]]; ++deg[i];
        if (deg[fa[i]] == 4) {
            REP(j, i, n) cout << -1 << ' ';
            cout << endl;
            return;
        }
        if (deg[fa[i]] == 3) seg.update(dfn[fa[i]], 0, n - 1, 0, 1e18);
        seg.update(dfn[i], 0, n - 1, 0, d.update(i));
        cout << seg.seg[0] + 1 << ' ';
    }
    cout << endl;
}
void TC() {
    int tc = 1;
    cin >> tc;
    while (tc--) {
        Main();
        cout.flush();
    }
}
signed main() {
    return cin.tie(0), cout.tie(0), ios::sync_with_stdio(0), TC(), 0;
}
/*
1. CLEAR the arrays (ESPECIALLY multitests)
2. DELETE useless output
*/
2006
F
Dora's Paint
Sadly, Dora poured the paint when painting the class mural. Dora considers the mural as the matrix $b$ of size $n \times n$. Initially, $b_{i,j} = 0$ for all $1 \le i, j \le n$. Dora has only two brushes which have two different colors. In one operation, she can paint the matrix with one of two brushes: - The first brush has color $1$ on it and can paint one column of the matrix. That is, Dora chooses $1 \leq j \leq n$ and makes $b_{i,j} := 1$ for all $1 \leq i \leq n$; - The second brush has color $2$ on it and can paint one row of the matrix. That is, Dora chooses $1 \leq i \leq n$ and makes $b_{i,j} := 2$ for all $1 \leq j \leq n$. Dora paints the matrix so that the resulting matrix $b$ \textbf{contains only} $1$ and $2$. For a matrix $b$, let $f(b)$ denote the minimum number of operations needed to turn the initial matrix (containing only $0$) into $b$. The beauty of a matrix $b$ is the number of ways to paint the initial matrix in exactly $f(b)$ operations to turn it into $b$. If there's no way to turn the initial matrix into $b$, the beauty of $b$ is $0$. However, Dora made a uniformly random mistake; there's \textbf{exactly one} element different in the matrix $a$ given to you from the real matrix $b$. That is, there is exactly one pair $(i, j)$ such that $a_{i, j} = 3 - b_{i, j}$. Please help Dora compute the expected beauty of the real matrix $b$ modulo $998\,244\,353$ (all possible $n^2$ mistakes have equal probability). Since the size of the matrix is too large, Dora will only tell you the positions of $m$ elements of color $1$, and the remaining $n^2-m$ elements have color $2$.
Read the hints. From the hints, you may understand that, given the degrees of each node, you can compute the initial beauty of a matrix in $\mathcal O(n)$. To perform the topological sort quickly, use addition tags to maintain the degrees of each node; this is equivalent to sorting the nodes by degree. Brute-force traversal of each edge then solves the problem in $\mathcal O(n^3)$. First, let's try to solve the problem in $\mathcal O(n^2)$. Case: the initial beauty is not $0$. For an inverted edge between column $i$ and row $j$, we claim that the beauty remains non-zero if and only if column $i$ and row $j$ are in adjacent layers of the initial topological sort. If they are not in adjacent layers, WLOG let the initial direction be $i \to j$. Since the graph is bipartite, we cannot have $i \to x \to j$, but we can definitely find $i \to x \to y \to j$. If the edge is inverted, there is a cycle $i \to x \to y \to j \to i$, so the beauty becomes $0$. Otherwise, it can also be proved that no new cycle is created. Since the answer is $\displaystyle\prod_{i=1}^n \Big(\sum_{j=1}^n [\text{ColOutdeg}(j) = i]\Big)! \cdot \prod_{i=1}^n \Big(\sum_{j=1}^n [\text{RowOutdeg}(j) = i]\Big)!$, a modification changes only $\mathcal O(1)$ factors and can therefore be recomputed easily. Case: the initial beauty is $0$. Note that this is a complete bipartite graph. This means that if there is a cycle, there is a cycle on exactly $4$ vertices. Proof: from any cycle we can extract a cycle of $4$ vertices; then we simply try inverting each of these $4$ edges and sum up the answers, since otherwise the cycle still exists. The problem reduces to finding a cycle. We can DFS from each vertex; a found cycle contains no more than $2m$ edges, and we can shrink it as in the proof until its size becomes $4$. 
$\rule{1000px}{1px}$ Now let's move on to $\mathcal O(n + m)$ (in fact, $\mathcal O((n + m)\log n)$ solutions may pass as well). Case: the initial beauty is not $0$. First assume that every block changes from $\tt 2$ to $\tt 1$; the answer can then be calculated efficiently enough. Then go over the $\tt 1$ blocks by brute force and recompute their contribution to the answer. Case: the initial beauty is $0$. The topological sort stops halfway (some nodes are first extracted as previous "layers"), so we need to flip an edge to continue. At that moment, no remaining vertex has an indegree of $0$. After a flip, some vertex must get an indegree of $0$ (otherwise the beauty remains $0$ and we skip the flip). From this point of view, if no remaining vertex has an indegree of $1$, then the beauty stays $0$ after any single flip. If there is a node $u$ with an indegree of $1$, we can easily find a cycle of $4$ vertices containing $u$: find the unique node $x$ satisfying $x \to u$; find a node $y$ satisfying $y \to x$; find a node $z$ satisfying $z \to y$; then $u \to z$ must hold, since the indegree of $u$ is only $1$. There are other ways to find the cycle. For example, after selecting a node, you can run a sweepline over all the $m$ occurrences of $\tt 1$; implemented well, this is also linear. Time complexity: $\mathcal O(n + m)$. Method 2: try to find some way to "sort" the grid, since swapping two rows or two columns doesn't matter. How? Sort by the number of $1$s. How to calculate the beauty? The beauty is the product of factorials of the numbers of equal rows and equal columns. How to solve the original problem when the initial grid has positive beauty? How to optimize? How to solve the original problem when the initial grid is impossible to paint? Are there too many ways to flip a block? Read the hints. 
Let's sort the rows and columns in some way, i.e. swap rows and swap columns until they are ordered; sort by the number of $1$s in each row and column. For example, the left matrix sorts into the right one: $\left[\begin{matrix}2&1&1&1\\2&2&1&2\\1&1&1&1\\2&2&1&2\end{matrix}\right]\Rightarrow\left[\begin{matrix}1&1&1&1\\1&1&1&2\\1&2&2&2\\1&2&2&2\end{matrix}\right]$ We claim that the grid is solvable if and only if each later row is "included" in the previous rows, and likewise for the columns: if a position in some row is filled with $1$, then the same position in every previous row must also be $1$. This is correct because neither of the following two submatrix patterns can appear in a solvable grid, and to rule them out it suffices to check adjacent rows and columns: $\left[\begin{matrix}1&2\\2&1\end{matrix}\right],\left[\begin{matrix}2&1\\1&2\end{matrix}\right]$ Once everything is sorted, the number of ways to paint is easy to derive: it is essentially the product of the factorials of the multiplicities of each kind of row (for example, the last two rows in the right matrix above) and of each kind of column. The only thing to notice is that the first painting step shouldn't be counted. Consider modifying one block. If the initial grid is solvable, there are only $O(n)$ types of change (within one row, all the $1$s can be treated alike, and likewise all the $2$s). Each type of change can be handled in $O(1)$ by a simple case analysis. If the initial grid is unsolvable, find the two adjacent rows and the two adjacent columns where the order breaks. We claim that the block to change must be one of the four blocks at the intersections of these rows and columns; just try every possible case by brute force. For example, consider the following matrix: the first two columns and the last two rows go wrong, so we try changing blocks $(3,1),(3,2),(4,1),(4,2)$ and judge whether the grid becomes solvable. 
$\left[\begin{matrix}1&1&1&1\\1&1&1&2\\1&2&2&2\\2&1&2&2\end{matrix}\right]$ The time complexity is $O((n+m)\log n)$, and if you implement carefully enough you can solve it in $O(n+m)$.
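The sorting-based solvability criterion can be checked exhaustively for $n=2$ against a BFS over all paint sequences. A sketch with illustrative names (`solvable` implements the nested-rows check after sorting by the count of $1$s):

```python
from itertools import product
from collections import deque

def reachable(n):
    # all matrices over {1,2} obtainable from the zero matrix by the two brushes
    start = (0,) * (n * n)
    seen, q = {start}, deque([start])
    while q:
        m = q.popleft()
        nxts = []
        for j in range(n):                  # brush 1: column j := 1
            t = list(m)
            for i in range(n):
                t[i * n + j] = 1
            nxts.append(tuple(t))
        for i in range(n):                  # brush 2: row i := 2
            t = list(m)
            for j in range(n):
                t[i * n + j] = 2
            nxts.append(tuple(t))
        for t in nxts:
            if t not in seen:
                seen.add(t)
                q.append(t)
    return {m for m in seen if 0 not in m}  # keep fully painted matrices

def solvable(mat):
    # nested-rows criterion: sort rows by their number of 1s, then each
    # later row's set of 1-positions must be contained in the previous row's
    n = len(mat)
    ones = sorted(({j for j in range(n) if row[j] == 1} for row in mat),
                  key=len, reverse=True)
    return all(ones[i + 1] <= ones[i] for i in range(n - 1))

n = 2
ok = reachable(n)
for cells in product((1, 2), repeat=n * n):
    mat = [list(cells[i * n:(i + 1) * n]) for i in range(n)]
    assert solvable(mat) == (cells in ok)
```

For $n=2$, exactly the two "anti-diagonal" patterns are unreachable, matching the forbidden submatrices above.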
[ "brute force", "combinatorics", "constructive algorithms", "graphs", "implementation" ]
3,500
#if defined(LOCAL) or not defined(LUOGU) #pragma GCC optimize(3) #pragma GCC optimize("Ofast,unroll-loops") #endif #include<bits/stdc++.h> using namespace std; struct time_helper { #ifdef LOCAL clock_t time_last; #endif time_helper() { #ifdef LOCAL time_last=clock(); #endif } void test() { #ifdef LOCAL auto time_now=clock(); std::cerr<<"time:"<<1.*(time_now-time_last)/CLOCKS_PER_SEC<<";all_time:"<<1.*time_now/CLOCKS_PER_SEC<<std::endl; time_last=time_now; #endif } ~time_helper() { test(); } }time_helper; #ifdef LOCAL #include"dbg.h" #else #define dbg(...) (__VA_ARGS__) #endif namespace Fread{const int SIZE=1<<16;char buf[SIZE],*S,*T;inline char getchar(){if(S==T){T=(S=buf)+fread(buf,1,SIZE,stdin);if(S==T)return'\n';}return *S++;}}namespace Fwrite{const int SIZE=1<<16;char buf[SIZE],*S=buf,*T=buf+SIZE;inline void flush(){fwrite(buf,1,S-buf,stdout);S=buf;}inline void putchar(char c){*S++=c;if(S==T)flush();}struct NTR{~NTR(){flush();}}ztr;} #define getchar Fread::getchar #define putchar Fwrite::putchar #define Setprecision 10 #define between '\n' #define __int128 long long template<typename T>struct is_char{static constexpr bool value=(std::is_same<T,char>::value||std::is_same<T,signed char>::value||std::is_same<T,unsigned char>::value);};template<typename T>struct is_integral_ex{static constexpr bool value=(std::is_integral<T>::value||std::is_same<T,__int128>::value)&&!is_char<T>::value;};template<typename T>struct is_floating_point_ex{static constexpr bool value=std::is_floating_point<T>::value||std::is_same<T,__float128>::value;};namespace Fastio{struct Reader{template<typename T>typename std::enable_if_t<std::is_class<T>::value,Reader&>operator>>(T&x){for(auto &y:x)*this>>y;return *this;}template<typename T>typename std::enable_if_t<is_integral_ex<T>::value,Reader&>operator>>(T&x){char c=getchar();short f=1;while(c<'0'||c>'9'){if(c=='-')f*=-1;c=getchar();}x=0;while(c>='0'&&c<='9'){x=(x<<1)+(x<<3)+(c^48);c=getchar();}x*=f;return *this;}template<typename T>typename 
std::enable_if_t<is_floating_point_ex<T>::value,Reader&>operator>>(T&x){char c=getchar();short f=1,s=0;x=0;T t=0;while((c<'0'||c>'9')&&c!='.'){if(c=='-')f*=-1;c=getchar();}while(c>='0'&&c<='9'&&c!='.')x=x*10+(c^48),c=getchar();if(c=='.')c=getchar();else return x*=f,*this;while(c>='0'&&c<='9')t=t*10+(c^48),s++,c=getchar();while(s--)t/=10.0;x=(x+t)*f;return*this;}template<typename T>typename std::enable_if_t<is_char<T>::value,Reader&>operator>>(T&c){c=getchar();while(c=='\n'||c==' '||c=='\r')c=getchar();return *this;}Reader&operator>>(char*str){int len=0;char c=getchar();while(c=='\n'||c==' '||c=='\r')c=getchar();while(c!='\n'&&c!=' '&&c!='\r')str[len++]=c,c=getchar();str[len]='\0';return*this;}Reader&operator>>(std::string&str){str.clear();char c=getchar();while(c=='\n'||c==' '||c=='\r')c=getchar();while(c!='\n'&&c!=' '&&c!='\r')str.push_back(c),c=getchar();return*this;}Reader(){}}cin;const char endl='\n';struct Writer{typedef __int128 mxdouble;template<typename T>typename std::enable_if_t<std::is_class<T>::value,Writer&>operator<<(T x){for(auto &y:x)*this<<y<<between;*this<<'\n';return *this;}template<typename T>typename std::enable_if_t<is_integral_ex<T>::value,Writer&>operator<<(T x){if(x==0)return putchar('0'),*this;if(x<0)putchar('-'),x=-x;static int sta[45];int top=0;while(x)sta[++top]=x%10,x/=10;while(top)putchar(sta[top]+'0'),--top;return*this;}template<typename T>typename std::enable_if_t<is_floating_point_ex<T>::value,Writer&>operator<<(T x){if(x<0)putchar('-'),x=-x;x+=pow(10,-Setprecision)/2;mxdouble _=x;x-=(T)_;static int sta[45];int top=0;while(_)sta[++top]=_%10,_/=10;if(!top)putchar('0');while(top)putchar(sta[top]+'0'),--top;putchar('.');for(int i=0;i<Setprecision;i++)x*=10;_=x;while(_)sta[++top]=_%10,_/=10;for(int i=0;i<Setprecision-top;i++)putchar('0');while(top)putchar(sta[top]+'0'),--top;return*this;}template<typename T>typename std::enable_if_t<is_char<T>::value,Writer&>operator<<(T c){putchar(c);return*this;}Writer&operator<<(char*str){int 
cur=0;while(str[cur])putchar(str[cur++]);return *this;}Writer&operator<<(const char*str){int cur=0;while(str[cur])putchar(str[cur++]);return*this;}Writer&operator<<(std::string str){int st=0,ed=str.size();while(st<ed)putchar(str[st++]);return*this;}Writer(){}}cout;} #define cin Fastio::cin #define cout Fastio::cout #define endl Fastio::endl void solve(); main() { int t=1; cin>>t; while(t--)solve(); } template <uint32_t mod> struct LazyMontgomeryModInt { using mint = LazyMontgomeryModInt; using i32 = int32_t; using u32 = uint32_t; using u64 = uint64_t; static constexpr u32 get_r() { u32 ret = mod; for (i32 i = 0; i < 4; ++i) ret *= 2 - mod * ret; return ret; } static constexpr u32 r = get_r(); static constexpr u32 n2 = -u64(mod) % mod; static_assert(r * mod == 1, "invalid, r * mod != 1"); static_assert(mod < (1 << 30), "invalid, mod >= 2 ^ 30"); static_assert((mod & 1) == 1, "invalid, mod % 2 == 0"); u32 a; constexpr LazyMontgomeryModInt() : a(0) {} constexpr LazyMontgomeryModInt(const int64_t &b) : a(reduce(u64(b % mod + mod) * n2)){}; static constexpr u32 reduce(const u64 &b) { return (b + u64(u32(b) * u32(-r)) * mod) >> 32; } constexpr mint &operator+=(const mint &b) { if (i32(a += b.a - 2 * mod) < 0) a += 2 * mod; return *this; } constexpr mint &operator-=(const mint &b) { if (i32(a -= b.a) < 0) a += 2 * mod; return *this; } constexpr mint &operator*=(const mint &b) { a = reduce(u64(a) * b.a); return *this; } constexpr mint &operator/=(const mint &b) { *this *= b.inverse(); return *this; } constexpr mint operator+(const mint &b) const { return mint(*this) += b; } constexpr mint operator-(const mint &b) const { return mint(*this) -= b; } constexpr mint operator*(const mint &b) const { return mint(*this) *= b; } constexpr mint operator/(const mint &b) const { return mint(*this) /= b; } constexpr bool operator==(const mint &b) const { return (a >= mod ? a - mod : a) == (b.a >= mod ? 
b.a - mod : b.a); } constexpr bool operator!=(const mint &b) const { return (a >= mod ? a - mod : a) != (b.a >= mod ? b.a - mod : b.a); } constexpr mint operator-() const { return mint() - mint(*this); } constexpr mint pow(u64 n) const { mint ret(1), mul(*this); while (n > 0) { if (n & 1) ret *= mul; mul *= mul; n >>= 1; } return ret; } constexpr mint inverse() const { return pow(mod - 2); } friend ostream &operator<<(ostream &os, const mint &b) { return os << b.get(); } friend istream &operator>>(istream &is, mint &b) { int64_t t; is >> t; b = LazyMontgomeryModInt<mod>(t); return (is); } constexpr u32 get() const { u32 ret = reduce(a); return ret >= mod ? ret - mod : ret; } static constexpr u32 get_mod() { return mod; } explicit operator u32() const {return get();} }; #define modint LazyMontgomeryModInt #define mint modint<p> constexpr int p=998244353; std::pmr::monotonic_buffer_resource pool(100000000); void solve() { int n,m; cin>>n>>m; vector<std::pmr::vector<int>>a(n,std::pmr::vector<int>{&pool}); time_helper.test(); while(m--) { int u,v; cin>>u>>v; u--,v--; a[u].emplace_back(v); } time_helper.test(); std::pmr::vector<mint>inv{&pool},fac{&pool},ifac{&pool}; inv.resize(n+1),fac.resize(n+1),ifac.resize(n+1); std::pmr::vector<int>arr_cnt{&pool}; arr_cnt.resize(n+2); auto arr_sort=[&](const auto&a) { for(int i=0;i<n;i++) arr_cnt[a[i].size()+1]++; for(int x=1;x<=n;x++) arr_cnt[x]+=arr_cnt[x-1]; vector<std::pmr::vector<int>>b(n,std::pmr::vector<int>{&pool}); for(int i=0;i<n;i++) b[arr_cnt[a[i].size()]++]=move(a[i]); for(int x=0;x<=n+1;x++) arr_cnt[x]=0; return b; }; a=move(arr_sort(a)); auto arr_eq=[&](const auto&a,const auto&b) { if(a.size()!=b.size())return false; bool res=1; for(auto q:a)arr_cnt[q]=1; for(auto q:b)if(!arr_cnt[q]){res=0;break;} for(auto q:a)arr_cnt[q]=0; return res; }; auto arr_includes=[&](const auto&a,const auto&b) { bool res=1; for(auto q:a)arr_cnt[q]=1; for(auto q:b)if(!arr_cnt[q]){res=0;break;} for(auto q:a)arr_cnt[q]=0; return res; }; auto 
arr_set_difference=[&](const auto&a,const auto&b,auto &diff) { for(auto q:b)arr_cnt[q]=1; for(auto q:a) if(!arr_cnt[q]) { diff.emplace_back(q); if(diff.size()>2)break; } for(auto q:b)arr_cnt[q]=0; }; { fac[0]=1; for(int x=1;x<=n;x++) fac[x]=fac[x-1]*x; ifac[n]=fac[n].inverse(); for(int x=n;x>=1;x--) ifac[x-1]=ifac[x]*x,inv[x]=ifac[x]*fac[x-1]; } int failcnt=0; for(int x=1;x<n;x++) failcnt+=!(arr_includes(a[x],a[x-1])); if(failcnt==0) { int nc=0; mint ansbas=fac[a[0].size()],ans=0; std::pmr::vector<int>tnc{&pool},tmc{&pool}; tmc.emplace_back(a[0].size()); int ct=0; for(int x=0;x<n;x++) if(x==n-1||!arr_eq(a[x],a[x+1])) { nc++; if(a[x].size()!=n) ansbas*=fac[nc]; int mc=0; if(x==n-1)mc=n-a[x].size(); else mc=a[x+1].size()-a[x].size(); if(x!=n-1) ansbas*=fac[mc]; tnc.emplace_back(nc); tmc.emplace_back(mc); nc=0; ct++; } else nc++; for(int x=0;x<ct;x++) { int nc=tnc[x],mc=tmc[x+1]; if(mc==0)continue; mint fix=ansbas; if(x==ct-1) { fix*=mc; } if(x==ct-2&&mc==1&&tmc[x+2]==0) { fix*=inv[tnc[x+1]+1]; } if(x==ct-1||mc!=1) { if(mc) { if(nc>1)ans+=fix; else ans+=fix*(tmc[x]+1); } } else if(nc>1)ans+=fix*(tnc[x+1]+1); else ans+=fix*(tnc[x+1]+1)*(tmc[x]+1); } for(int x=0;x<ct;x++) { int nc=tnc[x],mc=tmc[x]; if(mc==0)continue; mint fix=ansbas; if(x==ct-1) { if(tmc[x+1]==0)fix*=nc; if(nc==1)fix*=inv[tmc[x+1]+1]; } if(x==0||mc!=1) { if(mc) { if(nc>1)ans+=fix; else ans+=fix*(tmc[x+1]+1); } } else if(nc>1)ans+=fix*(tnc[x-1]+1); else ans+=fix*(tnc[x-1]+1)*(tmc[x+1]+1); } cout<<(ans*inv[n]*inv[n]).get()<<endl; return; } if(failcnt>2){cout<<0<<endl;return;} mint ans=0; auto calc=[&](vector<std::pmr::vector<int>>&a)->mint { a=move(arr_sort(a)); for(int x=1;x<n;x++) if(!arr_includes(a[x],a[x-1]))return 0; int nc=0; mint ansbas=fac[a[0].size()]; for(int x=0;x<n;x++) if(x==n-1||!arr_eq(a[x],a[x+1])) { nc++; if(a[x].size()!=n) ansbas=ansbas*fac[nc]; int mc=0; if(x==n-1)mc=n-a[x].size(); else mc=a[x+1].size()-a[x].size(); if(x!=n-1) ansbas=ansbas*fac[mc]; nc=0; } else nc++; return ansbas; }; 
std::pmr::set<pair<int,int>>st{&pool}; std::pmr::vector<int>diff{&pool}; vector<std::pmr::vector<int>>b(n,std::pmr::vector<int>{&pool}); time_helper.test(); for(int x=1;x<n;x++) if(!arr_includes(a[x],a[x-1])) { for(int y=x+2;y<n;y++) if(!arr_includes(a[y],a[y-1]))goto fail; { diff.clear(); arr_set_difference(a[x],a[x-1],diff); if(diff.size()<=1) { if(!st.count({x-1,diff[0]})) { b=a; b[x-1].emplace_back(diff[0]); ans+=calc(b),st.insert({x-1,diff[0]}); } if(!st.count({x,diff[0]})) { b=a; b[x].erase(find(b[x].begin(),b[x].end(),diff[0])); ans+=calc(b),st.insert({x,diff[0]}); } } } { diff.clear(); arr_set_difference(a[x-1],a[x],diff); if(diff.size()<=1) { if(!st.count({x,diff[0]})) { b=a; b[x].emplace_back(diff[0]); ans+=calc(b),st.insert({x,diff[0]}); } if(!st.count({x-1,diff[0]})) { b=a; b[x-1].erase(find(b[x-1].begin(),b[x-1].end(),diff[0])); ans+=calc(b),st.insert({x-1,diff[0]}); } } } fail: break; } cout<<(ans*inv[n]*inv[n]).get()<<endl; }
2007
A
Dora's Set
Dora has a set $s$ containing integers. In the beginning, she will put all integers in $[l, r]$ into the set $s$. That is, an integer $x$ is initially contained in the set if and only if $l \leq x \leq r$. Then she allows you to perform the following operations: - Select three \textbf{distinct} integers $a$, $b$, and $c$ from the set $s$, such that $\gcd(a, b) = \gcd(b, c) = \gcd(a, c) = 1^\dagger$. - Then, remove these three integers from the set $s$. What is the maximum number of operations you can perform? $^\dagger$Recall that $\gcd(x, y)$ means the greatest common divisor of integers $x$ and $y$.
In a triple $(a, b, c)$ there are at least two odd integers and at most one even integer, because if two even integers occurred at the same time, their $\gcd$ would be at least $2$. It's optimal to make full use of the odd integers, so we try to use two odd integers in each triple. In fact, this is always possible: two consecutive odd integers always have $\gcd=1$, and consecutive integers also have $\gcd=1$, so we can choose three consecutive integers $(a,b,c)$ of the form $2k - 1, 2k, 2k + 1$. This is always optimal. Therefore, the answer is the number of odd integers in $[l, r]$, divided by $2$ and rounded down. The time complexity is $\mathcal O(T)$ or $\mathcal O(\sum (r - l))$.
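The counting formula can be sketched directly (a minimal sketch; `max_operations` is an illustrative name, not from the editorial):

```python
def max_operations(l: int, r: int) -> int:
    # Number of odd integers in [l, r]: odds up to r minus odds below l.
    odds = (r + 1) // 2 - l // 2
    # Each operation consumes two odd integers (plus at most one even one).
    return odds // 2
```

For example, `max_operations(1, 3)` is `1`, matching the single triple $(1, 2, 3)$.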
[ "greedy", "math", "number theory" ]
800
T = int(input()) for _ in range(T) : l, r = map(int, input().split()) print(((r + 1) // 2 - l // 2) // 2)
2007
B
Index and Maximum Value
After receiving yet another integer array $a_1, a_2, \ldots, a_n$ at her birthday party, Index decides to perform some operations on it. Formally, there are $m$ operations that she is going to perform in order. Each of them belongs to one of the two types: - $+ l r$. Given two integers $l$ and $r$, for all $1 \leq i \leq n$ such that $l \leq a_i \leq r$, set $a_i := a_i + 1$. - $- l r$. Given two integers $l$ and $r$, for all $1 \leq i \leq n$ such that $l \leq a_i \leq r$, set $a_i := a_i - 1$. For example, if the initial array $a = [7, 1, 3, 4, 3]$, after performing the operation $+ \space 2 \space 4$, the array $a = [7, 1, 4, 5, 4]$. Then, after performing the operation $- \space 1 \space 10$, the array $a = [6, 0, 3, 4, 3]$. Index is curious about the maximum value in the array $a$. Please help her find it after each of the $m$ operations.
Take a maximum value of the initial array and call it $a_{pos}$. We claim that $a_{pos}$ remains a maximum value after any number of operations. Proof: consider any value $x$ (so $x \leq a_{pos}$) before a particular operation. If $x = a_{pos}$, then $x = a_{pos}$ still holds after the operation. Otherwise $x < a_{pos}$; in one operation the difference between $x$ and $a_{pos}$ decreases by at most $1$, so $x$ cannot become strictly greater than $a_{pos}$. Hence $a_{pos}$ remains a maximum value. This reduces the problem to $n = 1$: each time a new operation arrives, we perform it only on $a_{pos}$ (and then output $a_{pos}$). Time complexity: $\mathcal O(n + m)$.
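The reduction can be sanity-checked by comparing a full simulation against tracking a single maximum (a sketch; the function names are illustrative):

```python
def simulate_full(a, ops):
    # Apply each operation to the whole array and record the maximum.
    out = []
    for c, l, r in ops:
        d = 1 if c == '+' else -1
        a = [x + d if l <= x <= r else x for x in a]
        out.append(max(a))
    return out

def simulate_max_only(a, ops):
    # Track only one initial maximum, as the editorial's proof allows.
    v = max(a)
    out = []
    for c, l, r in ops:
        if l <= v <= r:
            v += 1 if c == '+' else -1
        out.append(v)
    return out
```

On the example from the statement ($a = [7, 1, 3, 4, 3]$ with operations $+\,2\,4$ and $-\,1\,10$) both return the same maxima.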
[ "data structures", "greedy" ]
900
T = int(input()) for _ in range(T) : n, m = map(int, input().split()) v = max(map(int, input().split())) for __ in range(m) : c, l, r = input().split() l = int(l) r = int(r) if (l <= v <= r) : if (c == '+') : v = v + 1 else : v = v - 1 print(v, end = ' ' if __ != m - 1 else '\n')
2007
C
Dora and C++
Dora has just learned the programming language C++! However, she has completely misunderstood the meaning of C++. She considers it as two kinds of adding operations on the array $c$ with $n$ elements. Dora has two integers $a$ and $b$. In one operation, she can choose one of the following things to do. - Choose an integer $i$ such that $1 \leq i \leq n$, and increase $c_i$ by $a$. - Choose an integer $i$ such that $1 \leq i \leq n$, and increase $c_i$ by $b$. Note that $a$ and $b$ are \textbf{constants}, and they can be the same. Let's define a range of array $d$ as $\max(d_i) - \min(d_i)$. For instance, the range of the array $[1, 2, 3, 4]$ is $4 - 1 = 3$, the range of the array $[5, 2, 8, 2, 2, 1]$ is $8 - 1 = 7$, and the range of the array $[3, 3, 3]$ is $3 - 3 = 0$. After any number of operations (possibly, $0$), Dora calculates the range of the new array. You need to help Dora minimize this value, but since Dora loves exploring all by herself, you only need to tell her the minimized value.
Read the hints. Consider $a = b$ first. The answer is less than $a$: otherwise, you could always increase the minimum value by $a$ so that all elements become greater than $\max(c_i) - a$. Go over all possible minimum values $m$: we can perform operations so that all elements lie in the range $[m, m + a - 1]$, then compute the range and compare it with the answer. There are $n$ candidate minimum values, so this process is too slow as stated. Instead, first make $c_1$ the minimum and perform operations so that $c_2, \dots, c_n$ lie in the range $[c_1, c_1 + a - 1]$. Then sort the array and try $c_2, c_3, \dots, c_n$ as the minimum in turn. Suppose $c_i$ is the current minimum and $j$ is the largest index such that $c_j < c_i$. If $j$ exists, then $c_j + a$ is the maximum, because for all $k < j$ we have $c_k + a \leq c_j + a$, and for all $k > i$ we have $c_k \leq c_j + a$. So this runs in $O(n)$ after sorting. In the case $a \neq b$, let $d = \gcd(a, b)$. We can prove that the problem is equivalent to $a = b = d$ by Bézout's identity (see https://en.wikipedia.org/wiki/): first, the change in value of each element must be a multiple of $d$, since $a$ and $b$ are both multiples of $d$; second, we can always construct integers $x, y$ such that $ax + by = d$, so some sequence of operations increases or decreases an element's value, relative to the others, by exactly $d$. Time complexity: $\mathcal O(n\log n + \log V)$.
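The Bézout step can be illustrated with the extended Euclidean algorithm, which produces the coefficients $x, y$ with $ax + by = \gcd(a, b)$ (a sketch, not the full solution):

```python
def ext_gcd(a: int, b: int):
    # Returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y
```

For instance, `ext_gcd(12, 18)` yields coefficients certifying that every achievable change is a multiple of $6$.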
[ "math", "number theory" ]
1500
import math T = int(input()) for _ in range(T) : n, a, b = map(int, input().split()) d = math.gcd(a, b) a = list(map(int, input().split())) a = [x % d for x in a] a.sort() res = a[n - 1] - a[0] for i in range(1, n) : res = min(res, a[i - 1] + d - a[i]) print(res)
2008
A
Sakurako's Exam
Today, Sakurako has a math exam. The teacher gave the array, consisting of $a$ ones and $b$ twos. In an array, Sakurako \textbf{must} place either a '+' or a '-' in front of each element so that the sum of all elements in the array equals $0$. Sakurako is not sure if it is possible to solve this problem, so determine whether there is a way to assign signs such that the sum of all elements in the array equals $0$.
First of all, this task is the same as: "divide the array into two parts with equal sums". So, obviously, we need the sum of all elements to be even, which implies that the number of ones must be even. Then we can put half of the 2s in one part and the other half in the other part; but if the number of 2s is odd, one part will have a greater sum than the other, so we need to put two 1s into it. So, if the number of 2s is odd and we don't have two ones, the answer is "NO". Also, if the sum is odd, the answer is "NO". In all other cases, the answer is "YES".
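The case analysis collapses to a small predicate, which can be checked against brute force over all sign assignments (a sketch; the names are illustrative):

```python
from itertools import product

def can_balance(a: int, b: int) -> bool:
    # Total a + 2b must be even (so a is even); an odd count of 2s needs two 1s.
    return a % 2 == 0 and (b % 2 == 0 or a >= 2)

def can_balance_brute(a: int, b: int) -> bool:
    # Try every assignment of +1 / -1 signs to the a ones and b twos.
    vals = [1] * a + [2] * b
    return any(sum(v * s for v, s in zip(vals, signs)) == 0
               for signs in product((1, -1), repeat=len(vals)))
```

The two agree on all small inputs, which is a quick way to validate the reasoning above.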
[ "brute force", "constructive algorithms", "greedy", "math" ]
800
t=int(input()) for _ in range(t): a,b=map(int,input().split()) if a%2==1: print("NO") continue if a==0 and b%2==1: print("NO") continue print("YES")
2008
B
Square or Not
A beautiful binary matrix is a matrix that has ones on its edges and zeros inside. \begin{center} {\small Examples of four beautiful binary matrices.} \end{center} Today, Sakurako was playing with a beautiful binary matrix of size $r \times c$ and created a binary string $s$ by writing down all the rows of the matrix, starting from the first and ending with the $r$-th. More formally, the element from the matrix in the $i$-th row and $j$-th column corresponds to the $((i-1)*c+j)$-th element of the string. You need to check whether the beautiful matrix from which the string $s$ was obtained could be \textbf{squared}. In other words, you need to check whether the string $s$ could have been built from a \textbf{square} beautiful binary matrix (i.e., one where $r=c$).
Assume the string was created from a beautiful binary matrix of size $r\times c$. If $r\le 2$ or $c\le 2$, then the whole matrix consists of '1's, and this is the only case in which the string contains only the character '1'. So, if the whole string consists of '1's, we print "Yes" only if the length of the string is $4$, since $r=c=2$ is the only square matrix of this kind. Otherwise, there is at least one '0' in the string. Look at the ($0$-indexed) position of the first '0': the whole first row and the first character of the second row are '1', so if the matrix is square ($r=c$), the first '0' has index $r+1$. This fixes the value of $r$ (the index of the first '0' minus $1$), and the answer is "Yes" only if $r$ is the square root of $n$.
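An equivalent direct check is to build the unique square beautiful matrix of side $r = \sqrt{n}$ and compare it with the string (a sketch; `from_square_matrix` is an illustrative name):

```python
from math import isqrt

def from_square_matrix(s: str) -> bool:
    n = len(s)
    r = isqrt(n)
    if r * r != n or r < 2:
        return False
    edge = '1' * r                      # first and last row: all ones
    inner = '1' + '0' * (r - 2) + '1'   # middle rows: ones only on the border
    rows = [s[i * r:(i + 1) * r] for i in range(r)]
    return all(row == (edge if i in (0, r - 1) else inner)
               for i, row in enumerate(rows))
```

This is $O(n)$, the same complexity as the first-zero argument, and useful for cross-checking it.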
[ "brute force", "math", "strings" ]
800
for _ in range(int(input())): n=int(input()) s=input() i=0 while i<n and s[i]=='1': i+=1 if i==n: if n==4: print("Yes") else: print("No") continue i-=1 if i*i==n: print("Yes") else: print("No")
2008
C
Longest Good Array
Today, Sakurako was studying arrays. An array $a$ of length $n$ is considered good if and only if: - the array $a$ is increasing, meaning $a_{i - 1} < a_i$ for all $2 \le i \le n$; - the differences between adjacent elements are increasing, meaning $a_i - a_{i-1} < a_{i+1} - a_i$ for all $2 \le i < n$. Sakurako has come up with boundaries $l$ and $r$ and wants to construct a good array of maximum length, where $l \le a_i \le r$ for all $a_i$. Help Sakurako find the maximum length of a good array for the given $l$ and $r$.
We can solve this problem greedily. Choose the first element equal to $l$. Then the second element should be $l+1$, the third $l+3$, and so on. In general, the $i$-th element ($0$-indexed) equals $l+\frac{i\cdot (i+1)}{2}$. Proof of this solution: assume that $a$ is the array produced by our algorithm and $b$ is an array with a better answer, i.e. $len(b)>len(a)$. By the construction of $a$, there exists an index $i$ such that $a_j=b_j$ for all $j<i$ and $a_i<b_i$, because we choose each $a_i$ as the smallest possible element. WLOG assume that $len(b)=len(a)+1=n$. Then $b_n-b_{n-1}>b_{n-1}-b_{n-2}\ge a_{n-1}-a_{n-2}$, so we could append $b_n$ to the array $a$, which is a contradiction. Now the task is to find the largest $x$ such that $l+\frac{x\cdot (x+1)}{2}\le r$; the answer is then $x+1$. It can be found by binary search, by the quadratic formula, or just by brute force.
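The quadratic-formula route mentioned above can be sketched with integer arithmetic: the largest $x$ with $x(x+1)/2 \le r - l$ is $\lfloor(\sqrt{8(r-l)+1}-1)/2\rfloor$ (a sketch; `good_array_length` is an illustrative name, checked against the brute-force loop of the reference solution):

```python
from math import isqrt

def good_array_length(l: int, r: int) -> int:
    d = r - l
    # Largest x with x*(x+1)/2 <= d, from x = (-1 + sqrt(1 + 8d)) / 2.
    x = (isqrt(8 * d + 1) - 1) // 2
    return x + 1  # elements a_0 .. a_x
```

Using `isqrt` keeps the computation exact, avoiding floating-point rounding for large $r - l$.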
[ "binary search", "brute force", "math" ]
800
for _ in range(int(input())): a, b = map(int, input().split()) i = 0 while a + i <= b: a += i i += 1 print(i)
2008
D
Sakurako's Hobby
For a certain permutation $p$$^{\text{∗}}$ Sakurako calls an integer $j$ reachable from an integer $i$ if it is possible to make $i$ equal to $j$ by assigning $i=p_i$ a certain number of times. If $p=[3,5,6,1,2,4]$, then, for example, $4$ is reachable from $1$, because: $i=1$ $\rightarrow$ $i=p_1=3$ $\rightarrow$ $i=p_3=6$ $\rightarrow$ $i=p_6=4$. Now $i=4$, so $4$ is reachable from $1$. Each number in the permutation is colored either black or white. Sakurako defines the function $F(i)$ as the number of black integers that are reachable from $i$. Sakurako is interested in $F(i)$ for each $1\le i\le n$, but calculating all values becomes very difficult, so she asks you, as her good friend, to compute this. \begin{footnotesize} $^{\text{∗}}$A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation (the number $2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$, but the array contains $4$). \end{footnotesize}
Any permutation can be divided into a number of disjoint cycles, so $F(i)$ equals the number of black elements in the cycle containing $i$. We can therefore traverse all cycles in $O(n)$ and store, for each $i$, the number of black elements in its cycle.
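A sketch of the cycle decomposition ($p$ is $0$-indexed here; black is taken to be `s[i] == '0'`, as in the reference solution):

```python
def black_counts(p, s):
    n = len(p)
    ans = [0] * n
    seen = [False] * n
    for start in range(n):
        if seen[start]:
            continue
        # Collect the whole cycle containing `start`.
        cycle, i = [], start
        while not seen[i]:
            seen[i] = True
            cycle.append(i)
            i = p[i]
        blacks = sum(s[j] == '0' for j in cycle)
        for j in cycle:
            ans[j] = blacks
    return ans
```

Each vertex is visited once, giving the promised $O(n)$ total time.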
[ "dp", "dsu", "graphs", "math" ]
1100
t = int(input()) for _ in range(t): n = int(input()) b = [0] * (n + 1) us = [0] * (n + 1) p = [k-1 for k in map(int, input().split())] s = input() for i in range(0, n): if us[i]: continue sz = 0 while not us[i]: us[i] = 1 sz += s[i] == '0' i = p[i] while us[i] != 2: b[i] = sz us[i] = 2 i = p[i] print(" ".join(map(str, b[:-1])))
2008
E
Alternating String
Sakurako really loves alternating strings. She calls a string $s$ of lowercase Latin letters an alternating string if characters in the even positions are the same, if characters in the odd positions are the same, and the length of the string is \textbf{even}. For example, the strings 'abab' and 'gg' are alternating, while the strings 'aba' and 'ggwp' are not. As a good friend, you decided to gift such a string, but you couldn't find one. Luckily, you can perform two types of operations on the string: - Choose an index $i$ and delete the $i$-th character from the string, which will reduce the length of the string by $1$. This type of operation can be performed \textbf{no more than $1$ time}; - Choose an index $i$ and replace $s_i$ with any other letter. Since you are in a hurry, you need to determine the minimum number of operations required to make the string an alternating one.
Firstly, since the first operation can be used at most once, we need it only when the string has odd length. Assume the string has even length; then we can look at the characters at odd and even positions independently: change all characters at even positions to the character that occurs most often among them, and do the same for the characters at odd positions. Now consider the case where we need to delete one character. We can build prefix sums over even positions (let $pref_1[i][c]$ be the number of even positions $j < i$ with $s[j] = c$), prefix sums over odd positions ($pref_2[i][c]$, defined like $pref_1[i][c]$ but with odd instead of even), suffix sums over even positions ($suf_1[i][c]$, defined like $pref_1[i][c]$ but with $j > i$ instead of $j < i$), and suffix sums over odd positions ($suf_2[i][c]$, defined like $pref_2[i][c]$ but with $j > i$ instead of $j < i$). If we delete the character at index $i$, the suffix after $i$ shifts by one position and the parity of every index greater than $i$ flips, so the number of characters $c$ at even positions after deleting index $i$ is $pref_1[i][c] + suf_2[i][c]$. Using this, we can try to delete each character independently and solve the task as if the string had even length.
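A quadratic brute-force reference for the deletion case, useful for validating the prefix/suffix optimization (a sketch; this is $O(n^2 \cdot 26)$, not the intended solution):

```python
def min_ops_brute(s: str) -> int:
    def repl_cost(t: str) -> int:
        # Replacements to make all even positions equal and all odd positions equal.
        cost = 0
        for par in range(2):
            col = t[par::2]
            if col:
                cost += len(col) - max(col.count(c) for c in set(col))
        return cost
    if len(s) % 2 == 0:
        return repl_cost(s)
    # Odd length: one deletion is mandatory; try every index.
    return 1 + min(repl_cost(s[:i] + s[i + 1:]) for i in range(len(s)))
```

Comparing this against the optimized counting on random strings is a quick correctness check.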
[ "brute force", "data structures", "dp", "greedy", "implementation", "strings" ]
1,500
t = int(input()) for _ in range(t): n = int(input()) s = input() res = len(s) if n % 2 == 0: v = [[0] * 26 for _ in range(2)] for i in range(n): v[i % 2][ord(s[i]) - ord('a')] += 1 for i in range(2): mx = max(v[i]) res -= mx print(res) else: pref = [[0] * 26 for _ in range(2)] suf = [[0] * 26 for _ in range(2)] for i in range(n - 1, -1, -1): suf[i % 2][ord(s[i]) - ord('a')] += 1 for i in range(n): suf[i % 2][ord(s[i]) - ord('a')] -= 1 ans = n for k in range(2): mx = 0 for j in range(26): mx = max(mx, suf[1 - k][j] + pref[k][j]) ans -= mx res = min(res, ans) pref[i % 2][ord(s[i]) - ord('a')] += 1 print(res)
2008
F
Sakurako's Box
Sakurako has a box with $n$ balls. Each ball has its value. She wants to bet with her friend that if the friend randomly picks two balls from the box (it could be two distinct balls, but they may have the same value), the product of their values will be the same as the number that Sakurako guessed. Since Sakurako has a PhD in probability, she knows that the best number to pick is the expected value, but she forgot how to calculate it. Help Sakurako and find the expected value of the product of two elements from the array. It can be shown that the expected value has the form $\frac{P}{Q}$, where $P$ and $Q$ are non-negative integers, and $Q \ne 0$. Report the value of $P \cdot Q^{-1}(\bmod 10^9+7)$.
By the statement, we need to find the value of the expression $\frac{\sum_{i=1}^{n}\sum_{j=i+1}^{n} a_i\cdot a_j}{\frac{n\cdot (n-1)}{2}}$. Let's find these two values separately. For the numerator, there are several ways. We can see that the sum equals $\sum_{i=1}^{n}a_i\cdot \left(\sum_{j=i+1}^{n}a_j\right)$ and compute it using suffix sums. Alternatively, it equals $\frac{\left(\sum_{i=1}^{n}a_i\right)^2-\sum_{i=1}^{n}a_i^2}{2}$. Note that for the second approach you need modular division, i.e. $2^{-1}\equiv 2^{p-2} \pmod p$ for a prime $p$. To compute $\frac{n\cdot (n-1)}{2}$, take $n\cdot (n-1)$ modulo $p$ and multiply by the modular inverse of $2$. Finally, divide the first value by the second, again using modular division.
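The second identity, in modular arithmetic, can be sketched as follows (`expected_product` is an illustrative name):

```python
MOD = 10**9 + 7

def expected_product(a):
    n = len(a)
    s = sum(a) % MOD
    sq = sum(x * x for x in a) % MOD
    inv2 = pow(2, MOD - 2, MOD)                # 2^{-1} = 2^{p-2} mod p
    num = (s * s - sq) % MOD * inv2 % MOD      # sum of a_i * a_j over i < j
    pairs = n * (n - 1) % MOD * inv2 % MOD     # n*(n-1)/2 mod p
    return num * pow(pairs, MOD - 2, MOD) % MOD
```

For $a = [3, 2, 3]$ the pairwise products are $6 + 9 + 6 = 21$ over $3$ pairs, so the expected value is $7$.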
[ "combinatorics", "math", "number theory" ]
1400
import sys; input = sys.stdin.readline for i in range(int(input())): n = int(input()) a = list(map(int, input().split())) ans = 0 s = 0 mod = int(1e9 + 7) for i in range(n): s += a[i] s %= mod for i in range(n): s -= a[i] ans = (ans + a[i] * s) % mod ans = (ans * pow(n * (n - 1) // 2, mod - 2, mod)) % mod print(ans)
2008
G
Sakurako's Task
Sakurako has prepared a task for you: She gives you an array of $n$ integers and allows you to choose $i$ and $j$ such that $i \neq j$ and $a_i \ge a_j$, and then assign $a_i = a_i - a_j$ or $a_i = a_i + a_j$. You can perform this operation any number of times for any $i$ and $j$, as long as they satisfy the conditions. Sakurako asks you what is the maximum possible value of $mex_k$$^{\text{∗}}$ of the array after any number of operations. \begin{footnotesize} $^{\text{∗}}$$mex_k$ is the $k$-th non-negative integer that is absent in the array. For example, $mex_1(\{1,2,3 \})=0$, since $0$ is the first element that is not in the array, and $mex_2(\{0,2,4 \})=3$, since $3$ is the second element that is not in the array. \end{footnotesize}
Let's look at the case $n=1$: we cannot change the value of the element, so the array stays unchanged. If $n>1$, let $g=\gcd(a_1,a_2,\dots,a_n)$. Using the operations in the statement, we can obtain exactly $0$ and the positive multiples of $g$, so the best array for this task is $a_i=(i-1)\cdot g$. Now we can find $mex_k$ in linear time: scan the array from left to right, keeping track of the previous element. If at least $k$ absent numbers fit strictly between the previous element and the current one, the answer lies in that gap and equals the previous element plus $k$; otherwise, subtract the size of the gap from $k$ and continue. If $k$ absent numbers still remain after the last element $a_n$, the answer is $a_n+k$.
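The linear scan over the constructed array $0, g, 2g, \dots$ can be sketched as follows (for the $n > 1$, $g > 0$ case; `mex_k_of_multiples` is an illustrative helper name):

```python
def mex_k_of_multiples(g: int, n: int, k: int) -> int:
    # k-th (1-indexed) non-negative integer absent from {0, g, ..., (n-1)*g}.
    last = -1
    for i in range(n):
        ai = i * g
        gap = ai - last - 1      # absent numbers strictly between last and ai
        if k <= gap:
            return last + k
        k -= gap
        last = ai
    return last + k              # everything beyond (n-1)*g is absent
```

For example, with $g=2$ and $n=3$ the set is $\{0, 2, 4\}$, so the absent numbers start $1, 3, 5, 6, \dots$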
[ "binary search", "greedy", "math", "number theory" ]
1800
import math t = int(input()) for _ in range(t): n, k = map(int, input().split()) a = list(map(int, input().split())) g = 0 mx = 0 for i in range(n): g = math.gcd(g, a[i]) mx = max(mx, a[i]) if g == 0: print(k) continue a.sort() q = -g if n != 1: for i in range(n): q += g a[i] = q a.append(10**16) lst = -1 for i in range(n + 1): if k <= a[i] - lst - 1: break k -= max(a[i] - lst - 1, 0) lst = a[i] print(lst + k)
2008
H
Sakurako's Test
Sakurako will soon take a test. The test can be described as an array of $n$ integers and a task on it: Given an integer $x$, Sakurako can perform the following operation any number of times: - Choose an integer $i$ ($1\le i\le n$) such that $a_i\ge x$; - Change the value of $a_i$ to $a_i-x$. Using this operation any number of times, she must find the minimum possible median$^{\text{∗}}$ of the array $a$. Sakurako knows the array but does not know the integer $x$. Someone let it slip that one of the $q$ values of $x$ will be in the next test, so Sakurako is asking you what the answer is for each such $x$. \begin{footnotesize} $^{\text{∗}}$The median of an array of length $n$ is the element that stands in the middle of the sorted array (at the $\frac{n+2}{2}$-th position for even $n$, and at the $\frac{n+1}{2}$-th for odd) \end{footnotesize}
Let's fix one $x$ and try to solve the task for it. In a $0$-indexed sorted array the median stands at position $\lfloor\frac{n}{2}\rfloor$, so to find the median we need the smallest value $m$ such that at least $\lfloor\frac{n}{2}\rfloor + 1$ elements of the array are less than or equal to $m$. It is clearly optimal to decrease every element as much as we can, since the smaller the elements, the smaller the median; so after all operations $a_i$ becomes $a_i \bmod x$. How do we count the indices $i$ with $a_i \bmod x \le m$ for some $m$? We can count the elements in the ranges $[k\cdot x, k\cdot x+m]$ over all $k$, since exactly these elements are at most $m$ after taking them modulo $x$. To count the elements in such a range, note that $a_i\le n$, so we can build a prefix sum of the counting array (let $pref[i]$ be the number of elements less than or equal to $i$); then the number of elements in a range $[a, b]$ is $pref[b]-pref[a-1]$. Also, since $a_i\le n$, $k$ goes up to $\frac{n}{x}$, so for a fixed $x$ (binary searching on $m$) the solution works in $\frac{n}{x}\cdot \log(n)$. Precompute it for all $x$ in $[1, n]$; then the total time is $\sum_{x=1}^{n}\frac{n}{x}\cdot \log(n)=\log(n)\cdot \sum_{x=1}^{n}\frac{n}{x} \stackrel{(*)}{=} \log(n)\cdot n\cdot \log(n)=n\cdot \log^2(n)$. $(*)$ This transition holds because $\sum_{x=1}^{n}\frac{1}{x}$ is a harmonic sum, so $\sum_{x=1}^{n}\frac{n}{x}=n\cdot \sum_{x=1}^{n}\frac{1}{x}\le n\cdot (\log(n)+1)$.
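The counting step (how many $a_i \bmod x \le m$) can be sketched with the prefix counting array (a sketch; in the real solution `pref` is built once, not per query, and $m < x$ is assumed):

```python
def count_mod_le(a, n, x, m):
    # pref[v] = number of elements <= v (values assumed to lie in [1, n]).
    pref = [0] * (n + 1)
    for v in a:
        pref[v] += 1
    for v in range(1, n + 1):
        pref[v] += pref[v - 1]
    # Elements with a_i mod x <= m lie in the ranges [k*x, k*x + m].
    total = pref[min(m, n)]                   # k = 0 range: [0, m]
    for k in range(1, n // x + 1):
        total += pref[min(k * x + m, n)] - pref[k * x - 1]
    return total
```

For $a = [1,2,3,4,5]$ and $x = 3$, the residues are $1, 2, 0, 1, 2$, so three of them are $\le 1$.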
[ "binary search", "brute force", "greedy", "math", "number theory" ]
2100
for _ in range(int(input())): n, m = map(int, input().split()) a = list(map(int, input().split())) c = [0] * (n + 1) for i in range(n): c[a[i]] += 1 for i in range(1, n + 1): c[i] += c[i - 1] res = [0] * (n + 1) for x in range(1, n + 1): l, r = 0, x while l < r: mid = (l + r) // 2 cnt = c[mid] for k in range(1, n // x + 1): cnt += c[min(k * x + mid, n)] - c[k * x - 1] if cnt - 1 >= n // 2: r = mid else: l = mid + 1 res[x] = l for _ in range(m): x = int(input()) print(res[x])
2009
A
Minimize!
You are given two integers $a$ and $b$ ($a \leq b$). Over all possible integer values of $c$ ($a \leq c \leq b$), find the minimum value of $(c - a) + (b - c)$.
We choose $c$ between $a$ and $b$ $(a \leq c \leq b)$. The distance is $(c - a) + (b - c) = b - a$. Note that the distance does not depend on the position of $c$ at all.
[ "brute force", "math" ]
800
for _ in range(int(input())): a,b = map(int,input().split()) print(b-a)
2009
B
osu!mania
You are playing your favorite rhythm game, osu!mania. The layout of your beatmap consists of $n$ rows and $4$ columns. Because notes at the bottom are closer, you will process the bottommost row first and the topmost row last. Each row will contain exactly one note, represented as a '#'. For each note $1, 2, \dots, n$, in the order of processing, output the column in which the note appears.
Implement the statement. Iterate from $n-1$ to $0$ and use the .find() method in std::string in C++ (or .index() in python) to find the '#' character.
[ "brute force", "implementation" ]
800
import sys
input = lambda: sys.stdin.readline().rstrip()
for i in range(int(input())):
    n = int(input())
    print(*reversed([input().index("#") + 1 for i in range(n)]))
2009
C
The Legend of Freya the Frog
Freya the Frog is traveling on the 2D coordinate plane. She is currently at point $(0,0)$ and wants to go to point $(x,y)$. In one move, she chooses an integer $d$ such that $0 \leq d \leq k$ and jumps $d$ spots forward in the direction she is facing. Initially, she is facing the positive $x$ direction. After every move, she will alternate between facing the positive $x$ direction and the positive $y$ direction (i.e., she will face the positive $y$ direction on her second move, the positive $x$ direction on her third move, and so on). What is the minimum amount of moves she must perform to land on point $(x,y)$?
Consider the $x$ and $y$ directions separately and calculate the jumps we need in each direction. The number of jumps we need in the $x$ direction is $\lceil \frac{x}{k} \rceil$ and similarly $\lceil \frac{y}{k} \rceil$ in the $y$ direction. Now let's try to combine them to obtain the total number of jumps. Consider the following cases: $\lceil \frac{y}{k} \rceil \geq \lceil \frac{x}{k} \rceil$. In this case, there will need to be $\lceil \frac{y}{k} \rceil - \lceil \frac{x}{k} \rceil$ extra jumps in the $y$ direction. While Freya performs these extra jumps, she will choose $d = 0$ for the $x$ direction. In total, there will need to be $2 \cdot \lceil \frac{y}{k} \rceil$ jumps. $\lceil \frac{x}{k} \rceil > \lceil \frac{y}{k} \rceil$. We can use the same reasoning as the previous case, but there's a catch. Since Freya is initially facing the $x$ direction, for the last jump, she does not need to jump in the $y$ direction. In total, there will need to be $2 \cdot \lceil \frac{x}{k} \rceil - 1$ jumps.
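The case analysis above collapses to a single formula; a minimal sketch (the function name is ours):

```python
def min_jumps(x, y, k):
    # jumps needed on each axis, rounded up
    jx = (x + k - 1) // k
    jy = (y + k - 1) // k
    # Freya faces the x direction first, so when jx > jy
    # the trailing y-jump is not needed
    return max(2 * jx - 1, 2 * jy)
```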
[ "implementation", "math" ]
1,100
for _ in range(int(input())):
    x, y, k = map(int, input().split())
    print(max(2 * ((x + k - 1) // k) - 1, 2 * ((y + k - 1) // k)))
2009
D
Satyam and Counting
Satyam is given $n$ distinct points on the 2D coordinate plane. \textbf{It is guaranteed that $0 \leq y_i \leq 1$ for all given points $(x_i, y_i)$.} How many different nondegenerate right triangles$^{\text{∗}}$ can be formed from choosing three different points as its vertices? Two triangles $a$ and $b$ are different if there is a point $v$ such that $v$ is a vertex of $a$ but not a vertex of $b$. \begin{footnotesize} $^{\text{∗}}$A nondegenerate right triangle has positive area and an interior $90^{\circ}$ angle. \end{footnotesize}
Initially, the obvious case one might first consider is an upright right triangle (specifically, a triangle with one of its sides parallel to the $y$-axis). This side can only be made with two points of the form $(x, 0)$ and $(x,1)$. We only need to find the third point. It turns out the third point can be any other unused vertex! If the third point has $y = 0$, then it will be an upright triangle, but if the third point has $y = 1$, it will simply be upside down. The other case is of the form $(x,0), (x+1,1), (x+2, 0)$. Let's see why this is a right triangle. Recall that in a right triangle, the sum of the squares of two of the sides must equal the square of the third side. The length between the first and the second point is $\sqrt 2$ because it is the diagonal of a $1$ by $1$ unit block. Similarly, the segment between the second and third point also has length $\sqrt 2$. Obviously, the length between the first and third point is $2$. Since $\sqrt 2^2 + \sqrt 2^2 = 2^2$, this is certainly a right triangle. Of course, we can flip the $y$ values of each point and it will still be a valid right triangle, just upside down.
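The Pythagorean claim for the second case can be checked with exact integer arithmetic (the coordinates below are illustrative):

```python
def sq_dist(p, q):
    # squared Euclidean distance, kept in integers to avoid float error
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

# points of the form (x,0), (x+1,1), (x+2,0) with x = 5
p, q, r = (5, 0), (6, 1), (7, 0)
legs = sq_dist(p, q) + sq_dist(q, r)   # (sqrt 2)^2 + (sqrt 2)^2 = 4
hyp = sq_dist(p, r)                    # 2^2 = 4
```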
[ "geometry", "math" ]
1,400
from collections import Counter
for _ in range(int(input())):
    n = int(input())
    nums = []
    for i in range(n):
        x, y = map(int, input().split())
        nums.append((x, y))
    ans = 0
    b = Counter(x[0] for x in nums)
    check = set(nums)
    for i in b:
        if b[i] == 2:
            ans += n - 2
    for p in check:
        if (p[0] - 1, p[1] ^ 1) in check and (p[0] + 1, p[1] ^ 1) in check:
            ans += 1
    print(ans)
2009
E
Klee's SUPER DUPER LARGE Array!!!
Klee has an array $a$ of length $n$ containing integers $[k, k+1, ..., k+n-1]$ in that order. Klee wants to choose an index $i$ ($1 \leq i \leq n$) such that $x = |a_1 + a_2 + \dots + a_i - a_{i+1} - \dots - a_n|$ is minimized. Note that for an arbitrary integer $z$, $|z|$ represents the absolute value of $z$. Output the minimum possible value of $x$.
We can rewrite $x$ as $|a_1+\dots+a_i-(a_{i+1}+\dots+a_n)|$. Essentially, we want to minimize the absolute difference between the sums of the prefix and the suffix. With absolute value problems, it's always good to consider the positive and negative cases separately: we handle the case where the prefix sum exceeds the suffix sum separately from the case where it is smaller. We can use binary search to find the greatest $i$ such that $a_1 + \dots + a_i \leq a_{i+1} + \dots + a_n$. Note that here, the positive difference is minimized. If we move to $i+1$, then the negative difference is minimized (since the sum of the prefix will now be greater than the sum of the suffix). The answer is the minimum absolute value of both cases. To evaluate $a_1 + \dots + a_i$ quickly, we can use the arithmetic series sum formula. Bonus: Solve in $\mathcal{O}(1)$.
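A sketch of the binary-search approach (names are ours; this assumes $k \ge 1$ so prefix sums are strictly increasing and the search condition is monotone):

```python
def min_abs_diff(n, k):
    # pre(i): sum of the first i terms of [k, k+1, ..., k+n-1]
    pre = lambda i: i * (2 * k + i - 1) // 2
    total = pre(n)
    # largest i in [1, n] with prefix sum <= suffix sum,
    # i.e. 2 * pre(i) <= total
    lo, hi = 1, n
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if 2 * pre(mid) <= total:
            lo = mid
        else:
            hi = mid - 1
    best = abs(2 * pre(lo) - total)
    if lo < n:
        best = min(best, abs(2 * pre(lo + 1) - total))
    return best
```

For $n=7$, $k=2$ the array is $[2,3,\dots,8]$ with total $35$; the split after $i=4$ gives $|28-35|=7$ and after $i=5$ gives $|40-35|=5$, so the answer is $5$.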
[ "binary search", "math", "ternary search" ]
1,400
import sys
input = sys.stdin.readline
from math import floor, sqrt
f = lambda x: (2 * x * x + x * (4 * k - 2) + (n - n * n - 2 * k * n)) // 2
t = int(input())
for _ in range(t):
    n, k = map(int, input().split())
    D = 4 * k * k + 4 * k * (n - 1) + (2 * n * n - 2 * n + 1)
    i = (floor(sqrt(D)) - (2 * k - 1)) // 2
    ans = min(abs(f(i)), abs(f(i + 1)))
    print(ans)
2009
F
Firefly's Queries
Firefly is given an array $a$ of length $n$. Let $c_i$ denote the $i$'th cyclic shift$^{\text{∗}}$ of $a$. She creates a new array $b$ such that $b = c_1 + c_2 + \dots + c_n$ where $+$ represents concatenation$^{\text{†}}$. Then, she asks you $q$ queries. For each query, output the sum of all elements in the subarray of $b$ that starts from the $l$-th element and ends at the $r$-th element, inclusive of both ends. \begin{footnotesize} $^{\text{∗}}$The $x$-th ($1 \leq x \leq n$) cyclic shift of the array $a$ is $a_x, a_{x+1} \ldots a_n, a_1, a_2 \ldots a_{x - 1}$. Note that the $1$-st shift is the initial $a$. $^{\text{†}}$The concatenation of two arrays $p$ and $q$ of length $n$ (in other words, $p + q$) is $p_1, p_2, ..., p_n, q_1, q_2, ..., q_n$. \end{footnotesize}
Let's duplicate the array $a$ and concatenate it with itself. Now, $a$ has length $2n$ and $a_i = a_{i-n}$ for all $n < i \leq 2n$. Now, the $j$'th element of the $i$'th rotation is $a_{i+j-1}$. It can be shown that any position $x$ of $b$ belongs to rotation $\lfloor \frac{x-1}{n} \rfloor + 1$, at position $(x-1) \bmod n + 1$ within it. Let $rl$ denote the rotation for $l$ and $rr$ denote the rotation for $r$. If $rr - rl > 1$, we add $rr-rl-1$ full arrays to our answer. The leftovers are the suffix of rotation $rl$ starting at position $l$ and the prefix of rotation $rr$ ending at position $r$. This can be done with prefix sums. You may need to handle $rl=rr$ separately.
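A Python sketch of the whole scheme (0-indexed internally; the function name is ours):

```python
def cyclic_sums(a, queries):
    n = len(a)
    d = a + a                      # doubled array: rotation i is d[i : i + n]
    ps = [0]
    for v in d:
        ps.append(ps[-1] + v)
    full = ps[n]                   # sum of one full rotation
    out = []
    for l, r in queries:           # 1-indexed, inclusive positions in b
        l -= 1; r -= 1
        i, j = l // n, r // n      # rotations containing l and r
        li, rj = l % n, r % n      # offsets inside those rotations
        # take rotations i..j in full, then cut off the part of rotation i
        # before l and the part of rotation j after r
        ans = full * (j - i + 1)
        ans -= ps[i + li] - ps[i]
        ans -= ps[j + n] - ps[j + rj + 1]
        out.append(ans)
    return out
```

For $a=[1,2,3]$, $b = [1,2,3,\,2,3,1,\,3,1,2]$, so the query $(2,4)$ sums $2+3+2=7$.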
[ "bitmasks", "data structures", "flows", "math" ]
1,700
#include <bits/stdc++.h> using namespace std; #define ll long long int main() { int t; cin >> t; while (t--) { ll n, q; cin >> n >> q; vector<ll> a(n), ps(1); for (ll &r : a) { cin >> r; ps.push_back(ps.back() + r); } for (ll &r : a) { ps.push_back(ps.back() + r); } while (q--) { ll l, r; cin >> l >> r; l--; r--; ll i = l / n, j = r / n; l %= n; r %= n; cout << ps[n] * (j - i + 1) - (ps[i + l] - ps[i]) - (ps[j + n] - ps[j + r + 1]) << "\n"; } } }
2009
G1
Yunli's Subarray Queries (easy version)
\textbf{This is the easy version of the problem. In this version, it is guaranteed that $r=l+k-1$ for all queries.} For an arbitrary array $b$, Yunli can perform the following operation any number of times: - Select an index $i$. Set $b_i = x$ where $x$ is any integer she desires ($x$ is not limited to the interval $[1,n]$). Denote $f(b)$ as the minimum number of operations she needs to perform until there exists a consecutive subarray$^{\text{∗}}$ of length at least $k$ in $b$. Yunli is given an array $a$ of size $n$ and asks you $q$ queries. In each query, you must output $\sum_{j=l+k-1}^{r} f([a_l, a_{l+1}, \ldots, a_j])$. Note that in this version, you are only required to output $f([a_l, a_{l+1}, \ldots, a_{l+k-1}])$. \begin{footnotesize} $^{\text{∗}}$If there exists a consecutive subarray of length $k$ that starts at index $i$ ($1 \leq i \leq |b|-k+1$), then $b_j = b_{j-1} + 1$ for all $i < j \leq i+k-1$. \end{footnotesize}
We first build the sequence $b_i=a_i-i$ for all $i$. Now, if $b_i=b_j$, then positions $i$ and $j$ are already in the correct relative order (they can belong to the same consecutive run). To solve the problem, we precompute the answer for every window of length $k$, and then each query is a lookup. We use a sliding window, maintaining a multiset of frequencies of values of $b$ in the current window: the answer for a window is $k$ minus the largest frequency. To move from the window $[i \ldots i+k-1]$ to $[i+1 \ldots i+k]$, we lower the frequency of $b_i$ by $1$, and increase the frequency of $b_{i+k}$ by $1$.
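A Python sketch of this sliding window, using a frequency-of-frequencies table in place of a multiset (all names here are ours):

```python
from collections import defaultdict

def window_costs(a, k):
    # f for every length-k window: k minus the largest multiplicity of
    # b_i = a_i - i inside the window.
    n = len(a)
    b = [a[i] - i for i in range(n)]
    freq = defaultdict(int)     # value -> multiplicity in window
    cnt_of = defaultdict(int)   # multiplicity -> how many values have it
    maxc = 0                    # current largest multiplicity
    def add(v):
        nonlocal maxc
        c = freq[v]
        if c:
            cnt_of[c] -= 1
        freq[v] = c + 1
        cnt_of[c + 1] += 1
        maxc = max(maxc, c + 1)
    def remove(v):
        nonlocal maxc
        c = freq[v]
        cnt_of[c] -= 1
        if c == maxc and cnt_of[c] == 0:
            maxc -= 1           # the max multiplicity drops by at most 1
        freq[v] = c - 1
        if c > 1:
            cnt_of[c - 1] += 1
    res = []
    for i in range(n):
        add(b[i])
        if i >= k:
            remove(b[i - k])
        if i >= k - 1:
            res.append(k - maxc)
    return res
```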
[ "binary search", "data structures", "two pointers" ]
1,900
#include "bits/stdc++.h" #pragma GCC optimize("O3,unroll-loops") #pragma GCC target("avx,avx2,sse,sse2") #define fast ios_base::sync_with_stdio(0) , cin.tie(0) , cout.tie(0) #define endl '\n' #define int long long #define f first #define mp make_pair #define s second using namespace std; void solve(){ int n, k, q; cin >> n >> k >> q; int a[n + 1]; for(int i = 1; i <= n; i++) cin >> a[i]; map <int,int> m; multiset <int> tot; for(int i = 1; i <= n; i++) tot.insert(0); for(int i = 1; i < k; i++){ tot.erase(tot.find(m[a[i] - i])); m[a[i] - i]++; tot.insert(m[a[i] - i]); } int ret[n + 1]; for(int i = k; i <= n; i++){ tot.erase(tot.find(m[a[i] - i])); m[a[i] - i]++; tot.insert(m[a[i] - i]); int p = i - k + 1; ret[p] = k - *tot.rbegin(); tot.erase(tot.find(m[a[p] - p])); m[a[p] - p]--; tot.insert(m[a[p] - p]); } while(q--){ int l, r ; cin >> l >> r; cout << ret[l] << endl; } tot.clear(); m.clear(); } signed main() { fast; int t; cin >> t; while(t--){ solve(); } }
2009
G2
Yunli's Subarray Queries (hard version)
\textbf{This is the hard version of the problem. In this version, it is guaranteed that $r \geq l+k-1$ for all queries.} For an arbitrary array $b$, Yunli can perform the following operation any number of times: - Select an index $i$. Set $b_i = x$ where $x$ is any integer she desires ($x$ is not limited to the interval $[1,n]$). Denote $f(b)$ as the minimum number of operations she needs to perform until there exists a consecutive subarray$^{\text{∗}}$ of length at least $k$ in $b$. Yunli is given an array $a$ of size $n$ and asks you $q$ queries. In each query, you must output $\sum_{j=l+k-1}^{r} f([a_l, a_{l+1}, \ldots, a_j])$. \begin{footnotesize} $^{\text{∗}}$If there exists a consecutive subarray of length $k$ that starts at index $i$ ($1 \leq i \leq |b|-k+1$), then $b_j = b_{j-1} + 1$ for all $i < j \leq i+k-1$. \end{footnotesize}
First, read the solution to the easy version of the problem to compute the answer for every window of $k$. Let $c_i=f([a_i, ..., a_{i+k-1}])$. Now, the problem simplifies to finding $\sum_{j=l}^{r-k+1} \left( \min_{i=l}^{j} c_i \right)$. We will answer the queries offline in decreasing order of $l$. We maintain a lazy segment tree. We have a variable $x$ sweeping from $n-k$ to $0$. As the variable sweeps leftwards, in node $i$ of the segment tree, we keep track of $\min_{j=x}^{i}c_j$. To decrease the value of $x$, we note that the range $[x-1, y]$ in the segment tree will be set to $c_{x-1}$, where $y$ is the largest value such that $c_y>c_{x-1}$ but $c_{y+1}\leq c_{x-1}$ (or $y=n-1$). To find the $y$ for each $x$, we may either walk/binary search in the segment tree, or use a monotonic stack. Let $p_i$ be the smallest value $j>i$ such that $c_j<c_i$. We can calculate these values using a monotonic stack, iterating through $c$ backwards. If no such $j$ exists, we let $p_i=n$. Then $f(h,i)=c_i$ for all $i\le h-k+1<p_i$. Further, $f(h,i)=c_{p_i}$ for $p_i\le h-k+1<p_{p_i}$, and so on. Now, let $w(0,i)=i$, and $w(h,i)=p_{w(h-1,i)}$ for $h>0$. To calculate the answer for a query $(l,r)$, consider the largest value of $j$ such that $w(j,l)\le r$. Then we can take the sum $c_{w(j,l)}\cdot(r-w(j,l)+1)+\sum_{i=1}^jc_{w(i-1,l)}\cdot(w(i,l)-w(i-1,l))$. Now it remains to quickly calculate this sum. We can use binary lifting to solve this. Specifically, we create an $n \times 20$ table where $d[i][j]=\sum_{h=1}^{2^j}c_{w(h-1,i)}\cdot(w(h,i)-w(h-1,i))$ if $w(2^j,i)$ exists, and $-1$ otherwise. We can precompute this table recursively, as $d[i][0]=c_i\cdot(w(1,i)-i)$ and $d[i][j]=d[i][j-1]+d[w(2^{j-1},i)][j-1]$. Then, to answer queries, we iterate $j$ from $19$ to $0$, and if $w(2^j,l)\le r$, we add $d[l][j]$ to our answer and set $l=w(2^j,l)$. At the end, we add $c_l\cdot(r-l+1)$ to our answer.
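The $p_i$ array used above can be computed with a monotonic stack scanned from the right; a minimal sketch (the function name is ours):

```python
def next_smaller(c):
    # p[i] = smallest j > i with c[j] < c[i], or n if no such j exists
    n = len(c)
    p = [n] * n
    st = []  # indices with strictly increasing values, nearest on top
    for i in range(n - 1, -1, -1):
        while st and c[st[-1]] >= c[i]:
            st.pop()
        if st:
            p[i] = st[-1]
        st.append(i)
    return p
```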
[ "binary search", "data structures", "dp" ]
2,200
#include <bits/stdc++.h> #define int long long #define pii pair<int, int> #define fi first #define se second using namespace std; void solve() { int n, k, q; cin >> n >> k >> q; vector<int> a(n); for (int &r : a) cin >> r; vector<int> c(3 * n), v(n); multiset<int> s; for (int i = 0; i < k; i++) c[a[i] - i + n - 1]++; for (int r : c) s.insert(r); v[k - 1] = k - *s.rbegin(); for (int i = k; i < n; i++) { int x = a[i] - i + n - 1, y = a[i - k] - i + k + n - 1; c[x]++; s.erase(s.find(c[x] - 1)); s.insert(c[x]); c[y]--; s.erase(s.find(c[y] + 1)); s.insert(c[y]); v[i] = k - *s.rbegin(); } vector<int> l(n, -1); stack<pii> t; for (int i = k - 1; i < n; i++) { while (!t.empty()) { if (t.top().se <= v[i]) break; l[t.top().fi] = i; t.pop(); } t.push({i, v[i]}); } vector<vector<pii>> w(n, vector<pii>(20, {-1, -1})); for (int i = n - 1; i >= k - 1; i--) { w[i][0].se = l[i]; if (l[i] < 0) { w[i][0].fi = (n - i) * v[i]; continue; } w[i][0].fi = (l[i] - i) * v[i]; for (int j = 1; j < 20; j++) { if (w[w[i][j - 1].se][j - 1].se < 0) break; w[i][j].se = w[w[i][j - 1].se][j - 1].se; w[i][j].fi = w[i][j - 1].fi + w[w[i][j - 1].se][j - 1].fi; } } while (q--) { int l, r, ans = 0; cin >> l >> r; l--; r--; l += k - 1; for (int j = 19; ~j; j--) { if (w[l][j].se < 0) continue; if (w[l][j].se > r) continue; ans += w[l][j].fi; l = w[l][j].se; } cout << ans + v[l] * (r - l + 1) << "\n"; } } int32_t main() { ios::sync_with_stdio(0); cin.tie(0); int t; cin >> t; while (t--) solve(); }
2009
G3
Yunli's Subarray Queries (extreme version)
\textbf{This is the extreme version of the problem. In this version, the output of each query is different from the easy and hard versions. It is also guaranteed that $r \geq l+k-1$ for all queries.} For an arbitrary array $b$, Yunli can perform the following operation any number of times: - Select an index $i$. Set $b_i = x$ where $x$ is any integer she desires ($x$ is not limited to the interval $[1,n]$). Denote $f(b)$ as the minimum number of operations she needs to perform until there exists a consecutive subarray$^{\text{∗}}$ of length at least $k$ in $b$. Yunli is given an array $a$ of size $n$ and asks you $q$ queries. In each query, you must output $\sum_{i=l}^{r-k+1} \sum_{j=i+k-1}^{r} f([a_i, a_{i+1}, \ldots, a_j])$. \begin{footnotesize} $^{\text{∗}}$If there exists a consecutive subarray of length $k$ that starts at index $i$ ($1 \leq i \leq |b|-k+1$), then $b_j = b_{j-1} + 1$ for all $i < j \leq i+k-1$. \end{footnotesize}
I decided to write the editorial for this problem in a step-by-step manner. Some of the steps are really short and meant to be used as hints, but I decided to have a uniform naming for everything. Continuing from the easier versions of the problem, we know we need to compute the sum of minimums over subarrays, and answer subarray queries on this. Consider the standard approach of finding the sum of minimums over all subarrays. The sum of minimums of all subarrays of a fixed array is a well-known problem. Here is how you can solve it, given an array $a$ of length $n$. Let $nx_i$ denote the smallest integer $j$ ($j > i$) such that $a_j < a_i$ holds, or $n + 1$ if no such integer exists. Similarly, let $pv_i$ denote the largest integer $j$ ($j < i$) such that $a_j \le a_i$ holds, or $0$ if no such integer exists. The answer is simply $\sum_{i = 1}^{n} a_i \cdot (nx_i - i) \cdot (i - pv_i)$. Calculating $nx_i$ and $pv_i$ can be done with a monotonic stack. Given a query $(L, R)$, divide all indices $i$ ($L \le i \le R$) into $4$ groups depending on the existence of $nx_i$ and $pv_i$ within the interval $[L, R]$, i.e.: Case $1$: $L \le pv_i, nx_i \le R$. Case $2$: $pv_i < L, nx_i \le R$. Case $3$: $L \le pv_i, nx_i > R$. Case $4$: $pv_i < L, nx_i > R$. Try to calculate the contributions of each of these categories separately. Case $1$ can be reduced to rectangle queries. Case $4$ is simple to handle as there is at most $1$ element which satisfies that condition, which (if it exists) is the minimum element in the range $(L, R)$ and can be found using any RMQ data structure like a Sparse Table or Segment Tree. Given a list of $n$ tuples $(l, r, v)$ and $q$ queries $(L, R)$, you have to add $v$ to the answer of a query if $L \le l \le r \le R$. This can be solved in $O((n + q) \log(n))$ using a Fenwick Tree and sweepline. Iterate from $i = n$ to $i = 1$. 
For every tuple with left end at $i$, say $(i, j, v)$, add $v$ to a range-sum data structure at position $j$. Then, for every query with left end at $i$, we can simply query the range sum from $l$ to $r$ to get the required answer. For every index $i$ from $1$ to $n$, generate a tuple $(pv_i, nx_i, a_i \cdot (i - pv_i) \cdot (nx_i - i))$. Then, solve the Rectangle Queries problem with this list of tuples. The answer will be the required contribution of all indices belonging to Case $1$. This leaves us with Cases $2$ and $3$, which are symmetric, so we discuss only Case $2$. Let us sweepline from $i = n$ to $i = 1$, maintaining a monotonic stack of elements, popping elements when we find a smaller element, similar to how we find $pv_i$. The indices belonging to Case $2$ are precisely the elements present in the monotonic stack (obviously ignore any element $> R$) when we have swept till $i = L$, with the possible exception of the minimum in the range $[L, R]$ (that might belong to Case $4$). Let's analyze the contribution of the indices in Case $2$. It is $a_i \cdot (i - L + 1) \cdot (nx_i - i)$. Take a look at what happens when we go from $L$ to $L - 1$, and how all the contributions of elements belonging to Case $2$ change. Some elements get popped from the monotonic stack because $a_{L - 1}$ is smaller than them. We need to reset the contribution of all these elements to $0$. The elements that do not get popped have their contribution increased by exactly $(nx_i - i)$. One element gets added to the monotonic stack, which is $a_{L - 1}$, so we need to initialize its contribution to $(nx_{L - 1} - (L - 1))$. Resetting and initializing contributions is simple enough with most data structures, so let us focus on adding $(nx_i - i)$ to the elements present in the monotonic stack. We can keep a lazy segment tree with $2$ parameters, $sumcon=$ sum of all contributions in this segment tree node, and $addcon =$ sum of $(nx_i - i)$ of all "non-popped" elements in this node. 
The lazy tag will denote how many contribution increases we still have to apply. We can simply do $sumcon \mathrel{+}= lazy \cdot addcon$ for the lazy updates. Then, we can query the range sum from $L$ to $R$ to get the sum of contributions of all elements belonging to Case $2$. Case $3$ can be solved in a symmetric way. Adding up the answers over Cases $1$, $2$, $3$ and $4$ gives us the required answer. We need to be quite careful with the Case $4$ element, as we might double count its contribution in Cases $2$ and $3$. I handle this in the model solution by querying the sum of contributions in $[L, X - 1]$, where $X$ is the largest index present in the monotonic stack that is $\le R$, and handling $X$ separately. You can easily note that $X$ is the only element belonging to Case $4$ (if any at all).
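The classic sum-of-subarray-minimums computation described in the first step can be sketched as follows (0-indexed, with $nx_i = n$ and $pv_i = -1$ as sentinels; the function name is ours):

```python
def sum_of_mins(a):
    # nx[i]: next index with a strictly smaller value (strict on the right,
    # non-strict on the left, so equal minima are not double counted)
    n = len(a)
    nx = [n] * n
    pv = [-1] * n
    st = []
    for i in range(n):
        while st and a[st[-1]] > a[i]:
            nx[st.pop()] = i
        st.append(i)
    st = []
    for i in range(n - 1, -1, -1):
        while st and a[st[-1]] >= a[i]:
            pv[st.pop()] = i
        st.append(i)
    # each a[i] is the minimum of (i - pv[i]) * (nx[i] - i) subarrays
    return sum(a[i] * (nx[i] - i) * (i - pv[i]) for i in range(n))
```

For $a=[3,1,2]$ the subarray minimums are $3,1,2,1,1,1$, which sum to $9$.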
[ "data structures", "dp", "implementation" ]
2,700
#include <bits/stdc++.h> #define int long long #define ll long long #define pii pair<int,int> #define piii pair<pii,pii> #define fi first #define se second #pragma GCC optimize("O3,unroll-loops") #pragma GCC target("avx2,bmi,bmi2,lzcnt,popcnt") using namespace std; struct segtree { const static int INF = 1e18, INF2 = 0; int l, r; segtree* lc, * rc; int v = INF, v2=0, v3=0, v4=0; segtree* getmem(); segtree() : segtree(-1, -1) {}; segtree(int l, int r) : l(l), r(r) { if (l == r) return; int m = (l + r) / 2; lc = getmem(); *lc = segtree(l, m); rc = getmem(); *rc = segtree(m + 1, r); } int op(int a, int b) { return min(a,b); } int op2(int a, int b) { return a+b; } void add(int qi, int qv, int h, int h4=0) { if (r < qi || l > qi) return; if (l == r) { if (v==INF) v=qv; v2+=qv, v3+=qv*h; v4+=qv*h4; return;} lc->add(qi, qv, h, h4); rc->add(qi, qv, h, h4); v = op(lc->v, rc->v); v2 = op2(lc->v2, rc->v2); v3 = op2(lc->v3, rc->v3); v4 = op2(lc->v4, rc->v4); } int qrr(int ql, int qx) { if (v>=qx||r<ql) return 1e9; if (l==r) return l; int k=lc->qrr(ql,qx); if (k<1e9) return k; return rc->qrr(ql,qx); } int q2(int ql, int qr) { if (l > qr || r < ql) return INF2; if (ql <= l && r <= qr) return v2; return op2(lc->q2(ql, qr), rc->q2(ql, qr)); } int q3(int ql, int qr) { if (l > qr || r < ql) return INF2; if (ql <= l && r <= qr) return v3; return op2(lc->q3(ql, qr), rc->q3(ql, qr)); } int q4(int ql, int qr) { if (l > qr || r < ql) return INF2; if (ql <= l && r <= qr) return v4; return op2(lc->q4(ql, qr), rc->q4(ql, qr)); } }; segtree mem[2000005];int memsz = 0; segtree* segtree::getmem() { return &mem[memsz++]; } void solve() { int n, k, q; cin >> n >> k >> q; vector<int> a(n); for (int &r : a) cin >> r; vector<int> c(2 * n), v(n); multiset<int> s; for (int i = 0; i < k; i++) c[a[i] - i + n - 1]++; for (int r : c) s.insert(r); v[k - 1] = k - *s.rbegin(); for (int i = k; i < n; i++) { int x = a[i] - i + n - 1, y = a[i - k] - i + k + n - 1; c[x]++; s.erase(s.find(c[x] - 1)); 
s.insert(c[x]); c[y]--; s.erase(s.find(c[y] + 1)); s.insert(c[y]); v[i] = k - *s.rbegin(); } vector<int> ans(q); vector<vector<pii>> w(n); segtree co(0, n+2), lb(0, n+2), e(0,n+2),e2(0,n+2); vector<int> rb(n); stack<int> t; for (int i = 0; i < q; i++) { int l,r; cin >> l >> r; w[l+k-2].push_back({i,r-1}); } for (int i = n-1; ~i; i--) { e.add(i,v[i],i,i*i); int j=min(n,e.qrr(i,v[i])); rb[i]=j; e.add(j-1,-v[i],i,i*i); e2.add(j,v[i]*(j-i),i); while (!t.empty()) { int x=t.top(); if (v[x]<v[i]) break; t.pop(); co.add(rb[x],v[x]*(rb[x]-x)*(x-i),0); lb.add(x,v[x]*(x-i),x); lb.add(rb[x]-1,-v[x]*(x-i),x); e.add(x,-v[x],x,x*x); e.add(rb[x]-1,v[x],x,x*x); e2.add(rb[x],-v[x]*(rb[x]-x),x); } t.push(i); int l=i; for (auto [p,r]:w[i]) { int x=e.q2(l,r), y=e.q3(l,r), z=e.q4(l,r); int f=y*(r+1)-z, g=x*(r+1)-y; int lx=lb.q2(l,r), ly=lb.q3(l,r); ans[p]=co.q2(l,r+1)+lx*(r+1)-ly+e2.q3(l,r+1)-e2.q2(l,r+1)*(i-1)+f-g*(i-1); } } for (int r:ans) cout << r << "\n"; } int32_t main() { ios::sync_with_stdio(0); cin.tie(0); int t = 1; cin >> t; while (t--) solve(); }
2013
A
Zhan's Blender
Today, a club fair was held at "NSPhM". In order to advertise his pastry club, Zhan decided to demonstrate the power of his blender. To demonstrate the power of his blender, Zhan has $n$ fruits. The blender can mix up to $x$ fruits per second. In each second, Zhan can put up to $y$ fruits into the blender. After that, the blender will blend $\min(x, c)$ fruits, where $c$ is the number of fruits inside the blender. After blending, blended fruits are removed from the blender. Help Zhan determine the minimum amount of time required for Zhan to blend all fruits.
Let's consider two cases: If $x \geq y$. In this case, the blender will mix $\min(y, c)$ fruits every second (where $c$ is the number of unmixed fruits). Therefore, the answer will be $\lceil \frac{n}{y} \rceil$. If $x < y$. Here, the blender will mix $\min(x, c)$ fruits every second. In this case, the answer will be $\lceil \frac{n}{x} \rceil$, similarly. Thus, the final answer is $\lceil \frac{n}{\min(x, y)} \rceil$.
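The two cases combine into one formula; a minimal sketch (the function name is ours):

```python
def blend_time(n, x, y):
    # throughput is limited by the smaller of capacity x and feed rate y,
    # so the answer is ceil(n / min(x, y))
    m = min(x, y)
    return (n + m - 1) // m
```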
[ "constructive algorithms", "math" ]
800
#include <iostream> using namespace std; int main(){ int t = 1; cin >> t; while(t--){ int n, x, y; cin >> n >> x >> y; x = min(x, y); cout << (n + x - 1) / x << endl; } }
2013
B
Battle for Survive
Eralim, being the mafia boss, manages a group of $n$ fighters. Fighter $i$ has a rating of $a_i$. Eralim arranges a tournament of $n - 1$ battles, in each of which two not yet eliminated fighters $i$ and $j$ (\textbf{$1 \le i < j \le n$}) are chosen, and as a result of the battle, fighter $i$ is eliminated from the tournament, and the rating of fighter $j$ is reduced by the rating of fighter $i$. That is, $a_j$ is decreased by $a_i$. Note that fighter $j$'s rating can become negative. The fighters indexes do not change. Eralim wants to know what maximum rating the last remaining fighter can preserve if he chooses the battles optimally.
It can be noted that the rating of fighter $n-1$ always enters the final result with a negative sign. Therefore, we can first subtract the sum $a_1 + a_2 + \ldots + a_{n-2}$ from $a_{n-1}$ (by having fighters $1, \ldots, n-2$ lose to fighter $n-1$), and then subtract $a_{n-1}$ from $a_n$. Thus, the final value will be $a_1 + a_2 + \ldots + a_{n-2} - a_{n-1} + a_n$. This value cannot be exceeded because the contribution of $a_{n-1}$ is always negative.
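The closed form above equals the total sum minus twice $a_{n-1}$; a minimal sketch (the function name is ours):

```python
def max_final_rating(a):
    # a[0] + ... + a[n-3] - a[n-2] + a[n-1]  ==  sum(a) - 2 * a[n-2]
    return sum(a) - 2 * a[-2]
```

For $[2, 2, 8]$: fighter $1$ loses to fighter $2$ (rating $0$), then fighter $2$ loses to fighter $3$, leaving $8 - 0 = 8$.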
[ "constructive algorithms", "greedy", "math" ]
900
#include <bits/stdc++.h> using namespace std; typedef long long ll; const int mod = 1e9 + 7; void solve(){ int n; cin >> n; ll ans = 0; vector<int> a(n); for(int i=0;i<n;i++){ cin >> a[i]; ans += a[i]; } cout << ans - 2 * a[n - 2] << '\n'; } int main(){ ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int t; cin >> t; while(t--){ solve(); } }
2013
C
Password Cracking
Dimash learned that Mansur wrote something very unpleasant about him to a friend, so he decided to find out his password at all costs and discover what exactly he wrote. Believing in the strength of his password, Mansur stated that his password — is a binary string of length $n$. He is also ready to answer Dimash's questions of the following type: Dimash says a binary string $t$, and Mansur replies whether it is true that $t$ is a substring of his password. Help Dimash find out the password in no more than $2n$ operations; otherwise, Mansur will understand the trick and stop communicating with him.
We will maintain a string $t$, initially empty, such that $t$ appears as a substring of $s$. We grow $t$ by one character at a time while its length is less than $n$, performing up to $n$ iterations. In each iteration, we check the strings $t + 0$ and $t + 1$. If one of them appears in $s$ as a substring, we append the corresponding character to $t$ and proceed to the next iteration. If neither of the two strings appears in $s$, it means that $t$ is a suffix of $s$. From then on, we check the string $0 + t$: if it appears in $s$, we prepend $0$ to $t$; otherwise, we prepend $1$ without asking. Thus, each iteration performs $2$ queries, except for one iteration in which we perform $3$ queries. However, after this iteration, we make only $1$ query per character, so the total number of queries will not exceed $2 \cdot n$.
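An offline simulation of the interactive strategy, where `ask(t)` answers whether $t$ is a substring of the hidden password (all names here are ours):

```python
def crack(s):
    n = len(s)
    queries = 0
    def ask(t):
        nonlocal queries
        queries += 1
        return t in s
    cur = ""
    # phase 1: extend to the right while possible
    while len(cur) < n:
        if ask(cur + "0"):
            cur += "0"
        elif ask(cur + "1"):
            cur += "1"
        else:
            break  # cur is now a suffix of s
    # phase 2: extend to the left, one query per character
    while len(cur) < n:
        cur = ("0" if ask("0" + cur) else "1") + cur
    return cur, queries
```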
[ "constructive algorithms", "interactive", "strings" ]
1,400
#include <iostream> #include <vector> #include <string> #include <array> using namespace std; bool ask(string t) { cout << "? " << t << endl; int res; cin >> res; return res; } void result(string s) { cout << "! " << s << endl; } void solve() { int n; cin >> n; string cur; while (cur.size() < n) { if (ask(cur + "0")) { cur += "0"; } else if (ask(cur + "1")) { cur += "1"; } else { break; } } while ((int) cur.size() < n) { if (ask("0" + cur)) { cur = "0" + cur; } else{ cur = "1" + cur; } } result(cur); } int main() { int t; cin >> t; while (t--) solve(); }
2013
D
Minimize the Difference
Zhan, tired after the contest, gave the only task that he did not solve during the contest to his friend, Sungat. However, he could not solve it either, so we ask you to try to solve this problem. You are given an array $a_1, a_2, \ldots, a_n$ of length $n$. We can perform any number (possibly, zero) of operations on the array. In one operation, we choose a position $i$ ($1 \leq i \leq n - 1$) and perform the following action: - $a_i := a_i - 1$, and $a_{i+1} := a_{i+1} + 1$. Find the minimum possible value of $\max(a_1, a_2, \ldots, a_n) - \min(a_1, a_2, \ldots, a_n)$.
First statement: if $a_i > a_{i+1}$, then it is always beneficial to perform an operation at position $i$. Therefore, the final array will be non-decreasing. Second statement: if the array is non-decreasing, then performing operations is not advantageous. We will maintain a stack that holds the array in sorted (non-decreasing) form. Each element of the stack is a pair $(x, cnt)$, where $x$ is a value and $cnt$ is the number of its occurrences. When adding $a_i$ to the stack, we keep track of the sum $sum$ of the removed elements and their count $cnt$. Initially, $sum = a_i$ and $cnt = 1$. We remove the last element from the stack while its value is at least $\lfloor \frac{sum}{cnt} \rfloor$, recalculating $sum$ and $cnt$ after each removal. Then we push the pairs $\left( \lfloor \frac{sum}{cnt} \rfloor,\; cnt - (sum \bmod cnt) \right)$ and $\left( \lfloor \frac{sum}{cnt} \rfloor + 1,\; sum \bmod cnt \right)$ onto the stack (the second pair only if $sum \bmod cnt \neq 0$). The time complexity of the algorithm is $O(n)$, since on each iteration no more than $2$ elements are added to the stack, and each element is removed at most once.
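A Python sketch of the stack algorithm (names are ours; it mirrors the C++ solution below):

```python
def min_diff(a):
    # stack of (value, count) blocks, non-decreasing by value; a new element
    # is merged with earlier blocks while their value reaches the running mean
    st = []
    for v in a:
        s, c = v, 1
        while st and st[-1][0] >= s // c:
            x, k = st.pop()
            s += x * k
            c += k
        q, r = divmod(s, c)
        st.append((q, c - r))
        if r:
            st.append((q + 1, r))
    return st[-1][0] - st[0][0]
```

For $[4,1,2,3]$ one operation at position $1$ gives $[3,2,2,3]$, so the answer is $1$.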
[ "binary search", "greedy" ]
1,900
#include <bits/stdc++.h> using namespace std; typedef long long ll; ll a[200200]; int n; void solve(){ cin >> n; for(int i=1;i<=n;i++){ cin >> a[i]; } stack<pair<ll, int>> s; for(int i=1;i<=n;i++){ ll sum = a[i], cnt = 1; while(s.size() && s.top().first >= sum / cnt){ sum += s.top().first * s.top().second; cnt += s.top().second; s.pop(); } s.push({sum / cnt, cnt - sum % cnt}); if(sum % cnt != 0){ s.push({sum / cnt + 1, sum % cnt}); } } ll mx = s.top().first; while(s.size() > 1){ s.pop(); } cout << mx - s.top().first << '\n'; } int main(){ ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int t = 1; cin >> t; while(t--){ solve(); } }
2013
E
Prefix GCD
Since Mansur is tired of making legends, there will be no legends for this task. You are given an array of positive integer numbers $a_1, a_2, \ldots, a_n$. The elements of the array can be rearranged in any order. You need to find the smallest possible value of the expression $$\gcd(a_1) + \gcd(a_1, a_2) + \ldots + \gcd(a_1, a_2, \ldots, a_n),$$ where $\gcd(a_1, a_2, \ldots, a_n)$ denotes the greatest common divisor (GCD) of $a_1, a_2, \ldots, a_n$.
Let $g$ be the greatest common divisor ($\gcd$) of the entire array $a$. We divide each element $a_i$ by $g$, and at the end simply multiply the result by $g$. Now, consider the following greedy algorithm. We start with an initially empty array $b$ and repeatedly append the element that minimizes the $\gcd$ with the elements already in $b$. It can be observed that the $\gcd$ will reach $1$ in at most $10$ iterations. After that, the remaining elements can be added in any order, each contributing exactly $1$. Why is the greedy correct? Let $A$ be the minimum possible $\gcd$ for the current prefix of array $b$, and let $B$ be the optimal answer such that $A < B$. In this case, we can first place $A$, and then write the sequence $B$ in the same order. The answer will not worsen, since $A + \gcd(A, B) \leq B$. Total time complexity: $O(n \cdot 10)$.
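A Python sketch of the greedy (the function name is ours; like the C++ solution below, it does not bother marking elements as used, since re-taking a chosen element cannot lower the current $\gcd$):

```python
from math import gcd

def min_prefix_gcd_sum(a):
    n = len(a)
    g = 0
    for v in a:
        g = gcd(g, v)
    a = [v // g for v in a]     # normalize so the overall gcd is 1
    ans, cur = 0, 0
    for t in range(n):
        # append the element minimizing the new prefix gcd
        cur = min(gcd(cur, v) for v in a)
        ans += cur
        if cur == 1:
            ans += n - t - 1    # all remaining prefixes contribute 1
            break
    return ans * g
```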
[ "brute force", "dp", "greedy", "math", "number theory" ]
2,200
#include <bits/stdc++.h> using namespace std; int a[200200]; int n; void solve(){ cin >> n; int g = 0, cur = 0; long long ans = 0; for(int i=0;i<n;i++){ cin >> a[i]; g = __gcd(g, a[i]); } for(int i=0;i<n;i++){ a[i] /= g; } for(int t=0;t<n;t++){ int nc = 1e9; for(int i=0;i<n;i++){ nc = min(nc, __gcd(cur, a[i])); } cur = nc; ans += cur; if(cur == 1) { ans += n - t - 1; break; } } cout << ans * g << '\n'; } int main(){ ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int t = 1; cin >> t; while(t--){ solve(); } }
2013
F1
Game in Tree (Easy Version)
\textbf{This is the easy version of the problem. In this version, $\mathbf{u = v}$. You can make hacks only if both versions of the problem are solved.} Alice and Bob are playing a fun game on a tree. This game is played on a tree with $n$ vertices, numbered from $1$ to $n$. Recall that a tree with $n$ vertices is an undirected connected graph with $n - 1$ edges. Alice and Bob take turns, with Alice going first. Each player starts at some vertex. On their turn, a player must move from the current vertex to a neighboring vertex that has not yet been visited by anyone. The first player who cannot make a move loses. You are given two vertices $u$ and $v$. Represent the simple path from vertex $u$ to $v$ as an array $p_1, p_2, p_3, \ldots, p_m$, where $p_1 = u$, $p_m = v$, and there is an edge between $p_i$ and $p_{i + 1}$ for all $i$ ($1 \le i < m$). You need to determine the winner of the game if Alice starts at vertex $1$ and Bob starts at vertex $p_j$ for each $j$ (where $1 \le j \le m$).
First, let's understand how the game proceeds. Alice and Bob start moving toward each other along the path from vertex $1$ to vertex $u$. At some vertex, one of the players can turn into a subtree of a vertex that is not on the path $(1, u)$. After this, both players go to the furthest accessible vertex. Let the path from vertex $1$ to vertex $u$ be denoted as $(p_1, p_2, \dots, p_m)$, where $p_1 = 1$ and $p_m = u$. Initially, Alice is at vertex $p_1$ and Bob is at vertex $p_m$. For each vertex on the path $(p_1, p_2, \dots, p_m)$, we define two values: $a_i$ - the number of vertices that Alice will visit if she descends into the subtree of vertex $p_i$ that does not lie on the path $(1, u)$; $b_i$ - the number of vertices that Bob will visit if he descends into the subtree of vertex $p_i$ that also does not lie on this path. Let the distance to the furthest vertex in the subtree of vertex $p_i$ be denoted as $d_{p_i}$. Then: $a_i = d_{p_i} + i$ - the number of vertices Alice can visit if she descends into the subtree at vertex $p_i$; $b_i = d_{p_i} + m - i + 1$ - the number of vertices Bob can visit if he descends into the subtree at vertex $p_i$. Now, consider what happens if Alice is at vertex $p_i$ and Bob is at vertex $p_j$. If Alice decides to descend into the subtree of vertex $p_i$, she will visit $a_i$ vertices. Meanwhile, Bob can reach any vertex on the segment $(p_i, p_{i+1}, \dots, p_j)$. 
It is advantageous for Bob to descend into the subtree of the vertex with the maximum value of $b_k$, where $k \in [i+1, j]$. Therefore, it is beneficial for Alice to descend into the subtree of vertex $p_i$ if the following condition holds: $a_i > \max(b_{i+1}, b_{i+2}, \dots, b_j)$. Otherwise, she should move to vertex $p_{i+1}$. The situation for Bob is similar: he will descend into the subtree of vertex $p_j$ if the condition analogous to Alice's holds for him. To efficiently find the maximum on the segment $(p_{i+1}, \dots, p_j)$, one can use a segment tree or a sparse table. This allows finding the maximum in $O(\log n)$ (or $O(1)$) per query, resulting in an overall time complexity of $O(n \log n)$. However, it can be proven that instead of using a segment tree or sparse table, one can simply iterate through the vertices on the segment and terminate the loop as soon as a vertex with a larger value is found. This approach yields a solution with a time complexity of $O(n)$.
[ "binary search", "brute force", "data structures", "dp", "games", "greedy", "implementation", "trees" ]
2,700
#include <bits/stdc++.h> using namespace std; int dfs(int v, int p, int to, const vector<vector<int>> &g, vector<int> &max_depth) { int ans = 0; bool has_to = false; for (int i : g[v]) { if (i == p) { continue; } int tmp = dfs(i, v, to, g, max_depth); if (tmp == -1) { has_to = true; } else { ans = max(ans, tmp + 1); } } if (has_to || v == to) { max_depth.emplace_back(ans); return -1; } else { return ans; } } int solve(const vector<vector<int>> &g, int to) { vector<int> max_depth; dfs(0, -1, to, g, max_depth); int n = max_depth.size(); reverse(max_depth.begin(), max_depth.end()); int first = 0, second = n - 1; while (true) { { int value1 = max_depth[first] + first; bool valid = true; for (int j = second; j > first; --j) { if (value1 <= max_depth[j] + (n - j - 1)) { valid = false; break; } } if (valid) { return 0; } ++first; if (first == second) { return 1; } } { int value2 = max_depth[second] + (n - second - 1); bool valid = true; for (int j = first; j < second; ++j) { if (value2 < max_depth[j] + j) { valid = false; break; } } if (valid) { return 1; } --second; if (first == second) { return 0; } } } } void solve() { int n; cin >> n; vector<vector<int>> g(n); for (int i = 0; i < n - 1; ++i) { int a, b; cin >> a >> b; g[a - 1].push_back(b - 1); g[b - 1].push_back(a - 1); } int s, f; cin >> s >> f; --s, --f; int ans = solve(g, s); if (ans == 0) { cout << "Alice\n"; } else { cout << "Bob\n"; } } int main() { ios_base::sync_with_stdio(false); cin.tie(0); int t; cin >> t; while (t--) solve(); }
2013
F2
Game in Tree (Hard Version)
\textbf{This is the hard version of the problem. In this version, it is not guaranteed that $u = v$. You can make hacks only if both versions of the problem are solved.} Alice and Bob are playing a fun game on a tree. This game is played on a tree with $n$ vertices, numbered from $1$ to $n$. Recall that a tree with $n$ vertices is an undirected connected graph with $n - 1$ edges. Alice and Bob take turns, with Alice going first. Each player starts at some vertex. On their turn, a player must move from the current vertex to a neighboring vertex that has not yet been visited by anyone. The first player who cannot make a move loses. You are given two vertices $u$ and $v$. Represent the simple path from vertex $u$ to $v$ as an array $p_1, p_2, p_3, \ldots, p_m$, where $p_1 = u$, $p_m = v$, and there is an edge between $p_i$ and $p_{i + 1}$ for all $i$ ($1 \le i < m$). You need to determine the winner of the game if Alice starts at vertex $1$ and Bob starts at vertex $p_j$ for each $j$ (where $1 \le j \le m$).
Read the Solution of the easy version of the problem. The path $(u, v)$ can be divided into two vertical paths $(1, u)$ and $(1, v)$. We will solve for the vertices on the path $(1, u)$, while the path $(1, v)$ is solved similarly. For each vertex on the path $(1, u)$, we define two values: $fa_i$ - the first vertex at which Alice will win if she descends into the subtree when Bob starts at vertex $p_i$. $fb_i$ - the first vertex at which Bob will win if he descends into the subtree when he starts at vertex $p_i$. The claim is that it is beneficial for Alice to descend into the subtree at vertex $v$ only over a certain segment of vertices from which Bob starts. A similar claim holds for Bob. Now, for each of Alice's vertices, we need to determine the segment where it is beneficial to descend. The left boundary of the segment for vertex $p_i$ will be $2 \cdot i$, since Alice will always be on the left half of the path. It is easy to notice that the right boundary of the segment can be found using binary search. We redefine the value $b_i$ to be $d_{p_i} - i$. To check whether it is beneficial for us to descend into the subtree for a fixed midpoint $mid$, we need to satisfy the following condition: $a_i > \max(b_{i+1}, b_{i+2}, \ldots, b_{mid - i}) + mid.$ Let $(l_j, r_j)$ denote the segment where it is beneficial for Alice to descend if Bob starts at vertex $p_j$. Then the value $fa_i$ will be the minimum position $j$ such that $l_j \leq i \leq r_j$; this can be found using a set. The value $fb_i$ is calculated similarly. Alice wins at vertex $p_i$ if $fa_i \leq i - fb_i$; otherwise, Bob wins. To efficiently find the maximum on the segment, a sparse table can be used. Additionally, using sets and binary searches gives us a time complexity of $O(n \log n)$.
[ "binary search", "data structures", "trees" ]
3,500
#include <bits/stdc++.h> using namespace std; const int maxn = 2e5 + 12; vector<int> g[maxn], del[maxn], add[maxn]; vector<int> ord; int ans[maxn]; int dp[maxn]; int n, m, k; bool calc(int v, int p, int f){ bool is = 0; if(v == f) is = 1; dp[v] = 1; for(int to:g[v]){ if(to == p){ continue; } bool fg = calc(to, v, f); is |= fg; if(fg == 0){ dp[v] = max(dp[v], dp[to] + 1); } } if(is){ ord.push_back(v); } return is; } struct sparse { int mx[20][200200], lg[200200]; int n; void build(vector<int> &a){ n = a.size(); for(int i=0;i<n;i++){ mx[0][i] = a[i]; } lg[0] = lg[1] = 0; for(int i=2;i<=n;i++){ lg[i] = lg[i/2] + 1; } for(int k=1;k<20;k++){ for(int i=0;i + (1 << k) - 1 < n;i++){ mx[k][i] = max(mx[k-1][i], mx[k-1][i + (1 << (k - 1))]); } } } int get (int l, int r) { if(l > r) return -1e9; int k = lg[r-l+1]; return max(mx[k][l], mx[k][r - (1 << k) + 1]); } } st_a, st_b; void solve(int v){ ord.clear(); calc(1, 0, v); reverse(ord.begin(), ord.end()); m = ord.size(); vector<int> a(m+1), b(m+1); vector<int> fa(m+1, 1e9), fb(m+1, -1e9); for(int i=0;i<m;i++){ a[i] = dp[ord[i]] + i; b[i] = dp[ord[i]] - i; del[i].clear(); add[i].clear(); } st_a.build(a); st_b.build(b); multiset<int> s; for(int i=1;i<m;i++){ int pos = i; for(int l=i+1, r=m-1;l<=r;){ int mid = l + r >> 1; if(st_b.get(i+1 , mid) + mid < a[i] - i){ pos = mid; l = mid + 1; } else r = mid - 1; } if(i < pos){ add[min(m, 2 * i)].push_back(i); del[min(m, pos + i)].push_back(i); } for(int x:add[i]){ s.insert(x); } if(s.size()) fa[i] = *s.begin(); for(int x:del[i]){ s.erase(s.find(x)); } } s.clear(); for(int i=0;i<=m;i++){ add[i].clear(); del[i].clear(); } for(int i=1;i<m;i++){ int pos = i; for(int l=1, r = i-1;l<=r;){ int mid = l + r >> 1; if(st_a.get(mid, i-1) - mid + 1 <= b[i] + i){ pos = mid; r = mid - 1; } else l = mid + 1; } pos--; if(pos >= 0){ add[min(m, pos + i)].push_back(i); del[min(m, 2 * i - 1)].push_back(i); } for(int x:add[i]){ s.insert(x); } if(s.size()) fb[i] = *s.rbegin(); for(int x:del[i]){ 
s.erase(s.find(x)); } } for(int i=m-1;i>0;i--){ b[i] = max(b[i+1] + 1, dp[ord[i]]); if(b[i] >= st_a.get(1, i-1)){ fb[i] = i; } if(a[0] > max(st_b.get(1, i-1) + i, b[i])){ fa[i] = 0; } ans[ord[i]] = 0; if(fa[i] <= i - fb[i]){ ans[ord[i]] = 1; } } } void solve(){ cin >> n; for(int i=1;i<=n;i++){ g[i].clear(); } for(int i=1;i<n;i++){ int a, b; cin >> a >> b; g[a].push_back(b); g[b].push_back(a); } int u, v; cin >> u >> v; solve(u), solve(v); ord.clear(); calc(v, 0, u); auto p = ord; for(int x:p){ if(ans[x] == 1){ cout << "Alice\n"; } else{ cout << "Bob\n"; } } } int main(){ ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int t = 1; cin >> t; while(t--){ solve(); } }
2014
A
Robin Helps
\begin{quote} There is a little bit of the outlaw in everyone, and a little bit of the hero too. \end{quote} The heroic outlaw Robin Hood is famous for taking from the rich and giving to the poor. Robin encounters $n$ people starting from the $1$-st and ending with the $n$-th. The $i$-th person has $a_i$ gold. If $a_i \ge k$, Robin will take all $a_i$ gold, and if $a_i=0$, Robin will give $1$ gold if he has any. Robin starts with $0$ gold. Find out how many people Robin gives gold to.
This problem requires a simple implementation. Keep a variable (initially $0$) representing the gold Robin has, update it according to the rules as he scans through the $a_i$, and add $1$ to the answer whenever Robin gives away a gold.
[ "greedy", "implementation" ]
800
#include <iostream> using namespace std; void work(){ int n,k; cin >> n >> k; int res = 0, gold = 0; for (int i=0;i<n;i++){ int cur; cin >> cur; if (!cur && gold) gold--, res++; else if (cur >= k) gold += cur; } cout << res << '\n'; } int main(){ int t; cin >> t; while (t--) work(); return 0; }
2014
B
Robin Hood and the Major Oak
\begin{quote} In Sherwood, the trees are our shelter, and we are all children of the forest. \end{quote} The Major Oak in Sherwood is known for its majestic foliage, which provided shelter to Robin Hood and his band of merry men and women. The Major Oak grows $i^i$ new leaves in the $i$-th year. It starts with $1$ leaf in year $1$. Leaves last for $k$ years on the tree. In other words, leaves grown in year $i$ last between years $i$ and $i+k-1$ inclusive. Robin considers even numbers lucky. Help Robin determine whether the Major Oak will have an even number of leaves in year $n$.
The key observation is that $i^i$ has the same even/odd parity as $i$. Therefore, the problem reduces to finding whether the sum of $k$ consecutive integers ending in $n$ is even. This can be done by finding the sum of $n-k+1, n-k+2, ..., n-1, n$ which is $k*(2n-k+1)/2$, and checking its parity. Alternatively, one can count the number of odd numbers in those $k$ consecutive integers. Note: Originally, the number of leaves grown was to be $i^m$ according to the fractal nature of life where $m$ is set to some integer. Developers decided to replace $m$ with $i$ for simplicity, following Filikec's suggestion.
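Since the claim is pure arithmetic, it can be sanity-checked with a tiny helper (a sketch assuming $1 \le k \le n$, so the live leaves in year $n$ are exactly those grown in years $n-k+1 \dots n$; the function name is illustrative):

```cpp
#include <cassert>

// i^i has the same parity as i, so the total number of leaves in year n
// has the parity of (n-k+1) + ... + n = k*(2n-k+1)/2. The product
// k*(2n-k+1) is always even, so the division below is exact.
bool evenLeaves(long long n, long long k) {
    long long lo = n - k + 1;        // first year whose leaves are still on the tree
    long long s = (lo + n) * k / 2;  // sum of the k consecutive integers
    return s % 2 == 0;
}
```

For example, year $4$ with $k=4$ has $1 + 4 + 27 + 256 = 288$ leaves, which is even, matching the parity of $1+2+3+4 = 10$.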
[ "math" ]
800
#include <iostream> using namespace std; void work(){ long long n,k; cin >> n >> k; cout << (((n+1)*n/2 - (n-k)*(n-k+1)/2)%2?"NO":"YES") << '\n'; } int main(){ int t; cin >> t; while (t--) work(); return 0; }
2014
C
Robin Hood in Town
\begin{quote} In Sherwood, we judge a man not by his wealth, but by his merit. \end{quote} Look around, the rich are getting richer, and the poor are getting poorer. We need to take from the rich and give to the poor. We need Robin Hood! There are $n$ people living in the town. Just now, the wealth of the $i$-th person was $a_i$ gold. But guess what? The richest person has found an extra pot of gold! More formally, find an $a_j=max(a_1, a_2, \dots, a_n)$, change $a_j$ to $a_j+x$, where $x$ is a non-negative integer number of gold found in the pot. If there are multiple maxima, it can be any one of them. A person is unhappy if their wealth is \textbf{strictly less than half} of the average wealth$^{\text{∗}}$. If \textbf{strictly more than half} of the total population $n$ are unhappy, Robin Hood will appear by popular demand. Determine the minimum value of $x$ for Robin Hood to appear, or output $-1$ if it is impossible. \begin{footnotesize} $^{\text{∗}}$The average wealth is defined as the total wealth divided by the total population $n$, that is, $\frac{\sum a_i}{n}$, the result is a real number. \end{footnotesize}
If we sort the wealth in increasing order, then the $j$-th person must be unhappy for Robin to appear, where $j=\lfloor n/2 \rfloor +1$ if $1$-indexing or $j=\lfloor n/2 \rfloor$ if $0$-indexing. We need $a_j < \frac{s+x}{2*n}$, where $s$ is the original total wealth before $x$ gold from the pot was added. Rearranging the equation gives $x>2*n*a_j-s$. Because $x$ is a non-negative integer, we arrive at the answer $max(0,2*n*a_j-s+1)$. Of course, this problem can also be solved by binary search, with two caveats. First, one needs to be careful to avoid comparison between integer and float types, as rounding errors could create issues. You can always avoid division by $2n$ by multiplying it out. Second, one needs to pick the upper limit carefully to ensure it is large enough. Note that $2*n*max(a)$ can serve as the upper limit for the binary search for $x$, because that would push the average to be strictly above $2*max(a)$ and everyone except the one with the pot of gold would be unhappy. There are $2$ edge cases, $n=1, 2$, where the condition for Robin can never be reached, because the richest person will always be happy (at least in this problem, though perhaps not IRL). ChatGPT struggled to identify these edge cases, so it was tempting to leave at least one hidden. Following testing, we decided to give both in samples to reduce frustration. Note: Wealth inequality is better measured by the Gini coefficient which is too involved for this problem. Our criterion is a crude approximation for the Gini coefficient, and is equivalent to setting the mean to median ratio (a well known indicator for inequality) to $2$. For a random distribution, this ratio is close to $1$. Interestingly, this ratio for UK salary distribution is around $1.2$, so no Robin yet.
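The closed form translates directly into code (a sketch; the function name is an assumption, and the index $n/2$ is the $0$-indexed position $\lfloor n/2 \rfloor$ described above):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Direct application of the closed form: sort, take a_j at 0-indexed
// position n/2, answer max(0, 2*n*a_j - s + 1). For n <= 2 the richest
// person can never be unhappy, so the answer is -1.
long long minPotGold(std::vector<long long> a) {
    long long n = (long long)a.size();
    if (n <= 2) return -1;
    long long s = 0;
    for (long long x : a) s += x;
    std::sort(a.begin(), a.end());
    return std::max(0LL, 2 * n * a[n / 2] - s + 1);
}
```

For $[1, 2, 3, 4]$: $s = 10$, $a_j = 3$, so the answer is $2 \cdot 4 \cdot 3 - 10 + 1 = 15$ (with $x = 15$ the half-average is $25/8 > 3$, making three people unhappy).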
[ "binary search", "greedy", "math" ]
1,100
#include <iostream> #include <vector> #include <algorithm> using namespace std; void work(){ int n; cin >> n; long long sum = 0; vector<long long> v(n); for (auto &c : v) cin >> c, sum += c; sort(v.begin(),v.end()); if (n < 3){ cout << "-1\n"; return; } cout << max(0LL,v[n/2]*2*n-sum+1) << '\n'; } int main(){ int t; cin >> t; while (t--) work(); return 0; }
2014
D
Robert Hood and Mrs Hood
\begin{quote} Impress thy brother, yet fret not thy mother. \end{quote} Robin's brother and mother are visiting, and Robin gets to choose the start day for each visitor. All days are numbered from $1$ to $n$. Visitors stay for $d$ continuous days, all of those $d$ days must be between day $1$ and $n$ inclusive. Robin has a total of $k$ risky 'jobs' planned. The $i$-th job takes place between days $l_i$ and $r_i$ inclusive, for $1 \le i \le k$. If a job takes place on any of the $d$ days, the visit overlaps with this job (the length of overlap is unimportant). Robin wants his brother's visit to overlap with the maximum number of \textbf{distinct jobs}, and his mother's the minimum. Find suitable start days for the visits of Robin's brother and mother. If there are multiple suitable days, choose the earliest one.
Since the number of days $n$ is capped, we can check all possible start day $x$ in range $[1,n-d+1]$ (so that the duration of $d$ days would fit). We would like to find the number of overlapped jobs for each value of $x$. A job between days $l_i$ and $r_i$ would overlap with the visit if the start day $x$ satisfies $l_i-d+1 \le x \le r_i$. Naively, this range update could be potentially $O(n)$, which is too slow. However, noting the start and end, each job update could be done in $2$ operations. We add $+1$ at $l_i-d+1$ and $-1$ at $r_i+1$, and after all jobs are recorded, we will take a prefix sum to work out the number of overlapped jobs for each $x$. When $l_i-d+1$ drops below $1$, we simply use $1$ to avoid lower values which are not being considered for $x$. The time complexity is $O(n)$. Note: Robin's risky jobs are generally deemed illegal by the Sheriff of Nottingham. Robert is practical and helpful. Like all good parents, Mrs Hood is a worrier.
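The $+1$/$-1$ trick is the standard difference-array technique; a minimal sketch (the function name and signature are assumptions, not taken from the reference solution, which uses an equivalent prefix-count formulation):

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Difference-array sketch for the overlap counts: a job [l, r] overlaps a
// visit of length d starting at x iff max(1, l-d+1) <= x <= r, so add +1
// at the clamped left end and -1 just past r, then prefix-sum.
// Returns 1-indexed overlap counts for start days x = 1 .. n-d+1.
std::vector<int> overlapCounts(int n, int d,
                               const std::vector<std::pair<int,int>>& jobs) {
    std::vector<int> diff(n + 2, 0);
    for (auto [l, r] : jobs) {
        diff[std::max(1, l - d + 1)] += 1;
        diff[r + 1] -= 1;
    }
    std::vector<int> cnt(n - d + 2, 0);
    int run = 0;
    for (int x = 1; x <= n - d + 1; ++x) {
        run += diff[x];
        cnt[x] = run;
    }
    return cnt;
}
```

The brother's start day is then the position of the maximum of `cnt`, and the mother's the position of the minimum (taking the earliest in case of ties).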
[ "brute force", "data structures", "greedy", "sortings" ]
1,400
#include <iostream> #include <vector> using namespace std; void work(){ int n,k,d; cin >> n >> d >> k; vector<int> ss(n+1),es(n+1); for (int i=0;i<k;i++){ int a,b; cin >> a >> b; ss[a]++; es[b]++; } for (int i=0;i<n;i++) ss[i+1] += ss[i]; for (int i=0;i<n;i++) es[i+1] += es[i]; int most = 0; int robert = 0; int mrs = 0; int least = 1e9; for (int i=d;i<=n;i++){ int cur = ss[i] - es[i-d]; if (cur > most) most = cur, robert = i-d+1; if (cur < least) least = cur, mrs = i-d+1; } cout << robert << ' ' << mrs << "\n"; } int main(){ int t; cin >> t; while (t--) work(); return 0; }
2014
E
Rendez-vous de Marian et Robin
\begin{quote} In the humble act of meeting, joy doth unfold like a flower in bloom. \end{quote} Absence makes the heart grow fonder. Marian sold her last ware at the Market at the same time Robin finished training at the Major Oak. They couldn't wait to meet, so they both start without delay. The travel network is represented as $n$ vertices numbered from $1$ to $n$ and $m$ edges. The $i$-th edge connects vertices $u_i$ and $v_i$, and takes $w_i$ seconds to travel (all $w_i$ are even). Marian starts at vertex $1$ (Market) and Robin starts at vertex $n$ (Major Oak). In addition, $h$ of the $n$ vertices each has a single horse available. Both Marian and Robin are capable riders, and could mount horses in no time (i.e. in $0$ seconds). Travel times are halved when riding. Once mounted, a horse lasts the remainder of the travel. Meeting must take place on a vertex (i.e. not on an edge). Either could choose to wait on any vertex. Output the earliest time Robin and Marian can meet. If vertices $1$ and $n$ are disconnected, output $-1$ as the meeting is cancelled.
This problem builds on the standard Dijkstra's algorithm, so please familiarise yourself with the algorithm if not already. In Dijkstra's algorithm, a distance vector/list is used to store travel times to all vertices; here we double the vector/list to store travel times to vertices arriving with and without a horse. If a vertex has a horse, then it is possible to transition there from the without-horse state to the with-horse state. Dijkstra's algorithm is then run as standard. What if a horse has already been taken by Marian when Robin arrives, and vice versa? Well, the optimal solution would never require the second person to arrive to use the horse, because the first to arrive could simply wait for the second to arrive, giving a meeting no later than whatever is possible if the second to arrive had to use the horse and go elsewhere. Therefore, for any vertex, $1$ horse is sufficient. We run Dijkstra's algorithm twice to find the fastest times Robin and Marian could reach any vertex $i$, $tR(i)$ and $tM(i)$. The earliest meeting time at a given vertex $i$ is $max(tR(i),tM(i))$, and we need to check all vertices. The time complexity is that of Dijkstra's algorithm with a binary heap or balanced tree, which in this problem is $O((n + m) \log n)$.
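The doubled-state idea can be sketched compactly with a priority queue (a minimal sketch, not the reference solution; names are illustrative, and it assumes all weights are even as guaranteed by the problem):

```cpp
#include <array>
#include <cassert>
#include <functional>
#include <queue>
#include <tuple>
#include <vector>

// Dijkstra over doubled states (vertex, has-horse). Edge weights are
// halved once mounted; mounting happens for free at horse vertices.
// Returns d[v][h] = earliest arrival at v in state h, from source s.
using Graph = std::vector<std::vector<std::pair<int, long long>>>;
const long long INF = 1e18;

std::vector<std::array<long long, 2>>
dijkstra(const Graph& g, const std::vector<bool>& horse, int s) {
    int n = (int)g.size();
    std::vector<std::array<long long, 2>> d(n, {INF, INF});
    using State = std::tuple<long long, int, int>; // (dist, vertex, mounted)
    std::priority_queue<State, std::vector<State>, std::greater<>> pq;
    int h0 = horse[s] ? 1 : 0;
    d[s][h0] = 0;
    pq.push({0, s, h0});
    while (!pq.empty()) {
        auto [dist, v, h] = pq.top(); pq.pop();
        if (dist != d[v][h]) continue; // stale entry
        for (auto [to, w] : g[v]) {
            long long nd = dist + (h ? w / 2 : w);
            int nh = (h || horse[to]) ? 1 : 0; // pick up a horse on arrival
            if (nd < d[to][nh]) { d[to][nh] = nd; pq.push({nd, to, nh}); }
        }
    }
    return d;
}
```

Running this from both endpoints and taking $\min_i \max(tR(i), tM(i))$ over all vertices $i$ gives the answer.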
[ "dfs and similar", "graphs", "shortest paths" ]
1,800
#include <iostream> #include <vector> #include <set> using namespace std; void dijkstra(int s, vector<vector<long long>> &d, vector<vector<pair<int,long long>>> &graph, vector<bool> &hs){ auto cmp = [&](auto &a, auto &b){return make_pair(d[a.first][a.second],a) < make_pair(d[b.first][b.second],b);}; set<pair<int,int>,decltype(cmp)> q(cmp); d[s][0] = 0; q.insert({s,0}); while (q.size()){ auto [curv,curh] = *q.begin(); q.erase(q.begin()); bool horse = (curh || hs[curv]); for (auto &[neighv, neighd] : graph[curv]){ long long dist = horse?neighd/2:neighd; if (d[neighv][horse] > d[curv][curh] + dist){ q.erase({neighv,horse}); d[neighv][horse] = d[curv][curh] + dist; q.insert({neighv,horse}); } } } } void work(){ int n,m,h; cin >> n >> m >> h; vector<bool> hs(n); vector<vector<pair<int,long long>>> graph(n); for (int i=0;i<h;i++){ int c; cin >> c; hs[--c]=1; } for (int i=0;i<m;i++){ int a,b,c; cin >> a >> b >> c; a--,b--; graph[a].push_back({b,c}); graph[b].push_back({a,c}); } vector<vector<long long>> d1(n,vector<long long>(2,1e18)); vector<vector<long long>> d2(n,vector<long long>(2,1e18)); dijkstra(0,d1,graph,hs); dijkstra(n-1,d2,graph,hs); long long best = 1e18; auto get = [&](int a){return max(min(d1[a][0],d1[a][1]),min(d2[a][0],d2[a][1]));}; for (int i=0;i<n;i++) best = min(best,get(i)); cout << (best==1e18?-1:best) << '\n'; } int main(){ int t; cin >> t; while (t--) work(); return 0; }
2014
F
Sheriff's Defense
\begin{quote} "Why, master," quoth Little John, taking the bags and weighing them in his hand, "here is the chink of gold." \end{quote} The folk hero Robin Hood has been troubling Sheriff of Nottingham greatly. Sheriff knows that Robin Hood is about to attack his camps and he wants to be prepared. Sheriff of Nottingham built the camps with strategy in mind and thus there are exactly $n$ camps numbered from $1$ to $n$ and $n-1$ trails, each connecting two camps. Any camp can be reached from any other camp. Each camp $i$ has initially $a_i$ gold. As it is now, all camps would be destroyed by Robin. Sheriff can strengthen a camp by subtracting exactly $c$ gold from \textbf{each of its neighboring camps} and use it to build better defenses for that camp. Strengthening a camp \textbf{doesn't change} its gold, only its neighbors' gold. A camp can have negative gold. After Robin Hood's attack, all camps that have been strengthened survive the attack, all others are destroyed. What's the maximum gold Sheriff can keep in his surviving camps after Robin Hood's attack if he strengthens his camps optimally? Camp $a$ is neighboring camp $b$ if and only if there exists a trail connecting $a$ and $b$. Only strengthened camps count towards the answer, as others are destroyed.
An important observation is that strengthening a camp only influences its neighbors, so we only need to consider adjacent nodes; more distant camps are unaffected. Let's solve this problem by induction over the tree. Let $d[i][0]$ denote the most gold from node $i$ and all its children if we don't strengthen node $i$, and $d[i][1]$ if we do strengthen node $i$. Base case: if the current node $i$ is a leaf, $d[i][0] = 0$, $d[i][1] = a_i$. Induction step: consider the node $i$ with children $1 \dots m$, and assume that all nodes $1 \dots m$ are already calculated. If we don't strengthen node $i$, $d[i][0] = {\sum_{j = 1}^m max(d[j][0],d[j][1])}$. If node $i$ is strengthened, $d[i][1] = a_i + {\sum_{j = 1}^m max(d[j][0],d[j][1]-2\cdot c)}$; the $-2\cdot c$ term accounts for the fact that when both $i$ and its child $j$ are strengthened, each of the two operations subtracts $c$ from the other node. Time complexity - $O(n)$.
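The recurrence above can be sketched directly as a DFS (a minimal sketch; the function name, $0$-indexed vertices, and edge-list input are assumptions):

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <functional>
#include <utility>
#include <vector>

// Two-state tree DP: d[v][0]/d[v][1] = most surviving gold in v's subtree
// when v is not / is strengthened. A strengthened parent-child pair costs
// 2*c in total (each operation takes c from the other), hence d[j][1]-2*c.
long long maxGold(const std::vector<long long>& a,
                  const std::vector<std::pair<int,int>>& edges, long long c) {
    int n = (int)a.size();
    std::vector<std::vector<int>> g(n);
    for (auto [u, v] : edges) { g[u].push_back(v); g[v].push_back(u); }
    std::vector<std::array<long long, 2>> d(n, {0, 0});
    std::function<void(int,int)> dfs = [&](int v, int p) {
        d[v][1] = a[v];
        for (int to : g[v]) {
            if (to == p) continue;
            dfs(to, v);
            d[v][0] += std::max(d[to][0], d[to][1]);
            d[v][1] += std::max(d[to][0], d[to][1] - 2 * c);
        }
    };
    dfs(0, -1);
    return std::max({0LL, d[0][0], d[0][1]}); // strengthening nothing is allowed
}
```

For two camps of $3$ gold joined by a trail with $c=1$, strengthening both yields $2+2=4$, which the DP reproduces.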
[ "dfs and similar", "dp", "greedy", "trees" ]
2,000
#include <iostream> #include <vector> using namespace std; void work(){ int n,k; cin >> n >> k; vector<long long> v(n); vector<vector<int>> g(n); for (auto &c : v) cin >> c; for (int i=0;i<n-1;i++){ int a,b; cin >> a >> b; a--,b--; g[a].push_back(b); g[b].push_back(a); } vector<bool> vis(n); vector<vector<long long>> d(n,vector<long long>(2)); auto dfs = [&](auto &&dfs, int cur, int p) -> void { for (auto &neigh : g[cur]){ if (neigh == p) continue; dfs(dfs,neigh,cur); d[cur][1] += max(d[neigh][0],d[neigh][1] - 2*k); d[cur][0] += max(d[neigh][0],d[neigh][1]); } d[cur][1] += v[cur]; }; dfs(dfs,0,-1); cout << max(0LL,max(d[0][0],d[0][1])) << '\n'; } int main(){ int t; cin >> t; while (t--) work(); return 0; }
2014
G
Milky Days
\begin{quote} What is done is done, and the spoilt milk cannot be helped. \end{quote} Little John is as little as night is day — he was known to be a giant, at possibly $2.1$ metres tall. It has everything to do with his love for milk. His dairy diary has $n$ entries, showing that he acquired $a_i$ pints of fresh milk on day $d_i$. Milk declines in freshness with time and stays drinkable for a maximum of $k$ days. In other words, fresh milk acquired on day $d_i$ will be drinkable between days $d_i$ and $d_i+k-1$ inclusive. Every day, Little John drinks drinkable milk, up to a maximum of $m$ pints. In other words, if there are less than $m$ pints of milk, he will drink them all and not be satisfied; if there are at least $m$ pints of milk, he will drink exactly $m$ pints and be satisfied, and it's a milk satisfaction day. Little John always drinks \textbf{the freshest} drinkable milk first. Determine the number of milk satisfaction days for Little John.
The key for this problem is the use of a stack, where the last item in is the first item out. As we scan through the diary entries, we only drink up to the day of the next entry. If there is leftover milk, we push it onto the stack with the number of pints and the day it was acquired. If there isn't enough milk to reach the next entry, we check the stack for leftovers. Careful implementation is required to check for the expiry day. It might help to append a fictitious entry with a large day number and $0$ pints. Since every pop from the stack accompanies either the processing of a diary entry or permanently removing a stack item, the number of stack operations is $O(n)$. Since diary entries are presented in sorted order, the time complexity is $O(n)$. Note: Originally, this problem had an easy version, where Little John drinks the oldest drinkable milk first. However, testers and the team were uncertain about the difficulties of the two problems, and there was concern that they were too 'implementation heavy'. For the sake of balance, only the hard version is presented here as G. However, you may wish to try the easy version yourself. I recall my parents telling me to use the oldest milk first, and now I say the same to my children. Has it all been worthwhile?
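Because the stack solution is easy to get subtly wrong around expiry days, a slow day-by-day simulator is useful as a cross-check (a brute-force sketch, not the editorial's $O(n)$ algorithm; it assumes entries are sorted by day with distinct days, and is only suitable for small day ranges):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <iterator>
#include <map>
#include <utility>
#include <vector>

// Brute force: simulate every day, always drinking the freshest drinkable
// milk first. entries = (day, pints); m = daily limit; k = freshness span.
long long satisfactionDaysBrute(
        const std::vector<std::pair<long long,long long>>& entries,
        long long m, long long k) {
    std::map<long long, long long> stock; // acquisition day -> pints left
    long long res = 0, lastDay = entries.back().first + k;
    std::size_t idx = 0;
    for (long long day = 1; day <= lastDay; ++day) {
        if (idx < entries.size() && entries[idx].first == day)
            stock[day] += entries[idx++].second;
        while (!stock.empty() && stock.begin()->first + k - 1 < day)
            stock.erase(stock.begin()); // drop expired milk
        long long need = m;
        while (need > 0 && !stock.empty()) {
            auto it = std::prev(stock.end()); // freshest first
            long long take = std::min(need, it->second);
            need -= take; it->second -= take;
            if (it->second == 0) stock.erase(it);
        }
        if (need == 0) ++res; // drank a full m pints: satisfaction day
    }
    return res;
}
```

Comparing this against the fast solution on random small inputs is a cheap way to catch off-by-one errors in the expiry handling.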
[ "brute force", "data structures", "greedy", "implementation" ]
2,200
#include <iostream> #include <map> #include <list> #include <vector> using namespace std; typedef long long ll; typedef pair<ll,ll> pll; typedef vector<pll> vpll; void work(){ int n,m,k; cin >> m >> n >> k; map<ll,ll> days; for (int i=0;i<m;i++){ int a,b; cin >> a >> b; days[a] += b; } days[1e18] = 0; ll curd = 1; ll got = 0; ll res = 0; list<pll> pq; for (auto &cur : days){ while (pq.size() && curd < cur.first){ auto [d,x] = pq.front(); pq.pop_front(); if (d+k-1 < curd) continue; else if (d > curd) curd = d, got = 0; if (n-got > x) got += x; else{ ll sat = min(curd + (x-n+got)/n + 1,min(d + k, cur.first)); ll newx = x-(sat-curd)*n+got; if (newx) pq.push_front({d,newx}); res += sat-curd; got = 0; curd = sat; } } pq.push_front(cur); } cout << res << '\n'; } int main(){ cin.tie(NULL); ios_base::sync_with_stdio(false); int t; cin >> t; while (t--) work(); return 0; }
2014
H
Robin Hood Archery
\begin{quote} At such times archery was always the main sport of the day, for the Nottinghamshire yeomen were the best hand at the longbow in all merry England, but this year the Sheriff hesitated... \end{quote} Sheriff of Nottingham has organized a tournament in archery. It's the final round and Robin Hood is playing against Sheriff! There are $n$ targets in a row numbered from $1$ to $n$. When a player shoots target $i$, their score increases by $a_i$ and the target $i$ is destroyed. The game consists of turns and players alternate between whose turn it is. Robin Hood always starts the game, then Sheriff and so on. The game continues until all targets are destroyed. Both players start with score $0$. At the end of the game, the player with most score wins and the other player loses. If both players have the same score, it's a tie and no one wins or loses. In each turn, the player can shoot any target that wasn't shot before. Both play optimally to get the most score possible. Sheriff of Nottingham has a suspicion that he might lose the game! This cannot happen, you must help Sheriff. Sheriff will pose $q$ queries, each specifying $l$ and $r$. This means that the game would be played only with targets $l, l+1, \dots, r$, as others would be removed by Sheriff before the game starts. For each query $l$, $r$, determine whether the Sheriff can \textbf{not lose} the game when only considering the targets $l, l+1, \dots, r$.
Sheriff can never win. This is quite obvious as Robin picks first and both just keep picking the current biggest number. This means that Sheriff can at best get a tie - this happens if and only if every element appears an even number of times. The segment $a_l \dots a_r$ is a tie if and only if there is no element that appears an odd number of times in it. There are multiple ways to solve this problem; two are outlined. First, Mo's algorithm: we can keep the count of appearances of each element using an array, with $O(1)$ updates per boundary move. Sort the queries by the block (of size ${\sqrt n}$) of their left endpoint, breaking ties by right endpoint. Keep updating the boundaries of the current segment and the total count of elements that appear an odd number of times. Sheriff can tie iff there is no odd appearance. Time complexity - $O((n+q){\sqrt n})$. Second, consider the prefixes of all targets. If the current segment is $a_l \dots a_r$, there is no element with odd appearance if and only if the set of values with odd appearance in $a_1 \dots a_{l-1}$ is the same as in $a_1 \dots a_r$. We can check whether two prefixes have the same set of odd-appearance elements with xor hashing: give each value a random 64-bit key and compare prefix xors. Time complexity - $O(n+q)$.
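The xor-hashing check from the second approach can be isolated into a small utility (a sketch; the class name and the fixed seed are illustrative, and a non-deterministic seed should be used against adversarial inputs):

```cpp
#include <cassert>
#include <cstdint>
#include <random>
#include <unordered_map>
#include <vector>

// Xor hashing: map each distinct value to a random 64-bit key. A segment
// [l, r] (1-indexed) has every value an even number of times iff the xor
// of keys over it is 0 (with all-but-negligible failure probability),
// which a prefix-xor array answers in O(1) per query.
struct ParityChecker {
    std::vector<uint64_t> pref; // pref[i] = xor of keys of a[0..i-1]
    explicit ParityChecker(const std::vector<int>& a) {
        std::mt19937_64 gen(12345); // fixed seed for reproducibility only
        std::unordered_map<int, uint64_t> key;
        pref.assign(a.size() + 1, 0);
        for (std::size_t i = 0; i < a.size(); ++i) {
            if (!key.count(a[i])) key[a[i]] = gen();
            pref[i + 1] = pref[i] ^ key[a[i]];
        }
    }
    bool allEven(int l, int r) const { return (pref[r] ^ pref[l - 1]) == 0; }
};
```

Random keys are essential: using the raw values themselves would make distinct multisets like $\{1, 2, 3\}$ and $\{0\}$ collide, since $1 \oplus 2 \oplus 3 = 0$.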
[ "data structures", "divide and conquer", "greedy", "hashing" ]
1,900
#Mo's algorithm #include <iostream> #include <vector> #include <algorithm> #include <array> using namespace std; int K = 500; int Cnt[1000001]; void work(){ int n,q; cin >> n >> q; vector<int> v(n); for (auto &c : v) cin >> c, Cnt[c] = 0; vector<array<int,3>> qs(q); for (int i=0;i<q;i++) cin >> qs[i][0] >> qs[i][1], qs[i][2] = i; auto cmp = [&](array<int,3> &a, array<int,3> &b){return make_pair(make_pair(a[0]/K,a[1]/K),a) < make_pair(make_pair(b[0]/K,b[1]/K),b);}; sort(qs.begin(),qs.end(),cmp); int l, r; l = r = 0; int odd = 1; Cnt[v.front()]++; vector<bool> res(q); for (auto &c : qs){ c[0]--,c[1]--; while (r < c[1]){ Cnt[v[++r]]++; if (Cnt[v[r]]%2) odd++; else odd--; } while (l > c[0]){ Cnt[v[--l]]++; if (Cnt[v[l]]%2) odd++; else odd--; } while (l < c[0]){ Cnt[v[l]]--; if (Cnt[v[l++]]%2) odd++; else odd--; } while (r > c[1]){ Cnt[v[r]]--; if (Cnt[v[r--]]%2) odd++; else odd--; } res[c[2]] = odd; } for (bool c : res) cout << (c?"NO\n":"YES\n"); } int main(){ int t; cin >> t; while (t--) work(); return 0; } #xor hashing #include <iostream> #include <vector> #include <map> #include <random> #include <set> using namespace std; void work(){ int n,q; cin >> n >> q; vector<unsigned long long> v(n); for (auto &c : v) cin >> c; random_device rd; mt19937_64 gen(rd()); map<unsigned long long, unsigned long long> mapping; set<unsigned long long> used = {0}; for (auto &c : v){ unsigned long long random; if (!mapping.contains(c)){ do{ random = gen(); }while (used.contains(random)); used.insert(random); mapping[c] = random; }else{ random = mapping[c]; } c = random; } vector<unsigned long long> xor_pref(n+1); for (int i=0;i<n;i++) xor_pref[i+1] = xor_pref[i] ^ v[i]; for (int i=0;i<q;i++){ int l,r; cin >> l >> r; cout << ((xor_pref[r]^xor_pref[l-1])?"NO\n":"YES\n"); } } int main(){ int t; cin >> t; while (t--) work(); return 0; }
2018
A
Cards Partition
\begin{quote} DJ Genki vs Gram - Einherjar Joker \hfill ⠀ \end{quote} You have some cards. An integer between $1$ and $n$ is written on each card: specifically, for each $i$ from $1$ to $n$, you have $a_i$ cards which have the number $i$ written on them. There is also a shop which contains unlimited cards of each type. You have $k$ coins, so you can buy \textbf{at most} $k$ new cards in total, and the cards you buy can contain any integer \textbf{between $\mathbf{1}$ and $\mathbf{n}$}, inclusive. After buying the new cards, you must partition \textbf{all} your cards into decks, according to the following rules: - all the decks must have the same size; - there are no pairs of cards with the same value in the same deck. Find the maximum possible size of a deck after buying cards and partitioning them optimally.
The answer is at most $n$. Solve the problem with $k = 0$. When is the answer $n$? If the answer is not $n$, how can you buy cards? Note that there are $n$ types of cards, so the decks have size at most $n$, and the answer is at most $n$. If $k = 0$, you can make decks of size $s$ if and only if the following conditions are true: the number of cards ($m$) is a multiple of $s$; the maximum number of cards of some type ($x$) is $\leq m/s$. Proof: $m$ is the number of decks times $s$, so the number of decks is $m/s$. Each deck can contain at most $1$ card of each type, so there are at most $m/s$ cards of each type in total. Conversely, if the two conditions above hold, you can make a deck containing the $s$ types of cards with maximum frequency. You can show with some calculations that the conditions still hold after removing these cards, so you can prove by induction that the two conditions are sufficient to make decks of size $s$. The same idea is used in problems like 1954D - Colored Balls and abc227_d - Project Planning. For a generic $k$, the answer is $n$ if you can make the numbers of cards of types $1, \ldots, n$ equal. Otherwise, for any choice of the number of cards to buy, you can buy them without changing $x$. It means that, to make decks of size $s$, you need at least $x \cdot s$ cards in total: if you have fewer than $x \cdot s$ cards, you have to check if you can reach $x \cdot s$ cards by buying at most $k$ new cards; if you already have $x \cdot s$ or more cards at the beginning, you have to check if you can make $m$ a multiple of $s$ by buying at most $k$ cards. Complexity: $O(n)$
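Both conditions reduce to a one-line check per candidate deck size: take the largest multiple of $s$ not exceeding $m + k$, and require it to be $\geq m$ and to give at least $x$ decks. A minimal sketch of that check (the function name is an assumption):

```cpp
#include <cassert>
#include <algorithm>
#include <vector>

// Maximum deck size: for each candidate s, let last_mul be the largest
// multiple of s not exceeding tot + k. We need last_mul >= tot (so the total
// can be made a multiple of s by buying at most k cards) and
// mx <= last_mul / s (each of the last_mul / s decks holds at most one card
// of the most frequent type).
long long max_deck_size(long long k, const std::vector<long long>& a) {
    long long n = a.size(), tot = 0, mx = 0;
    for (long long x : a) { tot += x; mx = std::max(mx, x); }
    long long ans = 0;
    for (long long s = 1; s <= n; s++) {
        long long last_mul = s * ((tot + k) / s);
        if (last_mul >= tot && mx <= last_mul / s) ans = s;
    }
    return ans;
}
```

For instance, with $a = (3, 2, 2)$ and $k = 1$, buying one card of each of types $2$ and $3$ is impossible within the budget, but one extra card makes $8 = 4 \cdot 2$ cards, so decks of size $2$ work.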
[ "2-sat", "brute force", "greedy", "implementation", "math" ]
1,600
#include <bits/stdc++.h> using namespace std; #define nl "\n" #define nf endl #define ll long long #define pb push_back #define _ << ' ' << #define INF (ll)1e18 #define mod 998244353 #define maxn 110 int main() { ios::sync_with_stdio(0); cin.tie(0); #if !ONLINE_JUDGE && !EVAL ifstream cin("input.txt"); ofstream cout("output.txt"); #endif ll t; cin >> t; while (t--) { ll n, k; cin >> n >> k; vector<ll> a(n + 1, 0); ll mx = 0, tot = 0; for (ll i = 1; i <= n; i++) { cin >> a[i]; mx = max(mx, a[i]); tot += a[i]; } ll ans = 0; for (ll sz = 1; sz <= n; sz++) { ll last_mul = sz * ((tot + k) / sz); if (last_mul < tot) continue; if (mx > last_mul / sz) continue; ans = max(ans, sz); } cout << ans << nl; } return 0; }
2018
B
Speedbreaker
\begin{quote} Djjaner - Speedbreaker \hfill ⠀ \end{quote} There are $n$ cities in a row, numbered $1, 2, \ldots, n$ left to right. - At time $1$, you conquer exactly one city, called the starting city. - At time $2, 3, \ldots, n$, you can choose a city adjacent to the ones conquered so far and conquer it. You win if, for each $i$, you conquer city $i$ at a time no later than $a_i$. A winning strategy may or may not exist, also depending on the starting city. How many starting cities allow you to win?
When is the answer $0$? Starting from city $x$ is equivalent to setting $a_x = 1$. At some time $t$, consider the minimal interval $[l, r]$ that contains all the cities with $a_i \leq t$ (let's call it "the minimal interval at time $t$"). You have to conquer this whole interval within time $t$; otherwise, there are some cities with $a_i \leq t$ which you do not conquer in time. So if this interval has length $> t$, you cannot conquer it all within time $t$, and the answer is $0$. Otherwise, the answer is at least $1$. A possible construction is visiting "the minimal interval at time $1$", then "the minimal interval at time $2$", ..., then "the minimal interval at time $n$". Note that, when you finish visiting "the minimal interval at time $t$", the actual time is equal to the length of the interval, which is $\leq t$. In this way, at time $t$ you will have conquered all the cities in the minimal interval at time $t$, and possibly other cities. Starting from city $x$ is equivalent to setting $a_x = 1$. After this operation, you have to guarantee that, for each $t$, the minimal interval at time $t$ is short enough. If this interval is $[l, r]$ before the operation, it can become either $[x, r]$ (if $x < l$), or $[l, x]$ (if $x > r$), or stay the same. In all these cases, the resulting length must be $\leq t$. With some calculations (e.g., $r-x+1 \leq t$), you can get that $x$ must be contained in $[r-t+1, l+t-1]$. So it's enough to calculate and intersect the intervals obtained at $t = 1, \ldots, n$, and print the length of the final intersection. You can calculate the minimal intervals by iterating on the cities in increasing order of $a_i$. Again, if the old interval is $[l, r]$ and the new city has index $x$, the new possible intervals are $[x, r]$, $[l, r]$, $[l, x]$. Another correct solution is to intersect the intervals $[i-a_i+1, i+a_i-1]$. The proof is contained in the editorial of
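The sweep described above fits in a few lines: process deadlines $t = 1, \ldots, n$, grow the minimal interval $[l, r]$, bail out if it is ever longer than $t$, and otherwise intersect the constraints $[r-t+1, l+t-1]$ on the starting city. A sketch with 0-based city indices (the function name is an assumption):

```cpp
#include <cassert>
#include <algorithm>
#include <vector>

// Number of winning starting cities. Maintain the minimal interval [l, r]
// containing all cities with a_i <= t; if its length exceeds t the answer is
// 0, otherwise the starting city must lie in [r - t + 1, l + t - 1].
long long count_starts(const std::vector<long long>& a) {  // 0-based cities
    long long n = a.size();
    std::vector<std::vector<long long>> at(n + 1);
    for (long long i = 0; i < n; i++) at[std::min(n, a[i])].push_back(i);
    long long l = n, r = -1, lo = 0, hi = n - 1;
    for (long long t = 1; t <= n; t++) {
        for (long long i : at[t]) { l = std::min(l, i); r = std::max(r, i); }
        if (r < 0) continue;                  // no deadline <= t yet
        if (r - l + 1 > t) return 0;          // interval too long to cover in time
        lo = std::max(lo, r - t + 1);         // must reach r within t steps
        hi = std::min(hi, l + t - 1);         // must reach l within t steps
    }
    return std::max(0LL, hi - lo + 1);
}
```

For example, on $a = (6, 3, 3, 3, 5, 5)$ the valid starting cities form the interval $[2, 4]$ (1-based), so the answer is $3$.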
[ "binary search", "data structures", "dp", "greedy", "implementation", "two pointers" ]
1,900
#include <bits/stdc++.h> using namespace std; #define nl "\n" #define nf endl #define ll long long #define pb push_back #define _ << ' ' << #define INF (ll)1e18 #define mod 998244353 #define maxn 110 int main() { ios::sync_with_stdio(0); cin.tie(0); #if !ONLINE_JUDGE && !EVAL ifstream cin("input.txt"); ofstream cout("output.txt"); #endif ll t; cin >> t; while (t--) { ll n; cin >> n; vector<ll> a(n + 1, 0); vector<vector<ll>> adj(n + 1); for (ll i = 1; i <= n; i++) { cin >> a[i]; adj[min(n, a[i])].pb(i); } ll l = INF, r = -INF; ll l_ans = -INF, r_ans = INF; ll flag = 1; for (ll i = 1; i <= n; i++) { for (auto u : adj[i]) { l = min(l, u); r = max(r, u); } if (l == INF) continue; if (r - l + 1 > i) flag = 0; l_ans = max(l_ans, r - i + 1); r_ans = min(r_ans, l + i - 1); } if (flag == 0 || l_ans > r_ans) cout << 0 << nl; else cout << r_ans - l_ans + 1 << nl; } return 0; }
2018
C
Tree Pruning
\begin{quote} t+pazolite, ginkiha, Hommarju - Paved Garden \hfill ⠀ \end{quote} You are given a tree with $n$ nodes, rooted at node $1$. In this problem, a leaf is a non-root node with degree $1$. In one operation, you can remove a leaf and the edge adjacent to it (possibly, new leaves appear). What is the minimum number of operations that you have to perform to get a tree, also rooted at node $1$, where all the leaves are at the same distance from the root?
Solve for a fixed final depth of the leaves. Which nodes are "alive" if all leaves are at depth $d$ at the end? If the final depth of the leaves is $d$, it's optimal to keep in the tree all the nodes at depth $d$ and all their ancestors. These nodes are the only ones which satisfy the following two conditions: their depth ($a_i$) is $\leq d$; the maximum depth of a node in their subtree ($b_i$) is $\geq d$. So every node is alive in the interval of depths $[a_i, b_i]$. The optimal $d$ is the one contained in the maximum number of intervals.
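Once each node's interval of surviving depths $[a_i, b_i]$ is known, the last step is a plain coverage sweep: the best $d$ is the one hit by the most intervals. A sketch of just that step, taking the intervals as input (the function name is an assumption):

```cpp
#include <cassert>
#include <algorithm>
#include <utility>
#include <vector>

// Given one interval [a_i, b_i] of target depths per node (the depths for
// which the node survives), return the minimum number of removed nodes:
// total nodes minus the maximum number of intervals covering a single depth.
long long min_ops(const std::vector<std::pair<long long, long long>>& iv,
                  long long max_depth) {
    std::vector<long long> sweep(max_depth + 2, 0);
    for (auto [a, b] : iv) { sweep[a]++; sweep[b + 1]--; }
    long long best = 0, cur = 0;
    for (long long d = 0; d <= max_depth; d++) {
        cur += sweep[d];
        best = std::max(best, cur);
    }
    return (long long)iv.size() - best;
}
```

For a path 1-2-3 with an extra leaf 4 attached to the root, the intervals are $[0,2], [1,2], [2,2], [1,1]$; depth $d = 1$ or $d = 2$ is covered by three intervals, so one removal suffices.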
[ "brute force", "dfs and similar", "greedy", "sortings", "trees" ]
1,700
#include <bits/stdc++.h> using namespace std; #define nl "\n" #define nf endl #define ll long long #define pb push_back #define _ << ' ' << #define INF (ll)1e18 #define mod 998244353 #define maxn 110 int main() { ios::sync_with_stdio(0); cin.tie(0); #if !ONLINE_JUDGE && !EVAL ifstream cin("input.txt"); ofstream cout("output.txt"); #endif ll t; cin >> t; while (t--) { ll n; cin >> n; vector<vector<ll>> adj(n + 1); for (ll i = 0; i < n - 1; i++) { ll a, b; cin >> a >> b; adj[a].pb(b); adj[b].pb(a); } vector<bool> vis(n + 1, false); vector<ll> depth(n + 1, 0), max_depth(n + 1, 0); function<void(ll)> dfs = [&](ll s) { vis[s] = true; for (auto u : adj[s]) { if (vis[u]) continue; depth[u] = depth[s] + 1; dfs(u); max_depth[s] = max(max_depth[s], max_depth[u]); } max_depth[s] = max(max_depth[s], depth[s]); }; dfs(1); vector<ll> sweep(n + 2, 0); for (ll i = 1; i <= n; i++) { sweep[depth[i]]++; sweep[max_depth[i] + 1]--; } for (ll i = 1; i <= n + 1; i++) sweep[i] += sweep[i - 1]; ll ans = n - (*max_element(sweep.begin(), sweep.end())); cout << ans << nl; } return 0; }
2018
D
Max Plus Min Plus Size
\begin{quote} EnV - The Dusty Dragon Tavern \hfill ⠀ \end{quote} You are given an array $a_1, a_2, \ldots, a_n$ of positive integers. You can color some elements of the array red, but there cannot be two adjacent red elements (i.e., for $1 \leq i \leq n-1$, at least one of $a_i$ and $a_{i+1}$ must not be red). Your score is the maximum value of a red element, plus the minimum value of a red element, plus the number of red elements. Find the maximum score you can get.
The optimal subsequence must contain at least one occurrence of the maximum. Iterate over the minimum, in decreasing order. You have some "connected components". How many elements can you pick from each component? How to make sure you have picked at least one occurrence of the maximum? The optimal subsequence must contain at least one occurrence of the maximum ($r$): suppose it doesn't; then you can just add one occurrence, at the cost of removing at most two elements, and this does not make your score smaller. Now you can iterate over the minimum value ($l$), in decreasing order. At any moment, you can pick elements with values in $[l, r]$. Then you have to support the queries "insert pick-able element" and "calculate score". The pick-able elements make some "connected components" of size $s$, and you can pick $\lceil s/2 \rceil$ elements from each. You can maintain the components with a DSU. You also want to pick an element with value $r$. For each component, check if a maximum-size selection from it can contain an element with value $r$. If this is not possible in any component, your score decreases by $1$. All this information can be maintained by storing, for each component, whether it contains $r$ in even positions and whether it contains $r$ in odd positions. Complexity: $O(n \alpha(n))$
[ "data structures", "dp", "dsu", "greedy", "implementation", "matrices", "sortings" ]
2,200
#include <bits/stdc++.h> using namespace std; #define nl "\n" #define nf endl #define ll long long #define pb push_back #define _ << ' ' << #define INF (ll)1e18 #define mod 998244353 #define maxn 110 int main() { ios::sync_with_stdio(0); cin.tie(0); #if !ONLINE_JUDGE && !EVAL ifstream cin("input.txt"); ofstream cout("output.txt"); #endif ll t; cin >> t; while (t--) { ll n; cin >> n; vector<ll> a(n + 1, 0); ll mx = 0; for (ll i = 1; i <= n; i++) cin >> a[i], mx = max(mx, a[i]); vector<ll> pr(n + 1, 0), sz(n + 1, 0); iota(pr.begin(), pr.end(), 0); vector<ll> mx_even(n + 1, 0), mx_odd(n + 1, 0); function<ll(ll)> find = [&](ll x) { if (x == pr[x]) return x; return pr[x] = find(pr[x]); }; ll total_sz = 0, cnt_mx = 0; auto toggle = [&](ll x) { sz[x] = 1; total_sz++; if (a[x] == mx) mx_even[x] = 1, cnt_mx++; }; auto is_mx = [&](ll a) { if (mx_even[a]) return true; if (sz[a] % 2) return false; if (mx_odd[a]) return true; return false; }; function<void(ll, ll)> onion = [&](ll a, ll b) { if (a <= 0 || a > n || b <= 0 || b > n) return; a = find(a); b = find(b); if (a == b) return; if (a > b) swap(a, b); pr[b] = a; total_sz -= (sz[a] + 1) / 2; total_sz -= (sz[b] + 1) / 2; cnt_mx -= (is_mx(a) + is_mx(b)); if (sz[a] % 2) { mx_even[a] |= mx_odd[b]; mx_odd[a] |= mx_even[b]; } else { mx_even[a] |= mx_even[b]; mx_odd[a] |= mx_odd[b]; } sz[a] += sz[b]; total_sz += (sz[a] + 1) / 2; cnt_mx += (is_mx(a)); }; map<ll, vector<ll>> events; for (ll i = 1; i <= n; i++) events[-a[i]].pb(i); ll ans = 0; for (auto [val, v] : events) { for (auto pos : v) toggle(pos); for (auto pos : v) { if (pos - 1 >= 1 && a[pos - 1] >= a[pos]) { onion(pos - 1, pos); } if (pos + 1 <= n && a[pos + 1] >= a[pos]) { onion(pos, pos + 1); } } ans = max(ans, mx - val + total_sz - (cnt_mx == 0)); }; cout << ans << nl; } return 0; }
2018
E1
Complex Segments (Easy Version)
\begin{quote} Ken Arai - COMPLEX \hfill ⠀ \end{quote} \textbf{This is the easy version of the problem. In this version, the constraints on $n$ and the time limit are lower. You can make hacks only if both versions of the problem are solved.} A set of (closed) segments is \textbf{complex} if it can be partitioned into some subsets such that - all the subsets have the same size; and - a pair of segments intersects \textbf{if and only if} the two segments are in the same subset. You are given $n$ segments $[l_1, r_1], [l_2, r_2], \ldots, [l_n, r_n]$. Find the maximum size of a \textbf{complex} subset of these segments.
Solve for a fixed $m$ (size of the subsets). $m = 1$ is easy. Can you do something similar for other $m$? Solve for a fixed $k$ (number of subsets). If you have a $O(n \log n)$ solution for a fixed $m$, note that there exists a faster solution! Let's write a function max_k(m), which returns the maximum $k$ such that there exists a partition of $k$ valid sets containing $m$ intervals each. max_k works in $O(n \log n)$ in the following way (using a lazy segment tree): (wlog) $r_i \leq r_{i+1}$; for each $i$ not intersecting the previous subset, add $1$ on the interval $[l[i], r[i]]$; as soon as a point belongs to $m$ intervals, they become a subset; return the number of subsets. For a given $k$, you can binary search the maximum $m$ such that max_k(m) $\geq k$ in $O(n \log^2 n)$. The problem asks for the maximum $mk$. Since $mk \leq n$, for any constant $C$ either $m \leq C$ or $k \leq n/C$. For $C = (n \log n)^{1/2}$, the total complexity becomes $O((n \log n)^{3/2})$, which is enough to solve
[ "binary search", "data structures", "divide and conquer", "dsu", "greedy", "math", "sortings" ]
3,300
#include <bits/stdc++.h> using namespace std; #define nl "\n" #define nf endl #define ll long long #define pb push_back #define _ << ' ' << #define INF (ll)1e18 #define mod 998244353 #define maxn 110 static constexpr int MAXN = 2.5e4; static constexpr int MAXT = 2 * MAXN; static constexpr int THRESHOLD = 600; // about sqrt{n log n} static constexpr int THRESHOLD2 = 256; struct Segment { struct Node { int mx; int lz; Node(int _mx = 0, int _lz = 0) : mx(_mx), lz(_lz) {} }; int n; vector<Node> t; Segment(int _n) { for (n = 1; n < _n; n <<= 1); t.resize(2 * n); } void prop(int i) { if (t[i].lz && i < n) { t[2*i ].mx += t[i].lz; t[2*i+1].mx += t[i].lz; t[2*i ].lz += t[i].lz; t[2*i+1].lz += t[i].lz; t[i].lz = 0; } } void upd(int i, int a, int b, int l, int r, int x) { prop(i); if (b <= l || r <= a) return; if (l <= a && b <= r) { t[i].mx += x; t[i].lz += x; } else { int m = (a + b) / 2; upd(2*i , a, m, l, r, x); upd(2*i+1, m, b, l, r, x); t[i].mx = max(t[2*i].mx, t[2*i+1].mx); } } int get_max() { return t[1].mx; } }; struct MySegment : public Segment { vector<array<int, 3>> upds; MySegment(int n) : Segment(n) {} void add(int l, int r, int x) { upds.push_back({l, r, x}); upd(1, 0, n, l, r, x); } void clean() { for (auto [l, r, x] : upds) { upd(1, 0, n, l, r, -x); } upds.clear(); } }; int maxpartition(int n, vector<int> l, vector<int> r) { vector<int> idxs(n); iota(begin(idxs), end(idxs), 0); sort(begin(idxs), end(idxs), [&](int i, int j){ return r[i] < r[j]; }); MySegment st(2 * n + 1); auto max_k = [&](int m) -> int { // return max{k : one can form k subsets of m elements each} int k = 0; int last_r = -1; for (int i : idxs) { if (l[i] <= last_r) continue; st.add(l[i], r[i]+1, 1); if (st.get_max() >= m) { ++k; last_r = r[i]; st.clean(); } } st.clean(); return k; }; int ans = 1; for (int m = 1; m <= min(n, THRESHOLD); ++m) ans = max(ans, m * max_k(m)); for (int k = 1; k <= n / THRESHOLD; ++k) { int bsl = 0, bsu = n + 1; while (bsu - bsl > 1) { int bsm = (bsl + bsu) / 2; if 
(max_k(bsm) >= k) bsl = bsm; else bsu = bsm; } ans = max(ans, k * bsl); } return ans; } int main() { ios::sync_with_stdio(0); cin.tie(0); #if !ONLINE_JUDGE && !EVAL ifstream cin("input.txt"); ofstream cout("output.txt"); #endif ll t; cin >> t; while (t--) { ll n; cin >> n; vector<int> l(n, 0), r(n, 0); for (ll i = 0; i < n; i++) cin >> l[i]; for (ll i = 0; i < n; i++) cin >> r[i]; ll ans = maxpartition(n, l, r); cout << ans << nl; } return 0; }
2018
E2
Complex Segments (Hard Version)
\begin{quote} Ken Arai - COMPLEX \hfill ⠀ \end{quote} \textbf{This is the hard version of the problem. In this version, the constraints on $n$ and the time limit are higher. You can make hacks only if both versions of the problem are solved.} A set of (closed) segments is \textbf{complex} if it can be partitioned into some subsets such that - all the subsets have the same size; and - a pair of segments intersects \textbf{if and only if} the two segments are in the same subset. You are given $n$ segments $[l_1, r_1], [l_2, r_2], \ldots, [l_n, r_n]$. Find the maximum size of a \textbf{complex} subset of these segments.
Now let's go back to max_k(m). It turns out you can implement it in $O(n \alpha(n))$. First of all, let's make all the endpoints distinct, in such a way that two intervals intersect if and only if they were intersecting before. Let's maintain a binary string of size $n$, initially containing only ones, that can support the following queries: set the bit in position $p$ to 0; find the nearest 1 to the left of position $p$. This can be maintained with a DSU, where the components are the maximal intervals of the form 100...00. Now let's reuse the previous solution (sweeping $r$ from left to right), but instead of a segment tree we will maintain a binary string with the following information: the positions $> r$ store 1; the positions $\leq r$ store 1 if and only if the value in that position (in the previous solution) is a suffix max. So the queries become: add $1$ to $[l, r]$: $r$ changes from its previous value $r'$, so you have to set the elements in $[r'+1, r-1]$ to $0$; the only other element that changes is the nearest 1 to the left of position $l$, which does not represent a suffix max anymore. find the maximum: it's equal to the number of suffix maxima, which depends on $r$ and on the number of components. This solution allows us to replace a $O(\log n)$ factor with a $O(\alpha(n))$ factor. Complexity: $O(n \sqrt n \alpha(n))$
[ "binary search", "data structures", "divide and conquer", "dsu", "greedy", "math", "sortings" ]
3,400
#include <bits/stdc++.h> using namespace std; #define nl "\n" #define nf endl #define ll long long #define pb push_back #define _ << ' ' << #define INF (ll)1e18 #define mod 998244353 #define maxn 110 int N; struct DSU { int cc; vector<int> arr, ans; int find(int node) { if (arr[node] < 0) return node; return arr[node] = find(arr[node]); } void join(int u, int v) { u = find(u), v = find(v); if (u == v) return; cc--; if (arr[u] > arr[v]) swap(u, v); arr[u] += arr[v]; arr[v] = u; ans[u] = min(ans[u], ans[v]); } void reset() { ans.resize(2 * N + 1); arr.assign(2 * N + 1, -1); iota(ans.begin(), ans.end(), 0); cc = 2 * N + 1; } }; DSU dsu; vector<pair<int, int>> rangers; int max_groups(int group_size) { dsu.reset(); int curr_l = 0; int last_r = 1; int cnt = 0; int actual_l = 0; for (auto [l, r]: rangers) { if (l < curr_l) continue; while (last_r < r - 1) { dsu.join(last_r, last_r - 1); last_r++; } last_r++; int fake = dsu.ans[dsu.find(l - 1)]; if (fake > actual_l) dsu.join(fake, fake - 1); if (dsu.cc - (2 * N + 1 - r) - cnt - 1 == group_size) { cnt++; curr_l = r; int curr = r - 1; while (curr != actual_l) { dsu.join(curr, curr - 1); curr = dsu.ans[dsu.find(curr)]; } actual_l = last_r; last_r++; } } return cnt; } int maxpartition(int N, vector<int> L, vector<int> R) { ::N = N; vector<array<int, 3>> positions(2 * N); for (int i = 0; i < N; i++) { positions[2 * i] = {L[i], 0, i}; positions[2 * i + 1] = {R[i], 1, i}; } sort(positions.begin(), positions.end()); rangers.resize(N); for (int i = 0; i < 2 * N; i++) { auto [z, wh, pos] = positions[i]; if (!wh) rangers[pos].first = i + 1; else rangers[pos].second = i + 2; } sort(rangers.begin(), rangers.end(), [&](const auto &a, const auto &b) { return a.second < b.second; }); int ans = 0; function<void(int, int, int, int)> solve = [&](int tl, int tr, int al, int ar) { if (tl > tr) return; if (al == ar) { ans = max(ans, tr * ar); return; } int tm = (tl + tr) / 2; int am = max_groups(tm); ans = max(ans, tm * am); solve(tl, tm - 1, 
am, ar); solve(tm + 1, tr, al, am); }; solve(1, N, 0, N); return ans; } int main() { ios::sync_with_stdio(0); cin.tie(0); #if !ONLINE_JUDGE && !EVAL ifstream cin("input.txt"); ofstream cout("output.txt"); #endif ll t; cin >> t; while (t--) { ll n; cin >> n; vector<int> l(n, 0), r(n, 0); for (ll i = 0; i < n; i++) cin >> l[i]; for (ll i = 0; i < n; i++) cin >> r[i]; ll ans = maxpartition(n, l, r); cout << ans << nl; } return 0; }
2018
F3
Speedbreaker Counting (Hard Version)
\begin{quote} NightHawk22 - Isolation \hfill ⠀ \end{quote} \textbf{This is the hard version of the problem. In the three versions, the constraints on $n$ and the time limit are different. You can make hacks only if all the versions of the problem are solved.} This is the statement of \textbf{Problem D1B}: - There are $n$ cities in a row, numbered $1, 2, \ldots, n$ left to right. - At time $1$, you conquer exactly one city, called the starting city. - At time $2, 3, \ldots, n$, you can choose a city adjacent to the ones conquered so far and conquer it. You win if, for each $i$, you conquer city $i$ at a time no later than $a_i$. A winning strategy may or may not exist, also depending on the starting city. How many starting cities allow you to win? For each $0 \leq k \leq n$, count the number of arrays of positive integers $a_1, a_2, \ldots, a_n$ such that - $1 \leq a_i \leq n$ for each $1 \leq i \leq n$; - the answer to \textbf{Problem D1B} is $k$. The answer can be very large, so you have to calculate it modulo a given prime $p$.
Suppose you are given a starting city and you want to win. Find several strategies to win (if possible) and try to work with the simplest ones. The valid starting cities are either zero, or all the cities in $I := \cap_{i=1}^n [i - a_i + 1, i + a_i - 1] = [l, r]$. Now you have some bounds on the $a_i$. Fix the interval $I$ and try to find a (slow) DP. Counting paths seems easier than counting arrays. Make sure that, for each array, you make exactly one path (or a number of paths which is easy to handle). How many distinct states do you calculate in your DP? For a fixed starting city, if you can win, this strategy works: [Strategy 1] If there is a city on the right whose distance is $t$ and whose deadline is in $t$ turns, go to the right. Otherwise, go to the left. Proof: All constraints on the right hold. This strategy minimizes the time to reach any city on the left. So, if any strategy works, this strategy works too. For a fixed starting city, if you can win, this strategy works: [Strategy 2] If there is a city whose distance is $t$ and whose deadline is in $t$ turns, go to that direction. Otherwise, go to any direction. The valid starting cities are either zero, or all the cities in $I := \cap_{i=1}^n [i - a_i + 1, i + a_i - 1] = [l, r]$. Proof: The cities outside $I$ are losing, because there exists at least one unreachable city. Let's start from any city $x$ in $I$, and use Strategy 2. You want to show that, for any $x$ in $I$, Strategy 2 can visit all cities in $I$ first, then all the other cities. Then, you can conclude that either all the cities in $I$ are winning, or they are all losing. The interval $I$ gives bounds on the $a_i$: specifically, $a_i \geq \max(i-l+1, r-i+1)$. Then, you can verify that visiting the interval $I$ first does not violate Strategy 2. If you use Strategy 1, the first move on the right determines $l$. Let's iterate on the (non-empty) interval $I$. Let's calculate the bounds $a_i \geq \max(i-l+1, r-i+1)$. 
Note that Strategy 1 is deterministic (i.e., it gives exactly one visiting order for each fixed pair (starting city, $a$)). From now, you will use Strategy 1. Now you will calculate the number of pairs ($a$, visiting order) such that the cities in $I$ are valid starting cities (and there might be other valid starting cities). Let's define dp[i][j][k] = number of pairs ($a$, visiting order), restricted to the interval $[i, j]$, where $k =$ "are you forced to go to the right in the next move?". Here are the main ideas to find the transitions: If you go from $[i+1, j]$ to $[i, j]$, you must ensure that $a_i \geq \max(i-l+1, r-i+1, j-i+1)$ (because you visit it at time $j-i+1$). Also, $k$ must be $0$. If you go from $[i, j-1]$ to $[i, j]$, and you want to make $k = 0$, you must make $a_j = j-i+1$. It means that $j$ was the city that was enforcing you to go to the right. In my code, the result is stored in int_ans[i][j]. Now you want to calculate the number of pairs ($a$, visiting order) such that the cities in $I$ are the only valid starting cities. This is similar to 2D prefix sums, and it's enough to make int_ans[i][j] -= int_ans[i - 1][j] + int_ans[i][j + 1] - int_ans[i - 1][j + 1]. Since, for a fixed $a$, the visiting order only depends on the starting city, the number of $a$ for the interval $[i, j]$ is now int_ans[i][j] / (j - i + 1). You have solved $k \geq 1$. The answer for $k = 0$ is just $n^n$ minus all the other answers. In the previous section, you are running the same DP for $O(n^2)$ different "bound arrays" on the $a_i$ (in particular, $O(n)$ arrays for each $k$). Now you want to solve a single $k$ with a single DP. For a fixed $k$, you can notice that, if you run the DP on an array of length $2n$ instead of $n$, the bound array obtained from $I = [n-k+1, n]$ contains all the bound arrays you wanted as subarrays of length $n$. So you can run the DP and get all the results as dp[i][i + n - 1][0]. You still have $O(n^3)$ distinct states in total. 
How to make "bound arrays" simpler? It turns out that you can handle $l$ and $r$ differently! You can create bound arrays only based on $r$ (and get $O(n^2)$ distinct states), and find $l$ using the Corollary of Lemma 2. The transitions before finding $l$ are very simple (you always go to the left). So a possible way to get $O(n^2)$ complexity is processing Strategy 1 and the DP in reverse order (from time $n$ to time $1$). Complexity: $O(n^2)$
[ "dp", "greedy", "math" ]
3,100
#include <bits/stdc++.h> using namespace std; int t,n,p,dp[6011][6011][2],lim[6011],w[6011][6011],ans[3011]; void solve(int x) {//printf("==========================solve(%d)\n",x); for(int i=1;i<=x;++i)lim[i]=x-i+1; for(int i=x+1;i<=n;++i)lim[i]=i-x+1; // printf("lim:");for(int i=1;i<=n;++i)printf("%d ",lim[i]);putchar(10); // printf("res[%d]:",x);for(int i=1;i<=n;++i)printf("%d ",res[x][i]);putchar(10); } int main() { scanf("%d",&t);while(t--) { scanf("%d%d",&n,&p);//printf(">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>. n:%d p:%d\n",n,p); for(int i=1;i<=n;++i)lim[i]=n-i+1; for(int i=n+1;i<2*n;++i)lim[i]=i-n+1; for(int i=1;i<=n;++i)ans[i]=0; for(int i=1;i<=2*n;++i)for(int j=i;j<=2*n;++j)dp[i][j][0]=dp[i][j][1]=0; for(int r=1;r<2*n;++r) { int cur=1; for(int i=r;i>=1;--i) { if(r-i+1<lim[i])w[i][r]=0; else w[i][r]=cur; cur=1ll*cur*(n-max(lim[i],r-i+1)+1)%p; } } for(int i=1;i<=n;++i)dp[i][i+n-1][1]=1; for(int i=n;i;--i) { for(int l=1;l+i-1<2*n;++l) { int r=l+i-1; if(r>=n) { ans[r-n+1]=(ans[r-n+1]+1ll*w[l][r]*dp[l][r][1])%p; ans[r-n]=(ans[r-n]-1ll*w[l][r]*dp[l][r][1])%p; } if(l<r) { dp[l+1][r][0]=(dp[l+1][r][0]+1ll*dp[l][r][0]*(n-max(lim[l],r-l+1)+1))%p; dp[l][r-1][1]=(dp[l][r-1][1]+1ll*dp[l][r][0]*(n-max(lim[r],r-l+1)+1))%p; if(lim[l]<=r-l+1)dp[l+1][r][0]=(dp[l+1][r][0]+dp[l][r][1])%p; dp[l][r-1][1]=(dp[l][r-1][1]+1ll*dp[l][r][1]*(n-max(lim[r],r-l+1)+1))%p; } } } ans[0]=1;for(int i=1;i<=n;++i)ans[0]=1ll*ans[0]*n%p; for(int i=1;i<=n;++i)ans[0]=(ans[0]-ans[i])%p; for(int i=0;i<=n;++i)printf("%d ",(ans[i]%p+p)%p);putchar(10); } }
2019
A
Max Plus Size
\begin{quote} EnV - Dynasty \hfill ⠀ \end{quote} You are given an array $a_1, a_2, \ldots, a_n$ of positive integers. You can color some elements of the array red, but there cannot be two adjacent red elements (i.e., for $1 \leq i \leq n-1$, at least one of $a_i$ and $a_{i+1}$ must not be red). Your score is the maximum value of a red element plus the number of red elements. Find the maximum score you can get.
Can you reach the score $\max(a) + \lceil n/2 \rceil$? Can you reach the score $\max(a) + \lceil n/2 \rceil - 1$? The maximum red element is $\leq \max(a)$, and the maximum number of red elements is $\lceil n/2 \rceil$. Can you reach the score $\max(a) + \lceil n/2 \rceil$? If $n$ is even, you always can, by either choosing all the elements in even positions or all the elements in odd positions (at least one of these choices contains $\max(a)$). If $n$ is odd, you can if and only if there is at least one occurrence of $\max(a)$ in an odd position. Otherwise, you can choose the even positions and your score is $\max(a) + \lceil n/2 \rceil - 1$. Complexity: $O(n)$
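This case analysis amounts to tracking the maximum over odd and over even (1-based) positions separately: all-odd positions give $\lceil n/2 \rceil$ red elements, all-even give $\lfloor n/2 \rfloor$. A minimal sketch (the function name is an assumption):

```cpp
#include <cassert>
#include <algorithm>
#include <vector>

// Maximum score = max red element + number of red elements, with no two
// adjacent red elements. It suffices to compare "pick all odd positions"
// against "pick all even positions" (1-based).
long long max_plus_size(const std::vector<long long>& a) {
    long long n = a.size(), odd_max = 0, even_max = 0;
    for (long long i = 1; i <= n; i++) {
        if (i % 2) odd_max = std::max(odd_max, a[i - 1]);
        else       even_max = std::max(even_max, a[i - 1]);
    }
    long long best = odd_max + (n + 1) / 2;          // ceil(n/2) red elements
    if (n >= 2) best = std::max(best, even_max + n / 2);  // floor(n/2) red elements
    return best;
}
```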
[ "brute force", "dp", "greedy" ]
800
#include <bits/stdc++.h> using namespace std; #define nl "\n" #define nf endl #define ll long long #define pb push_back #define _ << ' ' << #define INF (ll)1e18 #define mod 998244353 #define maxn 110 int main() { ios::sync_with_stdio(0); cin.tie(0); #if !ONLINE_JUDGE && !EVAL ifstream cin("input.txt"); ofstream cout("output.txt"); #endif ll t; cin >> t; while (t--) { ll n; cin >> n; vector<ll> a(n + 1, 0); for (ll i = 1; i <= n; i++) cin >> a[i]; ll ans = 0; for (ll i = 1; i <= n; i++) { ll sz = n / 2 + (n % 2 == 1 && i % 2 == 1); ans = max(ans, a[i] + sz); } cout << ans << nl; } return 0; }
2019
B
All Pairs Segments
\begin{quote} Shirobon - FOX \hfill ⠀ \end{quote} You are given $n$ points on the $x$ axis, at increasing positive integer coordinates $x_1 < x_2 < \ldots < x_n$. For each pair $(i, j)$ with $1 \leq i < j \leq n$, you draw the segment $[x_i, x_j]$. The segments are closed, i.e., a segment $[a, b]$ contains the points $a, a+1, \ldots, b$. You are given $q$ queries. In the $i$-th query, you are given a positive integer $k_i$, and you have to determine how many points with integer coordinates are contained in exactly $k_i$ segments.
Can you determine quickly how many segments contain a point $p$? The segments that contain $p$ are the ones with $l \leq p$ and $r \geq p$. Determine how many segments contain: point $x_1$; points $x_1 + 1, \ldots, x_2 - 1$; point $x_2$; $\ldots$; point $x_n$. First, let's focus on determining how many segments contain some point $p$. These are the segments with $l \leq p \leq r$. So a point $x_i < p < x_{i+1}$ satisfies $x_1 \leq p, \ldots, x_i \leq p$, and $p \leq x_{i+1}, \ldots, p \leq x_n$. It means that you have found $x_{i+1} - x_i - 1$ points contained in exactly $i(n-i)$ segments (because there are $i$ possible left endpoints and $n-i$ possible right endpoints). Similarly, the point $p = x_i$ is contained in $i(n-i+1) - 1$ segments (you have to subtract $1$ for the segment $[x_i, x_i]$, which you do not draw). So you can use a map that stores how many points are contained in exactly $x$ segments, and update the map at the positions $i(n-i)$ and $i(n-i+1) - 1$. Complexity: $O((n + q) \log n)$
[ "implementation", "math" ]
1,200
#include <bits/stdc++.h>
using namespace std;
#define nl "\n"
#define ll long long

int main() {
    ios::sync_with_stdio(0);
    cin.tie(0);
    ll t;
    cin >> t;
    while (t--) {
        ll n, q;
        cin >> n >> q;
        vector<ll> x(n + 1, 0);
        for (ll i = 1; i <= n; i++) cin >> x[i];
        map<ll, ll> mp;
        // points strictly between x[i] and x[i+1] lie in i*(n-i) segments
        for (ll i = 1; i <= n - 1; i++) mp[i * (n - i)] += x[i + 1] - x[i] - 1;
        // the point x[i] itself lies in i*(n-i+1) - 1 segments
        for (ll i = 1; i <= n; i++) mp[i * (n - i + 1) - 1]++;
        while (q--) {
            ll k;
            cin >> k;
            cout << mp[k] << ' ';
        }
        cout << nl;
    }
    return 0;
}
2020
A
Find Minimum Operations
You are given two integers $n$ and $k$. In one operation, you can subtract any power of $k$ from $n$. Formally, in one operation, you can replace $n$ by $(n-k^x)$ for any non-negative integer $x$. Find the minimum number of operations required to make $n$ equal to $0$.
How many operations will have $x = 0$? Try $k = 2$: the answer will be the number of ones in the binary representation of $n$. If $k = 1$, we can only subtract $1$ in each operation, and our answer will be $n$. Otherwise, first note that we have to apply at least $n \bmod k$ operations of subtracting $k^0$ (all the other operations do not change the value of $n \bmod k$). Once $n$ is divisible by $k$, solving the problem for $n$ is equivalent to solving it for $\frac{n}{k}$, because subtracting $k^0$ becomes useless: if we apply one $k^0$ subtraction, then $n \bmod k$ becomes $k-1$, and we have to apply $k-1$ more such operations to make $n$ divisible by $k$ again (the final result, $0$, is divisible by $k$). Instead of these $k$ operations of subtracting $k^0$, a single operation subtracting $k^1$ does the job. So our final answer is the sum of the digits of $n$ in base $k$. The complexity of the solution is $O(\log_{k}{n})$ per test case.
[ "bitmasks", "brute force", "greedy", "math", "number theory" ]
800
#include <bits/stdc++.h>
using namespace std;

int find_min_oper(int n, int k) {
    if (k == 1) return n;
    int ans = 0;
    while (n) {
        ans += n % k;  // sum of digits of n in base k
        n /= k;
    }
    return ans;
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n, k;
        cin >> n >> k;
        cout << find_min_oper(n, k) << "\n";
    }
    return 0;
}
2020
B
Brightness Begins
Imagine you have $n$ light bulbs numbered $1, 2, \ldots, n$. \textbf{Initially, all bulbs are on}. To flip the state of a bulb means to turn it off if it used to be on, and to turn it on otherwise. Next, you do the following: - for each $i = 1, 2, \ldots, n$, flip the state of all bulbs $j$ such that $j$ is divisible by $i^\dagger$. After performing all operations, there will be several bulbs that are still on. Your goal is to make this number exactly $k$. Find the smallest suitable $n$ such that after performing the operations there will be exactly $k$ bulbs on. We can show that an answer always exists. $^\dagger$ An integer $x$ is divisible by $y$ if there exists an integer $z$ such that $x = y\cdot z$.
The final state of the $i$-th bulb (on or off) is independent of $n$. The final state of the $i$-th bulb tells us about the parity of the number of divisors of $i$. For any bulb $i$, its final state depends on the parity of the number of divisors of $i$: if $i$ has an even number of divisors, then bulb $i$ will be on; otherwise it will be off. This translates to: if $i$ is not a perfect square, bulb $i$ will be on; else it will be off. So now the problem is to find the $k$-th number which is not a perfect square. This can be done by binary searching for the value of $n$ such that $n - \lfloor \sqrt{n} \rfloor = k$, or with the direct formula $n = \lfloor k + \sqrt{k} + 0.5 \rfloor$. For a proof of the second formula you can refer to this book, page 141, E18.
[ "binary search", "math" ]
1,200
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        long long k;
        cin >> k;
        // n = floor(k + sqrt(k) + 0.5); keep the rounded root in long long
        cout << k + (long long)(sqrtl(k) + 0.5) << "\n";
    }
    return 0;
}
2020
C
Bitwise Balancing
You are given three non-negative integers $b$, $c$, and $d$. Please find a non-negative integer $a \in [0, 2^{61}]$ such that $(a\, |\, b)-(a\, \&\, c)=d$, where $|$ and $\&$ denote the bitwise OR operation and the bitwise AND operation, respectively. If such an $a$ exists, print its value. If there is no solution, print a single integer $-1$. If there are multiple solutions, print any of them.
Try to find some independent operations/combinations. The expression is independent for each digit in the binary representation. The first observation is that the equation $(a|b) - (a\&c) = d$ is bitwise independent: a tuple of bits of $a$, $b$, and $c$ corresponding to the same power of $2$ only affects the bit of $d$ at that same power of $2$. This is because: (1) we are performing a subtraction, so no extra carry is generated to the next power of $2$; (2) no tuple of bits of $a$, $b$, and $c$ at the same power of $2$ can produce $-1$ at that power, as that would require the bit of $a|b$ to be zero and the bit of $a\&c$ to be one; the former condition requires the bit of $a$ to be zero, while the latter requires it to be one, which is contradicting. Now, let us consider all $8$ cases of bits of $a$, $b$, $c$ and the corresponding bit value of $d$ in the following table: $\begin{array}{ccc|c} a & b & c & d = (a|b) - (a\&c) \\ \hline 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 1 & 1 & 1 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \end{array}$ So, clearly, for each combination of bits of $b$, $c$, and $d$, we can find the value of the corresponding bit in $a$ (if the bits of $b$ and $c$ are both $1$, the bit of $a$ is $1 - d$; otherwise it equals $d$), provided it is not the case that the bit values of $b$, $c$, $d$ are $1,0,0$ or $0,1,1$ — those combinations are not possible, and in that case the answer is $-1$.
[ "bitmasks", "hashing", "implementation", "math", "schedules", "ternary search" ]
1,400
#include <bits/stdc++.h>
#define ll long long
using namespace std;

void solve() {
    ll a = 0, b, c, d, pos = 1, mask = 1;
    cin >> b >> c >> d;
    for (ll i = 0; i < 62; i++) {
        ll bit_b = (b & mask) ? 1 : 0;
        ll bit_c = (c & mask) ? 1 : 0;
        ll bit_d = (d & mask) ? 1 : 0;
        if ((bit_b && !bit_c && !bit_d) || (!bit_b && bit_c && bit_d)) {
            pos = 0;  // impossible combination of bits, no valid a
            break;
        }
        if (bit_b && bit_c) a += (1ll - bit_d) * mask;
        else a += bit_d * mask;
        mask <<= 1;
    }
    if (pos) cout << a << "\n";
    else cout << -1 << "\n";
}

int main() {
    ll t;
    cin >> t;
    while (t--) solve();
}
2020
D
Connect the Dots
One fine evening, Alice sat down to play the classic game "Connect the Dots", but with a twist. To play the game, Alice draws a straight line and marks $n$ points on it, indexed from $1$ to $n$. Initially, there are no arcs between the points, so they are all disjoint. After that, Alice performs $m$ operations of the following type: - She picks three integers $a_i$, $d_i$ ($1 \le d_i \le 10$), and $k_i$. - She selects points $a_i, a_i+d_i, a_i+2d_i, a_i+3d_i, \ldots, a_i+k_i\cdot d_i$ and connects each pair of these points with arcs. After performing all $m$ operations, she wants to know the number of connected components$^\dagger$ these points form. Please help her find this number. $^\dagger$ Two points are said to be in one connected component if there is a path between them via several (possibly zero) arcs and other points.
The value of $d$ is very small. Try using Disjoint Set Union (DSU). The main idea is to take advantage of the low upper bound of $d_i$ and apply Disjoint Set Union. We maintain $dp[j][w]$, which denotes the number of ranges with common difference $w$ that cover node $j$ and do not end at $j$ (i.e. $j \neq a + k \cdot w$ for that range), and $id[j][w]$, which denotes the node that currently represents the connected component that node $j$ is part of. The size of both $dp$ and $id$ is $n \cdot \max d_i = 10n$. We maintain two other arrays, $\mathit{start\_cnt}[j][w]$ and $\mathit{end\_cnt}[j][w]$, which store the number of triplets with $j$ as $a_i$ and as $a_i + k_i d_i$, respectively, and with $w$ as $d_i$; they mark the beginnings and ends of ranges. We now apply Disjoint Set Union: for the $i$-th triplet, we let $a_i$ be the representative of the set created by $a_i, a_i + d_i, \ldots, a_i + k_i d_i$. The transitions of $dp$, processing $j$ from $1$ to $n$ and, for each $j$, every $w$ from $1$ to $10$ with $j - w \geq 1$, are as follows: start with $dp[j][w] = \mathit{start\_cnt}[j][w] - \mathit{end\_cnt}[j][w]$ (these changes account for $j$ being the start or the end of some triplet with $d_i = w$); then, if $dp[j-w][w]$ is non-zero, perform a union operation (of DSU) between node $j$ and $id[j-w][w]$, increment $dp[j][w]$ by $dp[j-w][w]$, and assign $id[j][w] := id[j-w][w]$. This unites the ranges passing over node $j$. The answer is the number of nodes that are their own representative. The net time complexity = (updating $dp[j][w]$ by $\mathit{start\_cnt}$ and $\mathit{end\_cnt}$) + (union operations between $id[j-w][w]$ and $j$ over all $w$) + (incrementing $dp$ values by $dp[j-w][w]$) + (copying $id$ values) $= 10n + 10n\log n + 10n + 10n$ in the worst case $= O(\max d_i \cdot n \log n)$.
[ "brute force", "dp", "dsu", "graphs", "math", "trees" ]
1,800
#include <bits/stdc++.h>
#define ll long long
using namespace std;

const ll N = 2e5 + 2;
const ll C = 10 + 1;
vector<ll> par(N), sz(N, 0);
vector<vector<ll>> dp(N, vector<ll>(C, 0)), ind(N, vector<ll>(C, 0)),
    start_cnt(N, vector<ll>(C, 0)), end_cnt(N, vector<ll>(C, 0));

ll find_par(ll a) {
    if (par[a] == a) return a;
    return par[a] = find_par(par[a]);  // path compression
}

void unite(ll a, ll b) {
    a = find_par(a), b = find_par(b);
    if (a == b) return;
    if (sz[b] > sz[a]) swap(a, b);  // union by size
    sz[a] += sz[b];
    par[b] = a;
}

void reset(ll n) {
    for (ll i = 1; i <= n; i++) {
        par[i] = i;
        sz[i] = 1;
        for (ll j = 1; j < C; j++) {
            dp[i][j] = start_cnt[i][j] = end_cnt[i][j] = 0;
            ind[i][j] = i;
        }
    }
}

void solve() {
    ll n, m, a, d, k;
    cin >> n >> m;
    reset(n);
    for (ll i = 0; i < m; i++) {
        cin >> a >> d >> k;
        start_cnt[a][d]++;
        end_cnt[a + k * d][d]++;
    }
    for (ll i = 1; i <= n; i++) {
        for (ll j = 1; j < C; j++) {
            dp[i][j] = start_cnt[i][j] - end_cnt[i][j];
            if (i - j < 1) continue;
            if (dp[i - j][j]) {
                unite(ind[i - j][j], i);
                ind[i][j] = ind[i - j][j];
                dp[i][j] += dp[i - j][j];
            }
        }
    }
    ll ans = 0;
    for (ll i = 1; i <= n; i++)
        if (find_par(i) == i) ans++;
    cout << ans << "\n";
}

int main() {
    ll t;
    cin >> t;
    while (t--) solve();
}
2020
E
Expected Power
You are given an array of $n$ integers $a_1,a_2,\ldots,a_n$. You are also given an array $p_1, p_2, \ldots, p_n$. Let $S$ denote the random \textbf{multiset} (i. e., it may contain equal elements) constructed as follows: - Initially, $S$ is empty. - For each $i$ from $1$ to $n$, insert $a_i$ into $S$ with probability $\frac{p_i}{10^4}$. Note that each element is inserted independently. Denote $f(S)$ as the bitwise XOR of all elements of $S$. Please calculate the expected value of $(f(S))^2$. Output the answer modulo $10^9 + 7$. Formally, let $M = 10^9 + 7$. It can be shown that the answer can be expressed as an irreducible fraction $\frac{p}{q}$, where $p$ and $q$ are integers and $q \not \equiv 0 \pmod{M}$. Output the integer equal to $p \cdot q^{-1} \bmod M$. In other words, output such an integer $x$ that $0 \le x < M$ and $x \cdot q \equiv p \pmod{M}$.
Try to find the expected value of $f(S)$ rather than $(f(S))^2$. Write the binary representation of $f(S)$ and expand $(f(S))^2$. Let the binary representation of $f(S)$ be $b_{20}b_{19}\ldots b_{0}$. Then $(f(S))^2 = \sum_{i=0}^{20} \sum_{j=0}^{20} b_i b_j \cdot 2^{i+j}$. So by linearity of expectation, if we compute the expected value of $b_i b_j$ for every pair $(i,j)$, we are done. We can achieve this by dynamic programming: for each pair $(i,j)$, there are only $4$ possible values of $(b_i, b_j)$, and for each of them we maintain the probability of reaching it while inserting the elements one by one. The complexity of the solution is $O(n \log^2 (\max a_i))$.
[ "bitmasks", "dp", "math", "probabilities" ]
2,000
#include <bits/stdc++.h>
using namespace std;

const int mod = 1e9 + 7;
const int bits = 11;

int fast_exp(int b, int e, int mod) {
    int ans = 1;
    while (e) {
        if (e & 1) ans = (1ll * ans * b) % mod;
        b = (1ll * b * b) % mod;
        e >>= 1;
    }
    return ans;
}

int inv(int n) { return fast_exp(n, mod - 2, mod); }

const int inverse_1e4 = inv(10000);

// dp[i][j][k][l] = probability that bit i of the XOR equals k and bit j equals l
int dp[bits][bits][2][2];

void transition(int a, int p) {
    p = (1ll * p * inverse_1e4) % mod;
    int negp = (mod + 1 - p) % mod;
    int bin[bits];
    for (int i = 0; i < bits; i++) {
        bin[i] = a & 1;
        a >>= 1;
    }
    for (int i = 0; i < bits; i++) {
        for (int j = 0; j < bits; j++) {
            int temp[2][2];
            for (int k : {0, 1})
                for (int l : {0, 1})
                    temp[k][l] = (1ll * dp[i][j][k][l] * negp +
                                  1ll * dp[i][j][k ^ bin[i]][l ^ bin[j]] * p) % mod;
            for (int k : {0, 1})
                for (int l : {0, 1})
                    dp[i][j][k][l] = temp[k][l];
        }
    }
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<int> a(n), p(n);
        for (int i = 0; i < n; i++) cin >> a[i];
        for (int i = 0; i < n; i++) cin >> p[i];
        for (int i = 0; i < bits; i++)
            for (int j = 0; j < bits; j++)
                dp[i][j][0][0] = 1;
        for (int i = 0; i < n; i++) transition(a[i], p[i]);
        int ans = 0;
        for (int i = 0; i < bits; i++) {
            for (int j = 0; j < bits; j++) {
                int pw2 = (1ll << (i + j)) % mod;
                ans = (ans + 1ll * pw2 * dp[i][j][1][1]) % mod;
                for (int k : {0, 1})
                    for (int l : {0, 1})
                        dp[i][j][k][l] = 0;
            }
        }
        cout << ans << "\n";
    }
    return 0;
}
2020
F
Count Leaves
Let $n$ and $d$ be positive integers. We build the divisor tree $T_{n,d}$ as follows: - The root of the tree is a node marked with number $n$. This is the $0$-th layer of the tree. - For each $i$ from $0$ to $d - 1$, for each vertex of the $i$-th layer, do the following. If the current vertex is marked with $x$, create its children and mark them with all possible distinct divisors$^\dagger$ of $x$. These children will be in the $(i+1)$-st layer. - The vertices on the $d$-th layer are the leaves of the tree. For example, $T_{6,2}$ (the divisor tree for $n = 6$ and $d = 2$) looks like this: Define $f(n,d)$ as the number of leaves in $T_{n,d}$. Given integers $n$, $k$, and $d$, please compute $\sum\limits_{i=1}^{n} f(i^k,d)$, modulo $10^9+7$. $^\dagger$ In this problem, we say that an integer $y$ is a divisor of $x$ if $y \ge 1$ and there exists an integer $z$ such that $x = y \cdot z$.
$f$ satisfies a special property for fixed $d$: $f$ is multiplicative, i.e. $f(x \cdot y, d) = f(x, d) \cdot f(y, d)$ if $x, y$ are coprime. It can be observed that the number of leaves is equal to the number of ways of choosing $d+1$ integers $a_0, a_1, a_2, \ldots, a_d$ with $a_d = n$ and $a_i \mid a_{i+1}$ for all $0 \le i \le d-1$. Let's define $g(n) = f(n,d)$ for the given $d$. It can be seen that the function $g$ is multiplicative, i.e. $g(p \cdot q) = g(p) \cdot g(q)$ if $p$ and $q$ are coprime. Now, let's try to calculate $g(n)$ when $n$ is of the form $p^x$, where $p$ is a prime number and $x$ is any non-negative integer. From the first observation, we can say that here all the $a_i$ will be powers of $p$. Therefore $a_0, a_1, a_2, \ldots, a_d$ can be written as $p^{b_0}, p^{b_1}, p^{b_2}, \ldots, p^{b_d}$. Now we just have to ensure that $0 \le b_0 \le b_1 \le \ldots \le b_d = x$. The number of ways of doing so is $\binom{x+d}{d}$. Now, let's make a DP (inspired by the idea of the DP used in fast prime counting) where $dp(x, p) = \sum_{i=1,\ \mathrm{spf}(i) \ge p}^{x} g(i^k)$. Here $\mathrm{spf}(i)$ means the smallest prime factor of $i$. $dp(x, p) = \begin{cases} 0 & \text{if } x = 0 \\ 1 & \text{if } p > x \\ dp(x, p+1) & \text{if } p \text{ is not prime} \\ \sum\limits_{i \ge 0,\ p^i \le x} dp\left(\lfloor{\frac{x}{p^i}} \rfloor, p+1\right) \cdot g(p^{ik}) & \text{otherwise} \end{cases}$ Our required answer is $dp(n,2)$. The overall complexity of the solution is $O(n^{\frac{2}{3}})$.
[ "dp", "math", "number theory" ]
2,900
#include <bits/stdc++.h>
#define ll long long
#define vi vector<int>
#define vl vector<ll>
using namespace std;

ll powmod(ll base, ll exponent, ll mod) {
    ll ans = 1;
    if (base < 0) base += mod;
    while (exponent) {
        if (exponent & 1) ans = (ans * base) % mod;
        base = (base * base) % mod;
        exponent /= 2;
    }
    return ans;
}

const int small_lim = 1e6 + 1;
const int mod = 1e9 + 7;
const int big_lim = 1e3 + 1;
ll primes_till_i[small_lim];
ll primes_till_bigger_i[big_lim];
vl sieved_primes[small_lim];
vl sieved_primes_big[big_lim];
vi prime;
int N, k, d;

void sieve() {
    vi lpf(small_lim);
    for (int i = 2; i < small_lim; i++) {
        if (!lpf[i]) {
            prime.push_back(i);
            lpf[i] = i;
        }
        for (int j : prime) {
            if ((j > lpf[i]) || (j * i >= small_lim)) break;
            lpf[j * i] = j;
        }
    }
    for (int i = 2; i < small_lim; i++)
        primes_till_i[i] = primes_till_i[i - 1] + (lpf[i] == i);
}

ll count_primes(ll n, int ind) {
    if (ind < 0) return n - 1;
    if (1ll * prime[ind] * prime[ind] > n) {
        if (n < small_lim) return primes_till_i[n];
        if (primes_till_bigger_i[N / n]) return primes_till_bigger_i[N / n];
        int l = -1, r = ind;
        while (r - l > 1) {
            int mid = (l + r) >> 1;
            if (1ll * prime[mid] * prime[mid] > n) r = mid;
            else l = mid;
        }
        return primes_till_bigger_i[N / n] = count_primes(n, l);
    }
    int sz;
    if (n < small_lim) sz = sieved_primes[n].size();
    else sz = sieved_primes_big[N / n].size();
    ll ans;
    if (sz <= ind) {
        ans = count_primes(n, ind - 1);
        ans -= count_primes(n / prime[ind], ind - 1);
        ans += ind;
        if (n < small_lim) sieved_primes[n].push_back(ans);
        else sieved_primes_big[N / n].push_back(ans);
    }
    if (n < small_lim) return sieved_primes[n][ind];
    else return sieved_primes_big[N / n][ind];
}

ll count_primes(ll n) {
    if (n < small_lim) return primes_till_i[n];
    if (primes_till_bigger_i[N / n]) return primes_till_bigger_i[N / n];
    return count_primes(n, prime.size() - 1);
}

const int ncrlim = 3.5e6;
int fact[ncrlim];
int invfact[ncrlim];

void init_fact() {
    fact[0] = 1;
    for (int i = 1; i < ncrlim; i++) fact[i] = (1ll * fact[i - 1] * i) % mod;
    invfact[ncrlim - 1] = powmod(fact[ncrlim - 1], mod - 2, mod);
    for (int i = ncrlim - 1; i > 0; i--) invfact[i - 1] = (1ll * invfact[i] * i) % mod;
}

int ncr(int n, int r) {
    if (r > n || r < 0) return 0;
    int ans = fact[n];
    ans = (1ll * ans * invfact[n - r]) % mod;
    ans = (1ll * ans * invfact[r]) % mod;
    return ans;
}

ll calculate_dp(ll n, int ind) {
    if (n == 0) return 0;
    if (prime[ind] > n) return 1;
    ll ans = 1, temp;
    if (1ll * prime[ind] * prime[ind] > n) {
        temp = ncr(k + d, d);
        temp *= count_primes(n) - ind;
        ans += temp;
        ans %= mod;
        return ans;
    }
    ans = 0;
    ll gg = 1;
    ll mult = d;
    while (gg <= n) {
        temp = calculate_dp(n / gg, ind + 1);
        temp *= ncr(mult, d);
        ans += temp;
        ans %= mod;
        mult += k;
        gg *= prime[ind];
    }
    return ans;
}

int main() {
    sieve();
    init_fact();
    int t;
    scanf("%d", &t);
    while (t--) {
        scanf("%d%d%d", &N, &k, &d);
        printf("%lld\n", calculate_dp(N, 0));
        for (int i = 1; i < big_lim; i++) {
            primes_till_bigger_i[i] = 0;
            sieved_primes_big[i].clear();
        }
    }
    return 0;
}
2021
A
Meaning Mean
Pak Chanek has an array $a$ of $n$ positive integers. Since he is currently learning how to calculate the floored average of two numbers, he wants to practice it on his array $a$. While the array $a$ has at least two elements, Pak Chanek will perform the following three-step operation: - Pick two different indices $i$ and $j$ ($1 \leq i, j \leq |a|$; $i \neq j$), note that $|a|$ denotes the current size of the array $a$. - Append $\lfloor \frac{a_i+a_j}{2} \rfloor$$^{\text{∗}}$ to the end of the array. - Remove elements $a_i$ and $a_j$ from the array and concatenate the remaining parts of the array. For example, suppose that $a=[5,4,3,2,1,1]$. If we choose $i=1$ and $j=5$, the resulting array will be $a=[4,3,2,1,3]$. If we choose $i=4$ and $j=3$, the resulting array will be $a=[5,4,1,1,2]$. After all operations, the array will consist of a single element $x$. Find the maximum possible value of $x$ if Pak Chanek performs the operations optimally. \begin{footnotesize} $^{\text{∗}}$$\lfloor x \rfloor$ denotes the floor function of $x$, which is the greatest integer that is less than or equal to $x$. For example, $\lfloor 6 \rfloor = 6$, $\lfloor 2.5 \rfloor=2$, $\lfloor -3.6 \rfloor=-4$ and $\lfloor \pi \rfloor=3$ \end{footnotesize}
For now, let's ignore the floor operation, so an operation merges two elements $a_i$ and $a_j$ into one element $\frac{a_i+a_j}{2}$. Consider the end result: each initial element in $a$ contributes a fractional coefficient to the final result, and the sum of the coefficients is fixed (it must be $1$). That means we can greedily give the biggest values in $a$ the biggest coefficients. One way to do this is by sorting $a$ in ascending order: we merge $a_1$ and $a_2$, then merge that result with $a_3$, then merge that result with $a_4$, and so on until $a_n$. If we do this, $a_n$ contributes with coefficient $\frac{1}{2}$, $a_{n-1}$ with coefficient $\frac{1}{4}$, $a_{n-2}$ with coefficient $\frac{1}{8}$, and so on. This is the optimal way to get the maximum final result. It turns out that the strategy above is also the optimal strategy for the original version of the problem with the floor operation, so we can simulate that process to get the answer. Time complexity for each test case: $O(n\log n)$
[ "data structures", "greedy", "math", "sortings" ]
800
null
2021
B
Maximize Mex
You are given an array $a$ of $n$ positive integers and an integer $x$. You can do the following two-step operation any (possibly zero) number of times: - Choose an index $i$ ($1 \leq i \leq n$). - Increase $a_i$ by $x$, in other words $a_i := a_i + x$. Find the maximum value of the $\operatorname{MEX}$ of $a$ if you perform the operations optimally. The $\operatorname{MEX}$ (minimum excluded value) of an array is the smallest non-negative integer that is not in the array. For example: - The $\operatorname{MEX}$ of $[2,2,1]$ is $0$ because $0$ is not in the array. - The $\operatorname{MEX}$ of $[3,1,0,1]$ is $2$ because $0$ and $1$ are in the array but $2$ is not. - The $\operatorname{MEX}$ of $[0,3,1,2]$ is $4$ because $0$, $1$, $2$ and $3$ are in the array but $4$ is not.
For the $\operatorname{MEX}$ to be at least $k$, each non-negative integer from $0$ to $k-1$ must appear at least once in the array. First, notice that since there are only $n$ elements in the array, there are at most $n$ different values, so the $\operatorname{MEX}$ can only be at most $n$. And since we can only increase an element's value, every element with a value bigger than $n$ can be ignored. We construct a frequency array $\text{freq}$ such that $\text{freq}[k]$ is the number of elements in $a$ with value $k$. Notice that a value just needs to appear at least once to contribute to the $\operatorname{MEX}$, so two or more elements with the same value should be split into different values to yield a potentially better result. To find the maximum possible $\operatorname{MEX}$, we iterate over each index $k$ of the array $\text{freq}$ from $0$ to $n$. In each iteration of $k$, if we find $\text{freq}[k]>0$, it's possible for the $\operatorname{MEX}$ to be bigger than $k$, so we can move on to the next value. Before we do, if we find $\text{freq}[k]>1$, that indicates duplicates, so we apply an operation to all except one of those elements to change them into $k+x$, which increases $\text{freq}[k+x]$ by $\text{freq}[k]-1$ and sets $\text{freq}[k]$ to $1$. If instead we find $\text{freq}[k]=0$, then $k$ is the maximum $\operatorname{MEX}$ we can get, and we end the process. Time complexity for each test case: $O(n)$
[ "brute force", "greedy", "math", "number theory" ]
1,200
null
2021
C2
Adjust The Presentation (Hard Version)
\textbf{This is the hard version of the problem. In the two versions, the constraints on $q$ and the time limit are different. In this version, $0 \leq q \leq 2 \cdot 10^5$. You can make hacks only if all the versions of the problem are solved.} A team consisting of $n$ members, numbered from $1$ to $n$, is set to present a slide show at a large meeting. The slide show contains $m$ slides. There is an array $a$ of length $n$. Initially, the members are standing in a line in the order of $a_1, a_2, \ldots, a_n$ from front to back. The slide show will be presented in order from slide $1$ to slide $m$. Each section will be presented by the member at the front of the line. After each slide is presented, you can move the member at the front of the line to any position in the lineup (without changing the order of the rest of the members). For example, suppose the line of members is $[\textcolor{red}{3},1,2,4]$. After member $3$ presents the current slide, you can change the line of members into either $[\textcolor{red}{3},1,2,4]$, $[1,\textcolor{red}{3},2,4]$, $[1,2,\textcolor{red}{3},4]$ or $[1,2,4,\textcolor{red}{3}]$. There is also an array $b$ of length $m$. The slide show is considered good if it is possible to make member $b_i$ present slide $i$ for all $i$ from $1$ to $m$ under these constraints. However, your annoying boss wants to make $q$ updates to the array $b$. In the $i$-th update, he will choose a slide $s_i$ and a member $t_i$ and set $b_{s_i} := t_i$. Note that these updates are \textbf{persistent}, that is changes made to the array $b$ will apply when processing future updates. For each of the $q+1$ states of array $b$, the initial state and after each of the $q$ updates, determine if the slideshow is good.
Firstly, let's relabel the $n$ members such that member number $i$ is the $i$-th member in the initial line configuration in array $a$. We also adjust the values in $b$ (and the future updates) accordingly. For now, let's solve the problem if there are no updates to the array $b$. Consider the first member who presents. Notice that member $1$ must be the first one presenting since he/she is at the very front of the line, which means $b_1=1$ must hold. After this, we insert him/her into any position in the line. However, instead of determining the target position immediately, we make member $1$ a "pending member" and we will only determine his/her position later on when we need him/her again. To generalize, we can form an algorithm to check whether achieving $b$ is possible or not. We iterate over each element $b_i$ for each $i$ from $1$ to $m$. While iterating, we maintain a set of pending members, which is initially empty, and we maintain who is the next member in the line. When iterating a value of $b_i$, there are three cases: If $b_i$ is equal to the next member in the line, then we can make that member present, and then he/she will become a pending member for the next iterations. Else, if $b_i$ is one of the pending members, then we can always set a precise target position when moving that member in the past such that he/she will be at the very front of the line at this very moment, and that member will be a pending member again. Otherwise, it's impossible to make member $b_i$ present at this time. To solve the problem with updates, let's observe some special properties of $b$ if $b$ is valid. Notice that once a member becomes a pending member, he/she will be a pending member forever. And a member $x$ becomes a pending member during the first occurrence of value $x$ in $b$.
Since the order of members becoming pending must follow the order of the members in the line, that means the first occurrence of each value $x$ in $b$ must be in chronological order from $1$ to $n$. More formally, let's define $\text{first}[x]$ as follows: If the value $x$ appears at least once in $b$, then $\text{first}[x]$ is the smallest index $i$ such that $b_i=x$. If the value $x$ doesn't appear in $b$, then $\text{first}[x]=m+1$. Then, for $b$ to be valid, it must hold that $\text{first}[1]\leq\text{first}[2]\leq\ldots\leq\text{first}[n]$. To handle the updates, we must maintain the array $\text{first}$. In order to do that, for each value $x$ from $1$ to $n$, we maintain a set of the indices of every occurrence of $x$ in $b$. The value of $\text{first}[x]$ is just the minimum value in the set, or $m+1$ if the set is empty. An update to an element of $b$ corresponds to two updates among the sets, which corresponds to two updates in the array $\text{first}$. To maintain whether the array $\text{first}$ is non-decreasing or not, we maintain a value $c$ which represents the number of pairs of adjacent indices $(x,x+1)$ (for all $1\leq x\leq n-1$) such that $\text{first}[x]\leq\text{first}[x+1]$. The array is non-decreasing if and only if $c=n-1$. For an update to an index $x$ in $\text{first}$, we only need to check how the pairs $(x-1,x)$ and $(x,x+1)$ affect the value of $c$. Time complexity for each test case: $O((n+m+q)\log (n+m))$
[ "constructive algorithms", "data structures", "greedy", "implementation", "sortings" ]
1,900
null
2021
D
Boss, Thirsty
Pak Chanek has a friend who runs a drink stall in a canteen. His friend will sell drinks for $n$ days, numbered from day $1$ to day $n$. There are also $m$ types of drinks, numbered from $1$ to $m$. The profit gained from selling a drink on a particular day can vary. On day $i$, the projected profit from selling drink of type $j$ is $A_{i, j}$. Note that $A_{i, j}$ can be negative, meaning that selling the drink would actually incur a loss. Pak Chanek wants to help his friend plan the sales over the $n$ days. On day $i$, Pak Chanek must choose to sell \textbf{at least} one type of drink. Furthermore, the types of drinks sold on a single day must form a subarray. In other words, in each day, Pak Chanek will select $i$ and $j$ such that $1 \leq i \leq j \leq m$. Then all types of drinks between $i$ and $j$ (inclusive) will be sold. However, to ensure that customers from the previous day keep returning, the selection of drink types sold on day $i$ ($i>1$) must meet the following conditions: - At least one drink type sold on day $i$ must also have been sold on day $i-1$. - At least one drink type sold on day $i$ must \textbf{not} have been sold on day $i-1$. The daily profit is the sum of the profits from all drink types sold on that day. The total profit from the sales plan is the sum of the profits over $n$ days. What is the maximum total profit that can be achieved if Pak Chanek plans the sales optimally?
We can see it as a grid of square tiles, consisting of $n$ rows and $m$ columns. Consider a single row. Instead of looking at the $m$ tiles, we can look at the $m+1$ edges between tiles, including the leftmost edge and the rightmost edge. We number the edges from $0$ to $m$ such that tile $j$ ($1\leq j\leq m$) is the tile between edge $j-1$ and edge $j$. For a single row $i$, choosing a segment of tiles is equivalent to choosing two different edges $l$ and $r$ ($0\leq l<r\leq m$) as the endpoints of the segment, which denotes choosing the tiles from $l+1$ to $r$. For each edge $j$, we can precompute a prefix sum $\text{pref}[j]=A_{i,1}+A_{i,2}+\ldots+A_{i,j}$. That means, choosing edges $l$ and $r$ yields a profit of $\text{pref}[r]-\text{pref}[l]$. Let's say we've chosen edges $l'$ and $r'$ for row $i-1$ and we want to choose two edges $l$ and $r$ for row $i$ that satisfies the problem requirement. We want to choose a continuous segment that includes at least one chosen tile and at least one unchosen tile from the previous row. A chosen tile that is adjacent to an unchosen tile only appears at the endpoints of the previous segment. That means, to satisfy the requirement, the new segment must strictly contain at least one endpoint of the previous segment. More formally, at least one of the following conditions must be satisfied: $l<l'<r$ $l<r'<r$ Knowing that, we can see that when going from one row to the next, we only need the information for one of the two endpoints, not both. We can solve this with dynamic programming from row $1$ to row $n$. For each row, we find the optimal profit for each $l$ and the optimal profit for each $r$. We do those two things separately. For each row $i$, define the following things: $\text{dpL}[i][l]$: the optimal profit for the first $i$ rows if the left endpoint of the last segment is $l$. $\text{dpR}[i][r]$: the optimal profit for the first $i$ rows if the right endpoint of the last segment is $r$. 
Let's say we've calculated all values of $\text{dpL}$ and $\text{dpR}$ for row $i-1$ and we want to calculate them for row $i$. Let $l$ and $r$ be the left and right endpoints of the new segment respectively. Let $p$ be one of the endpoints of the previous segment. Then it must hold that $l<p<r$, which yields a maximum profit of $-\text{pref}[l]+\max(\text{dpL}[i-1][p],\text{dpR}[i-1][p])+\text{pref}[r]$. Let's calculate all values of $\text{dpR}$ first for row $i$. For each $r$, we want to consider all corresponding triples $(l,p,r)$ and find the maximum profit among them. In order to do that, we do three steps: Make a prefix maximum of $-\text{pref}[l]$. Then, for each $p$, we add $\max(\text{dpL}[i-1][p],\text{dpR}[i-1][p])$ to the maximum value of $-\text{pref}[l]$ over all $0\leq l<p$. And then we make a prefix maximum of those values. Then, for each $r$, the value of $\text{dpR}[i][r]$ is $\text{pref}[r]$ plus the maximum value from the previous calculation over each $p$ from $0$ to $r-1$. We do the same thing for $\text{dpL}$, but we use suffix maximums instead. After doing all that for every row from $1$ to $n$, we find the maximum value of $\text{dpL}$ and $\text{dpR}$ in the last row to get the answer. Time complexity of each test case: $O(nm)$
[ "dp", "greedy", "implementation" ]
2,500
null
2021
E3
Digital Village (Extreme Version)
\textbf{This is the extreme version of the problem. In the three versions, the constraints on $n$ and $m$ are different. You can make hacks only if all the versions of the problem are solved.} Pak Chanek is setting up internet connections for the village of Khuntien. The village can be represented as a connected simple graph with $n$ houses and $m$ internet cables connecting house $u_i$ and house $v_i$, each with a latency of $w_i$. There are $p$ houses that require internet. Pak Chanek can install servers in at most $k$ of the houses. The houses that need internet will then be connected to one of the servers. However, since each cable has its latency, the latency experienced by house $s_i$ requiring internet will be the \textbf{maximum} latency of the cables between that house and the server it is connected to. For each $k = 1,2,\ldots,n$, help Pak Chanek determine the minimum \textbf{total} latency that can be achieved for all the houses requiring internet.
Since the cost of a path uses the maximum edge weight in the path, we can use a Kruskal-like algorithm similar to finding the MST (Minimum Spanning Tree). Initially, the graph has no edges, and we add the edges one by one starting from the smallest values of $w_i$, while maintaining the connected components in the graph using DSU (Disjoint Set Union). While running the MST algorithm, we simultaneously construct the reachability tree of the graph, whose structure represents the sequence of mergings of connected components in the algorithm. Each vertex in the reachability tree corresponds to some connected component at some point in time in the algorithm. Each non-leaf vertex in the reachability tree always has two children, which are the two connected components that are merged to form the connected component represented by that vertex, so every time two connected components merge in the algorithm, we make a new vertex in the reachability tree that is connected to its two corresponding children. After doing all that, we've constructed a reachability tree that is a rooted binary tree with $2n-1$ vertices, $n$ of which are leaves. For each non-leaf vertex $x$, we write down $\text{weight}[x]$, which is the weight of the edge that forms its connected component. For each leaf, we mark it as special if and only if it corresponds to a house that needs internet. Then, for each vertex $x$, we calculate $\text{cnt}[x]$, which is the number of special leaves in the subtree of $x$. These values will be used later. Consider a non-leaf $x$ in the reachability tree. It follows that the vertices in the original graph corresponding to any two leaves in the subtree of $x$ are connected by a path in the original graph whose maximum edge weight is at most $\text{weight}[x]$. Let's solve for some value of $k$. For each special vertex $x$, we want to choose a target vertex $y$ that's an ancestor of $x$. Then, we choose a set of $k$ leaves for the houses with installed servers. 
We want it such that each chosen target has at least one leaf in its subtree that is a member of the set. The total path cost of this is the sum of $\text{weight}[y]$ for all chosen targets $y$. Let's say we've fixed the set of $k$ leaves. Then, we mark every ancestor of these leaves. If we only consider the marked vertices with the edges between them, we have a reduced tree. For each special leaf, we want to choose its nearest ancestor that is in the reduced tree for its target to get the one with the smallest weight. Knowing this, we can solve the problem in another point of view. Initially, we have the original reachability tree. We want to reduce it into a reduced tree with $k$ leaves. We want to do it while maintaining the chosen targets of the special leaves and their costs. Initially, for each special leaf, we choose itself as its target. In one operation, we can do the following: Choose a vertex that's currently a leaf. Move every target that's currently in that leaf to its parent. Remove that leaf and the edge connecting it to its parent. We want to do that until the reduced tree has $k$ leaves. For each edge connecting a vertex $x$ to its parent $y$ in the reachability tree, calculate $(\text{weight}[y]-\text{weight}[x])\times\text{cnt}[x]$. That is the cost to move every target in vertex $x$ to vertex $y$. Define that as the edge's length. We want to do operations with the minimum cost so that the reduced tree has $k$ leaves. We want to minimize the sum of lengths of the deleted edges. If we look at it in a different way, we want to choose edges to be in the reduced tree with the maximum sum of lengths. For some value of $k$, the edges of the reduced tree can be decomposed into $k$ paths from some vertex to its descendant. We want the total sum of lengths of these paths to be as big as possible. But how do we solve it for every $k$ from $1$ to $n$? Let's say $k=1$. We can choose the path from the root to its furthest leaf. 
How do we solve for $k=2$ onwards? It turns out that we can use the optimal solution for some value of $k$ to make the optimal solution for $k+1$, by just adding the longest possible available path. That means, for each $k$ from $1$ to $n$, we just find the current longest available path and add it to our reduced tree. What if, at some point, there is more than one possible longest path? It can be proven that we can choose any of these paths and the optimal solutions for the next values of $k$ will still be optimal. The proof for this greedy strategy involves the convexity of the total length as $k$ goes from $1$ to $n$. However, we won't explain it in detail here. So to solve the problem, we do DFS in the reachability tree to calculate, for each vertex $x$, the furthest leaf and the second furthest leaf in its subtree. For each $k$ from $1$ to $n$, we add the current longest available path using this precalculation. Time complexity: $O(n\log n+m\log m)$
[ "data structures", "dfs and similar", "dp", "dsu", "graphs", "greedy", "math", "trees" ]
2,800
null
2022
A
Bus to Pénjamo
\begin{quote} Ya vamos llegando a Péeeenjamoo ♫♫♫ \end{quote} There are $n$ families travelling to Pénjamo to witness Mexico's largest-ever "walking a chicken on a leash" marathon. The $i$-th family has $a_i$ family members. All families will travel using a single bus consisting of $r$ rows with $2$ seats each. A person is considered happy if: - Another family member is seated in the same row as them, or - They are sitting alone in their row (with an empty seat next to them). Determine the maximum number of happy people in an optimal seating arrangement. Note that \textbf{everyone} must be seated in the bus. It is guaranteed that all family members will fit on the bus. Formally, it is guaranteed that $\displaystyle\sum_{i=1}^{n}a_i \le 2r$.
The key to maximizing happiness is to seat family members together as much as possible. If two members of the same family sit in the same row, both will be happy, and we only use two seats. However, if they are seated separately, only one person is happy, but two seats are still used. Therefore, we prioritize seating family pairs together first. Once all possible pairs are seated, there may still be some family members left to seat. If a family has an odd number of members, one person will be left without a pair. Ideally, we want to seat this person alone in a row to make them happy. However, if there are no remaining rows to seat them alone, we'll have to seat them with someone from another family. This means the other person might no longer be happy, since they are no longer seated alone. The easiest way to handle these leftover members is to check the number of remaining rows and people. If the number of remaining rows is greater than or equal to the number of unpaired people, all unpaired people can sit alone and remain happy. Otherwise, some people will have to share a row, and their happiness will be affected. In that case, the number of happy people will be $2 \times$ remaining rows $-$ remaining people. Write down key observations and combine them to solve the problem.
[ "constructive algorithms", "greedy", "implementation", "math" ]
800
#include <bits/stdc++.h>
using namespace std;
int main() {
    int t;
    cin >> t;
    while (t--) {
        int n, r;
        cin >> n >> r;
        vector<int> arr(n);
        int leftalone = 0;
        int happy = 0;
        for (int k = 0; k < n; k++) {
            cin >> arr[k];
            happy += (arr[k] / 2) * 2;
            r -= arr[k] / 2;
            leftalone += arr[k] % 2;
        }
        if (leftalone > r) happy += r * 2 - leftalone;
        else happy += leftalone;
        cout << happy << endl;
    }
}
2022
B
Kar Salesman
Karel is a salesman in a car dealership. The dealership has $n$ different models of cars. There are $a_i$ cars of the $i$-th model. Karel is an excellent salesperson and can convince customers to buy up to $x$ cars (of Karel's choice), as long as the cars are from different models. Determine the minimum number of customers Karel has to bring in to sell all the cars.
Since no customer can buy more than one car from the same model, the minimum number of clients we need is determined by the model with the most cars. Therefore, we need at least $\max\{a_1, a_2, \cdots, a_n\}$ clients, because even if a customer buys cars from other models, they cannot exceed this limit for any single model. Each client can buy up to $x$ cars from different models. To distribute all the cars among the clients, we also need to consider the total number of cars. Thus, the minimum number of clients needed is at least $\displaystyle\left\lceil \frac{a_1 + a_2 + \cdots + a_n}{x}\right\rceil$. This ensures that all cars can be distributed across the clients, respecting the limit of $x$ cars per customer. The actual number of clients required is the maximum of the two values: $\displaystyle\max\left\{\left\lceil \frac{a_1 + a_2 + \cdots + a_n}{x}\right\rceil, \max\{a_1, a_2, \cdots, a_n\}\right\}$ This gives us a lower bound for the number of clients required, satisfying both constraints (the maximum cars of a single model and the total number of cars). Apply binary search on the answer. To demonstrate that this is always sufficient, we can reason that the most efficient strategy is to reduce the car count for the models with the largest numbers first. By doing so, we maximize the benefit of allowing each client to buy up to $x$ cars from different models. After distributing the cars, one of two possible outcomes will arise: All models will have the same number of remaining cars; this situation is optimal when the total cars are evenly distributed (matching the $\displaystyle\left\lceil \frac{a_1 + a_2 + \cdots + a_n}{x}\right\rceil$ bound). 
Or, there will still be at most $x$ models with cars remaining, which matches the case where the maximum number of cars of a single model, $\max(a_1, a_2, \ldots, a_n)$, determines the bound. Imagine a grid of size $w \times x$, where $w$ represents the minimum number of clients needed: $\displaystyle w = \max\left\{\left\lceil \frac{a_1 + a_2 + \cdots + a_n}{x}\right\rceil, \max\{a_1, a_2, \cdots, a_n\}\right\}$ Now, place the cars in this grid by filling it column by column, from the first model to the $n$-th model. This method ensures that each client will buy cars from different models and no client exceeds the $x$ car limit. Since the total number of cars is less than or equal to the size of the grid, this configuration guarantees that all cars will be sold within $w$ clients. If you don't know how to solve a problem, try to solve small cases to find a pattern.
[ "binary search", "greedy", "math" ]
1,300
#include <bits/stdc++.h>
using namespace std;
int main() {
    int t;
    cin >> t;
    while (t--) {
        long long n, x;
        cin >> n >> x;
        vector<long long> arr(n);
        long long sum = 0;
        long long maximo = 0;
        for (int k = 0; k < n; k++) {
            cin >> arr[k];
            maximo = max(maximo, arr[k]);
            sum += arr[k];
        }
        long long sec = (sum + x - 1) / x;
        cout << max(maximo, sec) << endl;
    }
}
2022
C
Gerrymandering
\begin{quote} We all steal a little bit. But I have only one hand, while my adversaries have two. \hfill Álvaro Obregón \end{quote} Álvaro and José are the only candidates running for the presidency of Tepito, a rectangular grid of $2$ rows and $n$ columns, where each cell represents a house. It is guaranteed that $n$ is a multiple of $3$. Under the voting system of Tepito, the grid will be split into districts, which consist of any $3$ houses that are connected$^{\text{∗}}$. Each house will belong to exactly one district. Each district will cast a single vote. The district will vote for Álvaro or José respectively if at least $2$ houses in that district select them. Therefore, a total of $\frac{2n}{3}$ votes will be cast. As Álvaro is the current president, he knows exactly which candidate each house will select. If Álvaro divides the houses into districts optimally, determine the maximum number of votes he can get. \begin{footnotesize} $^{\text{∗}}$A set of cells is connected if there is a path between any $2$ cells that requires moving only up, down, left and right through cells in the set. \end{footnotesize}
We will use dynamic programming to keep track of the maximum number of votes Álvaro can secure as we move from column to column (note that there are many ways to implement the DP; we will use the easiest to understand). An important observation is that if you use a horizontal piece in one row, you also have to use one in the other row to avoid leaving holes. Let $dp[i][j]$ represent the maximum number of votes Álvaro can get considering up to the $i$-th column. The second dimension $j$ represents the current configuration of filled cells: $j = 0$: All cells up to the $i$-th column are completely filled (including the $i$-th). $j = 1$: All cells up to the $i$-th column are completely filled (including the $i$-th), and there's one extra cell filled in the first row of the next column. $j = 2$: The $i$-th column is filled, and there's one extra cell filled in the second row of the next column. For simplicity, we will call the following pieces the "first L", "second L", "third L", and "fourth L" respectively: From $dp[k][0]$ (both rows filled at column $k$): You can place a horizontal piece in both rows, leading to $dp[k+3][0]$. You can place a first L piece, leading to $dp[k+1][1]$. You can place a second L piece, leading to $dp[k+1][2]$. From $dp[k][1]$ (one extra cell in the first row of the next column): You can place a horizontal piece in the first row occupying from $k+2$ to $k+4$ and also a horizontal piece in the second row from $k+1$ to $k+3$, leading to $dp[k+3][1]$. You can place a fourth L, leading to $dp[k+2][0]$. 
From $dp[k][2]$ (one extra cell in the second row of the next column): You can place a horizontal piece in the first row occupying from $k+1$ to $k+3$ and also a horizontal piece in the second row from $k+2$ to $k+4$, leading to $dp[k+3][2]$. You can place a third L, leading to $dp[k+2][0]$. More formally, for each DP state, the following transitions are possible: From $dp[k][0]$: $dp[k+3][0] = \max(dp[k+3][0], dp[k][0] + \text{vot})$ $dp[k+1][1] = \max(dp[k+1][1], dp[k][0] + \text{vot})$ $dp[k+1][2] = \max(dp[k+1][2], dp[k][0] + \text{vot})$ From $dp[k][1]$: $dp[k+3][1] = \max(dp[k+3][1], dp[k][1] + \text{vot})$ $dp[k+2][0] = \max(dp[k+2][0], dp[k][1] + \text{vot})$ From $dp[k][2]$: $dp[k+3][2] = \max(dp[k+3][2], dp[k][2] + \text{vot})$ $dp[k+2][0] = \max(dp[k+2][0], dp[k][2] + \text{vot})$ To implement the DP solution, you only need to handle the transitions for states $dp[i][0]$ and $dp[i][1]$. For $dp[i][2]$, the transitions are the same as for $dp[i][1]$, with the rows swapped. Don't begin to implement until all the details are very clear; this will make your implementation much easier. Draw images to visualize everything more easily.
[ "dp", "implementation" ]
1,800
#include <bits/stdc++.h>
using namespace std;
void solve() {
    int n;
    cin >> n;
    vector<string> cad(n);
    vector<vector<int> > vot(2, vector<int>(n + 8));
    for (int k = 0; k < 2; k++) {
        cin >> cad[k];
        for (int i = 0; i < n; i++)
            if (cad[k][i] == 'A') vot[k][i + 1] = 1;
    }
    vector<vector<int> > dp(n + 9, vector<int>(3, -1));
    dp[0][0] = 0;
    for (int k = 0; k <= n - 1; k++) {
        for (int i = 0; i < 3; i++) {
            if (dp[k][i] != -1) {
                int vt = 0, val = dp[k][i];
                if (i == 0) {
                    // *
                    // *
                    vt = (vot[0][k+1] + vot[0][k+2] + vot[0][k+3]) / 2 + (vot[1][k+1] + vot[1][k+2] + vot[1][k+3]) / 2;
                    dp[k+3][0] = max(vt + val, dp[k+3][0]);
                    vt = (vot[1][k+1] + vot[0][k+2] + vot[0][k+1]) / 2;
                    dp[k+1][1] = max(vt + val, dp[k+1][1]);
                    vt = (vot[1][k+1] + vot[1][k+2] + vot[0][k+1]) / 2;
                    dp[k+1][2] = max(vt + val, dp[k+1][2]);
                }
                if (i == 1) {
                    // **
                    // *
                    vt = (vot[0][k+2] + vot[0][k+3] + vot[0][k+4]) / 2 + (vot[1][k+1] + vot[1][k+2] + vot[1][k+3]) / 2;
                    dp[k+3][1] = max(vt + val, dp[k+3][1]);
                    vt = (vot[1][k+1] + vot[1][k+2] + vot[0][k+2]) / 2;
                    dp[k+2][0] = max(vt + val, dp[k+2][0]);
                }
                if (i == 2) {
                    // *
                    // **
                    vt = (vot[1][k+2] + vot[1][k+3] + vot[1][k+4]) / 2 + (vot[0][k+1] + vot[0][k+2] + vot[0][k+3]) / 2;
                    dp[k+3][2] = max(vt + val, dp[k+3][2]);
                    vt = (vot[0][k+1] + vot[0][k+2] + vot[1][k+2]) / 2;
                    dp[k+2][0] = max(vt + val, dp[k+2][0]);
                }
            }
        }
    }
    cout << dp[n][0] << endl;
}
int main() {
    int t;
    cin >> t;
    while (t--) solve();
}
2022
D1
Asesino (Easy Version)
\textbf{This is the easy version of the problem. In this version, you can ask at most $n+69$ questions. You can make hacks only if both versions of the problem are solved.} This is an interactive problem. It is a tradition in Mexico's national IOI trainings to play the game "Asesino", which is similar to "Among Us" or "Mafia". Today, $n$ players, numbered from $1$ to $n$, will play "Asesino" with the following three roles: - \textbf{Knight}: a Knight is someone who always tells the truth. - \textbf{Knave}: a Knave is someone who always lies. - \textbf{Impostor}: an Impostor is someone everybody thinks is a Knight, but is secretly a Knave. Each player will be assigned a role in the game. There will be \textbf{exactly one} Impostor but there can be any (possibly zero) number of Knights and Knaves. As the game moderator, you have accidentally forgotten the roles of everyone, but you need to determine the player who is the Impostor. To determine the Impostor, you will ask some questions. In each question, you will pick two players $i$ and $j$ ($1 \leq i, j \leq n$; $i \neq j$) and ask if player $i$ thinks that player $j$ is a Knight. The result of the question is shown in the table below. \begin{center} \begin{tabular}{|c||c||c||c|} \hline & Knight & Knave & Impostor \\ \hline \hline Knight & Yes & No & Yes \\ \hline \hline Knave & No & Yes & No \\ \hline \hline Impostor & No & Yes & — \\ \hline \end{tabular} {\small The response of the cell in row $a$ and column $b$ is the result of asking a question when $i$ has role $a$ and $j$ has role $b$. For example, the "Yes" in the top right cell belongs to row "Knight" and column "Impostor", so it is the response when $i$ is a Knight and $j$ is an Impostor.} \end{center} Find the Impostor in at most $n + 69$ questions. \textbf{Note: the grader is adaptive:} the roles of the players are not fixed in the beginning and may change depending on your questions. 
However, it is guaranteed that there exists an assignment of roles that is consistent with all previously asked questions under the constraints of this problem.
Try to do casework for small $n$. It is also a good idea to simulate it. Think about this problem as a directed graph, where for each query there is a directed edge. Observe that if you ask $u \mapsto v$ and $v \mapsto u$, both answers will match if and only if neither $u$ nor $v$ is the impostor. This can be easily shown by casework. We can observe that $n = 3$ and $n = 4$ are solvable with $4$ queries. These strategies are illustrated in the image below: There are many solutions that work. Given that $69$ is big enough, we can use the previous observation of asking in pairs and reducing the problem recursively. Combining hint 2 with our solutions for $n = 3$ and $n = 4$, we can come up with the following algorithm: While $n > 4$, query $n \mapsto n - 1$ and $n - 1 \mapsto n$. If their answers don't match, one of them is the impostor: query $n \mapsto n - 2$ and $n - 2 \mapsto n$; if their answers don't match, $n$ is the impostor, otherwise $n - 1$ is the impostor. If the answers match, do $n -= 2$. If $n > 4$ no longer holds and we haven't found the impostor, we either have the $n = 3$ case or the $n = 4$ case, and we can solve either of them in $4$ queries. In the worst case, this algorithm uses $n + 1$ queries. 
I'm also aware of solutions with $n + 4$ queries, with $n + 2$ queries, and one with $n + \lceil\log(n)\rceil$ queries. We only present this solution to shorten the length of the blog, but feel free to ask about the others in the comments.
[ "binary search", "brute force", "constructive algorithms", "implementation", "interactive" ]
1,900
#include <bits/stdc++.h>
using namespace std;
bool query(int i, int j, int ans = 0) {
    cout << "? " << i << " " << j << endl;
    cin >> ans;
    return ans;
}
int main() {
    int tt;
    for (cin >> tt; tt; --tt) {
        int n, N;
        cin >> N;
        n = N;
        array<int, 2> candidates = {-1, -1};
        while (n > 4) {
            if (query(n - 1, n) != query(n, n - 1)) {
                candidates = {n - 1, n};
                break;
            } else n -= 2;
        }
        if (candidates[0] != -1) {
            int not_candidate = (candidates[0] == N - 1) ? N - 2 : N;
            if (query(candidates[0], not_candidate) != query(not_candidate, candidates[0])) {
                cout << "! " << candidates[0] << endl;
            } else cout << "! " << candidates[1] << endl;
        } else {
            if (n == 3) {
                if (query(1, 2) == query(2, 1)) cout << "! 3\n";
                else {
                    if (query(1, 3) == query(3, 1)) cout << "! 2\n";
                    else cout << "! 1\n";
                }
            } else {
                if (query(1, 2) != query(2, 1)) {
                    if (query(1, 3) == query(3, 1)) cout << "! 2\n";
                    else cout << "! 1\n";
                } else {
                    if (query(1, 3) != query(3, 1)) cout << "! 3\n";
                    else cout << "! 4\n";
                }
            }
        }
    }
}
2022
D2
Asesino (Hard Version)
\textbf{This is the hard version of the problem. In this version, you must use the minimum number of queries possible. You can make hacks only if both versions of the problem are solved.} This is an interactive problem. It is a tradition in Mexico's national IOI trainings to play the game "Asesino", which is similar to "Among Us" or "Mafia". Today, $n$ players, numbered from $1$ to $n$, will play "Asesino" with the following three roles: - \textbf{Knight}: a Knight is someone who always tells the truth. - \textbf{Knave}: a Knave is someone who always lies. - \textbf{Impostor}: an Impostor is someone everybody thinks is a Knight, but is secretly a Knave. Each player will be assigned a role in the game. There will be \textbf{exactly one} Impostor but there can be any (possibly zero) number of Knights and Knaves. As the game moderator, you have accidentally forgotten the roles of everyone, but you need to determine the player who is the Impostor. To determine the Impostor, you will ask some questions. In each question, you will pick two players $i$ and $j$ ($1 \leq i, j \leq n$; $i \neq j$) and ask if player $i$ thinks that player $j$ is a Knight. The result of the question is shown in the table below. \begin{center} \begin{tabular}{|c||c||c||c|} \hline & Knight & Knave & Impostor \\ \hline \hline Knight & Yes & No & Yes \\ \hline \hline Knave & No & Yes & No \\ \hline \hline Impostor & No & Yes & — \\ \hline \end{tabular} {\small The response of the cell in row $a$ and column $b$ is the result of asking a question when $i$ has role $a$ and $j$ has role $b$. For example, the "Yes" in the top right cell belongs to row "Knight" and column "Impostor", so it is the response when $i$ is a Knight and $j$ is an Impostor.} \end{center} Find the Impostor in the minimum number of queries possible. That is, let $f(n)$ be the minimum integer such that for $n$ players, there exists a strategy that can determine the Impostor using at most $f(n)$ questions. 
Then, you should use at most $f(n)$ questions to determine the Impostor. \textbf{Note: the grader is adaptive:} the roles of the players are not fixed in the beginning and may change depending on your questions. However, it is guaranteed that there exists an assignment of roles that is consistent with all previously asked questions under the constraints of this problem.
What is the minimal value anyway? Our solution to D1 used $n$ queries if $n$ is even and $n + 1$ queries when $n$ is odd. Is this the optimal strategy? Can we find a lower bound? Can we find the optimal strategy for small $n$? There are only $9$ possible unlabeled directed graphs with $3$ nodes and at most $3$ edges, so you might as well draw them. They are illustrated in the image below for convenience, though. Stare at them and convince yourself that $4$ queries is optimal for $n = 3$. We would have to prove that $n - 1 < f(n)$ for all $n$ and $n < f(n)$ for odd $n$. How could we structure a proof? With $n - 1$ queries, the adversary has some control over the graph: by the pigeonhole principle, at least one node has in-degree $0$, and at least one node has out-degree $0$. If those two nodes are different, call $A$ the node with in-degree $0$ and $B$ the node with out-degree $0$, and let the grader always reply yes to your queries. Then $A$ can be the impostor with everyone else a Knave, or $B$ can be the impostor with everyone else a Knight. If those two nodes are the same, then the graph looks like a collection of cycles and one isolated node. The grader will always reply yes except for the last query, where it replies no. Let the last query be to player $A$ about player $B$. The two assignments of roles are: $A$ is the impostor and everyone else in the cycle is a Knight, or $B$ is the impostor and everyone else in the cycle is a Knave.
Note that the structure of our proof is very general and is very easy to simulate for small values of $n$ and a small number of queries. Can we extend it to odd $n$ and $n$ queries? Try writing an exhaustive checker and running it for small values. According to our conjecture, $n = 5$ shouldn't be solvable with $5$ queries. So we should find a set of answers that, for any queries, yields two valid assignments with different nodes as the impostor. It doesn't exist! Which means $n = 5$ is solvable with $5$ queries. How? Also, observe that if we find a solution for $n = 5$, we can apply the same idea and recursively solve the problem in $n$ queries for all $n > 3$. Consider the natural directed graph representation, adding a directed edge with weight $0$ between node $i$ and node $j$ if the answer to the query ''? i j'' was yes, and weight $1$ otherwise. We will also denote this query by $i \mapsto j$. We will use a lemma that generalizes the idea from D1: the sum of weights of a cycle is odd if and only if the impostor is among us (among the cycle, I mean). Suppose the impostor is not in the cycle. Then observe that the only time you get an edge with weight $1$ is when there are two consecutive nodes with different roles. Consider maximal segments of consecutive nodes with the same role in this cycle, and compress each of these segments into one node. The image below illustrates how; grey edges are queries with answer no. The new graph is bipartite, and thus has an even number of edges. But the edges in this new graph are exactly the grey edges in the original graph, which implies what we want. 
If the impostor is in the cycle, there are three ways of inserting it (assume the cycle has more than $2$ nodes; the $2$-node case is exactly what we proved in hint 2). We can insert the impostor into one of these segments of consecutive nodes with the same role; this increases the number of grey edges by $1$, changing the parity. Or we can insert it between two segments: if we have Knight $\mapsto$ Impostor $\mapsto$ Knave, the number of grey edges decreases by $1$; if there is Knave $\mapsto$ Impostor $\mapsto$ Knight, the number of grey edges increases by $1$. Either way, we changed the parity of the cycle. $\blacksquare$ The algorithm that solves the problem is the following. If $n = 3$: query $1 \mapsto 2$ and $2 \mapsto 1$. If the answers match, $3$ is the impostor. Else, query $1 \mapsto 3$ and $3 \mapsto 1$: if those answers match, $2$ is the impostor; else, $1$ is the impostor. Otherwise ($n > 3$): while $n > 5$, query $n \mapsto n - 1$ and $n - 1 \mapsto n$. If their answers don't match, one of them is the impostor: query $n \mapsto n - 2$ and $n - 2 \mapsto n$; if their answers don't match, $n$ is the impostor.
Otherwise, $n - 1$ is the impostor. If the answers to $n \mapsto n - 1$ and $n - 1 \mapsto n$ do match, the $2$-cycle has even weight, so by the lemma the impostor is not among $\{n, n - 1\}$: discard both players and set $n := n - 2$. When the loop ends without finding the impostor, we are left with either the $n = 4$ case or the $n = 5$ case, and we solve them optimally in $4$ or $5$ queries, respectively. To solve $n = 5$ in $5$ queries, we form a cycle of size $3$ by asking $1 \mapsto 2$, $2 \mapsto 3$ and $3 \mapsto 1$ (the blue edges in the original editorial's image). If the cycle has an even number of no's, we know the impostor is $4$ or $5$, so we ask $3 \mapsto 4$ and $4 \mapsto 3$ (green edges): if both answers match, $5$ is the impostor; else, $4$ is the impostor. Otherwise, the impostor is among us (among the cycle, I mean). Ask $1 \mapsto 3$ and $2 \mapsto 1$ (purple edges). If $1 \mapsto 2$ doesn't match with $2 \mapsto 1$ and $1 \mapsto 3$ doesn't match with $3 \mapsto 1$, then $1$ is the impostor.
If $1 \mapsto 2$ doesn't match with $2 \mapsto 1$ and $1 \mapsto 3$ matches with $3 \mapsto 1$, then $2$ is the impostor. If $1 \mapsto 2$ matches with $2 \mapsto 1$ and $1 \mapsto 3$ doesn't match with $3 \mapsto 1$, then $3$ is the impostor. It is impossible for both $1 \mapsto 2$ to match with $2 \mapsto 1$ and $1 \mapsto 3$ to match with $3 \mapsto 1$, because we know at least one of these cycles contains the impostor. To solve $n = 4$ in $4$ queries, we query $1 \mapsto 2$, $2 \mapsto 1$, $1 \mapsto 3$ and $3 \mapsto 1$. If $1 \mapsto 2$ doesn't match with $2 \mapsto 1$ and $1 \mapsto 3$ doesn't match with $3 \mapsto 1$, then $1$ is the impostor. If $1 \mapsto 2$ doesn't match with $2 \mapsto 1$ and $1 \mapsto 3$ matches with $3 \mapsto 1$, then $2$ is the impostor. If $1 \mapsto 2$ matches with $2 \mapsto 1$ and $1 \mapsto 3$ doesn't match with $3 \mapsto 1$, then $3$ is the impostor. If $1 \mapsto 2$ matches with $2 \mapsto 1$ and $1 \mapsto 3$ matches with $3 \mapsto 1$, then $4$ is the impostor. We have proven that $f(n) \le \max(4, n)$. Now we will prove that $n \le f(n)$.
We will show it is impossible to solve the problem for any $n$ with only $n - 1$ queries: the grader always has a strategy for answering queries such that there exist at least $2$ different assignments of roles that are consistent with the answers given and have different nodes as the impostor. Consider the directed graph generated from the queries. With only $n - 1$ edges, by the pigeonhole principle at least one node has in-degree $0$, and at least one node has out-degree $0$. If those two nodes are different, call $A$ the node with in-degree $0$ and $B$ the node with out-degree $0$. Let the grader always reply yes to your queries. Then $A$ can be the impostor with everyone else a Knave, or $B$ can be the impostor with everyone else a Knight. If those two nodes are the same, then the graph looks like a collection of cycles and one isolated node. The grader always replies yes, except for the last query, where it replies no. Let the last query be to player $A$ about player $B$. The two assignments of roles are: $A$ is the impostor and everyone else in the cycle is a Knight, or $B$ is the impostor and everyone else in the cycle is a Knave. Thus, we have shown that regardless of what questions are asked, it is impossible to find the impostor among $n$ players in $n - 1$ queries.
$\blacksquare$ Now, we prove that $f(3) = 4$: stare at the case analysis over the $9$ possible query graphs on $3$ nodes (shown as an image in the original editorial). Actually, one of the testers coded the exhaustive checker; I will let them post their code if they want to. In the Olympiad version, the impostor behaved differently: it could either disguise as a Knight while secretly being a Knave, or disguise as a Knave while secretly being a Knight. It turns out the test cases and the interactor in both versions are practically the same. Why would that be? Do small cases! Let the pencil (or the computer) do the work, and use your brain to look out for patterns. Try to find the most natural and simple way of modeling the problem. Learn how to code a decision tree, or other models to exhaustively search for constructions, proof patterns, recursive complete search, etc., and build intuition from that to solve the more general cases of the problem.
[ "constructive algorithms", "dp", "interactive" ]
2,700
#include <bits/stdc++.h>
using namespace std;

int n;

void answer (int a) {
    cout << "! " << a << endl;
}

int query (int a, int b) {
    cout << "? " << a << " " << b << endl;
    int r;
    cin >> r;
    if (r == -1) exit(0);
    return r;
}

void main_ () {
    cin >> n;
    if (n == -1) exit(0);
    if (n == 3) {
        if (query(1, 2) != query(2, 1)) {
            if (query(1, 3) != query(3, 1)) {
                answer(1);
            } else {
                answer(2);
            }
        } else {
            answer(3);
        }
        return;
    }
    for (int i = 1; i + 1 <= n; i += 2) {
        if (n % 2 == 1 && i == n - 4) break;
        if ((n % 2 == 0 && i + 1 == n) || query(i, i + 1) != query(i + 1, i)) {
            int k = 1;
            while (k == i || k == i + 1) k++;
            if (query(i, k) != query(k, i)) {
                return answer(i);
            } else {
                return answer(i + 1);
            }
        }
    }
    vector<int> v = { query(n - 4, n - 3), query(n - 3, n - 2), query(n - 2, n - 4) };
    if ((v[0] + v[1] + v[2]) % 2 == 0) {
        if (query(n - 3, n - 4) != v[0]) {
            if (query(n - 2, n - 3) != v[1]) {
                answer(n - 3);
            } else {
                answer(n - 4);
            }
        } else {
            answer(n - 2);
        }
    } else {
        if (query(n, 1) != query(1, n)) {
            answer(n);
        } else {
            answer(n - 1);
        }
    }
}

int main () {
    int t;
    cin >> t;
    while (t--) main_();
    return 0;
}
2022
E1
Billetes MX (Easy Version)
\textbf{This is the easy version of the problem. In this version, it is guaranteed that $q = 0$. You can make hacks only if both versions of the problem are solved.} An integer grid $A$ with $p$ rows and $q$ columns is called beautiful if: - All elements of the grid are integers between $0$ and $2^{30}-1$, and - For any subgrid, the XOR of the values at the corners is equal to $0$. Formally, for any four integers $i_1$, $i_2$, $j_1$, $j_2$ ($1 \le i_1 < i_2 \le p$; $1 \le j_1 < j_2 \le q$), $A_{i_1, j_1} \oplus A_{i_1, j_2} \oplus A_{i_2, j_1} \oplus A_{i_2, j_2} = 0$, where $\oplus$ denotes the bitwise XOR operation. There is a partially filled integer grid $G$ with $n$ rows and $m$ columns where only $k$ cells are filled. Polycarp wants to know how many ways he can assign integers to the unfilled cells so that the grid is beautiful. However, Monocarp thinks that this problem is too easy. Therefore, he will perform $q$ updates on the grid. In each update, he will choose an unfilled cell and assign an integer to it. Note that these updates are \textbf{persistent}. That is, changes made to the grid will apply when processing future updates. For each of the $q + 1$ states of the grid, the initial state and after each of the $q$ queries, determine the number of ways Polycarp can assign integers to the unfilled cells so that the grid is beautiful. Since this number can be very large, you are only required to output their values modulo $10^9+7$.
Consider the extremal cases: what is the answer if the grid is empty? What is the answer if the grid is full? What happens if $N = 2$? Observe that if $N = 2$, the xor of every column is constant. Can we generalize this idea? Imagine you have a valid full grid $a$. For each $i$, change $a[0][i]$ to $a[0][0] \oplus a[0][i]$. Observe that the grid still satisfies the condition! Using the previous idea we can show that for any valid full grid, there must exist two arrays $X$ and $Y$ such that $a[i][j] = X[i] \oplus Y[j]$. Consider doing the operation described in hint 4 for every row and every column of a full grid that satisfies the condition; that is, for each row and each column, fix the first element and replace every value in that row or column by its xor with the first element. We are left with a grid whose first row and first column are all zeros, and this grid also satisfies the condition! So it must hold that $a[i][j] \oplus a[i][0] \oplus a[0][j] \oplus a[0][0] = 0$, but three of these values are zero! We conclude that $a[i][j]$ must also be zero. This shows that there must exist two arrays $X$ and $Y$ such that $a[i][j] = X[i] \oplus Y[j]$, for any valid full grid. Think of each known tile of the grid as imposing a condition between one element of $X$ and one element of $Y$. For each tile added, we lose one degree of freedom, right? We could make a bunch of substitutions to determine new values of the grid. How can we best model the problem now? Think of it as a graph where the nodes are rows and columns, and there is an edge between row $i$ and column $j$ with weight $a[i][j]$. Substitutions are now just paths on the graph: if we have a path between the node representing row $i$ and the node representing column $j$, the xor of the weights on this path is the value of $a[i][j]$. What happens if there's more than one path, and two paths have different values?
To continue hint 7: we can deduce that if this graph contains a cycle whose xor of weights is distinct from $0$, there is a contradiction, and the arrays $X$ and $Y$ can't exist. How can we check whether this is the case? Do a dfs on this graph, maintaining an array $p$ such that $p[u]$ is the xor of all edges on the path you took from the root to node $u$. Whenever you encounter a back edge between nodes $u$ and $v$ with weight $w$, check whether $p[u] \oplus p[v] = w$. So let's assume there is no contradiction, i.e., all cycles have xor $0$. What would the answer to the problem be? We know that if the graph is connected, there exists a path between any two tiles, and all the values of the tiles are determined. So, in how many ways can we make the graph connected? Say there are $K$ connected components. We can connect them all using $K - 1$ edges, and each edge can take any of $2^{30}$ possible values. Thus, the number of ways of making the graph connected is $\left(2^{30}\right)^{K - 1}$, and this is the answer to the problem. What follows is just the solution; please read the hints in order to understand why it works and how to derive it. Precompute an array $c$ with $c[i] = 2^{30 \cdot i} \pmod{10^9 + 7}$. Let $a$ be the 2d array with the known values of the grid. Consider the graph formed by adding an edge between node $i$ and node $j + n$ with weight $a[i][j]$ for every known tile $(i, j)$ of the grid. Iterate from $1$ to $n + m$, maintaining an array $p$ initialized to $-1$s. If the current node hasn't been visited, run a dfs from it, using array $p$ to maintain the running xor for each node. If we ever encounter a back edge between nodes $u, v$ with weight $w$, check whether $p[u] \oplus p[v] = w$; if not, the answer is zero. If this condition always holds, let $K$ be the number of times you had to run the dfs; $K$ is also the number of connected components of the graph. The answer is $c[K - 1]$. Complexity: $\mathcal{O}(n + m + k)$.
[ "2-sat", "binary search", "combinatorics", "constructive algorithms", "dfs and similar", "dsu", "graphs" ]
2,500
#include <bits/stdc++.h>
using namespace std;

int const Mxn = 2e5 + 2;
long long int const MOD = 1e9 + 7;
long long int precalc[Mxn];
vector<vector<array<int, 2>>> adj;
bool valid = 1;
int pref[Mxn];

void dfs(int node) {
    for (auto [child, w] : adj[node]) {
        if (pref[child] == -1) {
            pref[child] = pref[node] ^ w;
            dfs(child);
        } else {
            if ((pref[child] ^ pref[node]) != w) valid = 0;
        }
    }
}

int main() {
    precalc[0] = 1;
    for (int i = 1; i < Mxn; i++) {
        precalc[i] = (precalc[i - 1] << 30) % MOD;
    }
    int tt;
    for (cin >> tt; tt; --tt) {
        int N, M, K, Q;
        cin >> N >> M >> K >> Q;
        adj.clear();
        adj.assign(N + M, vector<array<int, 2>>{});
        int x, y, z;
        for (int i = 0; i < K; i++) {
            cin >> x >> y >> z;
            x--, y--;
            adj[x].push_back({y + N, z});
            adj[y + N].push_back({x, z});
        }
        for (int i = 0; i < N + M; i++) pref[i] = -1;
        valid = 1;
        int cnt = 0;
        for (int i = 0; i < N + M; i++) {
            if (pref[i] != -1) continue;
            cnt++;
            pref[i] = 0;
            dfs(i);
        }
        if (valid) cout << precalc[cnt - 1] << '\n';
        else cout << 0 << '\n';
    }
}
2022
E2
Billetes MX (Hard Version)
\textbf{This is the hard version of the problem. In this version, it is guaranteed that $q \leq 10^5$. You can make hacks only if both versions of the problem are solved.} An integer grid $A$ with $p$ rows and $q$ columns is called beautiful if: - All elements of the grid are integers between $0$ and $2^{30}-1$, and - For any subgrid, the XOR of the values at the corners is equal to $0$. Formally, for any four integers $i_1$, $i_2$, $j_1$, $j_2$ ($1 \le i_1 < i_2 \le p$; $1 \le j_1 < j_2 \le q$), $A_{i_1, j_1} \oplus A_{i_1, j_2} \oplus A_{i_2, j_1} \oplus A_{i_2, j_2} = 0$, where $\oplus$ denotes the bitwise XOR operation. There is a partially filled integer grid $G$ with $n$ rows and $m$ columns where only $k$ cells are filled. Polycarp wants to know how many ways he can assign integers to the unfilled cells so that the grid is beautiful. However, Monocarp thinks that this problem is too easy. Therefore, he will perform $q$ updates on the grid. In each update, he will choose an unfilled cell and assign an integer to it. Note that these updates are \textbf{persistent}. That is, changes made to the grid will apply when processing future updates. For each of the $q + 1$ states of the grid, the initial state and after each of the $q$ queries, determine the number of ways Polycarp can assign integers to the unfilled cells so that the grid is beautiful. Since this number can be very large, you are only required to output their values modulo $10^9+7$.
Please read the solution to E1 beforehand, as well as all its hints. Through the observations in E1, we can reduce the problem to the following: we have a graph, we add edges, and after each addition we want to determine whether all of its cycles have xor $0$, as well as the number of connected components of the graph. Edges are never removed, so whenever an added edge creates a cycle with xor distinct from zero, that cycle stays in the graph for all following updates. So we can binary search for the first addition that creates a cycle with xor distinct from zero, using the same dfs we used in E1; after that edge, the answer is always zero. For all the additions before it, we must determine how many connected components the graph has at each step, but this is easily solvable with Disjoint Set Union. Complexity: $\mathcal{O}(\log(q)(n + m + k + q) + \alpha(n + m)(q + k))$. Alternatively, we can answer the queries online. Remember that if the graph only contains cycles with xor $0$, the xor of a path between a pair of nodes is unique; we'll use this to our advantage. Let $W(u, v)$ be this unique xor of a path between nodes $u$ and $v$. Let's modify a DSU: drop the path compression, and define an array $p$ that maintains the following invariant: for every node $u$ in a component with root $r$, $W(u, r)$ equals the xor of $p[x]$ over all ancestors $x$ of $u$ in our DSU (we also consider $u$ an ancestor of itself). Whenever we add an edge with weight $w$ between two nodes $u$ and $v$ in different components, we consider the roots $U$ and $V$ of their respective components. Without loss of generality, assume $U$'s component has more elements than $V$'s. We add an edge with weight $W(u, U) \oplus W(v, V) \oplus w$ between $V$ and $U$, and make $U$ the root of the new component. This last step is the small-to-large optimization, which ensures the height of our trees stays logarithmic.
With this data structure, we can maintain the number of connected components as in a usual DSU, and whenever an edge $(u, v)$ with weight $w$ is added with $u$ and $v$ in the same component, we can obtain the value of $W(u, v)$ in $\mathcal{O}(\log(n))$ and check whether it equals $w$. This idea is similar to the data structure described here, and is useful in other contexts. Complexity: $\mathcal{O}(\log(q + k)(n + m + k + q))$. In our Olympiad, we had $v_i \le 1$ instead of $v_i \le 2^{30}$, and we also had a final subtask where tiles could change values. How would you solve this? We have three different solutions for this problem, one of them found during the Olympiad! I'll let the contestant who found it post it if he wants to.
[ "binary search", "combinatorics", "data structures", "dsu", "graphs" ]
2,600
#include <bits/stdc++.h>
using namespace std;

int const Mxn = 2e5 + 2;
long long int const MOD = 1e9 + 7;
vector<vector<array<int, 2>>> adj;
vector<array<int, 3>> Edges;
long long int precalc[Mxn];

struct DSU {
    vector<int> leader;
    vector<int> sz;
    int components;
    DSU(int N) {
        leader.resize(N);
        iota(leader.begin(), leader.end(), 0);
        sz.assign(N, 1);
        components = N;
    }
    int find(int x) {
        return (leader[x] == x) ? x : (leader[x] = find(leader[x]));
    }
    void unite(int x, int y) {
        x = find(x), y = find(y);
        if (x == y) return;
        if (sz[x] < sz[y]) swap(x, y);
        leader[y] = leader[x];
        sz[x] += sz[y];
        components--;
    }
};

bool valid = 1;
int pref[Mxn];

void dfs(int node = 0) {
    for (auto [child, w] : adj[node]) {
        if (pref[child] == -1) {
            pref[child] = pref[node] ^ w;
            dfs(child);
        } else {
            if ((pref[child] ^ pref[node]) != w) valid = 0;
        }
    }
}

int main() {
    precalc[0] = 1;
    for (int i = 1; i < Mxn; i++) {
        precalc[i] = (precalc[i - 1] << 30) % MOD;
    }
    int tt;
    for (cin >> tt; tt; --tt) {
        int N, M, K, Q;
        cin >> N >> M >> K >> Q;
        Edges.clear();
        adj.clear();
        adj.assign(N + M, vector<array<int, 2>>{});
        DSU dsu(N + M);
        dsu.components = N + M;
        int x, y, z;
        for (int i = 0; i < K; i++) {
            cin >> x >> y >> z;
            x--, y--;
            adj[x].push_back({y + N, z});
            adj[y + N].push_back({x, z});
            dsu.unite(x, y + N);
        }
        for (int i = 0; i < Q; i++) {
            cin >> x >> y >> z;
            x--, y--;
            Edges.push_back({x, y + N, z});
        }
        int firstzero = 0;
        for (int k = 20; k >= 0; k--) {
            if (firstzero + (1 << k) > Q) continue;
            for (int i = firstzero; i < firstzero + (1 << k); i++) {
                x = Edges[i][0], y = Edges[i][1], z = Edges[i][2];
                adj[x].push_back({y, z});
                adj[y].push_back({x, z});
            }
            valid = 1;
            for (int i = 0; i < N + M; i++) pref[i] = -1;
            for (int i = 0; i < N + M; i++) {
                if (pref[i] == -1) pref[i] = 0, dfs(i);
            }
            if (!valid) {
                for (int i = firstzero + (1 << k) - 1; i >= firstzero; --i) {
                    x = Edges[i][0], y = Edges[i][1];
                    adj[x].pop_back();
                    adj[y].pop_back();
                }
            } else {
                firstzero += (1 << k);
            }
        }
        if (firstzero == 0) {
            valid = 1;
            for (int i = 0; i < N + M; i++) pref[i] = -1;
            for (int i = 0; i < N + M; i++) {
                if (pref[i] == -1) pref[i] = 0, dfs(i);
            }
            if (!valid) firstzero--;
        }
        for (int i = 0; i <= Q; i++) {
            if (i <= firstzero) cout << precalc[dsu.components - 1] << '\n';
            else cout << 0 << '\n';
            if (i == Q) break;
            x = Edges[i][0], y = Edges[i][1];
            dsu.unite(x, y);
        }
    }
}
2023
A
Concatenation of Arrays
You are given $n$ arrays $a_1$, $\ldots$, $a_n$. The length of each array is two. Thus, $a_i = [a_{i, 1}, a_{i, 2}]$. You need to concatenate the arrays into a single array of length $2n$ such that the number of inversions$^{\dagger}$ in the resulting array is minimized. Note that you \textbf{do not need} to count the actual number of inversions. More formally, you need to choose a permutation$^{\ddagger}$ $p$ of length $n$, so that the array $b = [a_{p_1,1}, a_{p_1,2}, a_{p_2, 1}, a_{p_2, 2}, \ldots, a_{p_n,1}, a_{p_n,2}]$ contains as few inversions as possible. $^{\dagger}$The number of inversions in an array $c$ is the number of pairs of indices $i$ and $j$ such that $i < j$ and $c_i > c_j$. $^{\ddagger}$A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
Let's sort the arrays in order of non-decreasing sum of elements. It turns out that it is always optimal to concatenate the arrays in this order. To prove this, let's consider some optimal answer. Note that if in the final order there are two adjacent arrays, such that the sum of the elements of the left array is greater than the sum of the elements of the right array, then we can swap them, and the number of inversions will not increase. Thus, we can bring any optimal answer to ours by swapping adjacent arrays so that the number of inversions does not increase each time. Thus, such an order is truly optimal.
[ "constructive algorithms", "greedy", "math", "sortings" ]
1,300
null