contest_id: string (length 1-4)
index: string (43 distinct values)
title: string (length 2-63)
statement: string (length 51-4.24k)
tutorial: string (length 19-20.4k)
tags: list (length 0-11)
rating: int64 (range 800-3.5k)
code: string (length 46-29.6k)
1767
B
Block Towers
There are $n$ block towers, numbered from $1$ to $n$. The $i$-th tower consists of $a_i$ blocks. In one move, you can move one block from tower $i$ to tower $j$, but only if $a_i > a_j$. That move increases $a_j$ by $1$ and decreases $a_i$ by $1$. You can perform as many moves as you would like (possibly, zero). What's the largest number of blocks you can have on tower $1$ after the moves?
Notice that it never makes sense to move blocks between two towers such that neither of them is tower $1$, as that can only decrease their heights. Moreover, it never makes sense to move blocks away from tower $1$. Thus, all operations will be moving blocks from some towers to tower $1$. At the start, which towers can move at least one block to tower $1$? Only towers $i$ with $a_i > a_1$. What happens after you move a block? Tower $1$ becomes higher, and some tower becomes lower. Thus, the set of towers that can share a block can never grow. Let's order the towers by the number of blocks in them. At the start, the towers that can share a block form a suffix of this order. After one move is made, the towers get reordered, and the suffix can only shrink. If that suffix shrinks, which tower becomes too low first? The leftmost one that was available before. So, regardless of what the move is, the first tower that might become unavailable is the leftmost available one. Thus, let's keep using it while it is still available. The algorithm is then the following: find the lowest tower that can move a block to tower $1$, move a block, repeat. When no tower is higher than tower $1$, the process stops. However, the constraints don't allow us to do exactly that: this could take up to $10^9$ single-block moves per testcase. So let's move the blocks in bulk. Since the lowest available tower remains the lowest until it can no longer be used, make all the moves from it at once. If the current number of blocks in tower $1$ is $x$ and the current number of blocks in that tower is $y$, then $\lceil\frac{y - x}{2}\rceil$ blocks can be moved. You can also avoid maintaining the set of available towers by simply iterating over the towers in increasing order of height. Overall complexity: $O(n \log n)$ per testcase.
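As a sanity check (not part of the editorial), the bulk formula $\lceil\frac{y - x}{2}\rceil$ can be compared against a one-block-at-a-time simulation on small inputs; the function names below are illustrative:

```python
def max_tower1(a):
    # bulk moves: absorb from each tower higher than tower 1, lowest first
    x = a[0]
    for y in sorted(a[1:]):
        if y > x:
            x += (y - x + 1) // 2  # ceil((y - x) / 2) blocks can be moved
    return x

def max_tower1_slow(a):
    # one block at a time, always from the lowest tower still higher than tower 1
    x, rest = a[0], list(a[1:])
    while True:
        rest.sort()
        for i, y in enumerate(rest):
            if y > x:
                rest[i] -= 1
                x += 1
                break
        else:
            return x
```

Both agree on every small case, e.g. `max_tower1([1, 2, 3]) == 3`.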
[ "data structures", "greedy", "sortings" ]
800
for _ in range(int(input())):
    n = int(input())
    a = list(map(int, input().split()))
    x = a[0]
    a = sorted(a[1:])
    for y in a:
        if y > x:
            x += (y - x + 1) // 2
    print(x)
1767
C
Count Binary Strings
You are given an integer $n$. You have to calculate the number of binary (consisting of characters 0 and/or 1) strings $s$ meeting the following constraints. For every pair of integers $(i, j)$ such that $1 \le i \le j \le n$, an integer $a_{i,j}$ is given. It imposes the following constraint on the string $s_i s_{i+1} s_{i+2} \dots s_j$: - if $a_{i,j} = 1$, all characters in $s_i s_{i+1} s_{i+2} \dots s_j$ should be the same; - if $a_{i,j} = 2$, there should be at least two different characters in $s_i s_{i+1} s_{i+2} \dots s_j$; - if $a_{i,j} = 0$, there are no additional constraints on the string $s_i s_{i+1} s_{i+2} \dots s_j$. Count the number of binary strings $s$ of length $n$ meeting the aforementioned constraints. Since the answer can be large, print it modulo $998244353$.
Suppose we build the string from left to right, and when we place the $i$-th character, we ensure that all substrings ending with the $i$-th character are valid. What do we need to know in order to check this? Suppose the character $s_i$ is 0. Let's try going to the left of it. The one-character string from $i$ to $i$ trivially has all characters the same; but if there is at least one character 1 before the $i$-th position, the string $s_1 s_2 s_3 \dots s_i$ will have two different characters. What about the strings in between? The string $s_j s_{j+1} \dots s_i$ will contain different characters if and only if there is at least one 1 in $[j, i)$ (since $s_i$ is 0), so we are actually interested in the position of the last character 1 before $i$. The same logic applies if the character $s_i$ is 1: we are only interested in the position of the last 0 before $i$, and that is enough to check that no substring ending with the $i$-th character is violated. What if, when we choose the $i$-th character, we violate a constraint on some substring that doesn't end in the $i$-th position? Well, you could also check that... or you could just ignore it. It actually doesn't matter, because such a substring ends in some position $k > i$, and we will check it when placing the $k$-th character. So, the solution can be formulated with the following dynamic programming: let $dp_{i,j}$ be the number of ways to choose the first $i$ characters of the string so that the last character different from $s_i$ was $s_j$ (or $j = 0$ if there was no such character), and all the constraints on the substrings ending no later than position $i$ are satisfied. 
The transitions are simple: you either place the same character as the last one (going from $dp_{i,j}$ to $dp_{i+1,j}$), or a different character (going from $dp_{i,j}$ to $dp_{i+1,i}$); and when you place the character at position $i+1$, you check all the constraints on the substrings ending at position $i+1$. Note that the state $dp_{1,0}$ actually represents two strings: 0 and 1. This solution works in $O(n^3)$, although $O(n^4)$ or $O(n^2)$ implementations are also possible.
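The DP above can be sketched directly in Python (a hypothetical `count_binary_strings` helper mirroring the reference C++; `a[i][j]`, 0-indexed with $i \le j$, holds the constraint on $s_i \dots s_j$):

```python
MOD = 998244353

def count_binary_strings(n, a):
    # dp[i][j]: ways to build the first i characters so that the last character
    # different from s[i-1] is at 0-indexed position j; j = 0 doubles as
    # "no differing character", which is safe since position 0 cannot differ
    # from anything before it
    def check(length, last):
        # verify every constraint on substrings ending at position length-1
        for i in range(length):
            c = a[i][length - 1]
            if c == 1 and last > i:   # a differing character inside: not all same
                return False
            if c == 2 and last <= i:  # no differing character inside: all same
                return False
        return True

    dp = [[0] * (n + 1) for _ in range(n + 1)]
    if a[0][0] != 2:
        dp[1][0] = 2  # the strings "0" and "1"
    for i in range(1, n):
        for j in range(i):
            if not dp[i][j]:
                continue
            for k in (j, i):  # repeat the last character (k=j) or switch (k=i)
                if check(i + 1, k):
                    dp[i + 1][k] = (dp[i + 1][k] + dp[i][j]) % MOD
    return sum(dp[n][:n]) % MOD
```

For example, with $n = 2$ and $a_{1,2} = 2$ (all other constraints $0$), only 01 and 10 survive, so the count is $2$.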
[ "data structures", "dp" ]
2100
#include<bits/stdc++.h>
using namespace std;

const int MOD = 998244353;

int add(int x, int y) {
    x += y;
    while(x >= MOD) x -= MOD;
    while(x < 0) x += MOD;
    return x;
}

int mul(int x, int y) {
    return (x * 1ll * y) % MOD;
}

const int N = 143;

int n;
int a[N][N];
int dp[N][N];

bool check(int cnt, int last) {
    for(int i = 0; i < cnt; i++) {
        if(a[i][cnt - 1] == 0) continue;
        if(a[i][cnt - 1] == 1 && last > i) return false;
        if(a[i][cnt - 1] == 2 && last <= i) return false;
    }
    return true;
}

int main() {
    cin >> n;
    for(int i = 0; i < n; i++) {
        for(int j = i; j < n; j++)
            cin >> a[i][j];
    }
    if(a[0][0] != 2) dp[1][0] = 2;
    for(int i = 1; i < n; i++)
        for(int j = 0; j < i; j++)
            for(int k : vector<int>({j, i}))
                if(check(i + 1, k))
                    dp[i + 1][k] = add(dp[i + 1][k], dp[i][j]);
    int ans = 0;
    for(int i = 0; i < n; i++)
        ans = add(ans, dp[n][i]);
    cout << ans << endl;
}
1767
D
Playoff
$2^n$ teams participate in a playoff tournament. The tournament consists of $2^n - 1$ games. They are held as follows: in the first phase of the tournament, the teams are split into pairs: team $1$ plays against team $2$, team $3$ plays against team $4$, and so on (so, $2^{n-1}$ games are played in that phase). When a team loses a game, it is eliminated, and each game results in elimination of one team (there are no ties). After that, only $2^{n-1}$ teams remain. If only one team remains, it is declared the champion; otherwise, the second phase begins, where $2^{n-2}$ games are played: in the first one of them, the winner of the game "$1$ vs $2$" plays against the winner of the game "$3$ vs $4$", then the winner of the game "$5$ vs $6$" plays against the winner of the game "$7$ vs $8$", and so on. This process repeats until only one team remains. The skill level of the $i$-th team is $p_i$, where $p$ is a permutation of integers $1$, $2$, ..., $2^n$ (a permutation is an array where each element from $1$ to $2^n$ occurs exactly once). You are given a string $s$ which consists of $n$ characters. These characters denote the results of games in each phase of the tournament as follows: - if $s_i$ is equal to 0, then during the $i$-th phase (the phase with $2^{n-i}$ games), in each match, the team with the lower skill level wins; - if $s_i$ is equal to 1, then during the $i$-th phase (the phase with $2^{n-i}$ games), in each match, the team with the higher skill level wins. Let's say that an integer $x$ is \textbf{winning} if it is possible to find a permutation $p$ such that the team with skill $x$ wins the tournament. Find all winning integers.
Firstly, let's prove that the order of characters in $s$ is interchangeable. Suppose we have a tournament of four teams with skills $a$, $b$, $c$ and $d$ such that $a < b < c < d$, and this tournament has the form $01$ or $10$. It's easy to see that $a$ and $d$ cannot be winners, since $a$ will be eliminated in the round of type $1$, and $d$ will be eliminated in the round of type $0$. However, it's easy to show that, both with $s = 10$ and with $s = 01$, $b$ and $c$ can be winners. Applying this argument to matches played during phases $i$ and $i+1$ (a group of two matches during phase $i$ and the match during phase $i + 1$ between their winners can be considered a tournament with $n = 2$), we can show that swapping $s_i$ and $s_{i+1}$ does not affect the possible winners of the tournament. So, suppose all phases of type $1$ happen before phases of type $0$; there are $x$ phases of type $1$ and $y$ phases of type $0$ ($x + y = n$). $2^{x+y} - 2^y$ teams will be eliminated in the first part (the phases of type $1$), and the team with the lowest skill that wasn't eliminated in the first part will win the second part. It's easy to see that the teams with skills $[1..2^x-1]$ cannot pass through the first part of the tournament, since to pass it, a team has to be the strongest in its "subtree" of size $2^x$. Furthermore, since the minimum of the $2^y$ teams passing through the first part wins, the winner should have skill not greater than $2^{x+y}-2^y+1$: the winner must have lower skill than at least $2^y - 1$ teams, so teams with skills higher than $2^{x+y}-2^y+1$ cannot win. Okay, now all possible winners belong to the segment $[2^x, 2^n - 2^y + 1]$. Let's show that any integer from this segment can be winning. Suppose $k \in [2^x, 2^n - 2^y + 1]$; let's construct the tournament in such a way that only the team with skill $k$ and the $2^y-1$ teams with the highest skills pass through the first part of the tournament (obviously, team $k$ then wins). 
There are $2^y$ independent tournaments of size $2^x$ in the first part; let's assign teams with skills from $1$ to $2^x-1$, and also the team $k$ to one of those tournaments; for all other $2^y-1$ tournaments, let's assign the teams in such a way that exactly one team from the $2^y-1$ highest ones competes in each of them. It's easy to see that the team $k$ will win its tournament, and every team from the $2^y-1$ highest ones will win its tournament as well, so the second half will contain only teams with skills $k$ and $[2^n-2^y+2..2^n]$ (and, obviously, $k$ will be the winner of this tournament). So, the answer to the problem is the segment of integers $[2^x, 2^n - 2^y + 1]$.
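The interchangeability claim and the final segment $[2^x, 2^n - 2^y + 1]$ can be checked by brute force for tiny $n$; a Python sketch with illustrative function names:

```python
from itertools import permutations

def simulate(p, s):
    # p: skills in bracket order; s[i] = '1' means higher skill wins in phase i
    teams = list(p)
    for ch in s:
        teams = [max(teams[i], teams[i + 1]) if ch == '1'
                 else min(teams[i], teams[i + 1])
                 for i in range(0, len(teams), 2)]
    return teams[0]

def winners_brute(n, s):
    # all skills that can win over every permutation of 1..2^n
    size = 1 << n
    return sorted({simulate(p, s) for p in permutations(range(1, size + 1))})

def winners_formula(n, s):
    # the editorial's answer: the segment [2^x, 2^n - 2^y + 1]
    x = s.count('1')
    y = n - x
    return list(range(1 << x, (1 << n) - (1 << y) + 2))
```

For $n = 2$, both $s = $ `01` and $s = $ `10` yield winners $\{2, 3\}$, matching the argument above.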
[ "combinatorics", "constructive algorithms", "dp", "greedy", "math" ]
1500
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    string s;
    cin >> n >> s;
    int k = count(s.begin(), s.end(), '1');
    for (int x = 1 << k; x <= (1 << n) - (1 << (n - k)) + 1; ++x)
        cout << x << ' ';
}
1767
E
Algebra Flash
\begin{quote} Algebra Flash 2.2 has just been released! Changelog: - New gamemode! Thank you for the continued support of the game! \end{quote} Huh, is that it? Slightly disappointed, you boot up the game and click on the new gamemode. It says "Colored platforms". There are $n$ platforms, numbered from $1$ to $n$, placed one after another. There are $m$ colors available in the game, numbered from $1$ to $m$. The $i$-th platform is colored $c_i$. You start on platform $1$ and want to reach platform $n$. In one move, you can jump from some platform $i$ to platform $i + 1$ or $i + 2$. All platforms are initially deactivated (including platforms $1$ and $n$). For each color $j$, you can pay $x_j$ coins to activate all platforms of that color. You want to activate some platforms so that you can start on an activated platform $1$, jump along activated platforms, and reach an activated platform $n$. What's the smallest number of coins you can spend to achieve that?
Imagine we bought some subset of colors. How do we check if there exists a path from $1$ to $n$? Well, we could write an easy dp. However, it's not immediately obvious how to proceed from there. You can't really implement buying colors inside the dp, because you would have to know whether you bought the current color before, and that's not viable without storing a lot of information. Let's find another approach. Let's try to deduce when a subset is bad, i.e. when the path doesn't exist. Trivial cases: $c_1$ or $c_n$ isn't bought. Next, if there are two consecutive platforms such that neither of their colors is bought, the path doesn't exist. Otherwise, if there are no such pairs, one can show that the path always exists. In particular, that implies that for every pair of consecutive platforms, at least one color of the pair has to be bought. If the colors of the pair are the same, then simply that color has to be bought. The next step is probably hard to get without prior experience. Notice how the condition is similar to a well-known graph problem called "vertex cover". That problem asks for a set of vertices in an undirected graph such that every edge has at least one of its endpoints in the set. In particular, our problem is to find a vertex cover of minimum cost. That problem is known to be NP-hard, hence the constraints. We can't solve it in polynomial time, but we'll attempt to do it faster than the naive $O(2^m \cdot m^2)$ approach. Let's start with this approach anyway. We can iterate over a mask of taken vertices and check if that mask is ok. In order to do that, we iterate over the edges and check that at least one vertex is taken for each of them. Again, having a bit of prior experience, one could tell from the constraints that the intended solution involves the meet-in-the-middle technique. Let's iterate over the mask of taken vertices among vertices from $1$ to $\frac m 2$, then over the mask of taken vertices from $\frac m 2 + 1$ to $m$. 
The conditions on the edges split them into three groups: the edges that are completely inside $\mathit{mask}_1$, the edges that are completely inside $\mathit{mask}_2$, and the edges that have one endpoint in $\mathit{mask}_1$ and the other in $\mathit{mask}_2$. The first two types are easy to check, but how do we force the third type to be all good? Consider the vertices that are not taken into $\mathit{mask}_1$. All edges incident to them will turn out bad unless we take their other endpoints into $\mathit{mask}_2$. That gives us a minimal set of constraints for each $\mathit{mask}_1$: a mask $\mathit{con}$ that includes all vertices of the second half that have an edge to at least one non-taken vertex of the first half. Then $\mathit{mask}_2$ is good if it has $\mathit{con}$ as its submask. Thus, we want to update the answer with the minimum-cost $\mathit{mask}_1$ such that its $\mathit{con}$ is a submask of $\mathit{mask}_2$. Finally, let $\mathit{dp}[\mathit{mask}]$ store the minimum cost of some $\mathit{mask}_1$ such that its $\mathit{con}$ is a submask of $\mathit{mask}$. Initialize the $\mathit{dp}$ with the exact $\mathit{con}$ for each $\mathit{mask}_1$. Then push the values of $\mathit{dp}$ up by adding any new non-taken bit to each mask. When iterating over $\mathit{mask}_2$, check if it's good with respect to the edges of the second kind and update the answer with $\mathit{dp}[\mathit{mask}_2]$. Overall complexity: $O(2^{m/2} \cdot m^2)$.
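A Python sketch of this meet-in-the-middle scheme (the name `min_vertex_cover_cost` is illustrative; self-loops model the mandatory colors $c_1$ and $c_n$, as in the reference C++):

```python
def min_vertex_cover_cost(m, cost, edges):
    # adj[v]: bitmask of neighbours of v (a self-loop forces v into the cover)
    adj = [0] * m
    for u, v in edges:
        adj[u] |= 1 << v
        adj[v] |= 1 << u
    h = m // 2                       # first half [0, h), second half [h, m)
    low = (1 << h) - 1
    INF = float('inf')
    # dp[mask]: min cost of a first-half choice whose constraint mask `con`
    # (required second-half vertices) is a submask of `mask`
    dp = [INF] * (1 << (m - h))
    for mask1 in range(1 << h):
        con, tot = 0, 0
        for i in range(h):
            if mask1 >> i & 1:
                tot += cost[i]
            else:
                con |= adj[i]        # neighbours of non-taken vertices
        if con & low & ~mask1:       # an edge inside the first half is uncovered
            continue
        dp[con >> h] = min(dp[con >> h], tot)
    for i in range(m - h):           # push dp values up to supermasks
        for mask in range(1 << (m - h)):
            if mask >> i & 1:
                dp[mask] = min(dp[mask], dp[mask ^ (1 << i)])
    ans = INF
    for mask2 in range(1 << (m - h)):
        con, tot = 0, 0
        for i in range(m - h):
            if mask2 >> i & 1:
                tot += cost[i + h]
            else:
                con |= adj[i + h]
        if (con >> h) & ~mask2:      # an edge inside the second half is uncovered
            continue
        ans = min(ans, tot + dp[mask2])
    return ans
```

On a triangle with costs $1, 2, 3$ the minimum cover is $\{1, 2\}$ with cost $3$, and the sketch agrees with a $O(2^m)$ brute force on small graphs.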
[ "bitmasks", "brute force", "dp", "graphs", "math", "meet-in-the-middle", "trees" ]
2500
#include <bits/stdc++.h>

#define forn(i, n) for (int i = 0; i < int(n); i++)

using namespace std;

int main() {
    int n, m;
    scanf("%d%d", &n, &m);
    vector<int> c(n);
    forn(i, n){
        scanf("%d", &c[i]);
        --c[i];
    }
    vector<int> x(m);
    forn(i, m) scanf("%d", &x[i]);
    vector<long long> g(m);
    forn(i, n - 1){
        g[c[i]] |= 1ll << c[i + 1];
        g[c[i + 1]] |= 1ll << c[i];
    }
    g[c[0]] |= 1ll << c[0];
    g[c[n - 1]] |= 1ll << c[n - 1];
    int mid = m / 2;
    vector<int> dp(1 << mid, 1e9);
    forn(mask, 1 << (m - mid)){
        long long chk = 0;
        int tot = 0;
        forn(i, m - mid){
            if ((mask >> i) & 1) tot += x[i + mid];
            else chk |= g[i + mid];
        }
        if (((chk >> mid) | mask) != mask) continue;
        chk &= (1ll << mid) - 1;
        dp[chk] = min(dp[chk], tot);
    }
    forn(i, mid) forn(mask, 1 << mid) if (!((mask >> i) & 1)){
        dp[mask | (1 << i)] = min(dp[mask | (1 << i)], dp[mask]);
    }
    int ans = 1e9;
    forn(mask, 1 << mid){
        long long chk = 0;
        int tot = 0;
        forn(i, mid){
            if ((mask >> i) & 1) tot += x[i];
            else chk |= g[i];
        }
        chk &= (1ll << mid) - 1;
        if ((chk | mask) != mask) continue;
        ans = min(ans, dp[mask] + tot);
    }
    printf("%d\n", ans);
    return 0;
}
1767
F
Two Subtrees
You are given a rooted tree consisting of $n$ vertices. The vertex $1$ is the root. Each vertex has an integer written on it; this integer is $val_i$ for the vertex $i$. You are given $q$ queries to the tree. The $i$-th query is represented by two vertices, $u_i$ and $v_i$. To answer the query, consider all vertices $w$ that lie in the subtree of $u_i$ or $v_i$ \textbf{(if a vertex is in both subtrees, it is counted twice)}. For all vertices in these two subtrees, list all integers written on them, and find the integer with the maximum number of occurrences. If there are multiple integers with maximum number of occurrences, the \textbf{minimum} among them is the answer.
Original solution (by shnirelman) First, let's solve the following problem: we need to maintain a multiset of numbers and process queries of three types: add a number to the multiset, remove one occurrence of a number from the multiset (it is guaranteed that it exists), and calculate the mode of the multiset. To do this, we maintain the array $cnt_i$ - the frequency of $i$ in the multiset. The mode is then the position of the leftmost maximum in this array. There are many ways to search for this position; we will use the following: build a sqrt-decomposition over the array $cnt$: for each block, maintain the maximum on this block and an array $c_i$ - the number of positions $j$ in this block such that $cnt_j = i$. Since each of the original queries changes $cnt_i$ by no more than $1$, the maximum in the block also changes by no more than $1$ and, using the array $c$, it is easy to update it after each query. Now, to find the mode (the position of the leftmost maximum in the $cnt$ array), first go through all the blocks to find the value of the maximum and the leftmost block in which it occurs, then find the desired position inside this block. Thus, the add and remove queries run in $O(1)$, and a mode query runs in $O(\sqrt{A})$, where $A$ is the number of possible distinct values; in this problem $A = 2 \cdot 10^5$. Now let's get back to the problem itself. Let's build a preorder traversal of our tree. Let $tin_v$ be the position ($0$-indexed) of the vertex $v$ in the preorder traversal, and $tout_v$ be the size of the preorder traversal after leaving the vertex $v$. Then the half-interval $[tin_v, tout_v)$ of the preorder traversal represents the set of vertices of the subtree of $v$. For the $i$-th query, we will assume $tin_{v_i} \le tin_{u_i}$. 
Let $sz_v = tout_v - tin_v$ be the size of the subtree of $v$, and let $B$ be some integer; then $v$ is called light if $sz_v < B$, and heavy otherwise. A query $i$ is called light (heavy) if $v_i$ is a light (heavy) vertex. We will solve the problem for light and heavy queries independently. Light queries. Let's use the small-to-large technique and maintain the multiset described at the beginning of the solution. Suppose at some moment we have this multiset for a vertex $w$. Let's answer all light queries with $u_i = w$: take all the vertices from the subtree of $v_i$, add the numbers written on them, calculate the mode of the current multiset - this is the answer to the query - and then delete the newly added vertices. In the standard implementation of small-to-large, you need to maintain several structures at the same time, which is impossible here because each of them takes $O(A\sqrt{A})$ memory. This problem can be avoided, for example, as follows: before constructing the preorder traversal, put the heaviest son of each vertex at the head of its adjacency list. Then it is possible to iterate over the vertices in preorder while preserving the asymptotics. This part of the solution runs in $O(n\log{n} + qB + q\sqrt{A})$. Heavy queries. Let's divide all heavy vertices into non-intersecting vertical paths, so that two vertices from the same path have subtrees that differ by no more than $B$ vertices, and the number of paths is $O(\frac{n}{B})$. To do this, take the deepest unused heavy vertex and build one of the desired paths by repeatedly going up to the parent while the first condition is still met. Then mark all the vertices of this path as used, and start over. Continue while there are still unused heavy vertices. 
It is easy to see that the resulting paths are vertical, and the subtrees of two vertices from the same path differ by no more than $B$ by construction. Let's prove that there are not too many of these paths. To do this, consider the cases in which a path can terminate: 1. The current path contains the root. Since the root has no parent, the path terminates; obviously, there is only $1$ such path. 2. The parent of the last vertex of the path has only one heavy child (that last vertex itself). By construction, termination means that the number of vertices in this path, plus the number of vertices outside the heaviest-son subtree of the parent of the last vertex and of each path vertex except the initial one, exceeds $B$ in total; but each of the counted vertices can be counted for only one such path, so the number of paths that terminate this way does not exceed $\frac{n}{B}$. 3. The parent of the last vertex has more than one heavy child. Let's keep only the heavy vertices in the tree (since the parent of a heavy vertex is also heavy, this is indeed a tree, or an empty graph). This tree contains at most $\frac{n}{B}$ leaves. Computing the total degree of the vertices of this tree, we see that there are at most $\frac{n}{B}$ additional sons (all sons of a vertex except one). This means that the number of paths terminating this way is at most $\frac{n}{B}$. Let's split the heavy queries according to the path where $v_i$ is situated, and answer all queries with $v$ from the same path together. We do it similarly to the light case, with minor differences: at the very beginning, add to the multiset all the vertices of the subtree of the initial vertex of the path, and mentally remove these vertices from the subtrees of the $v_i$. Everything else is preserved. 
Let's calculate how long this takes: adding all vertices from one subtree: $O(n)$; small-to-large: $O(n\log{n})$; to answer one query, due to the condition on vertices from one path, we have to add at most $B$ vertices. Since there are only $O(\frac{n}{B})$ paths, the whole solution takes $O(\frac{n^2\log{n}}{B} + qB + q\sqrt{A})$. We take $B = \sqrt{\frac{n^2\log{n}}{q}}$ and, assuming $n \approx q$, we get $B = \sqrt{n\log{n}}$ and a total running time of $O(n\sqrt{n\log{n}} + n\sqrt{A})$. Implementation details. As already mentioned, a subtree corresponds to a segment of the preorder traversal, so $2$ subtrees are $2$ segments. We maintain the data structure described at the beginning on the sum of $2$ segments. By moving the boundaries of these segments, we can move from one query to another, as in Mo's algorithm. It remains only to sort the queries. Heavy queries are sorted first by the path number of $v_i$, then by $tin_{u_i}$. Light queries are sorted only by $tin_{u_i}$, but for them you can't just move the segment of the $v$ subtree; it has to be rebuilt for each query. Bonus. Solve this problem for two subtrees and a path connecting the roots of these subtrees. Alternative solution (by BledDest) This solution partially intersects with the one described by the problem author. We will use the same data structure for maintaining the mode, and we will also use the DFS order of the tree (but before constructing it, we reorder the children of each vertex so that the heaviest child comes first). Let $tin_v$ be the moment we enter the vertex $v$ in DFS, and $tout_v$ be the moment we leave it. As usual, the segment $[tin_v, tout_v]$ represents the subtree of vertex $v$, and we can change the state of the structure from the subtree of vertex $x$ to the subtree of vertex $y$ in $|tin_x - tin_y| + |tout_x - tout_y|$ operations. Let this number of operations be $cost(x,y)$. Let $v_1, v_2, \dots, v_n$ be the DFS order of the tree. 
We can prove that $cost(v_1, v_2) + cost(v_2, v_3) + \dots + cost(v_{n-1}, v_n)$ is estimated as $O(n \log n)$ if we order the children of each vertex in such a way that the first of them is the heaviest one. Proof. Let's analyze how many times some vertex $v$ is added when we go in DFS order and maintain the current set of vertices. When some vertex is added to the current subtree, this means that the previous vertex in DFS order was not an ancestor of the current vertex, so the current vertex is not the first son of its parent. So, the size of the subtree of the parent is at least 2x the size of the current vertex. Since the path from $v$ to root can have at most $O(\log n)$ such vertices, then the vertex $v$ is added at most $O(\log n)$ times. Okay, how do we use it to process queries efficiently? Let's say that the vertex $v_i$ (the $i$-th vertex in DFS order) has coordinate equal to $cost(v_1, v_2) + cost(v_2, v_3) + \dots + cost(v_{i-1}, v_i)$. Let this coordinate be $c_{v_i}$. Then, if we have the data structure for the query $(x_1, y_1)$ and we want to change it so it meets the query $(x_2, y_2)$, we can do it in at most $|c_{x_1} - c_{x_2}| + |c_{y_1} - c_{y_2}|$ operations, which can be treated as the Manhattan distance between points $(c_{x_1}, c_{y_1})$ and $(c_{x_2}, c_{y_2})$. Do you see where this is going? We can map each query $(x, y)$ to the point $(c_x, c_y)$, and then order them in such a way that the total distance we need to travel between them is not too large. We can use Mo's algorithm to do this. Since the coordinates are up to $O(n \log n)$, but there are only $q$ points, some alternative sorting orders for Mo (like the one that uses Hilbert's curve) may work better than the usual one.
[ "data structures", "trees" ]
3100
#include <bits/stdc++.h>

#define f first
#define s second

using namespace std;

using li = long long;
using ld = long double;
using pii = pair<int, int>;

const int INF = 2e9 + 13;
const li INF64 = 2e18 + 13;
const int M = 998244353;
const int A = 2e5 + 13;
const int N = 2e5 + 13;
const int B = 2000;
const int SQRTA = 500;
const int K = N / B + 113;

int val[N];
vector<int> g[N];
int sz[N];
int gr[N];
int leaf[N], group_root[N];
int par[N];
bool heavy[N];
int tin[N], tout[N], T = 0, mid[N];
int et[N];
int valet[N];

void dfs1(int v, int pr, int depth) {
    par[v] = pr;
    sz[v] = 1;
    int mx = -1;
    for(int i = 0; i < g[v].size(); i++) {
        int u = g[v][i];
        if(u != pr) {
            dfs1(u, v, depth + 1);
            sz[v] += sz[u];
            if(mx == -1 || sz[g[v][mx]] < sz[u]) mx = i;
        }
    }
    if(mx != -1) swap(g[v][mx], g[v][0]);
}

void dfs2(int v) {
    et[T] = v;
    tin[v] = T++;
    for(int u : g[v]) {
        if(u != par[v]) dfs2(u);
    }
    tout[v] = T;
}

struct Query {
    int ind;
    int v, u;
    int lv, rv, lu, ru;
    int b;
    Query() {};
};

bool cmp(const Query& a, const Query& b) {
    if(a.b != b.b) return a.b < b.b;
    else return a.lu < b.lu;
}

int cnt[A];
int block_index[A];
int block_mx[A];
int block_cnt_of_cnt[A / SQRTA + 13][A];

inline void insert(int i) {
    block_cnt_of_cnt[block_index[valet[i]]][cnt[valet[i]]]--;
    cnt[valet[i]]++;
    block_cnt_of_cnt[block_index[valet[i]]][cnt[valet[i]]]++;
    if(cnt[valet[i]] > block_mx[block_index[valet[i]]])
        block_mx[block_index[valet[i]]]++;
}

inline void erase(int i) {
    if(cnt[valet[i]] == block_mx[block_index[valet[i]]] && block_cnt_of_cnt[block_index[valet[i]]][cnt[valet[i]]] == 1)
        block_mx[block_index[valet[i]]]--;
    block_cnt_of_cnt[block_index[valet[i]]][cnt[valet[i]]]--;
    cnt[valet[i]]--;
    block_cnt_of_cnt[block_index[valet[i]]][cnt[valet[i]]]++;
}

int get_mode() {
    int mx = 0;
    for(int i = 0; i < A / SQRTA + 1; i++) mx = max(mx, block_mx[i]);
    for(int i = 0; ; i++) {
        if(block_mx[i] == mx) {
            for(int j = i * SQRTA; ; j++) {
                if(cnt[j] == mx) return j;
            }
        }
    }
}

Query queries[N];
int ans[N];

void solve() {
    int n;
    cin >> n;
    for(int i = 0; i < n; i++) cin >> val[i];
    for(int i = 1; i < n; i++) {
        int v, u;
        cin >> v >> u;
        v--; u--;
        g[v].push_back(u);
        g[u].push_back(v);
    }
    dfs1(0, -1, 0);
    vector<pii> ord(n);
    for(int i = 0; i < n; i++) {
        ord[i] = {sz[i], i};
        gr[i] = -1;
    }
    sort(ord.begin(), ord.end());
    for(int i = 0; i < n; i++) {
        if(sz[i] >= B) heavy[i] = true;
    }
    int cur = 0;
    for(int i = 0; i < n; i++) {
        int v = ord[i].s;
        if(sz[v] < B || gr[v] != -1) continue;
        leaf[cur] = v;
        int u = v;
        while(gr[u] == -1 && sz[u] - sz[v] < B) {
            gr[u] = cur;
            group_root[cur] = u;
            u = par[u];
        }
        cur++;
    }
    dfs2(0);
    for(int i = 0; i < n; i++) {
        if(sz[i] < B) {
            gr[i] = cur + tin[i] / B;
        }
    }
    for(int i = 0; i < n; i++) valet[i] = val[et[i]];
    for(int i = 0; i < A; i++) {
        block_index[i] = i / SQRTA;
    }
    int q;
    cin >> q;
    for(int i = 0; i < q; i++) {
        queries[i].ind = i;
        cin >> queries[i].v >> queries[i].u;
        queries[i].v--;
        queries[i].u--;
        queries[i].lv = tin[queries[i].v];
        queries[i].rv = tout[queries[i].v];
        queries[i].lu = tin[queries[i].u];
        queries[i].ru = tout[queries[i].u];
        if(queries[i].lv > queries[i].lu) {
            swap(queries[i].v, queries[i].u);
            swap(queries[i].lv, queries[i].lu);
            swap(queries[i].rv, queries[i].ru);
        }
        queries[i].b = gr[queries[i].v];
    }
    sort(queries, queries + q, cmp);
    int lv = 0, rv = 0, lu = 0, ru = 0;
    li fir = 0, sec = 0;
    int hs = 0;
    for(int i = 0; i < q; i++) {
        int qlv = queries[i].lv;
        int qrv = queries[i].rv;
        int qlu = queries[i].lu;
        int qru = queries[i].ru;
        fir += abs(lv - qlv) + abs(rv - qrv);
        sec += abs(lu - qlu) + abs(ru - qru);
        if(queries[i].b < cur) {
            while(rv < qrv) insert(rv++);
            while(lv > qlv) insert(--lv);
            while(rv > qrv) erase(--rv);
            while(lv < qlv) erase(lv++);
        } else {
            while(rv > lv) erase(--rv);
            lv = qlv;
            rv = lv;
            while(rv < qrv) insert(rv++);
        }
        while(ru < qru) insert(ru++);
        while(lu > qlu) insert(--lu);
        while(ru > qru) erase(--ru);
        while(lu < qlu) erase(lu++);
        ans[queries[i].ind] = get_mode();
    }
    for(int i = 0; i < q; i++) cout << ans[i] << endl;
}

mt19937 rnd(1);

int main() {
    ios::sync_with_stdio(0);
    cin.tie(0);
    // freopen("input.txt", "r", stdin);
    solve();
}
1768
A
Greatest Convex
You are given an integer $k$. Find the largest integer $x$, where $1 \le x < k$, such that $x! + (x - 1)!^\dagger$ is a multiple of $^\ddagger$ $k$, or determine that no such $x$ exists. $^\dagger$ $y!$ denotes the factorial of $y$, which is defined recursively as $y! = y \cdot (y-1)!$ for $y \geq 1$ with the base case of $0! = 1$. For example, $5! = 5 \cdot 4 \cdot 3 \cdot 2 \cdot 1 \cdot 0! = 120$. $^\ddagger$ If $a$ and $b$ are integers, then $a$ is a multiple of $b$ if there exists an integer $c$ such that $a = b \cdot c$. For example, $10$ is a multiple of $5$ but $9$ is not a multiple of $6$.
Is $x = k - 1$ always suitable? The answer is yes, as $x! + (x - 1)! = (x - 1)! \times (x + 1) = ((k - 1) - 1)! \times ((k - 1) + 1) = (k - 2)! \times (k)$, which is clearly a multiple of $k$. Therefore, $x = k - 1$ is the answer. Time complexity: $\mathcal{O}(1)$
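The identity and the divisibility claim can be verified numerically (a quick check, not part of the editorial):

```python
import math

# x! + (x-1)! = (x-1)! * (x+1); with x = k - 1 this equals (k-2)! * k
for k in range(2, 200):
    x = k - 1
    assert math.factorial(x) + math.factorial(x - 1) == math.factorial(x - 1) * (x + 1)
    assert (math.factorial(x) + math.factorial(x - 1)) % k == 0
```

For example, $k = 7$ gives $6! + 5! = 840 = 7 \cdot 120$.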
[ "greedy", "math", "number theory" ]
800
answer = [print(int(input()) - 1) for testcase in range(int(input()))]
1768
B
Quick Sort
You are given a permutation$^\dagger$ $p$ of length $n$ and a positive integer $k \le n$. In one operation, you: - Choose $k$ \textbf{distinct} elements $p_{i_1}, p_{i_2}, \ldots, p_{i_k}$. - Remove them and then add them sorted in increasing order to the end of the permutation. For example, if $p = [2,5,1,3,4]$ and $k = 2$ and you choose $5$ and $3$ as the elements for the operation, then $[2, \textcolor{red}{5}, 1, \textcolor{red}{3}, 4] \rightarrow [2, 1, 4, \textcolor{red}{3},\textcolor{red}{5}]$. Find the minimum number of operations needed to sort the permutation in increasing order. It can be proven that it is always possible to do so. $^\dagger$ A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
Suppose we can perform the operations so that $x$ elements never participate in any operation. Then these $x$ elements end up at the beginning of the final array, in the order in which they appeared in the initial array. Since the final array must be sorted, these untouched elements must form the sequence $1, 2, 3, \ldots$, and since $x$ must be maximized to minimize the number of operations, we need to find the longest subsequence of $p$ of the form $1, 2, 3, \ldots$. Let this subsequence have $w$ numbers; then the answer is $\lceil\frac{n - w}{k}\rceil=\lfloor \frac{n - w + k - 1}{k} \rfloor$.
[ "greedy", "math" ]
900
#include <bits/stdc++.h> #define all(x) (x).begin(), (x).end() #define allr(x) (x).rbegin(), (x).rend() #define gsize(x) (int)((x).size()) const char nl = '\n'; typedef long long ll; typedef long double ld; using namespace std; void solve() { int n, k; cin >> n >> k; vector<int> p(n); for (int i = 0; i < n; i++) cin >> p[i]; int c_v = 1; for (int i = 0; i < n; i++) { if (p[i] == c_v) c_v++; } cout << (n - c_v + k) / k << nl; } int main() { ios::sync_with_stdio(0); cin.tie(0); int T; cin >> T; while (T--) solve(); }
1768
C
Elemental Decompress
You are given an array $a$ of $n$ integers. Find two permutations$^\dagger$ $p$ and $q$ of length $n$ such that $\max(p_i,q_i)=a_i$ for all $1 \leq i \leq n$ or report that such $p$ and $q$ do not exist. $^\dagger$ A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
Two cases produce no answer: 1. Some element appears more than twice in $a$. 2. After sorting, there is some index $i$ such that $a[i] < i$ ($1$-indexed). For the second case: if $a[i] < i$, then both $p[i] < i$ and $q[i] < i$ must hold, and the same is true for the first $i - 1$ indices, so the two permutations together would need $(i - 1) \times 2 + 2 = 2i$ numbers smaller than $i$ — a contradiction, since only $2(i - 1)$ such numbers exist across $p$ and $q$. Otherwise, a solution always exists. One constructive method attaches each element of $a$ to $p$ or $q$: Traverse $a$ from the largest element to the smallest; if that number hasn't appeared in $p$ yet, attach it to $p$, otherwise attach it to $q$. Then traverse from the largest to the smallest again: if the element was attached to $p$, find the largest number that has not yet appeared in $q$ and attach it to $q$, and vice versa. A naive implementation takes $O(n^2)$; sorting the elements of $a$ as pairs <element, index> reduces it to $O(n \log n)$. Time complexity: $\mathcal{O}(n \log(n))$ There is actually an $O(n)$ solution to this problem. If some value occurs at $k \geq 3$ positions $i_1, i_2, \ldots, i_k$, i.e. $a[i_1] = a[i_2] = \ldots = a[i_k]$, then there is no solution: since $a[i] = max(p[i], q[i])$, we need $p[i] = a[i]$ or/and $q[i] = a[i]$, and once $p[i_1] = a[i_1]$ and $q[i_2] = a[i_2]$ are used, no third copy of that value is available because $p[]$ and $q[]$ are two permutations (each number must appear exactly once).
Since we have the $max()$ function, we need to use the larger values first. If we iterated from the smallest value up, there would be scenarios where every remaining position $i$ gives $max(p[i], q[i]) \geq a[i]$ because not enough smaller integers remain. So for each $x = n \rightarrow 1$ (iterating from the largest value to the smallest), for each position $i$ with $a_i = x$ we assign $p[i] := a[i]$ if $a[i]$ has not yet appeared in permutation $p[]$, and otherwise assign $q[i] := a[i]$. We then fill in the remaining integers that were not used, from the largest to the smallest. We use $vp$ as the largest integer not yet used in permutation $p[1..n]$ and $vq$ as the largest integer not yet used in permutation $q[1..n]$. Then for each value $x = n \rightarrow 1$, we assign the still-unset $p[i], q[i]$ accordingly. Finally, check whether the permutations $p[]$ and $q[]$ satisfy the condition: if not, output "NO"; otherwise output "YES" and the two permutations $p[1..n]$ and $q[1..n]$. Just iterate through each element as normal. There is also a way to skip testing whether $max(p[i], q[i]) = a[i]$ holds, but its proof is a bit harder to understand, so the verification is kept in the code below.
[ "constructive algorithms", "greedy", "implementation", "sortings" ]
1300
#include <iostream> #include <vector> using namespace std; int query() { /// Input number of element int n; cin >> n; /// Input the array a[1..n] vector<int> a(n + 1); for (int i = 1; i <= n; ++i) cin >> a[i]; /// Storing position of each a[i] vector<vector<int> > b(n + 1); for (int i = 1; i <= n; ++i) { b[a[i]].push_back(i); /// max(p[i], q[i]) = a[i] so either p[i] = a[i] or/and q[i] = a[i] /// so if the number appear the third time or more, then "NO" if (b[a[i]].size() >= 3) { cout << "NO\n"; return 0; } } /// Initialize permutation p[1..n], q[1..n] vector<int> p(n + 1, -1), q(n + 1, -1); /// Initialize permutation position, fp[p[i]] = i, fq[q[i]] = i vector<int> fp(n + 1, -1), fq(n + 1, -1); for (int x = n; x >= 1; --x) { for (int i : b[x]) { /// Because of max(), we must save up the larger value /// So we assign p[i] or q[i] by x, one by one from x large -> small if (fp[x] == -1) p[fp[x] = i] = x; else if (fq[x] == -1) q[fq[x] = i] = x; } } for (int x = n, vp = n, vq = n; x >= 1; --x) { for (int i : b[x]) { /// Assign the remaining integers while (fp[vp] != -1) --vp; while (fq[vq] != -1) --vq; if (p[i] == -1 && vp > 0) p[fp[vp] = i] = vp; if (q[i] == -1 && vq > 0) q[fq[vq] = i] = vq; } } for (int i = 1; i <= n; ++i) { if (max(p[i], q[i]) != a[i]) { /// Statement condition is not satisfied cout << "NO\n"; return 0; } } /// Output the answer cout << "YES\n"; for (int i = 1; i <= n; ++i) cout << p[i] << " "; cout << "\n"; for (int i = 1; i <= n; ++i) cout << q[i] << " "; cout << "\n"; return 0; } signed main() { ios::sync_with_stdio(NULL); cin.tie(NULL); int q = 1; /// If there is no multiquery cin >> q; /// then comment this while (q-->0) { /// For each query query(); } return 0; }
1768
D
Lucky Permutation
You are given a permutation$^\dagger$ $p$ of length $n$. In one operation, you can choose two indices $1 \le i < j \le n$ and swap $p_i$ with $p_j$. Find the minimum number of operations needed to have \textbf{exactly one} inversion$^\ddagger$ in the permutation. $^\dagger$ A permutation is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array). $^\ddagger$ The number of inversions of a permutation $p$ is the number of pairs of indices $(i, j)$ such that $1 \le i < j \le n$ and $p_i > p_j$.
For some fixed $n$, there are $n - 1$ permutations that have exactly $1$ inversion in them (the inversion is colored): $\color{red}{2}, \color{red}{1}, 3, 4, \ldots, n - 1, n$ $1, \color{red}{3}, \color{red}{2}, 4, \ldots, n - 1, n$ $1, 2, \color{red}{4}, \color{red}{3}, \ldots, n - 1, n$ $\ldots$ $1, 2, 3, 4, \ldots, \color{red}{n}, \color{red}{n - 1}$ Let's build a directed graph with $n$ vertices where the $i$-th vertex has an outgoing edge $i \rightarrow p_i$. It is easy to see that the graph is divided up into cycles of the form $i \rightarrow p_i \rightarrow p_{p_i} \rightarrow p_{p_{p_i}} \rightarrow \ldots \rightarrow i$. Let $cycles$ be the number of cycles in this graph. It is a well-known fact that $n - cycles$ is the minimum number of swaps needed to get the permutation $1, 2, 3, \ldots, n$ from our initial one (in other words, to sort it). Suppose we now want to get the $k$-th permutation from the list above. Let $x$ and $y$ be such that $p_x = k$ and $p_y = k + 1$. Let us remove the edges $x \rightarrow k$ and $y \rightarrow k + 1$ from the graph and instead add the edges $x \rightarrow k + 1$ and $y \rightarrow k$. Let $cycles'$ be the number of cycles in this new graph. The minimum number of swaps needed to get the $k$-th permutation in the list is equal to $n - cycles'$. It turns out that we can easily calculate $cycles'$ if we know $cycles$: $cycles' = cycles + 1$ if the vertices $k$ and $k + 1$ were in the same cycle in the initial graph, and $cycles' = cycles - 1$ otherwise. To quickly check whether two vertices $u$ and $v$ are in the same cycle, assign some id to each cycle (with a simple dfs or with a DSU) and then compare $u$'s cycle id with $v$'s cycle id. The answer is just the minimum possible value of $n - cycles'$ over all $1 \le k \le n - 1$. Time complexity: $\mathcal{O}(n)$. PS: you can also find $cycles'$ with data structures (for example, by maintaining a treap for each cycle).
[ "constructive algorithms", "dfs and similar", "graphs", "greedy" ]
1800
#include <bits/stdc++.h> #define all(x) (x).begin(), (x).end() #define allr(x) (x).rbegin(), (x).rend() #define gsize(x) (int)((x).size()) const char nl = '\n'; typedef long long ll; typedef long double ld; using namespace std; void solve() { int n; cin >> n; vector<int> p(n); for (int i = 0; i < n; i++) { cin >> p[i]; p[i]--; } int ind = 1, ans = 0; vector<int> comp(n, 0); for (int i = 0; i < n; i++) { if (comp[i]) continue; { int v = i; while (comp[v] == 0) { comp[v] = ind; v = p[v]; ans++; } ind++; ans--; } } for (int i = 0; i < n - 1; i++) { if (comp[i] == comp[i + 1]) { cout << ans - 1 << nl; return; } } cout << ans + 1 << nl; } int main() { ios::sync_with_stdio(0); cin.tie(0); int T; cin >> T; while (T--) solve(); }
1768
E
Partial Sorting
Consider a permutation$^\dagger$ $p$ of length $3n$. Each time you can do one of the following operations: - Sort the first $2n$ elements in increasing order. - Sort the last $2n$ elements in increasing order. We can show that every permutation can be made sorted in increasing order using only these operations. Let's call $f(p)$ the minimum number of these operations needed to make the permutation $p$ sorted in increasing order. Given $n$, find the sum of $f(p)$ over all $(3n)!$ permutations $p$ of size $3n$. Since the answer could be very large, output it modulo a prime $M$. $^\dagger$ A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
We need at most $3$ operations to sort any permutation: operation $1$, then operation $2$, then operation $1$ again. For $f(p) = 0$, there is only one case: the initially sorted permutation, so the count is $1$. For $f(p) \leq 1$: this happens when the first $n$ numbers or the last $n$ numbers are already in the right places. Each case fixes $n$ positions, so each gives $(2n)!$ permutations. Intersection: both cases share the $n$ middle elements, so $n!$ permutations appear in both. In total there are $2 \times (2n)! - n!$ such permutations. For $f(p) \leq 2$: this happens when the smallest $n$ elements' positions are all in the range $[1, 2n]$ (then operation $1$ followed by operation $2$ sorts $p$), or the largest $n$ elements' positions are all in the range $[n + 1, 3n]$. If the smallest $n$ elements are all in positions $1$ to $2n$, then: there are $C_{2n}^{n}$ ways to choose $n$ positions for these numbers; for each such choice, there are $n!$ ways to order the smallest $n$ numbers in them and $(2n)!$ ways to arrange the rest. The total number of valid permutations is $C_{2n}^{n} \times n! \times (2n)!$. If the largest $n$ elements are all in positions $n + 1$ to $3n$, the same calculation applies. Intersection: it appears when the smallest $n$ numbers are all in the range $[1, 2n]$ and the largest $n$ numbers are all in the range $[n + 1, 3n]$. Consider the $n$ "middle" values in the range $[n + 1, 2n]$, and suppose exactly $i$ of them appear among the first $n$ positions. Then: the first $n$ positions contain $n - i$ values from $[1, n]$, chosen in $C_{n}^{n - i}$ ways; the first $n$ positions contain $i$ values from $[n + 1, 2n]$, chosen in $C_{n}^{i}$ ways; the last $n$ positions contain $n$ values from $[n + 1, 3n]$, of which $i$ are already used in the first $n$ positions, so they are chosen in $C_{2n - i}^{n}$ ways. Each of the three blocks of $n$ positions can then be ordered internally in $n!$ ways (the middle block receives the remaining $n$ values). This contributes $C_{n}^{n - i} \times C_{n}^{i} \times C_{2n - i}^{n} \times n! \times n! \times n!$ permutations. Summing over all $i$ from $0$ to $n$, the number of permutations in the intersection equals $\sum_{i = 0}^{n} C_{n}^{n - i} \times C_{n}^{i} \times C_{2n - i}^{n} \times n! \times n! \times n!$. So the number of permutations with $f(p) \leq 2$ is $2 \times C_{2n}^{n} \times n! \times (2n)! - \sum_{i = 0}^{n} C_{n}^{n - i} \times C_{n}^{i} \times C_{2n - i}^{n} \times n! \times n! \times n!$. For $f(p) \leq 3$, every permutation qualifies, so the count is $(3n)!$. Time complexity: $\mathcal{O}(n)$
[ "combinatorics", "math", "number theory" ]
2300
#include <bits/stdc++.h> using namespace std; long long n, M; long long frac[3000005], inv[3000005]; long long powermod(long long a, long long b, long long m) { if (b == 0) return 1; unsigned long long k = powermod(a, b / 2, m); k = k * k; k %= m; if (b & 1) k = (k * a) % m; return k; } void Ready() { frac[0] = 1; inv[0] = 1; for (int i = 1; i <= 3000000; i++) { frac[i] = (frac[i - 1] * i) % M; } inv[3000000] = powermod(frac[3000000], M - 2, M); for (int i = 3000000; i > 0; i--) { inv[i - 1] = (inv[i] * i) % M; } } long long C(long long n, long long k) { return ((frac[n] * inv[k]) % M * inv[n - k]) % M; } int main() { cin >> n >> M; Ready(); long long ans[4]{}; // X = 0 ans[0] = 1; // X = 1 ans[1] = 2 * frac[2 * n] - frac[n] - ans[0] + M + M; ans[1] %= M; // X = 2 ans[2] = frac[2 * n]; ans[2] = ans[2] * C(2 *n, n) % M; ans[2] = ans[2] * frac[n] % M; ans[2] = ans[2] * 2 % M; for (int i = 0; i <= n; i++) { int sub = C(n, i); sub = sub * C(n, n - i) % M; sub = sub * C(2 * n - i, n) % M; sub = sub * frac[n] % M; sub = sub * frac[n] % M; sub = sub * frac[n] % M; ans[2] = (ans[2] - sub + M) % M; } ans[2] = (ans[2] - ans[1] + M) % M; ans[2] = (ans[2] - ans[0] + M) % M; // X = 3 ans[3] = frac[3 * n]; ans[3] = (ans[3] - ans[2] + M) % M; ans[3] = (ans[3] - ans[1] + M) % M; ans[3] = (ans[3] - ans[0] + M) % M; long long answer = ans[1] + 2 * ans[2] + 3 * ans[3]; answer %= M; cout << answer << endl; }
1768
F
Wonderful Jump
You are given an array of positive integers $a_1,a_2,\ldots,a_n$ of length $n$. In one operation you can jump from index $i$ to index $j$ ($1 \le i \le j \le n$) by paying $\min(a_i, a_{i + 1}, \ldots, a_j) \cdot (j - i)^2$ eris. For all $k$ from $1$ to $n$, find the minimum number of eris needed to get from index $1$ to index $k$.
There is a very easy $\mathcal{O}(n^2)$ dp solution; we will show one possible way to optimize it to $\mathcal{O}(n \cdot \sqrt{A})$, where $A$ is the maximum possible value of $a_i$. Let $dp_k$ be the minimum number of eris required to reach index $k$, with $dp_1 = 0$. Suppose we want to calculate $dp_j$ and we already know $dp_1, dp_2, \ldots, dp_{j - 1}$. Let's look at our cost function more closely. We can notice that it is definitely not optimal to use the transition $i \rightarrow j$ if $\min(a_i \ldots a_j) \cdot (j - i)^2 > A \cdot (j - i)$: it is then more optimal to perform $j - i$ jumps of length $1$ instead. Transforming this inequality, we get $j - i > \frac{A}{\min(a_i \ldots a_j)}$. So if $\min(a_i \ldots a_j)$ is quite large, we only need to look at a couple of $i$ close to $j$, and then do something else for the small values of $\min(a_i \ldots a_j)$. 1. $\min(a_i \ldots a_j) \ge \sqrt{A}$: To handle this case, we can just iterate over all $i$ from $\max(j - \sqrt{A}, 1)$ to $j - 1$, since the transition $i \rightarrow j$ could be optimal only if $j - i \le \frac{A}{\min(a_i \ldots a_j)} \le \sqrt{A}$. Time complexity: $\mathcal{O}(\sqrt{A})$. 2. $\min(a_i \ldots a_j) < \sqrt{A}$: Another useful fact is that if there exists an index $k$ such that $i < k < j$ and $a_k = \min(a_i \ldots a_j)$, the transition $i \rightarrow j$ also cannot be optimal, since $i \rightarrow k$ followed by $k \rightarrow j$ will cost less. Proof (note that $\min(a_i \ldots a_k) = \min(a_k \ldots a_j) = a_k$, since $a_k$ is the minimum of the whole range): $\min(a_i \ldots a_j) \cdot (j - i) ^ 2 > \min(a_i \ldots a_k) \cdot (k - i) ^ 2 + \min(a_k \ldots a_j) \cdot (j - k)^2$ $\iff a_k \cdot (j - i) ^ 2 > a_k \cdot (k - i) ^ 2 + a_k \cdot (j - k)^2$ $\iff (j - i) ^ 2 > (k - i) ^ 2 + (j - k)^2$, which always holds. This leaves us two subcases to handle. 2.1 $\min(a_i \ldots a_j) = a_i$: Just maintain the rightmost occurrence $< j$ of each value from $1$ to $\sqrt{A}$. Time complexity: $\mathcal{O}(\sqrt{A})$. 2.2 $\min(a_i \ldots a_j) = a_j$: Initially set $i$ to $j - 1$ and decrease it until $a_i \le a_j$ becomes true.
Time complexity: $\mathcal{O}(\sqrt{A})$ amortized. Total time complexity: $\mathcal{O}(n \cdot \sqrt{A})$
[ "dp", "greedy" ]
2900
#include <iostream> #include <vector> #include <chrono> #include <random> #include <cassert> std::mt19937 rng((int) std::chrono::steady_clock::now().time_since_epoch().count()); int main() { std::ios_base::sync_with_stdio(false); std::cin.tie(NULL); int n; std::cin >> n; std::vector<int> a(n); for(int i = 0; i < n; i++) { std::cin >> a[i]; } std::vector<long long> dp(n, 1e18); dp[0] = 0; for(int i = 0; i < n; i++) { int dist = n / a[i] + 1; // take from behind for(int j = i-1; j >= 0 && i-j <= dist; j--) { dp[i] = std::min(dp[i], dp[j] + (long long) a[i] * (i - j) * (i - j)); if(a[j] <= a[i]) break; } // propagate forward for(int j = i+1; j < n && j-i <= dist; j++) { dp[j] = std::min(dp[j], dp[i] + (long long) a[i] * (i - j) * (i - j)); if(a[j] <= a[i]) break; } std::cout << dp[i] << (i + 1 == n ? '\n' : ' '); } }
1770
A
Koxia and Whiteboards
Kiyora has $n$ whiteboards numbered from $1$ to $n$. Initially, the $i$-th whiteboard has the integer $a_i$ written on it. Koxia performs $m$ operations. The $j$-th operation is to choose one of the whiteboards and change the integer written on it to $b_j$. Find the maximum possible sum of integers written on the whiteboards after performing all $m$ operations.
Exactly $n$ items out of $a_1,\ldots,a_n,b_1,\ldots,b_m$ will remain on the whiteboards at the end. $b_m$ will always remain on a board at the end. Consider the case where $n=2$ and $m=2$. As we mentioned in hint 2, $b_2$ will always be written, but what about $b_1$? This problem can be solved naturally with a greedy algorithm: for $i = 1, 2, \dots, m$, we use $b_i$ to replace the minimal value among the current $a_1, a_2, \dots, a_n$. The time complexity is $O(nm)$ for each test case. Alternatively, we can first add $b_m$ to our final sum. From the remaining $(n+m-1)$ integers, we can freely pick $(n - 1)$ of them and add them to our final sum. This is because if we want a certain $a_i$ to remain on the board at the end, we simply do not touch it in the process; if we want a certain $b_i$ to remain on the board at the end, then on the $i^{th}$ operation we replace some $a_j$ that we do not want at the end by $b_i$. Using an efficient sorting algorithm gives us an $O((n + m) \log (n + m))$ solution, which is our intended solution.
[ "brute force", "greedy" ]
1000
#include <stdio.h> #include <bits/stdc++.h> using namespace std; #define rep(i,n) for (int i = 0; i < (n); ++i) #define Inf32 1000000001 #define Inf64 4000000000000000001 int main(){ int _t; cin>>_t; rep(_,_t){ int n,m; cin>>n>>m; vector<long long> a(n+m); rep(i,n+m)scanf("%lld",&a[i]); sort(a.begin(),a.end()-1); reverse(a.begin(),a.end()); long long ans = 0; rep(i,n)ans += a[i]; cout<<ans<<endl; } return 0; }
1770
B
Koxia and Permutation
Reve has two integers $n$ and $k$. Let $p$ be a permutation$^\dagger$ of length $n$. Let $c$ be an array of length $n - k + 1$ such that $$c_i = \max(p_i, \dots, p_{i+k-1}) + \min(p_i, \dots, p_{i+k-1}).$$ Let the cost of the permutation $p$ be the maximum element of $c$. Koxia wants you to construct a permutation with the minimum possible cost. $^\dagger$ A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
For $k = 1$, the cost is always $2n$ for any permutation. For $k \geq 2$, the minimal cost is always $n + 1$. When $k = 1$ every permutation has the same cost. When $k \geq 2$, the minimal cost will be at least $n+1$. This is because there will always be at least one segment containing the element $n$ in the permutation, contributing $n$ to the "max" part of the sum, and the "min" part will add at least $1$ to the sum. In fact, the cost $n+1$ is optimal. It can be achieved by ordering the numbers in the pattern $[n, 1, n - 1, 2, n - 2, 3, n - 3, 4, \dots]$. The time complexity is $O(n)$ for each test case. Other careful constructions should also get Accepted.
[ "constructive algorithms" ]
1,000
#include <iostream> #define MULTI int _T; cin >> _T; while(_T--) using namespace std; typedef long long ll; int n, k; int main () { ios::sync_with_stdio(0); cin.tie(0); MULTI { cin >> n >> k; int l = 1, r = n, _ = 1; while (l <= r) cout << ((_ ^= 1) ? l++ : r--) << ' '; cout << endl; } }
1770
C
Koxia and Number Theory
Joi has an array $a$ of $n$ \textbf{positive} integers. Koxia wants you to determine whether there exists a \textbf{positive} integer $x > 0$ such that $\gcd(a_i+x,a_j+x)=1$ for all $1 \leq i < j \leq n$. Here $\gcd(y, z)$ denotes the greatest common divisor (GCD) of integers $y$ and $z$.
If the $a_i$ are not pairwise distinct we get a trivial NO. If all $a_i$ are pairwise distinct, can you construct an example with $n = 4$ that gives the answer NO? Perhaps you are thinking about properties such as parity? Try to generalize the idea. Consider every prime. Consider the Chinese Remainder Theorem. How many primes should we check? Consider the Pigeonhole Principle. First, we should check whether the integers in $a$ are pairwise distinct, as $a_i + x \geq 2$ and $\gcd(t,t)=t$, which leads to a trivial NO. Given an integer $x$, let's define $b_i := a_i + x$. The condition "$\gcd(b_i,b_j)=1$ for $1\le i < j \le n$" is equivalent to "every prime $p$ divides at most one $b_i$". Given a prime $p$, how should we verify whether for every $x > 0$, $p$ divides at least two elements of $b$? A small but instructive sample is $a = [5, 6, 7, 8]$ with answer NO, because $\gcd(6 + x, 8 + x) \neq 1$ if $x \equiv 0 \pmod 2$, and $\gcd(5 + x, 7 + x) \neq 1$ if $x \equiv 1 \pmod 2$. That is, if we consider $[5, 6, 7, 8]$ modulo $2$, we obtain the multiset $\{1, 0, 1, 0\}$. Both $0$ and $1$ appear twice, so for any choice of $x$, exactly two integers in $b$ will be divisible by $2$. This idea can be extended to larger primes. For a given prime $p$, let $cnt_j$ be the multiplicity of $j$ in the multiset $[ a_1 \text{ mod } p, a_2 \text{ mod } p, \dots, a_n \text{ mod } p ]$. If $\min(\mathit{cnt}_0, \mathit{cnt}_1, \dots, \mathit{cnt}_{p-1}) \geq 2$, we output NO immediately. While there are many primes up to ${10}^{18}$, we only need to check the primes up to $\lfloor \frac{n}{2} \rfloor$. This is because $\min(\mathit{cnt}_0, \mathit{cnt}_1, \dots, \mathit{cnt}_{p-1}) \geq 2$ is impossible for greater primes by the Pigeonhole Principle. Since the number of primes up to $\lfloor \frac{n}{2} \rfloor$ is at most $O\left(\frac{n}{\log n} \right)$, the problem can be solved in time $O\left(\frac{n^2}{\log n} \right)$.
The reason $\min(\mathit{cnt}) \geq 2$ is essential: for a prime $p$, if $a_u \equiv a_v \pmod p$, then it is necessary to have $(x + a_u) \not\equiv 0 \pmod p$, because otherwise $\gcd(x + a_u, x + a_v)$ would be divisible by $p$. So actually, $\mathit{cnt}_i \geq 2$ means $x \not\equiv (p-i) \pmod p$. If $\min(\mathit{cnt}) < 2$ holds for all primes, then we can list the resulting congruence equations and use the Chinese Remainder Theorem to construct a proper $x$; if there exists a prime with $\min(\mathit{cnt}) \geq 2$, then every choice of $x$ leads to some pair of elements sharing the factor $p$.
[ "brute force", "chinese remainder theorem", "math", "number theory" ]
1700
#include <iostream> #include <algorithm> #define MULTI int _T; cin >> _T; while(_T--) using namespace std; typedef long long ll; const int N = 105; const int INF = 0x3f3f3f3f; template <typename T> bool chkmin (T &x, T y) {return y < x ? x = y, 1 : 0;} template <typename T> bool chkmax (T &x, T y) {return y > x ? x = y, 1 : 0;} int n; ll a[N]; int cnt[N]; int main () { ios::sync_with_stdio(0); cin.tie(0); MULTI { cin >> n; for (int i = 1;i <= n;++i) { cin >> a[i]; } int isDistinct = 1; sort(a + 1, a + n + 1); for (int i = 1;i <= n - 1;++i) { if (a[i] == a[i + 1]) isDistinct = 0; } if (isDistinct == 0) { cout << "NO" << endl; continue; } int CRT_able = 1; for (int mod = 2;mod <= n / 2;++mod) { fill(cnt, cnt + mod, 0); for (int i = 1;i <= n;++i) { cnt[a[i] % mod]++; } if (*min_element(cnt, cnt + mod) >= 2) CRT_able = 0; } cout << (CRT_able ? "YES" : "NO") << endl; } }
1770
D
Koxia and Game
Koxia and Mahiru are playing a game with three arrays $a$, $b$, and $c$ of length $n$. Each element of $a$, $b$ and $c$ is an integer between $1$ and $n$ inclusive. The game consists of $n$ rounds. In the $i$-th round, they perform the following moves: - Let $S$ be the multiset $\{a_i, b_i, c_i\}$. - Koxia removes one element from the multiset $S$ by her choice. - Mahiru chooses one integer from the two remaining in the multiset $S$. Let $d_i$ be the integer Mahiru chose in the $i$-th round. If $d$ is a permutation$^\dagger$, Koxia wins. Otherwise, Mahiru wins. Currently, only the arrays $a$ and $b$ have been chosen. As an avid supporter of Koxia, you want to choose an array $c$ such that Koxia will win. Count the number of such $c$, modulo $998\,244\,353$. Note that Koxia and Mahiru both play optimally. $^\dagger$ A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
If all of $a$, $b$ and $c$ are fixed, how do we determine who will win? If $a$ and $b$ are fixed, design an algorithm to check whether there is an array $c$ that makes Koxia win. If you can't solve the problem in Hint 2, try to think about how it relates to graph theory. Try to discuss the structure of components in the graph to count the number of valid $c$. Firstly, let's consider how an array $c$ could make Koxia win. Lemma 1. In each round, Koxia should remove an element of $S$ so that the remaining $2$ elements of $S$ are the same (i.e. Mahiru's choice actually determines nothing). In round $n$, if Koxia leaves two different choices for Mahiru, then Mahiru will be able to prevent $d$ from being a permutation. This means that if Koxia wins, there is only one choice for $d_n$. Now $(d_1, d_2, \dots, d_{n-1})$ has to be a permutation of a specific set of $n-1$ numbers. Applying the same argument to $d_{n-1}$ and so on, we conclude that every $d_i$ has only one choice if Koxia wins. Lemma 2. Let $p$ be an array of length $n$ where we can set $p_i$ to either $a_i$ or $b_i$. Koxia wins iff there exists a way to make $p$ a permutation. According to Lemma 1, if there is a way to make $p$ a permutation, we can just set $c_i = p_i$. Koxia can then force Mahiru to set $d_i = p_i$ every round, and Koxia will win. If it is impossible to make $p$ a permutation, Mahiru can pick either $a_i$ or $b_i$ (at least one of them is available) every round, and the resulting array $d$ is guaranteed not to be a permutation. First, we need an algorithm to determine whether there is a way to make $p$ a permutation. We can transform this into a graph problem where the pairs $(a_i, b_i)$ are edges in a graph with $n$ vertices. Then there is a way to make $p$ a permutation iff there is a way to assign a direction to every edge such that every vertex has exactly one edge leading into it. It is not hard to see that this is equivalent to the condition that in every connected component, the number of edges equals the number of vertices.
We can verify this with a Disjoint-Set Union or a graph traversal in $O(n \alpha(n))$ or $O(n)$ time. To solve the counting problem, we consider the structure of the connected components one by one. A component with $|V| = |E|$ can be viewed as a tree with one additional edge. This additional edge falls into two cases: The additional edge forms a cycle together with some of the other edges. There are $2$ choices for the cycle (clockwise and counterclockwise), and the directions of the other edges are then fixed (they point away from the cycle). The additional edge is a self-loop. Then the value of the corresponding $c_i$ determines nothing, so it can be any integer in $[1, n]$, and the directions of all other edges are fixed. Therefore, if there exists at least one $c$ that makes Koxia win, the answer is $2^{\textrm{cycle component cnt}} \cdot n^{\textrm{self-loop component cnt}}$. The time complexity is $O(n \alpha(n))$ or $O(n)$.
[ "constructive algorithms", "data structures", "dfs and similar", "dsu", "flows", "games", "graph matchings", "graphs", "implementation" ]
2,000
#include <bits/stdc++.h> using namespace std; const int N = 1e5 + 5; const int P = 998244353; int n, a[N], b[N]; vector <int> G[N]; bool vis[N]; int vertex, edge, self_loop; void dfs(int x) { if (vis[x]) return ; vis[x] = true; vertex++; for (auto y : G[x]) { edge++; dfs(y); if (y == x) { self_loop++; } } } void solve() { scanf("%d", &n); for (int i = 1; i <= n; i++) scanf("%d", &a[i]); for (int i = 1; i <= n; i++) scanf("%d", &b[i]); for (int i = 1; i <= n; i++) G[i].clear(); for (int i = 1; i <= n; i++) { G[a[i]].push_back(b[i]); G[b[i]].push_back(a[i]); } int ans = 1; for (int i = 1; i <= n; i++) vis[i] = false; for (int i = 1; i <= n; i++) { if (vis[i]) continue ; vertex = 0; edge = 0; self_loop = 0; dfs(i); if (edge != 2 * vertex) { ans = 0; } else if (self_loop) { ans = 1ll * ans * n % P; } else { ans = ans * 2 % P; } } printf("%d\n", ans); } int main() { int t; scanf("%d", &t); while (t--) { solve(); } return 0; }
1770
E
Koxia and Tree
Imi has an undirected tree with $n$ vertices where edges are numbered from $1$ to $n-1$. The $i$-th edge connects vertices $u_i$ and $v_i$. There are also $k$ butterflies on the tree. Initially, the $i$-th butterfly is on vertex $a_i$. All values of $a$ are pairwise distinct. Koxia plays a game as follows: - For $i = 1, 2, \dots, n - 1$, Koxia sets the direction of the $i$-th edge as $u_i \rightarrow v_i$ or $v_i \rightarrow u_i$ with equal probability. - For $i = 1, 2, \dots, n - 1$, if a butterfly is on the initial vertex of the $i$-th edge and there is no butterfly on the terminal vertex, then this butterfly flies to the terminal vertex. Note that the operations are performed sequentially in the order $1, 2, \dots, n - 1$, not simultaneously. - Koxia chooses two butterflies from the $k$ butterflies with equal probability from all possible $\frac{k(k-1)}{2}$ ways to select two butterflies, then she takes the distance$^\dagger$ between the two chosen vertices as her score. Now, Koxia wants you to find the expected value of her score, modulo $998\,244\,353^\ddagger$. $^\dagger$ The distance between two vertices on a tree is the number of edges on the (unique) simple path between them. $^\ddagger$ Formally, let $M = 998\,244\,353$. It can be shown that the answer can be expressed as an irreducible fraction $\frac{p}{q}$, where $p$ and $q$ are integers and $q \not \equiv 0 \pmod{M}$. Output the integer equal to $p \cdot q^{-1} \bmod M$. In other words, output such an integer $x$ that $0 \le x < M$ and $x \cdot q \equiv p \pmod{M}$.
Solve a classic problem first: find the sum of pairwise distances of $k$ chosen nodes in a tree. Now add the move operations while the directions of the edges are fixed, and find the sum of pairwise distances of the $k$ chosen nodes. If you can't solve the problem in Hint 2, consider why the writers make each edge crossed by a butterfly at most once. When the directions of the edges become random, how does maintaining $p_i$, the probability that node $i$ contains a butterfly, help you get the answer? At first sight, we naturally think of a classic problem: find the sum of pairwise distances of $k$ chosen nodes in a tree. For any edge, if there are $x$ chosen nodes on one side of it and $k - x$ on the other, then $x (k - x)$ pairs of nodes pass through this edge. Without loss of generality, let's make node $1$ the root, and define $\mathit{siz}_i$ as the number of chosen nodes in the subtree of $i$. By summing up $\mathit{siz}_{\mathit{son}} (k - \mathit{siz}_{\mathit{son}})$ over each edge $(\mathit{fa}, \mathit{son})$, we obtain this sum, which equals the expected distance of two nodes (chosen uniformly at random from the $k$ nodes) after dividing by $\binom{k}{2}$. Let's turn to Hint 2 then: add the move operations while the directions of the edges are fixed. Define $\mathit{siz}^0_i$ and $\mathit{siz}_i$ as the number of butterflies in the subtree of $i$ before any move operation and in real time, respectively. A very important observation is that, although butterflies are moving, we always have $|\mathit{siz}_{\mathit{son}} - \mathit{siz}^0_{\mathit{son}}| \leq 1$, because each edge is crossed by a butterfly at most once. This property allows us to handle the different values of $\mathit{siz}_{\mathit{son}}$ and sum up the answer in constant time per edge, provided the butterflies' positions are maintained correctly. 
When we additionally introduce random directions, if we define $p_i$ as the probability that node $i$ contains a butterfly, then the move operation along the edge from node $u$ to node $v$ is equivalent to setting $p_u = p_v = \frac{p_u+p_v}{2}$, which allows us to maintain $p$ in real time easily. Similarly, by considering the possible values of $\mathit{siz}_{\mathit{son}}$ (weighting each case by its probability instead of following specific moves), we get the final answer. The total time complexity is $O(n)$.
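The classic subproblem from Hint 1 can be sketched as follows. This is a minimal illustration (names are ours, not from the official solution), computing $\sum_{\text{edges}} \mathit{siz}_{\mathit{son}}(k - \mathit{siz}_{\mathit{son}})$ with one traversal:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the classic subproblem from Hint 1 (names ours): sum of pairwise
// distances of k marked nodes via the per-edge formula siz_son * (k - siz_son).
long long sumPairwiseDistances(int n, const vector<pair<int, int>>& edges,
                               const vector<int>& marked) {
    vector<vector<int>> adj(n + 1);
    for (auto& e : edges) { adj[e.first].push_back(e.second); adj[e.second].push_back(e.first); }
    long long k = marked.size(), ans = 0;
    vector<long long> siz(n + 1, 0);
    for (int x : marked) siz[x] = 1;
    // iterative DFS from root 1; `order` lists parents before their children
    vector<int> order, par(n + 1, 0);
    vector<bool> vis(n + 1, false);
    stack<int> st;
    st.push(1); vis[1] = true;
    while (!st.empty()) {
        int x = st.top(); st.pop(); order.push_back(x);
        for (int y : adj[x]) if (!vis[y]) { vis[y] = true; par[y] = x; st.push(y); }
    }
    for (int i = n - 1; i > 0; i--) {              // children before parents
        int x = order[i];
        siz[par[x]] += siz[x];
        ans += siz[x] * (k - siz[x]);              // contribution of edge (par[x], x)
    }
    return ans;
}
```

On a star with center $1$ and marked leaves $2, 3, 4$, each of the three edges contributes $1 \cdot 2$, giving $6$.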
[ "combinatorics", "dfs and similar", "dp", "dsu", "math", "probabilities", "trees" ]
2,400
#include <iostream> #include <vector> using namespace std; typedef long long ll; const int N = 3e5 + 5; const int mod = 998244353; const int inv2 = 499122177; ll qpow (ll n, ll m) { ll ret = 1; while (m) { if (m & 1) ret = ret * n % mod; n = n * n % mod; m >>= 1; } return ret; } ll getinv (ll a) { return qpow(a, mod - 2); } int n, k; int a[N]; int u[N], v[N]; vector <int> e[N]; int fa[N]; ll p[N], sum[N]; void dfs (int u, int f) { sum[u] = p[u]; for (int v : e[u]) if (v != f) { dfs(v, u); fa[v] = u; sum[u] += sum[v]; } } int main () { ios::sync_with_stdio(0); cin.tie(0); cin >> n >> k; for (int i = 1;i <= k;++i) { cin >> a[i]; p[a[i]] = 1; } for (int i = 1;i <= n - 1;++i) { cin >> u[i] >> v[i]; e[u[i]].push_back(v[i]); e[v[i]].push_back(u[i]); } dfs(1, -1); ll ans = 0; for (int i = 1;i <= n - 1;++i) { if (fa[u[i]] == v[i]) swap(u[i], v[i]); ll puv = p[u[i]] * (1 - p[v[i]] + mod) % mod; ll pvu = p[v[i]] * (1 - p[u[i]] + mod) % mod; ll delta = 0; delta -= puv * sum[v[i]] % mod * (k - sum[v[i]]) % mod; delta -= pvu * sum[v[i]] % mod * (k - sum[v[i]]) % mod; delta += puv * (sum[v[i]] + 1) % mod * (k - sum[v[i]] - 1) % mod; delta += pvu * (sum[v[i]] - 1) % mod * (k - sum[v[i]] + 1) % mod; ans = (ans + sum[v[i]] * (k - sum[v[i]]) + delta * inv2) % mod; ans = (ans % mod + mod) % mod; p[u[i]] = p[v[i]] = 1ll * (p[u[i]] + p[v[i]]) * inv2 % mod; } cout << ans * getinv(1ll * k * (k - 1) / 2 % mod) % mod << endl; }
1770
F
Koxia and Sequence
Mari has three integers $n$, $x$, and $y$. Call an array $a$ of $n$ \textbf{non-negative} integers good if it satisfies the following conditions: - $a_1+a_2+\ldots+a_n=x$, and - $a_1 \, | \, a_2 \, | \, \ldots \, | \, a_n=y$, where $|$ denotes the bitwise OR operation. The score of a good array is the value of $a_1 \oplus a_2 \oplus \ldots \oplus a_n$, where $\oplus$ denotes the bitwise XOR operation. Koxia wants you to find the total bitwise XOR of the scores of all good arrays. If there are no good arrays, output $0$ instead.
From symmetry, for any non-negative integer $t$, the number of good sequences with $a_1=t$, the number of good sequences with $a_2=t$, and so on, are all equal. It is useful to consider the contribution of each bit to the answer independently. Since XOR-ing the same value twice cancels it, computing the contributions to the answer amounts to counting modulo $2$. It is difficult to count the sequences whose total OR is exactly $y$, but it is relatively easy to count those whose total OR is a subset of $y$. Hence, we can use the inclusion-exclusion principle. Lucas's theorem and Kummer's theorem are useful; in particular, they yield an equivalence condition expressed with OR. "There are $a+b$ white balls. Over every pair $(c,d)$ of non-negative integers satisfying $c+d=n$, find the sum of the number of ways to choose $c$ balls from the first $a$ balls and $d$ balls from the remaining $b$ balls." The answer to this problem is $\binom{a+b}{n}$: summing over every pair $(c,d)$ of non-negative integers with $c+d=n$ simply enumerates the ways of choosing $n$ balls according to how many are taken from the first $a$ balls and from the remaining $b$ balls. This result is called Vandermonde's identity. Let $f(i,t)$ be the number of good sequences such that $a_i=t$. Since $f(1,t)=f(2,t)=\dots=f(n,t)$, if $n$ is even the answer is $0$. Otherwise, the answer is the total XOR of all $t$ such that the number of good sequences with $a_1=t$ is odd. We can consider each bit independently, so we can rewrite the problem as: for each $i$, find, modulo $2$, the number of good sequences such that the $i$-th bit of $a_1$ is $1$. Let $g(y')$ be this count when the OR condition is relaxed to "the total OR is a subset of $y'$" (where "$u$ is a subset of $v$" means $u \mid v = v$). We can prove by induction that the answer to the original problem is the total XOR of $g(y')$ over all $y'$ that are subsets of the given $y$. So the goal is, for each $i$, to find modulo $2$ the number of $y'$ such that $y'$ is a subset of $y$ and the $i$-th bit of $g(y')$ is $1$. 
We can prove with Lucas's theorem or Kummer's theorem ($p=2$) that "$\binom{a}{b} \bmod 2 = 1$" is equivalent to "$b$ is a subset of $a$". The number of sequences of length $n$ with total sum $x$ and total OR a subset of $y$, modulo $2$, is equal to $\sum_{t_1+\ldots+t_n=x} \prod_{i} \binom{y}{t_i}$: if some $t_i$ is not a subset of $y$, then $\binom{y}{t_i}$ is $0$ modulo $2$ and the product vanishes as well. By Vandermonde's identity, this value equals $\binom{ny}{x}$. In a similar way, we can rewrite the problem as: for each $i$, find, modulo $2$, the number of $y'$ such that $y'$ is a subset of $y$, the $i$-th bit of $y'$ is $1$, and $x-2^i$ is a subset of $ny'-2^i$ (i.e. $\binom{ny'-2^i}{x-2^i} \bmod 2 = 1$). With an $O(1)$ check for each pair of $i$ and $y'$, the whole problem is solved in $O(y \log y)$.
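The parity criterion used above, "$\binom{a}{b}$ is odd iff $b$ is a subset of $a$", can be sketched and cross-checked against Pascal's triangle. This is an illustration only (names ours), not part of the official solution:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Lucas/Kummer at p = 2: binom(a, b) is odd iff b is a submask of a.
bool binomOddBySubset(long long a, long long b) {
    return a >= 0 && b >= 0 && (a & b) == b;
}

// Brute-force cross-check via Pascal's triangle modulo 2 (illustration only).
int binomMod2Brute(int a, int b) {
    if (b > a || b < 0) return 0;
    vector<vector<int>> C(a + 1, vector<int>(a + 1, 0));
    for (int i = 0; i <= a; i++) {
        C[i][0] = 1;
        for (int j = 1; j <= i; j++) C[i][j] = (C[i - 1][j - 1] + C[i - 1][j]) % 2;
    }
    return C[a][b];
}
```

For instance $\binom{10}{8} = 45$ is odd, and indeed $8$ is a submask of $10$ ($1000_2 \subseteq 1010_2$).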
[ "bitmasks", "combinatorics", "dp", "math", "number theory" ]
3,100
#include <bits/stdc++.h> using namespace std; #define int long long #define ll long long #define ii pair<ll,ll> #define iii pair<ii,ll> #define fi first #define se second #define endl '\n' #define debug(x) cout << #x << ": " << x << endl #define pub push_back #define pob pop_back #define puf push_front #define pof pop_front #define lb lower_bound #define ub upper_bound #define rep(x,start,end) for(int x=(start)-((start)>(end));x!=(end)-((start)>(end));((start)<(end)?x++:x--)) #define all(x) (x).begin(),(x).end() #define sz(x) (int)(x).size() mt19937 rng(chrono::system_clock::now().time_since_epoch().count()); int n,a,b; bool isSub(int i,int j){ if (i<0 || j<0) return false; return (j&i)==i; } signed main(){ ios::sync_with_stdio(0); cin.tie(0); cout.tie(0); cin.exceptions(ios::badbit | ios::failbit); cin>>n>>a>>b; int ans=0; for (int sub=b;sub;sub=(sub-1)&b) rep(bit,0,20) if (sub&(1<<bit)){ if (isSub(a-(1<<bit),n*sub-(1<<bit))){ ans^=(1<<bit); } } cout<<ans*(n%2)<<endl; }
1770
G
Koxia and Bracket
Chiyuu has a bracket sequence$^\dagger$ $s$ of length $n$. Let $k$ be the minimum number of characters that Chiyuu has to remove from $s$ to make $s$ balanced$^\ddagger$. Now, Koxia wants you to count the number of ways to remove $k$ characters from $s$ so that $s$ becomes balanced, modulo $998\,244\,353$. Note that two ways of removing characters are considered distinct if and only if the set of indices removed is different. $^\dagger$ A bracket sequence is a string containing only the characters "(" and ")". $^\ddagger$ A bracket sequence is called balanced if one can turn it into a valid math expression by adding characters + and 1. For example, sequences (())(), (), (()(())) and the empty string are balanced, while )(, ((), and (()))( are not.
What special properties does the deleted bracket sequence have? Try to solve this with an $O(n^2)$ DP. If there is no balance requirement, can multiple brackets be processed quickly with one operation? Can you combine the last idea with divide-and-conquer? Let us consider what properties the removed bracket subsequence has. First, it must be a subsequence of the form ))...)((....(. The proof is simple: if a deleted ) lies to the right of a deleted (, then we can keep both of them in $s$ without breaking the balance of the remaining sequence. This property means that we can divide the string into $2$ parts: we only delete ) from the first part and only delete ( from the second part. Now let us find the dividing point between the two parts. Consider the prefix sums of the bracket sequence in which each ( is replaced by 1 and each ) is replaced by -1. We call a position special if and only if the prefix sum at this position is less than the minimum over all earlier positions. It is easy to see that whenever a special position occurs, we must remove one additional ) before this position to make the bracket sequence satisfy the condition again. From this idea, we find that only the occurrences of ) before the farthest special position may be deleted, so we can use this position as the dividing point. We now solve two separate problems. Moreover, we can turn the problem of deleting only ( into the one of deleting only ). For example, if we are only allowed to delete ( from (()((()()), it is equivalent to counting the ways to delete only ) from (()()))()). For the part where only ) is deleted, the sufficient condition for the result to be a balanced bracket sequence is that, after the operation, every prefix sum is greater than 0. 
Following the above ideas, let us define the state $dp_{i,j}$: after removing the brackets required by the special positions, the number of ways to delete $j$ $(j \geq 0)$ additional occurrences of ) from the prefix of the string up to the $i$-th occurrence of ). $\begin{equation} dp_{i,j} = \begin{cases} dp_{i-1,j}+dp_{i-1,j-1}, & \text{if } i \text{ is not special}; \\ dp_{i-1,j}+dp_{i-1,j+1}, & \text{if } i \text{ is special}. \end{cases} \end{equation}$ Multiplying the values $dp_{end,0}$ obtained for the two parts of the string gives the answer. The time complexity is $O(n^2)$; optimized implementations run in about 9 seconds, which is not enough to pass. Let's try to optimize the transitions when there are no special positions. For a state $dp_{i,j}$, after processing $k$ further occurrences of ), the transitions are as follows: $dp_{i+k,j}=\sum_{l=0}^k \binom{k}{l} \times dp_{i,j-l}$ This transition equation is a polynomial convolution, so we can compute it by NTT in $O(n \log n)$ per operation; however, due to the special positions, the worst-case global complexity of this approach is $O(n^2 \log n)$. Consider how it can be combined with the $O(n^2)$ solution. For a state $dp_{i,j}$ whose contribution to $dp_{i+k}$ we want to compute, if $j \geq k$ is satisfied, then the transitions are not affected by the special positions at all. Based on this idea, we can adopt a mixed solution with periodic rebuilding: fix a period $B$; within one round of the period, use the $O(n^2)$ DP to handle the part with $j \le B$, while for the part with $j>B$, compute the contribution by NTT once per period. The time complexity $O(\frac{n^2}{B}+B\cdot n \log n)$ can be optimized to $O(n\sqrt{n \log n})$ by choosing $B$ appropriately. 
Although the time complexity is still high, given the low constant factor of the $O(n^2)$ solution, a decently-optimized implementation is able to get AC. Now consider combining the idea of extracting the $j \geq k$ part for NTT with divide-and-conquer. Suppose the interval to be processed is $(l,r)$ and the DP polynomial passed in is $s$. We proceed as follows: Count the number of special positions $num$ in the interval $(l,r)$, extract the part of the polynomial $s$ corresponding to the states $j \geq num$, and convolve it with the current interval alone. Pass the part of $s$ corresponding to the states $j < num$ into the interval $(l,mid)$, and then pass the result into the interval $(mid+1,r)$ to continue. Add the polynomials obtained in the two steps above and return the result. How do we calculate the time complexity of these operations? Let's analyze the polynomials passed into the left and right intervals separately. When passing into the left interval $(l,mid)$, the size of the polynomial for the NTT operation is the number of special positions in $(l,r)$ minus the number of special positions in $(l,mid)$, i.e., the number of special positions in the right interval $(mid+1,r)$, which does not exceed the length of $(mid+1,r)$. When passing into the right interval $(mid+1,r)$, the size of the polynomial does not exceed the length of the left interval $(l,mid)$. Also, the length of the combinatorial polynomial multiplied with $s$ is the interval length plus $1$. In summary, the sizes of the two polynomials for the NTT operations in the interval $(l,r)$ do not exceed the interval length plus $1$. Thus the time complexity of this solution is that of divide-and-conquer combined with NTT, i.e. $O(n \log^2 n)$.
[ "divide and conquer", "fft", "math" ]
3,400
#include <bits/stdc++.h> #include <ext/pb_ds/assoc_container.hpp> #include <ext/pb_ds/tree_policy.hpp> #include <ext/rope> using namespace std; using namespace __gnu_pbds; using namespace __gnu_cxx; #define int long long #define ll long long #define ii pair<ll,ll> #define iii pair<ii,ll> #define fi first #define se second #define endl '\n' #define debug(x) cout << #x << ": " << x << endl #define pub push_back #define pob pop_back #define puf push_front #define pof pop_front #define lb lower_bound #define ub upper_bound #define rep(x,start,end) for(auto x=(start)-((start)>(end));x!=(end)-((start)>(end));((start)<(end)?x++:x--)) #define all(x) (x).begin(),(x).end() #define sz(x) (int)(x).size() #define indexed_set tree<ll,null_type,less<ll>,rb_tree_tag,tree_order_statistics_node_update> //change less to less_equal for non distinct pbds, but erase will bug mt19937 rng(chrono::system_clock::now().time_since_epoch().count()); const int MOD=998244353; ll qexp(ll b,ll p,int m){ ll res=1; while (p){ if (p&1) res=(res*b)%m; b=(b*b)%m; p>>=1; } return res; } ll inv(ll i){ return qexp(i,MOD-2,MOD); } ll fix(ll i){ i%=MOD; if (i<0) i+=MOD; return i; } ll fac[1000005]; ll ifac[1000005]; ll nCk(int i,int j){ if (i<j) return 0; return fac[i]*ifac[j]%MOD*ifac[i-j]%MOD; } //https://github.com/kth-competitive-programming/kactl/blob/main/content/numerical/NumberTheoreticTransform.h const ll mod = (119 << 23) + 1, root = 62; // = 998244353 // For p < 2^30 there is also e.g. 5 << 25, 7 << 26, 479 << 21 // and 483 << 21 (same root). The last two are > 10^9. 
typedef vector<int> vi; typedef vector<ll> vl; void ntt(vl &a) { int n = sz(a), L = 31 - __builtin_clz(n); static vl rt(2, 1); for (static int k = 2, s = 2; k < n; k *= 2, s++) { rt.resize(n); ll z[] = {1, qexp(root, mod >> s, mod)}; rep(i,k,2*k) rt[i] = rt[i / 2] * z[i & 1] % mod; } vi rev(n); rep(i,0,n) rev[i] = (rev[i / 2] | (i & 1) << L) / 2; rep(i,0,n) if (i < rev[i]) swap(a[i], a[rev[i]]); for (int k = 1; k < n; k *= 2) for (int i = 0; i < n; i += 2 * k) rep(j,0,k) { ll z = rt[j + k] * a[i + j + k] % mod, &ai = a[i + j]; a[i + j + k] = ai - z + (z > ai ? mod : 0); ai += (ai + z >= mod ? z - mod : z); } } vl conv(const vl &a, const vl &b) { if (a.empty() || b.empty()) return {}; int s = sz(a) + sz(b) - 1, B = 32 - __builtin_clz(s), n = 1 << B; int inv = qexp(n, mod - 2, mod); vl L(a), R(b), out(n); L.resize(n), R.resize(n); ntt(L), ntt(R); rep(i,0,n) out[-i & (n - 1)] = (ll)L[i] * R[i] % mod * inv % mod; ntt(out); return {out.begin(), out.begin() + s}; } vector<int> v; vector<int> solve(int l,int r,vector<int> poly){ if (poly.empty()) return poly; if (l==r){ poly=conv(poly,{1,1}); poly.erase(poly.begin(),poly.begin()+v[l]); return poly; } int m=l+r>>1; int num=0; rep(x,l,r+1) num+=v[x]; num=min(num,sz(poly)); vector<int> small(poly.begin(),poly.begin()+num); poly.erase(poly.begin(),poly.begin()+num); vector<int> mul; rep(x,0,r-l+2) mul.pub(nCk(r-l+1,x)); poly=conv(poly,mul); small=solve(m+1,r,solve(l,m,small)); poly.resize(max(sz(poly),sz(small))); rep(x,0,sz(small)) poly[x]=(poly[x]+small[x])%MOD; return poly; } int solve(string s){ if (s=="") return 1; v.clear(); int mn=0,curr=0; for (auto it:s){ if (it=='(') curr++; else{ curr--; if (curr<mn){ mn=curr; v.pub(1); } else{ v.pub(0); } } } return solve(0,sz(v)-1,{1})[0]; } int n; string s; int pref[500005]; signed main(){ ios::sync_with_stdio(0); cin.tie(0); cout.tie(0); cin.exceptions(ios::badbit | ios::failbit); fac[0]=1; rep(x,1,1000005) fac[x]=fac[x-1]*x%MOD; ifac[1000004]=inv(fac[1000004]); 
rep(x,1000005,1) ifac[x-1]=ifac[x]*x%MOD; cin>>s; n=sz(s); pref[0]=0; rep(x,0,n) pref[x+1]=pref[x]+(s[x]=='('?1:-1); int pos=min_element(pref,pref+n+1)-pref; string a=s.substr(0,pos),b=s.substr(pos,n-pos); reverse(all(b)); for (auto &it:b) it^=1; cout<<solve(a)*solve(b)%MOD<<endl; }
1770
H
Koxia, Mahiru and Winter Festival
\begin{quote} Wow, what a big face! \hfill {{\small Kagura Mahiru}} \end{quote} Koxia and Mahiru are enjoying the Winter Festival. The streets of the Winter Festival can be represented as a $n \times n$ undirected grid graph. Formally, the set of vertices is $\{(i,j) \; | \; 1 \leq i,j\leq n \}$ and two vertices $(i_1,j_1)$ and $(i_2,j_2)$ are connected by an edge if and only if $|i_1-i_2|+|j_1-j_2|=1$. \begin{center} {\small A network with size $n = 3$.} \end{center} Koxia and Mahiru are planning to visit The Winter Festival by traversing $2n$ routes. Although routes are not planned yet, the endpoints of the routes are already planned as follows: - In the $i$-th route, they want to start from vertex $(1, i)$ and end at vertex $(n, p_i)$, where $p$ is a permutation of length $n$. - In the $(i+n)$-th route, they want to start from vertex $(i, 1)$ and end at vertex $(q_i, n)$, where $q$ is a permutation of length $n$. \begin{center} {\small A network with size $n = 3$, points to be connected are shown in the same color for $p = [3, 2, 1]$ and $q = [3, 1, 2]$.} \end{center} Your task is to find a routing scheme — $2n$ paths where each path connects the specified endpoints. Let's define the congestion of an edge as the number of times it is used (both directions combined) in the routing scheme. In order to ensure that Koxia and Mahiru won't get too bored because of traversing repeated edges, please find a routing scheme that \textbf{minimizes the maximum congestion among all edges}. \begin{center} {\small An example solution — the maximum congestion is $2$, which is optimal in this case.} \end{center}
In what scenario is the maximum congestion $1$? Assume you have a black box that can solve any problem instance of size $n-2$; use it to solve a problem instance of size $n$. This is a special case of a problem called congestion minimization. While the general problem is NP-hard, for this special structure it can be solved efficiently. The only case where the maximum congestion is $1$ is when $p = q = [1,2,\dots,n]$. This can be proved by a pigeonhole argument: if there exists some $p_i \neq i$ or $q_i \neq i$, the total length of the $2n$ paths will be strictly greater than the number of edges, which means at least one edge has to be used more than once. Our goal now is to construct a routing scheme with maximum congestion $2$. We will show that this is always doable for any input. We first give a pictorial sketch that presents the idea, and fill in the details later. We also provide a python script that visualizes the output to help you debug; you can find it at the end of this section. The solution is based on induction. The base cases $n=0$ and $n=1$ are trivial. Now assume we can solve any problem instance of size up to $k-2$. We will treat that as a black box to solve a problem instance of size $k$. Given any problem instance of size $k$, first we route the following $4$ demand pairs using only the outer edges: the top-bottom pair that starts at $(1, 1)$, using the left and bottom edges; the top-bottom pair that starts at $(1, k)$, using the right and bottom edges; the left-right pair that starts at $(1, 1)$, using the top and right edges; the left-right pair that ends at $(1,k)$, using the left and top edges. However, if this is the same pair as above, then we route another arbitrary left-right pair using the left, top and right edges. As of now, there are $k-2$ top-bottom demands and $k-2$ left-right demands remaining to be routed. 
We simply connect their starting and ending points one step closer to the center, retaining their relative order. In this way, we reduce the problem to an instance of size $k-2$, which we already know how to solve.
[ "constructive algorithms" ]
3,500
#include <bits/stdc++.h> #define FOR(i,s,e) for (int i=(s); i<(e); i++) #define FOE(i,s,e) for (int i=(s); i<=(e); i++) #define FOD(i,s,e) for (int i=(s)-1; i>=(e); i--) #define PB push_back using namespace std; struct Paths{ /* store paths in order */ vector<vector<pair<int, int>>> NS, EW; Paths(){ NS.clear(); EW.clear(); } }; Paths solve(vector<int> p, vector<int> q){ int n = p.size(); Paths Ret; Ret.NS.resize(n); Ret.EW.resize(n); // Base case if (n == 0) return Ret; if (n == 1){ Ret.NS[0].PB({1, 1}); Ret.EW[0].PB({1, 1}); return Ret; } // Route NS flow originating from (1, 1) and (1, n) using leftmost and rightmost edges FOE(i,1,n){ Ret.NS[0].PB({i, 1}); Ret.NS[n-1].PB({i, n}); } // Routing to final destination using bottom edges FOE(i,2,p[0]) Ret.NS[0].PB({n, i}); FOD(i,n,p[n-1]) Ret.NS[n-1].PB({n, i}); // Create p'[] for n-2 instance vector<int> p_new(0); FOE(i,1,n-2) p_new.PB(p[i] - (p[i]>p[0]) - (p[i]>p[n-1])); // Route EW flow originating from (1, 1) using topmost and rightmost edges FOE(i,1,n) Ret.EW[0].PB({1, i}); FOE(i,2,q[0]) Ret.EW[0].PB({i, n}); // Route EW flow originating in (m, 1) with q[m] as small as possible int m = 1; // special handle so congestion is 1 if possible if (p[0] == 1 && p[n-1] == n && q[0] == 1 && q[n-1] == n){ m = n - 1; FOE(i,1,n) Ret.EW[n-1].PB({n, i}); } else{ FOR(i,1,n) if (q[i] < q[m]) m = i; // Route(m+1, 1) --> (1, 1) --> (1, n) --> (q[m], n) FOD(i,m+2,2) Ret.EW[m].PB({i, 1}); FOR(i,1,n) Ret.EW[m].PB({1, i}); FOE(i,1,q[m]) Ret.EW[m].PB({i, n}); } // Create q'[] for n-2 instance vector<int> q_new(0); FOR(i,1,n) if (i != m) q_new.PB(q[i] - (q[i]>q[0]) - (q[i]>q[m])); if (n > 1){ Paths S = solve(p_new, q_new); int t; // connect NS paths FOR(i,1,n-1){ Ret.NS[i].PB({1, i+1}); for (auto [x, y]: S.NS[i-1]){ Ret.NS[i].PB({x+1, y+1}); t = y + 1; } Ret.NS[i].PB({n, t}); if (p[i] != t) Ret.NS[i].PB({n, p[i]}); } // connect EW paths int l = 0; FOR(i,1,n) if (i != m){ Ret.EW[i].PB({i+1, 1}); if (i > m) Ret.EW[i].PB({i, 1}); for (auto 
[x, y]: S.EW[l]){ Ret.EW[i].PB({x+1, y+1}); t = x + 1; } Ret.EW[i].PB({t, n}); if (q[i] != t) Ret.EW[i].PB({q[i], n}); ++l; } } return Ret; } int main(){ int n; vector<int> p, q; scanf("%d", &n); p.resize(n), q.resize(n); FOR(i,0,n) scanf("%d", &p[i]); FOR(i,0,n) scanf("%d", &q[i]); Paths Solution = solve(p, q); for (auto path: Solution.NS){ printf("%d", path.size()); for (auto [x, y]: path) printf(" %d %d", x, y); puts(""); } for (auto path: Solution.EW){ printf("%d", path.size()); for (auto [x, y]: path) printf(" %d %d", x, y); puts(""); } return 0; }
1771
A
Hossam and Combinatorics
Hossam woke up bored, so he decided to create an interesting array with his friend Hazem. Now, they have an array $a$ of $n$ positive integers, Hossam will choose a number $a_i$ and Hazem will choose a number $a_j$. Count the number of interesting pairs $(a_i, a_j)$ that meet all the following conditions: - $1 \le i, j \le n$; - $i \neq j$; - The absolute difference $|a_i - a_j|$ must be equal to the maximum absolute difference over all the pairs in the array. More formally, $|a_i - a_j| = \max_{1 \le p, q \le n} |a_p - a_q|$.
Firstly, note that $\max_{1 \le p, q \le n} |a_p - a_q| = \max(a) - \min(a)$. If this difference is equal to zero, then any ordered pair is valid, so the answer is $n \cdot (n - 1)$. Otherwise, let $count\_min$ and $count\_max$ be the numbers of occurrences of the minimum and the maximum. The answer is $2 \cdot count\_min \cdot count\_max$.
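The formula above can be sketched directly, without the sort used by the official solution. A minimal illustration (function name is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch (name ours): count ordered pairs (i, j), i != j,
// with |a_i - a_j| = max(a) - min(a).
long long interestingPairs(const vector<int>& a) {
    long long n = a.size();
    int mn = *min_element(a.begin(), a.end());
    int mx = *max_element(a.begin(), a.end());
    if (mn == mx) return n * (n - 1);              // every ordered pair works
    long long cmin = count(a.begin(), a.end(), mn);
    long long cmax = count(a.begin(), a.end(), mx);
    return 2 * cmin * cmax;                        // pairs are ordered, hence the factor 2
}
```

For $a = [6, 2, 3, 8, 1]$ the extremes $1$ and $8$ each occur once, giving $2 \cdot 1 \cdot 1 = 2$ ordered pairs.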
[ "combinatorics", "math", "sortings" ]
900
#include <bits/stdc++.h> using namespace std; const int N = 1e5 + 5; int n, a[N]; int main() { #ifndef ONLINE_JUDGE freopen("input.in", "r", stdin); #endif int t; scanf("%d", &t); while(t--){ scanf("%d", &n); for(int i = 0 ; i < n ; ++i) scanf("%d", a + i); sort(a, a + n); if(a[0] == a[n - 1]){ printf("%lld\n", (1LL * n * (n - 1LL))); continue; } int mn = 0, mx = n - 1; while(a[0] == a[mn]) ++mn; while(a[n - 1] == a[mx]) --mx; long long l = mn; long long r = n - mx - 1; printf("%lld\n", 2LL * l * r); } }
1771
B
Hossam and Friends
Hossam makes a big party, and he will invite his friends to the party. He has $n$ friends numbered from $1$ to $n$. They will be arranged in a queue as follows: $1, 2, 3, \ldots, n$. Hossam has a list of $m$ pairs of his friends that don't know each other. Any pair not present in this list are friends. A subsegment of the queue starting from the friend $a$ and ending at the friend $b$ is $[a, a + 1, a + 2, \ldots, b]$. A subsegment of the queue is called good when all pairs of that segment are friends. Hossam wants to know how many pairs $(a, b)$ there are ($1 \le a \le b \le n$), such that the subsegment starting from the friend $a$ and ending at the friend $b$ is good.
Assume $a_i < b_i$ in every non-friend pair. For each person, let $r_i$ be the minimum value of $b - 1$ over all non-friend pairs $(i, b)$: a good subsegment starting at $i$ cannot end to the right of $r_i$. Let's process people from right to left and maintain the rightmost position $R$ where a subsegment starting at the current person can end. Initially, $R = n-1$ (with $0$-indexed positions). When we reach position $i$, we do $R = \min(R, r_i)$ and add $R - i + 1$ to the answer.
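The right-to-left sweep can be sketched as follows. This minimal illustration uses $1$-indexed people, so $R$ starts at $n$ (names are ours, not from the official solution):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the right-to-left sweep (names ours; people are 1-indexed here,
// so R starts at n): r[a] is the rightmost end of a good subsegment starting at a.
long long countGoodSubsegments(int n, const vector<pair<int, int>>& badPairs) {
    vector<int> r(n + 1, n);                       // by default we can reach position n
    for (auto& pr : badPairs) {
        int a = min(pr.first, pr.second), b = max(pr.first, pr.second);
        r[a] = min(r[a], b - 1);                   // cannot include both a and b
    }
    long long ans = 0;
    int R = n;
    for (int i = n; i >= 1; i--) {
        R = min(R, r[i]);                          // the reachable end only shrinks
        ans += R - i + 1;                          // ends i, i+1, ..., R are all good
    }
    return ans;
}
```

With $n = 3$ and the single non-friend pair $(1, 3)$, the good subsegments are $[1]$, $[2]$, $[3]$, $[1,2]$, $[2,3]$ — five in total.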
[ "binary search", "constructive algorithms", "dp", "two pointers" ]
1,400
#pragma GCC optimize("Ofast,no-stack-protector,unroll-loops,no-stack-protector,fast-math") #include <bits/stdc++.h> #define ll long long #define ld long double #define IO ios_base::sync_with_stdio(0),cin.tie(0),cout.tie(0); using namespace std; const int N = 1e5 + 5, M = 1e5 + 5; int n, m; int mn[N]; int main() { #ifndef ONLINE_JUDGE freopen("input.in", "r", stdin); #endif int t; scanf("%d", &t); while(t--){ scanf("%d %d", &n, &m); for(int i = 1 ; i <= n ; ++i) mn[i] = n; for(int i = 0 ; i < m ; ++i){ int x, y; scanf("%d %d", &x, &y); if(x > y) swap(x, y); mn[x] = min(mn[x], y - 1); } for(int i = n - 1 ; i ; --i) mn[i] = min(mn[i], mn[i + 1]); ll ans = n; for(int i = 0 ; i < n ; ++i) ans += (mn[i] - i); printf("%lld\n", ans); } }
1771
C
Hossam and Trainees
Hossam has $n$ trainees. He assigned a number $a_i$ for the $i$-th trainee. A pair of the $i$-th and $j$-th ($i \neq j$) trainees is called successful if there is an integer $x$ ($x \geq 2$), such that $x$ divides $a_i$, and $x$ divides $a_j$. Hossam wants to know if there is a successful pair of trainees. Hossam is very tired now, so he asks you for your help!
If there exists $x \geq 2$ such that $x$ divides $a_i$ and $x$ divides $a_j$, then there exists a prime number $p$ such that $p$ divides both $a_i$ and $a_j$: we can choose $p$ to be any prime divisor of $x$. So, let's factorize all the numbers and check whether two of them share a prime factor. Straightforward factorization takes $O(n \cdot \sqrt{A})$, which is too slow; instead, precompute the prime numbers $\leq \sqrt{A}$ and trial-divide each $a_i$ only by those primes (whatever remains after the division is either $1$ or a prime). This takes $O(n \cdot \frac{\sqrt{A}}{\log{A}})$, which is fast enough.
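The prime-sharing check can be sketched as follows: trial division only by primes up to $\sqrt{A}$, with a set of already-seen prime factors. Names are ours; this is an illustration, not the official solution:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch (names ours): do any two numbers share a prime factor?
// Trial division only by primes up to sqrt(maxA); a leftover > 1 is prime.
bool hasSuccessfulPair(const vector<long long>& a, long long maxA) {
    int lim = (int)sqrtl((long double)maxA) + 1;
    vector<bool> comp(lim + 1, false);             // sieve of Eratosthenes up to lim
    vector<int> primes;
    for (int i = 2; i <= lim; i++) if (!comp[i]) {
        primes.push_back(i);
        for (long long j = 1LL * i * i; j <= lim; j += i) comp[j] = true;
    }
    set<long long> seen;                           // prime factors seen so far
    for (long long x : a) {
        set<long long> fac;                        // distinct prime factors of x
        for (int p : primes) {
            if (1LL * p * p > x) break;
            if (x % p == 0) { fac.insert(p); while (x % p == 0) x /= p; }
        }
        if (x > 1) fac.insert(x);                  // remaining factor is prime
        for (long long p : fac) if (!seen.insert(p).second) return true;
    }
    return false;
}
```

For example, $\{32, 48, 7\}$ succeeds because $32$ and $48$ share the prime $2$, while $\{4, 9, 25, 49\}$ is pairwise coprime.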
[ "greedy", "math", "number theory" ]
1,600
#include <bits/stdc++.h> using namespace std; const int N = 1e5 + 5, M = 2 * N + 5; bool vis[N], ans; void Sieve(){ memset(vis, true, sizeof(vis)); vis[0] = vis[1] = false; for(int i = 4 ; i < N ; i += 2) vis[i] = false; for(int i = 3 ; i < N / i ; i += 2){ if(!vis[i])continue; for(int j = i * i ; j < N ; j += i + i) vis[j] = false; } } int in[N], vid; vector<int> primes; void Gen(){ for(int i = 2 ; i < N ; ++i) if(vis[i]) primes.emplace_back(i); } set<int> st; void check(int x){ if(in[x] == vid){ ans = true; return; } in[x] = vid; } void fact(int x){ if(x < N && vis[x] == true){ check(x); return; } int idx = 0, sz = primes.size(); while(x > 1 && idx < sz && x / primes[idx] >= primes[idx]){ if(x % primes[idx] == 0){ check(primes[idx]); while(x % primes[idx] == 0)x /= primes[idx]; } ++idx; } if(x > 1){ if(x < N) return check(x), void(); if(st.find(x) != st.end()){ ans = true; return; } st.emplace(x); } } void pre(){ ++vid; st.clear(); } int main(){ Sieve(); Gen(); int t; scanf("%d", &t); while(t--){ pre(); int n; scanf("%d", &n); ans = false; while(n--){ int x; scanf("%d", &x); fact(x); } puts(ans ? "YES" : "NO"); } }
1771
D
Hossam and (sub-)palindromic tree
Hossam has an unweighted tree $G$ with letters in vertices. Hossam defines $s(v, \, u)$ as a string that is obtained by writing down all the letters on the unique simple path from the vertex $v$ to the vertex $u$ in the tree $G$. A string $a$ is a subsequence of a string $s$ if $a$ can be obtained from $s$ by deletion of several (possibly, zero) letters. For example, "dores", "cf", and "for" are subsequences of "codeforces", while "decor" and "fork" are not. A palindrome is a string that reads the same from left to right and from right to left. For example, "abacaba" is a palindrome, but "abac" is not. Hossam defines a sub-palindrome of a string $s$ as a subsequence of $s$, that is a palindrome. For example, "k", "abba" and "abhba" are sub-palindromes of the string "abhbka", but "abka" and "cat" are not. Hossam defines a maximal sub-palindrome of a string $s$ as a sub-palindrome of $s$, which has the maximal length among all sub-palindromes of $s$. For example, "abhbka" has only one maximal sub-palindrome — "abhba". But it may also be that the string has several maximum sub-palindromes: the string "abcd" has $4$ maximum sub-palindromes. Help Hossam find the length of the longest maximal sub-palindrome among all $s(v, \, u)$ in the tree $G$. \textbf{Note that the sub-palindrome is a subsequence, not a substring.}
Let's use dynamic programming. Define $dp_{v, \, u}$ as the length of the longest sub-palindrome on the path between vertices $v$ and $u$. Then the answer to the problem is $\max\limits_{1 \le v, \, u \le n}{dp_{v, \, u}}$. Define $go_{v, \, u}$ $(v \neq u)$ as the vertex $x$ that lies on the path between $v$ and $u$ at distance $1$ from $v$. If $v = u$, we put $go_{v, \, u}$ equal to $v$. There are three cases: the answer for $(v, \, u)$ equals the answer for $(go_{v, \, u}, \, u)$; the answer for $(v, \, u)$ equals the answer for $(v, \, go_{u, \, v})$; if $s_v = s_u$, the answer for $(v, \, u)$ equals the answer for $(go_{v, \, u}, \, go_{u, \, v}) \, + \, 2$ — here we take the best sub-palindrome strictly inside the path from $v$ to $u$ and extend it with the two equal symbols at $v$ and $u$. Formally, the transitions look like this: $dp_{v, \, u} := \max(dp_{v, \, go_{u, \, v}}, \; dp_{go_{v, \, u}, \, u}, \; dp_{go_{v, \, u}, \, go_{u, \, v}} + 2 \cdot (s_v = s_u)).$ The base cases are $dp_{v, \, v} := 1$ and $dp_{v, \, w} := 1 \, + \, (s_v = s_w)$ for adjacent $v$ and $w$. To compute the values, iterate over pairs of vertices in ascending order of the distance between them (note that this ordering can be obtained with counting sort). It remains to compute the array $go$. Iterate over all vertices; let the current vertex $v$ be the root of the tree, and consider each son $x$ of $v$: for every $u$ in the subtree of $x$, the value of $go_{v, \, u}$ is $x$. Thus, the time and memory complexity of this solution is $\mathcal{O}(n^2)$.
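On a single path the tree degenerates into a string, and the transitions above become the classic interval DP for the longest palindromic subsequence. A minimal Python sketch of exactly these transitions (the function name is illustrative):

```python
def longest_palindromic_subsequence(s):
    n = len(s)
    # dp[v][u]: answer for the sub-path (substring) s[v..u]
    dp = [[0] * n for _ in range(n)]
    for v in range(n):
        dp[v][v] = 1  # base case: a single vertex
    # iterate pairs in increasing order of distance, as in the editorial
    for length in range(2, n + 1):
        for v in range(n - length + 1):
            u = v + length - 1
            if length == 2:
                dp[v][u] = 1 + (s[v] == s[u])  # base case: adjacent vertices
            else:
                dp[v][u] = max(dp[v + 1][u], dp[v][u - 1],
                               dp[v + 1][u - 1] + 2 * (s[v] == s[u]))
    return dp[0][n - 1]
```

Here $v + 1$ and $u - 1$ play the roles of $go_{v,\,u}$ and $go_{u,\,v}$; the tree version differs only in how these neighbors are obtained.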
[ "brute force", "data structures", "dfs and similar", "dp", "strings", "trees" ]
2,100
#include <bits/stdc++.h> using namespace std; void dfs(int v, vector<vector<int>> &g, vector<vector<int>> &go, vector<vector<pair<int, int>>> &kek, int s, int t = -1, int p = -1, int len = 0){ if(len == 1) t = v; if(len > 1) go[s][v] = t; kek[len].push_back({s, v}); for(int u : g[v]) if(u != p) dfs(u, g, go, kek, s, t, v, len + 1); } void Solve(){ int n; cin >> n; string a; cin >> a; vector<vector<int>> g(n); vector<vector<int>> go(n, vector<int>(n)); vector<vector<pair<int, int>>> kek(n); vector<vector<int>> dp(n, vector<int>(n)); for(int i = 0; i < n - 1; i++){ int v, u; cin >> v >> u; g[--v].push_back(--u); g[u].push_back(v); } for(int v = 0; v < n; v++) dfs(v, g, go, kek, v); for(int len = 0; len < n; len++){ for(auto p : kek[len]){ int v = p.first; int u = p.second; if(len == 0){ dp[v][u] = 1; }else if(len == 1){ dp[v][u] = 1 + (a[v] == a[u]); }else{ int x = dp[v][go[u][v]]; int y = dp[go[v][u]][u]; int z = dp[go[v][u]][go[u][v]] + ((a[v] == a[u]) << 1); dp[v][u] = max({x, y, z}); } } } int ans = 0; for(int v = 0; v < n; v++) for(int u = 0; u < n; u++) ans = max(ans, dp[v][u]); cout << ans << '\n'; } signed main(){ ios_base::sync_with_stdio(NULL); cin.tie(NULL); cout.tie(NULL); int test = 1; cin >> test; for(int i = 1; i <= test; i++) Solve(); }
1771
E
Hossam and a Letter
Hossam bought a new piece of ground with length $n$ and width $m$, he divided it into an $n \cdot m$ grid, each cell being of size $1\times1$. Since Hossam's name starts with the letter 'H', he decided to draw the capital letter 'H' by building walls of size $1\times1$ on some squares of the ground. Each square $1\times1$ on the ground is assigned a quality degree: perfect, medium, or bad. The process of building walls to form up letter 'H' has the following constraints: - The letter must consist of one horizontal and two vertical lines. - The vertical lines must not be in the same or neighboring columns. - The vertical lines must start in the same row and end in the same row (and thus have the same length). - The horizontal line should connect the vertical lines, but must not cross them. - The horizontal line can be in any row between the vertical lines (not only in the middle), except the top and the bottom one. (With the horizontal line in the top row the letter looks like 'n', and in the bottom row like 'U'.) - It is forbidden to build walls in cells of bad quality. - You can use at most one square of medium quality. - You can use any number of squares of perfect quality. Find the maximum number of walls that can be used to draw the letter 'H'. Check the note for more clarification.
Let's precompute the following data for each cell: 1. the first medium cell above the current cell; 2. the first medium cell below it; 3. the first bad cell above it; 4. the first bad cell below it. Then we try to solve the problem for each row $i$ and each pair of columns $(j, k)$. The horizontal line is in row $i$, and the lengths of the vertical lines are computed as follows. There are two cases. If the horizontal line contains one medium cell ('m'): for each of the columns $j$ and $k$, find the first cell above and the first cell below that is bad or medium ('#' or 'm') — the vertical lines cannot reach those cells. If the horizontal line contains no 'm': we compute the same boundary cells, but now we make 4 trials — for each of the 4 boundary cells in turn, we allow exactly one 'm' in the corresponding vertical segment. Having the boundary cells above and below for both columns, the starting row of the vertical lines is the maximum of the two boundaries above, and the ending row is the minimum of the two boundaries below. Then we check that the starting row is above the current row $i$ (to avoid drawing the letter 'n' instead of 'H') and that the ending row is below row $i$ (to avoid drawing the letter 'U' instead of 'H'). Since $n$ and $m$ share the same limit of $400$, the time complexity of this solution is $O(n^3)$.
[ "brute force", "dp", "implementation", "two pointers" ]
2,500
#pragma GCC optimize("Ofast,no-stack-protector,unroll-loops,no-stack-protector,fast-math") #include <bits/stdc++.h> #define ll long long #define ld long double #define IO ios_base::sync_with_stdio(0),cin.tie(0),cout.tie(0); using namespace std; const int N = 4e2 + 5; int n, m; char a[N][N]; int upM[N][N]; int upB[N][N]; int downM[N][N]; int downB[N][N]; int _get(int I, int j, int incI, char ch){ int i = I + incI; while(i >= 0 && i < n){ if(a[i][j] == '#') break; if(a[i][j] == ch) break; i += incI; } return i; } int _getCount(int i, int j){ if(a[i][j] == 'm') return 1; return (a[i][j] == '.' ? 0 : 10); } /** 1 2 4 8 UL DL UR DR */ int getU(int i, int j, int bt){ if(!bt) return max(upM[i][j], upB[i][j]) + 1; int cur = upM[i][j]; if(cur == -1 || a[cur][j] == '#') return cur + 1; cur = upM[cur][j]; return cur + 1; } int getD(int i, int j, int bt){ if(!bt) return min(downM[i][j], downB[i][j]) - 1; int cur = downM[i][j]; if(cur == n || a[cur][j] == '#') return cur - 1; cur = downM[cur][j]; return cur - 1; } int solve(int i, int l, int r, int msk){ int upL = getU(i, l, (msk & 1)); int downL = getD(i, l, (msk & 2)); int upR = getU(i, r, (msk & 4)); int downR = getD(i, r, (msk & 8)); int up = max(upL, upR); int down = min(downL, downR); if(up < i && down > i) return 2 * (down - up + 1) + (r - l - 1); return 0; } int main() { #ifndef ONLINE_JUDGE freopen("input.in", "r", stdin); #endif scanf("%d %d", &n, &m); for(int i = 0 ; i < n ; ++i) scanf("%s", a[i]); for(int i = 0 ; i < n ; ++i){ for(int j = 0 ; j < m ; ++j){ if(a[i][j] == '#') continue; upM[i][j] = _get(i, j, -1, 'm'); upB[i][j] = _get(i, j, -1, '#'); downM[i][j] = _get(i, j, 1, 'm'); downB[i][j] = _get(i, j, 1, '#'); } } int mx = 0; for(int i = 0 ; i < n ; ++i){ for(int j = 0 ; j + 2 < m ; ++j){ int cnt = _getCount(i, j) + _getCount(i, j + 1); for(int k = j + 2 ; k < m ; ++k){ if((cnt += _getCount(i, k)) > 1) break; mx = max(mx, solve(i, j, k, 0)); if(cnt == 1) continue; mx = max(mx, solve(i, j, k, 1)); mx = 
max(mx, solve(i, j, k, 2)); mx = max(mx, solve(i, j, k, 4)); mx = max(mx, solve(i, j, k, 8)); } } } printf("%d\n", mx); }
1771
F
Hossam and Range Minimum Query
Hossam gives you a sequence of integers $a_1, \, a_2, \, \dots, \, a_n$ of length $n$. Moreover, he will give you $q$ queries of type $(l, \, r)$. For each query, consider the elements $a_l, \, a_{l + 1}, \, \dots, \, a_r$. Hossam wants to know the \textbf{smallest} number in this sequence, such that it occurs in this sequence an \textbf{odd} number of times. You need to compute the answer for each query before processing the next query.
Note that we are asked to solve the problem online; if that were not the case, Mo's algorithm could be used. How do we solve this task online? Consider two ways. The first way is as follows. Let's build a persistent bitwise trie $T$ on the given array, where the $i$-th version of the trie stores the numbers $x$ such that $x$ occurs on the prefix $a[1\dots i]$ an odd number of times. This can be done as follows. Let $T_0$ be an empty trie, and $T_i$ is obtained as follows: first we assign $T_i = T_{i - 1}$; then, if $a_i$ occurs in $T_{i - 1}$, we erase the number $a_i$ from $T_i$, otherwise we insert it there. Suppose we need to answer the query $[l, \, r]$. Note that if $x$ is included in $T_r$ but not in $T_{l - 1}$ (or is included in $T_{l - 1}$ but not in $T_r$), then the number $x$ occurs on the segment $a[l\dots r]$ an odd number of times. Otherwise, the number $x$ occurs an even number of times (recall that $0$ is an even number). Thus, we need to find the minimum number $x$ that occurs in exactly one of $T_{l - 1}$ and $T_r$. If there is no such number, we output $0$. Let's descend $T_{l - 1}$ and $T_r$ in parallel along the same prefix of the number. If $T_{l - 1}$ and $T_r$ are equal, then they contain the same numbers, and the answer is $0$. From now on, assume the answer is not $0$. The left subtree of a vertex is the son reached by the edge labeled $0$, and the right subtree is the son reached by the edge labeled $1$. Suppose we now stand at the vertices $v$ and $u$, respectively. If the left subtrees of $v$ and $u$ are equal, they contain the same numbers, so there is no point in going there, and we go along the right edge. 
Otherwise, the left subtree of $v$ contains at least one number that is not in the left subtree of $u$ (or vice versa), so we go down the left edge. The number in which we end up is the answer. Note that in order to compare two subtrees for equality, you need to use the hashing technique for rooted trees; then we can compare two subtrees in $\mathcal{O}(1)$. Thus, we get the asymptotics $\mathcal{O}((n+q)\log{\max(a)})$. If we compress the numbers of the sequence $a$ in advance, then we can get the asymptotics $\mathcal{O}((n + q) \log{n})$. Let's consider the second way. Compress the numbers in the sequence $a$ in advance. Let $pref_{ij} = 0$ if the prefix of length $i$ contains the number $j$ an even number of times, and $pref_{ij} = 1$ if it contains $j$ an odd number of times. Then, to answer the query $[l \dots r]$, we take the bitwise XOR of the arrays $pref_{l - 1}$ and $pref_r$ and find the minimum $j$ for which the resulting bit is $1$. That $j$ is the answer. Obviously, this solution as stated needs too much time and memory. To optimize the memory consumption, we use bitsets. However, even then we consume memory of the order of $\mathcal{O}(n^2 \, / \, 64)$, which is still a lot. So let's not store all $pref_i$, but only some of them: pick a constant $k$ and keep only $pref_0, \, pref_k, \, pref_{2k}, \, \dots, \, pref_{pk}$. Then, to answer the next query $[l \dots r]$, we find the right stored block, which already accounts for almost all the numbers we need, and then insert/erase the $\mathcal{O}(k)$ missing numbers. If we select $k \sim \sqrt{n}$, this solution fits in memory. However, if you use std::bitset<> in C++, this solution will most likely still receive the verdict Time Limit Exceeded. Therefore, to solve this problem, you need to write your own fast bitset. 
The asymptotics of such a solution would be $\mathcal{O}(n\, (n+q) \, / \, 64)$. However, due to a well-chosen $k$ and a self-written bitset, the constant in this solution will be very small and under given constraints, such a solution can work even faster than the first one.
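To illustrate the second way, here is a toy Python sketch of the XOR-of-parity-prefixes idea, using big integers as bitsets and storing every prefix (the real solution keeps only every $k$-th prefix and a hand-written bitset; the helper names are mine):

```python
def build_prefixes(a):
    # pref[i] has bit j set iff the j-th smallest distinct value
    # occurs an odd number of times in a[0..i-1]
    values = sorted(set(a))          # coordinate compression
    comp = {v: j for j, v in enumerate(values)}
    pref = [0]
    for x in a:
        pref.append(pref[-1] ^ (1 << comp[x]))
    return pref, values

def query(pref, values, l, r):
    """Smallest value occurring an odd number of times in a[l..r] (1-based), 0 if none."""
    diff = pref[l - 1] ^ pref[r]
    if diff == 0:
        return 0
    # the lowest set bit corresponds to the smallest compressed value
    return values[(diff & -diff).bit_length() - 1]
```

The block decomposition only changes how `pref[l - 1]` and `pref[r]` are obtained; the XOR-and-lowest-bit step stays the same.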
[ "binary search", "bitmasks", "data structures", "hashing", "probabilities", "strings", "trees" ]
2,500
#pragma optimize("SEX_ON_THE_BEACH") #pragma GCC optimize("unroll-loops") #pragma GCC optimize("unroll-all-loops") #pragma GCC optimize("O3") #pragma GCC optimize("Ofast") #pragma GCC optimize("fast-math") //#define _FORTIFY_SOURCE 0 #pragma GCC optimize("no-stack-protector") //#pragma GCC target("sse,sse2,sse3,ssse3,popcnt,abm,mmx,tune=native") #include<bits/stdc++.h> #include <x86intrin.h> using uint = unsigned int; using ll = long long int; using ull = unsigned long long int; using dd = double; using ldd = long double; using pii = std::pair<int, int>; using pll = std::pair<ll, ll>; using pdd = std::pair<dd, dd>; using pld = std::pair<ldd, ldd>; namespace fast { template<typename T> T gcd(T a, T b) { return gcd(a, b); } template<> unsigned int gcd<unsigned int>(unsigned int u, unsigned int v) { int shift; if (u == 0) return v; if (v == 0) return u; shift = __builtin_ctz(u | v); u >>= __builtin_ctz(u); do { unsigned int m; v >>= __builtin_ctz(v); v -= u; m = (int)v >> 31; u += v & m; v = (v + m) ^ m; } while (v != 0); return u << shift; } template<> unsigned long long gcd<unsigned long long>(unsigned long long u, unsigned long long v) { int shift; if (u == 0) return v; if (v == 0) return u; shift = __builtin_ctzll(u | v); u >>= __builtin_ctzll(u); do { unsigned long long m; v >>= __builtin_ctzll(v); v -= u; m = (long long)v >> 63; u += v & m; v = (v + m) ^ m; } while (v != 0); return u << shift; } } namespace someUsefull { template<typename T1, typename T2> inline void checkMin(T1& a, T2 b) { if (a > b) a = b; } template<typename T1, typename T2> inline void checkMax(T1& a, T2 b) { if (a < b) a = b; } template<typename T1, typename T2> inline bool checkMinRes(T1& a, T2 b) { if (a > b) { a = b; return true; } return false; } template<typename T1, typename T2> inline bool checkMaxRes(T1& a, T2 b) { if (a < b) { a = b; return true; } return false; } } namespace operators { template<typename T1, typename T2> std::istream& operator>>(std::istream& in, std::pair<T1, 
T2>& x) { in >> x.first >> x.second; return in; } template<typename T1, typename T2> std::ostream& operator<<(std::ostream& out, std::pair<T1, T2> x) { out << x.first << " " << x.second; return out; } template<typename T1> std::istream& operator>>(std::istream& in, std::vector<T1>& x) { for (auto& i : x) in >> i; return in; } template<typename T1> std::ostream& operator<<(std::ostream& out, std::vector<T1>& x) { for (auto& i : x) out << i << " "; return out; } } //name spaces using namespace std; using namespace operators; using namespace someUsefull; //end of name spaces //defines #define ff first #define ss second #define all(x) (x).begin(), (x).end() #define rall(x) (x).rbegin(), (x).rend() #define NO {cout << "NO"; return;} #define YES {cout << "YES"; return;} //end of defines //#undef HOME //debug defines #ifdef HOME #define debug(x) cerr << #x << " " << (x) << endl; #define debug_v(x) {cerr << #x << " "; for (auto ioi : x) cerr << ioi << " "; cerr << endl;} #define debug_vp(x) {cerr << #x << " "; for (auto ioi : x) cerr << '[' << ioi.ff << " " << ioi.ss << ']'; cerr << endl;} #define debug_v_v(x) {cerr << #x << "/*\n"; for (auto ioi : x) { for (auto ioi2 : ioi) cerr << ioi2 << " "; cerr << '\n';} cerr << "*/" << #x << endl;} int jjj; #define wait() cin >> jjj; #define PO cerr << "POMELO" << endl; #define OL cerr << "OLIVA" << endl; #define gen_clock(x) cerr << "Clock " << #x << " created" << endl; ll x = clock(); #define check_clock(x) cerr << "Time spent in " << #x << ": " << clock() - x << endl; x = clock(); #define reset_clock(x) x = clock(); #define say(x) cerr << x << endl; #else #define debug(x) 0; #define debug_v(x) 0; #define debug_vp(x) 0; #define debug_v_v(x) 0; #define wait() 0; #define PO 0; #define OL 0; #define gen_clock(x) 0; #define check_clock(x) 0; #define reset_clock(x) 0; #define say(x) 0; #endif // HOME const int SIZE = 200000; const int block = 64; const int _size = (SIZE + 63) / 64; struct bs { ull arr[_size]; bs() { for (int i = 0; i < 
_size; ++i) arr[i] = 0; } bs& operator^=(bs &other) { #pragma GCC ivdep for (int i = 0; i < _size; ++i) arr[i] ^= other.arr[i]; return *this; } int _Find_first_in_xor(bs& other) { ull t; for (int i = 0; i < _size; ++i) { if (t = arr[i] ^ other.arr[i]) { return (i << 6) + __builtin_ctzll(t); } } return SIZE; } int _Find_first() { for (int i = 0; i < _size; ++i) { if (arr[i]) { return (i << 6) + __builtin_ctzll(arr[i]); } } return SIZE; } void flip(int id) { ull &x = arr[id >> 6]; id &= 63; x ^= ((ull)1 << id); } int size() { return SIZE; } }; ostream& operator<<(ostream &os, bs &x) { for (int i = 0; i < _size; ++i) { os << x.arr[i] << " "; } return os; } void solve(int test) { int n; cin >> n; vector<int> arr(n); cin >> arr; vector<int> to(n); { map<int, int> have; for (int i : arr) have[i] = 0; int cnt = 0; for (auto &i : have) { i.ss = cnt; to[cnt] = i.ff; cnt++; } for (int &i: arr) i = have[i]; } vector<vector<int>> blocks; for (int i = 0; i < n; i += block) { blocks.push_back({}); for (int j = 0; i + j < n && j < block; ++j) { blocks.back().push_back(arr[i + j]); } } vector<bs> blocks_bs(blocks.size()); for (int i = 0; i < blocks.size(); ++i) { for (int j : blocks[i]) { blocks_bs[i].flip(j); } } for (int i = 1; i < blocks.size(); ++i) { blocks_bs[i] ^= blocks_bs[i - 1]; } int q; cin >> q; bs have; int last = 0; for (int i = 0; i < q; ++i) { int a, b; cin >> a >> b; int l = (last ^ a); int r = (last ^ b); // cin >> l >> r; --l; --r; int lb = l / block; int rb = r / block; if (rb - lb <= 1) { int L = l; while (l <= r) { have.flip(arr[l]); ++l; } int id = have._Find_first(); int ans = (id == have.size() ? 
0 : to[id]); last = ans; cout << ans << '\n'; l = L; while (l <= r) { have.flip(arr[l]); ++l; } } else { int L = (lb + 1) * block; int old_l = l; int R = (rb + 1) * block; checkMin(R, n); int old_r = r; ++r; while (r < R) { blocks_bs[rb].flip(arr[r]); ++r; } while (l < L) { blocks_bs[lb].flip(arr[l]); ++l; } int id = blocks_bs[rb]._Find_first_in_xor(blocks_bs[lb]); int ans = (id == have.size() ? 0 : to[id]); last = ans; cout << ans << '\n'; r = old_r; ++r; while (r < R) { blocks_bs[rb].flip(arr[r]); ++r; } l = old_l; while (l < L) { blocks_bs[lb].flip(arr[l]); ++l; } } } } signed main() { ios_base::sync_with_stdio(false); cout.tie(0); cin.tie(0); //freopen("file.in", "r", stdin); //freopen("file.out", "w", stdout); int t = 1; //cin >> t; for (int i = 0; i < t; ++i) { solve(i+1); //cout << '\n'; //PO; } return 0; } /* */
1772
A
A+B?
You are given an expression of the form $a{+}b$, where $a$ and $b$ are integers from $0$ to $9$. You have to evaluate it and print the result.
There are multiple ways to solve this problem. Most interpreted languages have some function that takes the string, evaluates it as code, and then returns the result. One of the examples is the eval function in Python. If the language you use supports something like that, you can read the input as a string and use it as the argument of such a function. Suppose you use a language where this is impossible. There are still many approaches to this problem. The most straightforward one is to take the first and the last characters of the input string, calculate their ASCII codes, and then subtract the ASCII code of the character 0 from them to get these digits as integers, not as characters. Then you can just add them up and print the result.
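For the second approach, a minimal Python sketch of the ASCII-code trick (the function name is illustrative):

```python
def add_expression(s):
    # s looks like "4+7"; take the first and last characters,
    # convert them to digits via their ASCII codes, and add them up
    return (ord(s[0]) - ord('0')) + (ord(s[-1]) - ord('0'))
```

This works because the operands are single digits, so the first and last characters of the string are always the two digits.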
[ "implementation" ]
800
t = int(input()) for i in range(t): print(eval(input()))
1772
B
Matrix Rotation
You have a matrix $2 \times 2$ filled with \textbf{distinct} integers. You want your matrix to become beautiful. The matrix is beautiful if the following two conditions are satisfied: - in each row, the first element is smaller than the second element; - in each column, the first element is smaller than the second element. You can perform the following operation on the matrix any number of times: rotate it clockwise by $90$ degrees, so the top left element shifts to the top right cell, the top right element shifts to the bottom right cell, and so on: Determine if it is possible to make the matrix beautiful by applying zero or more operations.
Sure, you can just implement the rotation operation and check all $4$ possible ways to rotate the matrix, but it's kinda boring. The model solution does a different thing. If a matrix is beautiful, then its minimum is in the upper left corner, and its maximum is in the lower right corner. If you rotate it, the element from the upper left corner goes to the upper right corner, and the element from the lower right corner goes to the lower left corner - so these elements are still in opposite corners. No matter how many times we rotate a beautiful matrix, its minimum and maximum elements stay in opposite corners - and the converse is true as well: if you have a $2 \times 2$ matrix with its minimum and maximum elements in opposite corners, it can be rotated in such a way that it becomes beautiful. So, all we need to check is that the minimum and the maximum elements are in opposite corners. There are many ways to do it; in my opinion, the most elegant one is to read all four elements into an array of size $4$; then the opposite corners of the matrix correspond either to positions $0$ and $3$, or to positions $1$ and $2$ in this array. So, we check that the sum of the positions of the minimum and the maximum is exactly $3$.
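A minimal Python sketch of this check (the function name is mine):

```python
def can_become_beautiful(cells):
    # cells: the 2x2 matrix read row by row into a flat list of 4 distinct values
    lo = cells.index(min(cells))
    hi = cells.index(max(cells))
    # opposite corners are the index pairs (0, 3) and (1, 2); both sum to 3
    return lo + hi == 3
```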
[ "brute force", "implementation" ]
800
#include<bits/stdc++.h> using namespace std; int main() { int t; cin >> t; for(int _ = 0; _ < t; _++) { vector<int> a(4); for(int i = 0; i < 4; i++) cin >> a[i]; int maxpos = max_element(a.begin(), a.end()) - a.begin(); int minpos = min_element(a.begin(), a.end()) - a.begin(); if(maxpos + minpos == 3) puts("YES"); else puts("NO"); } }
1772
C
Different Differences
An array $a$ consisting of $k$ integers is \textbf{strictly increasing} if $a_1 < a_2 < \dots < a_k$. For example, the arrays $[1, 3, 5]$, $[1, 2, 3, 4]$, $[3, 5, 6]$ are strictly increasing; the arrays $[2, 2]$, $[3, 7, 5]$, $[7, 4, 3]$, $[1, 2, 2, 3]$ are not. For a strictly increasing array $a$ of $k$ elements, let's denote the \textbf{characteristic} as the number of different elements in the array $[a_2 - a_1, a_3 - a_2, \dots, a_k - a_{k-1}]$. For example, the characteristic of the array $[1, 3, 4, 7, 8]$ is $3$ since the array $[2, 1, 3, 1]$ contains $3$ different elements: $2$, $1$ and $3$. You are given two integers $k$ and $n$ ($k \le n$). Construct an increasing array of $k$ integers from $1$ to $n$ with \textbf{maximum possible} characteristic.
We can transform the problem as follows. Let $d_i = a_{i+1} - a_i$. We need to find an array $[d_1, d_2, \dots, d_{k-1}]$ so that the sum of elements in it is not greater than $n-1$, all elements are positive integers, and the number of different elements is the maximum possible. Suppose we need $f$ different elements in $d$. What can be the minimum possible sum of elements in $d$? It's easy to see that $d$ should have the following form: $[2, 3, 4, \dots, f, 1, 1, 1, \dots, 1]$. This array contains exactly $f$ different elements, these different elements are as small as possible (so their sum is as small as possible), and all duplicates are $1$'s. So, if the sum of this array is not greater than $n-1$, then it is possible to have the number of different elements in $d$ equal to $f$. The rest is simple. We can iterate on $f$, find the maximum possible $f$, construct the difference array, and then use it to construct the array $a$ itself.
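A hedged Python sketch of this construction (names are illustrative): it tries every $f$, keeps the largest feasible difference array of the form $[2, 3, \dots, f, 1, \dots, 1]$, and converts it back into $a$.

```python
def build_array(k, n):
    # f = 1: all differences equal to 1, always feasible since k <= n
    best = [1] * (k - 1)
    for f in range(2, k + 1):
        # differences 2..f, padded with 1's up to k-1 elements
        d = list(range(2, f + 1)) + [1] * (k - f)
        if sum(d) <= n - 1:     # feasibility is monotone in f,
            best = d            # so the last feasible f is the maximum
    a = [1]
    for step in best:
        a.append(a[-1] + step)
    return a
```

Since the sum of $d$ grows with $f$, the set of feasible $f$ is a prefix, so keeping the last feasible array gives the maximum characteristic.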
[ "constructive algorithms", "greedy", "math" ]
1,000
def construct(f, k): return [(i + 2 if i < f - 1 else 1) for i in range(k)] t = int(input()) for i in range(t): k, n = map(int, input().split()) ans = 1 for f in range(1, k): d = construct(f, k - 1) if sum(d) <= n - 1: ans = f res = [1] d = construct(ans, k - 1) for x in d: res.append(res[-1] + x) print(*res)
1772
D
Absolute Sorting
You are given an array $a$ consisting of $n$ integers. The array is sorted if $a_1 \le a_2 \le \dots \le a_n$. You want to make the array $a$ sorted by applying the following operation \textbf{exactly once}: - choose an integer $x$, then for every $i \in [1, n]$, replace $a_i$ by $|a_i - x|$. Find any value of $x$ that will make the array sorted, or report that there is no such value.
What does it actually mean for an array $a_1, a_2, \dots, a_n$ to be sorted? That means $a_1 \le a_2$ and $a_2 \le a_3$ and so on. For each pair of adjacent elements, let's deduce which values of $x$ put them in the correct order. Any value of $x$ that puts all pairs in the correct order will be the answer. Consider any $a_i$ and $a_{i+1}$ and solve the inequality $|a_i - x| \le |a_{i+1} - x|$. If $a_i = a_{i+1}$, then any value of $x$ works. Let $a_i$ be smaller than $a_{i+1}$. If $x$ is smaller than or equal to $a_i$, then the inequality becomes $a_i - x \le a_{i+1} - x \Leftrightarrow a_i \le a_{i+1}$. Thus, they don't change their order, and any $x \le a_i$ works. If $x$ is greater than or equal to $a_{i+1}$, then the inequality becomes $x - a_i \le x - a_{i+1} \Leftrightarrow a_i \ge a_{i+1}$. Thus, they always change their order, and no $x \ge a_{i+1}$ works. If $x$ is between $a_i$ and $a_{i+1}$, then the inequality becomes $x - a_i \le a_{i+1} - x \Leftrightarrow 2x \le a_i + a_{i+1} \Leftrightarrow x \le \frac{a_i + a_{i+1}}{2}$. Thus, they only remain in the same order for any integer $x$ such that $a_i \le x \le \lfloor \frac{a_i + a_{i+1}}{2} \rfloor$. In union, that tells us that all values of $x$ that work for such a pair are $x \le \lfloor \frac{a_i + a_{i+1}}{2} \rfloor$. A similar analysis can be applied to $a_i > a_{i+1}$, which results in the required $x$ being $x \ge \lceil \frac{a_i + a_{i+1}}{2} \rceil$ for such pairs. Finally, how do we find out if some value of $x$ passes all conditions? Among all conditions of the form $x \le \mathit{val_i}$, in order for some $x$ to work, it should be less than or equal to even the smallest of them. Similarly, among all conditions of the form $x \ge \mathit{val_i}$, in order for some $x$ to work, it should be greater than or equal to even the largest of them. Thus, take the minimum over the pairs of one type and the maximum over the pairs of the other type. 
If two resulting values are contradictory, then there is no answer. Otherwise, any value inside the resulting range of $x$ works. Overall complexity: $O(n)$ per testcase.
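A minimal Python sketch of the described intersection of constraints (the function name and the $10^9$ upper bound for $x$ are illustrative):

```python
def find_x(a, limit=10**9):
    lo, hi = 0, limit  # running intersection of all per-pair constraints
    for p, q in zip(a, a[1:]):
        if p < q:
            hi = min(hi, (p + q) // 2)       # x <= floor((p+q)/2)
        elif p > q:
            lo = max(lo, (p + q + 1) // 2)   # x >= ceil((p+q)/2)
    return lo if lo <= hi else -1            # -1: constraints contradict
```

Any value in $[\mathit{lo}, \mathit{hi}]$ works; returning $\mathit{lo}$ is just one valid choice.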
[ "constructive algorithms", "math" ]
1,400
#include <bits/stdc++.h> using namespace std; int main() { int t; cin >> t; for(int i = 0; i < t; i++) { int n; cin >> n; vector<int> a(n); for(int j = 0; j < n; j++) cin >> a[j]; int mn = 0, mx = int(1e9); for(int j = 0; j + 1 < n; j++) { int x = a[j]; int y = a[j + 1]; int midL = (x + y) / 2; int midR = (x + y + 1) / 2; if(x < y) mx = min(mx, midL); if(x > y) mn = max(mn, midR); } if(mn <= mx) cout << mn << endl; else cout << -1 << endl; } }
1772
E
Permutation Game
Two players are playing a game. They have a permutation of integers $1$, $2$, ..., $n$ (a permutation is an array where each element from $1$ to $n$ occurs exactly once). The permutation is not sorted in either ascending or descending order (i. e. the permutation does not have the form $[1, 2, \dots, n]$ or $[n, n-1, \dots, 1]$). Initially, all elements of the permutation are colored red. The players take turns. On their turn, the player can do one of three actions: - rearrange the elements of the permutation in such a way that all \textbf{red} elements keep their positions (note that \textbf{blue} elements can be swapped with each other, but it's not obligatory); - change the color of one red element to blue; - skip the turn. The first player wins if the permutation is sorted in ascending order (i. e. it becomes $[1, 2, \dots, n]$). The second player wins if the permutation is sorted in descending order (i. e. it becomes $[n, n-1, \dots, 1]$). If the game lasts for $100^{500}$ turns and nobody wins, it ends in a draw. Your task is to determine the result of the game if both players play optimally.
Note that it makes no sense to use the first type of operation if it does not lead to an instant win, because the opponent can return the previous state of the array with their next move. So the winner is the one who manages to color "their" elements blue first. Let's denote $a$ as the number of elements that only the first player needs to color, $b$ as the number of elements that only the second player needs to color, and $c$ as the number of elements both players need to color. To win, the first player needs to have time to paint $a+c$ elements, and they have no more than $b$ moves to do it, because otherwise the second player can prevent the win of the first player. So the winning condition for the first player is $a+c \le b$. The same holds for the second player, with the only difference that they have $1$ move less (because they go second), which means the condition is $b+c < a$. If neither condition is met, then neither player has a winning strategy, which means they will both steer the game to a draw.
[ "games" ]
1,700
for tc in range(int(input())): n = int(input()) p = list(map(int, input().split())) a, b, c = 0, 0, 0 for i in range(n): if p[i] != i + 1 and p[i] != n - i: c += 1 elif p[i] != i + 1: a += 1 elif p[i] != n - i: b += 1 if a + c <= b: print("First") elif b + c < a: print("Second") else: print("Tie")
1772
F
Copy of a Copy of a Copy
It all started with a black-and-white picture, that can be represented as an $n \times m$ matrix such that all its elements are either $0$ or $1$. The rows are numbered from $1$ to $n$, the columns are numbered from $1$ to $m$. Several operations were performed on the picture (possibly, zero), each of one of the two kinds: - choose a cell such that it's not on the border (neither row $1$ or $n$, nor column $1$ or $m$) and it's surrounded by four cells of the opposite color (four zeros if it's a one and vice versa) and paint it the opposite color itself; - make a copy of the current picture. Note that the order of operations could be arbitrary, they were not necessarily alternating. You are presented with the outcome: all $k$ copies that were made. Additionally, you are given the initial picture. However, all $k+1$ pictures are shuffled. Restore the sequence of the operations. If there are multiple answers, print any of them. The tests are constructed from the real sequence of operations, i. e. at least one answer always exists.
Notice the following: once you apply the recolor operation to some cell, you can never recolor it again. That is because its neighbors can't be recolored either: each of them now has at least one neighbor of the same color - this cell itself. In particular, this implies that applying a recolor operation always decreases the number of operations currently available. It doesn't always decrease it by exactly $1$ - anywhere from $1$ to $5$ operations can become unavailable - but it always decreases. That gives us an order on the copies: sort them in decreasing order of the number of recolor operations currently available. If two counts are equal, the corresponding copies must be identical, so their relative order doesn't matter. The only thing that remains is to apply the operations. It turns out their order doesn't matter at all. Consider all differing cells for a pair of adjacent pictures: no two differing cells can be adjacent to each other, so no operation can interfere with another. Just print all positions of differing cells in any order you want and make a copy. Overall complexity: $O(nmk + k \log k)$.
[ "constructive algorithms", "dfs and similar", "graphs", "implementation", "sortings" ]
2,000
#include <bits/stdc++.h> using namespace std; #define forn(i, n) for(int i = 0; i < int(n); i++) struct op{ int t, x, y, i; }; int dx[] = {-1, 0, 1, 0}; int dy[] = {0, 1, 0, -1}; int main(){ int n, m, k; cin >> n >> m >> k; vector<vector<string>> a(k + 1, vector<string>(n)); forn(z, k + 1) forn(i, n) cin >> a[z][i]; vector<int> cnt(k + 1); forn(z, k + 1){ for (int i = 1; i < n - 1; ++i){ for (int j = 1; j < m - 1; ++j){ bool ok = true; forn(t, 4) ok &= a[z][i][j] != a[z][i + dx[t]][j + dy[t]]; cnt[z] += ok; } } } vector<int> ord(k + 1); iota(ord.begin(), ord.end(), 0); sort(ord.begin(), ord.end(), [&cnt](int x, int y){ return cnt[x] > cnt[y]; }); vector<op> ops; forn(z, k){ forn(i, n) forn(j, m) if (a[ord[z]][i][j] != a[ord[z + 1]][i][j]){ a[ord[z]][i][j] ^= '0' ^ '1'; ops.push_back({1, i + 1, j + 1, -1}); } ops.push_back({2, -1, -1, ord[z + 1] + 1}); } cout << ord[0] + 1 << '\n'; cout << ops.size() << '\n'; for (auto it : ops){ cout << it.t << " "; if (it.t == 1) cout << it.x << " " << it.y << '\n'; else cout << it.i << '\n'; } }
1772
G
Gaining Rating
Monocarp is playing chess on one popular website. He has $n$ opponents he can play with. The $i$-th opponent has rating equal to $a_i$. Monocarp's initial rating is $x$. Monocarp wants to raise his rating to the value $y$ ($y > x$). When Monocarp is playing against one of the opponents, he will win if his \textbf{current} rating is greater than or equal to the opponent's rating. If Monocarp wins, his rating is increased by $1$, otherwise it is decreased by $1$. The rating of his opponent does not change. Monocarp wants to reach rating $y$ playing as few games as possible. But he can't just grind it, playing against weak opponents. The website has a rule that you should play against all opponents as evenly as possible. Speaking formally, if Monocarp wants to play against an opponent $i$, there should be no other opponent $j$ such that Monocarp has played more games against $i$ than against $j$. Calculate the minimum possible number of games Monocarp needs to gain rating $y$, or report that it's impossible. Note that ratings of Monocarp's opponents don't change, while Monocarp's rating does change.
After parsing the statement, you can see that Monocarp plays cyclically: in one cycle, he chooses some order of opponents and plays against them in that order, then repeats again and again until he gains the desired rating at some moment. So, first, let's prove that (within one cycle) it's optimal to play against the opponents in increasing order of their ratings. Suppose you play with opponents in some order $ord$ and there is a position where $a[ord_i] > a[ord_{i+1}]$; if you swap $ord_i$ and $ord_{i+1}$, you won't lose anything and may even gain extra wins. It follows that the total gain after playing one cycle in increasing order is greater than or equal to that of any other order. In other words, we can sort the array $a$ and play against the opponents cyclically in that order. Monocarp's list of games will look like several full cycles plus some prefix. The problem is that there can be many cycles, and we need to skip them quickly. What does one cycle look like? Monocarp starts with some rating $x$, wins the first $p$ games, and then loses all the remaining $m = n - p$ games. The maximum rating he reaches is $x + p$, and the resulting rating after all games is $x + p - m$. We can already identify several conditions for leaving a cycle: if $x + p \ge y$, then Monocarp gets what he wants and stops; otherwise, if $x + p - m \le x$ (i.e. $p - m \le 0$), he will never gain the desired rating, since in the next cycle the number of wins is $p' \le p$ (his starting rating $x + p - m \le x$). Otherwise, if $x + p < y$ and $p - m > 0$, he will start one more cycle with rating $x' = x + p - m$ and will, eventually, gain the desired rating $y$. So, how to find the number of games $p$ he will win for a starting rating $x$? Let's compute two values for the sorted array $a$: for each $i$, $t_i$ - the minimum starting rating Monocarp needs to beat opponent $i$ (and all opponents before), and $b_i$ - the rating he'll have after beating the $i$-th opponent. 
We can calculate these values in one pass (using $0$-indexation): $t_0 = a_0$, $b_0 = a_0 + 1$; then for each $i > 0$, if $b_{i - 1} \ge a_i$ then $t_i = t_{i - 1}$ and $b_i = b_{i - 1} + 1$, otherwise $t_i = a_i - i$ and $b_i = a_i + 1$. Now, knowing the values $t_i$, it's easy to find the number of wins $p$ for a starting rating $x$: $p$ is equal to the minimum $j$ such that $t_j > x$ (again, $0$-indexation), i.e. the first position in the array $t$ with value strictly greater than $x$. We can find it with the standard $\text{upper_bound}$ function, since the array $t$ is sorted. Okay, we found the number of wins $p$ for the current $x$. Let's calculate how many cycles $k$ Monocarp will make with exactly $p$ wins. There are only two conditions under which this cycle breaks: either Monocarp reaches rating $y$, written as the inequality $x + k (p - m) + p \ge y$, or the number of wins increases (the starting rating becomes greater than or equal to $t_p$), i.e. $x + k (p - m) \ge t_p$. From the first inequality we get the minimum $k_1 = \left\lceil \frac{y - x - p}{p - m} \right\rceil$, and from the second one $k_2 = \left\lceil \frac{t_p - x}{p - m} \right\rceil$. As a result, Monocarp will repeat the current cycle exactly $k = \min(k_1, k_2)$ times and then either finish in the next cycle or the number of wins will change. So we can skip these $k$ equal cycles: increase the answer by $kn$ and the current rating by $k(p - m)$. Since we skip equal cycles, at each step we either finish (with success or $-1$) or the number of wins $p$ increases. Since $p$ is bounded by $n$, we make at most $n$ skips, and the total complexity is $O(n \log n)$ due to the initial sorting and the calls to $\text{upper_bound}$.
[ "binary search", "greedy", "implementation", "math", "sortings", "two pointers" ]
2,200
#include<bits/stdc++.h> using namespace std; #define fore(i, l, r) for(int i = int(l); i < int(r); i++) #define sz(a) int((a).size()) typedef long long li; int n; li x, y; vector<li> a; inline bool read() { if(!(cin >> n >> x >> y)) return false; a.resize(n); fore (i, 0, n) cin >> a[i]; return true; } li ceil(li a, li b) { assert(a >= 0 && b >= 0); return (a + b - 1) / b; } inline void solve() { sort(a.begin(), a.end()); vector<li> t(n), b(n); fore (i, 0, n) { if (i > 0 && b[i - 1] >= a[i]) { t[i] = t[i - 1]; b[i] = b[i - 1] + 1; } else { t[i] = a[i] - i; b[i] = a[i] + 1; } } li ans = 0; while (x < y) { int pos = int(upper_bound(t.begin(), t.end(), x) - t.begin()); li p = pos, m = n - pos; if (x + p >= y) { cout << ans + (y - x) << endl; return; } if (p <= m) { cout << -1 << endl; return; } //1. x + k(p - m) + p >= y li k = ceil(y - x - p, p - m); if (pos < n) { //2. x + k(p - m) >= t[pos] k = min(k, ceil(t[pos] - x, p - m)); } ans += k * n; //x + k(p - m) < y, since 1. and p >= p - m x += k * (p - m); } assert(false); } int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); int tt = clock(); #endif ios_base::sync_with_stdio(false); cin.tie(0), cout.tie(0); cout << fixed << setprecision(15); int t; cin >> t; while (t--) { read(); solve(); #ifdef _DEBUG cerr << "TIME = " << clock() - tt << endl; tt = clock(); #endif } return 0; }
1774
A
Add Plus Minus Sign
AquaMoon has a string $a$ consisting of only $0$ and $1$. She wants to add $+$ and $-$ between all pairs of consecutive positions to make the absolute value of the resulting expression as small as possible. Can you help her?
The answer is the number of $1$s modulo $2$. We can achieve it by adding '-' before the $\text{2nd}, \text{4th}, \cdots, 2k\text{-th}$ $1$ and '+' before the $\text{3rd}, \text{5th}, \cdots, (2k+1)\text{-th}$ $1$ (and '+' before every other digit).
[ "constructive algorithms", "math" ]
800
#include <bits/stdc++.h> using namespace std; char c[1005]; int main() { int t; scanf("%d", &t); int n; while (t--) { scanf("%d", &n); scanf("%s", c + 1); int u = 0; for (int i = 1; i <= n; ++i) { bool fl = (c[i] == '1') && u; u ^= (c[i] - '0'); if (i != 1) putchar(fl ? '-' : '+'); } putchar('\n'); } }
1774
B
Coloring
Cirno_9baka has a paper tape with $n$ cells in a row on it. As he thinks that the blank paper tape is too dull, he wants to paint these cells with $m$ kinds of colors. For some aesthetic reasons, he thinks that the $i$-th color must be used exactly $a_i$ times, and for every $k$ consecutive cells, their colors have to be distinct. Help Cirno_9baka to figure out if there is such a way to paint the cells.
First, we can divide the $n$ cells into $\left\lceil\frac{n}{k}\right\rceil$ segments such that all segments except the last have length exactly $k$. Within each segment, the colors must be pairwise distinct. It follows that every $a_i$ must be at most $\left\lceil\frac{n}{k}\right\rceil$. Then we count the number of $a_i$ equal to $\left\lceil\frac{n}{k}\right\rceil$: this count must be at most $((n - 1) \bmod k) + 1$, the length of the last segment. Every $a$ that satisfies the two conditions above is valid. We can construct a coloring as follows. First, pick out all colors $i$ with $a_i=\left\lceil\frac{n}{k}\right\rceil$ and use color $i$ to color the $j$-th cell of every segment (a fresh $j$ for each such color). Then pick out all colors $i$ with $a_i<\left\lceil\frac{n}{k}\right\rceil-1$ and use them to color the remaining cells in cyclic order (i.e. color the $j$-th cell of the first segment, of the second segment, ..., of the $\left\lceil\frac{n}{k}\right\rceil$-th segment, then increment $j$; when one color is used up, start using the next one). At last, pick out all colors $i$ with $a_i=\left\lceil\frac{n}{k}\right\rceil-1$ and color with them in the same cyclic order. This method always gives a valid construction.
[ "constructive algorithms", "greedy", "math" ]
1,500
#include <bits/stdc++.h> using namespace std; int main() { int t; scanf("%d", &t); while (t--) { int n, m, k; scanf("%d %d %d", &n, &m, &k); int fl = 0; for (int i = 1; i <= m; ++i) { int a; scanf("%d", &a); if (a == (n + k - 1) / k) ++fl; if (a > (n + k - 1) / k) fl = 1 << 30; } puts(fl <= (n - 1) % k + 1 ? "YES" : "NO"); } }
1774
C
Ice and Fire
Little09 and his friends are playing a game. There are $n$ players, and the temperature value of the player $i$ is $i$. The types of environment are expressed as $0$ or $1$. When two players fight in a specific environment, if its type is $0$, the player with a lower temperature value in this environment always wins; if it is $1$, the player with a higher temperature value in this environment always wins. The types of the $n-1$ environments form a binary string $s$ with a length of $n-1$. If there are $x$ players participating in the game, there will be a total of $x-1$ battles, and the types of the $x-1$ environments will be the first $x-1$ characters of $s$. While there is more than one player left in the tournament, choose any two remaining players to fight. The player who loses will be eliminated from the tournament. The type of the environment of battle $i$ is $s_i$. For each $x$ from $2$ to $n$, answer the following question: if all players whose temperature value does not exceed $x$ participate in the game, how many players have a chance to win?
We define $f_i$ as the maximum $x$ satisfying $s_{i-x+1}=s_{i-x+2}=\ldots=s_{i}$, i.e. the length of the run of equal characters ending at position $i$. It can be proved that for $x$ players, $f_{x-1}$ players are bound to lose and the rest have a chance to win. So the answer after the first $i$ battles is $ans_i=i-f_i+1$. Next, we prove this conclusion. Suppose there are $n$ players and $n-1$ battles, $s_{n-1}=1$, and there are $x$ consecutive $1$s at the end. If $x=n-1$, then obviously only the $n$-th player can win. Otherwise, $s_{n-1-x}$ must be $0$. Consider the following facts. Players $1$ to $x$ have no chance to win: if player $i$ ($1\le i\le x$) could win, he would have to defeat a player with a lower temperature value in each of the last $x$ battles; however, only $i-1$ players have a temperature value lower than his, and since $i-1<x$, player $i$ cannot win. Players $x+1$ to $n$ have a chance to win: for player $i$ ($x+1\le i\le n$), we can construct a schedule as follows. In the first $n-2-x$ battles, let all players with temperature values in $[x+1,n]$ except player $i$ fight, so that only one of them remains. In the $(n-1-x)$-th battle, let that remaining player fight player $1$. Since $s_{n-1-x}=0$, player $1$ wins. Then only the first $x$ players and player $i$ remain for the last $x$ battles, so player $i$ can win. For $s_{n-1}=0$ the situation is symmetric, so we do not repeat it here.
[ "constructive algorithms", "dp", "greedy" ]
1,300
#include <bits/stdc++.h> using namespace std; const int N = 300005; int T, n, ps[2]; char a[N]; void solve() { scanf("%d %s", &n, a + 1); ps[0] = ps[1] = 0; for (int i = 1; i < n; ++i) { ps[a[i] - 48] = i; if (a[i] == '0') printf("%d ", ps[1] + 1); else printf("%d ", ps[0] + 1); } putchar('\n'); } int main() { scanf("%d", &T); while (T--) solve(); return 0; }
1774
D
Same Count One
ChthollyNotaSeniorious received a special gift from AquaMoon: $n$ binary arrays of length $m$. AquaMoon tells him that in one operation, he can choose any two arrays and any position $pos$ from $1$ to $m$, and swap the elements at positions $pos$ in these arrays. He is fascinated with this game, and he wants to find the minimum number of operations needed to make the numbers of $1$s in all arrays the same. He has invited you to participate in this interesting game, so please try to find it! If it is possible, please output specific exchange steps in the format described in the output section. Otherwise, please output $-1$.
Since we need to make the number of $1\text{s}$ in each array the same, we should calculate the total number of $1\text{s}$; every array must end up with $sum / n$ of them (if $sum$ is not divisible by $n$, the answer is $-1$). Because each exchange involves the same position of two different arrays, for each position $pos$ we traverse all arrays: if an array still has too few $1\text{s}$, we want to turn some of its $0\text{s}$ into $1\text{s}$; if it has too many, we want to turn some of its $1\text{s}$ into $0\text{s}$. Pairing such arrays at the current position yields the exchanges. It can be proved that as long as the total number of $1\text{s}$ is a multiple of $n$, the number of $1\text{s}$ in each array can be made the same through these exchanges.
[ "brute force", "constructive algorithms", "greedy", "implementation", "two pointers" ]
1,600
#include <bits/stdc++.h> int main() { int T; scanf("%d", &T); while (T--) { int n, m; scanf("%d %d", &n, &m); std::vector<std::vector<int>> A(n, std::vector<int>(m, 0)); std::vector<int> sum(n, 0); for (int i = 0; i < n; ++i) { for (int j = 0; j < m; ++j) { scanf("%d", &A[i][j]); sum[i] += A[i][j]; } } int tot = 0; for (int i = 0; i < n; ++i) tot += sum[i]; if (tot % n) { puts("-1"); continue; } tot /= n; std::vector<std::tuple<int, int, int>> ans; std::vector<int> Vg, Vl; Vg.reserve(n), Vl.reserve(n); for (int j = 0; j < m; ++j) { for (int i = 0; i < n; ++i) { if (sum[i] > tot && A[i][j]) Vg.push_back(i); if (sum[i] < tot && !A[i][j]) Vl.push_back(i); } for (int i = 0; i < (int)std::min(Vl.size(), Vg.size()); ++i) { ++sum[Vl[i]], --sum[Vg[i]]; ans.emplace_back(Vl[i], Vg[i], j); } Vl.clear(), Vg.clear(); } printf("%d\n", (int)ans.size()); for (auto [i, j, k] : ans) printf("%d %d %d\n", i + 1, j + 1, k + 1); } return 0; }
1774
E
Two Chess Pieces
Cirno_9baka has a tree with $n$ nodes. He is willing to share it with you, which means you can operate on it. Initially, there are two chess pieces on the node $1$ of the tree. In one step, you can choose any piece, and move it to the neighboring node. You are also given an integer $d$. You need to ensure that the distance between the two pieces doesn't \textbf{ever} exceed $d$. Each of these two pieces has a sequence of nodes which they need to pass \textbf{in any order}, and eventually, they have to return to the root. As a curious boy, he wants to know the minimum steps you need to take.
We can see that the first piece must, at some point, visit the $d$-th ancestor of every node $b_i$; otherwise the distance limit would be violated while the second piece visits $b_i$. Symmetrically, the second piece must visit the $d$-th ancestor of every $a_i$. So we add the $d$-th ancestor of each $b_i$ to the array $a$, and the $d$-th ancestor of each $a_i$ to the array $b$. Now there exists a solution in which each piece only visits its own nodes along the shortest route, without considering the limit $d$, and the total length is easy to compute. Indeed, adopt the following strategy: merge the arrays $a$ and $b$ and sort the nodes by DFS order; if the next node comes from $a$, move the first piece to it, otherwise move the second piece; and whenever the next step of the current piece would violate the distance limit, first move the other piece one step closer to the current one. This way, each piece visits exactly its necessary nodes without extra moves.
[ "dfs and similar", "dp", "greedy", "trees" ]
1,900
#include <bits/stdc++.h> using namespace std; const int N = 1e6 + 5; int t[N * 2], nxt[N * 2], cnt, h[N]; int n, d; void add(int x, int y) { t[++cnt] = y; nxt[cnt] = h[x]; h[x] = cnt; } int a[N], b[N]; bool f[2][N]; void dfs1(int x, int fa, int dis) { a[dis] = x; if (dis > d) b[x] = a[dis - d]; else b[x] = 1; for (int i = h[x]; i; i = nxt[i]) { if (t[i] == fa) continue; dfs1(t[i], x, dis + 1); } } void dfs2(int x, int fa, int tp) { bool u = 0; for (int i = h[x]; i; i = nxt[i]) { if (t[i] == fa) continue; dfs2(t[i], x, tp); u |= f[tp][t[i]]; } f[tp][x] |= u; } int main() { ios_base::sync_with_stdio(false); cin.tie(0); cout.tie(0); cin >> n >> d; for (int i = 1; i < n; i++) { int x, y; cin >> x >> y; add(x, y), add(y, x); } dfs1(1, 0, 1); for (int i = 0; i <= 1; i++) { int num; cin >> num; for (int j = 1; j <= num; j++) { int x; cin >> x; f[i][x] = 1, f[i ^ 1][b[x]] = 1; } } for (int i = 0; i <= 1; i++) dfs2(1, 0, i); int ans = 0; for (int i = 0; i <= 1; i++) for (int j = 2; j <= n; j++) if (f[i][j]) ans += 2; cout << ans; return 0; }
1774
F1
Magician and Pigs (Easy Version)
\textbf{This is the easy version of the problem. The only difference between the two versions is the constraint on $n$ and $x$. You can make hacks only if both versions of the problem are solved.} Little09 has been interested in magic for a long time, and it's so lucky that he meets a magician! The magician will perform $n$ operations, each of them is one of the following three: - $1\ x$: Create a pig with $x$ Health Points. - $2\ x$: Reduce the Health Point of all living pigs by $x$. - $3$: Repeat all previous operations. Formally, assuming that this is the $i$-th operation in the operation sequence, perform the first $i-1$ operations (including "Repeat" operations involved) in turn. A pig will die when its Health Point is less than or equal to $0$. Little09 wants to know how many living pigs there are after all the operations. Please, print the answer modulo $998\,244\,353$.
Let $X=\max x$. Think about what 'Repeat' does. Assume the total damage so far is $tot$ ($tot$ is easy to maintain: it doubles after each 'Repeat' and increases after each 'Attack'). After a repeat, each pig with current HP $w$ ($w > tot$) clones a pig with HP $w-tot$. Why? 'Repeat' redoes everything you just did, so each original pig creates a pig identical to itself at creation time, which is then attacked for $tot$ in total; effectively, a pig with $w-tot$ HP has been cloned. Next, the problem is to maintain a multiset $S$ that supports: adding a number, subtracting $x$ from all numbers, and inserting a copy of each number decreased by $tot$; finally, count the positive elements. After the first 'Attack', $tot$ doubles with every 'Repeat', so it exceeds $X$ after $O(\log X)$ repeats. That is, only $O(\log X)$ 'Repeat' operations are effective, and we can maintain $S$ by brute force: on each effective 'Repeat', take out all numbers greater than $tot$, subtract, and insert them back. Note that we may also perform 'Repeat' operations while $tot=0$; these simply double the number of pigs created so far, so we maintain that multiplier separately. Using a map to maintain $S$, the time complexity is $O((n+X)\log ^2X)$, which passes F1. With some additional care, the complexity can be reduced to $O((n+X)\log X)$.
[ "brute force", "data structures", "implementation" ]
2,400
// Author: Little09 // Problem: F. Magician and Pigs (Easy Version) #include <bits/stdc++.h> using namespace std; #define ll long long const ll mod = 998244353, inv = (mod + 1) / 2; int n; map<ll, ll> s; ll tot, mul = 1, ts = 1; inline void add(ll &x, ll y) { (x += y) >= mod && (x -= mod); } int main() { ios_base::sync_with_stdio(false); cin.tie(0); cout.tie(0); cin >> n; while (n--) { int op; cin >> op; if (op == 1) { ll x; cin >> x; add(s[x + tot], ts); } else if (op == 2) { ll x; cin >> x; tot += x; } else if (tot <= 2e5) { if (tot == 0) mul = mul * 2 % mod, ts = ts * inv % mod; else { for (ll i = tot + 2e5; i > tot; i--) add(s[i + tot], s[i]); tot *= 2; } } } ll res = 0; for (auto i : s) if (i.first > tot) add(res, i.second); res = res * mul % mod; cout << res; return 0; }
1774
F2
Magician and Pigs (Hard Version)
\textbf{This is the hard version of the problem. The only difference between the two versions is the constraint on $n$ and $x$. You can make hacks only if both versions of the problem are solved.} Little09 has been interested in magic for a long time, and it's so lucky that he meets a magician! The magician will perform $n$ operations, each of them is one of the following three: - $1\ x$: Create a pig with $x$ Health Points. - $2\ x$: Reduce the Health Point of all living pigs by $x$. - $3$: Repeat all previous operations. Formally, assuming that this is the $i$-th operation in the operation sequence, perform the first $i-1$ operations (including "Repeat" operations involved) in turn. A pig will die when its Health Point is less than or equal to $0$. Little09 wants to know how many living pigs there are after all the operations. Please, print the answer modulo $998\,244\,353$.
For F1, there is another way. Consider each pig individually. On 'Repeat' it clones, and on 'Attack' all its clones are attacked together. Therefore, looking at all the operations after a pig's creation, at each 'Repeat' you may choose to subtract $x$ (the total damage at that point) or not, while each 'Attack' forces the subtraction. Every final combination of choices corresponds to one pig (living or not), so we just need to count how many combinations of 'Repeat' choices produce a living pig. For a 'Repeat' with $x=0$, the two choices are indistinguishable. For a 'Repeat' with $x\geq 2\times 10^5$, you can only choose not to subtract $x$. Apart from those, only $O(\log X)$ meaningful choices remain, and a subset search or a knapsack over them also passes F1 with time complexity $O((n+X)\log X)$. The bottleneck of using this method for F2 lies in the knapsack: we need to count the subsets with sum $<x$ of a set of size $O(\log X)$. Observation: if we sort the set, each element is greater than the sum of all elements smaller than it. Using this observation, consider the elements from large to small, counting subsets with sum $<x$: if the current element is $\ge x$, it must not be selected; if it is $<x$ and we do not select it, the remaining (smaller) elements can be chosen freely, since their total sum is less than the current element; otherwise select it and recurse with $x$ decreased. Thus, for a given $x$, we can count the subsets with sum $<x$ in $O(\log X)$. The total time complexity is $O(n\log X)$.
[ "binary search", "brute force", "data structures", "implementation" ]
2,700
// Author: Little09 // Problem: F. Magician and Pigs #include <bits/stdc++.h> using namespace std; #define ll long long #define mem(x) memset(x, 0, sizeof(x)) #define endl "\n" #define printYes cout << "Yes\n" #define printYES cout << "YES\n" #define printNo cout << "No\n" #define printNO cout << "NO\n" #define lowbit(x) ((x) & (-(x))) #define pb push_back #define mp make_pair #define pii pair<int, int> #define fi first #define se second const ll inf = 1000000000000000000; // const ll inf=1000000000; const ll mod = 998244353; // const ll mod=1000000007; const int N = 800005; int n, m; ll a[N], b[N], c[N], cnt, s[N], d[N], cntd; int main() { ios_base::sync_with_stdio(false); cin.tie(0); cout.tie(0); cin >> n; ll maxs = 1e9, sum = 0; for (int i = 1; i <= n; i++) { cin >> a[i]; if (a[i] != 3) cin >> b[i]; if (a[i] == 2) sum += b[i]; sum = min(sum, maxs); if (a[i] == 3) b[i] = sum, sum = sum * 2; sum = min(sum, maxs); } sum = 0; ll res = 1, ans = 0; for (int i = n; i >= 1; i--) { if (a[i] == 2) sum += b[i]; else if (a[i] == 3) { if (b[i] == maxs) continue; if (b[i] == 0) { res = res * 2 % mod; continue; } c[++cnt] = b[i]; } else { b[i] -= sum; if (b[i] <= 0) continue; ll su = 0, r = b[i]; for (int j = 1; j <= cnt; j++) { if (r > c[j]) { su = (su + (1ll << (cnt - j))) % mod; r -= c[j]; } } su = (su + 1) % mod; ans = (ans + su * res) % mod; } } cout << ans; return 0; }
1774
G
Segment Covering
ChthollyNotaSeniorious gives DataStructures a number axis with $m$ distinct segments on it. Let $f(l,r)$ be the number of ways to choose an even number of segments such that the union of them is exactly $[l,r]$, and $g(l,r)$ be the number of ways to choose an odd number of segments such that the union of them is exactly $[l,r]$. ChthollyNotaSeniorious asked DataStructures $q$ questions. In each query, ChthollyNotaSeniorious will give DataStructures two numbers $l, r$, and now he wishes that you can help him find the value $f(l,r)-g(l,r)$ modulo $998\,244\,353$ so that he wouldn't let her down.
If there exist two segments $(l_1, r_1), (l_2, r_2)$ such that $l_1 \le l_2 \le r_2 \le r_1$ and we choose $(l_1, r_1)$, then the number of ways that also choose $(l_2, r_2)$ equals the number of ways that don't, with opposite parity. Hence every choice that includes $(l_1, r_1)$ contributes $0$ to the signed count, and we can delete $(l_1, r_1)$. It can be proved that the absolute value of every $f_{l,r} - g_{l,r}$ doesn't exceed $1$. Proof: first, take the segments $(l_i, r_i)$ completely contained in $(l, r)$ and sort them in ascending order of $l_i$. As no remaining segment contains another, the $r_i$ are sorted as well. Assume $l_2 < l_3 \leq r_1$ and $r_3 \geq r_2$. If we choose segments $1$ and $3$, choosing $2$ or not yields the same union with opposite signs, so segment $3$ is useless and can be deleted. After this process, $l_3 > r_1, l_4 > r_2, \cdots$ hold. If $[l_1, r_1] \cup [l_2, r_2] \cup \cdots \cup [l_k, r_k] = [l, r]$, the answer is $(-1)^k$, otherwise it is $0$. (A picture in the original editorial shows what the segments eventually look like.) For each $[l_i, r_i]$, we can find the smallest $j$ such that $l_j > r_i$ and construct a tree by linking such $i$ and $j$; in the editorial's picture, the LCA of segments $1$ and $2$ is $5$, where the answer becomes $0$. So we can get the answer for $(ql, qr)$ quickly by finding the LCA of two segments: the segment starting exactly at $ql$ (if no segment starts at $ql$, the answer is $0$) and the first segment whose $l$ is greater than $ql$ (if it does not intersect the previous segment, the answer is $0$). Also find the segment ending exactly at $qr$: if it lies on the path between the two segments, the answer is $\pm 1$; otherwise, the answer is $0$.
[ "brute force", "combinatorics", "constructive algorithms", "data structures", "dp", "trees" ]
3,200
#include <bits/stdc++.h> #define File(a) freopen(a ".in", "r", stdin), freopen(a ".out", "w", stdout) using tp = std::tuple<int, int, int>; const int sgn[] = {1, 998244352}; const int N = 200005; int x[N], y[N]; std::vector<tp> V; bool del[N]; int fa[20][N]; int m, q; int main() { scanf("%d %d", &m, &q); for (int i = 1; i <= m; ++i) scanf("%d %d", x + i, y + i), V.emplace_back(y[i], -x[i], i); std::sort(V.begin(), V.end()); int mxl = 0; for (auto [y, x, i] : V) { if (-x <= mxl) del[i] = true; mxl = std::max(mxl, -x); } V.clear(); x[m + 1] = y[m + 1] = 1e9 + 1; for (int i = 1; i <= m + 1; ++i) { if (!del[i]) V.emplace_back(x[i], y[i], i); } std::sort(V.begin(), V.end()); for (auto [x, y, id] : V) { int t = std::get<2>(*std::lower_bound(V.begin(), V.end(), tp{y + 1, 0, 0})); fa[0][id] = t; } fa[0][m + 1] = m + 1; for (int k = 1; k <= 17; ++k) { for (int i = 1; i <= m + 1; ++i) fa[k][i] = fa[k - 1][fa[k - 1][i]]; } for (int i = 1; i <= q; ++i) { int l, r; scanf("%d %d", &l, &r); int u = std::lower_bound(V.begin(), V.end(), tp{l, 0, 0}) - V.begin(), v = u + 1; u = std::get<2>(V[u]); if (x[u] != l || y[u] > r) { puts("0"); continue; } if (y[u] == r) { puts("998244352"); continue; } v = std::get<2>(V[v]); if (y[v] > r || x[v] > y[u]) { puts("0"); continue; } int numu = 0, numv = 0; for (int i = 17; i >= 0; --i) { if (y[fa[i][u]] <= r) { u = fa[i][u]; numu += !i; } } for (int i = 17; i >= 0; --i) { if (y[fa[i][v]] <= r) { v = fa[i][v]; numv += !i; } } if (u == v || (y[u] != r && y[v] != r)) puts("0"); else printf("%d\n", sgn[numu ^ numv]); } return 0; }
1774
H
Maximum Permutation
Ecrade bought a deck of cards numbered from $1$ to $n$. Let the value of a permutation $a$ of length $n$ be $\min\limits_{i = 1}^{n - k + 1}\ \sum\limits_{j = i}^{i + k - 1}a_j$. Ecrade wants to find the most valuable one among all permutations of the cards. However, it seems a little difficult, so please help him!
When it seems hard to come up with the whole solution directly, simplified versions of the question can often be a great helper. So first let us consider the case where $n$ is a multiple of $k$. Let $t=\frac{n}{k}$. What is the largest value one can obtain theoretically? Pick out the $t$ subsegments $a_{[1:k]},a_{[k+1:2k]},\dots,a_{[n-k+1:n]}$, and one can see that the answer cannot be greater than the average sum of these subsegments, that is $\frac{n(n+1)}{2t}=\frac{k(n+1)}{2}$. If $k$ is even, one can construct $a=[1,n,2,n-1,3,n-2,\dots,\frac{n}{2},\frac{n}{2}+1]$ to reach this maximum. For easier understanding, let us put $a_{[1:k]},a_{[k+1:2k]},\dots,a_{[n-k+1:n]}$ into a $t\times k$ table from top to bottom (the original editorial illustrates the constructions below with pictures). Note that the difference between two consecutive subsegments equals the difference between two values in the same column of this table. This inspires us to fill the last $k-3$ columns in an S-shaped way. Then our goal is to split the remaining $3t$ numbers into groups of $3$ and minimize the differences between the group sums. If $n$ is odd, the theoretical maximum mentioned above is an integer, and the sum of each group must be equal to reach it. Otherwise, the theoretical maximum is not an integer, and the group sums cannot all be equal. So let us first consider the case where $n$ is odd. A feasible approach to split the numbers is as follows: $(1,\frac{3t+1}{2},3t),(3,\frac{3t-1}{2},3t-1),\dots,(t,t+1,\frac{5t+1}{2}),(2,2t,\frac{5t-1}{2}),(4,2t-1,\frac{5t-3}{2}),\dots,(t-1,\frac{3t+3}{2},2t+1)$. Then fill them into the table. Similarly, one can come up with an approach when $n$ is even. Okay, it's time for us to go back to the original problem. Let $n=qk+r\ (r,q\in \mathbb{N},1\le r<k)$. Split the $n$ elements so that there are $(q+1)$ red subsegments with $r$ elements each and $q$ blue subsegments with $(k-r)$ elements each, alternating red and blue. 
What is the largest value one can obtain theoretically now? Pick out $q$ non-intersecting subsegments, each consisting of a whole red subsegment and a whole blue subsegment, and one can see that the answer cannot be greater than the sum of $1 \thicksim n$ minus the sum of any whole red subsegment, divided by $q$. Similarly, the answer cannot be greater than the sum of $1 \thicksim n$ plus the sum of any whole blue subsegment, divided by $(q+1)$. Thus, our goal is to minimize the maximum sum over the red subsegments and to maximize the minimum sum over the blue subsegments. Now here comes an interesting claim: it can be proved that in each of the following cases both constraints are satisfied and the value of the permutation reaches the theoretical maximum. If $r\neq 1$ and $k-r\neq 1$, one can fill the red subsegments using the method for the case when $n$ is a multiple of $k$ (here $n'=(q+1)r,k'=r$) with $1\thicksim (q+1)r$, and the blue subsegments (here $n'=q(k-r),k'=k-r$) with the remaining numbers. If $r=1$, one can fill the red subsegments with $1\thicksim (q+1)$ from left to right, the first element of each blue subsegment with $(q+2)\thicksim (2q+1)$ from right to left, and the blue subsegments without their first elements using the method for a multiple of $k$ (here $n'=q(k-2),k'=k-2$) with the remaining numbers. If $k-r=1$ and $q=1$, one can let $a_k=n$ and fill the remaining two subsegments using the method for a multiple of $k$ (here $n'=n-1,k'=k-1$) with the remaining numbers. If $k-r=1$ and $q>1$, one can fill the blue subsegments with $(n-q+1)\thicksim n$ from right to left, the first element of each red subsegment with $1\thicksim (q+1)$ from right to left, and the red subsegments without their first elements using the method for a multiple of $k$ (here $n'=(q+1)(r-1),k'=r-1$) with the remaining numbers. The proof is omitted here. Time complexity: $O(\sum n)$. Bonus: solve the problem if $k\ge 2$.
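As a sanity check, the even-$k$ construction above can be verified numerically. The sketch below (Python, written for illustration and not part of the author's solution) builds $a=[1,n,2,n-1,\dots]$ and confirms that for even $k$ dividing $n$, the minimum window sum equals the theoretical maximum $\frac{k(n+1)}{2}$:

```python
def build_even_k(n):
    # interleave smallest and largest values: [1, n, 2, n-1, ...]
    a = []
    for i in range(1, n // 2 + 1):
        a += [i, n + 1 - i]
    return a

def window_min(a, k):
    # value of the permutation: minimum sum over all windows of length k
    return min(sum(a[i:i + k]) for i in range(len(a) - k + 1))

for n in (8, 12):
    for k in (2, 4):
        a = build_even_k(n)
        assert sorted(a) == list(range(1, n + 1))   # it is a permutation
        assert window_min(a, k) == k * (n + 1) // 2  # theoretical maximum
```

Windows starting at even offsets consist of $k/2$ pairs $(i, n+1-i)$, each summing to $n+1$; windows at odd offsets are strictly larger, which is why the minimum hits the bound exactly.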
[ "constructive algorithms" ]
3,500
#include<bits/stdc++.h> using namespace std; typedef long long ll; ll t,n,k,seq[100009],ans[100009]; inline ll read(){ ll s = 0,w = 1; char ch = getchar(); while (ch > '9' || ch < '0'){ if (ch == '-') w = -1; ch = getchar();} while (ch <= '9' && ch >= '0') s = (s << 1) + (s << 3) + (ch ^ 48),ch = getchar(); return s * w; } ll f(ll x,ll y,ll k){return (x - 1) * k + y;} void get(ll n,ll k){ if (!(k & 1)){ for (ll i = 1;i <= n >> 1;i += 1) seq[(i << 1) - 1] = i,seq[i << 1] = n + 1 - i; return; } ll m = n / k,cur = 3 * m; for (ll i = 4;i <= k;i += 1){ if (i & 1) for (ll j = m;j >= 1;j -= 1) seq[f(j,i,k)] = ++ cur; else for (ll j = 1;j <= m;j += 1) seq[f(j,i,k)] = ++ cur; } for (ll i = 1;i <= (m + 1 >> 1);i += 1){ seq[f(i,1,k)] = (i << 1) - 1; seq[f(i,2,k)] = ((3 * m + 3) >> 1) - i; seq[f(i,3,k)] = 3 * m - i + 1; } for (ll i = (m + 3 >> 1);i <= m;i += 1){ ll delta = i - (m + 3 >> 1); seq[f(i,1,k)] = ((3 * m + 3) >> 1) + delta; seq[f(i,2,k)] = (m << 1) + 1 + delta; seq[f(i,3,k)] = m - (m & 1) - (delta << 1); } } void print(){ ll res = 0,sum = 0; for (ll i = 1;i <= k;i += 1) sum += ans[i]; res = sum; for (ll i = k + 1;i <= n;i += 1) sum += ans[i] - ans[i - k],res = min(res,sum); printf("%lld\n",res); for (ll i = 1;i <= n;i += 1) printf("%lld ",ans[i]); puts(""); } int main(){ t = read(); while (t --){ n = read(),k = read(); if (!(n % k)){ get(n,k); for (ll i = 1;i <= n;i += 1) ans[i] = seq[i]; print(); continue; } ll q = n / k,r = n % k; if (r == 1){ ll cur = 0,delta = (q << 1) + 1; for (ll i = 1;i <= n;i += k) ans[i] = ++ cur; for (ll i = n - k + 1;i >= 2;i -= k) ans[i] = ++ cur; get(q * (k - 2),k - 2),cur = 0; for (ll i = 3;i <= n;i += k) for (ll j = i;j <= i + k - 3;j += 1) ans[j] = seq[++ cur] + delta; print(); continue; } if (k - r == 1){ if (q == 1){ ll cur = 0; ans[k] = n; get(n - 1,k - 1); for (ll i = 1;i < k;i += 1) ans[i] = seq[++ cur]; for (ll i = k + 1;i <= n;i += 1) ans[i] = seq[++ cur]; print(); continue; } ll cur = n + 1,delta = q + 1; for (ll i = k;i <= 
n;i += k) ans[i] = -- cur; cur = 0; for (ll i = 1;i <= n;i += k) ans[i] = ++ cur; get((q + 1) * (r - 1),r - 1),cur = 0; for (ll i = 2;i <= n;i += k) for (ll j = i;j <= i + r - 2;j += 1) ans[j] = seq[++ cur] + delta; print(); continue; } ll cur = 0,delta = (q + 1) * r; get((q + 1) * r,r); for (ll i = 1;i <= n;i += k) for (ll j = i;j <= i + r - 1;j += 1) ans[j] = seq[++ cur]; get(q * (k - r),k - r),cur = 0; for (ll i = r + 1;i <= n;i += k) for (ll j = i;j <= i + (k - r) - 1;j += 1) ans[j] = seq[++ cur] + delta; print(); } return 0; }
1775
A1
Gardener and the Capybaras (easy version)
\textbf{This is an easy version of the problem. The difference between the versions is that the string can be longer than in the easy version. You can only do hacks if both versions of the problem are passed.} Kazimir Kazimirovich is a Martian gardener. He has a huge orchard of binary balanced apple trees. Recently Casimir decided to get himself three capybaras. The gardener even came up with their names and wrote them down on a piece of paper. The name of each capybara is a non-empty line consisting of letters "a" and "b". Denote the names of the capybaras by the lines $a$, $b$, and $c$. Then Casimir wrote the nonempty lines $a$, $b$, and $c$ in a row without spaces. For example, if the capybara's name was "aba", "ab", and "bb", then the string the gardener wrote down would look like "abaabbb". The gardener remembered an interesting property: either the string $b$ is lexicographically not smaller than the strings $a$ and $c$ at the same time, or the string $b$ is lexicographically not greater than the strings $a$ and $c$ at the same time. In other words, either $a \le b$ and $c \le b$ are satisfied, or $b \le a$ and $b \le c$ are satisfied (or possibly both conditions simultaneously). Here $\le$ denotes the lexicographic "less than or equal to" for strings. Thus, $a \le b$ means that the strings must either be equal, or the string $a$ must stand earlier in the dictionary than the string $b$. For a more detailed explanation of this operation, see "Notes" section. Today the gardener looked at his notes and realized that he cannot recover the names because they are written without spaces. He is no longer sure if he can recover the original strings $a$, $b$, and $c$, so he wants to find any triplet of names that satisfy the above property.
To solve this problem, it is enough to consider all ways of splitting the string into three non-empty substrings; there are only $O(n^2)$ such splits, and each candidate triple can be checked directly.
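A minimal Python sketch of this brute force (function names are mine, not from the solution code): try every pair of split points and return the first triple satisfying the property.

```python
def solve_easy(s):
    # try every split point pair (i, j): a = s[:i], b = s[i:j], c = s[j:]
    n = len(s)
    for i in range(1, n - 1):
        for j in range(i + 1, n):
            a, b, c = s[:i], s[i:j], s[j:]
            if (a <= b and c <= b) or (b <= a and b <= c):
                return a, b, c
    return None

a, b, c = solve_easy("abaabbb")
assert a + b + c == "abaabbb"
assert (a <= b and c <= b) or (b <= a and b <= c)
```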
[ "brute force", "constructive algorithms", "implementation" ]
800
"#include <bits/stdc++.h>\n#define sz(x) ((int)x.size())\nusing namespace std;\nstring s;\nint n;\n\nvoid Solve() {\n cin >> s;\n n = sz(s);\n\n if (s[0] == s[1]) {\n cout << s[0] << \" \" << s[1] << \" \" << s.substr(2) << '\\n';\n return;\n }\n if (s[n - 2] == s[n - 1]) {\n cout << s.substr(0, n - 2) << \" \" << s[n - 2] << \" \" << s[n - 1] << '\\n';\n return;\n }\n\n if (s[0] < s[1]) {\n for (int i = 1; i < n - 1; i++) {\n if (s[i] > s[i + 1]) {\n cout << s.substr(0, i) << \" \" << s[i] << \" \" << s.substr(i + 1) << '\\n';\n return;\n }\n }\n } else {\n for (int i = 1; i < n - 1; i++) {\n if (s[i] <= s[i + 1]) {\n cout << s.substr(0, i) << \" \" << s[i] << \" \" << s.substr(i + 1) << '\\n';\n return;\n }\n }\n }\n\n cout << \":(\\n\";\n}\n\nint main() {\n ios_base::sync_with_stdio(false); cin.tie(nullptr);\n int t;\n cin >> t;\n\n for (int i = 0; i < t; i++) {\n Solve();\n }\n\n return 0;\n}\n"
1775
A2
Gardener and the Capybaras (hard version)
\textbf{This is the hard version of the problem. The difference between the versions is that the string can be longer than in the easy version. You can only do hacks if both versions of the problem are passed.} Kazimir Kazimirovich is a Martian gardener. He has a huge orchard of binary balanced apple trees. Recently Casimir decided to get himself three capybaras. The gardener even came up with their names and wrote them down on a piece of paper. The name of each capybara is a non-empty line consisting of letters "a" and "b". Denote the names of the capybaras by the lines $a$, $b$, and $c$. Then Casimir wrote the nonempty lines $a$, $b$, and $c$ in a row without spaces. For example, if the capybara's name was "aba", "ab", and "bb", then the string the gardener wrote down would look like "abaabbb". The gardener remembered an interesting property: either the string $b$ is lexicographically not smaller than the strings $a$ and $c$ at the same time, or the string $b$ is lexicographically not greater than the strings $a$ and $c$ at the same time. In other words, either $a \le b$ and $c \le b$ are satisfied, or $b \le a$ and $b \le c$ are satisfied (or possibly both conditions simultaneously). Here $\le$ denotes the lexicographic "less than or equal to" for strings. Thus, $a \le b$ means that the strings must either be equal, or the string $a$ must stand earlier in the dictionary than the string $b$. For a more detailed explanation of this operation, see "Notes" section. Today the gardener looked at his notes and realized that he cannot recover the names because they are written without spaces. He is no longer sure if he can recover the original strings $a$, $b$, and $c$, so he wants to find any triplet of names that satisfy the above property.
Consider five cases to solve the task: If the string starts with $aa$, then we can split it into $a = s[0]$, $b = s[1]$, $c = s[2 ... n - 1]$. If the string starts with $bb$, then we can split it into $a = s[0]$, $b = s[1]$, $c = s[2 ... n - 1]$. If the string starts with $ba$, then we can split it into $a = s[0]$, $b = s[1]$, $c = s[2 ... n - 1]$. If the string starts with $ab$, and then there is another letter a at position $i > 1$, then we can split it into $a = s[0]$, $b = s[1 ... i - 1]$, $c = s[i ... n - 1]$. If the string starts with $ab$, and all other letters are b, then we can split it into $a = s[0 ... n - 3]$, $b = s[n - 2]$, $c = s[n - 1]$.
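The five cases translate directly into code. A hedged Python sketch (an illustration of the case analysis, not the author's C++ solution), exhaustively verified against the required property on all short binary strings:

```python
from itertools import product

def solve(s):
    # cases 1-3: the string starts with "aa", "bb" or "ba"
    if s[:2] in ("aa", "bb", "ba"):
        return s[0], s[1], s[2:]
    # s starts with "ab"; case 4: another 'a' at some position i > 1
    i = s.find("a", 2)
    if i != -1:
        return s[0], s[1:i], s[i:]
    # case 5: s = "ab...b", all remaining letters are 'b'
    return s[:-2], s[-2], s[-1]

for n in range(3, 11):
    for t in product("ab", repeat=n):
        s = "".join(t)
        a, b, c = solve(s)
        assert a and b and c and a + b + c == s
        assert (a <= b and c <= b) or (b <= a and b <= c)
```

The exhaustive loop shows that a valid split always exists, so the ":(" branch of the reference code is never reached.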
[ "constructive algorithms", "greedy" ]
900
"#include <bits/stdc++.h>\n#define sz(x) ((int)x.size())\nusing namespace std;\nstring s;\nint n;\n\nvoid Solve() {\n cin >> s;\n n = sz(s);\n\n if (s[0] == s[1]) {\n cout << s[0] << \" \" << s[1] << \" \" << s.substr(2) << '\\n';\n return;\n }\n if (s[n - 2] == s[n - 1]) {\n cout << s.substr(0, n - 2) << \" \" << s[n - 2] << \" \" << s[n - 1] << '\\n';\n return;\n }\n\n if (s[0] < s[1]) {\n for (int i = 1; i < n - 1; i++) {\n if (s[i] > s[i + 1]) {\n cout << s.substr(0, i) << \" \" << s[i] << \" \" << s.substr(i + 1) << '\\n';\n return;\n }\n }\n } else {\n for (int i = 1; i < n - 1; i++) {\n if (s[i] <= s[i + 1]) {\n cout << s.substr(0, i) << \" \" << s[i] << \" \" << s.substr(i + 1) << '\\n';\n return;\n }\n }\n }\n\n cout << \":(\\n\";\n}\n\nint main() {\n ios_base::sync_with_stdio(false); cin.tie(nullptr);\n int t;\n cin >> t;\n\n for (int i = 0; i < t; i++) {\n Solve();\n }\n\n return 0;\n}\n"
1775
B
Gardener and the Array
The gardener Kazimir Kazimirovich has an array of $n$ integers $c_1, c_2, \dots, c_n$. He wants to check if there are two different subsequences $a$ and $b$ of the original array, for which $f(a) = f(b)$, where $f(x)$ is the bitwise OR of all of the numbers in the sequence $x$. A sequence $q$ is a subsequence of $p$ if $q$ can be obtained from $p$ by deleting several (possibly none or all) elements. Two subsequences are considered different if the sets of indexes of their elements in the original sequence are different, that is, the values of the elements are not considered when comparing the subsequences.
The problem can be solved as follows: for each bit, count the number of its occurrences across all numbers in the test case. If every number contains a bit which occurs in all numbers exactly once, then the answer is "NO"; otherwise the answer is "YES". Let us prove this. Suppose there is a number in which every bit occurs at least $2$ times across all numbers. Then we can construct a sequence $a$ using all numbers, and a sequence $b$ using all numbers except this one: removing it does not change the bitwise OR, so $f(a) = f(b)$. Conversely, if each number has a "unique" bit, then for any two different subsequences there is an index belonging to exactly one of them, and the unique bit of that number appears in one OR but not in the other, so all values $f(x)$ are different.
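A minimal Python sketch of this counting argument (the function name is mine; the input mirrors the problem format, where each number is given as its set of bit positions):

```python
from collections import Counter

def two_equal_ors_exist(bit_lists):
    # bit_lists[i] is the set of bit positions present in the i-th number
    occ = Counter(b for bits in bit_lists for b in bits)
    # answer is YES iff some number has no bit that is unique to it:
    # removing that number from the full sequence keeps the OR unchanged
    return any(all(occ[b] > 1 for b in bits) for bits in bit_lists)

# the number with bits {0, 1} is fully covered by {0} and {1}
assert two_equal_ors_exist([{0}, {1}, {0, 1}]) is True
# every number has a unique bit, so all subsequence ORs differ
assert two_equal_ors_exist([{0}, {1}, {2}]) is False
```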
[ "bitmasks", "constructive algorithms" ]
1,300
"#include <iostream>\n#include <vector>\n#include <set>\n#include <algorithm>\n#include <ctime>\n#include <cmath>\n#include <map>\n#include <assert.h>\n#include <fstream>\n#include <cstdlib>\n#include <random>\n#include <iomanip>\n#include <queue>\n#include <random>\n#include <unordered_map>\n \nusing namespace std;\n \n#define sqr(a) ((a)*(a))\n#define all(a) (a).begin(), (a).end()\n \nlong long MOD = (int) 1e9 + 7;\n\nvoid solve() {\n int n;\n cin >> n;\n\n vector<vector<int> > a(n);\n map<int, int> occurrences;\n for (int i = 0; i < n; ++i) {\n int k;\n cin >> k;\n\n a[i].resize(k);\n for (int j = 0; j < k; ++j) {\n cin >> a[i][j];\n --a[i][j];\n\n ++occurrences[a[i][j]];\n }\n }\n\n for (int i = 0; i < n; ++i) {\n bool find = false;\n for (int j = 0; j < a[i].size(); ++j) {\n if (occurrences[a[i][j]] == 1) {\n find = true;\n break;\n }\n }\n\n if (!find) {\n cout << \"Yes\\n\";\n return;\n }\n }\n\n cout << \"No\\n\";\n}\n\nint main() {\n // freopen(\"input.txt\", \"r\", stdin);\n\n int tests = 1;\n cin >> tests;\n \n for (int i = 1; i <= tests; ++i) {\n solve();\n }\n}\n"
1775
C
Interesting Sequence
Petya and his friend, robot Petya++, like to solve exciting math problems. One day Petya++ came up with the numbers $n$ and $x$ and wrote the following equality on the board: $$n\ \&\ (n+1)\ \&\ \dots\ \&\ m = x,$$ where $\&$ denotes the bitwise AND operation. Then he suggested his friend Petya find such a minimal $m$ ($m \ge n$) that the equality on the board holds. Unfortunately, Petya couldn't solve this problem in his head and decided to ask for computer help. He quickly wrote a program and found the answer. Can you solve this difficult problem?
Note that the answer is $-1$ when $n$ AND $x \neq x$: this holds when there is a bit with number $b$ which is set in $x$ but not in $n$. It is clear that some bits of $n$ must be zeroed. Since we take the bitwise AND of consecutive numbers starting from $n$, the bits are cleared from the lowest to the highest. Thus some prefix of the low bits of $n$ gets zeroed out, so if $x$ is not equal to $n$ with a (possibly empty) prefix of low bits zeroed, there is no answer. Now we can calculate, for each bit $i$, the minimal number $m_i$ such that $n$ AND $n + 1$ AND $\ldots$ AND $m_i$ has $0$ in the $i$-th bit. Let $m_{zero}$ be the maximum of $m_i$ over all bits that must be zeroed, and $m_{one}$ the minimum of $m_i$ over all bits that must be left untouched. Then, if $m_{zero} < m_{one}$, the answer is $m_{zero}$; otherwise there is no answer. The problem can also be solved by binary search: we search for $m$ and check a candidate by a formula: for each bit, find the nearest $m$ at which it is zeroed. We can do this using the following fact: for the $i$-th bit, the first $2^i$ numbers (starting from zero) do not contain it, the next $2^i$ do, the next $2^i$ do not, and so on. Such a solution works in $O(\log^2 n)$.
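The per-bit formula from the reference solution, where the first number at which a set bit $i$ of $n$ becomes $0$ is $n - (n \bmod 2^i) + 2^i$, can be sketched in Python and cross-checked against a brute force on small values (the `brute` helper is my own check, not from the editorial):

```python
def min_m(n, x):
    INF = float("inf")
    m_zero, m_one = n, INF
    mask = 0  # bits strictly below i
    for i in range(61):
        if (n >> i) & 1 == 0:
            if (x >> i) & 1:
                return -1  # bit set in x but not in n
        else:
            # first m >= n whose i-th bit is 0: clear low bits, add 2^i
            now = n - (n & mask) + (1 << i)
            if (x >> i) & 1:
                m_one = min(m_one, now)    # this bit must survive
            else:
                m_zero = max(m_zero, now)  # this bit must be cleared
        mask |= 1 << i
    return m_zero if m_zero < m_one else -1

def brute(n, x, limit=512):
    cur = n
    for m in range(n, limit):
        cur &= m
        if cur == x:
            return m
    return -1

for n in range(1, 64):
    for x in range(64):
        assert min_m(n, x) == brute(n, x)
```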
[ "bitmasks", "math" ]
1,600
"#pragma GCC optimize(\"O3\")\n#include<bits/stdc++.h>\n\n#define ll long long\n#define pb push_back\n#define ld long double\n#define f first\n#define s second\n\nusing namespace std;\nconst ll inf = (1ll << 62);\nint32_t main() {\n\n ios_base::sync_with_stdio(0);\n cin.tie(0);\n cout.tie(0);\n ll t;\n cin >> t;\n while(t--) {\n ll n, x;\n cin >> n >> x;\n\n ll ma = n, mi = inf;\n ll mask = 0;\n for(ll i = 0; i < 61; i++) {\n if(!((n >> i) & 1)) {\n if((x >> i) & 1) {\n mi = -1;\n break;\n }\n } else {\n ll now = n + (1ll << i) - (n & mask);\n if((x >> i) & 1) {\n mi = min(mi, now);\n } else {\n ma = max(ma, now);\n }\n }\n mask += (1ll << i);\n }\n if(ma < mi) cout << ma << '\\n';\n else cout << \"-1\\n\";\n }\n\n\n}\n\n"
1775
D
Friendly Spiders
Mars is home to an unusual species of spiders — Binary spiders. Right now, Martian scientists are observing a colony of $n$ spiders, the $i$-th of which has $a_i$ legs. Some of the spiders are friends with each other. Namely, the $i$-th and $j$-th spiders are friends if $\gcd(a_i, a_j) \ne 1$, i. e., there is some integer $k \ge 2$ such that $a_i$ and $a_j$ are simultaneously divided by $k$ without a remainder. Here $\gcd(x, y)$ denotes the greatest common divisor (GCD) of integers $x$ and $y$. Scientists have discovered that spiders can send messages. If two spiders are friends, then they can transmit a message directly in one second. Otherwise, the spider must pass the message to his friend, who in turn must pass the message to his friend, and so on until the message reaches the recipient. Let's look at an example. Suppose a spider with eight legs wants to send a message to a spider with $15$ legs. He can't do it directly, because $\gcd(8, 15) = 1$. But he can send a message through the spider with six legs because $\gcd(8, 6) = 2$ and $\gcd(6, 15) = 3$. Thus, the message will arrive in two seconds. Right now, scientists are observing how the $s$-th spider wants to send a message to the $t$-th spider. The researchers have a hypothesis that spiders always transmit messages optimally. For this reason, scientists would need a program that could calculate the minimum time to send a message and also deduce one of the optimal routes.
Note that if we construct the graph by definition, it will be large. This leads us to the idea of making it more compact. Let us create a bipartite graph whose left part consists of $n$ vertices corresponding to the numbers $a_i$, and whose right part has one vertex for each prime number not larger than the maximal number in the left part. Draw an edge from vertex $v$ of the left part to vertex $u$ of the right part if and only if $a_v$ is divisible by the prime corresponding to $u$. Run BFS from vertex $s$ in this graph; the answer is the distance to vertex $t$ divided by $2$, since every step between spiders passes through a prime vertex. Now, how to construct such a graph quickly? Each number $a_i$ has at most $\log_2 a_i$ distinct prime divisors, so we factorize $a_v$ and draw edges from vertex $v$ to each prime in its factorization.
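A self-contained Python sketch of the bipartite-BFS idea (trial-division factorization for brevity; the real solution uses a smallest-prime-factor sieve, and also reconstructs the route, which is omitted here):

```python
from collections import deque

def min_seconds(a, s, t):
    # bipartite graph: spider vertices 0..n-1, prime vertices n, n+1, ...
    n = len(a)
    primes = {}
    adj = [[] for _ in range(n)]

    def factor(x):
        fs, d = set(), 2
        while d * d <= x:
            while x % d == 0:
                fs.add(d)
                x //= d
            d += 1
        if x > 1:
            fs.add(x)
        return fs

    for i, v in enumerate(a):
        for p in factor(v):
            primes.setdefault(p, n + len(primes))
            adj[i].append(primes[p])
    adj += [[] for _ in range(len(primes))]
    for i in range(n):
        for u in adj[i]:
            adj[u].append(i)

    dist = {s: 0}
    q = deque([s])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    # each spider-to-spider hop is two edges (spider -> prime -> spider)
    return dist[t] // 2 if t in dist else -1

# the statement's example: 8 -> 6 -> 15 takes two seconds
assert min_seconds([8, 15, 6], 0, 1) == 2
assert min_seconds([8, 15], 0, 1) == -1
```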
[ "dfs and similar", "graphs", "math", "number theory", "shortest paths" ]
1,800
"#include <bits/stdc++.h>\nusing namespace std;\ntypedef long long ll;\nconst int N = 300100;\nconst int kMaxA = 300100;\nconst int oo = 1e9;\nvector<int> edges[kMaxA];\nint dist[kMaxA], prv[kMaxA];\nbool used[kMaxA];\nint prv_id[kMaxA];\nint min_prime[kMaxA];\nint n, a[N];\nint boss[kMaxA];\nint mem_id[kMaxA];\nint source, dest;\n\nint main() {\n ios_base::sync_with_stdio(false);\n cin.tie(nullptr);\n\n cin >> n;\n for (int i = 0; i < n; i++) {\n cin >> a[i];\n mem_id[a[i]] = i;\n }\n cin >> source >> dest;\n source--; dest--;\n\n if (a[source] == a[dest]) {\n if (source == dest) {\n cout << \"1\\n\" << source + 1 << '\\n';\n } else if (a[source] == 1) {\n cout << -1;\n } else {\n cout << \"2\\n\" << source + 1 << \" \" << dest + 1;\n }\n return 0;\n }\n\n fill(min_prime, min_prime + kMaxA, oo);\n\n for (int prime = 2; prime < kMaxA; prime++) {\n if (min_prime[prime] != oo) {\n continue;\n }\n min_prime[prime] = prime;\n\n if (ll(prime) * ll(prime) >= kMaxA) {\n continue;\n }\n\n for (int value = prime * prime; value < kMaxA; value += prime) {\n min_prime[value] = min(min_prime[value], prime);\n }\n }\n\n fill(boss, boss + kMaxA, -1);\n\n fill(dist, dist + kMaxA, oo);\n fill(prv, prv + kMaxA, -1);\n queue<int> pr_queue;\n\n for (int i = 0; i < n; i++) {\n bool is_need = edges[a[i]].empty();\n\n int value = a[i];\n int pre_prime = -1;\n while (value > 1) {\n int cur_prime = min_prime[value];\n if (cur_prime != pre_prime && is_need) {\n edges[a[i]].push_back(cur_prime);\n }\n\n if (cur_prime != pre_prime && i == source) {\n dist[cur_prime] = 0;\n pr_queue.push(cur_prime);\n }\n\n pre_prime = cur_prime;\n boss[cur_prime] = i;\n value /= cur_prime;\n }\n }\n\n fill(used, used + kMaxA, false);\n\n while (!pr_queue.empty()) {\n int cur_prime = pr_queue.front();\n pr_queue.pop();\n\n for (int value = cur_prime * 2; value < kMaxA; value += cur_prime) {\n if (used[value]) {\n continue;\n }\n used[value] = true;\n\n for (int next_prime : edges[value]) {\n if 
(dist[next_prime] == oo) {\n dist[next_prime] = dist[cur_prime] + 1;\n prv_id[next_prime] = mem_id[value];\n prv[next_prime] = cur_prime;\n pr_queue.push(next_prime);\n }\n }\n }\n }\n\n int best_dist = oo;\n int best_prime = -1;\n\n for (int prime : edges[a[dest]]) {\n if (dist[prime] < best_dist) {\n best_dist = dist[prime];\n best_prime = prime;\n }\n }\n\n if (best_dist == oo) {\n cout << -1;\n return 0;\n }\n\n vector<int> route;\n\n int cur_prime = best_prime;\n route.push_back(dest);\n route.push_back(prv_id[best_prime]);\n while (prv[cur_prime] != -1) {\n cur_prime = prv[cur_prime];\n route.push_back(prv_id[cur_prime]);\n }\n\n std::reverse(route.begin(), route.end());\n route.front() = source;\n\n cout << route.size() << '\\n';\n for (int id : route) {\n cout << id + 1 << \" \";\n }\n\n return 0;\n}\n"
1775
E
The Human Equation
Petya and his friend, the robot Petya++, went to BFDMONCON, where the costume contest is taking place today. While walking through the festival, they came across a scientific stand named after Professor Oak and Golfball, where they were asked to solve an interesting problem. Given a sequence of numbers $a_1, a_2, \dots, a_n$ you can perform several operations on this sequence. Each operation should look as follows. You choose some subsequence$^\dagger$. Then you call all the numbers at odd positions in this subsequence northern, and all the numbers at even positions in this subsequence southern. In this case, only the position of the number in the subsequence is taken into account, not in the original sequence. For example, consider the sequence $1, 4, 2, 8, 5, 7, 3, 6, 9$ and its subsequence (shown in bold) $1, \mathbf{4}, \mathbf{2}, 8, \mathbf{5}, 7, 3, \mathbf{6}, 9$. Then the numbers $4$ and $5$ are northern, and the numbers $2$ and $6$ are southern. After that, you can do one of the following: - add $1$ to all northern numbers and subtract $1$ from all south numbers; or - add $1$ to all southern numbers and subtract $1$ from all northern numbers. Thus, from the sequence $1, \mathbf{4}, \mathbf{2}, 8, \mathbf{5}, 7, 3, \mathbf{6}, 9$, if you choose the subsequence shown in bold, you can get either $1, \mathbf{5}, \mathbf{1}, 8, \mathbf{6}, 7, 3, \mathbf{5}, 9$ or $1, \mathbf{3}, \mathbf{3}, 8, \mathbf{4}, 7, 3, \mathbf{7}, 9$. Then the operation ends. Note also that all operations are independent, i. e. the numbers are no longer called northern or southern when one operation ends. It is necessary to turn all the numbers of the sequence into zeros using the operations described above. Since there is very little time left before the costume contest, the friends want to know, what is the minimum number of operations required for this. The friends were unable to solve this problem, so can you help them? 
$^\dagger$ A sequence $c$ is a subsequence of a sequence $d$ if $c$ can be obtained from $d$ by the deletion of several (possibly, zero or all) elements.
Let's calculate the array of prefix sums. What do the operations look like in it? Each operation becomes "add $1$" or "subtract $1$" on some set of positions of the prefix-sum array. Why? If we take indices $i$ and $j$ and apply the operation to them (i.e. $a_i = a_i + 1$ and $a_j = a_j - 1$), this adds $1$ to the prefix sums on the segment $[i \dots j - 1]$; a whole operation consists of several such disjoint pairs, so it adds $1$ on a union of disjoint segments. We still need to make the array all zeros, which is the same as making all prefix sums zero. How? In each operation, add $1$ to all prefix sums that are less than zero, then subtract $1$ from all that are greater than zero. From this we get that the answer is the difference between the maximum and the minimum prefix sum (counting the empty prefix, whose sum is $0$).
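The final formula fits in a few lines. A Python sketch equivalent to the reference C++ solution:

```python
def min_operations(a):
    # answer = (max prefix sum) - (min prefix sum),
    # counting the empty prefix, whose sum is 0
    pref, lo, hi = 0, 0, 0
    for v in a:
        pref += v
        lo = min(lo, pref)
        hi = max(hi, pref)
    return hi - lo

assert min_operations([0, 0, 0]) == 0
# one operation: subtract 1 from positions 1 and 3, add 1 to 2 and 4
assert min_operations([1, -1, 1, -1]) == 1
assert min_operations([1, 2, 3]) == 6
```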
[ "greedy", "implementation" ]
2,100
"#include <iostream>\n\nusing namespace std;\n\nint main() {\n ios_base::sync_with_stdio(false);\n cin.tie(0);\n int t;\n cin >> t;\n while (t--) {\n int n;\n cin >> n;\n int64_t mn = 0;\n int64_t mx = 0;\n int64_t cur = 0;\n for (int i = 0; i < n; i++) {\n int u;\n cin >> u;\n cur += u;\n mn = min(mn, cur);\n mx = max(mx, cur);\n }\n cout << mx - mn << '\\n';\n }\n}\n"
1775
F
Laboratory on Pluto
As you know, Martian scientists are actively engaged in space research. One of the highest priorities is Pluto. In order to study this planet in more detail, it was decided to build a laboratory on Pluto. It is known that the lab will be built of $n$ square blocks of equal size. For convenience, we will assume that Pluto's surface is a plane divided by vertical and horizontal lines into unit squares. Each square is either occupied by a lab block or not, and only $n$ squares are occupied. Since each block is square, it has four walls. If a wall is adjacent to another block, it is considered inside, otherwise — outside. Pluto is famous for its extremely cold temperatures, so the outside walls of the lab must be insulated. One unit of insulation per exterior wall would be required. Thus, the greater the total length of the outside walls of the lab (i. e., its perimeter), the more insulation will be needed. Consider the lab layout in the figure below. It shows that the lab consists of $n = 33$ blocks, and all the blocks have a total of $24$ outside walls, i. e. $24$ units of insulation will be needed. You should build the lab optimally, i. e., minimize the amount of insulation. On the other hand, there may be many optimal options, so scientists may be interested in the number of ways to build the lab using the minimum amount of insulation, modulo a prime number $m$. Two ways are considered the same if they are the same when overlapping without turning. Thus, if a lab plan is rotated by $90^{\circ}$, such a new plan can be considered a separate way. To help scientists explore Pluto, you need to write a program that solves these difficult problems.
Let us find out the minimal perimeter for a fixed $n$ in $O(1)$. Let $a = \lfloor \sqrt{n} \rfloor$: If $a^2 < n$ and $n \leq a \cdot (a+1)$, then the minimum perimeter is $2 \cdot (2 \cdot a + 1)$. If $a \cdot a + a < n$ and $n \leq (a + 1)^2$, then the minimum perimeter is $4 \cdot (a + 1)$. If neither 1) nor 2) is satisfied, then the minimum perimeter is $4 \cdot a$. For convenience, denote the semiperimeter by $p$. Let us try every value $x$ from $1$ to $p$ for one side of the rectangle; this immediately gives $y = p - x$. If the area of the rectangle is at least $n$, then for queries of the first type we can obtain a valid layout from the $x \times y$ rectangle by removing $x \cdot y - n$ cells from its first row or column. Consider a matrix with sides $x$ and $y$ with minimal perimeter and area at least $n$. It is easy to see that if we gradually remove corner cells of the figure, the perimeter does not change. Suppose we have some optimal figure, and look at the four regions of empty cells in the corners of the matrix. Each region forms a kind of 'staircase': the number of empty cells in each row is not less than the number of empty cells in the next row. Why is this so? If in some row we deleted a cell so that the region ceased to form a staircase, the perimeter of the figure would increase by $2$, so the figure would not be optimal. Now the problem is reduced to the following: for each matrix with dimensions $x$ by $y$ with minimal perimeter and area not less than $n$, calculate the number of ways to arrange staircases in the $4$ corners such that the total number of cells occupied by the staircases equals $x \cdot y - n$. How to do this? Define the DP $dp_{angles, sum, last}$: the number of ways to arrange staircases in the first $angles$ corners (at most $4$) so that the total number of cells occupied by staircases equals $sum$ and the length of the last added row of the current staircase equals $last$. Then how do we compute this DP?
Go through the states $angles, sum, last$ and every $cur \leq last$, the length of the row we add to the current staircase, and make a transition from $dp_{angles, sum, last}$ to $dp_{angles, sum + cur, cur}$; or move on to the next corner, $dp_{angles + 1, sum, maxP}$ (where $maxP$ denotes an unbounded row length). This DP works in $O(n \cdot \sqrt{n})$. To answer a query for $n$, we go through all good rectangles, as specified above, and add $dp_{4, x \cdot y - n, maxP}$ to the answer. To solve the problem for a full score, you must optimize the above DP to $O(n)$. The overall complexity is $O(n + t \cdot \sqrt{n})$.
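The three-case perimeter formula can be checked directly. A small Python sketch (mirroring the reference solution's `get_P`, but returning the full perimeter rather than half of it):

```python
from math import isqrt

def min_perimeter(n):
    # minimal perimeter of a connected figure of n unit squares
    a = isqrt(n)
    if a * a < n <= a * (a + 1):
        return 2 * (2 * a + 1)
    if a * a + a < n <= (a + 1) * (a + 1):
        return 4 * (a + 1)
    return 4 * a  # n is a perfect square

assert min_perimeter(1) == 4
assert min_perimeter(2) == 6
assert min_perimeter(4) == 8
# the statement's example: 33 blocks need 24 units of insulation
assert min_perimeter(33) == 24
```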
[ "constructive algorithms", "dp", "greedy", "math" ]
2,500
"#include <bits/stdc++.h>\n#define el \"\\n\"\nusing namespace std;\n\nconst int N = 4e5 + 50;\n\nint f[5][1500][1500], M, a;\n\nint get_P(int n) {\n int a = sqrt(n);\n if (a * a < n && a * a + a >= n) {\n return 2 * a + 1;\n }\n if (a * a + a < n && n <= (a + 1) * (a + 1)) {\n return 2 * (a + 1);\n }\n return 2 * a;\n}\n\nvoid precalc() {\n a = sqrt(N) + 5;\n f[0][0][a] = 1;\n for (int i = 0; i < 4; i++) {\n for (int j = 0; j <= a; j++) {\n for (int was = a; was >= 0; was--) {\n if (was && j + was <= a) {\n (f[i][j + was][was] += f[i][j][was]) %= M;\n }\n if (was) {\n (f[i][j][was - 1] += f[i][j][was]) %= M;\n }\n }\n (f[i + 1][j][a] += f[i][j][0]) %= M;\n }\n }\n}\n\nint main() {\n ios::sync_with_stdio(false);\n cin.tie(NULL);\n\n int q, type;\n cin >> q >> type;\n if (type == 2) {\n cin >> M;\n precalc();\n }\n while (q--) {\n int n;\n cin >> n;\n int p = get_P(n);\n if (type == 1) {\n int _n, _m;\n for (int x = 1; x <= p; x++) {\n int y = p - x;\n if (x + y == p && x * y >= n) {\n _n = x; _m = y;\n break;\n }\n }\n vector <vector <char> > ans(_n, vector <char> (_m, '#'));\n cout << _n << \" \" << _m << \"\\n\";\n for (int i = 0; i < _n * _m - n; i++) {\n ans[i][0] = '.';\n }\n for (int i = 0; i < _n; i++, cout << el) {\n for (int j = 0; j < _m; j++) {\n cout << ans[i][j];\n }\n }\n\n continue;\n }\n\n int ans = 0;\n for (int x = 1; x <= p; x++) {\n int y = p - x;\n if (x + y == p && x * y >= n) {\n (ans += f[4][x * y - n][a]) %= M;\n }\n }\n cout << p * 2 << \" \" << ans << \"\\n\";\n }\n}\n\n"
1777
A
Everybody Likes Good Arrays!
An array $a$ is good if for all pairs of adjacent elements, $a_i$ and $a_{i+1}$ ($1\le i \lt n$) are of \textbf{different} parity. Note that an array of size $1$ is trivially good. You are given an array of size $n$. In one operation you can select any pair of adjacent elements in which both elements are of the \textbf{same} parity, delete them, and insert their product in the same position. Find the minimum number of operations to form a good array.
Replace even numbers with $0$ and odd numbers with $1$ in the array $a$. Now we observe that the given operation is equivalent to selecting two equal adjacent elements and deleting one of them. The array can then be visualized as strips of zeros (in green) and ones (in red) like this: $[\color{green}{0,0,0},\color{red}{1,1,1,1},\color{green}{0},\color{red}{1,1}]$. Note that since the number of adjacent pairs ($a[i], a[i+1]$) with $a[i] \ne a[i+1]$ remains constant (a nice invariant!), every strip can be handled independently. In the final array every strip must have size $1$, and performing one operation reduces the size of the corresponding strip by $1$. So a strip of length $L$ requires $L - 1$ operations to reduce its size to $1$; that is, every strip contributes its length minus one to the number of operations. So, the answer is $n$ minus the total number of strips, which also equals $n - x - 1$, where $x$ is the number of adjacent pairs ($a[i], a[i+1]$) such that $a[i] \ne a[i+1]$.
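Counting operations reduces to counting same-parity adjacent pairs, since $n - x - 1$ equals the number of pairs with equal parity. A one-function Python sketch:

```python
def min_ops(a):
    # n - 1 - (#adjacent pairs with different parity)
    # = number of adjacent pairs with the same parity
    return sum(1 for x, y in zip(a, a[1:]) if x % 2 == y % 2)

# strips [1,7,11], [2], [13]: answer 5 - 3 = 2
assert min_ops([1, 7, 11, 2, 13]) == 2
# already good: parities alternate
assert min_ops([1, 2, 3, 4]) == 0
```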
[ "greedy", "math" ]
800
def main():
    T = int(input())
    while T > 0:
        T = T - 1
        n = int(input())
        a = [int(x) for x in input().split()]
        ans = 0
        for i in range(1, n):
            ans += 1 - ((a[i] + a[i - 1]) & 1)
        print(ans)


if __name__ == '__main__':
    main()
1777
B
Emordnilap
A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array). There are $n! = n \cdot (n-1) \cdot (n - 2) \cdot \ldots \cdot 1$ different permutations of length $n$. Given a permutation $p$ of $n$ numbers, we create an array $a$ consisting of $2n$ numbers, which is equal to $p$ concatenated with its reverse. We then define the beauty of $p$ as the number of inversions in $a$. The number of inversions in the array $a$ is the number of pairs of indices $i$, $j$ such that $i < j$ and $a_i > a_j$. For example, for permutation $p = [1, 2]$, $a$ would be $[1, 2, 2, 1]$. The inversions in $a$ are $(2, 4)$ and $(3, 4)$ (assuming 1-based indexing). Hence, the beauty of $p$ is $2$. Your task is to find the sum of beauties of all $n!$ permutations of size $n$. Print the remainder we get when dividing this value by $1\,000\,000\,007$ ($10^9 + 7$).
Observation: every permutation has the same beauty. Consider two indices $i$ and $j$ ($i < j$) in a permutation $p$. These elements appear in the order $[p_i, p_j, p_j, p_i]$ in the array $a$. There are two cases: Case $1$: $p_i > p_j$. The first $p_i$ appears before both copies of $p_j$, accounting for $2$ inversions. Case $2$: $p_i < p_j$. Both copies of $p_j$ appear before the second $p_i$, again accounting for $2$ inversions. Hence, every pair of indices in $p$ accounts for exactly $2$ inversions in $a$. Thus, the beauty of every permutation is ${n\choose 2} \cdot 2 = n \cdot (n - 1)$, and the sum of beauties of all permutations is $n \cdot (n - 1) \cdot n!$.
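The claim that every permutation has the same beauty can be verified by brute force for small $n$ (a sketch, not the intended solution):

```python
from itertools import permutations

def beauty(p):
    # number of inversions in p concatenated with its reverse
    a = list(p) + list(reversed(p))
    return sum(1 for i in range(len(a)) for j in range(i + 1, len(a)) if a[i] > a[j])

n = 4
beauties = {beauty(p) for p in permutations(range(1, n + 1))}
print(beauties)  # {12}: every permutation of size 4 has beauty n*(n-1) = 12
```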
[ "combinatorics", "greedy", "math" ]
900
t = int(input())
for _ in range(t):
    n = int(input())
    nf = 1
    mod = int(1e9 + 7)
    for i in range(n):
        nf = nf * (i + 1)
        nf %= mod
    ans = n * (n - 1) * nf
    ans %= mod
    print(ans)
1777
C
Quiz Master
A school has to decide on its team for an international quiz. There are $n$ students in the school. We can describe the students using an array $a$ where $a_i$ is the smartness of the $i$-th ($1 \le i \le n$) student. There are $m$ topics $1, 2, 3, \ldots, m$ from which the quiz questions will be formed. The $i$-th student is considered proficient in a topic $T$ if $(a_i \bmod T) = 0$. Otherwise, he is a rookie in that topic. We say that a team of students is collectively proficient in all the topics if for every topic there is a member of the team proficient in this topic. Find a team that is collectively proficient in all the topics such that the maximum difference between the smartness of any two students in that team is \textbf{minimized}. Output this difference.
We can sort the smartness values and use two pointers. The two pointers delimit the students currently considered for the team: let $l$ be the left pointer and $r$ the right pointer, so $a_l$ is the minimum smartness of the team and $a_r$ the maximum. If this team is collectively proficient in all topics, its difference is $a_r - a_l$. When $l$ is increased, the team may lose proficiency in some topics; $r$ then either stays the same or increases until the team becomes proficient again. A team is proficient if and only if every number from $1$ to $m$ divides the smartness of some team member. To check proficiency, we maintain a frequency array $f$ of size $m$ and a variable $count$ for the number of topics that currently have a multiple in the team. When we add a student, we iterate over all factors of their smartness that are at most $m$ and increase their frequencies; whenever a frequency goes from $0$ to $1$, $count$ increases by $1$. Similarly, when we remove a student, we decrease the frequencies of all their factors that are at most $m$; whenever a frequency drops to $0$, $count$ decreases by $1$. $count$ being equal to $m$ at any point indicates a collectively proficient team.
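The window scan can be sketched in Python (helper names are ours; it returns $-1$ when no proficient team exists, mirroring the C++ solution below):

```python
def min_difference(a, m):
    # divs[v] = divisors of v that are <= m, precomputed by a sieve
    mx = max(a)
    divs = [[] for _ in range(mx + 1)]
    for d in range(1, min(m, mx) + 1):
        for v in range(d, mx + 1, d):
            divs[v].append(d)
    a = sorted(a)
    freq = [0] * (m + 1)
    covered = 0          # topics that currently have a multiple in the window
    best = None
    j = 0
    for i in range(len(a)):
        for d in divs[a[i]]:
            freq[d] += 1
            if freq[d] == 1:
                covered += 1
        while covered == m:   # window [j, i] is proficient; try shrinking it
            diff = a[i] - a[j]
            if best is None or diff < best:
                best = diff
            for d in divs[a[j]]:
                freq[d] -= 1
                if freq[d] == 0:
                    covered -= 1
            j += 1
    return -1 if best is None else best
```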
[ "binary search", "math", "number theory", "sortings", "two pointers" ]
1700
#include <bits/stdc++.h>
#define all(v) v.begin(), v.end()
#define var(x, y, z) cout << x << " " << y << " " << z << endl;
#define ll long long int
#define pii pair<ll, ll>
#define pb push_back
#define ff first
#define ss second
#define FASTIO                    \
    ios::sync_with_stdio(0);      \
    cin.tie(0);                   \
    cout.tie(0);
using namespace std;

const ll inf = 1e17;
const ll MAXM = 1e5;
vector<ll> factors[MAXM + 5];

void init() {
    for (ll i = 1; i <= MAXM; i++) {
        for (ll j = i; j <= MAXM; j += i) {
            factors[j].pb(i);
        }
    }
}

void solve() {
    ll n, m;
    cin >> n >> m;
    vector<pii> vec;
    for (ll i = 0; i < n; i++) {
        ll value;
        cin >> value;
        vec.pb({value, i});
    }
    sort(all(vec));
    vector<ll> frequency(m + 5, 0);
    ll curr_count = 0;
    ll j = 0;
    ll global_ans = inf;
    for (ll i = 0; i < n; i++) {
        for (auto x : factors[vec[i].ff]) {
            if (x > m) break;
            if (!frequency[x]++) {
                curr_count++;
            }
        }
        while (curr_count == m) {
            ll curr_ans = vec[i].ff - vec[j].ff;
            if (curr_ans < global_ans) {
                global_ans = curr_ans;
            }
            for (auto x : factors[vec[j].ff]) {
                if (x > m) break;
                if (--frequency[x] == 0) {
                    curr_count--;
                }
            }
            j++;
        }
    }
    cout << (global_ans >= inf ? -1 : global_ans) << "\n";
}

int main() {
    FASTIO
    init();
    ll t;
    cin >> t;
    while (t--) {
        solve();
    }
    return 0;
}
1777
D
Score of a Tree
You are given a tree of $n$ nodes, rooted at $1$. Every node has a value of either $0$ or $1$ at time $t=0$. At any integer time $t>0$, the value of a node becomes the bitwise XOR of the values of its children at time $t - 1$; the values of leaves become $0$ since they don't have any children. Let $S(t)$ denote the sum of values of all nodes at time $t$. Let $F(A)$ denote the sum of $S(t)$ across all values of $t$ such that $0 \le t \le 10^{100}$, where $A$ is the initial assignment of $0$s and $1$s in the tree. The task is to find the sum of $F(A)$ for all $2^n$ initial configurations of $0$s and $1$s in the tree. Print the sum modulo $10^9+7$.
We will focus on computing the expected value of $F(A)$ rather than the sum, as the sum is just $2^n \times \mathbb{E}(F(A))$. Let $F_u(A)$ denote the sum of all values at node $u$ from time $0$ to $10^{100}$ if the initial configuration is $A$. Clearly, $F(A) = \sum {F_u(A)}$, and by linearity of expectation, $\mathbb{E}(F(A)) = \sum {\mathbb{E}(F_u(A))}$. Define $V_u(A, t)$ as the value of node $u$ at time $t$ under the initial configuration $A$. Observe that $V_u(A, t)$ is simply $0$ if there is no node in $u$'s subtree at a distance of $t$ from $u$; otherwise, it is the bitwise XOR of the initial values of all nodes in the subtree of $u$ at distance exactly $t$ from $u$. Thus, define $d_u$ as the length of the longest path from $u$ to a leaf in $u$'s subtree. Then $\mathbb{E}(V_u(A, t)) = \frac{1}{2}$ if $t \leq d_u$, and $0$ otherwise. This is because the expected value of the XOR of $k$ uniformly random boolean values is $0$ if $k$ is zero, and $\frac{1}{2}$ otherwise. This fact has multiple combinatorial proofs; for example, one can count the number of ways of choosing an odd number of the $k$ boolean values to be $1$: $\sum_{\text{odd } i}{\binom{k}{i}} = 2^{k-1}$, exactly half of all $2^k$ assignments. We use this to get: $\mathbb{E}(F_u(A)) = \mathbb{E}\left(\sum_{t=0}^{10^{100}}{V_u(A, t)}\right) = \sum_{t=0}^{10^{100}}{\mathbb{E}(V_u(A, t))} = \frac{d_u + 1}{2}$. All the $d_u$ values can be computed by a single traversal of the tree. Our final result is: $2^n \times \sum{ \frac{d_u + 1}{2} } = 2^{n-1} \sum (d_u + 1)$. Time complexity: $\mathcal{O}(n)$
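For a tiny tree, the formula $2^{n-1}\sum_u (d_u+1)$ can be checked against direct simulation over all $2^n$ initial assignments (only $t$ up to the tree height matters, since every value is $0$ afterwards). A sketch on a 3-node path, with our own variable names:

```python
from functools import reduce
from itertools import product

# a path 1 - 2 - 3 rooted at node 1; children[u] lists the children of u
children = {1: [2], 2: [3], 3: []}
n = 3

total = 0
for init in product([0, 1], repeat=n):
    val = {u: init[u - 1] for u in children}
    for _ in range(n + 1):          # all values are 0 once t exceeds the height
        total += sum(val.values())
        val = {u: reduce(lambda x, y: x ^ y, (val[c] for c in children[u]), 0)
               for u in children}

# d_u = longest path from u down to a leaf: d_1 = 2, d_2 = 1, d_3 = 0
formula = 2 ** (n - 1) * sum(d + 1 for d in (2, 1, 0))
print(total, formula)  # both are 24
```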
[ "bitmasks", "combinatorics", "dfs and similar", "dp", "math", "probabilities", "trees" ]
1900
#include <bits/stdc++.h>
using namespace std;
#define MOD 1000000007

long long power(long long a, int b) {
    long long ans = 1;
    while (b) {
        if (b & 1) {
            ans *= a;
            ans %= MOD;
        }
        a *= a;
        a %= MOD;
        b >>= 1;
    }
    return ans;
}

int DFS(int v, vector<int> edges[], int p, int dep, int ped[]) {
    int mdep = dep;
    for (auto it : edges[v])
        if (it != p) mdep = max(DFS(it, edges, v, dep + 1, ped), mdep);
    ped[v] = mdep - dep + 1;
    return mdep;
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    int T, i, j, n, u, v;
    cin >> T;
    while (T--) {
        cin >> n;
        vector<int> edges[n];
        for (i = 0; i < n - 1; i++) {
            cin >> u >> v;
            u--, v--;
            edges[u].push_back(v);
            edges[v].push_back(u);
        }
        int ped[n];
        DFS(0, edges, 0, 0, ped);
        long long p = power(2, n - 1), ans = 0;
        for (i = 0; i < n; i++) {
            ans += p * ped[i] % MOD;
            ans %= MOD;
        }
        cout << ans << "\n";
    }
}
1777
E
Edge Reverse
You will be given a weighted directed graph of $n$ nodes and $m$ directed edges, where the $i$-th edge has a weight of $w_i$ ($1 \le i \le m$). You need to reverse some edges of this graph so that there is at least one node in the graph from which every other node is reachable. The cost of these reversals is equal to the maximum weight of all reversed edges. If no edge reversal is required, assume the cost to be $0$. It is guaranteed that no self-loop or duplicate edge exists. Find the minimum cost required for completing the task. If there is no solution, print a single integer $-1$.
If the cost for completing the task is $c$, we can reverse any edge with weight $\le c$. This is equivalent to making those edges bidirectional: when checking reachability, we only need to traverse an edge once, and we can decide on the spot whether to reverse it. So we apply a binary search on the minimum cost and check whether there exists at least one node from which all nodes are reachable when every edge with weight at most the current cost is made bidirectional. To perform this check in linear time, we use a simplified version of Kosaraju's algorithm. We condense the nodes into strongly connected components (SCCs) and perform a topological sort on them. If there exists an SCC from which all SCCs are reachable, it must be the first element in the topological order (since in a topological order, an element can only reach elements coming after it). So we choose any node from the first SCC in the topological order and run a DFS to check whether all nodes are reachable from it. If they aren't, we conclude it is impossible to complete the task with the current cost. Overall time complexity: $O((n+m) \cdot \log C)$, where $C$ is the maximum edge weight.
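For a fixed candidate cost, the check can be sketched in Python without building SCCs explicitly: the last vertex finished by a full DFS sweep must lie in the "root" SCC if one exists, which is the same fact the editorial's condensation argument relies on. This is a sketch with our own helper names, not the judge solution:

```python
def reachable_from_all(n, adj):
    """True iff some vertex of the directed graph adj reaches all others."""
    seen = [False] * n
    order = []                       # vertices in order of DFS completion
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(adj[s]))]
        while stack:
            u, it = stack[-1]
            advanced = False
            for v in it:
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, iter(adj[v])))
                    advanced = True
                    break
            if not advanced:
                order.append(u)
                stack.pop()
    root = order[-1]                 # the only possible "mother vertex"
    seen = [False] * n
    seen[root] = True
    stack, cnt = [root], 1
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                cnt += 1
                stack.append(v)
    return cnt == n
```

In the full solution one binary searches over the cost, rebuilding `adj` each time with every edge of weight at most the current cost inserted in both directions.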
[ "binary search", "dfs and similar", "graphs", "trees" ]
2200
#include <bits/stdc++.h>
using namespace std;

// Using Kosaraju's idea, we guarantee the last node finished (indicated by root) is from the root SCC
void DFS(int v, bool visited[], int &root, vector<int> edges[]) {
    visited[v] = true;
    for (auto it : edges[v])
        if (!visited[it]) DFS(it, visited, root, edges);
    root = v;
}

int cnt(int v, bool visited[], vector<int> edges[]) {
    int ans = 1;
    visited[v] = true;
    for (auto it : edges[v])
        if (!visited[it]) ans += cnt(it, visited, edges);
    return ans;
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    int T;
    cin >> T;
    while (T--) {
        int i, j, n, m, u, v, w;
        cin >> n >> m;
        vector<pair<int, int>> og_edges[n];
        for (i = 0; i < m; i++) {
            cin >> u >> v >> w;
            u--, v--;
            og_edges[u].push_back({v, w});
        }
        int l = -1, r = 1e9 + 1, mid;
        while (r - l > 1) {
            mid = l + (r - l) / 2;
            vector<int> edges[n];
            for (i = 0; i < n; i++) {
                for (auto it : og_edges[i]) {
                    edges[i].push_back(it.first);
                    if (it.second <= mid) edges[it.first].push_back(i);
                }
            }
            bool visited[n] = {};
            int root;
            for (i = 0; i < n; i++) {
                if (!visited[i]) DFS(i, visited, root, edges);
            }
            memset(visited, false, sizeof(visited));
            if (cnt(root, visited, edges) == n) r = mid;
            else l = mid;
        }
        if (r == 1e9 + 1) r = -1;
        cout << r << '\n';
    }
    return 0;
}
1777
F
Comfortably Numb
You are given an array $a$ consisting of $n$ non-negative integers. The numbness of a subarray $a_l, a_{l+1}, \ldots, a_r$ (for arbitrary $l \leq r$) is defined as $$\max(a_l, a_{l+1}, \ldots, a_r) \oplus (a_l \oplus a_{l+1} \oplus \ldots \oplus a_r),$$ where $\oplus$ denotes the bitwise XOR operation. Find the maximum numbness over all subarrays.
The problem can be solved recursively: keep dividing the array into subarrays at the maximum element of the current subarray. Say the maximum element of the initial array is at index $x$; the array then splits into two subarrays $a[1 \ldots x-1]$ and $a[x+1 \ldots n]$. Suppose we have already calculated the answer for the left and right subarrays. We now need to handle all subarrays containing $a[x]$ to complete the process for the array. To do this, we maintain a separate trie for each of the two parts; each trie contains the prefix-XOR values at all indices of its part. We iterate over the smaller of the two parts. For every index, we find the largest answer obtainable from a subarray with one end at this index, by walking its prefix-XOR value (XOR'ed with $a[x]$) down the prefix-XOR trie of the other part. This covers all subarrays containing $a[x]$, so the entire array is now covered. Afterwards, we merge the two tries into one, again by iterating over the smaller part. The left and right subarrays are solved the same way, each divided at its own maximum element. As we follow small-to-large merging, there are about $n \log n$ operations on the trie, so the overall time complexity is $\mathcal{O}(n \log n \log A)$.
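The core primitive here is a binary trie answering "maximum of $v \oplus (\text{some stored value})$". A minimal Python sketch (class name ours; it assumes at least one value has been inserted before querying):

```python
class XorTrie:
    BITS = 30  # values assumed to fit in 31 bits

    def __init__(self):
        self.root = [None, None]

    def insert(self, x):
        node = self.root
        for i in range(self.BITS, -1, -1):
            b = (x >> i) & 1
            if node[b] is None:
                node[b] = [None, None]
            node = node[b]

    def max_xor(self, x):
        # walk greedily, preferring the opposite bit at each level
        node = self.root
        res = 0
        for i in range(self.BITS, -1, -1):
            b = (x >> i) & 1
            if node[b ^ 1] is not None:
                res |= 1 << i
                node = node[b ^ 1]
            else:
                node = node[b]
        return res
```

In the actual solution the stored values are prefix XORs of one part, and each query value is a prefix XOR of the other part XOR'ed with the subarray maximum.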
[ "bitmasks", "data structures", "divide and conquer", "strings", "trees" ]
2400
#include <bits/stdc++.h>
using namespace std;

struct Trie {
    struct Trie *child[2] = {0};
};
typedef struct Trie trie;

void insert(trie *dic, int x) {
    trie *temp = dic;
    for (int i = 30; i >= 0; i--) {
        int curr = x >> i & 1;
        if (temp->child[curr])
            temp = temp->child[curr];
        else {
            temp->child[curr] = new trie;
            temp = temp->child[curr];
        }
    }
}

int find_greatest(trie *dic, int x) {
    int res = 0;
    trie *temp = dic;
    for (int i = 30; i >= 0; i--) {
        int curr = x >> i & 1;
        if (temp->child[curr ^ 1]) {
            res ^= 1 << i;
            temp = temp->child[curr ^ 1];
        } else {
            temp = temp->child[curr];
        }
    }
    return res;
}

int main() {
    int test_cases;
    cin >> test_cases;
    while (test_cases--) {
        int n;
        cin >> n;
        int a[n + 1];
        for (int i = 1; i <= n; i++) {
            cin >> a[i];
        }
        trie *t[n + 2];
        int prexor[n + 1];
        prexor[0] = 0;
        for (int i = 1; i <= n; i++) {
            t[i] = new trie;
            insert(t[i], prexor[i - 1]);
            prexor[i] = prexor[i - 1] ^ a[i];
        }
        t[n + 1] = new trie;
        insert(t[n + 1], prexor[n]);
        pair<int, int> asc[n + 1];
        for (int i = 1; i <= n; i++) {
            asc[i] = make_pair(a[i], i);
        }
        sort(asc + 1, asc + n + 1);
        int left[n + 1], right[n + 1];
        stack<int> s;
        for (int i = 1; i <= n; i++) {
            while (!s.empty() && a[i] >= a[s.top()]) s.pop();
            if (s.empty()) left[i] = 0;
            else left[i] = s.top();
            s.push(i);
        }
        while (!s.empty()) s.pop();
        for (int i = n; i > 0; i--) {
            while (!s.empty() && a[i] > a[s.top()]) s.pop();
            if (s.empty()) right[i] = n + 1;
            else right[i] = s.top();
            s.push(i);
        }
        int ans = 0;
        for (int i = 1; i <= n; i++) {
            int x = asc[i].second;
            int r = right[x] - 1;
            int l = left[x] + 1;
            if (x - l < r - x) {
                for (int j = l - 1; j < x; j++) {
                    ans = max(ans, find_greatest(t[x + 1], prexor[j] ^ a[x]));
                }
                t[l] = t[x + 1];
                for (int j = l - 1; j < x; j++) {
                    insert(t[l], prexor[j]);
                }
            } else {
                for (int j = x; j <= r; j++) {
                    ans = max(ans, find_greatest(t[l], prexor[j] ^ a[x]));
                }
                for (int j = x; j <= r; j++) {
                    insert(t[l], prexor[j]);
                }
            }
        }
        cout << ans << endl;
    }
}
1778
A
Flip Flop Sum
You are given an array of $n$ integers $a_1, a_2, \ldots, a_n$. The integers are either $1$ or $-1$. You have to perform the following operation \textbf{exactly once} on the array $a$: - Choose an index $i$ ($1 \leq i < n$) and flip the signs of $a_i$ and $a_{i+1}$. Here, flipping the sign means $-1$ will be $1$ and $1$ will be $-1$. What is the maximum possible value of $a_1 + a_2 + \ldots + a_n$ after applying the above operation?
Let's say we've chosen index $i$. What will happen? If $a_i$ and $a_{i+1}$ have opposite signs, flipping them won't change the initial sum. If $a_i = a_{i+1} = 1$, flipping them will reduce the sum by $4$. If $a_i = a_{i+1} = -1$, flipping them will increase the sum by $4$. So, for each $i < n$, we can check the values of $a_i$ and $a_{i+1}$ and evaluate the effect on the sum according to the three cases above. Among these effects, take the one that maximizes the sum. Time complexity: $\mathcal{O}(n)$ per test case.
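The three cases fit directly into a few lines of Python (a sketch with our own function name):

```python
def max_sum_after_one_flip(a):
    # exactly one flip must be performed, so take the best over all adjacent pairs
    s = sum(a)
    best = -10 ** 9
    for x, y in zip(a, a[1:]):
        if x == y == 1:
            best = max(best, s - 4)
        elif x == y == -1:
            best = max(best, s + 4)
        else:               # opposite signs: the sum is unchanged
            best = max(best, s)
    return best
```

Note that `[1, 1]` yields `-2`: the operation is mandatory, so the sum can decrease.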
[ "greedy", "implementation" ]
800
#include <bits/stdc++.h>
using namespace std;

const int sz = 1e5 + 10;
int ara[sz];

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        scanf("%d", &n);
        int sum = 0;
        for (int i = 1; i <= n; i++) {
            scanf("%d", &ara[i]);
            sum += ara[i];
        }
        int ans = -1e9;
        for (int i = 1; i < n; i++) {
            if (ara[i] == ara[i + 1]) {
                if (ara[i] == 1) ans = max(ans, sum - 4);
                else ans = max(ans, sum + 4);
            }
            else ans = max(ans, sum);
        }
        printf("%d\n", ans);
    }
    return 0;
}
1778
B
The Forbidden Permutation
You are given a permutation $p$ of length $n$, an array of $m$ \textbf{distinct} integers $a_1, a_2, \ldots, a_m$ ($1 \le a_i \le n$), and an integer $d$. Let $\mathrm{pos}(x)$ be the index of $x$ in the permutation $p$. The array $a$ is \textbf{not good} if - $\mathrm{pos}(a_{i}) < \mathrm{pos}(a_{i + 1}) \le \mathrm{pos}(a_{i}) + d$ for all $1 \le i < m$. For example, with the permutation $p = [4, 2, 1, 3, 6, 5]$ and $d = 2$: - $a = [2, 3, 6]$ is a not good array. - $a = [2, 6, 5]$ is good because $\mathrm{pos}(a_1) = 2$, $\mathrm{pos}(a_2) = 5$, so the condition $\mathrm{pos}(a_2) \le \mathrm{pos}(a_1) + d$ is not satisfied. - $a = [1, 6, 3]$ is good because $\mathrm{pos}(a_2) = 5$, $\mathrm{pos}(a_3) = 4$, so the condition $\mathrm{pos}(a_2) < \mathrm{pos}(a_3)$ is not satisfied. In one move, you can swap two adjacent elements of the permutation $p$. What is the minimum number of moves needed such that the array $a$ becomes good? It can be shown that there always exists a sequence of moves so that the array $a$ becomes good. A permutation is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array) and $[1,3,4]$ is also not a permutation ($n=3$, but there is $4$ in the array).
If the array $a$ is good, the answer is obviously $0$. Otherwise, how can we optimally transform a not good array $a$ into a good one? Suppose we are at index $i$ $(i<m)$ and let $x = a_i$, $y = a_{i+1}$. If we observe carefully, there are two ways to make the array $a$ good at this pair: Move $x$ and $y$ in the permutation $p$ so that $\mathrm{pos}(y)$ becomes greater than $\mathrm{pos}(x) + d$. To do that, we can swap $x$ to the left and $y$ to the right; the total number of swaps needed is $d - (\mathrm{pos}(y) - \mathrm{pos}(x)) + 1$. We need to check that there is enough space to the left of $\mathrm{pos}(x)$ and to the right of $\mathrm{pos}(y)$ to perform that many swaps. Or, move $y$ in the permutation $p$ so that $\mathrm{pos}(y)$ becomes smaller than $\mathrm{pos}(x)$: simply swap $y$ to the left until the condition is satisfied. The number of swaps needed is $\mathrm{pos}(y) - \mathrm{pos}(x)$. For each $i < m$, calculate the minimum number of swaps over these two cases. The minimum over all $i < m$ is the desired answer. Time complexity: $\mathcal{O}(n)$ per test case.
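Both cases fit in a short Python sketch (function name ours). On the statement's example $p = [4, 2, 1, 3, 6, 5]$, $d = 2$, $a = [2, 3, 6]$ it returns $1$ (one swap of $3$ and $6$ breaks the chain):

```python
def min_swaps(n, d, p, a):
    pos = {v: i + 1 for i, v in enumerate(p)}   # 1-based positions in p
    best = 10 ** 18
    for x, y in zip(a, a[1:]):
        if pos[y] <= pos[x] or pos[y] - pos[x] > d:
            return 0                            # the pair already violates "not good"
        dist = pos[y] - pos[x]
        best = min(best, dist)                  # case 2: move y left past x
        need = d - dist + 1                     # case 1: stretch the gap beyond d
        room = (pos[x] - 1) + (n - pos[y])
        if room >= need:
            best = min(best, need)
    return best
```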
[ "greedy", "math" ]
1300
#include <bits/stdc++.h>
using namespace std;

const int sz = 1e5 + 10;
int p[sz], a[sz], pos[sz];

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n, m, d;
        scanf("%d %d %d", &n, &m, &d);
        for (int i = 1; i <= n; i++) {
            scanf("%d", &p[i]);
            pos[p[i]] = i;
        }
        for (int i = 1; i <= m; i++) {
            scanf("%d", &a[i]);
        }
        int ans = 1e9;
        for (int i = 1; i < m; i++) {
            if (pos[a[i + 1]] <= pos[a[i]] || pos[a[i + 1]] - pos[a[i]] > d) {
                ans = 0;
                break;
            }
            ans = min(ans, pos[a[i + 1]] - pos[a[i]]);
            int dist = pos[a[i + 1]] - pos[a[i]];
            int swapNeed = d - dist + 1;
            int swapPossible = (pos[a[i]] - 1) + (n - pos[a[i + 1]]);
            if (swapPossible >= swapNeed) ans = min(ans, swapNeed);
        }
        printf("%d\n", ans);
    }
    return 0;
}
1778
C
Flexible String
You have a string $a$ and a string $b$. Both of the strings have length $n$. There are \textbf{at most $10$ different characters} in the string $a$. You also have a set $Q$. Initially, the set $Q$ is empty. You can apply the following operation on the string $a$ any number of times: - Choose an index $i$ ($1\leq i \leq n$) and a lowercase English letter $c$. Add $a_i$ to the set $Q$ and then replace $a_i$ with $c$. For example, Let the string $a$ be "$\tt{abecca}$". We can do the following operations: - In the first operation, if you choose $i = 3$ and $c = \tt{x}$, the character $a_3 = \tt{e}$ will be added to the set $Q$. So, the set $Q$ will be $\{\tt{e}\}$, and the string $a$ will be "$\tt{ab\underline{x}cca}$". - In the second operation, if you choose $i = 6$ and $c = \tt{s}$, the character $a_6 = \tt{a}$ will be added to the set $Q$. So, the set $Q$ will be $\{\tt{e}, \tt{a}\}$, and the string $a$ will be "$\tt{abxcc\underline{s}}$". You can apply any number of operations on $a$, but in the end, the set $Q$ should contain \textbf{at most $k$ different characters}. Under this constraint, you have to maximize the number of integer pairs $(l, r)$ ($1\leq l\leq r \leq n$) such that $a[l,r] = b[l,r]$. Here, $s[l,r]$ means the substring of string $s$ starting at index $l$ (inclusively) and ending at index $r$ (inclusively).
If we could replace all the characters of the string $a$, we could transform $a$ into $b$; so replacing more characters is always beneficial. For a fixed string $a$ and a fixed string $b$, if the answer is $x_1$ for $k_1$ and $x_2$ for $k_2$ $(k_1 < k_2)$, it can be shown that $x_1 \leq x_2$ always holds. That is to say, we can safely take the size of the set $Q$ to be the maximum limit $\min(k, u)$, where $u$ is the number of unique characters in the string $a$. Now, we can generate all possible sets of characters of size $\min(k, u)$. Obviously, we won't take characters that are not present in the string $a$, because they have no effect on the answer. There are many ways to generate the sets, like backtracking, bitmasking, etc. If we can calculate the number of valid pairs $(l, r)$ for each set efficiently, the rest is just taking the maximum over all sets. To calculate the number of pairs for a fixed set efficiently, observe that if $a[l,r]=b[l,r]$ holds, then $a[p,q] = b[p,q]$ holds for any $l\leq p\leq q\leq r$; so a maximal matching block of length $c$ yields $\frac{c\times (c+1)}{2}$ valid pairs. We say that $a_i$ matches $b_i$ if they are equal or $a_i$ belongs to the currently generated set. We iterate from the beginning of the string $a$: at index $j$, we find the rightmost index $r$ such that $a[j,r]=b[j,r]$ holds under this matching rule, add the number of valid pairs in this range to the contribution of the set, then set $j$ to $r+1$ and repeat. The rest of the task is trivial. Time complexity: $\mathcal{O}(n\times \binom{u}{m})$ per test case, where $m=\min(k,u)$
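The subset enumeration can be sketched in Python with `itertools.combinations` (helper names ours, mirroring the backtracking in the C++ solution below):

```python
from itertools import combinations

def max_pairs(k, a, b):
    # try every set Q of min(k, u) distinct characters of a
    uniq = sorted(set(a))
    k = min(k, len(uniq))
    best = 0
    for q in combinations(uniq, k):
        chosen = set(q)
        total = run = 0
        for x, y in zip(a, b):
            if x == y or x in chosen:
                run += 1                     # position matches (possibly via replacement)
            else:
                total += run * (run + 1) // 2
                run = 0
        total += run * (run + 1) // 2
        best = max(best, total)
    return best
```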
[ "bitmasks", "brute force", "strings" ]
1600
#include <bits/stdc++.h>
using namespace std;
#define ll long long
#define pb push_back
#define EL '\n'
#define fastio std::ios_base::sync_with_stdio(false); cin.tie(NULL); cout.tie(NULL);

string a, b;
string char_list;
bool mark[26];
ll ans, k;

ll count_matching_pair() {
    ll tot_pair = 0, match_count = 0;
    for (ll i = 0; i < a.size(); i++) {
        if (a[i] == b[i] || mark[a[i] - 'a'])
            match_count++;
        else {
            tot_pair += match_count * (match_count + 1) / 2;
            match_count = 0;
        }
    }
    tot_pair += match_count * (match_count + 1) / 2;
    return tot_pair;
}

void solve(ll pos, ll cnt) {
    if (cnt > k) return;
    if (pos == char_list.size()) {
        if (cnt == k) ans = max(ans, count_matching_pair());
        return;
    }
    solve(pos + 1, cnt);
    mark[char_list[pos] - 'a'] = 1;
    solve(pos + 1, cnt + 1);
    mark[char_list[pos] - 'a'] = 0;
}

int main() {
    fastio;
    ll t;
    cin >> t;
    while (t--) {
        ll n;
        cin >> n >> k;
        cin >> a >> b;
        unordered_set<char> unq;
        for (auto &ch : a) unq.insert(ch);
        char_list.clear();
        for (auto &x : unq) char_list.pb(x);
        k = min(k, (ll)unq.size());
        memset(mark, 0, sizeof mark);
        ans = 0;
        solve(0, 0);
        cout << ans << EL;
    }
    return 0;
}
1778
D
Flexible String Revisit
You are given two binary strings $a$ and $b$ of length $n$. In each move, the string $a$ is modified in the following way. - An index $i$ ($1 \leq i \leq n$) is chosen uniformly at random. The character $a_i$ will be flipped. That is, if $a_i$ is $0$, it becomes $1$, and if $a_i$ is $1$, it becomes $0$. What is the expected number of moves required to make both strings equal \textbf{for the first time}? A binary string is a string in which each character is either $\tt{0}$ or $\tt{1}$.
Let $k$ be the number of indices where the two strings differ, and let $f(x)$ be the expected number of moves to make the two strings equal given that they currently differ in $x$ positions. We have to find the value of $f(k)$. For all $x$ where $1 \leq x \leq n-1$, $\begin{align} f(x) = \frac{x}{n} \cdot [1 + f(x-1)] + \frac{n-x}{n} \cdot [1 + f(x+1)]\\ \text{or,}~~~ f(x) = 1 + \frac{x}{n}\cdot f(x-1) + \frac{n-x}{n} \cdot f(x+1) \end{align}$ Now, $f(0) = 0$ and $f(1) = 1 + \frac{n-1}{n}f(2)$. We can represent any $f(i)$ in the form $f(i) = a_i + b_i \cdot f(i+1)$. Let $a_{1} = 1$ and $b_{1} = \frac{n-1}{n}$, so we can write $f(1) = a_{1} + b_{1}\cdot f(2)$. When $1 \lt i \lt n$, $f(i) = 1 + \frac{i}{n} \cdot f(i-1) + \frac{n-i}{n} \cdot f(i+1)$. We can substitute the value of $f(i-1)$ with $a_{i-1} + b_{i-1}\cdot f(i)$ and solve for $f(i)$, obtaining $a_i$ and $b_i$ from $a_{i-1}$ and $b_{i-1}$, with $a_1$ and $b_1$ as the base case. Substituting $f(i-1) = a_{i-1} + b_{i-1}\cdot f(i)$, $\begin{align} f(i) & = 1 + \frac{i}{n} \cdot [a_{i-1} + b_{i-1}\cdot f(i)] + \frac{n-i}{n} \cdot f(i+1)\\ & = 1 + \frac{i}{n}\cdot a_{i-1} + \frac{i}{n} \cdot b_{i-1} \cdot f(i) + \frac{n-i}{n} \cdot f(i+1)\\ & = \frac{1+\frac{i}{n}\cdot a_{i-1}}{1-\frac{i}{n}\cdot b_{i-1}} + \frac{n-i}{n-i \cdot b_{i-1}}f(i+1)\\ & = \frac{n+i\cdot a_{i-1}}{n-i\cdot b_{i-1}} + \frac{n-i}{n-i \cdot b_{i-1}}f(i+1)\\ & = a_{i} + b_{i} \cdot f(i+1) \end{align}$ So, $a_{i} = \frac{n+i\cdot a_{i-1}}{n-i\cdot b_{i-1}}$ and $b_{i} = \frac{n-i}{n-i \cdot b_{i-1}}$ for $2 \leq i \leq n$. Similarly, $f(n) = 1+f(n-1)$, so we can also represent any $f(i)$ in the form $f(i) = c_i + d_i \cdot f(i-1)$. Let $c_{n} = 1$ and $d_{n} = 1$, so we can write $f(n) = c_{n} + d_{n}\cdot f(n-1)$.
When $1 \lt i \lt n$, $f(i) = 1 + \frac{i}{n} \cdot f(i-1) + \frac{n-i}{n} \cdot f(i+1)$. We can substitute the value of $f(i+1)$ with $c_{i+1} + d_{i+1}\cdot f(i)$ and solve for $f(i)$, obtaining $c_i$ and $d_i$ from $c_{i+1}$ and $d_{i+1}$, with $c_n$ and $d_n$ as the base case. Substituting $f(i+1) = c_{i+1} + d_{i+1}\cdot f(i)$, $\begin{align} f(i) & = 1 + \frac{i}{n} \cdot f(i-1) + \frac{n-i}{n} \cdot [c_{i+1} + d_{i+1}\cdot f(i)]\\ & = 1 + \frac{i}{n}\cdot f(i-1) + \frac{n-i}{n} \cdot c_{i+1} + \frac{n-i}{n} \cdot d_{i+1} \cdot f(i)\\ & = \frac{1+\frac{n-i}{n}\cdot c_{i+1}}{1-\frac{n-i}{n}\cdot d_{i+1}} + \frac{i}{n-(n-i) \cdot d_{i+1}}f(i-1)\\ & = \frac{n+(n-i)\cdot c_{i+1}}{n-(n-i)\cdot d_{i+1}} + \frac{i}{n-(n-i) \cdot d_{i+1}}f(i-1)\\ & = c_{i} + d_{i} \cdot f(i-1) \end{align}$ So, $c_{i} = \frac{n+(n-i)\cdot c_{i+1}}{n-(n-i)\cdot d_{i+1}}$ and $d_{i} = \frac{i}{n-(n-i) \cdot d_{i+1}}$. Now, $f(i) = c_i + d_i \cdot f(i-1)$ and $f(i-1) = a_{i-1} + b_{i-1} \cdot f(i)$. By solving these two equations, we find that $f(i) = \frac{c_i+d_i \cdot a_{i-1}}{1-d_i \cdot b_{i-1}}$. Time complexity: $\mathcal{O}(n\cdot \log M)$, where $M$ is the modulus (each step performs one modular inverse). Alternatively, after some calculations, it can be shown that $f(1) = 2^n - 1$. Knowing $f(0) = 0$ and $f(1) = 2^n - 1$, the relation between $f(i)$, $f(i-1)$ and $f(i-2)$ gives $f(i) = \frac{n\cdot f(i-1) - (i-1) \cdot f(i-2) - n}{n-i+1}$ (the coefficient $i-1$ comes from writing the recurrence at $x = i-1$), which is what the solution below implements.
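The elimination can be checked with exact rationals for small $n$. The sketch below (names ours) computes all $f(i)$ via the forward coefficients $a_i, b_i$ and closes the chain with $f(n) = 1 + f(n-1)$; for $n = 4$ it gives $f(1) = 15 = 2^4 - 1$, and the values satisfy $f(i) = \frac{n f(i-1) - (i-1) f(i-2) - n}{n-i+1}$, the same recurrence the modular solution uses:

```python
from fractions import Fraction as F

def f_values(n):
    # forward pass: f(i) = a_i + b_i * f(i+1), exactly as in the editorial
    a = {1: F(1)}
    b = {1: F(n - 1, n)}
    for i in range(2, n):
        den = n - i * b[i - 1]
        a[i] = (n + i * a[i - 1]) / den
        b[i] = F(n - i) / den
    f = {0: F(0)}
    if n == 1:
        f[1] = F(1)
        return f
    # close the chain with f(n) = 1 + f(n-1), then back-substitute
    f[n - 1] = (a[n - 1] + b[n - 1]) / (1 - b[n - 1])
    f[n] = 1 + f[n - 1]
    for i in range(n - 2, 0, -1):
        f[i] = a[i] + b[i] * f[i + 1]
    return f
```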
[ "combinatorics", "dp", "math", "probabilities" ]
2100
#include <bits/stdc++.h>
using namespace std;
#define ll long long

template<int MOD>
struct ModInt {
    unsigned x;
    ModInt() : x(0) { }
    ModInt(signed sig) : x(sig) { }
    ModInt(signed long long sig) : x(sig % MOD) { }
    int get() const { return (int)x; }
    ModInt pow(ll p) {
        ModInt res = 1, a = *this;
        while (p) {
            if (p & 1) res *= a;
            a *= a;
            p >>= 1;
        }
        return res;
    }
    ModInt &operator+=(ModInt that) { if ((x += that.x) >= MOD) x -= MOD; return *this; }
    ModInt &operator-=(ModInt that) { if ((x += MOD - that.x) >= MOD) x -= MOD; return *this; }
    ModInt &operator*=(ModInt that) { x = (unsigned long long)x * that.x % MOD; return *this; }
    ModInt &operator/=(ModInt that) { return (*this) *= that.pow(MOD - 2); }
    ModInt operator+(ModInt that) const { return ModInt(*this) += that; }
    ModInt operator-(ModInt that) const { return ModInt(*this) -= that; }
    ModInt operator*(ModInt that) const { return ModInt(*this) *= that; }
    ModInt operator/(ModInt that) const { return ModInt(*this) /= that; }
    bool operator<(ModInt that) const { return x < that.x; }
    friend ostream& operator<<(ostream &os, ModInt a) { os << a.x; return os; }
};
typedef ModInt<998244353> mint;

int main() {
    ios::sync_with_stdio(false);
    cin.tie(0);
    cout.tie(0);
    mint two = 2;
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        string a, b;
        cin >> a >> b;
        int cnt = 0;
        for (int i = 0; i < n; i++) {
            if (a[i] != b[i]) cnt++;
        }
        vector<mint> dp(n + 2);
        dp[0] = two.pow(n);
        dp[1] = dp[0] - 1;
        dp[0] = 0;
        for (long long i = 2; i < n; i++) {
            mint x = i - 1;
            x /= n;
            dp[i] = (dp[i - 1] - x * dp[i - 2] - 1) * n / (n - i + 1);
        }
        dp[n] = dp[n - 1] + 1;
        cout << dp[cnt] << '\n';
    }
    return 0;
}
1778
E
The Tree Has Fallen!
Recently, a tree has fallen on Bob's head from the sky. The tree has $n$ nodes. Each node $u$ of the tree has an integer number $a_u$ written on it. But the tree has no fixed root, as it has fallen from the sky. Bob is currently studying the tree. To add some twist, Alice proposes a game. First, Bob chooses some node $r$ to be the root of the tree. After that, Alice chooses a node $v$ and tells him. Bob then can pick one or more nodes from the subtree of $v$. His score will be the bitwise XOR of all the values written on the nodes picked by him. Bob has to find the maximum score he can achieve for the given $r$ and $v$. As Bob is not a good problem-solver, he asks you to help him find the answer. Can you help him? You need to find the answers for several combinations of $r$ and $v$ for the same tree. Recall that a tree is a connected undirected graph without cycles. The subtree of a node $u$ is the set of all nodes $y$ such that the simple path from $y$ to the root passes through $u$. Note that $u$ is in the subtree of $u$.
At first, consider a simpler problem: given an array, find the maximum subset XOR. This can be solved very efficiently using a well-known technique called the XOR basis (a linear basis over $\mathbb{F}_2$). In problem E, first fix any node as the root of the tree; call this rooted tree the base tree. Then start an Euler tour from the root and assign a discovery time $d_u$ and a finishing time $f_u$ to each node $u$. In each query, three types of cases can occur (node $r$ and node $v$ are from the query format): $r=v$; in this case, we need the maximum subset XOR of the whole tree. Node $v$ is not an ancestor of node $r$ in the base tree; the subtree of node $v$ then remains the same. Node $v$ is an ancestor of node $r$ in the base tree; what is the new subtree of node $v$ in this case? This is a bit tricky. Let $c$ denote the node that is a child of node $v$ and an ancestor of node $r$ in the base tree. Then the new subtree of node $v$ contains the whole tree except the subtree (in the base tree) of node $c$. Let $in_u$ be the XOR basis of all the values in node $u$'s subtree (in the base tree). We can build $in_u$ by inserting the value $a_u$ into $in_u$ and merging it with the bases $in_w$ of all its children $w$. Two bases can be merged in $O(\log^2 d)$ complexity, where $d = \max(a_i)$. With $in_u$ built for every node $u$, we can answer cases $1$ and $2$: for case $2$, find the maximum subset XOR in the corresponding basis; for case $1$, do the same in the basis $in_{root}$, where $root$ is the root node of the base tree. For case $3$, let $out_u$ be the XOR basis of all the values of the base tree except the node $u$'s subtree (in the base tree); the answer for case $3$ is the maximum subset XOR in the basis $out_c$.
To build the basis $out_u$ for each node $u$, we can utilize the properties of the discovery time $d_u$ and the finishing time $f_u$. Which nodes lie outside the subtree of node $u$? Exactly the nodes $w$ with either $d_w < d_u$ or $d_w > f_u$. To merge their bases easily, we can pre-calculate two basis arrays $pre[]$ and $suf[]$, where the basis $pre_i$ includes all the values of the nodes $w$ such that $d_w \leq i$ and the basis $suf_i$ includes all the values of the nodes $w$ such that $d_w \geq i$. Then $out_u$ is the merge of $pre_{d_u - 1}$ and $suf_{f_u + 1}$. To find the node $c$ in case $3$, we can perform a binary search on the children of node $v$, using the facts that the order of the discovery times follows the order of the children and that a node $c$ is an ancestor of a node $r$ iff $d_c \leq d_r$ and $f_c \geq f_r$. Time complexity: $\mathcal{O}((n+q)\log^2 d)$, where $d = \max(a_i)$ and $q$ is the number of queries.
[ "bitmasks", "dfs and similar", "math", "trees" ]
2,500
#include <bits/stdc++.h>
using namespace std;

#define ll long long
#define pb push_back
#define nn '\n'
#define fastio std::ios_base::sync_with_stdio(false); cin.tie(NULL);

const int sz = 2e5 + 10, d = 30;

vector<int> g[sz], Tree[sz];
int a[sz], discover_time[sz], finish_time[sz], nodeOf[sz], tim;

struct BASIS {
    int basis[d];
    int sz;
    void init() {
        for (int i = 0; i < d; i++) basis[i] = 0;
        sz = 0;
    }
    void insertVector(int mask) {
        for (int i = d - 1; i >= 0; i--) {
            if (((mask >> i) & 1) == 0) continue;
            if (!basis[i]) { basis[i] = mask; ++sz; return; }
            mask ^= basis[i];
        }
    }
    void mergeBasis(const BASIS &from) {
        for (int i = d - 1; i >= 0; i--) {
            if (!from.basis[i]) continue;
            insertVector(from.basis[i]);
        }
    }
    int findMax() {
        int ret = 0;
        for (int i = d - 1; i >= 0; i--) {
            if (!basis[i] || (ret >> i & 1)) continue;
            ret ^= basis[i];
        }
        return ret;
    }
} in[sz], out, pre[sz], suf[sz];

void in_dfs(int u, int p) {
    in[u].insertVector(a[u]);
    discover_time[u] = ++tim;
    nodeOf[tim] = u;
    for (auto &v : g[u]) {
        if (v == p) continue;
        Tree[u].pb(v);
        in_dfs(v, u);
        in[u].mergeBasis(in[v]);
    }
    finish_time[u] = tim;
}

inline bool in_subtree(int sub_root, int v) {
    return discover_time[sub_root] <= discover_time[v] && finish_time[sub_root] >= finish_time[v];
}

// Binary search over the children of sub_root (ordered by discovery time)
// for the child whose subtree contains v.
int findChildOnPath(int sub_root, int v) {
    int lo = 0, hi = (int)Tree[sub_root].size() - 1;
    while (lo <= hi) {
        int mid = (lo + hi) >> 1, node = Tree[sub_root][mid];
        if (finish_time[node] < discover_time[v]) lo = mid + 1;
        else if (discover_time[node] > discover_time[v]) hi = mid - 1;
        else return node;
    }
    return -1; // unreachable: v is guaranteed to lie below sub_root
}

void init(int n) {
    for (int i = 0; i <= n + 5; i++) {
        g[i].clear(), Tree[i].clear();
        in[i].init();
        pre[i].init(), suf[i].init();
    }
    tim = 0;
}

int main() {
    fastio;
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        init(n);
        for (int i = 1; i <= n; i++) cin >> a[i];
        for (int i = 1; i < n; i++) {
            int u, v;
            cin >> u >> v;
            g[u].pb(v);
            g[v].pb(u);
        }
        in_dfs(1, -1);
        for (int i = 1; i <= n; i++) {
            pre[i].insertVector(a[nodeOf[i]]);
            pre[i].mergeBasis(pre[i - 1]);
        }
        for (int i = n; i >= 1; i--) {
            suf[i].insertVector(a[nodeOf[i]]);
            suf[i].mergeBasis(suf[i + 1]);
        }
        int q;
        cin >> q;
        while (q--) {
            int root, v;
            cin >> root >> v;
            if (root == v) {
                cout << in[1].findMax() << nn;
            } else if (in_subtree(v, root)) {
                int child = findChildOnPath(v, root);
                out.init();
                out.mergeBasis(pre[discover_time[child] - 1]);
                out.mergeBasis(suf[finish_time[child] + 1]);
                cout << out.findMax() << nn;
            } else {
                cout << in[v].findMax() << nn;
            }
        }
    }
    return 0;
}
1778
F
Maximizing Root
You are given a rooted tree consisting of $n$ vertices numbered from $1$ to $n$. Vertex $1$ is the root of the tree. Each vertex has an integer value. The value of $i$-th vertex is $a_i$. You can do the following operation at most $k$ times. - Choose a vertex $v$ \textbf{that has not been chosen before} and an integer $x$ such that $x$ is a common divisor of the values of all vertices of the subtree of $v$. Multiply by $x$ the value of each vertex in the subtree of $v$. What is the maximum possible value of the root node $1$ after at most $k$ operations? Formally, you have to maximize the value of $a_1$. A tree is a connected undirected graph without cycles. A rooted tree is a tree with a selected vertex, which is called the root. The subtree of a node $u$ is the set of all nodes $y$ such that the simple path from $y$ to the root passes through $u$. Note that $u$ is in the subtree of $u$.
Let $x_u$ be the value of node $u$ and let $dp[u][d]$ be the minimum number of moves required to make the GCD of the subtree of $u$ a multiple of $d$. Then $dp[u][d] = 0$ if the subtree GCD of node $u$ is already a multiple of $d$, and $dp[u][d] = \infty$ if $x_u^2$ is not a multiple of $d$. Consider performing the move on the subtree of $u$ with some multiplier $y$; this is only possible and useful when $y$ is a divisor of both $d$ and $x_u$ and $(x_u \cdot y)$ is a multiple of $d$. In this case, before performing the move on the subtree of $u$, we have to make the GCD of the subtree of every child of $u$ a multiple of $LCM(\frac{d}{y}, y)$. This is because we have to make each node of the subtree a multiple of $\frac{d}{y}$ to get a multiple of $d$ after multiplying by $y$, and, to be allowed to perform the move with $y$ at all, the value of each subtree node must already be a multiple of $y$. So, $dp[u][d]$ is calculated from $dp[v][LCM(\frac{d}{y}, y)]$ over all divisors $y$ of $d$ and all children $v$ of $u$. Finally, $x_1\cdot D$ is the answer, where $D$ is the largest divisor of $x_1$ such that $k \geq dp[1][D]$. Time Complexity: $O(n\cdot m^2)$, where $m$ is the number of divisors of $x_1$.
[ "dfs and similar", "dp", "graphs", "math", "number theory", "trees" ]
2,600
#include <bits/stdc++.h>
using namespace std;

const int N = 100005;
const int mod = 998244353;

int val[N];
vector<int> g[N];
vector<int> divisor[N];
int subtree_gcd[N], par[N];
int dp[N][1003];
int gcdd[1003][1003];

inline long long ___gcd(long long a, long long b) {
    if (gcdd[a][b]) return gcdd[a][b];
    return gcdd[a][b] = __gcd(a, b);
}

inline long long lcm(long long a, long long b) {
    return (a / ___gcd(a, b)) * b;
}

void dfs(int u, int p) {
    par[u] = p;
    subtree_gcd[u] = val[u];
    for (int v : g[u]) {
        if (v == p) continue;
        dfs(v, u);
        subtree_gcd[u] = ___gcd(subtree_gcd[u], subtree_gcd[v]);
    }
}

// Minimum number of moves to make the GCD of u's subtree a multiple of d.
int solve(int u, int d, int p) {
    if (subtree_gcd[u] % d == 0) return 0;
    if ((val[u] * val[u]) % d) return (1 << 30);
    if (dp[u][d] != -1) return dp[u][d];
    long long res = (1 << 30);
    for (int div : divisor[val[u]]) {
        if ((val[u] * div) % d == 0 && d % div == 0) {
            long long r = 1; // one move performed on u itself
            for (int v : g[u]) {
                if (v == p) continue;
                r += solve(v, lcm(d / div, div), u);
            }
            res = min(res, r);
        }
    }
    return dp[u][d] = min(res, (1LL << 30));
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    for (int i = 2; i < 1001; i++)
        for (int j = i; j < 1001; j += i)
            divisor[j].push_back(i);
    int t;
    cin >> t;
    while (t--) {
        int n, k;
        cin >> n >> k;
        for (int i = 0; i <= n; i++) g[i].clear();
        for (int i = 1; i <= n; i++) cin >> val[i];
        for (int i = 0; i <= n; i++)
            for (int d : divisor[val[1]])
                dp[i][d] = -1;
        for (int i = 0; i < n - 1; i++) {
            int u, v;
            cin >> u >> v;
            g[u].push_back(v);
            g[v].push_back(u);
        }
        dfs(1, 0);
        int ans = val[1];
        for (int d : divisor[val[1]]) {
            int req = 0, f = 1;
            for (int v : g[1]) {
                int x = solve(v, d, par[v]);
                if (x > n) f = 0;
                req += x;
            }
            if (!f) continue;
            req++; // the move on the root itself
            if (req <= k) ans = max(ans, val[1] * d);
        }
        cout << ans << "\n";
    }
    return 0;
}
1779
A
Hall of Fame
Thalia is a Legendary Grandmaster in chess. She has $n$ trophies in a line numbered from $1$ to $n$ (from left to right) and a lamp standing next to each of them (the lamps are numbered as the trophies). A lamp can be directed either to the left or to the right, and it illuminates all trophies in that direction (but not the one it is next to). More formally, Thalia has a string $s$ consisting only of characters 'L' and 'R' which represents the lamps' current directions. The lamp $i$ illuminates: - trophies $1,2,\ldots, i-1$ if $s_i$ is 'L'; - trophies $i+1,i+2,\ldots, n$ if $s_i$ is 'R'. She can perform the following operation \textbf{at most} once: - Choose an index $i$ ($1 \leq i < n$); - Swap the lamps $i$ and $i+1$ (without changing their directions). That is, swap $s_i$ with $s_{i+1}$. Thalia asked you to illuminate all her trophies (make each trophy illuminated by at least one lamp), or to tell her that it is impossible to do so. If it is possible, you can choose to perform an operation or to do nothing. Notice that lamps \textbf{cannot} change direction, it is only allowed to swap adjacent ones.
What happens when $\texttt{L}$ appears after some $\texttt{R}$ in the string? Suppose that there exists an index $i$ such that $s_i = \texttt{R}$ and $s_{i+1} = \texttt{L}$. Lamp $i$ illuminates trophies $i+1,i+2,\ldots n$ and lamp $i+1$ illuminates $1,2,\ldots i$. We can conclude that all trophies are illuminated if $\texttt{L}$ appears right after some $\texttt{R}$. So, strings $\texttt{LLRRLL}, \texttt{LRLRLR}, \texttt{RRRLLL}, \ldots$ do not require any operations to be performed on them, since they represent configurations of lamps in which all trophies are already illuminated. Now, we consider the case when such $i$ does not exist and think about how we can use the operation once. Notice that if $\texttt{R}$ appears right after some $\texttt{L}$, an operation can be used to transform $\texttt{LR}$ into $\texttt{RL}$, and we have concluded before that all trophies are illuminated in that case. So, if $\texttt{LR}$ appears in the string, we perform the operation on it. An edge case is when $\texttt{L}$ and $\texttt{R}$ are never adjacent (neither $\texttt{LR}$ nor $\texttt{RL}$ appears). Notice that $s_i = s_{i+1}$ must hold for $i=1,2,\ldots n-1$ in that case, meaning that $\texttt{LL} \ldots \texttt{L}$ and $\texttt{RR} \ldots \texttt{R}$ are the only impossible strings for which the answer is $-1$. Solve the task in which $q \leq 10^5$ range queries are given: for each segment $[l,r]$ print the required index $l \leq i < r$, $0$ or $-1$.
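The two scans described above can be sketched as follows (a minimal illustration, not the official implementation; the function name and the return convention — $1$-based swap index, $0$ for "no operation needed", $-1$ for impossible — are ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns 0 if all trophies are already illuminated (an "RL" exists),
// the 1-based index i to swap if an "LR" can be turned into "RL",
// or -1 if the string is all-'L' or all-'R'.
int solveHallOfFame(const string &s) {
    int n = (int)s.size();
    for (int i = 0; i + 1 < n; i++)
        if (s[i] == 'R' && s[i + 1] == 'L') return 0;     // already good
    for (int i = 0; i + 1 < n; i++)
        if (s[i] == 'L' && s[i + 1] == 'R') return i + 1; // swap makes "RL"
    return -1; // 'L' and 'R' are never adjacent: all 'L' or all 'R'
}
```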
[ "constructive algorithms", "greedy", "strings" ]
800
null
1779
B
MKnez's ConstructiveForces Task
MKnez wants to construct an array $s_1,s_2, \ldots , s_n$ satisfying the following conditions: - Each element is an integer number different from $0$; - For each pair of adjacent elements their sum is equal to the sum of the whole array. More formally, $s_i \neq 0$ must hold for each $1 \leq i \leq n$. Moreover, it must hold that $s_1 + s_2 + \cdots + s_n = s_i + s_{i+1}$ for each $1 \leq i < n$. Help MKnez to construct an array with these properties or determine that it does not exist.
There always exists an answer for even $n$. Can you find it? There always exists an answer for odd $n \geq 5$. Can you find it? If $n$ is even, the array $[-1,1,-1,1, \ldots ,-1,1]$ is a solution. The sum of any two adjacent elements is $0$, as well as the sum of the whole array. Suppose that $n$ is odd now. Since $s_{i-1} + s_i$ and $s_i + s_{i+1}$ are both equal to the sum of the whole array for each $i=2,3,\ldots n-1$, it must also hold that $s_{i-1} + s_i = s_i + s_{i+1}$, which is equivalent to $s_{i-1} = s_{i+1}$. Let's fix $s_1 = a$ and $s_2 = b$. The condition above produces the array $s = [a,b,a,b, \ldots a,b,a]$ (remember that we consider an odd $n$). Let $k$ be a positive integer such that $n = 2k+1$. The sum of any two adjacent elements is $a+b$ and the sum of the whole array is $(k+1)a + kb$. Since the two values are equal, we can conclude that $ka + (k-1)b = 0$. $a=k-1$ and $b=-k$ produces an answer. But, we must be careful with $a=0$ and $b=0$ since that is not allowed. If $k=1$ then $ka+(k-1)b=0$ implies $ka=0$ and $a=0$, so for $n=2\cdot 1 + 1 = 3$ an answer does not exist. Otherwise, one can see that $a=k-1$ and $b=-k$ will be non-zero, which produces a valid answer. So, the array $[k-1, -k, k-1, -k, \ldots, k-1, -k, k-1]$ is an answer for $k \geq 2$ ($n \geq 5$). Solve a generalized task with given $m$ - find an array $a_1,a_2, \ldots a_n$ ($a_i \neq 0$) such that $a_i + a_{i+1} + \ldots a_{i+m-1}$ is equal to the sum of the whole array for each $i=1,2,\ldots n-m+1$ (or determine that it is impossible to find such array).
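The construction above translates directly into code (a hypothetical helper; it returns an empty vector for the impossible case $n=3$):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Builds an array of n nonzero integers in which every adjacent pair
// sums to the total sum, or returns {} when n == 3 (no answer exists).
vector<long long> buildArray(int n) {
    vector<long long> s(n);
    if (n % 2 == 0) {
        // [-1, 1, -1, 1, ...]: adjacent sums and the total are all 0
        for (int i = 0; i < n; i++) s[i] = (i % 2 == 0) ? -1 : 1;
    } else {
        int k = n / 2;        // n = 2k + 1
        if (k < 2) return {}; // n == 3: ka = 0 forces a = 0, impossible
        // [k-1, -k, k-1, ..., k-1]: k(k-1) + (k-1)(-k) = 0 as required
        for (int i = 0; i < n; i++) s[i] = (i % 2 == 0) ? (k - 1) : -k;
    }
    return s;
}
```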
[ "constructive algorithms", "math" ]
900
null
1779
C
Least Prefix Sum
Baltic, a famous chess player who is also a mathematician, has an array $a_1,a_2, \ldots, a_n$, and he can perform the following operation several (possibly $0$) times: - Choose some index $i$ ($1 \leq i \leq n$); - multiply $a_i$ with $-1$, that is, set $a_i := -a_i$. Baltic's favorite number is $m$, and he wants $a_1 + a_2 + \cdots + a_m$ to be the smallest of all non-empty prefix sums. More formally, for each $k = 1,2,\ldots, n$ it should hold that $$a_1 + a_2 + \cdots + a_k \geq a_1 + a_2 + \cdots + a_m.$$ Please note that multiple smallest prefix sums may exist and that it is only required that $a_1 + a_2 + \cdots + a_m$ is one of them. Help Baltic find the minimum number of operations required to make $a_1 + a_2 + \cdots + a_m$ the least of all prefix sums. It can be shown that a valid sequence of operations always exists.
Try a greedy approach. What data structure supports inserting an element, finding the maximum, and erasing the maximum? That's right: a binary heap, i.e. the STL priority_queue. Let $p_i = a_1 + a_2 + \ldots + a_i$ and suppose that $p_x < p_m$ for some $x < m$. Let $x$ be the greatest such integer. Performing an operation on any element in the segment $[1,x]$ does nothing, since $p_m - p_x$ stays the same. Similarly, performing an operation on any element in the segment $[m+1,n]$ does not affect it. A greedy idea is to choose the maximal element in the segment $[x+1,m]$ and perform an operation on it, because that decreases $p_m$ as much as possible. Repeat this process until $p_m$ becomes less than or equal to $p_x$. It might happen that a new $p_y$ with $p_y < p_m$ and $y < x$ emerges. In that case, simply repeat the algorithm until $p_m$ is less than or equal to every prefix sum to its "left". Now suppose that $p_x < p_m$ for some $x > m$. The idea is the same: choose a minimal element in the segment $[m+1,x]$ and perform an operation on it, as that increases $p_x$ as much as possible. Repeat the algorithm as long as such an $x$ exists. To implement this, solve the two cases independently. Let's describe the first case, as the second one is analogous. Iterate over $i$ from $m$ to $1$ and maintain a priority queue. If $p_i < p_m$, pop the queue (possibly multiple times) and decrease $p_m$ accordingly (we simulate performing the "optimal" operations). Notice that one does not have to update any prefix sum other than $p_m$. Add $a_i$ to the priority queue afterwards. The time complexity is $O(n \log n)$. Solve the task for each $m=1,2,\ldots, n$, i.e. print $n$ integers: the minimum number of operations required to make $a_1 + a_2 + \ldots + a_m$ a least prefix sum for each $m$ (the tasks are independent).
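The two passes can be sketched like this (a condensed illustration of the idea with hypothetical naming; a is 0-based and m is the 1-based target position):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimum number of sign flips so that p_m = a_1 + ... + a_m becomes a
// smallest prefix sum.
long long minFlips(const vector<long long>& a, int m) {
    int n = (int)a.size();
    long long ops = 0, s = 0;
    // Left side: require a_{k+1} + ... + a_m <= 0 for every k < m.
    // Whenever the window sum turns positive, flip the largest element.
    priority_queue<long long> mx;
    for (int i = m - 1; i >= 1; i--) { // 0-based index of a_{i+1}
        mx.push(a[i]); s += a[i];
        while (s > 0) { s -= 2 * mx.top(); mx.pop(); ops++; }
    }
    // Right side: require a_{m+1} + ... + a_k >= 0 for every k > m.
    // Whenever the window sum turns negative, flip the smallest element.
    priority_queue<long long, vector<long long>, greater<long long>> mn;
    s = 0;
    for (int i = m; i < n; i++) {
        mn.push(a[i]); s += a[i];
        while (s < 0) { s -= 2 * mn.top(); mn.pop(); ops++; }
    }
    return ops;
}
```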
[ "data structures", "greedy" ]
1,600
null
1779
D
Boris and His Amazing Haircut
Boris thinks that chess is a tedious game. So he left his tournament early and went to a barber shop as his hair was a bit messy. His current hair can be described by an array $a_1,a_2,\ldots, a_n$, where $a_i$ is the height of the hair standing at position $i$. His desired haircut can be described by an array $b_1,b_2,\ldots, b_n$ in a similar fashion. The barber has $m$ razors. Each has its own size and can be used \textbf{at most} once. In one operation, he chooses a razor and cuts a segment of Boris's hair. More formally, an operation is: - Choose any razor which hasn't been used before, let its size be $x$; - Choose a segment $[l,r]$ ($1\leq l \leq r \leq n$); - Set $a_i := \min (a_i,x)$ for each $l\leq i \leq r$; Notice that some razors might have equal sizes — the barber can choose some size $x$ only as many times as the number of razors with size $x$. He may perform as many operations as he wants, as long as any razor is used at most once and $a_i = b_i$ for each $1 \leq i \leq n$ at the end. He \textbf{does not} have to use all razors. Can you determine whether the barber can give Boris his desired haircut?
If $a_i < b_i$ for some $i$, then an answer does not exist, since a cut cannot make a hair taller. If you choose to perform a cut on some segment $[l,r]$ with a razor of size $x$, you can "greedily" extend it (decrease $l$ and increase $r$) as long as $x \geq b_i$ for each $i$ in that segment and still obtain a correct solution. There are solutions based on various data structures (segment tree, DSU, STL maps and sets, $\ldots$), but there is also a simple solution with an STL stack. Consider $a_n$ and $b_n$. If $b_n$ is greater, the answer is NO, since this is the impossible case (see the "Stupid Hint" section). If $a_n$ is greater, then a cut on a range $[l,n]$ with a razor of size $b_n$ has to be performed, and $l$ should be as small as possible (see the "Hint" section). For each $i$ in the range, if $a_i$ becomes exactly equal to $b_i$, we consider position $i$ satisfied. If $a_n$ and $b_n$ are equal, we simply pop both arrays' ends (we ignore those values, as they are already satisfied) and continue the algorithm. On to the implementation. We keep track of (and count) each razor size we must use. This can be done by putting the corresponding sizes into some container (array or vector) and checking at the end whether it is a subset of $x_1,x_2,\ldots, x_m$ (the input array of available razors); one can use sorting or maps. This part works in $O(n \log n + m \log m)$ time. Implementing the cuts is more challenging, though. We keep a monotone stack which represents all cuts that are valid until "now" (more formally, all cuts whose $l$ is $\leq i$; the value of $l$ will be determined later). The top of the stack represents the smallest razor, and the razor sizes do not decrease as we pop. So, we pop the stack as long as the top is smaller than $b_n$ (since performing an operation which makes $a_i$ less than $b_i$ is not valid).
After this, if the new top is exactly equal to $b_n$ we can conclude that we have satisfied $a_n = b_n$ with some previous cut and we simply continue our algorithm. Otherwise, we add $b_n$ to the stack as a cut must be performed. This part works in $O(n + m)$ time. Total complexity is $O(n \log n + m \log m)$ because of sorting/mapping. Solve the task in $O(n+m)$ total complexity.
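The stack logic can be sketched with a left-to-right scan, which mirrors the right-to-left description above (a sketch with hypothetical naming, not the official implementation):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns true iff b is reachable from a using each razor in x at most
// once. st is the monotone stack of active cut sizes (top = smallest).
bool canCut(vector<int> a, vector<int> b, vector<int> x) {
    int n = (int)a.size();
    multiset<int> razors(x.begin(), x.end());
    vector<int> st;
    for (int i = 0; i < n; i++) {
        if (a[i] < b[i]) return false; // a cut cannot make hair taller
        // cuts smaller than b[i] cannot cover position i: they end here
        while (!st.empty() && st.back() < b[i]) st.pop_back();
        if (a[i] == b[i]) continue;                     // already satisfied
        if (!st.empty() && st.back() == b[i]) continue; // covered by a cut
        auto it = razors.find(b[i]); // a new cut of size b[i] starts here
        if (it == razors.end()) return false;
        razors.erase(it);
        st.push_back(b[i]);
    }
    return true;
}
```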
[ "constructive algorithms", "data structures", "dp", "dsu", "greedy", "sortings" ]
1,700
null
1779
E
Anya's Simultaneous Exhibition
This is an interactive problem. Anya has gathered $n$ chess experts numbered from $1$ to $n$ for which the following properties hold: - For any pair of players one of the players wins every game against the other (and no draws ever occur); - Transitivity does not necessarily hold — it might happen that $A$ always beats $B$, $B$ always beats $C$ and $C$ always beats $A$. Anya \textbf{does not} know, for each pair, who is the player who beats the other. To organize a tournament, Anya hosts $n-1$ games. In each game, she chooses two players. One of them wins and stays, while the other one is disqualified. After all the games are hosted only one player will remain. A player is said to be a candidate master if they can win a tournament (notice that the winner of a tournament may depend on the players selected by Anya in the $n-1$ games). Since Anya is a curious girl, she is interested in finding the candidate masters. Unfortunately, she does not have much time. To speed up the process, she will organize up to $2n$ simuls (short for "simultaneous exhibition", in which one player plays against many). In one simul, Anya chooses \textbf{exactly one} player who will play against some (at least one) of the other players. The chosen player wins all games they would win in a regular game, and the same holds for losses. After the simul finishes, Anya is only told the total number of games won by the chosen player (but not which ones). Nobody is disqualified during a simul. Can you help Anya host simuls and determine the candidate masters? The winning players in each pair \textbf{could be} changed between the simuls, but only in a way that preserves the results of all previous simuls. These changes may depend on your queries.
A tournament graph is given. Player $i$ is a candidate master if for every other player there exists a path from $i$ to them. Can you find one candidate master? (That helps in finding all of them.) Statement: If player $i$ has the highest out-degree, then they are a candidate master. Proof: Let's prove a stronger claim: if player $i$ has the highest out-degree, then they reach every other player in $1$ or $2$ edges. Let $S_1$ be the set of players immediately reachable from $i$ and let $S_2$ be the set of the other players (not including $i$). Choose some $x \in S_2$. If $x$ is not reachable from $i$ in at most $2$ edges, then it has an edge to $i$ as well as to every player in $S_1$, meaning that the out-degree of $x$ is at least $|S_1|+1$. This is a contradiction, since $i$ has out-degree exactly $|S_1|$ and we assumed that $i$ has the highest out-degree. So, every $x \in S_2$ is reachable from $i$, which proves the claim. Statement: There exists an integer $w$ such that player $i$ is a candidate master if and only if their out-degree is greater than or equal to $w$. Proof: Let $S_1, S_2, \ldots, S_k$ be the strongly connected components (SCCs) of the tournament graph in topological order ($S_1$ is the "highest" component, while $S_k$ is the "lowest"). Since the graph is a tournament, there exists a directed edge from $x$ to $y$ for each $x \in S_i$, $y \in S_j$, $i<j$. We can also conclude that $x$ has a higher out-degree than $y$ for each such pair. In particular, this holds for $i=1$, which proves the lemma, since $S_1$ is exactly the set of candidate masters and each player in it has strictly higher out-degree than every player not in it. For each player, we host a simul which includes every other player (but not themselves). This tells us all the out-degrees, and we can easily find one candidate master (the one with the highest out-degree).
The second step is to sort all players by out-degree in non-increasing order and maintain the current $w$ described in "Lemma 2". Its initial value is the out-degree of the first player in this order. As we iterate over players, we host additional simuls: if player $i$ wins a match against at least one player among $1,2, \ldots, j$ (the current set of candidate masters), then $i$ is also a candidate master, and so are $j+1, j+2, \ldots, i-1$; we update the set accordingly and decrease $w$. The first step requires $n$ simuls to be hosted, and the same holds for step $2$. In total, that is $2n$ simuls (or slightly fewer, depending on implementation). Solve the task if only $n-1$ simuls are allowed to be hosted.
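As an offline illustration of Lemma 2 (this is not the interactive protocol itself): once all out-degrees are known, the candidate masters are the top SCC, i.e. the smallest group of top players (sorted by wins) that accounts for all the games among themselves plus all the games against everyone else.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Given the number of wins of every player of a tournament, returns the
// number of candidate masters (the size of the top SCC).
int countCandidateMasters(vector<long long> wins) {
    int n = (int)wins.size();
    sort(wins.rbegin(), wins.rend());
    long long sum = 0;
    for (int k = 1; k <= n; k++) {
        sum += wins[k - 1];
        // Equality forces the top k players to win all k*(n-k) games
        // against the rest, on top of the C(k,2) games among themselves.
        if (sum == 1LL * k * (k - 1) / 2 + 1LL * k * (n - k)) return k;
    }
    return n; // the whole tournament is one SCC
}
```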
[ "constructive algorithms", "graphs", "greedy", "interactive", "sortings" ]
2,400
null
1779
F
Xorcerer's Stones
Misha had been banned from playing chess for good since he was accused of cheating with an engine. Therefore, he retired and decided to become a xorcerer. One day, while taking a walk in a park, Misha came across a rooted tree with nodes numbered from $1$ to $n$. The root of the tree is node $1$. For each $1\le i\le n$, node $i$ contains $a_i$ stones in it. Misha has recently learned a new spell in his xorcery class and wants to test it out. A spell consists of: - Choose some node $i$ ($1 \leq i \leq n$). - Calculate the bitwise XOR $x$ of all $a_j$ such that node $j$ is in the subtree of $i$ ($i$ belongs to its own subtree). - Set $a_j$ equal to $x$ for all nodes $j$ in the subtree of $i$. Misha can perform at most $2n$ spells and he wants to remove all stones from the tree. More formally, he wants $a_i=0$ to hold for each $1\leq i \leq n$. Can you help him perform the spells? A tree with $n$ nodes is a connected acyclic graph which contains $n-1$ edges. The subtree of node $i$ is the set of all nodes $j$ such that $i$ lies on the simple path from $1$ (the root) to $j$. We consider $i$ to be contained in its own subtree.
If the XOR of all stones equals $0$, then by performing one spell on the root we obtain an answer. Solve the task for even $n$: if $n$ is even, performing a spell on the root guarantees that all nodes hold the same number of stones, so the total XOR of the tree (an XOR of an even number of equal values) is $0$, and performing the same spell again makes all nodes have $0$ stones. Consider even and odd subtrees (by number of nodes). What does a spell do to them? A spell performed on an odd subtree does nothing useful: it does not change the total XOR, and it also makes performing a spell on some node inside that subtree useless, as their XORs would be $0$ anyway. Thus, we are only interested in even subtrees. Please refer to the hints as steps. Let's finish the casework. Let nodes $u$ and $v$ have even subtrees, and let the spell be performed on $u$, and then on $v$ slightly later. Consider the following $3$ cases: Case 1: $u$ is an ancestor of $v$; performing a spell on $v$ does not make sense, as its subtree's XOR is already $0$. Case 2: $v$ is an ancestor of $u$; performing a spell on $u$ does not make sense either, as it will be "eaten" by $v$ later. More formally, let $s_u$ be the current XOR of $u$'s subtree, and define $s_v$ and $s_1$ analogously. A spell performed on $u$ sets $s_v := s_v \oplus s_u$ and $s_1 := s_1 \oplus s_u$. Later, the spell performed on $v$ sets $s_1 := s_1 \oplus s_v$. Notice that the final state of $s_1$ is the same as if only the spell on $v$ was performed (since $s_u \oplus s_u = 0$). This means that the total XOR stays the same regardless of whether we perform the spell on $u$ or not. Case 3: neither $u$ nor $v$ is an ancestor of the other; this is the only case we are interested in, and we will use this as a fact. We only need to choose some subtrees such that their total XOR is equal to the XOR of the root. Why? Because after applying the spells to them the total XOR becomes $0$, and that problem has been solved in "Hint 1.1".
Of course, each pair of subtrees must satisfy the condition in case $3$, thus finding the subtrees is possible with dynamic programming and reconstruction. In each node we keep an array of size $32$ which tells us the possible XOR values obtained by performing spells on nodes in its subtree. Transitions are done for each edge, from child to parent, in similar fashion to the knapsack problem, but with XOR instead of sums. Time complexity is $O(n A^2)$ and memory complexity is $O(nA)$. It is possible to optimize this, but not necessary. Can you minimize the number of operations? The number of them is $\leq 6$ (in an optimal case).
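The knapsack-with-XOR transition mentioned above, together with the reconstruction, can be sketched in isolation (a hypothetical helper; in the actual solution the DP runs over the tree, while here it runs over a plain list of candidate subtree XORs):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Finds indices of a subset of vals whose XOR equals target (all values
// assumed < A, with A a power of two), or {-1} if no subset exists.
vector<int> subsetWithXor(const vector<int>& vals, int target, int A = 32) {
    int n = (int)vals.size();
    // reach[i][x]: some subset of vals[0..i-1] has XOR x
    vector<vector<char>> reach(n + 1, vector<char>(A, false));
    reach[0][0] = true;
    for (int i = 0; i < n; i++)
        for (int x = 0; x < A; x++)
            reach[i + 1][x] = reach[i][x] || reach[i][x ^ vals[i]];
    if (!reach[n][target]) return {-1};
    vector<int> picked;
    int x = target;
    for (int i = n; i >= 1; i--) {
        if (reach[i - 1][x]) continue; // vals[i-1] is not needed
        picked.push_back(i - 1);       // take vals[i-1]
        x ^= vals[i - 1];
    }
    return picked;
}
```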
[ "bitmasks", "constructive algorithms", "dp", "trees" ]
2,500
null
1779
G
The Game of the Century
The time has finally come, MKnez and Baltic are to host The Game of the Century. For that purpose, they built a village to lodge its participants. The village has the shape of an equilateral triangle delimited by three roads of length $n$. It is cut into $n^2$ smaller equilateral triangles, of side length $1$, by $3n-3$ additional roads which run parallel to the sides. See the figure for $n=3$. Each of the $3n$ roads is made of multiple (possibly $1$) road segments of length $1$ which connect adjacent intersections. The direction has already been chosen for each of the $3n$ roads (so, for each road, the same direction is assigned to all its road segments). Traffic can only go in the specified directions (i. e. the roads are monodirectional). You are tasked with making adjustments to the traffic plan so that from each intersection it is possible to reach every other intersection. Specifically, you can invert the traffic direction of any number of road segments of length $1$. What is the minimal number of road segments for which you need to invert the traffic direction?
Consider the sides of the big triangle. If they have the same orientation (clockwise or counter-clockwise), $0$ segments have to be inverted, since the village is already strongly connected. If the three sides do not all have the same orientation, inverting some segments is necessary. The following picture represents the situation (or rather, one case, but every other one is analogous). There are two "major" paths: $A \rightarrow C$ and $A \rightarrow B \rightarrow C$. Each road has a "beginning" on one of the paths, hence it is possible to reach every intersection from $A$. Similarly, $C$ can be reached from every other intersection. The problem is that $C$ acts as a sink, as it cannot reach anything. To make the village strongly connected, we will make it possible for $C$ to reach $A$ by inverting the smallest number of road segments. Intuitively, that works since for every intersection $x$ there exists a cycle $A \rightarrow x \rightarrow C \rightarrow A$. A formal proof is given at the end of this section. The task is to find a shortest path from $C$ to $A$, with edges having weights of $0$ (its direction is already as desired) and $1$ (that edge has to be inverted). To implement this in $O(n)$, one has to notice that we are only interested in the closest road of each direction to the corresponding side of the big triangle. One can prove that by geometrically looking at the strongly connected components, and maybe some casework is required; the implementation, however, is free of boring casework. Now, it is possible to build a graph with the vertices which belong to at least one of the $6$ roads we take into consideration (there are $3$ pairs of opposite directions). One can run a $0$-$1$ BFS and obtain the shortest path. This will indeed make the village strongly connected, as a shortest path does not actually pass through sides $A \rightarrow B$ and $B \rightarrow C$ (why would it? it is not optimal to make the effort of reaching them just to invert some unnecessary additional road segments). The shortest path from $C$ to $A$ may intersect the side $A \rightarrow C$, though (possibly multiple times), but it can be proved that every intersection on it is reachable from $C$. Also, sides $A \rightarrow B$ and $B \rightarrow C$ are reachable from $C$, since $A$ is reachable from $C$. This means that all sides are reachable from both $A$ and $C$, and so is every other intersection. Solve the problem if it is required to process $q \leq 10^5$ queries: invert a given road (and every segment in it), then print the minimum number of road segments you need to additionally invert to make the village strongly connected (without performing them; the queries are independent).
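The $0$-$1$ BFS itself is standard: a deque replaces Dijkstra's priority queue, with $0$-edges relaxed to the front and $1$-edges to the back. A generic sketch (the construction of the graph over the $6$ relevant roads is omitted):

```cpp
#include <bits/stdc++.h>
using namespace std;

// 0-1 BFS over a directed graph given as (from, to, weight) triples
// with weights 0 (direction already as desired) or 1 (must be inverted).
vector<int> zeroOneBfs(int n, const vector<array<int,3>>& edges, int src) {
    vector<vector<pair<int,int>>> adj(n);
    for (const auto& e : edges) adj[e[0]].push_back({e[1], e[2]});
    vector<int> dist(n, INT_MAX);
    deque<int> dq;
    dist[src] = 0;
    dq.push_back(src);
    while (!dq.empty()) {
        int u = dq.front(); dq.pop_front();
        for (auto [v, w] : adj[u]) {
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                if (w == 0) dq.push_front(v); // 0-edge: keep distance level
                else dq.push_back(v);         // 1-edge: one level deeper
            }
        }
    }
    return dist;
}
```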
[ "constructive algorithms", "graphs", "shortest paths" ]
3,000
null
1779
H
Olympic Team Building
Iron and Werewolf are participating in a chess Olympiad, so they want to practice team building. They gathered $n$ players, where $n$ is a power of $2$, and they will play sports. Iron and Werewolf are among those $n$ people. One of the sports is tug of war. For each $1\leq i \leq n$, the $i$-th player has strength $s_i$. Elimination rounds will be held until only one player remains — we call that player the absolute winner. In each round: - Assume that $m>1$ players are still in the game, where $m$ is a power of $2$. - The $m$ players are split into two teams of equal sizes (i. e., with $m/2$ players in each team). The strength of a team is the sum of the strengths of its players. - If the teams have equal strengths, Iron chooses who wins; otherwise, the stronger team wins. - Every player in the losing team is eliminated, so $m/2$ players remain. Iron already knows each player's strength and is wondering who can become the absolute winner and who can't if he may choose how the teams will be formed in each round, as well as the winning team in case of equal strengths.
Huge thanks to dario2994 for helping me solve and prepare this problem. Try reversing the process. Start with some multiset $\{ x \}$ containing only $1$ element; we will try to make $x$ an absolute winner. Extend it by repeatedly adding subsets of equal size to it (which have sum less than or equal to the current one). A simple greedy solution is to always extend the current subset with the one which has the maximum sum. This, however, gives WA, as 100 100 101 98 89 103 103 104 111 680 1 1 1 1 1 2 is a counter-example (see it for yourself!). Statement: If it is possible to extend some multiset $A$ to the whole array $s$, we consider it winning. If $A \leq B$ holds for some subset $B$, then $B$ is also winning. Here, $A \leq B$ means that there exists a bijection $f\colon A\to B$ such that $x \leq f(x)$ for each $x \in A$. Proof: Let $A, A_1, A_2, \ldots, A_k$ be a winning sequence of extensions. By the definition of $A \leq B$, we can swap each element of $A$ with some other element. There are three cases: If we swap some $x \in A$ with an element which is already in $A$, nothing happens. If we swap some $x \in A$ with an element which is not in any $A_i$, the sum of $A$ increases, hence it is also winning. If we swap some $x \in A$ with an element which is contained in some $A_i$, the sum of $A$ increases, the sum of $A_i$ decreases, and the sum of sums of $A, A_1, \ldots, A_i$ stays the same. The sequence remains winning. A special case of the lemma is subsets of size $1$: if $x$ is winning and $x \leq y$, then so is $y$. Hence the answer is binary-searchable. Let's work with some fixed $s_i = x$ from now on. The idea is to maintain a list of the current possible winning subsets; the motivation of the lemma is to exclude subsets which cannot produce the answer.
We can see that $\{s_j, s_i\} \leq \{s_{i-1}, s_i\}$ for each $j < i-1$, hence the only interesting subset of size $2$ is $\{s_{i-1}, s_i\}$ (we excluded the others because if one of them produces an answer, then so does $\{s_{i-1}, s_i\}$; we will use this reasoning without explanation later on). We then extend $\{s_{i-1}, s_i\}$ further to subsets of size $4$. There will be at most $15$ new subsets, which can be proved using the lemma and two pointers. Next, extend the current subsets to have size $8$ and, of course, use the lemma again to exclude unnecessary subsets. Implementing it properly should be fast enough. A fact is that there will be at most $8000$ subsets we have to consider, although proving it is not trivial. Consider all subsets of size $4$ and the partial ordering $\leq$ on them. In our algorithm we have excluded all subsets with too large a sum and then excluded all subsets which are less than some other included subset. So the collection of remaining subsets is no larger than the maximum antichain of $4$-subsets with respect to $\leq$. Following the theory, a maximum antichain can have size at most $\max\limits_{k} f(k)$, where $f(k)$ is the number of $4$-subsets with indices having sum exactly $k$. Hard-coding this gives us an estimate of $519$. This value should be multiplied by $15$, since we had that many $4$-subsets to begin with; the result is less than $8000$, which is relatively small.
Extending the $8$-subsets further sounds like an impossible task, but greedy finally comes in handy: every $8$-subset should be extended with another $8$-subset which maximizes the sum. It is not hard to see that if an answer exists, then this produces a correct answer too. We are left with solving $8$-sum in an array of size $24$. As we need to do this around $8000$ times, we use the meet-in-the-middle approach. It can be done in around $2^{12}$ operations if merge sort is used (it spares us the "$12$" factor in $12 \cdot 2^{12}$). Summary: solving the task for $n \leq 16$ is trivial; let's fix $n = 32$.
Calculating the exact complexity does not make much sense as $n$ is fixed, but we can estimate the number of basic arithmetic operations used. Firstly, a binary search is used, which contributes a factor of $\log 32 = 5$. Then, we generate the $8$-subsets, which can be implemented quickly (but is not trivial). The meet-in-the-middle part takes about $8000 \cdot 2^{12}$ operations. The total number of operations is roughly $5 \cdot 8000 \cdot 4096 = 163\,840\,000$, which gives us a decent upper bound. Try to construct a strong test against fake greedy solutions :)
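The "trivial for $n \leq 16$" part mentioned in the summary can be sketched directly from the reversed process. The following is our own brute-force sketch (function names `canExtend`/`winners` are ours, not from the editorial): a survivor set $S$ is winning if some disjoint subset of the same size and not larger sum extends it, all the way up to the whole array.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Brute force over the reversed process for tiny n (a sketch; names are
// ours). A survivor set S (bitmask) can be extended if some disjoint subset
// T of the same size has sum(T) <= sum(S); player i can be the absolute
// winner iff {s_i} extends to the whole array.
bool canExtend(int S, int n, const vector<long long>& s) {
    int m = __builtin_popcount(S);
    if (m == n) return true;
    long long cur = 0;
    for (int i = 0; i < n; i++) if (S >> i & 1) cur += s[i];
    int rest = ((1 << n) - 1) ^ S;
    // enumerate subsets T of the complement with |T| == m
    for (int T = rest; T > 0; T = (T - 1) & rest) {
        if (__builtin_popcount(T) != m) continue;
        long long sum = 0;
        for (int i = 0; i < n; i++) if (T >> i & 1) sum += s[i];
        if (sum <= cur && canExtend(S | T, n, s)) return true;
    }
    return false;
}

vector<int> winners(const vector<long long>& s) {
    int n = s.size();
    vector<int> res(n);
    for (int i = 0; i < n; i++) res[i] = canExtend(1 << i, n, s);
    return res;
}
```

This doubles the survivor set each step, mirroring the rounds in reverse, and is exponential, so it is only a reference for stress-testing the real solution.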
[ "brute force", "meet-in-the-middle" ]
3,500
null
1780
A
Hayato and School
Today Hayato came home from school with homework. In the assignment, Hayato was given an array $a$ of length $n$. The task was to find $3$ numbers in this array whose sum is \textbf{odd}. At school, he claimed that there are such $3$ numbers, but Hayato was not sure, so he asked you for help. Answer if there are such three numbers, and if so, output indices $i$, $j$, and $k$ such that $a_i + a_j + a_k$ is odd. The odd numbers are integers that are not divisible by $2$: $1$, $3$, $5$, and so on.
Note that there are two ways to pick three numbers so that their sum is odd: either $3$ odd numbers, or $2$ even and $1$ odd. Let's save the indices of even and odd numbers into two arrays and check both cases.
[ "constructive algorithms", "greedy" ]
800
#include <bits/stdc++.h> using namespace std; int main() { int T; cin >> T; while (T--) { int n; cin >> n; vector<int> odd, even; for (int i = 1; i <= n; i++) { int x; cin >> x; if (x % 2 == 0) { even.push_back(i); } else { odd.push_back(i); } } if (odd.size() >= 3) { cout << "YES\n"; cout << odd[0] << " " << odd[1] << " " << odd[2] << '\n'; } else if (odd.size() >= 1 && even.size() >= 2) { cout << "YES\n"; cout << odd[0] << " " << even[0] << " " << even[1] << '\n'; } else { cout << "NO\n"; } } }
1780
B
GCD Partition
While at Kira's house, Josuke saw a piece of paper on the table with a task written on it. The task sounded as follows. There is an array $a$ of length $n$. On this array, do the following: - select an integer $k > 1$; - split the array into $k$ subsegments $^\dagger$; - calculate the sum in each of $k$ subsegments and write these sums to another array $b$ (where the sum of the subsegment $(l, r)$ is ${\sum_{j = l}^{r}a_j}$); - the final score of such a split will be $\gcd(b_1, b_2, \ldots, b_k)^\ddagger$. The task is to find such a partition that the score is \textbf{maximum possible}. Josuke is interested in this task but is not strong in computer science. Help him to find the maximum possible score. $^\dagger$ A division of an array into $k$ subsegments is $k$ pairs of numbers $(l_1, r_1), (l_2, r_2), \ldots, (l_k, r_k)$ such that $l_i \le r_i$ and for every $1 \le j \le k - 1$ $l_{j + 1} = r_j + 1$, also $l_1 = 1$ and $r_k = n$. These pairs represent the subsegments. $^\ddagger$ $\gcd(b_1, b_2, \ldots, b_k)$ stands for the greatest common divisor (GCD) of the array $b$.
Let's note that it never makes sense to divide into more than $k = 2$ subsegments. Let's prove it. Suppose we somehow split the array $a$ into $m > 2$ subsegments with sums $b_1, b_2, \ldots, b_m$. Note that $\gcd(b_1, b_2, \ldots, b_m) \le \gcd(b_1 + b_2, b_3, \ldots, b_m)$: since $b_1$ and $b_2$ are both multiples of $\gcd(b_1, b_2, \ldots, b_m)$, the sum $b_1 + b_2$ is also a multiple of it. This means that we can use $b_1 + b_2$ instead of $b_1$ and $b_2$, and the answer will not get worse; thus it is always optimal to use exactly $k = 2$ subsegments. How to find the answer? Let $s$ be the sum of the array $a$ and $pref_i = {\sum_{j = 1}^{i} a_j}$. Then the answer is $\max\limits_{1 \le i < n} \gcd(pref_i, s - pref_i)$.
[ "brute force", "greedy", "math", "number theory" ]
1,100
#include <bits/stdc++.h> using namespace std; int main() { ios::sync_with_stdio(false); cin.tie(0), cout.tie(0); int T = 1; cin >> T; while (T--) { int n; cin >> n; vector<int> a(n); for (int i = 0; i < n; i++) cin >> a[i]; long long s = accumulate(a.begin(), a.end(), 0ll), cur = 0; long long ans = 1; for (int i = 0; i < n - 1; i++) { cur += a[i], s -= a[i]; ans = max(ans, __gcd(s, cur)); } cout << ans << "\n"; } }
1780
D
Bit Guessing Game
This is an interactive problem. Kira has a hidden positive integer $n$, and Hayato needs to guess it. Initially, Kira gives Hayato the value $\mathrm{cnt}$ — the number of unit bits in the binary notation of $n$. To guess $n$, Hayato can only do operations of one kind: choose an integer $x$ and subtract it from $n$. Note that after each operation, the number $n$ \textbf{changes}. Kira doesn't like bad requests, so if Hayato tries to subtract a number $x$ greater than $n$, he will lose to Kira. After each operation, Kira gives Hayato the updated value $\mathrm{cnt}$ — the number of unit bits in the binary notation of the updated value of $n$. Kira doesn't have much patience, so Hayato must guess the \textbf{original} value of $n$ after no more than $30$ operations. Since Hayato is in elementary school, he asks for your help. Write a program that guesses the number $n$. Kira is an honest person, so he chooses the initial number $n$ before all operations and \textbf{does not} change it afterward.
There are two similar solutions to this problem; we will describe both. First solution. Subtract $1$ from $n$. What can we say about the number of ones at the low end of the binary representation now? There are exactly $cnt - cnt_w + 1$ of them, where $cnt$ is the number of set bits after subtracting one, and $cnt_w$ is the number before subtracting. Now let's remove them: subtract the number $2^{cnt - cnt_w + 1} - 1$ and continue the algorithm. Such an algorithm makes at worst $60$ queries. To save queries, note that we already know the number of set bits after the trailing ones are removed, so it is useless to make a separate query for that. Instead, at the same time as we remove the trailing ones, we immediately subtract the next $1$ as well. As a result, there will be at most $cnt$ operations in total, where $cnt$ is the initial number of set bits of $n$. This number does not exceed $\log_2(n) \le 30$, which fits into the restrictions. Second solution. Let $ans$ be the desired number $n$, and $was$ the initial number of set bits of $n$. Let's subtract powers of two, $2^0, 2^1, 2^2, \ldots, 2^k$, until $was$ becomes $0$. We maintain a flag $Shift$: whether a borrow has occurred in $n$ while subtracting some power of two. Suppose at the $k$-th step we subtracted $2^k$ and the number of set bits became $cnt_{new}$, while before the subtraction it was $cnt$; consider two cases. 1) $cnt - cnt_{new} = 1$: bit $k$ was set at the $k$-th step. If $Shift = false$, there has been no borrow, so bit $k$ is also set in the original number: add $2^k$ to $ans$ and subtract $1$ from $was$. If $Shift = true$, this bit appeared during previous operations and does not need to be taken into account.
2) $cnt - cnt_{new} \neq 1$: the number of set bits did not decrease by one, which means bit $k$ was not set, a borrow occurred, and the current number contains a set bit $m$ with $m > k$. Moreover, $m - k - 1 = cnt_{new} - cnt$, so $m = k + 1 + cnt_{new} - cnt$. Add $2^m$ to the answer, subtract $1$ from $was$, and set the flag $Shift$ to $true$, since a borrow happened. Thus, we find the initial number $n$, which equals $ans$, and make at most $O(\log_2(n))$ queries, since $k \le \log_2(n)$. So the solution spends no more than $30$ queries, which fits into the limitations of the task.
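The first solution can be exercised offline against a simulated judge. This is our own sketch (the `Judge` struct and all names are assumptions, not the intended interactor): each round spends one query to locate the lowest set bit and one more to clear the trailing ones it produced.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Offline simulation of the first solution (names are ours).
struct Judge {
    long long hidden;
    int queries = 0;
    int ask(long long x) {            // subtract x, report the new popcount
        queries++;
        hidden -= x;
        return __builtin_popcountll(hidden);
    }
};

long long guess(Judge& judge, int cnt) {  // cnt = popcount of the hidden n
    long long ans = 0;
    while (cnt > 0) {
        int nw = judge.ask(1);        // popcount(n - 1) = cnt - 1 + k
        int k = nw - cnt + 1;         // k = position of the lowest set bit
        ans += 1LL << k;              // that bit belongs to the original n
        if (k > 0) cnt = judge.ask((1LL << k) - 1); // clear k trailing ones
        else cnt = nw;                // bit 0: nothing left to clear
    }
    return ans;
}
```

Each round subtracts $1 + (2^k - 1) = 2^k$ in total, i.e. exactly the lowest set bit, so `ans` accumulates the set bits of the original $n$ using at most two queries per bit.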
[ "binary search", "bitmasks", "constructive algorithms", "interactive" ]
1,800
#include <iostream> using namespace std; int ask (int x) { cout << "- " << x << endl; if (x == -1) exit(0); cin >> x; return x; } int main() { int t; cin >> t; while (t--) { int cnt; cin >> cnt; int n = 0; int was = 0; while (cnt > 0) { n += 1; int nw = ask(1 + was); int back = nw - cnt + 1; n += (1 << back) - 1; was = (1 << back) - 1; cnt = nw - back; } cout << "! " << n << endl; } }
1780
E
Josuke and Complete Graph
Josuke received a huge undirected weighted complete$^\dagger$ graph $G$ as a gift from his grandfather. The graph contains $10^{18}$ vertices. The peculiarity of the gift is that the weight of the edge between the different vertices $u$ and $v$ is equal to $\gcd(u, v)^\ddagger$. Josuke decided to experiment and make a new graph $G'$. To do this, he chooses two integers $l \le r$ and deletes all vertices except such vertices $v$ that $l \le v \le r$, and also deletes all the edges except between the remaining vertices. Now Josuke is wondering how many different weights are there in $G'$. Since their count turned out to be huge, he asks for your help. $^\dagger$ A complete graph is a simple undirected graph in which every pair of distinct vertices is adjacent. $^\ddagger$ $\gcd(x, y)$ denotes the greatest common divisor (GCD) of the numbers $x$ and $y$.
Let's fix $g$ and check whether an edge of weight $g$ exists in $G'$. The first multiple of $g$ that is at least $L$ is $\lceil \frac{L}{g} \rceil \cdot g$, and the next one is $(\lceil \frac{L}{g} \rceil + 1) \cdot g$; note that their $\gcd$ is exactly $g$, so the edge between these two vertices has weight $g$. If the second multiple is greater than $R$, an edge of weight $g$ does not exist in $G'$, because the segment from $L$ to $R$ then contains at most one vertex divisible by $g$. So we should count the number of $g$ such that $(\lceil \frac{L}{g} \rceil + 1) \cdot g \leq R$. For $g \geq L$ we have $(\lceil \frac{L}{g} \rceil + 1) \cdot g = 2 \cdot g$, which gives the upper bound $g \leq \lfloor \frac{R}{2} \rfloor$. That is, every $g$ on the segment from $L$ to $\lfloor \frac{R}{2} \rfloor$ occurs in $G'$ as the weight of some edge; add them all to the answer. Now look at $g < L$. Note that $\lceil \frac{L}{g} \rceil$ takes $O(\sqrt{L})$ different values. Fix some value $f = \lceil \frac{L}{g} \rceil$; it corresponds to a consecutive segment $l \leq g \leq r$. Let's iterate over these segments in increasing order of $f$: given the left border $l$ of a segment, we can find $r$ either by binary search or by a direct formula, and the next left border is $r + 1$. For a fixed $f$, the condition $(f + 1) \cdot g \leq R$ is equivalent to $g \leq \lfloor \frac{R}{f + 1} \rfloor$. That is, within a fixed segment from $l$ to $r$, the value $g$ occurs in $G'$ as the weight of some edge if $l \leq g \leq \min(r, \lfloor \frac{R}{f + 1} \rfloor)$. Iterate over all these segments and sum up the number of good $g$. Overall time complexity is $O(\sqrt{L})$ or $O(\sqrt{L} \cdot \log(L))$.
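The counting argument above can be stress-tested on small ranges. A sketch (function names `naiveCount`/`fastCount` are ours): weight $g$ appears in $G'$ iff two multiples of $g$ fit in $[L, R]$, since consecutive multiples $a$ and $a+g$ have gcd exactly $g$.

```cpp
#include <bits/stdc++.h>
using namespace std;

// naiveCount checks the two-multiples condition directly; fastCount groups
// g < L into O(sqrt(L)) blocks with equal ceil(L/g). (Names are ours.)
long long naiveCount(long long L, long long R) {
    long long ans = 0;
    for (long long g = 1; g <= R; g++) {
        long long first = (L + g - 1) / g * g;  // smallest multiple of g >= L
        if (first + g <= R) ans++;              // the second multiple fits too
    }
    return ans;
}

long long fastCount(long long L, long long R) {
    long long ans = max(0LL, R / 2 - L + 1);    // every g in [L, R/2] works
    for (long long left = 1, right; left < L; left = right + 1) {
        long long C = (L + left - 1) / left;    // C = ceil(L/g) on this block
        right = (L + C - 2) / (C - 1) - 1;      // last g with the same C
        ans += max(0LL, min(right, R / (C + 1)) - left + 1);
    }
    return ans;
}
```

Comparing the two over random small $(L, R)$ pairs is an easy way to validate the block boundaries.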
[ "binary search", "brute force", "data structures", "math", "number theory" ]
2,400
#include <bits/stdc++.h> using namespace std; typedef long long ll; int main(){ ios::sync_with_stdio(false); cin.tie(0), cout.tie(0); int t; cin >> t; for (int test_case = 0; test_case < t; test_case++){ ll L, R; cin >> L >> R; ll ans = max(0ll, R / 2 - L + 1); for (ll left = 1, right; left < L; left = right + 1){ ll C = (L + left - 1) / left; right = (L + C - 2) / (C - 1) - 1; ans += max(0ll, min(right, R / (C + 1)) - left + 1); } cout << ans << '\n'; } }
1780
F
Three Chairs
One day Kira found $n$ friends from Morioh and decided to gather them around a table to have a peaceful conversation. The height of friend $i$ is equal to $a_i$. It so happened that the height of each of the friends \textbf{is unique}. Unfortunately, there were only $3$ chairs in Kira's house, and obviously, it will not be possible to seat all friends! So, Kira has to invite only $3$ of his friends. But everything is not so simple! If the heights of the lowest and the tallest of the invited friends are not coprime, then the friends will play tricks on each other, which will greatly anger Kira. Kira became interested, how many ways are there to choose $3$ of his friends so that they don't play tricks? Two ways are considered different if there is a friend invited in one way, but not in the other. Formally, if Kira invites friends $i$, $j$, and $k$, then the following should be true: $\gcd(\min(a_i, a_j, a_k), \max(a_i, a_j, a_k)) = 1$, where $\gcd(x, y)$ denotes the greatest common divisor (GCD) of the numbers $x$ and $y$. Kira is not very strong in computer science, so he asks you to count the number of ways to invite friends.
Let's sort the array and consider triples $i, j, k$ with $i < j < k$, so that $a_i < a_j < a_k$. If $\gcd(a_i, a_k) = 1$, then the number of ways to choose the middle index $j$ is $k - i - 1$. We will compute the answer for each $k$ from $1$ to $n$, assuming that $a_k$ is the maximum number in the triple. Let $c$ be the number of elements coprime to $a_k$ on the prefix from $1$ to $k - 1$, and let $sum$ be the sum of their indices. Then we need to add $c \cdot k - sum - c$ to the answer. It remains to find the number of elements coprime to $a_k$ and the sum of their indices. This can be done using inclusion-exclusion. Let $cnt_i$ be the number of prefix elements $a_j$ that are divisible by $i$, and $s_i$ the sum of the indices $j$ of such elements. Look at the primes $p_1, p_2, \ldots, p_m$ in the factorization of $a_k$. Initially, let $c$ equal the number of elements on the prefix and $sum$ the sum of their indices. We have overcounted: elements divisible by one of $p_1, p_2, \ldots, p_m$ are not coprime to $a_k$, so we subtract them from $c$ and $sum$. But then we have subtracted twice the elements divisible by numbers of the form $p_i \cdot p_j$ with $i \neq j$, so we add them back, and so on. In other words, we iterate over a mask $mask$ of the primes $p_1, p_2, \ldots, p_m$ and, depending on the parity of the number of bits in the mask, subtract or add the elements that are multiples of $d$, where $d$ is the product of the primes included in $mask$. Having obtained $c$ and $sum$, we add the contribution of position $k$ to the answer. To move from position $k$ to position $k + 1$, we update the values of $cnt$ and $s$ by adding element $a_{k-1}$, again iterating over the mask of the primes of $a_{k-1}$.
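The inclusion-exclusion subroutine at the heart of the solution can be isolated and tested on its own. Below is our own sketch (`coprimeStats` is our name; it scans the prefix naively instead of using the $cnt$/$s$ tables, which is enough to validate the signs).

```cpp
#include <bits/stdc++.h>
using namespace std;

// For x = a_k, count the prefix elements coprime to x and the sum of their
// 1-based indices, by inclusion-exclusion over the distinct primes of x.
pair<long long, long long> coprimeStats(long long x, const vector<long long>& a) {
    vector<long long> primes;                 // distinct prime factors of x
    long long y = x;
    for (long long p = 2; p * p <= y; p++)
        if (y % p == 0) { primes.push_back(p); while (y % p == 0) y /= p; }
    if (y > 1) primes.push_back(y);
    int m = primes.size();
    long long cnt = 0, sum = 0;
    for (int mask = 0; mask < (1 << m); mask++) {
        long long d = 1;
        for (int j = 0; j < m; j++) if (mask >> j & 1) d *= primes[j];
        int sign = __builtin_popcount(mask) % 2 ? -1 : 1;
        // elements divisible by d are added or subtracted by mask parity
        for (int j = 0; j < (int)a.size(); j++)
            if (a[j] % d == 0) { cnt += sign; sum += sign * (j + 1); }
    }
    return {cnt, sum};
}
```

In the real solution, the inner scan is replaced by the precomputed `cnt[d]` and `s[d]` tables, making each query $O(2^m)$.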
[ "bitmasks", "brute force", "combinatorics", "data structures", "dp", "number theory", "sortings" ]
2,300
#include "bits/stdc++.h" using namespace std; #include <ext/pb_ds/assoc_container.hpp> using namespace __gnu_pbds; #define sz(v) ((int)(v).size()) #define all(a) (a).begin(), (a).end() #define rall(a) a.rbegin(), a.rend() #define F first #define S second #define pb push_back #define ppb pop_back #define eb emplace_back #define time ((double)clock() / (double)CLOCKS_PER_SEC) using pii = pair<int, int>; using ll = long long; using int64 = long long; using ld = double; const ll infll = (ll) 1e18 + 27; const ll inf = (ll) 1e9; #define dbg(x) cout << #x << " = " << (x) << endl template<class T> using pq = priority_queue<T, vector<T>, less<T>>; template<class T> using pqr = priority_queue<T, vector<T>, greater<T>>; template<typename T, typename T2> istream &operator>>(istream &in, pair<T, T2> &b) { in >> b.first >> b.second; return in; } template<typename T, typename T2> ostream &operator<<(ostream &out, const pair<T, T2> &b) { out << "{" << b.first << ", " << b.second << "}"; return out; } template<typename T> istream &operator>>(istream &in, vector<T> &b) { for (auto &v : b) { in >> v; } return in; } template<typename T> ostream &operator<<(ostream &out, vector<T> &b) { for (auto &v : b) { out << v << ' '; } return out; } template<typename T> ostream &operator<<(ostream &out, deque<T> &b) { for (auto &v : b) { out << v << ' '; } return out; } template<typename T> void print(T x, string end = "\n") { cout << x << end; } template<typename T1, typename T2> bool chkmin(T1 &x, const T2 &y) { return x > y && (x = y, true); } template<typename T1, typename T2> bool chkmax(T1 &x, const T2 &y) { return x < y && (x = y, true); } mt19937_64 rng(chrono::high_resolution_clock::now().time_since_epoch().count()); const int N = 3e5 + 10; ll s[N]; ll cnt[N]; ll d[N][20]; int ptr[N]; bool u[N]; ll Cnt = 0; ll Sum = 0; ll Ans = 0; void Answer (int x, int pos) { ll C = Cnt; ll X = Sum; int K = (1ll << ptr[x]); for (int mask = 1; mask < K; mask++) { ll k = 1; for (int j = 0; j < ptr[x]; 
j++) { if ((mask >> j) & 1) { k *= d[x][j]; } } int bits = __builtin_popcount(mask); int D = k; if (bits % 2 == 1) { C -= cnt[D]; X -= s[D]; } else { C += cnt[D]; X += s[D]; } } Ans += C * pos - X; } void add (int x, int pos) { Cnt += 1; Sum += pos + 1; auto v = d[x]; int K = (1ll << ptr[x]); for (int mask = 1; mask < K; mask++) { ll k = 1; for (int j = 0; j < ptr[x]; j++) { if ((mask >> j) & 1) { k *= d[x][j]; } } int D = k; s[D] += pos + 1; cnt[D] += 1; } } void solve() { for (int i = 2; i < N; i++) { if (!u[i]) { for (int j = i; j < N; j += i) { u[j] = true; d[j][ptr[j]] = i; ptr[j]++; } } } int n; cin >> n; vector<int> a(n); cin >> a; sort(all(a)); for (int i = 0; i < n; i++) { Answer(a[i], i); if (i > 0) { add(a[i - 1], i - 1); } } cout << Ans << "\n"; } int32_t main() { ios::sync_with_stdio(false); cin.tie(nullptr); cout.tie(nullptr); solve(); return 0; }
1780
G
Delicious Dessert
Today is an important day for chef Tonio — an auditor has arrived in his hometown of Morioh. He has also arrived at Tonio's restaurant and ordered dessert. Tonio has not been prepared for this turn of events. As you know, dessert is a string of lowercase English letters. Tonio remembered the rule of desserts — a string $s$ of length $n$. Any dessert $t$ is delicious if the number of occurrences of $t$ in $s$ as a substring is \textbf{divisible} by the length of $t$. Now Tonio wants to know the number of delicious substrings of $s$. If the substring occurs several times in the string $s$, then all occurrences must be taken into account.
This problem has several solutions using different suffix structures. We will describe two of them: one using a suffix array and one using a suffix automaton. Let's build the suffix array $suf$ and the array $lcp$ (longest common prefixes) of the string $s$. Fix some $k > 1$ and consider all substrings of $s$ of length $k$. Assign $1$ to the positions $i$ of the $lcp$ array such that $lcp_i \geq k$, and $0$ to the other positions. A one in position $i$ means that the substrings of length $k$ starting at $suf_i$ and $suf_{i + 1}$ are equal. Consider any maximal block of ones of length $len$: it corresponds to $len + 1$ equal substrings of length $k$, i.e. to a substring with exactly $len + 1$ occurrences in $s$. For the substrings in this block to be delicious, $len + 1$ must be divisible by $k$. Let's iterate $k$ from $n$ down to $2$ and maintain all block sizes. When moving to a new $k$, we process events: change $0$ to $1$ at every position $i$ with $lcp_i = k$. This can be maintained with a DSU (disjoint set union). Then, for each block size, we know the number of blocks of that size, and it suffices to consider all block lengths $len$ such that $len + 1$ is a multiple of $k$. This can be done explicitly by enumerating $len + 1 = k, 2k, \ldots$, which in total is a harmonic series: ${\sum_{k = 2}^{n} \lfloor \frac{n}{k} \rfloor} = O(n \log n)$. For $k = 1$, obviously any substring of length $1$ is delicious, so we can just add $n$ to the answer. Overall time complexity is $O(n \log n)$. The solution with the suffix automaton is as follows. Build the suffix automaton itself and compute, for each vertex $v$ of the automaton, the dynamic programming value $dp_v$: the number of paths from $v$ to the terminal vertices. This value equals the number of occurrences in the whole string of any substring corresponding to this vertex. Let's introduce the function $f(v)$: the length of the longest substring leading to the vertex $v$.
We know that the substrings of every length from $l_v$ to $r_v$ lead to the vertex $v$ of the suffix automaton, each exactly once, where $r_v = f(v)$ and $l_v = f(suff_v) + 1$, with $suff_v$ the suffix link of $v$. Why is it so? All substrings of the form $s[x:k], s[x+1:k], \ldots, s[y:k]$ lead to the vertex $v$, and there is a suffix link $suff_v$ to which lead all substrings of the form $s[y+1:k], s[y+2:k], \ldots, s[t:k]$. To solve the problem, go through the vertices $v$ and look at $dp_v$, the number of occurrences of any substring that leads to $v$; then compute $c$, the number of lengths $l_v \le x \le r_v$ such that $dp_v$ is evenly divisible by $x$. Therefore, $dp_v \cdot c$ must be added to the answer. All divisors can be stored in $O(n \log n)$ memory, and each time the number of suitable $x$ can be found by binary search. Overall time complexity is $O(n \log n)$.
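A tiny reference implementation is handy for stress-testing either solution. This is our own naive sketch (only for very short strings): count every substring occurrence with a map, then sum the occurrence counts of the delicious substrings.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Naive reference for tiny strings (our own sketch, not the intended
// solution): a substring t is delicious if its occurrence count is
// divisible by |t|; every occurrence of a delicious t is counted.
long long naiveDelicious(const string& s) {
    int n = s.size();
    map<string, long long> occ;
    for (int i = 0; i < n; i++) {
        string cur;
        for (int j = i; j < n; j++) {
            cur += s[j];
            occ[cur]++;       // one more occurrence of s[i..j]
        }
    }
    long long ans = 0;
    for (auto& [t, c] : occ)
        if (c % (long long)t.size() == 0) ans += c;  // t is delicious
    return ans;
}
```

For example, in "aaa" the substring "a" occurs $3$ times ($3 \bmod 1 = 0$) and "aa" occurs $2$ times ($2 \bmod 2 = 0$), while "aaa" occurs once ($1 \bmod 3 \neq 0$), giving $3 + 2 = 5$.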
[ "binary search", "dsu", "hashing", "math", "number theory", "string suffix structures" ]
2,400
/* Includes */ #include <bits/stdc++.h> #include <ext/pb_ds/tree_policy.hpp> /* Using libraries */ using namespace std; /* Defines */ #define fast ios_base::sync_with_stdio(false);cin.tie(0);cout.tie(0) #define ld long double #define pb push_back #define vc vector #define sz(a) (int)a.size() #define forn(i, n) for (int i = 0; i < n; ++i) #define pii pair <int, int> #define vec pt #define all(a) a.begin(), a.end() const int K = 26; const int N = 1e6 + 1; struct node { int next[K]; int suf = -1, pr = -1, dp = 0, len = 0, ch = -1; node () { forn (i, K) next[i] = -1; } }; int get (char c) { if (c >= 'a') return c - 'a'; return c - 'A'; } int lst = 0, sz = 1; node t[N * 2]; int used[N * 2]; int add (int a, int x) { int b = sz++; t[b].pr = a; t[b].suf = 0; t[b].ch = x; for (; a != -1; a = t[a].suf) { if (t[a].next[x] == -1) { t[a].next[x] = b; t[b].len = max(t[b].len, t[a].len + 1); continue; } int c = t[a].next[x]; if (t[c].pr == a) { t[b].suf = c; break; } int d = sz++; forn (i, K) t[d].next[i] = t[c].next[i]; t[d].suf = t[c].suf; t[c].suf = t[b].suf = d; t[d].pr = a; t[d].ch = x; for (; a != -1 && t[a].next[x] == c; a = t[a].suf) { t[a].next[x] = d; t[d].len = max(t[d].len, t[a].len + 1); } break; } return b; } void add (char c) { lst = add(lst, get(c)); } void dfs (int u) { used[u] = 1; for (int i = 0; i < K; ++i) { if (t[u].next[i] == -1) continue; int v = t[u].next[i]; if (!used[v]) dfs(v); t[u].dp += t[v].dp; } } vc <int> p[N], pr; int dr[N]; vc <pii> d; int l, r, cur = 0; int cnt_log (int x, int y) { int z = 1, res = 0; while (y >= z) { z *= x; ++res; } return res - 1; } void rec (int i, int x) { if (i == sz(d)) { cur += l <= x; return; } rec(i + 1, x); for (int j = 1; j <= d[i].second; ++j) { x *= d[i].first; if (x > r) break; rec(i + 1, x); } } void solve () { int n; cin >> n; string s; cin >> s; for (char c : s) add(c); for (int a = lst; a != -1; a = t[a].suf) t[a].dp = 1; dfs(0); for (int i = 2; i <= n; ++i) { if (dr[i] == 0) { dr[i] = i; pr.pb(i); } for (int 
j = 0; j < sz(pr) && pr[j] <= dr[i] && i * pr[j] <= n; ++j) dr[i * pr[j]] = pr[j]; } long long ans = 0; forn (i, sz) { if (t[i].len == 0) continue; l = t[t[i].suf].len + 1; r = t[i].len; int x = t[i].dp; d.clear(); while (x > 1) { int y = dr[x]; if (d.empty() || d.back().first != y) d.pb({y, 1}); else d.back().second++; x /= y; } rec(0, 1); ans += t[i].dp * cur; cur = 0; } cout << ans << '\n'; } /* Starting and precalcing */ signed main() { /* freopen("input.txt","r",stdin);freopen("output.txt","w",stdout); */ fast; int t = 1; // cin >> t; while (t--) solve(); return 0; }
1781
A
Parallel Projection
Vika's house has a room in a shape of a rectangular parallelepiped (also known as a rectangular cuboid). Its floor is a rectangle of size $w \times d$, and the ceiling is right above at the constant height of $h$. Let's introduce a coordinate system on the floor so that its corners are at points $(0, 0)$, $(w, 0)$, $(w, d)$, and $(0, d)$. A laptop is standing on the floor at point $(a, b)$. A projector is hanging on the ceiling right above point $(f, g)$. Vika wants to connect the laptop and the projector with a cable in such a way that the cable always goes along the walls, ceiling, or floor (i. e. does not go inside the cuboid). Additionally, the cable should always run \textbf{parallel} to one of the cuboid's edges (i. e. it can not go diagonally). What is the minimum length of a cable that can connect the laptop to the projector? \begin{center} {\small Illustration for the first test case. One of the optimal ways to put the cable is shown in green.} \end{center}
Note that bending the cable on the wall is not necessary: we can always bend it on the floor and on the ceiling, while keeping the vertical part of the cable straight. Thus, we can just disregard the height of the room, view the problem as two-dimensional, and add $h$ to the answer at the end. In the two-dimensional formulation, we need to connect points $(a, b)$ and $(f, g)$ with a cable that goes parallel to the coordinate axes and touches at least one side of the $(0, 0)$ - $(w, d)$ rectangle. We can now casework on the side of the rectangle (the sides are referred to as in the picture from the problem statement): If the cable touches the front side, its length will be $b + |a - f| + g$. If the cable touches the left side, its length will be $a + |b - g| + f$. If the cable touches the back side, its length will be $(d - b) + |a - f| + (d - g)$. If the cable touches the right side, its length will be $(w - a) + |b - g| + (w - f)$. Out of these four values, the smallest one (plus $h$) is the answer.
[ "geometry", "math" ]
800
#include <bits/stdc++.h> using namespace std; int main() { int tt; cin >> tt; while (tt--) { int w, d, h; cin >> w >> d >> h; int a, b; cin >> a >> b; int f, g; cin >> f >> g; int ans = b + abs(a - f) + g; ans = min(ans, a + abs(b - g) + f); ans = min(ans, (d - b) + abs(a - f) + (d - g)); ans = min(ans, (w - a) + abs(b - g) + (w - f)); cout << ans + h << '\n'; } return 0; }
1781
B
Going to the Cinema
A company of $n$ people is planning a visit to the cinema. Every person can either go to the cinema or not. That depends on how many other people will go. Specifically, every person $i$ said: "I want to go to the cinema if and only if at least $a_i$ other people will go, \textbf{not counting myself}". That means that person $i$ will become sad if: - they go to the cinema, and strictly less than $a_i$ other people go; or - they don't go to the cinema, and at least $a_i$ other people go. In how many ways can a set of people going to the cinema be chosen so that nobody becomes sad?
Let's fix the number of people going to the cinema $k$ and try to choose a set of this exact size. What happens to people with different $a_i$? If $a_i < k$, person $i$ definitely wants to go. If $a_i > k$, person $i$ definitely does not want to go. If $a_i = k$, there is actually no good outcome for person $i$. If person $i$ goes to the cinema, there are only $k - 1$ other people going, so person $i$ will be sad (since $k - 1 < a_i$). If person $i$ does not go, there are $k$ other people going, so person $i$ will be sad too (since $k \ge a_i$). Thus, for a set of size $k$ to exist, there must be no people with $a_i = k$, and the number of people with $a_i < k$ must be exactly $k$. We can easily check these conditions if we use an auxiliary array cnt such that cnt[x] is equal to the number of people with $a_i = x$. Alternative solution: Notice that if a set of $k$ people can go to the cinema, it must always be a set of people with the smallest $a_i$. Thus, we can start with sorting the array $a$ in non-decreasing order. Then, for each length $k$ of a prefix of this array, we can check whether the first $k$ elements are all smaller than $k$, and the remaining $n-k$ elements are all greater than $k$. However, since the array is sorted, it is enough to check that the $k$-th element is smaller than $k$, and the $k+1$-th element is greater than $k$.
[ "brute force", "greedy", "sortings" ]
1,000
#include <bits/stdc++.h> using namespace std; int main() { int tt; cin >> tt; while (tt--) { int n; cin >> n; vector<int> a(n); for (int i = 0; i < n; i++) { cin >> a[i]; } sort(a.begin(), a.end()); int ans = 0; for (int k = 0; k <= n; k++) { if (k == 0 || a[k - 1] < k) { if (k == n || a[k] > k) { ans += 1; } } } cout << ans << '\n'; } return 0; }
1781
C
Equal Frequencies
Let's call a string balanced if all characters that are present in it appear the same number of times. For example, "coder", "appall", and "ttttttt" are balanced, while "wowwow" and "codeforces" are not. You are given a string $s$ of length $n$ consisting of lowercase English letters. Find a balanced string $t$ of the same length $n$ consisting of lowercase English letters that is different from the string $s$ in as few positions as possible. In other words, the number of indices $i$ such that $s_i \ne t_i$ should be as small as possible.
Instead of "finding $t$ that differs from $s$ in as few positions as possible", let's formulate it as "finding $t$ that matches $s$ in as many positions as possible", which is obviously the same. First of all, let's fix $k$, the number of distinct characters string $t$ will have. Since the string must consist of lowercase English letters, we have $1 \le k \le 26$, and since the string must be balanced, we have $n \bmod k = 0$. For each $k$ that satisfies these conditions, we will construct a balanced string that matches $s$ in as many positions as possible. In the end, out of all strings we will have constructed, we will print the one with the maximum number of matches. From now on, we are assuming $k$ is fixed. Suppose we choose some character $c$ to be present in string $t$. We need to choose exactly $\frac{n}{k}$ positions in $t$ to put character $c$. Let $\operatorname{freq}_{c}$ be the number of occurrences of $c$ in $s$. Then, in how many positions can we make $s$ and $t$ match using character $c$? The answer is: in $\min(\frac{n}{k}, \operatorname{freq}_{c})$ positions. Now, since we want to maximize the total number of matches, we should choose $k$ characters with the largest values of $\min(\frac{n}{k}, \operatorname{freq}_{c})$. This is also equivalent to choosing $k$ characters with the largest values of $\operatorname{freq}_{c}$. How to construct the desired string? For each chosen character $c$, pick any $\min(\frac{n}{k}, \operatorname{freq}_{c})$ of its occurrences in $s$ and put $c$ in the corresponding positions in $t$. Then, if $\operatorname{freq}_{c} < \frac{n}{k}$, save the information about $\frac{n}{k} - \operatorname{freq}_{c}$ unused characters $c$; otherwise, if $\operatorname{freq}_{c} > \frac{n}{k}$, save the information about $\operatorname{freq}_{c} - \frac{n}{k}$ empty positions in $t$. In the end, match the unused characters with the empty positions arbitrarily.
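The counting part of this solution (finding the minimum number of changed positions, without reconstructing $t$) can be sketched in isolation; the helper name min_changes is ours:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimum number of positions to change so the string becomes balanced.
// Only the letter frequencies matter for the optimal number of matches:
// for each k dividing n, keep the k most frequent letters, matching
// min(n / k, freq) positions for each of them.
int min_changes(const string& s) {
    int n = s.size();
    vector<int> freq(26, 0);
    for (char c : s) freq[c - 'a']++;
    sort(freq.rbegin(), freq.rend());  // largest frequencies first
    int best = 0;
    for (int k = 1; k <= 26; k++) {
        if (n % k != 0) continue;
        int cur = 0;
        for (int i = 0; i < k; i++) cur += min(n / k, freq[i]);
        best = max(best, cur);
    }
    return n - best;
}
```

On the examples from the statement, "appall" needs $0$ changes, "wowwow" needs $1$, and "codeforces" needs $2$ (keeping $k = 5$ letters, $2$ occurrences each).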
[ "brute force", "constructive algorithms", "greedy", "implementation", "sortings", "strings" ]
1,600
#include <bits/stdc++.h> using namespace std; int main() { int tt; cin >> tt; while (tt--) { int n; cin >> n; string s; cin >> s; vector<vector<int>> at(26); for (int i = 0; i < n; i++) { at[(int) (s[i] - 'a')].push_back(i); } vector<int> order(26); iota(order.begin(), order.end(), 0); sort(order.begin(), order.end(), [&](int i, int j) { return at[i].size() > at[j].size(); }); string res = ""; int best = -1; for (int cnt = 1; cnt <= 26; cnt++) { if (n % cnt == 0) { int cur = 0; for (int i = 0; i < cnt; i++) { cur += min(n / cnt, (int) at[order[i]].size()); } if (cur > best) { best = cur; res = string(n, ' '); vector<char> extra; for (int it = 0; it < cnt; it++) { int i = order[it]; for (int j = 0; j < n / cnt; j++) { if (j < (int) at[i].size()) { res[at[i][j]] = (char) ('a' + i); } else { extra.push_back((char) ('a' + i)); } } } for (char& c : res) { if (c == ' ') { c = extra.back(); extra.pop_back(); } } } } } cout << n - best << '\n'; cout << res << '\n'; } return 0; }
1781
D
Many Perfect Squares
You are given a set $a_1, a_2, \ldots, a_n$ of distinct positive integers. We define the squareness of an integer $x$ as the number of perfect squares among the numbers $a_1 + x, a_2 + x, \ldots, a_n + x$. Find the maximum squareness among all integers $x$ between $0$ and $10^{18}$, inclusive. Perfect squares are integers of the form $t^2$, where $t$ is a non-negative integer. The smallest perfect squares are $0, 1, 4, 9, 16, \ldots$.
The answer is obviously at least $1$. Can we make it at least $2$? In this case, let's check all possible pairs of indices $i < j$ and try to figure out for what values of $x$ both $a_i + x$ and $a_j + x$ are perfect squares. We can write down two equations: $a_i + x = p^2$ and $a_j + x = q^2$, for some non-negative integers $p < q$. Let's subtract the first equation from the second one and apply the formula of difference of two squares: $a_j - a_i = q^2 - p^2 = (q - p)(q + p)$. It follows that $q - p$ is a positive integer divisor of $a_j - a_i$. It is well-known how to enumerate all divisors of $a_j - a_i$ in $O(\sqrt{a_j - a_i})$. For each such divisor $d$, we have a simple system of equations for $p$ and $q$: $\begin{cases} q - p = d \\ q + p = \frac{a_j - a_i}{d} \end{cases}$ that we can solve: $\begin{cases} p = \frac{1}{2}(\frac{a_j - a_i}{d} - d) \\ q = \frac{1}{2}(\frac{a_j - a_i}{d} + d) \\ \end{cases}$ and if both $p$ and $q$ turn out to be integers, that means we have found a candidate value for $x$: $x = p^2 - a_i = q^2 - a_j$. For each candidate value of $x$, we can just calculate its squareness and find the maximum. The complexity of this solution is $O(n^2 \cdot \sqrt{a_n} + n^3 \cdot f(a_n))$, where $f(a_n)$ is the maximum number of divisors an integer between $1$ and $a_n$ can have. The first part corresponds to finding all divisors of $a_j - a_i$ for all pairs $i < j$. The second part corresponds to checking all candidate values of $x$: there are $O(n^2 \cdot f(a_n))$ of them, and we need $O(n)$ time to calculate the squareness of each. Bonus: The first part can be optimized by using faster factorization methods. Can you see how to optimize the second part to $O(n^2 \cdot f(a_n))$?
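The per-pair step can be sketched on its own; the helper below (its name and interface are our choices) enumerates the candidate values of $x$ produced by one pair $a_i < a_j$ via the divisors $d = q - p$ of $a_j - a_i$:

```cpp
#include <bits/stdc++.h>
using namespace std;

// All x >= 0 such that both ai + x and aj + x are perfect squares
// (requires ai < aj). For each divisor d of diff = aj - ai with
// d <= diff / d, we have q - p = d and q + p = diff / d, so
// q = (d + diff / d) / 2 when the parity matches, and x = q^2 - aj.
vector<long long> candidates(long long ai, long long aj) {
    vector<long long> res;
    long long diff = aj - ai;
    for (long long d = 1; d * d <= diff; d++) {
        if (diff % d != 0) continue;
        long long sum = d + diff / d;  // equals 2q
        if (sum % 2 != 0) continue;    // p and q must be integers
        long long q = sum / 2;
        long long x = q * q - aj;
        if (x >= 0) res.push_back(x);
    }
    return res;
}
```

For instance, for the pair $(1, 16)$ the divisors $1$ and $3$ of $15$ give $x = 48$ (squares $49, 64$) and $x = 0$ (squares $1, 16$).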
[ "brute force", "math", "number theory" ]
1,800
#include <bits/stdc++.h> using namespace std; int main() { int tt; cin >> tt; while (tt--) { int n; cin >> n; vector<int> a(n); for (int i = 0; i < n; i++) { cin >> a[i]; } int ans = 1; auto Test = [&](long long x) { int cnt = 0; for (int v : a) { long long u = llround(sqrtl(v + x)); if (u * u == v + x) { cnt += 1; } } ans = max(ans, cnt); }; for (int i = 0; i < n; i++) { for (int j = i + 1; j < n; j++) { int diff = a[j] - a[i]; for (int k = 1; k * k <= diff; k++) { if (diff % k == 0) { long long q = k + diff / k; if (q % 2 == 0) { q /= 2; if (q * q >= a[j]) { Test(q * q - a[j]); } } } } } } cout << ans << '\n'; } return 0; }
1781
E
Rectangle Shrinking
You have a rectangular grid of height $2$ and width $10^9$ consisting of unit cells. There are $n$ rectangles placed on this grid, and the borders of these rectangles pass along cell borders. The $i$-th rectangle covers all cells in rows from $u_i$ to $d_i$ inclusive and columns from $l_i$ to $r_i$ inclusive ($1 \le u_i \le d_i \le 2$; $1 \le l_i \le r_i \le 10^9$). The initial rectangles can intersect, be nested, and coincide arbitrarily. You should either remove each rectangle, or replace it with any of its non-empty subrectangles. In the latter case, the new subrectangle must lie inside the initial rectangle, and its borders must still pass along cell borders. In particular, it is allowed for the subrectangle to be equal to the initial rectangle. After that replacement, no two (non-removed) rectangles are allowed to have common cells, and the total area covered with the new rectangles must be as large as possible. \begin{center} {\small Illustration for the first test case. The initial rectangles are given at the top, the new rectangles are given at the bottom. Rectangle number $4$ is removed.} \end{center}
It turns out that it is always possible to cover all cells that are covered by the initial rectangles. If the grid had height $1$ instead of $2$, the solution would be fairly simple. Sort the rectangles in non-decreasing order of their left border $l_i$. Maintain a variable $p$ denoting the rightmost covered cell. Then, for each rectangle, in order: If $r_i \le p$, remove this rectangle (note that since we process the rectangles in non-decreasing order of $l_i$, it means that this rectangle is fully covered by other rectangles). Otherwise, if $l_i \le p$, set $l_i = p + 1$ (shrink the current rectangle). Then, set $p = r_i$ (this is the new rightmost covered cell). Let's generalize this approach for a height $2$ grid. Again, sort the rectangles in non-decreasing order of $l_i$, and maintain two variables $p_1$ and $p_2$ denoting the rightmost covered cell in row $1$ and row $2$, respectively. Then, for each rectangle, in order: If it is a height $1$ rectangle (that is, $u_i = d_i$), proceed similarly to the "height $1$ grid" case above: If $r_i \le p_{u_i}$, remove this rectangle. Otherwise, if $l_i \le p_{u_i}$, set $l_i = p_{u_i} + 1$. Then, set $p_{u_i} = r_i$. If it is a height $2$ rectangle (that is, $u_i = 1$ and $d_i = 2$): If $r_i \le p_1$, set $u_i = 2$ (remove the first row from the rectangle) and go back to the "height $1$ rectangle" case above. If $r_i \le p_2$, set $d_i = 1$ (remove the second row from the rectangle) and go back to the "height $1$ rectangle" case above. Otherwise, consider all processed rectangles $j$ that have $r_j \ge l_i$, i.e., intersect the $i$-th rectangle. If $l_j \ge l_i$, remove rectangle $j$; otherwise, shrink rectangle $j$ by setting $r_j = l_i - 1$. Finally, set $p_1 = p_2 = r_i$.
Here, only the last case is tricky and different from our initial "height $1$ grid" solution, but it is also necessary: in a height $2$ grid, sometimes we have to shrink rectangles on the right, not only on the left. Now, if we implement this solution in a straightforward fashion, iterating over all $j < i$ for every $i$, we'll arrive at an $O(n^2)$ solution - again, purely because of the last case. To optimize it, note that once a rectangle is shrunk in this last case, it never has to be shrunk again. Thus, we can maintain all rectangles in a priority queue ordered by their $r_i$, and once we pop a rectangle from the priority queue, we will never have to push it again, which will help us arrive at an amortized $O(n \log n)$ solution. Instead of a priority queue, we can use some stacks as well - one for each row and maybe an extra one for height $2$ rectangles. The overall time complexity will still be $O(n \log n)$ due to sorting.
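The simpler height-$1$ sweep described at the start of this tutorial can be sketched as a standalone helper (the name make_disjoint is ours; it returns the surviving segments):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Height-1 version of the sweep: shrink or remove segments so that they
// become pairwise disjoint while still covering the same union of cells.
// Segments are (l, r) pairs of cell indices; removed segments are dropped.
vector<pair<int, int>> make_disjoint(vector<pair<int, int>> segs) {
    sort(segs.begin(), segs.end());  // non-decreasing left borders
    vector<pair<int, int>> res;
    int p = INT_MIN;  // rightmost covered cell so far
    for (auto [l, r] : segs) {
        if (r <= p) continue;  // fully covered by earlier segments: remove
        l = max(l, p + 1);     // shrink on the left if it overlaps
        res.push_back({l, r});
        p = r;
    }
    return res;
}
```

For example, $[1,5], [3,7], [6,6]$ becomes $[1,5], [6,7]$: the second segment is shrunk on the left and the third is removed, while the union $[1,7]$ stays fully covered.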
[ "binary search", "brute force", "data structures", "greedy", "implementation", "two pointers" ]
2,300
#include <bits/stdc++.h> using namespace std; int main() { ios::sync_with_stdio(false); cin.tie(0); int tt; cin >> tt; while (tt--) { int n; cin >> n; vector<int> r1(n), c1(n), r2(n), c2(n); for (int i = 0; i < n; i++) { cin >> r1[i] >> c1[i] >> r2[i] >> c2[i]; assert(1 <= r1[i] && r1[i] <= r2[i] && r2[i] <= 2 && c1[i] <= c2[i]); --c1[i]; } vector<int> order(n); iota(order.begin(), order.end(), 0); sort(order.begin(), order.end(), [&](int i, int j) { return c1[i] < c1[j]; }); set<pair<int, int>> s; int ans = 0; int p1 = -1; int p2 = -1; for (int i : order) { if (r1[i] == 1 && r2[i] == 2) { if (p1 >= c2[i]) { r1[i] = 2; } if (p2 >= c2[i]) { r2[i] = 1; } if (r1[i] > r2[i]) { continue; } } if (r1[i] == 1 && r2[i] == 2) { while (!s.empty()) { auto it = prev(s.end()); if (it->first >= c1[i]) { c2[it->second] = c1[i]; s.erase(it); } else { break; } } ans += (c2[i] - max(c1[i], p1)) + (c2[i] - max(c1[i], p2)); p1 = p2 = c2[i]; s.emplace(c2[i], i); continue; } assert(r1[i] == r2[i]); if (r1[i] == 1) { c1[i] = max(c1[i], p1); p1 = max(p1, c2[i]); } else { c1[i] = max(c1[i], p2); p2 = max(p2, c2[i]); } if (c1[i] < c2[i]) { ans += c2[i] - c1[i]; s.emplace(c2[i], i); } } cout << ans << '\n'; for (int i = 0; i < n; i++) { ++c1[i]; if (r1[i] <= r2[i] && c1[i] <= c2[i]) { cout << r1[i] << " " << c1[i] << " " << r2[i] << " " << c2[i] << '\n'; } else { cout << "0 0 0 0" << '\n'; } } } return 0; }
1781
F
Bracket Insertion
Vika likes playing with bracket sequences. Today she wants to create a new bracket sequence using the following algorithm. Initially, Vika's sequence is an empty string, and then she will repeat the following actions $n$ times: - Choose a place in the current bracket sequence to insert new brackets uniformly at random. If the length of the current sequence is $k$, then there are $k+1$ such places: before the first bracket, between the first and the second brackets, $\ldots$, after the $k$-th bracket. In particular, there is one such place in an empty bracket sequence. - Choose string "()" with probability $p$ or string ")(" with probability $1 - p$ and insert it into the chosen place. The length of the bracket sequence will increase by $2$. A bracket sequence is called regular if it is possible to obtain a correct arithmetic expression by inserting characters '+' and '1' into it. For example, sequences "(())()", "()", and "(()(()))" are regular, while ")(", "(()", and "(()))(" are not. Vika wants to know the probability that her bracket sequence will be a regular one at the end. Help her and find this probability modulo $998\,244\,353$ (see Output section).
Instead of looking at a probabilistic process, we can consider all possible ways of inserting brackets. There are $1 \cdot 3 \cdot 5 \cdot \ldots \cdot (2n - 1) = (2n - 1)!!$ ways of choosing places, and $2^n$ ways of choosing "()" or ")(" at every point. Let $r$ be the sum of $p^k \cdot (1-p)^{n-k}$ over all such ways that lead to a regular bracket sequence, where $k$ is the number of strings "()" inserted during the process (and $n - k$ is then the number of strings ")(" inserted). Then, $\frac{r}{(2n - 1)!!}$ is the answer to the problem. Consider the sequence of "prefix balances" of the bracket sequence. The first (empty) prefix balance is $0$, and each successive balance is $1$ larger than the previous one if the next bracket is '(', and $1$ smaller if the bracket is ')'. Initially, when the bracket sequence is empty, the sequence of prefix balances is $[0]$. Whenever we insert "()" into the bracket sequence in a place with prefix balance $x$, essentially we are replacing $x$ with $[x, x + 1, x]$ in the sequence of prefix balances. Whenever we insert ")(" instead, that's equivalent to replacing $x$ with $[x, x - 1, x]$. A bracket sequence is regular if and only if its corresponding sequence of prefix balances does not have any negative integers (and ends with $0$; however, this is guaranteed in our setting). Thus, we can reformulate the problem as follows: Initially, we have an integer array $[0]$. $n$ times, we choose an integer from the array uniformly at random. Say this integer is $x$, then we replace $x$ with $[x, x + 1, x]$ with probability $p$, and with $[x, x - 1, x]$ with probability $1 - p$. What is the probability that the sequence will not contain negative integers at any point? Let $f(n, x)$ be the sought probability multiplied by $(2n - 1)!!$ if we start with $[x]$. Here, we multiply by $(2n-1)!!$ to simplify the formulas, and to keep thinking about "numbers of ways" instead of "probabilities", as described in the first paragraph of this tutorial.
The base cases are $f(0, x) = 1$ if $x \ge 0$, and $f(0, x) = 0$ otherwise. When $n > 0$: $f(n, x) = \sum\limits_{i=0}^{n-1} \sum\limits_{j=0}^{n-1-i} p \cdot \binom{n-1}{i} \cdot \binom{n-1-i}{j} \cdot f(i, x) \cdot f(j, x + 1) \cdot f(n - 1 - i - j, x) + \sum\limits_{i=0}^{n-1} \sum\limits_{j=0}^{n-1-i} (1 - p) \cdot \binom{n-1}{i} \cdot \binom{n-1-i}{j} \cdot f(i, x) \cdot f(j, x - 1) \cdot f(n - 1 - i - j, x)$. What does this formula mean? Essentially, since we start with an array of a single integer $[x]$, the first operation has to be applied to $x$. After that, once $x$ gets replaced with $[x, x \pm 1, x]$, $i$ operations will be applied to the left $x$ (including everything produced from it), $j$ operations will be applied to $x \pm 1$ (again, together with its production), and $n - 1 - i - j$ operations will be applied to the right $x$ (and to its production). Thus, we can find the sum over $i$ and $j$ of the product of the corresponding values of $f$ and the binomial coefficients: since the sequences of $i$, $j$, and $n - 1 - i - j$ operations can be interleaved arbitrarily, we have $\binom{n-1}{i}$ ways to choose the positions of $i$ operations applied to the left $x$ in the global sequence of $n-1$ operations, and then $\binom{n-1-i}{j}$ ways to choose the positions of $j$ operations applied to $x \pm 1$ out of the remaining $n - 1 - i$ positions in the global sequence. This results in an $O(n^4)$ solution, since there are $O(n^2)$ values of $f$ to calculate, and each of them is calculated in $O(n^2)$.
To optimize it, let's rewrite the formula a little bit by moving the loop over $j$ outside: $f(n, x) = \sum\limits_{j=0}^{n-1} (p \cdot f(j, x + 1) + (1 - p) \cdot f(j, x - 1)) \cdot \binom{n-1}{j} \cdot \sum\limits_{i=0}^{n-1-j} \binom{n-1-j}{i} \cdot f(i, x) \cdot f(n - 1 - j - i, x)$. Now, let's introduce an auxiliary function: $g(k, x) = \sum\limits_{i=0}^{k} \binom{k}{i} \cdot f(i, x) \cdot f(k - i, x)$. Now, let's rewrite the formula for $f$ using $g$: $f(n, x) = \sum\limits_{j=0}^{n-1} (p \cdot f(j, x + 1) + (1 - p) \cdot f(j, x - 1)) \cdot \binom{n-1}{j} \cdot g(n - 1 - j, x)$. Now, both $g(n, x)$ and $f(n, x)$ can be computed in $O(n)$ time, resulting in an $O(n^3)$ solution.
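The $O(n^3)$ recurrence can be sanity-checked with plain floating-point numbers instead of modular arithmetic (this mirrors the structure of the reference solution; the function name is our choice, and it is only meant for small $n$):

```cpp
#include <bits/stdc++.h>
using namespace std;

// dp[m][b] = f(m, b), aux[m][b] = g(m, b) from the tutorial, computed
// with doubles. Returns the probability that the sequence is regular.
double regular_probability(int n, double p) {
    vector<vector<double>> C(n + 1, vector<double>(n + 1, 0.0));
    for (int i = 0; i <= n; i++) {
        C[i][0] = 1;
        for (int j = 1; j <= i; j++) C[i][j] = C[i - 1][j - 1] + C[i - 1][j];
    }
    vector<vector<double>> dp(n + 1, vector<double>(n + 2, 0.0));
    vector<vector<double>> aux(n + 1, vector<double>(n + 2, 0.0));
    for (int b = 0; b <= n + 1; b++) dp[0][b] = aux[0][b] = 1;
    for (int m = 1; m <= n; m++) {
        for (int b = 0; b + m <= n + 1; b++) {
            for (int j = 0; j <= m - 1; j++) {
                // balance b - 1 is invalid when b == 0 (negative prefix balance)
                double go = p * dp[j][b + 1] + (b > 0 ? (1 - p) * dp[j][b - 1] : 0.0);
                dp[m][b] += C[m - 1][j] * aux[m - 1 - j][b] * go;
            }
            for (int j = 0; j <= m; j++) {
                aux[m][b] += C[m][j] * dp[j][b] * dp[m - j][b];
            }
        }
    }
    double denom = 1;  // (2n - 1)!!
    for (int i = 1; i <= 2 * n; i += 2) denom *= i;
    return dp[n][0] / denom;
}
```

Small checks: for $n = 1$ the answer is exactly $p$; for $n = 2, p = 1$ every outcome is regular; for $n = 2, p = \frac{1}{2}$ the answer is $\frac{1}{3}$ (which matches enumerating all $3 \cdot 4$ outcomes by hand).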
[ "combinatorics", "dp", "math", "trees" ]
2,700
#include <bits/stdc++.h> using namespace std; template <typename T> T inverse(T a, T m) { T u = 0, v = 1; while (a != 0) { T t = m / a; m -= t * a; swap(a, m); u -= t * v; swap(u, v); } assert(m == 1); return u; } template <typename T> class Modular { public: using Type = typename decay<decltype(T::value)>::type; constexpr Modular() : value() {} template <typename U> Modular(const U& x) { value = normalize(x); } template <typename U> static Type normalize(const U& x) { Type v; if (-mod() <= x && x < mod()) v = static_cast<Type>(x); else v = static_cast<Type>(x % mod()); if (v < 0) v += mod(); return v; } const Type& operator()() const { return value; } template <typename U> explicit operator U() const { return static_cast<U>(value); } constexpr static Type mod() { return T::value; } Modular& operator+=(const Modular& other) { if ((value += other.value) >= mod()) value -= mod(); return *this; } Modular& operator-=(const Modular& other) { if ((value -= other.value) < 0) value += mod(); return *this; } template <typename U> Modular& operator+=(const U& other) { return *this += Modular(other); } template <typename U> Modular& operator-=(const U& other) { return *this -= Modular(other); } Modular operator-() const { return Modular(-value); } template <typename U = T> typename enable_if<is_same<typename Modular<U>::Type, int>::value, Modular>::type& operator*=(const Modular& rhs) { value = normalize(static_cast<int64_t>(value) * static_cast<int64_t>(rhs.value)); return *this; } Modular& operator/=(const Modular& other) { return *this *= Modular(inverse(other.value, mod())); } template <typename V, typename U> friend V& operator>>(V& stream, Modular<U>& number); private: Type value; }; template <typename T> Modular<T> operator+(const Modular<T>& lhs, const Modular<T>& rhs) { return Modular<T>(lhs) += rhs; } template <typename T, typename U> Modular<T> operator+(const Modular<T>& lhs, U rhs) { return Modular<T>(lhs) += rhs; } template <typename T, typename U> Modular<T> 
operator+(U lhs, const Modular<T>& rhs) { return Modular<T>(lhs) += rhs; } template <typename T> Modular<T> operator-(const Modular<T>& lhs, const Modular<T>& rhs) { return Modular<T>(lhs) -= rhs; } template <typename T, typename U> Modular<T> operator-(const Modular<T>& lhs, U rhs) { return Modular<T>(lhs) -= rhs; } template <typename T, typename U> Modular<T> operator-(U lhs, const Modular<T>& rhs) { return Modular<T>(lhs) -= rhs; } template <typename T> Modular<T> operator*(const Modular<T>& lhs, const Modular<T>& rhs) { return Modular<T>(lhs) *= rhs; } template <typename U, typename T> U& operator<<(U& stream, const Modular<T>& number) { return stream << number(); } template <typename U, typename T> U& operator>>(U& stream, Modular<T>& number) { typename common_type<typename Modular<T>::Type, long long>::type x; stream >> x; number.value = Modular<T>::normalize(x); return stream; } constexpr int md = 998244353; using Mint = Modular<std::integral_constant<decay<decltype(md)>::type, md>>; int main() { ios::sync_with_stdio(false); cin.tie(0); int n; Mint p; cin >> n >> p; p /= 10000; vector<vector<Mint>> C(n + 1, vector<Mint>(n + 1)); for (int i = 0; i <= n; i++) { C[i][0] = 1; for (int j = 1; j <= i; j++) { C[i][j] = C[i - 1][j] + C[i - 1][j - 1]; } } vector<vector<Mint>> dp(n + 1, vector<Mint>(n + 1)); vector<vector<Mint>> aux(n + 1, vector<Mint>(n + 1)); for (int b = 0; b <= n; b++) { dp[0][b] = aux[0][b] = 1; } for (int i = 1; i <= n; i++) { for (int b = 0; b <= n - i; b++) { for (int y = 0; y <= i - 1; y++) { dp[i][b] += C[i - 1][y] * aux[i - 1 - y][b] * (dp[y][b + 1] * p + (b == 0 ? 0 : dp[y][b - 1] * (1 - p))); } for (int j = 0; j <= i; j++) { aux[i][b] += dp[j][b] * dp[i - j][b] * C[i][j]; } } } auto ans = dp[n][0]; for (int i = 1; i <= 2 * n; i += 2) { ans /= i; } cout << ans << '\n'; return 0; }
1781
G
Diverse Coloring
In this problem, we will be working with rooted binary trees. A tree is called a rooted binary tree if it has a fixed root and every vertex has at most two children. Let's assign a color — white or blue — to each vertex of the tree, and call this assignment a coloring of the tree. Let's call a coloring diverse if every vertex has a neighbor (a parent or a child) colored into an opposite color compared to this vertex. It can be shown that any tree with at least two vertices allows a diverse coloring. Let's define the disbalance of a coloring as the absolute value of the difference between the number of white vertices and the number of blue vertices. Now to the problem. Initially, the tree consists of a single vertex with the number $1$ which is its root. Then, for each $i$ from $2$ to $n$, a new vertex $i$ appears in the tree, and it becomes a child of vertex $p_i$. It is guaranteed that after each step the tree will keep being a binary tree rooted at vertex $1$, that is, each vertex will have at most two children. After every new vertex is added, print the smallest value of disbalance over all possible diverse colorings of the current tree. Moreover, after adding the last vertex with the number $n$, also print a diverse coloring with the smallest possible disbalance as well.
It turns out that it is always possible to construct a diverse coloring with disbalance $0$ or $1$ (depending on the parity of $n$), except for the case of a tree with $4$ vertices with one vertex of degree $3$ (which is given in the example). Let's traverse the tree from bottom to top. For each subtree, we will try to construct a diverse coloring where the subtree root is colored white. We will define the disbalance of the subtree coloring to be the number of vertices colored white, minus the number of vertices colored blue. This is equivalent to the original definition, except that now the number also has a sign. We will aim at obtaining disbalances $0$ and $+1$, if possible. In some small cases, it will be impossible. We will say "subtree has disbalance $x$" or "vertex has disbalance $x$", meaning that we have constructed a coloring of the (vertex's) subtree with disbalance $x$. Let $u$ be the root of the subtree (colored white): If $u$ has no children, the only coloring has disbalance $+1$; however, it is not diverse. Thus, we will have to be careful about the leaf case in the future. If $u$ has one child $v$, flip the colors in $v$'s subtree to create a blue neighbor for $u$: If $v$ had disbalance $0$ before the flip, now $u$ has disbalance $+1$. If $v$ had disbalance $+1$ before the flip, now $u$ has disbalance $0$. If $u$ has two children $v$ and $w$: If both have disbalance $0$, flip the colors in at least one of $v$'s and $w$'s subtrees, now $u$ has disbalance $+1$. If both have disbalance $+1$, flip the colors in either $v$'s or $w$'s subtree, now $u$ has disbalance $+1$. If one has disbalance $0$ and the other has disbalance $+1$, flip the colors in the one with $+1$, now $u$ has disbalance $0$.
The only issue happens when $u$ has two children that are both leaves: we have to recolor both $v$ and $w$ into blue, which will force $u$ to have disbalance $-1$. Let's add new cases to the analysis above based on the existence of subtrees with disbalance $-1$: If $u$ has one child $v$ with disbalance $-1$: Flip the colors in $v$'s subtree, now $u$ has disbalance $+2$. If $u$ has two children $v$ and $w$, where $v$ has disbalance $-1$: If $w$ has disbalance $-1$, flip the colors in either $v$'s or $w$'s subtree, now $u$ has disbalance $+1$. If $w$ has disbalance $0$, flip the colors in $w$'s subtree, now $u$ has disbalance $0$. If $w$ has disbalance $+1$, flip the colors in both $v$'s and $w$'s subtrees, now $u$ has disbalance $+1$. We can see that, once again, a new case appears where a subtree has disbalance $+2$ (described at the beginning of this tutorial), and unfortunately we can't avoid that. We can see that this case only happens for a specific subtree of $4$ vertices. Let's proceed with the case analysis... If $u$ has one child $v$ with disbalance $+2$: Flip $v$'s color (not the whole subtree, but just $v$), now $u$ has disbalance $+1$.
If $u$ has two children $v$ and $w$, where $v$ has disbalance $+2$: If $w$ has disbalance $-1$, flip the colors in both $v$'s and $w$'s subtrees, now $u$ has disbalance $0$. If $w$ has disbalance $0$, flip $v$'s color, now $u$ has disbalance $+1$. If $w$ has disbalance $+1$, flip $v$'s color and the colors in $w$'s subtree, now $u$ has disbalance $0$. If $w$ has disbalance $+2$, flip the colors in either $v$'s or $w$'s subtree, now $u$ has disbalance $+1$. It follows that for any other tree, except two special cases of a $3$-vertex tree and a $4$-vertex tree, it is possible to obtain disbalance $0$ or $+1$. From this point, one way to implement the solution is to carefully consider all the cases. Note that whenever we say "flip the colors in $v$'s subtree", we can just set some flag in vertex $v$. Then, as we traverse the tree from top to bottom, we can construct the correct coloring in $O(n)$ time. Another way is to use dynamic programming f(u, d, hasNeighbor) = true/false: whether it is possible to color $u$'s subtree to obtain disbalance $d$ so that all vertices except $u$ have neighbors of opposite color, and $u$ has such a neighbor iff hasNeighbor = true. Since it is enough to limit disbalance by $O(1)$, we can conclude that the number of states, the number of transitions, and the time complexity are all $O(n)$.
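The claims about the minimum disbalance can be cross-checked by brute force on small trees. The exponential checker below is our own and is not part of the intended solution; it enumerates all $2^n$ colorings:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimum disbalance over all diverse colorings, or -1 if none exists.
// parent[i] is the parent of vertex i; parent[0] == -1 for the root.
int min_disbalance(const vector<int>& parent) {
    int n = parent.size();
    int best = -1;
    for (int mask = 0; mask < (1 << n); mask++) {  // bit = color of vertex
        bool diverse = true;
        for (int v = 0; v < n && diverse; v++) {
            bool has_opposite = false;
            if (parent[v] != -1 && ((mask >> v) & 1) != ((mask >> parent[v]) & 1))
                has_opposite = true;
            for (int u = 0; u < n; u++)
                if (parent[u] == v && ((mask >> u) & 1) != ((mask >> v) & 1))
                    has_opposite = true;
            if (!has_opposite) diverse = false;
        }
        if (diverse) {
            int white = __builtin_popcount(mask);
            int d = abs(white - (n - white));
            if (best == -1 || d < best) best = d;
        }
    }
    return best;
}
```

On the special $4$-vertex tree with a degree-$3$ vertex (root $0$, child $1$, and leaves $2$, $3$ under vertex $1$), every diverse coloring forces vertices $0$, $2$, $3$ to take the color opposite to vertex $1$, so the minimum disbalance is $2$; a path on $4$ vertices achieves $0$.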
[ "constructive algorithms", "trees" ]
3,200
#include <bits/stdc++.h> using namespace std; int main() { ios::sync_with_stdio(false); cin.tie(0); int tt; cin >> tt; while (tt--) { int n; cin >> n; vector<int> p(n); for (int i = 1; i < n; i++) { cin >> p[i]; --p[i]; } for (int i = 2; i <= n; i++) { if (i == 4 && p[1] == 0 && p[2] == 1 && p[3] == 1) { cout << 2 << '\n'; } else { cout << i % 2 << '\n'; } } vector<vector<int>> g(n); for (int i = 1; i < n; i++) { g[p[i]].push_back(i); } string res = ""; auto Solve = [&](int nn) { vector<vector<vector<bool>>> good(nn); vector<vector<vector<vector<int>>>> prevs(nn); vector<int> sz(nn); vector<int> L(nn); vector<int> R(nn); function<void(int)> Dfs = [&](int v) { sz[v] += 1; for (int u : g[v]) { Dfs(u); sz[v] += sz[u]; } L[v] = sz[v] / 2; R[v] = L[v] + 1; good[v].assign(2, vector<bool>(R[v] - L[v] + 1, false)); prevs[v].assign(2, vector<vector<int>>(R[v] - L[v] + 1)); auto Set = [&](int c, int k, vector<int> pr) { if (k >= L[v] && k <= R[v]) { good[v][c][k - L[v]] = true; prevs[v][c][k - L[v]] = pr; } }; if (g[v].size() == 0) { Set(0, 1, {}); } if (g[v].size() == 1) { int u = g[v][0]; for (int cu = 0; cu < 2; cu++) { for (int ku = L[u]; ku <= R[u]; ku++) { if (good[u][cu][ku - L[u]]) { Set(1, 1 + (sz[u] - ku), {cu, ku, 1}); if (cu == 1) { Set(0, 1 + ku, {cu, ku, 0}); } } } } } if (g[v].size() == 2) { int u = g[v][0]; int w = g[v][1]; for (int cu = 0; cu < 2; cu++) { for (int ku = L[u]; ku <= R[u]; ku++) { if (good[u][cu][ku - L[u]]) { for (int cw = 0; cw < 2; cw++) { for (int kw = L[w]; kw <= R[w]; kw++) { if (good[w][cw][kw - L[w]]) { Set(1, 1 + (sz[u] - ku) + (sz[w] - kw), {cu, ku, 1, cw, kw, 1}); if (cu == 1) { Set(1, 1 + ku + (sz[w] - kw), {cu, ku, 0, cw, kw, 1}); } if (cw == 1) { Set(1, 1 + (sz[u] - ku) + kw, {cu, ku, 1, cw, kw, 0}); } if (cu == 1 && cw == 1) { Set(0, 1 + ku + kw, {cu, ku, 0, cw, kw, 0}); } } } } } } } } }; Dfs(0); int best = nn + 1; int best_k = -1; for (int k = L[0]; k <= R[0]; k++) { if (good[0][1][k - L[0]]) { int val = abs(k - (nn - k)); if 
(val < best) { best = val; best_k = k; } } } assert(best <= nn); res = string(nn, '.'); function<void(int, int, int)> Restore = [&](int v, int c, int k) { int ptr = 0; for (int u : g[v]) { res[u] = (prevs[v][c][k - L[v]][ptr + 2] == 0 ? res[v] : (char) ('w' ^ 'b' ^ res[v])); Restore(u, prevs[v][c][k - L[v]][ptr], prevs[v][c][k - L[v]][ptr + 1]); ptr += 3; } }; res[0] = 'w'; Restore(0, 1, best_k); return best; }; Solve(n); cout << res << '\n'; } return 0; }
1781
H2
Window Signals (hard version)
\textbf{This is the hard version of the problem. In this version, the constraints on $h$ and $w$ are higher.} A house at the sea has $h$ floors, all of the same height. The side of the house facing the sea has $w$ windows at equal distances from each other on every floor. Thus, the windows are positioned in cells of a rectangular grid of size $h \times w$. In every window, the light can be turned either on or off, except for the given $k$ (at most $2$) windows. In these $k$ windows the light can not be turned on, because it is broken. In the dark, we can send a signal to a ship at sea using a configuration of lights turned on and off. However, the ship can not see the position of the lights with respect to the house. Thus, if one configuration of windows with lights on can be transformed into another using parallel translation, these configurations are considered equal. Note that only parallel translation is allowed, but neither rotations nor flips are. Moreover, a configuration without any light at all is not considered valid. Find how many different signals the ship can receive and print this number modulo $998\,244\,353$.
Let's iterate over the dimensions of the bounding box of the image of windows with lights on, $h' \times w'$ ($1 \le h' \le h; 1 \le w' \le w$), count images with such bounding box, and sum all these values up. An image has a bounding box of size exactly $h' \times w'$ if and only if: it fits inside an $h' \times w'$ rectangle; it has a light on each of its four borders. To account for the second condition, we can use the inclusion-exclusion principle. This way, we will have an "it does not have a light on some of its borders" condition instead, at the cost of an extra $2^4$ time factor. We will disregard this condition in the rest of this tutorial. There are $2^{h'w'}$ possible images fitting in an $h' \times w'$ rectangle. How many of them are impossible to show because of broken lights? Let's find this number and subtract it. Consider all possible ways to place a rectangle of size $h' \times w'$ on the given $h \times w$ grid: If there are no broken lights inside, any image of size $h' \times w'$ is possible to show, so we don't need to subtract anything for this size at all. If there is $1$ broken light inside, its relative position in the $h' \times w'$ rectangle must be turned on (for an image to be impossible to show). If there are $2$ broken lights inside, find their relative positions in the $h' \times w'$ rectangle. For an image to be impossible to show, at least one of the two positions must be turned on. Unless a placement with no broken lights exists, we have some cells in the rectangle where the light must be turned on - let's call the set of these cells $X$, and some pairs of cells where at least one light must be turned on - let's call the set of these pairs $Y$. If a pair from $Y$ contains a cell from $X$, this pair can be removed from $Y$. 
Once we do that, note that the pairs from $Y$ form several chains - that happens because the coordinate-wise distance between the cells in each pair is equal to the distance between the broken lights, which is $(r_2 - r_1, c_2 - c_1)$. If we have a chain of length $p$, it can be seen that there are $f(p)$ ways to turn lights on so that every pair of neighboring cells has at least one light, where $f(p) = f(p - 1) + f(p - 2)$ are Fibonacci numbers with $f(0) = 1$ and $f(1) = 2$. Thus, the number to subtract is the product of: $2^q$, where $q$ is the number of cells not included in $X$ or $Y$; $f(p)$ over all chains formed by $Y$, where $p$ is the length of a chain. Every subgrid of size $h' \times w'$ is processed in $O(hw)$ time, and there are $hw$ different sizes; thus, the overall time complexity is $O(h^2 w^2)$. It is possible to optimize the constant factor of this solution to pass the hard version too. However, a solution of $O(h w^2)$ time complexity exists as well. Here is a sketch of it: Instead of fixing both dimensions of the lights image, $h'$ and $w'$, let's only fix $w'$. Use inclusion-exclusion as described at the beginning of this tutorial; however, only use it for the top, left, and right borders. We will not use it for the bottom border, since we are not fixing the height of the image. Go through all top-left corners $(i, j)$ of the lights image in lexicographic order ($1 \le i \le h; 1 \le j \le w - w' + 1$). For each top-left corner, count how many images of width $w'$ can be shown using this top-left corner, which can not be shown using any previous top-left corner. Similarly to the previous solution, consider cases of $0$, $1$, and $2$ broken lights inside the current subgrid. Note, however, that we are not fixing the height of the subgrid, so just assume that it stretches all the way down to the bottom border of the whole grid. Maintain the set of cells $X$ using an array, and maintain the set of pairs $Y$ using linked lists. 
Once a cell joins set $X$, remove all pairs that touch it from set $Y$. Once a pair joins set $Y$, if neither of its ends belongs to $X$, merge two corresponding linked lists. Maintain a variable denoting the product of $f(p)$ over all chains formed by $Y$. Once any split or merge happens to the lists, update this variable using $O(1)$ multiplications/divisions. Whenever $i$ (the row number of the top-left corner) increases by $1$, the maximum available height of the lights image decreases by $1$. Thus, we have to "remove" the cells in the current bottom row: that is, for any future image, we won't be able to light up those cells. If any such cell belongs to $X$, just stop: we won't get any new images. Otherwise, if such a cell belongs to a pair in $Y$, add the second end of this pair to $X$. For fixed width $w'$ and for each top-left corner $(i, j)$, we need to spend $O(1)$ time. Moreover, for fixed width $w'$, once $i$ increases (which happens $O(h)$ times), we need to spend $O(w)$ time to process cell removals. Hence, the time complexity of both parts is $O(h w^2)$.
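The chain-counting claim above (that a chain of $p$ cells, where every neighboring pair needs at least one light on, admits $f(p)$ arrangements with $f(0) = 1$, $f(1) = 2$) can be sanity-checked with a brute force. This is not part of the editorial; the function names `f` and `brute` are illustrative:

```python
def f(p):
    # Fibonacci-style recurrence with f(0) = 1, f(1) = 2,
    # the same convention as the fib[] array in the solution code.
    a, b = 1, 2
    for _ in range(p):
        a, b = b, a + b
    return a

def brute(p):
    # Count bitmasks of length p with no two adjacent zeros,
    # i.e. every pair of neighboring cells has at least one light on.
    return sum(
        all((m >> i) & 1 or (m >> (i + 1)) & 1 for i in range(p - 1))
        for m in range(1 << p)
    )

for p in range(12):
    assert f(p) == brute(p)
```

The recurrence follows the usual case split on the last cell: if it is lit, the rest is any valid arrangement of $p-1$ cells; if it is dark, its neighbor must be lit, leaving $f(p-2)$ options.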
[]
3,500
#include <bits/stdc++.h> using namespace std; template <typename T> T inverse(T a, T m) { T u = 0, v = 1; while (a != 0) { T t = m / a; m -= t * a; swap(a, m); u -= t * v; swap(u, v); } assert(m == 1); return u; } template <typename T> class Modular { public: using Type = typename decay<decltype(T::value)>::type; constexpr Modular() : value() {} template <typename U> Modular(const U& x) { value = normalize(x); } template <typename U> static Type normalize(const U& x) { Type v; if (-mod() <= x && x < mod()) v = static_cast<Type>(x); else v = static_cast<Type>(x % mod()); if (v < 0) v += mod(); return v; } const Type& operator()() const { return value; } template <typename U> explicit operator U() const { return static_cast<U>(value); } constexpr static Type mod() { return T::value; } Modular& operator+=(const Modular& other) { if ((value += other.value) >= mod()) value -= mod(); return *this; } Modular& operator-=(const Modular& other) { if ((value -= other.value) < 0) value += mod(); return *this; } template <typename U> Modular& operator+=(const U& other) { return *this += Modular(other); } template <typename U> Modular& operator-=(const U& other) { return *this -= Modular(other); } Modular operator-() const { return Modular(-value); } template <typename U = T> typename enable_if<is_same<typename Modular<U>::Type, int>::value, Modular>::type& operator*=(const Modular& rhs) { value = normalize(static_cast<int64_t>(value) * static_cast<int64_t>(rhs.value)); return *this; } Modular& operator/=(const Modular& other) { return *this *= Modular(inverse(other.value, mod())); } template <typename V, typename U> friend V& operator>>(V& stream, Modular<U>& number); private: Type value; }; template <typename T> Modular<T> operator+(const Modular<T>& lhs, const Modular<T>& rhs) { return Modular<T>(lhs) += rhs; } template <typename T, typename U> Modular<T> operator+(const Modular<T>& lhs, U rhs) { return Modular<T>(lhs) += rhs; } template <typename T, typename U> Modular<T> 
operator+(U lhs, const Modular<T>& rhs) { return Modular<T>(lhs) += rhs; } template <typename T> Modular<T> operator-(const Modular<T>& lhs, const Modular<T>& rhs) { return Modular<T>(lhs) -= rhs; } template <typename T, typename U> Modular<T> operator-(const Modular<T>& lhs, U rhs) { return Modular<T>(lhs) -= rhs; } template <typename T, typename U> Modular<T> operator-(U lhs, const Modular<T>& rhs) { return Modular<T>(lhs) -= rhs; } template <typename T> Modular<T> operator*(const Modular<T>& lhs, const Modular<T>& rhs) { return Modular<T>(lhs) *= rhs; } template <typename T, typename U> Modular<T> operator*(const Modular<T>& lhs, U rhs) { return Modular<T>(lhs) *= rhs; } template <typename T, typename U> Modular<T> operator*(U lhs, const Modular<T>& rhs) { return Modular<T>(lhs) *= rhs; } template<typename T, typename U> Modular<T> power(const Modular<T>& a, const U& b) { assert(b >= 0); Modular<T> x = a, res = 1; U p = b; while (p > 0) { if (p & 1) res *= x; x *= x; p >>= 1; } return res; } template <typename U, typename T> U& operator<<(U& stream, const Modular<T>& number) { return stream << number(); } template <typename U, typename T> U& operator>>(U& stream, Modular<T>& number) { typename common_type<typename Modular<T>::Type, long long>::type x; stream >> x; number.value = Modular<T>::normalize(x); return stream; } constexpr int md = 998244353; using Mint = Modular<std::integral_constant<decay<decltype(md)>::type, md>>; class dsu { public: vector<int> p; vector<int> sz; int n; dsu(int _n) : n(_n) { p.resize(n); iota(p.begin(), p.end(), 0); sz.assign(n, 1); } inline int get(int x) { return (x == p[x] ? 
x : (p[x] = get(p[x]))); } inline bool unite(int x, int y) { x = get(x); y = get(y); if (x != y) { p[x] = y; sz[y] += sz[x]; return true; } return false; } }; int main() { int tt; cin >> tt; while (tt--) { int h, w, k; cin >> h >> w >> k; vector<Mint> fib(h * w + 1); fib[0] = 1; fib[1] = 2; for (int i = 2; i <= h * w; i++) { fib[i] = fib[i - 1] + fib[i - 2]; } vector<pair<int, int>> p(k); for (int i = 0; i < k; i++) { cin >> p[i].first >> p[i].second; --p[i].first; --p[i].second; } Mint ans = 0; for (int ww = 1; ww <= w; ww++) { for (int top = 0; top < 2; top++) { for (int left = 0; left < 2; left++) { for (int right = 0; right < 2; right++) { if (ww == 1 && left + right > 0) { continue; } int sign = ((top + left + right) % 2 == 0 ? 1 : -1); dsu d(h * ww); vector<vector<int>> g(h * ww); vector<bool> must(h * ww); bool done = false; Mint prod = power(Mint(2), (h - top) * (ww - left - right)); for (int i = 0; i < h; i++) { for (int j = 0; j <= w - ww; j++) { vector<pair<int, int>> here; for (auto& c : p) { if (c.first >= i + top && c.second >= j + left && c.second < j + ww - right) { here.emplace_back(c.first - i, c.second - j); } } if (here.size() == 0) { ans += sign * prod; done = true; break; } if (here.size() == 1) { int x = here[0].first * ww + here[0].second; if (!must[x]) { must[x] = true; int px = d.get(x); prod /= fib[d.sz[px]]; d.sz[px] -= 1; int sub = 0; for (int y : g[x]) { if (!must[y]) { sub += 1; } } ans += sign * prod * fib[d.sz[px] - sub]; prod *= fib[d.sz[px]]; } } if (here.size() == 2) { int x = here[0].first * ww + here[0].second; int y = here[1].first * ww + here[1].second; int px = d.get(x); int py = d.get(y); assert(px != py); prod /= fib[d.sz[px]]; prod /= fib[d.sz[py]]; if (!must[x] && !must[y]) { int subx = 1; for (int t : g[x]) { if (!must[t]) { subx += 1; } } int suby = 1; for (int t : g[y]) { if (!must[t]) { suby += 1; } } ans += sign * prod * fib[d.sz[px] - subx] * fib[d.sz[py] - suby]; g[x].push_back(y); g[y].push_back(x); } d.unite(px, 
py); prod *= fib[d.sz[py]]; } } if (done) { break; } for (int j = left; j < ww - right; j++) { int x = (h - 1 - i) * ww + j; if (must[x]) { done = true; break; } int px = d.get(x); prod /= fib[d.sz[px]]; d.sz[px] -= 1; prod *= fib[d.sz[px]]; for (int y : g[x]) { if (!must[y]) { must[y] = true; int py = d.get(y); prod /= fib[d.sz[py]]; d.sz[py] -= 1; prod *= fib[d.sz[py]]; } } } if (done) { break; } } } } } } cout << ans << '\n'; } return 0; }
1783
A
Make it Beautiful
An array $a$ is called ugly if it contains \textbf{at least one} element which is equal to the \textbf{sum of all elements before it}. If the array is not ugly, it is beautiful. For example: - the array $[6, 3, 9, 6]$ is ugly: the element $9$ is equal to $6 + 3$; - the array $[5, 5, 7]$ is ugly: the element $5$ (the second one) is equal to $5$; - the array $[8, 4, 10, 14]$ is beautiful: $8 \ne 0$, $4 \ne 8$, $10 \ne 8 + 4$, $14 \ne 8 + 4 + 10$, so there is no element which is equal to the sum of all elements before it. You are given an array $a$ such that $1 \le a_1 \le a_2 \le \dots \le a_n \le 100$. You have to \textbf{reorder} the elements of $a$ in such a way that the resulting array is beautiful. Note that you are not allowed to insert new elements or erase existing ones, you can only change the order of elements of $a$. You are allowed to keep the array $a$ unchanged, if it is beautiful.
If we put the maximum in the array on the first position, then for every element, starting from the third one, the sum of elements before it will be greater than it (since that sum is greater than the maximum value in the array). So, the only element that can make our array ugly is the second element. We need to make sure that it is not equal to the first element. Let's put the maximum element on the first position, the minimum element on the second position, and then fill the rest of the array arbitrarily. The only case when it fails is when the maximum element is equal to the minimum element - and it's easy to see that if the maximum is equal to the minimum, then the first element of the array will be equal to the second element no matter what, and the array cannot become beautiful. So, the solution is to check if the maximum is different from the minimum, and if it is so, put them on the first two positions, and the order of remaining elements does not matter. Note that the given array is sorted, so the minimum is the first element, the maximum is the last element.
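The construction above is short enough to sketch and verify directly. This is a minimal Python illustration (not the official solution; `make_beautiful` and `is_beautiful` are made-up names), assuming the input array is sorted non-decreasingly as the statement guarantees:

```python
def make_beautiful(a):
    # a is sorted non-decreasingly; put the maximum first, the rest after it.
    if a[0] == a[-1]:
        return None  # min == max: any ordering starts with two equal elements
    return [a[-1]] + a[:-1]

def is_beautiful(b):
    # No element may equal the sum of all elements before it
    # (the first element is compared against 0).
    s = 0
    for x in b:
        if x == s:
            return False
        s += x
    return True

assert make_beautiful([5, 5, 5]) is None
assert is_beautiful(make_beautiful([1, 1, 2]))
assert is_beautiful(make_beautiful([3, 3, 6, 6]))
```

Since $a_i \ge 1$, every element from the third position on sees a prefix sum of at least $\max + \min > \max$, so only the second position ever needs checking, exactly as argued above.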
[ "constructive algorithms", "math", "sortings" ]
800
t = int(input()) for i in range(t): n = int(input()) a = list(map(int, input().split())) if a[0] == a[n - 1]: print('NO') else: print('YES') print(a[n - 1], end = ' ') print(*(a[0:n-1]))
1783
B
Matrix of Differences
For a square matrix of integers of size $n \times n$, let's define its \textbf{beauty} as follows: for each pair of side-adjacent elements $x$ and $y$, write out the number $|x-y|$, and then find the number of different numbers among them. For example, for the matrix $\begin{pmatrix} 1 & 3\\ 4 & 2 \end{pmatrix}$ the numbers we consider are $|1-3|=2$, $|1-4|=3$, $|3-2|=1$ and $|4-2|=2$; there are $3$ different numbers among them ($2$, $3$ and $1$), which means that its beauty is equal to $3$. You are given an integer $n$. You have to find a matrix of size $n \times n$, where each integer from $1$ to $n^2$ occurs exactly once, such that its \textbf{beauty} is the maximum possible among all such matrices.
The first step is to notice that beauty doesn't exceed $n^2-1$, because the minimum difference between two elements is at least $1$, and the maximum difference does not exceed $n^2-1$ (the difference between the maximum element $n^2$ and the minimum element $1$). At first, finding a matrix with maximum beauty seems to be a quite difficult task. So let's try to find an array of $n^2$ elements of maximum beauty. In this case, it is not difficult to come up with an array of the form $[n^2, 1, n^2-1, 2, n^2-2, 3, \dots]$. In such an array, there are all possible differences from $1$ to $n^2-1$. So we found an array with the maximum possible beauty. It remains to find a way to "convert" the array to the matrix, i.e. to find such a sequence of matrix cells that each two adjacent cells in it are side-adjacent. One of the ways is the following: traverse the first row of the matrix from left to right, go down to the second row, traverse it from right to left, go down to the third row, traverse it from left to right, and so on. Thus, we constructed a matrix with the maximum possible beauty $n^2-1$.
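The array-plus-snake idea above can be sketched and checked in a few lines of Python (an illustrative rewrite of the same construction, not the reference solution; `build` and `beauty` are made-up names):

```python
def build(n):
    # Interleave n^2, 1, n^2-1, 2, ... so consecutive elements realize
    # every difference from 1 to n^2 - 1, then lay the sequence out in
    # snake (boustrophedon) order so consecutive elements stay adjacent.
    seq, lo, hi = [], 1, n * n
    for i in range(n * n):
        if i % 2 == 0:
            seq.append(hi); hi -= 1
        else:
            seq.append(lo); lo += 1
    mat = [seq[i * n:(i + 1) * n] for i in range(n)]
    for i in range(1, n, 2):  # reverse every second row
        mat[i].reverse()
    return mat

def beauty(mat):
    n = len(mat)
    diffs = set()
    for i in range(n):
        for j in range(n):
            if i + 1 < n:
                diffs.add(abs(mat[i][j] - mat[i + 1][j]))
            if j + 1 < n:
                diffs.add(abs(mat[i][j] - mat[i][j + 1]))
    return len(diffs)

for n in range(2, 7):
    assert beauty(build(n)) == n * n - 1
```

The snake guarantees that every pair of consecutive sequence elements is side-adjacent in the matrix, so all the differences $n^2-1, n^2-2, \dots, 1$ appear, matching the upper bound.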
[ "constructive algorithms", "math" ]
1,100
#include <bits/stdc++.h> using namespace std; #define forn(i, n) for (int i = 0; i < int(n); ++i) int main() { int t; cin >> t; while (t--) { int n; cin >> n; vector<vector<int>> a(n, vector<int>(n)); int l = 1, r = n * n, t = 0; forn(i, n) { forn(j, n) { if (t) a[i][j] = l++; else a[i][j] = r--; t ^= 1; } if (i & 1) reverse(a[i].begin(), a[i].end()); } forn(i, n) forn(j, n) cout << a[i][j] << " \n"[j == n - 1]; } }
1783
C
Yet Another Tournament
You are participating in Yet Another Tournament. There are $n + 1$ participants: you and $n$ other opponents, numbered from $1$ to $n$. Each two participants will play against each other exactly once. If the opponent $i$ plays against the opponent $j$, he wins if and only if $i > j$. When the opponent $i$ plays against you, everything becomes a little bit complicated. In order to get a win against opponent $i$, you need to prepare for the match for at least $a_i$ minutes — otherwise, you lose to that opponent. You have $m$ minutes in total to prepare for matches, but you can prepare for only one match at one moment. In other words, if you want to win against opponents $p_1, p_2, \dots, p_k$, you need to spend $a_{p_1} + a_{p_2} + \dots + a_{p_k}$ minutes for preparation — and if this number is greater than $m$, you cannot achieve a win against all of these opponents at the same time. The final place of each contestant is equal to the number of contestants with strictly more wins $+$ $1$. For example, if $3$ contestants have $5$ wins each, $1$ contestant has $3$ wins and $2$ contestants have $1$ win each, then the first $3$ participants will get the $1$-st place, the fourth one gets the $4$-th place and two last ones get the $5$-th place. Calculate the minimum possible place (lower is better) you can achieve if you can't prepare for the matches more than $m$ minutes in total.
Suppose that, at the end, you won $x$ matches. What can your final place be? Look at each opponent $i$ with $i < x$ ($0$-indexed). Since the $i$-th opponent ($0$-indexed) won $i$ games against the other opponents, even if they win against you, they'll gain $i + 1 \le x$ wins in total and can't affect your place (since your place is decided only by opponents who won strictly more matches than you). From the other side, let's look at each opponent $i$ with $i > x$ ($0$-indexed). Even if they lose to you, they still have $i > x$ wins (you have only $x$), so all of them have strictly more wins than you. As a result, there is only one opponent, $i = x$, whose match against you can affect your final place: if you won against them, your place will be $n - x$; otherwise, your place will be $n - x + 1$. Now, let's compare your possible places if you win $x$ games with the places for winning only $x - 1$ games: $x$ wins gives you place $n - x$ or $n - x + 1$, while winning $x - 1$ games leads you to place $n - x + 1$ or $n - x + 2$, which is objectively worse. In other words, it's always optimal to win as many matches as possible. How do you win the most games? By choosing the easiest opponents. Let's sort array $a$ and find the maximum prefix $[0, x)$ with $a_0 + a_1 + \dots + a_{x-1} \le m$. So, we have found the maximum number of games $x$ we can win. The last step is to check whether we can get place $n - x$, or only $n - x + 1$. If $a_x$ is already among the $x$ smallest values we chose, then we'll take place $n - x$. Otherwise, let's try to "insert" $a_x$ into this set, i.e. let's erase the biggest among them and insert $a_x$. If the sum is still less than or equal to $m$, we succeed and get place $n - x$. Otherwise, our place is $n - x + 1$. The total complexity is $O(n \log{n})$ because of sorting.
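The greedy above can be sketched in Python (an illustrative port mirroring the C++ solution below; the function name `min_place` and the test values are made up):

```python
def min_place(n, m, a):
    # Win as many of the cheapest matches as possible.
    b = sorted(a)
    wins, left = 0, m
    while wins < n and b[wins] <= left:
        left -= b[wins]
        wins += 1
    # Opponent number `wins` (0-indexed) is the only one whose match
    # against us decides between places n - wins and n - wins + 1.
    # Try swapping them in for the most expensive chosen opponent.
    if 0 < wins < n and left + b[wins - 1] >= a[wins]:
        wins += 1
    return n + 1 - wins

assert min_place(4, 15, [3, 7, 2, 5]) == 1
assert min_place(2, 1, [1, 2]) == 2
```

In the first example, $x = 3$ matches fit into the budget, and the decisive opponent ($a_3 = 5$) can be swapped in, so the answer improves to place $1$.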
[ "binary search", "greedy", "sortings" ]
1,700
#include <bits/stdc++.h> using namespace std; int main() { ios::sync_with_stdio(false); cin.tie(0); int t; cin >> t; while (t--) { int n, m; cin >> n >> m; vector<int> a(n); for (auto &x : a) cin >> x; auto b = a; sort(b.begin(), b.end()); int ans = 0; for (int i = 0; i < n && b[i] <= m; ++i) { m -= b[i]; ++ans; } if (ans != 0 && ans != n && m + b[ans - 1] >= a[ans]) ++ans; cout << n + 1 - ans << '\n'; } }
1783
D
Different Arrays
You are given an array $a$ consisting of $n$ integers. You \textbf{have to} perform the sequence of $n-2$ operations on this array: - during the first operation, you either add $a_2$ to $a_1$ and subtract $a_2$ from $a_3$, or add $a_2$ to $a_3$ and subtract $a_2$ from $a_1$; - during the second operation, you either add $a_3$ to $a_2$ and subtract $a_3$ from $a_4$, or add $a_3$ to $a_4$ and subtract $a_3$ from $a_2$; - ... - during the last operation, you either add $a_{n-1}$ to $a_{n-2}$ and subtract $a_{n-1}$ from $a_n$, or add $a_{n-1}$ to $a_n$ and subtract $a_{n-1}$ from $a_{n-2}$. So, during the $i$-th operation, you add the value of $a_{i+1}$ to one of its neighbors, and subtract it from the other neighbor. For example, if you have the array $[1, 2, 3, 4, 5]$, one of the possible sequences of operations is: - subtract $2$ from $a_3$ and add it to $a_1$, so the array becomes $[3, 2, 1, 4, 5]$; - subtract $1$ from $a_2$ and add it to $a_4$, so the array becomes $[3, 1, 1, 5, 5]$; - subtract $5$ from $a_3$ and add it to $a_5$, so the array becomes $[3, 1, -4, 5, 10]$. So, the resulting array is $[3, 1, -4, 5, 10]$. An array is reachable if it can be obtained by performing the aforementioned sequence of operations on $a$. You have to calculate the number of reachable arrays, and print it modulo $998244353$.
One of the key observations to this problem is that, after the first $i$ operations, the first $i$ elements of the array are fixed and cannot be changed afterwards. Also, after the $i$-th operation, the elements on positions from $i+3$ to $n$ are the same as they were before applying the operations. This allows us to write the following dynamic programming: $dp_{i, x, y}$ - the number of different prefixes our array can have, if we have performed $i$ operations, the $(i+1)$-th element is $x$, and the $(i+2)$-th element is $y$. The elements after $i+2$ are the same as in the original array, and the elements before $i+1$ won't be changed anymore, so we are interested only in these two elements. Let's analyze the transitions in this dynamic programming. We apply the operation $i+1$ to the elements $a_{i+1}$, $a_{i+2}$ and $a_{i+3}$. If we add $a_{i+2}$ to $a_{i+1}$, then we subtract it from $a_{i+3}$, so we transition into state $dp_{i+1, y, a_{i+3}-y}$. Otherwise, we transition into state $dp_{i+1, y, a_{i+3}+y}$. The element we leave behind is either $x-y$ or $x+y$, and if $y \ne 0$, these two transitions give us different prefixes. But if $y=0$, we need to make only one of these transitions, because adding or subtracting $0$ actually makes no difference. Okay, now we've got a solution with dynamic programming in $O(n^3 A^2)$, where $n$ is up to $300$ and $A$ is up to $300$. This is too slow. But we can notice that the value of $a_{i+1}$ actually does not affect our transitions at all; we can just discard it, so our dynamic programming becomes $O(n^2 A)$, which easily fits into TL. Small implementation note: elements can become negative, and in order to store dynamic programming with negative states in an array, we need to do something about that. I don't recommend using maps (neither ordered nor unordered): you either get an extra log factor, or make your solution susceptible to hacking. 
Instead, let's say that the value of $dp_{i, y}$, where $y$ can be a negative number, will be stored as $dp[i][y + M]$ in the array, where $M$ is some constant which is greater than the maximum possible $|y|$ (for example, $10^5$ in this problem). That way, all array indices will be non-negative. Solution complexity: $O(n^2 A)$.
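As a small illustration (not the official code), here is a Python port of this DP; it uses a dict keyed by the possibly negative tracked value instead of the offset array, which is fine outside a hacking environment, and its state bookkeeping follows the reference C++ solution below:

```python
MOD = 998244353

def count_reachable(a):
    # dp maps the tracked value after i operations to the number of
    # distinct prefixes; the value of a[i+1] itself never affects the
    # transitions, so it is not part of the state.
    n = len(a)
    dp = {0: 1}
    for i in range(1, n - 1):
        ndp = {}
        for v, c in dp.items():
            nv = a[i] + v
            ndp[nv] = (ndp.get(nv, 0) + c) % MOD
            if nv != 0:  # adding or subtracting 0 is the same move
                ndp[-nv] = (ndp.get(-nv, 0) + c) % MOD
        dp = ndp
    return sum(dp.values()) % MOD

assert count_reachable([1, 1, 1]) == 2
assert count_reachable([1, 0, 1]) == 1
assert count_reachable([1, 1, 1, 1]) == 3
```

The three asserts match a brute force over all $2^{n-2}$ operation sequences on those arrays; note the `nv != 0` branch implementing the "only one transition when $y = 0$" rule from the tutorial.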
[ "brute force", "dp", "implementation" ]
2,000
#include<bits/stdc++.h> using namespace std; const int MOD = 998244353; int add(int x, int y) { x += y; while(x >= MOD) x -= MOD; while(x < 0) x += MOD; return x; } const int ZERO = 100000; int dp[2][ZERO * 2]; void recalc(int x) { for(int i = 0; i < ZERO * 2; i++) dp[1][i] = 0; for(int i = 0; i < ZERO * 2; i++) { if(dp[0][i] == 0) continue; int nx = x + i; dp[1][nx] = add(dp[1][nx], dp[0][i]); if(nx != ZERO) dp[1][2 * ZERO - nx] = add(dp[1][2 * ZERO - nx], dp[0][i]); } for(int i = 0; i < ZERO * 2; i++) dp[0][i] = dp[1][i]; } int main() { int n; cin >> n; vector<int> a(n); for(int i = 0; i < n; i++) cin >> a[i]; dp[0][ZERO] = 1; for(int i = 1; i + 1 < n; i++) recalc(a[i]); int ans = 0; for(int i = 0; i < ZERO * 2; i++) ans = add(ans, dp[0][i]); cout << ans << endl; }
1783
E
Game of the Year
Monocarp and Polycarp are playing a computer game. This game features $n$ bosses for the players to kill, numbered from $1$ to $n$. They will fight \textbf{each} boss the following way: - Monocarp makes $k$ attempts to kill the boss; - Polycarp makes $k$ attempts to kill the boss; - Monocarp makes $k$ attempts to kill the boss; - Polycarp makes $k$ attempts to kill the boss; - ... Monocarp kills the $i$-th boss on \textbf{his} $a_i$-th attempt. Polycarp kills the $i$-th boss on \textbf{his} $b_i$-th attempt. After one of them kills the $i$-th boss, they move on to the $(i+1)$-st boss. The attempt counters reset for both of them. Once one of them kills the $n$-th boss, the game ends. Find all values of $k$ from $1$ to $n$ such that Monocarp kills \textbf{all bosses}.
Consider some value of $k$. When is it included in the answer? When Monocarp spends a lower or an equal amount of "blocks" of attempts than Polycarp for killing every boss. Formally, $\lceil \frac{a_i}{k} \rceil \le \lceil \frac{b_i}{k} \rceil$ for all $i$ from $1$ to $n$. Let's reverse this condition. $k$ is not in the answer if there exists such $i$ from $1$ to $n$ that $\lceil \frac{b_i}{k} \rceil < \lceil \frac{a_i}{k} \rceil$. So, there exists at least one value between $\lceil \frac{b_i}{k} \rceil$ and $\lceil \frac{a_i}{k} \rceil$. Let's call it $x$. Now it's $\lceil \frac{b_i}{k} \rceil < x \le \lceil \frac{a_i}{k} \rceil$. I set the $\le$ and $<$ signs arbitrarily, just so that it shows that such a value exists. You can't put both $\le$ or both $<$, because that would accept at least $2$ values or possibly $0$ values, respectively. It would be cool if we could multiply everything by $k$ and have it still work. Is it completely impossible, though? Take a look at $b_i < xk \le a_i$. What it says is that there exists a multiple of $k$ between $b_i$ and $a_i$. A multiple of $k$ is a number that's the last in each "block" of attempts (the block of values that are rounded up the same). Turns out, this is what we are looking for already. Right after the multiple of $k$, the new block starts. Thus, we were wrong with our signs. It should be $b_i \le xk < a_i$ - $a_i$ is in the block after $b_i$, so it requires more blocks of attempts. So for $k$ to not be included in the answer, there should exist at least one $i$ such that there exists a multiple of $k$ in the half-interval $[b_i; a_i)$. That is pretty easy to implement. For each $x$, calculate the number of half-intervals that cover $x$. I think this is called delta-encoding. Iterate over all half-intervals and make two updates for each one: increment by $1$ on position $b_i$ and decrement by $1$ on position $a_i$. Then make a prefix sum over these updates. 
Now the value in the $x$-th position tells you the number of half-intervals that cover $x$. To check a particular value of $k$, iterate over all multiples of $k$ and check that none are covered by half-intervals. It's known that the total number of multiples over all numbers from $1$ to $n$ is $n + \frac{n}{2} + \frac{n}{3} + \dots + \frac{n}{n} = O(n \log n)$. Overall complexity: $O(n \log n)$ per testcase.
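As a sanity check (not the official code), the delta-encoding plus multiples check can be sketched in Python; `winning_ks` is an illustrative name, and $a_i, b_i \le n$ is assumed as in the problem constraints:

```python
def winning_ks(n, a, b):
    # Mark every half-interval [b_i, a_i) with a +1/-1 pair and take
    # prefix sums; k is good iff no multiple of k is covered.
    cover = [0] * (n + 2)
    for ai, bi in zip(a, b):
        if bi < ai:  # only these pairs can rule out some k
            cover[bi] += 1
            cover[ai] -= 1
    for i in range(1, n + 1):
        cover[i] += cover[i - 1]
    return [k for k in range(1, n + 1)
            if all(cover[mult] == 0 for mult in range(k, n + 1, k))]

assert winning_ks(3, [3, 2, 1], [1, 2, 3]) == [3]
assert winning_ks(2, [1, 1], [1, 1]) == [1, 2]
```

In the first case only boss $1$ has $b_1 < a_1$, covering positions $1$ and $2$, so $k = 1$ and $k = 2$ are rejected and only $k = 3$ survives, which matches checking $\lceil a_i/k \rceil \le \lceil b_i/k \rceil$ directly.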
[ "brute force", "data structures", "math", "number theory" ]
2,300
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; int main() { int t; scanf("%d", &t); while (t--){ int n; scanf("%d", &n); vector<int> a(n), b(n); forn(i, n) scanf("%d", &a[i]); forn(i, n) scanf("%d", &b[i]); vector<int> dx(n + 1); forn(i, n) if (b[i] < a[i]){ ++dx[b[i]]; --dx[a[i]]; } forn(i, n) dx[i + 1] += dx[i]; vector<int> ans; for (int k = 1; k <= n; ++k){ bool ok = true; for (int nk = k; nk <= n; nk += k) ok &= dx[nk] == 0; if (ok) ans.push_back(k); } printf("%d\n", int(ans.size())); for (int k : ans) printf("%d ", k); puts(""); } return 0; }
1783
F
Double Sort II
You are given two permutations $a$ and $b$, both of size $n$. A permutation of size $n$ is an array of $n$ elements, where each integer from $1$ to $n$ appears exactly once. The elements in each permutation are indexed from $1$ to $n$. You can perform the following operation any number of times: - choose an integer $i$ from $1$ to $n$; - let $x$ be the integer such that $a_x = i$. Swap $a_i$ with $a_x$; - let $y$ be the integer such that $b_y = i$. Swap $b_i$ with $b_y$. Your goal is to make both permutations \textbf{sorted in ascending order} (i. e. the conditions $a_1 < a_2 < \dots < a_n$ and $b_1 < b_2 < \dots < b_n$ must be satisfied) using \textbf{minimum number of operations}. Note that both permutations must be sorted after you perform the sequence of operations you have chosen.
The solution to this problem uses cyclic decomposition of permutations. A cyclic decomposition of a permutation is formulated as follows: you treat a permutation as a directed graph on $n$ vertices, where each vertex $i$ has an outgoing arc $i \rightarrow p_i$. This graph consists of several cycles, and the properties of this graph can be helpful when solving permutation-based problems. First of all, how does the cyclic decomposition of a sorted permutation look? Every vertex belongs to its own cycle formed by a self-loop going from that vertex to itself. We will try to bring the cyclic decompositions of the given permutations to this form. What does an operation with integer $i$ do to the cyclic decomposition of the permutation? If $i$ is in its own separate cycle, the operation does nothing ($p_i = i$, so we swap an element with itself). Otherwise, let's suppose that $x$ is the element before $i$ in the same cycle ($p_x = i$), and $y$ is the element after $i$ in the same cycle ($p_i = y$). Note that this can be the same element. When we apply an operation on $i$, we swap $p_x$ with $p_i$, so after the operation, $p_i = i$, and $p_x = y$. So, $i$ leaves the cycle and forms its separate cycle, and $y$ becomes the next vertex in the cycle after $x$. So, using the operation, we exclude the vertex $i$ from the cycle. Suppose we want to sort one permutation. Then each cycle having length $\ge 2$ must be broken down: for a cycle of length $c$, we need to exclude $c-1$ vertices from it to break it down. The vertex we don't touch can be any vertex from the cycle, and all other vertices from the cycle will be extracted using one operation directed at them. It's easy to see now that if we want to sort a permutation, we don't need to apply the same operation twice, and the order of operations does not matter. Okay, then what about sorting two permutations in parallel? 
Let's change the problem a bit: instead of calculating the minimum number of operations, we will try to maximize the number of integers $i$ such that we don't perform operations with them. So, an integer $i$ can be left untouched if it is the only untouched vertex in its cycles in both permutations... Can you see where this is going? Suppose we want to leave the vertex $i$ untouched. It means that in its cycles in both permutations, every other vertex has to be extracted with an operation. So, if two cycles from different permutations have a vertex in common, we can leave this vertex untouched, as long as there are no other vertices left untouched in both of these cycles. Let's build a bipartite graph, where each vertex in the left part represents a cycle in the first permutation, and each vertex in the right part represents a cycle in the second permutation. We will treat each integer $i$ as an edge between two respective vertices in the bipartite graph. If the edge corresponding to $i$ is "used" ($i$ is left untouched), we cannot "use" any edges incident to the same vertex in left or right part. So, maximizing the number of untouched numbers is actually the same as finding the maximum matching in this bipartite graph. After you find the maximum matching, restoring the actual answer is easy. Remember that the edges saturated by the matching correspond to the integers we don't touch with our operations, the order of operations does not matter, and each integer has to be used in an operation only once. So, the actual answer is the set of all integers without those which correspond to the edges from the matching. This solution runs in $O(n^2)$ even with a straightforward implementation of bipartite matching, since the bipartite graph has at most $O(n)$ vertices and $O(n)$ edges.
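The whole pipeline (cycle decomposition, bipartite graph on cycles, Kuhn's matching) fits in a short Python sketch. This is an illustrative reimplementation, not the reference solution; permutations are 0-indexed and `min_operations` returns only the minimum number of operations:

```python
def cycles(p):
    # Decompose a 0-indexed permutation into cycles.
    seen, res = [False] * len(p), []
    for i in range(len(p)):
        if not seen[i]:
            cur, j = [], i
            while not seen[j]:
                seen[j] = True
                cur.append(j)
                j = p[j]
            res.append(cur)
    return res

def min_operations(a, b):
    n = len(a)
    ca, cb = cycles(a), cycles(b)
    where_b = [0] * n
    for idx, cyc in enumerate(cb):
        for v in cyc:
            where_b[v] = idx
    # Edge i connects i's cycle in a with i's cycle in b.
    g = [sorted({where_b[v] for v in cyc}) for cyc in ca]
    match = [-1] * len(cb)

    def try_kuhn(x, used):
        for y in g[x]:
            if not used[y]:
                used[y] = True
                if match[y] == -1 or try_kuhn(match[y], used):
                    match[y] = x
                    return True
        return False

    matched = sum(try_kuhn(x, [False] * len(cb)) for x in range(len(ca)))
    return n - matched  # one untouched integer per matched pair of cycles

assert min_operations([0, 1, 2], [0, 1, 2]) == 0  # both already sorted
assert min_operations([1, 0], [1, 0]) == 1
```

In the second case both permutations form one $2$-cycle sharing its elements, so one integer can stay untouched and a single operation sorts both arrays.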
[ "dfs and similar", "flows", "graph matchings", "graphs" ]
2,500
#include<bits/stdc++.h> using namespace std; const int N = 5043; vector<int> g[N]; int mt[N]; int u[N]; vector<vector<int>> cycle[2]; vector<int> a[2]; int n; int vs[2]; vector<vector<int>> inter; bool kuhn(int x) { if(u[x]) return false; u[x] = true; for(auto y : g[x]) { if(mt[y] == x) continue; if(mt[y] == -1 || kuhn(mt[y])) { mt[y] = x; return true; } } return false; } int find_intersection(const vector<int>& x, const vector<int>& y) { for(auto i : x) for(auto j : y) if(i == j) return i; return -1; } int main() { scanf("%d", &n); for(int k = 0; k < 2; k++) { a[k].resize(n); for(int j = 0; j < n; j++) { scanf("%d", &a[k][j]); a[k][j]--; } } for(int k = 0; k < 2; k++) { vector<bool> used(n); for(int i = 0; i < n; i++) { if(used[i]) continue; vector<int> cur; int j = i; while(!used[j]) { cur.push_back(j); used[j] = true; j = a[k][j]; } cycle[k].push_back(cur); } vs[k] = cycle[k].size(); } inter.resize(vs[0], vector<int>(vs[1])); for(int i = 0; i < vs[0]; i++) for(int j = 0; j < vs[1]; j++) { inter[i][j] = find_intersection(cycle[0][i], cycle[1][j]); if(inter[i][j] != -1) g[i].push_back(j); } for(int i = 0; i < vs[1]; i++) mt[i] = -1; for(int i = 0; i < vs[0]; i++) { for(int j = 0; j < vs[0]; j++) u[j] = false; kuhn(i); } set<int> res; for(int i = 0; i < n; i++) res.insert(i); for(int i = 0; i < vs[1]; i++) if(mt[i] != -1) res.erase(inter[mt[i]][i]); printf("%d\n", (int) res.size()); for(auto x : res) printf("%d ", x + 1); puts(""); }
1783
G
Weighed Tree Radius
You are given a tree of $n$ vertices and $n - 1$ edges. The $i$-th vertex has an initial weight $a_i$. Let the distance $d_v(u)$ from vertex $v$ to vertex $u$ be the number of edges on the path from $v$ to $u$. Note that $d_v(u) = d_u(v)$ and $d_v(v) = 0$. Let the weighted distance $w_v(u)$ from $v$ to $u$ be $w_v(u) = d_v(u) + a_u$. Note that $w_v(v) = a_v$ and $w_v(u) \neq w_u(v)$ if $a_u \neq a_v$. Analogously to the usual distance, let's define the eccentricity $e(v)$ of vertex $v$ as the greatest weighted distance from $v$ to any other vertex (including $v$ itself), or $e(v) = \max\limits_{1 \le u \le n}{w_v(u)}$. Finally, let's define the radius $r$ of the tree as the minimum eccentricity of any vertex, or $r = \min\limits_{1 \le v \le n}{e(v)}$. You need to perform $m$ queries of the following form: - $v_j$ $x_j$ — assign $a_{v_j} = x_j$. After performing each query, print the radius $r$ of the current tree.
Firstly, let's define the weight of path $(u, v)$ as $w_p(u, v) = a_u + d_u(v) + a_v$. In contrast to weighted distances, $w_p(u, v) = w_p(v, u)$, and also $w_p(v, v) = 2 a_v$. Now, let's define the diameter of a tree as a path $(u, v)$ with maximum $w_p(u, v)$. It's fine if the diameter is the degenerate case $(v, v)$. The useful part of this definition is the following: our diameter still holds most properties of the usual diameter. Let's look at two of them: There is a vertex $v$ on the diameter path $(x, y)$ with $w_v(x) = \left\lceil \frac{w_p(x, y)}{2} \right\rceil$ and $w_v(y) = w_p(x, y) - w_v(x)$. It's easy to prove after noting that $a_x \le d_x(y) + a_y$ and $a_y \le d_y(x) + a_x$ (otherwise, you could choose diameter $(x, x)$ or $(y, y)$). For any vertex $v$, the eccentricity $e(v) = \max(w_v(x), w_v(y))$. In other words, either $x$ or $y$ has the maximum weighted distance from $v$. (You can prove it by contradiction.) It also means that $e(v) \ge \left\lceil \frac{w_p(x, y)}{2} \right\rceil$. Now let's look at how the diameter changes when we change the weight $a_v$. If $a_v$ increases, it's quite easy. The only paths whose weights change are the paths ending at $v$. Denote such a path as $(v, u)$ and note that either $v = u$ or $w_p(v, u) = a_v + w_v(u) \le a_v + e(v) = a_v + \max(w_v(x), w_v(y))$. In other words, there are only three candidates for a new diameter: path $(v, v)$ with $w_p(v, v) = 2 a_v$; path $(v, x)$ with $w_p(v, x) = a_v + d_v(x) + a_x$; path $(v, y)$ with $w_p(v, y) = a_v + d_v(y) + a_y$. The only thing you need to calculate fast enough is the two distances $d_v(x)$ and $d_v(y)$. And since $d_v(x) = depth(v) + depth(x) - 2 \cdot depth(lca(v, x))$, your task reduces to calculating $lca$. Finally, how to handle decreasing $a_v$'s? Let's get rid of them using the DCP (dynamic connectivity problem) technique. Keep track of each value of $a_v$: each value that $a_v$ takes for some vertex $v$ will be "active" on some segment of queries $[l, r) \subseteq [0, m)$. 
Since there are only $m$ queries, there will be exactly $n + m$ such segments over all vertices $v$ in total. Now, every query becomes "assign $a_v = x$ on some segment of queries $[l, r)$". Note that in this setting, the previous value of $a_v$ is $0$, so you are dealing only with "increase value" queries. Finally, to handle all range queries efficiently, build a Segment Tree over the queries, place all the assignments into it, and then traverse the Segment Tree while maintaining the current diameter in order to calculate the answers to all queries. Each of the $n + m$ assignments transforms into $O(\log{m})$ queries to segment tree vertices, and performing each query requires calculating $lca$ twice. If you use the usual binary lifting, your complexity becomes $O((n + m) \log{m} \log{n})$, which is okay. But if you use a Sparse Table on the Euler tour, you can find $lca$ in $O(1)$, and your complexity becomes $O(n \log{n} + (n + m) \log{m})$.
[ "data structures", "divide and conquer", "implementation", "trees" ]
2,800
#include<bits/stdc++.h> using namespace std; #define fore(i, l, r) for(int i = int(l); i < int(r); i++) #define sz(a) int((a).size()) #define x first #define y second typedef long long li; typedef long double ld; typedef pair<int, int> pt; template<class A, class B> ostream& operator <<(ostream& out, const pair<A, B> &p) { return out << "(" << p.x << ", " << p.y << ")"; } template<class A> ostream& operator <<(ostream& out, const vector<A> &v) { fore(i, 0, sz(v)) { if(i) out << " "; out << v[i]; } return out; } const int INF = int(1e9); const li INF64 = li(1e18); const ld EPS = 1e-9; const int LOG = 18; const int N = int(2e5) + 55; int n; vector<int> a; vector<int> g[N]; inline bool read() { if(!(cin >> n)) return false; a.resize(n); fore (i, 0, n) cin >> a[i]; fore (i, 0, n - 1) { int u, v; cin >> u >> v; u--, v--; g[u].push_back(v); g[v].push_back(u); } return true; } struct Lca { vector<int> log2, tin; vector< vector<int> > hs; int T = 0; void dfs(int v, int p, int cdepth) { tin[v] = T++; hs[0][tin[v]] = cdepth; for (int to : g[v]) { if (to == p) continue; dfs(to, v, cdepth + 1); hs[0][T++] = cdepth; } } void init() { log2.assign(2 * n, 0); fore (i, 2, sz(log2)) log2[i] = log2[i / 2] + 1; hs.assign(log2.back() + 1, vector<int>(2 * n, INF)); tin.assign(n, 0); T = 0; dfs(0, -1, 0); assert(T < 2 * n); fore (pw, 0, sz(hs) - 1) { fore (i, 0, T - (1 << pw)) hs[pw + 1][i] = min(hs[pw][i], hs[pw][i + (1 << pw)]); } } int getMin(int u, int v) { if (tin[u] > tin[v]) swap(u, v); int len = log2[tin[v] + 1 - tin[u]]; int d = min(hs[len][tin[u]], hs[len][tin[v] + 1 - (1 << len)]); // cerr << u << " " << v << ": " << d << endl; return d; } inline int getH(int v) { return hs[0][tin[v]]; } } lcaST; int getFarthest(int s) { vector<int> used(n, 0); queue<int> q; used[s] = 1; q.push(s); int v; while (!q.empty()) { v = q.front(); q.pop(); for (int to : g[v]) { if (used[to]) continue; used[to] = 1; q.push(to); } } return v; } int getDist(int u, int v) { return lcaST.getH(u) + 
lcaST.getH(v) - 2 * lcaST.getMin(u, v); } const int M = int(2e5); vector<pt> ops[4 * M]; void setOp(int v, int l, int r, int lf, int rg, const pt &op) { if (l >= r || lf >= rg) return; if (l == lf && r == rg) { ops[v].push_back(op); return; } int mid = (l + r) / 2; if (lf < mid) setOp(2 * v + 1, l, mid, lf, min(mid, rg), op); if (rg > mid) setOp(2 * v + 2, mid, r, max(lf, mid), rg, op); } void updDiam(int &s, int &t, int &curD, const pt &op) { int v = op.x; a[v] = op.y; int ns = s, nt = t, nD = curD; vector<pt> cds = {{s, v}, {v, t}, {v, v}}; for (auto &c : cds) { int d1 = getDist(c.x, c.y); if (nD < a[c.x] + d1 + a[c.y]) { nD = a[c.x] + d1 + a[c.y]; ns = c.x, nt = c.y; } } s = ns; t = nt; curD = nD; } vector<int> ans; void calcDiams(int v, int l, int r, int s, int t, int curD) { for (auto &op : ops[v]) updDiam(s, t, curD, op); if (r - l > 1) { int mid = (l + r) / 2; calcDiams(2 * v + 1, l, mid, s, t, curD); calcDiams(2 * v + 2, mid, r, s, t, curD); } else ans[l] = (curD + 1) / 2; for (auto &op : ops[v]) a[op.first] = 0; } inline void solve() { lcaST.init(); int s = getFarthest(0); int t = getFarthest(s); int m; cin >> m; vector<int> lst(n, 0); fore (i, 0, m) { int v, x; cin >> v >> x; v--; setOp(0, 0, m, lst[v], i, {v, a[v]}); lst[v] = i; a[v] = x; } fore (v, 0, n) setOp(0, 0, m, lst[v], m, {v, a[v]}); ans.resize(m, -1); a.assign(n, 0); calcDiams(0, 0, m, s, t, getDist(s, t)); fore (i, 0, m) cout << ans[i] << '\n'; } int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); int tt = clock(); #endif ios_base::sync_with_stdio(false); cin.tie(0), cout.tie(0); cout << fixed << setprecision(15); if(read()) { solve(); #ifdef _DEBUG cerr << "TIME = " << clock() - tt << endl; tt = clock(); #endif } return 0; }
1784
A
Monsters (easy version)
This is the easy version of the problem. In this version, you only need to find the answer once. In this version, hacks are \textbf{not allowed}. In a computer game, you are fighting against $n$ monsters. Monster number $i$ has $a_i$ health points, all $a_i$ are integers. A monster is alive while it has at least $1$ health point. You can cast spells of two types: - Deal $1$ damage to any single alive monster of your choice. - Deal $1$ damage to all alive monsters. If at least one monster dies (ends up with $0$ health points) as a result of this action, then repeat it (and keep repeating while at least one monster dies every time). Dealing $1$ damage to a monster reduces its health by $1$. Spells of type 1 can be cast any number of times, while a spell of type 2 can be cast at most once during the game. What is the smallest number of times you need to cast spells of type 1 to kill all monsters?
First, let's prove that it's always optimal to use a spell of type 2 as your last spell in the game and kill all monsters with it. Indeed, suppose you use a spell of type 2 earlier and it deals $x$ damage to all monsters. Suppose that some monsters are still alive. For any such monster, say they had $y$ health points before the spell of type 2, and $y > x$. Then, you will need to cast $y-x$ more spells of type 1 to kill it afterwards. But you could just cast these $y-x$ spells of type 1 on this monster before casting the spell of type 2. Thus, you can move all usages of spells of type 1 before the usage of the spell of type 2 without changing the answer. Without loss of generality, assume that $a_1 \le a_2 \le \ldots \le a_n$. Let $b_i$ be the amount of health points monster $i$ has right before the spell of type 2 is cast ($1 \le b_i \le a_i$). Then, the number of spells of type 1 needed is $\sum \limits_{i=1}^n (a_i - b_i)$, which means we want to maximize $\sum \limits_{i=1}^n b_i$. Note that we can rearrange $b$ so that $b_1 \le b_2 \le \ldots \le b_n$: since $a$ is sorted too, the $b_i \le a_i$ condition will still hold. Also, since all monsters must be killed by a spell of type 2 afterwards, $b_{i+1} - b_i \le 1$ must hold. Thus, we should go through all monsters in non-decreasing order of $a_i$ and decide their $b_i$ greedily, picking the largest value satisfying both $b_i \le a_i$ and $b_i \le b_{i-1} + 1$. Specifically, we should choose $b_1 = 1$ and $b_i = \min(a_i, b_{i-1} + 1)$.
[ "brute force", "greedy" ]
1,000
#include <bits/stdc++.h> using namespace std; int main() { int tt; cin >> tt; while (tt--) { int n; cin >> n; vector<int> a(n); for (int i = 0; i < n; i++) { cin >> a[i]; } sort(a.begin(), a.end()); vector<int> b(n); b[0] = 1; for (int i = 1; i < n; i++) { b[i] = min(b[i - 1] + 1, a[i]); } long long ans = 0; for (int i = 0; i < n; i++) { ans += a[i] - b[i]; } cout << ans << '\n'; } return 0; }
1784
B
Letter Exchange
A cooperative game is played by $m$ people. In the game, there are $3m$ sheets of paper: $m$ sheets with letter 'w', $m$ sheets with letter 'i', and $m$ sheets with letter 'n'. Initially, each person is given three sheets (possibly with equal letters). The goal of the game is to allow each of the $m$ people to spell the word "win" using their sheets of paper. In other words, everyone should have one sheet with letter 'w', one sheet with letter 'i', and one sheet with letter 'n'. To achieve the goal, people can make exchanges. Two people participate in each exchange. Both of them choose exactly one sheet of paper from the three sheets they own and exchange it with each other. Find the shortest sequence of exchanges after which everyone has one 'w', one 'i', and one 'n'.
For each person, there are three essential cases of what they could initially have: Three distinct letters: "win". No need to take part in any exchanges. Two equal letters and another letter, e.g. "wii". An extra 'i' must be exchanged with someone's 'n'. Three equal letters, e.g. "www". One 'w' must be exchanged with someone's 'i', another 'w' must be exchanged with someone's 'n'. Let's create a graph on three vertices: 'w', 'i', 'n'. Whenever person $i$ has an extra letter $x$ and is lacking letter $y$, create a directed edge $x \rightarrow y$ marked with $i$. Once the graph is built, whenever you have a cycle of length $2$, that is, $x \xrightarrow{i} y \xrightarrow{j} x$, it means person $i$ needs to exchange $x$ for $y$, while person $j$ needs to exchange $y$ for $x$. Thus, both of their needs can be satisfied with just one exchange. Finally, once there are no cycles of length $2$, note that the in-degree and the out-degree of every vertex are equal. If e.g. there are $p$ edges 'w' $\rightarrow$ 'i', it follows that there are $p$ edges 'i' $\rightarrow$ 'n' and $p$ edges 'n' $\rightarrow$ 'w'. It means we can form $p$ cycles of length $3$. (The cycles could also go in the opposite direction: 'w' $\rightarrow$ 'n' $\rightarrow$ 'i' $\rightarrow$ 'w'.) In any case, each cycle of length $3$ can be solved using $2$ exchanges.
[ "constructive algorithms" ]
1,900
private fun IntArray.countOf(value: Int) = count { it == value } private fun solve() { val s = "win" fun IntArray.bad() = (0 until 3).singleOrNull { c -> count { it == c } > 1 } val data = List(readInt()) { readLn().map { s.indexOf(it) }.toIntArray() } val ans = mutableListOf<String>() fun exchange(c1: Int, c2: Int, i1: Int, i2: Int) { ans.add("${i1+1} ${s[c1]} ${i2+1} ${s[c2]}") val index1 = data[i1].indexOf(c1) val index2 = data[i2].indexOf(c2) data[i1][index1] = c2 data[i2][index2] = c1 } val todo = List(3) { List(3) { mutableListOf<Int>() } } for (i in data.indices) { val bad = data[i].bad() ?: continue for (j in 0 until 3) { if (data[i].countOf(j) == 0) { todo[bad][j].add(i) } } } for (i in 0 until 3) { for (j in 0 until 3) { if (i != j) { while (todo[i][j].isNotEmpty() && todo[j][i].isNotEmpty()) { exchange(i, j, todo[i][j].removeLast(), todo[j][i].removeLast()) } } } } while (todo[0][1].isNotEmpty() && todo[1][2].isNotEmpty() && todo[2][0].isNotEmpty()) { val a = todo[0][1].removeLast() val b = todo[1][2].removeLast() val c = todo[2][0].removeLast() exchange(0, 1, a, b) exchange(0, 2, b, c) } while (todo[1][0].isNotEmpty() && todo[2][1].isNotEmpty() && todo[0][2].isNotEmpty()) { val a = todo[1][0].removeLast() val b = todo[2][1].removeLast() val c = todo[0][2].removeLast() exchange(1, 0, a, c) exchange(2, 1, b, c) } println(ans.size) println(ans.joinToString("\n")) } fun main() { repeat(readInt()) { solve() } } private fun readLn() = readLine()!! private fun readInt() = readLn().toInt()
1784
C
Monsters (hard version)
This is the hard version of the problem. In this version, you need to find the answer for every prefix of the monster array. In a computer game, you are fighting against $n$ monsters. Monster number $i$ has $a_i$ health points, all $a_i$ are integers. A monster is alive while it has at least $1$ health point. You can cast spells of two types: - Deal $1$ damage to any single alive monster of your choice. - Deal $1$ damage to all alive monsters. If at least one monster dies (ends up with $0$ health points) as a result of this action, then repeat it (and keep repeating while at least one monster dies every time). Dealing $1$ damage to a monster reduces its health by $1$. Spells of type 1 can be cast any number of times, while a spell of type 2 can be cast at most once during the game. For every $k = 1, 2, \ldots, n$, answer the following question. Suppose that only the first $k$ monsters, with numbers $1, 2, \ldots, k$, are present in the game. What is the smallest number of times you need to cast spells of type 1 to kill all $k$ monsters?
Continuing on the solution to the easy version: now we have a set of integers $A$, we need to add elements into $A$ one by one and maintain the answer to the problem. Recall that for every $i$, either $b_i = b_{i-1}$ or $b_i = b_{i-1} + 1$. Note that $b_i = b_{i-1}$ can only happen when $b_i = a_i$. Let's call such an element useless. If we remove a useless element, the answer does not change. If there are no useless elements, we have $b_1 = 1$ and $b_i = b_{i-1} + 1$ for $i > 1$: that is, $b_i = i$. Thus, the answer to the problem can be easily calculated as $\sum \limits_{i=1}^m (a_i - b_i) = \sum \limits_{i=1}^m a_i - \frac{m(m+1)}{2}$, where $m$ is the current size of the set. We can formulate the condition "there are no useless elements" as follows. For any $x$, let $k_x$ be the number of elements in $A$ not exceeding $x$. Then, $k_x \le x$. On the other hand, suppose that for some $x$, we have $k_x > x$. Let's find the smallest such $x$. Then, we can see that $A$ contains a useless element equal to $x$, and we can safely remove it. We can check this condition after adding each new element to $A$ using a segment tree. In every cell $x$ of the array maintained by the segment tree, we will store the difference $x - k_x$. Initially, cell $x$ contains value $x$. When a new element $v$ appears, we should subtract $1$ from all cells in range $[v; n]$. Then, if a cell with a negative value appears (that is, $x - k_x < 0$, which is equivalent to $k_x > x$), we should find the leftmost such cell $x$ and remove an element equal to $x$. In particular, we should add $1$ to all cells in range $[x; n]$. Thus, we can use a segment tree with "range add" and "global min". At most one useless element can appear every time we enlarge $A$, and if that happens, we can identify and remove it in $O(\log n)$, resulting in an $O(n \log n)$ time complexity.
[ "data structures", "greedy" ]
2,200
#include <bits/stdc++.h> using namespace std; int main() { int tt; cin >> tt; while (tt--) { int n; cin >> n; vector<int> a(n); for (int i = 0; i < n; i++) { cin >> a[i]; } vector<int> mn(2 * n - 1); vector<int> add(2 * n - 1, 0); vector<int> pos(2 * n - 1); auto Pull = [&](int x, int z) { mn[x] = min(mn[x + 1], mn[z]) + add[x]; pos[x] = (mn[x + 1] <= mn[z] ? pos[x + 1] : pos[z]); }; function<void(int, int, int, int, int, int)> Modify = [&](int x, int l, int r, int ll, int rr, int v) { if (ll <= l && r <= rr) { mn[x] += v; add[x] += v; return; } int y = (l + r) >> 1; int z = x + 2 * (y - l + 1); if (ll <= y) { Modify(x + 1, l, y, ll, rr, v); } if (rr > y) { Modify(z, y + 1, r, ll, rr, v); } Pull(x, z); }; function<void(int, int, int)> Build = [&](int x, int l, int r) { if (l == r) { mn[x] = l; pos[x] = l; return; } int y = (l + r) >> 1; int z = x + 2 * (y - l + 1); Build(x + 1, l, y); Build(z, y + 1, r); Pull(x, z); }; Build(0, 1, n); long long s = 0; long long m = 0; for (int i = 0; i < n; i++) { s += a[i]; m += 1; Modify(0, 1, n, a[i], n, -1); if (mn[0] < 0) { s -= pos[0]; m -= 1; Modify(0, 1, n, pos[0], n, +1); } cout << s - m * (m + 1) / 2 << " \n"[i == n - 1]; } } return 0; }
1784
D
Wooden Spoon
$2^n$ people, numbered with distinct integers from $1$ to $2^n$, are playing in a single elimination tournament. The bracket of the tournament is a full binary tree of height $n$ with $2^n$ leaves. When two players meet each other in a match, a player with the \textbf{smaller} number always wins. The winner of the tournament is the player who wins all $n$ their matches. A virtual consolation prize "Wooden Spoon" is awarded to a player who satisfies the following $n$ conditions: - they lost their first match; - the player who beat them lost their second match; - the player who beat that player lost their third match; - $\ldots$; - the player who beat the player from the previous condition lost the final match of the tournament. It can be shown that there is always exactly one player who satisfies these conditions. Consider all possible $(2^n)!$ arrangements of players into the tournament bracket. For each player, find the number of these arrangements in which they will be awarded the "Wooden Spoon", and print these numbers modulo $998\,244\,353$.
Let's focus on the sequence of players beating each other $1 = a_0 < a_1 < \ldots < a_n$: $a_0$ is the tournament champion, $a_0$ beats $a_1$ in the last match, $a_1$ beats $a_2$ in the second-to-last match, $\ldots$, $a_{n-1}$ beats $a_n$ in the first match. For a fixed such sequence, how many ways are there to fill the tournament bracket? Let's look at the sequence in reverse. There are $2^n$ ways to put player $a_n$ somewhere. Player $a_{n-1}$ has to be the opponent of player $a_n$ in the first match. Player $a_{n-2}$ has to beat some player $b > a_{n-2}$ in the first match, and then beat $a_{n-1}$ in the second match. There are $2^n - a_{n-2} - 2$ ways to choose player $b$ (since it can not be equal to $a_{n-1}$ and $a_n$), and there are also $2$ ways to order $a_{n-2}$ and $b$. Player $a_{n-3}$ has to be the winner of a subbracket containing $3$ other players $c_1, c_2, c_3 > a_{n-3}$, and then beat $a_{n-2}$ in the third match. There are $2^n - a_{n-3} - 4$ players to choose $c_i$ from (since they can not be equal to $a_{n-2}$, $a_{n-1}$, $a_n$, and $b$), and there are $\binom{2^n - a_{n-3} - 4}{3}$ ways to do so, and there are also $4!$ ways to order $a_{n-3}$, $c_1$, $c_2$, and $c_3$. In general, player $a_{n - i}$ has to be the winner of a subbracket containing $2^{i-1} - 1$ other players, and there are $2^n - a_{n-i} - 2^{i-1}$ players to choose from, and there are $\binom{2^n - a_{n-i} - 2^{i-1}}{2^{i-1} - 1}$ ways to choose, and also $(2^{i-1})!$ ways to order this subbracket. You can see that the total number of brackets for a fixed sequence $1 = a_0 < a_1 < \ldots < a_n$ can be represented as $f(a_0, 0) \cdot f(a_1, 1) \cdot \ldots \cdot f(a_{n-1}, n-1) \cdot f(a_n, n)$, where $f(a_i, i)$ is some function of a player number and a round number. Now let's use dynamic programming: let $d(a_i, i)$ be the sum of products of $f(a_0, 0) \cdot f(a_1, 1) \cdot \ldots \cdot f(a_i, i)$ over all sequences $1 = a_0 < a_1 < \ldots < a_i$. 
Then: $d(1, 0) = f(1, 0)$; $d(a_0, 0) = 0$ for $a_0 > 1$; $d(a_i, i) = f(a_i, i) \cdot \sum \limits_{a_{i-1}=1}^{a_i-1} d(a_{i-1}, i-1)$ for $i > 0$. The answer for player $x$ is $d(x, n)$. This DP has $O(n \cdot 2^n)$ states, and note that the inner sums in the formulas for $d(a_i, i)$ and $d(a_i + 1, i)$ only differ by one summand. Thus, by using cumulative sums for transitions, we can achieve an $O(n \cdot 2^n)$ time complexity.
[ "combinatorics", "dp" ]
2,400
#include <bits/stdc++.h> using namespace std; template <typename T> T inverse(T a, T m) { T u = 0, v = 1; while (a != 0) { T t = m / a; m -= t * a; swap(a, m); u -= t * v; swap(u, v); } assert(m == 1); return u; } template <typename T> class Modular { public: using Type = typename decay<decltype(T::value)>::type; constexpr Modular() : value() {} template <typename U> Modular(const U& x) { value = normalize(x); } template <typename U> static Type normalize(const U& x) { Type v; if (-mod() <= x && x < mod()) v = static_cast<Type>(x); else v = static_cast<Type>(x % mod()); if (v < 0) v += mod(); return v; } const Type& operator()() const { return value; } template <typename U> explicit operator U() const { return static_cast<U>(value); } constexpr static Type mod() { return T::value; } Modular& operator+=(const Modular& other) { if ((value += other.value) >= mod()) value -= mod(); return *this; } Modular& operator/=(const Modular& other) { return *this *= Modular(inverse(other.value, mod())); } template <typename U = T> typename enable_if<is_same<typename Modular<U>::Type, int>::value, Modular>::type& operator*=(const Modular& rhs) { value = normalize(static_cast<int64_t>(value) * static_cast<int64_t>(rhs.value)); return *this; } private: Type value; }; template <typename T> Modular<T> operator*(const Modular<T>& lhs, const Modular<T>& rhs) { return Modular<T>(lhs) *= rhs; } template <typename T, typename U> Modular<T> operator*(const Modular<T>& lhs, U rhs) { return Modular<T>(lhs) *= rhs; } template <typename T, typename U> Modular<T> operator*(U lhs, const Modular<T>& rhs) { return Modular<T>(lhs) *= rhs; } template <typename T, typename U> Modular<T> operator/(U lhs, const Modular<T>& rhs) { return Modular<T>(lhs) /= rhs; } constexpr int md = 998244353; using Mint = Modular<std::integral_constant<decay<decltype(md)>::type, md>>; vector<Mint> fact(1, 1); vector<Mint> inv_fact(1, 1); Mint C(int n, int k) { if (k < 0 || k > n) { return 0; } while ((int) fact.size() < 
n + 1) { fact.push_back(fact.back() * (int) fact.size()); inv_fact.push_back(1 / fact.back()); } return fact[n] * inv_fact[k] * inv_fact[n - k]; } int main() { ios::sync_with_stdio(false); cin.tie(0); int n; cin >> n; C(1 << n, 0); vector<Mint> dp(1 << n); dp[0] = (1 << n) * fact[1 << (n - 1)]; for (int rd = n - 2; rd >= 0; rd--) { vector<Mint> new_dp(1 << n); Mint sum = 0; for (int i = 0; i < (1 << n); i++) { new_dp[i] = sum * C((1 << n) - 1 - i - (1 << rd), (1 << rd) - 1) * fact[1 << rd]; sum += dp[i]; } swap(dp, new_dp); } Mint sum = 0; for (int i = 0; i < (1 << n); i++) { cout << sum() << '\n'; sum += dp[i]; } return 0; }
1784
E
Infinite Game
Alice and Bob are playing an infinite game consisting of sets. Each set consists of rounds. In each round, one of the players wins. The first player to win two rounds in a set wins this set. Thus, a set always ends with the score of $2:0$ or $2:1$ in favor of one of the players. Let's call a game scenario a finite string $s$ consisting of characters 'a' and 'b'. Consider an infinite string formed with repetitions of string $s$: $sss \ldots$ Suppose that Alice and Bob play rounds according to this infinite string, left to right. If a character of the string $sss \ldots$ is 'a', then Alice wins the round; if it's 'b', Bob wins the round. As soon as one of the players wins two rounds, the set ends in their favor, and a new set starts from the next round. Let's define $a_i$ as the number of sets won by Alice among the first $i$ sets while playing according to the given scenario. Let's also define $r$ as the limit of ratio $\frac{a_i}{i}$ as $i \rightarrow \infty$. If $r > \frac{1}{2}$, we'll say that scenario $s$ is winning for Alice. If $r = \frac{1}{2}$, we'll say that scenario $s$ is tied. If $r < \frac{1}{2}$, we'll say that scenario $s$ is winning for Bob. You are given a string $s$ consisting of characters 'a', 'b', and '?'. Consider all possible ways of replacing every '?' with 'a' or 'b' to obtain a string consisting only of characters 'a' and 'b'. Count how many of them result in a scenario winning for Alice, how many result in a tied scenario, and how many result in a scenario winning for Bob. Print these three numbers modulo $998\,244\,353$.
For a fixed game scenario $s$, let's build a weighted functional graph on $4$ vertices that correspond to set scores $0:0$, $1:0$, $0:1$, and $1:1$. For each score $x$, traverse the scenario from left to right, changing the score after each letter, and starting a new set whenever necessary. If the set score by the end of the scenario is $y$, add a directed edge from $x$ to $y$. The weight of this edge is the number of sets Alice wins during the process, minus the number of sets Bob wins. When we have built such a graph for a game scenario $s$, we can easily decide whether $s$ is winning for Alice, tied, or winning for Bob. Starting from vertex $0:0$, follow the outgoing edges until you arrive at a cycle. In the cycle, find the sum of the edge weights. If the sum is positive, the scenario is winning for Alice; if the sum is $0$, the scenario is tied; if the sum is negative, the scenario is winning for Bob. Now we can use dynamic programming. Let $f(i, \{u_0, u_1, u_2, u_3\}, \{w_0, w_1, w_2, w_3\})$ be the number of ways to choose $s_1 s_2 \ldots s_i$ so that edges from vertices $0, 1, 2, 3$ go to vertices $u_0, u_1, u_2, u_3$ and have weights $w_0, w_1, w_2, w_3$, respectively. Even though this DP has $O(n^5)$ states, it might be possible to get this solution accepted if you only visit reachable states and optimize your solution's constant factor. However, here's an idea that drastically improves the time complexity. Note that in the end, we are only interested in the sum of some $w_j$, and not in every value separately. Outside of our DP, let's fix the mask of vertices that will lie on the cycle reachable from $0:0$. In the DP state, we can just store the sum $s$ of $w_j$ over $j$ belonging to this mask: $f(i, \{u_0, u_1, u_2, u_3\}, s)$. In the end, we will look at the values of $u_0, u_1, u_2, u_3$ and check if the cycle in our graph is indeed the one we want; only if that's true, we will add the DP value to the overall answer. 
This way, at the cost of running the DP $2^4$ times, we have cut the number of states to $O(n^2)$. The overall time complexity of this solution is $O(n^2)$ too, although the constant factor is huge.
[ "brute force", "combinatorics", "dp", "games", "probabilities" ]
3,100
#include <bits/stdc++.h> using namespace std; template <typename T> class Modular { public: using Type = typename decay<decltype(T::value)>::type; constexpr Modular() : value() {} template <typename U> Modular(const U& x) { value = normalize(x); } template <typename U> static Type normalize(const U& x) { Type v; if (-mod() <= x && x < mod()) v = static_cast<Type>(x); else v = static_cast<Type>(x % mod()); if (v < 0) v += mod(); return v; } const Type& operator()() const { return value; } template <typename U> explicit operator U() const { return static_cast<U>(value); } constexpr static Type mod() { return T::value; } Modular& operator+=(const Modular& other) { if ((value += other.value) >= mod()) value -= mod(); return *this; } template <typename U> Modular& operator+=(const U& other) { return *this += Modular(other); } template <typename U> friend bool operator==(const Modular<U>& lhs, const Modular<U>& rhs); private: Type value; }; template <typename T> bool operator==(const Modular<T>& lhs, const Modular<T>& rhs) { return lhs.value == rhs.value; } template <typename T, typename U> bool operator==(const Modular<T>& lhs, U rhs) { return lhs == Modular<T>(rhs); } constexpr int md = 998244353; using Mint = Modular<std::integral_constant<decay<decltype(md)>::type, md>>; int main() { ios::sync_with_stdio(false); cin.tie(0); string s; cin >> s; int len = (int) s.size(); vector<Mint> ans(3); for (int use = 1; use < (1 << 4); use++) { int pw = 1 << 8; int init = 0; for (int i = 0; i < 4; i++) { init += i << (2 * i); } vector<vector<int>> go_state(pw, vector<int>(2)); vector<vector<int>> go_diff(pw, vector<int>(2)); for (int state = 0; state < pw; state++) { for (int put = 0; put < 2; put++) { int new_diff = 0; int new_state = 0; for (int i = 0; i < 4; i++) { int to = (state >> (2 * i)) & 3; if (put == 0) { if (to & 1) { if (use & (1 << i)) { new_diff += 1; } to = 0; } else { to |= 1; } } else { if (to & 2) { if (use & (1 << i)) { new_diff -= 1; } to = 0; } else { to |= 
2; } } new_state += to << (2 * i); } go_state[state][put] = new_state; go_diff[state][put] = new_diff; } } int limit = (len + 1) / 2 * 2 * __builtin_popcount(use); vector<vector<Mint>> dp(2 * limit + 1, vector<Mint>(pw)); dp[limit][init] = 1; for (char c : s) { vector<vector<Mint>> new_dp(2 * limit + 1, vector<Mint>(pw)); for (int sum = 0; sum < (int) dp.size(); sum++) { for (int state = 0; state < pw; state++) { if (dp[sum][state] == 0) { continue; } for (int put = 0; put < 2; put++) { if ((put == 0 && c == 'b') || (put == 1 && c == 'a')) { continue; } int new_sum = sum + go_diff[state][put]; int new_state = go_state[state][put]; new_dp[new_sum][new_state] += dp[sum][state]; } } } swap(dp, new_dp); } for (int sum = 0; sum < (int) dp.size(); sum++) { for (int state = 0; state < pw; state++) { if (dp[sum][state] == 0) { continue; } vector<int> to(4); for (int i = 0; i < 4; i++) { to[i] = (state >> (2 * i)) & 3; } vector<int> seq(1, 0); while (true) { int nxt = to[seq.back()]; bool done = false; for (int i = 0; i < (int) seq.size(); i++) { if (seq[i] == nxt) { done = true; seq.erase(seq.begin(), seq.begin() + i); break; } } if (done) { break; } seq.push_back(nxt); } int real_use = 0; for (int x : seq) { real_use |= (1 << x); } if (use == real_use) { ans[sum > limit ? 0 : (sum < limit ? 2 : 1)] += dp[sum][state]; } } } } for (int i = 0; i < 3; i++) { cout << ans[i]() << '\n'; } return 0; }
1784
F
Minimums or Medians
Vika has a set of all consecutive positive integers from $1$ to $2n$, inclusive. Exactly $k$ times Vika will choose and perform one of the following two actions: - take two smallest integers from her current set and remove them; - take two median integers from her current set and remove them. Recall that medians are the integers located exactly in the middle of the set if you write down its elements in increasing order. Note that Vika's set always has an even size, thus the pair of median integers is uniquely defined. For example, two median integers of the set $\{1, 5, 6, 10, 15, 16, 18, 23\}$ are $10$ and $15$. How many different sets can Vika obtain in the end, after $k$ actions? Print this number modulo $998\,244\,353$. Two sets are considered different if some integer belongs to one of them but not to the other.
Let's denote removing minimums with L, and removing medians with M. Now a sequence of Vika's actions can be described with a string $s$ of length $k$ consisting of characters L and M. Observe that if we have a substring LMM, we can replace it with MLM, and the set of removed numbers will not change. We can keep applying this transformation until there are no LMM substrings in $s$. Now, a string $s$ of our interest looks like a concatenation of: (1) $p$ letters M, for some $0 \le p \le k$; (2) if $p < k$, then a letter L; (3) $\max(0, k - 1 - p)$ letters L and M, without two letters M in a row. Let's denote the number of letters M in part 3 above as $q$. Can different strings still lead to equal sets in the end? First, let's suppose that $k \le \frac{n - 1}{2}$. We will prove that all strings that match the above pattern result in distinct integer sets. Part 1 in the above concatenation means that integers from $n-p+1$ to $n+p$ are removed. Since there are $k-p-q$ letters L in $s$ in total, integers from $1$ to $2(k-p-q)$ are removed too. However, integers in the range $[2(k-p-q) + 1; n-p]$ are not removed, and note that $2(k-p-q)+1 \le n-p$ is equivalent to $k \le \frac{n+p+2q-1}{2}$. Hence, this range is never empty when $k \le \frac{n - 1}{2}$. Thus, we can see that different pairs of $p$ and $q$ produce distinct leftmost non-removed ranges. Now, also note that in part 3 of the concatenation, any letter M always removes some two consecutive integers (since there is no substring MM), the letters L serve as "shifts" for these removals, and different sets of "shifts" result in different final sets of integers. This finishes the proof. It is easy to find the number of ways to fill part 3 for fixed $p$ and $q$: there are $\binom{(k - 1 - p) - (q - 1)}{q}$ ways to choose $q$ positions out of $(k - 1 - p)$ so that no two consecutive positions are chosen. Now we just have to iterate over all valid pairs of $p$ and $q$ to get an $O(n^2)$ solution for the $k \le \frac{n - 1}{2}$ case.
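The count of ways to place $q$ letters M with no two adjacent can be sanity-checked by brute force. A minimal sketch (the helper name is mine, not from the editorial):

```python
from itertools import combinations
from math import comb

def count_no_adjacent(m, q):
    """Brute force: ways to choose q positions out of m with no two adjacent."""
    return sum(1 for pos in combinations(range(m), q)
               if all(b - a > 1 for a, b in zip(pos, pos[1:])))

# matches the closed form C(m - (q - 1), q) = C(m - q + 1, q)
for m in range(10):
    for q in range(m + 1):
        assert count_no_adjacent(m, q) == comb(m - q + 1, q)
```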
Before optimizing it to $O(n)$, let's get back to the $k > \frac{n - 1}{2}$ case. Some strings can result in the same final set. Let $x$ be the smallest integer in the final set. Note that $x$ is always odd. We will only look for a string that contains $\frac{x - 1}{2}$ letters L: that is, a string that removes integers from $1$ to $x-1$ only via removing minimums. We can see that there is always a unique such string. Recall the uniqueness proof for $k \le \frac{n - 1}{2}$. When $p > 0$, we will now force that leftmost non-removed range, $[2(k-p-q) + 1; n-p]$, to be non-empty. If the range is empty, our sequence of actions does not satisfy the condition from the previous paragraph, so we can skip this pair of $p$ and $q$. When $p = 0$, things are a bit different. Suppose we have fixed $q$. It means there are $k-q$ letters L in the string. These operations remove integers from $1$ to $2(k-q)$. Thus, we need the first letter M to remove integers strictly greater than $2(k-q)+1$, which gives us a lower bound on the number of letters L at the start of the string. Otherwise, we can use the same binomial coefficient formula for counting. This should finish the $O(n^2)$ solution for any $k$. To optimize it, let's iterate over $q$ first and iterate over $p$ inside. It turns out that all valid values of $p$ form a range, and if we look at what we are summing up, it is $\binom{0}{q} + \binom{1}{q} + \ldots + \binom{r}{q}$ for some integer $r$. This sum is equal to $\binom{r+1}{q+1}$ by the hockey-stick identity. Thus, we finally have an $O(n)$ solution.
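The hockey-stick identity used in the final step can be verified directly for small values; a quick sketch:

```python
from math import comb

# hockey-stick identity: C(0, q) + C(1, q) + ... + C(r, q) = C(r + 1, q + 1)
for r in range(12):
    for q in range(12):
        assert sum(comb(i, q) for i in range(r + 1)) == comb(r + 1, q + 1)
```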
[]
3,400
#include <bits/stdc++.h> using namespace std; template <typename T> T inverse(T a, T m) { T u = 0, v = 1; while (a != 0) { T t = m / a; m -= t * a; swap(a, m); u -= t * v; swap(u, v); } assert(m == 1); return u; } template <typename T> class Modular { public: using Type = typename decay<decltype(T::value)>::type; constexpr Modular() : value() {} template <typename U> Modular(const U& x) { value = normalize(x); } template <typename U> static Type normalize(const U& x) { Type v; if (-mod() <= x && x < mod()) v = static_cast<Type>(x); else v = static_cast<Type>(x % mod()); if (v < 0) v += mod(); return v; } const Type& operator()() const { return value; } template <typename U> explicit operator U() const { return static_cast<U>(value); } constexpr static Type mod() { return T::value; } Modular& operator+=(const Modular& other) { if ((value += other.value) >= mod()) value -= mod(); return *this; } Modular& operator/=(const Modular& other) { return *this *= Modular(inverse(other.value, mod())); } template <typename U = T> typename enable_if<is_same<typename Modular<U>::Type, int>::value, Modular>::type& operator*=(const Modular& rhs) { value = normalize(static_cast<int64_t>(value) * static_cast<int64_t>(rhs.value)); return *this; } private: Type value; }; template <typename T, typename U> Modular<T> operator*(const Modular<T>& lhs, U rhs) { return Modular<T>(lhs) *= rhs; } template <typename T, typename U> Modular<T> operator/(U lhs, const Modular<T>& rhs) { return Modular<T>(lhs) /= rhs; } constexpr int md = 998244353; using Mint = Modular<std::integral_constant<decay<decltype(md)>::type, md>>; vector<Mint> fact(1, 1); vector<Mint> inv_fact(1, 1); Mint C(int n, int k) { if (k < 0 || k > n) { return 0; } while ((int) fact.size() < n + 1) { fact.push_back(fact.back() * (int) fact.size()); inv_fact.push_back(1 / fact.back()); } return fact[n] * inv_fact[k] * inv_fact[n - k]; } int main() { ios::sync_with_stdio(false); cin.tie(0); int n, k; cin >> n >> k; Mint ans = 1; for 
(int q = 0; q <= k - 1; q++) { int bound = k - q - max(1, 2 * (k - q) - n + 1); ans += C(bound + 1, q + 1); } for (int q = 1; q <= k - 1; q++) { int pos = 2 * (k - q) + 2; int shifts = max(1, pos - n); int left = k - shifts; ans += C(left - (q - 1), q); } cout << ans() << '\n'; return 0; }
1786
A2
Alternating Deck (hard version)
This is a hard version of the problem. In this version, there are two colors of the cards. Alice has $n$ cards, each card is either black or white. The cards are stacked in a deck in such a way that the card colors alternate, starting from a white card. Alice deals the cards to herself and to Bob, dealing at once several cards from the top of the deck in the following order: one card to herself, two cards to Bob, three cards to Bob, four cards to herself, five cards to herself, six cards to Bob, seven cards to Bob, eight cards to herself, and so on. In other words, on the $i$-th step, Alice deals $i$ top cards from the deck to one of the players; on the first step, she deals the cards to herself and then alternates the players every two steps. When there aren't enough cards at some step, Alice deals all the remaining cards to the current player, and the process stops. \begin{center} {\small First Alice's steps in a deck of many cards.} \end{center} How many cards of each color will Alice and Bob have at the end?
Note that on the $i$-th step, Alice deals $i$ cards from the deck. It means that after $k$ steps, $\frac{k(k + 1)}{2}$ cards are taken from the deck. Thus, after $O(\sqrt{n})$ steps, the deck is empty. We can simulate the steps one by one, keeping track of whose turn it is and the color of the top card. Using this information, we can count how many cards of each color each player has. Print these counts in the end.
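The two facts the simulation relies on — step $i$ goes to Alice exactly when $i \bmod 4 \in \{0, 1\}$, and the deck empties after $O(\sqrt{n})$ steps — can be sketched as follows (helper names are mine, not from the reference solution):

```python
def receiver(i):
    """Which player gets the i-th deal: the pattern is A, B, B, A, A, B, B, A, ...
    so Alice receives exactly when i mod 4 is 0 or 1."""
    return 'Alice' if i % 4 in (0, 1) else 'Bob'

def steps_to_empty(n):
    """Number of dealing steps until an n-card deck runs out."""
    step = 0
    while n > 0:
        step += 1
        n -= min(step, n)
    return step

# after k full steps, k*(k+1)/2 cards are gone, so about sqrt(2n) steps suffice
assert steps_to_empty(10**9) <= 2 * int((10**9) ** 0.5)
```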
[ "implementation" ]
800
NT = int(input()) for T in range(NT): n = int(input()) answer = [0, 0, 0, 0] first_card = 1 for it in range(1, 20000): who = 0 if it % 4 == 1 or it % 4 == 0 else 1 cnt = it if n < cnt: cnt = n cnt_white = (cnt + first_card % 2) // 2 cnt_black = cnt - cnt_white answer[who * 2 + 0] += cnt_white answer[who * 2 + 1] += cnt_black first_card += cnt n -= cnt if n == 0: break assert(n == 0) print(*answer)
1786
B
Cake Assembly Line
A cake assembly line in a bakery was once again optimized, and now $n$ cakes are made at a time! In the last step, each of the $n$ cakes should be covered with chocolate. Consider a side view of the conveyor belt; let it be a number line. The $i$-th cake occupies the segment $[a_i - w, a_i + w]$ on this line, each pair of these segments does not have common points. Above the conveyor, there are $n$ dispensers, and when a common button is pressed, chocolate from the $i$-th dispenser will cover the conveyor segment $[b_i - h, b_i + h]$. Each pair of these segments also does not have common points. \begin{center} {\small Cakes and dispensers corresponding to the first example.} \end{center} The calibration of this conveyor belt part has not yet been performed, so you are to make it. Determine if it's possible to shift the conveyor so that each cake has some chocolate on it, and there is no chocolate outside the cakes. You can assume that the conveyor is long enough, so the cakes never fall. Also note that the button can only be pressed once. \begin{center} {\small In the first example we can shift the cakes as shown in the picture.} \end{center}
Obviously, the $i$-th cake should be below the $i$-th dispenser. The leftmost possible position of the cake is when the chocolate touches the cake's right border: if $c_i$ is the new position of the cake's center, then in this case $c_i + w = b_i + h$. The rightmost possible position is, similarly, when $c_i - w = b_i - h$. Thus, the new position of the center should be between $b_i + h - w$ and $b_i - h + w$. This means that the $i$-th cake should be shifted by any length between $(b_i + h - w) - a_i$ and $(b_i - h + w) - a_i$. Since all cakes on the conveyor move at the same time, the shift $p$ should satisfy $(b_i + h - w) - a_i \le p \le (b_i - h + w) - a_i$ for all $i$ at the same time. Such a $p$ exists if and only if $\max_i (b_i + h - w - a_i) \le \min_i (b_i - h + w - a_i)$.
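The whole check is just an intersection of per-cake shift intervals. A minimal sketch (function name and sample inputs are mine, not taken from the problem's tests):

```python
def feasible_shift(a, b, w, h):
    """Intersect the per-cake shift intervals [(b_i+h-w)-a_i, (b_i-h+w)-a_i];
    return one valid common shift p, or None if the intersection is empty."""
    lo = max((bi + h - w) - ai for ai, bi in zip(a, b))  # leftmost allowed shift
    hi = min((bi - h + w) - ai for ai, bi in zip(a, b))  # rightmost allowed shift
    return lo if lo <= hi else None
```

For example, `feasible_shift([0, 10], [1, 11], 3, 1)` finds a common shift, while `feasible_shift([0, 10], [5, 6], 3, 1)` reports that none exists.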
[ "brute force", "sortings" ]
1,300
#include <bits/stdc++.h> using namespace std; const int inf = 1000000000; int main() { int tt; cin >> tt; while (tt--) { int n, w, h; cin >> n >> w >> h; vector<int> a(n); for (int i = 0; i < n; i++) { cin >> a[i]; } vector<int> b(n); for (int i = 0; i < n; i++) { cin >> b[i]; } int minshift = -inf; int maxshift = inf; for (int i = 0; i < n; i++) { minshift = max(minshift, (b[i] + h) - (a[i] + w)); maxshift = min(maxshift, (b[i] - h) - (a[i] - w)); } if (minshift <= maxshift) { cout << "YES" << '\n'; } else { cout << "NO" << '\n'; } } return 0; }
1787
A
Exponential Equation
You are given an integer $n$. Find any pair of integers $(x,y)$ ($1\leq x,y\leq n$) such that $x^y\cdot y+y^x\cdot x = n$.
For even $n$, a key observation is that $x=1, y=\dfrac{n}{2}$ always works. For odd $n$, notice that $x^yy+y^xx=xy(x^{y-1}+y^{x-1})$. If $x$ or $y$ is even, then $x^y y+y^x x$ is obviously even. Otherwise, both $x$ and $y$ are odd, so $x^{y-1}$ and $y^{x-1}$ are both odd, hence $x^{y-1}+y^{x-1}$ is even, and $x^yy+y^xx$ is again even. That means $x^yy+y^xx$ is always even, so there is no solution for odd $n$.
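Both halves of the argument — the construction for even $n$ and the parity obstruction for odd $n$ — are easy to check by brute force; a small sketch (the helper name is mine):

```python
def solve(n):
    """Even n: output (x, y) = (1, n // 2); odd n: no solution exists."""
    return (1, n // 2) if n % 2 == 0 else None

# the construction satisfies x^y * y + y^x * x = n for every even n
for n in range(2, 200, 2):
    x, y = solve(n)
    assert x ** y * y + y ** x * x == n

# the parity argument: the expression is even for all (small) x, y
assert all((x ** y * y + y ** x * x) % 2 == 0
           for x in range(1, 9) for y in range(1, 9))
```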
[ "constructive algorithms", "math" ]
800
null
1787
B
Number Factorization
Given an integer $n$. Consider all pairs of integer arrays $a$ and $p$ of the same length such that $n = \prod a_i^{p_i}$ (i.e. $a_1^{p_1}\cdot a_2^{p_2}\cdot\ldots$) ($a_i>1;p_i>0$) and $a_i$ is the product of some (possibly one) \textbf{distinct} prime numbers. For example, for $n = 28 = 2^2\cdot 7^1 = 4^1 \cdot 7^1$ the array pair $a = [2, 7]$, $p = [2, 1]$ is correct, but the pair of arrays $a = [4, 7]$, $p = [1, 1]$ is not, because $4=2^2$ is a product of non-distinct prime numbers. Your task is to find the maximum value of $\sum a_i \cdot p_i$ (i.e. $a_1\cdot p_1 + a_2\cdot p_2 + \ldots$) over all possible pairs of arrays $a$ and $p$. Note that you do not need to minimize or maximize the length of the arrays.
First, $a_i^{p_i}$ is equivalent to the product of $p_i$ copies of $a_i^{1}$, so it is sufficient to set all $p_i$ to $1$. Decompose $n$ into prime factors; then, greedily, make each $a_i$ the product of as many distinct primes as possible: while any prime factor of $n$ remains, take the product of all distinct primes still present and decrease their exponents by one. This maximizes the sum.
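One way to realize this greedy (equivalent to the reference code's suffix-product computation) can be sketched as follows; the function name is mine:

```python
def max_sum(n):
    """Factorize n, then repeatedly take the product of all primes with a
    remaining exponent and decrement those exponents; sum up the products."""
    exps = {}
    d = 2
    while d * d <= n:          # trial-division factorization
        while n % d == 0:
            exps[d] = exps.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        exps[n] = exps.get(n, 0) + 1
    total = 0
    while exps:
        prod = 1
        for p in exps:
            prod *= p
        total += prod          # one squarefree factor a_i, used with p_i = 1
        exps = {p: e - 1 for p, e in exps.items() if e > 1}
    return total

assert max_sum(28) == 16       # 28 = 2^2 * 7 -> a = [14, 2], sum 14 + 2
```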
[ "greedy", "math", "number theory" ]
1,100
#include <bits/stdc++.h> using namespace std; #define mp make_pair pair<int, int> s[110]; int d[110]; void get() { int n, l = 0, i, c; cin >> n; for (i = 2; i * i <= n; i++) { if (n % i == 0) { c = 0; while (n % i == 0) c++, n /= i; s[++l] = make_pair(c, i); } } if (n != 1) s[++l] = make_pair(1, n); sort(s + 1, s + l + 1), d[l + 1] = 1; for (i = l; i >= 1; i--) d[i] = d[i + 1] * s[i].second; int ans = 0; for (i = 1; i <= l; i++) if (s[i].first != s[i - 1].first) ans += d[i] * (s[i].first - s[i - 1].first); cout << ans << endl; } signed main() { ios::sync_with_stdio(0); cin.tie(0); int T; cin >> T; while (T--) get(); return 0; }
1787
C
Remove the Bracket
RSJ has a sequence $a$ of $n$ integers $a_1,a_2, \ldots, a_n$ and an integer $s$. For each of $a_2,a_3, \ldots, a_{n-1}$, he chose a pair of \textbf{non-negative integers} $x_i$ and $y_i$ such that $x_i+y_i=a_i$ and $(x_i-s) \cdot (y_i-s) \geq 0$. Now he is interested in the value $$F = a_1 \cdot x_2+y_2 \cdot x_3+y_3 \cdot x_4 + \ldots + y_{n - 2} \cdot x_{n-1}+y_{n-1} \cdot a_n.$$ Please help him find the minimum possible value $F$ he can get by choosing $x_i$ and $y_i$ optimally. It can be shown that there is always at least one valid way to choose them.
Idea & Solution: rsj. This is the reason why the problem was named Remove the Bracket: $\begin{aligned} \text{Product} &= a_1 \cdot a_2 \cdot a_3 \cdot \ldots \cdot a_n = \\ &= a_1 \cdot (x_2+y_2) \cdot (x_3+y_3) \cdot \ldots \cdot (x_{n-1}+y_{n-1}) \cdot a_n = \\ &\overset{\text{?}}{=} a_1 \cdot x_2+y_2 \cdot x_3+y_3 \cdot \ldots \cdot x_{n-1}+y_{n-1} \cdot a_n. \end{aligned}$ "Removing the brackets" in the last (incorrect) step turns the product into the expression $F$ from the statement. However, we discussed removing this explanation from the statement on 28th Jan. Really sorry for the inconvenience with the statement!
[ "dp", "greedy", "math" ]
1,600
null