1895
B
Points and Minimum Distance
You are given a sequence of integers $a$ of length $2n$. You have to split these $2n$ integers into $n$ pairs; each pair will represent the coordinates of a point on a plane. Each number from the sequence $a$ should become the $x$ or $y$ coordinate of exactly one point. Note that some points can be equal. After the points are formed, you have to choose a path $s$ that starts from one of these points, ends at one of these points, and visits all $n$ points at least once. The length of path $s$ is the sum of distances between all adjacent points on the path. In this problem, the distance between two points $(x_1, y_1)$ and $(x_2, y_2)$ is defined as $|x_1-x_2| + |y_1-y_2|$. Your task is to form $n$ points and choose a path $s$ in such a way that the length of path $s$ is minimized.
Since the order of $a_i$ does not matter, let's sort them for convenience. Let's treat the resulting path as two one-dimensional paths: one path $x_1 \rightarrow x_2 \rightarrow \dots \rightarrow x_n$ and one path $y_1 \rightarrow y_2 \rightarrow \dots \rightarrow y_n$. The length of the path $s$ is equal to the sum of lengths of these two paths. If we fix which integers are $x$-coordinates and which integers are $y$-coordinates, it's quite easy to see that it is optimal to place both $[x_1, x_2, \dots, x_n]$ and $[y_1, y_2, \dots, y_n]$ in sorted order: the length of the path visiting both $\min x_i$ and $\max x_i$ should be at least $\max x_i - \min x_i$, and sorting gives exactly that result; the same with $y$-coordinates. Okay, then, how do we choose which integers are $x$-coordinates and which integers are $y$-coordinates? The total length of the path $s$ will be equal to $\max x_i - \min x_i + \max y_i - \min y_i$; one of the minimums will be equal to the value of $a_1$; one of the maximums will be equal to the value of $a_{2n}$; so we need to consider the remaining minimum and maximum. The minimum coordinate of any type should be less than or equal to at least $n - 1$ elements. Similarly, the maximum coordinate should be greater than or equal to at least $n-1$ elements. So, the second minimum (the minimum which is not $a_1$) is at most $a_{n+1}$, the second maximum is at least $a_n$, and the length of the path is at least $a_{2n} - a_1 + a_n - a_{n+1}$. And it is possible to reach this bound: take the first $n$ values in $a$ as $x$-coordinates, and the last $n$ values as $y$-coordinates.
[ "greedy", "math", "sortings" ]
800
#include <bits/stdc++.h>
using namespace std;

int n;
vector<int> a;

inline void read() {
    cin >> n;
    a.clear();
    for (int i = 0; i < 2 * n; i++) {
        int x;
        cin >> x;
        a.push_back(x);
    }
}

inline void solve() {
    sort(a.begin(), a.end());
    vector<pair<int, int>> pts;
    for (int i = 0; i < n; i++)
        pts.push_back(make_pair(a[i], a[i + n]));
    int ans = 0;
    for (int i = 1; i < n; i++)
        ans += abs(pts[i].first - pts[i - 1].first) + abs(pts[i].second - pts[i - 1].second);
    cout << ans << '\n';
    for (int i = 0; i < n; i++)
        cout << pts[i].first << ' ' << pts[i].second << '\n';
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        read();
        solve();
    }
}
1895
C
Torn Lucky Ticket
A ticket is a non-empty string of digits from $1$ to $9$. A lucky ticket is such a ticket that: - it has an even length; - the sum of digits in the first half is equal to the sum of digits in the second half. You are given $n$ ticket pieces $s_1, s_2, \dots, s_n$. How many pairs $(i, j)$ (for $1 \le i, j \le n$) are there such that $s_i + s_j$ is a lucky ticket? Note that it's possible that $i=j$. Here, the + operator denotes the concatenation of the two strings. For example, if $s_i$ is 13, and $s_j$ is 37, then $s_i + s_j$ is 1337.
There is an obvious $O(n^2)$ approach: iterate over the first part and the second part, and check the sum. In order to improve it, let's try to get rid of the second iteration. Consider the case where the first part is at least as long as the second part. So, we still iterate over the first part in $O(n)$. However, instead of iterating over the exact second part, let's just iterate over its length. Now, we know the total length of the parts, but not their sums of digits. Hmm, not exactly. By fixing the longer part, we actually know what the required sum of each half should be. The first half is fully inside the first part. However, this first part also contains some digits that belong to the second half. So, if the sum of the second part was $s$, the total sum of the second half would be these digits plus $s$. Let $l$ be the total length of the ticket: the length of the fixed first part plus the fixed length of the second part. Let $\mathit{sum}_l$ be the sum of the first $\frac{l}{2}$ digits of the first part, and $\mathit{sum}_r$ be the sum of its remaining digits. Then $\mathit{sum}_l = \mathit{sum}_r + s$, which rewrites into $\mathit{sum}_l - \mathit{sum}_r = s$. Thus, we know exactly what the sum of the second part should be. And every ticket piece that has exactly the fixed length and sum exactly $\mathit{sum}_l - \mathit{sum}_r$ will form a lucky ticket with the fixed part. So, we have to precalculate $\mathit{cnt}[l][s]$, which is the number of ticket pieces that have length $l$ and sum $s$, and use that structure to speed up the solution. The mirror case, where the second part is strictly longer than the first part, is handled similarly. Overall complexity: $O(n)$.
[ "brute force", "dp", "hashing", "implementation", "math" ]
1400
n = int(input())
s = input().split()
ans = 0
cnt = [[0 for k in range(46)] for k in range(6)]
for y in s:
    cnt[len(y)][sum([int(c) for c in y])] += 1
for L in s:
    for lenr in range(len(L) % 2, len(L) + 1, 2):
        l = len(L) + lenr
        suml = sum([int(c) for c in L[:l // 2]])
        sumr = sum([int(c) for c in L[l // 2:]])
        if suml - sumr >= 0:
            ans += cnt[lenr][suml - sumr]
for R in s:
    for lenl in range(len(R) % 2, len(R), 2):
        l = len(R) + lenl
        suml = sum([int(c) for c in R[-l // 2:]])
        sumr = sum([int(c) for c in R[:-l // 2]])
        if suml - sumr >= 0:
            ans += cnt[lenl][suml - sumr]
print(ans)
1895
D
XOR Construction
You are given $n-1$ integers $a_1, a_2, \dots, a_{n-1}$. Your task is to construct an array $b_1, b_2, \dots, b_n$ such that: - every integer from $0$ to $n-1$ appears in $b$ exactly once; - for every $i$ from $1$ to $n-1$, $b_i \oplus b_{i+1} = a_i$ (where $\oplus$ denotes the bitwise XOR operator).
We can see that the first element of the array $b$ determines all other values: $b_{i + 1} = b_1 \oplus a_1 \oplus \dots \oplus a_i$. So let's iterate over the value of $b_1$. For every value of $b_1$, we need to check whether it produces a correct permutation or not (i.e. whether all $b_i < n$). To do it in a fast way, we can generate an array $c$, where $c_i$ is the XOR of the first $i$ elements of the array $a$ (i.e. $c_i = a_1 \oplus a_2 \oplus \cdots \oplus a_i$ and $c_0 = 0$). We can see that $b_{i + 1} = b_1 \oplus c_i$. Let's store all values of $c_i$ in a binary trie. To check that $b_1$ produces an array where all elements are less than $n$, we can calculate the maximum value of $b_1 \oplus c_i$ by descending the trie. If the maximum is less than $n$, then it's a valid first element of the permutation. Note that we don't actually need to check that the minimum is exactly $0$ and all elements are distinct: we are guaranteed that the answer exists, so all values $[0, c_1, c_2, c_3, \dots, c_{n-1}]$ are pairwise distinct, and no matter which $b_1$ we choose, all $b_i$ will also be pairwise distinct.
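The prefix-XOR plus binary-trie idea can be sketched in Python as follows (a compact illustration of ours, assuming all values fit in $20$ bits; the names are not from the official solution):

```python
LOG = 20  # bit width; assumes all values are below 2^20

def build_trie(values):
    # binary trie over fixed-width LOG-bit values
    trie = [[-1, -1]]
    for v in values:
        node = 0
        for bit in range(LOG - 1, -1, -1):
            b = (v >> bit) & 1
            if trie[node][b] == -1:
                trie[node][b] = len(trie)
                trie.append([-1, -1])
            node = trie[node][b]
    return trie

def max_xor(trie, x):
    # maximum of x ^ v over all v stored in the trie
    node, res = 0, 0
    for bit in range(LOG - 1, -1, -1):
        want = ((x >> bit) & 1) ^ 1  # prefer the opposite bit
        if trie[node][want] == -1:
            want ^= 1
        res |= (want ^ ((x >> bit) & 1)) << bit
        node = trie[node][want]
    return res

def xor_construction(n, a):
    c = [0]                      # c[i] = a_1 xor ... xor a_i
    for v in a:
        c.append(c[-1] ^ v)
    trie = build_trie(c)
    for b1 in range(n):          # try every candidate first element
        if max_xor(trie, b1) < n:
            return [b1 ^ ci for ci in c]
```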
[ "bitmasks", "constructive algorithms", "data structures", "math", "string suffix structures", "trees" ]
1900
#include <bits/stdc++.h>
using namespace std;

const int LOG = 20;

int main() {
    ios::sync_with_stdio(false);
    cin.tie(0);
    int n;
    cin >> n;
    vector<int> a(n);
    for (int i = 1; i < n; ++i) {
        cin >> a[i];
        a[i] ^= a[i - 1];
    }
    vector<array<int, 2>> t({{-1, -1}});
    auto add = [&](int x) {
        int v = 0;
        for (int i = LOG - 1; i >= 0; --i) {
            int j = (x >> i) & 1;
            if (t[v][j] == -1) {
                t[v][j] = t.size();
                t.push_back({-1, -1});
            }
            v = t[v][j];
        }
    };
    for (int x : a) add(x);
    auto get = [&](int x) {
        int v = 0;
        for (int i = LOG - 1; i >= 0; --i) {
            int j = (x >> i) & 1;
            if (t[v][j ^ 1] != -1) j ^= 1;
            x ^= j << i;
            v = t[v][j];
        }
        return x;
    };
    for (int x = 0; x < n; ++x) {
        if (get(x) == n - 1) {
            for (int i : a) cout << (x ^ i) << ' ';
            break;
        }
    }
}
1895
E
Infinite Card Game
Monocarp and Bicarp are playing a card game. Each card has two parameters: an attack value and a defence value. A card $s$ beats another card $t$ if the attack of $s$ is strictly greater than the defence of $t$. Monocarp has $n$ cards, the $i$-th of them has an attack value of $\mathit{ax}_i$ and a defence value of $\mathit{ay}_i$. Bicarp has $m$ cards, the $j$-th of them has an attack value of $\mathit{bx}_j$ and a defence value of $\mathit{by}_j$. On the first move, Monocarp chooses one of his cards and plays it. Bicarp has to respond with his own card that beats that card. After that, Monocarp has to respond with a card that beats Bicarp's card. After that, it's Bicarp's turn, and so forth. \textbf{After a card is beaten, it returns to the hand of the player who played it.} It implies that each player always has the same set of cards to play as at the start of the game. The game ends when the current player has no cards that beat the card which their opponent just played, and the current player loses. If the game lasts for $100^{500}$ moves, it's declared a draw. Both Monocarp and Bicarp play optimally. That is, if a player has a winning strategy regardless of his opponent's moves, he plays for a win. Otherwise, if he has a drawing strategy, he plays for a draw. You are asked to calculate three values: - the number of Monocarp's starting moves that result in a win for Monocarp; - the number of Monocarp's starting moves that result in a draw; - the number of Monocarp's starting moves that result in a win for Bicarp.
Let's restate the game in game theory terms. The state of the game can be just the card that is currently on top, since none of the previously played cards matter. A move is still a card beating another card, so these are the edges of the game graph. Now we can solve this as a general game: mark all trivial winning and losing states and determine the outcome for all other states with a DFS/BFS on a transposed game graph. Unfortunately, there are $O(n^2)$ edges in the graph, so we can't quite do it as is. There are multiple ways to optimize that; I'll describe the smartest one, in my opinion. Notice that, after a card beats another card, its attack doesn't matter at all; only its defence value is relevant. And the larger the defence is, the fewer responses the opponent has. In particular, the sets of possible responses to two cards either coincide or one is contained in the other. So, there is always a best response for each card (or multiple equally good ones): the card with the largest defence among the ones that beat the current one. To find this card, you can sort the cards by their attack value and compute a prefix/suffix defence maximum. This massively reduces the number of edges: now each state has at most one outgoing edge. Zero outgoing edges mean that the state is winning for the player who played that card, since the opponent has no card to beat it. For these states, we can tell the winner. The rest of the states either reach these marked ones, and thus can be deduced, or form a closed loop among each other, making the outcome a draw (since there is always a response for each move). Overall complexity: $O(n \log n)$ per testcase.
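A condensed Python sketch of this solution (our own illustration, not the official implementation; it uses memoized recursion on the one-successor game graph instead of an explicit BFS on the transposed graph):

```python
import bisect

def infinite_card_game(mono, bi):
    # mono, bi: lists of (attack, defence) pairs
    def prep(cards):
        cards = sorted(cards)                 # sort by attack
        attacks = [atk for atk, _ in cards]
        # best[i] = index of the max-defence card among cards[i:]
        best = [0] * (len(cards) + 1)
        best[-1] = -1
        for i in range(len(cards) - 1, -1, -1):
            j = best[i + 1]
            best[i] = i if j == -1 or cards[i][1] > cards[j][1] else j
        return cards, attacks, best

    sides = [prep(mono), prep(bi)]

    def response(side, idx):
        # best response to card idx of `side`: max-defence card that beats it
        df = sides[side][0][idx][1]
        cards, attacks, best = sides[side ^ 1]
        pos = bisect.bisect_right(attacks, df)  # first card with attack > df
        return None if pos == len(cards) else (side ^ 1, best[pos])

    memo = {}
    def outcome(state):
        # +1: the player who just played this card wins; 0: draw; -1: loses
        if state in memo:
            return memo[state]
        memo[state] = 0                      # provisional: a cycle is a draw
        nxt = response(*state)
        memo[state] = 1 if nxt is None else -outcome(nxt)
        return memo[state]

    results = [outcome((0, i)) for i in range(len(mono))]
    return results.count(1), results.count(0), results.count(-1)
```

Note how the draw case falls out of the provisional memo value: walking a cycle of best responses returns $0$ to every state on it.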
[ "binary search", "brute force", "data structures", "dfs and similar", "dp", "dsu", "games", "graphs", "greedy", "sortings", "two pointers" ]
2300
#include <bits/stdc++.h>
#define forn(i, n) for (int i = 0; i < int(n); i++)
using namespace std;

struct card {
    int x, y;
    card() {}
    card(int x, int y) : x(x), y(y) {}
};

void dfs(int v, const vector<int> &g, vector<int> &res, vector<char> &used) {
    if (used[v]) return;
    used[v] = true;
    dfs(g[v], g, res, used);
    res[v] = -res[g[v]];
}

void solve() {
    vector<vector<card>> a(2);
    vector<vector<int>> prpos(2);
    forn(z, 2) {
        int n;
        scanf("%d", &n);
        a[z].resize(n);
        forn(i, n) scanf("%d", &a[z][i].x);
        forn(i, n) scanf("%d", &a[z][i].y);
        sort(a[z].begin(), a[z].end(), [](const card &a, const card &b) { return a.x > b.x; });
        prpos[z].resize(n + 1, -1);
        forn(i, n) {
            if (prpos[z][i] == -1 || a[z][i].y > a[z][prpos[z][i]].y)
                prpos[z][i + 1] = i;
            else
                prpos[z][i + 1] = prpos[z][i];
        }
    }
    int n = a[0].size();
    vector<int> g(n + a[1].size());
    forn(z, 2) forn(i, a[z].size()) {
        int cnt = lower_bound(a[z ^ 1].begin(), a[z ^ 1].end(), card(a[z][i].y, -1),
                              [](const card &a, const card &b) { return a.x > b.x; }) - a[z ^ 1].begin();
        g[i + z * n] = prpos[z ^ 1][cnt] == -1 ? -1 : prpos[z ^ 1][cnt] + (z ^ 1) * n;
    }
    vector<int> res(g.size());
    vector<char> used(g.size());
    forn(i, g.size()) if (g[i] == -1) res[i] = 1, used[i] = true;
    int w = 0, l = 0;
    forn(i, n) {
        if (!used[i]) dfs(i, g, res, used);
        w += res[i] == 1;
        l += res[i] == -1;
    }
    printf("%d %d %d\n", w, n - l - w, l);
}

int main() {
    int t;
    scanf("%d", &t);
    while (t--) solve();
}
1895
F
Fancy Arrays
Let's call an array $a$ of $n$ non-negative integers fancy if the following conditions hold: - at least one of the numbers $x$, $x + 1$, ..., $x+k-1$ appears in the array; - consecutive elements of the array differ by at most $k$ (i.e. $|a_i-a_{i-1}| \le k$ for each $i \in [2, n]$). You are given $n$, $x$ and $k$. Your task is to calculate the number of fancy arrays of length $n$. Since the answer can be large, print it modulo $10^9+7$.
It is difficult to directly calculate the number of arrays that satisfy both conditions. So let's calculate the opposite value: the number of arrays where the second condition holds and the first one is necessarily violated. Such an array can only contain integers from the two sets $[0, x)$ and $[x + k, \infty)$. Moreover, it can consist of elements from only one of these sets, not both, because the difference between elements from different sets is at least $k+1$, which violates the second condition. So the answer to the problem is $f(0, M) - f(0, x-1) - f(x+k, M)$, where $f(l, r)$ is the number of arrays of length $n$ that satisfy the second condition and have all elements in the range from $l$ to $r$, and $M$ is some big constant (let it be equal to $10^{100}$). Let's take a closer look at the function $f(l, r)$. We can represent the required array $a$ as the integer $a_1$ together with an array of $n-1$ differences $\Delta_i = a_{i+1} - a_i$. Because of the second condition, each $\Delta_i \in [-k, k]$, so there are $(2k+1)^{n-1}$ difference arrays in total. Let $cnt(l, r, \Delta)$ be the number of valid $a_1$ for the given array $\Delta$ ($a_1$ is valid if it produces an array $a$ whose elements all lie between $l$ and $r$). Using this representation, $f(l, r) = \sum\limits_{\Delta} cnt(l, r, \Delta)$. Moreover, let $mn$ and $mx$ be the minimum and the maximum of the prefix sums of $\Delta$ (i.e. of the values $a_i - a_1$, including the empty prefix equal to $0$). Then $a_1$ can be any integer in $[l - mn, r - mx]$, so $cnt(l, r, \Delta) = \max(0, (r - l) - (mx - mn) + 1)$. It is difficult to calculate the values of $f(0, M)$ and $f(x+k, M)$ independently, because we can't iterate over all arrays $\Delta$. Luckily, our goal is to calculate their difference. Since the value of $M$ is big enough, $cnt$ of any $\Delta$ is positive.
So we can represent $f(0, M) - f(x+k, M)$ as $\sum\limits_{\Delta} cnt(0, M, \Delta) - \sum\limits_{\Delta} cnt(x+k, M, \Delta) = \sum\limits_{\Delta} \big( ((M - 0) - (mx - mn) + 1) - ((M - (x + k)) - (mx - mn) + 1) \big) = \sum\limits_{\Delta} (x + k) = (x + k)(2k+1)^{n-1}.$ It remains to calculate the value of $f(0, x-1)$. We can do it using simple dynamic programming: $dp_{i, j}$ is the number of valid arrays of length $i$ whose last element is $j$. The transition from $(i, y)$ to $(i + 1, z)$ can be made if $|y - z| \le k$, and the answer is equal to the sum of $dp_{n, j}$ over all $j \in [0, x)$. Such dynamic programming works in $O(nx)$, but $n$ is too big for this solution. However, thanks to the fairly simple transitions, we can represent it as matrix multiplication; using fast matrix exponentiation, we can calculate it in $O(x^3\log n)$. The total answer is $(x+k)(2k+1)^{n-1} - f(0, x - 1)$.
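A small Python sketch of the final formula (ours, assuming $x \ge 1$): $f(0, x-1)$ via matrix exponentiation on an $x \times x$ transition matrix, subtracted from $(x+k)(2k+1)^{n-1}$. For tiny parameters it can be checked against a finite brute force, since no element of a fancy array can exceed $x+k-1+k(n-1)$:

```python
from itertools import product

MOD = 10**9 + 7

def count_fancy(n, x, k):
    # f(0, x-1): arrays over [0, x) with |a_i - a_{i+1}| <= k, equal to the
    # sum of all entries of A^(n-1), where A[i][j] = [|i - j| <= k]
    def mul(A, B):
        return [[sum(A[i][t] * B[t][j] for t in range(x)) % MOD
                 for j in range(x)] for i in range(x)]
    R = [[int(i == j) for j in range(x)] for i in range(x)]  # identity
    A = [[int(abs(i - j) <= k) for j in range(x)] for i in range(x)]
    e = n - 1
    while e:                       # fast exponentiation
        if e & 1:
            R = mul(R, A)
        A = mul(A, A)
        e >>= 1
    f = sum(map(sum, R)) % MOD
    return ((x + k) * pow(2 * k + 1, n - 1, MOD) - f) % MOD

def count_fancy_brute(n, x, k):
    hi = x + k - 1 + k * (n - 1)   # upper bound on any element of a fancy array
    return sum(
        all(abs(a[i] - a[i + 1]) <= k for i in range(n - 1))
        and any(x <= v < x + k for v in a)
        for a in product(range(hi + 1), repeat=n)
    )
```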
[ "combinatorics", "dp", "math", "matrices" ]
2600
#include <bits/stdc++.h>
using namespace std;

#define forn(i, n) for (int i = 0; i < int(n); ++i)

const int MOD = 1e9 + 7;
const int N = 40;
using mat = array<array<int, N>, N>;

int add(int x, int y) {
    x += y;
    if (x >= MOD) x -= MOD;
    if (x < 0) x += MOD;
    return x;
}

int mul(int x, int y) {
    return x * 1LL * y % MOD;
}

mat mul(mat x, mat y) {
    mat res;
    forn(i, N) forn(j, N) res[i][j] = 0;
    forn(i, N) forn(j, N) forn(k, N)
        res[i][j] = add(res[i][j], mul(x[i][k], y[k][j]));
    return res;
}

template<class T>
T binpow(T x, int y, T e) {
    T res = e;
    while (y) {
        if (y & 1) res = mul(res, x);
        x = mul(x, x);
        y >>= 1;
    }
    return res;
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n, x, k;
        cin >> n >> x >> k;
        mat a, e;
        forn(i, N) forn(j, N) a[i][j] = (i < x && abs(i - j) <= k);
        forn(i, N) forn(j, N) e[i][j] = (i == j);
        mat b = binpow(a, n - 1, e);
        int ans = mul(x + k, binpow(2 * k + 1, n - 1, 1));
        forn(i, x) forn(j, x) ans = add(ans, -b[i][j]);
        cout << ans << '\n';
    }
}
1895
G
Two Characters, Two Colors
You are given a string consisting of characters 0 and/or 1. You have to paint every character of this string into one of two colors, red or blue. If you paint the $i$-th character red, you get $r_i$ coins. If you paint it blue, you get $b_i$ coins. After coloring the string, you remove every \textbf{blue} character from it, and count the number of inversions in the resulting string (i. e. the number of pairs of characters such that the left character in the pair is 1, and the right character in the pair is 0). For each inversion, you have to pay $1$ coin. What is the maximum number of coins you can earn?
This problem requires partitioning some object into two parts, and imposes different costs depending on how we perform the partition. Let's try to model it with minimum cuts. Create a flow network with $n+2$ vertices: a vertex for every character of the string, a source and a sink. Using the edges of the network, we will try to model the following: if we paint the $i$-th character red, we lose $b_i$ coins (we can treat it as always gaining $r_i + b_i$ coins no matter what, and then losing coins instead of earning them for coloring the characters); if we paint the $i$-th character blue, we lose $r_i$ coins (the same applies here); if $i < j$, $s_i = 1$, $s_j = 0$ and both $s_i$ and $s_j$ are red, we lose $1$ coin. To model the third constraint, let's say that the vertices representing $1$'s are red if they belong to the set $S$ in the cut, or blue if they belong to $T$. For the vertices representing $0$'s, this will be the opposite: they will be red if they belong to the set $T$, or blue if they belong to the set $S$. So, if we add a directed edge with capacity $1$ from $i$ to $j$ when $s_i = 1$, $s_j = 0$ and $i < j$, this edge will be cut exactly when both $s_i$ and $s_j$ are red, and they create an inversion. To model the second constraint, for vertices representing $1$'s, add an incoming edge from the source with capacity $r_i$, and an outgoing edge to the sink with capacity $b_i$. For vertices representing $0$'s, this will be vice versa (since red vertices representing $0$'s belong to $T$, the edge from the source to that vertex should have capacity $b_i$, not $r_i$). Now we got ourselves a flow network. The answer to the problem will be $\sum\limits_{i=1}^{n} (r_i + b_i) - mincut$, where $mincut$ is the minimum cut in this network. But instead of searching for the minimum cut, we will try to calculate the maximum flow. We are going to do it greedily. 
Let's process the vertices of the network, from the vertex representing the $1$-st character to the vertex representing the $n$-th character. Every time, we will try to push as much flow as possible through the current vertex. When processing a vertex, first of all, let's try to push $\min(r_i, b_i)$ flow through the edges connecting the source and the sink with it. Then, if it is a vertex with a character of type $1$, let's remember that we can push $r_i - \min(r_i, b_i)$ flow through it to successive vertices of type $0$ (let's call this excess flow for that vertex). And if it is a vertex representing a character of type $0$, let's try to push at most $r_i - \min(r_i, b_i)$ flow into it from the vertices of type $1$ we processed earlier (but no more than $1$ unit of flow from each vertex). But when we process a vertex of type $0$, how do we choose from which vertices of type $1$ we take the flow? It can be shown that we can take $1$ unit of flow from the vertices with the maximum amount of excess flow (the formal proof is kinda long, but the main idea is that every optimal flow assignment can be transformed into the one working in this greedy way without breaking anything). So, to recap, when we process a vertex of type $1$, we remember that it has some excess flow. And when we process a vertex of type $0$, we take $1$ unit of excess flow from several vertices with the maximum amount of excess flow. So, we need a data structure that allows us to process the following queries: add an integer; calculate the number of positive integers in the structure; subtract $1$ from $k$ maximum integers in the structure. Model implementation of this data structure is kinda long (there are easier ways to do it), but I will describe it nevertheless. 
We will use an explicit-key treap which stores two values in each vertex - an integer belonging to the structure and the quantity of that integer in the structure (so, it's kinda like a map which counts the number of occurrences of each integer). This treap has to support adding $-1$ to all values in some subtree (with lazy propagation). Most operations with it are fairly straightforward, but how do we actually subtract $1$ from $k$ maximum values in the tree? We will do it in the following way: split the treap to extract $k$ maximum elements (let's denote the first treap as the part without $k$ maximums, and the second treap as the part with those $k$ maximums); add $-1$ to the values in the second treap; while the minimum element in the second treap is not greater than the maximum element in the first treap, remove the minimum from the second treap and insert it into the first treap; merge the treaps. It's quite easy to see that if you store the pairs of the form "element, the number of its occurrences" in the treap, the third step will require moving at most $2$ elements. And the resulting complexity of every operation with this data structure becomes $O(\log n)$, so the whole solution works in $O(n \log n)$.
[ "binary search", "data structures", "dp", "flows", "greedy" ]
3100
#include <bits/stdc++.h>
using namespace std;

mt19937 rnd(42);

struct node {
    long long x;
    int y;
    int cnt;
    int sum;
    int push;
    node* l;
    node* r;
    node() {};
    node(long long x, int y, int cnt, int sum, int push, node* l, node* r) : x(x), y(y), cnt(cnt), sum(sum), push(push), l(l), r(r) {};
};

typedef node* treap;
typedef pair<treap, treap> ptt;

int getSum(treap t) { return (t ? t->sum : 0); }
int getCnt(treap t) { return (t ? t->cnt : 0); }

treap fix(treap t) {
    if (!t) return t;
    int p = t->push;
    if (p != 0) {
        if (t->l) t->l->push += p;
        if (t->r) t->r->push += p;
        t->x += p;
        t->push = 0;
    }
    t->sum = getSum(t->l) + t->cnt + getSum(t->r);
    return t;
}

treap merge(treap a, treap b) {
    a = fix(a);
    b = fix(b);
    if (!a) return b;
    if (!b) return a;
    if (a->y > b->y) {
        a->r = merge(a->r, b);
        return fix(a);
    } else {
        b->l = merge(a, b->l);
        return fix(b);
    }
}

bool hasKey(treap t, long long x) {
    t = fix(t);
    if (!t) return false;
    if (t->x == x) return true;
    if (t->x < x) return hasKey(t->r, x);
    return hasKey(t->l, x);
}

ptt splitKey(treap t, long long x) {
    t = fix(t);
    if (!t) return make_pair(t, t);
    if (t->x < x) {
        ptt p = splitKey(t->r, x);
        t->r = p.first;
        return make_pair(fix(t), p.second);
    } else {
        ptt p = splitKey(t->l, x);
        t->l = p.second;
        return make_pair(p.first, fix(t));
    }
}

long long kthMax(treap t, int k) {
    t = fix(t);
    if (!t) return -1;
    int sumR = getSum(t->r);
    if (sumR >= k) return kthMax(t->r, k);
    if (sumR + t->cnt >= k) return t->x;
    return kthMax(t->l, k - sumR - t->cnt);
}

int getCntByKey(treap t, long long x) {
    t = fix(t);
    if (!t) return 0;
    if (t->x == x) return t->cnt;
    if (t->x > x) return getCntByKey(t->l, x);
    return getCntByKey(t->r, x);
}

treap increaseKey(treap t, long long x, int cnt) {
    t = fix(t);
    if (!t) return t;
    if (t->x == x) {
        t->cnt += cnt;
    } else if (t->x > x) {
        t->l = increaseKey(t->l, x, cnt);
    } else {
        t->r = increaseKey(t->r, x, cnt);
    }
    return fix(t);
}

treap insert(treap t, long long x, int y, int cnt) {
    t = fix(t);
    if (!t || t->y < y) {
        ptt p = splitKey(t, x);
        treap res = new node(x, y, cnt, cnt, 0, p.first, p.second);
        return fix(res);
    }
    if (t->x > x) t->l = insert(t->l, x, y, cnt);
    else t->r = insert(t->r, x, y, cnt);
    return fix(t);
}

treap insertMain(treap t, long long x, int cnt) {
    if (hasKey(t, x)) return increaseKey(t, x, cnt);
    else return insert(t, x, rnd(), cnt);
}

treap eraseKey(treap t, long long x) {
    t = fix(t);
    if (!t) return NULL;
    if (t->x > x) {
        t->l = eraseKey(t->l, x);
        return fix(t);
    }
    if (t->x < x) {
        t->r = eraseKey(t->r, x);
        return fix(t);
    }
    treap tnew = merge(t->l, t->r);
    delete t;
    return tnew;
}

ptt splitSum(treap t, int k) {
    t = fix(t);
    if (!t) return make_pair(t, t);
    if (k == 0) return make_pair(t, treap(NULL));
    long long x = kthMax(t, k);
    int cnt = getCntByKey(t, x);
    t = eraseKey(t, x);
    ptt p = splitKey(t, x);
    k -= getSum(p.second);
    if (k > 0) p.second = merge(new node(x, rnd(), k, k, 0, NULL, NULL), p.second);
    if (k < cnt) p.first = merge(p.first, new node(x, rnd(), cnt - k, cnt - k, 0, NULL, NULL));
    return p;
}

long long minKey(treap t) {
    t = fix(t);
    if (!t) return 1e18;
    if (t->l) return minKey(t->l);
    return t->x;
}

treap decreaseUpToKLargest(treap t, int k, int& res) {
    if (!k) return t;
    t = fix(t);
    k = min(k, getSum(t));
    res = k;
    ptt p = splitSum(t, k);
    if (p.second) {
        p.second->push--;
        for (int i = 0; i < 2; i++) {
            long long x = minKey(p.second);
            int cntMin = getCntByKey(p.second, x);
            p.second = eraseKey(p.second, x);
            if (x != 0 && cntMin != 0) p.first = insertMain(p.first, x, cntMin);
        }
    }
    return merge(p.first, p.second);
}

void destroy(treap t) {
    if (!t) return;
    if (t->l) destroy(t->l);
    if (t->r) destroy(t->r);
    delete t;
}

void solve() {
    int n;
    cin >> n;
    string s;
    cin >> s;
    vector<long long> r(n), b(n);
    for (int i = 0; i < n; i++) cin >> r[i];
    for (int i = 0; i < n; i++) cin >> b[i];
    long long sum = 0;
    for (int i = 0; i < n; i++) sum += r[i] + b[i];
    long long flow = 0;
    treap t = NULL;
    for (int i = 0; i < n; i++) {
        long long d = min(r[i], b[i]);
        flow += d;
        r[i] -= d;
        b[i] -= d;
        if (s[i] == '0') {
            int add = 0;
            if (r[i] > 1e9) r[i] = 1e9;
            t = decreaseUpToKLargest(t, r[i], add);
            flow += add;
        } else {
            if (r[i] != 0) t = insertMain(t, r[i], 1);
        }
    }
    cout << sum - flow << endl;
}

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    int t;
    cin >> t;
    for (int i = 0; i < t; i++) solve();
}
1896
A
Jagged Swaps
You are given a permutation$^\dagger$ $a$ of size $n$. You can do the following operation - Select an index $i$ from $2$ to $n - 1$ such that $a_{i - 1} < a_i$ and $a_i > a_{i+1}$. Swap $a_i$ and $a_{i+1}$. Determine whether it is possible to sort the permutation after a finite number of operations. $^\dagger$ A permutation is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array) and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
Look at the samples. Observe that since we are only allowed to choose $i \ge 2$ to swap $a_i$ and $a_{i+1}$, it means that $a_1$ cannot be modified by the operation. Hence, $a_1 = 1$ must hold. We can prove that as long as $a_1 = 1$, we will be able to sort the array. Consider the largest element of the array. Let its index be $i$. Our objective is to move $a_i$ to the end of the array. If $i = n$, it means that the largest element is already at the end. Otherwise, since $a_i$ is the largest element, this means that $a_{i-1} < a_i$ and $a_i > a_{i+1}$. Hence, we can do an operation on index $i$ and move the largest element one step closer to the end. We repeatedly do the operation until we finally move the largest element to the end of the array. Then, we can pretend that the largest element does not exist and do the same algorithm for the prefix of size $n - 1$. Hence, we will be able to sort the array by doing this repeatedly.
[ "sortings" ]
800
#include <bits/stdc++.h>
using namespace std;

typedef long long ll;
typedef vector<ll> vi;

int main() {
    int t;
    cin >> t;
    while (t --> 0) {
        int n;
        cin >> n;
        vi arr(n);
        for (int i = 0; i < n; i++) {
            cin >> arr[i];
        }
        if (arr[0] == 1) {
            cout << "YES";
        } else {
            cout << "NO";
        }
        cout << '\n';
    }
}
1896
B
AB Flipping
You are given a string $s$ of length $n$ consisting of characters $A$ and $B$. You are allowed to do the following operation: - Choose an index $1 \le i \le n - 1$ such that $s_i = A$ and $s_{i + 1} = B$. Then, swap $s_i$ and $s_{i+1}$. You are only allowed to do the operation \textbf{at most once} for each index $1 \le i \le n - 1$. However, you can do it in any order you want. Find the maximum number of operations that you can carry out.
What happens when $s$ starts with some $\texttt{B}$ and ends with some $\texttt{A}$? If the string consists of only $\texttt{A}$ or only $\texttt{B}$, no operations can be done and hence the answer is $0$. Otherwise, let $x$ be the smallest index where $s_x = \texttt{A}$ and $y$ be the largest index where $s_y = \texttt{B}$. If $x > y$, this means that the string is of the form $s=\texttt{B}\ldots\texttt{BA}\ldots\texttt{A}$. Since all the $\texttt{B}$ are before the $\texttt{A}$, no operation can be done and hence the answer is also $0$. Now, we are left with the case where $x < y$. Note that $s[1,x-1] = \texttt{B}\ldots\texttt{B}$ and $s[y+1,n] = \texttt{A}\ldots\texttt{A}$ by definition. Since the operation moves $\texttt{A}$ to the right and $\texttt{B}$ to the left, this means that $s[1,x - 1]$ will always consist of all $\texttt{B}$ and $s[y + 1, n]$ will always consist of all $\texttt{A}$. Hence, no operation can be done from index $1$ to $x - 1$ as well as from index $y$ to $n - 1$. The remaining indices where an operation could be done are from $x$ to $y - 1$. It can be proven that all $y - x$ operations can be done if their order is chosen optimally. Let array $b$ store the indices of $s$ between $x$ and $y$ that contain $\texttt{B}$ in increasing order. In other words, $x < b_1 < b_2 < \ldots < b_k = y$ and $s_{b_i} = \texttt{B}$, where $k$ is the number of occurrences of $\texttt{B}$ between $x$ and $y$. For convenience, we let $b_0 = x$. Then, we do the operations in the following order: $b_1 - 1, b_1 - 2, \ldots, b_0 + 1, b_0,$ $b_2 - 1, b_2 - 2, \ldots, b_1 + 1, b_1,$ $b_3 - 1, b_3 - 2, \ldots, b_2 + 1, b_2,$ $\vdots$ $b_k - 1, b_k - 2, \ldots, b_{k - 1} + 1, b_{k - 1}$ It can be seen that the above ordering does operation on all indices between $x$ and $y - 1$. To see why all of the operations are valid, we look at each row separately. 
Each row starts with $b_i - 1$, which is valid as $s_{b_i} = \texttt{B}$ and $s_{b_i - 1} = \texttt{A}$ (assuming that it is not the last operation of the row). Then, the following operations in the same row move $\texttt{B}$ to the left until position $b_{i - 1}$. To see why the last operation of the row is valid as well, even though $s_{b_{i - 1}}$ might be equal to $\texttt{B}$ initially by definition, either $i = 1$ which means that $s_{b_0} = s_x = \texttt{A}$, or an operation was done on index $b_{i - 1} - 1$ in the previous row which moved $\texttt{A}$ to index $b_{i - 1}$. Hence, all operations are valid.
[ "greedy", "strings", "two pointers" ]
900
#include <bits/stdc++.h>
using namespace std;

char s[200005];

signed main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
    int tc, n;
    cin >> tc;
    while (tc--) {
        cin >> n;
        s[n + 1] = 'C';
        for (int i = 1; i <= n; ++i) cin >> s[i];
        int pt1 = 1, pt2 = 1, ans = 0;
        while (s[pt1] == 'B') ++pt1, ++pt2;
        while (pt1 <= n) {
            int cntA = 0, cntB = 0;
            while (s[pt2] == 'A') ++pt2, ++cntA;
            while (s[pt2] == 'B') ++pt2, ++cntB;
            if (s[pt2 - 1] == 'B') ans += pt2 - pt1 - 1;
            if (cntB) pt1 = pt2 - 1;
            else break;
        }
        cout << ans << '\n';
    }
}
1896
C
Matching Arrays
You are given two arrays $a$ and $b$ of size $n$. The beauty of the arrays $a$ and $b$ is the number of indices $i$ such that $a_i > b_i$. You are also given an integer $x$. Determine whether it is possible to rearrange the elements of $b$ such that the beauty of the arrays becomes $x$. If it is possible, output one valid rearrangement of $b$.
Consider a greedy algorithm. For simplicity, assume that both arrays $a$ and $b$ are sorted in increasing order. The final answer can be obtained by permuting the answer array with the permutation that maps the sorted array $a$ back to the original array $a$.

Claim: If the rearrangement $b_{x + 1}, b_{x + 2}, \ldots, b_n, b_1, b_2, \ldots, b_x$ does not have beauty $x$, it is not possible to rearrange $b$ to make the beauty equal to $x$.

Proof: Suppose there exists an alternative rearrangement, represented by a permutation $p$, where $b_{p_1}, b_{p_2}, \ldots, b_{p_n}$ results in a beauty of $x$. Let array $q$ represent the $x$ indices where $a_i > b_{p_i}$, listed in decreasing order. In other words, $n \ge q_1 > q_2 > \ldots > q_x \ge 1$ and $a_{q_i} > b_{p_{q_i}}$. Let $i$ be the smallest index where $q_i \neq n - i + 1$ ($q_i < n - i + 1$ also holds, since $q$ is strictly decreasing). We know that $a_{q_i} > b_{p_{q_i}}$ and $a_{n - i + 1} \le b_{p_{n - i + 1}}$. Since $a$ is sorted and $q_i < n - i + 1$, we have $a_{q_i} \le a_{n - i + 1}$, and hence, $a_{n - i + 1} > b_{p_{q_i}}$ and $a_{q_i} \le b_{p_{n - i + 1}}$. This means that we can swap $p_{q_i}$ with $p_{n - i + 1}$ without changing the beauty of the array, while making $q_i = n - i + 1$. Hence, by doing the swapping repeatedly, we will get $q_i = n - i + 1$ for all $i$ between $1$ and $x$.

An alternative interpretation of the result above is that we obtain a solution where for all $i$ between $1$ and $n - x$, we have $a_i \le b_{p_i}$, while for all $i$ between $n - x + 1$ and $n$, we have $a_i > b_{p_i}$.

Let $i$ be the largest index between $1$ and $n - x$ where $p_i \neq i + x$ ($p_i < i + x$ due to maximality). Then, let $j$ be the index where $p_j = i + x$. Consider two cases:

- $j \le n - x$. Since $i$ is the largest index where $p_i \neq i + x$, this means that $j < i$ and hence $a_j \le a_i$. We have $a_i \le b_{p_i} \le b_{i + x} = b_{p_j}$ and $a_j \le a_i \le b_{p_i}$. Hence, we can swap $p_i$ and $p_j$ without changing the beauty of the array, while making $p_i = i + x$.
- $j > n - x$. We have $a_i \le b_{p_i} \le b_{i + x} = b_{p_j}$ and $a_j > b_{p_j} = b_{i + x} \ge b_{p_i}$. Hence, we can swap $p_i$ and $p_j$ without changing the beauty of the array, while making $p_i = i + x$.

By repeating the above, we can obtain a solution where $p_i = i + x$ for all $i$ between $1$ and $n - x$.

Let $i$ be the smallest index between $n - x + 1$ and $n$ where $p_i \neq i - n + x$ ($p_i > i - n + x$ by minimality). Then, let $j$ be the index where $p_j = i - n + x$. Note that $j > n - x$ as well, since $p_i = i + x$ for all $i$ between $1$ and $n - x$. Since $i$ is the smallest index where $p_i \neq i - n + x$, this means that $i < j$ and hence $a_i \le a_j$. We have $a_i > b_{p_i} \ge b_{i - n + x} = b_{p_j}$ and $a_j \ge a_i > b_{p_i}$. Hence, we can swap $p_i$ and $p_j$ without changing the beauty of the array, while making $p_i = i - n + x$. By doing this repeatedly, we can obtain a solution where $p_i = i - n + x$ for all $i$ between $n - x + 1$ and $n$.

Now, $p = [x + 1, x + 2, \ldots, n, 1, 2, \ldots, x]$, which matches the rearrangement in our claim.
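The proof above pins down a single candidate arrangement, so in code it suffices to build that arrangement and verify its beauty. A sketch mirroring the reference solution (the function name is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Build the candidate rearrangement from the claim: after sorting both arrays,
// pair the x smallest b's with the x largest a's, and the rest in order; then
// verify the beauty is exactly x. Returns {ok, arrangement in original order}.
pair<bool, vector<int>> tryBeauty(const vector<int>& a, vector<int> b, int x) {
    int n = a.size();
    vector<int> aid(n);
    iota(aid.begin(), aid.end(), 0);
    sort(aid.begin(), aid.end(), [&](int l, int r) { return a[l] < a[r]; });
    sort(b.begin(), b.end());
    vector<int> ans(n);
    for (int i = 0; i < x; i++) ans[aid[n - x + i]] = b[i];
    for (int i = x; i < n; i++) ans[aid[i - x]] = b[i];
    int beauty = 0;
    for (int i = 0; i < n; i++) beauty += a[i] > ans[i];
    return {beauty == x, ans};
}
```

If the returned flag is false, no rearrangement of $b$ achieves beauty $x$.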
[ "binary search", "constructive algorithms", "greedy", "sortings" ]
1400
#include <bits/stdc++.h>
using namespace std;

#define REP(i, s, e) for (int i = (s); i < (e); i++)
#define RREP(i, s, e) for (int i = (s); i >= (e); i--)

const int INF = 1000000005;
const int MAXN = 200005;

int t;
int n, x;
int a[MAXN], b[MAXN], aid[MAXN];
int ans[MAXN];

int main() {
    ios::sync_with_stdio(0), cin.tie(0);
    cin >> t;
    while (t--) {
        cin >> n >> x;
        REP (i, 0, n) { cin >> a[i]; }
        REP (i, 0, n) { cin >> b[i]; }
        iota(aid, aid + n, 0);
        sort(aid, aid + n, [&] (int l, int r) { return a[l] < a[r]; });
        sort(b, b + n);
        REP (i, 0, x) { ans[aid[n - x + i]] = b[i]; }
        REP (i, x, n) { ans[aid[i - x]] = b[i]; }
        REP (i, 0, n) { x -= a[i] > ans[i]; }
        if (x == 0) {
            cout << "YES\n";
            REP (i, 0, n) { cout << ans[i] << ' '; }
            cout << '\n';
        } else {
            cout << "NO\n";
        }
    }
    return 0;
}
1896
D
Ones and Twos
You are given a $1$-indexed array $a$ of length $n$ where each element is $1$ or $2$. Process $q$ queries of the following two types: - "1 s": check if there exists a subarray$^{\dagger}$ of $a$ whose sum equals to $s$. - "2 i v": change $a_i$ to $v$. $^{\dagger}$ An array $b$ is a subarray of an array $a$ if $b$ can be obtained from $a$ by deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end. In particular, an array is a subarray of itself.
Consider some small examples and write down every possible value of subarray sums. Can you see some patterns?

Denote $s[l,r]$ as the sum of the subarray from $l$ to $r$.

Claim: If there is any subarray with sum $v\ge 2$, we can find a subarray with sum $v-2$.

Proof: Suppose $s[l,r]=v$. Consider 3 cases:

- $a[l]=2$: we have $s[l+1,r]=v-2$.
- $a[r]=2$: we have $s[l,r-1]=v-2$.
- $a[l]=a[r]=1$: we have $s[l+1,r-1]=v-2$.

So to check if there exists a subarray whose sum equals $v$, we can find the maximum subarray sum having the same parity as $v$ and compare it with $v$. The case where $(s[1,n]-v)\,\%\,2=0$ is obvious; suppose the opposite happens. If array $a$ is full of $2$-s, the answer is $\texttt{NO}$. Otherwise, let $x$ and $y$ be the positions of the first $1$ and the last $1$ in $a$. Any subarray with $l\le x\le y\le r$ will have a different parity from $v$. So we compare $\max(s[x+1, n],s[1,y-1])$ with $v$ to get the answer.
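The casework above can be packaged into one membership test. A sketch using the editorial's notation ($num$ = number of $1$-s; $x$, $y$ = positions of the first and last $1$); the function name is ours, and the formulas mirror the reference solution:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Does some (nonempty) subarray of a (elements 1 or 2) sum to v >= 1?
// Same-parity case: compare v with the total sum. Opposite-parity case:
// the best same-parity subarray must exclude the first or the last 1.
bool hasSubarraySum(const vector<int>& a, long long v) {
    long long n = a.size();
    long long num = count(a.begin(), a.end(), 1); // number of 1s
    long long total = 2 * n - num;                // s[1, n]
    if ((v - num) % 2 == 0) return v >= 1 && v <= total;
    if (num == 0) return false;                   // all 2s: only even sums
    long long first = find(a.begin(), a.end(), 1) - a.begin();
    long long last = n - 1 - (find(a.rbegin(), a.rend(), 1) - a.rbegin());
    long long s1 = 2 * last - (num - 1);             // sum of s[1, y-1]
    long long s2 = 2 * (n - first - 1) - (num - 1);  // sum of s[x+1, n]
    return v >= 1 && v <= max(s1, s2);
}
```

The reference solution keeps the positions of $1$-s in a `std::set` so that `first`, `last`, and `num` survive point updates.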
[ "binary search", "data structures", "divide and conquer", "math", "two pointers" ]
1700
#include <bits/stdc++.h>
using namespace std;

int main() {
    cin.tie(0)->sync_with_stdio(0);
    int t;
    cin >> t;
    while (t--) {
        int n, q;
        cin >> n >> q;
        vector<int> a(n);
        set<int> pos;
        for (int i = 0; i < n; i++) {
            cin >> a[i];
            if (a[i] == 1) pos.insert(i);
        }
        while (q--) {
            int cmd;
            cin >> cmd;
            if (cmd == 1) {
                int v;
                cin >> v;
                int num = pos.size();
                if ((v - num) & 1) {
                    if (num == 0) cout << "NO";
                    else {
                        int s1 = 2 * *pos.rbegin() - (num - 1);
                        int s2 = 2 * (n - *pos.begin() - 1) - (num - 1);
                        if (v <= max(s1, s2)) cout << "YES";
                        else cout << "NO";
                    }
                } else {
                    if (v <= 2 * n - num) cout << "YES";
                    else cout << "NO";
                }
                cout << '\n';
            } else {
                int i;
                cin >> i;
                i--;
                pos.erase(i);
                cin >> a[i];
                if (a[i] == 1) pos.insert(i);
            }
        }
    }
}
1896
E
Permutation Sorting
You are given a permutation$^\dagger$ $a$ of size $n$. We call an index $i$ good if $a_i=i$ is satisfied. After each second, we rotate all indices that are not good to the right by one position. Formally, - Let $s_1,s_2,\ldots,s_k$ be the indices of $a$ that are \textbf{not} good in increasing order. That is, $s_j < s_{j+1}$ and if index $i$ is not good, then there exists $j$ such that $s_j=i$. - For each $i$ from $1$ to $k$, we assign $a_{s_{(i \% k+1)}} := a_{s_i}$ all at once. For each $i$ from $1$ to $n$, find the first time that index $i$ becomes good. $^\dagger$ A permutation is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array) and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
For each index $i$ from $1$ to $n$, let $h_i$ denote the number of cyclic shifts needed to move $a_i$ to its correct spot. In other words, $h_i$ is the minimum value such that $(i + h_i - 1)\ \%\ n + 1 = a_i$. How can we get the answer from $h_i$?

For convenience, we will assume that the array is cyclic, so $a_j = a_{(j - 1)\ \%\ n + 1}$. Let $t_j$ denote the (unrolled) position where the value at index $j$ settles: $t_j = a_j$ if $a_j \ge j$, and $t_j = a_j + n$ otherwise. The answer for index $a_i$ is $h_i$ (defined in hint 1) minus the number of indices $j$ with $i < j < i + h_i$ whose interval $[j, t_j]$ is nested strictly inside $[i, i + h_i]$, i.e. $t_j < i + h_i$. This is because the subtracted value is exactly the number of positions that $a_i$ skips during the rotation because the index is already good.

To calculate the above value, it is convenient to define an array $b$ of size $2n$ where $b_i = a_i$ for all $i$ between $1$ and $n$, and $b_i = a_{i - n} + n$ for all $i$ between $n + 1$ and $2n$, to handle cyclicity. We loop from $i = 2n$ down to $i = 1$, and do a point increment at position $t_i$. Then, to get the answer for index $i$, we do a range sum query from $i + 1$ to $i + h_i - 1$.

Point increment and range sum query can be done using a binary indexed tree in $O(\log n)$ time per query/update. Hence, the problem can be solved in $O(n\log n)$ time.
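For checking the $O(n \log n)$ solution on small inputs, the process in the statement can also be simulated directly; this brute force is ours, not part of the editorial:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Directly simulate the process: each second, the values at the non-good
// indices are cyclically shifted right by one; record the first time each
// index becomes good (0 if it starts good). Assumes a is a permutation of
// 1..n; for small-input testing only.
vector<int> firstGoodTime(vector<int> a) {
    int n = a.size();
    vector<int> ans(n, 0);
    for (int t = 1;; t++) {
        vector<int> idx;
        for (int i = 0; i < n; i++)
            if (a[i] != i + 1) idx.push_back(i);
        if (idx.empty()) break;
        int k = idx.size();
        vector<int> shifted(k);
        for (int j = 0; j < k; j++) shifted[(j + 1) % k] = a[idx[j]];
        for (int j = 0; j < k; j++) {
            a[idx[j]] = shifted[j];
            if (a[idx[j]] == idx[j] + 1) ans[idx[j]] = t;
        }
    }
    return ans;
}
```

For the permutation $[2,3,1,5,4]$ from the statement, one full rotation makes indices $2$, $3$, $5$ good, and a second rotation of the remaining indices finishes the job.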
[ "data structures", "sortings" ]
2100
#include <bits/stdc++.h>
using namespace std;

#define REP(i, s, e) for (int i = (s); i < (e); i++)
#define RREP(i, s, e) for (int i = (s); i >= (e); i--)
#define ALL(_a) _a.begin(), _a.end()
#define SZ(_a) (int) _a.size()

const int INF = 1000000005;
const int MAXN = 1000005;

int t;
int n;
int p[MAXN];
int ans[MAXN];
int fw[MAXN * 2];

void incre(int i, int x) {
    for (; i <= 2 * n; i += i & -i) {
        fw[i] += x;
    }
}

int qsm(int i) {
    int res = 0;
    for (; i > 0; i -= i & -i) {
        res += fw[i];
    }
    return res;
}

inline int qsm(int s, int e) {
    return qsm(e) - qsm(s - 1);
}

int main() {
#ifndef DEBUG
    ios::sync_with_stdio(0), cin.tie(0);
#endif
    cin >> t;
    while (t--) {
        cin >> n;
        REP (i, 1, n + 1) { cin >> p[i]; }
        REP (i, 0, 2 * n + 5) { fw[i] = 0; }
        vector<pair<int, int>> rgs;
        REP (i, 1, n + 1) {
            if (i <= p[i]) {
                rgs.push_back({i, p[i]});
                rgs.push_back({i + n, p[i] + n});
            } else {
                rgs.push_back({i, p[i] + n});
            }
        }
        sort(ALL(rgs), greater<pair<int, int>>());
        for (auto [l, r] : rgs) {
            if (l <= n) {
                ans[p[l]] = r - l - qsm(l, r);
            }
            incre(r, 1);
        }
        REP (i, 1, n + 1) { cout << ans[i] << ' '; }
        cout << '\n';
    }
    return 0;
}
1896
F
Bracket Xoring
You are given a binary string $s$ of length $2n$ where each element is $\mathtt{0}$ or $\mathtt{1}$. You can do the following operation: - Choose a balanced bracket sequence$^\dagger$ $b$ of length $2n$. - For every index $i$ from $1$ to $2n$ in order, where $b_i$ is an open bracket, let $p_i$ denote the minimum index such that $b[i,p_i]$ is a balanced bracket sequence. Then, we perform a range toggle operation$^\ddagger$ from $i$ to $p_i$ on $s$. Note that since a balanced bracket sequence of length $2n$ will have $n$ open brackets, we will do $n$ range toggle operations on $s$. Your task is to find a sequence of no more than $10$ operations that changes all elements of $s$ to $\mathtt{0}$, or determine that it is impossible to do so. Note that you do \textbf{not} have to minimize the number of operations. Under the given constraints, it can be proven that if it is possible to change all elements of $s$ to $\mathtt{0}$, there exists a way that requires no more than $10$ operations. $^\dagger$ A sequence of brackets is called balanced if one can turn it into a valid math expression by adding characters $+$ and $1$. For example, sequences "(())()", "()", and "(()(()))" are balanced, while ")(", "(()", and "(()))(" are not. $^\ddagger$ If we perform a range toggle operation from $l$ to $r$ on a binary string $s$, then we toggle all values of $s_i$ such that $l \leq i \leq r$. If $s_i$ is toggled, we will set $s_i := \mathtt{0}$ if $s_i = \mathtt{1}$ or vice versa. For example, if $s=\mathtt{1000101}$ and we perform a range toggle operation from $3$ to $5$, $s$ will be changed to $s=\mathtt{1011001}$.
The operation is equivalent to toggling $s$ at every odd $i$ where $b_i$ is an open bracket and every even $i$ where $b_i$ is a close bracket. Suppose there are $x$ open brackets and $y$ close brackets between positions $1$ and $i$. Note that $x \ge y$ by definition of balanced bracket sequences.

- Case 1: $b_i$ is an open bracket. Position $i$ will be toggled exactly $x - y$ times, as $y$ of the open brackets will be matched before position $i$, and the remaining $x - y$ open brackets will only be matched after position $i$. This means that position $i$ will be toggled only if $x - y$ is odd, and hence, $x - y + 2y = x + y = i$ must be odd as well.
- Case 2: $b_i$ is a close bracket. Position $i$ will be toggled exactly $x - y + 1$ times, as $y - 1$ of the open brackets will be matched before $i$, $1$ of the open brackets will be matched with position $i$, and the remaining $x - y$ open brackets will be matched after position $i$. This means that position $i$ will be toggled only if $x - y$ is even, and hence, $x - y + 2y = x + y = i$ must be even as well.

Every operation will always toggle $s_1$ and $s_{2n}$. Furthermore, every operation will always toggle an even number of positions: if $s$ is toggled at an open bracket, $s$ will be toggled at its matching close bracket as well.

If $s_1 \neq s_{2n}$ or there is an odd number of $\texttt{1}$s in $s$, it is not possible to change all elements of $s$ to $\texttt{0}$. Otherwise, it is always possible. If it is possible to change all elements of $s$ to $\texttt{0}$, at most $3$ operations are needed.

From hint 3, we know the cases where it is impossible to change all elements of $s$ to $\texttt{0}$. We will now only consider the case where it is possible to change all elements of $s$ to $\texttt{0}$.
Using hint 1, we can easily check whether it is possible to change all elements of $s$ to $\texttt{0}$ using only one operation, by first constructing the bracket sequence and then checking whether the resulting bracket sequence is balanced. From now on, we will assume that it is not possible to change all elements of $s$ to $\texttt{0}$ using one operation.

Suppose $s_1 = s_{2n} = \texttt{1}$. We know from hint 2 that each operation will always toggle $s_1$ and $s_{2n}$, so since it is not possible to change all elements of $s$ to $\texttt{0}$ using one operation, we will need three operations. If we let the first operation be $b=\texttt{(()()}\ldots\texttt{()())}$, $s_1$ and $s_{2n}$ will be toggled while the remaining elements stay the same. Now, $s_1 = s_{2n} = \texttt{0}$, so if we can solve this case using only two operations, it means that we can solve the $s_1 = s_{2n} = \texttt{1}$ case using only three operations.

To solve the final case where $s_1 = s_{2n} = \texttt{0}$, we will look at the special balanced bracket sequence $b = \texttt{(()()}\ldots\texttt{()())}$. Notice that if we do an operation using this bracket sequence, only $s_1$ and $s_{2n}$ will be toggled. Suppose there exists an index $i$ between $2$ and $2n - 2$ where we want to toggle both $s_i$ and $s_{i + 1}$. We can take the special balanced bracket sequence $b = \texttt{(()()}\ldots\texttt{()())}$, then swap $b_i$ and $b_{i + 1}$. This will always result in a balanced bracket sequence that toggles $s_1$, $s_{2n}$, as well as the desired $s_i$ and $s_{i + 1}$. This allows us to change all elements of $s$ to $\texttt{0}$ in $2n$ moves, as we can scan from $i = 2$ to $i = 2n - 2$ and do an operation toggling $s_i$ and $s_{i+1}$ whenever $s_i = \texttt{1}$. Since there is an even number of $\texttt{1}$s in $s$ from hint 3, toggling adjacent positions will always change all elements of $s$ to $\texttt{0}$.
To reduce the number of operations from $2n$ to $2$, notice that a lot of the operations can be parallelized into a single operation. Let $A_0$ represent the set of even indices between $2$ and $2n - 2$ where we want to toggle $s_i$ and $s_{i + 1}$. Similarly, let $A_1$ represent the set of odd indices between $2$ and $2n - 2$ where we want to toggle $s_i$ and $s_{i + 1}$. In a single operation, we can take the special balanced bracket sequence $b = \texttt{(()()}\ldots\texttt{()())}$ and swap $b_i$ and $b_{i + 1}$ for all $i$ in the set $A_0$. Since $A_0$ only contains even indices, the swaps are non-intersecting, and hence, the resulting bracket sequence will still be balanced, and $s_1$, $s_{2n}$, as well as $s_i$ and $s_{i + 1}$ for all the desired even indices $i$, will be toggled. We use the same strategy with $A_1$, starting with the same special balanced bracket sequence and then swapping $b_i$ and $b_{i + 1}$ for all $i$ in the set $A_1$. Hence, after these two operations, all elements of $s$ will change to $\texttt{0}$.

We will demonstrate a way to use $2$ bracket sequences to solve any binary string whose first and last elements are $\texttt{0}$ and which has an even number of $\texttt{1}$s. Define the balance of an (incomplete) bracket sequence as the number of open brackets minus the number of close brackets. For example, ((() has balance $2$, (()()(( has balance $3$, and () has balance $0$. Using hint 1, we can see that the resulting binary string will contain $\texttt{0}$ at position $i$ iff the characters at position $i$ in the two bracket sequences are the same. Suppose the pair of balances of your current bracket sequences is $(a,b)$. You can change it to $(a\pm 1, b\pm 1)$. If both $\pm$ signs are the same, then the resulting binary string will contain $\texttt{0}$ at that position. Now, we will demonstrate a greedy algorithm.
$(0,0) \to (1,1) \to (0,2),(2,2) \to (1,3),(1,1) \to (0,2),(2,2) \to \ldots \to (1,1) \to (0,0)$

One can show by a simple parity argument that the second-to-last balance must necessarily be $(1,1)$, since the number of $\texttt{1}$s in the string is even.

Similar to solution 2, we will demonstrate a way to use $2$ bracket sequences to solve any binary string whose first and last elements are $\texttt{0}$ and which has an even number of $\texttt{1}$s. Using the same greedy argument as in solution 2 (or by guessing), we know that we can always use two bracket sequences where the number of open brackets minus the number of close brackets is always between $0$ and $3$ for all prefixes of the bracket sequence. For convenience, we will define "balance" as the number of open brackets minus the number of close brackets. Hence, we can do dynamic programming using the states $dp[i][balance1][balance2]$, which returns whether it is possible to create two bracket sequences $b1$ and $b2$ of length $i$ such that the balances of $b1$ and $b2$ are $balance1$ and $balance2$ respectively and $s[1, i]$ becomes all $\mathtt{0}$. The transition can be done by making sure that the balances stay between $0$ and $3$ and that $b1_i \neq b2_i$ if $s_i=\mathtt{1}$ and vice versa.
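A naive implementation of a single operation is handy for experimenting with the claims above, e.g. that the special sequence $\texttt{(()()}\ldots\texttt{()())}$ toggles only the two endpoints (the helper name is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Apply one operation from the statement: for every open bracket b[i],
// toggle s on the range [i, p_i], where p_i is its matching close bracket.
// O(n^2) naive version, for experimentation only.
string applyOp(string s, const string& b) {
    int n = b.size();
    vector<int> match(n, -1);
    stack<int> st;
    for (int i = 0; i < n; i++) {
        if (b[i] == '(') st.push(i);
        else { match[st.top()] = i; st.pop(); }
    }
    for (int i = 0; i < n; i++)
        if (b[i] == '(')
            for (int j = i; j <= match[i]; j++) s[j] ^= 1; // '0' <-> '1'
    return s;
}
```

For instance, the special sequence on $2n = 6$ toggles exactly the endpoints, while $\texttt{()()()}$ toggles every position, matching the odd-open/even-close characterization of hint 1.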
[ "constructive algorithms", "greedy", "implementation", "math" ]
2600
#include <bits/stdc++.h>
using namespace std;

void solve(int n, string s) {
    vector<int> a;
    for (char &i : s) {
        a.push_back(i & 1);
    }
    if (a.front() != a.back()) {
        cout << -1 << endl;
        return ;
    }
    if (count(a.begin(), a.end(), 1) % 2) {
        cout << -1 << endl;
        return ;
    }
    bool flipped = false;
    if (a.front() == 1 && a.back() == 1) {
        for (int &i : a) i ^= 1;
        flipped = true;
    }
    string l(2 * n, '-'), r(2 * n, '-');
    int cnt = 0;
    for (int i = 0; i < 2 * n; i++) {
        if (a[i]) {
            l[i] = (cnt) ? '(' : ')';
            r[i] = (cnt ^ 1) ? '(' : ')';
            cnt ^= 1;
        }
    }
    int tot = count(a.begin(), a.end(), 0) / 2;
    cnt = 0;
    for (int i = 0; i < 2 * n; i++) {
        if (!a[i]) {
            if (cnt < tot) l[i] = '(', r[i] = '(';
            else l[i] = ')', r[i] = ')';
            cnt++;
        }
    }
    if (flipped) {
        cout << 3 << endl;
        cout << l << endl;
        cout << r << endl;
        for (int i = 0; i < n; i++) cout << "()";
        cout << endl;
    } else {
        cout << 2 << endl;
        cout << l << endl;
        cout << r << endl;
    }
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        string s;
        cin >> s;
        solve(n, s);
    }
    return 0;
}
1896
G
Pepe Racing
This is an interactive problem. There are $n^2$ pepes labeled $1, 2, \ldots, n^2$ with \textbf{pairwise distinct} speeds. You would like to set up some races to find out the relative speed of these pepes. In one race, you can choose exactly $n$ distinct pepes and make them race against each other. After each race, you will only know the \textbf{fastest} pepe of these $n$ pepes. Can you order the $n^2-n+1$ fastest pepes in \textbf{at most} $2n^2 - 2n + 1$ races? Note that the slowest $n - 1$ pepes are indistinguishable from each other. Note that the interactor is \textbf{adaptive}. That is, the relative speeds of the pepes are not fixed in the beginning and may depend on your queries. But it is guaranteed that at any moment there is at least one initial configuration of pepes such that all the answers to the queries are consistent.
Find the fastest pepe in $n + 1$ queries. Divide the $n^2$ pepes into $n$ groups $G_1, G_2, \dots, G_n$. For each group $G_i$, use $1$ query to find the fastest pepe in the group; call it the head of $G_i$. Finally, use $1$ query to find the fastest pepe among all the heads.

After knowing the fastest pepe, find the second fastest pepe in $n + 1$ queries: just remove the fastest pepe and repeat the process from hint 1. One of the groups will have size $n - 1$, but we can "steal" one non-head pepe from another already-queried group. Note that this process uses a lot of repeated queries. Can we optimize it to $2$ queries? Assume that the fastest pepe is the head of $G_i$. After removing him, we can recalculate the head of $G_i$ using $1$ query, similar to hint 2. Then, use the second query on all the heads, similar to the last query of hint 1.

Solve the problem using $2n^2 - n + 1$ queries. Our algorithm has three phases:

Phase 1: Divide the $n^2$ pepes into $n$ groups $G_1, G_2, \dots, G_n$. For each group $G_i$, use $1$ query to find the fastest pepe in the group; call it the head of $G_i$.

Phase 2: Until there are $2n - 1$ pepes, repeat these two steps: use $1$ query to find the fastest pepe among the heads of all groups, then remove him; assuming that this pepe was the head of $G_i$, steal non-head pepes from other groups so that $|G_i| = n$, then use $1$ query to recalculate its head.

Phase 3: Until there are $n - 1$ pepes, repeatedly find the fastest pepe using $2$ queries (or $1$ if there are only $n$ pepes left), then remove him.

The total number of queries is $n + 2(n^2 - 2n + 1) + 2(n - 1) + 1 = 2n^2 - n + 1$.

Call a pepe slow if it does not belong to the fastest $n^2 - n + 1$ pepes. Note that there are $n - 1$ slow pepes, and we do not care about their relative speed. After each query, we know that the fastest pepe cannot be slow. We use the algorithm in hint 4 until there are $2n - 1$ pepes left. Since the heads of the $n$ groups cannot be slow, we are left with exactly $(2n - 1) - n = n - 1$ candidates for slow pepes. Once we determine the $n - 1$ slow pepes, we only need to find the ranking of the other $n$ pepes, which can be done using $n - 1$ queries. The total number of queries is $n + 2(n^2 - 2n + 1) + (n - 1) = 2n^2 - 2n + 1$.

Alternatively, we can decrease the number of queries based on the fact that we can omit one recalculation query when the size of a group decreases from $2$ to $1$. We modify the algorithm in hint 4 to maintain an invariant: $|G_i| - |G_j| \le 1$ for all $1 \le i < j \le n$. In other words, we want to keep the sizes of these groups as balanced as possible. To maintain this, whenever we have $|G_j| - |G_i| = 2$ after removing some pepe, we can transfer any non-head pepe from $G_j$ to $G_i$ to balance these groups out. Next, to recalculate the head of $G_i$, we will "borrow" instead of "steal" from other groups: if the fastest pepe is borrowed from $G_j$, then we transfer a random non-head pepe of $G_i$ back to $G_j$. This works since the head of $G_j$ is faster than the head of $G_i$, which in turn is faster than the random pepe. Finally, when there are $2n$ pepes left, all groups have size $2$, and we only need to use $1$ query for each pepe from then on. The total number of queries is $n + 2(n^2 - 2n) + (n + 1) = 2n^2 - 2n + 1$.
[ "constructive algorithms", "implementation", "interactive", "sortings" ]
3200
#include <bits/stdc++.h>
using namespace std;

#define int long long
#define ll long long
#define ii pair<ll,ll>
#define iii pair<ii,ll>
#define fi first
#define se second
#define debug(x) cout << #x << ": " << x << endl
#define pub push_back
#define pob pop_back
#define puf push_front
#define pof pop_front
#define lb lower_bound
#define ub upper_bound
#define rep(x,start,end) for(int x=(start)-((start)>(end));x!=(end)-((start)>(end));((start)<(end)?x++:x--))
#define all(x) (x).begin(),(x).end()
#define sz(x) (int)(x).size()

mt19937 rng(chrono::system_clock::now().time_since_epoch().count());

int n;
vector<int> v[25];

int ask(vector<int> v){
    cout<<"? ";
    for (auto it:v) cout<<it<<" ";
    cout<<endl;
    int res;
    cin>>res;
    return res;
}

vector<int> fix(vector<int> v){
    int t=ask(v);
    rep(x,0,n) if (v[x]==t) swap(v[0],v[x]);
    return v;
}

signed main(){
    ios::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
    cin.exceptions(ios::badbit | ios::failbit);
    int TC;
    cin>>TC;
    while (TC--){
        cin>>n;
        rep(x,0,n){
            v[x].clear();
            rep(y,1,n+1) v[x].pub(x*n+y);
        }
        rep(x,0,n) v[x]=fix(v[x]);
        vector<int> ans;
        rep(x,0,n*n-2*n+1){
            vector<int> t;
            rep(x,0,n) t.pub(v[x][0]);
            int best=ask(t);
            int idx=-1;
            rep(x,0,n) if (t[x]==best) idx=x;
            ans.pub(best);
            v[idx].erase(v[idx].begin());
            rep(x,0,n) if (x!=idx){
                while (sz(v[x])>1 && sz(v[idx])<n){
                    v[idx].pub(v[x].back());
                    v[x].pob();
                }
            }
            v[idx]=fix(v[idx]);
        }
        vector<int> a,b;
        rep(x,0,n){
            rep(y,0,sz(v[x])){
                if (y==0) a.pub(v[x][y]);
                else b.pub(v[x][y]);
            }
        }
        set<int> bad=set<int>(all(b));
        rep(x,0,n-1){
            a=fix(a);
            ans.pub(a[0]);
            a.erase(a.begin());
            a.pub(b.back());
            b.pob();
        }
        for (auto it:a) if (!bad.count(it)) ans.pub(it);
        cout<<"! ";
        for (auto it:ans) cout<<it<<" ";
        cout<<endl;
    }
}
1896
H2
Cyclic Hamming (Hard Version)
\textbf{This is the hard version of the problem. The only difference between the two versions is the constraint on $k$. You can make hacks only if all versions of the problem are solved.} In this statement, all strings are $0$-indexed. For two strings $a$, $b$ of the same length $p$, we define the following definitions: - The hamming distance between $a$ and $b$, denoted as $h(a, b)$, is defined as the number of positions $i$ such that $0 \le i < p$ and $a_i \ne b_i$. - $b$ is a cyclic shift of $a$ if there exists some $0 \leq k < p$ such that $b_{(i+k) \bmod p} = a_i$ for all $0 \le i < p$. Here $x \bmod y$ denotes the remainder from dividing $x$ by $y$. You are given two binary strings $s$ and $t$ of length $2^{k+1}$ each. Both strings may contain missing characters (denoted by the character '?'). Your task is to count the number of ways to replace the missing characters in both strings with the characters '0' or '1' such that: - Each string $s$ and $t$ contains exactly $2^k$ occurrences of each character '0' and '1' - $h(s, c) \ge 2^k$ for all strings $c$ that is a cyclic shift of $t$. As the result can be very large, you should print the value modulo $998\,244\,353$.
For any $c$ which is a cyclic shift of $t$, what will happen if $h(s,c)>2^k$? Try finding some useful relationship between the $1$-s of $s$ and the $1$-s of $c$. There are exactly $n/2$ positions $i$ such that $s[i]=c[i]=1$.

Think about polynomial multiplication. Consider $S\cdot T$, where $S=\sum s[i]x^i$ and $T=\sum t[2n-i-1]x^i$ (reversed coefficients). What are some properties that the coefficients of $S\cdot T$ tell us? Denote $A=S\cdot T$; we have $A[x^i]+A[x^{i+2n}]=n/2$. Try factoring $S\cdot T$ into some irreducible polynomials. Sort $0,1,\ldots,2n-1$ based on their bit-reversal values and build a binary tree on top of it. What condition should be satisfied on each level of the tree?

Consider the polynomial product $A=S\cdot T$, where $S=\sum s[i]x^i$ and $T=\sum t[2n-i-1]x^i$ (reversed coefficients).

Claim 1. For all $0\le i<2n$, we have $A[x^i]+A[x^{i+2n}]=n/2$, where $A[x^k]$ denotes the coefficient of $x^k$ in $A$.

Claim 2. We can express $A=(x+1)(x^2+1)(x^4+1)\ldots (x^n+1)(n/2+C(x-1))$, where $C$ is some polynomial with degree not greater than $2n-2$.

$\begin{align} &(x+1)(x^2+1)(x^4+1)\ldots (x^n+1)(n/2+C(x-1))\\ =\ &C(x^{2n}-1)+n/2\cdot (x^0+x^1+x^2+\ldots+x^{2n-1}) \end{align}$

It is easy to see that this satisfies claim 1.

Claim 3. Since $x^{2^p}+1$ is a cyclotomic polynomial and hence irreducible for all $p$, each factor among $x+1,x^2+1,\ldots,x^n+1$ must divide at least one of $S$ and $T$. Under the constraint that each of $s$ and $t$ must have exactly $n$ $1$-s, this condition is also sufficient. Indeed, let $A=(x+1)(x^2+1)(x^4+1)\ldots (x^n+1)\cdot D$. We have $A(1)=n^2$, hence $D(1)=n/2$, which means that $D$ has the form $n/2+C(x-1)$. Therefore $A$ satisfies claim 2.

Recall $n=2^k$. Define $f_s(mask)$ ($0\le mask<2n$) as the number of strings $s$ such that for each $p$ where the $p$-th bit of $mask$ is on, $S$ is divisible by $x^{2^p}+1$. Define $f_t$ similarly for $T$. Define $f^{\prime}_t$ such that $f^{\prime}_t(mask)=f_t(mask\oplus (2n-1))$, where $\oplus$ denotes bitwise XOR. The answer to the problem is $\displaystyle\sum_{mask} f_s(mask)\cdot \mu(f^{\prime}_t(mask))$, where $\mu$ denotes the Möbius transform. The reason we need the Möbius transform is that $mask_s$ does not represent all divisors, just a subset of them.

Let us rearrange the elements of $s$ into a new array $s^{\prime}$ so that $s^{\prime}[\text{reverse\_bit}(i)]=s[i]$ (for example, with $n=8$, the new order will be $[0,8,4,12,2,10,6,14,1,9,5,13,3,11,7,15]$). Construct a perfect binary tree based on the array $s^{\prime}$. This binary tree will have $k+2$ levels from $0$ to $k+1$, starting at the root.

Claim 4. $S$ is divisible by $x^{2^p}+1$ if and only if, for every tree node at the $p$-th level, the numbers of $1$-s of $s^{\prime}$ under both children are equal (the proof is left as an exercise for readers). Group the positions by the remainder when divided by $2^p$, find the necessary condition for each group, and consider its position on the tree.

Consider the following dynamic programming: $dp_s[i][mask][num]$, where the levels in $mask$ satisfy claim 4 and $num$ is the number of $1$-s under the $i$-th node of the tree. Denote $l$ as the level of the $i$-th node; the transitions are:

$dp_s[i][mask][num_1+num_2]\text{ += }dp_s[2i][mask][num_1]\cdot dp_s[2i+1][mask][num_2]$

$dp_s[i][mask+2^l][2\cdot num]\text{ += }dp_s[2i][mask][num]\cdot dp_s[2i+1][mask][num]$

The above dp takes $\mathcal{O}(n^3)$ time, which is sufficient to solve the easy version. To solve the hard version, we optimize the above transitions. The first transition is actually the convolution of $dp_s[2i][mask]$ and $dp_s[2i+1][mask]$; we can use FFT to speed it up. In the second transition, because $num$ is multiplied by $2$ every time, we can omit the factor and just make the transition to $dp_s[i][mask+2^l][num]$, halving the length of $dp_s[i][mask+2^l]$. By careful analysis, we can show that the time complexity is now $\mathcal{O}(3^k\cdot k)$ (recall $n=2^k$) with a fair constant factor, which solves the hard version.
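Claim 3 turns the whole problem into divisibility tests against the binomials $x^{2^p}+1$. Over the integers such a test is a simple signed fold, since $x^i \equiv (-1)^{\lfloor i/m\rfloor} x^{i \bmod m} \pmod{x^m+1}$; a small sketch (the helper name is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Test whether the polynomial sum_i s[i] * x^i is divisible by x^m + 1
// (m a power of two), by folding coefficients with alternating signs:
// x^i == (-1)^(i / m) * x^(i mod m)  (mod x^m + 1).
bool divisibleByXm1(const vector<int>& s, int m) {
    vector<long long> rem(m, 0);
    for (int i = 0; i < (int)s.size(); i++)
        rem[i % m] += ((i / m) % 2 ? -1 : 1) * (long long)s[i];
    for (long long c : rem)
        if (c != 0) return false;
    return true;
}
```

For example, the coefficient vector of $(x+1)(x^2+1) = x^3+x^2+x+1$ is divisible by both $x+1$ and $x^2+1$ but not by $x^4+1$.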
[ "brute force", "dp", "fft", "math", "number theory" ]
3500
null
1898
A
Milica and String
Milica has a string $s$ of length $n$, consisting only of characters A and B. She wants to modify $s$ so it contains \textbf{exactly} $k$ instances of B. In one operation, she can do the following: - Select an integer $i$ ($1 \leq i \leq n$) and a character $c$ ($c$ is equal to either A or B). - Then, replace \textbf{each} of the first $i$ characters of string $s$ (that is, characters $s_1, s_2, \ldots, s_i$) with $c$. Milica does not want to perform too many operations in order not to waste too much time on them. She asks you to find the minimum number of operations required to modify $s$ so it contains exactly $k$ instances of B. She also wants you to find these operations (that is, integer $i$ and character $c$ selected in each operation).
In one move, Milica can replace the whole string with $\texttt{AA} \ldots \texttt{A}$. In her second move, she can replace a prefix of length $k$ with $\texttt{BB} \ldots \texttt{B}$. The process takes no more than $2$ operations. The question remains - when can we do better? In the hint section, we showed that the minimum number of operations is $0$, $1$, or $2$. We have $3$ cases: No operations are needed if $s$ already contains $k$ characters $\texttt{B}$. Else, we can use brute force to check if changing some prefix leads to $s$ having $k$ $\texttt{B}$s. If we find such a prefix, we print it as the answer, and use only one operation. There are $O(n)$ possibilities, and checking them takes $O(n)$ or $O(n^2)$ time in total. Else, we use the two operations described in the hint section. A fun fact is that only one operation is enough. Can you prove it?
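The case analysis above can be sketched directly (a minimal Python sketch; the function name `min_ops` and the returned `(count, operations)` format are illustrative, not part of the original solution):

```python
def min_ops(s: str, k: int):
    """Minimum prefix operations so that s contains exactly k 'B's.

    Mirrors the editorial's case analysis: 0 operations if the count is
    already k; otherwise brute-force a single prefix replacement; and
    fall back to the two operations from the hint section.
    """
    n = len(s)
    if s.count('B') == k:
        return 0, []
    for c in 'AB':                       # try one operation: prefix of length i -> c
        for i in range(1, n + 1):
            if (c * i + s[i:]).count('B') == k:
                return 1, [(i, c)]
    return 2, [(n, 'A'), (k, 'B')]       # clear everything, then paint k 'B's
```

Per the fun fact at the end of the editorial, the two-operation fallback is never actually reached.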
[ "brute force", "implementation", "strings" ]
800
null
1898
B
Milena and Admirer
Milena has received an array of integers $a_1, a_2, \ldots, a_n$ of length $n$ from a secret admirer. She thinks that making it non-decreasing should help her identify the secret admirer. She can use the following operation to make this array non-decreasing: - Select an element $a_i$ of array $a$ and an integer $x$ such that $1 \le x < a_i$. Then, replace $a_i$ by two elements $x$ and $a_i - x$ in array $a$. New elements ($x$ and $a_i - x$) are placed in the array $a$ in this order instead of $a_i$. More formally, let $a_1, a_2, \ldots, a_i, \ldots, a_k$ be an array $a$ before the operation. After the operation, it becomes equal to $a_1, a_2, \ldots, a_{i-1}, x, a_i - x, a_{i+1}, \ldots, a_k$. Note that the length of $a$ increases by $1$ on each operation. Milena can perform this operation multiple times (possibly zero). She wants you to determine the minimum number of times she should perform this operation to make array $a$ non-decreasing. An array $x_1, x_2, \ldots, x_k$ of length $k$ is called non-decreasing if $x_i \le x_{i+1}$ for all $1 \le i < k$.
Try a greedy approach. That is, split each $a_i$ only as many times as necessary (and try to create almost equal parts). We will iterate over the array from right to left. Then, as described in the hint section, we will split the current $a_i$ and create almost equal parts. For example, $5$ split into three parts forms the subarray $[1,2,2]$. Splitting $8$ into four parts forms the subarray $[2,2,2,2]$. Notice that the parts must be placed in non-decreasing order. Because we want to perform as few splits as possible, the rightmost part should be as large as possible (as long as it is less than or equal to the leftmost part of the splitting of $a_{i+1}$, if it exists). When we iterate over the array, it is enough to set the current $a_i$ to the leftmost part of the splitting (the smallest resulting value). It will help to calculate the optimal splitting of $a_{i-1}$. For the current $a_i$, we want to find the least $k$ such that we can split $a_{i-1}$ into $k$ parts so the largest part is less than or equal to $a_i$. More formally, we want $\lceil \frac{a_{i-1}}{k} \rceil \leq a_i$ to hold. Afterwards, we set $a_{i-1}$ to $\lfloor \frac{a_{i-1}}{k} \rfloor$ and continue with our algorithm. The simplest way to find the desired $k$ is to apply the following formula: $k=\lceil \frac{a_{i-1}}{a_i} \rceil$ The answer is the sum of $k-1$ over all performed splittings. There are $q \leq 2\cdot 10^5$ queries: given an index $i$ and an integer $x>1$, set $a_i := \lceil \frac{a_i}{x} \rceil$. Modify the array and print the answer to the original problem. Please note that the queries stack.
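A minimal sketch of this right-to-left greedy (the helper name `min_splits` is mine, not from the editorial):

```python
def min_splits(a):
    """Minimum split operations to make the array non-decreasing.

    Scan right to left: split a[i-1] into k = ceil(a[i-1] / a[i])
    almost-equal parts, so the largest part fits under a[i]; the smallest
    part, floor(a[i-1] / k), becomes the bound for the element to the left.
    """
    a = list(a)
    ops = 0
    for i in range(len(a) - 1, 0, -1):
        k = -(-a[i - 1] // a[i])    # ceil(a[i-1] / a[i]) without floats
        ops += k - 1                # splitting into k parts costs k - 1 operations
        a[i - 1] //= k              # smallest part of the almost-equal split
    return ops
```

For example, on $[3, 6, 1]$ the $6$ must become six $1$-s (five operations) and then the $3$ must become three $1$-s (two operations), for a total of seven.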
[ "greedy", "math" ]
1,500
null
1898
C
Colorful Grid
Elena has a grid formed by $n$ horizontal lines and $m$ vertical lines. The horizontal lines are numbered by integers from $1$ to $n$ from top to bottom. The vertical lines are numbered by integers from $1$ to $m$ from left to right. For each $x$ and $y$ ($1 \leq x \leq n$, $1 \leq y \leq m$), the notation $(x, y)$ denotes the point at the intersection of the $x$-th horizontal line and $y$-th vertical line. Two points $(x_1,y_1)$ and $(x_2,y_2)$ are adjacent if and only if $|x_1-x_2| + |y_1-y_2| = 1$. \begin{center} {\small The grid formed by $n=4$ horizontal lines and $m=5$ vertical lines.} \end{center} Elena calls a sequence of points $p_1, p_2, \ldots, p_g$ of length $g$ a walk if and only if all the following conditions hold: - The first point $p_1$ in this sequence is $(1, 1)$. - The last point $p_g$ in this sequence is $(n, m)$. - For each $1 \le i < g$, the points $p_i$ and $p_{i+1}$ are adjacent. Note that the walk may contain the same point more than once. In particular, it may contain point $(1, 1)$ or $(n, m)$ multiple times. There are $n(m-1)+(n-1)m$ segments connecting the adjacent points in Elena's grid. Elena wants to color each of these segments in blue or red color so that there exists a walk $p_1, p_2, \ldots, p_{k+1}$ of length $k+1$ such that - out of $k$ segments connecting two consecutive points in this walk, no two consecutive segments have the same color (in other words, for each $1 \le i < k$, the color of the segment between points $p_i$ and $p_{i+1}$ differs from the color of the segment between points $p_{i+1}$ and $p_{i+2}$). Please find any such coloring or report that there is no such coloring.
The solution does not exist only for "small" $k$ or when $n+m-k$ is an odd integer. Try to find a construction otherwise. Read the hint for the condition of the solution's existence. We present a single construction that solves the problem for each valid $k$. One can verify that the pattern holds in general. How would you write a checker? That is, how would you check that a valid walk of length $k+1$ exists in the given grid (with all the edges colored beforehand)?
[ "constructive algorithms" ]
1,700
null
1898
D
Absolute Beauty
Kirill has two integer arrays $a_1,a_2,\ldots,a_n$ and $b_1,b_2,\ldots,b_n$ of length $n$. He defines the absolute beauty of the array $b$ as $$\sum_{i=1}^{n} |a_i - b_i|.$$ Here, $|x|$ denotes the absolute value of $x$. Kirill can perform the following operation \textbf{at most once}: - select two indices $i$ and $j$ ($1 \leq i < j \leq n$) and swap the values of $b_i$ and $b_j$. Help him find the maximum possible absolute beauty of the array $b$ after performing \textbf{at most one} swap.
If $a_i > b_i$, swap them. Imagine the pairs $(a_i,b_i)$ as intervals. How can we visualize the problem? The pair $(a_i,b_i)$ represents some interval, and $|a_i - b_i|$ is its length. Let us try to maximize the sum of the intervals' lengths. We present three cases of what a swap does to two arbitrary intervals. Notice how the sum of the lengths increases only in the first case. We want the distance between the first interval's right endpoint and the second interval's left endpoint to be as large as possible. So we choose integers $i$ and $j$ such that $i \neq j$ and $a_j - b_i$ is maximized. We add twice the value, if it is positive, to the original absolute beauty. If the value is $0$ or negative, we simply do nothing. To quickly find the maximum of $a_j - b_i$ over all $i$ and $j$, we find the maximum of $a_1,a_2,\ldots a_n$ and the minimum of $b_1,b_2, \ldots b_n$. Subtracting the two extremum values produces the desired result. How to handle point updates? (change a single value in either $a$ or $b$, and print the answer after each query.)
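The final formula is short enough to state as code (a sketch; the function name and pair normalization are illustrative):

```python
def max_beauty(a, b):
    """Maximum sum of |a_i - b_i| after at most one swap inside b.

    Normalize each pair into an interval [lo_i, hi_i] (swapping within a
    pair does not change the beauty), then add twice max(lo) - min(hi)
    whenever that gap is positive.
    """
    lo = [min(x, y) for x, y in zip(a, b)]
    hi = [max(x, y) for x, y in zip(a, b)]
    base = sum(h - l for h, l in zip(hi, lo))
    gain = max(lo) - min(hi)      # best a_j - b_i over normalized pairs
    return base + 2 * gain if gain > 0 else base
```

For instance, with $a=[1,10]$ and $b=[2,11]$, swapping $b_1$ and $b_2$ turns a beauty of $2$ into $|1-11|+|10-2|=18$, matching the formula.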
[ "greedy", "math" ]
1,900
null
1898
E
Sofia and Strings
Sofia has a string $s$ of length $n$, consisting only of lowercase English letters. She can perform operations of the following types with this string. - Select an index $1 \le i \le |s|$ and remove the character $s_i$ from the string. - Select a pair of indices $(l, r)$ ($1 \le l \le r \le |s|$) and sort the substring $s_{l} s_{l+1} \ldots s_r$ in alphabetical order. Here, $|s|$ denotes the current length of $s$. In particular, $|s| = n$ before the first operation. For example, if $s = \mathtt{sofia}$, then performing the operation of the first type with $i=4$ results in $s$ becoming $\mathtt{sofa}$, and performing the operation of the second type with $(l, r) = (2, 4)$ after that results in $s$ becoming $\mathtt{safo}$. Sofia wants to obtain the string $t$ of length $m$ after performing zero or more operations on string $s$ as described above. Please determine whether it is possible or not.
Notice how sorting only the substrings of length $2$ is enough. Try a greedy approach. We sort only the substrings of length $2$. We can swap two adjacent characters if the first is greater than or equal to the second. Let us fix some character $s_i$ and presume we want to change its position to $j$. We have to perform the described swaps if they are possible. More formally: if $j<i$, then every character in the segment $s_j s_{j+1} \ldots s_{i-1}$ must be greater than or equal to $s_i$; if $i<j$, then every character in the segment $s_{i+1} s_{i+2} \ldots s_j$ must be smaller than or equal to $s_i$. We want to reorder the string $s$ and get the string $s'$. Then, we check if we can delete some characters in $s'$ to achieve $t$. In other words, we want $t$ to be a subsequence of $s'$. A general algorithm that checks if the string $a$ is a subsequence of the string $b$ is as follows. We iterate through $a$, and for each character, we find its first next appearance in $b$. If such a character does not exist, we conclude that $a$ is not a subsequence of $b$. If we complete the iteration gracefully, then $a$ is a subsequence of $b$. We will try to check if $t$ is a subsequence of $s$, but we allow ourselves to modify $s$ along the way. We maintain $26$ queues for positions of each lowercase English letter in the string $s$. We iterate through the string $t$, and for every character $t_i$, we try to move the first available equivalent character in $s$ to position $i$. In other words, at every moment, the prefix of string $s$ is equal to the prefix of string $t$ (if possible). For the current character $t_i$ and the corresponding $s_j$, prefixes $t_1t_2\dots t_{i-1}$ and $s_1s_2\dots s_{i-1}$ are the same, which means that $j\geq i$. To move $s_j$ to position $i$, we need to delete all characters between $s_i$ and $s_j$ that are smaller than $s_j$. 
We will delete them and all characters from the current prefix $s_1s_2\dots s_{i-1}$ from the queues because they are no longer candidates for $s_j$. By doing so, $s_j$ will be the first character in the corresponding queue. If at some moment in our greedy algorithm, the queue we are looking for becomes empty, then the answer is "NO". Otherwise, we will make the prefix $s_1s_2\dots s_m$ equal to $t$ and delete the remaining characters from $s$. Why is this greedy approach optimal? Let's suppose for some character $t_{i_1}$ we chose $s_{j_1}$ and for $t_{i_2}$ we chose $s_{j_2}$, such that $i_1< i_2$, $j_1>j_2$ and $t_{i_1}=t_{i_2}=s_{j_1}=s_{j_2}$. We need to prove that if we can move $s_{j_1}$ to position $i_1$ and $s_{j_2}$ to position $i_2$, then we can move $s_{j_2}$ to $i_1$ and $s_{j_1}$ to $i_2$. At the moment when we chose $s_{j_1}$, the prefixes $t_1t_2\dots t_{i_1-1}$ and $s_1s_2\dots s_{i_1-1}$ are the same, so $j_1\geq i_1$. Similarly, $j_2\geq i_2$, which means the only possibility is $i_1<i_2\leq j_2<j_1$. If we can move $s_{j_1}$ to position $i_1$, then we can also move $s_{j_2}$ to $i_1$ because $s_{j_1}=s_{j_2}$ and $j_2<j_1$. Also, if we can move $s_{j_1}$ to $i_1$, then we can move $s_{j_1}$ to $i_2$ because $i_1<i_2$, from which it follows that we can move $s_{j_2}$ to $i_2$, because $s_{j_2}=s_{j_1}$ and $j_2<j_1$. The overall complexity is $O(\alpha \cdot (n+m))$, where $\alpha$ is the alphabet size ($\alpha=26$ in this problem). Solve the problem when the size of the alphabet is arbitrary (up to $n$).
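A compact sketch of the queue-based greedy (the function name is mine; it assumes lowercase English letters as in the problem):

```python
from collections import deque

def can_obtain(s: str, t: str) -> bool:
    """Can t be obtained from s by deletions and sorting length-2 substrings?

    Keeps 26 queues of positions. For each t_i, take the earliest remaining
    copy in s, at position j; every strictly smaller character left of j
    would block the move, so it must be deleted - drop those positions
    from their queues.
    """
    pos = [deque() for _ in range(26)]
    for j, ch in enumerate(s):
        pos[ord(ch) - 97].append(j)
    for ch in t:
        c = ord(ch) - 97
        if not pos[c]:
            return False
        j = pos[c].popleft()
        for d in range(c):                 # strictly smaller letters
            while pos[d] and pos[d][0] < j:
                pos[d].popleft()           # these characters are deleted from s
    return True
```

Characters greater than or equal to $t_i$ standing before position $j$ stay in their queues, since $s_j$ can be swapped left past them.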
[ "data structures", "greedy", "sortings", "strings", "two pointers" ]
2,200
null
1898
F
Vova Escapes the Matrix
Following a world tour, Vova got himself trapped inside an $n \times m$ matrix. Rows of this matrix are numbered by integers from $1$ to $n$ from top to bottom, and the columns are numbered by integers from $1$ to $m$ from left to right. The cell $(i, j)$ is the cell on the intersection of row $i$ and column $j$ for $1 \leq i \leq n$ and $1 \leq j \leq m$. Some cells of this matrix are blocked by obstacles, while all other cells are empty. Vova occupies one of the empty cells. It is guaranteed that cells $(1, 1)$, $(1, m)$, $(n, 1)$, $(n, m)$ (that is, corners of the matrix) are blocked. Vova can move from one empty cell to another empty cell if they share a side. Vova can escape the matrix from any empty cell on the boundary of the matrix; these cells are called exits. Vova defines the type of the matrix based on the number of exits he can use to escape the matrix: - The $1$-st type: matrices with no exits he can use to escape. - The $2$-nd type: matrices with exactly one exit he can use to escape. - The $3$-rd type: matrices with multiple (two or more) exits he can use to escape. Before Vova starts moving, Misha can create more obstacles to block more cells. However, he cannot change the type of the matrix. What is the maximum number of cells Misha can block, so that the type of the matrix remains the same? Misha cannot block the cell Vova is currently standing on.
To solve the problem for matrices of $3$-rd type, find the shortest path to the $2$ closest exits with a modification of BFS. Block all cells not belonging to the path with obstacles. For a matrix of type $1$, Misha can block all empty cells (except the one Vova stands on). For a matrix of type $2$, Misha finds the shortest path to some exit with a single BFS and then blocks every other cell. Matrices of type $3$ are more complicated. We want to find two shortest paths to the two closest exits and block the remaining empty cells. But, notice how the paths will likely share their beginnings. We do not have to count those cells twice. Let's take a look at the junction where the two paths merge. If we first fix the junction, finding the shortest path to Vova can be done by running a single BFS and precalculating the shortest distances from each cell to Vova. Finding the shortest path from the junction to the two closest exits can also be done with BFS and precalculation. We modify the BFS, making it multi-source, with a source in each exit. Also, we will allow each cell to be visited twice (but by different exits). We will need to maintain the following data for each cell: how many times it was visited; the last exit/source that visited it; the sum of the distances from all exits/sources that visited the cell so far. Running the BFS with proper implementation produces the answer. When everything said is precalculated, we fix the junction in $O(nm)$ ways (each empty cell can be a valid junction), and then calculate the shortest path from Vova through the junction to the two closest exits in $O(1)$ per junction. Total complexity is $O(nm)$. Solve the problem with LCA, or report that such a solution does not exist.
[ "brute force", "dfs and similar", "divide and conquer", "shortest paths" ]
2,600
null
1899
A
Game with Integers
Vanya and Vova are playing a game. Players are given an integer $n$. On their turn, the player can add $1$ to the current integer or subtract $1$. The players take turns; Vanya starts. If \textbf{after} Vanya's move the integer is divisible by $3$, then he wins. If $10$ moves have passed and Vanya has not won, then Vova wins. Write a program that, based on the integer $n$, determines who will win if both players play optimally.
Consider the remainder from dividing $n$ by $3$ before the first move. If it is equal to $1$ or $2$, then Vanya can make the number $n$ divisible by $3$ after the first move, i.e. he wins. Let the remainder be $0$; then Vanya must change the number, after which it will not be divisible by $3$. Then Vova can perform the opposite operation and make it divisible by $3$ again before Vanya's next move. This will go on until the $10$ moves run out, so Vova wins.
[ "games", "math", "number theory" ]
800
#include <bits/stdc++.h> using namespace std; void solve() { int n; cin >> n; if (n % 3) { cout << "First\n"; } else { cout << "Second\n"; } } int main() { int t; cin >> t; while (t--) { solve(); } }
1899
B
250 Thousand Tons of TNT
Alex is participating in the filming of another video of BrMeast, and BrMeast asked Alex to prepare 250 thousand tons of TNT, but Alex didn't hear him well, so he prepared $n$ boxes and arranged them in a row waiting for trucks. The $i$-th box from the left weighs $a_i$ tons. All trucks that Alex is going to use hold the same number of boxes, denoted by $k$. Loading happens the following way: - The first $k$ boxes go to the first truck, - The second $k$ boxes go to the second truck, - $\dotsb$ - The last $k$ boxes go to the $\frac{n}{k}$-th truck. Once loading is completed, each truck must have \textbf{exactly} $k$ boxes. In other words, if at some point it is not possible to load exactly $k$ boxes into the truck, then the loading option with that $k$ is not possible. Alex hates justice, so he wants the maximum absolute difference between the total weights of two trucks to be as great as possible. If there is only one truck, this value is $0$. Alex has quite a lot of connections, so for every $1 \leq k \leq n$, he can find a company such that each of its trucks can hold exactly $k$ boxes. Print the maximum absolute difference between the total weights of any two trucks.
Solution #1: Since $k$ is a divisor of $n$, there are $O(\sqrt[3]{n})$ such $k$. We can enumerate all such $k$, calculate the required value in $O(n)$ for each, and take the maximum over all of them. Total complexity - $O(n \cdot \sqrt[3]{n})$. Solution #2: Without using the fact that $k$ is a divisor of $n$, we can simply loop over $k$ and then calculate the values using prefix sums, checking at the end that there are exactly $k$ elements in each segment. Such a solution works in $O(\frac{n}{1} + \frac{n}{2} + \frac{n}{3} + \cdots + \frac{n}{n}) = O(n \log n)$.
[ "brute force", "implementation", "number theory" ]
1,100
#include<bits/stdc++.h> using namespace std; using ll = long long; #define all(x) x.begin(), x.end() void solve() { int n; cin >> n; vector<int> a(n); for (int i = 0; i < n; ++i) cin >> a[i]; ll ans = -1; for (int d = 1; d <= n; ++d) { if (n % d == 0) { ll mx = -1e18, mn = 1e18; for (int i = 0; i < n; i += d) { ll sm = 0; for (int j = i; j < i + d; ++j) { sm += a[j]; } mx = max(mx, sm); mn = min(mn, sm); } ans = max(ans, mx - mn); } } cout << ans << '\n'; } int32_t main() { int t; cin >> t; while (t--) solve(); }
1899
C
Yarik and Array
A subarray is a continuous part of array. Yarik recently found an array $a$ of $n$ elements and became very interested in finding the maximum sum of a \textbf{non empty} subarray. However, Yarik doesn't like consecutive integers with the same parity, so the subarray he chooses must have alternating parities for adjacent elements. For example, $[1, 2, 3]$ is acceptable, but $[1, 2, 4]$ is not, as $2$ and $4$ are both even and adjacent. You need to help Yarik by finding the maximum sum of such a subarray.
There are "bad" positions in the array, i.e., those where two adjacent numbers have the same parity. No valid subarray can contain such a position; in other words, we need to solve the problem of finding a subsegment with maximal sum on each of several non-intersecting subsegments of the array, whose boundaries lie between neighboring elements of the same parity. The problem of finding a subsegment with maximal sum can be solved using the classical algorithm that keeps the minimal prefix sum seen so far. The whole problem can be solved in a single pass over the array by simply resetting the kept values whenever we reach a bad position. Total complexity - $O(n)$.
[ "dp", "greedy", "two pointers" ]
1,100
#include <iostream> #include <vector> #include <algorithm> using namespace std; void solve() { int n; cin >> n; vector<int> a(n); for (int i = 0; i < n; ++i) { cin >> a[i]; } int ans = a[0]; int mn = min(0, a[0]), sum = a[0]; for (int i = 1; i < n; ++i) { if (abs(a[i] % 2) == abs(a[i - 1] % 2)) { mn = 0; sum = 0; } sum += a[i]; ans = max(ans, sum - mn); mn = min(mn, sum); } cout << ans << endl; } int main() { int tc = 1; cin >> tc; for (int t = 1; t <= tc; t++) { solve(); } }
1899
D
Yarik and Musical Notes
Yarik is a big fan of many kinds of music. But Yarik loves not only listening to music but also writing it. He likes electronic music most of all, so he has created his own system of music notes, which, in his opinion, is best for it. Since Yarik also likes informatics, in his system notes are denoted by integers of $2^k$, where $k \ge 1$ — a positive integer. But, as you know, you can't use just notes to write music, so Yarik uses combinations of two notes. The combination of two notes $(a, b)$, where $a = 2^k$ and $b = 2^l$, he denotes by the integer $a^b$. For example, if $a = 8 = 2^3$, $b = 4 = 2^2$, then the combination $(a, b)$ is denoted by the integer $a^b = 8^4 = 4096$. Note that different combinations can have the same notation, e.g., the combination $(64, 2)$ is also denoted by the integer $4096 = 64^2$. Yarik has already chosen $n$ notes that he wants to use in his new melody. However, since their integers can be very large, he has written them down as an array $a$ of length $n$, then the note $i$ is $b_i = 2^{a_i}$. The integers in array $a$ can be repeated. The melody will consist of several combinations of two notes. Yarik was wondering how many pairs of notes $b_i, b_j$ $(i < j)$ exist such that the combination $(b_i, b_j)$ is equal to the combination $(b_j, b_i)$. In other words, he wants to count the number of pairs $(i, j)$ $(i < j)$ such that $b_i^{b_j} = b_j^{b_i}$. Help him find the number of such pairs.
The problem requires counting the number of pairs of indices $(i, j)$ ($i < j$) such that $(2^a)^{(2^b)} = (2^b)^{(2^a)}$, where $a = a_i, b = a_j$. Obviously, when $a = b$ this equality is satisfied. Let $a \neq b$, then rewrite the equality: $(2^a)^{(2^b)} = (2^b)^{(2^a)} \Leftrightarrow 2^{(a \cdot 2^b)} = 2^{(b \cdot 2^a)} \Leftrightarrow a \cdot 2^b = b \cdot 2^a \Leftrightarrow \frac{a}{b} = \frac{2^a}{2^b}$. Obviously, the ratio of $a$ and $b$ must be a power of two, otherwise the equality cannot be satisfied, since the right-hand side is a ratio of powers of two. Without loss of generality, suppose that $b = a \cdot 2^k$ ($k > 0$), then the equation takes the form $\frac{a}{a \cdot 2^k} = \frac{2^a}{2^{a \cdot 2^k}} \Leftrightarrow \frac{1}{2^k} = \frac{1}{2^{(2^k - 1)a}} \Leftrightarrow 2^k = 2^{(2^k - 1)a}$. If $k = 1$, then $a = 1$, $b = 2$. If $k > 1$, then $2^k - 1 > k$, and so the equality cannot be satisfied. Thus, the only possible cases where the equality is satisfied are $a_i = a_j$ or $a_i = 1, a_j = 2$ (and vice versa). The number of such pairs can be counted in $O(n \log n)$.
[ "hashing", "math", "number theory" ]
1,300
#include <bits/stdc++.h> using namespace std; using ll = long long; void solve() { int n; cin >> n; vector<int> a(n); for (int& x : a) cin >> x; ll ans = 0; map<int, int> cnt; for (int i = 0; i < n; i++) { ans += cnt[a[i]]; if (a[i] == 1) ans += cnt[2]; else if (a[i] == 2) ans += cnt[1]; cnt[a[i]]++; } cout << ans << "\n"; } signed main() { int t; cin >> t; while (t--) solve(); }
1899
E
Queue Sort
Vlad found an array $a$ of $n$ integers and decided to sort it in non-decreasing order. To do this, Vlad can apply the following operation any number of times: - Extract the first element of the array and insert it at the end; - Swap \textbf{that} element with the previous one until it becomes the first or until it becomes \textbf{strictly} greater than the previous one. Note that both actions are part of the operation, and for one operation, you \textbf{must} apply both actions. For example, if you apply the operation to the array [$4, 3, 1, 2, 6, 4$], it will change as follows: - [$\textcolor{red}{4}, 3, 1, 2, 6, 4$]; - [$3, 1, 2, 6, 4, \textcolor{red}{4}$]; - [$3, 1, 2, 6, \textcolor{red}{4}, 4$]; - [$3, 1, 2, \textcolor{red}{4}, 6, 4$]. Vlad doesn't have time to perform all the operations, so he asks you to determine the minimum number of operations required to sort the array or report that it is impossible.
Consider the position of the first minimum in the array, let it be equal to $k$. All elements standing on positions smaller than $k$ are strictly greater, so we must apply operations to them, because otherwise the array will not be sorted. Suppose we have applied operations to all such elements, they have taken some positions after $k$ (since they are strictly greater than the minimum), i.e. now the minimum element that moved from position $k$ is at the beginning of the array. If we apply the operation to it, it will return to its current position, since it is less than or equal to all elements of the array, i.e. the array will not change. Thus, after the array has its minimum at the beginning, it is useless to apply operations, and all the operations applied before that will move the element from the beginning of the array to some position after the position of the first minimum. Then, if the part of the array after the position $k$ is not sorted, the answer is $-1$, because it is impossible to change the order of elements in it. Otherwise, the answer is equal to the number of elements standing before the first minimum, since the operation must be applied to them and they will be in the right place in the right part. Total complexity - $O(n)$.
[ "greedy", "implementation", "sortings" ]
1,300
def solve(): n = int(input()) a = [int(x) for x in input().split()] fm = 0 for i in range(n): if a[i] < a[fm]: fm = i for i in range(fm + 1, n): if a[i] < a[i - 1]: print(-1) return print(fm) for _ in range(int(input())): solve()
1899
F
Alex's whims
A tree is a connected graph without cycles. It can be shown that any tree of $n$ vertices has exactly $n - 1$ edges. A leaf is a vertex in the tree with exactly one edge connected to it. The distance between two vertices $u$ and $v$ in a tree is the minimum number of edges that must be passed to come from vertex $u$ to vertex $v$. Alex's birthday is coming up, and Timofey would like to gift him a tree of $n$ vertices. However, Alex is a very moody boy. Every day for $q$ days, he will choose an integer; denote the integer chosen on the $i$-th day by $d_i$. If on the $i$-th day there are not two leaves in the tree at a distance \textbf{exactly} $d_i$, Alex will be disappointed. Timofey decides to gift Alex a constructor so that he can change his tree as he wants. Timofey knows that Alex is also lazy (a disaster, not a human being), so at the beginning of every day, he can perform \textbf{no more} than one operation of the following kind: - Choose vertices $u$, $v_1$, and $v_2$ such that there is an edge between $u$ and $v_1$ and no edge between $u$ and $v_2$. Then remove the edge between $u$ and $v_1$ and add an edge between $u$ and $v_2$. This operation \textbf{cannot} be performed if the graph is no longer a tree after it. Somehow Timofey managed to find out all the $d_i$. After that, he had another brilliant idea — just in case, make an instruction manual for the set, one such that Alex wouldn't be disappointed. Timofey is not as lazy as Alex, but when he saw the integer $n$, he quickly lost the desire to develop the instruction and the original tree, so he assigned this task to you. It can be shown that a tree and a sequence of operations satisfying the described conditions always exist. Here is an example of an operation where vertices were selected: $u$ — $6$, $v_1$ — $1$, $v_2$ — $4$.
This problem can be solved in several similar ways; one of them is given below. First, it is most convenient to take a bamboo - vertices from $1$ to $n$ connected in order. Then, we will maintain the following construction. At each moment of time, vertices $1$ and $2$ will be connected by an edge, and from vertex $2$ there will be at most two branches, each a chain of sequentially connected vertices (a bamboo). Thus, at any given time there will be at most three leaves in the tree, one of which is vertex $1$. We will maintain the vertices of the two branches in two arrays. Then, let the current number from the query be $d$. If the distance from one of the leaves to vertex $1$ is already $d$, we don't need to perform the operation. Otherwise, let's perform the operation so that the distance from the leaf of, for example, the first branch to vertex $1$ becomes equal to $d$. If the current distance is greater than $d$, then we move the extra vertices to the end of the second branch, and otherwise we add the necessary ones from the end of the second branch. Thus, after each operation, the distance from vertex $1$ to one of the leaves will be equal to $d$. Transformations can be implemented by physically moving the vertices between the two arrays, so the total complexity is $O(nq)$.
[ "constructive algorithms", "graphs", "greedy", "shortest paths", "trees" ]
1,600
#include<bits/stdc++.h> using namespace std; int main() { int t; cin >> t; while (t--) { int n; cin >> n; vector<int> b1, b2; for (int i = 0; i < n; ++i) { b1.push_back(i); } b2.push_back(1); for (int i = 1; i < n; ++i) { cout << i << ' ' << i + 1 << endl; } int q; cin >> q; while (q--) { int d; cin >> d; d++; if (b1.size() == d) { cout << "-1 -1 -1\n"; } else if (b1.size() < d) { d = d - b1.size(); vector<int> qq(b2.end() - d, b2.end()); int u = b2[b2.size() - d]; int v1 = b2[b2.size() - d - 1]; int v2 = b1.back(); cout << u + 1 << ' ' << v1 + 1 << ' ' << v2 + 1 << '\n'; for (int i = 0; i < d; ++i) b2.pop_back(); for (auto i : qq) b1.push_back(i); } else { d = b1.size() - d; vector<int> qq(b1.end() - d, b1.end()); int u = b1[b1.size() - d]; int v1 = b1[b1.size() - d - 1]; int v2 = b2.back(); cout << u + 1 << ' ' << v1 + 1 << ' ' << v2 + 1 << '\n'; for (int i = 0; i < d; ++i) b1.pop_back(); for (auto i : qq) b2.push_back(i); } } } }
1899
G
Unusual Entertainment
A tree is a connected graph without cycles. A permutation is an array consisting of $n$ distinct integers from $1$ to $n$ in any order. For example, $[5, 1, 3, 2, 4]$ is a permutation, but $[2, 1, 1]$ is not a permutation (as $1$ appears twice in the array) and $[1, 3, 2, 5]$ is also not a permutation (as $n = 4$, but $5$ is present in the array). After a failed shoot in the BrMeast video, Alex fell into depression. Even his birthday did not make him happy. However, after receiving a gift from Timofey, Alex's mood suddenly improved. Now he spent days playing with the gifted constructor. Recently, he came up with an unusual entertainment. Alex builds a tree from his constructor, consisting of $n$ vertices numbered from $1$ to $n$, with the root at vertex $1$. Then he writes down each integer from $1$ to $n$ in some order, obtaining a permutation $p$. After that, Alex comes up with $q$ triples of integers $l, r, x$. For each triple, he tries to determine if there is at least one descendant of vertex $x$ among the vertices $p_l, p_{l+1}, \ldots, p_r$. A vertex $u$ is a descendant of vertex $v$ if and only if $\mathrm{dist}(1, v) + \mathrm{dist}(v, u) = \mathrm{dist}(1, u)$, where $\mathrm{dist}(a, b)$ is the distance between vertices $a$ and $b$. In other words, vertex $v$ must be on the path from the root to vertex $u$. Alex told Zakhar about this entertainment. Now Alex tells his friend $q$ triples as described above, hoping that Zakhar can check for the presence of a descendant. Zakhar is very sleepy, so he turned to you for help. Help Zakhar answer all of Alex's questions and finally go to sleep.
Let's start the depth-first search from vertex $1$ and write out the entry and exit times for each vertex. Then, the fact that vertex $b$ is a descendant of vertex $a$ is equivalent to the fact that $\mathrm{tin}[a] \leq \mathrm{tin}[b] \leq \mathrm{tout}[b] \leq \mathrm{tout}[a]$, where $\mathrm{tin}$ and $\mathrm{tout}$ are the entry and exit times, respectively. Then, let us create an array $a$, where $a_i = \mathrm{tin}[p_i]$; the problem is reduced to checking that on the segment from $l$ to $r$ in array $a$ there is at least one number belonging to the segment $[\mathrm{tin}[x]; \mathrm{tout}[x]]$. This can be done, for example, using a Merge Sort Tree, and then the total complexity will be $O(n \log n + q \log^2 n)$.
[ "data structures", "dfs and similar", "dsu", "shortest paths", "sortings", "trees", "two pointers" ]
1,900
#include <bits/stdc++.h> using namespace std; #define sz(x) (int)x.size() #define all(x) x.begin(), x.end() struct SegmentTree { int n; vector<vector<int>> tree; void build(vector<int> &a, int x, int l, int r) { if (l + 1 == r) { tree[x] = {a[l]}; return; } int m = (l + r) / 2; build(a, 2 * x + 1, l, m); build(a, 2 * x + 2, m, r); merge(all(tree[2 * x + 1]), all(tree[2 * x + 2]), back_inserter(tree[x])); } SegmentTree(vector<int>& a) : n(a.size()) { int SIZE = 1 << (__lg(n) + bool(__builtin_popcount(n) - 1)); tree.resize(2 * SIZE - 1); build(a, 0, 0, n); } int count(int lq, int rq, int mn, int mx, int x, int l, int r) { if (rq <= l || r <= lq) return 0; if (lq <= l && r <= rq) return lower_bound(all(tree[x]), mx) - lower_bound(all(tree[x]), mn); int m = (l + r) / 2; int a = count(lq, rq, mn, mx, 2 * x + 1, l, m); int b = count(lq, rq, mn, mx, 2 * x + 2, m, r); return a + b; } int count(int lq, int rq, int mn, int mx) { return count(lq, rq, mn, mx, 0, 0, n); } }; vector<vector<int>> g; vector<int> tin, tout; int timer; void dfs(int v, int p) { tin[v] = timer++; for (auto u : g[v]) { if (u != p) { dfs(u, v); } } tout[v] = timer; } void solve() { int n, q; cin >> n >> q; g.assign(n, vector<int>()); for (int i = 0; i < n - 1; i++) { int u, v; cin >> u >> v; u--; v--; g[u].push_back(v); g[v].push_back(u); } timer = 0; tin.resize(n); tout.resize(n); dfs(0, -1); vector<int> p(n); for (int i = 0; i < n; i++) cin >> p[i]; vector<int> a(n); for (int i = 0; i < n; i++) a[i] = tin[p[i] - 1]; SegmentTree ST(a); for (int i = 0; i < q; i++) { int l, r, x; cin >> l >> r >> x; l--; x--; if (ST.count(l, r, tin[x], tout[x])) { cout << "YES\n"; } else { cout << "NO\n"; } } } int main() { int tests; cin >> tests; while (tests--) { solve(); if(tests > 0) cout << "\n"; } return 0; }
1900
A
Cover in Water
Filip has a row of cells, some of which are blocked, and some are empty. He wants all empty cells to have water in them. He has two actions at his disposal: - $1$ — place water in an empty cell. - $2$ — remove water from a cell and place it in any other empty cell. If at some moment cell $i$ ($2 \le i \le n-1$) is empty and both cells $i-1$ and $i+1$ contain water, then it becomes filled with water. Find the minimum number of times he needs to perform action $1$ in order to fill all empty cells with water. Note that you don't need to minimize the use of action $2$. Note that blocked cells neither contain water nor can Filip place water in them.
Assume that cells $i-1$, $i$, and $i+1$ are covered in water. What happens if you remove water from cell $i$? The water at position $i$ is replaced, as both cells $i-1$ and $i+1$ have water in them. Read the hints. If there are $3$ consecutive empty cells $i-1$, $i$, $i+1$, we can place water in cells $i-1$ and $i+1$ and then repeatedly move the water that reappears in cell $i$ to all other cells. If there are no such cells, we have to place water on every empty cell. So if we find the substring "..." (three consecutive empty cells) in the string, the answer is $2$; otherwise, the answer is the number of empty cells. Time and memory complexities are $O(N)$. Solve the problem if it is required that each cell is filled with water, or next to at least $1$ cell that is filled with water.
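A minimal sketch of this solution. It assumes the row is given as a string with '.' for empty cells and '#' for blocked cells (this encoding is an assumption, not stated above):

```python
def min_water_placements(s: str) -> int:
    # If three consecutive empty cells exist, 2 placements suffice:
    # fill the two outer cells, the middle fills itself, and removing
    # the middle cell's water (action 2) regenerates it, giving an
    # endless supply that can be moved anywhere.
    if "..." in s:
        return 2
    # Otherwise every empty cell must be filled by action 1 directly.
    return s.count(".")
```

For example, `min_water_placements("#..#..")` returns `4`: no block of three empty cells exists, so each of the four empty cells needs its own placement.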
[ "constructive algorithms", "greedy", "implementation", "strings" ]
800
null
1900
B
Laura and Operations
Laura is a girl who does not like combinatorics. Nemanja will try to convince her otherwise. Nemanja wrote some digits on the board. All of them are either $1$, $2$, or $3$. The number of digits $1$ is $a$. The number of digits $2$ is $b$ and the number of digits $3$ is $c$. He told Laura that in one operation she can do the following: - Select two different digits and erase them from the board. After that, write the digit ($1$, $2$, or $3$) different from both erased digits. For example, let the digits be $1$, $1$, $1$, $2$, $3$, $3$. She can choose digits $1$ and $3$ and erase them. Then the board will look like this $1$, $1$, $2$, $3$. After that, she has to write another digit $2$, so at the end of the operation, the board will look like $1$, $1$, $2$, $3$, $2$. Nemanja asked her whether it was possible for only digits of one type to remain written on the board after some operations. If so, which digits can they be? Laura was unable to solve this problem and asked you for help. As an award for helping her, she will convince Nemanja to give you some points.
Check if only digits $1$ can remain. The situation is similar for checking if only digits $2$ or only digits $3$ can remain. Try to find something that stays the same after each operation. Look at the parity of the numbers. The parity of each number changes after an operation. That means that if $2$ numbers have the same parity, they will always have the same parity. If they had different parity, their parities stay different. Read the hints. If the parity of $b$ and $c$ is not the same, then it is impossible for only digits $1$ to remain on the board, as it would require $b = c = 0$. Otherwise, the following construction will leave only digits $1$ on the board. First remove digits $2$ and $3$ and write digit $1$ while $b>0$ and $c>0$. If $b=c$, then we are done. Otherwise, without loss of generality assume $b>c$. That means that after the operations $c=0$ and $b$ is even (because $b$ and $c$ have the same parity). Now we perform the following $2$ operations $\frac{b}{2}$ times to get only digits $1$ left. Remove digits $1$ and $2$ and add digit $3$. After that remove digits $2$ and $3$ and add digit $1$. The net effect of these $2$ operations is the reduction of $b$ by $2$. Time and memory complexities are $O(1)$. Come up with a problem that uses a similar idea and try to solve it. A lot of beginner problems use a similar idea.
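The parity criterion can be sketched as follows. Note the construction above uses a spare digit of the target type, so this check assumes $a, b, c \ge 1$ (an assumption about the constraints, which are not quoted above):

```python
def digits_that_can_remain(a: int, b: int, c: int) -> list:
    # counts[d] = how many digits d are on the board
    counts = {1: a, 2: b, 3: c}
    result = []
    for d in (1, 2, 3):
        # Only digit d can remain iff the counts of the other two
        # digits have the same parity (an invariant of the operation).
        x, y = (counts[e] for e in (1, 2, 3) if e != d)
        if x % 2 == y % 2:
            result.append(d)
    return result
```

On the example from the statement ($a=3$, $b=1$, $c=2$) only digit $3$ qualifies, since $a$ and $b$ are both odd while the other pairs differ in parity.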
[ "dp", "math" ]
900
null
1900
C
Anji's Binary Tree
Keksic keeps getting left on seen by Anji. Through a mutual friend, he's figured out that Anji really likes binary trees and decided to solve her problem in order to get her attention. Anji has given Keksic a binary tree with $n$ vertices. Vertex $1$ is the root and does not have a parent. All other vertices have exactly one parent. Each vertex can have up to $2$ children, a left child, and a right child. For each vertex, Anji tells Keksic index of both its left and its right child or tells him that they do not exist. Additionally, each of the vertices has a letter $s_i$ on it, which is either 'U', 'L' or 'R'. Keksic begins his journey on the root, and in each move he does the following: - If the letter on his current vertex is 'U', he moves to its parent. If it doesn't exist, he does nothing. - If the letter on his current vertex is 'L', he moves to its left child. If it doesn't exist, he does nothing. - If the letter on his current vertex is 'R', he moves to its right child. If it doesn't exist, he does nothing. Before his journey, he can perform the following operations: choose any node, and replace the letter written on it with another one. You are interested in the minimal number of operations he needs to do before his journey, such that when he starts his journey, he will reach a leaf at some point. A leaf is a vertex that has no children. It does not matter which leaf he reaches. Note that it does not matter whether he will stay in the leaf, he just needs to move to it. Additionally, note that it does not matter how many times he needs to move before reaching a leaf. Help Keksic solve Anji's tree so that he can win her heart, and make her come to Čačak.
Solve the problem if all of the characters on vertices are 'U'. We can run DFS from the root. Using it we can calculate the number of edges that we have to traverse to get to every vertex. Just output the smallest value among all the leaves. Modify the DFS such that it takes into account that traversing some edges does not require an operation. Add weights to the edges. Read the hints. We will make edges from a vertex to its children. The weight of that edge will be $1$ unless one of the following holds: The weight of an edge between a vertex and its left child will be $0$ if the letter written on that vertex is 'L'. The weight of an edge between a vertex and its right child will be $0$ if the letter written on that vertex is 'R'. Now we run a modified DFS to find the distance of each vertex from the root, but with weighted edges. We output the minimal value among all the leaves. Time and memory complexities are $O(N)$. Solve the problem if after reaching a leaf, Keksic can teleport to any other leaf that he chooses and then needs to get back to the root, or report that it is impossible for him to complete such a travel.
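A sketch of the weighted DFS. It assumes $1$-indexed vertices with `left[v]` / `right[v]` equal to $0$ when the child is absent, and `s[v-1]` being the letter on vertex $v$ (the input encoding is an assumption):

```python
def min_changes(s: str, left: list, right: list) -> int:
    INF = float("inf")
    n = len(s)
    dist = [INF] * (n + 1)
    dist[1] = 0
    best = INF
    stack = [1]  # iterative DFS; each vertex of a tree is visited once
    while stack:
        v = stack.pop()
        if left[v] == 0 and right[v] == 0:  # leaf: candidate answer
            best = min(best, dist[v])
            continue
        for child, letter in ((left[v], "L"), (right[v], "R")):
            if child:
                # the edge is free when the letter already points at the child
                dist[child] = dist[v] + (0 if s[v - 1] == letter else 1)
                stack.append(child)
    return best
```

Since the graph is a tree, each vertex has a unique root-to-vertex path, so a plain DFS suffices; no 0-1 BFS or Dijkstra is needed despite the weights.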
[ "dfs and similar", "dp", "trees" ]
1,300
null
1900
D
Small GCD
Let $a$, $b$, and $c$ be integers. We define function $f(a, b, c)$ as follows: Order the numbers $a$, $b$, $c$ in such a way that $a \le b \le c$. Then return $\gcd(a, b)$, where $\gcd(a, b)$ denotes the greatest common divisor (GCD) of integers $a$ and $b$. So basically, we take the $\gcd$ of the $2$ smaller values and ignore the biggest one. You are given an array $a$ of $n$ elements. Compute the sum of $f(a_i, a_j, a_k)$ for each $i$, $j$, $k$, such that $1 \le i < j < k \le n$. More formally, compute $$\sum_{i = 1}^n \sum_{j = i+1}^n \sum_{k =j +1}^n f(a_i, a_j, a_k).$$
Let $m$ be the biggest value in the array. Calculate array $x$ such that $x_i$ ($1 \le i \le m$) is the number of triples which have $\gcd$ of $i$. Then the answer is the sum of $i \cdot x_i$ over all $i$ ($1 \le i \le m$). Value of $\lfloor \frac{m}{1} \rfloor + \lfloor \frac{m}{2} \rfloor + \lfloor \frac{m}{3} \rfloor + \ldots + \lfloor \frac{m}{m} \rfloor$ is around $m \log m$. We calculate $x_i$ from $x_m$ to $x_1$. For some $i$, we can first calculate the number of triples that have a value of function $f$ that is an integer multiple of $i$, and then from it subtract $x_{2i}, x_{3i}, x_{4i}, \ldots$. Because of the previous hint, the subtractions will be quite fast. Now the question is how to calculate the number of triples that have a value of function $f$ that is an integer multiple of $i$. Numbers up to $100\,000$ can have at most $128$ divisors. As the order in which numbers are given does not matter, we can sort the array. Now for each number $d$ ($1 \le d \le m$) store indices of all numbers that are divisible by $d$ in an array of vectors. Now we will for each number $d$ ($1 \le d \le m$) count the number of triples. To find the number of triples that have a gcd value that is an integer multiple of $d$, we can do the following. We go over the index of the number that will be value $b$ in the function. We can utilize the vectors we calculated in the previous step for that. Say that the current index is $i$. Then for value $c$ we can pick any number with an index larger than $i$, as the array is sorted. For value $a$, we can pick any of the numbers divisible by $d$ with an index less than $i$, the number of which we can get from the vector. By multiplying those $2$ numbers we get the number of triples that have $a_i$ as their middle value. As numbers can have up to $128$ divisors, the step above is quite fast. Total time complexity is $O(m \log m + n \log n + 128 \cdot n)$. Total memory complexity is $O(m + 128 \cdot n)$.
Solve the problem if $f(a,b,c) = \gcd (a,c)$.
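A compact sketch of the counting scheme above. Instead of precomputing divisor lists, it enumerates the divisors of each middle element on the fly, which gives the same counts (function and variable names are illustrative):

```python
def sum_small_gcd(a: list) -> int:
    a = sorted(a)
    n, m = len(a), max(a)
    cnt = [0] * (m + 1)   # cnt[d]: triples whose f-value is a multiple of d
    seen = [0] * (m + 1)  # seen[d]: elements so far divisible by d
    for i, v in enumerate(a):
        d = 1
        while d * d <= v:  # enumerate divisors of the middle element b = a[i]
            if v % d == 0:
                for dd in ((d, v // d) if d * d != v else (d,)):
                    # a-candidates: earlier elements divisible by dd;
                    # c-candidates: any of the n-1-i later (larger) elements
                    cnt[dd] += seen[dd] * (n - 1 - i)
                    seen[dd] += 1
            d += 1
    ans = 0
    exact = [0] * (m + 1)
    for d in range(m, 0, -1):  # subtract multiples to get exact gcd counts
        e = cnt[d]
        for k in range(2 * d, m + 1, d):
            e -= exact[k]
        exact[d] = e
        ans += d * e
    return ans
```

For instance, on $[2, 3, 4, 6]$ the four triples have $f$-values $1, 1, 2, 1$, so the sum is $5$.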
[ "bitmasks", "brute force", "dp", "math", "number theory" ]
2,000
null
1900
E
Transitive Graph
You are given a \textbf{directed} graph $G$ with $n$ vertices and $m$ edges between them. Initially, graph $H$ is the same as graph $G$. Then you decided to perform the following actions: - If there exists a triple of vertices $a$, $b$, $c$ of $H$, such that there is an edge from $a$ to $b$ and an edge from $b$ to $c$, but there is no edge from $a$ to $c$, add an edge from $a$ to $c$. - Repeat the previous step as long as there are such triples. Note that the number of edges in $H$ can be up to $n^2$ after performing the actions. You also wrote some values on vertices of graph $H$. More precisely, vertex $i$ has the value of $a_i$ written on it. Consider a simple path consisting of $k$ \textbf{distinct} vertices with indexes $v_1, v_2, \ldots, v_k$. The length of such a path is $k$. The value of that path is defined as $\sum_{i = 1}^k a_{v_i}$. A simple path is considered the longest if there is no other simple path in the graph with greater length. Among all the longest simple paths in $H$, find the one with the smallest value.
Try to simplify graph $H$. Look at strongly connected components of $G$, and what happens with them. Use dp to find the answer. The main observation is what $H$ looks like. All the strongly connected components (SCC) in $G$ will become fully connected subgraphs in $H$. Secondly, take any two vertices $a$ and $b$ such that $a$ and $b$ are not in the same SCC. We can let $S_a$ be the set of vertices that are in the same SCC as $a$ ($a$ included). Similarly, $S_b$ is the set of vertices that are in the same SCC as $b$. If there is an edge going from $a$ to $b$, then for any two vertices $x$ and $y$ such that $x$ belongs to $S_a$ and $y$ belongs to $S_b$, there is an edge going from $x$ to $y$. Both of the previously stated facts about the graph can be proven by induction. Now, let's say that there is a longest path that goes through at least one vertex of an SCC. Then that path goes through all the vertices in the SCC, due to all vertices in the SCC being connected to the same vertices outside the SCC and due to the fact that the SCC is a complete subgraph. Now we can construct the graph $H'$. Each of the SCCs from $H$ will be a vertex in $H'$. The number on the vertex will be equal to the sum of all numbers on the vertices of the SCC that it was constructed from. Edges between two new vertices will be added if there is an edge between their original SCCs. The edge will have a weight equal to the size of the SCC that it is going into. An additional vertex will be added at index $0$ and an edge will be made between it and all other vertices with $0$ ingoing edges. Weight will be determined based on the size of the SCC of the vertex that the edge is going into. Due to the previous observations, the answer for $H'$ will be the same as the answer for $H$. However, notice that $H'$ is a DAG. That means that the answer for it can be computed using DP after topological ordering. Total time and memory complexity is $O(n + m)$.
We will define the value of the path as the biggest value of a vertex on the path. Among the values of all the longest paths, find the median one. It is guaranteed that there are at most $10^{18}$ longest paths starting from each node. (So basically, you can ignore overflow in the number of paths) The rest of the constraints are the same.
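A sketch of the main solution under a few assumptions: vertices are $0$-indexed, the answer is reported as a pair (length, value), and instead of the auxiliary vertex $0$ the DP simply considers every component as a possible start. SCCs are condensed with Kosaraju's algorithm, which conveniently numbers components in topological order:

```python
def longest_path_min_value(n, edges, a):
    g = [[] for _ in range(n)]
    gr = [[] for _ in range(n)]
    for u, v in edges:
        g[u].append(v)
        gr[v].append(u)
    # Kosaraju, pass 1: vertices by increasing finish time (iterative DFS)
    order, seen = [], [False] * n
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            v, i = stack.pop()
            if i < len(g[v]):
                stack.append((v, i + 1))
                u = g[v][i]
                if not seen[u]:
                    seen[u] = True
                    stack.append((u, 0))
            else:
                order.append(v)
    # pass 2 on the reversed graph; components come out in topological order
    comp, c = [-1] * n, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            v = stack.pop()
            for u in gr[v]:
                if comp[u] == -1:
                    comp[u] = c
                    stack.append(u)
        c += 1
    size, val = [0] * c, [0] * c
    for v in range(n):
        size[comp[v]] += 1
        val[comp[v]] += a[v]
    cg = [set() for _ in range(c)]
    for u in range(n):
        for v in g[u]:
            if comp[u] != comp[v]:
                cg[comp[u]].add(comp[v])
    # DP in reverse topological order: best[x] = (length, value) of the
    # best path starting at component x, maximizing length and, on ties,
    # minimizing value.
    best = [(0, 0)] * c
    for x in range(c - 1, -1, -1):
        bl, bv = 0, 0
        for y in cg[x]:
            l, v = best[y]
            if l > bl or (l == bl and v < bv):
                bl, bv = l, v
        best[x] = (size[x] + bl, val[x] + bv)
    return min(best, key=lambda p: (-p[0], p[1]))
```

For example, with edges $0 \to 1$, $1 \to 0$, $1 \to 2$ and values $[3, 1, 5]$, the SCC $\{0, 1\}$ is taken whole and then extended to vertex $2$, giving length $3$ and value $9$.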
[ "dfs and similar", "dp", "dsu", "graphs", "implementation" ]
2,100
null
1900
F
Local Deletions
For an array $b_1, b_2, \ldots, b_m$, for some $i$ ($1 < i < m$), element $b_i$ is said to be a local minimum if $b_i < b_{i-1}$ and $b_i < b_{i+1}$. Element $b_1$ is said to be a local minimum if $b_1 < b_2$. Element $b_m$ is said to be a local minimum if $b_m < b_{m-1}$. For an array $b_1, b_2, \ldots, b_m$, for some $i$ ($1 < i < m$), element $b_i$ is said to be a local maximum if $b_i > b_{i-1}$ and $b_i > b_{i+1}$. Element $b_1$ is said to be a local maximum if $b_1 > b_2$. Element $b_m$ is said to be a local maximum if $b_m > b_{m-1}$. Let $x$ be an array of distinct elements. We define two operations on it: - $1$ — delete all elements from $x$ that are \textbf{not} local minima. - $2$ — delete all elements from $x$ that are \textbf{not} local maxima. Define $f(x)$ as follows. Repeat operations $1, 2, 1, 2, \ldots$ in that order until you get only one element left in the array. Return that element. For example, take an array $[1,3,2]$. We will first do type $1$ operation and get $[1, 2]$. Then we will perform type $2$ operation and get $[2]$. Therefore, $f([1,3,2]) = 2$. You are given a permutation$^\dagger$ $a$ of size $n$ and $q$ queries. Each query consists of two integers $l$ and $r$ such that $1 \le l \le r \le n$. The query asks you to compute $f([a_l, a_{l+1}, \ldots, a_r])$. $^\dagger$ A permutation of length $n$ is an array of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$, but there is $4$ in the array).
Solve the problem for $q=1$. We can just simulate the process. Each operation removes at least half of the elements, meaning that we will perform at most $\log n$ operations. Solve the problem if each $l_i = 1$. So basically, solve the problem for each prefix. Keep the values that are in the array after $1$, $2$, $3$, $\cdots$, $\log n$ operations. Let's call those arrays layers. Layer $0$ is the array after $0$ operations, layer $1$ after $1$ operation, and so on. When we add a value to the end of a layer, only $1$ number that was previously a local extremum (local minimum or maximum, depending on the layer) might stop being a local extremum, if the new value becomes a local extremum. So we can just propagate this update to all layers. Try to simulate the process of updating described in hint $4$. Also, notice that if the array is longer than $3$, we can handle prefix and suffix updates separately. Read the hints. Now we will precompute and store the array after each operation on the entire permutation. We will call those arrays layers. We can now solve queries in $O(\log n)$. If a query involves a small number of elements, we can just brute force it. Otherwise, we do the following: Now let's define our queries a bit differently. We are given some array $x$, which will be a subarray of some layer, and $2$ values, $a$ and $b$. We will get array $y$ by appending $a$ to the start of $x$ and appending $b$ to the end of $x$. We are interested in the value of $f(y)$. It is easy to see that all queries involving $3$ or more elements can be converted into the modified query. If $|y|$ is small, we can just brute force it. Otherwise, we can transform it into a query on the next layer in constant time, or in $O(\log n)$. Now, the first thing to notice is that all elements in $x$ that are neither first nor last will be deleted only if they were deleted when we performed operations on the whole permutation. That means that they will represent some interval on the next layer; let's call it $z$.
(That interval can be found either with binary search or in $O(1)$ with precomputation.) It holds that $|z|$ is around half of $|x|$. Now, notice that among $a$ and the first element of $x$, there has to be at least one deletion. The same goes for the last element of $x$ and $b$. So now we have transformed the query onto the next level in $O(1)$, or $O(\log n)$. As we will do at most $O(\log n)$ such transformations, the complexity of a single query is either $O(\log n)$ or $O(\log^2 n)$ depending on the way we find the next interval, both of which should be fast enough to pass. Total time complexity: $O(n + q \log n)$ or $O(n + q \log^2 n)$. Total memory complexity: $O(n+q)$. A similar problem: 2103D - Local Construction.
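The simulation from the first hint, usable on its own for $q=1$ and for the brute-force base cases above (a sketch, assuming the input elements are distinct, as guaranteed for a permutation):

```python
def f(x: list):
    # Alternately keep only local minima, then only local maxima.
    # Adjacent elements cannot both survive a pass, so each pass at
    # least halves the array and the loop runs O(log n) times.
    keep_min = True
    while len(x) > 1:
        m = len(x)
        if keep_min:
            x = [x[i] for i in range(m)
                 if (i == 0 or x[i] < x[i - 1]) and (i == m - 1 or x[i] < x[i + 1])]
        else:
            x = [x[i] for i in range(m)
                 if (i == 0 or x[i] > x[i - 1]) and (i == m - 1 or x[i] > x[i + 1])]
        keep_min = not keep_min
    return x[0]
```

On the example from the statement, `f([1, 3, 2])` first keeps the local minima $[1, 2]$, then the local maximum $[2]$, returning $2$.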
[ "binary search", "data structures", "implementation" ]
2,800
null
1901
A
Line Trip
There is a road, which can be represented as a number line. You are located in the point $0$ of the number line, and you want to travel from the point $0$ to the point $x$, and back to the point $0$. You travel by car, which spends $1$ liter of gasoline per $1$ unit of distance travelled. When you start at the point $0$, your car is fully fueled (its gas tank contains the maximum possible amount of fuel). There are $n$ gas stations, located in points $a_1, a_2, \dots, a_n$. When you arrive at a gas station, you fully refuel your car. \textbf{Note that you can refuel only at gas stations, and there are no gas stations in points $0$ and $x$}. You have to calculate the minimum possible volume of the gas tank in your car (in liters) that will allow you to travel from the point $0$ to the point $x$ and back to the point $0$.
We can iterate over the volume of the gas tank from $1$ to $\infty$ (in fact $200$ is enough due to problem limitations) and check whether it's enough to travel from the point $0$ to the point $x$, and back. Let the volume be $V$, then all the following inequalities (which correspond to the ability to travel from the current gas station, having a full tank, to the next one) must be met: $a_1 - 0 \le V$, $a_2 - a_1 \le V$, ..., $a_n - a_{n-1} \le V$, and $2(x - a_n) \le V$ (the multiplier $2$ because there is no gas station at $x$, and we have to go from $a_n$ to $x$ and back without refueling). If all these conditions hold, then $V$ can be the answer. However, we can notice that the minimum value of $V$ that is sufficient for the travel is just the maximum of the left sides of inequalities written above. In other words, the answer to the problem is $\max(a_1, a_2 - a_1, \dots, a_n - a_{n-1}, 2(x-a_n))$.
[ "greedy", "math" ]
800
#include <bits/stdc++.h> using namespace std; int main() { ios::sync_with_stdio(false); cin.tie(0); int t; cin >> t; while (t--) { int n, x; cin >> n >> x; int prev = 0, ans = 0; for (int i = 0; i < n; ++i) { int a; cin >> a; ans = max(ans, a - prev); prev = a; } ans = max(ans, 2 * (x - prev)); cout << ans << '\n'; } }
1901
B
Chip and Ribbon
There is a ribbon divided into $n$ cells, numbered from $1$ to $n$ from left to right. Initially, an integer $0$ is written in each cell. Monocarp plays a game with a chip. The game consists of several turns. During the first turn, Monocarp places the chip in the $1$-st cell of the ribbon. During each turn \textbf{except for the first turn}, Monocarp does \textbf{exactly one} of the two following actions: - move the chip to the next cell (i. e. if the chip is in the cell $i$, it is moved to the cell $i+1$). This action is impossible if the chip is in the last cell; - choose any cell $x$ and teleport the chip into that cell. \textbf{It is possible to choose the cell where the chip is currently located}. At the end of each turn, the integer written in the cell with the chip is increased by $1$. Monocarp's goal is to make some turns so that the $1$-st cell contains the integer $c_1$, the $2$-nd cell contains the integer $c_2$, ..., the $n$-th cell contains the integer $c_n$. He wants to teleport the chip as few times as possible. Help Monocarp calculate the minimum number of times he has to teleport the chip.
At first, let's change the statement a bit: instead of teleporting our chip into cell $x$, we create a new chip in cell $x$ (it means that the chip does not disappear from the cell where it was located). And when we want to move a chip, we move any chip to the next cell. Then, $c_i$ will be the number of times a chip appeared in the cell $i$, and the problem will be the same: ensure the condition on each $c_i$ by "creating" the minimum number of chips. Let's look at value of $c_1$. If $c_1 > 1$, we have to create at least $c_1 - 1$ new chips in cell $1$. Let's create that number of chips in that cell. Then, let's see how we move chips from the cell $i$ to the cell $(i+1)$. If $c_i \ge c_{i+1}$, then all chips that appeared in the cell $(i+1)$ could be moved from the $i$-th cell, so we don't need to create any additional chips in that cell. But if $c_i < c_{i+1}$, then at least $c_{i+1} - c_i$ chips should be created in the cell $(i+1)$, since we can move at most $c_i$ chips from the left. So, for every $i$ from $2$ to $n$, we have to create $\max(0, c_i - c_{i-1})$ chips in the $i$-th cell; and the number of times we create a new chip in total is $c_1 - 1 + \sum\limits_{i=2}^{n} \max(0, c_i - c_{i-1})$.
[ "greedy", "math" ]
1,100
#include <bits/stdc++.h> using namespace std; const int N = 200'000; int t; int main() { cin >> t; for (int tc = 0; tc < t; ++tc) { int n; cin >> n; vector <int> cnt(n); long long res = 0; int cur = 0; for (int i = 0; i < n; ++i) { cin >> cnt[i]; if (cnt[i] > cur) res += cnt[i] - cur; cur = cnt[i]; } cout << res - 1 << endl; } return 0; }
1901
C
Add, Divide and Floor
You are given an integer array $a_1, a_2, \dots, a_n$ ($0 \le a_i \le 10^9$). In one operation, you can choose an integer $x$ ($0 \le x \le 10^{18}$) and replace $a_i$ with $\lfloor \frac{a_i + x}{2} \rfloor$ ($\lfloor y \rfloor$ denotes rounding $y$ down to the nearest integer) for all $i$ from $1$ to $n$. Pay attention to the fact that all elements of the array are affected on each operation. Print the smallest number of operations required to make all elements of the array equal. If the number of operations is less than or equal to $n$, then print the chosen $x$ for each operation. If there are multiple answers, print any of them.
Sort the array. Notice how applying the operation doesn't change the order of the elements, regardless of $x$. It means that it's enough to make the initial minimum and maximum equal to make all elements equal. Consider the difference between the minimum and the maximum values. What happens to it after an operation? Let the minimum be $a$ and the maximum be $b$. Then it's $\lfloor \frac{b + x}{2} \rfloor - \lfloor \frac{a + x}{2} \rfloor$. The roundings are difficult to deal with, so let's pretend the parities always align to even. So it's $\frac{b + x}{2} - \frac{a + x}{2} = \frac{b - a}{2}$. Apparently, the difference is just getting divided by $2$. Let's bring the parities back. Notice that the rounding of the difference depends only on the parities of $a, b$ and $x$. You can consider all cases of parities of $a$ and $b$ to discover that it's always possible to divide the difference by $2$, rounding down, and it's never possible to make it less than that. One easy algorithm to achieve that is the following: if the minimum is even, choose $x = 0$; otherwise, choose $x = 1$. Repeat until the minimum is equal to the maximum. Overall complexity: $O(n + \log A)$ per testcase.
[ "constructive algorithms", "greedy", "math" ]
1,400
for _ in range(int(input())): n = int(input()) a = list(map(int, input().split())) x, y = min(a), max(a) res = [] while x != y: res.append(x % 2) x = (x + res[-1]) // 2 y = (y + res[-1]) // 2 print(len(res)) if len(res) <= n: print(*res)
1901
D
Yet Another Monster Fight
Vasya is a sorcerer that fights monsters. Again. There are $n$ monsters standing in a row, the amount of health points of the $i$-th monster is $a_i$. Vasya is a very powerful sorcerer who knows many overpowered spells. In this fight, he decided to use a chain lightning spell to defeat all the monsters. Let's see how this spell works. Firstly, Vasya chooses an index $i$ of some monster ($1 \le i \le n$) and the initial power of the spell $x$. Then the spell hits monsters \textbf{exactly} $n$ times, one hit per monster. The first target of the spell is always the monster $i$. For every target \textbf{except for the first one}, the chain lightning will choose a \textbf{random} monster \textbf{who was not hit by the spell and is adjacent to one of the monsters that already was hit}. So, each monster will be hit exactly once. The first monster hit by the spell receives $x$ damage, the second monster receives $(x-1)$ damage, the third receives $(x-2)$ damage, and so on. Vasya wants to show how powerful he is, so he wants to kill all the monsters with a single chain lightning spell. The monster is considered dead if the damage he received is not less than the amount of its health points. On the other hand, Vasya wants to show he doesn't care that much, so he wants to choose the \textbf{minimum} initial power of the spell $x$ such that it kills all monsters, \textbf{no matter which monster (among those who can get hit) gets hit on each step}. Of course, Vasya is a sorcerer, but the amount of calculations required to determine the optimal spell setup is way above his possibilities, so you have to help him find the minimum spell power required to kill all the monsters. Note that Vasya chooses the initial target and the power of the spell, other things should be considered random and Vasya wants to kill all the monsters even in the worst possible scenario.
Let's start with a naive solution. Let $i$ be the index of the monster we started with. Let's make sure that we can kill all monsters, starting with monster $i$. Suppose we have chosen spell power $x$. How to check if all monsters will be definitely killed? Let's iterate on each monster $j$ and calculate the minimum possible damage it can get: if $i = j$, then the $j$-th monster will receive exactly $x$ damage. So, $x$ should be at least $a_j$; if $i > j$, then the minimum amount of damage the $j$-th monster may receive happens when the spell first strikes all monsters to the right of monster $i$, and then goes to the left of monster $i$. It means that $n - j$ monsters will be struck before the $j$-th monster, so the $j$-th monster will receive $x - (n-j)$ damage. So, $x$ should be at least $a_j + (n-j)$ for every $j < i$; and if $i < j$, then the minimum amount of damage the $j$-th monster may receive happens when the spell first strikes all monsters to the left of monster $i$, and then goes to the right of monster $i$. It means that $j-1$ monsters will be struck before the $j$-th monster, so the $j$-th monster will receive $x - (j-1)$ damage. So, $x$ should be at least $a_j + (j-1)$ for every $j > i$. So, if we want to start with the $i$-th monster, the minimum spell power we need is $\max(\max\limits_{j=1}^{i-1} a_j + (n-j), a_i, \max\limits_{j=i+1}^{n} a_j + (j-1))$. This gives us a solution in $O(n^2)$, but it can be sped up to $O(n)$ using prefix and suffix maxima.
[ "binary search", "dp", "greedy", "implementation", "math" ]
1,700
#include <bits/stdc++.h> using namespace std; int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif int n; cin >> n; vector<int> a(n); for (auto &it : a) cin >> it; vector<int> pref(n), suf(n); for (int i = 0; i < n; ++i) { pref[i] = a[i] + (n - i - 1); suf[i] = a[i] + i; } for (int i = 1; i < n; ++i) { pref[i] = max(pref[i], pref[i - 1]); } for (int i = n - 2; i >= 0; --i) { suf[i] = max(suf[i], suf[i + 1]); } int ans = 2e9; for (int i = 0; i < n; ++i) { int cur = a[i]; if (i > 0) cur = max(cur, pref[i - 1]); if (i + 1 < n) cur = max(cur, suf[i + 1]); ans = min(ans, cur); } cout << ans << endl; return 0; }
1901
E
Compressed Tree
You are given a tree consisting of $n$ vertices. A number is written on each vertex; the number on vertex $i$ is equal to $a_i$. You can perform the following operation any number of times (possibly zero): - choose a vertex which has \textbf{at most $1$} incident edge and remove this vertex from the tree. Note that you can delete all vertices. After all operations are done, you're compressing the tree. The compression process is done as follows. While there is a vertex having \textbf{exactly $2$} incident edges in the tree, perform the following operation: - delete this vertex, connect its neighbors with an edge. It can be shown that if there are multiple ways to choose a vertex to delete during the compression process, the resulting tree is still the same. Your task is to calculate the maximum possible sum of numbers written on vertices after applying the aforementioned operation any number of times, and then compressing the tree.
We can use dynamic programming on tree to solve this problem. After we're done removing vertices, a vertex will be left in the tree after the compression process if and only if its degree is not $2$. So, we can try to choose the number of children for each vertex in dynamic programming, and depending on this number of children, the vertex we're considering is either removed from the tree during the compression, or left in the tree. But if we want to use dynamic programming, we have to root the tree at some vertex, so the degree of each vertex depends not only on the number of its children we left in the tree, but also on whether there exists an edge leading from the ancestor to that vertex. So, for each vertex, we will consider two values of dynamic programming: the best answer for its subtree if there is an edge leading into it from the parent (i. e. we haven't deleted everything outside of that subtree), and the answer if there is no edge leading from the parent (i. e. we deleted everything outside of that subtree). Let the first of these two values be $dp_v$ - the maximum answer for subtree of $v$ if there is at least one non-deleted vertex not in the subtree of vertex $v$ (i.e there is an edge from vertex $v$ up the tree). Let's look at the dp transitions, depending on the number of children of vertex $v$ that are not deleted: $0$: $dp_v = \max(dp_v, a_v)$; $1$: $dp_v = \max(dp_v, \max\limits_u dp_u)$ ($a_v$ is not taken into account, because $v$ has $2$ incident edges and will be compressed); at least $2$: $dp_v = \max(dp_v, a_v + \sum\limits_u dp_u)$. Then let us consider the case when the resulting tree is in the subtree of some vertex $v$. 
We can update the global answer depending on the number of children of vertex $v$ that are not deleted: $0$: $ans = \max(ans, a_v)$; $1$: $ans = \max(ans, a_v + \max\limits_u dp_u)$; $2$: $ans = \max(ans, \max\limits_{u_1, u_2} dp_{u_1} + dp_{u_2})$ ($a_v$ is not taken into account, because $v$ has $2$ incident edges and will be compressed); at least $3$: $ans = \max(ans, a_v + \sum\limits_u dp_u)$. For convenience, we can calculate an auxiliary dynamic programming over the children of vertex $v$: $sum_i$ is the maximum sum of $dp_u$ over $i$ children. From the transitions written above, we can see that $sum_3$ can store the maximum sum for at least $3$ children. Don't forget that we can delete all vertices, so the answer is at least $0$.
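As a sanity check, the transitions above can be transcribed almost literally into a short Python sketch (the names `solve`, `calc`, `s` are ours, not from the model solution; `s[j]` plays the role of $sum_j$, with $j = 3$ meaning "three or more kept children"):

```python
import sys
from math import inf

def solve(n, a, edges):
    """Maximum vertex sum after deletions + compression; a mirror of the
    editorial's transitions. Vertices are 0-indexed."""
    g = [[] for _ in range(n)]
    for u, v in edges:
        g[u].append(v)
        g[v].append(u)
    sys.setrecursionlimit(10 ** 6)
    dp = [-inf] * n
    ans = 0  # deleting every vertex is always allowed

    def calc(v, p):
        nonlocal ans
        # s[j] = best sum of dp over exactly j kept children; j = 3 means "3 or more"
        s = [0, -inf, -inf, -inf]
        for u in g[v]:
            if u == p:
                continue
            calc(u, v)
            for j in range(3, -1, -1):
                s[min(j + 1, 3)] = max(s[min(j + 1, 3)], s[j] + dp[u])
        for j in range(4):
            # an edge goes up from v: v is compressed away iff it keeps exactly 1 child
            dp[v] = max(dp[v], s[j] + (0 if j == 1 else a[v]))
            # v is the top of the kept tree: compressed away iff exactly 2 kept children
            ans = max(ans, s[j] + (0 if j == 2 else a[v]))

    calc(0, -1)
    return ans
```

On the path $1 - 10 - 1$ this returns $11$ (keep the middle vertex with one neighbour), matching the analysis by hand.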
[ "dfs and similar", "dp", "graphs", "greedy", "sortings", "trees" ]
2,200
#include <bits/stdc++.h>

using namespace std;

using li = long long;

const li INF = 1e18;
const int N = 555555;

int n;
int a[N];
vector<int> g[N];
li dp[N];
li ans;

void calc(int v, int p) {
    vector<li> sum(4, -INF);
    sum[0] = 0;
    for (int u : g[v]) if (u != p) {
        calc(u, v);
        for (int i = 3; i >= 0; --i) {
            sum[min(i + 1, 3)] = max(sum[min(i + 1, 3)], sum[i] + dp[u]);
        }
    }
    dp[v] = -INF;
    for (int j = 0; j < 4; ++j) {
        dp[v] = max(dp[v], sum[j] + (j == 1 ? 0 : a[v]));
        ans = max(ans, sum[j] + (j == 2 ? 0 : a[v]));
    }
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(0);
    int t;
    cin >> t;
    while (t--) {
        cin >> n;
        for (int i = 0; i < n; ++i) cin >> a[i];
        for (int i = 0; i < n; ++i) g[i].clear();
        for (int i = 0; i < n - 1; ++i) {
            int x, y;
            cin >> x >> y;
            --x; --y;
            g[x].push_back(y);
            g[y].push_back(x);
        }
        ans = 0;
        calc(0, -1);
        cout << ans << '\n';
    }
}
1901
F
Landscaping
You are appointed to a very important task: you are in charge of flattening one specific road. The road can be represented as a polygonal line starting at $(0, 0)$, ending at $(n - 1, 0)$ and consisting of $n$ vertices (including starting and ending points). The coordinates of the $i$-th vertex of the polyline are $(i, a_i)$. "Flattening" road is equivalent to choosing some line segment from $(0, y_0)$ to $(n - 1, y_1)$ such that all points of the polyline are below the chosen segment (or on the same height). Values $y_0$ and $y_1$ \textbf{may be real}. You can imagine that the road has some dips and pits, and you start pouring pavement onto it until you make the road flat. Points $0$ and $n - 1$ have infinitely high walls, so pavement doesn't fall out of segment $[0, n - 1]$. The cost of flattening the road is equal to the area between the chosen segment and the polyline. You want to minimize the cost, that's why the flattened road is not necessary horizontal. But there is a problem: your data may be too old, so you sent a person to measure new heights. The person goes from $0$ to $n - 1$ and sends you new heights $b_i$ of each vertex $i$ of the polyline. Since measuring new heights may take a while, and you don't know when you'll be asked, calculate the minimum cost (and corresponding $y_0$ and $y_1$) to flatten the road after each new height $b_i$ you get.
Let's say that we are searching for the best line that contains the optimal segment. Observation $1$: the best line touches at least one vertex of the polyline. Otherwise, you can push it down until it touches the polyline. Observation $2$: the best line touches two vertices of the polyline. Suppose it touches only one vertex $i$; then it's not hard to prove that rotating the line around point $i$ either clockwise or counterclockwise will decrease the area under the line. Observation $3$: if both touched points are in the left half ($i < j < \frac{n}{2}$), then rotating the line clockwise around point $j$ will decrease the area. Analogously, if both points are in the right half ($\frac{n}{2} < i < j$), then rotating the line counterclockwise around $i$ will decrease the area. Observation $4$: since all points of the polyline should be under the line, the vertices the line touches are consecutive vertices on the convex hull of the polyline. In total, the pair of points we need forms a segment of the convex hull that crosses the vertical line $x = \frac{n}{2}$. Knowing that, what can we do? Let's split all vertices into two halves: the "left half" with all vertices to the left of $\frac{n}{2}$ and the "right half" with all vertices to the right. Note that while we process the first half of the "queries", the right half (vertices in $[\frac{n}{2}, n - 1]$) remains the same. If we know how to process the first half of the queries, then processing the second half is practically the same thing but in reverse order. So, how do we find the best segment of the convex hull that crosses $x = \frac{n}{2}$ efficiently while we work with the left half of vertices? Note that if we look at all segments that connect a vertex from the left part with a vertex from the right part: each segment will certainly cross $x = \frac{n}{2}$ in some real point $y_c$; the segment of the convex hull will have the maximum possible $y_c$ (otherwise, it's not a convex hull segment). 
There are around $n^2$ segments that cross $x = \frac{n}{2}$, but we don't need all of them. Let's calculate, for each vertex $i$ ($i < \frac{n}{2}$, both the old and the new values), the segment that crosses $x = \frac{n}{2}$ and has the maximum possible $y_c$. Note that this "maximum possible" segment connects $i$ with some vertex $j$ that has to be on the convex hull built on the right half (otherwise, we could find a convex hull point whose segment gives a bigger $y_c$). In other words, for each point from the left, we are searching for a tangent line to the convex hull on the right. And we can do it efficiently with binary search while checking some cross products. Let's name that tangent line $t(x, y)$, where $(x, y)$ is a point from the left half. One more observation: $y_c = \frac{y_0 + y_1}{2}$. You can prove it by looking at the area of the trapezoid formed by the segment $(0, y_0) - (n - 1, y_1)$. So, let's define a function $f(l)$ that takes a line $l$ and returns $2 y_c$ (or, equivalently, the answer to the task). We have all we need to calculate the answer: the answer to the $i$-th query is equal to the maximum among $f(t(j, b_j))$ for all $j \le i$ and $f(t(j, a_j))$ for all $i < j < \frac{n}{2}$. In total, the solution is as follows: build the convex hull on vertices $[\frac{n}{2}, n - 1]$; calculate $t(i, a_i)$ with binary search for all $i < \frac{n}{2}$; calculate $f(t(i, a_i))$ for all $i < \frac{n}{2}$ and store suffix maximums of these values; calculate $t(i, b_i)$ with binary search for all $i < \frac{n}{2}$; calculate $f(t(i, b_i))$ for all $i < \frac{n}{2}$ and store prefix maximums of these values; calculate the $i$-th answer as the maximum of the $i$-th prefix maximum and the $(i+1)$-th suffix maximum. In order to solve the task for the right half $[\frac{n}{2}, n - 1]$, just reverse arrays $a$ and $b$, swap them, and apply the same algorithm. 
This works because replacing the first $k$ values of array $a$ with $b_i$ is equivalent to replacing the last $n - k$ values of array $b$ with $a_i$. The complexity of the solution is $O(n \log n)$ because of the binary searches. Note that we can do everything in integers, except for calculating the values of the function $f$ and taking the maximum among them.
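For intuition, here is a tiny $O(n^3)$ brute force over the observations above (our own reference sketch, not the model solution): it tries every line through two polyline vertices, keeps only lines lying on or above all vertices, and minimises $y_0 + y_1 = 2 y_c$; the cost then follows from the trapezoid identity, since the area under the polyline is fixed.

```python
from fractions import Fraction

def min_cost_brute(a):
    """Brute force for tiny integer inputs. Relies on Observations 1-2:
    the optimal line passes through two polyline vertices.
    Returns (y0 + y1, cost) as exact Fractions."""
    n = len(a)
    pts = [(i, Fraction(h)) for i, h in enumerate(a)]
    best = None
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = pts[i], pts[j]
            slope = (y2 - y1) / (x2 - x1)
            line = lambda x: y1 + slope * (x - x1)  # candidate line through both points
            if all(line(x) >= y for x, y in pts):   # must stay above the polyline
                cand = line(0) + line(n - 1)        # = y0 + y1 = 2 * y_c
                best = cand if best is None else min(best, cand)
    poly_area = sum(Fraction(a[k] + a[k + 1], 2) for k in range(n - 1))
    return best, best * (n - 1) / 2 - poly_area
```

For $a = [0, 2, 0]$ this gives $y_0 + y_1 = 4$ and cost $2$: the only valid segments lean on the peak $(1, 2)$.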
[ "binary search", "geometry", "two pointers" ]
2,900
#include <bits/stdc++.h>

using namespace std;

#define fore(i, l, r) for(int i = int(l); i < int(r); i++)
#define sz(a) int((a).size())
#define all(a) (a).begin(), (a).end()
#define x first
#define y second

typedef long long li;
typedef long double ld;
typedef pair<int, int> pt;

template<class A, class B> ostream& operator <<(ostream& out, const pair<A, B> &p) {
    return out << "(" << p.x << ", " << p.y << ")";
}

template<class A> ostream& operator <<(ostream& out, const vector<A> &v) {
    fore(i, 0, sz(v)) {
        if(i) out << " ";
        out << v[i];
    }
    return out;
}

pt operator+ (const pt &a, const pt &b) { return {a.x + b.x, a.y + b.y}; }
pt operator- (const pt &a, const pt &b) { return {a.x - b.x, a.y - b.y}; }
li operator *(const pt &a, const pt &b) { return a.x * 1ll * b.x + a.y * 1ll * b.y; }
li operator %(const pt &a, const pt &b) { return a.x * 1ll * b.y - a.y * 1ll * b.x; }

const int INF = int(1e9);
const li INF64 = li(1e18);
const ld EPS = 1e-9;

int n;
vector<pt> old, nw;

inline bool read() {
    if(!(cin >> n))
        return false;
    old.resize(n);
    nw.resize(n);
    fore (i, 0, n) {
        old[i].x = i;
        cin >> old[i].y;
    }
    fore (i, 0, n) {
        nw[i].x = i;
        cin >> nw[i].y;
    }
    return true;
}

inline ld getTr(const pt &a, const pt &b) {
    pt tmp = b - a;
    pt v = {-tmp.y, tmp.x};
    ld c = v * a;
    ld y0 = (c - v.x * 0) / v.y;
    ld y1 = (c - v.x * li(n - 1)) / v.y;
    return (y0 + y1);
}

vector<pt> hull(const vector<pt> &ps, int l, int r) {
    vector<pt> h;
    fore (i, l, r) {
        while (sz(h) > 1 && (h[sz(h) - 1] - h[sz(h) - 2]) % (ps[i] - h[sz(h) - 1]) >= 0)
            h.pop_back();
        h.push_back(ps[i]);
    }
    return h;
}

inline void solve() {
    vector<ld> maxTr(n - 1, 0);
    fore (t, 0, 2) {
        auto h = hull(old, n / 2, n);
        auto best = [&](const pt &p) {
            int l = -1, r = sz(h) - 1;
            while (r - l > 1) {
                int mid = (l + r) >> 1;
                if ((h[mid] - p) % (h[mid + 1] - h[mid]) >= 0)
                    l = mid;
                else
                    r = mid;
            }
            return h[r].x;
        };
        vector<ld> bs(n / 2 + 1, 0);
        for (int i = n / 2 - 1; i >= 0; i--) {
            int j = best(old[i]);
            bs[i] = getTr(old[i], old[j]);
            bs[i] = max(bs[i], bs[i + 1]);
        }
        ld lans = 0;
        fore (i, 0, n / 2) {
            int j = best(nw[i]);
            lans = max(lans, getTr(nw[i], old[j]));
            maxTr[i] = max({maxTr[i], lans, bs[i + 1]});
        }
        reverse(all(old));
        reverse(all(nw));
        swap(old, nw);
        fore (i, 0, n) old[i].x = nw[i].x = i;
        reverse(all(maxTr));
    }
    maxTr.push_back(maxTr.back());
    cout << maxTr << endl;
}

int main() {
#ifdef _DEBUG
    freopen("input.txt", "r", stdin);
    int tt = clock();
#endif
    ios_base::sync_with_stdio(false);
    cin.tie(0), cout.tie(0);
    cout << fixed << setprecision(12);
    if(read()) {
        solve();
#ifdef _DEBUG
        cerr << "TIME = " << clock() - tt << endl;
        tt = clock();
#endif
    }
    return 0;
}
1902
A
Binary Imbalance
You are given a string $s$, consisting only of characters '0' and/or '1'. In one operation, you choose a position $i$ from $1$ to $|s| - 1$, where $|s|$ is the current length of string $s$. Then you insert a character between the $i$-th and the $(i+1)$-st characters of $s$. If $s_i = s_{i+1}$, you insert '1'. If $s_i \neq s_{i+1}$, you insert '0'. Is it possible to make the number of zeroes in the string strictly greater than the number of ones, using any number of operations (possibly, none)?
If the string consists only of ones, it's impossible: any operation will only add more ones to the string. Otherwise, let's show that it's always possible. If the string consists only of zeroes, no operations are required. Otherwise, there always exists a pair of adjacent characters, one zero and one one. Applying an operation between them increases the number of zeroes by one, and there will still exist a pair of adjacent zero and one. Thus, that operation can be performed as many times as needed: we keep performing it until there are more zeroes than ones. Overall complexity: $O(n)$ per testcase.
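The argument is easy to verify by brute force on small strings (a throwaway sketch of ours, not the model solution):

```python
def can_outnumber(s):
    # editorial's criterion: possible iff the string contains at least one '0'
    return '0' in s

def simulate(s):
    """Naive simulation: keep inserting between an adjacent unequal pair
    (which always inserts '0') until zeroes outnumber ones."""
    s = list(s)
    while s.count('0') <= s.count('1'):
        for i in range(len(s) - 1):
            if s[i] != s[i + 1]:
                s.insert(i + 1, '0')
                break
        else:
            return False  # no unequal pair: the string is all ones
    return True
```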
[ "constructive algorithms" ]
800
for _ in range(int(input())):
    n = int(input())
    s = input()
    print("NO" if s.count('1') == n else "YES")
1902
B
Getting Points
Monocarp is a student at Berland State University. Due to recent changes in the Berland education system, Monocarp has to study only one subject — programming. The academic term consists of $n$ days, and in order not to get expelled, Monocarp has to earn at least $P$ points during those $n$ days. There are two ways to earn points — completing practical tasks and attending lessons. For each practical task Monocarp fulfills, he earns $t$ points, and for each lesson he attends, he earns $l$ points. Practical tasks are unlocked "each week" as the term goes on: the first task is unlocked on day $1$ (and can be completed on any day from $1$ to $n$), the second task is unlocked on day $8$ (and can be completed on any day from $8$ to $n$), the third task is unlocked on day $15$, and so on. Every day from $1$ to $n$, there is a lesson which can be attended by Monocarp. And every day, Monocarp chooses whether to study or to rest the whole day. When Monocarp decides to study, he attends a lesson and can complete \textbf{no more than $2$} tasks, which are already unlocked and not completed yet. If Monocarp rests the whole day, he skips a lesson and ignores tasks. Monocarp wants to have as many days off as possible, i. e. he wants to maximize the number of days he rests. Help him calculate the maximum number of days he can rest!
Firstly, let $c$ be the total number of tasks in the term. Then $c = \left\lceil\frac{n}{7}\right\rceil = \left\lfloor \frac{n + 6}{7} \right\rfloor$. Suppose Monocarp studies exactly $k$ days. How many points will he get? He gets $k \cdot l$ for attending lessons, and since he can complete at most $2$ tasks per day, he will solve no more than $\min(c, 2 k)$ tasks. So, in the best possible scenario he will get $k \cdot l + \min(c, 2 k) \cdot t$ points. And it is actually possible to get exactly that many points. For example, Monocarp can study the last $k$ days of the term: on the $n$-th day he completes the $(c-1)$-th and $c$-th tasks, on the $(n-1)$-th day, tasks $(c - 3)$ and $(c - 2)$, and so on. It's easy to see that all these tasks are unlocked by the day Monocarp completes them. In total, we need to find the minimum $k$ such that $k \cdot l + \min(c, 2 k) \cdot t \ge P$. Since the left-hand side is non-decreasing in $k$, we can analyze two cases or simply binary search on $k$.
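The binary search can be sketched as follows (our own Python version, mirroring the Kotlin model solution below; it assumes, as the statement guarantees, that studying all $n$ days yields at least $P$ points):

```python
def max_rest_days(n, p, l, t):
    c = (n + 6) // 7              # c = ceil(n / 7) tasks over the term

    def points(k):                # best score with exactly k study days
        return k * l + min(2 * k, c) * t

    lo, hi = 0, n                 # smallest k with points(k) >= p lies in [0, n]
    while lo < hi:
        mid = (lo + hi) // 2
        if points(mid) >= p:
            hi = mid
        else:
            lo = mid + 1
    return n - hi                 # rest days = n - (minimum study days)
```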
[ "binary search", "brute force", "greedy" ]
1,100
fun main(args: Array<String>) {
    repeat(readln().toInt()) {
        val (n, p, l, t) = readln().split(' ').map { it.toLong() }
        val cntTasks = (n + 6) / 7
        fun calc(k: Long) = k * l + minOf(2 * k, cntTasks) * t
        var lf = 0L
        var rg = n
        while (rg - lf > 1) {
            val mid = (lf + rg) / 2
            if (calc(mid) >= p) rg = mid else lf = mid
        }
        println(n - rg)
    }
}
1902
C
Insert and Equalize
You are given an integer array $a_1, a_2, \dots, a_n$, all its elements are distinct. First, you are asked to insert one more integer $a_{n+1}$ into this array. $a_{n+1}$ should not be equal to any of $a_1, a_2, \dots, a_n$. Then, you will have to make all elements of the array equal. At the start, you choose a \textbf{positive} integer $x$ ($x > 0$). In one operation, you add $x$ to exactly one element of the array. \textbf{Note that $x$ is the same for all operations}. What's the smallest number of operations it can take you to make all elements equal, after you choose $a_{n+1}$ and $x$?
Let's start by learning how to calculate the answer without the insertion. Since $x$ can only be positive, we will attempt to make all elements equal to the current maximum value in the array. Pick some $x$. Now, how do we check whether it's possible to make every element equal to the maximum? For one element, the difference between the maximum and the element should be divisible by $x$. So, for all elements, all differences should be divisible by $x$; thus, the greatest common divisor of all differences should be divisible by $x$. Among all values of $x$ that work, we should obviously pick the largest one. The answer is equal to $\sum\limits_{i=1}^n \frac{\mathit{max} - a_i}{x}$, so it decreases as $x$ increases. Thus, it only makes sense to take $x = \mathit{gcd}(\mathit{max} - a_1, \mathit{max} - a_2, \dots, \mathit{max} - a_n)$. If you think about it from the perspective of Euclid's algorithm for finding the gcd, you can rewrite it as $\mathit{gcd}(a_2 - a_1, a_3 - a_2, \dots, a_n - a_{n - 1}, \mathit{max} - a_n)$. Think about it for some small $n$: $\mathit{gcd}(a_3 - a_1, a_2 - a_1) = \mathit{gcd}((a_3 - a_1) - (a_2 - a_1), a_2 - a_1) = \mathit{gcd}(a_3 - a_2, a_2 - a_1)$. Basically, you can arbitrarily add or subtract the arguments of the gcd from each other. With that interpretation, it's clear that it never makes sense to make all elements equal to anything greater than the current maximum: $x$ would still be required to divide all the adjacent differences, but we would add one more condition that can only decrease the gcd. Let's return to the insertion. First, inserting a new element can never decrease the answer. It adds one more constraint $a_{n+1} - a_n$ to the gcd. If the new element increases the maximum of the array, all $n$ elements will need additional operations to reach it instead of the old maximum. If it doesn't change the maximum, $a_{n+1}$ itself will need some operations to become equal to the maximum. 
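The rewriting of the gcd through adjacent differences can be checked directly (a quick sketch; the helper names are ours):

```python
from functools import reduce
from math import gcd

def x_via_maximum(a):
    # gcd of (max - a_i); the zero term does not affect the gcd
    m = max(a)
    return reduce(gcd, (m - v for v in a), 0)

def x_via_adjacent(a):
    # gcd of adjacent differences of the sorted array
    b = sorted(a)
    return reduce(gcd, (b[i + 1] - b[i] for i in range(len(b) - 1)), 0)
```

Both functions agree on every input, which is exactly the telescoping argument above.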
Since $x$ can only decrease, let's try not to change it at all. So, we only add an element of the form $a_i + x \cdot k$ for some integer $k$, if it doesn't already appear in the array. Since all elements of the array are of that form for any $i$, let's choose $a_i$ to be the maximum of the array. So, the form is $\mathit{max} + x \cdot k$. If we choose a positive $k$, the maximum increases. Thus, the answer increases by $k \cdot n$, since all elements will require $k$ extra operations. Obviously, it doesn't make sense to pick $k > 1$: $\mathit{max} + x$ doesn't appear in the array already, and the number of extra operations for it is the minimum possible, just $n$. If we choose a negative $k$, no other elements are affected. Thus, the answer only depends on the number of operations that $a_{n+1}$ will require. So, the best option is to choose the largest $k$ such that $\mathit{max} + x \cdot k$ doesn't appear in the array. In the worst case, $k = -n$, when the elements are $[\mathit{max}, \mathit{max} - x, \mathit{max} - 2 \cdot x, \dots, \mathit{max} - (n - 1) \cdot x]$. The number of operations is equal to $-k$, so it's never larger than $n$. Thus, a negative $k$ is always at least as good as a positive $k$. If we change the value of $x$, we can only divide it by something, so it becomes at least twice as small. Notice that this also at least doubles the current answer. The smallest possible current answer is $0 + 1 + 2 + \dots + (n - 1) = \frac{n \cdot (n - 1)}{2}$. For any $n > 2$, that is greater than or equal to $n$. If we also consider that $a_{n+1}$ itself will require at least one extra operation, we can extend this to $n > 1$. So, changing $x$ is never better than keeping it. Thus, we come to the final algorithm. Calculate $x = \mathit{gcd}(a_2 - a_1, a_3 - a_2, \dots, a_n - a_{n - 1})$ (over the sorted array). For $n = 1$, it's $0$, so we handle that case separately: the answer is always $1$. 
Then, initialize $k = -1$ and keep decreasing it by $1$ as long as $\mathit{max} + x \cdot k$ appears in the array. That can be checked with a structure like a set, with a binary search over the sorted array, or with two pointers. Finally, calculate the answer as $\sum\limits_{i=1}^n \frac{\mathit{max} - a_i}{x} - k$. Overall complexity: $O(n \log n + \log A)$ per testcase.
[ "brute force", "constructive algorithms", "greedy", "math", "number theory" ]
1,300
from math import gcd

for _ in range(int(input())):
    n = int(input())
    a = list(map(int, input().split()))
    g = 0
    for i in range(n - 1):
        g = gcd(g, a[i + 1] - a[i])
    g = max(g, 1)
    a.sort()
    j = n - 1
    res = a[-1]
    while True:
        while j >= 0 and a[j] > res:
            j -= 1
        if j < 0 or a[j] != res:
            break
        res -= g
    print((a[-1] * (n + 1) - (sum(a) + res)) // g)
1902
D
Robot Queries
There is an infinite $2$-dimensional grid. Initially, a robot stands in the point $(0, 0)$. The robot can execute four commands: - U — move from point $(x, y)$ to $(x, y + 1)$; - D — move from point $(x, y)$ to $(x, y - 1)$; - L — move from point $(x, y)$ to $(x - 1, y)$; - R — move from point $(x, y)$ to $(x + 1, y)$. You are given a sequence of commands $s$ of length $n$. Your task is to answer $q$ \textbf{independent} queries: given four integers $x$, $y$, $l$ and $r$; determine whether the robot visits the point $(x, y)$, while executing a sequence $s$, but the substring from $l$ to $r$ is reversed (i. e. the robot performs commands in order $s_1 s_2 s_3 \dots s_{l-1} s_r s_{r-1} s_{r-2} \dots s_l s_{r+1} s_{r+2} \dots s_n$).
Let's divide the path of the robot into three parts: points before the $l$-th operation; points from the $l$-th to the $r$-th operation; and points after the $r$-th operation. The first and the third parts are pretty simple to check, because reversing the substring $(l, r)$ does not affect them. So we can precompute a dictionary $mp_{x, y}$ that stores all indices of operations in the initial sequence $s$ after which the robot stands at the point $(x, y)$. Then the point $(x, y)$ lies on the first part if $mp_{x, y}$ contains at least one index from $0$ to $l-1$. Similarly for the third part, but we have to check indices from $r$ to $n$. It remains to understand how the second part works. Reversing the order of commands from $l$ to $r$ means that the robot first performs all commands from $1$ to $l-1$ of the original string, then the command $s_r$, then $s_{r-1}$, and so on, until the command $s_l$. So the points that the robot visits can be represented as follows: for an integer $k$ such that $l \le k \le r$, perform all commands from $1$ to $r$, except for the commands from $l$ to $k$. Let $pos_i$ be the point where the robot stands after the $i$-th operation, and let $\Delta_{i, j}$ be the total movement over operations from $i$ to $j$ (note that $\Delta_{i, j} = pos_j - pos_{i-1}$). Then we have to check whether there is an $i$ such that $l \le i \le r$ and $pos_{l-1} + \Delta_{i, r} = (x, y)$. Using the aforementioned equality for $\Delta$, we can rewrite this as $pos_{l-1} + pos_r - pos_i = (x, y)$. As a result, our task is to check whether there is an $i$ such that $l \le i \le r$ and $pos_i = pos_{l-1} + pos_r - (x, y)$. And we already know how to check that using the dictionary $mp$ (you will need some sort of binary search, or a function similar to lower_bound, to find whether there is a moment from $l$ to $r$ when a point is visited). 
If the point belongs to at least one of the parts of the path, then the answer is YES, otherwise the answer is NO.
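The three-part characterization can be cross-checked against a direct simulation on small inputs (our own sketch, not the model solution; $l$ and $r$ are 1-indexed as in the statement):

```python
def visits_reversed(s, x, y, l, r):
    """Brute force: execute s with s[l..r] reversed and report
    whether (x, y) is visited."""
    moves = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0)}
    t = s[:l - 1] + s[l - 1:r][::-1] + s[r:]
    px = py = 0
    seen = {(0, 0)}
    for c in t:
        px += moves[c][0]
        py += moves[c][1]
        seen.add((px, py))
    return (x, y) in seen

def visits_formula(s, x, y, l, r):
    """The editorial's check, written against prefix positions pos[0..n]."""
    moves = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0)}
    pos = [(0, 0)]
    for c in s:
        pos.append((pos[-1][0] + moves[c][0], pos[-1][1] + moves[c][1]))
    # parts 1 and 3: prefix indices [0, l-1] and suffix indices [r, n]
    if (x, y) in pos[:l] or (x, y) in pos[r:]:
        return True
    # part 2: need pos_i = pos_{l-1} + pos_r - (x, y) for some i in [l, r]
    tx = pos[l - 1][0] + pos[r][0] - x
    ty = pos[l - 1][1] + pos[r][1] - y
    return (tx, ty) in pos[l:r + 1]
```

(The real solution replaces the linear scans with the dictionary $mp$ and lower_bound, as described above.)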
[ "binary search", "data structures", "dp", "implementation" ]
1,900
#include <bits/stdc++.h>

using namespace std;

#define x first
#define y second

using pt = pair<int, int>;

int main() {
    ios::sync_with_stdio(false);
    cin.tie(0);
    int n, q;
    cin >> n >> q;
    string s;
    cin >> s;
    vector<pt> pos(n + 1);
    for (int i = 0; i < n; ++i) {
        pos[i + 1].x = pos[i].x + (s[i] == 'R') - (s[i] == 'L');
        pos[i + 1].y = pos[i].y + (s[i] == 'U') - (s[i] == 'D');
    }
    map<pt, vector<int>> mp;
    for (int i = 0; i <= n; ++i)
        mp[pos[i]].push_back(i);
    auto check = [&](pt p, int l, int r) {
        if (!mp.count(p)) return false;
        auto it = lower_bound(mp[p].begin(), mp[p].end(), l);
        return it != mp[p].end() && *it <= r;
    };
    while (q--) {
        int x, y, l, r;
        cin >> x >> y >> l >> r;
        int nx = pos[r].x + pos[l - 1].x - x, ny = pos[r].y + pos[l - 1].y - y;
        bool f = check({x, y}, 0, l - 1) | check({nx, ny}, l, r - 1) | check({x, y}, r, n);
        cout << (f ? "YES" : "NO") << '\n';
    }
}
1902
E
Collapsing Strings
You are given $n$ strings $s_1, s_2, \dots, s_n$, consisting of lowercase Latin letters. Let $|x|$ be the length of string $x$. Let a collapse $C(a, b)$ of two strings $a$ and $b$ be the following operation: - if $a$ is empty, $C(a, b) = b$; - if $b$ is empty, $C(a, b) = a$; - if the last letter of $a$ is equal to the first letter of $b$, then $C(a, b) = C(a_{1,|a|-1}, b_{2,|b|})$, where $s_{l,r}$ is the substring of $s$ from the $l$-th letter to the $r$-th one; - otherwise, $C(a, b) = a + b$, i. e. the concatenation of two strings. Calculate $\sum\limits_{i=1}^n \sum\limits_{j=1}^n |C(s_i, s_j)|$.
Let's suppose that when we calculate the collapse of two strings $a$ and $b$, we reverse the string $a$ first, so that instead of checking and removing the last letters of $a$, we do this with its first letters. Then $|C(a,b)| = |a| + |b| - 2|LCP(a', b)|$, where $LCP(a', b)$ is the longest common prefix of $a'$ (the reversed version of $a$) and $b$. The answer to the problem then becomes $2n \sum\limits_{i=1}^{n} |s_i| - 2\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n} LCP(s_i', s_j)$. We need some sort of data structure that allows us to store all strings $s_i'$ and, for every string $s_j$, calculate the total LCP of it with all strings in the structure. There are many ways to implement it (hashing, suffix arrays, etc.), but in our opinion, one of the most straightforward is using a trie. Build a trie on all strings $s_i'$. Then, for every vertex of the trie, calculate the number of strings $s_i'$ that end in the subtree of that vertex (you can maintain it while building the trie: when you add a new string, increase this value by $1$ at every vertex you pass through). If you want to find the $LCP$ of two strings $s$ and $t$ using a trie, you can use the fact that it is equal to the number of vertices that lie on the path to $s$ and on the path to $t$ at the same time (except for the root vertex). This method can be expanded to querying the sum of $LCP$ of a given string $s$ with all strings in the trie as follows: try to find $s$ in the trie. While searching for it, you will descend in the trie through vertices that represent prefixes of $s$. For every such prefix, you need the number of strings in the trie that share this prefix, which is equal to the number of strings ending in the subtree of the corresponding vertex (already calculated). Don't forget that you shouldn't count the root, since it represents the empty prefix. 
This solution works in $O(SA)$ or $O(S \log A)$, where $S$ is the total length of the strings given in the input and $A$ is the size of the alphabet.
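The central identity $|C(a, b)| = |a| + |b| - 2\,|LCP(a', b)|$ is easy to sanity-check against the definition of the collapse (a small sketch of ours, not the model solution):

```python
def collapse_len(a, b):
    # simulate C(a, b) directly from the definition
    while a and b and a[-1] == b[0]:
        a, b = a[:-1], b[1:]
    return len(a) + len(b)

def lcp(a, b):
    # length of the longest common prefix of a and b
    k = 0
    while k < min(len(a), len(b)) and a[k] == b[k]:
        k += 1
    return k
```

For every pair of strings, `collapse_len(a, b)` equals `len(a) + len(b) - 2 * lcp(a[::-1], b)`.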
[ "data structures", "strings", "trees" ]
1,900
#include <bits/stdc++.h>

using namespace std;

const int N = int(1e6) + 99;

int nxt;
int to[N][26];
int sum[N];
long long res;

void add(const string& s) {
    int v = 0;
    ++sum[v];
    for (auto c : s) {
        int i = c - 'a';
        if (to[v][i] == -1)
            to[v][i] = nxt++;
        v = to[v][i];
        ++sum[v];
    }
}

void upd(const string& s) {
    int curLen = s.size();
    int v = 0;
    for (auto c : s) {
        int i = c - 'a';
        if (to[v][i] == -1) {
            res += sum[v] * 1LL * curLen;
            break;
        } else {
            int nxtV = to[v][i];
            res += (sum[v] - sum[nxtV]) * 1LL * curLen;
            --curLen;
            v = nxtV;
        }
    }
}

void solve(int n, vector<string> v) {
    int sumSizes = 0;
    for (int i = 0; i < n; ++i)
        sumSizes += v[i].size();
    nxt = 1;
    memset(sum, 0, sizeof sum);
    memset(to, -1, sizeof to);
    for (int i = 0; i < n; ++i)
        add(v[i]);
    for (int i = 0; i < n; ++i) {
        reverse(v[i].begin(), v[i].end());
        upd(v[i]);
    }
}

int main() {
    ios_base::sync_with_stdio(false);
    int n;
    cin >> n;
    vector<string> v(n);
    for (int i = 0; i < n; ++i)
        cin >> v[i];
    res = 0;
    solve(n, v);
    for (int i = 0; i < n; ++i)
        reverse(v[i].begin(), v[i].end());
    solve(n, v);
    cout << res << endl;
    return 0;
}
1902
F
Trees and XOR Queries Again
You are given a tree consisting of $n$ vertices. There is an integer written on each vertex; the $i$-th vertex has integer $a_i$ written on it. You have to process $q$ queries. The $i$-th query consists of three integers $x_i$, $y_i$ and $k_i$. For this query, you have to answer if it is possible to choose a set of vertices $v_1, v_2, \dots, v_m$ (possibly empty) such that: - every vertex $v_j$ is on the simple path between $x_i$ and $y_i$ (endpoints can be used as well); - $a_{v_1} \oplus a_{v_2} \oplus \dots \oplus a_{v_m} = k_i$, where $\oplus$ denotes the bitwise XOR operator.
This problem requires working with XOR bases, so let's have a primer on them. Suppose you want to solve the following problem: given a set of integers $x_1, x_2, \dots, x_k$ and another integer $y$, check whether it is possible to choose several (maybe zero) integers from the set such that their XOR is $y$. It can be solved with Gauss elimination method for systems of linear equations, but there are easier and faster methods, and we will describe one of them. For the given set of integers, let's build an XOR base. An XOR base of a set of integers $X = \{x_1, x_2, \dots, x_k\}$ is another set of integers $A = \{a_1, a_2, \dots, a_m\}$ such that: every integer that can be expressed as the XOR of some integers from the set $X$ can also be expressed as the XOR of some integers from $A$, and vice versa; every integer in $A$ is non-redundant (i. e. if you remove any integer from $A$, the first property is no longer met). For example, one of the XOR bases for $X = \{1, 2, 3\}$ is $A = \{1, 3\}$. $A = \{1, 2\}$ is also an XOR base of $X$, but $A = \{1, 2, 3\}$ is not since, for example, $2$ can be deleted. Note that an XOR base is not necessarily a subset of the original set. For example, for $X = \{1, 2\}$, $A = \{1, 3\}$ is a valid XOR base. Due to the laws of linear algebra, an XOR base of size $m$ supports $2^m$ integers (i. e. $2^m$ integers can be expressed using XOR of some numbers from the base). This means that since in our problem the integers are limited to $20$ bits, the maximum size of an XOR base we need is $20$. Now let's talk about how we build, store and maintain the XOR base. We will use an array of $20$ integers, initially filled with zeroes (an array of all zeroes represents an empty XOR base). Let's call this array $b$. If some integer $b_i$ in this array is non-zero, it has to meet the following constraints: the $i$-th bit is set to $1$ in $b_i$; the $i$-th bit is set to $0$ in every integer $b_j$ such that $j < i$. 
This is kinda similar to how the Gauss elimination method transforms a matrix representing the system of linear equations: it leaves only one row with non-zero value in the first column and puts it as the first row, then in the next non-zero column it leaves only one row with non-zero value (except for maybe the first row) and puts it as the second row, and so on. Okay, we need to process two types of queries for XOR bases: add an integer and change the XOR base accordingly if needed; check that some integer is supported (i. e. can be represented) by the XOR base. For both of those queries, we will use a special reduction process. In the model solution, it is the function reduce(b,x) that takes an array $b$ representing the XOR base and an integer $x$, and tries to eliminate bits set to $1$ from $x$. In this reduction process, we iterate on bits from $19$ (or the highest bit the number can have set to $1$) to $0$, and every time the bit $i$ we're considering is set to $1$, we try to make it $0$ by XORing $x$ with $b_i$. It can be easily seen that due to the properties of XOR base, XORing with $b_i$ is the only way to do it: if we XOR with any other number $b_j$ such that $j < i$, it won't affect the $i$-th bit; and if we XOR it with $b_j$ such that $j > i$, it sets the $j$-th bit to $1$ (and we have already ensured that it should be $0$). If reduce(b,x) transforms $x$ to $0$, then $x$ is supported by the XOR base, otherwise it is not. And if we want to try adding $x$ to the base, we can simply reduce $x$, find the highest non-zero bit in the resulting integer $x'$, (let it be $i$), and assign $b_i$ to $x'$ (it is guaranteed that $b_i$ was zero, since otherwise we would have eliminated the $i$-th bit). So, that's how we can work with XOR bases and process every query to them in $O(B)$, where $B$ is the number of bits in each integer. Now let's go back to the original problem. Basically, for every query, we have to build a XOR base on some path in the tree. 
We can root the tree and then use LCA to split this path into two vertical paths, get XOR bases from those two paths, and merge them in $O(B^2)$. But how do we get an XOR base on a vertical path in something like $O(B^2)$? To do this, for each vertex $v$, let's consider the following process. We go from $v$ to the root, maintain the XOR base of the integers we met, and every time we add something to the XOR base, we mark the current vertex as "interesting" for $v$. Our goal is to build a list of "interesting" vertices for $v$ in order from $v$ to the root. Since the size of each XOR base is up to $20$, the size of each such list is also up to $20$, so we can get the XOR base for a vertical path by simply iterating on that interesting list for the lower endpoint of the path. Okay, the last part of the problem we have to solve is how to build these lists for all vertices in reasonable time. The key insight here is that if $p$ is the parent of $v$, then the list for $v$ will be very similar to the list for $p$: if $v$ is not supported by the XOR base of the list for $p$, then the list for $v$ is simply the list for $p$, with the vertex $v$ added; otherwise, $v$ eliminates one of the vertices from the list for $p$. We can find which one by building the XOR base for $v$ and the list of $p$; we need to add $a_v$ first, and then the values from all vertices in the list of $p$ in order "from bottom to top", and when an interesting vertex for $p$ adds nothing to the XOR base, it means that it is exactly the vertex we need to eliminate. Combining all of these, we can get a solution that works in $O(nB^2)$ for preprocessing and in $O(B^2 + \log n)$ to answer each query.
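The basis operations described above fit in a few lines of Python (our own names `reduce_vec`, `insert`, `supports`; the model solution below implements the same operations in C++):

```python
K = 20  # the problem's values fit in 20 bits

def reduce_vec(b, x):
    # eliminate set bits of x from the highest down, using basis slot b[i]
    for i in range(K - 1, -1, -1):
        if x >> i & 1:
            x ^= b[i]
    return x

def insert(b, x):
    # try to add x to the basis b (a list of K ints); True iff the basis grew
    x = reduce_vec(b, x)
    if x == 0:
        return False
    b[x.bit_length() - 1] = x
    return True

def supports(b, x):
    # x is representable as a XOR of basis vectors iff it reduces to 0
    return reduce_vec(b, x) == 0
```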
[ "data structures", "dfs and similar", "divide and conquer", "graphs", "implementation", "math", "trees" ]
2,400
#include<bits/stdc++.h> using namespace std; const int N = 200043; const int K = 20; typedef array<int, K> base; int a[N]; vector<int> g[N]; vector<int> path_up[N]; int tin[N], tout[N]; int T = 0; int fup[N][K]; base make_empty() { base b; for(int i = 0; i < K; i++) b[i] = 0; return b; } int reduce(const base& b, int x) { for(int i = K - 1; i >= 0; i--) if(x & (1 << i)) x ^= b[i]; return x; } bool add(base& b, int x) { x = reduce(b, x); if(x != 0) { for(int i = K - 1; i >= 0; i--) if(x & (1 << i)) { b[i] = x; return true; } } return false; } bool check(const base& b, int x) { return reduce(b, x) == 0; } vector<int> rebuild_path(const vector<int>& path, int v) { base b = make_empty(); vector<int> ans; if(add(b, a[v])) ans.push_back(v); for(auto x : path) if(add(b, a[x])) ans.push_back(x); return ans; } void dfs(int v, int u) { tin[v] = T++; if(u == v) path_up[v] = rebuild_path(vector<int>(0), v); else path_up[v] = rebuild_path(path_up[u], v); fup[v][0] = u; for(int i = 1; i < K; i++) fup[v][i] = fup[fup[v][i - 1]][i - 1]; for(auto y : g[v]) if(y != u) dfs(y, v); tout[v] = T++; } bool is_ancestor(int u, int v) { return tin[u] <= tin[v] && tout[u] >= tout[v]; } int LCA(int x, int y) { if(is_ancestor(x, y)) return x; for(int i = K - 1; i >= 0; i--) if(!is_ancestor(fup[x][i], y)) x = fup[x][i]; return fup[x][0]; } bool query(int x, int y, int k) { base b = make_empty(); int z = LCA(x, y); for(auto v : path_up[x]) if(!is_ancestor(v, y)) add(b, a[v]); for(auto v : path_up[y]) if(!is_ancestor(v, x)) add(b, a[v]); add(b, a[z]); return check(b, k); } int main() { int n; scanf("%d", &n); for(int i = 0; i < n; i++) scanf("%d", &a[i]); for(int i = 0; i < n - 1; i++) { int x, y; scanf("%d %d", &x, &y); --x; --y; g[x].push_back(y); g[y].push_back(x); } dfs(0, 0); int q; scanf("%d", &q); for(int i = 0; i < q; i++) { int x, y, k; scanf("%d %d %d", &x, &y, &k); --x; --y; if(query(x, y, k)) puts("YES"); else puts("NO"); } }
1903
A
Halloumi Boxes
Theofanis is busy after his last contest, as now, he has to deliver many halloumis all over the world. He stored them inside $n$ boxes, each of which has some number $a_i$ written on it. He wants to sort them in non-decreasing order based on their number, however, his machine works in a strange way. It can only reverse any subarray$^{\dagger}$ of boxes with length \textbf{at most} $k$. Find if it's possible to sort the boxes using \textbf{any number of reverses}. $^{\dagger}$ Reversing a subarray means choosing two indices $i$ and $j$ (where $1 \le i \le j \le n$) and changing the array $a_1, a_2, \ldots, a_n$ to $a_1, a_2, \ldots, a_{i-1}, \; a_j, a_{j-1}, \ldots, a_i, \; a_{j+1}, \ldots, a_{n-1}, a_n$. The length of the subarray is then $j - i + 1$.
If the array is already sorted or $k > 1$, there is always a way: a reverse of size $2$ is a swap of two consecutive elements, and adjacent swaps are enough to sort any array (as in bubble sort). Otherwise it is not possible, since when $k = 1$ every allowed reverse leaves the array unchanged.
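The whole decision collapses to a single condition; a minimal sketch (the function name `can_sort` is ours):

```cpp
#include <algorithm>
#include <vector>

// Sorting is possible iff the array is already sorted or k > 1,
// since a reverse of length 2 is an adjacent swap.
bool can_sort(const std::vector<int>& a, int k) {
    return k > 1 || std::is_sorted(a.begin(), a.end());
}
```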
[ "brute force", "greedy", "sortings" ]
800
#include <bits/stdc++.h> using namespace std; int main(){ int t; cin>>t; while(t--){ int n,k; cin>>n>>k; int arr[n]; for(int i = 0;i < n;i++){ cin>>arr[i]; } if(is_sorted(arr,arr+n) || k > 1){ cout<<"YES\n"; } else{ cout<<"NO\n"; } } return 0; }
1903
B
StORage room
In Cyprus, the weather is pretty hot. Thus, Theofanis saw this as an opportunity to create an ice cream company. He keeps the ice cream safe from other ice cream producers by locking it inside big storage rooms. However, he forgot the password. Luckily, the lock has a special feature for forgetful people! It gives you a table $M$ with $n$ rows and $n$ columns of non-negative integers, and to open the lock, you need to find an array $a$ of $n$ elements such that: - $0 \le a_i < 2^{30}$, and - $M_{i,j} = a_i | a_j$ for all $i \neq j$, where $|$ denotes the bitwise OR operation. The lock has a bug, and sometimes it gives tables without any solutions. In that case, the ice cream will remain frozen for the rest of eternity. Can you find an array to open the lock?
Solution: Initially, we set all $a_i = 2^{30} - 1$ (all bits on). Go through every pair $i$, $j$ such that $i \neq j$ and do $a_i \&= M_{i,j}$ and $a_j \&= M_{i,j}$. Then we check whether $M_{i,j} = a_i | a_j$ for all pairs. If this holds, you have found the array; otherwise the answer is NO. Proof: Initially, all elements have all their bits set on, and we remove only the bits that are forced to be off. If $M_{i,j}$ doesn't have a specific bit, then definitely neither $a_i$ nor $a_j$ should have it. If $M_{i,j}$ has a specific bit on, then we don't have to remove anything (in the end we want at least one of $a_i$ and $a_j$ to have the bit on).
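A sketch of this construct-and-verify procedure, assuming the matrix fits in memory (the name `recover` is ours):

```cpp
#include <optional>
#include <vector>

// Start from all-ones, AND with every off-diagonal M[i][j], then verify.
// Returns the recovered array, or nullopt if no array exists.
std::optional<std::vector<int>> recover(const std::vector<std::vector<int>>& M) {
    int n = (int)M.size();
    std::vector<int> a(n, (1 << 30) - 1);
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            if (i != j) a[i] &= M[i][j];   // drop bits that M forbids
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            if (i != j && (a[i] | a[j]) != M[i][j]) return std::nullopt;
    return a;
}
```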
[ "bitmasks", "brute force", "constructive algorithms", "greedy" ]
1,200
#include <bits/stdc++.h> using namespace std; int main(){ ios_base::sync_with_stdio(0); cin.tie(0); int t; cin>>t; while(t--){ int n; cin>>n; int m[n][n]; int arr[n]; for(int i = 0;i < n;i++){ arr[i] = (1<<30) - 1; } for(int i = 0;i < n;i++){ for(int j = 0;j < n;j++){ cin>>m[i][j]; if(i != j){ arr[i] &= m[i][j]; arr[j] &= m[i][j]; } } } bool ok = true; for(int i = 0;i < n;i++){ for(int j = 0;j < n;j++){ if(i != j && (arr[i] | arr[j]) != m[i][j]){ ok = false; } } } if(!ok){ cout<<"NO\n"; } else{ cout<<"YES\n"; for(int i = 0;i < n;i++){ cout<<arr[i]<<" "; } cout<<"\n"; } } }
1903
C
Theofanis' Nightmare
Theofanis easily gets obsessed with problems before going to sleep and often has nightmares about them. To deal with his obsession he visited his doctor, Dr. Emix. In his latest nightmare, he has an array $a$ of size $n$ and wants to divide it into non-empty subarrays$^{\dagger}$ such that every element is in exactly one of the subarrays. For example, the array $[1,-3,7,-6,2,5]$ can be divided into $[1] [-3,7] [-6,2] [5]$. The Cypriot value of such division is equal to $\Sigma_{i=1}^{k} i \cdot \mathrm{sum}_i$ where $k$ is the number of subarrays that we divided the array into and $\mathrm{sum}_i$ is the sum of the $i$-th subarray. The Cypriot value of the division $[1] [-3,7] [-6,2] [5]$ is $1 \cdot 1 + 2 \cdot (-3 + 7) + 3 \cdot (-6 + 2) + 4 \cdot 5 = 17$. Theofanis is wondering what is the \textbf{maximum} Cypriot value of any division of the array. $^{\dagger}$ An array $b$ is a subarray of an array $a$ if $b$ can be obtained from $a$ by deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end. In particular, an array is a subarray of itself.
Let $suf_i$ be the suffix sum of the array (from the $i^{th}$ position to the $n^{th}$). The Cypriot value of a division equals the sum of $suf_{L_i}$, where $L_i$ is the leftmost element of the $i^{th}$ subarray. Necessarily $L_1 = 1$, and every other index can be chosen as a subarray start at most once. So we start with $ans = suf_1$ and for every $i > 1$ we add $suf_i$ if it is positive. We can easily see that this greedy works.
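A sketch of the suffix-sum greedy (the name `max_cypriot` is ours); on the statement's example $[1,-3,7,-6,2,5]$ it picks every positive suffix:

```cpp
#include <vector>

// The Cypriot value equals the sum of suffix sums at each subarray's left
// endpoint: suf[0] is forced, every positive suf[i] (i > 0) is worth taking.
long long max_cypriot(const std::vector<int>& a) {
    int n = (int)a.size();
    std::vector<long long> suf(n + 1, 0);
    for (int i = n - 1; i >= 0; i--) suf[i] = suf[i + 1] + a[i];
    long long ans = suf[0];            // the first subarray always starts at position 1
    for (int i = 1; i < n; i++)
        if (suf[i] > 0) ans += suf[i]; // opening a new subarray here only helps
    return ans;
}
```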
[ "constructive algorithms", "greedy" ]
1,400
#include <bits/stdc++.h> using namespace std; int main(){ ios_base::sync_with_stdio(0); cin.tie(0); int t; cin>>t; while(t--){ int n; cin>>n; int arr[n]; long long suf[n+1] = {0}; for(int i = 0;i < n;i++){ cin>>arr[i]; } for(int i = n-1;i >= 0;i--){ suf[i] = suf[i+1] + arr[i]; } long long ans = suf[0]; for(int i = 1;i < n;i++){ if(suf[i] > 0){ ans += suf[i]; } } cout<<ans<<"\n"; } }
1903
D1
Maximum And Queries (easy version)
\textbf{This is the easy version of the problem. The only difference between the two versions is the constraint on $n$ and $q$, the memory and time limits. You can make hacks only if all versions of the problem are solved.} Theofanis really likes to play with the bits of numbers. He has an array $a$ of size $n$ and an integer $k$. He can make at most $k$ operations in the array. In each operation, he picks a single element and increases it by $1$. He found the \textbf{maximum} bitwise AND that array $a$ can have after at most $k$ operations. Theofanis has put a lot of work into finding this value and was very happy with his result. Unfortunately, Adaś, being the evil person that he is, decided to bully him by repeatedly changing the value of $k$. Help Theofanis by calculating the \textbf{maximum} possible bitwise AND for $q$ different values of $k$. Note that queries are independent.
Construct the answer by iterating through every single bit from large to small ($2^{60}$ to $2^0$). Denote by $x$ the current answer and by $b$ the bit we want to add. For each $i$ ($1 \le i \le n$): if the $b$-th bit in $a_i$ is on, we do not need to use any operations; if the $b$-th bit in $a_i$ is $0$, then we need to increase $a_i$ by $2^b - (a_i \bmod 2^b)$. If the total number of operations required to get from $x$ to $x+2^b$ is at most $k$, decrease $k$ by that number and change the array accordingly. Otherwise do nothing.
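A compact sketch of this greedy for small inputs (the name `solve_one` and the `maxbit` parameter are ours; the increment also re-sets previously committed answer bits, as in the reference solution):

```cpp
#include <vector>

// Greedy from high bit to low: a bit is kept only if the total cost of giving
// it to every element (while preserving already committed answer bits) fits in k.
long long solve_one(std::vector<long long> a, long long k, int maxbit = 20) {
    long long ans = 0;
    for (int b = maxbit; b >= 0; b--) {
        long long need = 0;
        std::vector<long long> na = a;
        bool ok = true;
        for (auto& v : na) {
            if (!(v & (1LL << b))) {
                long long p = (v >> b << b) + (1LL << b); // round up: bit b set, lower bits cleared
                p += ans ^ (p & ans);                     // re-set committed answer bits lost to the carry
                need += p - v;
                v = p;
            }
            if (need > k) { ok = false; break; }
        }
        if (ok) { k -= need; ans += 1LL << b; a = na; }
    }
    return ans;
}
```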
[ "binary search", "bitmasks", "brute force", "greedy" ]
1,700
#include<bits/stdc++.h> using namespace std; typedef long long ll; typedef long double ld; #define rep(a, b) for(int a = 0; a < (b); ++a) #define st first #define nd second #define pb push_back #define all(a) a.begin(), a.end() const int LIM=1e5+7; ll T[LIM], P[LIM], n, k; void solve() { rep(i, n) T[i]=P[i]; ll ans=0; for(ll i=60; i>=0; --i) { ll sum=0; rep(j, n) { if(T[j]&(1ll<<i)) continue; ll p=(T[j]/(1ll<<i))*(1ll<<i)+(1ll<<i); p+=ans^(p&ans); sum+=p-T[j]; if(sum > k){ break; } } if(sum>k) continue; rep(j, n) { if(T[j]&(1ll<<i)) continue; ll p=(T[j]/(1ll<<i))*(1ll<<i)+(1ll<<i); p+=ans^(p&ans); T[j]=p; } ans+=1ll<<i; k-=sum; } cout << ans << '\n'; } int main() { ios_base::sync_with_stdio(0); cin.tie(0); int q; cin >> n >> q; rep(i, n) cin >> P[i]; while(q--) { cin >> k; solve(); } }
1903
D2
Maximum And Queries (hard version)
\textbf{This is the hard version of the problem. The only difference between the two versions is the constraint on $n$ and $q$, the memory and time limits. You can make hacks only if all versions of the problem are solved.} Theofanis really likes to play with the bits of numbers. He has an array $a$ of size $n$ and an integer $k$. He can make at most $k$ operations in the array. In each operation, he picks a single element and increases it by $1$. He found the \textbf{maximum} bitwise AND that array $a$ can have after at most $k$ operations. Theofanis has put a lot of work into finding this value and was very happy with his result. Unfortunately, Adaś, being the evil person that he is, decided to bully him by repeatedly changing the value of $k$. Help Theofanis by calculating the \textbf{maximum} possible bitwise AND for $q$ different values of $k$. Note that queries are independent.
Let $S = \sum (2^{20} - a_i)$. If $k \ge S$ then the answer is $2^{20} + \lfloor \frac{k-S}{n} \rfloor$. Similarly to D1, let's construct the answer bit by bit. Let $x$ be the current answer and $b$ be the bit we want to add. Let's look at the number of operations we need to perform on the $i$-th element to change our answer from $x$ to $x+2^b$. If $x$ is not a submask of $a_i$, then after the answer $x$ was constructed, the $i$-th element has $0$s in all bits not greater than $b$; in this case we need to increase it by $2^b$. If $x+2^b$ is a submask of $a_i$, then we do not need to increase the $i$-th element. Otherwise, we need to increase the $i$-th element by $2^b - (a_i \bmod 2^b)$. We can handle all three cases efficiently if we precompute the following two arrays: $A(mask)$ - how many elements of the array is $mask$ a submask of; $B(mask, b)$ - the sum of $a_i \bmod 2^b$ over all $a_i$ of which $mask$ is a submask. Both arrays can be calculated efficiently using SOS dp. This allows us to answer the queries in $O(q \log a)$ with $O(a \log^2 a)$ preprocessing.
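The superset-sum SOS DP used for $A(mask)$ can be sketched like this (the name `superset_counts` is ours; a small bit count is assumed):

```cpp
#include <vector>

// Superset-sum SOS DP: cnt[mask] starts as the number of a_i equal to mask and
// ends as the number of a_i of which mask is a submask.
std::vector<long long> superset_counts(const std::vector<int>& a, int bits) {
    std::vector<long long> cnt(1 << bits, 0);
    for (int v : a) cnt[v]++;
    for (int b = 0; b < bits; b++)
        for (int m = 0; m < (1 << bits); m++)
            if (!(m & (1 << b)))
                cnt[m] += cnt[m | (1 << b)]; // inherit from the superset with bit b on
    return cnt;
}
```

The $B(mask, b)$ table follows the same recurrence, just summing $a_i \bmod 2^b$ instead of counting.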
[ "bitmasks", "divide and conquer", "dp", "greedy" ]
2,500
#include<bits/stdc++.h> using namespace std; typedef long double ld; typedef long long ll; #define rep(a, b) for(ll a = 0; a < (b); ++a) #define st first #define nd second #define pb push_back #define all(a) a.begin(), a.end() const int LIM=1e6+7; ll dpsum[1<<20][20], dpcnt[1<<20], T[LIM]; int main() { ios_base::sync_with_stdio(0); cin.tie(0); ll n, q; cin >> n >> q; ll sto=0, sfrom=0; rep(i, n) { cin >> T[i]; sto+=(1ll<<20ll)-T[i]; sfrom+=T[i]; ++dpcnt[T[i]]; ll sum=0; rep(j, 20) { sum+=T[i]&(1ll<<j); dpsum[T[i]][j]+=sum; } } rep(i, 20) rep(j, 1<<20) if(!(j&(1<<i))) dpcnt[j]+=dpcnt[j+(1<<i)]; rep(i, 20) rep(j, 1<<20) if(!(j&(1<<i))) rep(l, 20) dpsum[j][l]+=dpsum[j+(1<<i)][l]; while(q--) { ll k; cin >> k; if(k>=sto) { k+=sfrom; cout << k/n << '\n'; continue; } ll ans=0; for(ll i=19; i>=0; --i) { ll x=(n-dpcnt[ans|(1<<i)])*(1ll<<i); x-=dpsum[ans][i]-dpsum[ans|(1<<i)][i]; if(x<=k) { k-=x; ans|=1<<i; } } cout << ans << '\n'; } }
1903
E
Geo Game
This is an interactive problem. Theofanis and his sister are playing the following game. They have $n$ points in a 2D plane and a starting point $(s_x,s_y)$. Each player (starting from the first player) chooses one of the $n$ points that wasn't chosen before and adds to the sum (which is initially $0$) the \textbf{square} of the Euclidean distance from the previous point (which is either the starting point or it was chosen by the other person) to the new point (that the current player selected). The game ends after exactly $n$ moves (after all the points are chosen). The first player wins if the sum is even in the end. Otherwise, the second player wins. Theofanis is a very competitive person and he hates losing. Thus, he wants to choose whether he should play first or second. Can you show him, which player to choose, and how he should play to beat his sister?
If we went from point $(a,b)$ to $(c,d)$ then we add to the sum $a^2 + c^2 - 2ac + b^2 + d^2 - 2bd$, which is equal to $a \oplus b \oplus c \oplus d$ mod $2$ ($\oplus$ is bitwise xor). For each point, we find ($x$ mod $2$) $\oplus$ ($y$ mod $2$). Let $p_0 =$ the number of points with ($x$ mod $2$) $\oplus$ ($y$ mod $2$) $= 0$, $p_1 =$ the number of points with ($x$ mod $2$) $\oplus$ ($y$ mod $2$) $= 1$, and $v =$ ($s_x$ mod $2$) $\oplus$ ($s_y$ mod $2$). Let's say that we create a binary string that starts with $v$ and has another $p_0$ zeros and $p_1$ ones. If the number of indices with $s_i \neq s_{i+1}$ is odd then the sum will be odd, otherwise the sum will be even. If you are the first player then you want to have an even number of indices with $s_i \neq s_{i+1}$. That holds iff the first element of $s$ is the same as the last element of $s$. Thus, the second player will want to use up all $p_v$ occurrences of $v$ before the end (so that the last element is not equal to the first). He can do this iff $p_v < (n - 1) / 2$ (rounded down) (this means that $p_v < p_{v \oplus 1}$). If $p_{v \oplus 1} \le p_v$, you play as the first player and choose occurrences of $v \oplus 1$ until there aren't any; otherwise ($p_v < p_{v \oplus 1}$) you play as the second player and choose occurrences of $v$ until there aren't any.
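The parity identity in the first sentence can be checked directly (function names are ours):

```cpp
// The added term (a-c)^2 + (b-d)^2 = a^2 + c^2 - 2ac + b^2 + d^2 - 2bd has the
// same parity as a + b + c + d, i.e. as (a xor b xor c xor d) mod 2.
int sq_dist_parity(int a, int b, int c, int d) {
    long long s = 1LL * (a - c) * (a - c) + 1LL * (b - d) * (b - d);
    return (int)(s & 1);
}

int xor_parity(int a, int b, int c, int d) {
    return (a ^ b ^ c ^ d) & 1;  // (x ^ y) & 1 == (x & 1) ^ (y & 1), sign-safe
}
```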
[ "greedy", "interactive", "math" ]
2,000
#include <bits/stdc++.h> using namespace std; int main(){ int t; cin>>t; while(t--){ int n; cin>>n; int sx,sy; cin>>sx>>sy; int x[n],y[n]; set<int>p[2]; for(int i = 0;i < n;i++){ cin>>x[i]>>y[i]; p[(x[i] % 2) ^ (y[i] % 2)].insert(i+1); } int v = (sx % 2) ^ (sy % 2); if(p[v].size() >= p[v^1].size()){ cout<<"First"<<endl; v ^= 1; for(int i = 0;i < n;i++){ if(i % 2 == 0){ int j; if(!p[v].empty()){ j = (*p[v].begin()); p[v].erase(j); } else{ j = (*p[v^1].begin()); p[v^1].erase(j); } cout<<j<<endl; } else{ int j; cin>>j; if(p[0].count(j)){ p[0].erase(j); } else{ p[1].erase(j); } } } } else{ cout<<"Second"<<endl; for(int i = 0;i < n;i++){ if(i % 2 == 1){ int j; if(!p[v].empty()){ j = (*p[v].begin()); p[v].erase(j); } else{ j = (*p[v^1].begin()); p[v^1].erase(j); } cout<<j<<endl; } else{ int j; cin>>j; if(p[0].count(j)){ p[0].erase(j); } else{ p[1].erase(j); } } } } } }
1903
F
Babysitting
Theofanis wants to play video games, however he should also take care of his sister. Since Theofanis is a CS major, he found a way to do both. He will install some cameras in his house in order to make sure his sister is okay. His house is an undirected graph with $n$ nodes and $m$ edges. His sister likes to play at the edges of the graph, so he has to install a camera to at least one endpoint of every edge of the graph. Theofanis wants to find a vertex cover that maximizes the minimum difference between indices of the chosen nodes. More formally, let $a_1, a_2, \ldots, a_k$ be a vertex cover of the graph. Let the minimum difference between indices of the chosen nodes be the minimum $\lvert a_i - a_j \rvert$ (where $i \neq j$) out of the nodes that you chose. \textbf{If $k = 1$ then we assume that the minimum difference between indices of the chosen nodes is $n$}. Can you find the maximum possible minimum difference between indices of the chosen nodes over all vertex covers?
We can solve this problem using 2-sat and binary searching the answer. In order to have a vertex cover we should have $(u_i | v_i)$ for every edge. And if we want to check whether the answer is greater than or equal to $mid$, we want to have $(!x | !y)$ for all pairs $1\le x, y \le n$ such that $|x - y| < mid$. However, this way we will have too many edges. To fix this problem we can create some helper nodes and edges in a structure similar to a segment tree. In 2-sat, when we connect two nodes, there are two possibilities for each (taken or not taken). If we have an edge from $a$ to $b$, it means that when $a$ holds then definitely $b$ also holds. So, we build a binary tree in a structure similar to a segment tree, so that each node is connected to its two children, creating a binary directed tree in which each node corresponds to a range. When we want to have edges from $x$ to $[x+1,x+mid)$ and $(x-mid,x-1]$, we just choose the nodes in this binary tree that are needed to cover the range (there are at most $\log(n)$ of them). That makes the total time complexity $O(M \log^2(n))$. There is also an $O(M \log(n))$ solution. Shoutout to jeroenodb for pointing it out. The idea is similar. We use 2-sat and binary search the answer; however, this time we do the 2-sat differently. In our graph, we will have the $m$ edges to have a vertex cover, so we should have $(u_i | v_i)$. We add those edges, and then we should also have the edges corresponding to $(!x | !y)$ for all pairs $1\le x, y \le n$ such that $|x - y| < mid$. But we don't need all of them: it is enough to go only to the first unvisited node to the left and the first unvisited node to the right, which we can find by storing the set of unvisited nodes and doing lower-bound lookups with a DSU.
[ "2-sat", "binary search", "data structures", "graphs", "trees" ]
2,500
#pragma GCC optimize("O3") #include "bits/stdc++.h" using namespace std; #define all(x) begin(x),end(x) template<typename A, typename B> ostream& operator<<(ostream &os, const pair<A, B> &p) { return os << '(' << p.first << ", " << p.second << ')'; } template<typename T_container, typename T = typename enable_if<!is_same<T_container, string>::value, typename T_container::value_type>::type> ostream& operator<<(ostream &os, const T_container &v) { string sep; for (const T &x : v) os << sep << x, sep = " "; return os; } #define debug(a) cerr << "(" << #a << ": " << a << ")\n"; typedef long long ll; typedef vector<int> vi; typedef vector<basic_string<int>> vvi; typedef pair<int,int> pi; const int mxN = 1e5+1, oo = 1e9; template<int (*merge)(int,int), int (*init)(int)> struct DSU{ vi sz, dat; DSU(int n) : sz(n,-1),dat(n) { for(int i=0;i<n;++i) dat[i] = init(i); } void link(int a, int b) { if(sz[a]>sz[b]) { swap(a,b); } sz[a]+=sz[b]; sz[b]=a; dat[a] = merge(dat[a],dat[b]); } bool unite(int a, int b) { int pa = find(a),pb = find(b); if(pa!=pb) link(pa,pb); return pa!=pb; } int get(int i) { return dat[find(i)]; } int find(int a) { if(sz[a]<0) return a; return sz[a] = find(sz[a]); } }; int dec(int i) {return i-1;} int inc(int i) {return i+1;} int mymin(int a, int b) {return min(a,b);} int mymax(int a, int b) {return max(a,b);} bool solve(const vvi& adj, const vvi& rev, int mid) { int n = rev.size()/2; DSU<mymin, dec> dsuL(n); DSU<mymax, inc> dsuR(n); auto getL = [&](int i) { if(i>=n) i-=n; int l = dsuL.get(i); if(l==-1 or abs(i-l)>=mid) return 0; return l+1; }; auto getR = [&](int i) { if(i>=n) i-=n; int r = dsuR.get(i); if(r==n or abs(i-r)>=mid) return 0; return r+1; }; auto rem = [&](int at) { if(at>=n) at-=n; if(at) dsuR.unite(at-1,at); if(at+1<n) dsuL.unite(at,at+1); }; vector<bool> vis(n*2); vi ord; auto dfs = [&](auto&& self, int at) -> void { vis[at]=1; if(at<n) { while(int l = getL(at)) self(self,l-1+n); while(int r = getR(at)) self(self,r-1+n); } else rem(at); 
for(int to : adj[at]) if(!vis[to]) self(self,to); ord.push_back(at); }; for(int i=0;i<2*n;++i) if(!vis[i]) dfs(dfs,i); reverse(all(ord)); fill(all(vis),0); dsuL = DSU<mymin,dec>(n); dsuR = DSU<mymax,inc>(n); int comp=0; vi comps(2*n); auto dfs2 = [&](auto&& self, int at) -> void { comps[at]=comp; vis[at]=1; if(at>=n) { while(int l = getL(at)) self(self,l-1); while(int r = getR(at)) self(self,r-1); } else rem(at); for(int to : rev[at]) if(!vis[to]) self(self,to); }; for(int i : ord) if(!vis[i]) { dfs2(dfs2,i); comp++; } for(int i=0;i<n;++i) if(comps[i]==comps[i+n]) return false; return true; } void solve() { int n,m; cin >> n >> m; vvi adj(2*n),rev(2*n); auto addE = [&](int u, int v) { adj[u].push_back(v); rev[v].push_back(u); }; for(int i=0;i<m;++i) { int u,v; cin >> u >> v; --u,--v; addE(u+n,v); addE(v+n,u); } int lo=1,hi=n; while(lo<hi) { int mid = (lo+hi+1)/2; if(solve(adj,rev,mid)) { lo= mid; } else hi = mid-1; } cout << lo << '\n'; } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); int t; cin >> t; while(t--) solve(); }
1904
A
Forked!
Lunchbox is done with playing chess! His queen and king just got forked again! In chess, a fork is when a knight attacks two pieces of higher value, commonly the king and the queen. Lunchbox knows that knights can be tricky, and in the version of chess that he is playing, knights are even trickier: instead of moving $1$ tile in one direction and $2$ tiles in the other, knights in Lunchbox's modified game move $a$ tiles in one direction and $b$ tiles in the other. Lunchbox is playing chess on an infinite chessboard which contains all cells $(x,y)$ where $x$ and $y$ are (possibly negative) integers. Lunchbox's king and queen are placed on cells $(x_K,y_K)$ and $(x_Q,y_Q)$ respectively. Find the number of positions such that if a knight was placed on that cell, it would attack both the king and queen.
There are at most $8$ positions of the knight that can attack a single cell. Therefore, we can find all $8$ positions that attack the king and the $8$ positions that attack the queen and count the number of positions that appear in both of these lists.
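A sketch of the generate-and-intersect idea (function names are ours):

```cpp
#include <set>
#include <utility>

// All (at most 8) cells from which an (a,b)-knight attacks (x,y).
std::set<std::pair<long long, long long>> attackers(long long a, long long b,
                                                    long long x, long long y) {
    std::set<std::pair<long long, long long>> s;
    for (int sx : {-1, 1})
        for (int sy : {-1, 1}) {
            s.insert({x + sx * a, y + sy * b});
            s.insert({x + sx * b, y + sy * a});
        }
    return s;
}

// Number of cells attacking both the king and the queen.
int count_forks(long long a, long long b, long long xk, long long yk,
                long long xq, long long yq) {
    auto s1 = attackers(a, b, xk, yk);
    auto s2 = attackers(a, b, xq, yq);
    int cnt = 0;
    for (const auto& p : s1) cnt += (int)s2.count(p);
    return cnt;
}
```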
[ "brute force", "implementation" ]
900
#include <bits/stdc++.h> using namespace std; int dx[4] = {-1, 1, -1, 1}, dy[4] = {-1, -1, 1, 1}; int main(){ int t; cin >> t; for(int i = 0; i < t; i++){ int a, b; cin >> a >> b; int x1, y1, x2, y2; cin >> x1 >> y1 >> x2 >> y2; set<pair<int, int>> st1, st2; for(int j = 0; j < 4; j++){ st1.insert({x1+dx[j]*a, y1+dy[j]*b}); st2.insert({x2+dx[j]*a, y2+dy[j]*b}); st1.insert({x1+dx[j]*b, y1+dy[j]*a}); st2.insert({x2+dx[j]*b, y2+dy[j]*a}); } int ans = 0; for(auto x : st1) if(st2.find(x) != st2.end()) ans++; cout << ans << '\n'; } }
1904
B
Collecting Game
You are given an array $a$ of $n$ positive integers and a score. If your score is greater than or equal to $a_i$, then you can increase your score by $a_i$ and remove $a_i$ from the array. For each index $i$, output the maximum number of additional array elements that you can remove if you remove $a_i$ and then set your score to $a_i$. Note that the removal of $a_i$ should not be counted in the answer.
Let's sort array $a$. The answer for the largest element is $n-1$ because the score, which is $a_n$, cannot be smaller than any of the other elements. Now, consider the second largest element. The answer is at least $n-2$ because every element that is not greater than $a_{n-1}$ can be taken. Then, we check if the score is at least $a_n$. This inspires the following solution: first, we find the prefix sum $p$ of array $a$. We calculate the answer in decreasing order of $a_i$. To calculate the answer for an $a_i$, we find the largest $j$ such that $p_i\geq a_j$ and set the answer for $i$ equal to the answer of $j$.
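A sketch of the sweep over the sorted array (the name `solve_sorted` is ours; it returns answers per sorted position, while the full solution also maps them back to the original indices):

```cpp
#include <vector>

// Two-pointer sweep over the sorted array, mirroring the reference solution.
// ans[i] = number of additional removable elements when starting from sorted
// element a[i] (0-indexed); the start itself is not counted.
std::vector<int> solve_sorted(const std::vector<long long>& a) {
    int n = (int)a.size();
    std::vector<int> ans(n);
    long long s = 0;   // sum of everything removed so far in the current run
    int r = -1;        // last index already swallowed
    for (int i = 0; i < n; i++) {
        if (r < i) {
            s += a[i];                                       // the whole prefix a[0..i] is removable
            r = i;
            while (r + 1 < n && s >= a[r + 1]) s += a[++r];  // greedily extend to the right
        }
        ans[i] = r;    // indices 0..r are removable; excluding a[i] leaves r of them
    }
    return ans;
}
```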
[ "binary search", "dp", "greedy", "sortings", "two pointers" ]
1,100
#include <bits/stdc++.h> #include <ext/pb_ds/assoc_container.hpp> #include <ext/pb_ds/tree_policy.hpp> using namespace __gnu_pbds; using namespace std; #define pb push_back #define ff first #define ss second typedef long long ll; typedef long double ld; typedef pair<int, int> pii; typedef pair<ll, ll> pll; typedef pair<ld, ld> pld; const int INF = 1e9; const ll LLINF = 1e18; const int MOD = 1e9 + 7; template<class K> using sset = tree<K, null_type, less<K>, rb_tree_tag, tree_order_statistics_node_update>; inline ll ceil0(ll a, ll b) { return a / b + ((a ^ b) > 0 && a % b); } void setIO() { ios_base::sync_with_stdio(0); cin.tie(0); } int main(){ setIO(); int T; cin >> T; for(int tt = 1; tt <= T; tt++){ int n; cin >> n; pii arr[n + 1]; for(int i = 1; i <= n; i++) cin >> arr[i].ff, arr[i].ss = i; sort(arr + 1, arr + n + 1); int nxt[n + 1]; ll sum[n + 1]; int ans[n + 1]; nxt[0] = sum[0] = 0; for(int i = 1; i <= n; i++){ if(nxt[i - 1] >= i){ nxt[i] = nxt[i - 1]; sum[i] = sum[i - 1]; } else { sum[i] = sum[i - 1] + arr[i].ff; nxt[i] = i; while(nxt[i] + 1 <= n && sum[i] >= arr[nxt[i] + 1].ff){ nxt[i]++; sum[i] += arr[nxt[i]].ff; } } ans[arr[i].ss] = nxt[i]; } for(int i = 1; i <= n; i++) cout << ans[i] - 1 << " "; cout << endl; } }
1904
C
Array Game
You are given an array $a$ of $n$ positive integers. In one operation, you must pick some $(i, j)$ such that $1\leq i < j\leq |a|$ and append $|a_i - a_j|$ to the end of the $a$ (i.e. increase $n$ by $1$ and set $a_n$ to $|a_i - a_j|$). Your task is to minimize and print the minimum value of $a$ after performing $k$ operations.
If $k\geq 3$, the answer is equal to $0$: after performing an operation on the same pair $(i, j)$ twice, performing an operation on the two new values (which are equal) results in $0$. Therefore, let's consider the case for $1\leq k\leq 2$. For $k=1$, it is sufficient to sort the array and output the minimum among all $a_i$ and all adjacent differences $a_{i+1} - a_i$. For $k=2$, let's brute force the first operation. If the newly created value is $v$, then it is sufficient to find the smallest $a_i$ satisfying $a_i\geq v$ and the greatest $a_i$ satisfying $a_i\leq v$ and relax the answer with $|a_i - v|$. Also, remember to consider the cases of performing no operation or one operation. This runs in $O(N^2\log N)$. There also exists a solution in $O(N^2)$ using a two pointers approach.
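A sketch of the casework (the name `min_after_ops` is ours):

```cpp
#include <algorithm>
#include <vector>

// k >= 3 gives 0; k == 1 is the minimum over elements and adjacent differences
// of the sorted array; k == 2 brute-forces the first produced difference and
// binary-searches the closest existing element to it.
long long min_after_ops(std::vector<long long> a, int k) {
    if (k >= 3) return 0;
    std::sort(a.begin(), a.end());
    int n = (int)a.size();
    long long best = a[0];
    for (int i = 0; i + 1 < n; i++) best = std::min(best, a[i + 1] - a[i]);
    if (k == 1) return best;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < i; j++) {
            long long v = a[i] - a[j];                       // value produced by the first operation
            auto it = std::lower_bound(a.begin(), a.end(), v);
            if (it != a.end()) best = std::min(best, *it - v);
            if (it != a.begin()) best = std::min(best, v - *std::prev(it));
        }
    return best;
}
```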
[ "binary search", "brute force", "data structures", "sortings", "two pointers" ]
1,400
#include <bits/stdc++.h> using namespace std; #define int long long signed main() { int t; cin >> t; while (t--) { int n, k; cin >> n >> k; vector<int> a(n); for (int i = 0; i < n; i++) cin >> a[i]; if (k >= 3) { cout << 0 << endl; continue; } sort(begin(a), end(a)); int d = a[0]; for (int i = 0; i < n - 1; i++) d = min(d, a[i + 1] - a[i]); if (k == 1) { cout << d << endl; continue; } for (int i = 0; i < n; i++) for (int j = 0; j < i; j++) { int v = a[i] - a[j]; int p = lower_bound(begin(a), end(a), v) - begin(a); if (p < n) d = min(d, a[p] - v); if (p > 0) d = min(d, v - a[p - 1]); } cout << d << endl; } }
1904
D2
Set To Max (Hard Version)
\textbf{This is the hard version of the problem. The only differences between the two versions of this problem are the constraints on $n$ and the time limit. You can make hacks only if all versions of the problem are solved.} You are given two arrays $a$ and $b$ of length $n$. You can perform the following operation some (possibly zero) times: - choose $l$ and $r$ such that $1 \leq l \leq r \leq n$. - let $x=\max(a_l,a_{l+1},\ldots,a_r)$. - for all $l \leq i \leq r$, set $a_i := x$. Determine if you can make array $a$ equal to array $b$.
Can we reduce the number of intervals we want to apply an operation on? What is the necessary condition to perform an operation on an interval? If $b_i < a_i$ for any $i$, then it is clearly impossible. In order for $a_i$ to become $b_i$, $i$ must be contained in an interval that also contains a $j$ where $a_j = b_i$. Note that if there is a triple $i \le j < k$ where $a_j = a_k = b_i$, then it is never optimal to apply the operation on interval $[i, k]$, since applying the operation on interval $[i, j]$ will be sufficient. Thus, for $i$ we only need to consider the closest $a_j = b_i$ to the right or left of $i$. Let's find the necessary conditions for us to apply an operation on the interval $[i, j]$. First of all, $a_k \le b_i$ for all $i \le k \le j$. Second, $b_k \ge b_i$ for all $i \le k \le j$. It turns out these conditions are also sufficient, since we can apply these operations in increasing order of $b_i$ without them interfering with each other. If we check that for every $i$ there exists an interval $[i, j]$ or $[j, i]$ that satisfies the necessary conditions, then there will exist a sequence of operations to transform $a$ into $b$. Checking the conditions can be done with brute force for D1 or using monotonic stacks or segment trees for D2.
[ "constructive algorithms", "data structures", "divide and conquer", "greedy", "implementation", "sortings" ]
1,800
#include <bits/stdc++.h> #include <ext/pb_ds/assoc_container.hpp> #include <ext/pb_ds/tree_policy.hpp> using namespace __gnu_pbds; using namespace std; #define pb push_back #define ff first #define ss second typedef long long ll; typedef long double ld; typedef pair<int, int> pii; typedef pair<ll, ll> pll; typedef pair<ld, ld> pld; const int INF = 1e9; const ll LLINF = 1e18; const int MOD = 1e9 + 7; template<class K> using sset = tree<K, null_type, less<K>, rb_tree_tag, tree_order_statistics_node_update>; inline ll ceil0(ll a, ll b) { return a / b + ((a ^ b) > 0 && a % b); } void setIO() { ios_base::sync_with_stdio(0); cin.tie(0); } int main(){ setIO(); int T; cin >> T; for(int tt = 1; tt <= T; tt++){ int n; cin >> n; int a[n + 1], b[n + 1]; for(int i = 1; i <= n; i++) cin >> a[i]; for(int i = 1; i <= n; i++) cin >> b[i]; bool val[n + 1]; memset(val, false, sizeof(val)); for(int t = 0; t < 2; t++){ int prvb[n + 1]; //prev smaller int nxta[n + 1]; //next greater stack<pii> s; s.push({INF, n + 1}); for(int i = n; i >= 1; i--){ while(s.top().ff <= a[i]) s.pop(); nxta[i] = s.top().ss; s.push({a[i], i}); } while(!s.empty()) s.pop(); s.push({0, 0}); for(int i = 1; i <= n; i++){ while(s.top().ff >= b[i]) s.pop(); prvb[i] = s.top().ss; s.push({b[i], i}); } int m[n + 1]; memset(m, 0, sizeof(m)); for(int i = 1; i <= n; i++){ m[a[i]] = i; if(a[i] <= b[i] && m[b[i]]) val[i] |= prvb[i] < m[b[i]] && nxta[m[b[i]]] > i; } reverse(a + 1, a + n + 1); reverse(b + 1, b + n + 1); reverse(val + 1, val + n + 1); } bool ans = true; for(int i = 1; i <= n; i++) ans &= val[i]; cout << (ans ? "YES" : "NO") << endl; } }
1904
E
Tree Queries
\begin{quote} Those who don't work don't eat. Get the things you want with your own power. But believe, the earnest and serious people are the ones who have the last laugh... But even then, I won't give you a present. \hfill —Santa, Hayate no Gotoku! \end{quote} Since Hayate didn't get any Christmas presents from Santa, he is instead left solving a tree query problem. Hayate has a tree with $n$ nodes. Hayate now wants you to answer $q$ queries. Each query consists of a node $x$ and $k$ other additional nodes $a_1,a_2,\ldots,a_k$. These $k+1$ nodes are guaranteed to be all distinct. For each query, you must find the length of the longest simple path starting at node $x^\dagger$ after removing nodes $a_1,a_2,\ldots,a_k$ along with all edges connected to at least one of nodes $a_1,a_2,\ldots,a_k$. $^\dagger$ A simple path of length $k$ starting at node $x$ is a sequence of \textbf{distinct} nodes $x=u_0,u_1,\ldots,u_k$ such that there exists an edge between nodes $u_{i-1}$ and $u_i$ for all $1 \leq i \leq k$.
The solution doesn't involve virtual trees. What is an easy way to represent the tree? Consider the Euler tour of the tree, where $in[u]$ is the entry time and $out[u]$ is the exit time of each node. The interval $[in[u], out[u]]$ corresponds to the subtree of $u$. Removing a node is equivalent to blocking some intervals on the Euler tour. There are two cases for how a removed node $a$ blocks intervals with respect to the query node $x$. If $a$ is an ancestor of $x$, then the set of reachable nodes is reduced to the interval $[in[nxt_a], out[nxt_a]]$, where $nxt_a$ is the first node on the path from $a$ to $x$. This is equivalent to blocking the intervals $[0, in[nxt_a])$ and $(out[nxt_a], n - 1]$. If $a$ is not an ancestor, then the interval $[in[a], out[a]]$ is blocked. Let's build a lazy segment tree on the Euler tour of the tree, where each leaf stores the depth of the corresponding tree node. We can re-root the tree from $a$ to $b$ by subtracting one from all nodes in the range $[in[b], out[b]]$ and adding one to all other nodes. Thus, we can traverse the tree while re-rooting and process our queries offline. When the query node $x$ is the root, block all the necessary intervals and then find the maximum value in the segment tree. In a tree, one of the farthest nodes from some node $x$ is one of the two endpoints of the diameter. Let's try to find the diameter of the connected subgraph node $x$ is in after the nodes $a_1, \ldots, a_k$ are removed. Consider an Euler tour of the tree and order the nodes by their entry times. When $k$ nodes are removed, the remaining nodes form $O(k)$ contiguous intervals in the tour. Let's build a segtree/sparse table where each node stores the diameter (as a pair of nodes) for the nodes with $in$ values in the range $[l, r]$. To merge two diameters, we can enumerate all $\binom{4}{2}$ ways to pick the new diameter's endpoints among the four old endpoints and take the best one. 
To answer a query, we can first generate the list of banned intervals (just like in solution 1) and use it to generate the list of unbanned intervals. Then we can query our segtree for the diameter of each of these ranges. Finally, we can combine the answers of the separate queries to obtain the diameter of the connected subgraph. We know the farthest node from node $x$ is one of the two endpoints of that diameter, so it suffices to manually check the distance from $x$ to each of those two nodes. The final complexity is $O(n \log^2 n + \sum k \log n)$.
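As a small sketch of the diameter-merging step only (not the full segment-tree solution), the snippet below merges two diameters of node sets in the same tree by checking all pairs among the four endpoints. The names `bfs_dist` and `merge_diameters` are mine, and BFS is a stand-in for the $O(\log n)$ LCA-based distance used in the real solution:

```python
from collections import deque

def bfs_dist(adj, src):
    # single-source distances on an unweighted tree (adjacency-list dict)
    d = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def merge_diameters(adj, da, db):
    # da, db: diameters as (endpoint, endpoint) pairs; the merged diameter
    # is realized by one of the C(4,2) pairs among the four endpoints
    ends = [da[0], da[1], db[0], db[1]]
    best = None
    for i in range(4):
        di = bfs_dist(adj, ends[i])
        for j in range(i, 4):
            cand = (di[ends[j]], (ends[i], ends[j]))
            if best is None or cand[0] > best[0]:
                best = cand
    return best  # (length, (u, v))
```

On the path 1-2-3-4-5, merging the diameters of {1,2} and {4,5} yields the pair (1, 5) with length 4.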
[ "data structures", "dfs and similar", "graphs", "implementation", "trees" ]
2,500
#include <bits/stdc++.h> #define sz(x) ((int)(x.size())) #define all(x) x.begin(), x.end() #define pb push_back #define eb emplace_back const int MX = 2e5 +10, int_max = 0x3f3f3f3f; using namespace std; //lca template start vector<int> dep, sz, par, head, tin, tout, tour; vector<vector<int>> adj; int n, ind, q; void dfs(int x, int p){ sz[x] = 1; dep[x] = dep[p] + 1; par[x] = p; for(auto &i : adj[x]){ if(i == p) continue; dfs(i, x); sz[x] += sz[i]; if(adj[x][0] == p || sz[i] > sz[adj[x][0]]) swap(adj[x][0], i); } if(p != 0) adj[x].erase(find(all(adj[x]), p)); } void dfs2(int x, int p){ tour[ind] = x; tin[x] = ind++; for(auto &i : adj[x]){ if(i == p) continue; head[i] = (i == adj[x][0] ? head[x] : i); dfs2(i, x); } tout[x] = ind; } int k_up(int u, int k){ if(dep[u] <= k) return -1; while(k > dep[u] - dep[head[u]]){ k -= dep[u] - dep[head[u]] + 1; u = par[head[u]]; } return tour[tin[u] - k]; } int lca(int a, int b){ while(head[a] != head[b]){ if(dep[head[a]] > dep[head[b]]) swap(a, b); b = par[head[b]]; } if(dep[a] > dep[b]) swap(a, b); return a; } int dist(int a, int b){ return dep[a] + dep[b] - 2*dep[lca(a, b)]; } //lca template end //segtree template start #define ff first #define ss second int dist(pair<int, int> a){ return dist(a.ff, a.ss); } pair<int, int> merge(pair<int, int> a, pair<int, int> b){ auto p = max(pair(dist(a), a), pair(dist(b), b)); for(auto x : {a.ff, a.ss}){ for(auto y : {b.ff, b.ss}){ if(x == 0 || y == 0) continue; p = max(p, pair(dist(pair(x, y)), pair(x, y))); } } return p.ss; } pair<int, int> mx[MX*4]; #define LC(k) (2*k) #define RC(k) (2*k +1) void update(int p, int v, int k, int L, int R){ if(L + 1 == R){ mx[k] = {tour[p], tour[p]}; return ; } int mid = (L + R)/2; if(p < mid) update(p, v, LC(k), L, mid); else update(p, v, RC(k), mid, R); mx[k] = merge(mx[LC(k)], mx[RC(k)]); } void query(int qL, int qR, vector<pair<int, int>>& ret, int k, int L, int R){ if(qR <= L || R <= qL) return ; if(qL <= L && R <= qR){ ret.push_back(mx[k]); return ; } 
int mid = (L + R)/2; query(qL, qR, ret, LC(k), L, mid); query(qL, qR, ret, RC(k), mid, R); } //segtree template end int query(vector<int> arr, int x){ vector<pair<int, int>> banned, ret; for(int u : arr){ if(lca(u, x) == u){ u = k_up(x, dep[x] - dep[u] - 1); banned.push_back({0, tin[u]}); banned.push_back({tout[u], n}); }else{ banned.push_back({tin[u], tout[u]}); } } sort(all(banned), [&](pair<int, int> a, pair<int, int> b){ return (a.ff < b.ff) || (a.ff == b.ff && a.ss > b.ss); }); vector<pair<int, int>> tbanned; //remove nested intervals int mx = 0; for(auto [a, b] : banned){ if(b <= mx) continue; else if(a != b){ tbanned.pb({a, b}); mx = b; } } banned = tbanned; int tim = 0; for(auto [a, b] : banned){ if(tim < a) query(tim, a, ret, 1, 0, n); tim = b; } if(tim < n) query(tim, n, ret, 1, 0, n); pair<int, int> dia = pair(x, x); for(auto p : ret) dia = merge(dia, p); int ans = max(dist(x, dia.ff), dist(x, dia.ss)); return ans; } void solve(){ cin >> n >> q; dep = sz = par = head = tin = tout = tour = vector<int>(n+1, 0); adj = vector<vector<int>>(n+1); for(int i = 1; i<n; i++){ int a, b; cin >> a >> b; adj[a].push_back(b); adj[b].push_back(a); } dfs(1, 0); head[1] = 1; dfs2(1, 0); for(int i = 1; i<=n; i++){ update(tin[i], dep[i], 1, 0, n); } for(int i = 1; i<=q; i++){ int x, k; cin >> x >> k; vector<int> arr(k); for(int& y : arr) cin >> y; cout << query(arr, x) << "\n"; } } signed main(){ cin.tie(0) -> sync_with_stdio(0); int T = 1; //cin >> T; for(int i = 1; i<=T; i++){ //cout << "Case #" << i << ": "; solve(); } return 0; }
1904
F
Beautiful Tree
Lunchbox has a tree of size $n$ rooted at node $1$. Each node is then assigned a value. Lunchbox considers the tree to be beautiful if each value is distinct and ranges from $1$ to $n$. In addition, a beautiful tree must also satisfy $m$ requirements of $2$ types: - "1 a b c" — The node with the smallest value on the path between nodes $a$ and $b$ must be located at $c$. - "2 a b c" — The node with the largest value on the path between nodes $a$ and $b$ must be located at $c$. Now, you must assign values to each node such that the resulting tree is beautiful. If it is impossible to do so, output $-1$.
Can we represent the conditions as a graph? Let's rewrite the condition that node $a$ must have a smaller value than node $b$ as a directed edge from $a$ to $b$. Then, we can assign each node a value based on a topological sort of this new directed graph. If this directed graph has a cycle, it is clear that there is no way to order the nodes. With this in mind, we can try to construct a graph that has these properties. Once we have the graph, we can run a topological sort to find the answer. For now, let's consider the problem if it only had type 1 requirements (type 2 requirements can be handled very similarly). The problem thus reduces to "given a path and a node, add a directed edge from the node to every node on that path." To do this, we can use binary lifting. For each node $a$, create $k$ dummy nodes, the $i$th of which represents the minimum over the path between node $a$ and the $2^i$th ancestor of $a$. Now, we can draw a directed edge from the $i$th dummy node of $a$ to the $(i-1)$th dummy node of $a$ and to the $(i-1)$th dummy node of the $2^{i-1}$th ancestor of $a$. Now, to add an edge from any node to a vertical path of the tree, we can repeatedly add an edge from that node to the largest dummy node we can. This adds $O(\log n)$ edges per requirement. The final complexity is $O((n+m)\log n)$ time and $O((n+m)\log n)$ memory.
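A minimal sketch of the core reduction (constraints as a DAG, values assigned in topological order), without the binary-lifting dummy nodes. The function name and edge format are illustrative, not part of the model solution:

```python
from collections import defaultdict, deque

def assign_values(n, edges):
    # edges: pairs (u, v) meaning value[u] < value[v] must hold;
    # returns values for nodes 1..n, or None if the constraints are cyclic
    adj = defaultdict(list)
    indeg = [0] * (n + 1)
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    q = deque(u for u in range(1, n + 1) if indeg[u] == 0)
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    if len(order) < n:
        return None  # cycle -> no valid assignment exists
    val = [0] * (n + 1)
    for rank, u in enumerate(order, start=1):
        val[u] = rank  # earlier in topological order -> smaller value
    return val[1:]
```

For the chain of constraints 1 < 2 < 3 this returns [1, 2, 3]; a cyclic pair of constraints returns None, matching the "-1" case of the problem.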
[ "data structures", "dfs and similar", "graphs", "implementation", "trees" ]
2,800
#pragma GCC optimize("O3,unroll-loops") #pragma GCC target("avx,avx2,fma") #pragma GCC target("sse4,popcnt,abm,mmx,tune=native") #include <bits/stdc++.h> #include <ext/pb_ds/assoc_container.hpp> #include <ext/pb_ds/tree_policy.hpp> using namespace __gnu_pbds; using namespace std; #define pb push_back #define ff first #define ss second typedef long long ll; typedef long double ld; typedef pair<int, int> pii; typedef pair<ll, ll> pll; typedef pair<ld, ld> pld; const int INF = 1e9; const ll LLINF = 1e18; const int MOD = 1e9 + 7; template<class K> using sset = tree<K, null_type, less<K>, rb_tree_tag, tree_order_statistics_node_update>; inline ll ceil0(ll a, ll b) { return a / b + ((a ^ b) > 0 && a % b); } void setIO() { ios_base::sync_with_stdio(0); cin.tie(0); } const int MAXN = 200'000; const int LG = 18; const int MAXM = 200'000; vector<int> g[MAXN + 5]; int sz[MAXN + 5], in[MAXN + 5], par[MAXN + 5], depth[MAXN + 5], head[MAXN + 5], tim; int n, m; void dfs1(int x, int p){ sz[x] = 1; for(int &i : g[x]){ if(i == p) continue; dfs1(i, x); sz[x] += sz[i]; if(g[x][0] == p || sz[i] > sz[g[x][0]]) swap(g[x][0], i); } } void dfs2(int x, int p){ in[x] = tim++; par[x] = p; depth[x] = depth[p] + 1; for(int i : g[x]){ if(i == p) continue; head[i] = (i == g[x][0] ? 
head[x] : i); dfs2(i, x); } } const int MAXSZ = MAXN + 2*MAXN*LG; int down[LG][MAXN + 5]; int up[LG][MAXN + 5]; vector<int> dag[MAXSZ+ 5]; int lg[MAXN + 5]; void upd(int l, int r, int x, int t){ if(l <= in[x] && in[x] <= r){ if(l < in[x]) upd(l, in[x] - 1, x, t); if(in[x] < r) upd(in[x] + 1, r, x, t); } else { int sz = lg[r - l + 1]; if(t == 2){ dag[up[sz][l]].pb(x); dag[up[sz][r - (1 << sz) + 1]].pb(x); } else { dag[x].pb(down[sz][l]); dag[x].pb(down[sz][r - (1 << sz) + 1]); } } } //1 is down, 2 is up void draw(int a, int b, int c, int t){ while(head[a] != head[b]){ if(depth[head[a]] > depth[head[b]]) swap(a, b); upd(in[head[b]], in[b], c, t); b = par[head[b]]; } if(depth[a] > depth[b]) swap(a, b); upd(in[a], in[b], c, t); } bool vis[MAXSZ + 5], stk[MAXSZ + 5]; vector<int> ord; bool fail; int ind; void dfs3(int x){ if(fail) return; vis[x] = stk[x] = true; for(int i : dag[x]){ if(i == x) continue; if(!vis[i]){ dfs3(i); } else if(stk[i]){ fail = true; break; } } stk[x] = false; if(x <= n) ord.pb(x); } int main(){ setIO(); cin >> n >> m; lg[1] = 0; for(int i = 2; i <= n; i++) lg[i] = lg[i/2] + 1; for(int i = 0; i < n - 1; i++){ int a, b; cin >> a >> b; g[a].pb(b); g[b].pb(a); } tim = 0; dfs1(1, 1); head[1] = 1; dfs2(1, 1); for(int i = 1; i <= n; i++) down[0][in[i]] = up[0][in[i]] = i; ind = n + 1; for(int i = 1; i < LG; i++){ for(int j = 0; j + (1 << i) <= n; j++){ down[i][j] = ind++; up[i][j] = ind++; dag[down[i][j]].pb(down[i - 1][j]); dag[down[i][j]].pb(down[i - 1][j + (1 << (i - 1))]); dag[up[i - 1][j]].pb(up[i][j]); dag[up[i - 1][j + (1 << (i - 1))]].pb(up[i][j]); } } for(int i = 0; i < m; i++){ int t, a, b, c; cin >> t >> a >> b >> c; draw(a, b, c, t); } fail = false; for(int i = 1; i <= n; i++){ if(!vis[i]){ dfs3(i); if(fail) break; } } if(fail){ cout << -1 << endl; return 0; } reverse(ord.begin(), ord.end()); int ans[n + 1]; for(int i = 0; i < ord.size(); i++){ ans[ord[i]] = i + 1; } for(int i = 1; i <= n; i++) cout << ans[i] << " "; cout << endl; }
1905
A
Constructive Problems
Gridlandia has been hit by flooding and now has to reconstruct all of its cities. Gridlandia can be described by an $n \times m$ matrix. Initially, all of its cities are in economic collapse. The government can choose to rebuild certain cities. Additionally, any collapsed city which has at least one vertically neighboring rebuilt city and at least one horizontally neighboring rebuilt city can ask for aid from them and become rebuilt \textbf{without help from the government}. More formally, a collapsed city positioned at $(i, j)$ can become rebuilt if \textbf{both} of the following conditions are satisfied: - At least one of the cities with positions $(i + 1, j)$ and $(i - 1, j)$ is rebuilt; - At least one of the cities with positions $(i, j + 1)$ and $(i, j - 1)$ is rebuilt. If the city is located on the border of the matrix and has only one horizontally or vertically neighbouring city, then we consider only that city. \begin{center} {\small Illustration of two possible ways cities can be rebuilt by adjacent aid. White cells are collapsed cities, yellow cells are initially rebuilt cities (either by the government or adjacent aid), and orange cells are rebuilt cities after adjacent aid.} \end{center} The government wants to know the minimum number of cities it has to rebuild such that \textbf{after some time} all the cities can be rebuilt.
We can observe an invariant given by the problem: every time we apply adjacent aid on any state of the matrix, the set of rows that have at least one rebuilt city and the set of columns that have at least one rebuilt city remain constant. Therefore, if we want to obtain a full matrix as a consequence of applying adjacent aid multiple times, both of these sets must contain all rows/columns. As such, the answer is at least $\max(n, m)$. We can show this bound is tight by finding a construction which always satisfies the statement. If we take, without loss of generality, $n \le m$, the following initial setting works: $(1, 1), (2, 2), (3, 3), \ldots, (n, n), (n, n + 1), (n, n + 2), \ldots, (n, m)$. I have proposed this div2A at 3 contests and after 1 year of waiting I finally was able to propose it (mainly because theoretically, this was supposed to be my round :)). As the title suggests, this problem is inspired by one day trying to solve some constructive problem that required drawing some weird grid with some properties. And, as I was drawing multiple grids to try out multiple strategies, I was wondering how to draw these grids more optimally, as actually having to count the height/width for every matrix was pretty annoying, and I could just eyeball it by drawing it next to another already drawn (but filled out) grid. As such, I needed an already drawn grid "below" the current one and another "to the left".
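The lower bound and the diagonal construction can be sanity-checked by direct simulation. This standalone sketch (0-indexed, function names are mine) applies adjacent aid until a fixed point and confirms that the $\max(n, m)$ seeds fill the grid:

```python
def fills_grid(n, m, seeds):
    # simulate adjacent aid to a fixed point: a collapsed cell is rebuilt
    # once it has a rebuilt vertical neighbour and a rebuilt horizontal one
    built = [[False] * m for _ in range(n)]
    for i, j in seeds:
        built[i][j] = True
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(m):
                if built[i][j]:
                    continue
                vert = (i > 0 and built[i - 1][j]) or (i + 1 < n and built[i + 1][j])
                horiz = (j > 0 and built[i][j - 1]) or (j + 1 < m and built[i][j + 1])
                if vert and horiz:
                    built[i][j] = True
                    changed = True
    return all(all(row) for row in built)

def diagonal_seeds(n, m):
    # the editorial's construction for n <= m, converted to 0-indexing
    return [(min(i, n - 1), i) for i in range(m)]
```

For a 3x5 grid, `diagonal_seeds` places 5 = max(3, 5) cities and the simulation fills the whole grid, while a single seed does not.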
[ "constructive algorithms", "math" ]
800
#include <bits/stdc++.h> #define all(x) (x).begin(),(x).end() using namespace std; using ll = long long; using ld = long double; //#define int ll #define sz(x) ((int)(x).size()) using pii = pair<int,int>; using tii = tuple<int,int,int>; const int nmax = 1e6 + 5; const int inf = 1e9 + 5; int n, k, m, q; vector<int> g[nmax]; int v[nmax]; static void testcase() { cin >> n >> m; cout << max(n, m) << '\n'; return; } signed main() { ios::sync_with_stdio(0); cin.tie(0); int t; cin >> t; for (int i = 0; i < t; i++) testcase(); } /** Anul asta se da centroid. -- Surse oficiale */
1905
B
Begginer's Zelda
You are given a tree$^{\dagger}$. In one zelda-operation you can do the following: - Choose two vertices of the tree, $u$ and $v$; - Compress all the vertices on the path from $u$ to $v$ into one vertex. In other words, all the vertices on the path from $u$ to $v$ will be erased from the tree, and a new vertex $w$ will be created. Then every vertex $s$ that had an edge to some vertex on the path from $u$ to $v$ will have an edge to the vertex $w$. \begin{center} {\small Illustration of a zelda-operation performed for vertices $1$ and $5$.} \end{center} Determine the minimum number of zelda-operations required for the tree to have only one vertex. $^{\dagger}$A tree is a connected acyclic undirected graph.
We can prove by induction that on any tree with $K$ leaves, the answer is $\lfloor \frac{K + 1}{2} \rfloor$, where $\lfloor x \rfloor$ denotes the greatest integer not exceeding $x$. We will give an overview of what a proof would look like: For two leaves, the answer is clearly $1$. For three leaves, the answer is clearly $2$. For four or more leaves, it is always the case that we can find two leaves for which the node created as a result of applying an operation on them will have degree greater than $1$ (i.e. it will not be a leaf). The third argument holds because in a tree with at least four leaves, we have either at least two nodes with degree at least $3$ (and as such we can choose two leaves whose connecting path contains both of these nodes), or a node with degree at least $4$. Furthermore, such an operation reduces the number of leaves in the tree by $2$.
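The $\lfloor \frac{K+1}{2} \rfloor$ formula can be verified exhaustively on tiny trees by BFS over all reachable contracted trees. Everything below (function names, edge-set state encoding) is my own sketch, feasible only for very small inputs:

```python
from collections import deque

def path_nodes(edges, u, v):
    # vertices on the unique u..v path in a tree given as a list of edge pairs
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    stack, seen = [(u, (u,))], {u}
    while stack:
        x, p = stack.pop()
        if x == v:
            return set(p)
        for y in adj.get(x, []):
            if y not in seen:
                seen.add(y)
                stack.append((y, p + (y,)))

def contract(edges, u, v):
    # one zelda-operation: compress the u..v path into a fresh vertex w
    P = path_nodes(edges, u, v)
    w = max(max(e) for e in edges) + 1
    out = set()
    for a, b in edges:
        if a in P and b in P:
            continue  # edge fully inside the path disappears
        if a in P:
            a = w
        elif b in P:
            b = w
        out.add(frozenset((a, b)))
    return frozenset(out)

def min_zelda_ops(edges):
    # BFS over all reachable contracted trees; the empty edge set means
    # the tree has been reduced to a single vertex
    start = frozenset(frozenset(e) for e in edges)
    q, seen = deque([(start, 0)]), {start}
    while q:
        es, d = q.popleft()
        if not es:
            return d
        pairs = [tuple(e) for e in es]
        nodes = sorted({x for e in pairs for x in e})
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                ns = contract(pairs, u, v)
                if ns not in seen:
                    seen.add(ns)
                    q.append((ns, d + 1))
```

A star with 4 leaves needs (4+1)//2 = 2 operations, and a 3-vertex path (2 leaves) needs 1, matching the formula.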
[ "greedy", "trees" ]
1,100
#include <cmath> #include <functional> #include <fstream> #include <iostream> #include <vector> #include <algorithm> #include <string> #include <set> #include <map> #include <list> #include <time.h> #include <math.h> #include <random> #include <deque> #include <queue> #include <unordered_map> #include <unordered_set> #include <iomanip> #include <cassert> #include <bitset> #include <sstream> #include <cstring> #include <numeric> #define all(x) (x).begin(),(x).end() using namespace std; using ll = long long; using ld = long double; //#define int ll #define sz(x) ((int)(x).size()) using pii = pair<int,int>; using tii = tuple<int,int,int>; const int nmax = 1e6 + 5; const int inf = 1e9 + 5; int n, k, m, q; int freq[nmax]; static void testcase() { cin >> n; for (int i = 1, a, b; i < n; i++) { cin >> a >> b; freq[a]++; freq[b]++; } int cnt = 0; for(int i = 1; i <= n; i++) cnt += (freq[i] == 1), freq[i] = 0; cout << (cnt + 1) / 2 << '\n'; return; } signed main() { ios::sync_with_stdio(0); cin.tie(0); int t; cin >> t; for (int i = 0; i < t; i++) testcase(); } /** Anul asta se da centroid. -- Surse oficiale */
1905
C
Largest Subsequence
Given is a string $s$ of length $n$. In one operation you can select the lexicographically largest$^\dagger$ subsequence of string $s$ and cyclic shift it to the right$^\ddagger$. Your task is to calculate the minimum number of operations it would take for $s$ to become sorted, or report that it never reaches a sorted state. $^\dagger$A string $a$ is lexicographically smaller than a string $b$ if and only if one of the following holds: - $a$ is a prefix of $b$, but $a \ne b$; - In the first position where $a$ and $b$ differ, the string $a$ has a letter that appears earlier in the alphabet than the corresponding letter in $b$. $^\ddagger$By cyclic shifting the string $t_1t_2\ldots t_m$ to the right, we get the string $t_mt_1\ldots t_{m-1}$.
We can notice that this operation will ultimately reverse the lexicographically largest subsequence of the initial string. Thus, we can easily check whether the string is sortable, and for the number of operations we subtract the length of the largest prefix of equal characters of the subsequence from its length. This solution works in $O(n)$ time.
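The claim can be checked by naive simulation on small strings. The helper names below are mine; `largest_subseq_positions` uses the fact that the lexicographically largest subsequence consists exactly of the suffix-maximum positions:

```python
def largest_subseq_positions(s):
    # positions of the lexicographically largest subsequence:
    # exactly the positions where s[i] >= max(s[i:])
    pos, best = [], ''
    for i in range(len(s) - 1, -1, -1):
        if s[i] >= best:
            best = s[i]
            pos.append(i)
    return pos[::-1]

def ops_to_sort(s):
    # naive simulation; returns -1 if s never becomes sorted
    s = list(s)
    target = sorted(s)
    seen, ops = set(), 0
    while s != target:
        key = ''.join(s)
        if key in seen:
            return -1  # state repeats -> never sorted
        seen.add(key)
        pos = largest_subseq_positions(s)
        vals = [s[i] for i in pos]
        vals = vals[-1:] + vals[:-1]  # cyclic shift to the right
        for i, v in zip(pos, vals):
            s[i] = v
        ops += 1
    return ops
```

For example, "cba" takes 2 operations (the reversed subsequence "abc" is sorted, and its prefix of equal characters has length 1), while "bab" loops forever.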
[ "greedy", "strings" ]
1,400
#include <bits/stdc++.h> using namespace std; int main() { cin.tie(nullptr)->sync_with_stdio(false); int q; cin >> q; while (q--) { int n; string s; cin >> n >> s; s = '$' + s; vector<int> subset; for (int i = 1; i <= n; ++i) { while (!subset.empty() && s[subset.back()] < s[i]) { subset.pop_back(); } subset.push_back(i); } int ans = 0; int m = (int)subset.size() - 1; while (ans <= m && s[subset[ans]] == s[subset[0]]) { ans++; } ans = m - ans + 1; for (int i = 0; i <= m; ++i) { if (i < m - i) { swap(s[subset[i]], s[subset[m - i]]); } } for (int i = 2; i <= n; ++i) { if (s[i] < s[i - 1]) { ans = -1; break; } } cout << ans << '\n'; } }
1905
D
Cyclic MEX
For an array $a$, define its cost as $\sum_{i=1}^{n} \operatorname{mex} ^\dagger ([a_1,a_2,\ldots,a_i])$. You are given a permutation$^\ddagger$ $p$ of the set $\{0,1,2,\ldots,n-1\}$. Find the maximum cost across all cyclic shifts of $p$. $^\dagger\operatorname{mex}([b_1,b_2,\ldots,b_m])$ is the smallest non-negative integer $x$ such that $x$ does not occur among $b_1,b_2,\ldots,b_m$. $^\ddagger$A permutation of the set $\{0,1,2,...,n-1\}$ is an array consisting of $n$ distinct integers from $0$ to $n-1$ in arbitrary order. For example, $[1,2,0,4,3]$ is a permutation, but $[0,1,1]$ is not a permutation ($1$ appears twice in the array), and $[0,2,3]$ is also not a permutation ($n=3$ but there is $3$ in the array).
Let's analyze how the values of the prefix mexes change upon performing a cyclic shift to the left: The first prefix mex is popped. Each prefix mex with a value less than $p_1$ doesn't change. Each prefix mex with a value greater than $p_1$ becomes $p_1$. $n$ is appended to the back. Let's keep our prefix mexes compressed (meaning that we keep each value together with its frequency instead of keeping multiple copies of the same value). After that, we can simulate the above process naively with a deque: each pop is paid for by an earlier push, so the simulation is amortized linear. This solution works in $O(n)$ time.
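A Python sketch of the compressed-deque simulation described above (0-indexed input; variable names are mine, mirroring the reference C++ solution):

```python
from collections import deque

def best_cyclic_cost(p):
    # O(n) simulation with compressed (value, count) prefix mexes
    n = len(p)
    seen = [False] * (n + 1)
    mex, total = 0, 0
    dq = deque()  # entries [mex value, multiplicity], front = short prefixes
    for x in p:
        seen[x] = True
        while seen[mex]:
            mex += 1
        if dq and dq[-1][0] == mex:
            dq[-1][1] += 1
        else:
            dq.append([mex, 1])
        total += mex
    best = total
    for i in range(n - 1):
        v = p[i]               # element moved from the front to the back
        total -= dq[0][0]      # the first prefix mex is popped
        dq[0][1] -= 1
        if dq[0][1] == 0:
            dq.popleft()
        cnt = 0
        while dq and dq[-1][0] > v:   # mexes above v collapse to v
            total -= dq[-1][0] * dq[-1][1]
            cnt += dq[-1][1]
            dq.pop()
        if cnt:
            dq.append([v, cnt])
            total += v * cnt
        dq.append([n, 1])      # the full cyclic prefix has mex n
        total += n
        best = max(best, total)
    return best
```

For the permutation [2, 0, 1], the shifts cost 4, 6, and 3, so the maximum is 6.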
[ "data structures", "implementation", "math", "two pointers" ]
2,000
#include <bits/stdc++.h> #define int long long using namespace std; int32_t main() { cin.tie(nullptr)->sync_with_stdio(false); int q; cin >> q; while (q--) { int n; cin >> n; vector<int> a(n + 1); for (int i = 1; i <= n; ++i) { cin >> a[i]; } deque<pair<int, int>> dq; vector<int> f(n + 1); int mex = 0; int sum = 0; for (int i = 1; i <= n; ++i) { f[a[i]]++; while (f[mex]) { mex++; } dq.push_back({mex, 1}); sum += mex; } int ans = sum; for (int i = 1; i < n; ++i) { pair<int, int> me = {a[i], 0}; sum -= dq.front().first; dq.front().second--; if (dq.front().second == 0) { dq.pop_front(); } while (!dq.empty() && dq.back().first >= a[i]) { sum -= dq.back().first * dq.back().second; me.second += dq.back().second; dq.pop_back(); } dq.push_back(me); sum = sum + me.first * me.second; dq.push_back({n, 1}); sum += n; ans = max(ans, sum); } cout << ans << '\n'; } }
1905
E
One-X
In this sad world full of imperfections, ugly segment trees exist. A segment tree is a tree where each node represents a segment and has its number. A segment tree for an array of $n$ elements can be built in a recursive manner. Let's say function $\operatorname{build}(v,l,r)$ builds the segment tree rooted in the node with number $v$ and it corresponds to the segment $[l,r]$. Now let's define $\operatorname{build}(v,l,r)$: - If $l=r$, this node $v$ is a leaf so we stop adding more edges - Else, we add the edges $(v, 2v)$ and $(v, 2v+1)$. Let $m=\lfloor \frac{l+r}{2} \rfloor$. Then we call $\operatorname{build}(2v,l,m)$ and $\operatorname{build}(2v+1,m+1,r)$. So, the whole tree is built by calling $\operatorname{build}(1,1,n)$. Now Ibti will construct a segment tree for an array with $n$ elements. He wants to find the sum of $\operatorname{lca}^\dagger(S)$, where $S$ is a non-empty subset of \textbf{leaves}. Notice that there are exactly $2^n - 1$ possible subsets. Since this sum can be very large, output it modulo $998\,244\,353$. $^\dagger\operatorname{lca}(S)$ is the number of the least common ancestor for the nodes that are in $S$.
Let's try to solve a slightly easier problem first: changing the coefficient of each label to be the most significant bit (msb) of the label. We can note that at each depth, every label has the same number of digits (in base 2), and thus the same msb. We can also notice that at each depth there are at most $2$ different interval lengths. Combining the former with the latter, we can solve this case in $O(\log N)$ time, since the maximum depth is bounded by $\log N$. We can find an easy generalization: for the $k$-th most significant bit of a label to be $1$, we have to go right from a node whose depth is $k-1$. Thus the above solution can be extended to find the contribution of the $k$-th most significant bit of each label. Doing this for all bits gives us a time complexity of $O(\log^2 N)$, which is sufficient to pass the given constraints.
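For intuition (and to test any fast solution against), here is an exponential brute force over non-empty leaf subsets for tiny $n$. It relies on the heap-style labels: halving the larger label walks toward the LCA. All names are mine:

```python
from functools import reduce
from itertools import combinations

def seg_leaves(n):
    # labels of the leaves of the segment tree built by build(1, 1, n)
    leaves = []
    def build(v, l, r):
        if l == r:
            leaves.append(v)
            return
        m = (l + r) // 2
        build(2 * v, l, m)
        build(2 * v + 1, m + 1, r)
    build(1, 1, n)
    return leaves

def lca(a, b):
    # heap-numbered tree: repeatedly halve the larger label
    while a != b:
        if a > b:
            a //= 2
        else:
            b //= 2
    return a

def brute_sum(n):
    # sum of lca(S) over all 2^n - 1 non-empty subsets of leaves
    leaves = seg_leaves(n)
    return sum(reduce(lca, S)
               for k in range(1, len(leaves) + 1)
               for S in combinations(leaves, k))
```

For instance, with $n = 3$ the leaves are labeled 4, 5, 3 and the seven subsets contribute 4 + 5 + 3 + 2 + 1 + 1 + 1 = 17.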
[ "combinatorics", "dfs and similar", "dp", "math", "trees" ]
2,400
#include <bits/stdc++.h> #define int long long using namespace std; const int mod = 998244353; struct Mint { int val; Mint(long long x = 0) { val = x % mod; } Mint operator+(Mint oth) { return val + oth.val; } Mint operator-(Mint oth) { return val - oth.val + mod; } Mint operator*(Mint oth) { return val * oth.val; } }; Mint powmod(int a, int b) { if (b == 0) { return 1; } if (b % 2 == 1) { return powmod(a, b - 1) * a; } Mint P = powmod(a, b / 2); return P * P; } map<int, vector<vector<tuple<int, int, int>>>> memo; vector<vector<tuple<int, int, int>>> find_ranges(int lg) // log^2 { if (memo.find(lg) != memo.end()) { return memo[lg]; } if (lg == 1) { return {{{1, 1, 1}}}; } vector<vector<tuple<int, int, int>>> l = find_ranges((lg + 1) / 2); vector<vector<tuple<int, int, int>>> r = find_ranges(lg / 2); vector<vector<tuple<int, int, int>>> ans(max(l.size(), r.size()) + 1); Mint x = (powmod(2, (lg + 1) / 2) - 1) * (powmod(2, lg / 2) - 1); ans[0].push_back({lg, 1, x.val}); for (int i = 0; i < (int)l.size(); ++i) { for (auto j : l[i]) { ans[i + 1].push_back(j); } } for (int i = 0; i < (int)r.size(); ++i) { for (auto j : r[i]) { bool ok = false; for (auto &[size, cnt, ways] : ans[i + 1]) { if (size == get<0>(j)) { ok = true; cnt += get<1>(j); } } if (!ok) { ans[i + 1].push_back(j); } } } return memo[lg] = ans; } Mint count(int lg, int coef) { vector<vector<tuple<int, int, int>>> adam = find_ranges(lg); Mint ans = 0; Mint pow_2 = 1; for (int i = 0; i < (int)adam.size(); ++i) { for (auto [size, cnt, ways] : adam[i]) { ans = ans + pow_2 * coef * cnt * ways; } pow_2 = pow_2 * 2; } return ans; } int32_t main() { cin.tie(nullptr)->sync_with_stdio(false); int q; cin >> q; while (q--) { int n; cin >> n; vector<vector<tuple<int, int, int>>> adam = find_ranges(n); Mint ans = count(n, 1); for (int i = 1; i < (int)adam.size(); ++i) { for (auto [size, cnt, ways] : adam[i - 1]) { int lsize = (size + 1) / 2; int rsize = size / 2; if (rsize) { ans = ans + count(rsize, cnt); } } } cout << 
ans.val << '\n'; memo.clear(); } }
1905
F
Field Should Not Be Empty
You are given a permutation$^{\dagger}$ $p$ of length $n$. We call index $x$ good if for all $y < x$ it holds that $p_y < p_x$ and for all $y > x$ it holds that $p_y > p_x$. We call $f(p)$ the number of good indices in $p$. You can perform the following operation: pick $2$ \textbf{distinct} indices $i$ and $j$ and swap elements $p_i$ and $p_j$. Find the maximum value of $f(p)$ after applying the aforementioned operation \textbf{exactly once}. $^{\dagger}$A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
The key observation in this problem is that most swaps are useless. In fact, only $2n$ swaps can increase our initial cost: The first type of meaningful swap is $(i, p_i)$. For each $1 < i < n$, consider $k$ and $l$ such that $p_k = \max(p_1,p_2,\ldots,p_{i-1})$ and $p_l = \min(p_{i+1},p_{i+2},\ldots,p_n)$. The second type is $(k,l)$. The reason why this is true is that the first type of swap can create a new good index (since every good index must be a fixed point) and the second type of swap can fix an already good index. It's obvious that if a swap does neither of the above, it can't increase our current cost. Calculating $f(p)$ after each swap can be done with a segment tree. Consider adding one on the ranges $(\min(i,p_i),\max(i,p_i))$. Now an index is good if and only if its value is $1$, which is also the minimum value an index can have. Thus our segment tree has to find the number of minimums while performing range additions, which can be easily maintained with lazy propagation. This solution works in $O(n \log n)$ time.
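The candidate-swap claim can be tested against a full brute force on small permutations. This sketch (0-indexed, names mine) computes $f$ directly and tries every swap exactly once:

```python
def f(p):
    # number of good indices: p[i] larger than everything before it
    # and smaller than everything after it
    n = len(p)
    suf_min = [0] * (n + 1)
    suf_min[n] = n + 2  # sentinel larger than any value
    for i in range(n - 1, -1, -1):
        suf_min[i] = min(suf_min[i + 1], p[i])
    good, pref_max = 0, 0
    for i in range(n):
        if pref_max < p[i] < suf_min[i + 1]:
            good += 1
        pref_max = max(pref_max, p[i])
    return good

def best_after_one_swap(p):
    # O(n^3) brute force over all swaps (exactly one swap is mandatory)
    n = len(p)
    best = 0
    for i in range(n):
        for j in range(i + 1, n):
            p[i], p[j] = p[j], p[i]
            best = max(best, f(p))
            p[i], p[j] = p[j], p[i]
    return best
```

For the sorted permutation [1, 2, 3] the forced swap costs two good indices, giving n - 2 = 1, as the reference solution's special case predicts.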
[ "brute force", "data structures", "divide and conquer" ]
2,600
#include <bits/stdc++.h> using namespace std; struct aint { vector<pair<int, int>> a; vector<int> lazy; void resize(int n) { a = vector<pair<int, int>>(4 * n); lazy = vector<int>(4 * n); } void init(int node, int left, int right) { a[node].second = (right - left + 1); if (left != right) { int mid = (left + right) / 2; init(2 * node, left, mid); init(2 * node + 1, mid + 1, right); } } void prop(int node, int left, int right) { a[node].first += lazy[node]; if (left != right) { lazy[2 * node] += lazy[node]; lazy[2 * node + 1] += lazy[node]; } lazy[node] = 0; } pair<int, int> merge(pair<int, int> a, pair<int, int> b) { if (a.first == b.first) { return pair<int, int>{a.first, a.second + b.second}; } return min(a, b); } void update(int node, int left, int right, int st, int dr, int val) { prop(node, left, right); if (right < st || left > dr) { return; } if (st <= left && dr >= right) { lazy[node] += val; prop(node, left, right); return; } int mid = (left + right) / 2; update(2 * node, left, mid, st, dr, val); update(2 * node + 1, mid + 1, right, st, dr, val); a[node] = merge(a[2 * node], a[2 * node + 1]); } }; struct bit { vector<int> a; void resize(int n) { a = vector<int>(n + 1); } void update(int pos, int val) { int n = (int)a.size() - 1; for (int i = pos; i <= n; i += i & (-i)) { a[i] += val; } } int query(int pos) { int ans = 0; for (int i = pos; i; i -= i & (-i)) { ans += a[i]; } return ans; } int query(int st, int dr) { return query(dr) - query(st - 1); } }; int32_t main() { cin.tie(nullptr)->sync_with_stdio(false); int q; cin >> q; while (q--) { int n; cin >> n; vector<int> a(n + 1); bool sortat = true; for (int i = 1; i <= n; ++i) { cin >> a[i]; if (a[i] != i) { sortat = false; } } if (sortat) { cout << n - 2 << '\n'; continue; } vector<pair<int, int>> qui(n + 1); stack<int> s; bit tree; tree.resize(n); for (int i = 1; i <= n; ++i) { while (!s.empty() && a[s.top()] < a[i]) { s.pop(); } if (!s.empty()) { qui[i].first = s.top(); } if (tree.query(a[i], n) > 1) { 
qui[i].first = 0; } tree.update(a[i], 1); s.push(i); } while (!s.empty()) s.pop(); tree.resize(n); for (int i = n; i >= 1; --i) { while (!s.empty() && a[s.top()] > a[i]) { s.pop(); } if (!s.empty()) { qui[i].second = s.top(); } if (tree.query(1, a[i]) > 1) { qui[i].second = 0; } tree.update(a[i], 1); s.push(i); } aint lesgo; lesgo.resize(n); lesgo.init(1, 1, n); function<void(int, int)> upd = [&](int ind, int sign) { lesgo.update(1, 1, n, min(ind, a[ind]), max(ind, a[ind]), sign); }; function<int()> count = [&]() { if (lesgo.a[1].first == 1) { return lesgo.a[1].second; } return 0; }; function<void(int, int)> mySwap = [&](int i, int j) { upd(i, -1); upd(j, -1); swap(a[i], a[j]); upd(i, 1); upd(j, 1); }; for (int i = 1; i <= n; ++i) { upd(i, 1); } int ans = 0; for (int i = 1; i <= n; ++i) { if (qui[i].first && qui[i].second) { mySwap(qui[i].first, qui[i].second); ans = max(ans, count()); mySwap(qui[i].first, qui[i].second); } } vector<int> pos(n + 1); for (int i = 1; i <= n; ++i) { pos[a[i]] = i; } for (int i = 1; i <= n; ++i) { int qui1 = i; int qui2 = pos[i]; mySwap(qui1, qui2); ans = max(ans, count()); mySwap(qui1, qui2); } cout << ans << '\n'; } }
1907
A
Rook
As you probably know, chess is a game that is played on a board with 64 squares arranged in an $8\times 8$ grid. Columns of this board are labeled with letters from \textbf{a} to \textbf{h}, and rows are labeled with digits from \textbf{1} to \textbf{8}. Each square is described by the row and column it belongs to. The rook is a piece in the game of chess. During its turn, it may move any non-zero number of squares horizontally or vertically. Your task is to find all possible moves for a rook on an empty chessboard.
The answer includes all cells that share the same column or row with the given cell. Let's iterate through all the columns, keeping the row constant, and vice versa, iterate through the rows without changing the column. To ensure that the input cell is not included in the answer, you can either skip it or add all positions to a set and then remove it from there.
[ "implementation" ]
800
for _ in range(int(input())): s = input() for c in "abcdefgh": if c != s[0]: print(c + s[1], end=' ') for c in "12345678": if c != s[1]: print(s[0] + c, end=' ') print()
1907
B
YetnotherrokenKeoard
Polycarp has a problem — his laptop keyboard is broken. Now, when he presses the 'b' key, it acts like an unusual backspace: it deletes the last (rightmost) lowercase letter in the typed string. If there are no lowercase letters in the typed string, then the press is completely ignored. Similarly, when he presses the 'B' key, it deletes the last (rightmost) uppercase letter in the typed string. If there are no uppercase letters in the typed string, then the press is completely ignored. In both cases, the letters 'b' and 'B' are not added to the typed string when these keys are pressed. Consider an example where the sequence of key presses was "ARaBbbitBaby". In this case, the typed string will change as follows: "" $\xrightarrow{A}$ "A" $\xrightarrow{R}$ "AR" $\xrightarrow{a}$ "ARa" $\xrightarrow{B}$ "Aa" $\xrightarrow{b}$ "A" $\xrightarrow{b}$ "A" $\xrightarrow{i}$ "Ai" $\xrightarrow{t}$ "Ait" $\xrightarrow{B}$ "it" $\xrightarrow{a}$ "ita" $\xrightarrow{b}$ "it" $\xrightarrow{y}$ "ity". Given a sequence of pressed keys, output the typed string after processing all key presses.
To solve the problem, it was necessary to quickly support deletions. For this, one could maintain two stacks: one with the positions of uppercase letters and one with the positions of lowercase letters. Then, when deleting, one needs to somehow mark that the character at this position should not be output. Alternatively, one could reverse the original string, then instead of deleting characters, they would simply need to be skipped.
[ "data structures", "implementation", "strings" ]
1000
for _ in range(int(input())): s = list(input()) n = len(s) upper = [] lower = [] for i in range(n): if s[i] == 'b': s[i] = '' if lower: s[lower.pop()] = '' continue if s[i] == 'B': s[i] = '' if upper: s[upper.pop()] = '' continue if 'a' <= s[i] <= 'z': lower += [i] else: upper += [i] print(''.join(s))
1907
C
Removal of Unattractive Pairs
Vlad found a string $s$ consisting of $n$ lowercase Latin letters, and he wants to make it as short as possible. To do this, he can remove \textbf{any} pair of adjacent characters from $s$ any number of times, provided they are \textbf{different}. For example, if $s$=racoon, then by removing one pair of characters he can obtain the strings coon, roon, raon, and raco, but he cannot obtain racn (because the removed letters were the same) or rcon (because the removed letters were not adjacent). What is the minimum length Vlad can achieve by applying any number of deletions?
Consider the final string; obviously, all characters in it are the same, since otherwise we could remove some pair of adjacent characters. If some character occurs in the string more than $\lfloor \frac{n}{2} \rfloor$ times, then the final string will always consist only of that character, because a single deletion can get rid of at most one of its occurrences. To minimize the number of these characters, we should remove one occurrence with each deletion, and we can always do this until the string consists only of that character. Otherwise, we can remove all possible pairs regardless of the order of deletions, leaving $n \bmod 2$ characters.
[ "constructive algorithms", "greedy", "math", "strings" ]
1200
orda = ord('a') def solve(): n = int(input()) cnt = [0] * 26 for c in input(): cnt[ord(c) - orda] += 1 mx = max(cnt) print(max(n % 2, 2 * mx - n)) for _ in range(int(input())): solve()
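The formula used by the solution above can be cross-checked by brute force on small strings; a sketch (the helper names `min_len_brute` and `min_len_formula` are mine, not from the official code):

```python
import itertools

def min_len_brute(s):
    # exhaustively explore every way of deleting a differing adjacent pair
    best, seen, stack = len(s), {s}, [s]
    while stack:
        cur = stack.pop()
        best = min(best, len(cur))
        for i in range(len(cur) - 1):
            if cur[i] != cur[i + 1]:
                nxt = cur[:i] + cur[i + 2:]
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return best

def min_len_formula(s):
    n, mx = len(s), max(s.count(c) for c in set(s))
    return max(n % 2, 2 * mx - n)

# the two agree on every short string over a 3-letter alphabet
for n in range(1, 7):
    for t in itertools.product("abc", repeat=n):
        assert min_len_brute("".join(t)) == min_len_formula("".join(t))
```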
1907
D
Jumping Through Segments
Polycarp is designing a level for a game. The level consists of $n$ segments on the number line, where the $i$-th segment starts at the point with coordinate $l_i$ and ends at the point with coordinate $r_i$. The player starts the level at the point with coordinate $0$. In one move, they can move to any point that is within a distance of no more than $k$. After their $i$-th move, the player must land within the $i$-th segment, that is, at a coordinate $x$ such that $l_i \le x \le r_i$. This means: - After the first move, they must be inside the first segment (from $l_1$ to $r_1$); - After the second move, they must be inside the second segment (from $l_2$ to $r_2$); - ... - After the $n$-th move, they must be inside the $n$-th segment (from $l_n$ to $r_n$). The level is considered completed if the player reaches the $n$-th segment, following the rules described above. After some thought, Polycarp realized that it is impossible to complete the level with some values of $k$. Polycarp does not want the level to be too easy, so he asks you to determine the minimum integer $k$ with which it is possible to complete the level.
First, let's note that if we can pass a level with some value of $k$, then we can make all the same moves and pass it with a larger value. This allows us to use binary search for the answer. To check whether it is possible to pass the level with a certain $k$, we will maintain a segment in which we can find ourselves. After each move, it expands by $k$ in both directions and is reduced to the intersection with the segment where the player must be at that move. If at any point the intersection becomes empty, then it is impossible to pass the level with such $k$.
[ "binary search", "constructive algorithms" ]
1400
def solve(): n = int(input()) seg = [list(map(int, input().split())) for x in range(n)] def check(k): ll, rr = 0, 0 for e in seg: ll = max(ll - k, e[0]) rr = min(rr + k, e[1]) if ll > rr: return False return True l, r = -1, 10 ** 9 while r - l > 1: mid = (r + l) // 2 if check(mid): r = mid else: l = mid print(r) for _ in range(int(input())): solve()
1907
E
Good Triples
Given a non-negative integer number $n$ ($n \ge 0$). Let's say a triple of non-negative integers $(a, b, c)$ is good if $a + b + c = n$, and $digsum(a) + digsum(b) + digsum(c) = digsum(n)$, where $digsum(x)$ is the sum of digits of number $x$. For example, if $n = 26$, then the pair $(4, 12, 10)$ is good, because $4 + 12 + 10 = 26$, and $(4) + (1 + 2) + (1 + 0) = (2 + 6)$. Your task is to find the number of good triples for the given number $n$. The order of the numbers in a triple matters. For example, the triples $(4, 12, 10)$ and $(10, 12, 4)$ are two different triples.
A triple is good if and only if each digit of the number $n$ is obtained without a carry during the addition. For example, consider $a=2$, $b=7$, $c=4$; the sum of the digits is $2 + 7 + 4 = 13$, while the sum of the digits of their sum is $1 + 3 = 4$. In general, whenever a carry occurs in some digit, the sum $digsum(a) + digsum(b) + digsum(c)$ becomes strictly greater than $digsum(n)$. This allows us to consider each digit separately and multiply the answers for the digits. The answer for a digit $x$ is the number of ordered triples of digits with sum $x$. These values do not depend on the input data, so they can be precalculated, but this is not necessary to pass the tests.
[ "brute force", "combinatorics", "number theory" ]
1600
t = int(input()) for _ in range(t): n = int(input()) cnt = 1 while n > 0: d = n % 10 n //= 10 mul = 0 for i in range(d + 1): for j in range(d + 1): if d - i - j >= 0: mul += 1 cnt *= mul print(cnt)
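As a side note (this closed form is my observation, not stated in the editorial): by stars and bars, the number of ordered digit triples summing to $d \leq 9$ is $\binom{d+2}{2} = \frac{(d+1)(d+2)}{2}$, which collapses the inner double loop of the solution above:

```python
def good_triples(n):
    # multiply, over the digits d of n, the number of ordered triples
    # (i, j, l) of digits with i + j + l = d, which is C(d + 2, 2)
    ans = 1
    while n > 0:
        d = n % 10
        n //= 10
        ans *= (d + 1) * (d + 2) // 2
    return ans
```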
1907
F
Shift and Reverse
Given an array of integers $a_1, a_2, \ldots, a_n$. You can make two types of operations with this array: - Shift: move the last element of array to the first place, and shift all other elements to the right, so you get the array $a_n, a_1, a_2, \ldots, a_{n-1}$. - Reverse: reverse the whole array, so you get the array $a_n, a_{n-1}, \ldots, a_1$. Your task is to sort the array in non-decreasing order using the minimal number of operations, or say that it is impossible.
In this problem, there are several possible sequences of actions from which the optimal one must be chosen. For brevity, let's denote the reverse by the letter "R" and the shift by the letter "S"; the candidate sequences are SS$\dots$SS, RS$\dots$SR, RS$\dots$SS, and SS$\dots$SR. Let's write out the array twice in a row and, for each position, count the lengths of the non-decreasing and non-increasing runs ending there. This way, we can find all shifts that sort the array (possibly combined with a leading and/or trailing reverse) and take the cheapest sequence.
[ "greedy", "sortings" ]
1800
t = int(input()) for _ in range(t): n = int(input()) a = list(map(int, input().split())) a = list(reversed(a))*2 p = [0] q = [0] for i in range(n*2-1): p.append(p[-1]+1 if a[i]>=a[i+1] else 0) q.append(q[-1]+1 if a[i]<=a[i+1] else 0) minn = 1000000 for i in range(n-1,len(p)): if p[i] == n-1: minn = min(minn, i-n+1, len(p)-i+1) if q[i] == n-1: minn = min(minn, len(p)-i, i-n+2) print(-1 if minn == 1000000 else minn)
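The four candidate patterns can be made concrete with a brute-force BFS over the two operations (a hypothetical checker of my own, useful only for cross-checking the linear-time solution on small arrays):

```python
from collections import deque

def min_ops_bfs(a):
    # breadth-first search over everything reachable with Shift and Reverse;
    # exponential in spirit but the state space here is only rotations
    # of the array and of its reversal, so at most 2n states
    def is_sorted(t):
        return all(t[i] <= t[i + 1] for i in range(len(t) - 1))
    start = tuple(a)
    if is_sorted(start):
        return 0
    seen = {start}
    q = deque([(start, 0)])
    while q:
        cur, d = q.popleft()
        for nxt in (cur[-1:] + cur[:-1], cur[::-1]):  # Shift, Reverse
            if nxt not in seen:
                if is_sorted(nxt):
                    return d + 1
                seen.add(nxt)
                q.append((nxt, d + 1))
    return -1  # no reachable arrangement is sorted
```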
1907
G
Lights
In the end of the day, Anna needs to turn off the lights in the office. There are $n$ lights and $n$ light switches, but their operation scheme is really strange. The switch $i$ changes the state of light $i$, but it also changes the state of some other light $a_i$ (change the state means that if the light was on, it goes off and vice versa). Help Anna to turn all the lights off using minimal number of switches, or say it is impossible.
Let's construct a directed graph with an edge from each vertex $i$ to vertex $a_i$. In such a graph, exactly one edge leaves each vertex, and each connected component contains exactly one cycle. First, we turn off all the lights that are not part of the cycles; the sequence of such turn-offs is forced: repeatedly take a vertex with no incoming edges — if its light is off, simply remove it; if its light is on, press its switch and then remove it. After that, only cycle components remain, some of which may have lights turned on. Consider any edge of a cycle from $i$ to $a_i$; we either press switch $i$ or we don't. Each of these two choices determines the rest of the cycle, and to count the number of operations in each case we use the same algorithm as before.
[ "brute force", "constructive algorithms", "dfs and similar", "graphs", "greedy", "implementation" ]
2200
#include <bits/stdc++.h> #define long long long int #define DEBUG using namespace std; // @author: pashka void solve(){ int n; cin >> n; vector<bool> s(n); { string ss; cin >> ss; for (int i = 0; i < n; i++) { s[i] = ss[i] == '1'; } } vector<int> a(n); for (int i = 0; i < n; i++) { cin >> a[i]; a[i]--; } vector<int> res; vector<int> d(n); for (int i = 0; i < n; i++) { d[a[i]]++; } vector<int> z; for (int i = 0; i < n; i++) { if (d[i] == 0) z.push_back(i); } for (int i = 0; i < (int)z.size(); i++) { int x = z[i]; int y = a[x]; if (s[x]) { res.push_back(x); s[x] = !s[x]; s[y] = !s[y]; } d[y]--; if (d[y] == 0) { z.push_back(y); } } vector<bool> u(n); for (int i = 0; i < n; i++) { if (s[i] && !u[i]) { int x = i; vector<int> p; vector<bool> ps; int c = 0; while (!u[x]) { p.push_back(x); ps.push_back(s[x]); c += s[x]; u[x] = true; x = a[x]; } int k = p.size(); p.push_back(x); ps.push_back(s[x]); if (c % 2 == 1) { cout << -1; return; } vector<int> v1; vector<bool> ps1 = ps; for (int j = 0; j < k; j++) { if (j == 0 || ps1[j]) { v1.push_back(p[j]); ps1[j] = !ps1[j]; ps1[j + 1] = !ps1[j + 1]; } } vector<int> v2; vector<bool> ps2 = ps; for (int j = 0; j < k; j++) { if (j != 0 && ps2[j]) { v2.push_back(p[j]); ps2[j] = !ps2[j]; ps2[j + 1] = !ps2[j + 1]; } } if (v1.size() < v2.size()) { for (auto x : v1) { res.push_back(x); } } else { for (auto x : v2) { res.push_back(x); } } } } cout << res.size() << "\n"; for (auto x : res) cout << x + 1 << " "; } int main() { int t; cin >> t; for(int _ = 0; _ < t; ++_){ solve(); cout << "\n"; } return 0; }
1909
A
Distinct Buttons
\begin{quote} Deemo - Entrance \hfill ⠀ \end{quote} You are located at the point $(0, 0)$ of an infinite Cartesian plane. You have a controller with $4$ buttons which can perform one of the following operations: - $U$: move from $(x, y)$ to $(x, y+1)$; - $R$: move from $(x, y)$ to $(x+1, y)$; - $D$: move from $(x, y)$ to $(x, y-1)$; - $L$: move from $(x, y)$ to $(x-1, y)$. Unfortunately, the controller is broken. If you press all the $4$ buttons (in any order), the controller stops working. It means that, during the whole trip, you can only press at most $3$ distinct buttons (any number of times, in any order). There are $n$ special points in the plane, with integer coordinates $(x_i, y_i)$. Can you visit all the special points (in any order) without breaking the controller?
Suppose you can only use $\texttt{U}$, $\texttt{R}$, $\texttt{D}$. Which cells can you reach? If you only use buttons $\texttt{U}$, $\texttt{R}$, $\texttt{D}$, you can never reach the points with $x < 0$. However, if all the special points have $x \geq 0$, you can reach all of them with the following steps: visit all the special points with $x = 0$, using the buttons $\texttt{U}$, $\texttt{D}$; press the button $\texttt{R}$ to reach $x = 1$; visit all the special points with $x = 1$, using the buttons $\texttt{U}$, $\texttt{D}$; $\dots$ visit all the special points with $x = 100$, using the buttons $\texttt{U}$, $\texttt{D}$. Similarly, if you use at most $3$ buttons in total, you can reach all the special points if at least one of the following conditions is true: all $x_i \geq 0$; all $x_i \leq 0$; all $y_i \geq 0$; all $y_i \leq 0$. Complexity: $O(n)$
[ "implementation", "math" ]
800
#include <bits/stdc++.h> using namespace std; #define nl "\n" #define nf endl #define ll long long #define pb push_back #define _ << ' ' << #define INF (ll)1e18 #define mod 998244353 #define maxn 110 int main() { ios::sync_with_stdio(0); cin.tie(0); #if !ONLINE_JUDGE && !EVAL ifstream cin("input.txt"); ofstream cout("output.txt"); #endif ll t; cin >> t; while (t--) { ll n; cin >> n; ll a = 1, b = 1, c = 1, d = 1; for (ll i = 1; i <= n; i++) { ll x, y; cin >> x >> y; if (x > 0) a = 0; if (x < 0) b = 0; if (y > 0) c = 0; if (y < 0) d = 0; } if (a + b + c + d == 0) cout << "NO" << nl; else cout << "YES" << nl; } return 0; }
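The editorial's condition fits in a single expression; a minimal Python sketch (the function name is hypothetical):

```python
def can_visit_all(points):
    # dropping one of the four buttons is possible iff all points lie in
    # one closed axis-aligned half-plane through the origin
    return (all(x >= 0 for x, _ in points) or all(x <= 0 for x, _ in points) or
            all(y >= 0 for _, y in points) or all(y <= 0 for _, y in points))
```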
1909
B
Make Almost Equal With Mod
\begin{quote} xi - Solar Storm \hfill ⠀ \end{quote} You are given an array $a_1, a_2, \dots, a_n$ of distinct positive integers. You have to do the following operation \textbf{exactly once}: - choose a positive integer $k$; - for each $i$ from $1$ to $n$, replace $a_i$ with $a_i \text{ mod } k^\dagger$. Find a value of $k$ such that $1 \leq k \leq 10^{18}$ and the array $a_1, a_2, \dots, a_n$ contains \textbf{exactly} $2$ distinct values at the end of the operation. It can be shown that, under the constraints of the problem, at least one such $k$ always exists. If there are multiple solutions, you can print any of them. $^\dagger$ $a \text{ mod } b$ denotes the remainder after dividing $a$ by $b$. For example: - $7 \text{ mod } 3=1$ since $7 = 3 \cdot 2 + 1$ - $15 \text{ mod } 4=3$ since $15 = 4 \cdot 3 + 3$ - $21 \text{ mod } 1=0$ since $21 = 21 \cdot 1 + 0$
Find a value of $k$ that works in many cases. $k = 2$ works in many cases. What if it does not work? If $k = 2$ does not work, either all the numbers are even or all the numbers are odd. Which $k$ can you try now? Let $f(k)$ be the number of distinct values after the operation, using $k$. Let's try $k = 2$. It works in all cases, except when either all the numbers are even or all the numbers are odd. Let's generalize. If $a_i \text{ mod } k = x$, one of the following holds: $a_i \text{ mod } 2k = x$; $a_i \text{ mod } 2k = x+k$; It means that, if $f(k) = 1$ (i.e., all the values after the operations are $x$), either $f(2k) = 1$ (if either all the values become $x$, or they all become $x+k$), or $f(2k) = 2$. Therefore, it is sufficient to try $k = 2^1, \dots, 2^{57}$. In fact, $f(1) = 1$ and $f(2^{57}) = n$, so there must exist $m < 57$ such that $f(2^m) = 1$ and $f(2^{m+1}) \neq 1 \implies f(2^{m+1}) = 2$. Alternative (more intuitive?) interpretation: $a_i \text{ mod } 2^j$ corresponds to the last $j$ digits in the binary representation of $a_i$. There must exist $j$ such that the last $j$ digits make exactly $2$ distinct blocks. In the following picture, $a = [1005, 2005, 7005, 11005, 16005]$ and $k = 16$: Complexity: $O(n \log(\max a_i))$
[ "bitmasks", "constructive algorithms", "math", "number theory" ]
1200
#include <bits/stdc++.h> using namespace std; #define nl "\n" #define nf endl #define ll long long #define pb push_back #define _ << ' ' << #define INF (ll)1e18 #define mod 998244353 #define maxn 110 int main() { ios::sync_with_stdio(0); cin.tie(0); #if !ONLINE_JUDGE && !EVAL ifstream cin("input.txt"); ofstream cout("output.txt"); #endif ll t; cin >> t; while (t--) { ll n; cin >> n; vector<ll> a(n + 1, 0); for (ll i = 1; i <= n; i++) { cin >> a[i]; } auto good = [&](vector<ll> a, ll k) { set<ll> st; for (ll i = 1; i <= n; i++) st.insert(a[i] % k); return (st.size() == 2); }; for (ll k = 2;; k *= 2) { if (good(a, k)) { cout << k << nl; break; } } } return 0; }
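The doubling argument above turns into a very short loop; a Python sketch (Python's arbitrary-precision integers make the $10^{18}$ bound a non-issue, since the editorial shows the loop stops by $k = 2^{57}$):

```python
def find_k(a):
    # try k = 2, 4, 8, ...; some power of two must give exactly
    # 2 distinct remainders, by the f(k) -> f(2k) argument
    k = 2
    while len({x % k for x in a}) != 2:
        k *= 2
    return k
```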
1909
C
Heavy Intervals
\begin{quote} Shiki - Pure Ruby \hfill ⠀ \end{quote} You have $n$ intervals $[l_1, r_1], [l_2, r_2], \dots, [l_n, r_n]$, such that $l_i < r_i$ for each $i$, and all the endpoints of the intervals are distinct. The $i$-th interval has weight $c_i$ per unit length. Therefore, the weight of the $i$-th interval is $c_i \cdot (r_i - l_i)$. You don't like large weights, so you want to make the sum of weights of the intervals as small as possible. It turns out you can perform all the following three operations: - rearrange the elements in the array $l$ in any order; - rearrange the elements in the array $r$ in any order; - rearrange the elements in the array $c$ in any order. However, after performing all of the operations, the intervals must still be valid (i.e., for each $i$, $l_i < r_i$ must hold). What's the minimum possible sum of weights of the intervals after performing the operations?
Assign bigger costs to shorter intervals. Solve the problem with $n = 2$. Solve the following case: $l = [1, 2]$, $r = [3, 4]$, $c = [1, 2]$. Can you generalize it? You have to match each $l_i$ with some $r_j > l_i$. Construct $v = \{l_1, l_2, \dots, l_n, r_1, r_2, \dots, r_n\}$ and sort it. If you replace every $l_i$ with the symbol $\texttt{(}$ and every $r_i$ with the symbol $\texttt{)}$, you get a regular bracket sequence (sketch of proof: $l_i < r_i$ for each $i$, so each prefix of symbols contains at least as many $\texttt{(}$ as $\texttt{)}$, so the bracket sequence is regular). Now match each $\texttt{(}$ with the corresponding $\texttt{)}$. You can show that this is the optimal way to rearrange the $l_i$ and the $r_i$. (From now on, let the $l_i$, $r_i$ and $c_i$ be the values after your rearrangement.) Proof: if you match the brackets in any other way, you get two intervals whose intersection is non-empty but different from both intervals (i.e., you get $l_i < l_j < r_i < r_j$). You have also assigned some cost $c_i$ to $[l_i, r_i]$ and $c_j$ to $[l_j, r_j]$. Without loss of generality, $c_i \leq c_j$ (the other case is symmetrical). If you swap $r_i$ and $r_j$, the cost does not increase. Keep swapping endpoints until you get the "regular" bracket matching. You can show that the process ends in a finite number of steps: for example, $\sum (r_i - l_i)^2$ strictly increases after each step, and it is an integer $\leq \sum r_i^2$. Now, you can get the minimum cost by sorting the intervals by increasing length and sorting the $c_i$ in decreasing order. Alternative (more intuitive?) interpretation: if you solve the problem with $n = 2$ and try to generalize, you can notice that it seems optimal to match every $r_i$ with the largest unused $l_i$ (if you iterate over the $r_i$ in increasing order). You can implement the solution by using either a stack (to simulate the bracket matching) or a set (to find the largest unused $l_i$). Complexity: $O(n \log n)$
[ "constructive algorithms", "data structures", "dsu", "greedy", "math", "sortings" ]
1400
#include <bits/stdc++.h> using namespace std; #define nl "\n" #define nf endl #define ll long long #define pb push_back #define _ << ' ' << #define INF (ll)1e18 #define mod 998244353 #define maxn 110 int main() { ios::sync_with_stdio(0); cin.tie(0); #if !ONLINE_JUDGE && !EVAL ifstream cin("input.txt"); ofstream cout("output.txt"); #endif ll t; cin >> t; while (t--) { ll n; cin >> n; vector<array<ll, 2>> events; vector<ll> c(n + 1, 0); for (ll i = 1; i <= n; i++) { ll l; cin >> l; events.pb({l, 0}); } for (ll i = 1; i <= n; i++) { ll r; cin >> r; events.pb({r, 1}); } for (ll i = 1; i <= n; i++) cin >> c[i]; sort(events.begin(), events.end()); sort(c.begin() + 1, c.end()); vector<ll> st; vector<ll> lengths = {0}; for (auto [pos, type] : events) { if (type == 0) { st.pb(pos); } else { assert(!st.empty()); lengths.pb(pos - st.back()); st.pop_back(); } } sort(lengths.begin() + 1, lengths.end()); reverse(lengths.begin() + 1, lengths.end()); ll ans = 0; for (ll i = 1; i <= n; i++) ans += lengths[i] * c[i]; cout << ans << nl; } return 0; }
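The bracket-matching construction plus the final sorting step can be sketched in a few lines of Python (the function name is mine):

```python
def min_total_weight(l, r, c):
    # sweep all endpoints in increasing order; each right endpoint is
    # matched with the most recently seen unmatched left endpoint
    events = sorted([(x, 0) for x in l] + [(x, 1) for x in r])
    stack, lengths = [], []
    for pos, typ in events:
        if typ == 0:
            stack.append(pos)
        else:
            lengths.append(pos - stack.pop())
    # pair the longest intervals with the smallest costs
    return sum(L * w for L, w in zip(sorted(lengths, reverse=True), sorted(c)))
```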
1909
D
Split Plus K
\begin{quote} eliteLAQ - Desert Ruins \hfill ⠀ \end{quote} There are $n$ positive integers $a_1, a_2, \dots, a_n$ on a blackboard. You are also given a positive integer $k$. You can perform the following operation some (possibly $0$) times: - choose a number $x$ on the blackboard; - erase one occurrence of $x$; - write two \textbf{positive} integers $y$, $z$ such that $y+z = x+k$ on the blackboard. Is it possible to make all the numbers on the blackboard equal? If yes, what is the minimum number of operations you need?
Solve the problem with $k = 0$. Solve the problem with generic $k$. When is the answer $-1$? Do you notice any similarities between the cases with $k = 0$ and with generic $k$? Consider the "shifted" problem, where each $x$ on the blackboard (at any moment) is replaced with $x' = x-k$. Now, the operation becomes "replace $x$ with $y+z$ such that $y+z = x+k \implies (y'+k)+(z'+k) = (x'+k)+k \implies y'+z' = x'$". Therefore, in the shifted problem, $k' = 0$. Now you can replace every $a'_i := a_i - k$ with any number of values whose sum is $a'_i$, and the answer is the number of values on the blackboard at the end, minus $n$. If we want to make all the values equal to $m'$, it must be a divisor of every $a'_i$. If all the $a'_i$ are positive, it is optimal to choose $m' = \gcd(a'_i)$. If all the $a'_i$ are zero, the answer is $0$. If all the $a'_i$ are negative, it is optimal to choose $m' = -\gcd(-a'_i)$. Otherwise, the answer is $-1$. Alternative way to get this result: you have to split each $a_i$ into $p_i$ pieces equal to $m$, and their sum must be equal to $a_i + k(p_i - 1) = (a_i - k) + kp_i = mp_i$. Then, $(a_i - k) = (m - k)p_i$, so $m' = m-k$ must be a divisor of every $a'_i = a_i - k$. In both the positive and the negative case, you will only write positive elements (in the original setup), as wanted. If all the $a'_i$ are positive, the numbers you write on the shifted blackboard are positive, so they are also positive on the original blackboard. If all the $a'_i$ are negative, the numbers you write on the shifted blackboard are greater than the numbers you erase, so they are greater than the numbers in the input (and positive) on the original blackboard. Complexity: $O(n + \log(\max a_i))$
[ "greedy", "math", "number theory" ]
1900
#include <bits/stdc++.h> using namespace std; #define nl "\n" #define nf endl #define ll long long #define pb push_back #define _ << ' ' << #define INF (ll)1e18 #define mod 998244353 #define maxn 110 int main() { ios::sync_with_stdio(0); cin.tie(0); #if !ONLINE_JUDGE && !EVAL ifstream cin("input.txt"); ofstream cout("output.txt"); #endif ll t; cin >> t; while (t--) { ll n, k; cin >> n >> k; vector<ll> a(n + 1, 0); ll tot = 0, pos = 0, zero = 0, neg = 0; for (ll i = 1; i <= n; i++) { cin >> a[i]; a[i] -= k; tot += a[i]; if (a[i] > 0) pos = 1; else if (a[i] == 0) zero = 1; else neg = 1; } if (pos + zero + neg >= 2) { cout << -1 << nl; continue; } if (zero == 1) { cout << 0 << nl; continue; } ll g = 0; for (ll i = 1; i <= n; i++) g = __gcd(g, abs(a[i])); ll ans = abs(tot) / g - n; cout << ans << nl; } return 0; }
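The case analysis above can be sketched compactly (a hypothetical re-implementation, not the official code):

```python
from math import gcd
from functools import reduce

def min_operations(a, k):
    b = [x - k for x in a]        # shift: now a split preserves the total
    if all(x == 0 for x in b):
        return 0
    if min(b) <= 0 <= max(b):     # mixed signs, or a zero among nonzeros
        return -1
    g = reduce(gcd, (abs(x) for x in b))
    # each a'_i is cut into |a'_i| / g equal pieces; ops = total pieces - n
    return sum(abs(x) for x in b) // g - len(b)
```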
1909
E
Multiple Lamps
\begin{quote} Kid2Will - Fire Aura \hfill ⠀ \end{quote} You have $n$ lamps, numbered from $1$ to $n$. Initially, all the lamps are turned off. You also have $n$ buttons. The $i$-th button toggles all the lamps whose index is a multiple of $i$. When a lamp is toggled, if it was off it turns on, and if it was on it turns off. You have to press some buttons according to the following rules. - You have to press at least one button. - You cannot press the same button multiple times. - You are given $m$ pairs $(u_i, v_i)$. If you press the button $u_i$, you also have to press the button $v_i$ (at any moment, not necessarily after pressing the button $u_i$). Note that, if you press the button $v_i$, you don't need to press the button $u_i$. You don't want to waste too much electricity. Find a way to press buttons such that at the end at most $\lfloor n/5 \rfloor$ lamps are on, or print $-1$ if it is impossible.
Find a strategy that turns "few" lamps on in most cases. Pressing all the buttons turns $\lfloor \sqrt n \rfloor$ lamps on. If the strategy in Hint 2 does not work, at most $3$ lamps must be on at the end. Iterate over all subsets of at most $3$ lamps that must be on at the end. If you press all the buttons, lamp $i$ is toggled by all the divisors of $i$, so it will be on if $i$ has an odd number of divisors, i.e., if $i$ is a perfect square. Then, the strategy of pressing all the buttons works if $\lfloor \sqrt n \rfloor \leq \lfloor n/5 \rfloor$, which is true for $n \geq 20$. If $n \leq 19$, at most $\lfloor 19/5 \rfloor = 3$ lamps must be turned on at the end. If you know which lamps must be turned on at the end, you can iterate over the buttons from $1$ to $n$ and press button $i$ if and only if lamp $i$ is in the wrong state. So you can iterate over all subsets of at most $3$ lamps, and check if the corresponding choice of buttons is valid (i.e., the $m$ constraints hold). You can remove a $\log n$ factor by precalculating the choice of buttons for all small subsets before running the testcases. Complexity for each test: $O(\sum n + (\sum m \cdot k^{2k-4}))$ (if $\lfloor n/k \rfloor$ lamps must be on; in this case, $k = 5$).
[ "bitmasks", "brute force", "constructive algorithms", "math", "number theory" ]
2400
#include <bits/stdc++.h> using namespace std; #define nl "\n" #define nf endl #define ll long long #define pb push_back #define _ << ' ' << #define INF (ll)1e18 #define mod 998244353 #define maxn 110 #define DEN 5 int main() { ios::sync_with_stdio(0); cin.tie(0); #if !ONLINE_JUDGE && !EVAL ifstream cin("input.txt"); ofstream cout("output.txt"); #endif ll t; cin >> t; while (t--) { ll n, m; cin >> n >> m; vector<vector<ll>> adj_on(n + 1), adj_off(n + 1); for (ll i = 1; i <= m; i++) { ll a, b; cin >> a >> b; if (a > b) adj_on[a].pb(b); else adj_off[b].pb(a); } if (n >= 20) { cout << n << nl; for (ll i = 1; i <= n; i++) cout << i << ' '; cout << nl; continue; } ll ans = -1, sol_count = 0; auto solve = [&](ll lamps_target) { ll lamps = 0, buttons = 0, flag = 1; for (ll i = 1; i <= n; i++) { if (((lamps ^ lamps_target) >> i) & 1) { buttons ^= (1 << i); for (ll j = i; j <= n; j += i) lamps ^= (1 << j); } if ((buttons >> i) & 1) { for (auto j : adj_on[i]) { if (((buttons >> j) & 1) == 0) flag = 0; } } else { for (auto j : adj_off[i]) { if ((buttons >> j) & 1) flag = 0; } } } if (flag == 0) return; if (buttons == 0) return; if (__builtin_popcount(lamps_target) > n / 5) return; ans = buttons; sol_count++; }; for (ll i = 1; i <= n; i++) { solve((1 << i)); for (ll j = i + 1; j <= n; j++) { solve((1 << i) + (1 << j)); for (ll k = j + 1; k <= n; k++) { solve((1 << i) + (1 << j) + (1 << k)); } } } // cout << sol_count << nl; if (ans == -1) { cout << -1 << nl; } else { cout << __builtin_popcountll(ans) << nl; for (ll i = 1; i <= n; i++) { if ((ans >> i) & 1) cout << i << ' '; } cout << nl; } } return 0; }
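The key fact — pressing every button leaves exactly the perfect-square lamps on — is easy to verify by direct simulation (a standalone check, not part of the solution):

```python
def on_after_all_buttons(n):
    lamp = [False] * (n + 1)
    for i in range(1, n + 1):         # press button i
        for j in range(i, n + 1, i):  # it toggles every multiple of i
            lamp[j] = not lamp[j]
    # lamp i ends up on iff i has an odd number of divisors,
    # i.e. iff i is a perfect square
    return [i for i in range(1, n + 1) if lamp[i]]
```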
1909
F2
Small Permutation Problem (Hard Version)
\begin{quote} Andy Tunstall - MiniBoss \hfill ⠀ \end{quote} \textbf{In the easy version, the $a_i$ are in the range $[0, n]$; in the hard version, the $a_i$ are in the range $[-1, n]$ and the definition of good permutation is slightly different. You can make hacks only if all versions of the problem are solved.} You are given an integer $n$ and an array $a_1, a_2, \dots, a_n$ of integers in the range $[-1, n]$. A permutation $p_1, p_2, \dots, p_n$ of $[1, 2, \dots, n]$ is good if, for each $i$, the following condition is true: - if $a_i \neq -1$, the number of values $\leq i$ in $[p_1, p_2, \dots, p_i]$ is exactly $a_i$. Count the good permutations of $[1, 2, \dots, n]$, modulo $998\,244\,353$.
Find a clean way to visualize the problem. Draw a $n \times n$ grid, with tokens in $(i, p_i)$. Which constraints on the tokens do you have? You have some "L" shapes, and each of them must contain a fixed number of tokens (in 1909F1 - Small Permutation Problem (Easy Version) the shapes are $1$ cell wide and must contain at most $2$ tokens). Iterate over the shapes in increasing order of $i$. Split each shape into $2$ rectangles and iterate over the number of tokens in the first rectangle. Draw a $n \times n$ grid, with tokens in $(i, p_i)$. Consider any $a_i \neq -1$ and the nearest $a_j \neq -1$ on the left (if it does not exist, let's set $j = a_j = 0$). Then, there must be $a_i$ tokens in the subgrid $[1, i] \times [1, i]$. We can suppose we have already inserted the $a_j$ tokens in $[1, j] \times [1, j]$, and we have to insert $a_i - a_j$ tokens in the remaining cells of $[1, i] \times [1, i]$ (they make an "L" shape). WLOG, the $a_j$ tokens in $[1, j] \times [1, j]$ are in the cells $(1, 1), \dots, (a_j, a_j)$. Then, we can put tokens in the blue cells in the picture. The blue shape can be further split into these two rectangles: Iterate over $k$, the number of tokens in the first rectangle ($0 \leq k \leq a_i - a_j$). Then, you have to insert $k$ tokens into a $h_1 \times w_1$ rectangle, and the remaining $a_i - a_j - k$ tokens into a $h_2 \times (w_2 - k)$ rectangle. The number of ways to insert $k$ tokens into a $h \times w$ rectangle is equal to the product of the number of ways to choose $k$ rows, the number of ways to choose $k$ columns, and the number of ways to fill the resulting $k \times k$ subgrid: the result is $\binom{h}{k} \binom{w}{k} k!$. $a_n = n$ automatically (if $a_n \neq n$ and $a_n \neq -1$, the answer is $0$). If the non-negative $a_i$ are non-decreasing, the sum of the $a_i - a_j + 1$ (i.e., the $k$ over which you have to iterate) is $O(n)$, so the algorithm is efficient enough. Otherwise, the answer is $0$. Complexity: $O(n)$
[ "combinatorics", "dp", "math" ]
2500
#include <bits/stdc++.h> using namespace std; #define nl "\n" #define nf endl #define ll long long #define pb push_back #define _ << ' ' << #define INF (ll)1e18 #define mod 998244353 #define maxn 200010 ll fc[maxn], nv[maxn]; ll fxp(ll b, ll e) { ll r = 1, k = b; while (e != 0) { if (e % 2) r = (r * k) % mod; k = (k * k) % mod; e /= 2; } return r; } ll inv(ll x) { return fxp(x, mod - 2); } ll bnm(ll a, ll b) { if (a < b || b < 0) return 0; ll r = (fc[a] * nv[b]) % mod; r = (r * nv[a - b]) % mod; return r; } int main() { ios::sync_with_stdio(0); cin.tie(0); #if !ONLINE_JUDGE && !EVAL ifstream cin("input.txt"); ofstream cout("output.txt"); #endif fc[0] = 1; nv[0] = 1; for (ll i = 1; i < maxn; i++) { fc[i] = (i * fc[i - 1]) % mod; nv[i] = inv(fc[i]); } ll t; cin >> t; while (t--) { ll n; cin >> n; vector<ll> a(n + 1, 0); for (ll i = 1; i <= n; i++) { cin >> a[i]; } if (a[n] != -1 && a[n] != n) { cout << 0 << nl; continue; } a[n] = n; auto fill_rect = [&](ll h, ll w, ll x) { ll ans = bnm(h, x) * bnm(w, x) % mod * fc[x] % mod; return ans; }; ll ans = 1; ll curr_side = 0, curr_inserted = 0; for (ll i = 1; i <= n; i++) { if (a[i] == -1) continue; ll to_insert = a[i] - curr_inserted; ll up_h = curr_side - curr_inserted; ll up_w = i - curr_side; ll down_h = up_w; ll down_w = i - curr_inserted; if (to_insert < 0) { ans = 0; break; } ll partial = 0; for (ll j = 0; j <= to_insert; j++) { partial = (partial + fill_rect(up_h, up_w, j) * fill_rect(down_h, down_w - j, to_insert - j) % mod) % mod; } ans = (ans * partial) % mod; curr_side = i; curr_inserted = a[i]; } cout << ans << nl; } return 0; }
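The counting primitive $\binom{h}{k}\binom{w}{k}k!$ used in the editorial — the number of ways to place $k$ tokens in an $h \times w$ rectangle with no two sharing a row or column — can be sketched without the modular arithmetic:

```python
from math import comb, factorial

def fill_rect(h, w, x):
    # choose x of the h rows, choose x of the w columns,
    # then match chosen rows to chosen columns in x! ways
    return comb(h, x) * comb(w, x) * factorial(x)
```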
1909
G
Pumping Lemma
You are given two strings $s$, $t$ of length $n$, $m$, respectively. Both strings consist of lowercase letters of the English alphabet. Count the triples $(x, y, z)$ of strings such that the following conditions are true: - $s = x+y+z$ (the symbol $+$ represents the concatenation); - $t = x+\underbrace{ y+\dots+y }_{k \text{ times}} + z$ for some integer $k$.
If you remove the condition $s = x+y+z$, the problem becomes harder. Solve for a fixed $|y|$ (length of $y$). Suppose you have found a valid $y$. Shift it one position to the right. When is it still valid? The valid $y$ with $|y| = l$ start in consecutive positions. Using the same idea as in the proof of Hint 4, you can find all the valid $y$.

Let's use $s[l, r]$ to indicate the substring $[l, r]$ of $s$, and $(a, b)$ to indicate the triple of strings $(s[1, a], s[a+1, b], s[b+1, n])$.

Suppose $(a, b)$ is valid. Then, $(a+1, b+1)$ is valid if and only if $s_{b+1} = t_{b+1}$. If $(a, b)$ is valid, $s[1, b] = t[1, b]$; so if $s_b \neq t_b$, $(a+k, b+k)$ is invalid for $k \geq 0$. Therefore, if $(a, b)$ is the first valid pair with $b-a=l$ (i.e., with $|y| = l$), and $k$ is the smallest positive integer such that $(a+k, b+k)$ is invalid, then $s_{b+k} \neq t_{b+k}$, so the only valid pairs with $|y| = l$ are the $(a+j, b+j)$ with $0 \leq j < k$ (i.e., the valid $y$ with $|y| = l$ start in consecutive positions).

Now let's find all the valid $y$ with $|y| = l$. Suppose $(a, b)$ is valid, and $c$ is the minimum index such that $s_c \neq t_c$. Then, $b < c$, and $(a+k, b+k)$ is valid for all $b \leq b+k < c$. Similarly, if $d$ is the minimum integer such that $s_{n-d} \neq t_{m-d}$, then $(a+k, b+k)$ is valid for all $n-d < a+k \leq a$, i.e., $n-d+l < b+k \leq b$. Therefore, it's enough to check only one pair for each length, with $b$ in $[n-d+l+1, c-1]$ (because either all these pairs are valid or they are all invalid). This is possible by precomputing the rolling hashes of the two strings. Alternatively, you can use the z-function.

Complexity: $O(n)$
[ "hashing", "strings" ]
3,000
#include <bits/stdc++.h> using namespace std; #define nl "\n" #define nf endl #define ll long long #define pb push_back #define _ << ' ' << #define INF (ll)1e18 #define mod 998244353 #define maxn 110 int main() { ios::sync_with_stdio(0); cin.tie(0); #if !ONLINE_JUDGE && !EVAL ifstream cin("input.txt"); ofstream cout("output.txt"); #endif ll N, M; cin >> N >> M; string S, T; cin >> S >> T; const ll p = 31; const ll m[2] = {(ll)1e9 + 7, (ll)1e9 + 9}; vector<array<ll, 2>> p_pow(M + 1, {1, 1}); for (ll i = 0; i < M; i++) { for (ll j = 0; j < 2; j++) { p_pow[i + 1][j] = (p_pow[i][j] * p) % m[j]; } } auto compute_hash = [&](string s) { ll n = s.size(); vector<array<ll, 2>> hash_value((ll)s.size() + 1, {0, 0}); for (ll i = 0; i < n; i++) { char c = s[i]; for (ll j = 0; j < 2; j++) { hash_value[i + 1][j] = (hash_value[i][j] + (c - 'a' + 1) * p_pow[i][j]) % m[j]; } } return hash_value; }; vector<array<ll, 2>> s_hash = compute_hash(S); vector<array<ll, 2>> t_hash = compute_hash(T); auto calc = [&](ll type, ll l, ll r) { array<ll, 2> ans = {0, 0}; vector<array<ll, 2>> *hash; if (type == 0) hash = &s_hash; else hash = &t_hash; for (ll j = 0; j < 2; j++) { ans[j] = ((*hash)[r + 1][j] - (*hash)[l][j] + m[j]) % m[j]; ans[j] = (p_pow[M - l][j] * ans[j]) % m[j]; } return ans; }; auto compare = [&](ll t0, ll l0, ll r0, ll t1, ll l1, ll r1) { assert(r0 - l0 == r1 - l1); return calc(t0, l0, r0) == calc(t1, l1, r1); }; ll diff = N; for (ll i = 0; i < N; i++) { if (S[i] != T[i]) { diff = i; break; } } ll diff_alt = -1; for (ll i = N - 1; i >= 0; i--) { if (S[i] != T[i + (M - N)]) { diff_alt = i; break; } } ll ans = 0; for (ll yl = 1; yl <= M; yl++) { if ((M - N) % yl != 0) continue; auto check = [&](ll xl) { ll zl = N - xl - yl; if (zl < 0) return false; ll ykl = M - xl - zl; if (!compare(0, 0, xl - 1, 1, 0, xl - 1)) return false; if (!compare(0, N - zl, N - 1, 1, M - zl, M - 1)) return false; if (!compare(0, xl, xl + yl - 1, 1, xl, xl + yl - 1)) return false; if (!compare(1, xl, xl + 
ykl - yl - 1, 1, xl + yl, xl + ykl - 1)) return false; return true; }; if (check(diff_alt + 1)) { ans += max((ll)0, diff - diff_alt - yl); } } cout << ans << nl; return 0; }
1909
H
Parallel Swaps Sort
You are given a permutation $p_1, p_2, \dots, p_n$ of $[1, 2, \dots, n]$. You can perform the following operation some (possibly $0$) times: - choose a subarray $[l, r]$ of even length; - swap $p_l$, $p_{l+1}$; - swap $p_{l+2}$, $p_{l+3}$ (if $l+3 \leq r$); - $\dots$ - swap $p_{r-1}$, $p_r$. Sort the permutation in at most $10^6$ operations. You do not need to minimize the number of operations.
Find a strategy which is as simple and "easy to handle" as possible. Only perform operations such that all swapped pairs have $a_i > a_{i+1}$; let's call such subarrays "swappable". First, for each $i$ from left to right, do the operation on $[j, i]$, where $j$ is the minimum index such that $[j, i]$ is swappable. Repeat the same algorithm from right to left. After Hints 3 and 4, the array is sorted. Prove it (it will be useful). Assign $\texttt{B}$ to the indices $i$ such that $a_i < a_{i-1}$ and $\texttt{A}$ to the other indices. During the process in Hint 3, after the operation on index $i$, what properties do the $\texttt{A}$s and $\texttt{B}$s have in the prefix $[1, i]$? Answer to Hint 6: the values of type $\texttt{A}$ are increasing; there are no two consecutive elements of type $\texttt{B}$. The rest of the proof (i.e., what happens during the process in Hint 4) is relatively easy. Now let's find an efficient implementation. We have to use the properties in Hint 7. You have to find the longest $\texttt{AB} \dots \texttt{AB}$ ending in position $i$, and perform the operation on it. What happens during the operation? An $\texttt{A}$ will always remain an $\texttt{A}$. For each $\texttt{B}$, you have to detect when it becomes an $\texttt{A}$; you can precompute the number of moves needed to make it an $\texttt{A}$. For example, you can use a segment tree with the following information: the type of each element, the positions of the elements of type $\texttt{B}$, and the number of moves required for each $\texttt{B}$ to become an $\texttt{A}$.

Let's only perform operations such that all swapped pairs have $a_i > a_{i+1}$. Let's call such subarrays "swappable". First, for each $i$ from left to right, do the operation on $[j, i]$, where $j$ is the minimum index such that $[j, i]$ is swappable (let's call it "operation $1.i$"). Then, for each $i$ from right to left, do the operation on $[j, i]$, where $j$ is the minimum index such that $[j, i]$ is swappable (let's call it "operation $2.i$"). After these operations, the array is sorted. Let's prove it.

Assign $\texttt{B}$ to the indices $i$ such that $a_i < a_{i-1}$ and $\texttt{A}$ to the other indices. After operation $1.i$, only assign letters in the prefix $[1, i]$ and ignore the other elements. During the operations $2.i$, assign letters to all the elements. Most of the following proofs are by induction.

After the operation $1.i$ (supposing the properties were true after the operation $1.(i-1)$):

- An element of type $\texttt{A}$ will always remain of type $\texttt{A}$. Proof: the only elements of type $\texttt{A}$ whose previous element changes are the ones in the subarray $[j, i]$, and they are swapped with a smaller element of type $\texttt{B}$.
- There are no two consecutive elements of type $\texttt{B}$. Proof: if you swap $[j, i]$, $p_{j-1}$ (if it exists) must be of type $\texttt{A}$ (otherwise $[j-2, i]$ would be swappable).
- The elements of type $\texttt{A}$ are increasing. Proof: it's true if no $\texttt{B}$s become $\texttt{A}$s, and it's also true if some $\texttt{B}$s become $\texttt{A}$s, because each of them is adjacent to two $\texttt{A}$s.

After the operation $2.i$:

- The three properties above are still true.
- The suffix $[i, n]$ contains the values in $[i, n]$ in order. Proof: $a_i$ is an $\texttt{A}$, so it must be the largest $p_i$ in $[1, i]$, which is $i$.

Now let's understand how we can implement the algorithm. Example implementation: we maintain a segment tree whose $i$-th position contains information about the element which was initially $p_i$. Note that the relative position of the $\texttt{B}$s never changes: for example, if you want information about the last $k$ $\texttt{B}$s in the current permutation, and you search for them in the segment tree, you will find exactly the last $k$ $\texttt{B}$s, even though their indices will not correspond to the current indices.

We have to find the longest swappable subarray ending at $i$, which means we need the current positions of the $\texttt{B}$s. For each $\texttt{B}$ maintain the current position, and assume the position of all the $\texttt{A}$s is $0$. Also maintain, for each element, whether it is a $\texttt{B}$ or not. Note that the $\texttt{B}$s that are affected by each operation can be found in a suffix of the segment tree. In this way, finding the longest swappable subarray can be done with a binary search on the segment tree: since $\texttt{B}$s cannot be consecutive, you have to find the longest suffix such that the sum of the positions of the $\texttt{B}$s is the maximum possible (i.e., if there are $k$ $\texttt{B}$s and the last of them is in position $i$, the sum of their positions must be $k(i-k+1)$).

After finding the longest subarray in the segment tree, you have to perform the operation on it, i.e., subtract $1$ from all the nonzero positions. Some $\texttt{B}$s may become $\texttt{A}$s. How do we detect them? Since $\texttt{A}$s never become $\texttt{B}$s, a $\texttt{B}$ becomes an $\texttt{A}$ after it is swapped with all the elements greater than it on its left. So you can precompute the number of swaps that every $\texttt{B}$ needs to become an $\texttt{A}$, and put it in the segment tree as well. Again, the operation is "subtract $1$ from a range". Detecting new $\texttt{A}$s means detecting elements which need $0$ swaps to become $\texttt{A}$s. You can find them after each operation by traversing the segment tree (which must support "range min" on the number of swaps needed), and set their position to $0$ and the number of swaps needed to $\infty$.
Complexity: $2n-3$ moves, $O(n \log n)$ time.
[ "constructive algorithms", "data structures" ]
3,500
#include <bits/stdc++.h> #include <ext/pb_ds/assoc_container.hpp> #include <algorithm> #include <cassert> #include <functional> #include <vector> #ifdef _MSC_VER #include <intrin.h> #endif #if __cplusplus >= 202002L #include <bit> #endif namespace atcoder { namespace internal { #if __cplusplus >= 202002L using std::bit_ceil; #else unsigned int bit_ceil(unsigned int n) { unsigned int x = 1; while (x < (unsigned int)(n)) x *= 2; return x; } #endif int countr_zero(unsigned int n) { #ifdef _MSC_VER unsigned long index; _BitScanForward(&index, n); return index; #else return __builtin_ctz(n); #endif } constexpr int countr_zero_constexpr(unsigned int n) { int x = 0; while (!(n & (1 << x))) x++; return x; } } // namespace internal } // namespace atcoder namespace atcoder { #if __cplusplus >= 201703L template <class S, auto op, auto e, class F, auto mapping, auto composition, auto id> struct lazy_segtree { static_assert(std::is_convertible_v<decltype(op), std::function<S(S, S)>>, "op must work as S(S, S)"); static_assert(std::is_convertible_v<decltype(e), std::function<S()>>, "e must work as S()"); static_assert( std::is_convertible_v<decltype(mapping), std::function<S(F, S)>>, "mapping must work as F(F, S)"); static_assert( std::is_convertible_v<decltype(composition), std::function<F(F, F)>>, "compostiion must work as F(F, F)"); static_assert(std::is_convertible_v<decltype(id), std::function<F()>>, "id must work as F()"); #else template <class S, S (*op)(S, S), S (*e)(), class F, S (*mapping)(F, S), F (*composition)(F, F), F (*id)()> struct lazy_segtree { #endif public: lazy_segtree() : lazy_segtree(0) {} explicit lazy_segtree(int n) : lazy_segtree(std::vector<S>(n, e())) {} explicit lazy_segtree(const std::vector<S>& v) : _n(int(v.size())) { size = (int)internal::bit_ceil((unsigned int)(_n)); log = internal::countr_zero((unsigned int)size); d = std::vector<S>(2 * size, e()); lz = std::vector<F>(size, id()); for (int i = 0; i < _n; i++) d[size + i] = v[i]; for (int i = 
size - 1; i >= 1; i--) { update(i); } } void set(int p, S x) { assert(0 <= p && p < _n); p += size; for (int i = log; i >= 1; i--) push(p >> i); d[p] = x; for (int i = 1; i <= log; i++) update(p >> i); } S get(int p) { assert(0 <= p && p < _n); p += size; for (int i = log; i >= 1; i--) push(p >> i); return d[p]; } S prod(int l, int r) { assert(0 <= l && l <= r && r <= _n); if (l == r) return e(); l += size; r += size; for (int i = log; i >= 1; i--) { if (((l >> i) << i) != l) push(l >> i); if (((r >> i) << i) != r) push((r - 1) >> i); } S sml = e(), smr = e(); while (l < r) { if (l & 1) sml = op(sml, d[l++]); if (r & 1) smr = op(d[--r], smr); l >>= 1; r >>= 1; } return op(sml, smr); } S all_prod() { return d[1]; } void apply(int p, F f) { assert(0 <= p && p < _n); p += size; for (int i = log; i >= 1; i--) push(p >> i); d[p] = mapping(f, d[p]); for (int i = 1; i <= log; i++) update(p >> i); } void apply(int l, int r, F f) { assert(0 <= l && l <= r && r <= _n); if (l == r) return; l += size; r += size; for (int i = log; i >= 1; i--) { if (((l >> i) << i) != l) push(l >> i); if (((r >> i) << i) != r) push((r - 1) >> i); } { int l2 = l, r2 = r; while (l < r) { if (l & 1) all_apply(l++, f); if (r & 1) all_apply(--r, f); l >>= 1; r >>= 1; } l = l2; r = r2; } for (int i = 1; i <= log; i++) { if (((l >> i) << i) != l) update(l >> i); if (((r >> i) << i) != r) update((r - 1) >> i); } } template <bool (*g)(S)> int max_right(int l) { return max_right(l, [](S x) { return g(x); }); } template <class G> int max_right(int l, G g) { assert(0 <= l && l <= _n); assert(g(e())); if (l == _n) return _n; l += size; for (int i = log; i >= 1; i--) push(l >> i); S sm = e(); do { while (l % 2 == 0) l >>= 1; if (!g(op(sm, d[l]))) { while (l < size) { push(l); l = (2 * l); if (g(op(sm, d[l]))) { sm = op(sm, d[l]); l++; } } return l - size; } sm = op(sm, d[l]); l++; } while ((l & -l) != l); return _n; } template <bool (*g)(S)> int min_left(int r) { return min_left(r, [](S x) { return g(x); }); 
} template <class G> int min_left(int r, G g) { assert(0 <= r && r <= _n); assert(g(e())); if (r == 0) return 0; r += size; for (int i = log; i >= 1; i--) push((r - 1) >> i); S sm = e(); do { r--; while (r > 1 && (r % 2)) r >>= 1; if (!g(op(d[r], sm))) { while (r < size) { push(r); r = (2 * r + 1); if (g(op(d[r], sm))) { sm = op(d[r], sm); r--; } } return r + 1 - size; } sm = op(d[r], sm); } while ((r & -r) != r); return 0; } private: int _n, size, log; std::vector<S> d; std::vector<F> lz; void update(int k) { d[k] = op(d[2 * k], d[2 * k + 1]); } void all_apply(int k, F f) { d[k] = mapping(f, d[k]); if (k < size) lz[k] = composition(f, lz[k]); } void push(int k) { all_apply(2 * k, lz[k]); all_apply(2 * k + 1, lz[k]); lz[k] = id(); } }; } // namespace atcoder using namespace std; using namespace __gnu_pbds; using namespace atcoder; #define nl "\n" #define nf endl #define ll long long #define pb push_back #define _ << ' ' << #define INF (ll)1e18 #define mod 998244353 #define maxn 110 typedef tree<ll, null_type, less<ll>, rb_tree_tag, tree_order_statistics_node_update> indexed_set; struct S { ll sum_B, sum_pos, min_moves; }; struct F { ll x, y, z; bool rset; }; S op(S l, S r) { return S{l.sum_B + r.sum_B, l.sum_pos + r.sum_pos, min(l.min_moves, r.min_moves)}; } S e() { return S{0, 0, INF}; } S mapping(F l, S r) { ll a = r.sum_B, b = r.sum_pos, c = r.min_moves; a += l.x; if (l.rset) b = l.y; else b += (l.y * (r.sum_B + l.x)); c += l.z; return S{a, b, c}; } F composition(F l, F r) { ll x = r.x, y = r.y, z = r.z; bool rset = r.rset; x += l.x; if (l.rset) y = l.y, rset = true; else y += l.y; z += l.z; return F{x, y, z, rset}; } F id() { return F{0, 0, 0, false}; } bool no_B(S u) { return (u.sum_B == 0); } ll last_pos = 0; bool swappable(S u) { return (u.sum_pos == (u.sum_B * (last_pos - u.sum_B + 1))); } bool no_zeros(S u) { return (u.min_moves != 0); } int main() { ios::sync_with_stdio(0); cin.tie(0); #if !ONLINE_JUDGE && !EVAL ifstream cin("input.txt"); ofstream 
cout("output.txt"); #endif ll n; cin >> n; vector<ll> p(n + 1, 0); for (ll i = 1; i <= n; i++) cin >> p[i]; lazy_segtree<S, op, e, F, mapping, composition, id> st( vector<S>(n + 1, e()) ); auto find_zeros = [&]() { while (true) { ll pos = st.min_left<no_zeros>(n + 1); if (pos == 0) break; st.apply(pos - 1, pos, F{-1, 0, INF, true}); } }; indexed_set pref; vector<ll> req_moves(n + 1, 0); for (ll i = 1; i <= n; i++) { req_moves[i] = pref.order_of_key(-p[i]); pref.insert(-p[i]); } vector<array<ll, 2>> ans; auto process_longest = [&](ll st_r) { last_pos = st.prod(st_r, st_r + 1).sum_pos; ll st_l = st.min_left<swappable>(n + 1); ll sum_B = st.prod(st_l, n + 1).sum_B; ans.pb({last_pos - 2 * sum_B + 1, last_pos}); st.apply(st_l, st_r + 1, F{0, -1, -1, false}); find_zeros(); }; for (ll i = 1; i <= n; i++) { if (req_moves[i] != 0) { st.apply(i, i + 1, F{1, i, req_moves[i] - INF, false}); process_longest(i); } } while (true) { ll r = st.min_left<no_B>(n + 1); if (r == 0) break; process_longest(r - 1); } cout << ans.size() << nl; for (auto [l, r] : ans) cout << l _ r << nl; return 0; }
1909
I
Short Permutation Problem
You are given an integer $n$. For each $(m, k)$ such that $3 \leq m \leq n+1$ and $0 \leq k \leq n-1$, count the permutations of $[1, 2, ..., n]$ such that $p_i + p_{i+1} \geq m$ for exactly $k$ indices $i$, modulo $998\,244\,353$.
Insert the elements into the permutation in some comfortable order. Suppose $m$ is even. You can insert elements in the order $[m/2, m/2 - 1, m/2 + 1, m/2 - 2, \dots]$. You can solve for a single $m$ with DP. Can you calculate the DP for multiple $m$ simultaneously? The first part of the DP (where you insert both "small" and "big" elements) is the same for each $m$ (but with a different length), and for the second part (where you only insert "small" elements) you don't need DP: it can be replaced with combinatorics formulas. Binomials are compatible with NTT.

Let's solve for a single $m$. Suppose $m$ is even. Start from an empty array, and insert the elements in the order $[m/2, m/2 - 1, m/2 + 1, m/2 - 2, \dots]$. At any moment, all the elements are concatenated, and you can insert new elements either at the beginning, at the end, or between two existing elements. When you insert an element $\geq m/2$, its sum with any of the previously inserted elements is $\geq m$; otherwise, the sum is $< m$. So you can calculate $dp_{i,j} =$ the number of ways to insert the first $i$ elements (of $[m/2, m/2 - 1, m/2 + 1, m/2 - 2, \dots]$) and make $j$ "good" pairs (with sum $\geq m$).

You can split the ordering $[m/2, m/2 - 1, m/2 + 1, m/2 - 2, \dots]$ into two parts: first, small and big elements alternate; then, there are only small elements. For the second part, you don't need DP. Suppose you have already inserted $i$ elements and there are $j$ good pairs, but you want $k$ good pairs once all elements are inserted. The number of ways to insert the remaining elements can be computed with combinatorics in $O(1)$ after precomputing factorials and inverses (you have to choose which pairs to break and use stars and bars; we skip the exact formulas because they are relatively easy to find).
If you rearrange the factorials correctly, all the answers for a fixed $m$ can be computed by multiplying two polynomials, one of which contains the $dp_{i,j}$ where $i$ is equal to the length of the "alternating" prefix. NTT is fast enough. Complexity: $O(n^2 \log n)$
[ "combinatorics", "dp", "fft", "math" ]
1,900
#include <algorithm> #include <array> #include <cassert> #include <type_traits> #include <vector> #ifdef _MSC_VER #include <intrin.h> #endif #if __cplusplus >= 202002L #include <bit> #endif namespace atcoder { namespace internal { #if __cplusplus >= 202002L using std::bit_ceil; #else unsigned int bit_ceil(unsigned int n) { unsigned int x = 1; while (x < (unsigned int)(n)) x *= 2; return x; } #endif int countr_zero(unsigned int n) { #ifdef _MSC_VER unsigned long index; _BitScanForward(&index, n); return index; #else return __builtin_ctz(n); #endif } constexpr int countr_zero_constexpr(unsigned int n) { int x = 0; while (!(n & (1 << x))) x++; return x; } } // namespace internal } // namespace atcoder #include <cassert> #include <numeric> #include <type_traits> #ifdef _MSC_VER #include <intrin.h> #endif #include <utility> #ifdef _MSC_VER #include <intrin.h> #endif namespace atcoder { namespace internal { constexpr long long safe_mod(long long x, long long m) { x %= m; if (x < 0) x += m; return x; } struct barrett { unsigned int _m; unsigned long long im; explicit barrett(unsigned int m) : _m(m), im((unsigned long long)(-1) / m + 1) {} unsigned int umod() const { return _m; } unsigned int mul(unsigned int a, unsigned int b) const { unsigned long long z = a; z *= b; #ifdef _MSC_VER unsigned long long x; _umul128(z, im, &x); #else unsigned long long x = (unsigned long long)(((unsigned __int128)(z)*im) >> 64); #endif unsigned long long y = x * _m; return (unsigned int)(z - y + (z < y ? 
_m : 0)); } }; constexpr long long pow_mod_constexpr(long long x, long long n, int m) { if (m == 1) return 0; unsigned int _m = (unsigned int)(m); unsigned long long r = 1; unsigned long long y = safe_mod(x, m); while (n) { if (n & 1) r = (r * y) % _m; y = (y * y) % _m; n >>= 1; } return r; } constexpr bool is_prime_constexpr(int n) { if (n <= 1) return false; if (n == 2 || n == 7 || n == 61) return true; if (n % 2 == 0) return false; long long d = n - 1; while (d % 2 == 0) d /= 2; constexpr long long bases[3] = {2, 7, 61}; for (long long a : bases) { long long t = d; long long y = pow_mod_constexpr(a, t, n); while (t != n - 1 && y != 1 && y != n - 1) { y = y * y % n; t <<= 1; } if (y != n - 1 && t % 2 == 0) { return false; } } return true; } template <int n> constexpr bool is_prime = is_prime_constexpr(n); constexpr std::pair<long long, long long> inv_gcd(long long a, long long b) { a = safe_mod(a, b); if (a == 0) return {b, 0}; long long s = b, t = a; long long m0 = 0, m1 = 1; while (t) { long long u = s / t; s -= t * u; m0 -= m1 * u; // |m1 * u| <= |m1| * s <= b auto tmp = s; s = t; t = tmp; tmp = m0; m0 = m1; m1 = tmp; } if (m0 < 0) m0 += b / s; return {s, m0}; } constexpr int primitive_root_constexpr(int m) { if (m == 2) return 1; if (m == 167772161) return 3; if (m == 469762049) return 3; if (m == 754974721) return 11; if (m == 998244353) return 3; int divs[20] = {}; divs[0] = 2; int cnt = 1; int x = (m - 1) / 2; while (x % 2 == 0) x /= 2; for (int i = 3; (long long)(i)*i <= x; i += 2) { if (x % i == 0) { divs[cnt++] = i; while (x % i == 0) { x /= i; } } } if (x > 1) { divs[cnt++] = x; } for (int g = 2;; g++) { bool ok = true; for (int i = 0; i < cnt; i++) { if (pow_mod_constexpr(g, (m - 1) / divs[i], m) == 1) { ok = false; break; } } if (ok) return g; } } template <int m> constexpr int primitive_root = primitive_root_constexpr(m); unsigned long long floor_sum_unsigned(unsigned long long n, unsigned long long m, unsigned long long a, unsigned long long b) { 
unsigned long long ans = 0; while (true) { if (a >= m) { ans += n * (n - 1) / 2 * (a / m); a %= m; } if (b >= m) { ans += n * (b / m); b %= m; } unsigned long long y_max = a * n + b; if (y_max < m) break; n = (unsigned long long)(y_max / m); b = (unsigned long long)(y_max % m); std::swap(m, a); } return ans; } } // namespace internal } // namespace atcoder #include <cassert> #include <numeric> #include <type_traits> namespace atcoder { namespace internal { #ifndef _MSC_VER template <class T> using is_signed_int128 = typename std::conditional<std::is_same<T, __int128_t>::value || std::is_same<T, __int128>::value, std::true_type, std::false_type>::type; template <class T> using is_unsigned_int128 = typename std::conditional<std::is_same<T, __uint128_t>::value || std::is_same<T, unsigned __int128>::value, std::true_type, std::false_type>::type; template <class T> using make_unsigned_int128 = typename std::conditional<std::is_same<T, __int128_t>::value, __uint128_t, unsigned __int128>; template <class T> using is_integral = typename std::conditional<std::is_integral<T>::value || is_signed_int128<T>::value || is_unsigned_int128<T>::value, std::true_type, std::false_type>::type; template <class T> using is_signed_int = typename std::conditional<(is_integral<T>::value && std::is_signed<T>::value) || is_signed_int128<T>::value, std::true_type, std::false_type>::type; template <class T> using is_unsigned_int = typename std::conditional<(is_integral<T>::value && std::is_unsigned<T>::value) || is_unsigned_int128<T>::value, std::true_type, std::false_type>::type; template <class T> using to_unsigned = typename std::conditional< is_signed_int128<T>::value, make_unsigned_int128<T>, typename std::conditional<std::is_signed<T>::value, std::make_unsigned<T>, std::common_type<T>>::type>::type; #else template <class T> using is_integral = typename std::is_integral<T>; template <class T> using is_signed_int = typename std::conditional<is_integral<T>::value && std::is_signed<T>::value, 
std::true_type, std::false_type>::type; template <class T> using is_unsigned_int = typename std::conditional<is_integral<T>::value && std::is_unsigned<T>::value, std::true_type, std::false_type>::type; template <class T> using to_unsigned = typename std::conditional<is_signed_int<T>::value, std::make_unsigned<T>, std::common_type<T>>::type; #endif template <class T> using is_signed_int_t = std::enable_if_t<is_signed_int<T>::value>; template <class T> using is_unsigned_int_t = std::enable_if_t<is_unsigned_int<T>::value>; template <class T> using to_unsigned_t = typename to_unsigned<T>::type; } // namespace internal } // namespace atcoder namespace atcoder { namespace internal { struct modint_base {}; struct static_modint_base : modint_base {}; template <class T> using is_modint = std::is_base_of<modint_base, T>; template <class T> using is_modint_t = std::enable_if_t<is_modint<T>::value>; } // namespace internal template <int m, std::enable_if_t<(1 <= m)>* = nullptr> struct static_modint : internal::static_modint_base { using mint = static_modint; public: static constexpr int mod() { return m; } static mint raw(int v) { mint x; x._v = v; return x; } static_modint() : _v(0) {} template <class T, internal::is_signed_int_t<T>* = nullptr> static_modint(T v) { long long x = (long long)(v % (long long)(umod())); if (x < 0) x += umod(); _v = (unsigned int)(x); } template <class T, internal::is_unsigned_int_t<T>* = nullptr> static_modint(T v) { _v = (unsigned int)(v % umod()); } unsigned int val() const { return _v; } mint& operator++() { _v++; if (_v == umod()) _v = 0; return *this; } mint& operator--() { if (_v == 0) _v = umod(); _v--; return *this; } mint operator++(int) { mint result = *this; ++*this; return result; } mint operator--(int) { mint result = *this; --*this; return result; } mint& operator+=(const mint& rhs) { _v += rhs._v; if (_v >= umod()) _v -= umod(); return *this; } mint& operator-=(const mint& rhs) { _v -= rhs._v; if (_v >= umod()) _v += umod(); return 
*this; } mint& operator*=(const mint& rhs) { unsigned long long z = _v; z *= rhs._v; _v = (unsigned int)(z % umod()); return *this; } mint& operator/=(const mint& rhs) { return *this = *this * rhs.inv(); } mint operator+() const { return *this; } mint operator-() const { return mint() - *this; } mint pow(long long n) const { assert(0 <= n); mint x = *this, r = 1; while (n) { if (n & 1) r *= x; x *= x; n >>= 1; } return r; } mint inv() const { if (prime) { assert(_v); return pow(umod() - 2); } else { auto eg = internal::inv_gcd(_v, m); assert(eg.first == 1); return eg.second; } } friend mint operator+(const mint& lhs, const mint& rhs) { return mint(lhs) += rhs; } friend mint operator-(const mint& lhs, const mint& rhs) { return mint(lhs) -= rhs; } friend mint operator*(const mint& lhs, const mint& rhs) { return mint(lhs) *= rhs; } friend mint operator/(const mint& lhs, const mint& rhs) { return mint(lhs) /= rhs; } friend bool operator==(const mint& lhs, const mint& rhs) { return lhs._v == rhs._v; } friend bool operator!=(const mint& lhs, const mint& rhs) { return lhs._v != rhs._v; } private: unsigned int _v; static constexpr unsigned int umod() { return m; } static constexpr bool prime = internal::is_prime<m>; }; template <int id> struct dynamic_modint : internal::modint_base { using mint = dynamic_modint; public: static int mod() { return (int)(bt.umod()); } static void set_mod(int m) { assert(1 <= m); bt = internal::barrett(m); } static mint raw(int v) { mint x; x._v = v; return x; } dynamic_modint() : _v(0) {} template <class T, internal::is_signed_int_t<T>* = nullptr> dynamic_modint(T v) { long long x = (long long)(v % (long long)(mod())); if (x < 0) x += mod(); _v = (unsigned int)(x); } template <class T, internal::is_unsigned_int_t<T>* = nullptr> dynamic_modint(T v) { _v = (unsigned int)(v % mod()); } unsigned int val() const { return _v; } mint& operator++() { _v++; if (_v == umod()) _v = 0; return *this; } mint& operator--() { if (_v == 0) _v = umod(); _v--; 
return *this; } mint operator++(int) { mint result = *this; ++*this; return result; } mint operator--(int) { mint result = *this; --*this; return result; } mint& operator+=(const mint& rhs) { _v += rhs._v; if (_v >= umod()) _v -= umod(); return *this; } mint& operator-=(const mint& rhs) { _v += mod() - rhs._v; if (_v >= umod()) _v -= umod(); return *this; } mint& operator*=(const mint& rhs) { _v = bt.mul(_v, rhs._v); return *this; } mint& operator/=(const mint& rhs) { return *this = *this * rhs.inv(); } mint operator+() const { return *this; } mint operator-() const { return mint() - *this; } mint pow(long long n) const { assert(0 <= n); mint x = *this, r = 1; while (n) { if (n & 1) r *= x; x *= x; n >>= 1; } return r; } mint inv() const { auto eg = internal::inv_gcd(_v, mod()); assert(eg.first == 1); return eg.second; } friend mint operator+(const mint& lhs, const mint& rhs) { return mint(lhs) += rhs; } friend mint operator-(const mint& lhs, const mint& rhs) { return mint(lhs) -= rhs; } friend mint operator*(const mint& lhs, const mint& rhs) { return mint(lhs) *= rhs; } friend mint operator/(const mint& lhs, const mint& rhs) { return mint(lhs) /= rhs; } friend bool operator==(const mint& lhs, const mint& rhs) { return lhs._v == rhs._v; } friend bool operator!=(const mint& lhs, const mint& rhs) { return lhs._v != rhs._v; } private: unsigned int _v; static internal::barrett bt; static unsigned int umod() { return bt.umod(); } }; template <int id> internal::barrett dynamic_modint<id>::bt(998244353); using modint998244353 = static_modint<998244353>; using modint1000000007 = static_modint<1000000007>; using modint = dynamic_modint<-1>; namespace internal { template <class T> using is_static_modint = std::is_base_of<internal::static_modint_base, T>; template <class T> using is_static_modint_t = std::enable_if_t<is_static_modint<T>::value>; template <class> struct is_dynamic_modint : public std::false_type {}; template <int id> struct 
is_dynamic_modint<dynamic_modint<id>> : public std::true_type {}; template <class T> using is_dynamic_modint_t = std::enable_if_t<is_dynamic_modint<T>::value>; } // namespace internal } // namespace atcoder namespace atcoder { namespace internal { template <class mint, int g = internal::primitive_root<mint::mod()>, internal::is_static_modint_t<mint>* = nullptr> struct fft_info { static constexpr int rank2 = countr_zero_constexpr(mint::mod() - 1); std::array<mint, rank2 + 1> root; // root[i]^(2^i) == 1 std::array<mint, rank2 + 1> iroot; // root[i] * iroot[i] == 1 std::array<mint, std::max(0, rank2 - 2 + 1)> rate2; std::array<mint, std::max(0, rank2 - 2 + 1)> irate2; std::array<mint, std::max(0, rank2 - 3 + 1)> rate3; std::array<mint, std::max(0, rank2 - 3 + 1)> irate3; fft_info() { root[rank2] = mint(g).pow((mint::mod() - 1) >> rank2); iroot[rank2] = root[rank2].inv(); for (int i = rank2 - 1; i >= 0; i--) { root[i] = root[i + 1] * root[i + 1]; iroot[i] = iroot[i + 1] * iroot[i + 1]; } { mint prod = 1, iprod = 1; for (int i = 0; i <= rank2 - 2; i++) { rate2[i] = root[i + 2] * prod; irate2[i] = iroot[i + 2] * iprod; prod *= iroot[i + 2]; iprod *= root[i + 2]; } } { mint prod = 1, iprod = 1; for (int i = 0; i <= rank2 - 3; i++) { rate3[i] = root[i + 3] * prod; irate3[i] = iroot[i + 3] * iprod; prod *= iroot[i + 3]; iprod *= root[i + 3]; } } } }; template <class mint, internal::is_static_modint_t<mint>* = nullptr> void butterfly(std::vector<mint>& a) { int n = int(a.size()); int h = internal::countr_zero((unsigned int)n); static const fft_info<mint> info; int len = 0; // a[i, i+(n>>len), i+2*(n>>len), ..] 
is transformed while (len < h) { if (h - len == 1) { int p = 1 << (h - len - 1); mint rot = 1; for (int s = 0; s < (1 << len); s++) { int offset = s << (h - len); for (int i = 0; i < p; i++) { auto l = a[i + offset]; auto r = a[i + offset + p] * rot; a[i + offset] = l + r; a[i + offset + p] = l - r; } if (s + 1 != (1 << len)) rot *= info.rate2[countr_zero(~(unsigned int)(s))]; } len++; } else { int p = 1 << (h - len - 2); mint rot = 1, imag = info.root[2]; for (int s = 0; s < (1 << len); s++) { mint rot2 = rot * rot; mint rot3 = rot2 * rot; int offset = s << (h - len); for (int i = 0; i < p; i++) { auto mod2 = 1ULL * mint::mod() * mint::mod(); auto a0 = 1ULL * a[i + offset].val(); auto a1 = 1ULL * a[i + offset + p].val() * rot.val(); auto a2 = 1ULL * a[i + offset + 2 * p].val() * rot2.val(); auto a3 = 1ULL * a[i + offset + 3 * p].val() * rot3.val(); auto a1na3imag = 1ULL * mint(a1 + mod2 - a3).val() * imag.val(); auto na2 = mod2 - a2; a[i + offset] = a0 + a2 + a1 + a3; a[i + offset + 1 * p] = a0 + a2 + (2 * mod2 - (a1 + a3)); a[i + offset + 2 * p] = a0 + na2 + a1na3imag; a[i + offset + 3 * p] = a0 + na2 + (mod2 - a1na3imag); } if (s + 1 != (1 << len)) rot *= info.rate3[countr_zero(~(unsigned int)(s))]; } len += 2; } } } template <class mint, internal::is_static_modint_t<mint>* = nullptr> void butterfly_inv(std::vector<mint>& a) { int n = int(a.size()); int h = internal::countr_zero((unsigned int)n); static const fft_info<mint> info; int len = h; // a[i, i+(n>>len), i+2*(n>>len), ..] 
is transformed while (len) { if (len == 1) { int p = 1 << (h - len); mint irot = 1; for (int s = 0; s < (1 << (len - 1)); s++) { int offset = s << (h - len + 1); for (int i = 0; i < p; i++) { auto l = a[i + offset]; auto r = a[i + offset + p]; a[i + offset] = l + r; a[i + offset + p] = (unsigned long long)(mint::mod() + l.val() - r.val()) * irot.val(); ; } if (s + 1 != (1 << (len - 1))) irot *= info.irate2[countr_zero(~(unsigned int)(s))]; } len--; } else { int p = 1 << (h - len); mint irot = 1, iimag = info.iroot[2]; for (int s = 0; s < (1 << (len - 2)); s++) { mint irot2 = irot * irot; mint irot3 = irot2 * irot; int offset = s << (h - len + 2); for (int i = 0; i < p; i++) { auto a0 = 1ULL * a[i + offset + 0 * p].val(); auto a1 = 1ULL * a[i + offset + 1 * p].val(); auto a2 = 1ULL * a[i + offset + 2 * p].val(); auto a3 = 1ULL * a[i + offset + 3 * p].val(); auto a2na3iimag = 1ULL * mint((mint::mod() + a2 - a3) * iimag.val()).val(); a[i + offset] = a0 + a1 + a2 + a3; a[i + offset + 1 * p] = (a0 + (mint::mod() - a1) + a2na3iimag) * irot.val(); a[i + offset + 2 * p] = (a0 + a1 + (mint::mod() - a2) + (mint::mod() - a3)) * irot2.val(); a[i + offset + 3 * p] = (a0 + (mint::mod() - a1) + (mint::mod() - a2na3iimag)) * irot3.val(); } if (s + 1 != (1 << (len - 2))) irot *= info.irate3[countr_zero(~(unsigned int)(s))]; } len -= 2; } } } template <class mint, internal::is_static_modint_t<mint>* = nullptr> std::vector<mint> convolution_naive(const std::vector<mint>& a, const std::vector<mint>& b) { int n = int(a.size()), m = int(b.size()); std::vector<mint> ans(n + m - 1); if (n < m) { for (int j = 0; j < m; j++) { for (int i = 0; i < n; i++) { ans[i + j] += a[i] * b[j]; } } } else { for (int i = 0; i < n; i++) { for (int j = 0; j < m; j++) { ans[i + j] += a[i] * b[j]; } } } return ans; } template <class mint, internal::is_static_modint_t<mint>* = nullptr> std::vector<mint> convolution_fft(std::vector<mint> a, std::vector<mint> b) { int n = int(a.size()), m = int(b.size()); int 
z = (int)internal::bit_ceil((unsigned int)(n + m - 1)); a.resize(z); internal::butterfly(a); b.resize(z); internal::butterfly(b); for (int i = 0; i < z; i++) { a[i] *= b[i]; } internal::butterfly_inv(a); a.resize(n + m - 1); mint iz = mint(z).inv(); for (int i = 0; i < n + m - 1; i++) a[i] *= iz; return a; } } // namespace internal template <class mint, internal::is_static_modint_t<mint>* = nullptr> std::vector<mint> convolution(std::vector<mint>&& a, std::vector<mint>&& b) { int n = int(a.size()), m = int(b.size()); if (!n || !m) return {}; int z = (int)internal::bit_ceil((unsigned int)(n + m - 1)); assert((mint::mod() - 1) % z == 0); if (std::min(n, m) <= 60) return convolution_naive(a, b); return internal::convolution_fft(a, b); } template <class mint, internal::is_static_modint_t<mint>* = nullptr> std::vector<mint> convolution(const std::vector<mint>& a, const std::vector<mint>& b) { int n = int(a.size()), m = int(b.size()); if (!n || !m) return {}; int z = (int)internal::bit_ceil((unsigned int)(n + m - 1)); assert((mint::mod() - 1) % z == 0); if (std::min(n, m) <= 60) return convolution_naive(a, b); return internal::convolution_fft(a, b); } template <unsigned int mod = 998244353, class T, std::enable_if_t<internal::is_integral<T>::value>* = nullptr> std::vector<T> convolution(const std::vector<T>& a, const std::vector<T>& b) { int n = int(a.size()), m = int(b.size()); if (!n || !m) return {}; using mint = static_modint<mod>; int z = (int)internal::bit_ceil((unsigned int)(n + m - 1)); assert((mint::mod() - 1) % z == 0); std::vector<mint> a2(n), b2(m); for (int i = 0; i < n; i++) { a2[i] = mint(a[i]); } for (int i = 0; i < m; i++) { b2[i] = mint(b[i]); } auto c2 = convolution(std::move(a2), std::move(b2)); std::vector<T> c(n + m - 1); for (int i = 0; i < n + m - 1; i++) { c[i] = c2[i].val(); } return c; } std::vector<long long> convolution_ll(const std::vector<long long>& a, const std::vector<long long>& b) { int n = int(a.size()), m = int(b.size()); if (!n || 
!m) return {}; static constexpr unsigned long long MOD1 = 754974721; // 2^24 static constexpr unsigned long long MOD2 = 167772161; // 2^25 static constexpr unsigned long long MOD3 = 469762049; // 2^26 static constexpr unsigned long long M2M3 = MOD2 * MOD3; static constexpr unsigned long long M1M3 = MOD1 * MOD3; static constexpr unsigned long long M1M2 = MOD1 * MOD2; static constexpr unsigned long long M1M2M3 = MOD1 * MOD2 * MOD3; static constexpr unsigned long long i1 = internal::inv_gcd(MOD2 * MOD3, MOD1).second; static constexpr unsigned long long i2 = internal::inv_gcd(MOD1 * MOD3, MOD2).second; static constexpr unsigned long long i3 = internal::inv_gcd(MOD1 * MOD2, MOD3).second; static constexpr int MAX_AB_BIT = 24; static_assert(MOD1 % (1ull << MAX_AB_BIT) == 1, "MOD1 isn't enough to support an array length of 2^24."); static_assert(MOD2 % (1ull << MAX_AB_BIT) == 1, "MOD2 isn't enough to support an array length of 2^24."); static_assert(MOD3 % (1ull << MAX_AB_BIT) == 1, "MOD3 isn't enough to support an array length of 2^24."); assert(n + m - 1 <= (1 << MAX_AB_BIT)); auto c1 = convolution<MOD1>(a, b); auto c2 = convolution<MOD2>(a, b); auto c3 = convolution<MOD3>(a, b); std::vector<long long> c(n + m - 1); for (int i = 0; i < n + m - 1; i++) { unsigned long long x = 0; x += (c1[i] * i1) % MOD1 * M2M3; x += (c2[i] * i2) % MOD2 * M1M3; x += (c3[i] * i3) % MOD3 * M1M2; long long diff = c1[i] - internal::safe_mod((long long)(x), (long long)(MOD1)); if (diff < 0) diff += MOD1; static constexpr unsigned long long offset[5] = { 0, 0, M1M2M3, 2 * M1M2M3, 3 * M1M2M3}; x -= offset[diff % 5]; c[i] = x; } return c; } } // namespace atcoder #include <bits/stdc++.h> using namespace std; using namespace atcoder; using mint = modint998244353; #define int long long #define ll long long #define ii pair<int,int> #define iii tuple<int,int,int> #define fi first #define se second #define endl '\n' #define debug(x) cout << #x << ": " << x << endl #define pub push_back #define pob 
pop_back #define puf push_front #define pof pop_front #define lb lower_bound #define ub upper_bound #define rep(x,start,end) for(int x=(start)-((start)>(end));x!=(end)-((start)>(end));((start)<(end)?x++:x--)) #define all(x) (x).begin(),(x).end() #define sz(x) (int)(x).size() mt19937 rng(chrono::system_clock::now().time_since_epoch().count()); const int MOD=998244353; const int HASH_MOD = 1000000007; ll qexp(ll b,ll p,int m){ ll res=1; while (p){ if (p&1) res=(res*b)%m; b=(b*b)%m; p>>=1; } return res; } ll inv(ll i){ return qexp(i,MOD-2,MOD); } ll fix(ll i){ i%=MOD; if (i<0) i+=MOD; return i; } ll fac[1000005]; ll ifac[1000005]; ll nCk(int i,int j){ if (i<j) return 0; return fac[i]*ifac[j]%MOD*ifac[i-j]%MOD; } int n, X; int memo[4005][4005]; int ans[4005][4005]; signed main(){ ios::sync_with_stdio(0); cin.tie(0); cout.tie(0); cin.exceptions(ios::badbit | ios::failbit); fac[0]=1; rep(x,1,1000005) fac[x]=fac[x-1]*x%MOD; ifac[1000004]=inv(fac[1000004]); rep(x,1000005,1) ifac[x-1]=ifac[x]*x%MOD; cin>>n>>X; memo[1][0]=1; rep(x,1,n+1){ rep(y,0,x){ //y 1 gaps, x-y-1 0 gaps if (y) memo[x+1][y-1+2*(x%2)]=(memo[x+1][y-1+2*(x%2)]+y*memo[x][y])%MOD; memo[x+1][y+2*(x%2)]=(memo[x+1][y+2*(x%2)]+(x-y-1)*memo[x][y])%MOD; memo[x+1][y+(x%2)]=(memo[x+1][y+(x%2)]+2*memo[x][y])%MOD; } } rep(x,2,n+1){ if (x%2==0) reverse(memo[x],memo[x]+x); vector<int> a; rep(y,x-1,0) a.pub(memo[x][y]*fac[y]%MOD*fac[n-y]%MOD); vector<int> b; rep(y,0,n-x+1) b.pub(ifac[y]*ifac[n-x-y]%MOD); a=convolution(a,b); rep(y,0,x-1) ans[x][y]=a[x-2-y]*ifac[y]%MOD*ifac[x-y]%MOD; int tot=1; rep(y,1,n-x+1) tot=tot*y%MOD; rep(y,0,x-1) ans[x][y]=ans[x][y]*tot%MOD; } ll hash = 0; ll pow = 1; for (ll i = 1; i <= 3 * n; i++) pow = (X * pow) % HASH_MOD; rep(x,2,n+1){ reverse(ans[x],ans[x]+n); rep(y,0,n) { hash = (hash + pow * ans[x][y]) % HASH_MOD; pow = (X * pow) % HASH_MOD; } } cout << hash << endl; }
1913
A
Rating Increase
Monocarp is a great solver of adhoc problems. Recently, he participated in an Educational Codeforces Round, and gained rating! Monocarp knew that, before the round, his rating was $a$. After the round, it increased to $b$ ($b > a$). He wrote both values one after another to not forget them. However, he wrote them so close to each other, that he can't tell now where the first value ends and the second value starts. Please, help him find some values $a$ and $b$ such that: - neither of them has a leading zero; - both of them are strictly greater than $0$; - $b > a$; - they produce the given value $ab$ when written one after another. If there are multiple answers, you can print any of them.
Since the length of the string is pretty small, it's possible to iterate over all possible cuts of $ab$ into $a$ and $b$. First, you have to check if $b$ has a leading zero. If it doesn't, compare integer representations of $a$ and $b$. In order to get an integer from a string, you can use stoi(s) for C++ or int(s) for Python.
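The cut enumeration above can be sketched as follows (the helper name `split_rating` is ours, not part of the official solution):

```python
def split_rating(ab: str):
    # Try every cut position: a = ab[:i], b = ab[i:].
    for i in range(1, len(ab)):
        a, b = ab[:i], ab[i:]
        # b must not have a leading zero; a never does, since ab doesn't.
        if b[0] != '0' and int(a) < int(b):
            return a, b
    return None  # corresponds to printing -1

assert split_rating("12") == ("1", "2")
assert split_rating("10") is None
```

Each candidate cut is checked in time linear in the string length, which is fine for the small constraints.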
[ "implementation" ]
800
for _ in range(int(input())): ab = input() for i in range(1, len(ab)): if ab[i] != '0' and int(ab[:i]) < int(ab[i:]): print(ab[:i], ab[i:]) break else: print(-1)
1913
B
Swap and Delete
You are given a binary string $s$ (a string consisting only of 0-s and 1-s). You can perform two types of operations on $s$: - delete one character from $s$. This operation costs $1$ coin; - swap any pair of characters in $s$. This operation is free (costs $0$ coins). You can perform these operations any number of times and in any order. Let's name a string you've got after performing operations above as $t$. The string $t$ is good if for each $i$ from \textbf{$1$ to $|t|$} $t_i \neq s_i$ ($|t|$ is the length of the string $t$). The empty string is always good. Note that you are comparing the resulting string $t$ with the initial string $s$. What is the minimum total cost to make the string $t$ good?
Let's count the number of 0-s and 1-s in $s$ as $cnt_0$ and $cnt_1$ correspondingly. Since $t$ consists of characters from $s$, it will contain at most $cnt_0$ zeros and at most $cnt_1$ ones. Let's build $t$ greedily, since we always compare $t$ with a prefix of $s$. Suppose the length of $t$ is at least one; then $t_1$ must be different from $s_1$, so if $s_1$ $=$ 0 we must set $t_1$ $=$ 1. So let's check that we have at least one 1 left (i.e. $cnt_1 > 0$), take a 1 and place it at $t_1$. The case $s_1$ $=$ 1 is symmetric. After placing $t_1$, we can analogously try to place $t_2$, and so on, until we either run out of the necessary digits or build a whole string of length $|s|$. We've built the longest possible string $t$ in $O(|s|)$ time, so the answer is $|s| - |t|$.
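The greedy construction can be sketched like this (the function name `min_cost` is ours; it mirrors the reference solution's counting):

```python
def min_cost(s: str) -> int:
    # cnt[d] = how many copies of digit d are still available for t
    cnt = [s.count('0'), s.count('1')]
    t_len = 0
    # Greedily extend t: position i of t must differ from s[i].
    for ch in s:
        need = 1 - int(ch)  # the digit t is forced to use at this position
        if cnt[need] == 0:
            break
        cnt[need] -= 1
        t_len += 1
    # Every character of s not kept in t is deleted for 1 coin.
    return len(s) - t_len

assert min_cost("11") == 2   # no zeros available, t must be empty
assert min_cost("0110") == 0  # t = "1001" works for free
```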
[ "strings" ]
1,000
for _ in range(int(input())): s = input() cnt = [0, 0] for i in range(len(s)): cnt[int(s[i])] += 1 for i in range(len(s) + 1): if (i == len(s) or cnt[1 - int(s[i])] == 0): print(len(s) - i) break cnt[1 - int(s[i])] -= 1
1913
C
Game with Multiset
In this problem, you are initially given an empty multiset. You have to process two types of queries: - ADD $x$ — add an element equal to $2^{x}$ to the multiset; - GET $w$ — say whether it is possible to take the sum of some subset of the current multiset and get a value equal to $w$.
We are given a classic knapsack problem. There are items with certain weights and a total weight that we want to achieve. If the weights were arbitrary, we would need dynamic programming to solve it. However, this variation of the knapsack problem can be solved greedily. What makes it special? If we arrange the weights in non-decreasing order, each subsequent weight is divisible by the previous one. Then the following greedy approach works. Arrange the items by weight, from smallest to largest. Let's say we want to achieve sum $s$, and the weights of the items are $x_1, x_2, \dots$. Let the weights be distinct, and the number of items with weight $x_i$ be $\mathit{cnt}_i$. If $s$ is not divisible by $x_1$, the answer is NO. Otherwise, we look at the remainder of $s$ divided by $x_2$. If it is not equal to $0$, our only hope to make it $0$ is to take items with weight $x_1$. All other items are divisible by $x_2$, so they will not help. We take exactly $\frac{s \bmod x_2}{x_1}$ items. If there are fewer than that, the answer is NO. Subtract this from $s$ - now the remainder is $0$. Now the claim is: there is no point in taking additional items with weight $x_1$ in a quantity not divisible by $\frac{x_2}{x_1}$. If we do that, we will break the remainder again. Thus, we can pack $\frac{x_2}{x_1}$ items with weight $x_1$ into groups of weight $x_2$ and add the number of these groups to $\mathit{cnt}_2$. Then recursively solve the same problem, but for the new value of $s$ and the weights $x_2, x_3, \dots$. When we run out of distinct weights, we have to check that there are enough items of the largest weight to collect the entire weight $s$. This can be written by iterating through the weight values from smallest to largest. For each weight, we can maintain the count in an array. Then the first type of query works in $O(1)$, and the second type of query works in $30$ iterations.
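For the powers-of-two weights of this problem, the level-by-level greedy can be sketched as follows (names are ours; `carry` plays the role of the reference solution's `nw` variable):

```python
LOG = 30  # elements are powers of two up to 2^(LOG-1), an assumption here

def can_make(cnt, w):
    # cnt[i] = number of elements equal to 2^i in the multiset
    carry = 0  # leftover items already regrouped into weight 2^(i+1)
    for i in range(LOG):
        need = (w >> i) & 1       # bit i of w must be covered at this level
        avail = carry + cnt[i]
        if avail < need:
            return False
        # pack the unused items pairwise into the next weight
        carry = (avail - need) // 2
    return carry >= (w >> LOG)    # the remainder must cover the top part

cnt = [0] * LOG
cnt[0] = 3                  # multiset {1, 1, 1}
assert can_make(cnt, 3)     # 1 + 1 + 1
assert not can_make(cnt, 4)
```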
[ "binary search", "bitmasks", "brute force", "greedy" ]
1,300
from sys import stdin, stdout LOG = 30 cnt = [0 for i in range(LOG)] ans = [] for _ in range(int(stdin.readline())): t, v = map(int, stdin.readline().split()) if t == 1: cnt[v] += 1 else: nw = 0 for i in range(LOG): r = (v % (2 << i)) // (1 << i) if r > nw + cnt[i]: ans.append(0) break v -= r nw = (nw + cnt[i] - r) // 2 else: ans.append(nw >= (v >> 30)) stdout.write('\n'.join(["YES" if x else "NO" for x in ans]))
1913
D
Array Collapse
You are given an array $[p_1, p_2, \dots, p_n]$, where all elements are distinct. You can perform several (possibly zero) operations with it. In one operation, you can choose a \textbf{contiguous subsegment} of $p$ and remove \textbf{all} elements from that subsegment, \textbf{except for} the minimum element on that subsegment. For example, if $p = [3, 1, 4, 7, 5, 2, 6]$ and you choose the subsegment from the $3$-rd element to the $6$-th element, the resulting array is $[3, 1, 2, 6]$. An array $a$ is called reachable if it can be obtained from $p$ using several (maybe zero) aforementioned operations. Calculate the number of reachable arrays, and print it modulo $998244353$.
Let's rephrase the problem a bit. Instead of counting the number of arrays, let's count the number of subsequences of elements that can remain at the end. One of the classic methods for counting subsequences is dynamic programming of the following kind: $\mathit{dp}_i$ is the number of subsequences such that the last taken element is at position $i$. Counting exactly this is not that simple. It happens that element $i$ cannot be the last overall because it is impossible to remove everything after it. Thus, let's say it a little differently: the number of good subsequences on a prefix of length $i$. Now we are not concerned with what comes after $i$ - after the prefix. If we learned to calculate such dp, we could get the answer from it. We just need to understand when we can remove all elements after a fixed one. Since an operation keeps the minimum of the chosen segment, it is easy to see that it is enough for all these elements to be larger than the fixed one. Then they can be removed one by one from left to right. If there is at least one smaller element, then the minimum of such elements cannot be removed. Therefore, the answer is equal to the sum of dp for all $i$ such that $a_i$ is the smallest element on the suffix of length $n - i + 1$. How to calculate such dp? Let's look at the nearest element to the left that is smaller than $a_i$. Let its position be $j$. Then any subsequence that ended with an element between $j$ and $i$ can have the element $i$ appended to it. It is only necessary to remove the elements standing before $i$. This can also be done one by one. Can $j$ be removed as well? It can, but it's not that simple. Only the element that is the nearest smaller one to the left for $a_j$, or some element even further to the left, can do this. Let $f[i]$ be the position of the nearest smaller element to the left. Then the element $i$ can also extend subsequences ending in $a[f[i]], a[f[f[i]]], a[f[f[f[i]]]], \dots$. 
If there are no smaller elements to the left of the element - the element is the minimum on the prefix - then $1$ is added to its $\mathit{dp}$ value. This is the subsequence consisting of only this element. The position of the nearest smaller element to the left can be found using a monotonic stack: keep a stack of elements; while the element at the top is larger, remove it; then push the current one onto the stack. Counting the dp currently works in $O(n^2)$ in the worst case. How to optimize this? The first type of transitions is optimized by prefix sums, as it is simply the sum of dp on a segment. For the second type of transitions, you can maintain the sum of dp on the chain of jumps to the nearest smaller element: $\mathit{dpsum}_i = \mathit{dpsum}_{f[i]} + \mathit{dp}_i$. Now, both transitions can be done in $O(1)$. Overall complexity: $O(n)$ for each testcase.
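A compact left-to-right sketch of the linear dp for the keep-the-minimum operation from the statement (all names are ours; the reference solution runs the mirrored version from the right):

```python
def count_reachable(p, MOD=998244353):
    n = len(p)
    dp = [0] * n          # dp[i]: good subsequences on prefix [0..i] ending at i
    pref = [0] * (n + 1)  # pref[k] = (dp[0] + ... + dp[k-1]) mod MOD
    chain = [0] * n       # chain[i] = dp[i] + dp[f[i]] + dp[f[f[i]]] + ...
    stack = []            # monotonic stack of indices with increasing values
    for i in range(n):
        while stack and p[stack[-1]] > p[i]:
            stack.pop()
        f = stack[-1] if stack else -1  # nearest smaller element to the left
        # subsequences ending strictly between f and i, with i appended
        dp[i] = (pref[i] - pref[f + 1]) % MOD
        if f == -1:
            dp[i] += 1            # the subsequence consisting of p[i] alone
        else:
            dp[i] += chain[f]     # extend subsequences ending on the jump chain
        dp[i] %= MOD
        chain[i] = (dp[i] + (chain[f] if f != -1 else 0)) % MOD
        pref[i + 1] = (pref[i] + dp[i]) % MOD
        stack.append(i)
    # p[i] can be the last kept element iff it is the minimum of p[i..n-1]
    ans, suf_min = 0, float('inf')
    for i in range(n - 1, -1, -1):
        suf_min = min(suf_min, p[i])
        if p[i] == suf_min:
            ans = (ans + dp[i]) % MOD
    return ans

assert count_reachable([3, 1, 2]) == 4  # [3,1,2], [1,2], [3,1], [1]
```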
[ "data structures", "divide and conquer", "dp", "trees" ]
2,100
#include <bits/stdc++.h> using namespace std; const int MOD = 998244353; int normalize(long long x) { x %= MOD; if (x < 0) x += MOD; return x; } int main() { int t; cin >> t; for (int tc = 0; tc < t; ++tc) { int n; cin >> n; vector <int> a(n); vector <int> nextMin(n); vector <int> dpSum(n + 2); vector <int> dpNext(n); for (int i = 0; i < n; ++i) cin >> a[i]; stack <int> stMin; nextMin[n - 1] = n; dpSum[n] = 1; for (int pos = n - 1; pos >= 0; --pos) { while(!stMin.empty() && a[stMin.top()] > a[pos]) stMin.pop(); nextMin[pos] = stMin.empty() ? n : stMin.top(); stMin.push(pos); int nxtPos = nextMin[pos]; int dpPos = normalize(dpSum[pos + 1] - dpSum[nxtPos + 1]); if (nxtPos != n) { dpPos = normalize(dpPos + dpNext[nxtPos]); dpNext[pos] = normalize(dpSum[nxtPos] - dpSum[nxtPos + 1] + dpNext[nxtPos]); } dpSum[pos] = normalize(dpPos + dpSum[pos + 1]); //cout << pos << ' ' << nxtPos << ' ' << dpPos << endl; } int res = 0; int mn = a[0]; for(int i = 0; i < n; ++i) { mn = min(mn, a[i]); if (a[i] == mn) { res = normalize(res + dpSum[i] - dpSum[i + 1]); } } cout << res << endl; } return 0; }
1913
E
Matrix Problem
You are given a matrix $a$, consisting of $n$ rows by $m$ columns. Each element of the matrix is equal to $0$ or $1$. You can perform the following operation any number of times (possibly zero): choose an element of the matrix and replace it with either $0$ or $1$. You are also given two arrays $A$ and $B$ (of length $n$ and $m$ respectively). After you perform the operations, the matrix should satisfy the following conditions: - the number of ones in the $i$-th row of the matrix should be exactly $A_i$ for every $i \in [1, n]$. - the number of ones in the $j$-th column of the matrix should be exactly $B_j$ for every $j \in [1, m]$. Calculate the minimum number of operations you have to perform.
There are many ways to solve this problem (even though all of them are based on minimum cost flows), but in my opinion, the most elegant one is the following. Let us build another matrix $b$ of size $n \times m$ that meets the following requirements: the sum in the $i$-th row of the matrix $b$ is $A_i$; the sum in the $j$-th column of the matrix $b$ is $B_j$; the number of cells $(i, j)$ such that $a_{i,j} = b_{i,j} = 1$ is the maximum possible. It's quite easy to see that this matrix $b$ is the one which we need to transform the matrix $a$ into: the first two conditions are literally from the problem statement, and the third condition ensures that the number of $1$'s we change into $0$'s is as small as possible (and since we know the number of $1$'s in the matrix $a$, and we know that the number of $1$'s in the matrix $b$ should be exactly $\sum A_i$, this also minimizes the number of times we change a $0$ into a $1$). So, the third condition minimizes the number of operations we have to perform. How can we build $b$? Let's model it with a flow network (with costs). We will need a source (let's call it $S$), a sink (let's call it $T$), a vertex for every row (let's call it $R_i$ for the $i$-th row), and a vertex for every column (let's call it $C_j$ for the $j$-th column). To model that we want the $i$-th row to have the sum $A_i$, let's add a directed edge from $S$ to $R_i$ with capacity of $A_i$ and cost of $0$. Some solutions will also need to make sure that this directed edge has a lower constraint on the flow equal to $A_i$, but we will show later why it's unnecessary in our method. Similarly, to model that the $j$-th column should have sum $B_j$, add a directed edge from $C_j$ to $T$ with capacity $B_j$ and cost $0$. To model that we can choose either $0$ or $1$ for the cell $(i, j)$, add a directed edge from $R_i$ to $C_j$ with capacity $1$. The value in the corresponding cell of the matrix $b$ will be equal to the flow along that edge. 
The cost of this edge should reflect that we want as many cells with $a_{i,j} = b_{i,j} = 1$ as possible. To ensure that, let's make its cost $0$ if $a_{i,j} = 1$, or $1$ if $a_{i,j}=0$. That way, the cost of the flow increases by $1$ each time we put a $1$ in a cell where $a_{i,j} = 0$ - and since the number of $1$'s in the matrix $b$ is fixed, this means that we put a $0$ in a cell where $a_{i,j} = 1$; so, the number of cells $(i, j)$ such that $a_{i,j} = b_{i,j} = 1$ gets reduced. Now our network is ready. In order to make sure that all edges connecting $S$ with $R_i$ and $C_j$ with $T$ are saturated, we have to find the minimum cost maximum flow in it. Since the network has no negative cycles, the number of vertices is $O(n+m)$, the number of edges is $O(nm)$, and the maximum flow in the network is also $O(nm)$, any reasonable MCMF algorithm can be used. After running MCMF, let's check that the amount of flow we pushed is equal both to $\sum A_i$ and to $\sum B_j$. If that's not the case, then it is impossible to construct the matrix $b$, so the answer is $-1$. Otherwise, to calculate the number of operations we have to perform, we can either restore the matrix $b$ from the flow we got and calculate the number of cells such that $a_{i,j} \ne b_{i,j}$, or derive a formula which calculates the number of operations directly from the number of $1$'s in $a$, the number of $1$'s in $b$, and the cost of the flow. The model solution does the latter.
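The whole construction can be sketched with a simple SPFA-based MCMF (the model solution is in C++; this Python version and all its names are ours). The final formula matches the model solution: with $\mathit{flow}$ units pushed at total cost $\mathit{cost}$, the answer is $\mathit{ones}(a) - \mathit{flow} + 2 \cdot \mathit{cost}$.

```python
from collections import deque

def min_operations(a, A, B):
    # Network: S -> R_i (cap A[i], cost 0), R_i -> C_j (cap 1,
    # cost 0 if a[i][j] == 1 else 1), C_j -> T (cap B[j], cost 0).
    n, m = len(a), len(a[0])
    S, T, V = n + m, n + m + 1, n + m + 2
    graph = [[] for _ in range(V)]  # adjacency lists of edge ids
    to, cap, cost = [], [], []      # edge arrays; edge e ^ 1 is e's reverse

    def add_edge(u, v, c, w):
        graph[u].append(len(to)); to.append(v); cap.append(c); cost.append(w)
        graph[v].append(len(to)); to.append(u); cap.append(0); cost.append(-w)

    ones = sum(sum(row) for row in a)
    for i in range(n):
        for j in range(m):
            add_edge(i, n + j, 1, 1 - a[i][j])
    for i in range(n):
        add_edge(S, i, A[i], 0)
    for j in range(m):
        add_edge(n + j, T, B[j], 0)

    flow = total_cost = 0
    while True:
        # SPFA: shortest path from S to T by cost in the residual network
        dist = [float('inf')] * V
        dist[S] = 0
        pe = [-1] * V  # edge id used to enter each vertex
        in_q = [False] * V
        q = deque([S])
        while q:
            u = q.popleft(); in_q[u] = False
            for e in graph[u]:
                v = to[e]
                if cap[e] > 0 and dist[u] + cost[e] < dist[v]:
                    dist[v] = dist[u] + cost[e]
                    pe[v] = e
                    if not in_q[v]:
                        in_q[v] = True; q.append(v)
        if pe[T] == -1:
            break
        # push the bottleneck along the path found
        push, v = float('inf'), T
        while v != S:
            push = min(push, cap[pe[v]]); v = to[pe[v] ^ 1]
        v = T
        while v != S:
            cap[pe[v]] -= push; cap[pe[v] ^ 1] += push; v = to[pe[v] ^ 1]
        flow += push
        total_cost += push * dist[T]

    if flow != sum(A) or flow != sum(B):
        return -1
    # (1s of a turned off) + (0s of a turned on) = ones - flow + 2 * cost
    return ones - flow + 2 * total_cost

assert min_operations([[1, 0], [0, 1]], [1, 1], [1, 1]) == 0
```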
[ "flows", "graphs" ]
2,400
#include<bits/stdc++.h> using namespace std; const int N = 111; struct edge { int y, c, w, f; edge() {}; edge(int y, int c, int w, int f) : y(y), c(c), w(w), f(f) {}; }; vector<edge> e; vector<int> g[N]; int rem(int x) { return e[x].c - e[x].f; } void add_edge(int x, int y, int c, int w) { g[x].push_back(e.size()); e.push_back(edge(y, c, w, 0)); g[y].push_back(e.size()); e.push_back(edge(x, 0, -w, 0)); } int n, m, s, t, v; pair<int, long long> MCMF() { int flow = 0; long long cost = 0; while(true) { vector<long long> d(v, (long long)(1e18)); vector<int> p(v, -1); vector<int> pe(v, -1); queue<int> q; vector<bool> inq(v); q.push(s); inq[s] = true; d[s] = 0; while(!q.empty()) { int k = q.front(); q.pop(); inq[k] = false; for(auto ei : g[k]) { if(rem(ei) == 0) continue; int to = e[ei].y; int w = e[ei].w; if(d[to] > d[k] + w) { d[to] = d[k] + w; p[to] = k; pe[to] = ei; if(!inq[to]) { inq[to] = true; q.push(to); } } } } if(p[t] == -1) break; flow++; cost += d[t]; int cur = t; while(cur != s) { e[pe[cur]].f++; e[pe[cur] ^ 1].f--; cur = p[cur]; } } return make_pair(flow, cost); } int get_sum(const vector<int>& v) { int sum = 0; for(auto x : v) sum += x; return sum; } int main() { cin >> n >> m; s = n + m; t = n + m + 1; v = n + m + 2; int sum_matrix = 0; vector<int> A(n), B(m); for(int i = 0; i < n; i++) for(int j = 0; j < m; j++) { int x; cin >> x; sum_matrix += x; if(x == 1) add_edge(i, j + n, 1, 0); else add_edge(i, j + n, 1, 1); } for(int i = 0; i < n; i++) { cin >> A[i]; add_edge(s, i, A[i], 0); } for(int i = 0; i < m; i++) { cin >> B[i]; add_edge(i + n, t, B[i], 0); } auto res = MCMF(); if(res.first != get_sum(A) || res.first != get_sum(B)) cout << -1 << endl; else cout << sum_matrix - res.first + res.second * 2 << endl; }
1913
F
Palindromic Problem
You are given a string $s$ of length $n$, consisting of lowercase Latin letters. You are allowed to replace at most one character in the string with an arbitrary lowercase Latin letter. Print the lexicographically minimal string that can be obtained from the original string and contains the maximum number of palindromes as substrings. Note that if a palindrome appears more than once as a substring, it is counted the same number of times it appears. The string $a$ is lexicographically smaller than the string $b$ if and only if one of the following holds: - $a$ is a prefix of $b$, but $a \ne b$; - in the first position where $a$ and $b$ are different, the string $a$ contains a letter that appears earlier in the alphabet than the corresponding letter in $b$.
Let's recall the algorithm for counting the number of palindromes in a string. For each position, we can calculate the longest odd and even palindromes with centers at that position (the right one of the two centers for even). Then sum up all the values. If we forget about the complexity, we can consider the following algorithm for solving the full problem. Iterate over the index of the change and the new letter. Make the change, count the number of palindromes, and update the answer. In case of equality, choose the lexicographically minimal change. Let's keep the first step and optimize the rest. It won't be possible to count the number of palindromes from scratch for every change, so let's try to calculate how much their number will change from the original. It's also not feasible to compare all strings lexicographically naively, so we have to come up with a faster way to compare. Let's fix the change position $i$. Which of the longest palindromes change? If for some center $j$, the longest palindrome included $i$, then we definitely know its new length. Due to the change at $i$, the longest palindrome will now stop at $i$. If the longest palindrome did not reach $i$, it will not change. And only if it stopped exactly at $i$ can it become longer, and we don't know by how much yet. But for each center $j$ and each parity of the palindrome, there are only two positions, one to the left and one to the right, where it stops. So, in total, we will need to make $O(n)$ such checks. How to take into account the palindromes that become shorter? We need to calculate the sum of the differences for all palindromes that include $i$. For an odd palindrome, this change looks like this: $[\dots, 0, -1, -2, -3, -4, -3, -2, -1, 0, \dots]$, where $-4$ is the center of the palindrome, and the values show the difference in the number of palindromes for each position of the string. For an even palindrome: $[\dots, 0, -1, -2, -3, -3, -2, -1, 0, \dots]$. 
To calculate the sum at each position, we can pre-calculate a difference array based on these values. The difference array usually implies first-order changes: write $+x$ at $l$, write $-x$ at $r+1$, then collect the prefix sum to get $[\dots, 0, x, x, \dots, x, 0, \dots]$. But we can also record second-order changes. For the odd construction, we will write as follows. Let the length of the palindrome be $\mathit{len}$, and the center be $i$. Then: $\mathit{dt}[i - \mathit{len} + 1] \mathrel{{+}{=}} 1$; $\mathit{dt}[i + 1] \mathrel{{-}{=}} 2$; $\mathit{dt}[i + \mathit{len} + 1] \mathrel{{+}{=}} 1$. Then iterate from left to right, maintaining two variables. The current change and the current value. At each $i$, add $\mathit{dt}[i]$ to the change, add the change to the value, and save the value to some array. For the even case, it is built similarly. It is easy to see that such changes can be made at the same time. Basically the same as the regular difference array. Now, in each position, the total decrease in the length of the palindromes passing through that position is recorded. The answer will decrease by almost this amount. The only difference is the odd-length palindromes with a center at this position. They will not change at all, and we accidentally subtracted them. Thus, let's save the longest odd palindrome for each position in advance and add it back. Let's learn how to calculate the longest palindromes for each center. Yes, there is the Manacher's algorithm for a static string. We can use it for counting at the very beginning, but it's not really necessary. Instead, let's learn to count palindromes using suffix structures. In particular, with a suffix array. It is known that with an additional sparse table, we can query the longest common prefix of two suffixes in $O(1)$ with pre-calculation in $O(n \log n)$. Let's build a suffix array on the string $s + \# + \mathit{rev}(s) + \$$. Then we can query the LCP of both suffixes and reverse prefixes. 
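The second-order difference trick can be sketched in isolation (names are ours). Here `length` is the editorial's $\mathit{len}$, i.e. the number of odd palindromes centered at `center` (the radius, counting the center itself); the three updates unroll, via a running slope and value, into the tent-shaped profile of decreases:

```python
def tent_profile(n, center, length):
    # Second-order difference array for one odd palindrome:
    # profile ..., 0, 1, 2, ..., length, ..., 2, 1, 0, ...
    # (magnitudes of the decrease, peaking at `center`).
    dt = [0] * (n + 2)
    dt[center - length + 1] += 1
    dt[center + 1] -= 2
    dt[center + length + 1] += 1
    out, slope, val = [], 0, 0
    for i in range(n):
        slope += dt[i]  # current first-order change
        val += slope    # current value
        out.append(val)
    return out

# a palindrome with radius 3 centered at index 4 (n = 9)
assert tent_profile(9, 4, 3) == [0, 0, 1, 2, 3, 2, 1, 0, 0]
```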
Then the longest odd palindrome with center at $i$ is LCP($s[i..n]$, $\mathit{rev}$($s[1..i]$)). The even one is LCP($s[i..n]$, $\mathit{rev}$($s[1..i-1]$)). So, we can count the palindromes at the very beginning with $O(n \log n)$ pre-calculation and $O(n)$ extra work. The following task remains. For each position, the centers of the palindromes that reach it are known. For each such center, we need to be able to recalculate how much longer the palindrome will become after changing the letter at that position. Of course, even the new letter may still not match the letter on the other side. If it does match, we would like to know how many further letters match the opposite ones in the original string. And we already have a tool for this: it's still a simple LCP query on the suffix to the right of the right letter and the reversed prefix to the left of the left letter. Now we have everything necessary to recalculate the number of palindromes. If the new count is greater than the current maximum, we simply update the answer. If it is equal, we need to check the lexicographic order. The following check works: if one change increased the original letter and the other decreased it, then the second one definitely produces a smaller string; if both changes decreased it, then the one further to the left produces a smaller string, and if both are in the same position, the smaller letter produces a smaller string; if both changes increased it, then the one further to the right produces a smaller string, and if both are in the same position, the smaller letter produces a smaller string. All these checks can be performed in $O(1)$. Overall complexity: $O(n \log n + n \cdot |\mathit{AL}|)$.
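The LCP identity for odd centers can be checked with a naive LCP in place of the suffix array (0-indexed here; the names are ours, and in the real solution this query is answered in $O(1)$ by the sparse table):

```python
def lcp(x, y):
    # naive longest common prefix, stand-in for the suffix-array query
    k = 0
    while k < min(len(x), len(y)) and x[k] == y[k]:
        k += 1
    return k

def odd_radius(s, i):
    # longest odd palindrome centered at i = LCP of the suffix s[i:]
    # and the reversed prefix s[:i+1] (1 means just the single character)
    return lcp(s[i:], s[:i + 1][::-1])

s = "abacaba"
assert odd_radius(s, 3) == 4  # the whole string, radius 4 incl. center
assert odd_radius(s, 1) == 2  # "aba"
```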
[ "binary search", "data structures", "hashing", "string suffix structures", "strings" ]
2,800
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) #define x first #define y second using namespace std; struct sparse_table { vector<vector<int>> st; vector<int> pw; sparse_table() {} sparse_table(const vector<int> &a) { int n = a.size(); int logn = 32 - __builtin_clz(n); st.resize(logn, vector<int>(n)); forn(i, n) st[0][i] = a[i]; for (int j = 1; j < logn; ++j) forn(i, n) { st[j][i] = st[j - 1][i]; if (i + (1 << (j - 1)) < n) st[j][i] = min(st[j][i], st[j - 1][i + (1 << (j - 1))]); } pw.resize(n + 1); pw[0] = pw[1] = 0; for (int i = 2; i <= n; ++i) pw[i] = pw[i >> 1] + 1; } int get(int l, int r) { if (l >= r) return 1e9; int len = pw[r - l]; return min(st[len][l], st[len][r - (1 << len)]); } }; struct suffix_array { vector<int> c, pos; vector<pair<pair<int, int>, int>> p, nw; vector<int> cnt; int n; void radix_sort(int max_al) { cnt.assign(max_al, 0); forn(i, n) ++cnt[p[i].x.y]; for (int i = 1; i < max_al; ++i) cnt[i] += cnt[i - 1]; nw.resize(n); forn(i, n) nw[--cnt[p[i].x.y]] = p[i]; cnt.assign(max_al, 0); forn(i, n) ++cnt[nw[i].x.x]; for (int i = 1; i < max_al; ++i) cnt[i] += cnt[i - 1]; for (int i = n - 1; i >= 0; --i) p[--cnt[nw[i].x.x]] = nw[i]; } vector<int> lcp; sparse_table st; int get_lcp(int l, int r) { l = c[l], r = c[r]; if (l > r) swap(l, r); return st.get(l, r); } suffix_array(const string &s) { n = s.size(); c = vector<int>(s.begin(), s.end()); int max_al = *max_element(c.begin(), c.end()) + 1; p.resize(n); for (int k = 1; k < n; k <<= 1) { for (int i = 0, j = k; i < n; ++i, j = (j + 1 == n ? 
0 : j + 1)) p[i] = make_pair(make_pair(c[i], c[j]), i); radix_sort(max_al); c[p[0].y] = 0; for (int i = 1; i < n; ++i) c[p[i].y] = c[p[i - 1].y] + (p[i].x != p[i - 1].x); max_al = c[p.back().y] + 1; } lcp.resize(n); int l = 0; forn(i, n) { l = max(0, l - 1); if (c[i] == n - 1) continue; while (i + l < n && p[c[i] + 1].y + l < n && s[i + l] == s[p[c[i] + 1].y + l]) ++l; lcp[c[i]] = l; } pos.resize(n); forn(i, n) pos[i] = p[i].y; st = sparse_table(lcp); } }; int main() { string s; int n; cin >> n; cin >> s; string t = s; reverse(t.begin(), t.end()); auto sa = suffix_array(s + "#" + t + "$"); vector<vector<int>> ev0(n), ev1(n); long long base = 0; vector<long long> dt(n + 2); vector<int> d1(n); forn(i, n){ int len0 = sa.get_lcp(i, 2 * n - i + 1); base += len0; dt[i - len0] += 1; dt[i] -= 1; dt[i + 1] -= 1; dt[i + len0 + 1] += 1; if (i - len0 - 1 >= 0 && i + len0 < n){ ev0[i - len0 - 1].push_back(i); ev0[i + len0].push_back(i); } int len1 = sa.get_lcp(i, 2 * n - i); d1[i] = len1; dt[i - len1 + 1] += 1; dt[i + 1] -= 2; dt[i + len1 + 1] += 1; base += len1; if (i - len1 >= 0 && i + len1 < n){ ev1[i - len1].push_back(i); ev1[i + len1].push_back(i); } } vector<long long> dx(n + 1); long long curt = 0, val = 0; forn(i, n){ curt += dt[i]; val += curt; dx[i] = val; } long long ans = base; int pos = -1, nc = -1; bool gr = false; forn(i, n) forn(c, 26) if (c != s[i] - 'a'){ long long cur = base; for (int j : ev0[i]){ if (j <= i && c == s[2 * j - i - 1] - 'a') cur += 1 + sa.get_lcp(i + 1, 2 * n - (2 * j - i - 2)); else if (j > i && c == s[2 * j - i - 1] - 'a') cur += 1 + sa.get_lcp(2 * j - i, 2 * n - (i - 1)); } for (int j : ev1[i]){ if (c != s[2 * j - i] - 'a') continue; if (j < i) cur += 1 + sa.get_lcp(i + 1, 2 * n - (2 * j - i - 1)); else cur += 1 + sa.get_lcp(2 * j - i + 1, 2 * n - (i - 1)); } cur += d1[i]; cur -= dx[i]; bool upd = false; if (cur > ans){ upd = true; } else if (cur == ans){ if (c < s[i] - 'a'){ if (pos == -1 || gr) upd = true; } else{ if (pos < i && gr) upd = 
true; } } if (upd){ ans = cur; pos = i; nc = c; gr = c > s[i] - 'a'; } } cout << ans << endl; if (pos != -1) s[pos] = nc + 'a'; cout << s << endl; return 0; }
1914
A
Problemsolving Log
Monocarp is participating in a programming contest, which features $26$ problems, named from 'A' to 'Z'. The problems are sorted by difficulty. Moreover, it's known that Monocarp can solve problem 'A' in $1$ minute, problem 'B' in $2$ minutes, ..., problem 'Z' in $26$ minutes. After the contest, you discovered his contest log — a string, consisting of uppercase Latin letters, such that the $i$-th letter tells which problem Monocarp was solving during the $i$-th minute of the contest. If Monocarp had spent enough time in total on a problem to solve it, he solved it. Note that Monocarp could have been thinking about a problem after solving it. Given Monocarp's contest log, calculate the number of problems he solved during the contest.
For each problem, we will calculate the total number of minutes Monocarp spent on it. Then we will compare it to the minimum required time for solving the problem. One of the possible implementations is as follows. Create a string $t = $ "ABC ... Z" in the program. Then problem $t_i$ can be solved in $i + 1$ minutes. We can compare s.count(t[i]) to $i + 1$. We can also use the ASCII table to convert the problem number to a letter. In Python, there are the functions chr and ord for this purpose. For example, the letter for the $i$-th problem is chr(ord('A') + $i$). In C++, you can directly obtain a char from an int using char('A' + $i$). It is also possible to solve this problem in time linear in the length of the string, but it was not required.
[ "implementation", "strings" ]
800
for _ in range(int(input())): n = int(input()) s = input() print(sum([s.count(chr(ord('A') + i)) >= i + 1 for i in range(26)]))
1914
B
Preparing for the Contest
Monocarp is practicing for a big contest. He plans to solve $n$ problems to make sure he's prepared. Each of these problems has a difficulty level: the first problem has a difficulty level of $1$, the second problem has a difficulty level of $2$, and so on, until the last ($n$-th) problem, which has a difficulty level of $n$. Monocarp will choose some order in which he is going to solve all $n$ problems. Whenever he solves a problem which is more difficult than the last problem he solved, he gets excited because he feels like he's progressing. He doesn't get excited when he solves the first problem in his chosen order. For example, if Monocarp solves the problems in the order $[3, \underline{5}, 4, 1, \underline{6}, 2]$, he gets excited twice (the corresponding problems are underlined). Monocarp wants to get excited exactly $k$ times during his practicing session. Help him to choose the order in which he has to solve the problems!
The examples give a pretty big hint to the solution: to get $k = 0$, we have to order all problems from the hardest to the easiest one, and to get $k = n - 1$, we have to order them from the easiest to the hardest problem. Let's try to combine them for the general case. Let's start by placing the problems from the hardest to the easiest one: $[n, n-1, \dots, 2, 1]$. If $k = 1$, we can just swap the two last problems and make solving the last problem exciting. If $k = 2$, we can reverse the order of the last $3$ problems, so $2$ of them excite Monocarp, and so on; generally, reversing the $(k+1)$ last problems ensures that exactly the $k$ last problems will excite Monocarp. There are also other methods to solve this problem, but this is one of the easiest.
[ "constructive algorithms", "math" ]
800
#include<bits/stdc++.h> using namespace std; void solve() { int n, k; cin >> n >> k; vector<int> a(n); for(int i = 0; i < n; i++) a[i] = n - i; reverse(a.end() - k - 1, a.end()); for(int i = 0; i < n; i++) { if(i) cout << " "; cout << a[i]; } cout << endl; } int main() { int t; cin >> t; for(int i = 0; i < t; i++) solve(); return 0; }
1914
C
Quests
Monocarp is playing a computer game. In order to level up his character, he can complete quests. There are $n$ quests in the game, numbered from $1$ to $n$. Monocarp can complete quests according to the following rules: - the $1$-st quest is always available for completion; - the $i$-th quest is available for completion if all quests $j < i$ have been completed at least once. Note that Monocarp can complete the same quest multiple times. For each completion, the character gets some amount of experience points: - for the first completion of the $i$-th quest, he gets $a_i$ experience points; - for each subsequent completion of the $i$-th quest, he gets $b_i$ experience points. Monocarp is a very busy person, so he has free time to complete no more than $k$ quests. Your task is to calculate the maximum possible total experience Monocarp can get if he can complete no more than $k$ quests.
Let's iterate over the number of quests that have been completed at least once (denote it as $i$). It remains to complete $k-i$ more quests, and we are allowed to complete any of the first $i$ quests again. It is obvious that we would like to complete the quest with the largest value of $b_j$ among the first $i$ of them. So the answer to the problem is the maximum of $\sum\limits_{j=1}^{i} a_j + \max\limits_{j=1}^{i} b_j \cdot (k - i)$ over all values of $i$ from $1$ to $\min(n, k)$. Note that the value of $n$ is too large to calculate the sums and maximums in the aforementioned formula from scratch every time (for each $i$ independently), so you have to maintain these values incrementally as $i$ grows.
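The formula above can be evaluated with a running sum and maximum; here is a hedged sketch (the name `max_experience` is illustrative, and indices are 0-based, so the leftover count is $k - (i + 1)$):

```python
def max_experience(k, a, b):
    """Maximum experience from at most k completions; a[i]/b[i] are the
    first/repeat rewards of quest i (0-indexed)."""
    best = cur_sum = cur_max = 0
    for i in range(min(len(a), k)):
        cur_sum += a[i]                  # first completion of quest i
        cur_max = max(cur_max, b[i])     # best quest to grind so far
        # i+1 quests done once, k-(i+1) repeats of the best one
        best = max(best, cur_sum + cur_max * (k - i - 1))
    return best
```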
[ "greedy", "math" ]
1,100
fun main() = repeat(readLine()!!.toInt()) { val (n, k) = readLine()!!.split(" ").map { it.toInt() } val a = readLine()!!.split(" ").map { it.toInt() } val b = readLine()!!.split(" ").map { it.toInt() } var (res, sum, mx) = intArrayOf(0, 0, 0) for (i in 0 until minOf(n, k)) { sum += a[i] mx = maxOf(mx, b[i]) res = maxOf(res, sum + mx * (k - i - 1)) } println(res) }
1914
D
Three Activities
Winter holidays are coming up. They are going to last for $n$ days. During the holidays, Monocarp wants to try all of these activities \textbf{exactly once} with his friends: - go skiing; - watch a movie in a cinema; - play board games. Monocarp knows that, on the $i$-th day, exactly $a_i$ friends will join him for skiing, $b_i$ friends will join him for a movie and $c_i$ friends will join him for board games. Monocarp also knows that he can't try more than one activity in a single day. Thus, he asks you to help him choose three \textbf{distinct} days $x, y, z$ in such a way that the total number of friends to join him for the activities ($a_x + b_y + c_z$) is maximized.
The main idea of the problem is that almost always you can take the maximum in each array. And when you can't, you don't need to look at many smaller numbers. In particular, it is enough to consider the three largest numbers from each array. Let's show the correctness of this for the first array. There always exists an optimal answer in which one of the three largest numbers is taken from array $a$. Let's fix some taken elements in arrays $b$ and $c$. Then at least one of the three positions of the largest elements in $a$ is different from both fixed positions. The argument generalizes to all three arrays similarly. Thus, the solution looks as follows. Find the positions of the three maximums in each array and iterate over all $3^3$ combinations for the answer. Finding the three maximums can be done using sorting or in one linear-time pass over the array.
[ "brute force", "dp", "greedy", "implementation", "sortings" ]
1,200
for _ in range(int(input())): n = int(input()) a = list(map(int, input().split())) b = list(map(int, input().split())) c = list(map(int, input().split())) def get_best3(a): mx1, mx2, mx3 = -1, -1, -1 for i in range(len(a)): if mx1 == -1 or a[i] > a[mx1]: mx3 = mx2 mx2 = mx1 mx1 = i elif mx2 == -1 or a[i] > a[mx2]: mx3 = mx2 mx2 = i elif mx3 == -1 or a[i] > a[mx3]: mx3 = i return (mx1, mx2, mx3) ans = 0 for x in get_best3(a): for y in get_best3(b): for z in get_best3(c): if x != y and x != z and y != z: ans = max(ans, a[x] + b[y] + c[z]) print(ans)
1914
E2
Game with Marbles (Hard Version)
\textbf{The easy and hard versions of this problem differ only in the constraints on the number of test cases and $n$. In the hard version, the number of test cases does not exceed $10^4$, and the sum of values of $n$ over all test cases does not exceed $2 \cdot 10^5$. Furthermore, there are no additional constraints on $n$ in a single test case.} Recently, Alice and Bob were given marbles of $n$ different colors by their parents. Alice has received $a_1$ marbles of color $1$, $a_2$ marbles of color $2$,..., $a_n$ marbles of color $n$. Bob has received $b_1$ marbles of color $1$, $b_2$ marbles of color $2$, ..., $b_n$ marbles of color $n$. All $a_i$ and $b_i$ are between $1$ and $10^9$. After some discussion, Alice and Bob came up with the following game: players take turns, starting with Alice. On their turn, a player chooses a color $i$ such that \textbf{both} players have at least one marble of that color. The player then discards one marble of color $i$, and their opponent discards all marbles of color $i$. The game ends when there is no color $i$ such that both players have at least one marble of that color. The score in the game is the difference between the number of remaining marbles that Alice has and the number of remaining marbles that Bob has at the end of the game. In other words, the score in the game is equal to $(A-B)$, where $A$ is the number of marbles Alice has and $B$ is the number of marbles Bob has at the end of the game. Alice wants to maximize the score, while Bob wants to minimize it. Calculate the score at the end of the game if both players play optimally.
Let's change the game in the following way: Firstly, we'll let Bob make all moves. It means that for each color $i$ Bob discarded one marble, while Alice discarded all her marbles. So the score $S$ will be equal to $0 - \sum_{i}{(b_i - 1)}$. Alice makes the first move by choosing some color $i$ and "taking back color $i$". It means that we cancel Bob's move on color $i$ and Alice makes that move instead. How does the score $S$ change? Initially, we had a $-(b_i - 1)$ contribution to the score, but now the contribution becomes $+(a_i - 1)$. In other words, choosing color $i$ gives score $S' = S + (a_i + b_i - 2)$. Note that the greater $(a_i + b_i)$, the greater $S'$ becomes. Bob makes the second move by "saving some color $i$", i. e. he forbids Alice to choose color $i$. It doesn't change the score $S$, but now Alice can't choose color $i$ on her turns. Alice and Bob continue playing, by "taking" and "saving" colors, until Alice has taken all non-forbidden colors. As a result, we showed that the optimal strategy in the initial game is the same: sort colors by $a_i + b_i$ in decreasing order. Alice chooses the $1$-st, $3$-rd, $5$-th, and so on colors, while Bob chooses the $2$-nd, $4$-th, $6$-th, and so on. The resulting complexity is $O(n \log{n})$.
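The resulting strategy fits in a few lines; a sketch under the analysis above (the helper name `marble_score` is made up, not from the reference solution):

```python
def marble_score(a, b):
    """Final score (Alice minus Bob) with optimal play."""
    # Sort colors by a_i + b_i in decreasing order.
    order = sorted(range(len(a)), key=lambda i: -(a[i] + b[i]))
    score = 0
    for turn, i in enumerate(order):
        if turn % 2 == 0:
            score += a[i] - 1   # Alice takes color i
        else:
            score -= b[i] - 1   # Bob saves color i
    return score
```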
[ "games", "greedy", "sortings" ]
1,400
#include<bits/stdc++.h> using namespace std; #define fore(i, l, r) for(int i = int(l); i < int(r); i++) #define sz(a) int((a).size()) typedef long long li; // S = sum a_i - sum b_i // if Bob made all steps: then S = 0 - sum (b_i - 1) // each Alice step: S += (a_i - 1) + (b_i - 1) i. e. the bigger (a_i + b_i) the better int n; vector<int> a, b; inline bool read() { if(!(cin >> n)) return false; a.resize(n); b.resize(n); fore (i, 0, n) cin >> a[i]; fore (i, 0, n) cin >> b[i]; return true; } inline void solve() { vector<int> ids(n); iota(ids.begin(), ids.end(), 0); sort(ids.begin(), ids.end(), [&](int i, int j) { return a[i] + b[i] > a[j] + b[j]; }); li S = 0; fore (i, 0, n) { if (i & 1) S -= b[ids[i]] - 1; else S += a[ids[i]] - 1; } cout << S << endl; } int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); int tt = clock(); #endif // ios_base::sync_with_stdio(false); // cin.tie(0), cout.tie(0); cout << fixed << setprecision(15); int t; cin >> t; for(int i = 0; i < t; i++) { if(read()) { solve(); #ifdef _DEBUG cerr << "TIME = " << clock() - tt << endl; tt = clock(); #endif } } return 0; }
1914
F
Programming Competition
BerSoft is the biggest IT corporation in Berland. There are $n$ employees at BerSoft company, numbered from $1$ to $n$. The first employee is the head of the company, and he does not have any superiors. Every other employee $i$ has exactly one direct superior $p_i$. Employee $x$ is considered to be a superior (direct or indirect) of employee $y$ if one of the following conditions holds: - employee $x$ is the direct superior of employee $y$; - employee $x$ is a superior of the direct superior of employee $y$. The structure of BerSoft is organized in such a way that the head of the company is superior of every employee. A programming competition is going to be held soon. Two-person teams should be created for this purpose. However, if one employee in a team is the superior of another, they are uncomfortable together. So, teams of two people should be created so that no one is the superior of the other. Note that no employee can participate in more than one team. Your task is to calculate the maximum possible number of teams according to the aforementioned rules.
Note that the problem basically states the following: you are given a rooted tree; you can pair two vertices $x$ and $y$ if neither $x$ is an ancestor of $y$ nor $y$ is an ancestor of $x$. Each vertex can be used in at most one pair. Calculate the maximum possible number of pairs you can make. Let's look at subtrees of child nodes of the root. If two vertices belong to different subtrees, we can pair them up. So we can slightly rephrase the problem: given $m$ types of objects, with counts $sz_1, sz_2, \dots, sz_m$, find the maximum number of pairs that can be formed using objects of different types. This is a well-known problem with the following solution. Let $tot$ be the total number of objects and $mx$ be the type that has the maximum number of objects (maximum value of $sz$). If the number of objects of type $mx$ is at most the number of all other objects (i. e. $sz_{mx} \le tot - sz_{mx}$), then we can pair all objects (except $1$ if the total number of objects is odd). Otherwise, we can create $tot - sz_{mx}$ pairs of the form $(mx, i)$ for all $i$ except $mx$. Now we can return to the original problem. If the first aforementioned option is met for the root, then we know the answer to the problem is $\lfloor \frac{tot}{2} \rfloor$. Otherwise, we can create $tot - sz_{mx}$ pairs, but some nodes from the $mx$-th child's subtree are left unmatched (there are not enough vertices outside that subtree). So we can recursively look at the $mx$-th child's subtree and solve the same problem. The only difference is that some number of vertices from that subtree are already matched. To solve this issue, we can add the parameter $k$: how many vertices in the current subtree are already matched (we care only about the number of them). From the second paragraph, we know that the best situation (we can pair all objects) appears when the maximum value is at most the number of all other objects. Using this fact, we can say that the $k$ matched vertices belong to the $mx$-th child's subtree. 
So we have to modify our check formula from $sz_{mx} \le tot - sz_{mx}$ to $sz_{mx} - k \le tot - sz_{mx}$. This process goes on until the current vertex (subtree) is a leaf or the condition is met. This solution works in $O(n)$ if we precalculate the sizes for all the subtrees with a DFS before running it.
[ "dfs and similar", "dp", "graph matchings", "greedy", "trees" ]
1,900
#include <bits/stdc++.h> using namespace std; const int N = 222222; int n; int sz[N]; vector<int> g[N]; void init(int v) { sz[v] = 1; for (int u : g[v]) { init(u); sz[v] += sz[u]; } } int calc(int v, int k) { int tot = 0, mx = -1; for (int u : g[v]) { tot += sz[u]; if (mx == -1 || sz[mx] < sz[u]) mx = u; } if (tot == 0) return 0; if (sz[mx] - k <= tot - sz[mx]) return (tot - k) / 2; int add = tot - sz[mx]; return add + calc(mx, max(0, k + add - 1)); } int main() { ios::sync_with_stdio(false); cin.tie(0); int t; cin >> t; while (t--) { cin >> n; for (int i = 0; i < n; ++i) g[i].clear(); for (int i = 1; i < n; ++i) { int p; cin >> p; g[p - 1].push_back(i); } init(0); cout << calc(0, 0) << '\n'; } }
1914
G2
Light Bulbs (Hard Version)
\textbf{The easy and hard versions of this problem differ only in the constraints on $n$. In the hard version, the sum of values of $n$ over all test cases does not exceed $2 \cdot 10^5$. Furthermore, there are no additional constraints on the value of $n$ in a single test case}. There are $2n$ light bulbs arranged in a row. Each light bulb has a color from $1$ to $n$ (\textbf{exactly two light bulbs for each color}). Initially, all light bulbs are turned off. You choose a set of light bulbs $S$ that you initially turn on. After that, you can perform the following operations in any order any number of times: - choose two light bulbs $i$ and $j$ \textbf{of the same color}, exactly one of which is on, and turn on the second one; - choose three light bulbs $i, j, k$, such that both light bulbs $i$ and $k$ \textbf{are on and have the same color}, and the light bulb $j$ is between them ($i < j < k$), and turn on the light bulb $j$. You want to choose a set of light bulbs $S$ that you initially turn on in such a way that by performing the described operations, you can ensure that all light bulbs are turned on. Calculate two numbers: - the minimum size of the set $S$ that you initially turn on; - the number of sets $S$ of minimum size (taken modulo $998244353$).
Let's call a contiguous segment of lamps closed if the number of lamps for each color is either $0$ or $2$ in this segment. For example, a segment of lamps $[3, 2, 1, 2, 1, 3]$ is closed. Furthermore, let's say that a closed segment of lamps is minimal if it is impossible to split it into multiple closed segments. For example, $[3, 2, 1, 2, 1, 3]$ is minimal, but $[3, 3, 2, 1, 2, 1]$ is not since it can be split into $[3, 3]$ and $[2, 1, 2, 1]$ (which are closed). Each closed segment has the following property: if you start with any lamp in this segment, you cannot "leave" this segment; i. e. you cannot use the lamps from this segment to light any lamps outside. To prove this, let's suppose the opposite: we started in the closed segment and managed to turn a lamp of color $x$ outside that segment on (and this was the first lamp outside of the segment we turned on). We could not do it with the operation of the first type, since it would mean that the other lamp of color $x$ is in the segment (so it is not closed); and we could not do it with the operation of the second type, since for every color present in the closed segment, both lamps of that color (and the segment they can light up) belong to the closed segment. For every minimal closed segment, we can light it up using just one lamp (for example, the first lamp). So, to calculate the minimum possible size of a set of lamps $S$ we initially turn on, we can just split the given sequence of colors into minimal closed segments. Unfortunately, calculating the number of possible sets of lamps $S$ is trickier. For every minimal closed segment we got, we can calculate the number of "starting" lamps that allow us to light the whole segment, and multiply them. However, not every lamp from a minimal closed segment can be a "starting" lamp: for example, in the segment $[3, 2, 1, 2, 1, 3]$, any lamp of color $3$ can be used, but no lamp of other color can be used. 
To deal with this, let us find all minimal closed segments in the sequence of colors. In the example above, we have to find out that both the segments $[3, 2, 1, 2, 1, 3]$ and $[2, 1, 2, 1]$ are closed; and since lamps of colors $1$ and $2$ belong to the "inner" closed segment, they cannot be used to light the whole "outer" closed segment. So, if a lamp belongs to any of the "inner" closed segments, it cannot be used as a starting lamp. Let's mark all such lamps. We can also show that if a lamp is not marked, it can be used as a starting lamp. It is because if we start with some lamp and try to turn on everything we can with the operations given in the statement, we will get precisely the shortest closed segment that lamp belongs to. Proving it is not that hard: suppose we stopped before turning the whole shortest closed segment on; either we got multiple segments of lamps (and we can turn on everything in between them), or we got a segment which is not closed (and for at least one color, there is exactly one lamp in it; so we can light the other lamp of that color). Okay, let's recap. Our solution consists of the following steps: find all minimal closed segments of lamps; for every "inner" segment, mark all lamps in it to show they cannot be used as the starting lamps; split the given sequence of colors into the minimum number of segments; for each segment we got from the split, calculate the number of unmarked lamps and multiply those values. The most difficult part is finding all minimal closed segments of lamps. To get a solution in something like $O(n^2)$, we can do it naively: iterate on the left border of the segment, add the first lamp in the segment, and keep adding next lamps until the number of lamps of each color becomes either $0$ or $2$. That's how the easy version of the problem is solved. For the hard version, this is too slow. We have to find the closed segments faster. In order to do this, we can use hashing. 
Many hashing methods can help, but in my opinion, the most elegant one is XOR hashing, which works as follows: For each color, generate a random $64$-bit integer and replace both occurrences of that color with the generated number. Then, if the segment is closed, the XOR of all numbers in the segment is equal to $0$ (each color occurs either $0$ or $2$ times, thus each integer is taken either $0$ or $2$ times, and all integers taken twice cancel out). This allows us to find all minimal closed segments in $O(n \log n)$ as follows: iterate on the array of colors from left to right, maintaining the XOR on the current prefix and a map where, for each XOR we encountered, we store the longest prefix which has that value of XOR. Then, after we process the $i$-th element, we can quickly find the left border of the segment ending in the $i$-th element by looking for the current XOR in the map. Don't forget to update the map after that. That way, we arrive at a solution which works in $O(n \log n)$, which is enough to solve the hard version.
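A minimal sketch of the XOR-hashing idea for the top-level split (the name `split_closed` is illustrative; it only produces the partition into minimal closed segments that determines the minimum size of $S$, not the nested "inner" segments, which need the full map of previously seen prefix XORs as described above; it also assumes the random 64-bit labels do not collide, which fails only with negligible probability):

```python
import random

def split_closed(colors):
    """Split `colors` (each color appearing exactly twice) into the
    minimal closed segments of the top-level partition."""
    # Random 64-bit label per color; both occurrences get the same label.
    label = {c: random.getrandbits(64) for c in set(colors)}
    segments, cur, start = [], 0, 0
    for i, c in enumerate(colors):
        cur ^= label[c]
        if cur == 0:  # every label so far occurred twice: closed segment ends
            segments.append(colors[start:i + 1])
            start = i + 1
    return segments
```

For example, `[3, 2, 1, 2, 1, 3]` stays whole, while `[3, 3, 2, 1, 2, 1]` splits into `[3, 3]` and `[2, 1, 2, 1]`, matching the examples in the editorial.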
[ "combinatorics", "data structures", "dfs and similar", "dp", "graphs", "hashing" ]
2,300
#include<bits/stdc++.h> using namespace std; const int MOD = 998244353; int add(int x, int y) { return (((x + y) % MOD) + MOD) % MOD; } int mul(int x, int y) { return (x * 1ll * y) % MOD; } mt19937_64 rnd(98275314); long long gen() { long long x = 0; while(x == 0) x = rnd(); return x; } vector<int> c; vector<int> g; int process_block(int l, int r) { int ans = 0; while(l < r) { if(g[l] != -1 && g[l] < r) l = g[l]; else { ans++; l++; } } return ans; } void solve() { int n; scanf("%d", &n); int size = 0, cnt = 1; c.resize(n * 2); g.resize(n * 2, -1); for(int i = 0; i < 2 * n; i++) { scanf("%d", &c[i]); --c[i]; } vector<long long> val(n); for(int i = 0; i < n; i++) val[i] = gen(); map<long long, int> last; long long cur = 0; last[0] = 0; for(int i = 0; i < n * 2; i++) { cur ^= val[c[i]]; if(cur == 0) { size++; cnt = mul(cnt, process_block(last[0], i + 1)); last.clear(); } else if(last.count(cur)) { g[last[cur]] = i + 1; } last[cur] = i + 1; } printf("%d %d\n", size, cnt); c.clear(); g.clear(); } int main() { int t; scanf("%d", &t); for(int i = 0; i < t; i++) solve(); }
1915
A
Odd One Out
You are given three digits $a$, $b$, $c$. Two of them are equal, but the third one is different from the other two. Find the value that occurs exactly once.
You can write three if-statements to find the equal pair, and output the correct answer. A shorter solution: output the bitwise $\textsf{XOR}$ of $a$, $b$, and $c$. It works, since any number $\textsf{XOR}$ed with itself is $0$, so the two equal numbers will "cancel" and you will be left with the odd number out.
[ "bitmasks", "implementation" ]
800
#include <bits/stdc++.h> using namespace std; const int MAX = 200'007; const int MOD = 1'000'000'007; void solve() { int a, b, c; cin >> a >> b >> c; cout << (a ^ b ^ c) << '\n'; } int main() { ios::sync_with_stdio(false); cin.tie(nullptr); int tt; cin >> tt; for (int i = 1; i <= tt; i++) {solve();} // solve(); }
1915
B
Not Quite Latin Square
A Latin square is a $3 \times 3$ grid made up of the letters $A$, $B$, and $C$ such that: - in each row, the letters $A$, $B$, and $C$ each appear once, and - in each column, the letters $A$, $B$, and $C$ each appear once. For example, one possible Latin square is shown below. $$\begin{bmatrix} A & B & C \\ C & A & B \\ B & C & A \\ \end{bmatrix}$$You are given a Latin square, but one of the letters was replaced with a question mark $?$. Find the letter that was replaced.
There are many solutions. For example, look at the row with the question mark, and write some if statements to check the missing letter. A shorter solution: find the count of each letter. The one that appears only twice is the missing one.
[ "bitmasks", "brute force", "implementation" ]
800
#include <bits/stdc++.h> using namespace std; const int MAX = 200'007; const int MOD = 1'000'000'007; void solve() { int cnt[3] = {}; for (int i = 0; i < 9; i++) { char c; cin >> c; if (c != '?') {cnt[c - 'A']++;} } for (int i = 0; i < 3; i++) { if (cnt[i] < 3) {cout << (char)('A' + i) << '\n';} } } int main() { ios::sync_with_stdio(false); cin.tie(nullptr); int tt; cin >> tt; for (int i = 1; i <= tt; i++) {solve();} // solve(); }
1915
C
Can I Square?
Calin has $n$ buckets, the $i$-th of which contains $a_i$ wooden squares of side length $1$. Can Calin build a square using \textbf{all} the given squares?
You should add up all the values to get the sum $s$. Then we just need to check whether $s$ is a perfect square. There are many ways to do this; for example, you can use the built-in sqrt function or binary search. Be careful with precision errors, since the sqrt function returns a floating-point type.
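In languages with an exact integer square root, the precision concern disappears entirely; for example, in Python, a sketch using `math.isqrt` (an alternative to the binary-search reference solution, not a translation of it):

```python
import math

def is_perfect_square(s):
    """Exact check: math.isqrt computes the integer square root
    without any floating-point rounding."""
    r = math.isqrt(s)
    return r * r == s
```

This stays correct even for values near $2 \cdot 10^{14}$, where `float(sqrt(...))` can round to the wrong integer.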
[ "binary search", "implementation" ]
800
#include "bits/stdc++.h" using namespace std; #define ll long long #define all(v) v.begin(), v.end() #define rall(v) v.rbegin(),v.rend() #define pb push_back #define sz(a) (int)a.size() bool is_square(ll x) { ll l = 1, r = 1e9; while(l <= r) { ll mid = l + (r - l) / 2; if(mid * mid == x) return true; if(mid * mid > x) r = mid - 1; else l = mid + 1; } return false; } void solve() { ll n; cin >> n; ll s = 0; for(int i = 0, x; i < n; ++i) { cin >> x; s += x; } if(is_square(s)) cout << "YES\n"; else cout << "NO\n"; } int32_t main() { ios_base::sync_with_stdio(0);cin.tie(0);cout.tie(0); int t = 1; cin >> t; while(t--) { solve(); } }
1915
D
Unnatural Language Processing
Lura was bored and decided to make a simple language using the five letters $a$, $b$, $c$, $d$, $e$. There are two types of letters: - vowels — the letters $a$ and $e$. They are represented by $\textsf{V}$. - consonants — the letters $b$, $c$, and $d$. They are represented by $\textsf{C}$. There are two types of syllables in the language: $\textsf{CV}$ (consonant followed by vowel) or $\textsf{CVC}$ (vowel with consonant before and after). For example, $ba$, $ced$, $bab$ are syllables, but $aa$, $eda$, $baba$ are not. A word in the language is a sequence of syllables. Lura has written a word in the language, but she doesn't know how to split it into syllables. Help her break the word into syllables. For example, given the word $bacedbab$, it would be split into syllables as $ba.ced.bab$ (the dot $.$ represents a syllable boundary).
There are many solutions. Below is the simplest one: go through the string in reverse (from right to left). If the last character is a vowel, it must be the end of a $\textsf{CV}$ syllable; otherwise, the end of a $\textsf{CVC}$ syllable. So we can go back $2$ or $3$ characters accordingly, insert $\texttt{.}$, and continue. The complexity is $\mathcal{O}(n)$ if implemented well.
[ "greedy", "implementation", "strings" ]
900
#include <bits/stdc++.h> using namespace std; const int MAX = 200'007; const int MOD = 1'000'000'007; void solve() { int n; cin >> n; string s; cin >> s; string res = ""; while (!s.empty()) { int x; if (s.back() == 'a' || s.back() == 'e') {x = 2;} else {x = 3;} while (x--) { res += s.back(); s.pop_back(); } res += '.'; } res.pop_back(); reverse(res.begin(), res.end()); cout << res << '\n'; } int main() { ios::sync_with_stdio(false); cin.tie(nullptr); int tt; cin >> tt; for (int i = 1; i <= tt; i++) {solve();} // solve(); }
1915
E
Romantic Glasses
Iulia has $n$ glasses arranged in a line. The $i$-th glass has $a_i$ units of juice in it. Iulia drinks only from odd-numbered glasses, while her date drinks only from even-numbered glasses. To impress her date, Iulia wants to find a contiguous subarray of these glasses such that both Iulia and her date will have the same amount of juice in total if only the glasses in this subarray are considered. Please help her to do that. More formally, find out if there exist two indices $l$, $r$ such that $1 \leq l \leq r \leq n$, and $a_l + a_{l + 2} + a_{l + 4} + \dots + a_{r} = a_{l + 1} + a_{l + 3} + \dots + a_{r-1}$ if $l$ and $r$ have the same parity and $a_l + a_{l + 2} + a_{l + 4} + \dots + a_{r - 1} = a_{l + 1} + a_{l + 3} + \dots + a_{r}$ otherwise.
Let's rewrite the given equation: $a_l + a_{l + 2} + a_{l + 4} + \dots + a_{r} = a_{l + 1} + a_{l + 3} + \dots + a_{r-1}$ as $a_l - a_{l+1} + a_{l + 2} - a_{l+3} + a_{l + 4} - \dots - a_{r-1} + a_{r} = 0.$ How to check this? Let's flip all elements on even indices $a_2 \to -a_2, a_4 \to -a_4, \dots$. Then alternating sums of $[a_1, a_2, a_3, a_4, \dots]$ are the same as subarray sums on $[a_1, -a_2, a_3, -a_4, \dots]$. So we just need to check if there is a subarray with sum $0$. This is a standard problem with prefix sums: a subarray with sum $0$ exists if and only if some prefix sum value occurs twice (the subarray between the two occurrences sums to $0$). The complexity is $\mathcal{O}(n)$ or $\mathcal{O}(n \log n)$ depending on whether you detect repeated prefix sums with a hash table or with sorting/a balanced tree. Be careful about using hash tables, as they can be hacked.
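A minimal sketch of this reduction (the helper name is ours): negate the elements at even 1-based positions, then look for a repeated prefix sum.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Does some contiguous subarray have equal odd-position and even-position
// sums? Equivalent to: after negating 1-based even positions (0-based odd
// indices), does some subarray sum to zero?
bool has_balanced_subarray(vector<long long> a) {
    for (size_t i = 1; i < a.size(); i += 2) a[i] = -a[i];
    set<long long> seen = {0};  // prefix sum of the empty prefix
    long long pref = 0;
    for (long long x : a) {
        pref += x;
        if (!seen.insert(pref).second) return true;  // repeated prefix sum
    }
    return false;
}
```

For example, $[1, 2, 2, 1]$ is balanced ($1 + 2 = 2 + 1$ over the whole array), while $[1, 2]$ is not.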
[ "data structures", "greedy", "math" ]
1300
#include "bits/stdc++.h"
using namespace std;
#define ll long long

void solve() {
    int n;
    cin >> n;
    vector<int> a(n);
    for (int i = 0; i < n; ++i) cin >> a[i];
    // Negate 1-based even positions (0-based odd indices); then an
    // alternating sum of zero becomes a subarray with sum zero, which we
    // detect via repeated prefix sums.
    map<ll, ll> m;
    ll s = 0;
    m[0] = 1;
    for (int i = 0; i < n; ++i) {
        a[i] *= (i % 2) ? -1 : 1;
        s += a[i];
        if (m[s]) {  // this prefix sum was seen before
            cout << "YES\n";
            return;
        }
        ++m[s];
    }
    cout << "NO\n";
}

int32_t main() {
    ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0);
    int t = 1;
    cin >> t;
    while (t--) {
        solve();
    }
}
1915
F
Greetings
There are $n$ people on the number line; the $i$-th person is at point $a_i$ and wants to go to point $b_i$. For each person, $a_i < b_i$, and the starting and ending points of all people are distinct. (That is, all of the $2n$ numbers $a_1, a_2, \dots, a_n, b_1, b_2, \dots, b_n$ are distinct.) All the people will start moving simultaneously at a speed of $1$ unit per second until they reach their final point $b_i$. When two people meet at the same point, they will greet each other once. How many greetings will there be? Note that a person can still greet other people even if they have reached their final point.
Let's consider two people $1$ and $2$. When do they meet? Treat a person traveling from point $a$ to point $b$ as the segment $[a, b]$. Since everyone moves at the same speed, nobody can catch up with a person who is still moving; the only way two people meet is that one of them has stopped at their endpoint and the other walks over that point. This happens exactly when $a_1 < a_2$ and $b_2 < b_1$, or $a_2 < a_1$ and $b_1 < b_2$; in other words, when one segment contains the other. Counting the pairs in which one segment fully contains another is a classic problem: iterate over the segments in increasing order of $b$, and for each segment add the number of already-processed segments whose $a$ value is larger than that of the current segment. The time complexity is $\mathcal{O}(n \log n)$.
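For illustration, here is a deliberately quadratic sketch of the counting step (the intended solution replaces the inner loop with an order-statistics tree or a Fenwick tree to reach $\mathcal{O}(n \log n)$; the function name is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count pairs of segments where one fully contains the other.
// All 2n endpoints are assumed distinct, as in the problem statement.
long long count_nested_pairs(vector<pair<long long, long long>> seg) {
    // Sort by right endpoint, so every earlier segment has a smaller b.
    sort(seg.begin(), seg.end(),
         [](auto &p, auto &q) { return p.second < q.second; });
    long long ans = 0;
    for (size_t i = 0; i < seg.size(); ++i)
        for (size_t j = 0; j < i; ++j)
            if (seg[j].first > seg[i].first) ++ans;  // seg[j] lies inside seg[i]
    return ans;
}
```

For example, among $[1,4]$, $[2,3]$, $[5,6]$ only $[2,3] \subset [1,4]$, so the count is $1$.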
[ "data structures", "divide and conquer", "sortings" ]
1500
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace std;
typedef __gnu_pbds::tree<int, __gnu_pbds::null_type, less<int>,
                         __gnu_pbds::rb_tree_tag,
                         __gnu_pbds::tree_order_statistics_node_update>
    ordered_set;

int t, n;
vector<pair<int, int>> arr;
long long ans;
ordered_set st;

void solve() {
    cin >> n;
    arr.assign(n, {});
    // Store (b, a) so that sorting orders segments by their right endpoint.
    for (auto &p : arr) cin >> p.second >> p.first;
    sort(arr.begin(), arr.end());
    ans = 0;
    st.clear();
    for (auto p : arr) {
        // Segments processed so far have a smaller b; those whose a is
        // larger than the current a are fully contained in this segment.
        ans += st.size() - st.order_of_key(p.second);
        st.insert(p.second);
    }
    cout << ans << "\n";
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    cin >> t;
    while (t--) {
        solve();
    }
}
1915
G
Bicycles
All of Slavic's friends are planning to travel from the place where they live to a party using their bikes. And they all have a bike except Slavic. There are $n$ cities through which they can travel. They all live in the city $1$ and want to go to the party located in the city $n$. The map of cities can be seen as an undirected graph with $n$ nodes and $m$ edges. Edge $i$ connects cities $u_i$ and $v_i$ and has a length of $w_i$. Slavic doesn't have a bike, but what he has is money. Every city has exactly one bike for sale. The bike in the $i$-th city has a slowness factor of $s_{i}$. Once Slavic buys a bike, he can use it \textbf{whenever} to travel from the city he is currently in to any neighboring city, by taking $w_i \cdot s_j$ time, considering he is traversing edge $i$ using a bike $j$ he owns. Slavic can buy as many bikes as he wants as money isn't a problem for him. Since Slavic hates traveling by bike, he wants to get from his place to the party in the shortest amount of time possible. And, since his informatics skills are quite rusty, he asks you for help. What's the shortest amount of time required for Slavic to travel from city $1$ to city $n$? Slavic can't travel without a bike. It is guaranteed that it is possible for Slavic to travel from city $1$ to any other city.
We can build a graph with $n \cdot 1000$ nodes (since $s_i \le 1000$), where each node represents a pair $(i, s)$: city $i$ together with the smallest slowness $s$ among the bikes bought so far. Then we run Dijkstra's algorithm on this graph: from state $(i, s)$ we consider every edge of city $i$, paying $w \cdot s$ to traverse an edge of length $w$, and the new slowness at the neighboring city $j$ is $\min(s, s_j)$. After computing all shortest paths from the starting node $(1, s_1)$, the answer is the minimum distance among the states $(n, j)$ over all $j$ from $1$ to $1000$.
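A compact sketch of this state-graph Dijkstra (the function name and the `maxS` parameter are ours; the full solution follows the same idea):

```cpp
#include <bits/stdc++.h>
using namespace std;

// adj[u] holds {v, w} edges; s[u] is the slowness of the bike sold in city u;
// maxS bounds the slowness values. Returns the fastest time from city 0 to n-1.
long long fastest(int n, vector<vector<pair<int, int>>> &adj,
                  vector<int> &s, int maxS) {
    const long long INF = LLONG_MAX / 4;
    // dist[u][k]: best time to stand in city u with best-so-far slowness k.
    vector<vector<long long>> dist(n, vector<long long>(maxS + 1, INF));
    priority_queue<array<long long, 3>, vector<array<long long, 3>>,
                   greater<>> pq;  // {dist, city, slowness}, min-heap
    dist[0][s[0]] = 0;
    pq.push({0, 0, s[0]});
    while (!pq.empty()) {
        auto [d, u, k] = pq.top();
        pq.pop();
        if (d > dist[u][k]) continue;  // stale entry
        for (auto [v, w] : adj[u]) {
            int c = min<long long>(s[v], k);  // pick up a better bike if sold there
            if (dist[v][c] > d + w * k) {
                dist[v][c] = d + w * k;
                pq.push({dist[v][c], v, (long long)c});
            }
        }
    }
    return *min_element(dist[n - 1].begin(), dist[n - 1].end());
}
```

On a toy graph $0 \leftrightarrow 1$ (length $2$), $1 \leftrightarrow 2$ (length $3$) with slowness values $5, 1, 1$: ride the slow bike for $2 \cdot 5 = 10$, swap in city $1$, then pay $3 \cdot 1 = 3$, for a total of $13$.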
[ "graphs", "greedy", "implementation", "shortest paths", "sortings" ]
1800
#include "bits/stdc++.h"

const int64_t inf = 1e18;

void solve() {
    int n, m;
    std::cin >> n >> m;
    std::vector<std::pair<int, int>> adj[n];
    for (int i = 0; i < m; ++i) {
        int u, v, w;
        std::cin >> u >> v >> w;
        --u, --v;
        adj[u].emplace_back(v, w);
        adj[v].emplace_back(u, w);
    }
    std::vector<int> s(n);
    for (int &i : s) std::cin >> i;
    // dist[u][k]: shortest time to reach city u with best-so-far slowness k.
    std::vector<std::vector<int64_t>> dist(n, std::vector<int64_t>(1001, inf));
    std::vector<std::vector<bool>> vis(n, std::vector<bool>(1001, false));
    dist[0][s[0]] = 0;
    std::priority_queue<std::array<int64_t, 3>> q;  // {-dist, city, slowness}
    q.push({0, 0, s[0]});
    while (!q.empty()) {
        int u = q.top()[1], k = q.top()[2];
        q.pop();
        if (vis[u][k] || dist[u][k] == inf) continue;
        vis[u][k] = true;
        for (auto x : adj[u]) {
            int v = x.first, w = x.second;
            int c = std::min(s[v], k);  // buy the local bike if it is better
            if (dist[v][c] > dist[u][k] + 1LL * w * k) {
                dist[v][c] = dist[u][k] + 1LL * w * k;
                q.push({-dist[v][c], v, c});
            }
        }
    }
    int64_t ans = inf;
    for (int k = 1; k <= 1000; ++k) ans = std::min(ans, dist[n - 1][k]);
    std::cout << ans << "\n";
}

int main() {
    std::ios_base::sync_with_stdio(0);
    std::cin.tie(0);
    int t = 1;
    std::cin >> t;
    while (t--) {
        solve();
    }
}