Dataset columns:
contest_id: string (length 1 to 4)
index: string (43 classes)
title: string (length 2 to 63)
statement: string (length 51 to 4.24k)
tutorial: string (length 19 to 20.4k)
tags: list (length 0 to 11)
rating: int64 (800 to 3.5k)
code: string (length 46 to 29.6k)
1380
E
Merging Towers
You have a set of $n$ discs, the $i$-th disc has radius $i$. Initially, these discs are split among $m$ towers: each tower contains at least one disc, and the discs in each tower are sorted in descending order of their radii from bottom to top. You would like to assemble one tower containing all of those discs. To do so, you may choose two different towers $i$ and $j$ (each containing at least one disc), take several (possibly all) top discs from the tower $i$ and put them on top of the tower $j$ in the same order, as long as the top disc of tower $j$ is bigger than each of the discs you move. You may perform this operation any number of times. For example, if you have two towers containing discs $[6, 4, 2, 1]$ and $[8, 7, 5, 3]$ (in order from bottom to top), there are only two possible operations: - move disc $1$ from the first tower to the second tower, so the towers are $[6, 4, 2]$ and $[8, 7, 5, 3, 1]$; - move discs $[2, 1]$ from the first tower to the second tower, so the towers are $[6, 4]$ and $[8, 7, 5, 3, 2, 1]$. Let the difficulty of some set of towers be the minimum number of operations required to assemble one tower containing all of the discs. For example, the difficulty of the set of towers $[[3, 1], [2]]$ is $2$: you may move the disc $1$ to the second tower, and then move both discs from the second tower to the first tower. You are given $m - 1$ queries. Each query is denoted by two numbers $a_i$ and $b_i$, and means "merge the towers $a_i$ and $b_i$" (that is, take all discs from these two towers and assemble a new tower containing all of them in descending order of their radii from bottom to top). The resulting tower gets index $a_i$. For each $k \in [0, m - 1]$, calculate the difficulty of the set of towers after the first $k$ queries are performed.
First of all, let's find a simple way to evaluate the difficulty of a given set of towers. I claim that the difficulty is equal to the number of pairs of discs $(i, i + 1)$ that belong to different towers. Proof: during each operation we can "merge" at most one such pair, since if we move discs onto a tower with disc $i$ on top, only the pair $(i - 1, i)$ can be affected; conversely, we can always take the top several discs belonging to the same tower, say ending with disc $k$, and move them to the tower containing disc $k + 1$, thus merging exactly one pair in exactly one operation. After that, there are two main approaches: LCA and small-to-large merging. The model solution uses LCA, so I'll describe it. For each pair $(i, i + 1)$, we have to find the first moment these discs belong to the same tower. To do so, let's build a rooted tree on $2m - 1$ vertices. The vertices $1$ to $m$ will be the leaves of the tree and will represent the original towers. The vertex $m + i$ will represent the tower created during the $i$-th query and will have two children: the vertices representing the towers we merge during the $i$-th query. The vertex $2m - 1$ is the root. Now, if some vertex $x$ is an ancestor of vertex $y$, the tower represented by vertex $x$ contains all the discs from the tower represented by vertex $y$. So, to find the first tower containing both discs $i$ and $i + 1$, we find the lowest common ancestor of the vertices representing the towers $t_i$ and $t_{i + 1}$. The easiest way to do this is something like binary lifting, which allows us to solve the problem in $O(n \log m)$.
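As a sanity check of the claim above, the difficulty can be computed directly from the disc-to-tower assignment (a minimal sketch; `who[i]` is an assumed array mapping the $i$-th smallest disc to its tower index):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Difficulty of a set of towers: the number of adjacent disc pairs
// (i, i + 1) that belong to different towers.
int difficulty(const std::vector<int>& who) {
    int d = 0;
    for (std::size_t i = 1; i < who.size(); ++i)
        d += (who[i] != who[i - 1]);
    return d;
}
```

For the example $[[3, 1], [2]]$ from the statement, `who = {0, 1, 0}` gives difficulty $2$, matching the answer obtained by hand.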
[ "data structures", "dsu", "implementation", "trees" ]
2,300
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; int n, m; vector<int> p; vector<vector<int>> val; vector<int> who; int getp(int a){ return a == p[a] ? a : p[a] = getp(p[a]); } int main(){ scanf("%d%d", &n, &m); p.resize(m); val.resize(m); who.resize(n); int ans = n - 1; forn(i, m) p[i] = i; forn(i, n){ int x; scanf("%d", &x); --x; who[i] = x; ans -= (i && who[i] == who[i - 1]); val[who[i]].push_back(i); } printf("%d\n", ans); forn(i, m - 1){ int v, u; scanf("%d%d", &v, &u); v = getp(v - 1), u = getp(u - 1); if (val[v].size() < val[u].size()) swap(v, u); for (int x : val[u]){ if (x) ans -= who[x - 1] == v; if (x < n - 1) ans -= who[x + 1] == v; } for (int x : val[u]){ val[v].push_back(x); who[x] = v; } p[u] = v; printf("%d\n", ans); } return 0; }
1380
F
Strange Addition
Let $a$ and $b$ be some non-negative integers. Let's define the strange addition of $a$ and $b$ as follows: - write down the numbers one under another and align them by their least significant digit; - add them up digit by digit and concatenate the respective sums together. Assume that both numbers have an infinite number of leading zeros. For example, let's take a look at the strange addition of the numbers $3248$ and $908$: the digit-by-digit sums are $3$, $2 + 9 = 11$, $4 + 0 = 4$ and $8 + 8 = 16$, so the result is $311416$. You are given a string $c$, consisting of $n$ digits from $0$ to $9$. You are also given $m$ updates of the form: - $x~d$ — replace the digit at the $x$-th position of $c$ with a digit $d$. Note that string $c$ might have leading zeros at any point of time. After each update print the number of pairs $(a, b)$ such that both $a$ and $b$ are non-negative integers and the result of a strange addition of $a$ and $b$ is equal to $c$. Note that the number of pairs can be quite large, so print it modulo $998244353$.
Let's solve the task as if there are no updates. This can be done with a pretty straightforward dp: $dp_i$ is the number of pairs $(a, b)$ such that the result of the strange addition of $a$ and $b$ is the prefix of $c$ of length $i$, with $dp_0 = 1$. From each state you add a single digit to $a$ and to $b$ at the same time: either go to $dp_{i+1}$ and multiply by the number of pairs of digits that sum up to $c_i$, or go to $dp_{i+2}$ and multiply by the number of pairs of digits that sum up to the two-digit value $\overline{c_i c_{i+1}}$. No pair of digits can sum up to a three-digit value, so it makes no sense to go further. Let's optimize this dp with some data structure; a segment tree works well. Let each node store four values: the number of ways to split the segment $[l; r)$ into blocks of size $1$ or $2$ such that: both the leftmost and the rightmost characters are not taken into any block; the leftmost character is taken into some block and the rightmost is not; the leftmost is not taken and the rightmost is taken; both the leftmost and the rightmost characters are taken into some blocks. This makes the merge pretty manageable: glue up the segments so that all the middle characters are taken into some block, either into separate blocks in their own segments or into one shared block of length $2$. The answer is in the root of the tree, in the value where both border characters are taken. An update in the segment tree still works in $O(\log n)$. Overall complexity: $O(n + m \log n)$.
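Before adding the segment tree, the no-updates dp is worth writing out (an illustrative sketch, not the model solution; it uses the facts that $d + 1$ digit pairs sum to a digit $d$, that $19 - v$ pairs sum to a two-digit value $v \in [10, 18]$, and that a two-digit block can therefore only start where $c_i = 1$):

```cpp
#include <cassert>
#include <string>
#include <vector>

// O(n) dp without updates: dp[i] = number of pairs (a, b) whose strange
// addition equals the prefix of c of length i.
long long countPairs(const std::string& c) {
    const long long MOD = 998244353;
    int n = (int)c.size();
    std::vector<long long> dp(n + 1, 0);
    dp[0] = 1;
    for (int i = 0; i < n; ++i) {
        int d = c[i] - '0';
        dp[i + 1] = (dp[i + 1] + dp[i] * (d + 1)) % MOD;  // block of size 1
        if (d == 1 && i + 1 < n) {                        // block of size 2
            int v = 10 + (c[i + 1] - '0');
            dp[i + 2] = (dp[i + 2] + dp[i] * (19 - v)) % MOD;
        }
    }
    return dp[n];
}
```

For example, $c = 10$ can be split as $[1][0]$ ($2 \cdot 1 = 2$ pairs) or $[10]$ ($9$ pairs), $11$ in total.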
[ "data structures", "dp", "matrices" ]
2,600
#include <bits/stdc++.h> using namespace std; const int N = int(5e5) + 9; const int MOD = 998244353; int mul(int a, int b) { return (a * 1LL * b) % MOD; } void add(int &a, int x) { a += x; a %= MOD; } int bp(int a, int n) { int res = 1; for (; n > 0; n >>= 1) { if (n & 1) res = mul(res, a); a = mul(a, a); } return res; } int inv(int a) { int ia = bp(a, MOD - 2); assert(mul(a, ia) == 1); return ia; } int n, m; string s; int dp[N][10]; int idp[N][10]; set <pair<int, int> > lr; int res = 1; void updRes(int l, int r, int c) { assert(l <= r); if (c == 1) { assert(!lr.count(make_pair(l, r))); lr.insert(make_pair(l, r)); res = mul(res, dp[r - l + (r + 1 != n)][r + 1 == n? 1 : s[r + 1] - '0']); } else { assert(lr.count(make_pair(l, r))); lr.erase(make_pair(l, r)); res = mul(res, inv(dp[r - l + (r + 1 != n)][r + 1 == n? 1 : s[r + 1] - '0'])); } } void getLR(int &l, int &r, int pos) { l = r = -1; auto it = lr.lower_bound(make_pair(pos, n)); if(it == lr.begin()) return; --it; if(!(it->first <= pos && pos <= it->second)) return; l = it->first, r = it->second; } char buf[N]; int main(){ scanf("%d %d\n", &n, &m); scanf("%s", buf); s = string(buf); for (int i = 0; i <= 9; ++i) { dp[0][i] = i + 1; //idp[0][i] = inv(dp[0][i]); dp[1][i] = 2 * (i + 1) + (9 - i); //idp[1][i] = inv(dp[1][i]); } for (int i = 2; i < N; ++i) for (int j = 0; j <= 9; ++j) { dp[i][j] = (2LL * dp[i - 1][j] + 8LL * dp[i - 2][j]) % MOD; //idp[i][j] = inv(dp[i][j]); } for (int i = 0; i < N; ++i) for(int j = 0; j < 10; ++j) assert(dp[i][j] != 0); for (int l = 0; l < n; ) { int r = l; while(r < n && s[r] == '1') ++r; res = mul(res, dp[r - l - (r == n)][r == n? 1 : s[r] - '0']); if (l != r) lr.insert(make_pair(l, r - 1)); l = r + 1; } for (int i = 0; i < m; ++i) { int pos; char x; scanf("%d %c\n", &pos, &x); --pos; if (x == '1') { if (s[pos] != '1'){ int l1, r1, l2, r2; getLR(l1, r1, pos - 1); getLR(l2, r2, pos + 1); int l = pos, r = pos; if (l1 != -1) { assert(r1 == pos - 1); updRes(l1, r1, -1); l = l1; } else { res = mul(res, inv(dp[0][s[pos] - '0'])); } if (l2 != -1) { assert(l2 == pos + 1); updRes(l2, r2, -1); r = r2; } else { if (pos + 1 != n) res = mul(res, inv(dp[0][s[pos + 1] - '0'])); } s[pos] = x; updRes(l, r, 1); } } else { if (s[pos] != '1') { if (pos - 1 >= 0 && s[pos - 1] == '1') { int l, r; getLR(l, r, pos - 1); updRes(l, r, -1); s[pos] = x; updRes(l, r, 1); } else { res = mul(res, dp[0][x - '0']); res = mul(res, inv(dp[0][s[pos] - '0'])); s[pos] = x; } } else { int l, r; getLR(l, r, pos); assert(l != -1 && r != -1); updRes(l, r, -1); s[pos] = x; if (l == r) { res = mul(res, dp[0][s[pos] - '0']); if (pos + 1 < n) res = mul(res, dp[0][s[pos + 1] - '0']); } else if (pos == l) { res = mul(res, dp[0][s[pos] - '0']); updRes(l + 1, r, 1); } else if (pos == r) { if (pos + 1 != n) res = mul(res, dp[0][s[pos + 1] - '0']); updRes(l, r - 1, 1); } else { updRes(l, pos - 1, 1); updRes(pos + 1, r, 1); } } } printf("%d\n", res); } }
1380
G
Circular Dungeon
You are creating a level for a video game. The level consists of $n$ rooms placed in a circle. The rooms are numbered $1$ through $n$. Each room contains exactly one exit: completing the $j$-th room allows you to go to the $(j+1)$-th room (and completing the $n$-th room allows you to go to the $1$-st room). You are given the description of the multiset of $n$ chests: the $i$-th chest has treasure value $c_i$. Each chest can be of one of two types: - regular chest — when a player enters a room with this chest, he grabs the treasure and proceeds to the next room; - mimic chest — when a player enters a room with this chest, the chest eats him alive, and he loses. The player starts in a random room with each room having an equal probability of being chosen. The player's earnings are equal to the total value of the treasure chests he'd collected before he lost. You are allowed to choose the order the chests go into the rooms. For each $k$ from $1$ to $n$ place the chests into the rooms in such a way that: - each room contains \textbf{exactly} one chest; - \textbf{exactly} $k$ chests are mimics; - the expected value of the player's earnings is \textbf{minimum} possible. Please note that for each $k$ the placement is chosen independently. It can be shown that the answer is in the form of $\frac{P}{Q}$ where $P$ and $Q$ are non-negative integers and $Q \ne 0$. Report the values of $P \cdot Q^{-1} \pmod {998244353}$.
Idea: BledDest. First, note that the expected value equals the average of total earnings over all starting positions, i.e. the sum of earnings over all positions divided by $n$, so we can transition to minimizing the sum. Let's learn how to solve the task for some fixed $k$. Fix some arrangement and rotate the rooms so that the last room contains a mimic. Now you have $cnt_1$ regular chests, then a single mimic, $cnt_2$ regular chests, a single mimic, $\dots$, $cnt_k$ regular chests, a single mimic. All $cnt_i \ge 0$ and $\sum \limits_{i=1}^{k} cnt_i = n - k$. Consider one of these intervals of $cnt_i$ regular chests. The last chest in the interval is collected from $cnt_i$ starting positions, the second-to-last from $cnt_i - 1$ positions, and so on. Now let's find the optimal way to choose the $cnt_i$. Fix some values of $cnt_i$ and take the smallest and the largest of them, say $x$ and $y$. If they differ by at least $2$ ($x \le y - 2$), then a smaller result can always be achieved by moving a regular chest from the larger interval to the smaller one. The sequences of coefficients for the two intervals are $[1, 2, \dots, x - 1, x]$ and $[1, 2, \dots, y - 1, y]$; after moving one chest they become $[1, 2, \dots, x - 1, x, x + 1]$ and $[1, 2, \dots, y - 1]$. Comparing the multisets of coefficients, only the coefficient $y$ got removed and the coefficient $x + 1$ was added. So you can rearrange the chests so that every chest keeps its coefficient except the one that was assigned $y$, which becomes assigned $x + 1$, thus decreasing the total value. Hence all $cnt_i$ should differ by at most $1$, and the values of $cnt_i$ are now fixed. The only thing left is to assign chests optimally. Write down the union of all the coefficient sequences from all the intervals $\bigcup \limits_{i=1}^{k} [1, \dots, cnt_i-1, cnt_i]$ and sort it in non-decreasing order. 
It's easy to show that the chests should be sorted in non-increasing order (a really classical argument: any other arrangement can be improved by exchanging a pair of chests). That gives a solution in $O(n^2)$: sort all the chests once in the beginning, then for each $k$ multiply the value of the $i$-th chest ($0$-indexed) by $\lfloor \frac{i}{k} \rfloor$ and sum up the results. Finally, let's speed this up with prefix sums. Notice that the first $k$ values are multiplied by $0$, the next $k$ values by $1$, and so on; if $n$ is not divisible by $k$, the last block is just shorter than $k$. Thus, we can calculate the answer for a given $k$ in $O(\frac{n}{k})$, and $O(\sum \limits_{k=1}^{n} \frac{n}{k}) = O(n \log n)$ overall. Overall complexity: $O(n \log n)$.
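The prefix-sum speedup can be checked against the direct per-$k$ formula (an illustrative sketch with plain 64-bit integers, ignoring the modular arithmetic of the actual solution):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Direct formula: the i-th (0-indexed) most valuable chest gets
// coefficient floor(i / k); c must be sorted in non-increasing order.
long long direct(const std::vector<long long>& c, int k) {
    long long s = 0;
    for (int i = 0; i < (int)c.size(); ++i) s += c[i] * (i / k);
    return s;
}

// Block version: the j-th block of k consecutive chests is multiplied
// by j, using prefix sums pref[i] = c[0] + ... + c[i - 1].
long long blocks(const std::vector<long long>& pref, int k) {
    int n = (int)pref.size() - 1;
    long long s = 0;
    for (int i = 0, j = 0; i < n; i += k, ++j)
        s += (long long)j * (pref[std::min(n, i + k)] - pref[i]);
    return s;
}
```

Both versions agree for every $k$; the block version runs in $O(n / k)$ once the prefix sums are built.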
[ "greedy", "math", "probabilities" ]
2,600
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; const int MOD = 998244353; int add(int a, int b){ a += b; if (a >= MOD) a -= MOD; if (a < 0) a += MOD; return a; } int mul(int a, int b){ return a * 1ll * b % MOD; } int binpow(int a, int b){ int res = 1; while (b){ if (b & 1) res = mul(res, a); a = mul(a, a); b >>= 1; } return res; } vector<int> pr; int main() { int n; scanf("%d", &n); vector<int> c(n); forn(i, n) scanf("%d", &c[i]); sort(c.begin(), c.end(), greater<int>()); pr.push_back(0); forn(i, n) pr.push_back(add(pr.back(), c[i])); int invn = binpow(n, MOD - 2); for (int k = 1; k <= n; ++k){ int ans = 0; for (int i = 0, j = 0; i < n; i += k, ++j) ans = add(ans, mul(j, add(pr[min(n, i + k)], -pr[i]))); printf("%d ", mul(ans, invn)); } puts(""); return 0; }
1381
A1
Prefix Flip (Easy Version)
\textbf{This is the easy version of the problem. The difference between the versions is the constraint on $n$ and the required number of operations. You can make hacks only if all versions of the problem are solved.} There are two binary strings $a$ and $b$ of length $n$ (a binary string is a string consisting of symbols $0$ and $1$). In an operation, you select a prefix of $a$, and simultaneously invert the bits in the prefix ($0$ changes to $1$ and $1$ changes to $0$) and reverse the order of the bits in the prefix. For example, if $a=001011$ and you select the prefix of length $3$, it becomes $011011$. Then if you select the entire string, it becomes $001001$. Your task is to transform the string $a$ into $b$ in at most $3n$ operations. It can be proved that it is always possible.
The easy version has two main solutions. Solution 1: $O(n)$ time with $3n$ operations. The idea is to fix the bits one by one: make $a_1=b_1$, then $a_2=b_2$, and so on. To fix bit $i$ (when $a_i\ne b_i$), we can apply the operation to the prefix of length $i$, then to the prefix of length $1$, and again to the prefix of length $i$. These three operations flip bit $i$ and do not change any other bits of $a$, so this is simple to implement in $O(n)$. Since we use $3$ operations per bit, we use at most $3n$ operations overall. Solution 2: $O(n^2)$ time with $2n$ operations. In this solution, we take a similar approach, fixing the bits one by one, but in reverse order. To fix bit $i$, we either apply the operation to the prefix of length $i$, or flip the first bit and then apply the operation to the prefix of length $i$. Since we work in reverse order, the previously fixed bits do not get messed up by this procedure, and we use at most $2$ operations per bit, so $2n$ operations overall. However, we do have to simulate the operations in order to check whether we should flip the first bit. Simulating an operation can easily be done in $O(n)$ time per operation, or $O(n^2)$ time to simulate all operations.
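Solution 1 is short enough to sketch in full (an illustration of the idea above; `apply` simulates one operation and is only needed for verification, not for producing the answer):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Simulate one operation: invert and reverse the prefix of length len
// (the two steps commute, so the order does not matter).
void apply(std::string& s, int len) {
    std::reverse(s.begin(), s.begin() + len);
    for (int j = 0; j < len; ++j) s[j] = (s[j] == '0') ? '1' : '0';
}

// For each mismatched bit i (0-indexed), the operations on prefixes of
// lengths i + 1, 1, i + 1 flip bit i and leave every other bit unchanged.
std::vector<int> transform3n(std::string a, const std::string& b) {
    std::vector<int> ops;
    for (int i = 0; i < (int)a.size(); ++i)
        if (a[i] != b[i]) {
            ops.push_back(i + 1);
            ops.push_back(1);
            ops.push_back(i + 1);
            a[i] = b[i];  // net effect of the three operations
        }
    return ops;
}
```

Replaying the returned operations with `apply` indeed transforms $a$ into $b$ in at most $3n$ operations.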
[ "constructive algorithms", "data structures", "strings" ]
1,300
#include <bits/stdc++.h> using namespace std; // Solution 2 from editorial // Fix the bits one-by-one in reverse order. // Simulate the operations manually, achieving O(n^2) time complexity int t, n; string a, b; int main() { ios::sync_with_stdio(false); cin.tie(0); cin >> t; while(t--) { cin >> n >> a >> b; vector<int> ops; for(int i = n - 1; i >= 0; i--) { if(a[i] != b[i]) { if(a[0] == b[i]) { ops.push_back(1); a[0] = '0' + !(a[0] - '0'); } reverse(a.begin(), a.begin() + i + 1); for(int j = 0; j <= i; j++) { a[j] = '0' + !(a[j] - '0'); } ops.push_back(i + 1); } } cout << ops.size() << ' '; for(int x : ops) { cout << x << ' '; } cout << '\n'; } }
1381
A2
Prefix Flip (Hard Version)
\textbf{This is the hard version of the problem. The difference between the versions is the constraint on $n$ and the required number of operations. You can make hacks only if all versions of the problem are solved.} There are two binary strings $a$ and $b$ of length $n$ (a binary string is a string consisting of symbols $0$ and $1$). In an operation, you select a prefix of $a$, and simultaneously invert the bits in the prefix ($0$ changes to $1$ and $1$ changes to $0$) and reverse the order of the bits in the prefix. For example, if $a=001011$ and you select the prefix of length $3$, it becomes $011011$. Then if you select the entire string, it becomes $001001$. Your task is to transform the string $a$ into $b$ in at most $2n$ operations. It can be proved that it is always possible.
There are several ways to solve the hard version as well. Solution 1. Given an arbitrary binary string $s$, we can make all bits $0$ in at most $n$ operations: simply scan the string from left to right, and if bits $i$ and $i+1$ disagree, apply the operation to the prefix of length $i$ (this keeps the prefix of length $i+1$ constant). This is also easy to simulate in $O(n)$ time. We can make $a$ all zeros in at most $n$ operations, and we can make $b$ all zeros in at most $n$ operations. Since each operation is its own inverse, replaying the operations on $b$ in reverse order transforms the all-zero string into $b$; concatenating both parts transforms $a$ into $b$ in at most $2n$ operations, as desired. Solution 2. Another approach is to optimize the simulation from solution 2 of the easy version. You can do this with a data structure such as a balanced binary search tree in $O(n \log n)$ time, but there is no need. Instead, observe that after making the last $k$ bits correct with our procedure, the prefix of $a$ of length $n-k$ corresponds to some segment of the original string $a$, except that it is possibly flipped (inverted and reversed). So we need only keep track of the starting index of this segment and a flag for whether it is flipped. Complexity is $O(n)$. Solution 3. A third solution uses randomization to improve on the $3n$ operations of solution 1 from the easy version. Observe that in a random test case, approximately half the bits of $a$ will be mismatched with $b$. Solution 1 of the easy version uses $3$ operations per mismatch, which is $3n/2$ operations in expectation. Obviously, you don't get to decide that all test cases are random. But you can spend a small number of operations initially, flipping random prefixes to make the string more random, and if it doesn't work, try again repeatedly. Flipping random prefixes is a complicated process whose exact success probability is hard to compute. But if one attempt with $k$ random prefix flips succeeds with probability $p$, the expected number of attempts is $1/p$, for an expected running time of $O((k+1)n/p)$. 
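Solution 1 of the hard version can be sketched as follows (an illustration, not the official implementation; it relies on each operation being an involution):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Simulate one operation: invert and reverse the prefix of length len.
void apply(std::string& s, int len) {
    std::reverse(s.begin(), s.begin() + len);
    for (int j = 0; j < len; ++j) s[j] = (s[j] == '0') ? '1' : '0';
}

// Operations that turn s into the all-zero string, at most |s| of them.
// Invariant: before step i, the prefix of length i + 1 is constant.
std::vector<int> makeZero(const std::string& s) {
    int n = (int)s.size();
    std::vector<int> ops;
    char cur = s[0];  // current value of the constant prefix
    for (int i = 0; i + 1 < n; ++i)
        if (cur != s[i + 1]) {     // prefix disagrees with the next bit
            ops.push_back(i + 1);  // flipping the prefix makes them agree
            cur = s[i + 1];
        }
    if (cur == '1') ops.push_back(n);  // turn the all-ones string to zeros
    return ops;
}

// a -> 00...0 -> b: since each operation is its own inverse, replaying
// b's operations in reverse maps the zero string back to b.
std::vector<int> transform2n(const std::string& a, const std::string& b) {
    std::vector<int> ops = makeZero(a), tail = makeZero(b);
    ops.insert(ops.end(), tail.rbegin(), tail.rend());
    return ops;
}
```

The total number of operations is at most $n + n = 2n$, matching the required bound.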
If you find a deterministic solution with a strictly lower ratio than $2$ operations per bit, we would love to hear about it!
[ "constructive algorithms", "data structures", "implementation", "strings", "two pointers" ]
1,700
#include <bits/stdc++.h> using namespace std; // Solution 2 from editorial // instead of simulating, we store two variables idx and flip // they tell us enough information to ask the value of the first bit during the process // If flip is false, it means the substring s[idx, ..., idx + i) is currently the beginning of s // If flip is true, it means the flipped substring s[idx - i + 1, ..., idx] is currently the beginning of s // we update idx and flip correctly after fixing each bit in O(1) time per bit, or O(n) time overall int t, n; string a, b; int main() { ios::sync_with_stdio(false); cin.tie(0); cin >> t; while(t--) { cin >> n >> a >> b; bool flip = false; int idx = 0; vector<int> ops; for(int i = n - 1; i >= 0; i--) { if(flip ^ (a[idx] == b[i])) { ops.push_back(1); } ops.push_back(i + 1); if(flip) idx -= i; else idx += i; flip = !flip; } cout << ops.size() << ' '; for(int x : ops) { cout << x << ' '; } cout << '\n'; } }
1381
B
Unmerge
Let $a$ and $b$ be two arrays of lengths $n$ and $m$, respectively, with no elements in common. We can define a new array $\mathrm{merge}(a,b)$ of length $n+m$ recursively as follows: - If one of the arrays is empty, the result is the other array. That is, $\mathrm{merge}(\emptyset,b)=b$ and $\mathrm{merge}(a,\emptyset)=a$. In particular, $\mathrm{merge}(\emptyset,\emptyset)=\emptyset$. - If both arrays are non-empty, and $a_1<b_1$, then $\mathrm{merge}(a,b)=[a_1]+\mathrm{merge}([a_2,\ldots,a_n],b)$. That is, we delete the first element $a_1$ of $a$, merge the remaining arrays, then add $a_1$ to the beginning of the result. - If both arrays are non-empty, and $a_1>b_1$, then $\mathrm{merge}(a,b)=[b_1]+\mathrm{merge}(a,[b_2,\ldots,b_m])$. That is, we delete the first element $b_1$ of $b$, merge the remaining arrays, then add $b_1$ to the beginning of the result. This algorithm has the nice property that if $a$ and $b$ are sorted, then $\mathrm{merge}(a,b)$ will also be sorted. For example, it is used as a subroutine in merge-sort. For this problem, however, we will consider the same procedure acting on non-sorted arrays as well. For example, if $a=[3,1]$ and $b=[2,4]$, then $\mathrm{merge}(a,b)=[2,3,1,4]$. A permutation is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array) and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array). There is a permutation $p$ of length $2n$. Determine if there exist two arrays $a$ and $b$, each of length $n$ and with no elements in common, so that $p=\mathrm{merge}(a,b)$.
Consider the maximum element $2n$ of $p$. Assume without loss of generality that it comes from array $a$. Then the merge algorithm will exhaust array $b$ before it takes the element $2n$. Therefore, if $2n$ appears at index $i$ in $p$, the entire suffix of $p$ beginning at index $i$ must be a contiguous block in one of the arrays $a$ or $b$. Then if we ignore this suffix of $p$, we should determine if the prefix of $p$ can be the merge of two arrays of certain sizes. We can repeat the same argument, as the maximum remaining element also corresponds to a contiguous block. Taking this argument all the way, consider all indices $i$ where $p_i$ is greater than all elements that come before it. These indices give us the lengths of the contiguous blocks, and we should determine if a subset of them adds up to $n$. We've shown this condition is necessary. It is also sufficient, because if we assign the blocks to $a$ and $b$ accordingly, the merge algorithm reproduces $p$. Now, this is just a subset-sum problem. The standard subset-sum DP approach takes $O(n^2)$ time, which is good enough. It's also possible to do $O(n\sqrt n)$ by using the fact that the sum of the values is $2n$ (as they are the lengths of disjoint blocks), meaning there are only $O(\sqrt n)$ distinct values.
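The whole reduction fits in a few lines (an illustrative sketch using the standard $O(n^2)$ subset-sum dp rather than the $O(n\sqrt n)$ refinement used by the code below):

```cpp
#include <cassert>
#include <vector>

// p is a permutation of 1..2n. Cut p into blocks at its prefix maxima,
// then decide whether some subset of block lengths sums to n.
bool canUnmerge(const std::vector<int>& p) {
    int n2 = (int)p.size(), n = n2 / 2;
    std::vector<int> lens;
    int mx = 0, start = 0;
    for (int i = 0; i < n2; ++i)
        if (p[i] > mx) {
            if (i > 0) lens.push_back(i - start);  // close the previous block
            start = i;
            mx = p[i];
        }
    lens.push_back(n2 - start);
    std::vector<char> dp(n + 1, 0);  // dp[s] = can some blocks sum to s
    dp[0] = 1;
    for (int len : lens)
        for (int s = n; s >= len; --s) dp[s] |= dp[s - len];
    return dp[n];
}
```

For $p = [2, 3, 1, 4]$ the blocks are $[2]$, $[3, 1]$, $[4]$ with lengths $1, 2, 1$; since $1 + 1 = 2 = n$, the answer is YES.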
[ "dp" ]
1,800
#include <bits/stdc++.h> using namespace std; // compute block lengths, and do subset sum. // O(n^2) subset sum is a standard dp // For educational purposes, here is the O(n sqrt(n)) solution // we treat each distinct length independently, and remember the number of occurrences // use the helper array a to update the dp states const int N = 1e5 + 5; int t, n, p[2 * N], a[N]; bool vis[N]; int main() { ios::sync_with_stdio(false); cin.tie(0); cin >> t; while(t--) { cin >> n; int mx = 0; vector<int> ind; for(int i = 1; i <= 2 * n; i++) { cin >> p[i]; if(p[i] > mx) { mx = p[i]; ind.push_back(i); } } ind.push_back(2 * n + 1); vector<int> lens; for(int i = 1; i < (int) ind.size(); i++) { lens.push_back(ind[i] - ind[i - 1]); } sort(lens.begin(), lens.end()); fill(vis, vis + n + 1, false); vis[0] = true; int m = lens.size(); for(int k = 0; k < m; k++) { int r = k; while(r < m && lens[r] == lens[k]) r++; fill(a, a + n + 1, 0); for(int i = lens[k]; i <= n; i++) { if(!vis[i] && vis[i - lens[k]] && a[i - lens[k]] < r - k) { a[i] = a[i - lens[k]] + 1; vis[i] = true; } } k = r - 1; } cout << (vis[n] ? "YES" : "NO") << '\n'; } }
1381
C
Mastermind
In the game of Mastermind, there are two players  — Alice and Bob. Alice has a secret code, which Bob tries to guess. Here, a code is defined as a sequence of $n$ colors. There are exactly $n+1$ colors in the entire universe, numbered from $1$ to $n+1$ inclusive. When Bob guesses a code, Alice tells him some information about how good of a guess it is, in the form of two integers $x$ and $y$. The first integer $x$ is the number of indices where Bob's guess correctly matches Alice's code. The second integer $y$ is the size of the intersection of the two codes as multisets. That is, if Bob were to change the order of the colors in his guess, $y$ is the maximum number of indices he could get correct. For example, suppose $n=5$, Alice's code is $[3,1,6,1,2]$, and Bob's guess is $[3,1,1,2,5]$. At indices $1$ and $2$ colors are equal, while in the other indices they are not equal. So $x=2$. And the two codes have the four colors $1,1,2,3$ in common, so $y=4$. \begin{center} Solid lines denote a matched color for the same index. Dashed lines denote a matched color at a different index. $x$ is the number of solid lines, and $y$ is the total number of lines. \end{center} You are given Bob's guess and two values $x$ and $y$. Can you find one possibility of Alice's code so that the values of $x$ and $y$ are correct?
Suppose we have already decided which $x$ indices agree on color. We should shuffle the remaining $n-x$ indices in a way that minimizes the number of matches. We should also replace $n-y$ indices with a color that doesn't contribute to the multiset intersection. Because there are $n+1>n$ colors, there is some color $c$ that doesn't appear in $a$, and we can use it to fill these $n-y$ indices. Assuming that $n-y$ is greater than or equal to the minimum number of excess matches we're forced to make, we have a solution. Let $f$ be the number of occurrences of the most frequent color in the $n-x$ indices. Then the number of forced matches is clearly at least $2f-(n-x)$. And it can be achieved as follows. Let's reindex so that we have contiguous blocks of the same color. Then rotate everything by $\left\lfloor \frac{n-x}{2}\right\rfloor$ indices. Now we need to decide on the $x$ indices that minimize $f$. This can be done simply by always choosing the most frequent color remaining. We can do this with a priority queue in $O(n\log n)$, or $O(n)$ with counting sort. To make the solution clearer, let's see how it works on the sixth test case in the sample: $[3,3,2,1,1,1]$ with $x=2,y=4$. First, we greedily choose the most frequent color two times: $[3,\_,\_,1,\_,\_]$. After choosing $1$ the first time, there is a tie between $1$ and $3$. So alternatively, we could choose the color $1$ twice. Then the remaining indices have colors $3,2,1,1$. Rotating by $(n-x)/2=2$ indices, we can place the colors as $1,1,2,3$ to get $[3,1,1,1,2,3]$. The color $4$ does not appear in $a$, so we should fill $n-y=2$ indices with $4$. (But not where we already forced a match.) For example, we get the solution $[3,4,1,1,4,3]$.
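The rotation argument can be checked in isolation (an illustrative sketch: group equal colors contiguously by sorting, shift by half the length, and count fixed points, which should meet the $\max(0, 2f - m)$ bound for the $m = n - x$ unmatched entries):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Place at index i the color originally at index (i + m / 2) mod m,
// with equal colors grouped contiguously, and count forced matches.
int forcedMatches(std::vector<int> v) {
    std::sort(v.begin(), v.end());  // contiguous blocks of equal colors
    int m = (int)v.size(), cnt = 0;
    for (int i = 0; i < m; ++i)
        cnt += (v[i] == v[(i + m / 2) % m]);
    return cnt;
}
```

For the remaining colors $3, 2, 1, 1$ of the example ($f = 2$, $m = 4$) the count is $0$; for $1, 1, 1, 2$ ($f = 3$) it is $2 = 2f - m$.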
[ "constructive algorithms", "graph matchings", "greedy", "implementation", "sortings", "two pointers" ]
2,500
#include <bits/stdc++.h> using namespace std; // ind[color] = list of indices that have this color // hist[frequency] = list of colors with this frequency // a solution with priority_queue is perhaps simpler to implement, // but here is an O(n) solution because we can. const int N = 1e5 + 5; int t, n, x, y, b[N], a[N]; vector<int> ind[N], hist[N]; bool mis[N]; int main() { ios::sync_with_stdio(false); cin.tie(0); cin >> t; while(t--) { cin >> n >> x >> y; for(int i = 0; i <= n + 1; i++) { ind[i].clear(); hist[i].clear(); mis[i] = false; a[i] = 0; } for(int i = 1; i <= n; i++) { cin >> b[i]; ind[b[i]].push_back(i); } for(int i = 1; i <= n + 1; i++) { hist[ind[i].size()].push_back(i); } // greedily choose x indices by frequency int idx = n; for(int k = 1; k <= x; k++) { while(hist[idx].empty()) idx--; int col = hist[idx].back(); // match the color and update our ind/hist structures a[ind[col].back()] = col; ind[col].pop_back(); hist[idx].pop_back(); hist[idx - 1].push_back(col); } while(idx > 0 && hist[idx].empty()) idx--; vector<int> ve; // idx = max frequency color of the remaining unmatched indices if(idx * 2 > 2 * n - x - y) { cout << "NO\n"; continue; } // create a vector of the indices // put same colors together so that all we have to do is rotate by floor((n - x) / 2) for(int i = 1; i <= idx; i++) { for(int col : hist[i]) { ve.insert(ve.end(), ind[col].begin(), ind[col].end()); } } int mismatch = n - y; auto makemismatch = [&](int i) { a[i] = hist[0][0]; mismatch--; mis[i] = true; }; for(int i = 0; i < n - x; i++) { a[ve[i]] = b[ve[(i + (n - x) / 2) % (n - x)]]; if(a[ve[i]] == b[ve[i]]) makemismatch(ve[i]); } for(int i = 0; mismatch > 0; i++) { if(!mis[ve[i]]) makemismatch(ve[i]); } cout << "YES\n"; for(int i = 1; i <= n; i++) { cout << a[i] << ' '; } cout << '\n'; } }
1381
D
The Majestic Brown Tree Snake
There is an undirected tree of $n$ vertices, connected by $n-1$ bidirectional edges. There is also a snake stuck inside of this tree. Its head is at vertex $a$ and its tail is at vertex $b$. The snake's body occupies all vertices on the unique simple path between $a$ and $b$. The snake wants to know if it can reverse itself  — that is, to move its head to where its tail started, and its tail to where its head started. Unfortunately, the snake's movements are restricted to the tree's structure. In an operation, the snake can move its head to an adjacent vertex not currently occupied by the snake. When it does this, the tail moves one vertex closer to the head, so that the length of the snake remains unchanged. Similarly, the snake can also move its tail to an adjacent vertex not currently occupied by the snake. When it does this, the head moves one unit closer to the tail. \begin{center} Let's denote a snake position by $(h,t)$, where $h$ is the index of the vertex with the snake's head, $t$ is the index of the vertex with the snake's tail. This snake can reverse itself with the movements $(4,7)\to (5,1)\to (4,2)\to (1, 3)\to (7,2)\to (8,1)\to (7,4)$. \end{center} Determine if it is possible to reverse the snake with some sequence of operations.
Let the length of the snake be $L$. Let's call a node $p$ a "pivot" if there exist three edge-disjoint paths of length $L$ extending from $p$. Clearly, if one of the snake's endpoints (head or tail) can reach a pivot, then the snake can rotate through these $3$ paths, reversing itself. I claim two things: (1) if a snake's endpoint can reach some pivot, then it can reach all pivots; (2) if a snake's endpoint cannot reach a pivot, the snake cannot reverse itself. Let's prove claim 1. Say there are two pivots $p_1$ and $p_2$, and a snake's endpoint can reach $p_1$. At most one edge from $p_1$ lies on the path between $p_1$ and $p_2$, so let's put the snake in one of the other branches of $p_1$. Then we can move the snake back through $p_1$ and onto the path to $p_2$. Let's prove claim 2. Consider the longest path in the tree. If it is impossible for the snake to enter this path, we may delete the path without changing the set of reachable snake positions, so we apply induction on the smaller tree. Otherwise, if the snake can enter the path, we can show that it can never leave. (And therefore, it is also initially on the path, because snake moves are reversible.) Assume for contradiction that the snake can leave the path. Then, on its last move leaving the path, it occupies a length-$L$ path branching off some node of the longest path. And because we chose the longest path, both directions along it from that node must have length at least $L$ as well. But then the snake's endpoint is at a pivot, giving us a contradiction. This completes the proof of claim 2. Solution 1 Now that we understand claims 1 and 2, how can we use them? First, we can detect whether any node is a pivot using DP to find the three longest branch paths from each node. If a pivot does not exist, we output NO. Otherwise, root the tree at the pivot $p$. Let's move the snake back and forth in a greedy fashion like this: move the head to the deepest leaf it can reach, then move the tail to the deepest leaf it can reach, and repeat. 
If at any point, one endpoint becomes an ancestor of the other, we can move the snake up to $p$. Otherwise, if no more progress can be made (progress is determined by the smallest reachable depth of an endpoint), then the snake cannot reverse itself. Clearly, the snake can only go back and forth $O(n)$ times before progress stops. We can simulate the back-and-forth motion by answering $k$-th ancestor queries with binary lifting. Complexity is $O(n\log n)$. Solution 2 It's also possible to achieve $O(n)$ with two pointers. Consider the path of nodes initially occupied by the snake, numbered from $1$ to $L$. Each node has a subtree of non-snake nodes. Let $a_i$ be the height of the non-snake subtree of node $i$. We can maintain two pointers $\ell$ and $r$, where $\ell$ is the maximum achievable index of the head, and $r$ is the minimum achievable index of the tail. We do a similar back-and-forth motion as in solution $1$. Send the head to the node that minimizes $r$, then send the tail to the node that maximizes $\ell$, and repeat. The snake can reverse itself if and only if a pivot exists and $\ell,r$ can swap places. Bonus: Can you prove that the number of times the snake must switch between moving the head and tail is $O(\sqrt n)$?
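The pivot condition from the claims is easy to cross-check by brute force on small trees. Below is a sketch of my own (roughly quadratic, not the editorial's linear method): for a candidate node $p$, it explores each neighbor's branch separately and counts how many branches admit a path of length at least $L$ starting at $p$.

```python
from collections import deque

def branch_depth(adj, p, u):
    """Max distance from p of any node reachable through neighbor u,
    without passing through p again (BFS restricted to that branch)."""
    dist = {p: 0, u: 1}
    dq = deque([u])
    best = 1
    while dq:
        v = dq.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                best = max(best, dist[w])
                dq.append(w)
    return best

def is_pivot(adj, p, L):
    # p is a pivot iff at least 3 branches contain a path of length >= L from p
    return sum(branch_depth(adj, p, u) >= L for u in adj[p]) >= 3

# toy tree: three paths of length 2 hanging off node 0
edges = [(0, 1), (1, 2), (0, 3), (3, 4), (0, 5), (5, 6)]
adj = {i: [] for i in range(7)}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

assert is_pivot(adj, 0, 2)       # three edge-disjoint length-2 paths
assert not is_pivot(adj, 0, 3)   # all branches are too short
assert not is_pivot(adj, 1, 1)   # node 1 has only two branches
```

The reference solution instead computes the three largest branch lengths per node with two DFS passes (re-rooting), which avoids the repeated BFS.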
[ "dfs and similar", "dp", "greedy", "trees", "two pointers" ]
3,000
#include <bits/stdc++.h>
using namespace std;

// This is the O(n) two-pointer solution from the editorial
const int N = 1e5 + 5;
int t, n, a, b, u, v, par[N], branch[N], depth[N];
vector<int> adj[N], Q[N];

// find largest branch lengths from each node, using 2 DFS's.
// first DFS is for root = a
// second DFS is for re-rooting
void findpivots(int x, int p) {
    Q[x].assign(3, 0);
    branch[x] = 0;
    for(int y : adj[x]) {
        if(y != p) {
            depth[y] = 1 + depth[x];
            findpivots(y, x);
            Q[x].push_back(1 + branch[y]);
            branch[x] = max(branch[x], 1 + branch[y]);
        }
    }
}

void findpivots2(int x, int p) {
    // move the three largest branch lengths to the front
    partial_sort(Q[x].begin(), Q[x].begin() + 3, Q[x].end(), greater<int>());
    for(int y : adj[x]) {
        if(y != p) {
            Q[y].push_back(1 + Q[x][branch[y] + 1 == Q[x][0]]);
            findpivots2(y, x);
        }
    }
}

// compute branch[x] = height of subtree of non-snake nodes
bool dfs(int x, int p) {
    par[x] = p;
    branch[x] = 0;
    bool hasb = (x == b);
    for(int y : adj[x]) {
        if(y != p) {
            if(dfs(y, x)) hasb = true;
            else branch[x] = max(branch[x], 1 + branch[y]);
        }
    }
    return hasb;
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(0);
    cin >> t;
    while(t--) {
        cin >> n >> a >> b;
        for(int i = 1; i <= n; i++) adj[i].clear();
        for(int i = 0; i < n - 1; i++) {
            cin >> u >> v;
            adj[u].push_back(v);
            adj[v].push_back(u);
        }
        depth[a] = 0;
        findpivots(a, -1);
        findpivots2(a, -1);
        bool pivot = false;
        for(int i = 1; i <= n; i++) pivot |= (Q[i][2] >= depth[b]);
        if(!pivot) {
            cout << "NO\n";
            continue;
        }
        dfs(a, -1);
        // trace out path from a to b, converting into an array problem
        vector<int> ve;
        ve.push_back(branch[b]);
        while(b != a) {
            b = par[b];
            ve.push_back(branch[b]);
        }
        // two-pointers on the array.
        int len = (int) ve.size(), l = 0, r = len - 1;
        int L = ve[r], R = r - ve[l];
        while(l < r) {
            if(l < L) {
                l++;
                R = min(R, len - 1 - (ve[l] - l));
            } else if(r > R) {
                r--;
                L = max(L, ve[r] - (len - 1 - r));
            } else break;
        }
        cout << (l == r ? "YES" : "NO") << '\n';
    }
}
1381
E
Origami
After being discouraged by 13 time-limit-exceeded verdicts on an ugly geometry problem, you decided to take a relaxing break for arts and crafts. There is a piece of paper in the shape of a simple polygon with $n$ vertices. The polygon may be non-convex, but we all know that proper origami paper has the property that \textbf{any horizontal line intersects the boundary of the polygon in at most two points.} If you fold the paper along the vertical line $x=f$, what will be the area of the resulting shape? When you fold, the part of the paper to the left of the line is symmetrically reflected on the right side. Your task is to answer $q$ independent queries for values $f_1,\ldots,f_q$.
First, let's imagine the problem in one dimension. And let's see how the length of the folded segment changes as we sweep the fold line from left to right. If the fold line is to the left of the paper, it's just the length of the segment. Then as the fold line enters the left side of the paper, we subtract the length of paper to the left of the fold line, since it gets folded onto the right half. Then as the fold line passes the midpoint of the segment, we should add back the length we passed after the midpoint, since the left half gets folded past the right end of the paper. Finally, after the line exits the paper, the answer stays constant again. Now let's advance to the two-dimensional setting. Imagine the process described above applied to all horizontal segments of the polygon simultaneously. We see that the answer is the integral of the answers for all the segments. Now, let's split the polygon into two halves by the midpoints of each horizontal segment. For a fold line, the answer is the total area, minus the area left of the sweep line belonging to the first polygon, plus the area left of the sweep line belonging to the second polygon. We can sort all line segments and queries, and answer all queries in a sweep line. To simplify the process, it helps to know the standard algorithm to compute the area of a polygon by considering a trapezoid for each line segment and combining their areas with inclusion-exclusion. Essentially, we have to sweep the integral of a piecewise linear function. Complexity is $O((n+q)\log(n+q))$.
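The one-dimensional picture above can be sanity-checked directly. Here is a small sketch of my own of the folded length of a single segment $[l, r]$ as a function of the fold position $f$: constant outside the paper, shrinking until the midpoint, then growing again.

```python
def folded_length(l, r, f):
    """Length of the union of [f, r] and the reflection of [l, f] across x = f."""
    if f <= l or f >= r:
        return r - l        # fold line outside the paper: shape is a translate/reflection
    mid = (l + r) / 2
    if f <= mid:
        return r - f        # reflected left part lands entirely inside [f, r]
    return f - l            # reflected left part sticks out past the right end

assert folded_length(0, 10, -1) == 10   # left of the paper
assert folded_length(0, 10, 3) == 7     # subtract the folded-over part
assert folded_length(0, 10, 5) == 5     # minimum at the midpoint
assert folded_length(0, 10, 8) == 8     # left half reaches past the right end
assert folded_length(0, 10, 12) == 10   # right of the paper
```

Integrating this piecewise-linear behavior over all horizontal segments is exactly what the sweep line in the full solution does.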
[ "geometry", "math", "sortings" ]
3,300
#include <bits/stdc++.h>
using namespace std;

// pro-tip: use complex<double> if you're really lazy
// and don't like typing a custom point struct
#define pt complex<double>
#define x real()
#define y imag()

const int N = 1e5 + 5;
int n, q;
double f[N], ans[N];
pt p[N];

// list of events (x coordinate, query index if applicable, A, B)
// where we should answer at this x if a query, or
// we should integrate the line Ax+B starting at x coordinate if not query
// to integrate on interval, we integrate its negation
// starting at the right endpoint of the interval.
vector<tuple<double, int, double, double>> ve;

void add_segment(const pt &p, const pt &q, int k) {
    // vertical line has no effect on the area.
    if(abs(q.x - p.x) < 1e-10) return;
    if(p.x > q.x) return add_segment(q, p, -k);
    double A = (q.y - p.y) / (q.x - p.x);
    double B = p.y - p.x * A;
    ve.emplace_back(p.x, 0, k * A, k * B);
    ve.emplace_back(q.x, 0, -k * A, -k * B);
}

// point on line segment AB with the same y coordinate as p
pt proj(pt a, pt b, pt p) {
    return {(b.x - a.x) * (p.y - a.y) / (b.y - a.y) + a.x, p.y};
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(0);
    cin >> n >> q;
    // j = index of point with minimum y coordinate
    // L, R = two pointers that we use to scan polygon vertically,
    // drawing the midpoint lines
    int j = 0, X, Y, L = 0, R = 0, idx;
    double area = 0, a = 0, b = 0, pos, A, B;
    for(int i = 0; i < n; i++) {
        cin >> X >> Y;
        p[i] = pt(X, Y);
        if(p[i].y < p[j].y) j = i;
    }
    for(int i = 1; i <= q; i++) {
        cin >> f[i];
        ve.emplace_back(f[i], i, -1, -1);
    }
    // compute total area of polygon
    for(int i = 0; i < n; i++) {
        int j = (i + 1) % n;
        area += (conj(p[i]) * p[j]).y;
        if(p[i].y > p[j].y) add_segment(p[i], p[j], 1);
        else add_segment(p[j], p[i], 1);
    }
    area = abs(0.5 * area);
    rotate(p, p + j, p + n);
    pt mid = p[0], mid2;
    for(int i = 0; i < n - 1; i++) {
        int L2 = L + 1;
        int R2 = (R + n - 1) % n;
        if(p[L2].y < p[R2].y) {
            mid2 = (p[L2] + proj(p[R], p[R2], p[L2])) * 0.5;
            L = L2;
        } else {
            mid2 = (p[R2] + proj(p[L], p[L2], p[R2])) * 0.5;
            R = R2;
        }
        add_segment(mid, mid2, 2);
        mid = mid2;
    }
    sort(ve.begin(), ve.end());
    for(auto &e : ve) {
        tie(pos, idx, A, B) = e;
        if(idx > 0) {
            ans[idx] = area + (0.5 * a * pos * pos + b * pos);
        } else {
            area -= (0.5 * A * pos * pos + B * pos);
            a += A, b += B;
        }
    }
    cout << fixed << setprecision(6);
    for(int i = 1; i <= q; i++) {
        cout << ans[i] << '\n';
    }
}
1382
A
Common Subsequence
You are given two arrays of integers $a_1,\ldots,a_n$ and $b_1,\ldots,b_m$. Your task is to find a \textbf{non-empty} array $c_1,\ldots,c_k$ that is a subsequence of $a_1,\ldots,a_n$, and also a subsequence of $b_1,\ldots,b_m$. If there are multiple answers, find one of the \textbf{smallest} possible length. If there are still multiple of the smallest possible length, find any. If there are no such arrays, you should report about it. A sequence $a$ is a subsequence of a sequence $b$ if $a$ can be obtained from $b$ by deletion of several (possibly, zero) elements. For example, $[3,1]$ is a subsequence of $[3,2,1]$ and $[4,3,1]$, but not a subsequence of $[1,3,3,7]$ and $[3,10,4]$.
If there is any common subsequence, then there is a common element of $a$ and $b$. And a common element is also a common subsequence of length $1$. Therefore, we need only find a common element of the two arrays, or say that they share no elements. Complexity is $O(nm)$ if we compare each pair of elements, $O((n+m)\log(n+m))$ if we sort the arrays and take their intersection, or use a set data structure, or $O(n+m)$ if we use a hash table.
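The single-element observation reduces the whole problem to a membership test. A minimal sketch of my own (I/O-free, not the full contest solution) using a hash set for the $O(n+m)$ variant:

```python
def common_element(a, b):
    """Return any element present in both arrays, or None if they are disjoint."""
    seen = set(a)          # O(n) to build
    for x in b:            # O(m) to scan
        if x in seen:
            return x
    return None

assert common_element([10, 8, 6, 4], [1, 2, 3, 4, 5]) == 4
assert common_element([3], [2]) is None
```

A found element is printed as a length-$1$ subsequence; `None` corresponds to the "NO" answer.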
[ "brute force" ]
800
#include <bits/stdc++.h>
using namespace std;

const int N = 1005;
int t, n, m, a[N], b;
bool vis[N];

// vis[x] = true if x appears in array a
// for each element b[i], we check if it is also in a
// don't forget to reset vis array before the next test case
int main() {
    ios::sync_with_stdio(false);
    cin.tie(0);
    cin >> t;
    while(t--) {
        cin >> n >> m;
        for(int i = 0; i < n; i++) {
            cin >> a[i];
            vis[a[i]] = true;
        }
        int e = -1;
        for(int j = 0; j < m; j++) {
            cin >> b;
            if(vis[b]) e = b;
        }
        for(int i = 0; i < n; i++) {
            vis[a[i]] = false;
        }
        if(e == -1) {
            cout << "NO\n";
        } else {
            cout << "YES\n1 " << e << '\n';
        }
    }
}
1382
B
Sequential Nim
There are $n$ piles of stones, where the $i$-th pile has $a_i$ stones. Two people play a game, where they take alternating turns removing stones. In a move, a player may remove a positive number of stones from the \textbf{first non-empty pile} (the pile with the minimal index, that has at least one stone). The first player who cannot make a move (because all piles are empty) loses the game. If both players play optimally, determine the winner of the game.
Suppose $a_1>1$. If removing the entire first pile is winning, player 1 will do that. Otherwise, player 1 can leave exactly one stone in the first pile, forcing player 2 to remove it and leaving player 1 in the winning position. If instead $a_1=1$, the move is forced: the player must remove the single stone of the first pile. So whichever player first faces a pile with more than one stone wins. That is, let $k$ be the maximum number such that $a_1=\cdots=a_k=1$. If $k$ is even, the first player wins; otherwise, the second player wins. The only exception is when all piles have exactly $1$ stone ($k = n$): then every move is forced, and the first player wins exactly when $k$ is odd. Complexity is $O(n)$.
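The case analysis above fits in a few lines. A sketch of my own of the winner function (names are mine, not from the reference solution):

```python
def winner(piles):
    """'First' or 'Second', assuming optimal play."""
    k = 0                              # number of leading piles of size 1
    while k < len(piles) and piles[k] == 1:
        k += 1
    if k == len(piles):                # every move is forced
        return "First" if k % 2 == 1 else "Second"
    return "First" if k % 2 == 0 else "Second"

assert winner([3, 1, 2]) == "First"    # a_1 > 1: first player takes control
assert winner([1, 5]) == "Second"      # one forced move hands control over
assert winner([1, 1, 1]) == "First"    # all moves forced, odd count
assert winner([1, 1]) == "Second"
```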
[ "dp", "games" ]
1,100
#include <bits/stdc++.h>
using namespace std;

const int N = 1e5 + 5;
int t, n, a[N];

int main() {
    ios::sync_with_stdio(false);
    cin.tie(0);
    cin >> t;
    while(t--) {
        cin >> n;
        for(int i = 0; i < n; i++) {
            cin >> a[i];
        }
        // count number of 1's in the prefix
        // parity of k determines which player makes the first non-forced move
        // or if k = n, all moves are forced, and the parity is reversed
        int k = 0;
        while(k < n && a[k] == 1) {
            k++;
        }
        cout << ((k == n) ^ (k % 2) ? "Second" : "First") << '\n';
    }
}
1383
A
String Transformation 1
\textbf{Note that the only difference between String Transformation 1 and String Transformation 2 is in the move Koa does. In this version the letter $y$ Koa selects must be strictly greater alphabetically than $x$ (read statement for better understanding). You can make hacks in these problems independently.} Koa the Koala has two strings $A$ and $B$ of the same length $n$ ($|A|=|B|=n$) consisting of the first $20$ lowercase English alphabet letters (ie. from a to t). In one move Koa: - selects some subset of positions $p_1, p_2, \ldots, p_k$ ($k \ge 1; 1 \le p_i \le n; p_i \neq p_j$ if $i \neq j$) of $A$ such that $A_{p_1} = A_{p_2} = \ldots = A_{p_k} = x$ (ie. all letters on this positions are equal to some letter $x$). - selects a letter $y$ (from the first $20$ lowercase letters in English alphabet) such that $y>x$ (ie. letter $y$ is \textbf{strictly greater} alphabetically than $x$). - sets each letter in positions $p_1, p_2, \ldots, p_k$ to letter $y$. More formally: for each $i$ ($1 \le i \le k$) Koa sets $A_{p_i} = y$.\textbf{Note that you can only modify letters in string $A$}. Koa wants to know the smallest number of moves she has to do to make strings equal to each other ($A = B$) or to determine that there is no way to make them equal. Help her!
First of all, if there exists some $i$ such that $A_i > B_i$, there is no solution. Otherwise, create a graph where every character is a node, and put a directed edge from node $u$ to node $v$ if character $u$ must be transformed into character $v$ (i.e. from $A_i$ to $B_i$ for all $i$). We must select a minimum-length list of operations such that, for every edge from node $u$ to node $v$, there exists a subsequence of the operations in the list that transforms $u$ into $v$. Each weakly connected component $C$ can be solved independently, and the answer for each component is $|C| - 1$. So the total answer is $|ALP| - k$, where $k$ is the number of weakly connected components in the graph. Proof: each weakly connected component $C$ requires at least $|C| - 1$ operations (because its nodes are connected). Since there are no cycles in the graph (every edge goes from a smaller letter to a larger one), a topological order exists: find one, and select each pair of consecutive nodes in this order as the list of operations. Time complexity: $O(|A| + |B| + |ALP|)$ per test case, where $|ALP| = 20$ denotes the size of the alphabet.
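The component count is all that matters, so the answer is short to sketch. This illustration of my own uses a small union-find over the $20$ letters instead of the DFS in the reference solution:

```python
def min_moves(a, b, alp=20):
    """Return alp - (number of weakly connected components), or -1 if impossible."""
    parent = list(range(alp))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for ca, cb in zip(a, b):
        if ca > cb:
            return -1                       # a letter can never decrease
        if ca == cb:
            continue
        u, v = ord(ca) - ord('a'), ord(cb) - ord('a')
        parent[find(u)] = find(v)

    components = len({find(x) for x in range(alp)})
    return alp - components

assert min_moves("aab", "bcc") == 2   # {a, b, c} merge into one component
assert min_moves("ba", "ab") == -1    # 'b' > 'a' at position 0
assert min_moves("abc", "abc") == 0   # nothing to do
```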
[ "dsu", "graphs", "greedy", "sortings", "strings", "trees", "two pointers" ]
1,700
#include <bits/stdc++.h>
using namespace std;

const int Alp = 20;

int main() {
    ios_base::sync_with_stdio(0), cin.tie(0);
    int test;
    cin >> test;
    while (test--) {
        int n;
        string a, b;
        cin >> n >> a >> b;
        bool bad = false;
        vector<vector<int>> adj(Alp);
        for (int i = 0; i < n; ++i)
            if (a[i] != b[i]) {
                if (a[i] > b[i]) {
                    bad = true;
                    cout << "-1\n";
                    break;
                }
                adj[a[i]-'a'].push_back(b[i]-'a');
                adj[b[i]-'a'].push_back(a[i]-'a');
            }
        if (bad) continue;
        vector<bool> mark(Alp);
        function<void(int)> dfs = [&](int u) {
            mark[u] = true;
            for (auto v : adj[u])
                if (!mark[v]) dfs(v);
        };
        int ans = Alp;
        for (int i = 0; i < Alp; ++i)
            if (!mark[i]) dfs(i), --ans;
        cout << ans << "\n";
    }
    return 0;
}
1383
B
GameGame
Koa the Koala and her best friend want to play a game. The game starts with an array $a$ of length $n$ consisting of non-negative integers. Koa and her best friend move in turns and each have initially a score equal to $0$. Koa starts. Let's describe a move in the game: - During his move, a player chooses any element of the array and removes it from this array, xor-ing it with the current score of the player.More formally: if the current score of the player is $x$ and the chosen element is $y$, his new score will be $x \oplus y$. Here $\oplus$ denotes bitwise XOR operation. Note that after a move element $y$ is removed from $a$. - The game ends when the array is empty. At the end of the game the winner is the player with the maximum score. If both players have the same score then it's a draw. If both players play optimally find out whether Koa will win, lose or draw the game.
Let $x$ be the number of ones and $y$ the number of zeros in the most significant bit of the numbers. If $x$ is even, then whatever decisions the players take, both end with the same value in that bit (the parity of each player's count of this bit is the same), so go to the next bit; if no bit remains, the game ends in a draw. If $x$ is odd, one of the players ends with $0$ in this bit and the other with $1$, and the player with $1$ in this bit wins the game because of the well-known inequality $2^k > \sum\limits_{i=0}^{k-1} 2^i$ for $k \ge 1$. So the game is equivalent to playing on an array of $x$ ones and $y$ zeros. Lemma: The second player wins iff $x \bmod 4 = 3$ and $y \bmod 2 = 0$; otherwise the first player wins. Proof: We know that $x \bmod 2 = 1$, so $x \bmod 4$ is either $1$ or $3$. If $x \bmod 4 = 1$: the first player takes one $1$, after which the remaining number of ones is a multiple of $4$. The first player then always repeats the last move of the second player (and if $y \bmod 2 = 1$ and the second player takes the last $0$, both players simply take all the remaining ones). Both players end with the same, even number of the remaining ones, so the extra $1$ makes the first player's count odd and the first player wins. If $x \bmod 4 = 3$ and $y \bmod 2 = 0$: the second player can always repeat the last move of the first player, so the first player ends with an even number of ones and the second player wins. If $x \bmod 4 = 3$ and $y \bmod 2 = 1$: the first player takes one $0$, and the game becomes exactly the previous case with the roles of the players swapped, so the first player wins. Time complexity: $O(n)$ per test case
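The lemma packs into a short decision function. A sketch of my own that scans bits from the most significant down (fixed $30$ bits is an assumption for the illustration):

```python
def game(a):
    """Return 'WIN', 'LOSE' or 'DRAW' for the first player."""
    for bit in reversed(range(30)):
        x = sum(v >> bit & 1 for v in a)     # ones in this bit
        if x % 2 == 1:
            y = len(a) - x                   # zeros in this bit
            if x % 4 == 3 and y % 2 == 0:
                return "LOSE"
            return "WIN"
    return "DRAW"

assert game([2, 2]) == "DRAW"          # every bit has an even count of ones
assert game([1]) == "WIN"              # x = 1
assert game([1, 1, 1]) == "LOSE"       # x = 3, y = 0
assert game([1, 1, 1, 0]) == "WIN"     # x = 3, y = 1: take the 0 and swap roles
```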
[ "bitmasks", "constructive algorithms", "dp", "games", "greedy", "math" ]
1,900
import sys input = sys.stdin.readline d = { 1: 'WIN', 0: 'LOSE', -1: 'DRAW' } def main(): t = int(input()) for _ in range(t): n = int(input()) a = map(int, input().split()) f = [0] * 30 for x in a: for b in range(30): if x >> b & 1: f[b] += 1 ans = -1 for x in reversed(range(30)): if f[x] % 2 == 1: ans = 0 if f[x] % 4 == 3 and (n - f[x]) % 2 == 0 else 1 break print(d[ans]) main()
1383
C
String Transformation 2
\textbf{Note that the only difference between String Transformation 1 and String Transformation 2 is in the move Koa does. In this version the letter $y$ Koa selects can be any letter from the first $20$ lowercase letters of English alphabet (read statement for better understanding). You can make hacks in these problems independently.} Koa the Koala has two strings $A$ and $B$ of the same length $n$ ($|A|=|B|=n$) consisting of the first $20$ lowercase English alphabet letters (ie. from a to t). In one move Koa: - selects some subset of positions $p_1, p_2, \ldots, p_k$ ($k \ge 1; 1 \le p_i \le n; p_i \neq p_j$ if $i \neq j$) of $A$ such that $A_{p_1} = A_{p_2} = \ldots = A_{p_k} = x$ (ie. all letters on this positions are equal to some letter $x$). - selects \textbf{any} letter $y$ (from the first $20$ lowercase letters in English alphabet). - sets each letter in positions $p_1, p_2, \ldots, p_k$ to letter $y$. More formally: for each $i$ ($1 \le i \le k$) Koa sets $A_{p_i} = y$.\textbf{Note that you can only modify letters in string $A$}. Koa wants to know the smallest number of moves she has to do to make strings equal to each other ($A = B$) or to determine that there is no way to make them equal. Help her!
The only difference between this problem and the previous one is that the underlying graph might have cycles. Each weakly connected component can be solved independently, and the answer for a component is $2 \cdot n - |LDAG| - 1$, where $n$ is the number of nodes in the component and $|LDAG|$ is the size of its largest induced directed acyclic subgraph. The largest DAG can be computed using dynamic programming in $O(2^n \cdot n)$: for every node $u$, store a mask $reach[u]$ with all nodes it can reach directly. Then go through every possible mask from $1$ to $2^n - 1$ and check whether the subgraph induced by this set of nodes is acyclic. It is acyclic iff there exists at least one node $u$ (the last node in some topological order) such that the set without this node is acyclic and $u$ doesn't reach any other node in the set. Proof by @eatmore: Lemma 1: Suppose that there is a weakly connected component with $n$ vertices, and there is a solution with $k$ edges. Then the size of the largest DAG is at least $2 \cdot n - 1 - k$. Proof: Let's keep track of the current weakly connected components, and in each component, also keep track of some DAG. Initially each vertex is in a separate component and each DAG consists of a single vertex, so there are $n$ components and the total size of all DAGs is $n$. Processing an edge $(u, v)$: if $u$ and $v$ are in the same component, and $v$ is in the DAG, remove it; the number of components is unchanged and the total size of all DAGs decreases by at most $1$. If $u$ and $v$ are in different components, join the components and concatenate the DAGs ($u$'s component's DAG comes before $v$'s); the number of components decreases by $1$ and the total size of all DAGs is unchanged. At the end the number of components becomes $1$, so $n-1$ edges are used to decrease the number of components. The remaining $k-n+1$ edges could each decrease the size of the DAGs, so the final size is at least $n-(k-n+1) = 2 \cdot n-1-k$. From lemma 1 we know that $k \ge 2 \cdot n - 1 - |LDAG|$, so $k$ is minimized when the size of the final DAG is maximized. Time complexity: $O(|A| + |B| + 2^n \cdot n)$ per test case
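The acyclicity DP over masks is short in isolation. Here is a standalone sketch of my own (function and variable names are mine) of the same recurrence used in the reference solution: a set is extendable by $u$ when $u$ has no outgoing edge into the set, i.e. $u$ can be the last node in a topological order.

```python
def largest_dag(n, reach):
    """reach[u] = bitmask of nodes that u points to directly.
    Return the size of the largest node subset whose induced subgraph
    is acyclic, via the O(2^n * n) DP from the editorial."""
    dp = [False] * (1 << n)
    dp[0] = True
    best = 0
    for mask in range(1 << n):
        if not dp[mask]:
            continue
        best = max(best, bin(mask).count("1"))
        for u in range(n):
            # u can come last in a topological order of mask | {u}
            # iff u has no outgoing edge into mask
            if not mask >> u & 1 and reach[u] & mask == 0:
                dp[mask | 1 << u] = True
    return best

# 3-cycle 0 -> 1 -> 2 -> 0 plus an isolated node 3
reach = [0b0010, 0b0100, 0b0001, 0b0000]
assert largest_dag(4, reach) == 3   # drop one node of the cycle
```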
[ "bitmasks", "dp", "graphs", "trees" ]
3,100
#include <bits/stdc++.h>
using namespace std;

const int Alp = 20;

int main() {
    ios_base::sync_with_stdio(0), cin.tie(0);
    int test;
    cin >> test;
    while (test--) {
        int len;
        string a, b;
        cin >> len >> a >> b;
        vector<int> adj(Alp);
        vector<vector<int>> G(Alp);
        for (int i = 0; i < len; ++i)
            if (a[i] != b[i]) {
                adj[a[i]-'a'] |= 1 << (b[i]-'a');
                G[a[i]-'a'].push_back(b[i]-'a');
                G[b[i]-'a'].push_back(a[i]-'a');
            }
        vector<bool> mark(Alp);
        function<void(int)> dfs = [&](int u) {
            mark[u] = true;
            for (auto v : G[u])
                if (!mark[v]) dfs(v);
        };
        int comp = 0;
        for (int i = 0; i < Alp; ++i)
            if (!mark[i]) dfs(i), ++comp;
        int ans = 0;
        vector<bool> dp(1<<Alp);
        dp[0] = true;
        for (int mask = 0; mask < 1<<Alp; ++mask)
            if (dp[mask]) {
                ans = max(ans, __builtin_popcount(mask));
                for (int u = 0; u < Alp; ++u)
                    if ((~mask >> u & 1) && (adj[u] & mask) == 0)
                        dp[mask | 1 << u] = true;
            }
        cout << 2*Alp - comp - ans << "\n";
    }
    return 0;
}
1383
D
Rearrange
Koa the Koala has a matrix $A$ of $n$ rows and $m$ columns. Elements of this matrix are distinct integers from $1$ to $n \cdot m$ (each number from $1$ to $n \cdot m$ appears exactly once in the matrix). For any matrix $M$ of $n$ rows and $m$ columns let's define the following: - The $i$-th row of $M$ is defined as $R_i(M) = [ M_{i1}, M_{i2}, \ldots, M_{im} ]$ for all $i$ ($1 \le i \le n$). - The $j$-th column of $M$ is defined as $C_j(M) = [ M_{1j}, M_{2j}, \ldots, M_{nj} ]$ for all $j$ ($1 \le j \le m$). Koa defines $S(A) = (X, Y)$ as the spectrum of $A$, where $X$ is the set of the maximum values in rows of $A$ and $Y$ is the set of the maximum values in columns of $A$. More formally: - $X = \{ \max(R_1(A)), \max(R_2(A)), \ldots, \max(R_n(A)) \}$ - $Y = \{ \max(C_1(A)), \max(C_2(A)), \ldots, \max(C_m(A)) \}$ Koa asks you to find some matrix $A'$ of $n$ rows and $m$ columns, such that each number from $1$ to $n \cdot m$ appears exactly once in the matrix, and the following conditions hold: - $S(A') = S(A)$ - $R_i(A')$ is bitonic for all $i$ ($1 \le i \le n$) - $C_j(A')$ is bitonic for all $j$ ($1 \le j \le m$) An array $t$ ($t_1, t_2, \ldots, t_k$) is called bitonic if it first increases and then decreases.More formally: $t$ is bitonic if there exists some position $p$ ($1 \le p \le k$) such that: $t_1 < t_2 < \ldots < t_p > t_{p+1} > \ldots > t_k$. Help Koa to find such matrix or to determine that it doesn't exist.
Let $A$ be a matrix of size $n \cdot m$ formed by a permutation of the elements from $1$ to $n \cdot m$. Find the maximum element of each row and column (i.e. the spectrum). Now we are going to build the answer, adding numbers one by one in decreasing order. We start with an empty two-dimensional matrix (both dimensions have length $0$), and at the end of each iteration the following invariants are maintained on the matrix: all elements processed so far are inside the matrix; each row and column is bitonic; the horizontal/vertical spectrum of this matrix is a subset of the expected horizontal/vertical spectrum. In addition, we keep a queue with all positions in the matrix that do not contain any element yet. At the end of each iteration the following invariants are maintained on the queue: let $A$ be a position in the queue and $B$ a position that contains a value belonging to the horizontal spectrum of the matrix such that $A$ and $B$ are in the same row; then all positions between $A$ and $B$ either already hold an element or occur in the queue before $A$. Symmetrically, the same holds for the vertical spectrum and columns. In the end, the invariants on the matrix guarantee that all elements are placed, each row and column forms a bitonic sequence as required, and the spectra are equal to the expected spectra. Let's prove that each invariant on the matrix is kept. Clearly, on each step a different element is placed in an empty position of the matrix; we should only show that popping a position from the queue never fails because the queue is empty. Say the current element is $t$, so the matrix is filled with elements larger than $t$, occupying $x$ rows and $y$ columns. In the original matrix there were at least $n-x$ rows and at least $m-y$ columns with maximums less than $t$, so there can be at most $x \cdot y$ elements greater than or equal to $t$; but if the queue were empty there would be $x \cdot y + 1$ already, a contradiction. On each row and column, the first element added is the maximum, and then elements are added in each direction starting from it toward each edge; since elements are processed from largest to smallest, each row and column is bitonic. The first element added in each row and column is the maximum, and we only place it there if it is part of the expected spectrum. Time complexity: $O(n \cdot m)$
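Both output conditions in the statement are mechanical to verify. A small checker of my own (not part of the editorial) for the bitonic requirement on each row and column:

```python
def is_bitonic(t):
    """True if t strictly increases and then strictly decreases
    (either part may be empty)."""
    p = 0
    while p + 1 < len(t) and t[p] < t[p + 1]:
        p += 1                      # climb to the peak
    while p + 1 < len(t) and t[p] > t[p + 1]:
        p += 1                      # descend from the peak
    return p == len(t) - 1          # reached the end without a second rise

assert is_bitonic([1, 3, 4, 2])
assert is_bitonic([5, 4, 1])        # peak at the first position
assert not is_bitonic([2, 1, 2])    # rises again after descending
```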
[ "brute force", "constructive algorithms", "graphs", "greedy", "sortings" ]
2,800
#include <bits/stdc++.h>
#define endl '\n'
using namespace std;

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    int n, m;
    cin >> n >> m;
    vector<vector<int>> mat(n, vector<int>(m));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < m; ++j)
            cin >> mat[i][j];
    vector<int> h(n * m + 1);
    vector<int> v(n * m + 1);
    for (int i = 0; i < n; ++i) {
        int a = 0;
        for (int j = 0; j < m; ++j) a = max(a, mat[i][j]);
        h[a] = 1;
    }
    for (int i = 0; i < m; ++i) {
        int a = 0;
        for (int j = 0; j < n; ++j) a = max(a, mat[j][i]);
        v[a] = 1;
    }
    vector<vector<int>> fin(n, vector<int>(m));
    queue<pair<int, int>> q;
    int x = -1, y = -1;
    for (int u = n * m; u >= 1; --u) {
        x += h[u];
        y += v[u];
        if (h[u] || v[u]) {
            fin[x][y] = u;
        } else {
            int qx, qy;
            tie(qx, qy) = q.front();
            q.pop();
            fin[qx][qy] = u;
        }
        if (h[u])
            for (int i = y - 1; i >= 0; --i) q.push({x, i});
        if (v[u])
            for (int i = x - 1; i >= 0; --i) q.push({i, y});
    }
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < m; ++j)
            cout << fin[i][j] << " \n"[j + 1 == m];
    return 0;
}
1383
E
Strange Operation
Koa the Koala has a binary string $s$ of length $n$. Koa can perform no more than $n-1$ (possibly zero) operations of the following form: In one operation Koa selects positions $i$ and $i+1$ for some $i$ with $1 \le i < |s|$ and sets $s_i$ to $max(s_i, s_{i+1})$. Then Koa deletes position $i+1$ from $s$ (after the removal, the remaining parts are concatenated). Note that after every operation the length of $s$ decreases by $1$. How many different binary strings can Koa obtain by doing no more than $n-1$ (possibly zero) operations modulo $10^9+7$ ($1000000007$)?
Firstly, the described operation can be seen as dividing $s$ into substrings and taking the bitwise OR of each one. For each possible resultant string $w$, let's think of the following way to obtain it from $s$. Suppose we already placed a $1$ in $w$ in previous steps (if not, this can be handled later) and we used the first $i$ characters of $s$ to get the first $j$ characters of $w$: if the $(j+1)$-th character of $w$ is $1$, find the next $1$ in $s$ (i.e. the first $i' > i$ such that $s_{i'} = 1$), merge everything between the last $1$ and $i'-1$, and take the new $1$; if the $(j+1)$-th character of $w$ is $0$, and there are $k$ zeros after the last $1$ in $w$, find the next block of $k+1$ zeros in $s$ (i.e. the first $i' > i$ such that $s_{i' - k'} = 0$ for each $0 \le k' \le k$). If $i'$ equals $i+1$, just append the new $0$ to $w$; otherwise merge everything between the last $1$ and $i'-k$ and take the new zeros from $i'-k+1$ to $i'$. We can prove that in this way every possible resultant string $w$ is generated in a unique way, and that this uses the minimum number of characters from $s$ to obtain $w$. So we can start thinking about dynamic programming based on this greedy. Let $dp(i)$ be the number of strings that we can obtain using the last $n-i$ characters of $s$; the transitions are the two cases described above, taking care of the strings that end with a certain number of $0$s. Therefore: if there is at least one $1$ in $s$, let $i$ be the first position such that $s_i = 1$; the answer equals $dp(i) \cdot i$ (because we start by assuming that some previous $1$ exists, and before this $1$ there are exactly $i$ possibilities: empty, one $0$, two $0$s, ..., $i - 1$ $0$s). Otherwise the answer is $|s|$ (because $s$ consists only of $0$s). Time complexity: $O(n)$
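The reinterpretation in the first sentence (partition $s$ into consecutive blocks and OR each block) is easy to cross-check by brute force on tiny strings. This exponential sketch of my own enumerates every partition:

```python
def reachable(s):
    """All strings obtainable by partitioning s into consecutive
    non-empty blocks and replacing each block by the OR of its bits."""
    n = len(s)
    out = set()

    def rec(i, acc):
        if i == n:
            out.add(acc)
            return
        for j in range(i + 1, n + 1):           # next block is s[i:j]
            block = '1' if '1' in s[i:j] else '0'
            rec(j, acc + block)

    rec(0, "")
    return out

assert len(reachable("000")) == 3               # "0", "00", "000": matches |s|
assert reachable("10") == {"1", "10"}
assert len(reachable("0")) == 1
```

Comparing `len(reachable(s))` against the $dp$ formula on random short strings is a convenient way to test an implementation.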
[ "combinatorics", "data structures", "dp" ]
2,800
#include <bits/stdc++.h> using namespace std; const int mod = 1000000007; int main() { ios::sync_with_stdio(false), cin.tie(0); string s; cin >> s; int n = s.length(); vector<int> dist(n); for (int i = 0; i < n; ++i) if (s[i] == '0') dist[i] = (i ? dist[i-1] : 0) + 1; vector<int> dp(n + 2), nxt(n+2, n); auto get = [&](int i) { return nxt[i] < n ? dp[nxt[i]] : 0; }; for (int i = n-1; i >= 0; --i) { dp[i] = ((dist[i] <= dist.back()) + get(0) + get(dist[i] + 1)) % mod; nxt[dist[i]] = i; } int ans = n; if (nxt[0] < n) ans = ((long long)get(0) * (nxt[0] + 1)) % mod; cout << ans << "\n"; return 0; }
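To complement the editorial above, here is a brute-force sketch (my own illustration, not the reference solution; exponential in $n$, for tiny strings only) that enumerates every way of splitting $s$ into consecutive blocks and replaces each block by its OR, which is exactly what a sequence of operations amounts to:

```python
def count_results(s: str) -> int:
    # Every sequence of operations amounts to splitting s into consecutive
    # blocks and replacing each block by the OR of its bits.
    n = len(s)
    results = set()
    for mask in range(1 << (n - 1)):     # all boundary patterns between positions
        w, block = [], s[0]
        for i in range(1, n):
            if mask >> (i - 1) & 1:      # cut before position i
                w.append('1' if '1' in block else '0')
                block = ''
            block += s[i]
        w.append('1' if '1' in block else '0')
        results.add(''.join(w))
    return len(results)
```

For an all-zero string the count is $|s|$, matching the special case in the editorial.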
1383
F
Special Edges
Koa the Koala has a \textbf{directed} graph $G$ with $n$ nodes and $m$ edges. Each edge has a capacity associated with it. Exactly $k$ edges of the graph, numbered from $1$ to $k$, are special, such edges initially have a capacity equal to $0$. Koa asks you $q$ queries. In each query she gives you $k$ integers $w_1, w_2, \ldots, w_k$. This means that capacity of the $i$-th special edge becomes $w_i$ (and other capacities remain the same). Koa wonders: what is the maximum flow that goes from node $1$ to node $n$ after each such query? Help her!
Finding the maximum flow from $1$ to $n$ is equivalent to finding the minimum cut from $1$ to $n$. Let's use the latter interpretation to solve the problem. Suppose there is a single special edge ($k = 1$). On each query there are two options: either this edge belongs to the minimum cut or it doesn't. If the edge doesn't belong to the minimum cut, the value of the cut won't change even if we increase the capacity of the edge arbitrarily (let's say to $\infty$). On the other hand, if the edge belongs to the cut, then the value of the cut will be equal to the capacity of the edge plus the value of the cut if the capacity of the edge were $0$. With this in mind we can compute each query in $O(1)$. Let $MC_0$ be the value of the minimum cut if the capacity of the special edge is $0$, and $MC_{\infty}$ be the value of the minimum cut if the capacity of the special edge is $\infty$. Then for each query $c_i$ the minimum cut will be equal to $\min(MC_{\infty}, MC_0 + c_i)$. We can generalize these ideas to multiple edges in the following way. For each subset of special edges, they either belong to the minimum cut or they don't. If we fix a subset $S$ and say that, among the special edges, these are the only ones that belong to the minimum cut, then the value of the cut will be equal to the sum of the capacities of these edges plus the minimum cut in the graph where each of these edges has capacity $0$ and every other special edge has capacity $\infty$. In a similar way as we did for the case $k=1$, we can pre-compute $2^k$ cuts, fixing $S$ as each possible set, in $O(2^k \cdot max\_flow(n, m))$, and then answer each query in $O(2^k)$. The overall complexity of this solution is $O(2^k \cdot max\_flow(n, m) + q \cdot 2^k)$. However the preprocessing can be done faster. If we have the maximum flow (and the residual network) for a mask, we can compute the maximum flow after adding one new edge in $O(w \cdot m)$, doing at most $w$ augmentation steps as in the Ford-Fulkerson algorithm.
To remove an edge we just store the residual network before augmenting, so that we can undo the last change. The time complexity of the preprocessing will be $O(max\_flow(n, m) + 2^k \cdot w \cdot m)$, and since we need to be able to undo the last operation, the space complexity will be $O(k \cdot m)$. It was possible to solve the problem without the last trick, but it was not guaranteed that slow flow implementations would work in $O(2^k \cdot max\_flow(n, m))$.
[ "flows", "graphs" ]
3,200
#include <bits/stdc++.h> using namespace std; template<typename C, typename R = C> struct dinic { typedef C flow_type; typedef R result_type; static_assert(std::is_arithmetic<flow_type>::value, "flow_type must be arithmetic"); static_assert(std::is_arithmetic<result_type>::value, "result_type must be arithmetic"); static const flow_type oo = std::numeric_limits<flow_type>::max(); struct edge { //int src; // not needed, can be deleted to save memory int dst; int rev; flow_type cap, flowp; edge(int src, int dst, int rev, flow_type cap, int flowp) : /*src(src),*/ dst(dst), rev(rev), cap(cap), flowp(flowp) {} }; dinic(int n) : adj(n), que(n), level(n), edge_pos(n), flow_id(0) {} int add_edge(int src, int dst, flow_type cap, flow_type rcap = 0) { adj[src].emplace_back(src, dst, (int) adj[dst].size(), cap, flow_id++); if (src == dst) adj[src].back().rev++; adj[dst].emplace_back(dst, src, (int) adj[src].size() - 1, rcap, flow_id++); return (int) adj[src].size() - 1 - (src == dst); } inline bool side_of_S(int u) { return level[u] == -1; } result_type max_flow(int source, int sink, vector<flow_type> &flow_e) { result_type flow = 0; while (true) { int front = 0, back = 0; std::fill(level.begin(), level.end(), -1); for (level[que[back++] = sink] = 0; front < back && level[source] == -1; ++front) { int u = que[front]; for (const edge &e : adj[u]) if (level[e.dst] == -1 && flow_e[rev(e).flowp] < rev(e).cap) level[que[back++] = e.dst] = 1 + level[u]; } if (level[source] == -1) break; std::fill(edge_pos.begin(), edge_pos.end(), 0); std::function<flow_type(int, flow_type)> find_path = [&](int from, flow_type res) { if (from == sink) return res; for (int &ept = edge_pos[from]; ept < (int) adj[from].size(); ++ept) { edge &e = adj[from][ept]; if (flow_e[e.flowp] == e.cap || level[e.dst] + 1 != level[from]) continue; flow_type push = find_path(e.dst, std::min(res, e.cap - flow_e[e.flowp])); if (push > 0) { flow_e[e.flowp] += push; flow_e[rev(e).flowp] -= push; if (flow_e[e.flowp] == 
e.cap) ++ept; return push; } } return static_cast<flow_type>(0); }; for (flow_type f; (f = find_path(source, oo)) > 0;) flow += f; } return flow; } result_type max_flow2(int source, int sink, vector<flow_type> &flow_e) { result_type flow = 0; std::function<flow_type(int, flow_type)> find_path = [&](int from, flow_type res) { level[from] = 1; if (from == sink) return res; for (int &ept = edge_pos[from]; ept < (int) adj[from].size(); ++ept) { edge &e = adj[from][ept]; if (level[e.dst] == 1 || flow_e[e.flowp] == e.cap) continue; flow_type push = find_path(e.dst, std::min(res, e.cap - flow_e[e.flowp])); if (push > 0) { flow_e[e.flowp] += push; flow_e[rev(e).flowp] -= push; if (flow_e[e.flowp] == e.cap) ++ept; return push; } } return static_cast<flow_type>(0); }; for (bool ok = true; ok; ) { int it = 0; std::fill(edge_pos.begin(), edge_pos.end(), 0); for (flow_type f; ; ++it) { std::fill(level.begin(), level.end(), -1); f = find_path(source, oo); if (f == 0) { if (it == 0) ok = false; break; } flow += f; } } return flow; } int flow_id; private: std::vector<std::vector<edge>> adj; std::vector<int> que; std::vector<int> level; std::vector<int> edge_pos; inline edge& rev(const edge &e) { return adj[e.dst][e.rev]; } }; const int inf = 25; struct edge { int u, v, w; }; mt19937 rng(chrono::high_resolution_clock::now().time_since_epoch().count()); void relabel(int n, vector<edge> &e, int k) { shuffle(e.begin() + k, e.end(), rng); vector<vector<pair<int, int>>> adj(n); for (auto &i : e) adj[i.u].push_back({ i.v, &i-&e[0] }); vector<edge> ne = e; vector<int> id(n, -1); id[0] = 0; id[n-1] = n-1; int sz = 1, esz = k; function<void(int)> dfs = [&](int u) { for (auto &v : adj[u]) { if (v.second >= k) { ne[esz++] = e[v.second]; v.second = -1; } if (id[v.first] == -1) { id[v.first] = sz++; dfs(v.first); } } }; dfs(0); for (auto &u : adj) for (auto v : u) if (v.second >= k) ne[esz++] = e[v.second]; for (int i = 0; i < n; ++i) if (id[i] == -1) id[i] = sz++; e = ne; for (auto &i : e) { 
i.u = id[i.u]; i.v = id[i.v]; } } struct masks { int x, cost; vector<int> flow_e; bool operator<(const masks &o) const { return cost < o.cost; } }; typedef std::chrono::_V2::system_clock::time_point timepoint; timepoint get_time() { return std::chrono::high_resolution_clock::now(); } int get_elapsed(timepoint t) { return std::chrono::duration_cast<std::chrono::milliseconds>(get_time() - t).count(); } int main() { ios_base::sync_with_stdio(0), cin.tie(0); int n, m, k, q; cin >> n >> m >> k >> q; vector<edge> e(m); for (auto &i : e) cin >> i.u >> i.v >> i.w, --i.u, --i.v; vector<int> mask(1 << k, -1); relabel(n, e, k); dinic<int> d(n); for (int i = 0; i < m; ++i) d.add_edge(e[i].u, e[i].v, ((i < k) ? inf : e[i].w)); vector<int> flow_e(d.flow_id); for (int i = 0; i < k; ++i) flow_e[2*i] = inf; mask[0] = d.max_flow(0, n-1, flow_e); priority_queue<masks> pq; pq.push({ 0, 0, flow_e }); while (!pq.empty()) { int x = pq.top().x; flow_e = pq.top().flow_e; pq.pop(); for (int j = 0; j < k; ++j) if ((~x >> j & 1) && mask[x | 1 << j] == -1) { auto n_flow_e = flow_e; n_flow_e[2 * j] = 0; auto t = get_time(); mask[x | 1 << j] = mask[x] + d.max_flow2(0, n-1, n_flow_e); pq.push({ x | 1 << j, get_elapsed(t), n_flow_e }); } } vector<int> cut(1 << k), bit(1 << k); for (int i = 1; i < 1 << k; ++i) bit[i] = i & -i; const int U = (1 << k) - 1; while (q--) { for (int i = 0; i < k; ++i) cin >> cut[1 << i]; int ans = mask[U]; for (int i = 1; i < 1<<k; ++i) { cut[i] = cut[i ^ bit[i]] + cut[bit[i]]; ans = min(ans, cut[i] + mask[U ^ i]); } cout << ans << "\n"; } return 0; }
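The per-query step of the editorial above can be sketched as follows. `mask_cut` and `answer_query` are hypothetical names of mine; I index `mask_cut` so that a set bit means that special edge was given capacity $0$ during preprocessing (the reference code above uses the complementary convention). The subset-sum uses the same lowest-bit trick as the reference code:

```python
def answer_query(mask_cut, w):
    # mask_cut[S]: precomputed min cut where the special edges in S have
    # capacity 0 and every other special edge has capacity "infinity".
    # w: the query capacities of the k special edges.
    k = len(w)
    subset_sum = [0] * (1 << k)
    best = mask_cut[0]                  # no special edge crosses the cut
    for S in range(1, 1 << k):          # S = special edges crossing the cut
        low = S & -S                    # lowest set bit of S
        subset_sum[S] = subset_sum[S ^ low] + w[low.bit_length() - 1]
        best = min(best, subset_sum[S] + mask_cut[S])
    return best
```

For $k=1$ this reduces to the $\min(MC_{\infty}, MC_0 + c_i)$ formula from the editorial.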
1384
A
Common Prefixes
The length of the \textbf{longest common prefix} of two strings $s = s_1 s_2 \ldots s_n$ and $t = t_1 t_2 \ldots t_m$ is defined as the maximum integer $k$ ($0 \le k \le min(n,m)$) such that $s_1 s_2 \ldots s_k$ equals $t_1 t_2 \ldots t_k$. Koa the Koala initially has $n+1$ strings $s_1, s_2, \dots, s_{n+1}$. For each $i$ ($1 \le i \le n$) she calculated $a_i$ — the length of the \textbf{longest common prefix} of $s_i$ and $s_{i+1}$. Several days later Koa found these numbers, but she couldn't remember the strings. So Koa would like to find some strings $s_1, s_2, \dots, s_{n+1}$ which would have generated numbers $a_1, a_2, \dots, a_n$. Can you help her? If there are many answers print any. We can show that answer always exists for the given constraints.
The problem asks to find $n+1$ strings such that $LCP(s_i, s_{i + 1}) = a_i$ for all $i$ ($1 \le i \le n$). A way to solve this problem is the following: set $s_1 =$ "aaaa...aaaaaaa" (i.e. $200$ times 'a'). For each $i$ ($1 \le i \le n$) set $s_{i + 1} := s_i$ and then flip the $(a_i + 1)$-th character of $s_{i + 1}$ (i.e. if it was 'a' put 'b', otherwise 'a'). So for each $i$: $s_{i}$ and $s_{i + 1}$ will have exactly $a_i$ common characters in the prefix, and the $(a_i + 1)$-th character of $s_{i+1}$ is different from the $(a_i + 1)$-th character of $s_i$ (this character always exists since $0 \le a_i \le 50$ and each string has length exactly $200$). Therefore the LCP is $a_i$ as desired. Time complexity: $O(n)$ per testcase
[ "constructive algorithms", "greedy", "strings" ]
1,200
import sys input = sys.stdin.readline def main(): t = int(input()) for _ in range(t): n = int(input()) a = list(map(int, input().split())) mx = max(a) ans = [ 'a' * (mx + 1) ] * (n + 1) for i, x in enumerate(a): who = 'a' if ans[i][x] == 'b' else 'b' ans[i + 1] = ans[i][:x] + who + ans[i][x + 1:] print('\n'.join(ans)) main()
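A small self-contained sketch of the same construction (function names are mine), together with an LCP checker that verifies the claimed property:

```python
def build_strings(a, length=200):
    # flip the (a_i + 1)-th character each step (0-indexed: position a_i)
    s = ['a' * length]
    for x in a:
        prev = s[-1]
        flip = 'b' if prev[x] == 'a' else 'a'
        s.append(prev[:x] + flip + prev[x + 1:])
    return s

def lcp(u, v):
    # length of the longest common prefix of u and v
    k = 0
    while k < min(len(u), len(v)) and u[k] == v[k]:
        k += 1
    return k
```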
1384
B1
Koa and the Beach (Easy Version)
\textbf{The only difference between easy and hard versions is on constraints. In this version constraints are lower. You can make hacks only if all versions of the problem are solved.} Koa the Koala is at the beach! The beach consists (from left to right) of a shore, $n+1$ meters of sea and an island at $n+1$ meters from the shore. She measured the depth of the sea at $1, 2, \dots, n$ meters from the shore and saved them in array $d$. $d_i$ denotes the depth of the sea at $i$ meters from the shore for $1 \le i \le n$. Like any beach this one has tide, the intensity of the tide is measured by parameter $k$ and affects all depths \textbf{from the beginning at time $t=0$} in the following way: - For a total of $k$ seconds, each second, tide \textbf{increases} all depths by $1$. - Then, for a total of $k$ seconds, each second, tide \textbf{decreases} all depths by $1$. - This process repeats again and again (ie. depths increase for $k$ seconds then decrease for $k$ seconds and so on ...).Formally, let's define $0$-indexed array $p = [0, 1, 2, \ldots, k - 2, k - 1, k, k - 1, k - 2, \ldots, 2, 1]$ of length $2k$. At time $t$ ($0 \le t$) depth at $i$ meters from the shore equals $d_i + p[t \bmod 2k]$ ($t \bmod 2k$ denotes the remainder of the division of $t$ by $2k$). Note that the changes occur \textbf{instantaneously} after each second, see the notes for better understanding. At time $t=0$ Koa is standing at the shore and wants to get to the island. Suppose that at some time $t$ ($0 \le t$) she is at $x$ ($0 \le x \le n$) meters from the shore: - In one second Koa can swim $1$ meter further from the shore ($x$ changes to $x+1$) or not swim at all ($x$ stays the same), in both cases $t$ changes to $t+1$. - As Koa is a bad swimmer, the depth of the sea at the point where she is can't exceed $l$ at integer points of time (or she will drown). 
More formally, if Koa is at $x$ ($1 \le x \le n$) meters from the shore at the moment $t$ (for some integer $t\ge 0$), the depth of the sea at this point  — $d_x + p[t \bmod 2k]$  — can't exceed $l$. In other words, $d_x + p[t \bmod 2k] \le l$ must hold always. - Once Koa reaches the island at $n+1$ meters from the shore, she stops and can rest.Note that \textbf{while Koa swims tide doesn't have effect on her} (ie. she can't drown while swimming). Note that \textbf{Koa can choose to stay on the shore for as long as she needs} and \textbf{neither the shore or the island are affected by the tide} (they are solid ground and she won't drown there). Koa wants to know whether she can go from the shore to the island. Help her!
For this version you can just simulate each possible action of Koa. Let $(pos, tide, down)$ be a state where $pos$ is the current position of Koa (i.e. $0$ is the shore, $1$ to $n$ are the meters of sea and $n+1$ is the island), $tide$ is the current increment of the tide, and $down$ is a boolean that is true if the tide is decreasing and false otherwise. You can see each state as a node and each action (i.e. wait or swim) as an edge, so you can just do a DFS to see if the island is reachable from the shore. The number of nodes and edges is $O(n \cdot k)$. Time complexity: $O(n \cdot k)$ per testcase
[ "brute force", "dp", "greedy" ]
1,900
#include <bits/stdc++.h> using namespace std; int main() { ios_base::sync_with_stdio(0), cin.tie(0); int test; cin >> test; while (test--) { int n, k, l; cin >> n >> k >> l; vector<int> d(n+2, -k); for (int i = 1; i <= n; ++i) cin >> d[i]; set<tuple<int, int, bool>> mark; function<bool(int, int, bool)> go = [&](int pos, int tide, bool down) { if (pos > n) return true; if (mark.find({ pos, tide, down }) != mark.end()) return false; mark.insert({ pos, tide, down }); tide += down ? -1 : +1; if (tide == 0) down = false; if (tide == k) down = true; if (d[pos] + tide <= l && go(pos, tide, down)) return true; if (d[pos + 1] + tide <= l && go(pos + 1, tide, down)) return true; return false; }; if (go(0, 0, false)) cout << "Yes\n"; else cout << "No\n"; } return 0; }
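A compact Python port of the state-space DFS above (same state encoding as the C++ reference; the sentinel depth $-k$ makes shore and island always safe):

```python
import sys

def can_cross(n, k, l, d):
    # States (pos, tide, down): pos 0 is the shore, n+1 the island.
    sys.setrecursionlimit(100000)
    depth = [-k] + list(d) + [-k]       # shore and island are always safe
    seen = set()

    def go(pos, tide, down):
        if pos > n:
            return True
        if (pos, tide, down) in seen:
            return False
        seen.add((pos, tide, down))
        tide += -1 if down else 1       # tide after one more second
        if tide == 0:
            down = False
        if tide == k:
            down = True
        for npos in (pos, pos + 1):     # wait, or swim one meter
            if depth[npos] + tide <= l and go(npos, tide, down):
                return True
        return False

    return go(0, 0, False)
```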
1384
B2
Koa and the Beach (Hard Version)
\textbf{The only difference between easy and hard versions is on constraints. In this version constraints are higher. You can make hacks only if all versions of the problem are solved.} Koa the Koala is at the beach! The beach consists (from left to right) of a shore, $n+1$ meters of sea and an island at $n+1$ meters from the shore. She measured the depth of the sea at $1, 2, \dots, n$ meters from the shore and saved them in array $d$. $d_i$ denotes the depth of the sea at $i$ meters from the shore for $1 \le i \le n$. Like any beach this one has tide, the intensity of the tide is measured by parameter $k$ and affects all depths \textbf{from the beginning at time $t=0$} in the following way: - For a total of $k$ seconds, each second, tide \textbf{increases} all depths by $1$. - Then, for a total of $k$ seconds, each second, tide \textbf{decreases} all depths by $1$. - This process repeats again and again (ie. depths increase for $k$ seconds then decrease for $k$ seconds and so on ...).Formally, let's define $0$-indexed array $p = [0, 1, 2, \ldots, k - 2, k - 1, k, k - 1, k - 2, \ldots, 2, 1]$ of length $2k$. At time $t$ ($0 \le t$) depth at $i$ meters from the shore equals $d_i + p[t \bmod 2k]$ ($t \bmod 2k$ denotes the remainder of the division of $t$ by $2k$). Note that the changes occur \textbf{instantaneously} after each second, see the notes for better understanding. At time $t=0$ Koa is standing at the shore and wants to get to the island. Suppose that at some time $t$ ($0 \le t$) she is at $x$ ($0 \le x \le n$) meters from the shore: - In one second Koa can swim $1$ meter further from the shore ($x$ changes to $x+1$) or not swim at all ($x$ stays the same), in both cases $t$ changes to $t+1$. - As Koa is a bad swimmer, the depth of the sea at the point where she is can't exceed $l$ at integer points of time (or she will drown). 
More formally, if Koa is at $x$ ($1 \le x \le n$) meters from the shore at the moment $t$ (for some integer $t\ge 0$), the depth of the sea at this point  — $d_x + p[t \bmod 2k]$  — can't exceed $l$. In other words, $d_x + p[t \bmod 2k] \le l$ must hold always. - Once Koa reaches the island at $n+1$ meters from the shore, she stops and can rest.Note that \textbf{while Koa swims tide doesn't have effect on her} (ie. she can't drown while swimming). Note that \textbf{Koa can choose to stay on the shore for as long as she needs} and \textbf{neither the shore or the island are affected by the tide} (they are solid ground and she won't drown there). Koa wants to know whether she can go from the shore to the island. Help her!
Let's define positions $i$ with $1 \le i \le n$ and $d_i + k \le l$ as safe positions; positions $0$ and $n + 1$ (i.e. the shore and the island respectively) are safe too. The remaining positions are unsafe. Koa can wait indefinitely at safe positions without drowning, so she can reach the island (i.e. position $n+1$) if and only if she can reach each safe position from the previous one. Suppose Koa is at some safe position $i$ and wants to reach the next safe position $j$ ($0 \le i < j \le n + 1$). A solution strategy for Koa is the following: If Koa is at an unsafe position $x$ at time $t_0$, she must swim to $x + 1$ as soon as she can, that is, at the first moment of time $t \ge t_0$ such that $d_{x + 1} + p[(t + 1) \bmod 2k] \le l$ (so that she doesn't drown). If Koa is at a safe position $x$ at time $t_0$, she must wait until some moment of time $t_1$ at which the tide is at exactly $k$ units, and after that follow the unsafe-position strategy until the next safe position. So a way to go from $i$ to $j$ is: apply point $2$ at $i$, and apply point $1$ to reach each position $p$ with $i < p \le j$. This works because: if there existed some position with $d_p$ greater than $l$ she would drown at any tide, so let's assume that all depths are less than or equal to $l$. Suppose Koa drowns at some position $p$. She can leave $i$ at some value of the tide because $d_{i+1} \le l$, and as long as the tide is decreasing, whether she chooses to wait or not she is safe. So she must have drowned while the tide was increasing. If she had left $i$ at any other tide (different from $k$): suppose she were still able to reach position $p$; then the tide would have increased further, so it would be higher there and she would drown too. This holds because the tide can never reach $k$ and start decreasing again between $i$ and $j$, since those positions are unsafe. Time complexity: $O(n)$ per test case
[ "constructive algorithms", "dp", "greedy", "implementation" ]
2,200
#include <bits/stdc++.h> using namespace std; int main() { ios_base::sync_with_stdio(0), cin.tie(0); int test; cin >> test; while (test--) { int n, k, l; cin >> n >> k >> l; vector<int> d(n+1), safe = { 0 }; for (int i = 1; i <= n; ++i) { cin >> d[i]; if (d[i] + k <= l) safe.push_back(i); } safe.push_back(n+1); bool ok = true; for (size_t i = 1; i < safe.size() && ok; ++i) { int tide = k; bool down = true; for (int j = safe[i-1] + 1; j < safe[i]; ++j) { tide += down ? -1 : +1; if (down) tide -= max(0, d[j] + tide - l); if (tide < 0 || d[j] + tide > l) { ok = false; break; } if (tide == 0) down = false; } } if (ok) cout << "Yes\n"; else cout << "No\n"; } return 0; }
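The greedy above can be ported to Python almost line for line; `tide -= max(0, ...)` models the extra waiting allowed while the tide is still falling:

```python
def can_cross(n, k, l, d):
    # d[0..n-1] are the measured depths (1-indexed in the statement)
    safe = [0] + [i for i in range(1, n + 1) if d[i - 1] + k <= l] + [n + 1]
    for a, b in zip(safe, safe[1:]):
        tide, down = k, True            # leave a safe cell when the tide peaks
        for j in range(a + 1, b):       # unsafe positions between a and b
            tide += -1 if down else 1
            if down:                    # may wait while the tide still falls
                tide -= max(0, d[j - 1] + tide - l)
            if tide < 0 or d[j - 1] + tide > l:
                return False
            if tide == 0:
                down = False
    return True
```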
1385
A
Three Pairwise Maximums
You are given three positive (i.e. strictly greater than zero) integers $x$, $y$ and $z$. Your task is to find positive integers $a$, $b$ and $c$ such that $x = \max(a, b)$, $y = \max(a, c)$ and $z = \max(b, c)$, or determine that it is impossible to find such $a$, $b$ and $c$. You have to answer $t$ independent test cases. Print required $a$, $b$ and $c$ in any (arbitrary) order.
Suppose $x \le y \le z$. If $y \ne z$ then the answer is -1, because $z$ is the overall maximum among all three integers $a$, $b$ and $c$ and it appears in two pairs (so it should appear at most twice among $x$, $y$ and $z$). Otherwise, the answer exists and it can be $x$, $x$ and $z$ (it is easy to see that this triple fits well).
[ "math" ]
800
#include <bits/stdc++.h> using namespace std; int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif int t; cin >> t; while (t--) { vector<int> a(3); for (auto &it : a) cin >> it; sort(a.begin(), a.end()); if (a[1] != a[2]) { cout << "NO" << endl; } else { cout << "YES" << endl << a[0] << " " << a[0] << " " << a[2] << endl; } } return 0; }
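The editorial's observation fits in a few lines; the checker below (my own addition) confirms that the produced triple satisfies the statement in some order:

```python
from itertools import permutations

def solve(x, y, z):
    a = sorted((x, y, z))
    if a[1] != a[2]:
        return None                     # the overall maximum must appear twice
    return (a[0], a[0], a[2])

def valid(x, y, z, triple):
    # the statement accepts a, b, c in any order
    return triple is not None and any(
        max(a, b) == x and max(a, c) == y and max(b, c) == z
        for a, b, c in permutations(triple))
```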
1385
B
Restore the Permutation by Merger
A permutation of length $n$ is a sequence of integers from $1$ to $n$ of length $n$ containing each number exactly once. For example, $[1]$, $[4, 3, 5, 1, 2]$, $[3, 2, 1]$ are permutations, and $[1, 1]$, $[0, 1]$, $[2, 2, 1, 4]$ are not. There was a permutation $p[1 \dots n]$. It was merged with itself. In other words, let's take two instances of $p$ and insert elements of the second $p$ into the first maintaining relative order of elements. The result is a sequence of the length $2n$. For example, if $p=[3, 1, 2]$ some possible results are: $[3, 1, 2, 3, 1, 2]$, $[3, 3, 1, 1, 2, 2]$, $[3, 1, 3, 1, 2, 2]$. The following sequences are not possible results of a merging: $[1, 3, 2, 1, 2, 3$], [$3, 1, 2, 3, 2, 1]$, $[3, 3, 1, 2, 2, 1]$. For example, if $p=[2, 1]$ the possible results are: $[2, 2, 1, 1]$, $[2, 1, 2, 1]$. The following sequences are not possible results of a merging: $[1, 1, 2, 2$], [$2, 1, 1, 2]$, $[1, 2, 2, 1]$. Your task is to restore the permutation $p$ by the given resulting sequence $a$. It is guaranteed that the answer \textbf{exists and is unique}. You have to answer $t$ independent test cases.
The solution is pretty simple: it's obvious that the first element of $a$ is the first element of the permutation $p$. Let's take it into $p$ and remove it and its copy from $a$. Now we have the same problem of a smaller size and can solve it in the same way. It can be implemented as "go from left to right; if the current element isn't used yet, take it and mark it as used".
[ "greedy" ]
800
#include <bits/stdc++.h> using namespace std; int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif int t; cin >> t; while (t--) { int n; cin >> n; vector<int> a(2 * n); for (auto &it : a) cin >> it; vector<int> used(n); vector<int> p; for (int i = 0; i < 2 * n; ++i) { if (!used[a[i] - 1]) { used[a[i] - 1] = true; p.push_back(a[i]); } } for (auto it : p) cout << it << " "; cout << endl; } return 0; }
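The "first occurrence wins" scan above is just a few lines in Python (a sketch of the same idea):

```python
def restore(a):
    # the first occurrence of each value, in order, is the permutation
    seen = set()
    p = []
    for x in a:
        if x not in seen:
            seen.add(x)
            p.append(x)
    return p
```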
1385
C
Make It Good
You are given an array $a$ consisting of $n$ integers. You have to find the length of the smallest (shortest) prefix of elements you need to erase from $a$ to make it a good array. Recall that the prefix of the array $a=[a_1, a_2, \dots, a_n]$ is a subarray consisting several first elements: the prefix of the array $a$ of length $k$ is the array $[a_1, a_2, \dots, a_k]$ ($0 \le k \le n$). The array $b$ of length $m$ is called good, if you can obtain a \textbf{non-decreasing} array $c$ ($c_1 \le c_2 \le \dots \le c_{m}$) from it, repeating the following operation $m$ times (initially, $c$ is empty): - select either the first or the last element of $b$, remove it from $b$, and append it to the end of the array $c$. For example, if we do $4$ operations: take $b_1$, then $b_{m}$, then $b_{m-1}$ and at last $b_2$, then $b$ becomes $[b_3, b_4, \dots, b_{m-3}]$ and $c =[b_1, b_{m}, b_{m-1}, b_2]$. Consider the following example: $b = [1, 2, 3, 4, 4, 2, 1]$. This array is \textbf{good} because we can obtain \textbf{non-decreasing} array $c$ from it by the following sequence of operations: - take the first element of $b$, so $b = [2, 3, 4, 4, 2, 1]$, $c = [1]$; - take the last element of $b$, so $b = [2, 3, 4, 4, 2]$, $c = [1, 1]$; - take the last element of $b$, so $b = [2, 3, 4, 4]$, $c = [1, 1, 2]$; - take the first element of $b$, so $b = [3, 4, 4]$, $c = [1, 1, 2, 2]$; - take the first element of $b$, so $b = [4, 4]$, $c = [1, 1, 2, 2, 3]$; - take the last element of $b$, so $b = [4]$, $c = [1, 1, 2, 2, 3, 4]$; - take the only element of $b$, so $b = []$, $c = [1, 1, 2, 2, 3, 4, 4]$ — $c$ is non-decreasing. Note that the array consisting of one element is good. Print the length of the shortest prefix of $a$ to delete (erase), to make $a$ to be a good array. Note that the required length can be $0$. You have to answer $t$ independent test cases.
Consider the maximum element $a_{mx}$ of the good array $a$ of length $k$. Then we can notice that the array $a$ looks like $[a_1 \le a_2 \le \dots \le a_{mx} \ge \dots \ge a_{k-1} \ge a_k]$. And it is pretty obvious that if the array doesn't have this structure, then it isn't good (you can check this yourself). So we need to find the longest such suffix. It's easily done with a pointer: initially, the pointer $pos$ is at the last element. Then, while $pos > 1$ and $a_{pos - 1} \ge a_{pos}$, decrease $pos$ by one. When done with the previous step, do the same, but while $pos > 1$ and $a_{pos - 1} \le a_{pos}$. The answer is $pos-1$.
[ "greedy" ]
1,200
#include <bits/stdc++.h> using namespace std; int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif int t; cin >> t; while (t--) { int n; cin >> n; vector<int> a(n); for (auto &it : a) cin >> it; int pos = n - 1; while (pos > 0 && a[pos - 1] >= a[pos]) --pos; while (pos > 0 && a[pos - 1] <= a[pos]) --pos; cout << pos << endl; } return 0; }
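The two-pointer scan above in Python (0-indexed, so the returned `pos` is directly the prefix length to erase):

```python
def shortest_prefix_to_erase(a):
    # scan the non-increasing tail, then the non-decreasing slope before it
    pos = len(a) - 1
    while pos > 0 and a[pos - 1] >= a[pos]:
        pos -= 1
    while pos > 0 and a[pos - 1] <= a[pos]:
        pos -= 1
    return pos
```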
1385
D
a-Good String
You are given a string $s[1 \dots n]$ consisting of lowercase Latin letters. It is guaranteed that $n = 2^k$ for some integer $k \ge 0$. The string $s[1 \dots n]$ is called $c$-good if \textbf{at least one} of the following three conditions is satisfied: - The length of $s$ is $1$, and it consists of the character $c$ (i.e. $s_1=c$); - The length of $s$ is greater than $1$, the first half of the string consists of only the character $c$ (i.e. $s_1=s_2=\dots=s_{\frac{n}{2}}=c$) and the second half of the string (i.e. the string $s_{\frac{n}{2} + 1}s_{\frac{n}{2} + 2} \dots s_n$) is a $(c+1)$-good string; - The length of $s$ is greater than $1$, the second half of the string consists of only the character $c$ (i.e. $s_{\frac{n}{2} + 1}=s_{\frac{n}{2} + 2}=\dots=s_n=c$) and the first half of the string (i.e. the string $s_1s_2 \dots s_{\frac{n}{2}}$) is a $(c+1)$-good string. For example: "aabc" is 'a'-good, "ffgheeee" is 'e'-good. In one move, you can choose one index $i$ from $1$ to $n$ and replace $s_i$ with any lowercase Latin letter (any character from 'a' to 'z'). Your task is to find the minimum number of moves required to obtain an 'a'-good string from $s$ (i.e. $c$-good string for $c=$ 'a'). It is guaranteed that the answer always exists. You have to answer $t$ independent test cases. Another example of an 'a'-good string is as follows. Consider the string $s = $"cdbbaaaa". It is an 'a'-good string, because: - the second half of the string ("aaaa") consists of only the character 'a'; - the first half of the string ("cdbb") is 'b'-good string, because: - the second half of the string ("bb") consists of only the character 'b'; - the first half of the string ("cd") is 'c'-good string, because: - the first half of the string ("c") consists of only the character 'c'; - the second half of the string ("d") is 'd'-good string.
Consider the problem in $0$-indexation. Define the function $calc(l, r, c)$ which finds the minimum number of changes to make the string $s[l \dots r)$ $c$-good string. Let $mid = \frac{l + r}{2}$. Then let $cnt_l = \frac{r - l}{2} - count(s[l \dots mid), c) + calc(mid, r, c + 1)$ and $cnt_r = \frac{r - l}{2} - count(s[mid \dots r), c) + calc(l, mid, c + 1)$, where $count(s, c)$ is the number of occurrences of the character $c$ in $s$. We can see that $cnt_l$ describes the second condition from the statement and $cnt_r$ describes the third one. So, $calc(l, r, c)$ returns $min(cnt_l, cnt_r)$ except one case. When $r - l = 1$, we need to return $1$ if $s_l \ne c$ and $0$ otherwise. This function works in $O(n \log n)$ (each element of $s$ belongs to exactly $\log{n}$ segments, like segment tree). You can get the answer if you run $calc(0, n,~ 'a')$.
[ "bitmasks", "brute force", "divide and conquer", "dp", "implementation" ]
1,500
#include <bits/stdc++.h> using namespace std; int calc(const string &s, char c) { if (s.size() == 1) { return s[0] != c; } int mid = s.size() / 2; int cntl = calc(string(s.begin(), s.begin() + mid), c + 1); cntl += s.size() / 2 - count(s.begin() + mid, s.end(), c); int cntr = calc(string(s.begin() + mid, s.end()), c + 1); cntr += s.size() / 2 - count(s.begin(), s.begin() + mid, c); return min(cntl, cntr); } int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif int t; cin >> t; while (t--) { int n; string s; cin >> n >> s; cout << calc(s, 'a') << endl; } return 0; }
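The recursion $calc$ described above translates directly to Python; slicing keeps the sketch short at the cost of extra copies (the C++ reference does the same):

```python
def calc(s, c):
    # minimum changes to make s a c-good string; len(s) is a power of two
    if len(s) == 1:
        return int(s != c)
    mid = len(s) // 2
    # make the first half all c, recurse on the second half with c+1
    left = sum(ch != c for ch in s[:mid]) + calc(s[mid:], chr(ord(c) + 1))
    # or make the second half all c, recurse on the first half with c+1
    right = sum(ch != c for ch in s[mid:]) + calc(s[:mid], chr(ord(c) + 1))
    return min(left, right)
```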
1385
E
Directing Edges
You are given a graph consisting of $n$ vertices and $m$ edges. It is not guaranteed that the given graph is connected. Some edges are already directed and you can't change their direction. Other edges are undirected and you have to choose some direction for all these edges. You have to direct undirected edges in such a way that the resulting graph is directed and acyclic (i.e. the graph with all edges directed and having no directed cycles). Note that you have to direct \textbf{all} undirected edges. You have to answer $t$ independent test cases.
Firstly, if the graph consisting of the initial vertices and only the directed edges contains at least one cycle, then the answer is "NO". Otherwise, the answer is always "YES"; let's construct it. Build a topological order of the graph without the undirected edges. Then check for each directed edge that it goes from left to right (in the topological order). If this doesn't hold, there is a cycle and the answer is "NO". Otherwise, direct each undirected edge from left to right in the topological order.
[ "constructive algorithms", "dfs and similar", "graphs" ]
2,000
#include <bits/stdc++.h> using namespace std; vector<int> ord; vector<int> used; vector<vector<int>> g; void dfs(int v) { used[v] = 1; for (auto to : g[v]) { if (!used[to]) dfs(to); } ord.push_back(v); } int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif int t; cin >> t; while (t--) { int n, m; cin >> n >> m; g = vector<vector<int>>(n); vector<pair<int, int>> edges; for (int i = 0; i < m; ++i) { int t, x, y; cin >> t >> x >> y; --x, --y; if (t == 1) { g[x].push_back(y); } edges.push_back({x, y}); } ord.clear(); used = vector<int>(n); for (int i = 0; i < n; ++i) { if (!used[i]) dfs(i); } vector<int> pos(n); reverse(ord.begin(), ord.end()); for (int i = 0; i < n; ++i) { pos[ord[i]] = i; } bool bad = false; for (int i = 0; i < n; ++i) { for (auto j : g[i]) { if (pos[i] > pos[j]) bad = true; } } if (bad) { cout << "NO" << endl; } else { cout << "YES" << endl; for (auto [x, y] : edges) { if (pos[x] < pos[y]) { cout << x + 1 << " " << y + 1 << endl; } else { cout << y + 1 << " " << x + 1 << endl; } } } } return 0; }
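A sketch of the same idea using Kahn's algorithm instead of the DFS in the reference code; when the directed part is acyclic, any topological order already respects every fixed edge, so only the undirected edges ever get flipped:

```python
from collections import deque

def direct_edges(n, edges):
    # edges: (t, x, y) with t=1 meaning a fixed edge x->y, t=0 undirected
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for t, x, y in edges:
        if t == 1:
            adj[x].append(y)
            indeg[y] += 1
    # Kahn's algorithm on the directed part only
    q = deque(v for v in range(n) if indeg[v] == 0)
    order = []
    while q:
        v = q.popleft()
        order.append(v)
        for to in adj[v]:
            indeg[to] -= 1
            if indeg[to] == 0:
                q.append(to)
    if len(order) < n:
        return None                     # the directed part has a cycle
    pos = [0] * n
    for i, v in enumerate(order):
        pos[v] = i
    # orient every edge from left to right in the topological order
    return [(x, y) if pos[x] < pos[y] else (y, x) for _, x, y in edges]
```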
1385
F
Removing Leaves
You are given a tree (connected graph without cycles) consisting of $n$ vertices. The tree is unrooted — it is just a connected undirected graph without cycles. In one move, you can choose exactly $k$ leaves (leaf is such a vertex that is connected to only one another vertex) connected \textbf{to the same vertex} and remove them with edges incident to them. I.e. you choose such leaves $u_1, u_2, \dots, u_k$ that there are edges $(u_1, v)$, $(u_2, v)$, $\dots$, $(u_k, v)$ and remove these leaves and these edges. Your task is to find the \textbf{maximum} number of moves you can perform if you remove leaves optimally. You have to answer $t$ independent test cases.
This is mostly an implementation problem. We can notice that all leaves are indistinguishable for us. So if we have some vertex with at least $k$ leaves attached to it, we can choose it, remove these leaves from the tree and continue the algorithm. The rest is just implementation: let's maintain for each vertex $v$ the list $leaves_v$ of all leaves connected to it, and a set of vertices sorted by the size of $leaves_v$. So let's take any vertex which is connected with at least $k$ leaves (we can just take the vertex with the maximum value in the set) and remove any $k$ leaves attached to it. If it becomes a leaf itself after the current move (i.e. it has exactly one remaining neighbour), let's mark it as a leaf and append it to the list of the corresponding vertex (you also need to remove edges from the graph quickly, so you may need to maintain the graph as a list of sets). And don't forget about the case $k=1$ because it may be special for your solution, so you might have to handle it separately. Time complexity: $O(n \log{n})$.
[ "data structures", "greedy", "implementation", "trees" ]
2,300
#include <bits/stdc++.h>

using namespace std;

int n, k;
vector<set<int>> g;
vector<set<int>> leaves;

struct comp {
    bool operator() (int a, int b) const {
        if (leaves[a].size() == leaves[b].size()) return a < b;
        return leaves[a].size() > leaves[b].size();
    }
};

int main() {
#ifdef _DEBUG
    freopen("input.txt", "r", stdin);
//  freopen("output.txt", "w", stdout);
#endif
    int t;
    cin >> t;
    while (t--) {
        cin >> n >> k;
        g = leaves = vector<set<int>>(n);
        for (int i = 0; i < n - 1; ++i) {
            int x, y;
            cin >> x >> y;
            --x, --y;
            g[x].insert(y);
            g[y].insert(x);
        }
        for (int i = 0; i < n; ++i) {
            if (g[i].size() == 1) {
                leaves[*g[i].begin()].insert(i);
            }
        }
        set<int, comp> st;
        for (int i = 0; i < n; ++i) {
            st.insert(i);
        }
        int ans = 0;
        while (true) {
            int v = *st.begin();
            if (int(leaves[v].size()) < k) break;
            for (int i = 0; i < k; ++i) {
                int leaf = *leaves[v].begin();
                g[leaf].erase(v);
                g[v].erase(leaf);
                st.erase(v);
                st.erase(leaf);
                leaves[v].erase(leaf);
                if (leaves[leaf].count(v)) leaves[leaf].erase(v);
                if (g[v].size() == 1) {
                    int to = *g[v].begin();
                    st.erase(to);
                    leaves[to].insert(v);
                    st.insert(to);
                }
                st.insert(v);
                st.insert(leaf);
            }
            ans += 1;
        }
        cout << ans << endl;
    }
    return 0;
}
1385
G
Columns Swaps
You are given a table $a$ of size $2 \times n$ (i.e. two rows and $n$ columns) consisting of integers from $1$ to $n$. In one move, you can choose some \textbf{column} $j$ ($1 \le j \le n$) and swap values $a_{1, j}$ and $a_{2, j}$ in it. Each column can be chosen \textbf{no more than once}. Your task is to find the \textbf{minimum} number of moves required to obtain permutations of size $n$ in both first and second rows of the table or determine if it is impossible to do that. You have to answer $t$ independent test cases. Recall that the permutation of size $n$ is such an array of size $n$ that contains each integer from $1$ to $n$ exactly once (the order of elements doesn't matter).
Firstly, the answer is -1 if some number does not occur exactly twice. Otherwise, the answer exists (and we actually don't need to prove it because we can check it later). Let's find for each number $i$ from $1$ to $n$ the indices of the columns in which it appears, $c_1[i]$ and $c_2[i]$. Consider some number $i$. If $c_1[i] = c_2[i]$ then let's just skip it, we can't change anything by swapping values in this column. Otherwise, let $r_1[i]$ be the number of the row of the number $i$ in the column $c_1[i]$ and $r_2[i]$ the number of the row of the number $i$ in the column $c_2[i]$. If $r_1[i] = r_2[i]$ then exactly one of these two columns should be swapped. Similarly, if $r_1[i] \ne r_2[i]$ then we either swap both of them or swap neither. Let's build a graph consisting of $n$ vertices, where the vertex $v$ represents the state of the $v$-th column. If $r_1[i] = r_2[i]$ then let's add an edge of color $1$ between the vertices $c_1[i]$ and $c_2[i]$. Otherwise, let's add an edge of color $0$ between these vertices. So, we have a graph consisting of several connected components. Let's color it. If the edge $(v, to)$ has color $1$ then the color of the vertex $to$ should be different from the color of the vertex $v$. Similarly, if the edge $(v, to)$ has color $0$ then the color of the vertex $to$ should be the same as the color of the vertex $v$. This makes sense, because edges of color $1$ mean that exactly one of the columns connected by this edge should be swapped (and vice versa). So, after we have colored the graph, we can check that the conditions for each edge are satisfied. If not, the answer is -1 (but this case can't actually appear). Otherwise, we need to decide for each component independently what the colors $0$ and $1$ mean for it.
The color $0$ can mean that the columns having this color aren't swapped (and then the color $1$ means that the columns having this color are swapped), or vice versa. We greedily choose the option with the minimum number of swaps for each component and print the answer. Time complexity: $O(n)$.
[ "2-sat", "dfs and similar", "dsu", "graphs", "implementation" ]
2,300
#include <bits/stdc++.h>

using namespace std;

int cnt0, cnt1;
vector<int> col, comp;
vector<vector<pair<int, int>>> g;

void dfs(int v, int c, int cmp) {
    col[v] = c;
    if (col[v] == 0) ++cnt0;
    else ++cnt1;
    comp[v] = cmp;
    for (auto [to, change] : g[v]) {
        if (col[to] == -1) {
            dfs(to, c ^ change, cmp);
        }
    }
}

int main() {
#ifdef _DEBUG
    freopen("input.txt", "r", stdin);
//  freopen("output.txt", "w", stdout);
#endif
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<vector<int>> a(2, vector<int>(n));
        vector<vector<int>> pos(n);
        for (int i = 0; i < 2; ++i) {
            for (int j = 0; j < n; ++j) {
                cin >> a[i][j];
                --a[i][j];
                pos[a[i][j]].push_back(j);
            }
        }
        bool bad = false;
        g = vector<vector<pair<int, int>>>(n);
        for (int i = 0; i < n; ++i) {
            if (pos[i].size() != 2) {
                bad = true;
                break;
            }
            int c1 = pos[i][0], c2 = pos[i][1];
            if (c1 == c2) continue;
            int r1 = a[0][c1] != i, r2 = a[0][c2] != i;
            g[c1].push_back({c2, r1 == r2});
            g[c2].push_back({c1, r1 == r2});
        }
        col = comp = vector<int>(n, -1);
        int cnt = 0;
        vector<pair<int, int>> colcnt;
        int ans = 0;
        for (int i = 0; i < n; ++i) {
            if (col[i] == -1) {
                cnt0 = cnt1 = 0;
                dfs(i, 0, cnt);
                ++cnt;
                colcnt.push_back({cnt0, cnt1});
                ans += min(cnt0, cnt1);
            }
        }
        for (int i = 0; i < n; ++i) {
            for (auto [j, diff] : g[i]) {
                if ((col[i] ^ col[j]) != diff) {
                    bad = true;
                }
            }
        }
        if (bad) {
            cout << -1 << endl;
        } else {
            cout << ans << endl;
            for (int i = 0; i < n; ++i) {
                int changeZero = colcnt[comp[i]].first < colcnt[comp[i]].second;
                if (col[i] ^ changeZero) {
                    cout << i + 1 << " ";
                }
            }
            cout << endl;
        }
    }
    return 0;
}
1386
A
Colors
Linda likes to change her hair color from time to time, and would be pleased if her boyfriend Archie would notice the difference between the previous and the new color. Archie always comments on Linda's hair color if and only if he notices a difference — so Linda always knows whether Archie has spotted the difference or not. There is a new hair dye series in the market where all available colors are numbered by integers from $1$ to $N$ such that a smaller difference of the numerical values also means less visual difference. Linda assumes that for these series there should be some critical color difference $C$ ($1 \le C \le N$) for which Archie will notice color difference between the current color $\mathrm{color}_{\mathrm{new}}$ and the previous color $\mathrm{color}_{\mathrm{prev}}$ if $\left|\mathrm{color}_{\mathrm{new}} - \mathrm{color}_{\mathrm{prev}}\right| \ge C$ and will not if $\left|\mathrm{color}_{\mathrm{new}} - \mathrm{color}_{\mathrm{prev}}\right| < C$. Now she has bought $N$ sets of hair dye from the new series — one for each of the colors from $1$ to $N$, and is ready to set up an experiment. Linda will change her hair color on a regular basis and will observe Archie's reaction — whether he will notice the color change or not. Since for the proper dye each set should be used completely, each hair color can be obtained no more than once. Before the experiment, Linda was using a dye from a different series which is not compatible with the new one, so for the clearness of the experiment Archie's reaction to the first used color is meaningless. Her aim is to find the precise value of $C$ in a limited number of dyes. Write a program which finds the value of $C$ by experimenting with the given $N$ colors and observing Archie's reactions to color changes.
Subtask 1 ($N \leq 64$) We will use the colors in this order: $1$, $N$, $2$, $N-1$, $3$, $N-2$, $\ldots$; this way we will check each difference $N-1$, $N-2$, $N-3$, $N-4$, $\ldots$ and the answer is determined by the first difference that is not recognized by Archie. Complexity: $N$ queries. Subtask 2 ($N \leq 125$) We can first ask the colors $N/2$ and $1$. If Archie recognizes the difference, then $C \leq N/2$, and as in Subtask 1, we can ask the queries $N/2-1$, $2$, $N/2-2$, $3$, $N/2-3$, $4$, $\ldots$, until we find the first difference that Archie does not recognize. Otherwise we ask the queries $N$, $2$, $N-1$, $3$, $N-2$, $\ldots$ until we find the first difference that Archie recognizes. Complexity: $N/2+1$ queries. Subtask 3 ($N \leq 1000$) First use $\sqrt N$ values to find the value $k$ such that $C$ lies between $k \sqrt N$ and $(k+1) \sqrt N$. For example, if $N = 100$, then use the values $5$, $15$, $25$, $\ldots$, $95$ and use the Subtask 1 algorithm to find the value of $k$. Then again use the Subtask 1 algorithm to calculate the precise value of $C$. Complexity: $2 \sqrt N$ queries. Subtask 4 ($N \leq 10^9$) Let's assume that we have a correct strategy for all values of $N$ that do not exceed $k$. If $k$ is even ($k = 2 j$) we will use the strategy that was used for $j$ numbers and use only even (or odd) numbers. This way each jump in $j$ becomes twice as long and as a result (when the strategy for $j$ has finished) we will know that the answer is $1$ or $2$, $3$ or $4$, $5$ or $6$, and so on. We then know for some $x$ that Archie recognizes the difference $2 x$ and we need to understand whether he recognizes the difference $2 x - 1$. It can be proved that if the possible answers are $2x-1$ and $2x$, then the last difference that was checked was either $2x$ or $2x-2$, and in both cases we will be able to make a jump in the opposite direction with length $2x-1$.
If $k$ is odd ($k = 2 j + 1$) we use the strategy for $j$ colors and use the numbers $2$, $4$, $6$, $8$, $10$, and so on. When the strategy for $j$ is finished, we know that the answer is $1$ or $2$, $3$ or $4$, $5$ or $6$, $\ldots$, $2j-1$ or $2j$ or $2j+1$. Then in almost all cases we can calculate the answer with one additional query, but if the possible answer is one of $2j-1$, $2j$ or $2j+1$ then we need two additional queries. Complexity: $2 \log_2 N$ queries. For example, explicit algorithms can be written down for the base cases $N = 3$ and $N = 4$; using our construction, they yield an algorithm for $N = 8$. Subtask 5 ($N \leq 10^{18}$) Let's assume that we have a correct strategy for all values of $N$ that do not exceed $k$. We will restrict our strategy even more: consecutive jumps must be made in opposite directions. Suppose that $k$ is even ($k = 2 j$) and the first color used in the $j$ strategy is $f$. Then we make the first jump from $f$ to $f+j$ (or from $f+j$ to $f$). With this jump we will understand whether $C$ is bigger than $j$ (if the answer is negative) or smaller than or equal to $j$ (if the answer is positive). If the answer is smaller than or equal to $j$ then we use the strategy for $j$ on the numbers from $1$ to $j$ (we already have the color $f$). If the answer is bigger than $j$, then we extend all jumps in the $j$ strategy by $j$ (if we had a jump with length $p$, then now we will make a jump with length $p + j$). As we are always jumping back and forth, we will never jump out of the range from $1$ to $N$ and will return the answer in the range $j+1$ to $2j$. If $k$ is odd, we can use a similar strategy. Complexity: $\log_2 N + 1$ queries. Our construction again yields an explicit algorithm for $N = 8$.
[ "*special", "binary search", "constructive algorithms", "interactive" ]
2,700
null
1386
B
Mixture
Serge, the chef of the famous restaurant "Salt, Pepper & Garlic" is trying to obtain his first Michelin star. He has been informed that a secret expert plans to visit his restaurant this evening. Even though the expert's name hasn't been disclosed, Serge is certain he knows which dish from the menu will be ordered as well as what the taste preferences of the expert are. Namely, the expert requires an extremely precise proportion of salt, pepper and garlic powder in his dish. Serge keeps a set of bottles with mixtures of salt, pepper and garlic powder on a special shelf in the kitchen. For each bottle, he knows the exact amount of each of the ingredients in kilograms. Serge can combine any number of bottled mixtures (or just use one of them directly) to get a mixture of particular proportions needed for a certain dish. Luckily, the absolute amount of a mixture that needs to be added to a dish is so small that you can assume that the amounts in the bottles will always be sufficient. However, the numeric values describing the proportions may be quite large. Serge would like to know whether it is possible to obtain the expert's favourite mixture from the available bottles, and if so—what is the smallest possible number of bottles needed to achieve that. Furthermore, the set of bottles on the shelf may change over time as Serge receives new ones or lends his to other chefs. So he would like to answer this question after each such change. For example, assume that expert's favorite mixture is $1:1:1$ and there are three bottles of mixtures on the shelf: \begin{center} $$ \begin{array}{cccc} \hline \text{Mixture} & \text{Salt} & \text{Pepper} & \text{Garlic powder} \\ \hline 1 & 10 & 20 & 30 \\ 2 & 300 & 200 & 100 \\ 3 & 12 & 15 & 27 \\ \hline \end{array} $$ Amount of ingredient in the bottle, kg \end{center} To obtain the desired mixture it is enough to use an equivalent amount of mixtures from bottles 1 and 2. 
If bottle 2 is removed, then it is no longer possible to obtain it. Write a program that helps Serge to solve this task!
The crux of solving this problem is to think about it from a geometric perspective and discover a couple of properties, afterwards it's a matter of implementing efficient ways / data structures to process the queries accordingly. There are a couple of possible approaches, but it seems easiest to translate it to a 2D geometry problem. We first translate all mixtures - the target mixture and the bottles - to 2D points in the following way: given a mixture with proportions $(S, P, G)$ we transform it to a point $(x, y) = (S/(S+P+G), P/(S+P+G))$. Intuitively $x$ and $y$ are the relative amounts of salt and pepper, respectively, in the mixture. Now scaling the proportion values (i.e., multiplying $S$, $P$, $G$ with the same coefficient, which describes an identical mixture) keeps $(x, y)$ constant. Mixing two or more bottles ends up combining these 2D points (or vectors) with positive weights the sum of which is 1. Having that, we consider these lemmas (proofs are left as an exercise): Lemma 1. If a bottle point matches the target point, the target mixture can be obtained using only that one bottle. Lemma 2. If the line segment defined by two different bottle points contains the target point, the target mixture can be obtained using those two bottles. Lemma 3. If the triangle defined by three different bottle points contains the target point, the target mixture can be obtained using those three bottles. Lemma 4. If the target point is not contained by any triangle defined by three different bottles, the target mixture cannot be obtained. Based on these properties we can define the following general algorithm: Maintain the set of points. At each step, check the state to find the answer: 2a. If there is a point matching the target point $\Rightarrow$ 1. 2b. Otherwise, if there is a pair of points whose line segment contains the target point $\Rightarrow$ 2. 2c. Otherwise, if there is a triplet of points whose triangle contains the target point $\Rightarrow$ 3. 2d. 
Otherwise $\Rightarrow$ 0. Subtask 1 ($N \leq 50$) At each step we can simply check each point / pair of points / triplet of points to see if we have case (2a), (2b), or (2c). This takes $O(N)$ / $O(N^2)$ / $O(N^3)$ time for each query, so the total time complexity to process all queries is $O(N^4)$. Complexity: $O(N^4)$ Subtask 2 ($N \leq 500$) Full search on all points / pairs of points is still feasible here. We need to speed up the case (2c): checking the triangles. Here the key observation is that instead of considering all triangles individually we can check whether the target point is inside the convex hull defined by all bottle points; namely, if the target point is inside the convex hull there exists at least one triplet of bottle points whose triangle contains the target point, and if it is outside the hull no such triplet exists. It's possible to build the convex hull and check if the target point is inside it in time $O(N \log N)$, but for this subtask a sub-optimal approach up to $O(N^2)$ is also good enough. Complexity: $O(N^3)$ Subtask 3 ($N \leq 5000$) Full search on all single points is still feasible. To check triplets we do the same convex hull approach as in the previous subtask, but we have to use the optimal $O(N \log N)$ method this time. For pairs of points we need something better. If we fix a point as one end of the line segment, to have the target point on the segment we know that the other point has to be exactly opposite from the first point relative to the target point (direction-wise, the distance doesn't matter).
In other words, after fixing the first point we know exactly at what angle the second point should be (with respect to the target point). So if we have a data structure in which we can store points and test for existence by angle (e.g., using the tangent value), we can load all current points into it and then go through each bottle point and quickly check whether an opposite point currently exists. If there is never an opposite point, it means we don't have any line segment containing the target point, and vice versa. It can be done in $O(N \log N)$ for each query. Complexity: $O(N^2 \log N)$ Subtask 4 ($N \leq 10^5$) For the previous subtasks we were answering each query completely independently. For the full solution we aim for $O(\log N)$ time per query, so we need a way to maintain the state of points through time so that we can update the state (add / remove bottles) and check the current answer quickly for each query. Let's look at all cases (single point, pair, triplet) separately: a. For single point checks we can keep the points in a structure that lets us add/remove/find in $O(\log N)$ time. We can also just maintain a counter for matching bottles that we increase/decrease whenever we add/remove a point that matches the target point - then at any step the answer is 1 if the counter is greater than zero. b. For solving the line segment case we go back to the previous idea. If we have all bottle points stored in an appropriate data structure, we need $O(N \log N)$ time per query to check whether there exist opposite points ($N$ points, $O(\log N)$ check if an opposite point exists), which is too slow. However, we can maintain a counter for the total number of pairs of points that are opposite to each other (with respect to the target point) within the set of all current points, and update it after each query. Then, similarly to the single point case, the answer is 2 if this counter is greater than zero.
To achieve this we need to maintain a data structure that stores all angles of points (e.g., tangent value) and checks (and allows updating) the number of elements with a certain value currently stored. This can be done in $O(\log N)$ time for each operation. c. For the triplet case we can use the same convex hull idea, but we need a dynamic version that we can update query-to-query (add/remove points) and test whether the target point is inside it. This can be done with an amortized time of $O(\log N)$ per query. However, we don't really need to construct the exact convex hull itself, we just need a way to tell whether the (single fixed) target point is inside of it. If we order all points around the target point by angle, then it's enough to check whether all angles between consecutive points are less than 180 degrees. If that's not the case (i.e., some two consecutive points are more than 180 degrees apart) then the target point is outside the hull (and the answer is 0), otherwise inside (answer 3). We need to maintain the points ordered in such a way by adding/removing points for each query, and be able to check for angles bigger than 180 degrees. There are various ways to do this technically, and it can be done in $O(\log N)$ time per query. Complexity: $O(N \log N)$. Final notes In the end, for the final solution all cases can be handled using the same data structure that stores the points/angles, so the solution becomes relatively concise. This problem requires divisions. You can simply use floating point numbers, but there is a risk of making the wrong discrete decisions due to floating point imprecision. The correct way here is to work with rational numbers, i.e., store and operate with values as numerators and denominators ($p/q$). However, you still have to be careful not to cause overflows. Note that subtasks have varying constraints on the proportion values, which allows more freedom in operations without causing overflows.
The full problem has a constraint of $10^6$, and it is possible to implement a solution that only multiplies these values once, so we only use 2nd-order values (3rd-order values are also tolerated, but more typical careless approaches that yield 4th- or 8th-order values are penalized in later subtasks).
[ "*special", "data structures", "geometry", "math", "sortings" ]
2,900
null
1386
C
Joker
Joker returns to Gotham City to execute another evil plan. In Gotham City, there are $N$ street junctions (numbered from $1$ to $N$) and $M$ streets (numbered from $1$ to $M$). Each street connects two distinct junctions, and two junctions are connected by at most one street. For his evil plan, Joker needs to use an odd number of streets that together form a cycle. That is, for a junction $S$ and an \textbf{even} positive integer $k$, there is a sequence of junctions $S, s_1, \ldots, s_k, S$ such that there are streets connecting (a) $S$ and $s_1$, (b) $s_k$ and $S$, and (c) $s_{i-1}$ and $s_i$ for each $i = 2, \ldots, k$. However, the police are controlling the streets of Gotham City. On each day $i$, they monitor a different subset of all streets with consecutive numbers $j$: $l_i \leq j \leq r_i$. These monitored streets cannot be a part of Joker's plan, of course. Unfortunately for the police, Joker has spies within the Gotham City Police Department; they tell him which streets are monitored on which day. Now Joker wants to find out, for some given number of days, whether he can execute his evil plan. On such a day there must be a cycle of streets, consisting of an odd number of streets which are not monitored on that day.
In this task, you are given a graph with $N$ nodes and $M$ edges. Furthermore, you are required to answer $Q$ queries. In every query, all the edges from the interval $[l_i,r_i]$ are temporarily removed and you should check whether the graph contains an odd cycle or not. Thanks to the German (Subtasks 1-5) and Polish (Subtask 6) BOI teams for the solutions. Subtask 1 ($N, M, Q \leq 200$) For every query do a DFS from every node and check if an odd cycle is formed. Complexity: $O(QNM)$. Subtask 2 ($N, M, Q \leq 2000$) The graph is bipartite if and only if it contains no odd cycle. Therefore we can color the nodes with two colors (with DFS or BFS) and check whether two adjacent nodes share the same color. Complexity: $O(QM)$. Subtask 3 ($l_i=1$ for all queries) We can sort the queries by their right endpoints ($r_i$) in decreasing order and answer them offline, inserting the edges into a DSU structure in decreasing order of their indices. Complexity: $O(Q \log Q +M \alpha(N))$. Subtask 4 ($l_i \leq 200$ for all queries) We can sort the queries by their left endpoint ($l_i$) and apply our solution from Subtask 3 to all queries with the same $l_i$. Complexity: $O(Q \log Q + 200 M \alpha(N))$. Subtask 5 ($Q \leq 2000$) We use the "Mo's algorithm" technique. Split the range $[1, M]$ of edge indices into blocks of size $B$ and sort the queries by $l_i/B$, or by $r_i$ if their left endpoints are in the same block. We can now keep two pointers to hold the current range of removed edges. While we answer all queries in the current block, the right pointer moves at most $M$ steps. The left pointer moves at most $QB$ steps in total. Since the left pointer may move to the left and the right inside the current block, we need to modify our DSU to allow rollbacks. If we choose $B = M / \sqrt Q$ we get the following runtime: Complexity: $O(Q \log Q + M \sqrt Q \log N)$.
Subtask 6 (no further constraints) For any $1 \leq l \leq M$, let $\mathrm{last}[l]$ be the first index $r$ such that the answer for the query $[l, r]$ is negative (or $M+1$ if no such index exists). That is, the graph excluding the edges $[l,r]$ is bipartite, but the graph excluding the edges $[l,r-1]$ is not. We can prove that if $l_1 < l_2$, then $\mathrm{last}[l_1] \leq \mathrm{last}[l_2]$. We will exploit this fact in order to compute the array $\mathrm{last}$ using a divide & conquer algorithm. We write a recursive function $\mathrm{rec}$, which takes two intervals: $[l_1, l_2], [r_1, r_2]$ ($1 \leq l_1 \leq l_2 \leq M$, $1 \leq r_1 \leq r_2 \leq M+1$), possibly intersecting, but $l_1 \leq r_1$ and $l_2 \leq r_2$. This function will compute $\mathrm{last}[l]$ for each $l \in [l_1, l_2]$, assuming that for these values of $l$, we have $\mathrm{last}[l] \in [r_1, r_2]$. We will initially call $\mathrm{rec}([1, M], [1, M+1])$. We assume that when $\mathrm{rec}([l_1, l_2], [r_1, r_2])$ is called, then our DSU contains all edges with indices to the left of $l_1$ and to the right of $r_2$. For instance, when $M=9$ and $\mathrm{rec}([2, 5], [3, 7])$ is called, then edges with indices $1$, $8$, and $9$ should be in the DSU. We also assume that the graph in the DSU is bipartite. We take $l_{\mathrm{mid}} := (l_1 + l_2) / 2$ and compute $\mathrm{last}[l_{\mathrm{mid}}]$; this can be done by adding all edges $[l_1, l_{\mathrm{mid}}-1]$ to the DSU, and then trying to add all edges with indices $r_2, r_2-1, r_2-2, \ldots$, until we try to add an edge breaking the bipartiteness. The index $r_{\mathrm{mid}}$ of such an edge is exactly $\mathrm{last}[l_{\mathrm{mid}}]$. We then rollback all edges added so far in our recursive call. Now that we know that $\mathrm{last}[l_{\mathrm{mid}}] = r_{\mathrm{mid}}$, we will run two recursive calls: For each $l \in [l_1, l_{\mathrm{mid}}-1]$, we know that $\mathrm{last}[l] \in [r_1, r_{\mathrm{mid}}]$.
We run $\mathrm{rec}([l_1, l_{\mathrm{mid}}-1], [r_1, r_{\mathrm{mid}}])$, remembering to add all necessary edges to the right of $r_{\mathrm{mid}}$. After the recursive call, we roll them back. For each $l \in [l_{\mathrm{mid}}+1, l_2]$, we know that $\mathrm{last}[l] \in [r_{\mathrm{mid}}, r_2]$. We run $\mathrm{rec}([l_{\mathrm{mid}}+1, l_2], [\max(l_{\mathrm{mid}}+1, r_{\mathrm{mid}}), r_2])$, now adding all necessary edges to the left of $l_{\mathrm{mid}}+1$. We roll them back after the recursive call. We can see that each recursive call uses at most $O((l_2-l_1) + (r_2-r_1))$ edge additions and rollbacks in the DSU, each taking $O(\log N)$ time in the worst case. Also, on each level of recursion, the total length of all intervals is bounded by $2M$. Hence, all intervals in all recursive calls have total length bounded by $O(M \log M)$. Hence, the total time of the preprocessing is $O(M \log M \log N)$. With our preprocessing, each query $[l, r]$ reduces to simply verifying $r < \mathrm{last}[l]$, which can be done in constant time. Complexity: $O(M \log M \log N + Q)$.
[ "*special", "bitmasks", "data structures", "divide and conquer", "dsu" ]
2,800
null
1387
A
Graph
You are given an undirected graph where each edge has one of two colors: black or red. Your task is to assign a real number to each node so that: - for each black edge the sum of values at its endpoints is $1$; - for each red edge the sum of values at its endpoints is $2$; - the sum of the absolute values of all assigned numbers is the smallest possible. Otherwise, if it is not possible, report that there is no feasible assignment of the numbers.
Subtasks 1-4 These subtasks are essentially for solutions that have the correct idea (see Subtask 5) but a slower implementation. Subtask 5 According to the last example in the task statement it is clear that the graph may actually be a multigraph, i.e., may contain more than one edge between a pair of vertices, loops, or isolated vertices. Reading the input carefully allows us to get rid of these extraordinary situations: Loops define the values of their vertices (0.5 or 1 depending on the color of the edge). For multi-edges it should be checked that all edges are of the same color. If not, there is no solution. Each isolated vertex should be assigned the value 0. Since there may be several connected components, the task should be solved for each component separately. The minimum sums of absolute values found for the separate components sum up to the minimum sum for the whole graph. Let's investigate a solution for a single connected component. To each vertex we will assign a tuple $(a, b)$ which corresponds to the value in the form $ax + b$ where $a \in \{-1,0,1\}$ and $b$ is any real number. Any vertex may be already processed or not yet processed. The value $a=0$ means that for the particular vertex the exact value ($b$) is already known. If $a \neq 0$, it means that we still do not know the exact value for the particular vertex - it still contains a variable part. We start with an arbitrary vertex $v_0$ and assign $(1,0)$ to it. This means that the vertex is processed while the exact value is still unknown (we can denote it by a variable $x$). Now let's process all other vertices by DFS, going from already processed vertices via edges. Let's see what the possible cases are if an already processed vertex $v$ is connected with a vertex $u$. We first calculate $(a_u', b_u')$, the values for $u$ that follow from the values for $v$ and the color of the edge that connects them.
Namely, $a_u' = -a_v$ and $b_u' = 1 - b_v$ if the edge is black or $b_u' = 2 - b_v$ if the edge is red. Then we check whether $u$ is already processed. If not, we assign $(a_u, b_u) = (a_u', b_u')$ and proceed with DFS. If, however, the vertex is already processed (we have found a cycle), there are several cases: If $a_u = a_u'$ and $b_u = b_u'$, there is no additional information and we proceed with DFS. If $a_u = a_u'$ and $b_u \neq b_u'$, there is a contradiction and we can stop the DFS. Otherwise we have $a_u \ne a_u'$, in which case it is possible to establish the value of $x$: $x_{\mathrm{val}}=(b_u'-b_u)/(a_u-a_u')$. Now that we know $x_{\mathrm{val}}$ we need to recalculate the values for all the already processed vertices by replacing $(a_w, b_w)$ with $(0, a_w x_{\mathrm{val}} + b_w)$. It can be seen that during the DFS we may need to replace the values for already processed vertices only once, and after that all further processed vertices will have exact values ($a_v=0$). If at the end we have $a_v=0$ for all vertices, this is the only valid solution. Otherwise ($|a_v|=1$) we need to find the value of $x$ giving the overall minimum for the sum of absolute values. Let's take all values of $b_v$ (for $a_v = -1$) and $-b_v$ (for $a_v = 1$) and find the median of them. It can be proved that this is the value of $x$ resulting in the overall minimum (proof left as an exercise). To find the median of $N$ numbers, you can use various algorithms: You can sort the numbers and take the middle one ($O(N \log N)$). You can use the quickselect algorithm (expected running time $O(N)$, $O(N^2)$ worst-case). You can use the median-of-medians algorithm ($O(N)$ worst-case). Complexity: $O(N + M)$ / $O(N \log N + M)$.
[ "*special", "binary search", "dfs and similar", "dp", "math", "ternary search" ]
2,100
null
1387
B1
Village (Minimum)
\textbf{This problem is split into two tasks. In this task, you are required to find the minimum possible answer. In the task Village (Maximum) you are required to find the maximum possible answer. Each task is worth $50$ points.} There are $N$ houses in a certain village. A single villager lives in each of the houses. The houses are connected by roads. Each road connects two houses and is exactly $1$ kilometer long. From each house it is possible to reach any other using one or several consecutive roads. In total there are $N-1$ roads in the village. One day all villagers decided to move to different houses — that is, after moving each house should again have a single villager living in it, but no villager should be living in the same house as before. We would like to know the smallest possible total length in kilometers of the shortest paths between the old and the new houses for all villagers. \begin{center} Example village with seven houses \end{center} For example, if there are seven houses connected by roads as shown on the figure, the smallest total length is $8$ km (this can be achieved by moving $1 \to 6$, $2 \to 4$, $3 \to 1$, $4 \to 2$, $5 \to 7$, $6 \to 3$, $7 \to 5$). Write a program that finds the smallest total length of the shortest paths in kilometers and an example assignment of the new houses to the villagers.
Subtask 1 We can try each permutation of villagers, calculate the total distance and find the answer. Complexity: $O(N!)$. Subtask 2 See the Subtask 3 solution; here we can store the tree in slower data structures and work with the tree more slowly. Complexity: $O(N^2)$. Subtask 3 Each villager needs to move to a new place, so we can process the tree greedily from the leaves: if the villager currently in a leaf has been there from the very beginning (i.e., still needs to move), swap it with the villager in the leaf's (only) neighbouring node, add $2$ to the answer and mark the leaf node as processed (or remove it from the tree, so that new nodes can become leaves). If in the end the last villager has not moved, it can swap with any of its neighbours. Complexity: $O(N)$.
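A minimal sketch of this greedy (Python, 0-indexed; instead of repeatedly deleting leaves it walks the vertices in reverse BFS order, so every vertex is handled only after all of its children, which has the same effect; the function name is illustrative):

```python
def village_min(n, edges):
    # edges: list of (a, b) pairs describing the tree, rooted at 0
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    parent = [-1] * n
    order = [0]
    seen = [False] * n
    seen[0] = True
    for v in order:               # BFS; `order` grows while we iterate
        for u in adj[v]:
            if not seen[u]:
                seen[u] = True
                parent[u] = v
                order.append(u)
    perm = list(range(n))         # perm[h] = villager currently in house h
    cost = 0
    for v in reversed(order[1:]):  # children are handled before parents
        if perm[v] == v:           # still the original villager: swap up
            p = parent[v]
            perm[v], perm[p] = perm[p], perm[v]
            cost += 2
    if perm[0] == 0:               # root never moved: swap with any child
        c = order[1]
        perm[0], perm[c] = perm[c], perm[0]
        cost += 2
    return cost, perm
```

On the path $0-1-2$ this yields cost $4$ with a valid derangement, matching the greedy's accounting of $2$ km per swap.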
[ "*special", "dp", "greedy", "trees" ]
2,100
null
1387
B2
Village (Maximum)
\textbf{This problem is split into two tasks. In this task, you are required to find the maximum possible answer. In the task Village (Minimum) you are required to find the minimum possible answer. Each task is worth $50$ points.} There are $N$ houses in a certain village. A single villager lives in each of the houses. The houses are connected by roads. Each road connects two houses and is exactly $1$ kilometer long. From each house it is possible to reach any other using one or several consecutive roads. In total there are $N-1$ roads in the village. One day all villagers decided to move to different houses — that is, after moving each house should again have a single villager living in it, but no villager should be living in the same house as before. We would like to know the largest possible total length in kilometers of the shortest paths between the old and the new houses for all villagers. \begin{center} Example village with seven houses \end{center} For example, if there are seven houses connected by roads as shown on the figure, the largest total length is $18$ km (this can be achieved by moving $1 \to 7$, $2 \to 3$, $3 \to 4$, $4 \to 1$, $5 \to 2$, $6 \to 5$, $7 \to 6$). Write a program that finds the largest total length of the shortest paths in kilometers and an example assignment of the new houses to the villagers.
Subtask 1 We can try each permutation of villagers, calculate the total distance and find the answer. Complexity: $O(N!)$. Subtask 2 See the Subtask 3 solution; here we can store the tree in slower data structures and work with the tree more slowly. Complexity: $O(N^2)$. Subtask 3 To begin with, let's think about each edge independently: how many villagers can go through it in each direction? If the edge is between nodes $a$ and $b$, then the maximal number of such villagers in each direction is $\min(\mathrm{subtreeSize}(a), \mathrm{subtreeSize}(b))$. Calculate this value for each edge and add the contributions up: this way we get the theoretically maximal achievable total distance. Now we find a node of the tree such that its biggest neighbouring subtree contains at most $N / 2$ nodes. Such a vertex is called a centroid of the tree and can be found in linear time. Now we just need to arrange all nodes from the neighbouring subtrees and the centroid node itself so that no node stays in the subtree where it was before. This is possible because no subtree is bigger than the sum of all the others, and it guarantees that the maximal possible number of villagers passes through each edge. Complexity: $O(N)$.
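The upper-bound part can be sketched as follows (Python; it only computes the bound $\sum_e 2\min(s_e, N - s_e)$, where $s_e$ is the subtree size below edge $e$, counting both directions of travel; the centroid-based construction of the permutation itself is omitted, and the function name is illustrative):

```python
def village_max_bound(n, edges):
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    # BFS from root 0 to get an order in which parents come first
    parent = [-1] * n
    order = [0]
    seen = [False] * n
    seen[0] = True
    for v in order:
        for u in adj[v]:
            if not seen[u]:
                seen[u] = True
                parent[u] = v
                order.append(u)
    size = [1] * n
    for v in reversed(order[1:]):   # accumulate subtree sizes bottom-up
        size[parent[v]] += size[v]
    # edge (v, parent[v]) is crossed at most min(size[v], n - size[v])
    # times in each of the two directions
    return sum(2 * min(size[v], n - size[v]) for v in order[1:])
```

For the path $0-1-2$ this gives $4$, and for a star with center $0$ and leaves $1,2,3$ it gives $6$; both values are attainable by explicit assignments.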
[ "*special", "dfs and similar", "trees" ]
2,500
null
1387
C
Viruses
The Committee for Research on Binary Viruses discovered a method of replication for a large family of viruses whose genetic codes are sequences of zeros and ones. Each virus originates from a single gene; for simplicity genes are denoted by integers from $0$ to $G - 1$. At each moment in time a virus is a sequence of genes. When mutation occurs, one of the genes from the sequence is replaced by a certain sequence of genes, according to the mutation table. The virus stops mutating when it consists only of genes $0$ and $1$. For instance, for the following mutation table: $$ 2 \to \langle 0\ 1 \rangle \\ 3 \to \langle 2\ 0\ 0\rangle\\ 3 \to \langle 1\ 3\rangle\\ 4 \to \langle 0\ 3\ 1\ 2\rangle\\ 5 \to \langle 2\ 1\rangle\\ 5 \to \langle 5\rangle $$ a virus that initially consisted of a single gene $4$, could have mutated as follows: $$ \langle 4 \rangle \to \langle \underline{0\ 3\ 1\ 2} \rangle \to \langle 0\ \underline{2\ 0\ 0}\ 1\ 2 \rangle \to \langle 0\ \underline{0\ 1}\ 0\ 0\ 1\ 2 \rangle \to \langle 0\ 0\ 1\ 0\ 0\ 1\ \underline{0\ 1} \rangle $$ or in another way: $$ \langle 4 \rangle \to \langle \underline{0\ 3\ 1\ 2} \rangle \to \langle 0\ \underline{1\ 3}\ 1\ 2 \rangle \to \langle 0\ 1\ 3\ 1\ \underline{0\ 1} \rangle \to \langle 0\ 1\ \underline{2\ 0\ 0}\ 1\ 0\ 1 \rangle \to \langle 0\ 1\ \underline{0\ 1}\ 0\ 0\ 1\ 0\ 1 \rangle $$ Viruses are detected by antibodies that identify the presence of specific continuous fragments of zeros and ones in the viruses' codes. For example, an antibody reacting to a fragment $\langle 0\ 0\ 1\ 0\ 0 \rangle$ will detect a virus $\langle 0\ 0\ 1\ 0\ 0\ 1\ 0\ 1 \rangle$, but it will not detect a virus $\langle 0\ 1\ 0\ 1\ 0\ 0\ 1\ 0\ 1 \rangle$. For each gene from $2$ to $G-1$, the scientists are wondering whether a given set of antibodies is enough to detect all viruses that can emerge through mutations from this gene. If not, they want to know the length of the shortest virus that cannot be detected. 
It may happen that the scientists don't have any antibodies. Then, of course, no virus can be detected, so the scientists are only interested in the length of the shortest possible virus that can emerge from the gene through mutations.
From reading the task you may think that there aren't enough constraints. This is not true: you actually have enough information. $k$: you are given that the sum of all values $k$ does not exceed $100$, so naturally $1 \leq k \leq 100$. $l$: you are given that the sum of all values $l$ does not exceed $50$, so naturally $1 \leq l \leq 50$. $N$: you are given that the sum of all values $k$ does not exceed $100$ and that $k \geq 1$ in every row of the mutation table. Thus, there are at most $100$ rows, meaning $G - 2 \leq N \leq 100$. Since $G > 2$, $0 < N \leq 100$. $G$: you are given that every integer from $2$ to $G - 1$ appears in the table as $a$ at least once. This means that $N \geq G - 2$ (which you are also conveniently given). Hence $2 < G \leq N + 2$, or, remembering the constraints on $N$, $2 < G \leq 102$. $M$: you are given that the sum of all values $l$ does not exceed $50$ and that $l \geq 1$ for every antibody. Thus, there are at most $50$ antibodies, meaning $0 \leq M \leq 50$. Note that this means a test with $G = 102$ is a valid test, and it may fail some solutions. We were nice though, and only put it in the first subtask, so if you're failing that one, this may be why. In all subtasks we'll use the same approach, which is similar to a combination of dynamic programming and Dijkstra's algorithm. Imagine we are computing $dp_i$, the minimal length of a virus that can be obtained from gene $i$ and is not detected by any antibody. Then, similarly to Dijkstra, we can take the smallest unprocessed value $dp_i$ and process it, that is, for every transition $a \to \langle b_1\ b_2\ \ldots\ b_k \rangle$ where $b_j = i$ for some $j$, we can update the value of $dp_a$ using that transition. Since every transition is non-empty, once we have picked something as the smallest unprocessed value $dp_i$, it cannot be updated to a better result anymore, hence it's our answer. 
Once you have no more unprocessed values (or they're all $\infty$), you're done. Subtask 1 (No antibodies ($M = 0$)) In this subtask we're not interested in the viruses themselves, just in their lengths, as any virus is valid. So, just do what was discussed above. Initially, $dp_0 = dp_1 = 1$. For any other $i$, $dp_i = \infty$. Then repeatedly pick the smallest unprocessed value of $dp_i$ and process it: for every transition $a \to \langle b_1\ b_2\ \ldots\ b_k \rangle$, $dp_a = \min(dp_a, dp_{b_1} + dp_{b_2} + \ldots + dp_{b_k})$. You don't even need careful transition selection or a queue for minimal values here; the most basic implementation yields $O(G \cdot (G + \sum{k}))$. Subtask 2 ($N = G - 2$) In this subtask, due to the restriction that every integer from $2$ to $G - 1$ appears in the table as $a$ at least once, every gene has exactly one outgoing mutation. This means that from every gene, either no virus or a single virus can emerge. However, that virus can still be quite large, so you can't just compute it explicitly. It could even be infinite. Luckily, you can use a similar approach here, except this time you also need to store some information about the virus itself rather than only its length. What information do we need about the virus then? Well, our dynamic programming state already has the condition that it's the shortest virus that is not detected by any antibody. So, our initialization changes slightly: $dp_0 = \begin{cases} 0, & \text{if $0$ is an antibody}\\ 1, & \text{otherwise} \end{cases}$ $dp_1 = \begin{cases} 0, & \text{if $1$ is an antibody}\\ 1, & \text{otherwise} \end{cases}$ Now, what about transitions? Similarly, $dp_a = \min(dp_a, dp_{b_1} + dp_{b_2} + \ldots + dp_{b_k})$. We also know that there can't be an antibody fully inside each individual $dp_{b_j}$. But what about overlaps? 
Well, since $\sum l \le 50$, each antibody has length at most $50$, so we are only interested in storing the first and the last $50$ characters of every value of $dp$. Then, when we perform a transition and glue multiple states together, we just have to check that no antibody appears across the glue points, which would make the transition invalid. It's possible that the most inefficient way of doing so will time out, but you have plenty of leeway here, so it shouldn't cause too many issues. Subtask 3 (One antibody ($M = 1$)) Here we need to expand our DP a bit. Given that, unlike in the previous subtask, there can be multiple different viruses originating from a single gene, it's no longer enough to store the information about the virus; we need to encode it inside a state. So, the general approach will be as follows: we now calculate $dp_{i, st, en}$, the minimal length of a virus such that we start in the state $st$, obtain a virus from the gene $i$ that is not detected by any antibodies and end up in the state $en$. For a transition $a \to \langle b_1\ b_2\ \ldots\ b_k \rangle$, the DP transition becomes $dp_{a, st, en} = \min(dp_{a, st, en}, dp_{b_1, st, x_1} + dp_{b_2, x_1, x_2} + \ldots + dp_{b_k, x_{k - 1}, en})$. But wait! We now have to enumerate every combination of values for the $k - 1$ variables $x_j$, and $k \leq 100$! Well, we can notice that a second, inner dynamic programming somewhat reduces the running cost, but this will only be enough for Subtask 4. Instead we can transform the transitions themselves. What we want is to make sure that in every transition $a \to \langle b_1\ b_2\ \ldots\ b_k \rangle$, $k \leq 2$. We can do that, since $a \to \langle b_1\ b_2\ \ldots\ b_k \rangle = \begin{cases} a \to \langle b_1\ z \rangle, \text{where $z$ is a new gene}\\ z \to \langle b_2\ \ldots\ b_k \rangle \end{cases}$ reduces the length of a transition. 
Note that by doing so we would still have at most $\sum{k}$ transitions, and the sum of all values $k$ would still be $O(\sum{k})$, since it can at most double. We also create new genes, but similarly, we'll still have $O(G + \sum{k})$ of them. However, the transition is now a lot smoother. For a transition $a \to \langle b_1\rangle$, we have $dp_{a, st, en} = \min(dp_{a, st, en}, dp_{b_1, st, en})$. For a transition $a \to \langle b_1\ b_2\rangle$, we have $dp_{a, st, en} = \min(dp_{a, st, en}, dp_{b_1, st, x} + dp_{b_2, x, en})$. Now, remember that from $dp_{i, st, en}$ we only need to consider transitions that contain $i$ on the right-hand side. Imagine we fix $st$ and $en$ and iterate freely over $i$: it's clear that we will consider every transition at most twice. This means that to compute all $O(G \cdot S^2)$ states we need to process $O(G \cdot S^2)$ transitions. As such, if the number of states is $S$ and we use a fast structure to find minimal states, the complexity is $O(G \cdot S^2 \cdot (S \cdot \log(G \cdot S^2)))$. The last logarithm comes from the need to update the queue of smallest values every time we successfully update a value. So, the final question we need to answer is: what is the state here? We can observe that since we have one antibody, it's enough for the state to encode at which prefix length of the antibody we currently are. For an antibody of length $l$, this gives $l$ states (you can never reach the prefix of length $l$, as that would mean the virus is detected). So, you are now looking for $dp_{i, st, en}$, the minimal length of a virus such that we start with a part already ending with a prefix of the antibody of length $st$, obtain a virus from gene $i$ that is not detected by any antibodies, and end up with a virus ending with a prefix of the antibody of length $en$. Your answer for a gene $i$ is then $\min(dp_{i, 0, x})$ over all $0 \leq x < l$. 
The transition doesn't really require any additional work either, as the fact that the virus contains no antibody is encoded in the states themselves. The initialization is where all of the work happens now. We iterate over the terminal genes $i$ ($0$ and $1$) and the starting states $st$. Consider the string corresponding to the state $st$ (i.e., the prefix of the antibody of length $st$) with the gene $i$ appended to it. We need to find the length of its longest suffix that is also a prefix of the antibody. Doing so naively is enough, but you can also observe that this is exactly what the KMP algorithm computes. Let's call this new length $en$. Then, for this $i$ and $st$, as long as $en < l$, $dp_{i, st, en} = 1$. Everything that is not initialized after we've iterated over every $i$ and $st$ has the value $\infty$, as it's unreachable. So, $S = \sum{l}$ and our total complexity is $O(G \cdot (\sum{l})^3 \cdot \log(G \cdot (\sum{l})^2))$. Subtask 5 (No further constraints) We can use the same solution as in Subtask 3, but the state needs to change. Now that we have multiple antibodies, we need to compute the set of prefixes of all antibodies, some of which are invalid. Please note that a prefix is bad not merely when it equals an antibody: a prefix is bad if and only if it has a suffix that is an antibody, since a prefix of one antibody may already end with a different antibody. A similar initialization then applies: iterate over every $i$ and $st$, build the string from $st$ and $i$, and find its longest suffix that is in the prefix set to make the transition. Doing so naively is enough, but once again, this is exactly what the Aho-Corasick automaton does. 
We can also see that we'll still have at most $O(\sum{l})$ states, so the rest of the algorithm and the total complexity are the same as in Subtask 3. Subtask 4 (The sum of all values $l$ does not exceed 10) This subtask was designed to award points to conceptually correct but inefficiently implemented solutions, for example ones that do not implement a fast enough structure for picking the minimal values of $dp$, or that do not transform the transitions and therefore have to run a separate dynamic programming to execute them.
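For concreteness, the Dijkstra-like relaxation from Subtask 1 (no antibodies) can be sketched as follows (Python; `mutations` is a list of rows $(a, [b_1, \ldots, b_k])$ of the mutation table, and the function name is illustrative):

```python
def shortest_virus_lengths(G, mutations):
    # dp[i] = minimal length of a fully mutated virus (genes 0/1 only)
    # obtainable from gene i. Dijkstra-like argument: the smallest
    # unfinished value can no longer improve, so it can be finalized.
    INF = float('inf')
    dp = [INF] * G
    dp[0] = dp[1] = 1
    done = [False] * G
    for _ in range(G):
        v = min((i for i in range(G) if not done[i]),
                key=lambda i: dp[i], default=-1)
        if v < 0 or dp[v] == INF:
            break                       # remaining genes are unreachable
        done[v] = True
        for a, bs in mutations:         # relax every transition into a
            if not done[a]:
                dp[a] = min(dp[a], sum(dp[b] for b in bs))
    return dp
```

On the mutation table from the statement ($2 \to \langle 0\ 1\rangle$, $3 \to \langle 2\ 0\ 0\rangle$, $3 \to \langle 1\ 3\rangle$, $4 \to \langle 0\ 3\ 1\ 2\rangle$, $5 \to \langle 2\ 1\rangle$, $5 \to \langle 5\rangle$) this gives length $8$ for gene $4$, matching the example virus $\langle 0\ 0\ 1\ 0\ 0\ 1\ 0\ 1\rangle$.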
[ "*special", "dp", "shortest paths", "string suffix structures" ]
2,900
null
1388
A
Captain Flint and Crew Recruitment
Despite his bad reputation, Captain Flint is a friendly person (at least, friendly to animals). Now Captain Flint is searching for worthy sailors to join his new crew (solely for peaceful purposes). A sailor is considered worthy if he can solve Flint's task. Recently, out of the blue, Captain Flint has become interested in math and even defined a new class of integers. Let's define a positive integer $x$ as \textbf{nearly prime} if it can be represented as $p \cdot q$, where $1 < p < q$ and $p$ and $q$ are prime numbers. For example, integers $6$ and $10$ are nearly prime (since $2 \cdot 3 = 6$ and $2 \cdot 5 = 10$), but integers $1$, $3$, $4$, $16$, $17$ or $44$ are not. Captain Flint guessed an integer $n$ and asked you: can you represent it as the sum of $4$ \textbf{different positive} integers where \textbf{at least $3$} of them are nearly prime? Uncle Bogdan easily solved the task and joined the crew. Can you do the same?
Consider the three smallest nearly prime numbers: $6$, $10$ and $14$. If $n \le 30 = 6+10+14$, then the answer is NO; otherwise the answer is YES. The easiest way is to output $6, 10, 14, n-30$ whenever $n-30 \neq 6, 10, 14$. If $n = 36, 40, 44$, we can output $6, 10, 15, n-31$ instead (note that $15 = 3 \cdot 5$ is also nearly prime). Alternatively, it is possible to generate the first nearly prime numbers and iterate over all their possible triplets. Complexity: $O(1)$, or more if brute force is written.
[ "brute force", "greedy", "math", "number theory" ]
800
#include <bits/stdc++.h> using namespace std; int main(){ ios_base::sync_with_stdio(false); cin.tie(nullptr); cout.tie(nullptr); int q; cin >> q; while(q--){ int n; cin >> n; if(n <= 30){ cout << "NO" << endl; } else{ cout << "YES" << endl; if(n == 36 || n == 40 || n == 44){ cout << 6 << ' ' << 10 << ' ' << 15 << ' ' << n - 31 << endl; } else{ cout << 6 << ' ' << 10 << ' ' << 14 << ' ' << n - 30 << endl; } } } }
1388
B
Captain Flint and a Long Voyage
Captain Flint and his crew have been heading to a savage shore of Byteland for several months already, drinking rum and telling stories. In such moments uncle Bogdan often remembers his nephew Denis. Today, he has told a story about how Denis helped him come up with an interesting problem and asked the crew to solve it. In the beginning, uncle Bogdan wrote on a board a positive integer $x$ consisting of $n$ digits. After that, he wiped out $x$ and wrote the integer $k$ instead, which is the concatenation of the binary representations of the digits $x$ consists of (without leading zeroes). For example, let $x = 729$; then $k = 111101001$ (since $7 = 111$, $2 = 10$, $9 = 1001$). After some time, uncle Bogdan understood that he doesn't know what to do with $k$ and asked Denis to help. Denis decided to wipe the last $n$ digits of $k$ and called the new number $r$. As a result, Denis proposed to find an integer $x$ of length $n$ such that $r$ (as a number) is maximum possible. If there are multiple valid $x$, then Denis is interested in the minimum one. All crew members, including captain Flint himself, easily solved the task. All, except cabin boy Kostya, who was too drunk to think straight. But what about you? Note: in this task, we compare integers ($x$ or $k$) as numbers (regardless of the representations they are written in), so $729 < 1999$ or $111 < 1000$.
Statement: $x$ consists only of the digits $8$ and $9$. Indeed, if $x$ contained a digit $0$-$7$, whose binary notation is shorter than that of $8$ or $9$, then the number $k$ written on the board, and therefore the number $r$ (obtained by removing the last $n$ digits of $k$), would be shorter than when using only the digits $8$ and $9$, and hence not the maximum possible. Statement: $x$ is of the form $99 \dots 988 \dots 8$. Obviously, the larger $x$ is, the larger $k$ and $r$ are. Therefore, to maximize $k$, $x$ should be $99 \dots 999$. However, since $r$ is $k$ without the last $n$ digits, at the end of the number $x$ a certain number of digits $9$ can be replaced with $8$ while $r$ remains the maximum possible. Statement: the number of digits $8$ in a number $x$ of length $n$ equals $\left\lceil \frac{n}{4} \right\rceil$. $8_{10}=1000_2$ and $9_{10}=1001_2$. The binary notations of the digits $8$ and $9$ have length $4$ and differ only in the last bit. Suppose the suffix of the number $x$ consists of $p$ digits $8$. Then the maximum $r$ is achieved if at least $4 \cdot p - 3$ digits are removed from the end of $k$. By the condition of the problem, exactly $n$ digits are removed, so $4 \cdot p - 3 \le n$ and then $p = \left \lfloor \frac {n + 3} {4} \right \rfloor = \left \lceil \frac {n} {4} \right \rceil$. Complexity: $O(n)$.
[ "greedy", "math" ]
1,000
#include <bits/stdc++.h> using namespace std; int main() { ios_base::sync_with_stdio(false); cin.tie(nullptr); cout.tie(nullptr); int q; cin >> q; while (q--) { int n; cin >> n; int x = (n + 3) / 4; for (int i = 0; i < n - x; ++i) { cout << 9; } for (int i = 0; i < x; ++i) { cout << 8; } cout << endl; } }
1388
C
Uncle Bogdan and Country Happiness
Uncle Bogdan has been in captain Flint's crew for a long time and sometimes gets nostalgic for his homeland. Today he told you how his country introduced a happiness index. There are $n$ cities and $n-1$ undirected roads connecting pairs of cities. Citizens of any city can reach any other city traveling by these roads. Cities are numbered from $1$ to $n$ and the city $1$ is the capital. In other words, the country has a tree structure. There are $m$ citizens living in the country. $p_i$ people live in the $i$-th city, but all of them work in the capital. In the evening all citizens return to their home cities using the shortest paths. Every person has their own mood: some leave their workplace in a good mood, while others are already in a bad mood. Moreover, any person's mood can deteriorate on the way to their hometown. \textbf{If a person is in a bad mood, it won't improve.} Happiness detectors are installed in each city to monitor the happiness of \textbf{each} person who visits the city. The detector in the $i$-th city calculates the happiness index $h_i$ as the number of people in a good mood minus the number of people in a bad mood. For simplicity, let's say that a person's mood doesn't change inside a city. The happiness detector is still in development, so there is a probability of a mistake in judging a person's happiness. One late evening, when all citizens had successfully returned home, the government asked uncle Bogdan (the best programmer of the country) to check the correctness of the collected happiness indexes. Uncle Bogdan successfully solved the problem. Can you do the same? More formally, you need to check: is it possible that, after all people return home, for each city $i$ the happiness index is equal exactly to $h_i$?
For each city $v$, count $a_v$, the number of people who visit it. Knowing this value and the happiness index $h_v$, we can calculate how many people visited the city in a good mood: $g_v=\frac {a_v + h_v} {2}$. We can single out $3$ criteria for the correctness of the happiness indices: 1) $a_v + h_v$ is a multiple of $2$, so that for each $v$ the value $g_v$ is an integer. 2) $0 \le g_v \le a_v$: in each city $v$ the number of residents who passed through it in a good mood is non-negative and does not exceed $a_v$. 3) $g_{to_1} + g_{to_2} + \ldots + g_{to_k} \le g_v$, where $to_1, to_2, \ldots, to_k$ are the cities a resident can move to out of the city $v$ on the way home; this follows from the fact that a resident's mood can deteriorate but cannot improve. This is enough, since these conditions guarantee the correctness of the happiness indices by definition and by the rules of how residents' moods change.
[ "dfs and similar", "greedy", "math", "trees" ]
1,800
#include <bits/stdc++.h> using namespace std; const int N = 1e5 + 7; vector < int > gr[N]; bool access = true; int p[N], h[N], a[N], g[N]; void dfs(int v, int ancestor = -1) { a[v] = p[v]; int sum_g = 0; for (int to : gr[v]) { if (to == ancestor) continue; dfs(to, v); sum_g += g[to]; a[v] += a[to]; } if ((a[v] + h[v]) % 2 == 0) {} // first else access = false; g[v] = (a[v] + h[v]) / 2; if (g[v] >= 0 && g[v] <= a[v]) {} // second else access = false; if (sum_g <= g[v]) {} // third else access = false; } int main() { ios_base::sync_with_stdio(false); cin.tie(nullptr); cout.tie(nullptr); int q; cin >> q; while (q--) { int n, m; cin >> n >> m; for (int i = 0; i < n; ++i) cin >> p[i]; for (int i = 0; i < n; ++i) cin >> h[i]; for (int i = 0; i < n - 1; ++i) { int a, b; cin >> a >> b; --a, --b; gr[a].push_back(b); gr[b].push_back(a); } dfs(0); cout << (access ? "YES" : "NO") << endl; access = true; for (int i = 0; i < n; ++i) gr[i].clear(); } }
1388
D
Captain Flint and Treasure
Captain Flint is involved in another treasure hunt, but has found only one strange problem. The problem may be connected to the treasure's location or may not. That's why captain Flint decided to leave solving the problem to his crew and offered an absurdly high reward: one day off. The problem itself sounds like this... There are two arrays $a$ and $b$ of length $n$. Initially, $ans$ is equal to $0$ and the following operation is defined: - Choose position $i$ ($1 \le i \le n$); - Add $a_i$ to $ans$; - If $b_i \neq -1$ then add $a_i$ to $a_{b_i}$. What is the maximum $ans$ you can get by performing the operation on each $i$ ($1 \le i \le n$) exactly once? Uncle Bogdan is eager to get the reward, so he is asking your help to find the optimal order of positions to perform the operation on them.
Let's construct a graph $G$ with $n$ vertices and directed edges $(i, b_i)$. Note that it is not profitable to process a vertex $i$ while some vertex $j$ with $b_j=i$ has not yet been processed, since those vertices $j$ can be processed in a way that does not decrease $a_i$. We will perform the following operation $n$ times: choose a vertex $V$ with no incoming edges from unprocessed vertices (such a vertex always exists due to the additional condition in the problem). Process it as follows: if $a_V > 0$, apply the operation from the statement to this vertex immediately. This is beneficial because if $b_V \neq -1$ then we improve the value of $a_{b_V}$. If $a_V \le 0$, the value of $a_{b_V}$ would not improve, which means it is profitable to apply the operation to the vertex $V$ after $b_V$ (if $b_V \neq -1$). Let's store two containers, $now$ and $after$. In $now$ we store the processing order of vertices with $a_V>0$; in $after$, those with $a_V \le 0$. Then the order $now + \mathrm{reverse}(after)$ achieves the maximum answer. Total: $O(n)$ or $O(n \log n)$ depending on the implementation.
[ "data structures", "dfs and similar", "graphs", "greedy", "implementation", "trees" ]
2,000
#include <bits/stdc++.h> #define Vanya Unstoppable using namespace std; int main() { ios_base::sync_with_stdio(false); cin.tie(nullptr); cout.tie(nullptr); int n; cin >> n; long long a[n]; for (int i = 0; i < n; ++i) { cin >> a[i]; } set < int > s; for (int i = 0; i < n; ++i) { s.insert(i); } int b[n]; vector < int > sz(n); for (int i = 0; i < n; ++i) { cin >> b[i]; --b[i]; if (b[i] == -2) continue; ++sz[b[i]]; if (sz[b[i]] == 1) { s.erase(b[i]); } } long long sum = 0; vector < int > ans[2]; while (!s.empty()) { int v = *s.begin(); s.erase(s.begin()); int w = b[v]; sum += a[v]; if (a[v] >= 0) { if (w >= 0) { a[w] += a[v]; } ans[0].push_back(v); } else { ans[1].push_back(v); } if (w >= 0) { --sz[w]; if (sz[w] == 0) { s.insert(w); } } } cout << sum << endl; for (int to : ans[0]) cout << to + 1 << ' '; reverse(ans[1].begin(), ans[1].end()); for (int to : ans[1]) cout << to + 1 << ' '; cout << endl; }
1388
E
Uncle Bogdan and Projections
After returning to shore, uncle Bogdan usually visits the computer club "The Rock" to solve tasks in pleasant company. One day, uncle Bogdan met his good old friend, who told him one unusual task... There are $n$ non-intersecting horizontal segments with ends at integer points on the plane with the standard Cartesian coordinate system. All segments are strictly above the $OX$ axis. You can choose an arbitrary vector $(a, b)$, where $b < 0$ and the coordinates are real numbers, and project all segments onto the $OX$ axis along this vector. The projections shouldn't intersect but may touch each other. Find the minimum possible difference between the $x$ coordinate of the right end of the rightmost projection and the $x$ coordinate of the left end of the leftmost projection.
It is easy to see that there is an optimal vector at which the value we need is minimal and at least one pair of projections touches. Note also that the vector is completely described by the angle between it and the positive direction of the $OX$ axis. If two segments are at different heights, there are two ways to select a vector so that their projections touch. Let's find the two angles that describe these vectors. If we project along a vector whose angle lies strictly inside the interval formed by these two angles, the projections will intersect, so this range of angles is 'forbidden'. Using the scanline method, we can find the angles that are boundaries of some 'forbidden' interval yet do not fall inside any 'forbidden' interval. Then we only need to check these angles. We also need to quickly find the rightmost and leftmost points for each angle. Take two points at different heights and suppose they project to one point under a vector with angle $\alpha$. Then on the interval $(0^\circ;\alpha)$ the upper one is to the right, and on the interval $(\alpha; 180^\circ)$ the upper one is to the left. We will process two types of events with the scanline: check the answer for the current angle; swap two points. It is necessary to handle carefully the case when several points are swapped at the same angle. Alternative way: if you project the point $(x; y)$ along the vector with angle $\alpha$, you get the point $(x + y \cdot \operatorname{ctg}(\alpha); 0)$. We'll use the Convex Hull Trick to quickly find the rightmost and leftmost points for each angle. We store CHTs for maximums and minimums with lines $k = y_i$, $b = x_i$. Queries at $\operatorname{ctg}(\alpha)$ give us the leftmost and rightmost points. There is a corner case when all points are at the same height: then the answer is $\max(xr_i) - \min(xl_i)$. Complexity of the solution: $O(n^2 \log n)$.
[ "data structures", "geometry", "sortings" ]
2,700
#include <bits/stdc++.h> #define pb push_back #define x first #define y second using namespace std; const int N = 1e5 + 10; const double eps = 1e-9; double xl[N], xr[N], y[N], pi = acos(-1), mn_x, mx_x; int ind_l, ind_r; double point_pr(double x, double y, double ctg) { return x - y * ctg; } signed main() { ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int n; vector<pair<double, int> > q; vector<pair<double, pair<int, int> > > events, prom_left, prom_right; cin >> n; pair<double, double> mx, mn; mx = {-1.0, -1.0}; mn = {2e9, 2e9}; mn_x = 2e9; mx_x = -2e9; for(int i = 0; i < n; ++i) { cin >> xl[i] >> xr[i] >> y[i]; if(xl[i] < mn_x) { mn_x = xl[i]; } if(xr[i] > mx_x) { mx_x = xr[i]; } if(mx.y < y[i]) { mx.y = y[i]; mx.x = xl[i]; ind_l = i; } else if(mx.y == y[i] && mx.x > xl[i]) { mx.y = y[i]; mx.x = xl[i]; ind_l = i; } if(mn.y > y[i]) { mn.y = y[i]; mn.x = xl[i]; ind_r = i; } else if(mn.y == y[i] && mn.x < xl[i]) { mn.y = y[i]; mn.x = xl[i]; ind_r = i; } } double a1, a2; for(int i = 0; i < n; ++i) { for(int j = 0; j < n; ++j) { if(y[i] > y[j]) { a1 = (xr[i] - xl[j]) / (y[i] - y[j]); a2 = (xl[i] - xr[j]) / (y[i] - y[j]); q.pb({a1, 1}); q.pb({a2, 2}); a1 = (xl[i] - xl[j]) / (y[i] - y[j]); a2 = (xr[i] - xr[j]) / (y[i] - y[j]); events.pb({a1, {i, j}}); events.pb({a2, {-i - 1, j}}); } } } if(q.empty()) { cout << fixed << setprecision(9) << mx_x - mn_x << endl; return 0; } sort(q.rbegin(), q.rend()); int cnt = 0; double last = 0; for(auto i : q) { if(i.y == 2) { --cnt; if(!cnt) { events.pb({i.x, {-1e9, -1e9}}); } } else { if(!cnt) { events.pb({i.x, {-1e9, -1e9}}); } ++cnt; } } sort(events.rbegin(), events.rend()); double ans = 1e18, ang; last = -1e18; for(auto i : events) { if(i.y.x == i.y.y) { unordered_set<int> s; vector<int> to_check; for(auto j : prom_left) { s.insert(j.y.x); if(j.y.x == ind_l) { to_check.pb(j.y.y); } } prom_left.clear(); for(auto j : to_check) { if(!s.count(j)) { ind_l = j; break; } } s.clear(); to_check.clear(); for(auto j : prom_right) { 
s.insert(j.y.y); if(j.y.y == ind_r) { to_check.pb(-j.y.x - 1); } } prom_right.clear(); for(auto j : to_check) { if(!s.count(j)) { ind_r = j; break; } } s.clear(); to_check.clear(); double res = point_pr(xr[ind_r], y[ind_r], i.x) - point_pr(xl[ind_l], y[ind_l], i.x); if(ans > res) { ans = res; ang = i.x; } } else if(i.y.x < 0) { if(abs(i.x - last) > eps) { unordered_set<int> s; vector<int> to_check; for(auto j : prom_right) { s.insert(j.y.y); if(j.y.y == ind_r) { to_check.pb(-j.y.x - 1); } } prom_right.clear(); for(auto j : to_check) { if(!s.count(j)) { ind_r = j; break; } } s.clear(); to_check.clear(); } prom_right.pb(i); } else { if(abs(i.x - last) > eps) { unordered_set<int> s; vector<int> to_check; for(auto j : prom_left) { s.insert(j.y.x); if(j.y.x == ind_l) { to_check.pb(j.y.y); } } prom_left.clear(); for(auto j : to_check) { if(!s.count(j)) { ind_l = j; break; } } s.clear(); to_check.clear(); } prom_left.pb(i); } last = i.x; } cout << fixed << setprecision(9) << ans << endl; return 0; }
1389
A
LCM Problem
Let $LCM(x, y)$ be the minimum positive integer that is divisible by both $x$ and $y$. For example, $LCM(13, 37) = 481$, $LCM(9, 6) = 18$. You are given two integers $l$ and $r$. Find two integers $x$ and $y$ such that $l \le x < y \le r$ and $l \le LCM(x, y) \le r$.
Suppose we have chosen $x$ and $y$ as the answer, and $x$ is not a divisor of $y$. Since $LCM(x, y)$ belongs to $[l, r]$, we could have chosen $x$ and $LCM(x, y)$ instead. So if the answer exists, there also exists an answer where $x$ is a divisor of $y$. If $2l > r$, then there is no pair $(x, y)$ such that $l \le x < y \le r$ and $x|y$. Otherwise, $x = l$ and $y = 2l$ is the answer.
[ "constructive algorithms", "greedy", "math", "number theory" ]
800
t = int(input()) for i in range(t): l, r = map(int, input().split()) if l * 2 > r: print(-1, -1) else: print(l, l * 2)
1389
B
Array Walk
You are given an array $a_1, a_2, \dots, a_n$, consisting of $n$ \textbf{positive} integers. Initially you are standing at index $1$ and have a score equal to $a_1$. You can perform two kinds of moves: - move right — go from your current index $x$ to $x+1$ and add $a_{x+1}$ to your score. This move can only be performed if $x<n$. - move left — go from your current index $x$ to $x-1$ and add $a_{x-1}$ to your score. This move can only be performed if $x>1$. \textbf{Also, you can't perform two or more moves to the left in a row.} You want to perform \textbf{exactly} $k$ moves. Also, there should be no more than $z$ moves to the left among them. What is the maximum score you can achieve?
Notice that your final position is determined by the number of moves to the left you make. If there are exactly $t$ moves to the left, that leaves us with $k - t$ moves to the right. However, let's interpret this the other way. You have $t$ pairs of moves (right, left) to insert somewhere inside the sequence of $k - 2t$ moves to the right. It is easy to see that all the positions from $1$ to $k - 2t + 1$ will always be visited. And the extra pairs can also increase the score by visiting some positions ($i+1$, $i$) for some $i$ from $1$ to $k - 2t + 1$. Notice that it's always optimal to choose exactly the same $i$ for all the pairs (right, left). And that $i$ should be such that $a_{i+1} + a_i$ is maximum possible. You can implement this idea in a straightforward manner: iterate over $t$ and calculate the sum of values from $1$ to $k - 2t + 1$ and the maximum value of $a_{i+1} + a_i$ over $i$ from $1$ to $k - 2t + 1$. That will lead to an $O(zn)$ solution per testcase. You can optimize it to $O(n)$ with prefix sums or with some clever order of iterating over $t$. It's also possible to iterate over the final position and restore the number of left moves required to achieve it. Overall complexity: $O(zn)$ or $O(n)$ per testcase.
[ "brute force", "dp", "greedy" ]
1,600
for _ in range(int(input())): n, k, z = map(int, input().split()) a = [int(x) for x in input().split()] ans = 0 s = 0 mx = 0 for i in range(k + 1): if i < n - 1: mx = max(mx, a[i] + a[i + 1]) s += a[i] if i % 2 == k % 2: tmp = (k - i) // 2 if tmp <= z: ans = max(ans, s + mx * tmp) print(ans)
1389
C
Good String
Let's call left cyclic shift of some string $t_1 t_2 t_3 \dots t_{n - 1} t_n$ as string $t_2 t_3 \dots t_{n - 1} t_n t_1$. Analogically, let's call right cyclic shift of string $t$ as string $t_n t_1 t_2 t_3 \dots t_{n - 1}$. Let's say string $t$ is \textbf{good} if its left cyclic shift is equal to its right cyclic shift. You are given string $s$ which consists of digits 0–9. What is the minimum number of characters you need to erase from $s$ to make it good?
Let's analyze when the string is good. Suppose it is $t_1t_2 \dots t_k$. The cyclic shifts of this string are $t_kt_1t_2 \dots t_{k-1}$ and $t_2t_3 \dots t_k t_1$. We get the following constraints for a good string: $t_k = t_2$, $t_1 = t_3$, $t_2 = t_4$, ..., $t_{k - 2} = t_k$, $t_{k - 1} = t_1$. If the string has odd length, then all characters should be equal to each other; otherwise, all characters on odd positions should be equal, and all characters on even positions should be equal. Now, since there are only $10$ different types of characters, we can brute force all possible combinations of the first and the second character of the string we want to obtain (there are only $100$ of them) and, for each combination, greedily construct the longest possible subsequence of $s$ beginning with those characters in $O(n)$.
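Sketching this brute force in Python (the helper name and structure are ours, not the reference implementation): for each of the $100$ ordered digit pairs $(x, y)$, greedily take the longest subsequence alternating $x, y, x, y, \ldots$, trimming an odd tail when $x \ne y$.

```python
def min_erase_to_good(s: str) -> int:
    best = 0
    for x in "0123456789":
        for y in "0123456789":
            # greedily build the longest subsequence x, y, x, y, ...
            length = 0
            want = x
            for c in s:
                if c == want:
                    length += 1
                    want = y if want == x else x
            if x != y and length % 2 == 1:
                length -= 1  # with two distinct characters, the length must be even
            best = max(best, length)
    return len(s) - best
```

The $x = y$ case automatically covers good strings of odd length (all characters equal).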
[ "brute force", "dp", "greedy", "two pointers" ]
1,500
#include <bits/stdc++.h> using namespace std; #define sz(a) int((a).size()) #define forn(i, n) for (int i = 0; i < int(n); ++i) int solve(const string& s, int x, int y) { int res = 0; for (auto c : s) if (c - '0' == x) { ++res; swap(x, y); } if (x != y && res % 2 == 1) --res; return res; } void solve() { string s; cin >> s; int ans = 0; forn(x, 10) forn(y, 10) ans = max(ans, solve(s, x, y)); cout << sz(s) - ans << endl; } int main() { int T; cin >> T; while (T--) solve(); }
1389
D
Segment Intersections
You are given two lists of segments $[al_1, ar_1], [al_2, ar_2], \dots, [al_n, ar_n]$ and $[bl_1, br_1], [bl_2, br_2], \dots, [bl_n, br_n]$. Initially, all segments $[al_i, ar_i]$ are equal to $[l_1, r_1]$ and all segments $[bl_i, br_i]$ are equal to $[l_2, r_2]$. In one step, you can choose one segment (either from the first or from the second list) and extend it by $1$. In other words, suppose you've chosen segment $[x, y]$ then you can transform it either into $[x - 1, y]$ or into $[x, y + 1]$. Let's define a total intersection $I$ as the sum of lengths of intersections of the corresponding pairs of segments, i.e. $\sum\limits_{i=1}^{n}{\text{intersection_length}([al_i, ar_i], [bl_i, br_i])}$. Empty intersection has length $0$ and length of a segment $[x, y]$ is equal to $y - x$. What is the minimum number of steps you need to make $I$ greater or equal to $k$?
At first, note that intersection_length of segments $[l_1, r_1]$ and $[l_2, r_2]$ can be calculated as $\min(r_1, r_2) - \max(l_1, l_2)$. If it's negative, the segments don't intersect; otherwise it's exactly the length of the intersection. Now we have two major cases: do segments $[l_1, r_1]$ and $[l_2, r_2]$ already intersect or not. If the segments intersect, then we already have $n \cdot (\min(r_1, r_2) - \max(l_1, l_2))$ as the total intersection. Note that making both segments in each pair equal to $[\min(l_1, l_2), \max(r_1, r_2)]$ is always optimal, since each such step increases the total intersection by $1$. After making all segments equal to $[\min(l_1, l_2), \max(r_1, r_2)]$, we can increase the total intersection by $1$ only in two steps: we need to extend both segments in one pair. As a result, we can derive a simple formula for the minimum number of steps: we already have $n \cdot (\min(r_1, r_2) - \max(l_1, l_2))$ of the total intersection, then we can increase it by at most $n \cdot ((\max(r_1, r_2) - \min(l_1, l_2)) - (\min(r_1, r_2) - \max(l_1, l_2)))$ using one step per increase, and then to any number using two steps per increase. In the case of non-intersecting $[l_1, r_1]$ and $[l_2, r_2]$, we should first "invest" some number of steps in some pairs to make them intersect. So let's iterate over the number of pairs to "invest" in, $cntInv$. We should make $cntInv \cdot (\max(l_1, l_2) - \min(r_1, r_2))$ steps to make those segments touch. Now $cntInv$ pairs touch, so we can use almost the same formulas for them as in the previous case. The total complexity is $O(n)$ per test case.
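A minimal Python sketch of this case analysis (the function name is ours; the two branches mirror the intersecting and non-intersecting cases above):

```python
def min_steps(n, k, l1, r1, l2, r2):
    if max(l1, l2) <= min(r1, r2):
        # already intersecting: n * current length comes for free,
        # then one step per unit until both segments span [min l, max r],
        # then two steps per unit beyond that
        rem = max(0, k - n * (min(r1, r2) - max(l1, l2)))
        cheap = n * (abs(l1 - l2) + abs(r1 - r2))
        return min(rem, cheap) + max(0, rem - cheap) * 2
    ans = float('inf')
    gap = max(l1, l2) - min(r1, r2)
    for cnt in range(1, n + 1):          # number of pairs to "invest" in
        cur = gap * cnt                  # steps to make cnt pairs touch
        cheap = (max(r1, r2) - min(l1, l2)) * cnt
        cur += min(k, cheap) + max(0, k - cheap) * 2
        ans = min(ans, cur)
    return ans
```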
[ "brute force", "greedy", "implementation", "math" ]
2,100
import kotlin.math.* fun main() { repeat(readLine()!!.toInt()) { val (n, k) = readLine()!!.split(' ').map { it.toLong() } val (l1, r1) = readLine()!!.split(' ').map { it.toLong() } val (l2, r2) = readLine()!!.split(' ').map { it.toLong() } var ans = 1e18.toLong() if (max(l1, l2) <= min(r1, r2)) { val rem = max(0L, k - n * (min(r1, r2) - max(l1, l2))) val maxPossible = n * (abs(l1 - l2) + abs(r1 - r2)) ans = min(rem, maxPossible) + max(0L, rem - maxPossible) * 2 } else { val invest = max(l1, l2) - min(r1, r2) for (cntInv in 1..n) { var curAns = invest * cntInv val maxPossible = (max(r1, r2) - min(l1, l2)) * cntInv curAns += min(k, maxPossible) + max(0L, k - maxPossible) * 2 ans = min(ans, curAns) } } println(ans) } }
1389
E
Calendar Ambiguity
Berland year consists of $m$ months with $d$ days each. Months are numbered from $1$ to $m$. Berland week consists of $w$ days. The first day of the year is also the first day of the week. Note that the last week of the year might be shorter than $w$ days. A pair $(x, y)$ such that $x < y$ is ambiguous if day $x$ of month $y$ is the same day of the week as day $y$ of month $x$. Count the number of ambiguous pairs.
Let the months, the days in them, and the days of the week be numbered $0$-based. Translate the $x$-th day of the $y$-th month to the index of that day in a year; that would be $yd + x$. Thus, the corresponding day of the week is $(yd + x)~mod~w$. So we can rewrite the condition for a pair as $xd + y \equiv yd + x~(mod~w)$. That's also $(xd + y) - (yd + x) \equiv 0~(mod~w)$. Continue with $(x - y)(d - 1) \equiv 0~(mod~w)$. So $(x - y)(d - 1)$ should be divisible by $w$. $d - 1$ is fixed, and some prime divisors of $w$ might already appear in it. If we remove them from $w$, then $(x - y)$ should just be divisible by the resulting number. So we can divide $w$ by $gcd(w, d - 1)$ and obtain that $w'$. Now we should just count the number of pairs $(x, y)$ such that $x - y$ is divisible by $w'$. Since both $x$ and $y$ must be valid day and month indices, the difference $y - x$ ranges from $1$ to $min(d, m) - 1$. So we can fix the difference and add the number of pairs for that difference: that would be $mn - k$ for a difference $k$, where $mn = min(d, m)$. Finally, the answer is $\sum \limits_{i=1}^{min(d, m) / w'} (mn - i \cdot w')$ (the last term may be zero, which is harmless). Use the formula for the sum of an arithmetic progression to compute this in $O(1)$. Overall complexity: $O(\log(w + d))$ per testcase.
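The whole derivation fits in a few lines of Python (a sketch of the Kotlin reference solution; the function name is ours):

```python
from math import gcd

def count_ambiguous(m, d, w):
    w2 = w // gcd(w, d - 1)   # (y - x) must be divisible by w2
    mn = min(m, d)
    cnt = mn // w2            # number of admissible differences
    # arithmetic series: sum of (mn - i * w2) for i = 1..cnt
    return (2 * (mn - w2) - w2 * (cnt - 1)) * cnt // 2
```

Note that `gcd(w, 0) == w`, so the degenerate case $d = 1$ (every difference works) needs no special handling.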
[ "math", "number theory" ]
2,200
fun gcd(a: Long, b: Long): Long { if (a == 0L) return b return gcd(b % a, a) } fun main() { repeat(readLine()!!.toInt()) { val (m, d, w) = readLine()!!.split(' ').map { it.toLong() } val w2 = w / gcd(d - 1, w) val mn = minOf(m, d) var cnt = mn / w2 println((2 * (mn - w2) - w2 * (cnt - 1)) * cnt / 2) } }
1389
F
Bicolored Segments
You are given $n$ segments $[l_1, r_1], [l_2, r_2], \dots, [l_n, r_n]$. Each segment has one of two colors: the $i$-th segment's color is $t_i$. Let's call a pair of segments $i$ and $j$ bad if the following two conditions are met: - $t_i \ne t_j$; - the segments $[l_i, r_i]$ and $[l_j, r_j]$ intersect, embed or touch, i. e. there exists an integer $x$ such that $x \in [l_i, r_i]$ and $x \in [l_j, r_j]$. Calculate the maximum number of segments that can be selected from the given ones, so that there is no bad pair among the selected ones.
There are two approaches to this problem. Most of the participants of the round got AC by implementing dynamic programming with data structures such as a segment tree, but I will describe another solution which is much easier to code. Let's consider a graph where each vertex represents a segment, and two vertices are connected by an edge if the corresponding segments compose a bad pair. Since each bad pair is formed by two segments of different colors, the graph is bipartite. The problem asks us to find the maximum independent set, and in bipartite graphs, the size of the maximum independent set is equal to $V - M$, where $V$ is the number of vertices, and $M$ is the size of the maximum matching (a consequence of König's theorem). The only thing that's left is finding the maximum matching. Let's use an event processing approach to do it: for each segment, create two events, "the segment begins" and "the segment ends". While processing the events, maintain the currently existing segments in two sets (grouped by their colors and sorted by the time they end). When a segment ends, let's try to match it with some segment of the opposite color - and it's quite obvious that we should choose the segment with the minimum $r_i$ to form a pair. Overall, this solution runs in $O(n \log n)$.
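A Python sketch of this greedy matching (names are ours; its optimality rests on the exchange argument above, so treat it as an illustration rather than a reference solution). A segment that ends without finding a partner can never be matched later, so it is retired immediately; already-used heap entries are discarded lazily.

```python
import heapq

def max_good_subset(segments):
    """segments: list of (l, r, color) with color in {0, 1}."""
    events = []
    for idx, (l, r, t) in enumerate(segments):
        events.append((l, 0, idx))   # start events sort before end events
        events.append((r, 1, idx))
    events.sort()
    active = ([], [])   # per color: min-heap of (r, idx) of unmatched segments
    used = set()        # matched, or ended without a partner
    pairs = 0
    for _, kind, idx in events:
        t = segments[idx][2]
        if kind == 0:
            heapq.heappush(active[t], (segments[idx][1], idx))
        elif idx not in used:
            opp = active[1 - t]
            while opp and opp[0][1] in used:
                heapq.heappop(opp)   # lazy deletion of stale entries
            if opp:
                pairs += 1
                used.add(heapq.heappop(opp)[1])
            used.add(idx)  # either matched now, or never matchable later
    return len(segments) - pairs
```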
[ "data structures", "dp", "graph matchings", "sortings" ]
2,600
#include <bits/stdc++.h> using namespace std; #define x first #define y second #define pb push_back #define mp make_pair #define sz(a) int((a).size()) #define all(a) (a).begin(), (a).end() #define forn(i, n) for (int i = 0; i < int(n); ++i) typedef pair<int, int> pt; const int INF = 1e9; const int N = 200 * 1000; int n; vector<pt> a[2]; struct segtree { int n; vector<int> t, ps; segtree(int n) : n(n) { t.resize(4 * n, -INF); ps.resize(4 * n, 0); } void push(int v, int l, int r) { if (l + 1 != r) { ps[v * 2 + 1] += ps[v]; ps[v * 2 + 2] += ps[v]; } t[v] += ps[v]; ps[v] = 0; } void upd(int v, int l, int r, int pos, int val) { push(v, l, r); if (l + 1 == r) { t[v] = val; return; } int m = (l + r) >> 1; if (pos < m) upd(v * 2 + 1, l, m, pos, val); else upd(v * 2 + 2, m, r, pos, val); t[v] = max(t[v * 2 + 1], t[v * 2 + 2]); } void add(int v, int l, int r, int L, int R, int val) { push(v, l, r); if (L >= R) return; if (l == L && r == R) { ps[v] += val; push(v, l, r); return; } int m = (l + r) >> 1; add(v * 2 + 1, l, m, L, min(m, R), val); add(v * 2 + 2, m, r, max(m, L), R, val); t[v] = max(t[v * 2 + 1], t[v * 2 + 2]); } int get(int v, int l, int r, int L, int R) { push(v, l, r); if (L >= R) return -INF; if (l == L && r == R) return t[v]; int m = (l + r) >> 1; int r1 = get(v * 2 + 1, l, m, L, min(m, R)); int r2 = get(v * 2 + 2, m, r, max(m, L), R); t[v] = max(t[v * 2 + 1], t[v * 2 + 2]); return max(r1, r2); } void upd(int pos, int val) { return upd(0, 0, n, pos, val); } void add(int l, int r, int val) { return add(0, 0, n, l, r, val); } int get(int l, int r) { return get(0, 0, n, l, r); } }; int main() { scanf("%d", &n); forn(i, n) { int l, r, t; scanf("%d%d%d", &l, &r, &t); a[t - 1].pb(mp(r, l)); } forn(i, 2) sort(all(a[i]), [](pt a, pt b) { if (a.x == b.x) return a.y > b.y; return a.x < b.x; }); segtree t1(sz(a[0]) + 1), t2(sz(a[1]) + 1); t1.upd(0, 0); t2.upd(0, 0); int ans = 0; for (int i = 0, j = 0; i + j < n; ) { if (i < sz(a[0]) && (j == sz(a[1]) || a[0][i].x <= 
a[1][j].x)) { int pos = upper_bound(all(a[1]), mp(a[0][i].y, -INF)) - a[1].begin() + 1; int cur = t2.get(0, pos) + 1; ans = max(ans, cur); t1.upd(i + 1, cur); t2.add(0, pos, 1); ++i; } else { int pos = upper_bound(all(a[0]), mp(a[1][j].y, -INF)) - a[0].begin() + 1; int cur = t1.get(0, pos) + 1; ans = max(ans, cur); t2.upd(j + 1, cur); t1.add(0, pos, 1); ++j; } } printf("%d\n", ans); }
1389
G
Directing Edges
You are given an undirected connected graph consisting of $n$ vertices and $m$ edges. $k$ vertices of this graph are special. You have to direct each edge of this graph or leave it undirected. If you leave the $i$-th edge undirected, you pay $w_i$ coins, and if you direct it, you don't have to pay for it. Let's call a vertex saturated if it is reachable from each special vertex along the edges of the graph (if an edge is undirected, it can be traversed in both directions). After you direct the edges of the graph (possibly leaving some of them undirected), you receive $c_i$ coins for each saturated vertex $i$. Thus, your total profit can be calculated as $\sum \limits_{i \in S} c_i - \sum \limits_{j \in U} w_j$, where $S$ is the set of saturated vertices, and $U$ is the set of edges you leave undirected. For each vertex $i$, calculate the maximum possible profit you can get if you have to make the vertex $i$ saturated.
Suppose we want to calculate the maximum profit for some vertex in $O(n)$. Let's try to find out how it can be done, and then optimize this process so we don't have to run it $n$ times. First of all, we have to find the bridges and 2-edge-connected components of our graph. Why do we need them? Edges in each 2-edge-connected component can be directed in such a way that the component becomes strongly connected, so we never have to leave these edges undirected (it is never optimal). Furthermore, for each such component, either all its vertices are saturated or none of them is. Let's build a tree where each vertex represents a 2-edge-connected component of the original graph, and each edge represents a bridge. We can solve the problem for this tree, and then the answer for some vertex of the original graph is equal to the answer for the component this vertex belongs to. Okay, now we have a problem on a tree. Let's implement the following dynamic programming solution: root the tree at the vertex we want to find the answer for, and for each vertex, calculate the value of $dp_v$ - the maximum profit we can get for the subtree of vertex $v$, provided that $v$ must be reachable from all special vertices in its subtree. Let's analyze how we can calculate these $dp$ values. Suppose we have a vertex $v$ with children $v_1$, $v_2$, ..., $v_d$, we have already calculated the $dp$ values for the children, and we want to calculate $dp_v$. First of all, since the vertex $v$ is going to be saturated, we will get the profit from it, so we initialize $dp_v$ with $c_v$. Then we should decide whether we want to get the profit from the children of vertex $v$. Suppose the edge leading from $v$ to $v_i$ has weight $w_i$.
If we want to take the profit from the subtree of $v_i$, we (usually) have to make this edge undirected, so both vertices are saturated, thus we get $dp_{v_i} - w_i$ as profit - or we could leave this edge directed from $v_i$ to $v$, so the vertex $v$ is saturated, and $v_i$ is not, and get $0$ as the profit. But sometimes we can gain the profit from the vertex $v_i$ and its subtree without leaving the edge undirected: if all special vertices belong to the subtree of $v_i$, we can just direct this edge from $v_i$ to $v$, and there is no reason to choose the opposite direction or leave the edge undirected. Similarly, if all special vertices are outside of this subtree, there's no reason to direct the edge from $v_i$ to $v$. So, if one of these conditions is met, we can get the full profit from the subtree of $v_i$ without leaving the edge undirected. Okay, let's summarize it. We can calculate $dp_v$ as $dp_v = c_v + \sum \limits_{i=1}^{d} f(v_i)$, where $f(v_i)$ is either $dp_{v_i}$ if one of the aforementioned conditions is met (we don't have to leave the edge undirected if we want to saturate both vertices), or $f(v_i) = \max(0, dp_{v_i} - w_i)$ otherwise. Now we have an $O(n^2)$ solution. Let's optimize it to $O(n)$. Root the tree at vertex $1$ and calculate the dynamic programming as if $1$ is the root. Then, we shall use the rerooting technique to recalculate the dynamic programming for all other vertices: we will try each vertex as the root of the tree, and $dp_v$ is the answer for the vertex $v$ if it is the root. The rerooting technique works as follows: let's run DFS from the initial root of the tree, and when we traverse an edge by starting or finishing a recursive call of DFS, we move the root along the edge; so, if we call $DFS(x)$, $x$ is the current root; if it has some child $y$, we move the root to $y$ the same moment when we call $DFS(y)$, and when the call of $DFS(y)$ ends, the root moves back to $x$.
Okay, the only thing that's left is to describe how we move the root. If the current root is $x$, and we want to move it to $y$ (a vertex adjacent to $x$), then we have to change only the values of $dp_x$ and $dp_y$: first of all, since $y$ is no longer a child of $x$, we have to subtract the value that was added to $dp_x$ while we considered vertex $y$; then, we have to make $x$ the child of vertex $y$, so we add the profit we can get from the vertex $x$ to $dp_y$. It can be done in $O(1)$, so our solution runs in $O(n)$, though with a very heavy constant factor.
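The rerooting pattern itself is easier to see on a simpler statistic than this problem's dp. The sketch below (ours, not the editorial's code) recomputes, for every possible root, the sum of distances to all other vertices, moving the root along each tree edge in $O(1)$ exactly as described: when the root moves from $u$ to its child $v$, the $size_v$ vertices in $v$'s subtree get one step closer and the remaining $n - size_v$ get one step farther.

```python
def reroot_sum_of_distances(n, edges):
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # traversal order and parents, rooted at vertex 0
    parent = [-1] * n
    order, stack = [], [0]
    seen = [False] * n
    seen[0] = True
    while stack:
        u = stack.pop()
        order.append(u)
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                parent[v] = u
                stack.append(v)
    # bottom-up pass: subtree sizes and distance sums within subtrees
    size = [1] * n
    down = [0] * n
    for u in reversed(order):
        for v in adj[u]:
            if parent[v] == u:
                size[u] += size[v]
                down[u] += down[v] + size[v]
    # top-down pass: move the root along each tree edge in O(1)
    ans = [0] * n
    ans[0] = down[0]
    for u in order:
        for v in adj[u]:
            if parent[v] == u:
                ans[v] = ans[u] - size[v] + (n - size[v])
    return ans
```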
[ "dfs and similar", "dp", "graphs", "trees" ]
2,800
#include<bits/stdc++.h> using namespace std; typedef long long li; const int N = 300043; bool is_bridge[N]; int w[N]; int c[N]; int v[N]; vector<pair<int, int> > g[N]; vector<pair<int, int> > g2[N]; int comp[N]; li sum[N]; li dp[N]; int cnt[N]; int fup[N]; int tin[N]; int T = 0; li ans[N]; int v1[N], v2[N]; int n, m, k; int dfs1(int x, int e) { tin[x] = T++; fup[x] = tin[x]; for(auto p : g[x]) { int y = p.first; int i = p.second; if(i == e) continue; if(tin[y] != -1) fup[x] = min(fup[x], tin[y]); else { fup[x] = min(fup[x], dfs1(y, i)); if(fup[y] > tin[x]) is_bridge[i] = true; } } return fup[x]; } void dfs2(int x, int cc) { if(comp[x] != -1) return; comp[x] = cc; cnt[cc] += v[x]; sum[cc] += c[x]; for(auto y : g[x]) if(!is_bridge[y.second]) dfs2(y.first, cc); } void process_edge(int x, int y, int m, int weight) { li add_dp = dp[y]; if(cnt[y] > 0 && cnt[y] < k) add_dp = max(0ll, add_dp - weight); cnt[x] += m * cnt[y]; dp[x] += m * add_dp; } void link(int x, int y, int weight) { process_edge(x, y, 1, weight); } void cut(int x, int y, int weight) { process_edge(x, y, -1, weight); } void dfs3(int x, int p) { dp[x] = sum[x]; for(auto e : g2[x]) { int i = e.second; int y = e.first; if(y == p) continue; dfs3(y, x); link(x, y, w[i]); } } void dfs4(int x, int p) { ans[x] = dp[x]; for(auto e : g2[x]) { int i = e.second; int y = e.first; if(y == p) continue; cut(x, y, w[i]); link(y, x, w[i]); dfs4(y, x); cut(y, x, w[i]); link(x, y, w[i]); } } int main() { scanf("%d %d %d", &n, &m, &k); for(int i = 0; i < k; i++) { int x; scanf("%d", &x); --x; v[x] = 1; } for(int i = 0; i < n; i++) scanf("%d", &c[i]); for(int i = 0; i < m; i++) scanf("%d", &w[i]); for(int i = 0; i < m; i++) { scanf("%d %d", &v1[i], &v2[i]); --v1[i]; --v2[i]; g[v1[i]].push_back(make_pair(v2[i], i)); g[v2[i]].push_back(make_pair(v1[i], i)); } for(int i = 0; i < n; i++) { tin[i] = -1; comp[i] = -1; } dfs1(0, -1); int cc = 0; for(int i = 0; i < n; i++) if(comp[i] == -1) dfs2(i, cc++); for(int i = 0; i < m; i++) 
if(is_bridge[i]) { g2[comp[v1[i]]].push_back(make_pair(comp[v2[i]], i)); g2[comp[v2[i]]].push_back(make_pair(comp[v1[i]], i)); } dfs3(0, 0); dfs4(0, 0); for(int i = 0; i < n; i++) printf("%lld ", ans[comp[i]]); puts(""); }
1391
A
Suborrays
A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array) and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array). For a positive integer $n$, we call a permutation $p$ of length $n$ \textbf{good} if the following condition holds for every pair $i$ and $j$ ($1 \le i \le j \le n$) — - $(p_i \text{ OR } p_{i+1} \text{ OR } \ldots \text{ OR } p_{j-1} \text{ OR } p_{j}) \ge j-i+1$, where $\text{OR}$ denotes the bitwise OR operation. In other words, a permutation $p$ is \textbf{good} if for every subarray of $p$, the $\text{OR}$ of all elements in it is not less than the number of elements in that subarray. Given a positive integer $n$, output any \textbf{good} permutation of length $n$. We can show that for the given constraints such a permutation always exists.
Every permutation is good. Proof: We use the fact that, for any set of numbers, its bitwise OR is at least the maximum value in it. Now, we just need to show that any subarray of length $len$ has at least one element greater than or equal to $len$. If the maximum element is $< len$, then we have $len$ elements all with values in the range $[1,len-1]$. By the pigeonhole principle, at least $2$ of them must be the same, contradicting the fact that it's a permutation. Time Complexity: $\mathcal{O}(n)$
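The claim is easy to sanity-check by brute force; this small script (ours, not part of the editorial) verifies the subarray condition for every permutation of each length up to $6$:

```python
from itertools import permutations

def is_good(p):
    n = len(p)
    for i in range(n):
        acc = 0
        for j in range(i, n):
            acc |= p[j]
            if acc < j - i + 1:  # OR of a subarray must be >= its length
                return False
    return True

# exhaustive check for small n supports the claim that every permutation is good
assert all(is_good(p) for n in range(1, 7)
           for p in permutations(range(1, n + 1)))
```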
[ "constructive algorithms", "math" ]
800
#include<bits/stdc++.h> using namespace std; mt19937 rng((int) std::chrono::steady_clock::now().time_since_epoch().count()); void solve(){ int n; cin >> n; vector<int> all; for(int i = 1;i <= n;i++)all.push_back(i); shuffle(all.begin(),all.end(),rng); for(int i = 0;i < n;i++){ cout << all[i] << " "; } cout << endl; } int main(){ int t; cin >> t; while(t--){ solve(); } }
1391
B
Fix You
Consider a conveyor belt represented using a grid consisting of $n$ rows and $m$ columns. The cell in the $i$-th row from the top and the $j$-th column from the left is labelled $(i,j)$. Every cell, except $(n,m)$, has a direction R (Right) or D (Down) assigned to it. If the cell $(i,j)$ is assigned direction R, any luggage kept on that will move to the cell $(i,j+1)$. Similarly, if the cell $(i,j)$ is assigned direction D, any luggage kept on that will move to the cell $(i+1,j)$. If at any moment, the luggage moves out of the grid, it is considered to be lost. There is a counter at the cell $(n,m)$ from where all luggage is picked. A conveyor belt is called \textbf{functional} if and only if any luggage reaches the counter regardless of which cell it is placed in initially. More formally, for every cell $(i,j)$, any luggage placed in this cell should eventually end up in the cell $(n,m)$. This may not hold initially; you are, however, allowed to \textbf{change} the directions of some cells to make the conveyor belt functional. Please determine the minimum amount of cells you have to change. Please note that it is always possible to make any conveyor belt functional by changing the directions of some set of cells.
The answer is the number of R cells in the last column plus the number of D cells in the last row. It's obvious that we must change all Rs in the last column and all Ds in the last row. Otherwise, anything placed in those cells will move out of the grid. We claim that doing just this is enough to make the grid functional. Indeed, for any other cell, any luggage placed in it will eventually reach either the last row or the last column, from which it will move to the counter. Time Complexity: $\mathcal{O}(nm)$
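The count described above, as a short Python sketch (the function name is ours; like the reference solution, it assumes the counter cell $(n, m)$ is marked 'C'):

```python
def min_changes(grid):
    # grid: list of strings over {'R', 'D', 'C'}
    n, m = len(grid), len(grid[0])
    changes = 0
    for i in range(n):
        for j in range(m):
            if grid[i][j] == 'D' and i == n - 1:
                changes += 1   # would fall off the bottom edge
            if grid[i][j] == 'R' and j == m - 1:
                changes += 1   # would fall off the right edge
    return changes
```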
[ "brute force", "greedy", "implementation" ]
800
#include <bits/stdc++.h> using namespace std; int n,m; void solve(){ cin >> n >> m; int ans = 0; for(int i = 1;i <= n;i++){ for(int j = 1;j <= m;j++){ char o;cin >> o; if(o == 'C')continue; if(i == n and o == 'D')ans++; if(j == m and o == 'R')ans++; } } cout << ans << endl; } int main(){ ios_base::sync_with_stdio(0); cin.tie(0); int t;cin >> t; while(t--){ solve(); } return 0; }
1391
C
Cyclic Permutations
A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array) and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array). Consider a permutation $p$ of length $n$, we build a graph of size $n$ using it as follows: - For every $1 \leq i \leq n$, find the \textbf{largest} $j$ such that $1 \leq j < i$ and $p_j > p_i$, and add an undirected edge between node $i$ and node $j$ - For every $1 \leq i \leq n$, find the \textbf{smallest} $j$ such that $i < j \leq n$ and $p_j > p_i$, and add an undirected edge between node $i$ and node $j$ In cases where no such $j$ exists, we make no edges. Also, note that we make edges between the corresponding indices, not the values at those indices. For clarity, consider as an example $n = 4$, and $p = [3,1,4,2]$; here, the edges of the graph are $(1,3),(2,1),(2,3),(4,3)$. A permutation $p$ is \textbf{cyclic} if the graph built using $p$ has at least one simple cycle. Given $n$, find the number of cyclic permutations of length $n$. Since the number may be very large, output it modulo $10^9+7$. Please refer to the Notes section for the formal definition of a simple cycle
The answer is $n!-2^{n-1}$. Consider an arbitrary cyclic permutation - for example, $[4,2,3,1,5,6]$; it contains many cycles of length $3$: $[1,2,3]$, $[1,3,5]$, $[3,4,5]$. Note that all the listed cycles contain nodes obtained from just one choice of $i$. We can generalize this to the following: if for any $i$ we make edges on both sides of it, this creates a simple cycle of length $3$. The proof is simple and is an exercise for you. Thus, an acyclic permutation can have at most one peak, and that peak must be the element $n$: all acyclic permutations increase, reach $n$, and then decrease. These are formally called unimodal permutations, and it's easy to see that any unimodal permutation forms a tree, and, thus, contains no simple cycle - each element, except $n$, has a uniquely defined parent. We can construct any unimodal permutation by adding the numbers $n, n-1, \ldots, 1$ into a deque in that order. For example, $[2,3,4,1]$ can be constructed by first pushing $4$, $3$, $2$ to the front, and, finally, $1$ at the back. Thus, for every element, except $n$, we have the choice of pushing it to the front or the back, making the total number of ways equal to $2^{n-1}$. Time Complexity: $\mathcal{O}(n)$
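The closed form translates directly to code; a Python sketch of the counting formula (helper name ours), computing $n! - 2^{n-1}$ modulo $10^9+7$:

```python
def count_cyclic(n, mod=10**9 + 7):
    fact = 1
    for i in range(2, n + 1):
        fact = fact * i % mod
    # n! total permutations minus 2^(n-1) unimodal (acyclic) ones
    return (fact - pow(2, n - 1, mod)) % mod
```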
[ "combinatorics", "dp", "graphs", "math" ]
1,500
#include <bits/stdc++.h> using namespace std; #define int long long const int MOD = 1e9+7; int n; int res,fact; signed main(){ cin >> n; res = 1; fact = 1; for(int i = 1;i <= n-1;i++){ res *= 2; fact *= i; fact %= MOD; res %= MOD; } fact *= n; fact %= MOD; fact -= res; fact %= MOD; if(fact < 0)fact += MOD; cout << fact; return 0; }
1391
D
505
A binary matrix is called \textbf{good} if every \textbf{even} length square sub-matrix has an \textbf{odd} number of ones. Given a binary matrix $a$ consisting of $n$ rows and $m$ columns, determine the minimum number of cells you need to change to make it good, or report that there is no way to make it good at all. All the terms above have their usual meanings — refer to the Notes section for their formal definitions.
Firstly, if $min(n,m) > 3$, then no solution exists, because the grid contains at least one $4\times4$ sub-matrix, which can be decomposed into four $2\times2$ sub-matrices. Since each of these four $2\times2$ sub-matrices is supposed to have an odd number of ones, their union would have an even number of ones, contradicting the requirement that the $4\times4$ sub-matrix itself has an odd number of ones. The problem now reduces to changing the least number of cells such that every $2\times2$ sub-matrix has an odd number of ones - this is possible to achieve for every valid grid. For example, for every even-indexed row, alternate the cells, and for every odd-indexed row, make all cells equal to $1$. We will solve this reduction using dynamic programming. We represent the $i^{th}$ column as an $n$-bit integer $a_i$; let $dp(i,mask)$ be the minimum number of cells we have to flip to make the first $i$ columns valid, with the $i^{th}$ column represented by $mask$. The transition is quite simple: $dp(i,cmask) = min( dp(i-1,pmask)+bitcount(cmask \oplus a_i), dp(i,cmask))$. The term $bitcount(cmask \oplus a_i)$ is equal to the number of positions where the two masks differ. Please note that we only consider those pairs $\{pmask, cmask\}$ that, when put adjacent, do not form a $2\times2$ sub-matrix with an even number of ones. To speed up the transition, they can be pre-calculated. Time Complexity: $\mathcal{O}(m \cdot 2^{2n} + nm)$
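A compact Python sketch of this dp (ours; the reference solution is the C++ version). It transposes the grid so the short side becomes the mask dimension, then runs the column dp with the adjacency check inlined:

```python
def min_flips(grid):
    n, m = len(grid), len(grid[0])
    if min(n, m) > 3:
        return -1
    if min(n, m) == 1:
        return 0           # no even-length square sub-matrix exists
    if m < n:              # transpose so the short side (<= 3) is n
        grid = ["".join(row[j] for row in grid) for j in range(m)]
        n, m = m, n
    cols = []
    for j in range(m):
        mask = 0
        for i in range(n):
            if grid[i][j] == '1':
                mask |= 1 << i
        cols.append(mask)
    full = 1 << n

    def ok(p, c):
        # every 2x2 window spanning columns p (left) and c (right) is odd
        return all((bin((p >> r) & 3).count('1')
                    + bin((c >> r) & 3).count('1')) % 2 == 1
                   for r in range(n - 1))

    dp = [bin(cols[0] ^ mask).count('1') for mask in range(full)]
    for j in range(1, m):
        dp = [min(dp[p] for p in range(full) if ok(p, c))
              + bin(cols[j] ^ c).count('1')
              for c in range(full)]
    return min(dp)
```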
[ "bitmasks", "brute force", "constructive algorithms", "dp", "greedy", "implementation" ]
2,000
#include <bits/stdc++.h> using namespace std; const int N = 5e5+1; int n,m; int a[4][N]; int dp3[N][8]; int dp2[N][4]; bool ok3[8][8]; bool ok2[4][4]; void fill3(){ for(int i = 0;i < 8;i++){ for(int j = 0;j < 8;j++){ bool bad = 0; for(int st = 0;st < 2;st++){ int bits = (bool)(i&(1<<st))+(bool)(i&(1<<(st+1))); bits += (bool)(j&(1<<st))+(bool)(j&(1<<(st+1))); if(bits%2 == 0){ bad = 1; } } if(!bad){ ok3[i][j] = 1; } } } } void fill2(){ for(int i = 0;i < 4;i++){ for(int j = 0;j < 4;j++){ bool bad = 0; for(int st = 0;st < 1;st++){ int bits = (bool)(i&(1<<st))+(bool)(i&(1<<(st+1))); bits += (bool)(j&(1<<st))+(bool)(j&(1<<(st+1))); if(bits%2 == 0){ bad = 1; } } if(!bad){ ok2[i][j] = 1; } } } } void solve2(){ for(int i = 1;i <= m;i++){ int mask = a[1][i]+2*a[2][i]; for(int cur = 0;cur < 4;cur++){ dp2[i][cur] = 1e9; for(int prev = 0;prev < 4;prev++){ if(!ok2[prev][cur])continue; dp2[i][cur] = min(dp2[i][cur],dp2[i-1][prev]+__builtin_popcount(cur^mask)); } } } int ans = 1e9; for(int i = 0;i < 4;i++)ans = min(ans,dp2[m][i]); cout << ans; } void solve3(){ for(int i = 1;i <= m;i++){ int mask = a[1][i]+2*a[2][i]+4*a[3][i]; for(int cur = 0;cur < 8;cur++){ dp3[i][cur] = 1e9; for(int prev = 0;prev < 8;prev++){ if(!ok3[prev][cur])continue; dp3[i][cur] = min(dp3[i][cur],dp3[i-1][prev]+__builtin_popcount(cur^mask)); } } } int ans = 1e9; for(int i = 0;i < 8;i++)ans = min(ans,dp3[m][i]); cout << ans; } void solve(){ cin >> n >> m; if(min(n,m) > 3){ cout << -1; return; } if(min(n,m) == 1){ cout << 0; return; } for(int i = 1;i <= n;i++){ for(int j = 1;j <= m;j++){ char o;cin >> o; a[i][j] = o-'0'; } } if(n == 3)solve3(); else solve2(); } int main(){ ios_base::sync_with_stdio(0); cin.tie(0); fill2(); fill3(); solve(); return 0; }
1391
E
Pairs of Pairs
You have a simple and connected undirected graph consisting of $n$ nodes and $m$ edges. Consider any way to pair some subset of these $n$ nodes such that no node is present in more than one pair. This pairing is \textbf{valid} if for every pair of pairs, the induced subgraph containing all $4$ nodes, two from each pair, has at most $2$ edges (out of the $6$ possible edges). More formally, for any two pairs, $(a,b)$ and $(c,d)$, the induced subgraph with nodes $\{a,b,c,d\}$ should have at most $2$ edges. Please note that the subgraph induced by a set of nodes contains nodes only from this set and edges which have both of its end points in this set. Now, do one of the following: - Find a simple path consisting of at least $\lceil \frac{n}{2} \rceil$ nodes. Here, a path is called simple if it does not visit any node multiple times. - Find a valid pairing in which at least $\lceil \frac{n}{2} \rceil$ nodes are paired. It can be shown that it is possible to find at least one of the two in every graph satisfying constraints from the statement.
Let's build the DFS tree of the given graph, and let $dep(u)$ denote the depth of node $u$ in the tree. If $dep(u) \ge \lceil \frac{n}{2} \rceil$ holds for any node $u$, we have found a path. Otherwise, the maximum depth is at most $\lfloor \frac{n}{2} \rfloor$, and we can find a valid pairing as follows: For every depth $i$ ($1 \le i \le \lfloor \frac{n}{2} \rfloor$), keep pairing nodes at this depth until $0$ or $1$ remain. Clearly, at most $1$ node from each depth will remain unpaired, so, in total, we have paired at least $n - \lfloor \frac{n}{2} \rfloor = \lceil \frac{n}{2} \rceil$ nodes. Finally, let's prove that every subgraph induced from some $2$ pairs has at most $2$ edges. Consider $2$ arbitrary pairs, ($a,b$) and ($c,d$), where $dep(a) = dep(b)$ and $dep(c) = dep(d)$. WLOG, let $dep(a) < dep(c)$. Obviously, the edges ($a,b$) and ($c,d$) cannot exist because edges can only go from a descendant to an ancestor. We can further show that each of $c$ and $d$ can have an edge to at most one of $a$ and $b$. For example, if $c$ has an edge to both $a$ and $b$, we can conclude that $a$ and $b$ are in an ancestor-descendant relationship, which is impossible because both of them are at the same depth. Time Complexity: $\mathcal{O}(n+m)$
[ "constructive algorithms", "dfs and similar", "graphs", "greedy", "trees" ]
2,600
#include <bits/stdc++.h> using namespace std; const int N = 5e5+1; int n,m; vector<int> adj[N]; bool vis[N]; vector<int> all[N]; int dep[N]; int par[N]; int pairs = 0; bool outputted = 0; void dfs(int u){ if(outputted)return; vis[u] = 1; pairs -= all[dep[u]].size()/2; all[dep[u]].push_back(u); pairs += all[dep[u]].size()/2; if(dep[u] >= (n+1)/2){ outputted = 1; cout << "PATH" << "\n"; int cur = u; cout << dep[u] << "\n"; while(cur != 0){ cout << cur << " "; cur = par[cur]; } cout << "\n"; } for(int v:adj[u]){ if(vis[v])continue; par[v] = u; dep[v] = dep[u]+1; dfs(v); } } void solve(){ cin >> n >> m; outputted = 0; for(int i = 1;i <= n;i++){ adj[i].clear(); all[i].clear(); vis[i] = 0; } pairs = 0; for(int i = 1;i <= m;i++){ int a,b;cin >> a >> b; adj[a].push_back(b); adj[b].push_back(a); } dep[1] = 1; dfs(1); if(outputted)return; cout << "PAIRING" << "\n"; cout << pairs << "\n"; for(int i = 1;i < (n+1)/2;i++){ for(int j = 0;j+1 < all[i].size();j+=2){ cout << all[i][j] << " " << all[i][j+1] << "\n"; } } } int main(){ ios_base::sync_with_stdio(0); cin.tie(0); int t;cin >> t; while(t--){ solve(); } return 0; }
1392
A
Omkar and Password
Lord Omkar has permitted you to enter the Holy Church of Omkar! To test your worthiness, Omkar gives you a password which you must interpret! A password is an array $a$ of $n$ positive integers. You apply the following operation to the array: pick any two adjacent numbers that are not equal to each other and replace them with their sum. Formally, choose an index $i$ such that $1 \leq i < n$ and $a_{i} \neq a_{i+1}$, delete both $a_i$ and $a_{i+1}$ from the array and put $a_{i}+a_{i+1}$ in their place. For example, for array $[7, 4, 3, 7]$ you can choose $i = 2$ and the array will become $[7, 4+3, 7] = [7, 7, 7]$. Note that in this array you can't apply this operation anymore. Notice that one operation will decrease the size of the password by $1$. What is the shortest possible length of the password after some number (possibly $0$) of operations?
If your array consists of one number repeated $n$ times, then you obviously can't do any moves to shorten the password. Otherwise, you can show that it is always possible to shorten the password to $1$ number. For an array containing $2$ or more distinct elements, consider the maximum value of the array. If the maximum value appears once, you can just repeat operations on the maximum element until you are left with one number. What if the maximum element appears more than once? Well, because there must exist at least $2$ distinct numbers, you can always pick a maximum element adjacent to a different number to perform an operation on. The array will then have one occurrence of a maximum, and you can simply repeat using the aforementioned logic.
[ "greedy", "math" ]
800
#include <bits/stdc++.h> #define len(v) ((int)((v).size())) #define all(v) (v).begin(), (v).end() #define rall(v) (v).rbegin(), (v).rend() #define chmax(x, v) x = max((x), (v)) #define chmin(x, v) x = min((x), (v)) using namespace std; using ll = long long; void solve() { int n; cin >> n; vector<int> vec(n); for (int i = 0; i < n; ++i) { cin >> vec[i]; } sort(all(vec)); if (vec.front() == vec.back()) cout << n << "\n"; else cout << "1\n"; } int main() { ios::sync_with_stdio(false); cin.tie(0); int nbTests; cin >> nbTests; for (int iTest = 0; iTest < nbTests; ++iTest) { solve(); } }
1392
B
Omkar and Infinity Clock
Being stuck at home, Ray became extremely bored. To pass time, he asks Lord Omkar to use his time bending power: Infinity Clock! However, Lord Omkar will only listen to mortals who can solve the following problem: You are given an array $a$ of $n$ integers. You are also given an integer $k$. Lord Omkar wants you to do $k$ operations with this array. Define one operation as the following: - Set $d$ to be the maximum value of your array. - For every $i$ from $1$ to $n$, replace $a_{i}$ with $d-a_{i}$. The goal is to predict the contents in the array after $k$ operations. Please help Ray determine what the final sequence will look like!
There are only two possible states the array can end up in, and which state it becomes after $k$ turns is determined solely by the parity of $k$. After the first operation, the array will consist of all non-negative numbers ($d-a_{i}$ is never negative because $a_{i}$ never exceeds $d$). After one turn, let's define $x$ as $\max(a_{i})$ over all $i$. Any number $a_{i}$ will turn into $x-a_{i}$. Because a zero will always exist after any one operation, $x$ will be the maximum of the array in the next turn as well. Then the value at index $i$ will turn into $x-(x-a_{i})$, which is just equal to $a_{i}$! Now that it's clear that our array enters a cycle with a period of $2$, we simply do the following: if $k$ is odd, perform $1$ operation; otherwise perform $2$ operations.
[ "implementation", "math" ]
800
#include <iostream> #include <vector> #include <chrono> #include <random> #include <cassert> std::mt19937 rng((int) std::chrono::steady_clock::now().time_since_epoch().count()); int main() { std::ios_base::sync_with_stdio(false); std::cin.tie(NULL); int t; std::cin >> t; while(t--) { int n; long long k; std::cin >> n >> k; std::vector<int> a(n); for(int i = 0; i < n; i++) { std::cin >> a[i]; } if(k > 1) { k = 2 + k % 2; } while(k--) { int mx = -1000000000; for(int i = 0; i < n; i++) { mx = std::max(mx, a[i]); } for(int i = 0; i < n; i++) { a[i] = mx - a[i]; } } for(int i = 0; i < n; i++) { std::cout << a[i] << (i + 1 == n ? '\n' : ' '); } } }
1392
C
Omkar and Waterslide
Omkar is building a waterslide in his water park, and he needs your help to ensure that he does it as efficiently as possible. Omkar currently has $n$ supports arranged in a line, the $i$-th of which has height $a_i$. Omkar wants to build his waterslide from the right to the left, so his supports must be nondecreasing in height in order to support the waterslide. In $1$ operation, Omkar can do the following: take any \textbf{contiguous subsegment} of supports which is \textbf{nondecreasing by heights} and add $1$ to each of their heights. Help Omkar find the minimum number of operations he needs to perform to make his supports able to support his waterslide! An array $b$ is a subsegment of an array $c$ if $b$ can be obtained from $c$ by deletion of several (possibly zero or all) elements from the beginning and several (possibly zero or all) elements from the end. An array $b_1, b_2, \dots, b_n$ is called nondecreasing if $b_i\le b_{i+1}$ for every $i$ from $1$ to $n-1$.
Call the initial array $a$. We claim that the answer is $\sum \max(a_i-a_{i+1}, 0)$ over the entire array of supports (call this value $ans$). Now let's show why. First, notice that in a nondecreasing array, $ans = 0$. So, the problem is now to apply operations to the array such that $ans = 0$. Now, let's see how applying one operation affects $ans$. Perform an operation on an arbitrary nondecreasing subarray that begins at index $i$ and ends at index $j$. Note that the differences of elements within the subarray stay the same, so the only two pairs of elements which affect the sum are $a_{i-1}, a_i$ and $a_j, a_{j+1}$. Let's initially look at the pair $a_{i-1}, a_i$. If $a_{i-1} \leq a_i$ (or if $i = 1$), applying an operation would not change $ans$. But, if $a_{i-1} \gt a_i$, applying an operation would decrease $ans$ by $1$. Now let's look at the pair $a_j, a_{j+1}$. If $a_j \leq a_{j+1}$ (or if $j = n$), applying an operation would not change $ans$. But, if $a_j \gt a_{j+1}$, applying an operation would increase $ans$ by $1$. We have now shown that we can decrease $ans$ by at most $1$ with each operation, showing that it is impossible to make his supports able to hold the waterslide in fewer than $\sum \max(a_i-a_{i+1}, 0)$ operations over the initial array. Now, let's construct a solution that applies exactly $\sum \max(a_i-a_{i+1}, 0)$ operations to make the array valid. Consider applying operations to each suffix of length $j$ until the suffix of length $j+1$ is nondecreasing. Since operations are applied iff $a_{n-j+1} \lt a_{n-j}$, and each operation decreases the difference $a_{n-j}-a_{n-j+1}$ by $1$, the total number of operations is just the sum of $\max(0, a_{n-j}-a_{n-j+1})$, which is equal to $\sum \max(a_i-a_{i+1}, 0)$ over the entire array.
[ "greedy", "implementation" ]
1,200
#include <iostream> #include <vector> #include <chrono> #include <random> #include <cassert> std::mt19937 rng((int) std::chrono::steady_clock::now().time_since_epoch().count()); int main() { std::ios_base::sync_with_stdio(false); std::cin.tie(NULL); int t; std::cin >> t; while(t--) { int n; std::cin >> n; long long last = 0; long long ans = 0; while(n--) { long long x; std::cin >> x; x += ans; if(x >= last) { last = x; } else { ans += last - x; } } std::cout << ans << '\n'; } }
1392
D
Omkar and Bed Wars
Omkar is playing his favorite pixelated video game, Bed Wars! In Bed Wars, there are $n$ players arranged in a circle, so that for all $j$ such that $2 \leq j \leq n$, player $j - 1$ is to the left of the player $j$, and player $j$ is to the right of player $j - 1$. Additionally, player $n$ is to the left of player $1$, and player $1$ is to the right of player $n$. Currently, each player is attacking either the player to their left or the player to their right. This means that each player is currently being attacked by either $0$, $1$, or $2$ other players. A key element of Bed Wars strategy is that if a player is being attacked by exactly $1$ other player, then they should logically attack that player in response. If instead a player is being attacked by $0$ or $2$ other players, then Bed Wars strategy says that the player can logically attack either of the adjacent players. Unfortunately, it might be that some players in this game are not following Bed Wars strategy correctly. Omkar is aware of whom each player is currently attacking, and he can talk to any amount of the $n$ players in the game to make them instead attack another player  — i. e. if they are currently attacking the player to their left, Omkar can convince them to instead attack the player to their right; if they are currently attacking the player to their right, Omkar can convince them to instead attack the player to their left. Omkar would like all players to be acting logically. Calculate the minimum amount of players that Omkar needs to talk to so that after all players he talked to (if any) have changed which player they are attacking, all players are acting logically according to Bed Wars strategy.
As described in the statement, the only situation in which a player is not acting logically according to Bed Wars strategy is when they are being attacked by exactly $1$ player, but they are not attacking that player in response. Let the player acting illogically be player $j$. There are two cases in which player $j$ is acting illogically: The first case is that player $j$ is being attacked by the player to their left, player $j - 1$, and not by the player to their right, player $j + 1$. This means that $s_{j - 1} =$ R, as they are attacking player $j$ who is to their right, and $s_{j + 1} =$ R, as they are attacking the player to the right instead of player $j$ who is to their left. Furthermore, since player $j$ is not being logical, instead of attacking player $j - 1$, they are attacking player $j + 1$, so $s_j =$ R. This means that in this case $s_{j - 1} = s_j = s_{j + 1} =$ R, i. e. R occurs $3$ times in a row somewhere in $s$. We want to avoid this case, so the final string (after Omkar has convinced some player to change) cannot have $3$ Rs in a row. The second case is that player $j$ is being attacked by the player to their right, player $j + 1$, and not by the player to their left, player $j - 1$. It is easy to see that this case is essentially the same as the first case, but reversed, meaning that the final string also cannot have $3$ Ls in a row. Combining these two cases, the condition in the statement reduces to the simple condition that the same character cannot occur $3$ times in a row in our final string. If we have some subsegment of length $l$ of the same character in $s$, and this subsegment is maximal, so that the characters preceding and following it in $s$ are different from the characters in it, then we can make all players in this subsegment logical by having Omkar talk to $\lfloor \frac l 3 \rfloor$ of the players in that subsegment. 
Therefore, assuming that not all characters in $s$ are the same, we simply find the lengths of all maximal subsegments of the same characters in $s$ (noting that we need to wrap around if $s_1 = s_n$), and sum over their lengths divided (rounding down) by $3$. Finding these lengths and summing over their quotients can be done easily by looping through $s$ in $O(n)$. If all the characters in $s$ are the same, then we can see similarly that Omkar needs to talk to $\lceil \frac n 3 \rceil$ of the players. (Note that we are rounding up instead of down - this is due to the circular nature of $s$).
[ "dp", "greedy" ]
1,700
#include <bits/stdc++.h> using namespace std; void solve() { int n, ans = 0; cin >> n; string s; cin >> s; int cnt = 0; while(s.size() && s[0] == s.back()) { cnt++; s.pop_back(); } if(s.empty()) { if(cnt <= 2) { cout << "0\n"; return; } if(cnt == 3) { cout << "1\n"; return; } cout << (cnt + 2) / 3 << '\n'; return; } s.push_back('$'); for(int i = 0; i + 1 < s.size(); i++) { cnt++; if(s[i] != s[i + 1]) { ans += cnt / 3; cnt = 0; } } cout << ans << '\n'; } int main() { ios_base::sync_with_stdio(false); cin.tie(0); int t; cin >> t; while(t--) solve(); }
1392
E
Omkar and Duck
\textbf{This is an interactive problem.} Omkar has just come across a duck! The duck is walking on a grid with $n$ rows and $n$ columns ($2 \leq n \leq 25$) so that the grid contains a total of $n^2$ cells. Let's denote by $(x, y)$ the cell in the $x$-th row from the top and the $y$-th column from the left. Right now, the duck is at the cell $(1, 1)$ (the cell in the top left corner) and would like to reach the cell $(n, n)$ (the cell in the bottom right corner) by moving either down $1$ cell or to the right $1$ cell each second. Since Omkar thinks ducks are fun, he wants to play a game with you based on the movement of the duck. First, for each cell $(x, y)$ in the grid, you will tell Omkar a nonnegative integer $a_{x,y}$ not exceeding $10^{16}$, and Omkar will then put $a_{x,y}$ uninteresting problems in the cell $(x, y)$. After that, the duck will start their journey from $(1, 1)$ to $(n, n)$. For each cell $(x, y)$ that the duck crosses during their journey (including the cells $(1, 1)$ and $(n, n)$), the duck will eat the $a_{x,y}$ uninteresting problems in that cell. Once the duck has completed their journey, Omkar will measure their mass to determine the total number $k$ of uninteresting problems that the duck ate on their journey, and then tell you $k$. Your challenge, given $k$, is to exactly reproduce the duck's path, i. e. to tell Omkar precisely which cells the duck crossed on their journey. To be sure of your mastery of this game, Omkar will have the duck complete $q$ different journeys ($1 \leq q \leq 10^3$). Note that all journeys are independent: at the beginning of each journey, the cell $(x, y)$ will still contain $a_{x,y}$ uninteresting tasks.
The problem essentially boils down to constructing a grid such that any path from $(1, 1)$ to $(n, n)$ has a different sum and you can easily determine any path from its sum. You can do this using the following construction: for all $(x, y)$, if $x$ is even, then let $a_{x,y} = 2^{x + y}$; otherwise, let $a_{x,y} = 0$. The construction is illustrated below for $n = 8$: $\begin{matrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 2^{3} & 2^{4} & 2^{5} & 2^{6} & 2^{7} & 2^{8} & 2^{9} & 2^{10} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 2^{5} & 2^{6} & 2^{7} & 2^{8} & 2^{9} & 2^{10} & 2^{11} & 2^{12} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 2^{7} & 2^{8} & 2^{9} & 2^{10} & 2^{11} & 2^{12} & 2^{13} & 2^{14} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 2^{9} & 2^{10} & 2^{11} & 2^{12} & 2^{13} & 2^{14} & 2^{15} & 2^{16} \\ \end{matrix}$ You can see that this construction works using the following observations: The maximum value of $n$ is $25$, and $2^{2 \cdot 25} = 2^{50} < 10^{16}$. For any integer $j$ between $2$ and $2n$ (inclusive), all paths cross exactly one cell $(x, y)$ such that $x + y = j$. For any cell $(x, y)$, you can move to either one or two cells, and if you can move to two cells, then exactly one of those will have $x'$ even and exactly one of those will have $x'$ odd, as the cells will necessarily be $(x + 1, y)$ and $(x, y + 1)$ which have different parities of $x'$. This means that the sum on any path will be the sum of distinct powers of $2$ between $2^2$ and $2^{2n}$ (inclusive), meaning that given that we know which cell $(x, y)$ the path crossed satisfying $x + y = j$, we can determine which cell $(x', y')$ the path crossed satisfying $x' + y' = j + 1$ by checking whether the path sum contains $2^{j + 1}$ and then appropriately selecting either $(x', y') = (x + 1, y)$ or $(x', y') = (x, y + 1)$. We know that the path must start at $(1, 1)$ so we can therefore easily determine the rest of the path given the sum.
[ "bitmasks", "constructive algorithms", "interactive", "math" ]
2,100
#include <bits/stdc++.h> #define len(v) ((int)((v).size())) #define all(v) (v).begin(), (v).end() #define rall(v) (v).rbegin(), (v).rend() #define chmax(x, v) x = max((x), (v)) #define chmin(x, v) x = min((x), (v)) using namespace std; using ll = long long; int main() { ios::sync_with_stdio(false); cin.tie(0); int side; cin >> side; vector<vector<ll>> grid(side, vector<ll>(side, 0)); for (int i = 0; i < side; ++i) { for (int j = 0; j < side; ++j) { if ((i-j+side)&2LL) grid[i][j] = (1LL << (i+j)); cout << grid[i][j] << " \n"[j==side-1]; } } cout << flush; int nbQuery; cin >> nbQuery; for (int iQuery = 0; iQuery < nbQuery; ++iQuery) { ll sum; cin >> sum; cout << "1 1"; int row = 0, col = 0; for (int diag = 0; diag < 2*side-2; ++diag) { ll should = sum&(1LL<<(diag+1)); if (row+1<side && grid[row+1][col] == should) ++row; else ++col; cout << " " << row+1 << " " << col+1; } cout << endl << flush; } }
1392
F
Omkar and Landslide
Omkar is standing at the foot of Celeste mountain. The summit is $n$ meters away from him, and he can see all of the mountains up to the summit, so for all $1 \leq j \leq n$ he knows that the height of the mountain at the point $j$ meters away from himself is $h_j$ meters. It turns out that for all $j$ satisfying $1 \leq j \leq n - 1$, $h_j < h_{j + 1}$ (meaning that heights are strictly increasing). Suddenly, a landslide occurs! While the landslide is occurring, the following occurs: every minute, if $h_j + 2 \leq h_{j + 1}$, then one square meter of dirt will slide from position $j + 1$ to position $j$, so that $h_{j + 1}$ is decreased by $1$ and $h_j$ is increased by $1$. These changes occur simultaneously, so for example, if $h_j + 2 \leq h_{j + 1}$ and $h_{j + 1} + 2 \leq h_{j + 2}$ for some $j$, then $h_j$ will be increased by $1$, $h_{j + 2}$ will be decreased by $1$, and $h_{j + 1}$ will be both increased and decreased by $1$, meaning that in effect $h_{j + 1}$ is unchanged during that minute. The landslide ends when there is no $j$ such that $h_j + 2 \leq h_{j + 1}$. Help Omkar figure out what the values of $h_1, \dots, h_n$ will be after the landslide ends. It can be proven that under the given constraints, the landslide will always end in finitely many minutes. Note that because of the large amount of input, it is recommended that your code uses fast IO.
Fun fact: This problem was originally proposed as B. TL;DR: We can show that in the resulting array, every pair of adjacent elements differs by exactly $1$ except that there may be at most one pair of adjacent equal elements. It is easy to see that there is only one such array satisfying that condition that also has the same length and sum as the given array, so we simply calculate that array based on the sum of the given array. Proof: Clearly, the order in which we perform the slides (transfers of one square meter of dirt from $a_{j + 1}$ to $a_j$ for some $j$) does not matter. Consider then performing slides in the following manner: whenever we perform a slide from $a_j$ to $a_{j - 1}$, after that slide, if it is possible to perform a slide from $a_{j - 1}$ to $a_{j - 2}$, we will do so, and then from $a_{j - 2}$ to $a_{j - 3}$ and so on. We will call this action "performing a sequence of slides from $a_j$". Assume that we have just performed a sequence of slides from $a_j$. We can see that if there was a pair of adjacent elements to the left of $a_j$ that were equal, i. e. some $k < j$ such that $a_{k - 1} = a_k$, then, assuming that $a_{k -1}, a_k$ is the rightmost such pair, then the sequence of slides that we started will end with $a_k$ being increased. In this case, $a_{k - 1}$ and $a_k$ are no longer equal, but $a_k, a_{k + 1}$ may now be equal, so the amount of pairs of adjacent equal elements to the left of $a_j$ has either decreased or stayed the same. On the other hand, if there was no such pair, then the sequence of slides would end with $a_1$ being increased, meaning it might now be true that $a_1$ and $a_2$ are equal, so that the amount of pairs of adjacent equal elements to the left of $a_j$ is either still $0$ or now $1$. 
Combining these two facts, we see that if there were either $0$ or $1$ pairs of adjacent equal elements to the left of $a_j$ to start with, then there will only be either $0$ or $1$ pairs of adjacent equal elements to the left of $a_j$ after performing a sequence of slides from $a_j$. Noting that as our array is initially strictly increasing, there are initially no pairs of adjacent equal elements, we can simply first perform as many sequences of slides from $a_2$ as possible, then perform as many sequences of slides from $a_3$ as possible, and so on, until we perform as many sequences of slides from $a_n$ as possible. When we are performing sequences of slides from $a_j$, there can clearly only be either $0$ or $1$ pairs of adjacent equal elements to the left of $a_j$, and there can't be any such pairs to the right of $a_j$ as that part of the array hasn't been touched yet and is therefore still strictly increasing. Therefore, we can conclude that once all possible slides have been performed, the entire array will contain at most $1$ pair of adjacent equal elements. Since it cannot be possible to perform any more slides once all possible slides have been performed, all pairs of adjacent elements that are not equal must differ by at most $1$. It is easy to see that there is only one array satisfying these conditions that has the same length $n$ and sum $S = \sum_{j = 1}^n a_j$. You can construct this array by starting with the array $0, 1, 2, \dots, n - 1$, then adding $1$ to each element from left to right, looping back to the beginning when you reach the end, until the sum of the array is $S$. From this construction we can derive the following formula for the array: $a_j = j - 1 + \lfloor \frac {S - \frac {n(n - 1)} 2} n \rfloor + \{j \leq (S - \frac {n(n - 1)} 2) \% n\}$ where $\{C\}$ is $1$ if the condition $C$ is satisfied, and $0$ otherwise. 
EDIT: The fact that the order of operations doesn't matter turned out to be harder to prove than I thought (thanks to the comments for pointing this out), so I decided to add the following proof: If you have two sequences of maximal operations, then what we want to show is that they have to consist of the same operations (possibly in different orders). Since they are both maximal, they must either both be empty or both be nonempty. If they are both empty then we are done. If they are both nonempty: consider the current state of the vector (before applying either sequence of operations). Since the sequences are nonempty, there is at least one operation $\alpha$ that can be immediately applied on the vector. Here we should make the observation that applying an operation at some index cannot prevent operations at other indexes from being able to be applied (it can only allow other operations to be applied). Therefore, applying other (different) operations cannot prevent operation $\alpha$ from being able to be applied, so operation $\alpha$ must occur in both sequences. From our observation, we can see that if, in either sequence, operation $\alpha$ does not occur initially, then operation $\alpha$ can be applied initially because by performing it earlier we are not preventing any of the operations in between from being applied. Thus, the first operation of each sequence is now the same, so we can apply the same argument to the remainder of the sequence (since it must be finite).
[ "binary search", "constructive algorithms", "data structures", "greedy", "math" ]
2,400
#include <bits/stdc++.h> using namespace std; typedef long long ll; int main() { ios_base::sync_with_stdio(0), cin.tie(0); ll n; cin >> n; ll s = 0; for (ll x, i = 0; i < n; ++i) cin >> x, s += x; ll l = (s - n * (n-1) / 2) / n + 1; s = l * n + n * (n-1) / 2 - s; for (int i = 0; i < n; ++i) cout << l + i - (n-i <= s) << " \n"[i+1==n]; return 0; }
1392
G
Omkar and Pies
Omkar has a pie tray with $k$ ($2 \leq k \leq 20$) spots. Each spot in the tray contains either a chocolate pie or a pumpkin pie. However, Omkar does not like the way that the pies are currently arranged, and has another ideal arrangement that he would prefer instead. To assist Omkar, $n$ elves have gathered in a line to swap the pies in Omkar's tray. The $j$-th elf from the left is able to swap the pies at positions $a_j$ and $b_j$ in the tray. In order to get as close to his ideal arrangement as possible, Omkar may choose a contiguous subsegment of the elves and then pass his pie tray through the subsegment starting from the left. However, since the elves have gone to so much effort to gather in a line, they request that Omkar's chosen segment contain at least $m$ ($1 \leq m \leq n$) elves. Formally, Omkar may choose two integers $l$ and $r$ satisfying $1 \leq l \leq r \leq n$ and $r - l + 1 \geq m$ so that first the pies in positions $a_l$ and $b_l$ will be swapped, then the pies in positions $a_{l + 1}$ and $b_{l + 1}$ will be swapped, etc. until finally the pies in positions $a_r$ and $b_r$ are swapped. Help Omkar choose a segment of elves such that the amount of positions in Omkar's final arrangement that contain the same type of pie as in his ideal arrangement is the maximum possible. \textbf{Note that since Omkar has a big imagination, it might be that the amounts of each type of pie in his original arrangement and in his ideal arrangement do not match}.
Consider any two binary strings $u$ and $v$ of length $k$. Notice that if you swap the bits at positions $\alpha$ and $\beta$ in $u$ and also swap the bits at positions $\alpha$ and $\beta$ in $v$, then the amount of common bits in $u$ and $v$ remains the same. Furthermore, you can do this multiple times - i. e. applying the same sequence of swaps to $u$ and $v$ doesn't change their amount of common bits. Let's apply this to the problem at hand. Assume that Omkar has selected the subsegment of elves between some $l$ and $r$. Let $s$ be Omkar's original binary string, let $s'$ be $s$ after applying the subsegment of swaps between $l$ and $r$ (inclusive) to it, and let $t$ be Omkar's ideal binary string. Now consider applying all the swaps in the subsegment from $1$ to $r$ (inclusive) to both $s'$ and $t$, but in reverse - first we apply the $r$-th swap, then the $r - 1$-th swap, and so on, until we finally apply the $1$-st swap. From our first observation, these new strings $s^{\prime\prime}$ and $t^{\prime\prime}$ have the same amount of common bits as $s'$ and $t$. We additionally notice that just as $t^{\prime\prime}$ is $t$ with the subsegment of swaps from $1$ to $r$ applied in reverse, $s^{\prime\prime}$ is actually $s$ with the subsegment of swaps from $1$ to $l - 1$ applied in reverse, as the subsegment of swaps from $l$ to $r$ that were applied to $s$ to create $s'$ has been undone. Let us then define some more strings as follows: for all $j$ such that ($0 \leq j \leq n$), let $s_j$ be the result of applying the subsegment of swaps from $1$ to $j$ in reverse to $s$, and let $t_j$ be the result of applying the subsegment of swaps from $1$ to $j$ in reverse to $t$ (so $s_0 = s$ and $t_0 = t$).
We now see that the amount of common bits that result from choosing a subsegment of swaps between $l$ and $r$ is equivalent to the amount of common bits between $s_{l - 1}$ and $t_r$, and so the problem is now simply to find two indices $j_1$ and $j_2$ such that $j_2 - j_1 \geq m$ and the amount of common bits between $s_{j_1}$ and $t_{j_2}$ is the maximum possible. Here we make another observation: if $\omega$ is the amount of common bits between two binary strings $u$ and $v$ of length $k$, $\epsilon$ is the amount of bits set to $1$ in $u$, $\zeta$ is the amount of bits set to $1$ in $v$, and $\lambda$ is the amount of common bits set to $1$ in $u$ and $v$, then $\omega = 2\lambda + k - \epsilon - \zeta$. Since we are only comparing strings derived from performing swaps on $s$ to strings derived from performing swaps on $t$, and swaps don't change the overall amount of $1$ bits in a string, $\epsilon$ and $\zeta$, like $k$, are constants - $\epsilon$ is the amount of $1$ bits in $s$, and $\zeta$ is the amount of $1$ bits in $t$. This means that maximizing $\omega$, the amount of common bits in $u$ and $v$, is equivalent to maximizing $\lambda$, the amount of common $1$ bits in $u$ and $v$. Therefore, we can now proceed with bitmask DP to finish the problem. For a binary string $\mu$ of length $k$, let $left_\mu$ be the smallest index $j$ such that $\mu$ is a subset of $s_j$ considering $1$ bits, and let $right_\mu$ be the largest index $j$ such that $\mu$ is a subset of $t_j$ considering $1$ bits. These DP values are straightforward to compute, and we can obtain our answer by choosing the subsegment of swaps from $left_\mu$ to $right_\mu$ for any $\mu$ with the largest amount of $1$ bits such that $left_\mu$ and $right_\mu$ both exist and $right_\mu - left_\mu \geq m$.
[ "bitmasks", "dfs and similar", "dp", "math", "shortest paths" ]
2,900
#include <bits/stdc++.h> using namespace std; int p[20],dp[2][(1<<20)]; int getmask(string s) { int ans=0; for (int i=0;i<s.size();i++) ans|=((s[i]-'0')<<i); return ans; } int main() { int n,m,k; scanf("%d%d%d",&n,&m,&k); string a,b; cin >> a >> b; for (int i=0;i<k;i++) p[i]=i; for (int i=0;i<(1<<k);i++) { dp[0][i]=1e9; dp[1][i]=-1e9; } dp[0][getmask(a)]=0; dp[1][getmask(b)]=0; for (int i=1;i<=n;i++) { int x,y; scanf("%d%d",&x,&y); x--; y--; swap(p[x],p[y]); string aa(k,'0'),bb(k,'0'); for (int j=0;j<k;j++) { aa[p[j]]=a[j]; bb[p[j]]=b[j]; } dp[0][getmask(aa)]=min(dp[0][getmask(aa)],i); dp[1][getmask(bb)]=i; } int o1=count(a.begin(),a.end(),'1'),o2=count(b.begin(),b.end(),'1'); pair<int,pair<int,int> > ans(0,{0,0}); for (int i=(1<<k)-1;i>=0;i--) { if (dp[1][i]-dp[0][i]>=m) ans=max(ans,make_pair(k-(o1+o2-2*__builtin_popcount(i)),make_pair(dp[0][i]+1,dp[1][i]))); for (int j=0;j<k;j++) { if (i&(1<<j)) { dp[0][i^(1<<j)]=min(dp[0][i^(1<<j)],dp[0][i]); dp[1][i^(1<<j)]=max(dp[1][i^(1<<j)],dp[1][i]); } } } printf("%d\n%d %d",ans.first,ans.second.first,ans.second.second); }
1392
H
ZS Shuffles Cards
zscoder has a deck of $n+m$ custom-made cards, which consists of $n$ cards labelled from $1$ to $n$ and $m$ jokers. Since zscoder is lonely, he wants to play a game with himself using those cards. Initially, the deck is shuffled uniformly randomly and placed on the table. zscoder has a set $S$ which is initially empty. Every second, zscoder draws the top card from the deck. - If the card has a number $x$ written on it, zscoder removes the card and adds $x$ to the set $S$. - If the card drawn is a joker, zscoder places all the cards back into the deck and reshuffles (uniformly randomly) the $n+m$ cards to form a new deck (hence the new deck now contains all cards from $1$ to $n$ and the $m$ jokers). Then, if $S$ currently contains all the elements from $1$ to $n$, the game ends. Shuffling the deck doesn't take time at all. What is the expected number of seconds before the game ends? We can show that the answer can be written in the form $\frac{P}{Q}$ where $P, Q$ are relatively prime integers and $Q \neq 0 \bmod 998244353$. Output the value of $(P \cdot Q^{-1})$ modulo $998244353$.
Firstly, let's find a simple dp. Let $f(x)$ denote the expected time before the game ends when the deck is full (with $n+m$ cards) and $S$ contains $n - x$ elements. Hence, $f(0)=0$. Our goal is to find $f(n)$. Suppose the jokers are also numbered from $1$ to $m$ and $S$ contains $n-x$ elements. Consider the cards drawn before we draw our first joker (which causes the deck to be reshuffled). Suppose we draw $i$ cards with a number, $l$ of which are numbers not in $S$, before drawing our first joker. There are $\binom{x}{l} \cdot \binom{n-x}{i-l} \cdot i!$ ways to choose and permute the first $i$ cards, $m$ ways to choose the first joker and $(n+m-i-1)!$ ways to permute the cards that were not drawn. The total time taken is $i+1+f(x-l)$. Hence, $f(x) = \displaystyle\sum_{l=0}^{x}\displaystyle\sum_{i=0}^{n}\binom{x}{l}\binom{n-x}{i-l} \cdot i! \cdot m \cdot \frac{(n+m-i-1)!}{(n+m)!} \cdot (f(x-l)+i+1).$ This gives us an easy $O(n^{3})$ solution. Note that $f(x)$ also appears on the right-hand side, so you need to move the corresponding term to the left before computing (this is not difficult). To optimize our solution, we just need to manipulate the sums. I will show how to simplify $\displaystyle\sum_{l=0}^{x}\displaystyle\sum_{i=0}^{n}\binom{x}{l}\binom{n-x}{i-l} \cdot i! \cdot m \cdot \frac{(n+m-i-1)!}{(n+m)!} \cdot (i+1)$. The way to simplify $\displaystyle\sum_{l=0}^{x}\displaystyle\sum_{i=0}^{n}\binom{x}{l}\binom{n-x}{i-l} \cdot i! \cdot m \cdot \frac{(n+m-i-1)!}{(n+m)!} \cdot f(x-l)$ is analogous. We have $\displaystyle\sum_{l=0}^{x}\displaystyle\sum_{i=0}^{n}\binom{x}{l}\binom{n-x}{i-l} \cdot i! \cdot m \cdot \frac{(n+m-i-1)!}{(n+m)!} \cdot (i+1)$ $= \frac{m \cdot x! \cdot (n-x)!}{(n+m)!} \cdot \displaystyle\sum_{l=0}^{x}\frac{1}{l!(x-l)!} \displaystyle \sum_{i=0}^{n}\frac{(i+1)! \cdot (n+m-i-1)!}{(i-l)! \cdot (n-x-i+l)!}$ (expanding and regrouping) $= \frac{m \cdot x! \cdot (n-x)!}{(n+m)!} \cdot \displaystyle\sum_{l=0}^{x}\frac{(m-1+x-l)!(l+1)}{(x-l)!} \displaystyle \sum_{i=0}^{n}\frac{(i+1)!}{(l+1)!(i-l)!} \cdot \frac{(n+m-i-1)!}{(n-x-i+l)!(m-1+x-l)!}$ (making binomial coefficients appear) $= \frac{m \cdot x! \cdot (n-x)!}{(n+m)!} \cdot \displaystyle\sum_{l=0}^{x}\frac{(m-1+x-l)!(l+1)}{(x-l)!} \displaystyle \sum_{i=0}^{n}\binom{i+1}{l+1}\binom{n+m-i-1}{m-1+x-l}$ Recall that $\displaystyle\sum_{i}\binom{i}{a}\binom{n-i}{b-a} = \binom{n+1}{b+1}$, because we can count the right-hand side by fixing the position of the $(a+1)$-th element, where $i$ denotes the number of elements to the left of the $(a+1)$-th element. Hence, $= \frac{m \cdot x! \cdot (n-x)!}{(n+m)!} \cdot \displaystyle\sum_{l=0}^{x}\frac{(m-1+x-l)!(l+1)}{(x-l)!} \cdot \binom{n+m+1}{m+x+1}$ $= \frac{m \cdot x! \cdot (n-x)!}{(n+m)!} \cdot \binom{n+m+1}{m+x+1} \cdot \displaystyle\sum_{l=0}^{x}\frac{(m-1+x-l)!}{(x-l)!} \cdot (l+1)$ $= \frac{m \cdot x! \cdot (n-x)!}{(n+m)!} \cdot \binom{n+m+1}{m+x+1} \cdot \displaystyle\sum_{l=0}^{x}\frac{(m-1+l)!}{l!} \cdot (x-l+1)$. We can compute the latter sum in $O(1)$ via prefix sums (after splitting the factor $x-l+1$ into $x+1$ and $-l$). A similar computation can be done for $\displaystyle\sum_{l=0}^{x}\displaystyle\sum_{i=0}^{n}\binom{x}{l}\binom{n-x}{i-l} \cdot i! \cdot m \cdot \frac{(n+m-i-1)!}{(n+m)!} \cdot f(x-l)$. Hence, $f(x)$ can be computed in $O(1)$ (with prefix sums) for $x=1$ to $n$. This gives an $O(n+m)$ time solution if you precompute factorials.
[ "combinatorics", "dp", "math", "probabilities" ]
3,000
const val MOD = 998244353L fun main() { val (n, m) = readLine()!!.split(" ").map { it.toInt() } val factorial = LongArray(4000002) factorial[0] = 1L for (j in 1..4000001) { factorial[j] = (j.toLong() * factorial[j - 1]) % MOD } val factInv = LongArray(4000002) factInv[4000001] = factorial[4000001] pow -1 for (j in 4000000 downTo 0) { factInv[j] = ((j + 1).toLong() * factInv[j + 1]) % MOD } fun choose(a: Int, b: Int) = if (a < 0 || b < 0 || b > a) 0L else (factorial[a] * ((factInv[b] * factInv[a - b]) % MOD)) % MOD val answer = LongArray(n + 1) var currSum = 0L for (k in 1..n) { var sum = currSum * choose(n + m, m + k) sum %= MOD sum += factorial[m - 1] * ((choose(m + k + 1, m + 1) * choose(n + m + 1, m + k + 1)) % MOD) sum %= MOD sum *= (factorial[k] * ((factorial[n - k] * ((factInv[n + m] * m.toLong()) % MOD)) % MOD)) % MOD sum %= MOD answer[k] = sum * (((m + k).toLong() * ((factInv[k] * factorial[k - 1]) % MOD)) % MOD) answer[k] %= MOD currSum += factorial[m + k - 1] * ((factInv[k] * answer[k]) % MOD) currSum %= MOD } println(answer[n]) } const val MOD_TOTIENT = MOD.toInt() - 1 infix fun Long.pow(power: Int): Long { var e = power e %= MOD_TOTIENT if (e < 0) { e += MOD_TOTIENT } if (e == 0 && this == 0L) { return this } var b = this % MOD var res = 1L while (e > 0) { if (e and 1 != 0) { res *= b res %= MOD } b *= b b %= MOD e = e shr 1 } return res }
1392
I
Kevin and Grid
As Kevin is in BigMan's house, suddenly a trap sends him onto a grid with $n$ rows and $m$ columns. BigMan's trap is configured by two arrays: an array $a_1,a_2,\ldots,a_n$ and an array $b_1,b_2,\ldots,b_m$. In the $i$-th row there is a heater which heats the row by $a_i$ degrees, and in the $j$-th column there is a heater which heats the column by $b_j$ degrees, so that the temperature of cell $(i,j)$ is $a_i+b_j$. Fortunately, Kevin has a suit with one parameter $x$ and two modes: - heat resistance. In this mode suit can stand all temperatures greater or equal to $x$, but freezes as soon as reaches a cell with temperature less than $x$. - cold resistance. In this mode suit can stand all temperatures less than $x$, but will burn as soon as reaches a cell with temperature at least $x$. Once Kevin lands on a cell the suit automatically turns to cold resistance mode if the cell has temperature less than $x$, or to heat resistance mode otherwise, and cannot change after that. We say that two cells are adjacent if they share an edge. Let a path be a sequence $c_1,c_2,\ldots,c_k$ of cells such that $c_i$ and $c_{i+1}$ are adjacent for $1 \leq i \leq k-1$. We say that two cells are connected if there is a path between the two cells consisting only of cells that Kevin can step on. A connected component is a maximal set of pairwise connected cells. We say that a connected component is \textbf{good} if Kevin can escape the grid starting from it  — when it contains at least one border cell of the grid, and that it's \textbf{bad} otherwise. To evaluate the situation, Kevin gives a score of $1$ to each good component and a score of $2$ for each bad component. The final score will be the difference between the total score of components with temperatures bigger than or equal to $x$ and the score of components with temperatures smaller than $x$. There are $q$ possible values of $x$ that Kevin can use, and for each of them Kevin wants to know the final score. 
Help Kevin defeat BigMan!
An obvious solution would be to run a DFS for every query, but that is $O(nmq)$. Firstly we focus on answering a single question. We represent our input with two graphs (one for cells with temperature at least $X$ and one for cells with temperature less than $X$), in which we add an edge between any two neighbouring cells of the same graph. As each is a subgraph of the grid graph, it is planar, and thus we may apply Euler's formula to both graphs: $V_1+F_1=E_1+1+C_1$, where $V_1$ is the number of vertices in graph 1, $F_1$ is the number of faces in graph 1, $E_1$ the number of edges and $C_1$ the number of connected components. However, some faces are not interesting, namely the outer face and the faces formed by a $2 \times 2$ square of adjacent cells; let $Q_1$ be the number of such squares. Similarly, $V_2+F_2=E_2+1+C_2$. The interesting faces of graph 1 represent exactly the connected components of graph 2 that cannot reach the border, and vice versa. Subtracting the two equations gives $C_1-F_1+F_2-C_2=V_1-E_1+E_2-V_2$. Because of the interpretation above, the answer equals $C_1-F_1+F_2-C_2+Q_1-Q_2$ (each side's score is its number of components plus its number of bad components, and the bad components of each graph are the interesting faces of the other), so the answer is $V_1-E_1+E_2-V_2+Q_1-Q_2$. We now need an efficient way to count the edges and squares. Let's count horizontal edges; vertical edges are analogous. Note that $a_i+b_j \geq X$ and $a_i+b_{j+1} \geq X$ hold simultaneously iff $a_i+\min(b_j,b_{j+1}) \geq X$. So we create an array $B$ with $B_j=\min(b_j,b_{j+1})$; the number of horizontal edges in the hot graph is the number of index pairs $i,j$ such that $a_i+B_j \geq X$. The same trick, with $\max$ in place of $\min$, counts edges in the cold regions. To count the pairs $i,j$ with $a_i+B_j \geq X$ fast, we apply a fast Fourier transform to the frequency arrays of $a$ and $B$ and multiply them; after inverting the transform, suffix sums let us answer each query in $O(1)$. The number of $2 \times 2$ squares can be counted in a similar way. The final complexity is thus $O((n+m)\log(n+m)+A\log A+q)$, where $A=\max(a_i,b_j)$.
[ "fft", "graphs", "math" ]
3,300
#include<bits/stdc++.h> using namespace std; #define MAX 262144 #define MAXN 1000000 long long int a[MAXN],b[MAXN]; using cd = complex<double>; const double PI = acos(-1); vector<cd> A(MAX),B(MAX); vector<cd> Amx(MAX),Bmx(MAX); vector<cd> Amn(MAX),Bmn(MAX); vector<cd> E11(MAX),E12(MAX),E21(MAX),E22(MAX); vector<cd> SQ1(MAX),SQ2(MAX); vector<cd> V(MAX); vector<long long int> A1(MAX),A2(MAX); void fft(vector<cd> & a, bool invert) { int n = a.size(); for (int i = 1, j = 0; i < n; i++) { int bit = n >> 1; for (; j & bit; bit >>= 1) j ^= bit; j ^= bit; if (i < j) swap(a[i], a[j]); } for (int len = 2; len <= n; len <<= 1) { double ang = 2 * PI / len * (invert ? -1 : 1); cd wlen(cos(ang), sin(ang)); for (int i = 0; i < n; i += len) { cd w(1); for (int j = 0; j < len / 2; j++) { cd u = a[i+j], v = a[i+j+len/2] * w; a[i+j] = u + v; a[i+j+len/2] = u - v; w *= wlen; } } } if (invert) { for (cd & x : a) x /= n; } } void prod(vector<cd> &a, vector<cd> &b, vector<cd> &c){ for(int i=0;i<a.size();i++){ c[i]=a[i]*b[i]; } } int main(){ long long int n,m,q; scanf("%lld %lld %lld",&n,&m,&q); for(int i=0;i<n;i++){ scanf("%lld",&a[i]); } for(int i=0;i<m;i++){ scanf("%lld",&b[i]); } for(int i=0;i<MAX;i++){ A[i]=cd(0,0); Amn[i]=cd(0,0); Amx[i]=cd(0,0); B[i]=cd(0,0); Bmn[i]=cd(0,0); Bmx[i]=cd(0,0); } for(int i=0;i<n;i++){ A[a[i]]+=cd(1,0); } for(int i=0;i<n-1;i++){ Amn[min(a[i],a[i+1])]+=cd(1,0); } for(int i=0;i<n-1;i++){ Amx[max(a[i],a[i+1])]+=cd(1,0); } for(int i=0;i<m;i++){ B[b[i]]+=cd(1,0); } for(int i=0;i<m-1;i++){ Bmn[min(b[i],b[i+1])]+=cd(1,0); } for(int i=0;i<m-1;i++){ Bmx[max(b[i],b[i+1])]+=cd(1,0); } fft(A,0); fft(Amn,0); fft(Amx,0); fft(B,0); fft(Bmn,0); fft(Bmx,0); prod(A,Bmn,E11); prod(Amn,B,E12); prod(Amx,B,E21); prod(A,Bmx,E22); prod(Amn,Bmn,SQ1); prod(Amx,Bmx,SQ2); prod(A,B,V); fft(E11,1); fft(E12,1); fft(E21,1); fft(E22,1); fft(SQ1,1); fft(SQ2,1); fft(V,1); for(int i=0;i<MAX;i++){ A1[i]=round(SQ1[i].real())-round(E11[i].real())-round(E12[i].real())+round(V[i].real()); 
A2[i]=round(SQ2[i].real())-round(E21[i].real())-round(E22[i].real())+round(V[i].real()); } for(int i=1;i<MAX;i++){ A2[i]+=A2[i-1]; } for(int i=MAX-2;i>-1;i--){ A1[i]+=A1[i+1]; } for(int i=0;i<q;i++){ int query; scanf("%d",&query); //cout<<A1[query]<<" "<<A2[query-1]<<endl; printf("%lld\n",A1[query]-A2[query-1]); } return 0; }
1393
A
Rainbow Dash, Fluttershy and Chess Coloring
One evening Rainbow Dash and Fluttershy have come up with a game. Since the ponies are friends, they have decided not to compete in the game but to pursue a common goal. The game starts on a square flat grid, which initially has the outline borders built up. Rainbow Dash and Fluttershy have flat square blocks with size $1\times1$, Rainbow Dash has an infinite amount of light blue blocks, Fluttershy has an infinite amount of yellow blocks. The blocks are placed according to the following rule: each newly placed block must touch the built on the previous turns figure by a side (note that the outline borders of the grid are built initially). At each turn, one pony can place any number of blocks of her color according to the game rules. Rainbow and Fluttershy have found out that they can build patterns on the grid of the game that way. They have decided to start with something simple, so they made up their mind to place the blocks to form a \textbf{chess coloring}. Rainbow Dash is well-known for her speed, so she is interested in the minimum number of turns she and Fluttershy need to do to get a chess coloring, covering the whole grid with blocks. Please help her find that number! Since the ponies can play many times on different boards, Rainbow Dash asks you to find the minimum numbers of turns for several grids of the games. The chess coloring in two colors is the one in which each square is neighbor by side only with squares of different colors.
By modeling the game on different grids one can notice that the answer equals $\lfloor \frac{n}{2} \rfloor + 1$. You can prove this by induction, handling grids with even and odd side lengths separately. Initially the problem asked about rectangular grids; you can think about that version of the problem.
[ "greedy", "math" ]
800
#include<bits/stdc++.h> using namespace std; main() { ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int t, n; cin >> t; while (t--) { cin >> n; cout << n / 2 + 1 << '\n'; } return 0; }
1393
B
Applejack and Storages
This year in Equestria was a year of plenty, so Applejack has decided to build some new apple storages. According to the advice of the farm designers, she chose to build two storages with non-zero area: one in the shape of a square and another one in the shape of a rectangle (which possibly can be a square as well). Applejack will build the storages using planks, she is going to spend exactly one plank on each side of the storage. She can get planks from her friend's company. Initially, the company storehouse has $n$ planks, Applejack knows their lengths. The company keeps working so it receives orders and orders the planks itself. Applejack's friend can provide her with information about each operation. For convenience, he will give her information according to the following format: - $+$ $x$: the storehouse received a plank with length $x$ - $-$ $x$: one plank with length $x$ was removed from the storehouse (it is guaranteed that the storehouse had some planks with length $x$). Applejack is still unsure about when she is going to order the planks so she wants to know if she can order the planks to build rectangular and square storages out of them after every event at the storehouse. Applejack is busy collecting apples and she has completely no time to do the calculations so she asked you for help! We remind you that all four sides of a square are equal, and a rectangle has two pairs of equal sides.
Let's maintain the array $cnt_i$, storing the number of planks of each length. Note that to build a square and a rectangle we need four planks of the same length plus two more pairs of planks of equal length. To check this we can maintain two values: $sum2=\sum_{i=1}^{10^5}\lfloor \frac{cnt_i}{2}\rfloor$ and $sum4=\sum_{i=1}^{10^5}\lfloor \frac{cnt_i}{4}\rfloor$. Then you can build a square and a rectangular storage iff $sum4 \ge 1$ and $sum2 \ge 4$. The first condition guarantees the square (there are $\ge 4$ planks of some length), and the second guarantees the rectangle: it needs two pairs, and the square already consumes two of the pairs counted in $sum2$, so four pairs are required in total.
[ "constructive algorithms", "data structures", "greedy", "implementation" ]
1,400
#include<bits/stdc++.h> using namespace std; int const MAXN = 1e5 + 5; int cnt[MAXN]; main() { ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int n, q, x, cnt2 = 0, cnt4 = 0; char type; cin >> n; for (int i = 1; i <= n; ++i) { cin >> x; cnt2 -= cnt[x] / 2; cnt4 -= cnt[x] / 4; cnt[x]++; cnt2 += cnt[x] / 2; cnt4 += cnt[x] / 4; } cin >> q; for (int i = 1; i <= q; ++i) { cin >> type >> x; cnt2 -= cnt[x] / 2; cnt4 -= cnt[x] / 4; if (type == '+') cnt[x]++; else cnt[x]--; cnt2 += cnt[x] / 2; cnt4 += cnt[x] / 4; if (cnt4 >= 1 && cnt2 >= 4) cout << "YES" << '\n'; else cout << "NO" << '\n'; } return 0; }
1393
C
Pinkie Pie Eats Patty-cakes
Pinkie Pie has bought a bag of patty-cakes with different fillings! But it appeared that not all patty-cakes differ from one another with filling. In other words, the bag contains some patty-cakes with the same filling. Pinkie Pie eats the patty-cakes one-by-one. She likes having fun so she decided not to simply eat the patty-cakes but to try not to eat the patty-cakes with the same filling way too often. To achieve this she wants the minimum distance between the eaten patty-cakes with the same filling to be the largest possible. Herein Pinkie Pie called the distance between two patty-cakes the number of eaten patty-cakes strictly between them. Pinkie Pie can eat the patty-cakes in any order. She is impatient about eating all the patty-cakes up so she asks you to help her to count the greatest minimum distance between the eaten patty-cakes with the same filling amongst all possible orders of eating! Pinkie Pie is going to buy more bags of patty-cakes so she asks you to solve this problem for several bags!
Note that if there is an arrangement with minimum distance $\geq X$, then there is also an arrangement with minimum distance $\geq X-1$. This allows binary search on the answer. To check that the answer is at least $X$, we can use a greedy algorithm: each time, among the elements that may be used (those not used during the last $X$ steps), take one with the largest number of remaining copies. You can store the usable elements, sorted by their number of remaining appearances, in an std::set. This solution works in $O(n \log^2 n)$. There is also an $O(n)$ solution.
[ "constructive algorithms", "greedy", "math", "sortings" ]
1,700
#include <bits/stdc++.h> using namespace std; const int MAXN = 100009; int cnt[MAXN]; vector<int> a; int n; bool check(int x) { for (int i = 1; i <= n; i ++) cnt[i] = 0; for (int i = 0; i < n; i ++) cnt[a[i]]++; set<pair<int, int>, greater<pair<int, int>>> ss; //use greater comparator to sort set in descending order for (int i = 1; i <= n; i ++) { if (cnt[i] > 0) ss.insert({cnt[i], i}); } vector<int> b; for (int i = 0; i < n; i ++) { if (i >= x && cnt[b[i - x]]) { ss.insert({cnt[b[i - x]], b[i - x]}); } if (ss.empty()) return 0; b.push_back(ss.begin()->second); ss.erase(ss.begin()); cnt[b.back()]--; } return 1; } signed main() { ios :: sync_with_stdio(0); cin.tie(0); int ttt; cin >> ttt; while (ttt--) { cin >> n; a.resize(n); for (int i = 0; i < n; i ++) { cin >> a[i]; } int l = 0, r = n; while (r - l > 1) { int m = (r + l) / 2; if (check(m)) { l = m; } else { r = m; } } cout << l - 1 << "\n"; } return 0; }
1393
D
Rarity and New Dress
Carousel Boutique is busy again! Rarity has decided to visit the pony ball and she surely needs a new dress, because going out in the same dress several times is a sign of bad manners. First of all, she needs a dress pattern, which she is going to cut out from the rectangular piece of the multicolored fabric. The piece of the multicolored fabric consists of $n \times m$ separate square scraps. Since Rarity likes dresses in style, a dress pattern must only include scraps sharing the same color. A dress pattern must be the square, and since Rarity is fond of rhombuses, the sides of a pattern must form a $45^{\circ}$ angle with sides of a piece of fabric (that way it will be resembling the traditional picture of a rhombus). Examples of proper dress patterns: Examples of improper dress patterns: The first one consists of multi-colored scraps, the second one goes beyond the bounds of the piece of fabric, the third one is not a square with sides forming a $45^{\circ}$ angle with sides of the piece of fabric. Rarity wonders how many ways to cut out a dress pattern that satisfies all the conditions that do exist. Please help her and satisfy her curiosity so she can continue working on her new masterpiece!
Note that if there is a rhombus of size $X$ at cell $(i, j)$, then there are also rhombuses of all smaller sizes there. Let's divide the rhombus into a left part and a right part and solve the problem separately for both; then the answer for the cell equals the minimum of these two values. Note that if the maximum size for cell $(i, j)$ is $X$, then the maximum size for cell $(i, j+1)$ is at most $X+1$ (and at most $1$ if these two cells are not equal). Also, the maximum size of the left part is at most the minimum of the numbers of consecutive equal cells above and to the left of the fixed cell. To find the number of consecutive equal cells above, we use dynamic programming: if cells $(i, j)$ and $(i-1, j)$ are equal, then the value for cell $(i, j)$ equals the value for cell $(i-1, j)$ plus $1$, otherwise it equals $1$. Similarly we can find the values for the cells to the left. Now we find the maximum size of the left part with dynamic programming, following the observations above, and similarly for the right part. The total complexity is $O(nm)$.
[ "dfs and similar", "dp", "implementation", "shortest paths" ]
2,100
#include<bits/stdc++.h> using namespace std; typedef long long ll; int const maxn = 2005; char a[maxn][maxn]; int cnt_up[maxn][maxn], cnt_down[maxn][maxn], L[maxn], R[maxn]; main() { ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int n, m; cin >> n >> m; for (int i = 1; i <= n; ++i) { for (int j = 1; j <= m; ++j) cin >> a[i][j]; } ll ans = 0; for (int i = 1; i <= n; ++i) { for (int j = 1; j <= m; ++j) { if (i != 1 && a[i][j] == a[i - 1][j]) cnt_up[i][j] = cnt_up[i - 1][j] + 1; else cnt_up[i][j] = 0; } } for (int i = n; i >= 1; --i) { for (int j = 1; j <= m; ++j) { if (i != n && a[i][j] == a[i + 1][j]) cnt_down[i][j] = cnt_down[i + 1][j] + 1; else cnt_down[i][j] = 0; } } for (int i = 1; i <= n; ++i) { for (int j = 1; j <= m; ++j) { int go = j; while (go <= m && a[i][j] == a[i][go]) go++; go--; for (int pos = j; pos <= go; ++pos) { if (pos == j) L[pos] = pos; else { L[pos] = max(L[pos - 1], pos - min(cnt_up[i][pos], cnt_down[i][pos])); } } j = go; } for (int j = m; j >= 1; --j) { int go = j; while (go >= 1 && a[i][j] == a[i][go]) go--; go++; for (int pos = j; pos >= go; --pos) { if (pos == j) R[pos] = pos; else { R[pos] = min(R[pos + 1], pos + min(cnt_up[i][pos], cnt_down[i][pos])); } } j = go; } for (int j = 1; j <= m; ++j) { ans += (ll)min(j - L[j] + 1, R[j] - j + 1); } } cout << ans << '\n'; return 0; }
1393
E2
Twilight and Ancient Scroll (harder version)
This is a harder version of the problem E with larger constraints. Twilight Sparkle has received a new task from Princess Celestia. This time she was asked to decipher the ancient scroll containing important knowledge of pony origin. To hide the crucial information from evil eyes, pony elders cast a spell on the scroll. That spell adds exactly one letter in any place to each word it is cast on. To make the path to the knowledge more tangled elders chose some of the words in the scroll and cast a spell on them. Twilight Sparkle knows that the elders admired the order in all things so the original scroll contained words in \textbf{lexicographically non-decreasing order}. She is asked to delete one letter from some of the words of the scroll (to undo the spell) to get some version of the original scroll. Unfortunately, there may be more than one way to recover the ancient scroll. To not let the important knowledge slip by Twilight has to look through all variants of the original scroll and find the required one. To estimate the maximum time Twilight may spend on the work she needs to know the number of variants she has to look through. She asks you to find that number! Since that number can be very big, Twilight asks you to find it modulo $10^9+7$. It may occur that princess Celestia has sent a wrong scroll so the answer may not exist. A string $a$ is lexicographically smaller than a string $b$ if and only if one of the following holds: - $a$ is a prefix of $b$, but $a \ne b$; - in the first position where $a$ and $b$ differ, the string $a$ has a letter that appears earlier in the alphabet than the corresponding letter in $b$.
Let's use dynamic programming. $dp[i][j]$: the number of ways to form a non-decreasing sequence on the strings $1 \ldots i$ such that the deleted character in string $i$ is at position $j$. This works in $O(L^3)$, where $L$ is the total length of all strings. Let's optimize this solution. For each string, sort all strings obtained by deleting at most one character from it. You can do this in $O(L^2 \cdot \log L)$ over all strings. Then you can use two pointers to calculate our dp: to compute the values of layer $i$, consider its strings in sorted order and accumulate the dp values of all smaller strings of layer $i-1$. We can calculate this dp in $O(L^2)$ and solve the problem in $O(L^2 \cdot \log L)$. We can use binary search and hashing to compare two strings in $O(\log L)$, which lets us sort all the strings in $O(L \cdot \log^2 L)$. In fact, you can sort the strings of one word in $O(L)$. Look at the string $s$. For each character find the first character to the right not equal to it (array $nxt[i]$). Then we store two pointers, to the beginning and the end of the list, and consider the characters in order from left to right: if $s_{nxt[i]} \le s_i$, add $i$ to the beginning of the list (to position $l$, increasing $l$ by $1$), otherwise add it to the end of the list (to position $r$, decreasing $r$ by $1$). Then insert $s$ itself into the list, right after the string obtained from $s$ by deleting its last character. Using hashes, the comparisons in the two-pointer pass cost $O(\log L)$ each, so this solution works in $O(L \cdot \log L)$ and fits into the TL.
[ "dp", "hashing", "implementation", "string suffix structures", "strings", "two pointers" ]
3,200
#include<bits/stdc++.h> using namespace std; typedef long long ll; int const maxn = 1e5 + 5, maxc = 1e6 + 5; ll mod[2], P[2], p[2][maxc], rev_P[2]; vector < ll > h[2][maxn]; vector < int > sorted[maxn]; string s[maxn]; int nxt[maxc]; int a[maxc], dp[2][maxc], inf = 1e9 + 7; int MOD = 1e9 + 7; ll st(ll x, int y, int ok) { if (y == 0) return 1; if (y % 2 == 0) { ll d = st(x, y / 2, ok); return d * d % mod[ok]; } return x * st(x, y - 1, ok) % mod[ok]; } inline char get_c(int i, int x, int numb) { if (numb < x) return s[i][numb]; if (numb + 1 < (int)s[i].size()) return s[i][numb + 1]; return ' '; } inline ll get_hash(int t, int i, int x, int len) { if (len < x) return h[t][i][len]; return (h[t][i][x] + (h[t][i][len + 1] - h[t][i][x + 1] + mod[t]) * rev_P[t]) % mod[t]; } inline pair < ll, ll > get_h(int i, int x, int len) { return {get_hash(0, i, x, len), get_hash(1, i, x, len)}; } inline int check(int i, int x, int j, int y) { int len1 = (int)s[i].size(), len2 = (int)s[j].size(); if (x != len1) len1--; if (y != len2) len2--; int lef = 0, righ = min(len1, len2) + 1; while (righ - lef > 1) { int mid = (righ + lef) / 2; if (get_h(i, x, mid) == get_h(j, y, mid)) lef = mid; else righ = mid; } return get_c(i, x, lef) >= get_c(j, y, lef); } main() { ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); mod[0] = 1e9 + 7, mod[1] = 1e9 + 9, P[0] = 29, P[1] = 31, rev_P[0] = st(P[0], mod[0] - 2, 0), rev_P[1] = st(P[1], mod[1] - 2, 1); p[0][0] = 1, p[1][0] = 1; for (int i = 1; i < maxc; ++i) { for (int j = 0; j <= 1; ++j) p[j][i] = p[j][i - 1] * P[j] % mod[j]; } int n; cin >> n; for (int i = 1; i <= n; ++i) { cin >> s[i]; for (int j = 0; j <= 1; ++j) { h[j][i].push_back(0); for (int pos = 0; pos < (int)s[i].size(); ++pos) { h[j][i].push_back((h[j][i][pos] + p[j][pos] * (s[i][pos] - 'a' + 1)) % mod[j]); } } nxt[(int)s[i].size() - 1] = (int)s[i].size() - 1; for (int pos = (int)s[i].size() - 2; pos >= 0; --pos) { if (s[i][pos] != s[i][pos + 1]) nxt[pos] = pos + 1; else nxt[pos] = 
nxt[pos + 1]; } int l = 0, r = (int)s[i].size() - 1; for (int j = 0; j < (int)s[i].size(); ++j) { if (s[i][nxt[j]] <= s[i][j]) a[l++] = j; else a[r--] = j; } for (int j = 0; j < (int)s[i].size(); ++j) { sorted[i].push_back(a[j]); if (a[j] == (int)s[i].size() - 1) sorted[i].push_back((int)s[i].size()); } } for (int i = 0; i <= (int)s[1].size(); ++i) { dp[0][i] = 1; } for (int i = 2; i <= n; ++i) { int oks = (i - 1) % 2, ptr = 0, sum = 0, cur = -1; for (auto key : sorted[i]) { cur++; while (ptr < (int)sorted[i - 1].size() && check(i, key, i - 1, sorted[i - 1][ptr])) { sum += dp[(1^oks)][ptr]; if (sum >= MOD) sum -= MOD; ptr++; } dp[oks][cur] = sum; } } int ans = 0; for (int i = 0; i <= (int)s[n].size(); ++i) { ans += dp[(n - 1) % 2][i]; if (ans >= MOD) ans -= MOD; } cout << ans << '\n'; return 0; }
1394
A
Boboniu Chats with Du
Have you ever used the chat application QQ? Well, in a chat group of QQ, administrators can muzzle a user for days. In Boboniu's chat group, there's a person called Du Yi who likes to make fun of Boboniu every day. Du will chat in the group for $n$ days. On the $i$-th day: - If Du can speak, he'll make fun of Boboniu with fun factor $a_i$. But after that, he may be muzzled depending on Boboniu's mood. - Otherwise, Du won't do anything. Boboniu's mood is a constant $m$. On the $i$-th day: - If Du can speak and $a_i>m$, then Boboniu will be angry and muzzle him for $d$ days, which means that Du won't be able to speak on the $i+1, i+2, \cdots, \min(i+d,n)$-th days. - Otherwise, Boboniu won't do anything. The total fun factor is the sum of the fun factors on the days when Du can speak. Du asked you to find the maximum total fun factor among all possible permutations of $a$.
If $a_i>m$, we consider it a big item with value $a_i$; otherwise, a small item with value $a_i$. We are asked to choose some items and maximize the total value; if an item is not chosen, it is placed on a muzzled day. Enumerate the number of chosen big items, denoted by $x$. They take $(x-1)(d+1)+1$ days, and the remaining days are used to place small items. (Note that the $k-x$ unchosen big items must fit into the muzzled days, which forces $x\ge\lceil k/(d+1)\rceil$, where $k$ is the total number of big items.) Choose items greedily: sort the items by value from largest to smallest, take the largest $x$ big items and the largest $n-(x-1)(d+1)-1$ small items, and update the answer. The total time complexity is $O(n\log n)$.
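The greedy above can be sketched in a few lines of Python (the function name and signature are illustrative, not from the editorial; it mirrors the enumeration of $x$ with the lower bound noted above):

```python
def max_fun(n, d, m, a):
    """Split a into big (> m) and small (<= m) items, sort each descending,
    and try every feasible count x of big items actually spoken."""
    big = sorted((x for x in a if x > m), reverse=True)
    small = sorted((x for x in a if x <= m), reverse=True)
    pb = [0]                      # pb[i] = sum of the i largest big items
    for v in big:
        pb.append(pb[-1] + v)
    ps = [0]                      # ps[i] = sum of the i largest small items
    for v in small:
        ps.append(ps[-1] + v)
    if not big:
        return ps[-1]             # no muzzling ever happens: take everything
    best = -1
    k = len(big)
    # the k - x unchosen big items must fit into the x*d muzzled days,
    # which forces x >= ceil(k / (d + 1))
    lo = (k + d) // (d + 1)
    for x in range(lo, k + 1):
        used = (x - 1) * (d + 1) + 1   # days consumed by x big items
        if used > n:
            break
        free = min(n - used, len(small))
        best = max(best, pb[x] + ps[free])
    return best
```

On the first sample of the problem ($n=5$, $d=2$, $m=11$, $a=[8,10,15,23,5]$) this returns $48$.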
[ "dp", "greedy", "sortings", "two pointers" ]
1,800
#include <bits/stdc++.h> #define rep(i, a, b) for (int i = (a); i <= int(b); i++) using namespace std; typedef long long ll; const int maxn = 1e5; int n, d, m, k, l; ll a[maxn + 5], b[maxn + 5]; void solve(ll a[], int n) { sort(a + 1, a + n + 1); reverse(a + 1, a + n + 1); rep(i, 1, n) a[i] += a[i - 1]; } int main() { scanf("%d %d %d", &n, &d, &m); for (int i = 0, x; i < n; i++) { scanf("%d", &x); if (x > m) a[++k] = x; else b[++l] = x; } if (k == 0) { ll s = 0; rep(i, 1, n) s += b[i]; printf("%lld\n", s); exit(0); } solve(a, k); solve(b, l); fill(b + l + 1, b + n + 1, b[l]); ll res = 0; rep(i, (k + d) / (1 + d), k) if (1ll * (i - 1) * (d + 1) + 1 <= n) { res = max(res, a[i] + b[n - 1ll * (i - 1) * (d + 1) - 1]); } printf("%lld\n", res); return 0; }
1394
B
Boboniu Walks on Graph
Boboniu has a \textbf{directed} graph with $n$ vertices and $m$ edges. The out-degree of each vertex is at most $k$. Each edge has an integer weight between $1$ and $m$. No two edges have equal weights. Boboniu likes to walk on the graph with some specific rules, which is represented by a tuple $(c_1,c_2,\ldots,c_k)$. If he now stands on a vertex $u$ with out-degree $i$, then he will go to the next vertex by the edge with the $c_i$-th $(1\le c_i\le i)$ smallest weight among all edges outgoing from $u$. Now Boboniu asks you to calculate the number of tuples $(c_1,c_2,\ldots,c_k)$ such that - $1\le c_i\le i$ for all $i$ ($1\le i\le k$). - Starting from any vertex $u$, it is possible to go back to $u$ in finite time by walking on the graph under the described rules.
Let $\deg u$ denote the out-degree of $u$. Let $nex_{u,i}$ denote the endpoint of the edge with the $i$-th smallest weight among all edges starting from $u$. For a fixed tuple $(t_1,t_2,\ldots,t_k)$, if $\{ nex_{i,t_{\deg i}} \mid 1\le i\le n \}=\{1,2,\ldots,n\}$ (i.e. each vertex appears exactly once), then it is a correct tuple. Let $S_{i,j}$ denote the set obtained for the vertices of out-degree $i$ when $c_i=j$, that is, $\{ nex_{u,j} \mid \deg u=i \}$. Thus the condition above can be restated as: $S_{1,t_1} \cup S_{2,t_2} \cup \ldots \cup S_{k,t_k}=\{1,2,\ldots,n\}$. Let's enumerate all $k!$ tuples and use hashing to check whether each one is correct. There are many possible hash functions. For example, for an integer set $T$, we can use $h(T)=\sum_{x\in T}val_x\bmod p$ or $h(T)=\prod_{x\in T}val_x\bmod p$; just make sure it is associative and commutative so that hashes of disjoint sets can be combined. Here $val_x$ may be a random number, and you can also use multiple hashes to reduce collisions. The total time complexity is $O(n+m+k!)$.
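A minimal sketch of this approach (names are mine; a single large modulus is used for brevity, whereas the editorial suggests multiple hashes to guard against collisions):

```python
import random
from itertools import product

def count_good_tuples(n, k, edges):
    """Assign a random value to every vertex.  S[d][j] is the hash (sum of values)
    of the multiset { nex_{u,j} : deg u = d }.  A tuple (c_1..c_k) is counted as
    good iff the combined hash equals the hash of {1..n}."""
    MOD = (1 << 61) - 1
    val = [0] + [random.randrange(1, MOD) for _ in range(n)]
    target = sum(val) % MOD
    adj = [[] for _ in range(n + 1)]
    for u, v, w in edges:
        adj[u].append((w, v))
    S = [[0] * (k + 1) for _ in range(k + 1)]
    for u in range(1, n + 1):
        adj[u].sort()                      # weights are distinct, so this orders edges
        d = len(adj[u])
        for j, (_, v) in enumerate(adj[u], start=1):
            S[d][j] = (S[d][j] + val[v]) % MOD
    ans = 0
    for tup in product(*[range(1, i + 1) for i in range(1, k + 1)]):
        if sum(S[i + 1][c] for i, c in enumerate(tup)) % MOD == target:
            ans += 1
    return ans
```

With a 61-bit modulus a false positive requires an exact modular coincidence, so a single hash is already reliable in practice.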
[ "brute force", "dfs and similar", "graphs", "hashing" ]
2,300
//by Sshwy #include<bits/stdc++.h> using namespace std; #define pb push_back #define FOR(i,a,b) for(int i=(a);i<=(b);++i) #define ROF(i,a,b) for(int i=(a);i>=(b);--i) mt19937 mt_rand(chrono::high_resolution_clock::now().time_since_epoch().count()); const int N=2e5+5,HS=3,K=10; int n,m,k; vector< pair<int,int> > g[N]; int mod[HS]; struct hash_number{ int a[HS]; hash_number(){ fill(a,a+HS,0); } hash_number(long long x){ FOR(i,0,HS-1)a[i]=(x%mod[i]+mod[i])%mod[i]; } hash_number operator+(hash_number x){ hash_number res; res.a[0]=(a[0]+x.a[0])%mod[0]; res.a[1]=(a[1]*1ll*x.a[1])%mod[1]; res.a[2]=(a[2]+x.a[2])%mod[2]; return res; } bool operator==(const hash_number& x)const { FOR(i,0,HS-1)if(a[i]!=x.a[i])return 0; return 1; } }val[N],c[K][K],s; int ans; int status[K]; void dfs(int x,hash_number hsh){ if(x==k){ if(hsh == s) ++ans; return; } FOR(i,1,x+1){ status[x+1]=i; dfs(x+1,hsh+c[x+1][i]); } } int main(){ mod[0]=998244353; mod[1]=1e9+7; mod[2]=std::uniform_int_distribution<int>(1e8,1e9)(mt_rand); fprintf(stderr,"%d %d %d\n",mod[0],mod[1],mod[2]); scanf("%d%d%d",&n,&m,&k); FOR(i,1,m){ int u,v,w; scanf("%d%d%d",&u,&v,&w); g[u].pb({w,v}); } std::uniform_int_distribution<long long> rg(1,1e18); FOR(i,1,n)val[i]=hash_number(rg(mt_rand)); FOR(i,1,n)s=s+val[i]; FOR(u,1,n){ int d=g[u].size(); sort(g[u].begin(),g[u].end()); for(int i=1;i<=g[u].size();++i){ int v=g[u][i-1].second; c[d][i]=c[d][i]+val[v]; } } dfs(0,hash_number()); printf("%d\n",ans); return 0; }
1394
C
Boboniu and String
Boboniu defines BN-string as a string $s$ of characters 'B' and 'N'. You can perform the following operations on the BN-string $s$: - Remove a character of $s$. - Remove a substring "BN" or "NB" of $s$. - Add a character 'B' or 'N' to the end of $s$. - Add a string "BN" or "NB" to the end of $s$. Note that a string $a$ is a substring of a string $b$ if $a$ can be obtained from $b$ by deletion of several (possibly, zero or all) characters from the beginning and several (possibly, zero or all) characters from the end. Boboniu thinks that BN-strings $s$ and $t$ are similar if and only if: - $|s|=|t|$. - There exists a permutation $p_1, p_2, \ldots, p_{|s|}$ such that for all $i$ ($1\le i\le |s|$), $s_{p_i}=t_i$. Boboniu also defines $\text{dist}(s,t)$, the distance between $s$ and $t$, as the minimum number of operations that makes $s$ similar to $t$. Now Boboniu gives you $n$ non-empty BN-strings $s_1,s_2,\ldots, s_n$ and asks you to find a \textbf{non-empty} BN-string $t$ such that the maximum distance to string $s$ is minimized, i.e. you need to minimize $\max_{i=1}^n \text{dist}(s_i,t)$.
It's easy to see that the operations on a BN-string are equivalent to operations on a BN-multiset, i.e. a multiset $s$ containing only B and N: remove a B or an N (if it exists) from $s$; insert a B or an N into $s$; remove a B and an N from $s$; insert a B and an N into $s$. So let's use a pair $(x,y)$ to denote a BN-set with $x$ B's and $y$ N's. One operation moves us to $(x,y\pm 1)$, $(x\pm 1,y)$ or $(x\pm 1,y\pm 1)$. Two BN-sets $(x_1,y_1)$ and $(x_2,y_2)$ are similar simply when $x_1=x_2$ and $y_1=y_2$. Now the problem is to find $(x_t,y_t)$ minimizing $\max_{i=1}^n\text{dist}(s_i,t)$. There are many algorithms to solve it and I'll describe two of them. We can work out the distance between $s_1=(x_1,y_1)$ and $s_2=(x_2,y_2)$: $\text{dist}(s_1,s_2)=\begin{cases} |x_1-x_2|+|y_1-y_2| & (x_1-x_2)(y_1-y_2)<0\\ \max(|x_1-x_2|,|y_1-y_2|) & (x_1-x_2)(y_1-y_2) \ge 0 \end{cases}$ We also have a non-randomized algorithm. It can be shown that, for a fixed pair $P=(p_x,p_y)$, all the pairs at distance $x$ from $P$ form a hexagon. So a hexagon centered at $(x_t,y_t)$ with radius $\max_{i=1}^n\text{dist}(s_i,t)$ must cover all $s_i$. Therefore we look for a hexagon with the minimal radius $r$ covering all $s_i$, from which $(x_t,y_t)$ is easy to compute. Use binary search with some feasibility tests to calculate $r$. The total time complexity is $O(n\log_2n)$.
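The case formula for the distance translates directly into code (a small helper of my own, for illustration):

```python
def dist(s1, s2):
    """Distance between BN-sets (x1, y1) and (x2, y2), per the case formula above."""
    (x1, y1), (x2, y2) = s1, s2
    dx, dy = x1 - x2, y1 - y2
    if dx * dy < 0:
        # the B-surplus and N-surplus have opposite signs: fix them independently
        return abs(dx) + abs(dy)
    # same sign: combined (+-1, +-1) moves cover both coordinates at once
    return max(abs(dx), abs(dy))
```

For example, $\text{dist}((5,5),(2,3))=3$: two combined $(+1,+1)$ moves plus one single move.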
[ "binary search", "geometry", "ternary search" ]
2,600
//by Sshwy #include<bits/stdc++.h> using namespace std; #define FOR(i,a,b) for(int i=(a);i<=(b);++i) #define ROF(i,a,b) for(int i=(a);i>=(b);--i) const int INF=1e9; int n; int main(){ cin>>n; int lx=INF,rx=-INF,ly=INF,ry=-INF,lz=INF,rz=-INF; FOR(i,1,n){ string s; cin>>s; int x=0,y=0; for(char c:s){ if(c=='B')++x; else ++y; } //printf("%d %d\n",x,y); lx=min(lx,x), rx=max(rx,x); ly=min(ly,y), ry=max(ry,y); lz=min(lz,x-y), rz=max(rz,x-y); } int ans=INF,ax=0,ay=0,az=0; auto calc = [&](){ int l=0,r=lz-(lx-ry),mid; auto check = [&](int a){ return lx+a+a>=rx && lx-ry+a+a+a>=rz && ry-a-a<=ly; }; if(check(r)==0)return INF; while(l<r)mid=(l+r)>>1, check(mid)?r=mid:l=mid+1; return l; }; FOR(_,1,6){ assert(lx<=rx && ly<=ry && lz<=rz); int v=calc(); if(v<ans){ ans=v; ax=lx+v, ay=ry-v, az=ax-ay; } int Lx=ly,Rx=ry,Ly=-rz,Ry=-lz,Lz=lx,Rz=rx; lx=Lx, rx=Rx, ly=Ly, ry=Ry, lz=Lz, rz=Rz; int Ax=ay,Ay=-az,Az=ax; ax=Ax,ay=Ay,az=Az; } cout<<ans<<endl; FOR(i,1,ax)cout<<'B'; FOR(i,1,ay)cout<<'N'; cout<<endl; return 0; }
1394
D
Boboniu and Jianghu
Since Boboniu finished building his Jianghu, he has been doing Kungfu on these mountains every day. Boboniu designs a map for his $n$ mountains. He uses $n-1$ roads to connect all $n$ mountains. Every pair of mountains is connected via roads. For the $i$-th mountain, Boboniu estimated the tiredness of doing Kungfu on the top of it as $t_i$. He also estimated the height of each mountain as $h_i$. A path is a sequence of mountains $M$ such that for each $i$ ($1 \le i < |M|$), there exists a road between $M_i$ and $M_{i+1}$. Boboniu would regard the path as a challenge if for each $i$ ($1\le i<|M|$), $h_{M_i}\le h_{M_{i+1}}$. Boboniu wants to divide \textbf{all} $n-1$ roads into several challenges. Note that each road must appear in \textbf{exactly one} challenge, but a mountain may appear in several challenges. Boboniu wants to minimize the total tiredness to do all the challenges. The tiredness of a challenge $M$ is the sum of tiredness of all mountains in it, i.e. $\sum_{i=1}^{|M|}t_{M_i}$. He asked you to find the minimum total tiredness. As a reward for your work, you'll become a guardian in his Jianghu.
Generally speaking, you're asked to cover the original tree with simple directed paths (challenges) and minimize the total cost (tiredness). The edges with $h_u \neq h_v$ are already oriented, and for the others we need to determine the direction. At first, let's consider the case where every edge is already oriented (that is, $h_u \neq h_v$ holds for every edge $(u, v)$). We use $P(u, v)$ to denote the directed path from $u$ to $v$. In the beginning, for each edge $u \to v$, let's set up a challenge $P(u, v)$ to cover it. The total cost is then obviously $\sum_{i = 1}^{n} \text{deg}(i) \cdot t_i$, where $\text{deg}(i)$ denotes the degree of vertex $i$. We can choose two challenges $P(x, y)$ and $P(y, z)$ ($y$ should be on $P(x, z)$) and merge them into a single challenge $P(x, z)$. This operation reduces the total tiredness by $t_y$, so we try to perform such operations to maximize the total reduction. For vertex $i$, suppose $\text{in}_i$ challenges end at $i$ and $\text{out}_i$ challenges start from $i$; then the total reduction is $\sum_{i = 1}^{n} \min(\text{in}_i, \text{out}_i) \cdot t_i$. Now let's solve the original problem. For the directed edges, we compute $\text{in}'_i$ and $\text{out}'_i$ for every vertex $i$ and delete these edges from the tree. The undirected edges form several small trees (a forest). Choose an arbitrary root for each tree and run a DP on it. Let $p_u$ be the parent of $u$. For a non-root vertex $u$ and its subtree, $f_u$ denotes the maximum reduction when we orient $p_u \to u$ (down), and $g_u$ denotes the maximum reduction when we orient $u \to p_u$ (up). Take $f_u$ for example. To calculate it, we need the in-degree and out-degree of $u$ after directing the edges. Suppose that $u$ has $c$ children.
If $x$ of them are oriented toward $u$ (up) and the other $(c - x)$ are oriented away from $u$ (down), the reduction of $u$ is $\min(\text{in}'_u + 1 + x, \text{out}'_u + (c - x)) \cdot t_u$ ($+ 1$ because of $p_u \to u$). Now the question becomes: you are given $c$ vertices $v_1, v_2,\ldots, v_c$; choose $x$ of them to form a set $A$, maximizing $\sum_{v \in A} g_v + \sum_{v \notin A} f_v$, and calculate the maximum value for every $0 \le x \le c$. To do so, sort $[v_1, v_2, \cdots, v_c]$ by $(f_v - g_v)$ in decreasing order and compute the prefix sums $s_i$ of these differences. The answer for $x = i$ is then $s_{c-i} + \sum_v g_v$, since the $c - i$ vertices outside $A$ should take the largest differences. The calculation of $g_u$, and of the root, is similar. The total time complexity is $O(n \log n)$.
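The sorting sub-step can be sketched as follows (names are mine; note the prefix index $c - x$, since the $c - x$ children outside $A$ contribute $f_v = g_v + (f_v - g_v)$ and should grab the largest differences):

```python
def best_for_each_x(fg):
    """Given (f_v, g_v) for the c children, return, for every x = |A|,
    the maximum of sum_{v in A} g_v + sum_{v not in A} f_v."""
    c = len(fg)
    diffs = sorted((f - g for f, g in fg), reverse=True)
    base = sum(g for _, g in fg)
    prefix = [0]
    for d in diffs:
        prefix.append(prefix[-1] + d)
    # answer[x]: x children take g, the other c - x take the top differences
    return [base + prefix[c - x] for x in range(c + 1)]
```

For children with $(f,g)$ pairs $(5,1)$ and $(2,4)$ this yields $[7, 9, 5]$: with $x=1$ the best split is $g=4$ from the second child plus $f=5$ from the first.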
[ "dp", "greedy", "sortings", "trees" ]
2,800
#include <bits/stdc++.h> using namespace std; typedef long long ll; const int maxn = 2e5; int n, h[maxn + 3], t[maxn + 3], in[maxn + 3], out[maxn + 3]; vector<int> G[maxn + 3]; bool vis[maxn + 3]; ll ans, f[maxn + 3], g[maxn + 3], a[maxn + 3]; void dfs(int u, int fa = 0) { vis[u] = true; for (int i = 0, v; i < G[u].size(); i++) { if ((v = G[u][i]) == fa) continue; dfs(v, u); } int s = 0; ll cur = 0; for (int i = 0, v; i < G[u].size(); i++) { if ((v = G[u][i]) == fa) continue; cur += g[v], a[++s] = f[v] - g[v]; } sort(a + 1, a + s + 1); reverse(a + 1, a + s + 1); for (int i = 0; i <= s; i++) { cur += a[i]; if (fa) { f[u] = max(f[u], 1ll * min(in[u] + 1 + s - i, out[u] + i) * t[u] + cur); g[u] = max(g[u], 1ll * min(in[u] + s - i, out[u] + 1 + i) * t[u] + cur); } else { f[u] = max(f[u], 1ll * min(in[u] + s - i, out[u] + i) * t[u] + cur); } } } int main() { scanf("%d", &n); for (int i = 1; i <= n; i++) scanf("%d", &t[i]); for (int i = 1; i <= n; i++) scanf("%d", &h[i]); for (int i = 1, u, v; i < n; i++) { scanf("%d %d", &u, &v); ans += t[u] + t[v]; if (h[u] == h[v]) { G[u].push_back(v), G[v].push_back(u); } else { if (h[u] > h[v]) swap(u, v); in[u]++, out[v]++; } } for (int i = 1; i <= n; i++) if (!vis[i]) { dfs(i), ans -= f[i]; } printf("%lld\n", ans); return 0; }
1394
E
Boboniu and Banknote Collection
No matter what trouble you're in, don't be afraid, but face it with a smile. I've made another billion dollars! — Boboniu Boboniu has issued his currencies, named Bobo Yuan. Bobo Yuan (BBY) is a series of currencies. Boboniu gives each of them a positive integer identifier, such as BBY-1, BBY-2, etc. Boboniu has a BBY collection. His collection looks like a sequence. For example: We can use sequence $a=[1,2,3,3,2,1,4,4,1]$ of length $n=9$ to denote it. Now Boboniu wants to fold his collection. You can imagine that Boboniu stick his collection to a long piece of paper and fold it between currencies: Boboniu will only fold the same identifier of currencies together. In other words, if $a_i$ is folded over $a_j$ ($1\le i,j\le n$), then $a_i=a_j$ must hold. Boboniu doesn't care if you follow this rule in the process of folding. But once it is finished, the rule should be obeyed. A formal definition of fold is described in notes. According to the picture above, you can fold $a$ two times. In fact, you can fold $a=[1,2,3,3,2,1,4,4,1]$ at most two times. So the maximum number of folds of it is $2$. As an international fan of Boboniu, you're asked to calculate the maximum number of folds. You're given a sequence $a$ of length $n$, for each $i$ ($1\le i\le n$), you need to calculate the maximum number of folds of $[a_1,a_2,\ldots,a_i]$.
At first, I'd like to explain the fold operation in an intuitive way. Section 1 Fold. To be exact, the original problem doesn't ask us to calculate the number of folds, but the number of folding marks. Section 1.1 Example 1. For example, you can fold $[1,1,1,1]$ three times, but there are different methods to fold it: Method 1: Method 2: The first method can be represented by the folding sequence (defined in the statement) $[1,-1,1,-1]$. The second method seems to be invalid under the definition of fold in the statement. But is it really? In fact, these methods are equivalent. The positions of their folding marks are exactly the same; they differ only in whether each mark is a valley fold or a mountain fold. For the first method, the folding types of the marks are: mountain, valley, mountain. For the second method: mountain, mountain, valley. (Try it yourself!) Although the second method performs only two folds, it folds two layers of paper together, so it gets two folding marks in one fold. Section 1.2 Example 2. For $[1,2,2,2,2,1,1,2,2]$, there are different folding methods. I'll display two of them: First, notice that they have the same number of folding marks. The first method can be represented by the folding sequence $[1,1,-1,1,-1,-1,1,1,1]$. We can transform the result of the second method into the first method: just change the blue part to the red part. Section 1.3 Summary. In fact, the definition of fold in the statement always leads to alternating mountain and valley folds. But you can also use different folding methods, because any result can be transformed into alternating mountain and valley folds. Key point 1: while folding, we don't care whether the folds are mountain or valley folds; we only care about the positions of the folding marks (and the rule in the statement). Key point 2: fix the sequence $a$; among all folding methods of $a$, as long as their results don't admit any further folds, the numbers of folding marks must be equal, and the results must be the same (or simply the reversed sequence).
Although I'll provide proofs for these key points, I want you to first grasp them intuitively. Section 2 X-Y-X Folding Method. Once you understand the folding operation intuitively, it's quite easy to come up with a naive algorithm that runs in polynomial time. Now I'll describe a general folding method. Note: I will consider a sequence as a string (with a large character set); the definitions of substring and palindrome carry over to sequences. Let's consider $[\ldots,a,b,b,b,c,\ldots]$: we can fold three $b$'s into one $b$ with a mountain fold and a valley fold. Similarly, consider three consecutive substrings $\textit{XYX}$, where $Y$ is the reverse string of $X$: we can fold them into one $X$ with a mountain fold and a valley fold, according to the first two key points. Let's call such a substring an X-Y-X substring and the folding method an X-Y-X fold. The questions are: How many layers of paper are folded during an X-Y-X fold? Can we find a proper folding method where each fold involves only one layer of paper? How do we fold a string which doesn't contain an X-Y-X substring? Section 3 Key Points (for high-level competitors). I'll state all the key points first, which may lead you to the final solution quickly. Intuitively, we will fold $a$ from left to right. Let's maintain another string $b$. Each time: push $a_i$ to the end of $b$ and check whether $b$ contains a new X-Y-X substring; if it does, fold it. Then calculate the number of folds of $b$ (note that this step won't actually change $b$). After that you get the answer for $[a_1,a_2,\cdots,a_i]$. Key point 3: using this folding method, each fold involves exactly one layer of paper. Key point 4: after pushing $a_i$ to the end of $b$, $b$ contains at most one X-Y-X substring, and it must be a suffix if it exists. Lemma: for a string $s$ and two even palindromic substrings $s[l_1,r_1]$, $s[l_2,r_2]$ of it, if $[l_1,r_1]$ contains the center position of $s[l_2,r_2]$ and $[l_2,r_2]$ contains the center position of $s[l_1,r_1]$, then $s$ must contain an X-Y-X substring.
Using the information above, we can figure out an $O(n^3)$ or $O(n^2)$ solution. Key point 5: for a string $b$ which doesn't contain an X-Y-X substring, we can only fold its prefix or suffix, and it can be folded at most $O(\sqrt{|b|})$ times. Key point 6: for a string $b$ which doesn't contain an X-Y-X substring, after pushing an element to the end of $b$, it has at most $O(\log_2|b|)$ even palindromic suffixes. Using this, we can figure out an $O(n\log_2n+n\sqrt{n})$ or $O(n\log_2n)$ solution, which is enough to pass. Section 3.1 Example. Let's say $a=[1,2,2,2,2,1,1,2,2]$. The folding method proceeds as follows: $b=[1]$. It doesn't admit any folds. The answer is $0$. $b=[1,2]$. It doesn't admit any folds. The answer is $0$. $b=[1,2,2]$. It can be folded once. The answer is $1$. $b=[1,2,2,2]$ has the X-Y-X substring $[2,2,2]$, so $b$ is changed to $[1,2]$ and the X-Y-X counter is increased by $2$. After that, $b$ doesn't admit any folds. The answer is $2$. $b=[1,2]+a_5=[1,2,2]$. It can be folded once. The answer is $3$. $b=[1,2,2]+a_6=[1,2,2,1]$. It can be folded once. The answer is $3$. $b=[1,2,2,1]+a_7=[1,2,2,1,1]$. It can be folded twice. The answer is $4$. $b=[1,2,2,1,1]+a_8=[1,2,2,1,1,2]$ has the X-Y-X substring $[1,2,2,1,1,2]$, so $b$ is changed to $[1,2]$ and the X-Y-X counter is increased by $2$. After that, $b$ doesn't admit any folds. The answer is $4$. $b=[1,2]+a_9=[1,2,2]$. It can be folded once. The answer is $5$. Each time the answer is the sum of the X-Y-X counter and the number of folds of $b$. So the total output is $[0,0,1,2,3,3,4,4,5]$. Section 4 Proof and Understanding. Now I'll present the proofs of some key points. If you already understand them, skip this section and read the algorithm part.
Section 4.1 Lemma. Description: for a string $S$ and two even palindromic substrings $S[l_1,r_1]$, $S[l_2,r_2]$ of it, if $[l_1,r_1]$ contains the center position of $S[l_2,r_2]$ and $[l_2,r_2]$ contains the center position of $S[l_1,r_1]$, then $S$ must contain an X-Y-X substring. Let's say $l_1<l_2$ and $|r_1-l_1|\ge |r_2-l_2|$. By construction we get: the blue lines denote the center positions of the two substrings and the red part forms an X-Y-X substring. Q.E.D. Section 4.2 X-Y-X and Simple X-Y-X. Let's define a Simple X-Y-X substring (S-X-Y-X) as an X-Y-X string which doesn't contain any X-Y-X substring except itself. Key point 7: an S-X-Y-X string contains exactly one even palindromic suffix, i.e. the Y-X part of it. Proof: use proof by contradiction together with the lemma. Section 4.3 Key Point 4. Description: after pushing $a_i$ to the end of $b$, $b$ contains at most one X-Y-X substring, and it must be a suffix if it exists. Let $b'=b+a_i$. Just as in the picture above, if $b'$ contains two or more X-Y-X substrings, there are three cases: black and red; black and blue; black and green. All three cases can be ruled out by the lemma or by direct observation. Q.E.D. By the way: key point 4 shows us that if $b'$ has an X-Y-X substring, it must be an S-X-Y-X substring, and key point 7 shows us that if $b'$ has an X-Y-X substring, it must be produced by its shortest even palindromic suffix. Section 4.4 Key Point 5. Description: for a string $b$ which doesn't contain an X-Y-X substring, we can only fold its prefix or suffix, and it can be folded at most $O(\sqrt{|b|})$ times. Let's take the suffix for example. Each time you can fold an even palindromic suffix if one exists, and the lengths of the folded suffixes must be strictly increasing, since otherwise the string would contain an X-Y-X substring. The same holds for the prefix, so you can fold fewer than $2\sqrt{|b|}$ times. Q.E.D.
Section 4.5 Key Point 6. Description: for a string $b$ which doesn't contain an X-Y-X substring, after pushing an element to the end of $b$, it has at most $O(\log_2|b|)$ even palindromic suffixes. Because of the lemma, two even palindromic suffixes of $b$ cannot both contain each other's center positions, so if we sort them by length, every even palindromic suffix must be at least twice as long as the previous one. Thus $b$ has at most $O(\log_2|b|)$ even palindromic suffixes. Q.E.D. Section 5 Algorithm. The algorithm itself is quite simple. Remember: intuitively, we fold $a$ from left to right while maintaining another string $b$. Each time: push $a_i$ to the end of $b$ and check whether $b$ contains a new X-Y-X substring; if so, fold it. Then calculate the number of folds of $b$ (note that this step won't change $b$). After that you get the answer for $[a_1,a_2,\cdots,a_i]$. Section 5.1 Part 1. Let's maintain $S_i$, the set of even palindromic suffixes of $b$ after the $i$-th step. Since its size is $O(\log_2n)$, use any data structure you want; calculating $S_i$ from $S_{i-1}$ in $O(\log_2n)$ is trivial. Then we simply find the shortest even palindromic suffix of $b$ and check whether it produces an X-Y-X substring, in $O(\log_2n)$. If it does, we fold it, which means removing the shortest even palindromic suffix of $b$. Don't forget to update the X-Y-X counter. Section 5.2 Part 2. To calculate the number of folds of $b$, which doesn't contain an X-Y-X substring: let $p_i$ denote the length of the shortest even palindromic suffix of $b[1,i]$ (indices start from $1$). Let $q_i$ denote the following: if we fold the shortest even palindromic prefix of $b[1,i]$ and repeat folding until no fold is possible, $q_i$ is the position of the first character of the result, i.e. $b[1,i]$ is finally folded to $b[q_i,i]$. Let $c_i$ denote the number of folds during this process. Calculate $q_i$ from $q_{i-1}$ in $O(\log_2n)$ (don't forget $c_i$), and get $p_i$ from $S_i$.
$c_i$ denotes the number of prefix folds of $b$. You can fold suffixes of $b$ using $p$ and calculate the number of suffix folds in $O(\sqrt{n})$. The sum of these two is the number of folds of $b$; adding the X-Y-X counter gives the answer. Removing a suffix of $b$ is trivial. The total time complexity is $O(n\log_2n+n\sqrt{n})$. Section 5.3 Bonus. Can you optimize the time complexity of part 2 to $O(\log_2n)$ and implement the algorithm in $O(n\log_2n)$?
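For intuition about key point 6, here is a tiny brute-force helper (my own, not part of the intended solution) that lists the even palindromic suffixes of $b$ by direct checking; on strings without an X-Y-X substring their lengths at least double from one to the next:

```python
def even_pal_suffixes(b):
    """Lengths of all even palindromic suffixes of b, by O(n^2) direct checking."""
    n = len(b)
    return [L for L in range(2, n + 1, 2) if b[n - L:] == b[n - L:][::-1]]
```

For example, $[1,2,2,1]$ has the single even palindromic suffix of length $4$, while $[1,2,2,1,1]$ only has $[1,1]$ of length $2$.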
[ "strings" ]
3,500
//by Sshwy #include<bits/stdc++.h> using namespace std; #define pb push_back #define FOR(i,a,b) for(int i=(a);i<=(b);++i) #define ROF(i,a,b) for(int i=(a);i>=(b);--i) const int N=1e5+5; int n,a[N],xyx,q[N],c[N],p[N]; vector<int> v[N]; int pos; void get_v(int pos){ v[pos].clear(); if(pos==1)return; if(a[pos]==a[pos-1])v[pos].pb(2); for(auto x:v[pos-1])if(a[pos]==a[pos-x-1])v[pos].pb(x+2); } bool check_even_pal(int L,int R){ assert(R<=pos); for(auto x:v[R])if(x==R-L+1)return 1; return 0; } int qry(){ int res=xyx+c[pos]; int cur=pos; while(p[cur] && q[pos]<=cur-p[cur]/2){ cur-=p[cur]/2; ++res; } return res; } int main(){ scanf("%d",&n); q[0]=1; FOR(i,1,n){ ++pos; scanf("%d",&a[pos]); get_v(pos); int x; if(v[pos].size() && (x=v[pos][0], check_even_pal(pos-x/2-x+1,pos-x/2))){ pos-=x; xyx+=2; }else { p[pos]=v[pos].size()? v[pos][0]:0; q[pos]=q[pos-1],c[pos]=c[pos-1]; if(check_even_pal(q[pos],pos)){ q[pos]+=(pos-q[pos]+1)/2; c[pos]++; } } printf("%d%c",qry()," \n"[i==n]); } return 0; }
1395
A
Boboniu Likes to Color Balls
Boboniu gives you - $r$ red balls, - $g$ green balls, - $b$ blue balls, - $w$ white balls. He allows you to do the following operation as many times as you want: - Pick a red ball, a green ball, and a blue ball and then change their color to white. You should answer if it's possible to arrange all the balls into a palindrome after several (possibly zero) number of described operations.
If at most one of $r$, $g$, $b$, $w$ is odd, then you can arrange the balls into a palindrome. Otherwise, do the operation once (if you can) and check the condition above again. It is pointless to do the operation more than once, because we only care about the parities of $r$, $g$, $b$, $w$, and performing the operation twice restores them.
[ "brute force", "math" ]
1,000
def check(r,g,b,w): return False if r%2 + g%2 + b%2 + w%2 > 1 else True if __name__ == '__main__': T = int(input()) for ttt in range(T): r,g,b,w = map(int,input().split()) if check(r,g,b,w): print("Yes") elif r>0 and g>0 and b>0 and check(r-1,g-1,b-1,w+1): print("Yes") else : print("No")
1395
B
Boboniu Plays Chess
Boboniu likes playing chess with his employees. As we know, no employee can beat the boss in the chess game, so Boboniu has never lost in any round. You are a new applicant for his company. Boboniu will test you with the following chess question: Consider a $n\times m$ grid (rows are numbered from $1$ to $n$, and columns are numbered from $1$ to $m$). You have a chess piece, and it stands at some cell $(S_x,S_y)$ which is not on the border (i.e. $2 \le S_x \le n-1$ and $2 \le S_y \le m-1$). From the cell $(x,y)$, you can move your chess piece to $(x,y')$ ($1\le y'\le m, y' \neq y$) or $(x',y)$ ($1\le x'\le n, x'\neq x$). In other words, the chess piece moves as a rook. From the cell, you can move to any cell on the same row or column. Your goal is to visit each cell exactly once. Can you find a solution? Note that cells on the path between two adjacent cells in your route are not counted as visited, and it is not required to return to the starting point.
There are many solutions and I will describe one of them. Let $f(i,j) = ( (i+S_x-2)\bmod n+1, (j+S_y-2)\bmod m+1 )$. Iterate $i$ from $1$ to $n$: if $i$ is odd, print $f(i,1),f(i,2),\ldots,f(i,m)$; otherwise print $f(i,m),f(i,m-1),\ldots,f(i,1)$.
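A sketch of this construction (function names are mine): $f$ shifts the grid so that the snake starts at $(S_x, S_y)$, and the alternating row direction guarantees that consecutive cells always share a row, while row changes keep the column fixed.

```python
def rook_route(n, m, sx, sy):
    """Snake through the shifted grid; every consecutive pair of cells
    shares a row or a column, and every cell is visited exactly once."""
    def f(i, j):
        return ((i + sx - 2) % n + 1, (j + sy - 2) % m + 1)
    route = []
    for i in range(1, n + 1):
        cols = range(1, m + 1) if i % 2 == 1 else range(m, 0, -1)
        for j in cols:
            route.append(f(i, j))
    return route
```

Since $f$ is a bijection on the grid, the route visits all $n \cdot m$ cells, starting at $(S_x, S_y)$.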
[ "constructive algorithms" ]
1,100
#include<bits/stdc++.h> using namespace std; #define FOR(i,a,b) for(int i=(a);i<=(b);++i) #define ROF(i,a,b) for(int i=(a);i>=(b);--i) int n,m,sx,sy; void f(int i,int j){ printf("%d %d\n",(i+sx-2)%n+1,(j+sy-2)%m+1); } int main(){ scanf("%d%d%d%d",&n,&m,&sx,&sy); FOR(i,1,n){ if(i&1)FOR(j,1,m)f(i,j); else ROF(j,m,1)f(i,j); } return 0; }
1395
C
Boboniu and Bit Operations
Boboniu likes bit operations. He wants to play a game with you. Boboniu gives you two sequences of non-negative integers $a_1,a_2,\ldots,a_n$ and $b_1,b_2,\ldots,b_m$. For each $i$ ($1\le i\le n$), you're asked to choose a $j$ ($1\le j\le m$) and let $c_i=a_i\& b_j$, where $\&$ denotes the bitwise AND operation. Note that you can pick the same $j$ for different $i$'s. Find the minimum possible $c_1 | c_2 | \ldots | c_n$, where $|$ denotes the bitwise OR operation.
Suppose the answer is $A$. Then for all $i$ ($1\le i\le n$), $c_i | A = A$. Since $a_i, b_j <2^9$, we can enumerate all integers $A$ from $0$ to $2^9-1$ and check whether, for each $i$, there exists a $j$ such that $(a_i \& b_j) | A = A$. The minimum valid $A$ is the answer. The time complexity is $O(2^9\cdot nm)$.
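The brute-force check is only a few lines of Python (function name is mine; candidates are tried in increasing order, so the first valid one is the minimum):

```python
def min_or(a, b):
    """Smallest A in [0, 2^9) such that every a_i admits some b_j
    with (a_i & b_j) | A == A, i.e. a_i & b_j adds no new bits to A."""
    for A in range(1 << 9):
        if all(any((x & y) | A == A for y in b) for x in a):
            return A
```

On the first sample ($a=[2,6,4,0]$, $b=[2,4]$) this returns $2$.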
[ "bitmasks", "brute force", "dp", "greedy" ]
1,600
#include<bits/stdc++.h> #define ci const int& using namespace std; int n,m,p[210],d[210],ans; bool Check(ci x){ for(int i=1;i<=n;++i){ for(int j=1;j<=m;++j)if(((p[i]&d[j])|x)==x)goto Next; return 0; Next:; } return 1; } int main(){ scanf("%d%d",&n,&m); for(int i=1;i<=n;++i)scanf("%d",&p[i]); for(int i=1;i<=m;++i)scanf("%d",&d[i]); ans=(1<<9)-1; for(int i=8;i>=0;--i)Check(ans^(1<<i))?ans^=(1<<i):0; printf("%d",ans); return 0; }
1396
A
Multiples of Length
You are given an array $a$ of $n$ integers. You want to make all elements of $a$ equal to zero by doing the following operation \textbf{exactly three} times: - Select a segment, for each number in this segment we can add a multiple of $len$ to it, where $len$ is the length of this segment (added integers can be different). It can be proven that it is always possible to make all elements of $a$ equal to zero.
In this problem, the answer is rather simple. Here is one possible solution. If $n = 1$, a length-$1$ segment allows adding any integer, so the three operations are: $1 \space 1$ with $-a_1$, and $1 \space 1$ with $0$ twice. Otherwise: first, on segment $1 \space 1$ add $-a_1$; second, on segment $1 \space n$ add $0, \space -n \cdot a_2, \space -n \cdot a_3, \space \dots , \space -n \cdot a_n$ (multiples of $n$), after which $a_i$ becomes $-(n-1) \cdot a_i$ for $i \ge 2$; third, on segment $2 \space n$ (length $n-1$) add $(n-1) \cdot a_2, \space (n-1) \cdot a_3, \space \dots , \space (n-1) \cdot a_n$, where the $a_i$ refer to the original values.
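A sketch of this construction (my own function; it returns the three operations as `(l, r, adds)` triples with 1-indexed segments, each add being a multiple of the segment length):

```python
def three_ops(a):
    """Produce three operations that zero the array, per the construction above."""
    n = len(a)
    a = a[:]
    if n == 1:
        return [(1, 1, [-a[0]]), (1, 1, [0]), (1, 1, [0])]
    ops = [(1, 1, [-a[0]])]            # 1) zero a_1 with a length-1 segment
    a[0] = 0
    adds = [-n * x for x in a]         # 2) on [1, n]: a_i -> -(n-1) * a_i
    ops.append((1, n, adds))
    a = [x + d for x, d in zip(a, adds)]
    ops.append((2, n, [-x for x in a[1:]]))   # 3) on [2, n]: cancel with multiples of n-1
    return ops
```

Applying the three returned operations to the input array always yields all zeros, and each added value is divisible by its segment's length.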
[ "constructive algorithms", "greedy", "number theory" ]
1,600
n = int(input()) a = list(map(int, input().split())) if n == 1: print('1 1', -a[0], '1 1', '0', '1 1', '0', sep='\n') exit(0) print(1, n) for i in range(n): print(-a[i] * n, end = ' ') a[i] -= a[i] * n print() print(1, n - 1) for i in range(n - 1): print(-a[i], end = ' ') a[i] = 0 print() print(2, n) for i in range(1, n): print(-a[i], end = ' ') a[i] = 0 print()
1396
B
Stoned Game
T is playing a game with his friend, HL. There are $n$ piles of stones, the $i$-th pile initially has $a_i$ stones. T and HL will take alternating turns, with T going first. In each turn, a player chooses a non-empty pile and then removes a single stone from it. However, one cannot choose a pile that has been chosen in the previous turn (the pile that was chosen by the other player, or if the current turn is the first turn then the player can choose any non-empty pile). The player who cannot choose a pile in his turn loses, and the game ends. Assuming both players play optimally, given the starting configuration of $t$ games, determine the winner of each game.
Let us denote $S$ as the current total number of stones. Consider the following cases: Case A: There is a pile that has more than $\lfloor \frac{S}{2} \rfloor$ stones. The first player (T) can always choose from this pile, thus he (T) is the winner. Case B: Every pile has at most $\lfloor \frac{S}{2} \rfloor$ stones, and $S$ is even. It can be proven that the second player (HL) always wins. Let us prove by induction: When $S = 0$, the second player obviously wins. When $S \geq 2$, consider the game state after the first player moves. If there is a pile that now has more than $\lfloor \frac{S}{2} \rfloor$ stones, then we arrive back at case A where the next player to move wins. Otherwise, the second player can choose from any valid pile (note that the case condition implies that there are at least two non-empty piles before the first player's move). Now $S$ has been reduced by $2$, and every pile still has at most $\lfloor \frac{S}{2} \rfloor$ stones. The condition allows us to assign a perfect matching of stones, where one stone is matched with exactly one stone from a different pile. A greedy way to create such a matching: Give each label $0, 1, \dots, S - 1$ to a different stone so that for every pair of stones with labels $l < r$ that are from the same pile, stones $l + 1, l + 2, \dots, r - 1$ are also from that pile; then match stones $i$ with $i + \frac{S}{2}$ for all $0 \le i < \frac{S}{2}$. For every stone that the first player removes, the second player can always remove its matching stone, until the first player can no longer make a move and loses. Case C: Every pile has at most $\lfloor \frac{S}{2} \rfloor$ stones, and $S$ is odd. The first player (T) can choose from any pile, and we arrive back at case B where the next player to move loses. So the first player (T) wins if and only if there is a pile that has more than $\lfloor \frac{S}{2} \rfloor$ stones or $S$ is odd. This can be easily checked in $O(n)$.
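The criterion can be sanity-checked against a brute-force game search on small inputs (both functions are my own sketches; the brute force simply tries every legal move):

```python
from functools import lru_cache

def brute_winner(piles):
    """Exhaustive search: current player wins iff some move leaves a losing state."""
    @lru_cache(maxsize=None)
    def first_wins(state, banned):
        # banned = pile index the current player may not take from (-1 = none)
        for i, p in enumerate(state):
            if p > 0 and i != banned:
                nxt = list(state)
                nxt[i] -= 1
                if not first_wins(tuple(nxt), i):
                    return True
        return False
    return 'T' if first_wins(tuple(piles), -1) else 'HL'

def fast_winner(piles):
    """Editorial criterion: T wins iff some pile exceeds floor(S/2) or S is odd."""
    s = sum(piles)
    return 'T' if max(piles) * 2 > s or s % 2 == 1 else 'HL'
```

The two agree on all small two-pile configurations, matching the case analysis above.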
[ "brute force", "constructive algorithms", "games", "greedy" ]
1,800
t = int(input())
for _ in range(t):
    n = int(input())
    a = [int(x) for x in input().split()]
    maxPile = max(a)
    numStones = sum(a)
    if maxPile * 2 > numStones or (numStones & 1):
        print('T')
    else:
        print('HL')
1396
C
Monster Invaders
Ziota found a video game called "Monster Invaders". Similar to every other shooting RPG game, "Monster Invaders" involves killing monsters and bosses with guns. For the sake of simplicity, we only consider two different types of monsters and three different types of guns. Namely, the two types of monsters are: - a normal monster with $1$ hp. - a boss with $2$ hp. And the three types of guns are: - Pistol, deals $1$ hp in damage to one monster, $r_1$ reloading time - Laser gun, deals $1$ hp in damage to all the monsters in the current level (including the boss), $r_2$ reloading time - AWP, instantly kills any monster, $r_3$ reloading time \textbf{The guns are initially not loaded, and Ziota can only reload 1 gun at a time.} The levels of the game can be considered as an array $a_1, a_2, \ldots, a_n$, in which \textbf{the $i$-th stage has $a_i$ normal monsters and 1 boss}. Due to the nature of the game, \textbf{Ziota cannot use the Pistol (the first type of gun) or AWP (the third type of gun) to shoot the boss before killing all of the $a_i$ normal monsters}. If Ziota damages the boss but does not kill it immediately, \textbf{he is forced to move out of the current level to an arbitrary adjacent level} (adjacent levels of level $i$ $(1 < i < n)$ are levels $i - 1$ and $i + 1$, the only adjacent level of level $1$ is level $2$, the only adjacent level of level $n$ is level $n - 1$). Ziota can also choose to move to an adjacent level at any time. \textbf{Each move between adjacent levels is managed by portals with $d$ teleportation time.} In order not to disrupt the space-time continuum within the game, \textbf{it is strictly forbidden to reload or shoot monsters during teleportation.} Ziota starts the game at level 1. The objective of the game is rather simple: to kill all the bosses in all the levels. 
He is curious about the minimum time to finish the game (assuming it takes no time to shoot the monsters with a loaded gun and Ziota has infinite ammo on all the three guns). Please help him find this value.
In this problem, it is useful to note that when the boss has only $1$ hp left, we should finish it with the pistol, since the pistol has the smallest reloading time. So there are $3$ strategies we can use at stage $i$ $(1 \le i \le n)$: take $a_i$ pistol shots to kill the $a_i$ normal monsters and shoot the boss with the AWP; take $a_i + 1$ pistol shots and move back to this stage later to take one more pistol shot to finish the boss; use the laser gun and move back to this stage later to kill the boss with a pistol shot.

Observation: we always finish the game at stage $n$ or $n - 1$. If we are at stage $i$ $(i \le n - 1)$ and the bosses at both stage $i$ and stage $i - 1$ have $1$ hp left, we can spend $2 \cdot d$ time to finish both of these stages now instead of going back later, which costs exactly the same.

Therefore, we calculate $dp(i, 0/1)$ as the minimum time to finish the first $i - 1$ stages, where $0/1$ is the remaining hp of the boss at stage $i$. The transitions follow from the $3$ strategies above. The only thing to note is that we can actually finish the game at stage $n - 1$ by instantly killing the boss at stage $n$ with the AWP, so we don't have to go back to that level later. The answer to the problem is $dp(n, 0)$. Time complexity: $O(n)$.
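For intuition, the two per-stage costs behind the transitions can be written down directly. This is a hedged sketch, not the reference solution; the function name is illustrative and movement time is ignored:

```python
def stage_costs(a_i, r1, r2, r3):
    """Reloading time spent at one stage, ignoring movement.

    kill:  a_i pistol shots for the normal monsters, then AWP on the boss.
    leave: boss left at 1 hp, either via one laser blast or via
           a_i + 1 pistol shots (normals plus one hit on the boss)."""
    kill = r1 * a_i + r3
    leave = min(r2, r1 * (a_i + 1))
    return kill, leave
```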
[ "dp", "greedy", "implementation" ]
2,300
/*input
4 2 4 4 1
4 5 1 2
*/
#include <bits/stdc++.h>
using namespace std;

int read() {
    int x = 0, c = getchar();
    for (; !(c > 47 && c < 58); c = getchar());
    for (; (c > 47 && c < 58); c = getchar()) x = x * 10 + c - 48;
    return x;
}

void upd(long long &a, long long b) { a = (a < b) ? a : b; }

const int N = 1e6 + 5;
long long f[N][2];
int n, r1, r2, r3, d, a[N];

int main() {
    n = read(), r1 = read(), r2 = read(), r3 = read(), d = read();
    for (int i = 1; i <= n; a[i++] = read());
    for (int i = 2; i <= n; ++i) f[i][0] = f[i][1] = 1e18;
    f[1][0] = 1ll * r1 * a[1] + r3;
    f[1][1] = min(0ll + r2, 1ll * r1 * a[1] + r1);
    for (int i = 1; i < n; ++i) {
        // 0 -> 0: so we clear this one and the next one as well
        upd(f[i + 1][0], f[i][0] + d + 1ll * r1 * a[i + 1] + r3);
        // 0 -> 1: this one is cleared, but next one isn't
        upd(f[i + 1][1], f[i][0] + d + min(0ll + r2, 1ll * r1 * a[i + 1] + r1));
        // 1 -> 0
        upd(f[i + 1][0], f[i][1] + d + 1ll * r1 * a[i + 1] + r3 + 2 * d + r1);
        upd(f[i + 1][0], f[i][1] + d + 1ll * r1 * a[i + 1] + r1 + d + r1 + d + r1);
        upd(f[i + 1][0], f[i][1] + d + r2 + d + r1 + d + r1);
        // 1 -> 1
        upd(f[i + 1][1], f[i][1] + d + r2 + d + r1 + d);
        upd(f[i + 1][1], f[i][1] + d + 1ll * r1 * a[i + 1] + r1 + d + r1 + d);
        if (i == n - 1) {
            upd(f[i + 1][0], f[i][1] + d + 1ll * r1 * a[i + 1] + r3 + d + r1);
        }
    }
    cout << f[n][0] << endl;
}
1396
D
Rainbow Rectangles
Shrimpy Duc is a fat and greedy boy who is always hungry. After a while of searching for food to satisfy his never-ending hunger, Shrimpy Duc finds M&M candies lying unguarded on a $L \times L$ grid. There are $n$ M&M candies on the grid, the $i$-th M&M is currently located at $(x_i + 0.5, y_i + 0.5),$ and has color $c_i$ out of a total of $k$ colors (the sizes of the M&Ms are insignificant). Shrimpy Duc wants to steal a \textbf{rectangle} of M&Ms; specifically, he wants to select a rectangle with \textbf{integer} coordinates within the grid and steal all candies within the rectangle. Shrimpy Duc doesn't need to steal every single candy; however, he would like to steal \textbf{at least one candy for each color}. In other words, he wants to select a rectangle whose sides are parallel to the coordinate axes and whose left-bottom vertex $(X_1, Y_1)$ and right-top vertex $(X_2, Y_2)$ are points with integer coordinates satisfying $0 \le X_1 < X_2 \le L$ and $0 \le Y_1 < Y_2 \le L$, so that for every color $1 \le c \le k$ there is at least one M&M with color $c$ that lies within that rectangle. How many such rectangles are there? This number may be large, so you only need to find it modulo $10^9 + 7$.
Let $(xl, xr, yd, yu)$ denote a rectangle with opposite corners $(xl, yd)$ and $(xr, yu)$. For convenience, assume $xl \le xr$ and $yd \le yu$. Let's first solve the problem when coordinates are in the range $[1, n]$; we can reduce to this case by coordinate compression.

First, consider the problem with $(yd, yu)$ fixed. We define $f_x$ to be the smallest integer such that $x \le f_x$ and $(x, yd), (f_x, yu)$ is a $\textbf{good}$ rectangle (if there is no such integer, let $f_x = \infty$). It can be proven that $f_x$ is non-decreasing, i.e. if $x < y$, then $f_x \le f_y$.

Now, let's see how $f_x$ changes as we iterate $yd$ for a fixed $yu$. It is hard to add points to the set, so we instead support a delete operation. For point $i$, we have the following definitions: let $S = \{ j \mid c_j = c_i, y_i < y_j \le yu, x_j \le x_i \}$ and $prv_i$ be the $j \in S$ with the largest $x_j$; let $S' = \{ j \mid c_j = c_i, y_i < y_j \le yu, x_j \ge x_i \}$ and $nxt_i$ be the $j \in S'$ with the smallest $x_j$ (note that $S$ or $S'$ may be empty). With these two values we can see how $f_x$ changes after we delete point $i$: for every $xl \in (x_{prv_i}, x_i]$ such that $f_{xl} \ge x_i$, set $f_{xl} = \max(f_{xl}, x_{nxt_i})$. We can support this operation using a segment tree with lazy propagation. The total time complexity is $O(n^2 \log n)$.
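The coordinate compression mentioned above can be sketched as follows (an illustrative helper, not part of the reference solution):

```python
def compress(coords):
    """Map each coordinate to its rank in [1, n], preserving order."""
    xs = sorted(set(coords))
    rank = {v: i + 1 for i, v in enumerate(xs)}
    return [rank[v] for v in coords]
```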
[ "data structures", "sortings", "two pointers" ]
3,300
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

const int MOD = 1000000007;

int L;
ll sum[8040];
int len[8040];
int last[8040];
int lazy[8040];

void init(int v, int l, int r, const vector<int> &xs) {
    len[v] = xs[r] - xs[l - 1];
    if (l < r) {
        int md = (l + r) >> 1;
        init(v << 1, l, md, xs);
        init(v << 1 | 1, md + 1, r, xs);
    }
}

void reset(int v, int l, int r, const vector<int> &go) {
    lazy[v] = -1;
    if (l == r) {
        sum[v] = ll(len[v]) * (L - go[l]);
        last[v] = go[l];
        return;
    }
    int md = (l + r) >> 1;
    reset(v << 1, l, md, go);
    reset(v << 1 | 1, md + 1, r, go);
    sum[v] = sum[v << 1] + sum[v << 1 | 1];
    last[v] = last[v << 1 | 1];
}

void push(int v, int l, int r) {
    if (lazy[v] != -1) {
        last[v] = lazy[v];
        sum[v] = ll(len[v]) * (L - lazy[v]);
        if (l < r) {
            lazy[v << 1] = lazy[v];
            lazy[v << 1 | 1] = lazy[v];
        }
        lazy[v] = -1;
    }
}

void modify(int v, int l, int r, int L, int R, int qv) {
    push(v, l, r);
    if (L > r || R < l) return;
    if (L <= l && r <= R) {
        lazy[v] = qv;
        push(v, l, r);
        return;
    }
    int md = (l + r) >> 1;
    modify(v << 1, l, md, L, R, qv);
    modify(v << 1 | 1, md + 1, r, L, R, qv);
    sum[v] = sum[v << 1] + sum[v << 1 | 1];
    last[v] = last[v << 1 | 1];
}

int walk(int v, int l, int r, int qv) {
    push(v, l, r);
    if (last[v] <= qv) return -1;
    if (l == r) return l;
    int md = (l + r) >> 1;
    int ans = walk(v << 1, l, md, qv);
    if (ans == -1) ans = walk(v << 1 | 1, md + 1, r, qv);
    return ans;
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    int N, K;
    cin >> N >> K >> L;
    vector<int> X(N), Y(N), C(N);
    vector<int> xs = {-1, L};
    vector<int> ys = {-1, L};
    for (int i = 0; i < N; ++i) {
        cin >> X[i] >> Y[i] >> C[i];
        --C[i];
        xs.emplace_back(X[i]);
        ys.emplace_back(Y[i]);
    }
    sort(xs.begin(), xs.end());
    xs.resize(unique(xs.begin(), xs.end()) - xs.begin());
    sort(ys.begin(), ys.end());
    ys.resize(unique(ys.begin(), ys.end()) - ys.begin());
    int NX = xs.size();
    int NY = ys.size();
    {
        vector<int> order(N);
        iota(order.begin(), order.end(), 0);
        sort(order.begin(), order.end(),
             [&](int i, int j) { return make_pair(Y[i], -X[i]) > make_pair(Y[j], -X[j]); });
        vector<int> newX(N), newY(N), newC(N);
        for (int i = 0; i < N; ++i) {
            newX[i] = X[order[i]];
            newY[i] = Y[order[i]];
            newC[i] = C[order[i]];
        }
        X.swap(newX), Y.swap(newY), C.swap(newC);
    }
    init(1, 1, NX - 2, xs);
    int ans = 0;
    for (int yr = 1; yr + 1 < NY; ++yr) {
        vector<vector<int>> addAt(NX);
        for (int i = 0; i < N; ++i) {
            if (Y[i] <= ys[yr]) {
                int xi = lower_bound(xs.begin(), xs.end(), X[i]) - xs.begin();
                addAt[xi].emplace_back(C[i]);
            }
        }
        int bad = K;
        vector<int> cnts(K);
        auto inc = [&](int z) { if (++cnts[z] == 1) --bad; };
        auto dec = [&](int z) { if (--cnts[z] == 0) ++bad; };
        vector<int> go(NX);
        int ptr = 0;
        for (int i = 1; i + 1 < NX; ++i) {
            while (bad && ptr + 2 < NX) {
                ptr++;
                for (int z : addAt[ptr]) inc(z);
            }
            if (bad) go[i] = L;
            else go[i] = xs[ptr];
            for (int z : addAt[i]) dec(z);
        }
        reset(1, 1, NX - 2, go);
        vector<int> prv(N);
        vector<int> nxt(N);
        vector<map<int, int>> mp(K);
        for (int i = 0; i < N; ++i) {
            if (Y[i] <= ys[yr]) {
                auto it = mp[C[i]].lower_bound(X[i]);
                if (it == mp[C[i]].end()) {
                    nxt[i] = -1;
                } else {
                    nxt[i] = it->second;
                }
                it = mp[C[i]].upper_bound(X[i]);
                if (it == mp[C[i]].begin()) {
                    prv[i] = -1;
                } else {
                    prv[i] = prev(it)->second;
                }
                mp[C[i]][X[i]] = i;
            }
        }
        auto remove = [&](int i) {
            int xprv = (prv[i] == -1 ? -1 : X[prv[i]]);
            int xcur = X[i];
            int xnxt = (nxt[i] == -1 ? L : X[nxt[i]]);
            int l = lower_bound(xs.begin(), xs.end(), xprv) - xs.begin() + 1;
            int r = walk(1, 1, NX - 2, xnxt);
            if (r == -1) r = NX - 1;
            --r;
            r = min(r, int(lower_bound(xs.begin(), xs.end(), xcur) - xs.begin()));
            if (l <= r) modify(1, 1, NX - 2, l, r, xnxt);
        };
        ptr = N - 1;
        for (int yl = 1; yl <= yr; ++yl) {
            ll add = sum[1] % MOD * (ys[yr + 1] - ys[yr]) % MOD * (ys[yl] - ys[yl - 1]) % MOD;
            ans = (ans + add) % MOD;
            while (ptr >= 0 && Y[ptr] == ys[yl]) remove(ptr--);
        }
        assert(sum[1] == 0);
    }
    cout << ans << "\n";
    return 0;
}
1396
E
Distance Matching
You are given an integer $k$ and a tree $T$ with $n$ nodes ($n$ is even). Let $dist(u, v)$ be the number of edges on the shortest path from node $u$ to node $v$ in $T$. Let us define an undirected weighted complete graph $G = (V, E)$ as follows: - $V = \{x \mid 1 \le x \le n \}$ i.e. the set of integers from $1$ to $n$ - $E = \{(u, v, w) \mid 1 \le u, v \le n, u \neq v, w = dist(u, v) \}$ i.e. there is an edge between every pair of distinct nodes, the weight being the distance between their respective nodes in $T$ Your task is simple: find a perfect matching in $G$ with total edge weight $k$ $(1 \le k \le n^2)$.
Root the tree at the centroid $c$. First, determine whether any matching satisfies the requirement. Consider an edge $e$ that splits the tree into two subtrees of sizes $x$ and $N - x$, and let $z$ be the number of matched paths passing through $e$; then $z$ has the same parity as $x$ and $x \bmod 2 \le z \le \min(x, N - x)$. Thus a necessary condition for a matching is $\sum (sub(v) \bmod 2) \le K \le \sum sub(v)$ and $K$ has the same parity as $\sum sub(v)$, where the sums are over $v \ne c$ and $sub(v)$ is the size of the subtree rooted at $v$.

We prove that this condition is also sufficient by construction. Consider the matching with maximum $K$; note that $c$ lies on all the paths in this matching. Observe that if we remove two vertices from the largest subtree, rooted at some $w \ne c$, then $c$ is still the centroid. Also, if we match two vertices $u$ and $v$ inside the subtree rooted at $w$, the answer decreases by $2 \cdot dist(c, lca(u, v))$. Based on this, we can reach the target $K$ by repeating the following operation ($currentK$ is the current maximum possible $K$, initially $\sum sub(v)$): let $z$ be a non-leaf vertex in the largest subtree such that $dist(c, z) \le \frac{currentK - targetK}{2}$ (if there are several such $z$, take any with maximum $dist(c, z)$); match two vertices $u$ and $v$ whose LCA is $z$, then remove $u$ and $v$ from the tree. Eventually $currentK = targetK$, so we just greedily match the remaining vertices to create the final matching. The final complexity is $O(N \log N)$.
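The feasibility condition can be checked from the multiset of subtree sizes $sub(v)$, $v \ne c$, alone. This is a hedged sketch with an illustrative function name; the sizes are assumed to be precomputed by a DFS from the centroid:

```python
def feasible(sub_sizes, k):
    """sub_sizes: the sizes sub(v) for every vertex v != c when the tree
    is rooted at the centroid c.  A perfect matching of total weight k
    exists iff low <= k <= high and k has the same parity as high."""
    low = sum(s % 2 for s in sub_sizes)
    high = sum(sub_sizes)
    return low <= k <= high and (high - k) % 2 == 0
```

For a path 1-2-3-4 rooted at its centroid 2, the sizes are [1, 2, 1], so only k = 2 and k = 4 are achievable.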
[ "constructive algorithms", "dfs and similar", "trees" ]
3,200
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    int N;
    ll K;
    cin >> N >> K;
    vector<vector<int>> adj(N);
    for (int i = 0; i < N - 1; ++i) {
        int v, u;
        cin >> v >> u;
        adj[--v].emplace_back(--u);
        adj[u].emplace_back(v);
    }
    vector<int> sz(N);
    function<void(int, int)> dfs1 = [&](int v, int p) {
        sz[v] = 1;
        for (int u : adj[v]) if (u != p) {
            dfs1(u, v);
            sz[v] += sz[u];
        }
    };
    dfs1(0, -1);
    int root = 0;
    for (int i = 1; i < N; ++i) {
        if (sz[i] >= N / 2 && sz[i] < sz[root]) root = i;
    }
    vector<int> dist(N);
    vector<int> top(N, -1);
    vector<int> par(N, -1);
    ll low = 0, high = 0;
    function<void(int, int, int)> dfs2 = [&](int v, int p, int r) {
        dist[v] = dist[p] + 1;
        top[v] = r;
        par[v] = p;
        {
            auto it = find(adj[v].begin(), adj[v].end(), p);
            assert(it != adj[v].end());
            adj[v].erase(it);
        }
        sz[v] = 1;
        for (int u : adj[v]) {
            dfs2(u, v, r);
            sz[v] += sz[u];
        }
        low += (sz[v] & 1);
        high += sz[v];
    };
    for (int v : adj[root]) {
        dfs2(v, root, v);
    }
    if (low > K || high < K || (high - K) % 2) {
        cout << "NO\n";
        return 0;
    }
    set<pair<int, int>> sizes;
    for (int v : adj[root]) {
        sizes.emplace(sz[v], v);
    }
    vector<set<pair<int, int>>> lcas(N);
    vector<int> deg(N);
    for (int v = 0; v < N; ++v) deg[v] = adj[v].size();
    for (int v = 0; v < N; ++v) if (v != root && deg[v] > 0) {
        lcas[top[v]].emplace(dist[v], v);
    }
    vector<bool> matched(N);
    function<void(int)> kill = [&](int v) {
        assert(deg[v] == 0);
        if (--deg[par[v]] == 0) {
            v = par[v];
            lcas[top[v]].erase(pair<int, int>(dist[v], v));
        }
    };
    cout << "YES\n";
    while (high > K) {
        assert(sizes.size());
        int v = (--sizes.end())->second;
        sizes.erase(pair<int, int>(sz[v], v));
        assert(lcas[v].size());
        int mdist = (--lcas[v].end())->first;
        if (high - 2 * mdist <= K) {
            int x = lcas[v].lower_bound(pair<int, int>((high - K) / 2, -1))->second;
            int y = -1;
            for (int z : adj[x]) if (!matched[z]) { y = z; break; }
            high = K;
            cout << x + 1 << " " << y + 1 << "\n";
            matched[x] = true;
            matched[y] = true;
            break;
        } else {
            high -= 2 * mdist;
            assert(lcas[v].size());
            int u = (--lcas[v].end())->second;
            vector<int> nxts;
            while (nxts.size() < 2 && adj[u].size()) {
                int w = adj[u].back();
                adj[u].pop_back();
                if (!matched[w]) {
                    nxts.emplace_back(w);
                }
            }
            if (nxts.size() < 2) nxts.emplace_back(u);
            assert(nxts.size() == 2);
            cout << nxts[0] + 1 << " " << nxts[1] + 1 << "\n";
            matched[nxts[0]] = true;
            matched[nxts[1]] = true;
            kill(nxts[0]);
            kill(nxts[1]);
            sz[v] -= 2;
            if (sz[v]) sizes.emplace(sz[v], v);
        }
    }
    vector<int> seq;
    function<void(int)> dfs3 = [&](int v) {
        if (!matched[v]) seq.emplace_back(v);
        for (int u : adj[v]) dfs3(u);
    };
    dfs3(root);
    int h = seq.size() / 2;
    for (int i = 0; i < h; ++i) {
        cout << seq[i] + 1 << " " << seq[i + h] + 1 << "\n";
    }
}
1397
A
Juggling Letters
You are given $n$ strings $s_1, s_2, \ldots, s_n$ consisting of lowercase Latin letters. In one operation you can remove a character from a string $s_i$ and insert it to an arbitrary position in a string $s_j$ ($j$ may be equal to $i$). You may perform this operation any number of times. Is it possible to make all $n$ strings equal?
If the total number of occurrences of some character $c$ is not a multiple of $n$, then it is impossible to make all $n$ strings equal, because then the $n$ strings cannot all contain the same number of occurrences of $c$. On the other hand, if the total number of occurrences of every character $c$ is a multiple of $n$, it is always possible: for every character $c$, move exactly ((the total number of occurrences of $c$) $/$ $n$) copies of $c$ to the end of each string, and in the end all $n$ strings are equal to each other. We can easily check whether the condition holds by counting the total number of occurrences of each character $c$ and checking its divisibility by $n$. The final complexity is $O(S \cdot 26)$ or $O(S)$, where $S$ is the sum of the lengths of all strings.
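A minimal sketch of this divisibility check (illustrative function name):

```python
from collections import Counter

def can_equalize(strings):
    """All strings can be made equal iff every character's total
    count is divisible by the number of strings."""
    n = len(strings)
    total = Counter()
    for s in strings:
        total.update(s)
    return all(v % n == 0 for v in total.values())
```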
[ "greedy", "strings" ]
800
numTests = int(input())
for testNo in range(numTests):
    n = int(input())
    cnt = [0 for i in range(26)]
    for _ in range(n):
        s = input()
        for i in s:
            cnt[ord(i) - 97] += 1
    ans = True
    for i in range(26):
        if cnt[i] % n != 0:
            ans = False
            break
    if ans:
        print('YES')
    else:
        print('NO')
1397
B
Power Sequence
Let's call a list of positive integers $a_0, a_1, ..., a_{n-1}$ a \textbf{power sequence} if there is a positive integer $c$ such that $a_i = c^i$ for every $0 \le i \le n-1$. Given a list of $n$ positive integers $a_0, a_1, ..., a_{n-1}$, you are allowed to: - Reorder the list (i.e. pick a permutation $p$ of $\{0,1,...,n - 1\}$ and change $a_i$ to $a_{p_i}$), then - Do the following operation any number of times: pick an index $i$ and change $a_i$ to $a_i - 1$ or $a_i + 1$ (i.e. increment or decrement $a_i$ by $1$) with a cost of $1$. Find the minimum cost to transform $a_0, a_1, ..., a_{n-1}$ into a power sequence.
First of all, the optimal way to reorder is to sort $a$ in non-decreasing order. Note that the cost to transform $a_i$ into $c^i$ is $\lvert a_i - c^i \rvert$. While there is a pair $(a_i, a_j)$ such that $i < j$ and $a_i > a_j$, swap $a_i$ and $a_j$. Since $\lvert x \rvert + \lvert y \rvert = \max \{ \lvert x + y \rvert, \lvert x - y \rvert \}$, we have $\lvert a_i - c^i \rvert + \lvert a_j - c^j \rvert = \max \{ \lvert (a_i + a_j) - (c^i + c^j) \rvert, \lvert (a_i - a_j) - (c^i - c^j) \rvert \} \ge \max \{ \lvert (a_j + a_i) - (c^i + c^j) \rvert, \lvert (a_j - a_i) - (c^i - c^j) \rvert \} = \lvert a_j - c^i \rvert + \lvert a_i - c^j \rvert$ when $a_i > a_j$ and $c^i \le c^j$, so the total cost does not increase. Hence, it is best to have $a_0 \le a_1 \le \cdots \le a_{n-1}$. From now on, we assume $a$ is sorted in non-decreasing order.

Denote $a_{max} = a_{n - 1}$ as the maximum value in $a$, $f(x) = \sum{\lvert a_i - x^i \rvert}$ as the minimum cost to transform $a$ into $x^0, x^1, \cdots, x^{n-1}$, and $c$ as the value for which $f(c)$ is minimum. Note that $c^{n - 1} - a_{max} \le f(c) \le f(1)$, which implies $c^{n - 1} \le f(1) + a_{max}$. We enumerate $x = 1, 2, 3, \dots$ until $x^{n - 1}$ exceeds $f(1) + a_{max}$, calculate each $f(x)$ in $O(n)$, and the final answer is the minimum among all calculated values. The final complexity is $O(n \cdot \max(x))$.

Why doesn't this get TLE? Because $f(1) = \sum{(a_i - 1)} < a_{max} \cdot n \le 10^9 \cdot n$, thus $x^{n - 1} \le f(1) + a_{max} \le 10^9 \cdot (n + 1)$. For $n = 3, 4, 5, 6$, $\max(x)$ does not exceed $63245, 1709, 278, 93$ respectively, so $O(n \cdot \max(x))$ comfortably fits in the time limit.
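The cost function $f(x)$ from the editorial can be sketched directly (an illustrative helper; `a` is assumed already sorted in non-decreasing order):

```python
def f(a, x):
    """Cost to transform sorted a into the power sequence x^0, ..., x^(n-1)."""
    cost, p = 0, 1
    for v in a:
        cost += abs(v - p)
        p *= x
    return cost
```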
[ "brute force", "math", "number theory", "sortings" ]
1,500
n = int(input())
a = [int(x) for x in input().split()]
a.sort()
inf = 10**18
if n <= 2:
    print(a[0] - 1)
else:
    ans = sum(a) - n
    for x in range(1, 10**9):
        curPow = 1
        curCost = 0
        for i in range(n):
            curCost += abs(a[i] - curPow)
            curPow *= x
            if curPow > inf:
                break
        if curPow > inf:
            break
        if curPow / x > ans + a[n - 1]:
            break
        ans = min(ans, curCost)
    print(ans)
1398
A
Bad Triangle
You are given an array $a_1, a_2, \dots , a_n$, which is sorted in non-decreasing order ($a_i \le a_{i + 1})$. Find three indices $i$, $j$, $k$ such that $1 \le i < j < k \le n$ and it is \textbf{impossible} to construct a non-degenerate triangle (a triangle with nonzero area) having sides equal to $a_i$, $a_j$ and $a_k$ (for example it is possible to construct a non-degenerate triangle with sides $3$, $4$ and $5$ but impossible with sides $3$, $4$ and $7$). If it is impossible to find such a triple, report it.
A triangle with sides $a \ge b \ge c$ is degenerate iff $a \ge b + c$. So we should maximize the length of the longest side ($a$) and minimize the total length of the other two sides ($b + c$). Thus, if $a_n \ge a_1 + a_2$, the answer is $1, 2, n$; otherwise the answer is $-1$.
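A minimal sketch of this check (1-based indices, as in the statement; the function name is illustrative):

```python
def bad_triangle(a):
    """a is sorted non-decreasingly.  Return the indices (1, 2, n) of a
    degenerate triple if one exists, otherwise -1."""
    if a[-1] >= a[0] + a[1]:
        return (1, 2, len(a))
    return -1
```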
[ "geometry", "math" ]
800
for _ in range(int(input())):
    n = int(input())
    a = list(map(int, input().split()))
    if a[0] + a[1] > a[-1]:
        print(-1)
    else:
        print(1, 2, n)
1398
B
Substring Removal Game
Alice and Bob play a game. They have a binary string $s$ (a string such that each character in it is either $0$ or $1$). Alice moves first, then Bob, then Alice again, and so on. During their move, the player can choose any number (not less than one) of \textbf{consecutive equal characters} in $s$ and delete them. For example, if the string is $10110$, there are $6$ possible moves (deleted characters are bold): - $\textbf{1}0110 \to 0110$; - $1\textbf{0}110 \to 1110$; - $10\textbf{1}10 \to 1010$; - $101\textbf{1}0 \to 1010$; - $10\textbf{11}0 \to 100$; - $1011\textbf{0} \to 1011$. After the characters are removed, the characters to the left and to the right of the removed block become adjacent. I. e. the following sequence of moves is valid: $10\textbf{11}0 \to 1\textbf{00} \to 1$. The game ends when the string becomes empty, and the score of each player is \textbf{the number of $1$-characters deleted by them}. Each player wants to maximize their score. Calculate the resulting score of Alice.
The following greedy strategy works: during each turn, delete the largest possible block of consecutive $1$-characters. So we find all blocks of $1$-characters, sort them by length in decreasing order, and simulate which blocks are taken by Alice and which by Bob. Why does the greedy strategy work? It is never optimal to delete only part of a block of ones: we either have to spend an additional turn to delete the remaining part, or allow our opponent to take it (which is never good). Why don't we need to delete zeroes? If we delete a whole block of zeroes, our opponent can take the newly formed block of $1$'s during their turn, which is obviously worse for us than taking a part of that block ourselves. And deleting only part of a block of zeroes does nothing: our opponent will never delete the remaining part because it is suboptimal.
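The greedy can be sketched in a few lines (an illustrative Python version of the same idea):

```python
from itertools import groupby

def alice_score(s):
    """Lengths of maximal blocks of '1', sorted descending;
    Alice takes the 1st, 3rd, 5th, ... largest blocks."""
    blocks = sorted((len(list(g)) for c, g in groupby(s) if c == '1'),
                    reverse=True)
    return sum(blocks[::2])
```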
[ "games", "greedy", "sortings" ]
800
#include <bits/stdc++.h>
using namespace std;

#define sz(a) int((a).size())
#define forn(i, n) for (int i = 0; i < int(n); ++i)

void solve() {
    string s;
    cin >> s;
    vector<int> a;
    forn(i, sz(s)) if (s[i] == '1') {
        int j = i;
        while (j + 1 < sz(s) && s[j + 1] == '1') ++j;
        a.push_back(j - i + 1);
        i = j;
    }
    sort(a.rbegin(), a.rend());
    int ans = 0;
    for (int i = 0; i < sz(a); i += 2) ans += a[i];
    cout << ans << endl;
}

int main() {
    int T;
    cin >> T;
    while (T--) solve();
}
1398
C
Good Subarrays
You are given an array $a_1, a_2, \dots , a_n$ consisting of integers from $0$ to $9$. A subarray $a_l, a_{l+1}, a_{l+2}, \dots , a_{r-1}, a_r$ is good if the sum of elements of this subarray is equal to the length of this subarray ($\sum\limits_{i=l}^{r} a_i = r - l + 1$). For example, if $a = [1, 2, 0]$, then there are $3$ good subarrays: $a_{1 \dots 1} = [1], a_{2 \dots 3} = [2, 0]$ and $a_{1 \dots 3} = [1, 2, 0]$. Calculate the number of good subarrays of the array $a$.
We use zero-based indexing in this solution, and half-open intervals (so the subarray $[l, r)$ is $a_l, a_{l + 1}, \dots, a_{r-1}$). Let's precalculate the prefix-sum array $p$, where $p_i = \sum\limits_{j = 0}^{i - 1} a_j$ (so $p_x$ is the sum of the first $x$ elements of $a$). Then the subarray $[l, r)$ is good iff $p_r - p_l = r - l$, i.e. $p_r - r = p_l - l$. Thus, we group all prefixes by the value $p_i - i$ for $i$ from $0$ to $n$, and if there are $x$ prefixes with the same value of $p_i - i$, we add $\frac{x (x-1)}{2}$ to the answer.
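The counting with the $p_i - i$ trick can be sketched as follows (illustrative; it counts pairs on the fly instead of summing $\frac{x(x-1)}{2}$ at the end, which is equivalent):

```python
from collections import Counter

def count_good(a):
    """Number of subarrays whose sum equals their length.
    Group prefixes by p_i - i; each earlier prefix with the same
    key forms one good subarray with the current one."""
    cnt = Counter({0: 1})  # empty prefix: p_0 - 0 = 0
    s = res = 0
    for i, v in enumerate(a):
        s += v
        key = s - (i + 1)
        res += cnt[key]
        cnt[key] += 1
    return res
```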
[ "data structures", "dp", "math" ]
1,600
for _ in range(int(input())):
    n = int(input())
    a = input()
    d = {0: 1}
    res, s = 0, 0
    for i in range(n):
        s += int(a[i])
        x = s - i - 1
        if x not in d:
            d[x] = 0
        d[x] += 1
        res += d[x] - 1
    print(res)
1398
D
Colored Rectangles
You are given three multisets of pairs of colored sticks: - $R$ pairs of red sticks, the first pair has length $r_1$, the second pair has length $r_2$, $\dots$, the $R$-th pair has length $r_R$; - $G$ pairs of green sticks, the first pair has length $g_1$, the second pair has length $g_2$, $\dots$, the $G$-th pair has length $g_G$; - $B$ pairs of blue sticks, the first pair has length $b_1$, the second pair has length $b_2$, $\dots$, the $B$-th pair has length $b_B$; You are constructing rectangles from these pairs of sticks with the following process: - take a pair of sticks of one color; - take a pair of sticks of another color different from the first one; - add the area of the resulting rectangle to the total area. Thus, you get such rectangles that their opposite sides are the same color and their adjacent sides are not the same color. Each pair of sticks can be used at most once, some pairs can be left unused. You are not allowed to split a pair into independent sticks. What is the maximum area you can achieve?
Let's build some rectangles and look at the resulting pairings. For example, consider only red/green rectangles. Let the rectangles be $(r_{i1}, g_{i1})$, $(r_{i2}, g_{i2})$, ..., sorted in non-decreasing order of $r_{ij}$. We claim that in an optimal set the $g_{ij}$ are also sorted in non-decreasing order; this is easy to prove by an exchange argument. Moreover, if some unused green or red stick is longer than the smallest used stick of the same color, it is always at least as good to use it instead. These facts imply that from each set only some suffix of the largest sticks is used, and they suggest a greedy: sort the sticks in each set and repeatedly pair the largest remaining sticks of two of the sets until no pair can be taken. However, the greedy choice of which two of the three sets to pair is not obvious, so we choose it with dynamic programming. Let $dp[r][g][b]$ store the maximum total area obtainable by using the $r$ largest red, $g$ largest green and $b$ largest blue pairs of sticks. Each transition chooses two colors and pairs the next pair in each of them. The answer is the maximum value over all $dp$ states. Overall complexity: $O(RGB + R \log R + G \log G + B \log B)$.
[ "dp", "greedy", "sortings" ]
1,800
n = [int(x) for x in input().split()]
a = []
for i in range(3):
    a.append([int(x) for x in input().split()])
    a[i].sort(reverse=True)
dp = [[[0 for i in range(n[2] + 1)] for j in range(n[1] + 1)] for k in range(n[0] + 1)]
ans = 0
for i in range(n[0] + 1):
    for j in range(n[1] + 1):
        for k in range(n[2] + 1):
            if i < n[0] and j < n[1]:
                dp[i + 1][j + 1][k] = max(dp[i + 1][j + 1][k], dp[i][j][k] + a[0][i] * a[1][j])
            if i < n[0] and k < n[2]:
                dp[i + 1][j][k + 1] = max(dp[i + 1][j][k + 1], dp[i][j][k] + a[0][i] * a[2][k])
            if j < n[1] and k < n[2]:
                dp[i][j + 1][k + 1] = max(dp[i][j + 1][k + 1], dp[i][j][k] + a[1][j] * a[2][k])
            ans = max(ans, dp[i][j][k])
print(ans)
1398
E
Two Types of Spells
Polycarp plays a computer game (yet again). In this game, he fights monsters using magic spells. There are two types of spells: fire spell of power $x$ deals $x$ damage to the monster, and lightning spell of power $y$ deals $y$ damage to the monster and \textbf{doubles} the damage of the next spell Polycarp casts. Each spell can be cast \textbf{only once per battle}, but Polycarp can cast them in any order. For example, suppose that Polycarp knows three spells: a fire spell of power $5$, a lightning spell of power $1$, and a lightning spell of power $8$. There are $6$ ways to choose the order in which he casts the spells: - first, second, third. This order deals $5 + 1 + 2 \cdot 8 = 22$ damage; - first, third, second. This order deals $5 + 8 + 2 \cdot 1 = 15$ damage; - second, first, third. This order deals $1 + 2 \cdot 5 + 8 = 19$ damage; - second, third, first. This order deals $1 + 2 \cdot 8 + 2 \cdot 5 = 27$ damage; - third, first, second. This order deals $8 + 2 \cdot 5 + 1 = 19$ damage; - third, second, first. This order deals $8 + 2 \cdot 1 + 2 \cdot 5 = 20$ damage. Initially, Polycarp knows $0$ spells. His spell set changes $n$ times, each time he either learns a new spell or forgets an already known one. After each change, calculate the maximum possible damage Polycarp may deal using the spells he knows.
Let's first solve the problem for a fixed set of spells: fire spells with powers $f_1, f_2, \dots, f_m$ and lightning spells with powers $l_1, l_2, \dots, l_k$. The total damage is maximized if we can double the $k$ spells of maximum power. This is possible iff the set of the $k$ largest spells by power (denote this set as $s$) contains at least one fire spell. Otherwise (if $s$ contains only lightning spells), the maximum damage is reached when we double the $k - 1$ largest spells of $s$ and the largest spell not in $s$ (if such a spell exists). Now, how do we solve the original problem when spells are added and removed? We maintain the set of the $x$ largest spells by power (where $x$ is the current number of lightning spells), updating this set by adding or removing one spell at a time. We also maintain the sum of the powers of the spells in this set and the number of fire spells in it. This can be done with std::set in C++ or TreeSet in Java.
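For a fixed set of spells, the rule above can be sketched as follows (an illustrative function; the full solution additionally maintains these sets incrementally under insertions and deletions):

```python
def max_damage(fire, lightning):
    """fire, lightning: lists of powers.  Double the k = len(lightning)
    largest spells if that set contains a fire spell; otherwise double
    the k-1 largest of them plus the largest spell outside the set."""
    k = len(lightning)
    base = sum(fire) + sum(lightning)
    if k == 0:
        return base
    spells = sorted([(p, 0) for p in fire] + [(p, 1) for p in lightning],
                    reverse=True)
    top, rest = spells[:k], spells[k:]
    extra = sum(p for p, _ in top)
    if all(kind == 1 for _, kind in top):
        # only lightning spells in the top-k: swap its smallest element
        # for the largest remaining spell (if any)
        extra += (rest[0][0] if rest else 0) - top[-1][0]
    return base + extra
```

On the example from the statement (fire $[5]$, lightning $[1, 8]$) this gives $27$, matching the best ordering.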
[ "binary search", "data structures", "greedy", "implementation", "math", "sortings" ]
2,200
#include <bits/stdc++.h>
using namespace std;

const int N = int(1e5) + 9;

int n;
set<int> sDouble;
long long sum[2];
set<int> s[2];
int cntDouble[2];

// 0: 0 -> 1
// 1: 1 -> 0
void upd(int id) {
    assert(s[id].size() > 0);
    int x = *s[id].rbegin();
    if (id == 1) x = *s[id].begin();
    bool d = sDouble.count(x);
    sum[id] -= x, sum[!id] += x;
    s[id].erase(x), s[!id].insert(x);
    cntDouble[id] -= d, cntDouble[!id] += d;
}

int main() {
    cin >> n;
    for (int i = 0; i < n; ++i) {
        int tp, x;
        cin >> tp >> x; // tp = 1 if double
        if (x > 0) {
            sum[0] += x;
            s[0].insert(x);
            cntDouble[0] += tp;
            if (tp) sDouble.insert(x);
        } else {
            x = -x;
            int id = 0;
            if (s[1].count(x)) id = 1;
            else assert(s[0].count(x));
            sum[id] -= x;
            s[id].erase(x);
            cntDouble[id] -= tp;
            if (tp) {
                assert(sDouble.count(x));
                sDouble.erase(x);
            }
        }
        int sumDouble = cntDouble[0] + cntDouble[1];
        while (s[1].size() < sumDouble) upd(0);
        while (s[1].size() > sumDouble) upd(1);
        while (s[1].size() > 0 && s[0].size() > 0 && *s[0].rbegin() > *s[1].begin()) {
            upd(0);
            upd(1);
        }
        assert(s[1].size() == sumDouble);
        long long res = sum[0] + sum[1] * 2;
        if (cntDouble[1] == sumDouble && sumDouble > 0) {
            res -= *s[1].begin();
            if (s[0].size() > 0) res += *s[0].rbegin();
        }
        cout << res << endl;
    }
    return 0;
}
1398
F
Controversial Rounds
Alice and Bob play a game. The game consists of several sets, and each set consists of several rounds. Each round is won either by Alice or by Bob, and the set ends when one of the players has won $x$ rounds in a row. For example, if Bob won five rounds in a row and $x = 2$, then two sets end. You know that Alice and Bob have already played $n$ rounds, and you know the results of some rounds. For each $x$ from $1$ to $n$, calculate the maximum possible number of sets that could have already finished if each set lasts until one of the players wins $x$ rounds in a row. It is possible that the last set is still not finished — in that case, you should not count it in the answer.
Let's consider the following function $f(pos, x)$: the minimum index $npos$ such that the string $s_{pos}, s_{pos+1}, \dots, s_{npos-1}$ contains a substring of length $x$ consisting only of characters $1$ and $?$, or only of characters $0$ and $?$. If this function works in $O(1)$, then we can solve the problem in $O(n \log n)$, because for a fixed $x$ the answer is at most $\frac{n}{x}$, and the sum of $\frac{n}{x}$ over all $x$ is bounded by the harmonic series. Now, let's precalculate two arrays $nxt0$ and $nxt1$: $nxt0[i]$ is equal to the maximum integer $len$ such that the substring $s_i, s_{i+1}, \dots, s_{i+len-1}$ consists only of characters $1$ and $?$; $nxt1[i]$ is equal to the maximum integer $len$ such that the substring $s_i, s_{i+1}, \dots, s_{i+len-1}$ consists only of characters $0$ and $?$. Also let's precalculate the arrays $p0$ and $p1$ of size $n$: $p0[len]$ contains all positions $pos$ such that the substring $s_{pos}, s_{pos+1}, \dots, s_{pos+len-1}$ consists only of characters $1$ and $?$, and $pos = 0$ or $s_{pos - 1} = 0$; $p1[len]$ contains all positions $pos$ such that the substring $s_{pos}, s_{pos+1}, \dots, s_{pos+len-1}$ consists only of characters $0$ and $?$, and $pos = 0$ or $s_{pos - 1} = 1$. After that, let's solve the problem for some $x$. Suppose we have already processed the first $pos$ elements of $s$. If $nxt0[pos] \ge x$ or $nxt1[pos] \ge x$, then we increase the answer and set $pos = pos + x$. Otherwise we have to find the minimum element (denote it as $npos$) in $p0[x]$ or $p1[x]$ such that $npos \ge pos$. If there is no such element, then we have found the final answer. Otherwise we increase the answer, set $pos = npos + x$, and continue this algorithm.
[ "binary search", "data structures", "dp", "greedy", "two pointers" ]
2,500
#include <bits/stdc++.h> using namespace std; const int N = int(1e6) + 99; const int INF = int(1e9) + 99; int n; string s; vector <int> p[2][N]; int nxt[2][N]; int ptr[2]; char buf[N]; int main(){ cin >> n >> s; for (int i = n - 1; i >= 0; --i) { if (s[i] != '0') nxt[0][i] = 1 + nxt[0][i + 1]; if (s[i] != '1') nxt[1][i] = 1 + nxt[1][i + 1]; } for (int b = 0; b <= 1; ++b) { int l = 0; while (l < n) { if (s[l] == char('0' + b)) { ++l; continue; } int r = l + 1; while (r < n && s[r] != char('0' + b)) ++r; for (int len = 1; len <= r - l; ++len) p[b][len].push_back(l); l = r; } } for (int len = 1; len <= n; ++len) { int pos = 0, res = 0; ptr[0] = ptr[1] = 0; while (pos < n) { int npos = INF; for (int b = 0; b <= 1; ++b) { if (nxt[b][pos] >= len) npos = min(npos, pos + len); while (ptr[b] < p[b][len].size() && pos > p[b][len][ ptr[b] ]) ++ptr[b]; if (ptr[b] < p[b][len].size()) npos = min(npos, p[b][len][ ptr[b] ] + len); } if (npos != INF) ++res; pos = npos; } cout << res << ' '; } cout << endl; return 0; }
1398
G
Running Competition
A running competition is going to be held soon. The stadium where the competition will be held can be represented by several segments on the coordinate plane: - two horizontal segments: one connecting the points $(0, 0)$ and $(x, 0)$, the other connecting the points $(0, y)$ and $(x, y)$; - $n + 1$ vertical segments, numbered from $0$ to $n$. The $i$-th segment connects the points $(a_i, 0)$ and $(a_i, y)$; $0 = a_0 < a_1 < a_2 < \dots < a_{n - 1} < a_n = x$. For example, here is a picture of the stadium with $x = 10$, $y = 5$, $n = 3$ and $a = [0, 3, 5, 10]$: A lap is a route that goes along the segments, starts and finishes at the same point, and never intersects itself (the only two points of a lap that coincide are its starting point and ending point). The length of a lap is the total distance travelled along it. For example, the red route in the picture representing the stadium is a lap of length $24$. The competition will be held in $q$ stages. The $i$-th stage has length $l_i$, and the organizers want to choose a lap for each stage such that the length of the lap is a \textbf{divisor of $l_i$}. The organizers don't want to choose short laps for the stages, so for each stage, they want to find the maximum possible length of a suitable lap. Help the organizers to calculate the maximum possible lengths of the laps for the stages! In other words, for every $l_i$, find the maximum possible integer $L$ such that $l_i \bmod L = 0$, and there exists a lap of length \textbf{exactly} $L$. If it is impossible to choose such a lap then print $-1$.
First of all, let's find all possible lengths of the laps (after doing that, we can just check every divisor of $l_i$ to find the maximum possible length of a lap for a given query). A lap is always a rectangle: you can't construct a lap using no vertical segments or an odd number of vertical segments, and if you try to use $4$ or more vertical segments, you can't get back to the point where you started because both horizontal segments are already partially visited. So, a lap is a rectangle bounded by two vertical segments; and if we use vertical segments $i$ and $j$ (with $a_i > a_j$), the perimeter of this rectangle is $2(a_i - a_j + y)$. Let's find all values that can be represented as $a_i - a_j$. A naive $O(n^2)$ approach will be too slow, so we have to speed it up somehow. Let's build an array $b$ of $n + 1$ numbers where $b_j = K - a_j$ ($K$ is some integer greater than $200000$). Each number that can be represented as $a_i - a_j$ can also be represented as $a_i + b_j - K$, so we have to find all possible sums of two elements belonging to different arrays. The key observation here is that, since $a_i$ and $b_j$ are small, we can treat each array as a polynomial: let $A(x) = \sum \limits_{i=0}^{n} x^{a_i}$ and, similarly, $B(x) = \sum \limits_{j=0}^{n} x^{b_j}$. Let's look at the product of these polynomials: the coefficient of $x^k$ is non-zero if and only if there exist $i$ and $j$ such that $a_i + b_j = k$. So finding all possible sums (and hence all possible differences) reduces to multiplying two polynomials, which can be done faster than $O(n^2)$ using Karatsuba's algorithm or FFT.
[ "bitmasks", "fft", "math", "number theory" ]
2,600
#include<bits/stdc++.h> using namespace std; #define forn(i, n) for(int i = 0; i < n; i++) #define sz(a) ((int)(a).size()) const int LOGN = 20; const int N = (1 << LOGN); const int K = 200043; const int M = 1000043; typedef double ld; typedef long long li; const ld PI = acos(-1.0); struct comp { ld x, y; comp(ld x = .0, ld y = .0) : x(x), y(y) {} inline comp conj() { return comp(x, -y); } }; inline comp operator +(const comp &a, const comp &b) { return comp(a.x + b.x, a.y + b.y); } inline comp operator -(const comp &a, const comp &b) { return comp(a.x - b.x, a.y - b.y); } inline comp operator *(const comp &a, const comp &b) { return comp(a.x * b.x - a.y * b.y, a.x * b.y + a.y * b.x); } inline comp operator /(const comp &a, const ld &b) { return comp(a.x / b, a.y / b); } vector<comp> w[LOGN]; vector<int> rv[LOGN]; void precalc() { for(int st = 0; st < LOGN; st++) { w[st].assign(1 << st, comp()); for(int k = 0; k < (1 << st); k++) { double ang = PI / (1 << st) * k; w[st][k] = comp(cos(ang), sin(ang)); } rv[st].assign(1 << st, 0); if(st == 0) { rv[st][0] = 0; continue; } int h = (1 << (st - 1)); for(int k = 0; k < (1 << st); k++) rv[st][k] = (rv[st - 1][k & (h - 1)] << 1) | (k >= h); } } inline void fft(comp a[N], int n, int ln, bool inv) { for(int i = 0; i < n; i++) { int ni = rv[ln][i]; if(i < ni) swap(a[i], a[ni]); } for(int st = 0; (1 << st) < n; st++) { int len = (1 << st); for(int k = 0; k < n; k += (len << 1)) { for(int pos = k; pos < k + len; pos++) { comp l = a[pos]; comp r = a[pos + len] * (inv ? w[st][pos - k].conj() : w[st][pos - k]); a[pos] = l + r; a[pos + len] = l - r; } } } if(inv) for(int i = 0; i < n; i++) a[i] = a[i] / n; } comp aa[N]; comp bb[N]; comp cc[N]; inline void multiply(comp a[N], int sza, comp b[N], int szb, comp c[N], int &szc) { int n = 1, ln = 0; while(n < (sza + szb)) n <<= 1, ln++; for(int i = 0; i < n; i++) aa[i] = (i < sza ? a[i] : comp()); for(int i = 0; i < n; i++) bb[i] = (i < szb ? b[i] : comp()); fft(aa, n, ln, false); fft(bb, n, ln, false); for(int i = 0; i < n; i++) cc[i] = aa[i] * bb[i]; fft(cc, n, ln, true); szc = n; for(int i = 0; i < n; i++) c[i] = cc[i]; } comp a[N]; comp b[N]; comp c[N]; int used[M]; int dp[M]; int main() { precalc(); int n, x, y; for(int i = 0; i < M; i++) dp[i] = -1; scanf("%d %d %d", &n, &x, &y); vector<int> A(n + 1); for(int i = 0; i <= n; i++) scanf("%d", &A[i]); for(int i = 0; i <= n; i++) { a[A[i]] = comp(1.0, 0.0); b[K - A[i]] = comp(1.0, 0.0); } int s = 0; multiply(a, K + 1, b, K + 1, c, s); for(int i = K + 1; i < s; i++) if(c[i].x > 0.5) used[(i - K + y) * 2] = 1; for(int i = 1; i < M; i++) { if(!used[i]) continue; for(int j = i; j < M; j += i) dp[j] = max(dp[j], i); } int q; scanf("%d", &q); for(int i = 0; i < q; i++) { int l; scanf("%d", &l); printf("%d ", dp[l]); } }
1399
A
Remove Smallest
You are given the array $a$ consisting of $n$ positive (greater than zero) integers. In one move, you can choose two indices $i$ and $j$ ($i \ne j$) such that the absolute difference between $a_i$ and $a_j$ is no more than one ($|a_i - a_j| \le 1$) and remove the smallest of these two elements. If two elements are equal, you can remove any of them (but exactly one). Your task is to find if it is possible to obtain the array consisting of \textbf{only one element} using several (possibly, zero) such moves or not. You have to answer $t$ independent test cases.
Firstly, let's sort the initial array. Then it's obvious that the best way to remove elements is from smallest to biggest. And if there is at least one $i$ such that $2 \le i \le n$ and $a_i - a_{i-1} > 1$ then the answer is "NO", because we have no way to remove $a_{i-1}$. Otherwise, the answer is "YES".
[ "greedy", "sortings" ]
800
#include <bits/stdc++.h> using namespace std; int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif int t; cin >> t; while (t--) { int n; cin >> n; vector<int> a(n); for (auto &it : a) cin >> it; sort(a.begin(), a.end()); bool ok = true; for (int i = 1; i < n; ++i) { ok &= (a[i] - a[i - 1] <= 1); } if (ok) cout << "YES" << endl; else cout << "NO" << endl; } return 0; }
1399
B
Gifts Fixing
You have $n$ gifts and you want to give all of them to children. Of course, you don't want to offend anyone, so all gifts should be equal between each other. The $i$-th gift consists of $a_i$ candies and $b_i$ oranges. During one move, you can choose some gift $1 \le i \le n$ and do one of the following operations: - eat exactly \textbf{one candy} from this gift (decrease $a_i$ by one); - eat exactly \textbf{one orange} from this gift (decrease $b_i$ by one); - eat exactly \textbf{one candy} and exactly \textbf{one orange} from this gift (decrease both $a_i$ and $b_i$ by one). Of course, you can not eat a candy or orange if it's not present in the gift (so neither $a_i$ nor $b_i$ can become less than zero). As said above, all gifts should be equal. This means that after some sequence of moves the following two conditions should be satisfied: $a_1 = a_2 = \dots = a_n$ and $b_1 = b_2 = \dots = b_n$ (and $a_i$ equals $b_i$ is \textbf{not necessary}). Your task is to find the \textbf{minimum} number of moves required to equalize all the given gifts. You have to answer $t$ independent test cases.
At first, consider the problems on candies and oranges independently. Then it's pretty obvious that for candies the optimal way is to decrease all $a_i$ to the value $min(a)$ (we need to obtain at least this value to equalize all the elements, and there is no point in decreasing elements further). The same works for the array $b$. Then, uniting these two problems, for each $i$ we need to take the maximum of the two numbers of moves, because the third operation lets us decrease $a_i$ and $b_i$ simultaneously, so exactly that many moves are needed to decrease $a_i$ to $min(a)$ and $b_i$ to $min(b)$. So, the answer is $\sum\limits_{i=1}^{n} max(a_i - min(a), b_i - min(b))$.
[ "greedy" ]
800
#include <bits/stdc++.h> using namespace std; int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif int t; cin >> t; while (t--) { int n; cin >> n; vector<int> a(n), b(n); for (auto &it : a) cin >> it; for (auto &it : b) cin >> it; int mna = *min_element(a.begin(), a.end()); int mnb = *min_element(b.begin(), b.end()); long long ans = 0; for (int i = 0; i < n; ++i) { ans += max(a[i] - mna, b[i] - mnb); } cout << ans << endl; } return 0; }
1399
C
Boats Competition
There are $n$ people who want to participate in a boat competition. The weight of the $i$-th participant is $w_i$. Only teams consisting of \textbf{two} people can participate in this competition. As an organizer, you think that it's fair to allow only teams with \textbf{the same total weight}. So, if there are $k$ teams $(a_1, b_1)$, $(a_2, b_2)$, $\dots$, $(a_k, b_k)$, where $a_i$ is the weight of the first participant of the $i$-th team and $b_i$ is the weight of the second participant of the $i$-th team, then the condition $a_1 + b_1 = a_2 + b_2 = \dots = a_k + b_k = s$, where $s$ is the total weight of \textbf{each} team, should be satisfied. Your task is to choose such $s$ that the number of teams people can create is the \textbf{maximum} possible. Note that each participant can be in \textbf{no more than one} team. You have to answer $t$ independent test cases.
This is just an implementation problem. Firstly, let's fix $s$ (it can be in the range $[2; 2n]$), find the maximum number of teams we can obtain with this $s$, and take the maximum among all found values. To find the number of pairs for a fixed $s$, let's iterate over the smallest weight in the team in the range $[1; \lfloor\frac{s + 1}{2}\rfloor - 1]$. Let this weight be $i$. Then (because the sum of weights is $s$) the biggest weight is $s-i$, and the number of pairs we can obtain with these two weights and the total weight $s$ is $min(cnt_i, cnt_{s - i})$, where $cnt_i$ is the number of occurrences of $i$ in $w$. An additional case: if $s$ is even, we need to add $\lfloor\frac{cnt_{\frac{s}{2}}}{2}\rfloor$. Don't forget that the case $s - i > n$ is possible; such values of $cnt$ should be treated as zeros.
[ "brute force", "greedy", "two pointers" ]
1,200
#include <bits/stdc++.h> using namespace std; int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif int t; cin >> t; while (t--) { int n; cin >> n; vector<int> cnt(n + 1); for (int i = 0; i < n; ++i) { int x; cin >> x; ++cnt[x]; } int ans = 0; for (int s = 2; s <= 2 * n; ++s) { int cur = 0; for (int i = 1; i < (s + 1) / 2; ++i) { if (s - i > n) continue; cur += min(cnt[i], cnt[s - i]); } if (s % 2 == 0) cur += cnt[s / 2] / 2; ans = max(ans, cur); } cout << ans << endl; } return 0; }
1399
D
Binary String To Subsequences
You are given a binary string $s$ consisting of $n$ zeros and ones. Your task is to divide the given string into the \textbf{minimum} number of \textbf{subsequences} in such a way that each character of the string belongs to exactly one subsequence and each subsequence looks like "010101 ..." or "101010 ..." (i.e. the subsequence should not contain two adjacent zeros or ones). Recall that a subsequence is a sequence that can be derived from the given sequence by deleting zero or more elements without changing the order of the remaining elements. For example, subsequences of "1011101" are "0", "1", "11111", "0111", "101", "1001", but not "000", "101010" and "11100". You have to answer $t$ independent test cases.
Let's iterate over all characters of $s$ from left to right, maintaining two arrays $pos_0$ and $pos_1$, where $pos_0$ stores the indices of all subsequences which end with '0' and $pos_1$ stores the indices of all subsequences which end with '1'. If we meet '0', then the best choice is to append it to some existing subsequence which ends with '1', converting one of the '1'-subsequences into a '0'-subsequence; only if there is no such subsequence do we create a new one ending with '0'. The same works for characters '1'. So, whenever we can avoid creating a new subsequence, we avoid it, and the values in the arrays $pos_0$ and $pos_1$ determine the index of the subsequence we assign to each character. There is also a cute proof of this solution from Gassa: let $f(i)$ be the difference between the number of '1' and the number of '0' on the prefix of $s$ of length $i$. We claim that the answer is $max(f(i)) - min(f(i))$, and here is why it is true. Let's build the plot of the function through the points $(i, f(i))$. Then each of the $max(f(i)) - min(f(i))$ unit levels between $min(f(i))$ and $max(f(i))$ can be matched with its own subsequence: the characters on which the plot crosses a fixed level alternate between '1' (a step up) and '0' (a step down), so they form a valid alternating subsequence.
[ "constructive algorithms", "data structures", "greedy", "implementation" ]
1,500
#include <bits/stdc++.h> using namespace std; int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif int t; cin >> t; while (t--) { int n; string s; cin >> n >> s; vector<int> ans(n); vector<int> pos0, pos1; for (int i = 0; i < n; ++i) { int newpos = pos0.size() + pos1.size(); if (s[i] == '0') { if (pos1.empty()) { pos0.push_back(newpos); } else { newpos = pos1.back(); pos1.pop_back(); pos0.push_back(newpos); } } else { if (pos0.empty()) { pos1.push_back(newpos); } else { newpos = pos0.back(); pos0.pop_back(); pos1.push_back(newpos); } } ans[i] = newpos; } cout << pos0.size() + pos1.size() << endl; for (auto it : ans) cout << it + 1 << " "; cout << endl; } return 0; }
1399
E1
Weights Division (easy version)
\textbf{Easy and hard versions are actually different problems, so we advise you to read both statements carefully}. You are given a weighted rooted tree, vertex $1$ is the root of this tree. A tree is a connected graph without cycles. A rooted tree has a special vertex called the root. A parent of a vertex $v$ is the last different from $v$ vertex on the path from the root to the vertex $v$. Children of vertex $v$ are all vertices for which $v$ is the parent. A vertex is a leaf if it has no children. The weighted tree is such a tree that each edge of this tree has some weight. The weight of the path is the sum of edges weights on this path. The weight of the path from the vertex to itself is $0$. You can make a sequence of zero or more moves. On each move, you select an edge and divide its weight by $2$ rounding down. More formally, during one move, you choose some edge $i$ and divide its weight by $2$ rounding down ($w_i := \left\lfloor\frac{w_i}{2}\right\rfloor$). Your task is to find the minimum number of \textbf{moves} required to make the \textbf{sum of weights of paths} from the root to each leaf at most $S$. In other words, if $w(i, j)$ is the weight of the path from the vertex $i$ to the vertex $j$, then you have to make $\sum\limits_{v \in leaves} w(root, v) \le S$, where $leaves$ is the list of all leaves. You have to answer $t$ independent test cases.
Let's define $cnt_i$ as the number of leaves in the subtree of the $i$-th edge (that is, in the subtree of the lower vertex of this edge). Values of $cnt$ can be calculated with a pretty standard and simple dfs with dynamic programming. Then we can notice that our edges are independent, and we can write the initial answer (the sum of weights of paths) as $\sum\limits_{i=1}^{n-1} w_i \cdot cnt_i$. Let $diff(i)$ be the difference between the current impact of the $i$-th edge and its impact after dividing its weight by $2$: $diff(i) = w_i \cdot cnt_i - \lfloor\frac{w_i}{2}\rfloor \cdot cnt_i$. This value shows how much the sum of weights decreases if we divide the weight of the $i$-th edge by $2$. Create an ordered set which contains pairs $(diff(i), i)$. Then the following greedy solution works: take the edge with maximum $diff(i)$, divide its weight by $2$, and re-insert it into the set with the new value of $diff(i)$. When the sum becomes less than or equal to $S$, just stop and print the number of divisions we made. The maximum number of operations can reach $O(n \log w)$, so the solution complexity is $O(n \log{w} \log{n})$ (each operation takes $O(\log{n})$ time because the size of the set is $O(n)$).
[ "data structures", "dfs and similar", "greedy", "trees" ]
2,000
#include <bits/stdc++.h> using namespace std; vector<int> w, cnt; vector<vector<pair<int, int>>> g; long long getdiff(int i) { return w[i] * 1ll * cnt[i] - (w[i] / 2) * 1ll * cnt[i]; } void dfs(int v, int p = -1) { if (g[v].size() == 1) cnt[p] = 1; for (auto [to, id] : g[v]) { if (id == p) continue; dfs(to, id); if (p != -1) cnt[p] += cnt[id]; } } int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif int t; cin >> t; while (t--) { int n; long long s; cin >> n >> s; w = cnt = vector<int>(n - 1); g = vector<vector<pair<int, int>>>(n); for (int i = 0; i < n - 1; ++i) { int x, y; cin >> x >> y >> w[i]; --x, --y; g[x].push_back({y, i}); g[y].push_back({x, i}); } dfs(0); set<pair<long long, int>> st; long long cur = 0; for (int i = 0; i < n - 1; ++i) { st.insert({getdiff(i), i}); cur += w[i] * 1ll * cnt[i]; } cerr << cur << endl; int ans = 0; while (cur > s) { int id = st.rbegin()->second; st.erase(prev(st.end())); cur -= getdiff(id); w[id] /= 2; st.insert({getdiff(id), id}); ++ans; } cout << ans << endl; } return 0; }
1399
E2
Weights Division (hard version)
\textbf{Easy and hard versions are actually different problems, so we advise you to read both statements carefully}. You are given a weighted rooted tree, vertex $1$ is the root of this tree. \textbf{Also, each edge has its own cost}. A tree is a connected graph without cycles. A rooted tree has a special vertex called the root. A parent of a vertex $v$ is the last different from $v$ vertex on the path from the root to the vertex $v$. Children of vertex $v$ are all vertices for which $v$ is the parent. A vertex is a leaf if it has no children. The weighted tree is such a tree that each edge of this tree has some weight. The weight of the path is the sum of edges weights on this path. The weight of the path from the vertex to itself is $0$. You can make a sequence of zero or more moves. On each move, you select an edge and divide its weight by $2$ rounding down. More formally, during one move, you choose some edge $i$ and divide its weight by $2$ rounding down ($w_i := \left\lfloor\frac{w_i}{2}\right\rfloor$). Each edge $i$ has an associated cost $c_i$ which is either $1$ or $2$ coins. Each move with edge $i$ costs $c_i$ coins. Your task is to find the minimum total \textbf{cost} to make the \textbf{sum of weights of paths} from the root to each leaf at most $S$. In other words, if $w(i, j)$ is the weight of the path from the vertex $i$ to the vertex $j$, then you have to make $\sum\limits_{v \in leaves} w(root, v) \le S$, where $leaves$ is the list of all leaves. You have to answer $t$ independent test cases.
Read the easy version editorial first, because almost all of the solution is the solution to the easy version with small changes. Firstly, let's simulate the greedy process we used to solve the easy version, for edges with cost $1$ and edges with cost $2$ independently. But we don't stop when the sum reaches $S$: we simulate until the sum becomes $0$ and store each intermediate result in the array $v_1$ for edges with cost $1$ and $v_2$ for edges with cost $2$. So, the array $v_1$ contains the initial total impact of the edges of cost $1$, then the impact after one move, after two moves, and so on. The same with $v_2$, but for edges with cost $2$. Now let's fix how many moves on edges with cost $1$ we do. Let it be $i$ (the arrays $v_1$ and $v_2$ are $0$-indexed). Then the sum we obtain from the $1$-cost edges is $v_1[i]$, and we need to find the minimum number of moves $p$ on $2$-cost edges such that $v_1[i] + v_2[p] \le S$. This can be done using binary search or a moving pointer (if we iterate over $i$ in increasing order, place $p$ at the end of $v_2$ and move it to the left while the condition still holds). Then, if $v_1[i] + v_2[p] \le S$, we can update the answer with the value $i + 2p$. The time complexity is actually the same as in the easy version of the problem: $O(n \log{w} \log{n})$.
[ "binary search", "dfs and similar", "greedy", "sortings", "trees", "two pointers" ]
2,200
#include <bits/stdc++.h> using namespace std; const int INF = 1e9; int n; vector<int> w, c, cnt; vector<vector<pair<int, int>>> g; long long getdiff(int i) { return w[i] * 1ll * cnt[i] - (w[i] / 2) * 1ll * cnt[i]; } void dfs(int v, int p = -1) { if (g[v].size() == 1) cnt[p] = 1; for (auto [to, id] : g[v]) { if (id == p) continue; dfs(to, id); if (p != -1) cnt[p] += cnt[id]; } } vector<long long> get(int clr) { set<pair<long long, int>> st; long long cur = 0; for (int i = 0; i < n - 1; ++i) { if (c[i] == clr) { st.insert({getdiff(i), i}); cur += w[i] * 1ll * cnt[i]; } } vector<long long> res; res.push_back(cur); while (cur > 0 && !st.empty()) { int id = st.rbegin()->second; st.erase(prev(st.end())); cur -= getdiff(id); res.push_back(cur); w[id] /= 2; st.insert({getdiff(id), id}); } return res; } int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif int t; cin >> t; while (t--) { long long s; cin >> n >> s; w = c = cnt = vector<int>(n - 1); g = vector<vector<pair<int, int>>>(n); for (int i = 0; i < n - 1; ++i) { int x, y; cin >> x >> y >> w[i] >> c[i]; --x, --y; g[x].push_back({y, i}); g[y].push_back({x, i}); } dfs(0); vector<long long> v1 = get(1), v2 = get(2); int pos = int(v2.size()) - 1; int ans = INF; for (int i = 0; i < int(v1.size()); ++i) { while (pos > 0 && v1[i] + v2[pos - 1] <= s) --pos; if (v1[i] + v2[pos] <= s) { ans = min(ans, i + pos * 2); } } cout << ans << endl; } return 0; }
1399
F
Yet Another Segments Subset
You are given $n$ segments on a coordinate axis $OX$. The $i$-th segment has borders $[l_i; r_i]$. All points $x$, for which $l_i \le x \le r_i$ holds, belong to the $i$-th segment. Your task is to choose the \textbf{maximum} by size (the number of segments) subset of the given set of segments such that each pair of segments in this subset either non-intersecting or one of them lies inside the other one. Two segments $[l_i; r_i]$ and $[l_j; r_j]$ are non-intersecting if they have \textbf{no common points}. For example, segments $[1; 2]$ and $[3; 4]$, $[1; 3]$ and $[5; 5]$ are non-intersecting, while segments $[1; 2]$ and $[2; 3]$, $[1; 2]$ and $[2; 2]$ are intersecting. The segment $[l_i; r_i]$ lies inside the segment $[l_j; r_j]$ if $l_j \le l_i$ and $r_i \le r_j$. For example, segments $[2; 2]$, $[2, 3]$, $[3; 4]$ and $[2; 4]$ lie inside the segment $[2; 4]$, while $[2; 5]$ and $[1; 4]$ are not. You have to answer $t$ independent test cases.
Firstly, let's compress the given borders of the segments (renumber them in such a way that the maximum value is the minimum possible and the relative order of the integers doesn't change). This is a pretty standard approach. Now let's do recursive dynamic programming $dp_{l, r}$. This state stores the answer for the segment $[l; r]$ (not necessarily an input segment!). How about transitions? Firstly, if there is a segment covering the whole segment $[l; r]$, why not just take it? It doesn't change anything for us. The first transition is to just skip the current left border and take the additional answer from the state $dp_{l + 1, r}$. The second transition is the following: let's iterate over all possible segments starting at $l$ (we can store all right borders of such segments in some array $rg_l$). Let the current segment be $[l; nr]$. If $nr \ge r$, just skip it (if $nr > r$ then we can't take this segment into the answer because it's out of $[l; r]$, and if $nr = r$ then we have considered it already). Otherwise we can take two additional answers: from $dp_{l, nr}$ and from $dp_{nr + 1, r}$. Don't forget about corner cases, like $nr + 1 > r$ or $l + 1 > r$ and the like. You get the answer by running the calculation from the whole segment. What is the time complexity of this solution? We obviously have $O(n^2)$ states. The number of transitions is also pretty easy to bound: fix some right border $r$; for this right border, we consider $O(n)$ segments in total. Summing up, we get $O(n^2)$ transitions, so the time complexity is $O(n^2)$. P.S. I am sorry about the pretty tight ML (yeah, I saw Geothermal got some memory issues because of using map). I really wanted to make it 512MB but just forgot to do that.
[ "data structures", "dp", "graphs", "sortings" ]
2,300
#include <bits/stdc++.h> using namespace std; vector<vector<int>> rg; vector<vector<int>> dp; int calc(int l, int r) { if (dp[l][r] != -1) return dp[l][r]; dp[l][r] = 0; if (l > r) return dp[l][r]; bool add = count(rg[l].begin(), rg[l].end(), r); // can't be greater than 1 dp[l][r] = max(dp[l][r], add + (l + 1 > r ? 0 : calc(l + 1, r))); for (auto nr : rg[l]) { if (nr >= r) continue; dp[l][r] = max(dp[l][r], add + calc(l, nr) + (nr + 1 > r ? 0 : calc(nr + 1, r))); } return dp[l][r]; } int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif int t; cin >> t; while (t--) { int n; cin >> n; vector<int> l(n), r(n); vector<int> val; for (int i = 0; i < n; ++i) { cin >> l[i] >> r[i]; val.push_back(l[i]); val.push_back(r[i]); } sort(val.begin(), val.end()); val.resize(unique(val.begin(), val.end()) - val.begin()); for (int i = 0; i < n; ++i) { l[i] = lower_bound(val.begin(), val.end(), l[i]) - val.begin(); r[i] = lower_bound(val.begin(), val.end(), r[i]) - val.begin(); } int siz = val.size(); dp = vector<vector<int>>(siz, vector<int>(siz, -1)); rg = vector<vector<int>>(siz); for (int i = 0; i < n; ++i) { rg[l[i]].push_back(r[i]); } cout << calc(0, siz - 1) << endl; } return 0; }
1400
A
String Similarity
A binary string is a string where each character is either 0 or 1. Two binary strings $a$ and $b$ of equal length are similar, if they have the same character in some position (there exists an integer $i$ such that $a_i = b_i$). For example: - 10010 and 01111 are similar (they have the same character in position $4$); - 10010 and 11111 are similar; - 111 and 111 are similar; - 0110 and 1001 are not similar. You are given an integer $n$ and a binary string $s$ consisting of $2n-1$ characters. Let's denote $s[l..r]$ as the contiguous substring of $s$ starting with $l$-th character and ending with $r$-th character (in other words, $s[l..r] = s_l s_{l + 1} s_{l + 2} \dots s_r$). You have to construct a binary string $w$ of length $n$ which is similar to \textbf{all of the following strings}: $s[1..n]$, $s[2..n+1]$, $s[3..n+2]$, ..., $s[n..2n-1]$.
There are many ways to solve this problem: - let the answer be $s_1 s_3 s_5 \dots s_{2n-1}$, so that it coincides with $s[1..n]$ in the first character, with $s[2..n+1]$ in the second character, and so on; - copy the character $s_n$ $n$ times, since it appears in every substring we are interested in; - find the character that occurs at least $n$ times in $s$, and build the answer by copying it $n$ times; - and many others.
[ "constructive algorithms", "strings" ]
800
fun main() { repeat(readLine()!!.toInt()) { val n = readLine()!!.toInt() val c = readLine()!!.groupingBy { it }.eachCount().maxBy { it.value }!!.key println(c.toString().repeat(n)) } }
1400
B
RPG Protagonist
You are playing one RPG from the 2010s. You are planning to raise your smithing skill, so you need as many resources as possible. So how to get resources? By stealing, of course. You decided to rob a town's blacksmith and you take a follower with you. You can carry at most $p$ units and your follower — at most $f$ units. In the blacksmith shop, you found $cnt_s$ swords and $cnt_w$ war axes. Each sword weighs $s$ units and each war axe — $w$ units. You don't care what to take, since each of them will melt into one steel ingot. What is the maximum number of weapons (both swords and war axes) you and your follower can carry out from the shop?
Let's say (without loss of generality) that $s \le w$ (a sword is not heavier than a war axe); if not, we can swap swords and axes. Now, let's iterate over the number of swords $s_1$ we will take ourselves. We can take at least $0$ and at most $\min(cnt_s, \lfloor p / s \rfloor)$ swords. If we have taken $s_1$ swords, then we can fill the remaining capacity with axes, i. e. we can take $w_1 = \min(cnt_w, \lfloor (p - s_1 \cdot s) / w \rfloor)$ axes. The last thing is to decide what the follower will take: since swords are lighter, it's optimal to take as many swords as possible, i. e. $s_2 = \min(cnt_s - s_1, \lfloor f / s \rfloor)$ swords, and fill the remaining space with war axes, i. e. $w_2 = \min(cnt_w - w_1, \lfloor (f - s_2 \cdot s) / w \rfloor)$ axes. In other words, if we fix the number of swords $s_1$, we'll take $s_1 + s_2 + w_1 + w_2$ weapons in total. The resulting complexity is $O(cnt_s)$.
[ "brute force", "greedy", "math" ]
1700
fun main() { repeat(readLine()!!.toInt()) { val (p, f) = readLine()!!.split(' ').map { it.toInt() } var (cntS, cntW) = readLine()!!.split(' ').map { it.toInt() } var (s, w) = readLine()!!.split(' ').map { it.toInt() } if (s > w) { s = w.also { w = s } cntS = cntW.also { cntW = cntS } } var ans = 0 for (s1 in 0..minOf((p / s), cntS)) { val w1 = minOf(cntW, (p - s * s1) / w) val s2 = minOf(cntS - s1, f / s) val w2 = minOf(cntW - w1, (f - s * s2) / w) ans = maxOf(ans, s1 + s2 + w1 + w2) } println(ans) } }
1400
C
Binary String Reconstruction
Consider the following process. You have a binary string (a string where each character is either 0 or 1) $w$ of length $n$ and an integer $x$. You build a new binary string $s$ consisting of $n$ characters. The $i$-th character of $s$ is chosen as follows: - if the character $w_{i-x}$ exists and is equal to 1, then $s_i$ is 1 (formally, if $i > x$ and $w_{i-x} = $ 1, then $s_i = $ 1); - if the character $w_{i+x}$ exists and is equal to 1, then $s_i$ is 1 (formally, if $i + x \le n$ and $w_{i+x} = $ 1, then $s_i = $ 1); - if both of the aforementioned conditions are false, then $s_i$ is 0. You are given the integer $x$ and the resulting string $s$. Reconstruct the original string $w$.
At first, let's replace all elements of the string $w$ by 1. Now, let's consider all indices $i$ such that $s_i = 0$. If $s_i = 0$, then $w_{i - x}$ must be equal to 0 (if it exists) and $w_{i + x}$ must be equal to 0 (if it exists), so let's replace all such elements by 0. Then let's perform the process described in the statement on the string $w$. If we get the string $s$, we can print $w$ as the answer; otherwise the answer is $-1$.
[ "2-sat", "brute force", "constructive algorithms", "greedy" ]
1500
#include <bits/stdc++.h> using namespace std; int t; string s; int x; string f(string s) { string res = s; for (int i = 0; i < s.size(); ++i) { if (i - x >= 0 && s[i - x] == '1' || i + x < s.size() && s[i + x] == '1') res[i] = '1'; else res[i] = '0'; } return res; } int main() { cin >> t; for (int tc = 0; tc < t; ++tc) { cin >> s >> x; int n = s.size(); string ns = string(n, '1'); for (int i = 0; i < n; ++i) { if (s[i] == '0') { if (i - x >= 0) ns[i - x] = '0'; if (i + x < n) ns[i + x] = '0'; } } if (f(ns) == s) cout << ns << endl; else cout << -1 << endl; } return 0; }
1400
D
Zigzags
You are given an array $a_1, a_2 \dots a_n$. Calculate the number of tuples $(i, j, k, l)$ such that: - $1 \le i < j < k < l \le n$; - $a_i = a_k$ and $a_j = a_l$;
Let's iterate over the indices $j$ and $k$. For simplicity, for each $j$ we'll iterate over $k$ in descending order. To calculate the number of tuples for a fixed pair $(j, k)$ we just need the number of indices $i < j$ such that $a_i = a_k$ and the number of indices $l > k$ such that $a_l = a_j$. We can maintain both values in two frequency arrays $cntLeft$ and $cntRight$, where $cntLeft[x]$ ($cntRight[x]$) is the number of indices $i < j$ ($i > k$) with $a_i = x$. It's easy to update $cntLeft$ ($cntRight$) when moving $j$ to $j + 1$ ($k$ to $k - 1$). The time complexity is $O(n^2)$.
[ "brute force", "combinatorics", "data structures", "math", "two pointers" ]
1900
fun main() { repeat(readLine()!!.toInt()) { val n = readLine()!!.toInt() val a = readLine()!!.split(' ').map { it.toInt() - 1 }.toIntArray() val cntLeft = IntArray(n) { 0 } val cntRight = IntArray(n) { 0 } var ans = 0L for (j in a.indices) { cntRight.fill(0) for (k in n - 1 downTo j + 1) { ans += cntLeft[a[k]] * cntRight[a[j]] cntRight[a[k]]++ } cntLeft[a[j]]++ } println(ans) } }
1400
E
Clear the Multiset
You have a multiset containing several integers. Initially, it contains $a_1$ elements equal to $1$, $a_2$ elements equal to $2$, ..., $a_n$ elements equal to $n$. You may apply two types of operations: - choose two integers $l$ and $r$ ($l \le r$), then remove one occurrence of $l$, one occurrence of $l + 1$, ..., one occurrence of $r$ from the multiset. This operation can be applied only if each number from $l$ to $r$ occurs at least once in the multiset; - choose two integers $i$ and $x$ ($x \ge 1$), then remove $x$ occurrences of $i$ from the multiset. This operation can be applied only if the multiset contains at least $x$ occurrences of $i$. What is the minimum number of operations required to delete all elements from the multiset?
Let's solve the problem by dynamic programming. Let $dp_{i, j}$ be the minimum number of operations to delete all elements with values from $1$ to $i$, if we have $j$ "unclosed" operations of the first type. In each transition, we either advance to the right (and possibly "close" some operations of the first type, so we go from $dp_{i, j}$ to $dp_{i + 1, \min(a_{i + 1}, j)}$, adding $1$ to the answer if $j < a_{i + 1}$, since one operation of the second type removes the remaining $a_{i + 1} - j$ occurrences), or "open" a new operation of the first type (so we go from $dp_{i, j}$ to $dp_{i, j + 1}$, adding $1$ to the answer). This solution runs in $O(nA)$, where $A = \max a_i$, which is too slow. The observation that helps us speed this up is that we only have to use values from the array $a$ as the second state of our dynamic programming. Let's prove it. The proof begins here. Let's call an integer $i$ saturated if all occurrences of $i$ were deleted by operations of the first type. Also, let's denote by $c_i$ the number of elements equal to $i$ deleted by operations of the first type (so, $i$ is saturated iff $a_i = c_i$). Suppose $k$ is the minimum integer such that $c_k$ is not a value from the array $a$. Obviously, $k$ is not saturated. Since the number of operations of the first type covering some integer decreases only when we have to close them, it always decreases to some number in $a$, so it is impossible that $c_{k - 1} > c_k$. It is also impossible that $c_{k - 1} = c_k$, since $k$ is the minimum integer such that $c_k$ does not belong to the array $a$. So $c_{k - 1} < c_k$, which means that some operations of the first type started at integer $k$. We can either get rid of them altogether (if they end at $k$ as well), or shift them to $k + 1$, so that $c_k$ belongs to the array $a$. That way, if in the optimal answer some value $c_k$ does not belong to $a$, we can reconstruct the answer so it is no longer the case. The proof ends here. Overall, since we have only up to $n$ distinct values of $j$, the solution runs in $O(n^2)$ (though it is possible to solve the problem faster).
[ "data structures", "divide and conquer", "dp", "greedy" ]
2200
#include <bits/stdc++.h> using namespace std; const int N = int(5e3) + 9; int n; int a[N]; int dp[N][N]; int calc (int pos, int x) { int &res = dp[pos][x]; if (res != -1) return res; if (pos == n) return res = 0; res = 1 + calc(pos + 1, n); res = min(res, calc(pos + 1, pos) + a[pos]); if (x != n) { if (a[x] >= a[pos]) res = min(res, calc(pos + 1, pos)); else { res = min(res, calc(pos + 1, pos) + a[pos] - a[x]); res = min(res, 1 + calc(pos + 1, x)); } } return res; } int main(){ scanf("%d", &n); for (int i = 0; i < n; ++i) scanf("%d", a + i); memset(dp, -1, sizeof(dp)); printf("%d\n", calc(0, n)); return 0; }
1400
F
x-prime Substrings
You are given an integer value $x$ and a string $s$ consisting of digits from $1$ to $9$ inclusive. A substring of a string is a contiguous subsequence of that string. Let $f(l, r)$ be the sum of digits of a substring $s[l..r]$. Let's call substring $s[l_1..r_1]$ $x$-prime if - $f(l_1, r_1) = x$; - there are no values $l_2, r_2$ such that - $l_1 \le l_2 \le r_2 \le r_1$; - $f(l_2, r_2) \neq x$; - $x$ is divisible by $f(l_2, r_2)$. You are allowed to erase some characters from the string. If you erase a character, the two resulting parts of the string are concatenated without changing their order. What is the minimum number of characters you should erase from the string so that there are no $x$-prime substrings in it? If there are no $x$-prime substrings in the given string $s$, then print $0$.
The number of $x$-prime strings is relatively small, and so is their total length (you may brute-force all of them and check that the total length of $x$-prime strings never exceeds $5000$; we will need that brute force in the solution). Now we have the following problem: given a set of strings with total length up to $5000$ and a string with length up to $1000$, erase the minimum number of characters from that string so that no string from the set appears in it as a substring. You may have already encountered a similar problem in some previous contests. The solution is dynamic programming: let $dp[i][y]$ be the minimum number of characters to erase if we have considered the first $i$ characters and there were no occurrences of strings from the set in the resulting string... and the second state of dynamic programming ($y$) is a bit more complicated. We want to make sure that no string appears in the result as a substring, so $y$ should somehow handle that. Let $y$ be the longest suffix of the current string that coincides with some prefix of a string from the set. If some string from the set is a suffix of $y$, then the state is clearly bad: we have an occurrence of an $x$-prime string. Otherwise, there is no occurrence ending in the last character, since we considered the longest possible suffix that matches some prefix from the set. The transitions in $dp$ are the following: we either take the next character and recalculate $y$, or skip the next character and add $1$ to the number of removed characters. If we maintain and recalculate $y$ naively, as a string, the solution will be too slow. There are only up to $5000$ different values of $y$, but their total length may be much greater, so we don't even have enough memory to store our dynamic programming values. Let's enumerate all strings that are prefixes of some $x$-prime string and map each of these strings to some integer, so the second state in dynamic programming can be an integer instead of a string.
Then we have to mark the bad integers (those corresponding to strings having some $x$-prime string as a suffix) and calculate the two-dimensional transition matrix $T$, where $T[v][c]$ is the integer corresponding to the string that we get if we append the character $c$ to the $v$-th string, so the transitions in dynamic programming can be done in $O(1)$. There are many ways to precalculate this matrix and mark all of the bad integers, but the most efficient one is to use the Aho-Corasick algorithm (the transition matrix $T$ is exactly the automaton that the Aho-Corasick algorithm builds). If you are not familiar with it, I recommend reading about it here: https://cp-algorithms.com/string/aho_corasick.html
[ "brute force", "dfs and similar", "dp", "string suffix structures", "strings" ]
2800
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; const int AL = 9; const int N = 5000; const int INF = 1e9; string s; int x; struct node{ int nxt[AL]; int p; char pch; int link; int go[AL]; bool term; node(){ memset(nxt, -1, sizeof(nxt)); memset(go, -1, sizeof(go)); link = p = -1; term = false; } int& operator [](int x){ return nxt[x]; } }; vector<node> trie; void add_string(string s){ int v = 0; for (auto it : s){ int c = it - '1'; if (trie[v][c] == -1){ trie.push_back(node()); trie[trie.size() - 1].p = v; trie[trie.size() - 1].pch = c; trie[v][c] = trie.size() - 1; } v = trie[v][c]; } trie[v].term = true; } int go(int v, int c); int get_link(int v){ if (trie[v].link == -1){ if (v == 0 || trie[v].p == 0) trie[v].link = 0; else trie[v].link = go(get_link(trie[v].p), trie[v].pch); } return trie[v].link; } int go(int v, int c) { if (trie[v].go[c] == -1){ if (trie[v][c] != -1) trie[v].go[c] = trie[v][c]; else trie[v].go[c] = (v == 0 ? 0 : go(get_link(v), c)); } return trie[v].go[c]; } string t; void brute(int i, int sum){ if (sum == x){ bool ok = true; for (int l = 0; l < int(t.size()); ++l){ int cur = 0; for (int r = l; r < int(t.size()); ++r){ cur += (t[r] - '0'); if (x % cur == 0 && cur != x) ok = false; } } if (ok){ add_string(t); } return; } for (int j = 1; j <= min(x - sum, 9); ++j){ t += '0' + j; brute(i + 1, sum + j); t.pop_back(); } } int main() { cin >> s >> x; trie.push_back(node()); brute(0, 0); vector<vector<int>> dp(s.size() + 1, vector<int>(trie.size(), INF)); dp[0][0] = 0; forn(i, s.size()) forn(j, trie.size()) if (dp[i][j] != INF){ dp[i + 1][j] = min(dp[i + 1][j], dp[i][j] + 1); int nxt = go(j, s[i] - '1'); if (!trie[nxt].term) dp[i + 1][nxt] = min(dp[i + 1][nxt], dp[i][j]); } printf("%d\n", *min_element(dp[s.size()].begin(), dp[s.size()].end())); return 0; }