Dataset columns: contest_id (string, length 1-4), index (string, 43 classes), title (string, length 2-63), statement (string, length 51-4.24k), tutorial (string, length 19-20.4k), tags (list, length 0-11), rating (int64, 800-3.5k), code (string, length 46-29.6k).
1872
G
Replace With Product
You are given an array $a$ of $n$ positive integers. You need to perform the following operation \textbf{exactly} once: - Choose $2$ integers $l$ and $r$ ($1 \le l \le r \le n$) and replace the subarray $a[l \ldots r]$ with the single element: the product of all elements in the subarray $(a_l \cdot \ldots \cdot a_r)$. For example, if an operation with parameters $l = 2, r = 4$ is applied to the array $[5, 4, 3, 2, 1]$, the array will turn into $[5, 24, 1]$. Your task is to maximize the sum of the array after applying this operation. Find the optimal subarray to apply this operation.
Key observation: if the product of all elements in the array is sufficiently large, it is always optimal to perform the operation on the entire array, except for a possible prefix/suffix consisting of ones. As the threshold for "sufficiently large" you can take, for example, $2^{60}$ (in reality, $2 \cdot n$ is already sufficient). There won't be a proof, but that's how it is. Now the problem reduces to the case where the product of the array is at most $2^{60}$. This means that the number of array elements that are $> 1$ is at most $60$. Notice that in an optimal segment, the leftmost/rightmost element is never equal to $1$: otherwise we could shrink that boundary and the sum would increase by $1$. Therefore, both the start and the end of the segment must be non-one elements. This means that there are at most $60^2$ interesting segment options. We can explicitly iterate through all of them and choose the best one, using prefix sums (and running products) to quickly evaluate the value of a specific segment.
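To make the last step concrete, here is a small Python sketch of the small-product case only (the function name is mine, and the large-product branch described above is omitted). The gain of replacing $a[l \ldots r]$ is $\mathrm{prod}(a[l \ldots r]) - \mathrm{sum}(a[l \ldots r])$, evaluated with a prefix-sum array and a running product:

```python
def best_segment(a):
    # Small-product case only: the gain of replacing a[l..r] is
    # prod(a[l..r]) - sum(a[l..r]); only segments that start and end
    # at non-one elements need to be considered.
    pref = [0]
    for x in a:
        pref.append(pref[-1] + x)
    not1 = [i for i, x in enumerate(a) if x > 1]
    best, ans = 0, (0, 0)
    for xi, i in enumerate(not1):
        p = 1
        for j in not1[xi:]:
            p *= a[j]  # ones in between contribute factor 1
            gain = p - (pref[j + 1] - pref[i])
            if gain >= best:
                best, ans = gain, (i, j)
    return ans  # 0-indexed (l, r)
```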
[ "brute force", "greedy", "math" ]
2,000
for _ in range(int(input())):
    n = int(input())
    a = list(map(int, input().split()))
    MAXP = 2 * n
    prod = 1
    for x in a:
        prod *= x
        if prod > MAXP:
            break
    if prod > MAXP:
        l = 0
        r = n - 1
        while l < n and a[l] == 1:
            l += 1
        while r >= 0 and a[r] == 1:
            r -= 1
        print(l + 1, r + 1)
        continue
    not1 = []
    for i in range(n):
        if a[i] > 1:
            not1.append(i)
    maxval = 0
    l = 0
    r = 0
    for x in range(len(not1)):
        p = 1
        s = 0
        for y in range(x, len(not1)):
            i = not1[x]
            j = not1[y]
            p *= a[j]
            s += a[j] - 1
            if maxval <= p - s - (j - i + 1):
                maxval = p - s - (j - i + 1)
                l = i
                r = j
    print(l + 1, r + 1)
1873
A
Short Sort
There are three cards with letters $a$, $b$, $c$ placed in a row in some order. You can do the following operation \textbf{at most once}: - Pick two cards, and swap them. Is it possible that the row becomes $abc$ after the operation? Output "YES" if it is possible, and "NO" otherwise.
There are only $6$ possible input strings, so you can handle each one by hand: output NO if $s$ is $\texttt{bca}$ or $\texttt{cab}$ and YES otherwise. Another way to solve it is to count the number of letters in the wrong position. A swap changes $2$ letters, so if at most two letters are in the wrong position, then it's possible, otherwise it's not possible. Of course, you can also brute force all possible swaps and check if it works.
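The last approach (brute-forcing all swaps) can be sketched as follows; this is an illustrative Python rendition with my own function name, not the model solution:

```python
from itertools import combinations

def can_sort(s: str) -> bool:
    # Do nothing, or try each of the 3 possible swaps of two cards.
    if s == "abc":
        return True
    for i, j in combinations(range(3), 2):
        t = list(s)
        t[i], t[j] = t[j], t[i]
        if "".join(t) == "abc":
            return True
    return False
```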
[ "brute force", "implementation" ]
800
#include <bits/stdc++.h>
using namespace std;

const int MAX = 200007;
const int MOD = 1000000007;

string alph = "abc";

void solve() {
    string s;
    cin >> s;
    int cnt = 0;
    for (int i = 0; i < 3; i++) {
        cnt += (s[i] != alph[i]);
    }
    cout << (cnt <= 2 ? "YES\n" : "NO\n");
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    int tt;
    cin >> tt;
    for (int i = 1; i <= tt; i++) { solve(); }
    // solve();
}
1873
B
Good Kid
Slavic is preparing a present for a friend's birthday. He has an array $a$ of $n$ digits and the present will be the product of all these digits. Because Slavic is a good kid who wants to make the biggest product possible, he wants to add $1$ to exactly one of his digits. What is the maximum product Slavic can make?
Just brute force all possibilities for the digit to increase, and check the product each time. The complexity is $\mathcal{O}(n^2)$ per testcase. You can make it faster if you notice that it is always optimal to increase the smallest digit (why?), but that wasn't necessary to pass.
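The brute force can be sketched like this (a hypothetical `max_product` helper, not the model code):

```python
def max_product(digits):
    # Try adding 1 to each digit in turn; keep the best product.
    best = 0
    for i in range(len(digits)):
        prod = 1
        for j, d in enumerate(digits):
            prod *= d + 1 if i == j else d
        best = max(best, prod)
    return best
```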
[ "brute force", "greedy", "math" ]
800
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n;
    cin >> n;
    vector<int> a(n);
    int ans = 1;
    for (int i = 0; i < n; i++) {
        cin >> a[i];
    }
    sort(a.begin(), a.end());
    a[0]++;
    for (int i = 0; i < n; i++) {
        ans *= a[i];
    }
    cout << ans << endl;
}

int32_t main() {
    int t = 1;
    cin >> t;
    while (t--) {
        solve();
    }
}
1873
C
Target Practice
A $10 \times 10$ target is made out of five "rings" as shown. Each ring has a different point value: the outermost ring — 1 point, the next ring — 2 points, ..., the center ring — 5 points. Vlad fired several arrows at the target. Help him determine how many points he got.
You can just hardcode the values in the array below, and iterate through the grid; if it is an $\texttt{X}$, we add the value to our total. See the implementation for more details. $\begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 1 \\ 1 & 2 & 3 & 3 & 3 & 3 & 3 & 3 & 2 & 1 \\ 1 & 2 & 3 & 4 & 4 & 4 & 4 & 3 & 2 & 1 \\ 1 & 2 & 3 & 4 & 5 & 5 & 4 & 3 & 2 & 1 \\ 1 & 2 & 3 & 4 & 5 & 5 & 4 & 3 & 2 & 1 \\ 1 & 2 & 3 & 4 & 4 & 4 & 4 & 3 & 2 & 1 \\ 1 & 2 & 3 & 3 & 3 & 3 & 3 & 3 & 2 & 1 \\ 1 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \end{bmatrix}$
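Instead of hardcoding the matrix, the ring value of a cell can also be computed as one plus its distance to the nearest border; a small Python sketch (helper names are mine) that agrees with the matrix above:

```python
def ring_value(i: int, j: int) -> int:
    # The ring is determined by the distance to the nearest border
    # of the 10x10 grid.
    return 1 + min(i, j, 9 - i, 9 - j)

def score(grid):
    # grid: list of 10 strings of length 10.
    return sum(ring_value(i, j)
               for i in range(10) for j in range(10)
               if grid[i][j] == 'X')
```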
[ "implementation", "math" ]
800
#include <bits/stdc++.h>
using namespace std;

const int MAX = 200007;
const int MOD = 1000000007;

int score[10][10] = {
    {1,1,1,1,1,1,1,1,1,1},
    {1,2,2,2,2,2,2,2,2,1},
    {1,2,3,3,3,3,3,3,2,1},
    {1,2,3,4,4,4,4,3,2,1},
    {1,2,3,4,5,5,4,3,2,1},
    {1,2,3,4,5,5,4,3,2,1},
    {1,2,3,4,4,4,4,3,2,1},
    {1,2,3,3,3,3,3,3,2,1},
    {1,2,2,2,2,2,2,2,2,1},
    {1,1,1,1,1,1,1,1,1,1}
};

void solve() {
    int ans = 0;
    for (int i = 0; i < 10; i++) {
        for (int j = 0; j < 10; j++) {
            char c;
            cin >> c;
            if (c == 'X') { ans += score[i][j]; }
        }
    }
    cout << ans << '\n';
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    int tt;
    cin >> tt;
    for (int i = 1; i <= tt; i++) { solve(); }
    // solve();
}
1873
D
1D Eraser
You are given a strip of paper $s$ that is $n$ cells long. Each cell is either black or white. In an operation you can take any $k$ consecutive cells and make them all white. Find the minimum number of operations needed to remove all black cells.
The key idea is greedy. Go from left to right, and when the current cell is black, use the operation starting at this cell (it may go off the strip, but that's okay: we could always shift it leftwards and still cover all the cells we need it to). We can implement this in $\mathcal{O}(n)$: iterate from left to right with a variable $i$, and when you see a black cell, skip the next $k-1$ cells (the eraser will take care of them) and increase the number of operations by $1$. The answer is the total number of operations. Why does it work? Notice that the order of operations doesn't matter. Consider the leftmost black cell we erase: none of the cells to its left are black, so it doesn't make sense for an operation to cover any of them, since they are already white. Among the operations covering this cell, starting exactly at this cell is at least as good as starting at one of the $k-1$ cells to its left, since the extra cells on the right may or may not contain another black cell.
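The scan described above can be sketched in a few lines of Python (the function name is mine):

```python
def min_ops(s: str, k: int) -> int:
    # Left-to-right greedy: start an erase at every black cell found.
    ops = 0
    i = 0
    while i < len(s):
        if s[i] == 'B':
            ops += 1   # one operation covers cells i .. i + k - 1
            i += k
        else:
            i += 1
    return ops
```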
[ "greedy", "implementation", "two pointers" ]
800
#include <bits/stdc++.h>
using namespace std;

const int MAX = 200'007;
const int MOD = 1'000'000'007;

void solve() {
    int n, k;
    cin >> n >> k;
    string s;
    cin >> s;
    int res = 0;
    for (int i = 0; i < n; i++) {
        if (s[i] == 'B') {
            res++;
            i += k - 1;
        }
    }
    cout << res << '\n';
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    int tt;
    cin >> tt;
    for (int i = 1; i <= tt; i++) { solve(); }
    // solve();
}
1873
E
Building an Aquarium
You love fish, that's why you have decided to build an aquarium. You have a piece of coral made of $n$ columns, the $i$-th of which is $a_i$ units tall. Afterwards, you will build a tank around the coral as follows: - Pick an integer $h \geq 1$ — the height of the tank. Build walls of height $h$ on either side of the tank. - Then, fill the tank up with water so that the height of each column is $h$, unless the coral is taller than $h$; then no water should be added to this column. For example, with $a=[3,1,2,4,6,2,5]$ and a height of $h=4$, you will end up using a total of $w=8$ units of water, as shown. You can use at most $x$ units of water to fill up the tank, but you want to build the biggest tank possible. What is the largest value of $h$ you can select?
We need to find the maximum height with a certain upper bound - this is a tell-tale sign of binary search. If you don't know what that is, you should read this Codeforces EDU article. For a given value of $h$, in the $i$-th column we will need $h - a_i$ units of water if $h \geq a_i$, or $0$ units otherwise. (This is equal to $\max(h - a_i, 0)$, why?) So we can compute the amount of water for all $n$ columns by simply iterating through and summing the total amount of water needed for each column, and see if it's not larger than $x$. Then you can binary search on the optimal value of $h$. The model solution uses the starting bounds $l = 0$, $r \approx 2 \cdot 10^9$, because the optimal height can be as large as $\max(a_i) + x$. So the complexity is $\mathcal{O}(n \log(\max a_i + x))$ per testcase.
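The binary search can be sketched in Python as follows (the function name is mine; the upper bound is taken as $\max(a_i) + x$ per the reasoning above):

```python
def max_height(a, x):
    # Binary search for the largest h with total water <= x.
    lo, hi = 0, max(a) + x  # the answer can never exceed max(a) + x
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if sum(max(mid - v, 0) for v in a) <= x:
            lo = mid
        else:
            hi = mid - 1
    return lo
```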
[ "binary search", "sortings" ]
1,100
#include <bits/stdc++.h>
using namespace std;

const int MAX = 200'007;
const int MOD = 1'000'000'007;

void solve() {
    int n;
    long long x;
    cin >> n >> x;
    long long a[n];
    for (int i = 0; i < n; i++) { cin >> a[i]; }
    long long lo = 0, hi = 2'000'000'007;
    while (lo < hi) {
        long long mid = lo + (hi - lo + 1) / 2;
        long long tot = 0;
        for (int i = 0; i < n; i++) { tot += max(mid - a[i], 0LL); }
        if (tot <= x) { lo = mid; }
        else { hi = mid - 1; }
    }
    cout << lo << endl;
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    int tt;
    cin >> tt;
    for (int i = 1; i <= tt; i++) { solve(); }
    // solve();
}
1873
F
Money Trees
Luca is in front of a row of $n$ trees. The $i$-th tree has $a_i$ fruit and height $h_i$. He wants to choose a contiguous subarray of the array $[h_l, h_{l+1}, \dots, h_r]$ such that for each $i$ ($l \leq i < r$), \textbf{$h_i$ is divisible$^{\dagger}$ by $h_{i+1}$}. He will collect all the fruit from each of the trees in the subarray (that is, he will collect $a_l + a_{l+1} + \dots + a_r$ fruits). However, if he collects more than $k$ fruits in total, he will get caught. What is the maximum length of a subarray Luca can choose so he doesn't get caught? $^{\dagger}$ $x$ is divisible by $y$ if the ratio $\frac{x}{y}$ is an integer.
Let's compute $len_l$ for each $l \leq n$: the maximum $r$ such that the subarray $[h_l \dots h_r]$ satisfies the height divisibility condition. Now let's do a binary search on the $size$ of the segment: for a given $size$, check every $i$ such that $len_i \geq i+size-1$, and test the sum of $a$ between $i$ and $i+size-1$ using a prefix sum. If it is less than or equal to $k$, we update the lower bound of the binary search; otherwise, if we find no such $i$ for this $size$, we update the upper bound. Final complexity: $\mathcal{O}(n \log n)$.
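Since the tags also mention two pointers, here is an alternative $\mathcal{O}(n)$ sliding-window sketch in Python (my own variant, not the model solution): the window is reset whenever the divisibility chain breaks, and shrunk from the left while the fruit sum exceeds $k$:

```python
def longest(a, h, k):
    # Sliding window: [l, r] always satisfies the divisibility chain
    # and has fruit sum <= k.
    best = l = s = 0
    for r in range(len(a)):
        if r > 0 and h[r - 1] % h[r] != 0:
            l, s = r, 0  # chain broken: restart the window at r
        s += a[r]
        while s > k:
            s -= a[l]
            l += 1
        best = max(best, r - l + 1)
    return best
```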
[ "binary search", "greedy", "math", "two pointers" ]
1,300
#include <bits/stdc++.h>
using namespace std;

const int N = 200'000;
int n, k;
int a[N+5], h[N+5], pref[N+5], length[N+5];

bool get(int dist) {
    bool found = false;
    for (int i = 0; i < n-dist+1; i++) {
        if (length[i] < dist) { continue; }
        int sum = pref[i+dist] - pref[i];
        if (sum <= k) {
            found = true;
            break;
        }
    }
    return found;
}

void solve() {
    pref[0] = 0;
    cin >> n >> k;
    for (int i = 0; i < n; i++) {
        cin >> a[i];
        pref[i+1] = pref[i] + a[i];
    }
    for (int i = 0; i < n; i++) { cin >> h[i]; }
    length[n-1] = 1;
    for (int i = n-2; i >= 0; i--) {
        if (h[i] % h[i+1] == 0) { length[i] = length[i+1] + 1; }
        else { length[i] = 1; }
    }
    int l = 1, r = N;
    while (l <= r) {
        int mid = (l+r)/2;
        if (get(mid)) { l = mid+1; }
        else { r = mid-1; }
    }
    cout << r << endl;
}

int main() {
    int t = 1;
    cin >> t;
    while (t--) { solve(); }
}
1873
G
ABBC or BACB
You are given a string $s$ made up of characters $A$ and $B$. Initially you have no coins. You can perform two types of operations: - Pick a substring$^\dagger$ $AB$, change it to $BC$, and get a coin. - Pick a substring$^\dagger$ $BA$, change it to $CB$, and get a coin. What is the maximum number of coins you can obtain? $^\dagger$ A substring of length $2$ is a sequence of two adjacent characters of a string.
Preface. Before we get into the editorial, I want to make a note that this type of problem is not traditional for Div. 4 on purpose, and is a lot more ad-hoc than usual. My goal was to make a sort of "introductory" ad-hoc problem, and I hope you enjoyed it. The editorial is a bit long, mainly because I want to go into a lot of detail regarding how to approach such ad-hoc problems, since usually most Div. 4 problems are unlike this, and are instead focused on noticing and implementing the correct algorithm. I don't think anyone solving the problem will actually go through all this detail explicitly, but I wanted to provide a very explicit walk-through for anyone not knowing how to start. I think ad-hoc skills are important for Codeforces especially, hence this problem. Maybe you'll see more soon. ;) Let's think about how we can repeatedly use the operations first. If we have a string $\texttt{AAAB}$, then we can do the operations $\texttt{AAAB} \to \texttt{AABC} \to \texttt{ABCC} \to \texttt{BCCC}$. Similarly, if we have a string $\texttt{BAAA}$, then we can do the operations $\texttt{BAAA} \to \texttt{CBAA} \to \texttt{CCBA} \to \texttt{CCCB}$. In a sense, it's useful to think of the $\texttt{B}$ as "eating" the $\texttt{A}$s: whenever a $\texttt{B}$ is next to an $\texttt{A}$, it covers it and eats it, and then it moves on. Note that $\texttt{B}$ cannot eat $\texttt{C}$s. Let's replace the characters as follows: replace $\texttt{A}$ with $\texttt{.}$ and $\texttt{C}$ with $\texttt{_}$. Then it's much clearer what is going on. Here is a series of moves on the string $\texttt{BAAABA}$, which becomes $\texttt{B...B.}$ after the replacement: $\texttt{B...B.} \to \texttt{_B..B.} \to \texttt{__B.B.} \to \texttt{___BB.} \to \texttt{___B_B}$ Now you can see how each $\texttt{B}$ eats all the dots in one direction (it can't travel over blank spaces represented by $\texttt{_} = \texttt{C}$). Okay, we have this intuition; how do we solve the problem now?
We need to eat the maximum number of dots ($\texttt{A}$s in the original string), since $\texttt{B}$s cannot eat anything else. Note that each $\texttt{B}$ can eat all the $\texttt{A}$s to its left or to its right, but not both; once it leaves its original spot, it is stuck on that side: $\texttt{..B.} \to \texttt{.B_.} \to \texttt{B__.}$ This is where the shape of our original string comes into play. Suppose our original string starts with $\texttt{B}$, so it must be of the form $\texttt{B} \underbrace{\texttt{A} \dots \texttt{A}}_{\text{some number, possibly } 0} \texttt{B} \underbrace{\texttt{A} \dots \texttt{A}}_{\text{some number, possibly } 0} \texttt{B} \underbrace{\texttt{A} \dots \texttt{A}}_{\text{some number, possibly } 0} \dots$ Then every group of $\texttt{A}$s has its own $\texttt{B}$ directly to its left, which can eat the entire group, so all $\texttt{A}$s can be eaten. By symmetry, the same holds if the string ends with $\texttt{B}$: $\dots\underbrace{\texttt{A} \dots \texttt{A}}_{\text{some number, possibly } 0} \texttt{B} \underbrace{\texttt{A} \dots \texttt{A}}_{\text{some number, possibly } 0} \texttt{B} \underbrace{\texttt{A} \dots \texttt{A}}_{\text{some number, possibly } 0} \texttt{B}$ It also holds if two $\texttt{B}$s are adjacent somewhere, i.e. the string contains a block $\underbrace{\texttt{A} \dots \texttt{A}}_{\text{some number, possibly } 0} \texttt{B} \texttt{B} \underbrace{\texttt{A} \dots \texttt{A}}_{\text{some number, possibly } 0}$: the left $\texttt{B}$ of the pair eats to the left, the right one to the right, and once again every group of $\texttt{A}$s gets its own $\texttt{B}$. What's the only other case? The string starts and ends with $\texttt{A}$, and there are no two $\texttt{B}$s next to each other, meaning it looks something like: $\underbrace{\texttt{A} \dots \texttt{A}}_{\text{some number, possibly } 0} \texttt{B} \underbrace{\texttt{A} \dots \texttt{A}}_{\text{some number, possibly } 0} \dots \,\,\,\,\,\,\,\, \texttt{B} \underbrace{\texttt{A} \dots \texttt{A}}_{\text{some number, possibly } 0}$ In this case, you can see there is one more "group" of $\texttt{A}$s than there are $\texttt{B}$s, but each $\texttt{B}$ can only get one group of $\texttt{A}$s. So we won't be able to get all of them. What's the best we can do?
Since each $\texttt{B}$ can only get one group, whether it goes left or right, we can get all the $\texttt{A}$s except one group. Now the answer is simply greedy: it is the total number of $\texttt{A}$s minus the smallest group (since we want the most coins, we sacrifice the smallest group). You can also envision this as a greedy process: each $\texttt{B}$ takes the largest group available, and we stop once no more $\texttt{B}$s are free. Finding the groups takes $\mathcal{O}(n)$, so the whole solution runs in that time as well.
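The final formula can be sketched directly (a hypothetical `max_coins` helper following the case analysis above):

```python
def max_coins(s: str) -> int:
    # If a B sits on a boundary, or two B's are adjacent, every group
    # of A's can be eaten; otherwise one group must be sacrificed,
    # greedily the smallest one.
    total = s.count('A')
    if s and (s[0] == 'B' or s[-1] == 'B' or 'BB' in s):
        return total
    groups = [len(g) for g in s.split('B') if g]
    return total - min(groups) if groups else 0
```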
[ "constructive algorithms", "greedy" ]
1,500
#include <bits/stdc++.h>
using namespace std;

const int MAX = 200'007;
const int MOD = 1'000'000'007;

void solve() {
    string s;
    cin >> s;
    int n = s.length(), cnt = 0;
    bool all = (s[0] == 'B' || s[n - 1] == 'B');
    for (int i = 0; i < n - 1; i++) {
        if (s[i] == s[i + 1] && s[i] == 'B') { all = true; }
    }
    vector<int> lens;
    int curr = 0;
    for (int i = 0; i < n; i++) {
        if (s[i] == 'A') { curr++; }
        else {
            if (curr != 0) { lens.push_back(curr); }
            curr = 0;
        }
    }
    if (curr != 0) { lens.push_back(curr); }
    sort(lens.begin(), lens.end());
    if (lens.empty()) { cout << 0 << '\n'; return; }
    int tot = 0;
    if (all) { tot += lens[0]; }
    for (int i = 1; i < lens.size(); i++) { tot += lens[i]; }
    cout << tot << '\n';
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    int tt;
    cin >> tt;
    for (int i = 1; i <= tt; i++) { solve(); }
    // solve();
}
1873
H
Mad City
Marcel and Valeriu are in the mad city, which is represented by $n$ buildings with $n$ two-way roads between them. Marcel and Valeriu start at buildings $a$ and $b$ respectively. Marcel wants to catch Valeriu, in other words, be in the same building as him or meet on the same road. During each move, they choose to go to an adjacent building of their current one or stay in the same building. Because Valeriu knows Marcel so well, Valeriu can predict where Marcel will go in the next move. Valeriu can use this information to make his move. They start and end the move at the same time. It is guaranteed that any pair of buildings is connected by some path and there is at most one road between any pair of buildings. Assuming both players play optimally, answer if Valeriu has a strategy to indefinitely escape Marcel.
Because we have a tree with one additional edge, our graph has exactly one cycle. If Marcel and Valeriu share the same starting building, then the answer is "NO". If we do a depth-first search from Valeriu's node and encounter an already visited node that is not the current node's parent, that node is part of the cycle; moreover, it is the node where Valeriu enters the cycle. Valeriu can escape Marcel forever if and only if he reaches this node strictly before Marcel can. This is because once Valeriu is on the cycle, he always has $2$ choices of adjacent buildings that keep him on the cycle. Marcel can only be in one building at any given time, and Valeriu knows Marcel's next move, which means Valeriu can always escape him. So it just remains to check, with a breadth-first search or a depth-first search, that Marcel arrives at Valeriu's entry node strictly after Valeriu.
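A BFS-based sketch in Python (names are mine). Instead of locating the single entry node, it compares distances at every cycle node, which is equivalent: Valeriu escapes iff he can reach some cycle node strictly before Marcel. The cycle is found by repeatedly peeling degree-one vertices:

```python
from collections import deque

def valeriu_escapes(n, edges, a, b):
    # n buildings, n two-way roads: a tree plus one extra edge,
    # hence exactly one cycle.
    adj = [[] for _ in range(n + 1)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    # Peel degree-1 vertices repeatedly; the survivors form the cycle.
    deg = [len(adj[u]) for u in range(n + 1)]
    on_cycle = [True] * (n + 1)
    on_cycle[0] = False
    q = deque(u for u in range(1, n + 1) if deg[u] == 1)
    while q:
        u = q.popleft()
        on_cycle[u] = False
        for v in adj[u]:
            deg[v] -= 1
            if deg[v] == 1:
                q.append(v)

    def bfs(src):
        dist = [-1] * (n + 1)
        dist[src] = 0
        qq = deque([src])
        while qq:
            u = qq.popleft()
            for v in adj[u]:
                if dist[v] == -1:
                    dist[v] = dist[u] + 1
                    qq.append(v)
        return dist

    da, db = bfs(a), bfs(b)
    # Valeriu (at b) escapes iff he reaches some cycle node strictly
    # before Marcel (at a).
    return any(on_cycle[u] and db[u] < da[u] for u in range(1, n + 1))
```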
[ "dfs and similar", "dsu", "games", "graphs", "shortest paths", "trees" ]
1,700
#include <bits/stdc++.h>
using namespace std;

const int N = 200005;
vector<int> adj[N];
vector<bool> vis(N);
int entry_node = -1;
vector<int> path;

bool dfs1(int u, int p) {
    vis[u] = true;
    for (auto v : adj[u]) {
        if (v != p && vis[v]) {
            entry_node = v;
            return true;
        } else if (v != p && !vis[v]) {
            if (dfs1(v, u)) { return true; }
        }
    }
    return false;
}

int dfs2(int u) {
    vis[u] = true;
    int distbruh = N;
    for (auto v : adj[u]) {
        if (v == entry_node) { return 1; }
        if (!vis[v]) {
            int dist = dfs2(v) + 1;
            distbruh = min(dist, distbruh);
        }
    }
    return distbruh;
}

void solve() {
    int n, a, b;
    cin >> n >> a >> b;
    for (int i = 0; i < n; i++) {
        int u, v;
        cin >> u >> v;
        adj[u].push_back(v);
        adj[v].push_back(u);
    }
    dfs1(b, -1);
    vis.assign(n+1, false);
    int distMarcel = N, distValeriu = 0;
    if (entry_node == a) { distMarcel = 0; }
    else { distMarcel = dfs2(a); }
    vis.assign(n+1, false);
    if (entry_node == b) { distValeriu = 0; }
    else { distValeriu = dfs2(b); }
    if (distValeriu < distMarcel) { cout << "YES" << endl; }
    else { cout << "NO" << endl; }
    for (int i = 1; i <= n; i++) {
        adj[i].clear();
        vis[i] = false;
    }
}

int32_t main() {
    int t = 1;
    cin >> t;
    while (t--) { solve(); }
}
1874
A
Jellyfish and Game
Jellyfish has $n$ green apples with values $a_1, a_2, \dots, a_n$ and Gellyfish has $m$ green apples with values $b_1,b_2,\ldots,b_m$. They will play a game with $k$ rounds. For $i=1,2,\ldots,k$ in this order, they will perform the following actions: - If $i$ is odd, Jellyfish can choose to swap one of her apples with one of Gellyfish's apples or do nothing. - If $i$ is even, Gellyfish can choose to swap one of his apples with one of Jellyfish's apples or do nothing. Both players want to maximize the sum of the values of their apples. Since you are one of the smartest people in the world, Jellyfish wants you to tell her the final sum of the value of her apples after all $k$ rounds of the game. Assume that both Jellyfish and Gellyfish play optimally to maximize the sum of values of their apples.
Let us define $\min(a)$ to be the minimum value of $a$ in the current round, $\max(a)$ to be the maximum value of $a$ in the current round, $\min(b)$ to be the minimum value of $b$ in the current round, $\max(b)$ to be the maximum value of $b$ in the current round, $\text{MIN}$ to be the minimum value of all the apples, and $\text{MAX}$ to be the maximum value of all the apples. By greedy and induction, we come to the following conclusion. If Jellyfish is the one operating this round: if $\min(a) < \max(b)$, she will swap these two apples; otherwise she will do nothing. If Gellyfish is the one operating this round: if $\max(a) > \min(b)$, he will swap these two apples; otherwise he will do nothing. We consider who $\text{MAX}$ and $\text{MIN}$ will belong to after the first round. In the first round: if $\max(a) < \max(b)$, then $\text{MAX} = \max(b)$, and because $\min(a) < \max(b)$, Jellyfish will swap these two apples, so $\text{MAX}$ belongs to Jellyfish. If $\max(a) > \max(b)$, then $\text{MAX} = \max(a)$. If $\min(a) = \max(a)$, then $\min(a) > \max(b)$ and Jellyfish will do nothing; otherwise Jellyfish won't swap away the apple with value $\text{MAX}$. In conclusion, $\text{MAX}$ belongs to Jellyfish. We can also show that $\text{MIN}$ belongs to Gellyfish; the proof is symmetric to the above. So in the second round, $\min(b) = \text{MIN}$ and $\max(a) = \text{MAX}$, with $\text{MIN} < \text{MAX}$, so Gellyfish will swap these two apples; in the third round, $\min(a) = \text{MIN}$ and $\max(b) = \text{MAX}$, so Jellyfish will swap these two apples. So after the first round, the game proceeds two rounds at a time, with the two players swapping the minimum-value and maximum-value apples back and forth. So we only need to know the answer for $k = 1$ and $k = 2$. Time complexity: $O(n + m)$ per test case. Memory complexity: $O(n + m)$ per test case.
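Since only the parity of $k$ matters after the first round, the conclusion can be sketched as follows (a hypothetical helper mirroring the reasoning above):

```python
def final_sum(a, b, k):
    # After round 1 the game repeats with period 2, so odd k behaves
    # like k = 1 and even k like k = 2.
    a, b = a[:], b[:]
    # Round 1: Jellyfish swaps her minimum with Gellyfish's maximum
    # if that increases her sum.
    i, j = a.index(min(a)), b.index(max(b))
    if b[j] > a[i]:
        a[i], b[j] = b[j], a[i]
    if k % 2 == 0:
        # Round 2: Gellyfish takes Jellyfish's maximum in exchange
        # for his minimum (after round 1 this never hurts him).
        i, j = a.index(max(a)), b.index(min(b))
        a[i], b[j] = b[j], a[i]
    return sum(a)
```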
[ "brute force", "games", "greedy", "implementation" ]
1,200
#include <bits/stdc++.h>
using namespace std;

const int N = 1000 + 5;
int n = 0, m = 0, k = 0, x = 0, y = 0, a[N] = {}, b[N] = {};

inline void solve() {
    scanf("%d %d %d", &n, &m, &k);
    k--;
    for (int i = 0; i < n; i++) scanf("%d", &a[i]);
    for (int i = 0; i < m; i++) scanf("%d", &b[i]);
    x = y = 0;
    for (int i = 1; i < n; i++) if (a[i] < a[x]) x = i;
    for (int i = 1; i < m; i++) if (b[i] > b[y]) y = i;
    if (b[y] > a[x]) swap(a[x], b[y]);
    if (k & 1) {
        x = 0, y = 0;
        for (int i = 1; i < n; i++) if (a[i] > a[x]) x = i;
        for (int i = 1; i < m; i++) if (b[i] < b[y]) y = i;
        swap(a[x], b[y]);
    }
    long long ans = 0;
    for (int i = 0; i < n; i++) ans += a[i];
    printf("%lld\n", ans);
}

int T = 0;
int main() {
    scanf("%d", &T);
    for (int i = 0; i < T; i++) solve();
    return 0;
}
1874
B
Jellyfish and Math
Jellyfish is given the non-negative integers $a$, $b$, $c$, $d$ and $m$. Initially $(x,y)=(a,b)$. Jellyfish wants to do several operations so that $(x,y)=(c,d)$. For each operation, she can do one of the following: - $x := x\,\&\,y$, - $x := x\,|\,y$, - $y := x \oplus y$, - $y := y \oplus m$. Here $\&$ denotes the bitwise AND operation, $|$ denotes the bitwise OR operation and $\oplus$ denotes the bitwise XOR operation. Now Jellyfish asks you for the minimum number of operations such that $(x,y)=(c,d)$.
First of all, since $\text{and}, \text{or}, \text{xor}$ are all bitwise operations, each bit is independent of the others. We define $a_i$ as the $i$-th bit of $a$ (in binary), and similarly $b_i$, $c_i$, $d_i$, $m_i$, $x_i$, $y_i$ for $b$, $c$, $d$, $m$, $x$, $y$. Lemma. For all $i \neq j$, if $(a_i, b_i, m_i) = (a_j, b_j, m_j)$ and $(c_i, d_i) \neq (c_j, d_j)$, the goal cannot be achieved. Proof. After every operation we will have $(x_i, y_i) = (x_j, y_j)$, so we can't achieve the goal. Since $(a_i, b_i, m_i)$ has only $2^3 = 8$ cases, $(c_i, d_i)$ has only $2^2 = 4$ cases, and some triples $(0/1, 0/1, 0/1)$ may not appear in $\{(a_i, b_i, m_i)\ |\ 0 \leq i \leq \log \max(a, b, c, d, m)\}$, there are only $(4+1)^8 < 4 \times 10^5$ essentially different inputs in this problem. We can use BFS (breadth-first search) for preprocessing. Time complexity: $O(5^8)$ for preprocessing and $O(\log \max(a, b, c, d, m))$ per test case. Memory complexity: $O(5^8)$ for preprocessing and $O(1)$ per test case.
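The encoding of an input into one of the $5^8$ states can be sketched as follows (a hypothetical `encode` helper illustrating the counting argument; digit value $4$ marks a pattern $(a_i, b_i, m_i)$ that never occurs):

```python
def encode(a, b, c, d, m, bits=30):
    # One base-5 digit per pattern (a_i, b_i, m_i): values 0..3 store
    # the required (c_i, d_i) for that pattern, 4 means "never occurs".
    digits = [4] * 8
    for i in range(bits):
        p = ((a >> i & 1) << 2) | ((b >> i & 1) << 1) | (m >> i & 1)
        t = ((c >> i & 1) << 1) | (d >> i & 1)
        if digits[p] == 4:
            digits[p] = t
        elif digits[p] != t:
            # Two bit positions with the same (a_i, b_i, m_i) demand
            # different (c_i, d_i): impossible by the lemma above.
            return None
    return sum(dig * 5 ** i for i, dig in enumerate(digits))
```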
[ "bitmasks", "brute force", "dfs and similar", "dp", "graphs", "shortest paths" ]
2,400
#include <bits/stdc++.h>
using namespace std;

const int S = 4e5 + 5, Inf = 0x3f3f3f3f;
int pw5[10] = {}, dp[S] = {};
queue<int> Q;

inline void checkmin(int &x, int y) { if (y < x) x = y; }

inline int w(int mask, int i) { return (mask / pw5[i]) % 5; }

inline int f(int a, int b, int m) { return (a << 2) | (b << 1) | m; }

inline int g(int c, int d) { return (c << 1) | d; }

inline int work(int mask, int opt) {
    int ret = 0;
    for (int a = 0; a < 2; a++)
        for (int b = 0; b < 2; b++)
            for (int m = 0; m < 2; m++) {
                int x = w(mask, f(a, b, m)), c = x >> 1, d = x & 1;
                if (opt == 1) c = c & d;
                else if (opt == 2) c = c | d;
                else if (opt == 3) d = c ^ d;
                else d = m ^ d;
                ret += pw5[f(a, b, m)] * g(c, d);
            }
    return ret;
}

inline void init() {
    pw5[0] = 1;
    for (int i = 1; i <= 8; i++) pw5[i] = pw5[i - 1] * 5;
    memset(dp, 0x3f, sizeof(dp));
    int mask = 0;
    for (int a = 0; a < 2; a++)
        for (int b = 0; b < 2; b++)
            for (int m = 0; m < 2; m++)
                mask += pw5[f(a, b, m)] * g(a, b);
    dp[mask] = 0, Q.push(mask);
    while (Q.size()) {
        int s = Q.front();
        Q.pop();
        for (int opt = 0; opt < 4; opt++) {
            int t = work(s, opt);
            if (dp[t] == Inf) dp[t] = dp[s] + 1, Q.push(t);
        }
    }
    for (int mask = 0; mask < pw5[8]; mask++)
        for (int i = 0; i < 8; i++)
            if (w(mask, i) == 4) {
                for (int x = 1; x <= 4; x++)
                    checkmin(dp[mask], dp[mask - x * pw5[i]]);
                break;
            }
}

inline void solve() {
    int A = 0, B = 0, C = 0, D = 0, M = 0, mask = pw5[8] - 1;
    scanf("%d %d %d %d %d", &A, &B, &C, &D, &M);
    for (int i = 0; i < 30; i++) {
        int a = (A >> i) & 1, b = (B >> i) & 1, c = (C >> i) & 1,
            d = (D >> i) & 1, m = (M >> i) & 1;
        if (w(mask, f(a, b, m)) == 4)
            mask -= (4 - g(c, d)) * pw5[f(a, b, m)];
        else if (w(mask, f(a, b, m)) != g(c, d)) {
            printf("-1\n");
            return;
        }
    }
    printf("%d\n", (dp[mask] < Inf) ? dp[mask] : -1);
}

int T = 0;
int main() {
    init();
    scanf("%d", &T);
    for (int i = 0; i < T; i++) solve();
    return 0;
}
1874
C
Jellyfish and EVA
Monsters have invaded the town again! Asuka invites her good friend, Jellyfish, to drive EVA with her. There are $n$ cities in the town. All the monsters are in city $n$. Jellyfish and Asuka are currently in city $1$ and need to move to city $n$ to defeat the monsters. There are $m$ roads. The $i$-th road allows one to travel from city $a_i$ to city $b_i$. All the roads are \textbf{directed}. That is, one cannot travel from city $b_i$ to $a_i$ using the $i$-th road. Interestingly, all roads satisfy $a_i<b_i$. Driving EVA requires two people to work together. However, Asuka and Jellyfish have not done any training together before. Suppose that EVA is currently in city $u$. Jellyfish and Asuka will both choose an undestroyed road that starts at city $u$. Suppose Jellyfish and Asuka choose roads that end at cities $v_1$ and $v_2$ respectively. If $v_1 = v_2$, EVA moves to city $v_1$ successfully. Otherwise, EVA stays in city $u$ and both roads that they have chosen will be destroyed. It is possible that EVA is currently in city $u$ ($u \neq n$) and there are no undestroyed roads that start at city $u$. In that case, the mission will be a failure. Otherwise, if they reach city $n$ in the end, the mission is considered a success. Every time they choose the roads, Jellyfish knows that Asuka will choose a road randomly. Now, Jellyfish wants to know, if she chooses the roads optimally, what is the maximum probability of the mission being successful.
Let's solve this problem by dynamic programming. Let $f_u$ represent the maximum probability of reaching city $n$ starting from city $u$. The problem is how to transition. For city $u$, assume there are $d$ roads from city $u$, the $i$-th of which goes to city $v_i$. Let's sort the $v$ by $f_v$, with larger $f_v$ coming first, and define an array $a$ of $d$ elements equal to $[f_{v_1}, f_{v_2}, \dots, f_{v_d}]$; by definition, $a$ is non-increasing. Next, we want to find an optimal strategy such that the probability of going to city $v_i$ is $p_i$, maximizing the value of $\sum_{i=1}^d{a_i \times p_i}$. Tips: the sum of $p_i$ may be less than $1$, because it is possible to stay at city $u$ when $d$ is even. For two choices $p_1$ and $p_2$, sometimes we can't say which is better. For example, take $p_1 = [0.6, 0.2, 0.2]$ and $p_2 = [0.5, 0.5, 0]$: when $a = [1.0, 0.2, 0.2]$, $p_1$ is better than $p_2$, but when $a = [1.0, 0.8, 0]$, $p_2$ is better than $p_1$. Tips: in fact, when $d = 3$, $p$ has only one choice, $[\frac 1 3, \frac 1 3, \frac 1 3]$; the above is just a hypothetical example to illustrate the problem. So when can we say that $p_1$ is always not worse than $p_2$? Lemma. If two arrays $p_1$ and $p_2$ satisfy $\forall i \leq d, \sum_{j=1}^i{p_1}_j \geq \sum_{j=1}^i{p_2}_j$, then $p_1$ is always not worse than $p_2$. Proof. Consider $a_{d+1} = 0$ and define $a'_i = a_i - a_{i+1}$, ${p_1'}_i = \sum_{j=1}^i{p_1}_j$ and ${p_2'}_i = \sum_{j=1}^i{p_2}_j$. Then $\sum_{i=1}^d a_i \times {p_1}_i = \sum_{i=1}^d a'_i \times {p_1'}_i$ and $\sum_{i=1}^d a_i \times {p_2}_i = \sum_{i=1}^d a'_i \times {p_2'}_i$. Because $a$ is non-increasing, $a'_i \geq 0$, and we have $\forall i \leq d, {p_1'}_i \geq {p_2'}_i$, so $\sum_{i=1}^d a_i \times {p_1}_i \geq \sum_{i=1}^d a_i \times {p_2}_i$. Now the question is whether there is an array $p$ that is not worse than every other array, for all $d$.
For $d = 1$, $p = [1.0]$ satisfies the condition. For $d = 2$, $p = [0.5, 0]$ satisfies the condition. What about $d > 2$? After they choose the roads for the first time, the size of the problem becomes $d - 2$. For example, when $d = 4$, assume Jellyfish chooses $v_1$ for the first time; then there are $4$ situations: when Asuka chooses $v_1$, they go to $v_1$; when Asuka chooses $v_2$, the problem changes into the subproblem on $[v_3, v_4]$; when Asuka chooses $v_3$, the subproblem on $[v_2, v_4]$; when Asuka chooses $v_4$, the subproblem on $[v_2, v_3]$. By calculating, $p = [0.25, 0.25, 0.125, 0]$, which also satisfies the condition. Let's define $g_k$ as the best array $p$ when $d = k$, and $g'_k$ as the probabilities of going to the cities except the city Jellyfish chooses for the first time. By recursion, we can get $g'_k$ from $g_{k-2}$. After that, we insert $\frac 1 k$ into $g'_k$, keeping it non-increasing, and obtain $g_k$. By calculating, we find that $\frac 1 k$ is always the first element of $g_k$. So when $k > 2$, we have the following transition: $g_{k, 1} = \frac 1 k$ and $\forall 1 < i \leq k, g_{k, i} = g_{k-2, i - 2} \times \frac {i - 2} k + g_{k-2, i - 1} \times \frac {k - i} k$, where we consider $g_{k, x}$ to be $0$ for $x = 0$ or $x > k$. Because $g_{k-2}$ satisfies the condition, by greedy and induction we can show that $g_k$ satisfies the condition. Time complexity: $O(\max(n)^2)$ for preprocessing and $O(m \log n)$ per test case. Memory complexity: $O(\max(n)^2)$ for preprocessing and $O(m)$ per test case.
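The transition for $g$ can be written down directly (a Python sketch with my own function name; rows are $1$-indexed to match the editorial):

```python
def build_g(max_d):
    # g[k][i]: probability of ending up at the i-th best destination
    # when there are k outgoing roads (1-indexed, as in the editorial).
    g = [[0.0] * (max_d + 2) for _ in range(max_d + 1)]
    if max_d >= 1:
        g[1][1] = 1.0
    if max_d >= 2:
        g[2][1] = 0.5  # with d = 2 they may also stay at u forever
    for k in range(3, max_d + 1):
        g[k][1] = 1.0 / k  # 1/k is always the first element
        for i in range(2, k + 1):
            # g[k][i] = g[k-2][i-2]*(i-2)/k + g[k-2][i-1]*(k-i)/k
            g[k][i] = (g[k - 2][i - 2] * (i - 2) / k
                       + g[k - 2][i - 1] * (k - i) / k)
    return g
```

For $k = 4$ this reproduces the editorial's $[0.25, 0.25, 0.125, 0]$.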
[ "dp", "graphs", "greedy", "math", "probabilities" ]
2,300
#include<bits/stdc++.h> using namespace std; const int N = 5000 + 5; int n = 5000, m = 0; vector<vector<int> > G(N), Gx(N); long double f[N] = {}, g[N][N] = {}; inline bool cmp(int u, int v){ return f[u] > f[v]; } inline void work(int u){ if(u == n){ f[u] = 1.00; return; } sort(G[u].begin(), G[u].end(), cmp); int k = G[u].size(); for(int i = 0 ; i < k ; i ++){ int v = G[u][i]; f[u] += f[v] * g[k][i + 1]; } } inline void init(){ for(int u = 1 ; u <= n ; u ++){ f[u] = 0.00; G[u].clear(), Gx[u].clear(); } n = m = 0; } inline void solve(){ scanf("%d %d", &n, &m); for(int i = 1, u = 0, v = 0 ; i <= m ; i ++){ scanf("%d %d", &u, &v); if(u != n){ G[u].push_back(v); Gx[v].push_back(u); } } for(int u = n ; u >= 1 ; u --) work(u); printf("%.12Lf\n", f[1]); } int T = 0; int main(){ for(int i = 1 ; i <= n ; i += 2) for(int j = 1 ; j <= i ; j ++) g[i][j] = 1.00 / i; for(int i = 2 ; i <= n ; i += 2){ g[i][1] = 1.00; for(int j = 1 ; j <= i ; j ++) g[i][j] /= i; if(i + 2 <= n) for(int j = 1 ; j <= i ; j ++) g[i + 2][j + 1] += g[i][j] * (i - j + 1), g[i + 2][j + 2] += g[i][j] * j; } scanf("%d", &T); for(int i = 1 ; i <= T ; i ++) init(), solve(); return 0; }
1874
D
Jellyfish and Miku
There are $n + 1$ cities with numbers from $0$ to $n$, connected by $n$ roads. The $i$-th $(1 \leq i \leq n)$ road connects city $i-1$ and city $i$ bi-directionally. After Jellyfish flew back to city $0$, she found out that she had left her Miku fufu in city $n$. Each road has a \textbf{positive integer} level of beauty. Denote the beauty of the $i$-th road as $a_i$. Jellyfish is trying to find her fufu. Because of her poor sense of direction, she doesn't know which way to go. Every day, she randomly chooses a road connected to the city she currently is in and traverses it. Let $s$ be the sum of the beauty of the roads connected to the current city. For each road connected to the current city, Jellyfish will traverse the road with a probability of $\frac x s$, where $x$ is the beauty of the road, reaching the city on the other side of the road. Jellyfish will start at city $0$, and she will get only her fufu back when she reaches city $n$. You want to choose the beauty of the roads such that the expected number of days Jellyfish takes to find her fufu will be the minimum possible. However, due to limited funding, the sum of beauties of all roads must be less than or equal to $m$. Find the minimum expected number of days Jellyfish needs to get her fufu back if the beauty of the roads is chosen optimally.
Let's assume that $a$ is given. We can use dynamic programming to solve the problem. Define $f_i$ as the expected number of days Jellyfish needs to reach city $i$ from city $0$. We have: $f_0=0, f_1=1$ and $\forall i > 1, f_i=f_{i-1}+1+\frac {a_{i-1}} {a_{i-1}+a_i} \times (f_i - f_{i-2})$. Let's make it better: $f_i = f_{i-1} + 1 + \frac {a_{i-1}} {a_i} \times (f_{i-1}-f_{i-2}+1)$. What does this inspire us to do? Define $g_i=f_i-f_{i-1}$; then we get: $g_1=1$ and $\forall i > 1, g_i = 1 + \frac {a_{i-1}} {a_i} \times (g_{i-1}+1)$. By induction, we find $g_i = 1 + 2 \times \sum_{j=1}^{i-1} \frac {a_j} {a_i}$. By definition, $f_n = \sum_{i=1}^n g_i = n + 2 \times \sum_{i=1}^n\sum_{j=1}^{i-1} \frac{a_j}{a_i}$. Then we can use dynamic programming to solve the problem itself. Define $s_i = \sum_{j=1}^i a_j$; then $f_n = n + 2 \times \sum_{i=1}^n \frac{s_{i-1}}{a_i}$. Define $dp_{i, x}$ as the minimum value of $\sum_{j=1}^i \frac{s_{j-1}}{a_j}$ when $s_i = x$. We transition by enumerating the value of $y = a_{i+1}$; the transition is $dp_{i+1, x+y} \leftarrow dp_{i, x} + \frac x y$. But the time complexity is $O(n m^2)$; how can it become faster? Let's take a closer look at $\sum_{i=1}^n\sum_{j=1}^{i-1} \frac{a_j}{a_i}$. If there exists $i < n$ satisfying $a_i > a_{i+1}$, we can swap $a_i$ and $a_{i+1}$ and the answer becomes better! So $a$ is a non-decreasing array, which means $a_i \leq \frac m {n - i + 1}$. Because $\sum_{i=1}^n \frac n i$ is $O(n \log n)$, if we only enumerate the possible values of $a_i$, the time complexity will be $O(m^2 \log m)$. Time complexity: $O(m^2 \log m)$ Memory complexity: $O(nm)$
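The DP above is short enough to transcribe directly. Here is a Python sketch (function name ours, mirroring the C++ solution below) that works for small $n$ and $m$; e.g. with $n=2$ roads and budget $m=3$, the optimal beauties are $a=[1,2]$ and the expected number of days is $2 + 2 \cdot \frac 1 2 = 3$.

```python
def min_expected_days(n, m):
    # dp[i][x]: minimum of sum_{j<=i} s_{j-1}/a_j with s_i = x, enumerating
    # a_{i+1} = y under the non-decreasing bound x + y*(n-i+1) <= m.
    INF = float("inf")
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for x in range(m + 1):
            if dp[i - 1][x] == INF:
                continue
            y = 1
            while x + y * (n - i + 1) <= m:
                dp[i][x + y] = min(dp[i][x + y], dp[i - 1][x] + x / y)
                y += 1
    return 2 * dp[n][m] + n   # f_n = n + 2 * sum s_{i-1}/a_i

print(min_expected_days(2, 3))   # 3.0  (a = [1, 2] is optimal)
```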
[ "divide and conquer", "dp", "math", "probabilities" ]
2,800
#include<bits/stdc++.h> using namespace std; const int N = 3000 + 5; const long double Inf = 1e18; int n = 0, m = 0; long double dp[N][N] = {}; int main(){ scanf("%d %d", &n, &m); for(int i = 0 ; i <= n ; i ++) for(int x = 0 ; x <= m ; x ++) dp[i][x] = Inf; dp[0][0] = 0.00; for(int i = 1 ; i <= n ; i ++) for(int x = 0 ; x <= m ; x ++) for(int y = 1 ; x + y * (n - i + 1) <= m ; y ++) dp[i][x + y] = min(dp[i][x + y], dp[i - 1][x] + 1.00 * x / y); printf("%.12Lf", 2 * dp[n][m] + n); return 0; }
1874
E
Jellyfish and Hack
It is well known that quick sort works by randomly selecting a 'pivot' element from the array and partitioning the other elements into two sub-arrays, according to whether they are less than or greater than the pivot. But Jellyfish thinks that choosing a random element is just a waste of time, so she always chooses the first element to be the pivot. The time her code needs to run can be calculated by the following pseudocode: \begin{verbatim} function fun(A) if A.length > 0 let L[1 ... L.length] and R[1 ... R.length] be new arrays L.length = R.length = 0 for i = 2 to A.length if A[i] < A[1] L.length = L.length + 1 L[L.length] = A[i] else R.length = R.length + 1 R[R.length] = A[i] return A.length + fun(L) + fun(R) else return 0 \end{verbatim} Now you want to show her that her code is slow. When the value of $\mathrm{fun(A)}$ is greater than or equal to $lim$, her code will get $\text{Time Limit Exceeded}$. You want to know how many distinct permutations $P$ of $[1, 2, \dots, n]$ satisfy $\mathrm{fun(P)} \geq lim$. Because the answer may be large, you only need to find the answer modulo $10^9+7$.
Firstly, if $lim > \frac {n(n+1)} 2$, the answer is $0$, so we only need to solve the problem when $lim \leq \frac{n(n+1)} 2$. We can use dynamic programming: define $dp_{i, a}$ as the number of permutations $P$ of $[1, 2, \dots, i]$ satisfying $\mathrm{fun(P)} = a$. Since only relative rankings matter, we have the following transition: $dp_{0, 0} = 1$ and $\forall i > 0, dp_{i, a} = \sum_{j=1}^i \binom {i-1} {j-1} \sum_{b=0}^{a - i} dp_{j - 1, b} \times dp_{i - j, a - b - i}$. The time complexity is $O(n^6)$, because $a \leq \frac {i(i+1)} 2$; how can it become faster? FFT (Fast Fourier Transform) might come to mind first. If we use FFT instead of enumerating $b$, the time complexity becomes $O(n^4 \log n)$. This is not enough, because $n$ is large and $10^9+7$ is not a modulus suitable for NTT (Number-theoretic Transform). Tip: if you use divide and conquer with FFT, the time complexity will be $O(n^3 \text{poly}(\log n))$, but because FFT has a big constant factor, this still can't pass the problem. But we can do something similar to what FFT does. Define $F_i = \sum_{a=0}^{\frac {n(n + 1)} 2} dp_{i, a} \times x^a$; then we have the following transition: $F_0=1$ and $F_i = x^i \times \sum_{j=1}^i \binom{i - 1}{j - 1} F_{j-1} \times F_{i-j}$. For all $1 \leq i \leq n$, the degree of $F_i$ won't exceed $\frac {n(n+1)} 2$. So if we have the values of $F_n$ at $\frac{n(n+1)} 2 + 1$ points, we can recover $F_n$ in time $O(n^4)$ using Lagrange interpolation. At each point, the transition only needs pointwise products of the polynomial values, so the time complexity of the transition is also $O(n^4)$. Time complexity: $O(n^4)$ Memory complexity: $O(n^3)$
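For tiny $n$, the $O(n^6)$ recurrence can be cross-checked against brute force over all permutations. The following Python sketch (names ours; the modulus is omitted since the numbers stay small) does exactly that:

```python
from itertools import permutations
from math import comb

def fun(A):
    """The pseudocode from the statement: first-element-pivot quicksort cost."""
    if not A:
        return 0
    L = [v for v in A[1:] if v < A[0]]
    R = [v for v in A[1:] if v >= A[0]]
    return len(A) + fun(L) + fun(R)

def count_tle(n, lim):
    """Number of permutations of [1..n] with fun(P) >= lim, via the DP."""
    top = n * (n + 1) // 2
    dp = [[0] * (top + 1) for _ in range(n + 1)]
    dp[0][0] = 1
    for i in range(1, n + 1):
        for a in range(i, top + 1):
            dp[i][a] = sum(comb(i - 1, j - 1) * dp[j - 1][b] * dp[i - j][a - b - i]
                           for j in range(1, i + 1)
                           for b in range(a - i + 1))
    return sum(dp[n][a] for a in range(max(lim, 0), top + 1))

# cross-check against brute force for n = 4
for lim in range(12):
    brute = sum(fun(list(p)) >= lim for p in permutations(range(1, 5)))
    assert count_tle(4, lim) == brute
```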
[ "dp", "math" ]
3,000
#include<bits/stdc++.h> using namespace std; typedef long long ll; const ll N = 200 + 5, Mod = 1e9 + 7; inline ll power(ll x, ll y){ ll ret = 1; while(y){ if(y & 1) ret = ret * x % Mod; x = x * x % Mod, y >>= 1; } return ret; } ll n = 0, k = 0, lim = 0; ll C[N][N] = {}, pw[N * N][N] = {}, iv[N * N] = {}, ifac[N * N] = {}, dp[N][N * N] = {}; ll a[N * N] = {}, b[N * N] = {}, ans = 0; int main(){ scanf("%lld %lld", &n, &k); lim = n * (n + 1) / 2; for(ll i = 0 ; i <= n ; i ++){ C[i][0] = 1; for(ll j = 1 ; j <= i ; j ++) C[i][j] = (C[i - 1][j - 1] + C[i - 1][j]) % Mod; } ifac[0] = 1; for(ll x = 0 ; x <= lim ; x ++){ pw[x][0] = 1; if(x) iv[x] = power(x, Mod - 2), ifac[x] = ifac[x - 1] * iv[x] % Mod; for(ll i = 1 ; i <= n ; i ++) pw[x][i] = pw[x][i - 1] * x % Mod; } for(ll x = 0 ; x <= lim ; x ++) dp[0][x] = 1; for(ll i = 1 ; i <= n ; i ++){ for(ll j = 0 ; j < i ; j ++) for(ll x = 0 ; x <= lim ; x ++) dp[i][x] = (dp[i][x] + dp[j][x] * dp[i - 1 - j][x] % Mod * C[i - 1][j]) % Mod; for(ll x = 0 ; x <= lim ; x ++) dp[i][x] = dp[i][x] * pw[x][i] % Mod; } a[0] = 1; for(ll i = 0 ; i <= lim ; i ++){ for(ll x = lim ; x >= 1 ; x --) a[x] = (a[x - 1] + a[x] * (Mod - i)) % Mod; a[0] = a[0] * (Mod - i) % Mod; } for(ll i = 1 ; i <= lim ; i ++) if(dp[n][i]){ ll w = dp[n][i] * ifac[i] % Mod * ifac[lim - i] % Mod; if((lim - i) & 1) w = Mod - w; b[0] = a[0] * (Mod - iv[i]) % Mod; for(ll x = 1 ; x <= lim ; x ++) b[x] = (a[x] - b[x - 1] + Mod) * (Mod - iv[i]) % Mod; for(ll x = k ; x <= lim ; x ++) ans = (ans + b[x] * w) % Mod; } printf("%lld\n", ans); return 0; }
1874
F
Jellyfish and OEIS
Jellyfish always uses OEIS to solve math problems, but now she finds a problem that cannot be solved by OEIS: Count the number of permutations $p$ of $[1, 2, \dots, n]$ such that for all $(l, r)$ such that $l \leq r \leq m_l$, the subarray $[p_l, p_{l+1}, \dots, p_r]$ is not a permutation of $[l, l+1, \dots, r]$. Since the answer may be large, you only need to find the answer modulo $10^9+7$.
Let's call a section $[l, r]$ bad if $[p_l, p_{l+1}, \dots, p_{r-1}, p_r]$ is a permutation of $[l, l + 1, \dots, r - 1, r]$ and $l \leq r \leq m_l$. Let's call a section $[l, r]$ primitive if it is a bad section and there is no other bad section $[l', r']$ such that $[l, r]$ covers $[l', r']$. Lemma. No two primitive sections intersect. Proof. If $[l_1, r_1]$ and $[l_2, r_2]$ are bad sections with $l_1 < l_2 < r_1 < r_2$, then $[l_2, r_1]$ is also a bad section, but both $[l_1, r_1]$ and $[l_2, r_2]$ cover $[l_2, r_1]$, so $[l_1, r_1]$ and $[l_2, r_2]$ can't both be primitive. Let's use the principle of inclusion-exclusion and dynamic programming. Define $f(l, r)$ as the number of ways to fill $p_l, p_{l+1}, \dots, p_{r-1}, p_r$ so that $[l, r]$ is a primitive section. By inclusion-exclusion, we count the ways to fix $k$ primitive sections in the range $[l, r]$ and arrange the rest arbitrarily. Define $g_{1/2}(l, r, x)$ as the number of ways to fix $k$ ($k$ odd/even) primitive sections in the range $[l, r]$ such that the number of positions not covered by any fixed primitive section is $x$, and finally define $g(l, r, x) = g_2(l, r, x) - g_1(l, r, x)$. We have the following transitions: $\forall l \leq r \leq m_l, f(l, r) = \sum_{x=0}^{r-l+1} g(l, r, x) \times x! + f(l, r)$ and $g(l, r, x) = g(l, r - 1, x - 1) - \sum_{mid=l}^r g(l, mid - 1, x )\times f(mid, r)$. Tip: in the transition of $f(l, r)$, we add $f(l, r)$ because $f(l, r)$ contributes to $g(l, r, 0)$ but should not contribute to $f(l, r)$ itself; this is handled by ordering the transitions. By definition, $\sum_{x=0}^{n} g(1, n, x) \times x!$ is the answer. 
Time complexity: $O(n^4)$ Memory complexity: $O(n^3)$
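Here is a Python port of these transitions (a sketch mirroring the C++ solution below), checked on small cases that can be worked out by hand. For example, for $n=3$, $m=[1,2,3]$ the bad sections force $p_1 \neq 1$, $p_2 \neq 2$, $p_3 \neq 3$, so the answer is the number of derangements of $3$ elements, which is $2$.

```python
MOD = 10**9 + 7

def count_good(n, m):
    """m[l-1] is m_l; returns the number of valid permutations mod 1e9+7."""
    if m[0] == n:                 # [1, n] is bad for every permutation
        return 0
    fac = [1] * (n + 1)
    for i in range(1, n + 1):
        fac[i] = fac[i - 1] * i % MOD
    f = [[0] * (n + 1) for _ in range(n + 2)]
    g = [[[0] * (n + 2) for _ in range(n + 1)] for _ in range(n + 2)]
    for i in range(1, n + 1):
        g[i][i - 1][0] = 1        # empty interval
    for l in range(n, 0, -1):
        for r in range(l, n + 1):
            for x in range(1, r - l + 2):
                g[l][r][x] = g[l][r - 1][x - 1]       # position r left uncovered
            for mid in range(l, r):                   # fix primitive [mid+1, r]
                if r <= m[mid]:
                    for x in range(mid - l + 2):
                        g[l][r][x] = (g[l][r][x] - g[l][mid][x] * f[mid + 1][r]) % MOD
            for x in range(r - l + 2):
                f[l][r] = (f[l][r] + g[l][r][x] * fac[x]) % MOD
            if r <= m[l - 1]:     # [l, r] itself can be a primitive section
                g[l][r][0] = (g[l][r][0] - f[l][r]) % MOD
    return f[1][n] % MOD

print(count_good(3, [1, 2, 3]))   # 2
```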
[ "dp" ]
3,500
#include<bits/stdc++.h> using namespace std; typedef long long ll; const ll N = 200 + 5, Mod = 1e9 + 7; ll n = 0, m[N] = {}, fac[N] = {}, f[N][N] = {}, g[N][N][N] = {}; int main(){ scanf("%lld", &n); for(ll i = 1 ; i <= n ; i ++) scanf("%lld", &m[i]); if(m[1] == n){ printf("0"); return 0; } fac[0] = 1; for(ll i = 1 ; i <= n ; i ++) fac[i] = fac[i - 1] * i % Mod; for(ll i = 1 ; i <= n ; i ++) g[i][i - 1][0] = 1; for(ll l = n ; l >= 1 ; l --) for(ll r = l ; r <= n ; r ++){ for(ll x = 1 ; x <= r - l + 1 ; x ++) g[l][r][x] = g[l][r - 1][x - 1]; for(ll mid = l ; mid < r ; mid ++) if(r <= m[mid + 1]) for(ll x = 0 ; x <= mid - l + 1 ; x ++) g[l][r][x] = (g[l][r][x] + g[l][mid][x] * (Mod - f[mid + 1][r])) % Mod; for(ll x = 0 ; x <= r - l + 1 ; x ++) f[l][r] = (f[l][r] + g[l][r][x] * fac[x]) % Mod; if(r <= m[l]) g[l][r][0] = (g[l][r][0] + (Mod - f[l][r])) % Mod; } printf("%lld", f[1][n]); return 0; }
1874
G
Jellyfish and Inscryption
Jellyfish loves playing a game called "Inscryption" which is played on a directed acyclic graph with $n$ vertices and $m$ edges. All edges $a \to b$ satisfy $a < b$. You need to move from vertex $1$ to vertex $n$ along the directed edges, and then fight with the final boss. You will collect \textbf{cards} and \textbf{props} in the process. Each \textbf{card} has two attributes: HP and damage. If a \textbf{card's} HP is $a$ and its damage is $b$, then the power of the \textbf{card} is $a \times b$. Each \textbf{prop} has only one attribute: power. In addition to vertex $1$ and vertex $n$, there are some vertices that trigger special events. The special events are: - You will get a \textbf{card} with $a$ HP, and $b$ damage. - If you have at least one \textbf{card}, choose one of your \textbf{cards} and increase its HP by $x$. - If you have at least one \textbf{card}, choose one of your \textbf{cards} and increase its damage by $y$. - You will get a \textbf{prop} with $w$ power. When you get to vertex $n$, you can choose \textbf{at most one} of your \textbf{cards} and multiply its damage by $10^9$. The final boss is very strong, so you want to maximize the sum of the power of all your \textbf{cards} and \textbf{props}. Find the maximum possible sum of power of all your \textbf{cards} and \textbf{props} if you play the game optimally.
For convenience, let's define $k = \max(a, b, x, y)$, and use "operation 1" for "You will get a card with $a$ HP and $b$ damage", "operation 2" for "If you have at least one card, choose one of your cards and increase its HP by $x$", "operation 3" for "If you have at least one card, choose one of your cards and increase its damage by $y$", "operation 4" for "You will get a prop with $w$ power", "operation 5" for "You can choose at most one of your cards and multiply its damage by $10^9$", "the $i$-th card" for the card we get from vertex $i$ (there must be an operation 1 on vertex $i$), and "do operation 2/3/5 onto card $i$" for choosing card $i$ when performing that operation and increasing its HP or damage. Let's consider the problem without operation 5: what is the maximum possible answer? If you want to maximize the sum of the power of your cards, the answer will not exceed $(100 \times 200)^2 = 4 \times 10^8$; if you want to maximize the sum of the power of your props, the answer will not exceed $200 \times 10^6 = 2 \times 10^8$. Because $4 \times 10^8 + 2 \times 10^8 = 6 \times 10^8 < 10^9$, operation 5 is the most important. Let's call the card we do operation 5 onto the "flash card". Let's use meet-in-the-middle: the problem is divided into two subproblems, the game before we get the flash card and the game after we get the flash card. The latter is an easy problem that we can solve with dynamic programming. Since the most important thing is the HP and damage of the flash card, we define the following dynamic programming states: $dp_b(u, a)$ means: we are currently at vertex $u$ and the current HP of the flash card is $a$; its value is the maximum damage of the flash card. $dp_c(u, a)$ means: we are currently at vertex $u$, the current HP of the flash card is $a$, and the damage of the flash card is $dp_b(u, a)$; its value is the maximum sum of the power of all the other cards and props. 
Since $a \leq nk$, the time complexity of the transition is $O(m \times n \times k)$. The other half is the key of the problem, and since it's much more difficult, we first consider subproblems of it. Lemma. We will do all the operation 2's onto one of the cards; symmetrically, we will also do all the operation 3's onto one of the cards. Proof. Consider a sequence of operations: if we instead do all the operation 2's onto the card with the maximum damage after all the operations, the answer won't be worse; then we make the symmetric adjustment for operation 3, and the answer still won't be worse. So all operation 2's are done onto one card and all operation 3's are done onto one card. If we say subproblem I is like a "global max value", then subproblem II is like a "prefix max value". Define $p_i$ to mean that we do the operation 2/3 on vertex $i$ onto the $p_i$-th card. Lemma 1. If there is an operation 2 on vertex $i$ and an operation 2 on vertex $j$ with $p_i < j < i$, then $p_j = p_i$. Proof. Consider the final HP and damage of the cards after all the operations. Because we do the operation 2 on vertex $i$ onto the $p_i$-th card, for all $i' < i$, the damage of the $i'$-th card is not larger than that of the $p_i$-th card. So for $j$, if we don't do the operation 2 on vertex $j$ onto the $p_i$-th card, we can do it onto the $p_i$-th card instead, and the answer won't be worse. Symmetrically, Lemma 1 also holds for operation 3. Now we can use dynamic programming, with the following state: $f(u, a, b)$ means: we are currently at vertex $u$, we will do the next several operation 3's onto a card that has $a$ HP after all the operations, and we will do the next several operation 2's onto a card that has $b$ damage after all the operations, and these two cards are not the same. 
$g(u, a, b)$ means: we are currently at vertex $u$, and we will do the next several operation 2's and operation 3's onto a card currently having $a$ HP and $b$ damage. The time complexity is $O(m \times n^2 \times k^2)$, but it's not enough. Lemma 2. If a card has $a$ HP and $b$ damage after all the operations and it's not the flash card, then $\min(a, b) \leq k$. Proof. If we use this card as the flash card instead of the actual one, and do all the operations done onto the actual flash card onto this card instead, the power of the flash card will be larger; so such a card won't exist in this half of the problem. Now the time complexity becomes $O(m \times n \times k^2)$; it's still not enough. Lemma 3. Define $A$ as the maximum HP of all the cards except the flash card after all the operations, and $B$ as the maximum damage of all the cards except the flash card after all the operations; then $\min(A, B) \leq k$. Proof. Assume the $i$-th card has $A$ HP after all the operations, the $j$-th card has $B$ damage after all the operations, and $A > k, B > k$. If $i = j$, it conflicts with Lemma 2. If $i < j$: by Lemma 2, the HP of the $j$-th card after all the operations won't exceed $k$; let $A'$ be that HP. Since $B > k$, we have done some operation 3's onto the $j$-th card. But because $A > k \geq A'$, if we do those operations onto the $i$-th card instead, the answer will be better. If $i > j$, it's symmetric to $i < j$. So we can make the dynamic programming state better: $f(u, a, b)$ means: we are currently at vertex $u$, we will do the next several operation 3's onto a card that has $a$ HP after all the operations, and we will do the next several operation 2's onto a card that has $b$ damage after all the operations, and these two cards are not the same. 
$g_a(u, a, a')$ means: we are currently at vertex $u$, we will do the next several operation 3's onto a card that has $a$ HP after all the operations, and we will do the next several operation 2's onto a card that still needs $a'$ more HP from the remaining operation 2's to reach its HP after all the operations. $g_b(u, b, b')$ is symmetric to $g_a$; just swap "HP" and "damage". Define $A$ as the maximum HP of all the cards except the flash card after all the operations, and $B$ as the maximum damage of all the cards except the flash card after all the operations. The ranges of $a, a', b, b'$ in $f, g_a, g_b$ are $O(k)$: if $A \leq k$, we can get the right answer using $g_a$, and if $B \leq k$, we can get the right answer using $g_b$. The time complexity of the transition is $O(m \times k^2)$. The transition is very complex; you can see more details in my code : ) In my code, I use $g(u)$ to make the transition cleaner: when $a'$ or $b'$ reaches $0$ and we reach a vertex $u$ with an operation 1 on it, I don't enumerate the values of the next $a$ and $b$ while transiting; I transit to $g(u)$ first and enumerate the values of the next $a$ and $b$ together. The analysis above treats the chosen path as a chain; since we solve the problem with dynamic programming over the DAG, it makes no difference that the graph is not a chain. 
Time complexity: $O(m(nk+k^2))$ Memory complexity: $O(n^2k+nk^2)$
[ "dp" ]
3,500
#include<bits/stdc++.h> using namespace std; typedef long long ll; const int N = 200 + 5; const ll lim = 1e9; inline void checkmax(int &x, int y){ if(y > x) x = y; } inline void checkmax(ll &x, ll y){ if(y > x) x = y; } inline void checkmax(pair<int, int> &x, pair<int, int> y){ if(y > x) x = y; } int n = 0, m = 0, k = 0, opt[N] = {}, a[N] = {}, b[N] = {}, w[N] = {}; int f[N][N][N] = {}, g[N] = {}, g_a[N][N][N] = {}, g_b[N][N][N] = {}; pair<int, int> dp[N][N * N] = {}; vector<vector<int> > G(N); int main(){ scanf("%d %d", &n, &m); for(int u = 1 ; u <= n ; u ++){ scanf("%d", &opt[u]); if(opt[u] == 1) scanf("%d %d", &a[u], &b[u]); else if(opt[u] == 2) scanf("%d", &a[u]); else if(opt[u] == 3) scanf("%d", &b[u]); else if(opt[u] == 4) scanf("%d", &w[u]); k = max(k, max(a[u], b[u])); } for(int i = 1, u = 0, v = 0 ; i <= m ; i ++){ scanf("%d %d", &u, &v); G[u].push_back(v); } memset(f, -1, sizeof(f)); memset(g, -1, sizeof(g)), memset(g_a, -1, sizeof(g_a)), memset(g_b, -1, sizeof(g_b)); memset(dp, -1, sizeof(dp)); f[1][0][0] = 0; for(int u = 1 ; u <= n ; u ++){ if(opt[u] == 1 && g[u] != -1){ for(int x = a[u] ; x <= k ; x ++) checkmax(g_a[u][x][x - a[u]], g[u] + x * b[u]); for(int x = b[u] ; x <= k ; x ++) checkmax(g_b[u][x][x - b[u]], g[u] + x * a[u]); checkmax(dp[u][a[u]], make_pair(b[u], g[u])); } for(int v : G[u]){ if(opt[v] == 1){ for(int x = 0 ; x <= k ; x ++) for(int y = 0 ; y <= k ; y ++){ if(f[u][x][y] != -1){ checkmax(f[v][max(x, a[v])][max(y, b[v])], f[u][x][y] + a[v] * b[v]); checkmax(g[v], f[u][x][y]); } if(g_a[u][x][y] != -1){ checkmax(g_a[v][max(x, a[v])][y], g_a[u][x][y] + a[v] * b[v]); if(!y){ checkmax(g[v], g_a[u][x][y]); checkmax(f[v][x][b[v]], g_a[u][x][y] + a[v] * b[v]); } } if(g_b[u][x][y] != -1){ checkmax(g_b[v][max(x, b[v])][y], g_b[u][x][y] + a[v] * b[v]); if(!y){ checkmax(g[v], g_b[u][x][y]); checkmax(f[v][a[v]][x], g_b[u][x][y] + a[v] * b[v]); } } } for(int x = 0 ; x <= n * k ; x ++) if(dp[u][x] != make_pair(-1, -1)){ int y = dp[u][x].first, z = 
dp[u][x].second; checkmax(dp[v][x], make_pair(y, z + a[v] * b[v])); } } else if(opt[v] == 2){ for(int x = 0 ; x <= k ; x ++) for(int y = 0 ; y <= k ; y ++){ if(f[u][x][y] != -1) checkmax(f[v][x][y], f[u][x][y] + a[v] * y); if(g_a[u][x][y] != -1 && y >= a[v]) checkmax(g_a[v][x][y - a[v]], g_a[u][x][y]); if(g_b[u][x][y] != -1) checkmax(g_b[v][x][y], g_b[u][x][y] + a[v] * x); } for(int x = 0 ; x <= n * k ; x ++) if(dp[u][x] != make_pair(-1, -1)){ int y = dp[u][x].first, z = dp[u][x].second; checkmax(dp[v][x + a[v]], make_pair(y, z)); } } else if(opt[v] == 3){ for(int x = 0 ; x <= k ; x ++) for(int y = 0 ; y <= k ; y ++){ if(f[u][x][y] != -1) checkmax(f[v][x][y], f[u][x][y] + x * b[v]); if(g_a[u][x][y] != -1) checkmax(g_a[v][x][y], g_a[u][x][y] + x * b[v]); if(g_b[u][x][y] != -1 && y >= b[v]) checkmax(g_b[v][x][y - b[v]], g_b[u][x][y]); } for(int x = 0 ; x <= n * k ; x ++) if(dp[u][x] != make_pair(-1, -1)){ int y = dp[u][x].first, z = dp[u][x].second; checkmax(dp[v][x], make_pair(y + b[v], z)); } } else{ for(int x = 0 ; x <= k ; x ++) for(int y = 0 ; y <= k ; y ++){ if(f[u][x][y] != -1) checkmax(f[v][x][y], f[u][x][y] + w[v]); if(g_a[u][x][y] != -1) checkmax(g_a[v][x][y], g_a[u][x][y] + w[v]); if(g_b[u][x][y] != -1) checkmax(g_b[v][x][y - b[v]], g_b[u][x][y] + w[v]); } for(int x = 0 ; x <= n * k ; x ++) if(dp[u][x] != make_pair(-1, -1)){ int y = dp[u][x].first, z = dp[u][x].second; checkmax(dp[v][x], make_pair(y, z + w[v])); } } } } ll ans = f[n][0][0]; for(int x = 0 ; x <= n * k ; x ++) if(dp[n][x] != make_pair(-1, -1)){ int y = dp[n][x].first, z = dp[n][x].second; checkmax(ans, lim * x * y + z); } printf("%lld", ans); return 0; }
1875
A
Jellyfish and Undertale
Flowey has planted a bomb in Snowdin! The bomb has a timer that is initially set to $b$. Every second, the timer will decrease by $1$. When the timer reaches $0$, the bomb will explode! To give the residents of Snowdin enough time to evacuate, you will need to delay the bomb from exploding for as long as possible. You have $n$ tools. Each tool can only be used \textbf{at most} once. If you use the $i$-th tool, the timer will increase by $x_i$. However, if the timer is changed to an integer larger than $a$, the timer will be set to $a$ due to a bug. More specifically, the following events will happen every second in the following order: - You will choose some (possibly none) of your tools that have not been used before. If you choose the $i$-th tool, and the bomb's timer is currently set to $c$, the timer will be changed to $\min(c + x_i, a)$. - The timer decreases by $1$. - If the timer reaches $0$, the bomb explodes. Jellyfish now wants to know the maximum time in seconds until the bomb explodes if the tools are used optimally.
We can use one tool each time the timer reaches $1$; then the answer is $\sum_{i=1}^n \min(a - 1, x_i) + b$. This can be proven optimal: for each tool, if we use it when the timer is $c$, its contribution to the answer is $\min(x_i, a - c)$. We can't use a tool when the timer is less than or equal to $0$, because the bomb explodes before that, so $c=1$ is optimal. Time complexity: $O(n)$ per test case. Memory complexity: $O(1)$ per test case.
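The whole solution fits in one line; a Python sketch (hypothetical function name):

```python
def max_seconds(a, b, tools):
    # each tool is best used when the timer shows 1,
    # contributing min(x_i, a - 1) extra seconds
    return b + sum(min(a - 1, x) for x in tools)

print(max_seconds(5, 3, [7, 1, 3]))   # 3 + 4 + 1 + 3 = 11
```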
[ "brute force", "greedy" ]
900
#include<bits/stdc++.h> using namespace std; int n = 0, a = 0, b = 0; long long ans = 0; inline void solve(){ scanf("%d %d %d", &a, &b, &n); ans = b; for(int i = 0, x = 0 ; i < n ; i ++){ scanf("%d", &x); ans += min(a - 1, x); } printf("%lld\n", ans); } int T = 0; int main(){ scanf("%d", &T); for(int i = 0 ; i < T ; i ++) solve(); return 0; }
1875
C
Jellyfish and Green Apple
Jellyfish has $n$ green apple pieces. Each green apple piece weighs $1~\text{kg}$. Jellyfish wants to divide these green apple pieces \textbf{equally} among $m$ people. Jellyfish has a magic knife. Each time Jellyfish can choose one piece of green apple and divide it into two smaller pieces, with each piece having half the weight of the original piece. Jellyfish wants to know the minimum number of operations needed such that she can divide the green apple pieces such that the total weight of apple pieces received by each person is the same.
Firstly, if $n \geq m$, we can reduce to the case $n < m$ by giving each person a whole apple until there are not enough apples left. Each person must end up with $\frac n m~\text{kg}$ of apple. Since pieces are always cut in half, if $\frac m {\gcd(n, m)}$ is not an integral power of $2$, there is no solution. Since each cut increases the number of apple pieces by exactly $1$, we just need to minimize the final number of apple pieces. After the reduction, $\frac n m$ is less than $1$ and $\frac m {\gcd(n, m)}$ is an integral power of $2$, so we can uniquely find a set of positive integers $S$ satisfying $\frac n m = \sum_{i \in S} \frac 1 {2^i}$, and give each person one piece of weight $\frac 1 {2^i}$ for each $i \in S$. This can be proven optimal: if we use another multiset $T \neq S$ with the same sum, then for every element $x$ that appears twice or more in $T$, we can improve the answer by removing two copies of $x$ from $T$ and adding one $x - 1$; by repeating this process, $T$ eventually becomes $S$. We can use $\text{__builtin_popcount()}$ to get $|S|$; the answer is $m \times |S|-n$. Time complexity: $O(\log m)$ per test case. Memory complexity: $O(1)$ per test case.
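A Python sketch of the same computation (`bin(a).count("1")` standing in for `__builtin_popcount`; function name ours):

```python
from math import gcd

def min_cuts(n, m):
    n %= m                      # hand out whole apples first
    if n == 0:
        return 0                # everyone already holds whole apples
    a, b = n // gcd(n, m), m // gcd(n, m)
    if b & (b - 1):             # the reduced denominator must be a power of two
        return -1
    return m * bin(a).count("1") - n   # m*|S| final pieces, n pieces exist already

print(min_cuts(3, 4))   # 5: each person gets a 1/2 piece and a 1/4 piece
```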
[ "bitmasks", "greedy", "math", "number theory" ]
1,400
#include<bits/stdc++.h> using namespace std; int n = 0, m = 0; inline void solve(){ scanf("%d %d", &n, &m); n %= m; int a = n / __gcd(n, m), b = m / __gcd(n, m); if(__builtin_popcount(b) > 1) printf("-1\n"); else printf("%lld\n", 1ll * __builtin_popcount(a) * m - n); } int T = 0; int main(){ scanf("%d", &T); for(int i = 0 ; i < T ; i ++) solve(); return 0; }
1875
D
Jellyfish and Mex
You are given an array of $n$ nonnegative integers $a_1, a_2, \dots, a_n$. Let $m$ be a variable that is initialized to $0$. Jellyfish will perform the following operation $n$ times: - select an index $i$ ($1 \leq i \leq |a|$) and delete $a_i$ from $a$. - add $\operatorname{MEX}(a)^{\dagger}$ to $m$. Now Jellyfish wants to know the minimum possible final value of $m$ if she performs all the operations optimally. $^{\dagger}$ The MEX (minimum excluded) of an array is the smallest non-negative integer that does not belong to the array. For instance: - The MEX of $[2,2,1]$ is $0$, because $0$ does not belong to the array. - The MEX of $[3,1,0,1]$ is $2$, because $0$ and $1$ belong to the array, but $2$ does not. - The MEX of $[0,3,1,2]$ is $4$ because $0$, $1$, $2$, and $3$ belong to the array, but $4$ does not.
We only care about the operations before $\text{MEX}(a)$ reaches $0$, because after that, $m$ never changes. Lemma. Before $\text{MEX}(a)$ reaches $0$, each step chooses a value $x$ satisfying $x < \text{MEX}(a)$ and deletes all copies of $x$ from $a$; then $\text{MEX}(a)$ becomes $x$. Proof. If $x > \text{MEX}(a)$, we can postpone this deletion until after $\text{MEX}(a)$ becomes $0$; and if we don't delete all copies of some $x$, $\text{MEX}(a)$ doesn't change, so we can also postpone that deletion. So before $\text{MEX}(a)$ reaches $0$, the deleted values $x$ are non-increasing, which means we can solve this problem by dynamic programming. Let $dp_i$ represent the minimum value of $m$ when $\text{MEX}(a)=i$ and we haven't deleted any $x$ satisfying $x < i$. Let $c_i$ represent the number of times $i$ appears in $a$; the transition is $\forall j < i, dp_j \leftarrow dp_i + i \times (c_j - 1) + j$. Time complexity: $O(n^2)$ per test case. Memory complexity: $O(n)$ per test case.
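A compact Python transcription of this DP (function name ours; equivalent to the C++ below). While deleting the $c_j$ copies of $j$, the first $c_j - 1$ deletions each add the current MEX $i$, and the last one adds the new MEX $j$, hence the $i \times (c_j - 1) + j$ term.

```python
def min_final_m(arr):
    n = len(arr)
    c = [0] * (n + 1)
    for v in arr:
        if v <= n:
            c[v] += 1
    mex = 0
    while mex <= n and c[mex]:
        mex += 1
    if mex == 0:
        return 0                # MEX is already 0, nothing is ever added to m
    INF = float("inf")
    dp = [INF] * (mex + 1)
    dp[mex] = 0                 # dp[i]: minimum m once MEX has been brought down to i
    for i in range(mex, 0, -1):
        for j in range(i):
            # delete all c_j copies of j: MEX stays i for c_j - 1 deletions,
            # then becomes j after the last one
            dp[j] = min(dp[j], dp[i] + i * (c[j] - 1) + j)
    return dp[0]

print(min_final_m([3, 1, 0, 1]))   # 0: delete the single 0 first
```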
[ "dp" ]
1,600
#include<bits/stdc++.h> using namespace std; typedef long long ll; const ll N = 5000 + 5, Inf = 0x3f3f3f3f3f3f3f3f; ll n = 0, m = 0, a[N] = {}, dp[N] = {}; inline void init(){ for(ll i = 0 ; i <= n ; i ++) a[i] = 0, dp[i] = Inf; n = m = 0; } inline void solve(){ scanf("%lld", &n); for(ll i = 1, x = 0 ; i <= n ; i ++){ scanf("%lld", &x); if(x < n) a[x] ++; } while(a[m]) m ++; dp[m] = 0; for(ll i = m ; i >= 1 ; i --) for(ll j = 0 ; j < i ; j ++) dp[j] = min(dp[j], dp[i] + i * a[j]); printf("%lld\n", dp[0] - m); } ll T = 0; int main(){ memset(dp, 0x3f, sizeof(dp)); scanf("%lld", &T); for(ll i = 0 ; i < T ; i ++) init(), solve(); return 0; }
1876
A
Helmets in Night Light
Pak Chanek is the chief of a village named Khuntien. On one night filled with lights, Pak Chanek has a sudden and important announcement that needs to be notified to all of the $n$ residents in Khuntien. First, Pak Chanek shares the announcement directly to one or more residents with a cost of $p$ for each person. After that, the residents can share the announcement to other residents using a magical helmet-shaped device. However, there is a cost for using the helmet-shaped device. For each $i$, if the $i$-th resident has got the announcement at least once (either directly from Pak Chanek or from another resident), he/she can share the announcement to at most $a_i$ other residents with a cost of $b_i$ \textbf{for each share}. If Pak Chanek can also control how the residents share the announcement to other residents, what is the minimum cost for Pak Chanek to notify all $n$ residents of Khuntien about the announcement?
Notice that since there are $n$ residents, there must be $n$ shares. There are two types of shares: a share directly by Pak Chanek to a resident, and a share by a resident to another resident. A share by Pak Chanek is unlimited, while a share by resident $i$ is limited to $a_i$ times. So there are unlimited shares with cost $p$, and for each $i$, there are $a_i$ shares with cost $b_i$. In the beginning, no residents are notified about the announcement, so Pak Chanek must share once with some resident in the beginning. After that, since there are $n-1$ residents remaining, there must be $n-1$ more shares. Theoretically, the lower bound of the minimum cost is the cost of the $n-1$ cheapest shares (including Pak Chanek's shares). It turns out that there is always a strategy that uses the $n-1$ cheapest shares to notify all $n$ residents after the first share by Pak Chanek. The strategy is to share the announcement with the residents with the smallest values of $b_i$ first. Using the fact that $a_i\geq1$, we can deduce that there is always a resident with an available share. We can solve this with a simple sort or a priority queue. Keep in mind the unlimited shares with cost $p$. Time complexity for each test case: $O(n\log n)$
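The greedy above can be sketched in Python (an illustrative implementation of my own, not the official one; `min_notify_cost` is a hypothetical name): pool the shares — unlimited direct shares of cost $p$ plus $a_i$ shares of cost $b_i$ — and pay for the mandatory first direct share plus the $n-1$ cheapest remaining ones.

```python
def min_notify_cost(n, p, a, b):
    """One direct share from Pak Chanek, then the n-1 cheapest shares.

    The share pool holds unlimited shares of cost p plus a[i] shares of
    cost b[i]; a[i] >= 1 guarantees the cheapest shares can be scheduled.
    """
    cost = p            # the mandatory first direct share
    rem = n - 1         # shares still needed
    for bi, ai in sorted(zip(b, a)):
        if rem == 0 or bi >= p:
            break       # direct shares of cost p are now the cheapest option
        take = min(ai, rem)
        cost += take * bi
        rem -= take
    return cost + rem * p   # fill whatever is left with direct shares
```

Sorting dominates, giving the editorial's $O(n\log n)$ per test case.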
[ "greedy", "sortings" ]
1,000
null
1876
B
Effects of Anti Pimples
Chaneka has an array $[a_1,a_2,\ldots,a_n]$. Initially, all elements are white. Chaneka will choose one or more different indices and colour the elements at those chosen indices black. Then, she will choose all white elements whose indices are multiples of the index of \textbf{at least one} black element and colour those elements green. After that, her score is the \textbf{maximum} value of $a_i$ out of all black and green elements. There are $2^n-1$ ways for Chaneka to choose the black indices. Find the sum of scores for all possible ways Chaneka can choose the black indices. Since the answer can be very big, print the answer modulo $998\,244\,353$.
For some value $w$, let $f(w)$ be the number of different ways to choose the black indices such that the score is exactly $w$, and let $g(w)$ be the number of different ways to choose the black indices such that the score is at most $w$. Then $f(w) = g(w) - g(w-1)$. Let's try to calculate $g(w)$. First, group the elements of $a$ into two groups, with group 1 containing elements with values greater than $w$ and group 2 containing elements with values not greater than $w$. Notice that the only requirement we must satisfy is to not make any group 1 element black or green; we can do anything to the group 2 elements. In order to make sure that no group 1 element is black or green, no black index we choose may be a factor of a group 1 index. This means we just need to calculate the value $c$ that represents the number of different indices that are a factor of at least one group 1 index; then $g(w)=2^{n-c}-1$. The answer to the original problem is the sum of $f(w)\cdot w$ over all possible values $w$. However, we only care about the values $w$ with $f(w)\neq0$, which can only happen when $w$ equals the value of some element of $a$. In order to calculate $g(w)$ for all desired values of $w$, we iterate over the elements of $a$ from the biggest value to the smallest value. For each element we iterate over, we iterate over every index that is a factor of that element's index and mark those indices as group 1, while keeping track of the value of $c$. The total number of iterations is the total number of factors over all indices, which is $O(n\log n)$. Time Complexity: $O(n\log n)$
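An equivalent way to assemble the same sum, sketched in Python (my own illustration, not the official code): let $b_i$ be the maximum of $a$ over indices that are multiples of $i$; the score of a black set $S$ is $\max_{i\in S} b_i$, and after sorting $b$ the number of non-empty subsets whose maximum sits at sorted position $i$ (0-indexed) is $2^i$.

```python
MOD = 998244353

def sum_of_scores(a):
    """Sum of scores over all 2^n - 1 choices of black indices, modulo MOD."""
    n = len(a)
    # b[i-1] = max of a over the indices that are multiples of i
    b = [max(a[j - 1] for j in range(i, n + 1, i)) for i in range(1, n + 1)]
    b.sort()
    ans, power = 0, 1
    for x in b:                    # x at sorted position i is the max of 2^i subsets
        ans = (ans + x * power) % MOD
        power = power * 2 % MOD
    return ans
```

Building $b$ by iterating multiples takes $O(n\log n)$, matching the editorial's bound.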
[ "combinatorics", "number theory", "sortings" ]
1,500
null
1876
C
Autosynthesis
Chaneka writes down an array $a$ of $n$ positive integer elements. Initially, all elements are not circled. In one operation, Chaneka can circle an element. It is possible to circle the same element more than once. After doing all operations, Chaneka makes a sequence $r$ consisting of all \textbf{uncircled} elements of $a$ following the order of their indices. Chaneka also makes another sequence $p$ such that its length is equal to the number of operations performed and $p_i$ is the \textbf{index} of the element that is circled in the $i$-th operation. Chaneka wants to do several operations such that sequence $r$ is equal to sequence $p$. Help her achieve this, or report if it is impossible! Note that if there are multiple solutions, you can print any of them.
Let's say we have coloured each element of $a$ black or white, and we want to find a sequence of operations that results in all black elements being circled and all white elements being uncircled. Notice that such a sequence exists if and only if the following holds: For each black element, there must exist at least one white element with a value equal to the index of that black element. (There must be a value in $p$ that circles that black element.) For each white element, there cannot be any white element with a value equal to the index of that white element. (There cannot be a value in $p$ that circles that white element.) Construct a directed graph of $n$ vertices and $n$ directed edges. Each edge $i$ goes from vertex $i$ to vertex $a_i$. For an edge from $x$ to $y$, define $x$ as a child of $y$ and define $y$ as a parent of $x$. Notice that each vertex has exactly one parent. This means that each connected component in the graph looks like a directed rooted tree with a cycle of arbitrary size at the root. Let each vertex $i$ represent whether the $i$-th element is white or black. From the conditions above, we get the following rules for determining the colour of a vertex: If it has at least one white child, then it must be black. If it has no white children, then it must be white. This means we can determine the colour of a vertex once we have determined the colours of its children. Additionally, as soon as there is at least one white child, we can guarantee the vertex to be black. We can solve this by maintaining a queue. First, fill the queue with each vertex with no children ($0$ indegree). When we process a vertex in the queue, determine its colour using the rules above, and then decrease the indegree of its parent (as if the current processed vertex is deleted from the graph). If the indegree of the parent reaches $0$, add the parent to the queue. 
In particular, if the current vertex is white, we can immediately determine the colour of its parent to be black and add it to the queue as well. The process stops once there are no vertices left with indegree $0$ and none of the remaining vertices has a white child. Because of the special structure of the graph, once this happens, the remaining graph only contains zero or more disjoint cycles. Let's solve each cycle independently. Notice that if a cycle has even length, we can colour its vertices black and white alternatingly. However, if a cycle has odd length, it is impossible to colour it. Time complexity: $O(n)$
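The queue-plus-cycles procedure can be sketched in Python (an illustrative implementation of my own, not the official one; `autosynthesis` is a hypothetical name). It returns the colour of every index, or `None` when some remaining cycle has odd length:

```python
from collections import deque

WHITE, BLACK = 1, 2  # white = uncircled, black = circled

def autosynthesis(a):
    """Colour of every 0-based index (WHITE/BLACK), or None if impossible."""
    n = len(a)
    nxt = [x - 1 for x in a]              # edge i -> a_i, 0-based
    indeg = [0] * n
    for v in nxt:
        indeg[v] += 1
    color = [0] * n                       # 0 = undecided
    done = [False] * n
    q = deque(i for i in range(n) if indeg[i] == 0)
    while q:
        u = q.popleft()
        if done[u]:
            continue
        done[u] = True
        if color[u] == 0:                 # no white child forced it black
            color[u] = WHITE
        p = nxt[u]
        if color[u] == WHITE and color[p] == 0:
            color[p] = BLACK              # a white child settles the parent early
            q.append(p)
        indeg[p] -= 1
        if indeg[p] == 0 and color[p] == 0:
            q.append(p)
    # everything still undecided lies on disjoint cycles
    for s in range(n):
        if color[s]:
            continue
        cycle = []
        v = s
        while color[v] == 0:
            color[v] = -1                 # visiting marker
            cycle.append(v)
            v = nxt[v]
        if len(cycle) % 2:
            return None                   # odd cycle: no valid colouring
        for k, v in enumerate(cycle):
            color[v] = WHITE if k % 2 == 0 else BLACK
    return color
```

The sequence to print is then the values $a_i$ of the white indices in increasing index order (it serves as both $r$ and $p$).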
[ "constructive algorithms", "dfs and similar", "graphs", "greedy", "sortings" ]
2,100
null
1876
D
Lexichromatography
Pak Chanek loves his faculty, the Faculty of Computer Science, University of Indonesia (Fasilkom). He wants to play with the colours of the faculty's logo, blue and red. There is an array $a$ consisting of $n$ elements, element $i$ has a value of $a_i$. Pak Chanek wants to colour each element in the array blue or red such that these following conditions are satisfied: - If all blue elements are formed into a subsequence$^\dagger$ and so are all the red elements, the blue subsequence is strictly less than the red subsequence lexicographically$^\ddagger$. - Array $a$ does not have \textbf{any subarray} that is imbalanced. A subarray is imbalanced if and only if there is a value $k$ such that the absolute difference between the number of blue elements with value $k$ and the number of red elements with value $k$ in this subarray is $2$ or more. - Note that it is possible to colour every element of the array the same colour. How many different colourings satisfy all those conditions? Since the answer can be very big, print the answer modulo $998\,244\,353$. Two colourings are different if and only if there is at least one element that is blue in one colouring, but red in the other. $^\dagger$ A subsequence of an array is a sequence that can be obtained from the array by deleting some elements (possibly none), without changing the order of the remaining elements. $^\ddagger$ Let $p$ and $q$ be two different sequences. Sequence $p$ is said to be lexicographically less than sequence $q$ if and only if $p$ is a prefix of $q$ or there is an index $i$ such that $p_j=q_j$ holds for every $1\leq j<i$, and $p_i<q_i$. In particular, an empty sequence is always lexicographically less than any non-empty sequence.
There are two conditions in the problem: the lexicography condition and the imbalanced subarray condition. Let's look at the imbalanced subarray condition. Consider some value $k$ and consider all elements in $a$ with a value of $k$. In order to satisfy this condition, those elements must be coloured blue and red alternatingly from left to right. So for each value $k$ that is present in $a$, there are $2$ ways to colour its elements, either starting from blue or starting from red. This means, if we ignore the lexicography condition, the number of colourings is $2^c$ with $c$ representing the number of different values in $a$. From now on, we only consider the colourings that satisfy the imbalanced subarray condition. Define the following: $\text{less}$: the number of colourings with the blue subsequence less than the red subsequence lexicographically. $\text{equal}$: the number of colourings with the blue subsequence equal to the red subsequence. $\text{more}$: the number of colourings with the blue subsequence greater than the red subsequence lexicographically. We can get that $\text{less}+\text{equal}+\text{more}=2^c$. Furthermore, we can get that $\text{less}=\text{more}$ since we can always flip the colouring to get the opposite one. Therefore, $\text{less}=\frac{2^c-\text{equal}}{2}$. Now, let's calculate the value of $\text{equal}$. Iterate the array from left to right while simultaneously maintaining both the blue and red subsequences. During the iteration, we want both subsequences to be equal. For the first element, we can colour it either red or blue, and put the value of that element into its corresponding subsequence. We iterate over the elements, stopping the next time both subsequences have the same length. So if we colour the first element blue, then the blue subsequence will be longer than the red one throughout this process, until both subsequences have equal lengths. 
For each of the next elements, there are a few cases: If we have not found an element with this value before, we must put the element in the currently longer subsequence. (If we put it in the shorter subsequence, it will not match the corresponding element in the longer subsequence.) If we have found an element with this value before, it must be the opposite colour of the previous one. This means we must put it in the shorter subsequence. If the corresponding element in the longer subsequence does not match, then $\text{equal}=0$. Otherwise, we can continue the process. Once both subsequences have equal lengths, let's say we ignore all of the previous elements and the currently built subsequences and assume we continue iterating the array while forgetting about everything we did before. We cannot actually do this to calculate $\text{equal}$ properly, but we will get to that later. Notice that continuing the iteration now is just like starting from the beginning. We can choose the colour of the first element, and every other element after that is forced until both subsequences have the same length. We continue doing this repeatedly until the end of $a$. If at the end of $a$ the lengths of both subsequences are different, or we hit the $\text{equal}=0$ case at least once, then $\text{equal}=0$. Otherwise, from our assumption above, we can deduce that each time we add an element when both subsequences have equal lengths, we can freely choose the colour of that element. We can split $a$ into several sections where the sections are separated by the times both subsequences have the same length. Using our assumption above, the number of colourings is $2^d$ with $d$ being the number of sections, because there are $2$ starting colours we can choose for each section. However, as we said before, when starting a new section, we cannot completely ignore the previous sections. 
Since the colouring of the elements with the same value must be alternating, two sections that have at least one common value must have the same starting colour. This means we can construct a graph of sections and connect the sections that have a common value with edges. Then, the true value of $\text{equal}$ is $2^{d'}$ with $d'$ being the number of connected components. Time complexity: $O(n)$
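The final count is then assembled with modular arithmetic. A small illustrative helper (my own sketch; it assumes the non-degenerate case where $\text{equal}=2^{d'}$ — when the process hits a contradiction, $\text{equal}=0$ and $\text{less}=2^{c-1}$ instead):

```python
M = 998244353

def count_less(c, components):
    """less = (2^c - equal) / 2 (mod M), where equal = 2^components.

    c = number of distinct values in a; components = number of connected
    components in the graph of sections that share a value.
    """
    equal = pow(2, components, M)
    half = pow(2, M - 2, M)      # modular inverse of 2 (Fermat, M is prime)
    return (pow(2, c, M) - equal) * half % M
```

Three-argument `pow` gives both the powers of two and the inverse of $2$ in $O(\log M)$.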
[ "combinatorics", "dfs and similar", "dsu", "graphs", "two pointers" ]
2,500
null
1876
E
Ball-Stackable
With a problem title like that, there is no way this is going to be a graph problem. Chaneka has a graph with $n$ vertices and $n-1$ edges. Some of the edges are directed and some of the edges are undirected. Edge $i$ connects vertex $u_i$ to vertex $v_i$. If $t_i=0$, edge $i$ is undirected. If $t_i=1$, edge $i$ is directed in the direction from $u_i$ to $v_i$. It is known that if you make all edges undirected, the graph becomes a tree$^\dagger$. Chaneka wants to direct all undirected edges and colour each edge (different edges can have the same colour). After doing that, suppose Chaneka starts a walk from an arbitrary vertex $x$ to an arbitrary vertex $y$ (it is possible that $x=y$) going through one or more edges. She is allowed to go through each edge either following the direction or opposite to the direction of the edge. She is also allowed to visit a vertex or an edge more than once. During the walk, Chaneka maintains a stack of balls that is initially empty before the walk. Each time Chaneka goes through an edge, she does the following: - If Chaneka goes through it in the right direction, she puts a new ball with a colour that is the same as the edge's colour to the top of the stack. - If Chaneka goes through it in the opposite direction, she removes the ball that is on the top of the stack. A walk is \textbf{stackable} if and only if the stack is not empty before each time Chaneka goes through an edge in the opposite direction. A walk is \textbf{ball-stackable} if and only if it is stackable and each time Chaneka goes through an edge in the opposite direction, the colour of the ball removed from the stack is the same as the colour of the edge traversed. Is it possible to direct all undirected edges and colour each edge such that all stackable walks are also ball-stackable? If it is possible, find a construction example that uses the \textbf{maximum number of different colours} among all valid ways of directing and colouring. 
If there are multiple such solutions, output any of them. $^\dagger$ A tree is a connected graph with no cycles.
Let's try to solve the problem if all edges are directed. First, assign some arbitrary vertex $x$ as the root of the tree. Consider all walks that start from $x$. Notice that for a fixed vertex $y$, no matter how Chaneka walks from $x$, when she reaches $y$, the number of balls in the stack will always be the same. More precisely, if in the shortest path from $x$ to $y$ there are $c_1$ edges in the right direction and $c_2$ edges in the opposite direction, then the size of the stack in $y$ is $c_1-c_2$. For some root $x$, let's calculate the stack size of every vertex using DFS. What happens if the stack size that is calculated is negative? This conflicts with our logic, but let's ignore that for now. Next, find a vertex with the minimum stack size (including negative stack sizes). Then, assign this as the new root $x$ and do the same process to recalculate every stack size. Because of the way we choose the new $x$, the newly recalculated stack sizes will always be non-negative. Consider a path from this root to some other vertex and go through that path while maintaining a stack of colourful balls. Each time we go through an edge in the right direction, we add a new ball to the stack, so we have an opportunity to use a new colour for the edge (and simultaneously for the new ball). Each time we go through an edge in the opposite direction, we remove the ball at the top of the stack, so we must colour the edge with the colour of the ball currently at the top of the stack (the colour of some previous edge). We can simulate every path from this root to every other vertex by doing DFS once while maintaining a stack. Using the rules above, we get the rules for colouring the edges of the graph. In fact, simulating every path from this single root already solves the problem with all directed edges, because there are no other relationships between two edges that must have the same colour that we have not considered. 
Remember that we do not have to worry about removing from an empty stack because of how we smartly choose the root. Notice that using the rules above, each edge that is directed away from the root contributes one new colour to the colouring, while each edge that is directed towards the root does not add any new colour. This means we just want to maximize the total number of edges that are directed away from the root. We can expand this idea for the original problem with undirected edges. We need to find a root, then direct all undirected edges away from that root such that the number of edges directed away from that root is as large as possible. This is equivalent to finding a vertex such that the number of directed edges (ignoring all undirected edges) that are directed away from that vertex is maximum. This vertex can be found using a simple DFS. Furthermore, choosing such a vertex will automatically eliminate negative stack sizes if we set it as the root. Once we find the optimal root and direct the undirected edges, we just do a simulation DFS like above to determine the colouring. Time complexity: $O(n)$
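The root-selection step — counting, for every candidate root, how many already-directed edges point away from it — is a standard rerooting DFS. A hedged Python sketch of just that sub-problem (my own illustration; `away_counts` is a hypothetical name, and I assume $t=1$ encodes an edge directed from $u$ to $v$; undirected edges contribute one colour for every root alike, so they are left out of the count):

```python
def away_counts(n, edges):
    """f[x] = number of directed edges pointing away from root x.

    edges: list of (u, v, t), 0-based, forming a tree; t=1 means u->v.
    """
    adj = [[] for _ in range(n)]
    for u, v, t in edges:
        gu = 1 if t == 1 else 0  # gain when the root lies on u's side of the edge
        gv = 0                   # a directed u->v never points away from v's side
        adj[u].append((v, gu, gv))
        adj[v].append((u, gv, gu))
    f = [0] * n
    parent = [-1] * n
    order = []
    seen = [False] * n
    seen[0] = True
    stack = [0]
    while stack:                          # iterative DFS, compute f[0]
        u = stack.pop()
        order.append(u)
        for v, here, there in adj[u]:
            if not seen[v]:
                seen[v] = True
                parent[v] = u
                stack.append(v)
                f[0] += here              # root 0 lies on u's side of edge (u, v)
    for u in order[1:]:                   # reroot; parents appear before children
        for v, here, there in adj[u]:
            if v == parent[u]:
                f[u] = f[parent[u]] - there + here
                break
    return f
```

The optimal root is any argmax of the returned list; adding the number of undirected edges to that maximum gives the number of distinct colours.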
[ "constructive algorithms", "data structures", "dp", "trees" ]
3,300
null
1876
F
Indefinite Clownfish
Pak Chanek has just bought an empty fish tank and he has been dreaming to fill it with his favourite kind of fish, clownfish. Pak Chanek likes clownfish because of their ability to change their genders on demand. Because of the size of his fish tank, Pak Chanek wants to buy exactly $k$ clownfish to fill the tank. Pak Chanek goes to the local fish shop. The shop provides $n$ clownfish numbered from $1$ to $n$, with clownfish $i$ having a size of $a_i$. Initially, every clownfish in the store does not have an assigned gender, but has the ability to be assigned to two possible clownfish genders, female or male. The store has a procedure which Pak Chanek should follow to buy clownfish. The shop owner will point at each clownfish sequentially from $1$ to $n$ and for each clownfish, she asks Pak Chanek whether to buy it or not. Each time Pak Chanek is asked, he must declare whether to buy the currently asked clownfish or not, before the shop owner moves on to the next clownfish. If Pak Chanek declares to buy the currently asked clownfish, he must also declare the gender to be assigned to that clownfish immediately. When assigning the gender for the currently asked clownfish, the following conditions must be satisfied: - If Pak Chanek assigns it to be female and he has bought a female clownfish before, then the size of the current one must be exactly $1$ \textbf{bigger} than the last female one. - If Pak Chanek assigns it to be male and he has bought a male clownfish before, then the size of the current one must be exactly $1$ \textbf{smaller} than the last male one. Pak Chanek wants to buy exactly $k$ clownfish such that: - There is at least one female clownfish and one male clownfish. - Among the $k$ clownfish Pak Chanek buys, the mean size of the female clownfish is equal to the mean size of the male clownfish. Let $l$ and $r$ respectively be the minimum and maximum \textbf{index} of a clownfish Pak Chanek buys. What is the minimum possible value of $r-l$?
We want to find two disjoint subsequences (for the females and the males) such that: The female subsequence contains consecutively increasing values. The male subsequence contains consecutively decreasing values. The total length is exactly $k$. The mean values of both subsequences are the same. For a group of consecutive values with minimum value $\text{min}$ and maximum value $\text{max}$, the mean is $\frac{\text{max}+\text{min}}{2}$. So we must make both values of $\text{max}+\text{min}$ the same. We can obtain that since both means are equal, the two subsequences must contain at least one common value. And a common value can only occur twice in the entirety of the two subsequences. For a way to choose the two subsequences, let $x$ be a common value of the two subsequences such that the difference in indices of its two occurrences is minimum. Let $p$ and $q$ be the indices of the left and right occurrences of $x$ respectively. There are two cases: either $p$ is female and $q$ is male, or vice versa. If $p$ is female, the following must be true: To the left of $p$, in decreasing indices, the chosen subsequence (the female one) has the values $x-1$, $x-2$, $x-3$, and so on. To the right of $q$, in increasing indices, the chosen subsequence (the male one) has the values $x-1$, $x-2$, $x-3$, and so on. How about the values in the chosen subsequences that are bigger than $x$? Would they conflict in the indices between $p$ and $q$? Well, since we chose $x$ with the minimum difference in indices of its two occurrences, all values bigger than $x$ in the chosen subsequences that are located between $p$ and $q$ are either only from the female subsequence or only from the male subsequence. So, we only need to consider, for a fixed index that is either $p$ or $q$: To the left of that index, in decreasing indices, the chosen subsequence (the male one) has the values $x+1$, $x+2$, $x+3$, and so on. 
To the right of that index, in increasing indices, the chosen subsequence (the female one) has the values $x+1$, $x+2$, $x+3$, and so on. That means, if $p$ is female, there are two cases (either choosing the index to be $p$ or $q$). If $p$ is male, a similar thing also applies, but the roles of the values smaller than $x$ and the values bigger than $x$ are switched, so there are also two cases. So there are only four cases in total. Notice that in each of the four cases, we can see it as having a pair of indices $(p, q)$ with the same value and having four legs of consecutive values coming out of it in four different directions. Notice that the elements in different legs can never conflict with each other. Each leg is independent. Let's group the four legs into two groups, the two left legs and the two right legs. Next, define $\text{min}_f$ and $\text{max}_f$ as the minimum and maximum value in the female subsequence and define $\text{min}_m$ and $\text{max}_m$ as the minimum and maximum value in the male subsequence. It must hold that $\text{max}_f+\text{min}_f=\text{max}_m+\text{min}_m$. So $\text{max}_m-\text{min}_f=\text{max}_f-\text{min}_m$. Notice that $\text{max}_m-\text{min}_f$ is the total number of elements in the two left legs and $\text{max}_f-\text{min}_m$ is the total number of elements in the two right legs if we exclude $p$ and $q$. Since we want to make the number of elements in the two groups the same, each group must contain exactly $\frac{k-2}{2}$ elements. This means if $k$ is odd, it is impossible. We obtain that we can handle the two groups independently. We just need to make sure the number of elements in one group is exactly $\frac{k-2}{2}$. We want to minimise the value of $r-l$. This means, we want to maximise $l$ in the left group and minimise $r$ in the right group. These two tasks are equivalent, just mirrored. To solve this, let's construct a graph using the array. 
Each element $a_i$ in the array has four parents: The rightmost element to the left of it with value $a_i-1$. The rightmost element to the left of it with value $a_i+1$. The leftmost element to the right of it with value $a_i-1$. The leftmost element to the right of it with value $a_i+1$. For each type of parent, construct the binary lifting table for it so that for each element, we can get its $d$-th parent of a certain type for any value of $d$ in $O(\log n)$. Using this, in order to calculate the maximum value of $l$ of the left group, we can do a binary search to find the optimal number of elements in each of the two left legs. We do the same thing for the right group. We do this for each of the four cases to finally find the minimum value of $r-l$ for a particular pair $(p, q)$. We do that entire process repeatedly for all pairs of adjacent occurrences of the same value. Time complexity: $O(n\log^2n)$
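The "$d$-th parent" queries above are classic binary lifting. A generic, hedged sketch for a single parent type (my own illustration; the actual solution would build four such tables, one per parent type, and it assumes $d < 2^{\text{LOG}}$):

```python
LOG = 17  # enough for n < 2**17

def build_lifting(parent):
    """parent[v] = immediate parent of v, or -1 if none.

    Returns up, where up[j][v] is the 2^j-th parent of v (or -1)."""
    n = len(parent)
    up = [parent[:]]
    for j in range(1, LOG):
        prev = up[-1]
        up.append([-1 if prev[v] == -1 else prev[prev[v]] for v in range(n)])
    return up

def kth_parent(up, v, k):
    """The k-th parent of v, or -1 if it does not exist; O(log k)."""
    j = 0
    while k and v != -1:
        if k & 1:
            v = up[j][v]
        k >>= 1
        j += 1
    return v
```

With the table built, the binary search over leg lengths costs $O(\log n)$ per probe, giving the stated $O(n\log^2 n)$ overall.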
[ "binary search", "graphs" ]
3,500
null
1876
G
Clubstep
There is an extremely hard video game that is one of Chaneka's favourite video games. One of the hardest levels in the game is called Clubstep. Clubstep consists of $n$ parts, numbered from $1$ to $n$. Chaneka has practised the level a good amount, so currently, her familiarity value with each part $i$ is $a_i$. After this, Chaneka can do several (possibly zero) attempts on Clubstep. In each attempt, she dies on one of the $n$ parts. If an attempt dies on part $p$, that means it only successfully passes through every part $k$ for all $1 \leq k \leq p-1$ and it does not reach any part $k$ for all $p+1 \leq k \leq n$. An attempt that dies on part $p$ takes $p$ seconds. It is known that Chaneka improves much more on the part she dies on than anything else. It is also known that during an attempt, Chaneka does not get to practise that much on the parts she does not reach. So, the effect of an attempt that dies on part $p$ is as follows: - Chaneka's familiarity value with part $p$ increases by $2$. - Chaneka's familiarity value with each part $k$ for all $1 \leq k \leq p-1$ increases by $1$. There will be $q$ questions. For the $j$-th question, you are given three integers $l_j$, $r_j$, and $x_j$. Then, you are asked to find out the \textbf{minimum time} (in seconds) for Chaneka to make it such that the familiarity value for every part $p$ ($l_j \leq p \leq r_j$) is at least $x_j$. Note that each question is independent, so the attempts Chaneka does on a question do not affect the familiarity values of any other questions.
Let's try to solve a single query $(l,r,x)$ in $O(n)$. It is clear that in the optimal strategy, we do not want to do any attempts that die on parts to the left of $l$ or to the right of $r$. There are two cases: If $a_r\geq x$, then we can ignore index $r$ and solve the query for $(l,r-1,x)$. If $a_r<x$, then it is optimal to do $\lceil\frac{x-a_r}{2}\rceil$ attempts that die on part $r$. This takes $r\cdot\lceil\frac{x-a_r}{2}\rceil$ seconds and will make all indices from $1$ to $r-1$ increase by $\lceil\frac{x-a_r}{2}\rceil$. After doing that, it is equivalent to ignoring index $r$ and solving the query for $(l,r-1,x-\lceil\frac{x-a_r}{2}\rceil)$. For now, let's ignore $l$ and focus on $(r,x)$. For some pair $(r,x)$, there are two cases it can recurse into. We mainly care about the second case since it is the only one that contributes to the total time. For some pair $(r,x)$ of a query, we can turn it into $(r',x)$ where $r'$ ($r'\leq r$) is the rightmost index with $a_{r'}<x$. For some pair $(r,x)$ with $a_r<x$, we can see it as the pair recursing immediately to $(r',x-\lceil\frac{x-a_r}{2}\rceil)$ where $r'$ ($r'<r$) is the rightmost index with $a_{r'}<x-\lceil\frac{x-a_r}{2}\rceil$. We want to maintain all important pairs $(r,x)$ that are needed to answer all queries. Two pairs with the same values of $r$ and $x$ that come from different queries can be treated as the same pair. Let's imagine a process to calculate all important pairs $(r,x)$. To do this, we iterate $r$ from $n$ to $1$ while maintaining all of the current important values of $x$, including the ones not bigger than $a_r$. In each iteration, we just modify the values of $x$ from the previous iteration. For each $r$, we first add new values of $x$ for all queries with this current value of $r$. Then, the important pairs $(r,x)$ for this value of $r$ are all current values of $x$ that are greater than $a_r$. 
And those values of $x$ are the values of $x$ which will get updated (changed into $x-\lceil\frac{x-a_r}{2}\rceil$) for the next iteration. If more than one value of $x$ updates into the same value, they merge and the number of values decreases. Using the logic of the process above, it can be obtained that the total number of important pairs $(r,x)$ is $O(n+q\log\max(x))$. Proof. Notice that the number of important pairs is about equal to the number of updates to the value of $x$ in all iterations. Let's calculate the total number of updates. Instead of looking at the values of $x$ that we maintain, let's sort those values and look at the gaps between adjacent values. For a value of $r$, the value of $a_r$ lies in one of the gaps. Then, that gap and all gaps located to the right of it in the number line will get updated. The gap that contains $a_r$ can change arbitrarily, but each gap that is to the right of that gap will have its length divided by $2$, either floored or ceilinged. This means, each gap can only have $\log\max(x)$ updates before having a length of $1$. Then, its length can get ceilinged multiple times before getting floored to $0$. When its length becomes $0$, the two values at the endpoints of the gap will merge. It may look like there can be many updates because a length of $1$ can be ceilinged multiple times. However, for every gap having its length divided by $2$ in an iteration, the case where a length of $1$ gets ceilinged cannot happen on two adjacent gaps, so if there are $c$ gaps updated, that particular case can only happen on $\lceil\frac{c}{2}\rceil$ gaps. That means, the total number of updates is about two times the total number of times a gap gets its length decreased, plus the total number of gaps that contain $a_r$. Therefore, the total number of important pairs is $O(n+q\log\max(x))$. $\blacksquare$ Knowing that, we can calculate all important pairs quickly using a stack and a priority queue. 
The stack maintains the current values of $x$ that have already been updated at least once. The priority queue only maintains new values of $x$ that have not been updated yet. In each iteration, we only process several values at the top of the stack and at the top of the priority queue, then each of those gets pushed into the stack with their new values, without changes in their order. The total number of operations in the stack is equal to the number of important pairs, but the total number of changes in the priority queue is only equal to the number of queries. So the total complexity of this is $O(n+q(\log\max(x)+\log n))$. While calculating all important pairs, we can simultaneously construct a tree of pairs where the parent of each pair $(r,x)$ is its corresponding pair $(r',x-\lceil\frac{x-a_r}{2}\rceil)$. We can solve each query using a simple binary lifting in the tree, but that would be too slow since there are $O(n+q\log\max(x))$ vertices. Instead, we can do a DFS traversal from the root while maintaining a stack of the values of $r$ in the DFS recursion. Solving a query in a vertex is just doing a binary search in the stack. So the total complexity of answering all queries is $O(n+q(\log\max(x)+\log n))$. Time complexity: $O(n+q(\log\max(x)+\log n))$
[ "binary search", "brute force", "data structures", "greedy", "trees" ]
3,500
null
1877
A
Goals of Victory
There are $n$ teams in a football tournament. Each pair of teams match up once. After every match, Pak Chanek receives two integers as the result of the match, the number of goals the two teams score during the match. The efficiency of a team is equal to the total number of goals the team scores in each of its matches minus the total number of goals scored by the opponent in each of its matches. After the tournament ends, Pak Dengklek counts the efficiency of every team. Turns out that he forgot about the efficiency of one of the teams. Given the efficiency of $n-1$ teams $a_1,a_2,a_3,\ldots,a_{n-1}$. What is the efficiency of the missing team? It can be shown that the efficiency of the missing team can be uniquely determined.
Notice that each goal increases the efficiency of the team that scores by $1$. But it also simultaneously decreases the efficiency of the opposing team by $1$. This means that if we maintain the sum of efficiencies over all teams, each goal does not change the sum. Therefore, the sum must be $0$. In order to make the sum $0$, the efficiency of the missing team must be equal to the sum of the efficiencies of the other $n-1$ teams, multiplied by $-1$. Time complexity for each test case: $O(n)$
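Computationally this is a one-liner; a minimal sketch (the function name is illustrative, not from an official solution):

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// The efficiencies of all n teams sum to 0, so the missing one is
// the negated sum of the n-1 known values.
long long missing_efficiency(const std::vector<long long>& a) {
    return -std::accumulate(a.begin(), a.end(), 0LL);
}
```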
[ "math" ]
800
null
1877
C
Joyboard
Chaneka, a gamer kid, invented a new gaming controller called joyboard. Interestingly, the joyboard she invented can only be used to play one game. The joyboard has a screen containing $n+1$ slots numbered from $1$ to $n+1$ from left to right. The $n+1$ slots are going to be filled with an array of non-negative integers $[a_1,a_2,a_3,\ldots,a_{n+1}]$. Chaneka, as the player, must assign $a_{n+1}$ with an integer between $0$ and $m$ inclusive. Then, for each $i$ from $n$ to $1$, the value of $a_i$ will be equal to the \textbf{remainder} of dividing $a_{i+1}$ (the adjacent value to the right) by $i$. In other words, $a_i = a_{i + 1} \bmod i$. Chaneka wants it such that after every slot is assigned with an integer, there are exactly $k$ distinct values in the entire screen (among all $n+1$ slots). How many valid ways are there for assigning a non-negative integer into slot $n+1$?
There are only a few cases. If $a_{n+1}=0$, then every value of $a_i$ is $0$. So there is only $1$ distinct value. If $1\leq a_{n+1}\leq n$, then there exists an index $p$ ($p=a_{n+1}$) such that $a_i=a_{n+1}$ for all indices $p+1\leq i\leq n+1$ and $a_i=0$ for all $1\leq i\leq p$. So there are $2$ distinct values. If $a_{n+1}>n$ and $a_{n+1}$ is divisible by $n$, then $a_n=0$ and all values to the left of that are also $0$. So there are $2$ distinct values. If $a_{n+1}>n$ and $a_{n+1}$ is not divisible by $n$, then $1\leq a_n\leq n-1$. This is equivalent to case 2, which means that there are $2$ distinct values from index $1$ to $n$. So in total, there are $3$ distinct values. So the number of ways is as follows: If $k=1$, there is always $1$ way, since $m\geq0$. If $k=2$ and $m\leq n$, there are $m$ ways. If $k=2$ and $m>n$, there are $n+\lfloor\frac{m-n}{n}\rfloor$ ways. If $k=3$ and $m\leq n$, there are $0$ ways. If $k=3$ and $m>n$, there are $m-n-\lfloor\frac{m-n}{n}\rfloor$ ways. If $k>3$, there are $0$ ways. Time complexity for each test case: $O(1)$
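The case analysis above can be translated directly into an $O(1)$ counting function; a minimal sketch (the function name is mine):

```cpp
#include <cassert>

// Number of valid choices of a_{n+1} in [0, m] producing exactly k
// distinct values, translating the case analysis directly.
long long joyboard_ways(long long n, long long m, long long k) {
    if (k == 1) return 1;                        // only a_{n+1} = 0
    if (k == 2) return m <= n ? m : n + (m - n) / n;
    if (k == 3) return m <= n ? 0 : m - n - (m - n) / n;
    return 0;                                    // k >= 4 is impossible
}
```

As a consistency check, the three cases partition all $m+1$ possible values of $a_{n+1}$.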
[ "math", "number theory" ]
1,200
null
1878
A
How Much Does Daytona Cost?
We define an integer to be the most common on a subsegment, if its number of occurrences on that subsegment is larger than the number of occurrences of any other integer in that subsegment. A subsegment of an array is a consecutive segment of elements in the array $a$. Given an array $a$ of size $n$, and an integer $k$, determine if there exists a non-empty subsegment of $a$ where $k$ is the most common element.
It's enough to check whether there exists an element equal to $k$, since $k$ is obviously the most common element in the subsegment of length $1$ that contains it.
[ "greedy" ]
800
#include <bits/stdc++.h>
using namespace std;

int main(){
    int t; // read the number of test cases
    cin >> t;
    while(t--){
        int n, k;
        cin >> n >> k; // read n and k
        bool ys = 0; // true if there exists a subsegment which satisfies the condition
        for(int i = 0; i < n; i++){
            int a; // read the i-th element of the array
            cin >> a;
            if(a == k) ys = 1; // the subsegment [i, i] has the most common element equal to k
        }
        if(ys) cout << "YES\n"; // output the answer
        else cout << "NO\n";
    }
}
1878
B
Aleksa and Stack
After the Serbian Informatics Olympiad, Aleksa was very sad, because he didn't win a medal (he didn't know stack), so Vasilije came to give him an easy problem, just to make his day better. Vasilije gave Aleksa a positive integer $n$ ($n \ge 3$) and asked him to construct a strictly increasing array of size $n$ of positive integers, such that - $3\cdot a_{i+2}$ is not divisible by $a_i+a_{i+1}$ for each $i$ ($1\le i \le n-2$). Note that a strictly increasing array $a$ of size $n$ is an array where $a_i < a_{i+1}$ for each $i$ ($1 \le i \le n-1$).Since Aleksa thinks he is a bad programmer now, he asked you to help him find such an array.
There are many solutions to this problem, but the intended one is the following: By selecting the first $n$ odd positive integers $1, 3, 5, \dots, 2n - 1$, we find that $3\cdot a_{i + 2}$ is also an odd number, while the number $a_i + a_{i + 1}$ is even, and an odd number can never be divisible by an even number, so the construction is correct. Time complexity: $\mathcal{O}(n)$
[ "constructive algorithms", "math" ]
800
#include <bits/stdc++.h>
using namespace std;

int main(){
    int t; // read the number of test cases
    cin >> t;
    while(t--){
        int n; // read n
        cin >> n;
        for(int i = 0; i < n; i++) cout << i*2+1 << " "; // write the first n odd numbers in order
        cout << '\n';
    }
}
1878
C
Vasilije in Cacak
Aca and Milovan, two fellow competitive programmers, decided to give Vasilije a problem to test his skills. Vasilije is given three positive integers: $n$, $k$, and $x$, and he has to determine if he can choose $k$ distinct integers between $1$ and $n$, such that their sum is equal to $x$. Since Vasilije is now in the weirdest city in Serbia where Aca and Milovan live, Cacak, the problem seems weird to him. So he needs your help with this problem.
It is clear that the minimum sum is obtained for the numbers $1, 2, 3, \dots, k$, and its value is $\frac{k\cdot(k+1)}{2}$ (the sum of the first $k$ natural numbers). Furthermore, it is evident that the maximum sum is achieved for the numbers $n, n-1, n-2, \dots, n-k+1$, and its value is $\frac{n\cdot(n+1)-(n-k)\cdot(n-k+1)}{2}$ (the sum of all numbers from $1$ to $n$ minus the sum of all numbers from $1$ to $n-k$). Let's prove that among any $k$ numbers (whose sum is not maximal), there exists a number $a < n$ such that $a+1$ is not among those $k$ numbers. Let's assume the opposite, that is, there exist $k$ numbers whose sum is not maximal such that, for each number $a < n$ among them, $a + 1$ is also among them. Let $v$ be the smallest among them. Consequently, $v + 1$ is also among these $k$ numbers. Since $v + 1$ is among these $k$ numbers, then $v + 2$ is also among these $k$ numbers. Similarly, we can conclude that $v, v + 1, v + 2, v + 3, \dots$ are all among these $k$ numbers. However, since we have $k$ of them, these are the $k$ numbers that would yield the maximum sum ($n, n-1, n-2, \dots, n-k+1$). This is a contradiction! So, among any $k$ numbers (whose sum is not maximal), there exists a number $a < n$ such that $a+1$ is not among those $k$ numbers. Based on this, starting from the minimum sum $S$, we can obtain $S + 1$ (by replacing the number $a$ with $a + 1$, the sum increases by $1$), then from the sum $S + 1$, we obtain the sum $S + 2$, and so on. Therefore, by applying the principle of mathematical induction, we can obtain any sum that is greater than or equal to the minimum sum and less than or equal to the maximum sum.
[ "math" ]
900
#include <iostream>
using namespace std;

int main(){
    int t; // read the number of test cases
    cin >> t;
    while(t--){
        long long n, k, x; // read n, k, x for each test case
        cin >> n >> k >> x;
        // check if x is between the minimum and maximum possible sum
        if(2*x >= k*(k+1) && 2*x <= n*(n+1)-(n-k)*(n-k+1)){
            cout << "YES\n";
        }
        else cout << "NO\n";
    }
}
1878
D
Reverse Madness
You are given a string $s$ of length $n$, containing lowercase Latin letters. Next you will be given a positive integer $k$ and two arrays, $l$ and $r$ of length $k$. It is guaranteed that the following conditions hold for these 2 arrays: - $l_1 = 1$; - $r_k = n$; - $l_i \le r_i$, for each positive integer $i$ such that $1 \le i \le k$; - $l_i = r_{i-1}+1$, for each positive integer $i$ such that $2 \le i \le k$; Now you will be given a positive integer $q$ which represents the number of modifications you need to do on $s$. Each modification is defined with one positive integer $x$: - Find an index $i$ such that $l_i \le x \le r_i$ (notice that such $i$ is unique). - Let $a=\min(x, r_i+l_i-x)$ and let $b=\max(x, r_i+l_i-x)$. - Reverse the substring of $s$ from index $a$ to index $b$. Reversing the substring $[a, b]$ of a string $s$ means to make $s$ equal to $s_1, s_2, \dots, s_{a-1},\ s_b, s_{b-1}, \dots, s_{a+1}, s_a,\ s_{b+1}, s_{b+2}, \dots, s_{n-1}, s_n$. Print $s$ after the last modification is finished.
Observation 1: if we look at $[l_i, r_i]$ as subsegments for each $i$, notice that they are disjoint, and that two modifications do not interfere with each other if they are from different subsegments. Because of this observation, we can basically treat the subsegments as separate test cases. Now, without loss of generality, because of the first observation, we can consider the same problem but with $l_1=1$ and $r_1=n$. It is easy to see that the modifications $x$ and $n-x+1$ are equivalent, because the first one will reverse the subsegment $[\min(x, n-x+1), \max(x, n-x+1)]$, and the second one will do the same thing. Using this, we can assume without loss of generality that every modification satisfies $2\cdot x_i \le n+1$. Now try visualizing the modifications: If $x=1$, we reverse the whole string. If $x=2$, we reverse the whole string except the first and the last element. If $x=3$, we reverse the whole string except the first two and last two elements. We can logically conclude that the modifications are symmetrical with respect to the middle of the string. From this symmetry, we can conclude that if a modification "touches" index $i$ it also touches index $n-i+1$, and also, because of the symmetry, $i$ will always be swapped with $n-i+1$, and no other index. This means that the order of modifications doesn't matter, because for each index it only matters how many modifications affect it. Another thing to note is that for a given index $i$ in the left half, exactly the modifications with $x \le i$ affect this index. This gives us the following solution: let's store the number of modifications for each index in an array, and if $x > \frac{n}{2}$ then store it as $n-x+1$. Next we just iterate over the array while maintaining the running sum of the number of operations, and if it's odd we swap elements $i$ and $n-i+1$, else we just continue iterating.
[ "data structures", "greedy" ]
1,600
#include <bits/stdc++.h>
using namespace std;

int main(){
    int t; // number of test cases
    cin >> t;
    while(t--){
        int n, k; // read the input
        cin >> n >> k;
        string s;
        cin >> s;
        int a[k];
        int b[k];
        for(int i = 0; i < k; i++){ cin >> a[i]; a[i]--; }
        for(int i = 0; i < k; i++){ cin >> b[i]; b[i]--; }
        int q;
        cin >> q;
        vector<int> cnt(n+1, 0); // read and preprocess queries
        for(int i = 0; i < q; i++){
            int x;
            cin >> x;
            cnt[x-1]++;
        }
        string ans = "";
        for(int i = 0; i < k; i++){ // treat each interval as a separate test case
            string s1 = s.substr(a[i], b[i]-a[i]+1);
            int sum = 0;
            int l = a[i];
            int r = b[i];
            for(int j = l; j <= (l+r)/2; j++){
                sum += cnt[j] + cnt[r-j+l];
                if(sum & 1) swap(s1[j-l], s1[r-j]);
            }
            ans += s1;
        }
        cout << ans << '\n';
    }
}
1878
E
Iva & Pav
Iva and Pav are a famous Serbian competitive programming couple. In Serbia, they call Pav "papuca" and that's why he will make all of Iva's wishes come true. Iva gave Pav an array $a$ of $n$ elements. Let's define $f(l, r) = a_l \ \& \ a_{l+1} \ \& \dots \& \ a_r$ (here $\&$ denotes the bitwise AND operation). \textbf{Note that} $f(l, r)$ \textbf{is not defined when} $l>r$. Iva also gave Pav $q$ queries. Each query consists of 2 numbers, $k$ and $l$, and she wants Pav to find the largest index $r$ ($l \le r \le n$), such that $f(l, r) \ge k$. Pav wants to solve this problem fast because he doesn't want to upset Iva. He needs your help.
We can, for each bit, calculate the prefix sums of the array ($pref[i][j]$ is the number of occurrences of the $j$-th bit in the first $i$ elements of the array). This can be calculated in $\mathcal{O}(n \log(max(a)))$. We know that if $pref[r][j] - pref[l - 1][j] = r - l + 1$, then the $j$-th bit is present in all elements of the subsegment $[l, r]$ of the array $a$, which means the value of $f(l, r)$ is equal to the sum of all bits for which this condition is true on the subsegment from $l$ to $r$, and we can calculate that in $\mathcal{O}(\log(max(a)))$. Next, for each query, we can use binary search to find $r$, by calculating $f(l, mid)$. If $f(l, mid) \ge k$ then we have found an index for which the condition is true, so we move the left boundary to $mid + 1$; else we move the right boundary to $mid-1$. This solution works in $\mathcal{O}(Q\cdot\log(N)\cdot\log(max(a)))$ which is around $4\cdot10^7$ operations, with a low constant factor. It is possible to optimize the solution even more by using sparse tables, to calculate $f(l, r)$ in $\mathcal{O}(1)$, therefore removing the $\log(max(a))$ factor, but we think that sparse tables are a little bit too advanced of a topic for a div3 E, so we didn't make that solution necessary.
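The sparse-table optimization mentioned above can be sketched as follows (this is my illustration, not the author's code): bitwise AND is idempotent, so two overlapping power-of-two windows answer any range AND query in $\mathcal{O}(1)$.

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Sparse table for range AND on a 0-indexed array: query(l, r) returns
// a[l] & a[l+1] & ... & a[r] in O(1) after O(n log n) preprocessing.
struct AndSparseTable {
    vector<vector<int>> t;
    vector<int> lg;
    AndSparseTable(const vector<int>& a) {
        int n = a.size();
        lg.assign(n + 1, 0);
        for (int i = 2; i <= n; i++) lg[i] = lg[i / 2] + 1;
        t.assign(lg[n] + 1, vector<int>(n));
        t[0] = a;
        for (int j = 1; j <= lg[n]; j++)
            for (int i = 0; i + (1 << j) <= n; i++)
                t[j][i] = t[j - 1][i] & t[j - 1][i + (1 << (j - 1))];
    }
    int query(int l, int r) {  // inclusive bounds, l <= r
        int j = lg[r - l + 1];
        // overlap is harmless because x & x == x
        return t[j][l] & t[j][r - (1 << j) + 1];
    }
};
```

With this, each binary-search step of the solution costs $\mathcal{O}(1)$ instead of $\mathcal{O}(\log(max(a)))$.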
[ "binary search", "bitmasks", "data structures", "greedy" ]
1,400
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

const int N = 200003;
const int bits = 30;
int pref[N][bits];
int a[N];

void Buildprefix(int n){ // builds the prefix sums for each bit
    for(int i = 0; i < n; i++){
        for(int j = 0; j < bits; j++){
            if(a[i] & (1 << j)){
                pref[i+1][j] = pref[i][j] + 1;
            }
            else{
                pref[i+1][j] = pref[i][j];
            }
        }
    }
}

void solve(){
    int n;
    cin >> n;
    for(int i = 0; i < n; i++){
        cin >> a[i];
    }
    Buildprefix(n);
    int q;
    cin >> q;
    while(q--){
        int l, k;
        cin >> l >> k;
        if(a[l-1] < k){
            cout << -1 << '\n';
            continue;
        }
        int lo = l;
        int hi = n;
        int ans = l;
        while(lo <= hi){
            int s = (lo + hi) / 2;
            int num = 0;
            for(int j = 0; j < bits; j++){
                if(pref[s][j] - pref[l-1][j] == s - l + 1){
                    num += (1 << j);
                }
            }
            if(num >= k){
                lo = s + 1;
                ans = max(ans, s);
            }
            else hi = s - 1;
        }
        cout << ans << '\n';
    }
}

int main(){
    int t = 1;
    cin >> t;
    while(t--){
        solve();
    }
}
1878
F
Vasilije Loves Number Theory
Vasilije is a smart student and his discrete mathematics teacher Sonja taught him number theory very well. He gave Ognjen a positive integer $n$. Denote $d(n)$ as the number of positive integer divisors of $n$, and denote $gcd(a, b)$ as the largest integer $g$ such that $a$ is divisible by $g$ and $b$ is divisible by $g$. After that, he gave Ognjen $q$ queries, and there are $2$ types of queries. - $1$, $x$ — set $n$ to $n \cdot x$, and then answer the following question: does there exist a positive integer $a$ such that $gcd(a, n) = 1$, and $d(n \cdot a) = n$? - $2$ — reset $n$ to its initial value (before any queries). Note that $n$ \textbf{does not} get back to its initial value after the \textbf{type 1} query. Since Ognjen is afraid of number theory, Vasilije promised him that \textbf{after each query}, $d(n) \le 10^9$, however, even with that constraint, he still needs your help with this problem.
Answer: a solution exists if and only if $d(n)$ divides $n$. Proof: consider the prime factorization of the number $n = p_1^{\alpha_1} \cdot p_2^{\alpha_2} \cdots p_k^{\alpha_k}$, where $p_1, p_2, \dots, p_k$ are the distinct prime factors of $n$, and $\alpha_i$ is the highest power such that $p_i^{\alpha_i}$ divides $n$. Knowing this, we can calculate the number of divisors of $n$ with the following formula: $d(n) = (\alpha_1+1) \cdot (\alpha_2+1) \cdots (\alpha_k+1)$. Let's now consider the operations we are performing. We are multiplying the number $n$ by a number $a$ that has no common prime factors with $n$ (the condition $gcd(a, n) = 1$). Multiplying $n$ by such a number will only bring new prime factors to $n$, which means that we cannot change what is already in the brackets of the formula for the number of divisors $d(n)$, but we can add new brackets. Therefore, $d(n)$ will always be a divisor of the number $d(n\cdot a)$. So, we can write $d(n \cdot a) = v \cdot d(n)$. In order to achieve $d(n \cdot a) = n$, we must have $v \cdot d(n) = n$. Therefore, it is necessary for $d(n)$ to be a divisor of $n$. Let's show that this is also sufficient. Let's denote $k = \frac{n}{d(n)}$. Choose any prime number $p$ that is not a factor of $n$ (such a prime exists because there are infinitely many prime numbers, while $n$ has a finite number of prime factors). By multiplying $n$ by $a = p^{k-1}$, we obtain $d(n\cdot a) = d(n) \cdot k = n$. Using this fact, we just need to efficiently check whether $d(n)$ divides $n$ after each type 1 query. How do we do this? First we pre-calculate, for each positive integer less than $10^6$, its smallest prime factor. This allows us to factorize all numbers smaller than or equal to $10^6$ in logarithmic time. We also factorize $n$ and find its number of divisors using the formula mentioned above, and store for each prime factor the highest power of it which still divides $n$. 
We can do this using an array or a map; either way, let's call this structure $power$. Now we need to deal with the queries: For a type 2 query, we just need to reset everything. For a type 1 query, we factorize $x$ in $\mathcal{O}(\log{x})$ operations and let $x = r_1^{\beta_1} \cdot r_2^{\beta_2} \cdots r_\ell^{\beta_\ell}$ be the factorization of the number $x$. We update $d(n)$ by doing the following: for each prime $r_i$ in $x$, divide $d(n)$ by $power[r_i]+1$, then add $\beta_i$ to $power[r_i]$, and then multiply $d(n)$ by $power[r_i]+1$. After we calculate $d(n)$, we should check if $n$ is divisible by it. We can do this in 2 ways: Solution 1: We multiply the values of all previous type $1$ queries (after the last type $2$ query) and the value of the starting $n$ modulo $d(n)$, because $n$ can get really large and we can't store it in a 64-bit integer. If the value mod $d(n)$ is $0$, then it is divisible and the answer to our query is "YES"; else it is "NO". Time complexity: $\mathcal{O}(Q \cdot (Q + \log{x}) + \log{n})$. Solution 2: This solution is even faster, but it might be harder to implement, and that's why we didn't make it necessary. Instead of storing queries, we just need to use another map to store the prime divisors of $d(n)$; now we can compare each prime divisor and its power for $d(n)$ and for the product of $n$ and all queries after the last reset. For each prime in $d(n)$, the power of that prime has to be smaller than or equal to the power of that prime in the product of $n$ and the queries. Since $d(n) \le 10^9$, the map for $d(n)$ will have at most $\log(\max(d(n)))$ entries, so this solution runs in $\mathcal{O}(Q \cdot (\log(\max(d(n))) \cdot \log(\log(\max(d(n)))) + \log(x)) + t \cdot \log(n))$.
[ "brute force", "math", "number theory" ]
1,900
#include <bits/stdc++.h>
using ll = long long;
using namespace std;

const int N = 1000003;
map<int, int> powers; // key is a prime number, value is the highest power
map<int, int> original_divisors; // same as the powers map but isn't updated during queries; used to reset the powers
int smallest_divisor[N]; // precalculated smallest divisors to make factorization O(log x)
bool mark[N]; // marks numbers which aren't prime (used for the sieve)
ll divisor_count = 1; // the number of divisors (updated during queries)

void prime(){ // calculates the smallest divisor for each number from 1 to N (sieve of Eratosthenes)
    smallest_divisor[1] = 1;
    smallest_divisor[2] = 2;
    for(int i = 4; i < N; i += 2){
        mark[i] = true;
        smallest_divisor[i] = 2;
    }
    for(int i = 3; i < N; i += 2){
        if(!mark[i]){
            smallest_divisor[i] = i;
            for(ll j = i*1ll*i; j < N; j += 2*i){
                if(!mark[j]){
                    smallest_divisor[j] = i;
                    mark[j] = 1;
                }
            }
        }
    }
}

// a function for factorizing a number (used to process queries and factorize n in the beginning)
// it also updates the highest power of each prime which divides n (the powers map), and the number of divisors of n (divisor_count)
void factorize(int x){
    int p = 0;
    int current_divisor = 1;
    while(x > 1){ // while x has non-1 divisors, divide it by its smallest divisor which isn't 1 (the smallest divisor is always prime)
        if(smallest_divisor[x] != current_divisor){
            if(p > 0){
                divisor_count /= powers[current_divisor] + 1;
                powers[current_divisor] += p;
                divisor_count *= powers[current_divisor] + 1;
            }
            p = 1;
            current_divisor = smallest_divisor[x];
        }
        else{
            p++;
        }
        x /= smallest_divisor[x];
    }
    if(p > 0){
        divisor_count /= powers[current_divisor] + 1;
        powers[current_divisor] += p;
        divisor_count *= powers[current_divisor] + 1;
    }
    return;
}

int main(){
    prime(); // precalculate smallest divisors
    int t;
    cin >> t;
    while(t--){
        // read n and q
        int n;
        int q;
        cin >> n >> q;
        // factorize n
        factorize(n);
        // since factorize updates the powers map, update the original_divisors map too
        for(auto prime : powers){
            original_divisors[prime.first] = prime.second;
        }
        int original_divisor_count = divisor_count; // since factorize updates divisor_count, store the original value too
        vector<int> queries; // storing previous queries
        // processing queries
        while(q--){
            int query_type;
            cin >> query_type;
            if(query_type == 1){ // query of type 1 (multiply n by x)
                int x;
                cin >> x;
                factorize(x); // factorize x, update the powers map, and the number of divisors
                queries.push_back(x); // add x to the list of previous queries
                ll num = n;
                for(int query : queries){ // check if the product of all previous queries and n is divisible by d(n)
                    num *= query;
                    num %= divisor_count;
                }
                if(num == 0){ // if it is, the answer is yes, else the answer is no
                    cout << "YES\n";
                }
                else cout << "NO\n";
            }
            else{ // here we should reset everything related to the type 1 queries
                powers.clear(); // clear the powers map and set it to the original divisors and powers
                for(auto original_div : original_divisors){
                    powers[original_div.first] = original_div.second;
                }
                divisor_count = original_divisor_count; // restart the divisor_count
                queries.clear(); // clear the queries (since we only need the queries since the previous type 2 query)
            }
        }
        original_divisors.clear();
        powers.clear();
        divisor_count = 1;
        original_divisor_count = 1;
        if(t) cout << "\n";
    }
}
1878
G
wxhtzdy ORO Tree
After (finally) qualifying for the IOI 2023, wxhtzdy was very happy, so he decided to do what most competitive programmers do: trying to guess the problems that will be on IOI. During this process, he accidentally made a problem, which he thought was really cool. You are given a tree (a connected acyclic graph) with $n$ vertices and $n-1$ edges. Vertex $i$ ($1 \le i \le n$) has a value $a_i$. Lets' define $g(u, v)$ as the bitwise or of the values of all vertices on the shortest path from $u$ to $v$. For example, let's say that we want to calculate $g(3, 4)$, on the tree from the first test case in the example. On the path from $3$ to $4$ are vertices $3$, $1$, $4$. Then, $g(3, 4) = a_3 \ | \ a_1 \ | \ a_4$ (here, $|$ represents the bitwise OR operation). Also, you are given $q$ queries, and each query looks like this: You are given $x$ and $y$. Let's consider all vertices $z$ such that $z$ is on the shortest path from $x$ to $y$ (inclusive). Lets define the niceness of a vertex $z$ as the sum of the number of non-zero bits in $g(x, z)$ and the number of non-zero bits in $g(y, z)$. You need to find the maximum niceness among all vertices $z$ on the shortest path from $x$ to $y$. Since his brain is really tired after solving an output only problem on SIO (he had to do it to qualify for the IOI), he wants your help with this problem.
Observation 1: it's enough to only consider $2 \cdot \log(max(a))$ vertices on the shortest path from $x$ to $y$ as candidates for $z$ for each query. Why? Because at most $\log(max(a))$ vertices can add a bit to $g(x, z)$ and we only need to maximize the number of bits in $g(x, z)$ plus the number of bits in $g(y, z)$; all the other vertices will not contribute to the sum at all, so we don't even need to consider them for $z$. Now we need to find those vertices. We know that, for each bit, the vertex we should consider (the first one that can give this bit to $g(x, z)$) is the closest one to vertex $x$ (on the path from $x$ to $y$) that has this bit set. How do we find these vertices quickly? First let's arbitrarily root the tree. Now let's precalculate for each vertex and bit $cnt[vertex][bit]$ as the number of vertices on the path from the root to this vertex which have this bit set in their value. Then for each bit, we can binary search to find its first occurrence on the path from $x$ to $y$, and on the path from $y$ to $x$. We do the binary search by taking $mid$ from vertex $x$ to vertex $y$ and calculating the number of vertices which have the current bit set, using the $cnt$ matrix combined with a range minimum query or binary lifting to find the lowest common ancestor. We are searching for the first position where this number is $>0$. Once we find all the vertices, we can, for each of those, calculate its niceness in $O(\log(max(a)))$, and take the maximum out of those. Time complexity: $O(q \cdot (\log n + \log(max(a)) \cdot \log n))$.
[ "binary search", "bitmasks", "brute force", "data structures", "dfs and similar", "implementation", "trees" ]
2,300
#include <bits/stdc++.h>
#define f first
#define s second
using namespace std;
#define int long long

const int maxn = 2e5 + 69;
const int k = 19;
const int bits = 30;
vector<int> g[maxn];
int n, q, a[maxn], up[maxn][k], tin[maxn], tout[maxn], timer, d[maxn];
int r[maxn][k];
int bst[maxn][bits];

void dfs(int v, int p, vector<int> b) {
    tin[v] = ++timer;
    up[v][0] = p;
    r[v][0] = a[p];
    d[v] = d[p] + 1;
    for (int i = 0; i < bits; i++) {
        bst[v][i] = b[i];
        if (a[v] & (1 << i)) b[i] = v;
    }
    for (int i = 1; i < k; i++) {
        up[v][i] = up[up[v][i - 1]][i - 1];
        r[v][i] = r[v][i - 1] | r[up[v][i - 1]][i - 1];
    }
    for (auto u : g[v]) {
        if (u != p) dfs(u, v, b);
    }
    tout[v] = timer;
}

bool is_anc(int u, int v) {
    return tin[u] <= tin[v] && tout[u] >= tout[v];
}

int lca(int u, int v) {
    if (is_anc(u, v)) return u;
    else if (is_anc(v, u)) return v;
    for (int i = k - 1; i >= 0; i--) {
        if (!is_anc(up[u][i], v) && up[u][i] > 0) u = up[u][i];
    }
    return up[u][0];
}

int OR(int u, int dis) {
    int res = a[u];
    for (int j = 0; j < bits; j++) {
        if (dis & (1 << j)) {
            res |= r[u][j];
            u = up[u][j];
        }
    }
    return res;
}

int Qry(int u, int v) {
    int lc = lca(u, v);
    return OR(u, d[u] - d[lc]) | OR(v, d[v] - d[lc]);
}

signed main() {
    int tt = 1;
    cin >> tt;
    while (tt--) {
        cin >> n;
        timer = 0;
        for (int i = 1; i <= n; i++) g[i].clear();
        for (int i = 1; i <= n; i++) cin >> a[i];
        for (int i = 1; i <= n - 1; i++) {
            int x, y;
            cin >> x >> y;
            g[x].push_back(y);
            g[y].push_back(x);
        }
        vector<int> temp(30, -1);
        dfs(1, 0, temp);
        cin >> q;
        for (int i = 1; i <= q; i++) {
            int x, y;
            cin >> x >> y;
            int LCA = lca(x, y);
            vector<int> t;
            t.push_back(x);
            t.push_back(y);
            for (int i = 0; i < bits; i++) {
                if (bst[x][i] != -1 && is_anc(LCA, bst[x][i])) t.push_back(bst[x][i]);
                if (bst[y][i] != -1 && is_anc(LCA, bst[y][i])) t.push_back(bst[y][i]);
            }
            int ans = __builtin_popcount(a[x]) + __builtin_popcount(a[y]);
            for (auto p : t) {
                int x1 = a[x], x2 = a[y];
                x1 |= Qry(x, p);
                x2 |= Qry(y, p);
                ans = max(ans, 1ll * __builtin_popcount(x1) + __builtin_popcount(x2));
            }
            cout << ans << " ";
        }
        cout << "\n";
    }
    return 0;
}
1879
A
Rigged!
Monocarp organizes a weightlifting competition. There are $n$ athletes participating in the competition, the $i$-th athlete has strength $s_i$ and endurance $e_i$. The $1$-st athlete is Monocarp's friend Polycarp, and Monocarp really wants Polycarp to win. The competition will be conducted as follows. The jury will choose a positive \textbf{(greater than zero)} integer $w$, which denotes the weight of the barbell that will be used in the competition. The goal for each athlete is to lift the barbell as many times as possible. The athlete who lifts the barbell the most amount of times will be declared the winner \textbf{(if there are multiple such athletes — there's no winner)}. If the barbell's weight $w$ is \textbf{strictly greater} than the strength of the $i$-th athlete $s_i$, then the $i$-th athlete will be unable to lift the barbell even one single time. Otherwise, the $i$-th athlete will be able to lift the barbell, and the number of times he does it will be equal to his endurance $e_i$. For example, suppose there are $4$ athletes with parameters $s_1 = 7, e_1 = 4$; $s_2 = 9, e_2 = 3$; $s_3 = 4, e_3 = 6$; $s_4 = 2, e_4 = 2$. If the weight of the barbell is $5$, then: - the first athlete will be able to lift the barbell $4$ times; - the second athlete will be able to lift the barbell $3$ times; - the third athlete will be unable to lift the barbell; - the fourth athlete will be unable to lift the barbell. Monocarp wants to choose $w$ in such a way that Polycarp (the $1$-st athlete) wins the competition. Help him to choose the value of $w$, or report that it is impossible.
Let's figure out the optimal value of $w$. If $w > s_1$, then Polycarp cannot lift the barbell. If $w < s_1$, then some athletes having less strength than Polycarp might be able to lift the barbell. So the optimal value of $w$ is $s_1$. All that's left to do is check whether any other athlete is able to lift weight $w = s_1$ at least $e_1$ times, i.e. has both $s_i \ge s_1$ and $e_i \ge e_1$. If such an athlete exists, then the answer is $-1$. Otherwise, the answer is $s_1$.
[ "greedy" ]
800
#include <bits/stdc++.h>
using namespace std;

const int N = 109;
int t;
int n;
int s[N], e[N];

int main() {
    cin >> t;
    for (int tc = 0; tc < t; ++tc) {
        cin >> n;
        for (int i = 0; i < n; ++i) {
            cin >> s[i] >> e[i];
        }
        bool ok = true;
        for (int i = 1; i < n; ++i)
            if (s[i] >= s[0] && e[i] >= e[0]) ok = false;
        if (!ok) {
            puts("-1");
            continue;
        }
        cout << s[0] << endl;
    }
    return 0;
}
1879
B
Chips on the Board
You are given a board of size $n \times n$ ($n$ rows and $n$ columns) and two arrays of positive integers $a$ and $b$ of size $n$. Your task is to place the chips on this board so that the following condition is satisfied for every cell $(i, j)$: - there exists at least one chip in the same column or in the same row as the cell $(i, j)$. I. e. there exists a cell $(x, y)$ such that there is a chip in that cell, and either $x = i$ or $y = j$ (or both). The cost of putting a chip in the cell $(i, j)$ is equal to $a_i + b_j$. For example, for $n=3$, $a=[1, 4, 1]$ and $b=[3, 2, 2]$. One of the possible chip placements is as follows: \begin{center} {\small White squares are empty} \end{center} The total cost of that placement is $(1+3) + (1+2) + (1+2) = 10$. Calculate the minimum possible total cost of putting chips according to the rules above.
Let's note that to maintain the rule for all cells in a certain row, one of two conditions must hold: either the row contains at least one chip, or each column contains at least one chip. Let's apply this observation to all rows of the board. If we consider all rows, this observation means that the placement of chips satisfies the rule if either each row contains at least one chip, or each column contains at least one chip. Let's consider the case when each row contains at least one chip. Since we want to calculate the minimum total cost of placement, each row needs exactly one chip. Now we have to determine where exactly in each row the chip is placed. Since the cost of a chip in the $i$-th row and $j$-th column is equal to $a_i + b_j$, to minimize the cost, we have to choose the minimum value of $b_j$. Applying this to all rows, we obtain that the total cost is equal to $\sum\limits_{i=1}^{n} (a_i + mnB)$, where $mnB$ is the minimum element in the array $b$. Similar reasoning can be applied to the case when there is a chip in each column. In this case, the total cost is equal to $\sum\limits_{j=1}^{n} (mnA + b_j)$, where $mnA$ is the minimum element in the array $a$. The answer to the problem is the minimum of the two values described above.
[ "constructive algorithms", "greedy" ]
900
#include <bits/stdc++.h> using namespace std; using li = long long; int main() { ios::sync_with_stdio(false); cin.tie(0); int t; cin >> t; while (t--) { int n; cin >> n; vector<li> a(n), b(n); for (auto& x : a) cin >> x; for (auto& x : b) cin >> x; li mnA = *min_element(a.begin(), a.end()); li sA = accumulate(a.begin(), a.end(), 0LL); li mnB = *min_element(b.begin(), b.end()); li sB = accumulate(b.begin(), b.end(), 0LL); li ans = min(mnA * n + sB, mnB * n + sA); cout << ans << '\n'; } }
1879
C
Make it Alternating
You are given a binary string $s$. A binary string is a string consisting of characters 0 and/or 1. You can perform the following operation on $s$ any number of times \textbf{(even zero)}: - choose an integer $i$ such that $1 \le i \le |s|$, then erase the character $s_i$. You have to make $s$ alternating, i. e. after you perform the operations, every two adjacent characters in $s$ should be different. Your goal is to calculate two values: - the minimum number of operations required to make $s$ alternating; - the number of different \textbf{shortest} sequences of operations that make $s$ alternating. Two sequences of operations are different if in at least one operation, the chosen integer $i$ is different in these two sequences.
Firstly, let's divide the string $s$ into blocks of equal characters. For example, if $s = 000100111$, then we divide it into four blocks: $000$, $1$, $00$, $111$. Let's denote the length of the $i$-th block as $len_i$, and the number of blocks as $k$. To obtain the longest alternating string we can get, we should choose exactly one character from each block and delete all other characters (we cannot leave two or more characters from the same block). Now let's calculate the number of ways to choose the characters that stay after the string $s$ becomes alternating. In the first block of length $len_1$, there are $len_1$ ways to choose that element; in the second block, there are $len_2$ ways, and so on. So the final number of ways is equal to $\prod_{i=1}^{k} len_i$. For example, let's consider the string $s = 00011$. This string is divided into two blocks $000$ and $11$, so the number of ways is $len_1 \cdot len_2 = 3 \cdot 2 = 6$: 00011; 00011; 00011; 00011; 00011; 00011. However, we have chosen the characters that remain, but we need to choose the characters we erase and the order in which we erase them. Since choosing the characters that remain is basically the same as choosing the characters that get erased, we only have to choose the order in which the characters get erased. The number of characters we erase is $n-k$, so the number of ways to order them is equal to the number of permutations of length $(n-k)$. For example, let's consider the string $s = 001100$ with chosen indices to erase 1, 4 and 5. Then there are $3! = 6$ ways to choose the order of them: 1, 4, 5; 1, 5, 4; 4, 1, 5; 4, 5, 1; 5, 1, 4; 5, 4, 1. Note that after we delete a character from the string, the indices of other characters might change, but it doesn't actually matter. So, the final answer is: the number of operations we perform is $n - k$; and the number of shortest sequences of operations is $\prod_{i=1}^{k} len_i \cdot (n-k)!$.
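The block decomposition and both counts can be sketched in Python (the function name is mine):

```python
from itertools import groupby
from math import factorial

MOD = 998244353

def alternating_answer(s):
    # lengths of maximal blocks of equal characters
    lens = [len(list(g)) for _, g in groupby(s)]
    erased = len(s) - len(lens)        # minimum number of operations
    ways = factorial(erased) % MOD     # orderings of the erased characters
    for length in lens:
        ways = ways * length % MOD     # choice of the survivor in each block
    return erased, ways
```

For $s = 00011$ this gives $3$ operations and $6 \cdot 3! = 36$ shortest sequences.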
[ "combinatorics", "dp", "greedy" ]
1,300
#include <bits/stdc++.h> using namespace std; const int MOD = 998'244'353; void upd(int &a, int b) { a = (a * 1LL * b) % MOD; } int t; string s; int main() { cin >> t; for (int tc = 0; tc < t; ++tc) { cin >> s; int res = 1; int k = s.size(); int n = s.size(); for (int l = 0; l < n; ) { int r = l + 1; while(r < n && s[l] == s[r]) ++r; upd(res, r - l); --k; l = r; } for (int i = 1; i <= k; ++i) upd(res, i); cout << k << ' ' << res << endl; } }
1879
D
Sum of XOR Functions
You are given an array $a$ of length $n$ consisting of non-negative integers. You have to calculate the value of $\sum_{l=1}^{n} \sum_{r=l}^{n} f(l, r) \cdot (r - l + 1)$, where $f(l, r)$ is $a_l \oplus a_{l+1} \oplus \dots \oplus a_{r-1} \oplus a_r$ (the character $\oplus$ denotes bitwise XOR). Since the answer can be very large, print it modulo $998244353$.
Let's solve this problem for each bit separately. When we fix a bit (let's denote it as $b$), all we have to do is solve the original problem for a binary string (let's denote this string as $s$). Note that if there is an odd number of $1$'s on a segment, the value of XOR on the segment is $1$, otherwise it's $0$. So, for a fixed bit $b$, we have to calculate $\sum_{l=1}^{n} \sum_{r=l}^{n} g(l, r) \cdot (r - l + 1)$, where $g(l, r) = 1$ if the number of characters 1 in the substring $s_l, s_{l + 1}, \dots, s_r$ is odd, otherwise $g(l, r) = 0$. Now let's figure out how to solve this problem if the right border of substrings is fixed (let's denote it as $r$). Let's denote a function (which we call $pref(x)$) that is equal to the number of $1$'s on the prefix of length $x$ modulo $2$. So, if there is an even number of $1$'s among the first $x$ characters, then $pref(x) = 0$, otherwise $pref(x) = 1$. When the right border $r$ of a substring is fixed, we need to consider such left borders $l$ that $l < r$ and $pref(l) + pref(r) = 1$. We have to maintain the number of such left borders (let it be $cnt_i$ for borders which have $pref(l) = i$) and their total sum (let it be $sumOfL_i$ for borders which have $pref(l) = i$). These values can be easily maintained as we iterate on $r$. Why do we need these values? We can calculate the value of $\sum_{l=1}^{r} g(l, r) \cdot (r - l + 1)$ for a fixed right border $r$, if we denote the length of the segment $(r - l + 1)$ as the difference between the right border and the left border (we can discard $1$ from this formula if we treat $l$ as an exclusive border, and $r$ as an inclusive one). So, $\sum_{l=1}^{r} g(l, r) \cdot (r - l + 1)$ can be rephrased as the difference between the sum of right borders of those segments (which is equal to $r$ multiplied by the number of different left borders) and the sum of left borders of these segments (which is maintained in $sumOfL$).
Now we just need to iterate on the right border and sum this up while maintaining the values of $cnt_i$ and $sumOfL_i$. Don't forget that we solved the problem only for the bit $b$; we need to combine the results for every bit later (and make sure that we multiply the result for the $b$-th bit by $2^b$). Overall complexity is $O(n \log A)$, where $A$ is the constraint on the values in the array.
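The per-bit counting can be mirrored in Python (a sketch of the same loop as the C++ solution below; values are assumed to fit in 30 bits, and the empty prefix is seeded with parity 0):

```python
MOD = 998244353

def sum_xor_functions(a):
    n = len(a)
    res = 0
    for b in range(30):
        cnt = [1, 0]        # prefixes with parity 0 / 1; the empty prefix has parity 0
        sum_of_l = [0, 0]   # sum of (exclusive) left borders for each parity
        x = 0               # parity of the b-th bits among a[0..i]
        cur = 0
        for i in range(n):
            x ^= (a[i] >> b) & 1
            # every opposite-parity prefix l gives a segment (l, i+1] with odd XOR bit
            cur = (cur + cnt[x ^ 1] * (i + 1) - sum_of_l[x ^ 1]) % MOD
            cnt[x] += 1
            sum_of_l[x] = (sum_of_l[x] + i + 1) % MOD
        res = (res + (1 << b) * cur) % MOD
    return res
```

For $a = [1, 3, 2]$ the brute-force sum over all subarrays is $12$, which this reproduces.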
[ "bitmasks", "combinatorics", "divide and conquer", "dp", "math" ]
1,700
#include <bits/stdc++.h> using namespace std; const int N = 300005; const int MOD = 998244353; int n; int a[N]; void add(int &a, int b) { a += b; if (a >= MOD) a -= MOD; } int sum(int a, int b) { a += b; if (a >= MOD) a -= MOD; if (a < 0) a += MOD; return a; } int mul(int a, int b) { return (a * 1LL * b) % MOD; } int main() { cin >> n; for (int i = 0; i < n; ++i) cin >> a[i]; int res = 0; for (int b = 0; b < 30; ++b) { int cur = 0; vector <int> cnt(2); vector <int> sumOfL(2); cnt[0] = 1; int x = 0; for (int i = 0; i < n; ++i) { x ^= ((a[i] >> b) & 1); int sumOfR = mul(cnt[x ^ 1], i + 1); add(cur, sum(sumOfR, -sumOfL[x ^ 1])); ++cnt[x]; add(sumOfL[x], i + 1); } add(res, mul(1 << b, cur)); } cout << res << endl; }
1879
E
Interactive Game with Coloring
\textbf{This is an interactive problem. Remember to flush your output while communicating with the testing program.} You may use fflush(stdout) in C++, System.out.flush() in Java, stdout.flush() in Python or flush(output) in Pascal to flush the output. If you use some other programming language, consult its documentation. You may also refer to the guide on interactive problems: https://codeforces.com/blog/entry/45307. You are given a tree on $n$ vertices; vertex $1$ is the root of the tree. For every $i \in [2, n]$, the parent of the $i$-th vertex is $p_i$, and $p_i < i$. You have to color all edges of the tree using the \textbf{minimum possible number of colors} such that you can win the game on that tree (every edge should be painted into exactly one color). The game we're going to play will be conducted as follows. After you paint the edges and print their colors, the jury will place a chip into one of the vertices of the tree (except for the root). Your goal is to move this chip to the root in exactly $d$ moves, where $d$ is the distance from the root to that vertex (the distance is equal to the number of edges on the path). If the chip reaches the root in $d$ moves, you win. Otherwise, you lose. The jury won't tell you where the chip is located. You won't even know the value of $d$ in advance. However, at the start of each move, you will be told how many edges of each color are incident to the current vertex (this includes both the edge leading up the tree and the edges leading away from the root). You have to choose one of these colors, and the chip will be moved along the edge of the chosen color (if there are multiple edges with that color incident to the current vertex, the jury gets to choose one of them). After the chip is moved, you will be told the same information about the current vertex again, and the game continues, until you either reach the root, or you make $d$ moves without reaching the root.
\textbf{The interactor for this problem is adaptive}. It means that both the starting vertex and the current vertex are \textbf{not fixed} and may change "on the run" depending on the output of your program. However, the state of the game will always be consistent with the information you are given: \textbf{there will always be at least one starting vertex and at least one path of your chip from that vertex consistent with both the information about the colors you receive and the colors you've chosen during the moves}.
First of all, we need to analyze how we can win the game. Since we have to reach the root from a vertex with distance $d$ in exactly $d$ moves, every move we make should be towards the root. So, for each vertex, knowing the colors of adjacent edges, we need to uniquely determine the color we choose to go up. This means that the color of the edge leading up must be different from the colors of the edges going down (otherwise the jury might move the chip down when we're trying to go up). Let's try to estimate the maximum number of colors we have to use. In fact, we don't need more than three colors. We can use the following recursive coloring strategy, for example: if an edge connecting $i$ with $p_i$ has color $x$, then all other edges incident to $i$ should have color $x \bmod 3 + 1$. So, if a vertex is adjacent to edges with colors $1$ and $2$, the edge with color $1$ leads up; if the colors are $2$ and $3$, the edge with color $2$ leads up, and so on. So, we don't need to use more than $3$ colors. Let's figure out when we can use less than $3$ colors. The case $k=1$ is pretty simple: one color is enough if and only if all vertices from $2$ to $n$ are the children of the root, because otherwise some vertex will have more than $1$ adjacent edge (and it's impossible to choose the edge going up for that vertex). The case $k = 2$ is a bit more challenging. If an edge from $i$ to $p_i$ has color $1$, then all other edges incident to $i$ should have color $2$, and vice versa. So, for every vertex which has $0$ or $2+$ children, the edge different from all of the other edges leads to the root. But for vertices having exactly one child, we cannot easily distinguish between the color going up and the color going down. So let's make sure that for every such vertex, the edge that leads up has the same color.
So, here are the constraints on the edges we need to ensure so that a $2$-coloring allows us to win the game: for every vertex, every edge going down from it must have a different color than the edge going up; for every vertex with exactly $1$ child, the color of the edge leading up must be the same. There are a few ways to enforce these two constraints, but in my opinion, the most straightforward of them is to use bipartite coloring. Create a graph where each edge is represented by a vertex. The first condition can be modeled if, for each vertex of the tree, we add edges from the vertex representing the edge going up to vertices representing all other edges. The second condition is also quite easy to model: create an auxiliary vertex in the graph, and for every vertex having exactly one child in the tree, connect the vertex representing its parent-edge with that auxiliary vertex. If the resulting graph is bipartite, then its $2$-coloring can be used to choose the colors for the edges of the tree.
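The recursive three-coloring rule above ("all other edges incident to $i$ get color $x \bmod 3 + 1$") can be sketched as follows; the function name is mine, and this covers only the coloring itself, not the 1-/2-color cases or the interaction loop:

```python
def three_coloring(parent):
    # parent maps each vertex i in 2..n to p_i < i; the edge (i, p_i)
    # receives color[i].  Edges out of the root get color 1; every other
    # edge going down from i gets color[i] % 3 + 1, which always differs
    # from the color of the edge going up from i.
    color = {}
    for i in sorted(parent):          # p_i < i, so parents come first
        color[i] = 1 if parent[i] == 1 else color[parent[i]] % 3 + 1
    return color
```

At every non-root vertex the up-edge color and the down-edge color then differ, so the up direction is always identifiable.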
[ "brute force", "constructive algorithms", "dfs and similar", "graphs", "implementation", "interactive", "trees" ]
2,400
#include<bits/stdc++.h> using namespace std; const int N = 123; int n; int color[N]; int countColors[N][N]; int p[N]; vector<int> g[N]; int deg[N]; void add_edge(int x, int y) { g[x].push_back(y); g[y].push_back(x); } bool tryTwoColors() { int v1 = n + 1; int v2 = n + 2; for(int i = 2; i <= n; i++) { if(p[i] != 1) { add_edge(i, p[i]); } } for(int i = 2; i <= n; i++) { if(deg[i] == 1) add_edge(i, v1); } for(int i = 2; i <= n; i++) { if(p[i] != 1 && deg[p[i]] == 1) add_edge(i, v2); } add_edge(v1, v2); bool bad = false; for(int i = 2; i <= n + 2; i++) if(color[i] == 0) { color[i] = 1; queue<int> q; q.push(i); while(!q.empty()) { int k = q.front(); q.pop(); for(auto y : g[k]) { if(color[y] == 0) { color[y] = 3 - color[k]; q.push(y); } else if(color[y] == color[k]) bad = true; } } } if(bad) for(int i = 2; i <= n + 2; i++) color[i] = 0; return !bad; } void tryThreeColors() { for(int i = 2; i <= n; i++) if(p[i] == 1) color[i] = 1; else color[i] = (color[p[i]] % 3) + 1; } int findVertex(const vector<int>& colors) { int s = colors.size(); for(int i = 2; i <= n; i++) { if(vector<int>(countColors[i], countColors[i] + s) == colors) return i; } return -1; } int main() { cin >> n; for(int i = 2; i <= n; i++) { cin >> p[i]; deg[p[i]]++; } if(*max_element(p + 2, p + n + 1) == 1) { for(int i = 2; i <= n; i++) color[i] = 1; } else if (!tryTwoColors()) tryThreeColors(); int colorsUsed = *max_element(color + 2, color + n + 1); cout << colorsUsed << endl; for(int i = 2; i <= n; i++) { cout << color[i]; if(i == n) cout << endl; else cout << " "; } cout.flush(); for(int i = 2; i <= n; i++) { countColors[i][color[i]]++; countColors[p[i]][color[i]]++; } while(true) { int resp; cin >> resp; if(resp == -1 || resp == 1) exit(0); vector<int> counts(colorsUsed + 1); for(int i = 1; i <= colorsUsed; i++) cin >> counts[i]; int v = findVertex(counts); assert(v != -1); cout << color[v] << endl; cout.flush(); } }
1879
F
Last Man Standing
There are $n$ heroes in a videogame. Each hero has some health value $h$ and initial armor value $a$. Let the current value of armor be $a_{\mathit{cur}}$, initially equal to $a$. When $x$ points of damage are inflicted on a hero, the following happens: if $x < a_{\mathit{cur}}$, then $x$ gets subtracted from $a_{\mathit{cur}}$; otherwise, $1$ gets subtracted from $h$ and $a_{\mathit{cur}}$ gets assigned back to $a$. At the start of the game, you choose the value $x$ (an integer strictly greater than $0$, arbitrarily large). Then you keep attacking all heroes in rounds: in one round, you inflict $x$ points of damage to all alive heroes. A hero dies when his health becomes $0$. The game ends when all heroes are dead. The last hero to die earns a number of points equal to the number of rounds he was the only hero alive. The other heroes get $0$ points. In particular, if the last round ends with multiple heroes dying, then every hero gets $0$ points. The game is played for every possible $x$ (from $1$ to infinity). The points are reset between the games. What's the maximum number of points each hero has had?
For each $x$, we can easily tell how many rounds each hero will last. That is equal to $\lceil \frac{a}{x} \rceil \cdot h$. The last hero to die is the one with the maximum of this value. And the number of rounds he will be the only hero alive is determined by the second maximum. Also notice that all $x$ that are greater or equal to the maximum of $a$ behave the same. In one round, all heroes just lose one point of health. Thus, it only makes sense to calculate the answer for all $x$ from $1$ to $\max{a}$. The problem currently is the following: for each $x$ from $1$ to $\max{a}$, find the index and the value of the maximum and the value of the second maximum of $\lceil \frac{a}{x} \rceil \cdot h$. Let's group all heroes by the value of $c = \lceil \frac{a}{x} \rceil$. The values of $a$ for the heroes in the group form a segment from $x \cdot (c - 1) + 1$ to $x \cdot c$. From each group, we only care about the maximum and the second maximum over $h$. Thus, if we can extract the maximum and the second maximum in $O(1)$, the solution will be $O(a \log a)$ because of the harmonic series (iterating over $x$, then over $\lceil \frac{a}{x} \rceil$). First, for each $a$, find the maximum and the second maximum of $h$. To query the maximum on a segment, we can use a sparse table. It turns out we can modify it to query the second maximum as well. Store a tuple of (value of maximum, index of maximum, value of the second maximum). To merge two segments, we compare if the indices of the maximums are the same. They can possibly be the same because we often query intersecting segments in the sparse table. If they are, the second maximum is the larger of the respective second maximums. If they aren't, assume the first node has the larger or equal maximum value (swap them otherwise). Then the second maximum is the larger of the second maximum of the first node and the maximum of the second node. Overall complexity: $O(n + a \log a)$ time and memory per testcase.
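The merge of (maximum value, its index, second maximum) triples for two possibly overlapping sparse-table nodes can be sketched like this (the function name is mine):

```python
def merge(p, q):
    # p and q are (max value, index of max, second max) over two ranges
    # that may intersect; comparing indices avoids counting the same
    # element as both the maximum and the second maximum.
    if p[1] == q[1]:
        return (p[0], p[1], max(p[2], q[2]))
    if p[0] < q[0]:
        p, q = q, p          # make p the node with the larger maximum
    return (p[0], p[1], max(p[2], q[0]))
```

This is the same combine logic as in the C++ solution below, just written symmetrically.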
[ "brute force", "data structures", "number theory" ]
2,800
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; int main() { int t; scanf("%d", &t); while (t--){ int n; scanf("%d", &n); vector<int> h(n), a(n); forn(i, n) scanf("%d", &h[i]); forn(i, n) scanf("%d", &a[i]); int mxa = *max_element(a.begin(), a.end()) + 1; int l = mxa == 1 ? 0 : (__lg(mxa - 1) + 1); vector<vector<pair<int, int>>> st(l, vector<pair<int, int>>(mxa, make_pair(0, -1))); vector<vector<int>> st2(l, vector<int>(mxa)); forn(i, n){ if (h[i] > st[0][a[i]].first){ st2[0][a[i]] = st[0][a[i]].first; st[0][a[i]] = {h[i], i}; } else if (h[i] > st2[0][a[i]]){ st2[0][a[i]] = h[i]; } } auto combine = [&st, &st2](int i, int x, int y){ int mx = max(st[i][x].first, st[i][y].first); if (mx == st[i][x].first) return max(st2[i][x], st[i][y].first); return max(st[i][x].first, st2[i][y]); }; for (int j = 1; j < l; ++j) forn(i, mxa){ if (i + (1 << (j - 1)) < mxa){ st[j][i] = max(st[j - 1][i], st[j - 1][i + (1 << (j - 1))]); st2[j][i] = combine(j - 1, i, i + (1 << (j - 1))); } else{ st[j][i] = st[j - 1][i]; st2[j][i] = st2[j - 1][i]; } } vector<int> pw(mxa + 1); for (int i = 2; i <= mxa; ++i) pw[i] = pw[i / 2] + 1; auto getmx = [&st, &pw](int l, int r){ int len = pw[r - l]; return max(st[len][l], st[len][r - (1 << len)]); }; auto getmx2 = [&st, &st2, &pw, &combine](int l, int r){ int len = pw[r - l]; if (st[len][l].second != st[len][r - (1 << len)].second) return combine(len, l, r - (1 << len)); return max(st2[len][l], st2[len][r - (1 << len)]); }; vector<long long> svmx(mxa), svmx2(mxa); vector<int> svwho(mxa, -1); for (int x = 1; x < mxa; ++x){ for (int l = 1; l < mxa; l += x){ int r = min(mxa, l + x); int ac = (l - 1) / x + 1; auto tmp = getmx(l, r); long long mx = tmp.first * 1ll * ac; int who = tmp.second; long long mx2 = getmx2(l, r) * 1ll * ac; if (who == -1) continue; if (mx > svmx[x]){ svmx2[x] = svmx[x]; svmx[x] = mx; svwho[x] = who; } else if (mx > svmx2[x]){ svmx2[x] = mx; } svmx2[x] = max(svmx2[x], mx2); } } 
vector<long long> ans(n); forn(i, mxa) if (svwho[i] != -1) ans[svwho[i]] = max(ans[svwho[i]], svmx[i] - svmx2[i]); forn(i, n) printf("%lld ", ans[i]); puts(""); } return 0; }
1881
A
Don't Try to Count
Given a string $x$ of length $n$ and a string $s$ of length $m$ ($n \cdot m \le 25$), consisting of lowercase Latin letters, you can apply any number of operations to the string $x$. In one operation, you append the current value of $x$ to the end of the string $x$. Note that the value of $x$ will change after this. For example, if $x =$"aba", then after applying operations, $x$ will change as follows: "aba" $\rightarrow$ "abaaba" $\rightarrow$ "abaabaabaaba". After what \textbf{minimum} number of operations $s$ will appear in $x$ as a substring? A substring of a string is defined as a \textbf{contiguous} segment of it.
Note that the answer is always not greater than $5$: when $n=1$, $m=25$, the answer is either $5$ or $-1$, and it is easy to see that the answer cannot be greater. This allows us to simply iterate over the number of operations, each time checking if $s$ occurs in $x$. The time complexity of this solution is $O(2^5\cdot n \cdot m)$.
[ "brute force", "strings" ]
800
def solve(): n, m = map(int, input().split()) x = input() s = input() for i in range(6): if s in x: print(i) return x += x print(-1) for _ in range(int(input())): solve()
1881
B
Three Threadlets
Once upon a time, bartender Decim found three threadlets and a pair of scissors. In one operation, Decim chooses any threadlet and cuts it into two threadlets, whose lengths are \textbf{positive integers} and their sum is \textbf{equal} to the length of the threadlet being cut. For example, he can cut a threadlet of length $5$ into threadlets of lengths $2$ and $3$, but he cannot cut it into threadlets of lengths $2.5$ and $2.5$, or lengths $0$ and $5$, or lengths $3$ and $4$. Decim can perform \textbf{at most} three operations. He is allowed to cut the threadlets obtained from previous cuts. Will he be able to make all the threadlets of equal length?
If the lengths of the threadlets are equal, the answer is "YES". Otherwise, sort the lengths so that $a \le b \le c$. If we cut the minimum threadlet, the common length would become smaller than $a$, so each of the three threadlets would need at least one cut; that uses all three operations and works only when $a = b = c$, a case we have already handled. Therefore, the desired length should be equal to $a$. If $b$ or $c$ is not divisible by $a$, the answer is "NO". Otherwise, we check whether $\frac{b}{a} - 1 + \frac{c}{a} - 1 \le 3$.
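The check can be sketched as (the function name is mine):

```python
def can_equalize(x, y, z):
    a, b, c = sorted((x, y, z))
    if b % a or c % a:
        return False
    # cutting b into b // a pieces takes b // a - 1 cuts, same for c
    return (b // a - 1) + (c // a - 1) <= 3
```

Note that the equal-lengths case falls out automatically: the cut counts are then $0$.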
[ "math" ]
900
for _ in range(int(input())): a, b, c = sorted(map(int, input().split())) if a == b and b == c: print('YES') elif b % a == 0 and c % a == 0 and (b // a - 1) + (c // a - 1) <= 3: print('YES') else: print('NO')
1881
C
Perfect Square
Kristina has a matrix of size $n$ by $n$, filled with lowercase Latin letters. The value of $n$ is \textbf{even}. She wants to change some characters so that her matrix becomes a perfect square. A matrix is called a perfect square if it remains unchanged when rotated $90^\circ$ clockwise \textbf{once}. Here is an example of rotating a matrix by $90^\circ$: In one operation, Kristina can choose any cell and replace its value with the next character in the alphabet. If the character is equal to "z", its value \textbf{does not change}. Find the \textbf{minimum} number of operations required to make the matrix a perfect square. For example, if the $4$ by $4$ matrix looks like this: $$\matrix{ a & b & b & a \cr b & c & \textbf{b} & b \cr b & c & c & b\cr a & b & b & a \cr }$$ then it is enough to apply $1$ operation to the letter \textbf{b}, highlighted in bold.
When rotating a matrix of size $n$ by $n$ by $90$ degrees: element $a[i][j]$ takes the position of element $a[j][n - 1 - i]$; element $a[n - 1 - j][i]$ takes the position of element $a[i][j]$; element $a[n - 1 - i][n - 1 - j]$ takes the position of element $a[n - 1 - j][i]$. In order for the matrix to be a perfect square, the symbols at all of these positions must be equal. Since an operation can only increase a symbol, we do the following: among these $4$ symbols, find the lexicographically largest one, that is, the one that appears in the alphabet no earlier than all the others; for each symbol that is not equal to the maximum, the number of operations that must be applied to make it equal to the maximum symbol is the difference of the positions of the two symbols in the alphabet.
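The orbit-by-orbit counting can be sketched in Python (the function name is mine; the grid is a list of strings and $n$ is even):

```python
def min_ops_perfect_square(grid):
    n = len(grid)
    ops = 0
    for i in range(n // 2):
        for j in range(n // 2):
            # the four cells that map onto each other under 90-degree rotation
            quad = [grid[i][j], grid[n - 1 - j][i],
                    grid[n - 1 - i][n - 1 - j], grid[j][n - 1 - i]]
            target = max(quad)
            ops += sum(ord(target) - ord(ch) for ch in quad)
    return ops
```

On the $4 \times 4$ example from the statement this returns $1$.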
[ "brute force", "implementation" ]
1,200
#include <bits/stdc++.h> #define all(arr) arr.begin(), arr.end() using namespace std; const int MAXN = 1010; int n; string A[MAXN]; int solve() { int ans = 0; for (int i = 0; i * 2 < n; ++i) for (int j = 0; j * 2 < n; ++j) { vector<char> M {A[i][j], A[n - 1 - j][i], A[n - 1 - i][n - 1 - j], A[j][n - 1 - i]}; char c = *max_element(all(M)); for(char e: M) ans += c - e; } return ans; } int main() { int t; cin >> t; while (t--) { cin >> n; for (int i = 0; i < n; ++i) cin >> A[i]; cout << solve() << endl; } }
1881
D
Divide and Equalize
You are given an array $a$ consisting of $n$ positive integers. You can perform the following operation on it: - Choose a pair of elements $a_i$ and $a_j$ ($1 \le i, j \le n$ and $i \neq j$); - Choose one of the divisors of the integer $a_i$, i.e., an integer $x$ such that $a_i \bmod x = 0$; - Replace $a_i$ with $\frac{a_i}{x}$ and $a_j$ with $a_j \cdot x$. Determine whether it is possible to make all elements in the array the same by applying the operation a certain number of times (possibly zero). For example, let's consider the array $a$ = [$100, 2, 50, 10, 1$] with $5$ elements. Perform two operations on it: - Choose $a_3 = 50$ and $a_2 = 2$, $x = 5$. Replace $a_3$ with $\frac{a_3}{x} = \frac{50}{5} = 10$, and $a_2$ with $a_2 \cdot x = 2 \cdot 5 = 10$. The resulting array is $a$ = [$100, 10, 10, 10, 1$]; - Choose $a_1 = 100$ and $a_5 = 1$, $x = 10$. Replace $a_1$ with $\frac{a_1}{x} = \frac{100}{10} = 10$, and $a_5$ with $a_5 \cdot x = 1 \cdot 10 = 10$. The resulting array is $a$ = [$10, 10, 10, 10, 10$]. After performing these operations, all elements in the array $a$ become equal to $10$.
To solve the problem, we need to decompose all numbers in the array into prime factors. After that, let's count the total number of occurrences of each prime, summing the decompositions of all numbers. If each prime occurs $k\cdot n$ times, where $k$ is a non-negative integer (i.e., a multiple of $n$ times), then we can equalize all the numbers in the array: we sequentially apply the operation so that each number ends up with the same multiset of prime factors. If some prime occurs a number of times not divisible by $n$, then it is impossible to equalize the numbers in the array.
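A sketch of this counting argument in Python (the function name is mine; trial division suffices for the editorial's idea):

```python
from collections import Counter

def can_equalize_array(a):
    # total exponent of every prime over the whole array must be divisible by n
    primes = Counter()
    for v in a:
        d = 2
        while d * d <= v:
            while v % d == 0:
                primes[d] += 1
                v //= d
            d += 1
        if v > 1:
            primes[v] += 1
    return all(c % len(a) == 0 for c in primes.values())
```

On the statement's example $[100, 2, 50, 10, 1]$ both primes $2$ and $5$ occur $5$ times, so the answer is positive.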
[ "math", "number theory" ]
1,300
#include<bits/stdc++.h> using namespace std; const int maxv = 1000000; void add_divs(int x, map<int, int>&divs){ int i = 2; while(i * i <= x){ while (x % i == 0){ divs[i]++; x /= i; } i++; } if(x > 1) divs[x]++; } bool solve(){ int n; cin >> n; vector<int>a(n); map<int, int> divs; for(int i = 0; i < n; i++) { cin >> a[i]; add_divs(a[i], divs); } for(auto e: divs){ if(e.second % n != 0) return false; } return true; } int main(){ int t; cin >> t; while(t--) { cout << (solve() ? "YES" : "NO") << "\n"; } }
1881
E
Block Sequence
Given a sequence of integers $a$ of length $n$. A sequence is called beautiful if it has the form of a series of blocks, each starting with its length, i.e., first comes the length of the block, and then its elements. For example, the sequences [$\textcolor{red}{3},\ \textcolor{red}{3},\ \textcolor{red}{4},\ \textcolor{red}{5},\ \textcolor{green}{2},\ \textcolor{green}{6},\ \textcolor{green}{1}$] and [$\textcolor{red}{1},\ \textcolor{red}{8},\ \textcolor{green}{4},\ \textcolor{green}{5},\ \textcolor{green}{2},\ \textcolor{green}{6},\ \textcolor{green}{1}$] are beautiful (different blocks are colored differently), while [$1$], [$1,\ 4,\ 3$], [$3,\ 2,\ 1$] are not. In one operation, you can remove any element from the sequence. What is the minimum number of operations required to make the given sequence beautiful?
Let's use the dynamic programming approach: let $dp[i]$ be the number of operations required to make the segment from $i$ to $n$ beautiful. There are two possible ways to achieve this: remove the element at position $i$ and make the segment from $i + 1$ to $n$ beautiful, then $dp[i] = dp[i+1] + 1$; or keep $a_i$ as the length of a block and make the segment from $i + a_i + 1$ to $n$ beautiful, then $dp[i] = dp[i + a_i + 1]$ (cases where $i + a_i + 1 > n$ need to be handled separately).
[ "dp" ]
1,500
def solve(): n = int(input()) a = [int(x) for x in input().split()] dp = [n + 1] * n def get(pos): if pos > n: return n + 1 if pos == n: return 0 return dp[pos] dp[-1] = 1 for i in range(n - 2, -1, -1): dp[i] = min(dp[i + 1] + 1, get(i + a[i] + 1)) print(dp[0]) for _ in range(int(input())): solve()
1881
F
Minimum Maximum Distance
You have a tree with $n$ vertices, some of which are marked. A tree is a connected undirected graph without cycles. Let $f_i$ denote the maximum distance from vertex $i$ to any of the marked vertices. Your task is to find the minimum value of $f_i$ among all vertices. For example, in the tree shown in the example, vertices $2$, $6$, and $7$ are marked. Then the array $f(i) = [2, 3, 2, 4, 4, 3, 3]$. The minimum $f_i$ is for vertices $1$ and $3$.
Let's run a breadth-first traversal from any marked vertex $v_1$ and find the farthest other marked vertex $v_2$ from it. Then we run a traversal from $v_2$ and find the farthest marked vertex $v_3$ from it (it may coincide with $v_1$). Then the answer is $\lceil \frac{d}{2} \rceil$, where $d$ is the distance between $v_2$ and $v_3$. It is worth considering separately the case when there is only one marked vertex in the tree. Then the answer is $0$.
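The double BFS can be sketched in Python (names are mine; vertices are 0-indexed):

```python
from collections import deque

def min_max_distance(n, adj, marked):
    # adj: adjacency lists of the tree; marked: list of marked vertices
    if len(marked) == 1:
        return 0

    def bfs(src):
        dist = [-1] * n
        dist[src] = 0
        q = deque([src])
        while q:
            v = q.popleft()
            for u in adj[v]:
                if dist[u] == -1:
                    dist[u] = dist[v] + 1
                    q.append(u)
        return dist

    d1 = bfs(marked[0])
    v2 = max(marked, key=lambda v: d1[v])   # farthest marked vertex from marked[0]
    d2 = bfs(v2)
    d = max(d2[v] for v in marked)          # "diameter" of the marked set
    return (d + 1) // 2
```

For a path 0-1-2-3-4 with vertices 0 and 4 marked, the marked diameter is 4 and the answer is 2 (attained at the middle vertex).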
[ "dfs and similar", "dp", "graphs", "shortest paths", "trees" ]
1,700
#include<bits/stdc++.h> using namespace std; int n; vector<vector<int>> g; void dfs(int v, int p, vector<int> &d){ if(p != -1) d[v] = d[p] + 1; for(int u: g[v]){ if(u != p){ dfs(u, v, d); } } } int main(){ int t; cin>>t; while(t--){ int k; cin>>n>>k; g.assign(n, vector<int>(0)); vector<int> marked(k); for(int &e: marked) cin >> e, --e; for(int i=1;i<n;i++){ int u, v; cin >> u >> v; --u, --v; g[u].push_back(v); g[v].push_back(u); } if(k==1){ cout<<0<<"\n"; continue; } vector<int> d1(n); dfs(marked[0], -1, d1); int mx = marked[0]; for(int e: marked){ if(d1[e] > d1[mx]) mx = e; } vector<int> d2(n); dfs(mx, -1, d2); mx = marked[0]; for(int e: marked){ if(d2[e] > d2[mx]) mx = e; } cout << (d2[mx] + 1) / 2 << "\n"; } return 0; }
1881
G
Anya and the Mysterious String
Anya received a string $s$ of length $n$ brought from Rome. The string $s$ consists of lowercase Latin letters and at first glance does not raise any suspicions. An instruction was attached to the string. Start of the instruction. A palindrome is a string that reads the same from left to right and right to left. For example, the strings "anna", "aboba", "level" are palindromes, while the strings "gorilla", "banan", "off" are not. A substring $[l \ldots r]$ of string $s$ is a string $s_l s_{l+1} \ldots s_{r-1} s_r$. For example, the substring $[4 \ldots 6]$ of the string "generation" is the string "era". A string is called beautiful if it \textbf{does not} contain a substring of length \textbf{at least} two that is a palindrome. For example, the strings "fox", "abcdef", and "yioy" are beautiful, while the strings "xyxx", "yikjkitrb" are not. When an integer $x$ is added to the character $s_i$, it is replaced $x$ times with the \textbf{next} character in the alphabet, with "z" being replaced by "a". When an integer $x$ is added to the substring $[l, r]$ of string $s$, it becomes the string $s_1 s_2 \ldots s_{l-1} (s_l + x) (s_{l+1} + x) \ldots (s_{r-1} + x) (s_r + x) s_{r+1} \ldots s_n$. For example, when the substring $[2, 4]$ of the string "abazaba" is added with the number $6$, the resulting string is "ahgfaba". End of the instruction. After reading the instruction, Anya resigned herself to the fact that she has to answer $m$ queries. The queries can be of two types: - Add the number $x$ to the substring $[l \ldots r]$ of string $s$. - Determine whether the substring $[l \ldots r]$ of string $s$ is beautiful.
Let's make two obvious observations about palindromes of length at least $2$: Palindromes of length $2n$ contain a palindrome substring $[n \ldots n + 1]$; Palindromes of length $2n + 1$ contain a palindrome substring $[n \ldots n + 2]$. Now we need to learn how to track only palindromes of length $2$ and $3$. Let's call an index bad if a palindrome of length $2$ starts from it, and terrible if a palindrome of length $3$ starts from it. We will store the bad and terrible indices in a pair of std::set. Let's assume that there are no modification queries and we need to check the substring $[l \ldots r]$ for beauty. If there exists a bad index $i$ ($l \le i \le r - 1$) or a terrible index $j$ ($l \le j \le r - 2$), then the substring is not beautiful; otherwise, it is beautiful. This can be checked using binary search on the set. Now let's learn how to make modifications. Notice that palindromes do not appear or disappear inside a segment, but they can appear or disappear at its boundaries. Let's use a data structure that can add values on a segment and retrieve a value at a point. If such a structure exists, we will add $x \bmod 26$ to the segment $[l \ldots r]$, and then process the nearest $10$ indices to $l$ and the nearest $10$ indices to $r$. We can describe more precisely which bad and terrible indices should be processed, but the author is lazy; it does not affect the solution, because the number of such indices is still $O(1)$. When processing the indices, we use point queries and insertion/deletion operations in std::set. Now we need to implement such a data structure. The author suggests using a Fenwick tree on a difference array. A segment tree with lazy propagation will also work. This gives a solution with $O((n + q) \log n)$ time complexity.
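For the static case (no modification queries), the beauty check with binary search can be sketched as follows (0-indexed, half-open query range $[l, r)$; names are mine):

```python
from bisect import bisect_left

def is_beautiful(bad2, bad3, l, r):
    # bad2: sorted starts i of length-2 palindromes (s[i] == s[i+1])
    # bad3: sorted starts i of length-3 palindromes (s[i] == s[i+2])
    # the substring s[l:r] is ugly iff some palindrome lies fully inside it
    i = bisect_left(bad2, l)
    if i < len(bad2) and bad2[i] + 1 < r:
        return False
    j = bisect_left(bad3, l)
    if j < len(bad3) and bad3[j] + 2 < r:
        return False
    return True
```

With updates, the same check runs against std::set-like ordered containers whose contents are repaired near the segment boundaries after each range addition.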
[ "binary search", "data structures" ]
2,000
#include <iostream>
#include <string>
#include <set>
#include <cstring>
#define int long long
using namespace std;

const int L = 26;
const int MAXN = 200200;

int n, m;
string s;
set<int> M2, M3;
int fen[MAXN];

void fenadd(int i, int x) {
    x = (x % L + L) % L;
    for (; i < n; i |= (i + 1)) fen[i] = (fen[i] + x) % L;
}

int fenget(int i) {
    int ans = 0;
    for (; i >= 0; i = (i & (i + 1)) - 1) ans = (ans + fen[i]) % L;
    return ans;
}

void relax(int l, int r) {
    l = max(l, 0LL);
    r = min(r, n);
    for (int i = l; i + 1 < r; ++i) {
        int c1 = fenget(i);
        int c2 = fenget(i + 1);
        if (c1 == c2) M2.insert(i);
        else M2.erase(i);
        if (i + 2 >= r) continue;
        int c3 = fenget(i + 2);
        if (c1 == c3) M3.insert(i);
        else M3.erase(i);
    }
}

void build() {
    M2.clear();
    M3.clear();
    memset(fen, 0, n * sizeof(int));
    fenadd(0, s[0] - 'a');
    for (int i = 1; i < n; ++i) {
        fenadd(i, s[i] - s[i - 1] + L);
    }
    for (int i = 0; i + 1 < n; ++i) {
        if (s[i] == s[i + 1]) M2.insert(i);
        if (i + 2 < n && s[i] == s[i + 2]) M3.insert(i);
    }
}

void update(int l, int r, int x) {
    fenadd(l, x);
    relax(l - 5, l + 5);
    fenadd(r, L - x);
    relax(r - 5, r + 5);
}

bool query(int l, int r) {
    auto it = M2.lower_bound(l);
    if (it != M2.end() && *it + 1 < r) return false;
    it = M3.lower_bound(l);
    if (it != M3.end() && *it + 2 < r) return false;
    return true;
}

signed main() {
    int t;
    cin >> t;
    while (t--) {
        cin >> n >> m >> s;
        build();
        while (m--) {
            int tp, l, r;
            cin >> tp >> l >> r, --l;
            if (tp == 1) {
                int x;
                cin >> x;
                update(l, r, x);
            } else {
                cout << (query(l, r) ? "YES" : "NO") << '\n';
            }
        }
    }
}
1882
A
Increasing Sequence
You are given a sequence $a_{1}, a_{2}, \ldots, a_{n}$. A sequence $b_{1}, b_{2}, \ldots, b_{n}$ is called good, if it satisfies all of the following conditions: - $b_{i}$ is a positive integer for $i = 1, 2, \ldots, n$; - $b_{i} \neq a_{i}$ for $i = 1, 2, \ldots, n$; - $b_{1} < b_{2} < \ldots < b_{n}$. Find the minimum value of $b_{n}$ among all good sequences $b_{1}, b_{2}, \ldots, b_{n}$.
Greedy solution. Construct $b$ from left to right, keeping each element as small as possible. If $a_{1} = 1$, then $b_{1} = 2$; otherwise $b_{1} = 1$. For $i \ge 2$, if $a_{i} = b_{i-1} + 1$, then $b_{i} = b_{i-1} + 2$; otherwise $b_{i} = b_{i-1} + 1$. The $b_{n}$ produced by this process is the answer. Time complexity is $\mathcal{O}(n)$ per test case.
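The greedy can be sketched in a few lines of Python (the function name is mine, not from the editorial):

```python
def min_bn(a):
    """Smallest possible b_n: greedily pick the least valid b_i."""
    b = 0  # b_0 = 0, so the first candidate is 1
    for x in a:
        b += 1          # smallest value exceeding the previous b
        if b == x:      # forbidden (b_i != a_i), take the next one
            b += 1      # b cannot equal x again after this step
    return b
```

For example, `min_bn([1, 3, 2])` returns `5`, corresponding to $b = [2, 4, 5]$.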
[ "greedy" ]
800
null
1882
B
Sets and Union
You have $n$ sets of integers $S_{1}, S_{2}, \ldots, S_{n}$. We call a set $S$ attainable, if it is possible to choose some (possibly, none) of the sets $S_{1}, S_{2}, \ldots, S_{n}$ so that $S$ is equal to their union$^{\dagger}$. If you choose none of $S_{1}, S_{2}, \ldots, S_{n}$, their union is an empty set. Find the maximum number of elements in an attainable $S$ such that $S \neq S_{1} \cup S_{2} \cup \ldots \cup S_{n}$. $^{\dagger}$ The union of sets $A_1, A_2, \ldots, A_k$ is defined as the set of elements present in at least one of these sets. It is denoted by $A_1 \cup A_2 \cup \ldots \cup A_k$. For example, $\{2, 4, 6\} \cup \{2, 3\} \cup \{3, 6, 7\} = \{2, 3, 4, 6, 7\}$.
Sorry for everyone who got FSTs :( We tried our best to make the pretests strong, especially for this problem, but it wasn't enough. Denote $T = S_{1} \cup S_{2} \cup \cdots \cup S_{n}$; then $S \subseteq T$. Since $S \neq T$, we have $i \in T$ and $i \notin S$ for some $i$ (here $1 \le i \le 50$). Given such an integer $i$, can you calculate the maximum size of $S$ such that $i \notin S$? For a fixed $i$, select all the sets among $S_{1}, S_{2}, \cdots, S_{n}$ which don't contain $i$. The size of their union is the maximum size of $S$ such that $i \notin S$. If we do this for all $i \in T$, the maximum over them is the answer. Time complexity is $\mathcal{O} \left( N \cdot \max \left( s_{i, j} \right) ^{2}\right)$.
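A direct Python sketch of this brute force (names are mine):

```python
def max_attainable(sets):
    """Largest |S| over attainable S with S != S_1 ∪ ... ∪ S_n."""
    total = set().union(*sets)          # T = S_1 ∪ ... ∪ S_n
    best = 0                            # the empty union is always attainable
    for i in total:                     # force i to be missing from S
        union = set()
        for s in sets:
            if i not in s:              # only sets avoiding i may be used
                union |= s
        best = max(best, len(union))
    return best
```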
[ "bitmasks", "brute force", "constructive algorithms", "greedy" ]
1,300
null
1882
C
Card Game
There are $n$ cards stacked in a deck. Initially, $a_{i}$ is written on the $i$-th card from the top. The value written on a card does not change. You will play a game. Initially your score is $0$. In each turn, you can do \textbf{one} of the following operations: - Choose an odd$^{\dagger}$ positive integer $i$, which is not greater than the number of cards left in the deck. Remove the $i$-th card from the top of the deck \textbf{and add the number written on the card to your score}. The remaining cards will be reindexed starting from the top. - Choose an even$^{\ddagger}$ positive integer $i$, which is not greater than the number of cards left in the deck. Remove the $i$-th card from the top of the deck. The remaining cards will be reindexed starting from the top. - End the game. You can end the game whenever you want, you \textbf{do not} have to remove all cards from the initial deck. What is the maximum score you can get when the game ends? $^{\dagger}$ An integer $i$ is odd, if there exists an integer $k$ such that $i = 2k + 1$. $^{\ddagger}$ An integer $i$ is even, if there exists an integer $k$ such that $i = 2k$.
Fix the topmost card you'll pick in the initial deck. Let's denote the $i$-th card in the initial deck as card $i$, and let the topmost card you'll pick be card $i$. Among the cards under card $i$ in the initial deck, you can choose all and only the cards with positive values at odd indices. Proof: here is the strategy. Before you pick card $i$, if a positive card (under card $i$ in the initial deck) at an odd index exists, choose it. Repeat this until all positive cards (under card $i$ in the initial deck) are at even indices. Then if you choose card $i$, the indices of all positive cards (under card $i$ in the initial deck) decrease by $1$ and become odd. Now choose them from bottom to top, so that each choice does not change the indices of the other positive cards. Denote $prf_{j}$ as the sum of positive numbers among $a_{j}, a_{j+1}, \cdots, a_{n}$, with $prf_{n+1} = 0$. Since $prf_{j} = prf_{j+1} + \max \left( 0, a_{j}\right)$, $prf$ can be calculated in $\mathcal{O}(n)$. You pick card $i$ at index $i$ (scoring $a_i$ only if $i$ is odd), and can pick all positive cards under card $i$ in the initial deck, so the maximum final score for this choice is $(i \% 2 == 1 ? a_{i} : 0) + prf_{i+1}$. The answer is $\max \left( (i \% 2 == 1 ? a_{i} : 0) + prf_{i+1} \, | \, 1 \le i \le n \right)$. There are lots of other solutions too.
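A Python sketch of this formula (names are mine; I also take a maximum with $0$, since ending the game without picking anything is allowed):

```python
def max_score(a):
    """Maximum score over all ways to play, including picking nothing."""
    n = len(a)
    prf = [0] * (n + 2)                    # prf[j] = sum of positives in a[j..n]
    for j in range(n, 0, -1):
        prf[j] = prf[j + 1] + max(0, a[j - 1])
    ans = 0                                # ending immediately scores 0
    for i in range(1, n + 1):              # card i is the topmost card picked
        take = a[i - 1] if i % 2 == 1 else 0
        ans = max(ans, take + prf[i + 1])
    return ans
```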
[ "brute force", "greedy" ]
1,500
null
1882
D
Tree XOR
You are given a tree with $n$ vertices labeled from $1$ to $n$. An integer $a_{i}$ is written on vertex $i$ for $i = 1, 2, \ldots, n$. You want to make all $a_{i}$ equal by performing some (possibly, zero) spells. Suppose you root the tree at some vertex. On each spell, you can select any vertex $v$ and any non-negative integer $c$. Then for all vertices $i$ in the subtree$^{\dagger}$ of $v$, replace $a_{i}$ with $a_{i} \oplus c$. The cost of this spell is $s \cdot c$, where $s$ is the number of vertices in the subtree. Here $\oplus$ denotes the bitwise XOR operation. Let $m_r$ be the minimum possible total cost required to make all $a_i$ equal, if vertex $r$ is chosen as the root of the tree. Find $m_{1}, m_{2}, \ldots, m_{n}$. $^{\dagger}$ Suppose vertex $r$ is chosen as the root of the tree. Then vertex $i$ belongs to the subtree of $v$ if the simple path from $i$ to $r$ contains $v$.
Try to solve the problem for a single root first. For any vertex $v$ which isn't the root, denote $par_{v}$ as the parent of $v$. Then the value of $a_{v} \oplus a_{par_{v}}$ changes only if we perform the operation on $v$. Since $x_{1} \oplus x_{2} \oplus \cdots \oplus x_{m} \le x_{1} + x_{2} + \cdots + x_{m}$ for any numbers $x_{1}, x_{2}, \cdots, x_{m}$, we can assume that at most one operation is performed on each vertex. Let's solve the problem for a single root. Denote the operation in the statement as $op(v, c)$, and the size of $v$'s subtree as $s_{v}$. The goal is to make $a_{v} \oplus a_{par_{v}} = 0$ for every non-root vertex $v$. This value changes only when we select $v$. To make $a_{v} \oplus a_{par_{v}}$ equal to $0$, we should perform $op \left( v, a_{v} \oplus a_{par_{v}} \right)$ (see Hint 3). Therefore, the answer will be $\sum_{i \neq root} s_{i} \times (a_{i} \oplus a_{par_{i}})$. Now our task is to calculate this for all roots. This can be done in lots of ways, and the following algorithm is one of them. Calculate the answer for $1$ as the root first in $\mathcal{O}(n)$. Now we will traverse the tree starting at vertex $1$ and keep updating the answer. If the root changes from $q$ to $r$ ($q$ and $r$ are adjacent), every vertex except $q$ and $r$ keeps the same parent and subtree size, so it contributes the same amount to the answer; we only need to consider $q$ and $r$ to calculate the change. The edge connecting $q$ and $r$ divides the tree into two parts, of sizes $X$ and $Y$. If the root changes from $q$ to $r$, then $Y \times (a_{q} \oplus a_{r})$ is subtracted from the answer and $X \times (a_{q} \oplus a_{r})$ is added. $X$ and $Y$ can be pre-calculated in $\mathcal{O}(n)$, so each update costs $\mathcal{O}(1)$. Since changing the root to an adjacent vertex costs $\mathcal{O}(1)$, the answers for all roots can be calculated in $\mathcal{O}(n)$.
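A Python sketch of the rerooting (vertices are 0-indexed here; names are mine). Crossing the edge $(par_u, u)$ changes that edge's coefficient from $s_u$ to $n - s_u$, a net change of $(n - 2 s_u) \cdot (a_u \oplus a_{par_u})$:

```python
def tree_xor_all_roots(n, a, edges):
    """m_r for every root r, computed by rerooting from vertex 0."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # iterative DFS from vertex 0: preorder, parents, subtree sizes
    par = [-1] * n
    order = []
    seen = [False] * n
    seen[0] = True
    stack = [0]
    while stack:
        u = stack.pop()
        order.append(u)
        for w in adj[u]:
            if not seen[w]:
                seen[w] = True
                par[w] = u
                stack.append(w)
    sz = [1] * n
    for u in reversed(order):           # children before parents
        if par[u] != -1:
            sz[par[u]] += sz[u]
    ans = [0] * n
    ans[0] = sum(sz[v] * (a[v] ^ a[par[v]]) for v in range(n) if par[v] != -1)
    for u in order[1:]:                 # par[u] is processed before u
        ans[u] = ans[par[u]] + (n - 2 * sz[u]) * (a[u] ^ a[par[u]])
    return ans
```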
[ "bitmasks", "dfs and similar", "dp", "greedy", "trees" ]
1,900
null
1882
E1
Two Permutations (Easy Version)
\textbf{This is the easy version of the problem. The difference between the two versions is that you do not have to minimize the number of operations in this version. You can make hacks only if both versions of the problem are solved.} You have two permutations$^{\dagger}$ $p_{1}, p_{2}, \ldots, p_{n}$ (of integers $1$ to $n$) and $q_{1}, q_{2}, \ldots, q_{m}$ (of integers $1$ to $m$). Initially $p_{i}=a_{i}$ for $i=1, 2, \ldots, n$, and $q_{j} = b_{j}$ for $j = 1, 2, \ldots, m$. You can apply the following operation on the permutations several (possibly, zero) times. In one operation, $p$ and $q$ will change according to the following three steps: - You choose integers $i$, $j$ which satisfy $1 \le i \le n$ and $1 \le j \le m$. - Permutation $p$ is partitioned into three parts using $p_i$ as a pivot: the left part is formed by elements $p_1, p_2, \ldots, p_{i-1}$ (this part may be empty), the middle part is the single element $p_i$, and the right part is $p_{i+1}, p_{i+2}, \ldots, p_n$ (this part may be empty). To proceed, swap the left and the right parts of this partition. Formally, after this step, $p$ will become $p_{i+1}, p_{i+2}, \ldots, p_{n}, p_{i}, p_{1}, p_{2}, \ldots, p_{i-1}$. The elements of the newly formed $p$ will be reindexed starting from $1$. - Perform the same transformation on $q$ with index $j$. Formally, after this step, $q$ will become $q_{j+1}, q_{j+2}, \ldots, q_{m}, q_{j}, q_{1}, q_{2}, \ldots, q_{j-1}$. The elements of the newly formed $q$ will be reindexed starting from $1$. Your goal is to simultaneously make $p_{i}=i$ for $i=1, 2, \ldots, n$, and $q_{j} = j$ for $j = 1, 2, \ldots, m$. Find any valid way to achieve the goal using at most $10\,000$ operations, or say that none exists. Please note that you \textbf{do not have to} minimize the number of operations. It can be proved that if it is possible to achieve the goal, then there exists a way to do so using at most $10\,000$ operations. 
$^{\dagger}$ A permutation of length $k$ is an array consisting of $k$ distinct integers from $1$ to $k$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($k=3$ but there is $4$ in the array).
Let's think about the two permutations independently. The goal is to construct a sequence of operations for each permutation such that the two sequences have the same parity of length. Try to sort a single permutation of length $N$ using at most $3N$ operations — it is always possible. If the permutation's length is odd, you can perform the operation at index $1$ for $N$ times and return to the same permutation. If the permutation's length is even, the parity of the inversion number changes with each operation. The hints above are the summary of the tutorial; please check them. First, let's do Hint 2. There are lots of ways to do this, and I'd like to introduce the one I thought of first. It is possible to swap two elements using $3$ operations. Let's denote the two elements as $x$ and $y$, and the permutation as $[[ A ] x [ B ] y [ C ]]$ ($[ A ], [ B ], [ C ]$ are subarrays). Then: Perform the operation at $x$. The permutation becomes $[[ B ] y [ C ] x [ A ]]$. Perform the operation at $y$. The permutation becomes $[[ C ] x [ A ] y [ B ]]$. Perform the operation at $x$. The permutation becomes $[[ A ] y [ B ] x [ C ]]$. Using this, we can sort a single permutation of length $N$ using at most $3N$ operations, since $N$ swaps suffice to sort the permutation. If this uses the same parity of operation counts for $p$ and $q$, the problem is solved, using at most $3 \max(m, n)$ operations. Else, if $n$ or $m$ is odd, we can equalize the parity by the method provided in Hint 3, using at most $4 \max(m, n)$ operations. Else, $n$ and $m$ are both even, and in this case it's impossible because of Hint 4. The overall time complexity is $\mathcal{O}(n + m)$ in this solution. Lastly, here is the proof of Hint 4. Proof: Consider a permutation of even length $N$. Denote the permutation as $[[ A ] x [ B ]]$, and the lengths of $[ A ]$ and $[ B ]$ as $n_{A}$ and $n_{B}$. Here $n_{A} + n_{B} = N-1$, so one of $n_{A}$ and $n_{B}$ is even and the other is odd.
The permutation becomes $[[ B ] x [ A ]]$ after the operation. First, the parity of the number of inversions between two elements of $[ A ]$, or two elements of $[ B ]$, doesn't change, because their internal order doesn't change. Second, the parity of the number of inversions between one element of $[ A ]$ and one element of $[ B ]$ doesn't change, because all $n_{A} \times n_{B}$ such pairs swap their relative order, and $n_{A} \times n_{B}$ is even. Third, the parity of the number of inversions between $x$ and one element of $[ A ]$ or $[ B ]$ changes, because all $n_{A} + n_{B}$ such pairs swap their relative order, and $n_{A} + n_{B}$ is odd. Adding these three together, we conclude that the parity of the inversion number must change. The text may look a bit complicated, but it will not be that hard if you work it out on your own :)
[ "brute force", "constructive algorithms", "greedy", "number theory" ]
2,400
null
1882
E2
Two Permutations (Hard Version)
\textbf{This is the hard version of the problem. The difference between the two versions is that you have to minimize the number of operations in this version. You can make hacks only if both versions of the problem are solved.} You have two permutations$^{\dagger}$ $p_{1}, p_{2}, \ldots, p_{n}$ (of integers $1$ to $n$) and $q_{1}, q_{2}, \ldots, q_{m}$ (of integers $1$ to $m$). Initially $p_{i}=a_{i}$ for $i=1, 2, \ldots, n$, and $q_{j} = b_{j}$ for $j = 1, 2, \ldots, m$. You can apply the following operation on the permutations several (possibly, zero) times. In one operation, $p$ and $q$ will change according to the following three steps: - You choose integers $i$, $j$ which satisfy $1 \le i \le n$ and $1 \le j \le m$. - Permutation $p$ is partitioned into three parts using $p_i$ as a pivot: the left part is formed by elements $p_1, p_2, \ldots, p_{i-1}$ (this part may be empty), the middle part is the single element $p_i$, and the right part is $p_{i+1}, p_{i+2}, \ldots, p_n$ (this part may be empty). To proceed, swap the left and the right parts of this partition. Formally, after this step, $p$ will become $p_{i+1}, p_{i+2}, \ldots, p_{n}, p_{i}, p_{1}, p_{2}, \ldots, p_{i-1}$. The elements of the newly formed $p$ will be reindexed starting from $1$. - Perform the same transformation on $q$ with index $j$. Formally, after this step, $q$ will become $q_{j+1}, q_{j+2}, \ldots, q_{m}, q_{j}, q_{1}, q_{2}, \ldots, q_{j-1}$. The elements of the newly formed $q$ will be reindexed starting from $1$. Your goal is to simultaneously make $p_{i}=i$ for $i=1, 2, \ldots, n$, and $q_{j} = j$ for $j = 1, 2, \ldots, m$. Find any way to achieve the goal \textbf{using the minimum number of operations possible}, or say that none exists. Please note that you \textbf{have to} minimize the number of operations. $^{\dagger}$ A permutation of length $k$ is an array consisting of $k$ distinct integers from $1$ to $k$ in arbitrary order. 
For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($k=3$ but there is $4$ in the array).
The solution is completely different from E1. A brilliant idea is required — the fact that the checker is implemented in linear time might be a hint. Find a way to turn the operation into a 'swap'. Add an extra character '$X$' in front of the permutation. The position of $X$ is not changed by the operation and always stays to the left of $p_{1}$. Then in each operation, the permutation changes as: $[X [ A ] c [ B ]] \rightarrow [X [ B ] c [ A ]]$. Now, let's consider the array made of $X$ and the permutation as circular. This is possible because $X$ always stays to the left of the first element, so it marks the start of the permutation. Then $[X [ B ] c [ A ]]$ is equivalent to $[c [ A ] X [ B ]]$, so the operation is: $[X [ A ] c [ B ]] \rightarrow [c [ A ] X [ B ]]$, which is the same as swapping $X$ and $c$. Now we need to calculate the minimum odd number of swaps and the minimum even number of swaps (of $X$ with any element), each turning $[X\,p_{1}\,p_{2}\,\cdots\,p_{n}]$ into one of $[X\,1\,2\,\cdots\,n]$, $[n\,X\,1\,2\,\cdots\,(n-1)]$, $[(n-1)\,n\,X\,1\,\cdots\,(n-2)]$, $\cdots$, $[1\,2\,\cdots\,n\,X]$. To calculate the minimum number of swaps required to turn $[X\,p_{1}\,p_{2}\,\cdots\,p_{n}]$ into a given array, first renumber the initial array to $[X\,1\,2\,\cdots\,n]$, then rename the given array under the same correspondence. Decompose the resulting permutation into cycles. Then the answer is (the sum of (size $+ 1$) over cycles which have size $\ge 2$ and don't contain $X$) $+$ ($X$'s cycle size $- 1$). This can be proven easily by counting the number of elements that move into their proper place in each operation. Calculate this for all of $[X\,1\,2\,\cdots\,n]$, $[n\,X\,1\,2\,\cdots\,(n-1)]$, $[(n-1)\,n\,X\,1\,\cdots\,(n-2)]$, $\cdots$, $[1\,2\,\cdots\,n\,X]$. Since the same array cannot be reached with two swap counts of different parity, we can achieve the goal by calculating the minimum odd number and the minimum even number of swaps for each target. The overall time complexity is $O(n^{2}+m^{2})$.
[ "constructive algorithms" ]
3,100
null
1883
A
Morning
You are given a four-digit pin code consisting of digits from $0$ to $9$ that needs to be entered. Initially, the cursor points to the digit $1$. In one second, you can perform exactly one of the following two actions: - Press the cursor to display the current digit, - Move the cursor to any adjacent digit. The image above shows the device you are using to enter the pin code. For example, for the digit $5$, the adjacent digits are $4$ and $6$, and for the digit $0$, there is only one adjacent digit, $9$. Determine the minimum number of seconds required to enter the given four-digit pin code.
Let's represent our pin code as digits $a$, $b$, $c$, $d$. Replace all $0$s with $10$ and notice that the answer is equal to $4 + |a - 1| + |b - a| + |c - b| + |d - c|$: four seconds for the four presses, plus the time to move the cursor between consecutive digits.
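This is a one-pass computation (the function name is mine; the pin is taken as a string of four digits):

```python
def min_seconds(pin):
    """Minimum seconds to enter a 4-digit pin on the device."""
    total = 4                      # one press per digit
    prev = 1                       # cursor starts at digit 1
    for ch in pin:
        d = 10 if ch == '0' else int(ch)   # 0 sits after 9, treat it as 10
        total += abs(d - prev)     # moves between adjacent digits
        prev = d
    return total
```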
[ "math" ]
800
null
1883
B
Chemistry
You are given a string $s$ of length $n$, consisting of lowercase Latin letters, and an integer $k$. You need to check if it is possible to remove \textbf{exactly} $k$ characters from the string $s$ in such a way that the remaining characters can be rearranged to form a palindrome. Note that you can reorder the remaining characters in any way. A palindrome is a string that reads the same forwards and backwards. For example, the strings "z", "aaa", "aba", "abccba" are palindromes, while the strings "codeforces", "reality", "ab" are not.
Let's remember under what conditions we can rearrange the letters of a word to form a palindrome. This can be done if the number of letters with odd occurrences is not greater than $1$. In our problem, it is sufficient to check that the number of letters with odd occurrences (denoted as $x$) is not greater than $k + 1$. Let's prove this fact. If $x > k + 1$, then it is definitely impossible to obtain the answer, because with $k$ operations we cannot make the number of letters with odd occurrences not greater than $1$. On the other hand, we can simply remove the character with an odd number of occurrences on each removal iteration and decrease the number of odd occurrences. If there are no such characters, we can choose any character and remove it, thus having $1$ character with an odd occurrence.
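The check itself is a one-liner over letter counts (a sketch; the function name is mine):

```python
from collections import Counter

def can_make_palindrome(s, k):
    """True if removing exactly k characters (k < len(s)) leaves a
    multiset of letters that can be rearranged into a palindrome."""
    odd = sum(1 for v in Counter(s).values() if v % 2 == 1)
    return odd <= k + 1
```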
[ "strings" ]
900
null
1883
C
Raspberries
You are given an array of integers $a_1, a_2, \ldots, a_n$ and a number $k$ ($2 \leq k \leq 5$). In one operation, you can do the following: - Choose an index $1 \leq i \leq n$, - Set $a_i = a_i + 1$. Find the minimum number of operations needed to make the product of all the numbers in the array $a_1 \cdot a_2 \cdot \ldots \cdot a_n$ divisible by $k$.
Let's notice that if $k = 2, 3, 5$, since these are prime numbers, the product of the numbers will be divisible by $k$ if any number in the array is divisible by $k$. From this, we can conclude that it is advantageous to perform operations only on one number. If $k = 4$, we have several cases, and we need to take the minimum among them. Again, we can perform operations on one number and make it divisible by $4$, or we need to perform operations such that there are two even numbers in the array (there is a special case when $n = 1$). To have two even numbers, let's count the number of even numbers in the original array as $cnt$, and if $2 \leq n$, we can say that the answer is $\max(0, 2 - cnt)$.
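The case analysis fits in a few lines of Python (names are mine):

```python
def min_ops(n, a, k):
    """Minimum +1 operations so that the product is divisible by k, 2<=k<=5."""
    if k in (2, 3, 5):                      # prime: push one element to 0 mod k
        return min((-x) % k for x in a)
    # k == 4: either make one element divisible by 4,
    # or (when n >= 2) ensure at least two even elements
    best = min((-x) % 4 for x in a)
    if n >= 2:
        evens = sum(1 for x in a if x % 2 == 0)
        best = min(best, max(0, 2 - evens))
    return best
```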
[ "dp", "math" ]
1,000
null
1883
D
In Love
Initially, you have an empty multiset of segments. You need to process $q$ operations of two types: - $+$ $l$ $r$ — Add the segment $(l, r)$ to the multiset, - $-$ $l$ $r$ — Remove \textbf{exactly} one segment $(l, r)$ from the multiset. It is guaranteed that this segment exists in the multiset. After each operation, you need to determine if there exists a pair of segments in the multiset that do not intersect. A pair of segments $(l, r)$ and $(a, b)$ do not intersect if there does not exist a point $x$ such that $l \leq x \leq r$ and $a \leq x \leq b$.
Let's learn how to solve the problem for a fixed set. The claim is that if the answer exists, we can take the segment with the minimum right boundary and the maximum left boundary (let's denote these boundaries as $r$ and $l$). Therefore, if $r < l$, it is obvious that this pair of segments is suitable for us. Otherwise, all pairs of segments intersect because they have common points in the range $l \ldots r$. Now let's maintain all our operations. For this, we can simply store a multiset of left and right boundaries. With the help of this multiset, we can find the minimum and maximum elements of any of these sets in $O(1)$. The addition and removal operations are also supported by this container.
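A Python sketch of the same bookkeeping, using heaps with lazy deletion in place of C++ std::multiset (names are mine). Note that if the minimum right boundary and the maximum left boundary come from the same segment, the test $r < l$ fails automatically, so no special casing is needed beyond requiring at least two segments:

```python
import heapq
from collections import Counter

class LazyHeap:
    """Multiset supporting add/remove and querying one extreme."""
    def __init__(self, sign):
        self.sign = sign                  # +1: query min, -1: query max
        self.heap = []
        self.pending = Counter()          # lazily deleted values
        self.size = 0

    def add(self, x):
        heapq.heappush(self.heap, self.sign * x)
        self.size += 1

    def remove(self, x):
        self.pending[self.sign * x] += 1
        self.size -= 1

    def top(self):
        while self.pending[self.heap[0]]:  # drop stale entries
            self.pending[heapq.heappop(self.heap)] -= 1
        return self.sign * self.heap[0]

def process(ops):
    """ops: ('+'|'-', l, r) triples; returns 'YES'/'NO' after each one."""
    max_l = LazyHeap(-1)
    min_r = LazyHeap(+1)
    out = []
    for t, l, r in ops:
        if t == '+':
            max_l.add(l); min_r.add(r)
        else:
            max_l.remove(l); min_r.remove(r)
        # a disjoint pair exists iff min right end < max left end
        ok = max_l.size >= 2 and min_r.top() < max_l.top()
        out.append('YES' if ok else 'NO')
    return out
```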
[ "data structures", "greedy" ]
1,500
null
1883
E
Look Back
You are given an array of integers $a_1, a_2, \ldots, a_n$. You need to make it non-decreasing with the minimum number of operations. In one operation, you do the following: - Choose an index $1 \leq i \leq n$, - Set $a_i = a_i \cdot 2$. An array $b_1, b_2, \ldots, b_n$ is non-decreasing if $b_i \leq b_{i+1}$ for all $1 \leq i < n$.
Let's come up with a naive solution — we could go from left to right and multiply the element at index $i$ by $2$ until it is greater than or equal to the previous element. But this solution is too slow, as the numbers can reach the order of $2^n$ during the operations. Let's not multiply our numbers by $2$ explicitly, but represent each element as the product $a_i \cdot 2^{x_i}$, where $x_i$ is the number of times we doubled it. Say we have calculated the value of $x_{i - 1}$, and now we want to calculate the smallest $x_i \ge 0$ with $a_i \cdot 2^{x_i} \geq a_{i-1} \cdot 2^{x_{i-1}}$. We have two cases: If $a_{i - 1} \leq a_i$: start from $x_i = x_{i - 1}$ and decrease $x_i$ by $1$ while $x_i > 0$ and the inequality still holds after the decrease — note that each decrease doubles the factor by which $a_i$ must exceed $a_{i-1}$. If $a_{i - 1} > a_i$: start from $x_i = x_{i - 1}$ and increase $x_i$ by $1$ while the inequality does not yet hold. Note that we do not change the values of the array $a$ itself, and we keep the numbers compared in these loops bounded by maintaining them in additional variables, so no large numbers appear. The answer is the sum of all $x_i$.
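A Python sketch of the exponent bookkeeping (names are mine; the auxiliary variables `m` and `v` keep all comparisons bounded by roughly twice the largest element):

```python
def min_doublings(a):
    """Minimum number of x2 operations to make a non-decreasing."""
    total = 0
    x_prev = 0                      # exponent chosen for the previous element
    for i in range(1, len(a)):
        x = x_prev
        if a[i] >= a[i - 1]:
            # try to lower the exponent: each step doubles the gap needed
            m = a[i - 1]
            while x > 0 and a[i] >= m * 2:
                m *= 2
                x -= 1
        else:
            # raise the exponent until a[i] * 2^k catches up with a[i-1]
            v = a[i]
            while v < a[i - 1]:
                v *= 2
                x += 1
        total += x
        x_prev = x
    return total
```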
[ "bitmasks", "greedy" ]
1,700
null
1883
F
You Are So Beautiful
You are given an array of integers $a_1, a_2, \ldots, a_n$. Calculate the number of \textbf{subarrays} of this array $1 \leq l \leq r \leq n$, such that: - The array $b = [a_l, a_{l+1}, \ldots, a_r]$ occurs in the array $a$ as a \textbf{subsequence} exactly once. In other words, there is exactly one way to select a set of indices $1 \leq i_1 < i_2 < \ldots < i_{r - l + 1} \leq n$, such that $b_j = a_{i_j}$ for all $1 \leq j \leq r - l + 1$.
Note that a subarray suits us if $a_l$ is the leftmost occurrence of the number $a_l$ in the array and $a_r$ is the rightmost occurrence of the number $a_r$ in the array. Let's create an array $b_r$ filled with zeros and set $b_r = 1$ if $a_r$ is the rightmost occurrence of the number $a_r$ in the array (this can be done using sets or dictionaries). Now we need to consider all suitable left boundaries and see how many suitable right boundaries we have on the suffix, either by precomputing a suffix sum or by simply maintaining a variable while traversing from left to right.
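A Python sketch with a single right-to-left pass maintaining the suffix count (names are mine):

```python
def count_subarrays(a):
    """Count subarrays occurring exactly once as a subsequence of a."""
    n = len(a)
    first = [False] * n                 # a[i] is the leftmost occurrence
    seen = set()
    for i, x in enumerate(a):
        if x not in seen:
            seen.add(x)
            first[i] = True
    ans = 0
    suffix_last = 0                     # rightmost occurrences in a[i..]
    seen = set()
    for i in range(n - 1, -1, -1):
        if a[i] not in seen:
            seen.add(a[i])
            suffix_last += 1            # a[i] is the rightmost occurrence
        if first[i]:
            ans += suffix_last          # any such r >= i closes a good pair
    return ans
```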
[ "data structures" ]
1,400
null
1883
G2
Dances (Hard Version)
\textbf{This is the hard version of the problem. The only difference is that in this version $m \leq 10^9$.} You are given two arrays of integers $a_1, a_2, \ldots, a_n$ and $b_1, b_2, \ldots, b_n$. Before applying any operations, you can reorder the elements of each array as you wish. Then, in one operation, you will perform both of the following actions, if the arrays are not empty: - Choose any element from array $a$ and remove it (all remaining elements are shifted to a new array $a$), - Choose any element from array $b$ and remove it (all remaining elements are shifted to a new array $b$). Let $k$ be the final size of both arrays. You need to find the minimum number of operations required to satisfy $a_i < b_i$ for all $1 \leq i \leq k$. This problem was too easy, so the problem author decided to make it more challenging. You are also given a positive integer $m$. Now, you need to find the sum of answers to the problem for $m$ pairs of arrays $(c[i], b)$, where $1 \leq i \leq m$. Array $c[i]$ is obtained from $a$ as follows: - $c[i]_1 = i$, - $c[i]_j = a_j$, for $2 \leq j \leq n$.
Let's learn how to solve the problem for a fixed value of $a_1$. Notice that we can perform a binary search on the answer. Let's learn how to check whether we can remove $k$ elements from both arrays so that $a_i < b_i$ holds for the remaining elements. It is optimal to sort both arrays, remove the $k$ largest elements from the first array and the $k$ smallest elements from the second array, and then simply check whether the condition holds for our pair of arrays. Thus, for a fixed value of $a_1$, we can solve the problem in $O(n \log n)$. Let $f(i)$ be the answer to the problem for $a_1 = i$. We make the following observation — there exists a value $x$ such that: $f(1) = f(2) = \ldots = f(x) = f(x + 1) - 1 = f(x + 2) - 1 = \ldots = f(\infty) - 1$. Indeed, changing one element of array $a$ cannot change the answer by more than $1$, and increasing $a_1$ cannot decrease it. Then, we can use binary search to find this value $x$. The overall complexity is $O(n \log n \log m)$.
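The two nested binary searches can be sketched as follows (names are mine; here $a$ and $b$ both have length $n$, and only $a_2, \ldots, a_n$ are actually used):

```python
import bisect

def sum_of_answers(n, m, a, b):
    """Sum of answers for c[i] = [i] + a[1:], over i = 1..m."""
    rest = sorted(a[1:])
    bs = sorted(b)

    def f(first):
        c = rest[:]                       # sorted copy with `first` inserted
        bisect.insort(c, first)
        def ok(k):                        # feasible after k removals?
            return all(c[j] < bs[j + k] for j in range(n - k))
        lo, hi = 0, n                     # smallest feasible k
        while lo < hi:
            mid = (lo + hi) // 2
            if ok(mid):
                hi = mid
            else:
                lo = mid + 1
        return lo

    f1 = f(1)
    if f(m) == f1:                        # f is constant on [1, m]
        return m * f1
    lo, hi = 1, m                         # largest x with f(x) == f1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if f(mid) == f1:
            lo = mid
        else:
            hi = mid - 1
    return m * f1 + (m - lo)
```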
[ "binary search", "greedy", "sortings", "two pointers" ]
1,900
null
1884
A
Simple Design
A positive integer is called $k$-beautiful, if the digit sum of the decimal representation of this number is divisible by $k^{\dagger}$. For example, $9272$ is $5$-beautiful, since the digit sum of $9272$ is $9 + 2 + 7 + 2 = 20$. You are given two integers $x$ and $k$. Please find the smallest integer $y \ge x$ which is $k$-beautiful. $^{\dagger}$ An integer $n$ is divisible by $k$ if there exists an integer $m$ such that $n = k \cdot m$.
Let $y$ be the answer to the problem. Note that $y - x \leq 18$, so the problem can be solved by brute force. We will prove this fact. Among any $19$ consecutive numbers there is a full block of $10$ consecutive numbers that differ only in the last digit (from a number ending in $0$ to the one ending in $9$). If the digit sum of the number ending in $0$ is $s$, then this block realizes the digit sums $s, s + 1, \ldots, s + 9$. These are exactly $10$ consecutive values, and for each $k$ from $1$ to $10$, one of any $10$ consecutive values is divisible by $k$.
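The brute force is immediate (the function name is mine):

```python
def smallest_beautiful(x, k):
    """Smallest y >= x whose digit sum is divisible by k."""
    y = x
    while sum(map(int, str(y))) % k != 0:   # at most ~18 iterations
        y += 1
    return y
```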
[ "brute force", "greedy", "math" ]
800
null
1884
B
Haunted House
You are given a number in binary representation consisting of exactly $n$ bits, possibly, with leading zeroes. For example, for $n = 5$ the number $6$ will be given as $00110$, and for $n = 4$ the number $9$ will be given as $1001$. Let's fix some integer $i$ such that $1 \le i \le n$. In one operation you can swap any two adjacent bits in the binary representation. Your goal is to find the smallest number of operations you are required to perform to make the number divisible by $2^i$, or say that it is impossible. Please note that for each $1 \le i \le n$ you are solving the problem independently.
In order for a number to be divisible by $2^i$, the last $i$ bits of its binary representation must be equal to $0$. For convenience, let's reverse the binary representation of the number, so that our operations aim to zero out the first $i$ bits. Let $zero$ be the number of bits equal to $0$ in the original string. If $i > zero$, the answer is clearly $-1$, since our operations do not change the number of zeros and ones in the number. Otherwise, the answer exists, and we will learn how to calculate the minimum number of operations. Consider all $j \leq i$ with $s_j = 1$. We need to remove these ones from our prefix by replacing them with the nearest zeros after position $i$. We will traverse the string from left to right, keeping track of the number of ones in our prefix (denoted as $cnt$) and the sum of their positions (denoted as $sum\_one$). We also maintain the sum of the positions of the nearest $cnt$ zeros after position $i$ (denoted as $sum\_zero$); this can be done using a pointer. The answer is $sum\_zero - sum\_one$. This is a lower bound, and it can be shown greedily how to achieve it with exactly this number of operations. First, let's move all the ones to the end of our prefix: we place the rightmost one at position $i$, and so on. This takes $\sum\limits_{j=i - cnt + 1}^i (j - pos_j)$ operations, where $pos_j$ is the position of the corresponding one; the total is $-sum\_one + \sum\limits_{j = i - cnt + 1}^i j$. Now we want to place zeros at these positions, and we do this greedily $cnt$ times, performing $\sum\limits_{j=i - cnt + 1}^i (pos_j - j)$ operations, where $pos_j$ is the position of the corresponding zero; the total is $sum\_zero - \sum\limits_{j = i - cnt + 1}^i j$. Our answer is the sum of these two, which is $sum\_zero - sum\_one$.
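A Python sketch (names are mine). Instead of a moving pointer it uses prefix sums over the sorted zero positions — equivalent bookkeeping: in the reversed string, the first $i - cnt$ zeros lie inside the prefix, so the $cnt$ nearest zeros after it are exactly `zeros[i-cnt : i]`:

```python
def min_swaps_all(s):
    """For i = 1..n: min adjacent swaps so the number (s[0] = most
    significant bit) becomes divisible by 2^i, or -1 if impossible."""
    t = s[::-1]                         # now the first i chars must become '0'
    zeros = [p for p, c in enumerate(t) if c == '0']
    pz = [0]
    for p in zeros:
        pz.append(pz[-1] + p)           # pz[k] = sum of first k zero positions
    ans = []
    cnt = 0                             # ones in t[:i]
    sum_one = 0                         # sum of their positions
    for i in range(1, len(t) + 1):
        if t[i - 1] == '1':
            cnt += 1
            sum_one += i - 1
        if i > len(zeros):
            ans.append(-1)              # not enough zeros in the whole string
        else:
            ans.append((pz[i] - pz[i - cnt]) - sum_one)
    return ans
```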
[ "binary search", "greedy", "math", "two pointers" ]
1,100
null
1884
C
Medium Design
The array $a_1, a_2, \ldots, a_m$ is initially filled with zeroes. You are given $n$ pairwise distinct segments $1 \le l_i \le r_i \le m$. You have to select an arbitrary subset of these segments (in particular, you may select an empty set). Next, you do the following: - For each $i = 1, 2, \ldots, n$, if the segment $(l_i, r_i)$ has been selected to the subset, then for each index $l_i \le j \le r_i$ you increase $a_j$ by $1$ (i. e. $a_j$ is replaced by $a_j + 1$). If the segment $(l_i, r_i)$ has not been selected, the array does not change. - Next (after processing all values of $i = 1, 2, \ldots, n$), you compute $\max(a)$ as the maximum value among all elements of $a$. Analogously, compute $\min(a)$ as the minimum value. - Finally, the cost of the selected subset of segments is declared as $\max(a) - \min(a)$. Please, find the maximum cost among all subsets of segments.
Let $x$ be the position of the maximum element of the array $a$ in the optimal answer. Notice that in this case, we can select all the segments with $l \leq x \leq r$: if the position of the minimum element lies outside such a segment, adding it increases the answer by $1$, and if it lies inside, adding it does not worsen the answer. From this, it follows that the minimum will occur either at position $1$ or at position $m$. We can then consider the segments that do not contain position $1$ (and similarly for $m$) and find the position that is covered by the largest number of such segments. This can be done using a line sweep algorithm. A segment $(l, r)$ corresponds to two events: $(l, 0)$ - the segment opens, and $(r, 1)$ - the segment closes. We sort these events and iterate through them, keeping track of the number of open segments at each moment. The maximum value among these quantities is what we need. The solution works in $O(n \log n)$ time.
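The sweep above can be sketched in a few lines of Python (function names are mine; note that sorting the $(position, kind)$ pairs with open $= 0$ and close $= 1$ automatically processes opens before closes at equal coordinates, which is what inclusive segments require):

```python
def max_cover(segments):
    """Line sweep: maximum number of segments covering a single point."""
    events = []
    for l, r in segments:
        events.append((l, 0))  # open event; 0 sorts before 1 at equal position
        events.append((r, 1))  # close event
    events.sort()
    best = cur = 0
    for _, kind in events:
        if kind == 0:
            cur += 1
            best = max(best, cur)
        else:
            cur -= 1
    return best

def solve(segments, m):
    # Run the sweep twice: once on segments avoiding position 1
    # (minimum sits at 1), once on segments avoiding position m.
    left = [s for s in segments if s[0] > 1]
    right = [s for s in segments if s[1] < m]
    return max(max_cover(left), max_cover(right))
```

For instance, with segments $(1,2), (2,3), (3,5), (1,4)$ and $m = 5$, the segments avoiding position $5$ all cover position $2$, giving cost $3$.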
[ "brute force", "data structures", "dp", "greedy", "sortings" ]
1,700
null
1884
D
Counting Rhyme
You are given an array of integers $a_1, a_2, \ldots, a_n$. A pair of integers $(i, j)$, such that $1 \le i < j \le n$, is called good, if there \textbf{does not exist} an integer $k$ ($1 \le k \le n$) such that $a_i$ is divisible by $a_k$ and $a_j$ is divisible by $a_k$ at the same time. Please, find the number of good pairs.
Note that if $a_i$ is divisible by $a_k$ and $a_j$ is divisible by $a_k$, then $\gcd(a_i, a_j)$ is divisible by $a_k$. Let's calculate, for each $g$, the number of pairs from the array whose $\gcd$ is equal to $g$. To do this, we will create $dp_g$ - the number of pairs in the array with $\gcd$ equal to $g$. For this, we will need an array of counts - $cnt_x$, the number of occurrences of the number $x$ in the array. To calculate $dp_g$, we need to know how many numbers are divisible by $g$, specifically the sum $s = cnt_g + cnt_{2g} + \ldots + cnt_{kg}$, where $kg \leq n$. The number of pairs in which both elements are divisible by $g$ is $\frac{s \cdot (s - 1)}{2}$. However, not all of them have $\gcd$ equal to exactly $g$; it can also be a larger multiple of $g$. Therefore, $dp_g = \frac{s \cdot (s - 1)}{2} - dp_{2g} - \ldots - dp_{kg}$. This dynamic programming can be calculated in $O(n \log n)$ time. Now let's understand which $\gcd$ values are not suitable. If there is a number $x$ in the array (i.e., $cnt_x > 0$), then all $g$ that are multiples of $x$ are not suitable. This can also be calculated in $O(n \log n)$ time. Finally, we just need to sum up all the $dp_g$ values where the number $g$ is not divisible by any number in the array.
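A direct Python sketch of this dp (function name is mine; the harmonic-sum loops over multiples give the claimed $O(n \log n)$):

```python
def count_good_pairs(a):
    maxv = max(a)
    cnt = [0] * (maxv + 1)
    for x in a:
        cnt[x] += 1
    # dp[g] = number of pairs with gcd exactly g, computed from large g down
    dp = [0] * (maxv + 1)
    for g in range(maxv, 0, -1):
        s = sum(cnt[m] for m in range(g, maxv + 1, g))
        dp[g] = s * (s - 1) // 2 - sum(dp[m] for m in range(2 * g, maxv + 1, g))
    # bad[g]: some array element divides g, so gcd = g makes the pair not good
    bad = [False] * (maxv + 1)
    for x in range(1, maxv + 1):
        if cnt[x]:
            for m in range(x, maxv + 1, x):
                bad[m] = True
    return sum(dp[g] for g in range(1, maxv + 1) if not bad[g])
```

For $a = [2, 4, 3]$ the pair $(2, 4)$ is ruled out by $a_k = 2$, while $(2, 3)$ and $(4, 3)$ are good, so the answer is $2$.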
[ "dp", "math", "number theory" ]
2,100
null
1884
E
Hard Design
Consider an array of integers $b_0, b_1, \ldots, b_{n-1}$. Your goal is to make all its elements equal. To do so, you can perform the following operation several (possibly, zero) times: - Pick a pair of indices $0 \le l \le r \le n-1$, then for each $l \le i \le r$ increase $b_i$ by $1$ (i. e. replace $b_i$ with $b_i + 1$). - After performing this operation you receive $(r - l + 1)^2$ coins. The value $f(b)$ is defined as a pair of integers $(cnt, cost)$, where $cnt$ is the \textbf{smallest} number of operations required to make all elements of the array equal, and $cost$ is the \textbf{largest} total number of coins you can receive among all possible ways to make all elements equal within $cnt$ operations. In other words, first, you need to minimize the number of operations, second, you need to maximize the total number of coins you receive. You are given an array of integers $a_0, a_1, \ldots, a_{n-1}$. Please, find the value of $f$ for all cyclic shifts of $a$. Formally, for each $0 \le i \le n-1$ you need to do the following: - Let $c_j = a_{(j + i) \pmod{n}}$ for each $0 \le j \le n-1$. - Find $f(c)$. Since $cost$ can be very large, output it modulo $(10^9 + 7)$. Please note that under a fixed $cnt$ you need to maximize the total number of coins $cost$, not its remainder modulo $(10^9 + 7)$.
Let's learn how to solve the problem for a given array. Notice that we want to make all elements equal to the maximum value in the array. For convenience, let's solve a similar problem that is easier to implement. We'll create an array $b_i = \max(a) - a_i$. Then, instead of performing an addition operation, we'll perform an operation that decreases a range by $1$, aiming to make all elements of the array equal to $0$. Let $pos$ be the position of the minimum element in the array $b$. We can apply the operation $b_{pos}$ times on the entire array, and then recursively solve the problem for the left and right parts relative to the position $pos$. However, this doesn't help us solve the problem for all shifts. Let's find the nearest element less than or equal to each element on the left (denoted as $l_i$) and the nearest element less than each element on the right (denoted as $r_i$) - this can be done using linear algorithms. Then, we can observe that when we consider the subarray $l_i + 1 \ldots r_i - 1$ in our recursive solution, we add to the answer a value equal to $b_i - \max(b_{l_i}, b_{r_i})$. We have found a solution to the problem in $O(n)$ time for a fixed array $a$. If we consider all shifts of the array, the nearest elements on the left/right either remain unchanged or simply do not exist, in which case we consider their values to be $0$. Therefore, if we fix the position $i$ (and want to determine how much it affects the answer for each shift of the array), we have three shift ranges for which we add different values. Hence, we have $O(n)$ ranges on which we need to add certain values. Since we don't have to make these additions immediately, we can create an array $add_i$ - the value that needs to be added to all elements starting from position $i$. Then, adding the number $x$ on the range $l \ldots r$ changes this array as follows: $add_l = add_l + x$, $add_{r + 1} = add_{r + 1} - x$. The answer to the problem is $ans_i = \sum\limits_{j=1}^i add_j$. 
This subproblem can be solved in $O(n)$ time. Next, we need to learn how to calculate the total cost. The approach will be similar, but now we need to add a more complex function. If both nearest elements are within a shift range, we will add $(r_i - l_i - 1)^2 \cdot (b_i - \max(b_{l_i}, b_{r_i}))$ - a fixed number. If there is no nearest element on the left/right, we will add a value of the form $(r_i - s + 1)^2 \cdot (b_i - b_{r_i})$ or $(s + n - i + 1)^2 \cdot (b_i - b_{l_i})$, where $s$ is the starting index of the shift. Both cases can be solved similarly. If we expand the brackets, we get an expression of the form $a \cdot s^2 + b \cdot s + c$, where $a, b, c$ are known coefficients. Let's represent the answer for a fixed shift in this form; then we need to find the value of each coefficient, which is simply adding a fixed number on a range, as we discussed in the previous paragraph. This can also be solved in $O(n)$ time.
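The range-addition trick used for the $add$ array is standard and worth seeing in isolation (a minimal sketch; the function name and 0-based inclusive ranges are my conventions):

```python
def apply_range_adds(n, updates):
    """Apply (l, r, x) range additions to a zero array of length n via a
    difference array: add[l] += x, add[r + 1] -= x, then prefix sums."""
    add = [0] * (n + 1)
    for l, r, x in updates:
        add[l] += x
        add[r + 1] -= x
    out, cur = [], 0
    for i in range(n):
        cur += add[i]          # prefix sum recovers the final value at i
        out.append(cur)
    return out
```

Each update costs $O(1)$ and the final pass costs $O(n)$, which is exactly why the editorial can afford $O(n)$ range additions per coefficient.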
[ "greedy", "implementation", "math" ]
2,800
null
1886
A
Sum of Three
Monocarp has an integer $n$. He wants to represent his number as a sum of three \textbf{distinct} positive integers $x$, $y$, and $z$. Additionally, Monocarp wants none of the numbers $x$, $y$, and $z$ to be divisible by $3$. Your task is to help Monocarp to find any valid triplet of distinct positive integers $x$, $y$, and $z$, or report that such a triplet does not exist.
You can solve this problem with some case analysis, but I'll describe another solution. Suppose we want to iterate on every $x$, and then on every $y$, and then calculate $z$ and check that everything's OK. Obviously, this is too slow. But we can speed this up: we don't have to iterate on all possible values of $x$ and $y$. Suppose $x < y < z$; then, if $x \ge 4$, we can decrease $x$ by $3$ and increase $z$ by $3$, until $x$ becomes either $1$ or $2$. The same goes for $y$: if $y \ge 7$, we can decrease it by $3$ and increase $z$ by $3$, until $y < 7$. So, we only have to iterate on $x$ up to $2$, and on $y$ up to $5$. This yields a solution in $O(1)$. You don't have to iterate on $x$ up to exactly $2$ and on $y$ up to exactly $5$, just use an upper bound for which you don't check too many possible triples of $x$, $y$ and $z$.
[ "brute force", "constructive algorithms", "math" ]
800
#include <bits/stdc++.h> using namespace std; int n; inline void read() { cin >> n; } inline void solve() { for (int x = 1; x <= 20; x++) { for (int y = 1; y <= 20; y++) { if (x + y >= n || x == y) continue; int z = n - x - y; if (z == x || z == y) continue; if (x % 3 == 0 || y % 3 == 0 || z % 3 == 0) { continue; } puts("YES"); cout << x << ' ' << y << ' ' << z << endl; return; } } puts("NO"); } int main () { int t; cin >> t; while (t--){ read(); solve(); } }
1886
B
Fear of the Dark
Monocarp tries to get home from work. He is currently at the point $O = (0, 0)$ of a two-dimensional plane; his house is at the point $P = (P_x, P_y)$. Unfortunately, it is late in the evening, so it is very dark. Monocarp is afraid of the darkness. He would like to go home along a path illuminated by something. Thankfully, there are two lanterns, located in the points $A = (A_x, A_y)$ and $B = (B_x, B_y)$. You can choose any non-negative number $w$ and set the power of \textbf{both} lanterns to $w$. If a lantern's power is set to $w$, it illuminates a circle of radius $w$ centered at the lantern location (including the borders of the circle). You have to choose the minimum non-negative value $w$ for the power of the lanterns in such a way that \textbf{there is a path from the point $O$ to the point $P$ which is completely illuminated}. You may assume that the lanterns don't interfere with Monocarp's movement. \begin{center} {\small The picture for the first two test cases} \end{center}
There are only two major cases: both points $O$ and $P$ lie inside the same circle, or the point $O$ lies inside one of the circles and $P$ lies inside the other circle. Let's denote the distance between the points $P$ and $Q$ as $d(P, Q)$. Let's look at the first case, when the points $O$ and $P$ lie inside the circle centered at $A$. In that case, inequalities $d(O, A) \le R$ and $d(P, A) \le R$ must be satisfied. Therefore, the minimum possible radius for that case is equal to $\max(d(O, A), d(P, A))$. Similarly, for the circle centered at $B$, the minimum possible radius for that case is equal to $\max(d(O, B), d(P, B))$. Let's look at the second case, when the point $O$ lies inside the circle centered at $A$ and the point $P$ lies inside the circle centered at $B$. In that case, inequalities $d(O, A) \le R$ and $d(P, B) \le R$ must be satisfied. But there is one extra constraint: the circles must intersect, because there should be an illuminated path from one circle to another. This adds one more inequality - $d(A, B) \le 2R$. Therefore, the minimum possible radius for that case is equal to $\max\left(d(O, A), d(P, B), \frac{d(A, B)}{2}\right)$. Similarly, when $O$ lies inside the circle centered at $B$ and $P$ lies inside the circle centered at $A$, the minimum possible radius for that case is equal to $\max\left(d(O, B), d(P, A), \frac{d(A, B)}{2}\right)$. So the answer to the problem is the minimum among the aforementioned cases.
[ "binary search", "geometry", "math" ]
1,200
#include <bits/stdc++.h> using namespace std; int main() { auto dist = [](int x1, int y1, int x2, int y2) { return sqrt((x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2)); }; int t; cin >> t; while (t--) { int px, py, ax, ay, bx, by; cin >> px >> py >> ax >> ay >> bx >> by; double pa = dist(px, py, ax, ay), pb = dist(px, py, bx, by); double oa = dist(0, 0, ax, ay), ob = dist(0, 0, bx, by); double ab = dist(ax, ay, bx, by); double ans = 1e9; ans = min(ans, max(pa, oa)); ans = min(ans, max(pb, ob)); ans = min(ans, max({ab / 2, pa, ob})); ans = min(ans, max({ab / 2, pb, oa})); cout << setprecision(10) << fixed << ans << '\n'; } }
1886
C
Decreasing String
Recall that string $a$ is lexicographically smaller than string $b$ if $a$ is a prefix of $b$ (and $a \ne b$), or there exists an index $i$ ($1 \le i \le \min(|a|, |b|)$) such that $a_i < b_i$, and for any index $j$ ($1 \le j < i$) $a_j = b_j$. Consider a sequence of strings $s_1, s_2, \dots, s_n$, each consisting of lowercase Latin letters. String $s_1$ is given explicitly, and all other strings are generated according to the following rule: to obtain the string $s_i$, a character is removed from string $s_{i-1}$ in such a way that string $s_i$ is lexicographically minimal. For example, if $s_1 = \mathrm{dacb}$, then string $s_2 = \mathrm{acb}$, string $s_3 = \mathrm{ab}$, string $s_4 = \mathrm{a}$. After that, we obtain the string $S = s_1 + s_2 + \dots + s_n$ ($S$ is the concatenation of all strings $s_1, s_2, \dots, s_n$). You need to output the character in position $pos$ of the string $S$ (i. e. the character $S_{pos}$).
Let's analyze in which order the characters should be removed from the given string. Suppose we want to remove a character so that the resulting string is lexicographically smallest. We can show that the best option is to find the leftmost pair of adjacent characters $S_i, S_{i+1}$ such that $S_i > S_{i+1}$: removing $S_i$ will decrease the character on that position; if you remove some character to the right of $S_i$, the $i$-th character won't decrease; and if you remove some character to the left of $S_i$, you might accidentally increase some character before the $i$-th position (we chose the leftmost such pair, so the prefix until the $i$-th character is non-decreasing). If there is no such pair of adjacent characters, you should remove the last character of the string. Unfortunately, implementing it naively is too slow (the solution will work in $O(n^2)$). Thankfully, there is a way to determine the order in which all characters are removed in $O(n)$. Maintain a stack of characters (initially empty) and iterate over the string from left to right. When you encounter a character, if the stack is empty or the character on top of the stack is not greater than the current one, you don't have to remove anything, just push the new character onto the stack. But if the character on top of the stack is greater than the new one, then we have found a pair of adjacent characters which meets the condition; so, the character on top of the stack is the next one to remove. Pop the topmost element of the stack. Note that we don't push the new character onto the stack yet, because the next element on top of the stack might still be greater - in that case, it is the next character to remove (and we pop it), and so on. In the end, the characters in the stack will be non-decreasing from the bottom of the stack to the top, and we should remove them from top to bottom. That's how we can find the order in which the characters are removed. 
The rest is quite easy: locate in which of the strings $s_1, s_2, \dots, s_n$ the position $pos$ occurs, remove the required number of characters, and print the corresponding character of the resulting string. A minor detail which can make the implementation easier: if you append some character which is less than a (e.g. $) to the end of the given string, you don't have to consider the case when there's no pair of adjacent characters $S_i > S_{i+1}$.
[ "implementation", "strings" ]
1,600
#include <bits/stdc++.h> using namespace std; const int N = 200000; int t; int main() { cin >> t; for (int tc = 0; tc < t; ++tc) { string s; long long pos; cin >> s >> pos; --pos; int curLen = s.size(); vector <char> st; bool ok = pos < curLen; s += '$'; for (auto c : s) { while (!ok && st.size() > 0 && st.back() > c) { pos -= curLen; --curLen; st.pop_back(); if(pos < curLen) ok = true; } st.push_back(c); } cout << st[pos]; } return 0; }
1886
D
Monocarp and the Set
Monocarp has $n$ numbers $1, 2, \dots, n$ and a set (initially empty). He adds his numbers to this set $n$ times in some order. During each step, he adds a new number (which has not been present in the set before). In other words, the sequence of added numbers is a permutation of length $n$. Every time Monocarp adds an element into the set \textbf{except for the first time}, he writes out a character: - if the element Monocarp is trying to insert becomes the maximum element in the set, Monocarp writes out the character >; - if the element Monocarp is trying to insert becomes the minimum element in the set, Monocarp writes out the character <; - if none of the above, Monocarp writes out the character ?. You are given a string $s$ of $n-1$ characters, which represents the characters written out by Monocarp (in the order he wrote them out). You have to process $m$ queries to the string. Each query has the following format: - $i$ $c$ — replace $s_i$ with the character $c$. Both before processing the queries and after each query, you have to calculate the number of different ways to order the integers $1, 2, 3, \dots, n$ such that, if Monocarp inserts the integers into the set in that order, he gets the string $s$. Since the answers might be large, print them modulo $998244353$.
The key observation to this problem is that it's much easier to consider the process in reverse. Suppose Monocarp has a set of integers $1, 2, \dots, n$, and starts removing elements from it one by one. During the $i$-th deletion, if $s_{n-i}$ is <, he removes the minimum element; if $s_{n-i}$ is >, he removes the maximum element; and if $s_{n-i}$ is ?, he removes any element which is neither the minimum nor the maximum. If you consider the process backwards, it's quite easy to see that it doesn't actually matter which numbers are present in the set; we are interested only in their quantity. So, for each action, we can choose an element to remove independently: if $s_{n-i}$ is < or >, there is only one way, otherwise, there are $k-2$ ways, where $k$ is the number of elements in the set (for the $i$-th deletion operation, it is $n-i+1$). So, the answer to the problem is the product of $j-1$ for every character $s_j$ that is equal to ?. To recalculate the answer efficiently when you change a character, you can use one of the following options: build a segment tree with operation "product on segment modulo $998244353$"; or use modular inverse to maintain division operations. Note that sometimes you have to "divide by zero", i. e. remove the zero from the product (when $s_1$ changes from ? to another character); to handle it, you can store the product for every $s_i$ from $2$ to $n-1$, and explicitly multiply it by $0$ before printing when $s_1$ is ?.
[ "combinatorics", "data structures", "math" ]
2,100
#include <bits/stdc++.h> using namespace std; const int MOD = 998244353; const int N = 300009; int bp(int a, int n) { int res = 1; while(n > 0) { if (n & 1) res = (res * 1LL * a) % MOD; a = (a * 1LL * a) % MOD; n >>= 1; } return res; } int n, m; string s; int inv[N]; void upd(int &res, int x) { res = (res * 1LL * x) % MOD; } int main() { inv[1] = 1; for (int i = 2; i < N; ++i){ inv[i] = bp(i, MOD - 2); } cin >> n >> m >> s; int res = 1, k = n; bool isZero = false; for (int i = 0; i < s.size(); ++i) if (s[i] == '?') { if (i == 0) { isZero = true; } else { upd(res, i); } } cout << (isZero? 0 : res) << endl; for(int i = 0; i < m; ++i) { int pos; char c; cin >> pos >> c; --pos; if (s[pos] == '?' && (c == '<' || c == '>')) { if (pos == 0) isZero = false; else upd(res, inv[pos]); } else if ((s[pos] == '<' || s[pos] == '>') && c == '?') { if (pos == 0) isZero = true; else upd(res, pos); } s[pos] = c; cout << (isZero? 0 : res) << endl; } return 0; }
1886
E
I Wanna be the Team Leader
Monocarp is a team leader in a massive IT company. There are $m$ projects his team of programmers has to complete, numbered from $1$ to $m$. The $i$-th project has a difficulty level $b_i$. There are $n$ programmers in the team, numbered from $1$ to $n$. The $j$-th programmer has a stress tolerance level $a_j$. Monocarp wants to assign the programmers to the projects in such a way that: - each programmer is assigned to no more than one project; - each project has at least one programmer assigned to it; - let $k$ programmers be assigned to the $i$-th project; then all the assigned programmers have to have a stress tolerance level greater than or equal to $\frac{b_i}{k}$. Help Monocarp to find a valid assignment. If there are multiple answers, print any of them.
Let's start by arranging the programmers in the increasing order of their stress tolerance level. It's not obvious what this achieves at first, but it always helps to sort, doesn't it? Now, consider some assignment of the programmers to the projects. Notice how it's always optimal to take a suffix of the programmers. If there is a valid answer which is not a suffix, then you can always drop the weakest taken programmer and replace him with the strongest non-taken one, and the answer will still be valid. We can actually propagate the argument further. It's also always optimal for each project to be assigned a contiguous segment of programmers. Look at the weakest taken programmer again and think about the project he's assigned to. All that matters for the project is the number of programmers assigned to it. So, you can safely replace the currently assigned programmers with the same count of the next weakest ones. You can do it by swapping the assigned projects of adjacent programmers. This way, every swap can only make any other project's weakest programmer stronger. Thus, we can actually build the answer the following way. Choose a project and assign the shortest possible suffix of non-taken programmers to it. We just have to determine the order of the projects to take. Well, let's use dynamic programming for that. Let $\mathit{dp}[\mathit{mask}]$ be the shortest suffix of the programmers that can be assigned to the projects from $\mathit{mask}$. For a transition, just choose one of the projects that isn't in $\mathit{mask}$ and find the shortest segment of programmers starting from $\mathit{dp}[\mathit{mask}]$. How to find this shortest segment fast enough? You can precompute it before calculating the dynamic programming. For each project and each starting index, you can calculate the shortest segment with two pointers. Overall complexity: $O(nm + m2^m)$.
[ "bitmasks", "constructive algorithms", "dp", "greedy", "math", "sortings", "two pointers" ]
2,400
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; int main() { int n, m; scanf("%d%d", &n, &m); vector<int> a(n), b(m); forn(i, n) scanf("%d", &a[i]); forn(i, m) scanf("%d", &b[i]); vector<int> ord(n); iota(ord.begin(), ord.end(), 0); sort(ord.begin(), ord.end(), [&a](int i, int j){ return a[i] > a[j]; }); vector<vector<int>> mn(m, vector<int>(n + 2)); forn(i, m){ int r = 0; forn(l, n + 2){ r = min(n + 1, max(r, l + 1)); while (r <= n && a[ord[r - 1]] * (r - l) < b[i]) ++r; mn[i][l] = r; } } vector<int> dp(1 << m, n + 1); dp[0] = 0; vector<int> p(1 << m, -1); forn(mask, 1 << m) forn(i, m) if (!((mask >> i) & 1) && dp[mask | (1 << i)] > mn[i][dp[mask]]){ dp[mask | (1 << i)] = mn[i][dp[mask]]; p[mask | (1 << i)] = mask; } int mask = (1 << m) - 1; if (dp[mask] > n){ puts("NO"); return 0; } puts("YES"); vector<vector<int>> ans(n); forn(_, m){ int i = __builtin_ctz(mask ^ p[mask]); for (int j = dp[p[mask]]; j < dp[mask]; ++j) ans[i].push_back(ord[j]); mask = p[mask]; } forn(i, m){ printf("%d", int(ans[i].size())); for (int x : ans[i]) printf(" %d", x + 1); puts(""); } return 0; }
1886
F
Diamond Theft
Monocarp is the most famous thief in Berland. This time, he decided to steal two diamonds. Unfortunately for Monocarp, there are $n$ cameras monitoring the diamonds. Each camera has two parameters, $t_i$ and $s_i$. The first parameter determines whether the camera is monitoring the first diamond only ($t_i=1$), the second diamond only ($t_i=2$), or both diamonds ($t_i=3$). The second parameter determines the number of seconds the camera will be disabled after it is hacked. Every second, Monocarp can perform one of the following three actions: - do nothing; - choose a camera and hack it; if Monocarp hacks the $i$-th camera, it will be disabled for the next $s_i$ seconds (if the current second is the $T$-th one, the camera will be disabled from the $(T+1)$-th to the $(T+s_i)$-th second, inclusive); - steal a diamond if all cameras monitoring it are currently disabled. Monocarp cannot steal the second diamond if he hasn't stolen the first diamond yet. Note that Monocarp can hack a camera multiple times, even if it is currently disabled. Your task is to determine the minimum time it will take Monocarp to steal both diamonds, \textbf{beginning with the first diamond}, or report that it is impossible.
In this editorial, we will denote the number of cameras of type $i$ as $k_i$. Without loss of generality, we can assume that our sequence of actions will look as follows: hack all cameras of type $1$, some cameras of type $2$ and all cameras of type $3$ in some order (let's call it the first block of actions); steal the first diamond; hack the remaining cameras of type $2$ and some cameras of type $3$ (the second block of actions); steal the second diamond. Note that we don't hack the same camera of type $1$ or $2$ twice (this is just suboptimal), but some cameras of type $3$ will be hacked twice. We have to minimize the number of cameras of type $3$ which will be hacked twice. Let's iterate on the number of actions we make after stealing the first diamond (denote it by $len$), and then iterate on the number of cameras of type $2$ in the first block (denote it by $i$). Then, the number of cameras of type $2$ we hack in the second block will be $k_2 - i$, and the number of cameras of type $3$ we hack in the second block is $len - (k_2 - i) - 1$. Since this is also the number of cameras of type $3$ we hack twice, we have to minimize this number. Note that the length of the first block is also fixed: it should be $k_1 + k_3 + i$. Suppose we fixed which cameras of type $2$ and $3$ go to the second block, and which don't. For each block, we will store a special data structure that, for every $x$ from $0$ to the length of the block, calculates the value of $d_x - x$, where $d_x$ is the number of cameras which have to be hacked during the last $x$ actions in the block. Using something like Hall's theorem, we can prove that if every value of $d_x - x$ for every block is non-positive, there exists a way to hack all cameras in time. This can be efficiently modeled with a segment tree. All cameras of type $1$ should be inserted into the data structure for the first block. 
For the cameras of type $2$, we either insert them into the second block, or insert them into the first block while subtracting $len$ from their values of $s_j$. For the cameras of type $3$, we either insert them into both blocks, or insert them into the first block while subtracting $len$ from their values of $s_j$. Among all cameras of type $2$, which ones should belong to the first block, and which ones - to the second block? It's quite easy to see that the "longest" cameras of type $2$ should be in the first block. Unfortunately, with cameras of type $3$, it's a bit more tricky. First, let's insert all cameras of type $1$ into the first block and all cameras of type $2$ into the blocks chosen for them. Then sort all cameras of type $3$ (in ascending order of their $s_j$ values) and use the following greedy approach: if a camera can be inserted into the first block after we subtract $len$ from its $s_j$, we insert it there; otherwise, we insert it into both blocks without subtraction. This greedy approach can be proved using something like exchange argument: if we skipped some camera which could be inserted into the first block, and chose some other camera for the same slot, they can be "swapped". So, all cameras of type $3$ are partitioned into two types. After you've done inserting cameras into the respective blocks, check that everything's fine. This works in something like $O(n^3 \log n)$. Unfortunately, it is too slow. To speed this up, let's use the following strategy: iterate on the value of $len$, then try solving the problem for $i=0$ (i. e. all cameras of type $2$ are initially in the second block). If it fails, we can try increasing $i$, shifting a camera of type $2$ from the second block to the first block and rebuilding the partition of cameras of type $3$, instead of starting it from scratch. But how do we rebuild the partition of cameras of type $3$? We can use the following strategy. 
Store a set of cameras of type $3$ (sorted by their $s_j$) which are inserted only into the first block. While the first block is not OK (i. e. there exists some value of $x$ such that $d_x - x$ is positive), we will find the minimum $x$ such that $d_x - x$ is positive, and try to decrease it as follows: search for the camera of type $3$ which belongs only to the first block, has $s_j - len \le x$, and has the maximum $s_j$ among all such cameras. This is the camera which should be moved in the partition (i. e. instead of storing it in the first block, we store it in both blocks). Note that you might need to repeat this process (i. e. when you move a camera of type $2$ to the first block, it can force multiple cameras of type $3$ into the second block). Thus, our solution now works in $O(n^2 \log n)$.
[ "data structures", "greedy" ]
3,300
#include <bits/stdc++.h> using namespace std; const int N = 3003; int n; int k[4]; vector<int> a[4]; struct segtree { int sz; int tot; vector<int> t, p; segtree(int sz) : sz(sz) { tot = 0; t = vector<int>(4 * sz); p = vector<int>(4 * sz); build(0, 0, sz); } void build(int v, int l, int r) { if (l + 1 == r) { t[v] = -l; return; } int m = (l + r) / 2; build(v * 2 + 1, l, m); build(v * 2 + 2, m, r); t[v] = max(t[v * 2 + 1], t[v * 2 + 2]); } void push(int v) { if (p[v] == 0) return; if (v + 1 < 2 * sz) { t[v * 2 + 1] += p[v]; p[v * 2 + 1] += p[v]; t[v * 2 + 2] += p[v]; p[v * 2 + 2] += p[v]; p[v] = 0; } } void upd(int v, int l, int r, int L, int R, int x) { if (L >= R) return; if (l == L && r == R) { t[v] += x; p[v] += x; return; } push(v); int m = (l + r) / 2; upd(v * 2 + 1, l, m, L, min(m, R), x); upd(v * 2 + 2, m, r, max(L, m), R, x); t[v] = max(t[v * 2 + 1], t[v * 2 + 2]); } void upd(int pos, int x) { tot += x; upd(0, 0, sz, max(0, pos), sz, x); } int get(int v, int l, int r, int L, int R) { if (L >= R) return -1e9; if (l == L && r == R) return t[v]; push(v); int m = (l + r) / 2; return max( get(v * 2 + 1, l, m, L, min(m, R)), get(v * 2 + 2, m, r, max(L, m), R) ); } int get(int L) { return get(0, 0, sz, max(0, L), sz); } int getBad(int v, int l, int r) { if (t[v] <= 0) return -1; if (l + 1 == r) return l; push(v); int m = (l + r) / 2; if (t[v * 2 + 1] > 0) { return getBad(v * 2 + 1, l, m); } else { return getBad(v * 2 + 2, m, r); } } int getBad() { return getBad(0, 0, sz); } }; int main() { cin >> n; int sz = 1; for (int i = 0; i < n; ++i) { int t, s; cin >> t >> s; a[t].push_back(s); sz = max(sz, s + 1); } for (int t = 1; t < 4; ++t) { sort(a[t].begin(), a[t].end()); k[t] = a[t].size(); } reverse(a[2].begin(), a[2].end()); int ans = 1e4; for (int len = 1; len <= k[2] + k[3] + 1; ++len) { segtree L(sz), R(sz); multiset<int> used; for (int x : a[1]) L.upd(x, +1); for (int x : a[2]) R.upd(x, +1); for (int x : a[3]) { L.upd(x - len, +1); if (L.t[0] <= 0) { 
used.insert(x - len); } else { L.upd(x - len, -1); L.upd(x, +1); R.upd(x, +1); } } for (int i = 0; i <= k[2]; ++i) { if (L.t[0] <= 0 && R.t[0] <= 0 && R.tot + 1 <= len) ans = min(ans, n + (k[3] - (int)used.size()) + 2); if (i == k[2]) break; R.upd(a[2][i], -1); L.upd(a[2][i] - len, +1); int pos; while ((pos = L.getBad()) != -1) { auto it = used.upper_bound(pos); if (it == used.begin()) break; --it; L.upd(*it, -1); L.upd(*it + len, +1); R.upd(*it + len, +1); used.erase(it); } } } if (ans == 1e4) ans = -1; cout << ans << '\n'; }
1887
B
Time Travel
Berland is a country with ancient history, where roads were built and destroyed for centuries. It is known that there always were $n$ cities in Berland. You also have records of $t$ key moments in the history of the country, numbered from $1$ to $t$. Each record contains a list of \textbf{bidirectional} roads between some pairs of cities, which could be used for travel in Berland at a specific moment in time. You have discovered a time machine that transports you between key moments. Unfortunately, you cannot choose what point in time to end up at, but you know the order consisting of $k$ moments in time $a_{i}$, in which the machine will transport you. Since there is little time between the travels, when you find yourself in the next key moment in time (\textbf{including after the last time travel}), you can travel on at most one existing road at that moment, coming out from the city you were in before time travel. Currently, you are in city $1$, and the time machine has already transported you to moment $a_{1}$. You want to reach city $n$ as quickly as possible. Determine the minimum number of time travels, \textbf{including the first one}, that you need to make in order to reach city $n$.
Let $d_{v}$ denote the minimum number of moves required to reach vertex $v$. Initially, $d_{v} = \infty$ for all vertices except $1$, where $d_{1} = 0$. We will gradually mark the vertices for which we know the optimal answer. Similar to Dijkstra's algorithm, at each iteration, we will select the vertex $v$ with the minimum value of $d_{v}$ among the unmarked vertices. We will mark it and relax its neighbors: let $(v, u)$ be an edge belonging to record $x$; we will find the minimum index $i$ such that $d_{v} < i$ and $a_{i} = x$, then set $d_{u} = \min(d_{u}, i)$. We can find $i$ using binary search, by storing, for each time moment, the indices of its occurrences in $a$. The time complexity of the solution is $O(m (\log k + \log n))$.
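The modified Dijkstra above can be sketched as follows (a minimal Python sketch; the input representation — `records` as per-moment road lists and `a` as the travel order — is our own assumption, not the judge's I/O format):

```python
import heapq
from bisect import bisect_right

def min_travels(n, records, a):
    """Minimum number of time travels (including the first) to reach city n,
    or -1 if impossible. records[x-1] = roads (u, v) usable at key moment x;
    a = the order of moments the machine visits."""
    # occ[x] = sorted positions i (1-indexed) with a_i = x
    occ = {}
    for i, x in enumerate(a, 1):
        occ.setdefault(x, []).append(i)
    adj = [[] for _ in range(n + 1)]           # adj[v] = list of (u, record x)
    for x, roads in enumerate(records, 1):
        for u, v in roads:
            adj[u].append((v, x))
            adj[v].append((u, x))
    INF = float('inf')
    d = [INF] * (n + 1)
    d[1] = 0
    pq = [(0, 1)]
    while pq:
        dv, v = heapq.heappop(pq)
        if dv > d[v]:                          # stale queue entry
            continue
        for u, x in adj[v]:
            lst = occ.get(x, [])
            j = bisect_right(lst, dv)          # smallest i with a_i = x and i > d_v
            if j < len(lst) and lst[j] < d[u]:
                d[u] = lst[j]
                heapq.heappush(pq, (d[u], u))
    return d[n] if d[n] < INF else -1
```

For example, with road (1,2) at moment 1, road (2,3) at moment 2, and travel order [2, 1, 2], three travels are needed: moment 2 is wasted, moment 1 moves us to city 2, and the second moment 2 moves us to city 3.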
[ "binary search", "graphs", "shortest paths" ]
1,900
null
1887
C
Minimum Array
Given an array $a$ of length $n$ consisting of integers. Then the following operation is sequentially applied to it $q$ times: - Choose indices $l$ and $r$ ($1 \le l \le r \le n$) and an integer $x$; - Add $x$ to all elements of the array $a$ in the segment $[l, r]$. More formally, assign $a_i := a_i + x$ for all $l \le i \le r$. Let $b_j$ be the array $a$ obtained after applying the first $j$ operations ($0 \le j \le q$). Note that $b_0$ is the array $a$ before applying any operations. You need to find the lexicographically minimum$^{\dagger}$ array among all arrays $b_j$. $^{\dagger}$An array $x$ is lexicographically smaller than array $y$ if there is an index $i$ such that $x_i < y_i$, and $x_j = y_j$ for all $j < i$. In other words, for the first index $i$ where the arrays differ, $x_i < y_i$.
Let $u$ be the difference array of an array $v$, i.e. $u_0 = v_0$, $u_i = v_i - v_{i-1}$ for all $i \ge 1$. Note that lexicographically minimizing an array is equivalent to lexicographically minimizing its difference array. Let's see how the difference array changes: when a query adds the number $add$ on the segment $[l, r]$, only 2 of its elements are updated: $add$ is added at index $l$, and $add$ is subtracted at index $r+1$. Let's learn how to compare the difference arrays after the $j$th and $i$th queries ($j < i$). Consider an auxiliary array, which is the union of the changes in the difference array over all queries with numbers from $j+1$ to $i$ inclusive. Consider the non-zero element of this auxiliary array with the minimum index. If it is negative, then the difference array after the $i$th query is smaller than after the $j$th one; if it is positive, then it is larger. Let's go through the queries and maintain the number of the minimum difference array. Suppose the current minimum difference array is the one after query $j$. Since changes to the difference array occur only at two indices, we can maintain them in a dictionary mapping an index to the accumulated change of the difference array at that index. When we process the next query, we update the values in the dictionary at two keys. After the change, we check the sign of the non-zero element with the minimum index: if it is negative, the current query has updated the minimum, and we clear the dictionary so that it now stores changes relative to this query; if it is positive, we leave the dictionary as it is and continue updating it on subsequent queries. As a result, using the found index of the minimum difference array, we restore the array itself.
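The bookkeeping above can be sketched as follows (a simplified Python sketch, not the intended complexity: the dictionary holds difference-array changes relative to the current best prefix of queries, and pending queries are applied to the array only when a new minimum is found):

```python
def minimum_array(a, queries):
    """Return the lexicographically smallest b_j; queries are (l, r, x), 1-indexed."""
    n = len(a)
    delta = {}      # difference-array changes relative to the current best prefix
    pending = []    # queries issued since the current best prefix
    for l, r, x in queries:
        delta[l - 1] = delta.get(l - 1, 0) + x   # 0-indexed diff position l-1
        if r < n:
            delta[r] = delta.get(r, 0) - x       # a change at position n falls off
        pending.append((l, r, x))
        keys = sorted(k for k, v in delta.items() if v != 0)
        if keys and delta[keys[0]] < 0:
            # the current prefix of queries yields a smaller array: adopt it
            for ql, qr, qx in pending:
                for i in range(ql - 1, qr):
                    a[i] += qx
            delta, pending = {}, []
    return a
```

The sorted scan over the dictionary keys stands in for the "minimum non-zero element by index" check; an efficient solution would maintain that minimum incrementally.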
[ "binary search", "brute force", "constructive algorithms", "data structures", "greedy", "hashing", "two pointers" ]
2,400
null
1887
D
Split
Let's call an array $b_1, b_2, \ldots, b_m$ ($m \ge 2$) good if it can be split into two parts such that all elements in the left part are strictly smaller than all elements in the right part. In other words, there must exist an index $1 \le i < m$ such that every element from $b_1, \ldots, b_i$ is strictly smaller than every element from $b_{i+1}, \ldots, b_m$. Given an array $a_1, a_2, \ldots a_n$ consisting of \textbf{distinct} integers from $1$ to $n$. There are $q$ queries. Each query consists of two numbers $l$ and $r$. For each query, determine whether the array $a_l, a_{l+1}, \ldots, a_r$ is good.
Let's fix element $i$. Let's find all intervals $[l, r]$ for which this element can be the maximum in the left part of a valid cut. Let $x_l$ be the position of the nearest element to the left of $i$ greater than $a_i$, and $x_r$ be the position of the nearest element to the right of $i$ greater than $a_i$. Then, for $i$ to be the maximum element in the left part of the cut, the following conditions must be satisfied: $\begin{cases} x_l < l \le i\\ x_r \le r \end{cases}$ But these conditions are obviously not enough, as we need to guarantee that all elements in the right part of the cut are greater than all elements in the left part of the cut. However, since $i$ is the maximum element in the left part of the cut, it is sufficient for all elements in the right part of the cut to be greater than $a_i$. Therefore, if $y_r$ is the position of the nearest element to the right of $x_r$ smaller than $a_i$, then $r < y_r$. Thus, element $i$ can be the maximum element in the left part of the cut of interval $[l, r]$ $\Longleftrightarrow$ the following conditions are satisfied: $\begin{cases} x_l < l \le i\\ x_r \le r < y_r \end{cases}$ For each element $i$, we can find $x_l$, $x_r$, and $y_r$ in $\mathcal{O}(\log n)$ time. This can be done, for example, using a segment tree or std::set if the elements are iterated in ascending order. It only remains to notice that if we consider the queries $[l, r]$ as points $(l, r)$, then each element $i$ makes all points in some rectangle good. Thus, the problem reduces to checking whether a point lies in one of the rectangles. This is a classic problem that can be solved using sweep line and, for example, a segment tree. We obtained a solution with a time complexity of $\mathcal{O}(n\log n)$.
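The characterization can be checked directly with a brute-force $\mathcal{O}(n^2)$ Python sketch (positions are 0-indexed internally; query rectangles are reported 1-indexed as in the editorial):

```python
def good_query_rects(a):
    """For each i, the rectangle of queries (l, r) for which a[i] can be the
    maximum of the left part: x_l < l <= i and x_r <= r < y_r (1-indexed)."""
    n = len(a)
    rects = []
    for i in range(n):
        xl = next((j for j in range(i - 1, -1, -1) if a[j] > a[i]), -1)
        xr = next((j for j in range(i + 1, n) if a[j] > a[i]), n)
        if xr == n:
            continue  # nothing larger on the right: a[i] cannot be a left-part max
        yr = next((j for j in range(xr + 1, n) if a[j] < a[i]), n)
        # convert to 1-indexed query bounds: l in [xl+2, i+1], r in [xr+1, yr]
        rects.append((xl + 2, i + 1, xr + 1, yr))
    return rects

def is_good(l, r, rects):
    """A query is good iff its point (l, r) lies in one of the rectangles."""
    return any(l1 <= l <= l2 and r1 <= r <= r2 for l1, l2, r1, r2 in rects)
```

An efficient solution would compute the same rectangles with monotonic structures and answer the point-in-rectangles queries offline with a sweep line.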
[ "binary search", "data structures", "divide and conquer", "dsu", "math", "trees", "two pointers" ]
2,700
null
1887
E
Good Colorings
Alice suggested Bob to play a game. Bob didn't like this idea, but he couldn't refuse Alice, so he asked you to write a program that would play instead of him. The game starts with Alice taking out a grid sheet of size $n \times n$, the cells of which are \textbf{initially not colored}. After that she colors some $2n$ cells with colors $1,2,\ldots, 2n$, respectively, and informs Bob about these cells. In one move, Bob can point to a cell that has not been colored yet and ask Alice to color that cell. Alice colors that cell with one of the $2n$ colors of her choice, informing Bob of the chosen color. Bob can make no more than $10$ moves, after which he needs to find a good set of four cells. A set of four cells is considered good if the following conditions are met: - All the cells in the set are colored; - No two cells in the set are colored with the same color; - The centers of the cells form a rectangle with sides parallel to the grid lines.
Let's consider a bipartite graph with $n$ rows and $n$ columns. The cells will correspond to the edges. According to the problem statement, we need to find a pair of rows $\{ u, v \}$ and a pair of columns $\{ x, y \}$ such that cells $(u,x), (u,y), (v, x), (v, y)$ have different colors. Notice that in graph terms, this means that we need to find a cycle of length $4$ with different colors. According to the problem statement, we have $2n$ edges with pairwise distinct colors in a graph with $2n$ vertices. This means that there must exist a cycle of length $2k$ with different colors. Let this cycle be $(v_1, v_2, \ldots, v_{2k})$. Mentally construct the edges $(v_2, v_{2k-1}), (v_3, v_{2k-2}), \ldots, (v_{k-1}, v_{k+2})$. These edges divide our cycle into $k-1$ cycles of length $4$ and are part of the complete bipartite graph. It is claimed that one of these cycles will definitely have different colors. To find it, we apply the binary search technique. Ask for the color of the edge that divides this cycle approximately in half. Since the edge colors on the current cycle are pairwise distinct, the queried edge's color can coincide with at most one of them, so it differs from all edge colors in at least one of the two halves, and we recurse into that half. In each of these cases, we managed to find a cycle with half the length. After $\lceil \log_2(k-1) \rceil \le 10$ queries, we will find a cycle of length $4$ with different colors, and therefore the desired set of four cells.
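One way to realize the halving step is the following abstract Python sketch (the cycle/oracle representation is ours, not the judge's I/O format): keep a cycle whose edges have pairwise distinct colors, query a chord that splits it into two even-length halves, and recurse into a half whose edges all differ from the chord's color.

```python
def find_rainbow_c4(cycle, edge_colors, ask):
    """cycle: vertex list v_0..v_{m-1}, m even, edges pairwise distinctly colored;
    edge_colors: {frozenset({u, v}): color}; ask(u, v): Alice's color for a new
    chord. Returns (4-cycle vertices, its 4 pairwise distinct edge colors)."""
    colors = dict(edge_colors)
    def col(u, v):
        return colors[frozenset((u, v))]
    while len(cycle) > 4:
        m = len(cycle)
        h = m // 2 if (m // 2) % 2 == 1 else m // 2 + 1  # odd h: both halves even
        u, v = cycle[0], cycle[h]
        c = ask(u, v)
        colors[frozenset((u, v))] = c
        left = cycle[:h + 1]                 # closed into a cycle by the chord
        left_cols = {col(left[i], left[i + 1]) for i in range(h)}
        # c matches at most one edge of the current cycle (colors are distinct),
        # so at least one half together with the chord stays rainbow
        cycle = [cycle[0]] + cycle[h:] if c in left_cols else left
    return cycle, [col(cycle[i], cycle[(i + 1) % 4]) for i in range(4)]
```

In the bipartite cycle, an odd chord offset automatically connects a row vertex to a column vertex, so the queried cell always exists.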
[ "binary search", "constructive algorithms", "graphs", "interactive" ]
3,100
null
1887
F
Minimum Segments
You had a sequence $a_1, a_2, \ldots, a_n$ consisting of integers from $1$ to $n$, not necessarily distinct. For some unknown reason, you decided to calculate the following characteristic of the sequence: - Let $r_i$ ($1 \le i \le n$) be the smallest $j \ge i$ such that on the subsegment $a_i, a_{i+1}, \ldots, a_j$ all distinct numbers from the sequence $a$ appear. More formally, for any $k \in [1, n]$, there exists $l \in [i, j]$ such that $a_k = a_l$. If such $j$ does not exist, $r_i$ is considered to be equal to $n+1$. - The characteristic of the sequence $a$ is defined as the sequence $r_1, r_2, \ldots, r_n$. Unfortunately, the sequence $a$ got lost, but you still have its characteristic $r$. You want to reconstruct any sequence $a$ that matches the characteristic, or determine that there is an error in the characteristic and such a sequence does not exist.
Let's consider a sequence $a_1, a_2, \ldots, a_n$. Let $nxt_i$ be the smallest $j > i$ such that $a_j = a_i$, or $n+1$ if such $j$ does not exist. It is claimed that the characteristic of the sequence can be uniquely determined from the values of $nxt_1, nxt_2, \ldots, nxt_n$. Let's write down the conditions on $nxt_i$ that we can obtain from the characteristic $r_1, r_2, \ldots, r_n$: $nxt_n=n+1$. If $r_i=r_{i+1}$, then $nxt_i \in [i+1, r_i]$, since the number $a_i$ appears on the subsegment $[i+1,r_i]$. If $r_i < r_{i+1}$, then $nxt_i = r_{i+1}$, since we need to extend the segment to the nearest occurrence of the number $a_i$. For each $i > r_1$, there exists $j$ such that $nxt_j=i$, since the first occurrence of each number is on the subsegment $[1, r_1]$. All $nxt_i \le n$ are distinct, since each element can have at most one nearest equal element on its left. We want to construct the array $nxt$ that satisfies these conditions. We have some values of $nxt_i$ that we already know, as well as values of $nxt_i$ that should lie in the intervals $[i+1,r_{i+1}]$. Note that the left and right boundaries of these intervals are non-decreasing, so if condition 4 did not exist, we could greedily assign values to $nxt_i$, moving through the intervals in descending order and assigning the largest unused value of $nxt_i$ that lies in the interval. Let's fix $k$ as the number of intervals in which we will choose the number $n+1$. Obviously, it is more advantageous to place $n+1$ in the $k$ rightmost intervals, as all intervals in which $n+1$ can be placed are nested within each other. After that, we can greedily assign the largest possible values of $nxt_i$ to the intervals from right to left, thus obtaining an answer with the maximum number of $nxt_i \in [r_1+1, n]$ for a given $k$. If we manage to cover all numbers from $[r_1+1, n]$, then we have found a suitable sequence $nxt$. Thus, we have a solution with $O(n^2)$ complexity: we iterate over $k$ and solve it greedily. 
Let's learn how to optimize it. Let's find the smallest $k$ for which there is an answer without considering condition 4. Note that as $k$ increases, the number of $nxt_i \in [r_1+1, n]$ obtained by the greedy algorithm does not increase, so it is sufficient to run the greedy algorithm for the minimum $k$ and check whether condition 4 holds for the answer it produces. After we find the array $nxt$, it is easy to obtain the array $a$ from it.
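The final step — recovering $a$ from a valid $nxt$ — can be sketched like this (a Python sketch; it assumes $nxt$ already satisfies the conditions, in particular that the values $\le n$ are distinct and $nxt_i > i$). Each first occurrence gets a fresh label, and every $nxt$ link copies the label forward:

```python
def reconstruct_a(nxt):
    """nxt is 1-indexed: nxt[i-1] is the next position with the same value,
    or n+1 if there is none. Returns a matching sequence a."""
    n = len(nxt)
    a = [0] * (n + 1)
    label = 0
    for i in range(1, n + 1):
        if a[i] == 0:          # first occurrence of a new value
            label += 1
            a[i] = label
        if nxt[i - 1] <= n:    # propagate the value to its next occurrence
            a[nxt[i - 1]] = a[i]
    return a[1:]
```

Because the finite $nxt$ values are distinct and point forward, every position is written by at most one link before it is visited, so the labels are well defined.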
[ "constructive algorithms" ]
3,400
null
1889
A
Qingshan Loves Strings 2
Qingshan has a string $s$ which only contains $0$ and $1$. A string $a$ of length $k$ is good if and only if - $a_i \ne a_{k-i+1}$ for all $i=1,2,\ldots,k$. {\small For Div. 2 contestants, note that this condition is different from the condition in problem B.} For example, $10$, $1010$, $111000$ are good, while $11$, $101$, $001$, $001100$ are not good. Qingshan wants to make $s$ good. To do this, she can do the following operation \textbf{at most} $300$ times (possibly, zero): - insert $01$ to any position of $s$ (getting a new $s$). Please tell Qingshan if it is possible to make $s$ good. If it is possible, print a sequence of operations that makes $s$ good.
First, there is no solution when the numbers of 0's and 1's differ. Otherwise, the construction follows: If $s_1 \ne s_n$ now, we can ignore $s_1$ and $s_n$, and consider $s_{2..n-1}$ as a new $s$. If $s$ is empty, the algorithm ends. Now $s_1=s_n$. If they are 1, insert 01 at the front; otherwise, insert 01 at the end. Look at this example: "110010" "1001" "011001" "1100" "10" "" This operation is actually equivalent to moving the last 1 to the front or moving the first 0 to the end. For example, in steps 2 to 4 above, we succeed in moving the last 1 to the front. So in the worst case, every character in the string is moved, and we need $n$ moves. Actually, we need only $n/2$ moves rather than $n$: of the 0 and the 1 deleted in the same operation, at most one of them needs to be moved. Time complexity $O(n)$.
[ "constructive algorithms", "greedy", "implementation" ]
1,300
#include <bits/stdc++.h> bool ok(std::string s) { for (size_t i = 1; i < s.length(); ++i) if (s[i] == s[i - 1]) return false; return true; } std::string s; void solve() { int n; std::cin >> n; std::cin >> s; int cnt0 = 0, cnt1 = 0; for (int i = 0; i < s.length(); ++i) { cnt0 += s[i] == '0'; cnt1 += s[i] == '1'; } if (cnt0 != cnt1) { std::cout << -1 << std::endl; return; } std::vector<int> z; std::deque<char> q; for (int i = 0; i < s.length(); ++i) q.push_back(s[i]); int d = 0; while (!q.empty()) { if (q.front() == q.back()) { if (q.front() == '0') { q.push_back('0'); q.push_back('1'); z.push_back(n - d); } else { q.push_front('1'); q.push_front('0'); z.push_back(0 + d); } n += 2; } while (!q.empty() && q.front() != q.back()) { q.pop_back(); q.pop_front(); ++d; } } std::cout << z.size() << std::endl; for (int i = 0; i < z.size(); ++i) { std::cout << z[i]; if (i + 1 == z.size()) std::cout << std::endl; else std::cout << " "; } } int main() { int t; std::cin >> t; while (t--) solve(); return 0; }
1889
B
Doremy's Connecting Plan
Doremy lives in a country consisting of $n$ cities numbered from $1$ to $n$, with $a_i$ people living in the $i$-th city. It can be modeled as an undirected graph with $n$ nodes. Initially, there are no edges in the graph. Now Doremy wants to make the graph connected. To do this, she can add an edge between $i$ and $j$ if $$ \sum_{k \in S} a_k \ge i\cdot j \cdot c, $$ where $S$ is the set of all the nodes that are currently in the same connected component of either $i$ or $j$, and $c$ is a given constant. Can Doremy make the graph connected? Two nodes $(i, j)$ are in the same connected component if there exists a path from $i$ to $j$. A graph is connected if all its nodes are in the same connected component.
First, we can just solve $c=1$, because letting $a_i' = \frac{a_i}{c}$ reduces $c\ne 1$ to $c=1$. For convenience, let $s_i = \sum a_j$, where $j$ ranges over the nodes currently connected with $i$. Let's see: if you can add an edge between $i$ and $j$ ($i \ne 1, j \ne 1$) right now, it means $s_i + s_j \ge i \cdot j \ge i+j$ (the last inequality holds since $i, j \ge 2$). This actually implies that at least one of $s_i \ge i$ and $s_j \ge j$ holds (otherwise $s_i + s_j < i + j$). WLOG, let $s_i \ge i$ be true. Therefore $s_i + s_1 \ge 1 \cdot i$, which means you can add an edge between $1$ and $i$. Moreover, adding a new edge does not cause other addable edges to become unaddable. So it is always good to add the edge between $1$ and $i$ instead. Now we only need to decide the order. Consider the inequality $s_i + s_1 \ge 1 \cdot i$. You can see that the larger $s_i - i$ is, the sooner node $i$ can be linked with $1$. So we can sort the nodes by $a_i-i$ in descending order, and that is the order we need. Time complexity $O(n \log n)$. Bonus: Time complexity can be $O(n)$ based on this observation: if you link $(1,i)$, then any $j < i$ can be linked to $1$.
[ "constructive algorithms", "greedy", "math", "sortings" ]
1,700
#include<bits/stdc++.h> using i64 = long long; using namespace std; const int N = 5e5 + 7; int T, n, C, p[N]; i64 a[N]; void solve () { cin >> n >> C; for (int i = 1; i <= n; i ++) cin >> a[i]; iota (p + 1, p + n + 1, 1); sort (p + 1, p + n + 1, [&] (const int &u, const int &v) { return 1ll * u * C - a[u] < 1ll * v * C - a[v]; }); i64 now = a[1]; for (int i = 1, u; i <= n; i ++) { u = p[i]; if (u == 1) continue; if (1ll * u * C > now + a[u]) return cout << "No\n", void (); now += a[u]; } return cout << "Yes\n", void (); } int main () { ios :: sync_with_stdio (false); cin.tie (0); cout.tie (0); cin >> T; while (T --) solve (); }
1889
C1
Doremy's Drying Plan (Easy Version)
\textbf{The only differences between the two versions of this problem are the constraint on $k$, the time limit and the memory limit. You can make hacks only if all versions of the problem are solved.} Doremy lives in a rainy country consisting of $n$ cities numbered from $1$ to $n$. The weather broadcast predicted the distribution of rain in the next $m$ days. In the $i$-th day, it will rain in the cities in the interval $[l_i, r_i]$. A city is called dry if it will never rain in that city in the next $m$ days. It turns out that Doremy has a special power. She can choose $k$ days (in the easy version, $k = 2$), and during these days it will not rain. Doremy wants to calculate the maximum number of dry cities after using the special power.
We consider a brute force solution first. At the beginning, for each $i=1,2,\ldots,n$ we calculate the number of intervals that cover position $i$ by prefix sums. Now we know the number of uncovered positions; let it be $A$. Then we need to calculate the number of newly uncovered positions after removing two intervals; let it be $B$. So the answer in the end is $A + \max B$. For calculating $B$, let's enumerate two intervals $I_1,I_2$. If they have no intersection, $B$ is equal to the number of positions covered exactly once inside $I_1$ and $I_2$. If they intersect, let the intersection be $J$ (it is an interval); then $B$ is equal to the number of positions covered exactly once inside $I_1$ and $I_2$, plus the number of positions covered exactly twice inside $J$. This algorithm runs in $O(n+m^2)$ time with prefix sums. For optimization, we should notice that: In the "no intersection" case, we can simply pick the two best intervals. In the "intersection" case, there are at most $n$ useful interval pairs. The proof and algorithm go as follows: for each position $i$, if it is covered by exactly $2$ intervals, then this interval pair is useful and may update the answer. Great, but for code implementation, how do we find those interval pairs? For each interval $[l,r]$, we consider it as two events (as with difference arrays and prefix sums): it appears at position $l$, and disappears at position $r+1$. That way, a set or just an array can handle it. Time complexity $O(n+m)$.
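As a correctness reference, the easy version can also be solved by the plain $O(m^2 \cdot n)$ brute force that the counting above optimizes (a Python sketch; it assumes $m \ge 2$, as guaranteed by $k \le m$):

```python
def max_dry_cities_k2(n, intervals):
    """Try every pair of removed rainy days; count cities never rained on.
    intervals: list of (l, r), 1-indexed inclusive."""
    m = len(intervals)
    best = 0
    for i in range(m):
        for j in range(i + 1, m):
            diff = [0] * (n + 2)               # difference array of coverage
            for t, (l, r) in enumerate(intervals):
                if t != i and t != j:
                    diff[l] += 1
                    diff[r + 1] -= 1
            run, dry = 0, 0
            for p in range(1, n + 1):
                run += diff[p]
                dry += run == 0
            best = max(best, dry)
    return best
```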
[ "brute force", "data structures", "dp", "greedy", "sortings" ]
2,000
null
1889
C2
Doremy's Drying Plan (Hard Version)
\textbf{The only differences between the two versions of this problem are the constraint on $k$, the time limit and the memory limit. You can make hacks only if all versions of the problem are solved.} Doremy lives in a rainy country consisting of $n$ cities numbered from $1$ to $n$. The weather broadcast predicted the distribution of rain in the next $m$ days. In the $i$-th day, it will rain in the cities in the interval $[l_i, r_i]$. A city is called dry if it will never rain in that city in the next $m$ days. It turns out that Doremy has a special power. She can choose $k$ days, and during these days it will not rain. Doremy wants to calculate the maximum number of dry cities after using the special power.
$k$ is bigger in this version. However, it is still small, which leads us to a DP approach. Let $dp_{i,j}$ be the number of uncovered positions in $[1,i]$, given that the last uncovered position is $i$ and the number of deleted intervals is $j$. For the transition, we need to get all the intervals that cover position $i$; we've mentioned how to do this in the editorial of the easy version. Let's iterate over the previous uncovered position $t$, and calculate the number of intervals that need to be deleted in this transition: let it be $d_t$, the number of intervals $[l,r]$ such that $t < l \le i \le r$. The transition goes: $dp_{i,j} \gets 1 + \max_{t=0}^{i-1} dp_{t,j-d_t}$ The time complexity is $O(n^2k)$ now. We need to speed the transition up. Note that $d_t$ is non-decreasing as $t$ decreases, and only values $d_t \le k$ are useful. This actually splits the transition interval $[0,i-1]$ into at most $k+1$ intervals $[s_0,s_1],[s_1+1,s_2],\cdots,[s_{l-1} + 1,s_l]$ such that $0=s_0 \le s_1 < s_2 < \cdots < s_l = i - 1$, and for all $t$ in the same interval, $d_t$ is the same. So a sparse table can be used for the transition. Time complexity $O(nk^2)$. Memory complexity $O(kn\log n)$.
[ "data structures", "dp" ]
2,600
#include <bits/stdc++.h> #define debug(...) fprintf(stderr, __VA_ARGS__) #define LL long long const int MX = 2e5 + 23; const int INF = 1e9; void chkmax(int &x, int y) { x = std::max(x, y); } struct Interval { int l, r; } I[MX]; bool cmp(int x, int y) { return I[x].l > I[y].l; } std::vector<int> add[MX], del[MX]; int n, m, k, ban[MX]; int st[21][MX][18], lg[MX]; int query(int d, int l, int r) { int len = lg[r - l + 1]; return std::max(st[d][l][len], st[d][r - (1 << len) + 1][len]); } void change(int d, int p, int val) { chkmax(st[d][p][0], val); for (int i = 1; p - (1 << i) + 1 >= 0; ++i) { st[d][p - (1 << i) + 1][i] = std::max( st[d][p - (1 << i) + 1][i - 1], st[d][p - (1 << (i - 1)) + 1][i - 1]); } } void init() { lg[0] = -1; for (int i = 1; i <= n + 1; ++i) lg[i] = lg[i - 1] + ((i & -i) == i); for (int i = 1; i <= m; ++i) ban[i] = false; for (int i = 1; i <= n + 1; ++i) { add[i].clear(); del[i].clear(); } for (int i = 0; i <= k; ++i) for (int j = 0; j <= n; ++j) for (int _k = 0; _k < 18; ++_k) st[i][j][_k] = -INF; } void solve() { scanf("%d%d%d", &n, &m, &k); init(); for (int i = 1; i <= m; ++i) { int l, r; scanf("%d%d", &l, &r); I[i] = (Interval){l, r}; add[l].push_back(i); del[r + 1].push_back(i); } std::vector<int> cur, tmp; change(0, 0, 0); int uncover = 0, ans = 0; int cnti = 0; for (int i = 1; i <= n; ++i) { for (auto j : add[i]) { cur.push_back(j); ++cnti; } for (auto j : del[i]) { ban[j] = true; --cnti; } if (cnti > k) { for (int j = 0; j <= k; ++j) change(j, i, -INF); continue; } tmp.clear(); for (auto j : cur) { if (!ban[j]) { tmp.push_back(j); } } cur = tmp; std::sort(cur.begin(), cur.end(), cmp); if (cur.empty()) { for (int j = 0; j <= k; ++j) change(j, i, -INF); ++uncover; continue; } if (I[cur[0]].l <= i - 1) for (int g = 0; g <= k; ++g) { int z = query(g, I[cur[0]].l, i - 1); chkmax(ans, z + 1); change(g, i, z + 1); } else { for (int g = 0; g <= k; ++g) { change(g, i, -INF); } } int cp = I[cur[0]].l - 1; for (int j = 0; j < cur.size(); ++j) { if (j 
+ 1 != cur.size() && I[cur[j]].l == I[cur[j + 1]].l) continue; int np = (j + 1 == cur.size() ? 0 : I[cur[j + 1]].l); for (int g = j + 1; g <= k; ++g) { int z = query(g - (j + 1), np, cp); chkmax(ans, z + 1); change(g, i, z + 1); // change(root[g], i, z + 1); } cp = np - 1; } } printf("%d\n", ans + uncover); } int main() { int t; scanf("%d", &t); while (t--) solve(); return 0; }
1889
D
Game of Stacks
You have $n$ stacks $r_1,r_2,\ldots,r_n$. Each stack contains some positive integers ranging from $1$ to $n$. Define the following functions: \begin{verbatim} function init(pos): stacks := an array that contains n stacks r[1], r[2], ..., r[n] return get(stacks, pos) function get(stacks, pos): if stacks[pos] is empty: return pos else: new_pos := the top element of stacks[pos] pop the top element of stacks[pos] return get(stacks, new_pos) \end{verbatim} You want to know the values returned by $init(1), init(2), \ldots, init(n)$. Note that, during these calls, the stacks $r_1,r_2,\ldots,r_n$ don't change, so the calls $init(1), init(2), \ldots, init(n)$ are independent.
Let's first consider an easy version of the problem: what if there is only one element in each stack? Let $p_i$ be the element in stack $i$. If we link $i \to p_i$, these edges form several directed pseudotrees. Each pseudotree contains a cycle. It is not hard to see that those cycles can actually be ignored, because if we walk along a cycle, we come back to the first node on the cycle and the cycle is eliminated. After deleting the edges on the cycles, only trees are left. Thus the root of each tree is the answer for all nodes in that tree. When it comes back to the original problem, similarly, let $p_i$ be the top element of stack $i$. The rest is the same - cycles can be eliminated. In this case, we should run the cycle-elimination process repeatedly until there is no cycle in the graph. Time complexity $O(n+\sum k)$.
[ "brute force", "dfs and similar", "graphs", "implementation", "trees" ]
3,000
#include<stack> #include<cstdio> #include<cstring> #include<iostream> #include<algorithm> using namespace std; const int maxn=100005; stack<int>s[maxn]; int re[maxn],st[maxn],vis[maxn],tp; int dfs(int u){ if(re[u]) return re[u]; if(s[u].empty()) return u; if(vis[u]){ st[tp+1]=0; while(st[tp+1]!=u){ s[st[tp]].pop(); vis[st[tp]]=0; --tp; } return dfs(u); } st[vis[u]=++tp]=u; return dfs(s[u].top()); } int main(){ int n;scanf("%d",&n); for(int i=1;i<=n;++i){ int k;scanf("%d",&k); while(k--){ int c;scanf("%d",&c); s[i].push(c); } } for(int i=1;i<=n;++i){ tp=0;int ok=dfs(i); for(int j=1;j<=tp;++j) re[st[j]]=ok; printf("%d%c",ok," \n"[i==n]); } return 0; }
1889
E
Doremy's Swapping Trees
Consider two undirected graphs $G_1$ and $G_2$. Every node in $G_1$ and in $G_2$ has a label. Doremy calls $G_1$ and $G_2$ similar if and only if: - The labels in $G_1$ are distinct, and the labels in $G_2$ are distinct. - The set $S$ of labels in $G_1$ coincides with the set of labels in $G_2$. - For every pair of two distinct labels $u$ and $v$ in $S$, the corresponding nodes are in the same connected component in $G_1$ if and only if they are in the same connected component in $G_2$. Now Doremy gives you two trees $T_1$ and $T_2$ with $n$ nodes, labeled from $1$ to $n$. You can do the following operation any number of times: - Choose an edge set $E_1$ from $T_1$ and an edge set $E_2$ from $T_2$, such that $\overline{E_1}$ and $\overline{E_2}$ are similar. Here $\overline{E}$ represents the graph which is given by keeping only the edge set $E$ from $T$ (i.e., the edge-induced subgraph). In other words, $\overline{E}$ is obtained from $T$ by removing all edges not included in $E$ and further removing all isolated vertices. - Swap the edge set $E_1$ in $T_1$ with the edge set $E_2$ in $T_2$. Now Doremy is wondering how many distinct $T_1$ you can get after any number of operations. Can you help her find the answer? Output the answer modulo $10^9+7$.
We can construct a new directed graph, where we consider each edge in $T_1$ and $T_2$ as a node. Let the node representing the edge $(u,v)$ in tree $T_i$ be $N(u,v,T_i)$. For each node $N(u,v,T_1)$, let the simple path between $u$ and $v$ in $T_2$ be $(u_1,v_1),\dots,(u_m,v_m)$, and for all $1\le i\le m$ we add an edge from $N(u,v,T_1)$ to $N(u_i,v_i,T_2)$ in the new graph. And for each node $N(u,v,T_2)$, let the simple path between $u$ and $v$ in $T_1$ be $(u_1,v_1),\dots,(u_m,v_m)$, and for all $1\le i\le m$ we add an edge from $N(u,v,T_2)$ to $N(u_i,v_i,T_1)$ in the new graph. Then we calculate the number of strongly connected components with size bigger than $2$ in the new graph. If it is $t$, the answer is $2^t$. To reduce the time complexity to $O(n\log n)$, we need to build the new graph with binary lifting or heavy-light decomposition or centroid decomposition or something like those. More specifically, we build the data structure as normal, but replace the "modify" operation with an "add edge" operation, which allows us to add edges from one node to a range of nodes with low time complexity. Proof: it is obvious that the edge set we choose in each operation must be a closed subgraph in the new graph. At the same time, it can be easily proved that every closed subgraph in the new graph is a valid edge set. The other thing we need to prove is that the set of valid closed subgraphs doesn't change after an operation. We can easily prove this using the relationship between the number of edges and the number of nodes in a tree.
[ "dfs and similar", "graphs", "trees" ]
3,500
#include<vector> #include<cstdio> #include<cstring> #include<iostream> #include<algorithm> using namespace std; const int maxn=100005; struct Edge{ int v,nt; Edge(int v=0,int nt=0): v(v),nt(nt){} }eG[maxn*160]; int hdG[maxn*40],numG; void qwqG(int u,int v){ eG[++numG]=Edge(v,hdG[u]);hdG[u]=numG; } int tot; struct AnTree{ struct Edge{ int v,nt; Edge(int v=0,int nt=0): v(v),nt(nt){} }e[maxn*2]; int hd[maxn],num; void qwq(int u,int v){ e[++num]=Edge(v,hd[u]),hd[u]=num; } int n; void init(int n){ this->n=n; for(int i=1;i<n;++i){ int u,v; scanf("%d%d",&u,&v); qwq(u,v);qwq(v,u); } } int pa[maxn][20],id[maxn][20],dp[maxn]; void dfs(int u,int p){ if(p==0)dp[u]=0; for(int i=1;(1<<i)<=dp[u];++i) pa[u][i]=pa[pa[u][i-1]][i-1]; for(int i=hd[u];i;i=e[i].nt){ int v=e[i].v; if(v==p)continue; dp[v]=dp[u]+1; pa[v][0]=u; id[v][0]=++tot; dfs(v,u); } } int getid(int u,int d){ int&re=id[u][d]; if(re)return re; re=++tot; qwqG(re,getid(u,d-1)); qwqG(re,getid(pa[u][d-1],d-1)); return re; } void link(int x,int u,int v){ if(dp[u]<dp[v])swap(u,v); for(int t=dp[u]-dp[v],cn=0;t;t>>=1,++cn) if(t&1)qwqG(x,getid(u,cn)),u=pa[u][cn]; if(u==v)return; int t=0;while(dp[u]>>t)++t; while(t--){ if(pa[u][t]!=pa[v][t]){ qwqG(x,getid(u,t)); qwqG(x,getid(v,t)); u=pa[u][t];v=pa[v][t]; } } qwqG(x,getid(u,0)); qwqG(x,getid(v,0)); } void erase(void){ for(int i=1;i<=n;++i){ hd[i]=0,dp[i]=0;int j=0; while(pa[i][j])pa[i][j++]=0; j=0;while(id[i][j])id[i][j++]=0; } num=0; } int parent(int u){ return pa[u][0]; } int identity(int u){ return id[u][0]; } }T1,T2; int dfn[maxn*40],low[maxn*40],cnt; int st[maxn*40],tp,in[maxn*40]; int scc[maxn*40],scn; int vis[maxn*40]; void tarjan(int u){ dfn[u]=low[u]=++cnt; in[st[++tp]=u]=true; for(int i=hdG[u];i;i=eG[i].nt){ int v=eG[i].v; if(!dfn[v]){ tarjan(v); low[u]=min(low[u],low[v]); } else if(in[v]) low[u]=min(low[u],dfn[v]); } if(dfn[u]==low[u]){ ++scn; st[tp+1]=0; while(st[tp+1]!=u){ in[st[tp]]=false; scc[st[tp]]=scn; --tp; } } } const int mod=1e9+7; int power(int a,int x){ int 
re=1; while(x){ if(x&1)re=1ll*re*a%mod; a=1ll*a*a%mod,x>>=1; } return re; } void solve(void){ int n;scanf("%d",&n); T1.init(n);T2.init(n); T1.dfs(1,0);T2.dfs(1,0); int rec=tot; for(int i=2;i<=n;++i){ T2.link(T1.identity(i),i,T1.parent(i)); T1.link(T2.identity(i),i,T2.parent(i)); } for(int i=1;i<=tot;++i) if(!dfn[i])tarjan(i); for(int i=1;i<=rec;++i) ++vis[scc[i]]; int ANS=0; for(int i=1;i<=scn;++i){ if(vis[i])++ANS; vis[i]=0; } for(int u=1;u<=rec;++u){ if(eG[hdG[u]].nt==0){ int v=eG[hdG[u]].v; if(u<v&&eG[hdG[v]].nt==0){ if(eG[hdG[v]].v==u){ --ANS; } } } } ANS=power(2,ANS); printf("%d\n",ANS); for(int i=1;i<=tot;++i){ hdG[i]=0; dfn[i]=low[i]=scc[i]=0; } tot=cnt=scn=numG=0; T1.erase();T2.erase(); } int main(){ // freopen("test","r",stdin); // freopen("out","w",stdout); int T;scanf("%d",&T); while(T--)solve(); return 0; }
1889
F
Doremy's Average Tree
Doremy has a rooted tree of size $n$ whose root is vertex $r$. Initially there is a number $w_i$ written on vertex $i$. Doremy can use her power to perform this operation \textbf{at most} $k$ times: - Choose a vertex $x$ ($1 \leq x \leq n$). - Let $s = \frac{1}{|T|}\sum_{i \in T} w_i$ where $T$ is the set of all vertices in $x$'s subtree. - For all $i \in T$, assign $w_i := s$. Doremy wants to know what is the lexicographically smallest$^\dagger$ array $w$ after performing all the operations. Can you help her? If there are multiple answers, you may output any one. $^\dagger$ For arrays $a$ and $b$ both of length $n$, $a$ is lexicographically smaller than $b$ if and only if there exist an index $i$ ($1 \leq i \le n$) such that $a_i < b_i$ and for all indices $j$ such that $j<i$, $a_j=b_j$ is satisfied.
Let $f_{i,j}$ be the array with the smallest lexicographical order if you do at most $j$ operations in the subtree of $i$ (only consisting of the elements in the subtree of $i$). Let $g_{i,j}$ be the first position $k$ where $f_{i,j}[k]$ is different from $f_{i,j-1}[k]$. To get the answer, we just need to calculate $g_{i,j}$, which gives us enough information to compare the lexicographical order between two choices. At the same time we need to record the first number of $f_{i,j}$ (let the number be $t$) and its label, as well as the first number in $f_{i,j}$ that differs from $t$ and its label; this gives us enough information to decide whether to perform an operation on $i$. The time complexity is $O(nk)$.
[ "data structures", "dp", "greedy", "trees" ]
3,500
#include<cstdio> #include<cstring> #include<iostream> #include<algorithm> #define ch() getchar() #define pc(x) putchar(x) using namespace std; template<typename T>void read(T&x){ static char c;static int f; for(f=1,c=ch();c<'0'||c>'9';c=ch())if(c=='-')f=-f; for(x=0;c>='0'&&c<='9';c=ch()){x=x*10+(c&15);}x*=f; } template<typename T>void write(T x){ static char q[64];int cnt=0; if(x==0)return pc('0'),void(); if(x<0)pc('-'),x=-x; while(x)q[cnt++]=x%10+'0',x/=10; while(cnt--)pc(q[cnt]); } long long gcd(long long a,long long b){ if(b==0)return a; return gcd(b,a%b); } struct Frac{ long long a; int b;// a/b Frac(long long a=0,int b=1): a(a),b(b){} bool operator < (const Frac o)const{ return a*o.b<o.a*b; } bool operator > (const Frac o)const{ return a*o.b>o.a*b; } bool operator == (const Frac o)const{ return a*o.b==o.a*b; } void output(){ printf("%lld/%d",a,b); } }; const int inf=0x3f3f3f3f; struct Val{ int p1,p2;Frac v1,v2; Val(int p1=inf,int p2=inf,Frac v1=Frac(),Frac v2=Frac()): p1(p1),p2(p2),v1(v1),v2(v2){} void output(void){ printf("%d : ",p1);v1.output();puts(""); printf("%d : ",p2);v2.output();puts("\n"); } }; Val Comb(Val A,Val B){ if(A.p1>B.p1)swap(A,B); if(B.v1==A.v1) return Val(A.p1,A.p2<B.p2?A.p2:B.p2,A.v1,A.p2<B.p2?A.v2:B.v2); return Val(A.p1,A.p2<B.p1?A.p2:B.p1,A.v1,A.p2<B.p1?A.v2:B.v1); } const int maxn=10005,maxk=505; struct Edge{ int v,nt; Edge(int v=0,int nt=0): v(v),nt(nt){} }e[maxn*2]; int hd[maxn],num; void qwq(int u,int v){ e[++num]=Edge(v,hd[u]),hd[u]=num; } long long sm[maxn];int sz[maxn]; int dp[maxn][maxk],wp[maxn][maxk],k,n; Val va[maxn][maxk]; void Merge(int x,int l,int r,int L,int R){ wp[x][0]=0; va[x][0]=Comb(va[l][0],va[r][0]); L=min(L,k);R=min(R,k); int up=min(L+R,k); for(int i=1;i<=up;++i){ int t=wp[x][i]=min(L,i),cl=inf,cr=inf; while(t>0&&i-t<R){ cl=min(cl,dp[l][t]);--t; cr=min(cr,dp[r][i-t]); if(cr<cl)wp[x][i]=t,cl=cr=inf; } t=wp[x][i-1];cl=cr=inf; if(wp[x][i-1]<wp[x][i]){ while(t<wp[x][i])cl=min(cl,dp[l][++t]); dp[x][i]=cl; } else{ 
while(i-t-1<i-wp[x][i])cr=min(cr,dp[r][i-(t--)]); dp[x][i]=cr; } va[x][i]=Comb(va[l][wp[x][i]],va[r][i-wp[x][i]]); } } int MN[maxn],rsz[maxn]; long long val[maxn]; void dfs(int u,int p){ int l=0,fa=0;sm[u]=val[u]; sz[u]=0;rsz[u]=1;MN[u]=0; for(int i=hd[u];i;i=e[i].nt){ int v=e[i].v; if(v==p){fa=i;continue;}dfs(v,u); Merge(i,i^1,l,sz[v],sz[u]); sm[u]+=sm[v],sz[u]+=sz[v]; rsz[u]+=rsz[v]; l=i; } if(sz[u]==0){ ++sz[u];MN[u]=1; va[fa][0]=va[fa][1]=Val(u,inf,val[u]); dp[fa][1]=inf; } else{ int up=min(sz[u],k); for(int i=0;i<=up;++i) va[fa][i]=Comb(va[l][i],Val(u,inf,val[u])),dp[fa][i]=dp[l][i]; Frac tmp(sm[u],rsz[u]); for(int i=1;i<=up;++i) if(va[fa][i].v1>tmp||(va[fa][i].v1==tmp&&va[fa][i].v2>tmp)) va[fa][i]=Val(va[fa][i].p1,inf,tmp),MN[u]=i; if(MN[u]){ if(va[fa][0].v1==va[fa][1].v1) dp[fa][1]=va[fa][0].p2; else dp[fa][1]=va[fa][0].p1; for(int i=2;i<=MN[u];++i) dp[fa][i]=inf; if(MN[u]<up){ if(va[fa][MN[u]].v1==va[fa][MN[u]+1].v1) dp[fa][MN[u]+1]=va[fa][MN[u]+1].p2; else dp[fa][MN[u]+1]=va[fa][MN[u]+1].p1; } } }/* int up=min(sz[u],k); printf("%d MN:%d\n",u,MN[u]); for(int i=1;i<=up;++i) printf("dp %d = %d\n",i,dp[fa][i]);*/ } int st[maxn],ts; int que[maxn],cnt; void solve(int u,int p,int sk){ if(sk==0)return;sk=min(sk,sz[u]); if(sk<=MN[u])return st[++ts]=u,void(); int tnc=cnt; for(int i=hd[u];i;i=e[i].nt){ int v=e[i].v; if(v==p)continue; que[cnt++]=i; } for(int j=cnt-1;j>=tnc;--j){ int i=que[j]; solve(e[i].v,u,wp[i][sk]); sk-=wp[i][sk]; } } void imple(void){ int r; read(n),read(r),read(k); for(int i=1;i<=n;++i) read(val[i]); qwq(r,0); for(int i=1;i<n;++i){ int u,v;read(u),read(v); qwq(u,v);qwq(v,u); } dfs(r,0); solve(r,0,k); printf("%d\n",ts); for(int i=1;i<=ts;++i) printf("%d%c",st[i]," \n"[i==ts]); for(int i=0;i<=n;++i)hd[i]=0; num=cnt=ts=0; } int main(){ // freopen("test","r",stdin); int T;read(T); while(T--)imple(); return 0; }
1890
A
Doremy's Paint 3
An array $b_1, b_2, \ldots, b_n$ of positive integers is good if all the sums of two adjacent elements are equal to the same value. More formally, the array is good if there exists a $k$ such that $b_1 + b_2 = b_2 + b_3 = \ldots = b_{n-1} + b_n = k$. Doremy has an array $a$ of length $n$. Now Doremy can permute its elements (change their order) however she wants. Determine if she can make the array good.
Statement says $b_1 + b_2 = b_2 + b_3 = \ldots = b_{n-1} + b_n = k$. Let's write it as $b_i + b_{i+1} = b_{i+1} + b_{i+2}$. This is just $b_i = b_{i+2}$, which means the positions with the same parity should contain the same value. $b_1 = b_3 = b_5 = \cdots$ and $b_2 = b_4 = b_6 = \cdots$ We know that there are $\lceil \frac{n}{2} \rceil$ odd indices and $\lfloor \frac{n}{2}\rfloor$ even indices in $[1,n]$. Therefore, the answer is YES if and only if we can find $\lfloor \frac{n}{2}\rfloor$ equal numbers such that the remaining numbers are also all equal. All cases can be classified into these categories: All numbers are the same, like $[3,3,3,3,3,3]$. The answer is YES. There are two different numbers, like $[1,2,1,2,1]$. The answer is YES if and only if one of the numbers appears exactly $\lfloor \frac{n}{2}\rfloor$ times. For example $[1,2,1,2,1]$ and $[2,3,2,3]$ are YES while $[1,1,1,2]$ and $[3,3,3,3,4,4]$ are NO. There are three or more different numbers, like $[1,2,3,2,3]$. The answer is NO. Time complexity $O(n)$.
[ "constructive algorithms" ]
800
// Time complexity O(nlogn) because of map #include <bits/stdc++.h> const int MX = 100 + 5; int main() { int t; std::cin >> t; while (t--) { int n; std::cin >> n; std::map<int ,int> occ; for (int i = 1; i <= n; ++i) { int x; std::cin >> x; occ[x]++; } if (occ.size() >= 3) puts("No"); else { if (std::abs(occ.begin()->second - occ.rbegin()->second) <= 1) { puts("Yes"); } else { puts("No"); } } } return 0; }
1890
B
Qingshan Loves Strings
Qingshan has a string $s$, while Daniel has a string $t$. Both strings only contain $0$ and $1$. A string $a$ of length $k$ is good if and only if - $a_i \ne a_{i+1}$ for all $i=1,2,\ldots,k-1$. For example, $1$, $101$, $0101$ are good, while $11$, $1001$, $001100$ are not good. Qingshan wants to make $s$ good. To do this, she can do the following operation any number of times (possibly, zero): - insert $t$ to any position of $s$ (getting a new $s$). Please tell Qingshan if it is possible to make $s$ good.
There are three circumstances where the answer is "Yes": (1) $s$ is good initially; (2) $t$ is good, $t=$ "10...01", and there is no substring 11 in $s$; (3) $t$ is good, $t=$ "01...10", and there is no substring 00 in $s$.
[ "constructive algorithms", "implementation" ]
800
#include <bits/stdc++.h> bool ok(std::string s) { for (size_t i = 1; i < s.length(); ++i) if (s[i] == s[i - 1]) return false; return true; } void solve() { std::string s, t; int l1, l2; std::cin >> l1 >> l2; std::cin >> s >> t; if (ok(s)) { std::cout << "Yes" << std::endl; return; } if (!ok(t) || *t.begin() != *t.rbegin()) { std::cout << "No" << std::endl; return; } int zz = 0, oo = 0; for (size_t i = 1; i < s.length(); ++i) { if (s[i] == s[i - 1]) { if (s[i] == '0') zz = true; if (s[i] == '1') oo = true; } } if (zz && t[0] == '0') { std::cout << "No" << std::endl; return; } if (oo && t[0] == '1') { std::cout << "No" << std::endl; return; } std::cout << "Yes" << std::endl; return; } int main() { int t; std::cin >> t; while (t--) solve(); return 0; }
1891
A
Sorting with Twos
You are given an array of integers $a_1, a_2, \ldots, a_n$. In one operation, you do the following: - Choose a non-negative integer $m$, such that $2^m \leq n$. - Subtract $1$ from $a_i$ for all integers $i$, such that $1 \leq i \leq 2^m$. Can you sort the array in non-decreasing order by performing some number (possibly zero) of operations? An array is considered non-decreasing if $a_i \leq a_{i + 1}$ for all integers $i$ such that $1 \leq i \leq n - 1$.
Look at the difference array $b$: $b_i = a_{i + 1} - a_{i}$ for each $i < n$. If the array is sorted, all elements of its difference array are non-negative. The operation adds $1$ to $b_i$ for $i = 2^m$, i.e. only at positions that are powers of $2$. So look at the difference array: if there is an $i$ that is not a power of $2$ with $b_i < 0$, then the answer is "NO". Otherwise the answer is "YES". Time complexity: $O(n)$.
[ "constructive algorithms", "sortings" ]
800
null
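The difference-array check described in the tutorial above can be sketched in Python (the function name is illustrative):

```python
def can_sort_with_twos(a):
    # b_i = a[i+1] - a[i] (1-based i) can only be increased when i is a power
    # of 2, so the array is sortable iff every negative difference sits at a
    # position that is a power of 2
    for i in range(1, len(a)):
        if a[i] < a[i - 1] and (i & (i - 1)) != 0:
            return "NO"
    return "YES"
```

Here `(i & (i - 1)) != 0` tests that the 1-based index of the difference is not a power of two.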
1891
B
Deja Vu
You are given an array $a$ of length $n$, consisting of positive integers, and an array $x$ of length $q$, also consisting of positive integers. There are $q$ modifications. On the $i$-th modification ($1 \leq i \leq q$), for each $j$ ($1 \leq j \leq n$) such that $a_j$ is divisible by $2^{x_i}$, you add $2^{x_i-1}$ to $a_j$. \textbf{Note that} $x_i$ ($1 \leq x_i \leq 30$) is \textbf{a positive integer not exceeding 30}. After \textbf{all} modification queries, you need to output the final array.
Suppose a number is divisible by $2^x$. Then after applying the operation with parameter $x$ it is no longer divisible by $2^x$ (it becomes congruent to $2^{x-1}$ modulo $2^x$). From this we can conclude that if we apply operation $x_i$ and there was an earlier operation $j<i$ with $x_j \le x_i$, then operation $x_i$ does not change the array: no element is divisible by $2^{x_i}$ anymore. So it is useless and can simply be skipped. Therefore we maintain the minimum of the processed $x$. If a new operation is strictly smaller than that minimum, we process it and update the minimum; otherwise we do nothing. Since the minimum decreases at most $30$ times, the time complexity is $O(30 \cdot n)$.
[ "brute force", "math", "sortings" ]
1,100
null
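The minimum-of-processed-$x$ idea from the tutorial can be sketched as follows (the function name is illustrative):

```python
def apply_queries(a, xs):
    # only process a query if its x is strictly below the minimum processed so
    # far; otherwise no element is still divisible by 2^x and it is a no-op
    a = list(a)
    min_x = 31  # x_i <= 30, so nothing has been processed yet
    for x in xs:
        if x >= min_x:
            continue
        min_x = x
        half = 1 << (x - 1)
        for j, v in enumerate(a):
            if v % (half * 2) == 0:  # divisible by 2^x
                a[j] = v + half      # add 2^(x-1)
    return a
```

Since `min_x` can only decrease, each of the at most $30$ useful queries costs one $O(n)$ pass.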
1891
C
Smilo and Monsters
A boy called Smilo is playing a new game! In the game, there are $n$ hordes of monsters, and the $i$-th horde contains $a_i$ monsters. The goal of the game is to destroy all the monsters. To do this, you have two types of attacks and a combo counter $x$, initially set to $0$: - The first type: you choose a number $i$ from $1$ to $n$, such that there is at least one monster left in the horde with the number $i$. Then, you kill one monster from horde number $i$, and the combo counter $x$ increases by $1$. - The second type: you choose a number $i$ from $1$ to $n$, such that there are at least $x$ monsters left in the horde with number $i$. Then, you use an ultimate attack and kill $x$ monsters from the horde with number $i$. After that, $x$ is reset to zero. Your task is to destroy all of the monsters, meaning that there should be no monsters left in any of the hordes. Smilo wants to win as quickly as possible, so he wants to know the minimum number of attacks required to win the game.
Note that if the second operation were free, we would need $\lceil \frac{sum}{2} \rceil$ operations to get rid of all the monsters: when we kill one monster with the first attack, we can kill a second monster for free with the second attack. But the second operation is not free, so we need to use it as little as possible. To do this, we apply ultimates (the second attack) to the current largest horde, exactly when the combo counter reaches the size of that horde, and we apply the first attack to the smallest hordes. This works because the combo counter allows us to defeat up to $\lceil \frac{sum}{2} \rceil$ monsters with ultimates, but since one ultimate cannot be applied to several hordes at once, we want each ultimate to kill as many monsters as possible, keeping the number of second-type attacks as small as possible. This gives the following greedy algorithm. Keep the array sorted and store two pointers: $i$ points to the smallest horde, $j$ to the largest. While $i$ is not equal to $j$: if even after destroying all of horde $i$ we cannot kill horde $j$ with an ultimate (that is, $x + a_i < a_j$), we destroy horde $i$, increase the combo counter $x$ by $a_i$, and move $i$ forward by $1$. Otherwise, we hit horde $i$ exactly as many times as needed for the combo counter $x$ to become $a_j$, apply the second attack to horde $j$, move pointer $j$ one step to the left, decrease $a_i$ accordingly, and reset the combo counter. When $i$ becomes equal to $j$, you just need to apply the first attack the right number of times and finish with an ultimate (or without one if $a_i = 1$). Total complexity: $O(n \log n)$.
[ "binary search", "constructive algorithms", "greedy", "sortings", "two pointers" ]
1,500
null
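The two-pointer greedy above, including the final single-horde case, might look like this in Python (a sketch with illustrative names, not an official implementation; the last-horde handling follows the editorial's description):

```python
def min_attacks(hordes):
    # two-pointer greedy: feed small hordes into the combo counter and spend
    # an ultimate whenever the combo reaches the current largest horde
    a = sorted(hordes)
    i, j = 0, len(a) - 1
    combo = ops = 0
    while i < j:
        if combo + a[i] < a[j]:
            ops += a[i]              # wipe out the smallest horde with hits
            combo += a[i]
            i += 1
        else:
            need = a[j] - combo      # extra hits to raise the combo to a[j]
            a[i] -= need
            ops += need + 1          # the hits plus one ultimate on horde j
            combo = 0
            j -= 1
            if a[i] == 0:
                i += 1
    if i == j:                       # one horde left; here combo < a[i] holds
        h = (a[i] - combo) // 2      # hits before the final ultimate
        ops += h
        if combo + h > 0:
            ops += 1                 # the ultimate kills combo + h monsters
        ops += (a[i] - combo) % 2    # one leftover monster, if any
    return ops
```

In the final branch, hitting `h` times leaves `a[i] - h` monsters while the combo is `combo + h`, so the ultimate is legal and at most one monster remains.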
1891
D
Suspicious logarithms
Let $f(x)$ be the floor of the binary logarithm of $x$. In other words, $f(x)$ is the largest non-negative integer $y$ such that $2^y$ does not exceed $x$. Let $g(x)$ be the floor of the logarithm of $x$ with base $f(x)$. In other words, $g(x)$ is the largest non-negative integer $z$ such that ${f(x)}^{z}$ does not exceed $x$. You are given $q$ queries. The $i$-th query consists of two integers $l_i$ and $r_i$. The answer to the query is the sum of $g(k)$ across all integers $k$ such that $l_i \leq k \leq r_i$. Since the answers might be large, print them modulo ${10^9 + 7}$.
First, let us estimate the number of such $i$ that $g(i) \neq g(i+1)$. This can happen when $f(i) \neq f(i+1)$, but that happens rarely (about $\log C$ times, where $C$ is the maximum value of $i$ in the problem). And on a segment where $f(i)$ is constant (note that $f(i) \geq 2$ for all $i \geq 4$), there are $O(\log n)$ transitions of $g$, where $n$ is the length of the segment. That is, there are $O(\log C)$ segments with constant $f(i)$, and on each segment there are at most $O(\log C)$ transitions, so there are $O(\log^{2} C)$ transitions in total. In fact, there are even fewer, around $60$. Now we can find all such $i$ that $g(i) \neq g(i+1)$, and by iterating over them compute the sum of $g(i)$ for all $i \leq r$. Alternatively, you can write a binary search and get the answer even faster.
[ "binary search", "brute force", "math" ]
1,900
null
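A sketch of this precomputation in Python (names illustrative; it assumes $l \geq 4$ so that $f(x) \geq 2$, and sums each query directly over the precomputed constant-$g$ segments):

```python
MOD = 10**9 + 7

def build_segments(limit=1 << 62):
    # segments (lo, hi, z) with g(x) = z for every x in [lo, hi]
    segs = []
    y = 2
    while (1 << y) <= limit:
        lo_f, hi_f = 1 << y, min((1 << (y + 1)) - 1, limit)  # here f(x) = y
        z = 1
        while y ** z <= hi_f:
            lo = max(lo_f, y ** z)
            hi = min(hi_f, y ** (z + 1) - 1)
            if lo <= hi:
                segs.append((lo, hi, z))
            z += 1
        y += 1
    return segs

SEGS = build_segments()

def range_g_sum(l, r):
    # sum of g(k) for l <= k <= r, assuming l >= 4
    total = 0
    for lo, hi, z in SEGS:
        a, b = max(lo, l), min(hi, r)
        if a <= b:
            total += z * (b - a + 1)
    return total % MOD
```

The segment list stays tiny (a few hundred entries), matching the tutorial's transition count estimate.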
1891
E
Brukhovich and Exams
The boy Smilo is learning algorithms with a teacher named Brukhovich. Over the course of the year, Brukhovich will administer $n$ exams. For each exam, its difficulty $a_i$ is known, which is a non-negative integer. Smilo doesn't like when the greatest common divisor of the difficulties of two consecutive exams is equal to $1$. Therefore, he considers the sadness of the academic year to be the number of such pairs of exams. More formally, the sadness is the number of indices $i$ ($1 \leq i \leq n - 1$) such that $gcd(a_i, a_{i+1}) = 1$, where $gcd(x, y)$ is the greatest common divisor of integers $x$ and $y$. Brukhovich wants to minimize the sadness of the year of Smilo. To do this, he can set the difficulty of any exam to $0$. However, Brukhovich doesn't want to make his students' lives too easy. Therefore, he will perform this action no more than $k$ times. Help Smilo determine the minimum sadness that Brukhovich can achieve if he performs no more than $k$ operations. As a reminder, the greatest common divisor (GCD) of two non-negative integers $x$ and $y$ is the maximum integer that is a divisor of both $x$ and $y$ and is denoted as $gcd(x, y)$. In particular, $gcd(x, 0) = gcd(0, x) = x$ for any non-negative integer $x$.
Consider that every $a_i \neq 1$. Then, we can split our array into blocks in which each pair of adjacent elements has $gcd = 1$. For example, consider $a =$[$2$, $3$, $4$, $5$, $5$, $6$, $7$]; then the array divides into two blocks: [$2$, $3$, $4$, $5$] and [$5$, $6$, $7$]. Notice that if the length of a block is $len$, then it contributes $len - 1$ sadness to Smilo. Also notice that we can reduce sadness by two using only one operation: we can, for instance, nullify the second element in a block (if the length of the block is more than two). For illustration, if we nullify the second element in the block [$5$, $6$, $7$], the sadness will decrease by two, and the block will be split into [$5$], [$0$], [$7$] with zero sadness in each block. If the size of a block is exactly two, we can decrease the sadness by one using one operation. So, while there is at least one block whose length is at least $3$, we can decrease sadness by $2$ per operation; otherwise, we can decrease sadness by at most $1$ per operation. Unfortunately, we have ones in our array, and in that case this algorithm won't work, since in order to make the pair [$1$, $1$] good, we have to nullify both elements. So let's slightly modify the algorithm: we will put consecutive ones in separate blocks. For example, if $a =$[$1$, $1$, $1$, $3$, $4$, $4$, $1$], then our array is split into 4 blocks: [$1$, $1$, $1$], [$3$, $4$], [$4$], [$1$]. Now notice that if a block of ones is located at the edge of our array, then nullifying it decreases the sadness by $len$ using $len$ operations ($len$ is the length of the block). Otherwise, the sadness decreases by $len + 1$ using $len$ operations. The final solution becomes clear. First, we nullify elements in non-ones blocks (while possible) in order to decrease sadness by $2$ with each operation. Then we nullify whole blocks of ones (which are not on the edge of the array) in ascending order of length. And finally, we use the remaining operations to reduce sadness by one each. The complexity is $O(n \log n)$.
[ "brute force", "greedy", "implementation", "math", "sortings" ]
2,500
null
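The basic quantities the editorial manipulates, the sadness and the blocks of consecutive ones, can be sketched as follows (names illustrative; this is only the bookkeeping, not the full greedy):

```python
from math import gcd

def sadness(a):
    # number of adjacent pairs with gcd equal to 1
    return sum(1 for x, y in zip(a, a[1:]) if gcd(x, y) == 1)

def one_blocks(a):
    # lengths of maximal runs of 1s, tagged with whether the run touches an
    # edge of the array (edge runs save len, inner runs save len + 1)
    runs, i, n = [], 0, len(a)
    while i < n:
        if a[i] == 1:
            j = i
            while j < n and a[j] == 1:
                j += 1
            runs.append((j - i, i == 0 or j == n))
            i = j
        else:
            i += 1
    return runs
```

For the editorial's example $[2,3,4,5,5,6,7]$ the sadness is $5$, matching the two blocks of sizes $4$ and $3$ contributing $3 + 2$.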
1891
F
A Growing Tree
You are given a rooted tree with the root at vertex $1$, initially consisting of a single vertex. Each vertex has a numerical value, initially set to $0$. There are also $q$ queries of two types: - The first type: add a child vertex with the number $sz + 1$ to vertex $v$, where $sz$ is the current size of the tree. The numerical value of the new vertex will be $0$. - The second type: add $x$ to the numerical values of all vertices in the subtree of vertex $v$. After all queries, output the numerical value of all of the vertices in the final tree.
Let's parse all the queries and build the final tree first. Then we can easily support subtree addition queries using a segment tree over the Euler tour of the tree. When we add a new vertex, we just need to set its value to zero. How? Get the current value at this vertex (a point query to the segment tree), say it is $x$, and add $-x$ to its value. You can also use a Fenwick tree to handle range-add point-get queries. Time complexity: $O(q \log q)$.
[ "data structures", "dfs and similar", "trees" ]
2,000
null
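An offline sketch of this approach in Python, using a Fenwick tree for range add / point get (names and the query encoding `(1, v)` / `(2, v, x)` are illustrative):

```python
def solve_growing_tree(queries):
    # offline: build the whole tree first, then replay the queries on a
    # Fenwick tree over the Euler tour; a newborn vertex records its current
    # value as an offset so that it effectively starts from zero
    children = [[] for _ in range(len(queries) + 2)]
    sz = 1
    for q in queries:
        if q[0] == 1:
            sz += 1
            children[q[1]].append(sz)
    tin, tout, timer = [0] * (sz + 1), [0] * (sz + 1), 0
    stack = [(1, False)]
    while stack:                      # iterative Euler tour
        v, processed = stack.pop()
        if processed:
            tout[v] = timer
        else:
            timer += 1
            tin[v] = timer
            stack.append((v, True))
            for c in reversed(children[v]):
                stack.append((c, False))
    bit = [0] * (sz + 2)              # Fenwick: range add, point query
    def add(i, x):
        while i <= sz:
            bit[i] += x
            i += i & -i
    def point(i):
        s = 0
        while i > 0:
            s += bit[i]
            i -= i & -i
        return s
    offset = [0] * (sz + 1)
    cur = 1
    for q in queries:
        if q[0] == 1:
            cur += 1
            offset[cur] = point(tin[cur])   # value to subtract at the end
        else:
            _, v, x = q
            add(tin[v], x)                  # add x on [tin[v], tout[v]]
            add(tout[v] + 1, -x)
    return [point(tin[v]) - offset[v] for v in range(1, sz + 1)]
```

Recording `offset[cur]` at birth and subtracting it at the end is equivalent to the tutorial's "add $-x$" trick.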
1893
A
Anonymous Informant
You are given an array $b_1, b_2, \ldots, b_n$. An anonymous informant has told you that the array $b$ was obtained as follows: initially, there existed an array $a_1, a_2, \ldots, a_n$, after which the following two-component operation was performed $k$ times: - A fixed point$^{\dagger}$ $x$ of the array $a$ was chosen. - Then, the array $a$ was cyclically shifted to the left$^{\ddagger}$ exactly $x$ times. As a result of $k$ such operations, the array $b_1, b_2, \ldots, b_n$ was obtained. You want to check if the words of the anonymous informant can be true or if they are guaranteed to be false. $^{\dagger}$A number $x$ is called a fixed point of the array $a_1, a_2, \ldots, a_n$ if $1 \leq x \leq n$ and $a_x = x$. $^{\ddagger}$A cyclic left shift of the array $a_1, a_2, \ldots, a_n$ is the array $a_2, \ldots, a_n, a_1$.
The key idea is that after applying an operation with the number $x$, the last element of the resulting array will be equal to $x$. Since after $x$ cyclic shifts, the array $[a_1, a_2, \ldots, a_n]$ will become $[a_{x+1}, \ldots, a_n, a_1, \ldots, a_x]$, and $a_x = x$, as $x$ was a fixed point of the array $a$. From this idea, we can deduce that the operation can always be undone in at most one way: we need to look at the last element ($a_n$), and if $a_n > n$, it is not possible to undo the operation. Otherwise, we need to cyclically shift the array to the right $a_n$ times. Therefore, the solution to the problem will be to undo the operation on the array by $1$ step $k$ times. If at any point it becomes impossible to undo the operation, the answer is "No". Otherwise, the answer is "Yes". To avoid explicitly shifting the array, we can store a variable $s$ representing the index of the initial element, indicating that the current array is equal to $[a_s, a_{s+1}, \ldots, a_n, a_1, \ldots, a_{s-1}]$. When cyclically shifting the array to the right by $x$, $s$ decreases by $x$, initially $s = 1$. Also, since $k$ in the problem can be up to $10^9$, simulating $k$ undo operations would be very time-consuming. However, we can notice that if we manage to undo the operation $n$ times without breaking anything, we have entered a cycle. Therefore, if we successfully undo the operation $n$ times, the answer is definitely "Yes". In other words, it was sufficient to do k = min(k, n). Time complexity: $O(n)$
[ "brute force", "dfs and similar", "graphs", "implementation" ]
1,400
#include <bits/stdc++.h> using namespace std; void solve() { int n, k; cin >> n >> k; vector<int> a(n); for (int i = 0; i < n; i++) { cin >> a[i]; } k = min(k, n); int last = n - 1; for (int _ = 0; _ < k; _++) { if (a[last] > n) { cout << "No\n"; return; } last += n - a[last]; if (last >= n) { last -= n; } } cout << "Yes\n"; } signed main() { ios::sync_with_stdio(false); cin.tie(nullptr); cout.tie(nullptr); int t; cin >> t; while (t--) { solve(); } }
1893
B
Neutral Tonality
You are given an array $a$ consisting of $n$ integers, as well as an array $b$ consisting of $m$ integers. Let $\text{LIS}(c)$ denote the length of the longest increasing subsequence of array $c$. For example, $\text{LIS}([2, \underline{1}, 1, \underline{3}])$ = $2$, $\text{LIS}([\underline{1}, \underline{7}, \underline{9}])$ = $3$, $\text{LIS}([3, \underline{1}, \underline{2}, \underline{4}])$ = $3$. You need to insert the numbers $b_1, b_2, \ldots, b_m$ into the array $a$, at any positions, in any order. Let the resulting array be $c_1, c_2, \ldots, c_{n+m}$. You need to choose the positions for insertion in order to \textbf{minimize} $\text{LIS}(c)$. Formally, you need to find an array $c_1, c_2, \ldots, c_{n+m}$ that simultaneously satisfies the following conditions: - The array $a_1, a_2, \ldots, a_n$ is a subsequence of the array $c_1, c_2, \ldots, c_{n+m}$. - The array $c_1, c_2, \ldots, c_{n+m}$ consists of the numbers $a_1, a_2, \ldots, a_n, b_1, b_2, \ldots, b_m$, possibly rearranged. - The value of $\text{LIS}(c)$ is the \textbf{minimum} possible among all suitable arrays $c$.
First observation: $\text{LIS}(c) \geq \text{LIS}(a)$. This is true because the array $c$ will always contain $a$ as a subsequence, and therefore any subsequence of $a$ as well. Notice that it is always possible to achieve $\text{LIS}(c) \leq \text{LIS}(a) + 1$. Let $b_1 \geq \ldots \geq b_m$. This can be achieved by inserting $b_1, \ldots, b_m$ in a non-increasing order at any positions in the array $a$, because in this case, any increasing subsequence of $c$ can have at most one element from the array $b$, which means $\text{LIS}(c) \leq \text{LIS}(a) + 1$. Therefore, we need to understand when we can achieve $\text{LIS}(c) = \text{LIS}(a)$. It turns out that this can always be achieved! Let's understand how to insert one number ($x$) into the array while preserving $\text{LIS}(a)$. If $x < \min(a_1, \ldots, a_n)$, then we insert $x$ at the end of the array $a$, so that $x$ cannot be present in any increasing subsequence of length at least $2$, and $\text{LIS}(a)$ will not increase. Otherwise, we can insert $x$ before the first index $i$ such that $x \geq a_i$. Then, both $x$ and $a_i$ cannot be present in the same increasing subsequence. However, in any increasing subsequence where $x$ is present, we can replace $x$ with $a_i$ and the subsequence will still remain increasing. Thus, $\text{LIS}(a)$ will not increase. Now we can apply this solution $m$ times, inserting the numbers $b_1 \geq b_2 \geq \ldots \geq b_m$ one by one. The final algorithm turns out to be surprisingly simple and similar to the merge(a, b) function. Specifically, iterate on arrays $a$ and $b$ with two pointers, at each step adding the larger of the two numbers ($a_i$ or $b_j$) to the answer and moving the corresponding pointer. Time complexity: $O(n + m \log m)$
[ "constructive algorithms", "greedy", "sortings", "two pointers" ]
1,700
#include <bits/stdc++.h> #define pb push_back #define all(x) x.begin(), (x).end() #define rall(x) x.rbegin(), (x).rend() using namespace std; void solve() { int n, m; cin >> n >> m; vector<int> a(n), b(m), c(n + m); for (int i = 0; i < n; i++) { cin >> a[i]; } for (int i = 0; i < m; i++) { cin >> b[i]; } sort(rall(b)); merge(all(a), all(b), c.begin(), greater<int>()); for (int i = 0; i < n + m; i++) { cout << c[i] << ' '; } cout << '\n'; } signed main() { ios::sync_with_stdio(false); cin.tie(nullptr); cout.tie(nullptr); int t; cin >> t; while (t--) { solve(); } }
1893
C
Freedom of Choice
Let's define the anti-beauty of a multiset $\{b_1, b_2, \ldots, b_{len}\}$ as the number of occurrences of the number $len$ in the multiset. You are given $m$ multisets, where the $i$-th multiset contains $n_i$ distinct elements, specifically: $c_{i, 1}$ copies of the number $a_{i,1}$, $c_{i, 2}$ copies of the number $a_{i,2}, \ldots, c_{i, n_i}$ copies of the number $a_{i, n_i}$. It is guaranteed that $a_{i, 1} < a_{i, 2} < \ldots < a_{i, n_i}$. You are also given numbers $l_1, l_2, \ldots, l_m$ and $r_1, r_2, \ldots, r_m$ such that $1 \le l_i \le r_i \le c_{i, 1} + \ldots + c_{i, n_i}$. Let's create a multiset $X$, initially empty. Then, for each $i$ from $1$ to $m$, you must perform the following action \textbf{exactly once}: - Choose some $v_i$ such that $l_i \le v_i \le r_i$ - Choose any $v_i$ numbers from the $i$-th multiset and add them to the multiset $X$. You need to choose $v_1, \ldots, v_m$ and the added numbers in such a way that the resulting multiset $X$ has the \textbf{minimum possible} anti-beauty.
Note that after performing all operations, the multiset $X$ can have any integer size from $\sum l_i$ to $\sum r_i$. And the number of distinct numbers that can potentially be in $X$ is definitely not greater than $\sum n_i$. Therefore, if $\sum r_i - \sum l_i > \sum n_i$, there will always be a number from $\sum l_i$ to $\sum r_i$ that is not present in any of the $m$ given multisets, and thus will not be in $X$. By making $X$ of such size, we will have the anti-beauty of $X$ equal to $0$, which is the minimum achievable anti-beauty. So, if $\sum r_i - \sum l_i > \sum n_i$, we can output $0$ and terminate. Then we solve the problem under the condition $\sum r_i - \sum l_i \leq \sum n_i$. $\sum n_i$ is limited to $10^5$. Therefore, we can explicitly iterate through all possible sizes of the multiset $X$, from $\sum l_i$ to $\sum r_i$, and for a fixed size $s$, determine the minimum anti-beauty if $X$ has size $s$. Then we take the minimum of these values as the answer. For a fixed size $s$, to minimize the anti-beauty, we want to include as many numbers not equal to $s$ in $X$ as possible. For the multisets where $s$ is not present, we want to take $r_i$. To calculate this sum, we can take the sum of all $r_i$ and subtract from it the sum of $r_i$ for the multisets in which $s$ is present. And for those multisets where $s$ is present, we can explicitly iterate through them and use simple formulas to calculate how many $s$ we are obligated to take from each multiset. This will take at most $O(\sum n_i)$. For specific formulas, refer to the code.
[ "brute force", "greedy", "implementation" ]
2,000
#include <bits/stdc++.h> #define pb push_back #define int long long #define all(x) x.begin(), (x).end() using namespace std; void solve() { int m; cin >> m; vector<int> n(m), l(m), r(m); vector<vector<int>> a(m); vector<vector<int>> c(m); vector<int> sumc(m); int suml = 0, sumr = 0, sumn = 0; for (int i = 0; i < m; i++) { cin >> n[i] >> l[i] >> r[i]; sumn += n[i]; suml += l[i]; sumr += r[i]; a[i].resize(n[i]); for (int j = 0; j < n[i]; j++) { cin >> a[i][j]; } c[i].resize(n[i]); for (int j = 0; j < n[i]; j++) { cin >> c[i][j]; sumc[i] += c[i][j]; } } if (sumr - suml > sumn) { cout << "0\n"; return; } map<int, int> sumr_a; map<int, vector<pair<int, int>>> indexes; for (int i = 0; i < m; i++) { for (int j = 0; j < n[i]; j++) { sumr_a[a[i][j]] += r[i]; indexes[a[i][j]].pb({i, j}); } } int ans = (int) 2e18; for (int len = suml; len <= sumr; len++) { int xsize = 0, must_len = 0; xsize += sumr - sumr_a[len]; for (auto &[i, pos] : indexes[len]) { int cnt_not_len = sumc[i] - c[i][pos]; if (cnt_not_len < l[i]) { xsize += l[i]; must_len += l[i] - cnt_not_len; } else { xsize += min(cnt_not_len, r[i]); } } ans = min(ans, must_len + max(0LL, len - xsize)); } cout << ans << '\n'; } signed main() { ios::sync_with_stdio(false); cin.tie(nullptr); cout.tie(nullptr); int t; cin >> t; while (t--) { solve(); } }
1893
D
Colorful Constructive
You have $n$ colored cubes, the $i$-th cube has color $a_i$. You need to distribute all the cubes on shelves. There are a total of $m$ shelves, the $i$-th shelf can hold $s_i$ cubes. Also, $s_1 + s_2 + \ldots + s_m = n$. Suppose on a shelf of size $k$ there are cubes of colors $c_1, c_2, \ldots, c_k$, \textbf{in this order}. Then we define the colorfulness of the shelf as the minimum distance between two different cubes of the same color on the shelf. If all the cubes on the shelf have different colors, then the colorfulness is considered to be equal to the size of the shelf, that is, the number $k$. More formally, the colorfulness of $c_1, c_2, \ldots, c_k$ is defined as follows: - If all the colors $c_1, c_2, \ldots, c_k$ are different, the colorfulness is considered to be $k$. - Otherwise, the colorfulness is considered to be the smallest integer $x \geq 1$ such that there exists an index $i$ $(1 \le i \le k - x)$ such that $c_i = c_{i+x}$. For each shelf, you are given the minimum required colorfulness, that is, you are given numbers $d_1, d_2, \ldots, d_m$, which mean that shelf $i$ must have a colorfulness $\geq d_i$ for all $i$. Distribute the available cubes among the shelves to ensure the required colorfulness, or report that it is impossible.
Let's try to come up with a condition equivalent to "A set of cubes $a_1, \ldots, a_n$ can be distributed on a shelf of size $n$ such that the colorfulness of the shelf is $\geq d$" in such a way that this condition can be conveniently combined for $m$ shelves. Let's find the necessary condition (the condition that must be satisfied) if the set of cubes $a_1 \ldots a_n$ can be correctly placed on a shelf with parameters $(n, d)$. In such an arrangement, every $\leq d$ adjacent cubes must have different colors. This means that we can divide the shelf into subsegments as follows: $(a_1, \ldots, a_d), (a_{d+1}, \ldots, a_{2d}), \ldots, (a_{d \cdot (\lfloor \frac{n}{d} \rfloor - 1) + 1}, \ldots, a_{d \cdot \lfloor \frac{n}{d} \rfloor}), (a_{d \cdot \lfloor \frac{n}{d} \rfloor + 1}, \ldots, a_{n})$ such that the colors of all cubes in each subsegment are different. From this, the necessary condition follows: "The set of cubes $a_1, \ldots, a_n$ can be divided into $\lfloor \frac{n}{d} \rfloor$ groups of size $d$, as well as a group of size $n \mod d$, such that the colors of all cubes in each group are different." It turns out that this same condition is also sufficient! We will show this by presenting a way to place all the cubes from the "$\lfloor \frac{n}{d} \rfloor$ groups of size $d$, and the group of size $n \mod d$ (where the colors of all cubes in each group are different)" on a shelf of size $n$ such that the colorfulness of the shelf is $\geq d$. How to do it: place the cubes from the group of size $n \mod d$ in the first positions. Then consider the groups of size $d$ in any order. Place the cubes whose colors are already present in the previously placed array at the same indices as in the previous array, at a distance of $d$, and place the rest of the cubes however you like. It is easy to see that in this construction, the distance between any two identical colors is at least $d$. 
Example for understanding: let $d = 3$ and the given groups be: $[1, 2, 3], [3, 4, 5], [1, 3, 4], [1, 2]$. Then we proceed as follows: first, place $[1, 2]$. Then we want to add the group $[1, 2, 3]$: $[1, 2, ?, ?, ?]$. The elements $1$ and $2$ were present in the previous array, so we place them at a distance of $d = 3$ from the previous occurrences: $[1, 2, ?, 1, 2]$. Fill in the remaining element: $[1, 2, 3, 1, 2]$. Now append $[3, 4, 5]$: $[1, 2, 3, 1, 2, ?, ?, ?]$. The element $3$ is present in the previous array, so we place it: $[1, 2, 3, 1, 2, 3, ?, ?]$. Fill in the remaining elements in any order: $[1, 2, 3, 1, 2, 3, 4, 5]$. After that, append the group $[1, 3, 4]$ and obtain the final arrangement: $[1, 2, 3, 1, 2, 3, 4, 5, 3, 4, 1]$. The colorfulness is $\geq 3$. What to do with this information now? If we replace the conditions for all $m$ shelves with equivalent ones, we get: "The cubes can be placed on shelves $(s_1, d_1), \ldots, (s_m, d_m)$" is equivalent to "The cubes can be divided into groups of sizes $[\underbrace{d_1, \ldots, d_1}_{\lfloor \frac{s_1}{d_1} \rfloor}, s_1\mod d_1, \ldots, \underbrace{d_m, \ldots, d_m}_{\lfloor \frac{s_m}{d_m} \rfloor}, s_m \mod d_m]$, such that all colors are different in each group." Thus, we have reduced the problem to a simpler one: given $a_1, \ldots, a_n$ and a set $w_1, \ldots, w_k$ with a sum of $n$. We need to divide the numbers $a_1, \ldots, a_n$ into arrays of sizes $w_1, \ldots, w_k$ such that all numbers in each array are different. This problem can be solved using a simple greedy algorithm. Let $[a_1, \ldots, a_n]$ consist of $cnt_1$ copies of element $b_1$, $cnt_2$ copies of element $b_2$, ..., $cnt_t$ copies of element $b_t$ ($\sum cnt_i = n$). Then we consider the elements $b_1, \ldots, b_t$ in any order and place all copies of the current element in arrays of the largest length. If there are fewer arrays with non-zero length remaining than $cnt_i$, there is no solution. 
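The placement construction above can be sketched in code. This is our own illustrative sketch (the function names are not from the reference solution): it takes the groups (the first may be shorter than $d$, the rest have size $d$, each with pairwise-distinct colors), appends them block by block placing repeated colors exactly $d$ positions after their previous occurrence, and a small checker verifies the resulting colorfulness.

```python
def place_groups(groups, d):
    """Concatenate groups so that the colorfulness of the shelf stays >= d.

    groups[0] may have size n % d (< d); every other group has size d.
    Within each group all colors are assumed pairwise distinct.
    """
    shelf = list(groups[0])
    for g in groups[1:]:
        tail = shelf[-d:]            # last min(d, len(shelf)) placed cubes
        t = len(tail)
        block, rest = [None] * d, []
        for c in g:
            if c in tail:
                # put c exactly d positions after its previous occurrence
                block[tail.index(c) + d - t] = c
            else:
                rest.append(c)
        for i in range(d):           # fill the free slots arbitrarily
            if block[i] is None:
                block[i] = rest.pop()
        shelf += block
    return shelf

def colorfulness(shelf):
    """Minimum distance between equal colors, or len(shelf) if all distinct."""
    k = len(shelf)
    dists = [j - i for i in range(k) for j in range(i + 1, k) if shelf[i] == shelf[j]]
    return min(dists) if dists else k
```

On the groups from the example above, `place_groups([[1, 2], [1, 2, 3], [3, 4, 5], [1, 3, 4]], 3)` produces a shelf of all 11 cubes with colorfulness at least $3$ (the concrete arrangement may differ from the one in the text, since free slots are filled arbitrarily).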
After placing a copy, the remaining capacity of each array that received it decreases by $1$. It is easy to see that if we obtain a solution, then the numbers in all arrays will indeed be different. This solution can be easily implemented using std::set. The correctness of the greedy algorithm follows from the fact that if there are two arrays of lengths $n_1 > n_2$ consisting of different numbers, then in the array of length $n_1$, there must be some number that is not present in the array of length $n_2$, so any number from the array of length $n_2$ can be swapped with some number from the array of length $n_1$ while maintaining the property that the numbers in both arrays are different. From this, it follows that if we have some solution to the problem and a fixed element, we can obtain a solution in which this fixed element is only present in the largest arrays, which proves our solution. Second solution: Ultra Greedy Algorithm. We go through shelves and places on the shelf, and at each moment we place a cube of a color that occurs the most times among the remaining cubes, naturally excluding colors that would break the condition of having different colors (i.e., excluding the colors of the previous $\max(0, d_i - 1)$ cubes on this shelf). The proof of why this solution works is omitted here.
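The greedy step — always putting all copies of a color into the arrays with the largest remaining capacity — can be sketched with a max-heap instead of std::set (our own illustrative sketch, not the reference solution):

```python
import heapq
from collections import Counter

def split_into_groups(values, sizes):
    """Split the multiset `values` into groups of the given sizes so that
    each group has pairwise-distinct elements, or return None if impossible."""
    # store capacities negated, so heapq acts as a max-heap
    slots = [[-s, i] for i, s in enumerate(sizes) if s > 0]
    heapq.heapify(slots)
    groups = [[] for _ in sizes]
    for v, cnt in Counter(values).items():
        if cnt > len(slots):
            return None              # more copies than non-full groups
        taken = [heapq.heappop(slots) for _ in range(cnt)]
        for cap, i in taken:
            groups[i].append(v)
            if cap + 1 < 0:          # some capacity still remains
                heapq.heappush(slots, [cap + 1, i])
    return groups
```

For example, `split_into_groups([1, 1, 2, 2, 3], [3, 2])` yields two groups of sizes $3$ and $2$ with distinct elements in each, while `split_into_groups([1, 1, 1], [2, 1])` returns None, since three copies cannot fit into two groups.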
[ "constructive algorithms", "data structures", "greedy" ]
2,600
#include <bits/stdc++.h> #define pb push_back #define all(x) x.begin(), (x).end() #define rall(x) x.rbegin(), (x).rend() using namespace std; void solve() { int n, m; cin >> n >> m; vector<int> a(n); for (int i = 0; i < n; i++) { cin >> a[i]; } vector<int> s(m), d(m); for (int i = 0; i < m; i++) { cin >> s[i]; } for (int i = 0; i < m; i++) { cin >> d[i]; } vector<int> cnt(n + 1); for (int i = 0; i < n; i++) { cnt[a[i]]++; } set<pair<int, int>> cubes; for (int x = 1; x <= n; x++) { if (cnt[x] == 0) continue; cubes.insert({cnt[x], x}); } vector<vector<int>> ans(m); for (int i = 0; i < m; i++) { ans[i].assign(s[i], 0); for (int j = 0; j < s[i]; j++) { if (j >= d[i]) { if (cnt[ans[i][j - d[i]]] > 0) { cubes.insert({cnt[ans[i][j - d[i]]], ans[i][j - d[i]]}); } } if (cubes.empty()) { cout << "-1\n"; return; } ans[i][j] = (*cubes.rbegin()).second; cubes.erase(*cubes.rbegin()); cnt[ans[i][j]]--; } for (int j = s[i]; j < s[i] + d[i]; j++) { if (cnt[ans[i][j - d[i]]] > 0) { cubes.insert({cnt[ans[i][j - d[i]]], ans[i][j - d[i]]}); } } } for (int i = 0; i < m; i++) { for (int j = 0; j < s[i]; j++) { cout << ans[i][j] << ' '; } cout << '\n'; } } signed main() { ios::sync_with_stdio(false); cin.tie(nullptr); cout.tie(nullptr); int t; cin >> t; while (t--) { solve(); } }
1893
E
Cacti Symphony
You are given an undirected connected graph in which any two distinct simple cycles \textbf{do not have} common vertices. Since the graph can be very large, it is given to you in a compressed form: for each edge, you are also given a number $d$, which indicates that there are $d$ additional vertices on this edge. You need to assign a weight to each vertex and each edge of the graph — an integer from $1$ to $3$. An edge of the graph is called good if the bitwise XOR of the weights of its adjacent vertices is \textbf{not equal to} $0$ and \textbf{not equal to} the weight of that edge. Similarly, a vertex of the graph is called good if the bitwise XOR of the weights of its adjacent edges is \textbf{not equal to} $0$ and \textbf{not equal to} the weight of that vertex. You need to determine how many ways there are to assign weights to the vertices and edges of the graph so that all vertices and edges are good. Since the answer can be quite large, you need to calculate the remainder of the answer divided by $998\,244\,353$.
First observation: for each edge, the vertices connected by it have different weights, otherwise the XOR of the weights of the adjacent vertices of this edge is equal to $0$. Second observation: for each edge, one of its adjacent vertices has a weight equal to the weight of the edge, since $1 \oplus 2 \oplus 3 = 0$. From these two observations, it follows that each edge has the same weight as EXACTLY one of the vertices it is connected to. Let's say that a vertex controls all the adjacent edges with the same weight. Then each edge is controlled by exactly one vertex. Third observation: if a vertex has an even degree, then the number of edges controlled by it must be odd, and this is a sufficient condition for the vertex to be good. Fourth observation: if a vertex has an odd degree, then the number of edges controlled by it must be even, and this is a sufficient condition for the vertex to be good. Therefore, in order for all vertices to be good, it is necessary to choose who controls each edge in such a way that the parity of the vertex degree and the parity of the number of edges controlled by it are different. And in order for all edges to be good, it is necessary to assign weights from $1$ to $3$ to the vertices in such a way that no two adjacent vertices have the same weight. The weight of each edge will then be equal to the weight of the vertex it is controlled by. Now the problem is divided into two independent parts: finding the number of ways to assign weights to the vertices, and finding the number of ways to orient the edges. The answer to the original problem will be the product of the answers to these two parts. How many ways are there to color the vertices with $3$ colors, such that two adjacent vertices are colored differently? How many ways are there to orient the edges, such that a vertex with an even degree has an odd number of outgoing edges, and a vertex with an odd degree has an even number of outgoing edges? 
Solution to problem 1: note that if there is a bridge in the graph $G$, and it divides the graph into graphs $G_1$ and $G_2$, then the answer for $G$ is the product of the answers for $G_1$ and $G_2$ multiplied by $\frac{2}{3}$. Then we can identify all the bridges, and since the graph is a vertex cactus, after removing the bridges, the graph will be divided into cycle components. For a cycle of length $n$, the number of ways to color the vertices can be calculated using simple dynamic programming: $dp_1 = 3, dp_3 = 6, dp_4 = 18, dp_n = dp_{n - 1} + 2 \cdot dp_{n - 2}$. To prove this dynamic programming, consider any two vertices of the cycle that have exactly one vertex between them. If they have the same color, then the vertex between them has $2$ possible colors, and the rest have $dp_{n-2}$ possibilities. If they have different colors, then there are $dp_{n-1}$ possibilities. Knowing the transition formula, we can notice that $dp_n = 2^n + 2 \cdot (-1)^n$. This formula can be trivially proved by induction. Solution to problem 2: we will solve the same problem, but requiring that each vertex has an odd number of outgoing edges (if we invert all the edges in such a graph, we will get what is required). If $n$ and $m$ have different parity, then the answer is $0$, since the sum of the $n$ out-degrees, each of which is odd, must be equal to $m$. Otherwise, the answer is $2^{\text{number of cycles}}$. At any moment, we can remove any leaf, since the orientation of its single edge is forced by its parity requirement. We will remove the leaves as long as they exist. After this, one of the cycles in the graph will be a "leaf", connected to only one other cycle. For all vertices of this cycle, the required out-degree parities are known. Choose any edge of the cycle and orient it (there are $2$ ways to orient it, both will work). After that, all other edges in the cycle are oriented uniquely. After that, the bridge between this cycle and the other one is also uniquely oriented. 
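The closed form $dp_n = 2^n + 2 \cdot (-1)^n$ for the number of proper 3-colorings of a cycle can be sanity-checked against brute force for small $n \geq 2$ (a verification sketch, not part of the solution; $n = 1$ is the special case $dp_1 = 3$, which the self-loop brute force below does not model):

```python
from itertools import product

def cycle_colorings_brute(n):
    # count assignments of 3 colors to a cycle of length n
    # such that no two neighboring vertices share a color
    return sum(
        all(c[i] != c[(i + 1) % n] for i in range(n))
        for c in product(range(3), repeat=n)
    )

def cycle_colorings_formula(n):
    return 2 ** n + 2 * (-1) ** n
```

Both functions agree for small cycles, e.g. $6$ colorings for a triangle and $18$ for a 4-cycle.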
Now we have one less cycle, and we also know all the required out-degree parities (the parity requirement changes for the vertex where the bridge came in). In this way, we can gradually reduce the number of cycles. With each removal of a cycle, the answer is multiplied by $2$, since there are $2$ ways to orient the chosen edge in the cycle. In the end, we get $2^{\text{number of cycles}}$ ways. The final answer to the problem, if $n$ and $m$ have the same parity, is: $ans = \left(\frac{2}{3}\right)^{\text{number of bridges}} \cdot 2^{\text{number of cycles}} \cdot (2^{c_1} + 2 \cdot (-1)^{c_1}) \cdot \ldots \cdot (2^{c_k} + 2 \cdot (-1)^{c_k})$, where $c_1, \ldots, c_k$ are the sizes of the cycles (a cycle can be a single vertex if it did not lie on any actual cycle, and $c_i = 0$ in this case).
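Putting the pieces together modulo $998244353$ can be sketched as follows (our own sketch; it assumes `n_bridges`, `n_cycles` and `cycle_sizes` have already been extracted from the cactus, which is where the actual work in the reference solution happens):

```python
MOD = 998_244_353

def cactus_answer(n, m, n_bridges, n_cycles, cycle_sizes):
    """Evaluate the final formula from the editorial, modulo MOD."""
    if n % 2 != m % 2:
        return 0
    inv3 = pow(3, MOD - 2, MOD)                  # modular inverse of 3
    ans = pow(2 * inv3 % MOD, n_bridges, MOD)    # (2/3)^{#bridges}
    ans = ans * pow(2, n_cycles, MOD) % MOD      # 2^{#cycles}
    for c in cycle_sizes:
        term = (pow(2, c, MOD) + (2 if c % 2 == 0 else -2)) % MOD
        ans = ans * term % MOD                   # 2^{c_i} + 2*(-1)^{c_i}
    return ans
```

As a check, for a plain triangle (no compressed vertices) there are $0$ bridges, $1$ cycle of size $3$, and the formula gives $2 \cdot (2^3 - 2) = 12$, matching a direct brute force over all $3^6$ weight assignments.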
[ "combinatorics", "dfs and similar", "dp", "graphs" ]
3,500
#include <bits/stdc++.h> typedef long long ll; #define pb push_back using namespace std; const int M = 998244353; const int N = 500500; vector<pair<int, int>> g[N]; set<int> bridgesV[N]; bitset<N> used; int h[N], d[N]; ll bridges = 0; ll solo = 0; int binpow(ll a, ll x) { ll ans = 1; while (x) { if (x % 2) { ans *= a; ans %= M; } a *= a; a %= M; x /= 2; } return (int) ans; } void dfs(int v, int p = -1) { if (p != -1) { d[v] = h[p] + 1; h[v] = h[p] + 1; } used[v] = true; for (auto &[u, w] : g[v]) { if (u == p) continue; if (!used[u]) { dfs(u, v); d[v] = min(d[v], d[u]); if (h[v] < d[u]) { bridgesV[v].insert(u); bridgesV[u].insert(v); bridges += w + 1; solo += w; } } else { d[v] = min(d[v], h[u]); } } } int calc_dp(ll n) { n %= (M - 1); if (n == 1) return 3; if (n == 2) return 0; int val = binpow(2, n); if (n % 2 == 1) { val += M - 2; } else { val += 2; } val %= M; return val; } signed main() { ios::sync_with_stdio(false); cin.tie(nullptr); cout.tie(nullptr); int n, m; cin >> n >> m; for (int i = 0; i < m; i++) { int a, b, w; cin >> a >> b >> w; a--; b--; g[a].pb({b, w}); g[b].pb({a, w}); } if (n % 2 != m % 2) { cout << "0\n"; return 0; } dfs(0); used.reset(); ll ans = 1; for (int v = 0; v < n; v++) { if (!used[v]) { ll cs = 0; vector<int> q = {v}; used[v] = true; while (!q.empty()) { int u = q.back(); q.pop_back(); for (auto &[uu, w] : g[u]) { if (bridgesV[u].find(uu) != bridgesV[u].end()) continue; cs += w + 1; if (!used[uu]) { used[uu] = true; q.pb(uu); } } } cs /= 2; cs = max(cs, 1LL); ans *= calc_dp(cs); ans %= M; } } ans *= binpow(3, solo); ans %= M; int w = (2 * binpow(3, M - 2)) % M; ans *= binpow(w, bridges); ans %= M; int cycles = (m + 1) - n; ans *= binpow(2, cycles); ans %= M; cout << ans << '\n'; }
1894
A
Secret Sport
Let's consider a game in which two players, A and B, participate. This game is characterized by two positive integers, $X$ and $Y$. The game consists of sets, and each set consists of plays. In each play, \textbf{exactly one} of the players, either A or B, wins. A set ends \textbf{exactly} when one of the players reaches $X$ wins in the plays of that set. This player is declared the winner of the set. The players play sets until one of them reaches $Y$ wins in the sets. After that, the game ends, and this player is declared the winner of the entire game. You have just watched a game but didn't notice who was declared the winner. You remember that during the game, $n$ plays were played, and you know which player won each play. However, you \textbf{do not know} the values of $X$ and $Y$. Based on the available information, determine who won the entire game — A or B. If there is not enough information to determine the winner, you should also report it.
The winner of the game is the player who wins the last play. Indeed, the winner of the game is the player who wins the last set (otherwise the last set would not have been played), and the winner of a set is the player who wins its last play (otherwise the last play would not have been played).
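The claim can be sanity-checked by brute force: enumerate every complete game for small $X, Y$ and verify that the game winner always won the last play (our own verification sketch):

```python
def all_games(X, Y):
    """Yield (play_string, game_winner) for every complete game with given X, Y."""
    def one_set(a=0, b=0, s=''):
        # enumerate all play sequences of one set (first to X play wins)
        if a == X:
            yield s, 'A'
            return
        if b == X:
            yield s, 'B'
            return
        yield from one_set(a + 1, b, s + 'A')
        yield from one_set(a, b + 1, s + 'B')

    def sets(won_a, won_b, prefix):
        # enumerate all set sequences (first to Y sets wins the game)
        if won_a == Y:
            yield prefix, 'A'
            return
        if won_b == Y:
            yield prefix, 'B'
            return
        for plays, set_winner in one_set():
            if set_winner == 'A':
                yield from sets(won_a + 1, won_b, prefix + plays)
            else:
                yield from sets(won_a, won_b + 1, prefix + plays)

    yield from sets(0, 0, '')
```

For every game generated this way, the last character of the play string coincides with the winner, which is exactly what the one-line solution relies on.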
[ "implementation", "strings" ]
800
#include <bits/stdc++.h> using namespace std; void solve() { int n; cin >> n; string s; cin >> s; cout << s.back() << '\n'; } signed main() { ios::sync_with_stdio(false); cin.tie(nullptr); cout.tie(nullptr); int t; cin >> t; while (t--) { solve(); } }
1894
B
Two Out of Three
You are given an array $a_1, a_2, \ldots, a_n$. You need to find an array $b_1, b_2, \ldots, b_n$ consisting of numbers $1$, $2$, $3$ such that \textbf{exactly two} out of the following three conditions are satisfied: - There exist indices $1 \leq i, j \leq n$ such that $a_i = a_j$, $b_i = 1$, $b_j = 2$. - There exist indices $1 \leq i, j \leq n$ such that $a_i = a_j$, $b_i = 1$, $b_j = 3$. - There exist indices $1 \leq i, j \leq n$ such that $a_i = a_j$, $b_i = 2$, $b_j = 3$. If such an array does not exist, you should report it.
By symmetry, it doesn't matter which two conditions are satisfied. Let's assume it's the conditions $(1, 2)$ and $(1, 3)$. Then the set of values among elements with $b_i = 2$ and the set of values among elements with $b_i = 3$ must not intersect (otherwise condition $(2, 3)$ would also hold). Therefore, it is sufficient to assign $b_i = 2$ to a single element whose value also appears among the elements with $b_i = 1$, and $b_i = 3$ to a single such element, in order to satisfy the conditions $(1, 2)$ and $(1, 3)$. And these two values must be distinct in order to not satisfy the condition $(2, 3)$. Thus, we obtain a necessary condition for the existence of an answer: the array must have at least two distinct values that occur more than once. It is easy to see that this condition is also sufficient, by constructing an example: let $x, y$ be two distinct numbers that occur more than once in the array. Then set $b_i = 2$ for one occurrence of $x$, set $b_i = 3$ for one occurrence of $y$, and set $b_i = 1$ for all other elements; this will be a suitable answer.
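The construction can be sketched directly (our own sketch mirroring the reference C++ solution):

```python
from collections import defaultdict

def two_out_of_three(a):
    """Return b with exactly two of the three conditions satisfied, or None."""
    positions = defaultdict(list)
    for i, v in enumerate(a):
        positions[v].append(i)
    repeated = [v for v in positions if len(positions[v]) >= 2]
    if len(repeated) < 2:
        return None                     # need two distinct repeated values
    b = [1] * len(a)
    b[positions[repeated[0]][0]] = 2    # one occurrence of x gets label 2
    b[positions[repeated[1]][0]] = 3    # one occurrence of y gets label 3
    return b
```

For instance, on $a = [1, 2, 1, 2, 3]$ this produces an array $b$ satisfying conditions $(1, 2)$ and $(1, 3)$ but not $(2, 3)$, while on $a = [1, 1, 2, 3]$ it correctly reports that no answer exists.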
[ "constructive algorithms" ]
1,000
#include <bits/stdc++.h> using namespace std; const int N = 100; void solve() { int n; cin >> n; vector<int> a(n); for (int i = 0; i < n; i++) { cin >> a[i]; } vector<int> b(n, 1); vector<vector<int>> inds(N + 1); for (int i = 0; i < n; i++) { inds[a[i]].push_back(i); } int k = 2; for (int x = 1; x <= N; x++) { if ((int) inds[x].size() >= 2) { b[inds[x][0]] = k; k++; if (k > 3) { break; } } } if (k <= 3) { cout << "-1\n"; } else { for (int i = 0; i < n; i++) { cout << b[i] << ' '; } cout << '\n'; } } signed main() { ios::sync_with_stdio(false); cin.tie(nullptr); cout.tie(nullptr); int t; cin >> t; while (t--) { solve(); } }
1895
A
Treasure Chest
Monocarp has found a treasure map. The map represents the treasure location as an OX axis. Monocarp is at $0$, the treasure chest is at $x$, the key to the chest is at $y$. Obviously, Monocarp wants to open the chest. He can perform the following actions: - go $1$ to the left or $1$ to the right (spending $1$ second); - pick the key or the chest up if he is in the same point as that object (spending $0$ seconds); - put the chest down in his current point (spending $0$ seconds); - open the chest if he's in the same point as the chest and has picked the key up (spending $0$ seconds). Monocarp can carry the chest, but the chest is pretty heavy. He knows that he can carry it for at most $k$ seconds in total (putting it down and picking it back up doesn't reset his stamina). What's the smallest time required for Monocarp to open the chest?
Let's consider two cases. If $y < x$, then the answer is $x$, because we need to reach the chest and we can pick up the key on the way. If $y > x$, then the optimal option would be to carry the chest all the way to the key. However, this is not always possible due to the value of $k$. If $y - x \le k$, then we can do it and the answer is $y$. Otherwise, the answer is $y + (y - (x + d))$, where $d$ is the time we carried the chest, and $y - (x + d)$ is the time to walk back from the key to the chest. From here, we can see that this value is minimized when $d = k$. Thus, the answer is $y + y - (x + k)$.
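The closed form can be cross-checked by enumerating the carry time $d$ directly in the $y \ge x$ case (a verification sketch of our own):

```python
def fastest_open(x, y, k):
    # closed form from the analysis above
    return x if y < x else y + max(0, y - x - k)

def fastest_open_brute(x, y, k):
    if y < x:
        return x                        # pick the key up on the way to the chest
    best = float('inf')
    for d in range(0, min(k, y - x) + 1):
        # carry the chest d steps, walk on to the key, walk back to the chest
        best = min(best, y + (y - (x + d)))
    return best
```

The two functions agree on all small inputs, confirming that taking $d$ as large as allowed is optimal.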
[ "math" ]
800
for _ in range(int(input())): x, y, k = map(int, input().split()) if y < x: print(x) else: print(y + max(0, y - x - k))