contest_id: string (lengths 1–4)
index: string (43 classes)
title: string (lengths 2–63)
statement: string (lengths 51–4.24k)
tutorial: string (lengths 19–20.4k)
tags: list (lengths 0–11)
rating: int64 (800–3.5k)
code: string (lengths 46–29.6k)
1635
F
Closest Pair
There are $n$ weighted points on the $OX$-axis. The coordinate and the weight of the $i$-th point is $x_i$ and $w_i$, respectively. All points have distinct coordinates and positive weights. Also, $x_i < x_{i + 1}$ holds for any $1 \leq i < n$. The weighted distance between $i$-th point and $j$-th point is defined as $|x_i - x_j| \cdot (w_i + w_j)$, where $|val|$ denotes the absolute value of $val$. You should answer $q$ queries, where the $i$-th query asks the following: Find the \textbf{minimum} weighted distance among all pairs of distinct points among the points in subarray $[l_i,r_i]$.
First of all, let's solve the problem for the whole array. Define $L_i$ as the biggest $j$ satisfying $j < i$ and $w_j \leq w_i$, and $R_i$ as the smallest $j$ satisfying $j > i$ and $w_j \leq w_i$. Then, we consider $2n$ pairs of points: $(L_i, i)$ and $(i, R_i)$ for each $1 \leq i \leq n$. The claim is that the closest pair (the pair with the minimum weighted distance) must be among them. Proof: Assume that $(a, b)$ is the closest pair and $a < b$. If $w_a \leq w_b$ holds, then $a = L_b$ must hold, otherwise $(a, L_b)$ would obviously be a better pair. Similarly, if $w_a > w_b$ holds, then $b = R_a$ must hold, otherwise $(R_a, b)$ would obviously be a better pair. The lemma above also applies to range queries by the exact same proof. So now, we first need to find $L_i$ and $R_i$ for each $1 \leq i \leq n$; this can be done with a simple stack. Then, imagine we draw a segment between the endpoints of each pair, and the problem reduces to: given $2n$ weighted segments, for each query $i$ find the one with the minimum weight that is totally covered by $[l_i, r_i]$. This is a classic problem, which can be solved with the sweep line trick plus any data structure able to maintain a prefix minimum with single point updates, like a BIT or a segment tree. Total complexity: $\mathcal{O}((n+q)\log n)$.
[ "data structures", "greedy" ]
2,800
#include <bits/stdc++.h> using namespace std; const int N = 300005; const long long INF = 1ll << 62; vector <int> useful_pair[N]; vector <pair <int, int>> queries[N]; long long bit[N]; void modify(int p, long long v) { for (int i = p; i > 0; i -= i & (-i)) { bit[i] = min(bit[i], v); } } long long query(int p) { long long ans = INF; for (int i = p; i < N; i += i & (-i)) { ans = min(ans, bit[i]); } return ans; } int main () { ios::sync_with_stdio(false); cin.tie(0); int n, q; cin >> n >> q; vector <int> x(n), w(n); for (int i = 0; i < n; ++i) { cin >> x[i] >> w[i]; } for (int i = 0, l, r; i < q; ++i) { cin >> l >> r, --l; queries[r].emplace_back(l, i); } stack <int> stk; for (int i = 0; i < n; ++i) { while (!stk.empty() && w[stk.top()] > w[i]) { stk.pop(); } if (!stk.empty()) { int x = stk.top(); useful_pair[i].push_back(x); } stk.push(i); } while (!stk.empty()) { stk.pop(); } for (int i = n - 1; ~i; --i) { while (!stk.empty() && w[stk.top()] > w[i]) { stk.pop(); } if (!stk.empty()) { int x = stk.top(); useful_pair[x].push_back(i); } stk.push(i); } fill(bit, bit + N, INF); vector <long long> ans(q); for (int r = 1; r <= n; ++r) { for (int l : useful_pair[r - 1]) { long long val = 1ll * (x[r - 1] - x[l]) * (w[l] + w[r - 1]); modify(l + 1, val); } for (auto [l, id] : queries[r]) { ans[id] = query(l + 1); } } for (int i = 0; i < q; ++i) { cout << ans[i] << endl; } }
1637
A
Sorting Parts
You have an array $a$ of length $n$. You must \textbf{exactly} once select an integer $len$ between $1$ and $n - 1$ inclusive, and then sort in non-decreasing order the prefix of the array of length $len$ and the suffix of the array of length $n - len$ independently. For example, if the array is $a = [3, 1, 4, 5, 2]$, and you choose $len = 2$, then after that the array will be equal to $[1, 3, 2, 4, 5]$. Could it be that after performing this operation, the array will \textbf{not} be sorted in non-decreasing order?
Consider two cases: the array is already sorted, or it is not. In the first case, sorting any prefix and any suffix does not change the array, so it will always remain sorted and the answer is "NO". In the second case, there are two elements with indices $i$ and $j$, such that $i < j$ and $a_i > a_j$. Choose $len$ so that these two elements end up in different parts, for example $len = i$. Note that after the operation $a_i$ will remain to the left of $a_j$, which means the array will not be sorted. So the answer is "YES". The solution is therefore to check whether the array is sorted, which can be done in $O(n)$.
[ "brute force", "sortings" ]
800
#include <bits/stdc++.h> using namespace std; int main() { int t; cin >> t; for (int i = 0; i < t; i++) { int n; cin >> n; vector<int> a(n); for (auto& u : a) cin >> u; if (!is_sorted(a.begin(), a.end())) cout << "YES\n"; else cout << "NO\n"; } }
1637
B
MEX and Array
Let there be an array $b_1, b_2, \ldots, b_k$. Let there be a partition of this array into segments $[l_1; r_1], [l_2; r_2], \ldots, [l_c; r_c]$, where $l_1 = 1$, $r_c = k$, and for any $2 \leq i \leq c$ holds that $r_{i-1} + 1 = l_i$. In other words, each element of the array belongs to exactly one segment. Let's define the cost of a partition as $$c + \sum_{i = 1}^{c} \operatorname{mex}(\{b_{l_i}, b_{l_i + 1}, \ldots, b_{r_i}\}),$$ where $\operatorname{mex}$ of a set of numbers $S$ is the smallest non-negative integer that does not occur in the set $S$. In other words, the cost of a partition is the number of segments plus the sum of MEX over all segments. Let's define the value of an array $b_1, b_2, \ldots, b_k$ as the \textbf{maximum} possible cost over all partitions of this array. You are given an array $a$ of size $n$. Find the sum of values of all its subsegments. An array $x$ is a subsegment of an array $y$ if $x$ can be obtained from $y$ by deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end.
We show that replacing a segment of length $k$ ($k > 1$) with $k$ segments of length $1$ does not decrease the cost of the partition. Consider two cases: the segment does not contain $0$, or it does. In the first case the contribution of the segment equals $1$ (because $\operatorname{mex} = 0$), while the contribution of $k$ segments of length $1$ equals $k$, so the cost has increased. In the second case the contribution of the segment equals $1 + \operatorname{mex} \le 1 + k$, while the contribution of the segments of length $1$ is at least $1 + k$, so the cost has not decreased. Therefore it is possible to replace all segments of length more than $1$ by segments of length $1$ without decreasing the cost. So the value of the array $b_1, b_2, \ldots, b_k$ equals $\sum_{i=1}^{k}{(1 + \operatorname{mex}(\{b_i\}))} = k + {}$(the number of zeros in the array). To calculate the total value of all subsegments, you need to calculate the total length of all subsegments and the contribution of each $0$. The total length of all subsegments equals $\frac{n \cdot (n + 1) \cdot (n + 2)}{6}$. The contribution of a zero at position $i$ equals $i \cdot (n - i + 1)$. This solution works in $O(n)$, but it can also be implemented less efficiently. There is also another solution, which uses dynamic programming: let $dp_{l, r}$ be the value of the array $a_l, a_{l + 1}, \ldots, a_r$. Then $dp_{l, r} = \max(1 + \operatorname{mex}(\{a_l, a_{l + 1}, \ldots, a_r\}), \max_{c = l}^{r - 1} (dp_{l, c} + dp_{c + 1, r}))$. This solution can be implemented in $O(n^3)$ or in $O(n^4)$.
[ "brute force", "dp", "greedy", "math" ]
1,100
#include <bits/stdc++.h> using namespace std; int main() { int t; cin >> t; for (int i = 0; i < t; i++) { int n; cin >> n; vector<int> a(n); for (auto& u : a) cin >> u; int ans = 0; for (int i = 0; i < n; i++) { ans += (i + 1) * (n - i); if (a[i] == 0) ans += (i + 1) * (n - i); } cout << ans << '\n'; } }
1637
C
Andrew and Stones
Andrew has $n$ piles with stones. The $i$-th pile contains $a_i$ stones. He wants to make his table clean so he decided to put every stone either to the $1$-st or the $n$-th pile. Andrew can perform the following operation any number of times: choose $3$ indices $1 \le i < j < k \le n$, such that the $j$-th pile contains at least $2$ stones, then he takes $2$ stones from the pile $j$ and puts one stone into pile $i$ and one stone into pile $k$. Tell Andrew what is the minimum number of operations needed to move all the stones to piles $1$ and $n$, or determine if it's impossible.
Consider $2$ cases when the answer is surely $-1$: For all $1 < i < n$: $a_i = 1$. In this case, it's not possible to make any operation, yet not all stones are in piles $1$ and $n$. $n = 3$ and $a_2$ is odd. Then after any operation this number will remain odd, so it can never become equal to $0$. Later it will become clear why these are the only cases where the answer is $-1$. To show it, consider the following algorithm: If all stones are in piles $1$ and $n$, then the algorithm is done. If there is at least one non-zero even element (piles $1$ and $n$ don't count), then subtract $2$ from it, add $1$ to an odd number to the left, or to pile $1$ if there is no such number, and similarly add $1$ to an odd number to the right, or to pile $n$ if there is no such number. Then continue the algorithm. Note that the number of odd elements (piles $1$ and $n$ don't count) decreases by at least $1$ (if there was any odd number); also, either a new even number has appeared, or the algorithm is done. If all remaining non-zero numbers are odd, then there is at least one odd number greater than $1$. So let's subtract $2$ from this element and add ones as in the $2$-nd case; again the number of odd elements decreases by at least $1$. From the notes in the second and third cases, it follows that the algorithm always puts all stones into piles $1$ and $n$. Also note that if in the initial array the element at position $i$ ($1 < i < n$) was even, the algorithm never adds $1$ to it, so the number of operations centered at $i$ equals $\frac{a_i}{2}$. And if $a_i$ was odd, the algorithm adds $1$ to this element exactly once, so the number of operations centered at $i$ equals $\frac{a_i + 1}{2}$. This algorithm is optimal, because for each odd number it's necessary to add at least $1$ to it and the algorithm adds exactly $1$, and from even elements the algorithm only subtracts.
It means that the answer to the problem equals to $\sum_{i=2}^{n-1} \lceil \frac{a_i}{2} \rceil$. Time complexity is $O(n)$.
[ "greedy", "implementation" ]
1,200
#include <bits/stdc++.h> using namespace std; void solve() { int n; cin >> n; vector<int> a(n); for (auto &x : a) cin >> x; if (*max_element(a.begin() + 1, a.end() - 1) == 1 || (n == 3 && a[1] % 2 == 1)) { cout << "-1\n"; return; } long long answer = 0; for (int i = 1; i < n - 1; i++) answer += (a[i] + 1) / 2; cout << answer << '\n'; } int main() { ios::sync_with_stdio(false), cin.tie(nullptr); int tests; cin >> tests; while (tests--) solve(); }
1637
D
Yet Another Minimization Problem
You are given two arrays $a$ and $b$, both of length $n$. You can perform the following operation any number of times (possibly zero): select an index $i$ ($1 \leq i \leq n$) and swap $a_i$ and $b_i$. Let's define the cost of the array $a$ as $\sum_{i=1}^{n} \sum_{j=i + 1}^{n} (a_i + a_j)^2$. Similarly, the cost of the array $b$ is $\sum_{i=1}^{n} \sum_{j=i + 1}^{n} (b_i + b_j)^2$. Your task is to minimize the total cost of two arrays.
The $cost$ of the array $a$ equals $\sum_{i=1}^{n} \sum_{j=i + 1}^{n} (a_i + a_j)^2 = \sum_{i=1}^{n} \sum_{j=i + 1}^{n} (a_i^2 + a_j^2 + 2a_i a_j)$. Let $s = \sum_{i=1}^{n} a_i$. Then $cost = (n - 1) \cdot \sum_{i=1}^{n} a_i^2 + \sum_{i=1}^n (a_i \cdot (s - a_i)) = (n - 1) \cdot \sum_{i=1}^{n} a_i^2 + s^2 - \sum_{i=1}^{n} a_i^2 = (n - 2) \cdot \sum_{i=1}^{n} a_i^2 + (\sum_{i=1}^n a_i)^2$. Then the total cost of the two arrays equals $(n - 2) \cdot \sum_{i=1}^{n} (a_i^2 + b_i^2) + (\sum_{i=1}^n a_i)^2 + (\sum_{i=1}^n b_i)^2$. The first term is a constant $\Rightarrow$ we need to minimize $(\sum_{i=1}^n a_i)^2 + (\sum_{i=1}^n b_i)^2$. There are two continuations of the solution, but the idea of both is to iterate over all possible sums of the array $a$, then calculate the sum of the array $b$ and update the answer using the formula written above: Let $dp_{i, w}$ be true if it's possible to make some operations so that the sum of the first $i$ elements of the array $a$ equals $w$, and false otherwise. Then $dp_{1, a_1} = dp_{1, b_1} =$ true. For $i > 1$: $dp_{i, w} = dp_{i-1, w-a_i}$ or $dp_{i-1, w-b_i}$. Then, to iterate over all possible sums of the array $a$, you consider every $s$ such that $dp_{n, s} =$ true. Alternatively, assume that we have $n$ items, where the $i$-th item has a weight of $|b_i - a_i|$. By solving a simple knapsack problem, let's find all possible sums of weights of these items. If the sum contains the $i$-th item, then we assume that $a_i \ge b_i$, otherwise $a_i \le b_i$. So if the sum of weights of some items equals $s$, then the sum of the array $a$ equals $\sum_{i=1}^n \min(a_i, b_i) + s$. So all possible sums of the array $a$ can be obtained from all possible sums of weights of these items. Both solutions work in $O(n^2 \cdot maxA)$ where $maxA = 100$, and both can be sped up by a factor of $64$ with std::bitset, but it wasn't necessary.
[ "dp", "greedy", "math" ]
1,800
#include <bits/stdc++.h> using namespace std; constexpr int MAXSUM = 100 * 100 + 10; int sqr(int x) { return x * x; } void solve() { int n; cin >> n; vector<int> a(n), b(n); for (auto& u : a) cin >> u; for (auto& u : b) cin >> u; int sumMin = 0, sumMax = 0, sumSq = 0; for (int i = 0; i < n; i++) { if (a[i] > b[i]) swap(a[i], b[i]); sumSq += sqr(a[i]) + sqr(b[i]); sumMin += a[i]; sumMax += b[i]; } bitset<MAXSUM> dp; dp[0] = 1; for (int i = 0; i < n; i++) dp |= dp << (b[i] - a[i]); int ans = sqr(sumMin) + sqr(sumMax); for (int i = 0; i <= sumMax - sumMin; i++) if (dp[i]) ans = min(ans, sqr(sumMin + i) + sqr(sumMax - i)); cout << sumSq * (n - 2) + ans << '\n'; } int main() { int t; cin >> t; while (t--) solve(); }
1637
E
Best Pair
You are given an array $a$ of length $n$. Let $cnt_x$ be the number of elements from the array which are equal to $x$. Let's also define $f(x, y)$ as $(cnt_x + cnt_y) \cdot (x + y)$. Also you are given $m$ bad pairs $(x_i, y_i)$. Note that if $(x, y)$ is a bad pair, then $(y, x)$ is also bad. Your task is to find the maximum value of $f(u, v)$ over all pairs $(u, v)$, such that $u \neq v$, that this pair is not bad, and also that $u$ and $v$ each occur in the array $a$. It is guaranteed that such a pair exists.
Let's fix $x$ and iterate over $cnt_y \le cnt_x$. Then we need to find the maximum $y$ over all elements that occur exactly $cnt_y$ times, such that $x \neq y$ and the pair $(x, y)$ is not bad. To do that, we just iterate over all elements that occur exactly $cnt_y$ times in non-increasing order while the pair $(x, y)$ is bad or $x = y$. If we find such $y$, we update the answer. To check whether a pair is bad, just add all bad pairs into a set and check if the pair is in the set; alternatively, sort all bad pairs and check with binary search. Both methods work in $O(\log{n})$ per check. Iterating over all $x$ and $cnt_y \le cnt_x$ works in $O(\sum cnt_x) = O(n)$. Finding the maximum $y$ such that $x \neq y$ and the pair $(x, y)$ is not bad works in $O((n + m) \log{m})$ in total, because there are $O(n + m)$ checks of whether a pair is bad. So this solution works in $O((n + m) \log{m} + n \log{n})$.
[ "binary search", "brute force", "implementation" ]
2,100
#include <bits/stdc++.h> using namespace std; void solve() { int n, m; cin >> n >> m; vector<int> a(n); map<int, int> cnt; for (auto &x : a) { cin >> x; cnt[x]++; } vector<pair<int, int>> bad_pairs; bad_pairs.reserve(2 * m); for (int i = 0; i < m; i++) { int x, y; cin >> x >> y; bad_pairs.emplace_back(x, y); bad_pairs.emplace_back(y, x); } sort(bad_pairs.begin(), bad_pairs.end()); vector<vector<int>> occ(n); for (auto &[x, c] : cnt) occ[c].push_back(x); for (auto &v : occ) reverse(v.begin(), v.end()); long long answer = 0; for (int cnt_x = 1; cnt_x < n; cnt_x++) for (int x : occ[cnt_x]) for (int cnt_y = 1; cnt_y <= cnt_x; cnt_y++) for (auto y : occ[cnt_y]) if (x != y && !binary_search(bad_pairs.begin(), bad_pairs.end(), pair<int, int>{x, y})) { answer = max(answer, 1ll * (cnt_x + cnt_y) * (x + y)); break; } cout << answer << '\n'; } int main() { ios::sync_with_stdio(false), cin.tie(nullptr); int tests; cin >> tests; while (tests--) solve(); }
1637
F
Towers
You are given a tree with $n$ vertices numbered from $1$ to $n$. The height of the $i$-th vertex is $h_i$. You can place any number of towers into vertices, for each tower you can choose which vertex to put it in, as well as choose its efficiency. Setting up a tower with efficiency $e$ costs $e$ coins, where $e > 0$. It is considered that a vertex $x$ gets a signal if for some pair of towers at the vertices $u$ and $v$ ($u \neq v$, but it is allowed that $x = u$ or $x = v$) with efficiencies $e_u$ and $e_v$, respectively, it is satisfied that $\min(e_u, e_v) \geq h_x$ and $x$ lies on the path between $u$ and $v$. Find the minimum number of coins required to set up towers so that you can get a signal at all vertices.
Main solution: Let's consider all leaves. If $v$ is a leaf, then every path which covers $v$ must start in $v$. So let's increase the heights of the towers in all leaves by $1$ (this is necessary) and decrease every $h_v$ by $1$. Now we remove all leaves with $h_v = 0$ (they are already covered). We repeat this operation while leaves keep getting covered. We then continue the algorithm, but instead of increasing the height in the new leaves of the tree, we increase the height in the deleted vertices (in their subtrees). If at some point only $1$ vertex is left, we must increase the heights of $2$ towers (by the statement, $2$ different towers are needed to cover a vertex). To speed up the solution, we sort the vertices by $h_v$ and store the current level of the current leaves; on each iteration we add the number of leaves multiplied by the change of level. To see that this is optimal, consider the heights of the towers in an optimal answer. Let $f(i)$ be the number of towers in the optimal answer with height at least $i$. It is easy to notice that $f(i) - f(i - 1)$ is at least the number of leaves in the tree on the $i$-th iteration, and if only one vertex is left on iteration $i$, then $f(i) - f(i - 1) = 2$. That means the algorithm is optimal. Alternative solution: Let $root$ be the vertex with the maximum height, and root the tree at $root$. There must be two towers of height $h_{root}$ in the subtrees of the children of $root$ (if there's only one child, then there must be a tower of height $h_{root}$ at the vertex $root$). Now we need to place towers in the subtrees of the children of $root$ such that for every vertex $v$ the highest tower in the subtree of $v$ (including $v$) is at least $h_v$. After that, select the two highest towers in the subtrees of two different children of $root$ and set their heights to $h_{root}$.
Now we need to optimally place towers in the subtrees so that for every vertex $v$ there is a tower in the subtree of $v$ of height at least $h_v$. To do that in the subtree of vertex $v$, let's recursively place towers in the subtrees of the children of $v$, and then, if necessary, increase the height of the highest tower in the subtree of $v$ to $h_v$. If vertex $v$ is a leaf, we place a tower of height $h_v$ in $v$. This solution can be implemented in $O(n)$. For a better understanding, we recommend looking at the implementation.
[ "constructive algorithms", "dfs and similar", "dp", "greedy", "trees" ]
2,500
#include <bits/stdc++.h> using namespace std; typedef long long ll; int main() { ios_base::sync_with_stdio(false); cin.tie(0); int n; cin >> n; vector<int> h(n); for (int i = 0; i < n; ++i) cin >> h[i]; vector<vector<int>> g(n); for (int i = 0; i + 1 < n; ++i) { int u, v; cin >> u >> v; --u, --v; g[u].push_back(v); g[v].push_back(u); } int rt = 0; for (int i = 0; i < n; ++i) { if (h[i] > h[rt]) { rt = i; } } ll ans = 0; function<int(int, int)> dfs = [&](int u, int p) { int mx1 = 0, mx2 = 0; vector<int> have; for (auto v : g[u]) { if (v != p) { int cur = dfs(v, u); if (cur > mx1) swap(cur, mx1); if (cur > mx2) swap(cur, mx2); } } if (p != -1) { int delta = max(0, h[u] - mx1); ans += delta; mx1 += delta; } else { ans += max(0, h[u] - mx1) + max(0, h[u] - mx2); } return mx1; }; dfs(rt, -1); cout << ans << '\n'; }
1637
G
Birthday
Vitaly gave Maxim $n$ numbers $1, 2, \ldots, n$ for his $16$-th birthday. Maxim was tired of playing board games during the celebration, so he decided to play with these numbers. In one step Maxim can choose two numbers $x$ and $y$ from the numbers he has, throw them away, and add two numbers $x + y$ and $|x - y|$ instead. He wants all his numbers to be equal after several steps and the sum of the numbers to be minimal. Help Maxim to find a solution. Maxim's friends don't want to wait long, so the number of steps in the solution should not exceed $20n$. It is guaranteed that under the given constraints, if a solution exists, then there exists a solution that makes all numbers equal, minimizes their sum, and spends no more than $20n$ moves.
The answer is $-1$ only if $n = 2$; it will become clear later why. Let all numbers be equal to $val$ after all steps. Consider the moves in reverse order: numbers $x$ and $y$ are obtained from $\frac{x + y}{2}$ and $\frac{|x - y|}{2}$. If numbers $x$ and $y$ are multiples of an odd number $p > 1$, then $\frac{x + y}{2}$ and $\frac{|x - y|}{2}$ are also multiples of $p$. But after all steps in reverse order we need to get $1$ $\Rightarrow$ $val$ can't be a multiple of $p$. It means that $val$ is a power of two. Moreover, $val \ge n$, because the maximum of the numbers can't get smaller after a step in direct order. Let's show how to make all numbers equal to the smallest power of two which is at least $n$. First of all, let's transform the numbers $1, 2, \ldots, n$ into $n$ powers of two (possibly different), none greater than the answer. Let the function $solve(n)$ do this. It can be implemented as follows: If $n \le 2$, then all numbers are already powers of two, so we can finish the function. If $n$ is a power of two, then call $solve(n - 1)$ and finish the function. Otherwise, let $p$ be the maximum power of two not exceeding $n$. Let's make steps choosing the pairs $(p - 1, p + 1), (p - 2, p + 2), \ldots, (p - (n - p), p + (n - p))$. After that, the numbers can be divided into $3$ groups: Numbers that have not been changed: $1, 2, \ldots, p - (n - p) - 1$ and $p$. $p$ is already a power of two; to transform the other numbers into powers of two, call $solve(p - (n - p) - 1)$. Numbers that have been obtained as $|x - y|$ after some step: $2, 4, \ldots, 2 \cdot (n - p)$. To transform them into powers of two, take the sequence of steps performed by $solve(n - p)$ and multiply all numbers in these steps by $2$. Numbers that have been obtained as $x + y$ after some step: they all equal $2p$ and are already powers of two.
After this transformation, there are always two equal numbers that are smaller than the answer. Let them be equal to $x$. Let's make a step using them and we will get $0$ and $2x$. From the numbers $0$ and $x$ we can get the numbers $0$ and $2x$: $(0, x) \rightarrow (x, x) \rightarrow (0, 2x)$. So let's use this $0$ to transform all remaining numbers into the answer. Finally, make a step using $0$ and the answer, so $0$ also becomes equal to the answer. Clearly, this solution makes at most $O(n \cdot \log{n})$ steps; at the given limits it uses at most $7n$ steps.
[ "constructive algorithms", "greedy", "math" ]
3,000
#include <bits/stdc++.h> using namespace std; vector<pair<int, int>> ops; vector<int> a; void rec(int n, int coeff) { if (n <= 2) { for (int i = 1; i <= n; i++) a.push_back(i * coeff); return; } int p = 1; while (p * 2 <= n) p *= 2; if (p == n) { a.push_back(n * coeff); n--, p /= 2; } a.push_back(p * coeff); for (int i = p + 1; i <= n; i++) { a.push_back(2 * p * coeff); ops.emplace_back(i * coeff, (2 * p - i) * coeff); } rec(2 * p - n - 1, coeff); rec(n - p, coeff * 2); } void solve() { int n; cin >> n; if (n == 2) { cout << "-1\n"; return; } ops.clear(); a.clear(); rec(n, 1); sort(a.begin(), a.end()); int answer = 1; while (answer < n) answer *= 2; for (int i = 0;; i++) if (a[i] == a[i + 1]) { assert(a[i] != answer); ops.emplace_back(a[i], a[i]); a[i + 1] *= 2; a.erase(a.begin() + i); break; } for (auto x : a) while (x != answer) { ops.emplace_back(0, x); ops.emplace_back(x, x); x *= 2; } ops.emplace_back(0, answer); cout << ops.size() << '\n'; for (auto &[x, y] : ops) cout << x << ' ' << y << '\n'; } int main() { int tests; cin >> tests; while (tests--) solve(); }
1637
H
Minimize Inversions Number
You are given a permutation $p$ of length $n$. You can choose any subsequence, remove it from the permutation, and insert it at the beginning of the permutation keeping the same order. For every $k$ from $0$ to $n$, find the minimal possible number of inversions in the permutation after you choose a subsequence of length exactly $k$.
To simplify the explanation, we represent the permutation as a set of points $(i, p_i)$. $\bf{Lemma}$: Consider some selected subsequence. Then there always exists a sequence of the same length such that, if we select it instead, the number of inversions won't increase, and the following condition is satisfied: if the point $i$ is selected, then all points to the right of and below point $i$ are also selected. $\bf{Proof}$: Assume the contrary; then there is a pair of points $(i, j)$ such that $i$ is selected and $j$ is not, and $j$ is to the right of and below point $i$ (a point is selected if it is in the sequence). Let's call such a pair a bad pair. Divide the points into $9$ regions by $4$ straight lines parallel to the coordinate axes passing through the points $i$ and $j$. Now, for every point, depending on whether it is selected or not and on the region where it lies, let's determine by how much it reduces the number of inversions after replacing $i$ with $j$ (i.e. selecting $j$ instead of $i$). Let's call this the contribution of the point. Also note that after replacing the point $i$ with the point $j$, the inversion formed by these two points disappears. (In the picture from the original editorial, the first integer in each region is the contribution of a selected point there, and the second integer is the contribution of an unselected one.) Note that only selected points located between $i$ and $j$ (along the $Ox$ axis) above the point $i$, and unselected points located between $i$ and $j$ below the point $j$, have negative contribution. Now let's find any bad pair $(i, j)$ and do the following: If there is a selected point $id$ with negative contribution, set $i = id$ and continue the algorithm. Otherwise, if there is an unselected point $id$ with negative contribution, set $j = id$ and continue the algorithm. Otherwise, replacing the point $i$ with the point $j$ reduces the number of inversions.
This algorithm always finds a bad pair such that replacing $i$ with $j$ reduces the number of inversions, because each time the value $j - i$ decreases. Q.E.D. Let $d_i$ be the amount by which the number of inversions decreases after moving only the element with index $i$ to the beginning. Then $d_i = $ (the number of points to the left of and above $i$) $-$ (the number of points to the left of and below $i$). Then, if the sequence $seq$ is selected, the number of inversions in the new permutation equals $invs - \sum_{i \in seq} d_i + seqInvs - (\frac{|seq| \cdot (|seq| - 1)}{2} - seqInvs) = invs - \sum_{i \in seq} d_i + 2 \cdot seqInvs - \frac{|seq| \cdot (|seq| - 1)}{2}$. Here $invs$ is the number of inversions in the initial permutation, and $seqInvs$ is the number of inversions among the elements of $seq$. $seqInvs = \sum_{i \in seq}$ (the number of selected points to the right of and below $i$). By the lemma proved above, if the sequence $seq$ is optimal and the point $i$ is selected, then all points to the right of and below $i$ are selected as well; so if $s_i$ is the number of points to the right of and below $i$, then $seqInvs = \sum_{i \in seq} s_i$. So the number of inversions in the new permutation equals $invs - \sum_{i \in seq} (d_i - 2s_i) - \frac{|seq| \cdot (|seq| - 1)}{2}$. Note that this formula can be incorrect if the sequence is not optimal. Thus the point $i$ reduces the number of inversions by $c_i = d_i - 2s_i$. Note that the number of points to the left of and below $i$ equals $p_i - 1 - s_i$, and the number of points to the left of and above $i$ equals $(i - 1) - (p_i - 1 - s_i) = i - p_i + s_i$. So $d_i = (i - p_i + s_i) - (p_i - 1 - s_i) = i - 2p_i + 2s_i + 1$, hence $c_i = d_i - 2s_i = i - 2p_i + 1$. Now it's not hard to see that if the point $j$ is to the right of and below the point $i$, then $c_j > c_i$.
So if we select the $len$ elements with the maximum values of $c_i$, the formula for the number of inversions written above is correct, which means this choice is optimal. It can be implemented in $O(n \log{n})$.
[ "data structures", "greedy", "math", "sortings" ]
3,500
#include <bits/stdc++.h> using namespace std; struct binary_index_tree { int n; vector<int> bit; binary_index_tree(int n) : n(n), bit(n + 1) {} void increase(int pos) { for (pos++; pos <= n; pos += pos & -pos) bit[pos]++; } int query(int pref) const { int sum = 0; for (; pref; pref -= pref & -pref) sum += bit[pref]; return sum; } }; void solve() { int n; cin >> n; vector<int> d(n); binary_index_tree bit(n); long long inversions = 0; for (int i = 0; i < n; i++) { int p; cin >> p; p--; d[i] = i - 2 * p; inversions += i - bit.query(p); bit.increase(p); } sort(d.rbegin(), d.rend()); cout << inversions; long long sum = 0; for (int i = 0; i < n; i++) { sum += d[i]; cout << ' ' << inversions - sum - 1ll * i * (i + 1) / 2; } cout << '\n'; } int main() { ios::sync_with_stdio(false), cin.tie(nullptr); int tests; cin >> tests; while (tests--) solve(); }
1638
A
Reverse
You are given a permutation $p_1, p_2, \ldots, p_n$ of length $n$. You have to choose two integers $l,r$ ($1 \le l \le r \le n$) and reverse the subsegment $[l,r]$ of the permutation. The permutation will become $p_1,p_2, \dots, p_{l-1},p_r,p_{r-1}, \dots, p_l,p_{r+1},p_{r+2}, \dots ,p_n$. Find the lexicographically smallest permutation that can be obtained by performing \textbf{exactly} one reverse operation on the initial permutation. Note that for two distinct permutations of equal length $a$ and $b$, $a$ is lexicographically smaller than $b$ if at the first position they differ, $a$ has the smaller element. A permutation is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array) and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
When is it best for the permutation to remain unchanged? What elements are obviously optimal and should remain as they are? Find the first non-optimal element. What's the best way to fix it? Let $p_i$ be the first element such that $p_i \neq i$. For the prefix of elements $p_1=1,p_2=2, \dots ,p_{i-1}=i-1$ we do not need to change anything, because it is already lexicographically smallest, but we would like to have $i$ instead of $p_i$ at that position. The solution is to reverse the segment $[i,j]$, where $p_j=i$. Notice that $i<j$ since $p_k=k$ for every $k<i$, so $p_k<p_j=i$ for every $k<i$. Time complexity: $O(n)$.
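This fits in a few lines of C++ (the function name and the vector-in/vector-out interface are my own, not part of the editorial):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// Find the first position i with p[i] != i + 1 (0-based indices, 1-based
// values), locate j with p[j] == i + 1 and reverse the subsegment [i, j].
vector<int> lexSmallestAfterReverse(vector<int> p) {
    int n = (int)p.size();
    int i = 0;
    while (i < n && p[i] == i + 1) i++;   // longest already-optimal prefix
    if (i == n) return p;                 // sorted: reversing [l, l] keeps it
    int j = (int)(find(p.begin(), p.end(), i + 1) - p.begin());
    reverse(p.begin() + i, p.begin() + j + 1);
    return p;
}
```

Both the prefix scan and the single `find` call are linear, so the whole procedure runs in $O(n)$.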
[ "constructive algorithms", "greedy", "math" ]
800
null
1638
B
Odd Swap Sort
You are given an array $a_1, a_2, \dots, a_n$. You can perform operations on the array. In each operation you can choose an integer $i$ ($1 \le i < n$), and swap elements $a_i$ and $a_{i+1}$ of the array, if $a_i + a_{i+1}$ is odd. Determine whether it can be sorted in non-decreasing order using this operation any number of times.
Replace the condition "$a_i+a_{i+1}$ is odd" with something easier to work with. The condition means that we only swap elements of different parity. Now, make some observations. What happens if there are some elements $a_i$ and $a_j$ ($i<j$) of the same parity, such that $a_i>a_j$? If such a pair exists, the answer is "NO". Now consider the array to be a merge between two increasing arrays (one with odd elements, one with even elements). Try to prove that the answer is always "YES" in this case. What does the Bubble Sort algorithm do here? Does it ever do an illegal swap? The condition "$a_i + a_{i+1}$ is odd" means that we can only swap elements of different parity. If either the order of even elements or the order of odd elements is not non-decreasing, then it is impossible to sort the sequence. Otherwise, let's prove that it is always possible to sort the sequence. We can, for example, perform the Bubble Sort algorithm. Note that this algorithm only swaps elements $a_i$ and $a_{i+1}$ if $a_i > a_{i+1}$, so it will never swap two elements of the same parity (given our assumption on their order). Time complexity: $O(n)$.
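A minimal C++ sketch of this parity check (the function name is illustrative):

```cpp
#include <vector>
using namespace std;

// The array is sortable iff the odd elements and the even elements each
// already appear in non-decreasing order: same-parity pairs can never swap.
bool canOddSwapSort(const vector<long long>& a) {
    long long lastEven = 0, lastOdd = 0;   // elements are positive, 0 is safe
    for (long long v : a) {
        long long& last = (v % 2 == 0) ? lastEven : lastOdd;
        if (v < last) return false;        // inversion within one parity class
        last = v;
    }
    return true;
}
```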
[ "data structures", "math", "sortings" ]
1,100
null
1638
C
Inversion Graph
You are given a permutation $p_1, p_2, \dots, p_n$. Then, an undirected graph is constructed in the following way: add an edge between vertices $i$, $j$ such that $i < j$ if and only if $p_i > p_j$. Your task is to count the number of connected components in this graph. Two vertices $u$ and $v$ belong to the same connected component if and only if there is at least one path along edges connecting $u$ and $v$. A permutation is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array) and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
The key idea is to start merging from the beginning using a stack. Assume that the connected components are always segments in the permutation (this solution also proves this by induction). We will iterate the prefix and maintain in our stack the minimum/maximum element of all the segments in order. When we increase the prefix adding the next position $i$, we add $(p_i, p_i)$ to the top of the stack. Then, we merge the top two segments while we are able to. If the top two segments have their minimum/maximum elements $(min_1, max_1)$ and $(min_2, max_2)$, in this order from the top, we will merge them only if $max_2 > min_1$, because this means that an edge exists between the two. When we reach the end, our stack contains all connected components. Note that merging two adjacent intervals forms a new interval, so we have proven by induction that our first assumption is correct. Time complexity: $O(n)$.
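The stack of segments can be sketched like this in C++ (names are my own):

```cpp
#include <algorithm>
#include <utility>
#include <vector>
using namespace std;

// Stack of segments stored as (min, max) pairs. A new segment merges with
// the one below it while that segment's maximum exceeds the current
// segment's minimum, i.e. an inversion edge connects them. The final stack
// size is the number of connected components.
int countInversionComponents(const vector<int>& p) {
    vector<pair<int, int>> st;
    for (int v : p) {
        pair<int, int> cur = {v, v};
        while (!st.empty() && st.back().second > cur.first) {
            cur.first = min(cur.first, st.back().first);
            cur.second = max(cur.second, st.back().second);
            st.pop_back();
        }
        st.push_back(cur);
    }
    return (int)st.size();
}
```

Each element is pushed once and popped at most once, so the loop is $O(n)$.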
[ "data structures", "dsu", "graphs", "math" ]
1,300
null
1638
D
Big Brush
You found a painting on a canvas of size $n \times m$. The canvas can be represented as a grid with $n$ rows and $m$ columns. Each cell has some color. Cell $(i, j)$ has color $c_{i,j}$. Near the painting you also found a brush in the shape of a $2 \times 2$ square, so the canvas was surely painted in the following way: initially, no cell was painted. Then, the following painting operation has been performed some number of times: - Choose two integers $i$ and $j$ ($1 \le i < n$, $1 \le j < m$) and some color $k$ ($1 \le k \le nm$). - Paint cells $(i, j)$, $(i + 1, j)$, $(i, j + 1)$, $(i + 1, j + 1)$ in color $k$. All cells must be painted at least once. A cell can be painted multiple times. In this case, its final color will be the last one. Find any sequence of at most $nm$ operations that could have led to the painting you found or state that it's impossible.
Let's try to build the solution from the last operation to the first operation. The last operation can be any $2 \times 2$ square painted in a single color. If there is no such square, it is clearly impossible. Otherwise, this square being the last operation implies that we could previously color its cells in any color, multiple times, without any consequences. We will name these special cells. What happens when we run out of $2 \times 2$ squares painted in a single color? Well, we can use the special cells described above. The next operation considered can be any $2 \times 2$ square such that all its non-special cells are painted in the same color. If there is no such square, it is clearly impossible. We now have a correct solution. It consists of at most $nm$ operations because at each step we turn at least one non-special cell into a special one and there are $nm$ cells. We can implement this solution similarly to BFS. First, insert all $2 \times 2$ squares painted in a single color into a queue. Then, at each step take the square at the front of the queue, add it to the solution and make all its non-special cells special. When making a cell special, check all $2 \times 2$ squares that contain it and if some of them meet the condition after the current step, insert them into the queue. Note that there are at most $9$ such squares. Time complexity: $O(nm)$.
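A hedged C++ sketch of this BFS-like reconstruction (the struct and function names, and the 0-based/1-based conventions, are my own):

```cpp
#include <algorithm>
#include <queue>
#include <utility>
#include <vector>
using namespace std;

struct Op { int i, j, k; };

// Rebuilds a valid sequence of 2x2 paint operations, reasoning from the
// last operation back to the first. Returns true and fills `ops` (in real
// painting order) on success.
bool bigBrush(int n, int m, const vector<vector<int>>& c, vector<Op>& ops) {
    vector<vector<char>> special(n, vector<char>(m, 0));
    vector<vector<char>> queued(n - 1, vector<char>(m - 1, 0));
    queue<pair<int, int>> q;

    // Color shared by the non-special cells of square (i, j):
    // -1 if they disagree, 0 if all four cells are already special.
    auto goodColor = [&](int i, int j) -> int {
        int col = 0;
        for (int di = 0; di < 2; di++)
            for (int dj = 0; dj < 2; dj++)
                if (!special[i + di][j + dj]) {
                    if (col == 0) col = c[i + di][j + dj];
                    else if (col != c[i + di][j + dj]) return -1;
                }
        return col;
    };

    for (int i = 0; i + 1 < n; i++)
        for (int j = 0; j + 1 < m; j++)
            if (goodColor(i, j) > 0) { queued[i][j] = 1; q.push({i, j}); }

    while (!q.empty()) {
        auto [i, j] = q.front(); q.pop();
        int col = goodColor(i, j);
        if (col <= 0) continue;             // all four cells already special
        ops.push_back({i + 1, j + 1, col}); // 1-based answer
        for (int di = 0; di < 2; di++)
            for (int dj = 0; dj < 2; dj++)
                special[i + di][j + dj] = 1;
        // At most 9 squares contain one of the newly special cells.
        for (int ni = i - 1; ni <= i + 1; ni++)
            for (int nj = j - 1; nj <= j + 1; nj++)
                if (ni >= 0 && nj >= 0 && ni + 1 < n && nj + 1 < m &&
                    !queued[ni][nj] && goodColor(ni, nj) > 0) {
                    queued[ni][nj] = 1;
                    q.push({ni, nj});
                }
    }
    for (auto& row : special)
        for (char s : row) if (!s) return false; // some cell never painted
    reverse(ops.begin(), ops.end());
    return true;
}
```

Each processed square turns at least one non-special cell special, so at most $nm$ operations are recorded; the final `reverse` converts the last-to-first order into actual painting order.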
[ "constructive algorithms", "data structures", "greedy", "implementation" ]
2,000
null
1638
E
Colorful Operations
You have an array $a_1,a_2, \dots, a_n$. Each element initially has value $0$ and color $1$. You are also given $q$ queries to perform: - Color $l$ $r$ $c$: Change the color of elements $a_l,a_{l+1},\cdots,a_r$ to $c$ ($1 \le l \le r \le n$, $1 \le c \le n$). - Add $c$ $x$: Add $x$ to values of all elements $a_i$ ($1 \le i \le n$) of color $c$ ($1 \le c \le n$, $-10^9 \le x \le 10^9$). - Query $i$: Print $a_i$ ($1 \le i \le n$).
In the first part, let's consider that $l=r$ for all Color operations. The idea is not to update each element during an Add operation; instead, we keep an array $lazy[color]$ which stores, for each color, the total sum we must add to it (because we didn't do it when we had to). Let's discuss each operation: Color $l$ $r$ $c$: we will use the notation $l=r=i$. In this operation we change the color of element $i$ from $c'$ to $c$. First, remember that we have the sum $lazy[c']$ that we haven't added to any of the elements of color $c'$ (including $i$), so we better do it now, because the color changes: $a[i] := a[i] + lazy[c']$. Now we can change the color to $c$. But wait, what about $lazy[c]$? It says that we will need to add some value to element $i$ later, but that would be wrong, since $i$ is now up to date. We can compensate by subtracting $lazy[c]$ from element $i$ now, repairing the mistake we will make later: $a[i] := a[i] - lazy[c]$. Finally, don't forget to set $color[i] := c$. Add $c$ $x$: this is as simple as it gets: $lazy[c] := lazy[c] + x$. Query $i$: the query operation is also not very complicated. We print the value $a[i]$ and don't forget about $lazy[color[i]]$: $print(a[i] + lazy[color[i]])$.
The time complexity is $O(1)$ per query. Now we get back to the initial problem and remove the restriction $l=r$. Let's keep an array of maximal intervals of elements with the same color; we will name them color intervals. By doing so, we can keep the $lazy$ value for a whole color interval. When we change the color of all elements in $[l,r]$, there are two kinds of color intervals that interest us: $[l',r'] \subseteq [l,r]$: the whole interval changes its color. First, we add the $lazy$ values to each such interval. Then, after changing the color of all these intervals to the same one, we can merge them all and update the resulting interval just as we would update a single element. $l \in [l',r']$ or $r \in [l',r']$ (or both): these are the two (or one) intervals that contain the endpoints $l$ and $r$. Here we first split the color interval into two (or three) smaller ones: outside and inside $[l,r]$. Then, we just update the one inside $[l,r]$ as before. Notice that, in contrast to the solution for $l=r$, here we have to add a value on a range. We can do this using a data structure such as a Fenwick tree or a segment tree in $O(\log n)$. Also, for storing the color intervals we can use a set. This allows insertions and deletions, as well as quickly finding the range of intervals modified by a coloring.
The time complexity is a bit tricky to determine, because at first it might seem to be $O(q \cdot n)$, but if we analyze the long-term effect of each update, it turns out to be much better. We will refer to the number of intervals in our array as the potential of the current state. If in one update we found $k$ color intervals contained in the update interval, the potential decreases by $k-1$ and then grows by at most $2$ (because of the two splits). The number of steps our program performs is proportional to the total change in potential. In one operation the potential can decrease by a lot, but luckily, it can only grow by $2$. Because the potential is always positive, it decreases in total at most as much as it increases. Thus, the total change in potential is $O(q)$. Although not described here, there exists another solution to this problem using only a segment tree with lazy propagation, in which the data structure stops only on monochrome segments; the time complexity is the same. Time complexity: $O(n + q \log n)$.
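The $l=r$ building block can be written as a tiny struct in C++ (names are my own); the full solution layers the interval set and the Fenwick/segment tree on top of these same three operations:

```cpp
#include <vector>
using namespace std;

// Point version (l = r): every operation is O(1) thanks to lazy[color].
struct ColorfulPoint {
    vector<long long> a, lazy;
    vector<int> color;
    ColorfulPoint(int n, int colors)
        : a(n, 0), lazy(colors + 1, 0), color(n, 1) {}

    void recolor(int i, int c) {          // Color i i c
        a[i] += lazy[color[i]];           // settle the old color's pending sum
        a[i] -= lazy[c];                  // pre-compensate the new color's sum
        color[i] = c;
    }
    void add(int c, long long x) { lazy[c] += x; }                  // Add c x
    long long query(int i) const { return a[i] + lazy[color[i]]; }  // Query i
};
```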
[ "brute force", "data structures", "implementation" ]
2,400
null
1638
F
Two Posters
You want to advertise your new business, so you are going to place two posters on a billboard in the city center. The billboard consists of $n$ vertical panels of width $1$ and varying integer heights, held together by a horizontal bar. The $i$-th of the $n$ panels has height $h_i$. Initially, all panels hang down from the bar (their top edges lie on it), but before placing the two posters, you are allowed to move each panel up by any integer length, as long as it is still connected to the bar (its bottom edge lies below or on it). After the moves are done, you will place two posters: one below the bar and one above it. They are not allowed to go over the bar and they must be positioned completely inside of the panels. What is the maximum total area the two posters can cover together if you make the optimal moves? \textbf{Note that you can also place a poster of $0$ area. This case is equivalent to placing a single poster.}
There are many ways in which two posters can be positioned, therefore we split the problem into multiple sub-problems. Consider the following cases: the two posters share no common panel; or the two posters share a range of common panels, with two sub-cases: the second poster does not share all its panels with the first, or the second poster shares all its panels with the first. Let's find the maximum total area for each of these cases; the answer will be the maximum over them. The important element to analyze in an optimal positioning of the two posters is the bottleneck. This is the panel (or one of them) which is completely covered by the posters and doesn't allow the area to be any larger. The two posters share no common panel. Since the two posters cover disjoint ranges of panels, there is a left poster and a right poster. Thus, we can choose a position somewhere between the two posters and move up to the maximum all panels to the left of this position. So, a correct solution is to consider all possible split positions. For each of them, solve the standard skyline (largest rectangle in a histogram) problem for both the left and the right side or, for an easier implementation, use a trivial precomputation. Time complexity: $O(n^2)$. The two posters share a range of common panels. In the following two subcases we will deal with pairs of posters that share some panels.
So, let's make some observations before going any further. Consider the height of the first poster (red) to be $h_1$ and the height of the second poster (blue) to be $h_2$. Now, let's find all panels $i$ such that $h_i < h_1 + h_2$. We will color these panels yellow and the rest of them gray. Because of the condition we imposed on the heights of yellow panels, they can't be shared by the two posters. On the other hand, gray panels can. Now we can make one more observation. The range of common panels lies between two yellow panels. Moreover, since we try to maximize the total area and the two heights are fixed, the range of common panels is one of the maximal gray ranges. Now comes the tricky part. We can't iterate over all the maximal gray ranges. There are $O(n)$ such ranges for any $h_1$ and $h_2$. And even worse, the two heights are up to $10^{12}$. The problem is that we are looking from the wrong perspective. Instead of iterating over $h_1$ and $h_2$ and then finding the yellow panels, let's consider some possible maximal intersection range and then find all pairs of $h_1$ and $h_2$ influenced by this range. Ok, but there are $O(n^2)$ ranges to consider, right? Well, there are actually only $O(n)$. Please note that the yellow/gray notation only works after fixing some $h_1$ and $h_2$; we refer to as a maximal intersection range any range that could meet the conditions for some suitable $h_1$ and $h_2$. Let the smallest panel inside a maximal intersection range be its representative (if there are multiple, take the leftmost one). Now consider some panel $i$. For which maximal intersection ranges is it the representative? Start with the range $[i,i]$ and extend to the left and to the right, while at least one of the bounding panels is larger than or equal to $h_i$. We are doing this because we search for maximal intersection ranges, and this means that they are contained between two yellow panels of smaller height.
Now we stopped at some range $[x,y]$, where $h_j > h_{x-1}$ and $h_j > h_{y+1}$ for any $j \in [x,y]$. Can we extend even more? No: we considered panel $i$ to be the representative and thus the smallest in the range, so extending it any further would contradict this. We now have $O(n)$ possible maximal intersection ranges, each having a unique representative. Let the prefix of panels before the range be colored red and the suffix of panels after the range be colored blue. Now, let's look at the two remaining cases and solve them. The second poster does not share all its panels with the first. This means that the first poster covers all gray panels and some of the red ones, and the second poster covers all gray panels and some of the blue ones. Among all the ways two posters can be positioned in this case, some are useless because they can be transformed into better configurations. How do we tell if a configuration is useful? Well, in a useful configuration the two posters should meet one of the following two conditions: they touch (the representative panel being a bottleneck) and one of them also has its own bottleneck, or each of them has its own bottleneck. If neither of them had its own bottleneck, it would be possible to stretch the wider one, while shrinking the other, in order to increase the total area; this way, a bottleneck could be formed in a better configuration. So, let's sum up. First, we choose a representative panel. Then, we find the range it represents. Next, solve each of the above cases separately. Some prefix/suffix precomputation and a two pointers algorithm should be enough. Time complexity: $O(n^2)$. The second poster shares all its panels with the first. This means that the second poster covers all gray panels, and the first poster covers all gray panels, some of the red ones and some of the blue ones (here, we will replace blue and red with pink).
This is the simpler subcase of the two. We only need to keep track of the left and right pink panels while we gradually decrease the height of the second poster. We will use the two pointers algorithm again. Note that the second (blue) poster should touch the end of the representative panel. You might argue that breaking this restriction could result in a larger total area, but don't forget that we will handle those cases using other representative panels and their respective gray ranges. Time complexity: $O(n^2)$.
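As an illustration of the first case only (disjoint posters), here is a hedged $O(n^2)$ sketch of the "trivial precomputation": the best single rectangle fully inside every prefix and every suffix of the skyline, combined over all split points. The names and the brute-force histogram scans are my own, not the editorial's implementation, and this does not handle the shared-panel cases.

```cpp
#include <algorithm>
#include <vector>
using namespace std;
typedef long long ll;

// Disjoint-posters case: pref[s] is the largest rectangle inside h[0..s-1],
// suf[s+1] the largest inside h[s..n-1]; try every split point s.
ll bestDisjointPair(const vector<ll>& h) {
    int n = (int)h.size();
    vector<ll> pref(n + 1, 0), suf(n + 2, 0);
    for (int i = 0; i < n; i++) {         // best rectangle inside h[0..i]
        pref[i + 1] = pref[i];
        ll mn = h[i];
        for (int j = i; j >= 0; j--) {
            mn = min(mn, h[j]);
            pref[i + 1] = max(pref[i + 1], mn * (i - j + 1));
        }
    }
    for (int i = n - 1; i >= 0; i--) {    // best rectangle inside h[i..n-1]
        suf[i + 1] = suf[i + 2];
        ll mn = h[i];
        for (int j = i; j < n; j++) {
            mn = min(mn, h[j]);
            suf[i + 1] = max(suf[i + 1], mn * (j - i + 1));
        }
    }
    ll best = 0;
    for (int s = 0; s <= n; s++)          // left poster left of s, right poster from s on
        best = max(best, pref[s] + suf[s + 1]);
    return best;
}
```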
[ "brute force", "data structures", "greedy", "two pointers" ]
3,200
null
1641
A
Great Sequence
A sequence of positive integers is called great for a positive integer $x$, if we can split it into pairs in such a way that in each pair the first number multiplied by $x$ is equal to the second number. More formally, a sequence $a$ of size $n$ is great for a positive integer $x$, if $n$ is even and there exists a permutation $p$ of size $n$, such that for each $i$ ($1 \le i \le \frac{n}{2}$) $a_{p_{2i-1}} \cdot x = a_{p_{2i}}$. Sam has a sequence $a$ and a positive integer $x$. Help him to make the sequence great: find the minimum possible number of positive integers that should be added to the sequence $a$ to make it great for the number $x$.
Let's look at the minimal integer in our multiset. Since it is the smallest, it can only be the smaller element of its pair, so it must be matched with the integer equal to it multiplied by $x$. Thus, we can maintain the current multiset: repeatedly take the minimal element out of it, look for the element equal to it multiplied by $x$, and delete that element from the multiset if it exists, or add $1$ to the answer if there is no such element.
[ "brute force", "greedy", "sortings" ]
1,200
#include <iostream> #include <vector> #include <algorithm> using namespace std; signed main() { (*cin.tie(0)).sync_with_stdio(0); int t; cin >> t; while (t--) { int n; int64_t x; cin >> n >> x; vector<int64_t> ar(n); for (auto& it : ar) cin >> it; sort(ar.begin(), ar.end()); vector<bool> vis(n); int j = 0, q = 0; int ans = 0; for (int i = 0; i < n; ++i) { if (vis[i]) continue; if (ar[i] * x > ar[j]) { while (ar[i] * x >= ar[j] && j < n) q = ++j; q = --j; } if (i < q && ar[i] * x == ar[q]) vis[q--] = 1; else ans++; } cout << ans << "\n"; } return 0; }
1641
B
Repetitions Decoding
Olya has an array of integers $a_1, a_2, \ldots, a_n$. She wants to split it into tandem repeats. Since it's rarely possible, before that she wants to perform the following operation several (possibly, zero) number of times: insert a pair of equal numbers into an arbitrary position. Help her! More formally: - A tandem repeat is a sequence $x$ of even length $2k$ such that for each $1 \le i \le k$ the condition $x_i = x_{i + k}$ is satisfied. - An array $a$ could be split into tandem repeats if you can split it into several parts, each being a subsegment of the array, such that each part is a tandem repeat. - In one operation you can choose an arbitrary letter $c$ and insert $[c, c]$ to any position in the array (at the beginning, between any two integers, or at the end). - You are to perform several operations and split the array into tandem repeats or determine that it is impossible. Please note that you do \textbf{not} have to minimize the number of operations.
Let's prove that we can turn the array into a concatenation of tandem repeats using the given operations if and only if every letter occurs an even number of times. If there is a letter $x$ that occurs an odd number of times, there is no such sequence of operations, since the parity of the number of occurrences of letter $x$ stays the same: if we insert a different letter, the number of occurrences of letter $x$ does not change, and if we insert letter $x$, we add $2$ occurrences of it. Thus, it will be impossible to split the array into tandem repeats. If we have an array $s_{1}s_{2}...s_{n}$ and we want to reverse its prefix of length $k \leq n$, we can insert a pair of letters equal to $s_{1}$ after the $k$-th symbol, a pair of letters equal to $s_{2}$ after the $(k+1)$-th symbol, and so on. $s_1s_2...s_ks_{k+1}...s_n$ $s_1s_2...s_ks_1s_1s_{k+1}...s_n$ $s_1s_2...s_ks_1s_2s_2s_1s_{k+1}...s_n$ $...$ $s_1s_2...s_ks_1s_2...s_ks_k...s_2s_1s_{k+1}...s_n$ It is obvious that the first $2k$ symbols of the array form a tandem repeat. We can add it to our division and cut it out from the array. The array will now have its prefix of length $k$ reversed. Thus, we can move any element to the beginning of the array, so we can simply sort it. Since every element occurs an even number of times, the resulting array will be a concatenation of tandem repeats consisting of equal letters. $\mathcal{O}(2n^2)$ insertions solution: 147514019 $\mathcal{O}(n^2)$ insertions solution: 147514028
[ "constructive algorithms", "implementation", "sortings" ]
2,000
#define _USE_MATH_DEFINES #include <algorithm> #include <array> #include <bitset> #include <cassert> #include <chrono> #include <cmath> #include <complex> #include <deque> #include <fstream> #include <functional> #include <iomanip> #include <iostream> #include <list> #include <map> #include <math.h> #include <numeric> #include <queue> #include <random> #include <set> #include <sstream> #include <stack> #include <string> #include <unordered_map> #include <unordered_set> #include <utility> #include <vector> using namespace std; void reverse_range(vector<int> &ar, vector<pair<int, int>> &ans, vector<int> &lens, int &mdf, int l, int r) { for (int i = l; i <= r; ++i) ans.emplace_back(r + 1 + mdf + i - 2 * l, ar[i]); if (r - l + 1 > 0) lens.push_back((r - l + 1) * 2); mdf += (r - l + 1) * 2; reverse(ar.begin() + l, ar.begin() + r + 1); } void move_last_to_front(vector<int> &ar, vector<pair<int, int>> &ans, vector<int> &lens, int &mdf, int l, int r) { reverse_range(ar, ans, lens, mdf, l, r - 1); reverse_range(ar, ans, lens, mdf, l, r); } signed IlkrasTEQ1Solve(int n, vector<int>& ar) { if (n % 2) { cout << "-1\n"; return 0; } int xr = 0; unordered_map<int, int> cnt; for (auto &it : ar) { cnt[it]++; } for (auto &it : cnt) if (it.second % 2) { cout << "-1\n"; return 0; } vector<pair<int, int>> ans; vector<int> lens; ans.reserve(n * n * 2); lens.reserve(n * 2); int mdf = 0; for (int i = 0; i < n; i += 2) { int fnd = (int)(find(ar.begin() + i + 1, ar.end(), ar[i]) - ar.begin()); move_last_to_front(ar, ans, lens, mdf, i, fnd); lens.push_back(2); mdf += 2; } cout << (int)ans.size() << "\n"; for (auto &it : ans) cout << it.first << " " << it.second << "\n"; cout << (int)lens.size() << "\n"; for (auto &it : lens) cout << it << " "; cout << "\n"; return 0; } signed main() { (*cin.tie(0)).sync_with_stdio(0); int q; cin >> q; while (q--) { int n; cin >> n; vector<int> ar(n); for (auto &it : ar) cin >> it; if (n % 2) { cout << "-1\n"; continue; } IlkrasTEQ1Solve(n, ar); } return 0; }
1641
C
Anonymity Is Important
In the work of a doctor, it is important to maintain the anonymity of clients and the results of tests. The test results are sent to everyone personally by email, but people are very impatient and they want to know the results right away. That's why in the testing lab "De-vitro" doctors came up with an experimental way to report the results. Let's assume that $n$ people took the tests in the order of the queue. Then the chief doctor Sam can make several statements, in each telling if there is a sick person among the people in the queue from $l$-th to $r$-th (inclusive), for some values $l$ and $r$. During the process, Sam will check how well this scheme works and will be interested in whether it is possible to find out the test result of $i$-th person from the information he announced. And if it can be done, then is that patient sick or not. Help Sam to test his scheme.
If the $i$-th person is not ill, there exists a query $0 \ l \ r \ 0$ such that $l \le i \le r$. Otherwise, the person's status is either unknown or they are ill. The $i$-th person is known to be ill if there exists a query $0 \ l \ r \ 1$ such that $l \le i \le r$ and every other person $j$ with $l \leq j \leq r$, $j \neq i$, is known to be not ill. If every such query contains some person $j \neq i$, $l \le j \le r$, whose status is not "not ill", it is impossible to determine whether the $i$-th person is ill. Let's maintain the indices of the people who might be ill in a set. When we get a query $0 \ l \ r \ 0$, we find the first possibly ill person with an index of at least $l$ using lower_bound, delete this person from the set, find the next one, and repeat until the first found index is greater than $r$. Since every index is deleted at most once, this works in $O(n \log n)$ in total. If a person is not in the set, they are certainly healthy. Otherwise, we can use a segment tree that stores in its $i$-th slot the minimal $j$ such that there is a query $0 \ i \ j \ 1$; we update it whenever we get a new query of this type. When the $i$-th person might be ill, we find the closest possibly ill indices to the left ($l$) and to the right ($r$) of $i$ using our set. The $i$-th person is known to be ill exactly when the minimal element on the segment $[l + 1; i]$ of the segment tree is $< r$. The solution works in $O((n + q) \log n)$.
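A minimal sketch of the candidate-set maintenance described above, using Python's bisect on a plain sorted list instead of std::set (so a single deletion here costs O(n) rather than O(log n); the function name is illustrative):

```python
import bisect

def remove_healthy(active, l, r):
    """After a query '0 l r 0' (nobody in [l, r] is ill), delete every
    still-possible index in [l, r] from the sorted candidate list.
    Returns the indices that became known-healthy."""
    i = bisect.bisect_left(active, l)
    removed = []
    while i < len(active) and active[i] <= r:
        removed.append(active.pop(i))  # each index is removed at most once overall
    return removed

active = [0, 1, 2, 3, 4]
assert remove_healthy(active, 1, 3) == [1, 2, 3]
assert active == [0, 4]
```

Since every index enters and leaves the candidate set at most once, the total work over all queries stays near-linear.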
[ "binary search", "brute force", "data structures", "dsu", "greedy", "sortings" ]
2,200
#include <bits/stdc++.h> using namespace std; #define fi first #define se second #define pb push_back #define eb emplace_back #define mp make_pair #define gcd __gcd #define fastio ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0) #define rep(i, n) for (int i=0; i<(n); i++) #define rep1(i, n) for (int i=1; i<=(n); i++) #define all(x) (x).begin(), (x).end() #define rall(x) (x).rbegin(), (x).rend() #define endl "\n" typedef long long ll; typedef unsigned long long ull; typedef unsigned uint; typedef long double ld; typedef pair<int, int> pii; typedef pair<ll, ll> pll; typedef vector<int> vi; typedef vector<vector<int>> vvi; typedef vector<ll> vll; typedef vector<vector<ll>> vvll; typedef vector<bool> vb; typedef vector<vector<bool>> vvb; int32_t main() { fastio; int n, q; cin >> n >> q; set<int> active; rep(i, n + 1) active.insert(i); vi m(n, INT_MAX); vi ans(n, -1); while(q--) { int t; cin >> t; if(t == 0) { int l, r, x; cin >> l >> r >> x; --l, --r; if(x == 1) { int v = *active.lower_bound(l); m[v] = min(m[v], r); if(m[v] < *active.upper_bound(v)) ans[v] = 1; } else { int nxt = *active.upper_bound(r); for(auto itr = active.lower_bound(l); *itr <= r;) { if(nxt != n) m[nxt] = min(m[nxt], m[*itr]); ans[*itr] = 0; active.erase(itr); itr = active.lower_bound(l); } if(nxt != n && m[nxt] < *active.upper_bound(nxt)) ans[nxt] = 1; if(*active.begin() < l) { int prv = *prev(active.lower_bound(l)); if(m[prv] < *active.upper_bound(prv)) ans[prv] = 1; } } } else { int p; cin >> p; --p; if(ans[p] == -1) cout << "N/A\n"; else if(ans[p] == 0) cout << "NO\n"; else cout << "YES\n"; } } }
1641
D
Two Arrays
Sam changed his school and on the first biology lesson he got a very interesting task about genes. You are given $n$ arrays, the $i$-th of them contains $m$ different integers — $a_{i,1}, a_{i,2},\ldots,a_{i,m}$. Also you are given an array of integers $w$ of length $n$. Find the minimum value of $w_i + w_j$ among all pairs of integers $(i, j)$ ($1 \le i, j \le n$), such that the numbers $a_{i,1}, a_{i,2},\ldots,a_{i,m}, a_{j,1}, a_{j,2},\ldots,a_{j,m}$ are distinct.
Let's maintain a set of arrays of length $m$: add new arrays to it, delete arrays from it, and check whether the set contains a suitable pair for some array. To do this, consider a pair of sorted arrays $a$ and $b$ of length $m$. Write out all subsets of the array $a$. Then start a counter $count$, and for each subset of the array $b$ add one to $count$ if the subset occurs in $a$ and contains an odd number of elements, and subtract one if the subset occurs in $a$ and contains an even number of elements. Note that if $a$ and $b$ have at least one element in common, then $count$ will be equal to $1$ (by inclusion-exclusion over the subsets of the intersection), otherwise it will be equal to $0$. Thus, we can maintain a trie that contains all the subsets of each array in the set. Now any request to this trie is done in $O(2^m)$. Now let's sort the arrays by $w$ and use our structure to find the first array that has a suitable pair. We can find the pair itself and maintain two pointers: $l$ is the first array in the pair, $r$ is the second array in the pair. Note that now we are only interested in pairs $l_{1}, r_{1}$ such that $l < l_{1} < r_{1} < r$. Therefore, we will move $l$ to the left only. Each time we move it, we check whether there is a pair for it among $l < i < r$. If so, we move $r$ to the left until there is a pair for $l$ among $l < i \leq r$. After that we can update the answer with $w_{l} + w_{r}$. The solution works in $O(n\cdot 2^m)$. It is also possible to solve this problem in $O(\frac{n^2 \cdot m}{32})$ using std::bitset.
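The $\pm 1$ counting trick can be sanity-checked with a tiny brute force (exponential in $m$ and for illustration only; the editorial's trie evaluates the same sum incrementally):

```python
from itertools import combinations

def overlap_indicator(a, b):
    """Inclusion-exclusion over common subsets: +1 for each odd-sized and
    -1 for each even-sized subset of b that is also a subset of a.
    The total is 1 iff a and b share an element, else 0."""
    subsets_a = set()
    for k in range(1, len(a) + 1):
        subsets_a.update(combinations(sorted(a), k))
    total = 0
    for k in range(1, len(b) + 1):
        for s in combinations(sorted(b), k):
            if s in subsets_a:
                total += 1 if k % 2 == 1 else -1
    return total

assert overlap_indicator([1, 2, 3], [3, 4, 5]) == 1
assert overlap_indicator([1, 2, 3], [4, 5, 6]) == 0
assert overlap_indicator([1, 2], [1, 2]) == 1
```

The common subsets of $a$ and $b$ are exactly the non-empty subsets of their intersection $C$, and $\sum_{\emptyset \neq S \subseteq C} (-1)^{|S|+1} = 1 - (1-1)^{|C|}$, which is $1$ when $C$ is non-empty and $0$ otherwise.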
[ "bitmasks", "brute force", "combinatorics", "greedy", "hashing", "math", "two pointers" ]
2,700
#include <bits/stdc++.h> #include <ext/pb_ds/assoc_container.hpp> #include <ext/pb_ds/trie_policy.hpp> #include <ext/rope> using namespace std; #define fi first #define se second #define pb push_back #define eb emplace_back #define mp make_pair #define gcd __gcd #define fastio ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0) #define rep(i, n) for (int i=0; i<(n); i++) #define rep1(i, n) for (int i=1; i<=(n); i++) #define all(x) (x).begin(), (x).end() #define rall(x) (x).rbegin(), (x).rend() #define endl "\n" typedef long long ll; typedef unsigned long long ull; typedef unsigned uint; typedef long double ld; typedef pair<int, int> pii; typedef pair<ll, ll> pll; typedef vector<int> vi; typedef vector<vector<int>> vvi; typedef vector<ll> vll; typedef vector<vector<ll>> vvll; typedef vector<bool> vb; typedef vector<vector<bool>> vvb; constexpr int N = 1e5 + 5; constexpr int M = 5; constexpr int T = 1000; #pragma GCC target("popcnt") int a[N][M]; int elem[N * M]; vi idx[N * M]; unique_ptr<bitset<N>> good[N * M]; bitset<N> state; int w[N]; int ord[N]; int conv[N]; int32_t main() { fastio; int n, m; cin >> n >> m; int sz = 0; rep(i, n) { rep(j, m) cin >> a[i][j], elem[sz++] = a[i][j]; cin >> w[i]; } sort(elem, elem + sz); sz = unique(elem, elem + sz) - elem; iota(ord, ord + n, 0); sort(ord, ord + n, [&](int i, int j) {return w[i] < w[j];}); rep(i, n) conv[ord[i]] = i; rep(i, n) { sort(a[i], a[i] + m); rep(j, m) { int v = lower_bound(elem, elem + sz, a[i][j]) - elem; a[i][j] = v; if(!j || a[i][j] != a[i][j - 1]) idx[v].pb(conv[i]); } } rep(i, sz) if(idx[i].size() >= T) { good[i] = make_unique<bitset<N>>(); good[i] -> set(); for(int v: idx[i]) good[i] -> reset(v); } int ans = INT_MAX; rep(i, n) { state.set(); state[conv[i]] = 0; rep(j, m) if(!j || a[i][j] != a[i][j - 1]) { int v = a[i][j]; if(idx[v].size() < T) for(int x: idx[v]) state[x] = 0; else state &= *good[v]; } int id = state._Find_first(); if(id >= n) continue; else { id = ord[id]; ans = min(ans, w[i] + 
w[id]); } } cout << (ans == INT_MAX ? -1 : ans) << endl; }
1641
E
Special Positions
You are given an array $a$ of length $n$. Also you are given $m$ distinct positions $p_1, p_2, \ldots, p_m$ ($1 \leq p_i \leq n$). A \textbf{non-empty} subset of these positions $T$ is randomly selected with equal probability and the following value is calculated: $$\sum_{i=1}^{n} (a_i \cdot \min_{j \in T} \left|i - j\right|).$$ In other words, for each index of the array, $a_i$ and the distance to the closest chosen position are multiplied, and then these values are summed up. Find the expected value of this sum. This value must be found modulo $998\,244\,353$. More formally, let $M = 998\,244\,353$. It can be shown that the answer can be represented as an irreducible fraction $\frac{p}{q}$, where $p$ and $q$ are integers and $q \neq 0$ (mod $M$). Output the integer equal to $p \cdot q^{-1}$ (mod $M$). In other words, output such integer $x$ that $0 \leq x < M$ and $x \cdot q = p$ (mod $M$).
First of all, calculate for each index the total sum of distances among all subsets if the closest selected position is to the left. Let $suf_i = 2^{cnt_i}$, where $cnt_i$ is the number of special positions at $i$ or to the right of $i$ (if $i > n$ then $cnt_i = 0$). Let $special_i = 1$ if position $i$ is special, otherwise $special_i = 0$. It's not hard to see that the value for the position $pos$ in this case equals $\sum_{i < j,\ i + j = 2 \cdot pos} special_i \cdot suf_j \cdot (pos - i)$: if the closest selected position to the left of $pos$ is $i$, then no position closer than $j = 2 \cdot pos - i$ may be selected on the right, which gives $suf_j$ subsets. Then for each $pos$ calculate two values: $\sum_{i < j,\ i + j = 2 \cdot pos} special_i \cdot suf_j \cdot i$ and $\sum_{i < j,\ i + j = 2 \cdot pos} special_i \cdot suf_j$. Since $j > i$ we can find the first value using divide and conquer (the second value is found similarly): we want to consider every $l \le i < j \le r$. Halve this segment: $m = \frac{l + r}{2}$. Then create two polynomials: the polynomial $P$ of size $m - l + 1$, where $P_i = special_{l + i} \cdot (l + i)$, and the polynomial $Q$ of size $r - m$, where $Q_i = suf_{m + 1 + i}$. By multiplying these two polynomials we can recalculate the values for all pair sums $i + j$ from $l + m + 1$ to $m + r$, and then solve the two halves recursively. Thus we can find for each index the total sum of distances among all subsets if the closest selected position is to the left. To find for each index the total sum of distances among all subsets if the closest selected position is to the right, we can do the same in reverse order. Note that we need to handle the case where the closest selected position to the left and the closest selected position to the right are at the same distance from $pos$. It can be done by replacing $cnt_i$ in one of the two passes by the number of special positions strictly to the right of $i$. It can be implemented in $O(n \log^2{n})$ using FFT.
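A sketch of the divide-and-conquer recurrence, with a naive quadratic convolution standing in for the FFT multiplication (function names and the small test data are illustrative). For every pair sum $s = i + j$ it accumulates $special_i \cdot suf_j$ over $i < j$, which is the inner sum used above:

```python
def naive_mult(p, q):
    # O(|p|*|q|) stand-in for the FFT-based multiplication
    res = [0] * (len(p) + len(q) - 1)
    for i, pv in enumerate(p):
        for j, qv in enumerate(q):
            res[i + j] += pv * qv
    return res

def dnc(l, r, special, suf, out):
    """Accumulate, for every s = i + j with l <= i < j <= r,
    the sum of special[i] * suf[j] into out[s]."""
    if r - l < 1:
        return
    m = (l + r) // 2
    dnc(l, m, special, suf, out)
    dnc(m + 1, r, special, suf, out)
    # cross pairs: i in [l, m], j in [m + 1, r]; coefficient t of the
    # product corresponds to the pair sum s = l + m + 1 + t
    prod = naive_mult(special[l:m + 1], suf[m + 1:r + 1])
    for t, v in enumerate(prod):
        out[l + m + 1 + t] += v

special = [1, 0, 1, 1]
suf = [8, 4, 2, 1]
n = len(special)
out = [0] * (2 * n)
dnc(0, n - 1, special, suf, out)
brute = [0] * (2 * n)
for i in range(n):
    for j in range(i + 1, n):
        brute[i + j] += special[i] * suf[j]
assert out == brute
```

Replacing `naive_mult` with an NTT-based multiply gives the stated $O(n \log^2 n)$ bound.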
[ "combinatorics", "divide and conquer", "fft", "math" ]
3,300
#include <bits/stdc++.h> using namespace std; constexpr int MOD = 998244353, ROOT = 3; int add(int a, int b) { a += b; return a - MOD * (a >= MOD); } int sub(int a, int b) { a -= b; return a + MOD * (a < 0); } int mult(int a, int b) { return 1ll * a * b % MOD; } int power(int a, int b) { int prod = 1; for (; b; b >>= 1, a = mult(a, a)) if (b & 1) prod = mult(prod, a); return prod; } void fft(vector<int> &a) { int n = a.size(), lg = __lg(n); assert((1 << lg) == n); for (int i = 0; i < n; i++) { int j = 0; for (int bit = 0; bit < lg; bit++) j ^= (i >> bit & 1) << (lg - bit - 1); if (i < j) swap(a[i], a[j]); } for (int length = 1; length < n; length <<= 1) { int root = power(ROOT, (MOD - 1) / (length << 1)); for (int i = 0; i < n; i += length << 1) for (int j = 0, x = 1; j < length; j++, x = mult(x, root)) { int value = mult(a[i + j + length], x); a[i + j + length] = sub(a[i + j], value); a[i + j] = add(a[i + j], value); } } } vector<int> multiply(vector<int> a, vector<int> b) { int result_size = int(a.size() + b.size()) - 1, n = 1; while (n < result_size) n <<= 1; a.resize(n); b.resize(n); fft(a), fft(b); for (int i = 0; i < n; i++) a[i] = mult(a[i], b[i]); fft(a); reverse(a.begin() + 1, a.end()); int inv_n = power(n, MOD - 2); for (auto &x : a) x = mult(x, inv_n); a.resize(result_size); return a; } void solve(int l, int r, const vector<int> &a, const vector<int> &b, vector<int> &c) { if (r - l == 1) return; int m = (l + r) >> 1; solve(l, m, a, b, c); solve(m, r, a, b, c); auto prod = multiply(vector<int>(a.begin() + l, a.begin() + m), vector<int>(b.begin() + m, b.begin() + r)); for (int i = 0; i < int(prod.size()); i++) c[i + l + m] = add(c[i + l + m], prod[i]); } int main() { ios::sync_with_stdio(false), cin.tie(nullptr); int n, m; cin >> n >> m; vector<int> a(n); for (auto &x : a) cin >> x; vector<int> positions(m); vector<bool> is_special(n); for (auto &pos : positions) { cin >> pos; pos--; is_special[pos] = true; } vector<int> p2(m + 1, 1); for (int i = 1; i <= 
m; i++) p2[i] = add(p2[i - 1], p2[i - 1]); int sum = 0; for (int rot : {0, 1}) { vector<int> first(n), second(n); for (int i = 0; i < m; i++) { first[positions[i]] = p2[i]; if (positions[i] - rot >= 0) second[positions[i] - rot] = p2[m - 1 - i]; } for (int i = n - 2; i >= 0; i--) second[i] = add(second[i], second[i + 1]); vector<int> ways(2 * n); solve(0, n, first, second, ways); for (int i = 0; i < n; i++) first[i] = mult(first[i], i); vector<int> to_sub(2 * n); solve(0, n, first, second, to_sub); for (int i = 0; i < n; i++) sum = add(sum, mult(a[i], sub(mult(i, ways[2 * i]), to_sub[2 * i]))); for (int i = 0, pref = 0, tot = 0; i < n; i++) { sum = add(sum, mult(a[i], sub(mult(i, sub(p2[pref], 1)), tot))); if (is_special[i]) { tot = add(tot, mult(i, p2[pref])); pref++; } } reverse(a.begin(), a.end()); reverse(is_special.begin(), is_special.end()); reverse(positions.begin(), positions.end()); for (auto &pos : positions) pos = n - 1 - pos; } cout << mult(sum, power(sub(p2[m], 1), MOD - 2)) << '\n'; }
1641
F
Covering Circle
Sam started playing with round buckets in the sandbox, while also scattering pebbles. His mom decided to buy him a new bucket, so she needs to solve the following task. You are given $n$ distinct points with integer coordinates $A_1, A_2, \ldots, A_n$. All points were generated from the square $[-10^8, 10^8] \times [-10^8, 10^8]$ uniformly and independently. You are given positive integers $k$, $l$, such that $k \leq l \leq n$. You want to select a subsegment $A_i, A_{i+1}, \ldots, A_{i+l-1}$ of the points array (for some $1 \leq i \leq n + 1 - l$), and some circle on the plane, containing $\geq k$ points of the selected subsegment (inside or on the border). What is the smallest possible radius of that circle?
If the answer is $r$, let's consider $n$ circles $C_1, C_2, \ldots, C_n$ with centers $A_1, A_2, \ldots, A_n$ and radius $r$. If a required circle with radius $r$ exists, then $k$ circles $C_{i_1}, C_{i_2}, \ldots, C_{i_k}$ have a non-empty intersection for some $i_1 < i_2 < \ldots < i_k$ with $i_k - i_1 < l$. For some $1 \leq h \leq k$ we can find an intersection point of these circles on the circle $C_{i_h}$. Let's define $j = i_h$. Let's define $f(j)$ as the minimal possible $r$ such that there exist $k$ circles $C_{i_1}, C_{i_2}, \ldots, C_{i_k}$ with $i_k - i_1 < l$, such that $j = i_h$ for some $1 \leq h \leq k$ and these circles intersect on the circle $C_j$. Then the answer to the problem is $\min_{1 \leq j \leq n} f(j)$. Let's make a procedure to check that $f(j) \leq x$ for some $x$. To check that, consider circles $C_{j-l+1}, \ldots, C_j, \ldots, C_{j+l-1}$ with centers $A_{j-l+1}, \ldots, A_j, \ldots, A_{j+l-1}$ and radius $x$. Each of them intersects $C_j$ in some arc (circular segment). We can find all these arcs. Now let's run a scanline over $C_j$ and maintain the indices of all arcs covering the current point. With a segment tree with lazy propagation we can check whether there exist $k - 1$ indices with difference at most $l - 1$. The complexity of this check is $O(s \log{n})$, where $s$ is the number of arcs. Let's iterate $j$ from $1$ to $n$ and maintain the current answer $r$. Initially, initialize $r$ with $\sqrt{2} \cdot 10^8 \cdot \frac{\sqrt{l-1}}{\sqrt{k-1}}$ (it's easy to prove that the answer can't be larger than this constant for any points). Now, for a given $j$, first check that $f(j) \leq r$ (if not, the answer won't be updated); if it is true, run a binary search to find the new answer. The only problem is that the number of arcs can be big. Note that $C_i$ makes an arc on $C_j$ if and only if $|A_i A_j| \leq 2x$. So let's build an infinite grid with step $2r$ and maintain the set of points in each square. Also, we should only keep the points from the segment $[j - l + 1, j + l - 1]$. After that, we only need to check the indices that are in the same square as the point $A_j$ or in the $8$ neighboring squares. It's easy to prove that if $r \leq \sqrt{2} \cdot 10^8 \cdot \frac{\sqrt{l-1}}{\sqrt{k-1}}$, the expected number of points in each grid square is $O(k)$. In practice, the average number of points to check is around $4k \sim 5k$. So we can find these candidate points and then use them in the procedure that checks $f(j) \leq x$, which will work in $O(k \log{n})$. If $r$ changes, we can rebuild the whole grid in $O(l)$. Since all points are random, the expected number of times $r$ changes is $O(\log{n})$ (the famous Blogewoosh #6 idea). So the complexity of this solution is $O(n k \log{n} + k \log{n} \log{\varepsilon^{-1}})$.
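A sketch of the grid bucketing described above (class and method names are illustrative): with cell side $2r$, any point within distance $2r$ of $A_j$ lies in $A_j$'s cell or in one of the $8$ neighboring cells, so only those cells need to be scanned.

```python
from collections import defaultdict

class Grid:
    """Infinite grid with cell side `step`; candidate neighbours of a point
    are gathered from its own cell and the 8 adjacent ones."""
    def __init__(self, step):
        self.step = step
        self.cells = defaultdict(set)

    def _cell(self, x, y):
        # floor division handles negative coordinates correctly
        return (x // self.step, y // self.step)

    def add(self, i, x, y):
        self.cells[self._cell(x, y)].add(i)

    def remove(self, i, x, y):
        self.cells[self._cell(x, y)].discard(i)

    def candidates(self, x, y):
        cx, cy = self._cell(x, y)
        out = set()
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                # .get avoids materialising empty cells in the defaultdict
                out |= self.cells.get((cx + dx, cy + dy), set())
        return out

g = Grid(step=10)
g.add(0, 3, 3)
g.add(1, 12, 4)   # adjacent cell
g.add(2, 95, 95)  # far away
assert g.candidates(5, 5) == {0, 1}
```

If $|x_1 - x_2| \le 2r$ then their cell indices differ by at most $1$ in each coordinate, so the $3 \times 3$ neighbourhood is sufficient; the distance filter $|A_i A_j| \le 2x$ is then applied to the returned candidates.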
[ "geometry" ]
3,500
#include <bits/stdc++.h> using namespace std; typedef long long ll; typedef double ld; mt19937 rnd(228); #define TIME (clock() * 1.0 / CLOCKS_PER_SEC) const int M = 5e4 + 239; const int X = (int)(1e8) + 239; const int T = (1 << 17) + 239; const ld pi = acos((ld)-1.0); int n, l, k, bd, x[M], y[M]; int idx[M], sz_idx; int divide(int a, int b) { if (a >= 0 || a % b == 0) { return a / b; } return -(abs(a) / b) - 1; } class Hasher { public: size_t operator()(const pair<int, int>& key) const { return ((ll)(key.first + X) << 30LL) + (ll)key.second; } }; struct helper { int R = 0; int l = 0; int r = 0; unordered_map<pair<int, int>, unordered_set<int>, Hasher> in; template <typename P> void upload(int x, int y, P pred) { int xi = divide(x, R); int yi = divide(y, R); for (int dx = -1; dx <= 1; dx++) { for (int dy = -1; dy <= 1; dy++) { auto it = in.find(make_pair(xi + dx, yi + dy)); if (it != in.end()) { for (int i : it->second) { if (pred(i)) { idx[sz_idx++] = i; } } } } } } void add(int i) { int xi = divide(::x[i], R); int yi = divide(::y[i], R); in[make_pair(xi, yi)].insert(i); } void del(int i) { int xi = divide(::x[i], R); int yi = divide(::y[i], R); in[make_pair(xi, yi)].erase(i); } void remake(ld new_r) { R = ceil(new_r) + 1; in.clear(); in.reserve(sz_idx * 2); for (int i = l; i < r; i++) { add(i); } } void move_left() { del(l); l++; } void move_right() { add(r); r++; } }; ld dist(int i, int j) { return hypot(x[j] - x[i], y[j] - y[i]); } ld angle(int i, int j) { return atan2(y[j] - y[i], x[j] - x[i]); } int tree[T], add[T]; void build(int i, int l, int r) { tree[i] = 0; add[i] = 0; if (r - l == 1) { return; } int mid = (l + r) / 2; build(2 * i + 1, l, mid); build(2 * i + 2, mid, r); } inline void push(int i, int l, int r) { tree[i] += add[i]; if (r - l > 1) { add[2 * i + 1] += add[i]; add[2 * i + 2] += add[i]; } add[i] = 0; } void upd(int i, int l, int r, int ql, int qr, int x) { push(i, l, r); if (r <= ql || qr <= l) { return; } if (ql <= l && r <= qr) { add[i] += x; 
push(i, l, r); return; } int mid = (l + r) / 2; upd(2 * i + 1, l, mid, ql, qr, x); upd(2 * i + 2, mid, r, ql, qr, x); tree[i] = max(tree[2 * i + 1], tree[2 * i + 2]); } void clear(int i, int l, int r) { push(i, l, r); if (tree[i] == 0) { return; } tree[i] = 0; if (r - l > 1) { int mid = (l + r) / 2; clear(2 * i + 1, l, mid); clear(2 * i + 2, mid, r); } } int s[M], cnt; vector<pair<ld, int>> events; bool check(int p, ld t) { if (sz_idx < k - 1) { return false; } bool ans = false; events.clear(); events.reserve(sz_idx * 2); cnt = 0; for (int ii = 0; ii < sz_idx; ii++) { int i = idx[ii]; ld d = dist(p, i); if (d > 2 * t) { continue; } ld a = angle(p, i); ld len = acos(min((ld)1.0, d / (2 * t))); ld lg = a - len; ld rg = a + len; if (lg < -pi) { lg += 2 * pi; } if (rg > pi) { rg -= 2 * pi; } events.emplace_back(lg, -1 - i); events.emplace_back(rg, 1 + i); if (lg > rg) { upd(0, 0, bd, max(0, i - l + 1), min(bd, i + 1), 1); s[cnt++] = i; if (tree[0] >= k - 1) { ans = true; } } } if (!ans) { sort(events.begin(), events.end()); for (const auto& t : events) { if (t.second < 0) { int i = -t.second - 1; upd(0, 0, bd, max(0, i - l + 1), min(bd, i + 1), 1); } else { int i = t.second - 1; upd(0, 0, bd, max(0, i - l + 1), min(bd, i + 1), -1); } if (tree[0] >= k - 1) { ans = true; } } } for (int i = 0; i < cnt; i++) { upd(0, 0, bd, max(0, s[i] - l + 1), min(bd, s[i] + 1), -1); } return ans; } ld func(helper& hl, helper& hr, int p, ld pa) { auto is_good = [&](int i) { return dist(p, i) <= 2 * pa; }; sz_idx = 0; hl.upload(x[p], y[p], is_good); hr.upload(x[p], y[p], is_good); if (!check(p, pa)) { return pa; } ld lg = 0; ld rg = pa; for (int i = 0; i < 70; i++) { ld mid = (lg + rg) / 2; if (check(p, mid)) { rg = mid; } else { lg = mid; } } hl.remake(2 * rg); hr.remake(2 * rg); return rg; } void solve() { cin >> n >> l >> k; for (int i = 0; i < n; i++) { cin >> x[i] >> y[i]; } bd = n - l + 1; int s = sqrt((ld)(l - 1) / (k - 1)); ld ans = ((1e8 / s) * sqrt(2)) + 1; helper hl, hr; 
hl.remake(2 * ans); hr.remake(2 * ans); for (int i = 0; i < l; i++) { hr.move_right(); } build(0, 0, bd); for (int i = 0; i < n; i++) { hr.move_left(); // solve ans = min(ans, func(hl, hr, i, ans)); if (i + 1 == n) { break; } if (i + l < n) { hr.move_right(); } if (i - l + 1 >= 0) { hl.move_left(); } hl.move_right(); } cout << ans << "\n"; } int main() { #ifdef ONPC freopen("input", "r", stdin); #endif ios::sync_with_stdio(0); cin.tie(0); cout.tie(0); cout << fixed << setprecision(20); int t; cin >> t; while (t--) { solve(); } return 0; }
1642
A
Hard Way
Sam lives in Awesomeburg, its downtown has a triangular shape. Also, the following is true about the triangle: - its vertices have integer coordinates, - the coordinates of vertices are non-negative, and - its vertices are not on a single line. He calls a point on the downtown's border (that is the border of the triangle) safe if he can reach this point from \textbf{at least one point} of the line $y = 0$ walking along some \textbf{straight line}, without crossing the interior of the triangle. \begin{center} {\small In the picture the downtown is marked with grey color. The first path is invalid because it does not go along a straight line. The second path is invalid because it intersects with the interior of the downtown. The third and fourth paths are correct.} \end{center} Find the total length of the unsafe parts of the downtown border. It can be proven that these parts are segments and their number is finite.
If a side of the triangle is not parallel to the line $y = 0$, all points on this side are safe: the line containing this side intersects $y = 0$, so there is a point on $y = 0$ from which we can reach any point of this side along a straight line. A side that is parallel to $y = 0$ is also safe if the third vertex has a greater $y$, since then the triangle lies above it. Thus, a point can be unreachable if and only if it lies on the "upper" horizontal side of the triangle (a horizontal side whose third vertex is strictly below it), because it is impossible to draw a straight line from such a point to the line $y = 0$ without crossing the interior of the triangle. So the answer is the length of this side if it exists, and $0$ otherwise.
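The whole solution fits in a few lines; a sketch mirroring the reasoning above (the function name is illustrative):

```python
def unsafe_length(pts):
    """pts: three (x, y) vertices. Only a horizontal side whose third
    vertex lies strictly below it (the 'upper' horizontal side) is unsafe."""
    for i in range(3):
        (x1, y1), (x2, y2), (x3, y3) = pts[i], pts[(i + 1) % 3], pts[(i + 2) % 3]
        if y1 == y2 and y3 < y1:
            return abs(x1 - x2)
    return 0

assert unsafe_length([(0, 2), (4, 2), (1, 0)]) == 4   # horizontal top side
assert unsafe_length([(0, 0), (4, 0), (1, 3)]) == 0   # horizontal side is on the bottom
assert unsafe_length([(0, 0), (3, 1), (1, 4)]) == 0   # no horizontal side
```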
[ "geometry" ]
800
#include <vector> #include <algorithm> #include <iostream> #include <cassert> #include <map> #include <set> #include <cmath> #include <array> using namespace std; signed main() { if (1) { ios_base::sync_with_stdio(false); cin.tie(0); cout.tie(0); } int t; cin >> t; while (t--) { int a, b, c, d, e, f, ans = 0; cin >> b >> a >> d >> c >> f >> e; if (a == c && e < a) ans = abs(b - d); else if (c == e && a < c) ans = abs(d - f); else if (a == e && c < a) ans = abs(b - f); cout << ans << "\n"; } }
1642
B
Power Walking
Sam is a kindergartener, and there are $n$ children in his group. He decided to create a team with some of his children to play "brawl:go 2". Sam has $n$ power-ups, the $i$-th has type $a_i$. A child's strength is equal to the number of \textbf{different} types among power-ups he has. For a team of size $k$, Sam will distribute all $n$ power-ups to $k$ children in such a way that each of the $k$ children receives at least one power-up, and each power-up is given to someone. For each integer $k$ from $1$ to $n$, find the \textbf{minimum} sum of strengths of a team of $k$ children Sam can get.
It is quite easy to see that each child's strength is at least $1$, so the answer for a team of size $k$ is at least $k$. Also, the final answer is at least the number of distinct integers in the multiset, since every distinct type contributes to the strength of at least one child. It is possible to prove that the answer for $k$ is equal to $\max(k, cnt)$, where $cnt$ is the number of distinct integers. $\textbf{Proof.}$ If the number of distinct integers is $c \leq k$, we can first create $c$ multisets, the $i$-th containing only the occurrences of the $i$-th distinct value, and then split off $k - c$ extra multisets of size $1$ from the larger ones; every multiset still has strength $1$, so the answer in this case is equal to $k$. If the number of distinct integers is at least $k$, we can divide the integers into $k$ groups in such a way that for each value $x$ all occurrences of $x$ are located in the same multiset; every distinct value then contributes exactly $1$, so the answer in this case is equal to $cnt$. Thus, the answer is equal to $\max(k, cnt)$.
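A sketch of the resulting formula (the function name is illustrative):

```python
def min_strengths(a):
    """Minimum total strength for every team size k = 1..n:
    the answer is max(k, number of distinct power-up types)."""
    cnt = len(set(a))
    return [max(k, cnt) for k in range(1, len(a) + 1)]

assert min_strengths([1, 1, 2, 2]) == [2, 2, 3, 4]
assert min_strengths([5, 5, 5]) == [1, 2, 3]
```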
[ "greedy" ]
900
#include <vector> #include <algorithm> #include <iostream> #include <cassert> #include <map> #include <set> #include <cmath> #include <array> using namespace std; signed main() { ios_base::sync_with_stdio(false); cin.tie(0); cout.tie(0); int t; cin >> t; while (t--) { int n; cin >> n; map<int, int> d; for (int i = 0; i < n; ++i) { int x; cin >> x; d[x]++; } int cnt = (int)d.size(); for (int k = 1; k <= n; ++k) { cout << max(k, cnt) << "\n"; } } }
1644
A
Doors and Keys
The knight is standing in front of a long and narrow hallway. A princess is waiting at the end of it. In a hallway there are three doors: a red door, a green door and a blue door. The doors are placed one after another, however, possibly in a different order. To proceed to the next door, the knight must first open the door before. Each door can be only opened with a key of the corresponding color. So three keys: a red key, a green key and a blue key — are also placed somewhere in the hallway. To open the door, the knight should first pick up the key of its color. The knight has a map of the hallway. It can be transcribed as a string, consisting of six characters: - R, G, B — denoting red, green and blue doors, respectively; - r, g, b — denoting red, green and blue keys, respectively. Each of these six characters appears in the string exactly once. The knight is standing at the beginning of the hallway — on the left on the map. Given a map of the hallway, determine if the knight can open all doors and meet the princess at the end of the hallway.
The necessary and sufficient condition is the following: for each color, the key appears before the door. Necessity is easy to show: if a key appears after its door, this door can't be opened. Sufficiency can be shown the following way. If there are no closed doors left, the knight has reached the princess. Otherwise, consider the first door the knight encounters. He has already picked up the key for this door (it appears earlier in the string), so he opens it. We remove both the key and the door from the string and proceed to the case with one door fewer. Overall complexity: $O(1)$.
[ "implementation" ]
800
for _ in range(int(input())): s = input() for (d, k) in zip("RGB", "rgb"): if s.find(d) < s.find(k): print("NO") break else: print("YES")
1644
B
Anti-Fibonacci Permutation
Let's call a permutation $p$ of length $n$ \textbf{anti-Fibonacci} if the condition $p_{i-2} + p_{i-1} \ne p_i$ holds for all $i$ ($3 \le i \le n$). Recall that the permutation is the array of length $n$ which contains each integer from $1$ to $n$ exactly once. Your task is for a given number $n$ print $n$ \textbf{distinct} anti-Fibonacci permutations of length $n$.
Let's consider one of the possible solutions. Let's put the first element in the $x$-th permutation equal to $x$, and sort all the other elements in descending order. Thus, we get permutations of the form: $[1, n, n-1, \dots, 2]$, $[2, n, n-1, \dots, 1]$, ..., $[n, n-1, n-2, \dots, 1]$. In such a construction $p_{i-1} > p_i$ for all $i$ ($3 \le i \le n$), and hence $p_{i-2} + p_{i-1} > p_i$.
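A sketch of the construction, together with a check of the anti-Fibonacci condition (the function name is illustrative):

```python
def anti_fib_permutations(n):
    """n distinct anti-Fibonacci permutations: x first, then the
    remaining values in descending order."""
    return [[x] + [j for j in range(n, 0, -1) if j != x] for x in range(1, n + 1)]

perms = anti_fib_permutations(4)
assert len(set(map(tuple, perms))) == 4
for p in perms:
    assert sorted(p) == [1, 2, 3, 4]
    # p[i-2] + p[i-1] != p[i] for every window (0-based indices)
    assert all(p[i - 2] + p[i - 1] != p[i] for i in range(2, 4))
```

The check passes because after the first element the sequence is strictly decreasing, so $p_{i-2} + p_{i-1} > p_{i-1} > p_i$.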
[ "brute force", "constructive algorithms", "implementation" ]
800
#include <bits/stdc++.h> using namespace std; int main() { int t; cin >> t; while (t--) { int n; cin >> n; for (int i = 1; i <= n; ++i) { cout << i; for (int j = n; j > 0; --j) if (i != j) cout << ' ' << j; cout << '\n'; } } }
1644
C
Increase Subarray Sums
You are given an array $a_1, a_2, \dots, a_n$, consisting of $n$ integers. You are also given an integer value $x$. Let $f(k)$ be the maximum sum of a contiguous subarray of $a$ after applying the following operation: add $x$ to the elements on exactly $k$ \textbf{distinct} positions. An empty subarray should also be considered, it has sum $0$. Note that the subarray doesn't have to include all of the increased elements. Calculate the maximum value of $f(k)$ for all $k$ from $0$ to $n$ independently.
Consider the naive solution. Iterate over $k$. Then iterate over the segment that will have the maximum sum. Let its length be $l$. Since $x$ is non-negative, it's always optimal to apply the increases to elements inside the segment. So if $k \le l$, the sum of the segment increases by $k \cdot x$. Otherwise, only the elements inside the segment affect the sum, so it increases by $l \cdot x$. That can be written as $\min(k, l) \cdot x$. Notice that we only care about two parameters of each segment: its length and its sum. Moreover, among several segments of the same length, we only care about the one with the greatest sum. Thus, the idea of the solution is the following. For each length, find the segment of this length with the greatest sum. Then calculate $f(k)$ in $O(n)$ by iterating over the length of the segment. Overall complexity: $O(n^2)$ per testcase.
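A sketch of the $O(n^2)$ solution described above (the function name is illustrative): first the best sum for every segment length, then $f(k)$ as a maximum over lengths.

```python
def max_f(a, x):
    """f(k) for k = 0..n: best subarray sum after adding x to k positions."""
    n = len(a)
    NEG = float("-inf")
    # best[l] = maximum sum over subarrays of length l (the empty one gives 0)
    best = [NEG] * (n + 1)
    best[0] = 0
    for l in range(n):
        s = 0
        for r in range(l, n):
            s += a[r]
            best[r - l + 1] = max(best[r - l + 1], s)
    # a segment of length l gains min(k, l) * x from the k increases
    return [max(best[l] + min(k, l) * x for l in range(n + 1)) for k in range(n + 1)]

assert max_f([4, -3, -3, 2], 2) == [4, 6, 6, 6, 8]
```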
[ "brute force", "dp", "greedy", "implementation" ]
1,400
INF = 10**9 for _ in range(int(input())): n, x = map(int, input().split()) a = list(map(int, input().split())) mx = [-INF for i in range(n + 1)] mx[0] = 0 for l in range(n): s = 0 for r in range(l, n): s += a[r] mx[r - l + 1] = max(mx[r - l + 1], s) ans = [0 for i in range(n + 1)] for k in range(n + 1): bst = 0 for i in range(n + 1): bst = max(bst, mx[i] + min(k, i) * x) ans[k] = bst print(" ".join([str(x) for x in ans]))
1644
D
Cross Coloring
There is a sheet of paper that can be represented with a grid of size $n \times m$: $n$ rows and $m$ columns of cells. All cells are colored in white initially. $q$ operations have been applied to the sheet. The $i$-th of them can be described as follows: - $x_i$ $y_i$ — choose one of $k$ non-white colors and color the entire row $x_i$ and the entire column $y_i$ in it. The new color is applied to each cell, regardless of whether the cell was colored before the operation. The sheet after applying all $q$ operations is called a coloring. Two colorings are different if there exists at least one cell that is colored in different colors. How many different colorings are there? Print the number modulo $998\,244\,353$.
Let's take a look at a final coloring. Each cell has some color. There exist cells such that neither their row nor their column was covered by any operation. They are left white and don't affect the answer. All other cells are colored in one of $k$ colors. For each such cell $(x, y)$ there is the query that was the last one to color it (covering row $x$, column $y$, or both). All cells that share the same last query have the same color. Since the color for each query is chosen independently, the number of colorings is $k$ to the power of the number of queries that own at least one cell. How do we determine whether a query still owns at least one cell? It does, unless one of the following happens afterwards: both its row and its column are recolored; all rows are recolored; all columns are recolored. So the solution is to process the queries backwards, maintaining the set of recolored rows and the set of recolored columns. For each query, check the conditions; if none of them holds, multiply the answer by $k$. Overall complexity: $O(q \log (n + m))$ or $O(q)$ per testcase.
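A sketch of the backward pass (the helper name is illustrative); it mirrors the three conditions above:

```python
MOD = 998244353

def count_colorings(n, m, k, queries):
    """Process queries backwards; a query contributes a factor of k unless
    its row and its column are both already recolored, or all rows / all
    columns are."""
    rows, cols = set(), set()
    ans = 1
    for x, y in reversed(queries):
        if (x in rows and y in cols) or len(rows) == n or len(cols) == m:
            continue  # this query owns no cell in the final coloring
        ans = ans * k % MOD
        rows.add(x)
        cols.add(y)
    return ans

# 1x1 grid: every query repaints the same cell; only the last one matters
assert count_colorings(1, 1, 3, [(1, 1), (1, 1)]) == 3
# two queries on a 2x2 grid: both leave visible cells
assert count_colorings(2, 2, 3, [(1, 1), (2, 2)]) == 9
```

A query skipped because both its row and column are in the sets adds nothing new to them, and once all rows (or columns) are recolored, every earlier query is invisible too, so the skip conditions are safe.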
[ "data structures", "implementation", "math" ]
1,700
from sys import stdin, stdout

MOD = 998244353
ux = []
uy = []
for _ in range(int(stdin.readline())):
    n, m, k, q = map(int, stdin.readline().split())
    while len(ux) < n:
        ux.append(False)
    while len(uy) < m:
        uy.append(False)
    xs = [-1 for i in range(q)]
    ys = [-1 for i in range(q)]
    for i in range(q):
        x, y = map(int, stdin.readline().split())
        xs[i] = x - 1
        ys[i] = y - 1
    cx = 0
    cy = 0
    ans = 1
    for i in range(q - 1, -1, -1):
        fl = False
        if not ux[xs[i]]:
            ux[xs[i]] = True
            cx += 1
            fl = True
        if not uy[ys[i]]:
            uy[ys[i]] = True
            cy += 1
            fl = True
        if fl:
            ans = ans * k % MOD
        if cx == n or cy == m:
            break
    for i in range(q):
        ux[xs[i]] = False
        uy[ys[i]] = False
    stdout.write(str(ans) + "\n")
1644
E
Expand the Path
Consider a grid of size $n \times n$. The rows are numbered top to bottom from $1$ to $n$, the columns are numbered left to right from $1$ to $n$. The robot is positioned in a cell $(1, 1)$. It can perform two types of moves: - D — move one cell down; - R — move one cell right. The robot is not allowed to move outside the grid. You are given a sequence of moves $s$ — the initial path of the robot. This path doesn't lead the robot outside the grid. You are allowed to perform an arbitrary number of modifications to it (possibly, zero). With one modification, you can duplicate one move in the sequence. That is, replace a single occurrence of D with DD or a single occurrence of R with RR. Count the number of cells such that there exists at least one sequence of modifications that the robot visits this cell on the modified path and doesn't move outside the grid.
First, get rid of the corner cases. If the string contains only one of the letters, the answer is $n$. The general solution to the problem is to consider every single way to modify the path, then find the union of them. Well, considering every single path is too much, so let's learn to reduce the number of different sequences of modifications that we have to consider. The main observation is that all cells that the robot can visit are enclosed in the space formed by the following two paths: the first 'R' is duplicated the maximum number of times, then the last 'D' is duplicated the maximum number of times; the first 'D' is duplicated the maximum number of times, then the last 'R' is duplicated the maximum number of times. You can realize that by drawing the visited cells for some large test. To show that more formally, you can consider the visited cells row by row. Let's show that for every two different visited cells in the same row, all cells in-between them can also be visited. In the general case, we want to show that we can take the prefix of the path to the left one of these cells and duplicate any 'R' on it to reach the right cell. The suffixes of the paths will remain the same as in the initial path. If there exists an 'R' on the prefix, then we are good. Otherwise, the reason that it doesn't exist is that we duplicated 'D' too many times. Reduce that, and there will be an 'R' immediately after reaching the cell or earlier. We should also show that the number of 'R's on the path to the left cell won't reach the maximum allowed amount until reaching the right cell. Use the fact that the number of 'D's on both prefixes of the paths is the same. The other non-obvious part is that you can't reach cells outside this space. However, that can also be shown by analyzing each row independently. Finally, about the way to calculate the area of this space. The main idea is to calculate the total number of cells outside this area and subtract it from $n^2$. 
Notice that the non-visited cells form two separate parts: the one above the first path and the one to the left of the second path. These are pretty similar to each other. Moreover, you can calculate them with the same function. If we replace all 'D's in the string with 'R' and vice versa, then these parts swap places. So we can calculate the upper part, swap them and calculate it again. I think the algorithm is best described with a picture. Consider test $n=15$, $s=$ DDDRRDRRDDRRR, for example. First, there are some rows that only have one cell visited. Then the first 'R' in the string appears. Since we duplicate it the maximum number of times, it produces a long row of visited cells. The remaining part of the path becomes the outline of the area. Note that the row that marks the end of the string always ends at the last column. Thus, at most the first $|s|$ rows matter. To be exact, the number of rows that matter is equal to the number of letters 'D' in the string. For each letter 'D', let's calculate the number of non-visited cells in the row it goes down to. I found the most convenient way is to go over the string backwards. We start from the row corresponding to the number of letters 'D' in the string. It has zero non-visited cells. We can maintain the number of non-visited cells in the current row. If we encounter an 'R' in the string, we add $1$ to this number. If we encounter a 'D', we add the number to the answer. We have to stop after the first 'R' in the string. The later (well, earlier, since we are going backwards) part corresponds to the prefix of letters 'D' - the starting column on the picture. Each of these rows has $1$ visited cell, so $n-1$ non-visited. So we can easily calculate this part as well. Overall complexity: $O(|s|)$ per testcase.
[ "brute force", "combinatorics", "data structures", "implementation", "math" ]
1,900
def calc(s, n):
    ld = s.find('R')
    res = ld * (n - 1)
    y = 0
    for i in range(len(s) - 1, ld - 1, -1):
        if s[i] == 'D':
            res += y
        else:
            y += 1
    return res

for _ in range(int(input())):
    n = int(input())
    s = input()
    if s.count(s[0]) == len(s):
        print(n)
        continue
    ans = n * n
    ans -= calc(s, n)
    ans -= calc(''.join(['D' if c == 'R' else 'R' for c in s]), n)
    print(ans)
1644
F
Basis
For an array of integers $a$, let's define $|a|$ as the number of elements in it. Let's denote two functions: - $F(a, k)$ is a function that takes an array of integers $a$ and a positive integer $k$. The result of this function is the array containing $|a|$ first elements of the array that you get by replacing each element of $a$ with exactly $k$ copies of that element.For example, $F([2, 2, 1, 3, 5, 6, 8], 2)$ is calculated as follows: first, you replace each element of the array with $2$ copies of it, so you obtain $[2, 2, 2, 2, 1, 1, 3, 3, 5, 5, 6, 6, 8, 8]$. Then, you take the first $7$ elements of the array you obtained, so the result of the function is $[2, 2, 2, 2, 1, 1, 3]$. - $G(a, x, y)$ is a function that takes an array of integers $a$ and two \textbf{different} integers $x$ and $y$. The result of this function is the array $a$ with every element equal to $x$ replaced by $y$, and every element equal to $y$ replaced by $x$.For example, $G([1, 1, 2, 3, 5], 3, 1) = [3, 3, 2, 1, 5]$. An array $a$ is a \textbf{parent} of the array $b$ if: - either there exists a positive integer $k$ such that $F(a, k) = b$; - or there exist two different integers $x$ and $y$ such that $G(a, x, y) = b$. An array $a$ is an \textbf{ancestor} of the array $b$ if there exists a finite sequence of arrays $c_0, c_1, \dots, c_m$ ($m \ge 0$) such that $c_0$ is $a$, $c_m$ is $b$, and for every $i \in [1, m]$, $c_{i-1}$ is a parent of $c_i$. And now, the problem itself. You are given two integers $n$ and $k$. Your goal is to construct a sequence of arrays $s_1, s_2, \dots, s_m$ in such a way that: - every array $s_i$ contains exactly $n$ elements, and all elements are integers from $1$ to $k$; - for every array $a$ consisting of exactly $n$ integers from $1$ to $k$, the sequence contains at least one array $s_i$ such that $s_i$ is an ancestor of $a$. Print the minimum number of arrays in such sequence.
First of all, since the second operation changes all occurrences of some number $x$ to other number $y$ and vice versa, then, by using it, we can convert an array into another array if there exists a bijection between elements in the first array and elements in the second array. It can also be shown that $F(G(a, x, y), m) = G(F(a, m), x, y)$, so we can consider that if we want to transform an array into another array, then we first apply the function $F$, then the function $G$. Another relation that helps us is that $G(G(a, x, y), x, y) = a$, it means that every time we apply the function $G$, we can easily rollback the changes. Considering that we have already shown that a sequence of transformations can be reordered so that we apply $G$ only after we've made all operations with the function $F$, let's try to "rollback" the second part of transformations, i. e. for each array, find some canonical form which can be obtained by using the function $G$. Since applying the second operation several times is equal to applying some bijective function to the array, we can treat each array as a partition of the set $\{1, 2, \dots, n\}$ into several subsets. So, if we are not allowed to perform the first operation, the answer to the problem is equal to $\sum \limits_{i=1}^{\min(n, k)} S(n, i)$, where $S(n, i)$ is the number of ways to partition a set of $n$ objects into $i$ non-empty sets (these are known as Stirling numbers of the second kind). There are many ways to calculate Stirling numbers of the second kind, but in this problem, we will have to use some FFT-related approach which allows getting all Stirling numbers for some value of $n$ in time $O(n \log n)$. For example, you can use the following relation: $S(n, k) = \frac{1}{k!} \sum \limits_{i = 0}^{k} (-1)^i {{k}\choose{i}} (k-i)^n$ $S(n, k) = \sum \limits_{i=0}^{k} \frac{(-1)^i \cdot k! \cdot (k-i)^n}{k! \cdot i! 
\cdot (k-i)!}$ $S(n, k) = \sum \limits_{i=0}^{k} \frac{(-1)^i}{i!} \cdot \frac{(k-i)^n}{(k-i)!}$ If we substitute $p_i = \frac{(-1)^i}{i!}$ and $q_j = \frac{j^n}{j!}$, we can see that the sequence of Stirling numbers for some fixed $n$ is just the convolution of sequences $p$ and $q$. For simplicity in the following formulas, let's denote $A_i = \sum \limits_{j=1}^{\min(i, k)} S(i, j)$. We now know that this value can be calculated in $O(i \log i)$. Okay, now back to the original problem. Unfortunately, we didn't take the operation $F$ into account. Let's analyze it. The result of function $F(a, m)$ consists of several blocks of equal elements, and it's easy to see that the lengths of these blocks (except for maybe the last one) should be divisible by $m$. The opposite is also true - if the lengths of all blocks (except maybe for the last one) are divisible by some integer $m$, then the array can be produced as $F(a, m)$ for some array $a$. What does it mean? If the greatest common divisor of the lengths of the blocks (except for the last one) is not $1$, the array that we consider can be obtained by applying the function $F$ to some other array. Otherwise, it cannot be obtained in such a way. Now, inclusion-exclusion principle comes to the rescue. Let's define $B_i$ as the number of arrays that we consider which have the lengths of all their blocks (except maybe for the last one) divisible by $i$. It's easy to see that $B_i = A_{\lceil \frac{n}{i} \rceil}$ (we can compress every $i$ consecutive elements into one). Then, using inclusion exclusion principle, we can see that the answer is $\sum \limits_{i=1}^{n} \mu(i) B_i = \sum \limits_{i=1}^{n} \mu(i) A_{\lceil \frac{n}{i} \rceil}$ where $\mu(i)$ is the Mobius function. Using this formula, we can calculate the answer in $O(n \log^2 n)$. Note 1. This inclusion-exclusion principle handles the arrays according to the GCD of the blocks that they consist of, except for the last one. 
But what if the array consists of only one block? These arrays could be counted wrongly, so we should exclude them - i. e. use $A_i - S(i, 1)$ instead of just $A_i$ and count the arrays consisting of a single repeated element separately (if we need any of them in the answer). Note 2. Depending on the way you implement this, $n = 1$ or $k = 1$ (or both) may be a corner case.
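The code field for this entry is null. As an illustration of the final formula only, here is a small Python brute force under the assumptions spelled out in the notes ($n = 1$ and $k = 1$ treated as corner cases, the one-block arrays excluded via $A_i - S(i,1)$). All function names are my own, and this is not the intended $O(n \log^2 n)$ FFT solution:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    # S(n, k): number of partitions of an n-element set into k non-empty subsets
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def mobius(i):
    # Mobius function by trial-division factorization
    res, d = 1, 2
    while d * d <= i:
        if i % d == 0:
            i //= d
            if i % d == 0:
                return 0            # squared prime factor
            res = -res
        d += 1
    return -res if i > 1 else res

def answer(n, k):
    # corner cases from Note 2
    if n == 1 or k == 1:
        return 1
    def A(m):
        # canonical arrays of length m: set partitions into at most k parts
        return sum(stirling2(m, j) for j in range(1, min(m, k) + 1))
    # Mobius inclusion-exclusion over block-length divisors;
    # A(...) - 1 excludes the single-block array, per Note 1
    return sum(mobius(i) * (A(-(-n // i)) - 1) for i in range(1, n + 1))

print(answer(2, 2), answer(3, 3))
```

For example, for $n=2$, $k=2$ the single array $[1, 2]$ is enough (it generates $[2, 1]$ via $G$ and $[1, 1]$, $[2, 2]$ via $F$ with $k=2$), matching `answer(2, 2)`.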
[ "combinatorics", "fft", "math", "number theory" ]
2,900
null
1646
A
Square Counting
Luis has a sequence of $n+1$ integers $a_1, a_2, \ldots, a_{n+1}$. For each $i = 1, 2, \ldots, n+1$ it is guaranteed that $0\leq a_i < n$, or $a_i=n^2$. He has calculated the sum of all the elements of the sequence, and called this value $s$. Luis has lost his sequence, but he remembers the values of $n$ and $s$. Can you find the number of elements in the sequence that are equal to $n^2$? We can show that the answer is unique under the given constraints.
Let the number of elements in the sequence equal to $n^2$ be $x$ and let the sum of all other numbers be $u$. Then, $s=x\cdot n^2+u$. If an element of the sequence is not equal to $n^2$, then its value is at most $n-1$. There are $n+1$ numbers in the sequence, so $u\leq (n-1)\times (n+1)=n^2-1$. Thus, $\displaystyle \left \lfloor \frac{s}{n^2} \right \rfloor= \left \lfloor \frac{x\cdot n^2+u}{n^2} \right \rfloor=\left\lfloor x+\frac{u}{n^2} \right\rfloor=x$, which is the value we want to find. So, to solve the problem it is enough to compute the value of $\displaystyle \left\lfloor \frac{s}{n^2} \right\rfloor$. Intended complexity: $O(1)$ per test case.
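The floor-division argument can be sanity-checked by brute force over all valid sequences for a small $n$ (this check is my own addition, not part of the official solution):

```python
from itertools import product

def count_squares(n, s):
    # number of elements equal to n^2, recovered from n and s alone
    return s // (n * n)

# Exhaustive check for n = 3: every sequence of n+1 values, each either
# in {0, ..., n-1} or equal to n^2, must be decoded correctly from its sum.
n = 3
values = list(range(n)) + [n * n]
for seq in product(values, repeat=n + 1):
    assert count_squares(n, sum(seq)) == seq.count(n * n)
print("ok")
```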
[ "math" ]
800
#include <bits/stdc++.h>
using namespace std;

int main(){
    int t;
    cin >> t;
    for(int i = 0; i < t; i++){
        long long n, s;
        cin >> n >> s;
        cout << s / (n * n) << "\n";
    }
    return 0;
}
1646
B
Quality vs Quantity
$ \def\myred#1{\textcolor{red}{\underline{\bf{#1}}}} \def\myblue#1{\textcolor{blue}{\overline{\bf{#1}}}} $ $\def\RED{\myred{Red}} \def\BLUE{\myblue{Blue}}$ You are given a sequence of $n$ non-negative integers $a_1, a_2, \ldots, a_n$. Initially, all the elements of the sequence are unpainted. You can paint each number $\RED$ or $\BLUE$ (but not both), or \textbf{leave it unpainted}. For a color $c$, $\text{Count}(c)$ is the number of elements in the sequence painted with that color and $\text{Sum}(c)$ is the sum of the elements in the sequence painted with that color. For example, if the given sequence is $[2, 8, 6, 3, 1]$ and it is painted this way: $[\myblue{2}, 8, \myred{6}, \myblue{3}, 1]$ (where $6$ is painted red, $2$ and $3$ are painted blue, $1$ and $8$ are unpainted) then $\text{Sum}(\RED)=6$, $\text{Sum}(\BLUE)=2+3=5$, $\text{Count}(\RED)=1$, and $\text{Count}(\BLUE)=2$. Determine if it is possible to paint the sequence so that $\text{Sum}(\RED) > \text{Sum}(\BLUE)$ and $\text{Count}(\RED) < \text{Count}(\BLUE)$.
$\def\myred#1{\color{red}{\underline{\bf{#1}}}} \def\myblue#1{\color{blue}{\overline{\bf{#1}}}}$ $\def\RED{\myred{Red}} \def\BLUE{\myblue{Blue}}$ Suppose $\text{Count}(\RED)=k$. If a solution exists, then there is one with $\text{Count}(\BLUE)=k+1$, because if there are more than $k+1$ numbers painted blue, we can remove some of them until we have exactly $k+1$ numbers, and the sum of these numbers will be smaller. As we want $\text{Sum}(\RED) > \text{Sum}(\BLUE)$ to hold, the optimal way to paint the numbers is to paint the $k$ largest numbers red, and the $k+1$ smallest numbers blue. So, to solve the problem it is enough to sort the sequence, iterate over the value of $k$ and for each of them compute the sum of the $k$ largest numbers, the sum of the $k+1$ smallest numbers and compare them. This can be done efficiently by computing the sum of every prefix and suffix of the sorted sequence in linear time. This way, we can make a constant number of operations for each $k$. Intended complexity: $\mathcal{O}(n\:\text{log}\:n)$ It can be proven that (try to prove it!) if some $k$ works, then $k=\displaystyle \left \lfloor \frac{n-1}{2} \right \rfloor$ has to work. This means that there is always an answer using $n$ or $n-1$ elements, depending on parity.
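The official solution below is in C++; as a supplementary sketch, here is the same sort + prefix/suffix-sum scan in a few lines of Python (the function name `can_paint` is my own):

```python
def can_paint(a):
    # Try Count(Red) = k largest vs Count(Blue) = k+1 smallest for every k.
    a = sorted(a)
    n = len(a)
    prefix = [0]
    for v in a:
        prefix.append(prefix[-1] + v)       # prefix[i] = sum of i smallest
    suffix = [0]
    for v in reversed(a):
        suffix.append(suffix[-1] + v)       # suffix[i] = sum of i largest
    # need 2k + 1 <= n painted elements in total
    return any(suffix[k] > prefix[k + 1] for k in range(1, (n - 1) // 2 + 1))

print(can_paint([2, 8, 6, 3, 1]))   # True: e.g. red {8}, blue {1, 2}
```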
[ "brute force", "constructive algorithms", "greedy", "sortings", "two pointers" ]
800
#include <bits/stdc++.h>
using namespace std;

int main(){
    ios::sync_with_stdio(0);cin.tie(0);cout.tie(0);
    int t;
    cin >> t;
    for(int test_number = 0; test_number < t; test_number++){
        int n;
        cin >> n;
        vector <long long> a(n);
        for(int i = 0; i < n; i++){
            cin >> a[i];
        }
        sort(a.begin(), a.end());
        vector <long long> prefix_sums = {0};
        for(int i = 0; i < n; i++){
            prefix_sums.push_back(prefix_sums.back() + a[i]);
        }
        vector <long long> suffix_sums = {0};
        for(int i = n - 1; i >= 0; i--){
            suffix_sums.push_back(suffix_sums.back() + a[i]);
        }
        bool answer = false;
        for(int k = 1; k <= n; k++){
            if(2 * k + 1 <= n){
                long long blue_sum = prefix_sums[k + 1];
                long long red_sum = suffix_sums[k];
                if(blue_sum < red_sum){
                    answer = true;
                }
            }
        }
        if(answer) cout << "YES\n";
        else cout << "NO\n";
    }
    return 0;
}
1646
C
Factorials and Powers of Two
A number is called powerful if it is a power of two or a factorial. In other words, the number $m$ is powerful if there exists a non-negative integer $d$ such that $m=2^d$ or $m=d!$, where $d!=1\cdot 2\cdot \ldots \cdot d$ (in particular, $0! = 1$). For example $1$, $4$, and $6$ are powerful numbers, because $1=1!$, $4=2^2$, and $6=3!$ but $7$, $10$, or $18$ are not. You are given a positive integer $n$. Find the minimum number $k$ such that $n$ can be represented as the sum of $k$ \textbf{distinct} powerful numbers, or say that there is no such $k$.
If the problem asked to represent $n$ as a sum of distinct powers of two only (without the factorials), then there is a unique way to do it, using the binary representation of $n$ and the number of terms will be the number of digits equal to $1$ in this binary representation. Let's denote this number by $\text{ones}(n)$. If we fix the factorials we are going to use in the sum, then the rest of the terms are uniquely determined because of the observation above. Note that $1$ and $2$ will not be considered as factorials in order to avoid repeating terms. So, to solve the problem it is enough to iterate through all possibilities of including or not including each factorial (up to $14!$) and for each of them calculate the number of terms used in the sum. If we used $f$ factorials and their sum is $s$, then the number of terms can be calculated as $f+\text{ones}(n-s)$. The minimum of all these numbers will be the answer. Intended complexity: $\mathcal{O}(2^k)$ where $k$ is the biggest positive integer such that $k!\leq n$ Actually, there is no problem in repeating $1$, $2$, or any other power of two. This is because it can be proven that (try to prove it!) those sums are not optimal.
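The official solution below is in C++ and enumerates factorial subsets with bitmasks; here is a hedged Python sketch of the same enumeration (names `FACTS` and `min_powerful_terms` are my own), using `itertools.combinations` instead of bitmasks for clarity:

```python
from itertools import combinations

# Factorials from 3! up to the limit; 1! and 2! coincide with powers of two,
# so they are not treated as factorials.
FACTS = []
f, d = 6, 4
while f <= 10**12:
    FACTS.append(f)
    f *= d
    d += 1

def min_powerful_terms(n):
    best = bin(n).count("1")            # no factorials used at all
    for r in range(1, len(FACTS) + 1):
        for subset in combinations(FACTS, r):
            s = sum(subset)
            if s <= n:
                # r factorials plus one power of two per set bit of n - s
                best = min(best, r + bin(n - s).count("1"))
    return best

print(min_powerful_terms(7))   # 2, since 7 = 1 + 3!
```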
[ "bitmasks", "brute force", "constructive algorithms", "dp", "math" ]
1,500
#include <bits/stdc++.h>
using namespace std;

const long long MAXAI = 1000000000000ll;

int get_first_bit(long long n){
    return 63 - __builtin_clzll(n);
}

int get_bit_count(long long n){
    return __builtin_popcountll(n);
}

int main(){
    ios::sync_with_stdio(0);cin.tie(0);cout.tie(0);
    int t;
    cin >> t;
    for(int test_number = 0; test_number < t; test_number++){
        long long n;
        cin >> n;
        //Computing factorials <= MAXAI
        vector<long long> fact;
        long long factorial = 6, number = 4;
        while(factorial <= MAXAI){
            fact.push_back(factorial);
            factorial *= number;
            number++;
        }
        //Computing masks of factorials
        vector<pair<long long, long long>> fact_sum(1 << fact.size());
        fact_sum[0] = {0, 0};
        for(int mask = 1; mask < (1 << fact.size()); mask++){
            auto first_bit = get_first_bit(mask);
            fact_sum[mask].first = fact_sum[mask ^ (1 << first_bit)].first + fact[first_bit];
            fact_sum[mask].second = get_bit_count(mask);
        }
        long long res = get_bit_count(n);
        for(auto i : fact_sum){
            if(i.first <= n){
                res = min(res, i.second + get_bit_count(n - i.first));
            }
        }
        cout << res << "\n";
    }
    return 0;
}
1646
D
Weight the Tree
You are given a tree of $n$ vertices numbered from $1$ to $n$. A tree is a connected undirected graph without cycles. For each $i=1,2, \ldots, n$, let $w_i$ be the weight of the $i$-th vertex. A vertex is called good if its weight is equal to the sum of the weights of all its neighbors. Initially, the weights of all nodes are unassigned. Assign positive integer weights to each vertex of the tree, such that the number of good vertices in the tree is maximized. If there are multiple ways to do it, you have to find one that minimizes the sum of weights of all vertices in the tree.
If $n=2$, we can assign $w_1=1$ and $w_2=1$ and there is no way to get a better answer because all vertices are good and the sum of weights cannot be smaller because the weights have to be positive. If $n>2$, two vertices sharing an edge cannot both be good. To prove this, we are going to analyze two cases. If the two vertices have distinct weights, then the one with a smaller weight cannot be good, because the one with a larger weight is its neighbor. Otherwise, if both vertices have the same weight, then neither of them can have another neighbor, as that would increase the sum of the weights of its neighbors by at least $1$. So, the only way this could happen is if $n=2$, but we are assuming that $n>2$. Thus, the set of good vertices must be an independent set. We will see that for each independent set of vertices in the tree, there is an assignment of weights where all the vertices from this set are good. We can assign a weight of $1$ to each vertex that is not in the set, and assign its degree to each vertex in the set. Because all neighbors of a vertex in the set are not in the set, all of them have a weight of $1$ and this vertex is good. Therefore, the maximum number of good vertices is the same as the maximum size of an independent set in this tree. For a fixed independent set of the maximum size, the construction above leads to a configuration with the minimum sum of weights. This is because all vertices must have a weight of at least $1$, and the vertices in the set must have a weight of at least their degree. So, to solve the problem it is enough to root the tree at an arbitrary vertex and solve a tree dp. Let's define $f(x, b)$ as the pair $(g, s)$, where $g$ is the maximum number of good vertices in the subtree of vertex $x$ assuming that $x$ is good (if $b=1$) or that it is not good (if $b=0$), and $s$ is the minimum sum of weights for that value $g$. The values of $f(x,b)$ can be computed with a dp, using the values of $f$ in the children of node $x$. 
If $b=1$, then for each child $c$ you must sum $f(c,0)$. If $b=0$, for each child $c$ you can choose the best answer between $f(c,0)$ and $f(c,1)$. The answer to the problem will be the best between $f(\text{root},0)$ and $f(\text{root},1)$. To construct the assignment of weights, you can do it recursively, considering for each vertex whether it has to be good or not in order to keep the current value of the answer. In case both options (making it good or not) work, you have to choose not to make it good, as you do not know if its father was good or not. Intended complexity: $\mathcal{O}(n)$
[ "constructive algorithms", "dfs and similar", "dp", "implementation", "trees" ]
2,000
#include <bits/stdc++.h>
using namespace std;

const int MAXN = 400005;
vector<int> g[MAXN];
bool vis[MAXN];
int pa[MAXN];

//DFS to compute the parent of each node
//parent of node i is stored at pa[i]
void dfs(int v){
    vis[v] = 1;
    for(auto i : g[v]){
        if(!vis[i]){
            pa[i] = v;
            dfs(i);
        }
    }
}

pair<int, int> dp[MAXN][2];

//Computes the value of function f, using dp
//the second coordinate of the pair is negated (to take maximums)
pair<int, int> f(int x, int y){
    pair<int, int> & res = dp[x][y];
    if(res.first >= 0) return res;
    res = {y, y ? -((int) g[x].size()) : -1};
    for(auto i : g[x]){
        if(i != pa[x]){
            pair<int, int> maxi = f(i, 0);
            if(y == 0) maxi = max(maxi, f(i, 1));
            res.first += maxi.first;
            res.second += maxi.second;
        }
    }
    return res;
}

vector<int> is_good;

//Recursive construction of the answer
//is_good[i] tells whether vertex i is good or not.
void build(pair<int, int> value, int v){
    if(value == f(v, 0)){
        is_good[v] = 0;
        for(auto i : g[v]){
            if(i != pa[v]){
                build(max(f(i, 0), f(i, 1)), i);
            }
        }
    }else{
        is_good[v] = 1;
        for(auto i : g[v]){
            if(i != pa[v]){
                build(f(i, 0), i);
            }
        }
    }
}

int main(){
    ios::sync_with_stdio(0);cin.tie(0);cout.tie(0);
    int n;
    cin >> n;
    for(int i = 0; i < n - 1; i++){
        int u, v;
        cin >> u >> v;
        u--; v--;
        g[u].push_back(v);
        g[v].push_back(u);
    }
    if(n == 2){
        cout<<"2 2\n1 1\n";
        return 0;
    }
    pa[0] = -1;
    dfs(0);
    for(int i = 0; i < n; i++) dp[i][0] = {-1, -1}, dp[i][1] = {-1, -1};
    pair<int, int> res = max(f(0, 0), f(0, 1));
    cout << res.first << " " << -res.second << "\n";
    is_good.resize(n);
    build(res, 0);
    for(int i = 0; i < n; i++){
        if(is_good[i]) cout << g[i].size() << " ";
        else cout << "1 ";
    }
    cout << "\n";
    return 0;
}
1646
E
Power Board
You have a rectangular board of size $n\times m$ ($n$ rows, $m$ columns). The $n$ rows are numbered from $1$ to $n$ from top to bottom, and the $m$ columns are numbered from $1$ to $m$ from left to right. The cell at the intersection of row $i$ and column $j$ contains the number $i^j$ ($i$ raised to the power of $j$). For example, if $n=3$ and $m=3$ the board is as follows: Find the number of distinct integers written on the board.
It is easy to see that the first row only contains the number $1$ and that this number doesn't appear anywhere else on the board. We say that an integer is a perfect power if it can be represented as $x^a$ where $x$ and $a$ are positive integers and $a>1$. For each positive integer $x$ which is not a perfect power, we define $R(x)$ as the set of all numbers which appear in rows $x, x^2, x^3, \ldots$. Claim: If $x\neq y$ are not perfect powers, then $R(x)$ and $R(y)$ have no elements in common. Proof: Suppose there is a common element, then there exist positive integers $a,b$ such that $x^a=y^b$. This is the same as $\displaystyle x=y^{\frac{b}{a}}$. Because $y$ is not a perfect power, $\displaystyle \frac{b}{a}$ has to be a positive integer. If $\displaystyle \frac{b}{a}=1$ then $x=y$, which cannot happen. So then $\displaystyle \frac{b}{a}>1$, which cannot happen as $x$ is not a perfect power. Thus, this common element cannot exist. Based on the observation above, for each $x$ that is not a perfect power we can compute the size of $R(x)$ independently and then sum the results. For a fixed $x$, let $k$ be the number of rows that start with a power of $x$. Then $R(x)$ contains all numbers of the form $x^{i\cdot j}$ where $1\leq i\leq k$ and $1\leq j\leq m$. But the size of this set is the same as the size of the set containing all numbers of the form $i\cdot j$ where $1\leq i\leq k$ and $1\leq j\leq m$. Note that the number of elements in this set does not depend on $x$, it just depends on $k$. Thus, the size of $R(x)$ is uniquely determined by the value of $k$. As $x^k\leq n$, we have that $k\leq \text{log}(n)$. Then, for each $k=1,2,\ldots,\lfloor\text{log}(n)\rfloor$ we just need to compute the number of distinct elements of the form $i\cdot j$ where $1\leq i\leq k$ and $1\leq j\leq m$. 
We can do this using an array of length $m \cdot \text{log}(n)$, and at the $i$-th step (for $i=1,2,\ldots,\lfloor\text{log}(n)\rfloor$) we mark the numbers $i,2i,\ldots, mi$ as visited in the array, and add one to the value we are computing for each number that was not visited before. After the $i$-th step we have computed this value for $k=i$. So, to solve the problem it is enough to compute, for each $x$ that is not a perfect power, how many rows in the matrix start with a power of $x$; using the values we calculated in the last paragraph, we then know how many distinct numbers there are in $R(x)$. Intended complexity: $\mathcal{O}(m\:\text{log}\:n + n)$. This solution uses the fact that $m\le 10^6$; other solutions do not use this and work for much larger values of $m$ (like $m\leq 10^{18}$, but taking the answer modulo some big prime number). Try to solve the problem with this new constraint!
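The official solution below is in C++; as an extra illustration, here is the same algorithm in Python (the function name is my own):

```python
def distinct_on_board(n, m):
    # mul_quan[k] = number of distinct products i*j with 1 <= i <= k, 1 <= j <= m
    max_k = n.bit_length()              # a base x >= 2 has at most log2(n) powers <= n
    seen = [False] * (max_k * m + 1)
    mul_quan = [0] * (max_k + 1)
    distinct = 0
    for i in range(1, max_k + 1):
        for j in range(1, m + 1):
            if not seen[i * j]:
                seen[i * j] = True
                distinct += 1
        mul_quan[i] = distinct
    # Group rows 2..n by their non-perfect-power base x.
    res = 1                              # row 1 contributes only the number 1
    visited = [False] * (n + 1)
    for x in range(2, n + 1):
        if visited[x]:
            continue
        p, k = x, 0
        while p <= n:                    # rows x, x^2, ..., x^k start with powers of x
            visited[p] = True
            k += 1
            p *= x
        res += mul_quan[k]
    return res

print(distinct_on_board(3, 3))   # 7, matching the 3x3 example in the statement
```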
[ "brute force", "dp", "math", "number theory" ]
2,200
#include <bits/stdc++.h>
#define fore(i,a,b) for(ll i=a,ggdem=b;i<ggdem;++i)
using namespace std;
typedef long long ll;

const int MAXM = 1000006;
const int MAXLOGN = 20;
bool visited_mul[MAXM * MAXLOGN];

int main(){
    ll n, m;
    cin >> n >> m;
    vector<ll> mul_quan(MAXLOGN);
    ll current_vis = 0;
    fore(i, 1, MAXLOGN){
        fore(j, 1, m+1){
            if(!visited_mul[i * j]){
                visited_mul[i * j] = 1;
                current_vis++;
            }
        }
        mul_quan[i] = current_vis;
    }
    ll res = 1;
    vector<ll> vis(n + 1);
    fore(i, 2, n+1){
        if(vis[i]) continue;
        ll power = i, power_quan = 0;
        while(power <= n){
            vis[power] = 1;
            power_quan++;
            power *= i;
        }
        res += mul_quan[power_quan];
    }
    cout << res << "\n";
    return 0;
}
1646
F
Playing Around the Table
There are $n$ players, numbered from $1$ to $n$ sitting around a round table. The $(i+1)$-th player sits to the right of the $i$-th player for $1 \le i < n$, and the $1$-st player sits to the right of the $n$-th player. There are $n^2$ cards, each of which has an integer between $1$ and $n$ written on it. For each integer from $1$ to $n$, there are exactly $n$ cards having this number. Initially, all these cards are distributed among all the players, in such a way that each of them has exactly $n$ cards. In one operation, each player chooses one of his cards and passes it to the player to his right. All these actions are performed \textbf{simultaneously}. Player $i$ is called solid if all his cards have the integer $i$ written on them. Their objective is to reach a configuration in which everyone is solid. Find a way to do it using at most $(n^2-n)$ operations. You do \textbf{not} need to minimize the number of operations.
For each player, we say it is diverse if all his cards have different numbers. For each player, we will call a card repeated if this player has two or more cards with that number. Observe that a player is diverse if and only if he has no repeated cards. To construct the answer we are going to divide the process into two parts, each of them using at most $\displaystyle \frac{n\times(n-1)}{2}$ operations. In the first part, we are going to reach a configuration where everyone is diverse. In the second one, we will reach the desired configuration. To do the first part, in each operation we will choose for each player a repeated card to pass. We will repeat this until all players are diverse. If some player is already diverse, he will pass a card with the same number as the card he will receive in that operation. This way, if a player was diverse before the operation, he will still be diverse after the operation. We will prove that the above algorithm ends. If it does not end, at some point some non-diverse player will pass a card he already passed (not a card with the same number, the exact same card). At this point, all other players have at least one card with this number and this player has at least two (because it is non-diverse), but this implies that there are at least $n+1$ cards with this number, which cannot happen. Now, we will prove that this algorithm ends in at most $\displaystyle\frac{n\times(n-1)}{2}$ operations. Consider all cards having the number $c$, and for each of them consider the distance it moved, but when a player is diverse, we will consider his cards as static and that the card he received (or a previous one, if there are multiple diverse players before him) moved more than once in a single operation. For each $x$, such that $1\leq x\leq n-1$, consider all cards having the number $c$ that moved a distance of $x$ or more, and look at the one that reaches its final destination first. 
The first $x$ players that passed this card already had a card with the number $c$ on it, and for each of them, one of these cards will not move anymore (remember that once the player is diverse, his cards are static) and it moved less than $x$, as we are considering the first card. So, there are at most $n-x$ cards that moved a distance of $x$ or more. Thus, the maximum sum of distances of all cards containing the number $c$ is $\displaystyle 1+2+\ldots+(n-1)=\frac{n\times(n-1)}{2}$ and the maximum sum of distances of all cards is $\displaystyle \frac{n^2\times(n-1)}{2}$. In each operation, the sum of all distances is increased by $n$, so there will be at most $\displaystyle \frac{n\times(n-1)}{2}$ operations in this part. To do the second part, for each $j=1,2,\ldots,n-1$ (in this order) the $i$-th player will pass a card with the number $((i-j)\bmod n) + 1$ a total of $j$ times. It is easy to see that after $\displaystyle 1+2+\ldots+(n-1)=\frac{n\times(n-1)}{2}$ operations, all players are solid. Implementing the first part naively will result in an $\mathcal{O}(n^4)$ algorithm, which is enough to solve the problem. However, it is possible to divide the complexity by $n$ by maintaining a stack of repeated cards for each player. Intended complexity: $\mathcal{O}(n^3)$ Additionally, competitive__programmer has made video editorials for problems B, C and D.
[ "constructive algorithms", "greedy", "implementation" ]
2,900
#include <bits/stdc++.h>
#define fore(i, a, b) for(int i = a; i < b; ++i)
using namespace std;
const int MAXN = 1010;
//c[i][j] = number of cards player i has, with the number j
int c[MAXN][MAXN];
//extras[i] is the stack of repeated cards for player i.
vector<int> extras[MAXN];
int main(){
    ios::sync_with_stdio(0);cin.tie(0);cout.tie(0);
    int n;
    cin >> n;
    fore(i, 0, n){
        fore(j, 0, n){
            int x;
            cin >> x;
            x--;
            c[i][x]++;
            if(c[i][x] > 1) extras[i].push_back(x);
        }
    }
    vector<vector<int>> res;
    //First part
    while(true){
        //oper will describe the next operation to perform
        vector<int> oper(n);
        //s will be the first non-diverse player
        int s = -1;
        fore(i, 0, n){
            if(extras[i].size()){
                s = i;
                break;
            }
        }
        if(s == -1) break;
        //last_card will be the card that the previous player passed
        int last_card = -1;
        fore(i, s, s + n){
            int real_i = i % n;
            if(extras[real_i].size()){
                last_card = extras[real_i].back();
                extras[real_i].pop_back();
            }
            oper[real_i] = last_card;
        }
        res.push_back(oper);
        fore(i, 0, n){
            int i_next = (i + 1) % n;
            c[i][oper[i]]--;
            c[i_next][oper[i]]++;
        }
        fore(i, 0, n){
            int i_next = (i + 1) % n;
            if(c[i_next][oper[i]] > 1) extras[i_next].push_back(oper[i]);
        }
    }
    //Second part
    fore(j, 1, n){
        vector<int> oper;
        fore(i, 0, n) oper.push_back((i - j + n) % n);
        fore(i, 0, j) res.push_back(oper);
    }
    cout << res.size() << "\n";
    for(auto i : res){
        for(auto j : i) cout << j + 1 << " ";
        cout << "\n";
    }
    return 0;
}
1647
A
Madoka and Math Dad
Madoka finally found the administrator password for her computer. Her father is a well-known popularizer of mathematics, so the password is the answer to the following problem. Find the maximum decimal number without zeroes and with no equal digits in a row, such that the sum of its digits is $n$. Madoka is too tired of math to solve it herself, so help her to solve this problem!
Since we want to maximize the number, we first maximize its length. Clearly it is best to use only the digits $1$ and $2$, so the answer always looks like $2121\ldots$ or $1212\ldots$. The first option is optimal when $n$ has remainder $0$ or $2$ modulo $3$, otherwise the second option is optimal. Below is an example of a neat implementation.
[ "implementation", "math" ]
800
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n;
    cin >> n;
    int type;
    if (n % 3 == 1) type = 1;
    else type = 2;
    int sum = 0;
    while (sum != n) {
        cout << type;
        sum += type;
        type = 3 - type;
    }
    cout << ' ';
}

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    int t;
    cin >> t;
    while (t--) solve();
}
1647
B
Madoka and the Elegant Gift
Madoka's father just reached $1$ million subscribers on Mathub! So the website decided to send him a personalized award — The Mathhub's Bit Button! The Bit Button is a rectangular table with $n$ rows and $m$ columns with $0$ or $1$ in each cell. After exploring the table Madoka found out that: - A subrectangle $A$ is contained in a subrectangle $B$ if there's no cell contained in $A$ but not contained in $B$. - Two subrectangles intersect if there is a cell contained in both of them. - A subrectangle is called \textbf{black} if there's no cell with value $0$ inside it. - A subrectangle is called \textbf{nice} if it's \textbf{black} and it's not contained in another \textbf{black} subrectangle. - The table is called \textbf{elegant} if there are no two \textbf{nice} intersecting subrectangles. For example, in the first illustration the red subrectangle is nice, but in the second one it's not, because it's contained in the purple subrectangle. Help Madoka to determine whether the table is elegant.
Note that the answer to the problem is "YES" if and only if the black cells form a union of disjoint rectangles. This holds if and only if no $2\times 2$ square of the table contains exactly $3$ black cells: if some $2\times 2$ square has exactly $3$ black cells, then there are two intersecting nice subrectangles, and otherwise every maximal black region is a rectangle. Therefore, you just need to check whether there is a $2\times 2$ square containing exactly $3$ colored cells.
[ "brute force", "constructive algorithms", "graphs", "implementation" ]
1,200
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n, m;
    cin >> n >> m;
    vector<vector<int>> a(n, vector<int> (m));
    for (int i = 0; i < n; ++i) {
        string s;
        cin >> s;
        for (int j = 0; j < m; ++j) {
            a[i][j] = s[j] - '0';
        }
    }
    for (int i = 0; i < n - 1; ++i) {
        for (int j = 0; j < m - 1; ++j) {
            int sum = a[i][j] + a[i][j + 1] + a[i + 1][j] + a[i + 1][j + 1];
            if (sum == 3) {
                cout << "NO ";
                return;
            }
        }
    }
    cout << "YES ";
}

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    int t;
    cin >> t;
    while (t--) solve();
}
1647
C
Madoka and Childish Pranks
Madoka as a child was an extremely capricious girl, and one of her favorite pranks was drawing on her wall. According to Madoka's memories, the wall was a table of $n$ rows and $m$ columns, consisting only of zeroes and ones. The coordinate of the cell in the $i$-th row and the $j$-th column ($1 \le i \le n$, $1 \le j \le m$) is $(i, j)$. One day she saw a picture "Mahou Shoujo Madoka Magica" and decided to draw it on her wall. Initially, the Madoka's table is a table of size $n \times m$ filled with zeroes. Then she applies the following operation any number of times: Madoka selects any rectangular subtable of the table and paints it in a chess coloring (the upper left corner of the subtable always has the color $0$). Note that some cells may be colored several times. In this case, the final color of the cell is equal to the color obtained during the last repainting. \begin{center} {\small White color means $0$, black means $1$. So, for example, the table in the first picture is painted in a chess coloring, and the others are not.} \end{center} For better understanding of the statement, we recommend you to read the explanation of the first test. Help Madoka and find some sequence of no more than $n \cdot m$ operations that allows you to obtain the picture she wants, or determine that this is impossible.
If the upper left cell of the target picture is black, then the picture cannot be obtained, since every operation leaves the upper left corner of the chosen subtable white. Otherwise it is always possible. Let's color the cells in the following order: $(n,m), (n,m - 1), \ldots, (n, 1), (n - 1, m), \ldots, (1, 1)$. Suppose the cell $(i, j)$ must be black. If $j > 1$, we simply paint the $1 \times 2$ rectangle with corners $(i, j - 1)$ and $(i, j)$; otherwise, if $j = 1$, we paint the $2 \times 1$ rectangle with corners $(i - 1, j)$ and $(i, j)$. Such an operation repaints none of the cells processed before, since each of them has one coordinate larger than ours, and the cell itself is painted black. Thus, we are able to paint the table in at most $n\cdot m - 1$ operations.
[ "constructive algorithms", "greedy" ]
1,300
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n, m;
    cin >> n >> m;
    vector<vector<int>> a(n, vector<int> (m));
    for (int i = 0; i < n; ++i) {
        string s;
        cin >> s;
        for (int j = 0; j < m; ++j) a[i][j] = s[j] - '0';
    }
    vector<array<int, 4>> ans;
    if (a[0][0] == 1) {
        cout << -1 << ' ';
        return;
    }
    for (int j = m - 1; j >= 0; --j) {
        for (int i = n - 1; i >= 0; --i) {
            if (a[i][j]) {
                if (i != 0) {
                    ans.push_back({i, j + 1, i + 1, j + 1});
                } else {
                    ans.push_back({i + 1, j, i + 1, j + 1});
                }
            }
        }
    }
    cout << ans.size() << ' ';
    for (auto i : ans) {
        cout << i[0] << ' ' << i[1] << ' ' << i[2] << ' ' << i[3] << ' ';
    }
}

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    int t;
    cin >> t;
    while (t--) solve();
}
1647
D
Madoka and the Best School in Russia
Madoka is going to enroll in "TSUNS PTU". But she stumbled upon a difficult task during the entrance computer science exam: - A number is called good if it is a multiple of $d$. - A number is called beautiful if it is \textbf{good} and it \textbf{cannot} be represented as a product of two good numbers. Notice that a beautiful number must be good. Given a good number $x$, determine whether it can be represented in at least two different ways as a product of several (possibly, one) \textbf{beautiful} numbers. Two ways are different if the sets of numbers used are different. Solve this problem for Madoka and help her to enroll in the best school in Russia!
Let's first solve a more general problem: count the number of factorizations into beautiful numbers. This is easily done by dynamic programming. Let $dp_{n, d}$ be the number of factorizations when the number $n$ remains to be decomposed and the previous factor was $d$. Go through all beautiful numbers $i\geq d$ that divide $n$ and do $dp_{n/i, i} \mathrel{+}= dp_{n, d}$; since the factors are enumerated in increasing order, every factorization is counted exactly once. Let $C$ be the number of divisors of $x$, and $V$ the number of beautiful divisors of $x$. This works in $O(C\cdot V)$ or $O(C\cdot V^2\cdot \log)$ depending on the implementation (the first argument of $dp$ is always a divisor of $x$), and it fits in time since $V$ is at most $700$. Now write $x = d^a\cdot b$, where $b$ is not a multiple of $d$, and consider a few cases: $a = 1$. Then $x$ itself is a beautiful number: any good number that is a multiple of $d^2$ can be decomposed as $d\cdot(x/d)$ with both factors good, while a number divisible by $d$ but not by $d^2$ obviously cannot be decomposed. So in this case there is exactly one way. $b$ is composite. Then, provided $a\neq 1$, we can obviously decompose $x$ in several ways. $d$ is prime (and $b$ is prime). Then there is only one way to decompose $x$: every beautiful factor has the form $d\cdot k$ with $d \nmid k$, so a factorization consists of exactly $a$ factors, and the only freedom is which factor absorbs $b$; since $b$ is prime, all these options coincide up to permutation. $d$ is composite and not a prime power. If $a\leq 2$, this case is no different from the previous one, since we still have to produce at most two beautiful factors and they are forced to equal $d$ (times a part of $b$). Otherwise, write $d = p\cdot q$ with $\gcd(p, q) = 1$ and $p, q > 1$. Then we can take the two factors $p\cdot q^{a - 1}$ and $p^{a - 1}\cdot q$, 
or the $a$ factors $q\cdot p, q\cdot p, \ldots , q\cdot p$; these factorizations have different numbers of factors, so they are different, which means we have several options in this case. $d$ is a prime power, $d = p^k$ with $k > 1$. We claim the only case without a second factorization is $d = p^2$, $x = p^7$. If $a > 2$, look at the factorization $p^{2k - 1}, p^{k + 1}, p^k, \ldots , p^k$ (a power $p^e$ is beautiful exactly when $k \leq e \leq 2k - 1$). If $k > 2$, then even when $b = p$ the factor $p^{k + 2}$ is still beautiful, so this case behaves like the composite-$d$ case above. Otherwise, if $k = 2$: when $a = 3$ and $b = p$ nothing can be adjusted, and this is exactly $x = p^7$; in all other cases we can choose three factors $p^k$, decompose them into $2$ or $3$ factors as written above, and attach $b$ to some other factor (since then $a > 3$, at least one more factor remains). Therefore, the only special case when $d$ is a prime power is $d = p^2$, $x = p^7$.
[ "constructive algorithms", "dp", "math", "number theory" ]
1,900
#include <bits/stdc++.h>
using namespace std;

// Returns the smallest divisor of x greater than 1, or -1 if x is prime (or 1),
// so `prime(x) != -1` means "x is composite".
int prime(int x) {
    for (int i = 2; i * i <= x; ++i) {
        if (x % i == 0) return i;
    }
    return -1;
}

void solve() {
    int x, d;
    cin >> x >> d;
    int cnt = 0;
    while (x % d == 0) {
        ++cnt;
        x /= d;
    }
    if (cnt == 1) {
        cout << "NO ";
        return;
    }
    if (prime(x) != -1) {
        cout << "YES ";
        return;
    }
    if (prime(d) != -1 && d == prime(d) * prime(d)) {
        if (x == prime(d) && cnt == 3) {
            cout << "NO ";
            return;
        }
    }
    if (cnt > 2 && prime(d) != -1) {
        cout << "YES ";
        return;
    }
    cout << "NO ";
}

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    int t;
    cin >> t;
    while (t--) solve();
}
1647
E
Madoka and the Sixth-graders
After the most stunning success with the fifth-graders, Madoka has been trusted with teaching the sixth-graders. There's $n$ single-place desks in her classroom. At the very beginning Madoka decided that the student number $b_i$ ($1 \le b_i \le n$) will sit at the desk number $i$. Also there's an infinite line of students with numbers $n + 1, n + 2, n + 3, \ldots$ waiting at the door with the hope of being able to learn something from the Madoka herself. Pay attention that each student has his \textbf{unique} number. After each lesson, the following happens in sequence. - The student sitting at the desk $i$ moves to the desk $p_i$. All students move simultaneously. - If there is more than one student at a desk, the student with the lowest number keeps the place, and the others are removed from the class \textbf{forever}. - For all empty desks in ascending order, the student from the lowest number from the outside line occupies the desk. Note that in the end there is exactly one student at each desk again. It is guaranteed that the numbers $p$ are such that at least one student is removed after each lesson. Check out the explanation to the first example for a better understanding. After several (possibly, zero) lessons the desk $i$ is occupied by student $a_i$. Given the values $a_1, a_2, \ldots, a_n$ and $p_1, p_2, \ldots, p_n$, find the lexicographically smallest suitable initial seating permutation $b_1, b_2, \ldots, b_n$. The permutation is an array of $n$ different integers from $1$ up to $n$ in any order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not ($2$ occurs twice). $[1,3,4]$ is not a permutation either ($n=3$ but there's $4$ in the array). For two different permutations $a$ and $b$ of the same length, $a$ is lexicographically less than $b$ if in the first position where $a$ and $b$ differ, the permutation $a$ has a smaller element than the corresponding element in $b$.
After each lesson, the maximum student number that has appeared increases exactly by the number of desks that nobody moves to. Therefore it is easy to compute how many lessons have passed since the very beginning; let this number be $k$. Now let's imagine that students are never expelled, and at any moment we are simply interested in the student with the minimum number at each desk; obviously, the answer does not change. Let $to_i$ be the desk at which a student who originally sat at desk $i$ ends up after $k$ transfers. Computing $to$ is a standard problem that can be solved with binary lifting, or with a not-so-pleasant dfs with explicit cycle extraction (we do not recommend the latter). Define the set $V_i$ as the set of all indices $j$ with $to_j = i$. If the starting placement is a permutation $b$, then for every desk $i$ with non-empty $V_i$ the final value satisfies $a_i = \min_{j \in V_i} b_j$; and if nobody ever moves to desk $i$, the student sitting at it after $k$ lessons does not depend on the initial seating at all. After that, it is not difficult to construct the lexicographically smallest starting seating. Let $s$ be the set of desks at which we have not yet seated a student. We iterate over the student numbers $i$ from $1$ to $n$ in ascending order and decide where student $i$ sits. If there is a desk $t$ with $a_t = i$ (such a desk has non-empty $V_t$), then student $i$ must realize $\min_{j \in V_t} b_j = i$: we seat him at the smallest-numbered desk of $V_t$, and add all the other desks of $V_t$ to $s$, since any larger numbers may be seated there later. Otherwise, we simply seat student $i$ at the smallest-numbered desk from $s$ and remove that desk from $s$.
[ "data structures", "dfs and similar", "greedy" ]
2,500
#include <bits/stdc++.h>
using namespace std;
const int K = 30;

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    int n;
    cin >> n;
    vector<int> cnt(n);
    vector<vector<int>> up(n, vector<int> (K));
    for (int i = 0; i < n; ++i) {
        int a;
        cin >> a;
        --a;
        ++cnt[a];
        up[i][0] = a;
    }
    for (int k = 1; k < K; ++k) {
        for (int i = 0; i < n; ++i) {
            up[i][k] = up[up[i][k - 1]][k - 1];
        }
    }
    vector<int> a(n);
    int cnt_bad = 0;
    for (int i = 0; i < n; ++i) {
        cin >> a[i];
        if (!cnt[i]) {
            ++cnt_bad;
        }
    }
    int k = (*max_element(a.begin(), a.end()) - n) / cnt_bad;
    vector<vector<int>> add(n);
    for (int i = 0; i < n; ++i) {
        int v = i, level = k;
        for (int j = K - 1; j >= 0; --j) {
            if (level >= (1 << j)) {
                level -= (1 << j);
                v = up[v][j];
            }
        }
        add[a[v] - 1].push_back(i);
    }
    set<int> now;
    vector<int> ans(n);
    for (int i = 0; i < n; ++i) {
        if (add[i].size()) {
            int res = *min_element(add[i].begin(), add[i].end());
            for (int j : add[i]) {
                if (j != res) {
                    now.emplace(j);
                }
            }
            ans[res] = i + 1;
        } else {
            ans[*now.begin()] = i + 1;
            now.erase(*now.begin());
        }
    }
    for (int i : ans) cout << i << ' ';
    cout << ' ';
}
1647
F
Madoka and Laziness
Madoka has become too lazy to write a legend, so let's go straight to the formal description of the problem. An array of integers $a_1, a_2, \ldots, a_n$ is called a hill if it is not empty and there is an index $i$ in it, for which the following is true: $a_1 < a_2 < \ldots < a_i > a_{i + 1} > a_{i + 2} > \ldots > a_n$. A sequence $x$ is a subsequence of a sequence $y$ if $x$ can be obtained from $y$ by deletion of several (possibly, zero or all) elements keeping the order of the other elements. For example, for an array $[69, 1000, 228, -7]$ the array $[1000, -7]$ is a subsequence, while $[1]$ and $[-7, 1000]$ are not. Splitting an array into two subsequences is called good if each element belongs to exactly one subsequence, and also each of these subsequences is a hill. You are given an array of \textbf{distinct} positive integers $a_1, a_2, \ldots a_n$. It is required to find the number of different pairs of maxima of the first and second subsequences among all good splits. Two pairs that only differ in the order of elements are considered same.
Call the peak of a hill its maximum element. Note that one of the two subsequences must contain the maximum of the whole array as its peak; let the position of the maximum be $ind$, and call this subsequence the first one. For convenience, assume the peak of the second subsequence lies to the right of $ind$ (to handle the other case, reverse the array and run the solution one more time). While scanning the array we then pass through $3$ states: both subsequences increase; the first decreases while the second still increases; both decrease. Let's handle the first state first. Let $dp1_i$ be the minimum possible last (i.e. maximum) element of the subsequence that does not contain element $i$. It is easy to compute: if $dp1_{i - 1} < a_{i}$, then $dp1_{i} = \min(dp1_{i}, a_{i - 1})$, since element $i$ can continue the subsequence that does not contain element $i - 1$. Similarly, if $a_{i - 1} < a_{i}$, then $dp1_{i} = \min(dp1_{i}, dp1_{i - 1})$. The third state is handled analogously, but computed from the end (this array will be $dp3$). Now let's deal with the second state. Let $dp2_{i, 0}$ be the minimum possible last element of the (still increasing) second subsequence given that element $i$ belongs to the (already decreasing) first subsequence, and $dp2_{i, 1}$ the maximum possible last element of the first subsequence given that element $i$ belongs to the second. The transitions consider whether $i - 1$ and $i$ lie in the same or different subsequences, and we compute the values for $i \geq ind$, since before this point both subsequences increase and that case is already handled. The base is $dp2_{ind, 0} = dp1_{ind}$, by the definition of $dp2_{ind, 0}$. The transition formulas are then not difficult to obtain: If $a_{i - 1} > a_i$, then $dp2_{i, 0} = \min(dp2_{i, 0}, dp2_{i - 1, 0})$. If $dp2_{i - 1, 1} > a_i$, then $dp2_{i, 0} = \min(dp2_{i, 0}, a_{i - 1})$. If $a_{i - 1} < a_{i}$, then $dp2_{i, 1} = \max(dp2_{i, 1}, dp2_{i - 1, 1})$. 
If $dp2_{i - 1, 0} < a_i$, then $dp2_{i, 1} = \max(dp2_{i, 1}, a_{i - 1})$. Finally, element $i$ can be the peak of the second subsequence whenever $i > ind$ and $dp2_{i, 1} > dp3_{i}$, that is, whenever we can switch from the second state to the third at position $i$.
[ "dp", "greedy" ]
3,100
#include <bits/stdc++.h>
using namespace std;
const int inf = 1e9 + 228;

int solve (int n, vector<int> a) {
    int ind = max_element(a.begin(), a.end()) - a.begin();
    vector<int> dp1(n, inf);
    dp1[0] = -1;
    for (int i = 1; i <= ind; ++i) {
        if (a[i] > dp1[i - 1]) dp1[i] = min(dp1[i], a[i - 1]);
        if (a[i] > a[i - 1]) dp1[i] = min(dp1[i], dp1[i - 1]);
    }
    vector<int> dp2(n, inf);
    dp2[n - 1] = -1;
    for (int i = n - 2; i >= ind; --i) {
        if (a[i] > dp2[i + 1]) dp2[i] = min(dp2[i], a[i + 1]);
        if (a[i] > a[i + 1]) dp2[i] = min(dp2[i], dp2[i + 1]);
    }
    vector<array<int, 2>> dp3(n, {inf, -inf});
    dp3[ind][0] = dp1[ind];
    int ans = 0;
    for (int i = ind + 1; i < n; ++i) {
        if (a[i - 1] > a[i]) dp3[i][0] = min(dp3[i][0], dp3[i - 1][0]);
        if (dp3[i - 1][1] > a[i]) dp3[i][0] = min(dp3[i][0], a[i - 1]);
        if (a[i - 1] < a[i]) dp3[i][1] = max(dp3[i][1], dp3[i - 1][1]);
        if (dp3[i - 1][0] < a[i]) dp3[i][1] = max(dp3[i][1], a[i - 1]);
        if (dp3[i][1] > dp2[i]) ++ans;
    }
    return ans;
}

signed main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    int n;
    cin >> n;
    vector<int> a(n);
    for (int i = 0; i < n; ++i) cin >> a[i];
    int ans = solve(n, a);
    reverse(a.begin(), a.end());
    cout << ans + solve(n, a) << ' ';
}
1648
A
Weird Sum
Egor has a table of size $n \times m$, with lines numbered from $1$ to $n$ and columns numbered from $1$ to $m$. Each cell has a color that can be presented as an integer from $1$ to $10^5$. Let us denote the cell that lies in the intersection of the $r$-th row and the $c$-th column as $(r, c)$. We define the \underline{manhattan distance} between two cells $(r_1, c_1)$ and $(r_2, c_2)$ as the length of a shortest path between them where each consecutive cells in the path must have a common side. The path can go through cells of any color. For example, in the table $3 \times 4$ the manhattan distance between $(1, 2)$ and $(3, 3)$ is $3$, one of the shortest paths is the following: $(1, 2) \to (2, 2) \to (2, 3) \to (3, 3)$. Egor decided to calculate the sum of manhattan distances between each pair of cells of the same color. Help him to calculate this sum.
We note that the manhattan distance between cells $(r_1, c_1)$ and $(r_2, c_2)$ is equal to $|r_1-r_2|+|c_1-c_2|$. For each color we will compose a list of all cells $(r_0, c_0), \ldots, (r_{k-1}, c_{k-1})$ of this color, compute the target sum for this color, and sum up the answers for all colors. The sum is equal: $\sum_{i=0}^{k-1} \sum_{j=i+1}^{k-1} |r_i - r_j| + |c_i - c_j| = \left(\sum_{i=0}^{k-1} \sum_{j=i+1}^{k-1} |r_i - r_j|\right) + \left(\sum_{i=0}^{k-1} \sum_{j=i+1}^{k-1} |c_i - c_j|\right)$ We will compute the first sum; the second sum is analogous. Let an array $s$ be equal to $r$, but sorted in increasing order. Then: $\sum_{i=0}^{k-1} \sum_{j=i+1}^{k-1} |r_i - r_j| = \sum_{i=0}^{k-1} \sum_{j=i+1}^{k-1} s_j - s_i = \left(\sum_{i=0}^{k-1} \sum_{j=i+1}^{k-1} s_j\right) + \left(\sum_{i=0}^{k-1} \sum_{j=i+1}^{k-1} -s_i\right)$ The value $s_j$ occurs in the first double sum exactly $j$ times, the value $-s_i$ occurs in the second sum exactly $k-1-i$ times. Then, the value is equal to: $\sum_{j=0}^{k-1} js_j + \sum_{i=0}^{k-1} -(k-1-i)s_i = \sum_{i=0}^{k-1} (2i+1-k)s_i$ The last sum can be computed in $O(k)$; sorting the array takes $O(k \log k)$, so the overall complexity is $O(nm\log(nm))$. Alternatively, by traversing the table in row-major order (and in column-major order for the columns), the coordinate lists are produced already sorted, which yields an $O(nm)$ solution.
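The identity above can be sanity-checked with a short routine. The following is a minimal sketch (the function name `sum_pairwise_distances` is ours, not from the intended solution): it sorts the coordinates and evaluates $\sum_{i} (2i+1-k)s_i$.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sum of |v_i - v_j| over all pairs i < j, via the sorted-array identity
// sum_i (2i + 1 - k) * s_i, where s is v sorted and k = v.size().
long long sum_pairwise_distances(std::vector<long long> v) {
    std::sort(v.begin(), v.end());
    long long k = (long long)v.size(), total = 0;
    for (long long i = 0; i < k; ++i)
        total += (2 * i + 1 - k) * v[i];
    return total;
}
```

For one color, calling this on the row coordinates and on the column coordinates of its cells and summing the two results gives that color's contribution.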
[ "combinatorics", "data structures", "geometry", "math", "matrices", "sortings" ]
1,400
null
1648
B
Integral Array
You are given an array $a$ of $n$ positive integers numbered from $1$ to $n$. Let's call an array integral if for any two, not necessarily different, numbers $x$ and $y$ from this array, $x \ge y$, the number $\left \lfloor \frac{x}{y} \right \rfloor$ ($x$ divided by $y$ with rounding down) is also in this array. You are guaranteed that all numbers in $a$ do not exceed $c$. Your task is to check whether this array is integral.
Consider $x, y \in a$ and $r \notin a$. If $y \cdot r \le x < y \cdot (r + 1)$, then $\left \lfloor \frac{x}{y} \right \rfloor = r$, but $r$ is not in $a$, so the answer is "No". Suppose $y$ and $r$ are fixed. We can check in $O(1)$ whether there exists such an $x \in a$ in the mentioned segment, by building the array $cnt_x$ (the number of occurrences of $x$ in $a$) and its prefix sums. Now we only need to run this check for each $r$ and $y$: iterate through all $r \notin a$ and $y \in a$ in increasing order, and as soon as $r \cdot y > c$ there is definitely no such $x$, so we can move on to the next $r$. Thanks to this cutoff, the total number of checked pairs is bounded by the harmonic sum $\sum_{y \le c} c/y$, so the whole solution works in $O(C \log C)$.
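As a minimal sketch of this check (the helper name `is_integral` is ours), for every $y$ present and every quotient $r$ we test whether some $x \in [y\cdot r,\ \min(c,\ y\cdot(r+1)-1)]$ occurs in $a$ while $r$ does not:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sketch: returns true iff `a` (all values in [1, c]) is integral.
bool is_integral(const std::vector<int>& a, int c) {
    std::vector<long long> cnt(c + 1, 0);
    for (int v : a) cnt[v]++;
    if (cnt[1] == 0) return false;            // floor(x / x) = 1 must be in a
    std::vector<long long> pre(c + 1, 0);     // pre[v] = number of elements <= v
    for (int v = 1; v <= c; ++v) pre[v] = pre[v - 1] + cnt[v];
    for (int y = 1; y <= c; ++y) {
        if (!cnt[y]) continue;
        for (long long r = 1; r * y <= c; ++r) {
            long long lo = r * y, hi = std::min<long long>(c, lo + y - 1);
            // Is there an x in a with floor(x / y) == r, while r is absent?
            if (pre[hi] - pre[lo - 1] > 0 && cnt[r] == 0) return false;
        }
    }
    return true;
}
```

The double loop runs $\sum_{y} c/y$ iterations in total, which is the $O(C \log C)$ bound from above.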
[ "brute force", "constructive algorithms", "data structures", "math" ]
1,800
null
1648
C
Tyler and Strings
While looking at the kitchen fridge, the little boy Tyler noticed magnets with symbols, that can be aligned into a string $s$. Tyler likes strings, and especially those that are lexicographically smaller than another string, $t$. After playing with magnets on the fridge, he is wondering, how many distinct strings can be composed out of letters of string $s$ by rearranging them, so that the resulting string is lexicographically smaller than the string $t$? Tyler is too young, so he can't answer this question. The alphabet Tyler uses is very large, so for your convenience he has already replaced the same letters in $s$ and $t$ to the same integers, keeping that different letters have been replaced to different integers. We call a string $x$ lexicographically smaller than a string $y$ if one of the followings conditions is fulfilled: - There exists such position of symbol $m$ that is presented in both strings, so that before $m$-th symbol the strings are equal, and the $m$-th symbol of string $x$ is smaller than $m$-th symbol of string $y$. - String $x$ is the prefix of string $y$ and $x \neq y$. Because the answer can be too large, print it modulo $998\,244\,353$.
Let $K$ be the size of the alphabet, that is, the maximum letter that occurs. First, let's count how many distinct strings can be composed if we have $c_1$ letters of type $1$, $c_2$ letters of type $2$, $\ldots$, $c_K$ letters of type $K$. This is the school formula: $P(c_1, c_2, \ldots, c_K) = \frac{(c_1 + c_2 + \ldots + c_K)!}{c_1! \cdot c_2! \cdot \ldots \cdot c_K!}$ To evaluate it quickly for different $c_1, c_2, \ldots, c_K$, precompute all factorials and their modular inverses modulo $C = 998244353$ in O($n \cdot \log{C}$). For a string $x$ to be smaller than $t$, they must share some matching prefix, after which $x$ has a strictly smaller letter (or $x$ is a proper prefix of $t$). Let's iterate over the length of this matching prefix from $0$ to $\min(n, m)$. If strings $x$ and $t$ have the same first $i$ characters, then we know exactly which letters remain; to maintain this, keep an array $cnt$ whose $i$-th position holds the number of remaining letters of type $i$. Let's iterate over the letter that appears immediately after the matching prefix. For the resulting string to be smaller than $t$, this letter must be strictly smaller than the corresponding letter of $t$, and all remaining letters may then be arranged in any order, so the number of such strings $x$ is given by the formula above. The only string $x$ lexicographically smaller than $t$ that we do not count this way is a proper prefix of $t$; we separately check whether such a string can be obtained and, if so, add 1 to the answer. 
Since at each of at most $\min(n, m)$ steps we try at most $K$ options for the next letter, and each option is evaluated in O($K$), we get the complexity O($\min (n, m) \cdot K^2 + n \cdot \log{C}$). To speed this up, let's create an array $add$, whose $i$-th cell stores the number of ways to arrange the remaining letters if letter $i$ is put in the current position. In fact $add_i = \frac{(cnt_1 + cnt_2 + \ldots + cnt_K - 1)!}{cnt_1! \cdot cnt_2! \cdot \ldots \cdot (cnt_i - 1)! \cdot \ldots \cdot cnt_K!}$ If we can maintain this array, then at each step we only need the sum of its elements over some prefix. Let's see how it changes when the next letter of $t$ is $i$, i.e. $cnt_i$ decreases by 1. Every cell $j \neq i$ is replaced by $add_j \cdot \frac{cnt_i}{cnt_1 + cnt_2 + \ldots + cnt_K - 1}$; to apply this modification to the entire array, keep a separate variable $modify$ by which each stored value must be multiplied to obtain the true value. Cell $i$ is replaced by $add_i \cdot \frac{cnt_i - 1}{cnt_1 + cnt_2 + \ldots + cnt_K - 1}$, so, taking into account the modifier applied to all cells, it suffices to multiply $add_i$ by $\frac{cnt_i - 1}{cnt_i}$. With this optimization, each step spends only O($K$) on the prefix sum and O($\log(C)$) on computing the multipliers, giving O($\min(n, m) \cdot (K + \log(C))$). To get rid of $K$, note that the only operations on the $add$ array are a prefix-sum query and a single-point update, which a Fenwick tree or a segment tree performs in O($\log(K)$). Applying one of them, we get the final complexity O($\min(n, m) \cdot (\log(K) + \log(C))$). 
In fact, $\log(C)$ can also be eliminated by precomputing the modular inverses of all numbers from $1$ to $n$ faster than in O($n \cdot \log(C)$), but this was not required in this task.
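The multinomial coefficient $P(c_1, \ldots, c_K)$ from the first step can be sketched as follows (the helper names `mod_pow` and `multinomial` are ours; factorials are built on the fly here, while a real solution would precompute them once):

```cpp
#include <cassert>
#include <vector>

const long long MOD = 998244353;

// Binary exponentiation of b^e modulo MOD.
long long mod_pow(long long b, long long e) {
    long long r = 1;
    b %= MOD;
    for (; e > 0; e >>= 1) {
        if (e & 1) r = r * b % MOD;
        b = b * b % MOD;
    }
    return r;
}

// P(c_1, ..., c_K) = (c_1 + ... + c_K)! / (c_1! * ... * c_K!)  (mod MOD),
// dividing via Fermat inverses since MOD is prime.
long long multinomial(const std::vector<long long>& c) {
    long long total = 0;
    for (long long x : c) total += x;
    std::vector<long long> fact(total + 1, 1);
    for (long long i = 1; i <= total; ++i) fact[i] = fact[i - 1] * i % MOD;
    long long res = fact[total];
    for (long long x : c) res = res * mod_pow(fact[x], MOD - 2) % MOD;
    return res;
}
```

For instance, with counts $(2, 1)$ there are $3!/2! = 3$ distinct arrangements.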
[ "combinatorics", "data structures", "implementation" ]
1,900
null
1648
D
Serious Business
Dima is taking part in a show organized by his friend Peter. In this show Dima is required to cross a $3 \times n$ rectangular field. Rows are numbered from $1$ to $3$ and columns are numbered from $1$ to $n$. The cell in the intersection of the $i$-th row and the $j$-th column of the field contains an integer $a_{i,j}$. Initially Dima's score equals zero, and whenever Dima reaches a cell in the row $i$ and the column $j$, his score changes by $a_{i,j}$. Note that the score can become negative. Initially all cells in the first and the third row are marked as available, and all cells in the second row are marked as unavailable. However, Peter offered Dima some help: there are $q$ special offers in the show, the $i$-th special offer allows Dima to mark cells in the second row between $l_i$ and $r_i$ as available, though Dima's score reduces by $k_i$ whenever he accepts a special offer. Dima is allowed to use as many special offers as he wants, and might mark the same cell as available multiple times. Dima starts his journey in the cell $(1, 1)$ and would like to reach the cell $(3, n)$. He can move either down to the next row or right to the next column (meaning he could increase the current row or column by 1), thus making $n + 1$ moves in total, out of which exactly $n - 1$ would be horizontal and $2$ would be vertical. Peter promised Dima to pay him based on his final score, so the sum of all numbers of all visited cells minus the cost of all special offers used. Please help Dima to maximize his final score.
Let's denote $pref[i][j] := \sum_{k = 0}^{j - 1} a[i][k]$. Then define $s$ and $f$ as follows: $s[i] = pref[0][i + 1] - pref[1][i]$ $f[i] = pref[1][i + 1] - pref[2][i] + pref[2][n]$ Now we can transform the problem to the following: compute $\max\limits_{0\leq i\leq j < n} s[i] + f[j] - cost(i, j)$, where $cost(i, j)$ is the minimal cost of unlocking the segment $[i, j]$ (here $s[i] + f[j]$ is exactly the score of the path that enters the second row at column $i$ and leaves it at column $j$). Let's define $dp[i]$ as the maximum profit of going from $(1, 1)$ to $(2, i)$ given that the rightmost segment we have used ends exactly in $i$ (so it is $s[j]$ for some $j$ minus the cost of covering the segment $[j, i]$, when we know that there is a segment ending at $i$). The calculation of $dp$ is as follows: for every $i$, look through each segment that ends at $i$, and relax $dp[i]$ with $\max\limits_{l - 1\leq j < i} dp[j] - k$. This maximum can be maintained with a segment tree. Now consider the optimal set of segments and fix the rightmost one. We can brute-force the rightmost segment of the answer and relax the overall answer with $\max\limits_{l\leq i\leq j\leq r} dp[i] + f[j] - k$. There is also the case when taking only one segment is optimal; then we should relax the answer with $\max\limits_{l\leq i\leq j\leq r} s[i] + f[j] - k$. All of this can again be computed with a segment tree. Overall complexity is $O(q \log n)$.
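The meaning of $s$ and $f$ can be verified directly: ignoring unlock costs, $s[i] + f[j]$ is the total of the cells visited by the path that enters row $2$ at column $i$ and leaves it at column $j$. A small sketch with hypothetical helper names (`make_sf`, `path_score`), not part of the intended solution:

```cpp
#include <cassert>
#include <vector>

using Grid = std::vector<std::vector<long long>>;  // 3 rows, n columns

// Build s and f from the prefix sums, exactly as in the reduction above.
void make_sf(const Grid& a, std::vector<long long>& s, std::vector<long long>& f) {
    int n = (int)a[0].size();
    std::vector<std::vector<long long>> pref(3, std::vector<long long>(n + 1, 0));
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < n; ++j) pref[i][j + 1] = pref[i][j] + a[i][j];
    s.assign(n, 0);
    f.assign(n, 0);
    for (int i = 0; i < n; ++i) {
        s[i] = pref[0][i + 1] - pref[1][i];
        f[i] = pref[1][i + 1] - pref[2][i] + pref[2][n];
    }
}

// Score of the path that goes along row 1 to column i, down, along row 2 to
// column j, down, then along row 3 to the last column (0-indexed columns).
long long path_score(const Grid& a, int i, int j) {
    int n = (int)a[0].size();
    long long res = 0;
    for (int k = 0; k <= i; ++k) res += a[0][k];
    for (int k = i; k <= j; ++k) res += a[1][k];
    for (int k = j; k < n; ++k) res += a[2][k];
    return res;
}
```

Checking `s[i] + f[j] == path_score(a, i, j)` for all $0 \le i \le j < n$ on a small grid confirms the reduction.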
[ "data structures", "divide and conquer", "dp", "implementation", "shortest paths" ]
2,800
null
1648
E
Air Reform
Berland is a large country with developed airlines. In total, there are $n$ cities in the country that are historically served by the Berlaflot airline. The airline operates bi-directional flights between $m$ pairs of cities, $i$-th of them connects cities with numbers $a_i$ and $b_i$ and has a price $c_i$ for a flight in both directions. It is known that Berlaflot flights can be used to get from any city to any other (possibly with transfers), and the cost of any route that consists of several consequent flights is equal to the cost of the most expensive of them. More formally, the cost of the route from a city $t_1$ to a city $t_k$ with $(k-2)$ transfers using cities $t_2,\ t_3,\ t_4,\ \ldots,\ t_{k - 1}$ is equal to the maximum cost of flights from $t_1$ to $t_2$, from $t_2$ to $t_3$, from $t_3$ to $t_4$ and so on until the flight from $t_{k - 1}$ to $t_k$. Of course, all these flights must be operated by Berlaflot. A new airline, S8 Airlines, has recently started operating in Berland. This airline provides bi-directional flights between all pairs of cities that are not connected by Berlaflot direct flights. Thus, between each pair of cities there is a flight of either Berlaflot or S8 Airlines. The cost of S8 Airlines flights is calculated as follows: for each pair of cities $x$ and $y$ that is connected by a S8 Airlines flight, the cost of this flight is equal to the minimum cost of the route between the cities $x$ and $y$ at Berlaflot according to the pricing described earlier. It is known that with the help of S8 Airlines flights you can get from any city to any other with possible transfers, and, similarly to Berlaflot, the cost of a route between any two cities that consists of several S8 Airlines flights is equal to the cost of the most expensive flight. Due to the increased competition with S8 Airlines, Berlaflot decided to introduce an air reform and change the costs of its flights. 
Namely, for the $i$-th of its flight between the cities $a_i$ and $b_i$, Berlaflot wants to make the cost of this flight equal to the minimum cost of the route between the cities $a_i$ and $b_i$ at S8 Airlines. Help Berlaflot managers calculate new flight costs.
The formal statement of this problem: we have a weighted graph and build its complement, where the weight of the edge between $A$ and $B$ equals the minimal possible maximum edge weight over all paths from $A$ to $B$ in the initial graph. The edge weights of the initial graph are then recalculated the same way in the complement, and we have to output them. We can notice that a path minimizing the maximum edge weight always goes through the minimum spanning tree. That means that in order to get the final edge weights we need to build the minimum spanning tree of the graph complement, and to get the final weight of each initial edge we take the maximum on the corresponding tree path, which we can do in $O(m \log n)$ time with binary lifting. In order to build the minimum spanning tree of the complement, we do something like Kruskal's algorithm. We sort the edges of the initial graph by weight and maintain the sets of vertices connected by the edges of weight less than the last added one. In the same way we maintain the sets of vertices connected in the graph complement by edges of weight less than the last added one; these complement components form subsets of the initial components. When a new edge is added, some connected components of the initial graph may be merged, which means some complement components may be merged too. We can only merge complement components that are subsets of the two different initial components being merged. Two complement components get merged only if there exists an edge between them in the complement graph, in other words, if some pair of vertices, one from each component, is not connected by an edge in the initial graph. So when two initial components $S$ and $T$ become merged, we iterate over all pairs of complement components such that the first is a subset of $S$ and the second is a subset of $T$. For each such pair we check, for every pair of their vertices, whether there is an edge between them in the initial graph; only if all such edges exist can we not merge the two complement components. Each such unsuccessful attempt to merge two complement components takes as many iterations as there are edges between them in the initial graph, so the total number of iterations spent on unsuccessful merges is $O(m)$. Every successful merge joins two complement components and adds a new edge to the complement minimum spanning tree, so the total number of successful merges is $O(n)$. The total time is $O(m \log m)$ for sorting the edges and $O(m)$ for building the complement MST. After that we can find the new weights of the initial edges in $O(m \log n)$ time.
[ "data structures", "dfs and similar", "divide and conquer", "dsu", "graphs", "implementation", "trees" ]
3,200
null
1648
F
Two Avenues
In order to make the capital of Berland a more attractive place for tourists, the great king came up with the following plan: choose two streets of the city and call them avenues. Certainly, these avenues will be proclaimed extremely important historical places, which should attract tourists from all over the world. The capital of Berland can be represented as a graph, the vertices of which are crossroads, and the edges are streets connecting two crossroads. In total, there are $n$ vertices and $m$ edges in the graph, you can move in both directions along any street, you can get from any crossroad to any other by moving only along the streets, each street connects two different crossroads, and no two streets connect the same pair of crossroads. In order to reduce the flow of ordinary citizens moving along the great avenues, it was decided to introduce a toll on each avenue in both directions. Now you need to pay $1$ tugrik for one passage along the avenue. You don't have to pay for the rest of the streets. Analysts have collected a sample of $k$ citizens, $i$-th of them needs to go to work from the crossroad $a_i$ to the crossroad $b_i$. After two avenues are chosen, each citizen will go to work along the path with minimal cost. In order to earn as much money as possible, it was decided to choose two streets as two avenues, so that the total number of tugriks paid by these $k$ citizens is maximized. Help the king: according to the given scheme of the city and a sample of citizens, find out which two streets should be made avenues, and how many tugriks the citizens will pay according to this choice.
Let's consider the two edges $e_1$, $e_2$ from the answer. At least one of them should lie on the dfs tree, otherwise the graph would remain connected after removing $e_1$, $e_2$ and the answer would be $0$. Let the answer be $e_1$, $e_2$, where $e_1$ lies on the dfs tree. Which cases are possible for the edges $e_1$, $e_2$? The edge $e_2$ should be a bridge of the graph without the edge $e_1$ (otherwise $e_2$ can be arbitrary). Using this we can list the possible cases: Both edges $e_1$, $e_2$ are bridges of the graph. The edge $e_1$ is a bridge of the graph, the edge $e_2$ does not matter. The edge $e_1$ lies on the dfs tree, the edge $e_2$ is the only outer edge covering the edge $e_1$. Both edges $e_1$, $e_2$ lie on the dfs tree, and the sets of outer edges covering $e_1$ and covering $e_2$ are equal. For each case let's find the maximum answer. For each of the $k$ pairs of vertices, consider the path between the vertices of the pair. For each edge $e$ of the dfs tree let's calculate three values: $c_e$ - the number of paths containing the edge $e$. $f_e$ - the number of outer edges covering the edge $e$. $h_e$ - the hash of the set of outer edges covering the edge $e$: assign each outer edge a random $64$-bit integer, and let the hash be the sum of these values. In the first case, the answer for two bridges $e_1$, $e_2$ is equal to $c_{e_1} + c_{e_2}$, so we should find the two bridges with the maximum values of $c_e$. In the second case, the answer for one bridge $e_1$ is equal to $c_{e_1}$, so we should find the bridge with the maximum value of $c_e$. In the third case, we consider edges $e_1$ such that $f_{e_1} = 1$; the answer is $c_{e_1}$, so we should find an edge $e_1$ of the dfs tree with $f_{e_1} = 1$ and the maximum value of $c_{e_1}$. The fourth case is the hard one. We consider two edges $e_1$, $e_2$ such that $h_{e_1} = h_{e_2}$. The answer is equal to the number of paths containing exactly one of the edges $e_1$, $e_2$. We can split each path into two vertical paths; the answer won't change.
The plan of the solution: run a dfs over the dfs tree and maintain a segment tree with the operations "add on a segment" and "maximum on a segment". A prefix of the segment tree corresponds to the edges on the path from the root to the current edge $e_2$. The value in the cell corresponding to an edge $e_1$ is equal to the answer for the pair of edges $e_1$, $e_2$. This segment tree can be recalculated with $O(n + k)$ updates during the dfs tree traversal. The remaining problem is how to take only the edges $e_1$ with $h_{e_1} = h_{e_2}$ into the maximum over a segment. Let's call the set of all edges with equal hashes a cluster. All edges of one cluster lie on a vertical path. For each cluster, consider the vertical path from the first occurrence of an edge of the cluster to the last occurrence. Any two such paths of two clusters either do not intersect or are nested into each other. During the traversal, let's add $-\infty$ on the segment from the last edge with hash $h_{e_2}$ to the edge $e_2$. This excludes those edges from the maximum and does not corrupt the values, because none of the hashes on that segment will be met again later in the traversal (due to the cluster structure). After that, we take the maximum on the segment from the first edge with hash $h_{e_2}$ to the edge $e_2$; all edges with the hash $h_{e_2}$ participate in that maximum. The total complexity is $O(m + (n + k) \log{n})$.
[ "data structures", "dfs and similar", "graphs" ]
3,500
null
1649
A
Game
You are playing a very popular computer game. The next level consists of $n$ consecutive locations, numbered from $1$ to $n$, each of them containing either land or water. It is known that the first and last locations contain land, and for completing the level you have to move from the first location to the last. Also, if you become inside a location with water, you will die, so you can only move between locations with land. You can jump between adjacent locations for free, as well as \textbf{no more than} once jump from any location with land $i$ to any location with land $i + x$, spending $x$ coins ($x \geq 0$). Your task is to spend the minimum possible number of coins to move from the first location to the last one. Note that this is always possible since both the first and last locations are the land locations.
It is easy to see that if there are no water locations, the answer is $0$. Otherwise, we should jump from the last land location reachable from the start to the first land location from which the finish is reachable. To find these locations, one can use two consecutive while loops: one increasing $l$ from $1$ while $a_{l + 1}$ is land, and the other decreasing $r$ from $n$ while $a_{r - 1}$ is land. After the loops finish, we know that we should jump from the $l$-th location to the $r$-th at the cost of $r - l$.
[ "implementation" ]
800
null
1649
B
Game of Ball Passing
Daniel is watching a football team playing a game during their training session. They want to improve their passing skills during that session. The game involves $n$ players, making multiple passes towards each other. Unfortunately, since the balls were moving too fast, after the session Daniel is unable to know how many balls were involved during the game. The only thing he knows is the number of passes delivered by each player during all the session. Find the minimum possible amount of balls that were involved in the game.
If $max(a) \cdot 2 \leq sum(a)$ and $sum(a) > 0$, one can prove that a single ball always suffices. If $sum(a) = 0$, no passes were made at all, so the answer is $0$. In the remaining case the answer is $2 \cdot max(a) - sum(a)$: the most active player must alternate their passes with everybody else's, and each of their excess passes requires a separate ball.
[ "greedy", "implementation" ]
1,300
null
1650
A
Deletions of Two Adjacent Letters
The string $s$ is given, the string length is \textbf{odd} number. The string consists of lowercase letters of the Latin alphabet. As long as the string length is greater than $1$, the following operation can be performed on it: select any two adjacent letters in the string $s$ and delete them from the string. For example, from the string "lemma" in one operation, you can get any of the four strings: "mma", "lma", "lea" or "lem" In particular, in one operation, the length of the string reduces by $2$. Formally, let the string $s$ have the form $s=s_1s_2 \dots s_n$ ($n>1$). During one operation, you choose an arbitrary index $i$ ($1 \le i < n$) and replace $s=s_1s_2 \dots s_{i-1}s_{i+2} \dots s_n$. For the given string $s$ and the letter $c$, determine whether it is possible to make such a sequence of operations that in the end the equality $s=c$ will be true? In other words, is there such a sequence of operations that the process will end with a string of length $1$, which consists of the letter $c$?
There will be one character left in the end, so we have to delete all the characters before and after it; that is, delete some prefix and some suffix. Since each operation deletes a substring of length $2$, we can only delete a prefix and a suffix of even length. Hence the answer is YES exactly when there is an odd position in $s$ (using $1$-based indexing) holding the character $c$, and NO otherwise.
[ "implementation", "strings" ]
800
#include <bits/stdc++.h> using namespace std; #define forn(i, n) for (int i = 0; i < int(n); i++) int main() { int t; cin >> t; forn(tt, t) { string s, t; cin >> s >> t; bool yes = false; forn(i, s.length()) if (s[i] == t[0] && i % 2 == 0) yes = true; cout << (yes ? "YES" : "NO") << endl; } }
1650
B
DIV + MOD
Not so long ago, Vlad came up with an interesting function: - $f_a(x)=\left\lfloor\frac{x}{a}\right\rfloor + x \bmod a$, where $\left\lfloor\frac{x}{a}\right\rfloor$ is $\frac{x}{a}$, rounded \textbf{down}, $x \bmod a$ — the remainder of the integer division of $x$ by $a$. For example, with $a=3$ and $x=11$, the value $f_3(11) = \left\lfloor\frac{11}{3}\right\rfloor + 11 \bmod 3 = 3 + 2 = 5$. The number $a$ is fixed and known to Vlad. Help Vlad find the maximum value of $f_a(x)$ if $x$ can take any integer value from $l$ to $r$ inclusive ($l \le x \le r$).
Consider $f_a(r)$. Note that $\left\lfloor\frac{r}{a}\right\rfloor$ is maximal over the entire segment from $l$ to $r$, so if there is an $x$ on which $f_a$ gives a greater result, then $x \bmod a > r \bmod a$. The numbers from $r - r \bmod a$ to $r$, whose quotient when divided by $a$ equals $\left\lfloor\frac{r}{a}\right\rfloor$, do not satisfy this condition (and are guaranteed to have a value of $f_a$ at most $f_a(r)$). The number $x = r - r \bmod a - 1$: has the maximum possible remainder $x \bmod a = a - 1$; has the maximum possible quotient $\left\lfloor\frac{x}{a}\right\rfloor$ among the numbers less than $r - r \bmod a$. So there are two candidates for the answer: $r$ and $r - r \bmod a - 1$. The second candidate is suitable only if it is at least $l$. It remains to compare the values of $f_a$ and select the maximum.
[ "math" ]
900
#include <bits/stdc++.h> //#define int long long #define mp make_pair #define x first #define y second #define all(a) (a).begin(), (a).end() #define rall(a) (a).rbegin(), (a).rend() #pragma GCC optimize("Ofast") #pragma GCC optimize("no-stack-protector") #pragma GCC optimize("unroll-loops") #pragma GCC target("sse,sse2,sse3,ssse3,popcnt,abm,mmx,tune=native") #pragma GCC optimize("fast-math") typedef long double ld; typedef long long ll; using namespace std; mt19937 rnd(143); const ll inf = 1e9; const ll M = 1e9; const ld pi = atan2(0, -1); const ld eps = 1e-4; void solve() { int l, r, x; cin >> l >> r >> x; int ans = r / x + r % x; int m = r / x * x - 1; if(m >= l)ans = max(ans, m / x + m % x); cout << ans; } bool multi = true; signed main() { //freopen("in.txt", "r", stdin); //freopen("in.txt", "w", stdout); int t = 1; if (multi) { cin >> t; } for (; t != 0; --t) { solve(); cout << "\n"; } return 0; }
1650
C
Weight of the System of Nested Segments
On the number line there are $m$ points, $i$-th of which has integer coordinate $x_i$ and integer weight $w_i$. The coordinates of all points are different, and the points are numbered from $1$ to $m$. A sequence of $n$ segments $[l_1, r_1], [l_2, r_2], \dots, [l_n, r_n]$ is called system of nested segments if for each pair $i, j$ ($1 \le i < j \le n$) the condition $l_i < l_j < r_j < r_i$ is satisfied. In other words, the second segment is strictly inside the first one, the third segment is strictly inside the second one, and so on. For a given number $n$, find a system of nested segments such that: - both ends of each segment are one of $m$ given points; - the sum of the weights of the $2\cdot n$ points used as ends of the segments is \textbf{minimal}. For example, let $m = 8$. The given points are marked in the picture, their weights are marked in red, their coordinates are marked in blue. Make a system of three nested segments: - weight of the first segment: $1 + 1 = 2$ - weight of the second segment: $10 + (-1) = 9$ - weight of the third segment: $3 + (-2) = 1$ - sum of the weights of all the segments in the system: $2 + 9 + 1 = 12$ \begin{center} {\small System of three nested segments} \end{center}
We create a structure that stores, for each point, its coordinate, weight, and index in the input data. Sort the $points$ array by increasing weight. The sum of the weights of the first $2 \cdot n$ points is minimal, so we use them to construct a system of $n$ nested segments. We save the sum of the weights of the first $2 \cdot n$ points in the variable $sum$ and remove the remaining $m - 2 \cdot n$ points from the array. Now sort the points in ascending order of coordinates and form a system of nested segments such that the endpoints of the $i$-th segment are $points[i]$ and $points[2 \cdot n - i + 1]$ for $1 \le i \le n$. Thus, the endpoints of the first segment are $points[1]$ and $points[2 \cdot n]$, and the endpoints of the $n$-th segment are $points[n]$ and $points[n + 1]$. For each test case we first output $sum$, then $n$ pairs of numbers $i$, $j$ ($1 \le i, j \le m$): the indices under which the endpoints of the current segment appeared in the input data.
[ "greedy", "hashing", "implementation", "sortings" ]
1,200
#include<bits/stdc++.h> using namespace std; using ll = long long; #define forn(i, n) for (int i = 0; i < int(n); i++) struct point{ int weight, position, id; }; void solve(){ int n, m; cin >> n >> m; vector<point>points(m); forn(i, m) { cin >> points[i].position >> points[i].weight; points[i].id = i + 1; } sort(points.begin(), points.end(), [] (point a, point b){ return a.weight < b.weight; }); int sum = 0; forn(i, m){ if(i < 2 * n) sum += points[i].weight; else points.pop_back(); } sort(points.begin(), points.end(), [] (point a, point b){ return a.position < b.position; }); cout << sum << endl; forn(i, n){ cout << points[i].id << ' ' << points[2 * n - i - 1].id << endl; } } int main() { int t; cin >> t; while(t--){ solve(); } return 0; }
1650
D
Twist the Permutation
Petya got an array $a$ of numbers from $1$ to $n$, where $a[i]=i$. He performed $n$ operations sequentially. In the end, he received a new state of the $a$ array. At the $i$-th operation, Petya chose the first $i$ elements of the array and cyclically shifted them to the right an arbitrary number of times (elements with indexes $i+1$ and more remain in their places). One cyclic shift to the right is such a transformation that the array $a=[a_1, a_2, \dots, a_n]$ becomes equal to the array $a = [a_i, a_1, a_2, \dots, a_{i-2}, a_{i-1}, a_{i+1}, a_{i+2}, \dots, a_n]$. For example, if $a = [5,4,2,1,3]$ and $i=3$ (that is, this is the third operation), then as a result of this operation, he could get any of these three arrays: - $a = [5,4,2,1,3]$ (makes $0$ cyclic shifts, or any number that is divisible by $3$); - $a = [2,5,4,1,3]$ (makes $1$ cyclic shift, or any number that has a remainder of $1$ when divided by $3$); - $a = [4,2,5,1,3]$ (makes $2$ cyclic shifts, or any number that has a remainder of $2$ when divided by $3$). Let's look at an example. Let $n=6$, i.e. initially $a=[1,2,3,4,5,6]$. A possible scenario is described below. - $i=1$: no matter how many cyclic shifts Petya makes, the array $a$ does not change. - $i=2$: let's say Petya decided to make a $1$ cyclic shift, then the array will look like $a = [\textbf{2}, \textbf{1}, 3, 4, 5, 6]$. - $i=3$: let's say Petya decided to make $1$ cyclic shift, then the array will look like $a = [\textbf{3}, \textbf{2}, \textbf{1}, 4, 5, 6]$. - $i=4$: let's say Petya decided to make $2$ cyclic shifts, the original array will look like $a = [\textbf{1}, \textbf{4}, \textbf{3}, \textbf{2}, 5, 6]$. - $i=5$: let's say Petya decided to make $0$ cyclic shifts, then the array won't change. - $i=6$: let's say Petya decided to make $4$ cyclic shifts, the array will look like $a = [\textbf{3}, \textbf{2}, \textbf{5}, \textbf{6}, \textbf{1}, \textbf{4}]$. You are given a final array state $a$ after all $n$ operations. 
Determine if there is a way to perform the operation that produces this result. In this case, if an answer exists, print the numbers of cyclical shifts that occurred during each of the $n$ operations.
The first thing to notice is that the answer always exists: for $n$ numbers there are $1\cdot2\cdot3 \dots n = n!$ answer choices, as well as $n!$ permutations. It remains to restore the answer from the given permutation, which we do by performing the reverse operations. On the $i$-th operation ($i = n,~n - 1, ~\dots, ~2, ~1$) we select the first $i$ elements of the array and rotate them $d[i]$ times to the left (elements with indices $i+1$ and more remain in their places), where $d[i]$ equals $0$ if $i = 1$, and $d[i] = (index + 1) \bmod i$ otherwise, with $index$ being the index of the value $i$. Thus, for each $i$ from right to left, performing a left cyclic shift, we move the value $i$ to index $i$. As a result, we move $O(n)$ numbers $n$ times, so the time complexity is $O(n^2)$.
[ "brute force", "constructive algorithms", "implementation", "math" ]
1,300
#include <bits/stdc++.h> using namespace std; typedef long long ll; #define forn(i, n) for (int i = 0; i < int(n); i++) void solve() { int n; cin >> n; int a[n]; for (int i = 0; i < n; ++i) { cin >> a[i]; } int ans[n]; for (int i = n; i > 0; --i) { int ind = 0; for (int j = 0; j < i; ++j) { ind = a[j] == i ? j : ind; } int b[i]; for (int j = 0; j < i; ++j) { b[(i - 1 - ind + j) % i] = a[j]; } for (int j = 0; j < i; ++j) { a[j] = b[j]; } ans[i - 1] = i != 1 ? (ind + 1) % i : 0; } for (int i = 0; i < n; ++i) { cout << ans[i] << ' '; } cout << '\n'; } int main() { int tests; cin >> tests; forn(tt, tests) { solve(); } }
1650
E
Rescheduling the Exam
Now Dmitry has a session, and he has to pass $n$ exams. The session starts on day $1$ and lasts $d$ days. The $i$th exam will take place on the day of $a_i$ ($1 \le a_i \le d$), all $a_i$ are different. \begin{center} {\small Sample, where $n=3$, $d=12$, $a=[3,5,9]$. Orange: exam days. Before the first exam Dmitry will rest $2$ days, before the second he will rest $1$ day and before the third he will rest $3$ days.} \end{center} For the session schedule, Dmitry considers a special value $\mu$: the smallest of the rest times before the exams. For example, for the image above, $\mu=1$. In other words, for the schedule he counts exactly $n$ numbers: how many days he rests between the exams $i-1$ and $i$ (for $i=1$, between the start of the session and the exam $1$). Then he finds $\mu$, the minimum among these $n$ numbers. Dmitry believes that he can improve the schedule of the session. He may ask to change the date of one exam (change one arbitrary value of $a_i$). Help him change the date so that all $a_i$ remain different, and the value of $\mu$ is as large as possible. For example, for the schedule above, it is most advantageous for Dmitry to move the second exam to the very end of the session. The new schedule will take the form: \begin{center} {\small Now the rest periods before exams are equal to $[2,2,5]$. So, $\mu=2$.} \end{center} Dmitry can leave the proposed schedule unchanged (if there is no way to move one exam so that it will lead to an improvement in the situation).
To begin with, let's learn how to find the optimal place for the exam that we want to move. Imagine that it is not in the schedule; in this case we have two options: put the exam at the end of the session, so that there are $d - a_n - 1$ rest days before it; or put it in the middle of the largest break between exams (let its length be $mx$), so that between it and the nearest exam there are $\left\lfloor \frac{mx - 1}{2} \right\rfloor$ rest days, because this is no worse than putting it into any part of any other break. That is, the answer for such an arrangement is the minimum of two quantities: the larger of these two options, and the minimum break in the schedule without the moved exam. Now note that the minimum break in most variants is the same: the minimum in the initial schedule. So in order to increase $\mu$, you need to move exactly one of the two exams that form this minimum break, and check which of the two choices is better.
[ "binary search", "data structures", "greedy", "implementation", "math", "sortings" ]
1,900
#include <bits/stdc++.h> #define int long long #define mp make_pair #define x first #define y second #define all(a) (a).begin(), (a).end() #define rall(a) (a).rbegin(), (a).rend() typedef long double ld; typedef long long ll; using namespace std; mt19937 rnd(143); const ll inf = 1e9; const ll M = 998'244'353; const ld pi = atan2(0, -1); const ld eps = 1e-4; int n, d; int cnt(vector<int> &schedule){ int mx = 0, mn = inf; for(int i = 1; i < n; ++i){ mx = max(mx, schedule[i] - schedule[i - 1] - 1); mn = min(mn, schedule[i] - schedule[i - 1] - 1); } return min(mn, max(d - schedule.back() - 1, (mx - 1) / 2)); } void solve(int test_case) { cin >> n >> d; vector<int> a(n + 1); int mn = d, min_pos = 0; for(int i = 1; i <= n; ++i){ cin >> a[i]; if(a[i] - a[i - 1] - 1 < mn){ mn = a[i] - a[i - 1] - 1; min_pos = i; } } vector<int> schedule; for(int i = 0; i <= n; ++i){ if(i != min_pos){ schedule.push_back(a[i]); } } int ans = cnt(schedule); if(min_pos > 1){ schedule[min_pos - 1] = a[min_pos]; } ans = max(ans, cnt(schedule)); cout << ans; } bool multi = true; signed main() { //freopen("in.txt", "r", stdin); //freopen("in.txt", "w", stdout); ios_base::sync_with_stdio(false); cin.tie(nullptr); cout.tie(nullptr); int t = 1; if (multi) { cin >> t; } for (int i = 1; i <= t; ++i) { solve(i); cout << "\n"; } return 0; }
1650
F
Vitaly and Advanced Useless Algorithms
Vitaly enrolled in the course Advanced Useless Algorithms. The course consists of $n$ tasks. Vitaly calculated that he has $a_i$ hours to do the task $i$ from the day he enrolled in the course. That is, the deadline before the $i$-th task is $a_i$ hours. The array $a$ is sorted in ascending order, in other words, the job numbers correspond to the order in which the assignments are turned in. Vitaly does everything conscientiously, so he wants to complete \textbf{each} task by $100$ percent, \textbf{or more}. Initially, his completion rate for each task is $0$ percent. Vitaly has $m$ training options, each option can be used \textbf{not more than} once. The $i$th option is characterized by three integers: $e_i, t_i$ and $p_i$. If Vitaly uses the $i$th option, then after $t_i$ hours (from the current moment) he will increase the progress of the task $e_i$ by $p_i$ percent. For example, let Vitaly have $3$ of tasks to complete. Let the array $a$ have the form: $a = [5, 7, 8]$. Suppose Vitaly has $5$ of options: $[e_1=1, t_1=1, p_1=30]$, $[e_2=2, t_2=3, p_2=50]$, $[e_3=2, t_3=3, p_3=100]$, $[e_4=1, t_4=1, p_4=80]$, $[e_5=3, t_5=3, p_5=100]$. Then, if Vitaly prepares in the following way, he will be able to complete everything in time: - Vitaly chooses the $4$-th option. Then in $1$ hour, he will complete the $1$-st task at $80$ percent. He still has $4$ hours left before the deadline for the $1$-st task. - Vitaly chooses the $3$-rd option. Then in $3$ hours, he will complete the $2$-nd task in its entirety. He has another $1$ hour left before the deadline for the $1$-st task and $4$ hours left before the deadline for the $3$-rd task. - Vitaly chooses the $1$-st option. Then after $1$ hour, he will complete the $1$-st task for $110$ percent, which means that he will complete the $1$-st task just in time for the deadline. - Vitaly chooses the $5$-th option. 
He will complete the $3$-rd task for $2$ hours, and after another $1$ hour, Vitaly will complete the $3$-rd task in its entirety. Thus, Vitaly has managed to complete the course completely and on time, using the $4$ options. Help Vitaly — print the options for Vitaly to complete the tasks in the correct order. Please note: each option can be used \textbf{not more than} once. If there are several possible answers, it is allowed to output any of them.
Note that it is always advantageous to complete the task with the earliest deadline first, and only then proceed to the next task. Hence we can solve the problem independently for each task: it remains to reach $100$ percent on it using the available options, which is a typical knapsack problem with answer recovery.
[ "dp", "greedy", "implementation" ]
2,200
#include <bits/stdc++.h> using namespace std; template<class T> bool ckmin(T &a, T b) {return a > b ? a=b, true : false;} #define forn(i, n) for (int i = 0; i < int(n); i++) #define sz(v) (int)v.size() #define all(v) v.begin(),v.end() struct option { int t, p, id; option(int _t,int _p, int _id) : t(_t), p(_p), id(_id) { } }; const int INF = INT_MAX >> 1; vector<int> get_ans(vector<option> &v) { int n = sz(v); vector<vector<int>> dp(101, vector<int>(n+1, INF)); dp[0][0] = 0; for (int k = 1; k <= n; k++) { auto [t,p,id] = v[k-1]; dp[0][k] = 0; for (int i = 100; i > 0; i--) { int prev = max(0,i - p); dp[i][k] = dp[i][k-1]; ckmin(dp[i][k], dp[prev][k-1] + t); } } vector<int> ans; int t = dp[100][n]; if (t == INF) return {-1}; for (int i = 100, k = n; k >= 1; k--) { if (dp[i][k] == dp[i][k-1]) { continue; } ans.emplace_back(v[k-1].id); i = max(0, i - v[k-1].p); } reverse(all(ans)); ans.emplace_back(t); return ans; } void solve(bool flag=true) { int n,m; cin >> n >> m; vector<int> a(n); forn(i, n) { cin >> a[i]; } for (int i = n-1; i > 0; i--) { a[i] -= a[i-1]; } vector<vector<option>> v(n); forn(j, m) { int e,t,p; cin >> e >> t >> p, e--; v[e].emplace_back(t, p, j+1); } vector<int> ans; forn(i, n) { vector<int> cur = get_ans(v[i]); if (sz(cur) == 1 && cur[0] == -1) { cout << "-1\n"; return; } int t = cur.back(); if (t > a[i]) { cout << "-1\n"; return; } cur.pop_back(); if (i+1 < n) a[i+1] += a[i] - t; ans.insert(ans.end(), all(cur)); } cout << sz(ans) << '\n'; for (auto e:ans) cout << e << ' '; cout << '\n'; } int main() { int t; cin >> t; forn(tt, t) { solve(); } }
1650
G
Counting Shortcuts
Given an undirected connected graph with $n$ vertices and $m$ edges. The graph contains no loops (edges from a vertex to itself) and multiple edges (i.e. no more than one edge between each pair of vertices). The vertices of the graph are numbered from $1$ to $n$. Find the number of paths from a vertex $s$ to $t$ whose length differs from the shortest path from $s$ to $t$ by no more than $1$. It is necessary to consider all suitable paths, even if they pass through the same vertex or edge more than once (i.e. they are not simple). \begin{center} {\small Graph consisting of $6$ of vertices and $8$ of edges} \end{center} For example, let $n = 6$, $m = 8$, $s = 6$ and $t = 1$, and let the graph look like the figure above. Then the length of the shortest path from $s$ to $t$ is $1$. Consider all paths whose length is at most $1 + 1 = 2$. - $6 \rightarrow 1$. The length of the path is $1$. - $6 \rightarrow 4 \rightarrow 1$. Path length is $2$. - $6 \rightarrow 2 \rightarrow 1$. Path length is $2$. - $6 \rightarrow 5 \rightarrow 1$. Path length is $2$. There is a total of $4$ of matching paths.
Note that in any suitable path we can never return to the previous vertex. Indeed, let the current vertex be $v$ and the previous one be $u$, so that $d_v = d_u + 1$, where $d_x$ denotes the shortest distance from $s$ to vertex $x$, and let $d_t$ be the shortest distance from $s$ to $t$. If we step back to $u$, we have already travelled $d_v + 1$ edges, and at least $d_t - d_u = d_t - d_v + 1$ more edges are needed to reach $t$. In total we get at least $(d_v + 1) + (d_t - d_v + 1) = d_t + 2$ edges, i.e. a path at least $2$ longer than the shortest one. Thus, our answer consists of simple paths only. Since the answer consists only of simple paths, we can run a BFS in which each vertex is added to the queue at most twice: on the first visit (at the shortest distance), and on the next visit, when the distance to the vertex equals the shortest one $+1$. Along the way we also count the number of ways to reach each vertex at each of these two distances. We can output the answer as soon as the vertex $t$ is taken from the queue for the second time, and terminate the loop after that. The complexity is $O(n + m)$ since we only need a BFS.
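The two-visit BFS described above can be sketched as follows. This is a Python sketch with my own function and variable names (not taken from the reference solution): for every vertex $v$ it counts the number of walks of length $d_v$ and $d_v + 1$ from $s$, processing states in non-decreasing order of travelled distance.

```python
from collections import deque

MOD = 10**9 + 7

def count_short_paths(n, edges, s, t):
    """Count s->t paths whose length is d(s,t) or d(s,t)+1, modulo MOD."""
    g = [[] for _ in range(n + 1)]
    for a, b in edges:
        g[a].append(b)
        g[b].append(a)
    INF = float('inf')
    dist = [INF] * (n + 1)
    # ways[v][0] = number of shortest paths to v,
    # ways[v][1] = number of paths one longer than the shortest.
    ways = [[0, 0] for _ in range(n + 1)]
    dist[s] = 0
    ways[s][0] = 1
    q = deque([(s, 0)])  # (vertex, distance travelled)
    while q:
        v, d = q.popleft()
        cnt = ways[v][d - dist[v]]
        for to in g[v]:
            nd = d + 1
            if dist[to] == INF:
                dist[to] = nd       # first time we see `to`: shortest distance
            if nd - dist[to] > 1:
                continue            # this walk is already too long
            if ways[to][nd - dist[to]] == 0:
                q.append((to, nd))  # push each (vertex, offset) state once
            ways[to][nd - dist[to]] = (ways[to][nd - dist[to]] + cnt) % MOD
    return (ways[t][0] + ways[t][1]) % MOD
```

Since every state enters the queue at most once and each edge is examined a constant number of times per state, the sketch keeps the $O(n + m)$ bound from the editorial.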
[ "data structures", "dfs and similar", "dp", "graphs", "shortest paths" ]
2100
#include <bits/stdc++.h> using namespace std; #define forn(i, n) for (int i = 0; i < int(n); i++) #define sz(v) (int)v.size() #define eb emplace_back #define mt make_tuple const int INF = INT_MAX >> 1; const int mod = 1e9 + 7; void csum(int &a,int b) { a = (a + b) % mod; } int s, t; vector<int> us; vector<int> dist; vector<int> dp[2]; int bfs(vector<vector<int>> &g) { queue<tuple<int,int,int>> q; q.push(mt(s, 0, 0)); //[v, dist, count] int ans = 0, mnd = INF; us[s] = 1; dp[0][s] = 1; dist[s] = 0; while(!q.empty()) { auto [v,d, x] = q.front(); q.pop(); // cerr << v << ' ' << d << ' ' << dp[x][v] << endl; if (v == t) { if (mnd == INF) { mnd = d; } csum(ans, dp[x][v]); } if (d == mnd + 1) continue; for (int to : g[v]) if(d <= dist[to]) { dist[to] = min(dist[to], d+1); csum(dp[d - dist[to] + 1][to], dp[x][v]); // cerr << "TO: " << to << ' ' << dist[to] << ' ' << d << endl; if(us[to] == 0 || (us[to] == 1 && dist[to] == d)) q.push(mt(to, d+1, us[to]++)); } } return ans; } void solve() { int n,m; cin >> n >> m; cin >> s >> t; us.resize(n+1); dp[0].resize(n+1); dp[1].resize(n+1); dist.resize(n+1); forn(i, n+1) { us[i] = dp[0][i] = dp[1][i] = 0; dist[i] = INF; } vector<vector<int>> g(n+1); forn(i, m) { int a,b; cin >> a >> b; g[a].eb(b); g[b].eb(a); } cout << bfs(g) << '\n'; } int main() { int t; cin >> t; forn(tt, t) { solve(); } }
1651
A
Playoff
Consider a playoff tournament where $2^n$ athletes compete. The athletes are numbered from $1$ to $2^n$. The tournament is held in $n$ stages. In each stage, the athletes are split into pairs in such a way that each athlete belongs exactly to one pair. In each pair, the athletes compete against each other, and exactly one of them wins. The winner of each pair advances to the next stage, the athlete who was defeated gets eliminated from the tournament. The pairs are formed as follows: - in the first stage, athlete $1$ competes against athlete $2$; $3$ competes against $4$; $5$ competes against $6$, and so on; - in the second stage, the winner of the match "$1$–$2$" competes against the winner of the match "$3$–$4$"; the winner of the match "$5$–$6$" competes against the winner of the match "$7$–$8$", and so on; - the next stages are held according to the same rules. When athletes $x$ and $y$ compete, the winner is decided as follows: - if $x+y$ is odd, the athlete with the lower index wins (i. e. if $x < y$, then $x$ wins, otherwise $y$ wins); - if $x+y$ is even, the athlete with the higher index wins. The following picture describes the way the tournament with $n = 3$ goes. Your task is the following one: given the integer $n$, determine the index of the athlete who wins the tournament.
During the first stage, every player with an even index competes against a player with an odd index, so in each match during the first stage, the player whose index is smaller wins. The pairs are formed in such a way that, in each pair, the player with an odd index has the smaller index, so all players with even indices get eliminated, and all players with odd indices advance to the next stage. All of the remaining matches are between players with odd indices, so the winner of each match is the player with the larger index. So, the overall winner of the tournament is the player with the greatest odd index, which is $2^n-1$. Note: in some languages (for example, C++), standard power functions work with floating-point numbers instead of integers, so they will produce the answer as a floating-point number (which may lead to wrong formatting of the output and/or calculation errors). You might have to implement your own power function that works with integers, or compute $2^n$ using a loop.
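As a minimal illustration of the note above, here is a Python sketch (names are mine). Python's `**` is already exact for integers, so the loop only mirrors the integer-power advice given for languages like C++:

```python
def int_pow2(n):
    # 2**n computed with integer multiplications only (no floating point)
    result = 1
    for _ in range(n):
        result *= 2
    return result

def playoff_winner(n):
    # the winner is the greatest odd index among 1..2^n
    return int_pow2(n) - 1
```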
[ "implementation" ]
800
t = int(input()) for i in range(t): n = int(input()) print(2 ** n - 1)
1651
B
Prove Him Wrong
Recently, your friend discovered one special operation on an integer array $a$: - Choose two indices $i$ and $j$ ($i \neq j$); - Set $a_i = a_j = |a_i - a_j|$. After playing with this operation for a while, he came to the next conclusion: - For every array $a$ of $n$ integers, where $1 \le a_i \le 10^9$, you can find a pair of indices $(i, j)$ such that the total sum of $a$ will \textbf{decrease} after performing the operation. This statement sounds fishy to you, so you want to find a counterexample for a given integer $n$. Can you find such counterexample and prove him wrong? In other words, find an array $a$ consisting of $n$ integers $a_1, a_2, \dots, a_n$ ($1 \le a_i \le 10^9$) such that for all pairs of indices $(i, j)$ performing the operation won't decrease the total sum (it will increase or not change the sum).
Suppose the initial sum of $a$ is equal to $S$. If we perform the operation, the new sum will be equal to $S' = S - (a_i + a_j) + 2 |a_i - a_j|$. We want the sum not to decrease, or $S' \ge S$. If $a_i \ge a_j$, we will get: $S' \ge S,$ $S - (a_i + a_j) + 2 (a_i - a_j) \ge S,$ $a_i - 3 a_j \ge 0,$ $a_i \ge 3 a_j.$ In other words, array $a$ you need (if sorted) will have $a_2 \ge 3 a_1$, $a_3 \ge 3 a_2$ and so on. And one of the variants (and, obviously, an optimal one) is just $[1, 3, 9, 27, \dots, 3^{n - 1}]$. As a result, since $a_i \le 10^9$, we just need to check: if $3^{n - 1} \le 10^9$ then we found an answer, otherwise there is no counterexample.
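The derivation above can be checked directly against the original operation: the sum does not decrease for a pair $(i, j)$ exactly when $2|a_i - a_j| \ge a_i + a_j$. A small Python sketch (names are mine) verifies a candidate array by brute force:

```python
def sum_never_decreases(a):
    # Check the statement directly: for every pair (i, j), the operation
    # a_i = a_j = |a_i - a_j| must not decrease the total sum, i.e.
    # 2*|a_i - a_j| >= a_i + a_j must hold.
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):
            if 2 * abs(a[i] - a[j]) < a[i] + a[j]:
                return False
    return True
```

In particular, the powers of three $[1, 3, 9, \dots, 3^{n-1}]$ pass this check, while e.g. $[1, 2, 3]$ does not.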
[ "constructive algorithms", "greedy" ]
800
for _ in range(int(input())): n = int(input()) if 3**(n-1) > 10**9: print("NO") else: print("YES") print(*[3**x for x in range(n)])
1651
C
Fault-tolerant Network
There is a classroom with two rows of computers. There are $n$ computers in each row and each computer has its own grade. Computers in the first row have grades $a_1, a_2, \dots, a_n$ and in the second row — $b_1, b_2, \dots, b_n$. Initially, all pairs of \textbf{neighboring} computers in each row are connected by wire (pairs $(i, i + 1)$ for all $1 \le i < n$), so two rows form two independent computer networks. Your task is to combine them into one common network by connecting one or more pairs of computers from \textbf{different} rows. Connecting the $i$-th computer from the first row and the $j$-th computer from the second row costs $|a_i - b_j|$. You can connect one computer to several other computers, but you need to provide at least a basic fault tolerance: you need to connect computers in such a way that the network stays connected, despite one of its computers failing. In other words, if one computer is broken (no matter which one), the network won't split into two or more parts. What is the minimum total cost to make a fault-tolerant network?
There is a criterion when the given network becomes fault-tolerant: the network is fault-tolerant if and only if each of the corner computers (let's name them $A_1$, $A_n$, $B_1$, and $B_n$) is connected to the other row. From one side: if, WLOG, $A_1$ is not connected to the other row, then if $A_2$ is broken, $A_1$ loses connection to the rest of the network (since $A_1$ is connected only with $A_2$). From the other side: suppose, WLOG, $A_i$ is broken, then the row $A$ falls into at most two parts: $A_1 - \dots - A_{i - 1}$ and $A_{i + 1} - \dots - A_n$. But since both $A_1$ and $A_n$ are connected to row $B$ and $B$ is still connected, the resulting network is still connected. Now the question is: how to connect all corner computers? Because sometimes it's optimal not to connect corners directly. One of the approaches is described below. Let's look at $A_1$. Essentially, there are three ways to connect it to row $B$: to $B_1$, $B_n$, or $\mathrm{best}_B(A_1)$ (where $\mathrm{best}_B(A_1)$ is the $B_j$ with minimum possible $|a_1 - b_j|$). The same applies to $A_n$. So, let's just iterate over all these $3 \times 3$ variants. For each of these variants, if we didn't cover $B_1$, then we should also add one more connection between $B_1$ and $\mathrm{best}_A(B_1)$; if we didn't cover $B_n$, then we should also add one more connection between $B_n$ and $\mathrm{best}_A(B_n)$. As a result, we choose the best variant.
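The $3 \times 3$ enumeration can be sketched in Python as follows (a hedged sketch with my own names; `best` is the $O(n)$ brute-force nearest-value search, matching the approach above):

```python
def min_fault_tolerant_cost(a, b):
    n = len(a)
    INF = float('inf')

    def best(vals, x):
        # index j minimizing |x - vals[j]|
        return min(range(len(vals)), key=lambda j: abs(x - vals[j]))

    ans = INF
    for v1 in (0, best(b, a[0]), n - 1):          # where A_1 connects in row B
        for v2 in (0, best(b, a[n - 1]), n - 1):  # where A_n connects in row B
            cost = abs(a[0] - b[v1]) + abs(a[n - 1] - b[v2])
            if v1 > 0 and v2 > 0:          # B_1 not covered yet
                cost += abs(b[0] - a[best(a, b[0])])
            if v1 < n - 1 and v2 < n - 1:  # B_n not covered yet
                cost += abs(b[n - 1] - a[best(a, b[n - 1])])
            ans = min(ans, cost)
    return ans
```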
[ "brute force", "data structures", "implementation" ]
1500
#include<bits/stdc++.h> using namespace std; #define fore(i, l, r) for(int i = int(l); i < int(r); i++) typedef long long li; const int INF = int(1e9); int n; vector<int> a, b; inline bool read() { if(!(cin >> n)) return false; a.resize(n); fore (i, 0, n) cin >> a[i]; b.resize(n); fore (i, 0, n) cin >> b[i]; return true; } int bestCandidate(const vector<int> &vals, int cur) { int bst = INF + 10, pos = -1; fore (i, 0, n) { if (bst > abs(cur - vals[i])) { bst = abs(cur - vals[i]); pos = i; } } return pos; } inline void solve() { li bst = 10ll * INF; vector<int> cds1 = {0, bestCandidate(b, a[0]), n - 1}; vector<int> cds2 = {0, bestCandidate(b, a[n - 1]), n - 1}; for (int var1 : cds1) { for (int var2 : cds2) { li ans = (li)abs(a[0] - b[var1]) + abs(a[n - 1] - b[var2]); if (var1 > 0 && var2 > 0) ans += abs(b[0] - a[bestCandidate(a, b[0])]); if (var1 < n - 1 && var2 < n - 1) ans += abs(b[n - 1] - a[bestCandidate(a, b[n - 1])]); bst = min(bst, ans); } } cout << bst << endl; } int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); #endif ios_base::sync_with_stdio(false); cin.tie(0), cout.tie(0); int t; cin >> t; while (t--) { read(); solve(); } return 0; }
1651
D
Nearest Excluded Points
You are given $n$ distinct points on a plane. The coordinates of the $i$-th point are $(x_i, y_i)$. For each point $i$, find the nearest (in terms of Manhattan distance) point with \textbf{integer coordinates} that is not among the given $n$ points. If there are multiple such points — you can choose any of them. The Manhattan distance between two points $(x_1, y_1)$ and $(x_2, y_2)$ is $|x_1 - x_2| + |y_1 - y_2|$.
Firstly, we can find answers for all points that are adjacent to at least one point not from the set. The distance for such points is obviously $1$ (and this is the smallest possible answer we can get). On the next iteration, we can set answers for all points that are adjacent to points with found answers (because they don't have neighbors outside the set, the distance for them is at least $2$). It doesn't matter which point we take, so if the point $i$ is adjacent to some point $j$ that has the answer $1$, we can set the answer for the point $i$ as the answer for the point $j$. We can repeat this process until we find answers for all points. In terms of the code, this can be done by breadth first search (BFS). In other words, we set answers for the points that have the distance $1$ and then push these answers to all adjacent points from the set in order of increasing distance until we find all the answers. Time complexity: $O(n \log{n})$.
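The multi-source BFS can be sketched in Python (a set gives the membership tests, a dict holds the propagated answers; names are my own):

```python
from collections import deque

def nearest_excluded(points):
    """For each given point, return some closest lattice point
    (Manhattan distance) that is not among the given points."""
    pts = set(points)
    ans = {}
    q = deque()
    # seed: points with a free neighbor get answer at distance 1
    for (x, y) in points:
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if (nx, ny) not in pts:
                ans[(x, y)] = (nx, ny)
                q.append((x, y))
                break
    # propagate answers to the remaining points in order of distance
    while q:
        x, y = q.popleft()
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if (nx, ny) in pts and (nx, ny) not in ans:
                ans[(nx, ny)] = ans[(x, y)]
                q.append((nx, ny))
    return [ans[p] for p in points]
```

The $O(n \log n)$ bound comes from the set/map operations; with hashing it is expected $O(n)$.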
[ "binary search", "data structures", "dfs and similar", "graphs", "shortest paths" ]
1900
#include <bits/stdc++.h> using namespace std; int dx[] = {0, 0, -1, 1}; int dy[] = {-1, 1, 0, 0}; int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif int n; scanf("%d", &n); vector<pair<int, int>> a(n); for (auto &[x, y] : a) scanf("%d %d", &x, &y); set<pair<int, int>> st(a.begin(), a.end()); map<pair<int, int>, pair<int, int>> ans; queue<pair<int, int>> q; for (auto [x, y] : a) { for (int i = 0; i < 4; ++i) { int nx = x + dx[i], ny = y + dy[i]; if (st.count({nx, ny})) { continue; } ans[{x, y}] = {nx, ny}; q.push({x, y}); break; } } while (!q.empty()) { int x = q.front().first, y = q.front().second; q.pop(); for (int i = 0; i < 4; ++i) { int nx = x + dx[i], ny = y + dy[i]; if (!st.count({nx, ny}) || ans.count({nx, ny})) { continue; } ans[{nx, ny}] = ans[{x, y}]; q.push({nx, ny}); } } for (auto [x, y] : a) { auto it = ans[{x, y}]; printf("%d %d\n", it.first, it.second); } return 0; }
1651
E
Sum of Matchings
Let's denote the size of the maximum matching in a graph $G$ as $\mathit{MM}(G)$. You are given a bipartite graph. The vertices of the first part are numbered from $1$ to $n$, the vertices of the second part are numbered from $n+1$ to $2n$. \textbf{Each vertex's degree is $2$}. For a tuple of four integers $(l, r, L, R)$, where $1 \le l \le r \le n$ and $n+1 \le L \le R \le 2n$, let's define $G'(l, r, L, R)$ as the graph which consists of all vertices of the given graph that are included in the segment $[l, r]$ or in the segment $[L, R]$, and all edges of the given graph such that each of their endpoints belongs to one of these segments. In other words, to obtain $G'(l, r, L, R)$ from the original graph, you have to remove all vertices $i$ such that $i \notin [l, r]$ and $i \notin [L, R]$, and all edges incident to these vertices. Calculate the sum of $\mathit{MM}(G(l, r, L, R))$ over all tuples of integers $(l, r, L, R)$ having $1 \le l \le r \le n$ and $n+1 \le L \le R \le 2n$.
Instead of counting the edges belonging to the maximum matching, it is easier to count the vertices. So, we will calculate the total number of vertices saturated by the maximum matching over all possible tuples $(l, r, L, R)$, and then divide the answer by $2$. Furthermore, it's easier to calculate the number of unsaturated vertices than the number of saturated vertices, so we can subtract it from the total number of vertices in all graphs we consider and obtain the answer. Let's analyze how to calculate the total number of unsaturated vertices. Each graph $G'(l, r, L, R)$ is a subgraph of the given graph, so it is still bipartite, and the degree of each vertex is still not greater than $2$. A bipartite graph where the degree of each vertex is at most $2$ can be represented as a set of cycles and paths, and the maximum matching over each of these cycles/paths can be considered independently. Each cycle has an even number of vertices (since otherwise the graph would not be bipartite), so we can saturate all vertices on a cycle with the matching. For a path, the number of unsaturated vertices depends on its length: if the number of vertices in a path is even, we can match all vertices on it; otherwise, one vertex will be unsaturated. So the problem reduces to counting paths with odd number of vertices in all possible graphs $G'(l, r, L, R)$. Every path with an odd number of vertices has a center (the vertex which is exactly in the middle of the path). Let's iterate on the center of the path and its length, and calculate the number of times this path occurs in all graphs we consider. Suppose the center of the path is the vertex $x$, and the number of vertices in it is $2k+1$. Then, for this path to exist, two conditions must hold: every vertex $y$ such that the distance from $x$ to $y$ is not greater than $k$ should be present in the graph; every vertex $z$ such that the distance from $x$ to $z$ is exactly $k+1$ should be excluded from the graph. 
It means that, for each of the two parts of the graph, there are several vertices that should be present in the graph, and zero or two vertices that should be excluded from the graph. It's easy to see that among the vertices we have to include, we are only interested in the minimum one and the maximum one (all vertices between them will be included as well if these two are included). So, we need to implement some kind of function that allows us to calculate the number of segments that cover the minimum and the maximum vertex we need, and don't cover any of the vertices that we have to exclude - this can be easily done in $O(1)$. Note that the segments should be considered independently for both parts of the graph. Overall, for each vertex we have to consider at most $O(n)$ different lengths of odd paths with the center in this vertex. The minimum/maximum indices of vertices in both parts we have to include in the graph can be maintained while we increase the length of the path, so the whole solution works in $O(n^2)$.
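The $O(1)$ counting function described here can be sketched as follows (0-based positions; a hedged Python sketch with my own names: `lo > hi` encodes "no required positions", and `forbidden` holds the zero or two positions that must be excluded):

```python
def tri(k):
    # number of subsegments of a block of k consecutive positions
    return k * (k + 1) // 2

def count_segments(n, lo, hi, forbidden):
    """Count segments [l, r], 0 <= l <= r < n, that fully contain the
    required range [lo, hi] and avoid every position in `forbidden`."""
    if lo > hi:  # nothing required: only forbidden positions constrain us
        if not forbidden:
            return tri(n)
        a, b = min(forbidden), max(forbidden)
        return tri(a) + tri(b - a - 1) + tri(n - 1 - b)
    min_l, max_l = 0, lo          # feasible range for l
    min_r, max_r = hi, n - 1      # feasible range for r
    for f in forbidden:
        if lo <= f <= hi:
            return 0              # cannot include and exclude f at once
        if f < lo:
            min_l = max(min_l, f + 1)
        else:
            max_r = min(max_r, f - 1)
    return max(0, max_l - min_l + 1) * max(0, max_r - min_r + 1)
```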
[ "brute force", "combinatorics", "constructive algorithms", "dfs and similar", "graph matchings", "greedy", "math" ]
2600
#include<bits/stdc++.h> using namespace std; const int N = 3043; vector<int> g[2 * N]; int n; int used[2 * N]; int choose2(int n) { return n * (n + 1) / 2; } int count_ways(int L, int R, const vector<int>& forbidden) { if(L > R) { int ml = *min_element(forbidden.begin(), forbidden.end()); int mr = *max_element(forbidden.begin(), forbidden.end()); return choose2(ml) + choose2(mr - ml - 1) + choose2(n - 1 - mr); } int minl = 0; int maxl = L; int minr = R; int maxr = n - 1; for(auto x : forbidden) { if(L <= x && x <= R) return 0; else if(x < L) minl = max(minl, x + 1); else maxr = min(maxr, x - 1); } return (maxl - minl + 1) * (maxr - minr + 1); } vector<int> cur; void dfs(int x) { if(used[x] == 1) return; used[x] = 1; cur.push_back(x); for(auto y : g[x]) dfs(y); } long long calc(const vector<int>& cycle) { int m = cycle.size(); int k = m / 2; long long ans = 0; for(int i = 0; i < m; i++) { int z = cycle[i]; if(z >= n) z -= n; ans += choose2(n) * 1ll * (choose2(z) + 0ll + choose2(n - z - 1)); int l = n - 1, r = 0, L = n - 1, R = 0; int pl = i, pr = i; for(int j = 0; j < k; j++) { for(auto x : vector<int>({cycle[pl], cycle[pr]})) { if(x < n) { l = min(l, x); r = max(r, x); } else { L = min(L, x - n); R = max(R, x - n); } } vector<int> f, F; pl--; if(pl < 0) pl += m; pr++; if(pr >= m) pr -= m; for(auto x : vector<int>({cycle[pl], cycle[pr]})) { if(x < n) f.push_back(x); else F.push_back(x - n); } long long add = count_ways(l, r, f) * 1ll * count_ways(L, R, F); ans += add; } } return ans; } int main() { cin >> n; for(int i = 0; i < 2 * n; i++) { int x, y; cin >> x >> y; --x; --y; g[x].push_back(y); g[y].push_back(x); } long long ans = n * 1ll * choose2(n) * 1ll * choose2(n) * 2ll; for(int i = 0; i < n; i++) { if(used[i]) continue; dfs(i); vector<int> cycle = cur; ans -= calc(cycle); cur = vector<int>(); } cout << ans / 2 << endl; }
1651
F
Tower Defense
Monocarp is playing a tower defense game. A level in the game can be represented as an OX axis, where each lattice point from $1$ to $n$ contains a tower in it. The tower in the $i$-th point has $c_i$ mana capacity and $r_i$ mana regeneration rate. In the beginning, before the $0$-th second, each tower has full mana. If, at the end of some second, the $i$-th tower has $x$ mana, then it becomes $\mathit{min}(x + r_i, c_i)$ mana for the next second. There are $q$ monsters spawning on a level. The $j$-th monster spawns at point $1$ at the beginning of $t_j$-th second, and it has $h_j$ health. Every monster is moving $1$ point per second in the direction of increasing coordinate. When a monster passes the tower, the tower deals $\mathit{min}(H, M)$ damage to it, where $H$ is the current health of the monster and $M$ is the current mana amount of the tower. This amount gets subtracted from both monster's health and tower's mana. Unfortunately, sometimes some monsters can pass all $n$ towers and remain alive. Monocarp wants to know what will be the total health of the monsters after they pass all towers.
Let's start thinking about the problem from the easy cases. How to solve the problem fast if all towers have full mana? We can store prefix sums of their capacities and find the first tower that doesn't get drained completely with a binary search. Let's try the opposite. How to solve the problem fast if all towers were drained completely in the previous second? It's the same, but the prefix sums are calculated over regeneration rates. What if all towers were drained at the same second, earlier than the previous second, and no tower is fully restored yet? It's also the same, but the regeneration rates are multiplied by the time passed since the drain. What if we drop the condition about the towers not being fully restored? How would a data structure that can answer prefix sum queries work? It should store the total mana capacity of all towers that are full, and the total mana regeneration rate of all towers that aren't. If these are kept separately, then it's easy to obtain the prefix sum by providing the time passed: it will be the total capacity plus the total regeneration rate multiplied by the time passed. How to determine if a tower is fully restored since the drain or not? That's easy. For each tower, we can calculate the number of seconds it takes it to get restored from zero. That is $\lceil \frac{c}{r} \rceil$. Thus, all towers for which this value is at most the time passed are already fully restored; all the rest aren't. Unfortunately, in the actual problem, not all towers were last drained at the same time. However, it's possible to reduce the problem to that. Store the segments of towers that were drained at the same time. There are also towers that weren't drained completely, but they can be stored as segments of length $1$ too. When a monster comes, it drains some prefix of the towers completely and possibly one more tower partially. In terms of segments, it removes some prefix of them and possibly cuts one. 
Then it creates a segment that covers the prefix and possibly a segment of length $1$ (with a partially drained tower). So each monster creates $O(1)$ segments and removes no more segments than were created. Thus, if we were to process each creation and removal in some $O(T)$, then the complexity will be $O(qT)$. All towers on each segment have the same time passed since the drain. We want to query the sum on the entire segment. If it is greater than the remaining health of the monster, we want to find the largest prefix of this segment whose sum is less than or equal to the monster's health. Given the time passed, let's learn to query the range sum. If we knew the queries beforehand, it would be easy. Initialize a segment tree as if all towers are completely restored. Then make events of two kinds: a tower with restore time $\lceil \frac{c}{r} \rceil$ and a query with time $t$. Sort them in decreasing order and start processing one by one. When a tower event happens, update a single position in the segment tree from capacity to regeneration rate. When a query event happens, find the sum. Since the queries are not known beforehand, make that segment tree persistent and ask specific versions of it. If a segment of towers was last drained at time $t_{\mathit{lst}}$, and the query is at time $t$, then you should query the segment tree in version $t - t_{\mathit{lst}}$. Obviously, you can store not all versions but only the ones that have some tower change. Moreover, it's more convenient to make one version responsible for one tower update. Then you can lower_bound the array of sorted $\lceil \frac{c}{r} \rceil$ to find the version you want to ask at. To determine the largest prefix of this segment whose sum is less than or equal to the monster's health, you can either binary search for $O(\log^2 n)$ or traverse the segment tree for $O(\log n)$. The time limit might be a little tight for the first approach, but it can still pass. Overall complexity: $O((n + q) \log n)$.
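The very first easy case (all towers at full mana) can be sketched with prefix sums and a binary search; a hedged Python sketch with invented names, illustrating only that building block and not the full persistent-tree solution:

```python
from bisect import bisect_right
from itertools import accumulate

def pass_full_towers(c, h):
    """All towers currently hold full mana c[i]; a monster with health h
    passes all of them. Return (number of fully drained towers,
    health remaining after the last tower)."""
    pref = list(accumulate(c))      # prefix sums of capacities
    k = bisect_right(pref, h)       # towers 0..k-1 get drained completely
    return k, max(0, h - pref[-1])
```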
[ "binary search", "brute force", "data structures" ]
3000
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; struct node{ node *l, *r; long long sumc, sumr; node() : l(NULL), r(NULL), sumc(0), sumr(0) {} node(node* l, node* r, long long sumc, long long sumr) : l(l), r(r), sumc(sumc), sumr(sumr) {} }; node* build(int l, int r, vector<int> &c){ if (l == r - 1) return new node(NULL, NULL, c[l], 0); int m = (l + r) / 2; node* nw = new node(); nw->l = build(l, m, c); nw->r = build(m, r, c); nw->sumc = nw->l->sumc + nw->r->sumc; return nw; } node* upd(node* v, int l, int r, int pos, int val){ if (l == r - 1) return new node(NULL, NULL, 0, val); int m = (l + r) / 2; node* nw = new node(v->l, v->r, 0, 0); if (pos < m) nw->l = upd(v->l, l, m, pos, val); else nw->r = upd(v->r, m, r, pos, val); nw->sumc = nw->l->sumc + nw->r->sumc; nw->sumr = nw->l->sumr + nw->r->sumr; return nw; } long long getsum(node *v, int mult){ return v->sumc + v->sumr * mult; } int trav(node *v, int l, int r, int L, int R, long long &lft, int mult){ if (L >= R){ return 0; } if (l == L && r == R && lft - getsum(v, mult) >= 0){ lft -= getsum(v, mult); return r - l; } if (l == r - 1){ return 0; } int m = (l + r) / 2; int cnt = trav(v->l, l, m, L, min(m, R), lft, mult); if (cnt == max(0, min(m, R) - L)) cnt += trav(v->r, m, r, max(m, L), R, lft, mult); return cnt; } struct seg{ int l, r, lst, cur; }; int main() { int n; scanf("%d", &n); vector<int> c(n), r(n); forn(i, n) scanf("%d%d", &c[i], &r[i]); vector<int> ord(n); iota(ord.begin(), ord.end(), 0); sort(ord.begin(), ord.end(), [&](int x, int y){ return c[x] / r[x] > c[y] / r[y]; }); vector<int> vals; for (int i : ord) vals.push_back(c[i] / r[i]); vector<node*> root(1, build(0, n, c)); for (int i : ord) root.push_back(upd(root.back(), 0, n, i, r[i])); vector<seg> st; for (int i = n - 1; i >= 0; --i) st.push_back({i, i + 1, 0, c[i]}); long long ans = 0; int q; scanf("%d", &q); forn(_, q){ int t; long long h; scanf("%d%lld", &t, &h); while (!st.empty()){ auto it = 
st.back(); st.pop_back(); if (it.r - it.l == 1){ it.cur = min((long long)c[it.l], it.cur + (t - it.lst) * 1ll * r[it.l]); if (it.cur <= h){ h -= it.cur; } else{ st.push_back({it.l, it.r, t, int(it.cur - h)}); h = 0; } } else{ int mx = vals.rend() - lower_bound(vals.rbegin(), vals.rend(), t - it.lst); int res = it.l + trav(root[mx], 0, n, it.l, it.r, h, t - it.lst); assert(res <= it.r); if (res != it.r){ if (it.r - res > 1) st.push_back({res + 1, it.r, it.lst, 0}); int nw = min((long long)c[res], r[res] * 1ll * (t - it.lst)); assert(nw - h > 0); st.push_back({res, res + 1, t, int(nw - h)}); h = 0; } } if (h == 0){ break; } } if (st.empty()){ st.push_back({0, n, t, 0}); } else if (st.back().l != 0){ st.push_back({0, st.back().l, t, 0}); } ans += h; } printf("%lld\n", ans); return 0; }
1654
A
Maximum Cake Tastiness
There are $n$ pieces of cake on a line. The $i$-th piece of cake has weight $a_i$ ($1 \leq i \leq n$). The tastiness of the cake is the maximum total weight of two adjacent pieces of cake (i. e., $\max(a_1+a_2,\, a_2+a_3,\, \ldots,\, a_{n-1} + a_{n})$). You want to maximize the tastiness of the cake. You are allowed to do the following operation at most once (doing more operations would ruin the cake): - Choose a contiguous subsegment $a[l, r]$ of pieces of cake ($1 \leq l \leq r \leq n$), and reverse it. The subsegment $a[l, r]$ of the array $a$ is the sequence $a_l, a_{l+1}, \dots, a_r$. If you reverse it, the array will become $a_1, a_2, \dots, a_{l-2}, a_{l-1}, \underline{a_r}, \underline{a_{r-1}}, \underline{\dots}, \underline{a_{l+1}}, \underline{a_l}, a_{r+1}, a_{r+2}, \dots, a_{n-1}, a_n$. For example, if the weights are initially $[5, 2, 1, 4, 7, 3]$, you can reverse the subsegment $a[2, 5]$, getting $[5, \underline{7}, \underline{4}, \underline{1}, \underline{2}, 3]$. The tastiness of the cake is now $5 + 7 = 12$ (while before the operation the tastiness was $4+7=11$). Find the maximum tastiness of the cake after doing the operation at most once.
Suppose you want to choose pieces of cake $i$, $j$: can you always make them adjacent in $1$ move? Yes, so the answer is the sum of the $2$ maximum weights. You can always pick the $2$ maximum weights: if they are $a_i$ and $a_j$ ($i < j$), you can flip the subsegment $[i, j-1]$ to make them adjacent. The result can't be larger, because the sum of the weights of any $2$ pieces of cake is never greater than the sum of the $2$ maximum weights. Iterating over all pairs of pieces of cake is enough to get AC, but you can solve the problem in $O(n \log n)$ by sorting the weights and printing the sum of the last $2$ values, or even in $O(n)$ if you calculate the maximum and the second maximum in linear time. Complexity: $O(t \cdot n^2)$, $O(t \cdot n \log n)$, or $O(t \cdot n)$
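The $O(n)$ variant (maximum and second maximum in one pass) might look like this in Python (my own naming):

```python
def max_tastiness(a):
    # sum of the two largest weights, found in a single pass
    first = second = float('-inf')
    for x in a:
        if x > first:
            first, second = x, first
        elif x > second:
            second = x
    return first + second
```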
[ "brute force", "greedy", "implementation", "sortings" ]
800
#include <bits/stdc++.h> using namespace std; int main() { int t; cin >> t; while (t--) { int n; cin >> n; vector<int> a(n); for (int i = 0; i < n; i++) cin >> a[i]; int ans = 0; for (int i = 0; i < n; i++) for (int j = i+1; j < n; j++) ans = max(ans, a[i]+a[j]); cout << ans << "\n"; } }
1654
B
Prefix Removals
You are given a string $s$ consisting of lowercase letters of the English alphabet. You must perform the following algorithm on $s$: - Let $x$ be the length of the longest prefix of $s$ which occurs somewhere else in $s$ as a contiguous substring (the other occurrence may also intersect the prefix). If $x = 0$, break. Otherwise, remove the first $x$ characters of $s$, and repeat. A prefix is a string consisting of several first letters of a given string, without any reorders. An empty prefix is also a valid prefix. For example, the string "abcd" has 5 prefixes: empty string, "a", "ab", "abc" and "abcd". For instance, if we perform the algorithm on $s =$ "abcabdc", - Initially, "ab" is the longest prefix that also appears somewhere else as a substring in $s$, so $s =$ "cabdc" after $1$ operation. - Then, "c" is the longest prefix that also appears somewhere else as a substring in $s$, so $s =$ "abdc" after $2$ operations. - Now $x=0$ (because there are no non-empty prefixes of "abdc" that also appear somewhere else as a substring in $s$), so the algorithm terminates. Find the final state of the string after performing the algorithm.
Are there any characters that you can never remove? You can't remove the rightmost occurrence of each letter. Can you remove the other characters? If you can remove $s_1, s_2, \dots, s_{i-1}$ and $s_i$ is not the rightmost occurrence of a letter, you can also remove $s_i$. Let $a$ be the initial string. For a string $z$, let's define $z(l, r) = z_lz_{l+1} \dots z_r$, i.e., the substring of $z$ from $l$ to $r$. The final string is $a(k, n)$ for some $k$. In the final string, $x = 0$, so the first character doesn't appear anywhere else in $s$. This means that $a_k\not=a_{k+1}, a_{k+2}, \dots, a_n$. In other words, $a_k$ is the rightmost occurrence of a letter in $s$. Can you ever remove $a_i$, if $a_i\not=a_{i+1}, a_{i+2}, \dots, a_n$? Notice that you would need to remove $a(l, r)$ ($l \leq i \leq r$): this means that there must exist $a(l', r') = a(l, r)$ for some $l' > l$. So, $a_{i+l'-l} = a_i$, and this is a contradiction. Therefore, $k$ is the smallest index such that $a_k\not=a_{k+1}, a_{k+2}, \dots, a_n$. You can find $k$ by iterating over the string from right to left and updating the frequency of each letter. Indeed $a_i\not=a_{i+1}, a_{i+2}, \dots, a_n$ if and only if the frequency of the letter $a_i$ is $0$ up to now (in the iteration from right to left we are performing). The value of $k$ is the minimum such index $i$. Complexity: $O(n)$
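The right-to-left scan described above can be sketched as follows (a Python sketch, names mine): the answer starts at the leftmost position whose letter does not occur again to its right.

```python
def final_string(s):
    seen = set()
    k = 0
    # scan right to left; position i is a candidate iff s[i] has not
    # occurred to its right, and we keep the leftmost candidate
    for i in range(len(s) - 1, -1, -1):
        if s[i] not in seen:
            k = i
        seen.add(s[i])
    return s[k:]
```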
[ "strings" ]
800
#include <bits/stdc++.h> using namespace std; int main() { int t; cin >> t; while (t--) { string s; cin >> s; map<char, int> frequency; for (char c : s) frequency[c]++; for (int i = 0; i < s.size(); i++) if (--frequency[s[i]] == 0) { cout << s.substr(i) << "\n"; break; } } }
1654
C
Alice and the Cake
Alice has a cake, and she is going to cut it. She will perform the following operation $n-1$ times: choose a piece of the cake (initially, the cake is all one piece) with weight $w\ge 2$ and cut it into two smaller pieces of weight $\lfloor\frac{w}{2}\rfloor$ and $\lceil\frac{w}{2}\rceil$ ($\lfloor x \rfloor$ and $\lceil x \rceil$ denote floor and ceiling functions, respectively). After cutting the cake in $n$ pieces, she will line up these $n$ pieces on a table in an arbitrary order. Let $a_i$ be the weight of the $i$-th piece in the line. You are given the array $a$. Determine whether there exists an initial weight and sequence of operations which results in $a$.
Can you find the initial weight which results in $a$? The initial weight is the sum of the final weights. Let's simulate the division, starting from a new cake. How to choose fast which splits to do? If the largest piece is not in $a$, you have to split it. How to find the largest piece efficiently? Keep $a$ and the new cake in two multisets or priority queues. First, let's find the initial weight. When a piece of cake is split, the sum of weights is $\lfloor\frac{w}{2}\rfloor + \lceil\frac{w}{2}\rceil$: if $w$ is even, $\lfloor\frac{w}{2}\rfloor + \lceil\frac{w}{2}\rceil = \frac{w}{2} + \frac{w}{2} = w$; if $w$ is odd, $\lfloor\frac{w}{2}\rfloor + \lceil\frac{w}{2}\rceil = \frac{w-1}{2} + \frac{w+1}{2} = w$. Therefore, the sum of weights is constant, and the initial weight is the sum of the final weights. Now let's start from a cake $b$ of weight $b_1 = \sum_{i=1}^n a_i$, split it (into pieces of weight $b_i$) and try to make it equal to $a$. At any moment, it's convenient to consider the largest $b_i$, because you can immediately determine whether it must be split or matched. More specifically, if $b_i$ is not in $a$, you have to split it; if $b_i = a_j$ for some $j$, you can only match $a_j$ with $b_i$ or with a $b_k$ such that $b_k = a_j = b_i$ (because there doesn't exist any larger $b_k$): that's equivalent to removing $a_j$ and $b_i$ from $a$ and $b$, respectively. Notice that, if at any moment the maximum element of $b$ is smaller than the maximum element of $a$, the answer is NO. If we keep $a$ and $b$ in any data structure that supports inserting an integer, querying the maximum and removing the maximum (e.g., a multiset or a priority queue), the following algorithm works.
While either $a$ or $b$ is not empty, if the maximum element of $b$ is smaller than the maximum element of $a$, print NO and break; if the maximum element of $b$ is equal to the maximum element of $a$, remove it from both $a$ and $b$; if the maximum element $m$ of $b$ is larger than the maximum element of $a$, remove it from $b$ and split the piece of cake (i.e., insert $\lfloor\frac{m}{2}\rfloor$ and $\lceil\frac{m}{2}\rceil$ into $b$). If $a$ and $b$ are both empty at the end, the answer is YES. Complexity: $O(n \log n)$
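The loop above can be sketched with two max-heaps instead of multisets (the function name `can_make` is our choice; a minimal sketch assuming the total weight fits in 64 bits):

```cpp
#include <cstdint>
#include <queue>
#include <vector>

// Sketch of the greedy matching described above, with max-heaps instead of
// multisets (the name can_make is ours).
bool can_make(const std::vector<int64_t>& a) {
    int64_t sum = 0;
    for (int64_t x : a) sum += x;
    std::priority_queue<int64_t> A(a.begin(), a.end()); // target pieces
    std::priority_queue<int64_t> B;                     // pieces of our cake
    B.push(sum);
    while (!A.empty()) {               // sums stay equal, so B is non-empty too
        int64_t x = B.top();
        B.pop();
        if (x < A.top()) return false; // largest target can never be produced
        if (x == A.top()) A.pop();     // match the two largest pieces
        else { B.push(x / 2); B.push((x + 1) / 2); } // split the largest piece
    }
    return true;
}
```

With a max-heap, duplicates are matched one at a time, which is exactly what the multiset version does.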
[ "data structures", "greedy", "implementation", "sortings" ]
1400
#include <bits/stdc++.h> using namespace std; using ll = long long; int main() { ios_base::sync_with_stdio(0); cin.tie(0); int t; cin >> t; while (t--) { ll n; cin >> n; vector<ll> a(n); for (int i = 0; i < n; i++) cin >> a[i]; ll sum = 0; for (int i = 0; i < n; i++) sum += a[i]; multiset<ll> p = {sum}; multiset<ll> q(a.begin(), a.end()); while (!p.empty()) { ll x = *--p.end(); if (x < *--q.end()) break; p.erase(--p.end()); if (q.find(x) != q.end()) q.erase(q.find(x)); else { p.insert(x/2); p.insert((x+1)/2); } } cout << (q.empty() ? "YES" : "NO") << "\n"; } }
1654
D
Potion Brewing Class
Alice's potion making professor gave the following assignment to his students: brew a potion using $n$ ingredients, such that the proportion of ingredient $i$ in the final potion is $r_i > 0$ (and $r_1 + r_2 + \cdots + r_n = 1$). He forgot the recipe, and now all he remembers is a set of $n-1$ facts of the form, "ingredients $i$ and $j$ should have a ratio of $x$ to $y$" (i.e., if $a_i$ and $a_j$ are the amounts of ingredient $i$ and $j$ in the potion respectively, then it must hold $a_i/a_j = x/y$), where $x$ and $y$ are positive integers. However, it is guaranteed that the set of facts he remembers is sufficient to uniquely determine the original values $r_i$. He decided that he will allow the students to pass the class as long as they submit a potion which satisfies all of the $n-1$ requirements (there may be many such satisfactory potions), and contains a positive integer amount of each ingredient. Find the minimum total amount of ingredients needed to make a potion which passes the class. As the result can be very large, you should print the answer modulo $998\,244\,353$.
The $n-1$ facts make a tree. Assume that the amount of ingredient $1$ is $1$. For each $i$, the amount of ingredient $i$ would be $c_i/d_i$ for some integer $c_i, d_i > 0$. How to make an integer amount of each ingredient? The optimal amount of ingredient $1$ is $\text{lcm}(d_1, d_2, \dots, d_n)$. Can you find the exponent of each prime $p \leq n$ in $\text{lcm}(d_1, d_2, \dots, d_n)$? If you visit the nodes in DFS order, each edge changes the exponent of $O(\log n)$ primes. At each moment, keep the factorization of $c_i/d_i$ of the current node, i.e., the exponents of each $p \leq n$ (they can be negative). For each $p$, also keep the minimum exponent so far. Read the hints. The rest is just implementation. Start a DFS from node $1$. Keep an array $f$ such that $f_p$ is the exponent of $p$ in the amount of ingredients of the current node. Keep also $v_i = c_i/d_i \,\, \text{mod} \,\, 998\,244\,353$. At the beginning, the amount of ingredients (of node $1$) is $1$, so $f_p = 0$ for each $p$. Whenever you move from node $i$ to node $j$, and $r_i/r_j = x/y$, for each $p^k$ such that $p^k \mid x$ and $p^{k+1} \nmid x$, decrease $f_p$ by $k$; for each $p^k$ such that $p^k \mid y$ and $p^{k+1} \nmid y$, increase $f_p$ by $k$. Notice that there exist $O(\log n)$ such values of $p^k$ for each edge, and you can find them by precalculating either the smallest prime factor (with the sieve of Eratosthenes) or the whole factorization of every integer in $[2, n]$. Let $g_p$ be the minimum value of $f_p$ during the DFS. Then, for every $p$, you have to multiply the amount of ingredients of node $1$ by $p^{-g_p}$. The answer is the sum of $v_i$, multiplied by the amount of ingredients of node $1$. Complexity: $O(n \log n)$
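The precalculation mentioned above (smallest prime factor via a sieve, after which any $x$ factors in $O(\log x)$ divisions) can be sketched as follows; the helper names are ours:

```cpp
#include <vector>

// Smallest-prime-factor sieve suggested in the editorial: after the sieve,
// any x in [2, n] factors by repeated division by spf[x].
std::vector<int> smallest_prime_factor(int n) {
    std::vector<int> spf(n + 1, 0);
    for (int i = 2; i <= n; i++)
        if (spf[i] == 0)                     // i is prime
            for (int j = i; j <= n; j += i)
                if (spf[j] == 0) spf[j] = i; // first prime hitting j
    return spf;
}

std::vector<int> factorize(int x, const std::vector<int>& spf) {
    std::vector<int> primes;                 // prime factors with multiplicity
    while (x > 1) { primes.push_back(spf[x]); x /= spf[x]; }
    return primes;
}
```

The solution code below precomputes exactly this (`d` and `factors`), storing the whole factorization of every integer up front.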
[ "dfs and similar", "math", "number theory", "trees" ]
2100
#include <bits/stdc++.h> using namespace std; using ll = long long; const int inf = 1e9+10; const ll inf_ll = 1e18+10; #define all(x) (x).begin(), (x).end() #define pb push_back #define cmax(x, y) (x = max(x, y)) #define cmin(x, y) (x = min(x, y)) #ifndef LOCAL #define debug(...) 0 #else #include "../../debug.cpp" #endif template<ll M> struct modint { static ll _pow(ll n, ll k) { ll r = 1; for (; k > 0; k >>= 1, n = (n*n)%M) if (k&1) r = (r*n)%M; return r; } ll v; modint(ll n = 0) : v(n%M) { v += (M&(0-(v<0))); } friend string to_string(const modint n) { return to_string(n.v); } friend istream& operator>>(istream& i, modint& n) { return i >> n.v; } friend ostream& operator<<(ostream& o, const modint n) { return o << n.v; } template<typename T> explicit operator T() { return T(v); } friend bool operator==(const modint n, const modint m) { return n.v == m.v; } friend bool operator!=(const modint n, const modint m) { return n.v != m.v; } friend bool operator<(const modint n, const modint m) { return n.v < m.v; } friend bool operator<=(const modint n, const modint m) { return n.v <= m.v; } friend bool operator>(const modint n, const modint m) { return n.v > m.v; } friend bool operator>=(const modint n, const modint m) { return n.v >= m.v; } modint& operator+=(const modint n) { v += n.v; v -= (M&(0-(v>=M))); return *this; } modint& operator-=(const modint n) { v -= n.v; v += (M&(0-(v<0))); return *this; } modint& operator*=(const modint n) { v = (v*n.v)%M; return *this; } modint& operator/=(const modint n) { v = (v*_pow(n.v, M-2))%M; return *this; } friend modint operator+(const modint n, const modint m) { return modint(n) += m; } friend modint operator-(const modint n, const modint m) { return modint(n) -= m; } friend modint operator*(const modint n, const modint m) { return modint(n) *= m; } friend modint operator/(const modint n, const modint m) { return modint(n) /= m; } modint& operator++() { return *this += 1; } modint& operator--() { return *this -= 1; } modint 
operator++(int) { modint t = *this; return *this += 1, t; } modint operator--(int) { modint t = *this; return *this -= 1, t; } modint operator+() { return *this; } modint operator-() { return modint(0) -= *this; } // O(logk) modular exponentiation modint pow(const ll k) const { return k < 0 ? _pow(v, M-1-(-k%(M-1))) : _pow(v, k); } modint inv() const { return _pow(v, M-2); } }; using mod = modint<998244353>; const int N = 2e5+5; // upper bound on x, y vector<vector<array<int, 3>>> adj; vector<int> d; // d[x] = smallest divisor of x vector<vector<int>> factors; // factors[x] = prime factors of x, including duplicates vector<int> f, wf; // from editorial mod ans = 0; // compute wf void dfs1(int i, int k) { for (auto& [j, x, y] : adj[i]) if (j != k) { for (int p : factors[y]) f[p]--; for (int p : factors[x]) f[p]++, cmax(wf[p], f[p]); dfs1(j, i); for (int p : factors[y]) f[p]++; for (int p : factors[x]) f[p]--; } } // compute the amount of every ingredient, where p is the amount of ingredient i void dfs2(int i, int k, mod p) { ans += p; mod tmp = p; for (auto& [j, x, y] : adj[i]) { if (j != k) { p = tmp; for (int q : factors[y]) p *= q; for (int q : factors[x]) p /= q; dfs2(j, i, p); } } } int main() { ios_base::sync_with_stdio(0); cin.tie(0); int t; cin >> t; d.assign(N, 1); factors.resize(N); for (int i = N-1; i > 1; i--) for (int j = i; j < N; j += i) d[j] = i; for (int i = 1; i < N; i++) for (int x = i; x != 1; x /= d[x]) factors[i].pb(d[x]); f.assign(N, 0); wf.assign(N, 0); while (t--) { int n; cin >> n; set<int> distinct_primes; adj.assign(n, {}); for (int i = 0; i < n-1; i++) { int p, q, x, y; cin >> p >> q >> x >> y; p--, q--; adj[p].pb({q, x, y}); adj[q].pb({p, y, x}); for (int z : factors[x]) distinct_primes.insert(z); for (int z : factors[y]) distinct_primes.insert(z); } dfs1(0, 0); mod p = 1; for (int x : distinct_primes) for (int i = 0; i < wf[x]; i++) p *= x; ans = 0; dfs2(0, 0, p); cout << ans << "\n"; for (int x : distinct_primes) f[x] = wf[x] = 0; } }
1654
E
Arithmetic Operations
You are given an array of integers $a_1, a_2, \ldots, a_n$. You can do the following operation any number of times (possibly zero): - Choose any index $i$ and set $a_i$ to any integer (positive, negative or $0$). What is the minimum number of operations needed to turn $a$ into an arithmetic progression? The array $a$ is an arithmetic progression if $a_{i+1}-a_i=a_i-a_{i-1}$ for any $2 \leq i \leq n-1$.
Consider each element of the array as being a point in the plane $(i, a_i)$. Then all of the elements that don't get affected by an operation lie on a single line in the plane. Find a line that maximizes the number of points on it. Let $m$ be the upper bound on $a_i$. The intended complexity is $O(n \sqrt m)$. Let $m$ be the upper bound on $a_i$. Let $d$ be the difference between adjacent elements in the final array. Create a piecewise algorithm for the cases where $|d| < \sqrt m$ and $|d| \geq \sqrt m$. As explained in the hints, instead of computing the fewest number of operations, we will compute the largest number of elements that don't have an operation on them, and we will create a piecewise algorithm with final complexity $O(n\sqrt m)$, where $m$ is the upper bound on $a_i$. Let $d$ be the common difference between elements in our final sequence. First of all, I will assume that $d \geq 0$, since solving the problem for negative $d$ is as simple as reversing the array and running the solution again. If $d$ is fixed beforehand, we can solve the problem in $O(n)$ by putting element $i$ into bucket $a_i-d\cdot i$ and returning $n$ minus the size of the biggest bucket. For $d < \sqrt m$, we can use the above algorithm to handle all of these $d$ in $O(n \sqrt m)$ total time. We can keep a hashmap from bucket index $\to$ number of elements in the bucket, or we can just keep an array, since the bucket indices have a range of at most $O(n \sqrt m)$, which is not enough to exceed the memory limit. For $d \geq \sqrt m$, we observe that if we have two indices $i, j$ such that $j > i+\sqrt m$, then at least one of them definitely has to have an operation performed on it, because the difference between them would have to be $a_j-a_i \geq \sqrt m \cdot d > m$, which is not possible. In other words, if we consider the set of elements which are not edited, that set will have gaps of size at most $\sqrt m$ between consecutive elements.
So, we can build a graph between indices with an edge $i \to j$ with label $x$ if $i < j \leq i+\sqrt m$ and $\frac{a_j-a_i}{j-i} = x$. This graph has at most $n\sqrt m$ edges. Then we just have to find the longest path in the graph where all edges have the same label. You can do this with dynamic programming: let $dp_{i,d}$ be the length of the longest path ending at index $i$, where all edges have label $d$. For each $i$, we only need to check edges from $j$ where $i - \sqrt m \leq j < i$. This means the time complexity is $O(n\sqrt m)$. To store the values $dp_{i,d}$ sparsely, we can use either a hash map or a rotating buffer (where we only store $dp_{i,d}$ for $i$ in a sliding window of width $\sqrt m$). Complexity: $O(n \sqrt m)$
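The bucketing step for a single fixed difference $d$ can be sketched as follows (the name `max_kept_for_d` is ours; the editorial's solution reuses one shared array for the small-$d$ case instead of a hash map):

```cpp
#include <algorithm>
#include <unordered_map>
#include <vector>

// For a fixed common difference d, the elements that can stay unchanged are
// exactly those sharing the same value of a[i] - d*i; count the biggest bucket.
int max_kept_for_d(const std::vector<long long>& a, long long d) {
    std::unordered_map<long long, int> bucket;
    int best = 0;
    for (int i = 0; i < (int)a.size(); i++)
        best = std::max(best, ++bucket[a[i] - d * i]);
    return best;
}
```

The answer for a fixed $d$ is then `n - max_kept_for_d(a, d)`.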
[ "brute force", "data structures", "graphs", "math" ]
2300
#include <bits/stdc++.h> using namespace std; #include <ext/pb_ds/assoc_container.hpp> using namespace __gnu_pbds; struct splitmix64 { size_t operator()(size_t x) const { static const size_t fixed = chrono::steady_clock::now().time_since_epoch().count(); x += 0x9e3779b97f4a7c15 + fixed; x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9; x = (x ^ (x >> 27)) * 0x94d049bb133111eb; return x ^ (x >> 31); } }; const int N = 1e5+5, S = 70; int n, a[N]; // for small d case: b[i] = number of elements in bucket i int b[N*S]; // for large d case: dp[i][j] = maximum length of a path ending // at index i, such that all edges in the path have label j gp_hash_table<int, int, splitmix64> dp[N]; // solve under the assumption that d >= 0 int solve() { int ans = 0; // d < S for (int d = 0; d < S; d++) { for (int i = 0; i < n; i++) ans = max(ans, ++b[a[i]+(n-i)*d]); for (int i = 0; i < n; i++) b[a[i]+(n-i)*d] = 0; } // S <= d < N for (int i = 0; i < n; i++) { for (int j = max(0, i-N/S); j < i; j++) { int d = (a[i]-a[j])/(i-j); int r = (a[i]-a[j])%(i-j); if (r == 0 && d >= S) { dp[i][d] = max(dp[i][d], dp[j][d]+1); ans = max(ans, dp[i][d]+1); } } } for (int i = 0; i < n; i++) dp[i].clear(); return ans; } int main() { ios_base::sync_with_stdio(0); cin.tie(0); cin >> n; for (int i = 0; i < n; i++) cin >> a[i]; int ans = solve(); reverse(a, a+n); ans = max(ans, solve()); cout << n-ans << "\n"; }
1654
F
Minimal String Xoration
You are given an integer $n$ and a string $s$ consisting of $2^n$ lowercase letters of the English alphabet. The characters of the string $s$ are $s_0s_1s_2\cdots s_{2^n-1}$. A string $t$ of length $2^n$ (whose characters are denoted by $t_0t_1t_2\cdots t_{2^n-1}$) is a xoration of $s$ if there exists an integer $j$ ($0\le j \leq 2^n-1$) such that, for each $0 \leq i \leq 2^n-1$, $t_i = s_{i \oplus j}$ (where $\oplus$ denotes the operation bitwise XOR). Find the lexicographically minimal xoration of $s$. A string $a$ is lexicographically smaller than a string $b$ if and only if one of the following holds: - $a$ is a prefix of $b$, but $a \ne b$; - in the first position where $a$ and $b$ differ, the string $a$ has a letter that appears earlier in the alphabet than the corresponding letter in $b$.
Let $f(s, x)$ be the string $t$ such that $t_i = s_{i\oplus x}$. The intended solution not only finds the $x$ such that $f(s, x)$ is lexicographically minimized, but produces an array of all $0 \leq x < 2^n$ sorted according to the comparator $f(s, i) < f(s, j)$. The solution is similar to the standard method to construct a suffix array. Let $a_k$ be an array of the integers $0$ through $2^n-1$, sorted according to the lexicographic ordering of the first $2^k$ characters of $f(s, x)$. The answer to the problem is $f(s, a_n[0])$. Given the array $a_{k-1}$, can you compute $a_k$? Let $f(s, x)$ be the string $t$ such that $t_i = s_{i\oplus x}$. The solution is inspired by the standard method to construct a suffix array. Let $a_k$ be an array containing the integers $0, 1, 2,\dots, 2^n-1$, sorted according to the lexicographic ordering of the prefixes of length $2^k$ of the strings $f(s, 0), f(s, 1), \dots, f(s, 2^n-1)$ (i.e., the prefix of length $2^k$ of $f(s, a_k[i])$ is lexicographically smaller than or equal to the prefix of length $2^k$ of $f(s, a_k[i+1])$). We can construct $a_k$ using $a_{k-1}$, and $a_0$ is easy to construct as a base case. In order to construct $a_k$ from $a_{k-1}$, we will first construct an auxiliary array of integers $v$ using $a_{k-1}$, where $v_i < v_j$ iff $f(s, i)_{0..2^{k-1}} < f(s, j)_{0..2^{k-1}}$. Then, we will sort the array $a_k$ according to the comparison of tuples $(v_i, v_{i\oplus {2^{k-1}}}) < (v_j, v_{j\oplus {2^{k-1}}})$. Once we have $a_n$, we just print $f(s, a_n[0])$. In total, the solution takes $O(2^n n^2)$ time, which can be optimized to $O(2^n n)$ using the fact that tuples of integers in the range $[0, m]$ can be sorted using radix sort in $O(m)$ time. This optimization was not required to get AC.
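A direct transcription of the definition of $f(s, x)$, useful for testing against the doubling solution (the helper name `xoration` is ours):

```cpp
#include <string>

// Direct transcription of the definition above: character i of the xoration
// f(s, x) is s[i ^ x]. Assumes |s| is a power of two and 0 <= x < |s|.
std::string xoration(const std::string& s, int x) {
    std::string t(s.size(), ' ');
    for (int i = 0; i < (int)s.size(); i++) t[i] = s[i ^ x];
    return t;
}
```

A brute force over all $x$ with this helper is $O(4^n)$, which is enough to cross-check the $O(2^n n^2)$ solution on small inputs.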
[ "bitmasks", "data structures", "divide and conquer", "greedy", "hashing", "sortings", "strings" ]
2800
#include <bits/stdc++.h> using namespace std; const int N = 18; string s; int a[1<<N], v[1<<N], tmp[1<<N]; int main() { ios_base::sync_with_stdio(0); cin.tie(0); int n; cin >> n >> s; assert(s.size() == (1<<n)); iota(a, a+(1<<n), 0); sort(a, a+(1<<n), [&](int i, int j){ return s[i] < s[j]; }); for (int i = 1; i < 1<<n; i++) v[a[i]] = v[a[i-1]] + (s[a[i]] != s[a[i-1]] ? 1 : 0); for (int k = 1; k < 1<<n; k <<= 1) { auto cmp = [&](int i, int j){ return v[i] == v[j] ? v[i^k] < v[j^k] : v[i] < v[j]; }; sort(a, a+(1<<n), cmp); for (int i = 1; i < 1<<n; i++) tmp[a[i]] = tmp[a[i-1]] + (cmp(a[i-1], a[i]) ? 1 : 0); copy(tmp, tmp+(1<<n), v); } for (int i = 0; i < 1<<n; i++) cout << s[i^a[0]]; cout << "\n"; }
1654
G
Snowy Mountain
There are $n$ locations on a snowy mountain range (numbered from $1$ to $n$), connected by $n-1$ trails in the shape of a tree. Each trail has length $1$. Some of the locations are base lodges. The height $h_i$ of each location is equal to the distance to the nearest base lodge (a base lodge has height $0$). There is a skier at each location, each skier has initial kinetic energy $0$. Each skier wants to ski along as many trails as possible. Suppose that the skier is skiing along a trail from location $i$ to $j$. Skiers are not allowed to ski uphill (i.e., if $h_i < h_j$). It costs one unit of kinetic energy to ski along flat ground (i.e., if $h_i = h_j$), and a skier gains one unit of kinetic energy by skiing downhill (i.e., if $h_i > h_j$). For each location, compute the length of the longest sequence of trails that the skier starting at that location can ski along without their kinetic energy ever becoming negative. Skiers are allowed to visit the same location or trail multiple times.
Let's say that a vertex is "flippable" if it has at least one neighbor of the same height. The optimal strategy for a skier is to ski to a flippable vertex with the lowest height, flip back and forth between that vertex and its neighbor until all energy is used, and then ski to a base lodge. Assuming the skier starts at vertex $v$ and chooses vertex $u$ as the flippable vertex to flip back and forth at, this skier will "waste" a total of $h_u$ units of kinetic energy, for a total path length of $2h_v-h_u$. Note that if no such $u$ exists, then the maximum possible amount, $h_v$, is wasted as the skier goes straight to a base lodge. Let $w_v$ be this "wasted" amount. Instead of computing the maximum path length for the skier starting at vertex $v$, compute the minimum wasted amount $w_v$. The answer for that vertex is $2h_v-w_v$. Try to solve this seemingly unrelated problem: Let $S$ be the set of flippable vertices. What is the largest possible value of $\sum\limits_{v\in S}h_v$? The largest possible value of $\sum\limits_{v\in S}h_v$ is $O(n)$. Therefore, there are at most $O(\sqrt n)$ distinct values of $w_v$ across all vertices. Solve for each value of $w_v$ separately. Read the hints. The rest is just implementation. For each set of flippable vertices of the same height, we can calculate the set of starting vertices which are able to reach at least one vertex in that flippable set. To do this, split the graph up into layers of equal height. Let $c_v$ be the minimum required energy to reach a vertex in the flippable set. $c_v$ can be computed via shortest paths, where edges in the same layer have weight $1$ and edges from layer $i$ to $i+1$ have weight $-1$. We can use BFS to relax the costs of vertices in a single layer, and then easily transition to the next layer. We do this for $O(\sqrt n)$ different starting heights, so the total complexity is $O(n\sqrt n)$.
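The heights $h_i$ used throughout are computed by a multi-source BFS from all base lodges, as in the first loop of the solution code; a standalone sketch (the signature and names are ours):

```cpp
#include <queue>
#include <vector>

// Multi-source BFS computing h[v] = distance to the nearest base lodge:
// every lodge starts in the queue at distance 0.
std::vector<int> heights(const std::vector<std::vector<int>>& adj,
                         const std::vector<int>& is_lodge) {
    int n = (int)adj.size();
    std::vector<int> h(n, -1);
    std::queue<int> q;
    for (int v = 0; v < n; v++)
        if (is_lodge[v]) { h[v] = 0; q.push(v); }
    while (!q.empty()) {
        int v = q.front(); q.pop();
        for (int u : adj[v])
            if (h[u] == -1) { h[u] = h[v] + 1; q.push(u); }
    }
    return h;
}
```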
[ "data structures", "dfs and similar", "graphs", "greedy", "shortest paths", "trees" ]
2900
#include<bits/stdc++.h> using namespace std; typedef long long ll; #define fi first #define se second #define int long long const ll mod=998244353; const int N=2e5+1; int n; int d[N]; queue<int>q; vector<int>adj[N]; int g[N],h[N],z; bool use[N]; vector<int>ans[N]; vector<int>f[N]; bool vis[N]; void dfs(int id,int p){ vis[id]=true; for(auto c:adj[id]){ if(c==p || d[c]!=d[id]) continue; dfs(c,id); for(int i=1; i<=z ;i++){ ans[id][i]=min(ans[id][i],ans[c][i]+1); } ans[id][h[d[id]]]=1; } } void dfs2(int id,int p){ for(auto c:adj[id]){ if(c==p || d[c]!=d[id]) continue; for(int i=1; i<=z ;i++){ ans[c][i]=min(ans[c][i],ans[id][i]+1); } ans[c][h[d[c]]]=1; dfs2(c,id); } } signed main(){ ios::sync_with_stdio(false);cin.tie(0); cin >> n; for(int i=1; i<=n ;i++){ int x;cin >> x; if(x==1) q.push(i); else d[i]=1e9; } for(int i=1; i<n ;i++){ int u,v;cin >> u >> v; adj[u].push_back(v); adj[v].push_back(u); } while(!q.empty()){ int x=q.front();q.pop(); for(auto c:adj[x]){ if(d[c]>d[x]+1){ d[c]=d[x]+1; q.push(c); } } } for(int i=1; i<=n ;i++){ f[d[i]].push_back(i); for(auto c:adj[i]){ if(d[c]==d[i] && c>i){ use[d[i]]=true; } } } for(int i=0; i<=n ;i++){ if(use[i]){ g[++z]=i; h[i]=z; } } for(int i=1; i<=n ;i++){ ans[i].resize(z+1); } for(int j=0; j<=n ;j++){ for(auto x:f[j]){ for(int i=1; i<=z ;i++) ans[x][i]=1e9; for(auto c:adj[x]){ if(d[c]==d[x]-1){ for(int i=1; i<=z ;i++) ans[x][i]=min(ans[x][i],max((ll)0,ans[c][i]-1)); } } } for(auto x:f[j]){ if(!vis[x]){ dfs(x,0); dfs2(x,0); } } } for(int i=1; i<=n ;i++){ int res=d[i]; for(int j=1; j<=z ;j++){ if(ans[i][j]==0) res=max(res,d[i]*2-g[j]); } cout << res << ' '; } cout << '\n'; }
1654
H
Three Minimums
Given a list of distinct values, we denote with first minimum, second minimum, and third minimum the three smallest values (in increasing order). A permutation $p_1, p_2, \dots, p_n$ is good if the following statement holds for all pairs $(l,r)$ with $1\le l < l+2 \le r\le n$. - If $\{p_l, p_r\}$ are (not necessarily in this order) the first and second minimum of $p_l, p_{l+1}, \dots, p_r$ then the third minimum of $p_l, p_{l+1},\dots, p_r$ is either $p_{l+1}$ or $p_{r-1}$. You are given an integer $n$ and a string $s$ of length $m$ consisting of characters "<" and ">". Count the number of good permutations $p_1, p_2,\dots, p_n$ such that, for all $1\le i\le m$, - $p_i < p_{i+1}$ if $s_i =$ "<"; - $p_i > p_{i+1}$ if $s_i =$ ">". As the result can be very large, you should print it modulo $998\,244\,353$.
If $p_1,\dots, p_n$ is good and $p_i=1$, what can you say about $p_1,p_2,\dots, p_i$ and $p_i,p_{i+1},\dots, p_n$? Find a quadratic solution ignoring the constraints given by the string $s$. Find an $O(n^2 + nm)$ solution taking care of the constraints given by the string $s$. Optimize the $O(n^2+nm)$ solution using generating functions. The solution to the ODE $y'(t) = a(t)y(t) + b(t)$ with $y(0)=1$ is given by $\exp(\smallint a)\Big(1+\int b\exp(-\smallint a)\Big)$ First of all, let us state the following lemma (which is sufficient to solve the problem in $O(n^2)$ if one ignores the constraints given by the string $s$). We omit the proof as it is rather easy compared to the difficulty of the problem as a whole. Lemma: The following statements hold for a permutation $p_1,p_2,\dots, p_n$. $p$ is good if and only if $p[1:i]$ and $p[i:n]$ are good, where $p_i = 1$. If $p_1=1$, then $p$ is good if and only if $p[1:i]$ and $p[i:n]$ are good, where $p_i = 2$. If $p_1=1$ and $p_n=2$, then $p$ is good if and only if it is bitonic, i.e., $p_1<p_2<p_3<\cdots<p_i>p_{i+1}>\cdots>p_{n-1}>p_n$, where $p_i=n$. Given $1\le l < r\le n$, we say that a permutation $q_1,q_2,\dots, q_{r-l+1}$ of $\{1,2,\dots, r-l+1\}$ is $[l,r]$-consistent if for any $l\le i \le \min(r-1, m)$: $q_{i-l+1} < q_{i-l+2}$ if $s_i = \texttt{<}$; $q_{i-l+1} > q_{i-l+2}$ if $s_i = \texttt{>}$. Informally speaking, a permutation is $[l,r]$-consistent if it satisfies the constraints given by $s$ when it is written in the positions $[l, r]$. For $1\le l<r\le n$, let $a_{\ast\ast}(l, r)$, $a_{1\ast}(l, r)$, $a_{\ast 1}(l, r)$, $a_{12}(l, r)$, $a_{21}(l, r)$ be the number of good permutations which are $[l,r]$-consistent and, respectively, No additional conditions; Start with $1$; End with $1$; Start with $1$ and end with $2$; Start with $2$ and end with $1$. For $1\le i < n$ and $c\in\{\texttt{<}, \texttt{>}\}$, let $q(i, c) := [i>m\text{ or } s_i= c]$.
Informally speaking, $q(i, \texttt{<})$ is $1$ if and only if $p_i<p_{i+1}$ is allowed, and $q(i, \texttt{>})$ is $1$ if and only if $p_i > p_{i+1}$ is allowed. Thanks to the Lemma, one has the following relations: $a_{\ast\ast}(l, r) = \sum_{i=l}^r a_{\ast1}(l, i) a_{1\ast}(i, r) \binom{r-l}{i-l}$. $a_{1\ast}(l, l) = 1$. For $l < r$, $a_{1\ast}(l, r) = \sum_{i=l+1}^r a_{12}(l, i)a_{1\ast}(i, r)\binom{r-l-1}{i-l-1}$. Analogous formula for $a_{\ast 1}$. $a_{12}(l, l) = 0$ and $a_{12}(l, l+1)= q(l, \texttt{<})$ and $a_{12}(l, l+2)= q(l, \texttt{<})\cdot q(l+1, \texttt{>})$. For $l<l+1<r$, $a_{12}(l, r) = q(l, \texttt{<})a_{21}(l+1, r) + q(r-1, \texttt{>})a_{12}(l, r-1)$. Analogous formula for $a_{21}$. The problem asks to compute $a_{\ast\ast}(1, n)$. Thanks to the formulas stated above, it is straightforward to implement an $O(n^2)$ solution. Now we will tackle the hard task of optimizing it to $O(nm + n\log(n))$. In order to compute $a_{\ast\ast}(1, n)$, we will compute $a_{\ast1}(1, k)$ and $a_{1\ast}(k, n)$ for all $1\le k\le n$. We have the recurrence relation (for $k\ge 2$) $\tag{1} a_{\ast1}(1, k) = \sum_{i=1}^k a_{\ast1}(1, i) a_{21}(i, k) \binom{k-2}{i-1}$ Setting $x_{k-1} := a_{\ast1}(1, k) / (k-1)!$, (1) is equivalent to (for $k\ge 1$, and also for $k=0$!) $\tag{2} k\cdot x_k = \sum_{i=0}^{k-1} x_i \frac{a_{21}(i+1, k+1)}{(k-1-i)!}.$ This looks very similar to an identity between generating functions (a derivative on the left, a product on the right), except for the fact that $a_{21}$ depends on two parameters. To overcome this issue let us proceed as follows. Notice that, if we set $b$ to any of the functions $a_{\ast\ast}$, $a_{\ast1}$, $a_{1\ast}$, $a_{12}$, $a_{21}$, it holds that $b(l, r) = b(l+1, r+1)$ whenever $l > m$. Hence, let us define $b_{\ast\ast}(k) = a_{\ast\ast}(n+1, n+k)$ and analogously $b_{1\ast}(k)$, $b_{\ast 1}(k)$, $b_{12}(k)$, $b_{21}(k)$.
With these new definitions, (2) becomes (for $k\ge 0$) $\tag{3} k\cdot x_k = \sum_{i=0}^{k-1} x_i \frac{b_{21}((k-1-i) + 2)}{(k-1-i)!} + \sum_{i=0}^{\min(k-1, m-1)} x_i \frac{a_{21}(i+1, k+1) - b_{21}(k+1-i)}{(k-1-i)!} .$ Let $u_i:= \frac{b_{21}(i+2)}{i!}$ and $v_{k-1}:= \sum_{i=0}^{\min(k-1, m-1)} x_i \frac{a_{21}(i+1, k+1) - b_{21}(k+1-i)}{(k-1-i)!}$ (the second sum in (3)). So, (3) simplifies to $\tag{4} k\cdot x_k = v_{k-1} + \sum_{i=0}^{k-1} x_i u_{k-1-i} .$ We precompute in $O(nm)$ the values of $a_{12}(l, r)$ and $a_{21}(l, r)$ for $1\le l\le m$, $l < r\le n$. We can also precompute in $O(n)$ the values of $b_{12}(k), b_{21}(k)$ for $1\le k\le n$. In $O(m^2)$ we compute also $x_i$ for all $0\le i\le m-1$. Thus, in $O(nm)$ we can compute, for all $0\le k < n$, the values $u_k$ and $v_k$. It is now time to start working with generating functions. Let $X(t):= \sum_{k\ge 0} x_k t^k$, $U(t):=\sum_{k\ge0} u_kt^k$, $V(t):=\sum_{k\ge0} v_kt^k$. We know $U(t)$ and $V(t)$ (at least the first $n$ coefficients) and we want to compute $X(t)$. Since $x_0=1$, we know $X(0)=1$. Moreover (4) is equivalent to the ordinary differential equation $X' = V + U\cdot X$. This ODE is standard and its (unique) solution is given by $X = \exp(\smallint U)\Big(1 + \int V\exp(-\smallint U)\Big).$ Since the product of generating functions and the exponential of a generating function can be computed in $O(n\log(n))$, we are able to obtain the values of $x_k$ for all $0\le k <n$ and thus the values of $a_{\ast 1}(1,k)$. Now, let us see how to compute $a_{1\ast}(k, n)$. Since $a_{1\ast}(k,n) = b_{1\ast}(n-k+1)$ for all $m<k\le n$, let us first compute $b_{1\ast}(k)$ for all $1\le k\le n$. By repeating verbatim the reasoning above, we get that the generating function $Y(t):=\sum_{k\ge 0} y_k t^k$, where $y_{k-1}:=b_{1\ast}(k) / (k-1)!$, is given by ($V=0$ in this case) $Y=\exp(\int U)$. So, it remains only to compute $a_{1\ast}(k, n)$ for $1\le k\le m$. This can be done naively in $O(nm)$.
The overall complexity is $O(nm + n\log(n))$.
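As a quick sanity check (not in the original editorial), the closed form for the ODE can be verified by differentiating it:

```latex
% Claim: X = \exp\!\big(\smallint U\big)\Big(1 + \int V \exp(-\smallint U)\Big),
% with X(0) = 1, solves X' = V + U X.
X' = U\,e^{\smallint U}\Big(1 + \int V e^{-\smallint U}\Big)
   + e^{\smallint U}\, V e^{-\smallint U}
   = U X + V,
\qquad
X(0) = e^{0}\,(1 + 0) = 1 .
```

The first term comes from differentiating the outer exponential, the second from differentiating the inner integral; together they reproduce the right-hand side of the ODE.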
[ "combinatorics", "constructive algorithms", "divide and conquer", "dp", "fft", "math" ]
3500
#define _USE_MATH_DEFINES #include <bits/stdc++.h> using namespace std; typedef unsigned long long ULL; typedef long long LL; #define SZ(x) ((int)((x).size())) template <typename T1, typename T2> string print_iterable(T1 begin_iter, T2 end_iter, int counter) { bool done_something = false; stringstream res; res << "["; for (; begin_iter != end_iter and counter; ++begin_iter) { done_something = true; counter--; res << *begin_iter << ", "; } string str = res.str(); if (done_something) { str.pop_back(); str.pop_back(); } str += "]"; return str; } vector<int> SortIndex(int size, std::function<bool(int, int)> compare) { vector<int> ord(size); for (int i = 0; i < size; i++) ord[i] = i; sort(ord.begin(), ord.end(), compare); return ord; } template <typename T> bool MinPlace(T& a, const T& b) { if (a > b) { a = b; return true; } return false; } template <typename T> bool MaxPlace(T& a, const T& b) { if (a < b) { a = b; return true; } return false; } template <typename S, typename T> ostream& operator <<(ostream& out, const pair<S, T>& p) { out << "{" << p.first << ", " << p.second << "}"; return out; } template <typename T> ostream& operator <<(ostream& out, const vector<T>& v) { out << "["; for (int i = 0; i < (int)v.size(); i++) { out << v[i]; if (i != (int)v.size()-1) out << ", "; } out << "]"; return out; } template<class TH> void _dbg(const char* name, TH val){ clog << name << ": " << val << endl; } template<class TH, class... TA> void _dbg(const char* names, TH curr_val, TA... vals) { while(*names != ',') clog << *names++; clog << ": " << curr_val << ", "; _dbg(names+1, vals...); } #if DEBUG && !ONLINE_JUDGE ifstream input_from_file("input.txt"); #define cin input_from_file #define dbg(...) _dbg(#__VA_ARGS__, __VA_ARGS__) #define dbg_arr(x, len) clog << #x << ": " << print_iterable(x, x+len, -1) << endl; #else #define dbg(...) #define dbg_arr(x, len) #endif // Computes the inverse of n modulo m. // Precondition: n >= 0, m > 0 and gcd(n, m) == 1. 
// The returned value satisfies 0 <= x < m (Inverse(0, 1) = 0). // ACHTUNG: It must hold max(m, n) < 2**31 to avoid integer overflow. LL Inverse(LL n, LL m) { n %= m; if (n <= 1) return n; // Handles properly (n = 0, m = 1). return m - ((m * Inverse(m, n) - 1) / n); } // Fast exponentiation modulo mod. Returns x**e modulo mod. // Assumptions: 0 <= x < mod // mod < 2**31. // 0 <= e < 2**63 LL pow(LL x, LL e, LL mod) { LL res = 1; for (; e >= 1; e >>= 1) { if (e & 1) res = res * x % mod; x = x * x % mod; } return res; } // Struct that computes x % mod faster than usual, if mod is always the same. // It gives a x1.8 speed up over the % operator (with mod ~ 1e9 and x large). // It is an implementation of the Barrett reduction, see // https://en.wikipedia.org/wiki/Barrett_reduction . // If fast_mod is an instance of the class, then fast_mod(x) will return // x % mod. There are no restrictions on the values of mod and x, provided // that they fit in an unsigned long long (and mod != 0). // // ACHTUNG: The integer type __uint128_t must be available. struct FastMod { ULL mod; ULL inv; FastMod(ULL mod) : mod(mod), inv(-1ULL / mod) {} ULL operator()(ULL x) { ULL q = (ULL)((__uint128_t(inv) * x) >> 64); ULL r = x - q * mod; return r - mod * (r >= mod); } }; // Class for integers modulo mod. // It supports all expected operations: +, -, *, /, pow, ==, < and >. // It is as fast as it can be. // The modulo mod shall be set through set_mod(). // // Assumptions: mod < (1<<30). // ACHTUNG: The integer type __uint128_t must be available. // // Remark: To handle larger moduli (up to 1<<62), one has to: // 1. replace int in the definitions of mod, n. // 2. The parameter of fast_mod must be __uint128_t, so it must be // changed in the definition of fast_mod and in the definition of // the operators * and *=. // 3. fast_mod must be a naive modulo operation, no barrett reduction. // 4. In Inverse, __int128_t shall be used. 
struct ModularInteger { static int mod; static __uint128_t inv_mod; // Necessary for fast_mod. int n; // 0 <= n < mod at all times static void set_mod(int _mod) { mod = _mod; inv_mod = -1ULL / mod; } ModularInteger(): n(0) {} ModularInteger(LL _n): n(_n % mod) { n += (n<0)*mod; } LL get() const { return n; } static int fast_mod(ULL x) { // Barrett reduction. ULL q = (inv_mod * x) >> 64; int m = x - q * mod; m -= mod * (m >= mod); return m; } ModularInteger operator-() const { return n?mod-n:0; } }; int ModularInteger::mod; __uint128_t ModularInteger::inv_mod; ModularInteger operator +(const ModularInteger& A, const ModularInteger& B) { ModularInteger C; C.n = A.n + B.n; C.n -= (C.n >= ModularInteger::mod)*ModularInteger::mod; return C; } void operator +=(ModularInteger& A, const ModularInteger& B) { A.n += B.n; A.n -= (A.n >= ModularInteger::mod)*ModularInteger::mod; } ModularInteger operator -(const ModularInteger& A, const ModularInteger& B) { ModularInteger C; C.n = A.n - B.n; C.n += (C.n < 0)*ModularInteger::mod; return C; } void operator -=(ModularInteger& A, const ModularInteger& B) { A.n -= B.n; A.n += (A.n < 0)*ModularInteger::mod; } ModularInteger operator *(const ModularInteger& A, const ModularInteger& B) { ModularInteger C; C.n = ModularInteger::fast_mod(((ULL)A.n) * B.n); return C; } void operator *=(ModularInteger& A, const ModularInteger& B) { A.n = ModularInteger::fast_mod(((ULL)A.n) * B.n); } // Assumption: gcd(B, mod) = 1. ModularInteger operator /(const ModularInteger& A, const ModularInteger& B) { return A * Inverse(B.n, ModularInteger::mod); } // Assumption: gcd(B, mod) = 1. 
void operator/=(ModularInteger& A, const ModularInteger& B) { A *= Inverse(B.n, ModularInteger::mod); } ModularInteger power(ModularInteger A, ULL e) { ModularInteger res = 1; for (; e >= 1; e >>= 1) { if (e & 1) res *= A; A *= A; } return res; } bool operator ==(const ModularInteger& A, const ModularInteger& B) { return A.n == B.n; } bool operator !=(const ModularInteger& A, const ModularInteger& B) { return A.n != B.n; } bool operator <(const ModularInteger& A, const ModularInteger& B) { return A.n < B.n; } bool operator >(const ModularInteger& A, const ModularInteger& B) { return A.n > B.n; } bool operator <=(const ModularInteger& A, const ModularInteger& B) { return A.n <= B.n; } bool operator >=(const ModularInteger& A, const ModularInteger& B) { return A.n >= B.n; } ostream& operator <<(ostream& out, const ModularInteger& A) { out << A.n; return out; } istream& operator >>(istream& in, ModularInteger& A) { LL a; in >> a; A = ModularInteger(a); return in; } typedef ModularInteger mint; // Returns x such that Ord_mod(x) = order. // Assumptions: // 1. mod is an odd prime < 2**31 // 2. order | mod-1 // // Complexity: sqrt(lpf(order)) + polylog(order). // When this function is used as a building block for the number theoretic // transform, order is a power of two and therefore the complexity is // polylog(order). LL FindElementWithGivenOrder(LL mod, LL order) { assert((mod-1)%order == 0); vector<LL> primes; // primes dividing order LL n = order; for (LL p = 2; p*p <= n; p++) { if (n % p == 0) { primes.push_back(p); while (n % p == 0) n /= p; } } if (n != 1) primes.push_back(n); for (LL x = 2; x < mod; x++) { LL y = pow(x, (mod-1)/order, mod); if (pow(y, order, mod) != 1) continue; bool works = true; for (LL p : primes) { if (pow(y, order/p, mod) == 1) { works = false; break; } } if (works) return y; } assert(0); return -1; } // Precomputes the vectors rev and roots as required by the function FFT. 
// It is called internally by FFT when necessary (i.e., when the length of // the vector is strictly larger than all previous calls to FFT). // Both the arrays rev and roots have length N. Their values satisfy: // - rev[i] is the bit-reverse of i. // - roots contains the roots of unity. For any k that is a power of two, // and any i < k, roots[k + i] contains the i-th power of the (2k)-root of // unity. Using such an array speeds up (and simplifies) the code. // // Complexity: O(N) // // Assumptions: // 1. The type T is either complex<double> or ModularInteger. // 2. N power of 2 // 3. If T = ModularInteger, then ModularInteger::mod is prime and // N | mod-1 . template <typename T> void _FFT_Precompute(int N, vector<int>& rev, vector<T>& roots) { static_assert(std::is_same<T, ModularInteger>::value or std::is_same<T, complex<double>>::value); assert(N > 0 and !(N&(N-1))); rev.resize(N); roots.resize(N); rev[0] = 0; for (int i = 1; i < N; i++) rev[i] = (rev[i>>1]>>1) ^ ((i&1)?(N>>1):0); // The generation of primitive roots is handled differently for complex // numbers and finite fields. if constexpr (std::is_same<T, ModularInteger>::value) { ModularInteger primitive_root = FindElementWithGivenOrder(ModularInteger::mod, N); for (int k = 1; k < N; k<<=1) { // z is a (2k)-primitive root. ModularInteger z = power(primitive_root, N/(2*k)); ModularInteger r = 1; for (int i = 0; i < k; i++) { roots[k+i] = r; r *= z; } } } else { for (int k = 1; k < N; k<<=1) { for (int i = 0; i < k; i++) { double theta = M_PI*i/k; roots[k+i] = cos(theta) + 1i * sin(theta); } } } } // Given a sequence of numbers, computes its discrete Fourier transform in-place. // If inverse=true then the inverse of the Fourier transform is computed. // // More precisely, the resulting array a' satisfies // a'_i = \sum a_j w^{ij} , // where w is an N-root of unity (N = a.size()). // // Complexity: O(Nlog(N)) where N = a.size(). // // Assumptions: // 1. 
The type T is either complex<double> or ModularInteger. // 2. The length of a is a power of 2. // 3. If T = ModularInteger, then ModularInteger::mod is prime and the // length of a divides mod-1 . template <typename T> void FFT(vector<T>& a, bool inverse=false) { static_assert(std::is_same<T, ModularInteger>::value or std::is_same<T, complex<double>>::value); static vector<int> rev; static vector<T> roots; int N = a.size(); assert(N > 0 and !(N&(N-1))); if ((int)rev.size() < N) _FFT_Precompute(N, rev, roots); int offset = 0; while ((1<<offset) < (int)rev.size()/N) offset++; for (int i = 0; i < N; i++) if (i < (rev[i]>>offset)) { swap(a[i], a[rev[i]>>offset]); } for (int k = 1; k < N; k<<=1) { for (int i = 0; i < N; i+=2*k) { for (int j = 0; j < k; j++) { T x = a[i+j], y = a[i+k+j] * roots[k+j]; a[i+j] = x + y; a[i+k+j] = x - y; } } } if (inverse) { reverse(a.begin()+1, a.end()); T invN = static_cast<T>(1) / static_cast<T>(N); for (auto& val : a) val *= invN; } } // Computes the convolution between a and b in O(n log n) where // n = a.size() + b.size(). // It returns a vector c such that // c[k] = \sum_{i+j = k} a[i] b[j]. // // Assumption: a and b are nonempty. template<typename T> vector<T> Convolution(vector<T> a, vector<T> b) { const static int small_len = 64; int alen = a.size(), blen = b.size(); if (alen == 0 or blen == 0) return {}; assert(1 <= alen and 1 <= blen); // Naive convolution O(alen * blen). // Used if one of the two vectors is very short. if (alen < small_len or blen < small_len) { vector<T> c(alen + blen-1, 0); for (int i = 0; i < alen; i++) for (int j = 0; j < blen; j++) c[i + j] += a[i] * b[j]; return c; } int sz = 1; while (sz < alen + blen - 1) sz *= 2; a.resize(sz, 0); b.resize(sz, 0); FFT(a); FFT(b); for (int i = 0; i < sz; i++) a[i] *= b[i]; FFT(a, true); a.resize(alen + blen - 1); return a; } // F is a field. Must support operator +, +=, -, -=, *, / as defined by the field. // F(0) must return additive identity. 
F(1) must return multiplicative identity. template<typename F> struct Polynomial { vector<F> coef; Polynomial() {} Polynomial(const vector<F> &_coef) : coef(_coef) {} Polynomial(vector<F> &&_coef) : coef(_coef) {} size_t size() const { return coef.size(); } void trim_leading_zeros() { while (!coef.empty() && coef.back() == F(0)) coef.pop_back(); } void resize(size_t new_size) { coef.resize(new_size, F(0)); } void operator+=(const Polynomial &other) { for (size_t i = 0; i < other.size(); i++) if (i < coef.size()) coef[i] += other.coef[i]; else coef.push_back(other.coef[i]); } Polynomial operator+(const Polynomial &other) const { Polynomial ret = *this; ret += other; return ret; } void operator-=(const Polynomial &other) { for (size_t i = 0; i < other.size(); i++) if (i < coef.size()) coef[i] -= other.coef[i]; else coef.push_back(-other.coef[i]); } Polynomial operator-(const Polynomial &other) const { Polynomial<F> ret = *this; ret -= other; return ret; } Polynomial operator*(const Polynomial &other) const { return Polynomial(Convolution<F>(this->coef, other.coef)); } void operator*=(const Polynomial &other) { *this = *this * other; } // f(x)g(x) = 1 (mod x^N) Polynomial recip_mod(size_t deg = 0) const { size_t n = deg; if (n == 0) n = size(); const int N_RECIP_MOD_NAIVE = 64; if (n <= N_RECIP_MOD_NAIVE) { Polynomial ret; ret.coef.reserve(n); ret.coef.push_back(F(1) / coef[0]); for (size_t i = 1; i < n; i++) { F s(0); for (size_t j = 1; j <= i && j < size(); j++) s -= coef[j] * ret.coef[i - j]; ret.coef.push_back(s * ret.coef[0]); } return ret; } size_t n_low = (n + 1) / 2; Polynomial low_recip; if (size() <= n_low) low_recip = recip_mod(n_low); else { Polynomial low(vector<F>(coef.begin(), coef.begin() + n_low)); low_recip = low.recip_mod(); } Polynomial ret = low_recip * low_recip; if (size() <= n) ret *= *this; else { Polynomial trunc(vector<F>(coef.begin(), coef.begin() + n)); ret *= trunc; } ret.resize(n); for (size_t i = n_low; i < n; i++) ret.coef[i] = 
-ret.coef[i]; return ret; } Polynomial rev() const { Polynomial ret = *this; reverse(ret.coef.begin(), ret.coef.end()); return ret; } Polynomial derivative() const { if (size() <= 1) return Polynomial(); Polynomial deriv; deriv.coef.reserve(size() - 1); F multiplier(0); for (size_t i = 1; i < size(); i++) { multiplier += F(1); deriv.coef.push_back(coef[i] * multiplier); } return deriv; } Polynomial integral() const { if (size() <= 1) return Polynomial({F(0)}); Polynomial integ; integ.coef.reserve(size() + 1); integ.coef.emplace_back(0); F divisor(0); for (size_t i = 0; i < size(); i++) { divisor += F(1); integ.coef.push_back(coef[i] / divisor); } return integ; } }; Polynomial<mint> exp(Polynomial<mint> f) { Polynomial<mint> g({1}); while (g.size() < f.size()) { size_t next_size = min(g.size() * 2, f.size()); vector<mint> f_coef_trunc(f.coef.begin(), f.coef.begin() + next_size); f_coef_trunc[0] += 1; Polynomial<mint> f_trunc(move(f_coef_trunc)); Polynomial<mint> log_g = g; log_g.resize(next_size); log_g = log_g.recip_mod() * g.derivative(); log_g.resize(next_size - 1); log_g = log_g.integral(); g *= f_trunc - log_g; g.resize(next_size); } return g; } /////////////////////////////////////////////////////////////////////////// //////////////////// DO NOT TOUCH BEFORE THIS LINE //////////////////////// /////////////////////////////////////////////////////////////////////////// int main() { ios::sync_with_stdio(0), cin.tie(0); mint::set_mod(998244353); int n, m; cin >> n >> m; string s; cin >> s; s = '_' + s; vector<mint> fact(n+1); fact[0] = 1; for (int i = 1; i <= n; i++) fact[i] = fact[i-1] * i; vector<mint> ifact(n+1); ifact[n] = mint(1) / fact[n]; for (int i = n-1; i >= 0; i--) ifact[i] = ifact[i+1] * (i+1); // Definitions of q, a21, b21, a12. // Complexity O(nm). 
auto q = [&](int p, char c) { return p > m or s[p] == c; }; vector<vector<mint>> _a21(m+2, vector<mint>(n+m+2)); auto b21 = [&](int k) { return _a21[m+1][m+k]; }; auto a21 = [&](int l, int r) { return (l > m) ? b21(r-l+1) : _a21[l][r]; }; for (int l = m+1; l >= 0; l--) { _a21[l][l+1] = q(l, '>'); _a21[l][l+2] = q(l, '<') and q(l+1, '>'); for (int r = l+3; r <= n+m+1; r++) _a21[l][r] = q(l, '<') * a21(l+1, r) + q(r-1, '>') * a21(l, r-1); } auto a12 = [&](int l, int r) { return (l + 1 == r) ? q(l, '<') : a21(l, r); }; // Definitions of u, x, v. // Complexity O(nm). vector<mint> u(n); for (int i = 0; i < n; i++) u[i] = b21(i+2) * ifact[i]; vector<mint> x(m); x[0] = 1; for (int k = 1; k < m; k++) { for (int i = 0; i < k; i++) x[k] += x[i] * u[k-1-i]; for (int i = 0; i <= min(k-1, m-1); i++) { mint diff = a21(i+1, k+1) - b21(k+1-i); x[k] += diff * x[i] * ifact[k-1-i]; } x[k] /= k; } vector<mint> v(n); for (int k = 0; k < n; k++) { for (int i = 0; i <= min(k, m-1); i++) { mint diff = a21(i+1, k+2) - b21(k+2-i); v[k] += x[i] * diff * ifact[k-i]; } } // BEGIN Generating functions // Complexity O(nlog(n)). Polynomial<mint> U(u); Polynomial<mint> V(v); Polynomial<mint> Y = exp(U.integral()); Polynomial<mint> Y_inv = Y.recip_mod(); Polynomial<mint> X = Y + Y * (V*Y_inv).integral(); // END Generating functions // Computing // pref[k] = a_{*1}(1, k), // suff[k] = a_{1*}(k, n). // Complexity O(nm). vector<mint> pref(n+1); vector<mint> suff(n+1); for (int k = 1; k <= n; k++) { pref[k] = X.coef[k-1] * fact[k-1]; suff[n+1-k] = Y.coef[k-1] * fact[k-1]; } auto binom = [&](int a, int b) { return fact[a] * ifact[b] * ifact[a-b]; }; for (int k = m; k >= 1; k--) { suff[k] = 0; for (int i = k+1; i <= n; i++) suff[k] += a12(k, i) * suff[i] * binom(n-k-1, i-k-1); } // Computing the result. // Complexity O(n). mint res = 0; for (int i = 1; i <= n; i++) res += pref[i] * suff[i] * binom(n-1, i-1); cout << res << "\n"; }
1656
A
Good Pairs
You are given an array $a_1, a_2, \ldots, a_n$ of positive integers. A good pair is a pair of indices $(i, j)$ with $1 \leq i, j \leq n$ such that, for all $1 \leq k \leq n$, the following equality holds: $$ |a_i - a_k| + |a_k - a_j| = |a_i - a_j|, $$ where $|x|$ denotes the absolute value of $x$. Find a good pair. Note that $i$ can be equal to $j$.
By the triangle inequality, for all real numbers $x, y, z$, we have $|x-y| + |y-z| \geq |x-z|$, with equality if and only if $\min(x, z) \leq y \leq \max(x, z)$. Now, take indices $i$ and $j$ such that $a_i$ and $a_j$ are the maximum and minimum values of the array, respectively. Then, for each index $k$, we have $a_i \geq a_k \geq a_j$, meaning that $|a_i - a_k| + |a_k - a_j| = a_i - a_k + a_k - a_j = a_i - a_j = |a_i - a_j|$, as desired.
[ "math", "sortings" ]
800
#include<bits/stdc++.h> using namespace std; int main() { int t; cin >> t; while(t--) { int n; cin >> n; int minv = 1e9+1; int maxv = -1; int mini; int maxi; for(int i=0; i < n; ++i) { int a; cin >> a; if(a > maxv) { maxi = i+1; maxv = a; } if(a < minv) { mini = i+1; minv = a; } } cout << mini << " " << maxi << endl; } }
1656
B
Subtract Operation
You are given a list of $n$ integers. You can perform the following operation: you choose an element $x$ from the list, erase $x$ from the list, and subtract the value of $x$ from all the remaining elements. Thus, in one operation, the length of the list is decreased by exactly $1$. Given an integer $k$ ($k>0$), find if there is some sequence of $n-1$ operations such that, after applying the operations, the only remaining element of the list is equal to $k$.
Note that, after deleting element $a_j$, all remaining numbers are of the form $a_i - a_j$, since the previous subtractions cancel out. Therefore, the final element equals the difference between the last remaining element and the element erased just before it. So we just need to check whether $k$ is the difference of two elements of the list, which can be done by sorting and using the two-pointer technique in $O(n \log n)$ time.
[ "data structures", "greedy", "math", "two pointers" ]
1100
#include<iostream> #include<vector> #include<algorithm> using namespace std; int main() { int t; cin >> t; while(t--) { int n, a; cin >> n >> a; vector<int> v(n); for(int& x : v) cin >> x; bool ans = false; if(n == 1) ans = (v[0] == a); else { sort(v.begin(), v.end()); int i = 0; int j = 1; while(j < n and i < n) { if(v[i] + abs(a) == v[j]) { ans = true; break; } else if(v[i] + abs(a) < v[j]) ++i; else ++j; } } cout << (ans? "YES" : "NO") << '\n'; } }
1656
C
Make Equal With Mod
You are given an array of $n$ non-negative integers $a_1, a_2, \ldots, a_n$. You can make the following operation: choose an integer $x \geq 2$ and replace each number of the array by the remainder when dividing that number by $x$, that is, for all $1 \leq i \leq n$ set $a_i$ to $a_i \bmod x$. Determine if it is possible to make all the elements of the array equal by applying the operation zero or more times.
Note that, if $1$ is not present in the array, we can always make all elements equal to $0$ by repeatedly applying the operation with $x = \max(a_i)$: each application sets the elements equal to the maximum to $0$ while leaving the others intact. So the answer is YES. If $1$ is present but there are no two consecutive numbers in the array, we can similarly apply the operation repeatedly with $x = \max(a_i) - 1$ until all elements become $1$: each application sets the elements equal to the maximum to $1$ while leaving the others intact. So the answer is again YES. If $1$ is present and there are two consecutive numbers $m, m + 1$ in the array, the answer is NO. Indeed, if the array ever contains both a $0$ and a $1$, they can never be made equal (any $x \geq 2$ fixes both $0$ and $1$), so we may never apply an operation with an $x$ that divides one of the $a_i$'s (that would create a $0$ while the $1$ remains). Any other operation keeps $m$ and $m + 1$ consecutive (and thus different), so it is impossible to make the whole array equal.
[ "constructive algorithms", "math", "number theory", "sortings" ]
1200
#include<bits/stdc++.h> using namespace std; typedef vector<int> vi; int main() { int t; cin >> t; while(t--) { int n; cin >> n; vi a(n); for(int i=0; i < n; ++i) cin >> a[i]; sort(a.begin(), a.end()); bool one = false; bool consec = false; for(int i=0; i < n; ++i) { if(a[i] == 1) one = true; if(i < n-1 && a[i]+1 == a[i+1]) consec = true; } cout << ((one && consec) ? "NO" : "YES") << endl; } }
1656
D
K-good
We say that a positive integer $n$ is $k$-good for some positive integer $k$ if $n$ can be expressed as a sum of $k$ positive integers which give $k$ distinct remainders when divided by $k$. Given a positive integer $n$, find some $k \geq 2$ such that $n$ is $k$-good, or report that no such $k$ exists.
$n$ is $k$-good if and only if $n \geq 1 + 2 + \ldots + k = \frac{k(k+1)}{2}$ and $n \equiv 1 + 2 + \ldots + k \equiv \frac{k(k+1)}{2} \pmod{k}$. It is clear that both conditions are necessary, and it turns out they're sufficient too since $\frac{k(k+1)}{2} + m \cdot k$ is attainable for any integer $m \geq 0$ by repeatedly adding $k$ to one of the terms in the sum $1 + 2 + \ldots + k$. Note that, if $k$ is even, the second condition is $n \equiv \frac{k}{2} \pmod{k}$, which is true if and only if $2n$ is a multiple of $k$ but $n$ is not a multiple of $k$. So all $k$ which divide $2n$ but do not divide $n$ satisfy the second condition, and we want the smallest of them in order to have the best chance of satisfying the first condition. The smallest such $k$ is $k_1 = 2^{\nu_2(n)+1}$, i.e. the smallest power of $2$ that does not divide $n$. We can compute $k_1$ in $O(\log n)$ (or in $O(1)$ with some architecture-specific functions) and check if it satisfies the first condition. If it doesn't, consider $k_2 = \frac{2n}{k_1}$. Note that $k_2$ is odd, and therefore the second condition is satisfied since $k_2$ is a divisor of $n$. Since $k_1$ did not satisfy the condition, we have $k_1(k_1+1) > 2n \implies k_2 < k_1+1 \implies k_2 \leq k_1-1$ (since $k_2$ is odd), so: $\frac{k_2(k_2+1)}{2} \leq \frac{k_2 \cdot k_1}{2} = n$. So $k_2$ satisfies the first condition. Note that $k_2$ is only a valid answer if $k_2 \neq 1$. If $k_2 = 1$, then $n$ is a power of $2$, and in this case there is no answer: every odd candidate $k$ must be an odd divisor of $n$, and the only odd divisor of a power of $2$ is $1$, while the smallest even candidate was $k_1 = 2n$, which does not work. So we have to answer $-1$.
[ "constructive algorithms", "math", "number theory" ]
1900
#include<bits/stdc++.h> using namespace std; #define endl '\n' typedef long long ll; int main() { ios::sync_with_stdio(false); int T; cin >> T; while(T--) { ll n; cin >> n; ll x = n; while(x % 2 == 0) x /= 2; if(x == 1) { cout << -1 << endl; } else if(x <= 2e9 && (x*(x+1))/2 <= n) { cout << x << endl; } else { cout << 2*(n/x) << endl; } } }
1656
E
Equal Tree Sums
You are given an undirected unrooted tree, i.e. a connected undirected graph without cycles. You must assign a \textbf{nonzero} integer weight to each vertex so that the following is satisfied: if any vertex of the tree is removed, then each of the remaining connected components has the same sum of weights in its vertices.
Bicolor the tree, and put $+\text{deg}(v)$ in vertices of one color and $-\text{deg}(v)$ in vertices of the other color. Consider removing one vertex $u$. In each remaining component, every edge not incident with $u$ contributes $+1$ for one of its endpoints and $-1$ for the other, so its total contribution is $0$. The one edge in each component incident with $u$ contributes $+1$ or $-1$ depending on the color of $u$. So the sums of the components will all be $+1$, or all be $-1$, depending on the color of $u$.
[ "constructive algorithms", "dfs and similar", "math", "trees" ]
2200
#include<bits/stdc++.h> using namespace std; typedef vector<int> vi; typedef vector<vi> vvi; vvi al; vi ans; void dfs(int u, int p, int c) { ans[u] = ((int)al[u].size())*c; for(int v : al[u]) { if(v != p) { dfs(v, u, -c); } } } int main() { int T; cin >> T; while(T--) { int n; cin >> n; al = vvi(n); for(int i=0; i < n-1; ++i) { int u, v; cin >> u >> v; u--; v--; al[u].push_back(v); al[v].push_back(u); } ans = vi(n); dfs(0, -1, 1); for(int i=0; i < n; ++i) { cout << ans[i]; if(i < n-1) cout << " "; } cout << endl; } }
1656
F
Parametric MST
You are given $n$ integers $a_1, a_2, \ldots, a_n$. For any real number $t$, consider the complete weighted graph on $n$ vertices $K_n(t)$ with weight of the edge between vertices $i$ and $j$ equal to $w_{ij}(t) = a_i \cdot a_j + t \cdot (a_i + a_j)$. Let $f(t)$ be the cost of the minimum spanning tree of $K_n(t)$. Determine whether $f(t)$ is bounded above and, if so, output the maximum value it attains.
Assume $a_1 \leq \ldots \leq a_n$. We will try to connect each node $u$ to the neighbour $v$ that minimizes the cost function $c(u, v) = a_u a_v + t(a_u + a_v)$. If by doing this we obtain a spanning tree, it will clearly be an MST. Let $b_i(t) = a_i + t$. We can rewrite $c(u, v)$ as $c(u, v) = b_u(t)b_v(t) - t^2$. So, if we fix $t$ and $u$, this value is minimized by $v = n$ when $b_u(t) \leq 0$ and by $v = 1$ when $b_u(t) \geq 0$. We have three cases: If there are both positive and negative values of $b_i(t)$, connect all $i$ with $b_i(t) < 0$ to $v = n$, and connect the rest to $v = 1$. We are adding $n - 1$ edges (since we are counting the edge $\{1, n\}$ twice), and the resulting graph is connected since every node is connected to either $1$ or $n$. If all $b_i(t)$ are positive, connect all $u \neq 1$ to $v = 1$; and if all are negative, connect all $u \neq n$ to $v = n$. Now it is immediate that the MST only changes when some $b_i(t)$ changes its sign, that is, when $t = -a_k$ for some $k$, and that the total cost function is piecewise affine. Furthermore, updating the total cost at each $t = -a_k$ can be done in $O(1)$ time if we process nodes from $1$ to $n$ and maintain some cumulative sums. We are left with checking whether the MST total cost function goes to $+\infty$ when $t \to \pm\infty$, which can be done by computing the slope of the total cost function at the limiting values $t = -a_1$ and $t = -a_n$ (obtained by adding the slopes of the cost functions of the edges in the construction described above).
[ "binary search", "constructive algorithms", "graphs", "greedy", "math", "sortings" ]
2600
#include<bits/stdc++.h> using namespace std; typedef long long ll; typedef vector<ll> vi; int main() { int T; cin >> T; while(T--) { ll n; cin >> n; vi a(n); ll tsum = 0; for(int i=0; i < n; ++i) { cin >> a[i]; tsum += a[i]; } sort(a.begin(), a.end()); if(a[n-1]*(n-2) + tsum < 0 || a[0]*(n-2) + tsum > 0) { cout << "INF" << endl; continue; } ll cslope = a[n-1]*(n-2) + tsum; ll cvalue = -(n-1)*a[n-1]*a[n-1]; ll ans = cvalue; for(ll i=1; i < n; ++i) { ll d = a[n-i]-a[n-i-1]; cvalue += cslope*d; ans = max(cvalue, ans); cslope += a[0]-a[n-1]; } cout << ans << endl; } }
1656
G
Cycle Palindrome
We say that a sequence of $n$ integers $a_1, a_2, \ldots, a_n$ is a palindrome if for all $1 \leq i \leq n$, $a_i = a_{n-i+1}$. You are given a sequence of $n$ integers $a_1, a_2, \ldots, a_n$ and you have to find, if it exists, a cycle permutation $\sigma$ so that the sequence $a_{\sigma(1)}, a_{\sigma(2)}, \ldots, a_{\sigma(n)}$ is a palindrome. A permutation of $1, 2, \ldots, n$ is a bijective function from $\{1, 2, \ldots, n\}$ to $\{1, 2, \ldots, n\}$. We say that a permutation $\sigma$ is a cycle permutation if $1, \sigma(1), \sigma^2(1), \ldots, \sigma^{n-1}(1)$ are pairwise different numbers. Here $\sigma^m(1)$ denotes $\underbrace{\sigma(\sigma(\ldots \sigma}_{m \text{ times}}(1) \ldots))$.
We clearly see that the following two conditions are necessary: Each number must be repeated an even number of times, except possibly one of the numbers. (This is necessary to find any permutation that results in a palindrome, not just a cycle). If $n$ is odd, it cannot be that the number $a_{(n+1)/2}$ appears only once. (Otherwise, the permutation would need to have a fixed point). We will see that these two conditions are sufficient. First, let's focus on the case where $n$ is even. Find any permutation $\sigma$ so that applying it results in a palindrome. This permutation will have a cycle decomposition. We are going to merge all those cycles into one big cycle. To do so, we will first merge the cycles so that indices $i$ and $n-i+1$ are in the same cycle for all $i$. Note that, since $a_{\sigma(i)} = a_{\sigma(n-i+1)}$, we can define a new permutation $\sigma'$ with $\sigma'(n-i+1) = \sigma(i)$ and $\sigma'(i) = \sigma(n-i+1)$; the permutation $\sigma'$ will still generate a palindrome, and $i$ and $n-i+1$ will be in the same cycle. We iterate over all the $i$s and merge all such cycles. Note that we need to keep track efficiently of which indices are in the same cycle, so we should use a union-find data structure. Now we have $m$ disjoint cycles which are symmetric, that is, $i$ and $n-i+1$ are in the same cycle. If $m = 1$ we are done. Otherwise, let $i_1, \ldots, i_m$ be indices belonging to each of the cycles and let $\sigma$ be our current permutation. Note that if we define $\sigma'(i_j) = \sigma(n-i_{j+1}+1)$ and $\sigma'(n-i_j+1) = \sigma(i_{j+1})$ for $j = 1, \ldots, m-1$ and $\sigma'(i_m) = \sigma(i_1)$, $\sigma'(n-i_m+1) = \sigma(n-i_1+1)$, we have another permutation $\sigma'$ which results in a palindrome and consists of only one cycle, so we are done.
There are different ways of handling the case when $n$ is odd; the one that requires the least casework for this solution is noticing that in the odd case we can still merge symmetric cycles with no issues, and that the only thing that could make the last step fail would be choosing some index $i_j$ with $n-i_j+1 = i_j$. So we have to be careful not to choose $i_j = \frac{n+1}{2}$, and in particular this means that the cycle containing the middle element cannot be a fixed point. If the second condition is satisfied, this can easily be achieved in the initial permutation we choose. Alternative solution: Notice that the above solution has $O(n \alpha(n))$ complexity because of union-find. Actually, $O(n)$ complexity can be achieved; we briefly describe an alternative solution with that complexity. We focus on the case when $n$ is even. The idea is to start with a permutation whose cycles are already symmetric. To do so, we construct a graph whose vertices are the numbers that appear in the sequence, and for each $1 \leq i \leq \frac{n}{2}$ we add an edge between $a_i$ and $a_{n-i+1}$. Now note that for each connected component of the graph we can obtain a symmetric cycle resulting in a palindrome from an Eulerian tour of the component.
[ "constructive algorithms", "graphs", "math" ]
3200
#include<bits/stdc++.h> using namespace std; typedef vector<int> vi; typedef vector<vi> vvi; vi obtain_p_permutation(const vi& a) { int n = a.size(); vi pp(n, -1); vi p(n); int cp = 0; vi cnt(n); for(int i=0; i < n; ++i) { cnt[a[i]]++; } for(int i=0; i < n; ++i) { if(pp[a[i]] == -1) { if(cnt[a[i]] == 1) { p[i] = n/2; } else { p[i] = cp; pp[a[i]] = n-cp-1; cp++; cnt[a[i]]--; } } else { p[i] = pp[a[i]]; pp[a[i]] = -1; cnt[a[i]]--; } } return p; } vvi find_cycles(vi p, vi& in_cyc) { int n = p.size(); vi vis = vi(n); vvi cycles; for(int i=0; i < n; ++i) { if(!vis[i]) { vi cyc; int v = i; while(!vis[v]) { cyc.push_back(v); in_cyc[v] = cycles.size(); vis[v] = true; v = p[v]; } cycles.push_back(cyc); } } return cycles; } int find_set(vi& dsu, int x) { if(dsu[x] == x) return x; return dsu[x] = find_set(dsu, dsu[x]); } void solve() { int n; cin >> n; vi a(n); vi cnt(n); for(int i=0; i < n; ++i) { cin >> a[i]; a[i]--; cnt[a[i]]++; } int odd_i = -1; for(int i=0; i < n; ++i) { if(cnt[i]%2 == 1) { if(odd_i == -1) { odd_i = i; } else { odd_i = -2; } } } if(odd_i == -2) { cout << "NO" << endl; } else if(odd_i != -1 && (cnt[odd_i] == 1 && odd_i == a[n/2])) { cout << "NO" << endl; } else { vi p = obtain_p_permutation(a); if(odd_i != -1 && p[n/2] == n/2) { for(int i=0; i < n; ++i) { if(i != n/2 && a[i] == odd_i) { p[n/2] = p[i]; p[i] = n/2; break; } } } vi rp(n); for(int i=0; i < n; ++i) { rp[p[i]] = i; } vvi cycles; vi in_cyc(n); cycles = find_cycles(p, in_cyc); vi dsu(n); for(int i=0; i < n; ++i) dsu[i] = i; for(int i=0; i < n; ++i) { if(find_set(dsu, in_cyc[i]) != find_set(dsu, in_cyc[n-i-1])) { dsu[find_set(dsu, in_cyc[i])] = find_set(dsu, in_cyc[n-i-1]); int j1 = rp[i]; int j2 = rp[n-i-1]; p[j2] = i; rp[i] = j2; p[j1] = n-i-1; rp[n-i-1] = j1; } } cycles = find_cycles(p, in_cyc); int nc = cycles.size(); vi prev_p = vi(p); vi prev_rp = vi(rp); for(int i=0; i < nc-1; ++i) { int i0 = cycles[i][0]; int ip1 = cycles[(i+1)][0]; p[prev_rp[n-i0-1]] = ip1; rp[ip1] = prev_rp[n-i0-1]; } 
p[prev_rp[n-cycles[nc-1][0]-1]] = n-cycles[0][0]-1; rp[n-cycles[0][0]-1] = prev_rp[n-cycles[nc-1][0]-1]; for(int i=0; i < nc-1; ++i) { int i0 = cycles[i][0]; int ip1 = cycles[(i+1)][0]; p[prev_rp[i0]] = n-ip1-1; rp[n-ip1-1] = prev_rp[i0]; } p[prev_rp[cycles[nc-1][0]]] = cycles[0][0]; rp[cycles[0][0]] = prev_rp[cycles[nc-1][0]]; cout << "YES" << endl; for(int i=0; i < n; ++i) { cout << rp[i]+1 << " "; } cout << endl; } } int main() { int T; cin >> T; while(T--) { solve(); } }
1656
H
Equal LCM Subsets
You are given two sets of positive integers $A$ and $B$. You have to find two non-empty subsets $S_A \subseteq A$, $S_B \subseteq B$ so that the least common multiple (LCM) of the elements of $S_A$ is equal to the least common multiple (LCM) of the elements of $S_B$.
First, we see how to check whether the two entire sets have the same LCM (without taking subsets). To do this, for each element $a \in A$ let us compute $f(a, B) = \gcd(a/\gcd(a, b_1), \ldots, a/\gcd(a, b_n))$. If $f(a, B) > 1$, we can simply delete $a$ from $A$ and solve recursively using the remaining sets (similarly if $f(b, A) > 1$). We need to update the values $f(a, B)$ and $f(b, A)$ efficiently. We can do it using many segment trees (one for each $a \in A$, and one for each $b \in B$). The segment tree of $a \in A$, $ST_a$, will have the elements of $B$ as its leaves, and a node $u$ of $ST_a$ covering a range $[i, j]$ of elements of $B$ will store $\gcd(a/\gcd(a, b_i), \ldots, a/\gcd(a, b_j))$. This means that after removing an element of $B$, we will need to recompute all of the $n$ segment trees in $\mathcal O(\log n + U)$ time each (it is $\mathcal O(\log n + U)$ and not $\mathcal O(\log n \cdot U)$ by a similar argument to the one used to bound the complexity of computing the gcd of an array of numbers). Since we have to repeat this for $O(n)$ steps, the total time complexity will be $\mathcal O(n^2 \cdot (\log n + U))$ and the memory complexity is $\mathcal O(n^2)$.
[ "data structures", "math", "number theory" ]
3,200
#include<bits/stdc++.h> using namespace std; typedef __int128 ll; typedef vector<ll> vi; typedef vector<vi> vvi; ll read_ll() { string s; cin >> s; ll x = 0; for(int i=0; i < (int)s.length(); ++i) { x *= 10; x += ll(s[i]-'0'); } return x; } void print_ll(ll x) { vector<int> p; while(x > 0) { p.push_back((int)(x%10)); x /= 10; } reverse(p.begin(), p.end()); for(int y : p) cout << y; } inline ll ctz(ll x) { long long a = x&((ll(1)<<ll(63))-ll(1)); long long b = x>>ll(63); if(a == 0) return ll(63)+__builtin_ctzll(b); return __builtin_ctzll(a); } inline ll abll(ll x) { return x >= 0 ? x : -x; } ll gcd(ll a, ll b) { if(b == 0) return a; if(a == 0) return b; int az = ctz(a); int bz = ctz(b); int shift = min(az, bz); b >>= bz; while(a != 0) { a >>= az; ll diff = b-a; if(diff) az = ctz(diff); b = min(a, b); a = abll(diff); } return b << shift; } void init_st(vi& st, const vi& v) { int n = v.size(); for(int i=0; i < n; ++i) { st[n+i] = v[i]; } for(int i=n-1; i >= 1; --i) { st[i] = gcd(st[2*i], st[2*i+1]); } } void update_st(vi& st, int i, ll x) { int n = (int)st.size() / 2; int pos = n+i; st[pos] = x; while(pos > 1) { pos /= 2; st[pos] = gcd(st[2*pos], st[2*pos+1]); } } void solve(const vi& a, const vi& b, vector<bool>& ai, vector<bool>& bi) { int n = a.size(); int m = b.size(); vvi st_a = vvi(n, vi(2*m, 0)); vvi st_b = vvi(m, vi(2*n, 0)); for(int i=0; i < n; ++i) { vi af(m); for(int j=0; j < m; ++j) { af[j] = a[i]/gcd(a[i], b[j]); } init_st(st_a[i], af); } for(int i=0; i < m; ++i) { vi bf(n); for(int j=0; j < n; ++j) { bf[j] = b[i]/gcd(b[i], a[j]); } init_st(st_b[i], bf); } queue<int> dq; for(int i=0; i < n; ++i) { if(st_a[i][1] > 1) { dq.push(i); ai[i] = false; } } for(int i=0; i < m; ++i) { if(st_b[i][1] > 1) { dq.push(n+i); bi[i] = false; } } while(!dq.empty()) { int idx = dq.front(); dq.pop(); if(idx < n) { int i = idx; for(int j=0; j < m; ++j) { if(bi[j]) { update_st(st_b[j], i, b[j]); if(st_b[j][1] > 1) { dq.push(n+j); bi[j] = false; } } } } else { int i = idx-n; 
for(int j=0; j < n; ++j) { if(ai[j]) { update_st(st_a[j], i, a[j]); if(st_a[j][1] > 1) { dq.push(j); ai[j] = false; } } } } } } int main() { int T; cin >> T; while(T--) { int n, m; cin >> n >> m; vi a(n); vi b(m); for(int i=0; i < n; ++i) a[i] = read_ll(); for(int i=0; i < m; ++i) b[i] = read_ll(); vector<bool> ai(n, true); vector<bool> bi(m, true); solve(a, b, ai, bi); int as = 0; int bs = 0; for(int i=0; i < n; ++i) { if(ai[i]) as++; } for(int i=0; i < m; ++i) { if(bi[i]) bs++; } if(as == 0 || bs == 0) cout << "NO" << endl; else { cout << "YES" << endl; cout << as << " " << bs << endl; for(int i=0; i < n; ++i) { if(ai[i]) { print_ll(a[i]); cout << " "; } } cout << endl; for(int i=0; i < m; ++i) { if(bi[i]) { print_ll(b[i]); cout << " "; } } cout << endl; } } }
1656
I
Neighbour Ordering
Given an undirected graph $G$, we say that a neighbour ordering is an ordered list of all the neighbours of a vertex for each of the vertices of $G$. Consider a given neighbour ordering of $G$ and three vertices $u$, $v$ and $w$, such that $v$ is a neighbor of $u$ and $w$. We write $u <_{v} w$ if $u$ comes after $w$ in $v$'s neighbor list. A neighbour ordering is said to be good if, for each simple cycle $v_1, v_2, \ldots, v_c$ of the graph, one of the following is satisfied: - $v_1 <_{v_2} v_3, v_2 <_{v_3} v_4, \ldots, v_{c-2} <_{v_{c-1}} v_c, v_{c-1} <_{v_c} v_1, v_c <_{v_1} v_2$. - $v_1 >_{v_2} v_3, v_2 >_{v_3} v_4, \ldots, v_{c-2} >_{v_{c-1}} v_c, v_{c-1} >_{v_c} v_1, v_c >_{v_1} v_2$. Given a graph $G$, determine whether there exists a good neighbour ordering for it and construct one if it does.
First, note that, for each vertex $v$, the relative order between neighbours that belong to different biconnected components (i. e., neighbours $u$ and $w$ so that the edges $vu$ and $vw$ belong to different biconnected components) is irrelevant, since there can not be any simple cycle using edges from different biconnected components. Therefore, a good ordering for the entire graph will be possible if and only if a good ordering for each of the biconnected components is possible, and if so the ordering for the entire graph can be obtained from the orderings for each of the components by arbitrarily merging the lists of each component for each vertex (preserving the relative order within the lists of each component). Therefore, from now on we will assume that the graph is biconnected. Also, we assume that the graph has at least 3 vertices to avoid trivialities. Lemma 1: If the graph $G$ has a cycle $\mathcal{C}$, and there are two vertices $u, v \in \mathcal{C}$ not adjacent in the cycle such that there is a path from $u$ to $v$ that passes through a vertex $w \not\in \mathcal{C}$, then the graph $G$ can not have a good ordering. Proof: Assume the graph has a good ordering, and label the vertices of the cycle $v_1, v_2, \ldots, v_n$ so that $v_i \leq_{v_{i+1}} v_{i+2}$ and $u = v_1$ and $v = v_j$ with $j \neq 1, 2, n$. Let the path between $v_j$ and $v_1$ be $v_j, w_1, \ldots, w_p$, where $w_p = v_1$ and $p > 1$, and we can assume that all $w_i$ with $i < p$ lie outside the cycle. Considering the cycle $v_1, v_2, \ldots, v_j, w_1, \ldots$, we can see that since $v_1 \leq_{v_2} v_3$, it must also hold that $v_j \leq_{w_1} w_2$. Now, considering the cycle $w_p, \ldots, w_1, v_j, v_{j+1}, \ldots, v_n$, we see that since $v_j \leq_{v_{j+1}} v_{j+2}$ (with possibly $v_{j+2} = v_1$), we must have $w_2 \leq_{w_1} v_j$. Since $w_2 \neq v_j$, we can't have the inequality both ways and we reach a contradiction. 
$\square$ Theorem: If the (biconnected) graph $G$ has a good ordering, then it has a hamiltonian cycle. Proof: Assume it doesn't have a hamiltonian cycle. Since it is biconnected with at least 3 vertices, it must have some cycle. Let $v_1, \ldots, v_c$ be a longest cycle of the graph. Because the cycle does not visit all the vertices of the graph and the graph is connected, there must be some vertex $u$ not in the cycle which is a neighbour of a vertex $v_i$ in the cycle. Because the graph is biconnected, there must be some path $w_1, \ldots, w_d$ not passing through $v_i$ with $w_1 = u$ and $w_d = v_j$, a vertex in the cycle. We can choose the path so that vertices $w_1, \ldots, w_{d-1}$ are not in the cycle, since we can just finish the path at the first vertex that is in the cycle. Now we consider two cases: Case 1: $v_i$ and $v_j$ are adjacent in the cycle (i. e. $j = i+1$ or $j = i-1$, where we admit taking indices modulo $c$). Now we can consider the cycle $v_1, \ldots, v_i, w_1, \ldots, w_{d-1}, v_j, \ldots, v_c$, which is strictly longer than the original cycle, but we said it was a longest cycle, contradiction. Case 2: $v_i$ and $v_j$ are not adjacent in the cycle. Assume $i < j$ and consider the cycle $v_1, \ldots, v_i, w_1, \ldots, w_{d-1}, v_j, \ldots, v_c$. We have a path $v_i, v_{i+1}, \ldots, v_j$ between two non-adjacent vertices of this new cycle that passes through vertices outside it, so by Lemma 1 it is not possible to have a good ordering, contradiction. Now, let's see how we can compute this hamiltonian cycle efficiently. The argument we used in the proof already gives a good algorithm: we find a cycle, and repeatedly find paths that go out of the cycle and then return to it. We will either find a path which returns to an adjacent vertex and therefore we can use it to augment the cycle (Case 1) or we will find that it is impossible to have a good ordering and halt (Case 2). However, implementing this directly takes $O(n^2)$ time in the worst case. 
We will find a way to implement this idea in $O(n)$ time, using properties of the DFS tree. Lemma 2: In a DFS tree of a (biconnected) graph $G$ with a good ordering, each vertex can have at most two children. Proof: Suppose that some vertex $u$ has more than two children. Because the graph is biconnected, we have that $u$ can not be the root, and that from each of the child subtrees there is a back edge to a proper ancestor of $u$. Let $v_1, v_2, v_3$ be three children of $u$ that have back edges (in their respective subtrees) to three vertices $w_1, w_2, w_3$ with $\text{depth}(w_1) \leq \text{depth}(w_2) \leq \text{depth}(w_3)$, where $\text{depth}$ is the depth in the DFS tree. Consider the cycle $v_1, u, v_2, \ldots, w_2, \ldots, w_1, \ldots$ (which goes back to $v_1$) and the path $w_2, \ldots, w_3, \ldots, v_3, u$ between $w_2$ and $u$, two non-adjacent vertices in the cycle. This contradicts lemma 1. $\square$ Lemma 3: In a DFS tree of a (biconnected) graph $G$ with a good ordering, consider a vertex $u$ with two children $u_1$ and $u_2$, such that neither $u$ nor its parent $v$ is the root, and let $\text{Low}(u_i)$ be the vertex with least depth that can be reached with a back edge from the subtree of $u_i$. Then there is exactly one $i$ such that $\text{Low}(u_i) = v$. Proof: Assume that $\text{Low}(u_1)$ and $\text{Low}(u_2)$ are both proper ancestors of $v$ (they can not be descendants of $v$ since the graph is biconnected). Then, if we consider the cycle which includes the path $u_1, u, u_2$ and then cycles back using back edges without passing through $v$, we have a contradiction with lemma 1 considering the path from some vertex in the cycle which is a proper ancestor of $v$ to $u$ going down the DFS tree. Therefore, at least one of $\text{Low}(u_1)$ or $\text{Low}(u_2)$ is equal to $v$. Now let's see that it can not happen that $\text{Low}(u_1) = \text{Low}(u_2) = v$. Assume that it is the case. 
Since the graph is biconnected, we must have that $\text{Low}(u) \neq v$ since it must be a proper ancestor of $v$ (note that here we are using that $v$ is not the root). Therefore, since the two children subtrees don't have back edges to vertices higher than $v$, $u$ must have a back edge to $\text{Low}(u)$. Consider the cycle $u, u_1, \ldots, v, \text{parent}(v), \ldots, \text{Low}(u)$. Now with the path $u, u_2, \ldots, v$ lemma 1 is violated. $\square$ Lemma 4: In a DFS tree of a (biconnected) graph $G$ with a good ordering, consider a vertex $u$ that is not the root with parent $v$ and with only one child $u_1$. If $\text{Low}(u_1) \neq \text{Low}(u)$, then $\text{Low}(u_1) = v$. Proof: Consider the cycle $u, u_1, \ldots, \text{Low}(u_1), \ldots$. If $u_1$ and $\text{Low}(u_1)$ are not adjacent, the path $u, \text{Low}(u), \ldots, \text{Low}(u_1)$ violates lemma 1. $\square$ Theorem: In a DFS tree of a (biconnected) graph $G$ with a good ordering, we can partition the tree edges into: One path of the form $u_1, \ldots, u_p$, where $u_{i+1}$ is the parent of $u_i$ for $1 \leq i < p$, and $u_p$ is the root, and there is a back edge between $u_1$ and $u_p$. Some paths of the form $u_1, \ldots, u_p$, where $u_{i+1}$ is the parent of $u_i$ for $1 \leq i < p$, and there is a back edge between $u_1$ and the parent of $u_p$. Proof: Let $u$ be a vertex which is not the root. We say that the representative back edge associated with vertex $u$ is the back edge from $u$'s subtree that goes to $\text{Low}(u)$: if there are multiple such back edges, we choose the one whose endpoint inside the subtree has the highest DFS visitation number. We partition the tree edges $(u, \text{parent}(u))$ into groups with the same representative edge associated with $u$. We have to prove that this partition has the desired properties. Let $u, v$ be vertices with the same representative edge. 
One vertex must be an ancestor of the other (otherwise their subtrees would be disjoint and they could not share any back edges), and every vertex in the path going from one to the other in the tree has the same representative edge (if there was some better edge, the higher vertex would have it as a representative). This proves that the groups form paths of the form $u_1, \ldots, u_p$, where $u_{i+1}$ is the parent of $u_i$ for $1 \leq i < p$ (note that here $u_1, \ldots, u_{p-1}$ would be the vertices with the same representative edge but $u_p$ wouldn't; $u_p$ is in the path because the partition is of edges, not vertices, and $(u_{p-1}, \text{parent}(u_{p-1}))$ is an edge in the group). Now, let's see the other property. It is clear that $u_1$, the deepest vertex of the path, has the representative edge of the path as one of its back edges. Now consider $u_p$. We have multiple cases: $u_p$ is not the root and it has only one child $u_{p-1}$. Then, since it does not have the same representative edge, $u_p$ must itself have a back edge and $\text{Low}(u_p) \neq \text{Low}(u_{p-1})$. By lemma 4, we must have $\text{Low}(u_{p-1}) = \text{parent}(u_{p})$ and the back edge from $u_1$ goes to the parent of $u_p$, as desired. $u_p$ is not the root, its parent is not the root either, and it has two children, one of which is $u_{p-1}$. Then by lemma 3, exactly one of the two children has its Low value equal to the parent of $u_p$. If it were not $u_{p-1}$, then $u_p$ would have the same representative edge as $u_{p-1}$, which is not possible since it is the endpoint of the path. Therefore again we have $\text{Low}(u_{p-1}) = \text{parent}(u_{p})$, and the back edge from $u_1$ goes to the parent of $u_p$. $u_p$ is the child of the root and it has two children. 
Now we must have $\text{Low}(u_{p-1}) = \text{parent}(u_{p})$ once again, because no vertex has smaller depth than the parent of $u_p$ (the root), and the back edge from $u_1$ goes to the parent of $u_p$. $u_p$ is the root. This happens for only one path, and clearly there must be a back edge from $u_1$ to $u_p$. This is just what we wanted: doing just one DFS we can get one initial cycle and a series of paths we can use to successively augment our cycle until we have a hamiltonian cycle. One way to implement this is the following: first, we partition the graph into paths as in the previous theorem (we can maintain the representative back edge during the DFS; if a violation of the properties proven in the lemmas is detected, we halt). Then, we begin exploring the initial path (the one that forms a cycle with the root) in a downwards direction; that is, first we visit $u_p$, then its child $u_{p-1}$, until we visit $u_1$, which has a back edge to $u_p$. When we visit one vertex, we push it into a vector that will contain the hamiltonian cycle at the end. If at some point we visit a vertex that is the parent of the endpoint of another path, we recursively visit that path and after visiting it we continue with our original path. But when recursively visiting the new path, we traverse it upwards, that is, we start with $u_1$ and go up until we reach $u_p$ (the child of the current vertex). If at some point we visit the endpoint of some other path, we again do a recursive visit of that path, this time in downwards direction. This way, we end up visiting all paths in a DFS-like way, and since we alternate between upward and downward traversals we end up having all the vertices of the graph in the vector ordered in a way that they form a hamiltonian cycle. Now that we have found how to compute that hamiltonian cycle in linear time (or halt if we find that some of the necessary conditions for a good ordering are violated), let's see an additional necessary condition for a good ordering. 
Theorem: In a (biconnected) graph $G$ with a good ordering, for any hamiltonian cycle there can not be any pair of edges that "cross", that is, if we draw the graph with vertices placed in a circle in the hamiltonian cycle order and with non-cycle edges drawn as chords, no two chords intersect. Proof: Assume that two edges cross. We can label the vertices $v_1, \ldots, v_n$ so that $v_1, \ldots, v_n$ forms the hamiltonian cycle in that order, $v_{k} \leq_{v_{k+1}} v_{k+2}$ and the endpoints of the chords are $v_1, v_i$ and $v_p, v_q$, with $1 < p < i < q \leq n$. Consider the cycle $v_1, v_2, \ldots, v_p, v_q, v_{q+1}, \ldots$. Because of the orientation of the hamiltonian cycle (note that there are at least three consecutive vertices from the hamiltonian cycle in this cycle) we have that $v_{p-1} \leq_{v_p} v_q$. Similarly for the cycle $v_1, v_i, v_{i+1}, \ldots, v_n$, we see that $v_1 \leq_{v_i} v_{i+1}$. But now consider the cycle $v_1, v_i, v_{i+1}, \ldots, v_q, v_p, v_{p-1}, \ldots$. Because $v_1 \leq_{v_i} v_{i+1}$, we must also have $v_{q} \leq_{v_p} v_{p-1}$, contradiction. $\square$ Once we have found the hamiltonian cycle, this property can be checked in linear time. And now the properties we have checked are not only necessary but also sufficient: Theorem: A biconnected graph $G$ with a hamiltonian cycle $v_1, \ldots, v_n$ such that non-cycle edges do not cross admits a good ordering. Proof: We assign an ordering by choosing one orientation of the cycle, and ordering the edges of each vertex by how far the other endpoint of the edge is when traversing the cycle in that orientation. We have to prove that this is a good ordering. Let $v_{i_1}, v_{i_2}, \ldots, v_{i_c}$ be any cycle, and label the vertices $v_1, \ldots, v_n$ so that $i_1 = 1$ and $v_{i_1} \leq_{v_{i_2}} v_{i_3}$ according to the orientation. If we show that $i_1 < \ldots < i_c$, we will have shown that the cycle is consistent with the ordering. 
We already have $i_1 < i_2 < i_3$ by the labelling choice. Note that if $i_{k} < i_{k-1}$ is the first index to break the inequality, then if $1 < i_{k} < i_{k-2}$ the edge between $v_{i_{k-1}}$ and $v_{i_k}$ crosses some previous edge, and if $i_{k-2} < i_k$, then the cycle from then onwards must be contained in the interval between vertices $v_{i_{k-2}}$ and $v_{i_{k-1}}$, which is not possible since vertex $v_1$ lies outside. $\square$ This ordering can be constructed in time $O(m \log m)$ by applying any standard sorting algorithm to the adjacency lists of the vertices (the comparison function is different for each vertex), and it can be improved to linear time by making an array of pairs consisting of the information of all the adjacency lists (each edge is included two times in the array, one for each end), sorting the array in linear time using counting sort, and then for each of the vertices the sorted adjacency list can be restored with a single pass over the sorted array.
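As a sanity check for the crossing condition, here is a small quadratic sketch of the chord test (the actual solution performs the same check in linear time with a stack while scanning the sorted adjacency lists); chords are assumed to be given as pairs of positions along the hamiltonian cycle:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Chords are pairs of positions along the hamiltonian cycle, with
// 0 <= position < n; the cycle edges themselves are excluded.
// Two chords cross iff exactly one endpoint of one lies strictly
// between the endpoints of the other (shared endpoints do not cross).
bool cross(pair<int,int> x, pair<int,int> y) {
    int a = min(x.first, x.second), b = max(x.first, x.second);
    int c = min(y.first, y.second), d = max(y.first, y.second);
    return (a < c && c < b && b < d) || (c < a && a < d && d < b);
}

// Pairwise O(m^2) version of the non-crossing test.
bool has_crossing_chords(const vector<pair<int,int>>& chords) {
    for (size_t i = 0; i < chords.size(); ++i)
        for (size_t j = i + 1; j < chords.size(); ++j)
            if (cross(chords[i], chords[j])) return true;
    return false;
}
```

Nested chords (e.g. $(0,3)$ inside and $(1,2)$) and chords sharing an endpoint pass the test; only properly interleaved endpoints fail it.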
[ "constructive algorithms", "graphs" ]
3,500
#include<bits/stdc++.h> using namespace std; typedef vector<int> vi; typedef vector<vi> vvi; typedef pair<int, int> ii; typedef vector<ii> vii; typedef vector<vii> vvii; struct graph { int n; int m; vvi al; vi morphism; vvi dfs_children; vi dfs_parent; vi dfs_num; vi dfs_low; int dfs_count; bool is_root_ac = false; bool bad_biccon = false; vii repr_edge; void dt_dfs(int v, int par) { dfs_parent[v] = par; dfs_num[v] = dfs_low[v] = dfs_count++; for(int u : al[v]) { if(u == par) { //Nothing } else if(dfs_num[u] == -1) { dfs_children[v].push_back(u); dt_dfs(u, v); dfs_low[v] = min(dfs_low[v], dfs_low[u]); } else { dfs_low[v] = min(dfs_low[v], dfs_num[u]); } } } ii min_repr_edge(ii a, ii b) { if(a.first == -1) return b; if(b.first == -1) return a; if(dfs_num[a.second] < dfs_num[b.second]) return a; if(dfs_num[a.second] > dfs_num[b.second]) return b; if(dfs_num[a.first] > dfs_num[b.first]) return a; if(dfs_num[a.first] < dfs_num[b.first]) return b; return a; } void dt_dfs_hamil(int v) { repr_edge[v] = ii(-1, -1); for(int u : dfs_children[v]) { dt_dfs_hamil(u); repr_edge[v] = min_repr_edge(repr_edge[v], repr_edge[u]); } for(int u : al[v]) { if(u == dfs_parent[v]) { //Nothing } else if(dfs_num[u] < dfs_num[v]) { repr_edge[v] = min_repr_edge(repr_edge[v], ii(v, u)); } } if(dfs_parent[v] == -1) { //Root case //Nothing more to check, repr_edge will always be set correctly. 
} else { if(dfs_children[v].size() == 0) { //Nothing to check } else if(dfs_children[v].size() == 1) { //Lemma 4 if(dfs_low[dfs_children[v][0]] != dfs_low[v] && dfs_low[dfs_children[v][0]] != dfs_num[dfs_parent[v]]) { bad_biccon = true; } } else if(dfs_children[v].size() == 2) { //Lemma 3 if(dfs_parent[dfs_parent[v]] != -1) { if(!((dfs_low[dfs_children[v][0]] == dfs_num[dfs_parent[v]]) ^ (dfs_low[dfs_children[v][1]] == dfs_num[dfs_parent[v]]))) { bad_biccon = true; } } if(dfs_low[v] < min(dfs_low[dfs_children[v][0]], dfs_low[dfs_children[v][1]])) { bad_biccon = true; } } else { bad_biccon = true; } } } void dt_dfs_cedges(vvii& cedges, vii& edge_stack, int v, int par) { dfs_parent[v] = par; dfs_num[v] = dfs_low[v] = dfs_count++; for(int u : al[v]) { if(u == par) { //Nothing } else if(dfs_num[u] == -1) { dfs_children[v].push_back(u); edge_stack.emplace_back(v, u); dt_dfs_cedges(cedges, edge_stack, u, v); dfs_low[v] = min(dfs_low[v], dfs_low[u]); if((par == -1 && is_root_ac) || (par != -1 && dfs_low[u] >= dfs_num[v])) { vii comp; while(edge_stack.back().first != v || edge_stack.back().second != u) { comp.push_back(edge_stack.back()); edge_stack.pop_back(); } comp.push_back(edge_stack.back()); edge_stack.pop_back(); cedges.push_back(comp); } } else { dfs_low[v] = min(dfs_low[v], dfs_num[u]); if(dfs_num[u] < dfs_num[v]) { edge_stack.emplace_back(v, u); } } } } void generate_dfs_tree(int root) { dfs_children = vvi(n, vi()); dfs_parent = vi(n); dfs_count = 0; dfs_num = vi(n, -1); dfs_low = vi(n, -1); dt_dfs(root, -1); } void generate_edge_partition(vvii& cedges, vii& edge_stack, int root) { generate_dfs_tree(root); is_root_ac = (dfs_children[root].size() > 1); dfs_children = vvi(n, vi()); dfs_parent = vi(n); dfs_count = 0; dfs_num = vi(n, -1); dfs_low = vi(n, -1); dt_dfs_cedges(cedges, edge_stack, root, -1); if(!edge_stack.empty()) cedges.push_back(edge_stack); } void generate_hamil_dfs_tree(int root) { generate_dfs_tree(root); repr_edge = vii(n, ii()); 
dt_dfs_hamil(root); } }; vector<graph> partition_biconnected(graph& g) { vvii cedges; vii edge_stack; g.generate_edge_partition(cedges, edge_stack, 0); vector<graph> comp; for(vii& vec : cedges) { graph h; h.n = 0; h.m = vec.size(); unordered_map<int, int> rmorph; for(ii e : vec) { if(rmorph.find(e.first) == rmorph.end()) { rmorph[e.first] = h.n++; } if(rmorph.find(e.second) == rmorph.end()) { rmorph[e.second] = h.n++; } } h.morphism = vi(h.n); h.al = vvi(h.n); for(ii e : vec) { int u = rmorph[e.first]; int v = rmorph[e.second]; h.morphism[u] = g.morphism[e.first]; h.morphism[v] = g.morphism[e.second]; h.al[u].push_back(v); h.al[v].push_back(u); } comp.push_back(h); } return comp; } void upwards_path(graph& g, vi& hc, int v, int tar); void downwards_path(graph& g, vi& hc, int v, int tar); void upwards_path(graph& g, vi& hc, int v, int tar) { if(g.dfs_children[v].size() == 2) { int u1 = g.dfs_children[v][0]; int u2 = g.dfs_children[v][1]; if(g.repr_edge[u1] == g.repr_edge[v]) swap(u1, u2); hc.push_back(v); downwards_path(g, hc, u1, g.repr_edge[u1].first); if(v != tar) { upwards_path(g, hc, g.dfs_parent[v], tar); } } else if(g.dfs_children[v].size() == 1) { int u = g.dfs_children[v][0]; hc.push_back(v); if(g.repr_edge[u] != g.repr_edge[v]) { downwards_path(g, hc, u, g.repr_edge[u].first); } if(v != tar) { upwards_path(g, hc, g.dfs_parent[v], tar); } } else { hc.push_back(v); if(v != tar) { upwards_path(g, hc, g.dfs_parent[v], tar); } } } void downwards_path(graph& g, vi& hc, int v, int tar) { if(g.dfs_children[v].size() == 2) { int u1 = g.dfs_children[v][0]; int u2 = g.dfs_children[v][1]; if(g.repr_edge[u1] == g.repr_edge[v]) swap(u1, u2); upwards_path(g, hc, g.repr_edge[u1].first, u1); hc.push_back(v); downwards_path(g, hc, u2, tar); } else if(g.dfs_children[v].size() == 1) { int u = g.dfs_children[v][0]; if(v == tar) { upwards_path(g, hc, g.repr_edge[u].first, u); hc.push_back(v); } else { hc.push_back(v); downwards_path(g, hc, u, tar); } } else { hc.push_back(v); 
} } vi hamiltonian_cycle(graph& g) { g.generate_hamil_dfs_tree(0); if(g.bad_biccon) return vi(); vi hc; downwards_path(g, hc, 0, g.repr_edge[0].first); assert((int)hc.size() == g.n); return hc; } int comp_index; bool cyclic_comparator(int a, int b) { if(a < comp_index) a += 1e7; if(b < comp_index) b += 1e7; return a < b; } graph sort_graph(const graph& g, const vi& hc) { graph h; h.n = g.n; h.m = g.m; h.morphism = vi(g.n); h.al = vvi(g.n); vi rhc(g.n); for(int i=0; i < g.n; ++i) { h.morphism[i] = g.morphism[hc[i]]; rhc[hc[i]] = i; } for(int i=0; i < g.n; ++i) { for(int j : g.al[hc[i]]) { h.al[i].push_back(rhc[j]); } comp_index = i; //This is m log m, can be improved sort(h.al[i].begin(), h.al[i].end(), cyclic_comparator); } return h; } bool has_crossing(const graph& g) { vii bad_stack; for(int i=0; i < g.n; ++i) { comp_index = i; while(!bad_stack.empty() && i == bad_stack.back().first) bad_stack.pop_back(); for(int j=(int)g.al[i].size()-2; j > 0; --j) { int u = g.al[i][j]; if(!bad_stack.empty() && cyclic_comparator(bad_stack.back().first, u) && cyclic_comparator(u, bad_stack.back().second)) { return true; } if(u > i && (bad_stack.empty() || u != bad_stack.back().first)) { bad_stack.emplace_back(u, i); } } } return false; } void merge_ans(const graph& g, vvi& ans) { for(int i=0; i < g.n; ++i) { for(int j : g.al[i]) { ans[g.morphism[i]].push_back(g.morphism[j]); } } } void merge_single_edge_ans(const graph& g, vvi& ans) { int u = g.morphism[0]; int v = g.morphism[1]; ans[u].push_back(v); ans[v].push_back(u); } vvi solve(graph& input_graph) { vvi ans(input_graph.n); vector<graph> components = partition_biconnected(input_graph); for(graph g : components) { if(g.n == 1) { //Nothing } else if(g.n == 2) { merge_single_edge_ans(g, ans); } else { vi hc = hamiltonian_cycle(g); if(hc.empty()) { return vvi(); } graph g2 = sort_graph(g, hc); if(has_crossing(g2)) return vvi(); merge_ans(g2, ans); } } return ans; } graph read_graph() { graph g; int n, m; cin >> n >> m; g.n = n; 
g.m = m; g.al = vvi(n, vi()); g.morphism = vi(n, 0); for(int i=0; i < n; ++i) { g.morphism[i] = i; } for(int i=0; i < m; ++i) { int u, v; cin >> u >> v; g.al[u].push_back(v); g.al[v].push_back(u); } return g; } void print_ans(const vvi& ans) { int n = ans.size(); if(n == 0) { cout << "NO" << endl; } else { cout << "YES" << endl; for(int i=0; i < n; ++i) { for(int x : ans[i]) { cout << x << " "; } cout << endl; } } } int main() { int T; cin >> T; while(T--) { graph input_graph = read_graph(); vvi ans = solve(input_graph); print_ans(ans); } }
1657
A
Integer Moves
There's a chip in the point $(0, 0)$ of the coordinate plane. In one operation, you can move the chip from some point $(x_1, y_1)$ to some point $(x_2, y_2)$ if the Euclidean distance between these two points is an \textbf{integer} (i.e. $\sqrt{(x_1-x_2)^2+(y_1-y_2)^2}$ is integer). Your task is to determine the minimum number of operations required to move the chip from the point $(0, 0)$ to the point $(x, y)$.
Note that the answer does not exceed $2$, because the chip can be moved as follows: $(0, 0) \rightarrow (x, 0) \rightarrow (x, y)$. Obviously, in this case, both operations are valid. It remains to check the cases when the answer is $0$ or $1$. The answer is $0$ only if the destination point is $(0, 0)$, and the answer is $1$ if $\sqrt{x^2+y^2}$ is an integer.
[ "brute force", "math" ]
800
#include <bits/stdc++.h> using namespace std; int main() { int t; cin >> t; while (t--) { int x, y; cin >> x >> y; int d = x * x + y * y; int r = 0; while (r * r < d) ++r; int ans = 2; if (r * r == d) ans = 1; if (x == 0 && y == 0) ans = 0; cout << ans << '\n'; } }
1657
B
XY Sequence
You are given four integers $n$, $B$, $x$ and $y$. You should build a sequence $a_0, a_1, a_2, \dots, a_n$ where $a_0 = 0$ and for each $i \ge 1$ you can choose: - either $a_i = a_{i - 1} + x$ - or $a_i = a_{i - 1} - y$. Your goal is to build such a sequence $a$ that $a_i \le B$ for all $i$ and $\sum\limits_{i=0}^{n}{a_i}$ is maximum possible.
The strategy is quite easy: we go from $a_1$ to $a_n$ and if $a_{i - 1} + x \le B$ we take this variant (we set $a_i = a_{i - 1} + x$); otherwise we set $a_i = a_{i - 1} - y$. Note that all $a_i$ are in range $[-(x + y), B]$ so there won't be any overflow/underflow. It's also not hard to prove that this strategy maximizes the sum. By contradiction: suppose the optimal answer has some index $i$ where $a_{i - 1} + x \le B$ but $a_i = a_{i - 1} - y$. Let's find the first position $j \ge i$ where $a_j = a_{j - 1} + x$ and swap the operations at $i$ and $j$ (if there is no such position, we can simply replace the operation at $i$ with $+x$, which increases all elements from $i$ onwards without exceeding $B$). As a result, $B \ge a_i > a_{i + 1} > \dots > a_{j}$; all $a_k$ with $k \in [i, j - 1]$ were increased while $a_j$ remained the same, i. e. there is no violation of the rules and the total sum increased, a contradiction.
[ "greedy" ]
800
fun main() { repeat(readLine()!!.toInt()) { val (n, B, x, y) = readLine()!!.split(' ').map { it.toInt() } var cur = 0L var ans = 0L for (i in 1..n) { cur += if (cur + x <= B) x else -y ans += cur } println(ans) } }
1657
C
Bracket Sequence Deletion
You are given a bracket sequence consisting of $n$ characters '(' and/or ')'. You perform several operations with it. During one operation, you choose the \textbf{shortest} prefix of this string (some amount of first characters of the string) that is \textbf{good} and remove it from the string. The prefix is considered \textbf{good} if one of the following two conditions is satisfied: - this prefix is a regular bracket sequence; - this prefix is a palindrome of length \textbf{at least two}. A bracket sequence is called regular if it is possible to obtain a correct arithmetic expression by inserting characters '+' and '1' into this sequence. For example, sequences (())(), () and (()(())) are regular, while )(, (() and (()))( are not. The bracket sequence is called palindrome if it reads the same back and forth. For example, the bracket sequences )), (( and )(() are palindromes, while bracket sequences (), )( and ))( are not palindromes. You stop performing the operations when it's not possible to find a \textbf{good} prefix. Your task is to find the number of operations you will perform on the given string and the number of remaining characters in the string. You have to answer $t$ independent test cases.
Consider the first character of the string. If it is '(', then we can remove the first two characters of the string and continue (because the prefix of length $2$ will be either a palindrome or a regular bracket sequence). If the first character of the string is ')' then this is a bad case. Of course, a regular bracket sequence can't start with ')', so this prefix should be a palindrome. And what is the shortest palindrome we can get with the first character ')'? It is the closing bracket ')', then some (possibly, zero) amount of opening brackets '(', and one more closing bracket. We can see that we can't find a shorter palindrome than this one because we have to find a pair for the first character. So, if the first character of the string is ')', then we just remove everything until the next character ')' inclusive. To avoid removing any characters explicitly, we can just use pointers instead. And the last thing is to carefully handle cases when we can't do any operations.
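The pointer-based scan described above can be sketched as follows (the helper name is ours; it returns the operation count and the number of leftover characters):

```cpp
#include <bits/stdc++.h>
using namespace std;

// If s[l] == '(' the shortest good prefix has length 2: it is either
// "()" (regular) or "((" (palindrome). The same holds for "))".
// Otherwise s[l] == ')' and s[l+1] == '(': the shortest good prefix is
// the palindrome ")(...(" up to and including the next ')'.
pair<int,int> simulate(const string& s) {
    int n = s.size(), l = 0, ops = 0;
    while (l + 1 < n) {
        if (s[l] == '(' || s[l + 1] == ')') {
            l += 2;                       // prefix of length 2 is good
        } else {                          // s[l] == ')', s[l+1] == '('
            int r = l + 1;
            while (r < n && s[r] != ')') ++r;
            if (r == n) break;            // no closing ')' left: stuck
            l = r + 1;
        }
        ++ops;
    }
    return {ops, n - l};                  // {operations, leftover chars}
}
```

For instance, ")()(" performs one operation (removing the palindrome ")()") and is left with one character.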
[ "greedy", "implementation" ]
1,200
#include <bits/stdc++.h> using namespace std; int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif int t; cin >> t; while (t--) { int n; string s; cin >> n >> s; int l = 0; int cnt = 0; while (l + 1 < n) { if (s[l] == '(' || (s[l] == ')' && s[l + 1] == ')')) { l += 2; } else { int r = l + 1; while (r < n && s[r] != ')') { ++r; } if (r == n) { break; } l = r + 1; } ++cnt; } cout << cnt << ' ' << n - l << '\n'; } return 0; }
1657
D
For Gamers. By Gamers.
Monocarp is playing a strategy game. In the game, he recruits a squad to fight monsters. Before each battle, Monocarp has $C$ coins to spend on his squad. Before each battle starts, his squad is empty. Monocarp chooses \textbf{one type of units} and recruits no more units of that type than he can recruit with $C$ coins. There are $n$ types of units. Every unit type has three parameters: - $c_i$ — the cost of recruiting one unit of the $i$-th type; - $d_i$ — the damage that one unit of the $i$-th type deals in a second; - $h_i$ — the amount of health of one unit of the $i$-th type. Monocarp has to face $m$ monsters. Every monster has two parameters: - $D_j$ — the damage that the $j$-th monster deals in a second; - $H_j$ — the amount of health the $j$-th monster has. Monocarp has to fight only the $j$-th monster during the $j$-th battle. He wants all his recruited units to stay alive. Both Monocarp's squad and the monster attack continuously (not once per second) and at the same time. Thus, Monocarp wins the battle if and only if his squad kills the monster strictly faster than the monster kills one of his units. The time is compared with no rounding. For each monster, Monocarp wants to know the minimum amount of coins he has to spend to kill that monster. If this amount is greater than $C$, then report that it's impossible to kill that monster.
Imagine you are fighting the $j$-th monster, and you have fixed the type of units $i$ and their amount $x$. What's the win condition? $\frac{H_j}{d_i \cdot x} < \frac{h_i}{D_j}$. Rewrite it as $H_j \cdot D_j < d_i \cdot x \cdot h_i$. Notice that we only care about $d \cdot h$ for both the units and the monster, not about $d$ and $h$ on their own. Let's call $d \cdot h \cdot x$ the power of the squad and $D \cdot H$ the power of the monster. You can see that for each cost $c$ we only need to keep the unit type of that cost with the largest value of $d \cdot h$. Let's call it $\mathit{bst}_c$. Now let's learn to determine the maximum power we can obtain for cost exactly $c$. We iterate over the cost $c$ of one unit and the count $x$ of units in the squad. Since $c \cdot x$ should not exceed $C$, that takes $\frac{C}{1} + \frac{C}{2} + \dots + \frac{C}{C} = O(C \log C)$ operations and turns $\mathit{bst}_c$ into the maximum power for cost exactly $c$. We now know the best power for cost exactly $c$, but we actually want cost no more than $c$. Calculate prefix maximums over $\mathit{bst}$ - that will be the maximum power we can obtain with no more than $c$ coins. For each monster, we just have to find the smallest $c$ such that $\mathit{bst}_c > D \cdot H$. Since the array is monotone, we can use binary search. Overall complexity: $O(n + (C + m) \log C)$.
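As a sketch, the precomputation and query described above might look like this in Python (function names and the unit-list format are ours):

```python
import bisect

def max_power_table(C, units):
    """bst[c] = maximum squad power d*h*x achievable with at most c coins.
    Mirrors the editorial's harmonic-sum precomputation."""
    bst = [0] * (C + 1)
    for c, d, h in units:                 # keep the best d*h per exact cost
        if c <= C:
            bst[c] = max(bst[c], d * h)
    for c in range(1, C + 1):             # buy xc // c copies of a cost-c squad
        for xc in range(c, C + 1, c):
            bst[xc] = max(bst[xc], bst[c] * (xc // c))
    for c in range(1, C + 1):             # prefix maximum: "at most c coins"
        bst[c] = max(bst[c], bst[c - 1])
    return bst

def min_cost(bst, D, H):
    """Smallest budget whose power strictly exceeds D*H, or -1 if impossible."""
    c = bisect.bisect_right(bst, D * H)   # bst is non-decreasing
    return c if c < len(bst) else -1
```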
[ "binary search", "brute force", "greedy", "math", "sortings" ]
2,000
#include <bits/stdc++.h>

using namespace std;

#define forn(i, n) for (int i = 0; i < int(n); ++i)

int main(){
    int n, C;
    scanf("%d%d", &n, &C);
    vector<long long> bst(C + 1);
    forn(i, n){
        int c, d, h;
        scanf("%d%d%d", &c, &d, &h);
        bst[c] = max(bst[c], d * 1ll * h);
    }
    for (int c = 1; c <= C; ++c)
        for (int xc = c; xc <= C; xc += c)
            bst[xc] = max(bst[xc], bst[c] * (xc / c));
    forn(c, C)
        bst[c + 1] = max(bst[c + 1], bst[c]);
    int m;
    scanf("%d", &m);
    forn(j, m){
        int D;
        long long H;
        scanf("%d%lld", &D, &H);
        int mn = upper_bound(bst.begin(), bst.end(), D * H) - bst.begin();
        if (mn > C)
            mn = -1;
        printf("%d ", mn);
    }
    puts("");
}
1657
E
Star MST
In this problem, we will consider \textbf{complete} undirected graphs consisting of $n$ vertices with weighted edges. The weight of each edge is an integer from $1$ to $k$. An undirected graph is considered \textbf{beautiful} if the sum of weights of all edges incident to vertex $1$ is equal to the weight of MST in the graph. MST is the minimum spanning tree — a tree consisting of $n-1$ edges of the graph, which connects all $n$ vertices and has the minimum sum of weights among all such trees; the weight of MST is the sum of weights of all edges in it. Calculate the number of \textbf{complete} \textbf{beautiful} graphs having exactly $n$ vertices and the weights of edges from $1$ to $k$. Since the answer might be large, print it modulo $998244353$.
Let the weight of the edge between vertex $x$ and vertex $y$ be $w_{x,y}$. Suppose there exists a pair of vertices $x$ and $y$ (both different from vertex $1$) such that $w_{x,y} < w_{1,x}$ or $w_{x,y} < w_{1,y}$. Then, if we choose the spanning tree with all vertices connected to $1$, it won't be an MST: we can remove either the edge $(1,x)$ or the edge $(1,y)$, add the edge $(x,y)$ instead, and the cost of the spanning tree will decrease. So, we should have $w_{x,y} \ge \max(w_{1,x}, w_{1,y})$ for every pair $(x, y)$. It can be shown that this condition is not only necessary, but sufficient as well: if for every pair $(x, y)$ the condition $w_{x,y} \ge \max(w_{1,x}, w_{1,y})$ holds, the MST can't have a weight less than $\sum \limits_{i=2}^{n} w_{1,i}$. We can prove this by induction (suppose that $w_{1,2} \le w_{1,3} \le \ldots \le w_{1,n}$ for simplicity): in the spanning tree, there should be at least one edge incident to vertex $n$, and its weight is at least $w_{1,n}$; there should be at least two edges incident to vertices $n$ and $n-1$, and their total weight is at least $w_{1,n-1} + w_{1,n}$; ...; there should be at least $n-1$ edges incident to vertices from $2$ to $n$, and their total weight is at least $\sum \limits_{i=2}^{n} w_{1,i}$. Okay, now let's show how to calculate the number of such graphs. We can run the following dynamic programming: let $dp_{i,j}$ be the number of graphs where we have already connected $i$ vertices to the vertex $1$, and the maximum weight we have used is $j$.
We start with $dp_{0,0} = 1$, and for each transition from $dp_{i,j}$, we iterate over the number of vertices we connect to the vertex $1$ with edges of weight $(j+1)$ (let the number of those vertices be $t$), choose them with the binomial coefficient $\frac{(n-1-i)!}{t!(n-1-i-t)!}$, and also choose the weights of the edges that connect a chosen vertex either with another chosen vertex or with a vertex already connected to $1$ (each of those edges must have weight in $[j+1,k]$) - so, we multiply the value in the transition by $(k-j)^e$, where $e$ is the number of such edges. Implementing this dynamic programming can be done in $O(n^2k)$ or $O(n^2 k \log n)$; both are fast enough.
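A hedged Python transcription of this DP (the rolling-array layout and the function name are ours; it recomputes powers with three-argument pow, giving the $O(n^2 k \log n)$ variant, and assumes $n \geq 2$):

```python
MOD = 998244353

def count_beautiful(n, k):
    """Number of beautiful complete graphs on n vertices, edge weights 1..k."""
    m = n - 1                              # vertices other than vertex 1
    # binomial coefficients C[a][b] modulo MOD
    C = [[0] * (m + 1) for _ in range(m + 1)]
    for a in range(m + 1):
        C[a][0] = 1
        for b in range(1, a + 1):
            C[a][b] = (C[a - 1][b - 1] + C[a - 1][b]) % MOD
    dp = [0] * (m + 1)                     # dp[j]: j vertices already attached
    dp[0] = 1
    for w in range(k):                     # attach new vertices with weight w+1
        ndp = dp[:]                        # t = 0: attach nobody at this level
        for t in range(1, m + 1):
            for j in range(m - t + 1):
                if dp[j] == 0:
                    continue
                # t*(t-1)//2 edges among the new vertices plus t*j edges to the
                # old ones, each with a weight in [w+1, k]: k-w choices apiece
                e = t * (t - 1) // 2 + t * j
                ndp[j + t] = (ndp[j + t]
                              + dp[j] * C[m - j][t] % MOD
                              * pow(k - w, e, MOD)) % MOD
        dp = ndp
    return dp[m]
```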
[ "combinatorics", "dp", "graph matchings", "math" ]
2,200
#include <bits/stdc++.h>

using namespace std;

#define forn(i, n) for (int i = 0; i < int(n); ++i)

const int MOD = 998244353;

int add(int a, int b){
    a += b;
    if (a >= MOD)
        a -= MOD;
    return a;
}

int mul(int a, int b){
    return a * 1ll * b % MOD;
}

int binpow(int a, int b){
    int res = 1;
    while (b){
        if (b & 1)
            res = mul(res, a);
        a = mul(a, a);
        b >>= 1;
    }
    return res;
}

int main(){
    int n, k;
    scanf("%d%d", &n, &k);
    --n;
    vector<vector<int>> dp(k + 1, vector<int>(n + 1, 0));
    vector<vector<int>> C(n + 1);
    forn(i, n + 1){
        C[i].resize(i + 1);
        C[i][0] = C[i][i] = 1;
        for (int j = 1; j < i; ++j)
            C[i][j] = add(C[i - 1][j], C[i - 1][j - 1]);
    }
    dp[0][0] = 1;
    forn(i, k) forn(t, n + 1) {
        int pw = binpow(k - i, t * (t - 1) / 2);
        int step = binpow(k - i, t);
        forn(j, n - t + 1){
            dp[i + 1][j + t] = add(dp[i + 1][j + t], mul(dp[i][j], mul(C[n - j][t], pw)));
            pw = mul(pw, step);
        }
    }
    printf("%d\n", dp[k][n]);
}
1657
F
Words on Tree
You are given a tree consisting of $n$ vertices, and $q$ triples $(x_i, y_i, s_i)$, where $x_i$ and $y_i$ are integers from $1$ to $n$, and $s_i$ is a string \textbf{with length equal to the number of vertices on the simple path from $x_i$ to $y_i$}. You want to write a lowercase Latin letter on each vertex in such a way that, for each of $q$ given triples, at least one of the following conditions holds: - if you write out the letters on the vertices on the simple path from $x_i$ to $y_i$ in the order they appear on this path, you get the string $s_i$; - if you write out the letters on the vertices on the simple path from $y_i$ to $x_i$ in the order they appear on this path, you get the string $s_i$. Find any possible way to write a letter on each vertex to meet these constraints, or report that it is impossible.
Let's design a naive solution first. For each of the given triples, we have two options: either write the string on the tree in the order from $x_i$ to $y_i$, or in reverse order. Some options conflict with each other. So, we can treat this problem as an instance of 2-SAT: create a variable for each of the given strings, which is true if the string is not reversed, and false if it is reversed; find all conflicting pairs of options and then run the usual algorithm for solving 2-SAT. Unfortunately, the number of conflicting pairs can be up to $O(n^2)$, so we need to improve this solution. Let's introduce a variable for each vertex of the tree which will define the character we write on it. At first, it looks like we can't use these variables in 2-SAT, since the number of possible characters is $26$, not $2$. But if a vertex is covered by at least one path in a triple, then there are only two possible characters we can write in this vertex: either the character which will land on this position if we write the string from $x_i$ to $y_i$, or the character on the opposite position in the string $s_i$. And, obviously, if a vertex is not covered by any triple, we can write any character on it. Okay, now for each vertex $i$, we have two options for a character: $c_{i,1}$ and $c_{i,2}$. Let the variable $p_i$ be true if we write $c_{i,1}$ on vertex $i$, and false if we write $c_{i,2}$. Also, for each triple $j$, let's introduce a variable $w_j$ which is true if the string $s_j$ is written from $x_j$ to $y_j$, and false if it is written in reversed order. If the vertex $i$ is the $k$-th one on the path from $x_j$ to $y_j$, then we should add the following constraints in our 2-SAT: if $s_{j,k} \ne c_{i,1}$, we need a constraint "NOT $p_i$ OR NOT $w_j$"; if $s_{j,k} \ne c_{i,2}$, we need a constraint "$p_i$ OR NOT $w_j$"; if $s_{j,|s_j|-k+1} \ne c_{i,1}$, we need a constraint "NOT $p_i$ OR $w_j$"; if $s_{j,|s_j|-k+1} \ne c_{i,2}$, we need a constraint "$p_i$ OR $w_j$". 
Thus, we add at most $16 \cdot 10^5$ constraints to our 2-SAT. The only thing we haven't discussed is how to actually restore each path from $x_j$ to $y_j$; this can be done either with any fast algorithm that finds the LCA, or by searching for the LCA "naively": ascend from the deeper of the two current vertices until the vertices meet. This approach visits at most $\sum \limits_{j=1}^{q} |s_j|$ vertices in total. Overall, this solution runs in $O(n + q + \sum \limits_{j=1}^{q} |s_j|)$.
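The "naive" path recovery might be sketched like this (an illustration, not the reference implementation; the names are ours, and the tree is 0-indexed):

```python
def build(n, edges, root=0):
    """Parent/depth arrays via iterative DFS (avoids recursion limits)."""
    adj = [[] for _ in range(n)]
    for v, u in edges:
        adj[v].append(u)
        adj[u].append(v)
    par = [-1] * n
    dep = [0] * n
    seen = [False] * n
    seen[root] = True
    stack = [root]
    while stack:
        v = stack.pop()
        for u in adj[v]:
            if not seen[u]:
                seen[u] = True
                par[u] = v
                dep[u] = dep[v] + 1
                stack.append(u)
    return par, dep

def path(v, u, par, dep):
    """Vertices on the simple path v..u, found by climbing from the deeper
    endpoint; total work over all queries is O(sum of path lengths)."""
    left, right = [], []
    while v != u:
        if dep[v] >= dep[u]:
            left.append(v)
            v = par[v]
        else:
            right.append(u)
            u = par[u]
    return left + [v] + right[::-1]
```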
[ "2-sat", "dfs and similar", "dsu", "graphs", "trees" ]
2,600
#include <bits/stdc++.h>

using namespace std;

#define forn(i, n) for (int i = 0; i < int(n); ++i)

vector<vector<int>> t;
vector<int> p, h;

void init(int v){
    for (int u : t[v]) if (u != p[v]){
        p[u] = v;
        h[u] = h[v] + 1;
        init(u);
    }
}

vector<int> get_path(int v, int u){
    vector<int> l, r;
    while (v != u){
        if (h[v] > h[u]){
            l.push_back(v);
            v = p[v];
        }
        else{
            r.push_back(u);
            u = p[u];
        }
    }
    l.push_back(v);
    while (!r.empty()){
        l.push_back(r.back());
        r.pop_back();
    }
    return l;
}

vector<vector<int>> g, tg;

void add_edge(int v, bool vx, int u, bool vy){
    // (val[v] == vx) -> (val[u] == vy)
    g[v * 2 + vx].push_back(u * 2 + vy);
    tg[u * 2 + vy].push_back(v * 2 + vx);
    g[u * 2 + !vy].push_back(v * 2 + !vx);
    tg[v * 2 + !vx].push_back(u * 2 + !vy);
}

vector<int> ord;
vector<char> used;

void ts(int v){
    used[v] = true;
    for (int u : g[v]) if (!used[u]) ts(u);
    ord.push_back(v);
}

vector<int> clr;
int k;

void dfs(int v){
    clr[v] = k;
    for (int u : tg[v]) if (clr[u] == -1) dfs(u);
}

int main(){
    cin.tie(0);
    iostream::sync_with_stdio(false);
    int n, m;
    cin >> n >> m;
    t.resize(n);
    p.resize(n);
    h.resize(n);
    p[0] = -1;
    forn(i, n - 1){
        int v, u;
        cin >> v >> u;
        --v, --u;
        t[v].push_back(u);
        t[u].push_back(v);
    }
    init(0);
    vector<vector<int>> paths(m);
    vector<string> s(m);
    vector<pair<char, char>> opts(n, make_pair(-1, -1));
    forn(i, m){
        int v, u;
        cin >> v >> u >> s[i];
        --v, --u;
        paths[i] = get_path(v, u);
        int k = s[i].size();
        assert(int(paths[i].size()) == k);
        forn(j, k) opts[paths[i][j]] = {s[i][j], s[i][k - j - 1]};
    }
    int nm = (n + m) * 2;
    g.resize(nm);
    tg.resize(nm);
    forn(i, m){
        int k = s[i].size();
        forn(j, k){
            int v = paths[i][j];
            char c = s[i][j], rc = s[i][k - j - 1];
            char d = opts[v].first, rd = opts[v].second;
            if (d != c) add_edge(v, false, n + i, true);
            if (d != rc) add_edge(v, false, n + i, false);
            if (rd != c) add_edge(v, true, n + i, true);
            if (rd != rc) add_edge(v, true, n + i, false);
        }
    }
    used.resize(nm);
    forn(i, nm) if (!used[i]) ts(i);
    clr.resize(nm, -1);
    reverse(ord.begin(), ord.end());
    for (int v : ord) if (clr[v] == -1){
        dfs(v);
        ++k;
    }
    forn(i, nm) if (clr[i] == clr[i ^ 1]){
        cout << "NO\n";
        return 0;
    }
    cout << "YES\n";
    for (int i = 0; i < 2 * n; i += 2){
        if (opts[i / 2].first == -1) cout << 'a';
        else if (clr[i] > clr[i ^ 1]) cout << opts[i / 2].first;
        else cout << opts[i / 2].second;
    }
    cout << "\n";
}
1658
A
Marin and Photoshoot
Today, Marin is at a cosplay exhibition and is preparing for a group photoshoot! For the group picture, the cosplayers form a horizontal line. A group picture is considered beautiful if for every contiguous segment of at least $2$ cosplayers, the number of males does not exceed the number of females (obviously). Currently, the line has $n$ cosplayers which can be described by a binary string $s$. The $i$-th cosplayer is male if $s_i = 0$ and female if $s_i = 1$. To ensure that the line is beautiful, you can invite some additional cosplayers (possibly zero) to join the line at any position. You can't remove any cosplayer from the line. Marin wants to know the minimum number of cosplayers you need to invite so that the group picture of all the cosplayers is beautiful. She can't do this on her own, so she's asking you for help. Can you help her?
How would you solve this problem if there were just $2$ male cosplayers? Look at the distance between $2$ consecutive male cosplayers: it is easy to see that in a beautiful picture there must be at least $2$ female cosplayers between any $2$ consecutive male cosplayers. This condition is also sufficient: if a subsegment contains $x \geq 2$ male cosplayers, it contains at least $2(x-1)$ female cosplayers, and $2(x - 1) = 2x - 2 \geq x$ for all $x \geq 2$.
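A minimal sketch of this greedy in Python (assuming, as in the reference solution, two imaginary females before the line starts so the first male needs no padding; the function name is ours):

```python
def min_invites(s: str) -> int:
    """Minimum cosplayers to insert so that every two consecutive males ('0')
    have at least two females ('1') between them."""
    ans = 0
    females = 2          # pretend two females precede the line
    for c in s:
        if c == '1':
            females += 1
        else:
            ans += max(2 - females, 0)   # pad up to two females before this male
            females = 0
    return ans
```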
[ "constructive algorithms", "implementation", "math" ]
800
t = int(input())
for _ in range(t):
    n = input()
    s = input()
    ans = 0
    cnt = 2
    for c in s:
        if c == '1':
            cnt += 1
        else:
            ans += max(2 - cnt, 0)
            cnt = 0
    print(ans)
1658
B
Marin and Anti-coprime Permutation
Marin wants you to count the number of permutations that are beautiful. A beautiful permutation of length $n$ is a permutation that has the following property: $$ \gcd (1 \cdot p_1, \, 2 \cdot p_2, \, \dots, \, n \cdot p_n) > 1, $$ where $\gcd$ is the greatest common divisor. A permutation is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array) and $[1,3, 4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
Let $g = \gcd (1 \cdot p_1, \, 2 \cdot p_2, \, \dots, \, n \cdot p_n)$. What is the maximum value of $g$? For each value of $g$, can you construct a satisfying permutation? We can prove that $g \leq 2$. Assume $g > 2$. If there exists a prime number $p > 2$ with $p \mid g$, then every product $i \cdot p_i$ must be divisible by $p$; but only $\left \lfloor \dfrac{n}{p} \right \rfloor$ indices and $\left \lfloor \dfrac{n}{p} \right \rfloor$ values are divisible by $p$, so at most $2 \left \lfloor \dfrac{n}{p} \right \rfloor \leq 2 \left \lfloor \dfrac{n}{3} \right \rfloor$ positions can have $p \mid i \cdot p_i$, which is smaller than $n$. Otherwise $g$ is a power of $2$. If $2 \mid g$, every product $i \cdot p_i$ is even, so every odd position must hold an even value, and hence every even position holds an odd value; in particular $p_2$ is odd, so $2 \cdot p_2$ is not divisible by $2^k$ with $k > 1$, and $4 \nmid g$. Therefore, $g \leq 2$, and $g = 2$ is achieved by matching odd numbers with even positions and even numbers with odd positions. Consequently: if $n$ is odd, the answer is $0$, since there are more odd numbers than even positions. Otherwise, we match odd with even and vice versa: placing the odd numbers in the even positions gives $(\dfrac{n}{2})!$ ways, and likewise for the even numbers in the odd positions, so by the multiplication rule the answer is $((\dfrac{n}{2})!)^2$.
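The closed form can be evaluated directly; a small sketch (the function name is ours):

```python
MOD = 998244353

def beautiful_permutations(n: int) -> int:
    """((n/2)!)^2 mod 998244353 for even n, else 0 (the closed form above)."""
    if n % 2 == 1:
        return 0
    f = 1
    for i in range(2, n // 2 + 1):   # (n/2)! modulo MOD
        f = f * i % MOD
    return f * f % MOD
```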
[ "combinatorics", "math", "number theory" ]
800
MOD = 998244353
t = int(input())
for _ in range(t):
    n = int(input())
    if n & 1:
        print(0)
        continue
    ans = 1
    for i in range(1, n // 2 + 1):
        ans = ans * i % MOD
    ans = ans * ans % MOD
    print(ans)
1658
C
Shinju and the Lost Permutation
Shinju loves permutations very much! Today, she has borrowed a permutation $p$ from Juju to play with. The $i$-th cyclic shift of a permutation $p$ is a transformation on the permutation such that $p = [p_1, p_2, \ldots, p_n] $ will now become $ p = [p_{n-i+1}, \ldots, p_n, p_1,p_2, \ldots, p_{n-i}]$. Let's define the power of permutation $p$ as the number of distinct elements in the prefix maximums array $b$ of the permutation. The prefix maximums array $b$ is the array of length $n$ such that $b_i = \max(p_1, p_2, \ldots, p_i)$. For example, the power of $[1, 2, 5, 4, 6, 3]$ is $4$ since $b=[1,2,5,5,6,6]$ and there are $4$ distinct elements in $b$. Unfortunately, Shinju has lost the permutation $p$! The only information she remembers is an array $c$, where $c_i$ is the power of the $(i-1)$-th cyclic shift of the permutation $p$. She's also not confident that she remembers it correctly, so she wants to know if her memory is good enough. Given the array $c$, determine if there exists a permutation $p$ that is consistent with $c$. You do \textbf{not} have to construct the permutation $p$. A permutation is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array) and $[1,3, 4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
There is exactly one $1$ in the array $c$: in the $(i - 1)$-th cyclic shift, $c_i > 1$ if $p_1 \neq n$, and $c_i = 1$ if $p_1 = n$. So if the number of $1$s in $c$ differs from one, the answer is $\texttt{NO}$. We can cyclically rotate the array so that $c_1 = 1$ (the initial state), because we don't have to construct the permutation; so it suffices to decide whether a permutation with $p_1 = n$ exists. Now look at the difference between $c_i$ and $c_{i + 1}$. In the $i$-th cyclic shift, if $p_1 > p_2$ then $c_{i + 1} \leq c_i$, otherwise $c_{i + 1} - c_i = 1$; so if there exists a position $i$ such that $c_{i + 1} - c_i > 1$, the answer is $\texttt{NO}$. This condition is also sufficient: it can be shown that if $c_{i + 1} - c_i \leq 1$ holds cyclically for all $i$, then a satisfying permutation exists. Here is a sketch of the construction: https://codeforces.com/blog/entry/101302?#comment-899523
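Both checks together, as a sketch (the cyclic wrap-around is handled with an index modulo $n$; the function name is ours):

```python
def possible(c):
    """The two necessary-and-sufficient conditions on the power array:
    exactly one 1, and no cyclic increase of more than 1."""
    n = len(c)
    if c.count(1) != 1:
        return False
    return all(c[(i + 1) % n] - c[i] <= 1 for i in range(n))
```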
[ "constructive algorithms", "math" ]
1,700
import sys
input = sys.stdin.readline

t = int(input())
for _ in range(t):
    n = int(input())
    a = list(map(int, input().split()))
    if a.count(1) != 1:
        print("NO")
        continue
    a.append(a[0])
    ok = True
    for i in range(0, n):
        if a[i + 1] - a[i] > 1:
            ok = False
            break
    if ok:
        print("YES")
    else:
        print("NO")
1658
D1
388535 (Easy Version)
This is the easy version of the problem. The difference in the constraints between both versions is colored below in red. You can make hacks only if all versions of the problem are solved. Marin and Gojou are playing hide-and-seek with an array. Gojou initially performs the following steps: - First, Gojou chooses $2$ integers $l$ and $r$ such that $l \leq r$. - Then, Gojou makes an array $a$ of length $r-l+1$ which is a permutation of the array $[l,l+1,\ldots,r]$. - Finally, Gojou chooses a secret integer $x$ and sets $a_i$ to $a_i \oplus x$ for all $i$ (where $\oplus$ denotes the bitwise XOR operation). Marin is then given the values of $l,r$ and the final array $a$. She needs to find the secret integer $x$ to win. Can you help her? Note that there may be multiple possible $x$ that Gojou could have chosen. Marin can find any possible $x$ that could have resulted in the final value of $a$.
Let's look at the binary representations of the numbers from $0$ to $7$: $000$, $001$, $010$, $011$, $100$, $101$, $110$, $111$. If we look at the $i$-th bit only (say, $i=2$), we get a sequence like $[0,0,1,1,0,0,1,1]$. Notice that the number of zeros equals the number of ones in a prefix only when the length of the prefix is a multiple of $2^i$; otherwise, there are more zeros than ones. So, we count the number of set and unset bits at each bit position. If the number of ones is greater than the number of zeros, the $i$-th bit of $x$ must be $1$. If the number of zeros is greater than the number of ones, the $i$-th bit of $x$ must be $0$. If the number of ones is equal to the number of zeros, the $i$-th bit of $x$ can be either $0$ or $1$; the rough sketch of the proof is that, in this case, if $x$ is a valid answer then so is $x \oplus 2^i$.
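A sketch of the per-bit majority vote (assuming the easy version's $l = 0$; the bit-width parameter and the function name are ours):

```python
def recover_x(a, bits=17):
    """Majority vote per bit: the i-th bit of x is 1 exactly when set bits
    outnumber unset bits among the a_j at position i."""
    x = 0
    for i in range(bits):
        ones = sum((v >> i) & 1 for v in a)
        if 2 * ones > len(a):
            x |= 1 << i
    return x
```

Validity can be checked by verifying that XOR-ing the answer back yields the range $[l, r]$.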
[ "bitmasks", "math" ]
1,600
import sys
input = sys.stdin.readline

t = int(input())
for _ in range(t):
    l, r = map(int, input().split())
    a = list(map(int, input().split()))
    cnt = [[0] * 2 for _ in range(32)]
    for x in a:
        for i in range(31):
            cnt[i][x & 1] += 1
            x >>= 1
    ans = 0
    for i in range(31):
        if cnt[i][0] < cnt[i][1]:
            ans |= (1 << i)
    print(ans)
1658
D2
388535 (Hard Version)
This is the hard version of the problem. The difference in the constraints between both versions is colored below in red. You can make hacks only if all versions of the problem are solved. Marin and Gojou are playing hide-and-seek with an array. Gojou initially performs the following steps: - First, Gojou chooses $2$ integers $l$ and $r$ such that $l \leq r$. - Then, Gojou makes an array $a$ of length $r-l+1$ which is a permutation of the array $[l,l+1,\ldots,r]$. - Finally, Gojou chooses a secret integer $x$ and sets $a_i$ to $a_i \oplus x$ for all $i$ (where $\oplus$ denotes the bitwise XOR operation). Marin is then given the values of $l,r$ and the final array $a$. She needs to find the secret integer $x$ to win. Can you help her? Note that there may be multiple possible $x$ that Gojou could have chosen. Marin can find any possible $x$ that could have resulted in the final value of $a$.
If $a \oplus b = 1$, then $(a \oplus x) \oplus (b \oplus x) = 1$. If $l$ is even and $r$ is odd, the last bit of $x$ can be either $0$ or $1$ (we can pair each $a_i$ with $a_i \oplus 1$). There are two solutions to this problem. If $l$ is even and $r$ is odd, we can drop the last bit, halve the range, and solve recursively. Otherwise, we pair $a_i$ with $a_i \oplus 1$, and the unpaired elements leave at most $2$ candidates for $x$ to check. There is also another solution: we can iterate over all possibilities for $x$ (by assuming $a_i = l \oplus x$ for each $1 \leq i \leq n$ in turn). If $x$ is the hidden number, then $l \leq a_i \oplus x \leq r$ for all $1 \leq i \leq n$, so the problem reduces to "count the number of $a_i$ with $l \leq a_i \oplus x \leq r$", which can be solved with a binary trie.
[ "bitmasks", "brute force", "data structures", "math" ]
2,300
import sys
input = sys.stdin.readline

def solve(l: int, r: int, s: set):
    if l % 2 == 0 and r % 2 == 1:
        t = set()
        for v in s:
            t.add(v >> 1)
        return solve(l >> 1, r >> 1, t) << 1
    else:
        for v in s:
            if (v ^ 1) not in s:
                ok = True
                ans = v
                if l % 2 == 0:
                    ans ^= r
                else:
                    ans ^= l
                for x in s:
                    if (x ^ ans) < l or (x ^ ans) > r:
                        ok = False
                        break
                if ok:
                    return ans

t = int(input())
for _ in range(t):
    l, r = map(int, input().split())
    s = set(map(int, input().split()))
    print(solve(l, r, s))
1658
E
Gojou and Matrix Game
Marin feels exhausted after a long day of cosplay, so Gojou invites her to play a game! Marin and Gojou take turns to place one of their tokens on an $n \times n$ grid with Marin starting first. There are some restrictions and allowances on where to place tokens: - Apart from the first move, the token placed by a player must be more than Manhattan distance $k$ away from the previous token placed on the matrix. In other words, if a player places a token at $(x_1, y_1)$, then the token placed \textbf{by the other player} in the next move must be in a cell $(x_2, y_2)$ satisfying $|x_2 - x_1| + |y_2 - y_1| > k$. - Apart from the previous restriction, a token can be placed anywhere on the matrix, \textbf{including cells where tokens were previously placed by any player}. Whenever a player places a token on cell $(x, y)$, that player gets $v_{x,\ y}$ points. All values of $v$ on the grid are \textbf{distinct}. You still get points from a cell even if tokens were already placed onto the cell. The game finishes when each player makes $10^{100}$ moves. Marin and Gojou will play $n^2$ games. For each cell of the grid, there will be exactly one game where Marin places a token on that cell on her first move. Please answer for each game, if Marin and Gojou play optimally (after Marin's first move), who will have more points at the end? Or will the game end in a draw (both players have the same points at the end)?
What if a player is forced to play in a cell where they get fewer points than the previous move? Rephrase the problem, then solve it with dynamic programming. Suppose that Marin places a token at $(a,b)$. If Gojou places a token at $(c,d)$ with $V_{c,d}<V_{a,b}$, then Gojou has no advantage, since Marin can simply play at $(a,b)$ again; after the huge number of turns, Marin will have more points than Gojou. In general, a player who is forced to play in a cell worth fewer points than the previous move instantly loses. Therefore, we can rephrase the problem as follows: apart from the first move, the token placed by a player must be more than Manhattan distance $k$ away from the previous token, and must be on a cell with more points than the cell of the token placed by the previous player; the player who plays the last token wins. This is now a standard dynamic programming problem. Let $dp[i][j]$ be $1$ if the player who places a token at $(i,j)$ wins. For the cell $(a,b)$ with $V_{a,b}=n^2$ we have $dp[a][b]=1$ as a base case. Then we fill the values of $dp$ in decreasing order of $V_{i,j}$: $dp[i][j]=1$ if for all $(i',j')$ such that $|i-i'|+|j-j'| > k$, we have $dp[i'][j']=0$. Note that by taking the contrapositive, this is equivalent to: for all $(i',j')$ such that $dp[i'][j']=1$, we have $|i-i'|+|j-j'| \leq k$. Let us maintain the set $S$ of pairs $(i',j')$ with $dp[i'][j']=1$; the operations we need are: add a point $(i,j)$ to $S$, and check whether $|i-i'|+|j-j'| \leq k$ holds for all $(i',j') \in S$. Notice that $|i-i'|+|j-j'| \leq k \Leftrightarrow \max(|(i+j)-(i'+j')|,|(i-j)-(i'-j')|) \leq k$. Checking $\max(|(i+j)-(i'+j')|,|(i-j)-(i'-j')|) \leq k$ is very simple, as we only need to store the minimum and maximum of $(i+j)$ and $(i-j)$ over $S$.
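The coordinate rotation used in the last step can be sanity-checked by brute force; the helper below is an illustration (names are ours):

```python
def manhattan_leq_k_via_rotation(p, q, k):
    """|x1-x2|+|y1-y2| <= k iff max(|(x1+y1)-(x2+y2)|, |(x1-y1)-(x2-y2)|) <= k:
    rotating coordinates turns Manhattan balls into axis-aligned squares, so a
    whole set can be checked via the 4 extremes of (x+y) and (x-y)."""
    (x1, y1), (x2, y2) = p, q
    return max(abs((x1 + y1) - (x2 + y2)), abs((x1 - y1) - (x2 - y2))) <= k

# brute-force sanity check of the identity on a small grid
for x1 in range(5):
    for y1 in range(5):
        for x2 in range(5):
            for y2 in range(5):
                for k in range(9):
                    assert (abs(x1 - x2) + abs(y1 - y2) <= k) == \
                           manhattan_leq_k_via_rotation((x1, y1), (x2, y2), k)
```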
[ "data structures", "dp", "games", "hashing", "implementation", "math", "number theory", "sortings" ]
2,500
#include <bits/stdc++.h>

using namespace std;

const int N = 2e3 + 5;

bool f[N][N];
int l, r, u, d, n, k;
vector<array<int, 5>> v;

bool check(int i, int j) {
    return (abs(i - u) <= k && abs(i - d) <= k && abs(j - l) <= k && abs(j - r) <= k);
}

void solve(){
    cin >> n >> k;
    for (int i = 1; i <= n; ++i) {
        for (int j = 1; j <= n; ++j) {
            f[i][j] = false;
            int x;
            cin >> x;
            v.push_back({x, i, j, i + j, j - i});
        }
    }
    sort(v.begin(), v.end(), greater<array<int, 5>>());
    l = v[0][4], r = v[0][4], u = v[0][3], d = v[0][3];
    for (array<int, 5> cell: v) {
        if (check(cell[3], cell[4])) {
            f[cell[1]][cell[2]] = true;
            l = min(l, cell[4]);
            r = max(r, cell[4]);
            u = min(u, cell[3]);
            d = max(d, cell[3]);
        }
    }
    for (int i = 1; i <= n; ++i) {
        for (int j = 1; j <= n; ++j) {
            if (f[i][j]) cout << 'M';
            else cout << 'G';
        }
        cout << '\n';
    }
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    solve();
    return 0;
}
1658
F
Juju and Binary String
The cuteness of a binary string is the number of $1$s divided by the length of the string. For example, the cuteness of $01101$ is $\frac{3}{5}$. Juju has a binary string $s$ of length $n$. She wants to choose some non-intersecting subsegments of $s$ such that their concatenation has length $m$ and it has the same cuteness as the string $s$. More specifically, she wants to find two arrays $l$ and $r$ of equal length $k$ such that $1 \leq l_1 \leq r_1 < l_2 \leq r_2 < \ldots < l_k \leq r_k \leq n$, and also: - $\sum\limits_{i=1}^k (r_i - l_i + 1) = m$; - The cuteness of $s[l_1,r_1]+s[l_2,r_2]+\ldots+s[l_k,r_k]$ is equal to the cuteness of $s$, where $s[x, y]$ denotes the subsegment $s_x s_{x+1} \ldots s_y$, and $+$ denotes string concatenation. Juju does not like splitting the string into many parts, so she also wants to \textbf{minimize} the value of $k$. Find the minimum value of $k$ such that there exist $l$ and $r$ that satisfy the constraints above or determine that it is impossible to find such $l$ and $r$ for any $k$.
Let $b$ be the number of $\texttt{1}$s and $w$ be the number of $\texttt{0}$s in $s$ (so $n = b + w$). The answer is impossible if $m$ is not a multiple of $\dfrac{b+w}{\gcd(b, w)}$. It is easy to show that the cuteness of $s$ is $\dfrac{b}{n} = \dfrac{b}{b+w}$. How many $\texttt{1}$s must the concatenated string contain for an answer to exist? Consider $t = s[l_1, r_1] + s[l_2, r_2] + \dots + s[l_k, r_k]$. The cuteness of $t$ must also be $\dfrac{b}{b+w}$, so the number of $\texttt{1}$s needed is $\dfrac{m \cdot b}{b + w}$. If this is not an integer, there is no answer; this happens exactly when $m$ is not a multiple of $\dfrac{b+w}{\gcd(b, w)}$. We never need more than $2$ parts; that is, $k \leq 2$ always suffices. Let $c_i$ be the number of $\texttt{1}$s in $s[i \dots i + m - 1]$. For convenience, from now on assume the array and the string wrap around: $s[i] = s[i + n]$ and $c_i = c_{i+n}$. We have $|c_i-c_{i+1}| \leq 1$, and there exists some $c_i = y$ for every $y$ with $\min(c_i) \leq y \leq \max(c_i)$. Proof sketch: let $L = \min(c_i)$, $R = \max(c_i)$, and let $p$ be the minimal position with $c_p = L$. Define $d_i = \max(c_p, c_{p+1}, \dots, c_i)$, and let $f_x$ be the first position $i \geq p$ with $d_i = x$; note that $d_{f_x} = c_{f_x}$. [1] We have $L = d_p \leq d_{p+1} \leq \dots \leq d_{p+n-1} = R$. [2] Since $|c_{i+1} - c_i| \leq 1$, for every $p < i < p + n$ either $d_i = d_{i-1}$ or $d_i = d_{i-1} + 1$. [3] Since $c_i \in \mathbb{N}$, we also have $d_i \in \mathbb{N}$. From [1], [2], [3], the sequence $d$ steps through every integer from $L$ to $R$, so $f_y$ is defined for every $L \leq y \leq R$, and $c_{f_y} = d_{f_y} = y$.
In particular, the required number of $\texttt{1}$s, $y = \dfrac{m \cdot b}{b+w}$, satisfies $L \leq y \leq R$, because the average of $c_i$ over all $n$ cyclic windows is exactly $\dfrac{m \cdot b}{n}$; hence some cyclic window attains it. To recap: the answer is impossible if $m$ is not a multiple of $\dfrac{b+w}{\gcd(b, w)}$; otherwise, as proved above, $k \leq 2$ suffices. Since we need to select a minimum $k$, first check whether there is a solution with $k = 1$: if there exists a position $p$ with $1 \leq p \leq n - m + 1$ such that $s[p, p + m - 1]$ has the same cuteness as $s[1, n]$, we have found an answer. Otherwise there must be an answer with $k = 2$, and it is simple too: there exists a position $p$ with $1 \leq p < m$ such that $s[1, p] \cup s[n - (m - p) + 1, n]$ has the same cuteness as $s[1, n]$. Both checks can be done in linear $O(n)$ time with the sliding-window technique or prefix sums.
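A hedged Python sketch of the final algorithm (prefix sums over the doubled string; the function name and the segment-list return format are ours):

```python
def juju(n, m, s):
    """Returns a list of at most two (l, r) 1-based segments of total length m
    whose concatenation has the same cuteness as s, or None if impossible."""
    ones = s.count('1')
    if ones * m % n != 0:
        return None
    need = ones * m // n                   # 1s required in the window
    pref = [0] * (2 * n + 1)               # prefix sums over s doubled
    for i in range(1, 2 * n + 1):
        pref[i] = pref[i - 1] + (s[(i - 1) % n] == '1')
    for i in range(m, 2 * n + 1):          # window covering positions i-m+1..i
        if pref[i] - pref[i - m] == need:
            if i <= n:                     # window fits without wrapping
                return [(i - m + 1, i)]
            # wrapped window = prefix s[1..i-n] plus suffix s[i-m+1..n]
            return [(1, i - n), (i - m + 1, n)]
    return None
```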
[ "brute force", "constructive algorithms", "greedy", "math" ]
2,700
#include <bits/stdc++.h>

using namespace std;

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    int t;
    cin >> t;
    while (t--) {
        int n, k;
        cin >> n >> k;
        string s;
        cin >> s;
        int one = count_if(s.begin(), s.end(), [](char c) { return c == '1'; });
        if (1LL * one * k % n != 0) {
            cout << "-1\n";
        } else {
            one = 1LL * one * k / n;
            vector<int> suf(2 * n + 1);
            for (int i = 1; i <= 2 * n; i++) {
                suf[i] = suf[i - 1] + (s[(i - 1) % n] == '1');
                if (i >= k && suf[i] == suf[i - k] + one) {
                    if (i <= n) {
                        cout << "1\n";
                        cout << i - k + 1 << " " << i << '\n';
                    } else {
                        cout << "2\n";
                        cout << 1 << " " << i - n << '\n';
                        cout << i - k + 1 << " " << n << '\n';
                    }
                    break;
                }
            }
        }
    }
}
1659
A
Red Versus Blue
Team Red and Team Blue competed in a competitive FPS. Their match was streamed around the world. They played a series of $n$ matches. In the end, it turned out Team Red won $r$ times and Team Blue won $b$ times. Team Blue was less skilled than Team Red, so $b$ was \textbf{strictly less} than $r$. You missed the stream since you overslept, but you think that the match must have been neck and neck since so many people watched it. So you imagine a string of length $n$ where the $i$-th character denotes who won the $i$-th match  — it is R if Team Red won or B if Team Blue won. You imagine the string was such that the \textbf{maximum} number of times a team \textbf{won in a row} was \textbf{as small as possible}. For example, in the series of matches RBBRRRB, Team Red won $3$ times in a row, which is the maximum. You must find a string satisfying the above conditions. If there are multiple answers, print any.
We have $b$ B's which divide the string into $b + 1$ regions and we have to place the R's in these regions. By the strong form of the pigeonhole principle, at least one region must have at least $\lceil\frac{r}{b + 1}\rceil$ R's. This gives us a lower bound on the answer. Now, we will construct a string whose answer is exactly equal to the lower bound. We place the B's so that they are not adjacent. Then we equally distribute the R's in the $b + 1$ regions. Let $p = \lfloor\frac{r}{b + 1}\rfloor$ and $q = r\bmod(b + 1)$. We place $p$ R's in each region and an extra R in any $q$ regions. Hence, our answer for the construction is $\lceil\frac{r}{b + 1}\rceil$, which is equal to the lower bound. Importantly, $r > b$, so none of the B's will be consecutive. Time complexity: $\mathcal{O}(n)$.
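The even distribution above can be sketched in a few lines (a hypothetical helper `build`, separate from the reference solution below): split the $r$ R's into $b+1$ regions, giving one extra R to the first $q$ regions.

```python
def build(r, b):
    # b B's cut the string into b + 1 regions; distribute the r R's
    # evenly: q regions get p + 1 R's, the remaining ones get p.
    p, q = divmod(r, b + 1)
    return "B".join("R" * (p + 1) if i < q else "R" * p for i in range(b + 1))
```

Since adjacent regions are separated by single B's and each region has at most $\lceil\frac{r}{b+1}\rceil$ R's, the maximum run matches the pigeonhole lower bound.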
[ "constructive algorithms", "greedy", "implementation", "math" ]
1,000
t = int(input()) for i in range(t): n, r, b = map(int, input().split()) p = r % (b + 1) y = "" for j in range(int(r / (b + 1))): y = y + "R" ans = "" for i in range(b + 1): if i > 0: ans = ans + "B" ans = ans + y if p > 0: ans = ans + "R" p = p - 1 print(ans)
1659
B
Bit Flipping
You are given a binary string of length $n$. You have \textbf{exactly} $k$ moves. In one move, you must select a single bit. The state of all bits \textbf{except} that bit will get flipped ($0$ becomes $1$, $1$ becomes $0$). You need to output the lexicographically largest string that you can get after using \textbf{all} $k$ moves. Also, output the number of times you will select each bit. If there are multiple ways to do this, you may output any of them. A binary string $a$ is lexicographically larger than a binary string $b$ of the same length, if and only if the following holds: - in the first position where $a$ and $b$ differ, the string $a$ contains a $1$, and the string $b$ contains a $0$.
Let's see how many times a given bit will get flipped. Clearly, a bit gets flipped whenever it is not selected in an operation, so if $f_i$ denotes the number of times the $i$-th bit is selected, it gets flipped $k-f_i$ times. We want to select a bit as few times as possible. Now we can handle a few cases. $k$ is even, bit $i$ is $1$ $\Rightarrow$ $f_i=0$ (an even number of flips doesn't change the bit) $k$ is even, bit $i$ is $0$ $\Rightarrow$ $f_i=1$ (an odd number of flips toggles the bit from $0$ to $1$) $k$ is odd, bit $i$ is $1$ $\Rightarrow$ $f_i=1$ (an even number of flips doesn't change the bit) $k$ is odd, bit $i$ is $0$ $\Rightarrow$ $f_i=0$ (an odd number of flips toggles the bit from $0$ to $1$) Process the string greedily from left to right, spending a selection whenever the case analysis calls for $f_i=1$, until you run out of moves. If you still have some remaining moves at the end, you can just give them all to the last bit. Then you can construct the final string by checking the parity of $k-f_i$. Time complexity: $\mathcal{O}(n)$
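A minimal sketch of this greedy (a hypothetical helper `solve`, mirroring the case analysis; the reference solution below is the authoritative version):

```python
def solve(k, s):
    # f[i] = number of times bit i is selected; bit i is flipped k - f[i] times.
    n = len(s)
    f = [0] * n
    left = k
    for i in range(n):
        if left == 0:
            break
        # select once exactly when that makes bit i end up as '1':
        # k odd -> select '1' bits, k even -> select '0' bits
        if (k % 2 == 1) == (s[i] == '1'):
            f[i] = 1
            left -= 1
    f[n - 1] += left  # dump any leftover moves on the last bit
    res = "".join(c if (k - fi) % 2 == 0 else chr(ord('0') + ord('1') - ord(c))
                  for c, fi in zip(s, f))
    return res, f
```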
[ "bitmasks", "constructive algorithms", "greedy", "strings" ]
1,300
t = int(input()) for i in range(t): n, k = map(int, input().split()) kc = k s = input() f = [0] * n ans = "" for i in range(n): if k == 0: break if kc % 2 == 1 and s[i] == '1': f[i] = f[i] + 1 k = k - 1 elif kc % 2 == 0 and s[i] == '0': f[i] = f[i] + 1 k = k - 1 f[n - 1] = f[n - 1] + k for i in range(n): flip = kc - f[i] if flip % 2 == 0: ans = ans + s[i] else: if s[i] == '1': ans = ans + '0' else: ans = ans + '1' print(ans) for i in range(n): print(f[i], end = ' ') print()
1659
C
Line Empire
You are an ambitious king who wants to be the Emperor of The Reals. But to do that, you must first become Emperor of The Integers. Consider a number axis. The capital of your empire is initially at $0$. There are $n$ unconquered kingdoms at positions $0<x_1<x_2<\ldots<x_n$. You want to conquer all other kingdoms. There are two actions available to you: - You can change the location of your capital (let its current position be $c_1$) to any other \textbf{conquered} kingdom (let its position be $c_2$) at a cost of $a\cdot |c_1-c_2|$. - From the current capital (let its current position be $c_1$) you can conquer an unconquered kingdom (let its position be $c_2$) at a cost of $b\cdot |c_1-c_2|$. You \textbf{cannot} conquer a kingdom if there is an unconquered kingdom between the target and your capital. Note that you \textbf{cannot} place the capital at a point without a kingdom. In other words, at any point, your capital can only be at $0$ or one of $x_1,x_2,\ldots,x_n$. Also note that conquering a kingdom does not change the position of your capital. Find the minimum total cost to conquer all kingdoms. Your capital can be anywhere at the end.
Clearly, we should always move from left to right. Also, assume $x_0=0$ for simplicity. Let us analyze what our cost would look like. It will be composed of a part due to moving capitals, and a part due to conquering kingdoms. If we shift our capital from $x_i$ to $x_j$, the cost is $a\cdot(x_j-x_i)$. If we conquer kingdoms from $(i,j]$ with capital $x_i$, the cost is $b\cdot((x_{i+1}-x_i)+(x_{i+2}-x_i)+\ldots+(x_j-x_i))$, which can be written as $b\cdot(p_j-p_i-(j-i)\cdot x_i)$, where $p_i=x_0+x_1+\ldots+x_i$. Now, notice that the terms $x_j-x_i$ and $p_j-p_i$ telescope. If we isolate the parts involving $x_j-x_i$, the sum looks like $(x_{j_1}-x_0)+(x_{j_2}-x_{j_1})+\ldots+(x_{j_f}-x_{j_{f-1}})=x_{j_f}-x_0$. This means we can simply write the final sum of this part as $a\cdot x_f$, where $x_f$ is the final position of the capital. We can say the same thing about $p_j-p_i$, except that the final kingdom conquered is always $x_n$. So the final sum of this part is always $b\cdot p_n$ ($x_0=p_0=0$, so they weren't written explicitly). Our final cost, then, looks like $T=a\cdot x_f+b\cdot (p_n-C)$, where $C$ is composed of terms like $(j-i)\cdot x_i$. If we want to minimise $T$, we want to maximise $C$. Since the $x_i$ are increasing, that is achieved by moving the capital forward one kingdom at a time, so that every conquest is made from the rightmost possible capital! Then we can write $C=x_0+x_1+\ldots+x_{f-1}+\underbrace{x_f+\ldots+x_f}_{n-f\text{ }\mathrm{times}}=p_f+(n-f-1)\cdot x_f$ Hence, our final answer is given by $\min_{f \in [0,n]}{(a\cdot x_f+b\cdot (p_n-p_f-(n-f-1)\cdot x_f))}$ Time complexity: $\mathcal{O}(n)$
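The closed form can be evaluated directly (a hypothetical helper `min_cost`; the position and prefix-sum names follow the editorial):

```python
def min_cost(a, b, xs):
    # x[0] = 0 is the initial capital; p[i] = x[0] + ... + x[i].
    n = len(xs)
    x = [0] + xs
    p = [0] * (n + 1)
    for i in range(1, n + 1):
        p[i] = p[i - 1] + x[i]
    # answer = min over final capital f of a*x_f + b*(p_n - p_f - (n-f-1)*x_f)
    return min(a * x[f] + b * (p[n] - p[f] - (n - f - 1) * x[f])
               for f in range(n + 1))
```

For example, with $a=2$, $b=7$ and kingdoms at $3,5,12,13,21$ the minimum is attained at $f=4$ ($x_f=13$).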
[ "binary search", "brute force", "dp", "greedy", "implementation", "math" ]
1,500
#include<bits/stdc++.h> using namespace std; using lol=long long int; #define endl "\n" const lol inf=1e18+8; int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); int _=1; cin>>_; while(_--) { int n; lol a,b; cin>>n>>a>>b; vector<lol> x(n+1),p(n+1); x[0]=0; for(int i=1;i<=n;i++) cin>>x[i]; partial_sum(x.begin(),x.end(),p.begin()); lol ans=inf; for(int i=0;i<=n;i++) { ans=min(ans,(a+b)*(x[i]-x[0])+b*(p[n]-p[i]-(n-i)*x[i])); } cout<<ans<<endl; } return 0; }
1659
D
Reverse Sort Sum
Suppose you had an array $A$ of $n$ elements, each of which is $0$ or $1$. Let us define a function $f(k,A)$ which returns another array $B$, the result of sorting the first $k$ elements of $A$ in non-decreasing order. For example, $f(4,[0,1,1,0,0,1,0]) = [0,0,1,1,0,1,0]$. Note that the first $4$ elements were sorted. Now consider the arrays $B_1, B_2,\ldots, B_n$ generated by $f(1,A), f(2,A),\ldots,f(n,A)$. Let $C$ be the array obtained by taking the element-wise sum of $B_1, B_2,\ldots, B_n$. For example, let $A=[0,1,0,1]$. Then we have $B_1=[0,1,0,1]$, $B_2=[0,1,0,1]$, $B_3=[0,0,1,1]$, $B_4=[0,0,1,1]$. Then $C=B_1+B_2+B_3+B_4=[0,1,0,1]+[0,1,0,1]+[0,0,1,1]+[0,0,1,1]=[0,2,2,4]$. You are given $C$. Determine a binary array $A$ that would give $C$ when processed as above. It is guaranteed that an array $A$ exists for given $C$ in the input.
The first thing to notice is that any $1$ in the initial array $A$ will contribute to the sum of the elements of array $C$ exactly $n$ times. That means, if $S=c_1+c_2+...+c_n$, then $S$ must be divisible by $n$. Let $k=\frac{S}{n}$ be the number of $1$s in the initial array $A$. Observation: The $1$s in $B_n$ form a suffix of it. We'll process the array $C$ from right to left. Assume $a_n$ was $1$. Then, it is clear that $c_n=n$, and we can place a $1$ there. Now assume $a_n$ was $0$. Then, it is clear that if there were any $1$s in $A$ (in other words, if $k>0$), then $c_n=1$; otherwise $c_n=0$. If this is the case, we can place a $0$ there and move on. Finally, we must subtract $1$ from each of the last $k$ elements (provided $k>0$), and decrement $k$ if a $1$ was placed. In essence, we simulated removing $B_n$ from the element-wise sum. Once we've processed $c_n$, we can forget about it and continue solving the problem assuming there are $n-1$ elements, and so on. The last thing is to handle subtracting $1$ from the last $k$ elements. It is possible to do it using a segment tree/BIT, but that is overkill for this problem. A simpler way is to maintain a left border for the suffix, and keep track of when the border crosses an element (say $t_i$) and when we reach it. Then we can get a simple formula for the value of the element that looks something like $c_i-(t_i-i)$. Time complexity: $\mathcal{O}(n)$ or $\mathcal{O}(n\log{n})$
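The statement's forward transform is easy to write down, which makes it handy for checking any reconstruction (a hypothetical helper `make_c`, not part of the solution below):

```python
def make_c(a):
    # C is the element-wise sum of B_k = f(k, A) for k = 1..n,
    # where f(k, A) sorts the first k elements of A.
    n = len(a)
    c = [0] * n
    for k in range(1, n + 1):
        b = sorted(a[:k]) + a[k:]
        for i in range(n):
            c[i] += b[i]
    return c
```

It reproduces the worked example $A=[0,1,0,1] \Rightarrow C=[0,2,2,4]$, and the sum of $C$ is always $n$ times the number of $1$s.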
[ "constructive algorithms", "data structures", "greedy", "implementation", "math", "two pointers" ]
1,900
#include<bits/stdc++.h> using namespace std; using lol=long long int; #define endl "\n" int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); int _=1; cin>>_; while(_--) { int n; cin>>n; vector<lol> v(n); for(auto& e:v) cin>>e; int k=accumulate(v.begin(),v.end(),0ll)/n; vector<int> b(n),ans(n,0); int lf=n-k; for(int i=lf;i<n;i++) b[i]=n-1; for(int i=n-1;i>=0 && lf<=i;i--) { int cur=v[i]-(b[i]-i); if(cur==i+1) ans[i]=1; else if(cur==1) { ans[i]=0; lf--; b[lf]=i-1; } } for(auto& e:ans) cout<<e<<" "; cout<<endl; } return 0; }
1659
E
AND-MEX Walk
There is an undirected, connected graph with $n$ vertices and $m$ weighted edges. A walk from vertex $u$ to vertex $v$ is defined as a sequence of vertices $p_1,p_2,\ldots,p_k$ (which are not necessarily distinct) starting with $u$ and ending with $v$, such that $p_i$ and $p_{i+1}$ are connected by an edge for $1 \leq i < k$. We define the length of a walk as follows: take the ordered sequence of edges and write down the weights on each of them in an array. Now, write down the bitwise AND of every nonempty prefix of this array. The length of the walk is the MEX of all these values. More formally, let us have $[w_1,w_2,\ldots,w_{k-1}]$ where $w_i$ is the weight of the edge between $p_i$ and $p_{i+1}$. Then the length of the walk is given by $\mathrm{MEX}(\{w_1,\,w_1\& w_2,\,\ldots,\,w_1\& w_2\& \ldots\& w_{k-1}\})$, where $\&$ denotes the bitwise AND operation. Now you must process $q$ queries of the form u v. For each query, find the \textbf{minimum} possible length of a walk from $u$ to $v$. The MEX (minimum excluded) of a set is the smallest non-negative integer that does not belong to the set. For instance: - The MEX of $\{2,1\}$ is $0$, because $0$ does not belong to the set. - The MEX of $\{3,1,0\}$ is $2$, because $0$ and $1$ belong to the set, but $2$ does not. - The MEX of $\{0,3,1,2\}$ is $4$ because $0$, $1$, $2$ and $3$ belong to the set, but $4$ does not.
Observation: The MEX can only be $0$, $1$, or $2$. Proof: Suppose the MEX is greater than $2$. Under repeated bitwise AND, set bits can only turn off, so the sequence of prefix ANDs is non-increasing. A MEX greater than $2$ would imply that we have $2$, $1$ and $0$ in our sequence. However, going from $2$ (10) to $1$ (01) is not possible, as an off bit would have to turn on. Hence, $2$ and $1$ can't both be in our sequence. Case $1$: MEX is $0$. For the MEX to be $0$, there must be a walk from $u$ to $v$ such that the bitwise AND of the weights of all the edges in that walk is non-zero. This implies that there exists some bit which is on in all the edges of that walk. To check this, we can loop over all possible bits, i.e. $i$ from $0$ to $29$, and construct a new graph for each bit while only adding the edges which have the $i$-th bit on. We can use DSU on this new graph and form connected components. This can be processed before taking the queries. In a query, we go through all bits from $0$ to $29$, and if $u$ and $v$ are in the same component for some bit, then we're done and our answer is $0$. Case $2$: MEX is $1$. If we couldn't get the MEX to be $0$, then we know that $0$ is in our sequence. Now, in our walk, if we ever reach a node which has an even edge (say this is the first even edge so far) while our bitwise AND so far is greater than $1$ (it is also odd, since no even edge has been used yet), then including this edge in our walk guarantees a MEX of $1$, since the even edge has the $0$-th bit off. Taking the bitwise AND with this edge guarantees that the last bit stays off until the end of our walk, so we never get $1$ in the sequence. Let us call this node $x$. For a given $u$, if such an $x$ exists, then an answer of $1$ is possible. This also shows us that the value of $v$ is not relevant: after we take the even edge, our MEX is guaranteed to be $1$ and the subsequent weights do not matter. 
For the bitwise AND of the walk to be an odd number greater than $1$, all edges on the walk from $u$ to $x$ must have the $0$-th bit on and the $i$-th bit on for some $i$ in $[1, 29]$. Similar to the previous case, we now loop $i$ from $1$ to $29$ and make a new graph for each bit while only adding edges which have the $0$-th and the $i$-th bits on, and use DSU to form connected components. Within a component, if any node has an even edge, then every node in that component can be the starting point of a walk to get the answer as $1$. Then, we go through all the nodes. If the current node, say, $y$ has an even edge, then we can mark the parent node of $y$'s component indicating that this component has an even edge. In the queries, we can go through all the $29$ graphs and if the parent of $u$ in a graph has been marked, then we know that it's possible to have the MEX as $1$. If not, the answer must be $2$ since MEX cannot exceed $2$. Time complexity: $\mathcal{O}(n\log{w}+(m+q)\alpha(n)\log{w})$
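A sketch of the case-1 check with one DSU per bit (hypothetical helper names; the full solution below also builds the case-2 structures):

```python
class DSU:
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        # path halving
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]
            x = self.p[x]
        return x
    def union(self, a, b):
        self.p[self.find(a)] = self.find(b)

def mex_zero_possible(n, edges, u, v):
    # MEX 0 iff u and v are connected using only edges that all share bit j,
    # for some j in [0, 29].
    for j in range(30):
        d = DSU(n)
        for a, b, w in edges:
            if (w >> j) & 1:
                d.union(a, b)
        if d.find(u) == d.find(v):
            return True
    return False
```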
[ "bitmasks", "brute force", "constructive algorithms", "dfs and similar", "dsu", "graphs" ]
2,200
#pragma GCC optimize("Ofast") #pragma GCC optimize("unroll-loops") #pragma GCC target("avx,avx2,fma") #include <bits/stdc++.h> #include <ext/pb_ds/assoc_container.hpp> #include <ext/pb_ds/trie_policy.hpp> #include <ext/rope> using namespace std; using namespace __gnu_pbds; using namespace __gnu_cxx; mt19937_64 rng(chrono::steady_clock::now().time_since_epoch().count()); #define fi first #define se second #define pb push_back #define eb emplace_back #define mp make_pair #define gcd __gcd #define fastio ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0) #define rep(i, n) for (int i=0; i<(n); i++) #define rep1(i, n) for (int i=1; i<=(n); i++) #define all(x) (x).begin(), (x).end() #define rall(x) (x).rbegin(), (x).rend() #define endl "\n" typedef long long ll; typedef unsigned long long ull; typedef unsigned uint; typedef long double ld; typedef pair<int, int> pii; typedef pair<ll, ll> pll; typedef vector<int> vi; typedef vector<vector<int>> vvi; typedef vector<ll> vll; typedef vector<vector<ll>> vvll; typedef vector<bool> vb; typedef vector<vector<bool>> vvb; template<typename T, typename cmp = less<T>> using ordered_set=tree<T, null_type, cmp, rb_tree_tag, tree_order_statistics_node_update>; typedef trie<string, null_type, trie_string_access_traits<>, pat_trie_tag, trie_prefix_search_node_update> pref_trie; struct dsu { vi d; dsu(int n) : d(n, -1) {} int find(int x) {return d[x] < 0 ? 
x : d[x] = find(d[x]);} void join(int x, int y) { x = find(x), y = find(y); if(x == y) return; if(d[x] > d[y]) swap(x, y); d[x] += d[y]; d[y] = x; } bool is_joined(int x, int y) { return find(x) == find(y); } }; int32_t main() { fastio; int n, m; cin >> n >> m; vector<tuple<int, int, int>> edges; rep(i, m) { int u, v, w; cin >> u >> v >> w; edges.eb(--u, --v, w); } vector<dsu> zero(30, n), one(30, n); rep(j, 30) { for(auto& [u, v, w]: edges) if(w >> j & 1) { zero[j].join(u, v); } } vb even(n); rep1(j, 29) { for(auto& [u, v, w]: edges) if((w >> j & 1)) { one[j].join(u, v); } vb vis(n); for(auto& [u, v, w]: edges) if(!(w & 1)) { vis[one[j].find(u)] = 1; vis[one[j].find(v)] = 1; } rep(i, n) if(vis[one[j].find(i)]) even[i] = 1; } auto check = [&](int u, int v) -> int { rep(j, 30) if(zero[j].is_joined(u, v)) return 0; if(even[u]) return 1; rep1(j, 29) if(one[j].is_joined(u, v)) return 1; return 2; }; int q; cin >> q; while(q--) { int u, v; cin >> u >> v; --u, --v; cout << check(u, v) << endl; } }
1659
F
Tree and Permutation Game
There is a tree of $n$ vertices and a permutation $p$ of size $n$. A token is present on vertex $x$ of the tree. Alice and Bob are playing a game. Alice is in control of the permutation $p$, and Bob is in control of the token on the tree. In Alice's turn, she \textbf{must} pick two \textbf{distinct} \textbf{numbers} $u$ and $v$ (\textbf{not} positions; $u \neq v$), such that the token is neither at vertex $u$ nor vertex $v$ on the tree, and swap their positions in the permutation $p$. In Bob's turn, he \textbf{must} move the token to an adjacent vertex from the one it is currently on. Alice wants to sort the permutation in increasing order. Bob wants to prevent that. Alice wins if the permutation is sorted in increasing order at the beginning or end of her turn. Bob wins if he can make the game go on for an infinite number of moves (which means that Alice is never able to get a sorted permutation). Both players play optimally. Alice makes the first move. Given the tree, the permutation $p$, and the vertex $x$ on which the token initially is, find the winner of the game.
Let us call all such $i$ that satisfy $p_i \neq i$ as marked. If $p_i=i$, it is called unmarked. Also, a notation like $XY$ means "swap $X$ and $Y$ in the permutation". We are going to show that it is always possible for Alice to win if the diameter of the tree is $\geq 3$. First of all, it should be obvious that no matter what moves are made, we will eventually end up at a state where we have just $2$ marked vertices and the token is either on one of them or neither of them (with Alice having to move). In the latter case we have a trivial win. Proof: As long as there are $>2$ marked numbers, you can always find a pair of numbers to swap that will unmark one of the numbers. One can imagine this in terms of the cycle decomposition of the permutation. ----- Next, let us show that from the former state, we can also force a state where the two marked vertices are adjacent to each other (with Alice having to move). Proof: If the two marked vertices are already adjacent, we don't need to do anything. Else, let's say we have $A$ and $B$ where $B$ is the vertex with the token on it. Now, consider two cases. If $A$ is at distance $\geq 3$ from $B$, we can choose any adjacent vertex to $B$. Let it be $C$. Swap $C$ and $A$. Now Bob has the option of moving the token onto $C$ or not. It wouldn't be optimal to move it away from $C$, because then we can force the sequence $BC$ $CA$ and Bob won't be able to cover any vertex with his token. So Bob must move his token onto $C$. Swap $A$ and $B$, which puts $A$ in its right place. Now Bob must move this token onto $B$, so we did it. If $A$ is instead at distance $2$ from $B$, pick the vertex between $B$ and $A$ as $C$, and we can repeat the same analysis as above. ----- Now, let's say we have $A$ and $B$ with $B$ being the vertex with the token on it. Let us show that, if possible, we can always move $A$ over $B$ and $B$ over $A$ (effectively "jumping" over the opposite vertex). 
Proof: For the first one, pick some vertex adjacent to $B$ (not $A$). Let it be $C$. Swap $A$ and $C$. Now Bob has the option of moving to either $C$ or $A$. If he moves to $A$, we can force $BC$ $CA$ and win. If he moves to $C$, we swap $AB$ (putting $A$ in the right place) and Bob has to move his token to $B$. Now we can see that the marked vertex which was $A$, "jumped" over $B$ to reach $C$. For the second one, pick some vertex adjacent to $A$ (not $B$). Let it be $C$. Bob must move his token to $A$. Swap $B$ and $C$ (putting $B$ in the right place). Bob must move his token to $C$. Now we can see that the marked vertex with a token which was $B$, "jumped" over $A$ to reach $C$. ----- Like this, we can move all over the tree. Now let's move such that both of the marked vertices lie on a diameter of the tree and one of them is at one end of it. Consider a tree with diameter $\geq 3$. That means we have a line of at least $4$ vertices taking the two marked vertices at one end. Let's consider just the first $4$ vertices, and show that we can always win here. Proof: Say we have a configuration like $A$-$B$-$C$-$D$ where $A$ and $B$ are marked and $A$ has the token on it. If $B$ has the token on it instead, we can use the moving strategy explained before to first move $A$ to $C$ and then $B$ to $D$ and then it is equivalent to the first configuration. Swap $B$ and $D$. Now Bob must move his token to $B$. Swap $A$ and $D$ (putting $A$ in its right place). Now no matter where Bob moves, after his turn no vertex will be covered and we can force $BD$ and win. ----- Therefore, if the diameter of the tree is $\geq 3$, we always win. If the diameter of the tree is $2$, it is a star graph, and this is a more problematic case. First of all, we must check if the permutation is already sorted, or we can win in the first move. We can only win in the first move if only $1$ swap is required to sort the permutation, and the token is on neither of the numbers we need to swap. 
If the above is not possible, several cases follow. Let us make the following observation first. If Bob is at the center of the star and the center is a marked vertex, Bob can infinitely stall Alice. Proof: Let's call the center $A$, and suppose we need to swap the number $B$ with it to put it in its right place. $B$ is definitely at a leaf, $1$ vertex away, because of the structure of the star graph. So when Bob's turn comes he can simply move the token to where $B$ is and alternate this way between $A$ and $B$, infinitely stalling Alice. Obviously, even if we try swapping $B$ with a different number $C$, Bob can just move to where $C$ is next, and so on, until there are just two marked vertices left, the center and a leaf. So Bob wins. ----- In light of this, it never makes sense to mark the center in our turn if it is unmarked and not covered by the token. So we have $4$ cases to think of now: - Token on center and center is marked vertex: As explained before, Alice loses here. Before discussing the rest of the cases, let us define $d$ as the minimum number of swaps required to sort the permutation, and $x$ as $0$ if the token is on the center and $1$ if it is on a leaf. Now I claim that the parity of $d+x$ is invariant. $d$ changes by exactly $1$ on each of Alice's moves (any swap changes the parity of $d$), and $x$ changes by exactly $1$ on each of Bob's moves (the token alternates between the center and a leaf). So over one full round, considering all possible changes ($-1$,$-1$) ($-1$,$+1$) ($+1$,$-1$) ($+1$,$+1$), the sum of the changes is always $0 \bmod 2$. Hence proved. Consider the possible end states for the game (all with $2$ marked vertices and with Alice having to move): (1) Token at center, center marked (2) Token at unmarked leaf, center marked (3) Token at marked leaf, center marked (4) Token at center, center unmarked (5) Token at unmarked leaf, center unmarked (6) Token at marked leaf, center unmarked Observe that end states $2$ and $5$ will never occur if the game lasts longer than $1$ turn, because if you go back by $1$ turn, Bob would have had a more optimal move. 
Therefore, in states $2$ and $5$ we can win in the very first move. Further observe that states $1$ and $2$ will never occur if the center was initially unmarked or we could unmark it in the first move. The only other possibility would be us being unable to unmark the center in the first move, which is a losing state. So we only care about states $4$ and $6$ now. Observe that state $4$ is a winning position while state $6$ is a losing position. Also observe that state $4$ has $d+x$ odd but state $6$ has $d+x$ even. Now let us continue with the cases and use these facts. - Token at center and center is unmarked vertex: Check the parity of $d+x$ here. If it is odd, we win, otherwise we lose (follows from the invariance of $d+x$). - Token at leaf and center is marked vertex: If we cannot unmark the center vertex in our very first move, we'll reach a losing position. If we can, check parity of $d+x$. Odd is win, even is lose. When can we not unmark the center vertex? Only if the token is on $p_{center}$. Otherwise it is always possible. - Token at leaf and center is unmarked vertex: Check parity of $d+x$. Odd is win, even is lose. This completes the solution. Time complexity: $\mathcal{O}(n)$ or $\mathcal{O}(n\log{n})$ (depending on whether you use cycles or inversions to find the parity of $d$)
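The parity of $d$ can be taken from the cycle decomposition in linear time (a minimal sketch with a hypothetical `min_swaps`, using a $0$-indexed permutation):

```python
def min_swaps(p):
    # Minimum swaps to sort a permutation: d = n - (number of cycles)
    # in its cycle decomposition; only the parity of d matters
    # for the d + x invariant.
    n = len(p)
    seen = [False] * n
    cycles = 0
    for i in range(n):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = p[j]
    return n - cycles
```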
[ "dfs and similar", "games", "graphs", "trees" ]
3,000
#include<bits/stdc++.h> using namespace std; using lol=long long int; #define endl "\n" pair<int,int> dfs(int u,const vector<vector<int>>& g,int p=-1) //returns {node with max dist,max dist} { pair<int,int> res{u,0}; int mx=0; for(auto v:g[u]) { if(v==p) continue; pair<int,int> cur=dfs(v,g,u); cur.second++; if(mx<cur.second) { res=cur; mx=cur.second; } } return res; } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); int _=1; cin>>_; while(_--) { int n,x; cin>>n>>x; vector<vector<int>> g(n+1); vector<int> p(n+1),deg(n+1,0); for(int i=1;i<n;i++) { int u,v; cin>>u>>v; g[u].push_back(v); g[v].push_back(u); deg[u]++,deg[v]++; } for(int i=1;i<=n;i++) { cin>>p[i]; } //find diameter of tree with 2 DFSes int diam=dfs(dfs(1,g).first,g).second; //if diam>=3, Alice always wins //if diam=1, n=2, have to check if p=[1,2] //otherwise we have a star graph and cases follow if(diam>=3) cout<<"Alice"; else if(diam==1) cout<<((p[1]==1)?"Alice":"Bob"); else { //we need to check if we have already won or can win in the first move //this is possible if the permutation is already sorted //or if there are two marked elements and the chip is on neither of them vector<int> marked; for(int i=1;i<=n;i++) { if(p[i]!=i) marked.push_back(i); } if((int)marked.size()==0) { cout<<"Alice"; }else if((int)marked.size()==2 && (x!=marked[0] && x!=marked[1])) { cout<<"Alice"; }else { //we haven't won yet and it is not possible to win in one move //cases follow //first find center, it will have deg>1 int center; for(int i=1;i<=n;i++) { if(deg[i]>1) { center=i; break; } } //is chip on center? bool chiponcenter=(x==center); //is center marked? 
bool centerismarked=(find(marked.begin(),marked.end(),x)!=marked.end()); //list the cycles vector<int> vis(n+1,false); vector<vector<int> > cycles; for(int i=1;i<=n;i++) { if(vis[i]) continue; int j=i; cycles.push_back({j}); vis[j]=true; while(!vis[p[j]]) { cycles.back().push_back(p[j]); vis[p[j]]=true; j=p[j]; } } //min number of swaps int swapcnt=0; for(auto& cycle:cycles) swapcnt+=(int)cycle.size()-1; //parity int parity=(swapcnt+!chiponcenter)%2; //cases if(!centerismarked) cout<<(parity?"Alice":"Bob"); else if(chiponcenter && centerismarked) cout<<"Bob"; else //chip not on center and center is marked { //need to check if we can unmark center on first move //it is impossible if x is on p[center] (since we need to move p[center] to get center //to the right place, but p[center] is blocked) bool cannotunmarkcenter=(p[center]==x); if(cannotunmarkcenter) cout<<"Bob"; else cout<<(parity?"Alice":"Bob"); } } } cout<<endl; } return 0; }
1660
A
Vasya and Coins
Vasya decided to go to the grocery store. He found in his wallet $a$ coins of $1$ burle and $b$ coins of $2$ burles. He does not yet know the total cost of all goods, so help him find out $s$ ($s > 0$): the \textbf{minimum} positive integer amount of money he \textbf{cannot} pay without change or pay at all using only his coins. For example, if $a=1$ and $b=1$ (he has one $1$-burle coin and one $2$-burle coin), then: - he can pay $1$ burle without change, paying with one $1$-burle coin, - he can pay $2$ burle without change, paying with one $2$-burle coin, - he can pay $3$ burle without change by paying with one $1$-burle coin and one $2$-burle coin, - he cannot pay $4$ burle without change (moreover, he cannot pay this amount at all). So for $a=1$ and $b=1$ the answer is $s=4$.
With $b$ coins of $2$ burles, Vasya can collect the amounts $2, 4, \dots, 2 \cdot b$ burles. If he has no $1$-burle coins, he cannot collect the amount of $1$ burle, so the answer is $1$. If he has at least one $1$-burle coin, he can also collect the odd amounts up to $2 \cdot b + 1$, and each further $1$-burle coin extends the range of collectable amounts by one. So with $a > 0$ coins of $1$ burle he can make every amount up to $2 \cdot b + a$ burles, but not $2 \cdot b + a + 1$.
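The resulting formula, checked against a brute force over small $a$ and $b$ (hypothetical helper names):

```python
def smallest_unpayable(a, b):
    # With a > 0 every amount up to a + 2*b is payable;
    # with a = 0 the amount 1 is already impossible.
    return 1 if a == 0 else a + 2 * b + 1

def brute(a, b):
    # enumerate every payable amount and find the first gap
    payable = {i + 2 * j for i in range(a + 1) for j in range(b + 1)}
    s = 1
    while s in payable:
        s += 1
    return s
```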
[ "greedy", "math" ]
800
#include <iostream> #include <vector> #include <algorithm> #include <set> using namespace std; int main() { int t; cin >> t; for (int it = 0; it < t; ++it) { int a, b; cin >> a >> b; cout << (a == 0 ? 1 : a + 2 * b + 1) << '\n'; } return 0; }
1660
B
Vlad and Candies
Not so long ago, Vlad had a birthday, for which he was presented with a package of candies. There were $n$ types of candies, there are $a_i$ candies of the type $i$ ($1 \le i \le n$). Vlad decided to eat exactly one candy every time, choosing any of the candies of a type that is currently the most frequent (if there are several such types, he can choose \textbf{any} of them). To get the maximum pleasure from eating, Vlad \textbf{does not want} to eat two candies of the same type in a row. Help him figure out if he can eat all the candies without eating two identical candies in a row.
There are three cases in total; let's consider them on two types of candies: if $a_1 = a_2$, we eat the candies in the order $[1, 2, 1, 2, \dots, 1, 2]$; if $a_1 = a_2 + 1$, we first eat a candy of type $1$ and then eat in the order $[2, 1, 2, 1, \dots, 2, 1]$ (almost as in the case above); if $a_1 \geq a_2 + 2$, we eat a candy of type $1$, but there are still more of them than candies of type $2$, so we have to eat a candy of type $1$ again. So the answer is "NO". Now we prove that it is enough to check these conditions on the two maximums of the array $a$. If the third condition holds, the answer is obviously "NO". Otherwise, we alternately eat candies of the two maximum types until their counts equal the third maximum, after which we alternately eat candies of these three types, and so on.
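The resulting check is a one-liner on the two largest counts (a hypothetical helper `can_eat_all`, equivalent to the reference solution below):

```python
def can_eat_all(a):
    # YES iff the maximum count exceeds the second maximum by at most 1;
    # with a single type there must be at most one candy.
    a = sorted(a)
    if len(a) == 1:
        return a[0] <= 1
    return a[-1] <= a[-2] + 1
```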
[ "math" ]
800
t = int(input()) for _ in range(t): n = int(input()) a = [int(x) for x in input().split()] a.sort() if n == 1: if a[0] > 1: print("NO") else: print("YES") continue if a[-2] + 1 < a[-1]: print("NO") else: print("YES")