Dataset columns:
contest_id: string (length 1–4)
index: string (43 classes)
title: string (length 2–63)
statement: string (length 51–4.24k)
tutorial: string (length 19–20.4k)
tags: list (length 0–11)
rating: int64 (800–3.5k)
code: string (length 46–29.6k)
2055
B
Crafting
\begin{center} \begin{tabular}{c} \hline {\small As you'd expect, Florida is home to many bizarre magical forces, and Florida Man seeks to tame them.} \ \hline \end{tabular} \end{center} There are $n$ different types of magical materials, numbered from $1$ to $n$. Initially, you have $a_i$ units of material $i$ for each $i$ from $1$ to $n$. You are allowed to perform the following operation: - Select a material $i$ (where $1\le i\le n$). Then, spend $1$ unit of \textbf{every} other material $j$ (in other words, $j\neq i$) to gain $1$ unit of material $i$. More formally, after selecting material $i$, update array $a$ as follows: $a_i := a_i + 1$, and $a_j := a_j - 1$ for all $j$ where $j\neq i$ and $1\le j\le n$. Note that all $a_j$ must remain non-negative, i.e. you cannot spend resources you do not have. You are trying to craft an artifact using these materials. To successfully craft the artifact, you must have at least $b_i$ units of material $i$ for each $i$ from $1$ to $n$. Determine if it is possible to craft the artifact by performing the operation any number of times (including zero).
The order of the moves is pretty much irrelevant. What happens if we try to make two different materials? The key observation is that we will never use the operation to craft two different types of materials $i, j$. This is because if we were to combine the net change in resources from these two operations, we would lose two units each of every material $k \neq i, j$, and receive a net zero change in our amounts of materials $i, j$. Therefore, we will only ever use the operation on one type of material $i$. An immediate corollary of this observation is that we can be deficient in at most one type of material, i.e. there is at most one index $i$ at which $a_i < b_i$. If no such index exists, the artifact is craftable using our starting resources. Otherwise, applying the operation $x$ times transforms our array to $a_j := \begin{cases} a_j + x & \text{if}\ i = j \\ a_j - x & \text{if}\ i \neq j \end{cases}$ i.e. increasing element $a_i$ by $x$ and decreasing all other elements by $x$. We must have $x \geq b_i - a_i$ to satisfy the requirement on material type $i$. However, there is also no point in making $x$ any larger: by then we already have enough of type $i$, and further operations cause us to start losing materials from other types that we could potentially need to craft the artifact. Therefore, our condition in this case is just to check that $b_i - a_i \leq \min_{j \neq i} (a_j - b_j),$ i.e. we are deficient in type $i$ by at most as many units as our smallest surplus across all other material types $j \neq i$. This gives an $O(n)$ solution.
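The check described above can be sketched as a standalone function (a minimal sketch; the function name `craftable` and the driver-free form are illustrative, not taken from the problem's I/O format):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns true if the artifact is craftable: there is at most one deficient
// index i, and its deficit b[i] - a[i] is covered by the smallest surplus
// a[j] - b[j] among all other indices j != i.
bool craftable(const vector<long long>& a, const vector<long long>& b) {
    int n = a.size();
    int bad = -1;
    long long need = 0, margin = LLONG_MAX;
    for (int i = 0; i < n; i++) {
        if (a[i] < b[i]) {
            if (bad != -1) return false;  // deficient in two materials: impossible
            bad = i;
            need = b[i] - a[i];
        } else {
            margin = min(margin, a[i] - b[i]);
        }
    }
    if (bad == -1) return true;           // already have enough of everything
    return margin >= need;                // apply the operation to material `bad` exactly `need` times
}
```

Note the $n = 1$ edge case falls out naturally: the operation spends nothing, so `margin` stays at its sentinel and the function returns true.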
[ "constructive algorithms", "greedy", "sortings" ]
1,000
// #include <ext/pb_ds/assoc_container.hpp> // #include <ext/pb_ds/tree_policy.hpp> // using namespace __gnu_pbds; // typedef tree<int, null_type, std::less<int>, rb_tree_tag, tree_order_statistics_node_update> ordered_set; #include <bits/stdc++.h> #define f first #define s second #define pb push_back typedef long long int ll; typedef unsigned long long int ull; using namespace std; typedef pair<int,int> pii; typedef pair<ll,ll> pll; template<typename T> int die(T x) { cout << x << endl; return 0; } #define mod_fft 998244353 #define mod_nfft 1000000007 #define INF 100000000000000 #define LNF 1e15 #define LOL 12345678912345719ll struct LL { static const ll m = mod_fft; long long int val; LL(ll v) {val=reduce(v);}; LL() {val=0;}; ~LL(){}; LL(const LL& l) {val=l.val;}; LL& operator=(int l) {val=l; return *this;} LL& operator=(ll l) {val=l; return *this;} LL& operator=(LL l) {val=l.val; return *this;} static long long int reduce(ll x, ll md = m) { x %= md; while (x >= md) x-=md; while (x < 0) x+=md; return x; } bool operator<(const LL& b) { return val<b.val; } bool operator<=(const LL& b) { return val<=b.val; } bool operator==(const LL& b) { return val==b.val; } bool operator>=(const LL& b) { return val>=b.val; } bool operator>(const LL& b) { return val>b.val; } LL operator+(const LL& b) { return LL(val+b.val); } LL operator+(const ll& b) { return (*this+LL(b)); } LL& operator+=(const LL& b) { return (*this=*this+b); } LL& operator+=(const ll& b) { return (*this=*this+b); } LL operator-(const LL& b) { return LL(val-b.val); } LL operator-(const ll& b) { return (*this-LL(b)); } LL& operator-=(const LL& b) { return (*this=*this-b); } LL& operator-=(const ll& b) { return (*this=*this-b); } LL operator*(const LL& b) { return LL(val*b.val); } LL operator*(const ll& b) { return (*this*LL(b)); } LL& operator*=(const LL& b) { return (*this=*this*b); } LL& operator*=(const ll& b) { return (*this=*this*b); } static LL exp(const LL& x, const ll& y){ ll z = y; z = reduce(z,m-1); LL 
ret = 1; LL w = x; while (z) { if (z&1) ret *= w; z >>= 1; w *= w; } return ret; } LL& operator^=(ll y) { return (*this=LL(val^y)); } LL operator/(const LL& b) { return ((*this)*exp(b,-1)); } LL operator/(const ll& b) { return (*this/LL(b)); } LL operator/=(const LL& b) { return ((*this)*=exp(b,-1)); } LL& operator/=(const ll& b) { return (*this=*this/LL(b)); } }; ostream& operator<<(ostream& os, const LL& obj) { return os << obj.val; } int N; vector<ll> segtree; void pull(int t) { segtree[t] = max(segtree[2*t], segtree[2*t+1]); } void point_set(int idx, ll val, int L = 1, int R = N, int t = 1) { if (L == R) segtree[t] = val; else { int M = (L + R) / 2; if (idx <= M) point_set(idx, val, L, M, 2*t); else point_set(idx, val, M+1, R, 2*t+1); pull(t); } } ll range_add(int left, int right, int L = 1, int R = N, int t = 1) { if (left <= L && R <= right) return segtree[t]; else { int M = (L + R) / 2; ll ret = 0; if (left <= M) ret = max(ret, range_add(left, right, L, M, 2*t)); if (right > M) ret = max(ret, range_add(left, right, M+1, R, 2*t+1)); return ret; } } void build(vector<ll>& arr, int L = 1, int R = N, int t = 1) { if (L == R) segtree[t] = arr[L-1]; else { int M = (L + R) / 2; build(arr, L, M, 2*t); build(arr, M+1, R, 2*t+1); pull(t); } } int main() { ios_base::sync_with_stdio(false); cin.tie(0); int T = 1; cin >> T; while (T--) { int N; cin >> N; vector<int> A(N), B(N); for (int i = 0; i < N; i++) cin >> A[i]; int bad = -1, margin = 1e9, need = 0; bool reject = 0; for (int i = 0; i < N; i++) { cin >> B[i]; if (A[i] < B[i]) { if (bad != -1) reject = 1; bad = i; need = B[i] - A[i]; } else { margin = min(margin, A[i] - B[i]); } } if (reject) { cout << "NO" << endl; continue; } else { cout << ((margin >= need) ? "YES" : "NO") << endl; } } }
2055
C
The Trail
\begin{center} \begin{tabular}{c} \hline {\small There are no mountains in Florida, and Florida Man cannot comprehend their existence. As such, he really needs your help with this one.} \ \hline \end{tabular} \end{center} In the wilderness lies a region of mountainous terrain represented as a rectangular grid with $n$ rows and $m$ columns. Each cell in the grid is identified by its position $(i, j)$, where $i$ is the row index and $j$ is the column index. The altitude of cell $(i, j)$ is denoted by $a_{i,j}$. However, this region has been tampered with. A path consisting of $n + m - 1$ cells, starting from the top-left corner $(1, 1)$ and ending at the bottom-right corner $(n, m)$, has been cleared. For every cell $(i, j)$ along this path, the altitude $a_{i,j}$ has been set to $0$. The path moves strictly via downward ($\mathtt{D}$) or rightward ($\mathtt{R}$) steps. To restore the terrain to its original state, it is known that the region possessed a magical property before it was tampered with: all rows and all columns shared the same sum of altitudes. More formally, there exists an integer $x$ such that $\sum_{j=1}^m a_{i, j} = x$ for all $1\le i\le n$, and $\sum_{i=1}^n a_{i, j} = x$ for all $1\le j\le m$. Your task is to assign new altitudes to the cells on the path such that the above magical property is restored. It can be proven that a solution always exists. If there are multiple solutions that satisfy the property, any one of them may be provided.
Pick $x$, and find the sum of the whole grid. What does this tell you? Once you know $x$, the top left cell is fixed. What about the next cell on the trail? The naive solution of writing out a linear system and solving it will take $O((n + m)^3)$ time, which is too slow, so we will need a faster algorithm. We begin by selecting a target sum $S$ for each row and column. If we calculate the sum of all numbers in the completed grid, summing over rows gives a total of $S \cdot n$ while summing over columns gives a total of $S \cdot m$. Therefore, in order for our choice of $S$ to be possible, we require $S \cdot n = S \cdot m$. Since it is possible that $n \neq m$, every choice $S \neq 0$ can fail, so we pick $S = 0$, the only choice that works in all cases of $n, m$. Now, we aim to make each row and column sum to $S$. The crux of the problem is the following observation: Denote $x_1, x_2, \dots, x_{n+m-1}$ to be the variables along the path. Let's say variables $x_1, \dots, x_{i-1}$ have their values set for some $1 \leq i < n+m-1$. Then, either the row or column corresponding to variable $x_i$ has all of its values set besides $x_i$, and therefore we may determine exactly one possible value of $x_i$ to make its row or column sum to $0$. The proof of this claim is simple. At variable $x_i$, we look at the corresponding path move $s_i$. If $s_i = \tt{R}$, then the path will never revisit the column of variable $x_i$, so its column has no unset variables besides $x_i$, since $x_1, \dots, x_{i-1}$ are already set. Likewise, if $s_i = \tt{D}$, then the path will never revisit the row of variable $x_i$, and the row can then be used to determine the value of $x_i$. Repeating this process will cause every row and column except for row $n$ and column $m$ to have a sum of zero, with $x_{n+m-1}$ being the final variable.
For the final variable, we will show that we can use either its row or its column to determine it, and either choice gives a sum of zero for both row $n$ and column $m$. WLOG we use row $n$. Indeed, if every row and every column except column $m$ has zero sum, then the sum of all entries of the grid is zero (summing over rows). Subtracting from this total the sums of all columns except column $m$ shows that column $m$ also has zero sum. Therefore, we may determine the value of $x_{n+m-1}$ using either its row or column to finish our construction, giving a solution in $O(n \cdot m)$.
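The construction can be sketched as a standalone function (a sketch; the name `restore` is illustrative). On a $\mathtt{D}$ move the current row is finished, on an $\mathtt{R}$ move the current column is finished, and the last cell is fixed via row $n$:

```cpp
#include <bits/stdc++.h>
using namespace std;

// a: n x m grid with path cells set to 0; s: the n+m-2 path moves ('D'/'R').
// Returns the grid with path cells assigned so every row and column sums to 0.
vector<vector<long long>> restore(vector<vector<long long>> a, const string& s) {
    int n = a.size(), m = a[0].size();
    int x = 0, y = 0;
    for (char c : s) {
        long long su = 0;
        if (c == 'D') {
            for (int j = 0; j < m; j++) su += a[x][j];  // row x is done after this cell
            a[x][y] = -su;
            ++x;
        } else {
            for (int i = 0; i < n; i++) su += a[i][y];  // column y is done after this cell
            a[x][y] = -su;
            ++y;
        }
    }
    long long su = 0;
    for (int j = 0; j < m; j++) su += a[n-1][j];        // final cell: zero out row n
    a[n-1][m-1] = -su;
    return a;
}
```

For example, on the $2 \times 2$ grid with a single non-path entry $5$ and path $\mathtt{DR}$, this produces the grid $\begin{pmatrix} -5 & 5 \\ 5 & -5 \end{pmatrix}$, whose rows and columns all sum to zero.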
[ "brute force", "constructive algorithms", "greedy", "math", "two pointers" ]
1,400
#include <bits/stdc++.h>
#define f first
#define s second
#define pb push_back
typedef long long int ll;
typedef unsigned long long int ull;
using namespace std;
typedef pair<int,int> pii;
typedef pair<ll,ll> pll;
template<typename T> int die(T x) { cout << x << endl; return 0; }
#define mod 1000000007
#define INF 1000000000
#define LNF 1e15
#define LOL 12345678912345719ll

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    int T;
    cin >> T;
    while (T--) {
        int N, M;
        cin >> N >> M;
        string S;
        cin >> S;
        vector<vector<ll>> A;
        for (int i = 0; i < N; i++) {
            A.push_back(vector<ll>(M));
            for (int j = 0; j < M; j++) {
                cin >> A[i][j];
            }
        }
        int x = 0, y = 0;
        for (char c : S) {
            if (c == 'D') {
                // Moving down: row x will never be revisited, so set the
                // path cell (x, y) to make row x sum to zero.
                long long su = 0;
                for (int i = 0; i < M; i++) su += A[x][i];
                A[x][y] = -su;
                ++x;
            } else {
                // Moving right: column y will never be revisited, so set the
                // path cell (x, y) to make column y sum to zero.
                long long su = 0;
                for (int i = 0; i < N; i++) su += A[i][y];
                A[x][y] = -su;
                ++y;
            }
        }
        // Final cell (N-1, M-1): zero out row N; column M then follows.
        long long su = 0;
        for (int i = 0; i < M; i++) su += A[N-1][i];
        A[N-1][M-1] = -su;
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < M; j++) cout << A[i][j] << " ";
            cout << endl;
        }
    }
    return 0;
}
2055
D
Scarecrow
\begin{center} \begin{tabular}{c} \hline {\small At his orange orchard, Florida Man receives yet another spam letter, delivered by a crow. Naturally, he's sending it back in the most inconvenient manner possible.} \ \hline \end{tabular} \end{center} A crow is sitting at position $0$ of the number line. There are $n$ scarecrows positioned at integer coordinates $a_1, a_2, \ldots, a_n$ along the number line. These scarecrows have been enchanted, allowing them to move left and right at a speed of $1$ unit per second. The crow is afraid of scarecrows and wants to stay at least a distance of $k$ ahead of the nearest scarecrow positioned \textbf{at or before} it. To do so, the crow uses its teleportation ability as follows: - Let $x$ be the current position of the crow, and let $y$ be the largest position of a scarecrow such that $y \le x$. If $x - y < k$, meaning the scarecrow is too close, the crow will instantly teleport to position $y + k$. This teleportation happens instantly and continuously. The crow will keep checking for scarecrows positioned at or to the left of him and teleport whenever one gets too close (which could happen at non-integral times). Note that besides this teleportation ability, the crow will not move on its own. Your task is to determine the minimum time required to make the crow teleport to a position greater than or equal to $\ell$, assuming the scarecrows move optimally to allow the crow to reach its goal. For convenience, you are asked to output \textbf{twice the minimum time} needed for the crow to reach the target position $\ell$. It can be proven that this value will always be an integer. Note that the scarecrows can start, stop, or change direction at any time (possibly at non-integral times).
Should you ever change the order of scarecrows? What's the first thing that the leftmost scarecrow should do? What's the only way to save time? Think in terms of distances and not times. Look at the points where the crow "switches over" between scarecrows. Greedy. We make a few preliminary observations: (1) The order of scarecrows should never change, i.e. no two scarecrows should cross each other while moving along the interval. (2) Scarecrow $1$ should spend the first $a_1$ seconds moving to position zero, as this move is required for the crow to make any progress and there is no point in delaying it. (3) Let's say that a scarecrow at position $p$ `has' the crow if the crow is at position $p + k$, and there are no other scarecrows in the interval $[p, p+k]$. A scarecrow that has the crow should always move to the right; in other words, all scarecrows that find themselves located to the left of the crow should spend all their remaining time moving to the right, as it is the only way they will be useful. (4) Let there be a scenario where at time $T$, scarecrow $i$ has the crow and is at position $x$, and another scenario at time $T$ where scarecrow $i$ also has the crow, but is at position $y \geq x$. Then, the latter scenario is at least as good as the former scenario, assuming scarecrows numbered higher than $i$ are not fixed. (5) The only way to save time is to maximize the distance $d$ teleported by the crow. The second and fifth observations imply that the time spent to move the crow across the interval is $a_1 + \ell - d$. Now, for each scarecrow $i$, define $b_i$ to be the position along the interval at which it begins to have the crow, i.e. the crow is transferred from scarecrow $i-1$ to $i$. For instance, in the second sample case the values of $b_i$ are $(b_1, b_2, b_3) = (0, 2.5, 4.5)$. The second observation above implies that $b_1 = 0$, and the first observation implies that $b_1 \leq \dots \leq b_n$.
Notice that we may express the distance teleported as $d = \sum_{i=1}^{n} \min(k, b_{i+1} - b_i)$ with the extra definition that $b_{n+1} = \ell$. For instance, in the second sample case the distance teleported is $d = \min(k, b_2 - b_1) + \min(k, b_3 - b_2) + \min(k, \ell - b_3) = 2 + 2 + 0.5 = 4.5$ and the total time is $a_1 + \ell - d = 2 + 5 - 4.5 = 2.5$. Now, suppose that $b_1, \dots, b_{i-1}$ have been selected for some $2 \leq i \leq n$, and that time $T$ has elapsed upon scarecrow $i-1$ receiving the crow. We will argue the optimal choice of $b_i$. At time $T$ when scarecrow $i-1$ first receives the crow, scarecrow $i$ may be at any position in the interval $[a_i - T, \min(a_i + T, \ell)]$. Now, we have three cases. Case 1. $b_{i-1} + k \leq a_i - T.$ In this case, scarecrow $i$ will need to move some nonnegative amount to the left in order to meet with scarecrow $i-1$. They will meet at the midpoint of the crow position $b_{i-1} + k$ and the leftmost possible position $a_i - T$ of scarecrow $i$ at time $T$. This gives $b_i := \frac{a_i - T + b_{i-1} + k}{2}.$ Case 2. $a_i - T \leq b_{i-1} + k \leq a_i + T.$ Notice that if our choice of $b_i$ has $b_i < b_{i-1} + k$, it benefits us to increase our choice of $b_i$ (if possible) as a consequence of our fourth observation, since all such $b_i$ will cause an immediate transfer of the crow to scarecrow $i$ at time $T$. However, if we choose $b_i > b_{i-1} + k$, lowering our choice of $b_i$ is now better as it loses less potential teleported distance $\min(k, b_i - b_{i-1})$, while leaving more space for teleported distance after position $b_i$. Therefore, we will choose $b_i := b_{i-1} + k$ in this case. Case 3. $a_i + T \leq b_{i-1} + k.$ In this case, regardless of how we choose $b_i$, the crow will immediately transfer to scarecrow $i$ from scarecrow $i-1$ at time $T$. We might as well pick $b_i := a_i + T$. 
Therefore, the optimal selection of $b_i$ may be calculated iteratively as $b_i := \min\left(\ell, \overbrace{a_i + T}^{\text{case 3}}, \max\left(\overbrace{b_{i-1} + k}^{\text{case 2}}, \overbrace{\frac{a_i - T + b_{i-1} + k}{2}}^{\text{case 1}}\right)\right).$ It is now easy to implement the above approach to yield an $O(n)$ solution. Note that the constraints for $k, \ell$ were deliberately set to $10^8$ instead of $10^9$ to make two times the maximum answer $4 \cdot \ell$ fit within $32$-bit integer types. It is not difficult to show that the values of $b_i$ as well as the answer are always integers or half-integers.
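The recurrence can be sketched as follows (a sketch; `solve` is an illustrative name, and it returns twice the minimum time, matching the required output format; the test instance $n=3$, $k=2$, $\ell=5$, $a=(2,5,5)$ is one consistent with the worked values $b=(0, 2.5, 4.5)$ above):

```cpp
#include <bits/stdc++.h>
using namespace std;

// n scarecrows at sorted integer positions a, teleport distance k, target l.
// Returns twice the minimum time for the crow to reach a position >= l.
int solve(int n, int k, int l, const vector<int>& a) {
    double K = k, L = l;
    double t = a[0];       // time elapsed: scarecrow 1 first walks to position 0
    double b_prev = 0;     // b_{i-1}: where the previous scarecrow received the crow
    double d = 0;          // total distance teleported by the crow
    for (int i = 1; i < n; i++) {
        // b_i = min(l, a_i + t, max(b_{i-1} + k, (a_i - t + b_{i-1} + k) / 2))
        double b = min(L, min(a[i] + t, max(b_prev + K, (a[i] - t + b_prev + K) / 2.0)));
        t += max(0.0, b - b_prev - K);  // crow is carried, not teleported, over this stretch
        d += min(K, b - b_prev);
        b_prev = b;
    }
    d += min(K, L - b_prev);
    return (int)llround(2 * (L - d + a[0]));  // total time = a_1 + l - d
}
```

Since all $b_i$ are integers or half-integers, doubling at the end recovers an exact integer.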
[ "greedy", "implementation", "math" ]
2,000
#include <bits/stdc++.h>
#define f first
#define s second
#define pb push_back
typedef long long int ll;
typedef unsigned long long int ull;
using namespace std;
typedef pair<int,int> pii;
typedef pair<ll,ll> pll;

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    int T;
    cin >> T;
    while (T--) {
        int N, k, l;
        cin >> N >> k >> l;
        double K = k;
        double L = l;
        vector<int> A(N);
        for (int i = 0; i < N; i++) cin >> A[i];
        double t = A[0];        // time elapsed: scarecrow 1 walks to position 0
        double last_pt = 0;     // b_{i-1}: where the previous scarecrow received the crow
        double S = 0;           // total distance teleported, d
        for (int i = 1; i < N; i++) {
            // b_i = min(l, a_i + t, max(b_{i-1} + k, (a_i - t + b_{i-1} + k) / 2))
            double this_pt = min(L, min(A[i] + t, max(last_pt + K, (A[i] - t + last_pt + K) / 2.0)));
            t += max(0.0, this_pt - last_pt - K);  // crow is carried over this stretch
            S += min(K, this_pt - last_pt);        // distance teleported at the handoff
            last_pt = this_pt;
        }
        S += min(K, L - last_pt);
        // Total time = a_1 + l - d; output twice this value.
        cout << (int)round(2 * (L - S + A[0])) << endl;
    }
    return 0;
}
2055
E
Haystacks
\begin{center} \begin{tabular}{c} \hline {\small On the next new moon, the universe will reset, beginning with Florida. It's up to Florida Man to stop it, but he first needs to find an important item.} \ \hline \end{tabular} \end{center} There are $n$ haystacks labelled from $1$ to $n$, where haystack $i$ contains $a_i$ haybales. One of the haystacks has a needle hidden beneath it, but you do not know which one. Your task is to move the haybales so that each haystack is emptied at least once, allowing you to check if the needle is hidden under that particular haystack. However, the process is not that simple. Once a haystack $i$ is emptied for the first time, it will be assigned a height limit and can no longer contain more than $b_i$ haybales. More formally, a move is described as follows: - Choose two haystacks $i$ and $j$. If haystack $i$ has not been emptied before, or haystack $i$ contains strictly less than $b_i$ haybales, you may move exactly $1$ haybale from haystack $j$ to haystack $i$. \textbf{Note}: Before a haystack is emptied, it has no height limit, and you can move as many haybales as you want onto that haystack. Compute the minimum number of moves required to ensure that each haystack is emptied at least once, or report that it is impossible.
Let's say you have to empty the haystacks in a fixed order. What's the best way to do it? Write the expression for the number of moves for a given order. In an optimal ordering, you should not gain anything by swapping two entries of the order. Using this, describe the optimal order. The constraint only limits what you can empty last. How can you efficiently compute the expression in Hint 2? Let's say we fixed some permutation $\sigma$ of $1, \dots, n$ such that we empty haystacks in the order $\sigma_1, \dots, \sigma_n$. Notice that a choice of $\sigma$ is possible if and only if the final stack $\sigma_n$ can be cleared, which is equivalent to the constraint $a_{\sigma_1} + \dots + a_{\sigma_n} \leq b_{\sigma_1} + \dots + b_{\sigma_{n-1}}.$ With this added constraint, the optimal sequence of moves is as follows: Iterate $i$ through $1, \dots, n-1$. For each $i$, try to move its haybales to haystacks $1, \dots, i-1$, and if they are all full then move haybales to haystack $n$. Once this process terminates, move all haybales from haystack $n$ back onto arbitrary haystacks $1, \dots, n-1$, being careful to not overflow the height limits. The key observation is that the number of extra haybales that must be moved onto haystack $n$ is $\max_{1 \leq i \leq n-1} \left\{\sum_{j=1}^i a_{\sigma_j} - \sum_{j=1}^{i-1} b_{\sigma_j}\right\}.$ To show this, consider the last time $i$ that a haybale is moved onto haystack $n$. At this time, all haybales from haystacks $1, \dots, i$ have found a home, either on the height-limited haystacks $1, \dots, i-1$ or on haystack $n$, from which the identity immediately follows. Now, every haybale that wasn't moved onto haystack $n$ gets moved once, and every haybale that was gets moved twice.
Therefore, our task becomes the following: Compute $\sum_{i=1}^{n} a_i + \min_{\sigma} \max_{1 \leq i \leq n-1} \left\{\sum_{j=1}^i a_{\sigma_j} - \sum_{j=1}^{i-1} b_{\sigma_j}\right\}$ for $\sigma$ satisfying $a_{\sigma_1} + \dots + a_{\sigma_n} \leq b_{\sigma_1} + \dots + b_{\sigma_{n-1}}.$ Notice that the $\sum_{i=1}^n a_i$ term is constant, and we will omit it for the rest of this tutorial. We will first solve the task with no restriction on $\sigma$ to gain some intuition. Denote $<_{\sigma}$ the ordering of pairs $(a, b)$ corresponding to $\sigma$. Consider adjacent pairs $(a_i, b_i) <_{\sigma} (a_j, b_j)$. Then, if the choice of $\sigma$ is optimal, it must not be better to swap their ordering, i.e. $\begin{align*} \overbrace{(a_i, b_i) <_{\sigma} (a_j, b_j)}^{\text{optimal}} \implies& \max(a_i, a_i + a_j - b_i) \leq \max(a_j, a_i + a_j - b_j) \\ \iff& \max(-a_j, -b_i) \leq \max(-a_i, -b_j)\\ \iff& \min(a_j, b_i) \geq \min(a_i, b_j). \end{align*}$ As a corollary, there exists an optimal $\sigma$ satisfying the following properties: Claim [Optimality conditions of $\sigma$]. All pairs with $a_i < b_i$ come first. Then, all pairs with $a_i = b_i$ come next. Then, all pairs with $a_i > b_i$. The pairs with $a_i < b_i$ are in ascending order of $a_i$. The pairs with $a_i > b_i$ are in descending order of $b_i$. It is not hard to show that all such $\sigma$ satisfying these properties are optimal by following similar logic as above. We leave it as an exercise for the reader. Now, we add in the constraint on the final term $\sigma_n$ of the ordering. We will perform casework on this final haystack. 
Notice that for any fixed $a_n, b_n$, if $\max_{1 \leq i \leq n-1} \left\{\sum_{j=1}^i a_{\sigma_j} - \sum_{j=1}^{i-1} b_{\sigma_j}\right\}$ is minimized, then so is $\max_{1 \leq i \leq n} \left\{\sum_{j=1}^i a_{\sigma_j} - \sum_{j=1}^{i-1} b_{\sigma_j}\right\}.$ So, if we were to fix any last haystack $\sigma_n$, the optimality conditions tell us that we should still order the remaining $n-1$ haystacks as before. Now, we may iterate over all valid $\sigma_n$ and compute the answer efficiently as follows: maintain a segment tree with leaves representing pairs $(a_i, b_i)$ and range queries for $\max_{1 \leq i \leq n} \left\{\sum_{j=1}^i a_{\sigma_j} - \sum_{j=1}^{i-1} b_{\sigma_j}\right\}.$ This gives an $O(n \log n)$ solution. Note that it is possible to implement this final step using prefix and suffix sums to yield an $O(n)$ solution, but it is not necessary to do so.
[ "brute force", "constructive algorithms", "data structures", "greedy", "sortings" ]
2,800
#include <bits/stdc++.h> #define f first #define s second #define pb push_back typedef long long int ll; typedef unsigned long long int ull; using namespace std; typedef pair<int,int> pii; typedef pair<ll,ll> pll; template<typename T> int die(T x) { cout << x << endl; return 0; } #define LNF 1e15 int N; vector<pll> segtree; pll f(pll a, pll b) { return {max(a.first, a.second + b.first), a.second + b.second}; } void pull(int t) { segtree[t] = f(segtree[2*t], segtree[2*t+1]); } void point_set(int idx, pll val, int L = 1, int R = N, int t = 1) { if (L == R) segtree[t] = val; else { int M = (L + R) / 2; if (idx <= M) point_set(idx, val, L, M, 2*t); else point_set(idx, val, M+1, R, 2*t+1); pull(t); } } pll range_add(int left, int right, int L = 1, int R = N, int t = 1) { if (left <= L && R <= right) return segtree[t]; else { int M = (L + R) / 2; pll ret = {0, 0}; if (left <= M) ret = f(ret, range_add(left, right, L, M, 2*t)); if (right > M) ret = f(ret, range_add(left, right, M+1, R, 2*t+1)); return ret; } } void build(vector<pll>& arr, int L = 1, int R = N, int t = 1) { if (L == R) segtree[t] = arr[L-1]; else { int M = (L + R) / 2; build(arr, L, M, 2*t); build(arr, M+1, R, 2*t+1); pull(t); } } vector<int> theoretical(const vector<pii>& arr) { vector<int> idx(arr.size()); for (int i = 0; i < arr.size(); ++i) { idx[i] = i; } vector<int> ut, eq, lt; for (int i = 0; i < arr.size(); ++i) { if (arr[i].first < arr[i].second) { ut.push_back(i); } else if (arr[i].first == arr[i].second) { eq.push_back(i); } else { lt.push_back(i); } } sort(ut.begin(), ut.end(), [&arr](int i, int j) { return arr[i].first < arr[j].first; }); sort(eq.begin(), eq.end(), [&arr](int i, int j) { return arr[i].first > arr[j].first; }); sort(lt.begin(), lt.end(), [&arr](int i, int j) { return arr[i].second > arr[j].second; }); vector<int> result; result.insert(result.end(), ut.begin(), ut.end()); result.insert(result.end(), eq.begin(), eq.end()); result.insert(result.end(), lt.begin(), lt.end()); return 
result; } int main() { ios_base::sync_with_stdio(false); cin.tie(0); int T = 1; cin >> T; while (T--) { cin >> N; vector<pll> data(N); ll sum_a = 0; ll sum_b = 0; for (int i = 0; i < N; i++) { cin >> data[i].f >> data[i].s; sum_a += data[i].f; sum_b += data[i].s; } vector<int> order = theoretical(vector<pii>(data.begin(), data.end())); vector<pll> data_sorted; for (int i : order) data_sorted.push_back({data[i].first, data[i].first - data[i].second}); data_sorted.push_back({0, 0}); ++N; segtree = vector<pll>(4*N); build(data_sorted); ll ans = LNF; for (int i = 0; i < N-1; i++) { if (sum_b - (data_sorted[i].first - data_sorted[i].second) >= sum_a) { point_set(i+1, data_sorted[N-1]); point_set(N, data_sorted[i]); ans = min(ans, range_add(1, N).first); point_set(i+1, data_sorted[i]); point_set(N, data_sorted[N-1]); } } if (ans == LNF) cout << -1 << endl; else cout << ans + sum_a << endl; } }
2055
F
Cosmic Divide
\begin{center} \begin{tabular}{c} \hline {\small With the artifact in hand, the fabric of reality gives way to its true master — Florida Man.} \ \hline \end{tabular} \end{center} A polyomino is a connected$^{\text{∗}}$ figure constructed by joining one or more equal $1 \times 1$ unit squares edge to edge. A polyomino is convex if, for any two squares in the polyomino that share the same row or the same column, all squares between them are also part of the polyomino. Below are four polyominoes, only the first and second of which are convex. You are given a convex polyomino with $n$ rows and an even area. For each row $i$ from $1$ to $n$, the unit squares from column $l_i$ to column $r_i$ are part of the polyomino. In other words, there are $r_i - l_i + 1$ unit squares that are part of the polyomino in the $i$-th row: $(i, l_i), (i, l_i + 1), \ldots, (i, r_i-1), (i, r_i)$. Two polyominoes are congruent if and only if you can make them fit exactly on top of each other by translating the polyominoes. \textbf{Note that you are not allowed to rotate or reflect the polyominoes.} Determine whether it is possible to partition the given convex polyomino into two disjoint connected polyominoes that are congruent to each other. The following examples illustrate a valid partition of each of the two convex polyominoes shown above: The partitioned polyominoes do not need to be convex, and each unit square should belong to exactly one of the two partitioned polyominoes. \begin{footnotesize} $^{\text{∗}}$A polyomino is connected if and only if for every two unit squares $u \neq v$ that are part of the polyomino, there exists a sequence of distinct squares $s_1, s_2, \ldots, s_k$, such that $s_1 = u$, $s_k = v$, $s_i$ are all part of the polyomino, and $s_i, s_{i+1}$ share an edge for each $1 \le i \le k - 1$. \end{footnotesize}
The results of the partition must be convex. Can you see why? The easier cases are when the polyomino must be cut vertically or horizontally. Let's discard those for now, and consider the "diagonal cuts", i.e. let one polyomino start from the top row, and the other start from some row $c$. WLOG the one starting on row $c$ is on the left side; we can check the other side by duplicating the rest of the solution. Both sub-polyominoes are fixed once you choose $c$. But it takes $O(n)$ time to check each one. Can you get rid of most $c$ with a quick check? Look at perimeter shapes, or areas. Both will work. Assume there exists a valid partition of the polyomino. Note that the resulting congruent polyominoes must be convex, as it is not possible to join two non-overlapping non-convex polyominoes to create a convex one. Then, there are two cases: either the two resulting polyominoes are separated by a perfectly vertical or horizontal cut, or they share some rows. The first case is easy to check in linear time. The remainder of the solution will focus on the second case. Consider the structure of a valid partition. We will focus on the perimeter: Notice that the cut separating the two polyominoes must only ever move down and right, or up and right, as otherwise one of the formed polyominoes will not be convex. Without loss of generality, say it only goes down and right. In order for our cut to be valid, it must partition the perimeter into six segments as shown, such that the marked segments are congruent in the indicated orientations ($a$ with $a$, $b$ with $b$, $c$ with $c$). If we label the horizontal lines of the grid $0, \dots, n$, where line $i$ is located after row $i$, we notice that the division points along the left side of the original polyomino are located at lines $0, k, 2k$ for some $1 \leq k \leq n/2$. Notice that if we were to fix a given $k$, we can uniquely determine the lower polyomino from the first few rows of the upper polyomino.
Indeed, if $a_i = r_i - \ell_i + 1$ denotes the width of the $i$-th row of the original polyomino, we can show that the resulting polyomino for a particular choice of $k$ has $b_i = a_i - a_{i-k} + a_{i-2k} - \dots$ cells in its $i$-th row, for $1 \leq i \leq n - k$. Therefore, iterating over all possible $k$ and checking them individually gives an $O(n^2)$ solution. To speed this up, we will develop a constant-time check that will prune ``most'' choices of $k$. Indeed, we may use prefix sums and hashing to verify the perimeter properties outlined above, so that we can find all $k$ that pass this check in $O(n)$ time. If there are at most $f(n)$ choices of $k$ afterwards, we can check them all for a solution in $O(n \cdot (f(n) + \text{hashing errors}))$. It can actually be shown that for our hashing protocol, $f(n) \leq 9$, so that this algorithm has linear time complexity. While the proof is not difficult, it is rather long and will be left as an exercise. Instead, we will give a simpler argument to bound $f(n)$. Fix some choice of $k$, and consider the generating functions $A(x) = a_0 + a_1 x + \dots + a_n x^n,$ $B(x) = b_0 + b_1 x + \dots + b_{n-k} x^{n-k}.$ The perimeter conditions imply that $(1 + x^k) B(x) = A(x)$. In other words, $k$ may only be valid if $(1 + x^k)$ divides $A(x)$. Therefore, we can use cyclotomic polynomials for $1 + x^k \mid x^{2k} - 1$ to determine that $f(n) \leq \text{maximum number of cyclotomic polynomials that can divide a degree}\ n\ \text{polynomial}.$ As the degree of the $k$-th cyclotomic polynomial is $\phi(k)$ for the Euler totient function $\phi$, we can see that $f(n)$ is also at most the maximum number of $k_i \leq n$ we can select with $\sum_i \phi(k_i) \leq n$. 
Since $\phi(n) = \Omega\!\left(\frac{n}{\log \log n}\right)$, this tells us that $f(n) = O(\sqrt n \cdot \log \log n)$, and this looser bound already yields a time complexity of at most $O(n \sqrt n \cdot \log \log n).$ While this concludes the official solution, there exist many other heuristics with different $f(n)$ that also work. It is in the spirit of the problem to admit many alternate heuristics that give small enough $f(n)$ ``in practice'', as the official solution uses hashing. One such heuristic found by testers is as follows: We take as our pruning criterion that the area of the subdivided polyomino $\sum_{i=1}^{n-k} b_i$ is exactly half of the area of the original polyomino, which is $\sum_{i=1}^n a_i$. Algebraically manipulating $\sum_{i=1}^{n-k} b_i$ to be a linear function of $a_i$ shows it to be equal to $(a_1 + \dots + a_k) - (a_{k+1} + \dots + a_{2k}) + \dots.$ The calculation of all these sums may be sped up with prefix sums, and therefore the pruning over all $k$ can be done in $O(n \log n)$ time in total, since any fixed choice of $k$ has $\frac nk$ segments. However, for this choice of pruning, it can actually be shown that $f(n) \geq \Omega(d(n))$, where $d(n)$ is the number of divisors of $n$. This lower bound is obtained at an even-sized diamond polyomino, e.g. $(a_1, \dots, a_n) = (2, 4, \dots, n, n, \dots, 4, 2).$ Despite our efforts, we could not find an upper bound on $f(n)$ in this case, though we suspect that if it were not for the integrality and bounding constraints $a_i \in \{1, \dots, 10^9\}$, then $f(n) = \Theta(n)$, with suitable choices of $a_i$ being found using linear programs. Nevertheless, the solution passes our tests, and we suspect that no countertest exists (though hacking attempts on such solutions would be interesting to see!).
[ "brute force", "geometry", "hashing", "math", "strings" ]
3,200
#include <bits/stdc++.h> #define f first #define s second typedef long long int ll; typedef unsigned long long int ull; using namespace std; typedef pair<int,int> pii; typedef pair<ll,ll> pll; void print_set(vector<int> x) { for (auto i : x) { cout << i << " "; } cout << endl; } void print_set(vector<ll> x) { for (auto i : x) { cout << i << " "; } cout << endl; } bool connected(vector<ll> &U, vector<ll> &D) { if (U[0] > D[0]) return 0; for (int i = 1; i < U.size(); i++) { if (U[i] > D[i]) return 0; if (D[i] < U[i-1]) return 0; if (U[i] > D[i-1]) return 0; } return 1; } bool compare(vector<ll> &U1, vector<ll> &D1, vector<ll> &U2, vector<ll> &D2) { if (U1.size() != U2.size()) return 0; if (!connected(U1, D1)) return 0; for (int i = 0; i < U1.size(); i++) { if (U1[i] - D1[i] != U2[i] - D2[i]) return 0; if (U1[i] - U1[0] != U2[i] - U2[0]) return 0; } return 1; } bool horizontal_check(vector<ll>& U, vector<ll>& D) { if (U.size() % 2) return 0; int N = U.size() / 2; auto U1 = vector<ll>(U.begin(), U.begin() + N); auto D1 = vector<ll>(D.begin(), D.begin() + N); auto U2 = vector<ll>(U.begin() + N, U.end()); auto D2 = vector<ll>(D.begin() + N, D.end()); return compare(U1, D1, U2, D2); } bool vertical_check(vector<ll>& U, vector<ll>& D) { vector<ll> M1, M2; for (int i = 0; i < U.size(); i++) { if ((U[i] + D[i]) % 2 == 0) return 0; M1.push_back((U[i] + D[i]) / 2); M2.push_back((U[i] + D[i]) / 2 + 1); } return compare(U, M1, M2, D); } ll base = 2; ll inv = 1000000006; ll mod = 2000000011; vector<ll> base_pows; vector<ll> inv_pows; void precompute_powers() { base_pows.push_back(1); inv_pows.push_back(1); for (int i = 1; i <= 300000; i++) { base_pows.push_back(base_pows.back() * base % mod); inv_pows.push_back(inv_pows.back() * inv % mod); } } ll sub(vector<ll> &hash_prefix, int a1, int b1) { return ((mod + hash_prefix[b1] - hash_prefix[a1]) * inv_pows[a1]) % mod; } int main() { ios_base::sync_with_stdio(false); cin.tie(0); precompute_powers(); int T; cin >> T; while (T--) { int 
N; cin >> N; vector<ll> U(N), D(N), H(N); vector<pii> col_UL(N), col_DR(N+1); vector<ll> hash_prefix_U(N), hash_prefix_D(N); for (int i = 0; i < N; i++) { cin >> U[i] >> D[i]; H[i] = D[i] - U[i] + 1; col_UL[i] = {i,U[i]}; col_DR[i] = {i+1,D[i]+1}; } // hashing for (int i = 1; i < N; i++) { hash_prefix_U[i] = (((mod + U[i] - U[i-1]) * base_pows[i-1]) + hash_prefix_U[i-1]) % mod; } for (int i = 1; i < N; i++) { hash_prefix_D[i] = (((mod + D[i] - D[i-1]) * base_pows[i-1]) + hash_prefix_D[i-1]) % mod; } // horizontal split if (horizontal_check(U, D)) { cout << "YES" << endl; goto next; } // vertical split if (vertical_check(U, D)) { cout << "YES" << endl; goto next; } for (int _ = 0; _ < 2; _++) { // down-right split for (int c = 1; c <= N/2; c++) { // check upper portion if (sub(hash_prefix_U, 0, c-1) != sub(hash_prefix_U, c, 2*c-1)) continue; if (H[0] - U[2*c] + U[2*c-1] != U[c-1] - U[c]) continue; // check lower portion if (sub(hash_prefix_D, N-c, N-1) != sub(hash_prefix_D, N-2*c, N-c-1)) continue; if (H[N-1] + D[N-2*c-1] - D[N-2*c] != D[N-c-1] - D[N-c]) continue; // check main portion if (sub(hash_prefix_U, 2*c, N-1) != sub(hash_prefix_D, 0, N-2*c-1)) continue; // brute force section // polynomial division bool ok = 1; vector<ll> H_copy(H.begin(), H.end()); vector<ll> quotient(N); // calculate quotient for (int i = 0; i < N-c; i++) { quotient[i] = H_copy[i]; H_copy[i+c] -= H_copy[i]; if (quotient[i] < 0) ok = 0; } // check for no remainder for (int i = N-c; i < N; i++) if (H_copy[i]) ok = 0; if (!ok) continue; // construct subdivision vector<ll> U1, D1, U2, D2; for (int i = c; i < N; i++) { int ref_height = quotient[i-c]; U1.push_back(D[i-c] - ref_height + 1); D1.push_back(D[i-c]); U2.push_back(U[i]); D2.push_back(U[i] + ref_height - 1); } if (compare(U1, D1, U2, D2)) { cout << "YES" << endl; goto next; } } // flip and go again! 
swap(hash_prefix_U, hash_prefix_D); swap(U, D); for (int i = 0; i < N; i++) { U[i] = -U[i]; D[i] = -D[i]; } } cout << "NO" << endl; next:; } return 0; }
2056
A
Shape Perimeter
There is an $m$ by $m$ square stamp on an infinite piece of paper. Initially, the bottom-left corner of the square stamp is aligned with the bottom-left corner of the paper. You are given two integer sequences $x$ and $y$, each of length $n$. For each step $i$ from $1$ to $n$, the following happens: - Move the stamp $x_i$ units to the right and $y_i$ units upwards. - Press the stamp onto the paper, leaving an $m$ by $m$ colored square at its current position. \textbf{Note that the elements of sequences $x$ and $y$ have a special constraint: $1\le x_i, y_i\le m - 1$.} Note that you \textbf{do not} press the stamp at the bottom-left corner of the paper. Refer to the notes section for better understanding. It can be proven that after all the operations, the colored shape on the paper formed by the stamp is a single connected region. Find the perimeter of this colored shape.
Look at the picture. I mean, look at the picture. Consider coordinates $x$ and $y$ separately. Since $1 \le x_i, y_i \le m - 1$, after each step both coordinates increase and the new square remains connected with the previous one. From the picture we can see that each coordinate spans from the bottom-left corner of the first square to the top-right corner of the last square. To calculate the perimeter we can add the length of that interval for both coordinates and multiply it by $2$, as it is counted in the perimeter twice, going in both directions. The coordinates of the bottom-left corner of the first square are $(x_1, y_1)$ and of the top-right corner of the last square are $(m + \sum\limits_{i = 1}^n x_i, m + \sum\limits_{i = 1}^n y_i)$. The lengths of the intervals are $m + \sum\limits_{i = 2}^n x_i$ and $m + \sum\limits_{i = 2}^n y_i$. Therefore, the answer is $2(2m + \sum\limits_{i = 2}^n (x_i + y_i))$.
[ "constructive algorithms", "math" ]
800
#include <bits/stdc++.h> using namespace std; void solve() { int n, m; cin >> n >> m; vector<int> x(n), y(n); for(int i = 0; i < n; i++) { cin >> x[i] >> y[i]; } int ans = 2 * (accumulate(x.begin(), x.end(), 0) + m - x[0] + accumulate(y.begin(), y.end(), 0) + m - y[0]); cout << ans << '\n'; } signed main() { ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int ttt = 1; cin >> ttt; while(ttt--) { solve(); } }
2056
B
Find the Permutation
You are given an undirected graph with $n$ vertices, labeled from $1$ to $n$. This graph encodes a hidden permutation$^{\text{∗}}$ $p$ of size $n$. The graph is constructed as follows: - For every pair of integers $1 \le i < j \le n$, an undirected edge is added between vertex $p_i$ and vertex $p_j$ if and only if $p_i < p_j$. Note that the edge \textbf{is not added} between vertices $i$ and $j$, but between the vertices of their respective elements. Refer to the notes section for better understanding. Your task is to reconstruct and output the permutation $p$. It can be proven that permutation $p$ can be uniquely determined. \begin{footnotesize} $^{\text{∗}}$A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array). \end{footnotesize}
"It can be proven that permutation $p$ can be uniquely determined" This means that there is an order of elements. How to determine whether $x$ should be earlier in that order than $y$? Consider two elements $x < y$. Suppose their positions in $p$ are $i$ and $j$ respectively. How can we determine if $i < j$? If $i < j$ and $x < y$, we will have $g_{x, y} = g_{y, x} = 1$. Otherwise, $i > j$ and $x < y$, so $g_{x, y} = g_{y, x} = 0$. So if $g_{x, y} = 1$, we know that $i < j$, otherwise $i > j$. That way we can determine for each pair of elements which one of them should appear earlier in the permutation. Notice that this is just a definition of a comparator, which proves that the permutation is indeed unique. We can find it by sorting $p = [1, 2, \ldots, n]$ with that comparator.
[ "brute force", "dfs and similar", "graphs", "implementation", "sortings" ]
1,300
#include <bits/stdc++.h> using namespace std; void solve() { int n; cin >> n; vector<string> g(n); for(auto &i : g) { cin >> i; } vector<int> p(n); iota(p.begin(), p.end(), 0); sort(p.begin(), p.end(), [&](int x, int y) { if(g[x][y] == '1') return x < y; else return x > y; }); for(auto i : p) cout << i + 1 << " "; cout << '\n'; } signed main() { ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int ttt = 1; cin >> ttt; while(ttt--) { solve(); } }
2056
C
Palindromic Subsequences
For an integer sequence $a = [a_1, a_2, \ldots, a_n]$, we define $f(a)$ as the length of the longest subsequence$^{\text{∗}}$ of $a$ that is a palindrome$^{\text{†}}$. Let $g(a)$ represent the number of subsequences of length $f(a)$ that are palindromes. In other words, $g(a)$ counts the number of palindromic subsequences in $a$ that have the maximum length. Given an integer $n$, your task is to find any sequence $a$ of $n$ integers that satisfies the following conditions: - $1 \le a_i \le n$ for all $1 \le i \le n$. - $g(a) > n$ It can be proven that such a sequence always exists under the given constraints. \begin{footnotesize} $^{\text{∗}}$A sequence $x$ is a subsequence of a sequence $y$ if $x$ can be obtained from $y$ by the deletion of several (possibly, zero or all) elements from arbitrary positions. $^{\text{†}}$A palindrome is a sequence that reads the same from left to right as from right to left. For example, $[1, 2, 1, 3, 1, 2, 1]$, $[5, 5, 5, 5]$, and $[4, 3, 3, 4]$ are palindromes, while $[1, 2]$ and $[2, 3, 3, 3, 3]$ are not. \end{footnotesize}
Analyze the example with $n = 6$. If $[a_1, a_2, \ldots, a_k, a_{n - k + 1}, a_{n - k + 2}, \ldots, a_n]$ is a palindrome, then $[a_1, a_2, \ldots, a_k, a_i, a_{n - k + 1}, a_{n - k + 2}, \ldots, a_n]$ is also a palindrome for all $k < i < n - k + 1$. If $[a_1, a_2, \ldots, a_k, a_{n - k + 1}, a_{n - k + 2}, \ldots, a_n]$ is a palindrome, then $[a_1, a_2, \ldots, a_k, a_i, a_{n - k + 1}, a_{n - k + 2}, \ldots, a_n]$ is also a palindrome for all $k < i < n - k + 1$, because we make its length odd and add $a_i$ to the middle. We can use this to create a sequence with a big value of $g$. However, we shouldn't create a palindrome of a greater length than $2k + 1$ by using the fact above. That makes us try something like $a = [1, 2, 3, \ldots, n - 2, 1, 2]$. $f(a) = 3$ here, and any of $[a_1, a_i, a_{n - 1}]$ for $1 < i < n - 1$ and $[a_2, a_i, a_n]$ for $2 < i < n$ are palindromes, which means that $g(a) = 2(n - 3) = 2n - 6$. This construction works for $n \ge 7$, so we have to handle $n = 6$ separately. We can also use the construction from the example with $n = 6$ directly: $a = [1, 1, 2, 3, 4, \ldots, n - 3, 1, 2]$, which has $g(a) = 3n - 11$.
[ "brute force", "constructive algorithms", "math" ]
1,200
#include <bits/stdc++.h> using namespace std; void solve() { int n; cin >> n; if (n == 6) { cout << "1 1 2 3 1 2\n"; } else if(n == 9) { cout << "7 3 3 7 5 3 7 7 3\n"; } else if(n == 15) { cout << "15 8 8 8 15 5 8 1 15 5 8 15 15 15 8\n"; } else { for(int i = 1; i <= n - 2; i++) cout << i << " "; cout << "1 2\n"; } } signed main() { ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int ttt = 1; cin >> ttt; while(ttt--) { solve(); } }
2056
D
Unique Median
An array $b$ of $m$ integers is called good if, \textbf{when it is sorted}, $b_{\left\lfloor \frac{m + 1}{2} \right\rfloor} = b_{\left\lceil \frac{m + 1}{2} \right\rceil}$. In other words, $b$ is good if both of its medians are equal. In particular, $\left\lfloor \frac{m + 1}{2} \right\rfloor = \left\lceil \frac{m + 1}{2} \right\rceil$ when $m$ is odd, so $b$ is guaranteed to be good if it has an odd length. You are given an array $a$ of $n$ integers. Calculate the number of good subarrays$^{\text{∗}}$ in $a$. \begin{footnotesize} $^{\text{∗}}$An array $x$ is a subarray of an array $y$ if $x$ can be obtained from $y$ by the deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end. \end{footnotesize}
Solve the problem when $a_i \le 2$. Assign $b_i = 1$ if $a_i = 2$ and $b_i = -1$ if $a_i = 1$ and calculate the number of bad subarrays. Extend this solution to $a_i \le 10$; however, you need to take overcounting into account. When is a subarray $a[l, r]$ not good? If $r - l + 1$ is odd, $a[l, r]$ can't be bad. Otherwise, suppose the median of $a[l, r]$ is $x$. Then there need to be exactly $\frac{r - l + 1}{2}$ elements in $a[l, r]$ that are $\le x$ and exactly $\frac{r - l + 1}{2}$ that are $> x$. This gives us an idea to calculate the number of bad subarrays with a median of $x$. Create another array $b$ of size $n$, where $b_i = -1$ if $a_i \le x$ and $b_i = 1$ otherwise. $a[l, r]$, which has a median of $x$, is bad if and only if $\sum\limits_{i = l}^r b_i = 0$ and $r - l + 1$ is even. Notice that the second condition is not needed, as the sum of an odd-length subarray of $b$ is always odd, so it can't be zero. Therefore, $a[l, r]$ with a median of $x$ is bad iff $\sum\limits_{i = l}^r b_i = 0$. If there is no $x$ in $[l, r]$, then the median of $a[l, r]$ trivially can't be equal to $x$. If there is an occurrence of $x$ in $a[l, r]$ and $\sum\limits_{i = l}^r b_i = 0$, notice that the median of $a[l, r]$ will always be exactly $x$. This is true because the $\frac{r - l + 1}{2}$ smallest elements of $a[l, r]$ are all $\le x$, and there is an occurrence of $x$, so the $\frac{r - l + 1}{2}$-th smallest element must be $x$. This allows us to simply count the number of subarrays of $b$ with a sum of $0$ and with an occurrence of $x$ to count the number of bad subarrays with median $x$. We can subtract that value from $\frac{n(n + 1)}{2}$ for all $x$ between $1$ and $A = 10$ to solve the problem in $O(nA)$.
[ "binary search", "brute force", "combinatorics", "data structures", "divide and conquer", "dp" ]
2,200
#include <bits/stdc++.h> using namespace std; const int MAX = 11; int main() { int tests; cin >> tests; for(int test = 0; test < tests; test++) { int n; cin >> n; vector<int> a(n); for(auto &i : a) { cin >> i; } long long ans = 0; for(int x = 1; x < MAX; x++) { vector<int> b(n); for(int i = 0; i < n; i++) { b[i] = (a[i] > x? 1 : -1); } int sum = n; vector<int> pref(n); for(int i = 0; i < n; i++) { pref[i] = sum; sum += b[i]; } vector<int> cnt(2 * n + 1); sum = n; int j = 0; for(int i = 0; i < n; i++) { if(a[i] == x) { while(j <= i) { cnt[pref[j]]++; j++; } } sum += b[i]; ans += cnt[sum]; } } ans = 1ll * n * (n + 1) / 2 - ans; cout << ans << '\n'; } return 0; }
2056
E
Nested Segments
A set $A$ consisting of pairwise distinct segments $[l, r]$ with integer endpoints is called good if $1\le l\le r\le n$, and for any pair of distinct segments $[l_i, r_i], [l_j, r_j]$ in $A$, exactly one of the following conditions holds: - $r_i < l_j$ or $r_j < l_i$ (the segments do not intersect) - $l_i \le l_j \le r_j \le r_i$ or $l_j \le l_i \le r_i \le r_j$ (one segment is fully contained within the other) You are given a good set $S$ consisting of $m$ pairwise distinct segments $[l_i, r_i]$ with integer endpoints. You want to add as many additional segments to the set $S$ as possible while ensuring that set $S$ remains good. Since this task is too easy, you need to determine the number of different ways to add the maximum number of additional segments to $S$, ensuring that the set remains good. Two ways are considered different if there exists a segment that is being added in one of the ways, but not in the other. Formally, you need to find the number of good sets $T$ of distinct segments, such that $S$ is a subset of $T$ and $T$ has the maximum possible size. Since the result might be very large, compute the answer modulo $998\,244\,353$.
Forget about counting. What is the maximum size of $T$ if $m = 0$? It is $2n - 1$. What if $m$ isn't $0$? It is still $2n - 1$. To prove this, represent a good set as a forest. We can always add $[1, n]$ and $[i, i]$ for all $1 \le i \le n$ to $S$. Now the tree of $S$ has exactly $n$ leaves. What if a vertex has more than $2$ children? What is the number of solutions when $m = 0$? It is the number of full binary trees with $n$ leaves, which is $C_{n - 1}$, where $C$ denotes the Catalan sequence. Extend this idea to count the number of solutions for a general tree of $S$. Any good set has a tree-like structure. Specifically, represent $S$ as a forest the following way: segment $[l, r]$ has a parent $[L, R]$ iff $[l, r] \subseteq [L, R]$ and $R - L + 1$ is minimized (its parent is the shortest interval in which it lies). This segment is unique (or does not exist), because there can't be two segments with minimum length that cover $[l, r]$, as they would partially intersect otherwise. Notice that we can always add $[1, n]$ and $[i, i]$ for all $1 \le i \le n$ if they aren't in $S$ yet. Now the forest of $S$ is a tree with exactly $n$ leaves. Suppose $[L, R]$ has $k$ children $[l_1, r_1], [l_2, r_2], \ldots, [l_k, r_k]$. If $k > 2$, we can always add $[l_1, r_2]$ to $S$, which decreases the number of children of $[L, R]$ by $1$ and increases the size of $S$ by $1$. Therefore, in the optimal solution each segment has at most $2$ children. Having exactly one child is impossible, as we have added all $[i, i]$, so every index of $[L, R]$ is covered by its children. This means that we have a tree where each vertex has either $0$ or $2$ children, which is a full binary tree. We have $n$ leaves, and every full binary tree with $n$ leaves has exactly $2n - 1$ vertices, so this is always the optimal size of $T$ regardless of $S$. 
To count the number of $T$, notice that when $m = 0$ the answer is the number of full binary trees with $n$ leaves, which is $C_{n - 1}$, where $C$ denotes the Catalan sequence. To extend this to a general tree, we can add $[1, n]$ and $[i, i]$ for all $1 \le i \le n$ to $S$. Now suppose $[L, R]$ has $k \ge 2$ children $[l_1, r_1], [l_2, r_2], \ldots, [l_k, r_k]$. We need to merge some children. We can treat $[l_1, r_1]$ as $[1, 1]$, $[l_2, r_2]$ as $[2, 2]$, etc. This is now the same case as $m = 0$, so there are $C_{k - 1}$ ways to merge the children of $[L, R]$. The choices at different vertices are independent, so the answer is $\prod C_{c_v - 1}$ over all non-leaves $v$, where $c_v$ is the number of children of $v$. We can construct the tree in $O(n \log n)$ by definition or in $O(n)$ using a stack.
[ "combinatorics", "dfs and similar", "dp", "dsu", "math" ]
2,500
#include <bits/stdc++.h> using namespace std; const int mod = 998244353; const int MAX = 4e5 + 42; int fact[MAX], inv[MAX], inv_fact[MAX]; int C(int n, int k) { if(n < k || k < 0) return 0; return (long long) fact[n] * inv_fact[k] % mod * inv_fact[n - k] % mod; } int Cat(int n) { return (long long) C(2 * n, n) * inv[n + 1] % mod; } int binpow(int x, int n) { int ans = 1; while(n) { if(n & 1) ans = (long long) ans * x % mod; n >>= 1; x = (long long) x * x % mod; } return ans; } void solve() { int n, m; cin >> n >> m; int initial_m = m; vector<pair<int, int>> a(m); for(auto &[l, r] : a) { cin >> l >> r; } bool was_full = 0; vector<int> was_single(n + 1); for(auto [l, r] : a) was_full |= (r - l + 1 == n); for(auto [l, r] : a) { if(l == r) was_single[l] = 1; } if(!was_full) { a.push_back({1, n}); m++; } for(int i = 1; i <= n; i++) { if(!was_single[i] && n != 1) { a.push_back({i, i}); m++; } } for(auto &[l, r] : a) r = -r; sort(a.begin(), a.end()); vector<int> deg(m); for(int i = 0; i < m; i++) { int j = i + 1; while(j < m) { if(-a[i].second < a[j].first) break; deg[i]++; j = upper_bound(a.begin(), a.end(), make_pair(-a[j].second, 1)) - a.begin(); } } for(auto &[l, r] : a) r = -r; int ans = 1; for(int i = 0; i < m; i++) { if(deg[i] > 0) { assert(deg[i] >= 2); ans = (long long) ans * Cat(deg[i] - 1) % mod; } } cout << ans << '\n'; } signed main() { fact[0] = 1; for(int i = 1; i < MAX; i++) fact[i] = (long long) fact[i - 1] * i % mod; inv_fact[MAX - 1] = binpow(fact[MAX - 1], mod - 2); for(int i = MAX - 1; i; i--) inv_fact[i - 1] = (long long) inv_fact[i] * i % mod; assert(inv_fact[0] == 1); for(int i = 1; i < MAX; i++) inv[i] = (long long) inv_fact[i] * fact[i - 1] % mod; ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int ttt = 1; cin >> ttt; while(ttt--) { solve(); } }
2056
F2
Xor of Median (Hard Version)
\textbf{This is the hard version of the problem. The difference between the versions is that in this version, the constraints on $t$, $k$, and $m$ are higher. You can hack only if you solved all versions of this problem.} A sequence $a$ of $n$ integers is called good if the following condition holds: - Let $\text{cnt}_x$ be the number of occurrences of $x$ in sequence $a$. For all pairs $0 \le i < j < m$, at least one of the following has to be true: $\text{cnt}_i = 0$, $\text{cnt}_j = 0$, or $\text{cnt}_i \le \text{cnt}_j$. In other words, if both $i$ and $j$ are present in sequence $a$, then the number of occurrences of $i$ in $a$ is less than or equal to the number of occurrences of $j$ in $a$. You are given integers $n$ and $m$. Calculate the value of the bitwise XOR of the median$^{\text{∗}}$ of all good sequences $a$ of length $n$ with $0\le a_i < m$. Note that the value of $n$ can be very large, so you are given its binary representation instead. \begin{footnotesize} $^{\text{∗}}$The median of a sequence $a$ of length $n$ is defined as the $\left\lfloor\frac{n + 1}{2}\right\rfloor$-th smallest value in the sequence. \end{footnotesize}
The order of the sequence doesn't matter. What if we fix the sequence $\text{cnt}$ and then calculate its contribution to the answer? By Lucas's theorem the contribution is odd iff for each set bit in $n$ there is exactly one $i$ such that $\text{cnt}_i$ has this bit set, and $\sum\limits_{i = 0}^{m - 1} \text{cnt}_i = n$. In other words, $\text{cnt}$ has to partition all set bits in $n$. For a fixed $\text{cnt}$ with an odd contribution, what will be the median? There is a very big element in $\text{cnt}$. Since $\text{cnt}$ partitions the bits of $n$, there will be $i$ which has its most significant bit. This means that $2\text{cnt}_i > n$, so $i$ will always be the median. Suppose there are $p$ non-zero elements in $\text{cnt}$. We can partition all set bits of $n$ into $p$ non-empty subsequences and then choose which of the $m$ numbers will occur. How can we calculate the answer now? The only time we use the value of $n$ is when we partition all of its set bits into subsequences. That means that the answer only depends on the number of set bits in $n$, not on $n$ itself. This allows us to solve the easy version by fixing the value of $p$ and the value of the median. To solve the hard version use Lucas's theorem again and do digit dp or SOS-dp. The order of the sequence doesn't matter, so let's fix the sequence $\text{cnt}$ and calculate its contribution to the answer. For a fixed $\text{cnt}$, the number of ways to order $a$ is $\binom{n}{\text{cnt}_0, \text{cnt}_1, \ldots, \text{cnt}_{m - 1}}$. By Lucas's theorem the contribution is odd iff for each set bit in $n$ there is exactly one $i$ such that $\text{cnt}_i$ has this bit set, and $\sum\limits_{i = 0}^{m - 1} \text{cnt}_i = n$. In other words, $\text{cnt}$ has to partition all set bits in $n$. Since $\text{cnt}$ partitions the bits of $n$, there will be $i$ which has its most significant bit. This means that $2\text{cnt}_i > n$, so $i$ will always be the median. 
Suppose there are $p$ non-zero elements in $\text{cnt}$. We can partition all set bits of $n$ into $p$ non-empty subsequences and then choose which of the $m$ numbers will occur. The only time we use the value of $n$ is when we partition all of its set bits into subsequences. That means that the answer only depends on the number of set bits in $n$, not on $n$ itself. So let's fix $p$ as the number of non-zero elements in $\text{cnt}$ and $x$ as the median. Denote $b$ as the number of set bits in $n$. There are $S(b, p)$ ways to partition the bits into $p$ non-empty subsequences, where $S$ denotes the Stirling number of the second kind. There are $\binom{x}{p - 1}$ ways to choose which other elements will have non-zero $\text{cnt}$, because the median will always have the largest value and non-zero $\text{cnt}_i$ must be non-decreasing. The answer is then $\oplus_{p = 1}^b \oplus_{x = 0}^{m - 1} \left(S(b, p) \bmod 2\right) \cdot \left(\binom{x}{p - 1} \bmod 2\right) \cdot x$, which we can calculate in $O(km)$; this solves the easy version. To solve the hard version we can use Lucas's theorem again to get that the contribution of $x$ to the answer is the XOR of $S(b, p + 1) \bmod 2$ over all submasks $p$ of $x$. We can limit $p$ to be between $0$ and $b - 1$. That means that only the last $L = \lceil \log_2 b \rceil$ bits of $x$ determine whether $x$ contributes something to the answer. We can find which of the $2^L$ possibilities will have an odd contribution by setting $dp_p = S(b, p + 1) \bmod 2$ and calculating its SOS-dp. Then for fixed $L$ last bits it is easy to find the XOR of all $x < m$ with those bits. Note that $S(n, k)$ is odd iff $(n - k) \text{ \& } \left\lfloor \frac{k - 1}{2} \right\rfloor = 0$, which we can derive from the recurrence, combinatorics, Google, or OEIS. This solves the problem in $O(b \log b) = O(k \log k)$.
[ "bitmasks", "brute force", "combinatorics", "dp", "math" ]
3,000
#include <bits/stdc++.h> using namespace std; int S(int n, int k) { return !(n - k & (k - 1 >> 1)); } int get(int n, int m) { int L = __lg(n) + 1; int up = 1 << L; vector<int> dp(up); for(int i = 0; i < n; i++) dp[i] = S(n, i + 1); for(int j = 0; j < L; j++) { for(int i = 0; i < up; i++) { if(i >> j & 1) dp[i] ^= dp[i ^ (1 << j)]; } } int ans = 0; for(int lst = 0; lst < up && lst < m; lst++) { if(!dp[lst]) continue; int cnt = m - 1 - lst >> L; if(cnt & 1 ^ 1) ans ^= lst; if(cnt % 4 == 0) ans ^= cnt << L; else if(cnt % 4 == 1) ans ^= 1ll << L; else if(cnt % 4 == 2) ans ^= cnt + 1 << L; else ans ^= 0; } return ans; } void solve() { int k, m; string s; cin >> k >> m >> s; int n = 0; for(auto &i : s) n += i & 1; int ans = get(n, m); cout << ans << '\n'; } signed main() { ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int ttt = 1; cin >> ttt; while(ttt--) { solve(); } }
2057
A
MEX Table
One day, the schoolboy Mark misbehaved, so the teacher Sasha called him to the whiteboard. Sasha gave Mark a table with $n$ rows and $m$ columns. His task is to arrange the numbers $0, 1, \ldots, n \cdot m - 1$ in the table (each number must be used exactly once) in such a way as to maximize the sum of MEX$^{\text{∗}}$ across all rows and columns. More formally, he needs to maximize $$\sum\limits_{i = 1}^{n} \operatorname{mex}(\{a_{i,1}, a_{i,2}, \ldots, a_{i,m}\}) + \sum\limits_{j = 1}^{m} \operatorname{mex}(\{a_{1,j}, a_{2,j}, \ldots, a_{n,j}\}),$$ where $a_{i,j}$ is the number in the $i$-th row and $j$-th column. Sasha is not interested in how Mark arranges the numbers, so he only asks him to state one number — the maximum sum of MEX across all rows and columns that can be achieved. \begin{footnotesize} $^{\text{∗}}$The minimum excluded (MEX) of a collection of integers $c_1, c_2, \ldots, c_k$ is defined as the smallest non-negative integer $x$ which does not occur in the collection $c$. For example: - $\operatorname{mex}([2,2,1])= 0$, since $0$ does not belong to the array. - $\operatorname{mex}([3,1,0,1]) = 2$, since $0$ and $1$ belong to the array, but $2$ does not. - $\operatorname{mex}([0,3,1,2]) = 4$, since $0$, $1$, $2$, and $3$ belong to the array, but $4$ does not. \end{footnotesize}
Note that $0$ appears in exactly one row and exactly one column, so every other row and column has a MEX of $0$. Moreover, $1$ can belong to only one of the two lines containing $0$, so at most one of them has a MEX greater than $1$, while the other has a MEX of exactly $1$. Hence the answer does not exceed $\operatorname{max}(n, m) + 1$. It is not difficult to provide an example where this bound is achieved: if $n > m$, we place the numbers from $0$ to $n - 1$ in the first column; otherwise, we place the numbers from $0$ to $m - 1$ in the first row. The remaining elements can be placed arbitrarily.
[ "constructive algorithms", "math" ]
800
#include <bits/stdc++.h> using i64 = long long; void solve() { int n, m; std::cin >> n >> m; std::cout << std::max(n, m) + 1 << "\n"; } signed main() { std::ios::sync_with_stdio(false); std::cin.tie(nullptr); int t = 1; std::cin >> t; while (t--) { solve(); } }
2057
B
Gorilla and the Exam
Due to a shortage of teachers in the senior class of the "T-generation", it was decided to have a huge male gorilla conduct exams for the students. However, it is not that simple; to prove his competence, he needs to solve the following problem. For an array $b$, we define the function $f(b)$ as the smallest number of the following operations required to make the array $b$ empty: - take two integers $l$ and $r$, such that $l \le r$, and let $x$ be the $\min(b_l, b_{l+1}, \ldots, b_r)$; then - remove all such $b_i$ that $l \le i \le r$ and $b_i = x$ from the array; the deleted elements are removed and the remaining indices are renumbered. You are given an array $a$ of length $n$ and an integer $k$. No more than $k$ times, you can choose any index $i$ ($1 \le i \le n$) and any integer $p$, and replace $a_i$ with $p$. Help the gorilla to determine the smallest value of $f(a)$ that can be achieved after such replacements.
Note that $f(a)$ is the number of distinct values in the array: it is always optimal to choose the entire array as the segment, because we will have to remove the minimum element eventually, and it is advantageous to remove all of its occurrences at once. Therefore, our task reduces to changing no more than $k$ elements so as to minimize the number of distinct values. To achieve this, observe that it is always best to take the values that occur least often and change all of their occurrences to the value that occurs most frequently. This can easily be done in $O(n \log n)$.
[ "greedy", "sortings" ]
1000
#include <bits/stdc++.h> using i64 = long long; void solve() { int n, k; std::cin >> n >> k; std::vector<int> a(n); for (int i = 0; i < n; i++) { std::cin >> a[i]; } std::sort(a.begin(), a.end()); std::vector<int> cnt = {1}; for (int i = 1; i < n; i++) { if (a[i] == a[i - 1]) { cnt.back()++; } else { cnt.emplace_back(1); } } std::sort(cnt.begin(), cnt.end()); int m = cnt.size(); for (int i = 0; i < m - 1; i++) { if (cnt[i] > k) { std::cout << m - i << "\n"; return; } k -= cnt[i]; } std::cout << 1 << "\n"; } signed main() { std::ios::sync_with_stdio(false); std::cin.tie(nullptr); int t = 1; std::cin >> t; while (t--) { solve(); } }
2057
C
Trip to the Olympiad
In the upcoming year, there will be many team olympiads, so the teachers of "T-generation" need to assemble a team of three pupils to participate in them. Any three pupils will show a worthy result in any team olympiad. But winning the olympiad is only half the battle; first, you need to get there... Each pupil has an independence level, expressed as an integer. In "T-generation", there is exactly one student with each independence level from $l$ to $r$, inclusive. For a team of three pupils with independence levels $a$, $b$, and $c$, the value of their team independence is equal to $(a \oplus b) + (b \oplus c) + (a \oplus c)$, where $\oplus$ denotes the bitwise XOR operation. Your task is to choose any trio of students with the maximum possible team independence.
Let's take a look at the $i$-th bit. If it appears in all three numbers or in none of them, it contributes nothing to the answer; however, if it appears in exactly one or exactly two of them, it differs in exactly two of the three pairs and thus adds $2 \cdot 2^i = 2^{i + 1}$ to the answer. Therefore, let's examine the most significant bit, say the $k$-th, in which the numbers $l$ and $r$ differ, and note that the more significant bits are equal in all numbers of $[l, r]$ and do not affect the outcome, so the answer cannot exceed $T = 2 \cdot (1 + 2 + \ldots + 2^k)$. Thus, let $x$ be the unique number in the range $[l, r]$ that is divisible by $2^k$ and has the $k$-th bit set. Take $y = x - 1$ and $z$ as any number in the range $[l, r]$ that is neither $x$ nor $y$. Notice that by choosing such a triplet of numbers, the value of the expression $(x \oplus y) + (x \oplus z) + (y \oplus z)$ is exactly equal to $T$.
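The bound and the construction can be verified by brute force on a small range. In the hedged sketch below (the helpers `triple_value` and `best_triple` are mine), for $l = 5$, $r = 12$ we have $k = 3$, $x = 8$, $y = 7$, and $T = 2 \cdot (1 + 2 + 4 + 8) = 30$.

```cpp
#include <bits/stdc++.h>

// Team independence of one triple.
int triple_value(int a, int b, int c) { return (a ^ b) + (b ^ c) + (a ^ c); }

// Exhaustively try all triples of distinct levels in [l, r].
int best_triple(int l, int r) {
    int best = 0;
    for (int a = l; a <= r; a++)
        for (int b = a + 1; b <= r; b++)
            for (int c = b + 1; c <= r; c++)
                best = std::max(best, triple_value(a, b, c));
    return best;
}
```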
[ "bitmasks", "constructive algorithms", "greedy", "math" ]
1500
#include <bits/stdc++.h> using i64 = long long; void solve() { int l, r; std::cin >> l >> r; int k = 31 - __builtin_clz(l ^ r); int a = l | ((1 << k) - 1), b = a + 1, c = (a == l ? r : l); std::cout << a << " " << b << " " << c << "\n"; } signed main() { std::ios::sync_with_stdio(false); std::cin.tie(nullptr); int t = 1; std::cin >> t; while (t--) { solve(); } }
2057
D
Gifts Order
"T-Generation" has decided to purchase gifts for various needs; thus, they have $n$ different sweaters numbered from $1$ to $n$. The $i$-th sweater has a size of $a_i$. Now they need to send some subsegment of sweaters to an olympiad. It is necessary that the sweaters fit as many people as possible, but without having to take too many of them. They need to choose two indices $l$ and $r$ ($1 \le l \le r \le n$) to maximize the convenience equal to $$\operatorname{max} (a_l, a_{l + 1}, \ldots, a_r) - \operatorname{min} (a_l, a_{l + 1}, \ldots, a_r) - (r - l),$$ that is, the range of sizes minus the number of sweaters. Sometimes the sizes of the sweaters change; it is known that there have been $q$ changes, in each change, the size of the $p$-th sweater becomes $x$. Help the "T-Generation" team and determine the maximum convenience among all possible pairs $(l, r)$ initially, as well as after each size change.
To begin with, let's take a look at what an optimal segment of sweaters looks like. I claim that in the optimal answer, the maximum and minimum are located at the edges of the segment. Suppose this is not the case; then we can narrow the segment (from the side where the extreme element is neither the minimum nor the maximum), and the answer will improve since the length will decrease, while the minimum and maximum will remain unchanged. There are two scenarios: when the minimum is at $l$ and the maximum is at $r$, and vice versa. These two cases are analogous, so let's consider the solution when the minimum is at $l$. Let's express what the value of the segment actually is: it is $a_r - a_l - (r - l) = (a_r - r) - (a_l - l)$, meaning there is a part that depends only on $r$ and a part that depends only on $l$. Let's create a segment tree where we will store the answer, as well as the maximum of all $a_i - i$ and the minimum of all $a_i - i$ (for the segment that corresponds to the current node of the segment tree, of course). Now, let's see how to recalculate the values at a node. First, the minimum/maximum of $a_i - i$ can be easily recalculated by taking the minimum/maximum over the two children of the node in the segment tree. Now, how do we recalculate the answer? It is simply the maximum of three quantities: the answers of the two children, and (the maximum of $a_i - i$ in the right child) minus (the minimum of $a_i - i$ in the left child), which covers the case when the maximum is in the right child and the minimum is in the left. Since we maintain this in the segment tree, we can easily handle update queries. For greater clarity, I recommend looking at the code.
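The combine step can be illustrated without the full segment tree: a divide-and-conquer version of the same merge (the names `Node`, `combine`, `build` and `brute` are mine) keeps, per segment, the min/max of $a_i - i$ and of $a_i + i$ together with the best answer, and can be checked against an $O(n^2)$ brute force.

```cpp
#include <bits/stdc++.h>

// Per-segment summary: min/max of a_i - i (mn1/mx1), min/max of a_i + i
// (mn2/mx2), and the best answer inside the segment.
struct Node { long long mn1, mx1, mn2, mx2, ans; };

// The merge rule from the tutorial: cross answers are
// (max of a_r - r in the right half) - (min of a_l - l in the left half)
// and the symmetric case with a_i + i.
Node combine(const Node& L, const Node& R) {
    return {
        std::min(L.mn1, R.mn1), std::max(L.mx1, R.mx1),
        std::min(L.mn2, R.mn2), std::max(L.mx2, R.mx2),
        std::max({L.ans, R.ans, R.mx1 - L.mn1, L.mx2 - R.mn2})
    };
}

Node build(const std::vector<long long>& a, int l, int r) {
    if (l == r) return {a[l] - l, a[l] - l, a[l] + l, a[l] + l, 0};
    int m = (l + r) / 2;
    return combine(build(a, l, m), build(a, m + 1, r));
}

// O(n^2) reference: max over all (l, r) of range minus length.
long long brute(const std::vector<long long>& a) {
    long long best = 0;
    for (size_t l = 0; l < a.size(); l++) {
        long long mn = a[l], mx = a[l];
        for (size_t r = l; r < a.size(); r++) {
            mn = std::min(mn, a[r]); mx = std::max(mx, a[r]);
            best = std::max(best, mx - mn - (long long)(r - l));
        }
    }
    return best;
}
```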
[ "data structures", "greedy", "implementation", "math", "matrices" ]
2000
#include <bits/stdc++.h> using i64 = long long; template<class Info> struct SegmentTree { int n; std::vector<Info> info; SegmentTree() : n(0) {} SegmentTree(int n_, Info v_ = Info()) { init(n_, v_); } template<class T> SegmentTree(std::vector<T> init_) { init(init_); } void init(int n_, Info v_ = Info()) { init(std::vector<Info>(n_, v_)); } template<class T> void init(std::vector<T> init_) { n = init_.size(); int sz = (1 << (std::__lg(n - 1) + 1)); info.assign(sz * 2, Info()); std::function<void(int, int, int)> build = [&](int v, int l, int r) { if (l == r) { info[v] = init_[l]; return; } int m = (l + r) / 2; build(v + v, l, m); build(v + v + 1, m + 1, r); info[v] = info[v + v] + info[v + v + 1]; }; build(1, 0, n - 1); } Info rangeQuery(int v, int l, int r, int tl, int tr) { if (r < tl || l > tr) { return Info(); } if (l >= tl && r <= tr) { return info[v]; } int m = (l + r) / 2; return rangeQuery(v + v, l, m, tl, tr) + rangeQuery(v + v + 1, m + 1, r, tl, tr); } Info rangeQuery(int l, int r) { return rangeQuery(1, 0, n - 1, l, r); } void modify(int v, int l, int r, int i, const Info &x) { if (l == r) { info[v] = x; return; } int m = (l + r) / 2; if (i <= m) { modify(v + v, l, m, i, x); } else { modify(v + v + 1, m + 1, r, i, x); } info[v] = info[v + v] + info[v + v + 1]; } void modify(int i, const Info &x) { modify(1, 0, n - 1, i, x); } Info query(int v, int l, int r, int i) { if (l == r) { return info[v]; } int m = (l + r) / 2; if (i <= m) { return query(v + v, l, m, i); } else { return query(v + v + 1, m + 1, r, i); } } Info query(int i) { return query(1, 0, n - 1, i); } }; const int INF = 1E9; struct Info { int min1, min2, max1, max2, ans1, ans2; Info() : min1(INF), min2(INF), max1(-INF), max2(-INF), ans1(0), ans2(0) {} Info(std::pair<int, int> x) : min1(x.first), min2(x.second), max1(x.first), max2(x.second), ans1(0), ans2(0) {} }; Info operator+(const Info &a, const Info &b) { Info res; res.min1 = std::min(a.min1, b.min1); res.min2 = std::min(a.min2, b.min2); 
res.max1 = std::max(a.max1, b.max1); res.max2 = std::max(a.max2, b.max2); res.ans1 = std::max({a.ans1, b.ans1, b.max1 - a.min1}); res.ans2 = std::max({a.ans2, b.ans2, a.max2 - b.min2}); return res; } void solve() { int n, q; std::cin >> n >> q; std::vector<int> a(n); std::vector<std::pair<int, int>> t(n); for (int i = 0; i < n; i++) { std::cin >> a[i]; t[i] = {a[i] - i, a[i] + i - n + 1}; } SegmentTree<Info> st(t); auto query = [&]() { return std::max(st.info[1].ans1, st.info[1].ans2); }; std::cout << query() << "\n"; for (int i = 0; i < q; i++) { int p, x; std::cin >> p >> x; p--; t[p] = {x - p, x + p - n + 1}; st.modify(p, t[p]); std::cout << query() << "\n"; } } signed main() { std::ios::sync_with_stdio(false); std::cin.tie(nullptr); int t; std::cin >> t; while (t--) { solve(); } }
2057
E1
Another Exercise on Graphs (Easy Version)
\textbf{This is the easy version of the problem. The difference between the versions is that in this version, there is an additional constraint on $m$. You can hack only if you solved all versions of this problem.} Recently, the instructors of "T-generation" needed to create a training contest. They were missing one problem, and there was not a single problem on graphs in the contest, so they came up with the following problem. You are given a connected weighted undirected graph with $n$ vertices and $m$ edges, which does not contain self-loops or multiple edges. There are $q$ queries of the form $(a, b, k)$: among all paths from vertex $a$ to vertex $b$, find the smallest $k$-th maximum weight of edges on the path$^{\dagger}$. The instructors thought that the problem sounded very interesting, but there is one catch. They do not know how to solve it. Help them and solve the problem, as there are only a few hours left until the contest starts. $^{\dagger}$ Let $w_1 \ge w_2 \ge \ldots \ge w_{h}$ be the weights of all edges in a path, in non-increasing order. The $k$-th maximum weight of the edges on this path is $w_{k}$.
We will learn how to check if there exists a path from $a$ to $b$ such that the $k$-th maximum on this path is less than or equal to $x$. For a fixed $x$, the edges are divided into two types: light (weight less than or equal to $x$) and heavy (weight strictly greater than $x$). We assign a weight of $0$ to all light edges and a weight of $1$ to all heavy edges. Then, the desired path exists if and only if the shortest path between $a$ and $b$ (with respect to the assigned weights) is strictly less than $k$. Now, let's consider that we have many queries. Initially, we assign a weight of $1$ to all edges and compute the length of the shortest path between each pair of vertices. Then, we process the edges in increasing order of weight and replace their weights with $0$. After each such change, we need to be able to recalculate the shortest paths between each pair of vertices. Let $\textrm{d}[i][j]$ be the shortest path lengths before changing the weight of the edge $(a, b)$; then the lengths of the new paths $\textrm{d}'$ can be computed using the formula $\textrm{d}'[i][j] = \min \{ \textrm{d}[i][j], \ \textrm{d}[a][i] + \textrm{d}[b][j], \ \textrm{d}[b][i] + \textrm{d}[a][j] \}.$ Thus, each recalculation of the lengths of the shortest paths can be performed in $O(n^2)$. Let $\textrm{dp}[k][i][j]$ be the length of the shortest path between the pair of vertices $i$ and $j$ if the weights of the $k$ smallest edges are $0$ and the weights of the remaining edges are $1$. We have just learned how to recalculate this dynamic programming in $O(n^2 \cdot m)$ time. Using the criterion above, we can answer each of the queries in $O(\log m)$ time with binary search over the computed array $\textrm{dp}[k][i][j]$. A note on simple and non-simple paths: if there is a cycle in the path, it can be removed from this path, and the $k$-th maximum will not increase.
Thus, if we restrict the set of considered paths to only simple paths (i.e., those that do not contain cycles in any form), the answer will not change.
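The single-edge recalculation can be sketched in isolation (the helper names `floyd` and `relax_edge` are mine): starting from correct all-pairs distances, zeroing one edge and applying the formula once must agree with recomputing everything from scratch.

```cpp
#include <bits/stdc++.h>

using Mat = std::vector<std::vector<int>>;

// Plain Floyd-Warshall on an adjacency matrix of distances.
void floyd(Mat& d) {
    int n = d.size();
    for (int k = 0; k < n; k++)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                d[i][j] = std::min(d[i][j], d[i][k] + d[k][j]);
}

// The O(n^2) update from the tutorial after edge (a, b) drops to weight 0:
//   d'[i][j] = min(d[i][j], d[i][a] + d[b][j], d[i][b] + d[a][j]).
Mat relax_edge(const Mat& d, int a, int b) {
    int n = d.size();
    Mat nd = d;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            nd[i][j] = std::min({d[i][j], d[i][a] + d[b][j], d[i][b] + d[a][j]});
    return nd;
}
```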
[ "binary search", "brute force", "dp", "dsu", "graphs", "shortest paths", "sortings" ]
2300
#include <bits/stdc++.h> using i64 = long long; void solve() { int n, m, q; std::cin >> n >> m >> q; std::vector<std::array<int, 3>> edges(m); for (int i = 0; i < m; i++) { int v, u, w; std::cin >> v >> u >> w; v--, u--; edges[i] = {v, u, w}; } std::sort(edges.begin(), edges.end(), [&](const std::array<int, 3> &a, const std::array<int, 3> &b) { return a[2] < b[2]; }); constexpr int INF = 1e9; std::vector<int> value(m + 1); std::vector<std::vector<std::vector<int>>> dis(m + 1, std::vector<std::vector<int>>(n, std::vector<int>(n, INF))); for (int i = 0; i < n; i++) { dis[0][i][i] = 0; } for (auto edge : edges) { int v = edge[0], u = edge[1]; dis[0][v][u] = dis[0][u][v] = 1; } for (int k = 0; k < n; k++) { for (int i = 0; i < n; i++) { for (int j = 0; j < n; j++) { dis[0][i][j] = std::min(dis[0][i][j], dis[0][i][k] + dis[0][k][j]); } } } int p = 1; for (auto edge : edges) { int v = edge[0], u = edge[1], w = edge[2]; for (int i = 0; i < n; i++) { for (int j = 0; j < n; j++) { dis[p][i][j] = std::min({dis[p - 1][i][j], dis[p - 1][i][v] + dis[p - 1][u][j], dis[p - 1][i][u] + dis[p - 1][v][j]}); } } value[p++] = w; } for (int i = 0; i < q; i++) { int v, u, k; std::cin >> v >> u >> k; v--, u--; int low = 0, high = m; while (high - low > 1) { int mid = (low + high) / 2; if (dis[mid][v][u] < k) { high = mid; } else { low = mid; } } std::cout << value[high] << " \n"[i == q - 1]; } } signed main() { std::ios::sync_with_stdio(false); std::cin.tie(nullptr); int t = 1; std::cin >> t; while (t--) { solve(); } }
2057
E2
Another Exercise on Graphs (Hard Version)
\textbf{This is the hard version of the problem. The difference between the versions is that in this version, there is no additional constraint on $m$. You can hack only if you solved all versions of this problem.} Recently, the instructors of "T-generation" needed to create a training contest. They were missing one problem, and there was not a single problem on graphs in the contest, so they came up with the following problem. You are given a connected weighted undirected graph with $n$ vertices and $m$ edges, which does not contain self-loops or multiple edges. There are $q$ queries of the form $(a, b, k)$: among all paths from vertex $a$ to vertex $b$, find the smallest $k$-th maximum weight of edges on the path$^{\dagger}$. The instructors thought that the problem sounded very interesting, but there is one catch. They do not know how to solve it. Help them and solve the problem, as there are only a few hours left until the contest starts. $^{\dagger}$ Let $w_1 \ge w_2 \ge \ldots \ge w_{h}$ be the weights of all edges in a path, in non-increasing order. The $k$-th maximum weight of the edges on this path is $w_{k}$.
Please read the solution to problem E1. Let's return to the process where we sequentially changed the weights of the edges from $1$ to $0$. If the next edge whose weight we replace with $0$ connects vertices that are already at distance $0$, then the shortest paths between each pair of vertices do not change at all, so in this case we do not recalculate the shortest paths. We also do not store such useless layers in the array $\textrm{dp}[k][i][j]$. This allows us to improve the asymptotic complexity of our solution to $O(n^3 + q \log n)$. Proof. Consider the graph that consists only of the edges with weight $0$. We have $m$ consecutive requests to add an edge to it. Notice that we compute a new layer of $\textrm{dp}$ only if adding the edge merges two components of this graph, and this happens exactly $n-1$ times.
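The counting argument can be sketched with a DSU over the zero-weight edges (the names `DSU` and `count_layers` are mine): a layer is created only when a merge actually happens, hence exactly $n - 1$ layers for a connected graph.

```cpp
#include <bits/stdc++.h>

// Minimal DSU with path compression.
struct DSU {
    std::vector<int> p;
    DSU(int n) : p(n) { std::iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
    bool merge(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return false;
        p[b] = a;
        return true;
    }
};

// Process edges in increasing weight order; a dp layer is created only when
// the edge merges two components of the zero-weight graph.
int count_layers(int n, const std::vector<std::pair<int, int>>& sorted_edges) {
    DSU dsu(n);
    int layers = 0;
    for (auto [u, v] : sorted_edges)
        layers += dsu.merge(u, v);  // true iff the components were distinct
    return layers;
}
```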
[ "binary search", "dfs and similar", "dp", "dsu", "graphs", "shortest paths", "sortings" ]
2500
#include <bits/stdc++.h> using i64 = long long; struct DSU { std::vector<int> p, sz, h; DSU(int n = 0) : p(n), sz(n, 1), h(n) { std::iota(p.begin(), p.end(), 0); } int leader(int x) { if (x == p[x]) { return x; } return leader(p[x]); } bool same(int x, int y) { return leader(x) == leader(y); } bool merge(int x, int y) { x = leader(x); y = leader(y); if (x == y) return false; if (h[x] < h[y]) { std::swap(x, y); } if (h[x] == h[y]) { ++h[x]; } sz[x] += sz[y]; p[y] = x; return true; } int size(int x) { return sz[leader(x)]; } }; void solve() { int n, m, q; std::cin >> n >> m >> q; std::vector<std::array<int, 3>> edges(m); for (int i = 0; i < m; i++) { int v, u, w; std::cin >> v >> u >> w; v--, u--; edges[i] = {v, u, w}; } std::sort(edges.begin(), edges.end(), [&](const std::array<int, 3> &a, const std::array<int, 3> &b) { return a[2] < b[2]; }); constexpr int INF = 1e9; std::vector<int> value(n); std::vector<std::vector<std::vector<int>>> dis(n, std::vector<std::vector<int>>(n, std::vector<int>(n, INF))); for (int i = 0; i < n; i++) { dis[0][i][i] = 0; } for (auto edge : edges) { int v = edge[0], u = edge[1]; dis[0][v][u] = dis[0][u][v] = 1; } for (int k = 0; k < n; k++) { for (int i = 0; i < n; i++) { for (int j = 0; j < n; j++) { dis[0][i][j] = std::min(dis[0][i][j], dis[0][i][k] + dis[0][k][j]); } } } int p = 1; DSU dsu(n); for (auto edge : edges) { int v = edge[0], u = edge[1], w = edge[2]; if (dsu.merge(v, u)) { for (int i = 0; i < n; i++) { for (int j = 0; j < n; j++) { dis[p][i][j] = std::min({dis[p - 1][i][j], dis[p - 1][i][v] + dis[p - 1][u][j], dis[p - 1][i][u] + dis[p - 1][v][j]}); } } value[p++] = w; } } for (int i = 0; i < q; i++) { int v, u, k; std::cin >> v >> u >> k; v--, u--; int low = 0, high = n - 1; while (high - low > 1) { int mid = (low + high) / 2; if (dis[mid][v][u] < k) { high = mid; } else { low = mid; } } std::cout << value[high] << " \n"[i == q - 1]; } } signed main() { std::ios::sync_with_stdio(false); std::cin.tie(nullptr); int t = 1; 
std::cin >> t; while (t--) { solve(); } }
2057
F
Formation
One day, the teachers of "T-generation" decided to instill discipline in the pupils, so they lined them up and made them calculate in order. There are a total of $n$ pupils, the height of the $i$-th pupil in line is $a_i$. The line is comfortable, if for each $i$ from $1$ to $n - 1$, the following condition holds: $a_i \cdot 2 \ge a_{i + 1}$. Initially, the line is comfortable. The teachers do not like that the maximum height in the line is too small, so they want to feed the pupils pizza. You know that when a pupil eats one pizza, their height increases by $1$. One pizza can only be eaten by only one pupil, but each pupil can eat an unlimited number of pizzas. It is important that after all the pupils have eaten their pizzas, the line is comfortable. The teachers have $q$ options for how many pizzas they will order. For each option $k_i$, answer the question: what is the maximum height $\max(a_1, a_2, \ldots, a_n)$ that can be achieved if the pupils eat at most $k_i$ pizzas.
Hint 1: how much does a single index affect those before it? Hint 2: try looking at the formula for making an index a maximum differently. Hint 3: do you need to answer all queries online? Hint 4: do you need to consider all indices? Solution: Let's fix the index that we want our maximum to be at; let it be $i$. Then, since $k_i \le 10^9$ in each test case and every number in our array $a$ is also $\le 10^9$, we do not need to touch any indices before $i - 30$: they remain valid because they were part of the previous comfortable array, and we never need to make them bigger since $2^{31} \ge 2 \cdot 10^9$. Now let's try doing binary search on the answer for each query (let our current query be with the number $M$). When we fix our possible answer as $X$ at index $i$, what does that mean? It means that we need $a_i \ge X$, $a_{i-1} \ge \lceil \frac{X}{2} \rceil$, and so on, each index being determined by the next one (index $i$ from index $i + 1$). Let us call this auxiliary array $c$ ($c_0 = X$, $c_j = \lceil \frac{c_{j - 1}}{2} \rceil$). So, if we need to change exactly the $k + 1$ indices $i$, $i - 1$, \ldots, $i - k$, the condition for checking whether we can do that is as follows: $1$. $a_{i - j} \le c_j$ for each $0 \le j \le k$; $2$. $\sum_{j = 0}^{k} (c_j - a_{i - j}) \le M$. Now, we can see that for each fixed $k$ from $0$ to $\log \mathrm{MAX} = 30$, we should choose an index $i$ whose preceding indices satisfy condition $1$ and for which $\sum_{j = 0}^{k} a_{i - j}$ is maximal (this minimizes the number of pizzas required). Let's consider solving this via scanline. For each index $i$ and each $k$ from $0$ to $30$, let's consider the possible values of $M$ such that, if we want our maximum to be at index $i$, it will affect exactly $(a_i, a_{i - 1}, \ldots, a_{i - k})$. We can notice that such values form a segment $[L_{ik}, R_{ik}]$, which is possibly invalid ($L_{ik} > R_{ik}$). We can find these segments in $O(n \log \mathrm{MAX})$.
Now let's process the queries offline in ascending order, maintaining the currently active indices. We maintain $\log \mathrm{MAX} + 1$ multisets with the best possible sums over valid segments of each length, and update them at each opening/closing of a segment (this is done in $O(n \log^2 \mathrm{MAX})$). On each query, we do binary search on the answer and check all $\log \mathrm{MAX}$ length candidates.
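The requirement chain $c_0 = X$, $c_j = \lceil c_{j-1} / 2 \rceil$ from the solution is short: for $X \le 2 \cdot 10^9$ it reaches $1$ within about $31$ steps, which is exactly why only the $\sim 30$ indices before $i$ ever need to change (the helper name `requirement_chain` is mine).

```cpp
#include <bits/stdc++.h>

// To place a maximum X at index i we need a_i >= c_0 = X,
// a_{i-1} >= c_1 = ceil(c_0 / 2), and so on; the chain stops once it hits 1,
// because the original array is already comfortable.
std::vector<long long> requirement_chain(long long X) {
    std::vector<long long> c = {X};
    while (c.back() > 1) c.push_back((c.back() + 1) / 2);  // ceil division
    return c;
}
```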
[ "binary search", "data structures", "dp", "sortings", "two pointers" ]
3300
// Parallel binary search optimizes the complexity from log^3 to log^2. // A segment tree might be faster than std::multiset here. // sl[i] must start at max(a): the answer is never below the initial maximum. #include <bits/stdc++.h> #include <algorithm> using namespace std; using ll = long long; constexpr int W = 31; constexpr int C = 1'000'000'000; struct Event { int x; int ind; ll s; bool operator<(const Event& rhs) const { return x < rhs.x; } }; struct SegmentTree { int n; vector<ll> sgt; SegmentTree(int n) : n(n), sgt(2 * n, -1) {} void Change(int i, ll x) { for (sgt[i += n] = x; i != 1; i >>= 1) { sgt[i >> 1] = max(sgt[i], sgt[i ^ 1]); } } int GlobalMax() const { return sgt[1]; } }; vector<int> solve(const vector<int>& a, const vector<int>& q) { const int n = a.size(), m = q.size(); vector<ll> ps(n + 1); for (int i = 0; i < n; ++i) ps[i + 1] = ps[i] + a[i]; vector<ll> min_x(n, 0); vector<vector<pair<int, ll>>> change_sum(W); vector<Event> events; events.reserve(2 * n); for (int j = 1; j <= W && j <= n; ++j) { events.clear(); for (int i = j - 1; i < n; ++i) { min_x[i] = max(min_x[i], 1 + ((a[i - j + 1] - 1ll) << (j - 1))); ll max_x = i == j - 1 ?
2ll * C : ((ll)a[i - j] << j); max_x = min<ll>(max_x, a[i] + C); if (min_x[i] > max_x) { continue; } const ll xsum = ps[i + 1] - ps[i + 1 - j]; events.push_back(Event{(int)min_x[i], i, xsum}); events.push_back(Event{(int)max_x + 1, i, -1}); } sort(events.begin(), events.end()); SegmentTree sgt(n); const int k = events.size(); ll was_max = -1; for (int i = 0; i < k;) { int s = i; while (i < k && events[i].x == events[s].x) { sgt.Change(events[i].ind, events[i].s); ++i; } ll cur_max = sgt.GlobalMax(); if (cur_max != was_max) { change_sum[j - 1].emplace_back(events[s].x, cur_max); was_max = cur_max; } } } vector<int> sl(m, *max_element(a.begin(), a.end())), sr(m, 2 * C + 1); vector<int> ord_to_check(m); iota(ord_to_check.begin(), ord_to_check.end(), 0); for (int iter = 32; iter--;) { vector<int> sm(m); for (int i = 0; i < m; ++i) { sm[i] = sl[i] + (sr[i] - sl[i]) / 2; } sort(ord_to_check.begin(), ord_to_check.end(), [&](int lhs, int rhs) { return sm[lhs] < sm[rhs]; }); vector<int> ptr(W); vector<ll> actual_sum(W, -1); for (int i : ord_to_check) { const int x = sm[i]; ll upper_sum = 0; bool nice = false; for (int w = 0; w < W; ++w) { int& j = ptr[w]; while (j < change_sum[w].size() && change_sum[w][j].first <= x) { actual_sum[w] = change_sum[w][j++].second; } upper_sum += 1 + ((x - 1) >> w); if (upper_sum - actual_sum[w] <= q[i]) { nice = true; } } if (nice) { sl[i] = sm[i]; } else { sr[i] = sm[i]; } } } return sl; } int main() { ios::sync_with_stdio(0); cin.tie(0); int t = 1; cin >> t; while (t--) { int n, q; cin >> n >> q; vector<int> a(n), k(q); for (int& x : a) cin >> x; for (int& x : k) cin >> x; auto ans = solve(a, k); for (int x : ans) cout << x << ' '; cout << '\n'; } }
2057
G
Secret Message
Every Saturday, Alexander B., a teacher of parallel X, writes a secret message to Alexander G., a teacher of parallel B, in the evening. Since Alexander G. is giving a lecture at that time and the message is very important, Alexander B. has to write this message on an interactive online board. The interactive online board is a grid consisting of $n$ rows and $m$ columns, where each cell is $1 \times 1$ in size. Some cells of this board are already filled in, and it is impossible to write a message in them; such cells are marked with the symbol ".", while the remaining cells are called free and are marked with the symbol "#". Let us introduce two characteristics of the online board: - $s$ is the number of free cells. - $p$ is the perimeter of the grid figure formed by the union of free cells. Let $A$ be the set of free cells. Your goal is to find a set of cells $S \subseteq A$ that satisfies the following properties: - $|S| \le \frac{1}{5} \cdot (s+p)$. - Any cell from $A$ either lies in $S$ or shares a side with some cell from $S$. We can show that at least one set $S$ satisfying these properties exists; you are required to find \textbf{any suitable} one.
We will divide the infinite grid into $5$ sets of cells $L_1$, $L_2$, $L_3$, $L_4$, and $L_5$, where $L_i = \{ (x, y) \in \mathbb{Z}^2 \space | \space (x + 2 y) \equiv i \pmod{5} \}$. Note that for any pair $(x,y)$, the $5$ cells $(x,y)$, $(x-1, y)$, $(x+1, y)$, $(x, y-1)$, $(x, y+1)$ all belong to different sets $L_1, \ldots, L_5$. If we try to consider the sets $T_i = L_i \cap A$, it is not guaranteed that each of them will satisfy the property: "for each cell in the set $A$, either the cell itself or one of the cells sharing a side with it belongs to the set $S$." Therefore, we denote by $D_i \subseteq A$ the set of cells from $A$ that do not belong to $T_i$ and whose four neighbors also do not belong to $T_i$. Now we need to verify the correctness of two statements: $|T_1| + |T_2| + |T_3| + |T_4| + |T_5| = s$; $|D_1| + |D_2| + |D_3| + |D_4| + |D_5| = p$. The first statement is obvious since $T_1 \cup \ldots \cup T_5 = A$. The second statement is true because if some cell $(x, y)$ does not belong to $T_i$ and all its neighbors do not belong to $T_i$, then, since exactly one of the five cells of its closed neighborhood lies in $L_i$, the cell $(x,y)$ has a neighbor $(x', y')$ with $(x' + 2y') \bmod 5 = i$; but we know that $(x', y') \not \in A$ (otherwise it would lie in $T_i$), which means we can mark the shared side of $(x,y)$ and $(x', y')$, a unit segment of the boundary. If we process $T_1, \ldots, T_5$ sequentially, then after this process each segment of the boundary is marked exactly once, and thus the number of such segments coincides with $|D_1| + \ldots + |D_5|$. Now let $S_i = T_i \cup D_i$ and note that $|S_1| + \ldots + |S_5| = s + p$, which means, by the pigeonhole principle, there exists a $j$ such that $|S_j| \le \frac{1}{5} \cdot (s + p)$, and then $S_j$ is suitable as the desired set.
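The key property of the coloring, that a cell and its four side-neighbours always receive five pairwise distinct colors, is a two-line check (the helper names below are mine).

```cpp
#include <bits/stdc++.h>

// Color of cell (x, y) in the partition L_i = { (x, y) : (x + 2y) mod 5 = i },
// with the result normalized into [0, 5) for negative coordinates.
int color(int x, int y) {
    int c = (x + 2 * y) % 5;
    return c < 0 ? c + 5 : c;
}

// A cell together with its four side-neighbours must cover all five colors.
bool neighbourhood_rainbow(int x, int y) {
    std::set<int> s = {color(x, y), color(x - 1, y), color(x + 1, y),
                       color(x, y - 1), color(x, y + 1)};
    return s.size() == 5;
}
```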
[ "constructive algorithms", "dfs and similar", "math" ]
3000
#include <bits/stdc++.h> using namespace std; using pi = pair<int, int>; void solve() { int n, m; cin >> n >> m; vector<string> v(n); for (auto& w : v) cin >> w; auto col = [](int x, int y) { int c = (x+2*y)%5; if (c < 0) c += 5; return c; }; auto cell = [&](int x, int y) { if (x < 0 || y < 0 || x >= n || y >= m) return false; return v[x][y] == '#'; }; array<vector<pi>, 5> colorings; for (int i = 0; i < n; ++i) for (int j = 0; j < m; ++j) { if (!cell(i, j)) continue; colorings[col(i, j)].emplace_back(i, j); for (int di = -1; di <= 1; ++di) for (int dj = -1; dj <= 1; ++dj) { if (abs(di) + abs(dj) != 1) continue; if (!cell(i+di, j+dj)) { colorings[col(i+di, j+dj)].emplace_back(i, j); } } } auto coloring = colorings[0]; for (const auto& w : colorings) if (w.size() < coloring.size()) coloring = w; for (auto [x, y] : coloring) v[x][y] = 'S'; for (auto line : v) cout << line << '\n'; } int main() { ios::sync_with_stdio(0); cin.tie(0); int t; cin >> t; while (t--) solve(); }
2057
H
Coffee Break
There are very long classes in the T-Generation. In one day, you need to have time to analyze the training and thematic contests, give a lecture with new material, and, if possible, also hold a mini-seminar. Therefore, there is a break where students can go to drink coffee and chat with each other. There are a total of $n+2$ coffee machines located in sequentially arranged rooms along a long corridor. The coffee machines are numbered from $0$ to $n+1$, and immediately after the break starts, there are $a_i$ students gathered around the $i$-th coffee machine. The students are talking too loudly among themselves, and the teachers need to make a very important announcement. Therefore, they want to gather the maximum number of students around some single coffee machine. The teachers are too lazy to run around the corridors and gather the students, so they came up with a more sophisticated way to manipulate them: - At any moment, the teachers can choose room $i$ ($1 \le i \le n$) and turn off the lights there; - If there were $x$ students in that room, then after turning off the lights, $\lfloor \frac12 x \rfloor$ students will go to room $(i-1)$, and $\lfloor \frac12 x \rfloor$ other students will go to room $(i+1)$. - If $x$ was odd, then one student remains in the same room. - After that, the lights in room $i$ are turned back on. The teachers have not yet decided where they will gather the students, so for each $i$ from $1$ to $n$, you should determine what is the maximum number of students that can be gathered around the $i$-th coffee machine. The teachers can turn off the lights in any rooms at their discretion, in any order, possibly turning off the lights in the same room multiple times. Note that the values of $a_0$ and $a_{n+1}$ do not affect the answer to the problem, so their values will not be given to you.
Let's call the elementary action $E_j$ the operation after which $a_j$ decreases by $2$, and the values $a_{j-1}$ and $a_{j+1}$ increase by $1$. Let $A$ be some set of actions that can be performed in some order; then this same set of actions can be performed greedily, each time performing any available elementary action. Our solution will be based on two facts: If we want to maximize $a_k$, then actions $E_1, \ldots E_{k-1}, E_{k+1}, \ldots E_n$ can be performed at any time, as long as it is possible. In this case, action $E_k$ cannot be performed at all. The proof of the first fact is obvious. Suppose at some moment we can perform action $E_j$ ($j \neq k$); then if we do not apply it in the optimal algorithm, its application at that moment will not spoil anything. And if we do apply it, then let's apply this action right now instead of postponing it; this will also change nothing. The proof of the second fact is slightly less obvious, so we will postpone it for later. For now, let's figure out how to emulate the application of elementary actions on the prefix $a_1, \ldots, a_{k-1}$ and on the suffix $a_{k+1}, \ldots, a_n$. Without loss of generality, let's assume we are considering the prefix of the array $a$ of length $p$. Additionally, assume that $a_1, \ldots, a_{p-1} \le 1$, i.e., we cannot apply any of the actions $E_1, \ldots, E_{p-1}$. If $a_p \le 1$, then nothing can happen with this prefix for now. Otherwise, the only thing we can do is action $E_p$; after that, we may be able to apply action $E_{p-1}$, $E_{p-2}$, and so on. Let's see what happens if $a_p = 2$: $[\ldots, 0, 1, 1, 1, 2] \to [\ldots, 0, 1, 1, 2, 0] \to \ldots \to [\ldots, 0, 2, 0, 1, 1] \to [\ldots, 1, 0, 1, 1, 1]$ Thus, the position of the last zero in the prefix increases by one, and the value of $a_{p+1}$ also increases by one. We introduce a second elementary action $I_j$ - increasing $a_j$ by one if we previously decreased it by one. 
This action does not affect the answer, as it can be perceived as "reserving" $a_j$ for the future. To understand what happens to the array $[a_1, \ldots, a_p]$ with an arbitrary value of $a_p$, we will assume that $a_p = 0$, but we will consider that we can apply action $I_p$ $x$ times, which increases $a_p$ by one. After each action $I_p$, we will emulate the greedy application of actions $E_1, \ldots, E_p$. This can be done as follows: In the stack $S$, we store the positions of zeros in the array $[a_1, \ldots, a_p]$ (the other values are equal to one). Let $l$ be the position of the last zero; then: If $l=p$, then this zero simply disappears from the stack. If $l<p$, then it is easy to emulate the next $p-l$ actions $I_p$: after each of them, the array looks like $[\ldots, 0, 1, 1, \ldots, 1, 2]$, and thus after the greedy application of actions $E_1, \ldots, E_p$, the value of $l$ increases by $1$. If the stack $S$ is empty, then after one action, $a_{p+1}$ increases by one, and the only element with index $1$ becomes zero. If we repeat actions after this point, the cases will break down identically with a period of $p+1$. Thus, we have learned to understand what happens with each prefix of the array $a_1, \ldots, a_n$ if we greedily apply elementary actions $E_j$ ($1 \le j \le p$). In total, we can emulate these actions in $O(n)$ time. Now let's return to the second point of the statement: if we want to maximize $a_k$, then we cannot perform actions $E_k$. Without loss of generality, let's assume that we first performed actions $E_j$ ($j \neq k$) greedily, and now we have $a_j \le 1$ ($j \neq k$). Suppose we performed action $E_k$ at least once; then after that, we will greedily perform elementary actions to the left and right, for which the stack scheme we considered above works. Notice that after this, the value of $a_k$ will increase by no more than $2$ (no more than $1$ after actions on the prefix and no more than $1$ after actions on the suffix). 
Thus, applying $E_k$ did not increase $a_k$, nor will a later application of $E_k$ increase it; it is useless, so there is no need to perform it. We have therefore proven that applying the elementary actions $E_1, \ldots, E_{k-1}, E_{k+1}, \ldots, E_n$ on the principle of "do it while you can" maximizes $a_k$. But in the original problem we could not apply elementary actions directly. The operation from the statement, i.e., $a_{i-1} \mathrel{+}= \lfloor \frac{a_i}{2} \rfloor$, $a_{i+1} \mathrel{+}= \lfloor \frac{a_i}{2} \rfloor$, $a_i := a_i \bmod 2$, can be viewed as applying the elementary action $E_i$ repeatedly while it is applicable. Fortunately, we have just proven that the actions can be performed in any order, so if we apply the operation from the statement for all indices $i \neq k$, we eventually reach the same result, and $a_k$ attains its maximum.
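The order-independence used above is essentially the abelian (chip-firing) property of these actions, and it can be sanity-checked by brute force on tiny arrays: enumerate every maximal order of applying the actions $E_j$ ($j \neq k$) and verify that the final value of $a_k$ is always the same. A minimal sketch; the boundary convention (a unit pushed past $j = 1$ or $j = n$ simply disappears) is an assumption for this illustration:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Apply actions E_j (j != k, 0-indexed) in every possible maximal order and
// collect the resulting values of a[k]; order-independence means |finals| == 1.
void explore(vector<int> a, int k, set<int>& finals) {
    bool any = false;
    int n = a.size();
    for (int j = 0; j < n; ++j) {
        if (j == k || a[j] < 2) continue;  // E_j is applicable only when a[j] >= 2
        any = true;
        vector<int> b = a;
        b[j] -= 2;                         // E_j: a_j -= 2, both neighbours += 1
        if (j > 0) ++b[j - 1];
        if (j + 1 < n) ++b[j + 1];
        explore(b, k, finals);
    }
    if (!any) finals.insert(a[k]);         // no action applicable: terminal state
}
```

Exponential, of course, but enough to observe the confluence on hand-sized inputs.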
[ "data structures", "greedy", "math" ]
3,500
#include <bits/stdc++.h> using namespace std; using vi = vector<int>; using ll = long long; vector<ll> a, lhs, rhs; vector<int> st; vector<ll> get_right_out(const vector<ll>& a, vector<ll>& res) { const int n = a.size(); st.clear(); res.assign(n+1, 0); for (int i = 0; i < n; ++i) { ll x = a[i] + res[i]; st.push_back(i); while (x != 0) { if (st.empty()) { const int len = i + 1; const ll cnt = x / (len + 1); res[i+1] += cnt * len; x -= cnt * (len + 1); if (x != 0) { res[i+1] += x; st.push_back(x-1); x = 0; } } else { const int j = st.back(); if (x > i - j) { res[i+1] += i - j; st.pop_back(); x -= i - j + 1; } else { res[i+1] += x; st.back() += x; x = 0; } } } } return res; } vector<ll> get_left_out(vector<ll>& a, vector<ll>& b) { reverse(a.begin(), a.end()); get_right_out(a, b); reverse(b.begin(), b.end()); reverse(a.begin(), a.end()); return b; } void solve() { int n; cin >> n; a.resize(n); for (ll& x : a) cin >> x; get_right_out(a, lhs); get_left_out(a, rhs); ll ans = 0; for (int i = 0; i < n; ++i) cout << lhs[i] + a[i] + rhs[i+1] << ' '; cout << '\n'; } int main() { ios::sync_with_stdio(0); cin.tie(0); int t = 1; cin >> t; while (t--) solve(); }
2059
A
Milya and Two Arrays
An array is called good if for any element $x$ that appears in this array, it holds that $x$ appears at least twice in this array. For example, the arrays $[1, 2, 1, 1, 2]$, $[3, 3]$, and $[1, 2, 4, 1, 2, 4]$ are good, while the arrays $[1]$, $[1, 2, 1]$, and $[2, 3, 4, 4]$ are not good. Milya has two \textbf{good} arrays $a$ and $b$ of length $n$. She can rearrange the elements in array $a$ in any way. After that, she obtains an array $c$ of length $n$, where $c_i = a_i + b_i$ ($1 \le i \le n$). Determine whether Milya can rearrange the elements in array $a$ such that there are \textbf{at least $3$} distinct elements in array $c$.
If at least one of the arrays contains three or more distinct values, the answer is YES: take three positions of that array holding strictly increasing values, select any three elements of the other array, and place them against those positions in non-decreasing order; the three corresponding sums then strictly increase. If each array contains exactly two distinct values, the answer is also YES: find elements $x < y$ in the first array and elements $a < b$ in the second, and match them to obtain the sums $x + a < x + b < y + b$. Since the arrays are good, $x$ appears at least twice, so all three sums can indeed be realized simultaneously. In the remaining case, one of the arrays consists of equal elements; since the other array contains at most $2$ distinct values, it is impossible to obtain $3$ distinct sums.
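The whole case analysis collapses into one check: the answer is YES exactly when the two arrays together have at least four distinct values. A hedged sketch (the helper name is illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// YES iff the total number of distinct values across both good arrays is >= 4,
// i.e. one array has >= 3 distinct values, or both have exactly 2.
bool canGetThreeDistinctSums(const vector<int>& a, const vector<int>& b) {
    set<int> sa(a.begin(), a.end()), sb(b.begin(), b.end());
    return sa.size() + sb.size() >= 4;
}
```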
[ "constructive algorithms", "greedy", "sortings" ]
800
#include <bits/stdc++.h> using namespace std; int a[55],b[55]; void solve(){ int n;cin>>n; set<int> sa,sb; for(int i=1;i<=n;i++)cin>>a[i],sa.insert(a[i]); for(int i=1;i<=n;i++)cin>>b[i],sb.insert(b[i]); if(sa.size()+sb.size()<4){ cout<<"NO\n"; }else{ cout<<"YES\n"; } } int main(){ ios_base::sync_with_stdio(0); cin.tie(0); int t;cin>>t; while(t--){ solve(); } return 0; }
2059
B
Cost of the Array
You are given an array $a$ of length $n$ and an \textbf{even} integer $k$ ($2 \le k \le n$). You need to split the array $a$ into exactly $k$ non-empty subarrays$^{\dagger}$ such that each element of the array $a$ belongs to exactly one subarray. Next, all subarrays with even indices (second, fourth, $\ldots$, $k$-th) are concatenated into a single array $b$. After that, $0$ is \textbf{added} to the end of the array $b$. The cost of the array $b$ is defined as the minimum index $i$ such that $b_i \neq i$. For example, the cost of the array $b = [1, 2, 4, 5, 0]$ is $3$, since $b_1 = 1$, $b_2 = 2$, and $b_3 \neq 3$. Determine \textbf{the minimum} cost of the array $b$ that can be obtained with an optimal partitioning of the array $a$ into subarrays. $^{\dagger}$An array $x$ is a subarray of an array $y$ if $x$ can be obtained from $y$ by the deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end.
If $k = n$, then the partition is unique and the answer can be computed directly. In all other cases, the answer does not exceed $2$. We want the second subarray of the partition to start with something other than $1$, while all other subarrays can be ignored, since in that case the answer is $1$. Iterate over the starting position of the second subarray, which can range from $2$ to $n - k + 2$. If some candidate position holds a value different from $1$, start the second subarray there, so that $b_1 \neq 1$ and the answer is $1$. If every candidate position holds a one, the answer is $2$: make the second subarray consist of two ones, so that $b_1 = 1$ but $b_2 = 1 \neq 2$.
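A compact sketch of this case split (the function name and 0-indexing are illustrative; `k` is the required number of subarrays):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimum cost of b for one test case; a is 0-indexed, k is even, 2 <= k <= n.
int minCost(const vector<int>& a, int k) {
    int n = a.size();
    if (k == n) {                       // partition is forced: b = a[1], a[3], ...
        for (int i = 1; i < n; i += 2)
            if (a[i] != (i + 1) / 2) return (i + 1) / 2;
        return k / 2 + 1;               // b matches 1..k/2, then the appended 0 fails
    }
    // the second subarray may start at any 0-indexed position 1 .. n-k+1
    for (int i = 1; i <= n - k + 1; ++i)
        if (a[i] != 1) return 1;        // start it here, so b_1 != 1
    return 2;                           // all candidates are 1: take [1, 1] as block 2
}
```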
[ "brute force", "constructive algorithms", "greedy", "math" ]
1,300
#include <bits/stdc++.h> using namespace std; void solve() { int n, k; cin >> n >> k; k /= 2; vector<int> a(n); for (auto &it: a) cin >> it; if (2 * k == n) { for (int i = 1; i < n; i += 2) { if (a[i] != (i + 1) / 2) { cout << (i + 1) / 2 << '\n'; return; } } cout << k + 1 << '\n'; } else { for (int i = 1; i <= n - 2 * k + 1; i++) { if (a[i] != 1) { cout << "1\n"; return; } } cout << "2\n"; } } int main() { int T = 1; cin >> T; while (T--) { solve(); } }
2059
C
Customer Service
Nikyr has started working as a queue manager at the company "Black Contour." He needs to choose the order of servicing customers. There are a total of $n$ queues, each initially containing $0$ people. In each of the next $n$ moments of time, there are \textbf{two sequential} events: - New customers arrive in all queues. More formally, at the $j$-th moment of time, the number of people in the $i$-th queue increases by a \textbf{positive} integer $a_{i,j}$. - Nikyr chooses \textbf{exactly one} of the $n$ queues to be served at that moment in time. The number of customers in this queue becomes $0$. Let the number of people in the $i$-th queue after all events be $x_i$. Nikyr wants MEX$^{\dagger}$ of the collection $x_1, x_2, \ldots, x_n$ to be as large as possible. Help him determine the maximum value he can achieve with an optimal order of servicing the queues. $^{\dagger}$The minimum excluded (MEX) of a collection of integers $c_1, c_2, \ldots, c_k$ is defined as the smallest non-negative integer $y$ which does not occur in the collection $c$. For example: - $\operatorname{MEX}([2,2,1])= 0$, since $0$ does not belong to the array. - $\operatorname{MEX}([3,1,0,1]) = 2$, since $0$ and $1$ belong to the array, but $2$ does not. - $\operatorname{MEX}([0,3,1,2]) = 4$, since $0$, $1$, $2$, and $3$ belong to the array, but $4$ does not.
At the last moment of time, we clear some queue, which therefore ends with $0$ people, so a $0$ is guaranteed to appear among the $x_i$. To maximize the value of $\operatorname{MEX}$, the value $1$ must also appear among the $x_i$. However, a positive number of people is added to every queue at the last moment; thus, if some $x_i = 1$, the $i$-th queue must have been served at the penultimate moment and received exactly one person at the last moment. Continuing this reasoning: if we already have queues with final values from $0$ to $k$, then the value $k + 1$ can only be obtained from some other queue that received exactly one person in each of the last $k + 1$ moments and was served at the $(k + 2)$-th moment from the end. So, for each queue, compute the length of the maximal suffix of ones in its array of additions. If a queue has a suffix of $m$ ones, then after all events that queue can be left with any final value from $0$ to $m$, which will participate in the $\operatorname{MEX}$. To finish, we greedily assemble the answer, each time taking the smallest suitable suffix.
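Once each queue's suffix of ones is known, the greedy assembly is tiny. A sketch, assuming `suff[i]` is the length of the maximal all-ones suffix of queue $i$'s additions:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Greedy: sort suffix lengths ascending; a queue with suffix s can realize any
// final value in [0, s], so give each needed value to the smallest able queue.
int maxMex(vector<int> suff) {
    int n = suff.size();
    sort(suff.begin(), suff.end());
    int mex = 1;                        // the value 0 is always realizable
    for (int s : suff)
        if (s >= mex) ++mex;            // this queue realizes the value mex
    return min(mex, n);                 // only n queues, so MEX cannot exceed n
}
```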
[ "brute force", "constructive algorithms", "graph matchings", "greedy", "math", "sortings" ]
1,600
#include <bits/stdc++.h> using namespace std; const int N=305; int a[N][N],suff[N]; void solve(){ int n;cin>>n; for(int i=1;i<=n;i++){ suff[i]=0; for(int j=1;j<=n;j++){ cin>>a[i][j]; } } for(int i=1;i<=n;i++){ for(int j=n;j>=1;j--){ if(a[i][j]!=1)break; suff[i]++; } } multiset<int> s; for(int i=1;i<=n;i++){ s.insert(suff[i]); } int ans=1; while(!s.empty()){ int cur=*s.begin(); if(cur>=ans){ ans++; } s.extract(cur); } cout<<min(ans,n)<<'\n'; } int main(){ ios_base::sync_with_stdio(0); cin.tie(0); int t;cin>>t; while(t--){ solve(); } return 0; }
2059
D
Graph and Graph
You are given two connected undirected graphs with the same number of vertices. In both graphs, there is a token located at some vertex. In the first graph, the token is initially at vertex $s_1$, and in the second graph, the token is initially at vertex $s_2$. The following operation is repeated an \textbf{infinite} number of times: - Let the token currently be at vertex $v_1$ in the first graph and at vertex $v_2$ in the second graph. - A vertex $u_1$, adjacent to $v_1$, is chosen in the first graph. - A vertex $u_2$, adjacent to $v_2$, is chosen in the second graph. - The tokens are moved to the chosen vertices: in the first graph, the token moves from $v_1$ to $u_1$, and in the second graph, from $v_2$ to $u_2$. - The cost of such an operation is equal to $|u_1 - u_2|$. Determine the minimum possible total cost of all operations or report that this value will be infinitely large.
First, we need to understand when the answer is finite. For that, at some moment the token in the first graph must be at some vertex $v$ while the token in the second graph is at the vertex with the same number $v$, and both graphs must contain an identical edge from $v$ to some $u$. In such a situation, the tokens (each in its respective graph) can move back and forth along this edge as many times as needed at a cost of $0$. Let's first mark all vertices $v$ with this property (call them good), and then compute the minimum cost of reaching any of them. We create a new graph where each vertex corresponds to a state and the edges represent transitions between states. A state is a pair $(v_1, v_2)$, indicating that the first token is currently at vertex $v_1$ and the second token at vertex $v_2$; there are $n^2$ such states. A state has $\deg(v_1) \cdot \deg(v_2)$ outgoing edges (where $\deg(v)$ is the degree of $v$ in the graph it belongs to). Summing over all states, the number of edges in the state graph is $O(m_1 \cdot m_2)$. Run Dijkstra's algorithm on this graph and take the minimum distance to a state $(v, v)$ with $v$ good.
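Marking the good vertices is a simple set intersection over the edge lists. A sketch (edges assumed normalized so that $v < u$, vertices 0-indexed):

```cpp
#include <bits/stdc++.h>
using namespace std;

// good[v] == true iff some edge (v, u) is present in both graphs, letting the
// two tokens oscillate along it forever at zero cost once both sit at v.
vector<bool> goodVertices(int n, const set<pair<int,int>>& e1,
                                 const set<pair<int,int>>& e2) {
    vector<bool> good(n, false);
    for (const auto& [v, u] : e1)
        if (e2.count({v, u}))
            good[v] = good[u] = true;
    return good;
}
```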
[ "data structures", "graphs", "greedy", "shortest paths" ]
1,900
#include "bits/stdc++.h" using namespace std; #define int long long #define eb emplace_back const int INF = 1e18; void solve() { int n, s1, s2; cin >> n >> s1 >> s2; s1--, s2--; vector<vector<int>> g1(n), g2(n); vector<bool> good(n); set<pair<int, int>> edges; int m1; cin >> m1; for (int i = 0; i < m1; i++) { int v, u; cin >> v >> u; v--, u--; if (v > u) swap(v, u); edges.insert({v, u}); g1[v].eb(u); g1[u].eb(v); } int m2; cin >> m2; for (int i = 0; i < m2; i++) { int v, u; cin >> v >> u; v--, u--; if (v > u) swap(v, u); if (edges.find({v, u}) != edges.end()) good[v] = true, good[u] = true; g2[v].eb(u); g2[u].eb(v); } vector<vector<int>> d(n, vector<int> (n, INF)); d[s1][s2] = 0; set<pair<int, pair<int, int>>> st; st.insert({0, {s1, s2}}); while (!st.empty()) { auto [v, u] = st.begin()->second; st.erase(st.begin()); for (auto to1 : g1[v]) { for (auto to2 : g2[u]) { int w = abs(to1 - to2); if (d[to1][to2] > d[v][u] + w) { st.erase({d[to1][to2], {to1, to2}}); d[to1][to2] = d[v][u] + w; st.insert({d[to1][to2], {to1, to2}}); } } } } int ans = INF; for (int i = 0; i < n; i++) { if (!good[i]) continue; ans = min(ans, d[i][i]); } if (ans == INF) ans = -1; cout << ans << '\n'; } signed main() { //cout << fixed << setprecision(5); ios::sync_with_stdio(false); cin.tie(nullptr); int T = 1; cin >> T; //cin >> G; while (T--) solve(); return 0; }
2059
E1
Stop Gaming (Easy Version)
\textbf{This is the easy version of the problem. The difference between the versions is that in this version you only need to find the minimum number of operations. You can hack only if you solved all versions of this problem.} You are given $n$ arrays, each of which has a length of $m$. Let the $j$-th element of the $i$-th array be denoted as $a_{i, j}$. It is guaranteed that all $a_{i, j}$ are \textbf{pairwise distinct}. In one operation, you can do the following: - Choose some integer $i$ ($1 \le i \le n$) and an integer $x$ ($1 \le x \le 2 \cdot n \cdot m$). - For all integers $k$ from $i$ to $n$ in increasing order, do the following: - Add the element $x$ to the beginning of the $k$-th array. - Assign $x$ the value of the last element in the $k$-th array. - Remove the last element from the $k$-th array. In other words, you can insert an element at the beginning of any array, after which all elements in this and all following arrays are shifted by one to the right. The last element of the last array is removed. You are also given a description of the arrays that need to be obtained after all operations. That is, after performing the operations, the $j$-th element of the $i$-th array should be equal to $b_{i, j}$. It is guaranteed that all $b_{i, j}$ are \textbf{pairwise distinct}. Determine the minimum number of operations that need to be performed to obtain the desired arrays.
Let's connect the hints together: we iterate over the prefix of elements that were originally in the array and have not been removed from the end. We still need to understand when this prefix can be extended to the final array. Look at the positions of elements from the current prefix that are followed, in the final array, by elements not belonging to the current prefix. Claim: a sequence of actions leading to the final array exists if and only if, for each such element, if it was at position $i$ of its array (in the original array of arrays), there are at least $m - i$ elements not belonging to the current prefix before it in the final array. Proof: Necessity. Suppose some element does not satisfy the above condition. Then under any sequence of actions that does not disrupt the order of the elements before it, it will never be at the end of its array, meaning it is impossible to insert an element that does not belong to the prefix between it and the next element. Therefore, in the final array it will be followed by an element from the current prefix, which is a contradiction. Sufficiency. We use induction on the number of elements from the current prefix that are followed, in the final array, by elements not belonging to the current prefix. For $n = 1$, the statement is obviously true. For $n + 1$, by induction there exists an algorithm for $n$ elements, in which we added at least $m - i$ elements if the current element is at position $i$. We follow this sequence of actions for the first $m - i$ additions. At this point, the $(n + 1)$-th element is at the end of its array. We then add, in the correct order, all the elements that should follow it and do not belong to the current prefix, and afterwards finish the existing sequence of actions for all previous elements. 
To check this condition, it suffices to iterate over the prefixes while maintaining two things: whether the construction is possible for the prefix of elements up to the last element that is followed, in the final array, by elements not belonging to the current prefix; and how many elements not belonging to the current prefix appear in the final array before the current element.
[ "brute force", "constructive algorithms", "greedy", "hashing", "strings" ]
2,500
#include "bits/stdc++.h" using namespace std; void solve() { int n, m; cin >> n >> m; vector<int> a(n * m + 1), b(n * m + 1); vector<deque<int>> mat(n + 1, deque<int> (m)); for (int i = 1; i <= n; i++) { for (int j = 0; j < m; j++) { int ind = j + 1 + (i - 1) * m; mat[i][j] = ind; cin >> a[ind]; } } for (int i = 1; i <= n * m; i++) cin >> b[i]; map<int, int> mp; for (int i = 1; i <= n * m; i++) mp[b[i]] = i; vector<int> s(n * m + 1), pos(n * m + 1, -1); for (int i = 1; i <= n * m; i++) { if (mp.find(a[i]) != mp.end()) pos[i] = mp[a[i]]; else break; } pos[0] = 0; int skipped = 0, pref = 0; bool prev = true; for (int i = 1; i <= n * m; i++) { if (pos[i - 1] > pos[i]) break; int d = pos[i] - pos[i - 1] - 1; if (prev) skipped += d; else if (d > 0) break; if (skipped >= m - 1) prev = true; else if ((i - 1) % m > (i + skipped - 1) % m || (i + skipped) % m == 0) prev = true; else prev = false; if (prev) pref = i; } cout << n * m - pref << '\n'; } signed main() { ios::sync_with_stdio(false); cin.tie(nullptr); int T = 1; cin >> T; while (T--) solve(); return 0; }
2059
E2
Stop Gaming (Hard Version)
\textbf{This is the hard version of the problem. The difference between the versions is that in this version you need to output all the operations that need to be performed. You can hack only if you solved all versions of this problem.} You are given $n$ arrays, each of which has a length of $m$. Let the $j$-th element of the $i$-th array be denoted as $a_{i, j}$. It is guaranteed that all $a_{i, j}$ are \textbf{pairwise distinct}. In one operation, you can do the following: - Choose some integer $i$ ($1 \le i \le n$) and an integer $x$ ($1 \le x \le 2 \cdot n \cdot m$). - For all integers $k$ from $i$ to $n$ in increasing order, do the following: - Add the element $x$ to the beginning of the $k$-th array. - Assign $x$ the value of the last element in the $k$-th array. - Remove the last element from the $k$-th array. In other words, you can insert an element at the beginning of any array, after which all elements in this and all following arrays are shifted by one to the right. The last element of the last array is removed. You are also given a description of the arrays that need to be obtained after all operations. That is, after performing the operations, the $j$-th element of the $i$-th array should be equal to $b_{i, j}$. It is guaranteed that all $b_{i, j}$ are \textbf{pairwise distinct}. Determine the minimum number of operations that need to be performed to obtain the desired arrays, and also output the sequence of all operations itself.
Let's find the answer as in problem E1; it remains to restore the corresponding sequence of actions. Take a closer look at the inductive proof from E1: it describes an algorithm where at each step we find the rightmost element that sits at the end of some array and after which something must be inserted. Such an algorithm is guaranteed to reach the answer; the only difficulty lies in finding that rightmost element. We maintain a data structure (a segment tree) over the indices of the remaining prefix elements. In the $i$-th cell, we store the number of elements that still need to be added before this element becomes the last in its array, provided something must be inserted after it; otherwise, we store a very large number so that such an element is never selected. When adding the next element after the $i$-th, we subtract $1$ on the suffix after it. To find the rightmost element at the end of some array after which something must be inserted, we look for the rightmost zero in the segment tree; and once all actions for a certain element have been performed, we write a very large number into its cell so that it never appears again.
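The only nonstandard query here is "position of the rightmost zero", answered by descending into the right child whenever its subtree minimum is zero. A stripped-down sketch of such a tree with range add; this is a simplified stand-in for the solution's structure, not the solution itself:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Segment tree over n values (all start at 0) with lazy range-add and a query
// for the rightmost index holding value 0 (values are assumed to stay >= 0).
struct MinTree {
    int n;
    vector<long long> mn, lz;
    MinTree(int n) : n(n), mn(4 * n, 0), lz(4 * n, 0) {}
    void push(int x) {
        for (int c : {2 * x + 1, 2 * x + 2}) { mn[c] += lz[x]; lz[c] += lz[x]; }
        lz[x] = 0;
    }
    void add(int l, int r, long long v, int x, int lx, int rx) { // add v on [l, r)
        if (r <= lx || rx <= l) return;
        if (l <= lx && rx <= r) { mn[x] += v; lz[x] += v; return; }
        push(x);
        int m = (lx + rx) / 2;
        add(l, r, v, 2 * x + 1, lx, m);
        add(l, r, v, 2 * x + 2, m, rx);
        mn[x] = min(mn[2 * x + 1], mn[2 * x + 2]);
    }
    void add(int l, int r, long long v) { add(l, r, v, 0, 0, n); }
    int rightmostZero(int x, int lx, int rx) {      // -1 if no zero in subtree
        if (mn[x] != 0) return -1;
        if (rx - lx == 1) return lx;
        push(x);
        int m = (lx + rx) / 2;
        int res = rightmostZero(2 * x + 2, m, rx);  // prefer the right half
        return res != -1 ? res : rightmostZero(2 * x + 1, lx, m);
    }
    int rightmostZero() { return rightmostZero(0, 0, n); }
};
```

Both operations run in $O(\log n)$, matching the complexity needed for the reconstruction.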
[ "brute force", "constructive algorithms", "data structures", "hashing", "strings" ]
2,900
#include <bits/stdc++.h> using namespace std; #define eb emplace_back #define int long long #define all(x) x.begin(), x.end() #define fi first #define se second const int INF = 1e9 + 1000; struct segtree { vector<pair<int, int>> tree; vector<int> ass; int size = 1; void init(vector<int> &a) { while (a.size() >= size) { size <<= 1; } tree.assign(2 * size, {}); ass.assign(2 * size, 0); build(0, 0, size, a); } void build(int x, int lx, int rx, vector<int> &a) { if (rx - lx == 1) { tree[x].se = lx; if (lx < a.size()) { tree[x].fi = a[lx]; } else { tree[x].fi = INF; } return; } int m = (lx + rx) / 2; build(2 * x + 1, lx, m, a); build(2 * x + 2, m, rx, a); tree[x] = min(tree[2 * x + 1], tree[2 * x + 2]); } void push(int x, int lx, int rx) { tree[x].fi += ass[x]; if (rx - lx == 1) { ass[x] = 0; return; } ass[2 * x + 1] += ass[x]; ass[2 * x + 2] += ass[x]; ass[x] = 0; } void update(int l, int r, int val, int x, int lx, int rx) { push(x, lx, rx); if (l <= lx && rx <= r) { ass[x] += val; push(x, lx, rx); return; } if (rx <= l || r <= lx) { return; } int m = (lx + rx) / 2; update(l, r, val, 2 * x + 1, lx, m); update(l, r, val, 2 * x + 2, m, rx); tree[x] = min(tree[2 * x + 1], tree[2 * x + 2]); } void update(int l, int r, int val) { update(l, r + 1, val, 0, 0, size); } int req(int x, int lx, int rx) { push(x, lx, rx); if (rx - lx == 1) { return tree[x].se; } int m = (lx + rx) / 2; push(2 * x + 1, lx, m); push(2 * x + 2, m, rx); if (tree[2 * x + 2].fi == 0) { return req(2 * x + 2, m, rx); } else { return req(2 * x + 1, lx, m); } } int req() { return req(0, 0, size); } }; void solve() { int n, m; cin >> n >> m; vector<int> a(n * m + 1), b(n * m + 1); for (int i = 1; i <= n; i++) { for (int j = 0; j < m; j++) { int ind = j + 1 + (i - 1) * m; cin >> a[ind]; } } for (int i = 1; i <= n * m; i++) cin >> b[i]; map<int, int> mp; for (int i = 1; i <= n * m; i++) mp[b[i]] = i; vector<int> s(n * m + 1), pos(n * m + 1, -1); for (int i = 1; i <= n * m; i++) { if (mp.find(a[i]) != mp.end()) 
pos[i] = mp[a[i]]; else break; } pos[0] = 0; int skipped = 0, pref = 0; bool prev = true; for (int i = 1; i <= n * m; i++) { if (pos[i - 1] > pos[i]) break; int d = pos[i] - pos[i - 1] - 1; if (prev) skipped += d; else if (d > 0) break; if (skipped >= m - 1) prev = true; else if ((i - 1) % m > (i + skipped - 1) % m || (i + skipped) % m == 0) prev = true; else prev = false; if (prev) pref = i; } for (int i = 1; i <= pref; i++) { s[i - 1] = pos[i] - pos[i - 1] - 1; } s[pref] = n * m - pos[pref]; vector<pair<int, int>> ans; int res = 0; for (int i = 0; i <= n * m; i++) { res += s[i]; } vector<int> ost(pref + 1); for (int i = 1; i <= pref; i++) { ost[i] = (m - i % m) % m; } for (int i = 0; i <= pref; i++) { if (s[i] == 0) { ost[i] = INF; } } vector<int> gol(pref + 1); gol[0] = 1; for (int i = 1; i <= pref; i++) { gol[i] = (i + m - 1) / m + 1; } segtree tree; tree.init(ost); for (int step = 0; step < res; step++) { int chel = tree.req(); ans.eb(gol[chel], b[pos[chel] + s[chel]]); tree.update(chel + 1, pref, -1); s[chel]--; if (s[chel] == 0) { tree.update(chel, chel, INF); } } cout << ans.size() << '\n'; for (auto [i, col] : ans) { cout << i << " " << col << '\n'; } } signed main() { //cout << fixed << setprecision(5); ios::sync_with_stdio(false); cin.tie(nullptr); int T = 1; cin >> T; //cin >> G; while (T--) solve(); return 0; }
2060
A
Fibonacciness
There is an array of $5$ integers. Initially, you only know $a_1,a_2,a_4,a_5$. You may set $a_3$ to any positive integer, negative integer, or zero. The Fibonacciness of the array is the number of integers $i$ ($1 \leq i \leq 3$) such that $a_{i+2}=a_i+a_{i+1}$. Find the maximum Fibonacciness over all integer values of $a_3$.
We can notice that the possible values of $a_3$ range from $-99$ to $200$. Thus, we can iterate over all values of $a_3$ in this range and, for each value, compute the Fibonacciness of the sequence; the maximum over all of them is the answer. Time complexity: $O(a_{\max})$. Alternatively, $a_3$ can be chosen to satisfy one of the following: $a_3 = a_1 + a_2$, or $a_3 = a_4 - a_2$, or $a_3 = a_5 - a_4$. These are at most $3$ candidate values. Notice that any value of $a_3$ which increases the Fibonacciness must be one of these candidates, since each of the three conditions fixes $a_3$ uniquely. So, to maximize the Fibonacciness, we should choose the most frequent of the candidates, and then compute the Fibonacciness of the sequence for that choice of $a_3$. Time complexity: $O(1)$.
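The $O(1)$ variant fits in a few lines: since each condition pins down $a_3$ exactly, the answer is the largest multiplicity among the three candidates, which for three values equals $4$ minus the number of distinct candidates (a sketch; the function name is illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Max Fibonacciness: candidates for a3 are a1+a2, a4-a2, a5-a4. The best a3 is
// the most frequent candidate; 3 candidates give max multiplicity 4 - #distinct.
int fibonacciness(long long a1, long long a2, long long a4, long long a5) {
    set<long long> cand{a1 + a2, a4 - a2, a5 - a4};
    return 4 - (int)cand.size();
}
```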
[ "brute force" ]
800
#include <bits/stdc++.h> using namespace std; using ll = long long; using vll = vector <ll>; using ii = pair <ll, ll>; using vii = vector <ii>; void tc () { vll ve(4); for (ll &i : ve) cin >> i; set <ll> st; st.insert(ve[3]-ve[2]); st.insert(ve[2]-ve[1]); st.insert(ve[0]+ve[1]); cout << 4-st.size() << '\n'; } int main () { cin.tie(nullptr) -> sync_with_stdio(false); ll T; cin >> T; while (T--) { tc(); } return 0; }
2060
B
Farmer John's Card Game
Farmer John's $n$ cows are playing a card game! Farmer John has a deck of $n \cdot m$ cards numbered from $0$ to $n \cdot m-1$. He distributes $m$ cards to each of his $n$ cows. Farmer John wants the game to be fair, so each cow should only be able to play $1$ card per round. He decides to determine a turn order, determined by a permutation$^{\text{∗}}$ $p$ of length $n$, such that the $p_i$'th cow will be the $i$'th cow to place a card on top of the center pile in a round. In other words, the following events happen in order in each round: - The $p_1$'th cow places any card from their deck on top of the center pile. - The $p_2$'th cow places any card from their deck on top of the center pile. - ... - The $p_n$'th cow places any card from their deck on top of the center pile. There is a catch. Initially, the center pile contains a card numbered $-1$. In order to place a card, the number of the card must be greater than the number of the card on top of the center pile. Then, the newly placed card becomes the top card of the center pile. If a cow cannot place any card in their deck, the game is considered to be lost. Farmer John wonders: does there exist $p$ such that it is possible for all of his cows to empty their deck after playing all $m$ rounds of the game? If so, output any valid $p$. Otherwise, output $-1$. \begin{footnotesize} $^{\text{∗}}$A permutation of length $n$ contains each integer from $1$ to $n$ exactly once \end{footnotesize}
Assume first that the cows are already ordered correctly. A solution exists if the first cow can place $0$, the second cow can place $1$, and so on, continuing with the first cow placing $n$, the second cow placing $n+1$, and so forth. Observing the pattern, the first cow must be able to place $\{0, n, 2n, \dots\}$, the second $\{1, n+1, 2n+1, \dots\}$, and in general, the $i$-th cow (where $i \in [0, n-1]$) must be able to place $\{i, n+i, 2n+i, \dots\}$. For each cow, sorting their cards reveals whether they satisfy this condition: the difference between adjacent cards in the sorted sequence must be exactly $n$. If any cow's cards do not meet this criterion, the solution does not exist, and we output $-1$. If the cows are not ordered correctly, we need to rearrange them. To construct the solution, find the index of the cow holding card $i$ for each $i \in [0, n-1]$. Since the cards of each cow are sorted, denote $\texttt{min\_card}$ as the smallest card a cow holds. Using an array $p$, iterate over each cow $c$ from $0$ to $n-1$ and set $p_{\texttt{min\_card}} = c$. Finally, iterate $i$ from $0$ to $n-1$ and output $p_i$. Time complexity: $O(nm \log m)$. Note that sorting is not necessary, and a solution with complexity up to $O((nm)^2)$ will still pass.
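A sketch of the validity check plus permutation construction (0-indexed cows; returns an empty vector when no valid order exists; names are illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Cow i in the turn order must hold exactly {i, n+i, 2n+i, ...}: its sorted
// cards must start below n and increase in steps of exactly n. A cow's slot is
// its smallest card. Returns the 1-based permutation p, or {} if impossible.
vector<int> turnOrder(int n, vector<vector<long long>> cards) {
    vector<int> p(n, -1);
    for (int c = 0; c < n; ++c) {
        auto& v = cards[c];
        sort(v.begin(), v.end());
        if (v[0] >= n) return {};
        for (size_t j = 1; j < v.size(); ++j)
            if (v[j] - v[j - 1] != n) return {};
        p[v[0]] = c + 1;
    }
    return p;
}
```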
[ "greedy", "sortings" ]
1,000
#include <bits/stdc++.h> using namespace std; using ll = long long; using vll = vector <ll>; using ii = pair <ll, ll>; using vii = vector <ii>; void tc () { ll n, m; cin >> n >> m; vector <vll> ve(n, vll(m)); vll p(n, -16); bool val = true; ll c = 0; for (vll &we : ve) { for (ll &i : we) cin >> i; ll minN = *min_element(we.begin(), we.end()); if (minN < n) p[minN] = c++; val &= minN < n; sort(we.begin(), we.end()); ll last = we[0]-n; for (ll i : we) { val &= last+n == i; last = i; } } if (!val) { cout << "-1\n"; return; } for (ll i : p) cout << i+1 << ' '; cout << '\n'; } int main () { cin.tie(nullptr) -> sync_with_stdio(false); ll T; cin >> T; while (T--) { tc(); } return 0; }
2060
C
Game of Mathletes
Alice and Bob are playing a game. There are $n$ (\textbf{$n$ is even}) integers written on a blackboard, represented by $x_1, x_2, \ldots, x_n$. There is also a given integer $k$ and an integer score that is initially $0$. The game lasts for $\frac{n}{2}$ turns, in which the following events happen sequentially: - Alice selects an integer from the blackboard and erases it. Let's call Alice's chosen integer $a$. - Bob selects an integer from the blackboard and erases it. Let's call Bob's chosen integer $b$. - If $a+b=k$, add $1$ to score. Alice is playing to minimize the score while Bob is playing to maximize the score. Assuming both players use optimal strategies, what is the score after the game ends?
Note that Bob has all the power in this game: the order in which Alice picks numbers is irrelevant, since Bob can always respond with the optimal number to score a point. Therefore, we can ignore Alice and play the game from Bob's perspective. From this point on, a "paired" number is any number $a$ on the blackboard such that there exists a $b$ on the blackboard with $a+b=k$. Bob's strategy is as follows: if Alice picks a paired number, Bob picks the other number in its pair; if Alice picks an unpaired number, Bob picks any other unpaired number, and it does not matter which. This always works because the number of unpaired numbers is always even: $n$ is even and the number of paired numbers is always even. Therefore, for every unpaired number Alice picks, Bob will always have an unpaired number to respond with. Hence, the final score is just the number of pairs in the input. To count them, use a map of counts $c$ such that $c_x$ is the number of occurrences of $x$ on the blackboard. Then, for each number $x$ from $1$ to $\lfloor \frac{k}{2} \rfloor$, take the minimum of $c_x$ and $c_{k-x}$, and add that to the total. Remember the edge case where $k$ is even and $x = \frac{k}{2}$: the number of pairs here is just $\lfloor \frac{c_x}{2} \rfloor$.
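Counting the pairs with a map, iterating over present values only (a sketch; `countPairs` is an illustrative name):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Final score = number of disjoint pairs (a, b) with a + b == k on the board.
long long countPairs(const vector<long long>& x, long long k) {
    map<long long, long long> c;
    for (long long v : x) ++c[v];
    long long score = 0;
    for (const auto& [v, cnt] : c) {
        if (2 * v < k && c.count(k - v)) score += min(cnt, c.at(k - v));
        if (2 * v == k) score += cnt / 2;   // edge case: k/2 pairs with itself
    }
    return score;
}
```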
[ "games", "greedy", "sortings", "two pointers" ]
900
#include <bits/stdc++.h> using namespace std; using ll = long long; using vll = vector <ll>; using ii = pair <ll, ll>; using vii = vector <ii>; void tc () { ll n, k; cin >> n >> k; vll ve(n); for (ll &i : ve) cin >> i; vll th(k+1, 0); for (ll i : ve) { if (i >= k) continue; th[i]++; } ll ans = 0; for (ll i = 1; i < k; i++) { if (i == k-i) { ans += th[i]/2; continue; } ll minN = min(th[i], th[k-i]); th[i] -= minN; th[k-i] -= minN; ans += minN; } cout << ans << '\n'; } int main () { cin.tie(nullptr) -> sync_with_stdio(false); ll T; cin >> T; while (T--) { tc(); } return 0; }
2060
D
Subtract Min Sort
You are given a sequence $a$ consisting of $n$ positive integers. You can perform the following operation any number of times. - Select an index $i$ ($1 \le i < n$), and subtract $\min(a_i,a_{i+1})$ from both $a_i$ and $a_{i+1}$. Determine if it is possible to make the sequence \textbf{non-decreasing} by using the operation any number of times.
For clarity, let's denote $op_i$ as an operation performed on $a_i$ and $a_{i+1}$. Claim: if it is possible, then the sequence of operations $op_1,op_2,\ldots,op_{n-1}$ will sort the array. Proof: Let $b$ be any sequence of operations that will sort the array. Let's transform the sequence $b$ such that it becomes $[op_1,op_2,\ldots,op_{n-1}]$. First, note that an $op_i$ does nothing if $a_i=0$ or $a_{i+1}=0$. Additionally, after $op_i$, at least one of $\{a_i,a_{i+1}\}$ will become zero. Thus, we can remove all duplicates in $b$ without altering the result. Now, let $x$ be the largest number such that $op_x$ is in $b$. Since at least one of $a_x,a_{x+1}$ will be zero, operations must be performed such that each of $a_1,a_2,\ldots,a_{x-1}$ is zero. Let $S$ be the set of indices $i<x$ such that $op_i$ is not in $b$. Note that we can simply append each operation in $S$ at the end of $b$ without altering the result, since all elements before $a_x$ are already zero. Our sequence of operations now contains a permutation of $op_1,op_2,\ldots,op_x$, and we have ensured that all of $a_1,a_2,\ldots,a_x=0$. Since this prefix is now sorted, we can in fact continue performing $op_{x+1},op_{x+2},\ldots,op_{n-1}$ in this order. Notice that this sequence of operations will keep our array sorted, as $op_y$ will make $a_y=0$ (since $a_y<a_{y+1}$). Let's show that we can rearrange our operations such that $op_1$ comes first. There are two cases: either $op_1$ comes before $op_2$, or $op_2$ comes before $op_1$. In the first case, notice that no operation done before $op_1$ will impact either the value of $a_1$ or $a_2$, so rearranging such that $op_1$ comes first does not impact the final result. In the second case, notice that after $op_2$ is done, we must have $a_1=a_2$, as otherwise $op_1$ will not simultaneously make $a_1=a_2=0$. This implies that right before $op_2$ is performed, we must have $a_1+a_3=a_2$. 
Then, rearranging the operations such that $op_1$ comes first will not impact the final result. Using the same line of reasoning, we can make $op_2$ the second operation, then $op_3$, and so on. To solve the problem, we can simply simulate the operations in this order, and then check if the array is sorted at the end. Time Complexity : $O(n)$
[ "greedy" ]
1100
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
using vll = vector<ll>;

void tc() {
    ll n;
    cin >> n;
    vll ve(n);
    for (ll &i : ve) cin >> i;
    ll j = 0;
    for (ll i = 1; i < n; i++) {
        if (ve[i - 1] > ve[i]) j = i;
    }
    if (j == 0) { // already sorted
        cout << "YES\n";
        return;
    }
    auto fop = [&](ll i) {
        ll minN = min(ve[i], ve[i + 1]);
        ve[i] -= minN;
        ve[i + 1] -= minN;
    };
    for (ll i = 0; i < j; i++) fop(i);
    cout << (is_sorted(ve.begin(), ve.end()) ? "YES" : "NO") << '\n';
}

int main() {
    cin.tie(nullptr)->sync_with_stdio(false);
    ll T;
    cin >> T;
    while (T--) tc();
    return 0;
}
2060
E
Graph Composition
You are given two simple undirected graphs $F$ and $G$ with $n$ vertices. $F$ has $m_1$ edges while $G$ has $m_2$ edges. You may perform one of the following two types of operations any number of times: - Select two integers $u$ and $v$ ($1 \leq u,v \leq n$) such that there is an edge between $u$ and $v$ in $F$. Then, remove that edge from $F$. - Select two integers $u$ and $v$ ($1 \leq u,v \leq n$) such that there is no edge between $u$ and $v$ in $F$. Then, add an edge between $u$ and $v$ in $F$. Determine the minimum number of operations required such that for all integers $u$ and $v$ ($1 \leq u,v \leq n$), there is a path from $u$ to $v$ in $F$ \textbf{if and only if} there is a path from $u$ to $v$ in $G$.
Let's consider each type of operation separately. First, consider the operation that removes an edge from $F$. Divide $G$ into its connected components and assign each vertex a component index. Then, for each edge in $F$, if it connects vertices with different component indices, remove it and increment the operation count. This guarantees there is no path between $u$ and $v$ in $F$ if there is none in $G$. Next, to ensure a path exists between $u$ and $v$ in $F$ whenever there is one in $G$, we divide $F$ into connected components. After removing excess edges, each component of $F$ only contains vertices of the same component index. The number of operations needed now is the difference between the number of connected components in $F$ and $G$. All operations can be efficiently performed using DFS or DSU.
[ "dfs and similar", "dsu", "graphs", "greedy" ]
1500
#include <bits/stdc++.h>
#define int long long
using namespace std;

void Gdfs(int v, vector<vector<int>> &sl, vector<int> &col, int c) {
    col[v] = c;
    for (int u : sl[v]) {
        if (col[u] == 0) Gdfs(u, sl, col, c);
    }
}

int Fdfs(int v, vector<vector<int>> &sl, vector<int> &col, vector<int> &old_col, int c) {
    col[v] = c;
    int res = 0;
    for (int u : sl[v]) {
        if (col[u] == 0) {
            if (old_col[u] != c) res++;
            else res += Fdfs(u, sl, col, old_col, c);
        }
    }
    return res;
}

void read_con_list(vector<vector<int>> &sl, int m) {
    for (int i = 0; i < m; ++i) {
        int u, v;
        cin >> u >> v;
        sl[--u].emplace_back(--v);
        sl[v].emplace_back(u);
    }
}

void solve(int tc) {
    int n, mf, mg;
    cin >> n >> mf >> mg;
    vector<vector<int>> fsl(n), gsl(n);
    read_con_list(fsl, mf);
    read_con_list(gsl, mg);
    vector<int> fcol(n), gcol(n);
    int ans = 0;
    for (int i = 0; i < n; ++i) {
        if (gcol[i] == 0) Gdfs(i, gsl, gcol, i + 1);
        if (fcol[i] == 0) {
            ans += Fdfs(i, fsl, fcol, gcol, gcol[i]);
            if (gcol[i] < i + 1) ans++;
        }
    }
    cout << ans;
}

bool multi = true;

signed main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    cout.tie(nullptr);
    int t = 1;
    if (multi) cin >> t;
    for (int i = 1; i <= t; ++i) {
        solve(i);
        cout << "\n";
    }
    return 0;
}
2060
F
Multiplicative Arrays
You're given integers $k$ and $n$. For each integer $x$ from $1$ to $k$, count the number of integer arrays $a$ such that all of the following are satisfied: - $1 \leq |a| \leq n$ where $|a|$ represents the length of $a$. - $1 \leq a_i \leq k$ for all $1 \leq i \leq |a|$. - $a_1 \times a_2 \times \dots \times a_{|a|}=x$ (i.e., the product of all elements is $x$). Note that two arrays $b$ and $c$ are different if either their lengths are different, or if there exists an index $1 \leq i \leq |b|$ such that $b_i\neq c_i$. Output the answer modulo $998\,244\,353$.
When $n$ is large, many of the array's elements will be $1$; the number of non-$1$ elements cannot be very large. Try to precompute all arrays without $1$s and then use combinatorics. The naive solution is to define $f_k(n)$ as the number of arrays containing $n$ elements with a product of $k$. We could then try to compute this by considering the last element of the array among the divisors of $k$, giving us: $f_k(n)=\sum_{j|k}f_j(n-1),$ with $f_k(1)=1$. The answers would then be $\sum_{i=1}^{n}f_1(i),\sum_{i=1}^{n}f_2(i),\dots,\sum_{i=1}^{n}f_k(i)$. However, this leads to an $O(nk\log k)$ solution, which exceeds our time constraints. We need to optimize this approach. We can prove that there are at most $16$ non-$1$ elements in our arrays. This is because the prime factorization $k=p_1p_2\cdots p_t$ divides $k$ into the most non-$1$ elements, and with $17$ non-$1$ elements the smallest possible value would be $k=2^{17}=131072>10^5$. Based on this observation, we can define our dynamic programming state as follows: let $dp[i][j]$ represent the number of arrays with product $i$ containing exactly $j$ non-$1$ elements. The recurrence relation becomes: $dp[i][j]=\sum_{p|i,p>1}dp[\frac{i}{p}][j-1]$. Base case: $dp[i][1]=1$ for $i>1$. This computation has a time complexity of $O(k\log^2k)$, as we perform $\sum_{i=1}^{k}d(i)=O(k\log{k})$ additions for each value of $j$. To calculate $f_k(n)$, we enumerate the number of non-$1$ elements from $1$ to $16$. For $j$ non-$1$ elements in the array: we have $n-j$ elements that are $1$, we need to choose $j$ positions for the non-$1$ elements, and we fill these positions with one of $dp[k][j]$ possible sequences. This gives us: $f_k(n)=\sum_{j=1}^{16}\binom{n}{j}dp[k][j].$ Therefore: $\begin{align*} &\sum_{i=1}^{n}f_k(i)\\ =&\sum_{i=1}^{n}\sum_{j=1}^{16}\binom{i}{j}dp[k][j]\\ =&\sum_{j=1}^{16}\left(dp[k][j]\sum_{i=1}^{n}\binom{i}{j}\right)\\ =&\sum_{j=1}^{16}\binom{n+1}{j+1}dp[k][j]. 
\end{align*}$ Note that $\sum_{i=1}^{n}\binom{i}{j} = \binom{n+1}{j+1}$ is given by the Hockey Stick Identity. Each answer can be calculated in $O(\log^2k)$ time, giving an overall time complexity of $O(k\log^2k)$. Let's revisit the naive approach and define $S_k(n)=\sum_{i=1}^{n}f_{k}(i)$ as our answers. We can observe: $\begin{align} f_{k}(n)&=\sum_{j|k}f_j(n-1)\nonumber\\ \Leftrightarrow f_{k}(n)-f_{k}(n-1)&=\sum_{j|k,j<k}f_j(n-1). \end{align}$ Accumulating Formula $(1)$ yields: $\begin{align} &&\sum_{i=2}^{n}\left(f_{k}(i)-f_{k}(i-1)\right)&=\sum_{i=2}^{n}\sum_{j|k,j<k}f_j(i-1)\nonumber\\ &\Leftrightarrow& f_{k}(n)-f_{k}(1)&=\sum_{j|k,j<k}\sum_{i=1}^{n-1}f_{j}(i)\nonumber\\ &\Leftrightarrow& f_{k}(n)&=\sum_{j|k,j<k}S_j(n-1)+1. \end{align}$ By induction, we can prove that both $f_{k}(n)$ and $S_k(n)$ are polynomials in $n$. Let's denote the degree of $f_k(n)$ as $p(k)$ and the degree of $S_k(n)$ as $q(k)$. From the definition of $S_k(n)$, we have $q(k)=p(k)+1$. Formula $(2)$ gives us: $p(k)=\max_{j|k,j<k}q(j)=\max_{j|k,j<k}p(j)+1,$ with $p(1)=0$. By induction, we can show that $p(k)$ equals the number of primes in $k$'s prime factorization. Therefore, $p(k)\le16$ and $q(k)\le17$. Since $q(k)$ doesn't exceed $17$, we can precompute $S_k(1),S_k(2),S_k(3),\dots,S_k(18)$ and use Lagrange Interpolation to compute any $S_k(n)$. This also yields an $O(k\log^2k)$ time complexity.
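The Hockey Stick Identity invoked here is easy to sanity-check numerically; the helpers below (`binom` and `hockey_lhs` are our names, not part of the solution) compute both sides for small inputs.

```cpp
// binom computes C(n, k) with the multiplicative formula; each intermediate
// division is exact, since after step i the value equals C(n - k + i, i).
long long binom(int n, int k) {
    if (k < 0 || k > n) return 0;
    long long r = 1;
    for (int i = 1; i <= k; ++i) r = r * (n - k + i) / i;
    return r;
}

// Left-hand side of the Hockey Stick Identity: sum_{i=1..n} C(i, j).
long long hockey_lhs(int n, int j) {
    long long s = 0;
    for (int i = 1; i <= n; ++i) s += binom(i, j);
    return s;
}
```

For example, $\sum_{i=1}^{10}\binom{i}{3}=\binom{11}{4}=330$.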
[ "combinatorics", "dp", "number theory" ]
2200
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using TemplateF;

#if !PROBLEM
SolutionF a = new();
a.Solve();
#endif

namespace TemplateF {
    internal class SolutionF {
        private readonly StreamReader sr = new(Console.OpenStandardInput());

        private T Read<T>() where T : struct, IConvertible {
            char c;
            dynamic res = default(T);
            dynamic sign = 1;
            while (!sr.EndOfStream && char.IsWhiteSpace((char)sr.Peek())) sr.Read();
            if (!sr.EndOfStream && (char)sr.Peek() == '-') { sr.Read(); sign = -1; }
            while (!sr.EndOfStream && char.IsDigit((char)sr.Peek())) {
                c = (char)sr.Read();
                res = res * 10 + c - '0';
            }
            return res * sign;
        }

        private T[] ReadArray<T>(int n) where T : struct, IConvertible {
            T[] arr = new T[n];
            for (int i = 0; i < n; ++i) arr[i] = Read<T>();
            return arr;
        }

        public void Solve() {
            StringBuilder output = new();
            const int maxk = 100000, mod = 998244353, bw = 20;
            List<int>[] d = new List<int>[maxk + 1];
            for (int i = 1; i <= maxk; ++i) d[i] = new() { 1 };
            for (int i = 2; i <= maxk; ++i) {
                for (int j = i; j <= maxk; j += i) d[j].Add(i);
            }
            int[,] dp = new int[bw + 1, maxk + 1], S = new int[bw + 1, maxk + 1];
            for (int k = 1; k <= maxk; ++k) dp[1, k] = S[1, k] = 1;
            for (int i = 2; i <= bw; ++i) {
                for (int j = 1; j <= maxk; ++j) {
                    foreach (var k in d[j]) {
                        dp[i, j] += dp[i - 1, k];
                        dp[i, j] %= mod;
                    }
                    S[i, j] = (S[i - 1, j] + dp[i, j]) % mod;
                }
            }
            int T = Read<int>();
            while (T-- > 0) {
                int k = Read<int>(), n = Read<int>();
                if (n <= bw) {
                    for (int j = 1; j <= k; ++j) output.Append(S[n, j]).Append(' ');
                } else {
                    int _k = k;
                    for (k = 1; k <= _k; ++k) {
                        long ans = 0;
                        for (int i = 1; i <= bw; ++i) {
                            long t = S[i, k];
                            for (int j = 1; j <= bw; ++j) {
                                if (j == i) continue;
                                t *= n - j;
                                t %= mod;
                                t *= Inv((mod + i - j) % mod, mod);
                                t %= mod;
                            }
                            ans = (ans + t) % mod;
                        }
                        output.Append(ans).Append(' ');
                    }
                }
                output.AppendLine();
            }
            Console.Write(output.ToString());
        }

        public static long Inv(long a, long Mod) {
            long u = 0, v = 1, m = Mod;
            while (a > 0) {
                long t = m / a;
                m -= t * a;
                (a, m) = (m, a);
                u -= t * v;
                (u, v) = (v, u);
            }
            return (u % Mod + Mod) % Mod;
        }
    }
}
2060
G
Bugged Sort
Today, Alice has given Bob arrays for him to sort in increasing order again! At this point, no one really knows how many times she has done this. Bob is given two sequences $a$ and $b$, both of length $n$. All integers in the range from $1$ to $2n$ appear exactly once in either $a$ or $b$. In other words, the concatenated$^{\text{∗}}$ sequence $a+b$ is a permutation$^{\text{†}}$ of length $2n$. Bob must sort \textbf{both sequences} in increasing order \textbf{at the same time} using Alice's swap function. Alice's swap function is implemented as follows: - Given two indices $i$ and $j$ ($i \neq j$), it swaps $a_i$ with $b_j$, and swaps $b_i$ with $a_j$. Given sequences $a$ and $b$, please determine if both sequences can be sorted in increasing order simultaneously after using Alice's swap function any number of times. \begin{footnotesize} $^{\text{∗}}$The concatenated sequence $a+b$ denotes the sequence $[a_1, a_2, a_3, \ldots , b_1, b_2, b_3, \ldots]$. $^{\text{†}}$A permutation of length $m$ contains all integers from $1$ to $m$ in some order. \end{footnotesize}
Consider $(a_i, b_i)$ as the $i$-th pair. Can two elements ever become unpaired? No: every operation keeps the pairs intact. The best way to think about the operation is like this: swap the pairs at index $i$ and index $j$, then flip each pair (swap $a_i$ with $b_i$, and $a_j$ with $b_j$). The operation lets us swap two pairs, but it also results in both of them getting flipped. Is there any way to swap two pairs without flipping anything? It turns out there is, and there always is, because $n \geq 3$. To swap pairs $i$ and $j$, follow this procedure: pick any index $k$, where $k \ne i$ and $k \ne j$; do the operation on $i$ and $k$; do the operation on $j$ and $k$; finally, do the operation on $i$ and $k$ again. This results in the pair at index $k$ being unchanged, and the pairs at $i$ and $j$ simply being swapped. Can you flip two pairs without swapping them? Yes. Just perform the operation on pairs $i$ and $j$, and then follow the procedure above: you are effectively swapping, then "swapping and flipping", so the final result is simply a flip of the two pairs. Can you flip one pair on its own? No, this is not possible due to a parity invariant: the operation performs two flips at the same time, so it is impossible to end up in a situation where the total number of flips is odd. Key observation: In the final arrangement, the pairs must be sorted in increasing order of $\min(a_i, b_i)$. We prove this by contradiction. 
Suppose there exists a pair $(a_i, b_i)$, where $a_i < b_i$ and $a_i$ is smaller than the minimum element of the previous pair. In this case, it becomes impossible to place this pair after the previous one, as $a_i$ would always be smaller than the preceding element, regardless of orientation. Since all $a_i$ and $b_i$ are distinct, there is exactly one valid final ordering of the pairs. Moreover, this ordering is always reachable, because we previously proved that any two pairs can be swapped without flipping their elements, and being able to swap any two items an arbitrary number of times allows arbitrary reordering. Thus, the solution is to initially sort the pairs based on $\min(a_i, b_i)$. Solution: The problem simplifies to determining whether flipping pairs allows sorting. We can solve this using dynamic programming (DP), defined as follows: $dp[1][i] = 1$ if it's possible to sort everything up to and including pair $i$ using an even number of flips, where pair $i$ is not flipped. $dp[2][i] = 1$ if it's possible to sort up to and including pair $i$ using an even number of flips, where pair $i$ is flipped. $dp[3][i] = 1$ if it's possible to sort up to and including pair $i$ using an odd number of flips, where pair $i$ is not flipped. $dp[4][i] = 1$ if it's possible to sort up to and including pair $i$ using an odd number of flips, where pair $i$ is flipped. The base cases are as follows: $dp[1][1] = 1$, because this represents the state where the first pair is not flipped, resulting in zero flips (which is even); $dp[4][1] = 1$, because this represents the state where the first pair is flipped, resulting in one flip (which is odd). There are eight possible transitions, as each of the four states can be reached from two previous states: you either flip the current pair or you don't. 
The final answer is $dp[1][n]$ $\texttt{OR}$ $dp[2][n]$, because, as argued above, only states in which an even total number of flips has been performed are reachable from the starting position.
[ "dp", "greedy", "sortings" ]
2400
#include <bits/stdc++.h>
using namespace std;

#define F0R(i, n) for (int i = 0; i < n; i++)
#define FOR(i, m, n) for (int i = m; i < n; i++)
#define ALL(x) (x).begin(), (x).end()

void solve() {
    int n;
    cin >> n;
    vector<pair<int, int>> pairs(n);
    F0R(i, n) cin >> pairs[i].first;
    F0R(i, n) cin >> pairs[i].second;
    sort(ALL(pairs), [](pair<int, int> a, pair<int, int> b) {
        return min(a.first, a.second) < min(b.first, b.second);
    });
    vector<vector<int>> dp(4, vector<int>(n, 0));
    dp[0][0] = 1;
    dp[3][0] = 1;
    FOR(i, 1, n) {
        if (pairs[i - 1].first < pairs[i].first and pairs[i - 1].second < pairs[i].second) {
            dp[0][i] |= dp[0][i - 1];
            dp[1][i] |= dp[1][i - 1];
            dp[2][i] |= dp[3][i - 1];
            dp[3][i] |= dp[2][i - 1];
        }
        if (pairs[i - 1].first < pairs[i].second and pairs[i - 1].second < pairs[i].first) {
            dp[0][i] |= dp[2][i - 1];
            dp[1][i] |= dp[3][i - 1];
            dp[2][i] |= dp[1][i - 1];
            dp[3][i] |= dp[0][i - 1];
        }
    }
    cout << ((dp[0][n - 1] | dp[2][n - 1]) ? "YES" : "NO") << endl;
}

signed main() {
    int tests;
    cin >> tests;
    F0R(test, tests) solve();
    return 0;
}
2061
A
Kevin and Arithmetic
To train young Kevin's arithmetic skills, his mother devised the following problem. Given $n$ integers $a_1, a_2, \ldots, a_n$ and a sum $s$ initialized to $0$, Kevin performs the following operation for $i = 1, 2, \ldots, n$ in order: - Add $a_i$ to $s$. If the resulting $s$ is even, Kevin earns a point and repeatedly divides $s$ by $2$ until it becomes odd. Note that Kevin can earn at most one point per operation, regardless of how many divisions he does. Since these divisions are considered more beneficial for Kevin's development, his mother wants to rearrange $a$ so that the number of Kevin's total points is maximized. Determine the maximum number of points.
After the first operation, $s$ is always odd, so afterwards you can only earn a point when $a_i$ is odd. During the first operation itself, you can only earn a point if $a_i$ is even. Let $c_0$ and $c_1$ be the number of even and odd numbers, respectively. If there is at least one even number, place it first, making the answer $c_1 + 1$. Otherwise, since no point can be earned during the first operation, the answer is $c_1 - 1$. Time complexity: $O(n)$.
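The counting argument above fits in a few lines; `maxPoints` is our name for the routine, which takes the multiset $a$ from the statement.

```cpp
#include <vector>

// Sketch of the O(n) counting described above (maxPoints is our name, not from
// the statement): count evens and odds, then apply the case analysis.
long long maxPoints(const std::vector<long long>& a) {
    long long c0 = 0, c1 = 0;            // evens, odds
    for (long long x : a) (x % 2 == 0 ? c0 : c1)++;
    return c0 > 0 ? c1 + 1 : c1 - 1;     // place one even number first if possible
}
```

For instance, for $a=[2,1,1]$ the order $2,1,1$ earns a point on every step, giving $3$.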
[ "math" ]
800
null
2061
B
Kevin and Geometry
Kevin has $n$ sticks with length $a_1,a_2,\ldots,a_n$. Kevin wants to select $4$ sticks from these to form an isosceles trapezoid$^{\text{∗}}$ with a positive area. Note that rectangles and squares are also considered isosceles trapezoids. Help Kevin find a solution. If no solution exists, output $-1$. \begin{footnotesize} $^{\text{∗}}$An isosceles trapezoid is a convex quadrilateral with a line of symmetry bisecting one pair of opposite sides. In any isosceles trapezoid, two opposite sides (the bases) are parallel, and the two other sides (the legs) are of equal length. \end{footnotesize}
Let $a$ and $b$ be the lengths of the two bases, and let $c$ be the length of the two legs. A necessary and sufficient condition for forming an isosceles trapezoid with a positive area is $|a - b| < 2c$, as the longest edge must be shorter than the sum of the other three edges. To determine whether an isosceles trapezoid can be formed, consider the following cases: If there are two distinct pairs of identical numbers that do not overlap, these pairs can always form an isosceles trapezoid. If there are no pairs of identical numbers, it is impossible to form an isosceles trapezoid. If there is exactly one pair of identical numbers, denoted by $c$, we will use them as the legs. Remove this pair and check whether there exist two other numbers whose difference is less than $2c$. This can be efficiently done by sorting the remaining numbers and checking adjacent pairs. The time complexity of this approach is $O(n \log n)$.
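The case analysis above can be sketched as follows; `findTrapezoid` is our name, and the return convention (legs first, then the two bases, or an empty vector when impossible) is ours as well.

```cpp
#include <algorithm>
#include <vector>

// Sketch of the editorial's case analysis. Greedily extract pairs of equal
// sticks from the sorted array; leftovers go to `rest`.
std::vector<long long> findTrapezoid(std::vector<long long> a) {
    std::sort(a.begin(), a.end());
    int n = (int)a.size();
    std::vector<long long> pairs, rest;
    for (int i = 0; i < n; ) {
        if (i + 1 < n && a[i] == a[i + 1]) { pairs.push_back(a[i]); i += 2; }
        else rest.push_back(a[i++]);
    }
    if (pairs.size() >= 2)                 // two pairs: one as legs, one as equal bases
        return {pairs[0], pairs[0], pairs[1], pairs[1]};
    if (pairs.empty()) return {};          // no equal sticks to use as legs
    long long c = pairs[0];                // the unique pair becomes the legs
    for (std::size_t i = 0; i + 1 < rest.size(); ++i)
        if (rest[i + 1] - rest[i] < 2 * c) // adjacent sticks as bases: |a - b| < 2c
            return {c, c, rest[i], rest[i + 1]};
    return {};
}
```

For example, sticks $\{1,2,2,4\}$ give legs $2,2$ and bases $1,4$ since $|1-4|=3<4$, while $\{1,2,2,5\}$ fail because $|1-5|=4\not<4$.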
[ "binary search", "geometry" ]
1100
null
2061
C
Kevin and Puzzle
Kevin enjoys logic puzzles. He played a game with $n$ classmates who stand in a line. The $i$-th person from the left says that there are $a_i$ liars to their left (not including themselves). Each classmate is either honest or a liar, with the restriction that \textbf{no two liars can stand next to each other}. Honest classmates always say the truth. \textbf{Liars can say either the truth or lies}, meaning their statements are considered unreliable. Kevin wants to determine the number of distinct possible game configurations modulo $998\,244\,353$. Two configurations are considered different if at least one classmate is honest in one configuration and a liar in the other.
For the $i$-th person, there are at most two possible cases: If they are honest, there are exactly $a_i$ liars to their left. If they are a liar, since two liars cannot stand next to each other, the $(i-1)$-th person must be honest. In this case, there are $a_{i-1} + 1$ liars to their left. Let $dp_i$ represent the number of possible game configurations for the first $i$ people, where the $i$-th person is honest. The transitions are as follows: If the $(i-1)$-th person is honest, check if $a_i = a_{i-1}$. If true, add $dp_{i-1}$ to $dp_i$. If the $(i-1)$-th person is a liar, check if $a_i = a_{i-2} + 1$. If true, add $dp_{i-2}$ to $dp_i$. The final answer is given by $dp_n + dp_{n-1}$. Time complexity: $O(n)$.
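A minimal sketch of this DP, assuming 0-indexed input; `countConfigs` is our name, and `dp[0] = 1` stands for the empty prefix (used when person $1$ is a liar).

```cpp
#include <vector>

const long long MOD = 998244353;

// dp[i] counts configurations of the first i people where person i is honest.
// Person 1 honest requires a_1 = 0; a liar at position i-1 forces person i-2
// (if any) to be honest, contributing a_{i-2} + 1 liars to person i's left.
long long countConfigs(const std::vector<long long>& a) {
    int n = (int)a.size();
    std::vector<long long> dp(n + 1, 0);
    dp[0] = 1;                                        // empty prefix
    for (int i = 1; i <= n; ++i) {
        // Transition 1: person i-1 honest, statements must agree.
        if ((i == 1 && a[0] == 0) || (i >= 2 && a[i - 1] == a[i - 2]))
            dp[i] = (dp[i] + dp[i - 1]) % MOD;
        // Transition 2: person i-1 is a liar (so person i-2 is honest or absent).
        if (i >= 2) {
            long long liars = (i >= 3) ? a[i - 3] + 1 : 1;
            if (a[i - 1] == liars) dp[i] = (dp[i] + dp[i - 2]) % MOD;
        }
    }
    return (dp[n] + dp[n - 1]) % MOD;                 // last person honest or a liar
}
```

For $a=[0,1,2]$ the only valid configuration is liars at positions $1$ and $3$, so the count is $1$.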
[ "2-sat", "combinatorics", "dp" ]
1600
null
2061
D
Kevin and Numbers
Kevin wrote an integer sequence $a$ of length $n$ on the blackboard. Kevin can perform the following operation any number of times: - Select two integers $x, y$ on the blackboard such that $|x - y| \leq 1$, erase them, and then write down an integer $x + y$ instead. Kevin wants to know if it is possible to transform these integers into an integer sequence $b$ of length $m$ through some sequence of operations. Two sequences $a$ and $b$ are considered the same if and only if their multisets are identical. In other words, for any number $x$, the number of times it appears in $a$ must be equal to the number of times it appears in $b$.
It can be challenging to determine how to merge small numbers into a larger number directly. Therefore, we approach the problem in reverse. Instead of merging numbers, we consider transforming the sequence $b$ back into $a$. Each number $x$ can only be formed by merging $\lceil \frac{x}{2} \rceil$ and $\lfloor \frac{x}{2} \rfloor$. Thus, the reversed operation is as follows: select an integer $x$ from $b$ and split it into $\lceil \frac{x}{2} \rceil$ and $\lfloor \frac{x}{2} \rfloor$. We can perform this splitting operation on the numbers in $b$ exactly $n - m$ times. If the largest number in $b$ also appears in $a$, we can remove it from both $a$ and $b$ simultaneously. Otherwise, the largest number in $b$ must be split into two smaller numbers. To efficiently manage the numbers in $a$ and $b$, we can use a priority queue or a multiset. The time complexity of this approach is $O((n + m) \log (n + m))$.
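The reverse splitting process can be sketched with two max-heaps (`canTransform` is our name). Note that if every element is eventually matched, exactly $n - m$ splits were used, so the budget need not be tracked explicitly.

```cpp
#include <queue>
#include <vector>

// Reverse process: repeatedly compare the maxima of a and b. Equal maxima are
// matched and removed; a larger maximum of b is split into ceil/floor halves;
// a larger maximum of a can never be produced, so we fail.
bool canTransform(std::vector<long long> a, std::vector<long long> b) {
    if (a.size() < b.size()) return false;
    std::priority_queue<long long> pa(a.begin(), a.end()), pb(b.begin(), b.end());
    while (!pa.empty() && !pb.empty()) {
        long long x = pa.top(), y = pb.top();
        if (x == y) { pa.pop(); pb.pop(); }
        else if (y > x) {              // largest of b must be split further
            pb.pop();
            pb.push(y / 2);
            pb.push(y - y / 2);
        }
        else return false;             // largest of a can never be matched
    }
    return pa.empty() && pb.empty();
}
```

For example, $a=[1,2,2,2]$ reaches $b=[3,4]$ (merge $1+2$ and $2+2$), while $a=[1,4]$ cannot reach $b=[5]$ because $|1-4|>1$.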
[ "bitmasks", "data structures" ]
1600
null
2061
E
Kevin and And
Kevin has an integer sequence $a$ of length $n$. At the same time, Kevin has $m$ types of magic, where the $i$-th type of magic can be represented by an integer $b_i$. Kevin can perform at most $k$ (possibly zero) magic operations. In each operation, Kevin can do the following: - Choose two indices $i$ ($1\leq i\leq n$) and $j$ ($1\leq j\leq m$), and then update $a_i$ to $a_i\ \&\ b_j$. Here, $\&$ denotes the bitwise AND operation. Find the minimum possible sum of all numbers in the sequence $a$ after performing at most $k$ operations.
For a fixed $p$, let $f(S)$ represent the number obtained after applying the operations in set $S$ to $p$. Define $g(i)$ as the minimum value of $f(S)$ over all subsets $S$ with $|S| = i$, i.e., $g(i) = \min_{|S| = i} f(S)$. - Lemma: The function $g$ is convex; that is, for all $i$ ($1 \leq i < m$), the inequality $2g(i) \leq g(i-1) + g(i+1)$ holds. Proof: Let $f(S)$ be the value corresponding to the minimum of $g(i-1)$, and let $f(T)$ correspond to the minimum of $g(i+1)$. Suppose the $w$-th bit is the highest bit where $f(S)$ and $f(T)$ differ. We can always find an operation $y \in T \setminus S$ that turns the $w$-th bit in $g(i-1)$ into $0$. In this case, we have: $g(i - 1) - f(S \cup \{y\}) \geq 2^w.$ Moreover, since the highest bit where $f(S \cup \{y\})$ and $g(i+1)$ differ is no greater than $w-1$, we have: $f(S \cup \{y\}) - g(i + 1) \leq 2^w.$ Combining these inequalities gives: $2 g(i) \leq 2 f(S \cup \{y\}) \leq g(i - 1) + g(i + 1).$ - For each number $a_i$, we can use brute force to compute the minimum value obtained after applying $j$ operations, denoted as $num(i, j)$. From the lemma, we know that the difference $num(i, k - 1) - num(i, k)$ is non-increasing in $k$. Thus, when performing operations on this number, the successive reductions in value are: $num(i, 0) - num(i, 1), num(i, 1) - num(i, 2), \dots, num(i, m - 1) - num(i, m).$ You need to perform $k$ operations in total. To minimize the result, you can choose the $k$ largest reductions over all possible operations. This can be done by sorting all reductions or by using std::nth_element. Time complexity: $O(n2^m + nm)$.
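A brute-force sketch of this approach for small $m$ (`minSum` is our name); it relies on the convexity lemma above to justify taking the $k$ globally largest reductions.

```cpp
#include <algorithm>
#include <vector>

// For each element, enumerate all 2^m magic subsets to get g[c] = minimum value
// using exactly c magics, collect the per-operation reductions g[c-1] - g[c]
// (non-increasing by convexity), and greedily subtract the k largest overall.
long long minSum(const std::vector<long long>& a, const std::vector<long long>& b, long long k) {
    int m = (int)b.size();
    std::vector<long long> gains;
    long long total = 0;
    for (long long x : a) {
        total += x;
        std::vector<long long> g(m + 1, x);
        for (int S = 0; S < (1 << m); ++S) {
            long long v = x;
            for (int j = 0; j < m; ++j)
                if (S >> j & 1) v &= b[j];
            g[__builtin_popcount((unsigned)S)] = std::min(g[__builtin_popcount((unsigned)S)], v);
        }
        for (int c = 1; c <= m; ++c) gains.push_back(g[c - 1] - g[c]);
    }
    std::sort(gains.rbegin(), gains.rend());      // nth_element would also work
    long long take = std::min<long long>(k, (long long)gains.size());
    for (long long i = 0; i < take; ++i) total -= gains[i];
    return total;
}
```

For $a=[7,7]$, $b=[3,5]$ the per-element reductions are $4$ then $2$, so two operations give $14-8=6$.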
[ "bitmasks", "brute force", "dp", "greedy", "math", "sortings" ]
2000
null
2061
F2
Kevin and Binary String (Hard Version)
\textbf{This is the hard version of the problem. The difference between the versions is that in this version, string $t$ consists of '0', '1' and '?'. You can hack only if you solved all versions of this problem.} Kevin has a binary string $s$ of length $n$. Kevin can perform the following operation: - Choose two adjacent blocks of $s$ and swap them. A block is a maximal substring$^{\text{∗}}$ of identical characters. Formally, denote $s[l,r]$ as the substring $s_l s_{l+1} \ldots s_r$. A block is $s[l,r]$ satisfying: - $l=1$ or $s_l\not=s_{l-1}$. - $s_l=s_{l+1} = \ldots = s_{r}$. - $r=n$ or $s_r\not=s_{r+1}$. Adjacent blocks are two blocks $s[l_1,r_1]$ and $s[l_2,r_2]$ satisfying $r_1+1=l_2$. For example, if $s=\mathtt{000}\,\mathbf{11}\,\mathbf{00}\,\mathtt{111}$, Kevin can choose the two blocks $s[4,5]$ and $s[6,7]$ and swap them, transforming $s$ into $\mathtt{000}\,\mathbf{00}\,\mathbf{11}\,\mathtt{111}$. Given a string $t$ of length $n$ consisting of '0', '1' and '?', Kevin wants to determine the minimum number of operations required to perform such that for any index $i$ ($1\le i\le n$), if $t_i\not=$ '?' then $s_i=t_i$. If it is impossible, output $-1$. \begin{footnotesize} $^{\text{∗}}$A string $a$ is a substring of a string $b$ if $a$ can be obtained from $b$ by the deletion of several (possibly, zero or all) characters from the beginning and several (possibly, zero or all) characters from the end. \end{footnotesize}
First, we divide the string $s$ into blocks. Let's analyze the properties of the blocks in $s$ that do not move (to simplify corner cases, we can add a block with a different character at both ends of $s$): These blocks must alternate between $0$ and $1$. Between two immovable blocks, the $0$s will shift toward the same side as the adjacent block of $0$s, and the $1$s will shift toward the same side as the adjacent block of $1$s. For example, $\mathtt{0100101101}$ will become $\mathtt{0000011111}$ after the shifting process. These properties can be proven by induction. - Easy Version For the easy version, since the target string $t$ is known, we can greedily determine whether each block in $s$ can remain immovable. Specifically: For each block and the previous immovable block, check whether the corresponding digits are different, and ensure that the numbers in the interval between the two blocks meet the conditions (i.e., the $0$s and $1$s in this interval must shift to their respective sides). This can be checked efficiently using prefix sums. If there are $x$ blocks between two immovable blocks, then $\frac{x}{2}$ moves are required. Time complexity: $O(n)$. - Hard Version For the hard version, we use dynamic programming to determine which blocks can remain immovable. Let $dp(i)$ represent the minimum cost for the $i$-th block to remain immovable. We have: $dp(i) = \min_{0\leq j <i, (j, i) \text{ is valid}} (dp(j) + (i - j - 1) / 2).$ Without loss of generality, assume that the $i$-th block is composed of $0$s. Let the distance between the $i$-th block and the nearest preceding $1$-block be $p$. The number of $0$s between blocks $j$ and $i$ cannot exceed $p$; this gives a lower bound on $j$, i.e. a restriction of the form $j \geq l_i$. Similarly, for each $j$ we can derive a symmetric restriction $i \leq r_j$. We can use a segment tree to maintain the values of $2dp(j) - j$ for all valid $j$. Specifically: For a position $j$, once the current $i > r_j$, update the corresponding value to $+\infty$. 
For each $i$, query the segment tree over the valid interval to compute the minimum efficiently. Time complexity: $O(n \log n)$.
[ "data structures", "dp" ]
3500
null
2061
G
Kevin and Teams
This is an interactive problem. Kevin has $n$ classmates, numbered $1, 2, \ldots, n$. Any two of them may either be friends or not friends. Kevin wants to select $2k$ classmates to form $k$ teams, where each team contains exactly $2$ people. Each person can belong to at most one team. Let $u_i$ and $v_i$ be two people in the $i$-th team. To avoid potential conflicts during team formation, the team members must satisfy one of the following two conditions: - For all $i$ ($1\leq i \leq k$), classmate $u_i$ and $v_i$ are friends. - For all $i$ ($1\leq i \leq k$), classmate $u_i$ and $v_i$ are not friends. Kevin wants to determine the maximum $k$ such that, regardless of the friendship relationships among the $n$ people, he can always find $2k$ people to form the teams. After that, he needs to form $k$ teams. But asking whether two classmates are friends is awkward, so Kevin wants to achieve this while asking about the friendship status of no more than $n$ pairs of classmates. The interactor is \textbf{adaptive}. It means that the hidden relationship between classmates is not fixed before the interaction and will change during the interaction.
Using graph theory terminology, let edges labeled $0$ be represented in red and edges labeled $1$ be represented in blue. - We can prove that if a graph has $3k + 1$ vertices, the answer is at most $k$. To see this, construct a blue clique of size $2k+1$, with all other edges colored red. It is not hard to verify that the maximum monochromatic matching in both colors is $k$. So the maximum matching you can guarantee has size $\lfloor \frac{n+1}{3} \rfloor$. - To construct a matching, we can maintain a chain of edges of the same color. Suppose we currently have a red chain $p_1, p_2, \dots, p_k$. Now, we want to add a new vertex $q$. We check the color of the edge between $p_k$ and $q$: If the edge is red, we add $q$ to the chain. If the edge is blue, $p_{k-1}, p_k, q$ form a "mixed-color triplet", and we remove $p_{k-1}$ and $p_k$ from the chain. After at most $n-1$ queries, the graph will be divided into several mixed-color triplets and one chain of edges of the same color. From a chain of length $k$, we can form $\lfloor \frac{k}{2} \rfloor$ pairs of the chain's color, and from each mixed-color triplet we can always form one pair of whichever color we need. In total, we can construct a matching of size $\lfloor \frac{n+1}{3} \rfloor$.
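The chain-building strategy can be simulated offline as a sanity check; here the hidden coloring is a fixed callback rather than the adaptive interactor, and `chainMatching` and its return convention (matching size, queries used) are our names.

```cpp
#include <array>
#include <functional>
#include <utility>
#include <vector>

// Offline simulation of the chain strategy: color(u, v) stands in for one
// interactive query (0 = red, 1 = blue). Vertices are 1..n; each vertex
// triggers at most one query, so at most n - 1 queries are made in total.
std::pair<int, int> chainMatching(int n, const std::function<int(int, int)>& color) {
    std::vector<int> chain;
    std::vector<std::array<int, 3>> triplets;   // each yields one pair of either color
    int chainColor = -1, queries = 0;
    for (int q = 1; q <= n; ++q) {
        if (chain.size() < 2) {                 // chain too short to have a color yet
            if (chain.size() == 1) { chainColor = color(chain.back(), q); ++queries; }
            chain.push_back(q);
            continue;
        }
        int c = color(chain.back(), q); ++queries;
        if (c == chainColor) {
            chain.push_back(q);                 // extend the monochromatic chain
        } else {                                // mixed-color triplet found
            int pk = chain.back(); chain.pop_back();
            int pk1 = chain.back(); chain.pop_back();
            triplets.push_back({pk1, pk, q});
        }
    }
    // One pair per triplet plus floor(len/2) pairs from the leftover chain.
    return {(int)triplets.size() + (int)chain.size() / 2, queries};
}
```

On any fixed coloring this yields a matching of size at least $\lfloor \frac{n+1}{3} \rfloor$ with at most $n-1$ queries.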
[ "constructive algorithms", "graphs", "interactive" ]
2,900
null
2061
H2
Kevin and Stones (Hard Version)
\textbf{This is the hard version of the problem. The difference between the versions is that in this version, you need to output a valid sequence of operations if one exists. You can hack only if you solved all versions of this problem.} Kevin has an undirected graph with $n$ vertices and $m$ edges. Initially, some vertices contain stones, which Kevin wants to move to new positions. Kevin can perform the following operation: - For each stone at $u_i$, select a neighboring vertex $v_i$. Simultaneously move each stone from $u_i$ to its corresponding $v_i$. At any time, each vertex can contain \textbf{at most one} stone. Determine whether a valid sequence of operations exists that moves the stones from the initial state to the target state. Output a valid sequence of operations with no more than $2n$ moves if one exists. It can be proven that if a valid sequence exists, a valid sequence with no more than $2n$ moves exists.
\textbf{Necessary and sufficient conditions.} First, if $s = t$, the transformation is trivially feasible. Beyond this trivial case, consider the necessary conditions: - The initial state must allow at least one valid move. - The target state must allow at least one valid move. - For every connected component, the number of stones in the initial state must match the number of stones in the target state. - If a connected component is a bipartite graph, then after a $2$-coloring of the graph, the number of stones in each part must also satisfy the corresponding constraint. We can represent a node $u$ as two separate nodes $(u, 0)$ and $(u, 1)$, corresponding to states reachable in an even and odd number of steps, respectively. For an undirected edge $(u, v)$, create edges between $(u, 0)$ and $(v, 1)$, as well as between $(u, 1)$ and $(v, 0)$. Using this transformation: - It is sufficient to enumerate whether the target state is reached on an even or odd step. - For each connected component, count the number of stones and verify that the condition is satisfied. - The bipartite-graph constraint is inherently handled within this framework, eliminating the need for special-case treatment. The first two conditions, ensuring that the initial and target states allow at least one valid move, can be verified using bipartite matching: a valid move exists iff every stone can be matched to a distinct neighboring vertex. Time complexity: $O(\text{BipartiteMatching}(n, m))$, where $n$ is the number of nodes and $m$ is the number of edges. It can be shown that the above conditions are not only necessary but also sufficient: once they hold, a valid transformation from $s$ to $t$ can be constructed. \textbf{Non-constructive proof.} This is a simpler, non-constructive proof; below is a brief outline of the idea. Since this proof does not help with the construction part, some details are omitted. The problem can be naturally formulated as a network-flow problem.
You can construct a layered graph where each layer represents one step of stone movement, and the flow represents the movement of stones. Determining whether the stones can move from the initial state to the target state is equivalent to checking whether the maximum flow is saturated. Let $c$ be the number of stones. By the max-flow min-cut theorem, we need to consider whether there exists a cut with capacity smaller than $c$. If we sort all the cut edges by their layer indices, then for a sufficiently long layered graph there must exist two adjacent cut edges whose layers are more than $n$ apart, or a cut edge whose distance from the source or sink exceeds $n$. To disconnect the graph, the cut must fully disconnect it inside one of these stretches; otherwise, after $n$ layers, the graph remains connected. However, since there exists a valid matching between the initial and target states, it is possible to repeatedly move along this matching. This ensures that there exists a flow of size $c$ from the source or sink to any layer, meaning that the graph cannot be disconnected using a cut of capacity less than $c$. \textbf{Constructive proof.} Transforming the graph into a tree: in the graph after splitting the nodes, if we combine the matching edges from $s$ and $t$, we obtain a structure consisting of chains and cycles. Note that on any cycle, the $s$- and $t$-edges alternate, so we can always adjust the matching of $t$ on this cycle to make it identical to $s$. Afterward, we can find a spanning tree among the remaining edges. This spanning tree still satisfies both the matching-existence and connectivity conditions. Moreover, all stones in $s$ and $t$ (after coloring the graph black and white) reside on the same layer. Solving the tree case: for trees, it seems straightforward to move stones outward in subtrees with too many stones and inward in subtrees with too few. However, due to the constraint that every stone must move at each step, deadlocks may arise, making the problem less trivial.
For example, consider the following scenario: after the first move, $S$ shifts to positions $2$ and $6$. If we only consider the number of stones in the subtrees of $s$ and $t$, or simply match based on the shortest distance, the stones might return to $1$ and $5$, leading to a deadlock. The correct sequence of moves should be $2 \to 3$, $6 \to 5$, then $\to 4, 2$, and finally $\to 3, 1$. Since $s$ and $t$ are symmetric, a meet-in-the-middle approach can be applied, where we simultaneously consider both $s$ and $t$ and aim to transform them into identical states. First, find matchings for $s$ and $t$. The idea is to identify the "outermost" matching edge (the definition of "outermost" is formalized below). WLOG, assume it is $(t_u, t_v)$ in $t$. Then, locate the matching edge $(s_u, s_v)$ in $s$ nearest to $(t_u, t_v)$. Move the stones from $(s_u, s_v)$ to the corresponding nodes in $t$; once they reach $(t_u, t_v)$, let them move back and forth between $t_u$ and $t_v$. For the remaining stones in $s$, let them oscillate along their own matching edges for the first two steps. If the "outermost" matching edge is instead in $s$, we perform the corresponding operations in reverse order. Why this works: since we choose the nearest matching edge in $s$, and the remaining stones move along their own matching edges for the first two steps, the remaining stones never meet the path taken from $(s_u, s_v)$; otherwise, it would contradict the definition of "nearest". After moving $s$ to $t$, the stones repeatedly move along the matching edge $(t_u, t_v)$ until the end of the process, so $(t_u, t_v)$ becomes impassable for subsequent moves. Therefore, we require this edge to be "outermost," meaning that after removing these two nodes, the remaining parts of $s$ and $t$ must still be connected. To find the "outermost" edge, we can iteratively remove the leaf nodes of the graph (nodes with degree at most 1).
The first matching edge where both nodes are removed is the "outermost" edge. Note that this "outermost" criterion must consider both $s$ and $t$, taking the "outermost" edge among all matching edges in $s$ and $t$. For the remaining stones, since we ensure the process does not encounter this path or $(t_u, t_v)$, the problem reduces to a subproblem, which can be solved using the same method. Each round of this approach takes two steps and removes one matching edge. If the tree has $N = 2n$ nodes, the number of matching edges does not exceed $n$, so we can solve the problem in at most $2n$ steps. Time complexity: $O(\text{BipartiteMatching}(n, m) + n^2)$. Note: the bound of $2n$ is tight. For example, consider a chain with an additional triangular loop at the tail. If the goal is to move from $1, 2$ to $1, 3$, at least one stone must traverse the triangular loop to change its parity, which requires at least $2n - O(1)$ steps.
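The node-splitting transformation and the per-component counting condition can be sketched with a small DSU. This is a hedged illustration with invented helper names; it checks only the stone-count condition (which subsumes the bipartite constraint), not the valid-move matching conditions.

```python
from collections import Counter

def parity_component_check(n, edges, s, t):
    """Split each node u into (u, 0) and (u, 1): states reachable after an
    even / odd number of steps.  Edge (u, v) links (u,0)-(v,1) and (u,1)-(v,0).
    s and t are 0/1 lists marking initial and target stone positions."""
    parent = list(range(2 * n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for u, v in edges:
        parent[find(2 * u)] = find(2 * v + 1)
        parent[find(2 * u + 1)] = find(2 * v)

    # Stones start on the even layer; try reaching t after an even or odd step count.
    start = Counter(find(2 * u) for u in range(n) if s[u])
    return any(
        start == Counter(find(2 * u + parity) for u in range(n) if t[u])
        for parity in (0, 1)
    )
```

For instance, on two disjoint edges a stone can never change component, so moving it from one edge to the other fails this check for both parities.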
[ "flows", "graphs" ]
3,500
null
2061
I
Kevin and Nivek
Kevin and Nivek are competing for the title of "The Best Kevin". They aim to determine the winner through $n$ matches. The $i$-th match can be one of two types: - \textbf{Type 1}: Kevin needs to spend $a_i$ time to defeat Nivek and win the match. If Kevin doesn't spend $a_i$ time on it, Nivek will win the match. - \textbf{Type 2}: The outcome of this match depends on their historical records. If Kevin's number of wins is greater than or equal to Nivek's up to this match, then Kevin wins. Otherwise, Nivek wins. Kevin wants to know the minimum amount of time he needs to spend to ensure he wins at least $k$ matches. Output the answers for $k = 0, 1, \ldots, n$.
First, there is a straightforward dynamic programming solution with a time complexity of $O(n^2)$. Let $dp(i, j)$ represent the minimum cost for Kevin to achieve $j$ victories in the first $i$ games. However, optimizing this dynamic programming solution directly is difficult because it lacks convexity. Therefore, we use a divide-and-conquer approach to solve it (which is more commonly seen in certain counting problems). - Key Idea: To compute $dp(r, *)$ from $dp(l, *)$ efficiently, the following observations are used: At time $l$, if the difference between Kevin's and Nivek's number of victories exceeds $r-l$, then: For Type 2 matches, Kevin will either always win or always lose. For Type 1 matches, Kevin should prioritize matches with the smallest cost $a_i$ in ascending order. Let $g(i)$ denote the minimum cost of selecting $i$ matches in this range, and let $t$ represent the number of Type 2 matches in the interval. When $j - (l - j) > r-l$, the following transition applies: $dp(l,j)+g(i) \to dp(r,j+t+i).$ Similarly, when $j - (l - j) < -(r-l)$, a similar transition is used. Since $g(i)$ is convex, the best transition point is monotonic. This property allows us to optimize this part using divide-and-conquer, achieving a time complexity of $O(n \log n)$. For the case where $|j - (l - j)| \leq r-l$, brute force is used to compute the transitions explicitly, with a time complexity of $O((r-l)^2)$. By dividing the sequence into blocks of size $O(\sqrt{n \log n})$, the overall time complexity can be reduced to $O(n \sqrt{n \log n})$. - To further optimize the problem, a more refined strategy is applied. Specifically, for the case where $|j - (l - j)| \leq r-l$, instead of computing transitions explicitly with brute force, we recursively divide the problem into smaller subproblems. The goal in each subproblem is to compute the transitions from a segment of $dp(l)$ to $dp(r)$, ensuring that the segment length in $dp(l)$ does not exceed $O(r-l)$. 
Let $m = \lfloor\frac{l + r}{2}\rfloor$. First, compute the transitions from $dp(l)$ to $dp(m)$. Then, use the results from $dp(m)$ to compute the transitions from $dp(m)$ to $dp(r)$. For each subinterval, the strategy remains the same: for differences in the number of victories that exceed the interval length, apply monotonicity to handle the transitions efficiently; for the remaining part, recursively compute transitions using the same divide-and-conquer approach. The recurrence relation for this approach is $T(n)=2T(n/2) + O(n \log n)$, so the time complexity is $T(n)=O(n \log^2 n)$.
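For reference, the $O(n^2)$ baseline DP that the divide-and-conquer optimizes might look as follows. This is a sketch under the assumption that a Type 2 match is encoded as $a_i = -1$; the actual input convention may differ.

```python
def min_times(a):
    """a[i] > 0: Type 1 match with cost a[i]; a[i] == -1: Type 2 match (assumed
    encoding).  Returns the minimum total time for k = 0..n wins."""
    INF = float('inf')
    n = len(a)
    dp = [0] + [INF] * n                  # dp[j]: min cost for j wins so far
    for i, x in enumerate(a):
        ndp = [INF] * (n + 1)
        for j in range(i + 1):
            if dp[j] == INF:
                continue
            if x == -1:                   # Type 2: decided by the record (j wins, i-j losses)
                if j >= i - j:
                    ndp[j + 1] = min(ndp[j + 1], dp[j])
                else:
                    ndp[j] = min(ndp[j], dp[j])
            else:                         # Type 1: pay x to win, or concede
                ndp[j + 1] = min(ndp[j + 1], dp[j] + x)
                ndp[j] = min(ndp[j], dp[j])
        dp = ndp
    # Winning more than k matches also satisfies "at least k": take suffix minima.
    ans = [INF] * (n + 1)
    best = INF
    for k in range(n, -1, -1):
        best = min(best, dp[k])
        ans[k] = best
    return ans
```

For example, with matches `[-1, 5, -1]` Kevin wins the first Type 2 match for free (record $0$:$0$), so only the last answer requires paying the Type 1 cost.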
[ "divide and conquer", "dp" ]
3,500
null
2062
A
String
You are given a string $s$ of length $n$ consisting of $\mathtt{0}$ and/or $\mathtt{1}$. In one operation, you can select a non-empty subsequence $t$ from $s$ such that any two adjacent characters in $t$ are different. Then, you flip each character of $t$ ($\mathtt{0}$ becomes $\mathtt{1}$ and $\mathtt{1}$ becomes $\mathtt{0}$). For example, if $s=\mathtt{\underline{0}0\underline{101}}$ and $t=s_1s_3s_4s_5=\mathtt{0101}$, after the operation, $s$ becomes $\mathtt{\underline{1}0\underline{010}}$. Calculate the minimum number of operations required to change all characters in $s$ to $\mathtt{0}$. Recall that for a string $s = s_1s_2\ldots s_n$, any string $t=s_{i_1}s_{i_2}\ldots s_{i_k}$ ($k\ge 1$) where $1\leq i_1 < i_2 < \ldots <i_k\leq n$ is a subsequence of $s$.
Let $c$ be the number of $1$s. On one hand, each operation can decrease $c$ by at most $1$, so the answer is at least $c$. On the other hand, by only operating on $1$s each time, it takes exactly $c$ operations, so the answer is at most $c$. Therefore, the answer is $c$.
[ "constructive algorithms", "greedy", "math", "strings" ]
800
for _ in range(int(input())):
    print(input().count('1'))
2062
B
Clockwork
You have a sequence of $n$ time clocks arranged in a line, where the initial time on the $i$-th clock is $a_i$. In each second, the following happens in order: - Each clock's time decreases by $1$. If any clock's time reaches $0$, you lose immediately. - You can choose to move to an adjacent clock or stay at the clock you are currently on. - You can reset the time of the clock you are on back to its initial value $a_i$. Note that the above events happen in order. If the time of a clock reaches $0$ in a certain second, even if you can move to this clock and reset its time during that second, you will still lose. You can start from any clock. Determine if it is possible to continue this process indefinitely without losing.
On one hand, traveling from clock $i$ to clock $1$ and back takes time $2(i-1)$, and traveling from $i$ to $n$ and back takes time $2(n-i)$. Therefore, it must hold that $a_i > 2\max(i-1, n-i)$ for every $i$. On the other hand, if this condition is satisfied, one can simply move back and forth between clocks $1$ and $n$.
[ "greedy", "math" ]
900
for _ in range(int(input())):
    n = int(input())
    a = list(map(int, input().split()))
    for i in range(n):
        if a[i] <= max(i, n - i - 1) * 2:
            print('NO')
            break
    else:
        print('YES')
2062
C
Cirno and Operations
Cirno has a sequence $a$ of length $n$. She can perform either of the following two operations for any (possibly, zero) times \textbf{unless} the current length of $a$ is $1$: - Reverse the sequence. Formally, $[a_1,a_2,\ldots,a_n]$ becomes $[a_n,a_{n-1},\ldots,a_1]$ after the operation. - Replace the sequence with its difference sequence. Formally, $[a_1,a_2,\ldots,a_n]$ becomes $[a_2-a_1,a_3-a_2,\ldots,a_n-a_{n-1}]$ after the operation. Find the maximum possible sum of elements of $a$ after all operations.
Let the reversal be called operation $1$, and the difference be called operation $2$. Consider swapping two adjacent operations: $12 \to 21$. If the sequence before the operations is $[a_1, a_2, \dots, a_n]$, then after the operations, the sequence changes from $[a_{n-1} - a_n, a_{n-2} - a_{n-1}, \dots, a_1 - a_2]$ to $[a_n - a_{n-1}, a_{n-1} - a_{n-2}, \dots, a_2 - a_1]$. Thus, swapping an adjacent pair $1,2$ is equivalent to negating each element of the array. Therefore, any operation sequence is equivalent to first performing operation $2$ several times, then performing operation $1$ several times, and then negating several times. Since operation $1$ does not change the sum of the sequence, the answer is the maximum absolute value of the sequence sum after performing operation $2$ some number of times. There is a corner case: if you do not perform operation $2$ at all, you cannot negate. Besides, the upper bound of the answer is $1000\times 2^{50}$, so you have to use 64-bit integers.
[ "brute force", "math" ]
1,200
for _ in range(int(input())):
    n = int(input())
    a = list(map(int, input().split()))
    ans = sum(a)
    while n > 1:
        n -= 1
        a = [a[i + 1] - a[i] for i in range(n)]
        ans = max(ans, abs(sum(a)))
    print(ans)
2062
D
Balanced Tree
You are given a tree$^{\text{∗}}$ with $n$ nodes and values $l_i, r_i$ for each node. You can choose an initial value $a_i$ satisfying $l_i\le a_i\le r_i$ for the $i$-th node. A tree is balanced if all node values are equal, and the value of a balanced tree is defined as the value of any node. In one operation, you can choose two nodes $u$ and $v$, and increase the values of all nodes in the subtree$^{\text{†}}$ of node $v$ by $1$ while considering $u$ as the root of the entire tree. Note that $u$ may be equal to $v$. Your goal is to perform a series of operations so that the tree becomes \textbf{balanced}. Find the minimum possible value of the tree after performing these operations. Note that you \textbf{don't} need to minimize the number of operations. \begin{footnotesize} $^{\text{∗}}$A tree is a connected graph without cycles. $^{\text{†}}$Node $w$ is considered in the subtree of node $v$ if any path from root $u$ to $w$ must go through $v$. \end{footnotesize}
First, assume that the values of $a_i$ have already been determined and $1$ is the root of the tree. For any edge connecting $x$ and $\text{fa}(x)$, to make $a_x=a_{\text{fa}(x)}$, the only useful operations are $u=x,v=\text{fa}(x)$ and $v=x,u=\text{fa}(x)$. Each such operation changes $a_x-a_{\text{fa}(x)}$ by $1$, so you will always perform exactly $|a_x - a_{\text{fa}(x)}|$ operations to make $a_x=a_{\text{fa}(x)}$. You will perform the above operations for each edge to make all nodes' values equal. Among those operations, the connected component containing $1$ is increased $\sum \max(a_u - a_{\text{fa}(u)}, 0)$ times. Thus, the final answer is $a_1 + \sum \max(a_u - a_{\text{fa}(u)}, 0)$. Now, consider how to choose the values of $a$. Ignoring the upper bound $r_u$ for a moment, $a_u$ should always be no less than the maximum value $a_v$ over its children $v$ (why? Because decreasing $a_u$ by $1$ can only reduce $a_u - a_{\text{fa}(u)}$ by at most $1$, but it will increase $a_v - a_u$ by at least $1$). Taking the bounds into account, $a_u=\min(r_u,\max(l_u,\max\limits_{\text{fa}(v)=u}a_v))$.
[ "dfs and similar", "dp", "graphs", "greedy", "trees" ]
2,200
for _ in range(int(input())):
    n = int(input())
    l, r = [], []
    for i in range(n):
        L, R = map(int, input().split())
        l.append(L)
        r.append(R)
    e = [[] for i in range(n)]
    for i in range(n - 1):
        u, v = map(int, input().split())
        e[u - 1].append(v - 1)
        e[v - 1].append(u - 1)
    stk = [0]
    f = [-1] * n
    nfd = []
    while stk:
        u = stk.pop()
        nfd.append(u)
        for v in e[u]:
            if v != f[u]:
                f[v] = u
                stk.append(v)
    nfd.reverse()
    ans = 0
    for u in nfd:
        for v in e[u]:
            if f[v] == u:
                l[u] = max(l[u], l[v])
        l[u] = min(l[u], r[u])
        for v in e[u]:
            if f[v] == u:
                ans += max(0, l[v] - l[u])
    print(ans + l[0])
2062
E1
The Game (Easy Version)
\textbf{This is the easy version of the problem. The difference between the versions is that in this version, you only need to find one of the possible nodes Cirno may choose. You can hack only if you solved all versions of this problem.} Cirno and Daiyousei are playing a game with a tree$^{\text{∗}}$ of $n$ nodes, rooted at node $1$. The value of the $i$-th node is $w_i$. They take turns to play the game; Cirno goes first. In each turn, assuming the opponent chooses $j$ in the last turn, the player can choose any remaining node $i$ satisfying $w_i>w_j$ and delete the subtree$^{\text{†}}$ of node $i$. In particular, Cirno can choose any node and delete its subtree in the first turn. The first player who \textbf{can not} operate \textbf{wins}, and they all hope to win. Find \textbf{one of the possible nodes Cirno may choose} so that she wins if both of them play optimally. \begin{footnotesize} $^{\text{∗}}$A tree is a connected graph without cycles. $^{\text{†}}$Node $u$ is considered in the subtree of node $i$ if any path from $1$ to $u$ must go through $i$. \end{footnotesize}
In the first round, the node $u$ chosen by Cirno must satisfy: there exists a node $v$ outside the subtree of $u$ such that $w_v > w_u$. If no such node exists, then Cirno will lose. Among these nodes, Cirno can choose the node with the largest $w_u$. After this, Daiyousei can only select nodes that do not satisfy the above condition, and subsequently, Cirno will inevitably have no nodes left to operate on. To determine whether there exists a node $v$ outside the subtree of $u$ such that $w_v > w_u$, we record the timestamp $dfn_u$ when each node is first visited during the DFS traversal of the tree, as well as the timestamp $low_u$ when the traversal of its subtree is completed. Since the timestamps outside the subtree form two intervals, namely $[1, dfn_u)$ and $(low_u, n]$, we need to check whether $\max(\max\limits_{dfn_v\in [1,dfn_u)}w_v,\max\limits_{dfn_v\in (low_u,n]}w_v)>w_u$ holds. This can be achieved by maintaining the maximum values in the prefix and suffix. Additionally, we can enumerate the nodes in descending order of $w_i$ and maintain their lowest common ancestor (LCA). If the LCA of all $v$ satisfying $w_v > w_u$ is not a descendant of $u$, then there must exist a node $v$ outside the subtree of $u$ that satisfies $w_v > w_u$. We can also enumerate the nodes in descending order of $w_i$ and maintain their minimum and maximum $dfn_i$, namely $mn,mx$. According to the conclusion above, we can check whether $mn<dfn_u$ or $mx>low_u$ holds. If either condition holds, then there must exist a node $v$ outside the subtree of $u$ that satisfies $w_v > w_u$.
[ "data structures", "dfs and similar", "games", "graphs", "greedy", "trees" ]
2,000
for _ in range(int(input())):
    n = int(input())
    w = [int(x) for x in input().split()]
    wb = [[] for i in range(n)]
    for i in range(n):
        w[i] -= 1
        wb[w[i]].append(i)
    e = [[] for i in range(n)]
    for i in range(n - 1):
        u, v = map(int, input().split())
        e[u - 1].append(v - 1)
        e[v - 1].append(u - 1)
    stk = [0]
    f, dfn = [-1] * n, [0] * n
    nfd = []
    while stk:
        u = stk.pop()
        dfn[u] = len(nfd)
        nfd.append(u)
        for v in e[u]:
            if v != f[u]:
                f[v] = u
                stk.append(v)
    sz = [1] * n
    nfd.reverse()
    for u in nfd:
        if u == 0:
            continue
        sz[f[u]] += sz[u]
    wb.reverse()
    mx = -1
    mn = n + 1
    for v in wb:
        for u in v:
            if mn < dfn[u] or mx > dfn[u] + sz[u] - 1:
                print(u + 1)
                break
        else:
            for u in v:
                mn = min(mn, dfn[u])
                mx = max(mx, dfn[u])
            continue
        break
    else:
        print(0)
2062
E2
The Game (Hard Version)
\textbf{This is the hard version of the problem. The difference between the versions is that in this version, you need to find all possible nodes Cirno may choose. You can hack only if you solved all versions of this problem.} Cirno and Daiyousei are playing a game with a tree$^{\text{∗}}$ of $n$ nodes, rooted at node $1$. The value of the $i$-th node is $w_i$. They take turns to play the game; Cirno goes first. In each turn, assuming the opponent chooses $j$ in the last turn, the player can choose any remaining node $i$ satisfying $w_i>w_j$ and delete the subtree$^{\text{†}}$ of node $i$. In particular, Cirno can choose any node and delete its subtree in the first turn. The first player who \textbf{can not} operate \textbf{wins}, and they all hope to win. Find \textbf{all possible nodes Cirno may choose in the first turn} so that she wins if both of them play optimally. \begin{footnotesize} $^{\text{∗}}$A tree is a connected graph without cycles. $^{\text{†}}$Node $u$ is considered in the subtree of node $i$ if any path from $1$ to $u$ must go through $i$. \end{footnotesize}
In the first round, the node $u$ chosen by Cirno must satisfy: there exists a node $v$ outside the subtree of $u$ such that $w_v > w_u$. If no such node exists, then Cirno will lose. Similar to Cirno's strategy in E1, if Daiyousei can choose a node such that Cirno can still make a move after the operation, then Daiyousei only needs to select the node with the largest $w_u$ among these nodes to win. Therefore, Cirno must ensure that she cannot make a move after Daiyousei's operation. Enumerate the node $u$ chosen by Daiyousei; then the node $v$ chosen by Cirno must satisfy one of the following: - For all nodes $x$ with $w_x > w_u$, $x$ is within the subtree of $u$ or $v$. In other words, $v$ must be an ancestor of the lca of all $x$ outside the subtree of $u$ that satisfy $w_x > w_u$ (denote this lca by $g(u)$). - $v$ prevents Daiyousei from choosing this $u$ at all: either $w_v \ge w_u$, or $u$ is within the subtree of $v$. Therefore, $v$ must satisfy that for all $u$ with $w_u > w_v$, either $u$ is within the subtree of $v$, or $g(u)$ is within the subtree of $v$; this is a necessary and sufficient condition. To compute $g(u)$: enumerate $u$ from largest to smallest $w_u$, which is equivalent to computing the lca of all enumerated nodes outside the subtree. This can be achieved in several ways: 1. Use a segment tree or Fenwick tree to maintain the lca of intervals on the dfs order, where the outside of the subtree corresponds to a prefix and a suffix of the dfs order. Depending on the lca implementation, the complexity is $O(n\log^2n)$ or $O(n\log n)$. 2. Since the lca of a set of nodes is the lca of the node with the smallest dfs order and the node with the largest dfs order, std::set can be used to maintain these nodes. To determine the condition that $v$ must satisfy, also enumerate from largest to smallest.
For a node $u$, if $g(u)$ exists, then add one to the chain from $u$ to $1$, add one to the chain from $g(u)$ to $1$, and subtract one from the chain from $lca(u,g(u))$ to $1$. This way, exactly the nodes that satisfy the condition for this $u$ are incremented by one. It is then only necessary to check whether the current value at $v$ equals the number of enumerated nodes $u$. Depending on the implementation, the complexity is $O(n\log^2n)$ or $O(n\log n)$. Additionally, there exists a method that does not require computing the lca.
[ "data structures", "dfs and similar", "games", "graphs", "implementation", "trees" ]
3,000
#include "bits/stdc++.h"
using namespace std;
#define all(x) (x).begin(),(x).end()
const int N = 5e5 + 2;
vector<int> e[N];
int dfn[N], nfd[N], low[N];
int id, n;
void dfs(int u) {
    nfd[dfn[u] = ++id] = u;
    for (int v : e[u]) erase(e[v], u), dfs(v);
    low[u] = id;
}
int main() {
    ios::sync_with_stdio(0);
    cin.tie(0);
    cout << fixed << setprecision(15);
    int T;
    cin >> T;
    while (T--) {
        int n, m, i, j;
        cin >> n;
        vector w(n, vector<int>());
        vector<int> c(n + 1), ans;
        for (i = 1; i <= n; i++) {
            e[i].clear();
            id = 0;
            cin >> j;
            w[j - 1].push_back(i);
        }
        for (i = 1; i < n; i++) {
            int u, v;
            cin >> u >> v;
            e[u].push_back(v);
            e[v].push_back(u);
        }
        dfs(1);
        int l = n + 1, r = 0;
        set<int> s;
        for (i = n - 1; i >= 0; i--) {
            if (s.size()) {
                for (int u : w[i]) {
                    int mx = 0;
                    for (j = dfn[u] - 1; j; j -= j & -j) mx = max(mx, c[j]);
                    if ((*s.begin() < dfn[u] || *s.rbegin() > low[u]) && mx <= low[u] && dfn[u] <= l && low[u] >= r)
                        ans.push_back(u);
                }
                for (int u : w[i]) {
                    int mn = *s.begin(), mx = *s.rbegin();
                    int L = dfn[u], R = low[u];
                    if (mn >= L && mx <= R) continue;
                    if (mn >= L && mn <= R) mn = *s.upper_bound(R);
                    if (mx >= L && mx <= R) mx = *prev(s.lower_bound(L));
                    auto fun = [&](int x, int y) {
                        if (x > y) swap(x, y);
                        l = min(l, y), r = max(r, x);
                        for (j = x; j <= n; j += j & -j) c[j] = max(c[j], y);
                    };
                    fun(mn, dfn[u]);
                    fun(mx, dfn[u]);
                }
            }
            for (int u : w[i]) s.insert(dfn[u]);
        }
        sort(all(ans));
        cout << ans.size();
        for (int x : ans) cout << ' ' << x;
        cout << '\n';
    }
}
2062
F
Traveling Salescat
You are a cat selling fun algorithm problems. Today, you want to recommend your fun algorithm problems to $k$ cities. There are a total of $n$ cities, each with two parameters $a_i$ and $b_i$. Between any two cities $i,j$ ($i\ne j$), there is a bidirectional road with a length of $\max(a_i + b_j , b_i + a_j)$. The cost of a path is defined as the total length of roads between every two adjacent cities along the path. For $k=2,3,\ldots,n$, find the minimum cost among all simple paths containing exactly $k$ \textbf{distinct} cities.
Let $x_i = \frac{a_i + b_i}{2}$, $y_i = \frac{a_i - b_i}{2}$. Then $\max(a_i + b_j , b_i + a_j) = \max( x_i + y_i + x_j - y_j , x_i - y_i + x_j + y_j) = x_i + x_j + \max(y_i - y_j, y_j - y_i) = x_i + x_j + |y_i - y_j|$. Consider a path that passes through $k$ different cities, and split its cost into two parts, the $x$-part and the $y$-part. For the $x$-part, the first and last cities contribute to the answer once, and all other cities on the path contribute twice. For the $y$-part, after determining the cities that need to be passed through and the two cities serving as starting or ending cities (referred to as endpoint cities), you can start from the endpoint city with the smaller $y$, move to the city with the smallest $y$ among the cities to be passed through, then move to the city with the largest $y$ among them, and finally move to the endpoint city with the larger $y$. This achieves the minimum $y$-contribution while passing through all the required cities. You can sort the cities by $y$ and perform a dp over the first $i$ cities where $j$ cities have been selected, of which $0$, $1$, or $2$ are endpoint cities, to find the minimum answer. The total time complexity is $O(n^2)$.
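A quick numeric sanity check of this substitution (scaled by $2$ so everything stays in integers):

```python
import random
random.seed(0)

# Verify max(a_i + b_j, b_i + a_j) = x_i + x_j + |y_i - y_j| with
# x = (a + b)/2, y = (a - b)/2, after multiplying both sides by 2.
for _ in range(1000):
    ai, bi, aj, bj = (random.randint(-100, 100) for _ in range(4))
    lhs = 2 * max(ai + bj, bi + aj)
    rhs = (ai + bi) + (aj + bj) + abs((ai - bi) - (aj - bj))
    assert lhs == rhs
```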
[ "constructive algorithms", "dp", "geometry", "graphs", "greedy", "math", "sortings" ]
2,900
class Q:
    def __init__(self, x, y):
        self.x, self.y = x + y, x - y
    def __lt__(self, rhs):
        return self.y < rhs.y

for _ in range(int(input())):
    n = int(input())
    a = [0] * n
    for i in range(n):
        x, y = map(int, input().split())
        a[i] = Q(x, y)
    a.sort()
    l = [10**18] * (n + 1)
    sl = l.copy()
    slt = l.copy()
    slrt = l.copy()
    for m, t in enumerate(a):
        x, y = t.x, t.y
        for i in range(m, 0, -1):
            slrt[i + 1] = min(slrt[i + 1], sl[i] + x + y, slt[i] + x * 2 + y * 2)
            slt[i + 1] = min(slt[i + 1], slt[i] + x * 2, sl[i] + x - y)
            sl[i + 1] = min(sl[i + 1], sl[i] + x * 2, l[i] + x + y)
            l[i + 1] = min(l[i + 1], l[i] + x * 2)
        sl[1] = min(sl[1], x - y)
        l[1] = min(l[1], x * 2 - y * 2)
    print(*[x // 2 for x in slrt[2:]])
2062
G
Permutation Factory
You are given two permutations $p_1,p_2,\ldots,p_n$ and $q_1,q_2,\ldots,q_n$ of length $n$. In one operation, you can select two integers $1\leq i,j\leq n,i\neq j$ and swap $p_i$ and $p_j$. The cost of the operation is $\min (|i-j|,|p_i-p_j|)$. Find the minimum cost to make $p_i = q_i$ hold for all $1\leq i\leq n$ and output a sequence of operations to achieve the minimum cost. A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
Because the cost of an operation is $\min(|i-j|,|p_i-p_j|)$, we can consider this operation as two operations with different costs, one costing $|i-j|$ and the other costing $|p_i-p_j|$. Consider each number in the permutation as a point $(i, p_i)$ on a two-dimensional plane. The first operation can be seen as moving $(i, p_i)$ to $(i, p_j)$ and simultaneously moving $(j, p_j)$ to $(j, p_i)$; the second operation can be seen as moving $(i, p_i)$ to $(j, p_i)$ and simultaneously moving $(j, p_j)$ to $(i, p_j)$. In both modes, the cost equals the coordinate distance between the two points. Suppose the restriction that the two points must move simultaneously is removed, and split the cost of one operation equally between the movements of the two points. Then the cost of moving a point from one coordinate to another is the Manhattan distance between these two points divided by $2$, and the minimum total cost is the minimum-cost matching that moves each point of $p$ onto a point of $q$. Next, we prove that the minimum answer obtained after removing the simultaneous-movement restriction can always be achieved under the original restrictions. First, find any minimum-cost matching between points of $p$ and points of $q$. Assume that point $(i,p_i)$ matches point $(a_i,q_{a_i})$, where $a$ is also a permutation. We first perform the moves parallel to the x-axis. If there exists $a_i \neq i$, find the $i$ with the smallest $a_i$ among them. Then there must exist $j$ with $a_i \le j < i \le a_j$. By performing such an operation on $i$ and $j$, these two points can be moved without increasing the total cost. Continuously performing this operation completes the movement of all points in the x-axis direction. The same applies to the movement in the y-axis direction.
Using minimum-cost maximum flow to find the minimum-cost matching and checking every pair $(i,j)$ to find a swappable pair results in an $O(n^4)$ solution. Using the Kuhn-Munkres algorithm to find the minimum-cost matching and checking only the $a_i$ mentioned above achieves a time complexity of $O(n^3)$, but this is not required.
[ "flows", "geometry", "graph matchings", "graphs" ]
3,500
#include<bits/stdc++.h> using namespace std; struct linex{ int v; int w; int nxt; int c; }; int head[210],cnt,cpos[210],vis[210],qvis[210],totc,dis[210],inf=1000000000; linex l[21000]; queue<int>que; void add(int u,int v,int w,int c){ l[cnt].nxt=head[u]; head[u]=cnt; l[cnt].v=v; l[cnt].w=w; l[cnt].c=c; cnt++; l[cnt].nxt=head[v]; head[v]=cnt; l[cnt].v=u; l[cnt].w=0; l[cnt].c=-c; cnt++; } int spfa(int st,int en,int n){ int i,j,t; for(i=0;i<n;i++) { vis[i]=0; dis[i]=inf; cpos[i]=head[i]; } que.push(st); dis[st]=0; vis[st]=1; qvis[st]=1; while(!que.empty()) { t=que.front(); que.pop(); qvis[t]=0; for(j=head[t];j!=-1;j=l[j].nxt) { if(l[j].w>0&&dis[l[j].v]>dis[t]+l[j].c) { dis[l[j].v]=dis[t]+l[j].c; vis[l[j].v]=1; if(!qvis[l[j].v]) { que.push(l[j].v); qvis[l[j].v]=1; } } } } return vis[en]; } long long dfs(int p,int en,int curr){ int x,j,flow=0; if(p==en)return curr; for(j=cpos[p];j!=-1&&flow<curr;j=l[j].nxt) { cpos[p]=j; qvis[p]=1; if(l[j].w>0&&dis[l[j].v]==dis[p]+l[j].c&&!qvis[l[j].v]) { x=dfs(l[j].v,en,min(curr-flow,l[j].w)); flow+=x; l[j].w-=x; l[j^1].w+=x; totc+=l[j].c*x; } qvis[p]=0; } return flow; } int p[100],q[100],id[100][100]; int cabs(int x){ if(x<0)x=-x; return x; } struct point{ int x; int y; int tx; int ty; }pc[100]; vector<int>ansv[2]; int main(){ ios::sync_with_stdio(false),cin.tie(0); srand(time(0)); int T,n,i,j,flow,flag; for(cin>>T;T>0;T--) { cin>>n; for(i=0;i<n;i++) { cin>>p[i]; p[i]--; } for(i=0;i<n;i++) { cin>>q[i]; q[i]--; } for(i=0;i<n*2+2;i++)head[i]=-1; cnt=0; totc=0; flow=0; for(i=0;i<n;i++) { add(n*2,i,1,0); add(i+n,n*2+1,1,0); for(j=0;j<n;j++) { id[i][j]=cnt; add(i,j+n,1,cabs(i-j)+cabs(p[i]-q[j])); } } while(spfa(n*2,n*2+1,n*2+2))flow+=dfs(n*2,n*2+1,inf); for(i=0;i<n;i++) { pc[i].x=i; pc[i].y=p[i]; for(j=0;j<n;j++) { if(l[id[i][j]].w==0) { pc[i].tx=j; pc[i].ty=q[j]; } } } while(1) { flag=0; for(i=0;i<n;i++) { for(j=0;j<n;j++) { if(pc[j].tx<=pc[i].x&&pc[i].x<pc[j].x&&pc[j].x<=pc[i].tx) { ansv[0].push_back(pc[i].x); ansv[1].push_back(pc[j].x); 
swap(pc[i].x,pc[j].x); flag=1; } } } if(!flag)break; } while(1) { flag=0; for(i=0;i<n;i++) { for(j=0;j<n;j++) { if(pc[j].ty<=pc[i].y&&pc[i].y<pc[j].y&&pc[j].y<=pc[i].ty) { ansv[0].push_back(pc[i].x); ansv[1].push_back(pc[j].x); swap(pc[i].y,pc[j].y); flag=1; } } } if(!flag)break; } cout<<ansv[0].size()<<'\n'; for(i=0;i<ansv[0].size();i++)cout<<ansv[0][i]+1<<' '<<ansv[1][i]+1<<'\n'; ansv[0].clear(); ansv[1].clear(); } return 0; }
2062
H
Galaxy Generator
In a two-dimensional universe, a star can be represented by a point $(x,y)$ on a two-dimensional plane. Two stars are directly connected if and only if their $x$ or $y$ coordinates are the same, and there are no other stars on the line segment between them. Define a galaxy as a connected component composed of stars connected directly or indirectly (through other stars). For a set of stars, its value is defined as the minimum number of galaxies that can be obtained after performing the following operation any number of times (possibly zero): in each operation, you can select a point $(x,y)$ that contains no star; if a star created there would be directly connected to at least $3$ stars, then you create a star there. You are given an $n\times n$ matrix $a$ consisting of $0$ and $1$ describing a set $S$ of stars. There is a star at $(x,y)$ if and only if $a_{x,y}=1$. Calculate the sum, modulo $10^9 + 7$, of the values of all non-empty subsets of $S$.
We will refer to the smallest rectangle containing a galaxy whose edges are parallel to the coordinate axes as the boundary rectangle of the galaxy. For two galaxies, if the projections of their boundary rectangles intersect on the x- or y-axis, we can merge the two galaxies in one operation. Proof: Assume that the projections of the two rectangles on the x-axis intersect. If their projections on the x-axis are $(l_1, r_1)$ and $(l_2, r_2)$ with $l_1<l_2$, then $l_2<r_1$. Since the stars in a galaxy must be connected, there must be two directly connected stars in the first galaxy whose x-coordinates lie on either side of $x=l_2$. Creating a star at the intersection of the line segment between these two stars and the line $x=l_2$ connects the two galaxies. The situation is symmetric for the y-axis. Consider repeatedly merging two galaxies that meet the above condition until the projections of the boundary rectangles of all galaxies are pairwise non-intersecting on both the x-axis and the y-axis. The set of boundary rectangles obtained this way is uniquely determined, because such rectangles can neither be merged further nor split into smaller rectangles that cannot be merged. Next, we need to calculate, for each two-dimensional interval, the number of star subsets that merge into a single galaxy through the above process such that the boundary rectangle of this galaxy is exactly that interval. The answer can be calculated by inclusion-exclusion: if the stars of a subset cannot form such a galaxy, then they form several smaller rectangles whose projection intervals are pairwise non-intersecting on the x-axis and the y-axis. We can add these smaller rectangles one by one from left to right in the x-axis direction, while maintaining the bitmask of the union of their projections in the y-axis direction to ensure that the rectangles do not intersect. 
To reduce the time complexity, the calculations can be performed simultaneously for all two-dimensional intervals. We can perform an interval dp in the x-axis direction, directly calculate the entire set in the y-axis direction, and use a high-dimensional prefix sum to obtain the answer for each interval. The time complexity of this dp appears to be $O(2^n \cdot n^5)$, because we need to enumerate the left boundary where the x-axis transition begins, the bitmask in the y-axis direction, and the four boundaries of the small rectangle being appended. However, since the interval in the y-axis direction must not intersect the bitmask, the enumeration actually only needs to be carried out $\sum_{i=1}^n (n-i+1) \cdot 2^{n-i} = O(2^n \cdot n)$ times. So the final time complexity is $O(2^n \cdot n^4)$.
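The counting claim at the end can be sanity-checked numerically: the sum has the closed form $(n-1)\cdot 2^n + 1$, which is indeed $O(2^n \cdot n)$. (The closed form is my own derivation, not stated in the editorial.)

```python
def enumeration_count(n):
    # number of (y-interval, bitmask) pairs actually enumerated:
    # a y-interval of length i leaves n - i free bits for the bitmask
    return sum((n - i + 1) * 2 ** (n - i) for i in range(1, n + 1))

# verify the closed form (n-1)*2^n + 1 for small n
for n in range(1, 15):
    assert enumeration_count(n) == (n - 1) * 2 ** n + 1
```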
[ "bitmasks", "combinatorics", "dp" ]
3,500
#include<bits/stdc++.h> using namespace std; const long long mod=1000000007; string s; long long dp[15][15][1<<14],vl[200],sgvl[15][15][15][15]; long long dp2[15][1<<14],dpvl2[15][1<<14]; int psum[15][15]; long long getvl(long long l,long long r,long long x,long long y){ if(l>=r||x>=y)return 0; return vl[psum[r][y]-psum[l][y]-psum[r][x]+psum[l][x]]; } long long f(long long l,long long r,long long x,long long y){ long long ans=0,i,c,tl,tr,tx,ty; for(i=0;i<16;i++) { c=0; tl=l; tr=r; tx=x; ty=y; if(i&1) { c^=1; tl++; } if(i&2) { c^=1; tr--; } if(i&4) { c^=1; tx++; } if(i&8) { c^=1; ty--; } if(c)ans=(ans+mod-getvl(tl,tr,tx,ty))%mod; else ans=(ans+getvl(tl,tr,tx,ty))%mod; } return ans; } int main(){ ios::sync_with_stdio(false),cin.tie(0); int T,n,i,j,t,p,l,r,len,cmsk; long long ans; for(cin>>T;T>0;T--) { cin>>n; vl[0]=0; for(i=1;i<=n*n;i++)vl[i]=(vl[i-1]*2+1)%mod; for(i=0;i<=n;i++) { for(j=0;j<=n;j++)psum[i][j]=0; } for(i=0;i<n;i++) { cin>>s; for(j=0;j<n;j++)psum[i+1][j+1]=s[j]-'0'; } for(i=0;i<n;i++) { for(j=0;j<=n;j++)psum[i+1][j]+=psum[i][j]; } for(i=0;i<=n;i++) { for(j=0;j<n;j++)psum[i][j+1]+=psum[i][j]; } for(l=0;l<=n;l++) { for(r=l+1;r<=n;r++) { for(i=0;i<=n;i++) { for(j=i+1;j<=n;j++)sgvl[l][r][i][j]=f(l,r,i,j); } } } for(i=0;i<=n;i++) { for(j=0;j<=n;j++) { for(t=0;t<(1<<n);t++)dp[i][j][t]=0; } } for(i=0;i<=n;i++)dp[i][i][0]=1; for(len=1;len<=n;len++) { for(l=0;l+len<=n;l++) { r=l+len; for(i=l+1;i<r;i++) { for(j=1;j<(1<<n);j++) { for(t=0;t<n;t++) { cmsk=j; for(p=t;p<n&&(j>>p&1^1);p++) { cmsk|=(1<<p); dp[l][r][cmsk]=(dp[l][r][cmsk]+dp[l][i][j]*sgvl[i][r][t][p+1])%mod; } } } } for(j=1;j<(1<<n);j++) { for(i=0;(j>>i&1^1);i++); for(t=n-1;(j>>t&1^1);t--); sgvl[l][r][i][t+1]=(sgvl[l][r][i][t+1]+mod-dp[l][r][j])%mod; } for(i=0;i<n;i++) { cmsk=0; for(t=i;t<n;t++) { cmsk|=(1<<t); dp[l][r][cmsk]=(dp[l][r][cmsk]+sgvl[l][r][i][t+1])%mod; } } if(len==1)continue; for(j=0;j<(1<<n);j++)dp[l][r][j]=(dp[l][r-1][j]+dp[l][r][j])%mod; } } for(i=0;i<=n;i++) { for(j=0;j<(1<<n);j++) { 
dp2[i][j]=0; dpvl2[i][j]=0; } } dp2[0][0]=1; for(l=0;l<n;l++) { for(j=0;j<(1<<n);j++) { dp2[l+1][j]=(dp2[l+1][j]+dp2[l][j])%mod; dpvl2[l+1][j]=(dpvl2[l+1][j]+dpvl2[l][j])%mod; } for(r=l+1;r<=n;r++) { for(j=0;j<(1<<n);j++) { for(i=0;i<n;i++) { cmsk=j; for(t=i;t<n&&(j>>t&1^1);t++) { cmsk|=(1<<t); dp2[r][cmsk]=(dp2[r][cmsk]+dp2[l][j]*sgvl[l][r][i][t+1])%mod; dpvl2[r][cmsk]=(dpvl2[r][cmsk]+(dp2[l][j]+dpvl2[l][j])*sgvl[l][r][i][t+1])%mod; } } } } } ans=0; for(j=1;j<(1<<n);j++)ans=(ans+dpvl2[n][j])%mod; cout<<ans<<'\n'; } return 0; }
2063
A
Minimal Coprime
\begin{center} {\small Today, Little John used all his savings to buy a segment. He wants to build a house on this segment.} \end{center} A segment of positive integers $[l,r]$ is called coprime if $l$ and $r$ are coprime$^{\text{∗}}$. A coprime segment $[l,r]$ is called minimal coprime if it does not contain$^{\text{†}}$ any coprime segment not equal to itself. To better understand this statement, you can refer to the notes. Given $[l,r]$, a segment of positive integers, find the number of minimal coprime segments contained in $[l,r]$. \begin{footnotesize} $^{\text{∗}}$Two integers $a$ and $b$ are coprime if they share only one positive common divisor. For example, the numbers $2$ and $4$ are not coprime because they are both divisible by $2$ and $1$, but the numbers $7$ and $9$ are coprime because their only positive common divisor is $1$. $^{\text{†}}$A segment $[l',r']$ is contained in the segment $[l,r]$ if and only if $l \le l' \le r' \le r$. \end{footnotesize}
It is easy to see that two integers $l$ and $l+1$ are always coprime, since $\gcd(l,l+1)=\gcd(l,1)=1$. Thus, every segment of the form $[l,l+1]$ is coprime, and every minimal coprime segment must have size at most $2$ (otherwise it would contain a coprime segment of size $2$). It can be observed that the only coprime segment of size $1$ is $[1,1]$, as $\gcd(a,a)=a$. The set of minimal coprime segments is therefore as follows: $[1,1]$ is minimal coprime; for all $l>1$, $[l,l+1]$ is minimal coprime; and no other segment is minimal coprime. The solution is thus as follows. If $x=y=1$, the segment contains $[1,1]$ and the answer is $1$. Otherwise $y>1$, and for each $l$ with $x \le l < y$ there is exactly one minimal coprime segment $[l,*]$ contained in $[x,y]$. In other words, the answer is $(y-1)-x+1=y-x$ in this case. The problem has been solved with time complexity $\mathcal{O}(1)$ per test case.
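The case analysis can be cross-checked against a brute force that tests minimality directly from the definition. This is a verification sketch of mine, not the intended $\mathcal{O}(1)$ solution:

```python
from math import gcd

def count_minimal_coprime(x, y):
    """Count minimal coprime segments [l, r] contained in [x, y], by definition."""
    def is_coprime(l, r):
        return gcd(l, r) == 1

    def is_minimal(l, r):
        # coprime, and containing no coprime segment other than itself
        return is_coprime(l, r) and not any(
            is_coprime(l2, r2)
            for l2 in range(l, r + 1) for r2 in range(l2, r + 1)
            if (l2, r2) != (l, r)
        )

    return sum(is_minimal(l, r)
               for l in range(x, y + 1) for r in range(l, y + 1))

# agrees with the closed form: 1 if x == y == 1, else y - x
for x in range(1, 12):
    for y in range(x, 12):
        assert count_minimal_coprime(x, y) == (1 if x == y == 1 else y - x)
```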
[ "math", "number theory" ]
800
for i in range(int(input())): x,y=map(int,input().split()) if x==y==1: print(1) else: print(y-x)
2063
B
Subsequence Update
\begin{center} {\small After Little John borrowed expansion screws from auntie a few hundred times, eventually she decided to come and take back the unused ones. But as they are a crucial part of home design, Little John decides to hide some in the most unreachable places — under the eco-friendly wood veneers.} \end{center} You are given an integer sequence $a_1, a_2, \ldots, a_n$, and a segment $[l,r]$ ($1 \le l \le r \le n$). You must perform the following operation on the sequence \textbf{exactly once}. - Choose any \textbf{subsequence}$^{\text{∗}}$ of the sequence $a$, and reverse it. Note that the subsequence does not have to be contiguous. Formally, choose any number of indices $i_1,i_2,\ldots,i_k$ such that $1 \le i_1 < i_2 < \ldots < i_k \le n$. Then, change the $i_x$-th element to the original value of the $i_{k-x+1}$-th element simultaneously for all $1 \le x \le k$. Find the \textbf{minimum value} of $a_l+a_{l+1}+\ldots+a_{r-1}+a_r$ after performing the operation. \begin{footnotesize} $^{\text{∗}}$A sequence $b$ is a subsequence of a sequence $a$ if $b$ can be obtained from $a$ by the deletion of several (possibly, zero or all) elements from arbitrary positions. \end{footnotesize}
To solve this problem, it is important to observe and prove the following claim: Claim: It is not beneficial to choose indices $i<l$ and $j>r$ at the same time. Notice that we only care about the values that end up on indices in $[l,r]$. If we choose $i_1,i_2,\ldots,i_k$ such that $i_1<l$ and $i_k>r$, then $i_1$ and $i_k$ are swapped with each other and do not change the values that end up on $[l,r]$. This means we can exchange the choice for a shorter sequence of indices $i_2,i_3,\ldots,i_{k-1}$, preserving the values ending up on $[l,r]$. If we repeat this exchange until it is no longer possible, the choice will satisfy one of the following: every index $i$ is in $[l,n]$, or every index $i$ is in $[1,r]$. We can solve the two cases separately. For either case, we can constructively show that we can get the minimum $r-l+1$ values of the corresponding range into $[l,r]$. The proof is as follows: WLOG assume we are solving for $[1,r]$, and let the indices of the $k=r-l+1$ minimum values be $j_1,j_2,\ldots,j_k$. If we select every index in $[l,r]$ that does not hold one of the minimum $k$ values, there will be $x$ of them; if we also select every index outside $[l,r]$ that holds one of the minimum $k$ values, there will likewise be $x$ of them. Thus, we end up with a subsequence of length $2x$ whose reversal brings the minimum $k$ values into the subsegment $[l,r]$. As a result, we only have to find the minimum $k$ values of the range, which can be done easily with sorting. Do this for both $[1,r]$ and $[l,n]$, and we get the answer. The problem has been solved with time complexity $\mathcal{O}(n \log n)$ per test case, due to sorting.
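The claim can be verified exhaustively for small arrays by trying every subsequence reversal against the two sorted candidates from the editorial. The brute force and the variable names are mine; indices are 0-based with an inclusive window:

```python
from itertools import combinations
import random

def brute(a, l, r):
    """Minimum window sum over all subsequence reversals (0-based, inclusive)."""
    best = sum(a[l:r + 1])            # reversing 0 or 1 indices is the identity
    for k in range(2, len(a) + 1):
        for idx in combinations(range(len(a)), k):
            b = a[:]
            vals = [a[i] for i in idx]
            for i, v in zip(idx, reversed(vals)):
                b[i] = v
            best = min(best, sum(b[l:r + 1]))
    return best

def fast(a, l, r):
    """Editorial solution: sort a suffix or a prefix, take the better window."""
    left = a[:l] + sorted(a[l:])                          # indices restricted to [l, n-1]
    right = sorted(a[:r + 1], reverse=True) + a[r + 1:]   # indices restricted to [0, r]
    return min(sum(left[l:r + 1]), sum(right[l:r + 1]))

random.seed(1)
for _ in range(50):
    n = random.randint(1, 6)
    a = [random.randint(0, 9) for _ in range(n)]
    l = random.randint(0, n - 1)
    r = random.randint(l, n - 1)
    assert brute(a, l, r) == fast(a, l, r)
```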
[ "constructive algorithms", "data structures", "greedy", "sortings" ]
1,100
import sys input=lambda:sys.stdin.readline().rstrip() for _ in range(int(input())): n,l,r=map(int,input().split());l-=1 arr=[*map(int,input().split())] brr=arr[:l]+sorted(arr[l:]) crr=sorted(arr[:r])[::-1]+arr[r:] print(min(sum(brr[l:r]),sum(crr[l:r])))
2063
C
Remove Exactly Two
\begin{center} {\small Recently, Little John got a tree from his aunt to decorate his house. But as it seems, just one tree is not enough to decorate the entire house. Little John has an idea. Maybe he can remove a few vertices from the tree. That will turn it into more trees! Right?} \end{center} You are given a tree$^{\text{∗}}$ of $n$ vertices. You must perform the following operation \textbf{exactly twice}. - Select a vertex $v$; - Remove all edges incident to $v$, and also the vertex $v$. Please find the maximum number of connected components after performing the operation \textbf{exactly twice}. Two vertices $x$ and $y$ are in the same connected component if and only if there exists a path from $x$ to $y$. For clarity, note that the graph with $0$ vertices has $0$ connected components by definition.$^{\text{†}}$ \begin{footnotesize} $^{\text{∗}}$A tree is a connected graph without cycles. $^{\text{†}}$But is such a graph connected? \end{footnotesize}
The main observation behind this task is that the number of connected components increases by $\text{deg}-1$, where $\text{deg}$ is the degree of the removed vertex at the time of removal. Thus, the number of connected components after removing two vertices $i$ and $j$ is: $d_i+d_j-1$ if $i$ and $j$ are not adjacent; $d_i+d_j-2$ if $i$ and $j$ are adjacent, because removing $i$ will decrease $\text{deg}$ of $j$ by $1$. The approaches to maximizing this value all build on this observation; there are multiple, of which the editorial introduces two. First Approach: Bruteforcing the First Vertex You can maintain a sorted version of the degree sequence, using a std::multiset, a heap, or possibly even simply a sorted sequence. After you fix the first vertex $i$, you can find the maximum degree after decreasing the degrees of all adjacent vertices by $1$. std::multiset can deal with this directly by removing and inserting values, while a heap or a sorted sequence can deal with this by popping elements from the end while the two maximums have the same value. Then, the second vertex is easily the one with maximum degree. Maintaining the degrees takes $\mathcal{O}(d \log n)$ time. As $\sum d = 2m = 2(n-1) = \mathcal{O}(n)$, the total time complexity of this approach is $\mathcal{O}(n \log n)$. Second Approach: Bruteforcing the Second Vertex If we greedily select the first vertex by degree, we can notice that selecting the first vertex as one with maximum initial degree will find at least one optimal solution. Assume that choosing a first vertex with maximum initial degree yields a maximum value $d_i+d_j-2$. Then, consider any solution with both vertices' initial degrees strictly less than $d_i$. This second solution's maximum possible value $d_{i'}+d_{j'}-1$ will never exceed $d_i+d_j-2$, because $d_{i'}<d_i$ and $d_{j'} \le d_j$. Thus, at least one optimal answer must have the first vertex as one with maximum initial degree. 
But we do not know what to do when there are multiple first vertices with maximum initial degree. Luckily, trying only two first vertices with maximum initial degree will always find one optimal answer. This can be proven by contradiction. Suppose the two first vertices are $u$ and $v$, and the second vertex chosen in the brute force is $w$. At least one pair among $(u,v)$, $(u,w)$, $(v,w)$ will not be adjacent, because otherwise there would exist a $3$-cycle, which cannot exist in a tree. Thus, if the optimal solution is in the form $d_u+d_w-1$, trying two first vertices will always find it. Therefore, we can greedily try two vertices for the first vertex and try all other vertices as the second vertex, finding at least one optimal answer in the process. The time complexity of this approach is $\mathcal{O}(n \log n)$ or $\mathcal{O}(n)$ depending on how adjacency is checked. An adjacency check in $\mathcal{O}(1)$ is possible by preprocessing parents using DFS, because "$u$ and $v$ are adjacent" is equivalent to "$par_u=v$ or $par_v=u$".
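The degree observation is easy to check by brute force on a small tree: delete every pair of vertices, count components with a DFS, and compare against $d_i+d_j-1$ (minus one more if the pair is adjacent). The example tree and names below are assumptions for illustration:

```python
from itertools import combinations

def components_after_removing(n, edges, removed):
    """Count connected components after deleting the vertices in `removed`."""
    alive = set(range(n)) - set(removed)
    adj = {v: [] for v in alive}
    for u, v in edges:
        if u in alive and v in alive:
            adj[u].append(v)
            adj[v].append(u)
    seen, comps = set(), 0
    for s in alive:
        if s in seen:
            continue
        comps += 1
        stack = [s]
        seen.add(s)
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
    return comps

# example tree: star center 0 with leaves 1..3, plus a path 3-4-5
edges = [(0, 1), (0, 2), (0, 3), (3, 4), (4, 5)]
n = 6
deg = [0] * n
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
adj_set = {frozenset(e) for e in edges}
for i, j in combinations(range(n), 2):
    expected = deg[i] + deg[j] - 1 - (frozenset((i, j)) in adj_set)
    assert components_after_removing(n, edges, (i, j)) == expected
```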
[ "brute force", "data structures", "dfs and similar", "dp", "graphs", "greedy", "sortings", "trees" ]
1,600
import sys input=lambda:sys.stdin.readline().rstrip() for i in range(int(input())): n=int(input()) deg=[0]*n adj=[[] for i in range(n)] for i in range(n-1): u,v=map(int,input().split()) u-=1;v-=1 deg[u]+=1 deg[v]+=1 adj[u].append(v) adj[v].append(u) ans=1 mans=0 sdeg=sorted(deg) for i in range(n): ans=deg[i] ideg=[] for v in adj[i]: ideg.append(deg[v]) ideg.append(deg[i]) ideg.sort(reverse=True) rem=[] mx=-1 for d in ideg: if sdeg[-1]==d: sdeg.pop() rem.append(d) rem.reverse() if sdeg: mx=max(mx,sdeg[-1]) for v in adj[i]: mx=max(mx,deg[v]-1) for d in rem: sdeg.append(d) mans=max(ans+mx-1,mans) print(mans)
2063
D
Game With Triangles
\begin{center} {\small Even Little John needs money to buy a house. But he recently lost his job; how will he earn money now? Of course, by playing a game that gives him money as a reward! Oh well, maybe not those kinds of games you are thinking about.} \end{center} There are $n+m$ distinct points $(a_1,0), (a_2,0), \ldots, (a_{n},0), (b_1,2), (b_2,2), \ldots, (b_{m},2)$ on the plane. Initially, your score is $0$. To increase your score, you can perform the following operation: - Choose three distinct points which are not collinear; - Increase your score by the area of the triangle formed by these three points; - Then, erase the three points from the plane. \begin{center} {\small An instance of the game, where the operation is performed twice.} \end{center} Let $k_{\max}$ be the maximum number of operations that can be performed. For example, if it is impossible to perform any operation, $k_\max$ is $0$. Additionally, define $f(k)$ as the maximum possible score achievable by performing the operation \textbf{exactly $k$ times}. Here, $f(k)$ is defined for all integers $k$ such that $0 \le k \le k_{\max}$. Find the value of $k_{\max}$, and find the values of $f(x)$ for all integers $x=1,2,\ldots,k_{\max}$ independently.
Whenever the operation is performed, your score increases by $a_j-a_i$ or $b_j-b_i$, where $i$, $j$ are the indices of the two points you choose on the same line (WLOG $a_i<a_j$ or $b_i<b_j$, but this assumption is not necessary). For simplicity, we will call an operation where you choose two points on $y=0$ "Operation A", and one where you choose two points on $y=2$ "Operation B". Let us define a function $g(p,q)$ as the maximum score after performing "Operation A" $p$ times and "Operation B" $q$ times, assuming it is possible. Then, it is not hard to prove that $g(p,q)$ equals the following value: $g(p,q)=\sum_{i=1}^p {(A_{n+1-i}-A_i)}+\sum_{i=1}^q {(B_{m+1-i}-B_i)}$ Here, $A$ and $B$ are sorted versions of $a$ and $b$. The proof is left as practice for the reader; if you want a rigorous proof, an approach using the rearrangement inequality might help you. Then, assuming the value $x$ is always chosen so that the operations can be performed, the value of $f(k)$ is as follows. $f(k)= \max_x{g(x,k-x)}$ Now we have two questions to ask ourselves: For what values of $x$ is it impossible to perform the operations? How do we maximize this value? The first is the easier one. We return to the definition of $g(p,q)$ and extract inequalities from it. Exactly four inequalities can be found, as follows: $2p+q \le n$, because otherwise we will use more than $n$ points on $y=0$; $p+2q \le m$, because otherwise we will use more than $m$ points on $y=2$; $p \ge 0$, trivially; $q \ge 0$, also trivially. The feasible region found by the inequalities. Assigning $x$ and $k-x$ to $p$ and $q$ in each inequality, we get the following four inequalities: $2x+k-x = x+k \le n \longleftrightarrow x \le n-k$; $x+2(k-x) = 2k-x \le m \longleftrightarrow x \ge 2k-m$; $x \ge 0$; $k-x \ge 0 \longleftrightarrow x \le k$. Compiling all four inequalities into one, we get the following. 
$\max(0,2k-m) \le x \le \min(k,n-k)$ So now we can easily judge whether the operations can be performed for some value $x$. Also, it is easy to see that when the left-hand bound exceeds the right-hand bound, it is impossible to perform $k$ operations; from this we can derive $k_\max$. Though it is easy to find a closed form for $k_\max$, it is not required for this problem. Now for the next question. Naively computing the values for every $x$ in the range for all $k$ takes $\mathcal{O}(nm)$ time. How do we do it faster? Again, go back to the definition of $g(p,q)$. Observe that the term dependent on $p$ is a prefix sum of a strictly decreasing sequence, and thus is concave. Likewise, the term dependent on $q$ is also concave. So, given that $g(x,k-x)$ is a sum of two concave functions of $x$, $g(x,k-x)$ is itself a concave function of $x$. Thus, as we already know the range of $x$, we can perform a ternary search on the range. Note that we are doing ternary search on integers and not on real values, so you might need to take extra care of off-by-one errors. There are other ways to solve this task (also relying heavily on the concavity), like doing binary search instead of ternary search, or running two pointers for $p$ and $q$. Anyway, the time complexity is $\mathcal{O}(n \log n + m \log m)$, bounded below by the time complexity of sorting.
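The concavity argument can be tested by comparing an integer ternary search over $x$ against a full scan of the feasible range. This is a self-contained sketch with my own function names and random data, not the contest solution:

```python
import random

def f_values(a, b):
    """f(k) for k = 1..k_max, by full scan (reference) and by integer ternary search."""
    a, b = sorted(a), sorted(b)
    n, m = len(a), len(b)
    asum = [0] * (n + 1)
    for i in range(1, n + 1):
        asum[i] = asum[i - 1] + (a[n - i] - a[i - 1])
    bsum = [0] * (m + 1)
    for i in range(1, m + 1):
        bsum[i] = bsum[i - 1] + (b[m - i] - b[i - 1])
    scan, ternary = [], []
    k = 1
    while max(0, 2 * k - m) <= min(k, n - k):
        lo, hi = max(0, 2 * k - m), min(k, n - k)
        g = lambda x: asum[x] + bsum[k - x]
        scan.append(max(g(x) for x in range(lo, hi + 1)))
        L, R = lo, hi
        while R - L > 2:                 # shrink around the maximum of the concave g
            m1, m2 = L + (R - L) // 3, R - (R - L) // 3
            if g(m1) < g(m2):
                L = m1 + 1
            else:
                R = m2
        ternary.append(max(g(x) for x in range(L, R + 1)))
        k += 1
    return scan, ternary

random.seed(7)
a = random.sample(range(-1000, 1000), 8)   # distinct x-coordinates on y = 0
b = random.sample(range(-1000, 1000), 5)   # distinct x-coordinates on y = 2
scan, ternary = f_values(a, b)
assert scan == ternary
```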
[ "binary search", "brute force", "data structures", "geometry", "greedy", "implementation", "math", "ternary search", "two pointers" ]
2,000
#include<bits/stdc++.h> using namespace std; using ll=long long; int main() { cin.tie(0)->sync_with_stdio(0); int t;cin>>t; while(t--) { int n,m;cin>>n>>m; vector<ll>arr(n),brr(m); for(ll&i:arr)cin>>i; for(ll&i:brr)cin>>i; sort(begin(arr),end(arr)); sort(begin(brr),end(brr)); vector<ll>asum(n+2),bsum(m+2); for(int i=1;i<=n;i++)asum[i]=asum[i-1]+(arr[n-i]-arr[i-1]); for(int i=1;i<=m;i++)bsum[i]=bsum[i-1]+(brr[m-i]-brr[i-1]); vector<ll>ans{0}; // maximize asum[ka]+bsum[kb] // s.t. ka+kb = x // ka*2+kb <= n -> ka*2+(x-ka) <= n -> ka+x <= n -> ka <= n-x // ka+kb*2 <= m -> ka+2*(x-ka) <= m -> 2*x-ka <= m -> ka >= 2*x-m // ka >= 0, x-ka >= 0 for(int x=1;2*x-m<=n-x;x++) { ll L=max(0,2*x-m),R=min(x,n-x); if(L>R)break; auto f=[&](int ka){return asum[ka]+bsum[x-ka];}; while(R-L>3) { ll mL=(L*2+R)/3,mR=(L+R*2)/3; if(f(mL)>f(mR))R=mR; else L=mL; } ll mans=0; for(int i=L;i<=R;i++) { mans=max(mans,f(i)); } ans.push_back(mans); } int kmax=(int)size(ans)-1; cout<<kmax<<"\n"; for(int i=1;i<=kmax;i++)cout<<ans[i]<<" \n"[i==kmax]; } }
2063
E
Triangle Tree
\begin{center} {\small One day, a giant tree grew in the countryside. Little John, with his childhood eagle, decided to make it his home. Little John will build a structure on the tree with galvanized square steel. However, little did he know, he could not build what is physically impossible.} \end{center} You are given a rooted tree$^{\text{∗}}$ containing $n$ vertices rooted at vertex $1$. A pair of vertices $(u,v)$ is called a good pair if $u$ is not an ancestor$^{\text{†}}$ of $v$ and $v$ is not an ancestor of $u$. For any two vertices, $\text{dist}(u,v)$ is defined as the number of edges on the unique simple path from $u$ to $v$, and $\text{lca}(u,v)$ is defined as their lowest common ancestor. A function $f(u,v)$ is defined as follows. - If $(u,v)$ is a good pair, $f(u,v)$ is the number of distinct integer values $x$ such that there exists a \textbf{non-degenerate triangle}$^{\text{‡}}$ formed by side lengths $\text{dist}(u,\text{lca}(u,v))$, $\text{dist}(v,\text{lca}(u,v))$, and $x$. - Otherwise, $f(u,v)$ is $0$. You need to find the following value: $$\sum_{i = 1}^{n-1} \sum_{j = i+1}^n f(i,j).$$ \begin{footnotesize} $^{\text{∗}}$A tree is a connected graph without cycles. A rooted tree is a tree where one vertex is special and called the root. $^{\text{†}}$An ancestor of vertex $v$ is any vertex on the simple path from $v$ to the root, including the root, but not including $v$. The root has no ancestors. $^{\text{‡}}$A triangle with side lengths $a$, $b$, $c$ is non-degenerate when $a+b > c$, $a+c > b$, $b+c > a$. \end{footnotesize}
In this editorial, we will denote the depth of vertex $v$ as $d_v$, and the subtree size of vertex $v$ as $s_v$. Both can be easily calculated using DFS. Let us first understand what $f(u,v)$ means. If the two given side lengths are $a$ and $b$ (WLOG $a \le b$), the third side length $L$ must satisfy $a+b>L$ and $a+L>b$. This results in the inequality $b-a<L<a+b$. There are exactly $2a-1$ values that satisfy this, from $b-a+1$ to $a+b-1$. This means that if $(u,v)$ is a good pair, the following holds. $f(u,v)=2 \cdot \min(\text{dist}(u,\text{lca}(u,v)),\text{dist}(v,\text{lca}(u,v)))-1$ And of course, if $(u,v)$ is not a good pair, $f(u,v)=0$ by definition. Naively summing this up for all pairs takes at least $\Theta(n^2)$ time. We will simplify the function $f(u,v)$ into a form that can be computed in batches easily, and provide a different summation that finds the same answer using a double-counting proof. The simplified version of $f(u,v)$ is as follows: $\begin{split} f(u,v) & = 2 \cdot \min(\text{dist}(u,\text{lca}(u,v)),\text{dist}(v,\text{lca}(u,v)))-1 \\ & = 2 \cdot \min(d_u-d_\text{lca},d_v-d_\text{lca})-1 \\ & = 2 \cdot \min(d_u,d_v)-2 d_\text{lca} -1 \end{split}$ So now we can sum up $2 \cdot \min(d_u,d_v)$ and $2d_\text{lca}+1$ separately. For the former, consider the condition for vertex $u$ to decide the value of $\min(d_u,d_v)$. For this, $d_v$ must be no less than $d_u$, and $v$ must not be a descendant of $u$ (because then $(u,v)$ would not be a good pair). Therefore, vertex $u$ decides $\min(d_u,d_v)$ for $(\#(d \ge d_u)-s_u)$ different vertices. $\#(d \ge d_u)$ can be computed using a suffix sum of the frequency values of $d$. Thus, we will sum up $2 d_u \cdot (\#(d \ge d_u)-s_u)$ over all vertices. This has only one small issue: it counts pairs $(u,v)$ where $d_u=d_v$ twice. 
But thankfully, all of these pairs are good pairs (otherwise they would be the same vertex), and we can simply subtract the sum of $2k \cdot \binom{\#(d = k)}{2}$ over all $k$ from the last value. For the latter, consider the condition for vertex $w$ to be the LCA of $u$ and $v$. $u$ and $v$ should be descendants of $w$ not equal to $w$, and they should not be in the same subtree under $w$ (because then there will be a lower common ancestor). Thus, given that the number of possible descendants is $S=s_w-1$, we can get the following value for one vertex $w$ in $\mathcal{O}(\#(\text{children}))$. Counting for all vertices thus takes $\mathcal{O}(n)$ time. $\sum_{par_u=w} {s_u(S-s_u)}$ This also has a very slight issue; it counts $(u,v)$ and $(v,u)$ separately, so we only have to divide this value by $2$ to get the number of good pairs with vertex $w$ as LCA. Multiplying $2d_w+1$ during the summation, we get the sum of the latter value. Thus, we have proved that the following sum yields the very same value that we need: $\sum_{u=1}^n{\left({2 d_u \cdot (\#(d \ge d_u)-s_u)}\right)}-\sum_{k=0}^{n-1}{\left({2k \cdot \binom{\#(d = k)}{2}}\right)}-\frac{1}{2}\sum_{w=1}^n {\left({(2d_w+1) \cdot \sum_{par_u=w}{s_u(s_w-1-s_u)}}\right)}$ Everything in this summation is in a form that can be computed in $\mathcal{O}(n)$ total time. Therefore, we have solved the problem with time complexity $\mathcal{O}(n)$.
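The first step, the count $2\min(a,b)-1$ of valid third side lengths, is easy to confirm by brute force (a standalone check of mine, independent of the tree machinery):

```python
def count_third_sides(a, b):
    """Number of integers x forming a non-degenerate triangle with sides a and b."""
    return sum(1 for x in range(1, a + b)      # a + b > x bounds the search range
               if a + x > b and b + x > a)

# matches 2 * min(a, b) - 1 for all small side lengths
for a in range(1, 30):
    for b in range(1, 30):
        assert count_third_sides(a, b) == 2 * min(a, b) - 1
```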
[ "data structures", "dfs and similar", "dp", "greedy", "trees" ]
2,300
#include<bits/stdc++.h> using namespace std; using ll=long long; int main() { cin.tie(0)->sync_with_stdio(0); int t;cin>>t; while(t--) { ll n;cin>>n; vector<ll>d(n,0),s(n),dc(n),dcs; vector<vector<ll>>adj(n); for(int i=1;i<n;i++) { int u,v;cin>>u>>v; u--;v--; adj[u].push_back(v); adj[v].push_back(u); } auto dfs1=[&](auto dfs1,int v,int p=-1)->void { dc[d[v]]++; ll sz=1; for(int w:adj[v])if(w!=p) { d[w]=d[v]+1; dfs1(dfs1,w,v); sz+=s[w]; } s[v]=sz; }; dfs1(dfs1,0); dcs=dc; for(int i=n-2;i>=0;i--)dcs[i]+=dcs[i+1]; ll ans=0,ans2=0; auto dfs2=[&](auto dfs2,int v,int p=-1)->void { // v is min ans+=2*d[v]*(dcs[d[v]]-s[v]); // v is lca ll subcnt=s[v]-1,lcnt=0; for(int w:adj[v])if(w!=p) { lcnt+=(subcnt-s[w])*s[w]; dfs2(dfs2,w,v); } ans2+=(2*d[v]+1)*(lcnt/2); }; dfs2(dfs2,0); for(int i=0;i<n;i++) { ans2+=i*dc[i]*(dc[i]-1); } cout<<ans-ans2<<"\n"; } }
2063
F1
Counting Is Not Fun (Easy Version)
\textbf{This is the easy version of the problem. The difference between the versions is that in this version, the limits on $t$ and $n$ are smaller. You can hack only if you solved all versions of this problem.} \begin{center} {\small Now Little John is rich, and so he finally buys a house big enough to fit himself and his favorite bracket sequence. But somehow, he ended up with a lot of brackets! Frustrated, he penetrates through the ceiling with the "buddha palm".} \end{center} A bracket sequence is called balanced if it can be constructed by the following formal grammar. - The empty sequence $\varnothing$ is balanced. - If the bracket sequence $A$ is balanced, then $\mathtt{(}A\mathtt{)}$ is also balanced. - If the bracket sequences $A$ and $B$ are balanced, then the concatenated sequence $A B$ is also balanced. For example, the sequences "(())()", "()", "(()(()))", and the empty sequence are balanced, while "(()" and "(()))(" are not. Given a balanced bracket sequence $s$, a pair of indices $(i,j)$ ($i<j$) is called a good pair if $s_i$ is '(', $s_j$ is ')', and the two brackets are added simultaneously with respect to Rule 2 while constructing the sequence $s$. For example, the sequence "(())()" has three different good pairs, which are $(1,4)$, $(2,3)$, and $(5,6)$. One can show that any balanced bracket sequence of $2n$ brackets contains exactly $n$ different good pairs, and using any order of rules to construct the same bracket sequence will yield the same set of good pairs. Emily will play a bracket guessing game with John. The game is played as follows. Initially, John has a balanced bracket sequence $s$ containing $n$ different good pairs, which is not known to Emily. John tells Emily the value of $n$ and asks Emily to guess the sequence. Throughout $n$ turns, John gives Emily the following kind of clue on each turn. - $l\;r$: The sequence $s$ contains a good pair $(l,r)$. 
The clues that John gives Emily are pairwise distinct and do not contradict each other. At a certain point, Emily can be certain that the balanced bracket sequence satisfying the clues given so far is unique. For example, assume Emily knows that $s$ has $3$ good pairs, and it contains the good pair $(2,5)$. Out of $5$ balanced bracket sequences with $3$ good pairs, there exists only one such sequence "((()))" with the good pair $(2,5)$. Therefore, one can see that Emily does not always need $n$ turns to guess $s$. To find out the content of $s$ as early as possible, Emily wants to know the number of different balanced bracket sequences that match the clues after each turn. Surely, this is not an easy job for Emily, especially when she is given so many good pairs. Now it is your turn to help Emily. Given the clues, you must find the answer before and after each turn. As the answers may be huge, you need to find them modulo $998\,244\,353$.
To solve the easy subtask, you can derive some helpful properties from the usual stack-based algorithm to determine whether a bracket sequence is balanced. Note the following well-known fact. Fact: The good pairs are exactly the pairs of brackets that are popped "together" in the stack-based algorithm. Considering this, we can observe these two facts. The subsequence inside the good pair must be balanced, or otherwise the two brackets would not be popped together. The subsequence outside the good pair must be balanced, or otherwise the entire string would not be balanced. Then, given $k$ good pairs, you will find $\mathcal{O}(k)$ "minimal subsequences" that you know are balanced, defined by this information. (In fact, it is more correct to define "minimal balanced subsequences" to include the good pairs themselves, but in the easy subtask we ignore this to get an easier explanation by sacrificing details.) In fact, these $\mathcal{O}(k)$ minimal balanced subsequences are just the subsequences found by the following modification to the stack-based algorithm: instead of pushing only brackets into the stack, push everything on the way into the stack. When you pop a closing bracket, pop everything until you find an opening bracket. Then, the $k$ popped subsequences, and the subsequence outside the outermost good pairs, are minimal balanced subsequences. It is known that there are $C_n$ different balanced bracket sequences of length $2n$, where $C_k$ is the $k$-th Catalan number. So for each minimal balanced subsequence of length $len$, you can multiply the answer by $C_{len/2}$. This product is the answer we need. Therefore, you can simply update the string and re-run the stack-based algorithm every time. Each run of the stack-based algorithm takes $\mathcal{O}(n)$ time, so the problem is solved in $\mathcal{O}(n^2)$ time complexity. 
For the easy subtask, the Catalan numbers can be preprocessed in $\mathcal{O}(n^2)$ time, so it is not necessary to use the closed-form combinatorial definition. However, if you compute Catalan numbers on the fly instead of preprocessing, or construct a tree explicitly, the time limit may be a little tight, as the solution will have an extra log factor. Do keep this in mind when you implement the solution.
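The $\mathcal{O}(n^2)$ Catalan preprocessing mentioned above can be sketched as follows. This is a minimal standalone version of the table the model solution builds; the function name `catalanTable` is illustrative only.

```cpp
#include <cassert>
#include <vector>

const long long MOD = 998244353;

// O(n^2) preprocessing of Catalan numbers modulo MOD via the convolution
// recurrence C_0 = 1, C_k = sum_{i=1}^{k} C_{i-1} * C_{k-i}.
std::vector<long long> catalanTable(int n) {
    std::vector<long long> c(n + 1, 0);
    c[0] = 1;
    for (int k = 1; k <= n; ++k)
        for (int i = 1; i <= k; ++i)
            c[k] = (c[k] + c[i - 1] * c[k - i]) % MOD;
    return c;
}
```

For example, `catalanTable(5)` yields $1, 1, 2, 5, 14, 42$, matching the `ctl` table in the solution below.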
[ "combinatorics", "data structures", "dfs and similar", "dp", "dsu", "graphs", "hashing", "implementation", "math", "trees" ]
2,400
#include<bits/stdc++.h> using namespace std; using ll=long long; const ll md=998244353; int main() { int t;cin>>t; vector<ll>ctl(5050); ctl[0]=1; for(int n=1;n<5050;n++) { for(int i=1;i<=n;i++) { ctl[n]=(ctl[n]+ctl[i-1]*ctl[n-i]%md)%md; } } while(t--) { int n;cin>>n; ll ans=ctl[n]; cout<<ans<<" "; string s(2*n+2,'.'); s[0]='(';s[2*n+1]=')'; for(int a=0;a<n;a++) { int i,j;cin>>i>>j; ans=1; s[i]='('; s[j]=')'; string stk; for(char c:s) { if(c==')') { int cnt=0; while(stk.back()!='(') { cnt++; stk.pop_back(); } stk.pop_back(); ans=(ans*ctl[cnt/2])%md; } else stk+=c; } cout<<ans<<" \n"[a+1==n]; } } }
2063
F2
Counting Is Not Fun (Hard Version)
\textbf{This is the hard version of the problem. The difference between the versions is that in this version, the limits on $t$ and $n$ are bigger. You can hack only if you solved all versions of this problem.} \begin{center} {\small Now Little John is rich, and so he finally buys a house big enough to fit himself and his favorite bracket sequence. But somehow, he ended up with a lot of brackets! Frustrated, he penetrates through the ceiling with the "buddha palm".} \end{center} A bracket sequence is called balanced if it can be constructed by the following formal grammar. - The empty sequence $\varnothing$ is balanced. - If the bracket sequence $A$ is balanced, then $\mathtt{(}A\mathtt{)}$ is also balanced. - If the bracket sequences $A$ and $B$ are balanced, then the concatenated sequence $A B$ is also balanced. For example, the sequences "(())()", "()", "(()(()))", and the empty sequence are balanced, while "(()" and "(()))(" are not. Given a balanced bracket sequence $s$, a pair of indices $(i,j)$ ($i<j$) is called a good pair if $s_i$ is '(', $s_j$ is ')', and the two brackets are added simultaneously with respect to Rule 2 while constructing the sequence $s$. For example, the sequence "(())()" has three different good pairs, which are $(1,4)$, $(2,3)$, and $(5,6)$. One can show that any balanced bracket sequence of $2n$ brackets contains exactly $n$ different good pairs, and using any order of rules to construct the same bracket sequence will yield the same set of good pairs. Emily will play a bracket guessing game with John. The game is played as follows. Initially, John has a balanced bracket sequence $s$ containing $n$ different good pairs, which is not known to Emily. John tells Emily the value of $n$ and asks Emily to guess the sequence. Throughout $n$ turns, John gives Emily the following kind of clue on each turn. - $l\;r$: The sequence $s$ contains a good pair $(l,r)$. 
The clues that John gives Emily are pairwise distinct and do not contradict each other. At a certain point, Emily can be certain that the balanced bracket sequence satisfying the clues given so far is unique. For example, assume Emily knows that $s$ has $3$ good pairs, and it contains the good pair $(2,5)$. Out of $5$ balanced bracket sequences with $3$ good pairs, there exists only one such sequence "((()))" with the good pair $(2,5)$. Therefore, one can see that Emily does not always need $n$ turns to guess $s$. To find out the content of $s$ as early as possible, Emily wants to know the number of different balanced bracket sequences that match the clues after each turn. Surely, this is not an easy job for Emily, especially when she is given so many good pairs. Now it is your turn to help Emily. Given the clues, you must find the answer before and after each turn. As the answers may be huge, you need to find them modulo $998\,244\,353$.
If you have not read the editorial for the easy subtask, I strongly suggest reading it first. It contains some important observations that carry over to the solution of the hard subtask. The easy subtask's solution relied on enumerating the "minimal balanced subsequences". (We will use this same term repeatedly in the hard subtask's editorial, so we abbreviate it to "MBS(es)" for convenience. Note that, as opposed to the easy version, the definition of MBSes in the hard version's editorial includes the good pairs themselves.) Now, the hard subtask makes the $\mathcal{O}(n^2)$ solution infeasible, so we cannot enumerate the MBSes; a more clever approach is required. An important observation for the hard subtask is as follows: Observation: When a new good pair is added, in fact, most MBSes do not change. If a good pair is added on an MBS $a$ which is enclosed by another MBS $b$, the MBS $b$ does not change, because the good pair does not give any additional information about $b$. (It only tells us that some subsequence enclosed by $b$ is balanced, which gives no additional information about $b$ itself.) If a good pair is added on an MBS $c$ which encloses another MBS $d$, the MBS $d$ does not change, because the good pair does not give any additional information about $d$. (It only tells us that some supersequence of $d$ is balanced, which gives no additional information about $d$ itself.) If a good pair is added on an MBS $e$ and there is another MBS $f$ not enclosing/enclosed by it, the MBS $f$ does not change either; the two MBSes are simply independent. Therefore, when the good pair is added on an MBS $S$, the only MBS that changes is $S$, and all other MBSes do not change. Let's observe what happens when a good pair is added on an MBS $S$. 
Precisely the following things happen according to the information that this good pair adds: As the subsequence inside the good pair must be balanced, the subsequence containing indices inside the good pair becomes a new MBS (assuming it is non-empty). As the subsequence outside the good pair must be balanced, the subsequence containing indices outside the good pair becomes a new MBS (assuming it is non-empty). The good pair itself becomes a new MBS, assuming the MBS $S$ was not itself a single good pair. Thus, the MBS splits into at most three new MBSes upon the addition of a good pair. For example, if the MBS $S$ was $[1,2,5,6,7,8,11,12,13,14]$, and the good pair is $(5,12)$, the MBS splits into the following three MBSes: $[6,7,8,11]$ inside the good pair; $[1,2,13,14]$ outside the good pair; $[5,12]$ denoting the good pair itself. Note that the two brackets making a good pair are always added on the same MBS, because otherwise the clues would contradict each other. If we can track the MBSes, split them efficiently, and count the number of indices in the resulting MBSes, we can do the following to update the answer: When an MBS $S$ is split into three MBSes $A$, $B$, and the good pair, divide the answer by $C_{|S|/2}$ (i.e. multiply by its modular inverse), and multiply the answer by $C_{|A|/2} \cdot C_{|B|/2}$. The Catalan number corresponding to the good pair is automatically $1$, so it does not matter. To deal with the required operations, the model solution uses a forest of splay trees. A forest of splay trees is just a splay tree which does not specify a single root, so there can be multiple roots in the structure. In practice, a node is considered a root simply if it does not have a parent. It is most convenient to implement it on a sequential data structure (e.g. vector) using indices rather than pointers in this task, because you will need to access a node immediately given its index. 
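The answer update on a split can be sketched as below. The name `splitUpdate` and its arguments are illustrative; the arguments are the precomputed Catalan numbers for the old MBS $S$ and the two new MBSes $A$, $B$, and the division is done via Fermat's little theorem since the modulus is prime.

```cpp
#include <cassert>

const long long MOD = 998244353;

// Fast modular exponentiation, used for the Fermat inverse.
long long power(long long b, long long e, long long m) {
    long long r = 1; b %= m;
    for (; e > 0; e >>= 1, b = b * b % m)
        if (e & 1) r = r * b % m;
    return r;
}

// Answer update when an MBS of size |S| splits into parts of size |A|, |B|:
// divide by C_{|S|/2} (multiply by its inverse), multiply by C_{|A|/2}*C_{|B|/2}.
long long splitUpdate(long long ans, long long catS, long long catA, long long catB) {
    ans = ans * power(catS, MOD - 2, MOD) % MOD;  // remove old factor C_{|S|/2}
    return ans * catA % MOD * catB % MOD;         // add the two new factors
}
```

The good pair itself contributes a factor of $C_1 = 1$, so it never has to be touched.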
It is possible to do the same using treaps, but some operations' implementation may be more convoluted if you use treaps. You can notice that detecting whether there exist subsequences inside or outside the good pair naturally translates to a series of simple operations on the splay tree, as follows: Detecting a subsequence inside: if there exists a subsequence inside the good pair $(l,r)$, you can splay the node $l$ and cut its right subtree. Now the right subtree will only contain values greater than $l$, so if you splay node $r$ on it, node $r$ will have no left subtree if and only if there was no subsequence inside $(l,r)$. If there exists a subsequence inside, then the left child of $r$ is one element of that subsequence. Detecting a subsequence outside: if there exists a subsequence outside the good pair $(l,r)$, the tree will have values either less than $l$ or greater than $r$. In other words, if you splay node $l$ and find a left subtree, or splay node $r$ and find a right subtree, there exists a subsequence outside. Connecting the left and right subsequences: the left subtree of $l$ after it is splayed, and the right subtree of $r$ after it is splayed, may both be non-empty. If that is the case, you can simply find the leftmost node in the right subtree of $r$, splay it, and then connect the left subtree of $l$ as its left child. (This is just a simple binary search tree operation.) Counting the subsequence sizes after the split: this is simple; we already found at least one node corresponding to each of the two subsequences. Splay the two nodes and read off the sizes of the trees. The figure shows the structure of the trees before and after splitting the subsequence; note that it might not be precisely correct in terms of splay operations. Now the only thing left in the solution is careful implementation. The solution has an amortized time complexity of $\mathcal{O}(n \log n)$, and runs comfortably in $1$ second. 
Though the model solution used a complex data structure to utilize its convenient characteristics, the author acknowledges that it is possible to replace some details or entire sections of the editorial with other data structures or techniques, such as segment trees or a smaller-to-larger trick. If you can prove that your solution works in $\mathcal{O}(n \log n)$ time complexity with a not-too-terrible constant, your solution is deemed correct. UPD: People pointed out that the intended solution in the tutorial is overkill. Yes, I acknowledge this. Please look into the newly added alternative solution if you want a more elegant idea. Consider maintaining the MBSes directly, but instead of using a complex data structure, we will use a small-to-large trick with linked lists. Precisely, every time we have to split a linked list, we will identify the smaller section and split it out in $\mathcal{O}(|small|)$ time; the amortized time complexity is then $\mathcal{O}(n \log n)$. In this solution, we do not consider bracket pairs as MBSes, for a specific reason. Given a linked list of indices and two nodes on it, we can identify the smaller section in $\mathcal{O}(|small|)$ time by iterating over both lists at the same time, interlacing the operations. The list which hits the end earlier is the smaller one, and at that point we immediately know the value of $|small|$. The issue is that you cannot identify the sizes of both lists in $\mathcal{O}(|small|)$ time. This is hard to fix using only a linked list. Instead, we will also maintain the implicit rooted tree structure of the bracket sequence. The MBSes correspond to vertices, and the bracket pairs correspond to edges. Initially, the tree consists of just one vertex of $2n$ brackets. Now, for each list node, we add a pointer to the corresponding tree vertex. 
Now we can immediately tell which tree vertex a list node corresponds to, and if we bookkeep size information in the tree vertices, we can also find the sum of the two list sizes in $\mathcal{O}(1)$. Therefore we can now know the sizes of both lists in $\mathcal{O}(|small|)$ time. The only issue is how to maintain the pointers to the tree vertices. But no problem: you can do small-to-large for this as well. After splitting the tree vertex into two vertices and one edge, redirect the smaller list to the new, smaller vertex. There are several ways to do this, such as swapping vertex indices. Note also that it is not necessary to maintain the whole tree in this process; it is quite sufficient to maintain it implicitly as a sequence of sizes. The problem is solved online with $\mathcal{O}(n \log n)$ amortized time complexity.
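The lockstep walk used to find the smaller section in $\mathcal{O}(|small|)$ can be sketched like this. The `next`-pointer representation is a hypothetical simplification (with $-1$ marking the end of a list); the real solution walks the two halves of one list being split.

```cpp
#include <cassert>
#include <vector>

// Advance two list walks in lockstep; the walk that reaches the end (-1)
// first belongs to the smaller section, so the cost is O(|small|).
// Returns 0 if the walk starting at a finishes first, 1 otherwise.
// Assumes both walks terminate (the lists are not circular).
int smallerSide(const std::vector<int>& next, int a, int b) {
    while (true) {
        if (a == -1) return 0;
        if (b == -1) return 1;
        a = next[a];  // one step on each list per iteration
        b = next[b];
    }
}
```

Once the smaller side is known, its length was counted along the way, and the larger side's length follows from the total size stored at the tree vertex.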
[ "combinatorics", "data structures", "dfs and similar", "dsu", "graphs", "implementation", "trees" ]
2,700
#pragma GCC optimize("O3,unroll-loops") #include <iostream> #include <numeric> #include <vector> constexpr int MAX_N = 3e5; constexpr long long MOD = 998244353; struct _inv_small { long long data[MAX_N + 2] = {0}; constexpr _inv_small() { data[1] = 1; for (int i = 2; i <= MAX_N; i += 2) { data[i] = MOD - (MOD / i) * data[MOD % i] % MOD; data[i + 1] = MOD - (MOD / (i + 1)) * data[MOD % (i + 1)] % MOD; } } }; constexpr _inv_small __inv_small; #define inv_small(x) __inv_small.data[x] #define inv(x) (x < MAX_N ? inv_small(x) : data[(x) - MAX_N]) struct _inv { long long data[MAX_N + 3] = {0}; constexpr _inv() { for (int i = 0; i <= MAX_N + 1; i += 2) { data[i] = MOD - (MOD / (i + MAX_N)) * inv(MOD % (i + MAX_N)) % MOD; data[i + 1] = MOD - (MOD / (i + MAX_N + 1)) * inv(MOD % (i + MAX_N + 1)) % MOD; } } }; #undef inv constexpr _inv __inv; #define inv(x) (x < MAX_N ? inv_small(x) : __inv.data[(x) - MAX_N]) struct _catalan { long long data[MAX_N + 2] = {1}; constexpr _catalan() { for (int i = 1; i <= MAX_N; i += 2) { data[i] = (4 * i - 2) * data[i - 1] % MOD * inv(i + 1) % MOD; data[i + 1] = (4 * i + 2) * data[i] % MOD * inv(i + 2) % MOD; } } }; constexpr _catalan __catalan; #define catalan(x) __catalan.data[x] struct _catalan_inv { long long data[MAX_N + 2] = {1}; constexpr _catalan_inv() { for (int i = 1; i <= MAX_N; i += 2) { data[i] = inv(2) * inv(2 * i - 1) % MOD * data[i - 1] % MOD * (i + 1) % MOD; data[i + 1] = inv(2) * inv(2 * i + 1) % MOD * data[i] % MOD * (i + 2) % MOD; } } }; constexpr _catalan_inv __catalan_inv; #define catalan_inv(x) __catalan_inv.data[x] int main() { std::cin.tie(0)->sync_with_stdio(0); int t; std::cin >> t; for (int _ = 0; _ < t; ++_) { int n; std::cin >> n; std::vector<bool> used(2 * n + 2); std::vector<int> point(2 * n + 2); std::iota(point.begin(), point.end(), 1); std::vector<int> size(2 * n + 2); std::vector<int> parent = {0}; std::vector<int> parent_idx(2 * n + 2); used[0] = true; point[0] = 2 * n + 2, point[2 * n + 1] = 2 * n + 1; 
size[0] = 2 * n; long long ans = catalan(n); std::cout << ans << " "; for (int i = 0, l, r; i < n; ++i) { std::cin >> l >> r; point[l] = r + 1, point[r] = r; used[l] = true; int p = parent[parent_idx[l]]; int out_ptr = p + 1, ins_ptr = l + 1; int out_size = 0, ins_size = 0; while (point[out_ptr] != out_ptr && point[ins_ptr] != ins_ptr) { out_size += !used[out_ptr]; ins_size += !used[ins_ptr]; out_ptr = point[out_ptr]; ins_ptr = point[ins_ptr]; } ans = (ans * catalan_inv(size[p] / 2)) % MOD; int upd_ptr; if (point[out_ptr] == out_ptr) { upd_ptr = p + 1; parent.push_back(p); parent[parent_idx[l]] = l; ins_size = size[p] - 2 - out_size; size[p] = out_size, size[l] = ins_size; } else { upd_ptr = l + 1; parent.push_back(l); out_size = size[p] - 2 - ins_size; size[p] = out_size, size[l] = ins_size; } ans = (ans * catalan(size[p] / 2)) % MOD * catalan(size[l] / 2) % MOD; while (point[upd_ptr] != upd_ptr) { parent_idx[upd_ptr] = parent.size() - 1; upd_ptr = point[upd_ptr]; } std::cout << ans << " "; } std::cout << '\n'; } }
2064
A
Brogramming Contest
One day after waking up, your friend challenged you to a brogramming contest. In a brogramming contest, you are given a binary string$^{\text{∗}}$ $s$ of length $n$ and an initially empty binary string $t$. During a brogramming contest, you can make either of the following moves any number of times: - remove some suffix$^{\text{†}}$ from $s$ and place it at the end of $t$, or - remove some suffix from $t$ and place it at the end of $s$. To win the brogramming contest, you must make the minimum number of moves required to make $s$ contain only the character $0$ and $t$ contain only the character $1$. Find the minimum number of moves required.\begin{footnotesize} $^{\text{∗}}$A binary string is a string consisting of characters $0$ and $1$. $^{\text{†}}$A string $a$ is a suffix of a string $b$ if $a$ can be obtained from deletion of several (possibly, zero or all) elements from the beginning of $b$. \end{footnotesize}
Notice that if $s$ starts with a $1$, we must move the entire string $s$ to $t$ at some point. Also notice that each operation decreases the total number of occurrences of $01$ and $10$ across both strings by at most one. This gives us a lower bound on the answer: the number of occurrences of $01$ and $10$ in $s$, plus one if $s$ starts with the character $1$. Now the following construction uses exactly this number of moves (thus showing it is the minimum number of moves): If $s$ begins with $1$, select the entire string $s$ and move it to $t$. Then repeatedly find the first character in $s$ or $t$ which is not equal to the character before it (note that under this construction such an index can only exist in one string at a time) and select the suffix starting from this character and move it to the other string. During this construction some prefix of $s$ will contain $0$s and some prefix of $t$ will contain $1$s, so after each move the total number of $01$ and $10$ decreases. So the answer to this problem is the number of $01$ and $10$ in $s$, plus one if it starts with $1$.
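The counting described above fits in a few lines. This is a sketch of the final formula only (the name `minMoves` is illustrative, and a non-empty $s$ is assumed):

```cpp
#include <cassert>
#include <string>

// Minimum moves = number of adjacent unequal pairs (01/10 boundaries) in s,
// plus one extra move if s starts with '1' (the whole string must move once).
int minMoves(const std::string& s) {
    int ans = (s[0] == '1');
    for (size_t i = 0; i + 1 < s.size(); ++i)
        ans += (s[i] != s[i + 1]);  // each boundary costs one move
    return ans;
}
```

For instance, `minMoves("0001")` is $1$: one move takes the suffix "1" to $t$.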
[ "greedy", "strings" ]
800
#include<bits/stdc++.h> using namespace std; typedef long long ll; #define debug(x) cout << #x << " = " << x << "\n"; #define vdebug(a) cout << #a << " = "; for(auto x: a) cout << x << " "; cout << "\n"; mt19937 rng(chrono::steady_clock::now().time_since_epoch().count()); int uid(int a, int b) { return uniform_int_distribution<int>(a, b)(rng); } ll uld(ll a, ll b) { return uniform_int_distribution<ll>(a, b)(rng); } void solve(){ int n; cin >> n; string s; cin >> s; int ans = 0; for (int i = 0; i < n - 1; i++) { if (s[i] != s[i + 1]) ans++; } if (s[0] == '1') ans++; cout << ans << "\n"; } int main(){ ios::sync_with_stdio(false); cin.tie(0); cout.tie(0); int t; cin >> t; while (t--) solve(); }
2064
B
Variety is Discouraged
Define the score of an arbitrary array $b$ to be the length of $b$ minus the number of distinct elements in $b$. For example: - The score of $[1, 2, 2, 4]$ is $1$, as it has length $4$ and only $3$ distinct elements ($1$, $2$, $4$). - The score of $[1, 1, 1]$ is $2$, as it has length $3$ and only $1$ distinct element ($1$). - The empty array has a score of $0$. You have an array $a$. You need to remove some \textbf{non-empty} contiguous subarray from $a$ \textbf{at most} once. More formally, you can do the following \textbf{at most} once: - pick two integers $l$, $r$ where $1 \le l \le r \le n$, and - delete the contiguous subarray $[a_l,\ldots,a_r]$ from $a$ (that is, replace $a$ with $[a_1,\ldots,a_{l - 1},a_{r + 1},\ldots,a_n]$). Output an operation such that the score of $a$ is \textbf{maximum}; if there are multiple answers, output one that \textbf{minimises} the final length of $a$ after the operation. If there are still multiple answers, you may output any of them.
The first thing to notice is that if we remove an element from $a$ then our score can never increase, because removing one element causes $|a|$ to decrease by $1$ while $\mathrm{distinct}(a)$ decreases by at most $1$, so $|a| - \mathrm{distinct}(a)$ never increases. This means that we should try to remove the longest subarray which does not decrease our score. We can see that removing any element which occurs only once in $a$ never decreases our score, while removing any element which occurs more than once always decreases it. Thus we should find the longest subarray of elements which occur exactly once in $a$, and this will be the answer. This can be calculated with a single loop in $O(n)$.
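The single loop can be sketched as follows. The name `bestRemoval` and the $\{0,0\}$ convention for "perform no removal" are illustrative choices, not part of the problem's required output format:

```cpp
#include <cassert>
#include <unordered_map>
#include <utility>
#include <vector>

// Returns {l, r} (1-indexed, inclusive) of the longest subarray whose
// elements each occur exactly once in a, or {0, 0} if no such subarray
// exists (every element repeats, so nothing should be removed).
std::pair<int, int> bestRemoval(const std::vector<int>& a) {
    std::unordered_map<int, int> freq;
    for (int x : a) freq[x]++;
    int best = 0, bestEnd = -1, run = 0;
    for (int i = 0; i < (int)a.size(); ++i) {
        run = (freq[a[i]] == 1) ? run + 1 : 0;  // extend run of unique values
        if (run > best) { best = run; bestEnd = i; }
    }
    if (best == 0) return {0, 0};
    return {bestEnd - best + 2, bestEnd + 1};   // convert to 1-indexed bounds
}
```

For example, on $[5, 1, 2, 5]$ the unique-element run is $[1, 2]$, giving the removal $(2, 3)$.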
[ "binary search", "constructive algorithms", "greedy", "two pointers" ]
1,100
#include<bits/stdc++.h> using namespace std; typedef long long ll; #define debug(x) cout << #x << " = " << x << "\n"; #define vdebug(a) cout << #a << " = "; for(auto x: a) cout << x << " "; cout << "\n"; mt19937 rng(chrono::steady_clock::now().time_since_epoch().count()); int uid(int a, int b) { return uniform_int_distribution<int>(a, b)(rng); } ll uld(ll a, ll b) { return uniform_int_distribution<ll>(a, b)(rng); } void solve(){ int n; cin >> n; vector<int> a(n); for (int &x : a) cin >> x; vector<int> freq(n + 1); for (int x : a) freq[x]++; vector<int> len(n + 1); len[0] = freq[a[0]] == 1; for (int i = 1; i < n; i++) if (freq[a[i]] == 1) len[i] = len[i - 1] + 1; int mx = *max_element(len.begin(), len.end()); if (mx == 0){ cout << "0\n"; return; } for (int i = 0; i < n; i++){ if (len[i] == mx){ cout << i - len[i] + 2 << " " << i + 1 << "\n"; return; } } } int main(){ ios::sync_with_stdio(false); cin.tie(0); cout.tie(0); int t; cin >> t; while (t--) solve(); }
2064
C
Remove the Ends
You have an array $a$ of length $n$ consisting of \textbf{non-zero} integers. Initially, you have $0$ coins, and you will do the following until $a$ is empty: - Let $m$ be the current size of $a$. Select an integer $i$ where $1 \le i \le m$, gain $|a_i|$$^{\text{∗}}$ coins, and then: - if $a_i < 0$, then replace $a$ with $[a_1,a_2,\ldots,a_{i - 1}]$ (that is, delete the suffix beginning with $a_i$); - otherwise, replace $a$ with $[a_{i + 1},a_{i + 2},\ldots,a_m]$ (that is, delete the prefix ending with $a_i$). Find the maximum number of coins you can have at the end of the process. \begin{footnotesize} $^{\text{∗}}$Here $|a_i|$ represents the absolute value of $a_i$: it equals $a_i$ when $a_i > 0$ and $-a_i$ when $a_i < 0$. \end{footnotesize}
First, see that at any point we should either remove the leftmost positive element or the rightmost negative element: if we were to take a positive element that is not the leftmost one, we could have taken the leftmost one first and obtained a higher score, and a similar argument applies to the rightmost negative element. Now, if you perform either of these moves some number of times, you will always take the positive numbers from some prefix and the negative numbers from the remaining suffix. So to calculate the answer we only need to check all $n + 1$ ways to split the array into a prefix and a suffix and take the maximum over them all, which is easy to do in $O(n)$.
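The prefix/suffix split can be sketched as below (the name `maxCoins` is illustrative): for each split point $i$, we gain the positives among $a_1,\ldots,a_{i-1}$ and the absolute values of the negatives among $a_i,\ldots,a_n$.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// suf[i] = sum of |negatives| in a[i..n-1]; a running prefix sum of
// positives covers the other side; the answer is the best split.
long long maxCoins(const std::vector<int>& a) {
    int n = a.size();
    std::vector<long long> suf(n + 1, 0);
    for (int i = n - 1; i >= 0; --i)
        suf[i] = suf[i + 1] + (a[i] < 0 ? -(long long)a[i] : 0);
    long long ans = 0, pre = 0;
    for (int i = 0; i <= n; ++i) {  // positives before i, negatives from i on
        ans = std::max(ans, pre + suf[i]);
        if (i < n && a[i] > 0) pre += a[i];
    }
    return ans;
}
```

On $[1, -2]$ this gives $3$: take the $1$ (deleting the prefix), then take the $-2$ for $|-2| = 2$ more coins.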
[ "brute force", "constructive algorithms", "dp", "greedy" ]
1,300
#include<bits/stdc++.h> using namespace std; typedef long long ll; #define debug(x) cout << #x << " = " << x << "\n"; #define vdebug(a) cout << #a << " = "; for(auto x: a) cout << x << " "; cout << "\n"; mt19937 rng(chrono::steady_clock::now().time_since_epoch().count()); int uid(int a, int b) { return uniform_int_distribution<int>(a, b)(rng); } ll uld(ll a, ll b) { return uniform_int_distribution<ll>(a, b)(rng); } void solve(){ int n; cin >> n; vector<int> a(n); for (int &x : a) cin >> x; vector<ll> pre(n), suf(n); if (a[0] > 0) pre[0] = a[0]; for (int i = 1; i < n; i++){ pre[i] = pre[i - 1]; if (a[i] > 0) pre[i] += a[i]; } if (a[n - 1] < 0) suf[n - 1] = -a[n - 1]; for (int i = n - 2; i >= 0; i--){ suf[i] = suf[i + 1]; if (a[i] < 0) suf[i] -= a[i]; } ll ans = 0; for (int i = 0; i < n; i++) ans = max(ans, pre[i] + suf[i]); cout << ans << "\n"; } int main(){ ios::sync_with_stdio(false); cin.tie(0); cout.tie(0); int t; cin >> t; while (t--) solve(); }
2064
D
Eating
There are $n$ slimes on a line, the $i$-th of which has weight $w_i$. Slime $i$ is able to eat another slime $j$ if $w_i \geq w_j$; afterwards, slime $j$ disappears and the weight of slime $i$ becomes $w_i \oplus w_j$$^{\text{∗}}$. The King of Slimes wants to run an experiment with parameter $x$ as follows: - Add a new slime with weight $x$ to the right end of the line (after the $n$-th slime). - This new slime eats the slime to its left if it is able to, and then takes its place (moves one place to the left). It will continue to do this until there is either no slime to its left or the weight of the slime to its left is greater than its own weight. (No other slimes are eaten during this process.) - The score of this experiment is the total number of slimes eaten. The King of Slimes is going to ask you $q$ queries. In each query, you will be given an integer $x$, and you need to determine the score of the experiment with parameter $x$. Note that the King does not want you to actually perform each experiment; his slimes would die, which is not ideal. He is only asking what the hypothetical score is; in other words, the queries are \textbf{not} persistent. \begin{footnotesize} $^{\text{∗}}$Here $\oplus$ denotes the bitwise XOR operation. \end{footnotesize}
First notice that if we are unable to eat the next slime, then that slime must have a $\mathrm{msb}$ (most significant bit) at least as large as the $\mathrm{msb}$ of the current value of $x$. Consequently, if a slime has a $\mathrm{msb}$ strictly less than that of $x$, we can always eat it. Now see that if we eat a slime with a lower $\mathrm{msb}$ than $x$, the $\mathrm{msb}$ of $x$ never decreases, while if we eat a slime with an equal $\mathrm{msb}$, the $\mathrm{msb}$ of $x$ decreases. This inspires the following approach: at any point, eat as many slimes as we can to the left that have a smaller $\mathrm{msb}$ (to calculate the new value of $x$ after doing this, we can use any range xor query, such as prefix xors). After this, the next slime will have a $\mathrm{msb}$ greater than or equal to that of $x$, so we will either be unable to eat it, or we will eat it and the $\mathrm{msb}$ of $x$ will decrease. Because the $\mathrm{msb}$ decreases after every such step, we only need to do this at most $\log(x)$ times! So we need a fast way to find the first slime to our left with $\mathrm{msb}$ greater than or equal to the current $\mathrm{msb}$ of $x$. There are many ways to do this, but perhaps the cleanest is to store, for each $i, j$ where $1 \le i \le n, 1 \le j \le \log(W)$ (where $W$ is the maximum value of $w_i$ and $x$), the greatest index $k \le i$ which has $\mathrm{msb}(w_k)$ greater than or equal to $j$; call this $\mathrm{pre}_{i, j}$. We can compute it as follows: if $\mathrm{msb}(w_i) < j$ then $\mathrm{pre}_{i, j} = \mathrm{pre}_{i - 1, j}$, else $\mathrm{pre}_{i, j} = i$. The final complexity is $O((n + q)\log(W))$.
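Building the $\mathrm{pre}$ table can be sketched as below. The bound `B = 30` on the bit-length and the use of the GCC builtin `__builtin_clz` (undefined for zero, so positive weights are assumed) are implementation choices, not requirements.

```cpp
#include <array>
#include <cassert>
#include <vector>

constexpr int B = 30;  // assumed bound on the bit-length of the weights

// pre[i][j] = greatest index k <= i with msb(w[k]) >= j, or -1 if none.
// Row i copies row i-1 and then overwrites entries j <= msb(w[i]) with i.
std::vector<std::array<int, B>> buildPre(const std::vector<int>& w) {
    int n = w.size();
    std::vector<std::array<int, B>> pre(n);
    for (int i = 0; i < n; ++i) {
        if (i == 0) pre[i].fill(-1);
        else pre[i] = pre[i - 1];
        int msb = 31 - __builtin_clz(w[i]);  // position of highest set bit
        for (int j = 0; j <= msb && j < B; ++j) pre[i][j] = i;
    }
    return pre;
}
```

For $w = [1, 4, 2]$ (msbs $0, 2, 1$) we get $\mathrm{pre}_{2,1} = 2$, $\mathrm{pre}_{2,2} = 1$, and $\mathrm{pre}_{2,3} = -1$.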
[ "binary search", "bitmasks", "brute force", "data structures", "dp", "greedy", "trees", "two pointers" ]
1,900
#include<bits/stdc++.h> using namespace std; typedef long long ll; #define debug(x) cout << #x << " = " << x << "\n"; #define vdebug(a) cout << #a << " = "; for(auto x: a) cout << x << " "; cout << "\n"; mt19937 rng(chrono::steady_clock::now().time_since_epoch().count()); int uid(int a, int b) { return uniform_int_distribution<int>(a, b)(rng); } ll uld(ll a, ll b) { return uniform_int_distribution<ll>(a, b)(rng); } const int W = 30; void solve(){ int n, q; cin >> n >> q; vector<int> a(n); for (int &x : a) cin >> x; vector<int> pre(n + 1); pre[0] = a[0]; for (int i = 1; i < n; i++){ pre[i] = pre[i - 1] ^a[i]; } vector<array<int, W>> last(n); for (int i = 0; i < n; i++){ fill(last[i].begin(), last[i].end(), 0); if (i > 0) last[i] = last[i - 1]; last[i][__lg(a[i])] = i; for (int j = W - 2; j >= 0; j--){ last[i][j] = max(last[i][j], last[i][j + 1]); } } while (q--) { int x; cin >> x; int idx = n - 1; while (idx >= 0 && x > 0){ int msb = __lg(x); int nxt = last[idx][msb]; x ^= pre[idx] ^ pre[nxt]; idx = nxt; if (nxt == -1 || a[nxt] > x) break; x ^= a[nxt]; idx--; } cout << n - idx - 1 << "\n"; } } int main(){ ios::sync_with_stdio(false); cin.tie(0); cout.tie(0); int t; cin >> t; while (t--) solve(); }
2064
E
Mycraft Sand Sort
Steve has a permutation$^{\text{∗}}$ $p$ and an array $c$, both of length $n$. Steve wishes to sort the permutation $p$. Steve has an infinite supply of coloured sand blocks, and using them he discovered a physics-based way to sort an array of numbers called gravity sort. Namely, to perform gravity sort on $p$, Steve will do the following: - For all $i$ such that $1 \le i \le n$, place a sand block of color $c_i$ in position $(i, j)$ for all $j$ where $1 \le j \le p_i$. Here, position $(x, y)$ denotes a cell in the $x$-th row from the top and $y$-th column from the left. - Apply downwards gravity to the array, so that all sand blocks fall as long as they can fall. \begin{center} {\small An example of gravity sort for the third testcase. $p = [4, 2, 3, 1, 5]$, $c = [2, 1, 4, 1, 5]$} \end{center} Alex looks at Steve's sand blocks after performing gravity sort and wonders how many pairs of arrays $(p',c')$ where $p'$ is a permutation would have resulted in the same layout of sand. Note that the original pair of arrays $(p, c)$ will always be counted. Please calculate this for Alex. As this number could be large, output it modulo $998\,244\,353$. \begin{footnotesize} $^{\text{∗}}$A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is a $4$ in the array). \end{footnotesize}
The first thing to notice is that the first column of sand after sorting will be identical to $c$. This means that we must have $c' = c$. Now that we know $c' = c$, we need to find how many ways we can rearrange $p$ such that the sand layout stays the same. Notice that $p'$ must be obtainable from $p$ by only swapping values of the same colour. This is true because if we remove all sand blocks that are not of one specific colour (i.e. keep only one colour), then by examining the resulting sand layout we can recover which values belong to the colour we kept. Now let's isolate a single colour and count the number of ways we can arrange its values. Observe that we can swap $p_i$ and $p_j$ (of the same colour) without changing the final sand layout if and only if every value of a different colour between positions $i$ and $j$ is strictly less than both $p_i$ and $p_j$. After staring at this operation long enough, you will see that each value $p_i$ has a certain range of positions it can occupy. The easiest way to calculate this range is with a DSU: iterate over values in increasing order of $p_i$ and merge each with everything it can reach in one move, storing the leftmost and rightmost element currently in each component to be fast enough. It is important to note that these ranges have a tree structure: because of how we calculated them, every pair of ranges will either be disjoint or one will contain the other. This means that if we iterate in order of size and "fix" a position in each range, we will obtain the answer. There is actually no need to build the tree explicitly; instead we can multiply the answer by the size of each DSU component as we merge, decreasing the size of the component by $1$ after each fix.
[ "combinatorics", "data structures", "dsu", "greedy", "math", "sortings" ]
2,400
#include<bits/stdc++.h> using namespace std; typedef long long ll; #define debug(x) cout << #x << " = " << x << "\n"; #define vdebug(a) cout << #a << " = "; for(auto x: a) cout << x << " "; cout << "\n"; mt19937 rng(chrono::steady_clock::now().time_since_epoch().count()); int uid(int a, int b) { return uniform_int_distribution<int>(a, b)(rng); } ll uld(ll a, ll b) { return uniform_int_distribution<ll>(a, b)(rng); } const int MOD = 998244353; template<ll mod> // template was not stolen from https://codeforces.com/profile/SharpEdged struct modnum { static constexpr bool is_big_mod = mod > numeric_limits<int>::max(); using S = conditional_t<is_big_mod, ll, int>; using L = conditional_t<is_big_mod, __int128, ll>; S x; modnum() : x(0) {} modnum(ll _x) { _x %= static_cast<ll>(mod); if (_x < 0) { _x += mod; } x = _x; } modnum pow(ll n) const { modnum res = 1; modnum cur = *this; while (n > 0) { if (n & 1) res *= cur; cur *= cur; n /= 2; } return res; } modnum inv() const { return (*this).pow(mod-2); } modnum& operator+=(const modnum& a){ x += a.x; if (x >= mod) x -= mod; return *this; } modnum& operator-=(const modnum& a){ if (x < a.x) x += mod; x -= a.x; return *this; } modnum& operator*=(const modnum& a){ x = static_cast<L>(x) * a.x % mod; return *this; } modnum& operator/=(const modnum& a){ return *this *= a.inv(); } friend modnum operator+(const modnum& a, const modnum& b){ return modnum(a) += b; } friend modnum operator-(const modnum& a, const modnum& b){ return modnum(a) -= b; } friend modnum operator*(const modnum& a, const modnum& b){ return modnum(a) *= b; } friend modnum operator/(const modnum& a, const modnum& b){ return modnum(a) /= b; } friend bool operator==(const modnum& a, const modnum& b){ return a.x == b.x; } friend bool operator!=(const modnum& a, const modnum& b){ return a.x != b.x; } friend bool operator<(const modnum& a, const modnum& b){ return a.x < b.x; } friend ostream& operator<<(ostream& os, const modnum& a){ os << a.x; return os; } friend 
istream& operator>>(istream& is, modnum& a) { ll x; is >> x; a = modnum(x); return is; } }; using mint = modnum<MOD>; template <class T> struct SegTree{ vector<T> seg; int n; const T ID = 0; T cmb(T a, T b){ return max(a, b); } SegTree(int _n){ n = 1; while (n < _n) n *= 2; seg.assign(2 * n + 1, ID); } void set(int pos, T val){ seg[pos + n] = val; } void build(){ for (int i = n - 1; i >= 1; i--) seg[i] = cmb(seg[2 * i], seg[2 * i + 1]); } void upd(int v, int tl, int tr, int pos, T val){ if (tl == tr){ seg[v] = val; } else { int tm = (tl + tr) / 2; if (pos <= tm) upd(2 * v, tl, tm, pos, val); else upd(2 * v + 1, tm + 1, tr, pos, val); seg[v] = cmb(seg[2 * v], seg[2 * v + 1]); } } void upd(int pos, T val){ upd(1, 0, n - 1, pos, val); } T query(int v, int tl, int tr, int l, int r){ if (l > r) return ID; if (l == tl && r == tr) return seg[v]; int tm = (tl + tr) / 2; T res = query(2 * v, tl, tm, l, min(tm, r)); res = cmb(res, query(2 * v + 1, tm + 1, tr, max(l, tm + 1), r)); return res; } T query(int l, int r){ return query(1, 0, n - 1, l, r); } }; struct DSU{ vector<int> p, sz, used, mn, mx; DSU(int n){ p.assign(n, 0); sz.assign(n, 1); used.assign(n, 0); mx.assign(n, 0); mn.assign(n, 0); for (int i = 0; i < n; i++) p[i] = i; } int find(int u){ if (p[u] == u) return u; p[u] = find(p[u]); return p[u]; } void unite(int u, int v){ u = find(u); v = find(v); if (u == v) return; if (sz[u] < sz[v]) swap(u, v); p[v] = u; sz[u] += sz[v]; used[u] += used[v]; mn[u] = min(mn[u], mn[v]); mx[u] = max(mx[u], mx[v]); } bool same(int u, int v){ return find(u) == find(v); } int size(int u){ u = find(u); return sz[u]; } }; void solve(){ int n; cin >> n; vector<int> a(n), b(n); for (int &x : a) cin >> x; for (int &x : b) cin >> x; vector<vector<int>> col(n + 1); for (int i = 0; i < n; i++){ col[b[i]].push_back(i); } vector<vector<int>> ord(n + 1); for (int i = 0; i <= n; i++){ ord[i] = col[i]; sort(ord[i].begin(), ord[i].end(), [&](int x, int y) -> bool{ return a[x] < a[y]; }); } 
SegTree<int> seg(n); for (int i = 0; i < n; i++){ seg.set(i, a[i]); } seg.build(); mint ans = 1; DSU dsu(n); for (int i = 0; i <= n; i++){ for (int j = 0; j < col[i].size(); j++){ dsu.mn[col[i][j]] = j; dsu.mx[col[i][j]] = j; } } for (int i = 0; i <= n; i++){ for (int x : ord[i]) seg.upd(x, 0); for (int x : ord[i]){ int idx = lower_bound(col[i].begin(), col[i].end(), x) - col[i].begin(); for (int j = idx + 1; j < col[i].size(); j++){ if (seg.query(x, col[i][j]) >= a[x]) break; dsu.unite(x, col[i][j]); j = dsu.mx[dsu.find(x)]; } for (int j = idx - 1; j >= 0; j--){ if (seg.query(col[i][j], x) >= a[x]) break; dsu.unite(x, col[i][j]); j = dsu.mn[dsu.find(x)]; } int u = dsu.find(x); ans *= dsu.size(u) - dsu.used[u]; dsu.used[u]++; } for (int x : ord[i]) seg.upd(x, a[x]); } cout << ans << "\n"; } int main(){ ios::sync_with_stdio(false); cin.tie(0); cout.tie(0); int t; cin >> t; while (t--) solve(); }
2064
F
We Be Summing
You are given an array $a$ of length $n$ and an integer $k$. Call a non-empty array $b$ of length $m$ epic if there exists an integer $i$ where $1 \le i < m$ and $\min(b_1,\ldots,b_i) + \max(b_{i + 1},\ldots,b_m) = k$. Count the number of epic subarrays$^{\text{∗}}$ of $a$. \begin{footnotesize} $^{\text{∗}}$An array $a$ is a subarray of an array $b$ if $a$ can be obtained from $b$ by the deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end. \end{footnotesize}
For an arbitrary array $b$ of length $m$, first notice that $\min(b_1,\ldots,b_i)$ and $\max(b_{i+1},\ldots,b_m)$ are both non-increasing as we increase $i$. This means that if an array $b$ is epic, then there exists a unique pair of integers $x, y$ such that $x + y = k$ and there exists $i$ ($1 \le i < |b|$) with $\min(b_1,\ldots,b_i) = x$ and $\max(b_{i+1},\ldots,b_{|b|}) = y$. Let's call an array $b$ $x$-epic if it is epic due to the pair $(x, k - x)$. Because the pair $(x, y)$ is unique, the answer is the sum over all $x$ of the number of $x$-epic subarrays of $a$. So let's try to find this sum. Say we are counting the $x$-epic subarrays for a given $x$, and let $y = k - x$. The subarray must obviously contain both $x$ and $y$, so let's find, for each $i$, the number of $x$-epic subarrays in which $a_i$ is the first occurrence of $x$. Notice that the first element smaller than $x$ after position $i$ must come after the last element bigger than $y$. Notice also that there cannot be an element bigger than $y$ after the last occurrence of $y$ in the subarray. So for each index $j$ with $a_j = y$, we store the length of the longest consecutive stretch starting at $j$ in which every element after $j$ is strictly less than $y$. Then for each $i$ with $a_i = x$, we binary search for the last occurrence of $y$ such that the first element smaller than $x$ still comes after the last element bigger than $y$, and use prefix sums to count the possible right endpoints of the subarray. The number of possible left endpoints is just the length of the longest consecutive interval that has $i$ as its rightmost element and contains only elements strictly larger than $x$ before $i$. Now that we know the number of possible left and right endpoints, we add their product to the answer. Doing this across all $i$ and all $x$ gives the answer.
[ "binary search", "data structures", "dp", "two pointers" ]
2,600
#include<bits/stdc++.h> using namespace std; typedef long long ll; #define debug(x) cout << #x << " = " << x << "\n"; #define vdebug(a) cout << #a << " = "; for(auto x: a) cout << x << " "; cout << "\n"; mt19937 rng(chrono::steady_clock::now().time_since_epoch().count()); int uid(int a, int b) { return uniform_int_distribution<int>(a, b)(rng); } ll uld(ll a, ll b) { return uniform_int_distribution<ll>(a, b)(rng); } template <class T> struct SegTree{ vector<T> seg; int n; const T ID = {INT_MAX, -INT_MAX}; T cmb(T a, T b){ array<int, 2> res; res[0] = min(a[0], b[0]); res[1] = max(a[1], b[1]); return res; } SegTree(int _n){ n = 1; while (n < _n) n *= 2; seg.assign(2 * n + 1, ID); } void set(int pos, T val){ seg[pos + n] = val; } void build(){ for (int i = n - 1; i >= 1; i--) seg[i] = cmb(seg[2 * i], seg[2 * i + 1]); } void upd(int v, int tl, int tr, int pos, T val){ if (tl == tr){ seg[v] = val; } else { int tm = (tl + tr) / 2; if (pos <= tm) upd(2 * v, tl, tm, pos, val); else upd(2 * v + 1, tm + 1, tr, pos, val); seg[v] = cmb(seg[2 * v], seg[2 * v + 1]); } } void upd(int pos, T val){ upd(1, 0, n - 1, pos, val); } T query(int v, int tl, int tr, int l, int r){ if (l > r) return ID; if (l == tl && r == tr) return seg[v]; int tm = (tl + tr) / 2; T res = query(2 * v, tl, tm, l, min(tm, r)); res = cmb(res, query(2 * v + 1, tm + 1, tr, max(l, tm + 1), r)); return res; } T query(int l, int r){ return query(1, 0, n - 1, l, r); } }; void solve(){ int n, k; cin >> n >> k; vector<int> a(n); for (int &x : a) cin >> x; vector<vector<int>> p(n + 1); for (int i = 0; i < n; i++) p[a[i]].push_back(i); vector<array<int, 2>> stk; vector<int> small_l(n, -1), small_r(n, n), big_l(n, -1), big_r(n, n); for (int i = 0; i < n; i++){ while (stk.size() && a[i] <= stk.back()[0]) stk.pop_back(); if (stk.size()) small_l[i] = stk.back()[1]; stk.push_back({a[i], i}); } stk.clear(); for (int i = n - 1; i >= 0; i--){ while (stk.size() && a[i] >= stk.back()[0]) stk.pop_back(); if (stk.size()) big_r[i] = 
stk.back()[1]; stk.push_back({a[i], i}); } ll ans = 0; SegTree<array<int, 2>> seg_small(n), seg_big(n); for (int x = 1; x <= n; x++){ int y = k - x; if (y > n){ for (int pos : p[x]) seg_small.upd(pos, {pos, pos}); continue; } vector<int> pre(p[y].size()); for (int i = 0; i < p[y].size(); i++){ if (i > 0) pre[i] += pre[i - 1]; int r = big_r[p[y][i]] - 1; if (i < p[y].size() - 1) r = min(r, p[y][i + 1] - 1); pre[i] += r - p[y][i] + 1; } vector<int> first(p[x].size()), last(p[y].size()); for (int i = 0; i < p[x].size(); i++) first[i] = seg_small.query(p[x][i], n - 1)[0]; for (int i = 0; i < p[y].size(); i++) last[i] = seg_big.query(0, p[y][i])[1]; int prev = -1; for (int i = 0; i < p[x].size(); i++){ int l = p[x][i]; int l_min = small_l[l] + 1; l_min = max(l_min, prev + 1); prev = l; int nxt = upper_bound(p[y].begin(), p[y].end(), l) - p[y].begin(); if (nxt == p[y].size()) continue; int lo = nxt, hi = p[y].size(); while (lo < hi){ int mid = (lo + hi) / 2; int pos = p[y][mid]; if (first[i] <= last[mid]) hi = mid; else lo = mid + 1; } lo--; if (lo < nxt) continue; int res = pre[lo]; if (nxt > 0) res -= pre[nxt - 1]; ans += 1LL * (l - l_min + 1) * res; } for (int pos : p[x]) seg_small.upd(pos, {pos, pos}); for (int pos : p[y]) seg_big.upd(pos, {pos, pos}); } cout << ans << "\n"; } int main(){ ios::sync_with_stdio(false); cin.tie(0); cout.tie(0); int t; cin >> t; while (t--) solve(); }
2065
A
Skibidus and Amog'u
Skibidus lands on a foreign planet, where the local Amog tribe speaks the Amog'u language. In Amog'u, there are two forms of nouns, which are singular and plural. Given that the root of the noun is transcribed as $S$, the two forms are transcribed as: - Singular: $S$ $+$ "us" - Plural: $S$ $+$ "i" Here, $+$ denotes string concatenation. For example, abc $+$ def $=$ abcdef. For example, when $S$ is transcribed as "amog", then the singular form is transcribed as "amogus", and the plural form is transcribed as "amogi". Do note that Amog'u nouns can have an \textbf{empty} root — in specific, "us" is the singular form of "i" (which, on an unrelated note, means "imposter" and "imposters" respectively). Given a transcribed Amog'u noun in singular form, please convert it to the transcription of the corresponding plural noun.
Let $n$ be the length of the string. Output the first $n-2$ characters of the string (to remove the suffix "us"), then the lowercase letter "i".
[ "brute force", "constructive algorithms", "greedy", "implementation", "strings" ]
800
#include <bits/stdc++.h> using namespace std; using ll = long long; using vll = vector <ll>; using ii = pair <ll, ll>; using vii = vector <ii>; void tc () { string str; cin >> str; str.pop_back(); str.pop_back(); cout << str + "i" << '\n'; } int main () { cin.tie(nullptr) -> sync_with_stdio(false); ll T; cin >> T; while (T--) { tc(); } return 0; }
2065
B
Skibidus and Ohio
Skibidus is given a string $s$ that consists of lowercase Latin letters. If $s$ contains more than $1$ letter, he can: - Choose an index $i$ ($1 \leq i \leq |s| - 1$, $|s|$ denotes the current length of $s$) such that $s_i = s_{i+1}$. Replace $s_i$ with any lowercase Latin letter of his choice. Remove $s_{i+1}$ from the string. Skibidus must determine the minimum possible length he can achieve through any number of operations.
Note that if you can ever perform an operation, the answer is $1$. This is because once you perform an operation on a string of length $k$, you are free to replace $s_i$ with any character. Therefore, if you replace $s_i$ with either the character directly before it or the character directly after it, you end up with a string of length $k-1$ on which you can operate again. By induction, it's clear that you can keep operating on the string until only one character remains. However, if you cannot perform any operations, the answer is $|s|$, because the original string cannot be modified. Therefore, the answer is $1$ if $s_i = s_{i+1}$ for some $i$, and $|s|$ otherwise.
[ "strings" ]
800
#include <bits/stdc++.h> using namespace std; using ll = long long; using vll = vector <ll>; using ii = pair <ll, ll>; using vii = vector <ii>; void tc () { string str; cin >> str; for (ll i = 1; i < str.size(); i++) { if (str[i-1] == str[i]) { cout << "1\n"; return; } } cout << str.size() << '\n'; } int main () { cin.tie(nullptr) -> sync_with_stdio(false); ll T; cin >> T; while (T--) { tc(); } return 0; }
2065
C2
Skibidus and Fanum Tax (hard version)
\textbf{This is the hard version of the problem. In this version, $m \leq 2\cdot 10^5$.} Skibidus has obtained two arrays $a$ and $b$, containing $n$ and $m$ elements respectively. For \textbf{each} integer $i$ from $1$ to $n$, he is allowed to perform the operation \textbf{at most once}: - Choose an integer $j$ such that $1 \leq j \leq m$. Set $a_i := b_j - a_i$. Note that $a_i$ may become non-positive as a result of this operation. Skibidus needs your help determining whether he can sort $a$ in non-decreasing order$^{\text{∗}}$ by performing the above operation some number of times. \begin{footnotesize} $^{\text{∗}}$$a$ is sorted in non-decreasing order if $a_1 \leq a_2 \leq \ldots \leq a_n$. \end{footnotesize}
The overall idea for both subtasks is to iterate from left to right and, at each element, pick the operation that yields the smallest value of $a_i$ that is at least $a_{i-1}$. For C1, you only have two values to consider for each index: $b_1 - a_i$ and $a_i$. First, set $a_1 = \min(a_1, b_1 - a_1)$. Then, for each subsequent element $i$, consider the two values $b_1 - a_i$ and $a_i$. If the smaller of these values is greater than or equal to $a_{i-1}$, set $a_i$ to this value. Otherwise, check the larger value: if it is less than $a_{i-1}$, you can immediately output "NO" and move on to the next testcase; otherwise, set $a_i$ to the larger of the two values. Time complexity: $\mathcal{O}(n)$ per testcase. For C2, you now have $m$ values in the array $b$, so there are $m+1$ possible values per index of $a$, and trying them all is $\mathcal{O}(nm)$, which is clearly too slow. Instead, sort the values of $b$ and binary search for the minimal value $b_j$ such that $b_j - a_i \ge a_{i-1}$. Then you are left with the original problem, where you either leave $a_i$ untouched or set it to $b_j - a_i$ for this optimal index $j$ found by binary search. Proceeding as before, the problem is solved in $\mathcal{O}(n \log m)$ time.
[ "binary search", "greedy" ]
1,300
#include <bits/stdc++.h> using namespace std; using ll = long long; using vll = vector <ll>; using ii = pair <ll, ll>; using vii = vector <ii>; const ll INF = ll(1E18)+16; void tc () { ll n, m; cin >> n >> m; vll va(n), vb(m); for (ll &i : va) cin >> i; for (ll &i : vb) cin >> i; sort(vb.begin(), vb.end()); va.insert(va.begin(), -INF); n++; for (ll i = 1; i < n; i++) { auto it = lower_bound(vb.begin(), vb.end(), -15, [&](ll a, ll _) { assert(_ == -15); return a-va[i] < va[i-1]; }); if (it == vb.end()) continue; ll j = *it; if (va[i] < va[i-1] && j-va[i] < va[i-1]) continue; // OH MY GOD va[i] = min((va[i] < va[i-1] ? INF : va[i]), (j-va[i] < va[i-1] ? INF : j-va[i])); } cout << (is_sorted(va.begin(), va.end()) ? "YES" : "NO") << '\n'; } int main () { cin.tie(nullptr) -> sync_with_stdio(false); ll T; cin >> T; while (T--) { tc(); } return 0; }
2065
D
Skibidus and Sigma
Let's denote the score of an array $b$ with $k$ elements as $\sum_{i=1}^{k}\left(\sum_{j=1}^ib_j\right)$. In other words, let $S_i$ denote the sum of the first $i$ elements of $b$. Then, the score can be denoted as $S_1+S_2+\ldots+S_k$. Skibidus is given $n$ arrays $a_1,a_2,\ldots,a_n$, each of which contains $m$ elements. Being the sigma that he is, he would like to concatenate them in \textbf{any order} to form a single array containing $n\cdot m$ elements. Please find the maximum possible score Skibidus can achieve with his concatenated array! Formally, among all possible permutations$^{\text{∗}}$ $p$ of length $n$, output the maximum score of $a_{p_1} + a_{p_2} + \dots + a_{p_n}$, where $+$ represents concatenation$^{\text{†}}$. \begin{footnotesize} $^{\text{∗}}$A permutation of length $n$ contains all integers from $1$ to $n$ exactly once. $^{\text{†}}$The concatenation of two arrays $c$ and $d$ with lengths $e$ and $f$ respectively (i.e. $c + d$) is $c_1, c_2, \ldots, c_e, d_1, d_2, \ldots d_f$. \end{footnotesize}
It is clear that the score of an array $b$ of length $n$ can be expressed as $n \cdot b_1 + (n-1) \cdot b_2 + (n-2) \cdot b_3 + \dots + 1 \cdot b_n$. Let's first solve this problem with $m=1$, i.e. $n$ arrays with one element each. By the fact above, the optimal arrangement is clearly to sort the elements from largest to smallest. Now we come to the issue of combining arrays. Suppose you have two arrays $a$ and $b$, both of length $m$. Is there a nice way of expressing the score of $a+b$ and the score of $b+a$ (where $+$ represents concatenation) in terms of $score(a)$ and $score(b)$? It turns out there is: the score of $a+b$ equals $m \cdot sum(a) + score(a) + score(b)$. This is because of the following: $score(a+b) = (2m \cdot a_1) + ((2m-1) \cdot a_2) + \dots + ((m+1) \cdot a_m) + (m \cdot b_1) + \dots + (1 \cdot b_m)$. We can replace all the $b$ terms with $score(b)$: $score(a+b) = (2m \cdot a_1) + ((2m-1) \cdot a_2) + \dots + ((m+1) \cdot a_m) + score(b)$. Now write each coefficient as $m$ plus a remainder: $score(a+b) = ((m+m) \cdot a_1) + ((m+(m-1)) \cdot a_2) + \dots + ((m+1) \cdot a_m) + score(b) = m \cdot (a_1 + a_2 + \dots + a_m) + (m \cdot a_1) + ((m-1) \cdot a_2) + \dots + (1 \cdot a_m) + score(b) = m \cdot sum(a) + score(a) + score(b)$. Therefore, whichever of $a$ and $b$ has the larger sum should go first, because then you get a larger overall score. This argument also extends beyond two arrays, so the solution is as follows: sort the arrays by their sums, from largest to smallest; put together the final array; take its score. This score is guaranteed to be maximal.
[ "greedy", "sortings" ]
1,200
#include <bits/stdc++.h> using namespace std; using ll = long long; using vll = vector <ll>; using ii = pair <ll, ll>; using vii = vector <ii>; void tc () { ll n, m; cin >> n >> m; vector <vll> ve(n, vll(m)); for (vll &ve2 : ve) { for (ll &i : ve2) cin >> i; } vll vsum(n, 0); for (ll i = 0; i < n; i++) { for (ll j : ve[i]) vsum[i] += j; } vll th(n); iota(th.begin(), th.end(), 0); sort(th.begin(), th.end(), [&](ll a, ll b) { return vsum[a] > vsum[b]; }); ll ans = 0; for (ll i = 0; i < n; i++) { ans += vsum[th[i]]*(n-1-i)*m; } for (vll ve2 : ve) { for (ll i = 0; i < m; i++) { ans += ve2[i]*(m-i); } } cout << ans << '\n'; } int main () { cin.tie(nullptr) -> sync_with_stdio(false); ll T; cin >> T; while (T--) { tc(); } return 0; }
2065
E
Skibidus and Rizz
With the approach of Valentine's Day, Skibidus desperately needs a way to rizz up his crush! Fortunately, he knows of just the way: creating the perfect Binary String! Given a binary string$^{\text{∗}}$ $t$, let $x$ represent the number of $0$ in $t$ and $y$ represent the number of $1$ in $t$. Its \textbf{balance-value} is defined as the value of $\max(x-y, y-x)$. Skibidus gives you three integers $n$, $m$, and $k$. He asks for your help to construct a binary string $s$ of length $n+m$ with exactly $n$ $0$'s and $m$ $1$'s such that the maximum \textbf{balance-value} among all of its substrings$^{\text{†}}$ is \textbf{exactly} $k$. If it is not possible, output -1. \begin{footnotesize} $^{\text{∗}}$A binary string only consists of characters $0$ and $1$. $^{\text{†}}$A string $a$ is a substring of a string $b$ if $a$ can be obtained from $b$ by the deletion of several (possibly, zero or all) characters from the beginning and several (possibly, zero or all) characters from the end. \end{footnotesize}
Claim 1: It is impossible when $k < |n-m|$. Proof 1: The entire string already has a balance-value greater than $k$. Claim 2: It is impossible when $k > \max(n, m)$. Proof 2: A substring with balance-value $v$ must contain at least $v$ copies of one character, so the maximum possible balance-value of any string with $n$ zeroes and $m$ ones is $\max(n, m)$. To complete the construction, WLOG $n \ge m$ (because you can invert the string to get the solution for $n < m$). Output $k$ zeroes, then output alternating ones and zeroes, then output whatever characters remain at the end. Note that with this construction, a substring with balance-value exactly $k$ exists (take the first $k$ characters), but a substring with balance-value $>k$ does not exist (taking more of the string cannot increase the balance-value).
[ "constructive algorithms", "greedy", "strings" ]
1,600
#include <bits/stdc++.h> using namespace std; int main(){ int t; cin >> t; while(t--){ int n, m, k; cin >> n >> m >> k; if(max(n, m) - min(n, m) > k || k > max(n, m)){ cout << "-1" << endl; continue; } else{ pair<int, int> use1 = {n, 0}, use2 = {m, 1}; if(m > n) swap(use1, use2); for(int i = 0; i < k; i++){ cout << use1.second; use1.first--; } while(use2.first > 0){ cout << use2.second; use2.first--; swap(use1, use2); } while(use1.first > 0){ cout << use1.second; use1.first--; } cout << "\n"; } } }
2065
F
Skibidus and Slay
Let's define the majority of a sequence of $k$ elements as the unique value that appears strictly more than $\left \lfloor {\frac{k}{2}} \right \rfloor$ times. If such a value does not exist, then the sequence does \textbf{not} have a majority. For example, the sequence $[1,3,2,3,3]$ has a majority $3$ because it appears $3 > \left \lfloor {\frac{5}{2}} \right \rfloor = 2$ times, but $[1,2,3,4,5]$ and $[1,3,2,3,4]$ do not have a majority. Skibidus found a tree$^{\text{∗}}$ of $n$ vertices and an array $a$ of length $n$. Vertex $i$ has the value $a_i$ written on it, where $a_i$ is an integer in the range $[1, n]$. For each $i$ from $1$ to $n$, please determine if there exists a non-trivial simple path$^{\text{†}}$ such that $i$ is the majority of the \textbf{sequence of integers written on the vertices} that form the path. \begin{footnotesize} $^{\text{∗}}$A tree is a connected graph without cycles. $^{\text{†}}$A sequence of vertices $v_1, v_2, ..., v_m$ ($m \geq 2$) forms a non-trivial simple path if $v_i$ and $v_{i+1}$ are connected by an edge for all $1 \leq i \leq m - 1$ and all $v_i$ are pairwise distinct. \textbf{Note that the path must consist of at least $2$ vertices.} \end{footnotesize}
Assume that a sequence $a$ of length $n$ contains two types of elements, $1$ denoting the majority value and $0$ denoting any other value. Then, if $a$ contains a majority and $n \ge 2$, the following condition holds: $a$ must contain either $[1,1]$ or $[1,0,1]$ as a contiguous subsequence. This can be shown by contradiction. Say there are $k$ instances of $1$ and $n-k$ instances of $0$. Then $k > n-k$, so $2k > n$. If $a$ does not contain $[1,1]$, then it needs a $0$ between every pair of $1$s, which requires $n-k \ge k-1$, i.e. $n+1 \ge 2k$. Because $n < 2k \le n+1$, the only value $k$ can take is $\frac{n+1}{2}$. When this happens, the only valid arrangement of $a$ is $[1,0,1,0,\ldots,0,1,0,1]$, which still contains $[1,0,1]$. Therefore, $a$ definitely contains either $[1,1]$ or $[1,0,1]$. Conversely, the subarrays $[1,1]$ and $[1,0,1]$ themselves already have majority $1$. We conclude that the tree contains a non-trivial path with majority $k$ if and only if it contains the pattern $[k,k]$ (two adjacent vertices with value $k$) or $[k,x,k]$ (two vertices with value $k$ sharing a common neighbour) for that value $k$. Checking the existence of $[k,k]$ and $[k,x,k]$ in the tree for all $k$ can be done in $\mathcal{O}(n)$ time, since $\sum \text{deg} = 2m = 2(n-1)$. The problem is thus solved in $\mathcal{O}(n)$ time. Alternatively, you can solve this problem using the Auxiliary Tree technique. Formally, the auxiliary tree of $k$ specified vertices is the tree consisting of the specified vertices and their pairwise lowest common ancestors. It can be shown that such a tree always has $\mathcal{O}(k)$ vertices and can be found in $\mathcal{O}(k \log n)$ time using widely known algorithms. Observe that when you transform the value $x$ into $1$ and all other values into $-1$, $x$ is the majority of the sequence if and only if the transformed sequence has a positive sum. This hints us towards finding the maximum sum of any non-trivial path in the transformed tree.
If the maximum sum is positive, then there exists a path with $x$ as its majority. This subproblem can be solved by running a modified version of Kadane's algorithm, adapted for use on a tree. This adaptation is widely known for the case when the given tree is a binary tree, and it is straightforward to modify it for any tree while keeping the time complexity $\mathcal{O}(n)$. The only problem is that solving this for all values is $\mathcal{O}(n^2)$, so we compress the tree for each value using the Auxiliary Tree technique. The auxiliary tree for the specified vertices contains those vertices and their LCAs. If a vertex is specified, its value is transformed into $1$; otherwise, its value is transformed into $-1$. Meanwhile, edges now carry lengths, and an edge with length $l > 1$ means that $l-1$ unspecified vertices were compressed into it, so we insert a new vertex with value $-(l-1)$ in the middle of such an edge. As there are $n$ specified vertices in total, finding all auxiliary trees takes $\mathcal{O}(n \log n)$ time, and solving the rest of the problem for each $i$ takes $\mathcal{O}(n)$ time overall. Thus, the problem is solved in $\mathcal{O}(n \log n)$ time.
[ "data structures", "dfs and similar", "graphs", "greedy", "trees" ]
1,700
#include <bits/stdc++.h> using namespace std; #define f first #define s second #define ll long long #define pii pair<int,int> #define amin(a,b) a = min(a,b) #define amax(a,b) a = max(a,b) vector<int> adj[500005]; int a[500005]; int s[500005]; int d[500005]; int t[1000005]; vector<int> v[500005]; int timer = 0; void dfs(int node, int pa) { d[node] = d[pa]+1; s[node] = ++timer; t[timer] = node; v[a[node]].push_back(node); for (auto it : adj[node]) { if (it == pa)continue; dfs(it,node); t[++timer] = node; } } // sparse table pii spt[1000005][20]; void buildspt(int n) { for (int i = 1; i <= 2*n-1; ++i) { spt[i][0] = {d[t[i]],i}; } for (int j = 1; (1<<j) <= 2*n-1; ++j) { for (int i = 1; i+(1<<j)-1 <= 2*n-1; ++i) { spt[i][j] = min(spt[i][j-1],spt[i+(1<<(j-1))][j-1]); } } return; } pii qry(int l, int r) { int dd = __lg(r-l+1); return min(spt[l][dd],spt[r-(1<<dd)+1][dd]); } int _lca(int l, int r) { return t[qry(s[l],s[r]).s]; } int dp[500005]; int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); int tt; cin >> tt; while (tt--) { int n; cin >> n; for (int i = 1; i <= n; ++i) { cin >> a[i]; } for (int i = 1; i <= n-1; ++i) { int u, _v; cin >> u >> _v; adj[u].push_back(_v); adj[_v].push_back(u); } timer = 0; dfs(1,0); buildspt(n); for (int i = 1; i <= n; ++i) { vector<int> v2; int ans = -2e9; for (auto it : v[i]) { dp[it] = 1; if (v2.size()) { int lca = _lca(v2.back(),it); while (v2.size() && d[lca] < d[v2.back()]) { int z = v2.back(); v2.pop_back(); if (v2.empty() || d[v2.back()] < d[lca]) { dp[lca] = (a[lca] == i ? 1 : -1); v2.push_back(lca); } amax(ans,dp[v2.back()]+dp[z]-(d[z]-d[v2.back()]-1)); amax(dp[v2.back()],dp[z]+(a[v2.back()] == i ? 1 : -1)-(d[z]-d[v2.back()]-1)); } } v2.push_back(it); } while ((int)v2.size() > 1) { int z = v2.back(); v2.pop_back(); amax(ans,dp[v2.back()]+dp[z]-(d[z]-d[v2.back()]-1)); amax(dp[v2.back()],dp[z]+(a[v2.back()] == i ? 
1 : -1)-(d[z]-d[v2.back()]-1)); } cout << (ans > 0); } cout << '\n'; for (int i = 1; i <= n; ++i) { adj[i].clear(); v[i].clear(); } } return 0; }
2065
G
Skibidus and Capping
Skibidus was abducted by aliens of Amog! Skibidus tries to talk his way out, but the Amog aliens don't believe him. To prove that he is not totally capping, the Amog aliens asked him to solve this task: An integer $x$ is considered a semi-prime if it can be written as $p \cdot q$ where $p$ and $q$ are (not necessarily distinct) prime numbers. For example, $9$ is a semi-prime since it can be written as $3 \cdot 3$, and $3$ is a prime number. Skibidus was given an array $a$ containing $n$ integers. He must report the number of pairs $(i, j)$ such that $i \leq j$ and $\operatorname{lcm}(a_i, a_j)$$^{\text{∗}}$ is semi-prime. \begin{footnotesize} $^{\text{∗}}$Given two integers $x$ and $y$, $\operatorname{lcm}(x, y)$ denotes the least common multiple of $x$ and $y$. \end{footnotesize}
Let $a_i=x$ and $a_j=y$ for any pair $(i,j)$ that we count. First, let's note that since $x \mid \operatorname{lcm}(x,y)$, we don't need to consider any cases where $x$ or $y$ has more than two prime factors. There are three cases we must consider: (1) $x$ and $y$ are primes, and $x\neq y$; then $\operatorname{lcm}(x,y)=x\cdot y$ and has two prime factors. (2) $x$ is a semiprime, and $y$ is a prime factor of $x$; then $\operatorname{lcm}(x,y)=x$. (3) $y$ is a semiprime, and $y=x$. To count this, let's first factorize each number in the array (either by trial division in $O(n\sqrt{a_i})$ or with $O(n\log n)$ sieve precomputation). Let's maintain two maps, one mapping each semiprime present in the array to its number of occurrences, and one mapping each prime present in the array to its number of occurrences. Let $cnt[x]$ be the number of occurrences of $x$. Then, we can count each case as follows: (1) Let the total number of primes be $P$. For each prime $p$, its contribution will be $cnt[p]\cdot(P-cnt[p])$. Note that we must divide this result by $2$, as we would have counted each pair twice. (2) For each semiprime $pq$, add $cnt[pq]\cdot cnt[p]+cnt[pq]\cdot cnt[q]$. (3) For each semiprime $pq$, add $cnt[pq]\cdot (cnt[pq]+1)/2$. All of this can be done in $O(n\log n)$ time. Be sure not to use dictionary, Counter, or unordered_map, as they can be hacked!
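The three-case count can be sketched as below. This is an illustrative assumption, not the editorial's code: `countPairs` and the two occurrence maps are invented names, with `primeCnt` mapping each prime in the array to its count and `semiCnt` mapping each semiprime (as its sorted prime-factor pair $p \le q$) to its count; the $p=q$ case of a semiprime is handled once to avoid double counting.

```cpp
#include <bits/stdc++.h>

// Sketch of the three-case count described above (assumed helper, not the
// editorial's code). primeCnt: prime value -> occurrences; semiCnt: semiprime
// given as its sorted prime-factor pair (p, q) with p <= q -> occurrences.
long long countPairs(const std::map<int, long long>& primeCnt,
                     const std::map<std::pair<int, int>, long long>& semiCnt) {
    long long P = 0;                       // total number of prime elements
    for (auto& [p, c] : primeCnt) P += c;
    long long ans = 0;
    // Case 1: two distinct primes p != q, lcm = p*q; each pair counted twice.
    long long cross = 0;
    for (auto& [p, c] : primeCnt) cross += c * (P - c);
    ans += cross / 2;
    for (auto& [pq, c] : semiCnt) {
        auto [p, q] = pq;
        // Case 2: semiprime p*q paired with one of its prime factors
        // (add the q-factor term only when q != p, to avoid double counting).
        if (auto it = primeCnt.find(p); it != primeCnt.end()) ans += c * it->second;
        if (q != p)
            if (auto it = primeCnt.find(q); it != primeCnt.end()) ans += c * it->second;
        // Case 3: two equal semiprimes (includes i == j, since pairs use i <= j).
        ans += c * (c + 1) / 2;
    }
    return ans;
}
```

For the array $[2,2,3,6,4]$ the maps are $\{2\mapsto 2,\,3\mapsto 1\}$ and $\{(2,3)\mapsto 1,\,(2,2)\mapsto 1\}$, and the function returns $9$, matching a direct brute-force check of all pairs.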
[ "combinatorics", "math", "number theory" ]
1,700
#include <bits/stdc++.h> using namespace std; using ll = long long; vector<int> prime_factors(int x){ vector<int> pf; for(ll i = 2; i * i <= x; i++){ while(x % i == 0){ pf.push_back(i); x /= i; } } if(x > 1) pf.push_back(x); return pf; } int main(){ int t; cin >> t; while(t--){ int n; cin >> n; ll ans = 0; vector<int> one(n+1), two_same(n+1), two_diff(n+1), cnt(n+1); int prime_so_far = 0; for(int i = 0; i < n; i++){ int x; cin >> x; vector<int> pf = prime_factors(x); if(pf.size() > 2) continue; if(pf.size() == 1){ one[x]++; prime_so_far++; ans += two_same[x] + two_diff[x] + (prime_so_far - one[x]); } else if(pf[0] == pf[1]){ two_same[pf[0]]++; ans += one[pf[0]] + two_same[pf[0]]; } else{ two_diff[pf[0]]++; two_diff[pf[1]]++; cnt[x]++; ans += one[pf[0]] + one[pf[1]] + cnt[x]; } } cout << ans << "\n"; } }
2065
H
Bro Thinks He's Him
Skibidus thinks he's Him! He proved it by solving this difficult task. Can you also prove yourself? Given a binary string$^{\text{∗}}$ $t$, $f(t)$ is defined as the minimum number of contiguous substrings, each consisting of identical characters, into which $t$ can be partitioned. For example, $f(00110001) = 4$ because $t$ can be partitioned as $[00][11][000][1]$ where each bracketed segment consists of identical characters. Skibidus gives you a binary string $s$ and $q$ queries. In each query, a single character of the string is flipped (i.e. $0$ changes to $1$ and $1$ changes to $0$); changes are saved after the query is processed. After each query, output the sum over all $f(b)$ where $b$ is a non-empty subsequence$^{\text{†}}$ of $s$, modulo $998\,244\,353$. \begin{footnotesize} $^{\text{∗}}$A binary string consists of only characters $0$ and $1$. $^{\text{†}}$A subsequence of a string is a string which can be obtained by removing several (possibly zero) characters from the original string. \end{footnotesize}
A brute-force attempt at a solution would be to go through each non-empty subset of $\{1,2,\ldots,n\}$ and, for every pair of adjacent indices $(i, j)$ in the subset, increase $ans$ if $s[i] \neq s[j]$. In other words, increase $ans$ every time $s[i]$ and $s[j]$ create a new border. This is extremely slow at $O(2^n\cdot n)$; however, we can notice that every adjacent pair is independent from the rest of the subset. We may ask ourselves: for a fixed $(i, j)$, how many subsets increase $ans$ through that $(i, j)$? The conditions are that: $i$ and $j$ are in the subset; $i$ and $j$ are adjacent in the subset (there is no other included index in between); and $s[i] \neq s[j]$. This leaves us free to choose the prefix and the suffix, for $2^{i-1}\cdot 2^{n-j}$ such subsets. An $O(n^2)$ solution is therefore to sum $2^{i-1}\cdot 2^{n-j}$ over all pairs $i<j$ with $s[i]\neq s[j]$. The rest of the solution will try to optimize how to compute that sum, and then how to maintain the answer through updates. We may notice we can pull out the $2^{n-j}$ factor: loop over $j$, and for each $j$ multiply $2^{n-j}$ by $sumI = \sum 2^{i-1}$ over all $i<j$ with $s[i]\neq s[j]$. Since this inner sum only cares about what $s[j]$ is (nothing else about $j$), we can again easily optimize it: we don't need to recalculate $sumI$ for each new $j$. Instead, we can maintain its $2$ possible values (for $s[j]=0$ and $s[j]=1$) as we increase $j$ (since $sumI$ only ranges over $1\leq i<j$). We can run this $O(n)$ solution once before any updates, but not after each update, because $O(q\cdot n)$ is too slow. Instead, we can make the minimal adjustments needed to maintain $ans$ correctly. Let's call the toggled index $k$. How much does $s[k]$ contribute to $ans$ in the first place? We can partition $s[k]$'s contribution to the answer into $2$ cases: - When $j=k$: $ans += sumI[s[k]]\cdot 2^{n-k}$. - After $sumI[s[k]\oplus1] += 2^{k-1}$: every time $sumI[s[k]\oplus 1]$ is used to increase $ans$. For the first case, we can calculate the value of $sumI[s[k]]$ at $j=k$ using the sum of $2^{i-1}$ for $i < k$ and $s[i]=s[k]\oplus1$. The second case is only slightly trickier, but follows the same general idea: it can be calculated using the sum of $2^{n-j}$ for $j > k$ and $s[j]=s[k]\oplus 1$. A Fenwick (or segment) tree solves both cases, which are essentially dynamic range sums on an array. Since for both cases we must consider $s[k]=0$ and $s[k]=1$, a total of $4$ different Fenwick trees will be needed: ($2^{i-1}$ when $s[i]=0$, $0$ otherwise); ($2^{i-1}$ when $s[i]=1$, $0$ otherwise); ($2^{n-j}$ when $s[j]=0$, $0$ otherwise); ($2^{n-j}$ when $s[j]=1$, $0$ otherwise). The procedure is thus: before toggling a bit, subtract its contribution from $ans$; after toggling the bit, add its new contribution to $ans$; also, update the $4$ Fenwick trees accordingly. This correctly maintains $ans$ in $O(\log n)$ per update, so the final complexity is $O(n+q\log n)$.
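The $O(n)$ pre-update pass described above can be sketched as follows. This is a minimal, assumed implementation (not the editorial's exact code), using 0-indexed strings; the $2^n - 1$ term accounts for the $+1$ that each non-empty subsequence contributes to $f$ beyond its borders.

```cpp
#include <bits/stdc++.h>

// Sketch of the initial O(n) computation (assumed helper): each adjacent
// unequal pair (i, j) (1-indexed) contributes 2^{i-1} * 2^{n-j} subsequences,
// and every nonempty subsequence contributes 1 for its first block.
long long initialAnswer(const std::string& s) {
    const long long MOD = 998244353;
    int n = s.size();
    std::vector<long long> pw(n + 1, 1);
    for (int i = 1; i <= n; i++) pw[i] = pw[i - 1] * 2 % MOD;
    long long ans = (pw[n] - 1 + MOD) % MOD;  // +1 per nonempty subsequence
    long long sumI[2] = {0, 0};               // sumI[b] = sum of 2^i over i < j with s[i] = b
    for (int j = 0; j < n; j++) {
        int b = s[j] - '0';
        ans = (ans + sumI[b ^ 1] * pw[n - 1 - j]) % MOD;  // pairs (i, j), s[i] != s[j]
        sumI[b] = (sumI[b] + pw[j]) % MOD;
    }
    return ans;
}
```

As a small check, for $s=\texttt{"01"}$ the subsequences are $\texttt{0}, \texttt{1}, \texttt{01}$ with $f$-values $1+1+2=4$, which the sketch reproduces.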
[ "combinatorics", "data structures", "divide and conquer", "dp", "math", "matrices" ]
2,200
#include <bits/stdc++.h> using namespace std; using ll = long long; using vll = vector <ll>; using ii = pair <ll, ll>; using vii = vector <ii>; const ll MAXN = 2E5+16, MOD = 998244353; ll pw2[MAXN]; struct SegTree { vll tree; ll n; SegTree (ll n): tree(2*n, 0), n(n) {} void update (ll id, ll val) { for (tree[id += n] = val; id > 1; id >>= 1) tree[id>>1] = (tree[id] + tree[id^1]) % MOD; } ll query (ll ql, ll qr) { ll ans = 0; for (ql += n, qr += n+1; ql < qr; ql >>= 1, qr >>= 1) { if (ql&1) (ans += tree[ql++]) %= MOD; if (qr&1) (ans += tree[--qr]) %= MOD; } return ans; } }; void tc () { string str; cin >> str; ll n = str.size(); SegTree stFreq0(n), stFreq1(n), st0(n), st1(n); for (ll i = 0; i < n; i++) { (str[i] == '0' ? stFreq0 : stFreq1).update(i, pw2[n-1-i]); (str[i] == '0' ? st0 : st1).update(i, pw2[i]); } ll ans = pw2[n]-1; {ll acc0 = 0, acc1 = 0; for (ll i = 0; i < n; i++) { (ans += (str[i] == '0' ? acc1 : acc0)*pw2[n-1-i]) %= MOD; ((str[i] == '0' ? acc0 : acc1) += pw2[i]) %= MOD; }} ll Q; cin >> Q; while (Q--) { ll at; cin >> at; at--; (ans -= (str[at] == '0' ? st1 : st0).query(0, at-1)*pw2[n-1-at]) %= MOD; (ans -= (str[at] == '0' ? stFreq1 : stFreq0).query(at+1, n-1)*pw2[at]) %= MOD; (ans += MOD) %= MOD; (str[at] == '0' ? stFreq0 : stFreq1).update(at, 0); (str[at] == '0' ? st0 : st1).update(at, 0); str[at] = (str[at] == '0' ? '1' : '0'); (str[at] == '0' ? stFreq0 : stFreq1).update(at, pw2[n-1-at]); (str[at] == '0' ? st0 : st1).update(at, pw2[at]); (ans += (str[at] == '0' ? st1 : st0).query(0, at-1)*pw2[n-1-at]) %= MOD; (ans += (str[at] == '0' ? stFreq1 : stFreq0).query(at+1, n-1)*pw2[at]) %= MOD; cout << ans << ' '; } cout << '\n'; } int main () { cin.tie(nullptr) -> sync_with_stdio(false); pw2[0] = 1; for (ll i = 1; i < MAXN; i++) pw2[i] = pw2[i-1]*2 % MOD; ll T; cin >> T; while (T--) { tc(); } return 0; }
2066
A
Object Identification
This is an interactive problem. You are given an array $x_1, \ldots, x_n$ of integers from $1$ to $n$. The jury also has a fixed but hidden array $y_1, \ldots, y_n$ of integers from $1$ to $n$. The elements of array $y$ are \textbf{unknown} to you. Additionally, it is known that for all $i$, $x_i \neq y_i$, and all pairs $(x_i, y_i)$ are distinct. The jury has secretly thought of one of two objects, and you need to determine which one it is: - \textbf{Object A}: A directed graph with $n$ vertices numbered from $1$ to $n$, and with $n$ edges of the form $x_i \to y_i$. - \textbf{Object B}: $n$ points on a coordinate plane. The $i$-th point has coordinates $(x_i, y_i)$. To guess which object the jury has thought of, you can make queries. In one query, you must specify two numbers $i, j$ $(1 \leq i, j \leq n, i \neq j)$. In response, you receive one number: - If the jury has thought of \textbf{Object A}, you receive the length of the shortest path (in edges) from vertex $i$ to vertex $j$ in the graph, or $0$ if there is no path. - If the jury has thought of \textbf{Object B}, you receive the Manhattan distance between points $i$ and $j$, that is $|x_i -x_j| + |y_i - y_j|$. You have $2$ queries to determine which of the objects the jury has thought of.
Note that if Object A is chosen, we can receive the number $0$ in response to some query, while if Object B is chosen, we cannot: since all pairs $(x_i,y_i)$ are distinct, any two points differ in at least one coordinate, so every Manhattan distance is $\geq 1$. If the array $x_1, x_2, \ldots, x_n$ is not a permutation of the numbers from $1$ to $n$, then there will be some number $1 \leq a \leq n$ that is not present in the array $x$. In this case, if Object A is chosen, any query $(a, *)$ will yield a response of $0$, since there are simply no edges out of vertex $a$ (here $*$ denotes any number from $1$ to $n$ that is not equal to $a$). We can make such a query, and by checking whether the response is $0$, we can immediately determine which object the jury has chosen. Now suppose the array $x_1, x_2, \ldots, x_n$ is a permutation. Let's find $i$ and $j$ such that $x_i=1$ and $x_j=n$, and make the queries $(i,j)$ and $(j,i)$. In the case that Object B is chosen, we should receive two identical numbers, both of which must be $\geq n-1$, since $|x_i-x_j| = n-1$. It is not hard to see that such a situation is impossible if Object A is chosen: for $n \geq 3$, in a directed graph with $n$ vertices and $n$ edges, it cannot be that for some pair of vertices the distances from one to the other in both directions are $\geq n-1$. Therefore, these two queries are sufficient to uniquely identify the objects. If both received numbers are equal and $\geq n-1$, Object B is chosen; otherwise, Object A is chosen.
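The final decision rule for the permutation case boils down to one comparison; a minimal sketch (assumed helper, not the editorial's code, and valid for $n \geq 3$ per the argument above):

```cpp
#include <bits/stdc++.h>

// Sketch of the decision rule above (assumed helper). d1 is the response to
// query (i, j) with x_i = 1 and x_j = n; d2 is the response to the reverse
// query (j, i). For n >= 3, Object A can never produce two equal answers
// that are both >= n - 1, while Object B always does.
char classify(long long n, long long d1, long long d2) {
    return (d1 == d2 && d1 >= n - 1) ? 'B' : 'A';
}
```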
[ "graphs", "greedy", "implementation", "interactive" ]
1,400
#include <bits/stdc++.h> using namespace std; void solve() { int n; cin >> n; vector<int> x(n + 1), isx(n + 1); for (int i = 1; i <= n; i++) { cin >> x[i]; isx[x[i]] = 1; } if (accumulate(isx.begin(), isx.end(), 0) == n) { int i1 = 0, in = 0; for (int i = 1; i <= n; i++) { if (x[i] == 1) { i1 = i; } if (x[i] == n) { in = i; } } cout << "? " << i1 << ' ' << in << endl; int ans; cin >> ans; if (ans < n - 1) { cout << "! A" << endl; } else if (ans > n - 1) { cout << "! B" << endl; } else { cout << "? " << in << ' ' << i1 << endl; cin >> ans; if (ans == n - 1) { cout << "! B" << endl; } else { cout << "! A" << endl; } } } else { for (int i = 1; i <= n; i++) { if (!isx[i]) { cout << "? " << i << ' ' << 1 + (i == 1) << endl; int ans; cin >> ans; if (ans == 0) { cout << "! A" << endl; } else { cout << "! B" << endl; } return; } } } } signed main() { ios_base::sync_with_stdio(false); cin.tie(nullptr); int t; cin >> t; while (t--) { solve(); } }
2066
B
White Magic
We call a sequence $a_1, a_2, \ldots, a_n$ magical if for all $1 \leq i \leq n-1$ it holds that: $\operatorname{min}(a_1, \ldots, a_i) \geq \operatorname{mex}(a_{i+1}, \ldots, a_n)$. In particular, any sequence of length $1$ is considered magical. The minimum excluded (MEX) of a collection of integers $a_1, a_2, \ldots, a_k$ is defined as the smallest non-negative integer $t$ which does not occur in the collection $a$. You are given a sequence $a$ of $n$ non-negative integers. Find the maximum possible length of a magical subsequence$^{\text{∗}}$ of the sequence $a$. \begin{footnotesize} $^{\text{∗}}$A sequence $a$ is a subsequence of a sequence $b$ if $a$ can be obtained from $b$ by the deletion of several (possibly, zero or all) elements from arbitrary positions. \end{footnotesize}
Note that for those suffixes where there is no number $0$, $\operatorname{mex}$ will be equal to $0$, and thus the condition will be satisfied, since the left side is always $\geq 0$. That is, any array without zeros is magical. However, if there are at least two zeros in the array, it can no longer be magical, since we can consider a prefix containing one zero but not the second: the minimum on such a prefix is equal to $0$, while $\operatorname{mex}$ on the corresponding suffix will definitely be $>0$, and the condition will not be satisfied. Thus, we know for sure that the answer is either $n - cnt_0$ or $n - cnt_0 + 1$, where $cnt_0$ is the number of indices with $a_i=0$. Since we can choose the subsequence of all non-zero elements, of length $n - cnt_0$, a magical subsequence of that length always exists; and any subsequence of length $>n - cnt_0 + 1$ must contain at least two zeros and cannot be magical. Therefore, we only need to determine when the answer is equal to $n - cnt_0 + 1$. In this case, we must take a subsequence with exactly one zero, since again, sequences with at least two zeros are definitely not magical. It is not difficult to see that it is optimal to take the leftmost zero: once a prefix contains the zero, the suffix has no zero, so its $\operatorname{mex}$ is $0$ and the condition holds automatically; taking the leftmost zero therefore minimizes the set of prefixes that still need checking. Thus, the solution looks as follows. If there are no zeros in the array, the answer is $n$. Otherwise, consider the subsequence consisting of the leftmost zero and all non-zero elements of the sequence, and explicitly check whether it is magical (this can easily be done in $O(n)$; calculating prefix $\min$ and suffix $\operatorname{mex}$ is a well-known problem). If yes, the answer is $n - cnt_0 + 1$. Otherwise, the answer is $n - cnt_0$, and the subsequence of all non-zero elements suffices.
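The explicit $O(n)$ check (prefix minimum against suffix mex) can be sketched as below; a minimal sketch with `isMagical` as an assumed name, using 0-indexed input.

```cpp
#include <bits/stdc++.h>

// Sketch of the magical-sequence check described above (assumed helper):
// returns true iff min(a_1..a_i) >= mex(a_{i+1}..a_n) for every split i.
bool isMagical(const std::vector<int>& a) {
    int n = a.size();
    std::vector<bool> seen(n + 2, false);
    std::vector<int> sufMex(n + 1, 0);   // sufMex[i] = mex of a[i..n-1]
    int mex = 0;
    for (int i = n - 1; i >= 1; i--) {
        if (a[i] <= n) seen[a[i]] = true;  // values > n never affect the mex
        while (seen[mex]) mex++;
        sufMex[i] = mex;
    }
    int mn = INT_MAX;
    for (int i = 1; i < n; i++) {        // split after the first i elements
        mn = std::min(mn, a[i - 1]);
        if (mn < sufMex[i]) return false;
    }
    return true;
}
```

For example, $[4,3,2,1,0]$ passes every split, while $[1,0,1]$ already fails at the first split ($\min = 1 < \operatorname{mex}(\{0,1\}) = 2$).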
[ "constructive algorithms", "data structures", "dp", "greedy", "implementation" ]
1,900
#include <bits/stdc++.h> using namespace std; bool check(vector<int> a) { int n = (int) a.size(); vector<int> suf_mex(n); vector<int> used(n + 1, false); int mex = 0; for (int i = n - 1; i >= 1; i--) { if (a[i] <= n) { used[a[i]] = true; } while (used[mex]) mex++; suf_mex[i] = mex; } int mini = a[0]; for (int i = 0; i < n - 1; i++) { mini = min(mini, a[i]); if (mini < suf_mex[i + 1]) { return false; } } return true; } void solve() { int n; cin >> n; vector<int> a(n); for (int i = 0; i < n; i++) { cin >> a[i]; } bool was0 = false; int cnt0 = 0; vector<int> b; for (int i = 0; i < n; i++) { if (a[i] == 0) { cnt0++; if (!was0) { b.push_back(a[i]); } was0 = true; } else { b.push_back(a[i]); } } if (cnt0 > 0 && check(b)) { cout << n - (cnt0 - 1) << '\n'; } else { cout << n - cnt0 << '\n'; } } int main() { ios_base::sync_with_stdio(false); cin.tie(nullptr); int t; cin >> t; while (t--) { solve(); } return 0; }
2066
C
Bitwise Slides
You are given an array $a_1, a_2, \ldots, a_n$. Also, you are given three variables $P,Q,R$, initially equal to zero. You need to process all the numbers $a_1, a_2, \ldots, a_n$, \textbf{in the order from $1$ to $n$}. When processing the next $a_i$, you must perform \textbf{exactly} one of the three actions of your choice: - $P := P \oplus a_i$ - $Q := Q \oplus a_i$ - $R := R \oplus a_i$ $\oplus$ denotes the bitwise XOR operation. When performing actions, you must follow the main rule: it is necessary that after each action, all three numbers $P,Q,R$ are \textbf{not} pairwise distinct. There are a total of $3^n$ ways to perform all $n$ actions. How many of them do not violate the main rule? Since the answer can be quite large, find it modulo $10^9 + 7$.
Let us denote $pref_i = a_1 \oplus \ldots \oplus a_i$, the array of prefix $XOR$s. Notice that after the $i$-th action, it is always true that $P \oplus Q \oplus R = pref_i$, regardless of which actions were chosen. With this knowledge, we can say that after the $i$-th action, the condition "the numbers $P, Q, R$ are not pairwise distinct" is equivalent to the condition "at least one of $P, Q, R$ equals $pref_i$." This is because if there is a pair of equal numbers among $P, Q, R$, their $XOR$ equals $0$, which means the third number must equal $pref_i$. Thus, all possible valid states $(P, Q, R)$ after the $i$-th action look like: $(pref_i, x, x), (x, pref_i, x), (x, x, pref_i)$ for some $x$. After this observation, we can try to write a dynamic programming solution: let $dp[i][x]$ be the number of ways to reach one of the states of the form $(pref_i, x, x), (x, pref_i, x), (x, x, pref_i)$ after the $i$-th action. At first glance, this dynamic programming approach seems to take an unreasonable amount of time and memory. However, we will still outline its base case and recurrence. Base case: $dp[0][0] = 1$. Recurrence: suppose we want to recalculate $dp[i][x]$ using $dp[i-1][*]$. From which state can we arrive at $(x, x, pref_i)$ if the last move involved XORing one of the variables with $a_i$? There are three possible cases: $(x \oplus a_i, x, pref_i)$, $(x, x \oplus a_i, pref_i)$, $(x, x, pref_i \oplus a_i)$. The number of ways to reach the state $(x, x, pref_i \oplus a_i)$ is actually equal to $dp[i-1][x]$, since $pref_i \oplus a_i = pref_{i-1}$. What about the case $(x \oplus a_i, x, pref_i)$? For the state to be valid from the previous move, two of these numbers must be equal, and the third must equal $pref_{i-1}$. However, we know that $pref_i \neq pref_{i-1}$, since $a_i \neq 0$. Therefore, either $x \oplus a_i = pref_{i-1}$ (which means $x = pref_i$), or $x = pref_{i-1}$.
If $x = pref_i$, then the value of $dp[i][x]$ is also equal to $dp[i-1][x]$, because the state $(pref_i, pref_i, pref_i)$ can be reached from one of the states $(pref_{i-1}, pref_i, pref_i)$, $(pref_i, pref_{i-1}, pref_i)$, $(pref_i, pref_i, pref_{i-1})$, and the number of ways to reach one of these states after the $(i-1)$-th move is literally defined as $dp[i-1][pref_i]$. For $x = pref_{i-1}$: $dp[i][pref_{i-1}] = 3 \cdot dp[i-1][pref_{i-1}] + 2 \cdot dp[i-1][pref_i]$. There are a total of 3 ways to arrive from $(pref_{i-1}, pref_{i-1}, pref_{i-1})$, and 2 ways from the states $(pref_{i-1}, pref_i, pref_i)$, $(pref_i, pref_{i-1}, pref_i)$, $(pref_i, pref_i, pref_{i-1})$. In summary, the only $x$ for which $dp[i][x] \neq dp[i-1][x]$ is $x = pref_{i-1}$. Therefore, unexpectedly, we do not need to recalculate the entire array $dp$, but only one of its values. And of course, since the elements of the array are quite large, we will store this $dp$ in a regular map. The answer to the problem will be the sum of all values of $dp$ at the end of the process.
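Since only $dp[pref_{i-1}]$ changes at step $i$, the recurrence is easy to sanity-check against a brute force over all $3^n$ action sequences on tiny inputs. Both helpers below are illustrative assumptions (run without the modulus, since the values stay small), not the editorial's code.

```cpp
#include <bits/stdc++.h>

// Brute force (assumed helper): try all 3^n action sequences and count those
// where, after every action, some two of P, Q, R are equal.
long long bruteForce(const std::vector<long long>& a) {
    int n = a.size();
    long long total = 0, pw = 1;
    for (int i = 0; i < n; i++) pw *= 3;
    for (long long mask = 0; mask < pw; mask++) {
        long long v[3] = {0, 0, 0}, m = mask;
        bool ok = true;
        for (int i = 0; i < n && ok; i++) {
            v[m % 3] ^= a[i];
            m /= 3;
            ok = (v[0] == v[1]) || (v[1] == v[2]) || (v[0] == v[2]);
        }
        total += ok;
    }
    return total;
}

// The recurrence above: only dp[pref_{i-1}] changes at step i
// (assumes all a_i != 0, as in the problem).
long long viaDp(const std::vector<long long>& a) {
    std::map<long long, long long> dp;
    dp[0] = 1;
    long long pref = 0;
    for (long long x : a) {
        long long prev = pref;
        pref ^= x;
        dp[prev] = 3 * dp[prev] + 2 * dp[pref];
    }
    long long ans = 0;
    for (auto& [key, val] : dp) ans += val;
    return ans;
}
```

For instance, for $a = [1, 2]$ both helpers give $3$: the second action is only valid when it restores two equal values among $P, Q, R$.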
[ "bitmasks", "combinatorics", "dp", "math" ]
2,300
#include <bits/stdc++.h> #define int long long using namespace std; const int M = 1'000'000'000 + 7; void solve() { int n; cin >> n; vector<int> a(n + 1), p(n + 1); for (int i = 1; i <= n; i++) { cin >> a[i]; p[i] = a[i] ^ p[i - 1]; } map<int, int> dp; dp[0] = 1; for (int i = 1; i <= n; i++) { dp[p[i - 1]] *= 3; dp[p[i - 1]] += 2 * dp[p[i]]; dp[p[i - 1]] %= M; } int ans = 0; for (auto &[_, x] : dp) { ans += x; } ans %= M; cout << ans << '\n'; } signed main() { ios_base::sync_with_stdio(false); cin.tie(nullptr); int t; cin >> t; while (t--) { solve(); } }
2066
D1
Club of Young Aircraft Builders (easy version)
\textbf{This is the easy version of the problem. The difference between the versions is that in this version, all $a_i = 0$. You can hack only if you solved all versions of this problem.} There is an $n$-story building, with floors numbered from $1$ to $n$ from bottom to top. There is exactly one person living on each floor. All the residents of the building have a very important goal today: to launch at least $c$ paper airplanes collectively. The residents will launch the airplanes in turn. When a person from the $i$-th floor launches an airplane, all residents on the floors from $1$ to $i$ can see it as it descends to the ground. If, from the perspective of the resident on the $i$-th floor, at least $c$ airplanes have already been launched, they will \textbf{not} launch any more airplanes themselves. It is also known that by the end of the day, from the perspective of each resident in the building, at least $c$ airplanes have been launched, and a total of $m$ airplanes were thrown. You carefully monitored this flash mob and recorded which resident from which floor threw each airplane. Unfortunately, the information about who exactly threw some airplanes has been lost. Find the number of ways to fill in the gaps so that the information could be credible. Since the answer can be quite large, output it modulo $10^9 + 7$. \textbf{In this version of the problem, all information has been lost, and the entire array consists of gaps.} It is also possible that you made a mistake in your records, and there is no possible way to restore the gaps. In that case, the answer is considered to be $0$.
The problem is naturally solved using dynamic programming. The author's approach is as follows. Let's think: at which indices can the elements $a_i = 1$ be located? Notice that a person on the first floor can see every airplane, so the positions $a_{c+1} \ldots a_m$ cannot contain a one. However, it is entirely possible for a one to be in any subset of positions $\{1,2,\ldots,c\}$. Let's iterate over $k$ from $0$ to $c$, the number of elements $a_i=1$ in the array. There are a total of $\binom{c}{k}$ ways to place these ones. Note that the task of placing the remaining numbers is actually analogous to the original problem with parameters $(n-1,c,m-k)$, since the launches of airplanes from the first floor do not affect the "counters" of people not on the first floor; those launches can simply be ignored, and we can solve the problem of placing airplanes from the remaining $n-1$ floors into the remaining $m-k$ positions. Thus, we obtain the following dynamic programming relation: $dp[i][j]$ is the answer for $(n=i,m=j)$, with $c$ globally fixed. Base case: $dp[1][c] = 1$. Transition: $dp[i][j] = \displaystyle\sum_{k=0}^c \binom{c}{k} \cdot dp[i-1][j-k]$. The answer is, accordingly, in $dp[n][m]$, and the overall time complexity of the solution is $O(n \cdot m \cdot c)$. An interesting fact: the answer in the easy version of the problem can be expressed with a simple formula: $\binom{nc-c}{m-c}$. Why is this the case? No idea. If someone finds a beautiful combinatorial interpretation, please share in the comments.
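The claimed closed form can at least be verified numerically against the dp on small inputs. Both functions below are assumed helpers for that check (exact arithmetic, no modulus), not the editorial's solution.

```cpp
#include <bits/stdc++.h>

// Exact binomial coefficient for small arguments (assumed helper);
// each intermediate division is exact because r holds C(n-k+i, i).
long long binom(long long n, long long k) {
    if (k < 0 || k > n) return 0;
    long long r = 1;
    for (long long i = 1; i <= k; i++) r = r * (n - k + i) / i;
    return r;
}

// dp[i][j] computed directly from the stated base case and transition
// (assumed helper for sanity checks; values stay small, so no modulus).
long long dpAnswer(int n, int c, int m) {
    std::vector<std::vector<long long>> dp(n + 1, std::vector<long long>(m + 1, 0));
    if (c <= m) dp[1][c] = 1;
    for (int i = 2; i <= n; i++)
        for (int j = 0; j <= m; j++)
            for (int k = 0; k <= c && k <= j; k++)
                dp[i][j] += binom(c, k) * dp[i - 1][j - k];
    return dp[n][m];
}
```

For example, with $n=3$, $c=2$, $m=4$ the dp gives $6$, matching $\binom{nc-c}{m-c}=\binom{4}{2}=6$.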
[ "combinatorics", "dp", "math" ]
2,400
#include <bits/stdc++.h> #define int long long using namespace std; const int N = 105; int fact[N * N], inv_fact[N * N]; const int M = (int) 1e9 + 7; int binpow(int a, int x) { int ans = 1; while (x) { if (x % 2) { ans *= a; ans %= M; } a *= a; a %= M; x /= 2; } return ans; } int C(int k, int n) { if (k > n || k < 0) return 0; return (((fact[n] * inv_fact[k]) % M) * inv_fact[n - k] % M); } unordered_map<int, int> info; const int B = 10007; int xd(int n, int m, int c) { if (m < c || m > n * c) return 0; if (n == 1) return (m == c); if (info.find(n*B*B+m*B+c) != info.end())return info[n*B*B+m*B+c]; int ans = 0; for (int k = 0; k <= c; k++) { ans += xd(n - 1, m - k, c) * C(k, c) % M; } ans %= M; return info[n*B*B+m*B+c] = ans; } void solve() { int n, c, m; cin >> n >> c >> m; vector<int> a(m + 1); for (int i = 1; i <= m; i++) { cin >> a[i]; } cout << xd(n, m, c) << '\n'; } signed main() { ios::sync_with_stdio(false); cin.tie(nullptr); fact[0] = inv_fact[0] = 1; for (int x = 1; x < N * N; x++) { fact[x] = ((x * fact[x - 1]) % M); inv_fact[x] = binpow(fact[x], M - 2); } int t; cin >> t; while (t--) { solve(); } }
2066
D2
Club of Young Aircraft Builders (hard version)
\textbf{This is the hard version of the problem. The difference between the versions is that in this version, it is not necessarily true that all $a_i = 0$. You can hack only if you solved all versions of this problem.} There is a building with $n$ floors, numbered from $1$ to $n$ from bottom to top. There is exactly one person living on each floor. All the residents of the building have an important goal today: to launch at least $c$ paper airplanes collectively. The residents will launch the airplanes in turn. When a person from the $i$-th floor launches an airplane, all residents on floors from $1$ to $i$ can see it as it descends to the ground. If, from the perspective of the resident on the $i$-th floor, at least $c$ airplanes have already been launched, they will no longer launch airplanes themselves. It is also known that by the end of the day, from the perspective of each resident in the building, at least $c$ airplanes have been launched, and a total of $m$ airplanes were thrown. You have been carefully monitoring this flash mob, and for each airplane, you recorded which resident from which floor threw it. Unfortunately, the information about who exactly threw some of the airplanes has been lost. Find the number of ways to fill in the gaps so that the information could be credible. Since the answer could be quite large, output it modulo $10^9 + 7$. It is also possible that you made a mistake in your records, and there is no possible way to restore the gaps. In that case, the answer is considered to be $0$.
In fact, the states in the dynamic programming for the hard version will be the same as in the easy version, but to handle non-zero elements we will need a slightly different perspective. The conceptual basis, how we characterize the arrays that we want to count, remains the same: we look at where in the array ones can be: only in the first $c$ positions. Where can twos be: also in the first $c$ positions, if we ignore the ones. And so on; in general, occurrences of $i$ can only be in the first $c + cnt_1 + \ldots + cnt_{i-1}$ positions, where $cnt_i$ is the number of occurrences of $i$ in the array. It would be great if we could incorporate $cnt_1 + \ldots + cnt_{i-1}$ into the dynamic programming state, and we indeed can. Let $dp[el][sum]$ be the number of ways to arrange all occurrences of numbers from $1$ to $el$ in the array such that their total count is $sum$ (including those already placed for us where $el \geq a_i > 0$). In fact, this is the same dynamic programming as in the easy version: in one dimension we have a straightforward prefix, and in the other the count of already placed elements, which is effectively $m - \{\text{the number of elements left to place}\}$, which we had in the first dynamic programming. Thus, the states are essentially the same as in the easy version, but the thinking is slightly different, in a sense "expanded." In the transition, as in the easy version, we will iterate $k$ from $0$ to $c$: how many elements equal to $el$ we are placing. According to our criteria, all occurrences of $el$ must be in the first $c + (sum - k)$ positions. We check this by precomputing the array $last[el]$, the index of the last occurrence of $el$ in the given array. It must also hold that $sum \geq k$, $k \geq cnt[el]$, and $c + sum - k \leq m$. If any of these conditions are not met, then $dp[el][sum] = 0$. The recounting has the same nature: only the first $c + sum - k$ positions are permissible. $sum - k$ of them are already occupied by smaller numbers, so there are $c$ free positions. However, some of them may be occupied by elements $\geq el$ that have been placed beforehand. Thus, we need to find out how many elements $\geq el$ are in the prefix of length $c + sum - k$ (which can be precomputed for all elements and all prefixes without much effort), and these positions are also occupied. In the remaining positions, we need to place $k - cnt[el]$ of our elements equal to $el$. Accordingly, the number of ways to do this is the corresponding binomial coefficient multiplied by $dp[el-1][sum-k]$. The base of the dynamic programming is $dp[0][0] = 1$, and the answer is still in $dp[n][m]$. The asymptotic complexity is $O(n \cdot m \cdot c)$.
[ "combinatorics", "dp", "math" ]
2,900
#include <bits/stdc++.h> #define int long long using namespace std; const int N = 105; int fact[N * N], inv_fact[N * N]; const int M = (int) 1e9 + 7; int binpow(int a, int x) { int ans = 1; while (x) { if (x % 2) { ans *= a; ans %= M; } a *= a; a %= M; x /= 2; } return ans; } int C(int k, int n) { if (k > n || k < 0) return 0; return (((fact[n] * inv_fact[k]) % M) * inv_fact[n - k] % M); } void solve() { int n, c, m; cin >> n >> c >> m; vector<int> a(m + 1); vector<int> last(n + 1), cnt(n + 1); for (int i = 1; i <= m; i++) { cin >> a[i]; if (a[i] != 0) { last[a[i]] = i; cnt[a[i]]++; } } vector<vector<int>> more_on_prefix(m + 1, vector<int>(n + 1)); for (int i = 1; i <= m; i++) { for (int el = 1; el <= n; el++) { more_on_prefix[i][el] = more_on_prefix[i - 1][el] + (a[i] >= el); } } vector<vector<int>> dp(n + 1, vector<int>(m + 1)); dp[0][0] = 1; for (int el = 1; el <= n; el++) { for (int sum = 0; sum <= m; sum++) { dp[el][sum] = 0; for (int x = 0; x <= c; x++) { if (sum < x) { continue; } if (last[el] > c + sum - x) { continue; } if (x < cnt[el]) { continue; } if (c + sum - x > m) { continue; } int free_spots = c - more_on_prefix[c + sum - x][el]; int need_to_put = x - cnt[el]; dp[el][sum] += (dp[el - 1][sum - x] * C(need_to_put, free_spots)) % M; if (dp[el][sum] >= M) { dp[el][sum] -= M; } } } } cout << dp[n][m] << '\n'; } signed main() { ios::sync_with_stdio(false); cin.tie(nullptr); fact[0] = inv_fact[0] = 1; for (int x = 1; x < N * N; x++) { fact[x] = ((x * fact[x - 1]) % M); inv_fact[x] = binpow(fact[x], M - 2); } int t; cin >> t; while (t--) { solve(); } }
2066
E
Tropical Season
You have $n$ barrels of infinite capacity. The $i$-th barrel initially contains $a_i$ kilograms of water. In this problem, we assume that all barrels weigh the same.

You know that \textbf{exactly} one of the barrels has a small amount of tropical poison distributed on its surface, with a total weight of $0.179$ kilograms. However, you do not know which barrel contains the poison. Your task is to identify this poisonous barrel.

All the barrels are on scales. Unfortunately, the scales do not show the exact weight of each barrel. Instead, for each pair of barrels, they show the result of a comparison between their weights. Thus, for any two barrels, you can determine whether their weights are equal, and if not, which barrel is heavier. The poison and water are included in the weight of the barrel. The scales are always turned on, and the information from them can be used an unlimited number of times.

You also have the ability to pour water. You can pour water from any barrel into any other barrel in any amounts. However, to pour water, you must physically handle the barrel from which you are pouring, so if that happens to be the poisonous barrel, you will die. This outcome must be avoided. However, you can pour water into the poisonous barrel without touching it.

In other words, you can choose the numbers $i, j, x$ ($i \neq j$, $1 \leq i, j \leq n$, $0 < x \leq a_i$, the barrel numbered $i$ is \textbf{not} poisonous) and execute $a_i := a_i - x$, $a_j := a_j + x$, where $x$ is not necessarily an integer.

Is it possible to guarantee the identification of which barrel contains the poison and remain alive, using pouring and the information from the scales? You know that the poison is located on \textbf{exactly} one of the barrels.

Additionally, we ask you to process $q$ queries. In each query, either one of the existing barrels is removed, or an additional barrel with a certain amount of water is added. After each query, you need to answer whether it is possible to guarantee the identification of the poisonous barrel, given that there is exactly one.
If all values $a_1, a_2, \ldots, a_n$ are distinct, we lose immediately (except for $n=1$). Suppose initially there are two barrels of equal weight $a_i = a_j$. Then we look at what the scales indicate for the pair $(i, j)$. If they show "greater" or "less", we know that the poisonous barrel is $i$ or $j$ respectively, and we win immediately. If they show "equal", then both barrels are definitely not poisonous.

We keep only the barrels whose mass $a_i$ appears exactly once; from all the other barrels, we can pour all the water into one safe barrel, and we let the mass of this water be $L$. So we have $L$ water available to pour from a safe barrel, and the remaining barrels hold $b_1 < b_2 < \ldots < b_k$. With $L$ water currently available, we can either:

- Check any barrel that holds $\leq L$ water: pour water from the safe barrel into it until the weights should be equal, then consult the scales. If they show equality, the barrel is certainly safe; otherwise, it is poisonous.
- Take any two barrels whose difference is at most $L$, i.e. $b_j - b_i \leq L$: add water to the $i$-th barrel to equalize the weights, then compare the pair $(i,j)$, thereby checking both.

We have no other methods; if with the current $L$ both of these methods are inapplicable but more than one unverified barrel remains, we lose.

If we consider that the first method is applied "automatically" as long as possible, then at any moment we possess some prefix of barrels $b_1 < b_2 < \ldots < b_i < [\ldots] < b_{i+1} < \ldots < b_k$, since if we have checked a barrel $b_j > b_i$, we can also check $b_i$. Therefore, we need to determine whether it is true that, having $L+b_1+b_2+\ldots+b_i$ of free water, we can perform some action to check one of the barrels $b_{i+1}, \ldots, b_k$, for all $i$ from $0$ to $k-2$.

If this holds, we can iteratively determine everything; if not, at some point we hit a wall and cannot determine anything about the barrels $b_{i+1}, \ldots, b_k$. The upper limit is $k-2$, not $k-1$, because if we check all barrels except one, the remaining unverified barrel is deemed poisonous by the process of elimination.

Let's say that an index $i$ is special if $b_{i+1} > b_1+b_2+\ldots+b_i$. Note that we can check the condition from the previous paragraph not for all $i$, but only for special $i$, and this is equivalent, since from a non-special index we can always check barrel $i+1$. Moreover, there are no more than $\log_2(b_k)$ special indices, since with each special index the prefix sum of the array $b$ at least doubles.

Therefore, our further plan for solving the problem is as follows: after each query, we find all special indices and explicitly check each of them, finding $\min (b_{j+1}-b_j)$ on the suffix and comparing it with $L+b_1+\ldots+b_i$. We will learn to find and check a special index in $O(\log(a_n))$ and obtain a solution in $O(q \cdot \log^2(a_n))$. We can use a set to support the "unique" weights of barrels and the amount of available water at the very beginning. Then, adding/removing unique barrels can be implemented using a large segment tree of size $\max(a_i)=10^6$. Thus, finding the next "special" index and checking it can be implemented by descending/querying in this segment tree.
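For intuition, the per-configuration check described above can be sketched without any data structures. The following is a naive $O(k^2)$ illustration of the feasibility criterion for a single barrel configuration (the function name and the brute-force suffix scans are ours; the full solution evaluates the same condition only at the special indices, replacing the scans with the segment-tree queries described above):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Naive feasibility check for one fixed multiset of barrels, following
// the editorial: duplicated weights are immediately safe water L; the
// unique weights b_1 < ... < b_k must be unlocked prefix by prefix.
bool feasible(const vector<long long>& a) {
    map<long long, int> cnt;
    for (long long x : a) cnt[x]++;
    long long avail = 0;          // L: water from non-unique (safe) barrels
    vector<long long> b;          // weights appearing exactly once, ascending
    for (auto [v, c] : cnt) {
        if (c == 1) b.push_back(v);
        else avail += v * c;
    }
    int k = (int) b.size();
    for (int i = 0; i + 1 < k; i++) {   // last barrel is found by elimination
        // Method 2 needs two unchecked barrels differing by <= avail;
        // among sorted values the best pair is adjacent.
        long long gap = LLONG_MAX;
        for (int j = i; j + 1 < k; j++) gap = min(gap, b[j + 1] - b[j]);
        // Method 1 needs the next barrel's contents to be <= avail.
        if (b[i] > avail && gap > avail) return false;  // stuck: we lose
        avail += b[i];            // barrel i is now checked, gain its water
    }
    return true;
}
```

Here `avail` after $i$ iterations equals $L + b_1 + \ldots + b_i$, matching the quantity in the editorial.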
[ "binary search", "data structures", "greedy", "implementation" ]
3,300
#include <bits/stdc++.h>
using namespace std;
#define int long long
typedef long long ll;
typedef long double ld;
#define all(x) (x).begin(), (x).end()
#define rall(x) (x).rbegin(), (x).rend()
#define pb push_back
#define ar(x) array<int, x>

const int MAXA = 1'000'009;
const int INF = (int) 1e18;

struct SegTreeMIN {
    int n = 1;
    vector<int> tree;
    SegTreeMIN(int n_) {
        while (n <= n_) n *= 2;
        tree.assign(4 * n + 7, INF);
    }
    void upd(int v, int l, int r, int i, int x) {
        if (l + 1 == r) {
            tree[v] = x;
            return;
        }
        int mid = (l + r) / 2;
        if (i < mid) upd(2 * v + 1, l, mid, i, x);
        else upd(2 * v + 2, mid, r, i, x);
        tree[v] = min(tree[2 * v + 1], tree[2 * v + 2]);
    }
    int get(int v, int l, int r, int lq, int rq) {
        if (l >= rq || lq >= r) return INF;
        if (lq <= l && r <= rq) return tree[v];
        int mid = (l + r) / 2;
        return min(get(2 * v + 1, l, mid, lq, rq), get(2 * v + 2, mid, r, lq, rq));
    }
    void upd(int i, int x) { upd(0, 0, n, i, x); }
    int get(int lq, int rq) { return get(0, 0, n, lq, rq); }
};

struct SegTreeSUM {
    int n = 1;
    vector<int> tree;
    SegTreeSUM(int n_) {
        while (n <= n_) n *= 2;
        tree.assign(4 * n + 7, 0);
    }
    void upd(int v, int l, int r, int i, int x) {
        if (l + 1 == r) {
            tree[v] = x;
            return;
        }
        int mid = (l + r) / 2;
        if (i < mid) upd(2 * v + 1, l, mid, i, x);
        else upd(2 * v + 2, mid, r, i, x);
        tree[v] = tree[2 * v + 1] + tree[2 * v + 2];
    }
    int get(int v, int l, int r, int lq, int rq) {
        if (l >= rq || lq >= r) return 0;
        if (lq <= l && r <= rq) return tree[v];
        int mid = (l + r) / 2;
        return get(2 * v + 1, l, mid, lq, rq) + get(2 * v + 2, mid, r, lq, rq);
    }
    void upd(int i, int x) { upd(0, 0, n, i, x); }
    int get(int lq, int rq) { return get(0, 0, n, lq, rq); }
};

struct Solver {
    int WATER = 0; // sum of all but unique
    map<int, int> barrels;
    set<int> unique_barrels;
    SegTreeMIN dist_to_next = SegTreeMIN(MAXA);
    SegTreeSUM available_barrels = SegTreeSUM(MAXA);

    int prev_unique(int x) {
        if (*unique_barrels.begin() >= x) return -1;
        auto it = unique_barrels.lower_bound(x);
        it--;
        return *it;
    }
    int next_unique(int x) {
        if (*unique_barrels.rbegin() <= x) return -1;
        auto it = unique_barrels.upper_bound(x);
        return *it;
    }
    void upd_dist(int x) {
        if (x == -1) return;
        if (unique_barrels.find(x) == unique_barrels.end()) {
            dist_to_next.upd(x, INF);
            return;
        }
        int y = next_unique(x);
        if (y == -1) {
            dist_to_next.upd(x, INF);
            return;
        }
        dist_to_next.upd(x, y - x);
    }
    void upd_ava(int x) {
        if (unique_barrels.find(x) == unique_barrels.end()) {
            available_barrels.upd(x, 0);
        } else {
            available_barrels.upd(x, x);
        }
    }
    void unique_add(int x) {
        WATER -= x;
        unique_barrels.insert(x);
        int y = prev_unique(x);
        upd_dist(y);
        upd_dist(x);
        upd_ava(x);
    }
    void unique_erase(int x) {
        WATER += x;
        int y = prev_unique(x);
        unique_barrels.erase(x);
        upd_dist(y);
        upd_dist(x);
        upd_ava(x);
    }
    void add(int x) {
        WATER += x;
        if (barrels[x] == 1) unique_erase(x);
        barrels[x]++;
        if (barrels[x] == 1) unique_add(x);
    }
    void erase(int x) {
        assert(barrels[x] >= 1);
        WATER -= x;
        if (barrels[x] == 1) unique_erase(x);
        barrels[x]--;
        if (barrels[x] == 1) unique_add(x);
    }
    int sum_water(int x) { // total amount of water in u-barrels <= x
        return available_barrels.get(0, x + 1);
    }
    bool is_special(int x) {
        return x > WATER + sum_water(x - 1);
    }
    vector<int> find_all_special() {
        int balance = WATER;
        vector<int> ans;
        while (balance < MAXA) {
            int x = next_unique(balance);
            if (x == -1) break;
            if (is_special(x)) ans.push_back(x);
            balance = WATER + sum_water(x);
        }
        return ans;
    }
    bool check_val(int x) { // if we have all u-barrels < x, can we get another one?
        if (unique_barrels.size() <= 1) return true;
        int y = prev_unique(*unique_barrels.rbegin());
        if (x > y) return true; // <= 1 u-barrels left
        int BALANCE = WATER + sum_water(x - 1);
        y = next_unique(x - 1);
        assert(y != -1);
        if (BALANCE >= y) return true;
        int min_diff = dist_to_next.get(x, dist_to_next.n);
        if (BALANCE >= min_diff) return true;
        return false;
    }
    bool check() {
        if (unique_barrels.empty()) return true;
        for (auto x : find_all_special()) {
            if (!check_val(x)) return false;
        }
        return true;
    }
};

void solve() {
    int n, q;
    cin >> n >> q;
    Solver boss;
    for (int i = 0; i < n; i++) {
        int x;
        cin >> x;
        boss.add(x);
    }
    if (boss.check()) {
        cout << "Yes\n";
    } else {
        cout << "No\n";
    }
    for (int i = 0; i < q; i++) {
        char c;
        int x;
        cin >> c >> x;
        if (c == '+') boss.add(x);
        else boss.erase(x);
        if (boss.check()) {
            cout << "Yes\n";
        } else {
            cout << "No\n";
        }
    }
}

signed main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    solve();
}
2066
F
Curse
You are given two arrays of integers: $a_1, a_2, \ldots, a_n$ and $b_1, b_2, \ldots, b_m$. You need to determine if it is possible to transform array $a$ into array $b$ using the following operation several (possibly, zero) times. - Among all non-empty subarrays$^{\text{∗}}$ of $a$, choose any with the maximum sum, and replace this subarray with an arbitrary non-empty integer array. If it is possible, you need to construct any possible sequence of operations. Constraint: in your answer, the sum of the lengths of the arrays used as replacements must not exceed $n + m$ across all operations. The numbers must not exceed $10^9$ in absolute value. \begin{footnotesize} $^{\text{∗}}$An array $a$ is a subarray of an array $b$ if $a$ can be obtained from $b$ by the deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end. \end{footnotesize}
$\operatorname{MSS}(a)$ denotes the maximum sum of a non-empty subarray of $a$. A detailed proof of the solution is given at the end of the analysis.

First, we need to characterize all arrays reachable from $a$ through operations. Often, in such cases, we want to find some property that remains unchanged during operations or changes in a predictable way (for example, only increases). However, in this problem, neither the elements nor the length of the array are constant, so at first glance it is unclear what kind of invariant to look for.

Let's consider a minimal example where not all arrays are explicitly reachable from the original one and there are some restrictions. Let $a = [-1, -1]$. In the first operation, we replace one of the $-1$s. If we replace it with an array with maximum subarray sum $> -1$, we are then obliged to replace something inside that subarray. If we replace it with an array with maximum subarray sum $\leq -1$, we must again replace either strictly inside that array or the second $-1$. Thus, all changes occur either in the array generated by the first $-1$ or in the one generated by the second $-1$. We can place a barrier between the elements of the array: $[(-1) \color{red}{|}(-1)]$. No replacement of a subarray will ever cross this barrier.

It turns out that, in general, we can set such barriers, which are never crossed during operations, in a fairly intuitive way. For the array $a_1, \ldots, a_n$, we define a cool partition of the array into subsegments recursively:

- The cool partition of an empty array is empty.
- We find any segment of maximum length $[l,r]$ such that $\operatorname{MSS}(a) = a_l + \ldots + a_r$. Then the cool partition consists of the segments of the cool partition of $a_1, \ldots, a_{l-1}$, the segment $[l,r]$, and the segments of the cool partition of $a_{r+1}, \ldots, a_n$.

It is not difficult to prove that the cool partition is uniquely defined for each array.
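As a sanity check, the recursive definition above can be implemented directly. This quadratic sketch is our own illustration (the contestant's `cool_split` below performs the same split, finding the maximum-sum subarray with a linear scan): take a maximum-sum subarray of maximum length, emit it, and recurse on both sides.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Quadratic illustration of the recursive cool partition: on [lo, hi)
// take a maximum-sum subarray (longest among those of maximum sum),
// emit it, and recurse on the parts to its left and right.
void coolPartition(const vector<long long>& a, int lo, int hi,
                   vector<pair<int, int>>& out) {
    if (lo >= hi) return;
    long long best = LLONG_MIN;
    int bl = lo, br = lo;
    for (int l = lo; l < hi; l++) {
        long long s = 0;
        for (int r = l; r < hi; r++) {
            s += a[r];
            if (s > best || (s == best && r - l > br - bl)) {
                best = s; bl = l; br = r;
            }
        }
    }
    coolPartition(a, lo, bl, out);
    out.push_back({bl, br});                 // inclusive segment [bl, br]
    coolPartition(a, br + 1, hi, out);
}
```

For $a = [-1, -1]$ this yields the segments $[0,0]$ and $[1,1]$, i.e. exactly the barrier $[(-1)\,|\,(-1)]$ from the example above.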
It turns out that if we draw barriers between neighboring segments of the cool partition, these barriers are never crossed during operations. Moreover, the cool partition of any array obtained from $a$ through operations contains at least the same barriers as that of $a$. At the same time, by construction, one operation on the array $a$ completely replaces one of the segments of the cool partition with the maximum sum.

It turns out that all arrays reachable from $a$ must have the following structure:

- Some number $x$ is chosen from the sums of the segments of the cool partition of $a$.
- The segments of the cool partition of $a$ whose sum is $< x$ remain unchanged.
- Among the remaining segments: one is replaced with an arbitrary non-empty array, and every other one with an arbitrary non-empty array $y$ such that $\operatorname{MSS}(y) \leq x$.

The sequence of operations is as follows: we replace all segments in descending order of their sums. The segment we want to replace with an arbitrary array is first replaced with $\{x\}$. After all segments have been replaced, the last operation replaces this $\{x\}$ with an arbitrary array.

Knowing this, we can solve the problem using dynamic programming in $O(n^2 \cdot m)$. We precompute the cool partition of the array $a$. Then we can iterate over the separator value $x$ in $O(n)$, and for each such value run the dynamic programming $dp[i][j][flag]$: whether it is possible to match the prefix of the first $i$ segments of the cool partition to the prefix $b_1,b_2,\ldots,b_j$, where $flag$ indicates whether we have already used a replacement with an arbitrary non-empty array, or all replacements so far used arrays with $\operatorname{MSS} \leq x$.
Naively, the recalculation works in $O(n)$ per state, but with some simple optimizations it is easy to achieve a total recalculation complexity of $O(n \cdot m)$ over all states; then the entire dynamic programming works in $O(n \cdot m)$, and the whole solution in $O(n^2 \cdot m)$.

Now, here is the complete proof of the solution.

Statement. Given an array $a_1, a_2, \ldots, a_n$ of integers, the following operation is allowed: choose any pair $(l, r)$ such that $1 \leq l \leq r \leq n$ and $a_l + \ldots + a_r$ is maximum among all numbers of the form $a_{l'} + \ldots + a_{r'}$; replace the subarray $a_l \ldots a_r$ with an arbitrary non-empty array. That is, replace the working array $[a_1, \ldots, a_n]$ with $[a_1, \ldots, a_{l-1}, b_1, \ldots, b_m, a_{r+1}, \ldots, a_n]$, where $m \geq 1$ and $b_1, \ldots, b_m$ are chosen by you, and the numbers are integers.

Goal: characterize all arrays reachable from $[a_1, \ldots, a_n]$ through unlimited applications of such operations.

General remarks. $[l, r]$ denotes the subarray $a_l \ldots a_r$ ($l \leq r$). $sum(l, r)$ denotes $a_l + \ldots + a_r$.

Definition 1. A segment $[l, r]$ is called cool if:
- $sum(l, r)$ is maximum among all segments;
- $\forall (l_0 \leq l) \wedge (r_0 \geq r) \wedge (l_0 < l \vee r_0 > r)$ it holds that $sum(l_0, r_0) < sum(l, r)$.

Claim 2. If operations are applied only to cool segments, the set of reachable arrays does not change.

Proof: if the segment $[l, r]$ is not cool, then $\exists l_0, r_0$ such that $l_0 \leq l$, $r_0 \geq r$ and $[l_0, r_0]$ is cool. Then applying the operation to $[l, r]$ with the array $b_1, \ldots, b_m$ can be replaced by applying the operation to $[l_0, r_0]$ with the array $a_{l_0}, \ldots, a_{l-1}, b_1, \ldots, b_m, a_{r+1}, \ldots, a_{r_0}$, and the array after replacement remains the same. Therefore, any operation on a non-cool segment can be freely replaced with an operation on a cool one, which proves the claim.
From now on, we assume that operations are applied only to cool segments. By Claim 2, this is equivalent to the original problem.

Claim 3. Two different cool segments do not intersect. That is, if $[l_1,r_1]$ and $[l_2,r_2]$ are cool segments and $l_1 \neq l_2 \vee r_1 \neq r_2$, then $[l_1,r_1] \cap [l_2,r_2] = \emptyset$.

Proof: assume there are intersecting cool segments; we consider two cases, proper intersection and nesting. For a proper intersection, consider the union and the intersection of the two segments: the sum of the union plus the sum of the intersection equals the sum of the two cool segments. Thus, either the sum of the union is not less than the maximum, and the segments are not cool, since they can be expanded; or the sum of the intersection is greater, and the segments are not cool, since there exists a segment with a greater sum. Nesting is impossible, as the smaller segment could then be expanded to the larger one, so by definition it is not cool.

Let's introduce the concept of partitioning the array into cool segments: a cool partition.

Definition 4. For the array $a_1, \ldots, a_n$, we define the cool partition recursively: if $[1,n]$ is a cool segment, the cool partition consists of the single segment $[1,n]$. Otherwise, we find any cool segment $[l,r]$; the cool partition then consists of the segments of the cool partition of $a_1, \ldots, a_{l-1}$, the segment $[l,r]$, and the segments of the cool partition of $a_{r+1}, \ldots, a_n$. It is easy to see that the cool partition is uniquely defined; this follows from Claim 3.

Claim 5. In one operation, one of the segments of the cool partition with the maximum sum is completely replaced. Indeed, the cool partition is defined precisely so that it includes all cool segments of the original array, and we replace only cool segments.

Claim 6. The existing boundaries of the cool partition necessarily remain in place after an operation.
That is, if our array and its cool partition are $[a_1, \ldots, a_{r_1}], [a_{r_1+1}, \ldots, a_{r_2}], \ldots, [a_{r_{k-1}+1}, \ldots, a_{r_k}]$, and we replace a cool segment $[a_{r_i+1}, \ldots, a_{r_{i+1}}]$ of maximum sum with some $b_1,\ldots, b_m$, then after the operation the cool partition of the array still contains all the same segments to the left and right of the replaced one. That is, the cool partition of the new array looks like $[a_1, \ldots, a_{r_1}], \ldots, [a_{r_{i-1}+1}, \ldots, a_{r_i}], BCOOL, [a_{r_{i+1}+1}, \ldots, a_{r_{i+2}}], \ldots, [a_{r_{k-1}+1}, \ldots, a_{r_k}]$, where $BCOOL$ is the cool partition of the array $b_1,\ldots, b_m$.

Proof: any subarray $M$ from the cool partition has the property that the sums of all subarrays of $M$ do not exceed the sum of $M$; otherwise, a subsegment of $M$ with a greater sum would have been chosen instead of $M$ in the cool partition. Also note that $\forall l \leq r_i$: $a_l+\ldots+a_{r_i}<0$, since otherwise $[a_{r_i+1}, \ldots, a_{r_{i+1}}]$ would not be a cool segment, as it could be expanded to the left without decreasing its sum. Similarly, $\forall R \geq r_{i+1}+1$: $a_{r_{i+1}+1} + \ldots + a_R < 0$.

Therefore, the boundaries around the old cool segment are necessarily preserved: if one of them were violated, the subarray crossing the boundary would have a prefix or suffix with a negative sum, which contradicts the property above, since the complement of this prefix/suffix would have a greater sum than the subarray itself. Since the boundaries hold, the three parts of the cool partition are independent of each other, so it stays the same on the left and on the right as before, while inside there is $BCOOL$.

Definition 7. The scale of an operation replacing the segment $[l,r]$ is $sum(l,r)$.
Definition 8. A finite sequence of operations $op_1, op_2, \ldots, op_k$ is called reasonable if the sequence of scales of the operations is non-increasing, i.e. $s_i \geq s_{i+1}$ for all $i$, where $s_i$ is the scale of the $i$-th operation.

Claim 9. If we consider only reasonable sequences of operations, the set of reachable arrays remains the same.

Proof: if a sequence of operations is unreasonable, it has $s_i < s_{i+1}$ for some $i$. This means that during the $i$-th operation, the maximum sum over the cool segments increased. But as we know, all the old segments remain, and only $BCOOL$ is new; thus the $(i+1)$-th operation happens entirely within $b_1,\ldots,b_m$. But then we could have made this replacement already in the $i$-th operation, using one operation less. Therefore, the shortest path to each reachable array is reasonable, as any unreasonable one can be shortened by one operation; hence any reachable array is reachable by a reasonable sequence of operations.

From now on, we consider only reasonable sequences of operations; by Claim 9, this is equivalent to the original problem. Now we are ready to formulate and prove the necessary condition for a reachable array.

Let the sums of the segments of the cool partition of the original array be $[s_1, s_2, \ldots, s_k]$. It is claimed that every reachable array must have the following form (necessary condition):

- Some integer $x$ is chosen.
- The segments of the cool partition whose sum is $< x$ remain unchanged.
- Among the remaining segments: one chosen segment is replaced with an arbitrary non-empty array, and all others with arbitrary non-empty arrays whose maximum subarray sum is $\leq x$.
- The reachable array is the concatenation of these segments.

Let's show that any finite reasonable sequence of operations must lead us to an array of the described form. We have the initial boundaries of the cool partition of the given array.
As we know, these boundaries remain with us forever. Also, the sums within these segments cannot increase before the last operation is applied, since we consider only reasonable sequences; the very last operation can, of course, increase a sum. So suppose that in the last operation we replace a subarray with scale $= x$, and look at the state of our array before this last operation.

The segments of the cool partition with sum $< x$ cannot have changed, again because we consider only reasonable sequences, and since $x$ is the scale of the last operation, all operations had scale at least $x$. In the segments of the cool partition with sum $\geq x$, between the same boundaries, the maximum subarray sum must now be $\leq x$. New boundaries of the cool partition may have appeared, but that is not important: we only look at the original boundaries and the sums between them. Thus we obtain exactly the description above. In the last operation, one of the segments is replaced with an arbitrary array; this last segment must also lie entirely within the boundaries of one of the segments of the original cool partition, so we can safely say that the entire corresponding cool segment is replaced, as in the claimed description. In the end, any sequence of operations fits the description.

It remains to show sufficiency: that any array from the description is reachable. We simply replace the cool segments with the desired arrays in descending order of their sums. The one we plan to replace last, with an arbitrary array, is first replaced simply with $[x]$; then, when everything else has been replaced, we replace it last with an arbitrary array. Thus, the proposed description is indeed the description of all reachable arrays.
[ "constructive algorithms", "dp", "math" ]
3,300
#include <bits/stdc++.h>
using namespace std;
// #define int long long
typedef long long ll;
typedef long double ld;
#define all(x) (x).begin(), (x).end()
#define rall(x) (x).rbegin(), (x).rend()
#define sum(x) accumulate(all(x), 0)

const int INF = 2'000'000'000;
const int PLACEHOLDER = INF;

vector<vector<int>> cool_split(vector<int> a) {
    if (a.empty()) return {};
    int n = (int) a.size();
    array<int, 4> mxi = {-INF, 0, 0, 0};
    int sumi = 0;
    int smallest_pref = 0, prefi = 0;
    for (int i = 0; i < n; i++) {
        sumi += a[i];
        mxi = max(mxi, {sumi - smallest_pref, i - prefi, prefi, i});
        if (smallest_pref > sumi) {
            smallest_pref = sumi;
            prefi = i + 1;
        }
    }
    auto [_, __, l, r] = mxi;
    vector<int> al, amid, ar;
    for (int i = 0; i < n; i++) {
        if (i < l) al.push_back(a[i]);
        else if (i <= r) amid.push_back(a[i]);
        else ar.push_back(a[i]);
    }
    vector<vector<int>> X = cool_split(al), Y = cool_split(ar);
    X.push_back(amid);
    for (auto el : Y) X.push_back(el);
    return X;
}

const int N = 503;
bool dp[N][N][2];
array<int, 2> go[N][N][2];
int maxsubsum[N][N];

void solve() {
    int n, m;
    cin >> n >> m;
    vector<int> a(n);
    for (int i = 0; i < n; i++) cin >> a[i];
    vector<int> b(m), prefb(m);
    for (int i = 0; i < m; i++) cin >> b[i];
    prefb[0] = b[0];
    for (int i = 1; i < m; i++) prefb[i] = b[i] + prefb[i - 1];
    auto get_sum = [&](int l, int r) {
        if (l == 0) return prefb[r];
        return prefb[r] - prefb[l - 1];
    };
    for (int i = 0; i < m; i++) maxsubsum[i][i] = b[i];
    for (int l = 1; l < m; l++) {
        for (int i = 0; i + l < m; i++) {
            int j = i + l;
            maxsubsum[i][j] = max({maxsubsum[i + 1][j], maxsubsum[i][j - 1], get_sum(i, j)});
        }
    }
    vector<vector<int>> spa = cool_split(a);
    vector<array<int, 2>> order;
    for (int i = 0; i < spa.size(); i++) {
        order.push_back({sum(spa[i]), i});
    }
    sort(rall(order));
    vector<tuple<int, int, vector<int>>> ops;
    auto check = [&](int u) -> bool {
        for (int i = 0; i <= spa.size(); i++) {
            for (int j = 0; j <= m; j++) {
                dp[i][j][0] = dp[i][j][1] = false;
            }
        }
        dp[0][0][0] = true;
        for (int i = 0; i < spa.size(); i++) {
            if (spa[i] != (vector<int>) {PLACEHOLDER}) {
                for (int j = 0; j + spa[i].size() <= m; j++) {
                    bool ok = true;
                    for (int k = 0; k < spa[i].size(); k++) {
                        if (spa[i][k] != b[j + k]) {
                            ok = false;
                            break;
                        }
                    }
                    if (!ok) continue;
                    for (auto flag : {0, 1}) {
                        if (dp[i][j][flag]) {
                            dp[i + 1][j + spa[i].size()][flag] = true;
                            go[i + 1][j + spa[i].size()][flag] = {j, flag};
                        }
                    }
                }
                continue;
            }
            int jr = 0;
            bool brandnew = true;
            int lastgood0 = 0, lastgood1 = 0;
            for (int j = 0; j < m; j++) {
                while (jr < m && (jr < j || maxsubsum[j][jr] <= u)) jr++;
                if (dp[i][j][0]) {
                    for (int jk = max(lastgood0, j); jk < jr; jk++) {
                        dp[i + 1][jk + 1][0] = true;
                        go[i + 1][jk + 1][0] = {j, 0};
                    }
                    lastgood0 = jr;
                    if (brandnew) {
                        brandnew = false;
                        for (int jk = jr; jk < m; jk++) {
                            dp[i + 1][jk + 1][1] = true;
                            go[i + 1][jk + 1][1] = {j, 0};
                        }
                    }
                }
                if (dp[i][j][1]) {
                    for (int jk = max(lastgood1, j); jk < jr; jk++) {
                        dp[i + 1][jk + 1][1] = true;
                        go[i + 1][jk + 1][1] = {j, 1};
                    }
                    lastgood1 = jr;
                }
            }
        }
        for (auto flag : {0, 1}) {
            if (dp[spa.size()][m][flag]) {
                int i = (int) spa.size(), j = m;
                int idx = 0;
                for (auto ar : spa) idx += (int) ar.size();
                pair<int, vector<int>> last_op = {-1, {}};
                while (i > 0) {
                    idx -= (int) spa[i - 1].size();
                    int j2 = go[i][j][flag][0];
                    int flag2 = go[i][j][flag][1];
                    if (spa[i - 1] == (vector<int>) {PLACEHOLDER}) {
                        vector<int> xar;
                        for (int jk = j2; jk < j; jk++) xar.push_back(b[jk]);
                        if (flag == 1 && flag2 == 0) {
                            last_op = {i - 1, xar};
                        } else {
                            ops.push_back({idx + 1, idx + (int) spa[i - 1].size(), xar});
                            spa[i - 1] = xar;
                        }
                    }
                    i--;
                    j = j2;
                    flag = flag2;
                }
                if (last_op.first != -1) {
                    idx = 0;
                    int ri = last_op.first;
                    auto xar = last_op.second;
                    for (int jk = 0; jk < ri; jk++) idx += (int) spa[jk].size();
                    ops.push_back({idx + 1, idx + (int) spa[ri].size(), xar});
                }
                return true;
            }
        }
        return false;
    };
    for (int j = 0; j < order.size(); j++) {
        auto [u, i] = order[j];
        int idx = 0;
        for (int k = 0; k < i; k++) idx += (int) spa[k].size();
        ops.push_back({idx + 1, idx + (int) spa[i].size(), {PLACEHOLDER}});
        spa[i] = {PLACEHOLDER};
        if (j + 1 < order.size() && order[j][0] == order[j + 1][0]) continue;
        if (check(u)) {
            cout << ops.size() << '\n';
            for (auto [l, r, xar] : ops) {
                cout << l << ' ' << r << ' ' << xar.size() << '\n';
                for (auto el : xar) {
                    if (el == PLACEHOLDER) el = u;
                    cout << el << ' ';
                }
                cout << '\n';
            }
            return;
        }
    }
    cout << "-1\n";
}

signed main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    int t;
    cin >> t;
    while (t--) solve();
}
2067
A
Adjacent Digit Sums
You are given two numbers $x, y$. You need to determine if there exists an integer $n$ such that $S(n) = x$, $S(n + 1) = y$. Here, $S(a)$ denotes the sum of the digits of the number $a$ in the decimal numeral system.
Let's look at the last digit of the number $n$. If it is not equal to $9$, then in the number $n+1$, this last digit will be increased by $1$, while the rest of the number will remain unchanged. Thus, for such $n$, we have $S(n+1) = S(n) + 1$. In general, if the number $n$ ends with $k$ consecutive digits of $9$, then it turns out that $S(n+1) = S(n) + 1 - 9k$, since all these $9$s will be replaced by $0$s after adding one to the number $n$. Therefore, if there does not exist an integer $k \geq 0$ such that $y = x + 1 - 9k$, then the answer is definitely No. Otherwise, it is easy to see that the answer is definitely Yes. One of the suitable numbers is: $n = \underbrace{11 \ldots 1}_{x-9k}\underbrace{99 \ldots 9}_{k}$. Then, $S(n) = x - 9k + 9k = x$. And $n + 1 = \underbrace{11 \ldots 1}_{x-9k-1}2\underbrace{00 \ldots 0}_{k}$. Thus, $S(n+1) = x - 9k - 1 + 2 = x + 1 - 9k$. Therefore, to solve the problem, we need to check whether the number $\frac{x + 1 - y}{9}$ is an integer and non-negative.
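The key identity above, $S(n+1) = S(n) + 1 - 9k$ where $k$ is the number of trailing nines of $n$, is easy to stress-test. A small brute-force sketch (our own addition; we assume checking a finite prefix of the integers is convincing enough):

```cpp
#include <bits/stdc++.h>

// Digit sum of n in base 10.
int S(long long n) { int s = 0; for (; n > 0; n /= 10) s += n % 10; return s; }

// Returns true iff S(n+1) == S(n) + 1 - 9k holds for all 1 <= n <= limit,
// where k is the number of trailing 9s of n.
bool verifyCriterion(long long limit) {
    for (long long n = 1; n <= limit; n++) {
        int k = 0;
        for (long long t = n; t % 10 == 9; t /= 10) k++;
        if (S(n + 1) != S(n) + 1 - 9 * k) return false;
    }
    return true;
}
```

The solution below then only needs the derived check that $x + 1 - y$ is a non-negative multiple of $9$.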
[ "brute force", "constructive algorithms", "math" ]
800
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int x, y;
    cin >> x >> y;
    if (x + 1 >= y && (x + 1 - y) % 9 == 0) {
        cout << "Yes\n";
    } else {
        cout << "No\n";
    }
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    int t;
    cin >> t;
    while (t--) {
        solve();
    }
}
2067
B
Two Large Bags
You have two large bags of numbers. Initially, the first bag contains $n$ numbers: $a_1, a_2, \ldots, a_n$, while the second bag is empty. You are allowed to perform the following operations: - Choose any number from the first bag and move it to the second bag. - Choose a number from the first bag that is also present in the second bag and increase it by one. You can perform an unlimited number of operations of both types, in any order. Is it possible to make the contents of the first and second bags identical?
Note that when a number goes into the second bag, it remains unchanged there until the end of the entire process: our operations cannot interact with it in any way. Therefore, every time we send a number to the second bag, we must keep in mind that an equal number must remain in the first bag by the end of the operations if we want to equalize the contents of the bags. We will call this equal number in the first bag "blocked", as no operations should be performed with it anymore.

Let's sort the array: $a_1 \leq a_2 \leq \ldots \leq a_n$. Our first action is to send one of the numbers to the second bag, since the second bag is empty at the beginning of the operations, which means the second operation is not available.

We will prove that at some point we will definitely want to send a number equal to $a_1$ to the second bag. Proof by contradiction: suppose we never do this. Then all the numbers in the second bag at the end of the operations are $> a_1$, while the number $a_1$ remains in the first bag and cannot be increased, since we never sent a copy of $a_1$ to the second bag. Thus, the contents of the bags will never be equal if we do not send the number $a_1$ to the second bag; therefore, during the operations, we must do this. And we can do this as the first operation, since operations with numbers $> a_1$ do not interact with $a_1$ in any case.

So, our first move: transfer $a_1$ to the second bag. Now we need to "block" one copy of the number $a_1$ in the first bag and not use it in further operations. Therefore, if $a_2 > a_1$, we instantly lose. Otherwise, we fix $a_2 = a_1$ in the first bag and $a_1$ in the second bag, and we return to the original problem, but now with the numbers $a_3, a_4, \ldots, a_n$. However, now we have a number $= a_1$ in the second bag, which means that, perhaps, the first action should not be to transfer the minimum to the second bag, but to somehow use the second operation.
It turns out that it is always optimal to use the second operation whenever possible, not counting the "blocked" numbers: that is, to increase all numbers equal to $a_1$ in the first bag by one, and then proceed to the same problem with a reduced $n$.

Why is this so? Suppose we leave some numbers equal to $a_1$ in the first bag without increasing them. Then, by the same logic, we must transfer one of them to the second bag, blocking the equal number in the first bag. But the same could be done if we increased both numbers by $1$: they would still be equal, and there would still be the option to transfer one of them to the second bag and block the equal one in the first. Moreover, a number equal to $a_1$ is already in the second bag, so adding a second copy does not expand the arsenal of possible operations in any way. Therefore, it is never worse to add one to all remaining numbers $= a_1$, and then proceed to the problem with the array $a_3, a_4, \ldots, a_n$, where all numbers are $> a_1$, which is solved in the same way.

A naive simulation of this process takes $O(n^2)$, but, of course, it can be handled in $O(n \log n)$ without much effort, and if we sort the array using counting sort, it can be done in $O(n)$ altogether.
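The process described above can be simulated literally. Here is the naive $O(n^2)$ version mentioned at the end (function name ours; the official solution below achieves the same effect in $O(n \log n)$ with a single sort and a running maximum):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Naive simulation of the editorial's greedy: repeatedly take the
// current minimum, require a second equal copy (one goes to the second
// bag, one is blocked), then bump every remaining equal number by one.
bool canEqualize(vector<int> a) {
    sort(a.begin(), a.end());
    while (!a.empty()) {
        if (a.size() < 2 || a[0] != a[1]) return false; // no pair for the minimum
        int v = a[0];
        a.erase(a.begin(), a.begin() + 2);  // one copy moved, one blocked
        for (int& x : a)
            if (x == v) x++;                // second operation, applied greedily
        sort(a.begin(), a.end());
    }
    return true;
}
```

For instance, on $[1, 1, 1, 1, 2, 2]$ the simulation pairs off two $1$s, bumps the other two $1$s to $2$, and keeps pairing until the multiset is exhausted.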
[ "brute force", "dp", "greedy", "sortings" ]
1,200
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n;
    cin >> n;
    vector<int> a(n);
    for (int i = 0; i < n; i++) {
        cin >> a[i];
    }
    sort(a.begin(), a.end());
    int mx = 0;
    for (int i = 0; i < n; i += 2) {
        if (max(mx, a[i]) != max(mx, a[i + 1])) {
            cout << "No\n";
            return;
        }
        mx = max(mx, a[i]) + 1;
    }
    cout << "Yes\n";
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    int t;
    cin >> t;
    while (t--) {
        solve();
    }
}
2067
C
Devyatkino
You are given a positive integer $n$. In one operation, you can add to $n$ any positive integer whose decimal representation contains only the digit $9$, possibly repeated several times. What is the minimum number of operations needed to make the number $n$ contain at least one digit $7$ in its decimal representation? For example, if $n = 80$, it is sufficient to perform one operation: you can add $99$ to $n$, after the operation $n = 179$, which contains the digit $7$.
The first idea: the answer does not exceed $9$, since we can add $9$ and keep track of the last digit; with each addition, it will decrease by $1$ (going from $0$ to $9$), so after $9$ additions, the last digit will complete a full cycle and will be $7$ exactly once during the process. The second idea: it is quite difficult to briefly answer how adding a number like $99..99$ will affect the digits of an arbitrary number. However, there is an arithmetically similar operation, adding a number of the form $10^x$, which affects the digits in a much more predictable way. It would be nice if we could add powers of ten in the operation instead of $99..99$. The combination of these two ideas solves the problem. Let's iterate through the possible values of the answer. For each $k$ from $0$ to $9$, we want to determine: is it possible to add exactly $k$ numbers made up of nines to produce a digit $7$? One operation is adding a number of the form $10^x - 1$. Since we know we will perform exactly $k$ operations, we can view this process as adding $k$ powers of ten to the number $n-k$. Adding a power of ten increases one of the digits of the number by $1$ (including leading zeros). It may also replace some $9$s with $0$s, which effectively is also an increase of the digit by $1$, modulo $10$. Therefore, it is not difficult to see that the minimum number of additions of powers of ten needed to introduce a digit $7$ into the number is $\min((7 - digit) \bmod 10)$ across all digits $digit$ of the number, including leading zeros. In summary, we need to iterate $k$ from $0$ to $9$ and compare $k$ with $\min((7 - digit) \bmod 10)$ (where $digit$ is any digit of the number $n-k$, including leading zeros), and output the minimum suitable $k$. An interesting fact: during the solution process, it becomes clear that the answer is $\leq 7$, since $digit$ can always be $0$.
However, this is only true under the constraint $n \geq 7$, as the solution implicitly relies on the fact that the number $n-k$ does not go negative! For $n=5$ and $n=6$, the answer is actually $8$. Additionally, the solution presented above serves as a proof of another solution that one could believe with sufficient intuition. Suppose that in the optimal answer we always add the same number, meaning we never mix adding, say, $9$ and $99$. Believing this, we can iterate over the number being added and keep adding it until we encounter the digit $7$. We then take the minimum over all options for the answer.
[ "brute force", "dfs and similar", "greedy", "math" ]
1,500
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n;
    cin >> n;
    for (int l = 0; l <= 9; l++) {
        string s = to_string(n - l);
        int md = 0;
        for (auto c : s) {
            if (c <= '7') {
                md = max(md, c - '0');
            }
        }
        if (l >= 7 - md) {
            cout << l << '\n';
            return;
        }
    }
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    int t;
    cin >> t;
    while (t--) {
        solve();
    }
}
2068
A
Condorcet Elections
It is a municipality election year. Even though the leader of the country has not changed for two decades, the elections are always transparent and fair. There are $n$ political candidates, numbered from $1$ to $n$, contesting the right to govern. The elections happen using a variation of the \underline{Ranked Voting System}. In their ballot, each voter will rank all $n$ candidates from most preferable to least preferable. That is, each vote is a permutation of $\{1, 2, \ldots, n\}$, where the first element of the permutation corresponds to the most preferable candidate. We say that candidate $a$ defeats candidate $b$ if in more than half of the votes candidate $a$ is more preferable than candidate $b$. As the election is fair and transparent, the state television has already decreed a list of $m$ facts—the $i$-th fact being "candidate $a_i$ has defeated candidate $b_i$"—all before the actual election! You are in charge of the election commission and tallying up the votes. You need to present a list of votes that produces the outcome advertised on television, or to determine that it is not possible. However, you are strongly encouraged to find a solution, or you might upset higher-ups.
This problem is inspired by Condorcet's paradox. To put it simply, Condorcet's paradox states that for certain voters' preferences, it's impossible to always resolve the elections fairly. No matter the outcome, more than half of the voters would prefer to change the result to some other outcome. This is illustrated by the second example in the statement. The problem's answer is always YES. It's possible to construct a list of votes such that any (not trivially contradicting) list of statements "candidate $x_i$ has defeated candidate $y_i$" can be satisfied. Let $S = \{(a, b) \mid 1 \leq a, b \leq n, \, a \ne b\}$, and let $R \subseteq S$ be the list of requirements. Let $\delta_{a,b}(\text{ans})$, where $(a, b) \in S$ and "$\text{ans}$" is our answer, be how many votes prefer $a$ to $b$ minus how many votes prefer $b$ to $a$. We want to ensure that $\delta_{a,b}(\text{ans}) > 0$ for every $(a, b) \in R$. For every $(a, b) \in S$, it's actually possible to construct votes $\pi_1$, $\pi_2$, such that: $\delta_{a,b}(\{\pi_1, \pi_2\}) = 2$. $\delta_{x,y}(\{\pi_1, \pi_2\}) = 0$ for all $(x, y) \in S$, $\{x, y\} \ne \{a, b\}$. Then clearly adding $\pi_1$, $\pi_2$ for every $(a, b) \in R$ results in a correct answer. Thus, the jury solution uses at most $2 {n \choose 2} \le n^2$ votes. Let us construct $\pi_1$, $\pi_2$. For the sake of exposition, let us assume $a = 1$, $b = 2$. Observe that the $\pi_1$, $\pi_2$ given below satisfy the requirements. Votes $\pi$ for any $(a, b) \ne (1, 2)$ can be constructed by renumbering. $\pi_1 = (1, 2, 3, 4, \ldots, n - 1, n)$ $\pi_2 = (n, n - 1, \ldots, 4, 3, 1, 2)$
[ "constructive algorithms", "graphs", "greedy", "probabilities" ]
2,300
null
2068
B
Urban Planning
You are responsible for planning a new city! The city will be represented by a rectangular grid, where each cell is either a park or a built-up area. The residents will naturally want to go for walks in the city parks. In particular, a \underline{rectangular walk} is a rectangle consisting of the grid cells, which is at least 2 cells long both horizontally and vertically, such that all cells on the boundary of the rectangle are parks. Note that the cells inside the rectangle can be arbitrary. \begin{center} An example rectangular walk (cells with dark background). \end{center} Your favourite number is $k$. To leave a long-lasting signature, you want to design the city in such a way that it has exactly $k$ rectangular walks.
Solutions that do not work First of all, let us consider a square with side $n$, consisting only of parks. It has $\left(\frac{n(n-1)}{2}\right)^2$ rectangular walks, since there are $\frac{n(n-1)}{2}$ ways to choose the top and bottom rows, and $\frac{n(n-1)}{2}$ ways to choose the left and right columns. We will denote this number of rectangular walks in a square park as $f(n)$. Now the logical idea would be to find the largest $f(n)$ that does not exceed $k$, create a park with side $n$, and then try to create additional independent parks using the remaining space on the grid to achieve $k-f(n)$ rectangular walks there. When $k=4\cdot10^{12}$, we would choose $n=2000$, get $f(2000)=3\,996\,001\,000\,000$ walks that way, and then we would need to squeeze the remaining $3\,999\,000\,000$ walks into the rest of the $2025 \times 2025$ grid, which looks like two overlapping $24 \times 2025$ rectangles after we cut out the $2000 \times 2000$ rectangle plus adjacent cells to make sure the additional parks do not interact with the big one. However, a $24 \times 2025$ rectangle has only $565\,606\,800$ rectangular walks, so even with two of those we have no chance to reach the required number. Therefore the idea to have a completely independent large square park is too crude, and we need to improve it. Instead of stopping at $f(n)$, let us add more park cells one by one to go from a square with side $n$ to a square with side $n+1$, first completing column $n+1$ from top to bottom, then completing row $n+1$ from left to right. For each newly added park cell, the number of rectangular walks will increase by at most $n^2$. This way we can find an incomplete square with $x$ rectangular walks such that $k-x < n^2$, so we will only need to squeeze less than $n^2$ additional walks in the remaining space.
In practical terms, when $k \le 4\cdot10^{12}$, at the first step we will get a (partial) square with side at most 2001, and have at most $3\,999\,999$ remaining walks, then at the second step we will get a square with side at most 64, at the third step at most 12, then at most 6, then at most 4, then at most 3, at which moment we would have at most 3 remaining walks, which we can address by at most 3 squares with side 2. It is still impossible to put a square of side 2001 and a square of side 64 without overlapping into a grid with side 2025, but now it is clear that we have enough area in our grid for this, and it is only the shape that is the problem. Therefore we should use rectangles instead of squares. Solution that works for $k \le 4\cdot 10^{12}$ In the onsite version of the contest, the constraint was $k \le 4\cdot 10^{12}$, for which the following approach works. Let us start with a rectangle with height 1 and width 2025, and add a second row to it cell by cell from left to right, then the third row, and so on until we would have exceeded $k$ rectangular walks. After this, let us skip one row and start a new rectangle with width 2025 in the same fashion for the remaining walks, and so on until we reach exactly $k$. This way the first (partial) rectangle will be at most $1977 \times 2025$, the second at most $3 \times 2025$, and then at most six more $2 \times 2025$ ones, which fits into our grid even with the additional empty rows between them, so this is a working solution. Solution that works for $k \le 4.194\cdot 10^{12}$ In the online mirror of the contest, this problem had a higher constraint: $k \le 4.194\cdot 10^{12}$. Note that $f(2024) \approx 4.191\cdot 10^{12}$ and $f(2025) \approx 4.2\cdot 10^{12}$, meaning that we need a grid almost full of parks to achieve such high values of $k$, and we cannot afford to have several connected components. First, we do the same as above: let us find a partial rectangle that has almost $k$ rectangular walks.
Additionally, we will force the partial rectangle to have $n$ rows and $n+1$ columns for some value of $n$, and we force the removed cells (which make the rectangle partial) to all be in one corner. In other words, we start with a full $n \times (n+1)$ rectangle, and then remove cells in the following order: first $(0, 0)$, then $(0, 1)$, then $(1, 0)$, then $(1, 1)$, then the cells with coordinates $\le 2$, and so on until the number of rectangular walks becomes $\le k$. Since removing each particular cell reduces the number of rectangular walks by at most $n(n-1)$, we will be able to find a partial rectangle with such number of rectangular walks $x$ that $0\le k-x<n(n-1)$. However, for reasons that will become apparent soon, let us remove a few more cells to achieve $n(n-1)\le k-x<2n(n-1)$. Since $f(n+1)-f(n)=O(n^3)$, and removing a cell reduces the number of rectangular walks by $\Omega(n^2)$, we will only remove $O(n)$ cells from the full rectangle to achieve this. And because of the order in which we remove the cells, all removed cells will be concentrated in a $O(\sqrt{n}) \times O(\sqrt{n})$ corner of the rectangle, which means that we will still have $n-O(\sqrt{n})$ completely untouched rows (with $n+1$ cells each) and $n-O(\sqrt{n})$ completely untouched columns (with $n$ cells each). Now we will add the remaining $k-x$ rectangular walks by adding cells to the right of the untouched rows (in column $n+2$) and to the bottom of the untouched columns (in row $n+1$), so the entire city will fit inside the $(n+1) \times (n+2)$ grid. If we add $y$ consecutive cells to the right of the untouched rows, this will add $\frac{y(y-1)}{2}(n+1)$ rectangular walks, and adding $y$ consecutive cells to the bottom of the untouched columns will add $\frac{y(y-1)}{2}n$ rectangular walks. Now, remember that every number $z\ge n(n-1)$ can be represented as $pn+q(n+1)$ for some non-negative $p$ and $q$.
Since $k-x\ge n(n-1)$, we can find such non-negative $p$ and $q$ that $k-x=pn+q(n+1)$. And now we just need to represent $p$ and $q$ as the sum of values of the form $\frac{y(y-1)}{2}$, and for each such value add a corresponding segment of $y$ extra cells to the right or to the bottom. Representing a given number $p$ as a sum of values of the form $\frac{y(y-1)}{2}$ is a knapsack problem that we will solve greedily: take the largest value of the form $\frac{y(y-1)}{2}$ that does not exceed $p$, and repeat for the remainder. Now since $k-x<2n(n-1)$, both $p$ and $q$ will be $O(n)$, so the first value of $y$ we will get will be $O(\sqrt{n})$, the second will be $O(\sqrt[4]{n})$, and so on, so the output of the greedy knapsack will fit in the $n-O(\sqrt{n})$ untouched rows/columns. This solution does not always work for very small values of $k$, since we need a big enough $n$ for the constant factors hidden in the $O()$ notation above to not matter. For those values of $k$ we can either use the solution above for the smaller constraints, or implement a very simple solution for very small constraints. For example, when $k \le \lfloor\frac{2025}{3}\rfloor^2$, we can just output a grid of separate $2 \times 2$ squares with empty rows and columns between them. Also, since the resulting grid has $n+2$ columns, it means that we must have $n \le 2023$, so for the values of $k$ that exceed the total number of rectangular walks in the $2023 \times 2024$ rectangle it will no longer be true that $k-x<2n(n-1)$. However, one can more carefully examine how much space the greedy knapsack above needs, and find that it can still fit inside the $n-O(\sqrt{n})$ untouched rows/columns at least for $k \le 4.1948\cdot 10^{12}$, which is enough to solve this problem. Remarks There are of course very many approaches that work in this problem; the above solutions are just one possibility.
The judges also had a solution that could get even closer to $f(2025)$, approximately to $k\le \frac{1}{3}f(2024) + \frac{2}{3}f(2025)$. However, we felt that the $k \le 4.194\cdot 10^{12}$ constraint is similarly challenging, and did not want to restrict the possible approaches too much. The fact that we are counting hollow rectangular walks instead of rectangles that also have only parks inside them did not actually affect the above solutions. We made them hollow so that counting rectangles takes $O(n^2\log n)$ instead of $O(n^2)$, so hopefully simple hill-climbing solutions were less likely to work.
[ "constructive algorithms" ]
3,100
null
2068
C
Ads
You have $n$ videos on your watchlist on the popular platform YooCube. The $i$-th video lasts $d_i$ minutes. YooCube has recently increased the frequency of their ads. Ads are shown only between videos. After finishing a video, an ad is shown if either of these two conditions is true: - three videos have been watched since the last ad; - at least $k$ minutes have passed since the end of the last ad. You want to watch the $n$ videos in your watchlist. Given that you have just watched an ad, and that you can choose the order of the $n$ videos, what is the minimum number of ads that you are forced to watch? You can start a new video immediately after the previous video or ad ends, and you don't have to watch any ad after you finish.
Split the chosen viewing order into groups: maximal blocks of consecutive videos watched with no ad in between, so that exactly one ad separates consecutive groups. The result is then equal to $g-1$, where $g$ is the number of groups of videos. Therefore, you want to minimize the number of groups, and you can do this by creating large groups: first of size 3, then of size 2, and of size 1 with whatever is left. To create groups of size 3, we will pick the shortest video ($m$) as the first element of the group, the longest video ($M$) as the last, and for the middle one we choose the longest video of length $x$ such that $m+x<k$. For groups of size 2, we pair the shortest video that is shorter than $k$ with the longest video. We can simulate this greedy process with a tree data structure (e.g. multiset in C++) to solve the problem in $O(n \log n)$. Let's prove that this greedy approach gives us an optimal solution. We can consider a version of the problem where we can choose to watch an ad early if we wish. The optimal solution will be equivalent to the original problem with forced ads. This is because we can move an ad (that we watched early) forward as much as possible, and the number of groups in the following videos can't be larger by induction on the number of viewed ads. If the optimal solution contains a group of size 2 or 3, there is a solution that contains the shortest and longest video in the same group. If there wasn't, we could swap the longest video overall with the last video in the group of the shortest one (without affecting the number of groups). If it's possible to create a group of size 3, there exists an optimal solution with such a group. Indeed, suppose there is an optimal solution without such a group. Clearly, not all groups can be of size 1 in an optimal solution, so there must be at least one group of size 2. We know there is a solution with a group of size 2 containing the shortest and longest video. If it's possible to insert a third video, the solution can't degrade. If it's possible to construct a group of size 3, it can consist of the shortest ($m$) and longest ($M$) video in the first and last place, respectively.
For the second video we can greedily choose the longest available video $x$ such that $m+x<k$ to avoid triggering an early ad. We can prove that this greedy choice is valid with a swap argument similar to the one before. If it's not possible to construct groups of size 3, and it's possible to create a group of size 2, then there exists an optimal solution with at least one group of size 2. Otherwise, we would have only groups of size 1, and merging two of them directly decreases the number of groups. If it's not possible to construct groups of size 3, and it's possible to construct a group of size 2, then there exists an optimal solution where the shortest video is paired with the longest video. This can again be proved with a swap argument.
[ "binary search", "greedy", "two pointers" ]
2,100
null
2068
D
Morse Code
Morse code is a classical way to communicate over long distances, but there are some drawbacks that increase the transmission time of long messages. In Morse code, each character in the alphabet is assigned a sequence of dots and dashes such that \textbf{no sequence is a prefix of another}. To transmit a string of characters, the sequences corresponding to each character are sent in order. \textbf{A dash takes twice as long to transmit as a dot.} Your alphabet has $n$ characters, where the $i$-th character appears with frequency $f_i$ in your language. Your task is to design a Morse code encoding scheme, assigning a sequence of dots and dashes to each character, that minimizes the expected transmission time for a single character. In other words, you want to minimize $f_1t_1 + f_2t_2 + \cdots + f_nt_n$, where $t_i$ is the time required to transmit the sequence of dots and dashes assigned to the $i$-th character.
Any assignment of Morse codes to characters can be represented with a binary tree, where going to a left child appends a dot and going to a right child appends a dash. We then associate each character with a leaf in the tree. This ensures that the codes will not be prefixes of each other. Each vertex, both leaves and internal vertices, has a depth in the tree. For this depth to correspond to the length of the associated Morse code, we declare that going to a right child increases the depth by two. Once we know what an optimal binary tree looks like, we can greedily match the characters to the leaves by assigning the most frequent characters to the leaves with the smallest depth. The key observation is that we initially do not need to know what exactly the tree looks like. We only need to know how many leaves and internal vertices exist at each depth. This allows us to use dynamic programming (DP) to determine an optimal tree bottom up. In each state, we keep track of: $v_c$, the number of vertices at the current depth level, where we can still decide whether they become leaves or internal vertices. $v_n$, the number of vertices at the next (deeper) depth level, where we can still decide whether they become leaves or internal vertices. $a$, the number of characters that have already been assigned a value. The base state is $DP[0,0,0] = 0$, since transmitting nothing costs nothing. The value we want to know is $DP[1,0,n]$, the cost to transmit all characters from depth zero where we only have a root vertex. The transition function is a minimum of two values. To determine $DP[v_c,v_n,a]$, we can either make a vertex a leaf and assign a character to it, or we can decide that we do not want any more leaves at this depth by splitting all remaining vertices into two new vertices and going to the next depth. In the first case, this results in the value $DP[v_c-1,v_n,a-1]$ with no additional costs.
In the second case, this results in the value $DP[v_n+v_c,v_c,a]$ plus the sum of the frequencies of the least frequent $a$ characters; the cost is caused by the fact that these characters are now placed one level deeper. Finally, there are some boundary conditions where we assign the value infinity. If $v_c = v_n = 0$ and $a > 0$, the state is unsolvable since there are some unassigned characters left but no more leaves can be created. If $v_c + v_n > a$, then the state is trivially suboptimal since we would then obtain more leaves than needed. After completing the DP, the final step is to actually construct an optimal tree. This can be done greedily by keeping track of two vectors of partial codes (one for the current depth, one for the next one) and backtracking through the DP. Each parameter in the state description is bounded by $n$ and the transition function is constant, so overall this results in an $O(n^3)$ algorithm.
[ "dp", "sortings", "trees" ]
3,100
null
2068
E
Porto Vs. Benfica
FC Porto and SL Benfica are the two largest football teams in Portugal. Naturally, when the two play each other, a lot of people travel from all over the country to watch the game. This includes the Benfica supporters' club, which is going to travel from Lisbon to Porto to watch the upcoming game. To avoid tensions between them and the Porto supporters' club, the national police want to delay their arrival to Porto as much as they can. The road network in Portugal can be modelled as a simple, undirected, unweighted, connected graph with $n$ vertices and $m$ edges, where vertices represent towns and edges represent roads. Vertex $1$ corresponds to Lisbon, i.e., the starting vertex of the supporters' club, and vertex $n$ is Porto, i.e., the destination vertex of the supporters' club. The supporters' club wants to minimize the number of roads they take to reach Porto. The police are following the supporters' club carefully, and so they always know where they are. To delay their arrival, at any point the police can pick exactly one road and block it, as long as the supporters' club isn't currently traversing it. They can do this exactly once, and once they do that, the road is blocked forever. Once the police block a road, the supporters' club immediately learns that that road is blocked, and they can change their route however they prefer. Furthermore, the supporters' club knows that the police are planning on blocking some road and can plan their route accordingly. Assuming that both the supporters' club and the police always make optimal choices, determine the minimum number of roads the supporters' club needs to traverse to go from Lisbon to Porto. If the police can block the supporters' club from ever reaching Porto, then output $-1$.
We start by making a few definitions that will help us: Definition. Denote by $f(v)$ the minimum number of roads the supporters' club needs to travel if they start from vertex $v$ and want to end up at vertex $n$, and the police can still block exactly one road. So $f(1)$ is the answer to the problem and $f(n) = 0$. Definition. Denote by $g(v, \, e)$ the length of the shortest path from $v$ to vertex $n$ that doesn't use edge $e$, and by $g(v)$ the maximum of $g(v, \, e)$ over all edges $e$ adjacent to $v$. In other words, $g(v)$ is the length of the shortest path from $v$ to $n$ that doesn't use the edge that leads to the shortest path from $v$ to $n$. Now we have the following: Lemma. $f(v) = \max\{g(v), \, 1 + \min_{v \sim u} f(u)\}$, where $v \sim u$ denotes that the two vertices are adjacent. Proof. It's easy to see that the police only block an edge when the supporters' club is on a vertex adjacent to that edge; otherwise, they could have waited until they were adjacent to that edge to block it. So, when the supporters' club is at some vertex $v$, the police have two choices (and they want to pick the one that maximizes the number of traversed roads): either block some edge adjacent to $v$, or not block any edge and let the supporters' club decide where to go. If they decide to block road $e$, then the supporters' club takes $g(v, \, e)$ roads to reach $n$, so the police might as well pick $g(v)$ to maximize this. If they decide not to block a road, the number of roads is $1$ plus $f(u)$, where $u$ is the vertex the supporters' club ends up at. Since they want to minimize the number of roads taken, they best pick the $u$ that minimizes $f(u)$. Note that the values of $g$ can easily be found in $O(m^2)$ time, by running one BFS from $n$ for each removed edge. This is too slow to solve this problem, so we will see later how to compute $g$ more efficiently, which is the trickiest part. But first, let's see how to efficiently find the values of $f$ assuming we have already computed $g$.
Computing $f$ assuming we know $g$ To find $f$, we can use a greedy algorithm that works exactly the same way as Dijkstra's algorithm. Consider an array $\texttt{f}[]$ that we will use to store the values of $f$. Set $\texttt{f}[n] = 0$ and $\texttt{f}[v] = \infty$ for all other $v$. Now process each vertex in the following way. Pick the unprocessed vertex $v$ with the lowest current value of $\texttt{f}[v]$ and mark it as processed. Then iterate through all unprocessed vertices $u$ adjacent to $v$ and set $\texttt{f}[u] = \min(\texttt{f}[u], \, \max(g(u), \, 1 + \texttt{f}[v]))$. Intuitively, this processing step amounts to applying the formula of $f$ present in the lemma above, but considering only one pair of adjacent vertices at a time. To efficiently pick the unprocessed vertex $v$ with the lowest current value of $\texttt{f}[v]$, we can use a min-heap ordered by $\texttt{f}[v]$, which needs to be updated accordingly. This algorithm runs in $O(m \log n)$ time, since we process each vertex once and for each vertex we look at its neighbors and potentially update the elements in a heap containing at most $n$ elements. Lemma. After running the algorithm above, $\texttt{f}[v] = f(v)$. Proof. The correctness of this algorithm follows from an argument very similar to the correctness of Dijkstra's algorithm. The key observation is that if $v$ is an unprocessed vertex with the lowest current value of $\texttt{f}[v]$, then at this point $\texttt{f}[v] = \max(g(v), \, 1 + \min f(u))$, where the minimum is over all $u$ that have been previously processed. Since $v$ has the lowest current value of $\texttt{f}[v]$, none of the remaining unprocessed vertices could decrease the value of $\texttt{f}[v]$: any unprocessed neighbor $u$ satisfies $f(u) \geq \texttt{f}[v]$, so $1 + f(u) > \texttt{f}[v]$. This means that the current value of $\texttt{f}[v]$ is exactly $f(v)$. Computing $g$ Let's start with two quick definitions. Definition. Denote by $\text{dist}(v, \, u)$ the distance between vertices $v$ and $u$ in the road network graph. Definition.
Let $b(v)$ be any neighbor of $v$ such that $\text{dist}(v, \, n) = 1 + \text{dist}(b(v), \, n)$, and break ties arbitrarily. So, we can think of $g(v)$ as the distance from $v$ to $n$ that doesn't use the edge $\{v, \, b(v)\}$. Consider the BFS tree from $n$, which is the same as saying the tree consisting of the edges given by $\{v, \, b(v)\}$ for all $v \neq n$, and root the tree on $n$. Observe that the path that corresponds to $g(v)$ has to necessarily be a path that uses at least one non-tree edge (otherwise it would be a path that uses $\{v, \, b(v)\}$, which is invalid). Furthermore, such a path will look like the following: start at $v$, go down the subtree rooted on $v$ some number of steps (potentially $0$), take one non-tree edge that ends in a vertex that isn't in the subtree rooted at $v$ and then take the shortest path from that vertex to $n$. Note that we can easily precompute the shortest path from any vertex to $n$ by running a single BFS from $n$. To prove the above, note that once we are outside of the subtree rooted at $v$ we are free to take the shortest path to $n$ since it won't use the $\{v, \, b(v)\}$ edge. Additionally, note that we don't want to ever take a non-tree edge to end up at another vertex in the subtree of $v$, since we can always take the tree path instead and that will always be at most as long (by definition of BFS tree). So we are left with computing for each $v$ the shortest among all paths that take some number of steps down the tree, then take some non-tree edge, and then take the shortest path to $n$. Let's first see an inefficient way of doing this, and then we'll make it efficient. For each vertex $v$, compute a list of all the paths of the desired form, and store this in a set (so each vertex gets a set). In particular, we store pairs of two things in this set: the distance to $n$ of the corresponding path, and the endpoint of the non-tree edge we are taking.
We can do so recursively (in a DFS fashion), so let's start by considering the leaves of the tree. If $v$ is a leaf, then the path can't go down the tree, so it is of the form "take some non-tree edge and then take the shortest path to $n$". For each non-tree edge $\{v, \, u\}$, add the pair $(1 + \text{dist}(u, \, n), \, u)$ to the set. So now we know that $g(v)$ is the minimum element in the set. If $v$ isn't a leaf, first recursively compute the sets of all of the children of $v$. Initialize the set corresponding to $v$ by going through each non-tree edge $\{v, \, u\}$, and adding the pair $(1 + \text{dist}(u, \, n), \, u)$. Now "merge" this set with all the sets of all of $v$'s children. To merge two sets, we take all pairs $(d, \, u)$ of the children sets and add $(d + 1, \, u)$ to $v$'s set, which corresponds to extending each path from each of the children by taking the edge from $v$ to them. Suppose the minimum element is $(d, \, u)$. Then $u$ could be in the subtree rooted at $v$, which is an invalid path. If that's the case, we delete $(d, \, u)$ from the set and keep deleting the top element until we find one which isn't in the subtree of $v$. Note that this doesn't affect the computation of ancestors of $v$ since $u$ would be in their subtree too. To determine whether $u$ is in the subtree of $v$ we can use DSU (Disjoint Set Union). Initially, all vertices are their own set, and as we come back up the tree we merge $v$ with its children. Like before, now we know that $g(v)$ is the minimum element in the remaining set. We are almost done, but there is one inefficient step here: merging the sets of the children of $v$ could take $O(n)$ time since each set could have up to $n$ elements. To implement this step efficiently, we can do "small-to-large merging", just like the union by size merge in a DSU data structure. When merging two sets, don't touch the largest of the two, and copy the elements of the smallest one into the largest one.
However, recall that when merging two sets, we take all pairs $(d, \, u)$ of the children sets and add $(d + 1, \, u)$ to $v$'s set, so we would potentially need to alter the elements in the larger set if it is one of the children's sets. We can avoid this by equipping each set with a "modifier", which is an integer $\Delta$ such that if we have an element $(d, \, u)$ in the set, the real distance is $d + \Delta$. Intuitively, this modifier acts as a "lazy propagator", so that we don't have to actually change the elements in the children's sets. When we take a set from a child, we first increment its modifier, and then merge its set with $v$'s set. The total running time of this step is $O(m \log^2 m)$, where $m$ is the number of edges: the sets store one entry per non-tree edge, the small-to-large merge makes sure we only move an entry between sets $O(\log m)$ times, and each move costs $O(\log m)$.
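The "modifier" trick above can be sketched compactly. Below is a minimal Python sketch using binary heaps in place of ordered sets (the class and function names are ours, not from any reference solution); the stored key is the real distance minus the heap's additive offset, so extending every path by one edge is an $O(1)$ update.

```python
import heapq

class OffsetHeap:
    """Min-heap of (dist, endpoint) pairs with a lazy additive 'modifier'.

    Entries are stored as real_dist - self.add, so increasing self.add
    shifts every stored distance at once in O(1).
    """
    def __init__(self):
        self.heap = []   # entries (stored_dist, endpoint)
        self.add = 0     # lazy modifier

    def push(self, dist, endpoint):
        heapq.heappush(self.heap, (dist - self.add, endpoint))

    def min(self):
        d, u = self.heap[0]
        return d + self.add, u

    def pop_min(self):
        d, u = heapq.heappop(self.heap)
        return d + self.add, u

    def __len__(self):
        return len(self.heap)

def merge(a, b):
    """Small-to-large merge; returns the merged heap (a or b)."""
    if len(a.heap) < len(b.heap):
        a, b = b, a
    for d, u in b.heap:          # move only the smaller heap's entries
        a.push(d + b.add, u)     # restore real distance, re-offset into a
    return a
```

In the tree DFS one would increment `child.add` by $1$ before merging the child's heap into the parent's, which is exactly the "increment its modifier" step from the editorial.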
[ "data structures", "dfs and similar", "dsu", "graphs", "shortest paths" ]
2,800
null
2068
F
Mascot Naming
When organizing a big event, organizers often handle side tasks outside their expertise. For example, the chief judge of EUC 2025 must find a name for the event's official mascot while satisfying certain constraints: - The name must include specific words as subsequences$^{*}$, such as the event name and location. You are given the list $s_1,\, s_2,\, \ldots,\, s_n$ of the $n$ required words. - The name must not contain as a subsequence$^{*}$ the name $t$ of last year's mascot. Please help the chief judge find a valid mascot name or determine that none exists.$^{*}$ A string $x$ is a \underline{subsequence} of a string $y$ if $x$ can be obtained from $y$ by erasing some characters (at any positions) while keeping the remaining characters in the same order. For example, $abc$ is a subsequence of $axbycz$ but not of $acbxyz$.
This problem requires constructing a string that contains each of the given strings $s_1, s_2, \dots, s_n$ as subsequences but does not contain $t$ as a subsequence. Key Observation. If $t$ is a subsequence of any $s_i$, then any string containing $s_i$ as a subsequence must also contain $t$. In this case, the answer is $\texttt{NO}$. Otherwise, we can construct a valid string using a greedy approach. Checking for Subsequences. Before solving the main problem, we describe a simple method to check if a string $b$ is a subsequence of another string $a$. We iterate through $a$, trying to match its characters to $b$ in order: If the first characters of $a$ and $b$ match, remove the first character from both. Otherwise, remove only the first character from $a$. Repeat until $a$ is empty. If $b$ becomes empty, it was a subsequence of $a$; otherwise, it was not. Constructing the Answer. We build the answer string $\mathrm{ans}$ by appending characters one by one. Initially, $\mathrm{ans}$ is empty. We repeat the following steps until all $s_i$ are empty: If there exists a character $c$ such that at least one $s_i$ starts with $c$ and $t$ does not start with $c$, append $c$ to $\mathrm{ans}$ and remove the first character from all $s_i$ that start with $c$. Otherwise, all $s_i$ start with the same character as $t$. Append this character to $\mathrm{ans}$ and remove the first character from all $s_i$ and $t$. At the end of this process, $\mathrm{ans}$ contains all $s_i$ as subsequences. Let us understand why $t$ is not a subsequence of $\mathrm{ans}$. Let $s_i$ be the last string to become empty during the process. Consider the steps taken for $s_i$ and $t$ alone. The sequence of character deletions mirrors the subsequence checking algorithm. Since $t$ is not a subsequence of $s_i$, it remains non-empty at the end, ensuring that $t$ is not a subsequence of $\mathrm{ans}$ either. 
Thus, if $t$ is not a subsequence of any $s_i$, the algorithm constructs a valid $\mathrm{ans}$, and the answer is $\texttt{YES}$. The solution described is linear in the total length of all strings given in input.
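The greedy construction can be sketched as follows (a Python sketch; `craft_name` and `is_subsequence` are our names, and `None` stands for the $\texttt{NO}$ answer):

```python
def is_subsequence(b, a):
    """Check whether b is a subsequence of a (the two-pointer scan)."""
    it = iter(a)
    return all(ch in it for ch in b)   # 'in' consumes the iterator

def craft_name(words, t):
    """Greedy from the editorial: build ans containing every word as a
    subsequence while never completing t; returns None if impossible."""
    if any(is_subsequence(t, s) for s in words):
        return None                    # t would be forced as a subsequence
    pos = [0] * len(words)             # matched prefix length of each word
    tpos = 0                           # matched prefix length of t
    ans = []
    while any(p < len(s) for p, s in zip(pos, words)):
        heads = {s[p] for p, s in zip(pos, words) if p < len(s)}
        tc = t[tpos] if tpos < len(t) else None
        free = heads - ({tc} if tc is not None else set())
        c = next(iter(free)) if free else tc   # avoid t's head if possible
        if c == tc:
            tpos += 1
        ans.append(c)
        pos = [p + 1 if p < len(s) and s[p] == c else p
               for p, s in zip(pos, words)]
    return ''.join(ans)
```

Each iteration appends one character and advances at least one word, so the output length is at most the total input length, matching the linear bound above.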
[ "brute force", "greedy", "implementation", "strings" ]
1,900
null
2068
G
A Very Long Hike
You are planning a hike in the Peneda-Gerês National Park in the north of Portugal. The park takes its name from two of its highest peaks: Peneda (1340 m) and Gerês (1545 m). For this problem, the park is modelled as an infinite plane, where each position $(x, y)$, with $x, y$ being integers, has a specific altitude. The altitudes are defined by an $n \times n$ matrix $h$, which repeats periodically across the plane. Specifically, for any integers $a, b$ and $0 \leq x, y < n$, the altitude at $(x + an, y + bn)$ is $h[x][y]$. When you are at position $(x, y)$, you can move to any of the four adjacent positions: $(x, y+1)$, $(x+1, y)$, $(x, y-1)$, or $(x-1, y)$. The time required to move between two adjacent positions is $1 + \lvert \text{alt}_1 - \text{alt}_2 \rvert$, where $\text{alt}_1$ and $\text{alt}_2$ are the altitudes of the current and destination positions, respectively. Initially, your position is $(0, 0)$. Compute the number of distinct positions you can reach within $10^{20}$ seconds. Your answer will be considered correct if its relative error is less than $10^{-6}$.
Before diving into the solution, let us make a few key observations to build intuition for tackling the problem. Our goal is to count the number of distinct cells that can be reached within a time limit of $10^{20}$. To gain insight into this, consider first the simpler question: given a distant cell $(x, y)$ (where $x, y$ are large), how can we estimate the shortest time required to reach it from the origin? Since we do not need an exact answer but only a sufficiently precise approximation, we can reframe the previous question: how can we efficiently approximate the distance from the origin to $(x, y)$? Consider any path from $(0,0)$ to $(x, y)$. If this path visits the same position modulo $n$ at both the $i$-th and $j$-th steps, then removing the portion of the path between these two steps does not change the total cost of the remaining path. The last observation is particularly powerful and will be crucial in approximating distances efficiently. We will leverage it to resolve the subproblem identified in the first two observations. Afterward, we will use this approximation to count (approximately) the number of reachable cells within the given time constraint. Finally, we will describe how to translate these results into an algorithm with time complexity $O(n^5)$. It is worth noting that this editorial is relatively long because we rigorously prove that our approximations maintain a relative error smaller than $10^{-6}$. However, in practice, we expect that any reasonable implementation with sufficient numerical precision will be far more accurate than required. As such, contestants do not need to be overly concerned about precision. Lastly, many of the statements we prove here are significantly easier to intuit than to rigorously justify. We recommend that readers focus on understanding the core ideas on their first read and not get too caught up in the technical details. 
If you grasp the reasoning behind Lemma 5 and Lemma 6, you should already have the essential insights needed to solve the problem. Approximating the distance Let us denote by $D := 1545+1$ the maximum distance between two adjacent cells (i.e., the maximum altitude difference plus one). For any integers $i, j$ with $|i|+|j| \le n$, let $t$ be the minimum number such that, for some $(a, b)$, the distance from $(a, b)$ to $(a+in, b+jn)$ is $t$. Define $T(i, j):=t/n$. This captures the idea that moving by $(i, j)$ costs $T(i, j)$. For any $x, y$, we define $f(x, y)$ as the infimum of $\sum_{|i|+|j|\le n} \lambda_{ij}T(i, j)$, for any nonnegative real numbers $\lambda_{ij}\ge 0$, under the constraint that $\sum_{|i|+|j| \le n}\lambda_{ij}(i, j) = (x, y)$. Observe that $|x|+|y|\le f(x, y)\le D(|x|+|y|)$. Lemma 1. For any cell $(x, y)$ such that both coordinates are divisible by $n$, we have $\text{dist}((0, 0), (x, y)) \ge f(x, y).$ Proof. We show it by induction on the distance from $(0, 0)$ to $(x, y)$. Consider a shortest path from $(0, 0)$ to $(x, y)$. Since $(0, 0)\equiv(x, y) \pmod{n}$, we can consider the two closest cells $(x_1, y_1)$ and $(x_2, y_2)$ along the path such that $(x_1, y_1)\equiv(x_2, y_2) \pmod{n}$. Let $(x', y') = (x_2-x_1, y_2-y_1)$. By construction, the segment of the path from $(x_1, y_1)$ to $(x_2, y_2)$ contains no two positions (besides the endpoints) that are congruent modulo $n$, so it must have at most $n^2$ steps. Thus, we obtain $|x'|+|y'|\le n^2$. Using our third key observation from the introduction, we deduce that $\text{dist}((0, 0), (x, y)) = \text{dist}((x_1, y_1), (x_2, y_2)) + \text{dist}((0, 0), (x-x', y-y')).$ $\text{dist}((x_1, y_1), (x_2, y_2)) \ge n T(x'/n, y'/n).$ $\text{dist}((0, 0), (x-x', y-y')) \ge f(x-x', y-y') .$ $\text{dist}((0, 0), (x, y)) \ge n T(x'/n, y'/n) + f(x-x', y-y') \ge f(x, y).$ Lemma 2. For any cell $(x, y)$, we have $\text{dist}((0, 0), (x, y)) \ge f(x, y) - 2Dn.$ Proof. 
Let $(\bar x, \bar y)$ be a cell such that its coordinates are divisible by $n$ and the Manhattan distance from $(x, y)$ to $(\bar x, \bar y)$ is $\le n$. We have $\text{dist}((0, 0), (x, y)) \ge \text{dist}((0, 0), (\bar x, \bar y)) - \text{dist}((\bar x, \bar y), (x, y)) \ge f(\bar x, \bar y) - Dn .$ Moreover, $f(x, y) \le f(\bar x, \bar y) + f(x-\bar x, y-\bar y) \le f(\bar x, \bar y) + T(x-\bar x, y-\bar y) \le f(\bar x, \bar y) + Dn .$ Combining the two inequalities, $\text{dist}((0, 0), (x, y)) \ge f(x, y) - 2Dn.$ We have shown that $f$ provides a lower bound for the distance of $(x, y)$ to the origin. Now we turn to proving that it provides an upper bound. Lemma 3. Fix a nonnegative integer $k\ge 0$ and two integers $i, j$ with $|i|+|j|\le n$. For any cell $(x, y)$, we have $\text{dist}((x, y), (x, y) + k(in, jn)) \le kn T(i, j) + 2Dn.$ Proof. Without loss of generality, we may assume that $(x, y) = (0, 0)$. By definition of $T(i, j)$, there exists $(a, b)$ such that $\text{dist}((a, b), (a, b) + (in, jn))=n T(i, j)$. Without loss of generality, we may assume that $|a|+|b|\le n$. Consider the path that goes from $(0, 0)$ to $(a, b)$, then to $(a, b) + k(in, jn)$ repeating $k$ times the path that defines $T(i, j)$, and finally from $(a, b)+k(in, jn)$ to $k(in, jn)$. The total length of this path is at most $Dn + kn T(i, j) + Dn$, as desired. Now that we have seen in which cases $T$ provides an upper bound, we deduce that $f$ always provides an upper bound. One can show (it follows from the argument in the next section of this editorial) that the value of $f$ does not change if we require that at most two values of $\lambda_{ij}$ are nonzero. Lemma 4. For any cell $(x, y)$, we have $\text{dist}((0, 0), (x, y)) \le f(x, y) + 2Dn(n+1).$ Proof. Let $(x, y) = a(i, j) + a'(i', j')$ so that $a,a'\ge 0$ and $f(x, y) = aT(i, j) + a'T(i', j')$. 
Consider the cell $(\bar x, \bar y) := \big\lfloor a/n \big\rfloor (in, jn) + \big\lfloor a'/n \big\rfloor (i'n, j'n).$ $\text{dist}((0, 0), (\bar x, \bar y)) \le \big\lfloor a/n \big\rfloor n T(i, j) + \big\lfloor a'/n \big\rfloor n T(i', j') \le f(x, y) + 2Dn.$ $\text{dist}((0, 0), (x, y)) \le \text{dist}((0, 0), (\bar x, \bar y)) + \text{dist}((\bar x, \bar y), (x, y)) \le f(x, y) + 2Dn + 2n^2D.$ The results of this section can be summarized in the following statement. Lemma 5. For any cell $(x, y)$, we have $f(x, y) - 2Dn \le \text{dist}((0, 0), (x, y)) \le f(x, y) + 2Dn(n+1) .$ Counting the cells with distance $\le R$ Let $R=10^{20}$. Our goal is to count the number of cells $(x, y)$ with $\text{dist}((0, 0), (x, y))\le R$. Let $q(r)$ be the number of cells $(x, y)$ with $f(x, y)\le r$. In view of Lemma 5, we have (setting $R_1 := R-2Dn(n+1)$ and $R_2:=R+2Dn$), $q(R_1) \le \#\{(x, y): \text{dist}((0, 0), (x, y)) \le R \} \le q(R_2) . \quad (\ast)$ Recall that $f(x, y)$ is defined as $\inf\left\{\sum_{|i|+|j|\le n} \lambda_{ij}T(i, j):\, \lambda_{ij}\ge 0\text{ and } \sum_{|i|+|j|\le n} \lambda_{ij}(i, j) = (x, y)\right\} .$ $\inf\left\{\sum_{|i|+|j|\le n} \mu_{ij}:\, \mu_{ij}\ge 0\text{ and } \sum_{|i|+|j|\le n} \mu_{ij}\left(\frac{i}{T(i,j)}, \frac{j}{T(i,j)}\right) = (x, y)\right\} .$ $\inf\left\{t\ge 0:\, \text{there are }\mu_{ij}\ge 0\text{ with } \sum_{|i|+|j|\le n}\mu_{ij} \le 1\text{ such that } \sum_{|i|+|j|\le n} \mu_{ij}p_{ij} = \frac1t(x, y)\right\} .$ Let $P$ be the convex hull of the points $p_{ij}$. We have proven that $f(x, y)$ is the minimum value $t\ge 0$ such that $(x/t, y/t)$ belongs to $P$. Therefore, we have proven that Lemma 6. Let $P$ be the convex hull of the points $p_{ij}:= \left(\frac{i}{T(i,j)}, \frac{j}{T(i,j)}\right)$. For any $r>0$, the number of lattice points in $rP$ (here, $rP$ represents the scaling of $P$ by a factor $r$) coincides with $q(r)$. So, it remains only to (approximately) compute the lattice points inside $rP$. 
The standard way to approximate the number of lattice points in a convex subset is by computing the area of the subset. Let us check that it is precise enough in our setting. For any convex polygon, we have Lemma 7. Let $P$ be an arbitrary convex polygon. We have $\text{area}(P) - \text{perimeter}(P) - \pi \le \#\{(x, y)\in P: \, x,y\text{ are integers}\} \le \text{area}(P) + \text{perimeter}(P) + \pi.$ Proof. Denote by $d(x, X)$ the Euclidean distance between a point $x$ and a subset $X$ of the plane. For $r>0$, let $P_r$ be the enlargement of $P$ given by $\{p: d(p, P)\le r\}$. For $r<0$, let $P_r$ be the reduction of $P$ given by $\{p: d(p, P^c) > r\}$. Observe that if $r_1, r_2$ have the same sign, then $(P_{r_1})_{r_2} = P_{r_1+r_2}$. Consider the disjoint squares centered at all lattice points in $P$. We have $P_{-\frac1{\sqrt2}} \subseteq [-0.5, 0.5)\times[-0.5,0.5) + \{(x, y)\in P:\, x, y\text{ integers}\} \subseteq P_{\frac1{\sqrt2}}.$ $\text{area}\left(P_{-\frac1{\sqrt2}}\right) \le \#\{(x, y)\in P:\, x, y\text{ integers}\} \le \text{area}\left(P_{\frac1{\sqrt2}}\right).$ Lemma 8. For $r>0$, we have $\text{perimeter}(P_r) = \text{perimeter}(P) + 2\pi r,\quad \text{area}(P_r) = \text{area}(P) + r\cdot\text{perimeter}(P) + \pi r^2,$ $2\sqrt{\pi}\sqrt{\text{area}(P_r)}\le \text{perimeter}(P_r) \le \text{perimeter}(P),\quad \text{area}(P) + r\cdot\text{perimeter}(P) - \pi r^2 \le \text{area}(P_r) \le \pi (R+r)^2 ,$ Proof. Let us observe that, for any $r$, $\frac{d}{dr}\text{area}(P_r)=\text{perimeter}(P_r)$. The formulas for the area follow from those for the perimeter using this differential equation. The formula for the perimeter when $r>0$ is left to the reader. For $r<0$, since $P_r\subseteq P$ we deduce $\text{perimeter}(P_r)\le \text{perimeter}(P)$ and the lower bound for $\text{perimeter}(P_r)$ is the isoperimetric inequality. 
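Lemma 7 is easy to sanity-check numerically. Below is a small Python sketch (our own illustrative code: shoelace area and perimeter, plus brute-force lattice-point counting over the bounding box of a counterclockwise convex polygon):

```python
from math import pi, hypot

def area_perimeter(poly):
    """Shoelace area and perimeter of a polygon given as a CCW vertex list."""
    area = 0.0
    per = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        area += x1 * y2 - x2 * y1
        per += hypot(x2 - x1, y2 - y1)
    return abs(area) / 2.0, per

def lattice_points(poly):
    """Count integer points inside a CCW convex polygon (brute force):
    a point is inside iff it lies on the left of every directed edge."""
    xs = [x for x, _ in poly]
    ys = [y for _, y in poly]
    cnt = 0
    for x in range(int(min(xs)) - 1, int(max(xs)) + 2):
        for y in range(int(min(ys)) - 1, int(max(ys)) + 2):
            if all((x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) >= 0
                   for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1])):
                cnt += 1
    return cnt
```

For the scaled diamond $rP$ with $P=\{|x|+|y|\le 1\}$ and $r=10$, the lattice count is $2r^2+2r+1=221$, which indeed sits between $\text{area} \pm (\text{perimeter} + \pi)$.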
Concatenating Lemma 6 and Lemma 7 together with $(\ast)$, we have $R_1^2 \text{area}(P)-R_1\text{perimeter}(P)-\pi \le \#\{(x, y): \text{dist}((0, 0), (x, y)) \le R \} \le R_2^2 \text{area}(P)+R_2\text{perimeter}(P)+\pi.$ Lemma 9. We have $\text{perimeter}(P) \le 4 D^2\text{area}(P)$ and also $\text{area}(P) \ge 2D^{-2}$. Proof. By definition of $P$, it is not hard to check that $\{(x, y): |x|+|y| \le D^{-1}\} \subseteq P \subseteq \{(x, y): |x|+|y| \le 1\} .$ Thanks to this lemma, we obtain $(R_1^2-4D^2 R_1-2D^2) \text{area}(P) \le \#\{(x, y): \text{dist}((0, 0), (x, y)) \le R \} \le (R_2^2 + 4D^2R_2 + 2D^2) \text{area}(P).$ Given the constraints of the problem, one can check that $\frac{R_2^2 + 4D^2R_2 + 2D^2}{R_1^2-4D^2 R_1-2D^2} < 1 + 10^{-12},$ Computing the polygon $P$ fast enough Solving the problem is straightforward once we have computed $T(i, j)$ for the $O(n^2)$ pairs $(i, j)$ such that $|i|+|j|\le n$. Indeed, once we have these values: We can compute the family of points $p_{ij}$ in $O(n^2)$. We can compute its convex hull in $O(n^2\log(n))$. Finally, computing the area of the resulting polygon is straightforward. To compute $T(i, j)$ for all $i, j$ with $|i|+|j|\le n$, a naive approach would be: Iterate over all $O(n^2)$ interesting pairs $(i, j)$. For each such pair, iterate over all $O(n^2)$ interesting pairs $(a, b)$. Compute the distance between $(a, b)$ and $(a+in, b+jn)$ using Dijkstra's algorithm, which takes $O(n^4\log(n))$. This results in a total complexity of $O(n^8\log(n))$, which is too slow. Optimization 1: Batch processing in Dijkstra. Instead of computing $T(i, j)$ separately for each pair, we observe that once $(a, b)$ is fixed, we can compute the answer for all relevant $(i, j)$ in one run of Dijkstra's algorithm. This reduces the complexity to $O(n^6\log(n))$, which is still likely too slow. 
Let us remark that, when running Dijkstra's algorithm, we can completely ignore all cells at a Manhattan distance greater than $n^2$ from $(a, b)$. This is because we are only interested in paths where no two points share the same position modulo $n$. Otherwise, the path could be decomposed into multiple shorter segments, each obeying this property. Optimization 2: Reducing the search space using modulo $n$ properties. A key observation is that any path whose endpoints are equivalent modulo $n$ must contain a cell where at least one coordinate is divisible by $n$. Thus, we may assume that either $a$ or $b$ is $0$. This reduces the complexity to $O(n^5\log(n))$, which might be sufficient to get accepted. Optimization 3: Removing the logarithmic factor. To further improve performance, note that the maximum weight is $D$. We can replace Dijkstra's priority queue with a bucket-based shortest path algorithm (similar to 0-1 BFS). This reduces the complexity per Dijkstra run to $O(n^5 + D)$.
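Optimization 3 swaps the priority queue for buckets. A generic sketch of this bucket-based routine (Dial's algorithm) in Python, assuming nonnegative integer edge weights bounded by `C` (an explicit adjacency list here, not the grid graph of the problem):

```python
def dial(n, adj, src, C):
    """Shortest paths with integer edge weights in [0, C].

    Uses C + 1 cyclic buckets instead of a priority queue, so each
    push/pop is O(1); total time O(E + V*C) instead of O(E log V).
    adj[v] is a list of (to, w) pairs.
    """
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0
    B = C + 1
    buckets = [[] for _ in range(B)]   # buckets[d % B] holds tentative dist d
    buckets[0].append(src)
    pending = 1                        # queued (possibly stale) entries
    d = 0
    while pending:
        while not buckets[d % B]:
            d += 1                     # advance to the next nonempty bucket
        v = buckets[d % B].pop()
        pending -= 1
        if dist[v] != d:
            continue                   # stale entry, already improved
        for to, w in adj[v]:
            nd = d + w
            if nd < dist[to]:
                dist[to] = nd
                buckets[nd % B].append(to)
                pending += 1
    return dist
```

Cycling modulo $C+1$ is safe because every queued tentative distance lies in $[d, d+C]$ while distance $d$ is being processed.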
[ "shortest paths" ]
3,500
null
2068
H
Statues
The mayor of a city wants to place $n$ statues at intersections around the city. The intersections in the city are at all points $(x, y)$ with integer coordinates. Distances between intersections are measured using Manhattan distance, defined as follows: $$ \text{distance}((x_1, y_1), (x_2, y_2)) = |x_1 - x_2| + |y_1 - y_2|. $$ The city council has provided the following requirements for the placement of the statues: - The first statue is placed at $(0, 0)$; - The $n$-th statue is placed at $(a, b)$; - For $i = 1, \dots, n-1$, the distance between the $i$-th statue and the $(i+1)$-th statue is $d_i$. It is allowed to place multiple statues at the same intersection. Help the mayor find a valid arrangement of the $n$ statues, or determine that it does not exist.
Let $d_0 = a + b$ be the distance between the first and the $n$-th statue. We are going to prove that a valid arrangement exists if and only if the following two conditions are satisfied: $d_0 + d_1 + \dots + d_{n-1}$ is even; $d_i \leq d_0 + \dots + d_{i-1} + d_{i+1} + \dots + d_{n-1}$ for all $0 \leq i \leq n-1$. To start, let us prove that the two conditions above are necessary. Suppose there is a valid arrangement $s_1 = (x_1, y_1), \, s_2 = (x_2, y_2), \, \dots, \, s_n = (x_n, y_n)$. Then $d_0 + d_1 + \dots + d_{n-1} = \left( |x_n - x_1| + |y_n - y_1| \right) + \left( |x_1 - x_2| + |y_1 - y_2| \right) + \dots + \left( |x_{n-1} - x_n| + |y_{n-1} - y_n| \right);$ since each term $|u|$ has the same parity as $u$, and the signed sums $(x_n - x_1) + (x_1 - x_2) + \dots + (x_{n-1} - x_n)$ and $(y_n - y_1) + (y_1 - y_2) + \dots + (y_{n-1} - y_n)$ both telescope to $0$, the total is even, which is condition 1. Condition 2 follows from the triangle inequality: the distance between two statues along one arc of the cycle $s_1, \, s_2, \, \dots, \, s_n, \, s_1$ is at most the sum of the distances along the other arc. Conversely, we now show by induction on $n$ that conditions 1 and 2 are sufficient to construct a valid arrangement. Our proof will effectively yield an $O(n)$ algorithm to construct a valid arrangement. If $n=2$, condition 2 tells us that $d_0 \leq d_1$ and $d_1 \leq d_0$, so $d_0 = d_1$. Then the arrangement $s_1 = (0, 0), s_2 = (a, b)$ is valid. Suppose now that $n \geq 3$. Our aim is to determine a placement $s_{n-1} = (a', b')$ for the $(n-1)$-th statue such that: $\text{distance}(s_{n-1}, \, s_n) = d_{n-1}$; if we define $\hat{d_0} = \text{distance}(s_1, \, s_{n-1})$, then the distances $\hat{d_0}, d_1, \dots, d_{n-2}$ satisfy conditions 1 and 2. For any choice of $s_{n-1}$, note that $d_0 + \hat{d_0} + d_{n-1}$ is necessarily even, because $d_0, \hat{d_0}, d_{n-1}$ are the pairwise distances between the three points $s_1, s_{n-1}, s_n$. Therefore, $\hat{d_0} + d_1 + \dots + d_{n-2} \equiv d_0 + d_1 + \dots + d_{n-1} \equiv 0 \pmod 2,$ so condition 1 is preserved. As $s_{n-1}$ varies among all integer points with $\text{distance}(s_{n-1}, \, s_n) = d_{n-1}$, the distance $\hat{d_0}$ takes all values between $| d_0 - d_{n-1} |$ and $d_0 + d_{n-1}$ having the same parity as $d_0 + d_{n-1}$. The possible locations of $s_{n-1}$ are shown in red in the picture below. 
We are going to show that the value $\hat{d_0} = \min( d_0 + d_{n-1}, \, d_1 + \dots + d_{n-2} )$ belongs to the admissible range $\left\{| d_0 - d_{n-1} |, \, \dots, \, d_0 + d_{n-1} \right\}$, has the correct parity, and makes the sequence $\hat{d_0}, \, d_1, \, \dots, \, d_{n-2}$ satisfy condition 2. Case 1: $\hat{d_0} = d_0 + d_{n-1} \leq d_1 + \dots + d_{n-2}$. Obviously, $d_0 + d_{n-1}$ belongs to the admissible range and has the correct parity. The sequence $\hat{d_0}, \, d_1, \, \dots, \, d_{n-2}$ satisfies condition 2 for $i = 0$ because $\hat{d_0} \leq d_1 + \dots + d_{n-2}$. For $i \geq 1$, we need to check that $d_i \leq \hat{d_0} + d_1 + \dots + d_{i-1} + d_{i+1} + \dots + d_{n-2} = (d_0 + d_{n-1}) + d_1 + \dots + d_{i-1} + d_{i+1} + \dots + d_{n-2},$ Case 2: $\hat{d_0} = d_1 + \dots + d_{n-2} \leq d_0 + d_{n-1}$. For $\hat{d_0}$ to belong to the admissible range, we need to check that $d_1 + \dots + d_{n-2} \geq | d_0 - d_{n-1} |$. This is ensured by condition 2 for the original sequence $d_0, \, \dots, \, d_{n-1}$ for $i=0$ and $i=n-1$. Condition 1 for the original sequence ensures that $d_1 + \dots + d_{n-2}$ has the same parity as $d_0 + d_{n-1}$. We now check condition 2 for the sequence $\hat{d_0}, \, d_1, \, \dots, \, d_{n-2}$. For $i=0$, we are done because $\hat{d_0} = d_1 + \dots + d_{n-2}$. For $i \geq 1$, condition 2 is satisfied because $d_i \leq \hat{d_0}$.
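The feasibility test that falls out of this proof is tiny; here is a Python sketch of the check alone (the coordinate construction is omitted; `statues_possible` is our name):

```python
def statues_possible(a, b, d):
    """Check the two conditions from the editorial.

    d = [d_1, ..., d_{n-1}]; d_0 = |a| + |b| is the required distance
    between the first and the n-th statue.
    """
    ds = [abs(a) + abs(b)] + list(d)   # prepend d_0 = distance((0,0),(a,b))
    total = sum(ds)
    if total % 2 != 0:
        return False                    # condition 1: total must be even
    return all(di <= total - di for di in ds)   # condition 2
```

For example, $(a, b) = (2, 0)$ with $d = [1, 1]$ is feasible via $(0,0) \to (1,0) \to (2,0)$, while $(3, 0)$ with the same $d$ fails the parity condition.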
[ "constructive algorithms", "greedy", "math" ]
2,700
null
2068
I
Pinball
You are playing a pinball-like game on a $h \times w$ grid. The game begins with a small ball located at the center of a specific cell marked as $S$. Each cell of the grid is either: - A block-type wall ($#$) that prevents the ball from entering the cell, reflecting it instead. - A thin oblique wall, either left-leaning ($\\$) or right-leaning ($/$), which reflects the ball according to its orientation. - A free cell ($.$) where the ball can move freely. The goal is to make the ball escape the grid. At the start, you can nudge the ball in one of four directions: up ($U$), down ($D$), left ($L$), or right ($R$). The ball traverses a free cell in one second, it enters and exits a cell containing a thin oblique wall in one second, and it bounces off a block-type wall in no time (the block-type wall occupies all of its cell). Collisions between the ball and all walls, both block-type and oblique, are perfectly elastic, causing the ball to reflect upon contact. For example, the ball takes two seconds to enter a free cell, traverse it, bounce off an adjacent block-type wall, and traverse back the free cell until it exits. As the ball moves, you may destroy oblique walls at any time, permanently converting them into free cells. You may destroy multiple oblique walls throughout the game, at any given time. Determine whether it is possible for the ball to escape, and if so, find the \textbf{minimum} number of oblique walls that need to be destroyed, along with the precise time each chosen wall should be destroyed.
Let's decompose the whole grid into regions delimited by the movement of the ball without altering the walls. These regions are depicted in the picture below by colored lines. The path described by the solution is bolded. Let's build a graph over the set of all regions where we add an edge $(i, j)$ if regions $i$ and $j$ share an oblique wall. In the picture above, the red and blue regions will be connected by an edge. Similarly, the blue and purple regions, as well as the blue and green regions and the red and orange regions are adjacent in this formulation. Furthermore, let's compute the minimum distance between any of the two initial regions where the ball is located, and any of the regions that go outside the grid in this above described graph. Intuitively, this distance is a lower bound for our answer, as destroying $k$ walls can only let the ball travel to regions at distance at most $k$ from the initial regions. For a more formal proof, consider the relaxation of the problem where instead of destroying $k$ walls, we can choose $k$ walls and allow each of them to independently be active or inactive at any moment in time. Clearly, under this relaxation, the ball can only reach regions at distance at most $k$ from its initial position. Next, we can prove that this lower bound is, in fact, the answer to our problem, by providing a construction that allows the ball to escape under this minimum number of destroyed walls. There are multiple ways of solving the reconstruction problem. The easiest is perhaps to first find any sequence of $k$ regions that end up with the ball escaping. Then, starting from the most distant ($k$-th) region, simulate the game in reverse order, and once you first reach the oblique wall that connects the $k$-th and $(k-1)$-th regions, "destroy" such wall (in reverse, the correct formulation is to "restore" the already-destroyed wall), and continue the process until the ball reaches its initial position. 
Finally, an important aspect to quickly solve this problem is implementation. In order to simplify the implementation, one may skip constructing the graph described above and instead use it implicitly by keeping information over triplets of the form $(i, j, d)$ where $(i, j)$ is the position of the ball, and $d \in \{ \texttt{U}, \texttt{D}, \texttt{L}, \texttt{R} \}$ is the direction where the ball is heading, and using 0-1 BFS or even Dijkstra to compute the minimum number of walls that need to be destroyed to reach any such state.
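Since following the ball costs $0$ and destroying a wall costs $1$, the state graph has only $0/1$ edge weights. The deque-based 0-1 BFS mentioned above looks like this in general (a sketch on an explicit adjacency list, not the implicit $(i, j, d)$ pinball state graph):

```python
from collections import deque

def zero_one_bfs(n, adj, src):
    """Shortest paths when every edge weight is 0 or 1.

    A deque replaces the priority queue: relaxing a weight-0 edge pushes
    to the front, a weight-1 edge to the back, so vertices are popped in
    nondecreasing distance order.  adj[v] is a list of (to, w) pairs.
    """
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0
    dq = deque([src])
    while dq:
        v = dq.popleft()
        for to, w in adj[v]:
            if dist[v] + w < dist[to]:
                dist[to] = dist[v] + w
                (dq.appendleft if w == 0 else dq.append)(to)
    return dist
```

This runs in $O(V + E)$, which is why it is preferable to Dijkstra for the region graph.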
[ "graphs", "shortest paths" ]
3,500
null
2068
J
The Ultimate Wine Tasting Event
Rumors of the excellence of Gabriella's wine tasting events have toured the world and made it to the headlines of prestigious wine magazines. Now, she has been asked to organize an event at the EUC 2025! This time she selected $2n$ bottles of wine, of which exactly $n$ are of white wine, and exactly $n$ of red wine. She arranged them in a line as usual, in a predetermined order described by a string $s$ of length $2n$: for $1 \le i \le 2n$, the $i$-th bottle from the left is white wine if $s_i = W$ and red wine if $s_i = R$. To spice things up for the attendees (which include EUC contestants), Gabriella came up with the following wine-themed problem: Consider a way of dividing the $2n$ bottles into two disjoint subsets, each containing $n$ bottles. Then, for every $1 \le i \le n$, swap the $i$-th bottle in the first subset (from the left) and the $i$-th bottle of the second subset (also from the left). Is it possible to choose the subsets so that, after this operation is done exactly once, the white wines occupy the first $n$ positions?
We will find a simple necessary and sufficient condition on the string $s$ for the operation to be successful. Then, it will be trivial to check whether this condition is satisfied. First, suppose that it is possible to rearrange the bottles as described in the statement. That is, there exist two subsets $A = \{a_1, \, \dots, \, a_n\}$ and $B = \{b_1, \, \dots, \, b_n\}$ of $\{1, \, 2, \, \dots, \, 2n\}$, each of size $n$ and disjoint, such that swapping $s_{a_i}$ and $s_{b_i}$ (assuming the elements are sorted in each subset) results in the string $\texttt{WW} \dots \texttt{WWRR} \dots \texttt{RR}$. Let $h$ be the last index such that $a_h \le n$, and similarly let $k$ be the last index such that $b_k \le n$. If either of them does not exist, set it equal to $0$. Note that, by definition, $h + k = n$. Up to swapping $A$ and $B$, we can assume that $h \le k$. Then, upon swapping $a_i$ with $b_i$ for $1 \le i \le n$, all the bottles at indices $a_1, \, \dots, \, a_h$ will occupy positions among the leftmost $n$, which means that $s_{a_i} = \texttt{W}$ for all $1 \le i \le h$. Also, after swapping, all bottles at indices $b_1, \, \dots, \, b_h$ will occupy positions among the first $n$, and therefore $s_{b_i} = \texttt{W}$ for $1 \le i \le h$. On the other hand, the bottles at indices $b_{h + 1}, \, \dots, \, b_k$ will occupy positions greater than $n$, so that $s_{b_i} = \texttt{R}$ for each $h + 1 \le i \le k$. This implies that, among the first $n$ bottles, the white ones are $2h$ and the red ones are $k - h = n - 2h$. Finally, consider the first $h$ bottles in the initial arrangement. Each of these belongs to either $\{a_1, \, \dots, \, a_h\}$ or $\{b_1, \, \dots, \, b_h\}$, and therefore, when swapped, it will end up in one of the first $n$ positions. This implies that $s_i = \texttt{W}$ for $1 \le i \le h$. We deduced that a necessary condition is: the number of white bottles in the first half is even, say $2h$, and the first $h$ bottles are white. 
Note that the number of red bottles in the second half is also $2h$ and, by the same argument, the last $h$ bottles have to be red. Let's now prove that this condition is also sufficient. We construct $A$ and $B$ as follows: $A$ contains the first $h$ positions (which are white wines), and the first $n - h$ positions with red wines. $B$ contains everything else, that is, all positions with white wines except the first $h$, and the last $h$ positions (with red wines). It is easy to see that the operation of swapping $A$ and $B$ places all white wines in the first half and all red wines in the second half. Checking whether this condition is satisfied is trivial and can be done in time $O(n)$. Remark. Arrangements such as $\texttt{WWRWWRRRRW}$ ($n = 5$), where only the "first-half" condition is satisfied, do not work. One needs to check that both the white wines in the first half as well as the red wines in the second half satisfy the condition.
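The resulting check is a few lines; a Python sketch (`wine_possible` is our name):

```python
def wine_possible(s):
    """Condition from the editorial: the number of white bottles in the
    first half is even, say 2h, the first h bottles are white, and the
    last h bottles are red.  (That the second half has 2h reds follows
    automatically from the counts.)"""
    n = len(s) // 2
    w = s[:n].count('W')
    if w % 2 != 0:
        return False
    h = w // 2
    if h == 0:
        return True          # s = R^n W^n, always fixable
    return s[:h] == 'W' * h and s[-h:] == 'R' * h
```

Note that `WR` (with $n = 1$) is rejected: the operation must be performed exactly once, and the forced swap turns it into `RW`.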
[ "combinatorics", "greedy" ]
2,000
null
2068
K
Amusement Park Rides
Ivan, Dmitrii, and Pjotr are celebrating Ivan's birthday at an amusement park with $n$ attractions. The $i$-th attraction operates at minutes $a_i, 2a_i, 3a_i, \dots$ (i.e., every $a_i$ minutes). Each minute, the friends can either ride exactly one available attraction \textbf{together} or wait. Since the rides are very short, they can board another attraction the next minute. They may ride the attractions in any order. They want to experience each ride exactly once before heading off to enjoy the birthday cake. What is the earliest time by which they can finish all $n$ attractions?
Assume that among the numbers $a_i$ there are $k$ distinct values, denoted by $p_1, p_2, \dots, p_k$. Suppose each $p_j$ appears $q_j$ times in the sequence $\{a_i\}$. We now construct a flow graph as follows: Define the vertex set as $V = \{S, T\} \cup \{A_1, A_2, \dots, A_k\}$. For each vertex $A_i$ (with $1 \le i \le k$), add an edge from $S$ to $A_i$ with capacity $q_i$. For each positive integer $i$, introduce a vertex $B_i$. For every vertex $A_j$ (with $1 \le j \le k$), if $i \equiv 0 \pmod{p_j}$, then add an edge from $A_j$ to $B_i$ with capacity $1$. In addition, add an edge from $B_i$ to $T$ with capacity $1$. We continue adding vertices $B_i$ (in increasing order of $i$) until the maximum flow from $S$ to $T$ reaches $n$. The answer to the problem is the largest index $i$ for which a vertex $B_i$ was added. This approach has a running time of $O(\mathrm{ans} \cdot E)$, where $\mathrm{ans}$ denotes the final value of $i$ and $E$ is the number of edges. However, this is too slow since it requires creating a vertex $B_i$ for every positive integer $i$. We can improve efficiency by avoiding the creation of every $B_i$. Instead, we maintain a mapping $M: \mathbb{Z}^+ \to \{\mathrm{indices}\}$, where $M(i)$ is a set of indices. Initially, for each $1 \le j \le k$, we set $M(p_j) = \{j\}$. At each step, perform the following: Find the smallest $i$ for which $M(i)$ is nonempty. Add the vertex $B_i$ along with its edge to $T$ and add edges from $A_j$ to $B_i$ for every $j \in M(i)$. If adding these edges increases the maximum flow, then for each $j \in M(i)$ insert $j$ into $M(i+p_j)$. This means that a future vertex $B_{i+p_j}$ may also need to be connected to $A_j$. Otherwise, if the maximum flow does not increase, no additional edges from $A_j$ (for $j \in M(i)$) will be needed. Finally, clear the set $M(i)$. 
Note that we examine at most $n+k$ distinct values of $i$ (because in each step we either increase the maximum flow or remove at least one element from $\bigcup_i M(i)$). Therefore, the overall time complexity of the solution is $O(n \cdot E)$, and it can be shown that $E = O(n \log n)$.
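The incremental construction above can be cross-checked on small inputs with a plain bipartite-matching brute force. The sketch below is illustrative Python, not the editorial's flow solution (`earliest_finish` and all names are ours): minute $t$ can host ride $i$ exactly when $a_i \mid t$, so we grow a matching of rides to minutes with Kuhn's augmenting-path algorithm, one minute at a time, and stop once all $n$ rides are matched.

```python
def earliest_finish(a):
    # Brute-force cross-check: incremental bipartite matching (Kuhn's algorithm).
    # Minute t can be assigned to ride i iff a[i] divides t; the answer is the
    # first minute at which all n rides are matched.
    n = len(a)
    match_of_ride = {}  # ride index -> minute currently assigned to it

    def try_minute(t, visited):
        for i in range(n):
            if t % a[i] == 0 and i not in visited:
                visited.add(i)
                # ride i is free, or its current minute can be re-routed
                if i not in match_of_ride or try_minute(match_of_ride[i], visited):
                    match_of_ride[i] = t
                    return True
        return False

    t = 0
    while len(match_of_ride) < n:
        t += 1
        try_minute(t, set())
    return t
```

Each new minute plays the role of a fresh vertex $B_t$ and adds at most one unit of "flow", mirroring the stopping condition of the editorial; the runtime is far worse than the real solution, so this is only suitable for tiny tests.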
[ "flows", "graphs" ]
3,000
null
2069
A
Was there an Array?
For an array of integers $a_1, a_2, \dots, a_n$, we define its \textbf{equality characteristic} as the array $b_2, b_3, \dots, b_{n-1}$, where $b_i = 1$ if the $i$-th element of the array $a$ is equal to both of its neighbors, and $b_i = 0$ if the $i$-th element of the array $a$ is not equal to at least one of its neighbors. For example, for the array $[1, 2, 2, 2, 3, 3, 4, 4, 4, 4]$, the equality characteristic will be $[0, 1, 0, 0, 0, 0, 1, 1]$. You are given the array $b_2, b_3, \dots, b_{n-1}$. Your task is to determine whether there exists such an array $a$ for which the given array is the equality characteristic.
Let's try to find some contradiction of the following kind for the given array: we know for sure that some element of $a$ must be equal to its neighbors, but the corresponding value $b_i$ is zero, or vice versa. If both $b_{i-1}$ and $b_{i+1}$ are equal to $1$, the $i$-th element is equal to both of its neighbors, so $b_i$ must also be equal to $1$. That is, if there exists an index $i$ such that $b_{i-1} = b_{i+1} = 1$ and $b_i = 0$, we have a contradiction. It is not difficult to prove that if such an index does not exist, then it is always possible to find an array $a$ that satisfies all conditions. Let's construct it as follows: $a_1 = 1$; for each subsequent element $a_i$, if at least one of $(b_{i-1}, b_i)$ is equal to $1$, we must choose $a_i = a_{i-1}$, otherwise we set $a_i = a_{i-1} + 1$. This way, we can be sure that if $b_i$ is equal to $1$, then $a_{i-1} = a_i = a_{i+1}$. Therefore, the only problem may arise if some $b_i$ is equal to $0$, but at the same time $a_{i-1} = a_i = a_{i+1}$. However, if $a_{i-1} = a_i$ and $b_i = 0$, then $b_{i-1} = 1$. Similarly, $a_i = a_{i+1}$ implies $b_{i+1} = 1$. Thus, we have arrived at the case where $b_{i-1} = b_{i+1} = 1$ and $b_i = 0$, which is the only case that can lead to a contradiction. Therefore, the solution reduces to checking whether there exists an index $i$ such that $b_{i-1} = b_{i+1} = 1$ and $b_i = 0$.
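The constructive argument can be turned into code directly. A sketch (illustrative Python with 0-based lists; `construct` and all names are ours) that builds a witness array or reports impossibility:

```python
def construct(b):
    """Build an array a whose equality characteristic is b, or return None.

    b is a list of 0/1 of length n-2, where b[j] corresponds to a-position
    j+1 (0-based), i.e. to b_{j+2} in the 1-based statement.
    """
    if '101' in ''.join(map(str, b)):
        return None  # the only impossible pattern, per the argument above
    n = len(b) + 2
    bb = {j + 1: v for j, v in enumerate(b)}  # 0-based a-position -> b value
    a = [1]
    for i in range(1, n):
        if bb.get(i - 1) == 1 or bb.get(i) == 1:
            a.append(a[-1])       # forced: a[i] must equal a[i-1]
        else:
            a.append(a[-1] + 1)   # break the run with a fresh value
    return a
```

Running it on the statement's example characteristic `[0, 1, 0, 0, 0, 0, 1, 1]` yields a (different but valid) witness such as `[1, 2, 2, 2, 3, 4, 5, 5, 5, 5]`.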
[ "graph matchings", "greedy" ]
800
t = int(input())
for i in range(t):
    n = int(input())
    s = list(input().split())
    s = "".join(s)
    if "101" in s:
        print('NO')
    else:
        print('YES')
2069
B
Set of Strangers
You are given a table of $n$ rows and $m$ columns. Initially, the cell at the $i$-th row and the $j$-th column has color $a_{i, j}$. Let's say that two cells are strangers if they \textbf{don't} share a side. Strangers are allowed to touch with corners. Let's say that the set of cells is a set of strangers if all pairs of cells in the set are strangers. Sets with no more than one cell are sets of strangers by definition. In one step, you can choose any set of strangers \textbf{such that all cells in it have the same color} and paint all of them in some other color. You can choose the resulting color. What is the minimum number of steps you need to make the whole table the same color?
Let's fix some color and look at all cells of that color. We can decide either to leave this color as the resulting one, in which case we can ignore all these cells, or to get rid of this color, which costs at least one operation. If all cells are pairwise strangers, then one operation is enough to recolor all of them in the desired color. Otherwise, we need at least two operations. It turns out that two operations are enough: each connected component of the same color can be painted in two steps if we color the table like a chessboard and choose "black" cells at the first step and "white" cells at the second step. Since components don't touch each other, we can choose subsets from different components independently. In total, we can get rid of any color in at most two steps. As a result, for each color, let's calculate the number of steps we need to get rid of it as $v_c$. It's equal to $1$ if the color is present in the table, plus $1$ if there are two neighboring cells of that color. The answer then equals $\sum_{c=1}^{nm}{v_c}$ minus $v_c$ of the color we decided to leave untouched. Or, optimally, $\sum_{c=1}^{nm}{v_c} - \max_{c=1}^{nm}{v_c}$
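The counting formula can be sketched directly (illustrative Python on a list-of-lists grid; `min_steps` is our name, not the reference solution):

```python
def min_steps(grid):
    # v_c = 1 (color present) + 1 (color has two side-adjacent cells);
    # answer = sum(v_c) - max(v_c), keeping the most expensive color untouched.
    n, m = len(grid), len(grid[0])
    present, bad = set(), set()
    for i in range(n):
        for j in range(m):
            c = grid[i][j]
            present.add(c)
            if i + 1 < n and grid[i + 1][j] == c:
                bad.add(c)  # vertical neighbors of the same color
            if j + 1 < m and grid[i][j + 1] == c:
                bad.add(c)  # horizontal neighbors of the same color
    v = {c: 1 + (c in bad) for c in present}
    return sum(v.values()) - max(v.values())
```

For example, on `[[1, 2], [2, 1]]` every color class is a set of strangers, so one step suffices; on `[[1, 1], [2, 2]]` each color has adjacent cells and needs two steps.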
[ "greedy", "matrices" ]
1,200
for _ in range(int(input())):
    n, m = map(int, input().split())
    a = [list(map(int, input().split())) for i in range(n)]
    hasColor = [0] * (n * m)
    hasBad = [0] * (n * m)
    for i in range(n):
        for j in range(m):
            hasColor[a[i][j] - 1] = 1
            if i + 1 < n and a[i][j] == a[i + 1][j]:
                hasBad[a[i][j] - 1] = 1
            if j + 1 < m and a[i][j] == a[i][j + 1]:
                hasBad[a[i][j] - 1] = 1
    print(sum(hasColor) + sum(hasBad) - 1 - max(hasBad))
2069
C
Beautiful Sequence
Let's call an integer sequence \textbf{beautiful} if the following conditions hold: - its length is at least $3$; - for every element except the first one, there is an element to the left less than it; - for every element except the last one, there is an element to the right larger than it; For example, $[1, 4, 2, 4, 7]$ and $[1, 2, 4, 8]$ are beautiful, but $[1, 2]$, $[2, 2, 4]$, and $[1, 3, 5, 3]$ are not. Recall that a subsequence is a sequence that can be obtained from another sequence by removing some elements without changing the order of the remaining elements. You are given an integer array $a$ of size $n$, where \textbf{every element is from $1$ to $3$}. Your task is to calculate the number of beautiful subsequences of the array $a$. Since the answer might be large, print it modulo $998244353$.
Let's change the definition of a beautiful sequence a bit. We can prove that the condition "for every element except the first one, there is an element to the left less than it" is equivalent to "the first element is less than every other element in the sequence". We can prove that these two conditions are equivalent by induction: for the $2$-nd element of the sequence $s_2$, the only element to the left is the $1$-st ($s_1$), so $s_1 < s_2$; assume we have proved that, for every $i \in [2, k]$, $s_1 < s_i$. Let's prove that $s_1 < s_{i+1}$. Suppose that the element to the left of $s_{i+1}$ which is less than $s_{i+1}$ is $s_j$; if $j = 1$, then obviously, $s_1 < s_{i+1}$; otherwise, $s_1 < s_j$ and $s_j < s_{i+1}$, so $s_1 < s_{i+1}$. Using similar induction, we can prove that "for every element except the last one, there is an element to the right larger than it" is the same as "the last element is greater than every other element in the sequence". Since the array consists only of $1$, $2$ and $3$, we can notice that there is only one possible beautiful sequence pattern: 122...223 (i. e. one $1$, followed by any number of consecutive $2$ and one final $3$). Any other pattern is invalid: the leftmost element should be strictly less than every element in the middle (every element from the $2$-nd to the second-to-last), and the rightmost should be strictly greater than every element in the middle; so, the leftmost element should be $1$, the rightmost element should be $3$, and every element in the middle should be $2$. In order to calculate the number of subsequences that match the aforementioned pattern, we can use dynamic programming. 
Let $dp_{i, j}$ be the number of subsequences if we have considered the first $i$ elements of the array, and the current state is $j$ (for example, state $0$ means that the sequence is empty, state $1$ means that we have taken the element equal to $1$, state $2$ means that we have taken some number of $2$'s, state $3$ means that we have taken the element $3$ and the sequence is finished). The transitions in this dynamic programming are pretty simple and can be done in $O(1)$ for each state. So the total complexity of the solution is $O(n)$.
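The same DP can be written out explicitly in Python (state numbering as in the paragraph above; `count_beautiful` is an illustrative name):

```python
MOD = 998244353

def count_beautiful(a):
    # States: 0 = empty, 1 = took the leading 1,
    # 2 = took the leading 1 and at least one 2, 3 = finished with a 3.
    dp = [1, 0, 0, 0]
    for x in a:
        if x == 1:
            dp[1] = (dp[1] + dp[0]) % MOD       # start a new subsequence
        elif x == 2:
            dp[2] = (2 * dp[2] + dp[1]) % MOD   # extend a middle or start one
        else:  # x == 3
            dp[3] = (dp[3] + dp[2]) % MOD       # finish with this 3
    return dp[3]
```

For instance, `[1, 2, 2, 3]` has three beautiful subsequences (`1 2 3` using either `2`, and `1 2 2 3`).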
[ "combinatorics", "dp", "greedy", "two pointers" ]
1,500
#include <bits/stdc++.h>
using namespace std;

const int MOD = 998244353;

int add(int x, int y) {
    x += y;
    if (x >= MOD) x -= MOD;
    return x;
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<int> dp(4, 0);
        dp[0] = 1;
        while (n--) {
            int x;
            cin >> x;
            if (x == 2) dp[x] = add(dp[x], dp[x]);
            dp[x] = add(dp[x], dp[x - 1]);
        }
        cout << dp[3] << '\n';
    }
}
2069
D
Palindrome Shuffle
You are given a string $s$ consisting of lowercase Latin letters. You can perform the following operation with the string $s$: choose a contiguous substring (possibly empty) of $s$ and shuffle it (reorder the characters in the substring as you wish). Recall that a palindrome is a string that reads the same way from the first character to the last and from the last character to the first. For example, the strings a, bab, acca, bcabcbacb are palindromes, but the strings ab, abbbaa, cccb are not. Your task is to determine the minimum possible length of the substring on which the aforementioned operation must be performed in order to convert the given string $s$ into a palindrome.
I will describe a solution in $O(n \log n)$. It is possible to solve the problem in $O(n)$ with careful implementation of two pointers method, but $O(n \log n)$ is more intuitive and easier to understand, in my opinion. First, while the first character of the string is equal to the last character of the string, let's get rid of both of them - it's pretty obvious we shouldn't touch them. After that, we get a string for which the first character is not equal to the last character. At least one of them should be changed - so, the substring we need to shuffle is either a prefix or a suffix of the string. Suppose the string we shuffle is a prefix (if we need to shuffle a suffix, we'll check it by reversing the string and trying to shuffle a prefix of the reversed string). Suppose that, by shuffling a prefix of length $m$, we get the answer. Then, by shuffling a prefix of length $m+1$, we will also be able to make the string a palindrome. It means that the shortest possible length of the prefix we need to shuffle can be found with binary search. In our binary search, we need to check whether we can make the string a palindrome by shuffling the prefix of certain length. Let's check it in $O(n)$ by verifying the following two conditions: for every pair $(s_i, s_{n-i+1})$ such that $s_i \ne s_{n-i+1}$, at least one of the characters should be changed, so it should belong to the prefix; for every character $c$ from a to z, let's calculate the number of pairs where this character should fill both positions (i. e. the number of pairs $(s_i, s_{n-i+1})$ where at least one of the characters is equal to $c$ and is not affected by our shuffle). This number of pairs should not exceed the number of pairs of character $c$ in the whole string, otherwise we won't have enough occurrences of $c$ to fill all these pairs of positions. 
It's obvious that these two conditions are necessary, but we can prove that they are sufficient: if both of them hold, we have enough characters to fill all pairs of positions where we need a fixed character, and all the remaining pairs of positions can be filled by any remaining pairs of characters (the input is guaranteed to have an even number of every character, so it's always possible to split all characters into pairs). So, we need to check if some substring is a possible answer $O(\log n)$ times, and every such check can be done in $O(n)$. Thus, our solution works in $O(n \log n)$.
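The check inside the binary search can be sketched as follows (illustrative Python; `can_fix_prefix` is our name). It assumes the equal first/last characters are already stripped and, as the problem guarantees, every character occurs an even number of times (so the length is even):

```python
from collections import Counter

def can_fix_prefix(s, m):
    # Editorial's O(n) check (sketch): can shuffling s[:m] make s a palindrome?
    n = len(s)
    cnt = Counter(s[:m])          # multiset of characters we may rearrange
    for i in range(n // 2):
        j = n - 1 - i
        if j < m:
            continue              # both ends of the pair lie in the prefix
        if i < m:
            if cnt[s[j]] == 0:    # the prefix must supply a match for s[j]
                return False
            cnt[s[j]] -= 1
        elif s[i] != s[j]:
            return False          # a fixed pair that doesn't match
    # leftovers must split into identical pairs to fill the free positions
    return all(v % 2 == 0 for v in cnt.values())
```

For example, `"aabb"` cannot be fixed by shuffling its first two characters, but shuffling the first three (into `"baa"`) yields the palindrome `"baab"`.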
[ "binary search", "greedy", "hashing", "strings", "two pointers" ]
1,800
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        string s;
        cin >> s;
        int n = s.size();
        int i = 0;
        while (i < n / 2 && s[i] == s[n - i - 1])
            ++i;
        n -= 2 * i;
        s = s.substr(i, n);
        int ans = n;
        for (int z = 0; z < 2; ++z) {
            int l = 0, r = n;
            while (l <= r) {
                int m = (l + r) / 2;
                vector<int> cnt(26);
                for (int i = 0; i < m; ++i)
                    cnt[s[i] - 'a']++;
                bool ok = true;
                for (int i = 0; i < min(n / 2, n - m); ++i) {
                    char c = s[n - i - 1];
                    if (i < m) {
                        ok &= cnt[c - 'a'] > 0;
                        cnt[c - 'a']--;
                    } else {
                        ok &= (c == s[i]);
                    }
                }
                for (auto x : cnt)
                    ok &= (x % 2 == 0);
                if (ok) {
                    r = m - 1;
                } else {
                    l = m + 1;
                }
            }
            ans = min(ans, r + 1);
            reverse(s.begin(), s.end());
        }
        cout << ans << '\n';
    }
}
2069
E
A, B, AB and BA
You are given a string $s$ consisting of characters A and B. Your task is to split it into blocks of length $1$ and $2$ in such a way that - there are no more than $a$ strings equal to "A"; - there are no more than $b$ strings equal to "B"; - there are no more than $ab$ strings "AB"; - there are no more than $ba$ strings "BA"; Strings "AA" and "BB" are prohibited. Each character of the initial string $s$ should belong to exactly one block. Determine whether such a split is possible.
Firstly, let's find the solution that maximizes the number of used blocks of length $2$ (AB and BA). Each used block of length $2$ frees up one A and one B, so we don't lose anything. Secondly, since blocks AA and BB are prohibited, any pair of equal neighboring characters will be split in-between in any possible partition. So, let's split them at the start. As a result, we'll get blocks with alternating characters of four types: ABA..A: if its length is $l$, it can be split into $x$ AB-s and $y$ BA-s for any $x + y \le \lfloor \frac{l}{2} \rfloor$. For example: A|BA|BA, AB|A|BA or AB|AB|A; BAB..B: practically the same as the previous, so let's just count the total number of length-$2$ blocks we can get from the first two types as $tot$; ABA..B: it can be split into $\frac{l}{2}$ AB-s, but if we need at least one BA, the total number of blocks reduces to $\frac{l}{2} - 1$. For example: AB|AB|AB, but AB|A|BA|B; BAB..A: the same case, but it favors BA-s instead. So, let's split ABA..B into AB-s and BAB..A into BA-s as much as we can. As a result, one of three cases will follow: We spent all $ab$: then the remaining ABA..B blocks will be split into BA-s. We lose one pair of A and B for each remaining ABA..B block, so it's optimal to minimize the number of remaining ABA..B blocks, i.e. to split the shortest ABA..B-s into AB-s at the first step. We spent all $ba$: the remaining BAB..A blocks will be split into AB-s. This case is symmetric to the previous one and gives the same greedy: split the shortest BAB..A-s into BA-s at the first step. There are no ABA..B and BAB..A left: only odd-length blocks are left (the first two types), so it doesn't matter how we split them. In total, we'll get $\min{(ab + ba, tot)}$ more blocks of length $2$. In total, the strategy is the following: split ABA..B-s into AB-s in increasing order of length; split BAB..A-s into BA-s in increasing order of length; split remaining ABA..B-s into BA-s in any order; split remaining BAB..A-s into AB-s in any order; calculate extra pairs you'll get using the formula $\min{(ab + ba, tot)}$. 
check that you have enough $a$ and $b$ to cover remaining A-s and B-s. Total complexity is $O(|s| \log{|s|})$.
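For tiny strings, the whole greedy can be cross-checked against an exhaustive search over all partitions. A brute-force sketch (exponential time, for validation only; `can_split` is an illustrative name):

```python
from functools import lru_cache

def can_split(s, a, b, ab, ba):
    # Try every way to cut s into blocks A, B, AB, BA within the budgets.
    @lru_cache(maxsize=None)
    def rec(i, a, b, ab, ba):
        if i == len(s):
            return True
        if s[i] == 'A' and a > 0 and rec(i + 1, a - 1, b, ab, ba):
            return True
        if s[i] == 'B' and b > 0 and rec(i + 1, a, b - 1, ab, ba):
            return True
        if s[i:i + 2] == 'AB' and ab > 0 and rec(i + 2, a, b, ab - 1, ba):
            return True
        if s[i:i + 2] == 'BA' and ba > 0 and rec(i + 2, a, b, ab, ba - 1):
            return True
        return False
    return rec(0, a, b, ab, ba)
```

For example, `"ABBA"` with one of each block type can be cut as A|B|BA or AB|BA.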
[ "constructive algorithms", "greedy", "sortings", "strings" ]
2,300
#include <bits/stdc++.h>

using namespace std;

#define fore(i, l, r) for(int i = int(l); i < int(r); i++)
#define sz(a) int((a).size())

typedef long long li;

template<class A, class B> ostream& operator <<(ostream& out, const pair<A, B> &p) {
    return out << "(" << p.x << ", " << p.y << ")";
}

template<class A> ostream& operator <<(ostream& out, const vector<A> &v) {
    fore(i, 0, sz(v)) {
        if (i) out << " ";
        out << v[i];
    }
    return out;
}

const int INF = int(1e9);
const li INF64 = li(1e18);

string s;
int a, b, ab, ba;

inline bool read() {
    if (!(cin >> s))
        return false;
    cin >> a >> b >> ab >> ba;
    return true;
}

int process(vector<int> &rem, int &tot) {
    int placed = 0;
    sort(rem.begin(), rem.end(), greater<int>());
    while (!rem.empty() && tot > 0) {
        int cnt = min(rem.back() / 2, tot);
        placed += cnt;
        tot -= cnt;
        rem.back() -= 2 * cnt;
        if (rem.back() == 0)
            rem.pop_back();
    }
    return placed;
}

int remnants(vector<int> &rem, int &tot) {
    int placed = 0;
    for (int &cur : rem) {
        int cnt = min(tot, (cur - 2) / 2);
        placed += cnt;
        tot -= cnt;
        cur -= 2 * cnt + 2;
    }
    return placed;
}

inline void solve() {
    s.push_back(s.back());
    int eq = 0, placedPairs = 0;
    vector<int> remAB, remBA;
    int lst = 0;
    fore (i, 1, sz(s)) {
        if (s[i] != s[i - 1])
            continue;
        if (s[lst] == s[i - 1]) {
            // ABA..A or BAB..B
            eq += (i - lst) / 2;
        } else {
            int len = i - lst;
            if (s[lst] == 'A') {
                // ABA..B
                remAB.push_back(len);
            } else {
                // BAB..A
                remBA.push_back(len);
            }
        }
        lst = i;
    }
    s.pop_back();
    placedPairs += process(remAB, ab);
    assert(remAB.empty() || ab == 0);
    placedPairs += process(remBA, ba);
    assert(remBA.empty() || ba == 0);
    placedPairs += remnants(remAB, ba);
    placedPairs += remnants(remBA, ab);
    int remPlaced = min(eq, ab + ba);
    placedPairs += remPlaced;
    int needA = count(s.begin(), s.end(), 'A') - placedPairs;
    int needB = count(s.begin(), s.end(), 'B') - placedPairs;
    if (needA <= a && needB <= b)
        cout << "YES\n";
    else
        cout << "NO\n";
}

int main() {
#ifdef _DEBUG
    freopen("input.txt", "r", stdin);
    int tt = clock();
#endif
    ios_base::sync_with_stdio(false);
    cin.tie(0), cout.tie(0);
    cout << fixed << setprecision(15);
    int t;
    cin >> t;
    while (t--) {
        read();
        solve();
#ifdef _DEBUG
        cerr << "TIME = " << clock() - tt << endl;
        tt = clock();
#endif
    }
    return 0;
}
2069
F
Graph Inclusion
A connected component of an undirected graph is defined as a set of vertices $S$ of this graph such that: - for every pair of vertices $(u, v)$ in $S$, there exists a path between vertices $u$ and $v$; - there is no vertex outside $S$ that has a path to a vertex within $S$. For example, the graph in the picture below has three components: $\{1, 3, 7, 8\}$, $\{2\}$, $\{4, 5, 6\}$. We say that graph $A$ includes graph $B$ if every component of graph $B$ is a subset of some component of graph $A$. You are given two graphs, $A$ and $B$, both consisting of $n$ vertices numbered from $1$ to $n$. Initially, there are no edges in the graphs. You must process queries of two types: - add an edge to one of the graphs; - remove an edge from one of the graphs. After each query, you have to calculate the minimum number of edges that have to be added to $A$ so that $A$ includes $B$, and print it. Note that you don't actually add these edges, you just calculate their number.
First, let's understand how to solve the problem without queries (given two graphs $A$ and $B$, add the minimum number of edges to $A$ so that it includes graph $B$), and then we'll deal with query processing. Let's understand what it means for graph $A$ to not include $B$. This means that some component of graph $B$ is not a subset of any component of graph $A$ - that is, there exist two vertices (let's call them $x$ and $y$) that belong to the same component in graph $B$, but do not belong to the same component in graph $A$. If this happens, let's connect vertices $x$ and $y$ with an edge in graph $A$, and then try to find such a pair of vertices again. We will continue doing this until it turns out that there is no such pair of vertices. After that, graph $A$ will include graph $B$. Let's take a look at what kind of graph we have obtained. The following holds for it: if two vertices $x$ and $y$ are in the same component either in graph $A$ or in graph $B$ (or in both of these graphs), then they are in the same component in the resulting graph (note that the reverse is not necessarily true). This means that the partition of this graph into components will be the same as if we constructed a graph that has edges from both graphs $A$ and $B$, and partitioned it into components. Thus, for graph $A$ to include graph $B$, we need to make its components match those of the union of graphs $A$ and $B$. Since the union of graphs $A$ and $B$ includes graph $A$, it is sufficient for us to count the number of components in graph $A$ and in the union, and the difference between these two numbers will be exactly the number of edges we need to add to graph $A$ for its components to coincide with the components of the union. Now let's deal with the queries. We need to maintain two graphs ($A$ and the union), add/remove edges, and compute the number of components in the graphs. 
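Before adding queries, the key formula can be sketched statically (illustrative Python; `edges_to_add` and `DSU` are our names): the answer is the number of components of $A$ minus the number of components of $A \cup B$.

```python
class DSU:
    def __init__(self, n):
        self.p = list(range(n))

    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]   # path halving
            x = self.p[x]
        return x

    def union(self, x, y):
        self.p[self.find(x)] = self.find(y)


def edges_to_add(n, edges_a, edges_b):
    # Static (no-queries) version: answer = components(A) - components(A ∪ B).
    da, du = DSU(n), DSU(n)
    for u, v in edges_a:
        da.union(u, v)
        du.union(u, v)
    for u, v in edges_b:
        du.union(u, v)
    comps = lambda d: len({d.find(x) for x in range(n)})
    return comps(da) - comps(du)
```

For instance, with $n = 4$, $A$-edge $(0,1)$ and $B$-edges $(1,2), (2,3)$: the $B$-component $\{1,2,3\}$ spans three $A$-components, so two edges must be added.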
Moreover, all queries can be read from the very beginning and then processed (the problem can be solved offline). This means we can use the Dynamic Connectivity Offline technique. Below is a description of this technique. First, for each edge in each graph, we will identify all segments of queries when this edge exists (that is, such segments $[l, r]$ that the edge exists from query $l$ to query $r$). In total, there will be $O(q)$ such segments. We will build a segment tree with $q$ leaves, where each leaf corresponds to a certain query. Each segment $[l, r]$ that we have identified can be split into $O(\log q)$ segments corresponding to the vertices of the segment tree (similarly to how a segment tree splits any query on a segment into $O(\log q)$ vertices). We will add information of the form "this edge exists throughout this segment" to the corresponding vertices. Now our task is the following: for each query, form the graph so that it contains only edges existing during that query. Each query corresponds to one of the leaves of the segment tree, and the edges that exist at that moment are the edges stored in the vertices on the path to that leaf of the segment tree. Thus, now for each leaf of the segment tree, we need to form a partition of the graph into components if the graph contains exactly those edges that are on the path from the root to that leaf. To do this, we can recursively traverse the segment tree. When we visit vertex $v$, we will do the following: add all edges stored in vertex $v$ of the segment tree; if vertex $v$ has children, recursively visit the children; if vertex $v$ has no children, then it is a leaf of the segment tree, and the graph is currently in such a state that we can find the answer to the corresponding query (the graph contains all edges that exist at that moment); before returning from vertex $v$, "rollback" all changes we made in it. We need a data structure that can add an edge to the graph and rollback the last changes. 
The simplest option for such a structure is a DSU with rollbacks. We will write a standard DSU with a rank heuristic, but without path compression (it does not combine well with rollbacks). Each time we make a change in the DSU, we will remember which values we changed and what was stored there before (look at the function change in the model solution to understand how to implement this most simply). Since we use only one DSU heuristic, adding one edge will take $O(\log n)$ - and, accordingly, rolling back an edge will also take $O(\log n)$. In total, we have $q$ segments of edge existence, each of which is split by the segment tree into $O(\log q)$ segments, so the total number of edge addition operations will be $O(q \log q)$, and the entire solution will work in $O(q \log q \log n)$.
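The journaling idea can be sketched as follows (illustrative Python analogue of the model solution's `change`/`rollback`; class and method names are ours):

```python
class RollbackDSU:
    """Union by size, no path compression; every write is journaled so that
    merges can be undone in LIFO order."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.comps = n
        self.history = []          # stack of (array, index, previous value)

    def find(self, x):             # no path compression: keeps rollbacks simple
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def merge(self, x, y):
        x, y = self.find(x), self.find(y)
        if x == y:
            return
        if self.size[x] < self.size[y]:
            x, y = y, x
        for arr, i, new in ((self.parent, y, x),
                            (self.size, x, self.size[x] + self.size[y])):
            self.history.append((arr, i, arr[i]))  # journal the old value
            arr[i] = new
        self.comps -= 1

    def snapshot(self):
        return (len(self.history), self.comps)

    def rollback(self, snap):      # undo everything done since snapshot()
        length, comps = snap
        while len(self.history) > length:
            arr, i, old = self.history.pop()
            arr[i] = old
        self.comps = comps
```

In the segment-tree DFS, `snapshot()` is taken on entering a vertex and `rollback()` is called before returning, exactly as the editorial describes.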
[ "data structures", "dfs and similar", "divide and conquer", "dsu", "graphs" ]
2,800
#include <bits/stdc++.h>

using namespace std;

const int N = 400043;
const int BUF = N * 20;

int* where[BUF];
int val[BUF];
int cur = 0;

void change(int& x, int y) {
    val[cur] = x;
    where[cur] = &x;
    x = y;
    cur++;
}

void rollback() {
    cur--;
    (*where[cur]) = val[cur];
}

struct DSU {
    vector<int> p, s;
    int n;
    int comps;

    int get(int x) {
        if (p[x] == x)
            return x;
        return get(p[x]);
    }

    void merge(int x, int y) {
        x = get(x);
        y = get(y);
        if (x == y)
            return;
        change(comps, comps - 1);
        if (s[x] < s[y])
            swap(x, y);
        change(p[y], x);
        change(s[x], s[x] + s[y]);
    }

    DSU(int n = 0) {
        this->n = n;
        s = vector<int>(n, 1);
        p = vector<int>(n);
        iota(p.begin(), p.end(), 0);
        comps = n;
    }
};

struct edge {
    char g;
    int x;
    int y;
    edge(char g = 'A', int x = 0, int y = 0) : g(g), x(x), y(y) {};
};

DSU A, united;
vector<edge> T[4 * N];
int ans[N];

void dfs(int v, int l, int r) {
    int state = cur;
    for (auto e : T[v]) {
        united.merge(e.x, e.y);
        if (e.g == 'A')
            A.merge(e.x, e.y);
    }
    if (l == r - 1) {
        ans[l] = A.comps - united.comps;
    } else {
        int m = (l + r) / 2;
        dfs(v * 2 + 1, l, m);
        dfs(v * 2 + 2, m, r);
    }
    while (state != cur)
        rollback();
}

void add_edge(int v, int l, int r, int L, int R, edge e) {
    if (L >= R)
        return;
    if (L == l && R == r)
        T[v].push_back(e);
    else {
        int m = (l + r) / 2;
        add_edge(v * 2 + 1, l, m, L, min(m, R), e);
        add_edge(v * 2 + 2, m, r, max(L, m), R, e);
    }
}

map<pair<int, int>, int> last[2];

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    int n, q;
    cin >> n >> q;
    A = DSU(n);
    united = DSU(n);
    for (int i = 0; i < q; i++) {
        string s;
        int x, y;
        cin >> s >> x >> y;
        --x;
        --y;
        int idx = (s[0] == 'A' ? 0 : 1);
        if (x > y)
            swap(x, y);
        if (last[idx].count(make_pair(x, y)) == 0) {
            last[idx][make_pair(x, y)] = i;
        } else {
            add_edge(0, 0, q, last[idx][make_pair(x, y)], i, edge(s[0], x, y));
            last[idx].erase(make_pair(x, y));
        }
    }
    for (int i = 0; i < 2; i++)
        for (auto a : last[i])
            add_edge(0, 0, q, a.second, q, edge(char(i + 'A'), a.first.first, a.first.second));
    dfs(0, 0, q);
    for (int i = 0; i < q; i++)
        cout << ans[i] << "\n";
}
2070
A
FizzBuzz Remixed
FizzBuzz is one of the most well-known problems from coding interviews. In this problem, we will consider a remixed version of FizzBuzz: Given an integer $n$, process all integers from $0$ to $n$. For every integer such that its remainders modulo $3$ and modulo $5$ are the same (so, for every integer $i$ such that $i \bmod 3 = i \bmod 5$), print FizzBuzz. However, you don't have to solve it. Instead, given the integer $n$, you have to report how many times the correct solution to that problem will print FizzBuzz.
The key observation in this problem is that, if you pick two integers $x$ and $x+15$, both their remainders modulo $3$ and modulo $5$ are the same. So, the number of integers we need to count in $[0, 14]$ is the same as in $[15, 29]$, the same as in $[30, 44]$, and so on. So, you can calculate the number of segments of length $15$ starting from $0$ before $n$ (which is $\lfloor \frac{n}{15}\rfloor$), multiply it by the number of values we need in $[0, 14]$, and then process the last (partial) segment naively, since it will contain at most $15$ elements. Time complexity: $O(1)$.
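Equivalently, the residues $i \bmod 15 \in \{0, 1, 2\}$ are exactly the ones with $i \bmod 3 = i \bmod 5$, so the whole answer collapses to a closed form (illustrative sketch; `fizzbuzz_count` is our name):

```python
def fizzbuzz_count(n):
    # Each full block of 15 contributes the 3 hits {0, 1, 2}; the partial
    # block contributes min(n % 15 + 1, 3) more.
    return 3 * (n // 15) + min(n % 15 + 1, 3)
```

This agrees with a direct brute-force count over $0..n$ for every $n$.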
[ "brute force", "math" ]
800
t = int(input())
for i in range(t):
    n = int(input())
    ans = 3 * (n // 15)
    n %= 15
    for j in range(n + 1):
        if j % 3 == j % 5:
            ans += 1
    print(ans)