| contest_id | index | title | statement | tutorial | tags | rating | code |
|---|---|---|---|---|---|---|---|
1930
|
I
|
Counting Is Fun
|
You are given a binary$^\dagger$ pattern $p$ of length $n$.
A binary string $q$ of the same length $n$ is called \textbf{good} if for every $i$ ($1 \leq i \leq n$), there exist indices $l$ and $r$ such that:
- $1 \leq l \leq i \leq r \leq n$, and
- $p_i$ is a mode$^\ddagger$ of the string $q_lq_{l+1}\ldots q_r$.
Count the number of good binary strings modulo $998\,244\,353$.
$^\dagger$ A binary string is a string that only consists of characters $\mathtt{0}$ and $\mathtt{1}$.
$^\ddagger$ Character $c$ is a mode of string $t$ of length $m$ if the number of occurrences of $c$ in $t$ is at least $\lceil \frac{m}{2} \rceil$. For example, $\mathtt{0}$ is a mode of $\mathtt{010}$, $\mathtt{1}$ is not a mode of $\mathtt{010}$, and both $\mathtt{0}$ and $\mathtt{1}$ are modes of $\mathtt{011010}$.
|
It is convenient here to assign weights $\mathtt{0} \to -1$ and $\mathtt{1} \to 1$. Given a string $t$, define the prefix sums $p$ of its weights. For example, if $t=\mathtt{0010111}$, then $p=[0,-1,-2,-1,-2,-1,0,1]$. Then if $t$ is bad and $i$ is an index that violates the definition, we have $\max(p_0,p_1,\ldots,p_{i-1}) < \min(p_i,p_{i+1},\ldots,p_n)$ if $t_i = \mathtt{0}$, or $\min(p_0,p_1,\ldots,p_{i-1}) > \max(p_i,p_{i+1},\ldots,p_n)$ if $t_i = \mathtt{1}$. It is convenient to assume that $p_0 < p_n$: all strings with $p_0 = p_n$ are clearly good, and the case $p_0 > p_n$ is handled similarly. For strings with $p_0 < p_n$, the condition can only be violated at an index with $t_i = \mathtt{0}$. The solution uses the principle of inclusion-exclusion (PIE). Fix a set of positions $I$ that are bad, so that if $i \in I$, then $t_i =\mathtt{0}$ and $\max(p_0,p_1,\ldots,p_{i-1}) < \min(p_i,p_{i+1},\ldots,p_n)$. We count the number of ways to construct $p$ satisfying these conditions and add it to the answer multiplied by $(-1)^{|I|}$. Suppose that $I = \{i_1,i_2,\ldots,i_k\}$. Then $t[1,i_1]$ and $t[i_k,n]$ must be ballot sequences of lengths $i_1-1$ and $n-i_k$ respectively (OEIS A001405, denote it as $f(n)$), while each $t[i_j,i_{j+1}]$ must be a bidirectional ballot sequence of length $i_{j+1}-i_j-1$ (OEIS A167510, denote it as $g(n)$). Note that in our definition of a ballot sequence, we do not require prefixes and suffixes to have strictly more $\mathtt{1}$s than $\mathtt{0}$s. It is the same sequence, but we need to shift it by a few places when referring to OEIS. The first $n$ terms of $f$ are easily computed in linear time. We will focus on computing the first $n$ terms of $g$ in $O(n \log^2 n)$.
Computing $g(n)$. First, consider the number of bidirectional ballot sequences with $\frac{n+k}{2}$ $\mathtt{1}$s and $\frac{n-k}{2}$ $\mathtt{0}$s.
We can imagine this as lattice walks from $(0,0)$ to $(n,k)$ where $\mathtt{1} \to (1,1)$ and $\mathtt{0} \to (1,-1)$. If the walk touches the line $y=-1$ or $y=k+1$, it is invalid. We can use the reflection method here, similar to a proof of the Catalan number formula. The number of valid walks is $\#(*) - \#(T) + \#(TB) - \#(TBT) + \cdots - \#(B) + \#(BT) - \#(BTB) + \cdots$, where, for example, $\#(BTB)$ denotes the number of walks that touch the bottom line, then the top line, then the bottom line, and may continue to touch the top and bottom lines after that. We have $\#(*) = \binom{n}{\frac{n+k}{2}}$, $\#(T) = \binom{n}{\frac{n+k+2}{2}}$, $\#(TB) = \binom{n}{\frac{n+3k+4}{2}}$, $\#(TBT) = \binom{n}{\frac{n+3k+6}{2}}$, $\ldots$, $\#(B) = \binom{n}{\frac{n+k+2}{2}}$, $\#(BT) = \binom{n}{\frac{n+k+4}{2}}$, $\#(BTB) = \binom{n}{\frac{n+3k+6}{2}}$, $\ldots$ This already gives a method to compute $g(n)$ in $O(n \log n)$: for a fixed $k$, we can compute the above sum in $O(\frac{n}{k})$, since only the first $O(\frac{n}{k})$ terms are nonzero. Next, notice that we can aggregate these sums without iterating over $k$: for a fixed $j$, we can find the coefficient $c_j$ of $\binom{n}{\frac{n+j}{2}}$ across all $k$. This coefficient does not depend on $n$, so we only need to compute $c$ once. Now, note that $\binom{n}{\frac{n+z}{2}} = [x^z] (x^{-1} + x)^n$, so $g(n) = [x^0]\, C \cdot (x^{-1} + x)^n$, where $C$ is the OGF of $c$. From this formulation, we can compute the first $n$ terms of $g$ in $O(n \log^2 n)$ using divide and conquer: $DnC(l,r,V)$ computes $g(l), \ldots, g(r)$, where $V$ holds the coefficients of $x^{l-r}, \ldots, x^{r-l}$ in $C \cdot (x^{-1} + x)^l$. $DnC(l,r,V)$ calls $DnC(l,m,V)$ and $DnC(m+1,r,V \cdot (x^{-1}+x)^{m-l+1})$. We have the recurrence $T(n) = 2 T(\frac{n}{2}) + O(n \log n)$, so $T(n) = O(n \log^2 n)$. Of course, for a constant-factor speedup, you can split the odd and even coefficients, but that is not needed.
It is possible to compute the first $n$ terms of $g$ in $O(n \log n)$, but this does not improve the overall complexity of the solution.
Final Steps. Now that we have obtained the first $n$ terms of $f$ and $g$, let us return to the original problem. If $t_i =\mathtt{0}$, define $dp_i = f(i-1) - \sum\limits_{j < i,\; t_j = \mathtt{0}} dp_j \cdot g(i-j-1)$. Then this contributes $f(n-i) \cdot dp_i$ to the number of bad strings. Again, we use divide and conquer to perform this quickly. Briefly, $DnC(l,r)$ computes the values of $dp[l,r]$ given that the contributions from $dp[1,l-1]$ have already been transferred to $dp[l,r]$. We call $DnC(l,m)$, compute the contribution from $dp[l,m]$ to $dp[m+1,r]$ using FFT, and then call $DnC(m+1,r)$. The complexity of this is $O(n \log^2 n)$.
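As a sanity check, the reflection computation can be compared against brute force for small parameters. This is a standalone Python sketch (function names are ours, not from the reference solution); it counts walks from $(0,0)$ to $(n,k)$ that avoid the lines $y=-1$ and $y=k+1$ by alternately reflecting the start point across the two barriers:

```python
from itertools import product
from math import comb

def count_bounded_walks(n, k):
    """Walks from (0,0) to (n,k) with steps (1,+1)/(1,-1) that never touch
    y = -1 or y = k + 1, counted by the two-barrier reflection principle."""
    def paths_to(y):
        # unconstrained +-1 walks of length n ending at height y
        if abs(y) > n or (n + y) % 2:
            return 0
        return comb(n, (n + y) // 2)

    total = paths_to(k)
    for first in (k + 1, -1):          # first barrier touched: top or bottom
        start, sign, line = 0, -1, first
        while True:
            start = 2 * line - start   # reflect the start across the barrier
            term = paths_to(k - start)
            if term == 0:              # terms vanish once the image is too far
                break
            total += sign * term
            sign = -sign
            line = (k + 1) if line == -1 else -1  # alternate barriers
    return total

def count_bounded_walks_brute(n, k):
    # direct enumeration, for checking small cases
    cnt = 0
    for steps in product((1, -1), repeat=n):
        h, ok = 0, True
        for s in steps:
            h += s
            if h < 0 or h > k:
                ok = False
                break
        cnt += ok and h == k
    return cnt
```

For instance, both functions agree that there are exactly $2$ such walks for $n=4$, $k=2$.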
|
[
"combinatorics"
] | 3,500
|
#include <bits/stdc++.h>
using namespace std;
#define int long long
#define ll long long
#define ii pair<int,int>
#define iii tuple<int,int,int>
#define fi first
#define se second
#define endl '\n'
#define debug(x) cout << #x << ": " << x << endl
#define pub push_back
#define pob pop_back
#define puf push_front
#define pof pop_front
#define lb lower_bound
#define ub upper_bound
#define rep(x,start,end) for(int x=(start)-((start)>(end));x!=(end)-((start)>(end));((start)<(end)?x++:x--))
#define all(x) (x).begin(),(x).end()
#define sz(x) (int)(x).size()
mt19937 rng(chrono::system_clock::now().time_since_epoch().count());
const int MOD=998244353;
ll qexp(ll b,ll p,int m){
ll res=1;
while (p){
if (p&1) res=(res*b)%m;
b=(b*b)%m;
p>>=1;
}
return res;
}
ll inv(ll i){
return qexp(i,MOD-2,MOD);
}
ll fix(ll i){
i%=MOD;
if (i<0) i+=MOD;
return i;
}
ll fac[1000005];
ll ifac[1000005];
ll nCk(int i,int j){
if (i<j) return 0;
return fac[i]*ifac[j]%MOD*ifac[i-j]%MOD;
}
const ll mod = (119 << 23) + 1, root = 62; // = 998244353
// For p < 2^30 there is also e.g. 5 << 25, 7 << 26, 479 << 21
// and 483 << 21 (same root). The last two are > 10^9.
typedef vector<ll> vl;
void ntt(vl &a) {
int n = sz(a), L = 31 - __builtin_clz(n);
static vl rt(2, 1);
for (static int k = 2, s = 2; k < n; k *= 2, s++) {
rt.resize(n);
ll z[] = {1, qexp(root, mod >> s, mod)};
rep(i,k,2*k) rt[i] = rt[i / 2] * z[i & 1] % mod;
}
vector<int> rev(n);
rep(i,0,n) rev[i] = (rev[i / 2] | (i & 1) << L) / 2;
rep(i,0,n) if (i < rev[i]) swap(a[i], a[rev[i]]);
for (int k = 1; k < n; k *= 2)
for (int i = 0; i < n; i += 2 * k) rep(j,0,k) {
ll z = rt[j + k] * a[i + j + k] % mod, &ai = a[i + j];
a[i + j + k] = ai - z + (z > ai ? mod : 0);
ai += (ai + z >= mod ? z - mod : z);
}
}
vl conv(const vl &a, const vl &b) {
if (a.empty() || b.empty()) return {};
int s = sz(a) + sz(b) - 1, B = 32 - __builtin_clz(s), n = 1 << B;
int inv = qexp(n, mod - 2, mod);
vl L(a), R(b), out(n);
L.resize(n), R.resize(n);
ntt(L), ntt(R);
rep(i,0,n) out[-i & (n - 1)] = (ll)L[i] * R[i] % mod * inv % mod;
ntt(out);
return {out.begin(), out.begin() + s};
}
int n;
string s;
int c[100005];
int f[100005];
void calc(int l,int r,vector<int> v){
while (sz(v)>(r-l)*2+1) v.pob();
if (l==r){
f[l]=v[0];
return;
}
int m=l+r>>1;
calc(l,m,vector<int>(v.begin()+(r-m),v.end()));
vector<int> a;
int t=m-l+1;
rep(x,0,t+1) a.pub(nCk(t,x)),a.pub(0);
v=conv(v,a);
calc(m+1,r,vector<int>(v.begin()+(2*t),v.end()));
}
int ans[100005];
void solve(int l,int r){
if (l==r) return;
int m=l+r>>1;
solve(l,m);
auto a=conv(vector<int>(ans+l,ans+m+1),vector<int>(f,f+(r-l)));
rep(x,m+1,r+1) if (s[x]=='0') ans[x]=fix(ans[x]-a[x-1-l]);
solve(m+1,r);
}
signed main(){
ios::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
cin.exceptions(ios::badbit | ios::failbit);
fac[0]=1;
rep(x,1,1000005) fac[x]=fac[x-1]*x%MOD;
ifac[1000004]=inv(fac[1000004]);
rep(x,1000005,1) ifac[x-1]=ifac[x]*x%MOD;
rep(x,1,100005){
c[x]++;
int curr=x;
while (curr<100005){
if (curr+2<100005) c[curr+2]-=2;
if (curr+4<100005) c[curr+4]++;
if (curr+2*x+4<100005) c[curr+2*x+4]++;
curr+=2*x+4;
}
}
cin>>n;
cin>>s;
vector<int> v;
rep(x,n+1,1) v.pub(fix(c[x]+(x>=2?c[x-2]:0LL)));
calc(1,n,v);
f[0]=1;
int fin=qexp(2,n,MOD);
rep(_,0,2){
rep(x,0,n) ans[x]=(s[x]=='0')?nCk(x,x/2):0LL;
solve(0,n-1);
rep(x,0,n) if (s[x]=='0') fin=fix(fin-ans[x]*nCk((n-x-1),(n-x-1)/2));
for (auto &it:s) it^=1;
}
cout<<fin<<endl;
}
|
1931
|
A
|
Recovering a Small String
|
Nikita had a word consisting of exactly $3$ lowercase Latin letters. The letters in the Latin alphabet are numbered from $1$ to $26$, where the letter "a" has the index $1$, and the letter "z" has the index $26$.
He encoded this word as the sum of the positions of all the characters in the alphabet. For example, the word "cat" he would encode as the integer $3 + 1 + 20 = 24$, because the letter "c" has the index $3$ in the alphabet, the letter "a" has the index $1$, and the letter "t" has the index $20$.
However, this encoding turned out to be ambiguous! For example, when encoding the word "ava", the integer $1 + 22 + 1 = 24$ is also obtained.
Determine the lexicographically smallest word of $3$ letters that could have been encoded.
A string $a$ is lexicographically smaller than a string $b$ if and only if one of the following holds:
- $a$ is a prefix of $b$, but $a \ne b$;
- in the first position where $a$ and $b$ differ, the string $a$ has a letter that appears earlier in the alphabet than the corresponding letter in $b$.
|
The problem can be solved by simply iterating over all $3$-letter combinations and taking the lexicographically smallest one whose encoding equals $n$. Alternatively, one can start from the string consisting of three letters "a" and, going through it from the end while the remaining value of $n$ is greater than zero, increase each letter to the maximum possible value, subtracting the corresponding difference from $n$.
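The second (greedy) approach can be sketched as follows (a standalone sketch; the function name and the assumption $3 \le n \le 78$ are ours):

```python
def smallest_word(n):
    """Lexicographically smallest 3-letter word whose letter indices sum to n.
    Assumes 3 <= n <= 78 so that a solution exists (a hypothetical helper,
    not part of the reference solution)."""
    word = [1, 1, 1]         # start from "aaa" (letter indices 1, 1, 1)
    rest = n - 3             # the value still to be distributed
    for i in (2, 1, 0):      # raise letters starting from the end
        add = min(rest, 25)  # a letter can grow from index 1 up to 26
        word[i] += add
        rest -= add
    return ''.join(chr(ord('a') + v - 1) for v in word)
```

For the statement's example value $24$ this yields "aav".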
|
[
"brute force",
"strings"
] | 800
|
#include<bits/stdc++.h>
using namespace std;
void solve(){
    int n, sz = 26;
    cin >> n;
    string mins = "zzz", cur;
    for(int i = 0; i < sz; i++){
        for(int j = 0; j < sz; j++){
            for(int k = 0; k < sz; k++){
                if(i + j + k + 3 == n){
                    cur.clear(); // reset between candidates
                    cur += char(i + 'a');
                    cur += char(j + 'a');
                    cur += char(k + 'a');
                    mins = min(cur, mins);
                }
            }
        }
    }
    cout << mins << "\n";
}

int main(){
    int t;
    cin >> t;
    while(t--) {
        solve();
    }
}
|
1931
|
B
|
Make Equal
|
There are $n$ containers of water lined up, numbered from left to right from $1$ to $n$. Each container can hold any amount of water; initially, the $i$-th container contains $a_i$ units of water. The sum of $a_i$ is divisible by $n$.
You can apply the following operation any (possibly zero) number of times: pour any amount of water from the $i$-th container to the $j$-th container, where $i$ must be \textbf{less} than $j$ (i.e. $i<j$). Any index can be chosen as $i$ or $j$ any number of times.
Determine whether it is possible to make the amount of water in all containers the same using this operation.
|
Since the number of operations does not matter, let's find any suitable sequence of operations. Each container should end up with $k = \frac{\sum a_i}{n}$ units of water. Water can only be poured to the right, so we iterate over $i$ from $1$ to $n-1$ and pour all the excess water from the $i$-th container into the $(i+1)$-th. If at some point the $i$-th container holds less than $k$ units, the answer is NO.
|
[
"greedy"
] | 800
|
def solve():
    n = int(input())
    a = [int(x) for x in input().split()]
    k = sum(a) // n
    for i in range(n - 1):
        if a[i] < k:
            print('NO')
            return
        a[i + 1] += a[i] - k
        a[i] = k
    print('YES')

for _ in range(int(input())):
    solve()
|
1931
|
C
|
Make Equal Again
|
You have an array $a$ of $n$ integers.
You can \textbf{no more than once} apply the following operation: select three integers $i$, $j$, $x$ ($1 \le i \le j \le n$) and assign all elements of the array with indexes from $i$ to $j$ the value $x$. The price of this operation depends on the selected indices and is equal to $(j - i + 1)$ burles.
For example, the array is equal to $[1, 2, 3, 4, 5, 1]$. If we choose $i = 2, j = 4, x = 8$, then after applying this operation, the array will be equal to $[1, 8, 8, 8, 5, 1]$.
What is the least amount of burles you need to spend to make all the elements of the array equal?
|
If all the elements of the array are equal, nothing needs to be done and the answer is $0$. Otherwise, you need to apply the assignment operation on exactly one segment. Our goal is to choose the shortest possible segment, i.e. to exclude as many elements as possible from the beginning and the end of the array. Note that we can exclude only equal elements, and we then assign the segment the value of the excluded elements. Let's find the lengths of the longest prefix and the longest suffix consisting of equal elements; denote them $k_1$ and $k_2$, and let $k$ be the total number of excluded elements. If $a_0 = a_{n - 1}$, then $k = k_1 + k_2$ (exclude both the prefix and the suffix); otherwise $k = \max(k_1, k_2)$ (exclude the longer one). The answer is $n - k$: all non-excluded elements must be replaced so that they become equal to the excluded ones.
|
[
"brute force",
"greedy",
"math"
] | 1,000
|
def solve():
    n = int(input())
    a = list(map(int, input().split()))
    i1 = 0
    i2 = 0
    while i1 < n and a[i1] == a[0]:
        i1 += 1
    while i2 < n and a[n - i2 - 1] == a[n - 1]:
        i2 += 1
    res = n
    if a[0] == a[n - 1]:
        res -= i1
        res -= i2
    else:
        res -= max(i1, i2)
    print(max(0, res))

t = int(input())
for i in range(t):
    solve()
|
1931
|
D
|
Divisible Pairs
|
Polycarp has two favorite integers $x$ and $y$ (they can be equal), and he has found an array $a$ of length $n$.
Polycarp considers a pair of indices $\langle i, j \rangle$ ($1 \le i < j \le n$) beautiful if:
- $a_i + a_j$ is divisible by $x$;
- $a_i - a_j$ is divisible by $y$.
For example, if $x=5$, $y=2$, $n=6$, $a=$[$1, 2, 7, 4, 9, 6$], then the only beautiful pairs are:
- $\langle 1, 5 \rangle$: $a_1 + a_5 = 1 + 9 = 10$ ($10$ is divisible by $5$) and $a_1 - a_5 = 1 - 9 = -8$ ($-8$ is divisible by $2$);
- $\langle 4, 6 \rangle$: $a_4 + a_6 = 4 + 6 = 10$ ($10$ is divisible by $5$) and $a_4 - a_6 = 4 - 6 = -2$ ($-2$ is divisible by $2$).
Find the number of beautiful pairs in the array $a$.
|
Let's consider a beautiful pair. Since $(a_i + a_j) \bmod x = 0$, it follows that $(a_i \bmod x + a_j \bmod x) \bmod x = 0$, which implies that $a_i \bmod x + a_j \bmod x$ is either $x$ or $0$. Therefore, for a fixed $j$, this holds exactly when $a_i \bmod x = (x - a_j \bmod x) \bmod x$. Since $(a_i - a_j) \bmod y = 0$, it follows that $a_i \bmod y = a_j \bmod y$. Thus, for a fixed $a_j$, the suitable $a_i$ are exactly those with: $i < j$; $a_i \bmod x = (x - a_j \bmod x) \bmod x$; $a_i \bmod y = a_j \bmod y$. We iterate through $j$ from left to right and maintain the count of prefix elements with each pair $\langle a_i \bmod x, a_i \bmod y \rangle$ using a map.
|
[
"combinatorics",
"math",
"number theory"
] | 1,300
|
def solve():
    n, x, y = map(int, input().split())
    a = [int(v) for v in input().split()]
    cnt = dict()
    ans = 0
    for e in a:
        xx, yy = e % x, e % y
        ans += cnt.get(((x - xx) % x, yy), 0)
        cnt[(xx, yy)] = cnt.get((xx, yy), 0) + 1
    print(ans)

for _ in range(int(input())):
    solve()
|
1931
|
E
|
Anna and the Valentine's Day Gift
|
Sasha gave Anna a list $a$ of $n$ integers for Valentine's Day. Anna doesn't need this list, so she suggests destroying it by playing a game.
Players take turns. Sasha is a gentleman, so he gives Anna the right to make the first move.
- On her turn, \textbf{Anna must} choose an element $a_i$ from the list and reverse the sequence of its digits. For example, if Anna chose the element with a value of $42$, it would become $24$; if Anna chose the element with a value of $1580$, it would become $851$. Note that leading zeros are removed. After such a turn, the number of elements in the list does not change.
- On his turn, \textbf{Sasha must} extract \textbf{two} elements $a_i$ and $a_j$ ($i \ne j$) from the list, concatenate them in any order and insert the result back into the list. For example, if Sasha chose the elements equal to $2007$ and $19$, he would remove these two elements from the list and add the integer $200719$ or $192007$. After such a turn, the number of elements in the list decreases by $1$.
Players can't skip turns. The game ends when Sasha can't make a move, i.e. \textbf{after} Anna's move there is \textbf{exactly} one number left in the list. If this integer is \textbf{not less than} $10^m$ (i.e., $\ge 10^m$), Sasha wins. Otherwise, Anna wins.
It can be shown that the game will always end. Determine who will win if both players play optimally.
|
If the decimal representation of a number $x$ has exactly $c$ digits, then $x \ge 10^{c - 1}$. From this it follows that Sasha does not need to maximize the final number; it is sufficient for him to maximize the number of digits in it. During his turn, Sasha does not change the total number of digits, but Anna can: the total number of digits cannot increase, but it can decrease when Anna reverses a number with trailing zeros ($1200 \to 21$). Now the optimal strategies can be formulated. Anna should choose the number with the largest number of trailing zeros and reverse it, removing them. Sasha should find the number with the largest number of trailing zeros and concatenate another number onto its end; since $1 \le a_i$, this preserves the zeros of that number ($100500, 2007 \to 1005002007$). Such a solution can be implemented in $O(n \log n)$ using sorting and a linear traversal, or using std::set. If desired, counting sort can be used to improve the complexity to linear.
|
[
"games",
"greedy",
"math",
"sortings"
] | 1,400
|
#include <bits/stdc++.h>
#define all(arr) arr.begin(), arr.end()
using namespace std;

const int MAXN = 200200;

int n, m;
string arr[MAXN];
int len[MAXN], zrr[MAXN];

void build() {
    memset(zrr, 0, sizeof(*zrr) * n);
    for (int i = 0; i < n; ++i) {
        len[i] = arr[i].size();
        for (auto it = arr[i].rbegin(); it != arr[i].rend() && *it == '0'; ++it) {
            ++zrr[i];
        }
    }
}

string solve() {
    int ans = 0;
    for (int i = 0; i < n; ++i) {
        ans += len[i] - zrr[i];
    }
    sort(zrr, zrr + n);
    reverse(zrr, zrr + n);
    for (int i = 0; i < n; ++i) {
        if (i & 1) ans += zrr[i];
    }
    return (ans - 1 >= m ? "Sasha" : "Anna");
}

int main() {
    int t; cin >> t;
    while (t--) {
        cin >> n >> m;
        for (int i = 0; i < n; ++i)
            cin >> arr[i];
        build();
        cout << solve() << '\n';
    }
}
|
1931
|
F
|
Chat Screenshots
|
There are $n$ people in the programming contest chat. Chat participants are ordered by activity, but each person sees himself at the top of the list.
For example, there are $4$ participants in the chat, and their order is $[2, 3, 1, 4]$. Then
- $1$-st user sees the order $[1, 2, 3, 4]$.
- $2$-nd user sees the order $[2, 3, 1, 4]$.
- $3$-rd user sees the order $[3, 2, 1, 4]$.
- $4$-th user sees the order $[4, 2, 3, 1]$.
$k$ people posted screenshots in the chat, which show the order of participants shown to this user. The screenshots were taken within a short period of time, and the order of participants has not changed.
Your task is to determine whether there is a certain order that all screenshots correspond to.
|
The author of the screenshot is always in the first position, so his screenshot says nothing about his actual position. The rest of the chat participants are ordered according to the real order. Let's build a graph on $n$ vertices. For each screenshot, add $n - 2$ directed edges: for all $2 \le i < n$, add an edge from $a_{j,i}$ to $a_{j,i+1}$, where $j$ is the index of the current screenshot. An order satisfying all screenshots exists if and only if this graph has a topological ordering, i.e. contains no cycles. So our goal is to check the graph for acyclicity.
|
[
"combinatorics",
"dfs and similar",
"graphs"
] | 1,700
|
#include <iostream>
#include <vector>
#include <set>
#include <queue>
#include <algorithm>

using namespace std;
typedef long long ll;

int timer = 0;

void dfs(int v, vector<vector<int>> &g, vector<bool> &vis, vector<int> &tout) {
    vis[v] = true;
    for (int u: g[v]) {
        if (!vis[u]) {
            dfs(u, g, vis, tout);
        }
    }
    tout[v] = timer++;
}

void solve() {
    timer = 0;
    int n, k;
    cin >> n >> k;
    vector<vector<int>> a(k, vector<int>(n));
    vector<int> authors(k);
    for (int i = 0; i < k; ++i) {
        for (int j = 0; j < n; ++j) {
            cin >> a[i][j];
            a[i][j]--;
        }
        authors[i] = a[i][0];
    }
    vector<vector<int>> g(n);
    for (int i = 0; i < k; ++i) {
        for (int j = 1; j + 1 < n; ++j) {
            int i1 = a[i][j], i2 = a[i][j + 1];
            g[i1].push_back(i2);
        }
    }
    vector<int> tout(n, -1);
    vector<bool> vis(n);
    for (int i = 0; i < n; ++i) {
        if (tout[i] == -1) {
            dfs(i, g, vis, tout);
        }
    }
    for (int i = 0; i < k; ++i) {
        for (int j = 1; j + 1 < n; ++j) {
            int i1 = a[i][j], i2 = a[i][j + 1];
            if (tout[i1] < tout[i2]) {
                cout << "NO";
                return;
            }
        }
    }
    cout << "YES";
}

int main() {
    int t;
    cin >> t;
    for (int _ = 0; _ < t; ++_) {
        solve();
        cout << "\n";
    }
}
|
1931
|
G
|
One-Dimensional Puzzle
|
You have a one-dimensional puzzle, all the elements of which need to be put in one row, connecting with each other. All the puzzle elements are completely white and distinguishable from each other only if they have different shapes.
Each element has straight borders at the top and bottom, and on the left and right it has connections, each of which can be a protrusion or a recess. You \textbf{cannot} rotate the elements.
You can see that there are exactly $4$ types of elements. Two elements can be connected if the right connection of the left element is opposite to the left connection of the right element.
\begin{center}
All possible types of elements.
\end{center}
The puzzle contains $c_1, c_2, c_3, c_4$ elements of each type. The puzzle is considered complete if you have managed to combine \textbf{all} elements into one long chain. You want to know how many ways this can be done.
|
Note that elements of types $3$ and $4$ have a right connection opposite to their left connection. This means that the type of connection available at the end of the chain does not change when an element of one of these types is attached. Therefore, elements of these types can be combined into separate chains consisting only of elements of the same type, and joining such a chain does not change the type of connection at the end. Hence the backbone of the chain consists of elements of types $1$ and $2$: first build the backbone out of them only, and then insert an arbitrary set of chains of elements of types $3$ and $4$. For the backbone to be possible, it is necessary that $|c_1 - c_2| \le 1$; otherwise there will be extra elements left, and the answer is $0$. The backbone can start with an element of type $i$ ($1\le i\le 2$) if $c_i\ge c_j$, where $j = 3 - i$ is the opposite type ($1$ for $2$ and vice versa). If $c_1 = c_2$, there are $2$ options for the backbone, and they need to be counted separately. To find the number of combinations for a fixed backbone, we calculate two values independently and then multiply them: $x_1$, the number of ways to arrange chains of type-$3$ elements after type-$1$ elements, and $x_2$, the number of ways to arrange chains of type-$4$ elements after type-$2$ elements. $x_1$ and $x_2$ are calculated using the same formula, also known as stars and bars. We want to arrange $m$ elements ($m$ equals $c_3$ or $c_4$, depending on which value we calculate).
To arrange them, we have $n = n_0 + \mathit{add}$ positions, where $n_0$ is the number of elements of type $1$ or $2$ (matching $m$), and $\mathit{add}$ accounts for one extra position: it equals $1$ if the backbone begins with an element of the other type (then we can put a chain in front of it, for example several type-$4$ elements in front of a type-$1$ element), and $0$ otherwise. The number of ways to arrange $m$ indistinguishable elements in $n$ positions is $C_{n + m - 1}^{m}$, where $C_n^k$ is the number of combinations. To calculate the answer, multiply $x_1$ and $x_2$, and in the case $c_1 = c_2$ sum the resulting products.
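The stars-and-bars count $C_{n+m-1}^{m}$ used here can be verified against a direct enumeration for small values (a standalone sketch; function names are ours):

```python
from math import comb
from itertools import product

def stars_and_bars(n, m):
    """Number of ways to place m indistinguishable items into n positions,
    where each position may receive any number of items."""
    return comb(n + m - 1, m)

def brute_count(n, m):
    # enumerate all distributions as per-position counts summing to m
    return sum(1 for counts in product(range(m + 1), repeat=n)
               if sum(counts) == m)
```

For example, placing $2$ chains into $3$ slots gives $C_4^2 = 6$ arrangements.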
|
[
"combinatorics",
"math",
"number theory"
] | 2,000
|
#include <iostream>
#include <vector>
#include <set>
#include <queue>
#include <algorithm>

using namespace std;
typedef long long ll;

const int mod = 998244353;

ll pow_mod(ll x, ll p) {
    if (p == 0) {
        return 1;
    }
    if (p % 2 == 0) {
        ll y = pow_mod(x, p / 2);
        return (y * y) % mod;
    }
    return (x * pow_mod(x, p - 1)) % mod;
}

ll inv(ll x) {
    return pow_mod(x, mod - 2);
}

vector<ll> fact = {1};

ll cnk(ll n, ll k) {
    ll res = fact[n];
    res = (res * inv(fact[k])) % mod;
    res = (res * inv(fact[n - k])) % mod;
    return res;
}

ll calc(int n1, int n2, int n3, int n4) {
    return (cnk(n1 + n3 - 1, n3) * cnk(n2 + n4 - 1, n4)) % mod;
}

void solve() {
    int n1, n2, n3, n4;
    cin >> n1 >> n2 >> n3 >> n4;
    if (n1 + n2 == 0) {
        cout << (n3 == 0 || n4 == 0 ? 1 : 0) << '\n';
        return;
    }
    if (abs(n1 - n2) > 1) {
        cout << "0\n";
        return;
    }
    ll res = 0;
    if (n1 <= n2) {
        res += calc(n1 + 1, n2, n3, n4);
    }
    if (n2 <= n1) {
        res += calc(n1, n2 + 1, n3, n4);
    }
    cout << res % mod << '\n';
}

int main() {
    for (ll i = 1; i <= 4e6; ++i) {
        fact.push_back((fact.back() * i) % mod);
    }
    int t;
    cin >> t;
    for (int _ = 0; _ < t; ++_) {
        solve();
    }
}
|
1932
|
A
|
Thorns and Coins
|
During your journey through computer universes, you stumbled upon a very interesting world. It is a path with $n$ consecutive cells, each of which can either be empty, contain thorns, or a coin. In one move, you can move one or two cells along the path, provided that the destination cell does not contain thorns (and belongs to the path). If you move to the cell with a coin, you pick it up.
\begin{center}
{\small Here, green arrows correspond to legal moves, and the red arrow corresponds to an illegal move.}
\end{center}
You want to collect as many coins as possible. Find the maximum number of coins you can collect in the discovered world if you start in the leftmost cell of the path.
|
Let's move forward by $1$ if the next cell does not contain thorns, and by $2$ otherwise. By doing so, we visit every thorn-free cell that we can reach, and thus collect all the coins in those cells. Note that if we are in cell $i$ and cells $i+1$ and $i+2$ both contain thorns, then we can only jump into the thorns, ending the game. Therefore, we need to count how many coins appear in the string before the first occurrence of the substring "**".
|
[
"dp",
"greedy",
"implementation"
] | 800
|
#include <bits/stdc++.h>
using namespace std;

signed main() {
    cin.tie(nullptr);
    cout.tie(nullptr);
    ios::sync_with_stdio(false);
    int t;
    cin >> t;
    for(int _ = 0; _ < t; ++_){
        int n, ans = 0;
        cin >> n;
        string s;
        cin >> s;
        for (int i = 1; i < n; i++) {
            ans += (s[i] == '@');
            if (s[i] == '*' && s[i - 1] == '*')
                break;
        }
        cout << ans << "\n";
    }
    return 0;
}
|
1932
|
B
|
Chaya Calendar
|
The Chaya tribe believes that there are $n$ signs of the apocalypse. Over time, it has been found out that the $i$-th sign occurs every $a_i$ years (in years $a_i$, $2 \cdot a_i$, $3 \cdot a_i$, $\dots$).
According to the legends, for the apocalypse to happen, the signs must occur sequentially. That is, first they wait for the first sign to occur, then strictly after it, the second sign will occur, and so on. That is, if the $i$-th sign occurred in the year $x$, the tribe starts waiting for the occurrence of the $(i+1)$-th sign, starting from the year $x+1$.
In which year will the $n$-th sign occur and the apocalypse will happen?
|
The tribe will see the first sign in year $a_1$. They will then wait for the second sign in some year $x > a_1$ divisible by $a_2$; this happens after $a_2 - (a_1 \bmod a_2)$ years. Let's maintain the year $cur$ in which the $i$-th sign occurred; then the $(i+1)$-th will occur in year $cur + a_{i+1} - (cur \bmod a_{i+1})$, since this is the smallest year divisible by $a_{i+1}$ that is strictly greater than $cur$.
|
[
"number theory"
] | 1,100
|
def solve():
    n = int(input())
    a = [int(x) for x in input().split()]
    cur = 0
    for e in a:
        cur += e - cur % e
    print(cur)

for _ in range(1, int(input()) + 1):
    solve()
|
1932
|
C
|
LR-remainders
|
You are given an array $a$ of length $n$, a positive integer $m$, and a string of commands of length $n$. Each command is either the character 'L' or the character 'R'.
Process all $n$ commands in the order they are written in the string $s$. Processing a command is done as follows:
- First, output the remainder of the product of all elements of the array $a$ when divided by $m$.
- Then, if the command is 'L', remove the leftmost element from the array $a$, if the command is 'R', remove the rightmost element from the array $a$.
Note that after each move, the length of the array $a$ decreases by $1$, and after processing all commands, it will be empty.
Write a program that will process all commands in the order they are written in the string $s$ (from left to right).
|
If we perform all deletions except the last one, only one element remains; let's find its index in the array. Now perform the operations in reverse order: the deletions become additions, which are much easier to maintain. We store the remainder of the product of the current segment modulo $m$ and multiply in the new elements when extending the segment. Finally, we output the obtained numbers in reverse order.
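The reverse-replay idea can be sketched as follows (a standalone Python sketch of the same technique; names are ours):

```python
def lr_remainders(a, m, s):
    """For each command in s, the product of the remaining array modulo m,
    computed by replaying the deletions backwards as insertions."""
    n = len(a)
    # find the index of the single element left before the last command
    l, r = 0, n - 1
    for c in s[:-1]:
        if c == 'L':
            l += 1
        else:
            r -= 1
    assert l == r
    out = [a[l] % m]
    # undo commands from last-but-one to first, growing the segment
    for c in reversed(s[:-1]):
        if c == 'L':
            l -= 1
            out.append(out[-1] * a[l] % m)
        else:
            r += 1
            out.append(out[-1] * a[r] % m)
    return out[::-1]
```

For example, `lr_remainders([1, 2, 3, 4], 6, "LRRL")` returns `[0, 0, 0, 2]`.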
|
[
"brute force",
"data structures",
"implementation",
"math",
"two pointers"
] | 1,400
|
#include <bits/stdc++.h>
using namespace std;

#define forn(i, n) for (int i = 0; i < int(n); i++)

int main() {
    int t;
    cin >> t;
    forn(tt, t) {
        int n, m;
        cin >> n >> m;
        vector<int> a(n);
        forn(i, n)
            cin >> a[i];
        string s;
        cin >> s;
        int l = 0;
        int r = n - 1;
        forn(i, n - 1)
            if (s[i] == 'L')
                l++;
            else
                r--;
        assert(l == r);
        vector<int> b(n);
        b[n - 1] = a[l] % m;
        for (int i = n - 2; i >= 0; i--) {
            if (s[i] == 'L')
                b[i] = (b[i + 1] * a[--l]) % m;
            else
                b[i] = (b[i + 1] * a[++r]) % m;
        }
        assert(l == 0);
        assert(r == n - 1);
        forn(i, n)
            cout << b[i] << " ";
        cout << endl;
    }
}
|
1932
|
D
|
Card Game
|
Two players are playing an online card game. The game is played using a 32-card deck. Each card has a suit and a rank. There are four suits: clubs, diamonds, hearts, and spades. We will encode them with characters 'C', 'D', 'H', and 'S', respectively. And there are 8 ranks, in increasing order: '2', '3', '4', '5', '6', '7', '8', '9'.
Each card is denoted by two letters: its rank and its suit. For example, the 8 of Hearts is denoted as 8H.
At the beginning of the game, one suit is chosen as the \textbf{trump suit}.
In each round, players make moves like this: the first player places one of his cards on the table, and the second player must beat this card with one of their cards. After that, both cards are moved to the discard pile.
A card can beat another card if both cards have the same suit and the first card has a higher rank than the second. For example, 8S can beat 4S. Additionally, a trump card can beat any non-trump card, regardless of the rank of the cards, for example, if the trump suit is clubs ('C'), then 3C can beat 9D. Note that trump cards can be beaten only by the trump cards of higher rank.
There were $n$ rounds played in the game, so the discard pile now contains $2n$ cards. You want to reconstruct the rounds played in the game, but the cards in the discard pile are shuffled. Find any possible sequence of $n$ rounds that might have been played in the game.
|
Let's start by solving the problem separately for each suit except the trump suit. To do this, we form the maximum possible number of pairs within each suit, after which at most one card of that suit is left without a pair. This way we need the minimum number of trump cards to beat the remaining non-trump cards. Now we beat each of the remaining cards with a trump. If there are not enough trump cards, there is no solution. All that remains is to pair up the remaining trumps, just as at the beginning of the solution. Note that since the total number of cards is even and the rest of the cards are paired, there will be no extra trump card left.
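A minimal Python sketch of this greedy (illustrative, not the author's code; the name `reconstruct`, the `(rank, suit)` card representation, and returning `None` for the impossible case are assumptions):

```python
def reconstruct(n, trump, cards):
    # pair cards within each non-trump suit, beat leftovers with trumps,
    # then pair the remaining trumps among themselves
    by_suit = {s: [] for s in "CDHS"}
    for rank, suit in cards:
        by_suit[suit].append(rank)
    rounds, leftovers = [], []
    for suit in "CDHS":
        if suit == trump:
            continue
        ranks = sorted(by_suit[suit])
        if len(ranks) % 2 == 1:
            leftovers.append((ranks.pop(), suit))  # must be beaten by a trump
        for i in range(0, len(ranks), 2):
            rounds.append(((ranks[i], suit), (ranks[i + 1], suit)))
    trumps = sorted(by_suit[trump])
    if len(leftovers) > len(trumps):
        return None                                # not enough trumps
    for card in leftovers:
        rounds.append((card, (trumps.pop(), trump)))
    for i in range(0, len(trumps), 2):
        rounds.append(((trumps[i], trump), (trumps[i + 1], trump)))
    return rounds
```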
|
[
"greedy",
"implementation"
] | 1,400
|
#include <bits/stdc++.h>
#define long long long int
#define DEBUG
using namespace std;
// @author: pashka
int main() {
ios::sync_with_stdio(false);
int t;
cin >> t;
for (int tt = 0; tt < t; tt++) {
int n;
cin >> n;
string suites = "CDHS";
string ts;
cin >> ts;
int trump = suites.find(ts[0]);
vector<int> bs[4];
for (int i = 0; i < 2 * n; i++) {
string s;
cin >> s;
bs[suites.find(s[1])].push_back(s[0] - '2');
}
vector<string> res;
vector<string> left;
for (int i = 0; i < 4; i++) {
sort(bs[i].begin(), bs[i].end());
if (i == trump) continue;
if (bs[i].size() % 2 == 1) {
int x = bs[i].back();
left.push_back(string() + char('2' + x) + suites[i]);
bs[i].pop_back();
}
for (int j = 0; j < (int) bs[i].size(); j++) {
int x = bs[i][j];
res.push_back(string() + char('2' + x) + suites[i]);
}
}
if (left.size() > bs[trump].size()) {
cout << "IMPOSSIBLE\n";
} else {
for (string s : left) {
res.push_back(s);
int x = bs[trump].back();
bs[trump].pop_back();
res.push_back(string() + char('2' + x) + suites[trump]);
}
for (int j = 0; j < (int) bs[trump].size(); j++) {
int x = bs[trump][j];
res.push_back(string() + char('2' + x) + suites[trump]);
}
for (int i = 0; i < 2 * n; i += 2) {
cout << res[i] << " " << res[i + 1] << "\n";
}
}
}
}
|
1932
|
E
|
Final Countdown
|
You are in a nuclear laboratory that is about to explode and destroy the Earth. You must save the Earth before the final countdown reaches zero.
The countdown consists of $n$ ($1 \le n \le 4 \cdot 10^5$) mechanical indicators, each showing one decimal digit. You noticed that when the countdown changes its state from $x$ to $x-1$, it doesn't happen in one move. Instead, each change of a single digit takes one second.
So, for example, if the countdown shows 42, then it will change to 41 in one second, because only one digit is changed, but if the countdown shows 2300, then it will change to 2299 in three seconds, because the three last digits are changed.
Find out how much time is left before the countdown reaches zero.
|
Let's assume that the number $s$ is initially displayed on the countdown. Let's see how many times each of the indicators will switch. Indicator number $i$ (if we number the indicators from right to left, starting with $0$) will switch exactly $\lfloor s / 10^i \rfloor$ times. Thus, the answer is equal to $\sum_{i=0}^{n-1} \lfloor s / 10^i \rfloor$. To calculate this sum, let's split the number $s$ into individual digits and add their contributions. Let the digits of the number $s$ be $s_{n-1}s_{n-2}\ldots s_0$. The contribution of the digit $s_i$ to the final sum is $s_i\cdot\sum_{j=0}^i 10^j$. Thus, the answer is equal to $\sum_{i=0}^{n-1} s_i\cdot\sum_{j=0}^i 10^j$. Changing the order of summation, we get $\sum_{j=0}^{n-1} 10^j \cdot \sum_{i=j}^{n-1} s_i$. To calculate this sum, we need to pre-calculate the suffix sums $\sum_{i=j}^{n-1} s_i$ (this can be done in linear time), and then add each of the sums to the corresponding decimal digit of the final number, carrying over to the higher digits (this can also be done in linear time).
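The digit-column computation can be sketched as follows (a Python sketch assuming the countdown is given as a decimal string; `total_time` is a hypothetical name):

```python
def total_time(s):
    # prefix sums of digits, most significant first
    acc, pre = [], 0
    for c in s:
        pre += int(c)
        acc.append(pre)
    acc.reverse()        # acc[j] is now the column sum at power 10^j
    out, carry = [], 0   # resolve carries, least significant digit first
    for v in acc:
        carry += v
        out.append(carry % 10)
        carry //= 10
    while carry:
        out.append(carry % 10)
        carry //= 10
    while len(out) > 1 and out[-1] == 0:
        out.pop()        # strip leading zeros
    return ''.join(map(str, reversed(out)))
```

For the example from the statement, `total_time("2300")` returns `"2555"` ($2300 + 230 + 23 + 2$).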
|
[
"implementation",
"math",
"number theory"
] | 1,600
|
#include <bits/stdc++.h>
#define long long long int
#define DEBUG
using namespace std;
// @author: pashka
int main() {
ios::sync_with_stdio(false);
int t;
cin >> t;
while (t--) {
int n;
cin >> n;
string s;
cin >> s;
reverse(s.begin(), s.end());
vector<int> a(n + 1);
for (int i = n - 1; i >= 0; i--) {
a[i] = a[i + 1] + (s[i] - '0');
}
string res;
int c = 0;
for (int i = 0; i < n; i++) {
c += a[i];
res += (char)(c % 10 + '0');
c /= 10;
}
res += (char)(c + '0');
while (res.back() == '0') {
res.pop_back();
}
reverse(res.begin(), res.end());
cout << res << "\n";
}
return 0;
}
|
1932
|
F
|
Feed Cats
|
There is a fun game where you need to feed cats that come and go. The level of the game consists of $n$ steps. There are $m$ cats; the cat $i$ is present in steps from $l_i$ to $r_i$, inclusive. In each step, you can feed all the cats that are currently present or do nothing.
If you feed the same cat more than once, it will overeat, and you will immediately lose the game. Your goal is to feed as many cats as possible without causing any cat to overeat.
Find the maximum number of cats you can feed.
Formally, you need to select several integer points from the segment from $1$ to $n$ in such a way that among given segments, none covers two or more of the selected points, and as many segments as possible cover one of the selected points.
|
Let's use dynamic programming. Let $dp_i$ be the answer for the first $i$ steps ($dp_0 = 0$). There are two possible cases: either we fed the cats on step $i$ or not. If we did not feed the cats on step $i$, then $dp_i = dp_{i - 1}$: this is the best result for the first $i-1$ steps, and nothing changed on this step. If we fed the cats on step $i$, then the previous feeding must have happened strictly before the earliest arrival among the cats present on step $i$; otherwise some cat would be fed twice and overeat. Using a multiset, we find the step $x$ at which the earliest of the present cats arrived. In this case, we feed all the present cats and combine this with the best way of feeding cats during the first $x-1$ steps. At each of the $n$ steps, we choose the better of the two options. To maintain the multiset with the arrival moments of the cats, we add them at these moments in time and remove them at the departure moments (for each $r$, we can store all $l_i$ that leave at that moment).
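Python has no multiset, so this sketch keeps the arrival moments in a min-heap with lazy deletion, which serves the same purpose (illustrative code; the name `max_fed` and the `(l, r)` interval input are assumptions):

```python
import heapq

def max_fed(n, cats):
    delta = [0] * (n + 2)                # difference array for active counts
    starts = [[] for _ in range(n + 1)]  # intervals starting at each step
    for l, r in cats:
        delta[l] += 1
        delta[r + 1] -= 1
        starts[l].append((l, r))
    dp = [0] * (n + 1)
    heap, active = [], 0                 # min-heap of (l, r), pruned lazily
    for i in range(1, n + 1):
        active += delta[i]
        for seg in starts[i]:
            heapq.heappush(heap, seg)
        while heap and heap[0][1] < i:   # drop cats that already left
            heapq.heappop(heap)
        dp[i] = dp[i - 1]
        if heap:
            x = heap[0][0]               # earliest arrival among present cats
            dp[i] = max(dp[i], dp[x - 1] + active)
    return dp[n]
```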
|
[
"data structures",
"dp",
"sortings"
] | 1,900
|
#include <bits/stdc++.h>
//#define int long long
#define pb emplace_back
#define mp make_pair
#define x first
#define y second
#define all(a) a.begin(), a.end()
#define rall(a) a.rbegin(), a.rend()
typedef long double ld;
typedef long long ll;
using namespace std;
mt19937 rnd(time(nullptr));
const ll inf = 1e18;
const ll M = 998244353;
const ld pi = atan2(0, -1);
const ld eps = 1e-6;
void solve(int tc){
int n, m;
cin >> n >> m;
vector<pair<int, int>> a(m);
vector<int> op(n + 1);
vector<vector<int>> del(n + 1);
for(auto &e: a) {
cin >> e.x >> e.y;
op[e.x]++;
del[e.y].emplace_back(e.x);
}
multiset<int> cur;
vector<int> dp(n + 1);
for(int i = 1; i <= n; ++i){
dp[i] = dp[i - 1];
for(int j = 0; j < op[i]; ++j){
cur.insert(i);
}
if(!cur.empty()){
dp[i] = max(dp[i], int(dp[*cur.begin() - 1] + cur.size()));
}
for(int e: del[i]){
cur.erase(cur.find(e));
}
}
cout << dp[n];
}
bool multi = true;
signed main() {
int t = 1;
if (multi)cin >> t;
for (int i = 1; i <= t; ++i) {
solve(i);
cout << "\n";
}
return 0;
}
|
1932
|
G
|
Moving Platforms
|
There is a game where you need to move through a labyrinth. The labyrinth consists of $n$ platforms, connected by $m$ passages.
Each platform is at some level $l_i$, an integer number from $0$ to $H - 1$. In a single step, if you are currently on platform $i$, you can stay on it, or move to another platform $j$. To move to platform $j$ they have to be connected by the passage, and their levels have to be the same, namely $l_i = l_j$.
After each step, the levels of all platforms change. The new level of platform $i$ is calculated as $l'_i = (l_i + s_i) \bmod H$, for all $i$.
You start on platform $1$. Find the minimum number of steps you need to get to platform $n$.
|
First, note that whether two platforms are on the same level at a given moment does not depend on your moves. Hence, it is always optimal to reach every vertex you may need on the way to vertex $n$ as soon as possible. It means that you can simply run Dijkstra's algorithm to calculate the minimal number of moves needed to reach every vertex. The only complication is that you need to determine, for some vertices $u$, $v$ and moment $t$, the next moment when $l'_u = l'_v$, i.e. find the minimal $k$ such that $k \ge t$ and $l_u + ks_u \equiv l_v + ks_v \pmod H$. That can be equivalently written as $l_u - l_v \equiv k(s_v - s_u) \pmod H$. A congruence $a \equiv kb \pmod H$ can be solved as follows: if $a$ is not divisible by $\gcd(H, b)$, then there is no solution. Otherwise, divide $a$, $b$, $H$ by $\gcd(H, b)$ and name the results $a'$, $b'$, $H'$. Then $k \equiv k' \pmod{H'}$, where $k' = b'^{-1}a' \bmod H'$, and $b'^{-1}$ can be found using the extended Euclidean algorithm. The only thing remaining is to find the minimal $k$ such that $k \equiv k' \pmod{H'}$ and $k \ge t$.
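The congruence-solving step can be sketched like this (Python 3.8+, using `pow(b, -1, m)` for modular inverses in place of an explicit extended Euclidean algorithm; `next_meeting` is a hypothetical name):

```python
from math import gcd

def next_meeting(l_u, s_u, l_v, s_v, H, t):
    # smallest k >= t with l_u + k*s_u == l_v + k*s_v (mod H), or None
    a = (l_u - l_v) % H
    b = (s_v - s_u) % H
    g = gcd(b, H)
    if a % g:
        return None                    # the congruence has no solution
    a2, b2, H2 = a // g, b // g, H // g
    k0 = a2 * pow(b2, -1, H2) % H2     # b2 is invertible modulo H2
    if k0 < t:                         # lift k0 to the first value >= t
        k0 += (t - k0 + H2 - 1) // H2 * H2
    return k0
```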
|
[
"graphs",
"math",
"number theory",
"shortest paths"
] | 2,300
|
#include <bits/stdc++.h>
#define long long long int
#define DEBUG
using namespace std;
// @author: pashka
struct triple {
long d, x, y;
};
triple eucl(long a, long b) {
if (b == 0) {
return {a, 1, 0};
}
long k = a / b;
auto [d, x, y] = eucl(b, a - k * b);
return {d, y, x - k * y};
}
struct test {
void solve() {
int n, m, H;
cin >> n >> m >> H;
vector<long> l(n);
for (int i = 0; i < n; i++) cin >> l[i];
vector<long> s(n);
for (int i = 0; i < n; i++) cin >> s[i];
vector<vector<int>> g(n);
for (int i = 0; i < m; i++) {
int x, y;
cin >> x >> y;
x--;
y--;
g[x].push_back(y);
g[y].push_back(x);
}
const long INF = 1e18;
vector<long> d(n, INF);
d[0] = 0;
set<pair<long, int>> q;
q.insert({d[0], 0});
while (!q.empty()) {
auto p = *q.begin();
q.erase(p);
int v = p.second;
long t = d[v];
for (int u: g[v]) {
long a = (l[v] + (t % H) * s[v]) - (l[u] + (t % H) * s[u]);
a %= H;
if (a < 0) a += H;
long b = s[u] - s[v];
b %= H;
if (b < 0) b += H;
// a - bx = yH
auto [dd, x, y] = eucl(b, H);
// xb + yH = dd
if (a % dd != 0) continue;
x *= a / dd;
x %= (H / dd);
if (x < 0) x += H / dd;
long dt = x;
if (d[v] + dt + 1 < d[u]) {
q.erase({d[u], u});
d[u] = d[v] + dt + 1;
q.insert({d[u], u});
}
}
}
long res = d[n - 1];
if (res == INF) res = -1;
cout << res << "\n";
}
};
int main() {
ios::sync_with_stdio(false);
int n;
cin >> n;
for (int i = 0; i < n; i++) {
test().solve();
}
return 0;
}
|
1933
|
A
|
Turtle Puzzle: Rearrange and Negate
|
You are given an array $a$ of $n$ integers. You must perform the following two operations on the array (the first, then the second):
- Arbitrarily rearrange the elements of the array or leave the order of its elements unchanged.
- Choose at most one contiguous segment of elements and replace the signs of all elements in this segment with their opposites. Formally, you can choose a pair of indices $l, r$ such that $1 \le l \le r \le n$ and assign $a_i = -a_i$ for all $l \le i \le r$ (negate elements). Note that you may choose not to select a pair of indices and leave all the signs of the elements unchanged.
What is the \textbf{maximum sum of the array elements} after performing these two operations (the first, then the second)?
|
Neither of the operations changes the absolute value of any $a_i$. Therefore, the maximum answer we can get is $|a_1| + |a_2| + \ldots + |a_n|$. Now we will show there is a way to obtain this sum. The way is simple. First use operation 1 to sort the array, so all the negative elements come before the non-negative elements. Then use operation 2 to negate the prefix of negative elements (or do not perform the operation if all the elements are non-negative initially). We obtain an array of non-negative integers, hence the answer is indeed $|a_1| + |a_2| + \ldots + |a_n|$.
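The claim can be checked exhaustively on small arrays (an illustrative brute force, not part of the solution; `best_by_brute_force` is a hypothetical name):

```python
from itertools import permutations

def best_by_brute_force(a):
    # try every rearrangement and every (or no) negated segment
    n, best = len(a), -10**18
    for p in permutations(a):
        best = max(best, sum(p))                 # option: no negation
        for l in range(n):
            for r in range(l, n):
                arr = list(p)
                for i in range(l, r + 1):
                    arr[i] = -arr[i]
                best = max(best, sum(arr))
    return best
```

On small inputs this always matches `sum(abs(x) for x in a)`.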
|
[
"greedy",
"math",
"sortings"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
int main() {
int t;
cin >> t;
while (t--) {
int n;
cin >> n;
vector<int> arr(n);
for (int i = 0; i < n; i++) {
cin >> arr[i];
}
int sum = 0;
for (int i = 0; i < n; i++) {
sum += abs(arr[i]);
}
cout << sum << endl;
}
return 0;
}
|
1933
|
B
|
Turtle Math: Fast Three Task
|
You are given an array $a_1, a_2, \ldots, a_n$.
In one move, you can perform either of the following two operations:
- Choose an element from the array and remove it from the array. As a result, the length of the array decreases by $1$;
- Choose an element from the array and increase its value by $1$.
You can perform any number of moves. If the current array becomes empty, then no more moves can be made.
Your task is to find the \textbf{minimum} number of moves required to make the sum of the elements of the array $a$ divisible by $3$. It is possible that you may need $0$ moves.
Note that the sum of the elements of an empty array (an array of length $0$) is equal to $0$.
|
Let's denote the sum of elements as $s$. If $s$ is already divisible by $3$, then the answer is $0$. The answer is $1$ in the following cases: If $s \bmod 3 = 2$, then we can add $1$ to any element to make the sum divisible by $3$; If there exists an $a_i$ such that $s \bmod 3 = a_i \bmod3$, then we can remove such $a_i$ to make the sum divisible by $3$. Otherwise, if $s \bmod 3 = 1$, we cannot achieve the required sum in one operation. We can increase any element twice, then the sum will increase by $2$ and become divisible by $3$.
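The case analysis above translates directly into code (a sketch; `min_moves` is a hypothetical name):

```python
def min_moves(a):
    s = sum(a) % 3
    if s == 0:
        return 0
    if s == 2:
        return 1   # add 1 to any element
    # s == 1: remove an element x with x % 3 == 1 if one exists,
    # otherwise add 1 twice to some element
    return 1 if any(x % 3 == 1 for x in a) else 2
```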
|
[
"implementation",
"math",
"number theory"
] | 800
|
#include<bits/stdc++.h>
using namespace std;
int main(){
int t;
cin>>t;
while(t--){
int k;
cin>>k;
int ACC=0;
bool hv=false;
for(int i=0;i<k;i++){
int x;
cin>>x;
ACC+=x;
if(x%3==1){
hv=true;
}
}
if(ACC%3==0){
cout<<0<<endl;
}else if(ACC%3==2){
cout<<1<<endl;
}else{
if(hv==true){
cout<<1<<endl;
}else{
cout<<2<<endl;
}
}
}
}
|
1933
|
C
|
Turtle Fingers: Count the Values of k
|
You are given three \textbf{positive} integers $a$, $b$ and $l$ ($a,b,l>0$).
It can be shown that there always exists a way to choose \textbf{non-negative} (i.e. $\ge 0$) integers $k$, $x$, and $y$ such that $l = k \cdot a^x \cdot b^y$.
Your task is to find the number of distinct possible values of $k$ across all such ways.
|
Notice that there are not so many suitable values of $x$ and $y$ (since $2 \le a, b$ and $2^{20}=1\ 048\ 576$). This allows us to iterate through all suitable pairs of $x$ and $y$ and thus find all different suitable $k$.
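The iteration can be sketched as follows (assumes $a, b \ge 2$, as the argument above requires; `count_k` is a hypothetical name):

```python
def count_k(a, b, l):
    ks = set()
    x = l
    while True:            # strip factors of a from l
        y = x
        while True:        # strip factors of b from the remainder
            ks.add(y)      # k = l / (a^x * b^y) for this pair of exponents
            if y % b:
                break
            y //= b
        if x % a:
            break
        x //= a
    return len(ks)
```

For instance, `count_k(2, 5, 20)` finds the six values $k \in \{1, 2, 4, 5, 10, 20\}$.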
|
[
"brute force",
"implementation",
"math",
"number theory"
] | 1,100
|
#include <bits/stdc++.h>
#define int long long
using namespace std;
void solve(int tc){
int a, b, l;
cin >> a >> b >> l;
set<int> ans;
for(int i = 0; i <= 34; ++i){
int x = l;
bool fail = false;
for(int _ = 0; _ < i; ++_){
if(x % a){
fail = true;
break;
}
x /= a;
}
if(fail) break;
while(true){
ans.insert(x);
if(x % b) break;
x /= b;
}
}
cout << ans.size();
}
bool multi = true;
signed main() {
int t = 1;
if (multi)cin >> t;
for (int i = 1; i <= t; ++i) {
solve(i);
cout << "\n";
}
return 0;
}
|
1933
|
D
|
Turtle Tenacity: Continual Mods
|
Given an array $a_1, a_2, \ldots, a_n$, determine whether it is possible to \textbf{rearrange its elements} into $b_1, b_2, \ldots, b_n$, such that $b_1 \bmod b_2 \bmod \ldots \bmod b_n \neq 0$.
Here $x \bmod y$ denotes the remainder from dividing $x$ by $y$. Also, the modulo operations are calculated from left to right. That is, $x \bmod y \bmod z = (x \bmod y) \bmod z$. For example, $2024 \bmod 1000 \bmod 8 = (2024 \bmod 1000) \bmod 8 = 24 \bmod 8 = 0$.
|
Sort the array in non-decreasing order. Now, assume $a_1 \le a_2 \le \ldots \le a_n$. If $a_1 \neq a_2$, the minimum is unique. Therefore, place $a_1$ at the front, and the result after all modulo operations is just $a_1 > 0$. Hence the answer is yes for this case. If $a_1 = a_2$ and there exists some element $a_x$ such that $a_x \not\equiv 0 \pmod {a_1}$, then a possible solution is rearranging the array to $[a_x, a_1, a_2, \ldots, a_{x-1}, a_{x+1}, \ldots, a_n]$. Since $a_x \bmod a_1 < a_1$, $a_x \bmod a_1$ is the minimum among the other elements and the result after all modulo operations equals $a_x \bmod a_1 > 0$. Hence the answer is yes for this case. Otherwise (if all elements are multiples of the minimum) the answer is no, because any element modulo the minimum equals $0$, and at least one of the minimums must not be the first element. So after passing through two minimums we are guaranteed to get a $0$.
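A condensed Python version of this check (illustrative; it returns a boolean instead of printing YES/NO):

```python
def can_be_nonzero(a):
    a = sorted(a)
    if a[0] != a[1]:
        return True                  # the unique minimum goes first
    return any(x % a[0] for x in a)  # some element not divisible by the minimum
```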
|
[
"constructive algorithms",
"greedy",
"math",
"number theory",
"sortings"
] | 1,200
|
#include <bits/stdc++.h>
using namespace std;
int main() {
int t;
cin >> t;
while(t--) {
int n;
cin >> n;
int a[n];
for(int i=0; i<n; i++) cin >> a[i];
sort(a, a + n);
if(a[0] != a[1]) {
cout << "YES\n";
}
else {
bool PASS = 0;
for(int i=1; i<n; i++) {
if(a[i] % a[0] != 0) PASS = 1;
}
if(PASS) cout << "YES\n";
else cout << "NO\n";
}
}
}
|
1933
|
E
|
Turtle vs. Rabbit Race: Optimal Trainings
|
Isaac begins his training. There are $n$ running tracks available, and the $i$-th track ($1 \le i \le n$) consists of $a_i$ equal-length sections.
Given an integer $u$ ($1 \le u \le 10^9$), finishing each section can increase Isaac's ability by a certain value, described as follows:
- Finishing the $1$-st section increases Isaac's performance by $u$.
- Finishing the $2$-nd section increases Isaac's performance by $u-1$.
- Finishing the $3$-rd section increases Isaac's performance by $u-2$.
- $\ldots$
- Finishing the $k$-th section ($k \ge 1$) increases Isaac's performance by $u+1-k$. (The value $u+1-k$ can be negative, which means finishing an extra section decreases Isaac's performance.)
You are also given an integer $l$. You must choose an integer $r$ such that $l \le r \le n$ and Isaac will finish \textbf{each} section of \textbf{each} track $l, l + 1, \dots, r$ (that is, a total of $\sum_{i=l}^r a_i = a_l + a_{l+1} + \ldots + a_r$ sections).
Answer the following question: what is the optimal $r$ you can choose that the increase in Isaac's performance is maximum possible?
If there are multiple $r$ that maximize the increase in Isaac's performance, output the \textbf{smallest} $r$.
To increase the difficulty, you need to answer the question for $q$ different values of $l$ and $u$.
|
Notice that if we choose some $r$ such that the sum of $a_l, a_{l+1}, \dots, a_r$ does not exceed $u$, then completing each of the sections will increase our abilities. Using prefix sums and binary search, we will find the largest such $r$. Smaller values would increase our abilities by a smaller amount, so there is no point in checking them. However, it is worth considering the value $r' = r+1$: despite the fact that some of the sections of track $r'$ will only decrease our abilities, the overall increase may still be positive (for example, if the increases on track $r'$ are $[3, 2, 1, 0, -1]$). Considering even larger values is pointless, as completing all subsequent tracks would only decrease our abilities. Alternatively, ternary search can be used to find the answer.
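The binary search over prefix sums can be sketched with `bisect` (illustrative; the name `best_r` and the 1-based `l` are assumptions, and the gain of finishing $t$ sections is the arithmetic series $u + (u-1) + \ldots + (u+1-t)$):

```python
import bisect

def best_r(a, l, u):
    # a: section counts per track; l is 1-based; returns the smallest optimal r
    n = len(a)
    ps = [0] * (n + 1)
    for i, x in enumerate(a):
        ps[i + 1] = ps[i] + x

    def gain(r):
        t = ps[r] - ps[l - 1]            # total sections finished
        return t * u - t * (t - 1) // 2  # u + (u-1) + ... + (u+1-t)

    # largest r with at most u sections in total (but at least track l itself)
    r = max(l, bisect.bisect_right(ps, ps[l - 1] + u) - 1)
    # the only other candidate is r + 1; later tracks only make things worse
    if r + 1 <= n and gain(r + 1) > gain(r):
        return r + 1
    return r
```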
|
[
"binary search",
"implementation",
"math",
"ternary search"
] | 1,500
|
#include "bits/stdc++.h"
using namespace std;
#define int long long
#define double long double
void solve(int tc) {
int n;
cin >> n;
int a[n + 1];
for(int i = 1; i <= n; i++) cin >> a[i];
int ps[n + 1];
ps[0] = 0;
for(int i = 1; i <= n; i++) ps[i] = ps[i - 1] + a[i];
int q;
cin >> q;
while(q--) {
int l, u;
cin >> l >> u;
int lb = l, rb = n;
while(lb < rb) {
int mid = (lb + rb + 1) >> 1;
if(ps[mid] - ps[l - 1] <= u) lb = mid;
else rb = mid - 1;
}
int maxu = -1e18, optid;
for(int i = max(l, lb - 2); i <= min(n, lb + 2); i++) {
int t = ps[i] - ps[l - 1];
int ut = (u + (u - t + 1)) * t / 2;
if(ut > maxu) {
maxu = ut;
optid = i;
}
}
cout << optid << " ";
}
}
signed main() {
int t = 1; cin >> t;
for(int i = 1; i <= t; i++){
solve(i);
cout << "\n";
}
}
|
1933
|
F
|
Turtle Mission: Robot and the Earthquake
|
The world is a grid with $n$ rows and $m$ columns. The rows are numbered $0, 1, \ldots, n-1$, while the columns are numbered $0, 1, \ldots, m-1$. In this world, the columns are \textbf{cyclic} (i.e. the top and the bottom cells in each column are adjacent). The cell on the $i$-th row and the $j$-th column ($0 \le i < n, 0 \le j < m$) is denoted as $(i,j)$.
\textbf{At time $0$}, the cell $(i,j)$ (where $0 \le i < n, 0 \le j < m$) contains either a \textbf{rock} or \textbf{nothing}. The state of cell $(i,j)$ can be described using the integer $a_{i,j}$:
- If $a_{i,j} = 1$, there is a rock at $(i,j)$.
- If $a_{i,j} = 0$, there is nothing at $(i,j)$.
As a result of aftershocks from the earthquake, the columns follow tectonic plate movements: each column moves cyclically \textbf{upwards} at a velocity of $1$ cell per unit of time. Formally, for some $0 \le i < n, 0 \le j < m$, if $(i,j)$ contains a rock at the moment, it will move from $(i, j)$ to $(i - 1, j)$ (or to $(n - 1, j)$ if $i=0$).
The robot called RT is initially positioned at $(0,0)$. It has to go to $(n-1,m-1)$ to carry out an earthquake rescue operation (to the bottom rightmost cell). The earthquake doesn't change the position of the robot, they only change the position of rocks in the world.
Let RT's current position be $(x,y)$ ($0 \le x < n, 0 \le y < m$), it can perform the following operations:
- Go one cell cyclically upwards, i.e. from $(x,y)$ to $((x+n-1) \bmod n, y)$ using $1$ unit of time.
- Go one cell cyclically downwards, i.e. $(x,y)$ to $((x+1) \bmod n, y)$ using $1$ unit of time.
- Go one cell to the right, i.e. $(x,y)$ to $(x, y+1)$ using $1$ unit of time. (RT may perform this operation only if $y < m-1$.)
\textbf{Note that RT cannot go left using the operations nor can he stay at a position.}
Unfortunately, RT will explode upon colliding with a rock. As such, when RT is at $(x,y)$ and there is a rock at $((x+1) \bmod n, y)$ or $((x+2) \bmod n, y)$, RT cannot move down or it will be hit by the rock.
Similarly, if $y+1 < m$ and there is a rock at $((x+1) \bmod n, y+1)$, RT cannot move right or it will be hit by the rock.
However, it is worth noting that if there is a rock at $(x \bmod n, y+1)$ and $((x+1) \bmod n, y)$, RT can still move right safely.
Find the minimum amount of time RT needs to reach $(n-1,m-1)$ without colliding with any rocks. If it is impossible to do so, output $-1$.
|
Idea: erniepsycholone, prepared: erniepsycholone. View the task from a relative perspective, with robot RT and the ending location moving downwards instead of the rocks moving upwards.
|
[
"dfs and similar",
"dp",
"graphs",
"shortest paths"
] | 2,100
| null |
1933
|
F
|
Turtle Mission: Robot and the Earthquake
|
The world is a grid with $n$ rows and $m$ columns. The rows are numbered $0, 1, \ldots, n-1$, while the columns are numbered $0, 1, \ldots, m-1$. In this world, the columns are \textbf{cyclic} (i.e. the top and the bottom cells in each column are adjacent). The cell on the $i$-th row and the $j$-th column ($0 \le i < n, 0 \le j < m$) is denoted as $(i,j)$.
\textbf{At time $0$}, the cell $(i,j)$ (where $0 \le i < n, 0 \le j < m$) contains either a \textbf{rock} or \textbf{nothing}. The state of cell $(i,j)$ can be described using the integer $a_{i,j}$:
- If $a_{i,j} = 1$, there is a rock at $(i,j)$.
- If $a_{i,j} = 0$, there is nothing at $(i,j)$.
As a result of aftershocks from the earthquake, the columns follow tectonic plate movements: each column moves cyclically \textbf{upwards} at a velocity of $1$ cell per unit of time. Formally, for some $0 \le i < n, 0 \le j < m$, if $(i,j)$ contains a rock at the moment, it will move from $(i, j)$ to $(i - 1, j)$ (or to $(n - 1, j)$ if $i=0$).
The robot called RT is initially positioned at $(0,0)$. It has to go to $(n-1,m-1)$ to carry out an earthquake rescue operation (to the bottom rightmost cell). The earthquake doesn't change the position of the robot, they only change the position of rocks in the world.
Let RT's current position be $(x,y)$ ($0 \le x < n, 0 \le y < m$), it can perform the following operations:
- Go one cell cyclically upwards, i.e. from $(x,y)$ to $((x+n-1) \bmod n, y)$ using $1$ unit of time.
- Go one cell cyclically downwards, i.e. $(x,y)$ to $((x+1) \bmod n, y)$ using $1$ unit of time.
- Go one cell to the right, i.e. $(x,y)$ to $(x, y+1)$ using $1$ unit of time. (RT may perform this operation only if $y < m-1$.)
\textbf{Note that RT cannot go left using the operations nor can he stay at a position.}
Unfortunately, RT will explode upon colliding with a rock. As such, when RT is at $(x,y)$ and there is a rock at $((x+1) \bmod n, y)$ or $((x+2) \bmod n, y)$, RT cannot move down or it will be hit by the rock.
Similarly, if $y+1 < m$ and there is a rock at $((x+1) \bmod n, y+1)$, RT cannot move right or it will be hit by the rock.
However, it is worth noting that if there is a rock at $(x \bmod n, y+1)$ and $((x+1) \bmod n, y)$, RT can still move right safely.
Find the minimum amount of time RT needs to reach $(n-1,m-1)$ without colliding with any rocks. If it is impossible to do so, output $-1$.
|
By viewing the robot's movement relative to the rocks, robot RT's three moves become the following: Up: stay in place; Down: $(x,y)$ to $((x+2) \bmod n, y)$; Right: $(x,y)$ to $((x+1) \bmod n, y+1)$. As staying stationary is never necessary when we are minimizing the time, we can run a BFS/DP from $(0,0)$ to find the minimum time required to reach every cell of the last column $(x \bmod n, m-1)$. Finally, choose the best among all $n$ cells, adding the time needed to wait for the endpoint to cycle back.
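The change of coordinates can be written down explicitly (a sketch; `relative_move` is a hypothetical helper with `n` rows and moves encoded as 'U'/'D'/'R'):

```python
def relative_move(x, y, move, n):
    # in the rocks' frame the robot drifts down 1 row per step, so:
    if move == 'U':
        return x, y                # 'U' cancels the global upward shift
    if move == 'D':
        return (x + 2) % n, y      # own step down + global drift
    return (x + 1) % n, y + 1      # 'R': one column right + global drift
```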
|
[
"dfs and similar",
"dp",
"graphs",
"shortest paths"
] | 2,100
|
#include<bits/stdc++.h>
#define int long long
using namespace std;
signed main(){
int t;
cin >> t;
while (t--){
int n, m;
cin >> n >> m;
bool a[n][m + 1];
for (int i = 0; i < n; i++) {
for (int j = 1; j <= m; j++) {
cin >> a[i][j];
}
}
int dp[n][m + 1];
for (int i = 0; i < n; i++) {
for (int j = 0; j <= m; j++) {
dp[i][j] = INT_MAX;
}
}
dp[0][1] = 0;
for (int i = 1; i <= m; i++) {
for (int j = 0; j < n; j++) {
if (a[j][i]){
continue;
}
dp[j][i] = min(dp[j][i], dp[(j - 1 + n) % n][i - 1] + 1);
}
for (int j = 0; j < 3 * n; j++) {
if (a[j % n][i] || a[(j - 1 + n) % n][i]){
continue;
}
dp[j % n][i] = min(dp[j % n][i], dp[(j - 2 + n) % n][i] + 1);
}
}
int ans = INT_MAX;
for (int i = 0; i < n; i++) {
if (dp[i][m] == INT_MAX){
continue;
}
int npos = ((n - 1) + dp[i][m]) % n;
if (npos < i) npos += n;
int cur = dp[i][m] + min(npos - i, n - (npos - i));
ans = min(ans, cur);
}
if(ans == INT_MAX){
cout << -1 << endl;
}else{
cout << ans << endl;
}
}
}
|
1933
|
G
|
Turtle Magic: Royal Turtle Shell Pattern
|
Turtle Alice is currently designing a fortune cookie box, and she would like to incorporate the theory of LuoShu into it.
The box can be seen as an $n \times m$ grid ($n, m \ge 5$), where the rows are numbered $1, 2, \dots, n$ and columns are numbered $1, 2, \dots, m$. Each cell can either be \textbf{empty} or have a single fortune cookie of one of the following shapes: \textbf{circle} or \textbf{square}. The cell at the intersection of the $a$-th row and the $b$-th column is denoted as $(a, b)$.
Initially, the entire grid is empty. Then, Alice performs $q$ operations on the fortune cookie box. The $i$-th operation ($1 \le i \le q$) is as follows: specify a currently empty cell $(r_i,c_i)$ and a shape (circle or square), then put a fortune cookie of the specified shape on cell $(r_i,c_i)$. Note that after the $i$-th operation, the cell $(r_i,c_i)$ is no longer empty.
Before all operations \textbf{and} after each of the $q$ operations, Alice wonders what the number of ways to place fortune cookies in \textbf{all remaining empty cells} is, such that the following condition is satisfied:
No three consecutive cells (in horizontal, vertical, and both diagonal directions) contain cookies of the same shape. Formally:
- There does not exist any $(i,j)$ satisfying $1 \le i \le n, 1 \le j \le m-2$, such that there are cookies of the same shape in cells $(i,j), (i,j+1), (i,j+2)$.
- There does not exist any $(i,j)$ satisfying $1 \le i \le n-2, 1 \le j \le m$, such that there are cookies of the same shape in cells $(i,j), (i+1,j), (i+2,j)$.
- There does not exist any $(i,j)$ satisfying $1 \le i \le n-2, 1 \le j \le m-2$, such that there are cookies of the same shape in cells $(i,j), (i+1,j+1), (i+2,j+2)$.
- There does not exist any $(i,j)$ satisfying $1 \le i \le n-2, 1 \le j \le m-2$, such that there are cookies of the same shape in cells $(i,j+2), (i+1,j+1), (i+2,j)$.
You should output all answers modulo $998\,244\,353$. Also note that it is possible that after some operations, the condition is already not satisfied with the already placed candies, in this case you should output $0$.
|
We claim that there are only $8$ configurations that satisfy the condition. The proof is as follows. Firstly, consider a $2 \times 2$ subgrid that does not lie on any of the grid's corners. Claim 1. Within the $2 \times 2$ subgrid, there must be 2 Os and 2 Xs. Proof of Claim 1. Assume the contrary that there are 3 Os or 4 Os. (The case with 3 Xs or 4 Xs is symmetric.) Then, after some logical deduction we will end up with 3 consecutive Xs, as illustrated in the following figure. The condition is not satisfied, hence there is a contradiction and there must be 2 Os and 2 Xs within a $2 \times 2$ subgrid that does not lie on any of the grid's corners. Next, consider a $3 \times 3$ subgrid that does not lie on any of the grid's corners. Claim 2. Within the $3 \times 3$ subgrid, at least one $2 \times 2$ sub-subgrid is one of the four patterns below. We will call the following $2 \times 2$ patterns good patterns. Proof of Claim 2. Assume the contrary that none of the 4 sub-subgrids in the $3 \times 3$ subgrid is one of the four given patterns. This naturally means every two cells which share an edge have different shapes. This gives the following two patterns: The diagonals have the same shape, so the condition is not satisfied. There is a contradiction, hence the claim is true. Next, consider a good $2 \times 2$ subgrid that does not lie on any of the grid's corners. (We have proved its existence in Claim 2.) Finally, it is possible to uniquely extend a good subgrid to the rest of the grid. For example, you can see in the illustration below that after a few unique logical deductions the grey pattern on the top can tessellate itself a few times, and this can be repeated indefinitely. Therefore, the entire grid must be a tessellation of one of the four good patterns. The first two patterns may be shifted one column to the right, and the last two patterns may be shifted one row downwards. So there are a total of $4 \times 2 = 8$ ways to satisfy the condition.
Specifically, the $8$ ways for $n = 5, m =5$ are as follows: As for some implementation details, the following $8$ statements each correspond to one $n \times m$ configuration that satisfies the condition. $a_{i,j}$ represents whether the cell on the $i$-th row and the $j$-th column ($1 \le i \le n, 1 \le j \le m$) has a circle-shaped fortune cookie. $a_{i,j} = 1$ iff $i + \left \lceil \frac{j}{2} \right \rceil$ is odd. $a_{i,j} = 1$ iff $i + \left \lceil \frac{j}{2} \right \rceil$ is even. $a_{i,j} = 1$ iff $i + \left \lfloor \frac{j}{2} \right \rfloor$ is odd. $a_{i,j} = 1$ iff $i + \left \lfloor \frac{j}{2} \right \rfloor$ is even. $a_{i,j} = 1$ iff $j + \left \lceil \frac{i}{2} \right \rceil$ is odd. $a_{i,j} = 1$ iff $j + \left \lceil \frac{i}{2} \right \rceil$ is even. $a_{i,j} = 1$ iff $j + \left \lfloor \frac{i}{2} \right \rfloor$ is odd. $a_{i,j} = 1$ iff $j + \left \lfloor \frac{i}{2} \right \rfloor$ is even. Discussion: Try to solve the problem for general $n, m$. Specifically that includes the cases when $1 \le \min(n, m) \le 4$.
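The eight closed forms can be checked mechanically for $n = m = 5$ (an illustrative verification, not part of the solution; `pattern` and `valid` are hypothetical names):

```python
def valid(g):
    # no three consecutive equal cells horizontally, vertically or diagonally
    n, m = len(g), len(g[0])
    for i in range(n):
        for j in range(m):
            for di, dj in ((0, 1), (1, 0), (1, 1), (1, -1)):
                ii, jj = i + 2 * di, j + 2 * dj
                if 0 <= ii < n and 0 <= jj < m and \
                        g[i][j] == g[i + di][j + dj] == g[ii][jj]:
                    return False
    return True

def pattern(kind, n, m):
    # the eight closed forms listed above, with 1-based i and j
    def cell(i, j):
        forms = [
            (i + (j + 1) // 2) % 2 == 1,  # i + ceil(j/2) odd
            (i + (j + 1) // 2) % 2 == 0,
            (i + j // 2) % 2 == 1,        # i + floor(j/2) odd
            (i + j // 2) % 2 == 0,
            (j + (i + 1) // 2) % 2 == 1,  # j + ceil(i/2) odd
            (j + (i + 1) // 2) % 2 == 0,
            (j + i // 2) % 2 == 1,        # j + floor(i/2) odd
            (j + i // 2) % 2 == 0,
        ]
        return forms[kind]
    return [[cell(i, j) for j in range(1, m + 1)] for i in range(1, n + 1)]
```

All eight grids pass `valid` and are pairwise distinct, matching the count in the proof.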
|
[
"bitmasks",
"brute force",
"combinatorics",
"constructive algorithms",
"dfs and similar",
"math"
] | 2,300
|
#include "bits/stdc++.h"
using namespace std;
void solve(int tc) {
int n, m, q;
cin >> n >> m >> q;
bool b[8] = {1, 1, 1, 1, 1, 1, 1, 1};
int ans = 8;
cout << ans << '\n';
while(q--) {
int r, c;
cin >> r >> c;
string shape;
cin >> shape;
if((r + (c+1) / 2) % 2) {
b[0] &= (shape == "circle");
b[1] &= (shape == "square");
}
else {
b[0] &= (shape == "square");
b[1] &= (shape == "circle");
}
if((r + c / 2) % 2) {
b[2] &= (shape == "circle");
b[3] &= (shape == "square");
}
else {
b[2] &= (shape == "square");
b[3] &= (shape == "circle");
}
if((c + (r+1) / 2) % 2) {
b[4] &= (shape == "circle");
b[5] &= (shape == "square");
}
else {
b[4] &= (shape == "square");
b[5] &= (shape == "circle");
}
if((c + r / 2) % 2) {
b[6] &= (shape == "circle");
b[7] &= (shape == "square");
}
else {
b[6] &= (shape == "square");
b[7] &= (shape == "circle");
}
ans = 0;
for(int i = 0; i < 8; i++) ans += b[i];
cout << ans << '\n';
}
}
int main() {
int t;
cin >> t;
for(int i = 1; i <= t; i++) solve(i);
}
|
1934
|
A
|
Too Min Too Max
|
Given an array $a$ of $n$ elements, find the maximum value of the expression:
$$|a_i - a_j| + |a_j - a_k| + |a_k - a_l| + |a_l - a_i|$$
where $i$, $j$, $k$, and $l$ are four \textbf{distinct} indices of the array $a$, with $1 \le i, j, k, l \le n$.
Here $|x|$ denotes the absolute value of $x$.
|
What would the answer be if there were only $4$ elements in the array? Suppose there are only $4$ elements in the array; let them be $a \leq b \leq c \leq d$. Then the answer is the maximum over the following three cases: $|a-b|+|b-c|+|c-d|+|d-a| = 2*d - 2*a$ $|a-b|+|b-d|+|d-c|+|c-a| = 2*d-2*a$ $|a-c|+|c-b|+|b-d|+|d-a| = 2*(d+c)-2*(a+b)$ Clearly $2*(d+c)-2*(a+b)$ is the largest. So, to maximize it, we set $d$ and $c$ as large as possible and $a$ and $b$ as small as possible, i.e. $d=a_n$, $c=a_{n-1}$, $b=a_{2}$ and $a=a_{1}$, where $a_i$ denotes the $i$-th element of the given array in sorted order.
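The closed form can be sanity-checked against a brute force over all ordered choices of four distinct indices (the helper names below are ours, not from the solution):

```python
from itertools import permutations

def max_expr_formula(a):
    # 2*(d + c) - 2*(a + b), taking the two largest and two smallest elements
    s = sorted(a)
    return 2 * (s[-1] + s[-2] - s[0] - s[1])

def max_expr_brute(a):
    # try every ordered choice (i, j, k, l) of four distinct indices
    best = 0
    for i, j, k, l in permutations(range(len(a)), 4):
        best = max(best, abs(a[i] - a[j]) + abs(a[j] - a[k])
                         + abs(a[k] - a[l]) + abs(a[l] - a[i]))
    return best

for a in [[1, 1, 1, 1], [1, 2, 3, 4, 5], [5, -3, 2, 0, 7, 7]]:
    assert max_expr_formula(a) == max_expr_brute(a)
```

The brute force is $O(n^4)$ and only serves to validate the formula on small inputs.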
|
[
"greedy",
"math"
] | 800
|
t = int(input())
for i in range(t):
n = int(input())
a = list(map(int, input().split()))
a = sorted(a)
ans = 2 * (a[n - 1] - a[0] + a[n - 2] - a[1])
print(ans)
|
1934
|
B
|
Yet Another Coin Problem
|
You have $5$ different types of coins, each with a value equal to one of the first $5$ triangular numbers: $1$, $3$, $6$, $10$, and $15$. These coin types are available in abundance. Your goal is to find the minimum number of these coins required such that their total value sums up to exactly $n$.
We can show that the answer always exists.
|
At most how many coins of value $1$, $3$, $6$, and $10$ are required? Fact: You will never need more than $2$ ones, $1$ three, $4$ sixes and $2$ tens. Reason: For $1$: if you used $k > 2$ ones, then you could have used one $3$ and $k - 3$ ones. For $3$: if you used $k > 1$ threes, then you could have used one $6$ and $k - 2$ threes. For $6$: if you used $k > 4$ sixes, then you could have used two $15$'s and $k - 5$ sixes. For $10$: if you used $k > 2$ tens, then you could have used two $15$'s and $k - 3$ tens. Since the bounds on these counts are small, we can brute-force over them. When will the greedy logic of choosing the higher-valued coin first work? Fact $1$: If only coins of value $1$, $3$, $6$ and $15$ were present, the greedy logic of selecting the higher value first would work. Reason: We use coins of value one at most $2$ times, coins of value three at most $1$ time, and coins of value six at most $2$ times (if six were used $3$ times, it would be better to use the two coins $15 + 3$). But we cannot use the coin of value $3$ and both coins of value $6$ simultaneously, because we would prefer just using $15$. It means that these coins may sum up to at most $1 + 1 + 3 + 6 = 11$ or $1 + 1 + 6 + 6 = 14$. So we may use the value $15$ greedily, because the remaining part is less than $15$. When we are left with only the values $1$, $3$, and $6$, solving greedily is obviously correct, because each coin value divides the next. Fact $2$: We don't need more than $2$ ten coins, since it is better to use $2$ fifteen coins instead of $3$ ten coins. Using the above two facts it can be shown that the answer uses $k < 3$ ten coins; therefore, answer $= \min_k(\text{answer}(n-10k)+k)$, where $\text{answer}$ assumes only $1$, $3$, $6$ and $15$ coins are present.
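The bounded counts give a tiny search space, so the whole answer can also be found by brute-forcing the counts of $1$s, $3$s, $6$s and $10$s and paying the remainder with $15$-coins; a sketch with our own helper name:

```python
def min_coins(n):
    # counts are bounded as argued above: <=2 ones, <=1 three, <=4 sixes, <=2 tens;
    # whatever remains must be a nonnegative multiple of 15
    best = float('inf')
    for ones in range(3):
        for threes in range(2):
            for sixes in range(5):
                for tens in range(3):
                    rest = n - ones - 3 * threes - 6 * sixes - 10 * tens
                    if rest >= 0 and rest % 15 == 0:
                        best = min(best, ones + threes + sixes + tens + rest // 15)
    return best

assert min_coins(14) == 3   # 10 + 3 + 1
assert min_coins(15) == 1
```

This checks only $3 \cdot 2 \cdot 5 \cdot 3 = 90$ combinations per query, matching the bound-based argument rather than the greedy used in the reference solution.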
|
[
"brute force",
"dp",
"greedy",
"math"
] | 1,200
|
#include<bits/stdc++.h>
using namespace std;
int getAns(int n){
int ans=0;
ans+=n/15;
n%=15;
ans+=n/6;
n%=6;
ans+=n/3;
n%=3;
ans+=n;
return ans;
}
int main(){
ios::sync_with_stdio(false), cin.tie(nullptr);
int testcases;
cin>>testcases;
for(int i=1;i<=testcases;i++){
int n;cin>>n;
if(n<10){
cout<<getAns(n)<<endl;
}else if(n<20){
cout<<min(getAns(n),getAns(n-10)+1)<<endl;
}else{
cout<<min({getAns(n),getAns(n-10)+1,getAns(n-20)+2})<<endl;
}
}
}
|
1934
|
C
|
Find a Mine
|
\textbf{This is an interactive problem.}
You are given a grid with $n$ rows and $m$ columns. The coordinates $(x, y)$ represent the cell on the grid, where $x$ ($1 \leq x \leq n$) is the row number counting from the top and $y$ ($1 \leq y \leq m$) is the column number counting from the left. It is guaranteed that there are exactly $2$ mines in the grid at \textbf{distinct} cells, denoted as $(x_1, y_1)$ and $(x_2, y_2)$. You are allowed to make no more than $4$ queries to the interactor, and after these queries, you need to provide the location of \textbf{one of the mines}.
In each query, you can choose any grid cell $(x, y)$, and in return, you will receive the minimum Manhattan distance from both the mines to the chosen cell, i.e., you will receive the value $\min(|x-x_1|+|y-y_1|, |x-x_2|+|y-y_2|)$.
Your task is to determine the location of one of the mines after making the queries.
|
After querying "$?$ $1$ $1$", what do we know about the location of the mines? If the answer is $a_1$, the location of one of the mines is $(x, y)$ such that $x+y = a_1+2$. First, we query "$?$ $1$ $1$"; if the answer is $a_1$, we know that at least one of the mines is on the line $x+y=a_1+2$. Now we make two more queries at both ends of this line, where it touches the grid. After these two queries, we get two possible locations of the mine. We query one of these positions: if we receive $0$ as the answer, this location is a mine; otherwise, the other location is the answer. Reason: one of these two locations contains a mine. The other mine can spoil the result of at most one of the two end queries: if it were closer to both of the ends, then going from one end to the other through that mine would be shorter than going along the diagonal, which is impossible. What if we made the second query as "$?$ $n$ $m$"? If the answer is $a_2$, the location of one of the mines is $(x, y)$ such that $x+y = n+m-a_2$. Using Hint $1$ and Hint $2$, if we get the same line equation, i.e. $n+m-a_2=a_1+2$, we can just query one of the endpoints of this line and get the answer. Otherwise, we can query "$?$ $1$ $m$" and we will get one more line, perpendicular to the earlier two. One of the mines has to be on this line, therefore there will be a mine either on the intersection of the lines we got from query $1$ and query $3$, or on the intersection of the lines we got from query $2$ and query $3$. We make our next query on one of these intersections: if the answer is $0$, then this point is the answer; otherwise, the other point is the answer.
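The corner queries yield $x+y$ for some mine, and the query at $(1, m)$ yields $x-y$; recovering a candidate cell is then just solving a $2 \times 2$ linear system and clamping into the grid, as the solution code does. A small arithmetic sketch (the helper name is ours):

```python
def candidate_cell(s, d, n, m):
    # s = x + y (from a corner query), d = x - y (from the (1, m) query);
    # solve the two linear equations and clamp the result into the grid
    x = (s + d) // 2
    y = (s - d) // 2
    return (max(1, min(x, n)), max(1, min(y, m)))

# a mine at (2, 3) in a 4x4 grid: query(1, 1) = 3, so s = 3 + 2 = 5;
# query(1, 4) = 1 + 1 = 2, so d = 2 + 1 - 4 = -1
assert candidate_cell(5, -1, 4, 4) == (2, 3)
```

Clamping matters because the line deduced from one query may be generated by a different mine than the line from another query, so the raw intersection can fall outside the grid.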
|
[
"binary search",
"constructive algorithms",
"geometry",
"greedy",
"interactive",
"math"
] | 1,700
|
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
ll query(ll x, ll y) {
cout << "? " << x << ' ' << y << endl;
ll d;
cin >> d;
return d;
}
int main() {
ios::sync_with_stdio(false), cin.tie(nullptr);
ll t;
cin >> t;
while (t--) {
ll n, m;
cin >> n >> m;
ll sum1 = query(1, 1) + 2;
ll sum2 = n + m - query(n, m);
ll dif1 = query(1, m) + 1 - m;
auto simplify = [&](ll num, ll lim)->ll{
return max(min(num, lim), 1LL);
};
ll x1 = simplify((sum1+dif1)/2, n), y1 = simplify((sum1-dif1)/2, m);
ll x2 = simplify((sum2+dif1)/2, n), y2 = simplify((sum2-dif1)/2, m);
if (query(x1, y1)) cout << "! " << x2 << ' ' << y2 << endl;
else cout << "! " << x1 << ' ' << y1 << endl;
}
return 0;
}
|
1934
|
D1
|
XOR Break --- Solo Version
|
\textbf{This is the solo version of the problem. Note that the solution of this problem may or may not share ideas with the solution of the game version. You can solve and get points for both versions independently.}
\textbf{You can make hacks only if both versions of the problem are solved.}
Given an integer variable $x$ with the initial value of $n$. A single break operation consists of the following steps:
- Choose a value $y$ such that $0 \lt y \lt x$ and $0 \lt (x \oplus y) \lt x$.
- Update $x$ by either setting $x = y$ or setting $x = x \oplus y$.
Determine whether it is possible to transform $x$ into $m$ using a maximum of $63$ break operations. If it is, provide the sequence of operations required to achieve $x = m$.You don't need to minimize the number of operations.
Here $\oplus$ denotes the bitwise XOR operation.
|
To generate any possible $m$, it takes at most two operations. Let's determine the achievable values of $m$ for a given $n$. If $n$ is a power of $2$, then it cannot be broken down further, and no $m < n$ is achievable. Otherwise, $n$ has at least two set bits; let's denote the position of the most significant bit by $a$ and the position of the second most significant bit by $b$. Fact 1: All $m$ values less than $n$ are achievable if their most significant bit does not lie between $b+1$ and $a-1$. Reason: For instance, $1001????$ can be decomposed into $1000????$ and $0001????$, or into $1001????$ and $0000????$. In either case, we can never turn on a bit strictly between positions $b$ and $a$. Using the above fact, we can break the problem into two cases: Case 1: The most significant bit of $m$ is at position $\leq b$. Perform the first operation as $2^{b+1} - 1$ and $n \oplus (2^{b+1} - 1)$ ($10001????$ -> $10000????$ and $000011111$ in binary form). Then, any submask of $2^{b+1} - 1$ can be created in the next operation. Case 2: The most significant bit of $m$ is at position $a$. Then $m$ can be obtained in one operation as $m$ and $m \oplus n$, since $m < n$ as given in the problem, and $n \oplus m$ has its most significant bit below $a$.
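The case analysis collapses into the simple predicate used by the solution code; for small $n$ it can be cross-checked against a brute force over all break operations (helper names are ours):

```python
def reachable_by_formula(n, m):
    # m < n is reachable iff n has a second set bit and either m shares n's
    # top bit or m lies strictly below twice the second-highest bit of n
    if m >= n:
        return False
    bits = [b for b in range(63) if n >> b & 1]
    if len(bits) < 2:
        return False
    hi, sec = 1 << bits[-1], 1 << bits[-2]
    return bool(m & hi) or m < 2 * sec

def reachable_by_bfs(n):
    # brute force: from x we may move to y or x ^ y whenever
    # 0 < y < x and 0 < (x ^ y) < x
    seen, stack = {n}, [n]
    while stack:
        x = stack.pop()
        for y in range(1, x):
            if 0 < (x ^ y) < x:
                for nxt in (y, x ^ y):
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
    return seen

for n in range(1, 40):
    r = reachable_by_bfs(n)
    assert all(reachable_by_formula(n, m) == (m in r) for m in range(1, n))
```

For example, from $n = 11$ ($1011_2$) the BFS reaches exactly $\{1, 2, 3, 8, 9, 10\}$: the values $4 \ldots 7$ have their top bit strictly between the two highest set bits of $n$.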
|
[
"bitmasks",
"constructive algorithms",
"greedy"
] | 2,100
|
#include<bits/stdc++.h>
using namespace std;
using ll = long long;
int main() {
ios::sync_with_stdio(false), cin.tie(nullptr);
ll t;
cin >> t;
while (t--) {
ll n, m;
cin >> n >> m;
ll hi = 0, sec_hi = 0;
for (ll p = (1LL<<62); p > 0; p >>= 1) {
if (p & n) {
if (!hi) hi = p;
else if (!sec_hi) sec_hi = p;
}
}
bool flag = (sec_hi && ((m & hi) || (m < sec_hi*2)));
if (!flag) {
cout << -1 << '\n';
continue;
}
vector<ll> ans = {n, m};
if (!(m & hi) && !(m & sec_hi)) ans = {n, m^sec_hi, m};
cout << (ll)ans.size()-1 << '\n';
for (auto &x: ans) cout << x << ' ';
cout << '\n';
}
return 0;
}
|
1934
|
D2
|
XOR Break --- Game Version
|
\textbf{This is an interactive problem.}
\textbf{This is the game version of the problem. Note that the solution of this problem may or may not share ideas with the solution of the solo version. You can solve and get points for both versions independently.}
Alice and Bob are playing a game. The game starts with a positive integer $n$, with players taking turns. On each turn of the game, the following sequence of events takes place:
- The player having the integer $p$ breaks it into two integers $p_{1}$ and $p_{2}$, where $0 \lt p_{1} \lt p$, $0 \lt p_{2} \lt p$ and $p_{1} \oplus p_{2} = p$.
- If no such $p_{1}$, $p_{2}$ exist, the player loses.
- Otherwise, the opponent does either select the integer $p_{1}$ or $p_{2}$.
- The game continues with the selected integer. The opponent will try to break it.
As Alice, your goal is to win. You can execute a maximum of $63$ break operations. You have the choice to play first or second. The system will act for Bob.
Here $\oplus$ denotes the bitwise XOR operation.
|
If both $p_1$ and $p_2$ are powers of $2$, it is a losing state, since you cannot perform a break operation on either of them. If either $p_1$ or $p_2$ has a bit count of $2$, then this is a winning state: you can force your opponent into the state described in Hint $1$ using the number which has bit count $2$. Fact 1: If $p_1$ has an odd bit count, then it can only be broken into two numbers such that one has an odd bit count and the other has an even bit count. Fact 2: If either $p_1$ or $p_2$ has an even bit count, then this is a winning state. Reason: If either $p_1$ or $p_2$ has an even bit count, without loss of generality assume it is $p_1$. Then break it into $2^{msb \text{ of } p_1}$ and $p_1 \oplus 2^{msb \text{ of } p_1}$, where $msb$ is the most significant bit. If the opponent chooses $2^{msb \text{ of } p_1}$, they instantly lose (by Hint $1$), so they are forced to choose the other number, which has an odd bit count. From Fact $1$, we can conclude that on the next turn the same situation recurs for the current player. Because we eliminate the most significant bit with every break, this game will go on for at most $60$ turns for the player who reached this position first. At some point, the player who is in this state will have a number with two set bits; hence, by Hint $2$, this player will win. So, as Alice, you will start first if $n$ has an even number of set bits and start second if it has an odd number of set bits, and proceed using the strategy discussed above.
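Fact 1 is pure popcount parity: since $p_1 \oplus p_2 = p$, the sum of the popcounts of $p_1$ and $p_2$ exceeds the popcount of $p$ by twice the popcount of $p_1 \mathbin{\&} p_2$, so parity is preserved. A quick exhaustive check over small $p$:

```python
def popcount(x):
    return bin(x).count('1')

# every valid break p = p1 ^ p2 preserves popcount parity:
# popcount(p1) + popcount(p2) = popcount(p) + 2 * popcount(p1 & p2)
for p in range(1, 200):
    for p1 in range(1, p):
        p2 = p ^ p1
        if 0 < p2 < p:
            assert (popcount(p1) + popcount(p2)) % 2 == popcount(p) % 2
```

In particular, an odd-popcount number always breaks into one odd-popcount and one even-popcount part, which is exactly the invariant the winning strategy exploits.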
|
[
"bitmasks",
"games",
"greedy",
"interactive"
] | 2,400
|
#include<bits/stdc++.h>
using namespace std;
int main(){
ios::sync_with_stdio(false);
cin.tie(nullptr);
int testcases;cin>>testcases;
for(int testcase=1;testcase<=testcases;testcase++){
long long n;cin>>n;
long long curr=n;
int parity=0;
if(__builtin_popcountll(n)%2){
cout<<"secoNd"<<endl;
}else{
cout<<"firSt"<<endl;
parity=1;
}
int turn=0;
while(1){
if(turn%2==parity){
long long p1,p2;cin>>p1>>p2;
if (p1 == 0 && p2 == 0)
break;
if(__builtin_popcountll(p1)%2==0){
curr=p1;
}else{
curr=p2;
}
}else{
int pos=0;
for(int i=0;(1ll<<i)<curr;i++){
if(curr&(1ll<<i)){
pos=i;
}
}
long long p1=(1ll<<pos);
long long p2=curr^p1;
cout<<p1<<" "<<p2<<endl;
curr = p1;
}
turn++;
}
}
}
|
1934
|
E
|
Weird LCM Operations
|
Given an integer $n$, you construct an array $a$ of $n$ integers, where $a_i = i$ for all integers $i$ in the range $[1, n]$. An operation on this array is defined as follows:
- Select three distinct indices $i$, $j$, and $k$ from the array, and let $x = a_i$, $y = a_j$, and $z = a_k$.
- Update the array as follows: $a_i = \operatorname{lcm}(y, z)$, $a_j = \operatorname{lcm}(x, z)$, and $a_k = \operatorname{lcm}(x, y)$, where $\operatorname{lcm}$ represents the least common multiple.
Your task is to provide a possible sequence of operations, containing at most $\lfloor \frac{n}{6} \rfloor + 5$ operations such that after executing these operations, if you create a set containing the greatest common divisors (GCDs) of all subsequences with a \textbf{size greater than $1$}, then all numbers from $1$ to $n$ should be present in this set.After all the operations $a_i \le 10^{18}$ should hold for all $1 \le i \le n$.
We can show that an answer always exists.
|
Fact 1: If the pairwise GCDs of $(x, y, z)$ are all equal to their common GCD (this means that $(x, y, z) = (g * X, g * Y, g * Z)$ where $(X, Y, Z)$ are pairwise coprime), then performing an operation on them (which gives $(g * Y * Z, g * X * Z, g * X * Y)$) and looking at the subsequences of size EXACTLY 2, we find all three GCDs: $x$, $y$, $z$. Let's call such a tuple NICE. Result: If we can split all values in the array into independent NICE tuples, then we can just perform an operation on each of them and the problem is solved. Fact 2: We don't touch any value $\leq \frac{n}{2}$. If there is $x \leq \frac{n}{2}$, then $2 x \leq n$. If we don't touch $x$, then we will always have another value that is divisible by $x$ (it is easy to see that performing an operation on a multiple of $x$ leaves us with another multiple of $x$), so we will always have GCD equal to $x$ by taking a subsequence $(x, x A)$. Fact 3: A sequence of consecutive integers $(x, x+1, x+2, \ldots, x+11)$ can be partitioned into $4$ disjoint sets of size $3$, each forming a NICE tuple, if $(x+11) \% 4$ equals $2$ or $1$. For $(x+11) \% 4 = 2$: The sets $(x, x+1, x+2)$, $(x+4, x+5, x+6)$ and $(x+8, x+9, x+10)$ are NICE because the first and third terms are always odd and the second term is always even. The set $(x+3, x+7, x+11)$ is NICE because it has the form $(2a, 2(a+2), 2(a+4))$ with $a$ odd, so all pairwise GCDs equal the common GCD $2$. For $(x+11) \% 4 = 1$: The sets $(x, x+4, x+8)$, $(x+1, x+2, x+3)$, $(x+5, x+6, x+7)$ and $(x+9, x+10, x+11)$ are NICE; the same logic as in the $(x+11) \% 4 = 2$ case applies. If $n \% 4 = 3$ we can do one operation as $(1, 2, n)$, and if $n \% 4 = 0$ we can do one operation as $(1, n-1, n)$. Let's group the remaining elements into groups of size $12$, starting from the end and continuing until we reach $\frac{n}{2}$. Eventually, we can count that we used no more than $\lfloor \frac{n}{6} \rfloor + 5$ operations.
Solutions for $n \leq 13$ should be found manually.
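Fact 1 is easy to check numerically: operating on a NICE tuple recovers the original three values as the pairwise GCDs of the results. A sketch (helper names are ours; `math.lcm` requires Python 3.9+):

```python
from math import gcd, lcm  # lcm is available from Python 3.9 onward

def operate(x, y, z):
    # the problem's operation on a tuple of values
    return (lcm(y, z), lcm(x, z), lcm(x, y))

def is_nice(x, y, z):
    # all pairwise GCDs equal the common GCD
    g = gcd(gcd(x, y), z)
    return gcd(x, y) == g and gcd(y, z) == g and gcd(x, z) == g

# e.g. three consecutive integers starting at an odd number form a NICE tuple
x, y, z = 21, 22, 23
assert is_nice(x, y, z)
a, b, c = operate(x, y, z)
assert sorted([gcd(a, b), gcd(b, c), gcd(a, c)]) == [x, y, z]
```

Here $(21, 22, 23)$ maps to $(506, 483, 462)$, whose pairwise GCDs are $23$, $21$ and $22$, so every original value appears as the GCD of some size-2 subsequence.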
|
[
"brute force",
"constructive algorithms",
"number theory"
] | 3,000
|
#include<bits/stdc++.h>
using namespace std;
vector<vector<vector<int>>> pans(14);
int main(){
ios::sync_with_stdio(false), cin.tie(nullptr);
int t;cin>>t;
pans[3]={{1,2,3}};
pans[4]={{1,3,4}};
pans[5]={{3,4,5}};
pans[6]={{1,3,5},{2,4,6}};
pans[7]={{2,4,6},{3,5,7}};
pans[8]={{2,6,8},{3,5,7}};
pans[9]={{1,3,5},{2,4,6},{7,8,9}};
pans[10]={{3,4,5},{2,6,8},{7,9,10}};
pans[11]={{2,6,8},{3,5,7},{9,10,11}};
pans[12]={{1,11,12},{6,8,10},{5,7,9}};
pans[13]={{1,12,13},{7,9,11},{6,8,10}};
for(int tt=0;tt<t;tt++){
int n;cin>>n;
if(n<=2){
cout<<-1<<endl;
}else if(n<14){
cout<<pans[n].size()<<endl;
for(auto w: pans[n]){
cout<<w[0]<<" "<<w[1]<<" "<<w[2]<<endl;
}
}else{
vector<int> v(n);
for(int i=0;i<n;i++){
v[i]=i+1;
}
vector<vector<int>> ans;
while (2*v.size()>n){
if(v.size()%4==2){
vector<int> buf;
for(int i=0;i<12;i++){
buf.push_back(v.back());
v.pop_back();
}
reverse(buf.begin(),buf.end());
ans.push_back({buf[0],buf[1],buf[2]});
ans.push_back({buf[4],buf[5],buf[6]});
ans.push_back({buf[8],buf[9],buf[10]});
ans.push_back({buf[3],buf[7],buf[11]});
}else if(v.size()%4==1){
vector<int> buf;
for(int i=0;i<12;i++){
buf.push_back(v.back());
v.pop_back();
}
reverse(buf.begin(),buf.end());
ans.push_back({buf[1],buf[2],buf[3]});
ans.push_back({buf[5],buf[6],buf[7]});
ans.push_back({buf[9],buf[10],buf[11]});
ans.push_back({buf[0],buf[4],buf[8]});
}else if(v.size()%4==3){
vector<int> buf;
buf.push_back(v.back());
v.pop_back();
buf.push_back(2);
buf.push_back(1);
reverse(buf.begin(),buf.end());
ans.push_back({buf[0],buf[1],buf[2]});
}else{
vector<int> buf;
buf.push_back(v.back());
v.pop_back();
buf.push_back(v.back());
v.pop_back();
buf.push_back(1);
reverse(buf.begin(),buf.end());
ans.push_back({buf[0],buf[1],buf[2]});
}
}
cout<<ans.size()<<endl;
for(auto w: ans){
cout<<w[0]<<" "<<w[1]<<" "<<w[2]<<endl;
}
}
}
}
|
1935
|
A
|
Entertainment in MAC
|
Congratulations, you have been accepted to the Master's Assistance Center! However, you were extremely bored in class and got tired of doing nothing, so you came up with a game for yourself.
You are given a string $s$ and an \textbf{even} integer $n$. There are two types of operations that you can apply to it:
- Add the reversed string $s$ to the end of the string $s$ (for example, if $s = $ cpm, then after applying the operation $s = $ cpmmpc).
- Reverse the current string $s$ (for example, if $s = $ cpm, then after applying the operation $s = $ mpc).
It is required to determine the lexicographically smallest$^{\dagger}$ string that can be obtained after applying \textbf{exactly} $n$ operations. Note that you can apply operations of different types in any order, but you must apply exactly $n$ operations in total.
$^{\dagger}$A string $a$ is lexicographically smaller than a string $b$ if and only if one of the following holds:
- $a$ is a prefix of $b$, but $a \ne b$;
- in the first position where $a$ and $b$ differ, the string $a$ has a letter that appears earlier in the alphabet than the corresponding letter in $b$.
|
The answer will always have either the prefix $s$ or the prefix $t$, the reversed string $s$. Adding the string to the end is required no more than once. Let $t$ be the reversed string $s$. Notice that it is advantageous for us to use operation 1 (adding the reversed string at the end) no more than once: indeed, having obtained some string, we will simply spend the remaining operations on flipping the string. Thus, we will get the obtained string or its reverse, depending on the parity of the number of remaining operations. It is easy to see that the answer will always have either the prefix $s$ or the prefix $t$. Then, we find the two lexicographically minimal strings with the prefix $s$ and with the prefix $t$. These are the strings: $s$ (flip the string $n$ times; since $n$ is even, every $2$ operations return the string to its original form) and $t + s$ (first flip the string, add the reversed string to the end, then flip the string $n - 2$ times). The answer is the lexicographically smaller of $s$ and $t + s$.
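The claim that the answer is $\min(s, t + s)$ can be verified by brute force over all operation sequences for small even $n$ (the helper name is ours):

```python
from itertools import product

def brute(s, n):
    # try every sequence of n operations; op 1 appends the reverse, op 2 reverses
    best = None
    for ops in product([1, 2], repeat=n):
        cur = s
        for op in ops:
            cur = cur + cur[::-1] if op == 1 else cur[::-1]
        if best is None or cur < best:
            best = cur
    return best

for s in ["cpm", "ba", "abacaba"]:
    for n in (2, 4):
        assert brute(s, n) == min(s, s[::-1] + s)
```

For $s = \mathtt{ba}$ the brute force confirms the answer $\mathtt{abba}$: reverse to $\mathtt{ab}$, append its reverse, then burn the remaining operations on flips of the resulting palindrome.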
|
[
"constructive algorithms",
"strings"
] | 800
|
t = int(input())
for _ in range(t):
n = int(input())
s = input()
t = s[::-1]
print(min(s, t + s))
|
1935
|
B
|
Informatics in MAC
|
In the Master's Assistance Center, Nyam-Nyam was given a homework assignment in informatics.
There is an array $a$ of length $n$, and you want to divide it into $k > 1$ subsegments$^{\dagger}$ in such a way that the $\operatorname{MEX} ^{\ddagger}$ on each subsegment is equal to the same integer.
Help Nyam-Nyam find any suitable division, or determine that it does not exist.
$^{\dagger}$A division of an array into $k$ subsegments is defined as $k$ pairs of integers $(l_1, r_1), (l_2, r_2), \ldots, (l_k, r_k)$ such that $l_i \le r_i$ and for each $1 \le j \le k - 1$, $l_{j + 1} = r_j + 1$, and also $l_1 = 1$ and $r_k = n$. These pairs represent the subsegments themselves.
$^{\ddagger}\operatorname{MEX}$ of an array is the smallest non-negative integer that does not belong to the array.
For example:
- $\operatorname{MEX}$ of the array $[2, 2, 1]$ is $0$, because $0$ does not belong to the array.
- $\operatorname{MEX}$ of the array $[3, 1, 0, 1]$ is $2$, because $0$ and $1$ belong to the array, but $2$ does not.
- $\operatorname{MEX}$ of the array $[0, 3, 1, 2]$ is $4$, because $0$, $1$, $2$, and $3$ belong to the array, but $4$ does not.
|
What is the minimum $k$ that can be in a division? Suppose $\operatorname{MEX}(x, y) = \operatorname{MEX}(y + 1, z)$; what can be said about $\operatorname{MEX}(x, z)$? Suppose we correctly divided the array into $k > 2$ segments: $(1, r_1), (l_2, r_2), \ldots, (l_k, r_k)$. Then note that we can merge the first two subsegments, as the numbers from $0$ to $mex - 1$ are all present in these two segments while the number $mex$ does not occur in either of them. Therefore, if there is a division into $k > 2$ segments, then there is also a division into $k - 1$ segments. Hence, it is sufficient to check whether there is a division of the array into $k = 2$ segments, which can be done in $O(n)$ by precalculating $\operatorname{MEX}$ on the prefixes and suffixes; then we need to find some $i$ for which $\operatorname{MEX}(1, i) = \operatorname{MEX}(i + 1, n)$.
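The merging argument implies that checking $k = 2$ suffices; for small random arrays this can be cross-checked against a brute force over every possible division (helper names are ours):

```python
import random
from itertools import combinations

def mex(arr):
    m, s = 0, set(arr)
    while m in s:
        m += 1
    return m

def two_split_exists(a):
    n = len(a)
    return any(mex(a[:i]) == mex(a[i:]) for i in range(1, n))

def any_split_exists(a):
    # brute force over every set of cut points giving k >= 2 parts
    n = len(a)
    for r in range(1, n):
        for cuts in combinations(range(1, n), r):
            bounds = [0, *cuts, n]
            parts = [a[bounds[i]:bounds[i + 1]] for i in range(len(bounds) - 1)]
            if len({mex(p) for p in parts}) == 1:
                return True
    return False

random.seed(1)
for _ in range(200):
    a = [random.randrange(4) for _ in range(random.randint(2, 8))]
    assert two_split_exists(a) == any_split_exists(a)
```

The brute force is exponential in $n$ and only serves to validate the reduction on tiny arrays.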
|
[
"constructive algorithms"
] | 1,200
|
def solve():
n = int(input())
a = list(map(int, input().split()))
cur_mex = 0
cur_have = [0] * (n + 1)
for el in a:
cur_have[el] += 1
while cur_have[cur_mex]:
cur_mex += 1
another_mex = 0
another_have = [0] * (n + 1)
for i in range(n):
cur_have[a[i]] -= 1
if cur_have[a[i]] == 0 and cur_mex > a[i]:
cur_mex = a[i]
another_have[a[i]] += 1
while another_have[another_mex]:
another_mex += 1
if cur_mex == another_mex:
print(2)
print("1 " + str(i + 1))
print(str(i + 2) + " " + str(n))
return
print(-1)
t = int(input())
for _ in range(t):
solve()
|
1935
|
C
|
Messenger in MAC
|
In the new messenger for the students of the Master's Assistance Center, Keftemerum, an update is planned, in which developers want to optimize the set of messages shown to the user. There are a total of $n$ messages. Each message is characterized by two integers $a_i$ and $b_i$. The time spent reading the set of messages with numbers $p_1, p_2, \ldots, p_k$ ($1 \le p_i \le n$, all $p_i$ are \textbf{distinct}) is calculated by the formula:
$$\Large \sum_{i=1}^{k} a_{p_i} + \sum_{i=1}^{k - 1} |b_{p_i} - b_{p_{i+1}}|$$
Note that the time to read a set of messages consisting of \textbf{one} message with number $p_1$ is equal to $a_{p_1}$. Also, the time to read an empty set of messages is considered to be $0$.
The user can determine the time $l$ that he is willing to spend in the messenger. The messenger must inform the user of the maximum possible size of the set of messages, the reading time of which does not exceed $l$. Note that the maximum size of the set of messages can be equal to $0$.
The developers of the popular messenger failed to implement this function, so they asked you to solve this problem.
|
Try to find the answer to the problem by hand. How can changing the order of the messages reduce the time? Let the order in the answer be $p_1, p_2, \ldots, p_k$. It is always advantageous for the answer to satisfy $b_{p_1} \leq b_{p_2} \leq \ldots \leq b_{p_k}$. Greedy? Let the order in the answer be $p_1, p_2, \ldots, p_k$. Note that in the optimal answer, it will hold that $b_{p_1} \leq b_{p_2} \leq \ldots \leq b_{p_k}$. It is not hard to prove that such an order minimizes the sum of the absolute differences of adjacent elements in the array $b$. Then this sum turns into $b_{p_k} - b_{p_1}$, that is, the difference between the highest and lowest values of $b$ in the set of messages. Let's sort the pairs $(a_i, b_i)$ in ascending order of $b_i$. Fix the minimum value $b_l$ and the maximum value $b_r$ in the set of messages. Note that the sum of the absolute differences of $b$ in the answer will not change if we take any values $b_l \leq b_i \leq b_r$. Thus, the task reduces to finding the maximum number of messages so that $\sum a \leq L - (b_r - b_l)$. Iterate over $b_l$ and $b_r$; apply a greedy algorithm: sort all values of $a_i$ ($l \leq i \leq r$) and keep adding values until the time exceeds $L$. This solution works in $O(n^3\log{}n)$, which is too slow. To speed up the solution, while iterating over $b_r$, maintain a data structure that allows adding an element, removing the maximum, and getting the maximum (in C++, there are $\textit{multiset}$ and $\textit{priority_queue}$). In this data structure, maintain the minimal values of $a$ so that the sum of times does not exceed $L$; if the current time exceeds $L$, remove elements from the structure until the current time is no more than $L$. There are no more than $n$ such removals for each fixed $l$. We have obtained a solution in $O(n^2\log{}n)$, which is fast enough.
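The ordering claim, that sorting by $b$ minimizes the sum of adjacent absolute differences and that this minimum equals $b_{\max} - b_{\min}$, can be verified by brute force over permutations (helper name is ours):

```python
from itertools import permutations

def adjacent_cost(bs):
    # sum of |b_i - b_{i+1}| over adjacent pairs in the given order
    return sum(abs(bs[i] - bs[i + 1]) for i in range(len(bs) - 1))

for bs in [(3, 1, 4, 1, 5), (2, 7, 1, 8), (10, 10, 10)]:
    best = min(adjacent_cost(p) for p in permutations(bs))
    assert best == max(bs) - min(bs)
```

Intuitively, any order must "travel" from the smallest $b$ to the largest at least once, so the cost is at least $b_{\max} - b_{\min}$, and the sorted order achieves exactly that.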
|
[
"binary search",
"brute force",
"constructive algorithms",
"data structures",
"dp",
"greedy",
"sortings"
] | 1,800
|
import heapq
import sys
input = sys.stdin.readline
t = int(input())
for _ in range(t):
n, L = map(int, input().split())
v = []
for i in range(n):
a, b = map(int, input().split())
v.append((b, a))
v.sort()
ans = 0
for l in range(n):
pq = []
heapq.heapify(pq)
cur = 0
size = 0
for r in range(l, n):
heapq.heappush(pq, -v[r][1])
size += 1
cur += v[r][1]
while size and v[r][0] - v[l][0] + cur > L:
maxx = -heapq.heappop(pq)
cur -= maxx
size -= 1
ans = max(ans, size)
print(ans)
|
1935
|
D
|
Exam in MAC
|
The Master's Assistance Center has announced an entrance exam, which consists of the following.
The candidate is given a set $s$ of size $n$ and some strange integer $c$. For this set, it is needed to calculate the number of pairs of integers $(x, y)$ such that $0 \leq x \leq y \leq c$, $x + y$ \textbf{is not} contained in the set $s$, and also $y - x$ \textbf{is not} contained in the set $s$.
Your friend wants to enter the Center. Help him pass the exam!
|
The principle of inclusion-exclusion. The system $x+y = s_i$, $y-x=s_j$ has $0$ or $1$ integer solutions $(x, y)$. Applying the inclusion-exclusion formula, the answer to the problem is: $\mathrm{cnt}(x, y) - \mathrm{cnt}(x, y: x + y \in s) - \mathrm{cnt}(x, y: y - x \in s) + \mathrm{cnt}(x, y: x + y, y - x \in s)$. Let's calculate each value separately. The number of possible pairs $x, y$ is $\frac{(c+1)\cdot(c+2)}{2}$. The number of pairs $x, y: x + y \in s$: we iterate over the sum value $s_i$; if $x + y = s_i$, then to each $0 \leq x \leq \lfloor \frac{s_i}{2} \rfloor$ there corresponds exactly one $y$, i.e., the number of pairs with such a sum is $\lfloor \frac{s_i}{2} \rfloor + 1$. The number of pairs $x, y: y - x \in s$: we iterate over the difference value $s_i$; if $y - x = s_i$, then to each $s_i \leq y \leq c$ there corresponds exactly one $x$, i.e., the number of pairs with such a difference is $c - s_i + 1$. The number of pairs $x, y: x + y, y - x \in s$: let $x+y=s_i$, $y-x=s_j$, then $x = \frac{s_i - s_j}{2}$ and $y = \frac{s_i+s_j}{2}$. If $s_i, s_j$ have different parities, then such $x, y$ are not suitable because they are non-integers; otherwise, such $x, y$ are suitable, and we need to add this pair. So let's calculate the numbers of even and odd elements of $s$: $even$ and $odd$ respectively. Then the number of such pairs is $\frac{even\cdot(even+1)}{2}+\frac{odd\cdot(odd+1)}{2}$. With all these quantities, we can calculate the answer. The final complexity is $O(n)$. The problem can also be solved in $O(n \cdot \log n)$ or $O(n)$ using other methods.
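The inclusion-exclusion count can be cross-checked against a direct enumeration for small $c$ (helper names are ours; as in the problem, the elements of $s$ are assumed distinct and at most $c$):

```python
def count_formula(s, c):
    even = sum(1 for v in s if v % 2 == 0)
    odd = len(s) - even
    ans = (c + 1) * (c + 2) // 2                  # all pairs 0 <= x <= y <= c
    ans -= sum(v // 2 + 1 for v in s)             # pairs with x + y in s
    ans -= sum(c - v + 1 for v in s)              # pairs with y - x in s
    ans += even * (even + 1) // 2 + odd * (odd + 1) // 2  # double-subtracted pairs
    return ans

def count_brute(s, c):
    ss = set(s)
    return sum(1 for x in range(c + 1) for y in range(x, c + 1)
               if x + y not in ss and y - x not in ss)

for s, c in [([1, 5, 6], 10), ([2], 3), ([1, 2, 3, 4], 7)]:
    assert count_formula(s, c) == count_brute(s, c)
```

For instance, with $s = \{2\}$ and $c = 3$ both methods give $7$ valid pairs.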
|
[
"binary search",
"combinatorics",
"implementation",
"math"
] | 1,800
|
def solve_case() :
n, c = map(int, input().split())
s = list(map(int, input().split()))
ans = (c + 1) * (c + 2) // 2
even, odd = 0, 0
for i in range(n):
ans -= s[i] // 2 + 1
ans -= c - s[i] + 1
if s[i] % 2 == 0:
even += 1
else:
odd += 1
ans += even * (even + 1) // 2
ans += odd * (odd + 1) // 2
print(ans)
t = int(input())
for test in range(t):
solve_case()
|
1935
|
E
|
Distance Learning Courses in MAC
|
The New Year has arrived in the Master's Assistance Center, which means it's time to introduce a new feature!
Now students are given distance learning courses, with a total of $n$ courses available. For the $i$-th distance learning course, a student can receive a grade ranging from $x_i$ to $y_i$.
However, not all courses may be available to each student. Specifically, the $j$-th student is only given courses with numbers from $l_j$ to $r_j$, meaning the distance learning courses with numbers $l_j, l_j + 1, \ldots, r_j$.
The creators of the distance learning courses have decided to determine the final grade in a special way. Let the $j$-th student receive grades $c_{l_j}, c_{l_j + 1}, \ldots, c_{r_j}$ for their distance learning courses. Then their final grade will be equal to $c_{l_j}$ $|$ $c_{l_j + 1}$ $|$ $\ldots$ $|$ $c_{r_j}$, where $|$ denotes the bitwise OR operation.
Since the chatbot for solving distance learning courses is broken, the students have asked for your help. For each of the $q$ students, tell them the maximum final grade they can achieve.
|
Try to solve the problem if $x_i = 0$ for each $i$. What will be the answer for two segments $(0, 2^i), (0, 2^i)$? Which bits will definitely be included in the answer? Let's solve the problem when $x = 0$. We iterate over the bits from the most significant to the least significant and try to include them in the answer. Suppose we are considering bit $i$; if this bit occurs $c$ times among the $y$ numbers, there are several cases: $c = 0$: this bit cannot be included in the answer. $c = 1$: this bit will be included in the answer; we add it. $c > 1$: a special case, because for one number where bit $i$ is present we can keep it, and for another number we can set all bits $j < i$. Therefore, if we encounter some bit $i$ that occurs more than once, then all bits $j \le i$ will be included in the answer. Now let's solve the original problem. For this, take a pair $(x_i, y_i)$ and find the longest common prefix of the binary representations of $x_i$ and $y_i$; let it be the number $w$. It is clear that all bits of $w$ will be included in the answer. We then form a new pair $(x'_i, y'_i) = (x_i - w, y_i - w)$ and remember the number $w_i$. Now note that $y'_i - 1 \ge x'_i$. Why do we need this fact? Remember, in the case when some bit occurred more than once, we added it and all smaller bits to the answer; for this we would set at this position a number equal to $2^i - 1$ (together with the larger bits), and since $y'_i - 1 \ge x'_i$, we can always do so. Therefore, the solution algorithm for query $j$ is as follows: take the bitwise OR of all $w_i$ for $l_j \le i \le r_j$; let this be the number $W$. Iterate over bit $i$ and, similarly to the case $x = 0$, consider the same cases, but for the array $y'$; also take into account whether the bit occurs in $W$. This solution can be implemented in $O(n \cdot \log n)$ using prefix sums for each bit. There is also a solution using a segment tree, so the problem can be solved with modification queries as well.
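The $x = 0$ bit rule can be cross-checked against a brute force that maximizes the OR over all grade choices (helper names are ours; small bounds only):

```python
from itertools import product

def max_or_rule(ys, bit=7):
    # x = 0 case: scan bits from high to low; a bit seen once is simply kept,
    # a bit seen twice or more lets us keep it together with every lower bit
    ans = 0
    for i in range(bit, -1, -1):
        c = sum(1 for y in ys if y >> i & 1)
        if c >= 2:
            return ans | (2 << i) - 1
        if c == 1:
            ans |= 1 << i
    return ans

def max_or_brute(ys):
    # try every grade vector 0 <= c_i <= y_i and maximize the OR
    best = 0
    for choice in product(*(range(y + 1) for y in ys)):
        cur = 0
        for v in choice:
            cur |= v
        best = max(best, cur)
    return best

for ys in [[5], [5, 5], [3, 1], [12, 6], [6, 9, 1]]:
    assert max_or_rule(ys) == max_or_brute(ys)
```

For example, two segments $(0, 5)$ give $7$: one course keeps its top bit via $5 = 101_2$ while the other takes $3 = 011_2$, matching the "$c > 1$ fills everything below" case.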
|
[
"bitmasks",
"brute force",
"data structures",
"greedy",
"math"
] | 2,400
|
/* Includes */
#include <bits/stdc++.h>
/* Using libraries */
using namespace std;
/* Defines */
template <class T>
using vc = vector <T>;
using ll = long long;
using ld = long double;
using pii = pair <int, int>;
template<class T>
void output(T &a) {
for (auto i : a)
cout << i << ' ';
cout << '\n';
}
template<class T>
bool chmin(T &a, T b) {
if (a > b) {
a = b;
return true;
}
return false;
}
template<class T>
bool chmax(T &a, T b) {
if (a < b) {
a = b;
return true;
}
return false;
}
const int N = 2e5;
const int bit = 30;
struct segtree {
vc <int> t; int n;
segtree (int n) : n(n) {
t.resize(n * 2);
}
void upd (int i, int x) {
for (t[i += n] = x; i > 1; i >>= 1) {
t[i >> 1] = t[i] | t[i ^ 1];
}
}
int get (int l, int r) {
int res = 0;
for (l += n, r += n; l < r; l >>= 1, r >>= 1) {
if (l & 1)
res |= t[l++];
if (r & 1)
res |= t[--r];
}
return res;
}
};
int n;
int l[N], r[N];
int w[N];
void fix () {
for (int i = 0; i < n; ++i) {
if (l[i] == r[i]) {
w[i] = l[i];
l[i] = r[i] = 0;
continue;
}
int pref = (1 << (__lg(l[i] ^ r[i]) + 1)) - 1;
w[i] = r[i] - (r[i] & pref);
r[i] &= pref;
l[i] &= pref;
}
}
void solve() {
cin >> n;
for (int i = 0; i < n; ++i) {
cin >> l[i] >> r[i];
}
fix();
segtree t(n);
vc <vc <int>> bits(bit, vc <int> (n + 1));
for (int i = 0; i < n; ++i) {
t.upd(i, w[i]);
for (int j = 0; j < bit; ++j) {
bits[j][i + 1] = bits[j][i];
if (r[i] >> j & 1) {
bits[j][i + 1]++;
}
}
}
int q;
cin >> q;
while (q--) {
int x, y;
cin >> x >> y; --x;
int ans = t.get(x, y);
for (int j = bit - 1; j >= 0; --j) {
int cnt = bits[j][y] - bits[j][x] + (ans >> j & 1);
if (cnt > 1) {
ans |= (2 << j) - 1;
break;
} else if (cnt == 1) {
ans |= 1 << j;
}
}
cout << ans << ' ';
}
cout << "\n";
}
signed main() {
ios::sync_with_stdio(false); cin.tie(0); cout.tie(0);
int t = 1;
cin >> t;
while (t--) {
solve();
}
}
|
1935
|
F
|
Andrey's Tree
|
Master Andrey loves trees$^{\dagger}$ very much, so he has a tree consisting of $n$ vertices.
But it's not that simple. Master Timofey decided to steal one vertex from the tree. If Timofey stole vertex $v$ from the tree, then vertex $v$ and all edges with one end at vertex $v$ are removed from the tree, while the numbers of other vertices remain unchanged. To prevent Andrey from getting upset, Timofey decided to make the resulting graph a tree again. To do this, he can add edges between any vertices $a$ and $b$, but when adding such an edge, he must pay $|a - b|$ coins to the Master's Assistance Center.
Note that the resulting tree \textbf{does not contain} vertex $v$.
Timofey has not yet decided which vertex $v$ he will remove from the tree, so he wants to know for each vertex $1 \leq v \leq n$, the minimum number of coins needed to be spent to make the graph a tree again after removing vertex $v$, as well as which edges need to be added.
$^{\dagger}$A tree is an undirected connected graph without cycles.
|
What different values can the answer for a vertex take? Edges with a weight of 1 can connect all vertices $[1, v - 1]$ and all vertices $[v + 1, n]$. Consider the edges of the form $(mn_u, mn_u - 1)$, $(mx_u, mx_u + 1)$, where $mn_u$, $mx_u$ are the minimum and maximum in the subtree of vertex u with respect to $v$. Let's fix some vertex $v$ for which the answer is being calculated. Suppose the degree of the vertex $v$ in the tree is $deg_v$, then it's clear that it's necessary to add $deg_v - 1$ edges. Consider the components that appear after removing $v$. Then, the goal is to use the new edges to unite all the components into one, using the minimum total cost. This is the same as finding the minimum spanning tree in a graph, where the vertices are the components that resulted from removing $v$, and for every $a, b$, an edge with a weight of $|a - b|$ is drawn between the components containing $a$ and $b$. Let's simulate Kruskal's algorithm for this graph. Consider all the single-weight edges in this graph: $(1, 2), (2, 3), ..., (v - 2, v - 1), (v + 1, v + 2), ..., (n - 1, n)$. It's clear that using the single-weight edges, the vertices with numbers $[1, v - 1]$ will definitely end up in one component, and the vertices with numbers $[v + 1, n]$ will also end up in one component. To unite these two components, it would be optimal to add an edge $(v - 1, v + 1)$. It turns out that it's sufficient to consider only all the single-weight edges and the edge $(v - 1, v + 1)$. Let's limit the number of single-weight edges to $O(deg_v)$. For this, in each component $u$, calculate $mn_u$ and $mx_u$ - the minimum and maximum in the component, respectively. Claim: among the single-weight edges, it's sufficient to consider edges of the form $(mn_u, mn_u - 1)$, $(mx_u, mx_u + 1)$. First, understand when it's necessary to add the edge $(v - 1, v + 1)$. 
Note that if there's at least one component $u$ such that $mn_u < v < mx_u$, then the edge $(v - 1, v + 1)$ won't be needed; otherwise, it will be. This is quite easy to show by simulating Kruskal's algorithm. Let $v = 1$. We'll show that using edges $(mn_u, mn_u - 1)$, all components will unite. Go through the vertices $x$ from $2$ to $n$ and maintain the invariant that all vertices from $2$ to $x$ are in one component. At $x = 2$, this holds. When $x$ is the minimum in some component, then the edge $(x - 1, x)$ will be added, and since $x - 1$ is in one component with $2$, $x$ will now also be. When $x$ is not the minimum in some component, then $mn$ - the minimum in the component $x$ will be in one component with $2$ ($mn < x$, the invariant holds), meaning $x$ will also be in one component with $2$. Thus, it turns out that all will be in one component. Now consider an arbitrary $v$. Separately consider the prefix of vertices $[1, v - 1]$ and the suffix $[v + 1, n]$. Then, similarly to $v = 1$, it can be shown that for the prefix of vertices $[1, v - 1]$, using edges of the form $(mn_u, mn_u - 1)$, you can unite $[1, v - 1]$. Similarly, for the suffix of vertices $[v + 1, n]$, using edges of the form $(mx_u, mx_u + 1)$, you can unite $[v + 1, n]$. Now, if the edge $(v - 1, v + 1)$ is necessary, then add it to the answer. Otherwise, there's at least one component $u$ such that $mn_u < v < mx_u$, meaning the prefix of vertices $[1, v - 1]$ and the suffix $[v + 1, n]$ will unite into one component. Finding $mn_u$, $mx_u$ for each component is straightforward; what remains is to determine which components are connected by the edges $(mn_u, mn_u - 1)$, $(mx_u, mx_u + 1)$. This can be done with binary search through the Euler tour of the tree. After that, Kruskal's algorithm can be initiated to calculate the answer. Let's estimate the time complexity. 
For a specific vertex $v$, the time complexity is $O(deg_v \cdot \log n)$, so the total time complexity is $O(n \cdot \log n)$. Depending on the implementation of the last step, the problem can also be solved in $O(n)$ or $O(n \cdot \alpha(n))$, where $\alpha(n)$ is the inverse Ackermann function.
|
[
"binary search",
"constructive algorithms",
"data structures",
"dfs and similar",
"dsu",
"greedy",
"implementation",
"trees"
] | 2,800
|
#include <bits/stdc++.h>
using namespace std;
struct DSU {
vector<int> p, r;
int comp;
DSU(int n) : p(n), r(n) {
iota(p.begin(), p.end(), 0);
comp = n;
}
int find(int v) {
return (p[v] == v) ? v : p[v] = find(p[v]);
}
bool join(int a, int b) {
a = find(a), b = find(b);
if (a == b) return false;
comp--;
if (r[a] < r[b]) swap(a, b);
p[b] = a;
if (r[a] == r[b]) r[a]++;
return true;
}
};
int main() {
ios::sync_with_stdio(false);
cin.tie(0), cout.tie(0);
int T;
cin >> T;
while (T--) {
int n;
cin >> n;
vector<vector<int>> g(n);
for (int i = 0; i < n - 1; i++) {
int x, y;
cin >> x >> y;
x--, y--;
g[x].push_back(y);
g[y].push_back(x);
}
vector<int> tin(n), tout(n), l0(n), r0(n);
vector<int> lup(n), rup(n);
int timer = 0;
vector<int> ord;
function<void(int, int)> dfs = [&](int v, int p) {
l0[v] = r0[v] = v;
tin[v] = timer++;
ord.push_back(v);
for (int u : g[v]) {
if (u == p) continue;
dfs(u, v);
l0[v] = min(l0[v], l0[u]);
r0[v] = max(r0[v], r0[u]);
}
tout[v] = timer - 1;
};
dfs(0, 0);
function<bool(int, int)> ancestor = [&](int v, int u) {
return tin[v] <= tin[u] && tin[u] <= tout[v];
};
vector<int> pref_min_ord(n + 1, n), suf_min_ord(n + 1, n);
vector<int> pref_max_ord(n + 1, -1), suf_max_ord(n + 1, -1);
for (int i = 0; i < n; i++) {
pref_min_ord[i + 1] = min(pref_min_ord[i], ord[i]);
pref_max_ord[i + 1] = max(pref_max_ord[i], ord[i]);
}
for (int i = n - 1; i >= 0; i--) {
suf_min_ord[i] = min(suf_min_ord[i + 1], ord[i]);
suf_max_ord[i] = max(suf_max_ord[i + 1], ord[i]);
}
for (int id = (int)ord.size() - 1; id >= 0; id--) {
int v = ord[id];
lup[v] = min(pref_min_ord[tin[v]], suf_min_ord[tout[v] + 1]);
rup[v] = max(pref_max_ord[tin[v]], suf_max_ord[tout[v] + 1]);
}
vector<vector<pair<int, int>>> ans(n);
vector<int> weight(n);
function<void(int, int)> dfs2 = [&](int v, int p) {
vector<int> all_id, all_tin;
int idpar = -1;
for (int id = 0; id < g[v].size(); id++) {
int u = g[v][id];
if (u != p) {
dfs2(u, v);
all_id.push_back(id);
all_tin.push_back(tin[u]);
} else {
idpar = id;
}
}
function<int(int)> get_subtree = [&](int u) {
if (!ancestor(v, u)) return idpar;
return all_id[upper_bound(all_tin.begin(), all_tin.end(), tin[u]) - all_tin.begin() - 1];
};
DSU dsu(g[v].size());
function<void(int, int)> try_add = [&](int x, int y) {
if (x == -1 || y == n || x == v || y == v) return;
if (dsu.join(get_subtree(x), get_subtree(y))) {
ans[v].push_back({x, y});
weight[v] += abs(x - y);
}
};
for (int u : g[v]) {
if (u != p) {
try_add(l0[u] - 1, l0[u]);
try_add(r0[u], r0[u] + 1);
} else {
try_add(lup[v] - 1, lup[v]);
try_add(rup[v], rup[v] + 1);
}
}
if (dsu.comp != 1) {
try_add(v - 1, v + 1);
}
assert(dsu.comp == 1);
};
dfs2(0, 0);
for (int i = 0; i < n; i++) {
cout << weight[i] << " " << ans[i].size() << "\n";
for (auto [a, b] : ans[i]) {
cout << a + 1 << " " << b + 1 << "\n";
}
cout << "\n";
}
}
}
|
1936
|
A
|
Bitwise Operation Wizard
|
\textbf{This is an interactive problem.}
There is a secret sequence $p_0, p_1, \ldots, p_{n-1}$, which is a permutation of $\{0,1,\ldots,n-1\}$.
You need to find any two indices $i$ and $j$ such that $p_i \oplus p_j$ is maximized, where $\oplus$ denotes the bitwise XOR operation.
To do this, you can ask queries. Each query has the following form: you pick arbitrary indices $a$, $b$, $c$, and $d$ ($0 \le a,b,c,d < n$). Next, the jury calculates $x = (p_a \mid p_b)$ and $y = (p_c \mid p_d)$, where $|$ denotes the bitwise OR operation. Finally, you receive the result of comparison between $x$ and $y$. In other words, you are told if $x < y$, $x > y$, or $x = y$.
Please find any two indices $i$ and $j$ ($0 \le i,j < n$) such that $p_i \oplus p_j$ is maximum among all such pairs, using at most $3n$ queries. If there are multiple pairs of indices satisfying the condition, you may output any one of them.
|
What's the maximum value of $p_i \oplus p_j$? How do we find $i$ such that $p_i = n-1$? How do we find $j$ such that $p_i \oplus p_j$ reaches the maximum value? Step 1: ask queries of the form $? \ x \ x \ y \ y$, like a classic search for the maximum among $n$ numbers, to find $i$ with $p_i = n-1$. Step 2: ask queries of the form $? \ x \ i \ y \ i$ to find all indices $k$ such that $p_i \mid p_{k}$ reaches the maximum value; store all such indices in a vector $id$. Step 3: ask queries of the form $? \ id[x] \ id[x] \ id[y] \ id[y]$ to find the $j$ in $id$ such that $p_j$ is the minimum.
|
[
"bitmasks",
"constructive algorithms",
"greedy",
"interactive",
"math"
] | 1,700
|
#include <map>
#include <set>
#include <cmath>
#include <ctime>
#include <queue>
#include <stack>
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <cstring>
#include <algorithm>
#include <iostream>
using namespace std;
typedef double db;
typedef long long ll;
typedef unsigned long long ull;
const int N=1000010;
const int LOGN=28;
const ll TMD=0;
const ll INF=2147483647;
int T,n;
char query(int a,int b,int c,int d)
{
char x;
printf("? %d %d %d %d\n",a,b,c,d);
fflush(stdout);
cin>>x;
fflush(stdout);
return x;
}
int main()
{
scanf("%d",&T);
while(T--)
{
scanf("%d",&n);
int mx=0,ans1=0,ans2;
vector<int> v;
for(int i=1;i<n;i++)
{
int c=query(ans1,ans1,i,i);
if(c=='<') ans1=i;
}
v.push_back(0);
for(int i=1;i<n;i++)
{
int c=query(mx,ans1,i,ans1);
if(c=='<')
{
mx=i;
v.clear();
v.push_back(i);
}
else if(c=='=') v.push_back(i);
}
ans2=v[0];
for(int i=1;i<v.size();i++)
{
int c=query(ans2,ans2,v[i],v[i]);
if(c=='>') ans2=v[i];
}
printf("! %d %d\n",ans1,ans2);
fflush(stdout);
}
return 0;
}
|
1936
|
B
|
Pinball
|
There is a one-dimensional grid of length $n$. The $i$-th cell of the grid contains a character $s_i$, which is either '<' or '>'.
When a pinball is placed on one of the cells, it moves according to the following rules:
- If the pinball is on the $i$-th cell and $s_i$ is '<', the pinball moves one cell to the left in the next second. If $s_i$ is '>', it moves one cell to the right.
- After the pinball has moved, the character $s_i$ is inverted (i. e. if $s_i$ used to be '<', it becomes '>', and vice versa).
- The pinball stops moving when it leaves the grid: either from the left border or from the right one.
You need to answer $n$ \textbf{independent} queries. In the $i$-th query, a pinball will be placed on the $i$-th cell. Note that we always place a pinball on the initial grid.
For each query, calculate how many seconds it takes the pinball to leave the grid. It can be shown that the pinball will always leave the grid within a finite number of steps.
|
Observe: which cells actually change the direction of a pinball initially placed at position $p$? Only the '>' cells to the left of $p$ and the '<' cells to the right of $p$. Can you see the trace of the pinball? How can we quickly calculate the time when it leaves the grid? We observe that, in fact, only the '>' to the left of $p$ and the '<' to the right of $p$ change the direction of the pinball placed at position $p$ initially. For convenience, assume $s_p$ is '>', let $k=\min(\mathrm{countright}(1,p),\mathrm{countleft}(p+1,n))$, and suppose the pinball leaves from the left boundary (the other situations are handled in a similar way). We can obtain $right[1,\ldots,k]$ and $left[1,\ldots,k]$ through prefix sums + binary search, where $right$ holds the indices of '>' to the left of $p$ (in decreasing order) and $left$ holds the indices of '<' to the right of $p$ (in increasing order). We use $right$ and $left$ to describe the trace of the pinball: the first segment moves from $right_1$ to $left_1$; the second segment from $left_1$ to $right_2$; the third segment from $right_2$ to $left_2$; $\ldots$; the $2k$-th segment moves from $left_k$ to the left boundary. It is not difficult to see that by storing prefix sums of indices we can quickly compute the total time the pinball spends moving.
|
[
"binary search",
"data structures",
"implementation",
"math",
"two pointers"
] | 2,000
|
#include <map>
#include <set>
#include <cmath>
#include <ctime>
#include <queue>
#include <stack>
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <cstring>
#include <algorithm>
#include <iostream>
using namespace std;
typedef double db;
typedef long long ll;
typedef unsigned long long ull;
const int N=1000010;
const int LOGN=28;
const ll TMD=0;
const ll INF=2147483647;
int T,n;
ll Sl[N],Sr[N],IDl[N],IDr[N];
char s[N];
int findpre(int x)
{
int L=0,R=n+1,M;
while(L+1!=R)
{
M=(L+R)>>1;
if(Sr[M]<x) L=M;
else R=M;
}
return R;
}
int findsuf(int x)
{
int L=0,R=n+1,M;
while(L+1!=R)
{
M=(L+R)>>1;
if(Sl[n]-Sl[M-1]<x) R=M;
else L=M;
}
return L;
}
int main()
{
scanf("%d",&T);
while(T--)
{
scanf("%d%s",&n,s);
for(int i=1;i<=n;i++)
{
Sr[i]=Sr[i-1]+(s[i-1]=='>');
Sl[i]=Sl[i-1]+(s[i-1]=='<');
IDr[i]=IDr[i-1]+i*(s[i-1]=='>');
IDl[i]=IDl[i-1]+i*(s[i-1]=='<');
}
for(int i=1;i<=n;i++)
{
if(s[i-1]=='>')
{
if(Sr[i]>Sl[n]-Sl[i])
{
int p=findpre(Sr[i]-(Sl[n]-Sl[i]));
printf("%lld ",2*((IDl[n]-IDl[i])-(IDr[i]-IDr[p-1]))+i+(n+1));
}
else
{
int p=findsuf((Sl[n]-Sl[i])-Sr[i]+1);
printf("%lld ",2*((IDl[p]-IDl[i])-(IDr[i]-IDr[0]))+i);
}
}
else
{
if(Sr[i]>=Sl[n]-Sl[i-1])
{
int p=findpre(Sr[i]-(Sl[n]-Sl[i-1])+1);
printf("%lld ",2*((IDl[n]-IDl[i-1])-(IDr[i]-IDr[p-1]))-i+(n+1));
}
else
{
int p=findsuf((Sl[n]-Sl[i-1])-Sr[i]);
printf("%lld ",2*((IDl[p]-IDl[i-1])-(IDr[i]-IDr[0]))-i);
}
}
}
printf("\n");
}
return 0;
}
|
1936
|
C
|
Pokémon Arena
|
You are at a dueling arena. You also possess $n$ Pokémons. Initially, only the $1$-st Pokémon is standing in the arena.
Each Pokémon has $m$ attributes. The $j$-th attribute of the $i$-th Pokémon is $a_{i,j}$. Each Pokémon also has a cost to be hired: the $i$-th Pokémon's cost is $c_i$.
You want to have the $n$-th Pokémon stand in the arena. To do that, you can perform the following two types of operations any number of times in any order:
- Choose three integers $i$, $j$, $k$ ($1 \le i \le n$, $1 \le j \le m$, $k > 0$), increase $a_{i,j}$ by $k$ permanently. The cost of this operation is $k$.
- Choose two integers $i$, $j$ ($1 \le i \le n$, $1 \le j \le m$) and hire the $i$-th Pokémon to duel with the current Pokémon in the arena based on the $j$-th attribute. The $i$-th Pokémon will win if $a_{i,j}$ is \textbf{greater than or equal to} the $j$-th attribute of the current Pokémon in the arena (otherwise, it will lose). After the duel, only the winner will stand in the arena. The cost of this operation is $c_i$.
Find the minimum cost you need to pay to have the $n$-th Pokémon stand in the arena.
|
In fact, you never need to hire the same Pokémon more than once. Consider building a graph. How can we reduce the number of edges in the graph? Let's treat the $n$ Pokémon as nodes, and Pokémon $v$ defeating Pokémon $u$ as an edge $v \rightarrow u$. Then the problem is essentially finding the shortest path from $n$ to $1$. If we construct the graph by brute force, the time complexity is $O(n^2m)$, which is unacceptable, so we need a better way to build the graph. Recall that we must represent every process of the form "Pokémon $u$ increased attribute $x$ by some value and defeated Pokémon $v$" as a path in the graph. We achieve this with the following construction, handling each attribute separately. For the $x$-th attribute, we create $2n$ virtual nodes $X_1,\ldots,X_n$ and $Y_1,\ldots,Y_n$ and connect the Pokémon through them. For example, with $n=3$ and $a_{1,1}=9, a_{2,1}=6, a_{3,1}=1$, the process "Pokémon $3$ increased attribute $1$ by $8$ and defeated Pokémon $1$" is represented by the path $3 \rightarrow X_1 \rightarrow X_2 \rightarrow X_3 \rightarrow Y_3 \rightarrow 1$.
More generally, the construction for the $i$-th attribute is: insert all $a_{1,i},\ldots,a_{n,i}$ into $val$ and sort it (for convenience, assume the values are pairwise distinct); create $2n$ virtual nodes $X_1,\ldots,X_n$ and $Y_1,\ldots,Y_n$; add an edge $X_j \rightarrow X_{j+1}$ with weight $val_{j+1}-val_j$ for $1 \le j < n$; add an edge $Y_{j+1} \rightarrow Y_j$ with weight $0$ for $1 \le j < n$; add an edge $X_j \rightarrow Y_j$ with weight $0$ for $1 \le j \le n$; add an edge $j \rightarrow X_{rank_j}$ with weight $c_j$ for $1 \le j \le n$; and add an edge $Y_{rank_j} \rightarrow j$ with weight $0$ for $1 \le j \le n$. Then we simply run Dijkstra's algorithm on this graph. The time complexity is $O(nm \log (nm))$.
|
[
"data structures",
"graphs",
"greedy",
"implementation",
"shortest paths",
"sortings"
] | 2,400
|
#include <bits/stdc++.h>
#define int long long
#define fi first
#define se second
using namespace std;
const int INFF = 1e18;
int32_t main()
{
ios_base::sync_with_stdio(false);
cin.tie(NULL);
int t;
cin >> t;
while (t --> 0) {
int n, m;
cin >> n >> m;
vector<int> c(n + 1);
for (int i = 1; i <= n; i++) cin >> c[i];
vector<vector<int>> a(n + 1, vector<int>(m + 1));
vector<vector<pair<int, int>>> b(m + 1);
for (int i = 1; i <= n; i++) {
for (int j = 1; j <= m; j++) {
cin >> a[i][j];
b[j].push_back({a[i][j], i});
}
}
vector<vector<int>> rank(n + 1, vector<int>(m + 1));
vector<vector<int>> dec(n + 1, vector<int>(m + 1));
for (int j = 1; j <= m; j++) {
sort(b[j].begin(), b[j].end());
for (int i = 0; i < n; i++) {
auto [x, id] = b[j][i];
rank[id][j] = i + 1;
dec[i + 1][j] = id;
}
}
int ans = INFF;
vector<int> vis(n + 1, 0);
vector<vector<int>> dist(n + 1, vector<int>(m + 1, INFF));
priority_queue<tuple<int, int, int>, vector<tuple<int, int, int>>, greater<tuple<int, int, int>>> pq;
vis[1] = 1;
for (int j = 1; j <= m; j++) dist[1][j] = 0, pq.push({dist[1][j], 1, j});
while (!pq.empty()) {
auto [w, x, t] = pq.top();
pq.pop();
if (dist[x][t] < w) continue;
if (x == n) ans = min(ans, w + c[n]);
if (rank[x][t] < n) {
int z = dec[rank[x][t] + 1][t];
if (w < dist[z][t]) {
dist[z][t] = w;
pq.push({dist[z][t], z, t});
}
}
if (rank[x][t] > 1) {
int z = dec[rank[x][t] - 1][t];
if (w + a[x][t] - a[z][t] < dist[z][t]) {
dist[z][t] = w + a[x][t] - a[z][t];
pq.push({dist[z][t], z, t});
}
}
if (!vis[x]) {
vis[x] = 1;
for (int j = 1; j <= m; j++) {
if (w + c[x] < dist[x][j]) {
dist[x][j] = w + c[x];
pq.push({dist[x][j], x, j});
}
}
}
}
cout << ans << '\n';
}
return 0;
}
|
1936
|
D
|
Bitwise Paradox
|
You are given two arrays $a$ and $b$ of size $n$ along with a fixed integer $v$.
An interval $[l, r]$ is called a \textbf{good} interval if $(b_l \mid b_{l+1} \mid \ldots \mid b_r) \ge v$, where $|$ denotes the bitwise OR operation. The \textbf{beauty} of a good interval is defined as $\max(a_l, a_{l+1}, \ldots, a_r)$.
You are given $q$ queries of two types:
- "1 i x": assign $b_i := x$;
- "2 l r": find the \textbf{minimum} beauty among all \textbf{good} intervals $[l_0,r_0]$ satisfying $l \le l_0 \le r_0 \le r$. If there is no suitable good interval, output $-1$ instead.
Please process all queries.
|
First we use a segment tree to maintain the sequence $b$. For each segment tree node $[l,r]$, we maintain the first and last occurrence position of each binary bit in the interval. Both modifications and queries require merging two intervals. Suppose we want to merge the information of $[l, mid]$ and $[mid+1, r]$ into $[l, r]$, and consider answers that span both halves. To make the $i$-th bit of the interval OR equal to $1$, there are two possibilities: extend the interval to the last occurrence position $P$ of the $i$-th bit in $[l,mid]$, or to the first occurrence position $Q$ of the $i$-th bit in $[mid+1,r]$. Let $x = \max(a_P, a_{P+1}, \ldots, a_{mid})$ and $y = \max(a_{mid+1}, a_{mid+2}, \ldots, a_Q)$. If $x \le y$, we can greedily choose position $P$, because its cost is smaller: if we were to choose $Q$, we might as well also choose $P$, since including $P$ does not increase the maximum of $a$. Otherwise, we choose $Q$ symmetrically. With this greedy, we enumerate the highest bit $i$ at which the interval OR first exceeds $v$: the $i$-th bit of the interval OR must be $1$ while the $i$-th bit of $v$ is $0$, and for every bit $j > i$ where $v$ has a $1$, the OR must also have a $1$. The remaining bits may be $0$ or $1$ freely, so we only need to handle the bits forced to be $1$ greedily, expanding the interval as needed. Using a sparse table we can find the interval maximum of $a$ in $O(1)$, so two intervals can be merged in $O(\log V)$ time. With the segment tree, the total complexity is $O(q \log n \log V + n \log V)$.
|
[
"binary search",
"bitmasks",
"data structures",
"greedy",
"two pointers"
] | 3,100
|
#include <bits/stdc++.h>
using namespace std;
constexpr int N = 5e5 + 10, V = 30, inf = INT_MAX, L = 18;
int n, q, S, a[N], b[N];
struct Rmq
{
int st[L][N];
void init()
{
for (int i = 1; i <= n; i++)
st[0][i] = a[i];
for (int i = 1; i < L; i++)
{
for (int j = 1; j <= n - (1 << i) + 1; j++)
{
st[i][j] = max(st[i - 1][j], st[i - 1][j + (1 << i - 1)]);
}
}
}
int qry(int l, int r)
{
int k = __lg(r - l + 1);
return max(st[k][l], st[k][r - (1 << k) + 1]);
}
} ds;
struct Info
{
int pre[V], suf[V];
int ans, l, r;
Info()
{
memset(pre, 0, sizeof(pre));
memset(suf, 0, sizeof(suf));
ans = inf, l = r = 0;
}
Info(const Info &x, const Info &y)
{
for (int i = 0; i < V; i++)
{
pre[i] = x.pre[i] ? x.pre[i] : y.pre[i];
suf[i] = y.suf[i] ? y.suf[i] : x.suf[i];
}
ans = inf, l = r = 0;
}
friend Info operator+(const Info &x, const Info &y)
{
Info z(x, y);
z.ans = min(x.ans, y.ans), z.l = x.l, z.r = y.r;
int pl = x.r, pr = y.l;
if (!pl)
return z;
for (int i = V - 1; i >= 0; i--)
{
int u = x.suf[i], v = y.pre[i];
if (u)
u = min(u, pl);
if (v)
v = max(v, pr);
int lans = u ? ds.qry(u, pr) : inf;
int rans = v ? ds.qry(pl, v) : inf;
if (lans < rans)
{
if (S >> i & 1)
{
if (lans < z.ans)
pl = u;
else
break;
}
else
z.ans = min(z.ans, lans);
}
else
{
if (S >> i & 1)
{
if (rans < z.ans)
pr = v;
else
break;
}
else
z.ans = min(z.ans, rans);
}
}
return z;
}
void clear()
{
*this = Info();
}
void upd(int p)
{
l = r = p, ans = b[p] > S ? a[p] : inf;
memset(pre, 0, sizeof(pre));
memset(suf, 0, sizeof(suf));
for (int j = 0; j < V; j++)
{
if (b[p] >> j & 1)
{
suf[j] = p;
pre[j] = p;
}
}
}
} ans;
void reply(const Info &cur)
{
ans = ans + cur;
}
struct Node
{
Node *ls, *rs;
int l, r, mid;
Info info;
void up()
{
info = ls->info + rs->info;
}
void update(int p)
{
if (l == r)
return info.upd(l);
p <= mid ? ls->update(p) : rs->update(p);
up();
}
void query(int ql, int qr)
{
if (l >= ql && r <= qr)
return reply(info);
if (ql <= mid)
ls->query(ql, qr);
if (qr > mid)
rs->query(ql, qr);
}
} mempool[N + 5 << 1], *cnt = mempool, *rt;
Node *build(int l, int r)
{
Node *u = cnt++;
int mid = l + r >> 1;
u->l = l, u->r = r, u->mid = mid;
if (l == r)
{
u->info.upd(l);
return u;
}
u->ls = build(l, mid);
u->rs = build(mid + 1, r);
u->up();
return u;
}
void solve()
{
cin >> n >> S;
S--;
for (int i = 1; i <= n; i++)
cin >> a[i];
ds.init();
for (int i = 1; i <= n; i++)
cin >> b[i];
rt = build(1, n);
cin >> q;
for (int i = 0; i < q; i++)
{
int opt;
cin >> opt;
if (opt == 1)
{
int p, x;
cin >> p >> x;
b[p] = x;
rt->update(p);
}
else
{
int l, r;
cin >> l >> r;
ans.clear();
rt->query(l, r);
cout << (ans.ans == inf ? -1 : ans.ans) << ' ';
}
}
cout << "\n";
}
int main()
{
#ifndef ONLINE_JUDGE
freopen("input.txt", "r", stdin);
freopen("output.txt", "w", stdout);
#endif
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    cout.tie(0);
int t = 0;
cin >> t;
while (t--)
{
solve();
}
// cerr << 1.0 * clock() / CLOCKS_PER_SEC << '\n';
return 0;
}
|
1936
|
E
|
Yet Yet Another Permutation Problem
|
You are given a permutation $p$ of length $n$.
Please count the number of permutations $q$ of length $n$ which satisfy the following:
- for each $1 \le i < n$, $\max(q_1,\ldots,q_i) \neq \max(p_1,\ldots,p_i)$.
Since the answer may be large, output the answer modulo $998\,244\,353$.
|
We found it difficult to directly count all valid permutations. How about counting all invalid permutations? Can you come up with an $O(n^2)$ DP solution? How can the $O(n^2)$ DP be optimized? Since valid permutations are hard to count directly, consider counting the invalid ones and subtracting from $n!$. First, some notation. Let $P_i=\max(p_1,\ldots,p_i)$, $Q_i=\max(q_1,\ldots,q_i)$, and $last_i=\max\{j : P_j \neq P_i\}$. Denote the values of all prefix-maximum (extreme) points by $val[1,\ldots,m]$ and their positions by $pos[1,\ldots,m]$. For example, for $p=[3,1,4,2,6,5]$ we get $m=3$, $val=[3,4,6]$ and $pos=[1,3,5]$. Call a permutation $q$ "$i$-invalid" if there is an index $j$ satisfying $Q_j=P_j=val_i$, and let $S_i$ be the set of all "$i$-invalid" permutations. By the inclusion-exclusion principle, the answer is $n!-(|S_1|+|S_2|+\ldots)+(|S_1 \cap S_2|+|S_1 \cap S_3|+\ldots)-\ldots$ Consider where to put $val_i$ in $q$ to make $q$ "$i$-invalid". There are two kinds of placements: $q_j=val_i$ for some $1 \le j < pos_i$ with $Q_{pos_i}=val_i$; or $q_j=val_i$ for some $pos_i \le j < pos_{i+1}$ with $Q_{pos_i}=val_i$. In both cases, we call the first $\max(j,pos_i)$ numbers "determined": they are some numbers in $[1,val_i]$. This yields an $O(n^2)$ DP. Let $dp_{i,j}$ mean that the first $i$ numbers are "determined" and $j$ extreme values are "invalid"; in fact we only care about the parity of $j$, so $j \in \{0,1\}$. Note that the first $i$ numbers being "determined" automatically means $Q_i=P_i$; in addition, $q_i=P_i$ if $P_i=P_{i-1}$, and $q_i<P_i$ otherwise. We get the DP formula $dp_{i,j}= \sum_{k=0}^{last_i} dp_{k,j \oplus 1} \cdot A^{P_i-k-1}_{i-k-1} \cdot (P_i=P_{i-1}\,?\,1:i-k)$. Here is a small trick to reduce the constant.
If we let $f_i=dp_{i,0}-dp_{i,1}$, we get a more concise formula $f_{i}= -\sum_{j=0}^{last_i} f_{j} \cdot A^{P_i-j-1}_{i-j-1} \cdot (P_i=P_{i-1}\,?\,1:i-j)$. We transform the formula as follows. (1) For $i$ satisfying $P_i=P_{i-1}$, we get $f_{i}= -\sum_{j=0}^{last_i} f_{j} \cdot A^{P_i-j-1}_{i-j-1} = -[(P_i-i)!]^{-1} \sum_{j=0}^{last_i} f_{j} \cdot (P_i-j-1)!$ (2) For $i$ satisfying $P_i \neq P_{i-1}$, we get $f_{i}= -\sum_{j=0}^{last_i} f_{j} \cdot A^{P_i-j-1}_{i-j-1} \cdot (i-j) = -i[(P_i-i)!]^{-1} \sum_{j=0}^{last_i} f_{j} \cdot (P_i-j-1)! + [(P_i-i)!]^{-1} \sum_{j=0}^{last_i} jf_{j} \cdot (P_i-j-1)!$ After that, we have transformed the formula into the classical form $f(i)= h(i) \sum_{j=0}^{i-1} f(j) \cdot g(i-j)$, which can be computed in $O(n \log^2 n)$ by D&C+FFT.
|
[
"divide and conquer",
"fft",
"math"
] | 3,400
|
#include<bits/stdc++.h>
using namespace std;
#define all(a) a.begin(),a.end()
#define pb push_back
#define sz(a) ((int)a.size())
using ll=long long;
using u32=unsigned int;
using u64=unsigned long long;
using i128=__int128;
using u128=unsigned __int128;
using f128=__float128;
using pii=pair<int,int>;
using pll=pair<ll,ll>;
template<typename T> using vc=vector<T>;
template<typename T> using vvc=vc<vc<T>>;
template<typename T> using vvvc=vc<vvc<T>>;
using vi=vc<int>;
using vll=vc<ll>;
using vvi=vc<vi>;
using vvll=vc<vll>;
#define vv(type,name,n,...) \
vector<vector<type>> name(n,vector<type>(__VA_ARGS__))
#define vvv(type,name,n,m,...) \
vector<vector<vector<type>>> name(n,vector<vector<type>>(m,vector<type>(__VA_ARGS__)))
template<typename T> using min_heap=priority_queue<T,vector<T>,greater<T>>;
template<typename T> using max_heap=priority_queue<T>;
// https://trap.jp/post/1224/
#define rep1(n) for(ll i=0; i<(ll)(n); ++i)
#define rep2(i,n) for(ll i=0; i<(ll)(n); ++i)
#define rep3(i,a,b) for(ll i=(ll)(a); i<(ll)(b); ++i)
#define rep4(i,a,b,c) for(ll i=(ll)(a); i<(ll)(b); i+=(c))
#define cut4(a,b,c,d,e,...) e
#define rep(...) cut4(__VA_ARGS__,rep4,rep3,rep2,rep1)(__VA_ARGS__)
#define per1(n) for(ll i=((ll)n)-1; i>=0; --i)
#define per2(i,n) for(ll i=((ll)n)-1; i>=0; --i)
#define per3(i,a,b) for(ll i=((ll)a)-1; i>=(ll)(b); --i)
#define per4(i,a,b,c) for(ll i=((ll)a)-1; i>=(ll)(b); i-=(c))
#define per(...) cut4(__VA_ARGS__,per4,per3,per2,per1)(__VA_ARGS__)
#define rep_subset(i,s) for(ll i=(s); i>=0; i=(i==0?-1:(i-1)&(s)))
template<typename T, typename S> constexpr T ifloor(const T a, const S b){return a/b-(a%b&&(a^b)<0);}
template<typename T, typename S> constexpr T iceil(const T a, const S b){return ifloor(a+b-1,b);}
template<typename T>
void sort_unique(vector<T> &vec){
sort(vec.begin(),vec.end());
vec.resize(unique(vec.begin(),vec.end())-vec.begin());
}
template<typename T, typename S> constexpr bool chmin(T &a, const S b){if(a>b) return a=b,true; return false;}
template<typename T, typename S> constexpr bool chmax(T &a, const S b){if(a<b) return a=b,true; return false;}
template<typename T, typename S> istream& operator >> (istream& i, pair<T,S> &p){return i >> p.first >> p.second;}
template<typename T, typename S> ostream& operator << (ostream& o, const pair<T,S> &p){return o << p.first << ' ' << p.second;}
#ifdef i_am_noob
#define bug(...) cerr << "#" << __LINE__ << ' ' << #__VA_ARGS__ << "- ", _do(__VA_ARGS__)
template<typename T> void _do(vector<T> x){for(auto i: x) cerr << i << ' ';cerr << "\n";}
template<typename T> void _do(set<T> x){for(auto i: x) cerr << i << ' ';cerr << "\n";}
template<typename T> void _do(unordered_set<T> x){for(auto i: x) cerr << i << ' ';cerr << "\n";}
template<typename T> void _do(T && x) {cerr << x << endl;}
template<typename T, typename ...S> void _do(T && x, S&&...y) {cerr << x << ", "; _do(y...);}
#else
#define bug(...) 777771449
#endif
template<typename T> void print(vector<T> x){for(auto i: x) cout << i << ' ';cout << "\n";}
template<typename T> void print(set<T> x){for(auto i: x) cout << i << ' ';cout << "\n";}
template<typename T> void print(unordered_set<T> x){for(auto i: x) cout << i << ' ';cout << "\n";}
template<typename T> void print(T && x) {cout << x << "\n";}
template<typename T, typename... S> void print(T && x, S&&... y) {cout << x << ' ';print(y...);}
template<typename T> istream& operator >> (istream& i, vector<T> &vec){for(auto &x: vec) i >> x; return i;}
vvi read_graph(int n, int m, int base=1){
vvi adj(n);
for(int i=0,u,v; i<m; ++i){
cin >> u >> v,u-=base,v-=base;
adj[u].pb(v),adj[v].pb(u);
}
return adj;
}
vvi read_tree(int n, int base=1){return read_graph(n,n-1,base);}
template<typename T, typename S> pair<T,S> operator + (const pair<T,S> &a, const pair<T,S> &b){return {a.first+b.first,a.second+b.second};}
template<typename T> constexpr T inf=0;
template<> constexpr int inf<int> = 0x3f3f3f3f;
template<> constexpr ll inf<ll> = 0x3f3f3f3f3f3f3f3f;
template<typename T> vector<T> operator += (vector<T> &a, int val){for(auto &i: a) i+=val; return a;}
template<typename T> T isqrt(const T &x){T y=sqrt(x+2); while(y*y>x) y--; return y;}
#define ykh mt19937 rng(chrono::steady_clock::now().time_since_epoch().count())
#include <utility>
namespace atcoder {
namespace internal {
// @param m `1 <= m`
// @return x mod m
constexpr long long safe_mod(long long x, long long m) {
x %= m;
if (x < 0) x += m;
return x;
}
// Fast modular multiplication by barrett reduction
// Reference: https://en.wikipedia.org/wiki/Barrett_reduction
// NOTE: reconsider after Ice Lake
struct barrett {
unsigned int _m;
unsigned long long im;
// @param m `1 <= m < 2^31`
barrett(unsigned int m) : _m(m), im((unsigned long long)(-1) / m + 1) {}
// @return m
unsigned int umod() const { return _m; }
// @param a `0 <= a < m`
// @param b `0 <= b < m`
// @return `a * b % m`
unsigned int mul(unsigned int a, unsigned int b) const {
// [1] m = 1
// a = b = im = 0, so okay
// [2] m >= 2
// im = ceil(2^64 / m)
// -> im * m = 2^64 + r (0 <= r < m)
// let z = a*b = c*m + d (0 <= c, d < m)
// a*b * im = (c*m + d) * im = c*(im*m) + d*im = c*2^64 + c*r + d*im
// c*r + d*im < m * m + m * im < m * m + 2^64 + m <= 2^64 + m * (m + 1) < 2^64 * 2
// ((ab * im) >> 64) == c or c + 1
unsigned long long z = a;
z *= b;
#ifdef _MSC_VER
unsigned long long x;
_umul128(z, im, &x);
#else
unsigned long long x =
(unsigned long long)(((unsigned __int128)(z)*im) >> 64);
#endif
unsigned int v = (unsigned int)(z - x * _m);
if (_m <= v) v += _m;
return v;
}
};
// @param n `0 <= n`
// @param m `1 <= m`
// @return `(x ** n) % m`
constexpr long long pow_mod_constexpr(long long x, long long n, int m) {
if (m == 1) return 0;
unsigned int _m = (unsigned int)(m);
unsigned long long r = 1;
unsigned long long y = safe_mod(x, m);
while (n) {
if (n & 1) r = (r * y) % _m;
y = (y * y) % _m;
n >>= 1;
}
return r;
}
// Reference:
// M. Forisek and J. Jancina,
// Fast Primality Testing for Integers That Fit into a Machine Word
// @param n `0 <= n`
constexpr bool is_prime_constexpr(int n) {
if (n <= 1) return false;
if (n == 2 || n == 7 || n == 61) return true;
if (n % 2 == 0) return false;
long long d = n - 1;
while (d % 2 == 0) d /= 2;
constexpr long long bases[3] = {2, 7, 61};
for (long long a : bases) {
long long t = d;
long long y = pow_mod_constexpr(a, t, n);
while (t != n - 1 && y != 1 && y != n - 1) {
y = y * y % n;
t <<= 1;
}
if (y != n - 1 && t % 2 == 0) {
return false;
}
}
return true;
}
template <int n> constexpr bool is_prime = is_prime_constexpr(n);
// @param b `1 <= b`
// @return pair(g, x) s.t. g = gcd(a, b), xa = g (mod b), 0 <= x < b/g
constexpr std::pair<long long, long long> inv_gcd(long long a, long long b) {
a = safe_mod(a, b);
if (a == 0) return {b, 0};
// Contracts:
// [1] s - m0 * a = 0 (mod b)
// [2] t - m1 * a = 0 (mod b)
// [3] s * |m1| + t * |m0| <= b
long long s = b, t = a;
long long m0 = 0, m1 = 1;
while (t) {
long long u = s / t;
s -= t * u;
m0 -= m1 * u; // |m1 * u| <= |m1| * s <= b
// [3]:
// (s - t * u) * |m1| + t * |m0 - m1 * u|
// <= s * |m1| - t * u * |m1| + t * (|m0| + |m1| * u)
// = s * |m1| + t * |m0| <= b
auto tmp = s;
s = t;
t = tmp;
tmp = m0;
m0 = m1;
m1 = tmp;
}
// by [3]: |m0| <= b/g
// by g != b: |m0| < b/g
if (m0 < 0) m0 += b / s;
return {s, m0};
}
// Compile time primitive root
// @param m must be prime
// @return primitive root (and minimum in now)
constexpr int primitive_root_constexpr(int m) {
if (m == 2) return 1;
if (m == 167772161) return 3;
if (m == 469762049) return 3;
if (m == 754974721) return 11;
if (m == 998244353) return 3;
int divs[20] = {};
divs[0] = 2;
int cnt = 1;
int x = (m - 1) / 2;
while (x % 2 == 0) x /= 2;
for (int i = 3; (long long)(i)*i <= x; i += 2) {
if (x % i == 0) {
divs[cnt++] = i;
while (x % i == 0) {
x /= i;
}
}
}
if (x > 1) {
divs[cnt++] = x;
}
for (int g = 2;; g++) {
bool ok = true;
for (int i = 0; i < cnt; i++) {
if (pow_mod_constexpr(g, (m - 1) / divs[i], m) == 1) {
ok = false;
break;
}
}
if (ok) return g;
}
}
template <int m> constexpr int primitive_root = primitive_root_constexpr(m);
} // namespace internal
} // namespace atcoder
#include <cassert>
#include <numeric>
#include <type_traits>
namespace atcoder {
namespace internal {
#ifndef _MSC_VER
template <class T>
using is_signed_int128 =
typename std::conditional<std::is_same<T, __int128_t>::value ||
std::is_same<T, __int128>::value,
std::true_type,
std::false_type>::type;
template <class T>
using is_unsigned_int128 =
typename std::conditional<std::is_same<T, __uint128_t>::value ||
std::is_same<T, unsigned __int128>::value,
std::true_type,
std::false_type>::type;
template <class T>
using make_unsigned_int128 =
typename std::conditional<std::is_same<T, __int128_t>::value,
__uint128_t,
unsigned __int128>;
template <class T>
using is_integral = typename std::conditional<std::is_integral<T>::value ||
is_signed_int128<T>::value ||
is_unsigned_int128<T>::value,
std::true_type,
std::false_type>::type;
template <class T>
using is_signed_int = typename std::conditional<(is_integral<T>::value &&
std::is_signed<T>::value) ||
is_signed_int128<T>::value,
std::true_type,
std::false_type>::type;
template <class T>
using is_unsigned_int =
typename std::conditional<(is_integral<T>::value &&
std::is_unsigned<T>::value) ||
is_unsigned_int128<T>::value,
std::true_type,
std::false_type>::type;
template <class T>
using to_unsigned = typename std::conditional<
is_signed_int128<T>::value,
make_unsigned_int128<T>,
typename std::conditional<std::is_signed<T>::value,
std::make_unsigned<T>,
std::common_type<T>>::type>::type;
#else
template <class T> using is_integral = typename std::is_integral<T>;
template <class T>
using is_signed_int =
typename std::conditional<is_integral<T>::value && std::is_signed<T>::value,
std::true_type,
std::false_type>::type;
template <class T>
using is_unsigned_int =
typename std::conditional<is_integral<T>::value &&
std::is_unsigned<T>::value,
std::true_type,
std::false_type>::type;
template <class T>
using to_unsigned = typename std::conditional<is_signed_int<T>::value,
std::make_unsigned<T>,
std::common_type<T>>::type;
#endif
template <class T>
using is_signed_int_t = std::enable_if_t<is_signed_int<T>::value>;
template <class T>
using is_unsigned_int_t = std::enable_if_t<is_unsigned_int<T>::value>;
template <class T> using to_unsigned_t = typename to_unsigned<T>::type;
} // namespace internal
} // namespace atcoder
#include <cassert>
#include <numeric>
#include <type_traits>
#ifdef _MSC_VER
#include <intrin.h>
#endif
namespace atcoder {
namespace internal {
struct modint_base {};
struct static_modint_base : modint_base {};
template <class T> using is_modint = std::is_base_of<modint_base, T>;
template <class T> using is_modint_t = std::enable_if_t<is_modint<T>::value>;
} // namespace internal
template <int m, std::enable_if_t<(1 <= m)>* = nullptr>
struct static_modint : internal::static_modint_base {
using mint = static_modint;
public:
static constexpr int mod() { return m; }
static mint raw(int v) {
mint x;
x._v = v;
return x;
}
static_modint() : _v(0) {}
template <class T, internal::is_signed_int_t<T>* = nullptr>
static_modint(T v) {
long long x = (long long)(v % (long long)(umod()));
if (x < 0) x += umod();
_v = (unsigned int)(x);
}
template <class T, internal::is_unsigned_int_t<T>* = nullptr>
static_modint(T v) {
_v = (unsigned int)(v % umod());
}
static_modint(bool v) { _v = ((unsigned int)(v) % umod()); }
unsigned int val() const { return _v; }
mint& operator++() {
_v++;
if (_v == umod()) _v = 0;
return *this;
}
mint& operator--() {
if (_v == 0) _v = umod();
_v--;
return *this;
}
mint operator++(int) {
mint result = *this;
++*this;
return result;
}
mint operator--(int) {
mint result = *this;
--*this;
return result;
}
mint& operator+=(const mint& rhs) {
_v += rhs._v;
if (_v >= umod()) _v -= umod();
return *this;
}
mint& operator-=(const mint& rhs) {
_v -= rhs._v;
if (_v >= umod()) _v += umod();
return *this;
}
mint& operator*=(const mint& rhs) {
unsigned long long z = _v;
z *= rhs._v;
_v = (unsigned int)(z % umod());
return *this;
}
mint& operator/=(const mint& rhs) { return *this = *this * rhs.inv(); }
mint operator+() const { return *this; }
mint operator-() const { return mint() - *this; }
mint pow(long long n) const {
assert(0 <= n);
mint x = *this, r = 1;
while (n) {
if (n & 1) r *= x;
x *= x;
n >>= 1;
}
return r;
}
mint inv() const {
if (prime) {
assert(_v);
return pow(umod() - 2);
} else {
auto eg = internal::inv_gcd(_v, m);
assert(eg.first == 1);
return eg.second;
}
}
friend mint operator+(const mint& lhs, const mint& rhs) {
return mint(lhs) += rhs;
}
friend mint operator-(const mint& lhs, const mint& rhs) {
return mint(lhs) -= rhs;
}
friend mint operator*(const mint& lhs, const mint& rhs) {
return mint(lhs) *= rhs;
}
friend mint operator/(const mint& lhs, const mint& rhs) {
return mint(lhs) /= rhs;
}
friend bool operator==(const mint& lhs, const mint& rhs) {
return lhs._v == rhs._v;
}
friend bool operator!=(const mint& lhs, const mint& rhs) {
return lhs._v != rhs._v;
}
private:
unsigned int _v;
static constexpr unsigned int umod() { return m; }
static constexpr bool prime = internal::is_prime<m>;
};
template <int id> struct dynamic_modint : internal::modint_base {
using mint = dynamic_modint;
public:
static int mod() { return (int)(bt.umod()); }
static void set_mod(int m) {
assert(1 <= m);
bt = internal::barrett(m);
}
static mint raw(int v) {
mint x;
x._v = v;
return x;
}
dynamic_modint() : _v(0) {}
template <class T, internal::is_signed_int_t<T>* = nullptr>
dynamic_modint(T v) {
long long x = (long long)(v % (long long)(mod()));
if (x < 0) x += mod();
_v = (unsigned int)(x);
}
template <class T, internal::is_unsigned_int_t<T>* = nullptr>
dynamic_modint(T v) {
_v = (unsigned int)(v % mod());
}
dynamic_modint(bool v) { _v = ((unsigned int)(v) % mod()); }
unsigned int val() const { return _v; }
mint& operator++() {
_v++;
if (_v == umod()) _v = 0;
return *this;
}
mint& operator--() {
if (_v == 0) _v = umod();
_v--;
return *this;
}
mint operator++(int) {
mint result = *this;
++*this;
return result;
}
mint operator--(int) {
mint result = *this;
--*this;
return result;
}
mint& operator+=(const mint& rhs) {
_v += rhs._v;
if (_v >= umod()) _v -= umod();
return *this;
}
mint& operator-=(const mint& rhs) {
_v += mod() - rhs._v;
if (_v >= umod()) _v -= umod();
return *this;
}
mint& operator*=(const mint& rhs) {
_v = bt.mul(_v, rhs._v);
return *this;
}
mint& operator/=(const mint& rhs) { return *this = *this * rhs.inv(); }
mint operator+() const { return *this; }
mint operator-() const { return mint() - *this; }
mint pow(long long n) const {
assert(0 <= n);
mint x = *this, r = 1;
while (n) {
if (n & 1) r *= x;
x *= x;
n >>= 1;
}
return r;
}
mint inv() const {
auto eg = internal::inv_gcd(_v, mod());
assert(eg.first == 1);
return eg.second;
}
friend mint operator+(const mint& lhs, const mint& rhs) {
return mint(lhs) += rhs;
}
friend mint operator-(const mint& lhs, const mint& rhs) {
return mint(lhs) -= rhs;
}
friend mint operator*(const mint& lhs, const mint& rhs) {
return mint(lhs) *= rhs;
}
friend mint operator/(const mint& lhs, const mint& rhs) {
return mint(lhs) /= rhs;
}
friend bool operator==(const mint& lhs, const mint& rhs) {
return lhs._v == rhs._v;
}
friend bool operator!=(const mint& lhs, const mint& rhs) {
return lhs._v != rhs._v;
}
private:
unsigned int _v;
static internal::barrett bt;
static unsigned int umod() { return bt.umod(); }
};
template <int id> internal::barrett dynamic_modint<id>::bt = 998244353;
using modint998244353 = static_modint<998244353>;
using modint1000000007 = static_modint<1000000007>;
using modint = dynamic_modint<-1>;
namespace internal {
template <class T>
using is_static_modint = std::is_base_of<internal::static_modint_base, T>;
template <class T>
using is_static_modint_t = std::enable_if_t<is_static_modint<T>::value>;
template <class> struct is_dynamic_modint : public std::false_type {};
template <int id>
struct is_dynamic_modint<dynamic_modint<id>> : public std::true_type {};
template <class T>
using is_dynamic_modint_t = std::enable_if_t<is_dynamic_modint<T>::value>;
} // namespace internal
} // namespace atcoder
#include <algorithm>
#include <array>
#ifdef _MSC_VER
#include <intrin.h>
#endif
namespace atcoder {
namespace internal {
// @param n `0 <= n`
// @return minimum non-negative `x` s.t. `n <= 2**x`
int ceil_pow2(int n) {
int x = 0;
while ((1U << x) < (unsigned int)(n)) x++;
return x;
}
// @param n `1 <= n`
// @return minimum non-negative `x` s.t. `(n & (1 << x)) != 0`
int bsf(unsigned int n) {
#ifdef _MSC_VER
unsigned long index;
_BitScanForward(&index, n);
return index;
#else
return __builtin_ctz(n);
#endif
}
} // namespace internal
} // namespace atcoder
#include <cassert>
#include <type_traits>
#include <vector>
namespace atcoder {
namespace internal {
template <class mint, internal::is_static_modint_t<mint>* = nullptr>
void butterfly(std::vector<mint>& a) {
static constexpr int g = internal::primitive_root<mint::mod()>;
int n = int(a.size());
int h = internal::ceil_pow2(n);
static bool first = true;
static mint sum_e[30]; // sum_e[i] = ies[0] * ... * ies[i - 1] * es[i]
if (first) {
first = false;
mint es[30], ies[30]; // es[i]^(2^(2+i)) == 1
int cnt2 = bsf(mint::mod() - 1);
mint e = mint(g).pow((mint::mod() - 1) >> cnt2), ie = e.inv();
for (int i = cnt2; i >= 2; i--) {
// e^(2^i) == 1
es[i - 2] = e;
ies[i - 2] = ie;
e *= e;
ie *= ie;
}
mint now = 1;
for (int i = 0; i <= cnt2 - 2; i++) {
sum_e[i] = es[i] * now;
now *= ies[i];
}
}
for (int ph = 1; ph <= h; ph++) {
int w = 1 << (ph - 1), p = 1 << (h - ph);
mint now = 1;
for (int s = 0; s < w; s++) {
int offset = s << (h - ph + 1);
for (int i = 0; i < p; i++) {
auto l = a[i + offset];
auto r = a[i + offset + p] * now;
a[i + offset] = l + r;
a[i + offset + p] = l - r;
}
now *= sum_e[bsf(~(unsigned int)(s))];
}
}
}
template <class mint, internal::is_static_modint_t<mint>* = nullptr>
void butterfly_inv(std::vector<mint>& a) {
static constexpr int g = internal::primitive_root<mint::mod()>;
int n = int(a.size());
int h = internal::ceil_pow2(n);
static bool first = true;
static mint sum_ie[30]; // sum_ie[i] = es[0] * ... * es[i - 1] * ies[i]
if (first) {
first = false;
mint es[30], ies[30]; // es[i]^(2^(2+i)) == 1
int cnt2 = bsf(mint::mod() - 1);
mint e = mint(g).pow((mint::mod() - 1) >> cnt2), ie = e.inv();
for (int i = cnt2; i >= 2; i--) {
// e^(2^i) == 1
es[i - 2] = e;
ies[i - 2] = ie;
e *= e;
ie *= ie;
}
mint now = 1;
for (int i = 0; i <= cnt2 - 2; i++) {
sum_ie[i] = ies[i] * now;
now *= es[i];
}
}
for (int ph = h; ph >= 1; ph--) {
int w = 1 << (ph - 1), p = 1 << (h - ph);
mint inow = 1;
for (int s = 0; s < w; s++) {
int offset = s << (h - ph + 1);
for (int i = 0; i < p; i++) {
auto l = a[i + offset];
auto r = a[i + offset + p];
a[i + offset] = l + r;
a[i + offset + p] =
(unsigned long long)(mint::mod() + l.val() - r.val()) *
inow.val();
}
inow *= sum_ie[bsf(~(unsigned int)(s))];
}
}
}
} // namespace internal
template <class mint, internal::is_static_modint_t<mint>* = nullptr>
std::vector<mint> convolution(std::vector<mint> a, std::vector<mint> b) {
int n = int(a.size()), m = int(b.size());
if (!n || !m) return {};
if (std::min(n, m) <= 60) {
if (n < m) {
std::swap(n, m);
std::swap(a, b);
}
std::vector<mint> ans(n + m - 1);
for (int i = 0; i < n; i++) {
for (int j = 0; j < m; j++) {
ans[i + j] += a[i] * b[j];
}
}
return ans;
}
int z = 1 << internal::ceil_pow2(n + m - 1);
a.resize(z);
internal::butterfly(a);
b.resize(z);
internal::butterfly(b);
for (int i = 0; i < z; i++) {
a[i] *= b[i];
}
internal::butterfly_inv(a);
a.resize(n + m - 1);
mint iz = mint(z).inv();
for (int i = 0; i < n + m - 1; i++) a[i] *= iz;
return a;
}
template <unsigned int mod = 998244353,
class T,
std::enable_if_t<internal::is_integral<T>::value>* = nullptr>
std::vector<T> convolution(const std::vector<T>& a, const std::vector<T>& b) {
int n = int(a.size()), m = int(b.size());
if (!n || !m) return {};
using mint = static_modint<mod>;
std::vector<mint> a2(n), b2(m);
for (int i = 0; i < n; i++) {
a2[i] = mint(a[i]);
}
for (int i = 0; i < m; i++) {
b2[i] = mint(b[i]);
}
auto c2 = convolution(move(a2), move(b2));
std::vector<T> c(n + m - 1);
for (int i = 0; i < n + m - 1; i++) {
c[i] = c2[i].val();
}
return c;
}
std::vector<long long> convolution_ll(const std::vector<long long>& a,
const std::vector<long long>& b) {
int n = int(a.size()), m = int(b.size());
if (!n || !m) return {};
static constexpr unsigned long long MOD1 = 754974721; // 2^24
static constexpr unsigned long long MOD2 = 167772161; // 2^25
static constexpr unsigned long long MOD3 = 469762049; // 2^26
static constexpr unsigned long long M2M3 = MOD2 * MOD3;
static constexpr unsigned long long M1M3 = MOD1 * MOD3;
static constexpr unsigned long long M1M2 = MOD1 * MOD2;
static constexpr unsigned long long M1M2M3 = MOD1 * MOD2 * MOD3;
static constexpr unsigned long long i1 =
internal::inv_gcd(MOD2 * MOD3, MOD1).second;
static constexpr unsigned long long i2 =
internal::inv_gcd(MOD1 * MOD3, MOD2).second;
static constexpr unsigned long long i3 =
internal::inv_gcd(MOD1 * MOD2, MOD3).second;
auto c1 = convolution<MOD1>(a, b);
auto c2 = convolution<MOD2>(a, b);
auto c3 = convolution<MOD3>(a, b);
std::vector<long long> c(n + m - 1);
for (int i = 0; i < n + m - 1; i++) {
unsigned long long x = 0;
x += (c1[i] * i1) % MOD1 * M2M3;
x += (c2[i] * i2) % MOD2 * M1M3;
x += (c3[i] * i3) % MOD3 * M1M2;
// B = 2^63, -B <= x, r(real value) < B
// (x, x - M, x - 2M, or x - 3M) = r (mod 2B)
// r = c1[i] (mod MOD1)
// focus on MOD1
// r = x, x - M', x - 2M', x - 3M' (M' = M % 2^64) (mod 2B)
// r = x,
// x - M' + (0 or 2B),
// x - 2M' + (0, 2B or 4B),
// x - 3M' + (0, 2B, 4B or 6B) (without mod!)
// (r - x) = 0, (0)
// - M' + (0 or 2B), (1)
// -2M' + (0 or 2B or 4B), (2)
// -3M' + (0 or 2B or 4B or 6B) (3) (mod MOD1)
// we checked that
// ((1) mod MOD1) mod 5 = 2
// ((2) mod MOD1) mod 5 = 3
// ((3) mod MOD1) mod 5 = 4
long long diff =
c1[i] - internal::safe_mod((long long)(x), (long long)(MOD1));
if (diff < 0) diff += MOD1;
static constexpr unsigned long long offset[5] = {
0, 0, M1M2M3, 2 * M1M2M3, 3 * M1M2M3};
x -= offset[diff % 5];
c[i] = x;
}
return c;
}
} // namespace atcoder
using namespace atcoder;
using mint=modint998244353;
//using mint=modint1000000007;
template<int mod>
struct nCr{
vector<static_modint<mod>> fac,inv,ifac;
void calc(int n){
fac.resize(n+1),inv.resize(n+1),ifac.resize(n+1);
fac[0]=inv[1]=ifac[0]=1;
for(int i=1; i<=n; ++i) fac[i]=fac[i-1]*static_modint<mod>::raw(i);
for(int i=2; i<=n; ++i) inv[i]=inv[mod%i]*static_modint<mod>::raw(mod-mod/i);
for(int i=1; i<=n; ++i) ifac[i]=ifac[i-1]*inv[i];
}
static_modint<mod> C(int n, int m){
if(m<0||m>n) return 0;
return fac[n]*ifac[m]*ifac[n-m];
}
};
nCr<998244353> de;
void ahcorz(){
// (a_i - j)! / (a_i - i)! (a_i = a_j)
// (a_i - j - 1)! / (a_i - i)! * (i - j) (a_i != a_j)
int n; cin >> n;
de.calc(n+1);
vi a; a.pb(0);
rep(n){
int x; cin >> x;
a.pb(max(a.back(),x));
}
vc<mint> dp(n+1),aux(n+1);
auto solve=[&](auto &self, int l, int r) -> void{
if(l+1==r){
if(l==0) dp[l]=-1;
else dp[l]+=aux[l]*l;
dp[l]*=de.ifac[a[l]-l];
return;
}
int mid=l+r>>1;
self(self,l,mid);
if(a[mid]==a[mid-1]){
int x=mid-1,y=mid;
while(x-1>=l&&a[x-1]==a[mid]) x--;
while(y+1<r&&a[y+1]==a[mid]) y++;
mint tot,tot1,tot2;
rep(i,x,mid) tot+=dp[i]*de.fac[a[mid]-i],tot1+=dp[i]*de.fac[a[mid]-i-1],tot2+=dp[i]*de.fac[a[mid]-i-1]*i;
rep(i,mid,y+1) dp[i]-=tot-(tot1*i-tot2);
}
vc<mint> vec;
rep(i,a[mid]-mid,a[r-1]-(l+1)+1) vec.pb(de.fac[i]);
vc<mint> vec1,vec2;
rep(i,l,mid) vec1.pb(dp[i]),vec2.pb(dp[i]*i);
auto res1=convolution(vec,vec1),res2=convolution(vec,vec2);
int base=a[mid]-mid+l+1;
rep(i,mid,r) if(a[i]-base>=0&&a[i]-base<sz(res1)) aux[i]-=res1[a[i]-base],dp[i]+=res2[a[i]-base];
self(self,mid,r);
};
solve(solve,0,n+1);
print(dp[n].val());
}
signed main(){
ios_base::sync_with_stdio(0),cin.tie(0);
cout << fixed << setprecision(20);
int t=1;
cin >> t;
while(t--) ahcorz();
}
|
1936
|
F
|
Grand Finale: Circles
|
You are given $n$ circles on the plane. The $i$-th of these circles is given by a tuple of integers $(x_i, y_i, r_i)$, where $(x_i, y_i)$ are the coordinates of its center, and $r_i$ is the radius of the circle.
Please find a circle $C$ which meets the following conditions:
- $C$ is contained inside all $n$ circles given in the input.
- Among all circles that meet the first condition, $C$ has the maximum radius.
Let the largest suitable circle have radius $a$.
Your output $C$, described as $(x,y,r)$, will be accepted if it meets the following conditions:
- For each $i$, $\sqrt{(x_i-x)^2+(y_i-y)^2}+ r \le r_i+\max(a,1)\cdot 10^{-7}$.
- The absolute or relative error of $r$ does not exceed $10^{-7}$. Formally, your answer is accepted if and only if $\frac{\left|r - a\right|}{\max(1, a)} \le 10^{-7}$.
|
For a given center coordinate $(x,y)$, we can model the objective function $r=f(x,y)$ to maximize as follows. $f(x,y)=\min_{1 \le i \le n} {\left({r_i-\sqrt{(x_i-x)^2+(y_i-y)^2}}\right)}$ Formally, this can be modeled as follows: $\begin{split} \max & \ r \\ \text{s.t.} & \ \sqrt{(x_i-x)^2+(y_i-y)^2} \le r_i-r \ \ \forall \ 1 \le i \le n \end{split}$ This is a second-order cone program. Generally, this is not as simple to solve as linear or quadratic programs. There are multiple solvers out there, but all of them are either commercial or simply unfit for use on Codeforces. And even if we use general approaches to second-order cone programming, their time complexity is at least $\mathcal{O}(n^{3.5})$, so they cannot work. Almost all general approaches to convex programming also do not help much for this task, for the following reasons.
- Gradient Descent cannot work in this task: the objective function is not smooth.
- Coordinate Descent also cannot work for the same reason, and it is not hard to find a countercase.
- Subgradient methods can solve this task, but they take $\mathcal{O}(1/\epsilon^2)$ iterations to converge. This is a very heavy dependency on the precision, so they take too much time.
- We do not expect anyone to write Adaptive Coordinate Descent during contest time, but we still tested the PRAXIS algorithm and checked that it does not pass.
- Using nested ternary searches does not help very much. The time complexity will be $\mathcal{O}(n \log^2 (1/\epsilon))$, but the constants are large due to multiple function calls and floating point operations.
- Heuristics such as the Nelder-Mead method are not proven to converge; in many cases we found them not converging successfully.
In the editorial, we will explain a solution of expected $\mathcal{O}(n)$ time complexity, with constants depending on the number of dimensions (which is $2$ in our case). First, we observe that any answer can be described as a "basis" of at most $3$ selected circles. For $k=0,1,2,3$, the basis is as follows. Case 1: $k=0$. This is the case of no circles, which can be regarded as an "identity element" of bases. Some very huge circle that will enclose any given circle, for example the circle with radius $10^{18}$ and center $(0,0)$, can represent this case. Case 2: $k=1$. This is the case of a single circle. The circle that represents this case is the one selected circle. Case 3: $k=2$. This is the case of two circles, for which the basis is the largest circle contained in the intersection of the two circles. We can find it by choosing the center as the midpoint of the two points $A$ and $B$, where $A$ and $B$ are the points where the line through the two circle centers meets the circles, each on the side facing the other circle. Case 4: $k=3$. This is the case of three circles, and the most tricky of all four cases. It can be found as one solution to the following system of quadratic equations: $(x_A-x)^2+(y_A-y)^2 = (r_A-r)^2 \\ (x_B-x)^2+(y_B-y)^2 = (r_B-r)^2 \\ (x_C-x)^2+(y_C-y)^2 = (r_C-r)^2$ This can be solved algebraically.
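For the three-circle case, subtracting the equations pairwise cancels the quadratic terms $x^2+y^2$ and $r^2$, leaving a linear system in $(x,y)$ with $r$ as a parameter; substituting back into one original equation yields a quadratic in $r$. A minimal Python sketch of this algebra (the name `inner_apollonius` and the tolerance are ours, not from the solution code; the root filter follows the editorial's rule of taking the solution inside the intersection, and the larger one if there are two):

```python
import math

def inner_apollonius(c1, c2, c3):
    # Solve (x_i-x)^2 + (y_i-y)^2 = (r_i-r)^2 for i = 1..3.
    (x1, y1, r1), (x2, y2, r2), (x3, y3, r3) = c1, c2, c3
    # eq.1 - eq.k cancels the quadratic terms:
    #   2(x1-xk) x + 2(y1-yk) y = d_k + 2(r1-rk) r
    a11, a12, e1 = 2*(x1-x2), 2*(y1-y2), 2*(r1-r2)
    a21, a22, e2 = 2*(x1-x3), 2*(y1-y3), 2*(r1-r3)
    k1 = x1*x1 + y1*y1 - r1*r1
    d1 = k1 - (x2*x2 + y2*y2 - r2*r2)
    d2 = k1 - (x3*x3 + y3*y3 - r3*r3)
    det = a11*a22 - a12*a21              # assumes non-collinear centers
    # Cramer's rule gives x = xa + xb*r and y = ya + yb*r
    xa, xb = (d1*a22 - d2*a12) / det, (e1*a22 - e2*a12) / det
    ya, yb = (a11*d2 - a21*d1) / det, (a11*e2 - a21*e1) / det
    # Substitute into eq.1: A r^2 + B r + C = 0
    px, py = x1 - xa, y1 - ya
    A = xb*xb + yb*yb - 1
    B = -2*(px*xb + py*yb - r1)
    C = px*px + py*py - r1*r1
    if abs(A) < 1e-18:
        roots = [-C / B]
    else:
        disc = math.sqrt(max(B*B - 4*A*C, 0.0))
        roots = [(-B + s*disc) / (2*A) for s in (1.0, -1.0)]
    best = None
    for r in roots:
        x, y = xa + xb*r, ya + yb*r
        # keep only solutions lying inside all three circles; prefer larger r
        if all(math.hypot(cx-x, cy-y) + r <= cr + 1e-6
               for cx, cy, cr in (c1, c2, c3)):
            if best is None or r > best[2]:
                best = (x, y, r)
    return best
```

For example, the circles $(2,0,3)$, $(0,3,4)$, $(-4,0,5)$ are all internally tangent to the unit circle at the origin, and the sketch recovers approximately $(0,0,1)$.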
If you are used to solving geometry problems in Olympiads, you may know the ''Problem of Apollonius''. This is one special case of that problem, which is not mentioned very often because usually the three circles do not intersect each other (thus most of the time one solution for $r$ must be negative). Alternatively, one can observe that the locus of centers of circles tangent to two given circle boundaries (as in Case 3) is a conic section, precisely one branch of a hyperbola. By finding the intersections of two such hyperbolas, one can find the center of this circle. Thus, this circle can be found in $\mathcal{O}(1)$ time, with constants based on the number of dimensions. By the system of quadratic equations stated above, we see that there are at most $2$ solutions. It is sufficient to take the one that happens to be inside the intersection. (If there are two such solutions, take the larger one. For the current version of the data this check is not necessary, but it does not hurt to try.) Thus, for any $n>3$, the basis can be represented with some $k \le 3$ circles. Do note that this already gives an $\mathcal{O}(n^3)$ solution: enumerate all possible bases, and take the smallest one out of them (since that one will be the one that satisfies the conditions). Now, for some basis circle $(x,y,r)$, we can check in $\mathcal{O}(1)$ whether a given circle $(x_i,y_i,r_i)$ is "violated" by this basis (i.e. the basis is not inside this circle) by comparing the distance. We take an incremental approach: we add circles to the basis one by one, up to $3$ circles, and we change the basis only if the next added circle is violated by the current basis. Of course, this still takes $\mathcal{O}(n^3)$ time in the worst case, and sorting the circles by any argument will not help anyway. However, adding the circles in random order makes it take only expected $\mathcal{O}(n)$ time. This can be shown by analyzing the probability that the basis changes at each step.
The precise analysis of the time complexity is left as an exercise for the reader. If you have seen the minimum enclosing circle problem, Welzl's algorithm for that problem will seem very similar to this. In fact, the suggested algorithm itself is not very different from Welzl's algorithm. If you want, though, you may take an iterative approach instead of modifying Welzl's algorithm; using three nested loops will do. There are only two issues left. The first is numerical instability: sometimes the algorithm will return a NaN value. It is not that hard to solve; we can simply shuffle again and rerun the algorithm. The given time limit will be sufficient for doing this unless the constant is too big. The second is more tricky, and it is about hacks. If you use a fixed seed for randomization (or time-based seeds, to some extent), someone may hack you by adding an adversarial test case which makes the time complexity explode back to $\mathcal{O}(n^3)$. This is very bad. How do we solve this? To prevent this, we make the algorithm halt when the number of iterations exceeds a certain limit. Formally, let us choose a constant $c$, and make the algorithm halt and rerun when the number of iterations exceeds $c \cdot n$. Then, if the probability of the algorithm successfully finishing within this number of iterations is $p$, the expected number of reruns needed is $\mathcal{O}(1/p)$, and hacking becomes much harder. The algorithm will successfully find the solution in $\mathcal{O}((c/p) \cdot n)$ time. Of course, the probability $p$ depends on $c$, and you must tune the value as needed. Empirically $c=30$ runs very well, almost always taking no longer than $0.4$ seconds under proper optimizations, and $1.5$ seconds without.
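The cap-and-rerun guard can be sketched as a generic wrapper (hypothetical names: `solve_once` stands in for the Welzl-style recursion and must signal failure, e.g. by returning `None`, when its $c \cdot n$ iteration budget is exhausted or a NaN appears, as the accepted C++ code below does):

```python
import random

def run_with_cap(solve_once, items, c=30):
    """Shuffle, run with an iteration budget of c * n, retry on failure.
    If one run succeeds with probability p, expected work is O((c/p) * n)."""
    while True:
        random.shuffle(items)
        result = solve_once(items, budget=c * len(items))
        if result is not None:   # None signals budget exhausted / NaN
            return result
```

The key point is that the shuffle happens again on every retry, so an adversarial input ordering cannot pin the algorithm to its worst case.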
|
[
"binary search",
"geometry"
] | 3,300
|
#include<bits/stdc++.h>
using namespace std;
using ll=long long;
using lf=long double;
using pt=pair<lf,lf>;
lf& real(pt& p){return p.first;}
lf& imag(pt& p){return p.second;}
pt midp(pt a,pt b){return pt{(real(a)+real(b))/2,(imag(a)+imag(b))/2};}
pt addi(pt a,pt b){return pt{(real(a)+real(b)),(imag(a)+imag(b))};}
pt subt(pt a,pt b){return pt{(real(a)-real(b)),(imag(a)-imag(b))};}
pt mult(pt a,lf b){return pt{real(a)*b,imag(a)*b};}
lf abs(pt a){return sqrtl(powl(real(a),2)+powl(imag(a),2));}
lf dist(pt a,pt b){return sqrtl(powl(real(a)-real(b),2)+powl(imag(a)-imag(b),2));}
struct circ{pt p;lf r;};
const lf inf=1e18;
circ basis0(){return {pt{0,0},inf};}
circ basis1(circ a){return a;}
circ basis2(circ a,circ b)
{
pt aa=a.p,bb=b.p;
pt ab=subt(bb,aa),ba=subt(aa,bb);
lf aab=abs(ab),aba=abs(ba);
real(ab)/=aab;
imag(ab)/=aab;
real(ba)/=aba;
imag(ba)/=aba;
pt ar=addi(aa,mult(ab,a.r)),br=addi(bb,mult(ba,b.r));
return {midp(ar,br),dist(ar,br)/2.0L};
}
circ basis3(circ a,circ b,circ c)
{
lf x1=real(a.p),y1=imag(a.p),r1=a.r;
lf x2=real(b.p),y2=imag(b.p),r2=b.r;
lf x3=real(c.p),y3=imag(c.p),r3=c.r;
lf a2=x1-x2,a3=x1-x3,b2=y1-y2,b3=y1-y3,c2=r2-r1,c3=r3-r1;
lf d1=x1*x1+y1*y1-r1*r1,d2=d1-x2*x2-y2*y2+r2*r2,d3=d1-x3*x3-y3*y3+r3*r3;
lf ab=a3*b2-a2*b3;
lf xa=(b2*d3-b3*d2)/(ab*2)-x1;
lf xb=(b3*c2-b2*c3)/ab;
lf ya=(a3*d2-a2*d3)/(ab*2)-y1;
lf yb=(a2*c3-a3*c2)/ab;
lf A=xb*xb+yb*yb-1;
lf B=2*(r1+xa*xb+ya*yb);
lf C=xa*xa+ya*ya-r1*r1;
lf r=-(A?(B-sqrtl(B*B-4*A*C))/(2*A):C/B);
return {pt{x1+xa+xb*r,y1+ya+yb*r},r};
}
bool viol(circ p,circ a)
{
return a.r<p.r+dist(a.p,p.p);
}
circ solve(vector<circ>&v)
{
lf _nan=nan("aaa");
mt19937_64 mt(1999999);
//shuffle(begin(v),end(v),mt);
vector<circ>basis;
auto trivial=[&]()
{
if(size(basis)==0)return basis0();
if(size(basis)==1)return basis[0];
if(size(basis)==2)return basis2(basis[0],basis[1]);
return basis3(basis[0],basis[1],basis[2]);
};
int counter=0;
auto rec=[&](auto rec,int n)->circ
{
if(n==0||size(basis)==3)return trivial();
counter++;
if(counter>30*size(v))return {pt{0,0},_nan};
auto c=rec(rec,n-1);
auto p=v[n-1];
if(!viol(c,p)||isnan(c.r))return c;
basis.push_back(p);
c=rec(rec,n-1);
basis.pop_back();
return c;
};
auto c=rec(rec,size(v));
while(isnan(c.r))
{
counter=0;
shuffle(begin(v),end(v),mt);
c=rec(rec,size(v));
}
return c;
}
int main()
{
cin.tie(0)->sync_with_stdio(0);
int n;cin>>n;
vector<circ>v(n);
for(auto&[p,r]:v)
{
ll x,y,rr;cin>>x>>y>>rr;
p={x,y};r=rr;
}
auto ans=solve(v);
cout<<setprecision(16)<<fixed<<real(ans.p)<<" "<<imag(ans.p)<<" "<<ans.r<<"\n";
}
|
1937
|
A
|
Shuffle Party
|
You are given an array $a_1, a_2, \ldots, a_n$. Initially, $a_i=i$ for each $1 \le i \le n$.
The operation $swap(k)$ for an integer $k \ge 2$ is defined as follows:
- Let $d$ be the largest divisor$^\dagger$ of $k$ which is not equal to $k$ itself. Then swap the elements $a_d$ and $a_k$.
Suppose you perform $swap(i)$ for each $i=2,3,\ldots, n$ in this exact order. Find the position of $1$ in the resulting array. In other words, find such $j$ that $a_j = 1$ after performing these operations.
$^\dagger$ An integer $x$ is a divisor of $y$ if there exists an integer $z$ such that $y = x \cdot z$.
|
If you think in terms of $k$, it might be hard to find the solution. It may help to fix $d$ instead and find the $k$ that will be swapped with $d$. Try brute-forcing for, say, $n \le 20$. Do you see a pattern? For $n=1$, the answer is trivially $1$. For $n=2$, indices $1$ and $2$ are swapped, so the answer is $2$. For $n>2$, suppose the element $1$ is at index $a$ and is next moved at step $k$, i.e. $a$ is the largest proper divisor of $k$. Then $a$ must equal $k/2$. We prove this by contradiction: assume $k$ has no proper divisor greater than $a$ and $a < k/2$. Since $a$ is a multiple of $2$ (by the base case and induction), $k$ is also a multiple of $2$, so $k/2$ is a proper divisor of $k$ greater than $a$, contradicting the assumption. By induction, $a$ is always a multiple of $2$, so this argument always applies. The element $1$ moves only when $k = 2a$. Therefore, find the largest value $v$ such that $2^v \le n$; the answer is $2^v$.
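The brute force suggested in the hint can be sketched in Python (an illustrative addition to this editorial, not the reference solution): simulate the swaps directly and check the result against the $2^v$ formula.

```python
def largest_proper_divisor(k):
    # k divided by its smallest prime factor; 1 if k is prime
    d = 2
    while d * d <= k:
        if k % d == 0:
            return k // d
        d += 1
    return 1

def position_of_one(n):
    # directly simulate swap(2), swap(3), ..., swap(n)
    a = list(range(n + 1))  # 1-indexed; a[0] unused
    for k in range(2, n + 1):
        d = largest_proper_divisor(k)
        a[d], a[k] = a[k], a[d]
    return a.index(1)

# the simulated position matches the largest power of two <= n
for n in range(1, 200):
    p = 1
    while p * 2 <= n:
        p *= 2
    assert position_of_one(n) == p
```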
|
[
"implementation",
"math"
] | 800
|
#include<bits/stdc++.h>
using namespace std;
using ll=long long;
int main()
{
cin.tie(0)->sync_with_stdio(0);
ll q;cin>>q;
while(q--)
{
ll n,p=1;cin>>n;
while(p*2<=n)p<<=1;
cout<<p<<"\n";
}
}
|
1937
|
B
|
Binary Path
|
You are given a $2 \times n$ grid filled with zeros and ones. Let the number at the intersection of the $i$-th row and the $j$-th column be $a_{ij}$.
There is a grasshopper at the top-left cell $(1, 1)$ that can only jump one cell right or downwards. It wants to reach the bottom-right cell $(2, n)$. Consider the binary string of length $n+1$ consisting of numbers written in cells of the path without changing their order.
Your goal is to:
- Find the lexicographically smallest$^\dagger$ string you can attain by choosing any available path;
- Find the number of paths that yield this lexicographically smallest string.
$^\dagger$ If two strings $s$ and $t$ have the same length, then $s$ is lexicographically smaller than $t$ if and only if in the first position where $s$ and $t$ differ, the string $s$ has a smaller element than the corresponding element in $t$.
|
Let's call the path $(1,1) \rightarrow \ldots \rightarrow (1,i)\rightarrow(2,i) \rightarrow \ldots \rightarrow (2,n)$ the $i$-th path. What's the difference between the $i$-th path and the $(i+1)$-th path? What if $a_{2i}=1$ and $a_{1(i+1)}=0$? What if $a_{2i}=0$ and $a_{1(i+1)}=1$? Let the string achieved by moving down on the $i$-th column be $S_i$. Then, for any $1 \le k < n$, one can observe that $S_k$ and $S_{k+1}$ differ in at most one index, namely index $k+1$. Thus, comparing $S_k$ and $S_{k+1}$ lexicographically can be done in $O(1)$ by comparing this one index. After finding the lexicographically smallest string, counting the occurrences of this string can be done straightforwardly in $O(n)$. The implementation can differ between participants, but the following method is clean. Let the initial coordinate be $(1,1)$, and maintain a counter which is initially set to $1$. Then repeat the following until we reach $(2,n)$: if we are on row $2$, move to the right; if we are on column $n$, move downwards. Otherwise, let the character on the right be $a$ and the one below be $b$. If $a>b$, move downwards. If $a<b$, move to the right and reset the counter to $1$. If $a=b$, move to the right and increment the counter.
In the end, the string on the path we passed through will be lexicographically smallest, and the integer on the counter will be the number of occurrences.
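The walk just described can be sketched in Python as follows (a re-statement of the editorial's method for illustration; the reference implementation takes a slightly different route via `max_down`/`min_down`). Rows are 0-indexed strings here.

```python
def smallest_path(n, grid):
    # grid[0] and grid[1] are the two rows, strings of '0'/'1'
    r, c = 0, 0
    path = [grid[0][0]]
    count = 1                     # number of paths achieving the best string
    while (r, c) != (1, n - 1):
        if r == 1:
            c += 1                # bottom row: only right moves remain
        elif c == n - 1:
            r += 1                # last column: must go down
        else:
            right, down = grid[0][c + 1], grid[1][c]
            if down < right:
                r += 1            # going down is strictly better
            elif right < down:
                c += 1            # going right is strictly better
                count = 1
            else:
                c += 1            # tie: both moves still optimal so far
                count += 1
        path.append(grid[r][c])
    return "".join(path), count
```

For example, `smallest_path(2, ["00", "00"])` finds the string `"000"`, achieved by both possible paths.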
|
[
"dp",
"greedy",
"implementation"
] | 1,300
|
#include <map>
#include <set>
#include <cmath>
#include <ctime>
#include <queue>
#include <stack>
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <cstring>
#include <algorithm>
#include <iostream>
using namespace std;
typedef double db;
typedef long long ll;
typedef unsigned long long ull;
const int N=1000010;
const int LOGN=28;
const ll TMD=0;
const ll INF=2147483647;
int T,n;
char a[3][N];
int main()
{
scanf("%d",&T);
while(T--)
{
scanf("%d",&n);
for(int i=1;i<=2;i++)
{
scanf("\n");
for(int j=1;j<=n;j++)
scanf("%c",&a[i][j]);
}
int max_down=n,min_down=1;
for(int i=n;i>=2;i--)
if(a[1][i]=='1'&&a[2][i-1]=='0') max_down=i-1;
for(int i=1;i<max_down;i++)
if(a[2][i]=='1'&&a[1][i+1]=='0') min_down=i+1;
for(int i=1;i<=max_down;i++) printf("%c",a[1][i]);
for(int i=max_down;i<=n;i++) printf("%c",a[2][i]);
printf("\n");
printf("%d\n",max_down-min_down+1);
}
return 0;
}
|
1941
|
A
|
Rudolf and the Ticket
|
Rudolf is going to visit Bernard, and he decided to take the metro to get to him. The ticket can be purchased at a machine that accepts exactly two coins, the sum of which does not exceed $k$.
Rudolf has two pockets with coins. In the left pocket, there are $n$ coins with denominations $b_1, b_2, \dots, b_n$. In the right pocket, there are $m$ coins with denominations $c_1, c_2, \dots, c_m$. He wants to choose exactly one coin from the left pocket and exactly one coin from the right pocket (two coins in total).
Help Rudolf determine how many ways there are to select indices $f$ and $s$ such that $b_f + c_s \le k$.
|
For each test case, we read all elements $b_i$ of the first array. Then we iterate through the elements $c_j$ of the second array and, in a nested loop, compute each sum $b_i + c_j$. If this sum is less than or equal to $k$, we add $1$ to the answer.
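The brute force is $O(n \cdot m)$ per test case, which fits the constraints. As an illustrative aside (not part of the original editorial), sorting one array and binary-searching for each element of the other gives $O((n+m)\log n)$:

```python
from bisect import bisect_right

def count_pairs(b, c, k):
    # number of pairs (f, s) with b[f] + c[s] <= k
    b_sorted = sorted(b)
    # for each c_j, every b_i <= k - c_j works
    return sum(bisect_right(b_sorted, k - cj) for cj in c)
```

For example, `count_pairs([1, 2], [3, 4], 5)` counts the pairs $(1,3)$, $(1,4)$, $(2,3)$ and returns `3`.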
|
[
"brute force",
"math"
] | 800
|
#include <iostream>
#include <vector>
using namespace std;
int main() {
int t;
cin >> t;
while (t--) {
int n, m, k;
cin >> n >> m >> k;
int ans = 0;
vector<int> v1(n);
for (int i = 0; i < n; i++) {
cin >> v1[i];
}
int o;
for (int j = 0; j < m; j++) {
cin >> o;
for (int i = 0; i < n; i++) {
if ((o + v1[i]) <= k) ans++;
}
}
cout << ans << "\n";
}
return 0;
}
|
1941
|
B
|
Rudolf and 121
|
Rudolf has an array $a$ of $n$ integers, the elements are numbered from $1$ to $n$.
In one operation, he can choose an index $i$ ($2 \le i \le n - 1$) and assign:
- $a_{i - 1} = a_{i - 1} - 1$
- $a_i = a_i - 2$
- $a_{i + 1} = a_{i + 1} - 1$
Rudolf can apply this operation any number of times. Any index $i$ can be used zero or more times.
Can he make all the elements of the array equal to zero using this operation?
|
Let's consider the minimum $i$ such that $a_i > 0$. Making this element zero is only possible by choosing the $(i + 1)$-th element for the operation (applying the operation to a more leftward element is either impossible or will make some elements less than zero). We will apply operations in this way until we reach the end of the array. If there are non-zero elements left after applying the operations, the answer is "NO".
|
[
"brute force",
"dp",
"greedy",
"math"
] | 1,000
|
def solve():
n = int(input())
a = [int(x) for x in input().split()]
for i in range(n - 2):
if a[i] < 0:
print('NO')
return
op = a[i]
a[i] -= op
a[i + 1] -= 2 * op
a[i + 2] -= op
if a[-1] != 0 or a[-2] != 0:
print('NO')
else:
print('YES')
for _ in range(int(input())):
solve()
|
1941
|
C
|
Rudolf and the Ugly String
|
Rudolf has a string $s$ of length $n$. Rudolf considers the string $s$ to be ugly if it contains the substring$^\dagger$ "pie" or the substring "map", otherwise the string $s$ will be considered beautiful.
For example, "ppiee", "mmap", "dfpiefghmap" are ugly strings, while "mathp", "ppiiee" are beautiful strings.
Rudolf wants to shorten the string $s$ by removing some characters to make it beautiful.
The main character doesn't like to strain, so he asks you to make the string beautiful by removing the minimum number of characters. He can remove characters from \textbf{any} positions in the string (not just from the beginning or end of the string).
$^\dagger$ String $a$ is a substring of $b$ if there exists a \textbf{consecutive} segment of characters in string $b$ equal to $a$.
|
To solve this problem, you need to find all occurrences of the substrings "pie", "map", and "mapie" in the string $s$ and remove the middle character of each of them, handling "mapie" as a single occurrence (one removal destroys both the "map" and the "pie" it contains). This way, you will remove the minimum number of characters needed to ensure that the string contains neither "pie" nor "map".
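A minimal Python sketch of this greedy (an illustrative reimplementation, not the reference solution): scan left to right, check "mapie" before the shorter patterns, and count one removal per match.

```python
def min_removals(s):
    removed = 0
    i = 0
    while i < len(s):
        if s.startswith("mapie", i):
            removed += 1          # removing the middle 'p' breaks "map" and "pie"
            i += 5
        elif s.startswith("map", i) or s.startswith("pie", i):
            removed += 1          # remove the middle character of the match
            i += 3
        else:
            i += 1
    return removed
```

On the statement's examples, "ppiee" and "mmap" each need one removal, while "mathp" and "ppiiee" need none.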
|
[
"dp",
"greedy",
"strings"
] | 900
|
#include <iostream>
#include <vector>
#include <string>
#include <algorithm>
using namespace std;
int main() {
long long t;
cin>>t;
for(long long c=0;c<t;c++){
long long n;
cin >> n;
string s;
cin >> s;
vector<long long> va;
for (string sul : {"mapie", "pie", "map"}) {
for (size_t pos = 0; (pos = s.find(sul, pos)) != string::npos;) {
s[pos + sul.length() / 2] = '?';
va.push_back(pos + sul.length() / 2);
}
}
cout << va.size() << endl;
}
return 0;
}
|
1941
|
D
|
Rudolf and the Ball Game
|
Rudolf and Bernard decided to play a game with their friends. $n$ people stand in a circle and start throwing a ball to each other. They are numbered from $1$ to $n$ in the clockwise order.
Let's call a transition a movement of the ball from one player to his neighbor. The transition can be made clockwise or counterclockwise.
Let's call the clockwise (counterclockwise) distance from player $y_1$ to player $y_2$ the number of transitions clockwise (counterclockwise) that need to be made to move from player $y_1$ to player $y_2$. For example, if $n=7$ then the clockwise distance from $2$ to $5$ is $3$, and the counterclockwise distance from $2$ to $5$ is $4$.
Initially, the ball is with the player number $x$ (players are numbered clockwise). On the $i$-th move the person with the ball throws it at a distance of $r_i$ ($1 \le r_i \le n - 1$) clockwise or counterclockwise. For example, if there are $7$ players, and the $2$nd player, after receiving the ball, throws it a distance of $5$, then the ball will be caught by either the $7$th player (throwing clockwise) or the $4$th player (throwing counterclockwise). An illustration of this example is shown below.
The game was interrupted after $m$ throws due to unexpected rain. When the rain stopped, the guys gathered again to continue. However, no one could remember who had the ball. As it turned out, Bernard remembered the distances for each of the throws and the direction for \textbf{some} of the throws (clockwise or counterclockwise).
Rudolf asks you to help him and based on the information from Bernard, calculate the numbers of the players who could have the ball after $m$ throws.
|
Let's introduce a set $q$ of unique elements, initially containing the single element $x$ - the index of the player who starts the game. For each $i$ from $1$ to $m$, we update $q$ so that it remains the set of players who could have the ball after the $i$-th throw. For each element $q_j$ of the set $q$, we remove $q_j$ from $q$, and also: if $c_i =$"$0$" or $c_i =$"$?$", add to $q$ the element $\left( (q_j + r_i - 1)\bmod n + 1 \right)$ - the index of the player who receives the ball in case of a clockwise throw; if $c_i =$"$1$" or $c_i =$"$?$", add to $q$ the element $\left( (q_j - r_i - 1 + n)\bmod n + 1 \right)$ - the index of the player who receives the ball in case of a counterclockwise throw. The term $n$ added before the $\bmod$ operation ensures a non-negative argument; otherwise, in some programming languages, the $\bmod$ function can return a negative result, which does not correspond to the semantics of the problem. After the $m$-th iteration of this process, the set $q$ contains the desired indices of all players who could have the ball at the end of the game. At each iteration, the size of $q$ does not exceed $n$, and exactly $m$ iterations are performed. Thus, the asymptotic complexity of the algorithm is $O(n \cdot m)$.
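The set propagation above can be sketched in Python (an illustrative companion to the C++ reference solution; `throws` is assumed to be a list of `(r, c)` pairs with `c` in `{'0', '1', '?'}`):

```python
def possible_holders(n, x, throws):
    # n players in a circle, ball starts at player x (1-indexed)
    cur = {x}
    for r, c in throws:
        nxt = set()
        for q in cur:
            if c in "0?":
                nxt.add((q + r - 1) % n + 1)        # clockwise throw
            if c in "1?":
                nxt.add((q - r - 1 + n) % n + 1)    # counterclockwise throw
        cur = nxt
    return sorted(cur)
```

On the statement's example with $n=7$, if player $2$ throws a distance of $5$ in an unknown direction, the ball ends up with player $7$ or player $4$.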
|
[
"dfs and similar",
"dp",
"implementation"
] | 1,200
|
#include <iostream>
#include <set>
using namespace std;
int main()
{
int t; cin >> t;
while (t--) {
int n, m, a; cin >> n >> m >> a;
set <int> q[2];
int ix = 0;
q[ix].insert(a);
while (m--) {
int x; char ch; cin >> x >> ch;
while (!q[ix].empty()) {
int u = *(q[ix].begin());
q[ix].erase(u);
if (ch == '?' || ch == '0') {
q[ix ^ 1].insert((u + x - 1) % n + 1);
}
if (ch == '?' || ch == '1') {
q[ix ^ 1].insert((u - x - 1 + n) % n + 1);
}
}
ix ^= 1;
}
cout << q[ix].size() << '\n';
for (auto& x : q[ix]) {
cout << x << ' ';
}
cout << '\n';
}
return 0;
}
|
1941
|
E
|
Rudolf and k Bridges
|
Bernard loves visiting Rudolf, but he is always running late. The problem is that Bernard has to cross the river on a ferry. Rudolf decided to help his friend solve this problem.
The river is a grid of $n$ rows and $m$ columns. The intersection of the $i$-th row and the $j$-th column contains the number $a_{i,j}$ — the depth in the corresponding cell. All cells in the \textbf{first} and \textbf{last} columns correspond to the river banks, so the depth for them is $0$.
\begin{center}
{\small The river may look like this.}
\end{center}
Rudolf can choose the row $(i,1), (i,2), \ldots, (i,m)$ and build a bridge over it. In each cell of the row, he can install a support for the bridge. The cost of installing a support in the cell $(i,j)$ is $a_{i,j}+1$. Supports must be installed so that the following conditions are met:
- A support must be installed in cell $(i,1)$;
- A support must be installed in cell $(i,m)$;
- The distance between any pair of adjacent supports must be \textbf{no more than} $d$. The distance between supports $(i, j_1)$ and $(i, j_2)$ is $|j_1 - j_2| - 1$.
Building just one bridge is boring. Therefore, Rudolf decided to build $k$ bridges on \textbf{consecutive} rows of the river, that is, to choose some $i$ ($1 \le i \le n - k + 1$) and independently build a bridge on each of the rows $i, i + 1, \ldots, i + k - 1$. Help Rudolf \textbf{minimize} the total cost of installing supports.
|
First, for each row separately, we calculate the minimum total cost of supports and write it to an array. This can be done, for example, with dynamic programming as follows. We go along the row and maintain the last $d+1$ minimum total costs in a multiset. The answer for the current position is then the sum of the cost of the support at this position and the minimum element of the multiset. When moving to the next cell of the row, it is important to remember to update the multiset. After that, from the array of minimum total costs, we choose a segment of length $k$ with the minimum sum, which gives the answer. The algorithm's asymptotic complexity is $O(n \cdot m \cdot \log d)$.
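A Python sketch of the row DP follows. Instead of a multiset, this version keeps a monotonic deque of candidate previous supports, an equivalent sliding-window minimum (a design choice of this sketch that drops the $\log d$ factor; function names are this sketch's own).

```python
from collections import deque

def row_cost(depths, d):
    # minimum total support cost for one row; adjacent supports at most d apart
    m = len(depths)
    dp = [0] * m
    dp[0] = depths[0] + 1          # bank cell has depth 0, so cost 1
    window = deque([0])            # indices with increasing dp values
    for j in range(1, m):
        while window[0] < j - d - 1:
            window.popleft()       # previous support would be too far away
        dp[j] = dp[window[0]] + depths[j] + 1
        while window and dp[window[-1]] >= dp[j]:
            window.pop()
        window.append(j)
    return dp[m - 1]

def min_total_cost(rows, k, d):
    costs = [row_cost(r, d) for r in rows]
    best = cur = sum(costs[:k])
    for i in range(k, len(costs)):  # sliding-window sum over k consecutive rows
        cur += costs[i] - costs[i - k]
        best = min(best, cur)
    return best
```

For instance, with a single row `[0, 1, 0]` and `d = 1`, supports on the two banks alone suffice, for a cost of `2`; with `d = 0` every column needs a support, for a cost of `4`.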
|
[
"binary search",
"data structures",
"dp",
"two pointers"
] | 1,600
|
#include <iostream>
#include <vector>
#include <algorithm>
#include <set>
using namespace std;
int main() {
int t = 1;
cin >> t;
while (t--) {
int N, M, K, D;
cin >> N >> M >> K >> D;
vector<long long> a(N);
for (int i = 0; i < N; i++) {
vector<long long> dp(M, 1e9);
vector<int> v(M);
multiset<long long> mst = {1};
dp[0] = 1;
cin >> v[0];
for (int j = 1; j < M - 1; j++) {
cin >> v[j];
dp[j] = *mst.begin() + v[j] + 1;
if (j - D - 1 >= 0)
mst.erase(mst.find((dp[j - D - 1])));
mst.insert(dp[j]);
}
cin >> v.back();
dp.back() = 1 + *mst.begin();
a[i] = dp.back();
}
long long cur = 0;
for (int i = 0; i < K; i++)
cur += a[i];
long long mn = cur;
for (int i = K; i < N; i++) {
cur += a[i] - a[i - K];
mn = min(cur, mn);
}
cout << mn << endl;
}
return 0;
}
|
1941
|
F
|
Rudolf and Imbalance
|
Rudolf has prepared a set of $n$ problems with complexities $a_1 < a_2 < a_3 < \dots < a_n$. He is not entirely satisfied with the balance, so he wants to add \textbf{at most one} problem to fix it.
For this, Rudolf came up with $m$ models of problems and $k$ functions. The complexity of the $i$-th model is $d_i$, and the complexity of the $j$-th function is $f_j$. To create a problem, he selects values $i$ and $j$ ($1 \le i \le m$, $1 \le j \le k$) and by combining the $i$-th model with the $j$-th function, he obtains a new problem with complexity $d_i + f_j$ (a new element is inserted into the array $a$).
To determine the imbalance of the set, Rudolf sorts the complexities of the problems in ascending order and finds the largest value of $a_i - a_{i - 1}$ ($i > 1$).
What is the minimum value of imbalance that Rudolf can achieve by adding at most one problem, created according to the described rules?
|
Let's consider the differences $a_i - a_{i-1}$. Since we can only insert one problem, we can reduce the difference in difficulty in only one place. If we insert a problem anywhere other than between the pair of problems whose difference in difficulty is maximum (denote them $a_p$ and $a_{p+1}$), the imbalance will not change. The best insertion aims for the middle between these two, so that the larger of the differences $d_i + f_j - a_p$ and $a_{p + 1} - d_i - f_j$ is minimal. We also cannot forget about the other problems. Let's find the second maximum among the values $a_i - a_{i - 1}$. Since we insert a problem elsewhere, this difference will not decrease, and therefore the answer cannot be less than it. To see what answer we can achieve, let's sort the functions and iterate through all the models. For the model $d_i$, using binary search, we find the maximum index $l$ such that $d_i + f_l \le \lfloor \frac{a_p + a_{p + 1}}{2} \rfloor$. For this model, the best fit will be either a problem of difficulty $d_i + f_l$ or a problem of difficulty $d_i + f_{l + 1}$ (if $l < k$), since these are the problems closest to the middle. We check both options and update the answer.
|
[
"binary search",
"greedy",
"sortings",
"two pointers"
] | 1,800
|
def solve():
n, m, k = map(int, input().split())
a = [int(x) for x in input().split()]
d = [int(x) for x in input().split()]
f = [int(x) for x in input().split()]
d.sort()
f.sort()
m1, m2 = 0, 0
ind = -1
for i in range(1, n):
e = a[i] - a[i - 1]
m2 = max(m2, e)
if m2 > m1:
m1, m2 = m2, m1
ind = i - 1
ans = m1
target = (a[ind] + a[ind + 1]) // 2
for model in d:
l, r = 0, k - 1
while r - l > 1:
mid = (r + l) // 2
if model + f[mid] <= target:
l = mid
else:
r = mid
ans = min(ans, max(m2, abs(model + f[l] - a[ind]), abs(model + f[l] - a[ind + 1])))
ans = min(ans, max(m2, abs(model + f[r] - a[ind]), abs(model + f[r] - a[ind + 1])))
print(ans)
for _ in range(int(input())):
solve()
|
1941
|
G
|
Rudolf and Subway
|
Building bridges did not help Bernard, and he continued to be late everywhere. Then Rudolf decided to teach him how to use the subway.
Rudolf depicted the subway map as an undirected connected graph, without self-loops, where the vertices represent stations. There is at most one edge between any pair of vertices.
Two vertices are connected by an edge if it is possible to travel directly between the corresponding stations, bypassing other stations. The subway in the city where Rudolf and Bernard live has a color notation. This means that any edge between stations has a specific color. Edges of a specific color together form a subway line. A subway line \textbf{cannot} contain unconnected edges and forms a connected subgraph of the given subway graph.
An example of the subway map is shown in the figure.
Rudolf claims that the route will be optimal if it passes through the minimum number of subway lines.
Help Bernard determine this minimum number for the given departure and destination stations.
|
Let's construct a bipartite graph, where one part consists of the vertices of the original graph, i.e., the subway stations, and the other part consists of the subway lines. We add an edge between a station vertex and a line vertex if, in the original subway graph, the station is incident to an edge of the corresponding subway line. In the new graph, we find the shortest path between the vertices corresponding to the departure and destination stations (for example, using breadth-first search). The answer to the problem is the length of this shortest path divided by two.
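The station/line bipartite BFS can be sketched in Python (an illustrative companion to the C++ reference solution; the node labels `('st', u)` and `('line', c)` are this sketch's own convention):

```python
from collections import deque

def min_lines(n, edges, s, t):
    # edges: list of (u, v, color); answer = BFS distance in the
    # station <-> line bipartite graph, divided by two
    if s == t:
        return 0
    adj = {}
    def link(a, b):
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    for u, v, c in edges:
        link(('st', u), ('line', c))
        link(('st', v), ('line', c))
    dist = {('st', s): 0}
    q = deque([('st', s)])
    while q:
        x = q.popleft()
        for y in adj.get(x, ()):
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    d = dist.get(('st', t))
    return -1 if d is None else d // 2
```

Each station-to-station hop costs two bipartite edges (station to line, line to station), which is why the distance is halved.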
|
[
"constructive algorithms",
"dfs and similar",
"graphs",
"shortest paths"
] | 2,000
|
#include<bits/stdc++.h>
using LL = long long;
using ld = long double;
using namespace std;
int main() {
ios::sync_with_stdio(false);
cin.tie(0);
cout.tie(0);
cout << fixed << setprecision(10);
int _ = 0, __ = 1;
cin >> __;
for (int _ = 0; _ < __; ++_) {
int n, m;
cin >> n >> m;
vector<vector<pair<int, int>>> g(n + 1);
map<int, int> clrs;
int idx = n + 1;
for (int i = 0; i < m; ++i) {
int u, v, c;
cin >> u >> v >> c;
g[u].push_back({v, c});
g[v].push_back({u, c});
if (!clrs.count(c)) {
clrs[c] = idx;
idx++;
}
}
int s, t;
cin >> s >> t;
if (s == t)
{
cout << 0 << endl;
continue;
}
vector<set<int>> bg(n + clrs.size() + 3);
for (int i = 1; i <= n; ++i) {
for(auto &[to, c] : g[i])
{
int clr_v = clrs[c];
bg[i].insert(clr_v);
bg[clr_v].insert(i);
bg[to].insert(clr_v);
bg[clr_v].insert(to);
}
}
vector<int> used(bg.size());
vector<int> d(bg.size(), -1);
auto bfs = [&](int v)
{
queue<int> q;
q.push(v);
used[v] = 1;
d[v] = 0;
while (!q.empty())
{
auto from = q.front();
q.pop();
for (auto& to : bg[from])
{
if (!used[to])
{
q.push(to);
used[to] = 1;
d[to] = d[from] + 1;
}
}
}
};
bfs(s);
if (d[t] > 0) cout << d[t] / 2 << endl;
else cout << -1 << endl;
}
return 0;
}
|
1942
|
A
|
Farmer John's Challenge
|
\begin{quote}
Trade Winds - Patrick Deng
\hfill ⠀
\end{quote}
Let's call an array $a$ sorted if $a_1 \leq a_2 \leq \ldots \leq a_{n - 1} \leq a_{n}$.
You are given two of Farmer John's favorite integers, $n$ and $k$. He challenges you to find any array $a_1, a_2, \ldots, a_{n}$ satisfying the following requirements:
- $1 \leq a_i \leq 10^9$ for each $1 \leq i \leq n$;
- Out of the $n$ total cyclic shifts of $a$, exactly $k$ of them are sorted.$^\dagger$
If there is no such array $a$, output $-1$.
$^\dagger$The $x$-th ($1 \leq x \leq n$) \textbf{cyclic shift} of the array $a$ is $a_x, a_{x+1} \ldots a_n, a_1, a_2 \ldots a_{x - 1}$. If $c_{x, i}$ denotes the $i$'th element of the $x$'th cyclic shift of $a$, exactly $k$ such $x$ should satisfy $c_{x,1} \leq c_{x,2} \leq \ldots \leq c_{x, n - 1} \leq c_{x, n}$.
For example, the cyclic shifts for $a = [1, 2, 3, 3]$ are the following:
- $x = 1$: $[1, 2, 3, 3]$ (sorted);
- $x = 2$: $[2, 3, 3, 1]$ (not sorted);
- $x = 3$: $[3, 3, 1, 2]$ (not sorted);
- $x = 4$: $[3, 1, 2, 3]$ (not sorted).
|
Solve for $k = 1$. $a = 1, 2, 3 \dots n$ works. Why? Solve for $k = n$. $a = 1, 1, 1 \dots 1$ works. Why? What other $k$ work for a given $n$? Only $k = 1$ and $k = n$ have possible answers. Read the hints. For $k=1$, the construction $1,2,3\dots n$ always works because in any other cyclic shift, $n$ appears before $1$. Now consider what happens if we want an array with two or more sorted cyclic shifts. Any two of these cyclic shifts are shifts of each other as well. So there would be a sorted array together with a nontrivial shift of it that is also sorted. This means some prefix of the array moves to the back while the array stays sorted, which can only happen if all values in the array are equal. An array with all equal values gives us the construction for $k=n$. As we have seen, for $k>1$ only $k=n$ is possible. Thus, for any other $k$ not equal to $1$ or $n$, we print $-1$.
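A tiny Python checker (illustrative only, not part of the original editorial) that counts sorted cyclic shifts directly, confirming both constructions and the example from the statement:

```python
def sorted_shifts(a):
    # number of cyclic shifts of a that are sorted
    n = len(a)
    return sum(
        all(a[(x + i) % n] <= a[(x + i + 1) % n] for i in range(n - 1))
        for x in range(n)
    )
```

`sorted_shifts(list(range(1, n + 1)))` is `1`, `sorted_shifts([1] * n)` is `n`, and the statement's example `[1, 2, 3, 3]` has exactly one sorted shift.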
|
[
"constructive algorithms",
"math"
] | 800
|
#include <iostream>
using namespace std;
int main(){
int t; cin >> t;
while(t--){
int n, k; cin >> n >> k;
if(k == 1) for(int i = 0; i < n; i++) cout << i + 1 << " ";
else if(k == n) for(int i = 0; i < n; i++) cout << 1 << " ";
else cout << -1;
cout << "\n";
}
}
|
1942
|
B
|
Bessie and MEX
|
\begin{quote}
MOOO! - Doja Cat
\hfill ⠀
\end{quote}
Farmer John has a permutation $p_1, p_2, \ldots, p_n$, where every integer from $0$ to $n-1$ occurs exactly once. He gives Bessie an array $a$ of length $n$ and challenges her to construct $p$ based on $a$.
The array $a$ is constructed so that $a_i$ = $MEX(p_1, p_2, \ldots, p_i) - p_i$, where the $MEX$ of an array is the minimum non-negative integer that does not appear in that array. For example, $MEX(1, 2, 3) = 0$ and $MEX(3, 1, 0) = 2$.
Help Bessie construct any valid permutation $p$ that satisfies $a$. The input is given in such a way that at least one valid $p$ exists. If there are multiple possible $p$, it is enough to print one of them.
|
Solution 1 We will construct the solution forward. Separate the given $a_i$'s into negative and positive cases. What does this tell us about the $\texttt{MEX}$? We can find $p_1, p_2, \ldots, p_n$ in order, looking at the positive and negative cases. Note that $a_i \neq 0$, because otherwise $p_i$ would equal $\texttt{MEX}$($p_1 \dots p_i$), which can never happen. If $a_i > 0$, then $\texttt{MEX}$($p_1, p_2, \ldots, p_i$) must increase from $\texttt{MEX}$($p_1, p_2, \ldots, p_{i-1}$), so $p_i$ must equal $\texttt{MEX}$($p_1, p_2, \ldots, p_{i-1}$). Otherwise, the $\texttt{MEX}$ stays the same, so $p_i$ is simply $\texttt{MEX}$($p_1, p_2, \ldots, p_{i-1}$) $- \ a_i$. Thus, we can maintain the $\texttt{MEX}$ and find each $p_i$ as we go forward. There are only two things the $\texttt{MEX}$ can do: increase or stay the same (it can never decrease, since larger prefixes contain smaller ones). In the case of a positive difference, suppose the $\texttt{MEX}$ stayed the same; then $p_i$ would be smaller than the current $\texttt{MEX}$, so $p_i$ would have to appear earlier in the prefix, which is impossible in a permutation. So the $\texttt{MEX}$ has to increase. In the case of a negative value, the $\texttt{MEX}$ has to be less than the current value. But if the $\texttt{MEX}$ increased, the current value is what changed it, meaning the $\texttt{MEX}$ is now at least (current value $+ 1$), which is greater, not less. So it has to stay the same. Note that this is also a way to show $p$ is always unique. Solution 2 We will construct the solution backwards. The $\texttt{MEX}$ at the last index is determined, since all of $0, 1, \dots, n-1$ appear in $p$. Read the hints. Since we know the $\texttt{MEX}$ of the last position is $n$, we have $n - p_n = a_n$. From this equation, we find that $p_n = n - a_n$.
Now, because we know $p_n$, we can determine the $\texttt{MEX}$ of the first $n-1$ numbers. Like how we found $p_n$, we can do a similar process for finding $p_{n-1}$. Doing this for $i = n, n - 1 \dots 1$ will get us a valid answer $p$. Note that this is also a way to show $p$ is always unique.
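The reference code below implements Solution 1; for completeness, here is a hedged Python sketch of Solution 2, the backward construction. The update rule assumed here: removing $p_i$ from the prefix drops the $\texttt{MEX}$ to $p_i$ if $p_i$ was below the current $\texttt{MEX}$, and leaves it unchanged otherwise.

```python
def reconstruct(a):
    # backward construction: p_i = (prefix MEX at i) - a_i
    n = len(a)
    p = [0] * n
    mex = n                     # MEX of the full permutation is n
    for i in range(n - 1, -1, -1):
        p[i] = mex - a[i]
        if p[i] < mex:
            mex = p[i]          # removing p[i] lowers the prefix MEX to p[i]
    return p
```

For example, the array $a = [1, -1, 2]$ (built from $p = [0, 2, 1]$) reconstructs back to $[0, 2, 1]$.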
|
[
"constructive algorithms",
"math"
] | 1,100
|
#include <bits/stdc++.h>
using namespace std;
void solve(){
int n; cin >> n;
vector<int> a(n);
for(int& i: a) cin >> i;
vector<int> p(n), has(n + 1);
int mex = 0;
for(int i = 0; i < n; i++){
if(a[i] >= 0){
p[i] = mex;
}
else{
p[i] = mex - a[i];
}
has[p[i]] = true;
while(has[mex]) mex++;
}
for(int i: p) cout << i << " ";
cout << "\n";
}
int main(){
cin.tie(0) -> sync_with_stdio(0);
int T = 1;
cin >> T;
while(T--) solve();
}
/* /\_/\
* (= ._.)
* / > \>
*/
|
1942
|
C2
|
Bessie's Birthday Cake (Hard Version)
|
\begin{quote}
Proof Geometric Construction Can Solve All Love Affairs - manbo-p
\hfill ⠀
\end{quote}
\textbf{This is the hard version of the problem. The only difference between the two versions is the constraint on $y$. In this version $0 \leq y \leq n - x$. You can make hacks only if both versions are solved.}
Bessie has received a birthday cake from her best friend Elsie, and it came in the form of a regular polygon with $n$ sides. The vertices of the cake are numbered from $1$ to $n$ clockwise. You and Bessie are going to choose some of those vertices to cut \textbf{non-intersecting} diagonals into the cake. In other words, the endpoints of the diagonals must be part of the chosen vertices.
Bessie would only like to give out pieces of cake which result in a triangle to keep consistency. The size of the pieces doesn't matter, and the whole cake does not have to be separated into all triangles (other shapes are allowed in the cake, but those will not be counted).
Bessie has already chosen $x$ of those vertices that can be used to form diagonals. She wants you to choose \textbf{no more than} $y$ other vertices such that the number of triangular pieces of cake she can give out is maximized.
What is the maximum number of triangular pieces of cake Bessie can give out?
|
Ignoring $n$ for now, let's focus on the $x$ chosen vertices. Sort the $x$ vertices and connect adjacent ones to form their own smaller polygon. By drawing out some cases, or if you're familiar with triangulation (video that proves by induction), you can form $x - 2$ triangles by drawing diagonals in a polygon with $x$ vertices. If you don't know it, one construction that always works is to fix one vertex $v$ and draw diagonals to every other vertex not adjacent to $v$. Now, let's consider the original $n$-sided polygon. In addition to the aforementioned construction, to close the $x$-sided polygon up: for every pair of chosen vertices that are adjacent in the smaller polygon but not adjacent in the bigger $n$-sided polygon, draw a diagonal between them. Through this construction, we can always guarantee $x - 2$ triangles. However, this doesn't account for all triangles, as some triangles can share sides with the bigger $n$-sided polygon. These triangles occur exactly when two adjacent chosen vertices have exactly one vertex between them that is not chosen but is part of the bigger polygon: one side comes from the diagonals we drew, and the other two sides come from the $n$-sided polygon. Therefore, the answer is $x - 2$ + (number of adjacent chosen vertices $2$ apart). Note that the first chosen vertex is also adjacent to the last chosen vertex. Read the solution to the easy version first. We can now reduce the problem to the following: for every vertex we choose, we can make one more triangle; for every pair of chosen vertices two apart, we can make one more triangle. Let's focus on the latter condition. To maximize the effect of our $y$ vertices, we want to prioritize vertices $v$ satisfying the following: $v$ is not already chosen, and there is a chosen vertex exactly two apart from it. Let's call such $v$ good. Let vertices $a$ and $b$ $(a < b)$ be chosen vertices that are adjacent in the $x$-sided polygon. There are $g = b - a - 1$ vertices for us to choose between them.
There are two cases. If $g$ is odd, then we can choose $\lfloor \frac{g}{2} \rfloor$ vertices to form $\lfloor \frac{g}{2}\rfloor + 1$ extra triangles, because every vertex we choose will be good. If $g$ is even, then we can choose $\frac{g}{2}$ vertices to make $\frac{g}{2}$ extra triangles, because one of the vertices we choose will not be good. This shows that it is always optimal to process smaller, odd $g$ first. After processing these adjacent gaps, if we have any leftover vertices, we can simply ignore them: since we have already maximized the number of good vertices, even though any further vertex we place will increase the answer by $1$, it will break two good vertices, which will decrease the answer by $1$.
|
[
"geometry",
"greedy",
"math"
] | 1,700
|
#include <bits/stdc++.h>
using namespace std;
void solve(){
int n, x, y; cin >> n >> x >> y;
int initial_y = y;
vector<int> chosen(x);
for(int& i: chosen) cin >> i;
sort(chosen.begin(), chosen.end());
int ans = x - 2;
int triangles_from_even_g = 0;
vector<int> odd_g;
auto process_gap = [&](int g) -> void{
if(g <= 1){
// already two apart
ans += g;
}
else if(g % 2 == 1){
odd_g.push_back(g / 2);
}
else{
triangles_from_even_g += g / 2;
}
};
for(int i = 0; i < x - 1; i++){
process_gap(chosen[i + 1] - chosen[i] - 1);
}
process_gap((chosen[0] + n) - chosen[x - 1] - 1);
sort(odd_g.begin(), odd_g.end());
for(int g: odd_g){
if(y >= g){
// all vertices are good, so we can make g + 1 triangles
ans += g + 1;
y -= g;
}
else{
ans += y;
y = 0;
break;
}
}
int even_triangles = min(triangles_from_even_g, y);
y -= even_triangles;
ans += even_triangles;
int used_vertices = initial_y - y;
ans += used_vertices;
cout << ans << "\n";
}
int main(){
cin.tie(0) -> sync_with_stdio(0);
int T = 1;
cin >> T;
while(T--) solve();
}
/* /\_/\
* (= ._.)
* / > \>
*/
|
1942
|
D
|
Learning to Paint
|
\begin{quote}
Pristine Beat - Touhou
\hfill ⠀
\end{quote}
Elsie is learning how to paint. She has a canvas of $n$ cells numbered from $1$ to $n$ and can paint any (potentially empty) subset of cells.
Elsie has a 2D array $a$ which she will use to evaluate paintings. Let the maximal contiguous intervals of painted cells in a painting be $[l_1,r_1],[l_2,r_2],\ldots,[l_x,r_x]$. The beauty of the painting is the sum of $a_{l_i,r_i}$ over all $1 \le i \le x$. In the image above, the maximal contiguous intervals of painted cells are $[2,4],[6,6],[8,9]$ and the beauty of the painting is $a_{2,4}+a_{6,6}+a_{8,9}$.
There are $2^n$ ways to paint the strip. Help Elsie find the $k$ largest possible values of the beauty of a painting she can obtain, among all these ways. Note that these $k$ values do not necessarily have to be distinct. It is guaranteed that there are at least $k$ different ways to paint the canvas.
|
We can solve this with dynamic programming. Let $\texttt{dp}[i]$ store a list of the $\min(k, 2^i)$ highest beauties of a painting, in non-increasing order, if you only paint cells $1,2,\ldots,i$. Transitioning boils down to finding the $k$ largest elements from $n$ non-increasing lists. Try to do this in $\mathcal{O}((n + k) \log n)$ time.

Let's define merging two lists as creating a new list that stores the $k$ highest values from the two lists. First, let's look at a slow method of transitioning. We iterate over the left endpoint $l$ such that $l \ldots i$ is a maximal painted interval. At each $l$, we merge $\texttt{dp}[l - 2]$, with $a_{l,i}$ added to each value, into $\texttt{dp}[i]$. We also merge $\texttt{dp}[i - 1]$ into $\texttt{dp}[i]$ to handle the case in which we do not paint cell $i$. If implemented well, transitioning takes $\mathcal{O}(nk)$ time, leading to an $\mathcal{O}(n^2k)$ solution, which is too slow.

We can speed this up. The merging process boils down to finding the $k$ largest elements from $n$ non-increasing lists in $\mathcal{O}((n + k) \log n)$ time. We can use a priority queue that stores ($\texttt{element}, \texttt{index}$) for each list. We do the following $k$ times: add the top of the priority queue to our answer, advance its index, and update the priority queue accordingly. This allows us to transition in $\mathcal{O}((n + k) \log n)$ time, which leads to an $\mathcal{O}((n^2 + nk) \log n)$ solution. Bonus: find an $\mathcal{O}(n^2 + k)$ solution.
|
[
"binary search",
"data structures",
"dfs and similar",
"dp",
"greedy",
"implementation",
"sortings"
] | 2,100
|
#include <bits/stdc++.h>
using namespace std;
void solve(){
int n, k;
cin >> n >> k;
int A[n + 1][n + 1];
for (int i = 1; i <= n; i++)
for (int j = i; j <= n; j++)
cin >> A[i][j];
// dp[i] = Answer if we consider 1...i
vector<int> dp[n + 1];
dp[0] = {0};
for (int i = 1; i <= n; i++){
priority_queue<array<int, 3>> pq;
// Don't create an interval
pq.push({dp[i - 1][0], i - 1, 0});
// Create interval j+2...i (transition from j)
for (int j = i - 2; j >= -1; j--){
pq.push({(j < 0 ? 0 : dp[j][0]) + A[j + 2][i], j, 0});
}
while (pq.size() and dp[i].size() < k){
auto [val, j, num] = pq.top(); pq.pop();
dp[i].push_back(val);
if (j < 0 or num + 1 >= dp[j].size())
continue;
// Don't create interval
if (j == i - 1)
pq.push({dp[i - 1][num + 1], i - 1, num + 1});
// Create interval j+2...i (transition from j)
else
pq.push({dp[j][num + 1] + A[j + 2][i], j, num + 1});
}
}
for (int i : dp[n])
cout << i << " ";
cout << "\n";
}
int main(){
ios_base::sync_with_stdio(0); cin.tie(0);
int tc;
cin >> tc;
while (tc--)
solve();
}
|
1942
|
E
|
Farm Game
|
\begin{quote}
Lunatic Princess - Touhou
\hfill ⠀
\end{quote}
Farmer Nhoj has brought his cows over to Farmer John's farm to play a game! FJ's farm can be modeled by a number line with walls at points $0$ and $l + 1$. On the farm, there are $2n$ cows, with $n$ of the cows belonging to FJ and the other $n$ belonging to FN. They place each of their cows at a distinct point, and no two of FJ's cows are adjacent, nor are two of FN's cows. Two cows are adjacent if there are no other cows between them.
Formally, if $a_1, a_2, \ldots, a_n$ represents the positions of FJ's cows and $b_1, b_2, \ldots, b_n$ represents the positions of FN's cows, then either $0 < a_1 < b_1 < a_2 < b_2 < \ldots < a_n < b_n < l + 1$ or $0 < b_1 < a_1 < b_2 < a_2 < \ldots < b_n < a_n < l + 1$.
In one move, a farmer chooses a number $k$ $(1 \leq k \leq n)$ and a direction (left or right). Then, that farmer chooses $k$ of his cows and moves them one position towards the chosen direction. A farmer cannot move any of his cows onto the walls or onto another farmer's cow. If a farmer cannot move any cows, then that farmer loses. FJ starts the game, making the first turn.
Given $l$ and $n$, find the number of possible game configurations for Farmer John to win if both farmers play optimally. It may be the case that the game will continue indefinitely, in which no farmer wins. A configuration is different from another if there is any $i$ such that $a_i$ or $b_i$ is different. Output the answer modulo $998\,244\,353$.
|
Think about the gaps between $(a_1,b_1), (a_2,b_2), \ldots, (a_n,b_n)$. Let the gaps be $g_1,g_2,\ldots,g_n$. What effect do the moves have on the gaps? WLOG let the first cow be FJ's cow. FN wins if all $g_i$ are even; otherwise FJ wins. Note that the game always ends. Try to count the number of ways FN wins. We can iterate over the sum of the $g_i$ and use stars and bars as well as other counting techniques.

Consider the gaps between $(a_1,b_1), (a_2,b_2), \ldots, (a_n,b_n)$. In the example below, '$\texttt{a}$' represents FJ's cow, '$\texttt{b}$' represents FN's cow, and '$\texttt{.}$' represents an empty space. The gaps are $[2,3]$. $\texttt{|.a..b....a...b.|}$

Consider the gaps $g_1,g_2,\ldots,g_n$. When a farmer moves, they choose some non-empty subset of non-zero $g_i$ and add or subtract $1$ from every element in the subset (as long as such a move is possible). The game ends when some farmer cannot move, which implies that all $g_i$ are $0$ when the game ends. WLOG let the first cow be FJ's cow. The winner of the game is determined as follows: 1) If all $g_i$ are even, then FN wins. If FJ pushes the cows corresponding to gaps $i_1, i_2, \ldots, i_k$ in some direction in a move, then $g_{i_1}, g_{i_2}, \ldots, g_{i_k}$ change by $1$ and become odd. FN can then push the cows corresponding to gaps $i_1, i_2, \ldots, i_k$ left so that $g_{i_1}, g_{i_2}, \ldots, g_{i_k}$ become even again. Therefore, all $g_i$ are even whenever it's FJ's turn, and at least one $g_i$ is odd whenever it's FN's turn. Since the game ends when all $g_i$ are $0$ (which is even), FJ loses. Note that the game always ends: the sum of the positions of FN's cows strictly decreases, since FN always pushes some cow left. 2) Otherwise, FJ wins, since he can subtract $1$ from all odd $g_i$ in his first move to make all elements even. Then it's FN's turn with all $g_i$ even, so by the reasoning above, FJ wins. 
Let's use complementary counting and determine the number of ways FN wins (which is when all $g_i$ are even). We can iterate over the sum $s$ (with $s$ even) of all $g_i$. For each $s$, we use stars and bars to find how many tuples $g_1,g_2,\ldots,g_n$ of even values sum to $s$. Specifically, the number of ways is ${\frac{s}{2} + n - 1 \choose n - 1}$. The number of ways to place the cows in the available positions given the gaps is $2 \cdot {l - s - n \choose n}$. We multiply by $2$ since we can start with either FJ's cow or FN's cow. Finally, we subtract this result from the total number of configurations, $2 \cdot {l \choose 2n}$, to get the number of ways FJ wins. This runs in $\mathcal{O}(l)$ time.
|
[
"combinatorics",
"games"
] | 2,300
|
#include <bits/stdc++.h>
using namespace std;
#define ll long long
const int MAX = 2e6 + 5, MOD = 998244353;
ll fact[MAX], ifact[MAX];
ll bpow(ll a, int p){
ll ans = 1;
for (;p; p /= 2, a = (a * a) % MOD)
if (p & 1)
ans = (ans * a) % MOD;
return ans;
}
ll ncr(int n, int r){
if (n < 0)
return 0;
if (r > n)
return 0;
return fact[n] * ifact[r] % MOD * ifact[n - r] % MOD;
}
void solve(){
int l, n;
cin >> l >> n;
ll all_even = 0;
for (int s = 0; s <= l; s += 2){
all_even += 2 * ncr(s / 2 + n - 1, n - 1) % MOD * ncr(l - s - n, n) % MOD;
all_even %= MOD;
}
cout << (2 * ncr(l, 2 * n) % MOD - all_even + MOD) % MOD << "\n";
}
int main(){
ios_base::sync_with_stdio(0); cin.tie(0);
for (int i = 0; i < MAX; i++)
fact[i] = !i ? 1 : fact[i - 1] * i % MOD;
for (int i = MAX - 1; i >= 0; i--)
ifact[i] = (i == MAX - 1) ? bpow(fact[MAX - 1], MOD - 2) : ifact[i + 1] * (i + 1) % MOD;
int tc;
cin >> tc;
while (tc--)
solve();
}
|
1942
|
F
|
Farmer John's Favorite Function
|
\begin{quote}
ΩΩPARTS - Camellia
\hfill ⠀
\end{quote}
Farmer John has an array $a$ of length $n$. He also has a function $f$ with the following recurrence:
- $f(1) = \sqrt{a_1}$;
- For all $i > 1$, $f(i) = \sqrt{f(i-1)+a_i}$.
Note that $f(i)$ is not necessarily an integer.
He plans to do $q$ updates to the array. Each update, he gives you two integers $k$ and $x$ and he wants you to set $a_k = x$. After each update, he wants to know $\lfloor f(n) \rfloor$, where $\lfloor t \rfloor$ denotes the value of $t$ rounded down to the nearest integer.
|
Consider the case where all $f(i)$ are integers. Decreasing any $a_i$ will decrease $\lfloor f(n) \rfloor$, so solutions that only iterate over the last few elements will not work. If $n \ge 6$, changing $a_1$ will only affect $\lfloor f(n) \rfloor$ by at most $1$. We can take the floor each time we square root. Specifically, we can define $f$ as: $f(1) = \lfloor \sqrt{a_1} \rfloor$; for all $i > 1$, $f(i) = \lfloor \sqrt{f(i-1)+a_i} \rfloor$. This allows us to work with integers. We can divide our array into blocks of size $b \ge 6$ (note that the value of $b$ depends on the solution). How can we use hint $2$? What should we store for each block?

Let's first look at the impact of earlier numbers on the final result. Earlier numbers can still influence $\lfloor f(n) \rfloor$ when $n$ is large: suppose that all $f(i)$ are integers; then decreasing any $a_i$ will decrease $\lfloor f(n) \rfloor$. However, it is clear that the impact of earlier numbers on $\lfloor f(n) \rfloor$ is still extremely small. A key observation is that when $n \ge 6$, changing $a_1$ will only affect $\lfloor f(n) \rfloor$ by at most $1$. Another observation is that we can take the floor each time we square root, defining $f(1) = \lfloor \sqrt{a_1} \rfloor$ and $f(i) = \lfloor \sqrt{f(i-1)+a_i} \rfloor$ for all $i > 1$. This allows us to work with integers. From here on, we work with this new definition of $f$.

Let's divide our array into blocks of size $b=6$. We can append zeroes to the front of the array to make $n$ a multiple of $b$. Consider some block representing the range $[l,r]$ and the subarray $a_l,a_{l+1},\ldots,a_r$. Let $v$ be the value of $f(r)$ if we let $f(l - 1)=0$. Using our first observation, we know that $f(r)$ will be either $v$ or $v+1$ depending on $f(l - 1)$. So let's find the smallest value $c$ such that if $f(l - 1)=c$, then $f(r)=v+1$. This can be done by iterating over the elements of the block backwards. 
For each block, we store its corresponding ($v,c$). We can build a segment tree over the blocks for an $\mathcal{O}((n + q) \log n)$ solution. Alternatively, we can do square root decomposition by having $b=\sqrt n$ which leads to an $\mathcal{O}(n + q \sqrt n)$ solution (in practice, we should set $b$ to something small like $100$).
|
[
"brute force",
"data structures",
"implementation",
"math"
] | 2,700
|
#include <bits/stdc++.h>
using namespace std;
#define ll long long
const int B = 100, MAX = 2e5 + B + 5;
int n, q, offset, numBlocks; ll A[MAX], val[MAX], cut[MAX];
void buildBlock(int blk){
int l = blk * B;
int r = l + B - 1;
ll cur = 0;
for (int i = l; i <= r; i++)
cur = floor(sqrtl((long double)cur + A[i]));
val[blk] = cur;
ll req = cur + 1;
for (int i = r; i >= l; i--){
if (req > 2e9){
cut[blk] = 2e18;
return;
}
req = req * req - A[i];
}
cut[blk] = req;
}
ll qry(){
ll cur = 0;
for (int b = 0; b < numBlocks; b++)
cur = (cur >= cut[b]) ? val[b] + 1 : val[b];
return cur;
}
int main(){
ios_base::sync_with_stdio(0); cin.tie(0);
cin >> n >> q;
offset = (B - n % B) % B;
n += offset;
numBlocks = n / B;
for (int i = offset; i < n; i++)
cin >> A[i];
for (int b = 0; b < numBlocks; b++)
buildBlock(b);
while (q--){
ll k, x;
cin >> k >> x;
k--;
k += offset;
A[k] = x;
buildBlock(k / B);
cout << qry() << "\n";
}
}
|
1942
|
G
|
Bessie and Cards
|
\begin{quote}
Second Dark Matter Battle - Pokemon Super Mystery Dungeon
\hfill ⠀
\end{quote}
Bessie has recently started playing a famous card game. In the game, there is only one deck of cards, consisting of $a$ "draw $0$" cards, $b$ "draw $1$" cards, $c$ "draw $2$" cards, and $5$ special cards. At the start of the game, all cards are in the randomly shuffled deck.
Bessie starts the game by drawing the top $5$ cards of the deck. She may then play "draw $x$" cards from the hand to draw the next $x$ cards from the top of the deck. Note that every card can only be played once, special cards cannot be played, and if Bessie uses a "draw $2$" card when there is only $1$ card remaining in the deck, then she simply draws that remaining card. Bessie wins if she draws all $5$ special cards.
Since Bessie is not very good at math problems, she wants you to find the probability that she wins, given that the deck is shuffled randomly over all $(a + b + c + 5)!$ possible orderings. It can be shown that this answer can always be expressed as a fraction $\frac{p}{q}$ where $p$ and $q$ are coprime integers. Output $p \cdot q^{-1}$ modulo $998\,244\,353$.
|
"Draw $1$" cards do not matter because we can immediately play them when we draw them. Treat "draw $2$" as $+1$ and "draw $0$" as $-1$. We start with a balance of $+5$. We draw all the cards up to the first prefix where the balance dips to $0$. We are interested in the number of ways where the special cards lie in this prefix. Read the hints. Treat the special cards as "draw $0$" cards and multiply by the appropriate binomial coefficient at the end. Also treat cards of the same type as indistinguishable and multiply by the appropriate factorials at the end. Enumerate the length of the prefix where the balance first hits $0$. This dictates how many $+1$ and $-1$ we have. Now we want to count the number of ways of arranging $+1$ and $-1$ such that the balance never dips to $0$, assuming we start with $+5$. To solve this subproblem, we can draw inspiration from the path counting interpretation for counting balanced bracket sequences. We have $n$ open and $n$ close brackets and want to count the number of balanced bracket sequences starting with (((( (note that there are only $4$ instead of $5$ open brackets to shift from all balances being strictly positive to all balances being non-negative). The number of sequences ignoring the balance constraint is $\binom{2n-4}{n}$. Any bracket sequence where some balance dips negative corresponds to a path that walks above the line $y = x$. For those paths, we reflect over the line $y = x + 1$ at the first point where it steps over the line. So there is a bijection between these paths and paths that reach the point $(n-1,n+1)$, of which there are $\binom{2n-4}{n+1}$. So the total number of ways is $\binom{2n-4}{n} - \binom{2n-4}{n+1}$. Our final answer is summing up the number of ways over all prefix lengths. Make sure to handle the case where you draw the entire deck correctly. The complexity is $\mathcal O(\min(a,c))$.
|
[
"combinatorics",
"dp",
"math"
] | 2,800
| null |
1942
|
H
|
Farmer John's Favorite Intern
|
\begin{quote}
Peaches...
\hfill ⠀
\end{quote}
Ruby just won an internship position at Farmer John's farm by winning a coding competition! As the newly recruited intern, Ruby is tasked with maintaining Farmer John's peach tree, a tree consisting of $n$ nodes rooted at node $1$. Each node initially contains $a_i = 0$ peaches, and there are two types of events that can happen:
- Growth event at some node $x$: Ruby must choose \textbf{either} the parent of $x$ \textbf{or} any node in the subtree of $x$ and increase the amount of peaches it contains by one.
- Harvest event at some node $x$: Ruby must choose a single node that is in the subtree of $x$ and decrease the amount of peaches it contains by one. Note that this is \textbf{not} the same set of nodes as the growth event.
Note that the subtree of $x$ includes the node $x$ as well. Ruby is also given an array $b$ of length $n$. The peach tree is deemed healthy if $a_i \ge b_i$ for every node $i$.
Ruby is asked to perform $q$ operations of two types:
- 1 x v — Perform $v$ growth events on node $x$. Ruby does \textbf{not} have to choose the same node to increase in every growth event.
- 2 x v — Perform $v$ harvest events on node $x$. Ruby does \textbf{not} have to choose the same node to decrease in every harvest event.
For every prefix of operations, Ruby asks you to find if she can perform these operations \textbf{in some order} such that the resulting peach tree (at the end of these operations) is healthy. Note that Ruby can't perform a harvest event that makes any $a_i$ negative.
Every prefix is independent, meaning that for a given operation, Ruby may choose different nodes to perform events on for every prefix that contains that operation.
|
Model this as a flow problem where growth events are an input of flow to a node, and harvest events and the required amounts of fruit are outputs of flow to the sink. We define the $a_i$ edge to be the input edge of every node, $b_i$ to be the output edge for required fruit, and $c_i$ to be the output edge for growth events. The connections for each individual node look like: $(\text{source}, a_i, \text{# of growth events})$ $(a_i, \text{temporary node}, \text{INF})$ $(\text{temporary node}, b_i, \text{INF})$ $(\text{temporary node}, c_i, \text{INF})$ $(b_i, \text{sink}, \text{# of required peaches})$ $(c_i, \text{sink}, \text{# of harvest events})$ To simulate growth events affecting the subtree, every temporary node has infinite-capacity directed edges down to the temporary nodes of its children. To simulate growth events affecting the parent, every temporary node has an infinite-capacity directed edge to its parent's $b_i$. To simulate harvest events affecting the subtree, every node has infinite-capacity directed edges to all its ancestors' $c_i$ nodes. Let's try to find the min cut of this graph. If we don't cut some node's $a_i$, then we must cut both $b_i$ and $c_i$ for all of its descendants. If we don't cut some node's $a_i$, then we also must cut $c_i$ for all its ancestors and $b_i$ for its parent. Thus, we can define $dp[i][0/1/2]$ according to whether we are currently cutting the $a_i$ input edge, cutting the $b_i$ and $c_i$ output edges, or cutting the $a_i$ edge while some node in the subtree did not have its $a_i$ edge cut, in which case we need to cut the $c_i$ edge as well. 
Transitions look like: $dp[i][0] = a_i + \sum_j dp[j][0]$ $dp[i][1] = b_i + c_i + \sum_j dp[j][1]$ $dp[i][2] = a_i + c_i + \min(\sum_j \min(dp[j][0], dp[j][2]), b_i + \sum_j \min(dp[j][0], dp[j][1], dp[j][2]))$ where at least one child contributes a $dp[j][2]$ or $dp[j][1]$. $dp[i][2]$ can be rewritten as: $dp[i][2] = a_i + c_i + \min(\min_j(dp[j][2] - \min(dp[j][0], dp[j][2])) + \sum_j \min(dp[j][0], dp[j][2]),$ $\min_j(dp[j][1] - \min(dp[j][0], dp[j][1], dp[j][2])) + b_i + \sum_j \min(dp[j][0], dp[j][1], dp[j][2]))$ Use dynamic dp by representing dp transitions as a matrix where all the light children's dp values are known and the heavy child's dp values are variable. Then, we can accelerate each chain in the heavy-light decomposition of the tree. To get the matrix representation of each transition to the heavy child, there are $5$ cases when considering a $dp[j][0/1/2] \to dp[j][2]$ transition: $dp[j][0]$ is the min, $dp[j][1]$ is the min, $dp[j][2]$ is the min, $dp[j][2] - \min(dp[j][0], dp[j][2])$ is the optimal added value, or $dp[j][1] - \min(dp[j][0], dp[j][1], dp[j][2])$ is the optimal added value. For each cell in the matrix, put the min of the $5$ cases. Since we only ever change hld chains $\log n$ times, the total complexity is $n \log^2 n \cdot 3^3$. If we use balanced hld or other $n \log n$ hld techniques, then the complexity becomes $n \log^2 n$. Both solutions should pass under the given constraints. There also exists a solution without flows. Thanks to rainboy for discovering it during testing! Let's greedily assign the growth events. We always try to use the growth events to satisfy harvest events or required peaches in the subtree, and if there is any excess then we give it to the parent's required peaches. Similarly, we always use harvest events on growth events in the subtree. Let's define a $dp[u]$ for each node $u$. If $dp[u] > 0$, it means we can give $dp[u]$ growths to $u$'s parent. 
Otherwise, it means $dp[u]$ requires $-dp[u]$ growths from its ancestors. Let $pos_u$ denote the sum of positive $dp[v]$ where $v$ is a child of $u$, and $neg_u$ denote the sum of negative $dp[v]$. First we use the excess growth to satisfy the required peaches, so we can set $b_u = \max(0, b_u - pos_u)$. Then we distribute the growth events at node $u$ to satisfy the requirements in the subtree. Define $a_i$ as the number of growth events on node $i$ and $c_i$ as the number of harvest events on node $i$. To account for the harvest events at node $i$, we define $bal_u = (\text{sum of } a_i \text{ in } u\text{'s subtree}) - (\text{sum of } b_i \text{ in } u\text{'s subtree}) - (\text{sum of } c_i \text{ in } u\text{'s subtree})$. We know that $dp[u] \le bal_u$, and as it turns out, as long as we keep $bal_u$ as an upper bound on $dp[u]$, the harvest events will always be satisfied. Why? $a_u + neg_u - \max(0, b_u - pos_u)$ is the assignment of growth events without considering harvest events. The number of unused growth events in the subtree of $u$ is given by $bal_u - (a_u + neg_u - \max(0, b_u - pos_u)) + c_u$. If $a_u + neg_u - \max(0, b_u - pos_u) \le bal_u$, then we can use all of our harvest events on unused growth events in the subtree. If $a_u + neg_u - \max(0, b_u - pos_u) > bal_u$, then we will use all of our harvest events but still have $bal_u - (a_u + neg_u - \max(0, b_u - pos_u))$ harvest events remaining (a deficit, since this quantity is negative). Thus, we can formulate our dp as $dp[u] = \min(a_u + neg_u - \max(0, b_u - pos_u), bal_u)$. Let $v$ be the heavy child of $u$, let $pos'_u$ be the sum of positive dp values excluding $v$, and let $neg'_u$ be the sum of negative dp values excluding $v$. Then, we can rewrite our dp as $dp[u] = \min(dp[v] + a_u + neg'_u - \max(0, b_u - pos'_u), a_u + neg'_u, bal_u)$. Define $x_u = a_u + neg'_u - \max(0, b_u - pos'_u)$ and $y_u = a_u + neg'_u$. Then $dp[u] = \min(dp[v] + x_u, y_u, bal_u)$. 
This can be accelerated by representing the transition as a matrix and accelerating it using dynamic dp. However, we can also notice that the dp can be represented as the smallest $x_1 + x_2 + \ldots + x_{i - 1} + \min(y_i, bal_i)$ over all prefixes of each heavy light decomposition chain. Thus, we can maintain the dp by using a range add and prefix min segment tree.
|
[
"data structures",
"dp",
"flows",
"trees"
] | 3,500
|
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define N 300000
#define N_ (1 << 19) /* N_ = pow2(ceil(log2(N))) */
#define INF 0x3f3f3f3f3f3f3f3fLL
long long min(long long a, long long b) { return a < b ? a : b; }
long long max(long long a, long long b) { return a > b ? a : b; }
int *ej[N], eo[N], n;
void append(int i, int j) {
int o = eo[i]++;
if (o >= 2 && (o & o - 1) == 0)
ej[i] = (int *) realloc(ej[i], o * 2 * sizeof *ej[i]);
ej[i][o] = j;
}
int dd[N], pp[N], qq[N], jj[N], ta[N], tb[N], qu[N];
int bb[N]; long long aa[N], zz[N], zzp[N], zzn[N];
int dfs1(int i, int d) {
int o, s, k_, j_;
dd[i] = d;
s = 1, k_ = 0, j_ = -1;
zz[i] = -bb[i];
for (o = eo[i]; o--; ) {
int j = ej[i][o], k = dfs1(j, d + 1);
s += k;
if (k_ < k)
k_ = k, j_ = j;
zz[i] += zz[j];
}
qq[i] = j_, jj[i] = j_ == -1 ? i : jj[j_];
return s;
}
int t_;
void dfs2(int i, int q) {
int o, j_;
qu[ta[i] = t_++] = i;
j_ = qq[i], qq[i] = q;
if (j_ != -1)
dfs2(j_, q);
for (o = eo[i]; o--; ) {
int j = ej[i][o];
if (j != j_)
dfs2(j, j);
}
tb[i] = t_;
}
long long stx[N_ * 2], sty[N_ * 2], stz[N_ * 2], lz[N_]; int h_, n_;
void put(int i, long long x) {
stz[i] += x;
if (i < n_)
lz[i] += x;
}
void pus(int i) {
if (lz[i])
put(i << 1 | 0, lz[i]), put(i << 1 | 1, lz[i]), lz[i] = 0;
}
void pul(int i) {
if (!lz[i]) {
int l = i << 1, r = l | 1;
stx[i] = stx[l] + stx[r];
sty[i] = min(sty[l], stx[l] + sty[r]);
stz[i] = min(stz[l], stx[l] + stz[r]);
}
}
void push(int i) {
int h;
for (h = h_; h > 0; h--)
pus(i >> h);
}
void pull(int i) {
while (i > 1)
pul(i >>= 1);
}
void build() {
int i, j, a, o;
h_ = 0;
while (1 << h_ < n)
h_++;
n_ = 1 << h_;
memset(stx, 0, n_ * 2 * sizeof *stx);
memset(sty, 0, n_ * 2 * sizeof *sty);
memset(stz, 0, n_ * 2 * sizeof *stz);
memset(lz, 0, n_ * sizeof *lz);
memset(aa, 0, n * sizeof *aa);
memset(zzp, 0, n * sizeof *zzp);
memset(zzn, 0, n * sizeof *zzn);
for (i = 0; i < n; i++) {
a = ta[i];
stz[n_ + a] = zz[i];
for (o = eo[i]; o--; ) {
j = ej[i][o];
if (qq[j] == j)
zzn[i] += -zz[j];
}
stx[n_ + a] = -zzn[i] - bb[i];
sty[n_ + a] = -zzn[i];
}
for (i = n_ - 1; i > 0; i--)
pul(i);
}
void upd(int i) {
int i_ = n_ + ta[i];
push(i_);
stx[i_] = aa[i] - zzn[i] - max(bb[i] - zzp[i], 0);
sty[i_] = aa[i] - zzn[i];
pull(i_);
}
void update_z(int l, int r, long long x) {
int l_ = l += n_, r_ = r += n_;
push(l_), push(r_);
for ( ; l <= r; l >>= 1, r >>= 1) {
if ((l & 1) == 1)
put(l++, x);
if ((r & 1) == 0)
put(r--, x);
}
pull(l_), pull(r_);
}
long long query(int l, int r) {
long long xl, yl, zl, xr, yr, zr;
push(l += n_), push(r += n_);
xl = 0, yl = INF, zl = INF, xr = 0, yr = INF, zr = INF;
for ( ; l <= r; l >>= 1, r >>= 1) {
if ((l & 1) == 1) {
yl = min(yl, xl + sty[l]), zl = min(zl, xl + stz[l]), xl += stx[l];
l++;
}
if ((r & 1) == 0) {
yr = min(sty[r], yr == INF ? INF : stx[r] + yr), zr = min(stz[r], zr == INF ? INF : stx[r] + zr), xr += stx[r];
r--;
}
}
return min(min(xl + xr, min(yl, yr == INF ? INF : xl + yr)), min(zl, zr == INF ? INF : xl + zr));
}
int main() {
int t;
scanf("%d", &t);
while (t--) {
int q_, i, p, q;
scanf("%d%d", &n, &q_);
for (i = 0; i < n; i++)
ej[i] = (int *) malloc(2 * sizeof *ej[i]), eo[i] = 0;
pp[0] = -1;
for (i = 1; i < n; i++) {
scanf("%d", &pp[i]), pp[i]--;
append(pp[i], i);
}
for (i = 0; i < n; i++)
scanf("%d", &bb[i]);
dfs1(0, 0);
t_ = 0, dfs2(0, 0);
build();
while (q_--) {
int t, x;
scanf("%d%d%d", &t, &i, &x), i--;
if (t == 2)
x = -x;
if (t == 1)
aa[i] += x, upd(i);
while (i >= 0) {
q = qq[i], p = pp[q];
if (p >= 0) {
if (zz[q] > 0)
zzp[p] -= zz[q];
else
zzn[p] -= -zz[q];
}
update_z(ta[q], ta[i], x);
zz[q] = query(ta[q], ta[jj[q]]);
if (p >= 0) {
if (zz[q] > 0)
zzp[p] += zz[q];
else
zzn[p] += -zz[q];
upd(p);
}
i = p;
}
printf(zz[0] >= 0 ? "YES\n" : "NO\n");
}
for (i = 0; i < n; i++)
free(ej[i]);
}
return 0;
}
|
1943
|
A
|
MEX Game 1
|
Alice and Bob play yet another game on an array $a$ of size $n$. Alice starts with an empty array $c$. Both players take turns playing, with Alice starting first.
On Alice's turn, she picks one element from $a$, appends that element to $c$, and then deletes it from $a$.
On Bob's turn, he picks one element from $a$, and then deletes it from $a$.
The game ends when the array $a$ is empty. Game's score is defined to be the MEX$^\dagger$ of $c$. Alice wants to maximize the score while Bob wants to minimize it. Find game's final score if both players play optimally.
$^\dagger$ The $\operatorname{MEX}$ (minimum excludant) of an array of integers is defined as the smallest non-negative integer which does not occur in the array. For example:
- The MEX of $[2,2,1]$ is $0$, because $0$ does not belong to the array.
- The MEX of $[3,1,0,1]$ is $2$, because $0$ and $1$ belong to the array, but $2$ does not.
- The MEX of $[0,3,1,2]$ is $4$, because $0$, $1$, $2$ and $3$ belong to the array, but $4$ does not.
|
Alice can adapt to Bob's strategy; try to keep that in mind. Whenever Bob chooses $i$, if there are any copies of $i$ left, Alice can take $i$ on her next move. Let $f$ be the frequency array of $a$. You can ignore all $i$ with $f_i \ge 2$ due to the previous hint. Now the answer is some simple casework. For any $i$ such that $f_i \ge 2$, whenever Bob removes an occurrence of $i$, on the next move Alice simply takes $i$ herself (if she hasn't already taken $i$ before that). Thus, we only need to focus on $f_i \le 1$. The answer is $\min(\text{first } i \text{ such that } f_i = 0, \text{ second } i \text{ such that } f_i = 1)$. Obviously, Alice can't do better than the MEX of the array (the first $i$ such that $f_i = 0$). Further, among the $i$ with $f_i = 1$, Alice can save at most one, after which Bob will remove the smallest $i$ with $f_i = 1$ he can find. This is optimal for Bob as well, because he cannot do better when Alice removes the first $i$ with $f_i = 1$ on her first move.
|
[
"games",
"greedy"
] | 1,300
|
#include <bits/stdc++.h>
using namespace std;
int main(){
int t; cin >> t;
while (t--){
int n; cin >> n;
vector <int> f(n + 1, 0);
for (int i = 0; i < n; i++){
int x; cin >> x;
f[x]++;
}
int c = 0;
for (int i = 0; i <= n; i++){
c += (f[i] == 1);
if ((c == 2) || (f[i] == 0)){
cout << i << "\n";
break;
}
}
}
return 0;
}
|
1943
|
B
|
Non-Palindromic Substring
|
A string $t$ is said to be $k$-good if there exists at least one substring$^\dagger$ of length $k$ which is not a palindrome$^\ddagger$. Let $f(t)$ denote the sum of all values of $k$ such that the string $t$ is $k$-good.
You are given a string $s$ of length $n$. You will have to answer $q$ of the following queries:
- Given $l$ and $r$ ($l < r$), find the value of $f(s_ls_{l + 1}\ldots s_r)$.
$^\dagger$ A substring of a string $z$ is a contiguous segment of characters from $z$. For example, "$\mathtt{defor}$", "$\mathtt{code}$" and "$\mathtt{o}$" are all substrings of "$\mathtt{codeforces}$" while "$\mathtt{codes}$" and "$\mathtt{aaa}$" are not.
$^\ddagger$ A palindrome is a string that reads the same backwards as forwards. For example, the strings "$z$", "$aa$" and "$tacocat$" are palindromes while "$codeforces$" and "$ab$" are not.
|
When is a string not $k$-good? (Ignore the trivial edge cases of $k = 1$ and $k = n$.) What happens when $s[i \ldots j]$ and $s[i + 1 \ldots j + 1]$ are both palindromes? We first try to find the answer for a single string. Let $k = j - i + 1$, and suppose $s[i \ldots j]$ (call it I) and $s[i + 1 \ldots j + 1]$ (call it II) are both palindromes. Then $s_i = s_j$ (due to I), $s_j = s_{i + 2}$ (due to II), $s_{i + 2} = s_{j - 2}$ (due to I), $s_{j - 2} = s_{i + 4}$ (due to II), and so on. You can see that this forces $s_i = s_{i + 2} = s_{i + 4} = \ldots$. A similar reasoning gives $s_{i + 1} = s_{i + 3} = s_{i + 5} = \ldots$. Further, if $k$ is even, $i$ and $j$ have different parities, so $s_i = s_j$ implies that all characters must in fact be equal. We mentioned that the edge cases are $1$ and $n$, but why exactly? How does the analysis fail for them? (Left as an exercise.) So, the condition for a string to be $k$-good can be written as follows: $k = 1$: never possible; $1 < k < n$, odd: not an alternating string; $1 < k < n$, even: not all characters the same; $k = n$: not a palindrome. Now onto substring queries. The second and third conditions are easy to handle: store the next position where $s_i \ne s_{i + 2}$ and $s_i \ne s_{i + 1}$ respectively. Checking whether a substring is a palindrome is standard, with various methods such as string hashing or Manacher's algorithm.
|
[
"hashing",
"implementation",
"math",
"strings"
] | 2,000
|
#include <bits/stdc++.h>
using namespace std;
#define INF (int)1e18
mt19937_64 RNG(chrono::steady_clock::now().time_since_epoch().count());
vector<int> manacher_odd(string s) {
int n = s.size();
s = "$" + s + "^";
vector<int> p(n + 2);
int l = 1, r = 1;
for(int i = 1; i <= n; i++) {
p[i] = max(0, min(r - i, p[l + (r - i)]));
while(s[i - p[i]] == s[i + p[i]]) {
p[i]++;
}
if(i + p[i] > r) {
l = i - p[i], r = i + p[i];
}
}
return vector<int>(begin(p) + 1, end(p) - 1);
}
vector<int> manacher(string s) {
string t;
for(auto c: s) {
t += string("#") + c;
}
auto res = manacher_odd(t + "#");
return vector<int>(begin(res) + 1, end(res) - 1);
}
#define int long long
void Solve()
{
int n, q; cin >> n >> q;
string s; cin >> s;
auto v = manacher(s);
for (auto &x : v) x--;
// we also need to know if all same, and all alternating
set <int> s1, s2;
for (int i = 0; i < n - 1; i++){
if (s[i] != s[i + 1]) s1.insert(i);
        if (i + 2 < n && s[i] != s[i + 2]) s2.insert(i);
}
while (q--){
int l, r; cin >> l >> r;
l--;
r--;
if (l == r){
cout << 0 << "\n";
continue;
}
int len = r - l + 1;
int ans;
auto it = s1.lower_bound(l);
if (it == s1.end() || (*it) >= r){
ans = 0;
} else {
it = s2.lower_bound(l);
if (it == s2.end() || (*it) >= r - 1){
ans = ((len - 1)/ 2) * (((len - 1) / 2) + 1);
} else {
ans = len * (len - 1) / 2 - 1;
}
}
if (v[l + r] < (r - l + 1)) ans += len;
cout << ans << "\n";
}
}
int32_t main()
{
auto begin = std::chrono::high_resolution_clock::now();
ios_base::sync_with_stdio(0);
cin.tie(0);
int t = 1;
// freopen("in", "r", stdin);
// freopen("out", "w", stdout);
cin >> t;
for(int i = 1; i <= t; i++)
{
//cout << "Case #" << i << ": ";
Solve();
}
auto end = std::chrono::high_resolution_clock::now();
auto elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin);
cerr << "Time measured: " << elapsed.count() * 1e-9 << " seconds.\n";
return 0;
}
|
1943
|
C
|
Tree Compass
|
You are given a tree with $n$ vertices numbered $1, 2, \ldots, n$. Initially, all vertices are colored white.
You can perform the following two-step operation:
- Choose a vertex $v$ ($1 \leq v \leq n$) and a distance $d$ ($0 \leq d \leq n-1$).
- For all vertices $u$ ($1 \leq u \leq n$) such that $\text{dist}^\dagger(u,v)=d$, color $u$ black.
Construct a sequence of operations to color all the nodes in the tree black using the minimum possible number of operations. It can be proven that it is always possible to do so using at most $n$ operations.
$^\dagger$ $\text{dist}(x, y)$ denotes the number of edges on the (unique) simple path between vertices $x$ and $y$ on the tree.
|
Try to solve the line case first. You may have to use more than $1$ node for certain cases. Then extend the solution for the line to the general tree version (consider the diameter). For a line, an obvious bound on the answer is $\lceil \frac{n}{2} \rceil$, as we can colour at most $2$ nodes per operation. I claim this is achievable except when $n \bmod 4 = 2$, where we do $1$ worse. That is however still provably optimal: bicolour the line, and note that each operation only colours nodes belonging to a single colour class. When $n \bmod 2 = 1$, simply use the centre of the line and do operations of the form $(centre, i)$ ($0 \le i \le \lfloor \frac{n}{2} \rfloor$). When $n \bmod 4 = 0$, for convenience let the line be $1, 2, \ldots, n$. Then we can do operations like $(2, 1), (3, 1), (6, 1), (7, 1), \ldots$. When $n \bmod 4 = 2$, either of the above two methods can be adapted to work because we are allowed $1$ "extra" operation. Now that we have the solution for the line case, let's divide into $2$ cases based on the parity of the diameter (the maximum number of nodes on a path): diameter $\bmod\ 2 = 1$: find the centre of the diameter; then we can simply do operations of the form $(centre, i)$ for all $0 \le i \le \lfloor \frac{diameter}{2} \rfloor$. If this doesn't colour all nodes, then one can easily check that the diameter we found is not a true diameter, as any uncoloured node would be an endpoint of a longer path. diameter $\bmod\ 2 = 0$: find the $2$ centres of the diameter; then the following set of operations satisfies the requirements: $(centre_1, i)$ and $(centre_2, i)$ for all odd $i$ satisfying $1 \le i \le \frac{diameter}{2}$. The intuition behind this is to split the nodes into $2$ sets according to a bicolouring: one centre colours all nodes of one colour, while the other centre colours all nodes of the other colour.
|
[
"constructive algorithms",
"dfs and similar",
"greedy",
"trees"
] | 2,300
|
#include <bits/stdc++.h>
using namespace std;
#define int long long
#define INF (int)1e18
#define f first
#define s second
mt19937_64 RNG(chrono::steady_clock::now().time_since_epoch().count());
void Solve()
{
int n;
cin >> n;
vector<vector<int>> E(n);
for (int i = 1; i < n; i++){
int u, v; cin >> u >> v;
u--; v--;
E[u].push_back(v);
E[v].push_back(u);
}
auto bfs = [&](int s){
vector<int> d(n, -1);
d[s] = 0;
queue<int> Q;
Q.push(s);
while (!Q.empty()){
int v = Q.front();
Q.pop();
for (int w : E[v]){
if (d[w] == -1){
d[w] = d[v] + 1;
Q.push(w);
}
}
}
return d;
};
vector<int> d1 = bfs(0);
int a = max_element(d1.begin(), d1.end()) - d1.begin();
vector<int> d2 = bfs(a);
int b = max_element(d2.begin(), d2.end()) - d2.begin();
vector<int> d3 = bfs(b);
int diam = d3[max_element(d3.begin(), d3.end()) - d3.begin()] + 1;
//if 3 we want 1, 1 if 4 we want 1 2
vector <int> ans;
for (int i = 0; i < n; i++){
if ((d2[i] + d3[i] == diam - 1) && ((d2[i] == diam/2) || (d3[i] == diam/2)))
ans.push_back(i);
}
if (diam & 1) assert(ans.size() == 1);
else assert(ans.size() == 2);
vector <pair<int, int>> ok;
if (diam & 1){
//print everything from 0 to diam/2
for (int i = 0; i <= diam/2; i++){
ok.push_back({ans[0], i});
}
} else {
//2 => 2 ops, 4 => 2 ops , 6 => 4 ops, 8 => 4 ops
int ops = ((n - 2)/4) + 1;
int need = (diam/2) - 1;
while (need >= 0){
ok.push_back({ans[0], need});
ok.push_back({ans[1], need});
need -= 2;
}
}
cout << ok.size() << "\n";
for (auto [u, r] : ok){
cout << u + 1 << " " << r << "\n";
}
}
int32_t main()
{
auto begin = std::chrono::high_resolution_clock::now();
ios_base::sync_with_stdio(0);
cin.tie(0);
int t = 1;
cin >> t;
for(int i = 1; i <= t; i++)
{
//cout << "Case #" << i << ": ";
Solve();
}
auto end = std::chrono::high_resolution_clock::now();
auto elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin);
// cerr << "Time measured: " << elapsed.count() * 1e-9 << " seconds.\n";
return 0;
}
|
1943
|
D1
|
Counting Is Fun (Easy Version)
|
\textbf{This is the easy version of the problem. The only difference between the two versions is the constraint on $n$. You can make hacks only if both versions of the problem are solved.}
An array $b$ of $m$ non-negative integers is said to be good if all the elements of $b$ can be made equal to $0$ using the following operation some (possibly, zero) times:
- Select two \textbf{distinct} indices $l$ and $r$ ($1 \leq l \textcolor{red}{<} r \leq m$) and subtract $1$ from all $b_i$ such that $l \leq i \leq r$.
You are given two positive integers $n$, $k$ and a prime number $p$.
Over all $(k+1)^n$ arrays of length $n$ such that $0 \leq a_i \leq k$ for all $1 \leq i \leq n$, count the number of good arrays.
Since the number might be too large, you are only required to find it modulo $p$.
|
Try to come up with a necessary and sufficient condition for a good array; apply dp once you have one. All operations which include $i$ must also include either $i - 1$ or $i + 1$, hence $a_i \le a_{i - 1} + a_{i + 1}$ must hold. Throughout the editorial, treat $a_i = 0$ for $i \le 0$ or $i > n$. But is this sufficient? In fact it is, and we can prove it by strong induction on the sum of the array. The base cases (sum $= 0, 1$ or $2$) are all trivial. Now assume that the condition is sufficient for all arrays with sum $< x$ ($x \ge 3$), and consider some array $a_1, a_2, \ldots, a_n$ with sum $x$. Let $a_i$ be the first non-zero element of $a$ (observe that $a_{i + 1}$ can't be $0$, since otherwise $a_i \le a_{i-1} + a_{i+1} = 0$ would be violated). We claim that operating on either $[i, i + 1]$ or $[i, i + 2]$ still satisfies the condition $a_j \le a_{j - 1} + a_{j + 1}$ for all $j$. Let's check it: the only time the $[i, i + 1]$ operation causes an issue is when $a_{i + 2} > a_{i + 1} - 1 + a_{i + 3}$, which in particular implies $a_{i + 2} > a_{i + 3}$; but then $a_{i + 3} \le a_{i + 2} - 1$, and so $a_{i + 3} \le a_{i + 2} - 1 + a_{i + 4}$, meaning $[i, i + 2]$ is valid. Now that we have the condition, we can define a dynamic program as follows: $dp(i, a, b)$ = number of ways to fill the array up to the $i$-th index with $a_{i - 1} = a$ and $a_i = b$ (only the last $2$ elements are relevant). Let the new element be $c$; the transition is valid iff $b \le a + c$. Iterating over all possibilities gives an $\mathcal{O}(n^4)$ solution. As a final optimization, we can speed it up to $\mathcal{O}(n^3)$ using prefix sums: the valid values of $a$ for fixed $b$ and $c$ satisfy $a \ge \max(0, b - c)$, so maintaining prefix sums over $a$ suffices.
|
[
"brute force",
"combinatorics",
"dp",
"math"
] | 2,400
|
#include <bits/stdc++.h>
using namespace std;
#define int long long
#define INF (int)1e18
#define f first
#define s second
mt19937_64 RNG(chrono::steady_clock::now().time_since_epoch().count());
void Solve()
{
int n, k, mod; cin >> n >> k >> mod;
vector<vector<int>> dp(k + 1, vector<int>(k + 1, 0));
dp[0][0] = 1; // dp[a][b]
for (int i = 1; i <= n + 2; i++){
vector<vector<int>> ndp(k + 1, vector<int>(k + 1, 0)); // dp[b][c]
vector<vector<int>> pref(k + 1, vector<int>(k + 1, 0)); // pref[b][a]
for (int b = 0; b <= k; b++){
pref[b][0] = dp[0][b];
for (int a = 1; a <= k; a++){
pref[b][a] = (pref[b][a - 1] + dp[a][b]) % mod;
}
}
for (int b = 0; b <= k; b++){
for (int c = 0; c <= k; c++){
if (b > c){
// a must be atleast b - c
ndp[b][c] = (pref[b][k] - pref[b][b - c - 1] + mod) % mod;
} else {
ndp[b][c] = pref[b][k];
}
}
}
dp = ndp;
}
cout << dp[0][0] << "\n";
}
int32_t main()
{
auto begin = std::chrono::high_resolution_clock::now();
ios_base::sync_with_stdio(0);
cin.tie(0);
int t = 1;
// freopen("in", "r", stdin);
// freopen("out", "w", stdout);
cin >> t;
for(int i = 1; i <= t; i++)
{
//cout << "Case #" << i << ": ";
Solve();
}
auto end = std::chrono::high_resolution_clock::now();
auto elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin);
cerr << "Time measured: " << elapsed.count() * 1e-9 << " seconds.\n";
return 0;
}
|
1943
|
D2
|
Counting Is Fun (Hard Version)
|
\textbf{This is the hard version of the problem. The only difference between the two versions is the constraint on $n$. You can make hacks only if both versions of the problem are solved.}
An array $b$ of $m$ non-negative integers is said to be good if all the elements of $b$ can be made equal to $0$ using the following operation some (possibly, zero) times:
- Select two \textbf{distinct} indices $l$ and $r$ ($1 \leq l \textcolor{red}{<} r \leq m$) and subtract $1$ from all $b_i$ such that $l \leq i \leq r$.
You are given two positive integers $n$, $k$ and a prime number $p$.
Over all $(k+1)^n$ arrays of length $n$ such that $0 \leq a_i \leq k$ for all $1 \leq i \leq n$, count the number of good arrays.
Since the number might be too large, you are only required to find it modulo $p$.
|
Try to apply the Principle of Inclusion-Exclusion (PIE). You do not need to store both of the last two elements — only the last one is enough. Since our previous dp has $n^3$ states, we will have to optimize the number of states somehow. Let us consider all arrays and not just good arrays. An element is bad if $a_i > a_{i - 1} + a_{i + 1}$. Suppose $f(x_1, x_2, \ldots, x_k)$ gives us the number of arrays where the distinct positions $x_1, \ldots, x_k$ are all bad. (Note that other positions may be bad as well.) Then, by PIE, the answer is $\sum (-1)^k \cdot f(x_1, x_2, \ldots, x_k)$, summed over all sets of distinct positions. For example, for $n = 3$, we would compute $f([]) - f([1]) - f([2]) - f([3]) + f([1, 2]) + f([1, 3]) + f([2, 3]) - f([1, 2, 3])$. Note that $f([])$ is simply $(k + 1)^n$, as no restrictions are placed on the array. This is $2^n$ computations, so we need to optimize. First optimization: obviously, only the number of fixed bad positions matters and not the exact indices, so we only have to maintain the count of indices we have forced to be bad. Second optimization: only the parity of that count matters, since the only dependence on $k$ is the factor $(-1)^k$. We now define $dp(i, last, x)$ = number of arrays such that $a_i = last$ and the parity of the bad elements (that we know of) up to $i$ is $x$. Transitions: Without (necessarily) creating a bad element: $dp(i, last, x) \mathrel{+}= dp(i - 1, y, x)$ for all $0 \le y \le k$. We might accidentally create more bad elements, but remember that PIE allows us not to worry about that. With creating a bad element: we view it as a transition from index $i - 2$; for position $i - 1$ to be bad we need $a_{i - 1} > a_i + a_{i - 2}$, so fix $a_i = l_1$ and $a_{i - 2} = l_2$, and we get $dp(i, l_1, x) \mathrel{+}= dp(i - 2, l_2, 1 - x) \cdot \max(0, k - l_1 - l_2)$.
The $\max(0, k - l_1 - l_2)$ term counts the ways to choose $a_{i - 1}$ such that $l_1 + l_2 < a_{i - 1} \le k$. Both transitions can be optimized with prefix sums and running totals to get an $\mathcal{O}(n^2)$ solution.
|
[
"combinatorics",
"dp"
] | 2,800
|
#include <bits/stdc++.h>
using namespace std;
#define int long long
#define INF (int)1e18
#define f first
#define s second
mt19937_64 RNG(chrono::steady_clock::now().time_since_epoch().count());
void Solve()
{
int n, k, mod; cin >> n >> k >> mod;
vector<vector<int>> dp(2, vector<int>(k + 1, 0));
auto prev2 = dp;
dp[0][0] = 1;
for (int i = 1; i <= n + 1; i++){
vector<vector<int>> ndp(2, vector<int>(k + 1, 0));
vector<int> sum(2, 0);
for (int j = 0; j < 2; j++) for (int x = 0; x <= k; x++){
sum[j] += dp[j][x]; sum[j] %= mod;
}
for (int j = 0; j < 2; j++){
int s1 = 0, s2 = 0;
for (int x = k; x >= 0; x--){
ndp[j][x] += sum[j]; // normal transition
ndp[j][x] += s2; ndp[j][x] %= mod; // with one wrong
s1 += prev2[j ^ 1][k - x]; s1 %= mod;
s2 += s1; s2 %= mod;
}
}
prev2 = dp;
dp = ndp;
}
int ans = (dp[0][0] - dp[1][0] + mod) % mod;
cout << ans << "\n";
}
int32_t main()
{
auto begin = std::chrono::high_resolution_clock::now();
ios_base::sync_with_stdio(0);
cin.tie(0);
int t = 1;
// freopen("in", "r", stdin);
// freopen("out", "w", stdout);
cin >> t;
for(int i = 1; i <= t; i++)
{
//cout << "Case #" << i << ": ";
Solve();
}
auto end = std::chrono::high_resolution_clock::now();
auto elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin);
cerr << "Time measured: " << elapsed.count() * 1e-9 << " seconds.\n";
return 0;
}
|
1943
|
E1
|
MEX Game 2 (Easy Version)
|
\textbf{This is the easy version of the problem. The only difference between the two versions is the constraint on $t$, $m$ and the sum of $m$. You can make hacks only if both versions of the problem are solved.}
Alice and Bob play yet another game on an array $a$ of size $n$. Alice starts with an empty array $c$. Both players take turns playing, with Alice starting first.
On Alice's turn, she picks one element from $a$, appends that element to $c$, and then deletes it from $a$.
On Bob's turn, he picks at most $k$ elements from $a$, and then deletes them from $a$.
The game ends when the array $a$ is empty. Alice's score is defined to be the MEX$^\dagger$ of $c$. Alice wants to maximize her score while Bob wants to minimize it. Find Alice's final score if both players play optimally.
The array will be given in compressed format. Instead of giving the elements present in the array, we will be giving their frequencies. Formally, you will be given $m$, the maximum element in the array, and then $m + 1$ integers $f_0, f_1, \ldots, f_m$, where $f_i$ represents the number of times $i$ occurs in the array $a$.
$^\dagger$ The $\operatorname{MEX}$ (minimum excludant) of an array of integers is defined as the smallest non-negative integer which does not occur in the array. For example:
- The MEX of $[2,2,1]$ is $0$, because $0$ does not belong to the array.
- The MEX of $[3,1,0,1]$ is $2$, because $0$ and $1$ belong to the array, but $2$ does not.
- The MEX of $[0,3,1,2]$ is $4$, because $0$, $1$, $2$ and $3$ belong to the array, but $4$ does not.
|
It might be optimal for Bob to reduce multiple elements at the same time, thus making Alice choose which element she wants to take. Suppose you are only checking whether $ans \ge i$ for now; what would Alice's strategy be? Try fixing the element that Bob wants to make sure Alice is not able to get. What would his strategy be then? For now, let's try to check whether $ans \ge i$. This is equivalent to Alice being able to take $1$ occurrence of everything when the game is restricted to the numbers $[0, i - 1]$. Alice's strategy here is actually easy to find: at every step, Alice will choose the minimum $f_j$ such that $0 \le j < i$ and she hasn't chosen $j$ yet. You can justify this with greedy-stays-ahead, an exchange argument, whatever you want. This gives us a nice description of Alice's moves; however, we seemingly have to maintain the sorted sequence of the $f_i$ at all times. But we can actually rewrite Bob's moves so that they never disturb the sorted order of $f$ and always keep it sorted. Here, by sorted order, we mean some permutation $p = [p_1, p_2, \ldots, p_i]$ of $[0, i - 1]$ such that $f_{p_a} \le f_{p_b}$ whenever $a \le b$. First, instead of subtracting $k$ at once, we perform $k$ subtractions of $1$. Then the only way the sorted order can be destroyed is when there exist $k_1$ and $k_2$ such that $f_{k_1} = f_{k_2}$, we operate on $k_1$, but $k_2$ occurs before $k_1$ in the sorted order. This issue is fixed by simply performing the operation on the smallest $x$ (according to the sorted order) such that $f_x = f_{k_1}$. Now we have a good way of representing Alice's moves. Suppose we fix the element that Bob "wins" on. Then Bob's strategy is obviously to make the frequency of that element as small as possible, while making sure never to violate the sorted order. Since Bob will make at most $m$ moves, you can just simulate his moves.
The main detail of the simulation is figuring out up to what index all values become equal (or nearly equal, off by $1$) when doing $k$ operations, and first taking all elements to that state. Let $w$ be the operations remaining out of the $k$ after this, and $l$ the length of the equal sequence. Then reduce every frequency by $\lfloor w / l \rfloor$, and finally reduce the first $w \bmod l$ numbers of this sequence. Check the code for more details. A naive implementation takes $O(m)$ per move, so $O(m^2)$ per fixed element, i.e. $O(m^3)$ in total to check $ans \ge i$. With a binary search on the answer, you get $O(m^3 \log m)$. It can be optimized further, but that wasn't needed to pass; most other polynomial solutions should pass this version.
|
[
"binary search",
"brute force",
"greedy"
] | 2,900
|
#include <bits/stdc++.h>
using namespace std;
#define int long long
#define INF (int)1e18
#define f first
#define s second
mt19937_64 RNG(chrono::steady_clock::now().time_since_epoch().count());
void Solve()
{
int n, k; cin >> n >> k;
vector <int> a(n + 1);
for (auto &x : a) cin >> x;
auto check = [&](int x){
vector <int> b;
for (int i = 0; i < x; i++){
b.push_back(a[i] - k);
}
sort(b.begin(), b.end());
for (int fix = 1; fix < x; fix++){
// this is element where bob wins
deque <int> c;
for (int i = 0; i <= fix; i++){
c.push_back(b[i]);
}
assert(c.size() >= 2);
while (c.size() != 2){
c.pop_front();
// find suffix which works
int sz = c.size();
int works = 0;
int sum = 0;
for (int j = 1; j < sz; j++){
// sum(elements of c - current element)
// this shud be >= k
sum += c[sz - j];
int loss = sum - c[sz - j - 1] * j;
if (loss >= k){
works = sz - j;
break;
}
}
int have = k;
// make everything = c[works]
for (int j = works + 1; j < sz; j++){
have -= c[j] - c[works];
c[j] = c[works];
}
assert(have >= 0);
for (int j = works; j < sz; j++){
c[j] -= have / (sz - works);
}
have %= (sz - works);
for (int j = works; j < sz; j++){
if (have){
c[j]--;
have--;
}
}
for (int j = 0; j < sz - 1; j++){
assert(c[j] <= c[j + 1]);
}
}
c.pop_front();
if (c[0] <= 0) return false;
}
return true;
};
int l = 1, r = n + 1;
while (l != r){
int mid = (l + r + 1) / 2;
if (check(mid)) l = mid;
else r = mid - 1;
}
cout << l << "\n";
}
int32_t main()
{
auto begin = std::chrono::high_resolution_clock::now();
ios_base::sync_with_stdio(0);
cin.tie(0);
int t = 1;
// freopen("in", "r", stdin);
// freopen("out", "w", stdout);
cin >> t;
for(int i = 1; i <= t; i++)
{
//cout << "Case #" << i << ": ";
Solve();
}
auto end = std::chrono::high_resolution_clock::now();
auto elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin);
cerr << "Time measured: " << elapsed.count() * 1e-9 << " seconds.\n";
return 0;
}
|
1943
|
E2
|
MEX Game 2 (Hard Version)
|
\textbf{This is the hard version of the problem. The only difference between the two versions is the constraint on $t$, $m$ and the sum of $m$. You can make hacks only if both versions of the problem are solved.}
Alice and Bob play yet another game on an array $a$ of size $n$. Alice starts with an empty array $c$. Both players take turns playing, with Alice starting first.
On Alice's turn, she picks one element from $a$, appends that element to $c$, and then deletes it from $a$.
On Bob's turn, he picks at most $k$ elements from $a$, and then deletes them from $a$.
The game ends when the array $a$ is empty. Alice's score is defined to be the MEX$^\dagger$ of $c$. Alice wants to maximize her score while Bob wants to minimize it. Find Alice's final score if both players play optimally.
The array will be given in compressed format. Instead of giving the elements present in the array, we will be giving their frequencies. Formally, you will be given $m$, the maximum element in the array, and then $m + 1$ integers $f_0, f_1, \ldots, f_{m}$, where $f_i$ represents the number of times $i$ occurs in the array $a$.
$^\dagger$ The $\operatorname{MEX}$ (minimum excludant) of an array of integers is defined as the smallest non-negative integer which does not occur in the array. For example:
- The MEX of $[2,2,1]$ is $0$, because $0$ does not belong to the array.
- The MEX of $[3,1,0,1]$ is $2$, because $0$ and $1$ belong to the array, but $2$ does not.
- The MEX of $[0,3,1,2]$ is $4$, because $0$, $1$, $2$ and $3$ belong to the array, but $4$ does not.
|
Instead of doing the check for $ans \geq i$ in $O(m^3)$, we will do it in $O(m)$. Consider an array $f$ of length $n$, and let $s=f_1+f_2+\ldots+f_n$ be the sum of its elements. $f$ will be called flat if $f_i = \lfloor \frac{s+i-1}{n} \rfloor$, that is, it has the form $x, \ldots, x, x+1, \ldots, x+1$. Every flat array can be characterized by the two integers $(n,s)$ alone. Imagine simulating Bob's strategy without simulating Alice's moves of removing the first element of the array. For some prefix of the moves, the actual simulation will be a suffix of this simulation. This is because to subtract something from an index $\leq i$, we must have $f_{i+1} = f_{i+2} = \ldots = f_n$. As an example, let $k=4$. With Alice's moves: $[1,2,3,5,5] \to [2,3,3,3] \to [1,2,2]$. Without Alice's moves: $[1,2,3,5,5] \to [1,2,3,3,3] \to [1,1,2,2,2]$. $[1,2,2]$ is not a suffix of $[1,1,2,2,2]$, and this is the first time such a thing happens. Suppose that the first time this happens is after $p$ moves. Then the resulting array is the flat array $(n-p,\ f_{p+1}+f_{p+2}+\ldots+f_n-pk)$. To find the necessary value of $p$, we can binary search or run two pointers to check whether the suffix of the array can reach $f_p$ with the amount of subtraction performed up to that point. (What we basically did is find the suffix of the array that actually gets operated on, since that makes the problem much easier to solve.) Then, since the flat array $(n,s)$ becomes $(n-1,s-\lfloor \frac{s}{n} \rfloor - k)$, we can determine whether each flat array eventually becomes a losing or winning state, as we can calculate, for each $n$, the minimum $s$ such that $(n,s)$ is a winning state.
|
[
"binary search",
"greedy",
"two pointers"
] | 3,300
|
#include <bits/stdc++.h>
using namespace std;
#define int long long
#define ll long long
#define ii pair<ll,ll>
#define iii pair<ii,ll>
#define fi first
#define se second
#define endl '\n'
#define debug(x) cout << #x << ": " << x << endl
#define pub push_back
#define pob pop_back
#define puf push_front
#define pof pop_front
#define lb lower_bound
#define ub upper_bound
#define rep(x,start,end) for(int x=(start)-((start)>(end));x!=(end)-((start)>(end));((start)<(end)?x++:x--))
#define all(x) (x).begin(),(x).end()
#define sz(x) (int)(x).size()
mt19937 rng(chrono::system_clock::now().time_since_epoch().count());
int n,k;
int arr[200005];
int temp[200005];
bool solve(vector<int> v){
sort(all(v));
rep(x,0,sz(v)) temp[x]=1e18;
int l=1,curr=0;
rep(x,1,sz(v)){
curr+=v[x];
while (l<x && (curr-v[l])-(x-l)*v[l] >= l*k){
curr-=v[l];
l++;
}
temp[x-l]=min(temp[x-l],curr-l*k);
}
rep(x,sz(v),1) temp[x-1]=min(temp[x-1],(temp[x]-temp[x]/(x+1))-k);
return temp[0]>0;
}
signed main(){
ios::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
cin.exceptions(ios::badbit | ios::failbit);
int TC;
cin>>TC;
while (TC--){
cin>>n>>k;
rep(x,0,n+1) cin>>arr[x];
// solve(vector<int>(arr,arr+n+1));
// continue;
int lo=0,hi=n+2,mi;
while (hi-lo>1){
mi=hi+lo>>1;
if (solve(vector<int>(arr,arr+mi))) lo=mi;
else hi=mi;
}
cout<<lo<<endl;
}
}
|
1943
|
F
|
Minimum Hamming Distance
|
You are given a binary string$^\dagger$ $s$ of length $n$.
A binary string $p$ of the same length $n$ is called \textbf{good} if for every $i$ ($1 \leq i \leq n$), there exist indices $l$ and $r$ such that:
- $1 \leq l \leq i \leq r \leq n$
- $s_i$ is a mode$^\ddagger$ of the string $p_lp_{l+1}\ldots p_r$
You are given another binary string $t$ of length $n$. Find the minimum Hamming distance$^\S$ between $t$ and any \textbf{good} string $g$.
$^\dagger$ A binary string is a string that only consists of characters $\mathtt{0}$ and $\mathtt{1}$.
$^\ddagger$ Character $c$ is a mode of string $p$ of length $m$ if the number of occurrences of $c$ in $p$ is at least $\lceil \frac{m}{2} \rceil$. For example, $\mathtt{0}$ is a mode of $\mathtt{010}$, $\mathtt{1}$ is not a mode of $\mathtt{010}$, and both $\mathtt{0}$ and $\mathtt{1}$ are modes of $\mathtt{011010}$.
$^\S$ The Hamming distance of strings $a$ and $b$ of length $m$ is the number of indices $i$ such that $1 \leq i \leq m$ and $a_i \neq b_i$.
|
Assumption: $\mathtt{0}$ is a mode of string $t$. If $\mathtt{1}$ occurs more times than $\mathtt{0}$ in $t$, we flip all characters of $s$ and $t$ so that $\mathtt{0}$ is a mode of $t$. Let us call index $i$ nice if there exist $l$ and $r$ ($1 \le l \le i \le r \le n$) such that $s_i$ is a mode of the substring $t[l,r]$. So we might have some not-nice indices $i$ with $s_i = \mathtt{1}$. We will never have a not-nice index $i$ with $s_i = \mathtt{0}$, as $\mathtt{0}$ is a mode of $t$. So we need to fix the not-nice indices, and we will do so by changing some $\mathtt{0}$ s to $\mathtt{1}$ s. It is therefore possible that in the final $t$, the frequency of $\mathtt{1}$ exceeds that of $\mathtt{0}$, and $\mathtt{0}$ is no longer a mode. So should we worry about indices $i$ that were nice in the beginning becoming not nice after our flips? No! That can never happen. In case $\mathtt{1}$ occurs more times than $\mathtt{0}$ in the updated $t$, we will have $frequency[\mathtt{1}] = frequency[\mathtt{0}] + 1$ and $t_1 = t_n = \mathtt{1}$ (such cases arise for pairs like $s = \mathtt{011}$, $t = \mathtt{100}$; for this pair the final $t$ should be $t = \mathtt{101}$). Then the substrings $t[1, n - 1]$ and $t[2, n]$ both have equal numbers of $\mathtt{0}$ and $\mathtt{1}$, and thus all indices are nice. So our claim is that we should change some $\mathtt{0}$ s to $\mathtt{1}$ s, without worrying that initially nice indices $i$ with $s_i = \mathtt{0}$ become not nice. We can use dynamic programming here: let $dp[i][j]$ be the minimum number of flips required to make $t[1, i]$ a friend of $s[1, i]$ such that the maximum suffix sum is $j$. String $x$ is called a friend of string $y$ ($|x| = |y|$) if for every $i$ ($1 \le i \le |x|$), there exist indices $l$ and $r$ such that: 1. $1 \le l \le i \le r \le |x|$ 2.
$x_i$ is a mode of the string $y[l,r]$. Please read hints $1$ and $2$ if you haven't, as they contain some claims and definitions. Note that when we compute a sum, we add $1$ for each $\mathtt{1}$ and subtract $1$ for each $\mathtt{0}$. Suppose we have found the $dp$ values for the first $i - 1$ indices, and we want to find $dp[i][j]$ for $0 \le j \le n$. Now we need to perform the transitions. Let us aim for an $O(n^3)$ solution first, which we can optimise after making some observations. Take some $l$ ($0 \leq l \leq i - 1$). We iterate over $suff$_$sum = 0$ to $n$, where $suff$_$sum$ is the maximum suffix sum of the substring $t[1, l]$, and use $dp[l][suff$_$sum]$ to find optimal values of $dp[i][x]$ for some $x$. Since $s[1, l]$ and $t[1, l]$ are already friends, we only need flips in the substring $t[l + 1, i]$, to make all indices $j$ ($l + 1 \le j \le i$) nice. There are two possibilities, depending on whether $\mathtt{1}$ occurs in the substring $s[l + 1, i]$: If $\mathtt{1}$ does not occur, we can perform the transition without making any flips. If $\mathtt{1}$ does occur, first find the sum (say $cur$_$sum$) of the substring $t[l + 1, i]$. If we flip characters inside $t[l + 1, i]$, $cur$_$sum$ changes accordingly; we perform the minimum number of flips such that the updated sums satisfy $suff$_$sum + cur$_$sum \ge 0$. This minimum number of flips (say $cost$) equals $\lfloor \frac{\max(0, 1 - d)}{2} \rfloor$, where $d = suff$_$sum + initial$_$cur$_$sum$. So we know how many flips to make.
But which ones to flip? Here is one more claim: we should flip only the last $cost$ $\mathtt{0}$ s of the substring $t[l + 1, i]$. This is sufficient, as we can then certainly say that $t[1, i]$ is a friend of $s[1, i]$. So we know the required number of flips, which is $dp[l][suff$_$sum] + cost$. We need to find one more thing: the maximum suffix sum after flipping the last $cost$ $\mathtt{0}$ s of $t[l + 1, i]$, which we can precompute. But we have an issue now: we know that what we performed is sufficient, but is it necessary? What if we did not need to flip $cost$ characters of $t[l + 1, i]$? It might be possible to make fewer flips and still make all indices $l + 1 \le j \le i$ nice: we made sure that $suff$_$sum + cur$_$sum \ge 0$, but perhaps that was not needed — the total sum could be negative while every index $j$ ($l + 1 \le j \le i$) with $s_j = \mathtt{1}$ is satisfied. Here we can use exchange arguments and conclude that all such cases are still covered if we check all pairs $(l, suff$_$sum)$ with $0 \le l, suff$_$sum \le i - 1$. Now we need to optimise this to $O(n^2)$. Notice that when we do the flips, a suffix (possibly empty when $cost = 0$) of $t[l + 1, i]$ consists only of $\mathtt{1}$ s. Suppose we are at index $i$ and need to find $dp[i][j]$ for $0 \le j \le i$. We can iterate over all $j$ ($1 \le j \le i$), assume that all the characters in the substring $t[j, i]$ are $\mathtt{1}$ s, and find the $dp$ values. The maximum suffix sum will be $i-j+1+max$_$suffix$_$sum[j-1]$.
So we can find the smallest index $p$ such that the sum of the elements in substring $t[p,i]$ is greater than or equal to $0$ if we make all the characters in substring $t[j,i]$ equal to $\mathtt{1}$. Notice that we already have the new suffix maximum, and we know the $cost$ too, which is equal to the number of $\mathtt{0}$ s in the original substring $t[j,i]$. So our transition will be $dp[i][new$_$suffix$_$max]=\min(dp[i][new$_$suffix$_$max], \min\limits_{k = p-1}^{i-1} best[k] + cost)$, where $best[i]= \min\limits_{k = 0}^{i} dp[i][k]$. So our final complexity is $O(n^2)$, as we can perform each transition in $O(1)$ after precomputing the needed values.
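The flip-count formula can be sanity-checked directly: each flip of a $\mathtt{0}$ into a $\mathtt{1}$ raises the segment sum by $2$, so $\lfloor \max(0, 1-d)/2 \rfloor$ must be the smallest $cost$ with $d + 2 \cdot cost \ge 0$. A standalone check (not part of the solution):

```python
def min_flips(d):
    # Smallest number of 0->1 flips so that the total sum d becomes >= 0;
    # each flip raises the sum by 2.
    cost = 0
    while d + 2 * cost < 0:
        cost += 1
    return cost

# The closed form from the editorial matches the direct simulation.
for d in range(-10, 11):
    assert min_flips(d) == max(0, 1 - d) // 2
print("formula verified for d in [-10, 10]")
```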
|
[
"dp"
] | 3,500
|
#include <bits/stdc++.h>
#include <ext/pb_ds/tree_policy.hpp>
#include <ext/pb_ds/assoc_container.hpp>
using namespace __gnu_pbds;
using namespace std;
#define ll int
#define pb push_back
#define mp make_pair
#define nline "\n"
#define f first
#define s second
#define pll pair<ll,ll>
#define all(x) x.begin(),x.end()
const ll MOD=998244353;
const ll MAX=10005;
ll dp[MAX][MAX];
ll suffix_min[MAX];
ll suffix_max[MAX];
ll can_go_till[MAX][MAX+5];
void solve(){
ll n; cin>>n;
ll shift=n;
for(ll i=-n;i<=0;i++){
can_go_till[0][shift+i]=0;
}
string s,t; cin>>s>>t;
s=" "+s,t=" "+t;
ll sum=0;
for(ll i=1;i<=n;i++){
sum+=2*(t[i]-'0')-1;
}
suffix_max[0]=0;
if(sum>=0){
for(ll i=1;i<=n;i++){
s[i]='0'+'1'-s[i];
t[i]='0'+'1'-t[i];
}
}
ll max_sum=0;
for(ll i=1;i<=n;i++){
max_sum+=2*(t[i]-'0')-1;
max_sum=max(0,max_sum);
suffix_max[i]=max_sum;
}
for(ll i=1;i<=n;i++){
sum=0;
for(ll j=-n;j<=0;j++){
can_go_till[i][shift+j]=i+1;
}
for(ll j=i;j>=1;j--){
sum+=2*(t[j]-'0')-1;
ll use=min(sum,0);
can_go_till[i][shift+use]=j;
}
for(ll j=n-1;j>=0;j--){
can_go_till[i][j]=min(can_go_till[i][j],can_go_till[i][j+1]);
}
}
for(ll i=0;i<=n+1;i++){
for(ll j=0;j<=n+1;j++){
dp[i][j]=MOD;
}
}
dp[0][0]=0;
vector<ll> best(n+5,MOD);
best[0]=0;
for(ll i=1;i<=n;i++){
for(ll l=0;l<=i-1;l++){
dp[i][l+1]=dp[i-1][l]+(t[i]=='0');
}
for(ll l=0;l<=i;l++){
ll new_sum=l+2*(t[i]-'0')-1;
if(s[i]=='1' and new_sum<=-1){
continue;
}
new_sum=max(0,new_sum);
dp[i][new_sum]=min(dp[i][new_sum],dp[i-1][l]);
}
suffix_min[i]=MOD;
for(ll j=i-1;j>=0;j--){
suffix_min[j]=min(suffix_min[j+1],best[j]);
}
ll cnt=0;
for(ll j=i;j>=1;j--){
cnt+=(t[j]=='0');
ll now=i-j+1;
ll cur_suff_max=now+suffix_max[j-1];
ll pos=max(0,can_go_till[j-1][shift-now]-1);
dp[i][cur_suff_max]=min(dp[i][cur_suff_max],suffix_min[pos]+cnt);
}
for(ll j=0;j<=n;j++){
best[i]=min(best[i],dp[i][j]);
}
}
ll ans=MOD;
s[0]='1';
for(ll i=n;i>=0;i--){
ans=min(ans,best[i]);
if(s[i]=='1'){
cout<<ans<<nline;
return;
}
}
return;
}
int main()
{
ios_base::sync_with_stdio(false);
cin.tie(NULL);
#ifndef ONLINE_JUDGE
freopen("input.txt", "r", stdin);
freopen("output.txt", "w", stdout);
freopen("error.txt", "w", stderr);
#endif
ll test_cases=1;
cin>>test_cases;
while(test_cases--){
solve();
}
cout<<fixed<<setprecision(10);
cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n";
}
|
1944
|
A
|
Destroying Bridges
|
There are $n$ islands, numbered $1, 2, \ldots, n$. Initially, every pair of islands is connected by a bridge. Hence, there are a total of $\frac{n (n - 1)}{2}$ bridges.
Everule lives on island $1$ and enjoys visiting the other islands using bridges. Dominater has the power to destroy at most $k$ bridges to minimize the number of islands that Everule can reach using (possibly multiple) bridges.
Find the minimum number of islands (including island $1$) that Everule can visit if Dominater destroys bridges optimally.
|
What is the minimum number of bridges to burn if we want to make exactly $i$ islands visitable from $1$? At least $i \cdot (n - i)$ bridges need to be burnt (the bridges connecting the $i$ reachable islands and the $n - i$ non-reachable islands). A simple $O(n)$ solution is: for every $i$ from $1$ to $n$, check if $i \cdot (n - i) \le k$, in which case print $i$ and break. What is the answer when $k \ge n - 1$? When $k < n - 1$, is it possible to make any island non-visitable? When $k \ge n - 1$, the answer is $1$, since we can just destroy all bridges $(1, i)$ for $2 \le i \le n$. Otherwise, suppose we tried to make some set of $i$ islands non-visitable, and the other $n - i$ islands reachable from $1$. Then we need to burn at least $i \cdot (n - i)$ bridges (the bridges connecting the two sets). It is not hard to see that this function attains its minimum value at $i = 1$ or $i = n - 1$ for $1 \le i < n$. Hence the minimum number of bridges to burn is always at least $n - 1 > k$, so no island can be cut off and the answer is $n$. The function $f(x) = x \cdot (n - x)$ is a quadratic function in $x$, which attains its maximum value at $x = \frac{n}{2}$ and decreases as the distance from $\frac{n}{2}$ increases. This means that $f(1) = f(n - 1)$, and $f(1) < f(i)$ for all $2 \le i \le n - 2$.
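The $O(n)$ scan from the hint and the closed-form answer can be checked against each other; a minimal sketch (helper names are mine, not from the editorial):

```python
def reachable_islands(n, k):
    # O(n) scan from the hint: the first i with i * (n - i) <= k is achievable.
    for i in range(1, n + 1):
        if i * (n - i) <= k:
            return i
    return n  # i = n always satisfies the check, so this is unreachable

def reachable_islands_fast(n, k):
    # Closed form: isolate island 1 when k >= n - 1, otherwise nothing can be cut off.
    return 1 if k >= n - 1 else n

for n in range(2, 30):
    for k in range(n * n):
        assert reachable_islands(n, k) == reachable_islands_fast(n, k)
print("ok")
```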
|
[
"graphs",
"greedy",
"math"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
int main(){
int t; cin >> t;
while (t--){
int n, k; cin >> n >> k;
if (k >= n - 1) cout << 1 << "\n";
else cout << n << "\n";
}
return 0;
}
|
1944
|
B
|
Equal XOR
|
You are given an array $a$ of length $2n$, consisting of each integer from $1$ to $n$ exactly \textbf{twice}.
You are also given an integer $k$ ($1 \leq k \leq \lfloor \frac{n}{2} \rfloor $).
You need to find two arrays $l$ and $r$ each of length $\mathbf{2k}$ such that:
- $l$ is a subset$^\dagger$ of $[a_1, a_2, \ldots a_n]$
- $r$ is a subset of $[a_{n+1}, a_{n+2}, \ldots a_{2n}]$
- bitwise XOR of elements of $l$ is equal to the bitwise XOR of elements of $r$; in other words, $l_1 \oplus l_2 \oplus \ldots \oplus l_{2k} = r_1 \oplus r_2 \oplus \ldots \oplus r_{2k}$
It can be proved that at least one pair of $l$ and $r$ always exists. If there are multiple solutions, you may output any one of them.
$^\dagger$ A sequence $x$ is a subset of a sequence $y$ if $x$ can be obtained by deleting several (possibly none or all) elements of $y$ and rearranging the elements in any order. For example, $[3,1,2,1]$, $[1, 2, 3]$, $[1, 1]$ and $[3, 2]$ are subsets of $[1, 1, 2, 3]$ but $[4]$ and $[2, 2]$ are not subsets of $[1, 1, 2, 3]$.
|
Group numbers according to how many times they occur in $a[1...n]$. The group of numbers having $0$ occurrences in $a[1...n]$ is of the same size as the group of numbers having $2$ occurrences in $a[1...n]$. Try to use the $0$- and $2$-occurrence numbers first, and then, if we still need more, we can use the $1$-occurrence numbers. Remember that we have to form sequences of size $2 \cdot k$, which is even. We can append any $2$-occurrence numbers (twice each) to our sequence $l$ and any $0$-occurrence numbers (twice each) to our sequence $r$ without any issue, because the xor values will cancel out. We do this while our sequence sizes are less than $2 \cdot k$. At the end of this process, $l$ and $r$ will have the same size due to Hint $2$. Now, we append as many $1$-occurrence numbers to both $l$ and $r$ as needed. Since we append to both sequences, the xor values of the $2$ sequences remain equal. If we had to solve for odd sequence sizes, we could take a $1$-occurrence number at the very start to make it even and then run the same process, but if there are no $1$-occurrence numbers at all, this method fails.
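A compact sketch of this construction (the helper `build_lr` is my naming; it assumes each value from $1$ to $n$ appears exactly twice in $a$, as the statement guarantees):

```python
from collections import Counter
from functools import reduce
from operator import xor

def build_lr(a, n, k):
    # Classify values by how often they appear in the first half a[0:n].
    cnt = Counter(a[:n])
    twice = [v for v in range(1, n + 1) if cnt[v] == 2]
    once = [v for v in range(1, n + 1) if cnt[v] == 1]
    zero = [v for v in range(1, n + 1) if cnt[v] == 0]
    l, r = [], []
    # Duplicated values cancel in xor, so fill with pairs first.
    for v in twice:
        if len(l) + 2 <= 2 * k:
            l += [v, v]
    for v in zero:
        if len(r) + 2 <= 2 * k:
            r += [v, v]
    # Values occurring once in each half go to both sides, keeping xors equal.
    for v in once:
        if len(l) < 2 * k:
            l.append(v)
            r.append(v)
    return l, r

a = [1, 2, 3, 2, 4, 4, 1, 3]  # n = 4, each value appears twice overall
l, r = build_lr(a, 4, 1)
assert len(l) == len(r) == 2
assert reduce(xor, l) == reduce(xor, r)
print(l, r)
```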
|
[
"bitmasks",
"constructive algorithms"
] | 1,100
|
#include <bits/stdc++.h>
using namespace std;
int main(){
int t; cin >> t;
while (t--){
int n, k;
cin >> n >> k;
k = 2 * k;
vector <int> a(2 * n), occ(n + 1, 0);
for (auto &x : a) cin >> x;
for (int i = 0; i < n; i++) occ[a[i]]++;
vector <int> g0, g1, g2;
for (int i = 1; i <= n; i++){
if (occ[i] == 0) g0.push_back(i);
else if (occ[i] == 1) g1.push_back(i);
else g2.push_back(i);
}
int v = 0;
for (auto x : g2){
if (v < k){
v += 2;
cout << x << " " << x << " ";
}
}
for (auto x : g1){
if (v < k){
v++;
cout << x << " ";
}
}
cout << "\n";
v = 0;
for (auto x : g0){
if (v < k){
v += 2;
cout << x << " " << x << " ";
}
}
for (auto x : g1){
if (v < k){
v++;
cout << x << " ";
}
}
cout << "\n";
}
return 0;
}
|
1945
|
A
|
Setting up Camp
|
The organizing committee plans to take the participants of the Olympiad on a hike after the tour. Currently, the number of tents needed to be taken is being calculated. It is known that each tent can accommodate up to $3$ people.
Among the participants, there are $a$ introverts, $b$ extroverts, and $c$ universals:
- Each introvert wants to live in a tent alone. Thus, a tent with an introvert must contain exactly one person — only the introvert himself.
- Each extrovert wants to live in a tent with two others. Thus, the tent with an extrovert must contain exactly three people.
- Each universal is fine with any option (living alone, with one other person, or with two others).
The organizing committee respects the wishes of each participant very much, so they want to fulfill all of them.
Tell us the minimum number of tents needed to be taken so that all participants can be accommodated according to their preferences. If it is impossible to accommodate the participants in a way that fulfills all the wishes, output $-1$.
|
First, let's consider introverts. Since each of them needs exactly one tent, we simply add $a$ to the answer. Then let's consider extroverts. If their number is divisible by 3, we add $\frac{b}{3}$ to the answer. Otherwise, we calculate $d = 3 - b \bmod 3$, where $x \bmod y$ denotes the remainder from dividing $x$ by $y$: we need $d$ universals to fill the last tent with extroverts. If $d > c$, then there is no answer, as it is not possible to accommodate the extroverts in the tents. Otherwise, we subtract $d$ from $c$ and add $\lceil\frac{c}{3}\rceil$ to the answer for the remaining universals. The general formula when there is an answer: $a + \frac{b + d}{3} + \lceil\frac{c - d}{3}\rceil$, where $d = 0$ when $b$ is divisible by $3$. Complexity: $\mathcal{O}(1)$.
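The case analysis collapses into a few lines; a minimal sketch of the formula (the helper name `min_tents` is mine):

```python
import math

def min_tents(a, b, c):
    # d universals are needed to complete the last, partially filled extrovert tent.
    d = (3 - b % 3) % 3
    if d > c:
        return -1  # extroverts cannot all be placed in tents of exactly three
    return a + (b + d) // 3 + math.ceil((c - d) / 3)

assert min_tents(1, 2, 3) == 3   # 1 alone; (2 extroverts + 1 universal); 2 universals
assert min_tents(1, 4, 1) == -1  # one universal cannot complete two extrovert tents
assert min_tents(0, 0, 5) == 2
print("ok")
```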
|
[
"greedy",
"math"
] | 800
|
#include <iostream>
using namespace std;
using ll = long long;
void solve() {
ll single, poly, uni;
cin >> single >> poly >> uni;
ll needPoly = (3 - poly % 3) % 3;
if (poly > 0 && needPoly > uni) {
cout << "-1\n";
return;
}
uni -= needPoly;
poly += needPoly;
ll mn = single + uni / 3 + (uni % 3 + 1) / 2 + poly / 3;
cout << mn << '\n';
}
int32_t main() {
ll T;
cin >> T;
while(T--){
solve();
}
return 0;
}
|
1945
|
B
|
Fireworks
|
One of the days of the hike coincided with a holiday, so in the evening at the camp, it was decided to arrange a festive fireworks display. For this purpose, the organizers of the hike bought two installations for launching fireworks and a huge number of shells for launching.
Both installations are turned on simultaneously. The first installation launches fireworks every $a$ minutes (i.e., after $a, 2 \cdot a, 3 \cdot a, \dots$ minutes after launch). The second installation launches fireworks every $b$ minutes (i.e., after $b, 2 \cdot b, 3 \cdot b, \dots$ minutes after launch).
Each firework is visible in the sky for $m + 1$ minutes after launch, i.e., if a firework was launched after $x$ minutes after the installations were turned on, it will be visible every minute from $x$ to $x + m$, inclusive. If one firework was launched $m$ minutes after another, both fireworks will be visible for one minute.
What is the maximum number of fireworks that could be seen in the sky at the same time?
|
Let's consider the moment in time $T = \text{LCM}(a, b)$ (least common multiple of numbers $a$ and $b$). It is easy to notice that since $T > 0$ and $T$ is divisible by both $a$ and $b$, at this moment both installations will launch a firework, each of which will disappear after $m + 1$ minutes. Let's look at the sky at the $(T + m)$-th minute. We will still see the two fireworks launched at the moment $T$; since then the first installation has managed to launch $\lfloor\frac{m}{a}\rfloor$ more fireworks, and the second, similarly, $\lfloor\frac{m}{b}\rfloor$ more (this follows from the fact that $T$ is divisible by $a$ and $b$). Taking into account the two fireworks launched at the moment $T$, we reach the answer $\lfloor\frac{m}{a}\rfloor + \lfloor\frac{m}{b}\rfloor + 2$. Why can't we achieve a greater answer? Let's consider the two setups for launching fireworks independently. Without loss of generality, consider the first setup (launching fireworks every $a$ minutes), and consider a moment when its fireworks are visible in the sky. Let the earliest of the visible ones have been launched at moment $x$; every visible firework of this setup was launched within the window $[x, x + m]$, at one of the moments $x, x + a, \dots, x + a \cdot \lfloor\frac{m}{a}\rfloor$. Thus, from the first setup, we can see no more than $\lfloor\frac{m}{a}\rfloor + 1$ fireworks simultaneously. The same formula is applicable to the second setup. We have proved that the answer does not exceed $\lfloor\frac{m}{a}\rfloor + 1 + \lfloor\frac{m}{b}\rfloor + 1 = \lfloor\frac{m}{a}\rfloor + \lfloor\frac{m}{b}\rfloor + 2$, so that is the answer. Complexity: $\mathcal{O}(1)$.
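The formula can be cross-checked against a direct simulation for small parameters; a standalone brute force (helper names are mine):

```python
from math import lcm

def max_visible(a, b, m, horizon):
    # Brute force: a firework launched at x is visible during [x, x + m].
    best = 0
    for t in range(1, horizon + 1):
        vis = sum(1 for x in range(a, t + 1, a) if x <= t <= x + m)
        vis += sum(1 for x in range(b, t + 1, b) if x <= t <= x + m)
        best = max(best, vis)
    return best

# Simulating up to LCM(a, b) + m suffices: the maximum is attained there.
for a in range(1, 6):
    for b in range(1, 6):
        for m in range(8):
            assert max_visible(a, b, m, lcm(a, b) + m + 1) == m // a + m // b + 2
print("ok")
```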
|
[
"math",
"number theory"
] | 900
|
t = int(input())
for qi in range(t):
a, b, m = [int(x) for x in input().split()]
ans = m // a + m // b + 2
print(ans)
|
1945
|
C
|
Left and Right Houses
|
In the village of Letovo, there are $n$ houses. The villagers decided to build a big road that will divide the village into left and right sides. Each resident wants to live on either the right or the left side of the street, which is described as a sequence $a_1, a_2, \dots, a_n$, where $a_j = 0$ if the resident of the $j$-th house wants to live on the left side of the street; otherwise, $a_j = 1$.
The road will pass between two houses. The houses to the left of it will be declared the left-side, and the houses to the right will be declared the right-side. More formally, let the road pass between houses $i$ and $i+1$. Then the houses at positions between $1$ and $i$ will be on the \textbf{left} side of the street, and at positions between $i+1$ and $n$ will be on the \textbf{right} side. The road also \textbf{may} pass before the first and after the last house; in this case, the entire village is declared to be either the right or left side, respectively.
To make the design fair, it was decided to lay the road so that at least half of the residents on each side of the village are satisfied with the choice. That is, among $x$ residents on one side, at least $\lceil\frac{x}{2}\rceil$ should want to live on that side, where $\lceil x \rceil$ denotes rounding up a real number $x$.
\begin{center}
{\small To the left of the road, there will be $i$ houses, among the corresponding $a_j$ there must be at least $\lceil\frac{i}{2}\rceil$ zeros. To the right of the road, there will be $n-i$ houses, among the corresponding $a_j$ there must be at least $\lceil\frac{n-i}{2}\rceil$ ones.}
\end{center}
Determine after which house $i$ the road should be laid in order to satisfy the described condition and be as close to the middle of the village as possible. Formally, among all suitable positions $i$, minimize $\left|\frac{n}{2} - i\right|$.
If there are multiple suitable positions $i$ with the minimum $\left|\frac{n}{2} - i\right|$, output the smaller one.
|
According to the statement, to the left of the road there should be no fewer elements with $a_i = 0$ than with $a_i = 1$, and to the right of the road there should be no fewer elements with $a_i = 1$ than with $a_i = 0$. We will consider each position of the road and check the compliance with the road design condition. To do this, we will use the prefix sum method to access the number of $1$'s in the suffix in $\mathcal{O}(1)$ (the number of indices $i$ such that $i > x$ and $a_i = 1$, for any $x$). We will also maintain the count of $0$'s among the elements to the left of the road and the optimal answer. If the road position $x$ is suitable and it is closer to the middle than the best answer found before, we update it (and do not forget to increase the count of $0$'s if the next element $a_{x + 1} = 0$). It is convenient to double the distance to the middle of the village: instead of $\left|\frac{n}{2} - i\right|$, consider $2\left|\frac{n}{2} - i\right| = \left|n - 2\cdot i\right|$. This way, we can get rid of calculations in non-integer numbers. Complexity: $\mathcal{O}(n)$
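The doubling trick is an exact integer rescaling, so it never changes which position compares as closer; a quick standalone check:

```python
# |n/2 - i| and |n - 2i| order positions identically: the second is exactly
# twice the first, and half-integers are represented exactly as floats here.
for n in range(1, 20):
    for i in range(n + 1):
        assert abs(n - 2 * i) == 2 * abs(n / 2 - i)
print("ok")
```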
|
[
"brute force"
] | 1,200
|
for case in range(int(input())):
n = int(input())
a = input()
suf_cnt = [0] * (n + 1)
for i in range(n - 1, -1, -1):
suf_cnt[i] = suf_cnt[i + 1] + (a[i] == '1')
pref_cnt = 0
opt_ans = -1
opt_dist = n * 2
for i in range(n + 1):
if pref_cnt >= (i + 1) // 2 and suf_cnt[i] >= (n - i + 1) // 2 and abs(n - 2 * i) < opt_dist:
opt_dist = abs(n - 2 * i)
opt_ans = i
if i != n:
pref_cnt += (a[i] == '0')
print(opt_ans)
|
1945
|
D
|
Seraphim the Owl
|
The guys lined up in a queue of $n$ people, starting with person number $i = 1$, to ask Seraphim the Owl about the meaning of life. Unfortunately, Kirill was very busy writing the legend for this problem, so he arrived a little later and stood at the end of the line after the $n$-th person. Kirill is completely dissatisfied with this situation, so he decided to bribe some people ahead of him.
For the $i$-th person in the queue, Kirill knows two values: $a_i$ and $b_i$. If at the moment Kirill is standing at position $i$, then he can choose any position $j$ such that $j < i$ and exchange places with the person at position $j$. In this case, Kirill will have to pay him $a_j$ coins. And for each $k$ such that $j < k < i$, Kirill will have to pay $b_k$ coins to the person at position $k$. Kirill can perform this action any number of times.
Kirill is thrifty, so he wants to spend as few coins as possible, but he doesn't want to wait too long, so Kirill believes he should be among the first $m$ people in line.
Help Kirill determine the minimum number of coins he will have to spend in order to not wait too long.
|
Let's consider a greedy approach. Suppose we are standing at position $i$. Find the largest $j$ such that $j < i$ and $a_j < b_j$. If such $j$ exists and $j > m$, then swap with $j$. This is optimal because in any case we will have to pay each of the people at positions $i - 1, i - 2, \dots, j$ some amount of coins, and in this way we pay each person at position $k$ with $i > k > j$ exactly $b_k$ coins. By the choice of $j$, we have $a_k \ge b_k$ for those $k$, so $b_k$ is the minimum amount of coins we can pay the $k$-th person. We also pay the $j$-th person $a_j$ coins, and $a_j < b_j$, hence we pay the minimum possible amount to every person. If such $j$ does not exist, then it is advantageous for us to choose the final position $f$, with $1 \le f \le m$, in order to finish the movements and overpay as little as possible. Simply check each $f$, recalculating the answer using prefix sums on array $b$, and choose the smallest one. Complexity: $\mathcal{O}(n)$.
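One equivalent way to state the greedy (my reformulation, not the official code): every position $k$ behind the final stop $f$ costs $\min(a_k, b_k)$, and position $f$ itself costs $a_f$. A sketch with a brute-force DP over all jump sequences for comparison:

```python
def min_coins(a, b, m):
    # a, b are 1-indexed via a padding element at index 0; Kirill starts at n + 1.
    n = len(a) - 1
    suf = 0  # sum of min(a_k, b_k) over k in (f, n]
    best = float("inf")
    for f in range(n, 0, -1):
        if f <= m:
            best = min(best, a[f] + suf)
        suf += min(a[f], b[f])
    return best

def min_coins_brute(a, b, m):
    # cost[j]: cheapest way to reach position j by any sequence of backward jumps.
    n = len(a) - 1
    INF = float("inf")
    cost = [INF] * (n + 2)
    cost[n + 1] = 0
    for j in range(n, 0, -1):
        for i in range(j + 1, n + 2):  # jump to j from any position i > j
            cost[j] = min(cost[j], cost[i] + a[j] + sum(b[j + 1:i]))
    return min(cost[1:m + 1])

a = [0, 3, 5, 2, 4]  # index 0 is padding
b = [0, 1, 1, 1, 1]
assert min_coins(a, b, 2) == min_coins_brute(a, b, 2) == 6
print("ok")
```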
|
[
"dp",
"greedy"
] | 1,300
|
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
using ll = long long;
void solve() {
ll n, k;
cin >> n >> k;
vector<ll> A(n);
for (auto& item : A) {
cin >> item;
}
reverse(A.begin(), A.end());
vector<ll> B(n);
for (auto& item : B) {
cin >> item;
}
reverse(B.begin(), B.end());
ll bSum = 0;
ll pref = 0;
for (ll i = 0; i < n - k; ++i) {
if (A[i] < B[i]) {
pref += bSum;
pref += A[i];
bSum = 0;
} else {
bSum += B[i];
}
}
ll res = 1e18;
for (ll i = n - k; i < n; ++i) {
res = min(res, pref + bSum + A[i]);
bSum += B[i];
}
cout << res << '\n';
}
int32_t main() {
ll testN;
cin >> testN;
while (testN--) {
solve();
}
return 0;
}
|
1945
|
E
|
Binary Search
|
Anton got bored during the hike and wanted to solve something. He asked Kirill if he had any new problems, and of course, Kirill had one.
You are given a permutation $p$ of size $n$, and a number $x$ that needs to be found. A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
You decided that you are a cool programmer, so you will use an advanced algorithm for the search — binary search. However, you forgot that for binary search, the array must be sorted.
You did not give up and decided to apply this algorithm anyway, and in order to get the correct answer, you can perform the following operation \textbf{no more than} $2$ times before running the algorithm: choose the indices $i$, $j$ ($1\le i, j \le n$) and swap the elements at positions $i$ and $j$.
After that, the binary search is performed. At the beginning of the algorithm, two variables $l = 1$ and $r = n + 1$ are declared. Then the following loop is executed:
- If $r - l = 1$, end the loop
- $m = \lfloor \frac{r + l}{2} \rfloor$
- If $p_m \le x$, assign $l = m$, otherwise $r = m$.
The goal is to rearrange the numbers in the permutation before the algorithm so that after the algorithm is executed, $p_l$ is equal to $x$. It can be shown that $2$ operations are always sufficient.
|
In fact, the problem can be solved with a single operation. Let's run the binary search on the initial permutation. We will get some index $l$. Then it is enough to swap $p_l$ and the element equal to $x$. Now the binary search will stop exactly at the number $x$. Notice that in the search, any number less than or equal to $x$ affects the result in the same way as $x$ itself: the comparison $p_m \le x$ gives the same result if such a value at position $m$ is replaced with $x$. So if we swap two numbers that give the same result in the condition of the binary search, the final $l$ will not change. The previous statements assumed that $p_l \le x$; indeed, by the algorithm description, after an assignment $l = m$ the number $p_l$ cannot be greater than $x$. It remains to consider the case $l = 1$ after executing the algorithm: then no assignment $l = m$ was ever made, so at each iteration $p_m > x$, and in particular $p_m$ was never equal to $x$. In this case, swapping $p_1$ and $x$ again does not affect the result ($l$) of the binary search. Complexity: $\mathcal{O}(n)$.
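The single-swap argument is easy to demonstrate end to end; a standalone sketch (helper names are mine):

```python
def binsearch(p, x):
    # The binary search from the statement, on a 1-indexed array (p[0] is padding).
    n = len(p) - 1
    l, r = 1, n + 1
    while r - l > 1:
        m = (l + r) // 2
        if p[m] <= x:
            l = m
        else:
            r = m
    return l

def fix_with_one_swap(p, x):
    # Run the search once, then swap p[l] with the element equal to x.
    l = binsearch(p, x)
    i = p.index(x)
    p[l], p[i] = p[i], p[l]
    return l

p = [0, 2, 3, 1, 5, 4]  # padding at index 0; permutation [2, 3, 1, 5, 4]
fix_with_one_swap(p, 3)
assert p[binsearch(p, 3)] == 3  # the search now lands exactly on x
print("ok")
```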
|
[
"binary search",
"constructive algorithms",
"greedy"
] | 1,700
|
#include <iostream>
#include <vector>
using namespace std;
void solve() {
int n, x;
cin >> n >> x;
vector<int> src(n);
int P = 0;
for (int i = 0; i < n; ++i) {
cin >> src[i];
if (src[i] == x) {
P = i;
}
}
int l = 0;
int r = n;
while (r - l > 1) {
int m = (l + r) / 2;
if (src[m] <= x) {
l = m;
} else {
r = m;
}
}
cout << "1\n";
cout << P + 1 << ' ' << l + 1 << '\n';
}
int main() {
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
cout.tie(nullptr);
int T;
cin >> T;
while (T--) {
solve();
}
return 0;
}
|
1945
|
F
|
Kirill and Mushrooms
|
As soon as everyone in the camp fell asleep, Kirill sneaked out of the tent and went to the Wise Oak to gather mushrooms.
It is known that there are $n$ mushrooms growing under the Oak, each of which has magic power $v_i$. Kirill really wants to make a magical elixir of maximum strength from the mushrooms.
The strength of the elixir is equal to the product of the \textbf{number} of mushrooms in it and the \textbf{minimum} magic power among these mushrooms. To prepare the elixir, Kirill will sequentially pick one mushroom growing under the Oak. Kirill can gather mushrooms in any order.
However, it's not that simple. The Wise Oak informed Kirill of a permutation of numbers $p$ from $1$ to $n$. If Kirill picks only $k$ mushrooms, then the magic power of all mushrooms with indices $p_1, p_2, \dots, p_{k - 1}$ will become $0$. Kirill will not use mushrooms with zero magic power to prepare the elixir.
Your task is to help Kirill gather mushrooms in such a way that he can brew the elixir of maximum possible strength. However, Kirill is a little scared to stay near the oak for too long, so out of all the suitable options for gathering mushrooms, he asks you to find the one with the minimum number of mushrooms.
A permutation of length $n$ is an array consisting of $n$ different integers from $1$ to $n$ in any order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears in the array twice) and $[1,3,4]$ is also not a permutation ($n=3$, but $4$ appears in the array).
|
Consider a fixed number $k$ - the number of mushrooms in the elixir. In this case, we need to maximize the minimum of the taken values. We iterate through the values in descending order until we collect $k$ of them: for the next value, if the position of its mushroom in the permutation $p$ is greater than or equal to $k$ (so its power is not zeroed), we take it, otherwise we skip it. Iterating over every $k$ this way gives a solution in $\mathcal{O}(n^2)$ time. How can we speed it up? Notice that when incrementing $k$, only a few things change: $v_{p_k}$ becomes zero, and if we have already taken it, we need to take the next value in descending order. Also, we need to take one more value in descending order anyway, because $k$ has increased. After sorting, we can traverse the array in linear time. We keep a pointer $j$ to the last taken element, as well as an array $\text{used}$ - whether the $i$-th value is taken - and an array $\text{zero}$ - whether the $i$-th value has turned into zero. Every time we move $j$, we check that $\text{zero}_j = \text{false}$, and if so, we take the value and set $\text{used}_j = \text{true}$. Complexity: $\mathcal{O}(n \log n)$.
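A quadratic sketch of the per-$k$ procedure from the hint (my helper `best_elixir`; the editorial then speeds this up to $\mathcal{O}(n \log n)$):

```python
def best_elixir(v, p):
    # For each k, mushrooms p_1..p_{k-1} are zeroed (p is 1-indexed);
    # among the rest, take the k largest powers.
    n = len(v)
    best_strength, best_k = 0, 0
    for k in range(1, n + 1):
        zeroed = {p[t] - 1 for t in range(k - 1)}
        usable = sorted((v[i] for i in range(n) if i not in zeroed), reverse=True)
        if len(usable) < k:
            break
        strength = k * usable[k - 1]  # min of the k largest usable powers
        if strength > best_strength:  # strict: keeps the smallest such k
            best_strength, best_k = strength, k
    return best_strength, best_k

assert best_elixir([9, 8, 14], [3, 2, 1]) == (16, 2)
print("ok")
```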
|
[
"data structures",
"sortings"
] | 1,900
|
#include <iostream>
#include <vector>
#include <set>
#include <algorithm>
using namespace std;
using ll = long long;
void solve() {
ll n;
cin >> n;
vector<ll> src(n);
vector<pair<ll, ll>> can(n);
for (ll i = 0; i < n; ++i) {
cin >> src[i];
can[i] = {src[i], i};
}
vector<ll> ord(n);
for (auto& item : ord) {
cin >> item;
--item;
}
sort(can.rbegin(), can.rend());
ll best = can[0].first;
ll take = 1;
ll cur;
ll P = 1;
vector<bool> burn(n);
vector<bool> used(n);
used[can[0].second] = true;
for (ll k = 0; k + 1 < n && P < n; ++k) {
while (P < n && burn[can[P].second]) {
++P;
}
if (P == n) {
break;
}
used[can[P].second] = true;
cur = can[P].first;
++P;
burn[ord[k]] = true;
if (used[ord[k]]) {
while (P < n && burn[can[P].second]) {
++P;
}
if (P == n) {
break;
}
used[can[P].second] = true;
cur = can[P].first;
++P;
}
if (best < cur * (k + 2)) {
take = k + 2;
best = cur * (k + 2);
}
}
cout << best << ' ' << take << '\n';
}
int32_t main() {
ll testN;
cin >> testN;
while (testN--) {
solve();
}
return 0;
}
|
1945
|
G
|
Cook and Porridge
|
Finally, lunchtime!
$n$ schoolchildren have lined up in a long queue at the cook's tent for porridge. The cook will be serving porridge for $D$ minutes. The schoolchild standing in the $i$-th position in the queue has a priority of $k_i$ and eats one portion of porridge in $s_i$ minutes.
\textbf{At the beginning} of each minute of the break, the cook serves the first schoolchild in the queue one portion of porridge, after which the schoolchild goes to eat their portion. If the $i$-th schoolchild is served a portion at the beginning of the $x$-th minute, then they will return to the queue \textbf{at the end} of the $(x + s_i)$-th minute.
When the $i$-th schoolchild returns to the queue, the schoolchildren at the end of the queue whose priority is \textbf{strictly lower} than that of the $i$-th schoolchild must let them pass. Thus, they will stand in the queue behind the last schoolchild whose priority is \textbf{not lower} than their own. That is, behind the last schoolchild $j$ with $k_j \ge k_i$. If there is no such schoolchild in the queue, the $i$-th schoolchild will stand at the front of the queue.
If several schoolchildren return at the same time, they will return to the queue in ascending order of their $s_i$.
For example, if $n = 3$, $D = 3$, $k = [2, 3, 2]$, and $s = [2, 1, 3]$, the serving will occur as follows:
- At the beginning of minute $1$, the students in the queue are $[1, 2, 3]$, and student $1$ is served porridge;
- at the beginning of minute $2$, the students in the queue are $[2, 3]$, and student $2$ is served porridge;
- at the beginning of minute $3$, the student in the queue is $[3]$, and student $3$ is served porridge;
- at the end of minute $3$, student $2$ returns to the queue, and the queue becomes $[2]$;
- at the end of minute $3$, student $1$ returns to the queue, and the queue becomes $[2, 1]$, as his priority is lower.
Determine the minimum number of minutes after the start of the break that each schoolchild will receive porridge at least once, or report that this will not happen within $D$ minutes.
|
Let's divide the queue into two parts: the original queue $\text{Q1}$ and the queue of returning people $\text{Q2}$. $\text{Q1}$ will simply be an array with a pointer $P$ to the current person at the front, and $\text{Q2}$ will be a priority queue. Now the problem can be formulated as follows: find out how much time it takes for $P$ to become equal to $n + 1$. Let's iterate through the time. At each moment, the cook serves porridge to either a person from $\text{Q1}$ or from $\text{Q2}$. How do we know which one? Let's calculate $\text{sufMax}$ - an array of suffix maximums of $k$. Now it's clear that if $k(\text{Q2}_\text{front}) \le \text{sufMax}_P$ (here $k(x)$ is the priority of student $x$), then there is still someone in $\text{Q1}$ with a priority not lower than that of anyone who wants to return, so at the next moment the cook serves a person from $\text{Q1}$. Otherwise, the person with the highest priority in $\text{Q2}$ can bypass everyone in $\text{Q1}$ and get the porridge. To form $\text{Q2}$, we store an array of lists $\text{eat}$, where $\text{eat}_i$ is the list of people who return to the queue at the end of the $i$-th minute. Then we simply simulate this process. The answer is the moment when $P$ becomes equal to $n + 1$. Complexity: $\mathcal{O}(D \log n)$
|
[
"binary search",
"constructive algorithms",
"data structures",
"implementation"
] | 2,500
|
#include <iostream>
#include <vector>
#include <queue>
using namespace std;
using ll = long long;
const int MAX_N = 2e5;
const int MAX_D = 3e5;
struct Student {
int k;
int s;
int tin = 0;
bool operator<(const Student& other) const {
if (k == other.k) {
if (tin == other.tin) {
return s > other.s;
}
return tin > other.tin;
}
return k < other.k;
}
};
int n, D, x;
Student qu1[MAX_N];
int sufMax[MAX_N];
vector<Student> eat[MAX_D];
int check() {
int origPos = 0;
priority_queue<Student> qu2;
for (int i = 0; i < D && origPos < n; ++i) {
if (qu2.empty() || qu2.top().k <= sufMax[origPos]) {
ll nxtTime = ll(i) + ll(x) * ll(qu1[origPos].s);
if (nxtTime < D) {
eat[nxtTime].push_back(qu1[origPos]);
}
++origPos;
if (origPos == n) {
for (int tm = 0; tm < D; ++tm) {
eat[tm].clear();
}
return i + 1;
}
} else {
ll nxtTime = ll(i) + ll(x) * ll(qu2.top().s);
if (nxtTime < D) {
eat[nxtTime].push_back(qu2.top());
}
qu2.pop();
}
for (const auto& student : eat[i]) {
qu2.push({student.k, student.s, i});
}
}
for (int i = 0; i < D; ++i) {
eat[i].clear();
}
return -1;
}
void solve() {
cin >> n >> D;
x = 1;
for (int i = 0; i < n; ++i) {
cin >> qu1[i].k >> qu1[i].s;
}
sufMax[n - 1] = qu1[n - 1].k;
for (int i = n - 2; i >= 0; --i) {
sufMax[i] = max(qu1[i].k, sufMax[i + 1]);
}
cout << check() << '\n';
}
int32_t main() {
ll testN;
cin >> testN;
while (testN--) {
solve();
}
return 0;
}
|
1945
|
H
|
GCD is Greater
|
In the evenings during the hike, Kirill and Anton decided to take out an array of integers $a$ of length $n$ from their backpack and play a game with it. The rules are as follows:
- Kirill chooses from $2$ to $(n-2)$ numbers and encircles them in red.
- Anton encircles all the remaining numbers in blue.
- Kirill calculates the \textbf{greatest common divisor} (GCD) of all the red numbers.
- Anton calculates the \textbf{bitwise AND} of all the blue numbers and adds the number $x$ to the result.
- If the GCD of all the red numbers is strictly greater than the sum of the bitwise AND of all the blue numbers and the number $x$, then Kirill wins; otherwise, Anton wins.
Help Kirill to beat Anton or tell if it's impossible.
|
Let's notice that it is sufficient to take the greatest common divisor (GCD) of just two numbers. Indeed, suppose we take the GCD of a larger set of numbers; then we can move one of them to the AND set: the GCD will not decrease, and the AND will not increase. Also, pre-calculate for each bit how many of the given numbers have it equal to zero. Notice that if a certain bit $y$ is equal to zero in more than two numbers, then at least one of those numbers stays blue no matter which two numbers we take as red, so bit $y$ of the blue AND is zero as well. For every bit that is equal to zero in at most two numbers, we will save these numbers. We will iterate through each of these saved numbers as one of the red numbers, and try all $n$ candidates for the second red number. Since there are at most $2 \cdot \log(\text{maxA})$ saved numbers and each GCD computation costs $\mathcal{O}(\log(\text{maxA}))$, this requires $\mathcal{O}(n \cdot \log^2(\text{maxA}))$ time. After this, we remove all saved numbers from our list. Let $A$ be the bitwise AND of all the numbers in the initial array; for the remaining candidate pairs the blue AND equals $A$, so we can consider that the GCD should be greater than $A + x$. We will iterate through the GCD value from $A + x + 1$ to $\text{maxA}$ and check: if there is a pair of numbers in the array (not counting the discarded ones) that are divisible by it, then the answer is to color these two numbers red, and the rest blue. Otherwise, if such a GCD is not found, the answer is "NO".
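The divisor-counting step at the end can be sketched as a standalone fragment (names are ours): fill $\text{divBy}[g]$ with a harmonic-series sieve, and a red pair with GCD divisible by $g$ exists iff $\text{divBy}[g] \ge 2$.

```cpp
#include <vector>

// divBy[g] = number of array values that are multiples of g,
// computed by a harmonic-series sieve in O(maxA log maxA).
std::vector<int> multiplesCount(const std::vector<int>& a, int maxA) {
    std::vector<int> cnt(maxA + 1, 0), divBy(maxA + 1, 0);
    for (int v : a) cnt[v]++;
    for (int g = 1; g <= maxA; g++)
        for (int m = g; m <= maxA; m += g)
            divBy[g] += cnt[m];
    return divBy;
}
```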
|
[
"brute force",
"data structures",
"math",
"number theory"
] | 2,600
|
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
const int BITS = 20;
const int MAX_N = 4e5 + 10;
int n;
int x;
int src[MAX_N];
void coutYES(int fId, int sId) {
cout << "YES\n";
cout << "2 " << src[fId] << ' ' << src[sId] << '\n';
cout << n - 2 << ' ';
for (int i = 0; i < n; i++) {
if (i == fId || i == sId) {
continue;
}
cout << src[i] << ' ';
}
cout << '\n';
}
void solve() {
cin >> n >> x;
vector<int> bitCnt[BITS];
int maxA = 1;
for (int i = 0; i < n; i++) {
cin >> src[i];
maxA = max(maxA, src[i] + 1);
for (int bit = 0; bit < BITS; bit++) {
if ((1 << bit) & src[i]) {
continue;
}
bitCnt[bit].push_back(i);
}
}
vector<char> incr(n, false);
vector<int> cnt(maxA, 0), divBy(maxA, 0);
vector<int> pref(n), suf(n);
for (int i = 0; i < n; i++) {
pref[i] = src[i];
if (i) {
pref[i] = pref[i - 1] & src[i];
}
}
suf[n - 1] = src[n - 1];
for (int i = n - 2; i >= 0; i--) {
suf[i] = suf[i + 1] & src[i];
}
for (const auto& item : bitCnt) {
if (item.size() <= 2) {
for (const int& id : item) {
incr[id] = true;
int myAnd = -1;
for (int j = id + 1; j < n; j++) {
int curAND = (1 << BITS) - 1;
if (j + 1 < n) curAND &= suf[j + 1];
if (id - 1 >= 0) curAND &= pref[id - 1];
if (myAnd != -1) curAND &= myAnd;
if (curAND + x < __gcd(src[id], src[j])) {
coutYES(id, j);
return;
}
if (myAnd == -1) {
myAnd = src[j];
} else {
myAnd &= src[j];
}
}
myAnd = -1;
for (int j = id - 1; j >= 0; j--) {
int curAND = (1 << BITS) - 1;
if (j - 1 >= 0) curAND &= pref[j - 1];
if (id + 1 < n) curAND &= suf[id + 1];
if (myAnd != -1) curAND &= myAnd;
if (curAND + x < __gcd(src[id], src[j])) {
coutYES(id, j);
return;
}
if (myAnd == -1) {
myAnd = src[j];
} else {
myAnd &= src[j];
}
}
}
}
}
int AND = (1 << BITS) - 1;
for (int i = 0; i < BITS; i++) {
if (!bitCnt[i].empty()) {
AND ^= (1 << i);
}
}
for (int i = 0; i < n; i++) {
if (!incr[i]) {
++cnt[src[i]];
}
}
for (int i = 1; i < maxA; i++) {
for (int j = i; j < maxA; j += i) {
divBy[i] += cnt[j];
}
}
for (int g = maxA - 1; g > AND + x; g--) {
if (divBy[g] < 2) {
continue;
}
int fId = -1;
int sId = -1;
for (int i = 0; i < n; i++) {
if (!incr[i]) {
if (src[i] % g == 0) {
if (fId == -1) {
fId = i;
} else {
sId = i;
}
}
}
}
coutYES(fId, sId);
return;
}
cout << "NO\n";
}
int main() {
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
cout.tie(nullptr);
int testN;
cin >> testN;
while (testN--) {
solve();
}
return 0;
}
|
1946
|
A
|
Median of an Array
|
You are given an array $a$ of $n$ integers.
The median of an array $q_1, q_2, \ldots, q_k$ is the number $p_{\lceil \frac{k}{2} \rceil}$, where $p$ is the array $q$ sorted in non-decreasing order. For example, the median of the array $[9, 5, 1, 2, 6]$ is $5$, as in the sorted array $[1, 2, 5, 6, 9]$, the number at index $\lceil \frac{5}{2} \rceil = 3$ is $5$, and the median of the array $[9, 2, 8, 3]$ is $3$, as in the sorted array $[2, 3, 8, 9]$, the number at index $\lceil \frac{4}{2} \rceil = 2$ is $3$.
You are allowed to choose an integer $i$ ($1 \le i \le n$) and increase $a_i$ by $1$ in one operation.
Your task is to find the minimum number of operations required to increase the median of the array.
Note that the array $a$ may not necessarily contain distinct numbers.
|
The median is defined as the number at index $\lceil \frac{n}{2} \rceil$ in the sorted array, so we can sort the array and work with it. So, let's start by sorting the array and finding the median, namely the number $a_{\lceil \frac{n}{2} \rceil}$; let it be equal to $x$. In order for the median to increase, that is, to become at least $x + 1$, there must be at least $n - \lceil \frac{n}{2} \rceil + 1$ numbers in the array greater than or equal to $x + 1$. Now let's find the maximum index $t$ such that $a_t$ equals $x$. We know that there are currently $n - t$ numbers greater than or equal to $x + 1$ (all $a_i$ with $i > t$), which means that at least $(n - \lceil \frac{n}{2} \rceil + 1) - (n - t) = t - \lceil \frac{n}{2} \rceil + 1$ operations will be required. This estimate is always achievable: it is enough to apply one operation to each index from $\lceil \frac{n}{2} \rceil$ to $t$, because all the numbers at these indices are equal to $x$, so after applying the operations they become equal to $x + 1$. In the end, the number of elements greater than or equal to $x + 1$ becomes $(n - t) + (t - \lceil \frac{n}{2} \rceil + 1) = n - \lceil \frac{n}{2} \rceil + 1$, which is what we need.
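The resulting computation is tiny; a sketch (the function name is ours):

```cpp
#include <algorithm>
#include <vector>

// Minimum operations to strictly increase the median: sort, then count
// how many elements from the median position onward equal the median.
int minOpsToRaiseMedian(std::vector<int> a) {
    std::sort(a.begin(), a.end());
    int p = (a.size() + 1) / 2 - 1;   // 0-based position of the median
    return (int)std::count(a.begin() + p, a.end(), a[p]);
}
```

On the statement's example $[9, 5, 1, 2, 6]$ the median is $5$ and one operation suffices.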
|
[
"greedy",
"implementation",
"sortings"
] | 800
|
#include <bits/stdc++.h>
using i64 = long long;
void solve() {
int n;
std::cin >> n;
std::vector<int> a(n);
for (int i = 0; i < n; i++) {
std::cin >> a[i];
}
std::sort(a.begin(), a.end());
int p = (n + 1) / 2 - 1;
int res = std::count(a.begin() + p, a.end(), a[p]);
std::cout << res << "\n";
}
signed main() {
std::ios::sync_with_stdio(false);
std::cin.tie(nullptr);
int t = 1;
std::cin >> t;
while (t--) {
solve();
}
}
|
1946
|
B
|
Maximum Sum
|
You have an array $a$ of $n$ integers.
You perform exactly $k$ operations on it. In one operation, you select any contiguous subarray of the array $a$ (possibly empty) and insert the sum of this subarray anywhere in the array.
Your task is to find the maximum possible sum of the array after $k$ such operations.
As this number can be very large, output the answer modulo $10^9 + 7$.
Reminder: the remainder of a number $x$ modulo $p$ is the smallest non-negative $y$ such that there exists an integer $q$ and $x = p \cdot q + y$.
|
Let's denote by $s$ the sum of the original array and by $x$ the maximum subarray sum of the original array (the empty subarray is allowed, so $x \ge 0$). First we solve the problem for $k = 1$: we find the subarray with the maximum sum and insert this sum anywhere in the array, so the answer is $s + x$. Now let $k$ be $2$. After the first operation, the value we inserted is already in the array. Then we can increase the maximum subarray sum by at most $x$ (and we achieve exactly $x$ by inserting the new value inside the maximum-sum subarray), obtaining a maximum subarray sum of $2 \cdot x$. Inserting it anywhere in the array gives a final sum of $s + x + 2 \cdot x$. Similarly, for any $k$, the maximum subarray sum is initially $x$, then $2 \cdot x$, then $4 \cdot x$, $\dots$, $2^{k - 1} \cdot x$, so the answer is equal to $s + x + 2 \cdot x + \dots + 2^{k - 1} \cdot x = s + 2^k \cdot x - x$.
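The closed form $s + (2^k - 1) \cdot x$ modulo $10^9 + 7$ can be checked with a short sketch (Kadane's scan with the empty subarray allowed; names are ours):

```cpp
#include <algorithm>
#include <vector>

// Answer = s + (2^k - 1) * x (mod 1e9+7), where x >= 0 is the maximum
// (possibly empty) subarray sum, found by Kadane's scan.
long long maxSumAfterOps(const std::vector<long long>& a, long long k) {
    const long long MOD = 1000000007;
    long long s = 0, x = 0, cur = 0;
    for (long long v : a) {
        s += v;
        cur = std::max(0LL, cur + v);   // empty subarray allowed
        x = std::max(x, cur);
    }
    long long t = 1;
    for (long long i = 0; i < k; i++) t = t * 2 % MOD;   // t = 2^k mod MOD
    return ((s % MOD + MOD) + (t - 1 + MOD) % MOD * (x % MOD)) % MOD;
}
```

For $a = [2, -3, 5]$ and $k = 2$: $s = 4$, $x = 5$, so the answer is $4 + 3 \cdot 5 = 19$.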
|
[
"dp",
"greedy",
"math"
] | 1,100
|
#include <bits/stdc++.h>
using namespace std;
#define int long long
const int P = 1e9 + 7;
void solve() {
int n, k;
cin >> n >> k;
vector<int> a(n);
for (int i = 0; i < n; i++)
cin >> a[i];
int S = 0, sum = 0;
int cur = 0;
for (int i = 0; i < n; i++) {
sum += a[i];
cur += a[i];
cur = max(cur, 0LL);
S = max(S, cur);
}
sum = (sum % P + P) % P;
S = S % P;
int t = 1;
for (int i = 0; i < k; i++) {
t = t * 2 % P;
}
int ans = (sum + S * t - S + P) % P;
cout << ans << '\n';
}
signed main() {
//cout << fixed << setprecision(20);
ios::sync_with_stdio(false);
cin.tie(nullptr);
int T = 1, G = 1;
//cin >> G;
cin >> T;
while (T--)
solve();
return 0;
}
|
1946
|
C
|
Tree Cutting
|
You are given a tree with $n$ vertices.
Your task is to find the maximum number $x$ such that it is possible to remove exactly $k$ edges from this tree in such a way that the size of each remaining connected component$^{\dagger}$ is at least $x$.
$^{\dagger}$ Two vertices $v$ and $u$ are in the same connected component if there exists a sequence of numbers $t_1, t_2, \ldots, t_k$ of arbitrary length $k$, such that $t_1 = v$, $t_k = u$, and for each $i$ from $1$ to $k - 1$, vertices $t_i$ and $t_{i+1}$ are connected by an edge.
|
Let's root the tree at an arbitrary vertex, for definiteness at vertex $1$ (a proof that the choice of root does not matter is given below). First of all, notice that if we can obtain some answer $x$, then we can also obtain the answer $x - 1$ (in exactly the same way as for $x$), so we can binary search for $x$. To check the condition for a fixed $x$, we use a greedy algorithm: we find the maximum number of connected components into which we can cut our tree so that each component has at least $x$ vertices. Start a dfs from vertex $1$; suppose we are currently at vertex $v$ and the number of vertices in its subtree is at least $x$, then it is advantageous for us to remove the edge from vertex $v$ to its parent. If after this process there are at least $k + 1$ connected components, then the condition is satisfied for this $x$, otherwise it is not. Proof that it doesn't matter which vertex the tree is rooted at: we need to prove that the greedy algorithm obtains the same number of cuts for all roots, and we prove this in depth-first-search order. It is also important to note that the order in which the greedy algorithm is run from the children does not matter. Let the initial root be $1$, and let us prove the claim for its child $2$. Let's see how the greedy algorithm acts in the first case: it starts from vertex $2$, then from all vertices adjacent to $2$ except $1$, and removes some edges. When we run the greedy algorithm in the second case, we can reorder the vertices and first run it from all adjacent vertices except $1$, and the same removals will happen. Then we run the greedy algorithm from vertex $1$: it performs the same removals as in the first case if we made vertex $2$ the last one. So the only edge that may differ between these runs is $1-2$. If this edge did not participate in the removals the first time, then the size of the part remaining at vertex $2$ is $< x$, so we cannot remove it now either.
If the edge $1-2$ was removed in the first variant, then the size of component $1$ was $\ge x$, and it contained no vertices of subtree $2$. Then, when running the greedy algorithm rooted at vertex $2$, it will also cut the edge $1-2$, because the size of that part is still $\ge x$; so the sets of removed edges coincide.
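The greedy check for a fixed $x$ can be sketched as follows (a minimal recursive version with our own names; the real solution uses an equivalent iterative check):

```cpp
#include <functional>
#include <vector>

// Greedy check: cut every non-root subtree of size >= x; the answer x is
// feasible iff we get more than k cuts, or exactly k cuts with the
// remaining root component of size >= x.
bool feasible(const std::vector<std::vector<int>>& adj, int k, int x) {
    int cuts = 0;
    std::function<int(int, int)> dfs = [&](int v, int parent) {
        int sz = 1;
        for (int u : adj[v])
            if (u != parent) sz += dfs(u, v);
        if (sz >= x && parent != -1) { cuts++; sz = 0; }
        return sz;
    };
    int rest = dfs(0, -1);
    return cuts > k || (cuts == k && rest >= x);
}
```

On the path $0{-}1{-}2{-}3$ with $k = 1$, the check succeeds for $x = 2$ (cut the middle edge) and fails for $x = 3$.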
|
[
"binary search",
"dp",
"greedy",
"implementation",
"trees"
] | 1,600
|
#include <bits/stdc++.h>
using i64 = long long;
void solve() {
int n, k;
std::cin >> n >> k;
std::vector<std::vector<int>> adj(n);
for (int i = 0; i < n - 1; i++) {
int v, u;
std::cin >> v >> u;
--v, --u;
adj[v].emplace_back(u);
adj[u].emplace_back(v);
}
auto check = [&](int x) {
int res = 0;
auto dfs = [&](auto self, int v, int f) -> int {
int sz = 1;
for (int u : adj[v]) {
if (u == f) {
continue;
}
sz += self(self, u, v);
}
if (sz >= x && f != v) {
++res, sz = 0;
}
return sz;
};
int t = dfs(dfs, 0, 0);
return res > k || (t >= x && res == k);
};
int low = 1, high = n / (k + 1) + 1;
while (high - low > 1) {
int mid = (low + high) / 2;
if (check(mid)) {
low = mid;
} else {
high = mid;
}
}
std::cout << low << "\n";
}
signed main() {
std::ios::sync_with_stdio(false);
std::cin.tie(nullptr);
int t = 1;
std::cin >> t;
while (t--) {
solve();
}
}
|
1946
|
D
|
Birthday Gift
|
Yarik's birthday is coming soon, and Mark decided to give him an array $a$ of length $n$.
Mark knows that Yarik loves bitwise operations very much, and he also has a favorite number $x$, so Mark wants to find the maximum number $k$ such that it is possible to select pairs of numbers [$l_1, r_1$], [$l_2, r_2$], $\ldots$ [$l_k, r_k$], such that:
- $l_1 = 1$.
- $r_k = n$.
- $l_i \le r_i$ for all $i$ from $1$ to $k$.
- $r_i + 1 = l_{i + 1}$ for all $i$ from $1$ to $k - 1$.
- $(a_{l_1} \oplus a_{l_1 + 1} \oplus \ldots \oplus a_{r_1}) | (a_{l_2} \oplus a_{l_2 + 1} \oplus \ldots \oplus a_{r_2}) | \ldots | (a_{l_k} \oplus a_{l_k + 1} \oplus \ldots \oplus a_{r_k}) \le x$, where $\oplus$ denotes the operation of bitwise XOR, and $|$ denotes the operation of bitwise OR.
If such $k$ does not exist, then output $-1$.
|
For convenience, let's increase $x$ by $1$ (so the final number must be strictly less than $x$), and then iterate over the most significant bit in which the final number is smaller than $x$. We will iterate from the most significant bit to the least significant bit, denoting the current bit as $i$; the initial bit will be $30$. Let's look at all the numbers $a_j$ in which bit $i$ is $1$. If there is an odd number of such numbers, then in any partition the XOR of some segment will have bit $i$ set, and therefore the final OR will also have bit $i$ set. If at the same time bit $i$ of $x$ is $0$, then the process needs to be terminated, because no choice in the lower bits can make the final number smaller than $x$. If, however, bit $i$ of $x$ is also $1$, then we need to move on to the less significant bits, since both the final number and $x$ will always have a $1$ in this bit. If the number of numbers with $1$ in bit $i$ is even, then in order to make this bit $0$ in the final number, each segment must contain an even number of such numbers, and since we want to maximize the number of segments, each segment containing them must contain exactly $2$ of them. For this, for every two indices $l$ and $r$ such that $a_{l}$ and $a_{r}$ have a $1$ in bit $i$ and all numbers from $a_{l + 1}$ to $a_{r - 1}$ have a $0$ there, replace the subsegment $[l, r]$ with $a_{l} \oplus a_{l + 1} \oplus \ldots \oplus a_{r}$. After this, if bit $i$ of $x$ is $1$, update the answer and return the array to its original state. Then move on to the less significant bits. There is also an alternative solution that also iterates through the bits from the most significant to the least significant, but instead of compressing subsegments into one number, it uses a greedy algorithm.
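The greedy mentioned at the end reduces to one pass per candidate upper bound $y$: cut a new segment whenever the running XOR is a submask of $y$. A sketch (names are ours):

```cpp
#include <vector>

// Greedy count of segments whose XORs are all submasks of y
// (so their OR is a submask of y as well); -1 if some leftover
// bits outside y can never be cancelled.
int maxSegments(const std::vector<int>& a, int y) {
    int res = 0, cur = 0;
    for (int i = 0; i < (int)a.size(); i++) {
        cur ^= a[i];
        if ((cur | y) == y) { cur = 0; res++; }     // cur fits into y: cut here
        else if (i + 1 == (int)a.size()) return -1; // trailing prefix doesn't fit
    }
    return res;
}
```

For example, with $a = [1, 2, 3, 2]$ and $y = 3$ every element forms its own segment, giving $4$.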
|
[
"bitmasks",
"brute force",
"constructive algorithms",
"greedy",
"implementation"
] | 1,900
|
#include<bits/stdc++.h>
using namespace std;
int const maxn = 1e5 + 5;
int a[maxn];
int solve(int n, int x) {
int res = 0, curr = 0;
for (int i = 1; i <= n; i++) {
curr ^= a[i];
if ((curr|x) == x) curr = 0, res++;
else {
if (i == n) return -1;
}
}
return res;
}
int main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
int t;
cin >> t;
while (t--) {
int n, x;
cin >> n >> x;
for (int i = 1; i <= n; i++) cin >> a[i];
int ans = -1;
ans = max(ans, solve(n, x));
for (int b = 29; b >= 0; b--) {
if ((x>>b)&1) {
int y = (x^(1 << b));
for (int c = b - 1; c >= 0; c--) {
if (((y>>c)&1) == 0) y ^= (1 << c);
}
ans = max(ans, solve(n, y));
}
}
cout << ans << '\n';
}
return 0;
}
|
1946
|
E
|
Girl Permutation
|
Some permutation of length $n$ is guessed.
You are given the indices of its prefix maximums and suffix maximums.
Recall that a permutation of length $k$ is an array of size $k$ such that each integer from $1$ to $k$ occurs exactly once.
Prefix maximums are the elements that are the maximum on the prefix ending at that element. More formally, the element $a_i$ is a prefix maximum if $a_i > a_j$ for every $j < i$.
Similarly, suffix maximums are defined, the element $a_i$ is a suffix maximum if $a_i > a_j$ for every $j > i$.
You need to output the number of different permutations that could have been guessed.
As this number can be very large, output the answer modulo $10^9 + 7$.
|
First, if $p_1$ is not equal to $1$, or $s_{m_2}$ is not equal to $n$, or $p_{m_1}$ is not equal to $s_1$, then the answer is $0$ for obvious reasons. Otherwise, we know exactly where the number $n$ is located, at position $s_1$. Next, we have $\binom{n - 1}{s_1 - 1}$ ways to divide the numbers from $1$ to $n - 1$ into two sets - the numbers that will be in the left part and the numbers that will be in the right part (the left part - all indices $< s_1$, the right part - all indices $> s_1$). We solve for the left part, and similarly for the right part. For the left part, the position of the maximum ($p_{m_1 - 1}$) is again defined, and the maximum itself is also unique in the set of numbers for the left part, so we can again divide the left part into two, with $\binom{p_{m_1} - 2}{p_{m_1 - 1} - 1}$ ways to do so, but we can also arrange the numbers between indices $p_{m_1 - 1}$ and $p_{m_1}$ (non-inclusive) in any order, i.e., $(p_{m_1} - p_{m_1 - 1} - 1)!$ ways. Then we solve similarly for the left set (i.e., for indices less than $p_{m_1 - 1}$).
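For intuition, the combinatorial formula can be sanity-checked against a brute force over all permutations for tiny $n$ (this reference implementation is ours, not part of the intended solution):

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Count permutations of 1..n whose prefix-maximum and suffix-maximum
// index sets (1-based) are exactly p and s, by trying all n! options.
long long bruteCount(int n, const std::vector<int>& p, const std::vector<int>& s) {
    std::vector<int> a(n);
    std::iota(a.begin(), a.end(), 1);
    long long cnt = 0;
    do {
        std::vector<int> pm, sm;
        int best = 0;
        for (int i = 0; i < n; i++)
            if (a[i] > best) { best = a[i]; pm.push_back(i + 1); }
        best = 0;
        for (int i = n - 1; i >= 0; i--)
            if (a[i] > best) { best = a[i]; sm.push_back(i + 1); }
        std::reverse(sm.begin(), sm.end());
        cnt += (pm == p && sm == s);
    } while (std::next_permutation(a.begin(), a.end()));
    return cnt;
}
```

For instance, $n = 3$, $p = [1, 2]$, $s = [2, 3]$ gives $2$ (the permutations $[1, 3, 2]$ and $[2, 3, 1]$), matching $\binom{2}{1} = 2$ from the formula.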
|
[
"combinatorics",
"dp",
"math",
"number theory"
] | 2,200
|
#include <bits/stdc++.h>
using i64 = long long;
template<class T>
constexpr T power(T a, i64 b) {
T res = 1;
for (; b; b /= 2, a *= a) {
if (b % 2) {
res *= a;
}
}
return res;
}
template<int P>
struct MInt {
int x;
constexpr MInt() : x{} {}
constexpr MInt(i64 x) : x{norm(x % P)} {}
constexpr int norm(int x) const {
if (x < 0) {
x += P;
}
if (x >= P) {
x -= P;
}
return x;
}
constexpr int val() const {
return x;
}
explicit constexpr operator int() const {
return x;
}
constexpr MInt operator-() const {
MInt res;
res.x = norm(P - x);
return res;
}
constexpr MInt inv() const {
assert(x != 0);
return power(*this, P - 2);
}
constexpr MInt &operator*=(MInt rhs) {
x = 1LL * x * rhs.x % P;
return *this;
}
constexpr MInt &operator+=(MInt rhs) {
x = norm(x + rhs.x);
return *this;
}
constexpr MInt &operator-=(MInt rhs) {
x = norm(x - rhs.x);
return *this;
}
constexpr MInt &operator/=(MInt rhs) {
return *this *= rhs.inv();
}
friend constexpr MInt operator*(MInt lhs, MInt rhs) {
MInt res = lhs;
res *= rhs;
return res;
}
friend constexpr MInt operator+(MInt lhs, MInt rhs) {
MInt res = lhs;
res += rhs;
return res;
}
friend constexpr MInt operator-(MInt lhs, MInt rhs) {
MInt res = lhs;
res -= rhs;
return res;
}
friend constexpr MInt operator/(MInt lhs, MInt rhs) {
MInt res = lhs;
res /= rhs;
return res;
}
friend constexpr std::istream &operator>>(std::istream &is, MInt &a) {
i64 v;
is >> v;
a = MInt(v);
return is;
}
friend constexpr std::ostream &operator<<(std::ostream &os, const MInt &a) {
return os << a.val();
}
friend constexpr bool operator==(MInt lhs, MInt rhs) {
return lhs.val() == rhs.val();
}
friend constexpr bool operator!=(MInt lhs, MInt rhs) {
return lhs.val() != rhs.val();
}
};
constexpr int MOD = 1e9 + 7;
using Z = MInt<MOD>;
namespace comb {
int n = 0;
std::vector<Z> _fac = {1};
std::vector<Z> _invfac = {1};
std::vector<Z> _inv = {0};
void init(int m) {
if (m <= n) return;
_fac.resize(m + 1);
_invfac.resize(m + 1);
_inv.resize(m + 1);
for (int i = n + 1; i <= m; i++) {
_fac[i] = _fac[i - 1] * i;
}
_invfac[m] = _fac[m].inv();
for (int i = m; i > n; i--) {
_invfac[i - 1] = _invfac[i] * i;
_inv[i] = _invfac[i] * _fac[i - 1];
}
n = m;
}
Z fac(int m) {
if (m > n) init(2 * m);
return _fac[m];
}
Z invfac(int m) {
if (m > n) init(2 * m);
return _invfac[m];
}
Z inv(int m) {
if (m > n) init(2 * m);
return _inv[m];
}
Z binom(int m, int k) {
if (m < k || k < 0) return 0;
return fac(m) * invfac(k) * invfac(m - k);
}
} // namespace comb
void solve() {
int n, m1, m2;
std::cin >> n >> m1 >> m2;
std::vector<int> p(m1), s(m2);
for (int i = 0; i < m1; i++) {
std::cin >> p[i];
}
for (int i = 0; i < m2; i++) {
std::cin >> s[i];
}
if (p[0] != 1 || s[0] != p[m1 - 1] || s[m2 - 1] != n) {
std::cout << "0\n";
return;
}
Z res = comb::binom(n - 1, s[0] - 1);
for (int i = m1 - 2; i > -1; i--) {
res *= comb::binom(p[i + 1] - 2, p[i + 1] - p[i] - 1) * comb::fac(p[i + 1] - p[i] - 1);
}
for (int i = 1; i < m2; i++) {
res *= comb::binom(n - s[i - 1] - 1, s[i] - s[i - 1] - 1) * comb::fac(s[i] - s[i - 1] - 1);
}
std::cout << res << "\n";
}
signed main() {
std::ios::sync_with_stdio(false);
std::cin.tie(nullptr);
int t = 1;
std::cin >> t;
while (t--) {
solve();
}
}
|
1946
|
F
|
Nobody is needed
|
Oleg received a permutation $a$ of length $n$ as a birthday present.
Oleg's friend Nechipor asks Oleg $q$ questions, each question is characterized by two numbers $l$ and $r$, in response to the question Oleg must say the number of sets of indices $(t_1, t_2, \ldots, t_k)$ of any length $k \ge 1$ such that:
- $l \le t_i \le r$ for each $i$ from $1$ to $k$.
- $t_i < t_{i+1}$ for each $i$ from $1$ to $k-1$.
- $a_{t_{i+1}}$ is divisible by $a_{t_i}$ for each $i$ from $1$ to $k-1$.
Help Oleg and answer all of Nechipor's questions.
|
Let's iterate over the left boundary $L$ of the queries from $n$ down to $1$ and maintain a Fenwick tree $F$, where $F_i$ = the number of sought index sets in which $t_k = i$ and $t_1 \ge L$. Then the answer to a query with left boundary $L$ is the sum over the interval $[l_i; r_i]$ in our Fenwick tree. Now we just need to learn how to update our tree when transitioning from $L$ to $L - 1$: some new index sets starting at $L - 1$ may have been added. Let's denote $pos_i$ as the position of the number $i$ in the permutation. We'll create an auxiliary array $dp$ and compute the dynamics $dp_i =$ the number of sought index sets where $t_1 = L - 1$, $t_k = i$. The base case is $dp_{L - 1} = 1$. Suppose we know the value of $dp_{pos_x}$. Then we can update $dp_{pos_y} = dp_{pos_y} + dp_{pos_x}$, if $y$ is divisible by $x$, $y \ne x$, and $pos_{a_{L-1}} \le pos_x \le pos_y$ (here $x$ itself must be a multiple of $a_{L-1}$). We can calculate this dynamics straightforwardly, iterating over $x$ and $y$ ($y$ is divisible by $x$ and $x$ is divisible by $a_{L-1}$). Notice that this works in total for $O(n \cdot \log{n} \cdot \log{n})$ over the whole permutation. Finally, we iterate once more over all multiples of $a_{L-1}$, update the Fenwick tree with their $dp$ values, and clear the $dp$ array to reuse it in the future.
|
[
"2-sat",
"data structures",
"dfs and similar",
"dp"
] | 2,500
|
#include <bits/stdc++.h>
using i64 = long long;
template<class Info>
struct Fenwick {
std::vector<Info> t;
int n;
Fenwick(int n = 0) : n(n) {
t.resize(n);
}
void add(int x, const Info &v) {
for (int i = x + 1; i <= n; i += i & -i) {
t[i - 1] = t[i - 1] + v;
}
}
Info sum(int x) {
x++;
Info res = Info();
for (int i = x; i > 0; i -= i & -i) {
res = res + t[i - 1];
}
return res;
}
Info rangeSum(int l, int r) {
Info res = sum(r) - sum(l - 1);
return res;
}
};
void solve() {
int n, q;
std::cin >> n >> q;
std::vector<int> a(n), pos(n + 1);
for (int i = 0; i < n; i++) {
std::cin >> a[i];
}
std::reverse(a.begin(), a.end());
for (int i = 0; i < n; i++) {
pos[a[i]] = i;
}
constexpr int K = 19;
std::vector<i64> res(q);
std::vector<std::vector<std::pair<int, int>>> qry(n);
for (int i = 0; i < q; i++) {
int l, r;
std::cin >> l >> r;
l--, r--;
std::swap(l, r);
l = n - l - 1;
r = n - r - 1;
qry[r].emplace_back(l, i);
}
std::vector<i64> dp(n + 1);
Fenwick<i64> f(n);
for (int r = 0; r < n; r++) {
int x = a[r];
dp[x] = 1;
// n * log(n) * log(n)
for (int y = x; y <= n; y += x) {
if (pos[y] > pos[x]) {
continue;
}
for (int z = 2 * y; z <= n; z += y) {
if (pos[z] > pos[y]) {
continue;
}
dp[z] += dp[y];
}
}
// n * log(n) * log(n)
for (int y = x; y <= n; y += x) {
f.add(pos[y], dp[y]);
dp[y] = 0;
}
// q * log(n)
for (auto [l, i] : qry[r]) {
res[i] += f.rangeSum(l, r);
}
}
for (int i = 0; i < q; i++) {
std::cout << res[i] << " \n"[i == q - 1];
}
}
signed main() {
std::ios::sync_with_stdio(false);
std::cin.tie(nullptr);
int t = 1;
std::cin >> t;
while (t--) {
solve();
}
}
|
1948
|
A
|
Special Characters
|
You are given an integer $n$.
Your task is to build a string of uppercase Latin letters. There must be exactly $n$ special characters in this string. Let's call a character special if it is equal to exactly one of its neighbors.
For example, there are $6$ special characters in the AAABAACC string (at positions: $1$, $3$, $5$, $6$, $7$ and $8$).
Print any suitable string or report that there is no such string.
|
Let's look at the blocks of consecutive equal characters (such that it cannot be extended to the left or to the right): if its length is $1$, then this block has $0$ special characters; if its length is $2$, then this block has $2$ special characters; if its length is at least $3$, then this block has $2$ special characters (only the leftmost and the rightmost elements); We can see that there is no way to obtain an odd number of special characters. So the answer is NO for all odd values of $n$. And we can easily build the string for all even values of $n$. The answer for an even $n$ may look like this: AABBAABB... with $\frac{n}{2}$ blocks of length $2$ alternating between two different letters.
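The construction and the counting rule are easy to verify with a small sketch (helper names are ours):

```cpp
#include <string>

// Build AABBAABB...: n/2 blocks of length 2 with alternating letters,
// giving exactly n special characters (n must be even).
std::string buildSpecial(int n) {
    std::string s;
    for (int i = 0; i < n / 2; i++)
        s += std::string(2, "AB"[i % 2]);
    return s;
}

// A character is special if it equals exactly one of its neighbors.
int countSpecial(const std::string& s) {
    int cnt = 0, m = s.size();
    for (int i = 0; i < m; i++) {
        bool left = i > 0 && s[i] == s[i - 1];
        bool right = i + 1 < m && s[i] == s[i + 1];
        if (left != right) cnt++;
    }
    return cnt;
}
```

On the statement's example, countSpecial("AAABAACC") returns $6$, and every character of the constructed string is special.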
|
[
"brute force",
"constructive algorithms"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
int main() {
int t;
cin >> t;
while (t--) {
int n;
cin >> n;
if (n % 2 == 1) {
cout << "NO" << '\n';
continue;
}
cout << "YES" << '\n';
for (int i = 0; i < n / 2; ++i)
for (int j = 0; j < 2; ++j)
cout << "AB"[i & 1];
cout << '\n';
}
}
|
1948
|
B
|
Array Fix
|
You are given an integer array $a$ of length $n$.
You can perform the following operation any number of times (possibly zero): take any element of the array $a$, which is at least $10$, delete it, and instead insert the digits that element consisted of in the same position, in order they appear in that element.
For example:
- if we apply this operation to the $3$-rd element of the array $[12, 3, 45, 67]$, then the array becomes $[12, 3, 4, 5, 67]$.
- if we apply this operation to the $2$-nd element of the array $[2, 10]$, then the array becomes $[2, 1, 0]$.
Your task is to determine whether it is possible to make $a$ sorted in non-descending order using the aforementioned operation \textbf{any number of times (possibly zero)}. In other words, you have to determine if it is possible to transform the array $a$ in such a way that $a_1 \le a_2 \le \dots \le a_k$, where $k$ is the current length of the array $a$.
|
The key to solving the problem is the following observation: if $a_i > a_{i + 1}$, then the $i$-th element must be split (since splitting is the only way to decrease it relative to $a_{i+1}$). This observation allows us to solve the problem greedily as follows: iterate over the array $a$ from right to left, keeping track of the list of elements we processed; if the current element is greater than the last element in our list, we have to split it (and add the digits it consists of to the list); otherwise, we keep it intact, because splitting it unnecessarily could leave a smaller leading digit and force the elements to its left to split as well. After that, we have to make sure we obtained a sorted array (because, for example, a number like $98$ breaks the sorted order once split, and we didn't check that while splitting). But since we maintained the list of processed elements, that's quite easy, because that list is exactly the resulting array $a$ in reversed order.
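A compact sketch of this greedy, assuming all elements are below $100$ as in the constraints (the function name is ours):

```cpp
#include <algorithm>
#include <vector>

// Right-to-left greedy: split a[i] into its two digits only when forced
// (a[i] exceeds the last kept value), then verify the result is sorted.
bool canSort(const std::vector<int>& a) {
    int n = a.size();
    std::vector<int> b{a[n - 1]};          // processed suffix, reversed
    for (int i = n - 2; i >= 0; i--) {
        if (a[i] > b.back()) {             // forced to split (a[i] < 100)
            b.push_back(a[i] % 10);
            b.push_back(a[i] / 10);
        } else {
            b.push_back(a[i]);
        }
    }
    return std::is_sorted(b.rbegin(), b.rend());
}
```

For example, $[21, 10]$ is impossible: $21$ must split into $2, 1$, which already breaks the order.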
|
[
"brute force",
"dp",
"greedy",
"implementation"
] | 1,100
|
#include <bits/stdc++.h>
using namespace std;
int main() {
int t;
cin >> t;
while (t--) {
int n;
cin >> n;
vector<int> a(n);
for (auto& x : a) cin >> x;
vector<int> b({a[n - 1]});
for (int i = n - 2; i >= 0; --i) {
if (a[i] > b.back()) {
b.push_back(a[i] % 10);
b.push_back(a[i] / 10);
} else {
b.push_back(a[i]);
}
}
cout << (is_sorted(b.rbegin(), b.rend()) ? "YES" : "NO") << '\n';
}
}
|
1948
|
C
|
Arrow Path
|
There is a grid, consisting of $2$ rows and $n$ columns. The rows are numbered from $1$ to $2$ from top to bottom. The columns are numbered from $1$ to $n$ from left to right. Each cell of the grid contains an arrow pointing either to the left or to the right. No arrow points outside the grid.
There is a robot that starts in a cell $(1, 1)$. Every second, the following two actions happen one after another:
- Firstly, the robot moves left, right, down or up (\textbf{it can't try to go outside the grid, and can't skip a move});
- then it moves along the arrow that is placed in the current cell (the cell it ends up after its move).
Your task is to determine whether the robot can reach the cell $(2, n)$.
|
There are multiple different approaches to this problem; we will describe a couple of them. The key observation, which works for all of these approaches, is that, since we never skip a move and never try to move outside the grid, we can split the cells into two groups: if the sum of the coordinates of a cell is even (we will call it an "even cell"), we can end up in such a cell only after moving along the arrow (only after the second action during each second); otherwise, if the sum of the coordinates is odd (an "odd cell"), we can end up in such a cell only after making a move on our own (only after the first action during each second). Each movement leads us from one of these groups to the other group, which is why we always visit even cells after the second action and odd cells after the first action during each second. Solution 1 (graph-based). Create a graph where each cell is represented by a vertex; if it is an even cell, then the robot can choose any direction from it, so we add directed arcs from it into all adjacent cells. If it is an odd cell, then the robot will be forced to move along the arrow in that cell, so we add only one directed arc from the corresponding vertex. Then we check if the vertex representing $(2, n)$ is reachable from the vertex representing $(1, 1)$ via DFS or BFS. Solution 2 (observation-based). We can try searching for a pattern that "blocks" our robot, i. e. does not let it advance further. If there are two diagonally adjacent odd cells (cells sharing a corner) which both have arrows pointing to the left, then every path of the robot will pass through at least one of them; and every time it goes through one of them, it will be forced to the left. So any such pair of odd <-type cells on the same diagonal means that the answer is NO.
Otherwise, we can show that the answer is yes; the proof is based on the fact that for every even cell, either the cell in the same column or the cell to the right will contain an arrow leading to the right, so we can always go forward.
|
[
"brute force",
"constructive algorithms",
"dfs and similar",
"dp",
"graphs",
"shortest paths"
] | 1,300
|
#include <bits/stdc++.h>
using namespace std;
int main() {
int t;
cin >> t;
while (t--) {
int n;
cin >> n;
vector<char> ok1(n / 2), ok2(n / 2);
for (int i = 0; i < 2; ++i) {
string s;
cin >> s;
for (int j = 0; j < n; ++j) if ((i + j) & 1) {
ok1[(i + j) / 2] |= (s[j] == '>');
ok2[(j - i + 1) / 2] |= (s[j] == '>');
}
}
bool ans = true;
for (int i = 0; i < n / 2; ++i) ans &= ok1[i] && ok2[i];
cout << (ans ? "YES" : "NO") << '\n';
}
}
|
1948
|
D
|
Tandem Repeats?
|
You are given a string $s$, consisting of lowercase Latin letters and/or question marks.
A tandem repeat is a string of an even length such that its first half is equal to its second half.
A string $a$ is a substring of a string $b$ if $a$ can be obtained from $b$ by the deletion of several (possibly, zero or all) characters from the beginning and several (possibly, zero or all) characters from the end.
Your goal is to replace each question mark with some lowercase Latin letter in such a way that the length of the longest substring that is a tandem repeat is maximum possible.
|
The key idea of this problem is to believe that no advanced algorithms are required to solve it. So we can start looking for ways to iterate over all substrings such that the solution doesn't degenerate into $O(n^3)$. One of them is the following.

How do we check whether the substring $[l; r)$ is a tandem repeat? Let $d = \frac{r - l}{2}$. Then $s_i$ should match $s_{i + d}$ (either be equal, or one of them should be a question mark) for all $i$ from $l$ to $l + d$ exclusive. If you iterate over the left border, then over the right border of the substring, a lot of these checks change. However, if you iterate over the length $d$ first and then over the left border, it becomes much easier. What changes when you go from checking $[l; r)$ to checking $[l + 1; r + 1)$? One check is removed: "$s_l$ matches $s_{l + d}$" is now irrelevant, since $l$ doesn't belong to the substring anymore. One check is added: "$s_{l + d}$ matches $s_r$" is now required, since $r$ belongs to the substring.

So, the solution can be as follows: maintain the number of successful checks for the current substring $[l; r)$. If that value is equal to $d$, then the substring is a tandem repeat, and you can update the answer with its length. Otherwise, subtract the contribution of the check ($1$ if it's successful, $0$ otherwise) for the pair $(l, l + d)$, add the contribution for the pair $(l + d, r)$, and proceed to $[l + 1; r + 1)$. Overall complexity: $O(n^2)$ per testcase.
|
[
"brute force",
"strings",
"two pointers"
] | 1,700
|
for _ in range(int(input())):
s = input()
n = len(s)
ans = 0
for d in range(1, n // 2 + 1):
cnt = 0
for i in range(n - d):
cnt += s[i] == s[i + d] or s[i] == '?' or s[i + d] == '?'
if i - d >= 0:
cnt -= s[i - d] == s[i] or s[i - d] == '?' or s[i] == '?'
if i - d >= -1 and cnt == d:
ans = 2 * d
print(ans)
|
1948
|
E
|
Clique Partition
|
You are given two integers, $n$ and $k$. There is a graph on $n$ vertices, numbered from $1$ to $n$, which initially has no edges.
You have to assign each vertex an integer; let $a_i$ be the integer on the vertex $i$. All $a_i$ should be distinct integers from $1$ to $n$.
After assigning integers, for every pair of vertices $(i, j)$, you add an edge between them if $|i - j| + |a_i - a_j| \le k$.
Your goal is to create a graph which can be partitioned into the minimum possible (for the given values of $n$ and $k$) number of cliques. Each vertex of the graph should belong to exactly one clique. Recall that a clique is a set of vertices such that every pair of vertices in it are connected with an edge.
Since BledDest hasn't really brushed his programming skills up, he can't solve the problem "given a graph, partition it into the minimum number of cliques". So we also ask you to print the partition itself.
|
There are two main steps to solve the problem: analyzing the maximum size of a clique, and showing a construction that always achieves a clique of the maximum possible size.

Firstly, the maximum size of a clique cannot exceed $k$. If there are at least $k+1$ vertices in the same clique, then at least two of them (call them $i$ and $j$) have $|i - j| \ge k$. And since $a_i \ne a_j$, we have $|a_i - a_j| \ge 1$. So $|i - j| + |a_i - a_j|$ is at least $k+1$, and these two vertices won't have an edge connecting them (so they cannot belong to the same clique).

Secondly, let's try to find a construction that always achieves cliques of size $k$. To do this, try to solve the problem when $k = n$; if $n > k$, we can split all vertices into $\lceil \frac{n}{k} \rceil$ cliques as follows: for each clique, we assign a consecutive block of vertices and the numbers that will be assigned to them (for example, vertices from $1$ to $k$ and numbers from $1$ to $k$ belong to the first clique, vertices from $k+1$ to $2k$ and numbers from $k+1$ to $2k$ belong to the second clique), and then use the solution for $n = k$ on each of these blocks.

To obtain a solution for $n = k$, you can try brute-forcing it locally on, say, $n \le 10$ and analyzing the results. One of the possible constructions is as follows: let $m = \lceil \frac{k}{2} \rceil$; split all vertices and numbers from $1$ to $k$ into two blocks, $[1, m]$ and $[m + 1, k]$; then, in each block, the greater the index of the vertex, the smaller the integer it gets. So it looks as follows: $a_1 = m, a_2 = m-1, \dots, a_m = 1, a_{m+1} = k, a_{m+2} = k-1, \dots, a_k = m+1$. We can show that the "distance" between any two vertices in different halves is exactly $k$, and the distance between any two vertices in the same half is at most $2(m-1)$, which never exceeds $k$.
|
[
"brute force",
"constructive algorithms",
"graphs",
"greedy",
"implementation"
] | 2,100
|
#include<bits/stdc++.h>
using namespace std;
void solve()
{
int n, k;
cin >> n >> k;
vector<int> a(n), c(n);
for(int i = 0; i < n; i++)
{
a[i] = i + 1;
c[i] = i / k + 1;
}
int q = *max_element(c.begin(), c.end());
for(int i = 1; i <= q; i++)
{
int l = find(c.begin(), c.end(), i) - c.begin();
int r = c.rend() - find(c.rbegin(), c.rend(), i);
int m = (l + r) / 2;
reverse(a.begin() + l, a.begin() + m);
reverse(a.begin() + m, a.begin() + r);
}
for(int i = 0; i < n; i++)
cout << a[i] << " \n"[i == n - 1];
cout << q << "\n";
for(int i = 0; i < n; i++)
cout << c[i] << " \n"[i == n - 1];
}
int main()
{
int t;
cin >> t;
for(int i = 0; i < t; i++)
solve();
}
|
1948
|
F
|
Rare Coins
|
There are $n$ bags numbered from $1$ to $n$, the $i$-th bag contains $a_i$ golden coins and $b_i$ silver coins.
The value of a gold coin is $1$. The value of a silver coin is either $0$ or $1$, determined for each silver coin independently ($0$ with probability $\frac{1}{2}$, $1$ with probability $\frac{1}{2}$).
You have to answer $q$ independent queries. Each query is the following:
- $l$ $r$ — calculate the probability that the total value of coins in bags from $l$ to $r$ is strictly greater than the total value in all other bags.
|
To calculate the probability, we will count the number of ways to assign values to silver coins in such a way that the total value inside the segment is greater than the total value outside. Let us define $x$ as the number of silver coins with value $1$ inside the segment, $y$ as the number of silver coins with value $1$ outside, and $k$ as the difference between the numbers of golden coins inside and outside the segment. Then an assignment of values to the coins is considered good if the condition $x - y + k > 0$ is met.

We have an issue here: coins in the segment and coins outside the segment affect the inequality in different directions (one kind increases the difference with probability $\frac{1}{2}$, while the other decreases it with probability $\frac{1}{2}$). That is not convenient, so let's fix it: say that the coins inside the segment have value $1$ by default and with probability $\frac{1}{2}$ become $0$, while the coins outside the segment have value $0$ by default and with probability $\frac{1}{2}$ become $1$. Then each of them, with probability $\frac{1}{2}$, reduces the difference "total in the segment minus total outside the segment" by $1$. Thus, we have some starting value that is decreased by $1$ with probability $\frac{1}{2}$, $m$ times (where $m$ is the total number of silver coins), and it remains to count the number of ways in which the number of decreases is not large enough to break the inequality.

Let $cur$ be the number of silver coins inside the segment. Then the starting value is $st = k+cur$, and the number of ways to keep the inequality is $sum = \sum\limits_{i=0}^{st-1} \binom{m}{i}$. The answer to the problem is $\frac{sum}{2^m}$. Now we only need to answer queries faster than iterating from $0$ to $st$. Thanks to the fact that the value of $m$ is always the same, we can precalculate the prefix sums of these binomial coefficients.
|
[
"combinatorics",
"math",
"probabilities"
] | 2,500
|
#include <bits/stdc++.h>
using namespace std;
const int MOD = 998244353;
int add(int x, int y) {
x += y;
if (x >= MOD) x -= MOD;
return x;
}
int mul(int x, int y) {
return x * 1LL * y % MOD;
}
int binpow(int x, int y) {
int z = 1;
while (y) {
if (y & 1) z = mul(z, x);
x = mul(x, x);
y >>= 1;
}
return z;
}
int main() {
ios::sync_with_stdio(false); cin.tie(0);
int n, q;
cin >> n >> q;
vector<int> a(n), b(n);
for (auto& x : a) cin >> x;
for (auto& x : b) cin >> x;
vector<int> suma(n + 1), sumb(n + 1);
for (int i = 0; i < n; ++i) {
suma[i + 1] = suma[i] + a[i];
sumb[i + 1] = sumb[i] + b[i];
}
int m = sumb[n];
vector<int> f(m + 1), invf(m + 1);
f[0] = 1;
for (int i = 1; i <= m; ++i) f[i] = mul(f[i - 1], i);
invf[m] = binpow(f[m], MOD - 2);
for (int i = m; i > 0; --i) invf[i - 1] = mul(invf[i], i);
vector<int> sumc(m + 2);
for (int i = 0; i <= m; ++i)
sumc[i + 1] = add(sumc[i], mul(f[m], mul(invf[i], invf[m - i])));
int pw2 = binpow(binpow(2, MOD - 2), m);
while (q--) {
int l, r;
cin >> l >> r;
--l;
int k = 2 * (suma[r] - suma[l]) - suma[n];
int cur = sumb[r] - sumb[l];
int mx = max(0, min(k + cur, m + 1));
int cnt = sumc[mx];
cout << mul(cnt, pw2) << ' ';
}
}
|
1948
|
G
|
MST with Matching
|
You are given an undirected connected graph on $n$ vertices. Each edge of this graph has a weight; the weight of the edge connecting vertices $i$ and $j$ is $w_{i,j}$ (or $w_{i,j} = 0$ if there is no edge between $i$ and $j$). All weights are positive integers.
You are also given a positive integer $c$.
You have to build a spanning tree of this graph; i. e. choose exactly $(n-1)$ edges of this graph in such a way that every vertex can be reached from every other vertex by traversing some of the chosen edges. The cost of the spanning tree is the sum of two values:
- the sum of weights of all chosen edges;
- the maximum matching in the spanning tree (i. e. the maximum size of a set of edges such that they all belong to the chosen spanning tree, and no vertex has more than one incident edge in this set), multiplied by the given integer $c$.
Find any spanning tree with the minimum cost. Since the graph is connected, there exists at least one spanning tree.
|
The key to solving this problem is applying Kőnig's theorem: in a bipartite graph, the size of the maximum matching is equal to the size of the minimum vertex cover. At first glance it shouldn't work here, because our graph is not bipartite; however, when we leave only the edges belonging to the chosen spanning tree, it becomes bipartite. So, instead of adding $c$ to the cost of the tree for each edge in its maximum matching, we can add $c$ to the cost for each vertex in its minimum vertex cover. This allows us to iterate on that vertex cover.

Let's fix a set of vertices $S$, declare it the minimum vertex cover, and say that the second part of the cost is equal to $c \cdot |S|$. How do we build a minimum-cost spanning tree which has $S$ as its minimum vertex cover? That's not really easy to do; but instead, we can build a minimum-cost spanning tree which has $S$ as one of its vertex covers (not necessarily minimum). The fact that the tree might have some other vertex cover of smaller size doesn't really matter, because we can generate this tree again using that smaller set of vertices as its vertex cover.

So, how do we build a minimum spanning tree such that $S$ is its vertex cover? This is actually easy: we forbid all edges such that none of their endpoints are in $S$ (because they can't be covered), and use any regular MST algorithm on the edges we have not forbidden. If we use Prim's algorithm, we get a solution in $O(2^n \cdot n^2)$, which is fast enough to pass the time limit. Kruskal's algorithm with DSU might be a bit slower, but can also get accepted.
|
[
"bitmasks",
"brute force",
"dsu",
"graph matchings",
"trees"
] | 3,100
|
#include<bits/stdc++.h>
using namespace std;
int n, c;
int g[21][21];
const int INF = int(1e9);
int primMST(int vertex_cover_mask)
{
vector<int> d(n, INF);
vector<bool> used(n, false);
d[0] = 0;
int ans = 0;
for(int i = 0; i < n; i++)
{
int idx = -1;
for(int j = 0; j < n; j++)
{
if(!used[j] && d[j] < INF && (idx == -1 || d[j] < d[idx]))
idx = j;
}
if(idx == -1) return INF;
used[idx] = true;
ans += d[idx];
for(int k = 0; k < n; k++)
{
if(used[k])
continue;
if((vertex_cover_mask & (1 << idx)) == 0 && (vertex_cover_mask & (1 << k)) == 0)
continue;
d[k] = min(d[k], g[idx][k]);
}
}
return ans;
}
int main()
{
cin >> n >> c;
for(int i = 0; i < n; i++)
{
for(int j = 0; j < n; j++)
{
cin >> g[i][j];
if(g[i][j] == 0) g[i][j] = INF;
}
}
int ans = INF;
for(int mask = 0; mask < (1 << n); mask++)
{
int cur = primMST(mask);
if(cur != INF) ans = min(ans, cur + c * __builtin_popcount(mask));
}
cout << ans << endl;
}
|
1949
|
A
|
Grove
|
You want to plant trees in a square lawn of size $n \times n$ whose corners have Cartesian coordinates $(0, 0)$, $(n, 0)$, $(0, n)$, and $(n, n)$. Trees can only be planted at locations with integer coordinates. Every tree will grow roots within a disk of radius $r$ centered at the location where the tree was planted; such disks must be fully contained in the lawn (possibly touching the boundary of the lawn) and can only intersect each other on their boundaries.
Find a configuration that maximizes the number of trees.
|
Let $\delta = \lceil r \rceil$. Note that all trees need to be planted at least $\delta$ away from the boundary of the lawn, so their coordinates $(x, y)$ have to satisfy $\delta \leq x, y \leq n-\delta$. For integers $0 \leq a_\delta, \dots, a_{n-\delta} \leq n-\delta$, denote by $f(a_\delta, \dots, a_{n-\delta})$ any configuration that maximizes the number of trees while only using locations $(x, y)$ such that $x \leq a_y$. Our task is to compute $f(n-\delta, \dots, n-\delta)$. We compute values of $f$ recursively, and store all results using memoization. Let $\bar x = \max(a_\delta, \dots, a_{n-\delta})$, and let $\bar y$ be such that $a_{\bar y} = \bar x$. Note that, if $\bar x < \delta$, then $f(a_\delta, \dots, a_{n-\delta})=\emptyset$ because any tree would be too close to the left boundary of the lawn. Assuming that $\bar x \geq \delta$, we try to plant or not plant a tree at the location $(\bar x, \bar y)$. If we do not plant a tree at $(\bar x, \bar y)$, an optimal configuration is given by $f(a_\delta', \dots, a_{n-\delta}')$, where $a_y' = a_y$ if $y \neq \bar y$ and $a_{\bar y}' = a_{\bar y} - 1$. If we plant a tree at $(\bar x, \bar y)$, an optimal configuration is given by $f(a_\delta', \dots, a_{n-\delta}') \cup \{ (\bar x, \bar y) \}$, where $a_y'$ is the largest integer $\leq a_y$ such that locations $(a_y', y)$ and $(\bar x, \bar y)$ are at distance $\geq 2r$. The number of recursive calls increases as $r$ decreases. For this reason, it can be beneficial to solve by hand the cases where $r$ is small. For instance, if $r\leq 1/2$, then we can plant trees at all integer coordinates in the interior of the lawn; if $1/2 < r < \sqrt{2}/2$, we can plant trees at all integer coordinates $(x, y)$ in the interior of the lawn such that $x+y$ is even. If the implementation of the above algorithm is not fast enough, it is also possible to pre-compute all optimal configurations offline. 
Indeed, for every $n$, there is a finite number of intervals $(p,q] \subseteq \mathbb R$ such that the valid configurations of trees are the same for all $r\in (p, q]$. The boundaries of these intervals are of the form $\sqrt{a^2+b^2}/2$ for integers $0\leq a \leq b \leq n$. For instance, the first two intervals are always $(0,1/2]$ and $(1/2, \sqrt 2/2]$, which we have already encountered above. Finally, it is also possible to solve this problem using a generic max-clique algorithm, especially in combination with the above idea to pre-compute optimal configurations offline.
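As a sanity check for the two hand-solved regimes mentioned above, here is a small Python sketch (the function name is ours, not from any official solution; it assumes $r > 0$) that enumerates the planting positions for $r \le 1/2$ and for $1/2 < r < \sqrt 2/2$:

```python
import math

def small_r_configuration(n, r):
    """Tree positions for the two easy radius regimes.

    Returns a list of (x, y) integer points, or None when r is outside
    the regimes handled here.
    """
    if r <= 0.5:
        # Disks of radius <= 1/2 around distinct lattice points can only
        # touch, so every interior lattice point works.
        return [(x, y) for x in range(1, n) for y in range(1, n)]
    if r < math.sqrt(2) / 2:
        # Lattice points of equal parity of x + y are at distance
        # >= sqrt(2) > 2r, so their disks intersect only on boundaries.
        return [(x, y) for x in range(1, n) for y in range(1, n)
                if (x + y) % 2 == 0]
    return None
```

For larger $r$ one falls back to the memoized recursion (or the offline pre-computation) described above.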
|
[
"brute force",
"dfs and similar",
"dp",
"geometry",
"probabilities"
] | 3,300
| null |
1949
|
B
|
Charming Meals
|
The Czech cuisine features $n$ appetizers and $n$ main dishes. The $i$-th appetizer has spiciness $a_i$, and the $i$-th main dish has spiciness $b_i$.
A typical Czech meal consists of exactly one appetizer and one main dish. You want to pair up the $n$ appetizers and $n$ main dishes into $n$ meals with each appetizer and each main dish being included in exactly one meal.
Your meals shall surprise the diners, so you want the spiciness levels of the two parts of the same meal to be as different as possible. The charm of a meal is the difference (in absolute value) between the spiciness of the appetizer and the spiciness of the main dish. So, a meal consisting of an appetizer with spiciness $x$ and a main dish with spiciness $y$ has charm equal to $|x-y|$.
You want to maximize the minimum charm of the resulting $n$ meals. What is the largest possible value of the minimum charm that you can achieve?
|
Let's sort the spicinesses, so assume $a_1 \leq a_2 \leq \ldots \leq a_n$ and $b_1 \leq b_2 \leq \ldots \leq b_n$.

Lemma: let $c, d$ be nondecreasing arrays of length $n$. If there exists a permutation $\sigma(1), \sigma(2), \ldots, \sigma(n)$ such that $c_i \leq d_{\sigma(i)}$ for all $i$, then $c_i \leq d_i$ for all $i$. Proof: for each $i$, there can be at most $i-1$ values of $d$ smaller than $c_i$, namely $d_{\sigma(1)}, d_{\sigma(2)}, \ldots, d_{\sigma(i-1)}$, since for any $j \geq i$ we have $d_{\sigma(j)} \geq c_j \geq c_i$.

Consider some optimal pairing; assume we paired $a_i$ with $b_{\sigma(i)}$ and got a minimum charm of $k$. Let's call the pairs with $a_i + k \leq b_{\sigma(i)}$ small, and the pairs with $a_i \geq b_{\sigma(i)} + k$ large. Consider the small pairs and assume there are $t$ of them. Note that, by the lemma, we can assume that the $a_i$s and $b_{\sigma(i)}$s in these pairs are nondecreasing; similarly, they are nondecreasing in the large pairs. But then we can just pair the smallest $t$ $a_i$s with the largest $t$ $b_i$s, and the largest $n-t$ $a_i$s with the smallest $n-t$ $b_i$s, and we will still have a charm of at least $k$. More formally: for $1 \leq i \leq t$, pair $a_i$ with $b_{i + (n-t)}$; for $t+1 \leq i \leq n$, pair $a_i$ with $b_{i - t}$. So, it's enough to check such a pairing for each possible $0 \leq t \leq n$ and pick the best one! Total runtime is $O(n^2)$.
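The final pairing check is simple enough to sketch directly. Below is a hedged Python sketch (our illustration, not the official implementation) that tries every split point $t$ and keeps the best minimum charm, in $O(n^2)$ total:

```python
def max_min_charm(a, b):
    """Try every split point t from the editorial and keep the best
    minimum charm over the n meals."""
    a, b = sorted(a), sorted(b)
    n = len(a)
    best = 0
    for t in range(n + 1):
        # smallest t appetizers paired with largest t main dishes,
        # largest n-t appetizers paired with smallest n-t main dishes
        charm = min(
            min((abs(a[i] - b[i + n - t]) for i in range(t)), default=10**18),
            min((abs(a[i] - b[i - t]) for i in range(t, n)), default=10**18),
        )
        best = max(best, charm)
    return best
```

For example, with appetizers $[1, 5]$ and main dishes $[1, 5]$, the split $t = 1$ pairs $1$ with $5$ and $5$ with $1$, giving a minimum charm of $4$.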
|
[
"binary search",
"brute force",
"greedy",
"sortings"
] | 1,500
| null |
1949
|
C
|
Annual Ants' Gathering
|
Deep within a forest lies an ancient tree, home to $n$ ants living in $n$ tiny houses, indexed from $1$ to $n$, connected by the branches of the tree.
Once a year, all the ants need to gather to watch the EUC. For this, all ants move along the $n-1$ branches of the tree they live on to meet at the home of one ant.
However, this year the ants could not agree on where to meet and need your help to gather up. You can tell all the ants currently at house $u$ to move to house $v$ if there is a branch directly connecting those two houses. However, the ants ignore your command if there are fewer ants gathered in house $v$ than in house $u$, i.e., if it would be easier for the ants from house $v$ to move. This even holds true if no ant at all is currently in house $v$. You can give this kind of commands as many times as you want.
Is it possible for you to gather all the ants in a single house?
|
First Solution. Since ants cannot move to an empty house, the region of non-empty houses always needs to stay connected. Therefore, only ants on a leaf of the tree (of non-empty houses) can ever move, and they can only move in one direction. Further, notice that if the smallest group of ants on a leaf cannot move, then gathering is impossible (all other leaf groups are larger and can therefore also never be merged into the parent group of the smallest group). We can simply simulate this process.

Second Solution. Notice that the centroids of the tree are the only houses where all ants can gather up. For a fixed root, we can greedily simulate the process with a DFS for each centroid.
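One possible way to implement the first solution is a priority-queue simulation; the sketch below (our own, with a hypothetical function name, assuming one ant per house initially) repeatedly takes the smallest group sitting on a leaf of the non-empty part of the tree and declares failure when that group cannot move:

```python
import heapq

def can_gather(n, edges):
    """Greedy simulation: always move the smallest leaf group."""
    adj = [set() for _ in range(n + 1)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cnt = [1] * (n + 1)  # one ant per house initially
    heap = [(1, v) for v in range(1, n + 1) if len(adj[v]) == 1]
    heapq.heapify(heap)
    alive = n
    while alive > 1:
        c, v = heapq.heappop(heap)
        if c != cnt[v] or len(adj[v]) != 1:
            continue  # stale heap entry
        (u,) = adj[v]  # unique neighbour of the leaf
        if cnt[v] > cnt[u]:
            return False  # smallest leaf group can never move
        cnt[u] += cnt[v]
        adj[u].discard(v)
        adj[v].clear()
        alive -= 1
        if len(adj[u]) == 1:  # u became a leaf of the non-empty region
            heapq.heappush(heap, (cnt[u], u))
    return True
```

A "spider" with three legs of length two (center $1$ with paths $1\!-\!2\!-\!3$, $1\!-\!4\!-\!5$, $1\!-\!6\!-\!7$) is an example where the simulation gets stuck: the three groups of size $2$ all face a center holding a single ant.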
|
[
"dfs and similar",
"dp",
"greedy",
"trees"
] | 1,900
| null |
1949
|
D
|
Funny or Scary?
|
You are designing a new video game. It has $n$ scenarios, which the player may play in any order, but each scenario must be played exactly once. When a player switches from a scenario to another scenario, the game shows a specially crafted transition video to make it all feel part of one big story. This video is specific to a pair of scenarios, but not to their order, in other words, the video playing when switching from scenario $a$ to scenario $b$ is the same as the video playing when switching from scenario $b$ to scenario $a$. Therefore, you need to create $\frac{n(n-1)}{2}$ different transition videos, one for each possible pair of different scenarios.
Each transition video can be either \underline{funny} or \underline{scary}. It is boring to see too many funny videos or too many scary videos in a row. Therefore, your goal is to create the videos in such a way that no matter in which order the player approaches the scenarios, they will never see more than $\lceil \frac{3n}{4} \rceil$ transition videos of the same type in a row.
You have already come up with ideas for at most $\lfloor \frac{n}{2} \rfloor$ of the transition videos, and therefore already know if those will be funny or scary. Now you need to choose funny or scary for all other transition videos in such a way that the above requirement is satisfied.
|
In graph terms, this problem can be formulated as follows: you are given a complete undirected graph on $n$ vertices. Some of the edges of the graph are already colored with two colors. You need to color all remaining edges with two colors in such a way that the graph does not have long monochromatic simple paths. First of all, what happens when no edges are already colored? It turns out this exact question is well-studied in mathematics, and the definitive answer was given 57 years ago in the following paper: L. Gerencsér, A. Gyárfás, On Ramsey-type problems, Ann. Univ. Sci. Budapest Eötvös Sect. Math. 10 (1967), pp. 167-170. They prove that for any such graph on $n$ vertices there always exists a monochromatic path with at least $\lfloor \frac{2n}{3} \rfloor$ edges, and provide an example of such a graph where the longest monochromatic path has exactly $\lfloor \frac{2n}{3} \rfloor$ edges. What does their example look like? The key idea is to use a bipartite graph with unbalanced parts. More specifically, we split all vertices into two groups, the first one of size roughly $\frac{n}3$, and the second one of size roughly $\frac{2n}3$. We color the edges between the two groups red, and the edges within each group blue. Now each blue path will have to stay within one of the groups, and therefore will not be longer than $\frac{2n}3$. The red edges form a bipartite graph, so each red path will have to alternate groups, which means that it will have to go to the smaller group every second step, and since the smaller group has only $\frac{n}3$ vertices, the path also cannot be longer than $\frac{2n}3$. Sadly, not everyone will have read this seminal paper from 1967, so how can one come up with this construction during the contest? Since every edge in the complete graph will have one of the two colors, at least one of the colors will be used by a lot of edges. So we need to come up with examples of graphs with lots of edges, but no long paths. 
Disconnected graphs are a natural way to achieve this, and then we can consider what edges the complement of a disconnected graph must have, and arrive at the above construction. One other thing one can do when stuck is experimentation: try generating random complete graphs colored with two colors until you find some examples with no long monochromatic paths, for example where there is no monochromatic path with $n-1$ edges, and then try to generalize their structure. In fact, the answer to the second sample was generated in exactly this fashion: we repeatedly tried to color each edge randomly and independently until we got a graph with 12 vertices and no monochromatic path with 10 edges. It actually took a few hours to find just one such graph. Studying this answer could be another way to discover the above construction, as the funny edges do form an almost-bipartite structure with nodes 5 and 11 in one part, and the rest in the other part (for example, row 5 of the matrix is $\texttt{FFFF.FFFFFSF}$).

We still need to deal with edges that are already colored, but we also still have some reserves: our paths do not exceed $\lfloor \frac{2n}{3} \rfloor$, while this problem allows paths up to $\lceil \frac{3n}{4} \rceil$. How bad is a wrongly colored edge for the above construction? If an edge that is supposed to be red is actually blue, it is very bad: now the two blue groups are connected, and therefore it is easy to create a blue path through all vertices. However, if an edge that is supposed to be blue is actually red, it is not so bad: even when this edge is used in a red path, all "originally" red edges still have to have one of their endpoints in the smaller group, and therefore the number of originally red edges in a red path still does not exceed two times the size of the smaller group, so changing one blue edge to red increases the bound on the red path length by at most 1.
We have the freedom to choose which vertex belongs to which group, so we need to use this freedom to make sure that all edges that are already blue connect vertices within one group, as even one incorrectly blue edge is bad. This means that we need to find the connected components using the preexisting blue edges, and then take some of those components to the bigger group and the rest to the smaller group. The edges that are already colored red can each increase the bound on the red path length by 1. If all $\lfloor \frac{n}{2} \rfloor$ already colored edges are red, this would increase the path length by $\lfloor \frac{n}{2} \rfloor$, which is a lot. However, we also have the freedom to decide which of funny and scary corresponds to red. Let us choose the one that has fewer edges, and therefore the number of already colored red edges will not exceed $\lfloor \frac{n}{4} \rfloor$. Because the length of the red path can be increased by $\lfloor \frac{n}{4} \rfloor$ through the already colored edges, we need to make the smaller group even smaller: since the red path must not exceed $\lceil \frac{3n}{4} \rceil$, we can have at most $\lfloor\frac{\lceil \frac{3n}{4} \rceil - \lfloor \frac{n}{4} \rfloor}{2}\rfloor$ vertices in the smaller group, which is roughly $\frac{n}{4}$, and more precisely it is $\ge \lfloor \frac{n}{4} \rfloor$. If the smaller group has $\lfloor \frac{n}{4} \rfloor$ vertices, the bigger group has $n-\lfloor \frac{n}{4} \rfloor=\lceil \frac{3n}{4} \rceil$ vertices, which means that blue paths have at most $\lceil \frac{3n}{4} \rceil-1$ edges. Notice how all pieces of the puzzle come together now: this is good enough for this problem, but only barely, by 1 edge. So we can allow the smaller group to have either $\lfloor \frac{n}{4} \rfloor$ or $\lfloor \frac{n}{4} \rfloor - 1$ vertices.
The only remaining question is: since we add vertices to groups not one by one, but an entire preexisting blue edge component at a time, can we actually achieve the required size of the smaller group? It turns out that we can, using the following simple approach: sort the components in increasing order of size, and take them in this order to the smaller group until the size of the smaller group is $\lfloor \frac{n}{4} \rfloor - 1$ or $\lfloor \frac{n}{4} \rfloor$. The only way this could fail is if we jump from at most $\lfloor \frac{n}{4} \rfloor - 2$ to at least $\lfloor \frac{n}{4} \rfloor + 1$, meaning that the component we have added has at least 3 vertices. But since we add components in increasing order of size, it means that the remaining components also all have at least 3 vertices, which means that components with at least 3 vertices cover at least $n-(\lfloor \frac{n}{4} \rfloor - 2)=\lceil \frac{3n}{4} \rceil+2$ vertices of the graph. Since we need 2 edges for a component with 3 vertices, and with bigger components we still need at least $\frac{2k}{3}$ edges for a component with $k$ vertices, we need at least $\frac{2(\lceil \frac{3n}{4} \rceil+2)}{3}$ edges to get into this situation, and $\frac{2(\lceil \frac{3n}{4} \rceil+2)}{3} \ge \frac{\frac{3n}{2}+4}{3} > \frac{\frac{3n}{2}}{3} = \frac{n}{2}$. So we need more than $\frac{n}{2}$ edges to get into this situation, which is impossible under the constraints of the problem, therefore the greedy approach of taking already blue components in increasing order of size always works. The solution is complete now, and it runs in time linear in the input size, in other words $O(n^2)$. Why does the problem have the very low limit of $n \le 24$ in this case? The reason is that we also need to implement the checker that will read your output and check if there are no long monochromatic paths. 
For that we use a relatively standard dynamic programming approach that runs in $O(n^2\cdot 2^n)$, sped up using bitmasks to do only $O(n\cdot 2^n)$ operations with integers. It takes a few seconds for $n=24$, so we could not go much higher. We have explored various heuristic approaches to find the longest monochromatic paths for larger values of $n$, but it turns out that even though those heuristics work quite well on random graphs, they cannot reliably find the longest path in the type of graph output by the solution above, namely an unbalanced bipartite graph with a few additional edges. The low value of $n$ allows us to simplify the implementation of the above solution in the following manner: we can skip finding connected components of preexisting blue edges, and instead just iterate over all $2^n$ subsets of vertices as candidates for the smaller group, checking that the size of the smaller group is appropriate and that no preexisting blue edges go from the smaller group to its complement.
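The greedy packing of blue components can be sketched in a few lines; the following Python fragment (an illustration, not the contest solution) sorts component sizes in increasing order and stops once the smaller group reaches $\lfloor n/4 \rfloor - 1$ or $\lfloor n/4 \rfloor$ vertices:

```python
def pick_smaller_group(component_sizes, n):
    """Greedy from the editorial: take blue-edge components in increasing
    order of size until the smaller group has floor(n/4)-1 or floor(n/4)
    vertices. Returns the chosen sizes, or None if the greedy overshoots
    (which the editorial argues cannot happen under the constraints)."""
    target_lo, target_hi = n // 4 - 1, n // 4
    chosen, total = [], 0
    for s in sorted(component_sizes):
        if total >= target_lo:
            break
        chosen.append(s)
        total += s
    return chosen if target_lo <= total <= target_hi else None
```

For $n = 24$ and all-singleton components this picks five vertices ($\lfloor 24/4 \rfloor - 1 = 5$); a hypothetical size-22 component would make it overshoot, but such a component would require more than $n/2$ preexisting blue edges.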
|
[
"constructive algorithms"
] | 2,600
| null |
1949
|
E
|
Damage per Second
|
You just created a new character in your favourite role-playing game and now have to decide how to skill him.
The two skill attributes to be chosen are: \underline{damage per hit} and \underline{hits per second}. Damage per hit is the amount of damage you deal with a single hit, while hits per second is the number of hits you can make in one second. Initially, both skill attributes are set at $0$. You have $k$ skill points to distribute as you want; in other words, you can choose the values of the two skills so that they are positive integers with sum at most $k$.
The tutorial of the game (the boring part you want to finish as soon as possible) consists of $n$ monsters to be killed one after the other. The $i$-th monster has $h_i$ health points, i.e., it dies after you have inflicted at least $h_i$ damage.
How can you assign the two skill attributes to minimize the time necessary to kill all the $n$ monsters?
|
Once the sugar-coating is removed, the problem boils down to: Let $f(x) = \sum_{i=1}^n \left\lceil\frac{h_i}{x}\right\rceil$, and $g(x) = \frac{f(x)}{k-x}$. Find the minimum of $g(x)$ over $1 \le x \le k-1$. Sort the values so that $h_1\geq h_2\geq\dots\geq h_n$. Define $H:=h_1+\dots+h_n$. From now on, we will assume that $k$ is even, so that $x=\frac k2$ is a valid choice (the case of odd $k$ is entirely analogous, just more annoying to write down). Let us begin with two simple observations. Observation 1. What are the "most interesting" values of $x$? If we ignore the ceiling in the definition of $g$, we get $g(x) \ge \frac{H}{x(k-x)}$, whose minimum is achieved at $x=\frac k2$. Unless the values $h_1, \dots, h_n$ are well-crafted, we expect the minimum of $g$ to be achieved for an $x$ very close to $\frac k2$. Observation 2. Speeding up the computation of $g$. Let us describe a strategy to compute $g(x)$. Iterate from $i=1$ to $n$ while $i < \frac{h_i}{x}\log n$. Say that this condition is satisfied for $1\le i\le q(x)$ and fails for $i = q+1$ (for notational clarity we will omit the dependence of $q$ on $x$); since the $h_i$ are sorted in decreasing order, the condition then fails for all larger $i$ as well. We compute the first part of $g(x)$, that is $\sum_{i=1}^q \lceil \frac{h_i}x\rceil$, naively in $O(q)$. For $q+1\le i \le n$ the quantity $\lceil \frac{h_i}x\rceil$ takes at most $\frac{h_{q+1}}x$ different values. Therefore the second part of $g(x)$, that is $\sum_{i=q+1}^n \lceil \frac{h_i}x\rceil$, can be computed in $O(\frac{h_{q+1}}{x}\log n) = O(q)$ by using binary search. So, we have described a way to compute $g(x)$ in $O(q(x))$, where $q$ satisfies $h_q \ge \frac{qx}{\log n}$. Description of the algorithm. These two observations suggest the following algorithm. Maintain the running minimum $m$ of $g(x)$, initializing it as $m=g(\frac k2)$. Iterate over all $x$, starting from those closest to $\frac k2$ and moving further away. For each $x$, if $\frac H{x(k-x)}\ge m$ then skip this choice of $x$ (as it cannot be a minimizer, in view of the first observation). 
Otherwise compute $g(x)$ in $O(q(x))$ as described in observation 2 and update $m$ accordingly. This algorithm produces the correct answer, but is it provably fast enough? Unexpectedly, yes! We will see that this algorithm has the remarkable running time $O(\sqrt{n\log n}k)$. Running time analysis. Let us begin by making one further remark about the second observation. By definition of $q(x)$, we have $H \ge h_1 + \cdots + h_q \ge qh_q \ge \frac{q^2x}{\log n} \implies q\le \sqrt{\frac{H\log n}{x}}.$ Let us now discuss the first observation. For any $1\le x\le k-1$, $\frac{H}{x(k-x)}\leq g(x)\leq \frac{H}{x(k-x)} + \frac{n}{k-x} .$ In particular $g(\frac k2)\le \frac{4H}{k^2} + \frac{2n}{k}$, so the running minimum always satisfies $m \le \frac{4H}{k^2} + \frac{2n}{k}$. Therefore $x$ is certainly skipped if $\frac{4H}{k^2} + \frac{2n}{k} < \frac{H}{x(k-x)} \iff x(k-x) = \frac{k^2}4 - \big(x-\frac k2\big)^2 < \frac{H}{\frac{4H}{k^2} + \frac{2n}{k}} \iff \big(x-\frac k2\big)^2 > \frac{k^2}4 - \frac{H}{\frac{4H}{k^2} + \frac{2n}{k}} = \frac{nk^3}{8H+4nk} .$ Hence $g(x)$ is actually computed only for the values of $x$ with $\Big|x - \frac k2\Big| \le \sqrt{\frac{nk^3}{8H+4nk}} \le \sqrt{\frac{nk^3}{8H}} .$ Conclusion. Now we have all the tools and we can conclude by considering two cases (which, morally, correspond to $H$ large and $H$ small). If $\sqrt{\frac{nk^3}{8H}} < \frac k4$, then the interesting values of $x$ are comparable to $k$. Thus $g(x)$ can be computed in $O(\sqrt{\frac{H\log n}{k}})$, and we perform such a computation at most $2\sqrt{\frac{nk^3}{8H}}$ times. So the overall complexity is $O(\sqrt{n\log n}k)$. On the other hand, if $\sqrt{\frac{nk^3}{8H}} \ge \frac k4$ (which implies $H\le 2nk$), then we use only the second observation to bound the complexity. Let us assume that we care only about values of $x$ of magnitude comparable to $k$; the other cases are easy to handle. Then the computation of $g(x)$ requires $O(q(x))$. To compute $g(x)$ for all $1\le x\le k$, we need $O\Big(\sum_{x=1}^k \sqrt{\frac{H\log n}{x}}\Big) = \sqrt{nk\log n}\cdot O\Big(\sum_{x=1}^k x^{-1/2}\Big) = O(\sqrt{n\log n}k).$
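Observation 2 can be turned into code along the following lines (an illustrative sketch under our own naming, not the authors' code; `h` is sorted in non-increasing order and `logn` is precomputed):

```cpp
#include <cassert>
#include <vector>
#include <algorithm>
#include <cmath>

// f(x) = sum of ceil(h_i / x), with h sorted in non-increasing order.
// The head (large h_i) is summed naively; in the tail, ceil(h_i/x) takes
// few distinct values, and the number of indices sharing each value is
// found by binary search, as in Observation 2.
long long f_of_x(const std::vector<long long>& h, long long x, double logn) {
    int n = h.size();
    long long total = 0;
    int i = 0;
    // head: while i < (h_i / x) * log n, add terms one by one
    while (i < n && (double)(i + 1) < (double)h[i] / x * logn) {
        total += (h[i] + x - 1) / x;
        ++i;
    }
    // tail: group indices by the value v = ceil(h_i / x); indices with
    // value >= v are exactly those with h_i > (v-1)*x
    while (i < n) {
        long long v = (h[i] + x - 1) / x;          // current (largest) value
        // first index whose h is <= (v-1)*x, i.e. whose ceiling is < v
        int j = std::lower_bound(h.begin() + i, h.end(), (v - 1) * x,
                                 [](long long a, long long b) { return a > b; })
                - h.begin();
        total += v * (j - i);
        i = j;
    }
    return total;
}
```

The head costs $O(q)$ and the tail $O(\frac{h_{q+1}}{x}\log n) = O(q)$, matching the analysis above; the answer is independent of the threshold, so any `logn` value still gives a correct sum.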
|
[
"brute force",
"math"
] | 2,900
| null |
1949
|
F
|
Dating
|
You are the developer of a dating app which ignores gender completely. The app has $n$ users, indexed from $1$ to $n$. Each user's profile features a list of the activities they enjoy doing. There are $m$ possible activities, indexed from $1$ to $m$.
A match between two users is good if they share at least one activity and, at the same time, both of them like at least one activity that the other user does not like.
Find a good match if it exists.
|
After formalizing the statement, we get the following: Given $n$ sets $S_1, S_2, \ldots, S_n$ of activities, find a pair $(a, b)$ such that all three sets $S_a \setminus S_b, S_b \setminus S_a, S_a \cap S_b$ are non-empty. In essence, this means that for a pair $(a, b)$ not to be good, $S_a$ and $S_b$ must be either disjoint or one included in the other. Consequently, if there are no good pairs, then the sets must induce a tree structure! In this structure, a vertex $u$ is an ancestor of a vertex $v$ if and only if $S_v \subseteq S_u$. The solution can now be summarized by the following idea: try to construct the tree (assuming there are no good pairs), and check for errors along the way (an error will, in fact, expose a good pair). There are multiple ways of achieving that, both top-down and bottom-up (and they are more or less equivalent). We will describe a solution based on a top-down approach. Top-down approach. Let's start by sorting the activity sets in decreasing order of length. For each activity set $S_i$ in this order, we create a parent-child relationship $j = p(i)$, where $j$ is the last processed index whose set shares some activity with $S_i$ (or $-1$, if there is no such index). The activity based on which we choose $j$ is not relevant, for reasons that will become apparent shortly. After finding such $j$ (or deciding that $j = -1$), we have some cases to consider: If $j \neq -1$ and $S_i \not\subseteq S_j$, then $(i, j)$ is a good pair: the two sets share an activity, $S_j$ does not include $S_i$ by assumption, and $S_i$ cannot include $S_j$ either, since the length of $S_j$ is at least that of $S_i$. If some activity $x \in S_i$ is also in some other set $S_k$ where $p(k) = p(i)$, then $(i, k)$ is a good pair. Otherwise, the activity sets still form a tree, therefore no good pairs exist (yet). The complexity is $\mathcal{O}(k \log k)$ or $\mathcal{O}(k)$, where $k$ is the total number of activities across all sets, depending on whether one uses ordered sets or hash maps. 
It is possible to implement this solution in $\mathcal{O}(k)$ using just arrays, but this is not required. Alternative approach. One may solve this problem without leveraging the tree structure of the activity sets. It stems from the observation that any good pair $(a, b)$ must share a common element $x$. Fix each such $x$ in turn, and sort the sets containing $x$ in order of length. Then, similarly to the first solution, one may prove that it suffices to find a set of smaller length having an element not present in some set of larger length. We can solve this by simply keeping an array which tracks the elements present in the sets, iterating from right to left. This approach does not quite work yet, as it has quadratic complexity; however, by treating the sets of large size (i.e., larger than $\sqrt{k}$) separately from the sets of small size (i.e., at most $\sqrt{k}$), one may achieve $\mathcal{O}(k \sqrt{k})$ complexity using this approach.
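A compact sketch of the top-down check (our reconstruction, not reference code: instead of tracking explicit parent pointers, it verifies that every current "owner" of a shared activity contains $S_i$, which is an equivalent way to detect both the parent and the sibling violations in a laminar family; `std::includes` performs the subset test on sorted vectors):

```cpp
#include <cassert>
#include <vector>
#include <algorithm>
#include <unordered_map>
#include <utility>

// Returns indices (a, b) of a good pair, or (-1, -1) if the sets form a
// laminar family (so no good pair exists).
std::pair<int,int> find_good_pair(std::vector<std::vector<int>> s) {
    int n = s.size();
    for (auto& v : s) std::sort(v.begin(), v.end());
    std::vector<int> order(n);
    for (int i = 0; i < n; ++i) order[i] = i;
    // process in decreasing order of size, as in the top-down approach
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return s[a].size() > s[b].size(); });
    std::unordered_map<int,int> owner;  // activity -> last processed set with it
    for (int i : order) {
        for (int x : s[i]) {
            auto it = owner.find(x);
            if (it == owner.end()) continue;
            int k = it->second;     // k shares x with i and |S_k| >= |S_i|
            if (!std::includes(s[k].begin(), s[k].end(),
                               s[i].begin(), s[i].end()))
                return {i, k};      // S_i not included in S_k => good pair
        }
        for (int x : s[i]) owner[x] = i;
    }
    return {-1, -1};
}
```

If a pair is returned, it is good by the size argument above; if $(-1,-1)$ is returned, the processed sets form a laminar family, so no good pair exists.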
|
[
"greedy",
"sortings",
"trees"
] | 2,200
| null |
1949
|
G
|
Scooter
|
The Czech Technical University campus consists of $n$ buildings, indexed from $1$ to $n$. In each building, there can be a math class scheduled, or a computer science class, or neither (but not both). Additionally, in each building, there is at most one professor, and each professor is either an expert in mathematics or in computer science.
As an intern at University Express Inc., your job is to quickly transport the professors to their classes. For this, you have been granted a brand new two-person scooter, able to accommodate yourself, plus at most one passenger.
Initially, you are the only person on the scooter. When you arrive at a building, you may drop off or pick up professors to/from that building. However, in order to improve the efficiency of your task, you are allowed to drive to each of the $n$ buildings \textbf{at most once}, in the order of your choice (you can also decide where to start the itinerary).
After the end of your itinerary, in each building where a math class is scheduled, there must be a professor expert in math, and in each building where a computer science class is scheduled, there must be a professor expert in computer science.
Devise an itinerary that makes it possible to teach all classes.
|
First and foremost, let's notice that we can ignore professors that are already in a proper building, as well as empty buildings (in essence, all positions $i$ such that $c_i = p_i$). Moreover, if there are no positions $i$ such that $c_i \neq p_i$, then we can simply answer $\texttt{YES}$ without doing any operations. From this point on, we will assume that for all $i$, we have $c_i \neq p_i$. General idea. Intuitively, we would want to iteratively pick up the professor from some building and drive them to a building where a corresponding class is held. However, we must make sure not to visit the same place twice. Multiple greedy solutions might solve this, but we also have to take into account implementation effort and the number of edge cases to consider. In this sense, we advise the reader not to rush into writing an implementation just yet, and instead try to simplify the solution as much as possible on paper. An easier subproblem. The issue with the problem currently is that there is much room for making choices and, consequently, much room for making an erroneous decision. Let's consider the easier subproblem where there are exactly as many professors of a given type as there are classes of that type. Even though one might say that this makes the problem tighter, it actually paradoxically makes it easier! This is mainly because, as tight as it is, there is not much room for bad decisions. A key observation is that this subproblem (and, as it turns out, the original problem as well) is $\textbf{impossible}$ if there are no buildings where no class is being held, as there is simply no way of returning to the starting position to drop off the corresponding professor (note, again, that we are ignoring positions $i$ with $c_i = p_i$). Therefore, a greedy strategy that works for this subproblem is simply repeatedly choosing a professor from a building where no class is being held and driving them to one of the buildings where their class is being held. 
If one prioritizes buildings where there are professors over those where there are none, the key condition never becomes unsatisfied, and thus the problem is solved. Solving the original problem. Going back, let's solve the original problem. We can do that by simply reducing it to the easier version, by removing "excess" professors from some of the buildings. At the same time, note that we have to make sure the key observation above remains valid throughout the removal process. One easy and correct way of doing that is, yet again, the greedy approach of prioritizing the removal of professors from buildings where classes are held over buildings with no classes. The final complexity of the solution is $\mathcal{O}(n)$.
|
[
"graphs",
"greedy"
] | 2,300
| null |
1949
|
H
|
Division Avoidance
|
A newly discovered organism can be represented as a set of cells on an infinite grid. There is a coordinate system on the grid such that each cell has two integer coordinates $x$ and $y$. A cell with coordinates $x=a$ and $y=b$ will be denoted as $(a, b)$.
Initially, the organism consists of a single cell $(0, 0)$. Then zero or more divisions can happen. In one division, a cell $(a, b)$ is removed and replaced by two cells $(a+1, b)$ and $(a, b+1)$.
For example, after the first division, the organism always consists of two cells $(1, 0)$ and $(0, 1)$, and after the second division, it is either the three cells $(2, 0)$, $(1, 1)$ and $(0, 1)$, or the three cells $(1, 0)$, $(1, 1)$ and $(0, 2)$.
\textbf{A division of a cell $(a, b)$ can only happen if the cells $(a+1, b)$ and $(a, b+1)$ are not yet part of the organism.} For example, the cell $(1, 0)$ cannot divide if the organism currently consists of the three cells $(1, 0)$, $(1, 1)$ and $(0, 2)$, since the cell $(1, 1)$ that would be one of the results of this division is already part of the organism.
You are given a set of forbidden cells ${(c_i, d_i)}$. Is it possible for the organism to contain none of those cells after zero or more divisions?
|
Looking at the solution for the sample, most of the difficulty seems to come from the condition in bold: A division of a cell $(a, b)$ can only happen if the cells $(a+1, b)$ and $(a, b+1)$ are not yet part of the organism. It forces us to clean up space, potentially recursively, before a division that we want to happen can actually happen. What happens if we remove this condition, instead allowing multiple copies of a cell with the same coordinates in the organism? Then of course it is possible to avoid any finite set of forbidden cells, so this version of the problem is not very useful. However, consider the following modification instead: we allow multiple copies of a cell with the same coordinates temporarily, but the final state of the organism must have at most one copy of any cell, and of course zero copies of the forbidden cells. It turns out that this modification is equivalent to the original problem! To see why, consider a sequence of operations that starts from $(0,0)$ and ends in a valid state as described above, but has multiple copies of some cells in its intermediate states. We claim that it is possible to reorder its sequence of operations in such a way that all operations stay valid, but we never have multiple copies of the same cell. Notice that reordering the operations does not change the end state, because each operation can be seen as subtracting 1 from the number of occurrences of the cell $(a,b)$ and adding 1 to the numbers of occurrences of the cells $(a+1,b)$ and $(a,b+1)$, and such subtractions and additions clearly commute. To avoid getting multiple copies of the same cell, we can repeatedly do the following to produce a reordering: from all operations that are yet to be done in the reference sequence of operations, we choose the operation $(a,b)$ such that $(a,b)$ is part of the organism and $a+b$ is maximized. 
Maximizing $a+b$ guarantees that we are not going to have two copies of the same cell: when we decide to divide $(a,b)$, we cannot have $(a+1,b)$ in the organism, as otherwise $(a+1,b)$ would also be one of the operations yet to be done (since the final state has at most one cell at $(a+1,b)$), so we should have chosen $(a+1,b)$ as the operation to do over $(a,b)$ since $a+1+b>a+b$, which is a contradiction. Now we know that we can focus on what operations to do, and no longer care about the order in which we do them. It means that we can choose the order that suits us best, and as long as we do only necessary operations, we will always arrive at exactly the same result. It is natural to go by diagonals: first consider all operations with $a+b=0$, then all operations with $a+b=1$, then all operations with $a+b=2$, and so on. The operations with $a+b=z$ only create new cells with $a+b=z+1$, so those operations are independent within each value of $z$, and we never have to return to smaller values of $z$. This leads to the following solution: we go in increasing order of $z$, and before processing a given value of $z$ we know how many copies of each cell with $a+b=z$ we have, as an array $d_{z,a}$. Now for each forbidden cell $(a,b)$ with $a+b=z$ we have to apply division to it $d_{z,a}$ times, which means that we have to add $d_{z,a}$ to $d_{z+1,a}$ and to $d_{z+1,a+1}$. Similarly, for each non-forbidden cell we have to apply division to it $\max(0,d_{z,a}-1)$ times. We continue while the array $d_z$ has at least one non-zero value. If this process terminates, the answer is $\texttt{YES}$. If it runs infinitely, the answer is $\texttt{NO}$. For example, in the first sample case we get the following arrays: $d_0 = (1)$ $d_1 = (1, 1)$ $d_2 = (1, 2, 1)$ $d_3 = (0, 2, 2, 0)$ $d_4 = (0, 1, 2, 1, 0)$ $d_5 = (0, 0, 1, 1, 0, 0)$ $d_6 = (0, 0, 0, 0, 0, 0, 0)$ However, this solution is still not practical: how do we check if the process runs infinitely? 
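Before tackling that question, the propagation itself is easy to simulate with a hard cap on the number of diagonals, which is a convenient way to look at small examples (an illustrative sketch; forbidden cells are passed as $(a, b)$ pairs):

```cpp
#include <cassert>
#include <vector>
#include <set>
#include <utility>
#include <algorithm>

// Propagates the per-diagonal multiplicity arrays d_z described above:
// a forbidden cell (a, b) on diagonal z = a + b divides all d copies,
// a non-forbidden cell divides max(0, d - 1) copies (one copy may stay).
// Returns the successive arrays d_0, d_1, ... until one is all zeros or
// the cap on the number of diagonals is reached.
std::vector<std::vector<long long>> propagate(
        const std::set<std::pair<long long,long long>>& forbidden,
        int max_diagonals) {
    std::vector<std::vector<long long>> ds;
    std::vector<long long> d = {1};                 // d_0: one cell at (0, 0)
    for (long long z = 0; z < max_diagonals; ++z) {
        ds.push_back(d);
        bool all_zero = true;
        for (long long v : d) if (v != 0) { all_zero = false; break; }
        if (all_zero) break;
        std::vector<long long> next(z + 2, 0);
        for (long long a = 0; a <= z; ++a) {
            long long times = forbidden.count({a, z - a})
                              ? d[a]                 // must vacate this cell
                              : std::max(0LL, d[a] - 1);
            next[a] += times;
            next[a + 1] += times;
        }
        d = next;
    }
    return ds;
}
```

Running it on the forbidden set $\{(0,0),(0,1),(1,0),(1,1)\}$ reproduces exactly the arrays $d_0, \dots, d_6$ listed above.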
Looking at a few examples can help spot a pattern: if we ever get a 3, then the process never stops. Having identified this pattern, it is relatively easy to prove. Since we always add the same number to two adjacent elements of $d_{z+1}$, getting a 3 in it means that one of its adjacent numbers will be at least 2. And then, even if there are no forbidden cells at all anymore, $3, 2$ becomes $2, 3, 1$ on the next diagonal, so we have a 3 adjacent to a 2 again, and this will repeat forever. What if we never get a 3? Then it turns out that as soon as we pass all forbidden cells, we will start going down towards all zeros and eventually get there. With no forbidden cells, a 1 does not propagate to the next diagonal at all, and a 2 propagates two adjacent +1s to the next diagonal. So if we have a sequence that looks like $(2, 2, \dots, 2)$ with $k$ 2s, on the next diagonal we will get $(1, 2, 2, \dots, 2, 1)$ with $k-1$ 2s, so after $k$ diagonals the 2s will disappear. This argument also demonstrates that the process will converge after $O(n)$ steps, since every forbidden cell will create a constant amount of additional 2s, and without forbidden cells the number of 2s decreases by at least 1 per step. Therefore we finally have a complete solution, albeit one that runs in $O(n^2)$ which is too slow for $n=10^6$. In order to make it faster, we have to take a closer look at the changes of the array $d$. Suppose the array looks like $(0, 0, \dots, 0, 1, 2, 2, \dots, 2, 1, 0, 0, \dots, 0)$ for a certain diagonal. How is it going to look for the next diagonal depending on the locations of the forbidden cells on this diagonal? If a cell with a 0 is forbidden, this changes nothing. If a cell with a 2 is forbidden, then we are going to get a 3 on the next diagonal, and the answer will be $\texttt{NO}$. 
So the only interesting question is whether the two cells with 1 are forbidden: If none of the 1s are forbidden, then we get the same structure on the next diagonal, but with the left 1 moving one position to the right (and therefore the number of 2s decreases by one). If only the left 1 is forbidden, then we still get the same pattern on the next diagonal, but the left 1 stays in the same position (and therefore the number of 2s is unchanged). If only the right 1 is forbidden, then it moves one position to the right on the next diagonal (and therefore the number of 2s is also unchanged). Finally, if both 1s are forbidden, then we still get the same pattern on the next diagonal, but with the number of 2s actually increasing by one. This suggests the following idea: when there are no 3s or higher on a diagonal, and when not all numbers are 0s, the array $d$ for it always has the form $(0, 0, \dots, 0, 1, 2, 2, \dots, 2, 1, 0, 0, \dots, 0)$: exactly two 1s, zero or more 2s in between the 1s, and 0s on the outside. This is not entirely correct: in the sample discussed above there is also the case of $(0, 0, \dots, 0, 2, 2, 0, 0, \dots, 0)$: exactly two 2s surrounded by all 0s. This pattern always appears after $(0, 0, \dots, 0, 1, 2, 1, 0, 0, \dots, 0)$ when the 2 is forbidden, and it is actually the only situation where a 2 can be forbidden without getting a 3. It turns out that this is the only special case, and we can easily prove by induction that we always have one of the patterns mentioned above. Therefore as we go in increasing order of $z$, for the diagonal $a+b=z$ we can simply store two integers and a boolean: the position of the left 1, the position of the right 1, and whether we have the special $0, 2, 2, 0$ case. We need to find all forbidden cells on the given diagonal, which we can achieve by sorting the forbidden cells by the sum of the coordinates and keeping a pointer into the sorted array as we increase $z$. 
After that we can determine the state of the next diagonal in $O(1)$, so the total running time of this solution is $O(n\log n)$ if we use normal sorting, or $O(n)$ if we use counting/radix sorting (we can use counting sort since we only care about the first $O(n)$ diagonals). The time limit in this problem was pretty tight, so even the $O(n)$ solution had to be implemented with care. The reason for this was that the $O(n^2)$ solution mentioned above lends itself perfectly to optimizations. The only quadratic part is a couple of small element-wise inner loops over the diagonal arrays. The C++ compilers are very good at vectorizing such code using SIMD instructions (which you can enable using pragmas). But we can also help the compiler because we know that the values in our arrays are always only 0, 1 or 2, so we can actually represent our array as two bitsets. We could not get the compiler to efficiently vectorize std::bitset, but it turns out that gcc supports a relatively new vector-extension syntax that defines a new type which works like an array of 4 uint64's but supports element-wise operations, including bitwise operations, that automatically use vectorization. Using this trick made the $O(n^2)$ solution mentioned above run in under 7 seconds for $n=10^6$; therefore, to make sure this solution does not pass, we had to set the tight time limit. One other potentially relevant remark for implementing the solution: we have mentioned above that we only care about the first $O(n)$ diagonals, but what is the constant hidden in the $O$-notation? Initially one might think that a case with the following forbidden cells is the worst: $(0, 0), (1, 0), (2, 0), \dots, (\lceil\frac{n-1}{2}\rceil, 0), (0, 1), (0, 2), \dots, (0, \lfloor\frac{n-1}{2}\rfloor)$. The final state of the organism in this case is a big rectangle (so it also catches solutions that assume we only do a linear number of divisions), and we have $n+1$ nonzero diagonals in this case. 
However, the $0, 2, 2, 0$ exception from the solution above allows us to construct a testcase that is twice as nasty: $(0, 0), (0, 1), (1, 0), (1, 1), (2, 2), \dots, (n-3, n-3)$. In this case the array $d$ constantly changes from $1, 2, 1$ to $0, 2, 2, 0$ and back, and we get $2n-2$ nonzero diagonals. So if your solution hardcodes the number of diagonals to process, you need to go as high as $2n-2$. For more context about the origins of this problem and related topics you can check out the following paper: F. Chung, R. Graham, J. Morrison, A. Odlyzko, Pebbling a Chessboard, Amer. Math. Monthly 102 (1995), pp. 113-123.
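For illustration, the gcc vector-extension syntax referenced above looks roughly as follows (our reconstruction based on the description, not the authors' exact snippet; it is gcc/clang specific):

```cpp
#include <cassert>
#include <cstdint>

// u64x4 behaves like an array of four uint64's; arithmetic and bitwise
// operators act lane by lane, which the compiler lowers to SIMD
// instructions.
typedef uint64_t u64x4 __attribute__((vector_size(32)));

// Element-wise bitwise OR of two 256-bit values.
u64x4 demo_or() {
    u64x4 a = {1, 2, 3, 4};
    u64x4 b = {4, 3, 2, 1};
    return a | b;            // lane-wise: {1|4, 2|3, 3|2, 4|1}
}

// Helper to read one lane (vector types support subscripting in gcc).
uint64_t lane(u64x4 v, int i) { return v[i]; }
```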
|
[
"greedy",
"math"
] | 3,100
| null |
1949
|
I
|
Disks
|
You are given $n$ disks in the plane. The center of each disk has integer coordinates, and the radius of each disk is a positive integer. No two disks overlap in a region of positive area, but it is possible for disks to be tangent to each other.
Your task is to determine whether it is possible to change the radii of the disks in such a way that:
- Disks that were tangent to each other remain tangent to each other.
- No two disks overlap in a region of positive area.
- The sum of all radii strictly decreases.
The new radii are allowed to be arbitrary positive real numbers. The centers of the disks cannot be changed.
|
To start, we build the graph that describes the tangency relation between the disks: each disk is represented by a node in the graph, and an edge connects two nodes if and only if the corresponding disks are tangent. Constructing this graph can be done in time $O(n^2)$ by checking all pairs of disks. In fact, it can be done in faster than quadratic time, but this is not necessary to solve the problem with the given constraints. Suppose that you change the radii of the disks by real numbers $\delta_1, \dots, \delta_n$. In order to maintain tangency, we must have $\delta_i + \delta_j = 0$ whenever the $i$-th and $j$-th disks are initially tangent. In fact, this condition is also sufficient, provided that $\delta_1, \dots, \delta_n$ are small enough in absolute value (to avoid overlaps and to keep all the radii positive). Fix a connected component of the tangency graph, and fix any node $i$ in this component. The value $\delta_i$ determines the values of $\delta_j$ for all $j$ in the same connected component, and each $\delta_j$ has to be equal to either $\delta_i$ or $-\delta_i$. If the connected component is not bipartite, it contains a cycle of odd length, and such a cycle shows that $\delta_i = -\delta_i$ and so $\delta_i=0$. On the other hand, if the connected component is bipartite, then it is possible to assign any real value to $\delta_i$. Color the nodes of such a bipartite connected component black and white, so that every edge connects a white node and a black node; use white to color the node $i$. The sum of the radii in this connected component changes by $\delta_i \cdot k$, where $k$ is the difference between the number of white nodes and the number of black nodes. In order to strictly decrease the sum of the radii, we need a different number of black and white nodes ($k \neq 0$); if this happens, we can pick $\delta_i$ to be any sufficiently small real number such that $\delta_i \cdot k < 0$. 
To summarize, the answer is YES if and only if there is at least one connected component which is bipartite and has a different number of white and black nodes. This condition can be checked in time $O(n)$.
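A sketch of the final check via BFS 2-coloring (assuming the tangency graph has already been built as adjacency lists):

```cpp
#include <cassert>
#include <vector>
#include <queue>

// Answer for the Disks problem given the tangency graph: YES iff some
// connected component is bipartite with unequal part sizes.  Standard
// BFS 2-coloring, linear in the graph size.
bool can_decrease(int n, const std::vector<std::vector<int>>& adj) {
    std::vector<int> color(n, -1);
    for (int s = 0; s < n; ++s) {
        if (color[s] != -1) continue;
        int cnt[2] = {0, 0};          // sizes of the two color classes
        bool bipartite = true;
        std::queue<int> q;
        color[s] = 0; cnt[0]++; q.push(s);
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int v : adj[u]) {
                if (color[v] == -1) {
                    color[v] = color[u] ^ 1;
                    cnt[color[v]]++;
                    q.push(v);
                } else if (color[v] == color[u]) {
                    bipartite = false;   // odd cycle forces delta_i = 0
                }
            }
        }
        if (bipartite && cnt[0] != cnt[1]) return true;
    }
    return false;
}
```

Note that an isolated disk already yields YES: its component is trivially bipartite with parts of sizes 1 and 0, matching the fact that its radius can simply be shrunk.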
|
[
"dfs and similar",
"geometry",
"graph matchings",
"graphs"
] | 1,800
| null |
1949
|
J
|
Amanda the Amoeba
|
\textbf{This problem has an attachment. You can use it to simulate and visualize the movements of the amoeba.}
Amoeba Amanda lives inside a rectangular grid of square pixels. Her body occupies some of these pixels. Other pixels may be either free or blocked. Amanda moves across the grid using the so-called amoeboid movement. In each step of such a movement, her body first shrinks by one pixel (one pixel of the body is removed and becomes free), and then grows at a different place (one previously-free pixel is added to the body).
To prevent structural damage, Amanda's body always occupies a connected region of pixels, which means that any pair of pixels forming the body can be connected by a sequence of adjacent pixels without ever leaving the body. Two pixels are considered adjacent if they share a common side (each pixel has at most 4 neighbours). The body remains connected even during the movement, including the moment after removing a pixel and before adding another one.
Your task is to help Amanda find her way around. Given her initial position and desired final position, suggest a sequence of valid moves leading from the former to the latter.
\begin{center}
{\small Illustration of sample $1$: The filled shape is the initial position, the dotted region is the final position.}
\end{center}
|
To solve this problem, we first find a path $P_{AB}$ connecting two pixels $A$ and $B$ such that: pixel $A$ is part of the initial position; pixel $B$ is part of the final position; and all inner pixels of $P_{AB}$ (let us denote their sequence as $Q_{AB}$) are free pixels, lying neither inside the initial position nor the final one. Note that it is generally allowed to select two neighbouring pixels, or even to have $A=B$, in which case $Q_{AB}$ will be empty. For the solution to work, it is not important how such a path is found; one possible way is to find the shortest path between the initial and final position. If no such path exists, the problem has no solution, as it means the initial and final position are completely separated by blocked cells. Otherwise, the final position is reachable and the problem has a solution, which will be proven by describing its construction. As the next step, we will construct a tree $T_A$ rooted in pixel $A$ and containing all pixels of the initial position (and no other pixels), and also a tree $T_B$ rooted in pixel $B$ and containing all pixels of the final position (and only them). Again, it is not important what the trees look like, as long as they contain all appropriate pixels. It is easy to construct them using either DFS or BFS. The solution then consecutively removes pixels of $T_A$ bottom-up, starting from leaves and proceeding to parents when they become leaves. Such removed pixels are first added to fill the sequence $Q_{AB}$, and then to tree $T_B$ top-down, starting with $B$ and going to lower nodes after their parents were added. In the end, the pixels of $Q_{AB}$ need to be removed (and added to the bottom of $T_{B}$). After that, the amoeba will occupy the exact final position. There is one more important issue to cope with. The trees $T_A$ and $T_B$ are not necessarily disjoint, which means that some (or even many) pixels may appear in both. 
While removing pixels from $T_A$ and adding them to $T_B$, the following situations may occur: Some pixel should be removed from $T_A$, but we know it is also part of $T_B$ and has not been added yet. In this case, the algorithm proceeds just normally, removing the pixel. It will later be added again to the body, which adds to the total number of moves, but there is no requirement to minimize that number. Some pixel should be added to $T_B$, but it is still part of the body, as it was not yet removed from $T_A$. In such a case, the pixel is skipped (and stays a part of the body to keep it connected) and it has to be marked as such. When the algorithm later reaches that pixel as part of $T_A$, it must not be removed anymore, and the algorithm proceeds to other pixels in $T_A$. These steps keep the amoeba connected at all times, and in the end, all pixels of $T_B$ are occupied, which means the final position has been reached. Although the algorithm may remove some pixels and then add the same pixels again, this happens at most once for each of the pixels. In fact, each pixel is removed at most once and also added at most once, which means the maximal number of steps produced by the algorithm is bounded by the total number of pixels in the picture, $r\cdot c$, which is well below the required limit.
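The tree-building step can be sketched as follows (an illustrative sketch covering only the BFS ordering, not the collision handling): the BFS order of a body gives the top-down addition order for $T_B$, and its reverse gives a bottom-up removal order for $T_A$ in which every pixel is removed only after all of its BFS-tree descendants, so the remaining body stays connected.

```cpp
#include <cassert>
#include <vector>
#include <queue>
#include <set>
#include <utility>

// Returns the BFS order of a connected pixel set rooted at `root`.
// Pixels are (row, column) pairs; neighbours share a side.
std::vector<std::pair<int,int>> bfs_order(
        const std::set<std::pair<int,int>>& body, std::pair<int,int> root) {
    std::vector<std::pair<int,int>> order;
    std::set<std::pair<int,int>> seen = {root};
    std::queue<std::pair<int,int>> q;
    q.push(root);
    const int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
    while (!q.empty()) {
        auto p = q.front(); q.pop();
        order.push_back(p);              // parents always precede children
        for (int k = 0; k < 4; ++k) {
            std::pair<int,int> nb = {p.first + dr[k], p.second + dc[k]};
            if (body.count(nb) && !seen.count(nb)) {
                seen.insert(nb);
                q.push(nb);
            }
        }
    }
    return order;
}
```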
|
[
"graphs",
"implementation",
"trees",
"two pointers"
] | 2,600
| null |
1949
|
K
|
Make Triangle
|
You are given $n$ positive integers $x_1, x_2, \ldots, x_n$ and three positive integers $n_a, n_b, n_c$ satisfying $n_a+n_b+n_c = n$.
You want to split the $n$ positive integers into three groups, so that:
- The first group contains $n_a$ numbers, the second group contains $n_b$ numbers, the third group contains $n_c$ numbers.
- Let $s_a$ be the sum of the numbers in the first group, $s_b$ be the sum in the second group, and $s_c$ be the sum in the third group. Then $s_a, s_b, s_c$ are the sides of a triangle with positive area.
Determine if this is possible. If this is possible, find one way to do so.
|
We will denote these groups as $A, B, C$ correspondingly. WLOG $n_a \leq n_b \leq n_c$, and $x_1 \leq x_2 \leq \ldots \leq x_n$. Let $x_1 + x_2 + \ldots + x_n = S$. We just want the sum in each group to be smaller than $\frac{S}{2}$. Let's note some obvious conditions for the construction to be possible: The largest group is not too large: $x_1 + x_2 + \ldots + x_{n_c} < \frac{S}{2}$; The largest item is not too large: $x_n + (x_1 + x_2 + \ldots + x_{n_a - 1}) < \frac{S}{2}$. It turns out that these conditions are sufficient! We will prove an even stronger statement. Lemma. Assume we have already placed some numbers, each of which is $\geq$ any remaining number. Assume the current sum in group $g$ is $S_g$, the number of empty spots in group $g$ is $n'_g$, and there are $n'_a + n'_b + n'_c = n'$ numbers remaining, $x_1 \leq x_2 \leq \ldots \leq x_{n'}$. Then, it's possible to distribute the remaining numbers if and only if the following conditions hold: No group is too large: for any group $g$, $S_g + x_1 + x_2 + \ldots + x_{n'_g} < \frac{S}{2}$. The largest item is not too large: there exists a group $g$ with $n'_g>0$, such that $S_g + x_{n'} + (x_1 + x_2 + \ldots + x_{n'_g - 1}) < \frac{S}{2}$. Proof. These conditions are obviously necessary; let's show that they are sufficient. Forget about $n_a \leq n_b \leq n_c$ for now. WLOG we can put the largest element $x_{n'}$ in group $A$, so $S_a + x_{n'} + (x_1 + x_2 + \ldots + x_{n'_a - 1}) < \frac{S}{2}$. Put the remaining numbers in the other two groups arbitrarily. If both have sum less than $\frac{S}{2}$, we are good! Otherwise, WLOG, the sum in group $B$ is at least $\frac{S}{2}$. Now, start swapping free elements in $B$ and $C$ one by one. If at some point the sums in both $B$ and $C$ become smaller than $\frac{S}{2}$, we are good. 
Note that it's not possible that at one moment the sum in $B$ is $\geq \frac{S}{2}$, and at the next the sum in $C$ is $\geq \frac{S}{2}$: the sum in $C$ currently is at most $S - \frac{S}{2} - x_{n'}$, and it will increase by at most $x_{n'}$ and decrease by at least $1$ after the swap, so it will still be at most $S - \frac{S}{2} - 1 < \frac{S}{2}$. So, the only possibility is that the sum in $B$ is still $\geq \frac{S}{2}$. Now, note that if we swap free elements in $A$ and $C$, the sum in $B$ is still $\geq \frac{S}{2}$. Also, note that if $n'_c = 0$, that is, there are no free elements in $C$ remaining, then we swap free elements in $A$ and $B$, and we will either get a valid partition, or the sum in $B$ will still be $\geq \frac{S}{2}$, as it's not possible for the sum in $A$ to become $\geq \frac{S}{2}$ after one swap, for the same reason. So, here is the idea: we will keep swapping elements so that the smallest $n'_b$ elements end up in $B$. We will get a contradiction, since we know that $S_B + (x_1 + x_2 + \ldots + x_{n'_b}) < \frac{S}{2}$. This lemma also gives us a way to find a construction: add elements from largest to smallest. When deciding where to put an element, put it in any group such that the conditions still hold after we put it there. We can check these conditions in $O(1)$ after we have sorted the $x_i$s and precomputed their prefix sums. Runtime: $O(n\log{n})$.
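The initial feasibility test from the lemma can be sketched as follows (an illustrative sketch; comparisons are doubled to stay in integer arithmetic, and the actual construction is omitted):

```cpp
#include <cassert>
#include <vector>
#include <algorithm>
#include <numeric>

// Checks the two conditions that are proven necessary and sufficient:
// with x sorted increasingly and na <= nb <= nc,
//   (1) x_1 + ... + x_{nc} < S/2           (largest group not too large)
//   (2) x_n + x_1 + ... + x_{na-1} < S/2   (largest item not too large)
bool can_make_triangle(std::vector<long long> x,
                       long long na, long long nb, long long nc) {
    std::sort(x.begin(), x.end());
    long long sizes[3] = {na, nb, nc};
    std::sort(sizes, sizes + 3);                 // enforce na <= nb <= nc
    long long S = std::accumulate(x.begin(), x.end(), 0LL);
    long long small_nc = std::accumulate(x.begin(), x.begin() + sizes[2], 0LL);
    if (2 * small_nc >= S) return false;         // condition (1) fails
    long long small_na1 =
        std::accumulate(x.begin(), x.begin() + (sizes[0] - 1), 0LL);
    if (2 * (x.back() + small_na1) >= S) return false;  // condition (2) fails
    return true;
}
```

For example, $\{2,2,1,1\}$ with group sizes $(1,1,2)$ is feasible (sums $2,2,2$), while $\{3,1,1,1\}$ with sizes $(1,2,1)$ is not, because the item $3$ forces some group sum to reach $\frac{S}{2}$.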
|
[
"constructive algorithms",
"math"
] | 2,800
| null |