Dataset columns: contest_id (string, length 1–4), index (string, 43 classes), title (string, length 2–63), statement (string, length 51–4.24k), tutorial (string, length 19–20.4k), tags (list, length 0–11), rating (int64, 800–3.5k), code (string, length 46–29.6k).
1512
E
Permutation by Sum
A permutation is a sequence of $n$ integers from $1$ to $n$, in which all the numbers occur exactly once. For example, $[1]$, $[3, 5, 2, 1, 4]$, $[1, 3, 2]$ are permutations, and $[2, 3, 2]$, $[4, 3, 1]$, $[0]$ are not. Polycarp was given four integers $n$, $l$, $r$ ($1 \le l \le r \le n)$ and $s$ ($1 \le s \le \frac{n (n+1)}{2}$) and asked to find a permutation $p$ of numbers from $1$ to $n$ that satisfies the following condition: - $s = p_l + p_{l+1} + \ldots + p_r$. For example, for $n=5$, $l=3$, $r=5$, and $s=8$, the following permutations are suitable (not all options are listed): - $p = [3, 4, 5, 2, 1]$; - $p = [5, 2, 4, 3, 1]$; - $p = [5, 2, 1, 3, 4]$. But, for example, there is no permutation suitable for the condition above for $n=4$, $l=1$, $r=1$, and $s=5$.Help Polycarp, for the given $n$, $l$, $r$, and $s$, find a permutation of numbers from $1$ to $n$ that fits the condition above. If there are several suitable permutations, print any of them.
It is easy to show that if we choose $k$ numbers from a permutation of length $n$, then the minimum possible sum of the $k$ numbers is $\frac{k(k+1)}{2}$, the maximum is $\frac{k(2n+1-k)}{2}$, and every sum between them is achievable (that is, you can choose exactly $k$ of the numbers $1,\ldots,n$ so that their sum equals any desired value in this range). This fact allows us to implement the following greedy solution. Denote $low(k)=\sum\limits_{i=1}^{k} i = \frac{k(k+1)}{2}$ and $high(n, k)=\sum\limits_{i=n-k+1}^{n} i = \frac{k(2n+1-k)}{2}$, and let $k=r-l+1$. We consider the numbers $i=n,n-1,\ldots,1$ and decide for each whether to put it in the segment $[l, r]$: if $k>0$, $high(i, k) \ge s$ and $s-i \ge low(k-1)$, then we put the number $i$ in the segment $[l, r]$, decrease $s$ by $i$, and decrease $k$ by $1$; otherwise, we do not put $i$ in the segment. In the end, if $s=0$, then we have chosen $r-l+1$ numbers with the required sum, and the remaining numbers can be arranged outside the segment in any order. If at the end $s>0$, then there is no way to select $r-l+1$ numbers from $1,2,\ldots,n$ with sum $s$.
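The greedy described above can be sketched as a standalone function (a minimal sketch; `pickSegment` is a name chosen for illustration — the reference solution for this row uses a different constructive approach):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Editorial greedy: scan i = n..1 and take i into the segment while the
// remaining sum stays inside [low(k-1), high(i-1, k-1)].
// Returns the chosen numbers, or an empty vector if no selection exists.
vector<int> pickSegment(int n, int k, int s) {
    auto low  = [](long long k) { return k * (k + 1) / 2; };
    auto high = [](long long n, long long k) { return k * (2 * n + 1 - k) / 2; };
    vector<int> chosen;
    for (int i = n; i >= 1; i--) {
        if (k > 0 && high(i, k) >= s && s - i >= low(k - 1)) {
            chosen.push_back(i);  // put i into the segment [l, r]
            s -= i;
            k--;
        }
    }
    if (s != 0) return {};  // leftover sum: impossible
    return chosen;
}
```

For $n=5$, $k=3$, $s=8$ this picks $\{5, 2, 1\}$, matching one of the sample permutations; the numbers outside the segment can then be placed in any order.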
[ "brute force", "greedy", "math" ]
1,600
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

void solve() {
    int n, l, r, s;
    cin >> n >> l >> r >> s;
    l--; r--;
    for (int first = 1; first + (r - l) <= n; first++) {
        int sum = 0;
        for (int i = l; i <= r; i++) {
            sum += first + (i - l);
        }
        if (sum <= s && s - sum <= r - l + 1) {
            int needAdd = r - (s - sum) + 1;
            vector<int> ans(n);
            set<int> non_blocked;
            for (int i = 1; i <= n; i++) {
                non_blocked.insert(i);
            }
            for (int i = l; i <= r; i++) {
                ans[i] = first + (i - l);
                if (i >= needAdd) {
                    ans[i]++;
                }
                non_blocked.erase(ans[i]);
            }
            if (ans[r] > n) {
                continue;
            }
            non_blocked.erase(ans[r]);
            for (int i = 0; i < l; i++) {
                ans[i] = *non_blocked.begin();
                non_blocked.erase(non_blocked.begin());
            }
            for (int i = r + 1; i < n; i++) {
                ans[i] = *non_blocked.begin();
                non_blocked.erase(non_blocked.begin());
            }
            for (int i : ans) {
                cout << i << " ";
            }
            cout << "\n";
            return;
        }
    }
    cout << "-1\n";
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    cout.tie(nullptr);
    int n;
    cin >> n;
    while (n--) {
        solve();
    }
}
1512
F
Education
Polycarp is wondering about buying a new computer, which costs $c$ tugriks. To do this, he wants to get a job as a programmer in a big company. There are $n$ positions in Polycarp's company, numbered starting from one. An employee in position $i$ earns $a[i]$ tugriks every day. The higher the position number, the more tugriks the employee receives. Initially, Polycarp gets a position with the number $1$ and has $0$ tugriks. Each day Polycarp can do one of two things: - If Polycarp is in the position of $x$, then he can earn $a[x]$ tugriks. - If Polycarp is in the position of $x$ ($x < n$) and has at least $b[x]$ tugriks, then he can spend $b[x]$ tugriks on an online course and move to the position $x+1$. For example, if $n=4$, $c=15$, $a=[1, 3, 10, 11]$, $b=[1, 2, 7]$, then Polycarp can act like this: - On the first day, Polycarp is in the $1$-st position and earns $1$ tugrik. Now he has $1$ tugrik; - On the second day, Polycarp is in the $1$-st position and moves to the $2$-nd position. Now he has $0$ tugriks; - On the third day, Polycarp is in the $2$-nd position and earns $3$ tugriks. Now he has $3$ tugriks; - On the fourth day, Polycarp is in the $2$-nd position and is transferred to the $3$-rd position. Now he has $1$ tugrik; - On the fifth day, Polycarp is in the $3$-rd position and earns $10$ tugriks. Now he has $11$ tugriks; - On the sixth day, Polycarp is in the $3$-rd position and earns $10$ tugriks. Now he has $21$ tugriks; - Six days later, Polycarp can buy himself a new computer. Find the minimum number of days after which Polycarp will be able to buy himself a new computer.
Since the array $a$ is non-decreasing, if we want to reach position $x$ at some point, it is best to reach it as early as possible: by getting there earlier, we will earn no less money. Therefore, the solution looks like this: rise to some position and then earn money for the computer at that position. Iterate over the position number and use simple formulas to compute the number of days needed to rise to that position and the number of days then needed to save up for the computer. Among all the options, choose the minimum.
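The per-position formulas can be sketched as follows (a minimal sketch of the editorial's idea; `minDays` is a name chosen for illustration — `cur` is days already spent reaching position $i$, `bal` the tugriks in hand at that moment):

```cpp
#include <bits/stdc++.h>
using namespace std;

// For each position i, finishing there needs ceil((c - bal) / a[i]) more
// earning days; moving on costs ceil((b[i] - bal) / a[i]) earning days
// plus one day for the course itself.
long long minDays(long long c, const vector<long long>& a,
                  const vector<long long>& b) {
    long long ans = LLONG_MAX, cur = 0, bal = 0;
    int n = (int)a.size();
    for (int i = 0; i < n; i++) {
        long long earnDays = max(0LL, (c - bal + a[i] - 1) / a[i]);
        ans = min(ans, cur + earnDays);       // option: stop rising here
        if (i + 1 < n) {
            long long need = max(0LL, (b[i] - bal + a[i] - 1) / a[i]);
            cur += need + 1;                  // earning days + the move day
            bal += a[i] * need - b[i];        // pay for the course
        }
    }
    return ans;
}
```

On the statement's example ($n=4$, $c=15$, $a=[1,3,10,11]$, $b=[1,2,7]$) this returns $6$, matching the six-day plan.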
[ "brute force", "dp", "greedy", "implementation" ]
1,900
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

void solve() {
    int n, c;
    cin >> n >> c;
    vector<int> a(n);
    vector<int> b(n - 1);
    for (int &e : a) {
        cin >> e;
    }
    for (int &e : b) {
        cin >> e;
    }
    b.push_back(0);
    ll ans = 1e18;
    ll cur = 0;
    ll bal = 0;
    for (int i = 0; i < n; i++) {
        ans = min(ans, cur + max(0ll, c - bal + a[i] - 1) / a[i]);
        ll newDays = max(0ll, b[i] - bal + a[i] - 1) / a[i];
        cur += newDays + 1;
        bal += a[i] * newDays - b[i];
    }
    cout << ans << "\n";
}

int main() {
    int n;
    cin >> n;
    while (n--) {
        solve();
    }
    return 0;
}
1512
G
Short Task
Let us denote by $d(n)$ the sum of all divisors of the number $n$, i.e. $d(n) = \sum\limits_{k | n} k$. For example, $d(1) = 1$, $d(4) = 1+2+4=7$, $d(6) = 1+2+3+6=12$. For a given number $c$, find the minimum $n$ such that $d(n) = c$.
Note that $n \le d(n) \le {10}^7$ (${10}^7$ is the maximum value of $c$ in the problem), so it is enough to calculate the value of $d(n)$ for every $n=1..{10}^7$. To do this, you can use a sieve in the style of the sieve of Eratosthenes and get a solution in $\mathcal{O}({10}^7 \log {10}^7)$. Alternatively, you can use a linear sieve to find the minimum prime divisor of each $n=1..{10}^7$ and use the multiplicativity of the function $d(n)$: $d(a \cdot b) = d(a) \cdot d(b)$ if $\gcd(a, b)=1$.
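The first, simpler variant can be sketched like this (a sketch for a small limit; `divisorSums` is a name chosen for illustration — the reference solution uses the linear-sieve variant instead):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Divisor-sum sieve: for every i, add i to d(j) for each multiple j of i.
// Total work is N/1 + N/2 + ... + N/N = O(N log N) additions.
vector<long long> divisorSums(int N) {
    vector<long long> d(N + 1, 0);
    for (int i = 1; i <= N; i++)
        for (int j = i; j <= N; j += i)
            d[j] += i;  // i divides j, so it contributes to d(j)
    return d;
}
```

With the table built, answering a query $c$ is just a lookup of the smallest $n$ with $d(n)=c$, exactly as in the statement's examples $d(4)=7$ and $d(6)=12$.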
[ "brute force", "dp", "math", "number theory" ]
1,700
#include <bits/stdc++.h>
using namespace std;

const int N = (int) 1e7 + 100;
long long s[N];
int d[N];
int ans[N];

int main() {
    fill(d, d + N, -1);
    d[1] = 1;
    for (int i = 2; i * i < N; i++) {
        if (d[i] == -1) {
            d[i] = i;
            for (int j = i * i; j < N; j += i) {
                if (d[j] == -1) {
                    d[j] = i;
                }
            }
        }
    }
    s[1] = 1;
    for (int i = 2; i < N; i++) {
        if (d[i] == -1) {
            d[i] = i;
            s[i] = i + 1;
        } else {
            int j = i;
            s[i] = 1;
            while (j % d[i] == 0) {
                j /= d[i];
                s[i] = s[i] * d[i] + 1;
            }
            s[i] *= s[j];
        }
    }
    fill(ans, ans + N, -1);
    for (int i = N - 1; i > 0; i--) {
        if (s[i] < N) {
            ans[s[i]] = i;
        }
    }
    int tests;
    cin >> tests;
    while (tests-- > 0) {
        int c;
        cin >> c;
        cout << ans[c] << endl;
    }
    return 0;
}
1513
A
Array and Peaks
A sequence of $n$ integers is called a permutation if it contains all integers from $1$ to $n$ exactly once. Given two integers $n$ and $k$, construct a permutation $a$ of numbers from $1$ to $n$ which has \textbf{exactly} $k$ peaks. An index $i$ of an array $a$ of size $n$ is said to be a peak if $1 < i < n$ and $a_i \gt a_{i-1}$ and $a_i \gt a_{i+1}$. If such permutation is not possible, then print $-1$.
There are many ways to solve this problem. The key idea is to use the $k$ largest elements, from $n$ down to $n-k+1$, as the $k$ peaks. So try constructing the array like this: $1, n, 2, n-1, 3, n-2,..., n-k+1, k+1, k+2,..., n-k$. For this construction to be valid we need $n-k+1 \gt k+1$, which means $2 \cdot k \lt n$. If this is not the case, we output $-1$.
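The construction and a peak check can be sketched together (a sketch; `buildWithPeaks` and `countPeaks` are names chosen for illustration):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Build the editorial's pattern: the k largest values n, n-1, ... sit at
// positions 2, 4, ..., 2k, and everything else is filled ascending.
// Returns an empty vector when 2*k >= n (no such permutation exists).
vector<int> buildWithPeaks(int n, int k) {
    if (2 * k >= n) return {};
    vector<int> a(n + 1, 0);
    int big = n;
    for (int i = 2; i <= 2 * k; i += 2) a[i] = big--;  // peaks
    int small = 1;
    for (int i = 1; i <= n; i++)
        if (a[i] == 0) a[i] = small++;                 // valleys and tail
    return vector<int>(a.begin() + 1, a.end());
}

// Count indices i with a[i-1] < a[i] > a[i+1] (0-based interior positions).
int countPeaks(const vector<int>& a) {
    int c = 0;
    for (size_t i = 1; i + 1 < a.size(); i++)
        c += (a[i] > a[i - 1] && a[i] > a[i + 1]);
    return c;
}
```

For $n=5$, $k=2$ this yields $[1, 5, 2, 4, 3]$ with exactly two peaks, while $n=4$, $k=2$ correctly reports impossibility.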
[ "constructive algorithms", "implementation" ]
800
#include<bits/stdc++.h>
using namespace std;

int main() {
    int tests;
    cin>>tests;
    while(tests--) {
        int n,k;
        cin>>n>>k;
        vector<int> ans(n+1);
        int num=n;
        for(int i=2;i<n;i+=2) {
            if(k==0)break;
            ans[i]=num--;
            k--;
        }
        if(k) {
            cout<<-1<<endl;
            continue;
        }
        int cur=1;
        for(int i=1;i<=n;i++) {
            if(!ans[i]) ans[i]=cur++;
        }
        for(int i=1;i<=n;i++) cout<<ans[i]<<" ";
        cout<<endl;
    }
}
1513
B
AND Sequences
A sequence of $n$ non-negative integers ($n \ge 2$) $a_1, a_2, \dots, a_n$ is called good if for all $i$ from $1$ to $n-1$ the following condition holds true: $$a_1 \: \& \: a_2 \: \& \: \dots \: \& \: a_i = a_{i+1} \: \& \: a_{i+2} \: \& \: \dots \: \& \: a_n,$$ where $\&$ denotes the bitwise AND operation. You are given an array $a$ of size $n$ ($n \geq 2$). Find the number of permutations $p$ of numbers ranging from $1$ to $n$, for which the sequence $a_{p_1}$, $a_{p_2}$, ... ,$a_{p_n}$ is good. Since this number can be large, output it modulo $10^9+7$.
Consider an arbitrary sequence $b_1,b_2,\dots,b_n$. First let us define the arrays $AND\_pref$ and $AND\_suf$ of length $n$ where $AND\_pref_i = b_1 \: \& \: b_2 \: \& \: \dots \: \& \: b_i$ and $AND\_suf_i = b_{i} \: \& \: b_{i+1} \: \& \: \dots \: \& \: b_n$ . According to the definition of good sequence: $AND\_pref_1 = AND\_suf_2$ which means $b_1 = b_2 \: \& \: b_3 \: \& \: \dots \: \& \: b_n$ . Now $AND\_pref_2 \leq AND\_pref_1 = AND\_suf_2 \leq AND\_suf_3$. Also according to definition of good sequence, $AND\_pref_2 = AND\_suf_3$. This means that $b_1 = AND\_pref_2 = AND\_suf_3$. Similarly, for all $i$ from $1$ to $n$, we get $AND\_pref_i = b_1$ and $AND\_suf_i = b_1$. Therefore for the sequence to be good, $b_1 = b_n$ and the $b_i$ must be a super mask of $b_1$ for all $i$ from $2$ to $n-1$. Initially, we have an array $a_1,a_2,\dots,a_n$. Let the minimum value among these elements be $x$. Let the number of elements that have the value of $x$ be $cnt$. In order to rearrange the elements of $a_1,a_2,\dots,a_n$ to a good sequence, we need to have $cnt \geq 2$ and the remaining elements need to be a super mask of $x$. If we don't meet this criterion, then the answer is $0$. Else the answer will be $(cnt \cdot (cnt-1) \cdot (n-2)!) \% (10^9+7)$. The time complexity is $O(n)$.
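The final counting step can be sketched as a function (a sketch; `countGood` is a name chosen for illustration):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Editorial formula: let x be the minimum element and cnt its frequency.
// If every element is a supermask of x and cnt >= 2, the answer is
// cnt * (cnt - 1) * (n - 2)! mod 1e9+7; otherwise it is 0.
long long countGood(const vector<long long>& a) {
    const long long MOD = 1'000'000'007;
    long long x = *min_element(a.begin(), a.end());
    long long cnt = 0;
    for (long long v : a) {
        if ((v & x) != x) return 0;  // v is not a supermask of x
        cnt += (v == x);
    }
    if (cnt < 2) return 0;           // need x at both ends
    long long ans = cnt % MOD * ((cnt - 1) % MOD) % MOD;
    for (long long i = 1; i <= (long long)a.size() - 2; i++)
        ans = ans * i % MOD;         // (n - 2)!
    return ans;
}
```

For example, $[1,1,1]$ gives $3 \cdot 2 \cdot 1! = 6$ (every permutation is good), while $[0,2,3,3]$ gives $0$ because the minimum occurs only once.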
[ "bitmasks", "combinatorics", "constructive algorithms", "math" ]
1,400
#include<bits/stdc++.h>
using namespace std;

void solveTestCase() {
    int MOD=1e9+7;
    int n;
    cin>>n;
    vector<int> a(n);
    for(int i=0;i<n;i++)cin>>a[i];
    int min1=*min_element(a.begin(),a.end());
    int cnt=0;
    for(int x:a) {
        if(min1==x)cnt++;
        if((min1&x)!=min1) {
            printf("0\n");
            return;
        }
    }
    int fact=1;
    for(int i=1;i<=n-2;i++)fact=(1LL*fact*i)%MOD;
    int ans=(1LL * cnt * (cnt-1))%MOD;
    ans = (1LL * ans * fact) % MOD;
    printf("%d\n",ans);
}

int main() {
    int tests;
    cin>>tests;
    while(tests--) solveTestCase();
    return 0;
}
1513
C
Add One
You are given an integer $n$. You have to apply $m$ operations to it. In a single operation, you \textbf{must} replace every digit $d$ of the number with the decimal representation of integer $d + 1$. For example, $1912$ becomes $21023$ after applying the operation once. You have to find the length of $n$ after applying $m$ operations. Since the answer can be very large, print it modulo $10^9+7$.
We can solve this problem using a 1D dp. Let $dp_i$ be the length of the string after applying the operation $i$ times to the number $10$. Then: $dp_i = 2$ for all $i$ in $[0,8]$; $dp_9 = 3$ (the number after applying $9$ operations to $10$ is $109$); and $dp_i = dp_{i-9} + dp_{i-10}$ otherwise (the digit $1$ turns into $10$ after $9$ operations and the digit $0$ after $10$, so the length is the sum of the lengths after $i-9$ and after $i-10$ operations). Now for each test case, the final answer is: $ans=\sum_{i=1}^{|s|}\bigl( (m+(s[i]-\text{'0'})<10) \; ? \; 1 : dp_{m-10+(s[i]-\text{'0'})} \bigr)$, where $s$ is $n$ written as a string. Time complexity: $O(m + t \cdot |s|)$, where $|s|$ is the number of digits in $n$.
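The dp plus per-digit summation can be sketched in one self-contained function (a sketch; `lengthAfter` is a name chosen for illustration):

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1'000'000'007;

// Length of n after m operations, using the editorial dp where dp[i] is the
// length of the number 10 after i operations.
long long lengthAfter(long long n, int m) {
    vector<long long> dp(max(m + 10, 11), 0);
    for (int i = 0; i <= 8; i++) dp[i] = 2;  // 10 -> 21 -> ... -> 98
    dp[9] = 3;                               // ninth step gives 109
    for (int i = 10; i < (int)dp.size(); i++)
        dp[i] = (dp[i - 9] + dp[i - 10]) % MOD;
    long long ans = 0;
    while (n > 0) {
        int d = n % 10;  // each digit evolves independently
        ans = (ans + (m + d < 10 ? 1 : dp[m + d - 10])) % MOD;
        n /= 10;
    }
    return ans;
}
```

On the statement's example, $1912$ after one operation becomes $21023$, and indeed the function reports length $5$.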
[ "dp", "matrices" ]
1,600
#include <bits/stdc++.h>
using namespace std;
#define int long long

const int max_n = 200005, mod = 1000000007;
int dp[max_n];

signed main(){
    for(int i=0; i<9; i++)dp[i] = 2;
    dp[9] = 3;
    for(int i=10; i<max_n; i++){
        dp[i] = (dp[i-9] + dp[i-10])%mod;
    }
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    int t;
    cin>>t;
    while(t--){
        int n, m;
        cin>>n>>m;
        int ans = 0;
        while(n > 0){
            int x = n%10;
            ans += ((m + x < 10) ? 1 : dp[m + x - 10]);
            ans %= mod;
            n/=10;
        }
        cout<<ans<<"\n";
    }
    return 0;
}
1513
D
GCD and MST
You are given an array $a$ of $n$ ($n \geq 2$) positive integers and an integer $p$. Consider an undirected weighted graph of $n$ vertices numbered from $1$ to $n$ for which the edges between the vertices $i$ and $j$ ($i<j$) are added in the following manner: - If $gcd(a_i, a_{i+1}, a_{i+2}, \dots, a_{j}) = min(a_i, a_{i+1}, a_{i+2}, \dots, a_j)$, then there is an edge of weight $min(a_i, a_{i+1}, a_{i+2}, \dots, a_j)$ between $i$ and $j$. - If $i+1=j$, then there is an edge of weight $p$ between $i$ and $j$. Here $gcd(x, y, \ldots)$ denotes the greatest common divisor (GCD) of integers $x$, $y$, .... Note that there could be multiple edges between $i$ and $j$ if both of the above conditions are true, and if both the conditions fail for $i$ and $j$, then there is no edge between these vertices. The goal is to find the weight of the minimum spanning tree of this graph.
We will iterate over the values from smallest to largest, as in Kruskal's algorithm, so that the edges with the smallest weight are considered first. While iterating, we treat the current value as the gcd we want to obtain, say $g$, and span left and then right from its position; while going left/right, if $\gcd(new\_element, g) = g$, we add an edge in the graph between the corresponding positions of $new\_element$ and $g$. While adding edges we must keep in mind that we should not create a cycle: if an edge would form a cycle, we do not add it and stop spanning in that direction. A DSU could be used to check this, but in this particular problem it is not required. We can simply check whether the adjacent pair we are about to cross is already connected; if it is, we stop spanning further in that direction, as any further edge would form a cycle (think about why this is so; hint: we are adding edges in a segment-by-segment fashion). Also, we stop once the current $g$ under consideration reaches the parameter $p$, since the remaining components can be connected by edges of weight $p$. Thus the number of edges considered is $O(N)$, and the overall complexity of finding the MST is $O(N \log N)$ due to sorting.
[ "constructive algorithms", "dsu", "graphs", "greedy", "number theory", "sortings" ]
2,000
#include<bits/stdc++.h>
using namespace std;

void solveTestCase() {
    int n,x;
    cin>>n>>x;
    vector<int> a(n);
    for(int i=0;i<n;i++)cin>>a[i];
    // tells whether vertices i and i+1 are connected for 0<=i<n-1
    vector<bool> isConnected(n);
    vector<pair<int,int>> vals;
    for(int i=0;i<n;i++) vals.push_back(make_pair(a[i],i));
    sort(vals.begin(),vals.end());
    long long int ans=0;
    for(auto p:vals) {
        int cur_val=p.first;
        int i=p.second;
        if(cur_val>=x)break;
        while(i>0) {
            if(isConnected[i-1])break;
            if(a[i-1]%cur_val==0) {
                isConnected[i-1]=true;
                ans+=cur_val;
                i--;
            }
            else break;
        }
        i=p.second;
        while(i<n-1) {
            if(isConnected[i])break;
            if(a[i+1]%cur_val==0) {
                isConnected[i]=true;
                ans+=cur_val;
                i++;
            }
            else break;
        }
    }
    for(int i=0;i<n-1;i++) {
        if(!isConnected[i]) ans+=x;
    }
    cout<<ans<<endl;
}

int main() {
    int T;
    cin>>T;
    while(T--) {
        solveTestCase();
    }
    return 0;
}
1513
E
Cost Equilibrium
An array is called beautiful if all the elements in the array are equal. You can transform an array using the following steps any number of times: - Choose two indices $i$ and $j$ ($1 \leq i,j \leq n$), and an integer $x$ ($1 \leq x \leq a_i$). Let $i$ be the source index and $j$ be the sink index. - Decrease the $i$-th element by $x$, and increase the $j$-th element by $x$. The resulting values at $i$-th and $j$-th index are $a_i-x$ and $a_j+x$ respectively. - The cost of this operation is $x \cdot |j-i| $. - Now the $i$-th index can no longer be the sink and the $j$-th index can no longer be the source. The total cost of a transformation is the sum of all the costs in step $3$.For example, array $[0, 2, 3, 3]$ can be transformed into a beautiful array $[2, 2, 2, 2]$ with total cost $1 \cdot |1-3| + 1 \cdot |1-4| = 5$. An array is called balanced, if it can be transformed into a beautiful array, and the cost of such transformation is \textbf{uniquely} defined. In other words, the minimum cost of transformation into a beautiful array equals the maximum cost. You are given an array $a_1, a_2, \ldots, a_n$ of length $n$, consisting of non-negative integers. Your task is to find the number of balanced arrays which are permutations of the given array. Two arrays are considered different, if elements at some position differ. Since the answer can be large, output it modulo $10^9 + 7$.
Let $S = \sum_{i=0}^{n-1} a_i$. The first and foremost condition is that $S \bmod n = 0$, and the final values in the beautiful array will all equal $x=S/n$. Since a node cannot operate as both a source and a sink: nodes with value $> x$ can only be source vertices, nodes with value $< x$ can only be sink vertices, and nodes with value $= x$ will not be used as a source or a sink. The condition $minimumCost=maximumCost$ holds only when at least one of the following holds: the number of source vertices is zero or one; the number of sink vertices is zero or one; all source vertices come before all sink vertices in the permutation; all source vertices come after all sink vertices in the permutation. Let $f(i)$ denote the factorial of $i$. Thus, the answer is as follows. If the number of sources or sinks is at most one, every distinct permutation of the array is balanced, so $Ans = f(n) / (f(x_1) \cdot f(x_2) \cdots)$ where $x_1, x_2, \dots$ are the frequencies of the distinct values. Otherwise, $Ans = 2 \cdot A \cdot B \cdot C$, where: $A = ($# ways of arranging the source nodes$) = f(src) / (f(x_1) \cdot f(x_2) \cdots)$ where $x_1, x_2, \dots$ are the frequencies of distinct values among the source nodes; $B = ($# ways of arranging the sink nodes$) = f(snk) / (f(x_1) \cdot f(x_2) \cdots)$ where $x_1, x_2, \dots$ are the frequencies of distinct values among the sink nodes; $C = ($# ways of filling the $(n-src-snk)$ identical value-$x$ elements into the $(src+snk+1)$ gaps$) = \binom{n}{src + snk}$. The additional factor of $2$ covers the two possibilities: all sources before all sinks, and all sources after all sinks. The overall complexity of the solution is $O(n)$.
[ "combinatorics", "constructive algorithms", "math", "sortings" ]
2,300
#include "bits/stdc++.h"
#define ll long long
#define MOD 1000000007
ll power(ll x,ll y, ll md=MOD){ll res = 1;x%=md;while(y){if(y&1)res = (res*x)%md;x *= x; if(x>=md) x %= md; y >>= 1;}return res;}
using namespace std;
#define int long long
#define MAX 100005
vector<int> f(MAX);
vector<int> inv(MAX);

void init() {
    f[0] = 1;
    for(int i=1;i<MAX;i++) f[i] = (f[i-1]*i)%MOD;
    inv[MAX-1] = power(f[MAX-1], MOD-2, MOD);
    for(int i=MAX-2;i>=0;i--) inv[i] = (inv[i+1]*(i+1)) % MOD;
    for(int i=0;i<MAX;i++) assert(inv[i]==power(f[i],MOD-2,MOD));
}

int ncr(int n, int r) {
    if(r > n || r < 0) return 0;
    int ans = f[n];
    ans *= (inv[r] * inv[n - r]) % MOD;
    ans %= MOD;
    return ans;
}

int solve(const vector<int> &v) {
    int n = v.size();
    int s = 0;
    for(auto x: v) s += x;
    if(!(s % n == 0)) return 0;
    int src = 0;
    int snk = 0;
    map<int,int> freqSrc, freqSnk;
    for(auto x: v) {
        if(s / n < x) {
            freqSrc[x]++;
            src ++;
        }
        if(s / n > x) {
            freqSnk[x]++;
            snk ++;
        }
    }
    if(src == 0 && snk == 0) return 1;
    if(src == 1 || snk == 1) {
        int ans = f[n];
        for(auto x: freqSnk) {
            ans = (ans * inv[x.second]) % MOD;
        }
        for(auto x: freqSrc) {
            ans = (ans * inv[x.second]) % MOD;
        }
        ans *= inv[n - src - snk];
        ans %= MOD;
        return ans;
    }
    int ans = (2 * f[src] * f[snk]) % MOD;
    // Divide by freq of repeating elements
    for(auto x: freqSnk) {
        ans = (ans * inv[x.second]) % MOD;
    }
    for(auto x: freqSrc) {
        ans = (ans * inv[x.second]) % MOD;
    }
    int tot = src + snk;
    int left = n - tot;
    // Number of Solution: x_0 + x_1 + x_2 + ... + x_tot = left
    ans = (ans * ncr(left + tot, tot)) % MOD;
    return ans;
}

signed main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    cout.tie(NULL);
    init();
    int n;
    cin>>n;
    vector<int> v(n);
    for(auto &x: v) cin>>x;
    cout<<solve(v);
}
1513
F
Swapping Problem
You are given 2 arrays $a$ and $b$, both of size $n$. You can swap two elements in $b$ at most \textbf{once} (or leave it as it is), and you are required to minimize the value $$\sum_{i}|a_{i}-b_{i}|.$$ Find the minimum possible value of this sum.
Let's form two sets $X$ and $Y$: $X$ contains the indices $i$ such that $A_i < B_i$, and $Y$ contains the indices $i$ such that $A_i > B_i$. For each index $i$, define the segment $[L_i, R_i]$ by $L_i = \min(A_i, B_i)$ and $R_i = \max(A_i, B_i)$. Let's break this question into a series of observations. Observation 1: swapping two indices $i$ and $j$ in $B$ can reduce the answer only if one of $i$, $j$ belongs to $X$ and the other to $Y$, and segments $i$ and $j$ overlap. Observation 2: based on the previous observation, we are left with only a few cases. WLOG we can fix $A_i$ and $B_i$ and see where index $j$ can be located: $B_j \le A_i$ and $A_i \le A_j \le B_i$; $B_j \le A_i$ and $B_i \le A_j$; $B_j \ge A_i$ and $A_i \le A_j \le B_i$; $B_j \ge A_i$ and $B_i \le A_j$. Observation 3: in each of these cases the answer changes by $final\_answer = original\_answer - 2 \cdot overlap(Segment_i, Segment_j)$. So the algorithm looks like this: $final\_answer = original\_answer - 2 \cdot \max(0, A, B)$, where $A = \max(\min(r_i, prefix\_Maxima(Y, l_i)) - l_i)$ over members $(l_i, r_i)$ of $X$, and $B = \max(\min(r_i, prefix\_Maxima(X, l_i)) - l_i)$ over members $(l_i, r_i)$ of $Y$. Here $prefix\_Maxima(P, q)$ gives the maximum value of $r$ over pairs $(l, r) \in P$ with $l \le q$. $prefix\_Maxima$ can be implemented using a map, a segment tree, or a two-pointer approach. The overall complexity of the solution is $O(n \log n)$.
[ "brute force", "constructive algorithms", "data structures", "sortings" ]
2,500
// created by mtnshh
#include<bits/stdc++.h>
#include<algorithm>
using namespace std;
#define ll long long
#define ld long double
#define rep(i,a,b) for(ll i=a;i<b;i++)
#define repb(i,a,b) for(ll i=a;i>=b;i--)
#define pb push_back
#define all(A) A.begin(),A.end()
#define allr(A) A.rbegin(),A.rend()
#define ft first
#define sd second
#define pll pair<ll,ll>
#define V vector<ll>
#define S set<ll>
#define VV vector<V>
#define Vpll vector<pll>
#define endl "\n"
const ll logN = 20;
const ll M = 1000000007;
const ll INF = 1e18;
#define PI 3.14159265
const ll N = 100005;

int main(){
    ios_base::sync_with_stdio(0);cin.tie(0);cout.tie(0);
    ll n;
    cin >> n;
    ll A[n], B[n];
    rep(i,0,n) cin >> A[i];
    rep(i,0,n) cin >> B[i];
    Vpll x1, x2, y1, y2;
    ll ans = 0;
    rep(i,0,n){
        ans += abs(A[i]-B[i]);
        if(A[i]<B[i]){
            x1.pb({A[i], B[i]});
            x2.pb({B[i], A[i]});
        }
        else{
            y1.pb({A[i], B[i]});
            y2.pb({B[i], A[i]});
        }
    }
    ll fin = ans;
    //
    set<ll> s1;
    sort(all(x1));
    sort(all(y2));
    ll cur1 = 0;
    for(auto i: x1){
        while(cur1 < y2.size() and y2[cur1].ft <= i.ft){
            s1.insert(y2[cur1].sd);
            cur1++;
        }
        if(s1.size() > 0){
            ll last = *s1.rbegin();
            fin = min(fin, ans - 2 * (min(i.sd, last) - i.ft));
        }
    }
    set<ll> s2;
    sort(all(x1));
    sort(all(y2));
    ll cur2 = 0;
    for(auto i: y2){
        while(cur2 < x1.size() and x1[cur2].ft <= i.ft){
            s2.insert(x1[cur2].sd);
            cur2++;
        }
        if(s2.size() > 0){
            ll last = *s2.rbegin();
            fin = min(fin, ans - 2 * (min(last, i.sd) - i.ft));
        }
    }
    cout << fin << endl;
    return 0;
}
1514
A
Perfectly Imperfect Array
Given an array $a$ of length $n$, tell us whether it has a non-empty subsequence such that the product of its elements is \textbf{not} a perfect square. A sequence $b$ is a subsequence of an array $a$ if $b$ can be obtained from $a$ by deleting some (possibly zero) elements.
If any element is not a perfect square, the answer is yes (that single element is a valid subsequence). Otherwise, the answer is no, because $a^2 \cdot b^2 \cdots = (a \cdot b \cdots)^2$, so any product of perfect squares is itself a perfect square.
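The whole check reduces to a single scan (a sketch; `hasNonSquareSubsequence` is a name chosen for illustration):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Answer is "yes" iff some a_i is not a perfect square: that one element is
// already a subsequence whose product is not a square.
bool hasNonSquareSubsequence(const vector<long long>& a) {
    for (long long v : a) {
        long long r = llround(sqrtl((long double)v));
        if (r * r != v) return true;
    }
    return false;  // product of squares is always a square
}
```

Note the rounded integer square root avoids floating-point pitfalls of comparing `sqrt(v)` directly.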
[ "math", "number theory" ]
800
#include <bits/stdc++.h>
using namespace std;
#define MX 10000
bool sq[MX+5];
int main()
{
    for (int i=1;i*i<=MX;i++)
        sq[i*i]=1;
    int t;
    scanf("%d",&t);
    while (t--)
    {
        int n;
        scanf("%d",&n);
        bool ok=1;
        while (n--)
        {
            int a;
            scanf("%d",&a);
            ok&=sq[a];
        }
        puts(ok? "NO":"YES");
    }
}
1514
B
AND 0, Sum Big
Baby Badawy's first words were "AND 0 SUM BIG", so he decided to solve the following problem. Given two integers $n$ and $k$, count the number of arrays of length $n$ such that: - all its elements are integers between $0$ and $2^k-1$ (inclusive); - the bitwise AND of all its elements is $0$; - the sum of its elements is as large as possible. Since the answer can be very large, print its remainder when divided by $10^9+7$.
Let's start with an array where every single bit in every single element is $1$. It clearly doesn't have bitwise-and equal to $0$, so for each bit, we need to turn it off (make it $0$) in at least one of the elements. However, we can't turn it off in more than one element, since the sum would then decrease for no reason. So for every bit, we should choose exactly one element and turn it off there. Since there are $k$ bits and $n$ elements, the answer is just $n^k$.
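Computing $n^k \bmod 10^9+7$ can be done with binary exponentiation (a sketch; `powMod` is a name chosen for illustration — the reference solution's simple $k$-step loop is equally fine here since $k$ is small):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Binary exponentiation: square the base and halve the exponent, O(log e).
long long powMod(long long b, long long e, long long mod = 1'000'000'007) {
    long long r = 1 % mod;
    b %= mod;
    while (e > 0) {
        if (e & 1) r = r * b % mod;  // current bit of e contributes b^(2^i)
        b = b * b % mod;
        e >>= 1;
    }
    return r;
}
```

For instance, $n=2$, $k=2$ gives $2^2 = 4$ arrays, as each of the two bits independently chooses which element to turn off.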
[ "bitmasks", "combinatorics", "math" ]
1,200
#include <bits/stdc++.h>

using namespace std;

int n,k;
const int MOD=1e9+7;

int main()
{
    int t;
    scanf("%d",&t);
    while(t--)
    {
        scanf("%d %d",&n,&k);
        long long ans=1;
        for(int i=0;i<k;i++) ans=(ans*n)%MOD;
        printf("%lld\n",ans);
    }
}
1514
C
Product 1 Modulo N
Now you get Baby Ehab's first words: "Given an integer $n$, find the longest subsequence of $[1,2, \ldots, n-1]$ whose product is $1$ modulo $n$." Please solve the problem. A sequence $b$ is a subsequence of an array $a$ if $b$ can be obtained from $a$ by deleting some (possibly all) elements. The product of an empty subsequence is equal to $1$.
So first observe that the subsequence can't contain any element that isn't coprime with $n$. Why? Because then its product won't be coprime with $n$, so when you take it modulo $n$, it can't be $1$. In mathier words, $\gcd(prod \bmod n, n)=\gcd(prod,n) \neq 1$. Now, let's take all elements less than $n$ and coprime with it, and look at their product modulo $n$; call it $p$. If $p$ is $1$, you can take all these elements. Otherwise, you should take all of them except $p$ itself. Note that $p$ does belong to the chosen set, because $p$ is coprime with $n$: $\gcd(prod \bmod n, n)=\gcd(prod,n)=1$, since every factor of the product is coprime with $n$.
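The construction can be sketched as a function returning the chosen subsequence (a sketch; `longestSubsequence` is a name chosen for illustration):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Take every i < n coprime to n; if their product p mod n is not 1,
// drop the single element p (which is guaranteed to be in the set).
vector<int> longestSubsequence(int n) {
    vector<int> take;
    long long prod = 1;
    for (int i = 1; i < n; i++)
        if (gcd(n, i) == 1) {
            take.push_back(i);
            prod = prod * i % n;
        }
    if (prod != 1)
        take.erase(find(take.begin(), take.end(), (int)prod));
    return take;
}
```

For $n=5$ the coprime elements are $1,2,3,4$ with product $24 \equiv 4 \pmod 5$, so $4$ is dropped and $\{1,2,3\}$ remains, with product $6 \equiv 1 \pmod 5$.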
[ "greedy", "number theory" ]
1,600
#include <bits/stdc++.h>
using namespace std;
bool ok[100005];
int main()
{
    int n;
    scanf("%d",&n);
    long long prod=1;
    for (int i=1;i<n;i++)
    {
        if (__gcd(n,i)==1)
        {
            ok[i]=1;
            prod=(prod*i)%n;
        }
    }
    if (prod!=1)
        ok[prod]=0;
    printf("%d\n",count(ok+1,ok+n,1));
    for (int i=1;i<n;i++)
    {
        if (ok[i])
            printf("%d ",i);
    }
}
1514
D
Cut and Stick
Baby Ehab has a piece of Cut and Stick with an array $a$ of length $n$ written on it. He plans to grab a pair of scissors and do the following to it: - pick a range $(l, r)$ and cut out every element $a_l$, $a_{l + 1}$, ..., $a_r$ in this range; - stick some of the elements together in the same order they were in the array; - end up with multiple pieces, where every piece contains some of the elements and every element belongs to some piece. More formally, he partitions the sequence $a_l$, $a_{l + 1}$, ..., $a_r$ into subsequences. He thinks a partitioning is beautiful if for every piece (subsequence) it holds that, if it has length $x$, then no value occurs strictly more than $\lceil \frac{x}{2} \rceil$ times in it. He didn't pick a range yet, so he's wondering: for $q$ ranges $(l, r)$, what is the minimum number of pieces he needs to partition the elements $a_l$, $a_{l + 1}$, ..., $a_r$ into so that the partitioning is beautiful. A sequence $b$ is a subsequence of an array $a$ if $b$ can be obtained from $a$ by deleting some (possibly zero) elements. Note that it does \textbf{not} have to be contiguous.
Suppose the query-interval has length $m$. Let's call an element super-frequent if it occurs more than $\lceil\frac{m}{2}\rceil$ times in it, with frequency $f$. If there's no super-frequent element, then we can just put all the elements in $1$ subsequence. Otherwise, we need the partitioning. Let's call the rest of the elements (other than the super-frequent one) good elements. One way to partition is to put all the $m-f$ good elements with $m-f+1$ occurrences of the super-frequent element; then, put every remaining occurrence of the super-frequent element in a subsequence on its own. The number of subsequences we need here is $1+f-(m-f+1)=2 \cdot f-m$. There's no way to improve this, because in every subsequence we add, the number of occurrences of the super-frequent element minus the number of good elements is at most $1$, so by making it exactly $1$ in each subsequence, we get an optimal construction. Now, the problem boils down to calculating $f$. Note that calculating the most frequent element in general is a well-known slow problem. It's usually solved with Mo's algorithm in $O(n\sqrt{n}\log n)$, maybe with a tricky optimization to $O(n\sqrt{n})$. However, notice that we only need the most frequent element if it occurs more than $\lceil\frac{m}{2}\rceil$ times. How can we use this fact? We can pick about $40$ random elements from our range as candidates for the super-frequent element, then count their occurrences and take the maximum. If there's a super-frequent element, the probability it's never picked is at most $2^{-40}$, which is incredibly small. To count the number of occurrences of an element in a range, we can keep, for each element, a vector of all the positions where it occurs, in increasing order. Then upper_bound(r)-lower_bound(l) gives us the number of occurrences in $O(\log n)$. Observe that if a range has a super-frequent element, and we split it into $2$ ranges, then this element must be super-frequent in one of them.
Now suppose we create a segment tree where every node $[l;r]$ returns an element in the range, and suppose every node merges the $2$ elements returned by its children as follows: count their occurrences in $[l;r]$ and pick whichever occurs more. In general, that doesn't return the most frequent element. However, if there's a super-frequent element, it must return it! That's because if there's a super-frequent element in $[l;r]$, it must be super-frequent in one of its children, so by induction the segment tree works. The time complexity is $O(n \log^2 n)$.
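The occurrence-counting primitive used throughout both approaches can be sketched on its own (a sketch; `countOcc` is a name chosen for illustration — it assumes `pos` holds the sorted positions of one value):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Number of occurrences of a value inside [l, r], given the sorted list of
// all positions where that value appears: upper_bound(r) - lower_bound(l).
int countOcc(const vector<int>& pos, int l, int r) {
    return (int)(upper_bound(pos.begin(), pos.end(), r) -
                 lower_bound(pos.begin(), pos.end(), l));
}
```

Each query is two binary searches, i.e. $O(\log n)$, which is what makes both the random-sampling and the segment-tree variants efficient.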
[ "binary search", "data structures", "greedy", "implementation", "sortings" ]
2,000
"#include <bits/stdc++.h>\nusing namespace std;\nint a[300005],tree[1200005];\nvector<int> v[300005];\nint cnt(int l,int r,int c)\n{\n\treturn upper_bound(v[c].begin(),v[c].end(),r)-lower_bound(v[c].begin(),v[c].end(),l);\n}\nvoid build(int node,int st,int en)\n{\n\tif (st==en)\n\ttree[node]=a[st];\n\telse\n\t{\n\t\tint mid=(st+en)/2;\n\t\tbuild(2*node,st,mid);\n\t\tbuild(2*node+1,mid+1,en);\n\t\ttree[node]=(cnt(st,en,tree[2*node])>cnt(st,en,tree[2*node+1])? tree[2*node]:tree[2*node+1]);\n\t}\n}\nint query(int node,int st,int en,int l,int r)\n{\n\tif (en<l || st>r || r<l)\n\treturn 0;\n\tif (l<=st && en<=r)\n\treturn cnt(l,r,tree[node]);\n\tint mid=(st+en)/2;\n\treturn max(query(2*node,st,mid,l,r),query(2*node+1,mid+1,en,l,r));\n}\nint main()\n{\n\tint n,q;\n\tscanf(\"%d%d\",&n,&q);\n\tfor (int i=1;i<=n;i++)\n\t{\n\t\tscanf(\"%d\",&a[i]);\n\t\tv[a[i]].push_back(i);\n\t}\n\tbuild(1,1,n);\n\twhile (q--)\n\t{\n\t\tint l,r;\n\t\tscanf(\"%d%d\",&l,&r);\n\t\tprintf(\"%d\\n\",max(1,2*query(1,1,n,l,r)-(r-l+1)));\n\t}\n}"
1514
E
Baby Ehab's Hyper Apartment
This is an interactive problem. Baby Ehab loves crawling around his apartment. It has $n$ rooms numbered from $0$ to $n-1$. For every pair of rooms, $a$ and $b$, there's either a direct passage from room $a$ to room $b$, or from room $b$ to room $a$, but never both. Baby Ehab wants to go play with Baby Badawy. He wants to know if he could get to him. However, he doesn't know anything about his apartment except the number of rooms. He can ask the baby sitter two types of questions: - is the passage between room $a$ and room $b$ directed from $a$ to $b$ or the other way around? - does room $x$ have a passage towards any of the rooms $s_1$, $s_2$, ..., $s_k$? He can ask at most $9n$ queries of the first type and at most $2n$ queries of the second type. After asking some questions, he wants to know for every pair of rooms $a$ and $b$ whether there's a path from $a$ to $b$ or not. A path from $a$ to $b$ is a sequence of passages that starts from room $a$ and ends at room $b$.
Throughout the editorial, I'll call the first type of queries OneEdge and the second type ManyEdges. The basic idea behind this problem is to find a few edges such that every path that could be traversed in your graph could be traversed using only these edges. With that motivation in mind, let's get started. The first observation is: the graph has a Hamiltonian path. To prove this, suppose you split the graph into $2$ halves, each containing some of the nodes. Then, we'll proceed by induction. Suppose each half has a Hamiltonian path. I'll describe a way to merge them into one path. First, let's look at the first node in each path and ask about the edge between them. Suppose it's directed from the first to the second one. Then, I'll start my new merged path with the first node, remove it, and repeat. This works because no matter which node follows it, it sends an edge out to it. This is best described by the following picture: we start with the $2$ Hamiltonian paths we got by induction, then we query that red edge. We find that it's directed from the grey node to the white node. We then put our grey node at the start of the merged path and continue doing that with the rest of the nodes, and we don't care which node will follow it, because the edge goes out from the grey node either way! If everything falls into place in your mind, you should recognize that this algorithm is merge sort. We just run merge sort on the nodes of the graph, using OneEdge as the comparator. That gives you a Hamiltonian path in $n\log{n}$ queries. Now that we have a Hamiltonian path, every edge that goes forward along it is useless, since you can just walk down the path itself. So let's focus on the edges going backwards. Suppose we iterate through the Hamiltonian path from its end to its beginning, looking at the edges going back from the nodes we meet. An edge going backwards from the current node is important only if it goes back further than any of the edges we've met so far. 
That's because we can take a (much less direct) route instead of this edge if it doesn't go as far back. Now, with these few extra edges, we can form all the paths! How do we get these edges? We can use two pointers. Let's iterate through the Hamiltonian path from its end to its beginning, carrying a pointer $p$ that tells us how far back the edges we've met so far can go. To update $p$, let's ask ManyEdges from the node we're currently at to the first $p$ nodes of the Hamiltonian path. While the answer is $1$, we'll keep decreasing $p$. This algorithm calls ManyEdges at most $2n$ times, since every time we call it, either it returns $0$ and the node we're at decreases, or it returns $1$ and $p$ decreases.
[ "binary search", "graphs", "interactive", "sortings", "two pointers" ]
2,700
"#include <bits/stdc++.h>\nusing namespace std;\nbool ManyEdges(int x,vector<int> s)\n{\n\tprintf(\"2 %d %d\",x,s.size());\n\tfor (int i:s)\n\tprintf(\" %d\",i);\n\tprintf(\"\\n\");\n\tfflush(stdout);\n\tint ret;\n\tscanf(\"%d\",&ret);\n\tif (ret==-1)\n\texit(0);\n\treturn ret;\n}\nbool OneEdge(int a,int b)\n{\n\tprintf(\"1 %d %d\\n\",a,b);\n\tfflush(stdout);\n\tint ret;\n\tscanf(\"%d\",&ret);\n\tif (ret==-1)\n\texit(0);\n\treturn ret;\n}\nvector<int> getprefix(vector<int> v,int p)\n{\n\tvector<int> ret;\n\tfor (int i=0;i<=p;i++)\n\tret.push_back(v[i]);\n\treturn ret;\n}\nvector<vector<int> > getMap(int n)\n{\n\tvector<vector<int> > ret(n,vector<int>(n,1));\n\tvector<int> path;\n\tfor (int i=0;i<n;i++)\n\tpath.push_back(i);\n\tstable_sort(path.begin(),path.end(),OneEdge);\n\tint p=n-2;\n\tfor (int i=n-1;i>=0;i--)\n\t{\n\t\tif (p==i)\n\t\t{\n\t\t\tfor (int j=0;j<=i;j++)\n\t\t\t{\n\t\t\t\tfor (int k=i+1;k<n;k++)\n\t\t\t\tret[path[k]][path[j]]=0;\n\t\t\t}\n\t\t\tp--;\n\t\t}\n\t\twhile (ManyEdges(path[i],getprefix(path,p)))\n\t\tp--;\n\t}\n\treturn ret;\n}\nint main()\n{\n\tint t;\n\tscanf(\"%d\",&t);\n\twhile (t--)\n\t{\n\t\tint n;\n\t\tscanf(\"%d\",&n);\n\t\tauto res=getMap(n);\n\t\tputs(\"3\");\n\t\tfor (int i=0;i<n;i++)\n\t\t{\n\t\t\tfor (int j=0;j<n;j++)\n\t\t\tprintf(\"%d\",res[i][j]);\n\t\t\tprintf(\"\\n\");\n\t\t}\n\t\tfflush(stdout);\n\t\tint ok;\n\t\tscanf(\"%d\",&ok);\n\t\tif (ok==-1)\n\t\texit(0);\n\t}\n}\n"
1515
A
Phoenix and Gold
Phoenix has collected $n$ pieces of gold, and he wants to weigh them together so he can feel rich. The $i$-th piece of gold has weight $w_i$. All weights are \textbf{distinct}. He will put his $n$ pieces of gold on a weight scale, one piece at a time. The scale has an unusual defect: if the total weight on it is \textbf{exactly} $x$, it will explode. Can he put all $n$ gold pieces onto the scale in some order, without the scale exploding during the process? If so, help him find some possible order. Formally, rearrange the array $w$ so that for each $i$ $(1 \le i \le n)$, $\sum\limits_{j = 1}^{i}w_j \ne x$.
Note that if the sum of all the weights is $x$, the scale will always explode at the end, and the answer is NO. Otherwise, we claim there is always an answer. At each point, we choose an arbitrary gold piece to add to the scale so that it doesn't explode; a valid piece always exists because the weights are distinct. For example, we can try adding pieces $1$, $2$, $\dots$, $n$, in that order. Suppose we are currently considering the $i$-th piece. If we can add it to the scale without an explosion, we do so. If we can't, we add piece $i+1$ first and then piece $i$ ($w_i \neq w_{i+1}$ because the weights are distinct, so the new prefix sum is not $x$, and after adding both pieces the sum exceeds $x$). Piece $i+1$ must exist, i.e., $i+1 \le n$, because otherwise the total sum of the weights would be $x$. Time complexity for each test case: $O(n)$
[ "constructive algorithms", "greedy", "math" ]
800
#include <bits/stdc++.h> using namespace std; void solve(){ int n,x; int w[101]; cin>>n>>x; int sum=0; for (int i=0;i<n;i++){ cin>>w[i]; sum+=w[i]; } //if the sum is x, we cannot avoid the explosion if (sum==x){ cout<<"NO"<<endl; return; } cout<<"YES"<<endl; //otherwise, the answer always exists //we will keep adding elements from left to right //if we can't add element i, we add element i+1 first by swapping the two for (int i=0;i<n;i++){ if (x==w[i]){ //i+1 will always be less than n because otherwise, the total sum would be x swap(w[i],w[i+1]); } cout<<w[i]<<' '; x-=w[i]; } cout<<endl; return; } int main(){ int T; cin>>T; while (T--) solve(); return 0; }
1515
B
Phoenix and Puzzle
Phoenix is playing with a new puzzle, which consists of $n$ identical puzzle pieces. Each puzzle piece is a right isosceles triangle as shown below. \begin{center} {\small A puzzle piece} \end{center} The goal of the puzzle is to create a \textbf{square} using the $n$ pieces. He is allowed to rotate and move the pieces around, but none of them can overlap and all $n$ pieces must be used (of course, the square shouldn't contain any holes as well). Can he do it?
If $n$ can be written as $2x$ or $4x$, where $x$ is a perfect square, then the answer is YES. Otherwise it is NO. To visualize this construction, we start by first building a smaller square using exactly $2$ or $4$ pieces (the drawings are in the sample test explanation). We can then arrange $x$ of those smaller squares in a $\sqrt{x} \times \sqrt{x}$ grid to build the larger square. Let's prove that there are no other answers (although this isn't necessary to solve the problem). Let's define each triangle piece to have legs of length $1$ and a hypotenuse of length $\sqrt{2}$. Consider one side of the square, and suppose that $a$ legs and $b$ hypotenuses lie along it. The side length will be $a+\sqrt{2}b$. The area of the square is a rational number because the area of each triangle piece is rational. So, $(a+\sqrt{2}b)^2 = a^2+2b^2+2\sqrt{2}ab$ has to be rational, which forces $ab=0$, i.e., either $a$ is $0$ or $b$ is $0$. In either case, we can use the construction in the previous paragraph. Time complexity for each test case: $O(\sqrt{n})$ or $O(\log{n})$ (depends on how you check for perfect squares)
[ "brute force", "geometry", "math", "number theory" ]
1,000
#include <bits/stdc++.h> using namespace std; bool isSquare(int x){ int y=sqrt(x); return y*y==x; } void solve(){ int n; cin>>n; if (n%2==0 && isSquare(n/2)) cout<<"YES"<<endl; else if (n%4==0 && isSquare(n/4)) cout<<"YES"<<endl; else cout<<"NO"<<endl; } int main(){ int t; cin>>t; while (t--) solve(); }
1515
C
Phoenix and Towers
Phoenix has $n$ blocks of height $h_1, h_2, \dots, h_n$, and all $h_i$ don't exceed some value $x$. He plans to stack all $n$ blocks into $m$ separate towers. The height of a tower is simply the sum of the heights of its blocks. For the towers to look beautiful, no two towers may have a height difference of strictly more than $x$. Please help Phoenix build $m$ towers that look beautiful. Each tower must have at least one block and all blocks must be used.
Greedily adding each block to the currently shortest tower always gives a valid solution. Let's prove it by contradiction. If the towers weren't beautiful, then some two towers would have a height difference of more than $x$. Consider the moment the last block was added to the taller of the two: at that moment it was the shortest tower, so its height was at most that of the other tower, and since a single block has height at most $x$, after the addition the difference between the two towers was at most $x$. Neither tower changes afterwards, contradicting the assumed difference of more than $x$. To implement this, we can use a set-like structure and store pairs of ($h_i$, $i$) for each $i$ ($1 \le i \le m$), where $h_i$ is the current height of the $i$-th tower (initialized to $0$). When adding a block, we remove the first element of the sorted set, update the tower height, and add it back into the set. Time complexity for each test case: $O(n \log{n})$
[ "constructive algorithms", "data structures", "greedy" ]
1,400
#include <bits/stdc++.h> using namespace std; int N,M,X; int H[100001]; void solve(){ cin>>N>>M>>X; cout<<"YES"<<endl; set<pair<int,int>>s; //stores pairs of (height, index) for (int i=1;i<=M;i++) s.insert({0,i}); for (int i=0;i<N;i++){ cin>>H[i]; pair<int,int>p=*s.begin(); s.erase(p); cout<<p.second<<' '; s.insert({p.first+H[i],p.second}); } cout<<endl; } int main(){ int T; cin>>T; while (T--) solve(); }
1515
D
Phoenix and Socks
To satisfy his love of matching socks, Phoenix has brought his $n$ socks ($n$ is even) to the sock store. Each of his socks has a color $c_i$ and is either a left sock or right sock. Phoenix can pay one dollar to the sock store to either: - recolor a sock to any color $c'$ $(1 \le c' \le n)$ - turn a left sock into a right sock - turn a right sock into a left sock The sock store may perform each of these changes any number of times. Note that the color of a left sock doesn't change when it turns into a right sock, and vice versa. A matching pair of socks is a left and right sock with the same color. What is the minimum cost for Phoenix to make $n/2$ matching pairs? Each sock must be included in exactly one matching pair.
First, let's remove all pairs that are already matching, because an optimal solution never changes them. Suppose there remain $l$ left socks and $r$ right socks. Without loss of generality, assume $l \geq r$; if not, we can just swap the roles of left and right socks. We know that regardless of other operations, we will always need to turn $(l-r)/2$ left socks into right socks (note that $l+r$ is even, so $l-r$ is too). So, if there are any pairs of left socks with the same color, we will first make matching pairs by turning one sock of each such pair into a right sock (while making sure $l$ stays at least $r$). This is optimal: since we must turn left socks into right socks eventually, we might as well do it in a way that yields matching pairs immediately. After we either equalize the number of left socks and right socks or run out of same-colored pairs of left socks, we can finish by turning any $(l-r)/2$ left socks into right socks and then recoloring all unmatched left socks to match right socks. Note that $l$ and $r$ change throughout this process. Time complexity for each test case: $O(n)$
[ "greedy", "sortings", "two pointers" ]
1,500
#include <bits/stdc++.h> using namespace std; int N,L,R; int C[200001]; int lcnt[200001],rcnt[200001]; void solve(){ cin>>N>>L>>R; for (int i=1;i<=N;i++){ lcnt[i]=0; rcnt[i]=0; } for (int i=1;i<=N;i++){ cin>>C[i]; if (i<=L) lcnt[C[i]]++; else rcnt[C[i]]++; } //remove pairs that are already matching for (int i=1;i<=N;i++){ int mn=min(lcnt[i],rcnt[i]); lcnt[i]-=mn; rcnt[i]-=mn; L-=mn; R-=mn; } if (L<R){ swap(lcnt,rcnt); swap(L,R); } //now, there are at least as many left socks as right socks int ans=0; for (int i=1;i<=N;i++){ int extra=L-R; //always even int canDo=lcnt[i]/2; int Do=min(canDo*2,extra); //turn "Do"/2 left socks of color i into right socks ans+=Do/2; L-=Do; } //turn extra lefts into rights, then adjust all colors ans+=(L-R)/2+(L+R)/2; cout<<ans<<endl; } int main() { ios_base::sync_with_stdio(0);cin.tie(0); int T; cin>>T; while (T--) solve(); return 0; }
1515
E
Phoenix and Computers
There are $n$ computers in a row, all originally off, and Phoenix wants to turn all of them on. He will manually turn on computers one at a time. At any point, if computer $i-1$ and computer $i+1$ are both on, computer $i$ $(2 \le i \le n-1)$ will turn on automatically if it is not already on. Note that Phoenix cannot manually turn on a computer that already turned on automatically. If we only consider the sequence of computers that Phoenix turns on manually, how many ways can he turn on all the computers? Two sequences are distinct if either the set of computers turned on manually is distinct, or the order of computers turned on manually is distinct. Since this number may be large, please print it modulo $M$.
Let's first determine how many ways there are to turn on a segment of $k$ computers without any of them turning on automatically (we manually enable all $k$). We have two methods. Method 1: If we turn on computer $1$ first, then we must turn on $2$, $3$, $\dots$, $k$ in that order; there is $\binom{k-1}{0}$ way to do this. If we turn on computer $2$ first, then we must turn on $3$, $4$, $\dots$, $k$ in that order, with computer $1$ inserted somewhere; there are $\binom{k-1}{1}$ ways to do this (choose which of the $k-1$ remaining positions computer $1$ occupies). If we turn on computer $3$ first, then we must turn on $4$, $5$, $\dots$, $k$ in that order, with computers $2$ and $1$ inserted somewhere (also in that order); there are $\binom{k-1}{2}$ ways to do this. In total, there are $\binom{k-1}{0}+\binom{k-1}{1}+\dots+\binom{k-1}{k-1}=2^{k-1}$ ways. Method 2: We can prove it with induction. For the base case $k=1$, there is $1$ way. Now, if there are $x$ ways for some $k$, we wish to show that there are $2x$ ways for $k+1$. The critical observation is that the last computer in the sequence must be computer $1$ or computer $k+1$: an interior computer turned on last would already have both neighbors on and would have turned on automatically. If the last computer is $k+1$, then we have $x$ ways (we simply append $k+1$ to each previous sequence). If the last computer is $1$, then we also have $x$ ways (we append $1$ and increment every computer in the previous sequences). In total, we have $x+x=2x$ ways. Therefore, there are $2^{k-1}$ ways for any $k$. As a result, we can define $dp[len][cnt]$ as the number of ways to turn on a prefix of $len-1$ computers having manually turned on $cnt$ of them. The last one, $len$ (if $len < n$), will be turned on automatically. To transition, iterate over the length $k$ of a new segment of manually turned-on computers to append, giving $dp[len+k+1][cnt+k] \mathrel{+}= dp[len][cnt] \cdot \binom{cnt+k}{k} \cdot 2^{k-1}$. The answer will be $\sum\limits_{i = 1}^{n} dp[n+1][i]$. Time Complexity: $O(n^3)$
[ "combinatorics", "dp", "math" ]
2,200
#include <bits/stdc++.h> using namespace std; typedef long long ll; ll MOD; int N; ll dp[405][405]; ll fastexp(ll b, ll exp){ if (exp==0) return 1; ll temp=fastexp(b,exp/2); temp=(temp*temp)%MOD; if (exp%2==1) temp*=b; return temp%MOD; } ll fact[405],inv[405],choose[405][405],pow2[405]; void precompute(){ fact[0]=1; inv[0]=1; for (int i=1;i<=N;i++){ fact[i]=(fact[i-1]*i)%MOD; inv[i]=fastexp(fact[i],MOD-2); } for (int i=0;i<=N;i++) for (int j=0;j<=i;j++) choose[i][j]=((fact[i]*inv[j])%MOD*inv[i-j])%MOD; for (int i=0;i<=N;i++) pow2[i]=fastexp(2,i); } int main() { cin>>N>>MOD; precompute(); dp[0][0]=1; for (int i=0;i<N;i++){ for (int j=0;j<=i;j++){ for (int k=1;i+k<=N;k++){ dp[i+k+1][j+k]+=((dp[i][j]*pow2[k-1])%MOD*choose[j+k][k]); dp[i+k+1][j+k]%=MOD; } } } ll ans=0; for (int i=0;i<=N;i++) ans=(ans+dp[N+1][i])%MOD; cout<<ans<<endl; return 0; }
1515
F
Phoenix and Earthquake
Phoenix's homeland, the Fire Nation, had $n$ cities that were connected by $m$ roads, but the roads were all destroyed by an earthquake. The Fire Nation wishes to repair $n-1$ of these roads so that all the cities are connected again. The $i$-th city has $a_i$ tons of asphalt. $x$ tons of asphalt are used up when repairing a road, and to repair a road between $i$ and $j$, cities $i$ and $j$ must have at least $x$ tons of asphalt between them. In other words, if city $i$ had $a_i$ tons of asphalt and city $j$ had $a_j$ tons, there would remain $a_i+a_j-x$ tons after repairing the road between them. Asphalt can be moved between cities if the road between them is already repaired. Please determine if it is possible to connect all the cities, and if so, output any sequence of roads to repair.
First, note that cities connected by repaired roads behave as one city with the total amount of asphalt. This means building a road is equivalent to merging two cities and summing their asphalt. If the total asphalt is less than $(n-1)\cdot x$, then we don't have enough asphalt to repair $n-1$ roads. It turns out that if the total asphalt is at least $(n-1)\cdot x$, it is always possible to repair all the roads. We can prove this by induction on the number of vertices. This is obviously true for $n=1$. Now we just need to show there is always a road that can be repaired, because doing so will reduce the number of vertices by one and the resulting asphalt will be at least $(n-2)\cdot x$, so the resulting graph can be merged by the inductive hypothesis. If some city has at least $x$ asphalt, we can merge it with any neighbor. Otherwise, all cities have less than $x$ asphalt, and then any two cities $i$ and $j$ must have at least $x$ asphalt between them. To see why, notice that $a_k<x$ for all $1\le k\le n$, so if $a_i+a_j<x$, the total asphalt would be less than $a_i+a_j+(n-2)\cdot x<(n-1)\cdot x$. Thus, one approach is to repeatedly find the city with the most asphalt and merge it with any of its neighbors. This can be implemented with a priority queue and a union-find data structure in $O(m+n\log n)$. There are other approaches as well. For example, since only the connectedness of the graph is important, we can discard all edges except for a spanning tree. Pick any leaf. If it can be merged with its parent, do so. Otherwise, it must have less than $x$ asphalt, so deleting the leaf will leave a connected graph with at least $(n-2)\cdot x$ asphalt, which can be merged recursively; the leaf is merged last. This can be implemented with a DFS in $O(m+n)$.
[ "constructive algorithms", "dfs and similar", "dsu", "graphs", "greedy", "trees" ]
2,600
// Fix a spanning tree // Pick a leaf, either merge it into its parent or postpone it to the end #include <cstdio> #include <vector> #include <algorithm> const int MAXV=300005; const int MAXE=300005; int N,M,X; long long as[MAXV]; int elist[MAXE*2]; int head[MAXV]; int prev[MAXE*2]; int tot=0; bool vis[MAXV]; int ans[MAXV]; int front,back; void dfs(int u){ vis[u]=true; for(int e=head[u];e!=-1;e=prev[e]){ int v=elist[e^1]; if(vis[v]) continue; dfs(v); if(as[u]+as[v]>=X){ as[u]+=as[v]-X; ans[front++]=e>>1; }else{ ans[--back]=e>>1; } } } int main(){ scanf("%d %d %d",&N,&M,&X); long long total=0; for(int i=0;i<N;i++){ scanf("%lld",&as[i]); total+=as[i]; } if(total<1LL*(N-1)*X){ printf("NO\n"); return 0; } std::fill(head,head+N,-1); for(int i=0;i<M*2;i++){ int U; scanf("%d",&U); U--; prev[i]=head[U]; head[U]=i; elist[i]=U; } back=N-1; dfs(0); printf("YES\n"); for(int i=0;i<N-1;i++){ printf("%d\n",ans[i]+1); } }
1515
G
Phoenix and Odometers
In Fire City, there are $n$ intersections and $m$ one-way roads. The $i$-th road goes from intersection $a_i$ to $b_i$ and has length $l_i$ miles. There are $q$ cars that may only drive along those roads. The $i$-th car starts at intersection $v_i$ and has an odometer that begins at $s_i$, increments for each mile driven, and resets to $0$ whenever it reaches $t_i$. Phoenix has been tasked to drive cars along some roads (possibly none) and \textbf{return them to their initial intersection} with the odometer showing $0$. For each car, please find if this is possible. A car may visit the same road or intersection an arbitrary number of times. The odometers don't stop counting the distance after resetting, so odometers may also be reset an arbitrary number of times.
We can solve for each strongly-connected component independently. From now on, we will assume the graph is strongly connected. Define the length of a walk to be the sum of the weights of its edges, modulo $MOD$, the distance at which the odometer resets. This is different for different queries, but the important thing is that some $MOD$ exists. If there is a walk from $a$ to $b$ with length $x$, there is a walk from $b$ to $a$ with length $-x$ (for any $MOD$). To see this, note that since the graph is strongly connected, there is a path from $b$ to $a$. Let its length be $y$. We can walk from $b$ to $a$ and back $MOD-1$ times and then go to $a$ once more, for a total length of $MOD\cdot y+(MOD-1)\cdot x\equiv -x$. Note that this works for any $MOD$, even if we don't know what it is yet. We can show that if $a$ is in a cycle of length $x$, then every vertex $b$ is in a cycle of length $x$. Suppose a path from $a$ to $b$ has length $y$. We can go from $b$ to $a$ (length $-y$), go around the cycle (length $x$), and go back to $b$ (length $y$) for a total length of $-y+x+y=x$. Every vertex $a$ is in a cycle of length $0$. If $a$ is in a cycle of length $x$, it is in a cycle of length $kx$ for any integer $k$. If $a$ is in a cycle of length $x$ and a cycle of length $y$, it is in a cycle of length $x+y$. So the set of possible lengths of cycles containing vertex $a$ is exactly the set of multiples of some number, namely the $\gcd$ of all possible cycle lengths. So, if we want to determine all possible cycle lengths, we just need to compute this gcd. Fix an arbitrary spanning tree rooted at $r$ and denote the length of the tree path from $r$ to $a$ by $\phi(a)$. If there is an edge of length $l$ from $a$ to $b$, then $r$ is in a cycle of length $\phi(a)+l-\phi(b)$. This cycle can be constructed by taking the walk along tree edges from $r$ to $a$, crossing the edge from $a$ to $b$, then walking backwards along tree edges from $b$ to $r$ (length $-\phi(b)$, using the trick above). Thus, we can make any cycle whose length is a multiple of $G=\gcd\{\phi(a)+l-\phi(b):(a,b,l)\in E\}$. 
It turns out these are the only possible lengths: if a walk from $a$ to $b$ has length $x$, then $x\equiv \phi(b)-\phi(a)\pmod G$. This can be proved by induction on the number of edges in the walk. Now that we know which cycle lengths are possible, we can answer the queries. An odometer can be reset if and only if $S$ plus some multiple of $T$ is a multiple of $G$, i.e., if and only if $S$ is a multiple of $\gcd(T,G)$. For implementation, the easiest way to get a spanning tree is by DFS; this can be done while computing strongly-connected components.
[ "dfs and similar", "graphs", "math", "number theory" ]
2,700
#include <cstdio> #include <vector> #include <cassert> #include <algorithm> #include <cmath> using ll = long long; std::vector<std::pair<int,int> > fwd[200005]; std::vector<std::pair<int,int> > rev[200005]; int vis[200005]; int cc[200005]; ll offset[200005]; int ncc; ll loop[200005]; std::vector<int> ord; ll gcd(ll a,ll b){ return b?gcd(b,a%b):a; } void dfs_fwd(int x){ vis[x]=1; for(auto e:fwd[x]){ int y=e.first; if(vis[y]!=1){ dfs_fwd(y); } } ord.push_back(x); } void dfs_rev(int x){ vis[x]=2; for(auto e:rev[x]){ int y=e.first,l=e.second; if(vis[y]!=2){ cc[y]=cc[x]; offset[y]=offset[x]+l; dfs_rev(y); }else if(cc[y]==cc[x]){ loop[cc[x]]=gcd(loop[cc[x]],std::llabs(offset[x]+l-offset[y])); } } } int main(){ int N,M; scanf("%d %d",&N,&M); for(int i=0;i<M;i++){ int A,B,L; scanf("%d %d %d",&A,&B,&L); A--,B--; fwd[A].push_back({B,L}); rev[B].push_back({A,L}); } for(int i=0;i<N;i++){ if(vis[i]!=1){ dfs_fwd(i); } } std::reverse(ord.begin(),ord.end()); for(int i:ord){ if(vis[i]!=2){ cc[i]=ncc++; offset[i]=0; dfs_rev(i); } } int Q; scanf("%d",&Q); for(int i=0;i<Q;i++){ int V,S,T; scanf("%d %d %d",&V,&S,&T); V--; assert(0<=S&&S<T); if(S%gcd(loop[cc[V]],T)==0){ printf("YES\n"); }else{ printf("NO\n"); } } }
1515
H
Phoenix and Bits
Phoenix loves playing with bits — specifically, by using the bitwise operations AND, OR, and XOR. He has $n$ integers $a_1, a_2, \dots, a_n$, and will perform $q$ of the following queries: - replace all numbers $a_i$ where $l \le a_i \le r$ with $a_i$ AND $x$; - replace all numbers $a_i$ where $l \le a_i \le r$ with $a_i$ OR $x$; - replace all numbers $a_i$ where $l \le a_i \le r$ with $a_i$ XOR $x$; - output how many \textbf{distinct} integers $a_i$ satisfy $l \le a_i \le r$. For each query, Phoenix is given $l$, $r$, and $x$. Note that he is considering the values of the numbers, not their indices.
We store the binary representation of all the numbers in a trie. To perform operations on a range, we split the trie to extract the range, perform the operation, and merge everything back. AND $x$ is equivalent to the sequence XOR $2^{20}-1$, OR $x\oplus(2^{20}-1)$, XOR $2^{20}-1$. XOR does not affect the number of numbers in a subtrie, so we can just lazily propagate it. This leaves just OR operations. Suppose we want to set the $k$th bit in a subtrie. If all numbers in it have the $k$th bit set, we do nothing; if no numbers in it have the $k$th bit set, we lazily XOR that bit. Thus, we can handle OR operations recursively, merging the children if the bit to be set is at the current level, and stopping when all bits to be set satisfy one of those two conditions. We can detect these conditions by storing, in every trie node, a mask of bits that are set in some leaf and a mask of bits not set in some leaf. This approach can be shown to be $O((n+q)\log^2C)$ by amortized analysis. Intuitively, the expensive OR operations will make the bits in a subtrie more similar. Define $\Phi_k$ to be the number of trie nodes that have some leaf with the $k$th bit set and some leaf with the $k$th bit not set. Define $\Phi_*$ to be $1+\log C$ times the total number of trie nodes. Define the total potential to be $\sum_{k=0}^{\log C-1}\Phi_k+\Phi_*$. This is always nonnegative and at most $O(n\log^2 C)$. Split operations create at most $\log C$ nodes, each adding at most $O(\log C)$ to the potential, so their amortized time complexity is $O(\log^2 C)$. Merge operations where one side is empty take $O(1)$. Recursive merge operations combine the two roots, the decrease in $\Phi_*$ paying for the possible increase in $\Phi_k$ at the root. (As usual, the recursive calls are not included in the cost as they are paid for by their own potential decrease.) 
An OR operation only recurses when there is some $k$ such that the subtrie has both a leaf with the $k$th bit set and a leaf with the $k$th bit not set. After the operation, all leaves will have the $k$th bit set. Thus, the recursion is paid for by the decrease in $\Phi_k$ at the root.
[ "bitmasks", "brute force", "data structures", "sortings" ]
3,500
#include <cstdio> #include <cassert> #include <utility> #include <functional> //numbers up to 2^MAXLOGX-1 const int MAXLOGX=20; template<int k> struct Trie{ Trie<k-1>* chd[2]; int cnt; int lazy; int has[2]; int get_cnt(){ assert(this!=NULL); return cnt; } int get_has(int d){ assert(this!=NULL); push(); assert(has[d]<(1<<(k+1))); return has[d]; } Trie(Trie<k-1>* l,Trie<k-1>* r):chd{l,r},cnt(0),lazy(0),has{0,0}{ if(l){ cnt+=l->get_cnt(); has[0]|=l->get_has(0)|(1<<k); has[1]|=l->get_has(1); } if(r){ cnt+=r->get_cnt(); has[0]|=r->get_has(0); has[1]|=r->get_has(1)|(1<<k); } assert(has[0]<(1<<(k+1))); assert(has[1]<(1<<(k+1))); } void push(){ assert(lazy<(1<<(k+1))); if(!lazy) return; //handle kth bit if(lazy&(1<<k)){ std::swap(chd[0],chd[1]); if((has[0]^has[1])&(1<<k)){ has[0]^=(1<<k); has[1]^=(1<<k); } lazy^=(1<<k); } //handle rest of bits int flip=(has[0]^has[1])&lazy; has[0]^=flip; has[1]^=flip; if(chd[0]) chd[0]->lazy^=lazy; if(chd[1]) chd[1]->lazy^=lazy; lazy=0; assert(has[0]<(1<<(k+1))); assert(has[1]<(1<<(k+1))); } }; template<> struct Trie<-1>{ int lazy; Trie():lazy(0){ } int get_cnt(){ assert(this!=NULL); return 1; } int get_has(int d){ assert(this!=NULL); return 0; } }; template<int k> Trie<k>* create(int x){ if(x&(1<<k)){ return new Trie<k>(NULL,create<k-1>(x)); }else{ return new Trie<k>(create<k-1>(x),NULL); } } template<> Trie<-1>* create(int x){ return new Trie<-1>(); } template<int k> std::pair<Trie<k-1>*,Trie<k-1>*> destruct(Trie<k>* a){ assert(a!=NULL); a->push(); auto res=std::make_pair(a->chd[0],a->chd[1]); delete a; return res; } template<int k> Trie<k>* join(Trie<k-1>* l,Trie<k-1>* r){ if(l==NULL&&r==NULL) return NULL; return new Trie<k>(l,r); } template<int k> Trie<k>* merge(Trie<k>* a,Trie<k>* b){ if(!a) return b; if(!b) return a; auto aa=destruct(a); auto bb=destruct(b); Trie<k-1>* l=merge<k-1>(aa.first,bb.first); Trie<k-1>* r=merge<k-1>(aa.second,bb.second); return join<k>(l,r); } template<> Trie<-1>* merge<-1>(Trie<-1>* a,Trie<-1>* b){ if(!a) 
return b; if(!b) return a; delete b; return a; } template<int k> //<thres and >=thres std::pair<Trie<k>*,Trie<k>*> split(Trie<k>* a,int thres){ if(a==NULL){ return {NULL,NULL}; } if(thres<=0) return {NULL,a}; if(thres>=(1<<(k+1))) return {a,NULL}; assert(k>=0); auto aa=destruct(a); if(thres<(1<<k)){ Trie<k-1>* l,*r; std::tie(l,r)=split<k-1>(aa.first,thres); return std::make_pair(join<k>(l,NULL),join<k>(r,aa.second)); }else if(thres>(1<<k)){ Trie<k-1>* l,*r; std::tie(l,r)=split<k-1>(aa.second,thres-(1<<k)); return std::make_pair(join<k>(aa.first,l),join<k>(NULL,r)); }else{ return std::make_pair(join<k>(aa.first,NULL),join<k>(NULL,aa.second)); } } template<> std::pair<Trie<-1>*,Trie<-1>*> split<-1>(Trie<-1>* a,int thres){ assert(0); } template<int k> Trie<k>* update(Trie<k>* a,int val){ if(a==NULL) return NULL; a->push(); assert(val<(1<<(k+1))); if((val&a->has[0]&a->has[1])==0){ a->lazy^=(val&a->has[0]); return a; } Trie<k-1>* l,*r; std::tie(l,r)=destruct(a); l=update<k-1>(l,val&~(1<<k)); r=update<k-1>(r,val&~(1<<k)); if(val&(1<<k)){ return join<k>(NULL,merge<k-1>(l,r)); }else{ return join<k>(l,r); } } template<> Trie<-1>* update<-1>(Trie<-1>* a,int val){ return a; } int main(){ Trie<MAXLOGX-1>* root=NULL; int N,Q; scanf("%d %d",&N,&Q); for(int i=0;i<N;i++){ int A; scanf("%d",&A); root=merge(root,create<MAXLOGX-1>(A)); } for(int i=0;i<Q;i++){ int T,L,R; scanf("%d %d %d",&T,&L,&R); Trie<MAXLOGX-1>* left,*right; std::tie(left,root)=split(root,L); std::tie(root,right)=split(root,R+1); if(T==4){ printf("%d\n",root?root->cnt:0); }else{ int X; scanf("%d",&X); if(root!=NULL){ if(T==1){ root->lazy^=((1<<MAXLOGX)-1); root=update(root,X^((1<<MAXLOGX)-1)); root->lazy^=((1<<MAXLOGX)-1); }else if(T==2){ root=update(root,X); }else if(T==3){ assert(X<(1<<MAXLOGX)); root->lazy^=X; } } } root=merge(root,left); root=merge(root,right); } }
1515
I
Phoenix and Diamonds
Phoenix wonders what it is like to rob diamonds from a jewelry store! There are $n$ types of diamonds. The $i$-th type has weight $w_i$ and value $v_i$. The store initially has $a_i$ diamonds of the $i$-th type. Each day, for $q$ days, one of the following will happen: - A new shipment of $k_i$ diamonds of type $d_i$ arrives. - The store sells $k_i$ diamonds of type $d_i$. - Phoenix wonders what will happen if he robs the store using a bag that can fit diamonds with total weight not exceeding $c_i$. If he greedily takes diamonds of the largest value that fit, how much value would be taken? If there are multiple diamonds with the largest value, he will take the one with minimum weight. If, of the diamonds with the largest value, there are multiple with the same minimum weight, he will take any of them. Of course, since Phoenix is a law-abiding citizen, this is all a thought experiment and he never actually robs any diamonds from the store. This means that queries of type $3$ do not affect the diamonds in the store.
Suppose the largest weight of an item is less than $2^k$. Call an item heavy if its weight is in the range $[2^{k-1},2^k)$ and light if its weight is in the range $(0,2^{k-1})$. Sort the items in decreasing order by value. As the thief moves left to right, his remaining capacity is nonincreasing. Consider the point where it drops below $2^{k-1}$. Before this point, he takes all light items. After this point, he takes no heavy items. The latter can be solved recursively by querying the same data structure only on light items, starting at the appropriate point. The former is a bit trickier. The thief will take every item until his capacity drops below $2^k$. This point can be found by binary searching on a segment tree of sums. After this point, he can only take at most one more heavy item. To find this item (if it exists), we can binary search on a segment tree storing range minimums of prefix sum of light items plus weight of heavy item. The first item that is small enough will be taken, if it exists. Either way, we've located all heavy items that will be taken and can recursively handle the light items. To handle updates, reserve space for all items offline and change their multiplicity as needed. When an item is updated, we can modify all segment trees that it participates in. This solution is $O((n+q)\log{n}\log{C})$.
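Before the segment-tree machinery, the type-$3$ query itself is just a greedy over item types sorted by (value descending, weight ascending). A brute-force reference of that greedy (names like `Gem` and `greedyRob` are illustrative, not from the solution code) is handy for validating the fast structure on small inputs:

```cpp
#include <vector>
#include <algorithm>

// Brute-force reference for one type-3 query: sort types by value desc,
// weight asc; take as many diamonds of each type as still fit in the bag.
// O(number of types) per query -- only for cross-checking on small inputs.
struct Gem { long long w, v, cnt; };

long long greedyRob(std::vector<Gem> gems, long long cap) {
    std::sort(gems.begin(), gems.end(), [](const Gem& a, const Gem& b) {
        return a.v != b.v ? a.v > b.v : a.w < b.w;
    });
    long long value = 0;
    for (const Gem& g : gems) {
        long long take = std::min(g.cnt, cap / g.w);  // all of this type that fit
        cap -= take * g.w;
        value += take * g.v;
    }
    return value;
}
```

The fast solution must agree with this function on every query it answers.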
[ "binary search", "data structures", "sortings" ]
3,400
#include <cstdio> #include <algorithm> #include <functional> #include <utility> #include <numeric> #include <cassert> using ll=long long; const ll INF=1e18+7; struct Diamond{ int w,v,t; ll a; }diamonds[200005]; int index[200005]; int N; struct Node{ ll sum_w[31];//sum of weights of all diamonds with weight <2^k ll sum_v[31];//sum of values of all diamonds with weight <2^k ll one_w[31];//sum of weights of light+first heavy when restricted to <2^k ll one_v[31];//sum of values of light+first heavy when restricted to <2^k }st[800005]; void rebuild(int w,int L,int R,int a,int b){ if(a>=R||b<=L) return; if(R-L==1){ for(int k=1;k<=30;k++){ st[w].sum_w[k]=diamonds[L].a*diamonds[L].w*(diamonds[L].w<(1<<k)); st[w].sum_v[k]=diamonds[L].a*diamonds[L].v*(diamonds[L].w<(1<<k)); st[w].one_w[k]=INF;//query capacity never exceed INF st[w].one_v[k]=INF; if(diamonds[L].w>=(1<<(k-1))&&diamonds[L].w<(1<<k)&&diamonds[L].a>0){ st[w].one_w[k]=diamonds[L].w; st[w].one_v[k]=diamonds[L].v; } } }else{ int M=(L+R)/2; rebuild(w*2+1,L,M,a,b); rebuild(w*2+2,M,R,a,b); for(int k=1;k<=30;k++){ st[w].sum_w[k]=st[w*2+1].sum_w[k]+st[w*2+2].sum_w[k]; st[w].sum_v[k]=st[w*2+1].sum_v[k]+st[w*2+2].sum_v[k]; if(st[w*2+1].one_w[k]<st[w*2+1].sum_w[k-1]+st[w*2+2].one_w[k]){ st[w].one_w[k]=st[w*2+1].one_w[k]; st[w].one_v[k]=st[w*2+1].one_v[k]; }else{ st[w].one_w[k]=st[w*2+1].sum_w[k-1]+st[w*2+2].one_w[k]; st[w].one_v[k]=st[w*2+1].sum_v[k-1]+st[w*2+2].one_v[k]; } } } } //only consider weights <2^k //take maximal prefix possible void query_all(int w,int L,int R,int& i,int k,ll& cap,ll& value){ assert(i>=L&&i<=R); assert(R-L>0); if(i==R) return; if(i==L&&st[w].sum_w[k]<=cap){ cap-=st[w].sum_w[k]; value+=st[w].sum_v[k]; i=R; }else if(R-L>1){ int M=(L+R)/2; if(i<M){ query_all(w*2+1,L,M,i,k,cap,value); } if(i>=M){ query_all(w*2+2,M,R,i,k,cap,value); } } } std::array<ll,2> query_one_range_simpl(int w,int L,int R,int a,int b,int k){ if(a>=R||b<=L) return {INF,0}; if(a<=L&&b>=R){ return {st[w].one_w[k],st[w].sum_w[k-1]}; 
}else{ int M=(L+R)/2; std::array<ll,2> lsum,rsum; lsum=query_one_range_simpl(w*2+1,L,M,a,b,k); rsum=query_one_range_simpl(w*2+2,M,R,a,b,k); if(lsum[0]<lsum[1]+rsum[0]){ return {lsum[0],lsum[1]+rsum[1]}; }else{ return {lsum[1]+rsum[0],lsum[1]+rsum[1]}; } } } std::array<ll,4> query_one_range(int w,int L,int R,int a,int b,int k){ if(a>=R||b<=L) return {INF,INF,0,0}; if(a<=L&&b>=R){ return {st[w].one_w[k],st[w].one_v[k],st[w].sum_w[k-1],st[w].sum_v[k-1]}; }else{ int M=(L+R)/2; std::array<ll,4> lsum,rsum; lsum=query_one_range(w*2+1,L,M,a,b,k); rsum=query_one_range(w*2+2,M,R,a,b,k); if(lsum[0]<lsum[2]+rsum[0]){ return {lsum[0],lsum[1],lsum[2]+rsum[2],lsum[3]+rsum[3]}; }else{ return {lsum[2]+rsum[0],lsum[3]+rsum[1],lsum[2]+rsum[2],lsum[3]+rsum[3]}; } } } //returns min j such that one_w[i..j) <= cap, or -1 if none exist //reduce cap by weight of light in [max(i,L),R) int query_one_range_search_(int w,int L,int R,int& i,int k,ll& cap){ assert(i>=L&&i<=R); assert(R-L>0); if(i==R) return -1; if(i==L&&st[w].one_w[k]>cap){ cap-=st[w].sum_w[k-1]; i=R; return -1; }else if(R-L==1){ assert(i==L); assert(st[w].one_w[k]<=cap); cap-=st[w].sum_w[k-1]; i=R; return R; }else{ int M=(L+R)/2; int res=-1; if(i<M){ res=query_one_range_search_(w*2+1,L,M,i,k,cap); } if(res!=-1) return res; if(i>=M){ res=query_one_range_search_(w*2+2,M,R,i,k,cap); } return res; } } int query_one_range_search(int i,int k,ll cap){ //note this copy of cap will be modified return query_one_range_search_(0,0,N,i,k,cap); } void query_one(int& L,int k,ll& cap,ll& value){ int high=query_one_range_search(L,k,cap); //v[high]<=cap if(high!=-1){ auto v=query_one_range(0,0,N,L,high,k); L=high; cap-=v[0]; value+=v[1]; } } ll query(ll cap){ ll value=0; int i=0; for(int k=30;k>0;k--){ query_all(0,0,N,i,k,cap,value); if(i==N) break; ll take=std::min(diamonds[i].a,cap/diamonds[i].w); cap-=take*diamonds[i].w; value+=take*diamonds[i].v; i++; query_one(i,k,cap,value); } return value; } int main(){ int Q; scanf("%d %d",&N,&Q); 
for(int i=0;i<N;i++){ scanf("%lld %d %d",&diamonds[i].a,&diamonds[i].w,&diamonds[i].v); diamonds[i].t=i; } std::sort(diamonds,diamonds+N,[](Diamond x,Diamond y){ return (x.v!=y.v)?(x.v>y.v):(x.w<y.w); }); for(int i=0;i<N;i++){ index[diamonds[i].t]=i; } rebuild(0,0,N,0,N); for(int i=0;i<Q;i++){ int T; scanf("%d",&T); if(T==1){ int K,D; scanf("%d %d",&K,&D); D--; diamonds[index[D]].a+=K; rebuild(0,0,N,index[D],index[D]+1); }else if(T==2){ int K,D; scanf("%d %d",&K,&D); D--; diamonds[index[D]].a-=K; assert(diamonds[index[D]].a>=0); rebuild(0,0,N,index[D],index[D]+1); }else{ ll C; scanf("%lld",&C); printf("%lld\n",query(C)); } } }
1516
A
Tit for Tat
Given an array $a$ of length $n$, you can do at most $k$ operations of the following type on it: - choose $2$ different elements in the array, add $1$ to the first, and subtract $1$ from the second. However, all the elements of $a$ have to remain non-negative after this operation. What is lexicographically the smallest array you can obtain? An array $x$ is lexicographically smaller than an array $y$ if there exists an index $i$ such that $x_i<y_i$, and $x_j=y_j$ for all $1 \le j < i$. Less formally, at the first index $i$ in which they differ, $x_i<y_i$.
The general approach to minimizing an array lexicographically is to try to make the first element as small as possible, then the second element, and so on. So greedily, in each operation, we'll pick the first non-zero element and subtract $1$ from it, and we'll add that $1$ to the very last element. You can make the implementation faster by doing as many operations as you can on the first non-zero element simultaneously, but it's not necessary.
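The batched form of this greedy (drain each prefix element in one step instead of one unit at a time) can be sketched as follows; the function name is mine, and `long long` for $k$ is an assumption about the constraints:

```cpp
#include <vector>
#include <algorithm>

// Lexicographically smallest array after at most k unit moves:
// drain the earliest non-zero elements into the last element, in batches.
std::vector<int> lexSmallest(std::vector<int> a, long long k) {
    int n = a.size();
    for (int i = 0; i + 1 < n && k > 0; i++) {
        long long move = std::min<long long>(a[i], k);  // batch all moves from a[i]
        a[i] -= move;
        a[n - 1] += move;
        k -= move;
    }
    return a;
}
```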
[ "greedy" ]
800
"#include <bits/stdc++.h>\nusing namespace std;\nint a[105];\nint main()\n{\n\tint t;\n\tscanf(\"%d\",&t);\n\twhile (t--)\n\t{\n\t\tint n,k;\n\t\tscanf(\"%d%d\",&n,&k);\n\t\tfor (int i=0;i<n;i++)\n\t\tscanf(\"%d\",&a[i]);\n\t\tfor (int i=0;i<n-1;i++)\n\t\t{\n\t\t\tif (a[i]<k)\n\t\t\t{\n\t\t\t\tk-=a[i];\n\t\t\t\ta[n-1]+=a[i];\n\t\t\t\ta[i]=0;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\ta[i]-=k;\n\t\t\t\ta[n-1]+=k;\n\t\t\t\tk=0;\n\t\t\t}\n\t\t}\n\t\tfor (int i=0;i<n;i++)\n\t\tprintf(\"%d \",a[i]);\n\t\tprintf(\"\\n\");\n\t}\n}\n"
1516
B
AGAGA XOOORRR
Baby Ehab is known for his love for a certain operation. He has an array $a$ of length $n$, and he decided to keep doing the following operation on it: - he picks $2$ adjacent elements; he then removes them and places a single integer in their place: their bitwise XOR. Note that the length of the array decreases by one. Now he asks you if he can make all elements of the array equal. Since babies like to make your life harder, he requires that you leave at least $2$ elements remaining.
So let's try to understand what the final array looks like in terms of the initial array. The best way to see this is to look at the process backwards. Basically, start with the final array, and keep replacing an element with the $2$ elements that xor-ed down to it, until you get the initial array. You'll see that the first element turns into a prefix, the second element turns into a subarray that follows this prefix, and so on. Hence, the whole process of moving from the initial to the final array is like we divide the array into pieces, and then replace each piece with its xor, and we want these xors to be equal. A nice observation is: we need at most $3$ pieces. That's because if we have $4$ or more pieces, we can take $3$ pieces and merge them into one. Its xor will be the same, but the total piece count will decrease by $2$. Now, checking if you can divide it into $2$ or $3$ pieces is a simple task that can be done by bruteforce. You can iterate over the positions you'll split the array, and then check the xors are equal using a prefix-xor array or any other method you prefer. Additional idea: for $2$ pieces, you don't even need bruteforce. It's sufficient to check the xor of the whole array is $0$. Hint to see this: write the bruteforce.
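The bruteforce over split positions with a prefix-xor array might look like this (a sketch; identifiers are illustrative):

```cpp
#include <vector>

// True iff the array can be cut into 2 or 3 contiguous pieces of equal xor:
// 2 pieces iff the total xor is 0; 3 pieces checked by bruteforcing both cuts.
bool canSplit(const std::vector<int>& a) {
    int n = a.size();
    std::vector<int> pre(n + 1, 0);
    for (int i = 0; i < n; i++) pre[i + 1] = pre[i] ^ a[i];
    if (pre[n] == 0) return true;                  // 2 pieces suffice
    for (int i = 1; i <= n; i++)                   // pieces [1..i], (i..j], (j..n]
        for (int j = i + 1; j < n; j++)
            if (pre[i] == (pre[j] ^ pre[i]) && pre[i] == (pre[n] ^ pre[j]))
                return true;
    return false;
}
```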
[ "bitmasks", "brute force", "dp", "greedy" ]
1,500
"#include <bits/stdc++.h>\nusing namespace std;\nint pre[2005];\nint main()\n{\n\tint t;\n\tscanf(\"%d\",&t);\n\twhile (t--)\n\t{\n\t\tint n;\n\t\tscanf(\"%d\",&n);\n\t\tfor (int i=1;i<=n;i++)\n\t\t{\n\t\t\tint a;\n\t\t\tscanf(\"%d\",&a);\n\t\t\tpre[i]=(pre[i-1]^a);\n\t\t}\n\t\tbool yes=!pre[n];\n\t\tfor (int i=1;i<=n;i++)\n\t\t{\n\t\t\tfor (int j=i+1;j<n;j++)\n\t\t\tyes|=(pre[i]==(pre[j]^pre[i]) && pre[i]==(pre[n]^pre[j]));\n\t\t}\n\t\tputs(yes? \"YES\":\"NO\");\n\t}\n}\n"
1516
C
Baby Ehab Partitions Again
Baby Ehab was toying around with arrays. He has an array $a$ of length $n$. He defines an array to be good if there's no way to partition it into $2$ subsequences such that the sum of the elements in the first is equal to the sum of the elements in the second. Now he wants to remove the minimum number of elements in $a$ so that it becomes a good array. Can you help him? A sequence $b$ is a subsequence of an array $a$ if $b$ can be obtained from $a$ by deleting some (possibly zero or all) elements. A partitioning of an array is a way to divide it into $2$ subsequences such that every element belongs to exactly one subsequence, so you must use all the elements, and you can't share any elements.
First of all, let's check if the array is already good. This can be done with knapsack dp. If it is, the answer is $0$. If it isn't, I claim you can always remove one element to make it good, and here's how to find it: Since the array can be partitioned, its sum is even. So if we remove an odd element, it will be odd, and there will be no way to partition it. If there's no odd element, then all elements are even. But then, you can divide all the elements by $2$ without changing the answer. Why? Because a partitioning in the new array after dividing everything by $2$ is a partitioning in the original array and vice versa. We just re-scaled everything. So, while all the elements are even, you can keep dividing by $2$, until one of the elements becomes odd. Remove it and you're done. If you want the solution in one sentence, remove the element with the smallest possible least significant bit. Alternatively, by very similar reasoning, you can start by dividing the whole array by its $\gcd$ and remove any odd element (which must exist because the $\gcd$ is $1$), but I think this doesn't give as much insight ;)
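A minimal sketch of this solution using a bitset knapsack for the "is it already good" check (the helper name and the optional output pointer are my own choices):

```cpp
#include <vector>
#include <bitset>

// Returns 0 if the array is already good (no equal-sum partition exists),
// else 1; in the latter case *removeIdx is set to the element to remove:
// the one with the fewest trailing zero bits (smallest least significant bit).
int minRemovals(const std::vector<int>& a, int* removeIdx = nullptr) {
    int sum = 0;
    for (int x : a) sum += x;
    std::bitset<200005> reach;                       // reachable subset sums
    reach[0] = 1;
    for (int x : a) reach |= reach << x;
    if (sum % 2 != 0 || !reach[sum / 2]) return 0;   // no equal partition
    int best = -1;
    for (int i = 0; i < (int)a.size(); i++)
        if (best == -1 || __builtin_ctz(a[i]) < __builtin_ctz(a[best])) best = i;
    if (removeIdx) *removeIdx = best;
    return 1;
}
```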
[ "bitmasks", "constructive algorithms", "dp", "math" ]
1,700
"#include <bits/stdc++.h>\nusing namespace std;\nint n;\nbool bad(vector<int> v)\n{\n\tint s=0;\n\tfor (int i:v)\n\ts+=i;\n\tif (s%2)\n\treturn 0;\n\tbitset<200005> b;\n\tb[0]=1;\n\tfor (int i:v)\n\tb|=(b<<i);\n\treturn b[s/2];\n}\nint main()\n{\n\tscanf(\"%d\",&n);\n\tvector<int> v(n);\n\tfor (int i=0;i<n;i++)\n\tscanf(\"%d\",&v[i]);\n\tif (bad(v))\n\t{\n\t\tpair<int,int> mn(20,0);\n\t\tfor (int i=0;i<n;i++)\n\t\tmn=min(mn,make_pair(__builtin_ctz(v[i]),i+1));\n\t\tprintf(\"1\\n%d\",mn.second);\n\t}\n\telse\n\tprintf(\"0\");\n}"
1516
D
Cut
This time Baby Ehab will only cut and not stick. He starts with a piece of paper with an array $a$ of length $n$ written on it, and then he does the following: - he picks a range $(l, r)$ and cuts the subsegment $a_l, a_{l + 1}, \ldots, a_r$ out, removing the rest of the array. - he then cuts this range into multiple subranges. - to add a number theory spice to it, he requires that the elements of every subrange must have their product equal to their least common multiple (LCM). Formally, he partitions the elements of $a_l, a_{l + 1}, \ldots, a_r$ into contiguous subarrays such that the product of every subarray is equal to its LCM. Now, for $q$ independent ranges $(l, r)$, tell Baby Ehab the minimum number of subarrays he needs.
Let's understand what "product=LCM" means. Let's look at any prime $p$. Then, the product operation adds up its exponent in all the numbers, while the LCM operation takes the maximum exponent. Hence, the only way they're equal is if every prime divides at most one number in the range. Another way to think about it is that every pair of numbers is coprime. Now, we have the following greedy algorithm: suppose we start at index $l$; we'll keep extending our first subrange while the condition (every pair of numbers is coprime) is satisfied. We clearly don't gain anything by stopping when we can extend, since every new element just comes with new restrictions. Once we're unable to extend our subrange, we'll start a new subrange, until we reach index $r$. Now, for every index $l$, let's define $go_l$ to be the first index that will make the condition break when we add it to the subrange. Then, our algorithm is equivalent to starting with an index $cur=l$, then replacing $cur$ with $go_{cur}$ until we exceed index $r$. The number of steps it takes is our answer. We now have $2$ subproblems to solve: To calculate $go_l$, let's iterate over $a$ from the end to the beginning. While at index $l$, let's iterate over the prime divisors of $a_l$. Then, for each prime, let's get the next element this prime divides. We can store that in an array that we update as we go. If we take the minimum across these occurrences, we'll get the index of the next element that isn't coprime to $a_l$. Let's set $go_l$ to that index. However, what if $2$ other elements, that don't include $l$, are the ones that aren't coprime? A clever way to get around this is to minimize $go_l$ with $go_{l+1}$, since $go_{l+1}$ covers all the elements coming after $l$. This is a pretty standard problem solvable with the binary lifting technique. The idea is to perform many jumps at a time, instead of $1$. Let's calculate $dp[i][l]$: the index we'll end up at if we keep replacing $l$ with $go_l$ $2^i$ times. 
Clearly, $dp[i][l]=dp[i-1][dp[i-1][l]]$ since $2^{i-1}+2^{i-1}=2^i$. Now, to calculate how many steps it takes from index $l$ to index $r$, let's iterate over the numbers from $log(n)$ to $0$. Let the current be $i$. If $dp[i][l]$ is less than or equal to $r$, we can jump $2^i$ steps at once, so we'll make $l$ equal to $dp[i][l]$ and add $2^i$ to the answer. At the end, we'll make one more jump.
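Assuming the $go$ array has already been computed, the binary-lifting jump count can be sketched like this (the 1-indexed layout with a sentinel value $n+1$ at position $n+1$ is my assumption; it matches the style of the model solution):

```cpp
#include <vector>

// go[l] = first index that breaks the coprime condition for a block starting
// at l, for l in 1..n+1, with go[n+1] == n+1 as a fixed-point sentinel.
// Returns the minimum number of blocks needed to cover [l, r].
int countBlocks(const std::vector<int>& go, int l, int r) {
    int n = go.size() - 2;                       // go holds indices 0..n+1
    int LOG = 1;
    while ((1 << LOG) <= n) LOG++;
    std::vector<std::vector<int>> up(LOG, std::vector<int>(n + 2));
    up[0] = go;                                  // 2^0 jumps
    for (int i = 1; i < LOG; i++)                // 2^i = 2^(i-1) + 2^(i-1)
        for (int j = 1; j <= n + 1; j++)
            up[i][j] = up[i - 1][up[i - 1][j]];
    int ans = 0;
    for (int i = LOG - 1; i >= 0; i--)
        if (up[i][l] <= r) { ans += 1 << i; l = up[i][l]; }
    return ans + 1;                              // one final block passes r
}
```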
[ "binary search", "data structures", "dp", "graphs", "number theory", "two pointers" ]
2,100
"#include <bits/stdc++.h>\nusing namespace std;\n#define MX 100000\nvector<int> p[100005];\nint a[100005],nex[100005],dp[20][100005];\nint main()\n{\n int n,q;\n\tscanf(\"%d%d\",&n,&q);\n\tfor (int i=1;i<=n;i++)\n\tscanf(\"%d\",&a[i]);\n\tfor (int i=2;i<=MX;i++)\n\t{\n\t\tif (p[i].empty())\n\t\t{\n\t\t nex[i]=n+1;\n\t\t\tfor (int j=i;j<=MX;j+=i)\n\t\t\tp[j].push_back(i);\n\t\t}\n\t}\n\tdp[0][n+1]=n+1;\n\tfor (int i=n;i>0;i--)\n\t{\n\t\tdp[0][i]=dp[0][i+1];\n\t\tfor (int j:p[a[i]])\n\t\t{\n\t\t\tdp[0][i]=min(dp[0][i],nex[j]);\n\t\t\tnex[j]=i;\n\t\t}\n\t}\n\tfor (int i=1;i<20;i++)\n\t{\n\t\tfor (int j=1;j<=n+1;j++)\n\t\tdp[i][j]=dp[i-1][dp[i-1][j]];\n\t}\n\twhile (q--)\n\t{\n\t\tint l,r,ans=0;\n\t\tscanf(\"%d%d\",&l,&r);\n\t\tfor (int i=19;i>=0;i--)\n\t\t{\n\t\t\tif (dp[i][l]<=r)\n\t\t\t{\n\t\t\t\tans+=(1<<i);\n\t\t\t\tl=dp[i][l];\n\t\t\t}\n\t\t}\n\t\tprintf(\"%d\\n\",ans+1);\n\t}\n}\n"
1516
E
Baby Ehab Plays with Permutations
This time around, Baby Ehab will play with permutations. He has $n$ cubes arranged in a row, with numbers from $1$ to $n$ written on them. He'll make \textbf{exactly} $j$ operations. In each operation, he'll pick up $2$ cubes and switch their positions. He's wondering: how many different sequences of cubes can I have at the end? Since Baby Ehab is a turbulent person, he doesn't know how many operations he'll make, so he wants the answer for every possible $j$ between $1$ and $k$.
Let's think about the problem backwards. Let's try to count the number of permutations which need exactly $j$ swaps to be sorted. To do this, I first need to refresh your mind (or maybe introduce you) to a greedy algorithm that does the minimum number of swaps to sort a permutation. Look at the last mismatch in the permutation, let it be at index $i$ and $p_i=v$. We'll look at where $v$ is at in the permutation, and swap index $i$ with that index so that $v$ is in the correct position. Basically, we look at the last mismatch and correct it immediately. We can build a slow $dp$ on this greedy algorithm: let $dp[n][j]$ denote the number of permutations of length $n$ which need $j$ swaps to be sorted. If element $n$ is in position $n$, we can just ignore it and focus on the first $n-1$ elements, so that moves us to $dp[n-1][j]$. If it isn't, then we'll swap the element at position $n$ with wherever $n$ is at so that $n$ becomes in the right position, by the greedy algorithm. There are $n-1$ positions index $n$ can be at, and after the swap, you can have an arbitrary permutation of length $n-1$ that needs to be sorted; that gives us $n-1$ ways to go to $dp[n-1][j-1]$. Combining, we get that $dp[n][j]=dp[n-1][j]+(n-1)*dp[n-1][j-1]$. Next, notice that you don't have to do the minimum number of swaps in the original problem. You can swap $2$ indices and swap them back. Also, it's well-known that you can either get to a permutation with an even number of swaps or an odd number, but never both (see this problem.) So now, after you calculate your $dp$, the number of permutations you can get to after $j$ swaps is $dp[n][j]+dp[n][j-2]+dp[n][j-4]+...$. Now, let's solve for $n \le 10^9$. Notice that after $k$ swaps, only $2k$ indices can move from their place, which is pretty small. That gives you a pretty intuitive idea: let's fix a length $i$ and then a subset of length $i$ that will move around. 
The number of ways to pick this subset is $\binom{n}{i}$, and the number of ways to permute it so that we need $j$ swaps is $dp[i][j]$. So we should just multiply them together and sum up, right? Wrong. The problem is that double counting will happen. For example, look at the sorted permutation. This way, you count it for every single subset when $j=0$, but you should only count it once. A really nice solution is: force every element in your subset to move from its position. How does this solve the double counting? Suppose $2$ different subsets give you the same permutation; then, there must be an index in one and not in the other. But how can they give you the same permutation if that index moves in one and doesn't move in the other? So to mend our solution, we need to create $newdp[n][j]$ denoting the number of permutations of length $n$ which need $j$ swaps to be sorted, and every single element moves from its position (there's no $p_i=i$.) How do we calculate it? One way is to do inclusion-exclusion on the $dp$ we already have! Suppose I start with all permutations which need $j$ swaps. Then, I fix one index, and I try to subtract the number of permutations which need $j$ swaps to be sorted after that index is fixed. There are $n$ ways to choose the index, and $dp[n-1][j]$ permutations, so we subtract $n*dp[n-1][j]$. But now permutations with $2$ fixed points are excluded twice, so we'll include them, and so on and so forth. In general, we'll fix $f$ indices in the permutation. There are $\binom{n}{f}$ ways to pick them, and then there are $dp[n-f][j]$ ways to pick the rest so that we need $j$ swaps. Hence: $newdp[n][j]=\sum\limits_{f=0}^{n} (-1)^f*\binom{n}{f}*dp[n-f][j]$. Phew! If you have no idea what the hell I was talking about in the inclusion-exclusion part, try this problem first. Let $[l;r]$ denote the set of the integers between $l$ and $r$ (inclusive.) Let's try to calculate $dp[2n]$ from $dp[n]$. 
To do that, we need to understand our $dp$ a bit further. Recall that $dp[n][j]=dp[n-1][j]+(n-1)*dp[n-1][j-1]$. Let's think about what happens as you go down the recurrence. When you're at index $n$, either you skip it and move to $n-1$, or you multiply by $n-1$. But you do that exactly $j$ times, since $j$ decreases every time you do it. So, this $dp$ basically iterates over every subset of $[0;n-1]$ of size $j$, takes its product, and sums up! $dp[n][j]=\sum\limits_{s \subset [0;n-1],|s|=j} s_1*s_2 \ldots *s_j$ Now, let's use this new understanding to try and calculate $dp[2n]$ from $dp[n]$. Suppose I pick a subset of $[0;2n-1]$. Then, a part of it will be in $[0;n-1]$ and a part will be in $[n;2n-1]$. I'll call the first part small and the second part big. So, to account for every subset of length $j$, take every subset of length $j_2$ of the big elements, multiply it by a subset of length $j-j_2$ of the small elements, and sum up. This is just normal polynomial multiplication! Let $big[n][j]$ denote the sum of the products of the subsets of length $j$ of the big elements. That is: $big[n][j]=\sum\limits_{s \subset [n;2n-1],|s|=j} s_1*s_2 \ldots *s_j$ Then, the polynomial multiplication between $dp[n]$ and $big[n]$ gives $dp[2n]$! How do we calculate $big$ though? Notice that every big element is a small element plus $n$. So we can instead pick a subset of the small elements and add $n$ to each element in it. This transforms the formula to: $big[n][j]=\sum\limits_{s \subset [0;n-1],|s|=j} (s_1+n)*(s_2+n) \ldots *(s_j+n)$ Let's expand this summand. What will every term in the expansion look like? Well, it will be a subset of length $l$ from our subset of length $j$, multiplied by $n^{j-l}$. Now, let's think about this backwards. Instead of picking a subset of length $j$ and then picking a subset of length $l$ from it, let's pick the subset of length $l$ first, and then see the number of ways to expand it into a subset of length $j$. 
Well, there are $n-l$ elements left, and you should pick $j-l$ elements from them, so there are $\binom{n-l}{j-l}$ ways to expand. That gives us: $big[n][j]=\sum\limits_{l=0}^{j} \binom{n-l}{j-l}*n^{j-l}* \sum\limits_{s \subset [0;n-1],|s|=l} s_1*s_2 \ldots *s_l$ But the interior sum is just $dp[n][l]$! Hurray! So we can finally calculate $big[n][j]$ to be: $big[n][j]=\sum\limits_{l=0}^{j} \binom{n-l}{j-l}*n^{j-l}*dp[n][l]$ And then polynomial multiplication with $dp[n]$ itself would give $dp[2n]$. Since you can move to $dp[n+1]$ and to $dp[2n]$, you can reach any $n$ you want in $O(\log n)$ iterations using its binary representation.
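The basic recurrence $dp[n][j]=dp[n-1][j]+(n-1)\cdot dp[n-1][j-1]$ is easy to check directly on small sizes (sketch without a modulus; the function name is mine):

```cpp
#include <vector>

// dp[i][j] = number of permutations of length i needing exactly j swaps
// to sort. Small sizes only: values grow quickly without a modulus.
std::vector<std::vector<long long>> swapDp(int n, int k) {
    std::vector<std::vector<long long>> dp(n + 1, std::vector<long long>(k + 1, 0));
    for (int i = 0; i <= n; i++) dp[i][0] = 1;   // only the identity needs 0 swaps
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= k; j++)
            dp[i][j] = dp[i - 1][j] + (long long)(i - 1) * dp[i - 1][j - 1];
    return dp;
}
```

For $n=3$: one identity, three transpositions needing one swap, two $3$-cycles needing two swaps.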
[ "combinatorics", "dp", "math" ]
2,500
"#include <bits/stdc++.h>\nusing namespace std;\n#define MX 200\n#define mod 1000000007\nlong long inv[MX+5];\nlong long ncr(int n,int r)\n{\n\tlong long ret=1;\n\tfor (int i=n-r+1;i<=n;i++)\n\tret=(ret*i)%mod;\n\tfor (int i=1;i<=r;i++)\n\tret=(ret*inv[i])%mod;\n\treturn ret;\n}\nint main()\n{\n\tinv[1]=1;\n\tfor (int i=2;i<=MX;i++)\n\tinv[i]=inv[mod%i]*(mod-mod/i)%mod;\n\tint n,k;\n\tscanf(\"%d%d\",&n,&k);\n\tvector<long long> dp(k+1,0);\n\tdp[0]=1;\n\tint curn=1;\n\tfor (int i=30-__builtin_clz(n);i>=0;i--)\n\t{\n\t\tvector<long long> big(k+1,0),pw(k+1),tmp(k+1,0);\n\t\tpw[0]=1;\n\t\tfor (int j=1;j<=k;j++)\n\t\tpw[j]=(curn*pw[j-1])%mod;\n\t\tfor (int j=0;j<=min(k,curn);j++)\n\t\t{\n\t\t\tfor (int l=0;l<=j;l++)\n\t\t\tbig[j]=(big[j]+ncr(curn-l,j-l)*dp[l]%mod*pw[j-l])%mod;\n\t\t}\n\t\tfor (int js=0;js<=k;js++)\n\t\t{\n\t\t\tfor (int jb=0;jb<=k;jb++)\n\t\t\t{\n\t\t\t\tif (js+jb<=k)\n\t\t\t\ttmp[js+jb]=(tmp[js+jb]+dp[js]*big[jb])%mod;\n\t\t\t}\n\t\t}\n\t\tdp=tmp;\n\t\tcurn*=2;\n\t\tif (n&(1<<i))\n\t\t{\n\t\t\tvector<long long> tmp(k+1);\n\t\t\ttmp[0]=1;\n\t\t\tfor (int j=1;j<=k;j++)\n\t\t\ttmp[j]=(dp[j]+curn*dp[j-1])%mod;\n\t\t\tdp=tmp;\n\t\t\tcurn++;\n\t\t}\n\t}\n\tint ans[]={1,0};\n\tfor (int j=1;j<=k;j++)\n\t{\n\t\tans[j%2]=(ans[j%2]+dp[j])%mod;\n\t\tprintf(\"%d \",ans[j%2]);\n\t}\n\tprintf(\"\\n\");\n}\n"
1517
A
Sum of 2050
A number is called 2050-number if it is $2050$, $20500$, ..., ($2050 \cdot 10^k$ for integer $k \ge 0$). Given a number $n$, you are asked to represent $n$ as the sum of some (not necessarily distinct) 2050-numbers. Compute the minimum number of 2050-numbers required for that.
First, check whether $n$ is a multiple of $2050$. If it is not, the answer is always $-1$. Otherwise, divide $n$ by $2050$; the problem becomes representing $n$ as a sum of powers of $10$, so the answer is the sum of its digits in decimal representation.
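In code, the whole solution is a divisibility check plus a digit sum (sketch; the function name is mine):

```cpp
// Minimum count of 2050-numbers summing to n, or -1 if impossible:
// n must be divisible by 2050, then the answer is the digit sum of n/2050.
long long solve2050(long long n) {
    if (n % 2050 != 0) return -1;
    n /= 2050;
    long long s = 0;
    while (n > 0) { s += n % 10; n /= 10; }
    return s;
}
```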
[ "greedy", "math" ]
800
null
1517
B
Morning Jogging
The 2050 volunteers are organizing the "Run! Chase the Rising Sun" activity. Starting on Apr 25 at 7:30 am, runners will complete the 6km trail around the Yunqi town. There are $n+1$ checkpoints on the trail. They are numbered by $0$, $1$, ..., $n$. A runner must start at checkpoint $0$ and finish at checkpoint $n$. No checkpoint is skippable — he must run from checkpoint $0$ to checkpoint $1$, then from checkpoint $1$ to checkpoint $2$ and so on. Look at the picture in notes section for clarification. Between any two adjacent checkpoints, there are $m$ different paths to choose. For any $1\le i\le n$, to run from checkpoint $i-1$ to checkpoint $i$, a runner can choose exactly one from the $m$ possible paths. The length of the $j$-th path between checkpoint $i-1$ and $i$ is $b_{i,j}$ for any $1\le j\le m$ and $1\le i\le n$. To test the trail, we have $m$ runners. Each runner must run from the checkpoint $0$ to the checkpoint $n$ once, visiting all the checkpoints. Every path between every pair of adjacent checkpoints needs to be run by \textbf{exactly one} runner. If a runner chooses the path of length $l_i$ between checkpoint $i-1$ and $i$ ($1\le i\le n$), his tiredness is $$\min_{i=1}^n l_i,$$ i. e. the minimum length of the paths he takes. Please arrange the paths of the $m$ runners to minimize the sum of tiredness of them.
The minimum sum is the sum of the $m$ smallest of all $nm$ numbers. To construct the answer, we can just mark these $m$ smallest numbers and put them in $m$ different columns. A possible way: for each row, sort all numbers from small to large, and rotate the marked numbers of this row into unmarked columns. For example (marked cells shown, $?$ for unmarked):
1????
?23??
???45
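The claimed optimum is easy to compute even before doing the construction (a sketch; it returns only the sum, not the arrangement, and the function name is mine):

```cpp
#include <vector>
#include <algorithm>

// Minimum total tiredness: the sum of the m smallest of all n*m path
// lengths, since each of them can be routed to a distinct runner.
long long minTiredness(const std::vector<std::vector<int>>& b, int m) {
    std::vector<int> all;
    for (const auto& row : b)
        for (int x : row) all.push_back(x);
    std::sort(all.begin(), all.end());
    long long s = 0;
    for (int j = 0; j < m; j++) s += all[j];
    return s;
}
```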
[ "constructive algorithms", "greedy", "sortings" ]
1,200
null
1517
C
Fillomino 2
Fillomino is a classic logic puzzle. (You do not need to know Fillomino in order to solve this problem.) In one classroom in Yunqi town, some volunteers are playing a board game variant of it: Consider an $n$ by $n$ chessboard. Its rows are numbered from $1$ to $n$ from the top to the bottom. Its columns are numbered from $1$ to $n$ from the left to the right. A cell on an intersection of $x$-th row and $y$-th column is denoted $(x, y)$. The main diagonal of the chessboard is cells $(x, x)$ for all $1 \le x \le n$. A permutation of $\{1, 2, 3, \dots, n\}$ is written on the main diagonal of the chessboard. There is exactly one number written on each of the cells. The problem is to partition the cells under and on the main diagonal (there are exactly $1+2+ \ldots +n$ such cells) into $n$ connected regions satisfying the following constraints: - Every region should be connected. That means that we can move from any cell of a region to any other cell of the same region visiting only cells of the same region and moving from a cell to an adjacent cell. - The $x$-th region should contain cell on the main diagonal with number $x$ for all $1\le x\le n$. - The number of cells that belong to the $x$-th region should be equal to $x$ for all $1\le x\le n$. - Each cell under and on the main diagonal should belong to exactly one region.
The answer is unique and always exists. There are two ways to construct the answer. Construction 1: Start with the main diagonal. There is one cell $(x, x)$ with number $1$ on it. That cell must form a region by itself. For each cell $(y, y)$ on the main diagonal that is above $(x, x)$, the cell $(y+1, y)$ belongs to the same region as $(y, y)$. We write the number on $(y, y)$ minus $1$ on the cell $(y+1, y)$ and make $(y+1, y)$ belong to the same region as $(y, y)$. For each cell $(y, y)$ on the main diagonal that is below $(x, x)$, the cell $(y, y-1)$ belongs to the same region as $(y, y)$. We write the number on $(y, y)$ minus $1$ on the cell $(y, y-1)$ and make $(y, y - 1)$ belong to the same region as $(y, y)$. Then a permutation of $1, \ldots, n-1$ is written on the cells $(2, 1), (3, 2), \ldots, (n, n-1)$. We can continue this process on this subdiagonal. Then a permutation of $1, 2, \ldots, n-2$ will be written on the cells $(3, 1), (4, 2), \ldots, (n, n-2)$. Repeat until all cells have numbers written on them. Note that the numbers written on the cells are not the output. Put $x$ for the region with size $x$ after constructing the regions. This construction proves that the solution always exists and is unique. Construction 2: Construct the region for $(1, 1), (2, 2), \ldots, (n, n)$ in order. When starting at $(i, i)$, we walk from $(i, i)$ for $x$ steps, where $x$ is the number written on $(i, i)$. For each step, if the cell to the left of your current cell is empty, we go to that cell and write $x$ on it. Otherwise we go down and write $x$ there. We can prove that the second construction produces exactly the same answer as the first construction. They only differ in their orders of processing. The regions are always paths.
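Construction 2 translates almost line-for-line into code (a sketch: 1-indexed grid with a spare border row, $0$ marks an empty cell, and the names are mine):

```cpp
#include <vector>

// Construction 2: for each diagonal value x at (i,i), in order, extend the
// region by x-1 steps, preferring left when that cell is empty, else down.
// Returns the filled (n+2)x(n+2) grid; only cells under/on the diagonal matter.
std::vector<std::vector<int>> fillomino(const std::vector<int>& p) {
    int n = p.size();
    std::vector<std::vector<int>> g(n + 2, std::vector<int>(n + 2, 0));
    for (int i = 1; i <= n; i++) {
        int x = p[i - 1], r = i, c = i;
        g[r][c] = x;
        for (int step = 1; step < x; step++) {
            if (c > 1 && g[r][c - 1] == 0) c--;   // go left if free
            else r++;                             // otherwise go down
            g[r][c] = x;
        }
    }
    return g;
}
```

For the diagonal permutation $2, 3, 1$ this yields regions of sizes $2$, $3$, $1$ in the lower triangle.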
[ "constructive algorithms", "dfs and similar", "greedy", "implementation" ]
1,400
null
1517
D
Explorer Space
You are wandering in the explorer space of the 2050 Conference. The explorer space can be viewed as an undirected weighted grid graph with size $n\times m$. The set of vertices is $\{(i, j)|1\le i\le n, 1\le j\le m\}$. Two vertices $(i_1,j_1)$ and $(i_2, j_2)$ are connected by an edge if and only if $|i_1-i_2|+|j_1-j_2|=1$. At each step, you can walk to any vertex connected by an edge with your current vertex. On each edge, there are some number of exhibits. Since you already know all the exhibits, whenever you go through an edge containing $x$ exhibits, your boredness increases by $x$. For each starting vertex $(i, j)$, please answer the following question: What is the minimum possible boredness if you walk from $(i, j)$ and go back to it after exactly $k$ steps? You can use any edge for multiple times but the boredness on those edges are also counted for multiple times. At each step, you cannot stay on your current vertex. You also cannot change direction while going through an edge. Before going back to your starting vertex $(i, j)$ after $k$ steps, you can visit $(i, j)$ (or not) freely.
Since the graph is bipartite, when $k$ is odd it is impossible to return to the starting vertex after exactly $k$ steps. Since the graph is undirected, an optimal closed walk of even length $k$ can be obtained by walking along a cheapest walk of length $k/2$ and then retracing it, so the answer for $u$ is twice the minimum cost of a walk of length $k/2$ starting at $u$. That minimum can be computed with dynamic programming: let $dp_{u,k}$ be the minimum cost of a walk of length $k$ starting at $u$; then $dp_{u,k} = \min_{(u, v) \in E} \{dp_{v,k-1} + w_{u,v}\}$. The time complexity is $O(nmk)$.
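The recurrence is easy to implement directly. A minimal Python sketch (the names and the edge-weight layout `hor`/`ver` are our assumptions: `hor[i][j]` is the weight of edge $(i,j)$-$(i,j+1)$ and `ver[i][j]` of $(i,j)$-$(i+1,j)$, 0-indexed):

```python
def min_boredness(n, m, k, hor, ver):
    """Answers for every start cell: -1 when k is odd, otherwise twice
    the cheapest walk of length k/2 (walk out, then retrace)."""
    INF = float('inf')
    if k % 2 == 1:                      # bipartite: no odd closed walks
        return [[-1] * m for _ in range(n)]
    # dp[i][j] = min cost of a walk of the current length starting at (i, j)
    dp = [[0] * m for _ in range(n)]
    for _ in range(k // 2):
        ndp = [[INF] * m for _ in range(n)]
        for i in range(n):
            for j in range(m):
                if j + 1 < m:           # horizontal edge (i,j)-(i,j+1)
                    ndp[i][j] = min(ndp[i][j], dp[i][j + 1] + hor[i][j])
                    ndp[i][j + 1] = min(ndp[i][j + 1], dp[i][j] + hor[i][j])
                if i + 1 < n:           # vertical edge (i,j)-(i+1,j)
                    ndp[i][j] = min(ndp[i][j], dp[i + 1][j] + ver[i][j])
                    ndp[i + 1][j] = min(ndp[i + 1][j], dp[i][j] + ver[i][j])
        dp = ndp
    return [[2 * dp[i][j] for j in range(m)] for i in range(n)]
```

On a $2\times 2$ grid with all weights $1$ and $k=2$, every cell's answer is $2$ (step to the cheapest neighbor and back).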
[ "dp", "graphs", "shortest paths" ]
1,800
null
1517
E
Group Photo
In the 2050 Conference, some people from the competitive programming community meet together and are going to take a photo. The $n$ people form a line. They are numbered from $1$ to $n$ from left to right. Each of them either holds a cardboard with the letter 'C' or a cardboard with the letter 'P'. Let $C=\{c_1,c_2,\dots,c_m\}$ $(c_1<c_2<\ldots <c_m)$ be the set of people who hold cardboards of 'C'. Let $P=\{p_1,p_2,\dots,p_k\}$ $(p_1<p_2<\ldots <p_k)$ be the set of people who hold cardboards of 'P'. The photo is good if and only if it satisfies the following constraints: - $C\cup P=\{1,2,\dots,n\}$ - $C\cap P =\emptyset $. - $c_i-c_{i-1}\leq c_{i+1}-c_i(1< i <m)$. - $p_i-p_{i-1}\geq p_{i+1}-p_i(1< i <k)$. Given an array $a_1,\ldots, a_n$, please find the number of good photos satisfying the following condition: $$\sum\limits_{x\in C} a_x < \sum\limits_{y\in P} a_y.$$ The answer can be large, so output it modulo $998\,244\,353$. Two photos are different if and only if there exists at least one person who holds a cardboard of 'C' in one photo but holds a cardboard of 'P' in the other.
There can't be two $i$'s such that $c_i-c_{i-1}>2$, or $p_i-p_{i-1}$ won't be non-increasing; and there can't be two $i$'s such that $p_i-p_{i-1}>2$, or $c_i-c_{i-1}$ won't be non-decreasing. So for any $2\leq i<m$, $c_i-c_{i-1}\leq 2$, and for any $2\leq i<k$, $p_i-p_{i-1}\leq 2$. From this we can deduce that there are only two possible patterns: PP...PCC...C or (C/P)CC...CPCPC...PCPP...P(C/P). The first pattern can be counted in $O(n)$ and the second in $O(n)$ or $O(n \log n)$ with two pointers or divide and conquer.
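The $O(n)$ count for the first pattern alone can be sketched as follows (helper name ours; this is a partial sketch of the first case only, and a full solution must also count the second pattern and deduplicate photos matching both):

```python
def count_first_pattern(a):
    """Count photos of the form PP...PCC...C: the first i people hold 'P',
    the rest hold 'C', for i = 0..n, keeping those with sum(C) < sum(P)."""
    total = sum(a)
    count = 0
    sum_p = 0                        # sum over the P prefix of size i
    for i in range(len(a) + 1):
        if total - sum_p < sum_p:    # sum over C is total - sum_p
            count += 1
        if i < len(a):
            sum_p += a[i]
    return count
```

For $a = [1, 1, 1]$ this counts the splits with $P$-prefix of size $2$ and $3$, i.e. returns $2$.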
[ "binary search", "data structures", "implementation", "two pointers" ]
2,500
null
1517
F
Reunion
It is reported that the 2050 Conference will be held in Yunqi Town in Hangzhou from April 23 to 25, including theme forums, morning jogging, camping and so on. The relationship between the $n$ volunteers of the 2050 Conference can be represented by a tree (a connected undirected graph with $n$ vertices and $n-1$ edges). The $n$ vertices of the tree corresponds to the $n$ volunteers and are numbered by $1,2,\ldots, n$. We define the distance between two volunteers $i$ and $j$, \textrm{dis}$(i,j)$ as the number of edges on the shortest path from vertex $i$ to vertex $j$ on the tree. \textrm{dis}$(i,j)=0$ whenever $i=j$. Some of the volunteers can attend the on-site reunion while others cannot. If for some volunteer $x$ and nonnegative integer $r$, all volunteers whose distance to $x$ is no more than $r$ can attend the on-site reunion, a forum with radius $r$ can take place. The level of the on-site reunion is defined as the maximum possible radius of any forum that can take place. Assume that each volunteer can attend the on-site reunion with probability $\frac{1}{2}$ and these events are independent. Output the expected level of the on-site reunion. When no volunteer can attend, the level is defined as $-1$. When all volunteers can attend, the level is defined as $n$.
Let $B(u,r) = \{v \mid \mathrm{dis}(u,v) \leq r \}$, and color a vertex black iff the corresponding volunteer does not attend. First, we enumerate $r$ and count the number of ways that the answer is no larger than $r$. That is equivalent to requiring that the union of $B(u,r)$ over all black vertices $u$ covers all vertices. So a typical tree dp is to consider, for each subtree, the depth of the deepest uncovered vertex and how far the coverage can extend out of this subtree (determined by the shallowest black vertex that can reach beyond the subtree). The key observation is that if there is an uncovered vertex in the subtree, how far the coverage can extend doesn't matter: that vertex must be covered by some vertex $v$ from another subtree, and $v$ then extends further into the rest of the tree than any black vertex in this subtree. So if there is an uncovered vertex in the subtree, we only care about the depth of the deepest uncovered one; otherwise, we care about the depth of the shallowest black vertex. The number of states for a subtree $a$ is $O(size_a)$ (actually $O(dep_a)$). The time complexity for each $r$ is $O(n^2)$, and the overall complexity is $O(n^3)$. It is possible to optimize it further to $O(n^2 \log n)$.
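Before implementing the tree DP, it can help to have an exponential brute force to check it against. A sketch (a checker of our own, not the editorial's DP; assumes a 1-indexed edge list) that averages the level over all $2^n$ attendance patterns:

```python
from itertools import product

def expected_level(n, edges):
    """Brute-force expected level of a tree with vertices 1..n.
    Exponential in n; only for validating a faster solution on tiny inputs."""
    adj = [[] for _ in range(n + 1)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # all-pairs distances by BFS from every vertex
    dist = [[0] * (n + 1) for _ in range(n + 1)]
    for s in range(1, n + 1):
        d = {s: 0}
        queue = [s]
        for u in queue:
            for w in adj[u]:
                if w not in d:
                    d[w] = d[u] + 1
                    queue.append(w)
        for v in range(1, n + 1):
            dist[s][v] = d[v]
    total = 0
    for attend in product([False, True], repeat=n):
        if not any(attend):
            total += -1                 # nobody attends: level -1
        elif all(attend):
            total += n                  # everybody attends: level n
        else:
            # level = max over centers x of (distance to the closest
            # non-attending vertex) - 1
            total += max(min(dist[x][v] for v in range(1, n + 1)
                             if not attend[v - 1]) - 1
                         for x in range(1, n + 1))
    return total / 2 ** n
```

For a single edge $(1, 2)$ the four patterns give levels $-1, 0, 0, 2$, so the expected level is $0.25$.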
[ "combinatorics", "dp", "trees" ]
3,200
null
1517
G
Starry Night Camping
At the foot of Liyushan Mountain, $n$ tents will be carefully arranged to provide accommodation for those who are willing to experience the joy of approaching nature, the tranquility of the night, and the bright starry sky. The $i$-th tent is located at the point of $(x_i, y_i)$ and has a weight of $w_i$. A tent is important if and only if both $x_i$ and $y_i$ are even. You need to remove some tents such that for each remaining important tent $(x, y)$, there do not exist $3$ other tents $(x'_1, y'_1)$, $(x'_2, y'_2)$ and $(x'_3, y'_3)$ such that both conditions are true: - $|x'_j-x|, |y'_j - y|\leq 1$ for all $j \in \{1, 2, 3\}$, and - these four tents form a parallelogram (or a rectangle) and one of its sides is \textbf{parallel to the $x$-axis}. Please maximize the sum of the weights of the tents that are \textbf{not} removed. Print the maximum value.
We can label all the integer points of the plane as follows: ...2323... ...1010... ...2323... ...1010... where all the good (important) points are labeled with $1$. If we draw an edge between every pair of adjacent points, the forbidden patterns become exactly the paths on $4$ vertices labeled 0-1-2-3. So we transform the problem into the following one: delete a set of points with the smallest total weight such that the remaining points contain no 0-1-2-3 path. It can be solved with a simple min-cut algorithm.
[ "constructive algorithms", "flows", "graphs" ]
3,300
null
1517
H
Fly Around the World
After hearing the story of Dr. Zhang, Wowo decides to plan his own flight around the world. He already chose $n$ checkpoints in the world map. Due to the landform and the clouds, he cannot fly too high or too low. Formally, let $b_i$ be the height of Wowo's aircraft at checkpoint $i$, $x_i^-\le b_i\le x_i^+$ should be satisfied for all integers $i$ between $1$ and $n$, where $x_i^-$ and $x_i^+$ are given integers. The angle of Wowo's aircraft is also limited. For example, it cannot make a $90$-degree climb. Formally, $y_i^-\le b_i-b_{i-1}\le y_i^+$ should be satisfied for all integers $i$ between $2$ and $n$, where $y_i^-$ and $y_i^+$ are given integers. The final limitation is the speed of angling up or angling down. An aircraft should change its angle slowly for safety concerns. Formally, $z_i^- \le (b_i - b_{i-1}) - (b_{i-1} - b_{i-2}) \le z_i^+$ should be satisfied for all integers $i$ between $3$ and $n$, where $z_i^-$ and $z_i^+$ are given integers. Taking all these into consideration, Wowo finds that the heights at checkpoints are too hard for him to choose. Please help Wowo decide whether there exists a sequence of \textbf{real} numbers $b_1, \ldots, b_n$ satisfying all the constraints above.
Consider the DP idea: dp[i][x][y] represents whether there exists a sequence $b_1, \ldots, b_i$ satisfying all the constraints. (Constraints about $b_{i+1}, b_{i+2}, \ldots$ are ignored.) Then $R(i)=\{(x, y) | dp[i][x][y] = true\}$ is a region on the plane. We will prove that it is convex and then show that we can efficiently transform $R(i)$ to $R(i+1)$. Initially, we can construct $R(2)$. In $R(2)$, $x$ represents $b_2$ and $y$ represents $b_2-b_1$. The constraints are $x_1^-\le b_1=x-y \le x_1^+$, $x_2^-\le b_2=x\le x_2^+$, $y_2^-\le y=b_2-b_1\le y_2^+$. $R(2)$ is the intersection of these half planes (so it is convex). Calculate $R(i+1)$ by $R(i)$: We should take the new constraints $x_{i+1}^- \le b_{i+1} \le x_{i+1}^+$, $y_{i+1}^- \le b_{i+1}-b_i \le y_{i+1}^+$, $z_{i+1}^- \le b_{i+1}-b_i-(b_i-b_{i-1}) \le z_{i+1}^+$ into account. Let's consider the last constraint first and ignore the first two for now. If $(b_{i+1}-b_i)-(b_i-b_{i-1})=z$, point $(x, y)$ in $R(i)$ (which represents $b_i=x, b_{i-1}=x-y$ since $y=b_i-b_{i-1}$) will become $(b_{i+1}, b_{i+1}-b_i)=(b_{i}+(b_i-b_{i-1}) + ((b_{i+1}-b_i)-(b_i-b_{i-1})), (b_i-b_{i-1}) + ((b_{i+1}-b_i)-(b_i-b_{i-1})))=(x+y+z, y+z)$ in $R(i+1)$ (since $(b_{i+1}-b_i)-(b_i-b_{i-1})=z$). Thus, to transform $R(i)$ to $R(i+1)$, we simply apply the $x\leftarrow x+y$ transformation and then move the region by the vector $(z, z)$. $x\leftarrow x+y$ is a linear transformation. We call the new region $R(i)(z)$. Now we know that $z$ is between $z_{i+1}^-$ and $z_{i+1}^+$. So we should take the union of all $R(i)(z)$ for $z\in[z_{i+1}^-,z_{i+1}^+]$. Finally, we add the first two constraints about $x$ and $y$ back. These constraints correspond to cutting the region by vertical or horizontal half planes. The answer is yes if and only if $R(n)$ is nonempty. By the process above, we can prove (inductively) that the region $R(i)$ is always convex. For each $i$, we only add vertical and horizontal half planes.
These half planes will cut some original edges of the convex polygon and add some new vertical or horizontal edges to it. The transformation $x\leftarrow x+y$ will change an edge with slope $1/k$ into an edge with slope $1/(k+1)$ (vertical edges become edges with slope $1/1$). So we can also prove inductively that for any $i$, all the edges of $R(i)$ have slope $0, \infty$ or $1/k$ for some integer $k\le n$. These edges can be written as $Ax+By+C=0$ for integral $A, B$ and $C$. We now know that for any $i$, the region $R(i)$ is always convex and all edges have slope $0, \infty$ or $1/k$ for some integer $k\le n$. Next we describe how to implement the process described above efficiently. We maintain two deques for the vertices on the upper and lower hull of the polygon. Vertical edges are included in the upper hull if they have minimum $x$ coordinate (and in the lower hull if they have maximum $x$ coordinate). To cut the polygon by vertical or horizontal half planes, we simply pop some vertices at the end of the deques. (And possibly add some new vertices at the front or back of the deques.) To apply the transformation $x\leftarrow x+y$, we apply it to all vertices of the convex polygon. Since we apply the same operation to all vertices, this can be done by tags in constant time (like the tags in segment trees). We will explain the precision issues later. Finally, to take the union of all $R(i)(z)$, we move the upper hull by $(z_{i+1}^+, z_{i+1}^+)$ and the lower hull by $(z_{i+1}^-, z_{i+1}^-)$. (These are again transformations applied to all vertices in each deque. We use tags as well.) The new upper hull and new lower hull are not necessarily connected. If they are not, we connect their first and last vertices. The time complexity is $O(n)$ assuming each arithmetic operation costs constant time.
[ "dp", "geometry" ]
3,500
null
1519
A
Red and Blue Beans
You have $r$ red and $b$ blue beans. You'd like to distribute them among several (maybe, one) packets in such a way that each packet: - has at least one red bean (or the number of red beans $r_i \ge 1$); - has at least one blue bean (or the number of blue beans $b_i \ge 1$); - the number of red and blue beans should differ in no more than $d$ (or $|r_i - b_i| \le d$) Can you distribute all beans?
Without loss of generality, let's say $r \le b$ (otherwise, we can swap them). Note that you can't use more than $r$ packets (at least one red bean in each packet), so $b$ can't exceed $r \cdot (d + 1)$ (at most $d + 1$ blue beans in each packet). So, if $b > r \cdot (d + 1)$ then the answer is NO. Otherwise, we can form exactly $r$ packets.
[ "math" ]
800
fun main() { repeat(readLine()!!.toInt()) { val (r, b, d) = readLine()!!.split(' ').map { it.toInt() } println(if (minOf(r, b) * (d + 1).toLong() >= maxOf(r, b)) "YES" else "NO") } }
1519
B
The Cake Is a Lie
There is a $n \times m$ grid. You are standing at cell $(1, 1)$ and your goal is to finish at cell $(n, m)$. You can move to the neighboring cells to the right or down. In other words, suppose you are standing at cell $(x, y)$. You can: - move right to the cell $(x, y + 1)$ — it costs $x$ burles; - move down to the cell $(x + 1, y)$ — it costs $y$ burles. Can you reach cell $(n, m)$ spending \textbf{exactly} $k$ burles?
Note that whichever path you choose, the total cost will be the same. If you know that the cost is the same, then it's not hard to calculate it. It's equal to $n \cdot m - 1$. So the task is to check: is $k$ equal to $n \cdot m - 1$ or not. The constant cost may be proved by induction on $n + m$: for $n = m = 1$ cost is $1 \cdot 1 - 1 = 0$. For a fixed $(n, m)$, there are only two last steps you can make: either from $(n, m - 1)$ with cost $n$: the total cost is $n \cdot (m - 1) - 1 + n$ $=$ $n \cdot m - 1$ or from $(n - 1, m)$ with cost $m$: the total cost is $(n - 1) \cdot m - 1 + m$ $=$ $n \cdot m - 1$. So, whichever path you choose, the total cost is the same.
[ "dp", "math" ]
800
fun main() { repeat(readLine()!!.toInt()) { val (n, m, k) = readLine()!!.split(' ').map { it.toInt() } println(if (n * m - 1 == k) "YES" else "NO") } }
1519
C
Berland Regional
Polycarp is an organizer of a Berland ICPC regional event. There are $n$ universities in Berland numbered from $1$ to $n$. Polycarp knows all competitive programmers in the region. There are $n$ students: the $i$-th student is enrolled at a university $u_i$ and has a programming skill $s_i$. Polycarp has to decide on the rules now. In particular, the number of members in the team. Polycarp knows that if he chooses the size of the team to be some integer $k$, each university will send their $k$ strongest (with the highest programming skill $s$) students in the first team, the next $k$ strongest students in the second team and so on. If there are fewer than $k$ students left, then the team can't be formed. Note that there might be universities that send zero teams. The strength of the region is the total skill of the members of all present teams. If there are no teams present, then the strength is $0$. Help Polycarp to find the strength of the region for each choice of $k$ from $1$ to $n$.
There are two important observations to make. The first one is that you can calculate the answers for each university independently of each other and sum them up to obtain the full answer. The second one is that if a university has $x$ students, then it can only contribute to the answers for $k$ from $1$ to $x$. So if we learn to calculate the contribution of the $i$-th university for some fixed $k$ in $O(1)$, then we will be able to iterate over all possible $k$ for each university and get the solution in $O(\sum \limits_{i=1}^{n} x_i) = O(n)$, where $x_i$ is the number of students in the $i$-th university. To achieve it, note that the number of students of the $i$-th university that end up in full teams of size $k$ is the largest multiple of $k$ not exceeding $x_i$, that is, $\lfloor \frac{x_i}{k} \rfloor \cdot k$. Sort the students of each university in decreasing order of skill, precalculate prefix sums, and add the prefix sum over that many students to the answer for $k$. Overall complexity: $O(n \log n)$ per testcase.
[ "brute force", "data structures", "greedy", "number theory", "sortings" ]
1,400
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; int main() { int t; scanf("%d", &t); forn(_, t){ int n; scanf("%d", &n); vector<int> s(n), u(n); forn(i, n){ scanf("%d", &s[i]); --s[i]; } forn(i, n){ scanf("%d", &u[i]); } vector<vector<int>> bst(n); forn(i, n) bst[s[i]].push_back(u[i]); forn(i, n) sort(bst[i].begin(), bst[i].end(), greater<int>()); vector<vector<long long>> pr(n, vector<long long>(1, 0)); forn(i, n) for (int x : bst[i]) pr[i].push_back(pr[i].back() + x); vector<long long> ans(n); forn(i, n) for (int k = 1; k <= int(bst[i].size()); ++k) ans[k - 1] += pr[i][bst[i].size() / k * k]; forn(i, n) printf("%lld ", ans[i]); puts(""); } return 0; }
1519
D
Maximum Sum of Products
You are given two integer arrays $a$ and $b$ of length $n$. You can reverse \textbf{at most one} subarray (continuous subsegment) of the array $a$. Your task is to reverse such a subarray that the sum $\sum\limits_{i=1}^n a_i \cdot b_i$ is \textbf{maximized}.
The naive approach is to iterate over $l$ and $r$, reverse the subsegment $[l, r]$ of the array and calculate the answer, but this works in $O(n^3)$ and is too slow. Instead, we can iterate over the center of the reversed segment and its length. If the current segment is $[l, r]$ and we extend it to $[l - 1, r + 1]$, then the sum over the reversed part increases by $a_{l-1} \cdot b_{r + 1} + a_{r + 1} \cdot b_{l-1}$. It remains to add the answer for $[1, l)$ and $(r, n]$ without reversal, which is easy to do if you precalculate the prefix sums of the values $a_i \cdot b_i$.
[ "brute force", "dp", "implementation", "math", "two pointers" ]
1,600
#include <bits/stdc++.h> using namespace std; using li = long long; int main() { int n; cin >> n; vector<li> a(n), b(n); for (auto& x : a) cin >> x; for (auto& x : b) cin >> x; vector<li> pr(n + 1, 0); for (int i = 0; i < n; ++i) pr[i + 1] = pr[i] + a[i] * b[i]; li ans = pr[n]; for (int c = 0; c < n; ++c) { li cur = a[c] * b[c]; for (int l = c - 1, r = c + 1; l >= 0 && r < n; --l, ++r) { cur += a[l] * b[r]; cur += a[r] * b[l]; ans = max(ans, cur + pr[l] + (pr[n] - pr[r + 1])); } cur = 0; for (int l = c, r = c + 1; l >= 0 && r < n; --l, ++r) { cur += a[l] * b[r]; cur += a[r] * b[l]; ans = max(ans, cur + pr[l] + (pr[n] - pr[r + 1])); } } cout << ans << endl; }
1519
E
Off by One
There are $n$ points on an infinite plane. The $i$-th point has coordinates $(x_i, y_i)$ such that $x_i > 0$ and $y_i > 0$. The coordinates are not necessarily integer. In one move you perform the following operations: - choose two points $a$ and $b$ ($a \neq b$); - move point $a$ from $(x_a, y_a)$ to either $(x_a + 1, y_a)$ or $(x_a, y_a + 1)$; - move point $b$ from $(x_b, y_b)$ to either $(x_b + 1, y_b)$ or $(x_b, y_b + 1)$; - remove points $a$ and $b$. \textbf{However, the move can only be performed if there exists a line that passes through the new coordinates of $a$, new coordinates of $b$ and $(0, 0)$.} Otherwise, the move can't be performed and the points stay at their original coordinates $(x_a, y_a)$ and $(x_b, y_b)$, respectively. The numeration of points \textbf{does not change} after some points are removed. Once the points are removed, they can't be chosen in any later moves. Note that you have to move both points during the move, you can't leave them at their original coordinates. What is the maximum number of moves you can perform? What are these moves? If there are multiple answers, you can print any of them.
At first the problem sounds like some sort of matching. However, it seems like you first want to match each point with either of its moves and then some pairs of points to each other. That doesn't sound viable but since matchings are often connected with graphs, the graph idea might come in handy. Let's first consider a pair of matched points. What does it actually mean that there exists a line through their new coordinates and $(0, 0)$? It's the same as: the angles of a line through the new coordinates of $a$ and $(0, 0)$ and a line through the new coordinates of $b$ and $(0, 0)$ are the same. Angles are the same means that their tangents are the same (and vice versa since we only consider the first quadrant of the plane). So we can conclude that $\frac{y}{x+1}$ or $\frac{y+1}{x}$ of the first point should be equal to one of these values of the second point. Now consider the following graph. Various values of tangents of the lines are the nodes. Each point produces an edge between their $\frac{y}{x+1}$ and $\frac{y+1}{x}$. What are the matched pairs of points in this graph? It's such a pair of edges that they share at least one endpoint. Building the graph is the slowest part of the solution since you have to use some data structure (or at least a sort and a binary search). $O(n)$ is possible with some sort of hashmap but $O(n \log n)$ should be perfectly fine as well. So we reduced the problem to a more well-known one: given an arbitrary undirected graph, find the maximum number of pairs of edges such that each pair shares at least one endpoint and each edge is included in no more than one pair. The upper bound on the answer is the following. Let $m_i$ be the number of edges in the $i$-th connected component. In the best case we can make $\lfloor \frac{m_i}{2} \rfloor$ pairs from it. Let's come up with an algorithm to achieve this bound. Consider a dfs tree of a component. It's known that a dfs tree of an undirected graph contains no cross edges.
So if we direct all the edges of a dfs tree downwards (convert all back edges to forward edges), each edge will connect some vertex to its descendant. Imagine we came up with a dfs such that $dfs(v)$ matches all the edges that have their upper node in the subtree of $v$ to each other (except one edge in case there is an odd number of them). $dfs(root)$ will solve the task exactly then. How should that dfs work exactly? What if there were no forward edges at all? That case is easy since all edges are tree edges. We'll try to maintain an invariant that the only unmatched edge is an edge that has $v$ as one of its endpoints. If $v$ is a leaf, then there's nothing to match. Otherwise, we go into some child $u$. If it can't match all its edges, then match its remaining edge to the edge $(v, u)$. If it can, then remember that we have an edge $(v, u)$ unmatched so far. Go into another child $w$. Same, match our edge with its edge if it has one unmatched. However, if $(v, w)$ turned out to be unmatched and $(v, u)$ is still unmatched, then you can match them to each other. This way you will be left with at most one unmatched edge after you process all the children, and that edge has its endpoint at $v$. Add the forward edges back. Did anything change? Look at the forward edge that has its upper vertex the lowest. We can see that it points to a vertex $u$ that has its subtree fully matched. So why don't we treat this forward edge the same as an edge to a leaf? Forget that $u$ has some subtree of its own and just believe that you can't match the edge $(v, u)$ so far. Proceed the same as in the easy case. Since we mark exactly which edges we pair up with which, it's trivial to retrieve the answer. Overall complexity: $O(n)/O(n \log n)$.
[ "constructive algorithms", "dfs and similar", "geometry", "graphs", "sortings", "trees" ]
2,700
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) #define x first #define y second using namespace std; struct point{ int a, b, c, d; }; typedef pair<long long, long long> frac; typedef pair<int, int> pt; int n; vector<point> a; map<frac, int> sv; frac norm(long long x, long long y){ long long g = __gcd(x, y); return {x / g, y / g}; } vector<vector<pt>> g; vector<int> used; vector<pt> ans; int dfs(int v){ used[v] = 1; int cur = -1; for (auto it : g[v]){ int u = it.x; int i = it.y; if (used[u] == 1) continue; int nw = i; if (!used[u]){ int tmp = dfs(u); if (tmp != -1){ ans.push_back({nw, tmp}); nw = -1; } } if (nw != -1){ if (cur != -1){ ans.push_back({cur, nw}); cur = -1; } else{ cur = nw; } } } used[v] = 2; return cur; } int main() { scanf("%d", &n); a.resize(n); forn(i, n) scanf("%d%d%d%d", &a[i].a, &a[i].b, &a[i].c, &a[i].d); g.resize(2 * n); forn(i, n){ frac f1 = norm((a[i].a + a[i].b) * 1ll * a[i].d, a[i].b * 1ll * a[i].c); frac f2 = norm(a[i].a * 1ll * a[i].d, a[i].b * 1ll * (a[i].c + a[i].d)); if (!sv.count(f1)){ int k = sv.size(); sv[f1] = k; } if (!sv.count(f2)){ int k = sv.size(); sv[f2] = k; } g[sv[f1]].push_back({sv[f2], i}); g[sv[f2]].push_back({sv[f1], i}); } used.resize(sv.size()); forn(i, sv.size()) if (!used[i]) dfs(i); printf("%d\n", int(ans.size())); for (auto it : ans) printf("%d %d\n", it.x + 1, it.y + 1); }
1519
F
Chests and Keys
Alice and Bob play a game. Alice has got $n$ treasure chests (the $i$-th of which contains $a_i$ coins) and $m$ keys (the $j$-th of which she can sell Bob for $b_j$ coins). Firstly, Alice puts some locks on the chests. There are $m$ types of locks, the locks of the $j$-th type can only be opened with the $j$-th key. To put a lock of type $j$ on the $i$-th chest, Alice has to pay $c_{i,j}$ dollars. Alice can put any number of different types of locks on each chest (possibly, zero). Then, Bob buys some of the keys from Alice (possibly none, possibly all of them) and opens each chest he can (he can open a chest if he has the keys for all of the locks on this chest). Bob's profit is the difference between the total number of coins in the opened chests and the total number of coins he spends buying keys from Alice. If Bob's profit is \textbf{strictly positive} (greater than zero), he wins the game. Otherwise, Alice wins the game. Alice wants to put some locks on some chests so no matter which keys Bob buys, she always wins (Bob cannot get positive profit). Of course, she wants to spend the minimum possible number of dollars on buying the locks. Help her to determine whether she can win the game at all, and if she can, how many dollars she has to spend on the locks.
Firstly, let's try to find some naive solution for this problem. Let's iterate on the subset of locks Alice puts on the chests. After choosing the subset of locks, how to check whether Bob can gain positive profit? We can iterate on the subset of keys he can buy as well, but in fact, this problem has a polynomial solution. Construct a flow network as follows: each chest and each key represents a vertex; there are $n$ arcs from the source to the vertices representing the chests (each having capacity $a_i$), $m$ arcs from the vertices representing the keys to the sink (each having capacity $b_j$), and for each chosen lock, an arc from the respective chest-vertex to the respective key-vertex with infinite capacity. If we find the minimum cut from the source to the sink, then Bob's profit is $(\sum_{i = 1}^{n} a_i) - mincut$. The reasoning behind this solution is the following one: if Bob takes all the chests and all the keys belonging to the first part of the cut, his profit is equal to the total cost of all chests he has taken, minus the total cost of all keys he has taken, minus infinity if he takes a chest he can't open. And the value of the cut is equal to the total cost of chests he doesn't take, plus the total cost of keys he takes, plus infinity if he can't open some chest he takes (since the arc from this chest-vertex to one of the key-vertices belongs to the cut). So, Bob's profit is $(\sum_{i = 1}^{n} a_i) - cut$, and by minimizing the cut value, we maximize his profit. A minimum cut can be easily found using any maxflow algorithm. Unfortunately, even iterating through all subsets of locks is too slow. To improve this solution, we should look at the minimum cut and its usage a bit more in detail. Notice that Bob can always take no keys and open no chests to get a profit of zero, so Alice's goal is to ensure that it is the best Bob's option. 
If Bob takes no chests and no keys, it means that the cut divides the network into two parts: the source and all other vertices. And, in terms of flows, it means that the maximum flow in this network should saturate all arcs going from the source (I highlighted it because it is the key idea of the solution). Here the constraints on $a_i$, $n$ and $m$ come in handy. We can use a dynamic programming with the flow over all arcs going from the source as one of the states. One of the ways to implement it is to have $(f_1, f_2, \dots, f_n, i, j, r)$ as the state, where $f_1$ through $f_n$ are the values of the flow going through the arcs from the source, $i$ is the current vertex in the left part we consider, $j$ is the current vertex in the right part we consider, and $r$ is the flow we already pushed through the arc connecting vertex $j$ of the right part to the sink (and the value we store for this state is the minimum cost Alice has to pay to reach this state). There are two basic types of transitions in this dynamic programming: we either skip the arc from $i$ to $j$, or pick it and transfer some flow through it; and no matter what we've chosen, we move to the next vertex of the left part (or to $1$ and increase $j$ by $1$ if we are already considering the $n$-th vertex of the left part). The constraints were loose enough to implement this dp in basically any form (there was no need to compress the states into single integers, for example, which is what most participants of the round did).
[ "bitmasks", "brute force", "dfs and similar", "dp", "flows" ]
3,200
#include<bits/stdc++.h> using namespace std; const int N = 6; const int M = 400; const int INF = int(1e9); int a[N]; int b[N]; int c[N][N]; int n, m; struct state { vector<int> need; int v2; int v1; int rem; state() {}; state(vector<int> need, int v1, int v2, int rem) : need(need), v1(v1), v2(v2), rem(rem) {}; }; int get_code(const vector<int>& v) { int ans = 0; for(int i = 0; i < v.size(); i++) ans = ans * 5 + v[i]; return ans; } int get_code(const state& s) { int code = get_code(s.need); code = code * 6 + s.v2; code = code * 6 + s.v1; code = code * 5 + s.rem; return code; } vector<int> get_vector(int code, int n) { vector<int> res(n); for(int i = n - 1; i >= 0; i--) { res[i] = code % 5; code /= 5; } return res; } state get_state(int code) { int rem = code % 5; code /= 5; int v1 = code % 6; code /= 6; int v2 = code % 6; code /= 6; vector<int> need = get_vector(code, n); return state(need, v1, v2, rem); } const int Z = 40 * int(1e6); int dp[Z]; int main() { cin >> n >> m; for(int i = 0; i < n; i++) cin >> a[i]; for(int i = 0; i < m; i++) cin >> b[i]; for(int i = 0; i < n; i++) for(int j = 0; j < m; j++) cin >> c[i][j]; for(int i = 0; i < Z; i++) dp[i] = INF; state start(vector<int>(n, 0), 0, 0, 0); int ans = INF; dp[get_code(start)] = 0; for(int i = 0; i < Z; i++) { if(dp[i] == INF) continue; state s = get_state(i); for(int f = 0; f <= 4; f++) { if(s.need[s.v1] + f > a[s.v1] || s.rem + f > b[s.v2]) continue; int add = (f == 0 ? 0 : c[s.v1][s.v2]); state nw = s; nw.need[s.v1] += f; nw.rem += f; if(s.v1 == n - 1) { nw.v1 = 0; nw.v2 = s.v2 + 1; nw.rem = 0; } else { nw.v1 = s.v1 + 1; } if(nw.need == vector<int>(a, a + n)) ans = min(ans, dp[i] + add); if(nw.v2 < m) { int code = get_code(nw); dp[code] = min(dp[code], dp[i] + add); } } } if(ans == INF) ans = -1; cout << ans << endl; }
1520
A
Do Not Be Distracted!
Polycarp has $26$ tasks. Each task is designated by a capital letter of the Latin alphabet. The teacher asked Polycarp to solve tasks in the following way: if Polycarp began to solve some task, then he must solve it to the end, without being distracted by another task. After switching to another task, Polycarp cannot return to the previous task. Polycarp can only solve one task during the day. Every day he wrote down what task he solved. Now the teacher wants to know if Polycarp followed his advice. For example, if Polycarp solved tasks in the following order: "DDBBCCCBBEZ", then the teacher will see that on the third day Polycarp began to solve the task 'B', then on the fifth day he got distracted and began to solve the task 'C', on the eighth day Polycarp returned to the task 'B'. Other examples of when the teacher is suspicious: "BAB", "AABBCCDDEEBZZ" and "AAAAZAAAAA". If Polycarp solved the tasks as follows: "FFGZZZY", then the teacher cannot have any suspicions. Please note that Polycarp is not obligated to solve all tasks. Other examples of when the teacher doesn't have any suspicious: "BA", "AFFFCC" and "YYYYY". Help Polycarp find out if his teacher might be suspicious.
The simplest solution: iterate over the task because of which the teacher might have suspicions. Find the first day when Polycarp solved this task and the last such day. Between these two days, all solved tasks should be the same one. If this is not the case, the answer is "NO".
[ "brute force", "implementation" ]
800
#include <bits/stdc++.h> using namespace std; using ll = long long; using ld = long double; using pii = pair<int, int>; using cd = complex<ld>; void solve() { int n; cin >> n; string s; cin >> s; for (char c = 'A'; c <= 'Z'; c++) { int first = n; int last = -1; for (int i = 0; i < n; i++) { if (s[i] == c) { first = min(first, i); last = max(last, i); } } for (int i = first; i <= last; i++) { if (s[i] != c) { cout << "NO\n"; return; } } } cout << "YES\n"; } int main() { ios_base::sync_with_stdio(false); cin.tie(nullptr); int t; cin >> t; while (t--) { solve(); } }
1520
B
Ordinary Numbers
Let's call a positive integer $n$ ordinary if in the decimal notation all its digits are the same. For example, $1$, $2$ and $99$ are ordinary numbers, but $719$ and $2021$ are not ordinary numbers. For a given number $n$, find the number of ordinary numbers among the numbers from $1$ to $n$.
Note that every ordinary number can be represented as $d \cdot (10^0 + 10^1 + \ldots + 10^k)$. Therefore, to count all ordinary numbers among the numbers from $1$ to $n$, it is enough to count the number of $(d, k)$ pairs such that $d \cdot (10^0 + 10^1 + \ldots + 10^k) \le n$. In the given constraints, it is enough to iterate over $d$ from $1$ to $9$ and $k$ from $0$ to $8$.
[ "brute force", "math", "number theory" ]
800
#include <bits/stdc++.h> using namespace std; using ll = long long; void solve() { int n; cin >> n; int res = 0; for (ll pw = 1; pw <= n; pw = pw * 10 + 1) { for (int d = 1; d <= 9; d++) { if (pw * d <= n) { res++; } } } cout << res << endl; } int main() { int tests; cin >> tests; while (tests-- > 0) { solve(); } return 0; }
1520
C
Not Adjacent Matrix
We will consider the numbers $a$ and $b$ as adjacent if they differ by exactly one, that is, $|a-b|=1$. We will consider cells of a square matrix $n \times n$ as adjacent if they have a common side, that is, for cell $(r, c)$ cells $(r, c-1)$, $(r, c+1)$, $(r-1, c)$ and $(r+1, c)$ are adjacent to it. For a given number $n$, construct a square matrix $n \times n$ such that: - Each integer from $1$ to $n^2$ occurs in this matrix exactly once; - If $(r_1, c_1)$ and $(r_2, c_2)$ are adjacent cells, then the numbers written in them \textbf{must not be adjacent}.
Note that $n = 2$ is the only case where there is no answer. For other cases, consider the following construction: Let's say that the cell $(i, j)$ is white if $i + j$ is an even number, otherwise, we will say that the cell $(i, j)$ is black; Let's arrange the cells so that all white cells are first, and if the colors are equal, the cells will be compared lexicographically. Arrange numbers from $1$ to $n^2$ in ordered cells. For example, for $n=3$, the following matrix will be constructed: $\begin{pmatrix} 1 & 6 & 2\\ 7 & 3 & 8\\ 4 & 9 & 5\\ \end{pmatrix}$
[ "constructive algorithms" ]
1000
#include <bits/stdc++.h> using namespace std; void solve() { int n; cin >> n; if (n == 1) { cout << "1" << endl; return; } else if (n == 2) { cout << "-1" << endl; return; } vector<vector<int>> a(n, vector<int>(n)); a[0][0] = 1; a[n - 1][n - 1] = n * n; int x = n * n - 1; for (int i = 1; i + 1 < n; i++) { for (int j = i; j >= 0; j--, x--) { a[i - j][j] = x; } } x = 2; for (int j = n - 2; j > 0; j--) { for (int i = 0; i < n - j; i++, x++) { a[n - i - 1][j + i] = x; } } for (int i = n - 1; i >= 0; i--, x++) { a[i][n - i - 1] = x; } for (int i = 0; i < n; i++) { for (int j = 0; j < n; j++) { cout << a[i][j] << " "; } cout << endl; } } int main() { int tests; cin >> tests; while (tests-- > 0) { solve(); } return 0; }
1520
D
Same Differences
You are given an array $a$ of $n$ integers. Count the number of pairs of indices $(i, j)$ such that $i < j$ and $a_j - a_i = j - i$.
Let's rewrite the original equality a bit: $a_j - a_i = j - i$, i.e. $a_j - j = a_i - i$. Let's replace each $a_i$ with $b_i = a_i - i$. Then the answer is the number of pairs $(i, j)$ such that $i < j$ and $b_i = b_j$. To calculate this value you can use a map or sorting.
[ "data structures", "hashing", "math" ]
1200
#include <bits/stdc++.h> using namespace std; void solve() { int n; cin >> n; map<int, int> a; long long res = 0; for (int i = 0; i < n; i++) { int x; cin >> x; x -= i; res += a[x]; a[x]++; } cout << res << endl; } int main() { int tests; cin >> tests; while (tests-- > 0) { solve(); } return 0; }
1520
E
Arranging The Sheep
You are playing the game "Arranging The Sheep". The goal of this game is to make the sheep line up. The level in the game is described by a string of length $n$, consisting of the characters '.' (empty space) and '*' (sheep). In one move, you can move any sheep one square to the left or one square to the right, if the corresponding square \textbf{exists and is empty}. The game ends as soon as the sheep are lined up, that is, there should be no empty cells between any sheep. For example, if $n=6$ and the level is described by the string "**.*..", then the following game scenario is possible: - the sheep at the $4$ position moves to the right, the state of the level: "**..*."; - the sheep at the $2$ position moves to the right, the state of the level: "*.*.*."; - the sheep at the $1$ position moves to the right, the state of the level: ".**.*."; - the sheep at the $3$ position moves to the right, the state of the level: ".*.**."; - the sheep at the $2$ position moves to the right, the state of the level: "..***."; - the sheep are lined up and the game ends. For a given level, determine the minimum number of moves you need to make to complete the level.
Let's denote by $k$ the number of sheep in the string, and by $x_1, x_2, \ldots, x_k$ ($1 \le x_1 < x_2 < \ldots < x_k \le n$) their positions in the string. Note that in the optimal solution the sheep with the number $m = \lceil\frac{k}{2}\rceil$ will not make moves. This can be proved by considering an optimal solution in which the sheep with the number $m$ makes at least one move and showing that this solution is not optimal. Consider sheep with numbers from $i=1$ to $k$. Then the final position of the $i$-th sheep will be $x_m - m + i$, and the answer will be $\sum\limits_{i=1}^{k} |x_i - (x_m - m + i)|$.
[ "greedy", "math" ]
1400
#include<bits/stdc++.h> using namespace std; void solve() { int n; cin >> n; string s; cin >> s; int cnt = 0; for(auto x : s) cnt += (x == '*' ? 1 : 0); int pos = -1; int cur = -1; for(int i = 0; i < n; i++) { if(s[i] == '*') { cur++; if(cur == cnt / 2) pos = i; } } long long ans = 0; cur = pos - cnt / 2; for(int i = 0; i < n; i++) if(s[i] == '*') { ans += abs(cur - i); cur++; } cout << ans << endl; } int main() { ios_base::sync_with_stdio(0); cin.tie(0); int tc = 1; cin >> tc; for(int i = 0; i < tc; i++) { solve(); } }
1520
F1
Guess the K-th Zero (Easy version)
\textbf{This is an interactive problem.} \textbf{This is an easy version of the problem. The difference from the hard version is that in the easy version $t=1$ and the number of queries is limited to $20$.} Polycarp is playing a computer game. In this game, an array consisting of zeros and ones is hidden. Polycarp wins if he guesses the position of the $k$-th zero from the left $t$ times. Polycarp can make no more than $20$ requests of the following type: - ? $l$ $r$ — find out the sum of all elements in positions from $l$ to $r$ ($1 \le l \le r \le n$) inclusive. In this (easy version) of the problem, this paragraph doesn't really make sense since $t=1$ always. To make the game more interesting, each guessed zero turns into one and the game continues on the changed array. More formally, if the position of the $k$-th zero was $x$, then after Polycarp guesses this position, the $x$-th element of the array will be replaced from $0$ to $1$. Of course, this feature affects something only for $t>1$. Help Polycarp win the game.
This problem can be solved by binary search. Let's maintain a segment that is guaranteed to contain the $k$-th zero and gradually narrow it down. Let the current segment be $[l, r]$, and we want to find the $k$-th zero in it. Let's make a query on the left half of the segment, $[l, m]$, where $m = \frac{l+r}{2}$. If there are at least $k$ zeros in the left half of the segment, then we go to the left half and look for the $k$-th zero there. If there are only $x < k$ zeros in the left half, then we go to the right half and look for the $(k-x)$-th zero. We will spend $\log{n}$ queries, which is within the limits.
[ "binary search", "interactive" ]
1600
#include <bits/stdc++.h> using namespace std; using ll = long long; using ld = long double; using pii = pair<int, int>; using cd = complex<ld>; void calc(int l, int r, int k) { if (l == r) { cout << "! " << l << endl; return; } int m = (l + r) / 2; cout << "? " << l << " " << m << endl; int sum; cin >> sum; if ((m - l + 1) - sum >= k) { calc(l, m, k); } else { calc(m + 1, r, k - (m - l + 1) + sum); } } int main() { ios_base::sync_with_stdio(false); cin.tie(nullptr); int n, t, k; cin >> n >> t >> k; calc(1, n, k); }
1520
F2
Guess the K-th Zero (Hard version)
\textbf{This is an interactive problem.} \textbf{This is a hard version of the problem. The difference from the easy version is that in the hard version $1 \le t \le \min(n, 10^4)$ and the total number of queries is limited to $6 \cdot 10^4$.} Polycarp is playing a computer game. In this game, an array consisting of zeros and ones is hidden. Polycarp wins if he guesses the position of the $k$-th zero from the left $t$ times. Polycarp can make no more than $6 \cdot 10^4$ requests totally of the following type: - ? $l$ $r$ — find out the sum of all elements in positions from $l$ to $r$ ($1 \le l \le r \le n$) inclusive. To make the game more interesting, each guessed zero turns into one and the game continues on the changed array. More formally, if the position of the $k$-th zero was $x$, then after Polycarp guesses this position, the $x$-th element of the array will be replaced from $0$ to $1$. Help Polycarp win the game.
In this problem, you can apply the same solution as in the previous one, but you need to remember the responses to the requests and not make the same requests several times. Why does it work? Imagine a binary tree with a segment at each vertex, and its children being the left and right halves of that segment. We will keep only those vertices for which we made requests. It remains to show that there are no more than $6 \cdot 10^4$ such vertices. First, the height of the tree is not more than $18$. Let's understand what the maximum number of vertices can be in the tree. We will consider each level separately. If the level number $x$ is less than $\log{t} \le 14$, then we can spend no more than $2^x$ vertices on it (since there are simply no more vertices on that level). If the level number is from $15$ to $18$, then we can spend no more than $t$ vertices per level, since each request uses only one vertex per level. By limiting the number of vertices in this way, we get that there are no more than $2^{14} - 1 + 4 \cdot 10^4 = 56383$ vertices, which fits into the constraints.
[ "binary search", "constructive algorithms", "data structures", "interactive" ]
2200
#include <bits/stdc++.h> using namespace std; #define forn(i, n) for (int i = 0; i < int(n); i++) map<pair<int,int>, int> cache; void dec(int pos, int L, int R) { cache[{L, R}]--; if (L != R) { int M = (L + R) / 2; if (pos <= M) dec(pos, L, M); else dec(pos, M + 1, R); } } int main() { int n, cases; cin >> n >> cases; forn(case_, cases) { int k; cin >> k; int L = 0, R = n - 1; while (L != R) { int M = (L + R) / 2; pair<int,int> range = make_pair(L, M); if (cache.count(range) == 0) { cout << "? " << range.first + 1 << " " << range.second + 1 << endl; cin >> cache[range]; cache[range] = range.second - range.first + 1 - cache[range]; } int value = cache[range]; if (k <= value) R = M; else { k -= value; L = M + 1; } } cout << "! " << L + 1 << endl; dec(L, 0, n - 1); } }
1520
G
To Go Or Not To Go?
Dima overslept the alarm clock, which was supposed to raise him to school. Dima wonders if he will have time to come to the first lesson. To do this, he needs to know the \textbf{minimum time} it will take him to get from home to school. The city where Dima lives is a rectangular field of $n \times m$ size. Each cell $(i, j)$ on this field is denoted by one number $a_{ij}$: - The number $-1$ means that the passage through the cell is prohibited; - The number $0$ means that the cell is free and Dima can walk through it. - The number $x$ ($1 \le x \le 10^9$) means that the cell contains a portal with a cost of $x$. A cell with a portal is also considered free. From any portal, Dima can go to any other portal, while the time of moving from the portal $(i, j)$ to the portal $(x, y)$ corresponds to the sum of their costs $a_{ij} + a_{xy}$. In addition to moving between portals, Dima can also move between unoccupied cells adjacent to one side in time $w$. In particular, he can enter a cell with a portal and not use it. Initially, Dima is in the upper-left cell $(1, 1)$, and the school is in the lower right cell $(n, m)$.
There is no point in using two transitions between portals: if you want to go from portal $A$ to portal $B$, and then from portal $C$ to portal $D$, then you can immediately go from portal $A$ to portal $D$ for less. So there are two possible kinds of paths. The first does not use portals; here it is enough to find the shortest path between the two points. The second uses a single transition. Let's choose the portal from which we teleport: obviously, this is the portal minimizing the walking time to reach it plus its cost. The portal into which we teleport is selected similarly.
[ "brute force", "dfs and similar", "graphs", "greedy", "implementation", "shortest paths" ]
2200
#include <bits/stdc++.h> using namespace std; using ll = long long; using ld = long double; using pii = pair<int, int>; using cd = complex<ld>; const int MAX_N = 2010; int dd[4][2] = { {1, 0}, {0, 1}, {-1, 0}, {0, -1} }; void bfs(int sx, int sy, vector<vector<int>> &d, vector<vector<int>> &a) { int n = d.size(); int m = d[0].size(); queue<pair<int, int>> q; q.push({sx, sy}); d[sx][sy] = 1; while (!q.empty()) { auto [x, y] = q.front(); q.pop(); for (auto [dx, dy] : dd) { int tx = x + dx; int ty = y + dy; if (tx >= 0 && ty >= 0 && tx < n && ty < m && d[tx][ty] == 0 && a[tx][ty] != -1) { d[tx][ty] = d[x][y] + 1; q.push({tx, ty}); } } } for (auto &e : d) { for (auto &i : e) { i--; } } } int main() { ios_base::sync_with_stdio(false); cin.tie(nullptr); int n, m, w; cin >> n >> m >> w; vector<vector<int>> a(n, vector<int>(m)); vector<vector<int>> d1(n, vector<int>(m)); vector<vector<int>> d2(n, vector<int>(m)); for (int i = 0; i < n; i++) { for (int j = 0; j < m; j++) { cin >> a[i][j]; } } bfs(0, 0, d1, a); bfs(n - 1, m - 1, d2, a); ll bestFinish = 1e18; for (int i = 0; i < n; i++) { for (int j = 0; j < m; j++) { if (d2[i][j] != -1 && a[i][j] >= 1) { bestFinish = min(bestFinish, a[i][j] + w * 1ll * d2[i][j]); } } } ll ans = w * 1ll * d1[n - 1][m - 1]; if (d1[n - 1][m - 1] == -1) { ans = 1e18; } for (int i = 0; i < n; i++) { for (int j = 0; j < m; j++) { if (d1[i][j] != -1 && a[i][j] >= 1 && bestFinish != 1e18) { ans = min(ans, w * 1ll * d1[i][j] + a[i][j] + bestFinish); } } } if (ans == 1e18) { cout << -1; } else { cout << ans; } }
1521
A
Nastia and Nearly Good Numbers
Nastia has $2$ positive integers $A$ and $B$. She defines that: - The integer is good if it is divisible by $A \cdot B$; - Otherwise, the integer is nearly good, if it is divisible by $A$. For example, if $A = 6$ and $B = 4$, the integers $24$ and $72$ are good, the integers $6$, $660$ and $12$ are nearly good, the integers $16$, $7$ are neither good nor nearly good. Find $3$ \textbf{different} positive integers $x$, $y$, and $z$ such that \textbf{exactly one} of them is good and the \textbf{other} $2$ are nearly good, and $x + y = z$.
There are $2$ cases. If $B = 1$, then the answer doesn't exist: every number divisible by $A$ is also divisible by $A \cdot B$, so we cannot get nearly good numbers at all. Otherwise, we can construct the answer as $x = A$, $y = A \cdot B$, $z = A \cdot (B + 1)$, since $A + A \cdot B = A \cdot (B + 1)$: here $y$ is good, while $x$ and $z$ are nearly good.
[ "constructive algorithms", "math", "number theory" ]
1000
#include <bits/stdc++.h> using namespace std; int main() { ios_base::sync_with_stdio(false); cin.tie(nullptr); cout.tie(nullptr); int q; cin >> q; while (q--) { int a, b; cin >> a >> b; if (b == 1) { cout << "NO" << endl; } else { cout << "YES" << endl; cout << a << ' ' << a * (long long)b << ' ' << a * (long long)(b + 1) << endl; } } }
1521
B
Nastia and a Good Array
Nastia has received an array of $n$ positive integers as a gift. She calls such an array $a$ good that for all $i$ ($2 \le i \le n$) takes place $gcd(a_{i - 1}, a_{i}) = 1$, where $gcd(u, v)$ denotes the greatest common divisor (GCD) of integers $u$ and $v$. You can perform the operation: select two \textbf{different} indices $i, j$ ($1 \le i, j \le n$, $i \neq j$) and two integers $x, y$ ($1 \le x, y \le 2 \cdot 10^9$) so that $\min{(a_i, a_j)} = \min{(x, y)}$. Then change $a_i$ to $x$ and $a_j$ to $y$. The girl asks you to make the array good using \textbf{at most} $n$ operations. It can be proven that this is always possible.
There are many ways to solve the problem. Here is one of them: We will use the fact that $gcd(i, i + 1) = 1$ for any integer $i \ge 1$. Let's find the minimum element $x$ of the array $a$; suppose it is located at position $pos$. Then for every integer $i$ ($1 \le i \le n$, $i \neq pos$) perform the following operation: $(pos,\ i,\ x,\ x + abs(pos - i))$. That's how we replace $a_i$ with $x + abs(pos - i)$. The main condition $\min{(a_i, a_j)} = \min{(x, y)}$ is satisfied because $a_{pos}$ always equals $x$ and this value is never greater than any other element in the array $a$. Consider the structure of the array $a$ after performing the operations described above. Let's define $l = x + pos - 1$ and $r = x + n - pos$. These are the leftmost and the rightmost elements in the array $a$, respectively. The array $a$ looks like $[l,\ l - 1\ \ldots\ x + 1,\ x,\ x + 1\ \ldots\ r - 1,\ r]$. Thus, we obtain an absolute difference equal to one between all pairs of adjacent elements, so every pair of adjacent elements is coprime.
[ "constructive algorithms", "math", "number theory" ]
1300
#include "bits/stdc++.h" using namespace std; int main() { ios_base::sync_with_stdio(false); cin.tie(nullptr); cout.tie(nullptr); int q; cin >> q; while (q--) { int n; cin >> n; int x = 1e9 + 7, pos = -1; for (int i = 0; i < n; ++i) { int a; cin >> a; if (a < x) x = a, pos = i; } cout << n - 1 << endl; for (int i = 0; i < n; ++i) { if (i == pos) continue; cout << pos + 1 << ' ' << i + 1 << ' ' << x << ' ' << x + abs(i - pos) << "\n"; } } }
1521
C
Nastia and a Hidden Permutation
\textbf{This is an interactive problem!} Nastia has a hidden permutation $p$ of length $n$ consisting of integers from $1$ to $n$. You, for some reason, want to figure out the permutation. To do that, you can give her an integer $t$ ($1 \le t \le 2$), two \textbf{different} indices $i$ and $j$ ($1 \le i, j \le n$, $i \neq j$), and an integer $x$ ($1 \le x \le n - 1$). Depending on $t$, she will answer: - $t = 1$: $\max{(\min{(x, p_i)}, \min{(x + 1, p_j)})}$; - $t = 2$: $\min{(\max{(x, p_i)}, \max{(x + 1, p_j)})}$. You can ask Nastia \textbf{at most} $\lfloor \frac {3 \cdot n} { 2} \rfloor + 30$ times. It is guaranteed that she will \textbf{not} change her permutation depending on your queries. Can you guess the permutation?
Solution $1$: Let's fix $2$ indices $i$ and $j$ $(1 \le i, j \le n,$ $i \neq j)$. Then restore $p_{i}$ and $p_{j}$. Let's assume we know the maximum element among $p_i$ and $p_j$: $mx = \max(p_i, p_j)$. Now we can figure out where exactly the maximum is by asking the following query: $val = \max{(\min{(mx - 1, p_i)}, \min{(mx, p_j)})}$. There are $2$ cases: $val = mx - 1$: it means $p_j \lt mx$, otherwise $val = \min{(mx, p_j)} = mx$. $val = mx$: it means $p_j = mx$, because $\min{(mx - 1, p_i)} \lt mx$. If we know where the maximum is located, we can easily find the remaining element: if $p_i = mx$, then $p_j = \min{(\max{(1, p_j)}, \max{(2, p_i)})}$; if $p_j = mx$, then $p_i = \min{(\max{(1, p_i)}, \max{(2, p_j)})}$. So we can solve the problem if we know $mx$. Let's find it: $mx = \max{(\min{(n - 1, p_i)}, \min{(n, p_j)})}$. Take a look at the case when $mx = n - 1$. Here we cannot be sure that $mx = \max(p_i, p_j)$: it is possible that $p_i = n$. Thus, if $mx = n - 1$, we spend an extra query to check whether $p_i = n$. We will ask: $val = \max{(\min{(n - 1, p_j)}, \min{(n, p_i)})}$. If $val = n$, then $p_i$ equals $n$; otherwise $mx = n - 1$ equals the real maximum among $p_i$ and $p_j$. As a result, we can restore any $2$ elements of the permutation. Let's split our permutation into $\lfloor \frac {n} {2} \rfloor$ pairs and restore them independently of each other. The total number of queries we perform is $\lfloor \frac {3 \cdot n} { 2} \rfloor + 2$: we spend $3$ queries to restore each pair of elements and no more than $2$ extra queries to make sure that $mx$ is correct. Solution $2$: Let's find the maximum of the permutation and then restore every other element $p_i$ $(1 \le i \le n)$ of the permutation by the query: $p_i = \min{(\max{(1, p_i)}, \max{(2, mx)})}$.
To find the maximum element, let's split the permutation into $\lfloor \frac {n} {2} \rfloor$ pairs and perform the following query on each of them: $mx = \max{(\min{(n - 1, p_i)}, \min{(n, p_j)})}$. If $mx = n$, then $j$ is the position of the maximum element. If $mx = n - 1$, then we need to make sure that $p_i \neq n$; let's make the same extra query as we do in Solution $1$. Note that if you don't find the maximum element among the $\lfloor \frac {n} {2} \rfloor$ pairs, then it is the remaining element when $n$ is odd. The total number of queries we perform is $n + \lfloor \frac {n} { 2} \rfloor + 1$: we ask $\lfloor \frac {n} { 2} \rfloor + 2$ queries to find the maximum of the permutation and $n - 1$ queries to restore the remaining elements.
[ "constructive algorithms", "interactive" ]
2000
#include "bits/stdc++.h" using namespace std; int ask(int t, int i, int j, int x) { cout << "? " << t << ' ' << i + 1 << ' ' << j + 1 << ' ' << x << endl; int val; cin >> val; if (val == -1) exit(0); return val; } int main() { ios_base::sync_with_stdio(false); cin.tie(nullptr); cout.tie(nullptr); int q; cin >> q; while (q--) { int n; cin >> n; vector<int> p(n, -1); for (int i = 1; i < n; i += 2) { int pos1 = i - 1, pos2 = i; int val = ask(1, pos1, pos2, n - 1); if (val == n - 1) { val = ask(1, pos2, pos1, n - 1); if (val == n) { p[pos1] = val; p[pos2] = ask(2, pos2, pos1, 1); continue; } } int get = ask(1, pos1, pos2, val - 1); if (get == val) { p[pos2] = val; p[pos1] = ask(2, pos1, pos2, 1); } if (get == val - 1) { p[pos1] = val; p[pos2] = ask(2, pos2, pos1, 1); } } if (p.back() == -1) { vector<bool> us(n + 1); for (int i = 0; i < n - 1; ++i) { us[p[i]] = true; } for (int i = 1; i <= n; ++i) { if (!us[i]) { assert(p[p.size() - 1] == -1); p[p.size() - 1] = i; } } } cout << "! "; for (int i = 0; i < n; ++i) { cout << p[i] << ' '; } cout << endl; } }
1521
D
Nastia Plays with a Tree
Nastia has an unweighted tree with $n$ vertices and wants to play with it! The girl will perform the following operation with her tree, as long as she needs: - Remove any existing edge. - Add an edge between any pair of vertices. What is the \textbf{minimum} number of operations Nastia needs to get a bamboo from a tree? A bamboo is a tree in which no node has a degree greater than $2$.
Let's define the variable $x$ as the minimum number of operations that we need to get a bamboo from a tree. Let's remove $x$ edges first and then add $x$ new ones to the graph. Consider the structure of the graph after removing $x$ edges. This is a forest with $x + 1$ connected components. It is easy to notice that each of the $x + 1$ connected components in the resulting forest must itself be a bamboo in order to get a bamboo after adding the new $x$ edges. The red edges are removed. Thus, we can get a bamboo from the forest of bamboos after removing $x$ edges by adding $x$ edges between leaves that are in different connected components of the forest. The green edges are added. So, the task is to find the minimum number of edge removals needed to get a forest of bamboos. The following greedy works: Let's define any vertex of the tree as a root. We will solve the problem for each of the subtrees $v$ $(1 \le v \le n)$. First, solve the problem for all child vertices of $v$. Then define the value $c_v$ as the number of children and the value $p_v$ as the ancestor of vertex $v$. There are $3$ cases: If $c_v \le 1$, then we don't remove anything. If $c_v = 2$, then we remove the edge $(p_v, v)$ if $p_v$ exists. If $c_v > 2$, then we remove the edge $(p_v, v)$ if $p_v$ exists, and any $c_v - 2$ existing edges from $v$ to its children. Take a look at the picture: The root of the tree is vertex $1$.
[ "constructive algorithms", "data structures", "dfs and similar", "dp", "dsu", "greedy", "implementation", "trees" ]
2500
#include <bits/stdc++.h> using namespace std; const int N = 1e5 + 7; struct edge { int v, u; }; vector<pair<edge, edge>> operations; int dp[N], answer = 0; bool isDeleted[N]; vector<pair<int, int>> g[N]; void dfs(int v, int p = -1) { int sz = (int)g[v].size() - (p != -1); for (auto to : g[v]) { if (to.first == p) continue; dfs(to.first, v); if (dp[to.first]) { --sz; ++answer; isDeleted[to.second] = true; } } if (sz >= 2) { dp[v] = true; for (auto to : g[v]) { if (to.first == p) continue; if (sz <= 2) break; if (!dp[to.first]) { --sz; ++answer; isDeleted[to.second] = true; } } } } vector<pair<int, int>> bamboos; bool used[N]; vector<int> leaves; void dfs2(int v, int root) { used[v] = true; int numberOfChildren = 0; for (auto to : g[v]) { if (used[to.first] || isDeleted[to.second]) continue; ++numberOfChildren; dfs2(to.first, root); } if (v == root && numberOfChildren == 1) { leaves.push_back(v); } else if (!numberOfChildren) { leaves.push_back(v); } } void clear(int n) { answer = 0; for (int i = 0; i < n; ++i) { dp[i] = 0; g[i].clear(); used[i] = isDeleted[i] = false; } bamboos.clear(); operations.clear(); } signed main() { ios_base::sync_with_stdio(false); cin.tie(nullptr); cout.tie(nullptr); int q; cin >> q; while (q--) { int n; cin >> n; for (int i = 0; i < n - 1; ++i) { int a, b; cin >> a >> b; --a; --b; g[a].push_back({b, i}); g[b].push_back({a, i}); } dfs(0); for (int i = 0; i < n; ++i) { if (!used[i]) { dfs2(i, i); assert((int)leaves.size() <= 2); if ((int)leaves.size() == 2) { bamboos.push_back({leaves[0], leaves[1]}); } if ((int)leaves.size() == 1) { bamboos.push_back({leaves.back(), leaves.back()}); } leaves.clear(); } } vector<edge> deletedEdges, addedEdges; for (int v = 0; v < n; ++v) { for (auto to : g[v]) { if (isDeleted[to.second]) { if (v < to.first) { deletedEdges.push_back({v, to.first}); } } } } for (int i = 1; i < (int)bamboos.size(); ++i) { addedEdges.push_back({bamboos[i - 1].second, bamboos[i].first}); } assert(answer == (int)deletedEdges.size()); assert((int)deletedEdges.size() == (int)addedEdges.size()); for (int i = 0; i < answer; ++i) { operations.push_back({deletedEdges[i], addedEdges[i]}); } assert(answer == (int)operations.size()); cout << answer << endl; for (pair<edge, edge> to : operations) { cout << to.first.v + 1 << ' ' << to.first.u + 1 << ' '; cout << to.second.v + 1 << ' ' << to.second.u + 1 << endl; } clear(n); } }
1521
E
Nastia and a Beautiful Matrix
You like numbers, don't you? Nastia has a lot of numbers and she wants to share them with you! Isn't it amazing? Let $a_i$ be how many numbers $i$ ($1 \le i \le k$) you have. An $n \times n$ matrix is called beautiful if it contains \textbf{all} the numbers you have, and for \textbf{each} $2 \times 2$ submatrix of the original matrix is satisfied: - The number of occupied cells doesn't exceed $3$; - The numbers on each diagonal are distinct. Make a beautiful matrix of \textbf{minimum} size.
Let's fix $n$ and check whether we can build a beautiful matrix of this size. Let's define the variable $mx$ as the maximum element among all elements of the array $a$, in other words, the amount of the most frequently occurring number we have. Also, define the variable $sum$ as the total amount of numbers we have. We can single out $2$ criteria: $sum \le n ^ 2 - \lfloor \frac {n} {2} \rfloor ^ 2$ and $mx \le n \cdot \lceil \frac {n} {2} \rceil$. This can be proved by splitting the matrix into disjoint $2 \times 2$ submatrices. Let's construct the structure of the matrix and provide an arrangement algorithm showing that if the previous $2$ conditions are satisfied, then we can always create a beautiful matrix of size $n$. Take a look at the picture: There are $4$ types of cells. White cells are empty, blue ones can contain any number. Let's fill the yellow and red cells in a way that they don't share any number. We will do it greedily: fill the red cells first. Take the most frequently occurring number $x$ and try to fill the red cells using only it. If all copies of $x$ fit there, then we take the next such number and continue filling the cells. If there are copies of $x$ that cannot be placed there, then we put all such remaining copies in blue cells. This is always possible because the number of blue cells and yellow cells together is $n \cdot \lceil \frac {n} {2} \rceil$, which is the upper bound on the value $mx$. In this way, we have filled the red cells and some blue cells. The other elements, which haven't been placed yet, can be placed arbitrarily because they won't conflict (by diagonal) due to the way the red cells were filled.
[ "binary search", "constructive algorithms", "dp", "greedy" ]
2700
#include "bits/stdc++.h" using namespace std; int main() { ios_base::sync_with_stdio(false); cin.tie(nullptr); cout.tie(nullptr); int q; cin >> q; while (q--) { int m, k; cin >> m >> k; pair<int, int> a[k]; for (int i = 0; i < k; ++i) { cin >> a[i].first, a[i].second = i + 1; } sort(a, a + k, greater<pair<int, int>>()); int mx = a[0].first; for (int n = 1; n <= m; ++n) { // mx <= n * ceil(n / 2) if (mx > n * (long long)((n + 1) / 2)) continue; // m <= n ^ 2 - floor(n / 2) ^ 2 if (m > n * (long long)n - (n / 2) * (long long)(n / 2)) continue; // answer = n vector<pair<int, int>> x, y, z; for (int i = 0; i < n; ++i) { for (int j = 0; j < n; ++j) { if ((i + j) % 2 == 1) { if (i % 2 == 0) x.push_back({i, j}); else y.push_back({i, j}); } else { if (i % 2 == 0) z.push_back({i, j}); } } } int ans[n][n]; for (int i = 0; i < n; ++i) { for (int j = 0; j < n; ++j) { ans[i][j] = 0; } } for (int i = 0; i < k; ++i) { vector<pair<int, int>> &cur = (x.empty() ? y : x); while (a[i].first && !cur.empty()) { pair<int, int> pos = cur.back(); ans[pos.first][pos.second] = a[i].second; cur.pop_back(); --a[i].first; } while(a[i].first--) { assert((int)z.size() > 0); pair<int, int> pos = z.back(); ans[pos.first][pos.second] = a[i].second; z.pop_back(); } } // print answer cout << n << endl; for (int i = 0; i < n; ++i) { for (int j = 0; j < n; ++j) { cout << ans[i][j] << ' '; } cout << endl; } break; } } }
1523
A
Game of Life
William really likes the cellular automaton called "Game of Life" so he decided to make his own version. For simplicity, William decided to define his cellular automaton on an array containing $n$ cells, with each cell either being alive or dead. Evolution of the array in William's cellular automaton occurs iteratively in the following way: - If the element is dead and it has \textbf{exactly} $1$ alive neighbor \textbf{in the current state of the array}, then on the next iteration it will become alive. For an element at index $i$ the neighbors would be elements with indices $i - 1$ and $i + 1$. If there is no element at that index, it is considered to be a dead neighbor. - William is a humane person so all alive elements stay alive. Check the note section for examples of the evolution. You are given some initial state of all elements and you need to help William find the state of the array after $m$ iterations of evolution.
Notice that the evolution goes on for no more than $n$ iterations: on each iteration at least one new living cell appears, and if none appears, the array is in the same state as on the previous step, so it will never change again and the simulation is over. Knowing this, we can write a simple simulation of the process described in the problem statement, running $\min(n, m)$ iterations and processing each one in $O(n)$. Final complexity: $O(n^2)$.
[ "implementation" ]
800
null
1523
B
Lord of the Values
While trading on his favorite exchange trader William realized that he found a vulnerability. Using this vulnerability he could change the values of certain internal variables to his advantage. To play around he decided to change the values of all internal variables from $a_1, a_2, \ldots, a_n$ to $-a_1, -a_2, \ldots, -a_n$. For some unknown reason, the number of service variables is always an even number. William understands that with his every action he attracts more and more attention from the exchange's security team, so the number of his actions must not exceed $5\,000$ and after every operation no variable can have an absolute value greater than $10^{18}$. William can perform actions of two types for two chosen variables with indices $i$ and $j$, where $i < j$: - Perform assignment $a_i = a_i + a_j$ - Perform assignment $a_j = a_j - a_i$ William wants you to develop a strategy that will get all the internal variables to the desired values.
Notice that any pair of numbers $(a, b)$ can be transformed into the pair $(-a, -b)$ by performing the sequence of operations $(1, 2, 1, 2, 1, 2)$. Since $n$ is even, we can apply this sequence of operations to every pair of numbers $(a_{i \cdot 2 - 1}, a_{i \cdot 2})$ for all $i$ from $1$ to $\frac{n}{2}$, using $3n$ operations in total. Final complexity: $O(n)$.
[ "constructive algorithms" ]
1100
null
1523
C
Compression and Expansion
William is a huge fan of planning ahead. That is why he starts his morning routine by creating a nested list of upcoming errands. A valid nested list is any list which can be created from a list with one item "1" by applying some operations. Each operation inserts a new item into the list, \textbf{on a new line}, just after one of the existing items $a_1 \,.\, a_2 \,.\, a_3 \,.\, \,\cdots\, \,.\,a_k$ and can be one of two types: - Add an item $a_1 \,.\, a_2 \,.\, a_3 \,.\, \cdots \,.\, a_k \,.\, 1$ (starting a list of a deeper level), or - Add an item $a_1 \,.\, a_2 \,.\, a_3 \,.\, \cdots \,.\, (a_k + 1)$ (continuing the current level). An operation can only be applied if the list does not contain two identical items afterwards. Also, if we consider every item as a sequence of numbers, the sequence of items should always remain increasing in lexicographical order. Examples of valid and invalid lists shown in the picture can be found in the "Notes" section. When William decided to save a Word document with the list of his errands he accidentally hit a completely different keyboard shortcut from the "Ctrl-S" he wanted to hit. It's not known exactly what shortcut he pressed but after triggering it all items in the list were replaced by a single number: the last number originally written in the item. William wants you to help him restore a fitting original nested list.
Let's maintain the current item of the list in a stack, one number per nesting level. Initially the stack is empty. For each new $a_i$ there are two options: If $a_i = 1$, we just push $1$ onto the stack: it starts a new subitem of a deeper level. If $a_i > 1$, we need to find the item whose last number is one less than $a_i$: pop elements from the stack until the top equals $a_i - 1$, then replace the top with $a_i$. At the end of each iteration we print the whole stack as the next item of the list. Note that due to outputting the whole list the complexity is quadratic. Final complexity: $O(n^2)$.
[ "brute force", "data structures", "greedy", "implementation", "trees" ]
1600
null
1523
D
Love-Hate
William is hosting a party for $n$ of his trader friends. They started a discussion on various currencies they trade, but there's an issue: not all of his trader friends like every currency. They like some currencies, but not others. For each William's friend $i$ it is known whether he likes currency $j$. There are $m$ currencies in total. It is also known that a trader may not like more than $p$ currencies. Because friends need to have some common topic for discussions they need to find the largest by cardinality (possibly empty) subset of currencies, such that there are at least $\lceil \frac{n}{2} \rceil$ friends (rounded up) who like each currency in this subset.
Notice that the final answer will be a submask of the mask of one of the at least $\lceil \frac{n}{2} \rceil$ friends who like every currency in it, so the mask of a uniformly random friend is such a supermask with probability at least $\frac{1}{2}$. If we repeat the check with 50 randomly-generated indices, the probability of not hitting a single friend from the required group is at most $(\frac{1}{2})^{50}$. This probability is 1125899906 times smaller than the probability of the contestant being hit by a falling meteorite, i.e. insignificant. Now that we have some basis mask $mask$ we can calculate the maximal answer for it. To do this let's "compress" the mask of each friend to a size no larger than $p$ by only keeping those true bits which are also true in $mask$. Now for each mask $s$ of length $p$ we can calculate $cnd_{s}$ — the number of friends whose compressed mask is exactly $s$. Then for each mask $s$ let $cnt_{s}$ be the sum of all $cnd_{msk}$ such that $s$ is a submask of $msk$; this is the number of friends that like every currency of $s$. We can brute force all submasks in $O(3^p)$ by using an algorithm found $\href{https://cp-algorithms.com/algebra/all-submasks.html}{here}$. Now all we have to do is to pick the mask $best$ for which $\lceil \frac{n}{2} \rceil \le cnt_{best}$ and which has the largest number of true bits in its uncompressed state. Final complexity: $O(iters \cdot p \cdot (2^p + n))$ or $O(iters \cdot (3^p + n \cdot p))$, depending on the implementation.
[ "bitmasks", "brute force", "dp", "probabilities" ]
2400
null
1523
E
Crypto Lights
To monitor cryptocurrency exchange rates trader William invented a wonderful device consisting of $n$ lights arranged in a row. The device functions in the following way: Initially, all lights on William's device are turned off. At the beginning of a new iteration the device randomly, with a uniform distribution, picks a light that is turned off and turns it on, telling William which cryptocurrency he should invest in. After this iteration if any $k$ consecutive lights contain more than one turned on light, then the device finishes working. William doesn't like uncertainty, so he wants you to calculate the expected value of the number of lights that are turned on in the device after it finishes working.
Let's consider all states of the device where $p$ lights are turned on and the algorithm has not yet finished working, i.e. every two turned-on lights are at distance at least $k$. We will check all values of $p$ from $0$ to $n$. Notice that the probability of passing through a fixed such state is $\frac{p!}{n \cdot (n - 1) \cdot (n - 2) \cdots (n - p + 1)}$: the $p$ chosen lights may be turned on in any of $p!$ orders, and every ordered choice of $p$ lights out of $n$ is equally likely. Now for a given value of $p$ we need to count the fitting states. Without the condition about having a distance of $k$ between the lights, this would simply be the number of ways to choose $p$ lights out of $n$, which is $C(n, p)$. With the condition, after each of the first $p - 1$ chosen lights (in left-to-right order) we must additionally reserve $k - 1$ cells to provide the necessary distance, so the number of "free" cells available for the choice is $n - (k - 1) \cdot (p - 1)$, and the number of fitting states is $C(n - (k - 1) \cdot (p - 1), p)$. Finally, for the number of lights $X$ at the end we have $E[X] = \sum_{p \ge 0} P[X > p]$, and $X > p$ exactly when the first $p$ turned-on lights do not stop the device; so the answer is the sum over $p$ of these quantities, each multiplied by the probability above (the $p = 0$ term equals $1$).
[ "combinatorics", "dp", "math", "probabilities" ]
2600
null
1523
F
Favorite Game
After William is done with work for the day, he enjoys playing his favorite video game. The game happens in a 2D world, starting at turn $0$. William can pick any cell in the game world and spawn in it. Then, each turn, William may remain at his current location or move from the current location (x, y) to one of the following locations: (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1). To accelerate movement the game has $n$ fast travel towers. The $i$-th tower is located at location ($xa_i, ya_i$). To be able to instantly travel to the tower from any location in the game world it must first be activated. Activation of tower $i$ happens at the moment when the player is in cell ($xa_i, ya_i$); after this the tower remains active throughout the entire game. William also knows that the game has $m$ quests. The $i$-th quest can be completed instantly by being at location ($xb_i, yb_i$) on turn $t_i$. William wants to find out the maximal number of quests he will be able to complete by optimally traversing the game world.
For convenience we will sort quests by time. Let's define two DPs: $F(mask, done_q)$ - minimum amount of time it takes to visit the set of $mask$ towers and complete $done_q$ quests. William is in one of the towers. $G(mask, q)$ - maximum number of quests that William can complete if he visited a set of $mask$ towers, and the last completed quest is quest number $q$. It is assumed that the current time for this state is equal to $t_q$ and William is at the coordinate $(xb_q, yb_q)$. DP transitions will be as follows: Transition describing a visit to the new tower $tower_{new}$, assuming William was in a tower before: $F(mask, done_q) + distance_{tower}(mask, tower_{new}) \underset{minimize}{\rightarrow} F(mask \cup tower_{new}, done_q)$, $tower_{new} \notin mask$, where $distance_{tower}(mask, tower_{new})$ - minimum distance among all towers from the set $mask$ to the tower $tower_{new}$. Transition describing a visit to a new quest, assuming William was in a tower before: $(done_q + 1) \underset{maximize}{\rightarrow} G(mask, q)$, only if $F(mask, done_q) + distance_{quest}(mask, q) \leq t_q$, where $distance_{quest}(mask, q)$ - minimum distance among all towers from the set $mask$ to the quest $q$. Transition describing visiting a new quest immediately after the old one, without visiting new towers: $G(mask, q) + 1 \underset{maximize}{\rightarrow} G(mask, q_{new})$, only if $t_q + min(distance(q, q_{new}), distance_{quest}(mask, q_{new})) \leq t_{q_{new}}$, where $distance(q, q_{new})$ - distance between two quests (without instant travel), $min(distance(q, q_{new}), distance_{quest}(mask, q_{new}))$ - choosing the optimal route: either directly to the quest, or through the tower. 
Transition describing a visit to a new tower, immediately after William visited quest number $q$: $t_q + min(distance(q, tower_{new}), distance_{tower}(mask, tower_{new})) \underset{minimize}{\rightarrow} F(mask \cup tower_{new}, G(mask, q))$, $tower_{new} \notin mask$, where $distance(q, tower_{new})$ - distance from quest number $q$ to tower number $tower_{new}$, $distance_{tower}(mask, tower_{new})$ - minimum distance from one of the towers from the set $mask$ to the tower $tower_{new}$. Initial states will be: $G(0, q) = 1$, for each quest $q$ - William can choose any starting coordinate. Therefore, he can choose to start at the coordinate with the quest and wait for it. All other states initialized as $G = -\infty$. $F(\{tower\}, 0) = 0$, for each tower $tower$ - William can choose any starting coordinate. Therefore, he can choose the coordinate of the tower and be in it at time $0$. All other states initialized as $F = \infty$. Answer: The answer is the maximum value among all $G(mask, q)$. Time complexity: There are $O(2^N \cdot M)$ states in total. From each state there are $O(N + M)$ transitions. Hence the time complexity is $O(2^N \cdot M \cdot (N + M))$.
[ "bitmasks", "dp" ]
3300
null
1523
G
Try Booking
William owns a flat in central London. He decided to rent his flat out for the next $n$ days to earn some money. Since his flat is in the center of the city, he instantly got $m$ offers in the form $(l_i, r_i)$, which means that someone wants to book the flat from day $l_i$ until day $r_i$ inclusive. To avoid spending a lot of time figuring out whether it's profitable for him to accept an offer, William decided to develop an algorithm. The algorithm processes all offers as they arrive and will only accept offer $i$ if the following two conditions are satisfied: - $r_i - l_i + 1 \ge x$. - None of the days between $l_i$ and $r_i$ are occupied by a previously accepted offer. William isn't sure what value $x$ should have and he asks you for help. For all $x$ from $1$ to $n$ he wants you to calculate the total number of days for which the flat would be occupied if the corresponding value is assigned to $x$.
Note that if you think of the answer as the number of accepted offers, the sum of the answers over all thresholds in the worst case is no more than $\frac{n}{n}+\frac{n}{n - 1}+\ldots+\frac{n}{1}$, which is $O(n \log n)$. So for each $i$ we can try to learn how to process only those offers that are actually included in the answer for this $i$. Let's learn how to solve the problem recursively for a fixed $i$ and assume that all segments have a length of at least $i$. The recursion will contain two parameters $l$ and $r$ — a contiguous segment of days on which all days are still free. Now, if we learn how to find the offer with the minimum $id$ for the segment $l$ and $r$ such that $l \le l_{id} \le r_{id} \le r$, then we can solve our problem: accept that offer and recursively find the answer for $l..l_{id}-1$ and $r_{id}+1..r$. To find the minimum $id$, you can use a 2D data structure that supports the following queries: Get: minimum on a prefix submatrix (one of the corners of the submatrix lies in the cell $(1, 1)$). The minimum can be found on a prefix submatrix because we are dealing with segments for which the condition $l \le r$ is satisfied. Update: update the value in the cell $(x, y)$ with a strictly smaller number than was there before. In the author's solution such a data structure was a 2D segment tree. We can iterate $i$ from $n$ down to $1$ so that we only need to add new segments. Total complexity: $O(n \cdot \log(n)^3 + m \cdot \log(n)^2)$.
[ "data structures", "divide and conquer" ]
3200
null
1523
H
Hopping Around the Array
William really wants to get a pet. Since his childhood he dreamt about getting a pet grasshopper. William is being very responsible about choosing his pet, so he wants to set up a trial for the grasshopper! The trial takes place on an array $a$ of length $n$, which defines lengths of hops for each of $n$ cells. A grasshopper can hop around the cells according to the following rule: from a cell with index $i$ it can jump to any cell with indices from $i$ to $i+a_i$ inclusive. Let's call the $k$-grasshopper value of some array the smallest number of hops it would take a grasshopper to hop from the first cell to the last, but before starting you can select no more than $k$ cells and remove them from the array. When a cell is removed all other cells are renumbered but the values of $a_i$ for each cell remain the same. \textbf{During this the first and the last cells may not be removed.} It is required to process $q$ queries of the following format: you are given three numbers $l$, $r$, $k$. You are required to find the $k$-grasshopper value for an array, which is a subarray of the array $a$ with elements from $l$ to $r$ inclusive.
Let's look at all jumps of the optimal answer except the last one, in the case $k = 0$. From a position $i$ it is most effective to jump to the position $best$ such that $i \le best \le i+a_i$ and $a_{best} + best$ is maximal possible. For each $i$ we can find that optimal $best$ with a segment tree for maximum. Let's also notice that if we want to jump from position $i$ having removed $x$ cells whose indices are from $i+1$ to $a_i+i$, then we want to end up in cell $a_i+i+x$, because otherwise we could do fewer removals. These observations are sufficient to apply dynamic programming. Let $dp[t][i][k]$ store the maximal pair $(a_j + j, j)$ over positions $j$ in which we can end up jumping from position $i$, having performed $2^t$ jumps and removed at most $k$ cells. For $t=0$ we can initialize the DP with the optimal $best$. Now we iterate over $t$, $i$ and $k$ in increasing order and see how to recalculate the DP. Let's iterate over $k1$ — the number of removals during the first $2^{t-1}$ jumps. Then the value $dp[t-1][i][k1]$ tells us in which position we stopped, and we need to perform $2^{t-1}$ more jumps from that position with at most $k-k1$ removals. Additionally, let's introduce another DP $imax[t][i][k]$ — the maximal position of the initial array in which we can end up jumping from position $i$, having performed $2^t$ jumps and removed at most $k$ cells. Just like the previous DP, we can initialize it using $best$, and then recalculate it similarly, but this time using both DPs — $imax$ and $dp$. Note that both DPs assume that when we start in element $i$ we do not remove it. Let's see how to answer a query using the calculated DPs. To begin with, let's consider the particular cases. If $l=r$, then the answer is equal to $0$. If $a_l + l + k \ge r$, then the answer is equal to $1$. 
Next, let's build one more DP $now_j$, in which we maintain a triple $(cnt, a_i + i, i)$: the first number is the number of jumps (we minimize it), the two following numbers describe our current position $i$, and we maximize them as a pair $(a_i + i, i)$. Using this DP we want to end up in the next-to-last cell of the optimal path, having removed at most $j$ cells. Initially, the values of the DP are equal to $(0, a_l + l, l)$. Let's iterate over $t$ in decreasing order (that is, we iterate over blocks of $2^t$ jumps). Let's create a DP $go$, which we recalculate on the current step, and then set $now=go$ (in other words, we recalculate the DP $now$). Let's iterate over $k1$ — the number of removals during the preceding steps — and $k2$ — the number of removals we are going to make on the current step. The value $now_{k1}$ tells us in which position we stopped, and with the DP $imax$ we can check whether performing $2^t$ jumps would overjump $r$ (or end up in it). If we cannot overjump $r$, we try to update the value $go_{k1+k2}$; otherwise, it is not profitable to perform the jumps, so we update $go_{k1}$ instead. As a result, we obtain the final DP $now_j$. Let's iterate over it, find all $j$ such that we can reach $r$ with one jump, and take the minimal answer over these $j$ (don't forget to add $1$, since the DP brought us only to the next-to-last cell). We calculate the DPs in $O(n \cdot k^2 \cdot \log(n))$ and answer each query in $O(k^2 \cdot \log(n))$, thus the solution works in $O((n + q) \cdot k^2 \cdot \log(n))$.
[ "data structures", "dp" ]
3500
null
1525
A
Potion-making
You have an initially empty cauldron, and you want to brew a potion in it. The potion consists of two ingredients: magic essence and water. The potion you want to brew should contain exactly $k\ \%$ magic essence and $(100 - k)\ \%$ water. In one step, you can pour either one liter of magic essence or one liter of water into the cauldron. What is the minimum number of steps to brew a potion? You don't care about the total volume of the potion, only about the ratio between magic essence and water in it. A small reminder: if you pour $e$ liters of essence and $w$ liters of water ($e + w > 0$) into the cauldron, then it contains $\frac{e}{e + w} \cdot 100\ \%$ (without rounding) magic essence and $\frac{w}{e + w} \cdot 100\ \%$ water.
Since you need the $e$ liters of essence to be exactly $k\ \%$ of the potion, we can write the equality $\frac{e}{e + w} = \frac{k}{100}$, that is, $k = x \cdot e$ and $100 = x \cdot (e + w)$ for some integer $x$. Since we need to minimize $e + w$ and $x \cdot (e + w) = 100$, we should maximize $x$, but both $k$ and $100$ must be divisible by $x$. In other words, taking $x$ as the greatest common divisor of $k$ and $100$ is optimal. As a result, $e + w = \frac{100}{x} = \frac{100}{\gcd(k, 100)}$.
[ "math", "number theory" ]
800
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int k;
        cin >> k;
        cout << 100 / gcd(100, k) << endl;
    }
    return 0;
}
1525
B
Permutation Sort
You are given a permutation $a$ consisting of $n$ numbers $1$, $2$, ..., $n$ (a permutation is an array in which each element from $1$ to $n$ occurs exactly once). You can perform the following operation: choose some subarray (contiguous subsegment) of $a$ and rearrange the elements in it in any way you want. But this operation cannot be applied to the whole array. For example, if $a = [2, 1, 4, 5, 3]$ and we want to apply the operation to the subarray $a[2, 4]$ (the subarray containing all elements from the $2$-nd to the $4$-th), then after the operation, the array can become $a = [2, 5, 1, 4, 3]$ or, for example, $a = [2, 1, 5, 4, 3]$. Your task is to calculate the minimum number of operations described above to sort the permutation $a$ in ascending order.
To solve the problem, it is enough to consider several cases: if the array is already sorted, the answer is $0$; if $a[1] = 1$ (or $a[n] = n$), then you can sort the array in one operation by selecting the subarray $[1, n-1]$ (or $[2, n]$); if $a[1] = n$ and $a[n] = 1$, you can perform the sequence of operations $[1, n-1]$, $[2, n]$ and $[1, n-1]$ and sort the array on each of them (you can't do it faster since you can't move both $n$ to position $n$ and $1$ to position $1$ in only $2$ operations); otherwise, the array can be sorted in $2$ operations.
[ "constructive algorithms", "greedy" ]
900
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        scanf("%d", &n);
        vector<int> a(n);
        for (int &x : a) scanf("%d", &x);
        int ans = 2;
        if (is_sorted(a.begin(), a.end())) ans = 0;
        else if (a[0] == 1 || a[n - 1] == n) ans = 1;
        else if (a[0] == n && a[n - 1] == 1) ans = 3;
        printf("%d\n", ans);
    }
}
1525
C
Robot Collisions
There are $n$ robots driving along an OX axis. There are also two walls: one is at coordinate $0$ and one is at coordinate $m$. The $i$-th robot starts at an integer coordinate $x_i~(0 < x_i < m)$ and moves either left (towards the $0$) or right with the speed of $1$ unit per second. No two robots start at the same coordinate. Whenever a robot reaches a wall, it turns around instantly and continues his ride in the opposite direction with the same speed. Whenever several robots meet at the same \textbf{integer} coordinate, they collide and explode into dust. Once a robot has exploded, it doesn't collide with any other robot. Note that if several robots meet at a non-integer coordinate, nothing happens. For each robot find out if it ever explodes and print the time of explosion if it happens and $-1$ otherwise.
Notice that the robots that start at even coordinates can never collide with the robots that start at odd coordinates. You can see that if a robot starts at an even coordinate, it'll be at an even coordinate on an even second and at an odd coordinate on an odd second. Thus, we'll solve the even and the odd cases separately. Sort the robots by their starting coordinate. Apparently, that step was an inconvenience for some of you. There is a common trick that can help you to implement that. Create a separate array of integer indices $1, 2, \dots, n$ and sort them with a comparator that looks up the value by indices provided to tell the order. This gives you the order of elements and doesn't require you to modify the original data in any way. Consider the task without reflections of the wall. Take a look at the first robot. If it goes to the left, then nothing ever happens to it. Otherwise, remember that it goes to the right. Look at the next one. If it goes to the left, then it can collide with the first one if that went to the right. Otherwise, remember that it also goes to the right. Now for the third one. If this one goes to the left, who does it collide with? Obviously, the rightmost alive robot that goes to the right. So the idea is to keep a stack of the alive robots. If a robot goes to the left, then check if the stack is empty. If it isn't, then the top of the stack robot is the one who will collide with it. Pop it from the stack, since it explodes. If a robot goes to the right, simply push it to the stack. The time of the collision is just the distance between the robots divided by $2$. If there are robots left in the stack after every robot is processed, then they all go to the right together, so they never collide. What changes when the reflections are introduced? Almost nothing, actually. Well, now if the stack is empty and a robot goes to the left, then it behaves as a one going to the right. You can reflect the part of the way from its start to the wall. 
Just say that instead of starting at some $x$ going to the left, it starts at $-x$ going to the right. Since there's no one alive to the left of him initially, that will change nothing. That $-x$ should be used for computing the collision time. However, the final robots in the stack also act differently. First, the top of the stack robot reflects off the wall and collides with the second robot on the stack. Then the third and the fourth and so on. So you can pop them in pairs until $0$ or $1$ robots are left. The coordinate reflection trick can be used here as well. Imagine that the top of the stack robot starts at $m + (m - x)$ and goes to the left instead of starting in $x$ going to the right. For the same reason it changes nothing. Overall complexity: $O(n \log n)$.
[ "data structures", "greedy", "implementation", "sortings" ]
2000
#include <bits/stdc++.h>

#define forn(i, n) for (int i = 0; i < int(n); i++)

using namespace std;

struct bot {
    int x, d;
};

int main() {
    int t;
    cin >> t;
    forn(_, t) {
        int n, m;
        scanf("%d%d", &n, &m);
        vector<bot> a(n);
        forn(i, n) scanf("%d", &a[i].x);
        forn(i, n) {
            char c;
            scanf(" %c", &c);
            a[i].d = c == 'L' ? -1 : 1;
        }
        vector<int> ord(n);
        iota(ord.begin(), ord.end(), 0);
        sort(ord.begin(), ord.end(), [&a](int x, int y) {
            return a[x].x < a[y].x;
        });
        vector<int> ans(n, -1);
        vector<vector<int>> par(2);
        for (int i : ord) {
            int p = a[i].x % 2;
            if (a[i].d == -1) {
                if (par[p].empty())
                    par[p].push_back(i);
                else {
                    int j = par[p].back();
                    par[p].pop_back();
                    ans[i] = ans[j] = (a[i].x - (a[j].d == 1 ? a[j].x : -a[j].x)) / 2;
                }
            }
            else {
                par[p].push_back(i);
            }
        }
        forn(p, 2) {
            while (int(par[p].size()) > 1) {
                int i = par[p].back();
                par[p].pop_back();
                int j = par[p].back();
                par[p].pop_back();
                ans[i] = ans[j] = (2 * m - a[i].x - (a[j].d == 1 ? a[j].x : -a[j].x)) / 2;
            }
        }
        forn(i, n) {
            printf("%d ", ans[i]);
        }
        puts("");
    }
    return 0;
}
1525
D
Armchairs
There are $n$ armchairs, numbered from $1$ to $n$ from left to right. Some armchairs are occupied by people (at most one person per armchair), others are not. The number of occupied armchairs is not greater than $\frac{n}{2}$. For some reason, you would like to tell people to move from their armchairs to some other ones. If the $i$-th armchair is occupied by someone and the $j$-th armchair is not, you can tell the person sitting in the $i$-th armchair to move to the $j$-th armchair. The time it takes a person to move from the $i$-th armchair to the $j$-th one is $|i - j|$ minutes. You may perform this operation any number of times, but these operations must be done sequentially, i. e. you cannot tell a person to move until the person you asked to move in the last operation has finished moving to their destination armchair. You want to achieve the following situation: every seat that was initially occupied must be free. What is the minimum time you need to do it?
Let's say that the starting positions of people are $x_1, x_2, \dots, x_k$ (in sorted order) and the ending positions of people are $y_1, y_2, \dots, y_k$ (also in sorted order). It's always optimal to match these starting and ending positions in sorted order: the leftmost starting position is matched with the leftmost ending, the second starting position is matched with the second ending, and so on. To prove it, suppose that position $X_1$ is matched with $Y_2$, position $X_2$ is matched with $Y_1$, $X_1 \le X_2$ and $Y_1 \le Y_2$. If both persons go to the left or to the right, it means that either $X_1 \le X_2 \le Y_1 \le Y_2$ or $Y_1 \le Y_2 \le X_1 \le X_2$, so nothing changes if we swap the matched positions. If, instead, the person that goes from $X_1$ to $Y_2$ goes to the right, and the person that goes from $X_2$ to $Y_1$ goes to the left, the segment $[\max(X_1, Y_1), \min(X_2, Y_2)]$ belongs to both paths, and swapping the matched pairs removes this segment from both paths (and decreases the total time). So, if the order of starting positions is sorted and the order of ending positions is sorted, these positions should be matched exactly in that order. Using this fact, we can implement the following dynamic programming: let $dp_{i, j}$ be the minimum time if we considered the $i$ first positions and picked $j$ of them as the ending ones. Transitions are the following: we either take the current position as an ending one (if it's not a starting one), match it with the $j$-th starting position and go to $dp_{i+1, j+1}$, or we skip the current position and go to $dp_{i+1,j}$. It works in $O(n^2)$ since it has up to $O(n^2)$ states and just up to $2$ transitions from each state.
[ "dp", "flows", "graph matchings", "greedy" ]
1800
#include <bits/stdc++.h>
using namespace std;

const int INF = int(1e9);

int main() {
    int n;
    cin >> n;
    vector<int> a(n);
    for (int i = 0; i < n; i++)
        cin >> a[i];
    vector<int> pos;
    for (int i = 0; i < n; i++)
        if (a[i] == 1)
            pos.push_back(i);
    int k = pos.size();
    vector<vector<int>> dp(n + 1, vector<int>(k + 1, INF));
    dp[0][0] = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j <= k; j++) {
            if (dp[i][j] == INF) continue;
            dp[i + 1][j] = min(dp[i + 1][j], dp[i][j]);
            if (j < k && a[i] == 0)
                dp[i + 1][j + 1] = min(dp[i + 1][j + 1], dp[i][j] + abs(pos[j] - i));
        }
    cout << dp[n][k] << endl;
}
1525
E
Assimilation IV
Monocarp is playing a game "Assimilation IV". In this game he manages a great empire: builds cities and conquers new lands. Monocarp's empire has $n$ cities. In order to conquer new lands he plans to build \textbf{one Monument in each city}. The game is turn-based and, since Monocarp is still amateur, he builds exactly one Monument per turn. Monocarp has $m$ points on the map he'd like to control using the constructed Monuments. For each point he knows the distance between it and each city. Monuments work in the following way: when built in some city, a Monument controls all points at distance at most $1$ to this city. Next turn, the Monument controls all points at distance at most $2$, the turn after — at distance at most $3$, and so on. Monocarp will build $n$ Monuments in $n$ turns and his empire will conquer all points that are controlled by at least one Monument. Monocarp can't figure out any strategy, so during each turn he will choose a city for a Monument randomly among all remaining cities (cities without Monuments). Monocarp wants to know how many points (among $m$ of them) he will conquer at the end of turn number $n$. Help him to calculate the expected number of conquered points!
Let $I(j)$ be the indicator function equal to $1$ if the $j$-th point is controlled by any city and $0$ otherwise. Then the expected number of controlled points $ans$ can be written as $E(\sum\limits_{j=1}^{m}{I(j)}) = \sum\limits_{j=1}^{m}{E(I(j))}$ (by linearity of expected value). The expected value of the indicator function is equal to the probability of this function being equal to $1$ ($E(I(j)) = P[I(j) = 1]$). In other words, for each point we need to calculate the probability of this point being controlled by any city. Let's instead calculate the probability of point $j$ not being controlled by any city. Suppose the distance between point $j$ and some city $i$ is equal to $x$. If we build a Monument in city $i$ at step $< n + 1 - x$ (zero-indexed), then the point will be controlled by city $i$; but building the Monument at any step greater than or equal to $n + 1 - x$ is fine. For each turn $k \in [0, n)$, let's calculate $cnt[k]$ — the number of cities in which building a Monument becomes safe starting from this turn. Our task is to calculate the number of permutations that are consistent with the array $cnt$. At the first turn, we can choose one of $cnt[0]$ cities, at the second turn we have $cnt[0] + cnt[1] - 1$ choices, at the third step — $cnt[0] + cnt[1] + cnt[2] - 2$ choices, and so on. Using this idea, it's not hard to calculate the number of good permutations and then the initial probability $P[I(j) = 1]$ $\equiv$ $1 - \text{good\_permutations} \cdot (n!)^{-1} \pmod{998244353}$. The expected value $ans \equiv \sum\limits_{j=1}^{m}{P[I(j) = 1]} \pmod{998244353}$.
[ "combinatorics", "dp", "math", "probabilities", "two pointers" ]
2,100
#include <bits/stdc++.h>
using namespace std;

#define fore(i, l, r) for (int i = int(l); i < int(r); i++)
#define sz(a) int((a).size())
#define x first
#define y second

typedef long long li;
typedef pair<int, int> pt;

const int MOD = 998244353;

int norm(int a) {
    while (a >= MOD) a -= MOD;
    while (a < 0) a += MOD;
    return a;
}

int mul(int a, int b) { return int(a * 1ll * b % MOD); }

int binPow(int a, int k) {
    int ans = 1;
    while (k > 0) {
        if (k & 1) ans = mul(ans, a);
        a = mul(a, a);
        k >>= 1;
    }
    return ans;
}

int inv(int a) { return binPow(a, MOD - 2); }

vector<vector<int>> d;
int n, m;

inline bool read() {
    if (!(cin >> n >> m)) return false;
    d.resize(n, vector<int>(m));
    fore (i, 0, n) fore (j, 0, m) cin >> d[i][j];
    return true;
}

inline void solve() {
    int invFact = 1;
    fore (i, 1, n + 1) invFact = mul(invFact, i);
    invFact = inv(invFact);
    int E = 0;
    fore (j, 0, m) {
        vector<int> cnt(n + 1, 0);
        fore (i, 0, n) cnt[n + 1 - d[i][j]]++;
        vector<int> d(n + 1, 0);
        d[0] = 1;
        int rem = 0;
        fore (i, 0, n) {
            rem += cnt[i];
            d[i + 1] = norm(d[i + 1] + mul(d[i], rem));
            rem = max(0, rem - 1);
        }
        // cerr << d[n] << " - " << norm(1 - mul(d[n], invFact)) << endl;
        E = norm(E + 1 - mul(d[n], invFact));
    }
    cout << E << endl;
}

int main() {
#ifdef _DEBUG
    freopen("input.txt", "r", stdin);
    int tt = clock();
#endif
    ios_base::sync_with_stdio(false);
    cin.tie(0), cout.tie(0);
    cout << fixed << setprecision(15);
    if (read()) {
        solve();
#ifdef _DEBUG
        cerr << "TIME = " << clock() - tt << endl;
        tt = clock();
#endif
    }
    return 0;
}
1525
F
Goblins And Gnomes
Monocarp plays a computer game called "Goblins and Gnomes". In this game, he manages a large underground city of gnomes and defends it from hordes of goblins. The city consists of $n$ halls and $m$ one-directional tunnels connecting them. The structure of tunnels has the following property: if a goblin leaves any hall, he cannot return to that hall. The city will be attacked by $k$ waves of goblins; during the $i$-th wave, $i$ goblins attack the city. Monocarp's goal is to pass all $k$ waves. The $i$-th wave goes as follows: firstly, $i$ goblins appear in some halls of the city and pillage them; \textbf{at most one goblin appears in each hall}. Then, goblins start moving along the tunnels, pillaging all the halls in their path. Goblins are very greedy and cunning, so they choose their paths so that no two goblins pass through the same hall. Among all possible attack plans, they choose a plan which allows them to \textbf{pillage the maximum number of halls}. After goblins are done pillaging, they leave the city. If all halls are pillaged during the wave — Monocarp loses the game. Otherwise, the city is restored. If some hall is pillaged during a wave, goblins are still interested in pillaging it during the next waves. Before each wave, Monocarp can spend some time preparing for it. Monocarp doesn't have any strict time limits on his preparations (he decides when to call each wave by himself), but the longer he prepares for a wave, the fewer points he gets for passing it. If Monocarp prepares for the $i$-th wave for $t_i$ minutes, then he gets $\max(0, x_i - t_i \cdot y_i)$ points for passing it (obviously, if he doesn't lose in the process). While preparing for a wave, Monocarp can block tunnels. He can spend one minute to \textbf{either block all tunnels leading from some hall or block all tunnels leading to some hall}. If Monocarp blocks a tunnel while preparing for a wave, it stays blocked during the next waves as well.
Help Monocarp to defend against all $k$ waves of goblins and get the maximum possible amount of points!
First of all, let's try to solve the following problem: given a DAG, cover its vertices with the minimum number of vertex-disjoint paths. Solving this problem allows us to calculate the number of goblins that can pillage all of the halls when the tunnel network is fixed. This problem is a fairly classical one; since the number of vertices in each path is greater than the number of arcs in it by exactly $1$, we should take the maximum possible number of arcs into our paths. So we can reduce this problem to bipartite maximum matching - build a bipartite graph where each part consists of $n$ vertices, and for every directed arc $(x, y)$ in the original graph, connect the vertex $x$ of the left part to the vertex $y$ in the right part of the bipartite graph. The maximum matching in this graph allows us to pick the maximum number of arcs into the paths of the original problem (the matching ensures that each vertex has at most one chosen ingoing arc and at most one chosen outgoing arc, so the paths are vertex-disjoint). Now we can at least check whether a goblin wave can pillage all of the halls. Let's say that the minimum number of goblins required to pillage the original city is $c$. Obviously, in order to pass the $c$-th wave and the waves after it, we have to increase this number. In one minute, Monocarp can block all of the tunnels leading to some hall or out of some hall - in terms of our reduction to the bipartite matching problem, it means that we remove all edges connected to some vertex of the bipartite graph. Obviously, in one minute, we can increase $c$ by at most $1$, since $c$ is equal to the difference between $n$ and the maximum matching size. It turns out that it's always possible to choose a vertex that belongs to all maximum matchings in the bipartite graph, so removing it decreases the matching size by exactly $1$ (note that this doesn't work in non-bipartite graphs, but in our problem, it doesn't matter). For the proof of this fact, you can check the last paragraph of the editorial.
So, each minute Monocarp prepares for a wave, he increases the maximum number of goblins he can repel by $1$. Now the solution splits into two much easier parts. The first part is finding a sequence in which Monocarp blocks the tunnels, so that each of his actions reduces the size of the maximum matching by $1$. Since the constraints are small, even a naive approach in $O(n^5)$ - always iterate on the vertex we try to remove from the graph and check that removing it is possible by running Kuhn's algorithm - is fast enough. The second part is to choose when Monocarp calls waves of goblins and when he prepares for them - this can be easily done with dynamic programming: let $dp_{i,j}$ be the maximum score Monocarp can get if he has already passed $i$ waves, and the current size of the maximum matching is $j$. The most naive implementation of this dynamic programming runs in $O(n^3)$, so the whole solution works in $O(n^5)$. We can improve it to $O(n^3)$, though it is not needed under these constraints. Instead of finding the vertices to remove from the bipartite graph one-by-one, let's find all of them at once in $O(n^3)$. Recall that the size of the maximum matching in a bipartite graph is equal to the size of its minimum vertex cover, and the minimum vertex cover can be reconstructed after finding the maximum matching. If we remove a vertex from the minimum vertex cover, the size of the minimum vertex cover of the remaining graph is reduced by $1$, so the size of the maximum matching is reduced by $1$ as well. It means that we can always choose to remove a vertex from the minimum vertex cover we found. By the way, it also proves that it's always possible to remove a vertex from a bipartite graph so that the size of the maximum matching decreases by $1$ (obviously, unless it's already $0$).
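The path-cover-to-matching reduction from the first paragraph can be sketched as follows (Python, Kuhn's algorithm; `min_path_cover` is an illustrative helper, not the contest solution):

```python
def min_path_cover(n, edges):
    """Minimum number of vertex-disjoint paths covering a DAG on n
    vertices: equals n minus the maximum matching of the bipartite
    graph (left copy -> right copy) built from the arcs."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    match = [-1] * n  # match[right vertex] = matched left vertex

    def kuhn(u, used):
        # Try to find an augmenting path starting from left vertex u.
        for v in adj[u]:
            if not used[v]:
                used[v] = True
                if match[v] == -1 or kuhn(match[v], used):
                    match[v] = u
                    return True
        return False

    matching = sum(kuhn(u, [False] * n) for u in range(n))
    return n - matching
```

For the chain $0 \to 1 \to 2$ this returns $1$ (one path covers everything), while a star $0 \to 1$, $0 \to 2$ needs $2$ paths.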
[ "brute force", "dp", "flows", "graph matchings" ]
2,800
#include <bits/stdc++.h>
using namespace std;

const int N = 543;
const long long INF = (long long)(1e18);

struct Matching {
    int n1, n2;
    vector<set<int>> g;
    vector<int> mt, used;

    void init() { mt = vector<int>(n2, -1); }

    int kuhn(int x) {
        if (used[x] == 1) return 0;
        used[x] = 1;
        for (auto y : g[x])
            if (mt[y] == -1 || kuhn(mt[y]) == 1) {
                mt[y] = x;
                return 1;
            }
        return 0;
    }

    int calc() {
        init();
        int sum = 0;
        for (int i = 0; i < n1; i++) {
            used = vector<int>(n1, 0);
            sum += kuhn(i);
        }
        return sum;
    }

    void remove_vertex(int v, bool right) {
        if (right) {
            for (int i = 0; i < n1; i++) g[i].erase(v);
        } else
            g[v].clear();
    }

    void add_edge(int x, int y) { g[x].insert(y); }

    Matching() {}
    Matching(int n1, int n2) : n1(n1), n2(n2) { g.resize(n1); }
};

int n, m, k;
long long dp[N][N];
int p[N][N];
vector<int> g[N];
long long x[N], y[N];

int main() {
    cin >> n >> m >> k;
    for (int i = 0; i < m; i++) {
        int u, v;
        cin >> u >> v;
        --u; --v;
        g[u].push_back(v);
    }
    for (int i = 0; i < k; i++) cin >> x[i] >> y[i];
    Matching mt(n, n);
    for (int i = 0; i < n; i++)
        for (auto j : g[i]) mt.add_edge(i, j);
    int cnt = mt.calc();
    int cur = cnt;
    vector<int> seq;
    while (cur > 0) {
        int idx = 0;
        for (int i = 0; i < n; i++) {
            Matching mt2 = mt;
            mt2.remove_vertex(i, false);
            if (mt2.calc() < cur) idx = i + 1;
            mt2 = mt;
            mt2.remove_vertex(i, true);
            if (mt2.calc() < cur) idx = -(i + 1);
        }
        assert(idx != 0);
        seq.push_back(idx);
        mt.remove_vertex(abs(idx) - 1, idx < 0);
        cur--;
    }
    reverse(seq.begin(), seq.end());
    for (int i = 0; i <= k; i++)
        for (int j = 0; j <= cnt; j++) dp[i][j] = -INF;
    dp[0][cnt] = 0;
    for (int i = 0; i < k; i++)
        for (int j = 0; j <= cnt; j++) {
            if (dp[i][j] == -INF) continue;
            for (int z = 0; z <= j; z++) {
                if (i + 1 + z >= n) continue;
                int t = j - z;
                long long add = max(0ll, x[i] - t * y[i]);
                if (dp[i + 1][z] < dp[i][j] + add) {
                    dp[i + 1][z] = dp[i][j] + add;
                    p[i + 1][z] = j;
                }
            }
        }
    cur = max_element(dp[k], dp[k] + cnt + 1) - dp[k];
    vector<int> res;
    for (int i = k; i > 0; i--) {
        res.push_back(0);
        for (int j = p[i][cur] - 1; j >= cur; j--) res.push_back(seq[j]);
        cur = p[i][cur];
    }
    reverse(res.begin(), res.end());
    cout << res.size() << endl;
    for (auto x : res) cout << x << " ";
    cout << endl;
}
1526
A
Mean Inequality
You are given an array $a$ of $2n$ \textbf{distinct} integers. You want to arrange the elements of the array in a circle such that no element is equal to the arithmetic mean of its $2$ neighbours. More formally, find an array $b$, such that: - $b$ is a permutation of $a$. - For every $i$ from $1$ to $2n$, $b_i \neq \frac{b_{i-1}+b_{i+1}}{2}$, where $b_0 = b_{2n}$ and $b_{2n+1} = b_1$. It can be proved that under the constraints of this problem, such array $b$ always exists.
Notice that the array has even length. Usually in such problems, we would split the array into $2$ equal parts. Can you figure out what those $2$ parts are? We sort the array and split it into the small half and the big half. The main idea is to place the big half at the odd positions and the small half at the even positions. This works because the smallest big number is larger than the biggest small number. Hence, the mean of any two small numbers is smaller than any big number, and the mean of any two big numbers is bigger than any small number.
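The sort-and-interleave construction can be sketched as follows (Python; `arrange` is an illustrative name):

```python
def arrange(a):
    """Sort the 2n distinct values, split into small and big halves,
    and alternate big/small around the circle."""
    a = sorted(a)
    n = len(a) // 2
    small, big = a[:n], a[n:]
    b = []
    for i in range(n):
        b.append(big[i])    # odd (1-indexed) position: big half
        b.append(small[i])  # even position: small half
    return b
```

For `[1, 2, 3, 4, 5, 6]` this produces `[4, 1, 5, 2, 6, 3]`: every small element sits between two big ones (whose mean exceeds it) and vice versa.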
[ "constructive algorithms", "sortings" ]
800
[(lambda n: (lambda arr : [print(f'{arr[i]} {arr[i+n]}', end = ' \n'[i==n-1]) for i in range(n)])(sorted(list(map(int,input().split())))))(int(input())) for _ in range(int(input()))]
1526
B
I Hate 1111
You are given an integer $x$. Can you make $x$ by summing up some number of $11, 111, 1111, 11111, \ldots$? (You can use any number among them any number of times). For instance, - $33=11+11+11$ - $144=111+11+11+11$
Read the name of the problem ;) $1111=11 \cdot 101$. All numbers other than $11$ and $111$ are useless. Notice that $1111=11 \cdot 100+11$ and similarly $11111=111 \cdot 100 + 11$. This implies that we can construct $1111$ and all bigger numbers using only $11$ and $111$. So it suffices to check whether we can construct $x$ from $11$ and $111$ only. Suppose $x=A \cdot 11 + B \cdot 111$, where $A,B \geq 0$. Suppose $B=C \cdot 11 + D$, where $D < 11$. Then $x=(A+C \cdot 111) \cdot 11 + D \cdot 111$. So we can just brute force all $11$ values of $D$ to check whether $x$ can be made. Since $\gcd(11,111)=1$, by the Chicken McNugget Theorem, all numbers greater than $1099$ can be written as a sum of $11$s and $111$s. We can use brute force to find the answer for all values less than or equal to $1099$ and answer yes for all other numbers.
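The check over the $11$ possible values of $D$ can be sketched as follows (Python; `can_make` is an illustrative name):

```python
def can_make(x):
    """x is representable iff x = 11*A + 111*D with A, D >= 0 and
    D < 11; only D = x mod 11 can work, so 11 candidates suffice."""
    return any(x - 111 * d >= 0 and (x - 111 * d) % 11 == 0
               for d in range(11))
```

Since $111 \equiv 1 \pmod{11}$, only $D = x \bmod 11$ can succeed, which collapses the whole check into the one-liner condition $x \geq 111 \cdot (x \bmod 11)$ used by the reference solution below.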
[ "dp", "math", "number theory" ]
1,400
[(lambda n : print("YES" if (n >= 111*(n%11)) else "NO"))(int(input())) for _ in range(int(input()))]
1526
C2
Potions (Hard Version)
\textbf{This is the hard version of the problem. The only difference is that in this version $n \leq 200000$. You can make hacks only if both versions of the problem are solved.} There are $n$ potions in a line, with potion $1$ on the far left and potion $n$ on the far right. Each potion will increase your health by $a_i$ when drunk. $a_i$ can be negative, meaning that the potion will decrease your health. You start with $0$ health and you will walk from left to right, from the first potion to the last one. At each potion, you may choose to drink it or ignore it. \textbf{You must ensure that your health is always non-negative}. What is the largest number of potions you can drink?
Try dp! The dp states are position and number of potions drunk. Speed up the dp or use a greedy for the full solution. Let's consider a dynamic programming solution. Let $dp[i][k]$ be the maximum possible health achievable if we consider only the first $i$ potions and $k$ is the total number of potions taken. The transition is as follows: $dp[i][k] = \max(dp[i-1][k-1] + a_i, dp[i-1][k])$ if $dp[i-1][k-1] + a_i \geq 0$, and just $dp[i-1][k]$ otherwise. The first term represents the case where we take potion $i$ and the second term is when we ignore potion $i$. This runs in $O(n^2)$ and passes the easy version. For the greedy, we iterate through the potions in non-increasing order of value and drink a potion if we do not die. For convenience, we define $c_i=a_i+i \cdot \epsilon$, where $\epsilon$ is a very small positive real number, so that we can treat the $c_i$ as distinct values. We will show by an exchange argument that our greedy is optimal. Let $S$ be the set of potions in an optimal solution. Suppose $i<j$ and $c_i<c_j$. If $i \in S$ and $j \notin S$, then removing $i$ from $S$ and adding $j$ to $S$ will still make $S$ a valid solution. Now, suppose that position $i$ is not drunk. By the previous assertion, we can assume that there is no $k$ ($k<i$) such that $c_k<c_i$ and $k$ is drunk. Suppose we add $i$ into $S$. If we reach a position $j$ (we possibly die at $j$) where $c_j<c_i$, then we can remove $j$ from $S$ and add $i$ to $S$. Otherwise, our greedy will also not choose $i$, as it was not chosen when we only considered indices $k$ such that $c_k<c_i$. Notice that we can arbitrarily define the way $\epsilon$ is added, so basically we can process the $a_i$ in any non-increasing fashion. Doing this naively is $O(n^2)$ as well. However, using a lazy-propagation segment tree supporting range add and range minimum, we can check whether a certain potion can be drunk without dying (all prefix health values to its right must stay non-negative), and the solution runs in $O(n \log n)$. Alternatively, we process the potions from left to right. At the same time, we maintain the list of potions we have taken so far.
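The $O(n^2)$ dp for the easy version can be sketched with a 1-D rolling array (Python; `max_potions_dp` is an illustrative name):

```python
def max_potions_dp(a):
    """dp[k] = maximum health after the processed prefix with exactly
    k potions drunk; float('-inf') marks unreachable states."""
    NEG = float("-inf")
    n = len(a)
    dp = [NEG] * (n + 1)
    dp[0] = 0
    for x in a:
        # Iterate k downwards so each potion is used at most once.
        for k in range(n, 0, -1):
            if dp[k - 1] != NEG and dp[k - 1] + x >= 0:
                dp[k] = max(dp[k], dp[k - 1] + x)
    return max(k for k in range(n + 1) if dp[k] != NEG)
```

On the sample `[4, -4, 1, -3, 1, -3]` this yields $5$ (skip the $-4$, drink the rest).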
When processing potion $i$, if we can take $i$ without dying, then we take it. Otherwise, if the most negative potion we've taken is more negative than potion $i$, then we can swap that potion out for potion $i$. To find the most negative potion we've taken, we can maintain the values of all taken potions in a minimum priority_queue. This runs in $O(n \log n)$ as well. To prove that this works, let's consider the best solution where we take exactly $k$ potions (best as in max total health). The solution involves taking the $k$ largest values in our priority queue. Then when considering a new potion, we should see whether swapping out the $k$-th largest potion for the new potion will improve the answer. Since the priority queue is strictly decreasing, there will be a cutoff $K$, where for $k$ at most $K$, the answer is not affected, and for $k$ larger than $K$, we swap out the $k$-th largest potion. It turns out this process is equivalent to inserting the new potion's value into the priority_queue. The positions at most $K$ are not affected; for the positions larger than $K$, the elements get pushed back one place, meaning that the smallest element is no longer drunk. This can also be seen as an efficient way to transition from one layer of the $dp$ table to the next.
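This left-to-right heap greedy can be sketched as follows (Python, mirroring the reference one-liner below; `max_potions` is an illustrative name):

```python
import heapq

def max_potions(a):
    """Drink every potion; whenever health would go negative, spit
    out the most negative potion drunk so far (the heap minimum)."""
    taken = []  # min-heap of drunk potion values
    health = 0
    for x in a:
        heapq.heappush(taken, x)
        health += x
        if health < 0:
            # Undo the worst potion taken so far (possibly x itself).
            health -= heapq.heappop(taken)
    return len(taken)
```

On the sample `[4, -4, 1, -3, 1, -3]` the $-4$ is eventually spat out and $5$ potions remain.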
[ "data structures", "greedy" ]
1,600
import heapq

H = []
n = int(input())
tot = 0
for i in list(map(int, input().split())):
    tot += i
    if tot >= 0:
        heapq.heappush(H, i)
    else:
        tot -= heapq.heappushpop(H, i)
print(len(H))
1526
D
Kill Anton
After rejecting $10^{100}$ data structure problems, Errorgorn is very angry at Anton and decided to kill him. Anton's DNA can be represented as a string $a$ which only contains the characters "ANTON" (there are only $4$ distinct characters). Errorgorn can change Anton's DNA into string $b$ which must be a \textbf{permutation} of $a$. However, Anton's body can defend against this attack. In $1$ second, his body can swap $2$ \textbf{adjacent} characters of his DNA to transform it back to $a$. Anton's body is smart and will use the minimum number of moves. To maximize the chance of Anton dying, Errorgorn wants to change Anton's DNA into the string that maximizes the time for Anton's body to revert his DNA. But since Errorgorn is busy making more data structure problems, he needs your help to find the best string $b$. Can you help him?
The time it takes for Anton's body to revert the string is related to the inversion number. We claim that in the optimal answer all characters of each type will appear consecutively. Consider the character at position $i$ in string $s$. We define $C_i$ as the position in which that character will be in string $t$. For $s=\texttt{CAGCAA}$ and $t=\texttt{AAGCAC}$, $C=\{4,1,3,6,2,5\}$. Then the minimum number of moves to transform $s$ into $t$ is given by the inversion number of $C$. Suppose we find a substring of $s$ of the form $\texttt{XXAA...AA[A]XX...XX[A]AA...AAXX}$, where the characters $\texttt{X}$ can be any one of $\texttt{TCG}$. The square brackets are just for clarity of explanation. We will show that our solution will not be worse if we merge these $2$ contiguous segments of $\texttt{A}$. Consider transforming the string into $\texttt{XXAA...AAXX...XX[A][A]AA...AAXX}$ and $\texttt{XXAA...AA[A][A]XX...XXAA...AAXX}$. Let the difference in the number of moves be $D_1$ and $D_2$ respectively. And let the indices of the $\texttt{[A]}$s be $i$ and $j$ respectively. Then $D_1=\sum\limits_{k=i+1}^{j-1} \sigma(C_k-C_i)$ and $D_2=\sum\limits_{k=i+1}^{j-1} \sigma(C_j-C_k)$, where $\sigma(x)=\frac{x}{|x|}$ is the sign function. I claim that $D_1+D_2 \geq 0$. Suppose that $D_1+D_2<0$; then there exists $k$ such that $\sigma(C_k-C_i)+\sigma(C_j-C_k)<0$, which implies $C_k-C_i<0$ and $C_j-C_k<0$. However, this implies that $C_j<C_i$, which is clearly a contradiction. Since $D_1+D_2 \geq 0$, either $D_1 \geq 0$ or $D_2 \geq 0$. WLOG, $D_1 \geq 0$. This implies that we can turn $s$ into $\texttt{XXAA...AAXX...XX[A][A]AA...AAXX}$ without decreasing the number of moves needed to transform $s$ into $t$. Now, consider the rest of the $\texttt{A}$s in the left segment. We can move them to the right as well. Recall that $D_1 + D_2 \geq 0$. In this case, we know that $D_2 \leq 0$, as it is the cost of moving the left $\texttt{[A]}$ back to its original position.
So $D_1 \geq 0$. Therefore, we have merged the $2$ different segments of $\texttt{A}$ without decreasing the number of moves needed to transform $s$ into $t$. So, we can try all $24$ possible orderings of the character blocks and check the number of moves Anton's body needs to revert each candidate string. The time limit is relaxed enough for an $O(n \log n)$ inversion count similar to 1430E - String Reversal. But there is an $O(n)$ solution: the number of moves Anton's body needs is given by the number of inversions in the string, and since we only care about strings in which all equal characters appear consecutively, we can just keep a count of the number of inversions for each ordered pair of character types.
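The $O(n)$ pairwise-inversion count over the $24$ block orderings can be sketched as follows (Python; `best_inversions` is an illustrative name, and here the swap count is returned alongside the answer string):

```python
from itertools import permutations

def best_inversions(s, alphabet="ANOT"):
    """Pick the block ordering maximizing the number of adjacent
    swaps Anton needs; cross[x][y] counts pairs with letter x before
    letter y in s."""
    idx = {c: k for k, c in enumerate(alphabet)}
    m = len(alphabet)
    cross = [[0] * m for _ in range(m)]
    seen = [0] * m
    for ch in s:
        c = idx[ch]
        for x in range(m):
            cross[x][c] += seen[x]
        seen[c] += 1
    best_val, best_perm = -1, None
    for perm in permutations(range(m)):
        # A pair is an inversion iff its letters appear in s in the
        # opposite order of their blocks in the candidate string.
        val = sum(cross[perm[j]][perm[i]]
                  for i in range(m) for j in range(i + 1, m))
        if val > best_val:
            best_val, best_perm = val, perm
    return "".join(alphabet[c] * seen[c] for c in best_perm), best_val
```

For `"NNAA"` the best candidate is `"AANN"`, which costs $4$ adjacent swaps to revert.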
[ "brute force", "constructive algorithms", "data structures", "math", "strings" ]
2,200
import itertools

TC = int(input())
m = {"A": 0, "N": 1, "O": 2, "T": 3}
st = "ANOT"
for _ in range(TC):
    s = input()
    arr = []
    for ch in s:
        arr.append(m[ch])
    cnt = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
    cnt1 = [0, 0, 0, 0]
    for i in arr:
        for j in range(4):
            cnt[j][i] += cnt1[j]
        cnt1[i] += 1
    val = -1
    best = []
    for perm in itertools.permutations([0, 1, 2, 3]):
        curr = 0
        for i in range(4):
            for j in range(i + 1, 4):
                curr += cnt[perm[j]][perm[i]]
        if curr > val:
            val = curr
            best = perm
    for i in range(4):
        for j in range(cnt1[best[i]]):
            print(st[best[i]], end="")
    print()
1526
E
Oolimry and Suffix Array
Once upon a time, Oolimry saw a suffix array. He wondered how many strings can produce this suffix array. More formally, given a suffix array of length $n$ and having an alphabet size $k$, count the number of strings that produce such a suffix array. Let $s$ be a string of length $n$. Then the $i$-th suffix of $s$ is the substring $s[i \ldots n-1]$. A suffix array is the array of integers that represent the starting indexes of all the suffixes of a given string, after the suffixes are sorted in the lexicographic order. For example, the suffix array of oolimry is $[3,2,4,1,0,5,6]$ as the array of sorted suffixes is $[imry,limry,mry,olimry,oolimry,ry,y]$. A string $x$ is lexicographically smaller than string $y$, if either $x$ is a prefix of $y$ (and $x\neq y$), or there exists such $i$ that $x_i < y_i$, and for any $1\leq j < i$ , $x_j = y_j$.
Try to solve the problem of the minimal $k$ needed to make a string with such a suffix array. First, let's consider a simpler problem: is it possible to make a string with a certain suffix array, given an alphabet size $k$? Consider two adjacent suffixes in the suffix array, "xy" and "ab", where $y$ and $b$ are some strings and $x$ and $a$ are characters, i.e. $x$ is the first character of that suffix and similarly for $a$. If $b$ or $y$ doesn't exist, we consider it as \$, i.e. smaller than everything. Also, "xy" comes before "ab". Observation: if the position of $b$ is less than the position of $y$ in the suffix array, $x$ must be strictly less than $a$; otherwise, $x$ must be less than or equal to $a$. This can be easily shown as "xy" must be lexicographically smaller than "ab", and this condition is also sufficient. Thus we can iterate through the suffix array and check whether $pos[arr[i]+1]>pos[arr[i+1]+1]$, where $arr$ is the suffix array and $pos[i]$ is the position of suffix $i$ in the suffix array. If this condition holds, then the $arr[i]$-th character must be strictly less than the $arr[i+1]$-th character. Thus we can just count how many such pairs exist. If this count is at least the alphabet size $k$, no such string is possible; otherwise, such a string exists. Note that special care must be taken when considering $arr[i]=n-1$, as $pos[n]$ may not be defined ($0$-indexed). After tackling the simpler question we move on to the full question of counting how many such strings there are. If we view the string as an array listed in suffix-array order, so that the $i$-th element of this array is the $arr[i]$-th character of the string, we have now transformed the question into: "given that some elements must be strictly greater than the previous element while others can be equal, count how many non-decreasing arrays there are with all elements between $1$ and $k$". Consider the difference array: some elements must be $\geq 1$ while the others only need to be $\geq 0$.
We add padding to the front and back of the array so as to account for the first value being at least $1$ and the last element being at most $k$. These two padding elements are both $\geq 0$. Let $cnt$ be the number of elements that must be $\geq 1$ in the difference array. Subtracting $1$ from each of those, this becomes: count how many arrays of $(n-1)+2=(n+1)$ non-negative elements sum to $k-1-cnt$. This can be solved using stars and bars, so the final answer comes out to be ${n+k-cnt-1 \choose n}$, which can be found easily. Note that we define $\binom{a}{b}=0$ when $a<b$. Final Complexity $O(n\log MOD)$ or $O(n)$ depending on how you find the modular inverse. Btw $k$ was chosen to also be $\leq 2\times 10^5$ so as to hide the final complexity :)
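A standalone sketch of the whole count (Python, mirroring the reference implementation below; `count_strings` is an illustrative name):

```python
def count_strings(n, k, sa):
    """Count strings over alphabet [1, k] with suffix array `sa`
    (0-indexed): C(n + k - cnt - 1, n) mod 998244353, where cnt is
    the number of adjacent suffix pairs forced to differ strictly."""
    MOD = 998244353
    pos = [0] * (n + 1)
    for i, v in enumerate(sa):
        pos[v] = i
    pos[n] = -1  # the empty suffix is smaller than everything
    cnt = sum(pos[sa[i] + 1] > pos[sa[i + 1] + 1] for i in range(n - 1))
    num = den = 1
    for i in range(n):
        num = num * (k - cnt + n - 1 - i) % MOD
        den = den * (i + 1) % MOD
    return num * pow(den, MOD - 2, MOD) % MOD
```

For instance, with $n=2$, $k=2$ and suffix array $[1,0]$ the answer is $3$ ("aa", "ba", "bb"), while suffix array $[0,1]$ forces a strict increase and only "ab" remains.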
[ "combinatorics", "constructive algorithms", "math" ]
2,400
MOD = 998244353
n, k = map(int, input().split())
arr = list(map(int, input().split()))
pos = [0] * (n + 1)
for i in range(n):
    pos[arr[i]] = i
pos[n] = -1
cnt = 0
for i in range(n - 1):
    if pos[arr[i] + 1] > pos[arr[i + 1] + 1]:
        cnt += 1
# k-cnt+n-1 choose n
num, denom = 1, 1
for i in range(n):
    num = num * (k - cnt + n - 1 - i) % MOD
    denom = denom * (i + 1) % MOD
print(num * pow(denom, MOD - 2, MOD) % MOD)
1526
F
Median Queries
\textbf{This is an interactive problem.} There is a secret permutation $p$ ($1$-indexed) of numbers from $1$ to $n$. More formally, for $1 \leq i \leq n$, $1 \leq p[i] \leq n$ and for $1 \leq i < j \leq n$, $p[i] \neq p[j]$. \textbf{It is known that $p[1]<p[2]$}. In $1$ query, you give $3$ \textbf{distinct} integers $a,b,c$ ($1 \leq a,b,c \leq n$), and receive the \textbf{median} of $\{|p[a]-p[b]|,|p[b]-p[c]|,|p[a]-p[c]|\}$. In this case, the median is the $2$-nd element ($1$-indexed) of the sequence when sorted in non-decreasing order. The median of $\{4,6,2\}$ is $4$ and the median of $\{0,123,33\}$ is $33$. Can you find the secret permutation in not more than $2n+420$ queries? \textbf{Note: the grader is not adaptive:} the permutation is fixed before any queries are made.
Find elements $1$ and $2$ in $n+420$ queries. Find a pair $(a,b)$ such that $|p[b]-p[a]| \leq \frac{n}{3}-c$, where $c$ is some constant. The problem can be broken into $3$ main parts: finding a pair $(a,b)$ such that $|p[a]-p[b]|$ is roughly less than a third of $n$; finding elements $1$ and $2$ (actually we may find $n$ and $n-1$, but it is the same); and finding the rest of the elements. A convenient way to think about the queries is that, given $p[a]<p[b]<p[c]$, we will be returned $\max(p[b]-p[a],p[c]-p[b])$. We will spend the first queries to find a tuple $(a,b,c)$ such that the return value of this query is at most $\lfloor \frac{n-4}{6} \rfloor$. The easiest way to do this is by randomly choosing distinct $a$, $b$, $c$ and querying them. See proof 1 for the random analysis. But there is also a deterministic solution. Choose any $13$ elements from the array, and it is guaranteed that there exists a tuple among those $13$ elements such that the return value of the query is at most $\lfloor \frac{n-4}{6} \rfloor$, see proof 2. To query all tuples in this sub-array of size $13$, it takes $286$ queries, which fits very comfortably under $420$. user14767553 conjectured that a sub-array of size $10$ works, however we are unable to prove it. You can try to hack his solution at 117685511. Among $a$, $b$, and $c$, we choose $a$ and $b$. The important idea here is that the gap between $a$ and $b$ is small: since we have found a tuple $(a,b,c)$ whose query value is at most $\lfloor \frac{n-4}{6} \rfloor$, we get $|p[a]-p[b]| \leq 2 \cdot \lfloor \frac{n-4}{6} \rfloor \leq \lfloor \frac{n-4}{3} \rfloor$. Now, we query all tuples $(a,b,x)$ for all $x \ne a$ and $x \ne b$. Since the gap between $a$ and $b$ is small, we should be able to identify either $(1,2)$ or $(n,n-1)$. This is because either $p[a]-1 > p[b]-p[a]$ or $n-p[b] > p[b] - p[a]$ (WLOG assuming $p[b] > p[a]$), see proof 3. Therefore, the furthest $x$ will return the largest value when querying $(a,b,x)$.
(If there are ties, choose any.) Suppose we found element $1$; there are at most two possible candidates for element $2$, $y_1$ and $y_2$ (the other possible candidate will be either $n$ or $n-1$). We can use $2$ queries to figure out which of them is element $2$: we query $(1, y_1, a)$ and $(1, y_2, a)$, and whichever produces the smaller value is element $2$, see proof 4. Once we have found elements $1$ and $2$, solving the rest of the problem is simple. Querying $(1,2,x)$ will always return $p[x]-2$, so we can find the rest of the elements in another $n-2$ queries. Do not forget about the condition that $p[1]<p[2]$! Proof 1. This is a rough asymptotic analysis for large $n$. Observe that if $\frac n6 < p[b] < \frac{5n}{6}$ (this happens around $\frac 23$ of the time), $p[b]-\frac{n}{6} < p[a] < p[b]$ (around $\frac 16$ of the time) and $p[b] < p[c] < p[b]+\frac{n}{6}$ (around $\frac 1 6$ of the time), then the query returns a value less than $\frac{n}{6}$; so around $\frac{1}{54}$ of the time a fixed ordering works. Since there are $6$ relative orderings of $a,b,c$, there is a probability of roughly $\frac{1}{9}$ that some permutation of $(a,b,c)$ satisfies the above condition. The probability that we do not find such a tuple after $420$ queries is $(\frac{8}{9})^{420} \approx 3.28\cdot10^{-22}$. To put this into perspective, you have a higher chance of: flipping heads $70$ times in a row; drawing a royal flush $5$ times in a row; Anton not rejecting a data structure problem. Proof 2. We will use contradiction. Let $d_1, d_2, \ldots$ refer to the gaps between the $13$ chosen elements in sorted order. If the maximum of some two consecutive gaps is at most $\lfloor \frac{n-4}{6} \rfloor$, one of the queries will work. As such, for no query to work it is necessary that at least $6$ of these gaps are greater than $\lfloor \frac{n-4}{6} \rfloor$. Then we have $6 \cdot (\lfloor \frac{n-4}{6} \rfloor+1) +13$ as a lower bound on the size of the original array. However, $6 \cdot (\lfloor \frac{n-4}{6} \rfloor+1) +13 \geq 6 \cdot (\frac{n-9}{6}+1) +13 = n+10$, which is a clear contradiction.
Proof 3. We prove this statement by contrapositive. The statement is equivalent to: if $a-1 \leq b-a$ and $n-b \leq b-a$, then $b-a > \lfloor \frac{n-4}{3} \rfloor$ (abbreviating $p[a]$ and $p[b]$ to $a$ and $b$ respectively). Using algebra, $a-1 \leq b-a$ implies $2a -1 \leq b$, and $n-b \leq b-a$ implies $n+a \leq 2b$. Adding these, $n+3a-1 \leq 3b$, and therefore $\frac{n-1}{3} \leq b-a$. It is trivial to see that $\lfloor \frac{n-4}{3} \rfloor < \frac{n-1}{3}$, so we have shown that $b-a > \lfloor \frac{n-4}{3} \rfloor$. Proof 4. Suppose that $y_1=2$ and $y_2 = n-1$ or $y_2=n$. We can bound $a$ by $y_1 < a < y_2$. We want to show that the query $(1,y_1,a)$ returns a smaller value than $(1,y_2,a)$. This is equivalent to showing that $\max(|1-2|,|2-a|) < \max (|1-a|,|a-y_2|)$. Since $2 < a$, $\max(|1-2|,|2-a|)=a-2$. Case $1$: $|1-a| \geq |a-y_2|$. Then $\max(|1-a|,|a-y_2|)=a-1$, therefore $\max(|1-2|,|2-a|) = a-2 < a-1 = \max(|1-a|,|a-y_2|)$. Case $2$: $|a-y_2| > |1-a|$. Then $\max(|1-a|,|a-y_2|)=|a-y_2|$, therefore $\max(|1-2|,|2-a|) = a-2 < a-1 = |1-a| < |a-y_2| = \max(|1-a|,|a-y_2|)$.
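The final reconstruction step can be checked offline with a small simulator, where `median_query` plays the judge's role (Python, illustrative only; `recover` assumes the positions of values $1$ and $2$ are already known):

```python
def median_query(p, a, b, c):
    """Offline stand-in for the judge: median of the three pairwise
    absolute differences of p[a], p[b], p[c]."""
    d = sorted([abs(p[a] - p[b]), abs(p[b] - p[c]), abs(p[a] - p[c])])
    return d[1]

def recover(p, i1, i2):
    """Given positions i1, i2 of values 1 and 2, one query per other
    index recovers everything, since the median of
    {1, p[x]-1, p[x]-2} is p[x]-2 whenever p[x] >= 3."""
    n = len(p)
    ans = [0] * n
    ans[i1], ans[i2] = 1, 2
    for x in range(n):
        if x != i1 and x != i2:
            ans[x] = 2 + median_query(p, i1, i2, x)
    return ans
```

This uses $n-2$ simulated queries, matching the budget of the last phase.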
[ "constructive algorithms", "interactive", "probabilities" ]
3,000
import sys, random

print = sys.stdout.write
input = sys.stdin.readline

def query(a, b, c):
    print("? " + str(a + 1) + " " + str(b + 1) + " " + str(c + 1) + "\n")
    sys.stdout.flush()
    return int(input())

def answer():
    print("! " + " ".join([str(i) for i in ans]) + "\n")
    sys.stdout.flush()
    return int(input())

T = int(input())
testVal = 13
randomQueryOrder = []
for i in range(testVal):
    for j in range(i + 1, testVal):
        for k in range(j + 1, testVal):
            randomQueryOrder.append((i, j, k))

for caseno in range(T):
    N = int(input())
    a = -1
    b = -1
    ans = [0 for i in range(N)]
    res = [0 for i in range(N)]
    order = [i for i in range(N)]
    minVal = N + 5
    random.shuffle(order)
    random.shuffle(randomQueryOrder)
    for ijk in randomQueryOrder:
        i, j, k = ijk
        i = order[i]
        j = order[j]
        k = order[k]
        temp = query(i, j, k)
        if temp <= (N - 4) // 6:
            a = i
            b = j
            break
    assert(a != -1 and b != -1)
    '''
    Now, the key thing to note is that abs(P[a]-P[b]) is less than
    (roughly speaking) N/3. Observe that the index c such that
    query(a, b, c) is maximum must be such that P[c] == 1 or P[c] == N
    (some steps skipped). But given the condition P[1] < P[2], we can
    assume either, then flip using P[i] = N+1-P[i] if the condition is
    not satisfied.
    '''
    # Now, we assume P[c] = 1.
    # There always exists another index d such that
    # query(a, b, d) = query(a, b, c) - 1 and P[d] = 2.
    # There can be at most 2 indices d which satisfy the former, and we
    # can check which is which by comparing query(a, c, d).
    maxVal = -1
    c = -1
    for i in range(N):
        if a == i or b == i:
            continue
        res[i] = query(a, b, i)
        if res[i] > maxVal:
            maxVal = res[i]
            c = i
    d2 = []
    for i in range(N):
        if a == i or b == i:
            continue
        if res[i] == maxVal - 1:
            d2.append(i)
    d = -1
    if len(d2) == 1:
        d = d2[0]
    elif len(d2) == 2:
        temp1 = query(a, c, d2[0])
        temp2 = query(a, c, d2[1])
        if temp1 < temp2:
            d = d2[0]
        else:
            d = d2[1]
    else:
        assert(False)
    # After finding indices c and d such that P[c] = 1 and P[d] = 2,
    # we can now find the rest of the values.
    ans[c] = 1
    ans[d] = 2
    for i in range(N):
        if i == c or i == d:
            continue
        ans[i] = 2 + query(c, d, i)
    # Flip since P[1] < P[2] not satisfied
    if ans[0] >= ans[1]:
        for i in range(N):
            ans[i] = N + 1 - ans[i]
    result = answer()
    # I dislike getting WA, hence I make my code MLE
    if result != 1:
        MLE = [1 for i in range(1 << 27)]
        sys.stdout.write(MLE[0])
1527
A
And Then There Were K
Given an integer $n$, find the maximum value of integer $k$ such that the following condition holds: \begin{center} $n$ & ($n-1$) & ($n-2$) & ($n-3$) & ... ($k$) = $0$ \end{center} where & denotes the bitwise AND operation.
Let $T = n$ & $(n-1)$ & $(n-2)$ & $\ldots$ & $k$. If at least one integer in the range $[k, n]$ has a $0$ at the $i$-th bit, then the $i$-th bit of $T$ is $0$ as well. Let $msb$ be the index of the highest set bit of $n$. Every number in $[2^{msb}, n]$ has the $msb$-th bit set, so that bit becomes $0$ for the first time when $k = 2^{msb} - 1$. All lower bits are already $0$ by then, because $2^{msb}$ itself lies in the range once $k \le 2^{msb}$. Thus the answer is $k = 2^{msb} - 1$.
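The closed form above takes only a few lines; the following is an illustrative sketch (the function name `max_k` is ours, not from the reference solution):

```python
def max_k(n: int) -> int:
    # Index of the highest set bit of n.
    msb = n.bit_length() - 1
    # The msb-th bit first becomes 0 at k = 2^msb - 1; every lower bit is
    # already 0 by then, since 2^msb itself lies in the range [k, n].
    return (1 << msb) - 1
```

For example, for $n = 5$ the AND chain $5$ & $4$ equals $4$, while $5$ & $4$ & $3$ is $0$, so the answer is $3 = 2^2 - 1$.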
[ "bitmasks" ]
800
#include<bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
#define ll long long
#define pb push_back
#define ppb pop_back
#define endl '\n'
#define mii map<ll,ll>
#define msi map<string,ll>
#define mis map<ll, string>
#define rep(i,a,b) for(ll i=a;i<b;i++)
#define repr(i,a,b) for(ll i=b-1;i>=a;i--)
#define trav(a, x) for(auto& a : x)
#define pii pair<ll,ll>
#define vi vector<ll>
#define vii vector<pair<ll, ll>>
#define vs vector<string>
#define all(a) (a).begin(),(a).end()
#define F first
#define S second
#define sz(x) (ll)x.size()
#define hell 1000000007
#define lbnd lower_bound
#define ubnd upper_bound
#define DEBUG cerr<<"/n>>>I'm Here<<</n"<<endl;
#define display(x) trav(a,x) cout<<a<<" ";cout<<endl;
#define what_is(x) cerr << #x << " is " << x << endl;
#define ordered_set tree<ll, null_type,less<ll>, rb_tree_tag,tree_order_statistics_node_update>
#define FAST ios_base::sync_with_stdio(false);cin.tie(0);cout.tie(0);
using namespace __gnu_pbds;
using namespace std;
#define PI 3.141592653589793
#define N 200005
void solve()
{
    ll n;
    cin >> n;
    ll cnt = 0;
    while (n != 0) {
        cnt++;
        n = n / 2;
    }
    cout << (1 << (cnt - 1)) - 1 << endl;
}
int main()
{
#ifndef ONLINE_JUDGE
    freopen("input.txt", "r", stdin);
#endif
    ll int TEST = 1;
    cin >> TEST;
    //init();
    while (TEST--) {
        solve();
    }
}
1527
B1
Palindrome Game (easy version)
\textbf{The only difference between the easy and hard versions is that the given string $s$ in the easy version is initially a palindrome, this condition is not always true for the hard version.} A palindrome is a string that reads the same left to right and right to left. For example, "101101" is a palindrome, while "0101" is not. Alice and Bob are playing a game on a string $s$ \textbf{(which is initially a palindrome in this version)} of length $n$ consisting of the characters '0' and '1'. Both players take alternate turns with Alice going first. In each turn, the player can perform one of the following operations: - Choose any $i$ ($1 \le i \le n$), where $s[i] =$ '0' and change $s[i]$ to '1'. Pay 1 dollar. - Reverse the whole string, pay 0 dollars. This operation is only allowed if the string is currently \textbf{not} a palindrome, and the last operation was not reverse. That is, if Alice reverses the string, then Bob can't reverse in the next move, and vice versa. Reversing a string means reordering its letters from the last to the first. For example, "01001" becomes "10010" after reversing. The game ends when every character of string becomes '1'. The player who spends minimum dollars till this point wins the game and it is a draw if both spend equal dollars. If both players play optimally, output whether Alice wins, Bob wins, or if it is a draw.
If the count of zeros in the string $s$ is even, then Bob always wins.

Proof: Bob can stop Alice from performing operation $2$ by keeping the string $s$ a palindrome (if Alice changes $s[i]$ to '1', then Bob changes $s[n-i+1]$ to '1'). However, when only the last '0' remains, Bob reverses the string, eventually forcing Alice to perform operation $1$. This way Alice spends $2$ dollars more than Bob, resulting in Bob's win.

If the count of zeros is odd and greater than $1$, then Alice wins. (With exactly one zero, Bob wins: Alice has to pay for the only remaining '0'.)

Proof: Alice changes the middle character $s[\lceil n/2 \rceil]$ from '0' to '1' and then plays with the same strategy as Bob did in the above case. This way Bob spends $1$ dollar more than Alice, resulting in Alice's win.
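For the easy version the whole case analysis collapses into a two-line classifier; a sketch (function name ours), assuming the input is a palindrome as the statement guarantees:

```python
def palindrome_winner(s: str) -> str:
    zeros = s.count('0')
    # Even number of zeros (or a single middle zero): Bob mirrors Alice's
    # moves and saves the reversal trick for the last remaining '0'.
    if zeros % 2 == 0 or zeros == 1:
        return "BOB"
    # Odd count > 1: Alice fills the middle zero first, then plays Bob's
    # mirroring strategy, ending one dollar ahead.
    return "ALICE"
```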
[ "constructive algorithms", "games" ]
1200
#include<bits/stdc++.h>
using namespace std;
#define int long long int
#define mp(a,b) make_pair(a,b)
#define vi vector<int>
#define mii map<int,int>
#define mpi map<pair<int,int>,int>
#define vp vector<pair<int,int> >
#define pb(a) push_back(a)
#define fr(i,n) for(i=0;i<n;i++)
#define rep(i,a,n) for(i=a;i<n;i++)
#define F first
#define S second
#define endl "\n"
#define fast std::ios_base::sync_with_stdio(false);cin.tie(0);cout.tie(0);
#define mod 1000000007
#define dom 998244353
#define sl(a) (int)a.length()
#define sz(a) (int)a.size()
#define all(a) a.begin(),a.end()
#define pii pair<int,int>
#define mini 2000000000000000000
#define time_taken 1.0 * clock() / CLOCKS_PER_SEC
//const long double pi = acos(-1);
//mt19937_64 mt(chrono::steady_clock::now().time_since_epoch().count());
//primes for hashing 937, 1013
template<typename T, typename U> static inline void amin(T &x, U y) { if (y < x) x = y; }
template<typename T, typename U> static inline void amax(T &x, U y) { if (x < y) x = y; }
void shikhar7s(int cas)
{
    int n, i;
    cin >> n;
    string s;
    cin >> s;
    int f = 1, z = 0;
    fr(i, n)
    {
        if (s[i] == '0')
            z++;
    }
    int x = 0;
    fr(i, n / 2)
    {
        if (s[i] != s[n - 1 - i])
        {
            f = 0;
            x++;
        }
    }
    if (f)
    {
        if (z == 1 || z % 2 == 0)
            cout << "BOB" << endl;
        else
            cout << "ALICE" << endl;
    }
    else
    {
        if (x == 1 && z == 2)
            cout << "DRAW" << endl;
        else
            cout << "ALICE" << endl;
    }
}
signed main()
{
    fast;
    //freopen("input.txt", "rt", stdin);
    //freopen("output.txt", "wt", stdout);
    int t = 1;
    cin >> t;
    int cas = 1;
    while (cas <= t)
    {
        //cout<<"Case #"<<cas<<": ";
        shikhar7s(cas);
        cas++;
    }
    return 0;
}
1527
B2
Palindrome Game (hard version)
\textbf{The only difference between the easy and hard versions is that the given string $s$ in the easy version is initially a palindrome, this condition is not always true for the hard version.} A palindrome is a string that reads the same left to right and right to left. For example, "101101" is a palindrome, while "0101" is not. Alice and Bob are playing a game on a string $s$ of length $n$ consisting of the characters '0' and '1'. Both players take alternate turns with Alice going first. In each turn, the player can perform one of the following operations: - Choose any $i$ ($1 \le i \le n$), where $s[i] =$ '0' and change $s[i]$ to '1'. Pay 1 dollar. - Reverse the whole string, pay 0 dollars. This operation is only allowed if the string is currently \textbf{not} a palindrome, and the last operation was not reverse. That is, if Alice reverses the string, then Bob can't reverse in the next move, and vice versa. Reversing a string means reordering its letters from the last to the first. For example, "01001" becomes "10010" after reversing. The game ends when every character of string becomes '1'. The player who spends minimum dollars till this point wins the game and it is a draw if both spend equal dollars. If both players play optimally, output whether Alice wins, Bob wins, or if it is a draw.
Solution 1: The case when $s$ is a palindrome is discussed in B1. Otherwise, Alice will win or the game will end in a draw.

Proof: If it is optimal for Alice to perform operation $1$ in the first move, she will perform it; otherwise she will perform operation $2$, forcing Bob to perform operation $1$ (which is not optimal, otherwise Alice would have performed it in the first move).

Optimal strategy for Alice: Alice keeps reversing the string until $s$ is one move short of becoming a palindrome with an even number of zeros. At that point Alice performs operation $1$ instead of reversing and makes $s$ a palindrome. Now $s$ is a palindrome containing an even number of zeros, with Bob to move. Here, Alice can gain a $2$ dollar advantage using the strategy mentioned in B1. This way Bob spends at least $1$ dollar more than Alice, resulting in Alice's win. In some cases, like '1100' or '1001110', Alice just keeps reversing the string and Bob has to change every '0' to '1'.

Solution 2: As the constraints on $n$ were small, the problem can also be solved using dynamic programming. The following parameters of the string $s$ are enough to define the current state of the game:

- $cnt00$ = count of symmetric $00$ pairs ($s[i]=$ '0' as well as $s[n-i+1]=$ '0')
- $cnt01$ = count of symmetric $01$ or $10$ pairs ($s[i]$ is not equal to $s[n-i+1]$)
- $mid$ = $True$ if the middle character exists (length of the string is odd) and it is '0', else $False$
- $rev$ = $True$ if the previous move was operation $2$, else $False$

If any player encounters the game state $\{cnt00,cnt01,mid,rev\}$, we define $dp_{cnt00,cnt01,mid,rev}$ to be the minimum cost difference he/she can achieve from here.
The transitions are straightforward:

- if $rev$ is $False$ and $cnt01>0$: $dp_{cnt00,cnt01,mid,rev} = \min(dp_{cnt00,cnt01,mid,rev},-dp_{cnt00,cnt01,mid,True})$
- if $cnt00>0$: $dp_{cnt00,cnt01,mid,rev} = \min(dp_{cnt00,cnt01,mid,rev},1-dp_{cnt00-1,cnt01+1,mid,False})$
- if $cnt01>0$: $dp_{cnt00,cnt01,mid,rev} = \min(dp_{cnt00,cnt01,mid,rev},1-dp_{cnt00,cnt01-1,mid,False})$
- if $mid$ is $True$: $dp_{cnt00,cnt01,mid,rev} = \min(dp_{cnt00,cnt01,mid,rev},1-dp_{cnt00,cnt01,False,False})$

Finally:

- If $dp_{cnt00,cnt01,mid,rev}<0$, Alice wins.
- If $dp_{cnt00,cnt01,mid,rev}>0$, Bob wins.
- If $dp_{cnt00,cnt01,mid,rev}=0$, it is a draw.

We can precompute this dp and answer all test cases in $O(n)$. Overall time complexity: $O(n^{2})$.
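The DP of Solution 2 translates almost directly into a memoized recursion; below is a sketch with the same state encoding (function names are ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def dp(cnt00: int, cnt01: int, mid: bool, rev: bool) -> int:
    # Minimum cost difference the player to move can force from this state.
    if cnt00 == 0 and cnt01 == 0 and not mid:
        return 0
    best = float('inf')
    if not rev and cnt01 > 0:                 # reverse for free
        best = min(best, -dp(cnt00, cnt01, mid, True))
    if cnt00 > 0:                             # break a symmetric 00 pair
        best = min(best, 1 - dp(cnt00 - 1, cnt01 + 1, mid, False))
    if cnt01 > 0:                             # fix an asymmetric pair
        best = min(best, 1 - dp(cnt00, cnt01 - 1, mid, False))
    if mid:                                   # fill the middle zero
        best = min(best, 1 - dp(cnt00, cnt01, False, False))
    return best

def winner(s: str) -> str:
    n = len(s)
    cnt00 = sum(s[i] == '0' == s[n - 1 - i] for i in range(n // 2))
    cnt01 = sum(s[i] != s[n - 1 - i] for i in range(n // 2))
    mid = n % 2 == 1 and s[n // 2] == '0'
    d = dp(cnt00, cnt01, mid, False)
    return "ALICE" if d < 0 else "BOB" if d > 0 else "DRAW"
```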
[ "constructive algorithms", "games" ]
1900
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
double pi = acos(-1);
#define _time_ 1.0 * clock() / CLOCKS_PER_SEC
#define fi first
#define se second
#define mp make_pair
#define pb push_back
#define all(a) a.begin(),a.end()
mt19937 rng(chrono::high_resolution_clock::now().time_since_epoch().count());
int dp[505][505][2][2];
void precompute()
{
    for (int i = 0; i <= 500; i++) {
        for (int j = 0; j <= 500; j++) {
            for (int k = 0; k < 2; k++) {
                for (int l = 1; l >= 0; l--) {
                    dp[i][j][k][l] = 1e9;
                }
            }
        }
    }
    dp[0][0][0][0] = 0;
    dp[0][0][0][1] = 0;
    for (int i = 0; i <= 500; i++) {
        for (int j = 0; j <= 500; j++) {
            for (int k = 0; k < 2; k++) {
                for (int l = 1; l >= 0; l--) {
                    if (l == 0 && j > 0)
                        dp[i][j][k][l] = min(dp[i][j][k][l], -dp[i][j][k][1]);
                    if (i > 0)
                        dp[i][j][k][l] = min(dp[i][j][k][l], 1 - dp[i - 1][j + 1][k][0]);
                    if (j > 0)
                        dp[i][j][k][l] = min(dp[i][j][k][l], 1 - dp[i][j - 1][k][0]);
                    if (k == 1)
                        dp[i][j][k][l] = min(dp[i][j][k][l], 1 - dp[i][j][0][0]);
                }
            }
        }
    }
}
void solve()
{
    int n;
    cin >> n;
    string s;
    cin >> s;
    int cnt00 = 0, cnt01 = 0, mid = 0;
    for (int i = 0; i < n / 2; i++) {
        if (s[i] == '0' && s[i] == s[n - i - 1])
            cnt00++;
        if (s[i] != s[n - i - 1])
            cnt01++;
    }
    if (n % 2 && s[n / 2] == '0')
        mid = 1;
    if (dp[cnt00][cnt01][mid][0] < 0) {
        cout << "ALICE";
    } else if (dp[cnt00][cnt01][mid][0] > 0) {
        cout << "BOB";
    } else {
        cout << "DRAW";
    }
}
int main()
{
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
    //freopen("input.txt", "r", stdin);
    //freopen("output.txt", "w", stdout);
#ifdef SIEVE
    sieve();
#endif
#ifdef NCR
    init();
#endif
    precompute();
    int t;
    cin >> t;
    while (t--) {
        solve();
        cout << "\n";
    }
    return 0;
}
1527
C
Sequence Pair Weight
The weight of a sequence is defined as the number of unordered pairs of indexes $(i,j)$ (here $i \lt j$) with same value ($a_{i} = a_{j}$). For example, the weight of sequence $a = [1, 1, 2, 2, 1]$ is $4$. The set of unordered pairs of indexes with same value are $(1, 2)$, $(1, 5)$, $(2, 5)$, and $(3, 4)$. You are given a sequence $a$ of $n$ integers. Print the sum of the weight of all subsegments of $a$. A sequence $b$ is a subsegment of a sequence $a$ if $b$ can be obtained from $a$ by deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end.
First of all, it can be shown that the maximum possible answer occurs when $n = 10^5$ and all the elements are identical, which comes out to be of the order $4 \cdot 10^{18}$; this fits in the long long integer range, thus preventing overflow. The brute force approach is just to find the weight of each subarray and sum them up, but it will definitely not fit into the TL. Let $dp_i$ represent the sum of the weights of all subsegments which end at index $i$. So $answer = \sum_{i = 1}^{n} dp_i$. Now, $dp_i$ includes: the sum of the weights of all subsegments ending at index $i-1$, which is equal to $dp_{i-1}$; and the contribution of the $i$-th index to all the subsegments ending at index $i$. The latter is just the sum of prefix sums of indexes of the elements having the same value as $a_i$ which occurred before index $i$. This is because the number of subsegments ending at index $i$ in which an index $j$ ($j < i$ and $a_j = a_i$) is counted in the unordered pair $(j, i)$ is equal to $j$. This can be done by maintaining a map that stores the prefix sum of indexes for a given value till $i$.
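The recurrence can be cross-checked against a direct brute force on small arrays; the sketch below (function names ours) mirrors the prefix-sum-of-indices idea:

```python
def sum_of_weights(a):
    # dp_i = dp_{i-1} + (sum of 1-based indices j < i with a[j-1] == a[i-1]);
    # the answer is the sum of dp_i over all i.
    pref = {}           # value -> sum of 1-based indices seen so far
    total = cur = 0
    for i, x in enumerate(a, start=1):
        cur += pref.get(x, 0)   # contribution of index i to segments ending at i
        pref[x] = pref.get(x, 0) + i
        total += cur
    return total

def brute(a):
    # Directly sum the weight of every subsegment.
    n = len(a)
    res = 0
    for l in range(n):
        for r in range(l, n):
            seg = a[l:r + 1]
            res += sum(seg[i] == seg[j]
                       for i in range(len(seg))
                       for j in range(i + 1, len(seg)))
    return res
```

For the sample array $[1, 1, 2, 2, 1]$ both functions return $13$.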
[ "hashing", "implementation", "math" ]
1600
t = int(input())
for j in range(t):
    n = int(input())
    a = list(map(int, input().split()))
    value = {}
    fa, ca = 0, 0
    for i in range(n):
        if a[i] in value:
            ca += value[a[i]]
        else:
            value[a[i]] = 0
        value[a[i]] += i + 1
        fa += ca
    print(fa)
1527
D
MEX Tree
You are given a tree with $n$ nodes, numerated from $0$ to $n-1$. For each $k$ between $0$ and $n$, inclusive, you have to count the number of unordered pairs $(u,v)$, $u \neq v$, such that the \textbf{MEX} of all the node labels in the shortest path from $u$ to $v$ (including end points) is $k$. The \textbf{MEX} of a sequence of integers is the smallest non-negative integer that does not belong to the sequence.
This problem can be solved with a two-pointer approach. We will use the following simple formula to evaluate the number of paths with $MEX = i$: $ans_i$ = (number of paths with $MEX \ge i$) $-$ (number of paths with $MEX > i$). First we root the tree at node $0$ and calculate the subtree sizes using a basic subtree DP. Now we can easily calculate the answer for $0$ by summing up $size \cdot (size-1)/2$ over the children of the root. Since the total number of paths is $\binom{n}{2}$, the number of paths with $MEX > 0$ is $\binom{n}{2} - ans_0$, and we will maintain the number of paths having $MEX > i$ for every $i$ in a variable, let's say $P$. Now the main idea is that for the $MEX$ of a path to be equal to $i$, it must contain all nodes $0 \ldots i-1$ and not $i$. It can be seen that if we have current $MEX = i$ in some path, then this path can only be extended from both ends, as for $MEX > i$ this whole path must be included. So we start with our $2$ pointers initially at the root, $l = 0, r = 0$. We also maintain a visited array denoting all those vertices which lie on our path from $l$ to $r$. Now we loop from $1$ to $n$ and successively calculate the answers. Suppose we need to add some vertex $i$ to the path; if the vertex is already in the path, i.e. already visited, then we can just continue, as the variable $P$ remains the same. In the other case, we recursively move to the parent, marking vertices as visited, until we find a previously visited node. Let that node be $U$. Note that we will always stop at the latest at the root, as it is marked visited in the initial step. Now the following two cases arise. The node $U$ is one of the endpoints $l$ or $r$: in this case we can simply calculate the number of paths having $MEX = i$.
As computed previously, $P$ holds the number of paths having $MEX \ge i$, and the number of paths with $MEX > i$ (those paths which include all nodes from $0$ to $i$) can simply be seen as $subtree_i \cdot subtree_l$ or $subtree_i \cdot subtree_r$, depending on which endpoint the path was extended from. It should be noted that we also need to update the subtree sizes as we move up to the parent recursively, by subtracting $subtree_v$ from $subtree_{parent(v)}$. After updating the $l$ and $r$ pointers, the answer for $MEX = i$ is simply $P - subtree_l \cdot subtree_r$, and finally we update the variable $P = subtree_l \cdot subtree_r$, denoting, for $i + 1$, the number of paths with $MEX \ge i+1$. The next case is when $U$ is not among the endpoints $l$ and $r$; then $ans_i = P$ and all the remaining answers, i.e. for $MEX \ge i+1$, are zero, since we can never find a path which contains all vertices from $0$ to $i$.
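The two-pointer solution is intricate; when implementing it, a tiny brute force (enumerate all pairs, walk the path, compute the MEX) is handy for cross-checking on small trees. A hedged sketch of such a checker, not the editorial algorithm:

```python
from itertools import combinations

def mex_path_counts(n, edges):
    # Adjacency list of the tree; nodes are labeled 0..n-1.
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def path(u, v):
        # DFS from u recording parents, then climb from v back to u.
        stack, parent = [u], {u: None}
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in parent:
                    parent[y] = x
                    stack.append(y)
        nodes, x = set(), v
        while x is not None:
            nodes.add(x)
            x = parent[x]
        return nodes

    ans = [0] * (n + 1)
    for u, v in combinations(range(n), 2):
        labels = path(u, v)
        mex = 0
        while mex in labels:
            mex += 1
        ans[mex] += 1
    return ans
```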
[ "combinatorics", "dfs and similar", "implementation", "math", "trees" ]
2400
#include<bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
#define ll long long
#define pb push_back
#define ppb pop_back
#define endl '\n'
#define mii map<ll,ll>
#define msi map<string,ll>
#define mis map<ll, string>
#define rep(i,a,b) for(ll i=a;i<b;i++)
#define repr(i,a,b) for(ll i=b-1;i>=a;i--)
#define trav(a, x) for(auto& a : x)
#define pii pair<ll,ll>
#define vi vector<ll>
#define vii vector<pair<ll, ll>>
#define vs vector<string>
#define all(a) (a).begin(),(a).end()
#define F first
#define S second
#define sz(x) (ll)x.size()
#define hell 1000000007
#define lbnd lower_bound
#define ubnd upper_bound
#define DEBUG cerr<<"/n>>>I'm Here<<</n"<<endl;
#define display(x) trav(a,x) cout<<a<<" ";cout<<endl;
#define what_is(x) cerr << #x << " is " << x << endl;
#define ordered_set tree<ll, null_type,less<ll>, rb_tree_tag,tree_order_statistics_node_update>
#define FAST ios_base::sync_with_stdio(false);cin.tie(0);cout.tie(0);
using namespace __gnu_pbds;
using namespace std;
#define PI 3.141592653589793
#define N 200005
ll add(ll x, ll y) { return (x + y) % hell; }
ll mul(ll x, ll y) { return (x * y) % hell; }
vector<ll> adj[N];
ll subtree[N];
ll min1[N];
ll ans[N];
ll vis[N];
ll le;
int v;
int u;
ll n;
int prevv;
int prevu;
void init()
{
    rep(i, 0, n) {
        adj[i].clear();
        subtree[i] = 0;
        min1[i] = n;
        ans[i] = 0;
        vis[i] = 0;
    }
    le = n * (n - 1) / 2;
    v = 0;
    u = 0;
    prevv = -1;
    prevu = -1;
}
void dfs(int v, int prev = -1)
{
    subtree[v] = 1;
    min1[v] = v;
    for (auto it : adj[v]) {
        if (it == prev)
            continue;
        dfs(it, v);
        subtree[v] += subtree[it];
        min1[v] = min(min1[v], min1[it]);
    }
}
void find(int val)
{
    //cout << v << " " << u << " " << prevv << " " << prevu << " " << val << endl;
    if (v == val or u == val) {
        ll a, b;
        if (prevv == -1 or subtree[prevv] > subtree[v])
            a = subtree[v];
        else
            a = n - subtree[prevv];
        if (prevu == -1 or subtree[prevu] > subtree[u])
            b = subtree[u];
        else
            b = n - subtree[prevu];
        ans[val] = le - a * b;
        //cout << a << " " << b << endl;
        le = a * b;
        return;
    }
    for (auto it : adj[v]) {
        if (it == prevv)
            continue;
        if (min1[it] == val) {
            if (v == u)
                prevu = it;
            prevv = v;
            v = it;
            vis[it] = 1;
            find(val);
            return;
        }
    }
    for (auto it : adj[u]) {
        if (it == prevu)
            continue;
        if (min1[it] == val) {
            prevu = u;
            u = it;
            vis[it] = 1;
            find(val);
            return;
        }
    }
    ans[val] = le;
    le = 0;
}
void solve()
{
    cin >> n;
    init();
    rep(i, 0, n - 1) {
        int x, y;
        cin >> x >> y;
        adj[x].pb(y);
        adj[y].pb(x);
    }
    dfs(0);
    //cout << le << endl;
    for (auto it : adj[0]) {
        ans[0] += (subtree[it] * (subtree[it] - 1)) / 2;
    }
    //cout << ans[0] << endl;
    le -= ans[0];
    rep(i, 1, n) {
        if (vis[i] == 1 or le == 0)
            ans[i] = 0;
        else
            find(i);
    }
    ans[n] = le;
    rep(i, 0, n + 1)
        cout << ans[i] << " ";
    cout << endl;
    return;
}
int main()
{
#ifndef ONLINE_JUDGE
    freopen("input.txt", "r", stdin);
#endif
    ll int TEST = 1;
    cin >> TEST;
    while (TEST--) {
        solve();
    }
}
1527
E
Partition Game
You are given an array $a$ of $n$ integers. Define the cost of some array $t$ as follows: $$cost(t) = \sum_{x \in set(t) } last(x) - first(x),$$ where $set(t)$ is the set of all values in $t$ without repetitions, $first(x)$, and $last(x)$ are the indices of the first and last occurrence of $x$ in $t$, respectively. In other words, we compute the distance between the first and last occurrences for each distinct element and sum them up. You need to split the array $a$ into $k$ consecutive segments such that each element of $a$ belongs to exactly one segment and the sum of the cost of individual segments is minimum.
Let's use dynamic programming to solve this problem. Let $dp[i][j]$ be the answer for prefix $j$ with $i$ subsegments. Transitions are fairly straightforward. Let $c[i][j]$ be the cost of the subarray starting at the $i$-th index and ending at the $j$-th index. $dp[i][j] = \min\limits_{k \lt j}(dp[i-1][k] + c[k+1][j])$. If we calculate it naively, it results in an $O(n^2k)$ solution, which is not enough for the given constraints. We need some data structure to maintain $dp[i-1][k] + c[k+1][j]$ for every $k \lt j$. Consider the example $a = [2,2,3,2,3]$. We can write its cost as $(2-1) + (4-2) + (5-3) = 5$. In other words, apart from the first occurrence, each index contributes $b[k] = k - last[a[k]]$, where $last[a[k]]$ is the index of the last occurrence of $a[k]$ just before index $k$; the cost of a range is the sum of $b[k]$ over indices $k$ whose previous occurrence also lies in the range. We can maintain this in a segment tree. Say our segment tree is built for prefix $j$; we can transition to prefix $j+1$ by adding $b[j+1]$ in the range $[0,last[a[j+1]]-1]$. This results in an $O(n \cdot k \log n)$ solution.
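Before adding the segment tree, the recurrence together with the $b[k] = k - last[a[k]]$ decomposition can be validated with a naive $O(n^2 k)$ version; the sketch below (names ours) is the unoptimized form of the DP:

```python
def min_partition_cost(a, k):
    # Cost of a segment via the decomposition above: index j contributes
    # j - last[j] whenever the previous occurrence of a[j] is in the segment.
    n = len(a)
    last = [-1] * n
    seen = {}
    for i, x in enumerate(a):
        last[i] = seen.get(x, -1)
        seen[x] = i

    INF = float('inf')
    # dp[j] = minimum total cost of splitting prefix a[0..j-1] into the
    # current number of segments.
    dp = [INF] * (n + 1)
    dp[0] = 0
    for _ in range(k):
        ndp = [INF] * (n + 1)
        for start in range(n):          # new segment is a[start .. j]
            if dp[start] == INF:
                continue
            cost = 0
            for j in range(start, n):
                if last[j] >= start:
                    cost += j - last[j]
                if dp[start] + cost < ndp[j + 1]:
                    ndp[j + 1] = dp[start] + cost
        dp = ndp
    return dp[n]
```

For $a = [2,2,3,2,3]$ the one-segment cost is $5$, matching the worked example, and splitting into two segments as $[2,2,3]\,|\,[2,3]$ gives $1$.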
[ "binary search", "data structures", "divide and conquer", "dp" ]
2500
#include<bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
#define ll long long
#define pb push_back
#define ppb pop_back
#define endl '\n'
#define mii map<ll,ll>
#define msi map<string,ll>
#define mis map<ll, string>
#define rep(i,a,b) for(ll i=a;i<b;i++)
#define repr(i,a,b) for(ll i=b-1;i>=a;i--)
#define trav(a, x) for(auto& a : x)
#define pii pair<ll,ll>
#define vi vector<ll>
#define vii vector<pair<ll, ll>>
#define vs vector<string>
#define all(a) (a).begin(),(a).end()
#define F first
#define S second
#define sz(x) (ll)x.size()
#define hell 1000000007
#define lbnd lower_bound
#define ubnd upper_bound
#define DEBUG cerr<<"/n>>>I'm Here<<</n"<<endl;
#define display(x) trav(a,x) cout<<a<<" ";cout<<endl;
#define what_is(x) cerr << #x << " is " << x << endl;
#define ordered_set tree<ll, null_type,less<ll>, rb_tree_tag,tree_order_statistics_node_update>
#define FAST ios_base::sync_with_stdio(false);cin.tie(0);cout.tie(0);
using namespace __gnu_pbds;
using namespace std;
#define PI 3.141592653589793
#define MAXN 35005
int tree1[4*MAXN];
int lazy[4*MAXN];
int s[MAXN];
void build(int node, int start, int end)
{
    lazy[node] = 0;
    if (start == end) {
        // Leaf node holds a single element
        tree1[node] = s[start];
    } else {
        int mid = (start + end) / 2;
        // Recurse on the left child
        build(2*node, start, mid);
        // Recurse on the right child
        build(2*node+1, mid+1, end);
        // Internal node holds the minimum of its two children
        tree1[node] = min(tree1[2*node], tree1[2*node+1]);
    }
}
void updateRange(int node, int start, int end, int l, int r, int val)
{
    if (l > r)
        return;
    if (lazy[node] != 0) {
        // This node needs to be updated
        tree1[node] = tree1[node] + lazy[node]; // Update it
        if (start != end) {
            lazy[node*2] += lazy[node];   // Mark child as lazy
            lazy[node*2+1] += lazy[node]; // Mark child as lazy
        }
        lazy[node] = 0; // Reset it
    }
    if (start > end or start > r or end < l) // Current segment is not within range [l, r]
        return;
    if (start >= l and end <= r) {
        // Segment is fully within range
        tree1[node] = tree1[node] + val;
        if (start != end) {
            // Not a leaf node
            lazy[node*2] += val;
            lazy[node*2+1] += val;
        }
        return;
    }
    int mid = (start + end) / 2;
    updateRange(node*2, start, mid, l, r, val);        // Update left child
    updateRange(node*2 + 1, mid + 1, end, l, r, val);  // Update right child
    tree1[node] = min(tree1[node*2], tree1[node*2+1]); // Update node with min value
}
ll queryRange(int node, int start, int end, int l, int r)
{
    if (start > end or start > r or end < l)
        return hell; // Out of range
    if (lazy[node] != 0) {
        // This node needs to be updated
        tree1[node] = tree1[node] + lazy[node]; // Update it
        if (start != end) {
            lazy[node*2] += lazy[node];   // Mark child as lazy
            lazy[node*2+1] += lazy[node]; // Mark child as lazy
        }
        lazy[node] = 0; // Reset it
    }
    if (start >= l and end <= r) // Current segment is totally within range [l, r]
        return tree1[node];
    int mid = (start + end) / 2;
    ll p1 = queryRange(node*2, start, mid, l, r);       // Query left child
    ll p2 = queryRange(node*2 + 1, mid + 1, end, l, r); // Query right child
    return min(p1, p2);
}
void solve()
{
    ll n, k;
    cin >> n >> k;
    int a[n];
    rep(i, 0, n) cin >> a[i];
    int lastoc[n];
    map<int, int> m1;
    rep(i, 0, n) {
        if (m1.find(a[i]) == m1.end())
            lastoc[i] = -1;
        else
            lastoc[i] = m1[a[i]];
        m1[a[i]] = i;
    }
    int dp[n][k+1];
    dp[0][1] = 0;
    rep(i, 1, n) {
        dp[i][1] = dp[i-1][1];
        if (lastoc[i] != -1)
            dp[i][1] += i - lastoc[i];
    }
    rep(i, 2, k+1) {
        rep(j, 0, n) s[j] = dp[j][i-1];
        build(1, 0, n-1);
        rep(j, 0, i-1) dp[j][i] = hell;
        dp[i-1][i] = 0;
        rep(j, i, n) {
            int lastj = lastoc[j];
            if (lastj > 0 and (i-2) < (lastj)) {
                updateRange(1, 0, n-1, i-2, lastj-1, j-lastj);
            }
            dp[j][i] = queryRange(1, 0, n-1, i-2, j-1);
        }
    }
    cout << dp[n-1][k] << endl;
}
int main()
{
#ifndef ONLINE_JUDGE
    freopen("input.txt", "r", stdin);
#endif
    ll int TEST = 1;
    //cin >> TEST;
    //init();
    while (TEST--) {
        solve();
    }
}
1528
A
Parsa's Humongous Tree
Parsa has a humongous tree on $n$ vertices. On each vertex $v$ he has written two integers $l_v$ and $r_v$. To make Parsa's tree look even more majestic, Nima wants to assign a number $a_v$ ($l_v \le a_v \le r_v$) to each vertex $v$ such that the beauty of Parsa's tree is maximized. Nima's sense of the beauty is rather bizarre. He defines the beauty of the tree as the sum of $|a_u - a_v|$ over all edges $(u, v)$ of the tree. Since Parsa's tree is too large, Nima can't maximize its beauty on his own. Your task is to find the \textbf{maximum} possible beauty for Parsa's tree.
The solution is based on the fact that an optimal assignment for $a$ exists such that for each vertex $v$, $a_v \in \{l_v, r_v\}$. Proving this fact isn't hard: pick any assignment for $a$. Assume $v$ is a vertex in this assignment such that $a_v \notin \{l_v, r_v\}$. Let $p$ be the number of vertices $u$ adjacent to $v$ such that $a_u > a_v$. Let $q$ be the number of vertices $u$ adjacent to $v$ such that $a_u < a_v$. Consider the following cases: $p > q$: In this case we can decrease $a_v$ to $l_v$ and get a better result. $p < q$: In this case we can increase $a_v$ to $r_v$ and get a better result. $p = q$: In this case changing $a_v$ to $l_v$ or $r_v$ will either increase or not change the beauty of the tree. Based on this fact, we can use dynamic programming to find the answer. Define $dp_{v,0}$ as the maximum beauty of $v$'s subtree if $a_v$ is equal to $l_v$. Similarly, define $dp_{v,1}$ as the maximum beauty of $v$'s subtree if $a_v$ is equal to $r_v$. $dp_{v,j}$ is calculated based on $v$'s children: for each child $u$ of $v$, we add $u$'s contribution to $dp_{v,j}$. The transitions are: $dp_{v,0} += max(dp_{u,0} + |l_v - l_u| , dp_{u,1} + |l_v - r_u|)$ $dp_{v,1} += max(dp_{u,0} + |r_v - l_u| , dp_{u,1} + |r_v - r_u|)$ It's clear that the answer is equal to $max(dp_{v,0}, dp_{v,1})$ for the root $v$. complexity: $\mathcal{O}(n)$
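The two-state DP translates directly into code; below is our own iterative-DFS sketch in Python (names and input format ours; the reference C++ solution follows the same transitions):

```python
def max_beauty(n, lr, edges):
    # lr[v] = (l_v, r_v), vertices 0-based; dp[v][0] assumes a_v = l_v,
    # dp[v][1] assumes a_v = r_v.
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    dp = [[0, 0] for _ in range(n)]
    # Iterative DFS from the root (vertex 0) to get a processing order.
    order, parent = [], [-1] * n
    stack, seen = [0], [False] * n
    seen[0] = True
    while stack:
        v = stack.pop()
        order.append(v)
        for u in adj[v]:
            if not seen[u]:
                seen[u] = True
                parent[u] = v
                stack.append(u)
    # Process children before parents.
    for v in reversed(order):
        for u in adj[v]:
            if parent[u] != v:
                continue
            for j in range(2):
                dp[v][j] += max(dp[u][0] + abs(lr[v][j] - lr[u][0]),
                                dp[u][1] + abs(lr[v][j] - lr[u][1]))
    return max(dp[0])
```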
[ "dfs and similar", "divide and conquer", "dp", "greedy", "trees" ]
1600
// Call my Name and Save me from The Dark
#include <bits/stdc++.h>
using namespace std;
typedef long long int ll;
typedef pair<int, int> pii;
#define SZ(x) (int) x.size()
#define F first
#define S second
const int N = 2e5 + 10;
ll dp[2][N];
int A[2][N], n;
vector<int> adj[N];
void DFS(int v, int p = -1)
{
    dp[0][v] = dp[1][v] = 0;
    for (int u : adj[v]) {
        if (u == p)
            continue;
        DFS(u, v);
        dp[0][v] += max(abs(A[0][v] - A[1][u]) + dp[1][u], dp[0][u] + abs(A[0][v] - A[0][u]));
        dp[1][v] += max(abs(A[1][v] - A[1][u]) + dp[1][u], dp[0][u] + abs(A[1][v] - A[0][u]));
    }
}
void Solve()
{
    scanf("%d", &n);
    for (int i = 1; i <= n; i++)
        scanf("%d%d", &A[0][i], &A[1][i]);
    fill(adj + 1, adj + n + 1, vector<int>());
    for (int i = 1; i < n; i++) {
        int u, v;
        scanf("%d%d", &u, &v);
        adj[u].push_back(v);
        adj[v].push_back(u);
    }
    DFS(1);
    printf("%lld\n", max(dp[0][1], dp[1][1]));
}
int main()
{
    int t;
    scanf("%d", &t);
    while (t--)
        Solve();
    return 0;
}
1528
B
Kavi on Pairing Duty
Kavi has $2n$ points lying on the $OX$ axis, $i$-th of which is located at $x = i$. Kavi considers all ways to split these $2n$ points into $n$ pairs. Among those, he is interested in \textbf{good} pairings, which are defined as follows: Consider $n$ segments with ends at the points in correspondent pairs. The pairing is called good, if for every $2$ different segments $A$ and $B$ among those, at least one of the following holds: - One of the segments $A$ and $B$ lies completely inside the other. - $A$ and $B$ have the same length. Consider the following example: $A$ is a good pairing since the red segment lies completely inside the blue segment. $B$ is a good pairing since the red and the blue segment have the same length. $C$ is not a good pairing since none of the red or blue segments lies inside the other, neither do they have the same size. Kavi is interested in the number of good pairings, so he wants you to find it for him. As the result can be large, find this number modulo $998244353$. Two pairings are called different, if some two points are in one pair in some pairing and in different pairs in another.
Let $dp_i$ be the number of good pairings of $2i$ points. Clearly, the answer is $dp_n$.

Lemma: Denote by $x$ the point matched with the point $1$. Notice that each point $p$ $(x < p \le 2n)$ belongs to a segment with length equal to $[1 , x]$'s length.

Proof: Assume some point $p$ $(x < p \le 2n)$ is paired with a point $q$ $(q > p)$; since $[p , q]$ does not lie inside $[1, x]$ and cannot contain it, their lengths must be equal for the pairing to be good.

To compute $dp_n$, consider the following cases:

$x > n$: Similar to the lemma mentioned above, it can be proved that each point $p$ $(1 \le p \le 2n-x+1)$ is paired with the point $p + x - 1$. The remaining $2(x - n - 1)$ unpaired points form a contiguous block which lies inside each of the current pairs, thus they can be paired in $dp_{x - n - 1}$ ways.

$x \le n$: In this case, due to the lemma mentioned above, all the segments must have the same length, which must be a divisor of $n$ smaller than $n$ (the configuration where all segments have length $n$ is already counted in the previous case, as $x = n + 1$). This gives $D(n) - 1$ pairings, where $D(n)$ is the number of divisors of $n$.

So $dp_n = D(n) - 1 + \sum_{i = 0}^{n-1} {dp_i}$. Note that $dp_0 = dp_1 = 1$. complexity: $\mathcal{O}(n\log n)$
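The recurrence is a direct prefix-sum computation once the divisor counts are sieved; a sketch mirroring the reference solution (which counts, for each $i$, only its divisors smaller than $i$):

```python
MOD = 998244353

def good_pairings(n):
    # dp[i] = number of good pairings of 2i points.
    dp = [0] * (n + 1)
    # Sieve: dp[j] starts as the number of divisors of j strictly less than j.
    for d in range(1, n + 1):
        for j in range(2 * d, n + 1, d):
            dp[j] += 1
    dp[0] = 1
    s = 1                      # running sum dp[0] + ... + dp[i-1]
    for i in range(1, n + 1):
        dp[i] = (dp[i] + s) % MOD
        s = (s + dp[i]) % MOD
    return dp[n]
```

For instance, all three pairings of $4$ points are good, so `good_pairings(2)` is $3$.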
[ "combinatorics", "dp", "math" ]
1700
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
typedef pair<ll, ll> pll;
typedef pair<int, int> pii;
#define X first
#define Y second
#define endl '\n'
const int N = 1e6 + 10;
const int MOD = 998244353;
int n, dp[N], S;
int main()
{
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    cin >> n;
    for (int i = 1; i <= n; i++) {
        for (int j = i + i; j <= n; j += i) {
            dp[j]++;
        }
    }
    dp[0] = S = 1;
    for (int i = 1; i <= n; i++) {
        dp[i] = (dp[i] + S) % MOD;
        S = (S + dp[i]) % MOD;
    }
    cout << dp[n] << endl;
    return 0;
}
1528
C
Trees of Tranquillity
Soroush and Keshi each have a labeled and rooted tree on $n$ vertices. Both of their trees are rooted from vertex $1$. Soroush and Keshi used to be at war. After endless decades of fighting, they finally became allies to prepare a Codeforces round. To celebrate this fortunate event, they decided to make a memorial graph on $n$ vertices. They add an edge between vertices $u$ and $v$ in the memorial graph if \textbf{both} of the following conditions hold: - One of $u$ or $v$ is the ancestor of the other in Soroush's tree. - Neither of $u$ or $v$ is the ancestor of the other in Keshi's tree. Here vertex $u$ is considered ancestor of vertex $v$, if $u$ lies on the path from $1$ (the root) to the $v$. Popping out of nowhere, Mashtali tried to find the maximum clique in the memorial graph for no reason. He failed because the graph was too big. Help Mashtali by finding the size of the maximum clique in the memorial graph. As a reminder, clique is a subset of vertices of the graph, each two of which are connected by an edge.
Let's start with some observations. Take any clique $C$ in the memorial graph. Notice that the vertices of $C$ are a subset of a path from the root to some leaf in Soroush's tree. So it's sufficient to solve the task for every leaf in Soroush's tree; specifically, we should consider subsets of the paths starting from the root and ending in a leaf in Soroush's tree. Assume you have a data structure that supports the following operations: Insert a vertex. Erase a vertex. Among the vertices inside it, find the biggest set of vertices $S$ such that none of them is an ancestor of another in Keshi's tree. To solve the task, start doing DFS from the root of Soroush's tree. Every time you visit a new vertex $v$, add $v$ using the $1$-st operation. Every time you finish doing DFS in a vertex $v$, erase $v$ using the $2$-nd operation. It's easy to see that the vertices in the data structure always form a path from the root to some vertex $v$ in Soroush's tree. The answer to the task is the maximum size of $S$ in the $3$-rd operation over every leaf $u$ of Soroush's tree, taken when adding $u$ has been the last operation on the data structure; in other words, $ans = \max(ans , x)$ where $x$ is the size of $S$ in the $3$-rd operation whenever you reach a leaf while doing DFS in Soroush's tree. When adding a vertex $v$ to the data structure, if no vertex $u \in S$ exists such that $u$ is in $v$'s subtree in Keshi's tree, consider the following cases: If no ancestor of $v$ is in $S$, greedily add $v$ to $S$. Otherwise, let that ancestor be $w$; erase $w$ from $S$ and add $v$ instead. On the other hand, if such a vertex $u$ already exists in $S$, we won't add $v$ to $S$, based on the greedy solution mentioned above. All notation used from here onwards refers to Keshi's tree unless stated otherwise. Do a DFS on the tree and find the starting time/finishing time of each vertex. It's widely known that vertex $v$ is an ancestor of vertex $u \iff$ $st_v \le st_u$ and $ft_v \ge ft_u$. 
Observation: for any pair of vertices $u$ and $v$, the segments $[st_u , ft_u]$ and $[st_v , ft_v]$ either don't share an element or one of them lies completely inside the other. To construct the aforementioned data structure: Let the set $S$ be a maximal set of the vertices that form a clique in the memorial graph. For each vertex $v$ we store the pair $\{st_v , v\}$ in $S$. Now, to check whether any vertex $u$ of the subtree of vertex $v$ exists in $S$: let $p$ be the first pair in $S$ whose first element is $\ge st_v$. If the finishing time of $p$'s second element is at most $ft_v$, then $p$'s second element is in $v$'s subtree; otherwise no stored vertex is. Now, to check whether any ancestor of $v$ is in $S$: let $p$ be the last pair in $S$ whose first element is less than $st_v$ (the predecessor of $\{st_v , v\}$). It can be proved that if an ancestor $u$ of $v$ exists in $S$, then $p = \{st_u , u\}$; thus we can check whether $v$ is in the subtree of $p$'s second element by the aforementioned observation. The erase operation is also possible by keeping a history of the elements deleted from the set $S$. complexity: $\mathcal{O}(n \log n)$
[ "data structures", "dfs and similar", "greedy", "trees" ]
2,300
#include<bits/stdc++.h> #define lc (id * 2) #define rc (id * 2 + 1) #define md ((s + e) / 2) #define dm ((s + e) / 2 + 1) #define ln (e - s + 1) using namespace std; typedef long long ll; const ll MXN = 3e5 + 10; const ll MXS = MXN * 4; ll n, timer, ans, Ans; ll lazy[MXS], seg[MXS]; ll Stm[MXN], Ftm[MXN]; vector<ll> adj[MXN][2]; void Build(ll id = 1, ll s = 1, ll e = n){ lazy[id] = -1, seg[id] = 0; if(ln > 1) Build(lc, s, md), Build(rc, dm, e); } void Shift(ll id, ll s, ll e){ if(lazy[id] == -1) return; seg[id] = lazy[id]; if(ln > 1) lazy[lc] = lazy[rc] = lazy[id]; lazy[id] = -1; } void Upd(ll l, ll r, ll x, ll id = 1, ll s = 1, ll e = n){ Shift(id, s, e); if(e < l || s > r) return; if(l <= s && e <= r){ lazy[id] = x; Shift(id, s, e); return; } Upd(l, r, x, lc, s, md), Upd(l, r, x, rc, dm, e); seg[id] = max(seg[lc], seg[rc]); } ll Get(ll l, ll r, ll id = 1, ll s = 1, ll e = n){ Shift(id, s, e); if(e < l || s > r) return 0; if(l <= s && e <= r) return seg[id]; return max(Get(l, r, lc, s, md), Get(l, r, rc, dm, e)); } void prep(ll u, ll par, ll f){ Stm[u] = ++ timer; for(auto v : adj[u][f]){ if(v != par) prep(v, u, f); } Ftm[u] = timer; } bool Is_Jad(ll j, ll u){ return (Stm[j] <= Stm[u] && Ftm[u] <= Ftm[j]); } void DFS(ll u, ll par, ll f){ ll j = Get(Stm[u], Ftm[u]); if(!j) Upd(Stm[u], Ftm[u], u), ans ++; else{ if(Is_Jad(j, u)){ Upd(Stm[j], Ftm[j], 0); Upd(Stm[u], Ftm[u], u); } } Ans = max(Ans, ans); for(auto v : adj[u][f]){ if(v != par) DFS(v, u, f); } if(!j) Upd(Stm[u], Ftm[u], 0), ans --; else{ if(Is_Jad(j, u)){ Upd(Stm[u], Ftm[u], 0); Upd(Stm[j], Ftm[j], j); } } } void solve(){ cin >> n, timer = ans = Ans = 0; for(int i = 1; i <= n; i ++) adj[i][0].clear(), adj[i][1].clear(); for(int t = 0; t < 2; t ++) for(int u = 2, v; u <= n; u ++){ cin >> v, adj[v][t].push_back(u), adj[u][t].push_back(v); } Build(), prep(1, 0, 1); DFS(1, 0, 0); cout << Ans << '\n'; } int main(){ ios::sync_with_stdio(0);cin.tie(0); cout.tie(0); ll q; cin >> q; while(q --) solve(); return 0; } 
1528
D
It's a bird! No, it's a plane! No, it's AaParsa!
There are $n$ cities in Shaazzzland, numbered from $0$ to $n-1$. Ghaazzzland, the immortal enemy of Shaazzzland, is ruled by AaParsa. As the head of the Ghaazzzland's intelligence agency, AaParsa is carrying out the most important spying mission in Ghaazzzland's history on Shaazzzland. AaParsa has planted $m$ transport cannons in the cities of Shaazzzland. The $i$-th cannon is planted in the city $a_i$ and is initially pointing at city $b_i$. It is guaranteed that each of the $n$ cities has \textbf{at least one} transport cannon planted inside it, and that no two cannons from the same city are initially pointing at the same city (that is, all pairs $(a_i, b_i)$ are distinct). AaParsa used very advanced technology to build the cannons, the cannons rotate every second. In other words, if the $i$-th cannon is pointing towards the city $x$ at some second, it will target the city $(x + 1) \mod n$ at the next second. As their name suggests, transport cannons are for transportation, specifically for human transport. If you use the $i$-th cannon to launch yourself towards the city that it's currently pointing at, you'll be airborne for $c_i$ seconds before reaching your target destination. If you still don't get it, using the $i$-th cannon at the $s$-th second (using which is only possible if you are currently in the city $a_i$) will shoot you to the city $(b_i + s) \mod n$ and you'll land in there after $c_i$ seconds (so you'll be there in the $(s + c_i)$-th second). Also note the cannon that you initially launched from will rotate every second but you obviously won't change direction while you are airborne. AaParsa wants to use the cannons for travelling between Shaazzzland's cities in his grand plan, and he can start travelling at second $0$. For him to fully utilize them, he needs to know the minimum number of seconds required to reach city $u$ from city $v$ using the cannons for every pair of cities $(u, v)$. 
\textbf{Note that AaParsa can stay in a city for as long as he wants}.
Suppose we ran a normal Dijkstra; the only case that might be missed is when we wait in a vertex for some time. To handle the waiting, we can add $n$ fake edges, the $i$-th of which goes from the $i$-th vertex to the $((i+1) \bmod n)$-th vertex with weight equal to one. Note that, unlike the cannons, fake edges do not rotate. It can be proved that running Dijkstra in the new graph is sufficient if we guarantee that the first used edge is not fake: we can map "wait for $x$ seconds, then use a cannon to go from $v$ to $u$" to "use a cannon, then take $x$ fake edges to go from $v$ to $u$". Also, due to the strict time limit, you should use the $\mathcal{O}(n^2)$ variant of Dijkstra. complexity: $\mathcal{O}(n^3)$
[ "constructive algorithms", "graphs", "shortest paths" ]
2,500
#include <bits/stdc++.h> using namespace std; typedef long long ll; const int maxn = 600 + 5; int n; int dp[maxn]; bool mark[maxn]; vector<pair<int,int>> g[maxn]; int go[maxn]; int dis(int s, int t) { if (s <= t) return t - s; return n - (s - t); } void dijkstra(int src) { memset(dp, 63, sizeof dp); memset(mark, 0, sizeof mark); dp[src] = 0; set<int> S; for (int i = 0; i < n; i++) S.insert(i); for (int i = 0; i < n - 1; i++) { int v = -1; for (int j = 0; j < n; j++) { if (mark[j]) continue; if (v == -1 or dp[v] > dp[j]) v = j; } S.erase(v); mark[v] = 1; if (v != src) { auto it = S.lower_bound(v); if (it == S.end()) it = S.lower_bound(0); int nex = *it; dp[nex] = min(dp[nex], dp[v] + dis(v, nex)); } for (int i = 2 * n - 1; i >= 0; i--) { int v = i % n; if (!mark[v]) go[v] = v; else go[v] = go[(v + 1) % n]; } for (auto [u, w] : g[v]) { int nex = go[(u + dp[v]) % n]; dp[nex] = min(dp[nex], dp[v] + w + dis((u + dp[v]) % n, nex)); } } } int main() { ios_base::sync_with_stdio(false); int m; cin >> n >> m; for (int i = 0; i < m; i++) { int v, u, w; cin >> v >> u >> w; g[v].push_back({u, w}); } for (int i = 0; i < n; i++) { dijkstra(i); for (int j = 0; j < n; j++) cout << dp[j] << " \n"[j == n-1]; } }
1528
E
Mashtali and Hagh Trees
Today is Mashtali's birthday! He received a \textbf{Hagh} tree from Haj Davood as his birthday present! A directed tree is called a \textbf{Hagh} tree iff: - The length of the longest directed path in it is exactly $n$. - Every vertex has \textbf{at most three edges} attached to it independent of their orientation. - Let's call vertices $u$ and $v$ friends if one of them has a directed path to the other. For every pair of vertices $u$ and $v$ that are not friends, there should exist a vertex $w$ that is friends with both $u$ and $v$ (a mutual friend). After opening his gift, Mashtali found out that the labels on the vertices were gone. Immediately, he asked himself: how many different unlabeled Hagh trees are there? That is, how many possible trees could he have received as his birthday present? At the first glance, the number of such trees seemed to be infinite since there was no limit on the number of vertices; but then he solved the problem and proved that \textbf{there's a finite number of unlabeled Hagh trees!} Amazed by this fact, he shared the task with you so that you could enjoy solving it as well. Since the answer can be rather large he asked you to find the number of different unlabeled Hagh trees modulo $998244353$. Here two trees are considered different, if they are not isomorphic: if there is no way to map nodes of one tree to the second tree, so that edges are mapped to edges preserving the orientation. Some examples for $n = 2$: Directed trees $D$ and $E$ are Hagh. $C$ is not Hagh because it has a vertex with $4$ edges attached to it. $A$ and $B$ are not Hagh because their longest directed paths are not equal to $n$. Also in $B$ the leftmost and rightmost vertices are not friends neither do they have a mutual friend.
Let $dp_i$ be the answer for all trees such that there exists a root, all edges are directed in the same direction relative to the root, and the root has at most $2$ children. We transition: $dp_i = dp_{i-1}+dp_{i-1} \cdot pdp_{i-2}+\frac{dp_{i-1} \cdot (dp_{i-1}+1)}{2}$ where $pdp_i = \sum_{j = 0}^{i}{dp_j}$. Then let $dp2_i$ be the same as $dp_i$ except the root must have exactly $2$ children. So $dp2_i = dp_i-dp_{i-1}$. The answer for these cases is: $2 \cdot (dp_n+\frac{ dp_{n-1} \cdot pdp_{n-2} \cdot (pdp_{n-2}+1)}{2}+\frac{pdp_{n-2} \cdot dp_{n-1} \cdot (dp_{n-1}+1)}{2}+ \frac{dp_{n-1} \cdot (dp_{n-1}+1) \cdot (dp_{n-1}+2)}{6})-1$. This is because $dp_n$ holds the answer for at most $2$ children and the other terms account for the rest. We multiply by $2$ to account for both edge directions, and subtract $1$ because the plain directed path is counted once for each direction even though its two orientations are isomorphic. This doesn't handle all cases yet, but every remaining case has the following form. Let $t_{up,k}$ be a tree where the root has $2$ children, the edges are directed up and the longest path is $k$, and let $t_{down,k}$ be a tree where the root has $2$ children, the edges are directed down and the longest path is $k$. Then every remaining case is a $t_{up,k}$ connected by some path of length $l$ to a $t_{down,n-k-l}$. We can count every such case as $\sum_{i=0}^{n-1}{(dp_{i}-1) \cdot dp2_{n-1-i}}$. This works because we pretend the path always has length $1$; using $dp_i \cdot dp2_{n-1-i}$ would handle all cases except for when the $t_{up,k}$ part is empty, and that happens exactly once, which the $-1$ accounts for.
[ "combinatorics", "dp", "trees" ]
2,900
#include <bits/stdc++.h> #pragma GCC optimize ("O2,unroll-loops") //#pragma GCC optimize("no-stack-protector,fast-math") using namespace std; typedef long long ll; typedef long double ld; typedef pair<int, int> pii; typedef pair<pii, int> piii; typedef pair<ll, ll> pll; #define debug(x) cerr<<#x<<'='<<(x)<<endl; #define debugp(x) cerr<<#x<<"= {"<<(x.first)<<", "<<(x.second)<<"}"<<endl; #define debug2(x, y) cerr<<"{"<<#x<<", "<<#y<<"} = {"<<(x)<<", "<<(y)<<"}"<<endl; #define debugv(v) {cerr<<#v<<" : ";for (auto x:v) cerr<<x<<' ';cerr<<endl;} #define all(x) x.begin(), x.end() #define pb push_back #define kill(x) return cout<<x<<'\n', 0; const int inf=1000000010; const ll INF=1000000000000001000LL; const int mod=998244353, inv6=166374059; const int MAXN=1000010, LOG=20; ll n, m, k, u, v, x, y, t, a, b, ans; ll dp[MAXN], ps[MAXN]; ll pd[MAXN], sp[MAXN]; ll C3(ll x){ return x*(x-1)%mod*(x-2)%mod*inv6%mod; } ll C2(ll x){ return x*(x-1)/2%mod; } int main(){ ios_base::sync_with_stdio(false);cin.tie(0);cout.tie(0); //freopen("input.txt", "r", stdin); //freopen("output.txt", "w", stdout); cin>>n; dp[0]=ps[0]=1; pd[0]=sp[0]=1; for (int i=1; i<=n; i++){ ps[i]=(1 + ps[i-1] + ps[i-1]*(ps[i-1]+1)/2)%mod; dp[i]=ps[i]-ps[i-1]; pd[i]=dp[i]-dp[i-1]; sp[i]=(sp[i-1]+pd[i])%mod; } for (int i=0; i<n; i++) ans=(ans + pd[i]*sp[n-1-i])%mod; ans=(ans + 2*C3(ps[n-1]+2))%mod; if (n>=2) ans=(ans - 2*C3(ps[n-2]+2))%mod; ans=(ans + 2*C2(ps[n-1]+1))%mod; if (n>=2) ans=(ans - 2*C2(ps[n-2]+1))%mod; if (ans<0) ans+=mod; cout<<ans<<"\n"; return 0; }
1528
F
AmShZ Farm
To AmShZ, all arrays are equal, but some arrays are \textbf{more-equal} than others. Specifically, the arrays consisting of $n$ elements from $1$ to $n$ that can be turned into permutations of numbers from $1$ to $n$ by adding a non-negative integer to each element. Mashtali \sout{who wants to appear in every problem statement} thinks that an array $b$ consisting of $k$ elements is compatible with a \textbf{more-equal} array $a$ consisting of $n$ elements if for each $1 \le i \le k$ we have $1 \le b_i \le n$ and also $a_{b_1} = a_{b_2} = \ldots = a_{b_k}$. Find the number of pairs of arrays $a$ and $b$ such that $a$ is a more-equal array consisting of $n$ elements and $b$ is an array compatible with $a$ consisting of $k$ elements modulo $998244353$. Note that the elements of $b$ are \textbf{not necessarily distinct}, same holds for $a$.
Consider the following problem: $n$ cars want to enter a parking lot one by one. The parking lot has $n$ slots numbered $1, 2 , \ldots , n$, and the $i$-th of the $n$ cars wants to park in the $a_i$-th slot. When the $i$-th car drives in, it will park in the first empty slot $s$ such that $s \ge a_i$. An array $a$ is good if all of the cars are parked in some slot after the end of the procedure. Good arrays can be mapped to more-equal arrays, because it can be proved that in any good array, after sorting, $a_i \le i$ holds, and the same goes for more-equal arrays. Now let's modify the above problem a bit: consider a circular parking lot with $n + 1$ free slots. $n$ cars want to park in it, the $i$-th of which wants to park in the $a_i$-th slot $(1 \le a_i \le n + 1)$. When the $i$-th car drives in, it will park in the first empty slot $s$ after (and including) $a_i$ in the clockwise direction. Obviously exactly one slot will be empty after the end of the procedure; let's call this slot $x$. Arrays in which $x = n+1$ can also be mapped to more-equal arrays; let them be good arrays, and let the other arrays (those in which $x \neq n+1$) be called bad arrays. Every good array can be mapped to $n$ bad arrays: just add a shift $x$ $(1 \le x \le n)$ to all elements of $a$; formally speaking, for each $i$ $(1 \le i \le n)$ set $a_i \leftarrow ((a_i + x - 1) \bmod (n + 1)) + 1$. This can also be viewed as a circular shift of the elements/final positions. Note that the number of arrays compatible with $a$ stays the same in this process. Thus the number of bad arrays is equal to $\frac{n}{n+1}$ of all the $(n + 1)^n$ possible arrays. We know that $a_{b_1} = a_{b_2} = \ldots = a_{b_k}$; assume $a_{b_1} = x$ and fix this $x$ $(1 \le x \le n+1)$, and let $CNT$ be the number of occurrences of $x$ in $a$. The number of compatible arrays $b$ such that $a_{b_j} = x$ $(1 \le j \le k)$ is then equal to $CNT^k$. 
So the number of pairs of arrays $a$ and $b$ where $b$ is a compatible array with the more-equal array $a$ is equal to $\sum_{CNT = 0}^{n}{{n}\choose{CNT}} \cdot n^{n-CNT} \cdot CNT^k$. There are $n+1$ ways to choose $x$, and we also have to divide the result by $n + 1$ because we were previously counting the bad arrays as well, so these two factors cancel each other. The sum $\sum_{CNT = 0}^{n}{{n}\choose{CNT}} \cdot n^{n-CNT} \cdot CNT^k$ mentioned above is equal to $\sum_{i = 1}^{k}{S(k , i) \cdot i! \cdot {{n}\choose{i}} \cdot (n+1)^{(n-i)}}$, where $S$ refers to the Stirling numbers of the second kind. All of the $S(k , i)$ can be found in $\mathcal{O}(k\log k)$ using FFT, so overall we can solve the task in $\mathcal{O}(k\log k)$.
[ "combinatorics", "fft", "math" ]
3,300
# include <bits/stdc++.h> using namespace std; typedef long long ll; typedef long double ld; typedef pair <int, int> pii; typedef pair <pii, int> ppi; typedef pair <int, pii> pip; typedef pair <pii, pii> ppp; typedef pair <ll, ll> pll; # define A first # define B second # define endl '\n' # define sep ' ' # define all(x) x.begin(), x.end() # define kill(x) return cout << x << endl, 0 # define SZ(x) int(x.size()) # define lc id << 1 # define rc id << 1 | 1 # define InTheNameOfGod ios::sync_with_stdio(0);cin.tie(0); cout.tie(0); ll power(ll a, ll b, ll md) {return (!b ? 1 : (b & 1 ? a * power(a * a % md, b / 2, md) % md : power(a * a % md, b / 2, md) % md));} const int xn = 3e5 + 10; const int xm = 18; const int sq = 320; const int inf = 1e9 + 10; const ll INF = 1e18 + 10; const int mod = 998244353; const int base = 257; int n, k, C[xn], ans, A[xn], B[xn], rev[xn]; int fact[xn], ifact[xn]; void NTT(int *A, bool inv){ int n = (1 << xm); for (int i = 0; i < (1 << xm); ++ i) if (rev[i] < i) swap(A[i], A[rev[i]]); for (int ln = 1; ln < n; ln <<= 1){ int w = power(3, mod / 2 / ln, mod); if (inv) w = power(w, mod - 2, mod); for (int i = 0; i < n; i += ln + ln){ int wn = 1; for (int j = i; j < i + ln; ++ j){ int x = A[j], y = 1ll * A[j + ln] * wn % mod; A[j] = (x + y) % mod; A[j + ln] = (x - y + mod) % mod; wn = 1ll * wn * w % mod; } } } if (inv){ int invn = power(1 << xm, mod - 2, mod); for (int i = 0; i < n; ++ i) A[i] = 1ll * A[i] * invn % mod; } } int main(){ InTheNameOfGod; cin >> n >> k; C[0] = fact[0] = 1; for (int i = 1; i < xn; ++ i){ C[i] = 1ll * C[i - 1] * (n - i + 1) % mod * power(i, mod - 2, mod) % mod; fact[i] = 1ll * fact[i - 1] * i % mod; } ifact[xn - 1] = power(fact[xn - 1], mod - 2, mod); for (int i = xn - 2; i >= 0; -- i) ifact[i] = 1ll * ifact[i + 1] * (i + 1) % mod; for (int i = 1; i < (1 << xm); ++ i) rev[i] = (rev[i >> 1] >> 1) | ((i & 1) << (xm - 1)); for (int i = 0; i <= k; ++ i){ A[i] = 1ll * power(i, k, mod) * ifact[i] % mod; B[i] = ifact[i]; if (i % 2) B[i] = (mod - B[i]) % mod; } NTT(A, 0), NTT(B, 0); for (int i = 0; i < xn; ++ i) A[i] = 1ll * A[i] * B[i] % mod; NTT(A, 1); for (int i = 1; i <= k; ++ i) ans = (ans + 1LL * A[i] * C[i] % mod * power(n + 1, n - i, mod) % mod * fact[i] % mod) % mod; cout << ans << endl; return 0; }
1529
A
Eshag Loves Big Arrays
Eshag has an array $a$ consisting of $n$ integers. Eshag can perform the following operation any number of times: choose some subsequence of $a$ and delete every element from it which is \textbf{strictly} larger than $AVG$, where $AVG$ is the average of the numbers in the chosen subsequence. For example, if $a = [1 , 4 , 3 , 2 , 4]$ and Eshag applies the operation to the subsequence containing $a_1$, $a_2$, $a_4$ and $a_5$, then he will delete those of these $4$ elements which are larger than $\frac{a_1+a_2+a_4+a_5}{4} = \frac{11}{4}$, so after the operation, the array $a$ will become $a = [1 , 3 , 2]$. Your task is to find the \textbf{maximum} number of elements Eshag can delete from the array $a$ by applying the operation described above some number (maybe, zero) times. A sequence $b$ is a subsequence of an array $c$ if $b$ can be obtained from $c$ by deletion of several (possibly, zero or all) elements.
We state that every element except for the elements with the smallest value can be deleted. Proof: denote by $MN$ the smallest value of the array $a$. In each operation pick an element equal to $MN$ and some other element, say $X$, which is bigger than $MN$; since $AVG = \frac{X + MN}{2} < X$, the element $X$ will be deleted. Doing this for every $X > MN$ results in the deletion of every element except those with the smallest value. So the answer to the problem is $n - cntMN$, where $cntMN$ is the number of occurrences of $MN$ in $a$. complexity: $\mathcal{O}(n)$
[ "constructive algorithms", "greedy", "math" ]
800
// khodaya khodet komak kon # include <bits/stdc++.h> using namespace std; typedef long long ll; typedef long double ld; typedef pair <int, int> pii; typedef pair <pii, int> ppi; typedef pair <int, pii> pip; typedef pair <pii, pii> ppp; typedef pair <ll, ll> pll; # define A first # define B second # define endl '\n' # define sep ' ' # define all(x) x.begin(), x.end() # define kill(x) return cout << x << endl, 0 # define SZ(x) int(x.size()) # define lc id << 1 # define rc id << 1 | 1 ll power(ll a, ll b, ll md) {return (!b ? 1 : (b & 1 ? a * power(a * a % md, b / 2, md) % md : power(a * a % md, b / 2, md) % md));} const int xn = 1e2 + 10; const int sq = 320; const int inf = 1e9 + 10; const ll INF = 1e18 + 10; const int mod = 1e9 + 7;//998244353; const int base = 257; int qq, n, a[xn], mn, ans; int main(){ ios::sync_with_stdio(0);cin.tie(0); cout.tie(0); cin >> qq; while (qq --){ cin >> n, mn = inf, ans = 0; for (int i = 1; i <= n; ++ i) cin >> a[i], mn = min(mn, a[i]); for (int i = 1; i <= n; ++ i) ans += a[i] != mn; cout << ans << endl; } return 0; }
1529
B
Sifid and Strange Subsequences
A sequence $(b_1, b_2, \ldots, b_k)$ is called \textbf{strange}, if the absolute difference between any pair of its elements is greater than or equal to the maximum element in the sequence. Formally speaking, it's strange if for every pair $(i, j)$ with $1 \le i<j \le k$, we have $|b_i-b_j|\geq MAX$, where $MAX$ is the largest element of the sequence. In particular, any sequence of length at most $1$ is strange. For example, the sequences $(-2021, -1, -1, -1)$ and $(-1, 0, 1)$ are strange, but $(3, 0, 1)$ is not, because $|0 - 1| < 3$. Sifid has an array $a$ of $n$ integers. Sifid likes everything big, so among all the strange subsequences of $a$, he wants to find the length of the \textbf{longest} one. Can you help him? A sequence $c$ is a subsequence of an array $d$ if $c$ can be obtained from $d$ by deletion of several (possibly, zero or all) elements.
It's easy to prove that a strange subsequence can't contain more than one positive element. So it's optimal to pick all of the non-positive elements, and then we can pick at most one positive element. Assume $x$ is the minimum positive element in the array. We can pick $x$ iff no two elements $a$ and $b$ of the already picked set satisfy $|a - b| < x$. To check this, we just have to sort the already picked elements and examine the differences between adjacent pairs. complexity: $\mathcal{O}(n\log n)$
[ "greedy", "math", "sortings" ]
1,100
// khodaya khodet komak kon // Nightcall - London Grammer # include <bits/stdc++.h> using namespace std; typedef long long ll; typedef long double ld; typedef pair <int, int> pii; typedef pair <pii, int> ppi; typedef pair <int, pii> pip; typedef pair <pii, pii> ppp; typedef pair <ll, ll> pll; # define A first # define B second # define endl '\n' # define sep ' ' # define all(x) x.begin(), x.end() # define kill(x) return cout << x << endl, 0 # define SZ(x) int(x.size()) # define lc id << 1 # define rc id << 1 | 1 ll power(ll a, ll b, ll md) {return (!b ? 1 : (b & 1 ? a * power(a * a % md, b / 2, md) % md : power(a * a % md, b / 2, md) % md));} const int xn = 1e5 + 10; const int xm = - 20 + 10; const int sq = 320; const int inf = 1e9 + 10; const ll INF = 1e18 + 10; const int mod = 1e9 + 7;//998244353; const int base = 257; int qq, n, a[xn], ans, mn; bool flag; int main(){ ios::sync_with_stdio(0);cin.tie(0); cout.tie(0); cin >> qq; while (qq --){ cin >> n, ans = 0; for (int i = 1; i <= n; ++ i) cin >> a[i], ans += (a[i] <= 0); sort(a + 1, a + n + 1), mn = inf; for (int i = 1; i <= n; ++ i) if (a[i] > 0) mn = min(mn, a[i]); flag = (mn < inf); for (int i = 2; i <= n; ++ i) if (a[i] <= 0) flag &= (a[i] - a[i - 1] >= mn); if (flag) cout << ans + 1 << endl; else cout << ans << endl; } return 0; }
1530
A
Binary Decimal
Let's call a number a binary decimal if it's a positive integer and all digits in its decimal notation are either $0$ or $1$. For example, $1\,010\,111$ is a binary decimal, while $10\,201$ and $787\,788$ are not. Given a number $n$, you are asked to represent $n$ as a sum of some (not necessarily distinct) binary decimals. Compute the smallest number of binary decimals required for that.
Let $d$ be the largest decimal digit of $n$. Note that we need at least $d$ binary decimals to represent $n$ as a sum. Indeed, if we only use $k < d$ binary decimals, no digit of the sum will ever exceed $k$. However, we need at least one digit equal to $d$. At the same time, it is easy to construct an answer with exactly $d$ terms. Start with all terms equal to $0$, and consider each digit separately. Let the $i$-th digit of $n$ be $a_i$. Pick any $a_i$ terms out of the $d$ terms we have, and add $1$ to their $i$-th digits. Therefore, the answer is $d$.
[ "greedy", "math" ]
800
null
1530
B
Putting Plates
To celebrate your birthday you have prepared a festive table! Now you want to seat as many guests as possible. The table can be represented as a rectangle with height $h$ and width $w$, divided into $h \times w$ cells. Let $(i, j)$ denote the cell in the $i$-th row and the $j$-th column of the rectangle ($1 \le i \le h$; $1 \le j \le w$). Into each cell of the table you can either put a plate or keep it empty. As each guest has to be seated next to their plate, you can only put plates on the edge of the table — into the first or the last row of the rectangle, or into the first or the last column. Formally, for each cell $(i, j)$ you put a plate into, at least one of the following conditions must be satisfied: $i = 1$, $i = h$, $j = 1$, $j = w$. To make the guests comfortable, no two plates must be put into cells that have a common side or corner. In other words, if cell $(i, j)$ contains a plate, you can't put plates into cells $(i - 1, j)$, $(i, j - 1)$, $(i + 1, j)$, $(i, j + 1)$, $(i - 1, j - 1)$, $(i - 1, j + 1)$, $(i + 1, j - 1)$, $(i + 1, j + 1)$. Put as many plates on the table as possible without violating the rules above.
There are many ways to solve this problem and even more ways to get it accepted. Let's consider a provable solution that minimizes the amount of casework. We'll call a valid solution optimal if it has the largest possible number of plates. Claim. There exists an optimal solution that contains a plate in every corner of the table. Proof. Consider any optimal solution, and consider all four corners one-by-one in any order. If a corner contains a plate, do nothing. If a corner doesn't contain a plate, but either of its neighbors does, move the plate from the neighbor to the corner. Note that we'll still obtain a valid optimal solution. If neither a corner nor any of its neighbors contain a plate, we can put a plate into the corner, increasing the number of plates and contradicting the optimality of our solution. Thus, this case is impossible. After considering all four corners, we'll obtain an optimal solution with a plate in every corner of the table, as desired. $\blacksquare$ Once we put a plate in every corner, four sides of the table don't interact anymore, and we can solve the problem on each side independently. Putting as many plates as possible on one side is easy: just leave one empty cell between neighboring plates, and maybe leave two empty cells at the end if the length of the side is even.
[ "constructive algorithms", "implementation" ]
800
null
1530
C
Pursuit
You and your friend Ilya are participating in an individual programming contest consisting of multiple stages. A contestant can get between $0$ and $100$ points, inclusive, for each stage, independently of other contestants. Points received by contestants in different stages are used for forming overall contest results. Suppose that $k$ stages of the contest are completed. For each contestant, $k - \lfloor \frac{k}{4} \rfloor$ stages with the highest scores are selected, and these scores are added up. This sum is the overall result of the contestant. (Here $\lfloor t \rfloor$ denotes rounding $t$ down.) For example, suppose $9$ stages are completed, and your scores are $50, 30, 50, 50, 100, 10, 30, 100, 50$. First, $7$ stages with the highest scores are chosen — for example, all stages except for the $2$-nd and the $6$-th can be chosen. Then your overall result is equal to $50 + 50 + 50 + 100 + 30 + 100 + 50 = 430$. As of now, $n$ stages are completed, and you know the points you and Ilya got for these stages. However, it is unknown how many more stages will be held. You wonder what the smallest number of additional stages is, after which your result might become greater than or equal to Ilya's result, at least in theory. Find this number!
The first thing to notice is that since we're chasing Ilya and we want to reach his score as soon as possible, it only makes sense to add $100$'s to our scores and $0$'s to his. We can also notice that the answer never exceeds $n$. No matter how bad a stage is for us in terms of points, adding a single stage where we score $100$ and Ilya scores $0$ "compensates" it. In particular, in the worst case, when all $a_i = 0$ and all $b_i = 100$, the answer is exactly $n$. However, if we just add the $100/0$ stages one-by-one and calculate the overall results from scratch every time, our solution will have $O(n^2)$ complexity and that's too much. There are many ways to optimize the solution. One of them is to transform the given $a$ and $b$ arrays into arrays of length $101$, containing the count of each score (since we are not interested in the order of the scores). Let $m = 100$ be the maximum achievable score in a stage. Using the transformed arrays, we can calculate the overall scores in $O(m)$ instead of $O(n)$, to lower the final complexity to $O(mn)$. Alternatively, notice that when we add $100$ to our scores, it just adds $100$ to our overall score except for the case when the total number of completed stages becomes divisible by $4$, when we also need to subtract the score of the worst currently included stage from the sum. We can similarly handle adding $0$ to Ilya's scores. If we sort all our and Ilya's scores at the beginning and maintain a pointer to the current worst included stage in both scoresheets, we can add a new $100/0$ stage and recalculate the totals in $O(1)$. Finally, we can also notice that whenever adding $k$ stages works for us, adding $k+1$ stages will work too. Thus, we can use binary search on $k$. We can easily check a single value of $k$ in $O(n)$ or $O(n \log n)$, resulting in $O(n \log n)$ or $O(n \log^2 n)$ time complexity.
[ "binary search", "brute force", "greedy", "sortings" ]
1,200
null
1530
D
Secret Santa
Every December, VK traditionally holds an event for its employees named "Secret Santa". Here's how it happens. $n$ employees numbered from $1$ to $n$ take part in the event. Each employee $i$ is assigned a different employee $b_i$, to which employee $i$ has to make a new year gift. Each employee is assigned to exactly one other employee, and nobody is assigned to themselves (but two employees may be assigned to each other). Formally, all $b_i$ must be distinct integers between $1$ and $n$, and for any $i$, $b_i \ne i$ must hold. The assignment is usually generated randomly. This year, as an experiment, all event participants have been asked who they wish to make a gift to. Each employee $i$ has said that they wish to make a gift to employee $a_i$. Find a valid assignment $b$ that maximizes the number of fulfilled wishes of the employees.
Let $m$ be the number of different values among $a_i$ (that is, the number of distinct employees someone wishes to make a gift to). It's easy to see that the answer, $k$, can not exceed $m$: each employee mentioned in $a_i$ allows us to fulfill at most one wish. It turns out that $k$ can always be equal to $m$, and here's how. We can visualize the problem in terms of graphs. We are given a functional graph $G$ (for each $i$, there is an edge from $i$ to $a_i$), and we need to find a directed graph that consists of cycles of length at least $2$ and shares as many edges with $G$ as possible. For each vertex in $G$ that has at least one incoming edge, keep any of these edges and remove the others. Now every vertex has outdegree at most $1$ and indegree at most $1$. Hence, our graph becomes a collection of paths and cycles (isolated vertices are considered to be paths of length $0$). Let the paths be $P_1, P_2, \ldots, P_t$. For each $i = 1 \ldots t$, create an edge from the end of path $P_i$ to the beginning of path $P_{i \bmod t + 1}$. That is, we are forming a single loop out of all the paths. This will always work except for one case: if the new loop we are forming has length $1$. It means that we have a single isolated vertex $v$, and all other vertices form valid cycles. If we ever arrive at this case, we can pick the initial edge going from $v$ to $a_v$, return it to the graph, and remove the other edge going from some vertex $u$ into $a_v$ that we kept. This will break the cycle containing vertex $a_v$ without changing the number of edges shared with the initial graph. Finally, add an edge from $u$ to $v$ closing the cycle, and that will give us a correct answer. Alternatively, to not ever arrive at the bad case, we can use the knowledge of our future selves at the earlier stage of choosing the edges to keep and remove, and prioritize keeping edges going from vertices with indegree $0$. Finally, randomized solutions are also possible. 
For example, we can select the edges to keep at random, write down the sets of vertices with indegree $0$ and vertices with outdegree $0$, shuffle both sets, and try to create an edge from the $i$-th vertex of one set to the $i$-th vertex of the other one. If we fail, and that can only happen when we try to create an edge from a vertex to self, we just start the process from scratch. With an argument similar to counting derangements, it can be shown that the expected number of iterations until we find a correct answer is constant.
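One way to implement the deterministic construction is the sketch below (names are ours, not the editorial's): keep one incoming wish per distinct target, match the leftover givers to the unused targets positionally, and whenever that would create a self-loop, apply the fix described above of taking the edge $v \to a_v$ back and redirecting its previous owner:

```cpp
#include <bits/stdc++.h>
using namespace std;

// a[1..n] holds the wishes (a[i] != i); the returned b[1..n] is a valid
// assignment fulfilling one wish per distinct value among the a[i].
vector<int> secretSanta(int n, const vector<int>& a) {
    vector<int> b(n + 1, 0), owner(n + 1, 0);    // owner[t] = who currently gifts to t
    for (int i = 1; i <= n; i++)                 // keep one incoming edge per target
        if (!owner[a[i]]) { b[i] = a[i]; owner[a[i]] = i; }
    vector<int> givers, targets;                 // unmatched givers / unused targets
    for (int i = 1; i <= n; i++) if (!b[i]) givers.push_back(i);
    for (int t = 1; t <= n; t++) if (!owner[t]) targets.push_back(t);
    for (size_t i = 0; i < givers.size(); i++) { // the two lists have equal sizes
        int g = givers[i], t = targets[i];
        if (g != t) { b[g] = t; owner[t] = g; }
        else {                                   // self-loop: return edge g -> a[g],
            int w = owner[a[g]];                 // evict its current owner w, and
            b[g] = a[g]; owner[a[g]] = g;        // close the cycle with w -> g;
            b[w] = g;    owner[g]    = w;        // the fulfilled-wish count is unchanged
        }
    }
    return b;
}
```

The self-loop fix trades one fulfilled wish (the evicted owner's, if it was one) for another (the stuck giver's), so the total stays at $m$, matching the upper bound.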
[ "constructive algorithms", "flows", "graphs", "greedy", "math" ]
1,600
null
1530
E
Minimax
Prefix function of string $t = t_1 t_2 \ldots t_n$ and position $i$ in it is defined as the length $k$ of the longest proper (not equal to the whole substring) prefix of substring $t_1 t_2 \ldots t_i$ which is also a suffix of the same substring. For example, for string $t = $ abacaba the values of the prefix function in positions $1, 2, \ldots, 7$ are equal to $[0, 0, 1, 0, 1, 2, 3]$. Let $f(t)$ be equal to the maximum value of the prefix function of string $t$ over all its positions. For example, $f($abacaba$) = 3$. You are given a string $s$. Reorder its characters arbitrarily to get a string $t$ (the number of occurrences of any character in strings $s$ and $t$ must be equal). The value of $f(t)$ must be minimized. Out of all options to minimize $f(t)$, choose the one where string $t$ is the lexicographically smallest.
This problem required careful case analysis. First of all, if all characters of $s$ are the same, there is nothing to reorder: $t = s$, and $f(t) = |t| - 1$. Second, if the first character of $t$ appears somewhere else in the string, $f(t) \ge 1$. Otherwise, $f(t) = 0$. Thus, if some character has only one occurrence in $s$, choose the smallest such character and put it at the front of $t$, followed by all the remaining characters of $s$ in alphabetical order. That's the lexicographically smallest way to obtain $f(t) = 0$. Otherwise, if all characters have at least two occurrences in $s$, we can always reach $f(t) = 1$. The easiest way to construct such $t$ is to choose any character, put one of its occurrences at the front of $t$, put the remaining occurrences at the back of $t$, and put the rest of the characters in-between ordered arbitrarily. However, we have to minimize $t$ lexicographically. First, we have to put the smallest character of $s$ at the front of $t$, say, a. Then, let's try to follow it with the same character. In this case, $t$ starts with aa, and we can't have another occurrence of aa anywhere in $t$, as we aim at $f(t) = 1$, which means that at most half of the letters (approximately) can be equal to a. The exact condition of whether we can start with aa is: the number of occurrences of a in $s$ must not exceed $\frac{|s|}{2} + 1$. If this is the case, the lexicographically smallest $t$ we can form will look like aabababacacadddeeefffff. If $t$ starts with a, but can not start with aa, let's find the second smallest character in $s$, say, b, and start $t$ with ab. Now we are not allowed to have ab anywhere else in $t$. There are two (final) cases: if $s$ doesn't have any other characters except a and b, we have to put all the remaining b's in front of all a's. The smallest $t$ will look like abbbbbbbaaaaaaaaaaaaa. otherwise, find the third smallest character in $s$, say, c. 
We can now afford putting all a's after the ab prefix if we follow these a's with c. The smallest $t$ will look like abaaaaaaaaaaacbbbcddd. It's possible to consider fewer cases if you implement a function that checks whether a prefix of $t$ can be finished with the remaining characters of $s$ to obtain $f(t) = 1$. Then $t$ can be formed character-by-character from left to right. This approach still requires care, though.
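The case analysis can be implemented purely on character counts. The following is a hedged sketch (function names are ours); `maxPrefixFunc` is the standard prefix-function computation, included so the construction can be sanity-checked against the claimed values of $f(t)$:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Standard prefix function; returns f(t), the maximum over all positions.
int maxPrefixFunc(const string& t) {
    int n = t.size(), best = 0;
    vector<int> pi(n, 0);
    for (int i = 1; i < n; i++) {
        int k = pi[i - 1];
        while (k > 0 && t[i] != t[k]) k = pi[k - 1];
        if (t[i] == t[k]) k++;
        pi[i] = k;
        best = max(best, k);
    }
    return best;
}

// Sketch of the editorial's case analysis, driven by character counts.
string minimaxReorder(const string& s) {
    int n = s.size();
    array<int, 26> cnt{};
    for (char ch : s) cnt[ch - 'a']++;
    vector<int> present;
    for (int c = 0; c < 26; c++) if (cnt[c]) present.push_back(c);
    if (present.size() == 1) return s;                  // all equal: f = n - 1
    for (int c : present)
        if (cnt[c] == 1) {                              // a unique char exists: it goes
            string t(1, char('a' + c));                 // first, rest sorted => f = 0
            auto rest = cnt; rest[c] = 0;
            for (int d = 0; d < 26; d++) t += string(rest[d], char('a' + d));
            return t;
        }
    int a = present[0];
    if (2 * cnt[a] <= n + 2) {                          // "aabababa..." case
        string t = "aa";
        auto rest = cnt; rest[a] -= 2;
        int d = 0;
        while (rest[a] > 0) {                           // separate the extra a's
            while (d == a || rest[d] == 0) d++;         // with the smallest others
            t += char('a' + d); rest[d]--;
            t += char('a' + a); rest[a]--;
        }
        for (int e = 0; e < 26; e++) if (e != a) t += string(rest[e], char('a' + e));
        return t;
    }
    int b = present[1];
    if (present.size() == 2)                            // "abbb...aaa" case
        return string(1, char('a' + a)) + string(cnt[b], char('a' + b))
             + string(cnt[a] - 1, char('a' + a));
    int c = present[2];                                 // "ab aaa... c rest" case
    string t = {char('a' + a), char('a' + b)};
    t += string(cnt[a] - 1, char('a' + a));
    t += char('a' + c);
    auto rest = cnt; rest[a] = 0; rest[b]--; rest[c]--;
    for (int e = 0; e < 26; e++) t += string(rest[e], char('a' + e));
    return t;
}
```

In the "aa" case, note that $f(t) \ge 2$ would require the length-$2$ prefix aa to reappear, which the interleaving prevents; the other cases similarly keep the length-$2$ prefix ab from recurring.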
[ "constructive algorithms", "greedy", "strings" ]
2,100
null
1530
F
Bingo
Getting ready for VK Fest 2021, you prepared a table with $n$ rows and $n$ columns, and filled each cell of this table with some event related with the festival that could either happen or not: for example, whether you will win a prize on the festival, or whether it will rain. Forecasting algorithms used in VK have already estimated the probability for each event to happen. Event in row $i$ and column $j$ will happen with probability $a_{i, j} \cdot 10^{-4}$. All of the events are mutually independent. Let's call the table winning if there exists a line such that all $n$ events on it happen. The line could be any horizontal line (cells $(i, 1), (i, 2), \ldots, (i, n)$ for some $i$), any vertical line (cells $(1, j), (2, j), \ldots, (n, j)$ for some $j$), the main diagonal (cells $(1, 1), (2, 2), \ldots, (n, n)$), or the antidiagonal (cells $(1, n), (2, n - 1), \ldots, (n, 1)$). Find the probability of your table to be winning, and output it modulo $31\,607$ (see Output section).
Let $l_1, l_2, \ldots, l_{2n+2}$ denote the $2n+2$ possible lines that can be formed. Let $L_i$ denote the event that line $l_i$ is formed, and $\overline{L_i}$ denote the event that line $l_i$ is not formed (i.e., $P(L_i) + P(\overline{L_i}) = 1$). Let's find the probability that our table is not winning. It is equal to $P(\overline{L_1} \cap \overline{L_2} \cap \ldots \cap \overline{L_{2n+2}})$. Note that the following two statements are true: $P(\overline{L_1} \cap \overline{L_2} \cap \ldots \cap \overline{L_{2n+2}}) + P(L_1 \cap \overline{L_2} \cap \ldots \cap \overline{L_{2n+2}}) = P(\overline{L_2} \cap \ldots \cap \overline{L_{2n+2}})$; $P(L_1 \cap \overline{L_2} \cap \ldots \cap \overline{L_{2n+2}}) = P(\overline{L_2} \cap \ldots \cap \overline{L_{2n+2}} | L_1) \cdot P(L_1)$. The first one follows from the law of total probability, and the second one follows from the definition of conditional probability. These two statements combined allow us to use the following formula: $P(\overline{L_1} \cap \overline{L_2} \cap \ldots \cap \overline{L_{2n+2}}) = P(\overline{L_2} \cap \ldots \cap \overline{L_{2n+2}}) - P(\overline{L_2} \cap \ldots \cap \overline{L_{2n+2}} | L_1) \cdot P(L_1)$. We can apply this formula recursively. Specifically, we can make a function $f(i, S)$, where $S = \{s_1, s_2, \ldots, s_k\}$ is a subset of $\{1, 2, \ldots, i-1\}$, which calculates $P(\overline{L_i} \cap \overline{L_{i+1}} \cap \ldots \cap \overline{L_{2n+2}} | L_{s_1} \cap \ldots \cap L_{s_k})$. For $i = 2n+3$, $f(i, S) = 1$, and for $i \le 2n+2$ we can generalize the formula above as follows: $f(i, S) = f(i + 1, S) - f(i + 1, S \cup \{i\}) \cdot P(L_i | L_{s_1} \cap \ldots \cap L_{s_k})$. Here, $P(L_i | L_{s_1} \cap \ldots \cap L_{s_k})$ is the probability that line $l_i$ is formed given that lines $l_{s_1}, \ldots, l_{s_k}$ are formed. This is equal to the product of probabilities of all cells belonging to $l_i$ which do not belong to any of $l_{s_1}, \ldots, l_{s_k}$. 
The answer to the problem, i.e. the probability that our table is winning, is $1 - f(1, \{\})$. This allows us to implement an $O(2^{2n} \cdot n)$ solution, which is too slow. In fact, this solution is equivalent to applying the inclusion-exclusion principle. To optimize this solution, note that once it becomes easy to calculate $f(i, S)$, we don't have to make any more recursive calls. Why would it become easy to calculate $f(i, S)$ though? Let's order the lines in such a way that $l_{n+3}, l_{n+4}, \ldots, l_{2n+2}$ are the horizontal lines of the table. Consider a call of the form $f(n + 3, S)$. This call is basically asking: "what is the probability that none of lines $l_{n+3}, l_{n+4}, \ldots, l_{2n+2}$ are formed, given that lines $l_{s_1}, \ldots, l_{s_k}$ are formed?". Note that the horizontal lines are independent, and we can actually answer this question in $O(n^2)$. Specifically, for any horizontal line, the probability that it is not formed is $1$ minus the product of probabilities of all its cells not belonging to any of $l_{s_1}, \ldots, l_{s_k}$. The overall value of $f(n + 3, S)$ is the product of probabilities for individual horizontal lines. This way, we have built an $O(2^n \cdot n^2)$ solution. This might be fast enough depending on your implementation, but there are at least two ways to optimize it to $O(2^n \cdot n)$: The first way is to maintain the products of probabilities of untouched cells for horizontal lines on the fly. For simplicity, assume that $l_1$ and $l_2$ are the diagonal lines. For each $i = 3, 4, \ldots, n+2$, after we process (vertical) line $l_i$, we can update the products for all horizontal lines with the cells of $l_i$ (in $O(n)$), make a recursive call, and roll back the updates (in $O(n)$ again). Once we get to $i = n + 3$, instead of going through every cell of the table in $O(n^2)$, we can just multiply $n$ horizontal line products in $O(n)$.
The second way is to define $g(i, mask)$ ($1 \le i \le n$; $mask$ is a bitmask of size $n$) to be the product of $a_{i, j}$ over all $j$ belonging to $mask$. All values of $g$ can be calculated in $O(2^n \cdot n)$ using dynamic programming: $g(i, 0) = 1$, and $g(i, mask) = g(i, mask \oplus 2^j) \cdot a_{i, j}$, where $j$ is any bit set in $mask$. When we arrive at a $f(n + 3, S)$ call, for each of the $n$ horizontal lines, instead of going through its cells, we can construct the mask of cells the values in which we want to multiply, and use the corresponding value of $g$ in $O(1)$.
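The $g(i, mask)$ table from the second optimization can be filled exactly as described, peeling one set bit per mask. A sketch with generic modular values (the function name is ours; in the actual problem the cell values would be the probabilities $a_{i, j} \cdot 10^{-4}$ reduced modulo $31607$):

```cpp
#include <bits/stdc++.h>
using namespace std;

// g[i][mask] = product of a[i][j] over all bits j set in mask, modulo mod.
// Peeling one set bit per mask fills the whole table in O(2^n * n).
vector<vector<long long>> buildRowProducts(const vector<vector<long long>>& a,
                                           long long mod) {
    int n = a.size();
    vector<vector<long long>> g(n, vector<long long>(1 << n, 1));
    for (int i = 0; i < n; i++)
        for (int mask = 1; mask < (1 << n); mask++) {
            int j = __builtin_ctz(mask);                 // any set bit of mask works
            g[i][mask] = g[i][mask ^ (1 << j)] * a[i][j] % mod;
        }
    return g;
}
```

With this table, a horizontal line's "untouched cells" product during an $f(n + 3, S)$ call is a single $O(1)$ lookup, as the editorial notes.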
[ "bitmasks", "combinatorics", "dp", "math", "probabilities" ]
2,600
null
1530
G
What a Reversal
You have two strings $a$ and $b$ of equal length $n$ consisting of characters 0 and 1, and an integer $k$. You need to make strings $a$ and $b$ equal. In one step, you can choose any substring of $a$ containing exactly $k$ characters 1 (and arbitrary number of characters 0) and reverse it. Formally, if $a = a_1 a_2 \ldots a_n$, you can choose any integers $l$ and $r$ ($1 \le l \le r \le n$) such that there are exactly $k$ ones among characters $a_l, a_{l+1}, \ldots, a_r$, and set $a$ to $a_1 a_2 \ldots a_{l-1} a_r a_{r-1} \ldots a_l a_{r+1} a_{r+2} \ldots a_n$. Find a way to make $a$ equal to $b$ using at most $4n$ reversals of the above kind, or determine that such a way doesn't exist. The number of reversals doesn't have to be minimized.
First of all, the number of 1's in $a$ and $b$ must match. Let $c$ be the number of 1's in $a$. If $k = 0$ or $k > c$, we can not do a meaningful reversal, so we just check if $a = b$. If $k = c$, we can not change the contents of $a$ between the leftmost and the rightmost 1's, we can only reverse it and shift with regard to the outside 0's. It's easy to check whether $a$ can become equal to $b$ then, and at most $2$ steps are required if so. Otherwise, suppose $0 < k < c$. Since our operations are reversible, we'll apply a usual trick: we'll transform both strings into some canonical representations in at most $2n$ steps each. If $a$'s and $b$'s canonical representations don't match, we'll prove that there is no solution. Otherwise, we can transform $a$ into $b$ in at most $4n$ steps: transform $a$ into the canonical representation, and then perform the steps transforming $b$ into the canonical representation, but in reverse order. What representation is canonical is for us to determine later. We'll focus on one string now. Let's write down the lengths of blocks of consecutive 0's in the string. Formally, let $p_0, p_1, p_2, \ldots, p_c$ be the sequence consisting of the number of 0's: before the first 1, between the first and the second 1's, $\ldots$, after the last 1. Note that some $p_i$ might be equal to $0$. What does a single reversal do to the sequence of blocks? $k+1$ consecutive blocks get reversed, and additionally, the 0's in the leftmost and the rightmost block of these $k+1$ blocks can be redistributed arbitrarily. It turns out that we don't need more than $k+2$ blocks to achieve our goal (we'll see that later), so let's start with the following. Let $p_i$ be the rightmost block such that $p_i > 0$. If $i \le k+1$, stop. Otherwise, perform two reversals of blocks $p_{i-k}, p_{i-k+1}, \ldots, p_i$, moving all the 0's from $p_i$ to $p_{i-k}$ in the process, effectively making $p_i$ zero, and repeat. 
After this process, only $p_0, p_1, \ldots, p_{k+1}$ can still be non-zero. Now, consider what happens if we reverse $p_0, p_1, \ldots, p_k$, and then reverse $p_1, p_2, \ldots, p_{k+1}$ without changing any values: sequence $p_0, p_1, \ldots, p_{k+1}$ will change to $p_k, p_{k+1}, p_0, p_1, \ldots, p_{k-1}$, that is, will cyclically shift by two positions to the right. Suppose we repeat this pair of reversals $k+1$ times. The $k$-th ($0$-indexed) position initially contains $p_k$, after the first pair it will contain $p_{k-2}$, then $p_{k-4}$ and so on. Assume that $k$ is odd. Then every block will reach the $k$-th position at some point! Now, during the first reversal in the pair, let's move all 0's from the $k$-th block to the $0$-th block, and keep the second reversal in the pair as is, without changing the values. Then, after $k+1$ pairs of reversals, all the 0's will move to the $0$-th block, that is, all $p_1, p_2, \ldots, p_{k+1}$ will become zero. We can safely call this representation canonical: all of $p_1, p_2, \ldots, p_c$ are zeroed out, and $p_0$ is equal to the number of 0's in the original string. It follows that if the number of 1's in $a$ and $b$ is equal to $c$, $k < c$ and $k$ is odd, a solution always exists. Let's move on to the case when $k$ is even. In this case, since a pair of reversals makes a cyclic shift by $2$, only even-numbered blocks will appear at the $k$-th position. However, notice that this is not a coincidence: when $k$ is even, whatever reversal we perform, even-numbered blocks only interact with even-numbered blocks, and odd-numbered blocks interact with odd-numbered blocks. In particular, the sum of $p_0, p_2, p_4, \ldots$ can never change, and the sum of $p_1, p_3, p_5, \ldots$ can never change either.
Thus, when $k$ is even, let's call the following representation canonical: $p_0$ contains the sum of all initial values of $p_0, p_2, p_4, \ldots$; $p_{k+1}$ contains the sum of all initial values of $p_1, p_3, p_5, \ldots$; and all the other $p_i$'s are zeros. We can reach this representation in a similar way to the odd $k$ case, using pairs of reversals. Note that similar to the $k$-th position which contains blocks $p_k, p_{k-2}, p_{k-4}, \ldots$ during the process, the $1$-st position contains blocks $p_1, p_{k+1}, p_{k-1}, p_{k-3}, \ldots$ during the process. During the first reversal in each pair, as in the previous case, we'll move all the 0's from the $k$-th block to the $0$-th block. During the second reversal in each pair, we'll move all the 0's from the $1$-st block to the $(k+1)$-th block. After $k+1$ pairs of reversals (actually just $\frac{k}{2}$ pairs are enough), we'll zero out all the values except for $p_0$ and $p_{k+1}$. All in all, we have found a way to reach the canonical representation in $2c \le 2n$ steps, as desired.
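The block representation the whole argument works with can be extracted with a small helper, and for even $k$ the two parity sums it exposes are exactly the invariants mentioned above (a sketch; the function names are ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// p[0..c]: number of 0's before the first 1, between consecutive 1's,
// and after the last 1, where c is the number of 1's in s.
vector<int> zeroBlocks(const string& s) {
    vector<int> p(1, 0);
    for (char ch : s) {
        if (ch == '1') p.push_back(0);   // a 1 starts a new (empty) block
        else p.back()++;                 // a 0 extends the current block
    }
    return p;
}

// For even k, reversals never mix blocks of different parity, so these
// two sums are invariants that both strings must share.
pair<int, int> paritySums(const vector<int>& p) {
    int even = 0, odd = 0;
    for (size_t i = 0; i < p.size(); i++) (i % 2 ? odd : even) += p[i];
    return {even, odd};
}
```

Comparing `paritySums(zeroBlocks(a))` with `paritySums(zeroBlocks(b))` is the necessary feasibility check in the even-$k$ case.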
[ "constructive algorithms" ]
3,300
null
1530
H
Turing's Award
Alan Turing is standing on a tape divided into cells that is infinite in both directions. Cells are numbered with consecutive integers from left to right. Alan is initially standing in cell $0$. Every cell $x$ has cell $x - 1$ on the left and cell $x + 1$ on the right. Each cell can either contain an integer or be empty. Initially all cells are empty. Alan is given a permutation $a_1, a_2, \ldots, a_n$ of integers from $1$ to $n$ that was chosen \textbf{uniformly at random} among all permutations of length $n$. At time $1$, integer $a_1$ is written down into cell $0$ where Alan is located. At each time $i$ from $2$ to $n$ inclusive, the following happens. First, Alan decides whether to stay in the same cell he's currently in, move to the neighboring cell on the left, or move to the neighboring cell on the right. After that, integer $a_i$ is written down into the cell where Alan is located. If that cell already contained some integer, the old integer is overwritten and irrelevant from that moment on. Once $a_n$ is written down into some cell at time $n$, sequence $b$ of all integers contained in the cells from left to right is formed. Empty cells are ignored. Turing's award is equal to the length of the longest increasing subsequence of sequence $b$. Help Alan and determine the largest possible value of his award if he acts optimally.
Let's look at Alan's moves in reverse. The process can be reformulated as follows. Initially, $a_n$ is written down into the cell where Alan is located. Then, for each $i$ from $n-1$ down to $1$, first Alan decides whether to stay or move to a neighboring cell, and then $a_i$ is written down into Alan's cell only if the cell is empty. This way, it's easier to reason about the process since numbers never get overwritten. The state at any moment can be described by the current sequence $b$ and Alan's position inside this sequence. The only way we can change $b$ is by appending a number to either end. Suppose we have decided which elements of the permutation appear in the final sequence $b$, and for each element, whether it gets appended to the front or to the back. Let's call these elements useful. As a special case, $a_n$ is always useful. Two things to notice here: We don't care about intermediate Alan's positions, as long as he has enough time to move between consecutive useful elements. For example, suppose $a_i$ gets appended to the front of $b$ (the other case is symmetrical), and there are exactly $k$ useful elements among $a_i, a_{i+1}, \ldots, a_n$. Let $a_j$ be the "next" useful element, that is, $j < i$ and there are no useful elements between $a_j$ and $a_i$. If $a_j$ should be appended to the front of $b$, Alan can just stay in the same cell for elements $a_{i-1}, a_{i-2}, \ldots, a_{j+1}$, and then move to the left to append $a_j$. Here, Alan is always in time. If $a_j$ should be appended to the back of $b$, Alan has to move to the other end of $b$, and he needs to make $k$ steps to the right. This is possible if and only if $i - j \ge k$.
Consider a useful element that neither belongs to the LIS (longest increasing subsequence) of $b$, nor is $a_n$. We can always change this element's status to "not useful", and it won't harm us in any way. Thus, we can assume that all of the useful elements, except for maybe $a_n$, belong to the LIS. Let's casework on whether $a_n$ belongs to the LIS. From now on, we will assume that $a_n$ does belong to the LIS. However, the other case differs just slightly. Armed with the above observations, we can solve the problem using dynamic programming: $f_L(k, i)$: suppose $a_i$ is a useful element that gets appended to the front, and there are $k$ useful elements among $a_i, a_{i+1}, \ldots, a_n$. Then $f_L(k, i)$ is the smallest possible value of the element at the back of the LIS. $f_R(k, i)$: suppose $a_i$ is a useful element that gets appended to the back, and there are $k$ useful elements among $a_i, a_{i+1}, \ldots, a_n$. Then $f_R(k, i)$ is the largest possible value of the element at the front of the LIS. Note that some DP states might be unreachable: in this case, we set $f_L(\ldots)$ to $+\infty$ and $f_R(\ldots)$ to $-\infty$. Consider transitions from $f_L(k, i)$ (transitions from $f_R(k, i)$ are similar): For any $j < i$ such that $a_j < a_i$, we can set $f_L(k + 1, j)$ to $f_L(k, i)$. For any $j \le i - k$ such that $a_j > f_L(k, i)$, we can set $f_R(k + 1, j)$ to $a_i$. Thus, we can transition from $f(k, \ldots)$ to $f(k + 1, \ldots)$ in $O(n^2)$ time. We can optimize it to $O(n \log n)$ if we sweep through the permutation in reverse order ($i = n, n - 1, \ldots, 1$) and maintain a segment tree for minimum/maximum indexed with element values. The answer to the problem is the largest $k$ such that $f_L(k, i)$ or $f_R(k, i)$ is a reachable state for some $i$.
The complexity of our solution is thus $O(kn \log n)$. Finally, note that the part of $b$ to the left of $a_n$ forms an increasing subsequence of $a$, and the part of $b$ to the right of $a_n$ forms a decreasing subsequence of $a$. It is well-known that the expected length of the longest increasing/decreasing subsequence in a random permutation is $O(\sqrt{n})$. Therefore, our solution works in $O(n^{1.5} \log n)$ on average.
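For the value-indexed structure in the $O(n \log n)$ sweep, the editorial suggests a segment tree; a Fenwick tree specialized to prefix maxima, sketched below with our own naming, is a lighter alternative for the prefix-shaped queries (the symmetric suffix queries can use a second tree over reversed values):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Fenwick tree over element values supporting point update and
// prefix maximum query, both in O(log n).
struct MaxBIT {
    int n;
    vector<long long> t;
    MaxBIT(int n) : n(n), t(n + 1, LLONG_MIN) {}
    void update(int i, long long v) {            // raise position i to at least v
        for (; i <= n; i += i & -i) t[i] = max(t[i], v);
    }
    long long query(int i) {                     // max over values 1..i
        long long r = LLONG_MIN;
        for (; i > 0; i -= i & -i) r = max(r, t[i]);
        return r;
    }
};
```

A transition like "for any $j$ with $a_j < a_i$" then becomes a single `query(a_i - 1)` during the reverse sweep.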
[ "data structures", "dp" ]
3,400
null
1534
A
Colour the Flag
Today we will be playing a red and white colouring game (no, this is not the Russian Civil War; these are just the colours of the Canadian flag). You are given an $n \times m$ grid of "R", "W", and "." characters. "R" is red, "W" is white and "." is blank. The neighbours of a cell are those that share an edge with it (those that only share a corner do not count). Your job is to colour the blank cells red or white so that every red cell only has white neighbours (and no red ones) and every white cell only has red neighbours (and no white ones). You are not allowed to recolour already coloured cells.
Observe that there are only two valid grids, one where the top left cell is "R" and one where it's "W". We can just test those two grids and see if they conform with the requirements. Time complexity: $\mathcal{O}(nm)$
[ "brute force", "implementation" ]
800
#include <bits/stdc++.h>
#define all(x) (x).begin(), (x).end()
#ifdef LOCAL
template<typename T> void pr(T a){std::cerr<<a<<std::endl;}
template<typename T, typename... Args> void pr(T a, Args... args){std::cerr<<a<<' ',pr(args...);}
#else
template<typename... Args> void pr(Args... args){}
#endif
using namespace std;

const int MM = 55;
int t, n, m;
string s[MM];
char cc[] = {'R', 'W'};

int main(){
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    cin.exceptions(cin.failbit);
    cin >> t;
    while(t--){
        cin >> n >> m;
        // r[p] / w[p]: a red / white cell exists at cell parity p = (i + j) & 1
        vector<int> r(2), w(2);
        for(int i = 0; i < n; i++){
            cin >> s[i];
            for(int j = 0; j < m; j++){
                if(s[i][j] == 'R') r[(i + j) & 1] = 1;
                else if(s[i][j] == 'W') w[(i + j) & 1] = 1;
            }
        }
        int v = r[1] or w[0];    // the "top-left red" grid is contradicted
        int vv = r[0] or w[1];   // the "top-left white" grid is contradicted
        if(v and vv){
            cout << "NO\n";
            continue;
        }
        cout << "YES\n";
        for(int i = 0; i < n; i++){
            for(int j = 0; j < m; j++){
                char c = cc[(i + j + v) & 1];
                cout << c;
                if(s[i][j] != '.') assert(s[i][j] == c);
            }
            cout << '\n';
        }
    }
}
1534
B
Histogram Ugliness
Little Dormi received a histogram with $n$ bars of height $a_1, a_2, \ldots, a_n$ for Christmas. However, the more he played with his new histogram, the more he realized its imperfections, so today he wanted to modify it to his liking. To modify the histogram, Little Dormi is able to perform the following operation an arbitrary number of times: - Select an index $i$ ($1 \le i \le n$) where $a_i>0$, and assign $a_i := a_i-1$. Little Dormi defines the ugliness score of his histogram (after performing some number of operations) as the sum of the vertical length of its outline and the number of operations he performed on it. And to make the histogram as perfect as possible, he would like to minimize the ugliness score after modifying it with some number of operations. However, as his histogram is very large, Little Dormi is having trouble minimizing the ugliness score, so as Little Dormi's older brother, help him find the minimal ugliness. Consider the following example where the histogram has $4$ columns of heights $4,8,9,6$: The blue region represents the histogram, and the red lines represent the vertical portion of the outline. Currently, the vertical length of the outline is $4+4+1+3+6 = 18$, so if Little Dormi does not modify the histogram at all, the ugliness would be $18$. However, Little Dormi can apply the operation once on column $2$ and twice on column $3$, resulting in a histogram with heights $4,7,7,6$: Now, as the total vertical length of the outline (red lines) is $4+3+1+6=14$, the ugliness is $14+3=17$. It can be proven that this is optimal.
It's only optimal to decrease a column $i$ if $a_i > a_{i+1}$ and $a_i > a_{i-1}$, as that would reduce the vertical length of the outline by $2$ while only costing $1$ operation. Additionally, observe that decreasing a column will never affect whether it is optimal to decrease any other column, so we can treat the operations as independent. Thus, our algorithm is as follows: while it is optimal to decrease a column, do it. Once this is no longer the case, calculate the final ugliness of the histogram. This can be sped up to $\mathcal{O}(n)$ with some arithmetic and the observation above.
[ "greedy", "implementation", "math" ]
1,100
#include "bits/stdc++.h" using namespace std; using ll = long long; using pii = pair<int,int>; using pll = pair<ll,ll>; template<typename T> int sz(const T &a){return int(a.size());} const int MN=4e5+2; ll arr[MN]; int main(){ cin.tie(NULL); ios_base::sync_with_stdio(false); int t; cin>>t; while(t--){ int n; cin>>n; arr[n+1]=0; for(int i=1;i<=n;i++)cin>>arr[i]; ll ans=0; for(int i=1;i<=n;i++){ ll should=min(arr[i],max(arr[i-1],arr[i+1])); ans+=arr[i]-should+abs(should-arr[i-1]); arr[i]=should; } printf("%lli\n",ans+arr[n]); } return 0; }
1534
C
Little Alawn's Puzzle
When he's not training for IOI, Little Alawn enjoys playing with puzzles of various types to stimulate his brain. Today, he's playing with a puzzle that consists of a $2 \times n$ grid where each row is a permutation of the numbers $1,2,3,\ldots,n$. The goal of Little Alawn's puzzle is to make sure no numbers on the same column or row are the same (we'll call this state of the puzzle as solved), and to achieve this he is able to swap the numbers in any column. However, after solving the puzzle many times, Little Alawn got bored and began wondering about the number of possible solved configurations of the puzzle he could achieve from an initial \textbf{solved} configuration only by swapping numbers in a column. Unfortunately, Little Alawn got stuck while trying to solve this harder problem, so he was wondering if you could help him with it. Find the answer modulo $10^9+7$.
Define the "direction" of a column as the orientation of its numbers. Swapping the numbers in a column will flip its direction. Let's create a simple, undirected graph where the nodes are the $n$ columns on the puzzle and we draw one edge connecting it to the $2$ other columns that share a number with it. Notice that the degree of every node in this graph is $2$, so the graph must be made of some number of disjoint simple cycles. Now consider any component in the graph. If we fix the direction of any of the columns in the component, that will fix the direction of the columns adjacent to it, and so on until the direction of every column in the component has been fixed (also note that as the component is a simple cycle, we will never get a contradiction). As there are $2$ possible directions for any column, there are thus $2$ ways to direct the columns in this component. Lastly, notice that the columns in each component are independent, so the answer is simply $2^k$, where $k$ is the number of components in our graph. Time Complexity: $\mathcal{O}(n)$ with DFS
[ "combinatorics", "dp", "dsu", "graphs", "math" ]
1,300
#include "bits/stdc++.h" using namespace std; using ll = long long; using pii = pair<int,int>; using pll = pair<ll,ll>; template<typename T> int sz(const T &a){return int(a.size());} const int MN=4e5+1; const ll mod=1e9+7; bool gone[MN]; vector<int> adj[MN]; int arr[MN][2]; void dfs(int loc){ gone[loc]=true; for(auto x:adj[loc])if(!gone[x])dfs(x); } int main(){ cin.tie(NULL); ios_base::sync_with_stdio(false); int t; cin>>t; while(t--) { int n; cin >> n; for (int i = 1; i <= n; i++)cin >> arr[i][0],adj[i]=vector<int>(),gone[i]=false; for (int i = 1; i <= n; i++)cin >> arr[i][1]; for (int i = 1; i <= n; i++) { adj[arr[i][0]].push_back(arr[i][1]), adj[arr[i][1]].push_back(arr[i][0]); } ll ans = 1; for (int i = 1; i <= n; i++) { if (!gone[i]) { ans = ans * ll(2) % mod; dfs(i); } } printf("%lli\n", ans); } return 0; }