| contest_id | index | title | statement | tutorial | tags | rating | code |
|---|---|---|---|---|---|---|---|
1787
|
C
|
Remove the Bracket
|
RSJ has a sequence $a$ of $n$ integers $a_1,a_2, \ldots, a_n$ and an integer $s$. For each of $a_2,a_3, \ldots, a_{n-1}$, he chose a pair of \textbf{non-negative integers} $x_i$ and $y_i$ such that $x_i+y_i=a_i$ and $(x_i-s) \cdot (y_i-s) \geq 0$.
Now he is interested in the value $$F = a_1 \cdot x_2+y_2 \cdot x_3+y_3 \cdot x_4 + \ldots + y_{n - 2} \cdot x_{n-1}+y_{n-1} \cdot a_n.$$
Please help him find the minimum possible value $F$ he can get by choosing $x_i$ and $y_i$ optimally. It can be shown that there is always at least one valid way to choose them.
|
$(x_i-s)(y_i-s)\geq 0$ means either $\min(x_i,y_i)\geq s$ or $\max(x_i,y_i) \leq s$, so the feasible values of $x_i$ form a contiguous range. Consider a single term $a_i = x_i + y_i$: removing the bracket turns the sum into $\ldots+ y_{i-1}\cdot x_i+y_i\cdot x_{i+1}+\ldots$. When $y_{i-1} = x_{i+1}$, the result does not depend on how $a_i$ is split, so assume $y_{i-1} < x_{i+1}$. If $x_i$ has not hit its maximum, increasing $x_i$ by $1$ and decreasing $y_i$ by $1$ increases the result by $y_{i-1}$ and decreases it by $x_{i+1}$, i.e. decreases it by $x_{i+1}-y_{i-1} > 0$, which is always better. So in an optimal solution $x_i$ hits either its maximum or its minimum. Thus each number has only two candidate splits, and we can choose among them with a simple DP.
|
[
"dp",
"greedy",
"math"
] | 1,600
|
#include <bits/stdc++.h>
using namespace std;
const int N = 200005;
long long f[N][2],x[N],y[N];
void get() {
int i,n,s,j;
cin>>n>>s;
for(i=1; i<=n; i++) {
cin>>j;
if(i==1||i==n) x[i]=y[i]=j;
else if(j<=s) x[i]=0,y[i]=j;
else x[i]=s,y[i]=j-s;
}
f[1][0]=f[1][1]=0;
for(i=2; i<=n; i++) {
f[i][0]=min(f[i-1][0]+y[i-1]*x[i],f[i-1][1]+x[i-1]*x[i]);
f[i][1]=min(f[i-1][0]+y[i-1]*y[i],f[i-1][1]+x[i-1]*y[i]);
}
cout<<f[n][0]<<endl;
}
int main() {
ios::sync_with_stdio(0);
cin.tie(0);
int T; cin>>T;
while(T--) get();
return 0;
}
|
1787
|
D
|
Game on Axis
|
There are $n$ points $1,2,\ldots,n$, each point $i$ has a number $a_i$ on it. You're playing a game on them. Initially, you are at point $1$. When you are at point $i$, take following steps:
- If $1\le i\le n$, go to $i+a_i$,
- Otherwise, the game ends.
Before the game begins, you can choose two integers $x$ and $y$ satisfying $1\le x\le n$, $-n \le y \le n$ and replace $a_x$ with $y$ (set $a_x := y$). Find the number of distinct pairs $(x,y)$ such that the game that you start after making the change ends in a finite number of steps.
Notice that you do not have to satisfy $a_x\not=y$.
|
First, add a directed edge from $i$ to $i+a_i$ for every $i$. The game ends iff there is a path from node $1$ to some $x$ with $x<1$ or $x>n$; merge all such out-of-range $x$ into a single end node, and call the path that starts at node $1$ and is followed until it either enters a loop or reaches the end node the key path. Case 1: the game already ends without any change. Count the complement: the number of invalid pairs. The nodes that can reach the end node form a tree rooted at the end node. Changing an edge that is not on the key path keeps the game finite, so all such pairs are valid. For a node $u$ on the key path, redirecting its outgoing edge is invalid exactly when the new target either cannot reach the end node at all, or lies in $u$'s subtree of this tree (including $u$ itself), since the walk from such a target passes through $u$ again and loops. The number of such predecessors of every node, i.e. its subtree size, is computed with one DFS from the end node. Case 2: the game does not end initially. Then we must change an edge on the key path, and the new target must make the walk terminate: either a coordinate outside $[1,n]$ or any node of the tree rooted at the end node.
|
[
"combinatorics",
"dfs and similar",
"dsu",
"graphs",
"implementation"
] | 1,900
|
#include <bits/stdc++.h>
using namespace std;
const int N = 200005;
int a[N];
int v[N]; //= 1 -> in the tree with the end node
int s[N]; //subtree size
struct E {
int to;
E *nex;
} *h[N];
void add(int u, int v) {
E *cur = new E;
cur->to = v, cur->nex = h[u], h[u] = cur;
}
void dfs(int u) {
s[u] = v[u] = 1;
for (E *cur = h[u]; cur; cur = cur->nex)
dfs(cur->to), s[u] += s[cur->to];
}
void get() {
int i, j, n;
cin >> n;
for (i = 1; i <= n + 1; i++)
s[i] = v[i] = 0, h[i] = 0;
for (i = 1; i <= n; i++) {
cin >> a[i], a[i] = min(i + a[i], n + 1);
if (a[i] <= 0) a[i] = n + 1;
add(a[i], i);
}
dfs(n + 1); //start with the end point, dfs the tree
long long ans = 0;
if (v[1] == 1) {
j = 1; do { ans -= s[j] + (n - s[n + 1] + 1), j = a[j]; }
while (j != n + 1);
ans += 1ll * n * (2 * n + 1);
} else {
j = 1; do { ans += (n + s[n + 1]), v[j] = 2, j = a[j]; }
while (v[j] != 2);
}
cout << ans << endl;
}
int main() {
ios::sync_with_stdio(0);
cin.tie(0);
int T;
cin >> T;
while (T--) get();
return 0;
}
|
1787
|
E
|
The Harmonization of XOR
|
You are given an array of exactly $n$ numbers $[1,2,3,\ldots,n]$ along with integers $k$ and $x$.
Partition the array in exactly $k$ non-empty disjoint subsequences such that the bitwise XOR of all numbers in each subsequence is $x$, and each number is in exactly one subsequence. Notice that there are no constraints on the length of each subsequence.
A sequence $a$ is a subsequence of a sequence $b$ if $a$ can be obtained from $b$ by the deletion of several (possibly, zero or all) elements.
For example, for $n = 15$, $k = 6$, $x = 7$, the following scheme is valid:
- $[6,10,11]$, $6 \oplus 10 \oplus 11 = 7$,
- $[5,12,14]$, $5 \oplus 12 \oplus 14 = 7$,
- $[3,9,13]$, $3 \oplus 9 \oplus 13 = 7$,
- $[1,2,4]$, $1 \oplus 2 \oplus 4 = 7$,
- $[8,15]$, $8 \oplus 15 = 7$,
- $[7]$, $7 = 7$,
where $\oplus$ represents the bitwise XOR operation. The following scheme is invalid, since $8$, $15$ do not appear:
- $[6,10,11]$, $6 \oplus 10 \oplus 11 = 7$,
- $[5,12,14]$, $5 \oplus 12 \oplus 14 = 7$,
- $[3,9,13]$, $3 \oplus 9 \oplus 13 = 7$,
- $[1,2,4]$, $1 \oplus 2 \oplus 4 = 7$,
- $[7]$, $7 = 7$.
The following scheme is invalid, since $3$ appears twice, and $1$, $2$ do not appear:
- $[6,10,11]$, $6 \oplus 10 \oplus 11 = 7$,
- $[5,12,14]$, $5 \oplus 12 \oplus 14 = 7$,
- $[3,9,13]$, $3 \oplus 9 \oplus 13 = 7$,
- $[3,4]$, $3 \oplus 4 = 7$,
- $[8,15]$, $8 \oplus 15 = 7$,
- $[7]$, $7 = 7$.
|
First, we observe that any three subsequences can be merged into one (this changes the count by $2$ and keeps every XOR equal to $x$), so we only need to care about the maximum possible number of subsequences. Form subsequences of the form $[a,a \oplus x]$ as many as possible, leave $[x]$ alone if possible, and let the rest become one subsequence. This is optimal. Proof: Let $B$ be the highest bit of $x$, i.e. the $B$-th bit of $x$ is on. Let $M$ be the count of numbers from $1$ to $n$ whose $B$-th bit is on. The number of subsequences is at most $M$, since each subsequence must contain at least one $B$-th-bit-on number. Conversely, each $B$-th-bit-on number $v$ satisfies $v \oplus x < v$ (the $B$-th bit gets cleared), so the pairing above always yields $M$ subsequences.
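As a sanity check on the counting argument above, here is a minimal C++ sketch (the name `max_parts` is mine, and it assumes $x \ge 1$) that computes the bound $M$: the count of numbers in $[1,n]$ whose $B$-th bit is on, where $B$ is the highest set bit of $x$.

```cpp
// Maximum number of subsequences with XOR x that [1..n] can be split into,
// per the argument above: each part needs at least one B-th-bit-on number,
// and pairing [a, a^x] attains the bound.  Sketch only; the full solution
// still has to build the parts and handle the parity of k vs. M.
long long max_parts(long long n, long long x) {
    int B = 63 - __builtin_clzll(x);   // highest set bit of x (assumes x >= 1)
    long long cnt = 0;
    for (long long v = 1; v <= n; v++)
        if (v >> B & 1) cnt++;
    return cnt;
}
```

For the statement's example $n=15$, $x=7$ this gives $M=8$, consistent with the shown $6$-part scheme (merging three parts into one changes the count by $2$).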
|
[
"bitmasks",
"constructive algorithms",
"greedy",
"math"
] | 2,100
| null |
1787
|
F
|
Inverse Transformation
|
A permutation scientist is studying a self-transforming permutation $a$ consisting of $n$ elements $a_1,a_2,\ldots,a_n$.
A permutation is a sequence of integers from $1$ to $n$ of length $n$ containing each number exactly once. For example, $[1]$, $[4, 3, 5, 1, 2]$ are permutations, while $[1, 1]$, $[4, 3, 1]$ are not.
The permutation transforms day by day. On each day, each element $x$ becomes $a_x$, that is, $a_x$ becomes $a_{a_x}$. Specifically:
- on the first day, the permutation becomes $b$, where $b_x = a_{a_x}$;
- on the second day, the permutation becomes $c$, where $c_x = b_{b_x}$;
- $\ldots$
For example, consider permutation $a = [2,3,1]$. On the first day, it becomes $[3,1,2]$. On the second day, it becomes $[2,3,1]$. You're given the permutation $a'$ on the $k$-th day.
Define $\sigma(x) = a_x$, and define $f(x)$ as the minimal positive integer $m$ such that $\sigma^m(x) = x$, where $\sigma^m(x)$ denotes $\underbrace{\sigma(\sigma(\ldots \sigma}_{m \text{ times}}(x) \ldots))$.
For example, if $a = [2,3,1]$, then $\sigma(1) = 2$, $\sigma^2(1) = \sigma(\sigma(1)) = \sigma(2) = 3$, $\sigma^3(1) = \sigma(\sigma(\sigma(1))) = \sigma(3) = 1$, so $f(1) = 3$. And if $a=[4,2,1,3]$, $\sigma(2) = 2$ so $f(2) = 1$; $\sigma(3) = 1$, $\sigma^2(3) = 4$, $\sigma^3(3) = 3$ so $f(3) = 3$.
Find the initial permutation $a$ such that $\sum\limits^n_{i=1} \dfrac{1}{f(i)}$ is \textbf{minimum possible}.
|
Consider the reverse question first: given the initial permutation $a$, find the permutation $a'$ on the $k$-th day. After a day, element $x$ becomes $\sigma(x) = a_x$, so treat the numbers as nodes and add a directed edge from $x$ to $a_x$. Since $a$ is a permutation, the graph consists of disjoint cycles. After one day, $a_x$ becomes $a_{\sigma(x)} = a_{a_x}$, the second element after $x$ on its cycle; by induction, after $k$ days $x$ maps to the $2^k$-th element after it. If a cycle $c$ has odd length $l$, then $\gcd(2^k, l) = 1$, so the cycle never splits: on the $k$-th day the element $c_x$ maps to $c_{(x+2^k)\bmod l}$. If a cycle $c$ has even length $l$, then after the first day $c_1$ becomes $c_3$, $c_3$ becomes $c_5$, $\ldots$, $c_{l-1}$ becomes $c_1$, and likewise for the even indices: the original cycle splits into two cycles of the same size $\dfrac{l}{2}$. Hence a cycle of length $l = p\cdot 2^q$ with $p$ odd splits into $2^q$ small cycles of length $p$ after exactly $q$ days. (The original post includes images illustrating how a 12-node cycle splits over two days.) Because every cycle has length $l\le n$, simulating $\min(k, \lceil\log_2 n\rceil)$ days already splits all even cycles into odd ones, after which the closed formula above gives the permutation on the $k$-th day. Now back to our problem. Note that $\sum\limits^n_{i=1} \dfrac{1}{f(i)}$ equals the number of cycles (a cycle of length $l$ contributes $l$ terms of $\dfrac{1}{l}$), so we need to construct an original permutation $a$ with the fewest possible cycles. As shown above, a cycle of length $p\cdot 2^q$ splits into $2^q$ cycles of length $p$ after exactly $q$ days, so count the cycles of each length in $a'$. Cycles of odd length $l$ can be combined: specifically, $2^q$ of them merge into one cycle of length $l\cdot 2^q$ in the original permutation $a$.
Note that $q$ must be less than or equal to $k$. Cycles of even length $l$ can also be combined, but since they have not finished splitting by the $k$-th day, we must combine exactly $2^k$ of them. So if the number of these cycles isn't a multiple of $2^k$, just print NO.
|
[
"constructive algorithms",
"implementation",
"math"
] | 2,500
|
#include <bits/stdc++.h>
using namespace std;
const int N = 200005;
int dest[N], visit[N], ans[N], mp[N], ansinv[N], sp;
int pow2(int y, int M) {
long long x = 2, ans = 1;
while (y) {
if (y & 1) ans = ans * x % M;
x = x * x % M, y >>= 1;
}
return ans % M;
}
vector<int> cyc[N];
void get() {
sp = 0;
int n, i, j, m, z, o;
cin >> n >> m;
for (i = 1; i <= n; i++) cyc[i].clear();
for (i = 1; i <= n; i++) visit[i] = 0, cin >> dest[i];
for (i = 1; i <= n; i++) {
if (visit[i]) continue;
j = i, z = 0; do { ++z, visit[j] = 1, j = dest[j]; } while (j != i);
cyc[z].push_back(i);
}
int lim = pow2(min(m, 20), 1e9), s = 0, d, t, num, cp;
for (i = 1; i <= n; i++) {
if (!cyc[i].size()) continue;
int siz = cyc[i].size();
if (i % 2 == 0 && siz % lim) {
cout << "NO" << endl; return;
} else {
cp = -1;
for (j = lim; j > 0; j /= 2) {
while (siz >= j) {
siz -= j, d = s + j * i, t = j;
while (t--) {
++s, ++cp, num = s, o = cyc[i][cp];
do {
ans[num] = o, num += pow2(m, i * j);
if (num > d) num -= j * i;
o = dest[o];
} while (o != cyc[i][cp]);
}
for (int k = sp + 1; k < sp + j * i; k++) mp[k] = k + 1;
mp[sp + j * i] = sp + 1, sp += j * i, s = d;
}
}
}
}
cout << "YES" << endl;
for (i = 1; i <= n; i++) ansinv[ans[i]] = i;
for (i = 1; i <= n; i++) cout << ans[mp[ansinv[i]]] << " \n"[i == n];
}
int main() {
ios::sync_with_stdio(0), cin.tie(0);
int T;
cin >> T;
while (T--) get();
return 0;
}
|
1787
|
G
|
Colorful Tree Again
|
An edge-weighted tree of $n$ nodes is given with each edge colored in some color. Each node of this tree can be blocked or unblocked, all nodes are unblocked initially.
A simple path is a path in a graph that does not have repeating nodes. The length of a path is defined as the sum of weights of all edges on the path.
A path is good when it is a simple path consisting of edges of the same color $c$, all edges of color $c$ are on this path, and every node on the path is unblocked.
You need to operate $2$ kinds of queries:
- block a node,
- unblock a node.
After each query, print the maximum length among all good paths. If there are no good paths, print $0$.
|
On the original tree, all good paths are fixed, and each edge lies on at most one good path. For each color $c$, collect the edges of color $c$ and judge whether they form a simple path by counting the degree of each incident node; if so, mark these edges and record the path's length. When a query changes a node $u$, several paths through $u$ are affected. For example, if node $2$ is blocked in the sample tree, paths $8-1-2-3$, $2-4-9$, $2-5$, and $7-6-2-10$ stop being good. A brute-force solution keeps a priority queue, deletes the lengths of broken paths, and reports the maximum remaining length. To get a better time complexity, treat the two kinds of affected paths separately: 1. at most one path enters $u$ from its ancestor and continues into its subtree, like $8-1-2-3$; 2. all paths whose LCA is $u$, like $2-4-9$, $2-5$, and $7-6-2-10$. This inspires us to maintain paths according to their LCAs: keep a priority queue at every node $u$ for all paths whose LCA is $u$, then use a global priority queue over the per-node answers to maintain the overall answer. UPD: an alternative implementation using a segment tree is linked in the comments.
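The "delete lengths of bad paths, query the maximum" step above needs a heap that supports erasing arbitrary known values; a standard lazy-deletion sketch over two `std::priority_queue`s (not the author's code, names are mine) looks like:

```cpp
#include <queue>

// Multiset-like max-structure with lazy deletion: erase(v) is buffered in a
// second heap and reconciled whenever both tops coincide.  Only values that
// are currently present may be erased.
struct ErasableHeap {
    std::priority_queue<long long> in, out;
    void push(long long v)  { in.push(v); }
    void erase(long long v) { out.push(v); }
    void flush() {
        while (!out.empty() && in.top() == out.top()) { in.pop(); out.pop(); }
    }
    bool empty() { flush(); return in.empty(); }
    long long top() { flush(); return in.top(); } // max of the current values
};
```

Both the per-LCA structures and the global structure can be instances of this; each operation is amortized $O(\log n)$.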
|
[
"brute force",
"data structures",
"trees"
] | 3,000
| null |
1787
|
H
|
Codeforces Scoreboard
|
You are participating in a Codeforces Round with $n$ problems.
You spend exactly one minute to solve each problem, the time it takes to submit a problem can be ignored. You can only solve at most one problem at any time. The contest starts at time $0$, so you can make your first submission at any time $t \ge 1$ minutes. Whenever you submit a problem, it is always accepted.
The scoring of the $i$-th problem can be represented by three integers $k_i$, $b_i$, and $a_i$. If you solve it at time $t$ minutes, you get $\max(b_i - k_i \cdot t,a_i)$ points.
Your task is to choose an order to solve all these $n$ problems to get the maximum possible score. You can assume the contest is long enough to solve all problems.
|
Since $\max\{b_i-k_i\cdot t,a_i\} = b_i - \min\{k_i\cdot t,\, b_i - a_i\}$, we can pre-calculate the sum of $b_i$. Let $c_i = b_i - a_i$; the task is now to minimize the sum of $\min\{k_i\cdot t,\, c_i\}$. If we fix the set of problems with $\min\{k_i\cdot t, c_i\} = k_i\cdot t$, it is optimal to sort them in descending order of $k_i$ and assign $t=1,2,3,\ldots$ in that order; the problems with $\min\{k_i\cdot t, c_i\} = c_i$ contribute their $c_i$ directly. This yields a DP: sort all problems in descending order of $k_i$ and let $f_{i,j}$ be the minimum sum over the first $i$ problems (after sorting) with $j$ of them taking the $k_i\cdot t$ branch:
$$f_{i,j}=\begin{cases} f_{i-1,j}+c_i & j=0 \\ \min\{f_{i-1,j}+c_i,\; f_{i-1,j-1}+k_i\cdot j\} & j>0 \end{cases}$$
The time complexity is $O(n^2)$. We can show that $f_{i,j}$ is convex in $j$. Proof: let $g_{i,j}=f_{i,j}-f_{i,j-1}$, so $f_{i,j}=f_{i,0}+\sum\limits_{k=1}^jg_{i,k}$. Rewriting the recurrence (using $f_{i,0}=f_{i-1,0}+c_i$):
$$f_{i,0}+\sum_{k=1}^{j}g_{i,k}=\min\Big\{f_{i-1,0}+\sum_{k=1}^{j}g_{i-1,k}+c_i,\; f_{i-1,0}+\sum_{k=1}^{j-1}g_{i-1,k}+k_i\cdot j\Big\}$$
$$c_i+\sum_{k=1}^{j}g_{i,k}=\sum_{k=1}^{j-1}g_{i-1,k}+\min\{g_{i-1,j}+c_i,\;k_i\cdot j\}$$
$$c_i+\sum_{k=1}^{j-1}g_{i,k}=\sum_{k=1}^{j-2}g_{i-1,k}+\min\{g_{i-1,j-1}+c_i,\;k_i\cdot (j-1)\}$$
$$g_{i,j}=g_{i-1,j-1}+\min\{g_{i-1,j}+c_i,\;k_i\cdot j\}-\min\{g_{i-1,j-1}+c_i,\;k_i\cdot (j-1)\}$$
Because the $k_i$ are sorted in non-increasing order, mathematical induction shows that $g_{i-1,j}$ grows faster in $j$ than $k_i\cdot j$, so there is a critical value $M$ such that the first branch of the $\min$ wins for all $j\le M$ and the second for all $j>M$. Comparing $g_{i-1}$ with $g_i$: the first part of the sequence remains unchanged, the last part is shifted and increased by $k_i$, with an additional number $k_i\cdot j$ inserted in the middle, where $j$ is the largest index satisfying $g_{i-1,j-1}\le k_i \cdot j$.
Maintain this difference sequence with a treap (or any balanced BST); this technique is commonly called slope trick.
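Before the slope-trick optimization, the quadratic DP above is already a complete algorithm; a hedged $O(n^2)$ sketch (the function name and the rolling 1-D layout are mine):

```cpp
#include <algorithm>
#include <vector>

// O(n^2) DP from the editorial: sort by k descending; f[j] = minimum of
// sum(min(k_i * t, c_i)) over the processed problems with j of them taking
// the k_i * t branch (the j-th such problem is solved at time t = j).
// The final score is sum(b_i) minus the best f[j].
long long max_score(std::vector<long long> k, std::vector<long long> b,
                    std::vector<long long> a) {
    int n = k.size();
    std::vector<int> id(n);
    for (int i = 0; i < n; i++) id[i] = i;
    std::sort(id.begin(), id.end(), [&](int p, int q) { return k[p] > k[q]; });

    const long long INF = 4e18;
    std::vector<long long> f(n + 1, INF);
    long long sumB = 0;
    f[0] = 0;
    for (int t = 0; t < n; t++) {           // t problems processed so far
        int i = id[t];
        long long c = b[i] - a[i];
        sumB += b[i];
        for (int j = t + 1; j >= 1; j--) {  // downward sweep: 1-D knapsack
            long long stay = (j <= t ? f[j] + c : INF);  // f[t+1] was infinite
            f[j] = std::min(stay, f[j - 1] + k[i] * j);
        }
        f[0] += c;
    }
    return sumB - *std::min_element(f.begin(), f.end());
}
```

For instance, with problems $(k,b,a) = (2,10,0)$ and $(1,10,5)$, solving the first problem at $t=1$ and the second at $t=2$ scores $8+8=16$, which the DP reproduces.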
|
[
"binary search",
"data structures",
"dp",
"geometry"
] | 3,300
| null |
1787
|
I
|
Treasure Hunt
|
Define the beauty value of a sequence $b_1,b_2,\ldots,b_c$ as the maximum value of $\sum\limits_{i=1}^{q}b_i + \sum\limits_{i=s}^{t}b_i$, where $q$, $s$, $t$ are all integers and $s > q$ or $t\leq q$. Note that $b_i = 0$ when $i<1$ or $i>c$, $\sum\limits_{i=s}^{t}b_i = 0$ when $s>t$.
For example, when $b = [-1,-2,-3]$, we may have $q = 0$, $s = 3$, $t = 2$ so the beauty value is $0 + 0 = 0$. And when $b = [-1,2,-3]$, we have $q = s = t = 2$ so the beauty value is $1 + 2 = 3$.
You are given a sequence $a$ of length $n$, determine the sum of the beauty value of all non-empty subsegments $a_l,a_{l+1},\ldots,a_r$ ($1\leq l\leq r\leq n$) of the sequence $a$.
Print the answer modulo $998\,244\,353$.
|
First, we observe that: $\max\limits_{l>p\,\texttt{or}\,r\leq p}{(\sum\limits_{i=1}^{p}{a_i}+\sum\limits_{i=l}^{r}{a_i})}=\max\limits_{p=1}^{n}{\sum\limits_{i=1}^{p}{a_i}}+\max\limits_{l,r}{\sum\limits_{i=l}^{r}{a_i}}$. Proof: we can prove it by contradiction. Define $g(l,r,p)$ as $\sum\limits_{i=1}^{p}{a_i}+\sum\limits_{i=l}^{r}{a_i}$, and suppose the unconstrained maximum of $g(l,r,p)$ is attained with $l\leq p<r$, violating the constraint. If $\sum\limits_{i=p+1}^{r}{a_i}\geq 0$ then $g(l,r,r)\geq g(l,r,p)$, otherwise $g(l,p,p)> g(l,r,p)$; either way a feasible choice is at least as good, so the observation holds. Thus the problem splits into two parts: $\max\limits_{p=1}^{n}{\sum\limits_{i=1}^{p}{a_i}}$ and $\max\limits_{l,r}{\sum\limits_{i=l}^{r}{a_i}}$. The first part, summed over all subsegments, is easy to handle with a stack that maintains prefix maxima. For $\max\limits_{l,r}{\sum\limits_{i=l}^{r}{a_i}}$, consider divide and conquer. For a range $[l,r]$, let $m=\lfloor \frac{l+r}{2}\rfloor$ and define $h(L,R)$ as $\max\limits_{L\leq l'\le r'\leq R}{\sum\limits_{i=l'}^{r'}{a_i}}$. We want the sum of $h(L,R)$ over all $l\leq L\leq m<R \leq r$. Clearly $h(L,R) = \max\big(h(L,m),\,h(m+1,R),\,\max\limits_{L\leq p\leq m}{\sum\limits_{j=p}^{m}{a_j}}+\max\limits_{m<p\leq R}{\sum\limits_{j=m+1}^{p}{a_j}}\big)$, and all four of these quantities can be computed in $O(r - l)$ time. Using binary search on top of this, the problem is solved in $O(n\log ^2n)$, but we can do better. An $O(n\log n)$ solution follows from this theorem: define $f(p)$ as $\max\limits_{1\leq q\leq p}{\sum\limits_{i=1}^q{a_i}}$; then $h(1,p)-f(p)$ is non-decreasing in $p$. Proof: extending the first observation, at the optimum we may assume $l>p$ or $r=p$; if $l>p$, then $\sum\limits_{i=p+1}^r{a_i}<0$.
When we append an element at the back of the sequence (new length $len$): if $\sum\limits_{i=1}^{len}{a_i}>\sum\limits_{i=1}^p{a_i}$, then $p$ and $r$ both become $len$, and $\Delta f(len)<\Delta h(1,len)$ since $\sum\limits_{i=p+1}^r{a_i}<0$. Otherwise, if $\sum\limits_{i=1}^{len}{a_i}\leq \sum\limits_{i=1}^p{a_i}$, then $\Delta f(len)=0$ while $\Delta h(1,len) \geq 0$, so the theorem holds in this case as well. If $p=r$, then $\Delta f(len)=\Delta h(1,len)$, so the theorem also holds. Thanks to the theorem, the binary search can be replaced by two pointers, giving a final time complexity of $O(n\log n)$.
|
[
"data structures",
"divide and conquer",
"two pointers"
] | 3,400
| null |
1788
|
A
|
One and Two
|
You are given a sequence $a_1, a_2, \ldots, a_n$. Each element of $a$ is $1$ or $2$.
Find out if an integer $k$ exists so that the following conditions are met.
- $1 \leq k \leq n-1$, and
- $a_1 \cdot a_2 \cdot \ldots \cdot a_k = a_{k+1} \cdot a_{k+2} \cdot \ldots \cdot a_n$.
If there exist multiple $k$ that satisfy the given condition, print the smallest.
|
The number of $2$s among $a_1, a_2, \cdots, a_k$ must equal the number of $2$s among $a_{k+1}, \cdots, a_n$. Checking every $k$ independently solves the problem in $O(N^2)$; sweeping $k$ from $1$ to $n-1$ while maintaining the prefix count solves it in $O(N)$. Sweeping while naively multiplying big integers in Python was also accepted, since that has time complexity $O(N^2)$. Checking every $k$ and naively multiplying from scratch is $O(N^3)$, which won't fit in the time limit.
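A minimal sketch of the $O(N)$ sweep (the function name is mine; it returns $-1$ when no valid $k$ exists):

```cpp
#include <vector>

// Smallest k (1 <= k <= n-1) with equally many 2s on both sides, or -1.
// Equal counts of 2s <=> equal products, since every element is 1 or 2.
int smallest_k(const std::vector<int>& a) {
    int n = a.size(), total = 0, left = 0;
    for (int v : a) total += (v == 2);
    for (int k = 1; k <= n - 1; k++) {
        left += (a[k - 1] == 2);           // 2s in the prefix a_1..a_k
        if (left * 2 == total) return k;
    }
    return -1;
}
```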
|
[
"brute force",
"implementation",
"math"
] | 800
| null |
1788
|
B
|
Sum of Two Numbers
|
The sum of digits of a non-negative integer $a$ is the result of summing up its digits together when written in the decimal system. For example, the sum of digits of $123$ is $6$ and the sum of digits of $10$ is $1$. In a formal way, the sum of digits of $\displaystyle a=\sum_{i=0}^{\infty} a_i \cdot 10^i$, where $0 \leq a_i \leq 9$, is defined as $\displaystyle\sum_{i=0}^{\infty}{a_i}$.
Given an integer $n$, find two non-negative integers $x$ and $y$ which satisfy the following conditions.
- $x+y=n$, and
- the sum of digits of $x$ and the sum of digits of $y$ differ by at most $1$.
It can be shown that such $x$ and $y$ always exist.
|
Let's assume that there is no carry while adding $x$ and $y$. Denote $n=a_9 \cdots a_1a_0$, $x=b_9 \cdots b_1b_0$, $y=c_9 \cdots c_1c_0$ in the decimal system. The conditions become:
- $a_i=b_i+c_i$ for all $0 \leq i \leq 9$;
- the sum of the $b_i$ and the sum of the $c_i$ differ by at most $1$.
If $a_i$ is even, let $b_i=c_i=a_i/2$. Otherwise, let $b_i$ and $c_i$ be $\frac{a_i+1}{2}$ and $\frac{a_i-1}{2}$ in some order; by alternating between $(b_i, c_i)=(\frac{a_i+1}{2}, \frac{a_i-1}{2})$ and $(b_i, c_i)=(\frac{a_i-1}{2}, \frac{a_i+1}{2})$ across the odd digits, the two digit sums differ by at most $1$. There is an alternative solution. If $n$ is even, divide it into $(\frac{n}{2}, \frac{n}{2})$. If the remainder of $n$ divided by $10$ is not $9$, divide it into $(\frac{n+1}{2}, \frac{n-1}{2})$. If the remainder is $9$, recursively find an answer $(x', y')$ for $\lfloor \frac{n}{10} \rfloor$; the answer is then $(10x'+4, 10y'+5)$ or $(10x'+5, 10y'+4)$, depending on which part has the bigger digit sum. The following approaches have countertests:
1. finding $x$ and $y$ by brute force starting from $(1, n-1)$;
2. finding $x$ and $y$ by brute force starting from $(\frac{n+1}{2}, \frac{n-1}{2})$.
A solution that finds $(x, y)$ randomly passes.
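A sketch of the digit-by-digit construction above (helper names are mine; assumes $n$ fits in 64 bits). Since $b_i + c_i = a_i \le 9$ there are never carries, so the digit-sum guarantee follows directly from the alternation:

```cpp
#include <utility>

// Split n into x + y digit by digit with no carries: even digits are halved,
// odd digits alternately give the larger half to x and to y, so the two
// digit sums differ by at most 1.
std::pair<long long, long long> split(long long n) {
    long long x = 0, y = 0, pow10 = 1;
    bool give_to_x = true;                 // who receives the larger half next
    for (long long m = n; m > 0; m /= 10, pow10 *= 10) {
        long long d = m % 10;
        long long hi = (d + 1) / 2, lo = d / 2;
        if (d % 2 == 0)     { x += pow10 * hi; y += pow10 * lo; }
        else if (give_to_x) { x += pow10 * hi; y += pow10 * lo; give_to_x = false; }
        else                { x += pow10 * lo; y += pow10 * hi; give_to_x = true; }
    }
    return {x, y};
}

// digit-sum helper for checking the guarantee
int digit_sum(long long v) { int s = 0; while (v > 0) { s += v % 10; v /= 10; } return s; }
```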
|
[
"constructive algorithms",
"greedy",
"implementation",
"math",
"probabilities"
] | 1,100
| null |
1788
|
C
|
Matching Numbers
|
You are given an integer $n$. Pair the integers $1$ to $2n$ (i.e. each integer should be in exactly one pair) so that each sum of matched pairs is consecutive and distinct.
Formally, let $(a_i, b_i)$ be the pairs that you matched. $\{a_1, b_1, a_2, b_2, \ldots, a_n, b_n\}$ should be a permutation of $\{1, 2, \ldots, 2n\}$. Let the sorted list of $\{a_1+b_1, a_2+b_2, \ldots, a_n+b_n\}$ be $s_1 < s_2 < \ldots < s_n$. We must have $s_{i+1}-s_i = 1$ for $1 \le i \le n - 1$.
|
Let's assume that $1$ to $2n$ is paired and the pair sums are $k, k+1, \cdots, k+n-1$. The sum from $1$ to $2n$ must equal the sum from $k$ to $k+n-1$, so $n(2n+1)=\frac{(2k+n-1)n}{2}$, which gives $4n+2=2k+n-1$. Since $4n+2$ is even, $2k+n-1$ must be even, so no such pairing exists when $n$ is even. If $n$ is odd, there are various ways to build such a pairing. Let $m=\frac{n-1}{2}$; then $(1, 3m+3), (2, 3m+4), \ldots, (m, 4m+2), (m+1, 2m+2), (m+2, 2m+3), \ldots, (2m+1, 3m+2)$ is one such pairing.
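The construction for odd $n$ can be sketched as follows (names are mine; an empty result signals that $n$ is even and no pairing exists). For $n=3$ ($m=1$) it produces $(1,6),(2,4),(3,5)$ with sums $7,6,8$:

```cpp
#include <utility>
#include <vector>

// Build the editorial's pairing for odd n, with m = (n-1)/2:
// (i, 3m+2+i) for 1 <= i <= m,  then  (m+i, 2m+1+i) for 1 <= i <= m+1.
// Pair sums form a block of n consecutive distinct values.
std::vector<std::pair<int,int>> build_pairs(int n) {
    std::vector<std::pair<int,int>> res;
    if (n % 2 == 0) return res;            // impossible for even n
    int m = (n - 1) / 2;
    for (int i = 1; i <= m; i++)     res.push_back({i, 3 * m + 2 + i});
    for (int i = 1; i <= m + 1; i++) res.push_back({m + i, 2 * m + 1 + i});
    return res;
}
```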
|
[
"constructive algorithms",
"greedy",
"math"
] | 1,300
| null |
1788
|
D
|
Moving Dots
|
We play a game with $n$ dots on a number line.
The initial coordinate of the $i$-th dot is $x_i$. These coordinates are distinct. Every dot starts moving simultaneously with the same constant speed.
Each dot moves in the direction of the closest dot (different from itself) until it meets another dot. In the case of a tie, it goes to the left. Two dots meet if they are in the same coordinate, after that, they stop moving.
After enough time, every dot stops moving. The result of a game is the number of distinct coordinates where the dots stop.
Because this game is too easy, calculate the sum of results when we play the game for every subset of the given $n$ dots that has at least two dots. As the result can be very large, print the sum modulo $10^9+7$.
|
Let's first think about the original problem without subsets. Each dot never changes direction while moving. Suppose some dots gather at coordinate $x$: the rightmost dot with coordinate smaller than $x$ must move right, and the leftmost dot with coordinate bigger than $x$ must move left. Hence the result equals the number of adjacent pairs of dots that move toward each other. Now let's solve the problem over subsets. Instead of counting adjacent toward-moving pairs for each subset, we count, for each pair $1 \leq i < j \leq N$, the number of subsets in which dot $i$ moves right, dot $j$ moves left, and no chosen dot lies between them. Let $k$ be the coordinate of another dot; which $k$ may be in such a subset? Since no dot may lie between $i$ and $j$, dots with $x_i < k < x_j$ must not be chosen. Since dot $i$ must move right, dots with $k<x_i$ and $x_i-k \leq x_j-x_i$ must not be chosen; in the same way, for dot $j$, dots with $k>x_j$ and $k-x_j < x_j-x_i$ must not be chosen. Summing these up, exactly the dots with $2x_i - x_j \leq k < 2x_j - x_i$ are forbidden. Using binary search, we can count the forbidden dots in $O(\log N)$. If $p$ dots remain free, the number of subsets in which $i$ and $j$ move toward each other is $2^p$; summing $2^p$ over all pairs gives the answer. Since there are $O(N^2)$ pairs $(i, j)$, this solves the problem in $O(N^2\log N)$. Alternatively, using the monotonicity of $2x_j-x_i$ and $2x_i-x_j$ as $j$ increases, the forbidden range can be maintained by sweeping, for $O(N^2)$ total. Both $O(N^2\log N)$ and $O(N^2)$ solutions are accepted. Some testers also had dynamic programming solutions.
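The $O(N^2\log N)$ counting can be sketched as follows (a hedged implementation, not the official code; `sum_of_results` is my name). For three dots at $1,2,3$ the four subsets of size $\ge 2$ each stop at one coordinate, so the sum is $4$:

```cpp
#include <algorithm>
#include <vector>

const long long MOD = 1e9 + 7;

// For each pair i < j: dot i moves right, dot j moves left, and no chosen
// dot lies strictly between them iff every dot with coordinate k in
// [2*x_i - x_j, 2*x_j - x_i) is excluded; the remaining dots are free.
long long sum_of_results(std::vector<long long> x) {
    std::sort(x.begin(), x.end());
    int n = x.size();
    std::vector<long long> p2(n + 1, 1);
    for (int i = 1; i <= n; i++) p2[i] = p2[i - 1] * 2 % MOD;
    long long ans = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++) {
            long long lo = 2 * x[i] - x[j], hi = 2 * x[j] - x[i];
            // dots inside [lo, hi) may not be chosen; note x_i, x_j are inside
            int banned = std::lower_bound(x.begin(), x.end(), hi) -
                         std::lower_bound(x.begin(), x.end(), lo);
            ans = (ans + p2[n - banned]) % MOD;
        }
    return ans;
}
```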
|
[
"binary search",
"brute force",
"combinatorics",
"math",
"two pointers"
] | 2,000
| null |
1788
|
E
|
Sum Over Zero
|
You are given an array $a_1, a_2, \ldots, a_n$ of $n$ integers. Consider $S$ as a set of segments satisfying the following conditions.
- Each element of $S$ should be in form $[x, y]$, where $x$ and $y$ are integers between $1$ and $n$, inclusive, and $x \leq y$.
- No two segments in $S$ intersect with each other. Two segments $[a, b]$ and $[c, d]$ intersect if and only if there exists an integer $x$ such that $a \leq x \leq b$ and $c \leq x \leq d$.
- For each $[x, y]$ in $S$, $a_x+a_{x+1}+ \ldots +a_y \geq 0$.
The length of the segment $[x, y]$ is defined as $y-x+1$. $f(S)$ is defined as the sum of the lengths of every element in $S$. In a formal way, $f(S) = \sum_{[x, y] \in S} (y - x + 1)$. Note that if $S$ is empty, $f(S)$ is $0$.
What is the maximum $f(S)$ among all possible $S$?
|
Denote by $p$ the prefix sums of $a$. For a segment $[x+1, y]$ to be an element of $S$, $p_x \leq p_y$ must hold. Let $dp_i$ be the maximum total length over sets $S$ whose segments all end at or before $i$ (segment $[x, y]$ counts if $y \leq i$). If no segment ends at $i$, then $dp_i=dp_{i-1}$; if a segment $[k+1, i]$ is in $S$, then $dp_i=\max_{ p_{k} \leq p_i }(dp_k+i-k)$. Combining:
$$dp_i = \max\Big(dp_{i-1},\ \max_{p_k \leq p_i}(dp_k+i-k)\Big)$$
This dp gives an $O(N^2)$ solution. To speed up the transition, use a segment tree. First apply coordinate compression to the $p_i$, since only their relative order matters. Maintain a segment tree that stores $dp_k-k$ at position $p_k$, and compute $dp_i$ in increasing order of $i$:
$$dp_i = \max\Big(dp_{i-1},\ \max_{p_k \leq p_i}(dp_k-k)+i\Big)$$
The inner maximum is a range query over $[0, p_i]$ on the segment tree, so each $dp_i$ takes $O(\log N)$ and the entire problem is solved in $O(N\log N)$. There is an alternative $O(N\log N)$ solution that maintains the pairs $(dp_k-k, p_k)$ monotonically in a set.
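A sketch of the $O(N\log N)$ DP, using a prefix-max Fenwick tree in place of the segment tree described above (equivalent here, since only prefix queries $[0, p_i]$ are needed; names are mine):

```cpp
#include <algorithm>
#include <vector>

// fen keeps, at the compressed position of prefix sum p_k, the maximum of
// dp_k - k; a prefix-max query over positions with p_k <= p_i yields dp_i.
long long max_f(const std::vector<long long>& a) {
    int n = a.size();
    std::vector<long long> p(n + 1, 0);
    for (int i = 1; i <= n; i++) p[i] = p[i - 1] + a[i - 1];
    std::vector<long long> vals = p;                   // coordinate compression
    std::sort(vals.begin(), vals.end());
    vals.erase(std::unique(vals.begin(), vals.end()), vals.end());
    int m = vals.size();
    const long long NEG = -1e18;
    std::vector<long long> fen(m + 1, NEG);            // prefix-max Fenwick tree
    auto update = [&](int i, long long v) {
        for (i++; i <= m; i += i & -i) fen[i] = std::max(fen[i], v);
    };
    auto query = [&](int i) {                          // max over positions 0..i
        long long r = NEG;
        for (i++; i > 0; i -= i & -i) r = std::max(r, fen[i]);
        return r;
    };
    auto pos = [&](long long v) {
        return int(std::lower_bound(vals.begin(), vals.end(), v) - vals.begin());
    };
    std::vector<long long> dp(n + 1, 0);
    update(pos(p[0]), 0);                              // dp_0 - 0 at p_0
    for (int i = 1; i <= n; i++) {
        dp[i] = std::max(dp[i - 1], query(pos(p[i])) + i);
        update(pos(p[i]), dp[i] - i);
    }
    return dp[n];
}
```

For example, on $a = [1,2,-100,3]$ the best choice is $\{[1,2],[4,4]\}$ with $f(S)=3$.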
|
[
"data structures",
"dfs and similar",
"dp"
] | 2,200
| null |
1788
|
F
|
XOR, Tree, and Queries
|
You are given a tree of $n$ vertices. The vertices are numbered from $1$ to $n$.
You will need to assign a weight to each edge. Let the weight of the $i$-th edge be $a_i$ ($1 \leq i \leq n-1$). The weight of each edge should be an integer between $0$ and $2^{30}-1$, inclusive.
You are given $q$ conditions. Each condition consists of three integers $u$, $v$, and $x$. This means that the bitwise XOR of all edges on the shortest path from $u$ to $v$ should be $x$.
Find out if there exist $a_1, a_2, \ldots, a_{n-1}$ that satisfy the given conditions. If yes, print a solution such that $a_1 \oplus a_2 \oplus \ldots \oplus a_{n-1}$ is the \textbf{smallest}. Here, $\oplus$ denotes the bitwise XOR operation.
If there are multiple solutions such that $a_1 \oplus a_2 \oplus \ldots \oplus a_{n-1}$ is the smallest, print any.
|
Let's denote $p_i$ as xor of edges in path from node $1$ to node $i$. Edges in path from $i$ to $j$ is (edges in path from $1$ to $i$) + (edges in path from $1$ to $j$) - $2 \times$(edges in path from $1$ to $lca(i, j)$) where $lca(i, j)$ denotes the least common ancestor of $i$ and $j$. Since xor of two same number is $0$, we can observe that xor of path from $i$ to $j$ is $p_i \oplus p_j$. If we know every $p_i$ for all $i$, weight of edge connecting $i$ and $j$ is $p_i \oplus p_j$, so deciding weight of every edge is equivalent to deciding every $p_i$ for $i \geq 2$. Let $G$ be the original tree graph. Let's make a undirected weighted graph $G'$ from the given condition. Condition ($u$, $v$, $x$) makes an edge connecting $u$ and $v$ with weight $x$. This edge means that $p_v=p_u \oplus x$. Now if $u$ and $v$ are connected by edges in $G'$, this means that $p_v=p_u \oplus$ (xor of edges in path from $u$ to $v$). First, let's check if it is possible to make a graph that satisfies the conditions. If $G'$ is a connected graph, we can solve $p_i$ using dfs from node $1$. For edges not used in dfs (back edges), we have to check if it is valid with the solved $p_i$. Specifically, if there is an edge ($u$, $v$, $x$), we need to check if $p_u \oplus p_v=x$. Now let's think when $G'$ is not a connected graph. By dividing the graph into connected components, we can solve if it is impossible to make a graph that satisfies the conditions in a same fashion. Now let's try to minimize $a_1 \oplus a_2 \oplus \cdots \oplus a_{n-1}$. For edge $(u, v)$ in $G$, weight of edge is $p_u \oplus p_v$. By substituting $a_i$ into $p_{x_i} \oplus$ $p_{y_i}$, $a_1 \oplus a_2 \oplus \cdots \oplus a_{n-1}$ can be written as xor of $p_i$ where node $i$ has odd degree. Let the connected components of $G'$ be $G_1, G_2, \ldots, G_k$. Pick any vertex $r_k$ in each component $G_k$. For every vertex $i$ in $G_k$, $p_i$ is expressed as $p_{r_k} \oplus$ (xor of edges in path from $r_k$ to $i$). 
Let's define a set $L$ as $L=\{\,i \mid G_i \text{ has an odd number of odd-degree vertices}\,\}$. By substituting $p_i = p_{r_k} \oplus$ (xor of edges on the path in $G'$ from $r_k$ to $i$), we can rewrite "the xor of $p_i$ over odd-degree nodes $i$" as $\bigoplus_{k \in L} p_{r_k} \oplus c$, where $c$ is a constant. If $L$ is empty, the answer is constant and any solution that satisfies the given conditions will do. If $L$ is not empty, set $p_{r_k}$ to $c$ for one $k\in L$ and to $0$ for all other components, so that $a_1 \oplus a_2 \oplus \cdots \oplus a_{n-1}$ is $0$. We can solve the problem in $O(N)$. A fast $O(N\log X)$ solution might be accepted, including the solution that handles each bit of the edge weights separately.
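The consistency check across components of $G'$ can equivalently be done with a weighted DSU (matching the dsu tag) that stores, for every node, the xor of its $p$ value with the $p$ value of its component root. A minimal Python sketch of this standard technique; the class and method names are mine, not from the editorial:

```python
class XorDSU:
    """DSU where rel[v] = p_v XOR p_root(v); supports constraints p_u ^ p_v = x."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rel = [0] * n  # xor of p[v] with p of v's current root

    def find(self, v):
        # Path compression: after the recursive call, rel[parent[v]]
        # already holds the xor from parent[v] all the way to the root.
        if self.parent[v] == v:
            return v
        root = self.find(self.parent[v])
        self.rel[v] ^= self.rel[self.parent[v]]
        self.parent[v] = root
        return root

    def union(self, u, v, x):
        """Impose p_u ^ p_v = x; return False on a contradiction."""
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return (self.rel[u] ^ self.rel[v]) == x
        # p_ru ^ p_rv = (p_u ^ rel[u]) ^ (p_v ^ rel[v]) = rel[u] ^ rel[v] ^ x
        self.parent[ru] = rv
        self.rel[ru] = self.rel[u] ^ self.rel[v] ^ x
        return True
```

Feeding every condition $(u,v,x)$ through `union` detects contradictions on the fly; afterwards `rel` gives each $p_i$ relative to its component representative, which is exactly the $p_{r_k} \oplus (\text{path xor})$ decomposition used above.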
|
[
"bitmasks",
"constructive algorithms",
"dfs and similar",
"dsu",
"graphs",
"greedy",
"implementation",
"trees"
] | 2,500
| null |
1789
|
A
|
Serval and Mocha's Array
|
Mocha likes arrays, and Serval gave her an array consisting of positive integers as a gift.
Mocha thinks that for an array of positive integers $a$, it is good iff the greatest common divisor of all the elements in $a$ is no more than its length. And for an array of at least $2$ positive integers, it is beautiful iff all of its prefixes whose length is no less than $2$ are good.
For example:
- $[3,6]$ is not good, because $\gcd(3,6)=3$ is greater than its length $2$.
- $[1,2,4]$ is both good and beautiful, because all of its prefixes whose length is no less than $2$, which are $[1,2]$ and $[1,2,4]$, are both good.
- $[3,6,1]$ is good but not beautiful, because $[3,6]$ is not good.
Now Mocha gives you the gift array $a$ of $n$ positive integers, and she wants to know whether array $a$ could become beautiful by reordering the elements in $a$. It is allowed to keep the array $a$ unchanged.
|
Consider an array $a$ of $n$ ($n\geq 2$) positive integers. For $2\leq i\leq n$, the inequality $\gcd(a_1,a_2,\cdots,a_i) \leq \gcd(a_1,a_2)$ holds. Therefore, when the prefix $[a_1,a_2]$ of $a$ is good, i.e. $\gcd(a_1,a_2)\leq 2$, every prefix of $a$ whose length is no less than $2$ has gcd at most $2$ and is therefore good, so $a$ is beautiful. Conversely, it is obvious that $[a_1, a_2]$ is good when $a$ is beautiful. So we get the conclusion that $a$ is beautiful if and only if the prefix $[a_1, a_2]$ is good. We can check if there exist $a_i, a_j$ ($i\neq j$) such that $\gcd(a_i, a_j)\leq 2$. If so, we can move $a_i,a_j$ to the front of $a$ to make it beautiful, and the answer is Yes. If not, the answer is No. Time complexity: $O(n^2\log 10^6)$.
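A direct sketch of this pairwise check in Python (the function name is mine); the double loop matches the stated $O(n^2)$ pair enumeration, with `math.gcd` supplying the $\log$ factor:

```python
from math import gcd

def can_be_beautiful(a):
    # a can be reordered into a beautiful array iff some pair has gcd <= 2:
    # placing that pair first makes every longer prefix good, because the
    # gcd of a prefix never exceeds the gcd of its first two elements.
    n = len(a)
    return any(gcd(a[i], a[j]) <= 2
               for i in range(n) for j in range(i + 1, n))
```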
|
[
"brute force",
"math",
"number theory"
] | 800
| null |
1789
|
B
|
Serval and Inversion Magic
|
Serval has a string $s$ that only consists of 0 and 1 of length $n$. The $i$-th character of $s$ is denoted as $s_i$, where $1\leq i\leq n$.
Serval can perform the following operation called Inversion Magic on the string $s$:
- Choose an segment $[l, r]$ ($1\leq l\leq r\leq n$). For $l\leq i\leq r$, change $s_i$ into 1 if $s_i$ is 0, and change $s_i$ into 0 if $s_i$ is 1.
For example, let $s$ be 010100 and the segment $[2,5]$ is chosen. The string $s$ will be 001010 after performing the Inversion Magic.
Serval wants to make $s$ a palindrome after performing Inversion Magic \textbf{exactly once}. Help him to determine whether it is possible.
A string is a palindrome iff it reads the same backwards as forwards. For example, 010010 is a palindrome but 10111 is not.
|
If $s$ is palindromic initially, we can operate on the interval $[1,n]$, and the answer is Yes. Let's consider the other case. In a palindrome $s$, $s_{i}=s_{n-i+1}$ must hold for each $i$ in $[1,\lfloor n/2\rfloor]$. For those $i$, we check whether $s_{i}=s_{n-i+1}$ holds in the initial string. For every illegal position $i$, the operation must contain either $i$ or $n+1-i$, but not both. For every legal position, the operation must contain neither of $i$ and $n+1-i$, or both of them. If the illegal positions are contiguous (which means that they are $l,l+1,\dots,r-1,r$ for some $l$ and $r$), we may operate on the interval $[l,r]$ (or $[n+1-r,n+1-l]$), making the string palindromic. The answer is Yes. Otherwise, there must be some legal positions lying between the illegal ones. Suppose the illegal positions lie within $[l,r]$ (but are not contiguous), and the operation is $[o_{1},o_{2}]$. Without loss of generality, let the operation lie in the left half of the string. Then $o_{1}\le l,r\le o_{2}<n+1-r$ must hold to correct all the illegal positions. Such an interval covers every legal position that lies between the illegal ones but does not cover its symmetric position. Thus, this kind of operation produces new illegal positions. In other words, there are no valid operations in this situation. The answer is No. Time complexity: $O(n)$.
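The whole test reduces to: gather the mismatched positions in the first half and ask whether they form one contiguous block. A sketch with 0-based indexing (the function name is mine):

```python
def can_fix(s):
    """Can s become a palindrome after exactly one interval inversion?"""
    n = len(s)
    # Positions i in the first half where s_i != s_{n-1-i} ("illegal" positions).
    bad = [i for i in range(n // 2) if s[i] != s[n - 1 - i]]
    if not bad:
        return True  # already a palindrome: invert the whole string [1, n]
    # Possible iff the illegal positions form a single contiguous block.
    return bad[-1] - bad[0] + 1 == len(bad)
```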
|
[
"brute force",
"implementation",
"strings",
"two pointers"
] | 800
| null |
1789
|
C
|
Serval and Toxel's Arrays
|
Toxel likes arrays. Before traveling to the Paldea region, Serval gave him an array $a$ as a gift. This array has $n$ \textbf{pairwise distinct} elements.
In order to get more arrays, Toxel performed $m$ operations with the initial array. In the $i$-th operation, he modified the $p_{i}$-th element of the $(i-1)$-th array to $v_{i}$, resulting in the $i$-th array (the initial array $a$ is numbered as $0$). During modifications, Toxel guaranteed that the elements of each array are still \textbf{pairwise distinct} after each operation.
Finally, Toxel got $m+1$ arrays and denoted them as $A_{0}=a, A_{1},\ldots,A_{m}$. For each pair $(i,j)$ ($0\le i<j\le m$), Toxel defines its value as the number of distinct elements of the concatenation of $A_{i}$ and $A_{j}$. Now Toxel wonders, what is the sum of the values of all pairs? Please help him to calculate the answer.
|
Consider the contribution of each value. We only need to count the number of concatenated arrays each value appears in, and sum all those counts up; the contribution of a value depends only on its number of appearances. Notice that the appearances of each value form some intervals: each interval starts when the value overwrites another element (or in the initial array) and ends when the value is overwritten itself (or in the $m$-th array). As there are no duplicate elements, the intervals do not intersect, so we can simply sum their lengths up. Let's use an array $\text{appear}$ to track the appearance of each value. We first set the appearance of the initial elements to $0$, and of all other values to $-1$, which means the value does not appear. Then, if the $i$-th modification changes some element from $x$ to $y$, we add $i-\text{appear}_{x}$ to $\text{count}_{x}$, set $\text{appear}_{x}$ to $-1$, and set $\text{appear}_{y}$ to $i$. After all operations, for every $x$ with $\text{appear}_{x}\neq -1$, add $m+1-\text{appear}_{x}$ to $\text{count}_{x}$. Value $x$ then appears in $\frac{m(m+1)}{2}-\frac{(m-\text{count}_{x})(m-\text{count}_{x}+1)}{2}$ concatenated arrays. Time complexity: $O(n+m)$.
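The bookkeeping described above can be sketched as follows; `sum_of_pair_values` and the use of dicts in place of the $\text{appear}$/$\text{count}$ arrays are my choices, not from the editorial:

```python
def sum_of_pair_values(n, m, a, ops):
    """a: initial array (1-based positions in ops); ops: list of (p, v)."""
    arr = a[:]
    appear = {x: 0 for x in a}   # array index where x became present
    count = {}                   # number of arrays A_0..A_m containing x
    for i, (p, v) in enumerate(ops, start=1):  # i-th operation builds A_i
        x = arr[p - 1]
        count[x] = count.get(x, 0) + i - appear.pop(x)  # close x's interval
        arr[p - 1] = v
        appear[v] = i                                   # open v's interval
    for x, t in appear.items():  # values still present in A_m
        count[x] = count.get(x, 0) + m + 1 - t
    total_pairs = m * (m + 1) // 2
    # x contributes to every pair (i, j) where x is in A_i or A_j:
    # total pairs minus pairs among the (m + 1 - count[x]) arrays lacking x.
    return sum(total_pairs - (m - c) * (m - c + 1) // 2
               for c in count.values())
```

E.g. for $a=[1]$ with operations $(1,2),(1,1)$ the arrays are $[1],[2],[1]$ and the pair values are $2+1+2=5$.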
|
[
"combinatorics",
"dp",
"implementation",
"math"
] | 1,500
| null |
1789
|
D
|
Serval and Shift-Shift-Shift
|
Serval has two $n$-bit binary integer numbers $a$ and $b$. He wants to share those numbers with Toxel.
Since Toxel likes the number $b$ more, Serval decides to change $a$ into $b$ by some (possibly zero) operations. In an operation, Serval can choose any \textbf{positive} integer $k$ between $1$ and $n$, and change $a$ into one of the following number:
- $a\oplus(a\ll k)$
- $a\oplus(a\gg k)$
In other words, the operation moves every bit of $a$ left or right by $k$ positions, where the overflowed bits are removed, and the missing bits are padded with $0$. The bitwise XOR of the shift result and the original $a$ is assigned back to $a$.
Serval does not have much time. He wants to perform \textbf{no more than} $n$ operations to change $a$ into $b$. Please help him to find out an operation sequence, or determine that it is impossible to change $a$ into $b$ in at most $n$ operations. You do \textbf{not need} to minimize the number of operations.
In this problem, $x\oplus y$ denotes the bitwise XOR operation of $x$ and $y$. $a\ll k$ and $a\gg k$ denote the logical left shift and logical right shift.
|
First of all, it can be proven that the answer exists if and only if $a$ and $b$ are both zero or both non-zero. If $a$ is zero, it remains zero after any operation, so it cannot become $b$ if $b$ is non-zero. If $a$ is non-zero, then $a\ll k$ is either zero or has a higher lowest set bit than $a$; in both cases $a\oplus(a\ll k)$ is non-zero. The same applies to the logical right shift. Therefore $a$ stays non-zero, and there are no possible operations if $b$ is zero. We will show that it is always possible to change $a$ into $b$ in the other cases. We denote $\text{lb}(a)$ as the lowest set bit of $a$ and $\text{hb}(a)$ as the highest set bit of $a$. If $a$ and $b$ are both zero, no operations are needed. If they are both non-zero, the construction consists of four steps. Step 1: if $\text{hb}(a)<\text{lb}(b)$, logically left shift $a$ by $\text{lb}(b)-\text{hb}(a)$ bits; then $\text{hb}(a)$ is at least $\text{lb}(b)$. Step 2: for each bit $i$ of $\text{lb}(b)-1,\text{lb}(b)-2,\dots,1$, if $a_{i}=1$, logically right shift $a$ by $\text{hb}(a)-i$ bits to erase it; after that, we have $\text{lb}(a)\ge\text{lb}(b)$. Step 3: if $\text{lb}(a)>\text{lb}(b)$, logically right shift $a$ by $\text{lb}(a)-\text{lb}(b)$ bits; now it is guaranteed that $\text{lb}(a)=\text{lb}(b)$. Step 4: for each bit $i$ of $\text{lb}(b)+1,\text{lb}(b)+2,\dots,n$, if $a_{i}\neq b_{i}$, logically left shift $a$ by $i-\text{lb}(a)$ bits to fix it; after that, we must have $a=b$. Steps 2 and 4 together require at most $n-1$ operations. We may also note that steps 1 and 3 never appear simultaneously: if step 1 is performed, then $\text{lb}(a)=\text{lb}(b)$ is already guaranteed after step 2, so step 3 is not needed in this case. In conclusion, we may use no more than $n$ operations to change $a$ into $b$ if they are both non-zero. Time Complexity: $O(n^{2})$, or $O(\frac{n^{2}}{w})$ by using std::bitset.
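A Python sketch of the four-step construction, using $n$-bit integers with bit $0$ as the lowest bit and recording each operation as `('<<', k)` or `('>>', k)` (these representation choices are mine); it asserts $a=b$ and the $\le n$ operation bound internally:

```python
def shift_ops(n, a, b):
    """Return a list of at most n operations turning a into b (n-bit values)."""
    mask = (1 << n) - 1
    assert (a == 0) == (b == 0), "solvable iff a and b are both zero or both non-zero"
    if a == 0:
        return []
    ops = []

    def left(k):   # a := a ^ ((a << k) truncated to n bits)
        nonlocal a
        a ^= (a << k) & mask
        ops.append(('<<', k))

    def right(k):  # a := a ^ (a >> k)
        nonlocal a
        a ^= a >> k
        ops.append(('>>', k))

    lb = lambda x: (x & -x).bit_length() - 1  # lowest set bit index
    hb = lambda x: x.bit_length() - 1         # highest set bit index

    if hb(a) < lb(b):                     # step 1: raise the highest bit
        left(lb(b) - hb(a))
    for i in range(lb(b) - 1, -1, -1):    # step 2: clear bits below lb(b)
        if a >> i & 1:
            right(hb(a) - i)
    if lb(a) > lb(b):                     # step 3: align the lowest bit
        right(lb(a) - lb(b))
    for i in range(lb(b) + 1, n):         # step 4: fix high bits, low to high
        if (a >> i & 1) != (b >> i & 1):
            left(i - lb(a))
    assert a == b and len(ops) <= n
    return ops
```

Each step-2 shift moves $\text{hb}(a)$ onto the offending bit without touching higher bits, and each step-4 shift moves $\text{lb}(a)$ onto the mismatched bit without touching lower bits, which is why processing the bits in the stated order terminates.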
|
[
"bitmasks",
"brute force",
"constructive algorithms",
"implementation"
] | 2,200
| null |
1789
|
E
|
Serval and Music Game
|
Serval loves playing music games. He meets a problem when playing music games, and he leaves it for you to solve.
You are given $n$ positive integers $s_1 < s_2 < \ldots < s_n$. $f(x)$ is defined as the number of $i$ ($1\leq i\leq n$) for which there exist non-negative integers $p_i, q_i$ such that:
$$s_i=p_i\left\lfloor{s_n\over x}\right\rfloor + q_i\left\lceil{s_n\over x}\right\rceil$$
Find out $\sum_{x=1}^{s_n} x\cdot f(x)$ modulo $998\,244\,353$.
As a reminder, $\lfloor x\rfloor$ denotes the maximal integer that is no greater than $x$, and $\lceil x\rceil$ denotes the minimal integer that is no less than $x$.
|
Consider the following two cases: Case 1: $x$ is not a factor of $s_n$. In this case we have $\left\lfloor{s_n\over x}\right\rfloor + 1 = \left\lceil{s_n\over x}\right\rceil$. Let $k = \left\lfloor{s_n\over x}\right\rfloor$. It can be shown that there are at most $2\sqrt{s_n}$ different values of $k$. The constraint of $s_i$ can be written in the following form: $s_i = p_i \cdot k + q_i \cdot (k+1) = (p_i+q_i)\cdot k + q_i$ For a certain $k$, such $p_i$ and $q_i$ do not exist if and only if $s_i\bmod k > \left\lfloor{s_i\over k}\right\rfloor$. To prove it, we show the contradiction that $q_i\bmod k = s_i\bmod k > \left\lfloor{s_i\over k}\right\rfloor \geq q_i$, and we can give a construction of $p_i$ and $q_i$ when $s_i\bmod k\leq \left\lfloor{s_i\over k}\right\rfloor$ that $q_i = s_i\bmod k$ and $p_i=\left\lfloor{s_i\over k}\right\rfloor - q_i$. By observation, these $s_i$ are in one of the following $k-2$ intervals: $[1, k-1], [k+2, 2k-1],\dots, [(i-1)k+i, ik-1],\dots, [(k-2)k+(k-1), (k-1)k-1]$ We can count the number of these $s_i$ by pre-calculating the prefix sums to calculate $f(x)$. This case can be solved in $O(s_n)$ time, and we will show this fact: When $k\leq \sqrt{s_n}$, there are $k - 2$ intervals that need to be considered for a certain $k$. Since $\sum_{k\leq \sqrt{s_n}} k \leq s_n$, this part can be solved in $O(s_n)$ time. When $k>\sqrt{s_n}$, notice that there are at most $\left\lceil s_n\over k\right\rceil \leq \sqrt{s_n}$ intervals that need to be considered for a certain $k$. Recall that there are at most $\sqrt{s_n}$ different values of $k$ in this part, so it can be solved in $O(s_n)$ time. Case 2: $x$ is a factor of $s_n$. In this case we have $\left\lfloor{s_n\over x}\right\rfloor = \left\lceil{s_n\over x}\right\rceil$. Let $k = {s_n\over x}$. The constraint of $s_i$ becomes: $s_i = (p_i + q_i) \cdot k$ To calculate $f(x)$, we only need to count the number of multiples of $x$. 
To do this, we can first calculate $s'_i = \gcd(s_i, s_n)$ for all $1\leq i\leq n$ in $O(n\log s_n)$ time. It is obvious that $s_i'$ is a factor of $s_n$. For a certain $x$, we can enumerate all the factors of $s_n$, find the multiples of $x$ among them, and sum up the number of times they occur in $s'$. Recall that $s_n$ has at most $2\sqrt{s_n}$ factors, so this takes $O(s_n)$ time. This case can be solved in $O(n\log s_n + s_n)$ time in total. Time complexity: $O(n\log s_n + s_n)$. $O(s_n + \sigma(s_n))$ solutions can pass all the tests, where $\sigma(n)$ denotes the sum of all the factors of $n$. A well-implemented $O(s_n\log s_n)$ solution may pass the tests, too. Bonus: Solve this problem in $O(n + s_n)$ time.
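For a single $x$, the two cases give a simple $O(n)$ counting rule. The helper below is for illustration only (the name is mine); the full solution batches all $x$ together with prefix sums and the gcd trick as described, rather than calling this per $x$:

```python
def f_of_x(s, x):
    """f(x) for sorted s with s[-1] = s_n, checked in O(n)."""
    sn = s[-1]
    k, K = sn // x, -(-sn // x)        # floor(sn/x) and ceil(sn/x)
    if k == K:                         # case 2: x divides sn
        return sum(1 for v in s if v % k == 0)
    # case 1: K = k + 1, and s_i = (p+q)*k + q is feasible
    # iff s_i mod k <= floor(s_i / k)  (take q = s_i mod k).
    return sum(1 for v in s if v % k <= v // k)
```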
|
[
"brute force",
"dp",
"implementation",
"math",
"number theory"
] | 2,500
| null |
1789
|
F
|
Serval and Brain Power
|
Serval loves Brain Power and his brain power problem.
Serval defines that a string $T$ is powerful iff $T$ can be obtained by concatenating some string $T'$ multiple times. Formally speaking, $T$ is powerful iff there exist a string $T'$ and an integer $k\geq 2$ such that $$T=\underbrace{T'+T'+\dots+T'}_{k\text{ times}}$$
For example, gogogo is powerful because it can be obtained by concatenating go three times, but power is not powerful.
Serval has a string $S$ consisting of lowercase English letters. He is curious about the longest powerful subsequence of $S$, and he only needs you to find out its length. If no non-empty subsequence of $S$ is powerful, the answer is considered to be $0$.
A string $a$ is a subsequence of a string $b$ if $a$ can be obtained from $b$ by the deletion of several (possibly, zero or all) characters.
|
Assume that the longest powerful subsequence of the given string $S$ is $T$, which can be obtained by concatenating $k$ copies of a string $T'$. Noticing that $|S|\leq 80$, we have $k\cdot |T'| \leq |S| \leq 80$, so $k$ and $|T'|$ cannot both be large. When $k < 5$, we only need to consider the $k = 2$ and $k = 3$ cases. The $k = 4$ case is covered by the $k = 2$ case, since $T = T'+T'+T'+T' = (T'+T') + (T'+T')$. For the $k = 2$ case, we split $S$ into two parts $S=S_1+S_2$, then calculate the maximal length of $\operatorname{LCS}(S_1, S_2)$ by dynamic programming over all possible splits. This case can be solved in $O(w_2\cdot|S|^3)$ time, where $w_2$ is a small constant. The $k = 3$ case is solved similarly: we split $S$ into three parts $S = S_1 + S_2 + S_3$, then calculate the maximal length of $\operatorname{LCS}(S_1, S_2, S_3)$ over all possible splits. This case can be solved in $O(w_3\cdot|S|^5)$ time, where $w_3$ is a small constant that we will estimate later. When $k \geq 5$, we have $|T'|\leq {|S|\over k}\leq {|S|\over 5}$. It can be shown that, if we split $S$ into $5$ parts, $T'$ is a subsequence of at least one of them. So we split $S$ into parts of equal length, then enumerate all subsequences of these substrings as candidates for $T'$. For each candidate $T'$, we find the corresponding $k$ by matching $T'$ against $S$ greedily. This case can be solved in $O(5\cdot 2^{|S|/5}|S|)$ time. Now let us roughly estimate how small $w_3$ can be. The time the dynamic programming consumes for certain $S_1, S_2, S_3$ is $|S_1|\cdot|S_2|\cdot|S_3|$. Since $|S_1|+|S_2|+|S_3|=|S|$, we have $|S_1|\cdot|S_2|\cdot|S_3|\leq {1\over 27}|S|^3$. Recall that there are ${|S|-1 \choose 2} \leq {1\over 2}|S|^2$ possible splits, so $w_3\leq {1\over 54}$ holds. Time complexity: $O(w_3\cdot|S|^5 + 5\cdot 2^{|S|/5}|S|)$.
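To make the $k=2$ piece concrete, here is the split-and-LCS routine on its own (a sketch, function name mine); it deliberately ignores the $k=3$ and $k\ge 5$ parts, so on gogogo it returns $4$ even though the full answer is $6$:

```python
def longest_square_subsequence(s):
    """k = 2 case only: max over all splits of 2 * LCS(s[:cut], s[cut:])."""
    n = len(s)
    best = 0
    for cut in range(1, n):
        x, y = s[:cut], s[cut:]
        # Classic LCS dynamic programming, O(|x| * |y|) per split.
        dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
        for i in range(1, len(x) + 1):
            for j in range(1, len(y) + 1):
                if x[i - 1] == y[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1] + 1
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
        best = max(best, 2 * dp[len(x)][len(y)])
    return best
```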
|
[
"bitmasks",
"brute force",
"dp",
"greedy",
"implementation",
"strings"
] | 2,700
| null |
1790
|
A
|
Polycarp and the Day of Pi
|
On March 14, the day of the number $\pi$ is celebrated all over the world. This is a very important mathematical constant equal to the ratio of the circumference of a circle to its diameter.
Polycarp was told at school that the number $\pi$ is irrational, therefore it has an infinite number of digits in decimal notation. He wanted to prepare for the Day of the number $\pi$ by memorizing this number as accurately as possible.
Polycarp wrote out all the digits that he managed to remember. For example, if Polycarp remembered $\pi$ as $3.1415$, he wrote out 31415.
Polycarp was in a hurry and could have made a mistake, so you decided to check how many first digits of the number $\pi$ Polycarp actually remembers correctly.
|
In the problem, you had to find the longest common prefix (LCP) of the first $30$ characters of the number $\pi$ and the string $n$. To do this, we go from the beginning and compare characters until we find a mismatch, or until the string $n$ ends.
|
[
"implementation",
"math",
"strings"
] | 800
|
t = int(input())
pi = '31415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679'
for _ in range(t):
n = input() + '#'
for i in range(len(n)):
if pi[i] != n[i]:
print(i)
break
|
1790
|
B
|
Taisia and Dice
|
Taisia has $n$ six-sided dice. Each face of the die is marked with a number from $1$ to $6$, each number from $1$ to $6$ is used once.
Taisia rolls all $n$ dice at the same time and gets a sequence of values $a_1, a_2, \ldots, a_n$ ($1 \le a_i \le 6$), where $a_i$ is the value on the upper face of the $i$-th dice. The sum of this sequence is equal to $s$.
Suddenly, Taisia's pet cat steals exactly \textbf{one} dice with \textbf{maximum} value $a_i$ and calculates the sum of the values on the remaining $n-1$ dice, which is equal to $r$.
You only know the number of dice $n$ and the values of $s$, $r$. Restore a possible sequence $a$ that fulfills the constraints.
|
It is easy to find the value on the die that the cat stole: it equals $mx = s - r$. All other values must be no more than $mx$. Let's try to get $r$ by taking $mx$ $\lfloor\frac{r}{mx}\rfloor$ times and adding the remainder if it is non-zero. We cannot get more than $n - 1$ dice this way, because otherwise $r > (n - 1)\cdot mx$, while we are guaranteed that an answer exists. Now, until we have $n-1$ dice, find the die with the maximum value, reduce it by $1$ and add a die with the value $1$. We can definitely reach $n - 1$ dice, because otherwise $r < n - 1$, while we are guaranteed that an answer exists. It remains only to add a die with the value $mx$ to our $n-1$ dice. We obtain a solution with $O(n^2)$ time complexity.
|
[
"greedy",
"math"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
const int MAXN = 55;
int n, s1, s2;
vector<int> res;
void solve() {
res.clear();
int d = s1 - s2;
for (; s2 >= d; s2 -= d)
res.push_back(d);
if (s2) res.push_back(s2);
for (int i = 0; i < res.size() && res.size() + 1 < n;) {
if (res[i] == 1) {
++i;
continue;
}
--res[i];
res.push_back(1);
}
res.push_back(d);
}
int main() {
int t; cin >> t;
while (t--) {
cin >> n >> s1 >> s2;
solve();
sort(res.begin(), res.end());
for (int x: res)
cout << x << ' ';
cout << endl;
}
}
|
1790
|
C
|
Premutation
|
A sequence of $n$ numbers is called permutation if it contains all integers from $1$ to $n$ exactly once. For example, the sequences [$3, 1, 4, 2$], [$1$] and [$2,1$] are permutations, but [$1,2,1$], [$0,1$] and [$1,3,4$] — are not.
Kristina had a permutation $p$ of $n$ elements. She wrote it on the whiteboard $n$ times in such a way that:
- while writing the permutation at the $i$-th ($1 \le i \le n)$ time she skipped the element $p_i$
So, she wrote in total $n$ sequences of length $n-1$ each.For example, suppose Kristina had a permutation $p$ = $[4,2,1,3]$ of length $4$. Then she did the following:
- Wrote the sequence $[2, 1, 3]$, skipping the element $p_1=4$ from the original permutation.
- Wrote the sequence $[4, 1, 3]$, skipping the element $p_2=2$ from the original permutation.
- Wrote the sequence $[4, 2, 3]$, skipping the element $p_3=1$ from the original permutation.
- Wrote the sequence $[4, 2, 1]$, skipping the element $p_4=3$ from the original permutation.
You know all $n$ of sequences that have been written on the whiteboard, but you do not know the order in which they were written. They are given in \textbf{arbitrary order}. Reconstruct the original permutation from them.
For example, if you know the sequences $[4, 2, 1]$, $[4, 2, 3]$, $[2, 1, 3]$, $[4, 1, 3]$, then the original permutation will be $p$ = $[4, 2, 1, 3]$.
|
When Kristina writes sequences on the whiteboard, she removes the element at each index exactly once. Thus, the first element of the permutation is deleted only once, on the first step, so all sequences except one start with it. To solve the problem, find the sequence $i$ that starts with some element $y$ while all other sequences begin with the same element $x$. Then this sequence describes the numbers remaining after removing the first element, and the first element itself equals $x$. The initial permutation is [$x, p_{i,1}, p_{i,2}, \dots, p_{i,n-1}$].
|
[
"brute force",
"implementation",
"math"
] | 1,000
|
#include "bits/stdc++.h"
using namespace std;
int n;
void solve(){
cin >> n;
vector<vector<int>>perm(n, vector<int>(n - 1));
vector<int>p(n, 0);
vector<int>cnt(n + 1, 0);
for(int i = 0; i < n; i++){
p[i] = i + 1;
for(int j = 0; j < n - 1; j++){
cin >> perm[i][j];
if(j == 0) cnt[perm[i][j]]++;
}
}
for(int i = 1; i <= n; i++){
if(cnt[i] == n - 1){
p[0] = i;
break;
}
}
for(int i = 0; i < n; i++){
if(perm[i][0] != p[0]){
for(int j = 0; j < n - 1; j++){
p[j + 1] = perm[i][j];
}
}
}
for(int i = 0; i < n; i++) cout << p[i] << ' ';
cout << endl;
}
int main(){
int t;
cin >> t;
while(t--){
solve();
}
return 0;
}
|
1790
|
D
|
Matryoshkas
|
Matryoshka is a wooden toy in the form of a painted doll, inside which you can put a similar doll of a smaller size.
A set of nesting dolls contains one or more nesting dolls, their sizes are consecutive positive integers. Thus, a set of nesting dolls is described by two numbers: $s$ — the size of a smallest nesting doll in a set and $m$ — the number of dolls in a set. In other words, the set contains sizes of $s, s + 1, \dots, s + m - 1$ for some integer $s$ and $m$ ($s,m > 0$).
You had one or more sets of nesting dolls. Recently, you found that someone mixed all your sets in one and recorded a sequence of doll sizes — integers $a_1, a_2, \dots, a_n$.
You do not remember how many sets you had, so you want to find the \textbf{minimum} number of sets that you could initially have.
For example, if a given sequence is $a=[2, 2, 3, 4, 3, 1]$. Initially, there could be $2$ sets:
- the first set consisting of $4$ nesting dolls with sizes $[1, 2, 3, 4]$;
- a second set consisting of $2$ nesting dolls with sizes $[2, 3]$.
According to a given sequence of sizes of nesting dolls $a_1, a_2, \dots, a_n$, determine the minimum number of nesting dolls that can make this sequence.
Each set is completely used, so all its nesting dolls are used. Each element of a given sequence must correspond to exactly one doll from some set.
|
First, for each size $s$, count $cnt_s$, the number of dolls of this size. Then create a set in which, for each doll of size $s$, we add the numbers $s$ and $s + 1$. This lets us process all the segments, as well as the sizes adjacent to them. We iterate over the set in ascending order of size. Let $x$ be the number of matryoshkas of the current size, and $y$ of the previous one considered ($0$ at the beginning). If the numbers differ, we need to close (if $x < y$) or open (if $x > y$) $|x - y|$ segments. It is enough to add only the openings of segments to the answer.
|
[
"data structures",
"greedy",
"sortings"
] | 1,200
|
#include <iostream>
#include <vector>
#include <queue>
#include <map>
#include <set>
using namespace std;
void solve() {
int n;
cin >> n;
vector<int> a(n);
map<int, int> cnt;
set<int> b;
for (int i = 0; i < n; ++i) {
cin >> a[i];
cnt[a[i]]++;
b.insert(a[i]);
b.insert(a[i] + 1);
}
int last = 0;
int res = 0;
for (auto x: b) {
int c = cnt[x];
res += max(0, c - last);
last = c;
}
cout << res << '\n';
}
int main(int argc, char* argv[]) {
int t;
cin >> t;
for (int i = 0; i < t; ++i) {
solve();
}
}
|
1790
|
E
|
Vlad and a Pair of Numbers
|
Vlad found two positive numbers $a$ and $b$ ($a,b>0$). He discovered that $a \oplus b = \frac{a + b}{2}$, where $\oplus$ means the bitwise exclusive OR, and division is performed without rounding.
Since it is easier to remember one number than two, Vlad remembered only $a\oplus b$, let's denote this number as $x$. Help him find any suitable $a$ and $b$ or tell him that they do not exist.
|
Consider the answer by bits. We know that if the $i$-th bit of the number $x$ is zero, then these bits are the same in $a$ and $b$, otherwise they differ. So let's first make $a = x$, $b = 0$. Note that $a\oplus b$ already equals $x$, but $\frac{a+b}{2}$ does not yet. So we need to add another $x$ in total using matching bits: we add them to both integers greedily, going from the highest bit to the lowest, skipping bits that are already set in $a$. If after this algorithm $a$ and $b$ do not satisfy the conditions, the answer is $-1$. In total, this solution works in $\mathcal{O}(\log{x})$. You can see that in the solution above we actually just added $\frac{x}{2}$ to both numbers, which can only be done with one particular set of bits, so if the answer exists, the pair $a= \frac{3\cdot x}{2}$, $b = \frac{x}{2}$ fits.
|
[
"bitmasks",
"constructive algorithms"
] | 1,400
|
t = int(input())
for _ in range(t):
x = int(input())
a = x
b = 0
for i in range(32, -1, -1):
if x & (1 << i) > 0:
continue
if 2 * x - a - b >= (2 << i):
a += 1 << i
b += 1 << i
if a + b == 2 * x and a ^ b == x:
print(a, b)
else:
print(-1)
|
1790
|
F
|
Timofey and Black-White Tree
|
Timofey came to a famous summer school and found a tree on $n$ vertices. A tree is a connected undirected graph without cycles.
Every vertex of this tree, except $c_0$, is colored \textbf{white}. The vertex $c_0$ is colored \textbf{black}.
Timofey wants to color all the vertices of this tree in \textbf{black}. To do this, he performs $n - 1$ operations. During the $i$-th operation, he selects the vertex $c_i$, which is currently \textbf{white}, and paints it \textbf{black}.
Let's call the positivity of a tree the minimum distance over all pairs of distinct \textbf{black} vertices in it. The distance between the vertices $v$ and $u$ is the number of edges on the path from $v$ to $u$.
After each operation, Timofey wants to know the positivity of the current tree.
|
Let's store for each vertex the minimum distance from it to the nearest black vertex; call it $d[v]$. We also store the global answer, which for obvious reasons does not increase; call it $ans$. Now, to color the vertex $c_i$, we set $d[c_i] = 0$ and run a depth-first search from it. This DFS visits only vertices $v$ with $d[v]<ans$. Consider a vertex $v$ and its neighbour $u$. If we can relax $d[u]$ through $d[v]+1$, we do so and recurse into $u$. Otherwise, $u$ has a closer black vertex, and we try to update the answer through it: $ans = \min(ans, d[v]+1+d[u])$. Also, do not forget to update the answer via $d[v]$ for every black vertex $v$ that we visit. The correctness of the algorithm is obvious; let's evaluate its time complexity. It is easy to notice that after the first $\lceil \sqrt n\rceil$ operations, $ans \le\lceil \sqrt n\rceil$. We enter only vertices whose $d$, firstly, does not exceed $ans$, and secondly, was relaxed by the parent (that is, decreased by at least $1$). So, we allow ourselves a full traversal of the tree during the first $\lceil\sqrt n\rceil$ operations, and afterwards perform amortized no more than $n\lceil\sqrt n\rceil$ operations. The final complexity is $O(n\sqrt n)$.
|
[
"brute force",
"dfs and similar",
"divide and conquer",
"graphs",
"greedy",
"math",
"shortest paths",
"trees"
] | 2,100
|
#include <bits/stdc++.h>
using namespace std;
const int MAXN = 200200;
const int INF = 1e9;
int n, ANS = INF;
int crr[MAXN], dist[MAXN], res[MAXN];
bool clr[MAXN];
vector<int> gr[MAXN];
void init() {
ANS = INF;
for (int v = 0; v < n; ++v)
gr[v].clear();
fill(dist, dist + n, INF);
memset(clr, 0, n);
}
void dfs(int v, int p) {
if (dist[v] >= ANS) return;
if (clr[v]) ANS = min(ANS, dist[v]);
for (int u: gr[v]) {
if (u == p) continue;
if (dist[v] + 1 < dist[u]) {
dist[u] = dist[v] + 1;
dfs(u, v);
} else ANS = min(ANS, dist[v] + 1 + dist[u]);
}
}
void solve() {
dist[*crr] = 0;
dfs(*crr, -1);
clr[*crr] = true;
for (int i = 1; i < n; ++i) {
dist[crr[i]] = 0;
dfs(crr[i], -1);
clr[crr[i]] = true;
res[i] = ANS;
}
}
int main() {
int gorilla; cin >> gorilla;
while (gorilla--) {
cin >> n >> *crr, --(*crr);
init();
for (int i = 1; i < n; ++i)
cin >> crr[i], --crr[i];
for (int i = 1; i < n; ++i) {
int v, u; cin >> v >> u, --v, --u;
gr[v].push_back(u);
gr[u].push_back(v);
}
solve();
for (int i = 1; i < n; ++i)
cout << res[i] << ' ';
cout << '\n';
}
}
|
1790
|
G
|
Tokens on Graph
|
You are given an undirected connected graph, some vertices of which contain tokens and/or bonuses. Consider a game involving one player — you.
You can move tokens according to the following rules:
- At the beginning of the game, you can make exactly one turn: move any token to any adjacent vertex.
- If the movement of the token ended on the bonus, then you are allowed to make another turn with any \textbf{other} token.
You can use different bonuses in any order. The same bonus can be used an unlimited number of times. Bonuses do not move during the game.
There can be several tokens in one vertex at the same time, but initially there is no more than one token in each vertex.
The vertex with number $1$ is the finish vertex, and your task is to determine whether it is possible to reach it with any token by making turns with the tokens according to the rules described above. If a token is initially located at vertex $1$, the game is considered already won.
\begin{center}
{\small The finish line is in black, the bonuses are in red, the chips are in grey.}
\end{center}
For example, for a given graph, you can reach the finish line with a chip from the $8$th vertex by making the following sequence of turns:
- Move from the $8$-th vertex to the $6$-th.
- Move from the $7$-th vertex to the $5$-th.
- Move from the $6$-th vertex to the $4$-th.
- Move from the $5$-th vertex to the $6$-th.
- Move from the $4$-th vertex to the $2$-nd.
- Move from the $6$-th vertex to the $4$-th.
- Move from the $2$-nd vertex to the $1$-st vertex, which is the finish.
|
Let's calculate the shortest paths to the finish along vertices containing bonuses. We will try to reach the finish with the token that is closest to it; mark that token. If there is none, we lose. The other tokens will give it extra moves. Find all connected components formed by vertices containing bonuses. Then, for each component, find all unmarked tokens located at a vertex of this component or at a vertex adjacent to at least one vertex of this component. Consider the size of the component. If it is equal to $1$, then each such token gives $1$ additional move. Otherwise, a token located at a vertex of the component or at a neighbouring vertex, together with the marked token, can move back and forth indefinitely, which gives us a victory. If no component yields a victory, count the total number of extra moves and compare it with the shortest distance to the finish.
|
[
"constructive algorithms",
"dfs and similar",
"graphs",
"shortest paths"
] | 2,300
|
#include<bits/stdc++.h>
using namespace std;
void solve()
{
int n;
cin >> n;
vector<int> token(n), boni(n);
vector<vector<int>> g(n);
vector<int> good(n);
int m;
cin >> m;
int p, b;
cin >> p >> b;
for(int i = 0; i < p; i++)
{
int x;
cin >> x;
--x;
token[x] = 1;
}
for(int i = 0; i < b; i++)
{
int x;
cin >> x;
--x;
boni[x] = 1;
}
for(int i = 0; i < m; i++)
{
int x, y;
cin >> x >> y;
--x;
--y;
g[x].push_back(y);
g[y].push_back(x);
}
for(int i = 0; i < n; i++)
for(auto x : g[i])
if(boni[i] && boni[x]) good[i] = 1;
set<int> good_tokens;
set<int> not_so_good_tokens;
for(int i = 0; i < n; i++)
for(auto x : g[i])
{
if(token[i] && good[x]) good_tokens.insert(i);
else if(token[i] && boni[x]) not_so_good_tokens.insert(i);
}
vector<int> d(n, int(1e9));
queue<int> q;
d[0] = 0;
q.push(0);
while(!q.empty())
{
int k = q.front();
q.pop();
for(auto x : g[k])
{
if(d[x] > d[k] + 1)
{
d[x] = d[k] + 1;
if(boni[x]) q.push(x);
}
}
}
bool has_ans = false;
for(int i = 0; i < n; i++)
{
if(!token[i] || d[i] > n) continue;
has_ans |= (!good_tokens.empty() && (*good_tokens.begin() != i || *good_tokens.rbegin() != i));
int cnt = not_so_good_tokens.size();
if(not_so_good_tokens.count(i)) cnt--;
has_ans |= d[i] <= 1 + cnt;
}
cout << (has_ans ? "YES" : "NO") << endl;
}
int main()
{
ios_base::sync_with_stdio(0);
cin.tie(0);
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
#endif
int tc = 1;
cin >> tc;
for(int i = 0; i < tc; i++)
{
solve();
}
}
|
1791
|
A
|
Codeforces Checking
|
Given a lowercase Latin character (letter), check if it appears in the string $\texttt{codeforces}$.
|
You need to implement what is written in the statement. You can either use an if-statement for each of the characters $\{\texttt{c}, \texttt{o}, \texttt{d}, \texttt{e}, \texttt{f}, \texttt{r}, \texttt{s}\}$, or you can iterate through the string $\texttt{codeforces}$ and check if the current character equals $c$.
|
[
"implementation",
"strings"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
const string s = "codeforces";
void solve() {
char c;
cin >> c;
for (char i : s) {
if (i == c) {cout << "YES\n"; return;}
}
cout << "NO\n";
}
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
int tt; cin >> tt; for (int i = 1; i <= tt; i++) {solve();}
// solve();
}
|
1791
|
B
|
Following Directions
|
Alperen is standing at the point $(0,0)$. He is given a string $s$ of length $n$ and performs $n$ moves. The $i$-th move is as follows:
- if $s_i = L$, then move one unit left;
- if $s_i = R$, then move one unit right;
- if $s_i = U$, then move one unit up;
- if $s_i = D$, then move one unit down.
\begin{center}
{\small If Alperen starts at the center point, he can make the four moves shown.}
\end{center}
There is a candy at $(1,1)$ (that is, one unit above and one unit to the right of Alperen's starting point). You need to determine if Alperen ever passes the candy. \begin{center}
{\small Alperen's path in the first test case.}
\end{center}
|
We can keep track of our current point $(x,y)$ as we iterate over the string: if $s_i = \texttt{L}$, then decrement $x$ (set $x \leftarrow x-1$); if $s_i = \texttt{R}$, then increment $x$ (set $x \leftarrow x+1$); if $s_i = \texttt{U}$, then increment $y$ (set $y \leftarrow y+1$); if $s_i = \texttt{D}$, then decrement $y$ (set $y \leftarrow y-1$). The time complexity is $\mathcal{O}(n)$.
|
[
"geometry",
"implementation"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
const int MAX = 200007;
const int MOD = 1000000007;
void solve() {
int n;
cin >> n;
string s;
cin >> s;
int x = 0, y = 0;
for (int i = 0; i < n; i++) {
if (s[i] == 'L') {x--;}
if (s[i] == 'R') {x++;}
if (s[i] == 'D') {y--;}
if (s[i] == 'U') {y++;}
if (x == 1 && y == 1) {cout << "YES\n"; return;}
}
cout << "NO\n";
}
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
int tt; cin >> tt; for (int i = 1; i <= tt; i++) {solve();}
// solve();
}
|
1791
|
C
|
Prepend and Append
|
Timur initially had a binary string$^{\dagger}$ $s$ (possibly of length $0$). He performed the following operation several (possibly zero) times:
- Add $0$ to one end of the string and $1$ to the other end of the string. For example, starting from the string $1011$, you can obtain either $\textcolor{red}{0}1011\textcolor{red}{1}$ or $\textcolor{red}{1}1011\textcolor{red}{0}$.
You are given Timur's final string. What is the length of the \textbf{shortest} possible string he could have started with?$^{\dagger}$ A binary string is a string (possibly the empty string) whose characters are either $0$ or $1$.
|
Let's perform the process in reverse: we will remove the first and last character of the string, if these two characters are different. We should do this as long as possible, since we need to find the shortest initial string. So the algorithm is straightforward: keep track of the left and right characters, and if they are different, remove both. Otherwise, output the length of the current string (or output $0$ if the string became empty). There are a few ways to implement this. For example, you can keep two pointers, one at the beginning of the string and one at the end, say, $l=1$ and $r=n$, and check if $s_l \neq s_r$. If so, we increment $l$ and decrement $r$. Otherwise, we output $r-l+1$. We stop when $l \geq r$. Alternatively, you can use a deque to simulate the operations directly. The time complexity is $\mathcal{O}(n)$.
|
[
"implementation",
"two pointers"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
const int MAX = 200007;
const int MOD = 1000000007;
void solve() {
int n;
cin >> n;
string s;
cin >> s;
int l = 0, r = n - 1, ans = n;
while (s[l] != s[r] && ans > 0) {l++; r--; ans -= 2;}
cout << ans << '\n';
}
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
int tt; cin >> tt; for (int i = 1; i <= tt; i++) {solve();}
// solve();
}
|
1791
|
D
|
Distinct Split
|
Let's denote the $f(x)$ function for a string $x$ as the number of distinct characters that the string contains. For example $f(abc) = 3$, $f(bbbbb) = 1$, and $f(babacaba) = 3$.
Given a string $s$, split it into two non-empty strings $a$ and $b$ such that $f(a) + f(b)$ is the maximum possible. In other words, find the maximum possible value of $f(a) + f(b)$ such that $a + b = s$ (the concatenation of string $a$ and string $b$ is equal to string $s$).
|
Let's check all splitting points $i$ ($1 \leq i \leq n - 1$). We denote a splitting point as the last index of the first string we take (all the remaining characters go to the second string). We need to keep a dynamic count of the number of distinct characters in both strings $a$ (the first string) and $b$ (the second string). We can do this using two frequency arrays (incrementing the distinct count of string $a$ or $b$ whenever a character's frequency in it changes from zero to positive, and decrementing it whenever the frequency drops back to zero).
|
[
"brute force",
"greedy",
"strings"
] | 1,000
|
#include "bits/stdc++.h"
using namespace std;
#define ll long long
#define all(v) v.begin(), v.end()
#define rall(v) v.rbegin(),v.rend()
#define pb push_back
#define sz(a) (int)a.size()
void solve() {
int n; string s; cin >> n >> s;
vector<int> cnt(26, 0), p(26, 0);
for(auto x: s) cnt[x - 'a']++;
int ans = 0;
for(auto x: s) {
--cnt[x - 'a'];
++p[x - 'a'];
int cur = 0;
for(int i = 0; i < 26; ++i) {
cur += min(1, cnt[i]) + min(1, p[i]);
}
ans = max(ans, cur);
}
cout << ans << "\n";
}
int32_t main() {
ios_base::sync_with_stdio(0);cin.tie(0);cout.tie(0);
int t = 1;
cin >> t;
while(t--) {
solve();
}
}
|
1791
|
E
|
Negatives and Positives
|
Given an array $a$ consisting of $n$ elements, find the maximum possible sum the array can have after performing the following operation \textbf{any number of times}:
- Choose $2$ \textbf{adjacent} elements and flip both of their signs. In other words choose an index $i$ such that $1 \leq i \leq n - 1$ and assign $a_i = -a_i$ and $a_{i+1} = -a_{i+1}$.
|
We can notice that by performing any number of operations, the parity of the count of negative numbers won't ever change. Thus, if the number of negative numbers is initially even, we can make it equal to $0$ by performing some operations. So, for an even count of negative numbers, the answer is the sum of the absolute values of all numbers (since we can make all of them positive). And if the count of negative numbers is odd, we must have one negative number at the end. We will choose the one smallest by absolute value and keep the rest positive (for simplicity, we consider $-0$ as a negative number).
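The parity argument can be sanity-checked on small arrays (an illustrative sketch, not the author's solution; `max_sum_after_flips` implements the formula above, and `brute_force` explores every sign pattern reachable through adjacent flips):

```python
def max_sum_after_flips(a):
    # Sum of absolute values; if the count of negatives is odd,
    # one element must stay negative - pick the smallest |x|.
    total = sum(abs(x) for x in a)
    negatives = sum(1 for x in a if x < 0)
    if negatives % 2 == 1:
        total -= 2 * min(abs(x) for x in a)
    return total

def brute_force(a):
    # Explore all states reachable by flipping adjacent pairs of signs.
    seen = {tuple(a)}
    stack = [tuple(a)]
    best = sum(a)
    while stack:
        cur = stack.pop()
        best = max(best, sum(cur))
        for i in range(len(cur) - 1):
            nxt = list(cur)
            nxt[i], nxt[i + 1] = -nxt[i], -nxt[i + 1]
            t = tuple(nxt)
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return best
```

Note that a zero in the array makes `min(abs(x))` zero, which matches the editorial's remark about treating $-0$ as a negative number.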
|
[
"dp",
"greedy",
"sortings"
] | 1,100
|
#include <bits/stdc++.h>
using namespace std;
int main() {
int t; cin >> t;
while(t--) {
int n; cin >> n;
vector<int> a(n);
long long sum = 0;
int negs = 0;
for(int i = 0; i < n; ++i) {
cin >> a[i];
if(a[i] < 0) {
++negs;
a[i] = -a[i];
}
sum += a[i];
}
sort(a.begin(), a.end());
if(negs & 1) sum -= 2 * a[0];
cout << sum << "\n";
}
}
|
1791
|
F
|
Range Update Point Query
|
Given an array $a_1, a_2, \dots, a_n$, you need to handle a total of $q$ updates and queries of two types:
- $1$ $l$ $r$ — for each index $i$ with $l \leq i \leq r$, update the value of $a_i$ to the sum of the digits of $a_i$.
- $2$ $x$ — output $a_x$.
|
Let $S(n)$ denote the sum of the digits of $n$. The key observation is the following: after the operation is applied to index $i$ three times, further operations won't change $a_i$. The proof$^{\dagger}$ is provided at the bottom of the editorial. So we only need to update $a_i$ if it has been updated at most $2$ times so far; otherwise, we can ignore it. This allows the following solution: store the current "active" indices (that is, indices that have been updated $\leq 2$ times) in a sorted list (for example, set in C++). Then:
- $1$ $l$ $r$ — search for the smallest active index that is at least $l$ (since the list is sorted, we can do it in $\mathcal{O}(\log n)$). Update that index (replace $a_i$ with $S(a_i)$), remove it from the list if it is no longer active, and binary search for the next smallest active index, repeating until we pass $r$.
- $2$ $x$ — just output $a_x$.
Since each index is updated at most three times and each update query performs at most one unsuccessful binary search, the time complexity is amortized $\mathcal{O}((n + q) \log n)$. $^{\dagger}$ To show this, note that initially $1 \leq a_i \leq 10^9$. The maximum possible value of the sum of the digits of $a_i$ is $81$, achieved when $a_i = 999{,}999{,}999$. So $1 \leq S(a_i) \leq 81$. Now considering the numbers from $1$ to $81$, the one with the maximum sum of digits is $79$, with $S(79)=16$. Hence $1 \leq S(S(a_i)) \leq 16$. Finally, considering the numbers from $1$ to $16$, the one with the maximum sum of digits is $9$, so $1 \leq S(S(S(a_i))) \leq 9$. That is, after three operations, $a_i$ becomes a single digit, and any further operations won't change it.
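The convergence claim in the proof is easy to check numerically (a small illustrative sketch; `digit_sum` is an assumed helper name):

```python
def digit_sum(n):
    # S(n): sum of the decimal digits of n
    return sum(int(c) for c in str(n))

# After three applications, every 1 <= a_i <= 10^9 collapses to a single
# digit, and a single digit is a fixed point of S.
for x in (1, 9, 79, 123456789, 999999999, 10**9):
    y = digit_sum(digit_sum(digit_sum(x)))
    assert 1 <= y <= 9 and digit_sum(y) == y
```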
|
[
"binary search",
"brute force",
"data structures"
] | 1,500
|
#include <bits/stdc++.h>
using namespace std;
int digit_sum(int n) {
int ret = 0;
while(n) {
ret += n % 10;
n /= 10;
}
return ret;
}
void solve() {
int n, q; cin >> n >> q;
vector<int> a(n);
set<int> s;
for(int i = 0; i < n; ++i) {
cin >> a[i];
if(a[i] > 9) s.insert(i);
}
while(q--) {
int type; cin >> type;
if(type == 1) {
int l, r; cin >> l >> r; --l, --r;
int lst = l;
while(!s.empty()) {
auto it = s.lower_bound(lst);
if(it == s.end() || *it > r) break;
a[*it] = digit_sum(a[*it]);
int paiu = *it;
s.erase(it);
if(a[paiu] > 9) s.insert(paiu);
lst = paiu + 1;
}
} else {
int x; cin >> x; --x;
cout << a[x] << "\n";
}
}
}
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
cout.tie(0);
int t; cin >> t;
while(t--) {
solve();
}
}
|
1791
|
G1
|
Teleporters (Easy Version)
|
\textbf{The only difference between the easy and hard versions are the locations you can teleport to.}
Consider the points $0, 1, \dots, n$ on the number line. There is a teleporter located on each of the points $1, 2, \dots, n$. At point $i$, you can do the following:
- Move left one unit: it costs $1$ coin.
- Move right one unit: it costs $1$ coin.
- Use a teleporter at point $i$, if it exists: it costs $a_i$ coins. As a result, you teleport to point $0$. Once you use a teleporter, you \textbf{can't} use it again.
You have $c$ coins, and you start at point $0$. What's the most number of teleporters you can use?
|
It's easy to see that it's optimal to only move right and to use a portal once we are at it. We can notice that after we teleport back, the problem is independent of the previous choices: we are still at point $0$ and have some portals left. Thus, we can just find the individual cost of each portal, sort the portals by cost, and take them from smallest to largest as long as we can afford them. The cost of portal $i$ is $i + a_i$ (since we pay $a_i$ to use it and need $i$ moves to get to it).
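The greedy fits in a few lines (an illustrative sketch rather than the reference solution; `max_teleporters` is an assumed name, and `a` is taken 0-indexed, so the teleporter at point $i+1$ costs `a[i] + (i + 1)`):

```python
def max_teleporters(c, a):
    # Total cost of portal at point i+1: walk there (i+1 coins) plus its fee.
    costs = sorted(a[i] + (i + 1) for i in range(len(a)))
    used = 0
    for cost in costs:
        if cost > c:
            break
        c -= cost
        used += 1
    return used
```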
|
[
"greedy",
"sortings"
] | 1,100
|
#include "bits/stdc++.h"
using namespace std;
#define ll long long
#define all(v) v.begin(), v.end()
#define rall(v) v.rbegin(),v.rend()
#define pb push_back
#define sz(a) (int)a.size()
void solve() {
int n, c, ans = 0; cin >> n >> c;
priority_queue<int> q;
for(int i = 1, x; i <= n; ++i) {
cin >> x;
q.push(-x - i);
}
while(!q.empty()) {
int x = -q.top(); q.pop();
if(x > c) break;
++ans;
c -= x;
}
cout << ans << "\n";
}
int32_t main() {
ios_base::sync_with_stdio(0);cin.tie(0);cout.tie(0);
int t = 1;
cin >> t;
while(t--) {
solve();
}
}
|
1791
|
G2
|
Teleporters (Hard Version)
|
\textbf{The only difference between the easy and hard versions are the locations you can teleport to.}
Consider the points $0,1,\dots,n+1$ on the number line. There is a teleporter located on each of the points $1,2,\dots,n$. At point $i$, you can do the following:
- Move left one unit: it costs $1$ coin.
- Move right one unit: it costs $1$ coin.
- Use a teleporter at point $i$, if it exists: it costs $a_i$ coins. As a result, you can choose whether to teleport to point $0$ or point $n+1$. Once you use a teleporter, you \textbf{can't} use it again.
You have $c$ coins, and you start at point $0$. What's the most number of teleporters you can use?
|
Please also refer to the tutorial for the easy version. For every portal except the first one we take, the problem is still independent for each portal, but this time the cost of a portal is $\min(a_i + i, a_i + n + 1 - i)$ (since we can come to a portal either from point $0$ or from point $n+1$). So, we again sort the portals by their costs. But this time, we need to make sure that the first taken portal is reached from point $0$, so we will iterate over all portals and check the maximum number of portals we can take if we use it as the first one. We can check this using prefix sums over the minimum cost array and binary searching, checking that the total cost of the considered portals doesn't exceed the number of coins we initially have (we also have to deal with the case when the portal we are considering is counted twice: once as the initial portal and once in the minimum cost prefix).
|
[
"binary search",
"greedy",
"sortings"
] | 1,900
|
#include <bits/stdc++.h>
#define startt ios_base::sync_with_stdio(false);cin.tie(0);
typedef long long ll;
using namespace std;
#define vint vector<int>
#define all(v) v.begin(), v.end()
#define MOD 1000000007
#define MOD2 998244353
#define MX 1000000000
#define MXL 1000000000000000000
#define PI (ld)2*acos(0.0)
#define pb push_back
#define sc second
#define fr first
#define int long long
#define endl '\n'
#define ld long double
#define NO cout << "NO" << endl
#define YES cout << "YES" << endl
int ceildiv(int one, int two) {if (one % two == 0) {return one / two;}else {return one / two + 1;}} int power(int n, int pow, int m) {if (pow == 0) return 1;if (pow % 2 == 0) {ll x = power(n, pow / 2, m);return (x * x) % m;}else return (power(n, pow - 1, m) * n) % m;} int gcd(int a, int b) { if (!b)return a; return gcd(b, a % b);} int factorial(int n, int mod) {if (n > 1)return (n * factorial(n - 1, mod)) % mod; else return 1;} int lcm(int a, int b) {return (a * b) / gcd(a, b);} vector<int> read(int n) {vector<int> a; for (int i = 0; i < n; i++) { int x; cin >> x; a.pb(x);} return a;}struct prefix_sum{vint pref;void build(vint a){pref.pb(0);for(int i = 0; i < a.size(); i++){pref.pb(pref.back()+a[i]);}}int get(int l, int r){return pref[r]-pref[l-1];}};//mesanu
void solve()
{
int n, c;
cin >> n >> c;
vector<pair<int, int>> a;
for(int i = 0; i < n; i++)
{
int x;
cin >> x;
a.pb({x+min(i+1, n-i), x+i+1});
}
sort(all(a));
vector<int> pref;
pref.pb(0);
for(int i = 0; i < n; i++)
{
pref.pb(pref.back()+a[i].fr);
}
int ans = 0;
for(int i = 0; i < n; i++)
{
int new_c = c-a[i].sc;
int l = 0, r = n;
int mx = 0;
while(l <= r)
{
int mid = l+r>>1;
// Calculate price
int price = pref[mid];
int now = mid+1;
if(mid > i)
{
price-=a[i].fr;
now--;
}
if(price <= new_c)
{
mx = max(now, mx);
l = mid+1;
}
else
{
r = mid-1;
}
}
ans = max(ans, mx);
}
cout << ans << endl;
}
int32_t main(){
startt
int t = 1;
cin >> t;
while (t--) {
solve();
}
}
|
1792
|
A
|
GamingForces
|
Monocarp is playing a computer game. He's going to kill $n$ monsters, the $i$-th of them has $h_i$ health.
Monocarp's character has two spells, either of which he can cast an arbitrary number of times (possibly, zero) and in an arbitrary order:
- choose exactly two alive monsters and decrease their health by $1$;
- choose a single monster and kill it.
When a monster's health becomes $0$, it dies.
What's the minimum number of spell casts Monocarp should perform in order to kill all monsters?
|
The first spell looks pretty weak compared to the second spell. Feels like you almost always replace one with another. Let's show that you can totally avoid casting the spell of the first type twice or more on one monster. Let the two first spell casts be $(i, j)$ and $(i, k)$ for some monsters $i, j$ and $k$. You can replace them by a cast of the second spell on $i$ and a cast of the first spell on $(j, k)$. That would deal even more damage to $i$ and the same amount to $j$ and $k$. The number of casts doesn't change. Thus, it only makes sense to use the first spell on monsters with $1$ health. Calculate the number of them, kill the full pairs of them with the first spell, and use the second spell on the remaining monsters. Overall complexity: $O(n)$ per testcase.
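The exchange argument can be verified exhaustively on tiny inputs (a hedged sketch, not the author's code: `min_casts_brute` tries every sequence of casts via memoized search, and `min_casts_formula` is the editorial's $n - \lfloor cnt_1 / 2 \rfloor$):

```python
from functools import lru_cache

def min_casts_brute(healths):
    @lru_cache(maxsize=None)
    def go(state):  # state: sorted tuple of alive monsters' health
        if not state:
            return 0
        # Spell 2: kill any single monster.
        best = min(1 + go(state[:i] + state[i + 1:]) for i in range(len(state)))
        # Spell 1: damage two alive monsters; monsters at 0 health die.
        for i in range(len(state)):
            for j in range(i + 1, len(state)):
                nxt = list(state)
                nxt[i] -= 1
                nxt[j] -= 1
                nxt = tuple(sorted(x for x in nxt if x > 0))
                best = min(best, 1 + go(nxt))
        return best
    return go(tuple(sorted(healths)))

def min_casts_formula(healths):
    # One cast per monster, minus one saved cast per pair of 1-health monsters.
    ones = sum(1 for h in healths if h == 1)
    return len(healths) - ones // 2
```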
|
[
"greedy",
"sortings"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
int main() {
int t;
cin >> t;
while (t--) {
int n;
cin >> n;
int cnt1 = 0;
for (int i = 0; i < n; ++i) {
int x;
cin >> x;
cnt1 += (x == 1);
}
cout << n - cnt1 / 2 << '\n';
}
}
|
1792
|
B
|
Stand-up Comedian
|
Eve is a beginner stand-up comedian. Her first show gathered a grand total of two spectators: Alice and Bob.
Eve prepared $a_1 + a_2 + a_3 + a_4$ jokes to tell, grouped by their type:
- type 1: both Alice and Bob like them;
- type 2: Alice likes them, but Bob doesn't;
- type 3: Bob likes them, but Alice doesn't;
- type 4: neither Alice nor Bob likes them.
Initially, both spectators have their mood equal to $0$. When a spectator hears a joke he/she likes, his/her mood increases by $1$. When a spectator hears a joke he/she doesn't like, his/her mood decreases by $1$. If the mood of a spectator becomes negative (strictly below zero), he/she leaves.
When someone leaves, Eve gets sad and ends the show. If no one leaves, and Eve is out of jokes, she also ends the show.
Thus, Eve wants to arrange her jokes in such a way that the show lasts as long as possible. Help her to calculate the maximum number of jokes she can tell before the show ends.
|
First, let Eve tell the jokes of the first type - they will never do any harm. At the same time, let her tell the jokes of the fourth type at the very end - they will never do any good. Types two and three are kind of opposites of each other. If you tell one joke of each of them one after another, then the moods of both spectators don't change. Let's use that to our advantage. Tell the jokes of these types in pairs until one of the types runs out. There's a little corner case here, though. If there were no jokes of the first type, then you can't use a single pair, because one of the spectators leaves after the first joke. Finally, tell the leftover jokes of whichever of the two types remains, before the jokes of the fourth type. So the construction looks like $1, 1, \dots, 1, 2, 3, 2, 3, \dots, 2, 3, 2, 2, 2, \dots, 2, 4, 4, 4, \dots, 4$ with $2$ and $3$ possibly swapped with each other. Let's recover the answer from that construction. After the first type, both moods are $a_1$. After the alternating jokes, the moods are still the same. After that, one of the spectators will have his/her mood only decreasing until the end. Once it reaches $-1$, the show ends. Thus, Eve can tell $a_1 + 2 \cdot \min(a_2, a_3) + \min(a_1 + 1, |a_2 - a_3| + a_4)$ jokes if $a_1 \neq 0$. Otherwise, the answer is always $1$. Overall complexity: $O(1)$.
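The closed-form answer can be cross-checked against a brute-force search over small inputs (an illustrative sketch; the function names and state encoding are assumptions, not the author's code):

```python
from functools import lru_cache

def best_show_brute(a1, a2, a3, a4):
    # Mood changes per joke type: (Alice delta, Bob delta).
    effects = ((1, 1), (1, -1), (-1, 1), (-1, -1))

    @lru_cache(maxsize=None)
    def go(counts, alice, bob):
        best = 0
        for i, (da, db) in enumerate(effects):
            if counts[i] == 0:
                continue
            na, nb = alice + da, bob + db
            rest = counts[:i] + (counts[i] - 1,) + counts[i + 1:]
            if na < 0 or nb < 0:
                told = 1                    # joke is told, then a spectator leaves
            else:
                told = 1 + go(rest, na, nb)
            best = max(best, told)
        return best

    return go((a1, a2, a3, a4), 0, 0)

def best_show_formula(a1, a2, a3, a4):
    if a1 == 0:
        return 1
    return a1 + 2 * min(a2, a3) + min(a1 + 1, abs(a2 - a3) + a4)
```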
|
[
"greedy",
"math"
] | 1,200
|
for _ in range(int(input())):
a1, a2, a3, a4 = map(int, input().split())
if a1 == 0:
print(1)
else:
print(a1 + min(a2, a3) * 2 + min(a1 + 1, abs(a2 - a3) + a4))
|
1792
|
C
|
Min Max Sort
|
You are given a permutation $p$ of length $n$ (a permutation of length $n$ is an array of length $n$ in which each integer from $1$ to $n$ occurs exactly once).
You can perform the following operation any number of times (possibly zero):
- choose two different elements $x$ and $y$ and erase them from the permutation;
- insert the minimum of $x$ and $y$ into the permutation in such a way that it becomes the first element;
- insert the maximum of $x$ and $y$ into the permutation in such a way that it becomes the last element.
For example, if $p = [1, 5, 4, 2, 3]$ and we want to apply the operation to the elements $3$ and $5$, then after the first step of the operation, the permutation becomes $p = [1, 4, 2]$; and after we insert the elements, it becomes $p = [3, 1, 4, 2, 5]$.
Your task is to calculate the minimum number of operations described above to sort the permutation $p$ in ascending order (i. e. transform $p$ so that $p_1 < p_2 < \dots < p_n$).
|
If the array is already sorted, then the answer is $0$. Otherwise, there is a last operation, after which the permutation takes the form $1, 2, \dots, n$. This means that the elements $1$ and $n$ are selected in the last operation (because they are at the first and last positions after it). Now we know that the last operation is $(1, n)$, and it doesn't matter where exactly these numbers are in the permutation, i. e. we can assume that the answer has increased by $1$, and consider only the numbers $2, 3, \dots, n-1$. Similarly, for the "remaining" permutation, there are two options: either it is sorted, and then the answer is $1$; or there is a last operation, and the numbers $2$ and $n-1$ are used in it. And so on, until the "remaining" permutation is sorted or empty. It remains to find out how to quickly check whether the numbers in the segment $[k, n - k + 1]$ are sorted (they go in the correct order in the initial permutation). Note that this segment corresponds to values of elements, not to positions in the permutation. If this segment is sorted for some $k$, then the answer does not exceed $k-1$. There are several ways to check this; let's consider one of them. Note that if the segment $[k, n - k + 1]$ is sorted for some value $k$, then it is sorted for larger values as well. So we can start with the maximum value of $k$ (which is equal to $\left\lfloor\frac{n+1}{2}\right\rfloor$) and decrease it while the segment remains sorted. Now for each $k$ we need only two checks: that $pos_k < pos_{k + 1}$ and $pos_{n - k + 1} > pos_{n - (k + 1) + 1}$, where $pos_i$ is the position of the element $i$ in the permutation. Thus, we get a solution in linear time. Another way is to run binary search on $k$, since if the numbers in $[k, n - k + 1]$ appear in the permutation in sorted order, the same holds for $k+1$. This approach yields a solution in $O(n \log n)$.
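The linear check translates directly into a few lines (a sketch under the assumption of 1-based values in a 0-based list; `min_operations` is a hypothetical name):

```python
def min_operations(p):
    n = len(p)
    pos = [0] * (n + 1)           # pos[v] = index of value v in p
    for i, v in enumerate(p):
        pos[v] = i
    # Grow the sorted value segment outward from the middle while the
    # boundary values still appear in the correct relative order.
    l, r = (n + 1) // 2, (n + 2) // 2
    while l > 0 and (l == r or (pos[l] < pos[l + 1] and pos[r - 1] < pos[r])):
        l -= 1
        r += 1
    # Each remaining (value, mirror-value) pair costs one operation.
    return (n - r + l + 1) // 2
```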
|
[
"binary search",
"brute force",
"greedy",
"math",
"two pointers"
] | 1,500
|
#include <bits/stdc++.h>
using namespace std;
int main() {
int t;
cin >> t;
while (t--) {
int n;
cin >> n;
vector<int> pos(n + 1);
for (int i = 0; i < n; ++i) {
int x;
cin >> x;
pos[x] = i;
}
int l = (n + 1) / 2, r = (n + 2) / 2;
while (l > 0 && (l == r || (pos[l] < pos[l + 1] && pos[r - 1] < pos[r]))) {
--l;
++r;
}
cout << (n - r + l + 1) / 2 << '\n';
}
}
|
1792
|
D
|
Fixed Prefix Permutations
|
You are given $n$ permutations $a_1, a_2, \dots, a_n$, each of length $m$. Recall that a permutation of length $m$ is a sequence of $m$ distinct integers from $1$ to $m$.
Let the beauty of a permutation $p_1, p_2, \dots, p_m$ be the largest $k$ such that $p_1 = 1, p_2 = 2, \dots, p_k = k$. If $p_1 \neq 1$, then the beauty is $0$.
The product of two permutations $p \cdot q$ is a permutation $r$ such that $r_j = q_{p_j}$.
For each $i$ from $1$ to $n$, print the largest beauty of a permutation $a_i \cdot a_j$ over all $j$ from $1$ to $n$ (possibly, $i = j$).
|
Let's try to solve for one of the given permutations. Let it be some $p$. How do we make the answer for it at least $1$? Well, we have to find another permutation $q$ such that $p \cdot q = (1, r_2, r_3, \dots, r_m)$. How about at least $k$? The same: $p \cdot q = (1, 2, \dots, k, r_{k+1}, \dots, r_m)$. Push $q$ to the right side of the equation: $p = (1, 2, \dots, k, r_{k+1}, \dots, r_m) \cdot q^{-1}$. Now think: what does it mean to multiply a permutation that starts with $(1, 2, \dots, k)$ by $q^{-1}$? The first $k$ positions simply pick out the first $k$ elements of $q^{-1}$. So the first $k$ elements of $p$ will be equal to the first $k$ elements of $q^{-1}$. Thus, you have to find a permutation such that its inverse has the longest common prefix with $p$. This can be done in multiple ways. For example, you can store all inverses in a trie and traverse it with $p$ until you reach a dead end. Or simply push all prefixes of each inverse into a set and iterate over $k$. Alternatively, you can just sort the inverses and do lower_bound for $p$ in this list - the permutation with the longest common prefix will be either the result or the one before it. Overall complexity: $O(nm)/O(nm \log n)/O(nm^2 \log n)$ per testcase.
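The key identity - the beauty of $p \cdot q$ equals the length of the longest common prefix of $p$ and $q^{-1}$ - can be checked exhaustively for small $m$ (an illustrative sketch; the helper names are assumptions):

```python
from itertools import permutations

def beauty(p):
    # Largest k with p_1 = 1, ..., p_k = k.
    k = 0
    while k < len(p) and p[k] == k + 1:
        k += 1
    return k

def multiply(p, q):
    # (p * q)_j = q_{p_j}, with 1-indexed values in 0-indexed lists.
    return [q[p[j] - 1] for j in range(len(p))]

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v - 1] = i + 1
    return inv

def common_prefix(a, b):
    k = 0
    while k < len(a) and a[k] == b[k]:
        k += 1
    return k
```

The identity follows from $r_j = j \iff q_{p_j} = j \iff p_j = q^{-1}_j$.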
|
[
"binary search",
"bitmasks",
"data structures",
"hashing",
"math",
"sortings"
] | 1,700
|
#include <bits/stdc++.h>
using namespace std;
#define forn(i, n) for(int i = 0; i < int(n); i++)
int get(const vector<int> &a, const vector<int> &b){
int res = 0;
while (res < int(a.size()) && a[res] == b[res])
++res;
return res;
}
int main(){
int t;
scanf("%d", &t);
while (t--){
int n, m;
scanf("%d%d", &n, &m);
vector<vector<int>> a(n, vector<int>(m));
forn(i, n) forn(j, m){
scanf("%d", &a[i][j]);
--a[i][j];
}
vector<vector<int>> b(n, vector<int>(m));
forn(i, n) forn(j, m) b[i][a[i][j]] = j;
sort(b.begin(), b.end());
forn(i, n){
int j = lower_bound(b.begin(), b.end(), a[i]) - b.begin();
int ans = 0;
if (j > 0) ans = max(ans, get(a[i], b[j - 1]));
if (j < n) ans = max(ans, get(a[i], b[j]));
printf("%d ", ans);
}
puts("");
}
}
|
1792
|
E
|
Divisors and Table
|
You are given an $n \times n$ multiplication table and a positive integer $m = m_1 \cdot m_2$. A $n \times n$ multiplication table is a table with $n$ rows and $n$ columns numbered from $1$ to $n$, where $a_{i, j} = i \cdot j$.
For each divisor $d$ of $m$, check: does $d$ occur in the table at least once, and if it does, what is the minimum row that contains $d$.
|
Firstly, let's factorize $m$. Since $m = m_1 \cdot m_2$ we can factorize $m_1$ and $m_2$ separately and then "unite" divisors. For example, you can combine the canonical representations $m_1 = p_1^{f_1} p_2^{f_2} \dots p_k^{f_k}$ and $m_2 = p_1^{g_1} p_2^{g_2} \dots p_k^{g_k}$ to get the canonical representation $m = p_1^{f_1 + g_1} p_2^{f_2 + g_2} \dots p_k^{f_k + g_k}$ and then use it to generate all divisors of $m$. Let's estimate the number of divisors $divs(m)$. It's convenient for our purposes to estimate it as $O(m^{\frac{1}{3}})$. More precisely, there are at most $105\,000$ divisors for $m \le 10^{18}$ (search "Highly Composite Numbers" for more info). How to calculate the answer $a_i$ for each divisor $d$? There are two ways. The intended solution: for each $d$ we are searching for the minimum $x$ such that $d = x y$ and $y \le n$. Since $d$ is fixed, the minimum $x$ means the maximum $y \le n$. So let's find $y$ instead. In other words, for each $d$ we need to find the maximum $y$ such that $y$ divides $d$ and $y \le n$. We can do it efficiently with dynamic programming on divisors. Let $dp[d]$ be the maximum $y$ that is a divisor of $d$ and $y \le n$. If $d \le n$ then, obviously, $dp[d] = d$. Otherwise, we know that we are searching for $y < d$. Let's say that $p_1, p_2, \dots, p_k$ are the prime divisors of the initial number $m$. Since $y$ is a divisor of $d$ and $y < d$, there exists some $p_i$ among the set of prime divisors such that $y$ is a divisor of $\frac{d}{p_i}$ as well. So, instead of a brute-force search, it's enough to take the value $dp[\frac{d}{p_i}]$. In other words, if $d > n$ we can calculate $dp[d] = \max\limits_{p_i | d}{dp[\frac{d}{p_i}]}$. Ok, now we know the value $dp[d]$ for each divisor $d$. Since we found the maximum $y \le n$, the last step is to calculate the desired $x = \frac{d}{dp[d]}$; if $x \le n$ we found the answer $a_i$, otherwise ($x > n$) it means that $d$ is not present in the $n \times n$ table and $a_i = 0$.
The total complexity is $O(\sqrt{m_1 + m_2} + divs(m) \cdot z(m) \cdot \log(divs(m)))$ per test, where $divs(m)$ is the number of divisors of $m$ ($divs(m) \le 105\,000$) and $z(m)$ is the number of prime divisors of $m$ ($z(m) \le 15$). Note that the complexity is quite high, so you should implement it at least somewhat carefully, for example store $dp[d]$ in an array, not a map, and find the position of $dp[\frac{d}{p_i}]$ with $lower\_bound()$. There is also a way to get rid of the extra $\log(divs(m))$ factor if you iterate through $dp$ in a smart way. The alternative solution (faster, easier, unproven): Let's generate a list of all divisors of $m$ as $d_1, d_2, \dots, d_l$ in increasing order. For some divisor $d_i$ we are searching for the minimum $x$ that is a divisor of $d_i$ with $\frac{d_i}{x} \le n$. It means that $x \ge \left\lceil \frac{d_i}{n} \right\rceil$. So let's just find the first position $j$ such that $d_j \ge \left\lceil \frac{d_i}{n} \right\rceil$ with $lower\_bound$ and iterate from $j$ onward, searching for the first $d_j$ that is a divisor of $d_i$. The found $d_j$ would be the minimum $x$ we need. It looks like, on average, we will find the correct $d_j$ quite fast, or we'll break when $d_j > n$.
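The DP on divisors can be prototyped on small numbers and compared against a direct scan of the table (a hedged sketch: the brute-force factorization here is only for illustration and would not scale to $m \le 10^{18}$):

```python
def min_row_per_divisor(n, m):
    # Divisors in ascending order (naive enumeration, fine for tiny m).
    divisors = [d for d in range(1, m + 1) if m % d == 0]
    primes = [p for p in divisors
              if p > 1 and all(p % q for q in range(2, int(p ** 0.5) + 1))]
    dp = {}    # dp[d] = largest divisor y of d with y <= n
    ans = {}   # ans[d] = minimum row containing d, or 0 if absent
    for d in divisors:
        best = d if d <= n else 0
        if d > n:
            for p in primes:
                if d % p == 0:
                    best = max(best, dp[d // p])
        dp[d] = best
        ans[d] = d // best if best > 0 and d // best <= n else 0
    return ans

def min_row_brute(n, m):
    # Direct scan: smallest row i with i | d and d / i <= n.
    ans = {}
    for d in (d for d in range(1, m + 1) if m % d == 0):
        ans[d] = next((i for i in range(1, n + 1)
                       if d % i == 0 and d // i <= n), 0)
    return ans
```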
|
[
"brute force",
"dfs and similar",
"dp",
"number theory"
] | 2,400
|
#include<bits/stdc++.h>
using namespace std;
#define fore(i, l, r) for(int i = int(l); i < int(r); i++)
#define sz(a) int((a).size())
#define x first
#define y second
typedef long long li;
typedef long double ld;
typedef pair<int, int> pt;
template<class A, class B> ostream& operator <<(ostream& out, const pair<A, B> &p) {
return out << "(" << p.x << ", " << p.y << ")";
}
template<class A> ostream& operator <<(ostream& out, const vector<A> &v) {
fore(i, 0, sz(v)) {
if(i) out << " ";
out << v[i];
}
return out;
}
const int INF = int(1e9);
const li INF64 = li(1e18);
const ld EPS = 1e-9;
int n;
li m1, m2;
inline bool read() {
if(!(cin >> n >> m1 >> m2))
return false;
return true;
}
vector<pt> mFact;
vector<li> divs;
void factM(li m1, li m2) {
mFact.clear();
for (int d = 2; d * d <= m1 || d * d <= m2; d++) {
int cnt = 0;
while (m1 % d == 0) {
m1 /= d;
cnt++;
}
while (m2 % d == 0) {
m2 /= d;
cnt++;
}
if (cnt > 0)
mFact.push_back({d, cnt});
}
if (m1 > m2)
swap(m1, m2);
if (m1 > 1)
mFact.push_back({m1, 1});
if (m2 > 1) {
if (m2 == m1)
mFact.back().y++;
else
mFact.push_back({m2, 1});
}
}
void genDivisors(int pos, li val) {
if (pos >= sz(mFact)) {
divs.push_back(val);
return;
}
li cur = val;
fore (pw, 0, mFact[pos].y + 1) {
genDivisors(pos + 1, cur);
if (pw < mFact[pos].y)
cur *= mFact[pos].x;
}
}
inline void solve() {
factM(m1, m2);
divs.clear();
genDivisors(0, 1);
sort(divs.begin(), divs.end());
vector<int> ans(sz(divs), 0);
vector<li> dp(sz(divs), -1);
fore (id, 0, sz(divs)) {
if (divs[id] <= n)
dp[id] = divs[id];
for (auto [p, pw] : mFact) {
if (divs[id] % p != 0)
continue;
int pos = int(lower_bound(divs.begin(), divs.end(), divs[id] / p) - divs.begin());
dp[id] = max(dp[id], dp[pos]);
}
if (divs[id] / dp[id] <= n)
ans[id] = divs[id] / dp[id];
}
int cnt = 0;
int xorSum = 0;
fore (i, 0, sz(ans)) {
cnt += ans[i] > 0;
xorSum ^= ans[i];
}
// cout << sz(ans) << endl;
// cout << ans << endl;
cout << cnt << " " << xorSum << endl;
}
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
int tt = clock();
#endif
ios_base::sync_with_stdio(false);
cin.tie(0), cout.tie(0);
cout << fixed << setprecision(15);
int t; cin >> t;
while (t--) {
read();
solve();
#ifdef _DEBUG
cerr << "TIME = " << clock() - tt << endl;
tt = clock();
#endif
}
return 0;
}
|
1792
|
F1
|
Graph Coloring (easy version)
|
\textbf{The only difference between the easy and the hard version is the constraint on $n$.}
You are given an undirected complete graph on $n$ vertices. A complete graph is a graph where each pair of vertices is connected by an edge. You have to paint the edges of the graph into two colors, red and blue (each edge will have one color).
A set of vertices $S$ is \textbf{red-connected} if, for every pair of vertices $(v_1, v_2)$ such that $v_1 \in S$ and $v_2 \in S$, there exists a path from $v_1$ to $v_2$ that goes only through red edges and vertices from $S$. Similarly, a set of vertices $S$ is \textbf{blue-connected} if, for every pair of vertices $(v_1, v_2)$ such that $v_1 \in S$ and $v_2 \in S$, there exists a path from $v_1$ to $v_2$ that goes only through blue edges and vertices from $S$.
You have to paint the graph in such a way that:
- there is at least one red edge;
- there is at least one blue edge;
- for each set of vertices $S$ such that $|S| \ge 2$, $S$ is either red-connected or blue-connected, but \textbf{not both}.
Calculate the number of ways to paint the graph, and print it modulo $998244353$.
|
Lemma: if an undirected graph is disconnected, then its complement is connected. Similarly, if its complement is disconnected, then the graph itself is connected. Proof: suppose a graph is disconnected. Pick two vertices $x$ and $y$ from different components. Every vertex outside of $x$'s component is connected to $x$ in the complement, and every vertex outside of $y$'s component is connected to $y$ in the complement; the complement also contains the edge from $x$ to $y$, so all vertices in the complement graph belong to a single component. Why do we need this lemma at all? We can treat the graph formed by blue edges as the complement to the graph formed by red edges. So, if the "red" graph is disconnected, then the "blue" graph is connected, so we don't need to consider the case when some set of vertices is connected by neither color. We only need to make sure that no set of vertices is connected by both colors. Let $A_n$ be the answer for $n$. Every graph counted in $A_n$ is either red-disconnected or blue-disconnected; since there is a bijection between red-disconnected and blue-disconnected graphs (you can flip the colors of all edges to transform one type into the other), we will count only red-disconnected graphs and multiply it by $2$. Let $B_n$ be the number of blue-connected graphs with $n$ vertices meeting the properties of the problem statement. It's easy to see that $A_n = 2 \cdot B_n$ if $n > 1$, otherwise $A_n = B_n$ (the case $n = 1$ is special because a graph on one vertex is both red-connected and blue-connected). To calculate $B_n$, let's iterate over $k$, the number of vertices which are in the same "red" component as vertex $1$. 
This component must be a red-connected graph which meets the problem statement; by the color-flip symmetry, the number of such graphs on $k$ vertices equals $B_k$. There are $\frac{(n-1)!}{(k-1)!(n-k)!}$ ways to choose the vertices in the same component as $1$, and the remaining graph can be either red-connected or blue-connected, so the number of ways to build the remaining graph is $A_{n-k}$. Thus, we get the following two relations: $B_{n} = \sum\limits_{k=1}^{n-1} B_k A_{n-k} \frac{(n-1)!}{(k-1)!(n-k)!}$ and $A_n = 2 \cdot B_n \textrm{ if } n>1 \textrm{, otherwise } A_n = B_n$. We can calculate all values with dynamic programming using these formulas in $O(n^2)$.
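The two relations translate directly into code; here is a short Python sketch of the $O(n^2)$ DP (a transcription of the recurrences above, not the model C++ solution):

```python
MOD = 998244353

def count_colorings(n):
    """B[k] = sum_{j<k} B[j] * A[k-j] * (k-1)! / ((j-1)! (k-j)!),
    A[k] = 2*B[k] for k > 1, A[1] = B[1] = 1.
    Returns A[n] - 2 (the all-red and all-blue colorings are excluded,
    since each of them must leave at least one edge of the other color)."""
    fact = [1] * (n + 1)
    for i in range(1, n + 1):
        fact[i] = fact[i - 1] * i % MOD
    inv_fact = [pow(f, MOD - 2, MOD) for f in fact]
    A = [0] * (n + 1)
    B = [0] * (n + 1)
    A[1] = B[1] = 1
    for k in range(2, n + 1):
        s = 0
        for j in range(1, k):
            s += B[j] * A[k - j] % MOD * fact[k - 1] % MOD \
                 * inv_fact[j - 1] % MOD * inv_fact[k - j] % MOD
        B[k] = s % MOD
        A[k] = 2 * B[k] % MOD
    return (A[n] - 2) % MOD
```

For instance, `count_colorings(3)` gives $6$: out of the $2^3$ edge colorings of a triangle, only the two monochromatic ones are invalid.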
|
[
"combinatorics",
"dp",
"graphs"
] | 2,700
|
#include<bits/stdc++.h>
using namespace std;
const int MOD = 998244353;
int add(int x, int y)
{
x += y;
while(x >= MOD) x -= MOD;
while(x < 0) x += MOD;
return x;
}
int mul(int x, int y)
{
return (x * 1ll * y) % MOD;
}
int varMul(int x)
{
return x;
}
template<typename... Args>
int varMul(int x, Args... args)
{
return mul(x, varMul(args...));
}
int binpow(int x, int y)
{
int z = 1;
while(y)
{
if(y & 1) z = mul(z, x);
x = mul(x, x);
y /= 2;
}
return z;
}
vector<int> fact, rfact;
vector<int> dp;
int n;
void precalc()
{
fact.resize(n + 1);
rfact.resize(n + 1);
fact[0] = 1;
for(int i = 1; i <= n; i++)
fact[i] = mul(i, fact[i - 1]);
for(int i = 0; i <= n; i++)
rfact[i] = binpow(fact[i], MOD - 2);
dp.resize(n + 1, -1);
}
int C(int n, int k)
{
if(n < 0 || n < k || k < 0) return 0;
return varMul(fact[n], rfact[k], rfact[n - k]);
}
int calc(int x)
{
if(dp[x] != -1) return dp[x];
if(x == 1) return dp[x] = 1;
if(x == 2) return dp[x] = 1;
dp[x] = 0;
int& d = dp[x];
for(int i = 1; i < x; i++)
{
d = add(d, varMul(calc(i), (i == x - 1 ? (MOD + 1) / 2 : calc(x - i)), 2, C(x - 1, i - 1)));
}
return d;
}
int main()
{
cin >> n;
precalc();
cout << add(mul(calc(n), 2), -2) << endl;
}
|
1792
|
F2
|
Graph Coloring (hard version)
|
\textbf{The only difference between the easy and the hard version is the constraint on $n$.}
You are given an undirected complete graph on $n$ vertices. A complete graph is a graph where each pair of vertices is connected by an edge. You have to paint the edges of the graph into two colors, red and blue (each edge will have one color).
A set of vertices $S$ is \textbf{red-connected} if, for every pair of vertices $(v_1, v_2)$ such that $v_1 \in S$ and $v_2 \in S$, there exists a path from $v_1$ to $v_2$ that goes only through red edges and vertices from $S$. Similarly, a set of vertices $S$ is \textbf{blue-connected} if, for every pair of vertices $(v_1, v_2)$ such that $v_1 \in S$ and $v_2 \in S$, there exists a path from $v_1$ to $v_2$ that goes only through blue edges and vertices from $S$.
You have to paint the graph in such a way that:
- there is at least one red edge;
- there is at least one blue edge;
- for each set of vertices $S$ such that $|S| \ge 2$, $S$ is either red-connected or blue-connected, but \textbf{not both}.
Calculate the number of ways to paint the graph, and print it modulo $998244353$.
|
Please read the tutorial for the easy version first, since this tutorial uses some definitions from it. Okay, we need more definitions. Here they come: $C_0 = 0, C_i = \frac{A_i}{i!} \textrm{ if } i > 0$ $D_0 = 0, D_i = \frac{B_i}{(i-1)!} \textrm{ if } i > 0$ This way, we can transform the formula for $B_n$ to the following: $B_n = (n-1)! \cdot \sum\limits_{k=1}^{n-1} C_{n-k} D_k$ Or even this, since $C_0 = D_0 = 0$: $B_n = (n-1)! \cdot \sum\limits_{k=0}^{n} C_{n-k} D_k$ This is almost the convolution of the sequences $C$ and $D$ (with a few additional operations after the convolution), so, to compute the sequence $B$, we just need to compute the sequences $C$ and $D$, and then calculate their convolution with NTT. All that's left is to multiply every element by the corresponding factorial. But wait, that's not so easy. In order to calculate $C_i$ and $D_i$, we need to know $B_i$. Note that we can ignore the fact that $C_i$ and $D_i$ appear in the formula for $B_i$, since they are multiplied by $0$, so at least we don't have a dependency cycle. Unfortunately, we cannot just straightforwardly use convolution if we don't know the sequences $C_i$ and $D_i$. The model solution handles it using the following approach. Let's generate $A$, $B$, $C$ and $D$ in parallel: on the $i$-th iteration, calculate $B_i$, then calculate $A_i$, $C_i$ and $D_i$ using it. And sometimes we will calculate the convolution of the sequences $C$ and $D$. Suppose we want to calculate $B_i$, and the last time we calculated the convolution of $C$ and $D$ was after the iteration $t$. Back then, we knew all elements from $C_0$ to $C_t$ and from $D_0$ to $D_t$. So, the $i$-th term in the convolution of $C$ and $D$ contained the sum of $C_{i-k} D_k$ over all $k$ such that $k \le t$ and $i - k \le t$. 
So, in order to calculate $B_i$, we have to pick this value from the convolution and then add the sum of $C_{i-k} D_k$ over all $k$ such that $k > t$ or $k < i - t$, and there are at most $2(i-t)$ such values. Suppose we compute the convolution every $K$ iterations. Then the maximum value of $i-t$ is $K$, and every value of $B_i$ is calculated in $O(K)$. We also make $\frac{n}{K}$ convolutions, so the total complexity of this solution will be $O(\frac{n^2 \log n}{K} + nK)$, which can be transformed into $O(n \sqrt{n \log n})$ if we pick $K = \sqrt{n \log n}$.
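As a sanity check of the key identity $B_n = (n-1)! \cdot \sum_{k=0}^{n} C_{n-k} D_k$ (independent of any NTT details), here is a small Python verification against the easy-version recurrence, using exact rational arithmetic; function names are illustrative:

```python
from fractions import Fraction
from math import factorial

def easy_dp(nmax):
    """A and B from the easy-version O(n^2) recurrence (exact integers)."""
    A = [0] * (nmax + 1)
    B = [0] * (nmax + 1)
    A[1] = B[1] = 1
    for k in range(2, nmax + 1):
        # the binomial (k-1)! / ((j-1)! (k-j)!) is an integer, so // is exact
        B[k] = sum(B[j] * A[k - j] * factorial(k - 1)
                   // (factorial(j - 1) * factorial(k - j))
                   for j in range(1, k))
        A[k] = 2 * B[k]
    return A, B

def B_via_convolution(n, A, B):
    """B_n recomputed as (n-1)! * sum_{k=0}^{n} C_{n-k} D_k,
    where C_i = A_i / i!, D_i = B_i / (i-1)!, and C_0 = D_0 = 0."""
    C = [Fraction(0)] + [Fraction(A[i], factorial(i)) for i in range(1, n + 1)]
    D = [Fraction(0)] + [Fraction(B[i], factorial(i - 1)) for i in range(1, n + 1)]
    return factorial(n - 1) * sum(C[n - k] * D[k] for k in range(n + 1))
```

The model solution computes exactly this sum, but modulo $998244353$ and with the convolution part delegated to NTT in blocks of $K$ iterations.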
|
[
"brute force",
"combinatorics",
"divide and conquer",
"dp",
"fft",
"graphs"
] | 2,900
|
#include<bits/stdc++.h>
using namespace std;
const int LOGN = 18;
const int N = (1 << LOGN);
const int MOD = 998244353;
const int g = 3;
#define forn(i, n) for(int i = 0; i < int(n); i++)
inline int mul(int a, int b)
{
return (a * 1ll * b) % MOD;
}
inline int norm(int a)
{
while(a >= MOD)
a -= MOD;
while(a < 0)
a += MOD;
return a;
}
inline int binPow(int a, int k)
{
int ans = 1;
while(k > 0)
{
if(k & 1)
ans = mul(ans, a);
a = mul(a, a);
k >>= 1;
}
return ans;
}
inline int inv(int a)
{
return binPow(a, MOD - 2);
}
vector<int> w[LOGN];
vector<int> iw[LOGN];
vector<int> rv[LOGN];
void precalc()
{
int wb = binPow(g, (MOD - 1) / (1 << LOGN));
for(int st = 0; st < LOGN; st++)
{
w[st].assign(1 << st, 1);
iw[st].assign(1 << st, 1);
int bw = binPow(wb, 1 << (LOGN - st - 1));
int ibw = inv(bw);
int cw = 1;
int icw = 1;
for(int k = 0; k < (1 << st); k++)
{
w[st][k] = cw;
iw[st][k] = icw;
cw = mul(cw, bw);
icw = mul(icw, ibw);
}
rv[st].assign(1 << st, 0);
if(st == 0)
{
rv[st][0] = 0;
continue;
}
int h = (1 << (st - 1));
for(int k = 0; k < (1 << st); k++)
rv[st][k] = (rv[st - 1][k & (h - 1)] << 1) | (k >= h);
}
}
inline void fft(int a[N], int n, int ln, bool inverse)
{
for(int i = 0; i < n; i++)
{
int ni = rv[ln][i];
if(i < ni)
swap(a[i], a[ni]);
}
for(int st = 0; (1 << st) < n; st++)
{
int len = (1 << st);
for(int k = 0; k < n; k += (len << 1))
{
for(int pos = k; pos < k + len; pos++)
{
int l = a[pos];
int r = mul(a[pos + len], (inverse ? iw[st][pos - k] : w[st][pos - k]));
a[pos] = norm(l + r);
a[pos + len] = norm(l - r);
}
}
}
if(inverse)
{
int in = inv(n);
for(int i = 0; i < n; i++)
a[i] = mul(a[i], in);
}
}
int aa[N], bb[N], cc[N];
vector<int> multiply(vector<int> a, vector<int> b)
{
int sza = a.size();
int szb = b.size();
int n = 1, ln = 0;
while(n < (sza + szb))
n <<= 1, ln++;
for(int i = 0; i < n; i++)
aa[i] = (i < sza ? a[i] : 0);
for(int i = 0; i < n; i++)
bb[i] = (i < szb ? b[i] : 0);
fft(aa, n, ln, false);
fft(bb, n, ln, false);
for(int i = 0; i < n; i++)
cc[i] = mul(aa[i], bb[i]);
fft(cc, n, ln, true);
int szc = n;
vector<int> c(szc);
szc = n;
for(int i = 0; i < n; i++)
c[i] = cc[i];
return c;
}
int main()
{
int n;
cin >> n;
vector<int> fact(n + 1);
fact[0] = 1;
for(int i = 0; i < n; i++)
fact[i + 1] = mul(fact[i], i + 1);
precalc();
vector<int> A = {0, 1, 2};
vector<int> B = {0, 1, 1};
vector<int> C = {0, 1, 1};
vector<int> D = {0, 1, 1};
vector<int> conv;
const int K = 2000;
int last_conv = -1e9;
while(A.size() <= n)
{
int cur = A.size();
if(cur - last_conv >= K)
{
last_conv = cur - 1;
conv = multiply(C, D);
}
/*for(auto x : conv) cerr << x << " ";
cerr << endl;*/
int val_A;
if(last_conv * 2 >= cur)
{
val_A = conv[cur];
// [cur - last_conv, last_conv] are already used
for(int i = 1; i < (cur - last_conv); i++)
{
val_A = norm(val_A + mul(C[i], D[cur - i]));
}
for(int i = last_conv + 1; i < cur; i++)
{
val_A = norm(val_A + mul(C[i], D[cur - i]));
}
}
else
{
val_A = 0;
for(int i = 1; i <= cur - 1; i++)
{
val_A = norm(val_A + mul(C[i], D[cur - i]));
}
}
val_A = mul(val_A, fact[cur - 1]);
val_A = mul(val_A, 2);
A.push_back(val_A);
B.push_back(mul(val_A, inv(2)));
C.push_back(mul(val_A, inv(fact[cur])));
D.push_back(mul(B.back(), inv(fact[cur - 1])));
}
cout << norm(A[n] - 2) << endl;
}
|
1793
|
A
|
Yet Another Promotion
|
The famous store "Second Food" sells groceries only two days a month, and the prices on these two days differ. You wanted to buy $n$ kilos of potatoes for a month. You know that on the first day of the month $1$ kilo of potatoes costs $a$ coins, and on the second day $b$ coins. In "Second Food" you can buy any \textbf{integer} number of kilograms of potatoes.
Fortunately, "Second Food" has announced a promotion for potatoes, which is valid only on the first day of the month — for each $m$ kilos of potatoes you buy, you get $1$ kilo as a gift! In other words, you can get $m + 1$ kilograms by paying for $m$ kilograms.
Find the minimum number of coins that you have to spend to buy \textbf{at least} $n$ kilos of potatoes.
|
Let $n = (m + 1) \cdot q + r$. Note that the promotion is worth using if $a \cdot m \leq b \cdot (m + 1)$. In this case, we buy potatoes via the promotion $q$ times. The remaining potatoes (or all of them, if the promotion is unprofitable) can be bought at $\min(a, b)$ per kilogram. Then the answer is: $q \cdot \min(a \cdot m, b \cdot (m + 1)) + r \cdot \min(a, b)$. Thus the solution works in $\mathcal{O}(1)$.
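For intuition, here is a small Python check (not part of the editorial) comparing the closed-form answer with a brute force over the number of promotion uses; function names are illustrative:

```python
def min_cost(a, b, n, m):
    """Closed-form answer from the editorial."""
    q, r = divmod(n, m + 1)
    return q * min(a * m, b * (m + 1)) + r * min(a, b)

def min_cost_brute(a, b, n, m):
    """Try every number of promotion uses (0..n); each use pays a*m coins
    for m+1 kilos, and the rest is bought at min(a, b) per kilo.
    Overshooting n is allowed ('at least n kilos')."""
    best = None
    for uses in range(n + 1):
        rest = max(0, n - uses * (m + 1))
        cost = uses * a * m + rest * min(a, b)
        best = cost if best is None else min(best, cost)
    return best
```

Note that buying one extra promotion block never beats paying for the remainder directly: $r \le m$, so $r \cdot \min(a, b) \le m \cdot a$.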
|
[
"greedy",
"math"
] | 800
|
t = int(input())
for i in range(t):
a, b = map(int, input().split(" "))
n, m = map(int, input().split(" "))
q = n // (m + 1)
r = n - q * (m + 1)
print(q * min(a * m, b * (m + 1))+ r*min(a,b))
|
1793
|
B
|
Fedya and Array
|
For his birthday recently Fedya was given an array $a$ of $n$ integers arranged in a circle. For each pair of neighboring numbers ($a_1$ and $a_2$, $a_2$ and $a_3$, $\ldots$, $a_{n - 1}$ and $a_n$, $a_n$ and $a_1$) the absolute difference between them is equal to $1$.
Let's call a local maximum an element, which is greater than both of its neighboring elements. Also call a local minimum an element, which is less than both of its neighboring elements. Note, that elements $a_1$ and $a_n$ are neighboring elements.
Unfortunately, Fedya lost the array, but he remembered the sum of its local maximums $x$ and the sum of its local minimums $y$.
Given $x$ and $y$, help Fedya find any matching array of \textbf{minimum} length.
|
Note that the local minimums and maximums alternate, so there is the same number $k$ of each. Denote the $i$-th local maximum by $a_i$ and the $i$-th local minimum by $b_i$. Without loss of generality, assume that $a_i$ goes before $b_i$. To get from $a_i$ to $b_i$ we need to write out $a_i - b_i$ numbers, and to get from $b_i$ to $a_{(i + 1) \bmod k}$ we need to write out $a_{(i + 1) \bmod k} - b_i$ numbers. Thus, $(a_1 - b_1) + (a_2 - b_1) + (a_2 - b_2) + \ldots + (a_k - b_k) + (a_1 - b_k) = 2 \cdot (a_1 + a_2 + \ldots + a_k) - 2 \cdot (b_1 + b_2 + \ldots + b_k) = 2 \cdot (x - y) = n$. The array $[y, y + 1, y + 2, \ldots, x - 1, x, x - 1, x - 2, \ldots, y + 1]$ satisfies the condition.
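The construction is easy to verify in code; a minimal Python sketch (helper names are illustrative) that builds the array and recomputes the circular local-extremum sums:

```python
def build_array(x, y):
    """Editorial construction: [y, y+1, ..., x, x-1, ..., y+1], length 2*(x-y)."""
    return list(range(y, x + 1)) + list(range(x - 1, y, -1))

def local_sums(arr):
    """Sum of local maxima and sum of local minima on the circle."""
    n = len(arr)
    mx = mn = 0
    for i in range(n):
        prev, nxt = arr[i - 1], arr[(i + 1) % n]
        if arr[i] > prev and arr[i] > nxt:
            mx += arr[i]
        if arr[i] < prev and arr[i] < nxt:
            mn += arr[i]
    return mx, mn
```

The built array has exactly one local maximum ($x$) and one local minimum ($y$), matching the required sums with the minimum possible length $2(x - y)$.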
|
[
"constructive algorithms",
"math"
] | 1,100
|
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
void solve() {
ll a, b;
cin >> a >> b;
ll n = 2 * (a - b);
cout << n << '\n';
vector<ll> arr(n);
int ptr = 0;
for (ll c = b; c <= a; ++c) {
arr[ptr++] = c;
}
for (ll c = a - 1; c > b; --c) {
arr[ptr++] = c;
}
for (int i = 0; i < n; ++i) {
cout << arr[i] << " \n"[i == n - 1];
}
}
int main() {
int t;
cin >> t;
while (t--) {
solve();
}
return 0;
}
|
1793
|
C
|
Dora and Search
|
As you know, the girl Dora is always looking for something. This time she was given a permutation, and she wants to find such a subsegment of it that none of the elements at its ends is either the minimum or the maximum of the entire subsegment. More formally, you are asked to find the numbers $l$ and $r$ $(1 \leq l \leq r \leq n)$ such that $a_l \neq \min(a_l, a_{l + 1}, \ldots, a_r)$, $a_l \neq \max(a_l, a_{l + 1}, \ldots, a_r)$ and $a_r \neq \min(a_l, a_{l + 1}, \ldots, a_r)$, $a_r \neq \max(a_l, a_{l + 1}, \ldots, a_r)$.
A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in any order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ occurs twice in the array) and $[1,3,4]$ is also not a permutation ($n=3$, but $4$ is present in the array).
Help Dora find such a subsegment, or tell her that such a subsegment does not exist.
|
Suppose we want to check whether the entire array satisfies the claim. If it does, we can output the entire array as the answer. Otherwise, one of the two extreme elements does not meet our requirements. From this we can conclude that all segments containing an element that does not meet our requirements are also invalid, because this extreme element will remain the minimum/maximum. The algorithm follows from the fact above: consider the segment $[l; r]$, which is initially equal to $[1; n]$. If $a_l = \min(a_{l}, a_{l+1}, \ldots, a_{r})$ or $a_l = \max(a_l, a_{l +1}, \ldots, a_r)$, then we proceed to the segment $[l+1; r]$. Similar reasoning applies to $a_r$. Thus, either after some iterations we obtain the required segment, or the segment becomes empty and the answer is $-1$. Final asymptotics: $\mathcal{O}(n\log n)$ or $\mathcal{O}(n)$ depending on the implementation.
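The $\mathcal{O}(n)$ variant of the shrinking procedure can be written very compactly; a Python sketch (the C++ solution below uses the same trick):

```python
def find_subsegment(a):
    """Two-pointer shrink: a is a permutation of 1..n, so after discarding
    the smallest and largest values so far, the min (mn) and max (mx) of
    the remaining segment are known in O(1)."""
    l, r = 0, len(a) - 1
    mn, mx = 1, len(a)
    while l <= r:
        if a[l] == mn:
            l, mn = l + 1, mn + 1
        elif a[l] == mx:
            l, mx = l + 1, mx - 1
        elif a[r] == mn:
            r, mn = r - 1, mn + 1
        elif a[r] == mx:
            r, mx = r - 1, mx - 1
        else:
            return l + 1, r + 1  # 1-indexed endpoints of a valid subsegment
    return -1
```

Each iteration removes one element, so the loop runs at most $n$ times.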
|
[
"constructive algorithms",
"data structures",
"two pointers"
] | 1,200
|
#include <bits/stdc++.h>
using namespace std;
typedef vector<int> vi;
void solve() {
int n;
cin >> n;
vi a(n);
for (int &i: a)
cin >> i;
int l = 0, r = n - 1;
int mn = 1, mx = n;
while (l <= r) {
if (a[l] == mn) {
l++;
mn++;
} else if (a[l] == mx) {
l++;
mx--;
} else if (a[r] == mn) {
r--;
mn++;
} else if (a[r] == mx) {
r--;
mx--;
} else {
break;
}
}
if(l <= r){
cout << l + 1 << " " << r + 1 << endl;
} else{
cout << -1 << endl;
}
}
signed main() {
int q = 1;
cin >> q;
while (q--)
solve();
return 0;
}
|
1793
|
D
|
Moscow Gorillas
|
In winter, the inhabitants of the Moscow Zoo are very bored, in particular, it concerns gorillas. You decided to entertain them and brought a permutation $p$ of length $n$ to the zoo.
A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in any order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ occurs twice in the array) and $[1,3,4]$ is also not a permutation ($n=3$, but $4$ is present in the array).
The gorillas had their own permutation $q$ of length $n$. They suggested that you count the number of pairs of integers $l, r$ ($1 \le l \le r \le n$) such that $\operatorname{MEX}([p_l, p_{l+1}, \ldots, p_r])=\operatorname{MEX}([q_l, q_{l+1}, \ldots, q_r])$.
The $\operatorname{MEX}$ of the sequence is the minimum \textbf{positive} integer missing from this sequence. For example, $\operatorname{MEX}([1, 3]) = 2$, $\operatorname{MEX}([5]) = 1$, $\operatorname{MEX}([3, 1, 2, 6]) = 4$.
You do not want to risk your health, so you will not dare to refuse the gorillas.
|
Denote by $pos_x$ the index of the number $x$ in the permutation. Subsegments with $\operatorname{MEX}>1$ are exactly those with $1 \le l \le pos_1 \le r \le n$. Denote by: $l_x = \min{[pos_1, pos_2, \ldots, pos_x]}$, $r_x=\max{[pos_1, pos_2, \ldots, pos_x]}$. Subsegments with $\operatorname{MEX}>x$ are exactly those with $1 \le l \le l_x \le r_x \le r \le n$. Let's find all subsegments with $\operatorname{MEX}=x+1$. If $pos_{x + 1} < l_x$, then the subsegments with $\operatorname{MEX}=x+1$ are those with $pos_{x+1} < l \le l_x \le r_x \le r \le n$. If $l_x \le pos_{x + 1} \le r_x$, then there is no subsegment with $\operatorname{MEX}=x+1$. If $r_x <pos_{x+1}$, then the subsegments with $\operatorname{MEX}=x+1$ are those with $1 \le l \le l_x \le r_x \le r < pos_{x+1}$. It remains only to intersect the sets of such subsegments for $p$ and $q$, which is done trivially.
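A brute-force reference (not the intended solution) is handy for stress-testing the case analysis above; a Python sketch with illustrative names:

```python
def mex(vals):
    """Minimum positive integer missing from vals."""
    s = set(vals)
    m = 1
    while m in s:
        m += 1
    return m

def count_equal_mex_brute(p, q):
    """Roughly O(n^3) reference: count pairs (l, r) with equal MEX in both
    permutations, directly by definition."""
    n = len(p)
    return sum(mex(p[l:r + 1]) == mex(q[l:r + 1])
               for l in range(n) for r in range(l, n))
```

When $p = q$, every one of the $\frac{n(n+1)}{2}$ pairs matches, which is a convenient sanity check.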
|
[
"binary search",
"dp",
"greedy",
"implementation",
"math",
"two pointers"
] | 1,800
|
#include <bits/stdc++.h>
#define int long long
using namespace std;
void solve() {
int n;
cin >> n;
vector<int> pos_a(n + 1);
vector<int> pos_b(n + 1);
for (int i = 0; i < n; i++) {
int a;
cin >> a;
pos_a[a] = i + 1;
}
for (int i = 0; i < n; i++) {
int b;
cin >> b;
pos_b[b] = i + 1;
}
int la = n, ra = 1, lb = n, rb = 1, ans = 0;
for (int i = 1; i + 1 <= n; i++) {
la = min(la, pos_a[i]);
ra = max(ra, pos_a[i]);
lb = min(lb, pos_b[i]);
rb = max(rb, pos_b[i]);
int min_la, max_ra, min_lb, max_rb;
if (pos_a[i + 1] < la) {
min_la = pos_a[i + 1] + 1;
max_ra = n;
} else {
min_la = 1;
max_ra = pos_a[i + 1] - 1;
}
if (pos_b[i + 1] < lb) {
min_lb = pos_b[i + 1] + 1;
max_rb = n;
} else {
min_lb = 1;
max_rb = pos_b[i + 1] - 1;
}
ans += max(min(la, lb) - max(min_la, min_lb) + 1, 0ll) * max(min(max_ra, max_rb) - max(ra, rb) + 1, 0ll);
}
ans += min(pos_a[1], pos_b[1]) * (min(pos_a[1], pos_b[1]) - 1) / 2;
ans += (n - max(pos_a[1], pos_b[1])) * (n - max(pos_a[1], pos_b[1]) + 1) / 2;
ans += abs(pos_a[1] - pos_b[1]) * (abs(pos_a[1] - pos_b[1]) - 1) / 2;
ans++;
cout << ans << endl;
}
signed main() {
int q = 1;
while (q--)
solve();
return 0;
}
|
1793
|
E
|
Velepin and Marketing
|
The famous writer Velepin is very productive. Recently, he signed a contract with a well-known publication and now he needs to write $k_i$ books for $i$-th year. This is not a problem for him at all, he can write as much as he wants about samurai, space, emptiness, insects and werewolves.
He has $n$ regular readers, each of whom in the $i$-th year will read one of the $k_i$ books published by Velepin. Readers are very fond of discussing books, so the $j$-th of them will be satisfied within a year if at least $a_j$ persons read the same book as him (\textbf{including himself}).
Velepin has obvious problems with marketing, so he turned to you! A well-known book reading service can control what each of Velepin's regular readers will read, but he does not want books to be wasted, so \textbf{someone should read each book}. So they ask you to tell them the maximum number of regular readers that can be made satisfied during each of the years, if you can choose for each person the book he will read.
|
Let's sort people by their group size requirement. Suppose there is a person $i$ who is not satisfied while some person $j > i$ is satisfied. Then we can replace person $j$ in his group with $i$, and the answer will not get worse. It follows that for a particular $k$ the answer is some prefix of the people that we can make satisfied. Let us also prove that there exists an arrangement of groups that covers the same prefix in which each group is a contiguous segment. Take some valid partition into groups. Then each group is a union of disjoint segments. Take the leftmost such segment. Note that we can swap it with the nearest segment of the same group to its right without breaking anything. Thus, we can look for a solution in the form of a partition of each prefix into valid groups which are segments. We will solve this problem using dynamic programming. Let $dp[i]$ be the maximum number of groups into which the $i$-th prefix can be partitioned so that everyone in it is satisfied (and no elements beyond the prefix can be used). Base case: $dp[0] = 0$ (the empty prefix can be divided into at most $0$ groups). Transition: the group of the $i$-th person must have size at least $a[i]$, so the transition is $dp[i] = \underset{0 \leqslant j \leqslant i - a[i]}{\max} dp[j] + 1$. But what if $a[i] > i$? Then the $i$-th prefix cannot be satisfied on its own, and we put $dp[i] = -\infty$. This DP can be computed using prefix maxima, so this part of the solution works in $\mathcal{O}(n)$. Earlier we said that the answer is some prefix of people who will be satisfied. If we can partition the prefix into some number of groups, then this prefix can be the answer for all $k \leqslant dp[i] + n - i$ (we partition our prefix into $dp[i]$ groups, and each of the remaining people forms a group of one). If we cannot make the whole prefix satisfied by itself ($dp[i] = -\infty$), then we need to add people from outside. 
Thus, the maximum number of groups we can split into while making the $i$-th prefix completely satisfied is $n - a[i] + 1$. Note that if some prefix works for $k$ groups, then it also works for $k - 1$ (combine two of the groups into one). So for each query we need to find the largest prefix that fits the given $k$. This can be done with an array of suffix maxima in $\mathcal{O}(n + q)$ total. The final asymptotics of the solution is $\mathcal{O}(n \log n + q)$.
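The whole pipeline (sort, DP with prefix maxima, suffix maxima over answers) fits in a short Python sketch; the function name is illustrative and it returns the answers for every $k = 1..n$ at once:

```python
def answers_for_all_k(a):
    """After sorting requirements, dp[i] = max number of fully satisfied
    groups the i-prefix splits into (kept as a running prefix maximum, so
    each transition is O(1)); ans[k] = largest fully satisfiable prefix
    when exactly k books must each be read by someone."""
    a = sorted(a)
    n = len(a)
    dp = [float('-inf')] * (n + 1)
    dp[0] = 0
    ans = [0] * (n + 2)
    for i in range(1, n + 1):
        need = a[i - 1]
        if need <= i:
            dp[i] = dp[i - need] + 1      # open a new group ending at i
            k = dp[i] + n - i             # remaining people become singletons
            ans[k] = max(ans[k], i)
        elif need <= n:
            # prefix is satisfiable only with people from outside it:
            # one group of size need, plus n - need singletons
            ans[1 + n - need] = max(ans[1 + n - need], i)
        dp[i] = max(dp[i], dp[i - 1])     # keep dp as a prefix maximum
    for k in range(n - 1, 0, -1):         # if k books work, so do fewer
        ans[k] = max(ans[k], ans[k + 1])
    return ans[1:n + 1]                   # answers for k = 1..n
```

Since `dp[0] = 0` and the prefix maximum is applied at every step, `dp[i - need]` is always a non-negative integer here, so the $-\infty$ sentinel never propagates into an index.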
|
[
"binary search",
"data structures",
"dp",
"greedy",
"sortings",
"two pointers"
] | 2,600
|
#include<bits/stdc++.h>
using namespace std;
#define ll long long
#define pii pair<int, int>
#define ld long double
#define all(a) (a).begin(), (a).end()
const int inf = 1e9 + 7;
signed main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
int n;
cin >> n;
vector<int> c(n);
for (int i = 0; i < n; ++i) cin >> c[i];
sort(all(c));
vector<int> ans(n + 1);
vector<int> dp(n + 1, -inf);
dp[0] = 0;
for (int i = 1; i <= n; ++i) {
if (c[i-1] <= i) {
dp[i] = dp[i - c[i-1]] + 1;
ans[dp[i] + n - i] = max(ans[dp[i] + n - i], i);
} else {
if (c[i-1] <= n) {
ans[1 + n - c[i-1]] = max(ans[1 + n - c[i-1]], i);
}
}
dp[i] = max(dp[i], dp[i-1]);
}
for (int i = n - 1; i >= 0; i--) {
ans[i] = max(ans[i], ans[i + 1]);
}
int q;
cin >> q;
for (int i = 0; i < q; ++i) {
int x;
cin >> x;
cout << ans[x] << '\n';
}
return 0;
}
|
1793
|
F
|
Rebrending
|
Kostya and Zhenya — the creators of the band "Paper" — after the release of the legendary album decided to create a new band "Day movers", for this they need to find two new people.
They invited $n$ people to the casting. The casting will last $q$ days. On the $i$-th day, Kostya and Zhenya want to find two people on the segment from $l_i$ to $r_i$ who are most suitable for their band. Since "Day movers" are doing modern art, musical skills are not important to them and they look only at one other trait: they want the height difference between the two people to be as small as possible.
Help them, and for each day find the minimum difference in height between two people from the casting on this segment!
|
Let's go through all the elements from left to right. The main task is to maintain the current version of $dp[j]$, the minimum difference between $a_j$ and the elements to the right of it considered so far. Suppose $dp$ is correctly computed for the first $r$ elements, and let's move on to $i = r + 1$. Let's show how to update the answer for all $j < i$ such that $a[j] > a[i]$; the case $a[j] < a[i]$ is handled similarly. Take the first element $a[j]$ to the left of $i$ such that $a[j] > a[i]$. Note that if there is $l < j < i$ such that $a[l] > a[j] > a[i]$, then we do not need to update $dp[l]$, because $|a[l] - a[j]| < |a[l] - a[i]|$. More generally, we will not update the answer for any $l$ with $|a[l] - a[j]| < |a[l] - a[i]|$, that is, for $a[l] > a[i] + \frac{a[j] - a[i]}{2}$. Therefore, further on we are only interested in the values from the segment $\left[a[i], a[i] + \frac{a[j] - a[i]}{2}\right]$. Note that each step halves the length of this segment, so there will be at most $\mathcal{O}(\log n)$ such iterations. The rightmost element with value in a given segment can be found using a segment tree. The answer for the query $(l_i, r_i)$ is $\underset{l_i\leqslant j < r_i}{\min} dp[j]$ taken right after $r_i$ has been processed. This can also be found efficiently using a segment tree. The final asymptotics of the solution is $\mathcal{O}(n\log^2 n + q\log n)$. There is also a solution in $\mathcal{O}(n\sqrt{n} + q\log q)$ that passes all the tests.
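For stress-testing the segment-tree solution, a quadratic Python reference (not the intended algorithm; names are illustrative, and queries are 1-indexed with $l < r$):

```python
def min_diff_queries_brute(a, queries):
    """For each (l, r), return min |a_i - a_j| over l <= i < j <= r,
    by sorting the segment and scanning adjacent pairs."""
    out = []
    for l, r in queries:
        seg = sorted(a[l - 1:r])
        out.append(min(y - x for x, y in zip(seg, seg[1:])))
    return out
```

Sorting each segment works because the minimum absolute difference in a set is always attained by two values that are adjacent in sorted order.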
|
[
"brute force",
"data structures",
"divide and conquer",
"implementation"
] | 2,600
|
#include <bits/stdc++.h>
using namespace std;
const int inf = 1e9 + 228;
template<class T, class Fun = function<T(const T &, const T &)>>
struct SegTree {
Fun f;
vector<T> t;
int n;
SegTree(int sz, const Fun &g, T default_value = T()) : f(g) {
n = 1;
while (n < sz) n <<= 1;
t.resize(n * 2, default_value);
}
SegTree(vector<T> &a, const Fun &g, T default_value = T()) : SegTree(a.size(), g, default_value) {
for (int i = 0; i < n; ++i) t[i + n] = a[i];
for (int i = n - 1; i >= 1; --i) t[i] = f(t[i << 1], t[i << 1 | 1]);
}
void upd(int i, T x) {
i += n;
t[i] = f(t[i], x);
for (i >>= 1; i > 1; i >>= 1) t[i] = f(t[i << 1], t[i << 1 | 1]);
}
T get(int l, int r) {
T resL = t[0], resR = t[0];
for (l += n, r += n; l < r; l >>= 1, r >>= 1) {
if (l & 1) resL = f(resL, t[l++]);
if (r & 1) resR = f(t[--r], resR);
}
return f(resL, resR);
}
};
signed main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
int n, q;
cin >> n >> q;
vector<int> a(n);
for (int i = 0; i < n; i++) {
cin >> a[i];
}
vector<vector<pair<int, int>>> posts(n);
for (int i = 0; i < q; i++) {
int l, r;
cin >> l >> r;
l--, r--;
posts[r].push_back({l, i});
}
SegTree<int> ind(n + 1, [](int x, int y) { return max(x, y); }, -1);
SegTree<int> dp(n + 1, [](int x, int y) { return min(x, y); }, inf);
vector<int> answer(q);
for (int i = 0; i < n; i++) {
{
int limit = n + 1;
while (true) {
int j = ind.get(a[i], limit);
if (j == -1)
break;
dp.upd(j, abs(a[j] - a[i]));
limit = a[i] + (a[j] - a[i] + 1) / 2;
}
}
{
int limit = 0;
while (true) {
int j = ind.get(limit, a[i]);
if (j == -1)
break;
dp.upd(j, abs(a[j] - a[i]));
limit = a[i] - (a[i] - a[j] + 1) / 2 + 1;
}
}
ind.upd(a[i], i);
for (pair<int, int> j: posts[i]) {
answer[j.second] = dp.get(j.first, i);
}
}
for (int i: answer) {
cout << i << "\n";
}
return 0;
}
|
1794
|
A
|
Prefix and Suffix Array
|
Marcos loves strings a lot, so he has a favorite string $s$ consisting of lowercase English letters. For this string, he wrote down all its non-empty prefixes and suffixes (except for $s$) on a piece of paper in arbitrary order. You see all these strings and wonder if Marcos' favorite string is a palindrome or not. So, your task is to decide whether $s$ is a palindrome by just looking at the piece of paper.
A string $a$ is a prefix of a string $b$ if $a$ can be obtained from $b$ by deletion of several (possibly, zero or all) characters from the end.
A string $a$ is a suffix of a string $b$ if $a$ can be obtained from $b$ by deletion of several (possibly, zero or all) characters from the beginning.
A palindrome is a string that reads the same backward as forward, for example, strings "gg", "ioi", "abba", "icpci" are palindromes, but strings "codeforces", "abcd", "alt" are not.
|
Observe that there are exactly two strings of length $n-1$ (one prefix and one suffix). We will call them $x$ and $y$. Then, $s$ is a palindrome if and only if $\text{rev}(x)=y$, where $\text{rev}(x)$ is the reversal of string $x$. So, to solve the problem it is enough to find the two strings of length $n-1$ and check whether one of them is equal to the reversal of the other. This solution also works for any length greater than or equal to $\lfloor \frac{n}{2} \rfloor$. Intended complexity: $\mathcal{O}(n^2)$ per test case (reading the input).
|
[
"strings"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
int main(){
ios::sync_with_stdio(0); cin.tie(0); cout.tie(0);
int t; cin >> t;
for(int test_number = 0; test_number < t; test_number++){
int n; cin >> n;
vector <string> long_subs;
for(int i = 0; i < 2 * n - 2; i++){
string s;
cin >> s;
if((int)s.size() == n - 1){
long_subs.push_back(s);
}
}
reverse(long_subs[1].begin(), long_subs[1].end());
if(long_subs[0] == long_subs[1]){
cout<<"YES\n";
}else{
cout<<"NO\n";
}
}
return 0;
}
|
1794
|
B
|
Not Dividing
|
You are given an array of $n$ positive integers $a_1, a_2, \ldots, a_n$. In one operation, you can choose any number of the array and add $1$ to it.
Make at most $2n$ operations so that the array satisfies the following property: $a_{i+1}$ is \textbf{not} divisible by $a_i$, for each $i = 1, 2, \ldots, n-1$.
You do \textbf{not} need to minimize the number of operations.
|
First, we add one to all the numbers in the array equal to $1$. This uses at most $n$ operations. Then, we iterate through the elements of the array from left to right, starting from the second element. At each step, let $a_x$ be the current element. If $a_x$ is divisible by $a_{x-1}$, we add one to $a_x$. Now this element is not divisible by $a_{x-1}$, because otherwise both $a_x$ and $a_x+1$ would be divisible by $a_{x-1}$, and then $1$ would also be divisible by $a_{x-1}$, which cannot happen since all the elements in the array are at least $2$ (because of the first step we did). This part also uses at most $n$ operations, so we use at most $2n$ operations in total. The resulting array satisfies the property from the statement. Intended complexity: $\mathcal{O}(n)$ per test case. Actually, the maximum number of operations performed by this algorithm is $\frac{3}{2}n$. Try to prove it!
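The two passes above can be sketched directly; `fix_array` is my name for the helper, and it also returns the operation count so the $2n$ bound can be checked:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Pass 1: turn every 1 into 2. Pass 2: bump a[i] by one whenever
// a[i-1] divides it; since a[i-1] >= 2 after pass 1, a[i] and a[i]+1
// cannot both be divisible by a[i-1].
pair<vector<long long>, int> fix_array(vector<long long> a) {
    int ops = 0;
    for (auto &x : a)
        if (x == 1) { x = 2; ops++; }
    for (size_t i = 1; i < a.size(); i++)
        if (a[i] % a[i - 1] == 0) { a[i]++; ops++; }
    return {a, ops};
}
```

On $[1,1,1]$ it spends $3$ operations turning ones into twos and one more to break $2 \mid 2$, i.e. $4 \le 2n$ operations.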
|
[
"constructive algorithms",
"greedy",
"math"
] | 900
|
#include <bits/stdc++.h>
using namespace std;
int main(){
ios::sync_with_stdio(0); cin.tie(0); cout.tie(0);
int t; cin >> t;
for(int test_number = 0; test_number < t; test_number++){
int n; cin >> n;
vector <int> a(n);
for(int i = 0; i < n; i++){
cin >> a[i];
}
for(int i = 0; i < n; i++){
if(a[i] == 1){
a[i]++;
}
}
for(int i = 1; i < n; i++){
if(a[i] % a[i - 1] == 0){
a[i]++;
}
}
for(auto i : a){
cout << i << " ";
}
cout << "\n";
}
return 0;
}
|
1794
|
C
|
Scoring Subsequences
|
The score of a sequence $[s_1, s_2, \ldots, s_d]$ is defined as $\displaystyle \frac{s_1\cdot s_2\cdot \ldots \cdot s_d}{d!}$, where $d!=1\cdot 2\cdot \ldots \cdot d$. In particular, the score of an empty sequence is $1$.
For a sequence $[s_1, s_2, \ldots, s_d]$, let $m$ be the maximum score among all its subsequences. Its cost is defined as the maximum length of a subsequence with a score of $m$.
You are given a \textbf{non-decreasing} sequence $[a_1, a_2, \ldots, a_n]$ of integers of length $n$. In other words, the condition $a_1 \leq a_2 \leq \ldots \leq a_n$ is satisfied. For each $k=1, 2, \ldots , n$, find the cost of the sequence $[a_1, a_2, \ldots , a_k]$.
A sequence $x$ is a subsequence of a sequence $y$ if $x$ can be obtained from $y$ by deletion of several (possibly, zero or all) elements.
|
We will first see how to find the cost of a single non-decreasing sequence $s_1, s_2, \ldots, s_\ell$. If we choose a subsequence with $k$ elements, to achieve the maximum score it is optimal to choose the $k$ largest elements. As the sequence is in non-decreasing order, the $k$ largest elements will be the last $k$ elements of the sequence. Thus, all possible candidates to be the answer are the suffixes of the sequence. Now let's divide the $i$-th element from the right by $i$. The sequence now turns into $\displaystyle \frac{s_1}{\ell}, \frac{s_2}{\ell-1}, \ldots, \frac{s_{\ell-1}}{2}, \frac{s_\ell}{1}$. Observe that the score of a suffix in the original sequence is equal to the product of the elements of the suffix of the same size in the new sequence. The original sequence satisfies $s_1 \leq s_2 \leq \ldots \leq s_{\ell-1} \leq s_\ell$. It is also true that $\displaystyle \frac{1}{\ell} \leq \frac{1}{\ell-1} \leq \ldots \leq \frac{1}{2} \leq \frac{1}{1}$, so combining these two inequalities we get $\displaystyle \frac{s_1}{\ell} \leq \frac{s_2}{\ell-1} \leq \ldots \leq \frac{s_{\ell-1}}{2} \leq \frac{s_\ell}{1}$, meaning the new sequence is also in non-decreasing order. In order to maximize the product of a suffix in the new sequence, we will choose all the elements in the new sequence which are greater than or equal to $1$ (these elements form a suffix because the new sequence is non-decreasing). If there are elements equal to $1$ in the new sequence, we have to choose them in order to get the subsequence with maximum length (among all the ones with maximum score). Therefore, the cost of a sequence $s_1, s_2, \ldots, s_\ell$ is the maximum length of a suffix of $\displaystyle \frac{s_1}{\ell}, \frac{s_2}{\ell-1}, \ldots, \frac{s_{\ell-1}}{2}, \frac{s_\ell}{1}$ such that each element is at least $1$. Now, we have to find the cost of every prefix of the given sequence $[a_1, a_2, \ldots , a_n]$.
For a fixed $k$, the cost of $[a_1, a_2, \ldots , a_k]$ will be the maximum length of a suffix of $\displaystyle \frac{a_1}{k}, \frac{a_2}{k-1}, \ldots, \frac{a_{k-1}}{2}, \frac{a_k}{1}$ such that each element is at least $1$. We can find this length using binary search. Observe that we cannot afford to compute the transformed sequence for every prefix, as that would be too slow. Instead, at each step of the binary search we can compute what the number in that position would be in the transformed sequence. By doing this, we can find the cost of each prefix in $\mathcal{O}(\text{log}\:n)$ time. Intended complexity: $\mathcal{O}(n\:\text{log}\:n)$ per test case.
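The binary search for a single prefix can be sketched like this, evaluating the transformed sequence on the fly (`prefix_cost` is my name; the index condition $a_{i-m+1} \ge m$ says the $m$-th element from the right, divided by $m$, is at least $1$):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Cost of the prefix a[0..i] of a non-decreasing sequence: the largest m
// such that the m-th element from the right is at least m.
int prefix_cost(const vector<int>& a, int i) {
    int l = 1, r = i + 1;
    while (l <= r) {
        int m = (l + r) / 2;
        if (a[i - m + 1] >= m) l = m + 1;  // suffix of length m still works
        else r = m - 1;
    }
    return r;
}
```

For $[1,2,3]$ the transformed sequence is $\frac{1}{3}, \frac{2}{2}, \frac{3}{1}$; two of its elements are at least $1$, so the cost of the full prefix is $2$.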
|
[
"binary search",
"greedy",
"math",
"two pointers"
] | 1,300
|
#include <bits/stdc++.h>
using namespace std;
int main(){
ios::sync_with_stdio(0); cin.tie(0); cout.tie(0);
int t; cin >> t;
for(int test_number = 0; test_number < t; test_number++){
int n; cin >> n;
vector <int> a(n);
for(int i = 0; i < n; i++){
cin >> a[i];
}
vector<int> res;
for(int i = 0; i < n; i++){
int l = 1, r = i + 1;
while(l <= r){
int m = (l + r) / 2;
if(a[i - m + 1] >= m){
l = m + 1;
}else{
r = m - 1;
}
}
res.push_back(r);
}
for(auto i : res){
cout << i << " ";
}
cout<<"\n";
}
return 0;
}
|
1794
|
D
|
Counting Factorizations
|
The prime factorization of a positive integer $m$ is the unique way to write it as $\displaystyle m=p_1^{e_1}\cdot p_2^{e_2}\cdot \ldots \cdot p_k^{e_k}$, where $p_1, p_2, \ldots, p_k$ are prime numbers, $p_1 < p_2 < \ldots < p_k$ and $e_1, e_2, \ldots, e_k$ are positive integers.
For each positive integer $m$, $f(m)$ is defined as the multiset of all numbers in its prime factorization, that is $f(m)=\{p_1,e_1,p_2,e_2,\ldots,p_k,e_k\}$.
For example, $f(24)=\{2,3,3,1\}$, $f(5)=\{1,5\}$ and $f(1)=\{\}$.
You are given a list consisting of $2n$ integers $a_1, a_2, \ldots, a_{2n}$. Count how many positive integers $m$ satisfy that $f(m)=\{a_1, a_2, \ldots, a_{2n}\}$. Since this value may be large, print it modulo $998\,244\,353$.
|
First, we will count how many times each different element in $a$ occurs and check which of these elements are prime numbers. This can be done by checking for each element whether it has a divisor up to its square root, or by using the Sieve of Eratosthenes. To construct a number $m$ such that $f(m)=\{a_1, a_2, \ldots, a_{2n}\}$ we have to choose $n$ elements of $a$ to be the primes in its factorization and $n$ elements to be the exponents. The numbers we choose to be the primes in the factorization have to be prime numbers and distinct. If there are fewer than $n$ distinct primes in $a$, then there is no number $m$ satisfying the property. So, from now on we will assume there are at least $n$ distinct prime numbers in $a$. Let $b_1,b_2, \ldots, b_s$ be the number of occurrences of each non-prime number in $a$ and let $c_1, c_2, \ldots, c_t$ be the number of occurrences of each prime number in $a$. After we choose the primes for the factorization, let $c'_1, c'_2, \ldots, c'_t$ be the remaining number of occurrences of each prime number. As we can choose each prime number at most once, either $c'_i=c_i$ or $c'_i=c_i-1$. For each way of choosing the primes, the number of possible values for $m$ is $\displaystyle \frac{n!}{b_1!\:\:b_2!\ldots b_s!\:\:c'_1!\:\:c'_2!\ldots c'_t!}$ because this is the number of ways to choose where to place the exponents. The answer to the problem is the sum of all these numbers over all ways of choosing the prime numbers. Observe that when we sum terms of the form $\displaystyle \frac{n!}{b_1!\:\:b_2!\ldots b_s!\:\:c'_1!\:\:c'_2!\ldots c'_t!}$, the value $\displaystyle \frac{n!}{b_1!\:\:b_2!\ldots b_s!}$ is a common factor of all these terms. Thus, we just need to find the sum of the terms $\displaystyle \frac{1}{c'_1!\:\:c'_2!\ldots c'_t!}$ and then multiply by the common factor. To find this sum, we will use dynamic programming.
Let $g(x,y)$ be the sum considering only the primes starting from the $x$-th one and assuming that $y$ primes still need to be chosen (that is, $n-y$ primes are already chosen). Then $\displaystyle g(x,y)=\frac{1}{c_x!}\times g(x+1,y)+\frac{1}{(c_x-1)!}\times g(x+1,y-1)$. The value $g(1,n)$ gives the desired sum. Intended complexity: $\mathcal{O}(n^2)$ (plus the complexity of primality checking). It is possible to solve the problem with greater constraints, like $n \leq 10^5$. Try to solve it with this new constraint!
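The multinomial terms are built from factorials and their modular inverses via Fermat's little theorem; a minimal self-contained sketch of that building block, separate from the dp (the helper name `multinomial` is mine):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 998244353;

// b^e mod MOD by binary exponentiation.
ll fast_pow(ll b, ll e) {
    ll r = 1;
    for (b %= MOD; e > 0; e /= 2, b = b * b % MOD)
        if (e & 1) r = r * b % MOD;
    return r;
}

// n! / (k1! k2! ...) mod MOD: the number of distinct arrangements
// of a multiset with group sizes k1, k2, ...
ll multinomial(ll n, const vector<ll>& ks) {
    ll res = 1;
    for (ll i = 1; i <= n; i++) res = res * i % MOD;
    for (ll k : ks) {
        ll f = 1;
        for (ll i = 1; i <= k; i++) f = f * i % MOD;
        res = res * fast_pow(f, MOD - 2) % MOD;  // divide by k! (Fermat)
    }
    return res;
}
```

For example, $\frac{4!}{2!\,2!} = 6$ is the number of ways to place two indistinguishable pairs into four slots.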
|
[
"combinatorics",
"divide and conquer",
"dp",
"math",
"number theory"
] | 1,900
|
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 998244353;
//checks if n is prime
bool is_prime(ll n){
if(n == 1){
return false;
}
for(ll i = 2; i * i <= n; i++){
if(n % i == 0){
return false;
}
}
return true;
}
//computes b ** e % MOD
ll fast_pow(ll b, ll e){
ll res = 1;
while(e > 0){
if(e % 2 == 1){
res = res * b % MOD;
}
b = b * b % MOD;
e /= 2;
}
return res;
}
vector<pair<ll, ll>> primes;
const int MAXN = 5050;
ll dp[MAXN][MAXN];
ll fact[MAXN], fact_inv[MAXN];
ll f(ll x, ll y){
ll &res = dp[x][y];
if(res >= 0){
return res;
}
if(x == (int)primes.size()){
return res = (y == 0);
}
res = fact_inv[primes[x].second] * f(x + 1, y) % MOD;
if(y > 0){
res = (res + fact_inv[primes[x].second - 1] * f(x + 1, y - 1)) % MOD;
}
return res;
}
int main(){
ios::sync_with_stdio(0); cin.tie(0); cout.tie(0);
//reading the input
int n; cin >> n;
vector<ll> a(2 * n);
for(int i = 0; i < 2 * n; i++){
cin >> a[i];
}
sort(a.begin(), a.end());
//compressed version of a, pairs {value, #occurrences}
vector<pair<ll, ll>> a_comp;
for(int i = 0; i < 2 * n; i++){
if(a_comp.size() == 0u || a_comp.back().first != a[i]){
a_comp.push_back({a[i], 1});
}else{
a_comp.back().second++;
}
}
//computing factorials and inverses
fact[0] = 1;
for(ll i = 1; i < MAXN; i++){
fact[i] = fact[i-1] * i % MOD;
}
fact_inv[0] = 1;
for(ll i = 0; i < MAXN; i++){
fact_inv[i] = fast_pow(fact[i], MOD - 2);
}
//adding only primes for the dp
for(auto i : a_comp){
if(is_prime(i.first)){
primes.push_back(i);
}
}
memset(dp, -1, sizeof(dp));
ll res = f(0, n);
//we have to consider the contribution of non-primes too!
for(auto i : a_comp){
if(!is_prime(i.first)){
res = res * fact_inv[i.second] % MOD;
}
}
res = res * fact[n] % MOD;
cout << res << "\n";
return 0;
}
|
1794
|
E
|
Labeling the Tree with Distances
|
You are given an unweighted tree of $n$ vertices numbered from $1$ to $n$ and a list of $n-1$ integers $a_1, a_2, \ldots, a_{n-1}$. A tree is a connected undirected graph without cycles. You will use each element of the list to label one vertex. No vertex should be labeled twice. You can label the only remaining unlabeled vertex with any integer.
A vertex $x$ is called good if it is possible to do this labeling so that for each vertex $i$, its label is the distance between $x$ and $i$. The distance between two vertices $s$ and $t$ on a tree is the minimum number of edges on a path that starts at vertex $s$ and ends at vertex $t$.
Find all good vertices.
|
First, count the number of occurrences of each element in the list $a$. Let these numbers be $c_0, c_1, \ldots, c_{n-1}$. Then, compute the polynomial hash of the array $c$, that is $\displaystyle H=\sum_{i=0}^{n-1}c_i\:b^i$ where $b$ is the base of the hash. Because the tree is unweighted, there are only $n$ possible values to write in the unlabeled vertex (all integers between $0$ and $n-1$). Including this extra number, the hash has $n$ possibilities: $H+b^i$ for each $i=0,1,\ldots, n-1$. Now, let's compute the same hash for each vertex of the tree. That is, for each vertex $x$ count how many vertices are at distances $0,1,\ldots,n-1$, and if these numbers are $d_0, d_1, \ldots, d_{n-1}$ compute the value $\displaystyle h_x=\sum_{i=0}^{n-1}d_i\:b^i$. To compute these values efficiently, we will use rerooting dynamic programming, performing two DFS traversals. In the first one, compute the parent of each node and the hash of each node considering only the nodes in its subtree. For each vertex $x$, if we call this hash $\text{dp}_x$ then $\displaystyle \text{dp}_x=1+b\sum_{\text{child}\:i}\text{dp}_i$. In the second one (which must be performed after the first one), for each vertex $x$ compute the hash of the parent of $x$ considering the nodes which are not in the subtree of vertex $x$. If we call this hash $\text{dp2}_x$ then $\displaystyle \text{dp2}_x=\text{dp}_{\text{pa}_x}-b\:\text{dp}_{x}+b\:\text{dp2}_{\text{pa}_x}$ where $\text{pa}_x$ is the parent of $x$. Now, observe that $h_x=\text{dp}_x+b\:\text{dp2}_x$. With these hashes, we can compute the good vertices. A vertex $x$ will be good if and only if $h_x=H+b^i$ for some $i=0,1,\ldots,n-1$. Doing a two pointers algorithm with the two sorted lists $h_1, h_2, \ldots, h_n$ and $H+1, H+b, H+b^2,\ldots, H+b^{n-1}$ is enough to find for each $h_x$ whether there is a number on the other list equal to it. To make the solution pass, it is advisable to use hashing with multiple moduli or just one big modulus.
Intended complexity: $\mathcal{O}(n\:\text{log}\:n)$.
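The count-array hash $H=\sum_i c_i\,b^i$ is an ordinary polynomial hash evaluated with Horner's rule; a minimal sketch with an illustrative base and modulus (the reference solution uses three random bases for safety):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Polynomial hash of a count array c: H = sum c[i] * b^i (mod m),
// evaluated from the highest coefficient down (Horner's rule).
ll poly_hash(const vector<ll>& c, ll b, ll m) {
    ll h = 0;
    for (int i = (int)c.size() - 1; i >= 0; i--)
        h = (h * b + c[i]) % m;
    return h;
}
```

With base $10$ the count array $\{1,2,3\}$ hashes to $3\cdot100+2\cdot10+1=321$; the $n$ candidate values for the extra label are then simply $H + b^i$.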
|
[
"data structures",
"dp",
"greedy",
"hashing",
"implementation",
"trees"
] | 2,400
|
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const int MAXN = 200005;
mt19937_64 rng(chrono::system_clock::now().time_since_epoch().count());
//Hashing stuff
const ll MOD[3] = {999727999, 1070777777, 1000000007};
ll B[3];
vector<ll> shift(vector<ll> h, ll val = 0){
for(int k = 0; k < 3; k++){
h[k] = (h[k] * B[k] + val) % MOD[k];
}
return h;
}
vector<ll> add(vector<ll> a, vector<ll> b){
vector<ll> res(3);
for(int k = 0; k < 3; k++){
res[k] = (a[k] + b[k]) % MOD[k];
}
return res;
}
vector<ll> sub(vector<ll> a, vector<ll> b){
vector<ll> res(3);
for(int k = 0; k < 3; k++){
res[k] = (a[k] - b[k] + MOD[k]) % MOD[k];
}
return res;
}
//Tree stuff
vector<int> g[MAXN];
bool vis[MAXN];
int parent[MAXN];
vector<ll> dp[MAXN], dp2[MAXN];
void dfs(int x){
vis[x] = true;
for(auto i : g[x]){
if(!vis[i]){
parent[i] = x;
dfs(i);
dp[x] = add(dp[x], shift(dp[i]));
}
}
dp[x] = add(dp[x], {1, 1, 1});
}
void dfs2(int x){
if(x != 0){
dp2[x] = sub(dp[parent[x]], shift(dp[x]));
dp2[x] = add(dp2[x], shift(dp2[parent[x]]));
}
for(auto i : g[x]){
if(i != parent[x]){
dfs2(i);
}
}
}
int main(){
ios::sync_with_stdio(0); cin.tie(0); cout.tie(0);
for(int k = 0; k < 3; k++){
B[k] = rng() % MOD[k];
}
//reading the input
int n; cin >> n;
vector<int> occurrences(n);
for(int i = 0; i < n - 1; i++){
int a; cin >> a;
occurrences[a]++;
}
for(int i = 0; i < n - 1; i++){
int u, v; cin >> u >> v;
u--; v--;
g[u].push_back(v);
g[v].push_back(u);
}
//calculating possible list hashes
vector<vector<ll>> list_hashes;
vector<ll> h = {0, 0, 0};
for(int i = n - 1; i >= 0; i--){
h = shift(h, occurrences[i]);
}
vector<ll> extra = {1, 1, 1};
for(int i = 0; i < n; i++){
list_hashes.push_back(add(h, extra));
extra = shift(extra);
}
//calculating possible tree hashes
for(int i = 0; i < n; i++){
dp[i] = {0, 0, 0};
dp2[i] = {0, 0, 0};
}
parent[0] = -1;
dfs(0);
dfs2(0);
vector<pair<vector<ll>, int>> tree_hashes;
for(int i = 0; i < n; i++){
if(i == 0){
tree_hashes.push_back({dp[i], i});
}else{
tree_hashes.push_back({add(dp[i], shift(dp2[i])), i});
}
}
//calculting the answer
sort(list_hashes.begin(), list_hashes.end());
sort(tree_hashes.begin(), tree_hashes.end());
vector<int> res;
int pos = 0;
for(auto lh : list_hashes){
while(pos < n && tree_hashes[pos].first < lh){
pos++;
}
while(pos < n && tree_hashes[pos].first == lh){
res.push_back(tree_hashes[pos].second);
pos++;
}
}
sort(res.begin(), res.end());
cout << res.size() << "\n";
for(auto i : res){
cout << i + 1 << " ";
}
cout << "\n";
return 0;
}
|
1795
|
A
|
Two Towers
|
There are two towers consisting of blocks of two colors: red and blue. Both towers are represented by strings of characters B and/or R denoting the order of blocks in them \textbf{from the bottom to the top}, where B corresponds to a blue block, and R corresponds to a red block.
\begin{center}
{\small These two towers are represented by strings BRBB and RBR.}
\end{center}
You can perform the following operation any number of times: choose a tower with \textbf{at least two blocks}, and move its \textbf{top} block to the \textbf{top} of the other tower.
The pair of towers is beautiful if no pair of touching blocks has the same color; i. e. no red block stands on top of another red block, and no blue block stands on top of another blue block.
You have to check if it is possible to perform any number of operations (possibly zero) to make the pair of towers beautiful.
|
Note that it does not make sense to move several blocks first from the left tower to the right, and then from the right to the left, since that would just undo the previous moves. Using this fact and the small constraints on the input data, one possible solution is the following: choose which tower we take blocks from (try both options), iterate over the number of operations, and then check whether both towers are beautiful after that number of operations. There is a faster solution: move all the blocks to the left tower, and then check that there is no more than one pair of adjacent blocks of the same color. If there are no such pairs, then we can split the tower into two in an arbitrary way, and if there is exactly one pair, then we need to make a "cut" exactly between the two blocks of the same color. Otherwise, there will always be a pair of adjacent blocks of the same color in one of the towers.
|
[
"brute force",
"implementation",
"strings"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
int main() {
int t;
cin >> t;
while (t--) {
int n, m;
string s, t;
cin >> n >> m >> s >> t;
reverse(t.begin(), t.end());
s += t;
int cnt = 0;
for (int i = 1; i < n + m; ++i) cnt += s[i - 1] == s[i];
cout << (cnt <= 1 ? "YES\n" : "NO\n");
}
}
|
1795
|
B
|
Ideal Point
|
You are given $n$ one-dimensional segments (each segment is denoted by two integers — its endpoints).
Let's define the function $f(x)$ as the number of segments covering point $x$ (a segment covers the point $x$ if $l \le x \le r$, where $l$ is the left endpoint and $r$ is the right endpoint of the segment).
An integer point $x$ is called ideal if it belongs to more segments than any other integer point, i. e. $f(y) < f(x)$ is true for any other integer point $y$.
You are given an integer $k$. Your task is to determine whether it is possible to remove some (possibly zero) segments, so that the given point $k$ becomes ideal.
|
First of all, let's delete all segments that do not cover the point $k$ (because they increase the value of the function $f$ at points other than $k$). If there are no segments left, then the answer is NO. Otherwise, all segments cover the point $k$. And it remains to check whether the point $k$ is the only point which is covered by all segments. Note that it does not make sense to delete any of the remaining segments, because if there are several points with maximum value of $f$, then deleting segments can only increase their number. To check the number of points with the maximum value of $f$, you can iterate over $x$ from $1$ to $50$ and calculate $f(x)$, because of the small number of segments in the problem. A faster way is to check the size of the intersection of all segments. The left boundary of the intersection is $L = \max\limits_{i=1}^{n}{l_i}$, and the right boundary is $R = \min\limits_{i=1}^{n}{r_i}$; if $L = R$, then the point $k$ is ideal, otherwise it is not.
|
[
"brute force",
"geometry",
"greedy"
] | 900
|
#include <bits/stdc++.h>
using namespace std;
int main() {
int t;
cin >> t;
while (t--) {
int n, k;
cin >> n >> k;
int L = 0, R = 55;
while (n--) {
int l, r;
cin >> l >> r;
if (l <= k && k <= r)
L = max(L, l), R = min(R, r);
}
cout << (L == R ? "YES\n" : "NO\n");
}
}
|
1795
|
C
|
Tea Tasting
|
A tea manufacturer decided to conduct a massive tea tasting. $n$ sorts of tea will be tasted by $n$ tasters. Both the sorts of tea and the tasters are numbered from $1$ to $n$. The manufacturer prepared $a_i$ milliliters of the $i$-th sort of tea. The $j$-th taster can drink $b_j$ milliliters of tea at once.
The tasting will be conducted in steps. During the first step, the $i$-th taster tastes the $i$-th sort of tea. The $i$-th taster drinks $\min(a_i, b_i)$ tea (how much is available of the $i$-th sort and how much the $i$-th taster can drink). $a_i$ also decreases by this amount.
Then all tasters move to the previous sort of tea. Thus, during the second step, the $i$-th taster tastes the $(i-1)$-st sort of tea. The $i$-th taster drinks $\min(a_{i-1}, b_i)$ tea. The $1$-st person ends the tasting.
During the third step, the $i$-th taster tastes the $(i-2)$-nd sort of tea. The $2$-nd taster ends the tasting. This goes on until everyone ends the tasting.
Take a look at the tasting process for $n = 3$, $a = [10, 20, 15]$, $b = [9, 8, 6]$. In the left column, there are the current amounts of each sort of tea. In the right column, there are the current amounts of tea each taster has drunk in total. The arrow tells which taster each tea goes to on the current step. The number on the arrow is the amount drunk: the minimum of how much is available of the sort of tea and how much the taster can drink.
For each taster, print how many milliliters of tea he/she will drink in total.
|
Consider how each sort of tea affects the tasters. The $i$-th sort makes tasters $i, i + 1, \dots, j - 1$, for some $j$, drink to their limit $b_i, b_{i + 1}, \dots, b_{j - 1}$, and the $j$-th taster drink the remaining tea. Sometimes there is no such $j$-th taster, but we'll explore the general case. Let's add the remaining tea straight to the $j$-th taster's answer $\mathit{add}_j$. And for each taster $k$ from $i$ to $j-1$ we'll add $1$ to the value $\mathit{cnt}_k$ denoting how many times they drank at their limit $b_k$. If we have these calculated, we can obtain the answer for taster $i$ by adding $\mathit{add}_i$ and $\mathit{cnt}_i \cdot b_i$. In order to find $j$, we can use prefix sums. Build $\mathit{pref}$ over the sequence $b$. Now you want to find the largest $j$ such that $\mathit{pref}_{j} - \mathit{pref}_i \le a_i$. Rewrite it as $\mathit{pref}_{j} \le a_i + \mathit{pref}_i$. You can do this with a binary search. In particular, with an upper_bound call. The amount of the remaining tea can also be calculated from prefix sums. To add $1$ on a range $[i, j-1]$, you can use a technique called delta encoding. Add $1$ to $\mathit{cnt}_i$. Subtract $1$ from $\mathit{cnt}_j$. After everything is added, propagate these values via a prefix sum. This way, if both $+1$ and $-1$ happened non-strictly to the left or strictly to the right of $i$, it doesn't affect $i$ at all (the segment either closes before $i$ or opens after $i$). Otherwise, it adds exactly $1$ to $\mathit{cnt}_i$. Overall complexity: $O(n \log n)$ per testcase.
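The delta-encoding step in isolation (also known as a difference array): add $+1$ at the left end of each range and $-1$ one past the right end, then take a single prefix sum. The helper below is mine and uses half-open ranges $[l, r)$:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count, for every position 0..n-1, how many of the given
// half-open ranges [l, r) cover it, in O(n + #ranges).
vector<long long> coverage(int n, const vector<pair<int,int>>& ranges) {
    vector<long long> cnt(n + 1, 0);
    for (auto& p : ranges) { cnt[p.first] += 1; cnt[p.second] -= 1; }
    for (int i = 1; i <= n; i++) cnt[i] += cnt[i - 1];  // propagate deltas
    cnt.pop_back();  // drop the sentinel cell at index n
    return cnt;
}
```

For ranges $[0,2)$ and $[1,4)$ over $n=4$ positions, the counts come out as $1, 2, 1, 1$.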
|
[
"binary search",
"data structures",
"implementation"
] | 1,500
|
#include <bits/stdc++.h>
using namespace std;
using li = long long;
int main() {
ios::sync_with_stdio(false); cin.tie(0);
int t;
cin >> t;
while (t--) {
int n;
cin >> n;
vector<li> a(n), b(n);
for (auto& x : a) cin >> x;
for (auto& x : b) cin >> x;
vector<li> sum(n + 1);
for (int i = 0; i < n; ++i) sum[i + 1] = sum[i] + b[i];
vector<li> cnt(n + 1), add(n + 1);
for (int i = 0; i < n; ++i) {
int j = upper_bound(sum.begin(), sum.end(), a[i] + sum[i]) - sum.begin() - 1;
cnt[i] += 1;
cnt[j] -= 1;
add[j] += a[i] - sum[j] + sum[i];
}
for (int i = 0; i < n; ++i) {
cout << cnt[i] * b[i] + add[i] << ' ';
cnt[i + 1] += cnt[i];
}
cout << '\n';
}
}
|
1795
|
D
|
Triangle Coloring
|
You are given an undirected graph consisting of $n$ vertices and $n$ edges, where $n$ is divisible by $6$. Each edge has a weight, which is a positive (greater than zero) integer.
The graph has the following structure: it is split into $\frac{n}{3}$ triples of vertices, the first triple consisting of vertices $1, 2, 3$, the second triple consisting of vertices $4, 5, 6$, and so on. Every pair of vertices from the same triple is connected by an edge. There are no edges between vertices from different triples.
You have to paint the vertices of this graph into two colors, red and blue. Each vertex should have exactly one color, there should be exactly $\frac{n}{2}$ red vertices and $\frac{n}{2}$ blue vertices. The coloring is called valid if it meets these constraints.
The weight of the coloring is the sum of weights of edges connecting two vertices with different colors.
Let $W$ be the maximum possible weight of a valid coloring. Calculate the number of valid colorings with weight $W$, and print it modulo $998244353$.
|
Let's ignore the constraint on the number of red/blue vertices for a moment. What is the maximum possible weight of a coloring? In any triple, at most two edges can connect vertices of different colors, and we can choose which two. So, the maximum possible weight of a coloring (not necessarily a valid one) is the sum of all edge weights except for the minimum weight in each triple. Let's show that it is always possible to choose a valid coloring achieving this weight. In each triple, we should make sure that the two maximum edges connect vertices with different colors; to do this, we can color the vertex incident to both of these edges in one color, and the two other vertices will be painted in the other color. So, for each triple of vertices, there will be either one red vertex and two blue ones, or two red ones and one blue. Let's suppose the first $\frac{n}{6}$ triples have one red vertex and two blue vertices each, and the other $\frac{n}{6}$ triples have one blue vertex and two red vertices each. That way, we obtain a valid coloring with maximum possible weight. Okay, now let's find out how to calculate the number of valid colorings with the maximum possible weight. Each triple of vertices will be either "red" (two red vertices, one blue), or "blue" (the other way around). Since exactly half of the vertices should be red, exactly half of the triples should be red, so the number of ways to choose a "color" for all triples is ${{n/3}\choose{n/6}}$. After choosing the color of each triple, let's choose how we actually color them. The triples are independent, so for each triple, we can introduce the coefficient $c_i$, which is the number of ways to color it so that its weight is maximized, and the triple has some specific type (either red or blue; it doesn't matter which, since these are symmetric).
Choosing the vertex which will be different from its neighbors is equivalent to choosing the edge which will not be included in the weight of the coloring (this is the edge which is not incident to the chosen vertex). So, $c_i$ is equal to the number of ways to choose that vertex in the $i$-th triple so that the weight is maximized; i. e. the weight of the edge not incident to the chosen vertex should be minimized. Thus, $c_i$ is just the number of minimum edge weights in the $i$-th triple. The formula for the final answer is ${{n/3}\choose{n/6}} \prod\limits_{i=1}^{n/3} c_i$.
|
[
"combinatorics",
"math"
] | 1,600
|
#include <bits/stdc++.h>
using namespace std;
const int MOD = 998244353;
int add(int x, int y)
{
return ((x + y) % MOD + MOD) % MOD;
}
int mul(int x, int y)
{
return x * 1ll * y % MOD;
}
int binpow(int x, int y)
{
int z = 1;
while(y)
{
if(y % 2 == 1) z = mul(z, x);
x = mul(x, x);
y /= 2;
}
return z;
}
int inv(int x)
{
return binpow(x, MOD - 2);
}
int divide(int x, int y)
{
return mul(x, inv(y));
}
int main()
{
int n;
cin >> n;
int ans = 1;
for(int i = 1; i <= n / 6; i++)
ans = mul(ans, divide(i + n / 6, i));
for(int i = 0; i < n / 3; i++)
{
vector<int> a(3);
for(int j = 0; j < 3; j++)
cin >> a[j];
int mn = *min_element(a.begin(), a.end());
int cnt_min = count(a.begin(), a.end(), mn);
ans = mul(ans, cnt_min);
}
cout << ans << endl;
}
|
1795
|
E
|
Explosions?
|
You are playing yet another game where you kill monsters using magic spells. There are $n$ cells in the row, numbered from $1$ to $n$. Initially, the $i$-th cell contains the $i$-th monster with $h_i$ health.
You have a basic spell that costs $1$ MP and deals $1$ damage to the monster you choose. You can cast it any number of times. Also, you have a special scroll with "Explosion" spell you can use only once. You want to finish killing monsters with explosion, that's why you, firstly, cast the basic spell several times (possibly, zero), and then after that, you cast one "Explosion".
How does "Explosion" spell work? Firstly, you choose the power of the spell: if you pour $x$ MP into it, "Explosion" will deal $x$ damage. Secondly, you choose some monster $i$, which will be targeted by the spell. That's what happens next:
- if its \textbf{current} health $h_i > x$, then it stays alive with its health decreased by $x$;
- if $h_i \le x$, the $i$-th monster dies with an explosion that deals $h_i - 1$ damage to monsters in the neighboring cells $i - 1$ and $i + 1$, if these cells exist and monsters inside are still alive;
- if the damage dealt by the explosion is enough to kill the monster $i - 1$ (or $i + 1$), i. e. the current $h_{i - 1} \le h_i - 1$ (or $h_{i + 1} \le h_i - 1$), then that monster also dies creating a secondary explosion of power $h_{i-1} - 1$ (or $h_{i+1} - 1$) that may deal damage to its neighbors, and so on, until the explosions end.
Your goal is to kill all the remaining monsters with those "chaining" explosions, that's why you need a basic spell to decrease $h_i$ of some monsters or even kill them beforehand (monsters die when their current health $h_i$ becomes less or equal to zero). Note that monsters don't move between cells, so, for example, monsters $i$ and $i + 2$ will never become neighbors.
What is the minimum total MP you need to kill all monsters in the way you want? The total MP is counted as the sum of the number of basic spells you cast and the power $x$ of explosion scroll you've chosen.
|
Note that each unit of damage dealt by explosions saves us from using one more basic spell. In other words, the more damage from explosions, the better. So, the answer will be equal to $\sum{h_i} - (\text{maximum total damage from explosions})$. Note that in order to kill all remaining monsters with the last spell, the array $h$ should have the following structure: there is a monster $p$ we cast the spell onto, and $h$ is strictly increasing on $[0..p]$ and strictly decreasing on $[p..n)$ (ignoring a prefix and suffix of $0$-s). Let's focus on the left part of the array $h$ (segment $[0...p]$), since solving the right part is exactly the same. Maximizing the total damage is equivalent to maximizing the sum of $h_i$ right before the final spell. Note that we can use a straight greedy strategy: to kill the chosen monster $p$ we should use an "Explosion" spell of power exactly $h_p$; it's not optimal to make it either more or less powerful. After that, monster $p$ will create an explosion of power $h_p - 1$. If $h_{p-1} > h_{p} - 1$ we must decrease it to exactly $h_{p} - 1$ to continue the chain of explosions of maximum total damage. If $h_{p-2} > h_{p} - 2$ we also decrease it to exactly $h_{p} - 2$ and so on (the general condition is $h_{p - i} > h_{p} - i$). This series stops either if $h_{p} - i \le 0$ (i. e. $i \ge h_p$), or if there are no monsters left ($p - i < 0$), or if we meet a monster with $h_{p - i} \le h_{p} - i$. The first two cases are easy to check in constant time, so let's look at the last case. Suppose that monster's position is $j = p - i$; then $i = p - j$, and $h_{j} \le h_{p} - (p - j)$ $\leftrightarrow$ $h_{j} - j \le h_{p} - p$. That monster $j$ is interesting to us because after death it creates an explosion of damage $h_{j} - 1$ that no longer depends on $p$, and the next calculation is practically the same task: what chain of explosions we can have if we start from $j$.
That idea drives us to dp: let $d[i]$ be the maximum damage of the chain of explosions we can deal if we start from $i$ and move to the left. For simplicity, let's include $h_i$ in that total damage. Calculating $d[i]$ goes as follows: find the closest $j$ such that $h_{j} - j \le h_{i} - i$. If there is no such $j$ (or if that $j$ is too far from $i$, i. e. $i - j \ge h_i$), we set $j = \max(-1, i - h_i)$. Now we know that on the interval $(j, i]$ the damage dealt is an arithmetic progression: for $i$ it's $h_i$, for $i - 1$ it's $h_i - 1$, ..., for $j + 1$ it's $h_i - (i - j) + 1$. In total, $d[i] = (i - j) h_i - \frac{(i - j) (i - j - 1)}{2}$. And if such $j$ exists and is not too far away, we increase $d[i]$ by $d[j]$ as well. The last question is finding, for each $i$, the closest $j < i$ such that $h_j - j \le h_i - i$. Note that if we define $h'_i = h_i - i$, we just need to find the last $h'_j \le h'_i$, and that's a quite standard task that can be solved with a stack. Let's iterate over $i$ and maintain a stack of previous values $h'_i$. When we need to find $j$ for the current $h'_i$, we look at the top of the stack: if $h'_j \le h'_i$, we found the $j$ we wanted; otherwise, we pop it and check the new top, and so on, until either we find $j$ or the stack becomes empty, which would mean there is no $h'_j \le h'_i$. After processing the $i$-th element, push $h'_i$ on top of the stack. Why does it work? Consider some $i$. The element on top of the stack is $h'_j$ (initially it's $h'_{i-1}$, but we are talking about the general case). If $h'_j \le h'_i$, we found what we want. Otherwise, $h'_j > h'_i$; but the elements $h'_k$ that were popped at iteration $j$ were all greater than $h'_j$. So each such $h'_k$ is greater than $h'_i$ as well, and there was no need to even consider them, i. e. popping them out earlier doesn't break anything.
Since each element is pushed onto the stack once and popped at most once, the complexity is $O(n)$ over all $i$ from $1$ to $n$, or $O(1)$ amortized per element. The answer for the chosen position $p$ is then $\sum{h_i} - dL[p] - dR[p] + 2 h_p$, where $dL[p]$ is the dp we discussed above, $dR[p]$ is the same dp but on the reversed array $h$, and the term $2 h_p$ appears because we included $h_p$ in both $dL[p]$ and $dR[p]$. Both $dL$ and $dR$ are calculated in $O(n)$, so the total complexity is $O(n)$.
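As a sanity check, the left-to-right dp with the stack can be sketched in Python (a direct transcription of the idea above; the function name is illustrative):

```python
def chain_damage(h):
    """d[i] = maximum total damage of a leftward explosion chain
    started at monster i, including h[i] itself."""
    n = len(h)
    d = [0] * n
    stack = []  # pairs (h[j] - j, j); values are non-decreasing bottom to top
    for i in range(n):
        while stack and stack[-1][0] > h[i] - i:
            stack.pop()
        j = max(-1, i - h[i])          # chain dies out or runs off the array
        if stack:
            j = max(j, stack[-1][1])   # closest j with h[j] - j <= h[i] - i
        length = i - j
        # arithmetic progression h[i], h[i]-1, ..., h[i]-length+1
        d[i] = length * h[i] - length * (length - 1) // 2
        if j >= 0 and length < h[i]:
            d[i] += d[j]               # continue the chain from monster j
        stack.append((h[i] - i, i))
    return d
```

The full solution runs this once on $h$ and once on the reversed $h$, then minimizes $\sum h_i - dL[p] - dR[p] + 2 h_p$ over $p$.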
|
[
"binary search",
"data structures",
"dp",
"greedy",
"math"
] | 2,200
|
import java.util.*
fun main() {
repeat(readln().toInt()) {
val n = readln().toInt()
var h = readln().split(' ').map { it.toInt() }
val d = Array(2) { LongArray(n) { 0 } }
for (tp in 0..1) {
val s = Stack<Pair<Int, Int>>()
for (i in h.indices) {
while (s.isNotEmpty() && s.peek().first > h[i] - i)
s.pop()
var j = maxOf(-1, i - h[i])
if (s.isNotEmpty())
j = maxOf(j, s.peek().second)
val len = (i - j).toLong()
d[tp][i] = len * h[i] - len * (len - 1) / 2
if (j >= 0 && len < h[i])
d[tp][i] += d[tp][j]
s.push(Pair(h[i] - i, i))
}
h = h.reversed()
}
d[1] = d[1].reversedArray()
var ans = 1e18.toLong()
val sum = h.fold(0L) { total, it -> total + it }
for (i in h.indices) {
val cur = sum - d[0][i] - d[1][i] + 2 * h[i]
ans = minOf(ans, cur)
}
println(ans)
}
}
|
1795
|
F
|
Blocking Chips
|
You are given a tree, consisting of $n$ vertices. There are $k$ chips, placed in vertices $a_1, a_2, \dots, a_k$. All $a_i$ are distinct. Vertices $a_1, a_2, \dots, a_k$ are colored black initially. The remaining vertices are white.
You are going to play a game where you perform some moves (possibly, zero). On the $i$-th move ($1$-indexed) you are going to move the $((i - 1) \bmod k + 1)$-st chip from its current vertex to an adjacent \textbf{white} vertex and color that vertex \textbf{black}. So, if $k=3$, you move chip $1$ on move $1$, chip $2$ on move $2$, chip $3$ on move $3$, chip $1$ on move $4$, chip $2$ on move $5$ and so on. If there is no adjacent white vertex, then the game ends.
What's the maximum number of moves you can perform?
|
The constraints tell us that the solution should be linear or pretty close to it. Well, in particular, that implies that the solution almost certainly isn't dynamic programming, since we have both $n$ and $k$ to care about. Thus, we'll think about something greedy. When we know the number of moves the game will last, we can tell how many steps each chip should make. Since the longer the game lasts, the more steps each chip makes, feasibility is a monotonic function of the game length. Let's apply binary search and think about how to check whether each chip can make some known number of steps. A common idea in problems where you have to do something greedily on a tree is to root the tree arbitrarily and process everything bottom up. Consider the bottommost chip. If it can move its required number of steps downwards, it's always optimal to do that. Since it's the bottommost chip, it can only make things worse for the chips above it, and none of them can pass through the initial vertex of this chip anyway. If it can't, it has to move to its parent vertex. Let's move it there and deal with this chip later - when it becomes the bottommost again. If it can't move to its parent either, it can't move at all. Thus, the game can't last this many steps. Since we only apply either a move which is guaranteed not to interrupt any other moves or a move which is forced, the greedy strategy is correct. As for implementation details, it's not too tricky. Basically, for each vertex, we should maintain these values: whether this vertex has been visited; the number of steps the chip in this vertex still has to make (if any chip is there); the longest path downwards from this vertex via non-visited vertices. The second value can be initialized beforehand and pushed to the parent when needed. The rest are easily maintained with a single dfs. Overall complexity: $O(n \log n)$ per testcase.
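A minimal Python sketch of the feasibility check for a fixed game length $m$ (recursive dfs, so suitable only for small trees; all names are illustrative):

```python
def feasible(adj, chips, m, k):
    """Can the game last m moves? adj is a 0-indexed adjacency list,
    chips lists the starting vertices of the k chips."""
    n = len(adj)
    used = [False] * n   # visited (black) vertices
    req = [0] * n        # steps the chip currently at v still has to make
    for i, v in enumerate(chips):
        used[v] = True
        req[v] = m // k + (1 if i < m % k else 0)
    depth = [0] * n      # longest downward path via non-visited vertices

    def dfs(v, p):
        d = 0
        for u in adj[v]:
            if u == p:
                continue
            if not dfs(u, v):
                return False
            if not used[u]:
                d = max(d, depth[u] + 1)
        depth[v] = d
        if req[v] == 0 or d >= req[v]:
            return True          # the chip (if any) can go downwards
        if p == -1 or used[p]:
            return False         # the forced move upwards is impossible
        used[p] = True           # push the chip to the parent
        req[p] = req[v] - 1
        return True

    return dfs(0, -1)
```

The answer is then the largest $m$ for which `feasible` returns true, found by binary search.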
|
[
"binary search",
"constructive algorithms",
"dfs and similar",
"greedy",
"trees"
] | 2,400
|
#include <bits/stdc++.h>
using namespace std;
#define forn(i, n) for(int i = 0; i < int(n); i++)
vector<vector<int>> g;
vector<int> req;
vector<char> used;
vector<int> d;
bool dfs(int v, int p = -1){
d[v] = 0;
for (int u : g[v]) if (u != p){
if (!dfs(u, v)) return false;
if (!used[u]) d[v] = max(d[v], d[u] + 1);
}
if (req[v] == 0 || d[v] >= req[v]) return true;
if (p == -1 || used[p]) return false;
used[p] = true;
req[p] = req[v] - 1;
return true;
}
int main(){
int t;
scanf("%d", &t);
while (t--){
int n;
scanf("%d", &n);
g.assign(n, {});
d.resize(n);
forn(i, n - 1){
int v, u;
scanf("%d%d", &v, &u);
--v, --u;
g[v].push_back(u);
g[u].push_back(v);
}
int k;
scanf("%d", &k);
vector<int> a(k);
forn(i, k){
scanf("%d", &a[i]);
--a[i];
}
int l = 1, r = n;
int res = 0;
while (l <= r){
int m = (l + r) / 2;
used.assign(n, 0);
req.assign(n, 0);
forn(i, k){
used[a[i]] = true;
req[a[i]] = m / k + (i < m % k);
}
if (dfs(0)){
res = m;
l = m + 1;
}
else{
r = m - 1;
}
}
printf("%d\n", res);
}
}
|
1795
|
G
|
Removal Sequences
|
You are given a simple undirected graph, consisting of $n$ vertices and $m$ edges. The vertices are numbered from $1$ to $n$. The $i$-th vertex has a value $a_i$ written on it.
You will be removing vertices from that graph. You are allowed to remove vertex $i$ only if its degree is equal to $a_i$. When a vertex is removed, all edges incident to it are also removed, thus, decreasing the degree of adjacent non-removed vertices.
A valid sequence of removals is a permutation $p_1, p_2, \dots, p_n$ $(1 \le p_i \le n)$ such that the $i$-th vertex to be removed is $p_i$, and every removal is allowed.
A pair $(x, y)$ of vertices is nice if there exist two valid sequences of removals such that $x$ is removed before $y$ in one of them and $y$ is removed before $x$ in the other one.
Count the number of nice pairs $(x, y)$ such that $x < y$.
|
Let's consider what the sequence of removals looks like in general. We will base some intuition on the fact that at least one valid sequence is guaranteed to exist. Remove all vertices that have their degree correct from the start at once. There surely are such vertices, since a valid sequence would have to start with some of them. Notice that there can't be any adjacent vertices among them. If there were, we wouldn't be able to remove such a pair regardless of the order we choose, since removing one of them makes the other one's degree too low. Now remove the vertices that just got their degrees correct from removing the first layer. Once again, these must exist (if the graph is not empty yet), because otherwise any valid sequence would get stuck. Proceed until nothing is left. This algorithm is basically a bfs, and you can implement it like one. Note that each vertex becomes available to be removed only after a certain subset of its neighbours is removed. No matter what order you choose to remove the vertices in, these vertices will always be the same. Huh, so for each vertex, some of its neighbours have to be removed before it, and the rest have to be removed after it (since otherwise, the degree of that vertex would become too low). That actually means that our graph is not as undirected as it seemed. We can direct each edge from the vertex that is removed earlier to the one removed later. This makes a valid sequence of removals just a topological sort of that directed graph. So a pair is nice if there exist two topological orders in which $x$ and $y$ appear in different relative orders. We can make a bold but perfectly reasonable guess about all nice pairs: a pair is nice if neither of $x$ and $y$ is reachable from the other. The necessity of this condition is obvious. Let's show sufficiency by constructing an order such that $x$ goes before $y$. To remove $x$, we first have to remove all vertices that have edges to $x$. To remove them, we have to remove vertices with edges to them.
And so on. Basically, to remove $x$, we have to remove all vertices that are reachable from $x$ in the transposed directed graph. Since $x$ is not reachable from $y$, $y$ doesn't have to be removed before $x$. So we can first remove all the required vertices, then remove $x$, then continue removing vertices until we are able to remove $y$. By switching $x$ and $y$ in the description of that construction, we obtain the construction for $y$ before $x$. Thus, we reduced the problem to a rather well-known one: calculate the number of reachable pairs of vertices in a directed graph. As far as I know, it's not known to be solvable in sub-quadratic time. And we are not given a specific graph. Yes, it's obviously acyclic, but it turns out every acyclic graph can be made into a test for this problem: you just have to make $a_v$ equal to the number of outgoing edges for each $v$. Somehow we are still given $10^5$ vertices and edges. If you are familiar with that problem, you might know that you can use a bitset to solve it. In particular, let $\mathit{reach}[v]$ be a bitset such that $\mathit{reach}[v][u] = 1$ if $u$ is reachable from $v$. Then you can initialize $\mathit{reach}[v][v] = 1$ for all vertices and propagate the knowledge in reverse topological order by applying $\mathit{reach}[v] = \mathit{reach}[v] | \mathit{reach}[u]$ for all edges $(v, u)$. Unfortunately, that requires $O(n^2)$ memory, and $10^{10}$ bits is over a gigabyte. Let's use one of my favorite tricks to make a solution with $O(n)$ memory and the same complexity. Man, I love that trick. Process vertices in batches of $64$. Let's calculate which vertices can reach vertices from $1$ to $64$. The algorithm is basically the same. For each vertex, store a smaller bitset of size $64$ (also known as an unsigned long long). Initialize the bitsets for the $64$ vertices of the batch and propagate the same way over all $n$ vertices. Now just add up the number of ones in each bitset (__builtin_popcountll).
Proceed to the next batch. That makes $\frac{n}{64}$ iterations of an $O(n + m)$ algorithm. This might require some constant-factor optimizations. In particular, I suggest not using dfs inside the iteration, since the recursion makes it really slow. You might iterate over vertices in reverse topological order and their outgoing edges. Or, which is way faster, unroll the graph into a list of edges and iterate over it directly.
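The batching trick translates almost verbatim to Python, with a plain integer per vertex playing the role of the 64-bit bitset (a sketch, under the assumption that the directed edges are supplied in reverse topological order of their tails):

```python
def count_nice_pairs(n, edges):
    """edges: directed edges (v, u) of a DAG ("v is removed before u"),
    listed in reverse topological order of v, so that mask[u] is final
    by the time any edge out of v is processed."""
    reachable = 0
    for lo in range(0, n, 64):                # batch of 64 target vertices
        mask = [0] * n
        for i in range(lo, min(n, lo + 64)):
            mask[i] = 1 << (i - lo)           # every vertex reaches itself
        for v, u in edges:
            mask[v] |= mask[u]                # propagate reachability
        reachable += sum(bin(m).count("1") for m in mask)
    # reachable counts each ordered reachable pair once, plus n self-pairs
    return n * (n + 1) // 2 - reachable
```

This mirrors the C++ solution below: total pairs (including self-pairs) minus all reachable pairs leaves exactly the nice pairs.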
|
[
"bitmasks",
"dfs and similar",
"graphs"
] | 2,700
|
#include <bits/stdc++.h>
using namespace std;
#define forn(i, n) for(int i = 0; i < int(n); i++)
typedef unsigned long long uli;
int main(){
int t;
scanf("%d", &t);
while (t--){
int n, m;
scanf("%d%d", &n, &m);
vector<int> a(n);
forn(i, n) scanf("%d", &a[i]);
vector<vector<int>> g(n);
forn(i, m){
int v, u;
scanf("%d%d", &v, &u);
--v, --u;
g[v].push_back(u);
g[u].push_back(v);
}
vector<char> rem(n);
vector<int> d(n);
forn(i, n) d[i] = g[i].size();
queue<int> q;
forn(i, n) if (a[i] == d[i]) q.push(i);
vector<pair<int, int>> ord;
while (!q.empty()){
int v = q.front();
q.pop();
rem[v] = true;
for (int u : g[v]) if (!rem[u]){
--d[u];
ord.push_back({v, u});
if (d[u] == a[u])
q.push(u);
}
}
reverse(ord.begin(), ord.end());
vector<uli> mask(n);
long long ans = n * 1ll * (n + 1) / 2;
for (int l = 0; l < n; l += 64){
int r = min(n, l + 64);
for (int i = l; i < r; ++i)
mask[i] = 1ull << (i - l);
for (const pair<int, int> &it : ord)
mask[it.first] |= mask[it.second];
forn(i, n){
ans -= __builtin_popcountll(mask[i]);
mask[i] = 0;
}
}
printf("%lld\n", ans);
}
}
|
1796
|
A
|
Typical Interview Problem
|
The FB-string is formed as follows. Initially, it is empty. We go through all positive integers, starting from $1$, in ascending order, and do the following for each integer:
- if the current integer is divisible by $3$, append F to the end of the FB-string;
- if the current integer is divisible by $5$, append B to the end of the FB-string.
Note that if an integer is divisible by both $3$ and $5$, we append F, and then B, not in the opposite order.
The first $10$ characters of the FB-string are FBFFBFFBFB: the first F comes from the integer $3$, the next character (B) comes from $5$, the next F comes from the integer $6$, and so on. It's easy to see that this string is infinitely long. Let $f_i$ be the $i$-th character of FB-string; so, $f_1$ is F, $f_2$ is B, $f_3$ is F, $f_4$ is F, and so on.
You are given a string $s$, consisting of characters F and/or B. You have to determine whether it is a substring (contiguous subsequence) of the FB-string. In other words, determine if it is possible to choose two integers $l$ and $r$ ($1 \le l \le r$) so that the string $f_l f_{l+1} f_{l+2} \dots f_r$ is exactly $s$.
For example:
- FFB is a substring of the FB-string: if we pick $l = 3$ and $r = 5$, the string $f_3 f_4 f_5$ is exactly FFB;
- BFFBFFBF is a substring of the FB-string: if we pick $l = 2$ and $r = 9$, the string $f_2 f_3 f_4 \dots f_9$ is exactly BFFBFFBF;
- BBB is not a substring of the FB-string.
|
It's easy to see that the FB-string repeats every $8$ characters: after processing every $15$ numbers, we will get the same remainders modulo $3$ and $5$ as $15$ numbers ago, and when we process $15$ consecutive numbers, we get $8$ characters. So, $f_{i+8} = f_i$. This means that if we want to find a substring no longer than $10$ characters in the FB-string, we don't need to consider more than $17$ first characters of the FB-string: the substring of length $10$ starting with the $8$-th character ends with the $17$-th character, and we don't need to consider substrings starting on positions greater than $8$. So, the solution is to generate at least $17$ first characters of the FB-string, and then check if the substring occurs in the generated string using a standard function like find.
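The period is easy to verify directly with a tiny Python snippet (illustrative):

```python
# Build one 15-number block of the FB-string and check the 8-character period.
block = ""
for x in range(1, 16):
    if x % 3 == 0:
        block += "F"
    if x % 5 == 0:
        block += "B"
print(block)  # FBFFBFFB: 8 characters per 15 integers, so f[i+8] = f[i]
```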
|
[
"brute force",
"implementation",
"strings"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
int main()
{
string fb;
int cur = 1;
while(fb.size() < 100)
{
if(cur % 3 == 0) fb += "F";
if(cur % 5 == 0) fb += "B";
cur++;
}
int t;
cin >> t;
for(int i = 0; i < t; i++)
{
int k;
cin >> k;
string s;
cin >> s;
cout << (fb.find(s) != string::npos ? "YES" : "NO") << endl;
}
}
|
1796
|
B
|
Asterisk-Minor Template
|
You are given two strings $a$ and $b$, consisting of lowercase Latin letters.
A template $t$ is a string, consisting of lowercase Latin letters and asterisks (character '*'). A template is called asterisk-minor if the number of asterisks in it is less than or equal to the number of letters in it.
A string $s$ is said to be matching a template $t$ if you can replace each asterisk in $t$ with a string of lowercase Latin letters (possibly, an empty string) so that it becomes equal to $s$.
Find an asterisk-minor template such that both $a$ and $b$ match it, or report that such a template doesn't exist. If there are multiple answers, print any of them.
|
What's the reason behind the authors specifically asking for templates that have no more asterisks than letters? Well, without that, the problem would be kind of trivial: the template "*" is matched by every string, so it would always work. Hmm, let's try to make something similar to that template then. We basically have to find some part that occurs in both strings, so that spending letters on it earns us the freedom to use asterisks. There are some easy cases. If the first letters of both strings are the same, then the template can be that letter followed by an asterisk. There's a symmetrical case for the last letter. By studying the examples, you can also notice the final case: a common substring of both strings of length at least two, surrounded by two asterisks. Moreover, since we only use two asterisks, we can look for a substring of length exactly two (which always exists if a longer common substring exists). Turns out, that's it: if a template exists, one of these three kinds also exists. This is not that hard to show. If the first two kinds don't work, then you have to use asterisks on both sides of the template. In order for a template with asterisks on both sides to work, there have to be two adjacent letters in it somewhere (otherwise, it's like "*a*a*a*", and there are more asterisks than letters). And since at least one such two-letter substring exists, we can just remove everything other than this substring and the asterisks on its sides. Overall complexity: $O(|a| \cdot |b|)$ per testcase.
|
[
"implementation",
"strings"
] | 1,000
|
for _ in range(int(input())):
a = input()
b = input()
if a[0] == b[0]:
print("YES")
print(a[0] + "*")
continue
if a[-1] == b[-1]:
print("YES")
print("*" + a[-1])
continue
for i in range(len(b) - 1):
if (b[i] + b[i + 1]) in a:
print("YES")
print("*" + b[i] + b[i + 1] + "*")
break
else:
print("NO")
|
1796
|
C
|
Maximum Set
|
A set of positive integers $S$ is called beautiful if, for every two integers $x$ and $y$ from this set, either $x$ divides $y$ or $y$ divides $x$ (or both).
You are given two integers $l$ and $r$. Consider all beautiful sets consisting of integers not less than $l$ and not greater than $r$. You have to print two numbers:
- the maximum possible size of a beautiful set where all elements are from $l$ to $r$;
- the number of beautiful sets consisting of integers from $l$ to $r$ with the maximum possible size.
Since the second number can be very large, print it modulo $998244353$.
|
Every beautiful set can be represented as a sequence of its elements in sorted order. Let these elements for some set be $a_1, a_2, \dots, a_m$; also, let $d_i = \frac{a_{i+1}}{a_i}$. When the set is beautiful, every $d_i$ is an integer greater than $1$. It's easy to see that if $a_1$ and $a_m$ belong to $[l, r]$, the whole set belongs to $[l, r]$. Since $a_m = a_1 \cdot \prod \limits_{i=1}^{m-1} d_i$, in order to maximize $m$, we need to choose $a_1$ and $d_i$ as small as possible. So, why don't we choose $a_1 = l$ and every $d_i = 2$? This will allow us to calculate the maximum possible size of a beautiful set (let $m$ be this maximum possible size). Okay, what about counting those sets? The claims $a_1 = l$ and that every $d_i = 2$ are no longer true by default. However, there are some constraints on $d_i$. Firstly, every $d_i \le 3$. If we had some value of $d_i \ge 4$, we could replace it with two values of $d_i = 2$, and the size of the set would increase. Secondly, there is at most one $d_i = 3$. If there are two values $d_i = 3$, we could replace them with three $d_i = 2$, and the size of the set would increase as well. So, the sequence $d_i$ contains at most one value $3$, and the rest of the values are $2$. We will divide the sets we want to count into two categories: the ones with all $d_i = 2$, and the ones with one value $d_i = 3$. To count the sets in the first category, we simply need to count the number of different minimum values in those sets. Those minimum values have to be such that multiplying them by $2^{m-1}$ wouldn't make them greater than $r$, so these are all integers from the segment $[l, \lfloor \frac{r}{2^{m-1}} \rfloor]$. For every such integer, there exists exactly one set of the first category. To count the sets in the second category, we do a similar thing. 
The minimum value in the set should be from the segment $[l, \lfloor \frac{r}{2^{m-2} \cdot 3} \rfloor]$; but for every integer from this segment, there are $m-1$ different sets of the second category since there are $m-1$ ways to choose which $d_i$ is equal to $3$.
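Both counts fit in a few lines of Python (a sketch; the modulo from the statement is omitted, which is only safe while the counts stay small):

```python
def beautiful(l, r):
    # maximum size: start from l and keep doubling
    m = 1
    while l * (1 << m) <= r:
        m += 1
    # first category (all d_i = 2): minimum element in [l, r // 2^(m-1)]
    count = r // (1 << (m - 1)) - l + 1
    # second category (one d_i = 3): minimum in [l, r // (2^(m-2) * 3)],
    # with m - 1 positions for the single 3
    if m > 1:
        count += (m - 1) * max(0, r // (1 << (m - 2)) // 3 - l + 1)
    return m, count
```

For example, for $l = 3$, $r = 11$ the maximum size is $2$ and there are $4$ sets: $\{3,6\}$, $\{4,8\}$, $\{5,10\}$, $\{3,9\}$.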
|
[
"binary search",
"math"
] | 1,600
|
#include <bits/stdc++.h>
using namespace std;
int main()
{
int t;
cin >> t;
for(int i = 0; i < t; i++)
{
int l, r;
cin >> l >> r;
int max_size = 1;
while((l << max_size) <= r)
max_size++;
int ans2 = (r / (1 << (max_size - 1)) - l + 1);
if(max_size > 1)
ans2 += (max_size - 1) * max(0, (r / (1 << (max_size - 2)) / 3 - l + 1));
cout << max_size << " " << ans2 << endl;
}
}
|
1796
|
D
|
Maximum Subarray
|
You are given an array $a_1, a_2, \dots, a_n$, consisting of $n$ integers. You are also given two integers $k$ and $x$.
You have to perform the following operation exactly once: add $x$ to the elements on \textbf{exactly} $k$ \textbf{distinct} positions, and subtract $x$ from all the others.
For example, if $a = [2, -1, 2, 3]$, $k = 1$, $x = 2$, and we have picked the first element, then after the operation the array $a = [4, -3, 0, 1]$.
Let $f(a)$ be the maximum possible sum of a subarray of $a$. The subarray of $a$ is a contiguous part of the array $a$, i. e. the array $a_i, a_{i + 1}, \dots, a_j$ for some $1 \le i \le j \le n$. An empty subarray should also be considered, it has sum $0$.
Let the array $a'$ be the array $a$ after applying the aforementioned operation. Apply the operation in such a way that $f(a')$ is the maximum possible, and print the maximum possible value of $f(a')$.
|
There are greedy and dynamic programming solutions; we will describe the dynamic programming one. The main task is to choose some segment that is the answer to the problem, while also choosing $k$ positions to increase by $x$. To do this, we can use dynamic programming $dp_{i, j, t}$, where $i$ is the number of positions that have already been considered (from $0$ to $n$), $j$ is the number of elements that have already been increased by $x$ (from $0$ to $k$), and $t$ is a flag showing the current state (whether we are before the chosen segment, inside the segment, or after the segment). Transitions in this dynamic programming are quite simple: we have a choice either to increase $j$ by $1$, in which case the value of the $i$-th element is $a_i + x$, or not to, in which case the value of the $i$-th element is $a_i - x$; we can also change the state of the flag (note that you can only switch from the current state to subsequent ones, i. e., for example, you cannot switch from the state "the segment has already ended" to the state "inside the segment"). If the current state of the flag is "inside the segment", then $a_i + x$ or $a_i - x$ (depending on the selected transition) should be added to the dynamic programming value itself. So, we get a solution in $O(nk)$.
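The dp above fits in a dozen lines of Python (a direct sketch of the $O(nk)$ transitions; the function name is illustrative):

```python
def max_subarray(a, k, x):
    NEG = float("-inf")
    # dp[j][t]: best value so far; j positions received +x,
    # t = 0 before / 1 inside / 2 after the chosen segment
    dp = [[NEG] * 3 for _ in range(k + 1)]
    dp[0][0] = 0
    for v in a:
        ndp = [[NEG] * 3 for _ in range(k + 1)]
        for j in range(k + 1):
            for t in range(3):
                if dp[j][t] == NEG:
                    continue
                for jj in (j, j + 1):       # current element gets -x or +x
                    if jj > k:
                        continue
                    val = v + (x if jj > j else -x)
                    for tt in range(t, 3):  # the flag only moves forward
                        cand = dp[j][t] + (val if tt == 1 else 0)
                        if cand > ndp[jj][tt]:
                            ndp[jj][tt] = cand
        dp = ndp
    # the segment may still be "open" at the end (t = 1) or closed (t = 2);
    # the empty subarray is covered by jumping from t = 0 straight to t = 2
    return max(dp[k][1], dp[k][2])
```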
|
[
"data structures",
"dp",
"greedy",
"two pointers"
] | 2,000
|
#include <bits/stdc++.h>
using namespace std;
using li = long long;
const int N = 222222;
const int K = 22;
const li INF = 1e18;
int n, k, x;
int a[N];
li dp[N][K][3];
int main() {
ios::sync_with_stdio(false); cin.tie(0);
int tc;
cin >> tc;
while (tc--) {
cin >> n >> k >> x;
for (int i = 0; i < n; ++i) cin >> a[i];
for (int i = 0; i <= n; ++i) {
for (int j = 0; j <= k; ++j) {
for (int t = 0; t < 3; ++t) {
dp[i][j][t] = -INF;
}
}
}
dp[0][0][0] = 0;
for (int i = 0; i < n; ++i) {
for (int j = 0; j <= k; ++j) {
for (int t = 0; t < 3; ++t) {
if (dp[i][j][t] == -INF) continue;
for (int jj = j; jj <= min(k, j + 1); ++jj) {
li add = a[i] + (j == jj ? -x : x);
for (int tt = t; tt < 3; ++tt) {
dp[i + 1][jj][tt] = max(dp[i + 1][jj][tt], dp[i][j][t] + (tt == 1 ? add : 0));
}
}
}
}
}
cout << max(dp[n][k][1], dp[n][k][2]) << '\n';
}
}
|
1796
|
E
|
Colored Subgraphs
|
Monocarp has a tree, consisting of $n$ vertices.
He is going to select some vertex $r$ and perform the following operations on each vertex $v$ from $1$ to $n$:
- set $d_v$ equal to the distance from $v$ to $r$ (the number of edges on the shortest path);
- color $v$ some color.
A nice coloring satisfies two conditions:
- for each pair of vertices of the same color $(v, u)$, there exists a path from $v$ to $u$ that only visits vertices of the same color;
- for each pair of vertices of the same color $(v, u)$, $d_v \neq d_u$.
Note that Monocarp can choose any amount of different colors he wants to use.
For each used color, he then counts the number of vertices of this color. The cost of the tree is the minimum of these numbers.
What can be the maximum cost of the tree?
|
Let's start by choosing a vertex $r$ naively: iterate over all vertices and try each of them. Root the tree at $r$ and observe what the conditions become. $d_v$ for each $v$ is just the depth of the vertex. Well, then the only way a connected subgraph of vertices of the same color can have all values of $d$ distinct is if they form a vertical path in the tree. So the problem becomes the following: split the tree into vertical paths in such a way that the shortest path is as long as possible. Let's try greedy, I guess. Start the paths from the leaves and propagate them up. Consider some vertex $v$ with at least two children. All children have some paths leading up to them. We'd love to continue them all with $v$, but we can't do that. We can only continue one path and cut the rest. It's pretty easy to see that the path to continue is the shortest path available: it's at least as optimal as any other choice. Do that from the lowest vertices up, and you've got yourself a working greedy. Also don't forget to stop all paths at the root, since you can't continue any of them further up. Let's make this greedy more formal. Every time we update the answer, it is with a path that is: the shortest in every vertex lower than the current one; not the shortest in the current one. So we want to propagate the shortest child path up and update the answer with the remaining children. Updating the answer means just taking the minimum of values. Thus, we can actually ignore all children except the second shortest in each vertex. Just don't forget to treat the root properly. Now we can actually solve the problem in $O(n)$ for a fixed $r$: you can just find two minimums in each vertex. Well, now that we can solve the problem for a single root, let's try rerooting to solve for all of them. There are solutions in $O(n)$, but I found the $O(n \log n)$ one the neatest. The constraints are low enough to allow it.
For each vertex, maintain a multiset of lengths of vertical paths from its children. I chose to store nothing in the leaves - that only makes the implementation cleaner. In order to update the vertex from its child, you can take the minimum element in the child's set and add $1$ to it. If it's empty (the child is a leaf), return $1$. Additionally, store a multiset of the second minimums of all vertices that have at least two children. In order to update the answer with the current root, find the minimum of that multiset and the shortest path from the root. To achieve $O(n)$, you will probably have to either store prefix and suffix second minimums over children of each vertex or store three shortest paths in it. It is kind of messy but it should still perform better. Overall complexity: $O(n \log n)$ or $O(n)$ per testcase.
|
[
"dfs and similar",
"dp",
"games",
"greedy",
"trees"
] | 2,500
|
#include <bits/stdc++.h>
using namespace std;
#define forn(i, n) for(int i = 0; i < int(n); i++)
vector<vector<int>> g;
vector<multiset<int>> len;
multiset<int> all;
int getlen(int v){
return len[v].empty() ? 1 : *len[v].begin() + 1;
}
void init(int v, int p = -1){
for (int u : g[v]) if (u != p){
init(u, v);
len[v].insert(getlen(u));
}
if (int(len[v].size()) > 1){
all.insert(*(++len[v].begin()));
}
}
int ans;
void dfs(int v, int p = -1){
ans = max(ans, min(*len[v].begin() + 1, *all.begin()));
for (int u : g[v]) if (u != p){
if (int(len[v].size()) > 1) all.erase(all.find(*(++len[v].begin())));
len[v].erase(len[v].find(getlen(u)));
if (int(len[v].size()) > 1) all.insert(*(++len[v].begin()));
if (int(len[u].size()) > 1) all.erase(all.find(*(++len[u].begin())));
len[u].insert(getlen(v));
if (int(len[u].size()) > 1) all.insert(*(++len[u].begin()));
dfs(u, v);
if (int(len[u].size()) > 1) all.erase(all.find(*(++len[u].begin())));
len[u].erase(len[u].find(getlen(v)));
if (int(len[u].size()) > 1) all.insert(*(++len[u].begin()));
if (int(len[v].size()) > 1) all.erase(all.find(*(++len[v].begin())));
len[v].insert(getlen(u));
if (int(len[v].size()) > 1) all.insert(*(++len[v].begin()));
}
}
int main(){
int t;
scanf("%d", &t);
while (t--){
int n;
scanf("%d", &n);
g.assign(n, {});
forn(i, n - 1){
int v, u;
scanf("%d%d", &v, &u);
--v, --u;
g[v].push_back(u);
g[u].push_back(v);
}
len.assign(n, {});
all.clear();
all.insert(n);
init(0);
ans = 0;
dfs(0);
printf("%d\n", ans);
}
}
|
1796
|
F
|
Strange Triples
|
Let's call a triple of positive integers $(a, b, n)$ strange if the equality $\frac{an}{nb} = \frac{a}{b}$ holds, where $an$ is the concatenation of $a$ and $n$ and $nb$ is the concatenation of $n$ and $b$. For the purpose of concatenation, the integers are considered without leading zeroes.
For example, if $a = 1$, $b = 5$ and $n = 9$, then the triple is strange, because $\frac{19}{95} = \frac{1}{5}$. But $a = 7$, $b = 3$ and $n = 11$ is not strange, because $\frac{711}{113} \ne \frac{7}{3}$.
You are given three integers $A$, $B$ and $N$. Calculate the number of strange triples $(a, b, n)$, such that $1 \le a < A$, $1 \le b < B$ and $1 \le n < N$.
|
Let $|n|$ be the length of the number $n$, and let $g = \gcd(a, b)$, $a = g a'$, $b = g b'$ (so $\gcd(a', b') = 1$). Then $\frac{an}{nb} = \frac{a}{b} \Leftrightarrow (a \cdot 10^{|n|} + n) b = a (n \cdot 10^{|b|} + b) \Leftrightarrow g^2 a' b' \cdot 10^{|n|} + g n b' = g a' n \cdot 10^{|b|} + g^2 a' b' \Leftrightarrow$ $\Leftrightarrow g a' b' \cdot 10^{|n|} + n b' = a' n \cdot 10^{|b|} + g a' b'$. Note that the right part is divisible by $a'$, so the left part should be as well. Then we can see that $n$ should be divisible by $a'$, since $\gcd(b', a') = 1$. Let's say $n = n' a'$ and divide both sides by $a'$. We get $g b' \cdot 10^{|n|} + n' b' = a' n' \cdot 10^{|b|} + g b'$. Let's rearrange it and get the following: $n' b' = a' n' \cdot 10^{|b|} - b (10^{|n|} - 1)$, so $b' = a' \cdot 10^{|b|} - \frac{b (10^{|n|} - 1)}{n'}$. Since $b'$ is an integer, $\frac{b (10^{|n|} - 1)}{n'}$ should be an integer. In other words, we can define $n' = k_1 k_2$ such that $b$ is divisible by $k_1$, $(10^{|n|} - 1)$ is divisible by $k_2$, and $k_1$ is minimum possible. Since $(10^{|n|} - 1)$ has a very special structure, let's iterate over all lengths $|n|$ ($1 \le |n| \le 9$) and all divisors $k_2$ of $(10^{|n|} - 1)$ for a fixed $|n|$. Let's say $d = \frac{b}{k_1}$ and $r = \frac{10^{|n|} - 1}{k_2}$. Then $b' = a' \cdot 10^{|b|} - d \cdot r$. For a fixed $|n|$ and $k_2$ we know $r$, but don't know $d$. So let's just iterate over all possible $d$ ($1 \le d \le 10^5$), and let's also iterate over all $|b|$, since $|b|$ is small ($|d| \le |b| \le 5$, since $d$ is a divisor of $b$).
The next step is the following: let's look at the previous equation modulo $10^{|b|}$: $b' \equiv - d \cdot r \pmod {10^{|b|}}$. Since $b' \le b < 10^{|b|}$, there is a unique solution to this modular equation: $b' = 10^{|b|} - (d \cdot r) \bmod {10^{|b|}}$. Now we know the exact values of $b'$ and $|b|$, so it's time to guess $g$ (recall that $b = b' g$). Since we fixed $|b|$, we have $10^{|b| - 1} \le b' g < 10^{|b|}$, or $lb = \left\lceil \frac{10^{|b| - 1}}{b'} \right\rceil \le g \le \left\lfloor \frac{10^{|b|} - 1}{b'} \right\rfloor = rb$. But we can make the constraints even tighter: note that $b = b' g$, but earlier we said that $d = \frac{b}{k_1}$, or $b = d k_1$. So $b' g = d k_1$, and $g$ must be divisible by $dd = \frac{d}{\gcd(d, b')}$. In total, we can iterate $g$ in the range $[\left\lceil \frac{lb}{dd} \right\rceil \cdot dd, \dots, \left\lfloor \frac{rb}{dd} \right\rfloor \cdot dd]$ with step $dd$, since we are interested only in $g$ divisible by $dd$. Now we have enough variables to construct a triple: we know $b'$ and $g$, so $b = b' g$. If $b$ is already too big ($b \ge B$), we can skip that candidate. Also, we can calculate $k_1 = \frac{b}{d}$ and check that the pair $(k_1, k_2)$ is valid, i. e. $k_1$ is really minimum possible. We can verify it by checking that $\gcd(k_1, r) = 1$ (otherwise, we could reduce $k_1$ by $\gcd(k_1, r)$). The value $a'$ can be calculated from one of the formulas above as $a' = \frac{d \cdot r + b'}{10^{|b|}}$. After that, we calculate $a = a' g$ and check that $a$ is not too big. The value $n$ can be calculated as $n = k_1 k_2 a' = \frac{b}{d} k_2 a'$. At last, we should check that the resulting triple satisfies all remaining assumptions we made: $n$ is not too big, $\gcd(a', b')$ is really $1$, and the length of the calculated $n$ is exactly the $|n|$ we fixed. If all checks pass, we have found a correct triple. It looks like, thanks to all the previous checks, the triple we found is unique, but, just for safety, let's push them all into one set to get rid of duplicates.
Calculating complexity is not trivial, but let's note something: the total number of divisors $k_2$ of $10^{|n|} - 1$ for all $|n|$ is around $180$. For a fixed pair $(|n|, k_2)$, we iterate over all $d$ from $1$ to $B$ and for each $d$ we iterate $|b|$ from $|d|$ to $5$, but it's easy to prove that the total number of pairs $(d, |b|)$ is at most $1.3 \cdot B$. Now the last cycle: iteration of $g$ with step $dd$ where $dd = \frac{d}{\gcd(d, b')}$. If we assume that $\gcd(d, b')$ is quite small then $dd$ is proportional to $d$, and pairs $(d, g)$ are something like harmonic series with $O(B \log B)$ complexity. In total, the complexity is around $O(200 B \log{B})$.
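As a cross-check of the definition (far too slow for the real limits), a brute-force counter can enumerate all triples directly; the helper names below are illustrative, not part of the editorial's solution:

```python
def is_strange(a, b, n):
    # "an" and "nb" denote decimal concatenation, so an/nb == a/b is the
    # cross-multiplied condition an * b == a * nb (avoids fractions entirely)
    an = int(str(a) + str(n))
    nb = int(str(n) + str(b))
    return an * b == a * nb

def count_strange(A, B, N):
    # O(A * B * N) reference counter, usable only for tiny bounds
    return sum(is_strange(a, b, n)
               for a in range(1, A)
               for b in range(1, B)
               for n in range(1, N))
```

For $a < 2$, $b < 6$, $n < 10$ this finds the classic "anomalous cancellations" $\frac{11}{11}$, $\frac{16}{64}$ and $\frac{19}{95}$.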
|
[
"brute force",
"math",
"number theory"
] | 2,900
|
#include <bits/stdc++.h>
using namespace std;
#define sz(a) int((a).size())
using li = long long;
const int MAXLEN = 5;
vector<int> divs(int x) {
vector<int> res;
for (int i = 1; i * i <= x; ++i) {
if (x % i == 0) {
res.push_back(i);
if (i * i != x)
res.push_back(x / i);
}
}
return res;
}
int main() {
int A, B, N;
cin >> A >> B >> N;
vector<int> pw(10);
pw[0] = 1;
for (int i = 1; i < 10; ++i) pw[i] = pw[i - 1] * 10;
int PW = pw[MAXLEN];
set<array<int, 3>> used;
vector<int> len(PW);
for (int i = 0; i < PW; ++i)
len[i] = sz(to_string(i));
for (int lenn = 1; lenn <= 9; ++lenn) {
int x = pw[lenn] - 1;
for (int k2 : divs(x)) {
int r = x / k2;
for (int d = 1; d < PW; ++d) {
for (int lenb = len[d]; lenb <= MAXLEN; ++lenb) {
int bg = pw[lenb] - d * li(r) % pw[lenb];
int dd = d / __gcd(d, bg);
int lb = (pw[lenb - 1] + bg - 1) / bg;
int rb = (pw[lenb] - 1) / bg;
for (int g = (lb + dd - 1) / dd * dd; g <= rb; g += dd) {
int b = bg * g;
assert(b % d == 0);
if (b >= B || __gcd(b / d, r) != 1) continue;
int ag = (d * li(r) + bg) / pw[lenb];
li n = b / d * li(k2) * ag;
if (n < N && ag * g < A && __gcd(ag, bg) == 1 && sz(to_string(n)) == lenn)
used.insert({ag * g, b, n});
}
}
}
}
}
int res = 0;
for (auto it : used) {
li a = it[0], b = it[1], n = it[2];
int lenn = sz(to_string(n));
int lenb = sz(to_string(b));
res += a * b * pw[lenn] + n * b == a * n * pw[lenb] + a * b;
}
cout << res << endl;
}
|
1797
|
A
|
Li Hua and Maze
|
There is a rectangular maze of size $n\times m$. Denote $(r,c)$ as the cell on the $r$-th row from the top and the $c$-th column from the left. Two cells are adjacent if they share an edge. A path is a sequence of adjacent empty cells.
Each cell is initially empty. Li Hua can choose some cells (except $(x_1, y_1)$ and $(x_2, y_2)$) and place an obstacle in each of them. He wants to know the minimum number of obstacles needed to be placed so that there isn't a path from $(x_1, y_1)$ to $(x_2, y_2)$.
Suppose you were Li Hua, please solve this problem.
|
We can put obstacles around $(x_1,y_1)$ or $(x_2,y_2)$ and the better one is the answer. More formally, let's define a function $f$: $f(x,y)= \begin{cases} 2,&(x,y)\textrm{ is on the corner}\\ 3,&(x,y)\textrm{ is on the border}\\ 4,&(x,y)\textrm{ is in the middle}\\ \end{cases}$ Then the answer is $\min\{f(x_1,y_1),f(x_2,y_2)\}$. Without loss of generality, assume that $f(x_1,y_1)\le f(x_2,y_2)$. As the method is already given, the answer is at most $f(x_1,y_1)$. Let's prove that the answer is at least $f(x_1,y_1)$. If $(x_1,y_1)$ is on the corner, we can always find two paths from $(x_1,y_1)$ to $(x_2,y_2)$ that share no cell (except $(x_1,y_1)$ and $(x_2,y_2)$ themselves). Similarly, we can always find three or four such paths respectively if $(x_1,y_1)$ is on the border or in the middle. As the paths have no common cell, we need to put an obstacle on each path, so the answer is at least $f(x_1,y_1)$. In conclusion, the answer is exactly $f(x_1,y_1)$. As we assumed that $f(x_1,y_1)\le f(x_2,y_2)$, the answer to the original problem is $\min\{f(x_1,y_1),f(x_2,y_2)\}$. Time complexity: $O(1)$.
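A direct Python transcription of $f$ (mirroring the C++ solution, and assuming $n, m \ge 2$):

```python
def f(x, y, n, m):
    # number of neighbours of (x, y) on an n x m board,
    # i.e. obstacles needed to seal the cell off
    on_row_edge = x == 1 or x == n
    on_col_edge = y == 1 or y == m
    if on_row_edge and on_col_edge:
        return 2   # corner
    if on_row_edge or on_col_edge:
        return 3   # border
    return 4       # middle

def min_obstacles(n, m, x1, y1, x2, y2):
    # seal off the cheaper of the two endpoints
    return min(f(x1, y1, n, m), f(x2, y2, n, m))
```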
|
[
"constructive algorithms",
"flows",
"graphs",
"greedy",
"implementation"
] | 800
|
//By: OIer rui_er
#include <bits/stdc++.h>
#define rep(x,y,z) for(int x=(y);x<=(z);x++)
#define per(x,y,z) for(int x=(y);x>=(z);x--)
#define debug(format...) fprintf(stderr, format)
#define fileIO(s) do{freopen(s".in","r",stdin);freopen(s".out","w",stdout);}while(false)
using namespace std;
typedef long long ll;
#define y1 y114514
int T, n, m, x1, y1, x2, y2;
template<typename T> void chkmin(T& x, T y) {if(x > y) x = y;}
template<typename T> void chkmax(T& x, T y) {if(x < y) x = y;}
int f(int x, int y) {
if((x == 1 || x == n) && (y == 1 || y == m)) return 2;
if(x == 1 || x == n || y == 1 || y == m) return 3;
return 4;
}
int main() {
for(scanf("%d", &T);T;T--) {
scanf("%d%d%d%d%d%d", &n, &m, &x1, &y1, &x2, &y2);
printf("%d\n", min(f(x1, y1), f(x2, y2)));
}
return 0;
}
|
1797
|
B
|
Li Hua and Pattern
|
Li Hua has a pattern of size $n\times n$, each cell is either blue or red. He can perform \textbf{exactly $k$} operations. In each operation, he chooses a cell and changes its color from red to blue or from blue to red. Each cell can be chosen as many times as he wants. Is it possible to make the pattern, that matches its rotation by $180^{\circ}$?
Suppose you were Li Hua, please solve this problem.
|
We can calculate the minimum needed operations $k_{\min}$ easily by enumerating through the cells and performing an operation if the color of the cell is different from the targeted cell. Obviously, if $k < k_{\min}$, the problem has no solution. Otherwise, there are two cases: If $2\mid n$, the solution exists if and only if $2\mid(k-k_{\min})$, as we must perform two operations each time to meet the requirement. If $2\nmid n$, the solution always exists, as we can perform the remaining operations at the center of the pattern. Time complexity: $O(n^2)$.
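A short sketch of this check (illustrative; the pattern is taken as an $n \times n$ 0/1 matrix):

```python
def can_match(n, k, a):
    # count cells differing from their 180-degree image; each unordered
    # pair is seen twice during the scan, hence the division by 2
    diff = sum(a[i][j] != a[n - 1 - i][n - 1 - j]
               for i in range(n) for j in range(n)) // 2
    if k < diff:
        return False          # not enough operations
    # odd n: leftover operations can all go to the centre cell;
    # even n: leftovers must come in pairs to preserve the symmetry
    return n % 2 == 1 or (k - diff) % 2 == 0
```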
|
[
"constructive algorithms",
"greedy"
] | 1,100
|
//By: OIer rui_er
#include <bits/stdc++.h>
#define rep(x,y,z) for(int x=(y);x<=(z);x++)
#define per(x,y,z) for(int x=(y);x>=(z);x--)
#define debug(format...) fprintf(stderr, format)
#define fileIO(s) do{freopen(s".in","r",stdin);freopen(s".out","w",stdout);}while(false)
using namespace std;
typedef long long ll;
const int N = 1e3+5;
int T, n, k, a[N][N];
template<typename T> void chkmin(T& x, T y) {if(x > y) x = y;}
template<typename T> void chkmax(T& x, T y) {if(x < y) x = y;}
int main() {
for(scanf("%d", &T);T;T--) {
scanf("%d%d", &n, &k);
rep(i, 1, n) rep(j, 1, n) scanf("%d", &a[i][j]);
int diff = 0;
rep(i, 1, n) rep(j, 1, n) if(a[i][j] != a[n+1-i][n+1-j]) ++diff;
diff /= 2;
if(diff > k) puts("NO");
else {
k -= diff;
if(n & 1) puts("YES");
else if(k & 1) puts("NO");
else puts("YES");
}
}
return 0;
}
|
1797
|
C
|
Li Hua and Chess
|
\textbf{This is an interactive problem.}
Li Ming and Li Hua are playing a game. Li Hua has a chessboard of size $n\times m$. Denote $(r, c)$ ($1\le r\le n, 1\le c\le m$) as the cell on the $r$-th row from the top and on the $c$-th column from the left. Li Ming put a king on the chessboard and Li Hua needs to guess its position.
Li Hua can ask Li Ming \textbf{no more than $3$} questions. In each question, he can choose a cell and ask the minimum steps needed to move the king to the chosen cell. Each question is independent, which means the king doesn't actually move.
A king can move from $(x,y)$ to $(x',y')$ if and only if $\max\{|x-x'|,|y-y'|\}=1$ (shown in the following picture).
\begin{center}
\begin{tabular}{c}
\
\end{tabular}
\end{center}
The position of the king is chosen \textbf{before} the interaction.
Suppose you were Li Hua, please solve this problem.
|
We can first ask $(1,1)$ and get the result $k$. Obviously, the king must be on one of the following two segments: from $(1,k+1)$ to $(k+1,k+1)$, or from $(k+1,1)$ to $(k+1,k+1)$. (If $k+1$ exceeds $n$ or $m$, the corresponding segment lies outside the board and only the other one remains, so a single extra question suffices.) Then, we can ask $(1,k+1)$ and $(k+1,1)$ and get the results $p,q$. There are three cases: If $p=q=k$, the king is at $(k+1,k+1)$. If $p < k$, the king is at $(p+1,k+1)$. If $q < k$, the king is at $(k+1,q+1)$.
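The strategy can be checked offline by simulating the judge; `ask` below answers with the true king-move distance, and the first two branches handle the case where one of the segments falls off the board (a sketch, with illustrative names):

```python
def find_king(n, m, kr, kc):
    # (kr, kc) is the hidden king; `ask` plays the role of the judge
    queries = []
    def ask(r, c):
        assert 1 <= r <= n and 1 <= c <= m   # queries stay on the board
        queries.append((r, c))
        return max(abs(kr - r), abs(kc - c))  # king-move distance

    k = ask(1, 1)
    if k + 1 > n:            # horizontal segment is off the board
        p = ask(1, k + 1)
        pos = (p + 1, k + 1)
    elif k + 1 > m:          # vertical segment is off the board
        q = ask(k + 1, 1)
        pos = (k + 1, q + 1)
    else:
        q = ask(k + 1, 1)
        p = ask(1, k + 1)
        if p == k and q == k:
            pos = (k + 1, k + 1)
        elif p < k:
            pos = (p + 1, k + 1)
        else:
            pos = (k + 1, q + 1)
    assert len(queries) <= 3  # never more than 3 questions
    return pos
```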
|
[
"constructive algorithms",
"greedy",
"interactive"
] | 1,600
|
#include <bits/stdc++.h>
using namespace std;
int T, n, m;
int ask(int x, int y) {
printf("? %d %d\n", x, y);
fflush(stdout);
scanf("%d", &x);
return x;
}
void give(int x, int y) {
printf("! %d %d\n", x, y);
fflush(stdout);
}
int main() {
for(scanf("%d", &T);T;T--) {
scanf("%d%d", &n, &m);
int T1 = ask(1, 1);
if(T1 >= n) {
int T2 = ask(1, T1+1);
give(T2+1, T1+1);
}
else if(T1 >= m) {
int T2 = ask(T1+1, 1);
give(T1+1, T2+1);
}
else {
int T2 = ask(T1+1, 1);
int T3 = ask(1, T1+1);
if(T2 == T1 && T3 == T1) give(T1+1, T1+1);
else if(T3 == T1) give(T1+1, T2+1);
else give(T3+1, T1+1);
}
}
return 0;
}
|
1797
|
D
|
Li Hua and Tree
|
Li Hua has a tree of $n$ vertices and $n-1$ edges. The root of the tree is vertex $1$. Each vertex $i$ has importance $a_i$. Denote the size of a subtree as the number of vertices in it, and the importance as the sum of the importance of vertices in it. Denote the heavy son of a non-leaf vertex as the son with the \textbf{largest} subtree size. If multiple of them exist, the heavy son is the one with the \textbf{minimum} index.
Li Hua wants to perform $m$ operations:
- "1 $x$" ($1\leq x \leq n$) — calculate the importance of the subtree whose root is $x$.
- "2 $x$" ($2\leq x \leq n$) — rotate the heavy son of $x$ up. Formally, denote $son_x$ as the heavy son of $x$, $fa_x$ as the father of $x$. He wants to remove the edge between $x$ and $fa_x$ and connect an edge between $son_x$ and $fa_x$. It is guaranteed that $x$ is not root, but \textbf{not} guaranteed that $x$ is not a leaf. If $x$ is a leaf, please ignore the operation.
Suppose you were Li Hua, please solve this problem.
|
Denote $T_x$ as the subtree of $x$. The "rotate" operation doesn't change the tree much. More specifically, only the importance of $T_{fa_x},T_x,T_{son_x}$ changes. We can use the brute force method to maintain useful information about each vertex when the operations are performed. What we need to do next is to find the heavy son of a vertex in a reasonable time. We can use a set to maintain the size and index of all the sons of each vertex. Time complexity: $O((n+m)\log n)$.
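The bookkeeping for one rotation can be modelled compactly; the sketch below uses plain Python sets of $(-size, index)$ pairs as a stand-in for `std::set` (names are illustrative), so the heavy son is simply the minimum element:

```python
def rotate(x, fa, sz, summ, sons):
    # lift the heavy son of x: edge (fa[x], x) becomes (fa[x], son)
    if not sons[x]:
        return                      # x is a leaf: ignore the operation
    _, v = min(sons[x])             # heavy son: largest subtree, smallest index
    p = fa[x]
    # x loses the subtree of v
    sz[x] -= sz[v]; summ[x] -= summ[v]
    sons[x].discard((-sz[v], v))
    # v gains x as a child and takes x's old place under p
    fa[x] = v
    sz[v] += sz[x]; summ[v] += summ[x]
    sons[v].add((-sz[x], x))
    fa[v] = p
    # under p, the old entry for x (its size equals the new sz[v]) becomes v
    sons[p].discard((-sz[v], x))
    sons[p].add((-sz[v], v))
```

Only the three vertices $fa_x$, $x$, $son_x$ are touched, which is exactly why brute-force maintenance is fast enough.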
|
[
"brute force",
"data structures",
"dfs and similar",
"dp",
"implementation",
"trees"
] | 1,900
|
//By: Luogu@rui_er(122461)
#include <bits/stdc++.h>
#define rep(x,y,z) for(ll x=y;x<=z;x++)
#define per(x,y,z) for(ll x=y;x>=z;x--)
#define debug printf("Running %s on line %d...\n",__FUNCTION__,__LINE__)
#define fileIO(s) do{freopen(s".in","r",stdin);freopen(s".out","w",stdout);}while(false)
using namespace std;
typedef long long ll;
const ll N = 1e5+5;
ll n, m, a[N], sz[N], son[N], fa[N], sum[N];
vector<ll> e[N];
set<tuple<ll, ll> > sons[N];
template<typename T> void chkmin(T& x, T y) {if(x > y) x = y;}
template<typename T> void chkmax(T& x, T y) {if(x < y) x = y;}
void dfs(ll u, ll f) {
sz[u] = 1;
sum[u] = a[u];
fa[u] = f;
for(auto v : e[u]) {
if(v == f) continue;
dfs(v, u);
sz[u] += sz[v];
sum[u] += sum[v];
sons[u].insert(make_tuple(-sz[v], v));
if(sz[v] > sz[son[u]] || sz[v] == sz[son[u]] && v < son[u]) son[u] = v;
}
}
int main() {
scanf("%lld%lld", &n, &m);
rep(i, 1, n) scanf("%lld", &a[i]);
rep(i, 1, n-1) {
ll u, v;
scanf("%lld%lld", &u, &v);
e[u].push_back(v);
e[v].push_back(u);
}
dfs(1, 0);
while(m --> 0) {
ll op, u;
scanf("%lld%lld", &op, &u);
if(op == 1) printf("%lld\n", sum[u]);
else {
ll v = son[u];
if(!v) continue;
ll p = fa[u];
sz[u] -= sz[v];
sum[u] -= sum[v];
sons[u].erase(make_tuple(-sz[v], v));
son[u] = sons[u].empty() ? 0 : get<1>(*sons[u].begin());
fa[u] = v;
sz[v] += sz[u];
sum[v] += sum[u];
sons[v].insert(make_tuple(-sz[u], u));
son[v] = get<1>(*sons[v].begin());
fa[v] = p;
sons[p].erase(make_tuple(-sz[v], u));
sons[p].insert(make_tuple(-sz[v], v));
son[p] = get<1>(*sons[p].begin());
}
}
return 0;
}
|
1797
|
E
|
Li Hua and Array
|
Li Hua wants to solve a problem about $\varphi$ — Euler's totient function. Please recall that $\varphi(x)=\sum\limits_{i=1}^x[\gcd(i,x)=1]$.$^{\dagger,\ddagger}$
He has a sequence $a_1,a_2,\cdots,a_n$ and he wants to perform $m$ operations:
- "1 $l$ $r$" ($1\le l\le r\le n$) — for \textbf{each} $x\in[l,r]$, change $a_x$ into $\varphi(a_x)$.
- "2 $l$ $r$" ($1\le l\le r\le n$) — find out the minimum changes needed to make sure $a_l=a_{l+1}=\cdots=a_r$. In each change, he chooses \textbf{one} $x\in[l,r]$, change $a_x$ into $\varphi(a_x)$. Each operation of this type is independent, which means the array doesn't actually change.
Suppose you were Li Hua, please solve this problem.
$^\dagger$ $\gcd(x,y)$ denotes the greatest common divisor (GCD) of integers $x$ and $y$.
$^\ddagger$ The notation $[\textrm{cond}]$ equals $1$ if the condition $\textrm{cond}$ is true, and $0$ otherwise.
|
Denote $w=\max\limits_{i=1}^n\{a_i\}$. Also denote $\varphi^k(x)=\begin{cases}x,&k=0\\\varphi(\varphi^{k-1}(x)),&k\in\mathbb{N}^*\end{cases}$. It can be proven that after $O(\log w)$ operations, any $a_i$ will become $1$ and more operations are useless. In other words, $\varphi^{\log_2 w+1}(a_i)=1$. Let's construct a tree of size $w$, where $1$ is the root and the father of $k$ is $\varphi(k)$. The height of the tree is $O(\log w)$. After some precalculating, we can find the LCA of two vertices within $O(\log\log w)$. We can use a dsu to maintain the next not-$1$ element of each $a_i$ and use a segment tree to maintain the LCA, minimal depth, and answer in the range. We can brute force the changes using the dsu and meanwhile do point updates on the segment tree. The queries can be solved using a range query on the segment tree. With the potential method, we denote $\Phi(a_i)$ as the minimum integer $k$ which satisfies $\varphi^k(a_i)=1$. Since each successful operation on $a_i$ will decrease $\Phi(a_i)$ by $1$, the maximum number of successful operations we can perform on $a_i$ is $\Phi(a_i)$. Therefore, the maximum number of successful operations is $\sum\limits_{i=1}^n\Phi(a_i)=O(n\log w)$. For each successful operation, we visit $O(\log n)$ nodes on the segment tree and merge the information of two subtrees $O(\log n)$ times. Because of the time complexity of calculating the LCA, we need $O(\log\log w)$ time to merge the information. So all the operations will take up $O(n\log n\log w\log\log w)$ time. We need to initialize $\varphi$ within $O(w)$ time and precompute the binary-lifting ancestors on the tree within $O(w\log\log w)$ time. We also need $O(\log n\log\log w)$ for each query. In conclusion, the time complexity is $O(w\log\log w+n\log n\log w\log\log w+m\log n\log\log w)$. The above algorithm is enough to pass this problem. However, it performs a massive number of information-merging operations, so it runs quite slowly.
We use the segment tree not only to maintain the LCA, minimal depth, and answer of the ranges, but also whether $\Phi(l_u;r_u)=\sum\limits_{i\in[l_u,r_u]}\Phi(a_i)=0$. If we enter a node whose $\Phi(l_u;r_u)=0$, we can just ignore it. Otherwise, we recurse on the segment tree down to the leaves and brute-force update their information. The time complexity is the same, but it's much more efficient in practice. Bonus: Can you solve this problem within $O(m\log n)$?
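The claim that iterating $\varphi$ reaches $1$ after $O(\log w)$ steps is easy to check empirically with a small sketch (trial-division totient, illustrative only; the solution uses a linear sieve instead):

```python
def phi(x):
    # Euler's totient via trial-division factorisation
    res = x
    p = 2
    while p * p <= x:
        if x % p == 0:
            res -= res // p
            while x % p == 0:
                x //= p
        p += 1
    if x > 1:
        res -= res // x
    return res

def depth(x):
    # number of phi-applications needed to reach 1, i.e. Phi(x) above
    k = 0
    while x != 1:
        x = phi(x)
        k += 1
    return k
```

The bound follows because $\varphi(x)$ is even for $x > 2$ and $\varphi(x) \le x/2$ for even $x$, so the value at least halves every two steps.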
|
[
"brute force",
"data structures",
"dsu",
"math",
"number theory",
"two pointers"
] | 2,300
|
//By: Luogu@rui_er(122461)
#include <bits/stdc++.h>
#define rep(x,y,z) for(int x=y;x<=z;x++)
#define per(x,y,z) for(int x=y;x>=z;x--)
#define debug printf("Running %s on line %d...\n",__FUNCTION__,__LINE__)
#define fileIO(s) do{freopen(s".in","r",stdin);freopen(s".out","w",stdout);}while(false)
using namespace std;
typedef long long ll;
const int N = 1e5 + 5, P = 4e5 + 5, K = 5e6 + 5, W = 5e6;
int n, m, a[N], tab[K], phi[K], p[P], pcnt, dis[K], fa[K][6];
template<typename T> void chkmin(T& x, T y) {if(x > y) x = y;}
template<typename T> void chkmax(T& x, T y) {if(x < y) x = y;}
void sieve(int lim) {
phi[1] = 1;
rep(i, 2, lim) {
if(!tab[i]) {
p[++pcnt] = i;
phi[i] = i - 1;
}
for(int j=1;j<=pcnt&&1LL*i*p[j]<=lim;j++) {
tab[i*p[j]] = 1;
if(i % p[j]) phi[i*p[j]] = phi[i] * phi[p[j]];
else {
phi[i*p[j]] = phi[i] * p[j];
break;
}
}
}
}
void initTree(int lim) {
dis[1] = 1;
rep(j, 0, 5) fa[1][j] = 1;
rep(i, 2, lim) {
dis[i] = dis[phi[i]] + 1;
fa[i][0] = phi[i];
rep(j, 1, 5) fa[i][j] = fa[fa[i][j-1]][j-1];
}
}
int LCA(int u, int v) {
if(dis[u] < dis[v]) swap(u, v);
per(i, 5, 0) if(dis[fa[u][i]] >= dis[v]) u = fa[u][i];
if(u == v) return u;
per(i, 5, 0) if(fa[u][i] != fa[v][i]) u = fa[u][i], v = fa[v][i];
return fa[u][0];
}
struct Node {
int lca, ans, mndis, allrt;
};
struct SegTree {
Node t[N<<2];
#define lc(u) (u<<1)
#define rc(u) (u<<1|1)
void pushup(int u, int l, int r) {
int mid = (l + r) >> 1;
t[u].lca = LCA(t[lc(u)].lca, t[rc(u)].lca);
t[u].ans = t[lc(u)].ans + t[rc(u)].ans
+ (mid - l + 1) * (dis[t[lc(u)].lca] - dis[t[u].lca])
+ (r - mid) * (dis[t[rc(u)].lca] - dis[t[u].lca]);
t[u].mndis = min(t[lc(u)].mndis, t[rc(u)].mndis);
t[u].allrt = t[lc(u)].allrt && t[rc(u)].allrt;
}
void build(int u, int l, int r) {
if(l == r) {
t[u].lca = a[l];
t[u].ans = 0;
t[u].mndis = dis[a[l]];
t[u].allrt = a[l] == 1;
return;
}
int mid = (l + r) >> 1;
build(lc(u), l, mid);
build(rc(u), mid+1, r);
pushup(u, l, r);
}
void modify(int u, int l, int r, int ql, int qr) {
if(t[u].allrt) return;
if(l == r) {
t[u].lca = fa[t[u].lca][0];
--t[u].mndis;
t[u].allrt = t[u].lca == 1;
return;
}
int mid = (l + r) >> 1;
if(ql <= mid) modify(lc(u), l, mid, ql, qr);
if(qr > mid) modify(rc(u), mid+1, r, ql, qr);
pushup(u, l, r);
}
int queryLCA(int u, int l, int r, int ql, int qr) {
if(ql <= l && r <= qr) return t[u].lca;
int mid = (l + r) >> 1;
if(qr <= mid) return queryLCA(lc(u), l, mid, ql, qr);
if(ql > mid) return queryLCA(rc(u), mid+1, r, ql, qr);
int ans = queryLCA(lc(u), l, mid, ql, qr);
if(ans == 1) return 1;
return LCA(ans, queryLCA(rc(u), mid+1, r, ql, qr));
}
int queryAns(int u, int l, int r, int ql, int qr, int lca) {
if(ql <= l && r <= qr) {
return t[u].ans + (r - l + 1) * (dis[t[u].lca] - dis[lca]);
}
int mid = (l + r) >> 1, ans = 0;
if(ql <= mid) ans += queryAns(lc(u), l, mid, ql, qr, lca);
if(qr > mid) ans += queryAns(rc(u), mid+1, r, ql, qr, lca);
return ans;
}
#undef lc
#undef rc
}sgt;
int main() {
sieve(W);
initTree(W);
scanf("%d%d", &n, &m);
rep(i, 1, n) scanf("%d", &a[i]);
sgt.build(1, 1, n);
while(m --> 0) {
int op, l, r;
scanf("%d%d%d", &op, &l, &r);
if(op == 1) sgt.modify(1, 1, n, l, r);
else {
int lca = sgt.queryLCA(1, 1, n, l, r);
printf("%d\n", sgt.queryAns(1, 1, n, l, r, lca));
}
}
return 0;
}
|
1797
|
F
|
Li Hua and Path
|
Li Hua has a tree of $n$ vertices and $n-1$ edges. The vertices are numbered from $1$ to $n$.
A pair of vertices $(u,v)$ ($u < v$) is considered cute if \textbf{exactly one} of the following two statements is true:
- $u$ is the vertex with the minimum index among all vertices on the path $(u,v)$.
- $v$ is the vertex with the maximum index among all vertices on the path $(u,v)$.
There will be $m$ operations. In each operation, he decides an integer $k_j$, then inserts a vertex numbered $n+j$ to the tree, connecting with the vertex numbered $k_j$.
He wants to calculate the number of cute pairs before operations and after each operation.
Suppose you were Li Hua, please solve this problem.
|
There exists an acceptable $O(n\log^2n+m)$ solution using centroid decomposition, but there is a better $O(n\log n+m)$ solution using reconstruction trees. The initial tree is shown in the following picture: Consider the following reconstruction trees. We will define two reconstruction trees called min-RT and max-RT where "RT" means reconstruction tree. For the max-RT, we enumerate vertices in increasing order. We create a new vertex $u$ in the max-RT, then for each vertex $v$ such that $v < u$ and $v$ is adjacent to $u$ in the original tree, we make $u$ the father of the root of $v$'s current component on the max-RT. We can use a dsu to build the max-RT. The max-RT is shown in the following picture: The max-RT satisfies the property that $\operatorname{LCA}(u, v)$ on the max-RT is the maximum index on the path between $u$ and $v$ in the original tree. The min-RT is similar to the max-RT. The min-RT is shown in the following picture: After designing the reconstruction trees, I found out that the reconstruction trees are related to Kruskal reconstruction trees (KRT) to a certain extent. Here is another way to understand the two reconstruction trees. For the max-RT, since we want to calculate the maximum index on the path, we define the weight of an edge $(u,v)$ as $\max\{u,v\}$. We build a (minimum spanning) KRT of the original tree and merge the vertices with the same weight into one vertex. For the min-RT, define the weight of an edge $(u,v)$ as $\min\{u,v\}$ and use the (maximum spanning) KRT. The max-RT and the (minimum spanning) KRT: (left: maximum valued graph; middle: KRT; right: max-RT) The min-RT and the (maximum spanning) KRT: (left: minimum valued graph; middle: KRT; right: min-RT) We will solve the problem using the two reconstruction trees. Let's call the two restrictions in the statement I and II.
Denote $K$ as the number of pairs satisfying exactly one of I and II (which is the answer), $A$ as the number of pairs satisfying I, $B$ as the number of pairs satisfying II and $C$ as the number of pairs satisfying both I and II. It's obvious that $K=A+B-2C$. We can easily calculate $A$ and $B$ using the depth of each vertex on two reconstruction trees. Due to the property mentioned above, $C$ is the number of pairs $(u,v)$ satisfying the condition that $u$ is an ancestor of $v$ on min-RT and $v$ is an ancestor of $u$ on max-RT, which can be solved using dfs order and Fenwick tree. Finally, we calculated the original answer. If we add a vertex with the largest index as a leaf, paths ending with this vertex will satisfy II, so we can use min-RT to calculate the paths ending with this vertex not satisfying I and update the answer. Time complexity: $O(n\log n+m)$.
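The max-RT construction with a dsu can be sketched in a few lines (function and variable names are illustrative); processing vertices in increasing order, each smaller-indexed neighbour's current component is hung under the new vertex:

```python
def build_max_rt(n, edges):
    # returns father[v] in the max-RT (father of the overall root is 0)
    parent = list(range(n + 1))          # dsu over already-built components
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    adj = [[] for _ in range(n + 1)]
    for u, v in edges:
        adj[max(u, v)].append(min(u, v))  # edge is processed at its larger end

    father = [0] * (n + 1)
    for u in range(1, n + 1):            # increasing order of indices
        for v in adj[u]:
            r = find(v)                  # root of v's current component
            father[r] = u                # hang that component under u
            parent[r] = u                # u becomes the new dsu root
    return father
```

On the path tree $1-4$, $4-2$, $2-3$, for instance, the ancestor chains of $1$ and $2$ first meet at $4$, which is exactly the maximum index on the original path $1-4-2$.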
|
[
"data structures",
"dfs and similar",
"divide and conquer",
"dsu",
"trees"
] | 3,000
|
//By: OIer rui_er
#include <bits/stdc++.h>
#define rep(x,y,z) for(ll x=(y);x<=(z);x++)
#define per(x,y,z) for(ll x=(y);x>=(z);x--)
#define debug(format...) fprintf(stderr, format)
#define fileIO(s) do{freopen(s".in","r",stdin);freopen(s".out","w",stdout);}while(false)
using namespace std;
typedef long long ll;
const ll N = 4e5+5;
ll n, m, k, faMx[N], faMn[N], disMx[N], disMn[N], dfn[N], sz[N], tms;
ll A, B, C, ans;
vector<ll> eMx[N], eMn[N], reMx[N], reMn[N];
template<typename T> void chkmin(T& x, T y) {if(x > y) x = y;}
template<typename T> void chkmax(T& x, T y) {if(x < y) x = y;}
struct Dsu {
ll fa[N];
void init(ll x) {rep(i, 1, x) fa[i] = i;}
ll find(ll x) {return x == fa[x] ? x : fa[x] = find(fa[x]);}
bool merge(ll x, ll y) {
ll u = find(x), v = find(y);
if(u == v) return 0;
fa[u] = v;
return 1;
}
}dsu;
struct BIT {
ll c[N], sz;
void init(ll x) {sz = x; rep(i, 1, x) c[i] = 0;}
ll lowbit(ll x) {return x & (-x);}
void add(ll x, ll k) {for(; x <= sz; x += lowbit(x)) c[x] += k;}
ll ask(ll x) {ll k = 0; for(; x; x -= lowbit(x)) k += c[x]; return k;}
ll Ask(ll x, ll y) {return ask(y) - ask(x - 1);}
}bit;
void dfs1(ll u) {
dfn[u] = ++tms;
sz[u] = 1;
disMx[u] = disMx[faMx[u]] + 1;
B += disMx[u] - 1;
for(ll v : reMx[u]) {
dfs1(v);
sz[u] += sz[v];
}
}
void dfs2(ll u) {
disMn[u] = disMn[faMn[u]] + 1;
A += disMn[u] - 1;
C += bit.Ask(dfn[u], dfn[u] + sz[u] - 1);
bit.add(dfn[u], 1);
for(ll v : reMn[u]) dfs2(v);
bit.add(dfn[u], -1);
}
int main() {
scanf("%lld", &n);
rep(i, 1, n-1) {
ll u, v;
scanf("%lld%lld", &u, &v);
if(u < v) swap(u, v);
eMx[u].push_back(v);
eMn[v].push_back(u);
}
dsu.init(n);
rep(i, 1, n) {
for(ll j : eMx[i]) {
j = dsu.find(j);
dsu.merge(j, i);
faMx[j] = i;
reMx[i].push_back(j);
}
}
dsu.init(n);
per(i, n, 1) {
for(ll j : eMn[i]) {
j = dsu.find(j);
dsu.merge(j, i);
faMn[j] = i;
reMn[i].push_back(j);
}
}
bit.init(n);
dfs1(n);
dfs2(1);
ans = A + B - 2 * C;
// printf("%lld + %lld - 2 * %lld = %lld\n", A, B, C, ans);
printf("%lld\n", ans);
for(scanf("%lld", &m); m; m--) {
scanf("%lld", &k);
disMn[++n] = disMn[k] + 1;
ans += n - disMn[n];
printf("%lld\n", ans);
}
return 0;
}
|
1798
|
A
|
Showstopper
|
You are given two arrays $a_1, a_2, \ldots, a_n$ and $b_1, b_2, \ldots, b_n$.
In one operation, you can choose any integer $i$ from $1$ to $n$ and swap the numbers $a_i$ and $b_i$.
Determine whether, after using any (possibly zero) number of operations, the following two conditions can be satisfied simultaneously:
- $a_n = \max(a_1, a_2, \ldots, a_n)$,
- $b_n = \max(b_1, b_2, \ldots, b_n)$.
Here $\max(c_1, c_2, \ldots, c_k)$ denotes the maximum number among $c_1, c_2, \ldots, c_k$. For example, $\max(3, 5, 4) = 5$, $\max(1, 7, 7) = 7$, $\max(6, 2) = 6$.
|
The first solution: Fix the position of the numbers $a_n, b_n$. For each other index $i$, let's check whether the conditions $a_i \leq a_n$ and $b_i \leq b_n$ are met. If not, swap $a_i$ and $b_i$ and check again. If the conditions are not met for some index in both variants, the answer is "No", otherwise "Yes". The second solution: Let $M$ be the maximum of all the numbers $a_1, a_2, \ldots, a_n, b_1, b_2, \ldots, b_n$. Then if $a_n < M$ and $b_n < M$, the answer is "No", since in the array where the number $M$ appears after the operations, the maximum will be $M$, which is greater than $a_n$ and $b_n$. Otherwise, either $a_n = M$ or $b_n = M$. If $a_n = M$, then swap $a_n$ and $b_n$; now $b_n = M$. So the condition $b_n = \max(b_1, b_2, \ldots, b_n)$ will always be met, regardless of the numbers $b_1, b_2, \ldots, b_{n-1}$. Therefore, it is best to put into $b_i$ the maximum of the numbers $a_i, b_i$ for each $i$. After that, it remains to check the condition for the array $a_1, a_2, \ldots, a_n$; if it is met, the answer is "Yes", otherwise "No".
|
[
"greedy",
"implementation",
"sortings"
] | 800
|
for _ in range(int(input())):
n = int(input())
a = list(map(int, input().split()))
b = list(map(int, input().split()))
for i in range(n):
if a[i] > b[i]:
a[i], b[i] = b[i], a[i]
if a[-1] == max(a) and b[-1] == max(b):
print("YES")
else:
print("NO")
|
1798
|
B
|
Three Sevens
|
Lottery "Three Sevens" was held for $m$ days. On day $i$, $n_i$ people with the numbers $a_{i, 1}, \ldots, a_{i, n_i}$ participated in the lottery.
It is known that in each of the $m$ days, only one winner was selected from the lottery participants. The lottery winner on day $i$ was not allowed to participate in the lottery in the days from $i+1$ to $m$.
Unfortunately, the information about the lottery winners has been lost. You need to find any possible list of lottery winners on days from $1$ to $m$ or determine that no solution exists.
|
Let's calculate the array $last$, where $last_X$ is the last day of the lottery in which the person $X$ participated. Then the only day when $X$ could be a winner is the day $last_X$. So on day $i$, only a person with $last_X = i$ can be the winner. It is also clear that if there are several such participants for day $i$, you can choose any of them as the winner, since these participants cannot be winners on any other days. In total, we need to go through all the days: if for some day there are no participants with $last$ equal to this day, then the answer is $-1$; otherwise, we choose any participant with $last_X = i$ as the winner on day $i$.
|
[
"brute force",
"data structures",
"greedy",
"implementation"
] | 1,000
|
MAX = 50000
last = [0] * (MAX + 777)
for _ in range(int(input())):
m = int(input())
a_ = []
for day in range(m):
n = int(input())
a = list(map(int, input().split()))
for x in a:
last[x] = day
a_.append(a)
ans = [-1] * m
for day in range(m):
for x in a_[day]:
if last[x] == day:
ans[day] = x
if ans[day] == -1:
print(-1)
break
else:
print(*ans)
|
1798
|
C
|
Candy Store
|
The store sells $n$ types of candies with numbers from $1$ to $n$. One candy of type $i$ costs $b_i$ coins. In total, there are $a_i$ candies of type $i$ in the store.
You need to pack all available candies in packs, each pack should contain only one type of candies. Formally, for each type of candy $i$ you need to choose the integer $d_i$, denoting the number of type $i$ candies in one pack, so that $a_i$ is divided without remainder by $d_i$.
Then the cost of one pack of candies of type $i$ will be equal to $b_i \cdot d_i$. Let's denote this cost by $c_i$, that is, $c_i = b_i \cdot d_i$.
After packaging, packs will be placed on the shelf. Consider the cost of the packs placed on the shelf, in order $c_1, c_2, \ldots, c_n$. Price tags will be used to describe costs of the packs. One price tag can describe the cost of all packs from $l$ to $r$ inclusive if $c_l = c_{l+1} = \ldots = c_r$. Each of the packs from $1$ to $n$ must be described by at least one price tag. For example, if $c_1, \ldots, c_n = [4, 4, 2, 4, 4]$, to describe all the packs, $3$ price tags will be enough: the first price tag describes the packs $1, 2$, the second: $3$, the third: $4, 5$.
You are given the integers $a_1, b_1, a_2, b_2, \ldots, a_n, b_n$. Your task is to choose integers $d_i$ so that $a_i$ is divisible by $d_i$ for all $i$, and the required number of price tags to describe the values of $c_1, c_2, \ldots, c_n$ is the minimum possible.
For a better understanding of the statement, look at the illustration of the first test case of the first test:
Let's repeat the meaning of the notation used in the problem:
$a_i$ — the number of candies of type $i$ available in the store.
$b_i$ — the cost of one candy of type $i$.
$d_i$ — the number of candies of type $i$ in one pack.
$c_i$ — the cost of one pack of candies of type $i$ is expressed by the formula $c_i = b_i \cdot d_i$.
|
To begin with, let's understand when 1 price tag will be enough. Let the total cost of all packs of candies be $cost$. Two conditions are imposed on $cost$: The first condition: $cost$ must be divisible by each of the numbers $b_i$, because $cost = b_i \cdot d_i$. This is equivalent to the fact that $cost$ is divisible by $lcm(b_1, \ldots, b_n)$, where $lcm$ denotes the least common multiple. The second condition: $a_i$ is divisible by $d_i$. We want $cost$ to somehow appear in this condition. Therefore, multiply both parts by $b_i$, and we get that $a_i \cdot b_i$ is divisible by $b_i \cdot d_i = cost$. That is, $a_i \cdot b_i$ is divisible by $cost$. This is equivalent to: $\gcd(a_1 \cdot b_1, \ldots, a_n \cdot b_n)$ is divisible by $cost$. Thus, if one price tag is enough, it is necessary that $\gcd(a_1 \cdot b_1, \ldots, a_n \cdot b_n)$ be divisible by $cost$ and $cost$ be divisible by $lcm(b_1, \ldots, b_n)$. So a necessary and sufficient condition for one price tag is "$\gcd(a_1 \cdot b_1, \ldots, a_n \cdot b_n)$ is divisible by $lcm(b_1, \ldots, b_n)$". It is not difficult to understand that if one price tag is enough for a set of candies, then after removing any type of candy from this set, one price tag will still be enough. This means that a simple greedy algorithm will work. Let's select the largest prefix of candies such that one price tag is enough for it, "paste" a price tag on this prefix, and repeat for the remaining candies until the array ends.
|
[
"greedy",
"math",
"number theory"
] | 1,700
|
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

for _ in range(int(input())):
    n = int(input())
    a = []
    b = []
    for i in range(n):
        ai, bi = map(int, input().split())
        a.append(ai)
        b.append(bi)
    # greedy: extend the current price-tag segment while
    # gcd of a_i * b_i stays divisible by lcm of b_i
    g = 0
    l = 1
    ans = 1
    for i in range(n):
        g = gcd(g, a[i] * b[i])
        l = lcm(l, b[i])
        if g % l:
            ans += 1          # start a new price tag from position i
            g = a[i] * b[i]
            l = b[i]
    print(ans)
|
1798
|
D
|
Shocking Arrangement
|
You are given an array $a_1, a_2, \ldots, a_n$ consisting of integers such that $a_1 + a_2 + \ldots + a_n = 0$.
You have to rearrange the elements of the array $a$ so that the following condition is satisfied:
$$\max\limits_{1 \le l \le r \le n} \lvert a_l + a_{l+1} + \ldots + a_r \rvert < \max(a_1, a_2, \ldots, a_n) - \min(a_1, a_2, \ldots, a_n),$$ where $|x|$ denotes the absolute value of $x$.
More formally, determine if there exists a permutation $p_1, p_2, \ldots, p_n$ that for the array $a_{p_1}, a_{p_2}, \ldots, a_{p_n}$, the condition above is satisfied, and find the corresponding array.
Recall that the array $p_1, p_2, \ldots, p_n$ is called a permutation if for each integer $x$ from $1$ to $n$ there is exactly one $i$ from $1$ to $n$ such that $p_i = x$.
|
If the array consists entirely of zeros, this is impossible, since $\max(a) - \min(a) = 0$. Otherwise, put all zeros at the beginning; the rest of the array is now zero-free. Add the remaining elements one by one. If the current sum of the added elements is $\leq 0$, take any of the remaining positive numbers as the next element; since the sum of all the numbers is $0$, a positive number is guaranteed to exist (if any numbers are left). If the current sum is $> 0$, take any of the remaining negative numbers as the next element; again, since the total sum is $0$, a negative number is guaranteed to exist. Why it works: the maximum value of the form $\lvert a_l + a_{l+1} + \ldots + a_r \rvert$ equals $maxPrefSum(a) - minPrefSum(a)$ (where the empty prefix sum $0$ is included). By the structure of the algorithm, the resulting array satisfies $maxPrefSum \leq \max(a)$ and $minPrefSum > \min(a)$. Together these imply the required condition.
|
[
"constructive algorithms",
"greedy",
"math"
] | 1,600
|
for _ in range(int(input())):
    n = int(input())
    a = list(map(int, input().split()))
    if max(a) == 0:
        print("No")           # all elements are zero since the total sum is 0
    else:
        print("Yes")
        pos = [x for x in a if x >= 0]
        neg = [x for x in a if x < 0]
        ans = []
        prefix_sum = 0
        for _ in range(n):
            # keep every prefix sum inside (min(a), max(a)]
            if prefix_sum <= 0:
                ans.append(pos.pop())
            else:
                ans.append(neg.pop())
            prefix_sum += ans[-1]
        print(' '.join(map(str, ans)))
|
1798
|
E
|
Multitest Generator
|
Let's call an array $b_1, b_2, \ldots, b_m$ a test if $b_1 = m - 1$.
Let's call an array $b_1, b_2, \ldots, b_m$ a multitest if the array $b_2, b_3, \ldots, b_m$ can be split into $b_1$ non-empty subarrays so that each of these subarrays is a test. Note that each element of the array must be included in exactly one subarray, and the subarrays must consist of consecutive elements.
Let's define the function $f$ from the array $b_1, b_2, \ldots, b_m$ as the minimum number of operations of the form "Replace any $b_i$ with any \textbf{non-negative} integer $x$", which needs to be done so that the array $b_1, b_2, \ldots, b_m$ becomes a multitest.
You are given an array of \textbf{positive} integers $a_1, a_2, \ldots, a_n$. For each $i$ from $1$ to $n - 1$, find $f([a_i, a_{i+1}, \ldots, a_n])$.
Below are some examples of tests and multitests.
- Tests: $[\underline{1}, 5]$, $[\underline{2}, 2, 2]$, $[\underline{3}, 4, 1, 1]$, $[\underline{5}, 0, 0, 0, 0, 0]$, $[\underline{7}, 1, 2, 3, 4, 5, 6, 7]$, $[\underline{0}]$. These arrays are tests since their first element (underlined) is equal to the length of the array minus one.
- Multitests: $[1, \underline{\underline{1}, 1}]$, $[2, \underline{\underline{3}, 0, 0, 1}, \underline{\underline{1}, 12}]$, $[3, \underline{\underline{2}, 2, 7}, \underline{\underline{1}, 1}, \underline{\underline{3}, 4, 4, 4}]$, $[4, \underline{\underline{0}}, \underline{\underline{3}, 1, 7, 9}, \underline{\underline{4}, 2, 0, 0, 9}, \underline{\underline{1}, 777}]$. Underlined are the subarrays after the split, and double underlined are the first elements of each subarray.
|
The first idea: you can make a multitest from any array using $2$ operations. To do this, replace the first element with $1$, and the second with $n - 2$, where $n$ is the length of the array. It remains to learn how to determine whether an array can be made a multitest with $0$ or with $1$ change. First let's deal with $0$ changes. Question: when is the array $a_i, \ldots, a_n$ a multitest? Answer: when $a_{i+1}, \ldots, a_n$ represents exactly $a_i$ tests written in a row. For convenience, call the index $i$ good if $a_i, \ldots, a_n$ represents some number of tests written in a row. For all indices, we want to find out whether they are good, and if so, of how many tests the corresponding suffix consists. Denote $go_i = i + a_i + 1$. The logic is that if $i$ is the first element of a certain test, then $go_i$ is the first element of the next test. Then the index $i$ is good if the chain $i \to go_i \to go_{go_i} \to \ldots$ ends at $n + 1$. Whether this chain ends at $n + 1$, and if so, of how many tests it consists, can be computed with simple dynamic programming over suffixes. In total, to determine whether the suffix $a_i, \ldots, a_n$ is a multitest by itself, check that $i+1$ is good and that its suffix consists of exactly $a_i$ tests. Now it remains to determine whether a multitest can be obtained with $1$ change. If the array became a multitest after the change, then the changed element was responsible either for the number of tests or for the first element of some test. Case one: the number of tests changes. Then the array can be made a multitest if and only if $i + 1$ is a good index. Case two: the changed index becomes the first element of some test. We maintain "the maximum number of tests that can be achieved by changing one element" as a suffix dynamic programming value. Then one change can make the suffix a multitest if this value for $i+1$ is greater than or equal to $a_i$.
How to calculate this dynamics for $i+1$: the changed index must be one of $i+1$, $go_{i+1}$, $go_{go_{i+1}}$, $\ldots$, otherwise the test chain of $i+1$ stays the same and the change does not affect anything. If one of $go_{i+1}, go_{go_{i+1}}, \ldots$ changes, then this change is reflected in the dynamics value for $go_{i+1}$, through which it can be recalculated. And if $i+1$ itself changes, then the change must make $go_{i+1}$ a good index (after the change). Here $go_{i+1}$ can become any index $\geq i+2$ after the change, and among all of these, we need a good index with the maximum number of tests. This value can be maintained separately when recalculating the dynamics.
|
[
"brute force",
"dp"
] | 2,300
|
N = 300777
go = [0] * N
good_chain = [False] * N
chain_depth = [0] * N
suf_max_chain_depth = [0] * N
ans = ""
for _ in range(int(input())):
    n = int(input())
    a = [0] + list(map(int, input().split()))
    chain_depth[n + 1] = 0
    suf_max_chain_depth[n + 1] = 0
    curr_max_chain_depth = 0
    for i in range(n, 0, -1):
        go[i] = i + a[i] + 1
        # i is "good" if the chain i -> go[i] -> ... ends exactly at n + 1
        good_chain[i] = go[i] == n + 1 or (go[i] <= n and good_chain[go[i]])
        chain_depth[i] = 1 + chain_depth[min(go[i], n + 1)]
        # best chain length achievable from i with at most one changed element
        suf_max_chain_depth[i] = 1 + curr_max_chain_depth
        if go[i] <= n + 1:
            suf_max_chain_depth[i] = max(suf_max_chain_depth[i], 1 + suf_max_chain_depth[go[i]])
        if good_chain[i]:
            curr_max_chain_depth = max(curr_max_chain_depth, chain_depth[i])
    for i in range(1, n):
        if good_chain[i + 1] and chain_depth[i + 1] == a[i]:
            ans += "0 "
        elif good_chain[i + 1] or suf_max_chain_depth[i + 1] >= a[i]:
            ans += "1 "
        else:
            ans += "2 "
    ans += '\n'
print(ans)
|
1798
|
F
|
Gifts from Grandfather Ahmed
|
Grandfather Ahmed's School has $n+1$ students. The students are divided into $k$ classes, and $s_i$ students study in the $i$-th class. So, $s_1 + s_2 + \ldots + s_k = n+1$.
Due to the upcoming April Fools' Day, all students will receive gifts!
Grandfather Ahmed planned to order $n+1$ boxes of gifts. Each box can contain one or more gifts. He plans to distribute the boxes between classes so that the following conditions are satisfied:
- Class number $i$ receives \textbf{exactly} $s_i$ boxes (so that each student can open exactly one box).
- The total number of gifts in the boxes received by the $i$-th class should be a multiple of $s_i$ (it should be possible to equally distribute the gifts among the $s_i$ students of this class).
Unfortunately, Grandfather Ahmed ordered only $n$ boxes with gifts, the $i$-th of which contains $a_i$ gifts.
Ahmed has to buy the missing gift box, and the number of gifts in the box should be an integer between $1$ and $10^6$. Help Ahmed determine how many gifts the missing box should contain and build a suitable distribution of boxes to classes, or report that this is impossible.
|
Incredible mathematical fact: from any $2n - 1$ integers, one can choose $n$ with a sum divisible by $n$ (the Erdős-Ginzburg-Ziv theorem). The proof can be found on the Web; the brief idea: first prove it for primes, then prove that if it is true for $n = a$ and $n = b$, it is true for $n = ab$. Sort the class sizes: $s_1 \leq s_2 \leq \ldots \leq s_k$. Distribute the available boxes to the first $k - 1$ classes in order. When considering class $i$, we have $s_i + \ldots + s_k - 1$ boxes at our disposal. Since $s_k \geq s_i$, there are at least $2 \cdot s_i - 1$ boxes, from which we can always allocate $s_i$ to send to class $i$. And for the last class, we can add a box with the necessary number of gifts ourselves to ensure divisibility. The remaining question is how to select, among $2 \cdot s_i - 1$ numbers, $s_i$ numbers with a sum divisible by $s_i$. The constraints allow doing this with straightforward dynamic programming in $O(n^3)$: $dp[i][j][k]$ is whether it is possible to choose $j$ among the first $i$ numbers so that their sum gives remainder $k$ modulo the required modulus. The answer is restored in the classical way.
|
[
"dp",
"math",
"number theory"
] | 2,500
|
#include <bits/stdc++.h>
#define pb push_back
// #define int long long
#define all(x) x.begin(), x.end()
#define ld long double
using namespace std;

const int N = 210;
bool dp[N][N][N];
bool take[N][N][N];

signed main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    cout.tie(nullptr);
    int n, k;
    cin >> n >> k;
    vector<int> a(n), s(k);
    for (int i = 0; i < n; i++) {
        cin >> a[i];
    }
    for (int i = 0; i < k; i++) {
        cin >> s[i];
    }
    vector<pair<int, int>> s2;
    for (int i = 0; i < k; i++) {
        s2.pb({s[i], i});
    }
    sort(all(s2));
    vector<vector<int>> ans(k);
    for (int i = 0; i < k - 1; i++) {
        int class_size = s2[i].first;
        vector<int> boxes;
        for (int _ = 0; _ < 2 * class_size - 1; _++) {
            boxes.pb(a.back());
            a.pop_back();
        }
        for (int sz = 0; sz <= class_size; sz++) {
            for (int r = 0; r < class_size; r++) {
                dp[0][sz][r] = false;
                take[0][sz][r] = false;
            }
        }
        dp[0][0][0] = true;
        dp[0][1][boxes[0] % class_size] = true;
        take[0][1][boxes[0] % class_size] = true;
        for (int j = 1; j < (int) boxes.size(); j++) {
            for (int sz = 0; sz <= class_size; sz++) {
                for (int r = 0; r < class_size; r++) {
                    dp[j][sz][r] = dp[j - 1][sz][r];
                    if (sz > 0 && dp[j - 1][sz - 1][(class_size + r - boxes[j] % class_size) % class_size]) {
                        dp[j][sz][r] = true;
                        take[j][sz][r] = true;
                    } else {
                        take[j][sz][r] = false;
                    }
                }
            }
        }
        vector<bool> used(2 * class_size - 1);
        int sz = class_size, r = 0;
        for (int j = (int) boxes.size() - 1; j >= 0; j--) {
            if (take[j][sz][r]) {
                used[j] = true;
                sz--;
                r += (class_size - boxes[j] % class_size);
                r %= class_size;
            } else {
                used[j] = false;
            }
        }
        vector<int> to_class;
        for (int j = 0; j < (int) boxes.size(); j++) {
            if (!used[j]) {
                a.pb(boxes[j]);
            } else {
                to_class.pb(boxes[j]);
            }
        }
        ans[s2[i].second] = to_class;
    }
    int sum = 0;
    for (auto x : a) {
        sum += x;
        sum %= (int) (a.size() + 1);
    }
    int add = (int) (a.size() + 1) - sum;
    ans[s2[k - 1].second] = a;
    ans[s2[k - 1].second].pb(add);
    cout << add << '\n';
    for (auto arr : ans) {
        for (auto x : arr) {
            cout << x << ' ';
        }
        cout << '\n';
    }
}
|
1799
|
A
|
Recent Actions
|
On Codeforces the "Recent Actions" field shows the last $n$ posts with recent actions.
Initially, there are posts $1, 2, \ldots, n$ in the field (in order from top to bottom). Also there are infinitely many posts not in the field, numbered with integers $n + 1, n + 2, \ldots$.
When recent action happens in the post $p$:
- If it is in the "Recent Actions" field, it moves from its position to the top position.
- Otherwise, it is added to the top position, and the post in the bottom position is removed from the "Recent Actions" field.
You know, that the next $m$ recent actions will happen in the posts $p_1, p_2, \ldots, p_m$ ($n + 1 \leq p_i \leq n + m$) in the moments of time $1, 2, \ldots, m$. \textbf{Note}, that recent actions only happen with posts with numbers $\geq n + 1$.
For each post $i$ ($1 \leq i \leq n$), find the first time it will be removed from the "Recent Actions" field, or report that it will never be removed.
|
Note that posts will be removed in the order $n, n - 1, \ldots, 1$. The post $n - k + 1$ will be removed at the first moment when there are at least $k$ different numbers among $p_1, p_2, \ldots, p_i$. So let's calculate the number of different numbers among $p_1, p_2, \ldots, p_i$ for each $i$, using a boolean array of length $m$ and iterating $i$ from $1$ to $m$. From these counts we can compute the answer. Time complexity: $O(n + m)$.
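The counting idea above can be sketched in Python (the function name and returning $-1$ for posts that are never removed are my own conventions; the actual problem additionally reads several test cases):

```python
def removal_times(n, m, p):
    # Posts leave the field in the order n, n-1, ..., 1: post n-k+1 is
    # removed at the first moment when p_1..p_i contain k distinct values.
    ans = [-1] * (n + 1)      # ans[j] = first time post j is removed, -1 if never
    seen = set()
    nxt = n                   # the next old post that will be pushed out
    for moment, x in enumerate(p, start=1):
        if x not in seen:
            seen.add(x)
            if nxt >= 1:
                ans[nxt] = moment
                nxt -= 1
    return ans[1:]
```

Using a set here keeps the sketch short; the editorial's boolean array of length $m$ gives the same $O(n + m)$ bound without hashing.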
|
[
"data structures",
"greedy",
"implementation",
"math"
] | 800
| null |
1799
|
B
|
Equalize by Divide
|
You are given an array $a_1, a_2, \ldots, a_n$ of positive integers.
You can make this operation multiple (possibly zero) times:
- Choose two indices $i$, $j$ ($1 \leq i, j \leq n$, $i \neq j$).
- Assign $a_i := \lceil \frac{a_i}{a_j} \rceil$. Here $\lceil x \rceil$ denotes $x$ rounded up, i.e. the smallest integer $\geq x$.
Is it possible to make all array elements equal by some sequence of operations (possibly empty)? If yes, print \textbf{any} way to do it in at most $30n$ operations.
It can be proven, that under the problem constraints, if some way exists to make all elements equal, there exists a way with at most $30n$ operations.
|
If all numbers are equal initially, we can do nothing. Otherwise, if some $a_i = 1$, the answer does not exist: this $a_i$ can't become bigger during operations, and all other elements can't become equal to $1$ simultaneously, because for the last operation $a_j > 1$ must hold (otherwise we could remove this operation). If all $a_i \geq 2$, the answer exists and we can simulate the following algorithm: take $i$ such that $a_i$ is maximum possible and $j$ such that $a_j$ is smallest possible, and apply the operation to $(i, j)$. After at most $30n$ operations all elements will be equal: each operation at least halves $a_i$ (rounded up), and all elements stay bounded by $a_x \geq 2$ after each operation, so each number can be decreased at most $30$ times. Time complexity: $O(n^2 \log{C})$, where $C = 10^9$.
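A Python sketch of this simulation (returning `None` when no answer exists is my own convention):

```python
def equalize(a):
    # Returns a list of 1-based operations (i, j) making all elements equal,
    # [] if nothing is needed, or None if it is impossible.
    a = a[:]
    if len(set(a)) == 1:
        return []
    if min(a) == 1:
        return None                      # a 1 can never grow; others can't all reach 1
    ops = []
    while len(set(a)) > 1:
        i = a.index(max(a))              # largest element
        j = a.index(min(a))              # smallest element
        a[i] = -(-a[i] // a[j])          # ceil(a_i / a_j); stays >= 2 since a_i > a_j
        ops.append((i + 1, j + 1))
    return ops
```

Each operation strictly decreases the maximum, so the loop terminates, and the halving argument above bounds the number of operations by $30n$.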
|
[
"brute force",
"constructive algorithms",
"greedy",
"math"
] | 1,200
| null |
1799
|
C
|
Double Lexicographically Minimum
|
You are given a string $s$. You can reorder the characters to form a string $t$. Define $t_{\mathrm{max}}$ to be the lexicographical maximum of $t$ and $t$ in reverse order.
Given $s$, determine the lexicographically minimum value of $t_{\mathrm{max}}$ over all reorderings $t$ of $s$.
A string $a$ is lexicographically smaller than a string $b$ if and only if one of the following holds:
- $a$ is a prefix of $b$, but $a \ne b$;
- in the first position where $a$ and $b$ differ, the string $a$ has a letter that appears earlier in the alphabet than the corresponding letter in $b$.
|
Let's iterate over the symbols of $s$ in order from smallest to largest and construct the answer $t_{max}$. Let the current symbol be $x$. If there are at least $2$ remaining copies of $x$, add one copy to the current prefix and one to the current suffix of $t_{max}$ and continue. If a single $x$ remains and at most one other distinct symbol $y$ is left (so $s$ has one $x$ and $c$ copies of $y$ left), add $\lceil \frac{c}{2} \rceil$ copies of $y$, the symbol $x$, and $\lfloor \frac{c}{2} \rfloor$ copies of $y$ to the prefix of $t_{max}$ and stop. Otherwise, add all remaining symbols of $s$ (excluding $x$) to the prefix of $t_{max}$ in sorted order, followed by the symbol $x$. Time complexity: $O(n + A)$, where $A = 26$.
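A Python sketch of this construction, building $t_{max}$ as a prefix plus a reversed suffix (names are mine):

```python
from collections import Counter

def min_tmax(s):
    # Scan symbols smallest to largest, following the case analysis above.
    cnt = Counter(s)
    letters = sorted(cnt)
    pre, suf = [], []
    i = 0
    while i < len(letters):
        x = letters[i]
        if cnt[x] >= 2:
            # a pair of x goes to both ends of t_max
            pre.append(x)
            suf.append(x)
            cnt[x] -= 2
            continue
        if cnt[x] == 0:
            i += 1
            continue
        # exactly one x left; look at the remaining larger symbols
        rest = [(y, cnt[y]) for y in letters[i + 1:] if cnt[y] > 0]
        if len(rest) <= 1:
            y, c = rest[0] if rest else ('', 0)
            pre += [y] * ((c + 1) // 2) + [x] + [y] * (c // 2)
        else:
            for y, cy in rest:
                pre += [y] * cy
            pre.append(x)
        break
    return ''.join(pre + suf[::-1])
```

For example, `min_tmax("aaab")` gives `"abaa"`: the lone `b` is split around the last `a`, after one `a` has been placed at each end.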
|
[
"greedy",
"strings"
] | 1,700
| null |
1799
|
D2
|
Hot Start Up (hard version)
|
This is a hard version of the problem. The constraints of $t$, $n$, $k$ are the only difference between versions.
You have a device with two CPUs. You also have $k$ programs, numbered $1$ through $k$, that you can run on the CPUs.
The $i$-th program ($1 \le i \le k$) takes $cold_i$ seconds to run on some CPU. However, if the last program we ran on this CPU was also program $i$, it only takes $hot_i$ seconds ($hot_i \le cold_i$). Note that this only applies if we run program $i$ multiple times consecutively — if we run program $i$, then some different program, then program $i$ again, it will take $cold_i$ seconds the second time.
You are given a sequence $a_1, a_2, \ldots, a_n$ of length $n$, consisting of integers from $1$ to $k$. You need to use your device to run programs $a_1, a_2, \ldots, a_n$ in sequence. For all $2 \le i \le n$, you cannot start running program $a_i$ until program $a_{i - 1}$ has completed.
Find the minimum amount of time needed to run all programs $a_1, a_2, \ldots, a_n$ in sequence.
|
Consider maintaining the following 2-dimensional DP: $dp_{i,j}$ is the minimum time needed to run all previous programs such that the last program run on CPU 1 was program $i$, and the last program run on CPU 2 was program $j$. Initially $dp_{0,0} = 0$ (here, $0$ is a placeholder program) and $dp_{i,j} = INF$ for $(i, j) \neq (0, 0)$. When we come to a program $x$, we can transition as follows. First, create a new DP array $ndp$, initialized to all $INF$. Then:

- For all $dp_{i,j}$ with $i \neq x$, set $ndp_{x,j} = \min(ndp_{x,j}, dp_{i,j} + cold_x)$.
- For all $dp_{x,j}$, set $ndp_{x,j} = \min(ndp_{x,j}, dp_{x,j} + hot_x)$.
- For all $dp_{i,j}$ with $j \neq x$, set $ndp_{i,x} = \min(ndp_{i,x}, dp_{i,j} + cold_x)$.
- For all $dp_{i,x}$, set $ndp_{i,x} = \min(ndp_{i,x}, dp_{i,x} + hot_x)$.

After all updates, replace $dp$ with $ndp$. This works in $O(nk^2)$. To optimize it, notice that after processing a program $x$, only entries in $dp$ with the row or column equal to $x$ are non-$INF$. Consider instead the following 1-dimensional DP array: $dp_i$ contains the minimum time to run the previous programs if one CPU last ran the current program, and the other last ran program $i$. Initially, $dp_0 = 0$ and all other $dp_i = INF$. Also, we add a dummy program with ID $0$ and $hot_0 = cold_0 = 0$ (this makes implementation easier). When we come to a program $x$, again create a new DP array $ndp$ initialized to all $INF$, and case on whether the previous program was equal to $x$. If the last program run was also $x$:

- For all $i$, set $ndp_i = \min(ndp_i, dp_i + hot_x)$.
- For all $i \neq x$, set $ndp_x = \min(ndp_x, dp_i + cold_x)$.
- Set $ndp_x = \min(ndp_x, dp_x + hot_x)$.

Otherwise, let $y$ be the last program run:

- For all $i$, set $ndp_i = \min(ndp_i, dp_i + cold_x)$.
- For all $i \neq x$, set $ndp_y = \min(ndp_y, dp_i + cold_x)$.
- Set $ndp_y = \min(ndp_y, dp_x + hot_x)$.

This gets us an $O(nk)$ solution, which is accepted in the easy version. To optimize further, use a data structure to perform the updates: each transition either adds some value to every $dp$ element or updates a single index. This can be done in constant time per update (with a global additive offset), or with a segment tree or some other range-update structure (though this is somewhat overkill). The overall complexity becomes $O(n + k)$ or $O(n \log k + k)$ per test.
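The constant-time-per-update bookkeeping can be sketched in Python with a lazy global offset (the names and the convention that `cold`/`hot` are 1-indexed lists whose index $0$ is the dummy program are my own):

```python
def min_total_time(a, cold, hot):
    # dp[i] + shift = min total time so far, given that the *other* CPU
    # last ran program i (0 is the dummy program); y is the program just run.
    INF = float("inf")
    k = len(cold) - 1
    dp = [INF] * (k + 1)
    dp[0] = 0
    shift = 0                 # lazy "add this to every dp entry"
    best = 0                  # min over all real values dp[i] + shift
    y = 0
    for x in a:
        # run x on the other CPU: cold, unless that CPU also last ran x
        cand = min(best + cold[x], dp[x] + shift + hot[x])
        if x == y:
            shift += hot[x]   # run x again on the same CPU: always hot
            dp[x] = min(dp[x], cand - shift)
        else:
            shift += cold[x]  # run x on the CPU that last ran y: cold
            dp[y] = min(dp[y], cand - shift)
        best = min(best + (hot[x] if x == y else cold[x]), cand)
        y = x
    return best
```

The global add never changes which entry is minimal, so `best` can be maintained with a single `min` per step, giving $O(n + k)$ per test.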
|
[
"data structures",
"dp"
] | 2,100
| null |
1799
|
E
|
City Union
|
You are given $n \times m$ grid. Some cells are filled and some are empty.
A city is a maximal (by inclusion) set of filled cells such that it is possible to get from any cell in the set to any other cell in the set by moving to adjacent (by side) cells, without moving into any cells not in the set. In other words, a city is a connected component of filled cells with edges between adjacent (by side) cells.
Initially, there are \textbf{two cities} on the grid. You want to change some empty cells into filled cells so that both of the following are satisfied:
- There is \textbf{one city} on the resulting grid.
- The shortest path between any two filled cells, achievable only by moving onto filled cells, is equal to the Manhattan distance between them.
The Manhattan distance between two cells $(a, b)$ and $(c, d)$ is equal to $|a - c| + |b - d|$.
Find a way to add filled cells that satisfies these conditions and minimizes the total number of filled cells.
|
Note that the resulting grid is correct if and only if the filled cells form a contiguous segment in each row and column (condition *) and there is one city. So we can define a filling operation: given a grid, fill all cells between the leftmost and rightmost filled cells in each row and between the topmost and bottommost filled cells in each column. Applying this operation $(n+m)$ times makes condition * satisfied, and we have only filled cells that definitely must be filled. If there is now one city, we have solved the problem. But if there are still two cities, their projections onto the horizontal and vertical axes do not intersect, so we need to connect them with some path. Consider the case when one city is above and to the left of the other (otherwise apply a rotation to reduce to this case). Define the lowest row of the first city as $i_1$, its rightmost column as $j_1$, the uppermost row of the second city as $i_2$ and its leftmost column as $j_2$. Fill the cells on any Manhattan shortest path between cells $(i_1, j_1)$ and $(i_2, j_2)$. After that, again using the filling operation, we fill the cells that must be filled. It is easy to see that this solution produces the smallest possible number of filled cells. Time complexity: $O(nm (n + m))$.
|
[
"constructive algorithms",
"dfs and similar",
"dsu",
"geometry",
"greedy",
"implementation",
"math"
] | 2,300
| null |
1799
|
F
|
Halve or Subtract
|
You have an array of positive integers $a_1, a_2, \ldots, a_n$, of length $n$. You are also given a positive integer $b$.
You are allowed to perform the following operations, possibly several times and in any order:
- Choose some $1 \le i \le n$, and replace $a_i$ with $\lceil \frac{a_i}{2} \rceil$. Here, $\lceil x \rceil$ denotes the smallest integer not less than $x$.
- Choose some $1 \le i \le n$, and replace $a_i$ with $\max(a_i - b, 0)$.
However, you must also follow these rules:
- You can perform at most $k_1$ operations of type 1 in total.
- You can perform at most $k_2$ operations of type 2 in total.
- For all $1 \le i \le n$, you can perform at most $1$ operation of type 1 on element $a_i$.
- For all $1 \le i \le n$, you can perform at most $1$ operation of type 2 on element $a_i$.
The cost of an array is the sum of its elements. Find the minimum cost of $a$ you can achieve by performing these operations.
|
For convenience, let $half(x)$ denote $\lceil \frac{x}{2} \rceil$, and $sub(x)$ denote $\max(x - b, 0)$. First, notice that if we apply both operations to some element, it will be optimal to apply halving first, then subtraction. We can prove this with 2 cases: $a_i \le 2b$. In this case, $half(a_i) \le b$, and so $sub(half(a_i)) = 0$. Since applying either function to a nonnegative integer results in a nonnegative integer, $0 = sub(half(a_i)) \le half(sub(a_i))$. Otherwise, $a_i > 2b$. Then $sub(half(a_i)) = a_i - \lfloor \frac{a_i}{2} \rfloor - b$, and $half(sub(a_i)) = a_i - b - \lfloor \frac{a_i - b}{2} \rfloor$. Since $\lfloor \frac{a_i - b}{2} \rfloor \le \lfloor \frac{a_i}{2} \rfloor$, $sub(half(a_i)) \le half(sub(a_i))$. Next, suppose there are exactly $p$ elements to which we apply both operations. Then, it will be optimal to apply both operations to the $p$ greatest elements in the array. This should be somewhat intuitive, but here's a proof: Suppose there are some $a_i, a_j$ such that $a_i < a_j$. Suppose we apply both operations to $a_i$, and only one operation to $a_j$. Then applying both operations to $a_j$ and a single operation to $a_i$ instead won't increase the resulting sum. We can prove this with two cases: We apply $half$ to $a_j$. Note that since it's optimal to apply $half$ first when applying both operations, this case is essentially: halve both elements, then choose one to apply $sub$ to. And it's better to subtract $b$ from the greater element, since $x - sub(x) \le y - sub(y)$ for $x \le y$. We apply $sub$ to $a_j$. We can analyze this with (surprise!) a few cases: $a_i, a_j \le 2b$. Then whichever element we apply both operations to will be $0$, and the result will depend only on the other element. $sub(x) \le sub(y)$ for $x \le y$, so it's better to apply both operations to the greater element. $a_j > 2b$. If we apply both operations to the greater element, we subtract $\min(a_i, b) + \lfloor \frac{a_j}{2} \rfloor + b$ from the sum. 
But applying both operations to the lesser element subtracts $\lfloor \frac{a_i}{2} \rfloor + \min(half(a_i), b) + b \le \min(a_i, b) + \lfloor \frac{a_i}{2} \rfloor + b$ from the sum. So it's optimal to apply both operations to the greater element. And this covers all cases where $a_i < a_j$.
Let's fix $p$, the number of elements we apply both operations to. After taking them out, we will be left with the $n-p$ smallest elements in the array. Suppose we have $v_1$ operations of type 1 left, and $v_2$ operations of type 2 left. We'll assume $v_1 + v_2 \le n - p$ (otherwise we can apply both operations to more elements). Notice that it's optimal to apply our $v_1 + v_2$ operations to the $v_1 + v_2$ greatest remaining elements: subtracting from a greater element can't decrease the amount we subtract, and halving a greater element can't decrease the amount we take away. So we're left with $v_1 + v_2$ elements, and we want to choose $v_1$ of them to apply the halving to. Let's consider a few cases and try to analyze them. Let $a_i \le a_j$, and suppose we want to apply $half$ to one of them and $sub$ to the other. If $a_i, a_j \le b$, then it will be optimal to apply $sub$ to the greater element. If $b \le a_i, a_j$, then it will be optimal to apply $sub$ to the smaller element. Using this information, we can form the final lemma we need for our solution: let $a_i \le a_j \le a_k$, and suppose we apply $sub$ to $a_i$ and $a_k$, and $half$ to $a_j$. Then it will not increase the answer to apply $half$ to one of $a_i$ or $a_k$ instead. There are 4 cases we should consider to prove this lemma: if $a_k \le b$, then we should apply $half$ to $a_i$ and $sub$ to the others; if $b \le a_i$, then we should apply $half$ to $a_k$ and $sub$ to the others; if $a_i, a_j \le b \le a_k$, then we should apply $half$ to $a_i$ and $sub$ to the others; if $a_i \le b \le a_j, a_k$, then we should apply $half$ to $a_k$ and $sub$ to the others. You can verify that doing this produces the optimal answer. And using this lemma, we find that the optimal answer has all $sub$ operations applied to some of the middle elements, with all $half$ operations applied to the endpoints.
To summarize, the optimal answer has the following form (assuming $a$ is sorted in non-increasing order, $a_1 \ge a_2 \ge a_3 \ge \ldots \ge a_n$): first come some elements to which we apply both operations; second, some elements to which we apply only $half$; third, some elements to which we apply only $sub$; fourth, some elements to which we apply only $half$; finally, some elements to which we apply no operations. Note that some of these segments may be empty. It's easy to verify that it's optimal to use all given operations. So if we loop through all possible sizes for the first two segments, we can uniquely determine the sizes of the last three. Finally, using prefix sums to quickly find the sums of elements in a segment, we get an $O(n^2)$ solution. There is an interesting fact: if we fix the size of the first group $p$ and calculate the answer $f(p)$ for it, the function $f$ is convex. So ternary or binary search could be used to find the minimum in $O(n \log{n})$ time, but this was not necessary.
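A Python sketch of the $O(n^2)$ enumeration, iterating over $p$ (elements receiving both operations) and $q$ (halvings placed before the subtraction block); the function name and the clamp $k_1, k_2 \le n$ are my own:

```python
def min_cost(a, b, k1, k2):
    n = len(a)
    k1, k2 = min(k1, n), min(k2, n)          # at most one op of each type per element
    a = sorted(a, reverse=True)
    half = [-(-x // 2) for x in a]           # ceil(x / 2)
    sub = [max(x - b, 0) for x in a]
    both = [max(h - b, 0) for h in half]     # halve first, then subtract

    def pref(v):                             # prefix sums for O(1) segment sums
        p = [0]
        for x in v:
            p.append(p[-1] + x)
        return p

    pa, ph, ps, pb = pref(a), pref(half), pref(sub), pref(both)
    best = float("inf")
    for p in range(min(k1, k2) + 1):
        v1, v2 = k1 - p, k2 - p
        if v1 + v2 > n - p:
            continue                         # dominated by a larger p
        for q in range(v1 + 1):
            cost = (pb[p]                                   # both ops
                    + ph[p + q] - ph[p]                     # q halvings
                    + ps[p + q + v2] - ps[p + q]            # all subtractions
                    + ph[p + v1 + v2] - ph[p + q + v2]      # remaining halvings
                    + pa[n] - pa[p + v1 + v2])              # untouched tail
            best = min(best, cost)
    return best
```

For instance, with `a = [10, 1]`, `b = 3`, `k1 = k2 = 1`, the best choice is to apply both operations to $10$ (giving $\max(5-3,0)=2$) and leave $1$ alone, for a total of $3$.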
|
[
"binary search",
"brute force",
"dp",
"greedy",
"sortings"
] | 2,700
| null |
1799
|
G
|
Count Voting
|
There are $n$ people that will participate in voting. Each person has exactly one vote.
The $i$-th person has a team $t_i$ ($1 \leq t_i \leq n$), where $t_i = t_j$ means $i$ and $j$ are in the same team. By the rules, each person should vote for a person from a different team. Note that this automatically means that each person can't vote for himself.
Each person knows the number of votes $c_i$ he wants to get. How many possible votings exist such that each person gets the desired number of votes? Since this number can be big, find it modulo $998\,244\,353$.
|
Let's solve the problem using the inclusion-exclusion principle. If we do not consider teams, there are $\frac{n!}{\prod\limits_{i=1}^{n} c_i!}$ ways to make votings. But now consider the following sets of bad votings: person $i$ made a vote for a person from the same team $t_i$. We want to calculate the size of the union of such sets of votings. So the answer to the problem is $\sum\limits_{k=0}^{n} (-1)^k f_k$, where $f_k$ is the number of ways to make votings where we first fix some subset of $k$ people who vote for their own team (for the others the vote can be anything) and calculate the number of votings for it, summed over all subsets of $k$ people. Let's define $b_1, b_2, \ldots, b_n$, where $0 \leq b_i \leq c_i$ is the number of people from the same team who vote for the $i$-th person (so $\sum\limits_{i=1}^{n} b_i = k$). Then $f_k = \sum\limits_{0 \leq b_i \leq c_i} cnt(b_1, b_2, \ldots, b_n)$, where $cnt(b_1, b_2, \ldots, b_n)$ is the number of votings where for each $i$ we first fix $b_i$ people from team $t_i$ who will vote for $i$, summed over all ways to fix them. How is $cnt$ calculated? There are $\frac{(\sum\limits_{i=1}^{n} c_i - b_i)!}{\prod\limits_{i=1}^{n} (c_i - b_i)!} = \frac{(n - k)!}{\prod\limits_{i=1}^{n} (c_i - b_i)!}$ ways to make votes for the people who were not selected into the subset. This should be multiplied by the number of ways to vote for the people who were fixed, found for each team separately. Consider some team $1, 2, \ldots, m$ of $m$ people (WLOG they are the first $m$ people). There are $\frac{m!}{(\prod\limits_{i=1}^{m} b_i!) (m - \sum\limits_{i=1}^{m} b_i)!}$ ways (*) to fix people in this team. These counts are multiplied over all teams. Now, to sum $cnt$ over all $b$ we of course can't iterate over all possible $b$; we will do it using dynamic programming.
Let's iterate over the teams and calculate $dp_{i,k}$ - the sum of the current products over all prefixes of arrays $b$, where we have considered the first $i$ teams and the sum of the elements of $b$ on this prefix is $k$. When we add a new team to our $dp$, we write $dp_{i+1, k+s} \mathrel{+}= dp_{i,k} \cdot t_{i+1,s}$, where $t_{i+1,s}$ is the sum of (*) multiplied by $\frac{1}{\prod (c_j - b_j)!}$ over all ways to choose $b$ inside team $i+1$ with the sum of $b_j$ in this team equal to $s$. To calculate the values $t_{i,s}$ for each team $i$ we can also use a prefix dynamic programming inside the team. At the end we find $f_k = dp_{t,k} \cdot (n-k)!$, where $t$ is the number of teams. So the time complexity of this solution is $O(n^2)$. There are many different ways to implement the solution (it can even be optimized with power series), but inclusion-exclusion is necessary to solve the problem.
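The DP described above can be sketched directly. Below is an illustrative Python implementation (function name ours) that uses exact `Fraction` arithmetic for clarity instead of the modular inverses a real solution would need; the voting model (every person casts exactly one vote, and a voting is valid when nobody votes for a teammate) is inferred from the statement fragment above.

```python
from collections import defaultdict
from fractions import Fraction
from math import factorial

def count_votings(team, c):
    # team[i] = team label of person i; c[i] = votes person i must receive
    # (assumes everyone casts exactly one vote and sum(c) == len(c))
    n = len(c)
    groups = defaultdict(list)
    for i, t in enumerate(team):
        groups[t].append(i)
    # dp[k]: sum over all ways to fix b_i inside the processed teams with sum k
    # of the factor (*) for each team times prod 1/(c_i - b_i)!
    dp = [Fraction(0)] * (n + 1)
    dp[0] = Fraction(1)
    for members in groups.values():
        m = len(members)
        # inner[s] = sum over b inside this team with sum s
        #            of prod 1/(b_i! * (c_i - b_i)!)
        inner = [Fraction(0)] * (m + 1)
        inner[0] = Fraction(1)
        for i in members:
            nxt = [Fraction(0)] * (m + 1)
            for s in range(m + 1):
                if inner[s]:
                    for b in range(min(c[i], m - s) + 1):
                        nxt[s + b] += inner[s] / (factorial(b) * factorial(c[i] - b))
            inner = nxt
        # t_s: multiply by m!/(m-s)! to choose which s teammates are the fixed voters
        ts = [inner[s] * factorial(m) / factorial(m - s) for s in range(m + 1)]
        ndp = [Fraction(0)] * (n + 1)
        for k in range(n + 1):
            if dp[k]:
                for s in range(m + 1):
                    if k + s <= n:
                        ndp[k + s] += dp[k] * ts[s]
        dp = ndp
    # f_k = dp[k] * (n - k)!; answer = sum (-1)^k f_k
    ans = sum((-1) ** k * dp[k] * factorial(n - k) for k in range(n + 1))
    return int(ans)
```

For instance, with two teams of two people and $c_i = 1$ everywhere, the two members of each team must vote for the other team bijectively, giving $2 \cdot 2 = 4$ votings.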
|
[
"combinatorics",
"dp",
"math"
] | 2,600
| null |
1799
|
H
|
Tree Cutting
|
You are given a tree with $n$ vertices.
A hero performs the following operation $k$ times:
- Choose some edge.
- Remove it.
- Take one of the two remaining parts and delete it.
- Write the number of vertices in the remaining part.
You are given the initial tree and a sequence of written numbers. Find the number of ways to perform the operations such that the written numbers are equal to the given ones. Since the answer can be big, find it modulo $998\,244\,353$. Two ways are considered different if on some operation the edge or the remaining part is selected differently.
|
We should calculate the number of ways to choose some $k$ edges of our tree (and their directions) corresponding to the operations, such that performing the operations on them results in the given sequence of written numbers. To calculate these ways, consider a dynamic programming $dp_{v,mask}$ over subtrees of our tree. Here $v$ corresponds to the subtree rooted at vertex $v$; we also include the edge from $v$ to its parent in this subtree. $mask$ is a subset of $\{1, 2, \ldots, k\}$, meaning that in this subtree we have chosen edges corresponding to the operations from $mask$. First, consider the case when the edge from $v$ to the parent is not chosen for any of the operations. Let's iterate over the children of $v$ and calculate the same dynamic programming for the considered prefix of subtrees. To add a new subtree, we iterate over $mask_1$, the mask of operations done on the prefix, and $mask_2$, the mask of operations done in the new subtree; the condition $mask_1 \cap mask_2 = \emptyset$ should hold. In the case when the edge from $v$ to the parent is chosen, iterate over the operation $i$ corresponding to it. Then either $sz_{v} = s_i$ or $sz_{v} = n - s_i$ must hold. If $sz_{v} = s_i$, we can choose this edge for operation $i$ and orient it from the parent to $v$; the operations in $mask \setminus \{i\}$ must then have indices $> i$. Iterate over such masks and update the corresponding $dp$ value (using the $dp$ values already calculated for the subtree of $v$ for the case when the edge to the parent is not chosen). In the case $sz_{v} = n - s_i$, similarly, we orient the edge towards the parent, and the operations in $mask \setminus \{i\}$ must have indices $\leq i$.
We can see that the number of ways calculated with such a dp is correct: if we assigned operation $i$ to some edge with some direction, we ensured that the size written after the operation is correct and that the operations in the subtree are done before or after operation $i$ (according to the direction of the edge). Time complexity: $O(n (3^k + 2^k k))$, but it can be improved to $O(n 2^k k)$ with sum-over-subsets calculation.
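Before implementing the bitmask DP, it helps to have a trivially correct reference. The sketch below (our own naming; vertices assumed numbered $1..n$) counts the operation sequences by direct enumeration, which is only feasible for tiny trees but is handy for validating a fast solution.

```python
def count_ways(n, edges, s):
    # edges: list of (u, v) pairs of a tree on vertices 1..n
    # s: the sequence of sizes that must be written
    def component(start, verts, edge_list):
        # vertices reachable from `start` using only `edge_list`
        adj = {v: [] for v in verts}
        for u, v in edge_list:
            adj[u].append(v)
            adj[v].append(u)
        seen = {start}
        stack = [start]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return frozenset(seen)

    def rec(verts, edge_list, seq):
        if not seq:
            return 1
        total = 0
        for e in edge_list:
            rest = [x for x in edge_list if x != e]
            part1 = component(e[0], verts, rest)
            part2 = verts - part1
            # keeping either side counts as a distinct way
            for keep in (part1, part2):
                if len(keep) == seq[0]:
                    inner = [x for x in rest if x[0] in keep and x[1] in keep]
                    total += rec(keep, inner, seq[1:])
        return total

    return rec(frozenset(range(1, n + 1)), edges, s)
```

On a path $1-2-3$ with $s = [2]$ either edge can be removed (keeping the size-$2$ side), giving $2$ ways.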
|
[
"bitmasks",
"dfs and similar",
"dp"
] | 3,200
| null |
1800
|
A
|
Is It a Cat?
|
You were walking down the street and heard a sound. The sound was described by the string $s$ consisting of lowercase and uppercase Latin characters. Now you want to find out if the sound was a cat meowing.
For the sound to be a meowing, the string can only contain the letters 'm', 'e', 'o' and 'w', in either uppercase or lowercase. Also:
- the string must start with a non-empty sequence consisting only of characters 'm' or 'M'
- it must be immediately followed by a non-empty sequence consisting only of characters 'e' or 'E'
- it must be immediately followed by a non-empty sequence consisting only of characters 'o' or 'O'
- it must be immediately followed by a non-empty sequence consisting only of characters 'w' or 'W'; this sequence ends the string
For example, strings "meow", "mmmEeOWww", "MeOooOw" describe a meowing, but strings "Mweo", "MeO", "moew", "MmEW", "meowmeow" do not.
Determine whether the sound you heard was a cat meowing or something else.
|
To solve the problem, you can convert the string to lower case, collapse every run of equal consecutive characters into a single character, and compare the result to the string "meow". To collapse the runs you can, for example, use the unique function in C++.
|
[
"implementation",
"strings"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
void solve(){
int n;
cin >> n;
string s;
cin >> s;
transform(s.begin(), s.end(), s.begin(), [] (char c) {
return tolower(c);
});
s.erase(unique(s.begin(), s.end()), s.end());
cout << (s == "meow" ? "YES" : "NO") << "\n";
}
int main(){
int t;
cin >> t;
while(t--) solve();
}
|
1800
|
B
|
Count the Number of Pairs
|
Kristina has a string $s$ of length $n$, consisting only of lowercase and uppercase Latin letters. For each pair of a lowercase letter and its matching uppercase letter, Kristina can get $1$ burle. However, pairs of characters cannot overlap, so each character can only be in one pair.
For example, if she has the string $s$ = "aAaaBACacbE", she can get a burle for the following character pairs:
- $s_1$ = "a" and $s_2$ = "A"
- $s_4$ = "a" and $s_6$ = "A"
- $s_5$ = "B" and $s_{10}$ = "b"
- $s_7$= "C" and $s_9$ = "c"
Kristina wants to get more burles for her string, so she is going to perform no more than $k$ operations on it. In one operation, she can:
- either select the lowercase character $s_i$ ($1 \le i \le n$) and make it uppercase.
- or select uppercase character $s_i$ ($1 \le i \le n$) and make it lowercase.
For example, when $k = 2$ and $s$ = "aAaaBACacbE" she can perform one operation: choose $s_3$ = "a" and make it uppercase. Then she will get another pair of $s_3$ = "A" and $s_8$ = "a".
Find the \textbf{maximum} number of burles Kristina can get for her string.
|
Compute two arrays $big$ and $small$, where $big[i]$ contains the number of occurrences of the $i$-th letter of the alphabet in the string in upper case, and $small[i]$ - in lower case. Let's add all existing pairs to the answer: for each letter add $\min(small[i], big[i])$ to it. Subtract this minimum from $small[i]$ and $big[i]$ to get the number of unpaired identical letters. Next, we act greedily: if there is a set of at least two identical letters in the same case, we can apply the operation to half of them and get new pairs. Therefore, for each letter we add $\min\left(k, \left\lfloor \frac{\max(small[i], big[i])}{2} \right\rfloor\right)$ to the answer and decrease $k$ by that amount.
|
[
"greedy",
"strings"
] | 1,000
|
#include <bits/stdc++.h>
using namespace std;
const int N = 26;
void solve(){
int n, k;
cin >> n >> k;
string s;
cin >> s;
vector<int>big(N, 0), small(N, 0);
for(auto &i : s){
if('A' <= i && 'Z' >= i) big[i - 'A']++;
else small[i - 'a']++;
}
int answer = 0;
for(int i = 0; i < N; i++){
int pairs = min(small[i], big[i]);
answer += pairs;
small[i] -=pairs; big[i] -= pairs;
int add = min(k, max(small[i], big[i]) / 2);
k -= add; answer += add;
}
cout << answer << endl;
}
int main(){
int t;
cin >> t;
while(t--) solve();
return 0;
}
|
1800
|
C1
|
Powering the Hero (easy version)
|
\textbf{This is an easy version of the problem. It differs from the hard one only by constraints on $n$ and $t$}.
There is a deck of $n$ cards, each of which is characterized by its power. There are two types of cards:
- a hero card, the power of such a card is always equal to $0$;
- a bonus card, the power of such a card is always positive.
You can do the following with the deck:
- take a card from the top of the deck;
- if this card is a bonus card, you can put it \textbf{on top} of your bonus deck or discard it;
- if this card is a hero card, then the power of \textbf{the top} card of your bonus deck (if the bonus deck is not empty) is added to the hero's power, after that the hero is added to your army, and the used bonus card is discarded.
Your task is to use such actions to gather an army with the maximum possible total power.
|
To solve it, note that despite the deck of bonuses working as a stack, the order in which the bonuses are applied does not matter. So, when we meet a hero card, we just need to add the maximum of the available bonuses to the answer. The constraints allow you to keep the array of bonus values sorted after each insertion and remove the maximum element each time.
|
[
"data structures",
"greedy"
] | 1,000
|
def solve():
    n = int(input())
    s = [int(x) for x in input().split()]
    ans = 0
    # n sentinel zeros keep the list non-empty, so pop() is always valid
    buffs = [0] * n
    for e in s:
        if e > 0:
            # insert the bonus, keeping the list sorted in ascending order
            buffs += [e]
            j = len(buffs) - 1
            while buffs[j] < buffs[j - 1]:
                buffs[j], buffs[j - 1] = buffs[j - 1], buffs[j]
                j -= 1
        else:
            # hero card: take the maximum available bonus (the last element)
            ans += buffs.pop()
    print(ans)
t = int(input())
for _ in range(t):
    solve()
|
1800
|
C2
|
Powering the Hero (hard version)
|
\textbf{This is a hard version of the problem. It differs from the easy one only by constraints on $n$ and $t$}.
There is a deck of $n$ cards, each of which is characterized by its power. There are two types of cards:
- a hero card, the power of such a card is always equal to $0$;
- a bonus card, the power of such a card is always positive.
You can do the following with the deck:
- take a card from the top of the deck;
- if this card is a bonus card, you can put it \textbf{on top} of your bonus deck or discard it;
- if this card is a hero card, then the power of \textbf{the top} card of your bonus deck (if the bonus deck is not empty) is added to the hero's power, after that the hero is added to your army, and the used bonus card is discarded.
Your task is to use such actions to gather an army with the maximum possible total power.
|
To solve it, note that despite the deck of bonuses working as a stack, the order in which the bonuses are applied does not matter. So, when we meet a hero card, we just need to add the maximum of the available bonuses to the answer. The constraints require a structure such as a priority queue to find and extract the maximum quickly.
|
[
"data structures",
"greedy"
] | 1,100
|
from queue import PriorityQueue
def solve():
    n = int(input())
    s = [int(x) for x in input().split()]
    ans = 0
    # PriorityQueue is a min-heap, so store negated values to extract the maximum
    buffs = PriorityQueue()
    for e in s:
        if e > 0:
            buffs.put(-e)
        elif not buffs.empty():
            ans -= buffs.get()  # -(-max) = max
    print(ans)
t = int(input())
for _ in range(t):
    solve()
|
1800
|
D
|
Remove Two Letters
|
Dmitry has a string $s$, consisting of lowercase Latin letters.
Dmitry decided to remove two \textbf{consecutive} characters from the string $s$ and you are wondering how many different strings can be obtained after such an operation.
For example, Dmitry has a string "aaabcc". You can get the following different strings: "abcc" (by deleting the first two or the second and third characters), "aacc" (by deleting the third and fourth characters), "aaac" (by deleting the fourth and fifth characters) and "aaab" (by deleting the last two).
|
Consider deleting the characters with numbers $i$ and $i + 1$, as well as the characters with numbers $i + 1$ and $i + 2$. In the first case the character with number $i + 2$ remains, in the second - the character with number $i$. Characters with numbers less than $i$ or greater than $i + 2$ remain in both cases. Therefore, the two deletions produce the same string exactly when the characters with numbers $i$ and $i + 2$ match. So we just need to count the number of indices $i$, $1 \le i \le n - 2$, with $s_i = s_{i+2}$, and subtract this value from $n - 1$.
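The identity can be sanity-checked in a few lines (helper names below are ours): build all strings obtainable by deleting two consecutive characters and compare with the formula.

```python
def distinct_after_removing_two(s):
    # brute force: all distinct results of deleting two consecutive characters
    return len({s[:i] + s[i + 2:] for i in range(len(s) - 1)})

def by_formula(s):
    # (n - 1) total deletion positions, minus one per index i with s[i] == s[i + 2]
    n = len(s)
    return (n - 1) - sum(1 for i in range(n - 2) if s[i] == s[i + 2])
```

On the statement's example "aaabcc" both give $4$.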
|
[
"data structures",
"greedy",
"hashing",
"strings"
] | 1,200
|
#include <iostream>
#include <vector>
#include <queue>
#include <map>
#include <set>
using namespace std;
void solve() {
int n;
cin >> n;
string s;
cin >> s;
int res = n - 1;
for (int i = 1; i + 1 < n; ++i) {
if (s[i - 1] == s[i + 1]) {
res--;
}
}
cout << res << '\n';
}
int main(int argc, char* argv[]) {
int t;
cin >> t;
for (int i = 0; i < t; ++i) {
solve();
}
}
|
1800
|
E1
|
Unforgivable Curse (easy version)
|
\textbf{This is an easy version of the problem. In this version, $k$ is always $3$.}
The chief wizard of the Wizengamot once caught the evil wizard Drahyrt, but the evil wizard has returned and wants revenge on the chief wizard. So he stole spell $s$ from his student Harry.
The spell is a string of length $n$ consisting of lowercase Latin letters.
Drahyrt wants to replace the spell with an unforgivable curse — the string $t$.
Drahyrt, using ancient magic, can swap letters at distance $k$ or $k+1$ in the spell as many times as he wants. In this version of the problem $k = 3$, so you can swap letters at distance $3$ or $4$. In other words, Drahyrt can swap the letters at positions $i$ and $j$ in the spell $s$ if $|i-j|=3$ or $|i-j|=4$.
For example, if $s = $ "talant" and $t = $ "atltna", Drahyrt can act as follows:
- swap the letters at positions $1$ and $4$ to get spell "aaltnt".
- swap the letters at positions $2$ and $6$ to get spell "atltna".
You are given spells $s$ and $t$. Can Drahyrt change spell $s$ to $t$?
|
Under these constraints, the problem can be solved as follows. Note that for strings of length $6$ or more it is enough to check that $s$ and $t$ match up to a permutation of characters: every position can then take part in some swap, and the allowed swaps connect all positions, which lets us rearrange the characters arbitrarily (a character can be moved step by step, since one operation of length $3$ and one of length $4$ in opposite directions shift it by $1$). Thus, with this observation, it remains to solve the problem by brute force for $n \le 5$: store the set of strings that can be obtained and traverse all of them with a BFS.
|
[
"brute force",
"constructive algorithms",
"dsu",
"graphs",
"greedy",
"strings"
] | 1,400
|
#include <bits/stdc++.h>
using namespace std;
#define sz(v) (int)v.size()
#define all(v) v.begin(),v.end()
#define eb emplace_back
void slow_solve(int n, int k, string s, string t) {
set<string> was;
queue<string> q;
q.push(s);
was.insert(s);
auto add = [&](string& s, int i, int j) {
if (i >= 0 && i < j && j < n) {
swap(s[i], s[j]);
if (!was.count(s)) {
was.insert(s);
q.push(s);
}
swap(s[i], s[j]);
}
};
while (!q.empty()) {
s = q.front(); q.pop();
for (int i = 0; i < n; ++i) {
add(s, i, i+k);
add(s, i, i+k+1);
add(s, i-k, i);
add(s, i-k-1, i);
}
}
cout << (was.count(t) ? "Yes" : "No") << '\n';
}
void solve() {
int n,k; cin >> n >> k;
string s; cin >> s;
string t; cin >> t;
if (n <= 5) {
slow_solve(n, k, s, t);
return;
}
map<char, int> cnt;
for (char c : s) {
cnt[c]++;
}
for (char c : t) {
cnt[c]--;
}
bool ok = true;
for (auto [c, x] : cnt) {
ok &= x == 0;
}
cout << (ok ? "Yes" : "No") << '\n';
}
int main() {
int t;
cin >> t;
while (t--) {
solve();
}
}
|
1800
|
E2
|
Unforgivable Curse (hard version)
|
\textbf{This is the hard version of the problem. This version has no additional restrictions on the number $k$.}
The chief wizard of the Wizengamot once caught the evil wizard Drahyrt, but the evil wizard has returned and wants revenge on the chief wizard. So he stole spell $s$ from his student Harry.
The spell is a string of length $n$ consisting of lowercase Latin letters.
Drahyrt wants to replace the spell with an unforgivable curse — the string $t$.
Drahyrt, using ancient magic, can swap letters at distance $k$ or $k+1$ in the spell as many times as he wants. In other words, Drahyrt can swap the letters at positions $i$ and $j$ in the spell $s$ if $|i-j|=k$ or $|i-j|=k+1$.
For example, if $k = 3, s = $ "talant" and $t = $ "atltna", Drahyrt can act as follows:
- swap the letters at positions $1$ and $4$ to get spell "aaltnt".
- swap the letters at positions $2$ and $6$ to get spell "atltna".
You are given spells $s$ and $t$. Can Drahyrt change spell $s$ to $t$?
|
The solution of problem E1 hints that with such operations a character can be moved in the required direction by $1$ using two operations. One can then show that the characters that can be swapped with at least one other character (the positions $i$ with $i \ge k$ or $i + k < n$, $0$-indexed) can be rearranged into any permutation. For example, the following greedy works: build the answer from the borders of the string $t$ towards the middle; since we can move a character by a distance of $1$, we can move any character to the border, and in this way build any string $t$. Thus, it is enough to check that the multisets of characters that can be swapped with some other character coincide in $s$ and $t$, and that all the remaining characters simply match.
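The claim that the "swappable" positions can be permuted arbitrarily can be cross-checked by brute force on small strings (helper names below are ours): a BFS over all distance-$k$ and distance-$(k+1)$ swaps should reach exactly the strings accepted by the editorial's criterion.

```python
from collections import deque

def reach_set(s, k):
    # brute force: all strings obtainable from s by swaps at distance k or k+1
    n = len(s)
    seen = {s}
    q = deque([s])
    while q:
        cur = q.popleft()
        for d in (k, k + 1):
            for i in range(n - d):
                j = i + d
                nxt = cur[:i] + cur[j] + cur[i + 1:j] + cur[i] + cur[j + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    q.append(nxt)
    return seen

def criterion(s, t, k):
    # free positions must match as multisets, immovable positions exactly
    n = len(s)
    free_s, free_t = [], []
    for i in range(n):
        if i >= k or i + k < n:      # position participates in at least one swap
            free_s.append(s[i])
            free_t.append(t[i])
        elif s[i] != t[i]:           # immovable position must already match
            return False
    return sorted(free_s) == sorted(free_t)
```

For $n = 5$, $k = 3$ only position $2$ ($0$-indexed) is immovable, and the two functions agree on every permutation of "abcde".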
|
[
"brute force",
"constructive algorithms",
"dfs and similar",
"dsu",
"graphs",
"greedy",
"strings"
] | 1,500
|
#include <bits/stdc++.h>
using namespace std;
#define sz(v) (int)v.size()
#define all(v) v.begin(),v.end()
#define eb emplace_back
void solve() {
int n, k; cin >> n >> k;
string s; cin >> s;
string t; cin >> t;
vector<int> cnt(26, 0);
bool ok = true;
for (int i = 0; i < n; ++i) {
if (i >= k || i+k < n){
cnt[s[i] - 'a']++;
cnt[t[i] - 'a']--;
} else {
ok &= s[i] == t[i];
}
}
cout << (ok && count(all(cnt), 0) == 26 ? "YES" : "NO") << '\n';
}
int main() {
int t;
cin >> t;
while (t--) {
solve();
}
}
|
1800
|
F
|
Dasha and Nightmares
|
Dasha, an excellent student, is studying at the best mathematical lyceum in the country. Recently, a mysterious stranger brought $n$ words consisting of small latin letters $s_1, s_2, \ldots, s_n$ to the lyceum. Since that day, Dasha has been tormented by nightmares.
Consider some pair of integers $\langle i, j \rangle$ ($1 \le i \le j \le n$). A nightmare is a string for which the following holds:
- It is obtained by concatenation $s_{i}s_{j}$;
- Its length is \textbf{odd};
- The number of different letters in it is \textbf{exactly} $25$;
- The number of occurrences of each letter that is in the word is \textbf{odd}.
For example, if $s_i=$ "abcdefg" and $s_j=$ "ijklmnopqrstuvwxyz", the pair $\langle i, j \rangle$ creates a nightmare.
Dasha will stop having nightmares if she counts their number. There are too many nightmares, so Dasha needs your help. Count the number of different nightmares.
Nightmares are called different if the corresponding pairs $\langle i, j \rangle$ are different. The pairs $\langle i_1, j_1 \rangle$ and $\langle i_2, j_2 \rangle$ are called different if $i_1 \neq i_2$ \textbf{or} $j_1 \neq j_2$.
|
Observation $1$: the sum of an odd number of odd numbers is odd, so the condition on the length of a nightmare ($25$ letters, each occurring an odd number of times) is satisfied automatically. Denote by $f(x)$ the number of ones in the binary representation of $x$. Let's number the letters of the Latin alphabet from $0$ to $25$. Observation $2$: for each word it is enough to know the set of letters in it and the parity of their counts. There are only $26$ letters in the alphabet, so it is convenient to store the characteristic of a word $s_i$ as a pair of masks $\langle a_i, b_i \rangle$: bit $j$ of $a_i$ is responsible for the presence of letter $j$ in $s_i$, and bit $j$ of $b_i$ is responsible for the parity of the number of letters $j$ in $s_i$. Observation $3$: the string $s_is_j$ is a nightmare if and only if $f(a_i \mid a_j) = f(b_i \oplus b_j) = 25$. Let's fix the index $k$ of the letter that will not occur in the nightmare. Throw out all the words containing letter $k$; now we can look at the words one by one and search for pairs among those already considered. It is easy to see that the condition $f(a_i \mid a_j) = 25$ follows from $f(b_i \oplus b_j) = 25$ once letter $k$ is banned. To count the pairs that include our word, we need to count the words with characteristic $b_j = b_i \oplus ((2^{26}-1) \oplus 2^k)$. We can do this by binary searching in a sorted array of the $b$ values or by using standard data structures. We get a solution in $O(\sum |s| + 26 \cdot n \cdot \log n)$.
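The mask-based counting translates almost literally into a short sketch (function name ours); here the pair count for each banned letter is taken from a hash map instead of binary search over a sorted array.

```python
from collections import Counter

def count_nightmares(words):
    # for each word: a = set of letters present, b = parity mask of letter counts
    masks = []
    for w in words:
        a = b = 0
        for ch in w:
            bit = 1 << (ord(ch) - ord('a'))
            a |= bit
            b ^= bit
        masks.append((a, b))
    full = (1 << 26) - 1
    total = 0
    for k in range(26):
        # ban letter k: keep only words that do not contain it
        cnt = Counter(b for a, b in masks if not (a >> k) & 1)
        want = full ^ (1 << k)            # 25 ones, bit k cleared
        ordered = sum(c * cnt[b ^ want] for b, c in cnt.items())
        total += ordered // 2             # each unordered pair counted twice
    return total
```

Pairs with $i = j$ never qualify (the concatenation has even letter counts), so counting unordered pairs of distinct indices is enough.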
|
[
"bitmasks",
"hashing",
"meet-in-the-middle",
"strings"
] | 1,900
|
#pragma GCC optimize("O3")
#pragma GCC optimize("unroll-loops")
#pragma GCC target("avx2,avx,fma,bmi2")
#include <bits/stdc++.h>
#include <immintrin.h>
#include <ext/pb_ds/assoc_container.hpp>
using namespace std;
using namespace __gnu_pbds;
#define endl '\n'
//#define int long long
#define all(arr) arr.begin(), arr.end()
#define multitest() int _gorilla_silverback; cin >> _gorilla_silverback; while (_gorilla_silverback --> 0)
#define pikachu push_back
#define ls(id) (id << 1 | 1)
#define rs(id) ((id << 1) + 2)
#define sqr(x) ((x) * (x))
#define dlg(x) (31 - __builtin_clz(x))
#define ulg(x) (32 - __builtin_clz(x))
typedef pair<int, int> ipair;
typedef tree<int, null_type, less<int>, rb_tree_tag, tree_order_statistics_node_update> treap;
mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());
const int MAXN = 200200;
const int L = 26;
int n;
string srr[MAXN];
int arr[MAXN], brr[MAXN], crr[MAXN];
void build() {
for (int i = 0; i < n; ++i) {
for (char c: srr[i]) {
arr[i] ^= (1 << (c - 'a'));
brr[i] |= (1 << (c - 'a'));
}
}
}
long long calc(int c) {
int k = 0;
for (int i = 0; i < n; ++i)
if (brr[i] >> c & 1 ^ 1) crr[k++] = arr[i];
sort(crr, crr + k);
int mask = -1 & ((1 << L) - 1) ^ (1 << c);
long long ans = 0;
for (int i = 0; i < k; ++i) {
auto itl = lower_bound(crr, crr + k, crr[i] ^ mask);
auto itr = upper_bound(crr, crr + k, crr[i] ^ mask);
ans += itr - itl;
}
return ans >> 1LL;
}
long long solve() {
long long ans = 0;
for (int c = 0; c < L; ++c)
ans += calc(c);
return ans;
}
signed main() {
ios_base::sync_with_stdio(false), cin.tie(NULL), cout.tie(NULL);
cin >> n;
for (int i = 0; i < n; ++i)
cin >> srr[i];
build();
cout << solve() << endl;
}
|
1800
|
G
|
Symmetree
|
Kid was gifted a tree of $n$ vertices with the root in the vertex $1$. Since he really like symmetrical objects, Kid wants to find out if this tree is symmetrical.
\begin{center}
{\small For example, the trees in the picture above are symmetrical.}
\end{center}
\begin{center}
{\small And the trees in this picture are not symmetrical.}
\end{center}
Formally, a tree is symmetrical if there exists an order of children such that:
- The subtree of the leftmost child of the root is a mirror image of the subtree of the rightmost child;
- the subtree of the second-left child of the root is a mirror image of the subtree of the second-right child of the root;
- ...
- if the number of children of the root is odd, then the subtree of the middle child should be symmetrical.
|
Note that if one subtree is a mirror image of another, then they are isomorphic (that is, equal up to vertex numbering). To check subtrees for isomorphism, we use hashing of rooted trees. Now we only have to learn how to check trees for symmetry. To do this, let's calculate how many children of each type a vertex has (the type of a vertex is the hash of its subtree). For a vertex's subtree to be symmetric, each child must have a pair of the same type, except possibly one leftover child, whose subtree must itself be symmetric. We can compute the symmetry of the subtrees while counting their hashes, which simplifies the task.
|
[
"dfs and similar",
"hashing",
"implementation",
"trees"
] | 2,200
|
#include <bits/stdc++.h>
#define int long long
#define pb emplace_back
#define mp make_pair
#define x first
#define y second
#define all(a) a.begin(), a.end()
#define rall(a) a.rbegin(), a.rend()
typedef long double ld;
typedef long long ll;
using namespace std;
mt19937 rnd(time(nullptr));
const int inf = 2e18;
const ll M = 1e9;
const ld pi = atan2(0, -1);
const ld eps = 1e-6;
int last;
map<vector<int>, int> eq;
map<int, bool> symmetrical;
int dfs(int v, int p, vector<vector<int>> &sl){
vector<int> children;
for(int u: sl[v]){
if(u == p) continue;
children.emplace_back(dfs(u, v, sl));
}
sort(all(children));
if(!eq.count(children)){
map<int, int> cnt;
for(int e: children){
cnt[e]++;
}
int odd = 0, bad = 0;
for(auto e: cnt){
if(e.y & 1){
odd++;
bad += !symmetrical[e.x];
}
}
eq[children] = last;
symmetrical[last] = odd < 2 && bad == 0;
last++;
}
return eq[children];
}
void solve(int tc){
int n;
cin >> n;
eq.clear();
symmetrical.clear();
eq[vector<int>(0)] = 0;
symmetrical[0] = true;
last = 1;
vector<vector<int>> sl(n);
for(int i = 1; i < n; ++i){
int u, v;
cin >> u >> v;
sl[--u].emplace_back(--v);
sl[v].emplace_back(u);
}
cout << (symmetrical[dfs(0, 0, sl)]? "YES" : "NO");
}
bool multi = true;
signed main() {
int t = 1;
if (multi)cin >> t;
for (int i = 1; i <= t; ++i) {
solve(i);
cout << "\n";
}
return 0;
}
|
1801
|
A
|
The Very Beautiful Blanket
|
Kirill wants to weave a very beautiful blanket consisting of $n \times m$ square patches of the same size in some colors. He matched a non-negative integer to each color. Thus, in our problem the blanket can be considered a matrix $B$ of size $n \times m$ consisting of non-negative integers.
Kirill considers the blanket very beautiful if for each submatrix $A$ of size $4 \times 4$ of the matrix $B$ the following holds:
- $A_{11} \oplus A_{12} \oplus A_{21} \oplus A_{22} = A_{33} \oplus A_{34} \oplus A_{43} \oplus A_{44},$
- $A_{13} \oplus A_{14} \oplus A_{23} \oplus A_{24} = A_{31} \oplus A_{32} \oplus A_{41} \oplus A_{42},$
where $\oplus$ means bitwise exclusive OR.
Kirill asks you to help him weave a very beautiful blanket, and as colorful as possible!
He gives you two integers $n$ and $m$.
Your task is to generate a matrix $B$ of size $n \times m$ which corresponds to a very beautiful blanket and in which the number of different numbers is maximized.
|
The maximum number of different numbers we can obtain is always $n\cdot m$. Let's show how to build an example for any $n$ and $m$. Note that any submatrix of a correct matrix is itself a correct matrix of smaller size. Therefore, let's build a correct matrix for some fixed $N$ and $M$ and output its upper-left corner of the required size as the answer. Take $N = M = 2^8$ (enough, since $n, m \le 200 < 2^8$) and construct the matrix as follows. Split it into blocks of size $2 \times 2$ and number the blocks from left to right and from top to bottom, starting from zero. Block number $p$ has the form $$\begin{pmatrix} 4p & 4p+1 \\ 4p+2 & 4p+3 \end{pmatrix}.$$ With this construction, the bitwise exclusive OR of any $2\times 2$ submatrix is zero, which implies both required equalities for $4 \times 4$ submatrices. You can verify it as follows. Look at the upper-left corner $(i,\,j)$ of an arbitrary $2\times 2$ submatrix. There are $4$ cases: both coordinates are even; $i$ is even, $j$ is odd; $i$ is odd, $j$ is even; both coordinates are odd. Consider the most unpleasant case - the last one (the remaining cases are treated similarly). Let $p$ be the number of the block containing the cell $(i, j)$ and let $W = 2^7$ be the number of blocks in a row. Then the submatrix has the form $$\begin{pmatrix} 4p + 3 & 4(p + 1) + 2 \\ 4(p + W) + 1 & 4(p + W + 1) + 0 \end{pmatrix}.$$ The last two bits of each entry (the summands $3, 2, 1, 0$) never interact with the remaining bits (the multiples of $4$), so the two parts can be considered independently. For the low bits, $3 \oplus 2 \oplus 1 \oplus 0 = 0$. For the high bits, $4p \oplus 4(p+1) = 4(p+W) \oplus 4(p+W+1)$: the block $p$ is not the last in its row, so the carry produced by $+1$ stays within the low $7$ bits of the block number, while adding $W = 2^7$ changes only the higher bits. Hence $4p \oplus 4(p+1) \oplus 4(p+W) \oplus 4(p+W+1) = 0$, and the bitwise exclusive OR of the whole submatrix is zero.
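The key invariant - every $2 \times 2$ submatrix of the construction, aligned with the blocks or not, XORs to zero - is easy to verify exhaustively:

```python
# build the 256x256 matrix from the editorial's construction
N = 256
v = [[0] * N for _ in range(N)]
now = 0
for i in range(0, N, 2):
    for j in range(0, N, 2):
        v[i][j], v[i][j + 1], v[i + 1][j], v[i + 1][j + 1] = now, now + 1, now + 2, now + 3
        now += 4

# every (not necessarily block-aligned) 2x2 submatrix XORs to zero
assert all(
    v[i][j] ^ v[i][j + 1] ^ v[i + 1][j] ^ v[i + 1][j + 1] == 0
    for i in range(N - 1) for j in range(N - 1)
)
# and all N*N values are distinct
assert len({x for row in v for x in row}) == N * N
```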
|
[
"bitmasks",
"constructive algorithms"
] | 1,600
|
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const int SZ = 256;
int v[SZ][SZ];
void Solve(){
int n, m;
cin >> n >> m;
cout << n * m << '\n';
for(int i = 0; i < n; i++)
for(int j = 0; j < m; j++)
cout << v[i][j] << " \n"[j + 1 == m];
}
signed main(){
ios_base::sync_with_stdio(NULL);
cin.tie(NULL);
cout.tie(NULL);
{
int now = 0;
int n = 256;
int m = 256;
for(int i = 0; i < n; i += 2)
for(int j = 0; j < m; j += 2){
v[i][j] = now;
v[i][j + 1] = now + 1;
v[i + 1][j] = now + 2;
v[i + 1][j + 1] = now + 3;
now += 4;
}
}
int num_test = 1;
cin >> num_test;
for(int i = 1; i <= num_test; i++){
Solve();
}
}
|
1801
|
B
|
Buying gifts
|
Little Sasha has two friends, whom he wants to please with gifts on the Eighth of March. To do this, he went to the largest shopping center in the city. There are $n$ departments in the mall, each of which has exactly two stores. For convenience, we number the departments with integers from $1$ to $n$. It is known that gifts in the first store of department $i$ cost $a_i$ rubles, and in the second store of department $i$ — $b_i$ rubles.
Entering the mall, Sasha will visit each of the $n$ departments of the mall, and in each department, he will enter exactly one store. When Sasha gets into the $i$-th department, he will perform exactly one of two actions:
- Buy a gift for the first friend, spending $a_i$ rubles on it.
- Buy a gift for the second friend, spending $b_i$ rubles on it.
Sasha is going to buy at least one gift for each friend. Moreover, he wants to pick up gifts in such a way that the price difference of the most expensive gifts bought for friends is as small as possible so that no one is offended.
More formally: let $m_1$ be the maximum price of a gift bought to the first friend, and $m_2$ be the maximum price of a gift bought to the second friend. Sasha wants to choose gifts in such a way as to minimize the value of $\lvert m_1 - m_2 \rvert$.
|
To begin with, let's sort all departments in descending order of $b_i$ (and, in case of ties, in ascending order of $a_i$). Now let's iterate over the department $i$ in which the most expensive gift for the second friend will be bought. Note that in all departments with numbers $j < i$, Sasha must buy a gift for the first friend, otherwise gift $i$ would not have the maximum value among the gifts bought for the second friend. Therefore, we immediately find the value $m = \max \limits_{j < i} a_j$, which gives the candidate answer $\lvert m - b_i\rvert$. In all departments with numbers $j > i$ for which $a_j \le m$, Sasha can buy a gift for either of his friends without affecting the answer. Now consider the departments with numbers $j > i$ for which $a_j > m$. If he buys a gift for the first friend in one of these departments, the value of $m$ increases, which may improve the answer. Therefore, let's iterate through all these departments and update the answer with the value $\lvert a_j - b_i\rvert$. This takes $O(n^2)$ time. Let's optimize this solution. First, instead of recomputing $m$ at each iteration, we maintain its value in a variable: when moving from department $i - 1$ to department $i$, we update $m := \max(m, a_{i-1})$. It remains to quickly find the optimal department $j$ such that $j > i$, $a_j > m$ and $\lvert a_j - b_i\rvert$ is minimal. Take, on the suffix of the array, the minimum $a_j$ such that $a_j \ge b_i$, and also the maximum $a_j$ such that $a_j \le b_i$. One can see that the optimal $a_j$ is one of these two numbers (remembering to check the condition $a_j > m$), so it is enough to update the answer with them. These two elements can be found with the \texttt{set} data structure, in which we keep all $a_j$ on the suffix.
Then the two required elements can be found in $O(\log n)$. When moving from department $i - 1$ to department $i$, we remove the value $a_{i - 1}$ from the data structure. Total time: $O(n\log n)$.
|
[
"data structures",
"greedy",
"sortings"
] | 1,800
|
#include <bits/stdc++.h>
using namespace std;
template<typename T>
bool smin(T& a, const T& b) {
if (b < a) {
a = b;
return true;
}
return false;
}
template<typename T>
bool smax(T& a, const T& b) {
if (a < b) {
a = b;
return true;
}
return false;
}
const int INF = 0x3f3f3f3f;
const int N = 500100;
std::pair<int, int> a[N];
void run() {
int n;
scanf("%d", &n);
for (int i = 0; i < n; ++i) {
scanf("%d%d", &a[i].first, &a[i].second);
}
sort(a, a + n, [&](const pair<int, int>& p1,
const pair<int, int>& p2) {
return p1.second > p2.second || (p1.second == p2.second && p1.first < p2.first);
});
multiset<int> setik;
for (int i = 0; i < n; ++i) {
setik.insert(a[i].first);
}
int mx = -INF;
int ans = INF;
for (int i = 1; i < n; ++i) {
smin(ans, abs(a[i].first - a[0].second));
}
for (int i = 0; i < n; ++i) {
setik.erase(setik.find(a[i].first));
if (i == 0) {
mx = a[i].first;
continue;
}
smin(ans, abs(mx - a[i].second));
auto it = setik.lower_bound(a[i].second);
if (it != setik.end() && *it >= mx) {
smin(ans, abs(*it - a[i].second));
}
if (it != setik.begin() && *std::prev(it) >= mx) {
smin(ans, abs(*prev(it) - a[i].second));
}
smax(mx, a[i].first);
}
printf("%d\n", ans);
}
int main(void) {
int t;
scanf("%d", &t);
while (t--) {
run();
}
return 0;
}
|
1801
|
C
|
Music Festival
|
The boy Vitya loves to listen to music very much. He knows that $n$ albums are due to be released this Friday, the $i$-th of which contains $k_i$ tracks. Of course, Vitya has already listened to all the tracks, and knows that in the $i$-th album, the coolness of the $j$-th track is equal to $a_{i,j}$. Vitya has a friend Masha, whom he really wants to invite to the festival where his favorite bands perform. However, for his friend to agree, she must first evaluate the released novelties. Vitya knows that if Masha listens to a track that is cooler than all the previous ones, she gets 1 unit of impression. Unfortunately, albums can only be listened to in their entirety, without reordering the tracks within them.
Help Vitya find an order of albums such that Masha's impression is as large as possible, so that she will definitely go to the festival with him.
|
Let's introduce the notion of a compressed album: it is obtained from the original one by removing all elements except those that are strict prefix maxima. For example, for the album $[\textbf{1}, \textbf{4}, 4, 3, \textbf{6}, 5, 6]$ the compressed album is $[1, 4, 6]$. Now note that the original problem reduces to the same problem on compressed albums. Indeed, the answers coincide, because an element increases the impression on the original albums if and only if it does so on the compressed ones. From now on, assume all albums have been compressed beforehand. Let's introduce $dp_c$, the maximum impression obtainable if no album contained elements larger than $c$. Then $dp_c$ equals $dp_{c-1}$, or one or two more elements can be added if $c$ is the maximum element of some album: for every compressed album, $dp$ can be recalculated through its value just before the first element of that album, or through $c - 1$. Thus, for the recalculation it is enough to know, for each $c$, which albums end at this value, as well as the first element of each album. This yields a solution in $O(K)$. 
Now let's solve the complete problem. For each value $c$, remember the indices of the albums that contain an element equal to $c$. We process $c$ in increasing order, maintaining for each album the value $dp_i$: the maximum impression obtainable if no elements larger than $c$ existed and Masha listened to album $i$ last. Suppose for the next $c$ there is an album $i$ containing a track with coolness $c$. Then $dp_i$ should be taken as the maximum of $dp_i + 1$ and $dp_j + 1$ over all $j$ whose maximum element is less than the maximum element of album $i$, since Masha could hear this track either as the next one in album $i$, or after listening to some other album completely. Note that we can store $mx$, the maximum of $dp_j$ over all albums whose maximum value is less than $c$, and recalculate it when moving to $c + 1$ by accounting for the albums that have ended. This gives a solution in $O(K + C)$.
|
[
"binary search",
"data structures",
"dp",
"greedy",
"sortings"
] | 1,900
|
#include "bits/stdc++.h"
using namespace std;
#define all(x) x.begin(), x.end()
#define rall(x) x.rbegin(), x.rend()
typedef long long ll;
typedef long double db;
typedef unsigned long long ull;
vector<int> shrink(vector<int> &a) {
vector<int> a1;
int n = a.size();
int mx = 0;
for (int i = 0; i < n; ++i) {
if (a[i] > mx) {
a1.emplace_back(a[i]);
mx = a[i];
}
}
return a1;
}
void solve() {
int n;
cin >> n;
vector<vector<int>> a(n);
for (int i = 0; i < n; ++i) {
int k;
cin >> k;
a[i].resize(k);
for (auto &j : a[i]) {
cin >> j;
}
}
vector<vector<int>> a1(n);
for (int i = 0; i < n; ++i) {
a1[i] = shrink(a[i]);
}
map<int, vector<int>> b;
for (int i = 0; i < n; ++i) {
for (auto &j : a1[i]) {
b[j].emplace_back(i);
}
}
vector<int> dp(n);
int closed = 0;
for (auto &it : b) {
int c = it.first;
int newclosed = 0;
for (auto &i : it.second) {
if (c == a1[i].back()) {
dp[i] = max(dp[i] + 1, closed + 1);
newclosed = max(newclosed, dp[i]);
continue;
}
if (c == a1[i].front()) {
dp[i] = closed + 1;
continue;
}
dp[i] = max(dp[i] + 1, closed + 1);
}
closed = max(closed, newclosed);
}
cout << *max_element(all(dp));
}
signed main() {
int t = 0;
cin >> t;
while (t --> 0) {
solve();
cout << '\n';
}
return 0;
}
|
1801
|
D
|
The way home
|
The famous magician Borya Budini traveled through the country $X$, which consists of $n$ cities. However, an accident happened, and he was robbed in city number $1$. Now Budini will have a hard way home to city number $n$. He's going to get there by plane. In total, there are $m$ flights in the country; the $i$-th flies from city $a_i$ to city $b_i$ and costs $s_i$ coins. Note that the $i$-th flight is one-way, so it can't be used to get from city $b_i$ to city $a_i$. To use it, Borya must be in city $a_i$ and have at least $s_i$ coins (which he will spend on the flight).
After the robbery, he has only $p$ coins left, but he does not despair! While in city $i$, he can organize performances every day, and each performance brings him $w_i$ coins.
Help the magician find out whether he will be able to get home, and what is the minimum number of performances he will have to organize.
|
Note that the shows can be performed "retroactively". As soon as we don't have enough money to traverse an edge, we can pretend that several shows were done in advance at one of the vertices we have already passed, choosing the vertex that earns the most money. For the general case, write $dp[v][best] = (\textit{min shows}, \textit{max money})$, where $v$ is the number of the vertex where we are, and $best$ is the already visited vertex with the maximum $w_i$. It can be shown that it is optimal to minimize the number of shows first, and then maximize the amount of money. This dynamic programming can be computed with Dijkstra's algorithm. Asymptotics: $O(mn\log n)$.
|
[
"binary search",
"data structures",
"dp",
"graphs",
"greedy",
"shortest paths",
"sortings"
] | 2,100
|
#include "bits/stdc++.h"
#define rep(i, n) for (int i = 0; i < (n); ++i)
#define pb push_back
#define all(a) (a).begin(), (a).end()
#define ar array
#define vec vector
using namespace std;
using ll = long long;
using pi = pair<int, int>;
using vi = vector<int>;
using vpi = vector<pair<int, int>>;
const ll INF = 2e18;
const int maxN = 3e5 + 10;
struct PathParams {
ll num_show;
int money;
};
bool operator==(const PathParams &a, const PathParams &b) {
return tie(a.num_show, a.money) == tie(b.num_show, b.money);
}
bool operator!=(const PathParams &a, const PathParams &b) {
return !(a == b);
}
struct State {
PathParams params;
int v;
int best;
};
bool operator<(const PathParams &a, const PathParams &b) {
if (a.num_show != b.num_show) return a.num_show < b.num_show;
return a.money > b.money;
}
bool operator<(const State &a, const State &b) {
return tie(a.params, a.v, a.best) < tie(b.params, b.v, b.best);
}
bool operator>(const State &a, const State &b) {
return !(a < b);
}
void solve() {
int n, m, p;
cin >> n >> m >> p;
vector dp(n, vector<PathParams>(n, {INF, 0}));
vector<vpi> g(n);
vi w(n);
rep(i, n) cin >> w[i];
rep(i, m) {
int a, b, s;
cin >> a >> b >> s;
a--;
b--;
g[a].emplace_back(b, s);
}
dp[0][0] = {0, p};
priority_queue<State, vector<State>, greater<>> pq;
pq.push({.params = {.num_show=0, .money=p}, .v = 0, .best=0});
while (!pq.empty()) {
auto current = pq.top();
pq.pop();
int v = current.v;
int best = current.best;
if (dp[v][best] != current.params) continue;
for (auto &[u, s]: g[v]) {
auto state2 = current;
PathParams &path = state2.params;
if (path.money < s) {
ll need = (s - path.money + w[best] - 1) / w[best];
path.num_show += need;
path.money += need * w[best];
assert(path.money < s + w[best]);
}
path.money -= s;
state2.v = u;
if (w[u] > w[state2.best]) state2.best = u;
if (path < dp[u][state2.best]) {
dp[u][state2.best] = path;
pq.push(state2);
}
}
}
ll ans = INF;
rep(i, n) {
ans = min(ans, dp[n - 1][i].num_show);
}
cout << (ans == INF ? -1 : ans) << '\n';
}
signed main() {
int t = 1;
cin >> t;
rep(_, t) {
solve();
}
return 0;
}
|
1801
|
E
|
Gasoline prices
|
Berland is a huge country consisting of $n$ cities. The road network of Berland can be represented as a rooted tree: there are only $n - 1$ roads in the country, and you can get from any city to any other in exactly one way, if you do not visit any city twice. For convenience, for each city $i$ the city $p_i$ is fixed, equal to the first city on the way from city $i$ to city $1$. In other words, $p_i$ is the parent of city $i$ if the tree is rooted at city $1$. There is one gas station in each city of Berland. Gas stations have special pricing: for each gas station there is a fixed range of prices at which it is ready to sell gasoline. The gas station in city $i$ is ready to sell gasoline at any price from $l_i$ to $r_i$ inclusive.
The King of Berland is an exemplary family man, and for $m$ years, two sons were born to him every year. The king's children have been involved in public affairs since early childhood, and at the end of each year they check the honesty of gasoline prices. From birth, the king's children born in year $i$ are responsible for checking gasoline prices on the paths from city $a_i$ to city $b_i$ and from city $c_i$ to city $d_i$, respectively.
The check is as follows: both children simultaneously start their journey from the cities $a_i$ and $c_i$, respectively. The first son of the king, born in the year $i$, moves along the path from the city $a_i$ to the city $b_i$, and the second — from the city $c_i$ to the city $d_i$. Children check that the price of gasoline in the city of $a_i$ coincides with the price of gasoline in the city of $c_i$. Next, they check that the price of gasoline in the second city on the way from $a_i$ to $b_i$ coincides with the price in the second city on the way from $c_i$ to $d_i$. Then they repeat the same thing for a couple of third cities on their paths and so on. At the end, they check that the price of gasoline in the city of $b_i$ coincides with the price of gasoline in the city of $d_i$. It is guaranteed that the length of the path from the city $a_i$ to the city $b_i$ coincides with the length of the path from the city $c_i$ to the city $d_i$.
Gas stations must strictly obey the laws, and therefore all checks of gasoline prices should not reveal violations. Help Berland gas stations find out how many ways they can set gasoline prices for $m$ years. In other words, for each $i$ from $1$ to $m$, calculate how many ways you can set gasoline prices at all gas stations so that after the birth of the first $i$ pairs of the king's children, all their checks did not reveal violations, and at any gas station the price was in the acceptable price range. Since the number of such methods can be large, calculate the answer modulo $10^9 + 7$.
|
To begin with, let's understand what is required of us. A tree is given, with a price range recorded at each vertex. A query is a pair of paths of equal length; the prices at the $i$-th vertices along these paths must be equal for all $i$. We need to find the number of ways to assign prices to the vertices for each prefix of the restrictions. Let's start with a slow solution. We store the connectivity components (within each component all prices must be equal), and for each of them an admissible price range. The answer is the product of the range lengths over all components. We walk along the paths and merge pairs of vertices into one component using a DSU. Clearly, to speed this solution up, we must find faster the moments when two vertices are merged into one component. First, let's analyze the long solution. Build a heavy-light decomposition, with which we hash the paths in the tree, taking as the symbol of a vertex the root of its component. Now, with binary search, we look for the first position where the hashes of the path prefixes differ, that is, where two vertices must be merged into one component. Using small-to-large merging, we update the component roots of the vertices and the segment tree of the HLD. There are $n$ merges, each found in $O(\log^2 n)$ via the HLD. There are also $O(n\log n)$ segment-tree updates due to the small-to-large merging, and $O(\log^2 n)$ work per query from the HLD. The final asymptotics: $O((n+q)\log^2 n)$. Now let's give a beautiful solution to this problem. Let's start with a bamboo (a path). Replace the equality of prices on a pair of paths with two pairs of paths whose lengths equal the largest power of two not exceeding the length of the original path (as in a sparse table). Now the path lengths of all constraints are powers of two. 
We iterate over the powers of two $2^k$ in descending order; for each path of length $2^k$ we create a vertex in an auxiliary graph, and likewise a vertex for each such path traversed in reverse order. The constraints now define edges in this graph. Add them and select a spanning forest. For each edge of the forest, split the constraint into two constraints with paths half as long and continue the process. On the layer with lengths $1$ we obtain the spanning forest we need, which gives the first moments when pairs of vertices are merged into components. Note that at most $2n$ edges are pushed down from each layer, plus at most $2q$ edges come from the queries. Hence each layer works in $O((n + q)\cdot\alpha(n))$, where $\alpha(n)$ is the amortized cost of a DSU operation, the inverse Ackermann function. We obtain a solution in $O((n + q)\cdot\alpha(n)\cdot\log n)$. For the complete solution on a tree, first split a pair of paths into three pairs of corresponding vertical paths: among the $4$ endpoints of the two paths, take the vertex closest to the LCA of its own path, pair its vertical path with the corresponding part of the other path; one vertical path and one arbitrary path remain, so split the second path at its LCA and the first at the corresponding lengths. Then proceed as for the bamboo, except that instead of a vertex responsible for a segment, we create a vertex responsible for a binary ascent in the tree to a height equal to a power of two.
|
[
"data structures",
"divide and conquer",
"dsu",
"hashing",
"trees"
] | 3,000
|
#include "bits/stdc++.h"
using namespace std;
const int mod = (int) 1e9 + 7;
int inv(int x) {
int res = 1, n = mod - 2;
while (n) {
if (n & 1) {
res = res * 1ll * x % mod;
}
x = x * 1ll * x % mod;
n /= 2;
}
return res;
}
const int N = (int) 2e5 + 22, K = 18;
vector<int> g[N];
pair<int, int> a[N];
int n, q, ans = 1;
int h[N], up[N][K], lg[N];
vector<array<int, 3>> edges; // vertex1, vertex2, time of the merge
struct dsu {
vector<int> p, sz;
void build(int n) {
p.resize(n);
sz.resize(n);
for (int i = 0; i < n; i++) {
p[i] = i;
sz[i] = 1;
}
}
int get(int v) {
return (v == p[v] ? v : p[v] = get(p[v]));
}
bool merge(int v, int u) {
v = get(v), u = get(u);
if (v != u) {
if (sz[v] > sz[u]) {
swap(v, u);
}
p[v] = u;
sz[u] += sz[v];
return true;
}
return false;
}
} G, dsu[K];
void dfs(int v, int pr, int d) {
h[v] = d;
up[v][0] = pr;
for (int j = 1; j < K; j++) {
up[v][j] = up[up[v][j - 1]][j - 1];
}
for (auto u : g[v]) {
dfs(u, v, d + 1);
}
}
int la(int v, int x) {
for (int j = 0; j < K; j++) {
if (x >> j & 1) {
v = up[v][j];
}
}
return v;
}
int lca(int v, int u) {
if (h[v] > h[u]) {
swap(v, u);
}
u = la(u, h[u] - h[v]);
if (v == u) {
return v;
}
for (int j = K - 1; j >= 0; j--) {
if (up[v][j] != up[u][j]) {
v = up[v][j], u = up[u][j];
}
}
return up[v][0];
}
int id(int v) {
return (v > 0 ? v : -v + n);
}
int sgn(int v) {
return (v > 0 ? 1 : -1);
}
void add_edge(int j, int v, int u, int t) {
if (dsu[j].merge(id(v), id(u))) {
if (j > 0) {
if (sgn(v) == sgn(u)) {
add_edge(j - 1, v, u, t);
add_edge(j - 1, sgn(v) * up[abs(v)][j - 1], sgn(u) * up[abs(u)][j - 1], t);
} else {
if (sgn(v) == -1) {
swap(v, u);
}
add_edge(j - 1, v, sgn(u) * up[abs(u)][j - 1], t);
add_edge(j - 1, sgn(v) * up[abs(v)][j - 1], u, t);
}
} else {
edges.push_back({abs(v), abs(u), t});
}
}
}
void add(int v, int u, int x, int y, int t, int type1, int type2) {
if (h[v] < h[u]) {
swap(v, u);
}
if (h[x] < h[y]) {
swap(x, y);
}
assert(h[v] - h[u] == h[x] - h[y]);
int g = lg[h[v] - h[u]];
if (type1 == type2) {
add_edge(g, type1 * v, type2 * x, t);
add_edge(g, type1 * la(v, h[v] - h[u] - (1 << g) + 1), type2 * la(x, h[x] - h[y] - (1 << g) + 1), t);
} else {
add_edge(g, type1 * v, type2 * la(x, h[x] - h[y] - (1 << g) + 1), t);
add_edge(g, type1 * la(v, h[v] - h[u] - (1 << g) + 1), type2 * x, t);
}
}
void merge(int v, int u) {
v = G.get(v), u = G.get(u);
if (v != u) {
G.merge(v, u);
if (G.sz[v] > G.sz[u]) {
swap(v, u);
}
if (a[v].first <= a[v].second) {
ans = ans * 1ll * inv(a[v].second - a[v].first + 1) % mod;
}
if (a[u].first <= a[u].second) {
ans = ans * 1ll * inv(a[u].second - a[u].first + 1) % mod;
}
a[u].first = max(a[u].first, a[v].first);
a[u].second = min(a[u].second, a[v].second);
if (a[u].first > a[u].second) {
ans = 0;
} else {
ans = ans * 1ll * (a[u].second - a[u].first + 1) % mod;
}
}
}
int main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
for (int j = 2; j < N; j++) {
lg[j] = lg[j / 2] + 1;
}
cin >> n;
for (int i = 2; i <= n; i++) {
int v;
cin >> v;
g[v].push_back(i);
}
for (int i = 1; i <= n; i++) {
cin >> a[i].first >> a[i].second;
ans = ans * 1ll * (a[i].second - a[i].first + 1) % mod;
}
dfs(1, 1, 0);
cin >> q;
for (int j = 0; j < K; j++) {
dsu[j].build(2 * n + 1);
}
for (int i = 0; i < q; i++) {
int v, u, x, y;
cin >> v >> u >> x >> y;
int w = lca(v, u);
int z = lca(x, y);
if (h[v] - h[w] > h[x] - h[z]) {
swap(v, x);
swap(u, y);
swap(w, z);
}
if (v != w) {
int d = h[v] - h[w];
int v2 = la(v, d - 1);
int x2 = la(x, d - 1);
add(v, v2, x, x2, i, 1, 1);
v = up[v2][0];
x = up[x2][0];
}
if (x != z) {
int d = h[x] - h[z];
int v2 = la(u, (h[u] - h[v]) - d);
int x2 = la(x, d - 1);
add(v, up[v2][0], x, x2, i, -1, 1);
v = v2;
x = up[x2][0];
}
add(v, u, x, y, i, (h[v] > h[u] ? 1 : -1), (h[x] > h[y] ? 1 : -1));
}
G.build(n + 1);
int j = 0;
for (int i = 0; i < q; i++) {
while (j < (int) edges.size() && edges[j][2] == i) {
merge(edges[j][0], edges[j][1]);
j++;
}
cout << ans << '\n';
}
}
|
1801
|
F
|
Another n-dimensional chocolate bar
|
Mom bought the boy Vasya an $n$-dimensional chocolate bar, which is an $n$-dimensional cube with side length $1$. The chocolate is divided into slices: along the $i$-th dimension it is divided by hyperplanes into $a_i$ equal parts. Thus, the chocolate consists of $a_1 \cdot a_2 \cdot a_3 \cdot \ldots \cdot a_n$ slices in total, the length of each slice along the $i$-th dimension is $\frac{1}{a_i}$, and accordingly the volume of each slice is $\frac{1}{a_1 a_2 \cdots a_n}$. Vasya and his friends want to cut the chocolate bar to get at least $k$ pieces, while Vasya wants to maximize the volume of the smallest of them. The bar may be cut only along the borders of the slices, and each cut must pass through the entire bar along one of the hyperplanes that form the slices. Only after making all the cuts does Vasya break the chocolate into pieces.
More formally, Vasya wants to choose the numbers $b_1, b_2, \dots, b_n$ ($1 \le b_i \le a_i$) — the number of parts into which Vasya will cut the chocolate bar along each dimension. The condition $b_1 \cdot b_2 \cdot \ldots \cdot b_n \ge k$ must be met to get at least $k$ pieces after all cuts. It can be noted that with optimal cutting with such parameters, the minimum piece will contain $\lfloor \frac{a_1}{b_1} \rfloor \dotsm \lfloor \frac{a_n}{b_n} \rfloor$ slices, and its volume will be equal to $\lfloor \frac{a_1}{b_1} \rfloor \dotsm \lfloor \frac{a_n}{b_n} \rfloor \cdot \frac{1}{a_1 a_2 \cdots a_n}$.
Vasya wants to get the maximum possible value of the volume of the minimum piece multiplied by $k$, that is, he wants to maximize the number of $\lfloor \frac{a_1}{b_1} \rfloor \dotsm \lfloor \frac{a_n}{b_n} \rfloor \cdot \frac{1}{a_1 a_2 \cdots a_n} \cdot k$. Help him with this.
|
Denote by $A$ the maximum value of $a_i$. To begin with, let's solve the problem in $O(n\cdot k\cdot f(k, A))$ using dynamic programming. Let $dp[i][j]$ be the maximum possible volume of the smallest piece if along the first $i$ dimensions we have divided the chocolate into $j$ parts; if we have divided it into more than $k$ parts, we also put the result into $dp[i][k]$. For the recalculation we need to decide into how many parts to divide the bar along the next dimension. There are several ways to do this. One can iterate in $O(k)$ over the state we move to, and derive from it into how many parts to divide the bar along the next dimension, giving $O(n\cdot k^2)$. Alternatively, one can iterate in $O(A)$ over the number of parts along the next dimension. Finally, being in the state $dp[i][j]$, one can iterate over $b_i$ (into how many parts to divide the chocolate) while $j\cdot b_i\le k$; it can be shown that this works in $O(n\cdot k\cdot\ln{k})$. The key idea: suppose we need to divide the bar into $10$ parts, and along the first dimensions we have already divided it into $5$, $6$, $7$, $8$ or $9$ parts. All these states are indistinguishable for us, because in each of them we still need to divide the bar into at least $2$ more parts. It remains to understand how many such "indistinguishable" states there are and how to store them. 
There are several approaches for this; let's analyze one of them. We are interested in all values $\lceil\frac{k}{i}\rceil$ for $i = 1, 2, \ldots, k$: this is how many parts the bar may still need to be divided into. Among them only $O(\sqrt{k})$ are distinct, since either $i\le\sqrt{k}$, or the value $\lceil\frac{k}{i}\rceil\le\sqrt{k}$ itself. If we make these numbers the states, and recalculate by iterating over the destination state, we get $O(n\cdot\sqrt{k}\cdot\sqrt{k}) = O(n\cdot k)$, which is still not enough to solve the whole problem. The last observation: if we are in the state $dp[i][remain]$, where $remain = \lceil\frac{k}{i}\rceil$ for some $i$, apply the same idea to it. From this state, only the transitions to the states $\lceil \frac{remain}{j} \rceil$ for $j = 1, 2, \ldots, remain$ are interesting. What asymptotics do we get if we iterate over interesting transitions only? 
$n \cdot \left(\sum\limits_{r=1}^{\sqrt{k}} 2 \cdot \sqrt{r} + 2 \cdot \sqrt{\lceil \frac{k}{r} \rceil}\right)$ transitions in total; it can be shown that this is $O(n\cdot k^{3/4})$, which solves the problem.
|
[
"dp",
"math",
"meet-in-the-middle",
"number theory"
] | 2,700
|
#include <bits/stdc++.h>
using namespace std;
const int MAXN = 200;
int a[MAXN];
const int MAXK = 1e7 + 100, MAXH = 1e4;
int hsh[MAXK];
int rev[MAXH];
double dp[MAXN][MAXH];
vector<array<int, 2>> go[MAXH];
int main() {
ios_base::sync_with_stdio(false);
cin.tie(0); cout.tie(0);
int n, k;
cin >> n >> k;
for (int i = 0; i < n; ++i) {
cin >> a[i];
}
int id = 0;
for (int c = 1;; ++id) {
rev[id] = (k + c - 1) / c;
hsh[(k + c - 1) / c] = id;
int t = (k + c - 1) / c - 1;
if (t == 0) break;
c = (k + t - 1) / t;
}
++id;
dp[0][hsh[k]] = k;
for (int i = 0; i < id; ++i) {
int k = rev[i];
for (int c = 1;;) {
go[i].push_back({c, hsh[(k + c - 1) / c]});
int t = (k + c - 1) / c - 1;
if (t == 0) break;
c = (k + t - 1) / t;
}
}
for (int i = 0; i < n; ++i) {
for (int j = 0; j < id; ++j) {
double val = dp[i][j];
if (val == 0) continue;
for (auto elem : go[j]) {
int c = elem[0], k1 = elem[1];
if (c > a[i]) break;
int cur = a[i] / c;
dp[i + 1][k1] = max(dp[i + 1][k1], val * cur / a[i]);
}
}
}
cout << fixed << setprecision(20);
cout << dp[n][hsh[1]] << '\n';
return 0;
}
|