contest_id | index | title | statement | tutorial | tags | rating | code
|---|---|---|---|---|---|---|---|
1575
|
D
|
Divisible by Twenty-Five
|
Mr. Chanek has an integer represented by a string $s$. Zero or more digits have been erased and are denoted by the character _. There are also zero or more digits marked by the character X, meaning they're the same digit.
Mr. Chanek wants to count the number of possible integers $s$, where $s$ is divisible by $25$. Of course, $s$ must not contain any leading zero. He can replace each character _ with any digit. He can also replace each character X with any digit, but it must be the same digit for every character X.
As a note, a leading zero is any 0 digit that comes before the first nonzero digit in a number string in positional notation. For example, 0025 has two leading zeroes. An exception is the integer zero (0 has no leading zero, but 0000 has three leading zeroes).
|
There are no dirty tricks to solve this problem. Brute force over all numbers $i \in [10^{|s| - 1}, 10^{|s|} - 1]$ with step $i := i + 25$. You might want to handle the case $|s| = 1$ separately, because $0$ is a valid $s$ there. For easier implementation, you can use std::to_string in C++. It is also possible to solve it in $O(|s|)$ by case analysis. Time complexity: $O(\frac{1}{25} \cdot |s| \cdot 10^{|s|})$ or $O(|s|)$.
|
[
"brute force",
"dfs and similar",
"dp"
] | 1,800
|
#include <bits/stdc++.h>
using namespace std;
#define sz(x) (int)(x).size()
typedef long long LL;
LL expo(LL a, LL b){
// plain fast exponentiation; no modulus is needed for the sizes in this problem
LL ret = 1;
while(b > 0){
if(b&1) ret = (ret*a);
a = (a*a); b >>= 1;
}
return ret;
}
int main(){
ios_base::sync_with_stdio(0);
cin.tie(0);
string s; cin >> s;
int low = expo(10, sz(s) - 1);
int high = expo(10, sz(s)) - 1;
if(low == 1) low--; // |s| = 1: the single digit 0 is also allowed
while(low%25) low++;
int ans = 0;
for(;low <= high;low += 25){
string current = to_string(low);
char xval = '-';
bool can = 1;
for(int i = 0;i < sz(s);i++){
if(s[i] == '_') continue;
if(s[i] == 'X'){
if(xval != '-' && xval != current[i]){
can = 0;
break;
}
xval = current[i];
}else if(s[i] != current[i]){
can = 0;
break;
}
}
ans += can;
}
cout << ans << endl;
}
|
1575
|
E
|
Eye-Pleasing City Park Tour
|
There is a city park represented as a tree with $n$ attractions as its vertices and $n - 1$ rails as its edges. The $i$-th attraction has happiness value $a_i$.
Each rail has a color. It is either black if $t_i = 0$, or white if $t_i = 1$. Black trains only operate on a black rail track, and white trains only operate on a white rail track. If you are previously on a black train and want to ride a white train, or you are previously on a white train and want to ride a black train, you need to use $1$ ticket.
The path of a tour must be a simple path — it must not visit an attraction more than once. You do not need a ticket the first time you board a train. You only have $k$ tickets, meaning \textbf{you can only switch train types at most $k$ times}. In particular, you do not need a ticket to go through a path consisting of one rail color.
Define $f(u, v)$ as the sum of happiness values of the attractions in the tour $(u, v)$, which is a simple path that starts at the $u$-th attraction and ends at the $v$-th attraction. Find the sum of $f(u,v)$ for all valid tours $(u, v)$ ($1 \leq u \leq v \leq n$) that do not need more than $k$ tickets, modulo $10^9 + 7$.
|
We can use centroid decomposition to solve this problem. Find the centroid $cen$ of the tree and root the tree at $cen$. Consider each subtree of the children of $cen$ as a different group of vertices. We want the sum of $f(u,v)$ over all valid tours such that $u$ and $v$ are from different groups, which we get with basic inclusion-exclusion: count the sum of $f(u,v)$ where the path $u \to cen \to v$ uses at most $k$ tickets, without caring which groups $u, v$ belong to, then subtract the same count restricted to $u, v$ from the same group. Define $cost(u)$ as the number of tickets you need to go from $u$ to $cen$. For a fixed set of vertices $S$, you can count $f(u,v)$ where $cost(u) + cost(v) + z \leq k$ with prefix sums; here $z \in \{0, 1\}$ depends on whether the last edges of the paths $u \to cen$ and $v \to cen$ have different colors. We can do all of this in $O(|S|)$. We use the solution above while setting $S$ as the set of all vertices in $cen$'s subtree, and then as each group separately, and recurse into the subtrees. Because the depth of a centroid tree is $O(\log n)$, the overall complexity of the solution is $O(n \log n)$.
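For reference, the recursion described above can be skeletonized as follows. This is a generic, illustrative centroid-decomposition skeleton, not the official solution: the comment marks where the prefix-sum counting would go, and all names (`CentroidDecomp`, `build`, etc.) are ours.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Skeleton of centroid decomposition; the counting over a vertex set
// described in the editorial would be plugged in where indicated.
struct CentroidDecomp {
    int n;
    vector<vector<int>> adj;
    vector<bool> removed;
    vector<int> sz, order;          // order: centroids in discovery order

    CentroidDecomp(int n) : n(n), adj(n), removed(n, false), sz(n) {}
    void addEdge(int u, int v) { adj[u].push_back(v); adj[v].push_back(u); }

    int calcSize(int u, int p) {    // subtree sizes inside the current piece
        sz[u] = 1;
        for (int v : adj[u]) if (v != p && !removed[v]) sz[u] += calcSize(v, u);
        return sz[u];
    }
    int findCentroid(int u, int p, int treeSize) {
        for (int v : adj[u])        // descend toward any too-heavy child
            if (v != p && !removed[v] && sz[v] * 2 > treeSize)
                return findCentroid(v, u, treeSize);
        return u;
    }
    void build(int u) {
        int cen = findCentroid(u, -1, calcSize(u, -1));
        order.push_back(cen);
        // here: inclusion-exclusion counting of pairs through cen
        removed[cen] = true;        // cut cen, recurse into the pieces
        for (int v : adj[cen]) if (!removed[v]) build(v);
    }
};
```

On a path of three vertices, the first centroid found is the middle one, and every vertex becomes a centroid exactly once.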
|
[
"data structures",
"trees"
] | 2,600
| null |
1575
|
F
|
Finding Expected Value
|
Mr. Chanek opened a letter from his fellow, who is currently studying at Singanesia. Here is what it says.
Define an array $b$ ($0 \leq b_i < k$) with $n$ integers. While there exists a pair $(i, j)$ such that $b_i \ne b_j$, do the following operation:
- Randomly pick a number $i$ satisfying $0 \leq i < n$. Note that each number $i$ has a probability of $\frac{1}{n}$ to be picked.
- Randomly pick a number $j$ satisfying $0 \leq j < k$.
- Change the value of $b_i$ to $j$. It is possible for $b_i$ to be changed to the same value.
Denote $f(b)$ as the expected number of operations done to $b$ until all elements of $b$ are equal.
You are given two integers $n$ and $k$, and an array $a$ ($-1 \leq a_i < k$) of $n$ integers.
For every index $i$ with $a_i = -1$, replace $a_i$ with a random number $j$ satisfying $0 \leq j < k$. Let $c$ be the number of occurrences of $-1$ in $a$. There are $k^c$ possibilities of $a$ after the replacement, each with equal probability of being the final array.
Find the expected value of $f(a)$ modulo $10^9 + 7$.
Formally, let $M = 10^9 + 7$. It can be shown that the answer can be expressed as an irreducible fraction $\frac{p}{q}$, where $p$ and $q$ are integers and $q \not \equiv 0 \pmod{M}$. Output the integer equal to $p \cdot q^{-1} \bmod M$. In other words, output such an integer $x$ that $0 \le x < M$ and $x \cdot q \equiv p \pmod{M}$.
After reading the letter, Mr. Chanek gave the task to you. Solve it for the sake of their friendship!
|
We can use this trick, which is also explained below. Suppose $a_i \neq -1$ for now. We want to find a function $F(a)$ such that $\mathbb{E}(F_{t + 1} - F_t | F_t) = -1$, where $F_t$ is the value of $F(a)$ at time $t$. If we can find such a function, then the expected stopping time is equal to $F(a_0) - F(a_T)$, where $a_0$ is the initial array before doing any operation, and $a_T$ is the final array where we don't do any more operation (that is, all elements of $a_T$ are equal). Suppose $occ(x)$ is the number of occurrences of $x$ in the current array, for some $0 \leq x < k$. It turns out we can find such $F$ satisfying $F = \sum_{x = 0}^{k - 1} f(occ(x))$ for some function $f$. We now try to find $f$. Suppose we currently have $a_t$, and we want to find the expected value of $F(a_{t + 1})$. There are two cases to consider: $\forall x, occ_{t + 1}(x) = occ_t(x)$ if $a_i$ doesn't change when doing the operation. This happens with probability $\frac{1}{k} \cdot \frac{occ_t(x)}{n}$ for each $x$. Otherwise, there exist some $x, y$ ($x \neq y$) such that $occ_{t + 1}(x) = occ_t(x) - 1$ and $occ_{t + 1}(y) = occ_t(y) + 1$. This happens if initially $a_i = x$, then by doing the operation we change it to $y$. This happens with probability $\frac{1}{k} \cdot \frac{occ_t(x)}{n}$ for each $x,y$. 
Thus, $\begin{split} & \mathbb{E}(F_{t + 1} - F_t | F_t) = -1\\ \implies & \sum_{i = 0}^{k - 1} f(occ_{t + 1}(i)) - \sum_{i = 0}^{k - 1}f(occ_t(i)) = -1\\ \implies & \sum_{i = 0}^{k - 1} f(occ_{t + 1}(i)) = \sum_{i = 0}^{k - 1}f(occ_t(i)) - 1\\ \implies & \frac{1}{k}\sum_{i = 0}^{k - 1}f(occ_t(i)) + \sum_{x = 0}^{k - 1}\sum_{y = 0}^{k - 1}[x \neq y]\frac{occ_t(x)}{nk}\Big( \sum_{i = 0}^{k - 1}f(occ_t(i)) - f(occ_t(x)) - \\ & f(occ_t(y)) + f(occ_t(x) - 1) + f(occ_t(y) + 1)\Big) = \sum_{i = 0}^{k - 1}f(occ_t(i)) - 1\\ \implies & \sum_{x = 0}^{k - 1}\sum_{y = 0}^{k - 1}[x \neq y]\frac{occ_t(x)}{nk}\Big( - f(occ_t(x)) - f(occ_t(y)) + f(occ_t(x) - 1) + f(occ_t(y) + 1)\Big) = - 1\\ \implies & \sum_{x = 0}^{k - 1}\frac{(k - 1)occ_t(x)}{nk} \Big(f(occ_t(x) - 1) - f(occ_t(x))\Big) + \frac{n - occ_t(x)}{nk}\Big( f(occ_t(x) + 1) - f(occ_t(x)) \Big) = - 1\\ \implies & \sum_{x = 0}^{k - 1}\frac{(k - 1)occ_t(x)}{nk} \Big(f(occ_t(x) - 1) - f(occ_t(x))\Big) + \frac{n - occ_t(x)}{nk}\Big( f(occ_t(x) + 1) - f(occ_t(x)) \Big) + \frac{occ_t(x)}{n} = 0\\ \end{split}$ Suppose $a = occ_t(x)$. If we can find $f$ such that $\frac{(k - 1)a}{nk} \Big(f(a - 1) - f(a)\Big) + \frac{n - a}{nk}\Big( f(a + 1) - f(a) \Big) + \frac{a}{n} = 0$ then $f$ satisfies $F$. $\frac{(k - 1)a}{nk} \Big(f(a - 1) - f(a)\Big) + \frac{n - a}{nk}\Big( f(a + 1) - f(a) \Big) + \frac{a}{n} = 0\\ (k - 1)a \Big(f(a - 1) - f(a)\Big) + (n - a)\Big( f(a + 1) - f(a) \Big) + ak = 0\\ (k - 1)af(a - 1) - (k - 1)af(a) + (n - a)f(a + 1) - (n - a)f(a) + ak = 0\\ f(a + 1) = \frac{1}{a - n}\Big( (k - 1)af(a - 1) + (2a - ak - n)f(a) + ak \Big)$ So we can set $f$ to any function that satisfies the recursive formula above, and then derive $F$. To handle $a_i = -1$, note that $F$ depends only on the occurrence of each value $x$ ($0 \leq x < k$), and each of them is independent. Therefore, we can count the contribution of each $x$ towards all possible final arrays separately. This is easy to do in $O(n)$. 
Moreover, there are only $O(\sqrt{n})$ distinct values of $occ(x)$ in the initial array (before changing $a_i = -1$), and each $x$ with the same number of occurrences contributes the same amount. Therefore, we can solve the problem in $O(n \sqrt{n})$.
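The recurrence for $f$ can be tabulated directly modulo $10^9 + 7$. Below is a minimal sketch: plugging $a = 0$ into the recurrence gives $f(1) = f(0)$, so we may choose $f(0) = f(1) = 0$; `buildF` is an illustrative name, not part of the official solution.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long LL;
const LL MOD = 1e9 + 7;

LL modpow(LL a, LL b) {                       // fast exponentiation mod MOD
    a %= MOD; LL r = 1;
    for (; b > 0; b >>= 1, a = a * a % MOD) if (b & 1) r = r * a % MOD;
    return r;
}
LL inv(LL a) { return modpow((a % MOD + MOD) % MOD, MOD - 2); }

// Tabulate f(a+1) = ((k-1)*a*f(a-1) + (2a - ak - n)*f(a) + ak) / (a - n),
// everything mod MOD, with the choice f(0) = f(1) = 0.
vector<LL> buildF(LL n, LL k) {
    vector<LL> f(n + 1, 0);                   // f[0] = f[1] = 0
    for (LL a = 1; a < n; a++) {
        LL num = ((k - 1) % MOD * (a % MOD) % MOD * f[a - 1]
                  + ((2 * a - a * k - n) % MOD + MOD) % MOD * f[a]
                  + a % MOD * (k % MOD)) % MOD;
        f[a + 1] = num * inv(a - n) % MOD;    // a - n < 0; inv() normalizes it
    }
    return f;
}
```

One can verify that the produced table satisfies the defining equation $(k-1)a\big(f(a-1)-f(a)\big) + (n-a)\big(f(a+1)-f(a)\big) + ak \equiv 0 \pmod{10^9+7}$ for every $1 \le a < n$.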
|
[
"math"
] | 2,900
| null |
1575
|
G
|
GCD Festival
|
Mr. Chanek has an array $a$ of $n$ integers. The prettiness value of $a$ is denoted as:
$$\sum_{i=1}^{n} {\sum_{j=1}^{n} {\gcd(a_i, a_j) \cdot \gcd(i, j)}}$$
where $\gcd(x, y)$ denotes the greatest common divisor (GCD) of integers $x$ and $y$.
In other words, the prettiness value of an array $a$ is the total sum of $\gcd(a_i, a_j) \cdot \gcd(i, j)$ for all pairs $(i, j)$.
Help Mr. Chanek find the prettiness value of $a$, and output the result modulo $10^9 + 7$!
|
Define: $d(n)$ as the set of all divisors of $n$; $\phi(x)$ as the Euler totient function of $x$; and $d(a, b)$ as the set of all common divisors of $a$ and $b$, or equivalently, $d(\gcd(a, b))$. Observe that $\sum_{x \in d(n)}\phi(x) = n$. This implies $\sum_{x \in d(a, b)}\phi(x) = \gcd(a,b)$. Then: $\sum_{i = 1}^n \sum_{j = 1}^n \gcd(i, j) \cdot \gcd(a_i, a_j)\\ = \sum_{i = 1}^n \sum_{j = 1}^n \left(\sum_{x \in d(i,j)} \phi(x)\right) \cdot \gcd(a_i, a_j)\\ = \sum_{x=1}^n \phi(x) \sum_{i = 1}^{\lfloor \frac{n}{x} \rfloor} \sum_{j = 1}^{\lfloor \frac{n}{x} \rfloor} \gcd(a_{ix}, a_{jx})\\ = \sum_{x=1}^n \phi(x) \sum_{i = 1}^{\lfloor \frac{n}{x} \rfloor} \sum_{j = 1}^{\lfloor \frac{n}{x} \rfloor} \sum_{y \in d(a_{ix}, a_{jx})} \phi(y)\\ = \sum_{x=1}^n \phi(x) \sum_{y} \phi(y) \left(\sum_{i = 1}^{\lfloor \frac{n}{x} \rfloor} [a_{ix} \bmod y = 0] \right)^2$ If we only iterate over $y$ that divide at least one $a_{ix}$, we can compute the above summation in $O(n \log n \cdot \max_{i=1}^n |d(a_i)|)$.
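The final formula can be evaluated directly. The sketch below (illustrative names, positive $a_i$ assumed, $a$ passed 1-indexed with a dummy at position 0) sieves $\phi$ and enumerates divisors of each $a_{ix}$ by trial division:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long LL;
const LL MOD = 1e9 + 7;

// Evaluate sum_x phi(x) * sum_y phi(y) * (#{ i <= n/x : y | a_{ix} })^2.
LL prettiness(const vector<LL>& a) {          // a[1..n], a[0] is a dummy
    int n = a.size() - 1;
    int maxv = *max_element(a.begin() + 1, a.end());
    int m = max(n, maxv);
    vector<LL> phi(m + 1);
    iota(phi.begin(), phi.end(), 0);          // linear-ish phi sieve
    for (int p = 2; p <= m; p++)
        if (phi[p] == p)                      // p is prime
            for (int q = p; q <= m; q += p) phi[q] -= phi[q] / p;

    LL total = 0;
    for (int x = 1; x <= n; x++) {
        map<LL, LL> cnt;                      // y -> #{ i : y | a_{ix} }
        for (int i = x; i <= n; i += x)       // indices that are multiples of x
            for (LL y = 1; y * y <= a[i]; y++)
                if (a[i] % y == 0) {
                    cnt[y]++;
                    if (y != a[i] / y) cnt[a[i] / y]++;
                }
        LL inner = 0;
        for (auto& [y, c] : cnt)
            inner = (inner + phi[y] % MOD * (c % MOD) % MOD * (c % MOD)) % MOD;
        total = (total + phi[x] % MOD * inner) % MOD;
    }
    return total;
}
```

For small arrays this agrees with the quadratic brute force over all pairs $(i, j)$.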
|
[
"math",
"number theory"
] | 2,200
| null |
1575
|
H
|
Holiday Wall Ornaments
|
The Winter holiday will be here soon. Mr. Chanek wants to decorate his house's wall with ornaments. The wall can be represented as a binary string $a$ of length $n$. His favorite nephew has another binary string $b$ of length $m$ ($m \leq n$).
Mr. Chanek's nephew loves the non-negative integer $k$. His nephew wants exactly $k$ occurrences of $b$ as substrings in $a$.
However, Mr. Chanek does not know the value of $k$. So, for each $k$ ($0 \leq k \leq n - m + 1$), find the minimum number of elements in $a$ that have to be changed such that there are exactly $k$ occurrences of $b$ in $a$.
A string $s$ occurs exactly $k$ times in $t$ if there are exactly $k$ different pairs $(p,q)$ such that we can obtain $s$ by deleting $p$ characters from the beginning and $q$ characters from the end of $t$.
|
Do a dynamic programming with three states: the position in $s$, the position in $t$ (the length of the longest prefix of $t$ matching a suffix of the built string), and how many matches are left. Define $dp[a][b][rem]$ as the minimum cost of having built the string $p = s[1..a]$ with $rem$ matches left, where the longest prefix match between $p$ and $t$ has length $b$. The answer is $\min_c dp[n][c][0]$. The transitions can be precomputed with brute force in $O(n^3)$ or with Aho-Corasick. Time complexity: $O(n^3)$. Space complexity: $O(n^2)$.
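The brute-force transition precomputation mentioned above can look like this: for each current longest-prefix-match length and each next bit, find the new longest prefix match. This is a hedged sketch over the binary alphabet; `buildTransitions` is our name, not from the official solution.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Brute-force prefix-automaton transitions for pattern b over alphabet {'0','1'}:
// go[len][c] = length of the longest prefix of b that is a suffix of
// (b[0..len-1] + ('0'+c)).
vector<array<int, 2>> buildTransitions(const string& b) {
    int m = b.size();
    vector<array<int, 2>> go(m + 1);
    for (int len = 0; len <= m; len++)
        for (int c = 0; c < 2; c++) {
            string cur = b.substr(0, len) + char('0' + c);
            int best = 0;
            for (int k = 1; k <= (int)cur.size() && k <= m; k++)
                // does the length-k suffix of cur equal the length-k prefix of b?
                if (cur.compare(cur.size() - k, k, b, 0, k) == 0) best = k;
            go[len][c] = best;
        }
    return go;
}
```

Whenever `go[len][c] == m`, a full occurrence of the pattern is completed; the DP charges a cost when the chosen bit differs from the original wall character.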
|
[
"dp",
"strings"
] | 2,200
| null |
1575
|
I
|
Illusions of the Desert
|
Chanek Jones is back, helping his long-lost relative Indiana Jones, to find a secret treasure in a maze buried below a desert full of illusions.
The map of the labyrinth forms a tree with $n$ rooms numbered from $1$ to $n$ and $n - 1$ tunnels connecting them such that it is possible to travel between each pair of rooms through several tunnels.
The $i$-th room ($1 \leq i \leq n$) has $a_i$ illusion rate. To go from the $x$-th room to the $y$-th room, there must exist a tunnel between $x$ and $y$, and it takes $\max(|a_x + a_y|, |a_x - a_y|)$ energy. $|z|$ denotes the absolute value of $z$.
To prevent grave robbers, the maze can change the illusion rate of any room in it. Chanek and Indiana would ask $q$ queries.
There are two types of queries to be done:
- $1\ u\ c$ — The illusion rate of the $u$-th room is changed to $c$ ($1 \leq u \leq n$, $0 \leq |c| \leq 10^9$).
- $2\ u\ v$ — Chanek and Indiana ask you the minimum sum of energy needed to take the secret treasure at room $v$ if they are initially at room $u$ ($1 \leq u, v \leq n$).
Help them, so you can get a portion of the treasure!
|
Note that $\max(|a_x + a_y|, |a_x - a_y|) = |a_x| + |a_y|$: whichever choice of sign makes the two terms agree attains the maximum. Now the problem can be reduced to updating a vertex's value and querying the sum of $|a_i|$ over the vertices of a path. This can be done in several ways. One can use the Euler tour flattening method, as described in the Euler Tour Magic blog by brdy, or use heavy-light decomposition. Time complexity: $O((q + n) \log^2 n)$ or $O((q + n) \log n)$
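The key identity can be checked exhaustively on a small range (an illustrative sanity check, nothing more):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Exhaustively verify max(|x+y|, |x-y|) == |x| + |y| on [lo, hi]^2.
bool identityHolds(int lo, int hi) {
    for (int x = lo; x <= hi; x++)
        for (int y = lo; y <= hi; y++)
            if (max(abs(x + y), abs(x - y)) != abs(x) + abs(y)) return false;
    return true;
}
```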
|
[
"data structures",
"trees"
] | 2,300
| null |
1575
|
J
|
Jeopardy of Dropped Balls
|
Mr. Chanek has a new game called Dropping Balls. Initially, Mr. Chanek has a grid $a$ of size $n \times m$.
Each cell $(x,y)$ contains an integer $a_{x,y}$ denoting the direction of how the ball will move.
- $a_{x,y}=1$ — the ball will move to the right (the next cell is $(x, y + 1)$);
- $a_{x,y}=2$ — the ball will move to the bottom (the next cell is $(x + 1, y)$);
- $a_{x,y}=3$ — the ball will move to the left (the next cell is $(x, y - 1)$).
Every time a ball leaves a cell $(x,y)$, the integer $a_{x,y}$ will change to $2$. Mr. Chanek will drop $k$ balls sequentially, each starting from the first row, and on the $c_1, c_2, \dots, c_k$-th ($1 \leq c_i \leq m$) columns.
Determine the column in which each ball will end up (the \textbf{position of the ball after leaving the grid}).
|
Naively simulating the ball's path is enough, and runs in $O(nm + nk)$. Note that every time a ball visits a non-$2$ cell, the length of the current ball's path increases by $1$, and then the cell turns into $2$; so the total length of all paths can be increased by at most $O(nm)$ over all balls. In addition, each ball needs at least $O(n)$ moves to travel, so we get $O(nm + nk)$. We can improve this further. You can speed up each drop by storing consecutive segments of $2$-cells in the downward direction for each column. Using a Disjoint-Set Union data structure, for each cell with $a_{x,y} = 2$, join it with the cell below it if $a_{x + 1, y} = 2$. Time complexity: $O(k + nm\cdot\alpha(nm))$
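The naive simulation can be sketched as follows (illustrative names; this is the $O(nm + nk)$ version without the DSU speed-up, and it assumes, as the problem guarantees, that a ball never rolls off the left or right side):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Simulate dropping each ball from row 0 in the given 1-based start column;
// returns the 1-based exit column of each ball.
vector<int> dropBalls(vector<vector<int>> g, const vector<int>& cols) {
    int n = g.size();
    vector<int> res;
    for (int c : cols) {
        int x = 0, y = c - 1;
        while (x < n) {
            int dir = g[x][y];
            g[x][y] = 2;                 // the cell turns into 2 after use
            if (dir == 1) y++;           // right
            else if (dir == 3) y--;      // left
            else x++;                    // down
        }
        res.push_back(y + 1);
    }
    return res;
}
```

For example, on the grid $\{\{1,2\},\{2,2\}\}$, dropping two balls into column $1$ yields exit columns $2$ and then $1$, since the first ball overwrites the top-left cell with a $2$.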
|
[
"binary search",
"brute force",
"dsu",
"implementation"
] | 1,500
| null |
1575
|
K
|
Knitting Batik
|
Mr. Chanek wants to knit a batik, a traditional cloth from Indonesia. The cloth forms a grid $a$ with size $n \times m$. There are $k$ colors, and each cell in the grid can be one of the $k$ colors.
\textbf{Define} a sub-rectangle as an ordered pair of two cells $((x_1, y_1), (x_2, y_2))$, denoting the top-left cell and bottom-right cell (inclusively) of a sub-rectangle in $a$. Two sub-rectangles $((x_1, y_1), (x_2, y_2))$ and $((x_3, y_3), (x_4, y_4))$ have the same pattern if and only if the following holds:
- they have the same width ($x_2 - x_1 = x_4 - x_3$);
- they have the same height ($y_2 - y_1 = y_4 - y_3$);
- for every pair $(i, j)$ where $0 \leq i \leq x_2 - x_1$ and $0 \leq j \leq y_2 - y_1$, the color of cells $(x_1 + i, y_1 + j)$ and $(x_3 + i, y_3 + j)$ are equal.
Count the number of possible batik color combinations, such that the subrectangles $((a_x, a_y),(a_x + r - 1, a_y + c - 1))$ and $((b_x, b_y),(b_x + r - 1, b_y + c - 1))$ have the same pattern.
Output the answer modulo $10^9 + 7$.
|
October the 2nd is the National Batik Day of Indonesia. Observe that the constraint only forces the two sub-rectangles to be equal cell by cell, so the colors of one rectangle are determined by the other, while the remaining $nm - rc$ cells are free (this holds even when the rectangles overlap). Simple casework shows that the answer is $k^{nm}$ if the two sub-rectangles coincide ($(a_x, a_y) = (b_x, b_y)$), and $k^{nm - rc}$ otherwise. Time complexity: $O(\log nm)$
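A sketch of the resulting computation with fast exponentiation (illustrative names; `samePosition` means the two sub-rectangles coincide):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long LL;
const LL MOD = 1e9 + 7;

LL modpow(LL a, LL b) {                      // fast exponentiation mod 1e9+7
    a %= MOD; LL r = 1;
    for (; b > 0; b >>= 1, a = a * a % MOD) if (b & 1) r = r * a % MOD;
    return r;
}
// k^(nm) if the rectangles coincide, k^(nm - rc) otherwise.
LL countBatiks(LL n, LL m, LL k, LL r, LL c, bool samePosition) {
    LL e = n * m - (samePosition ? 0 : r * c);
    return modpow(k, e);
}
```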
|
[
"implementation",
"math"
] | 2,200
| null |
1575
|
L
|
Longest Array Deconstruction
|
Mr. Chanek gives you a sequence $a$ indexed from $1$ to $n$. Define $f(a)$ as the number of indices where $a_i = i$.
You can pick an element from the current sequence and remove it, then concatenate the remaining elements together. For example, if you remove the $3$-rd element from the sequence $[4, 2, 3, 1]$, the resulting sequence will be $[4, 2, 1]$.
You want to remove some elements from $a$ in order to maximize $f(a)$, using zero or more operations. Find the largest possible $f(a)$.
|
Define $a'$ as the array we get after removing some elements of $a$, and call an element $a'_i$ valid if it satisfies $a'_i = i$. We try to find a combination of indices $c_1, c_2, \dots, c_m$ such that $a_{c_i} = a'_{p_i} = p_i$ for a certain set $p_1, p_2, \dots, p_m$. In other words, we want to find all indices $c_1, c_2, \dots, c_m$ such that $a_{c_i}$ will be a valid element in $a'$. Observe that every pair $i$ and $j$ ($i < j$) of chosen indices must satisfy: 1. $c_i < c_j$; 2. $a_{c_i} < a_{c_j}$; 3. $c_i - a_{c_i} \leq c_j - a_{c_j}$ — the number of elements that must be removed before position $c_i$ to shift $a_{c_i}$ into place must not exceed the number removed before $c_j$. Therefore, we can convert each index into the tuple $(c_i, a_{c_i}, c_i - a_{c_i})$ and find the longest sequence of tuples that satisfies the conditions. This can be done with divide and conquer in $O(n \log^2 n)$. But the solution can be improved further. Notice that $(2) \land (3) \implies (1)$. Hence we can solve the problem by finding the longest valid sequence of pairs $(a_{c_i}, c_i - a_{c_i})$ with any standard LIS algorithm. Time complexity: $O(n\log n)$
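For illustration, here is the chain condition as a plain $O(n^2)$ DP over candidate indices (not the optimized $O(n \log n)$ LIS; `maxFixedPoints` and the 1-based calling convention are ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// O(n^2) longest-chain DP over candidates i with 1 <= a_i <= i (1-based a;
// a[0] is a dummy), maximizing the number of kept fixed points.
int maxFixedPoints(const vector<int>& a) {
    int n = a.size() - 1;
    vector<int> c, v;                       // candidate indices and values
    for (int i = 1; i <= n; i++)
        if (a[i] >= 1 && a[i] <= i) { c.push_back(i); v.push_back(a[i]); }
    int m = c.size(), best = 0;
    vector<int> dp(m, 1);
    for (int j = 0; j < m; j++) {
        for (int i = 0; i < j; i++)
            // conditions (2) and (3); (1) is implied by the iteration order
            if (v[i] < v[j] && c[i] - v[i] <= c[j] - v[j])
                dp[j] = max(dp[j], dp[i] + 1);
        best = max(best, dp[j]);
    }
    return best;
}
```

On $a = [4, 2, 3, 1]$ the best is $2$ (keep positions $2$ and $3$, which are already fixed points).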
|
[
"data structures",
"divide and conquer",
"dp",
"sortings"
] | 2,100
| null |
1575
|
M
|
Managing Telephone Poles
|
Mr. Chanek's city can be represented as a plane. He wants to build a housing complex in the city.
There are some telephone poles on the plane, which is represented by a grid $a$ of size $(n + 1) \times (m + 1)$. There is a telephone pole at $(x, y)$ if $a_{x, y} = 1$.
For each point $(x, y)$, define $S(x, y)$ as the square of the Euclidean distance between the nearest pole and $(x, y)$. Formally, the square of the Euclidean distance between two points $(x_1, y_1)$ and $(x_2, y_2)$ is $(x_2 - x_1)^2 + (y_2 - y_1)^2$.
To optimize the building plan, the project supervisor asks you the sum of all $S(x, y)$ for each $0 \leq x \leq n$ and $0 \leq y \leq m$. Help him by finding the value of $\sum_{x=0}^{n} {\sum_{y=0}^{m} {S(x, y)}}$.
|
Interestingly, if you generate the Voronoi Diagram and transcribe it to a grid, then the same connected area in the Voronoi Diagram is not necessarily in the same 8-connected component in the grid. This is why most Dijkstra solutions will get WA. We can use convex hull trick to solve this problem. Suppose that we only need to calculate $\sum_{x = 0}^{n} {S(x, y)}$ for a certain $y$. For a fixed row $y$ and a pole located at point $(x_i, y_i)$, define $f(x) = (x - x_i)^2 + (y - y_i)^2 = - 2xx_i + x^2 + x_i^2 + (y - y_i)^2$, which is the squared Euclidean distance between point $(x, y)$ and pole $(x_i, y_i)$. Notice that, for a fixed pole $i$ and row $y$, $f(x)$ is a linear function of $x$ up to the common $x^2$ term, thus we can maintain the minimum with convex hull trick. Additionally, for a certain $y$, there are only $n + 1$ poles that we need to consider. More specifically, pole $(x_i, y_i)$ is called considerable if there is no other pole $(x_j, y_j)$ such that $x_i = x_j$ and $|y_j - y| < |y_i - y|$. Hence we can find $\sum_{x = 0}^{n} {S(x, y)}$ for a certain $y$ in $O(n)$ or $O(n \log n)$. Calculating $\sum_{x = 0}^{n} {S(x, y)}$ for all $y$ will result in $O(nm)$ or $O(nm \log n)$ time complexity.
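A minimal monotone convex hull trick of the kind this solution needs (a hedged sketch: lines must be added in strictly decreasing slope order and, after all adds, queried at non-decreasing $x$ — which matches iterating considerable poles and query points by increasing $x$; all names are ours):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long LL;

// Monotone convex hull trick for minimums. For this problem, each
// considerable pole (xi, yi) at a fixed row y would contribute the line
// f(x) = -2*xi * x + (xi*xi + (y - yi)*(y - yi)); the shared x^2 term is
// added back outside. Assumes at least one line before querying.
struct MonotoneCHT {
    vector<LL> K, B;   // slopes and intercepts of the lower envelope
    size_t ptr = 0;    // query pointer (query x is non-decreasing)

    // Middle line b never attains the minimum between a and c (K[a] > K[b] > K[c])?
    bool bad(size_t a, size_t b, size_t c) const {
        return (__int128)(B[c] - B[a]) * (K[a] - K[b]) <=
               (__int128)(B[b] - B[a]) * (K[a] - K[c]);
    }
    void addLine(LL k, LL b) {                 // slopes strictly decreasing
        K.push_back(k); B.push_back(b);
        while (K.size() >= 3 && bad(K.size() - 3, K.size() - 2, K.size() - 1)) {
            K.erase(K.end() - 2); B.erase(B.end() - 2);
        }
    }
    LL queryMin(LL x) {                        // x non-decreasing across calls
        if (ptr >= K.size()) ptr = K.size() - 1;
        while (ptr + 1 < K.size() &&
               K[ptr + 1] * x + B[ptr + 1] <= K[ptr] * x + B[ptr]) ptr++;
        return K[ptr] * x + B[ptr];
    }
};
```

For instance, over the lines $y = 2x$, $y = 3$, and $y = -2x + 20$, the minimum is $0$ at $x = 0$, $3$ at $x = 2$, and $0$ again at $x = 10$.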
|
[
"data structures",
"geometry"
] | 2,400
| null |
1579
|
A
|
Casimir's String Solitaire
|
Casimir has a string $s$ which consists of capital Latin letters 'A', 'B', and 'C' only. Each turn he can choose to do one of the two following actions:
- he can either erase exactly one letter 'A' \textbf{and} exactly one letter 'B' from arbitrary places of the string (these letters don't have to be adjacent);
- or he can erase exactly one letter 'B' \textbf{and} exactly one letter 'C' from arbitrary places in the string (these letters don't have to be adjacent).
Therefore, each turn the length of the string is decreased exactly by $2$. All turns are independent so for each turn, Casimir can choose any of two possible actions.
For example, with $s$ $=$ "ABCABC" he can obtain a string $s$ $=$ "ACBC" in one turn (by erasing the first occurrence of 'B' and the second occurrence of 'A'). There are also many other options for a turn aside from this particular example.
For a given string $s$ determine whether there is a sequence of actions leading to an empty string. In other words, Casimir's goal is to erase all letters from the string. Is there a way to do this?
|
Note that no matter which action is chosen, after the action is performed: exactly one letter 'B' is erased from the string, and exactly two letters in total are erased from the string. Let's denote the length of the string $s$ by $n$. If $n$ is odd, the described turns cannot erase all the characters from the string, because deleting two letters per turn keeps the length's parity. For example, if the original length of the string is $5$, then after one turn it will be equal to $3$, and after two turns it will be equal to $1$, at which point the next turn is impossible. Thus, if the length of the string is odd, the answer is NO. If $n$ is even, it will take exactly $\frac{n}{2}$ turns to erase all characters from the string. Since each action removes exactly one letter 'B' from the string, the string can become empty only if there are exactly $\frac{n}{2}$ letters 'B'. Let us show that this condition is sufficient, that is, if a string has exactly half of its letters equal to 'B', then there always exists a sequence of actions leading to an empty string. Indeed, if a string of length $n$ has exactly $\frac{n}{2}$ letters 'B', exactly $x$ letters 'A' and exactly $y$ letters 'C', then $x + y = n - \frac{n}{2} = \frac{n}{2}$. Then Casimir can make $x$ moves of the first type, each time removing the first occurrence of 'B' and the first occurrence of 'A', and $y$ moves of the second type, each time removing the first occurrence of 'B' and the first occurrence of 'C'. After $x + y = \frac{n}{2}$ such moves, the string will become empty. Thus, checking that the number of letters 'B' in the string is exactly half of its length is enough to solve the problem.
|
[
"math",
"strings"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
int main() {
int t;
cin >> t;
while (t--) {
string s;
cin >> s;
cout << (count(s.begin(), s.end(), 'B') * 2 == (long long)s.size() ?
"YES\n" : "NO\n");
}
}
|
1579
|
B
|
Shifting Sort
|
The new generation external memory contains an array of integers $a[1 \ldots n] = [a_1, a_2, \ldots, a_n]$.
This type of memory does not support changing the value of an arbitrary element. Instead, it allows you to cut out any segment of the given array, cyclically shift (rotate) it by any offset and insert it back into the same place.
Technically, each cyclic shift consists of two consecutive actions:
- You may select arbitrary indices $l$ and $r$ ($1 \le l < r \le n$) as the boundaries of the segment.
- Then you replace the segment $a[l \ldots r]$ with its cyclic shift to the \textbf{left} by an arbitrary offset $d$. The concept of a cyclic shift can also be explained by the following relations: the sequence $[1, 4, 1, 3]$ is a cyclic shift of the sequence $[3, 1, 4, 1]$ to the left by the offset $1$, and the sequence $[4, 1, 3, 1]$ is a cyclic shift of the sequence $[3, 1, 4, 1]$ to the left by the offset $2$.
For example, if $a = [1, \textcolor{blue}{3, 2, 8}, 5]$, then choosing $l = 2$, $r = 4$ and $d = 2$ yields a segment $a[2 \ldots 4] = [3, 2, 8]$. This segment is then shifted by the offset $d = 2$ to the \textbf{left}, and you get a segment $[8, 3, 2]$ which then takes the place of the original elements of the segment. In the end you get $a = [1, \textcolor{blue}{8, 3, 2}, 5]$.
Sort the given array $a$ using no more than $n$ cyclic shifts of any of its segments. Note that you don't need to minimize the number of cyclic shifts. Any method that requires $n$ or less cyclic shifts will be accepted.
|
In this problem, it was enough to implement an analogue of standard selection sort or insertion sort. Here is an example of a solution based on selection sort. Let's find the minimum element in the array by simply iterating over it, and denote its index in the array by $p_1$. If we apply the shift "$1$ $p_1$ $(p_1 - 1)$", the following happens: $a \rightarrow [\color{blue}{a_\color{red}{p_1}, a_1, a_2, \ldots, a_{p_1 - 1}}, a_{p_1 + 1}, \ldots, a_n]$ Let us perform a similar shift for the second-smallest element of the array, putting it in second place, for the third-smallest element of the array, putting it in third place, and so on. More formally, let's describe the $i$-th iteration as follows: At the beginning of the $i$-th iteration, the first $i - 1$ elements of the array are its $i - 1$ smallest elements, already in their correct places in sorted order. During the $i$-th iteration, the $i$-th smallest element of the array is placed in the $i$-th position. Since the first $i - 1$ smallest elements are already in their places, the $i$-th smallest element of the array is simply the smallest element among $[a_i, a_{i + 1}, \ldots, a_n]$. Let's find it by iterating over these elements and denote its index in the array $a$ by $p_i$. Make the shift "$i$ $p_i$ $(p_i - i)$". The first $i - 1$ elements will not change, and the element at position $p_i$ will move to position $i$: $a \rightarrow [a_1, \ldots, a_{i - 1}, \color{blue}{a_\color{red}{p_i}, a_i, a_{i + 1}, \ldots, a_{p_i - 1}}, a_{p_i + 1}, \ldots, a_n]$ It is worth noting that the output format forbids shifting segments with $l = r$. Regarding this case, we should check the equality $i = p_i$ separately: if these two indices coincide, then the $i$-th element is already in its place, and no shift should be done on this iteration. Let us repeat this algorithm for $i = 2$, $i = 3$, ..., $i = n - 1$. At each iteration, a new element is shifted into its place in sorted order, and each iteration performs at most one shift operation. Thus, in strictly less than $n$ shifts, the array will be completely sorted. The time complexity is $\mathcal{O}\left(t \cdot n^2\right)$.
|
[
"implementation",
"sortings"
] | 1,100
|
#include <bits/stdc++.h>
using namespace std;
typedef pair<int, int> pii;
int main() {
int t;
cin >> t;
while (t--) {
int n;
cin >> n;
vector<int> a(n);
vector<pii> actions;
for (int i = 0; i < n; i++)
cin >> a[i];
for (int i = 0; i < n - 1; i++) {
int min_pos = i;
for (int j = i + 1; j < n; j++)
if (a[j] < a[min_pos])
min_pos = j;
if (min_pos > i) {
actions.push_back({i, min_pos});
int opt = a[min_pos];
for (int j = min_pos; j > i; j--)
a[j] = a[j - 1];
a[i] = opt;
}
}
cout << actions.size() << '\n';
for (auto &lr: actions) {
cout << lr.first + 1 << ' ' << lr.second + 1 << ' ' << lr.second - lr.first << '\n';
}
}
}
|
1579
|
C
|
Ticks
|
Casimir has a rectangular piece of paper with a checkered field of size $n \times m$. Initially, all cells of the field are white.
Let us denote the cell with coordinates $i$ vertically and $j$ horizontally by $(i, j)$. The upper left cell will be referred to as $(1, 1)$ and the lower right cell as $(n, m)$.
Casimir draws ticks of different sizes on the field. A tick of size $d$ ($d > 0$) with its center in cell $(i, j)$ is drawn as follows:
- First, the center cell $(i, j)$ is painted black.
- Then exactly $d$ cells on the top-left diagonally to the center and exactly $d$ cells on the top-right diagonally to the center are also painted black.
- That is, all the cells with coordinates $(i - h, j \pm h)$ for all $h$ between $0$ and $d$ are painted. In particular, a tick consists of $2d + 1$ black cells.
An already painted cell will remain black if painted again. Below you can find an example of the $4 \times 9$ box, with two ticks of sizes $2$ and $3$.
You are given a description of a checkered field of size $n \times m$. Casimir claims that this field came about after he drew some (possibly $0$) ticks on it. The ticks could be of different sizes, but the size of each tick is at least $k$ (that is, $d \ge k$ for all the ticks).
Determine whether this field can indeed be obtained by drawing some (possibly none) ticks of sizes $d \ge k$ or not.
|
For each painted cell, we will determine whether it can be part of some tick of the allowed size. If some cell cannot be a part of any tick, the answer is obviously NO. Otherwise, let's match each painted cell with an arbitrary valid tick containing it (one entirely contained in the field under consideration and of size $\ge k$). If we draw all such ticks, the following holds: no empty (white) cell of the field will be painted, since only ticks that do not contradict the field in question have been considered; and every painted cell of the field will be covered by at least one drawn tick (at least the one we matched it with).
In order to check that all painted cells are parts of some ticks, let's go through all possible ticks of size $d \ge k$ and for each tick mark all the cells included in it. If there is at least one unmarked painted cell in the end, it can't be a part of any valid tick, and the answer is NO. To consider all possible ticks, we can iterate through all their possible center cells, that is, all the painted cells. Since smaller ticks are subsets of larger ticks with the same center cell, it is sufficient to find the maximal size of a tick that can be centered at each painted cell.
Let us now consider a painted cell $(i, j)$ as a possible center of some tick. By the definition of a tick, this cell can be a center of a tick of size $d$ if for all $h$ from $0$ to $d$ both cells $(i - h, j - h)$ and $(i - h, j + h)$ exist (are not out of bounds) and are painted. Let's iterate through $h$ from $1$ to $i$, and stop when the described condition is no longer satisfied. The largest $h$ for which the condition is still satisfied gives us $d_{i, j}$ - the maximum possible size of a tick with its center in $(i, j)$. Now if $d_{i, j} \ge k$, then such a tick is valid, and all the cells included in it should be marked. Otherwise, it could not have been drawn, and none of its cells should be marked. After a complete check of all possible ticks in a given field, either there will be no unmarked painted cells and then the answer is YES, or at least one painted cell is not covered by any valid tick and then the answer is NO. The time complexity is $\mathcal{O}\left(t \cdot n^2 m\right)$.
|
[
"greedy",
"implementation"
] | 1,500
|
#include <bits/stdc++.h>
using namespace std;
int main() {
int t;
cin >> t;
while (t--) {
int n, m, k;
cin >> n >> m >> k;
vector<vector<int>> status(n, vector<int>(m, 0));
for (int i = 0; i < n; i++) {
string s;
cin >> s;
for (int j = 0; j < m; j++)
if (s[j] == '*')
status[i][j] = 1;
}
for (int i = n - 1; i > -1; i--) {
for (int j = 0; j < m; j++) {
if (status[i][j] == 0)
continue;
int len = 0;
while (j > len && j + len + 1 < m && i > len) {
if (status[i - len - 1][j - len - 1] == 0 || status[i - len - 1][j + len + 1] == 0)
break;
len++;
}
if (len >= k) {
for (int d = 0; d <= len; d++) {
status[i - d][j - d] = 2;
status[i - d][j + d] = 2;
}
}
}
}
bool ok = true;
for (int i = 0; i < n; i++)
for (int j = 0; j < m; j++)
if (status[i][j] == 1)
ok = false;
cout << (ok ? "YES" : "NO") << '\n';
}
}
|
1579
|
D
|
Productive Meeting
|
An important meeting is to be held and there are exactly $n$ people invited. At any moment, any two people can step back and talk in private. The same two people can talk several (as many as they want) times per meeting.
Each person has limited sociability. The sociability of the $i$-th person is a non-negative integer $a_i$. This means that after exactly $a_i$ talks this person leaves the meeting (and does not talk to anyone else anymore). If $a_i = 0$, the $i$-th person leaves the meeting immediately after it starts.
A meeting is considered most productive if the maximum possible number of talks took place during it.
You are given an array of sociability $a$, determine which people should talk to each other so that the total number of talks is as large as possible.
|
For the first conversation let's choose two people $i$ and $j$ with maximal values of sociability. Note that after this conversation takes place, we move on to a similar problem in which $a_i$ and $a_j$ are decreased by $1$. After decreasing $a_i$ and $a_j$ by $1$, we repeat the choice of the two people with the maximum values of sociability, and we keep iterating while at least two people with positive sociability remain. Let us prove that this solution leads to the optimal answer. Denote the sum $\sum\limits_{i = 1}^n a_i$ by $S$ and consider two fundamentally different cases.
Case 1: the maximal element of $a$ is greater than or equal to the sum of all remaining elements, that is, there exists $i$ such that $a_i \ge \sum\limits_{j \neq i} a_j = S - a_i$. Every conversation either involves the $i$-th person, consuming one unit of sociability of some other person, or takes place between two other people, consuming two such units. Hence the total number of conversations cannot exceed $\sum\limits_{j \neq i} a_j = S - a_i$. This estimate is accurate, since an example exists in which the $i$-th person talks to every other person as many times as possible (that is, $a_j$ times with the $j$-th person for all $j$). Moreover, the algorithm described above will choose the $i$-th person as one of the participants of every conversation: each conversation decreases both $a_i$ and $\sum\limits_{j \neq i} a_j$ by exactly $1$, so the inequality keeps holding, and it follows that $\forall k \neq i:\, a_i \ge \sum\limits_{j \neq i} a_j \ge a_k\text{.}$
Case 2: otherwise, we can prove that the maximum number of conversations is always $\left\lfloor\frac{S}{2}\right\rfloor$. It is impossible to get more than this number, since each conversation requires exactly two units of sociability (one from each of two people), while a larger answer would mean that $S = \sum\limits_{i = 1}^n a_i \ge 2 \cdot \left(\left\lfloor\frac{S}{2}\right\rfloor + 1\right) > S\text{,}$ which is obviously wrong. Let us prove that this answer is achieved by the described algorithm. Look at the last conversation held. If there are at least two more people left in the meeting after it, we can hold another conversation, which means there is a more optimal answer. If there are zero people left in the meeting, then the estimate of $\frac{S}{2}$ conversations has been achieved, and if there is one person with remaining sociability $= 1$, then the estimate of $\frac{S - 1}{2}$ conversations has been achieved. If there is exactly one remaining person $i$ with a sociability residual $> 1$, then we can guarantee that this person has participated in all previous conversations. Indeed, the last conversation was held between two people with the maximum remaining sociability, but the $i$-th person has at least $2$ sociability remaining, so it couldn't have been two other people with residuals of $1$ who left right after that. Thus, analyzing all conversations in reverse order, we can prove that at any time $a_i > \sum\limits_{j \neq i} a_j$, which means that it is in fact the case considered above.
We have proven that the described greedy algorithm works. This algorithm can be implemented by using any balanced search tree, such as std::set. By storing pairs of elements $(a_i, i)$ in it, we can pick the next two people to talk and update the sociability values in $\mathcal{O}(\log n)$ per conversation. The time complexity is $\mathcal{O}(S \log n)$.
|
[
"constructive algorithms",
"graphs",
"greedy"
] | 1,400
|
#include <bits/stdc++.h>
using namespace std;
typedef pair<int, int> pii;
int main() {
int t;
cin >> t;
while (t--) {
int n;
cin >> n;
auto cmp = [](pii const &x, pii const &y) {
return x > y;
};
set<pii, decltype(cmp)> a(cmp);
vector<pii> answer;
for (int i = 0; i < n; i++) {
int ai;
cin >> ai;
if (ai > 0)
a.emplace(ai, i + 1);
}
while (a.size() > 1) {
auto p1 = *a.begin();
a.erase(a.begin());
auto p2 = *a.begin();
a.erase(a.begin());
answer.emplace_back(p1.second, p2.second);
if (p1.first > 1) a.emplace(p1.first - 1, p1.second);
if (p2.first > 1) a.emplace(p2.first - 1, p2.second);
}
cout << answer.size() << '\n';
for (auto &p : answer) {
cout << p.first << ' ' << p.second << '\n';
}
}
}
|
1579
|
E1
|
Permutation Minimization by Deque
|
In fact, the problems E1 and E2 do not have much in common. You should probably think of them as two separate problems.
A permutation $p$ of size $n$ is given. A permutation of size $n$ is an array of size $n$ in which each integer from $1$ to $n$ occurs exactly once. For example, $[1, 4, 3, 2]$ and $[4, 2, 1, 3]$ are correct permutations while $[1, 2, 4]$ and $[1, 2, 2]$ are not.
Let us consider an empty deque (double-ended queue). A deque is a data structure that supports adding elements to both the beginning and the end. So, if there are elements $[1, 5, 2]$ currently in the deque, adding an element $4$ to the beginning will produce the sequence $[\textcolor{red}{4}, 1, 5, 2]$, and adding same element to the end will produce $[1, 5, 2, \textcolor{red}{4}]$.
The elements of the permutation are sequentially added to the initially empty deque, starting with $p_1$ and finishing with $p_n$. Before adding each element to the deque, you may choose whether to add it to the beginning or the end.
For example, if we consider a permutation $p = [3, 1, 2, 4]$, one of the possible sequences of actions looks like this:
\begin{tabular}{lll}
$\quad$ 1. & add $3$ to the end of the deque: & deque has a sequence $[\textcolor{red}{3}]$ in it; \\
$\quad$ 2. & add $1$ to the beginning of the deque: & deque has a sequence $[\textcolor{red}{1}, 3]$ in it; \\
$\quad$ 3. & add $2$ to the end of the deque: & deque has a sequence $[1, 3, \textcolor{red}{2}]$ in it; \\
$\quad$ 4. & add $4$ to the end of the deque: & deque has a sequence $[1, 3, 2, \textcolor{red}{4}]$ in it; \\
\end{tabular}
Find the lexicographically smallest possible sequence of elements in the deque after the entire permutation has been processed.
A sequence $[x_1, x_2, \ldots, x_n]$ is lexicographically smaller than the sequence $[y_1, y_2, \ldots, y_n]$ if there exists such $i \leq n$ that $x_1 = y_1$, $x_2 = y_2$, $\ldots$, $x_{i - 1} = y_{i - 1}$ and $x_i < y_i$. In other words, if the sequences $x$ and $y$ have some (possibly empty) matching prefix, and the next element of the sequence $x$ is strictly smaller than the corresponding element of the sequence $y$. For example, the sequence $[1, 3, 2, 4]$ is smaller than the sequence $[1, 3, 4, 2]$ because after the two matching elements $[1, 3]$ in the start the first sequence has an element $2$ which is smaller than the corresponding element $4$ in the second sequence.
|
We'll process the permutation elements one by one. For the first element, it doesn't matter which side of the deque we add it to: the result will be the same, a sequence of one element (equal to the first permutation element) in the deque. Now let's consider adding the $i$-th element of the permutation to the deque, first for $i = 2$, then $i = 3$, and so on up to $i = n$. Let us describe the general rule for choosing the side of the deque at each step. Note that if the elements $[d_1, \ldots, d_{i - 1}]$ are now in the deque, then all final permutations that can be obtained in the deque from the current state can be broken down into pairs of the form $[\ldots, \color{red}{a_i}, \color{blue}{d_1, \ldots, d_{i - 1}}, \ldots]$ and $[\ldots, \color{blue}{d_1, \ldots, d_{i - 1}}, \color{red}{a_i}, \ldots]\text{,}$ where the first corresponds to adding $a_i$ to the beginning and the second to adding it to the end, with the same choices made afterwards. Note that when $a_i < d_1$ the first permutation will always be lexicographically smaller than the second one, and vice versa. Therefore, regardless of the following choices, if $a_i < d_1$ then the second permutation will never be minimal, and if $a_i > d_1$ then the first one will never be minimal. This means that we can choose the side of the deque for the $i$-th element based only on its relation to $d_1$: if $a_i < d_1$, then $a_i$ is added to the beginning of the deque, otherwise to the end. The time complexity is $\mathcal{O}(n)$. Alternative solutions, which also fit in the time limit, involved finding a lexicographically minimal increasing sequence in the reversed original permutation and could be implemented either with $\mathcal{O}(n \log n)$ time complexity or with $\mathcal{O}(n)$ time complexity if the permutation's definition was taken into consideration.
|
[
"constructive algorithms",
"greedy",
"math"
] | 1,000
|
#include <bits/stdc++.h>
using namespace std;
int main() {
int t;
cin >> t;
while (t--) {
int n, a;
cin >> n;
deque<int> d;
for (int i = 0; i < n; i++) {
cin >> a;
if (d.empty() || a < d[0])
d.push_front(a);
else
d.push_back(a);
}
for (int x : d)
cout << x << ' ';
cout << '\n';
}
}
|
1579
|
E2
|
Array Optimization by Deque
|
In fact, the problems E1 and E2 do not have much in common. You should probably think of them as two separate problems.
You are given an integer array $a[1 \ldots n] = [a_1, a_2, \ldots, a_n]$.
Let us consider an empty deque (double-ended queue). A deque is a data structure that supports adding elements to both the beginning and the end. So, if there are elements $[3, 4, 4]$ currently in the deque, adding an element $1$ to the beginning will produce the sequence $[\textcolor{red}{1}, 3, 4, 4]$, and adding the same element to the end will produce $[3, 4, 4, \textcolor{red}{1}]$.
The elements of the array are sequentially added to the initially empty deque, starting with $a_1$ and finishing with $a_n$. Before adding each element to the deque, you may choose whether to add it to the beginning or to the end.
For example, if we consider an array $a = [3, 7, 5, 5]$, one of the possible sequences of actions looks like this:
\begin{tabular}{lll}
$\quad$ 1. & add $3$ to the beginning of the deque: & deque has a sequence $[\textcolor{red}{3}]$ in it; \\
$\quad$ 2. & add $7$ to the end of the deque: & deque has a sequence $[3, \textcolor{red}{7}]$ in it; \\
$\quad$ 3. & add $5$ to the end of the deque: & deque has a sequence $[3, 7, \textcolor{red}{5}]$ in it; \\
$\quad$ 4. & add $5$ to the beginning of the deque: & deque has a sequence $[\textcolor{red}{5}, 3, 7, 5]$ in it; \\
\end{tabular}
Find the minimal possible number of inversions in the deque after the whole array is processed.
An inversion in sequence $d$ is a pair of indices $(i, j)$ such that $i < j$ and $d_i > d_j$. For example, the array $d = [5, 3, 7, 5]$ has exactly two inversions — $(1, 2)$ and $(3, 4)$, since $d_1 = 5 > 3 = d_2$ and $d_3 = 7 > 5 = d_4$.
|
Let's process the array elements one by one. For the first element, it doesn't matter which side of the deque we add it to: the result will be the same, a sequence of one element (equal to the first array element) in the deque. Now let's consider adding the $i$-th element of the array to the deque, first for $i = 2$, then $i = 3$, and so on up to $i = n$. Let us describe the general rule for choosing the side of the deque at each step. Note that if the elements $[d_1, \ldots, d_{i - 1}]$ are now in the deque, then all final sequences that can be obtained in the deque from the current state can be broken down into pairs of the form $[\ldots, \color{red}{a_i}, \color{blue}{d_1, \ldots, d_{i - 1}}, \ldots]$ and $[\ldots, \color{blue}{d_1, \ldots, d_{i - 1}}, \color{red}{a_i}, \ldots]\text{,}$ where the first corresponds to adding $a_i$ to the beginning and the second to adding it to the end. Note that since the prefix and the suffix hidden behind the dots completely coincide in the two sequences under consideration, and the set of numbers in the central part coincides as well, the following numbers of inversions also coincide: inside the prefix and inside the suffix; between elements of the prefix and elements of the suffix; between elements of the prefix or suffix and elements of the central part. The difference between the numbers of inversions in the first and the second sequence therefore consists only of the difference between the numbers of inversions in their central parts. So, at the stage of adding $a_i$ to the deque, we can determine which side of addition is guaranteed not to lead to the optimal answer and choose the opposite one. If $a_i$ is added to the beginning of the deque, the number of inversions in the central part increases by the number of elements in the deque strictly smaller than $a_i$, and if it is added to the end, by the number of elements strictly larger than $a_i$. Let us make the choice so that the number of inversions increases by the minimum of these two values.
To quickly find the number of elements smaller or larger than $a_i$, we will store all already processed array elements in a structure that supports order-statistics queries, such as __gnu_pbds::tree. Instead of this structure specifically, you can write any balanced binary search tree (such as a Cartesian tree), or sort all numbers of the input array and compress them to the values $[1, n]$, preserving the "$\le$" relation, and then build a segment tree on them, storing in the node $[l, r)$ the number of already processed array elements with values between $l$ and $r$. Update and order queries in such structures take $\mathcal{O}(\log n)$ time, and the construction takes at worst $\mathcal{O}(n \log n)$, so the time complexity of the algorithm is $\mathcal{O}(n \log n)$.
|
[
"data structures",
"greedy"
] | 1,700
|
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
using namespace std;
using namespace __gnu_pbds;
typedef pair<int, int> node;
typedef tree<node, null_type, less<node>,
rb_tree_tag, tree_order_statistics_node_update> ordered_set;
int main() {
int t;
cin >> t;
while (t--) {
int n;
cin >> n;
ordered_set s;
long long cnt = 0;
for (int i = 0; i < n; i++) {
int a;
cin >> a;
int less = s.order_of_key(node(a, 0)),
greater = i - s.order_of_key(node(a, n));
cnt += min(less, greater);
s.insert(node(a, i));
}
cout << cnt << '\n';
}
}
|
1579
|
F
|
Array Stabilization (AND version)
|
You are given an array $a[0 \ldots n - 1] = [a_0, a_1, \ldots, a_{n - 1}]$ of zeroes and ones only. Note that in this problem, unlike the others, the array indexes are numbered from zero, not from one.
In one step, the array $a$ is replaced by another array of length $n$ according to the following rules:
- First, a new array $a^{\rightarrow d}$ is defined as a cyclic shift of the array $a$ to the right by $d$ cells. The elements of this array can be defined as $a^{\rightarrow d}_i = a_{(i + n - d) \bmod n}$, where $(i + n - d) \bmod n$ is the remainder of integer division of $i + n - d$ by $n$. It means that the whole array $a^{\rightarrow d}$ can be represented as a sequence $$a^{\rightarrow d} = [a_{n - d}, a_{n - d + 1}, \ldots, a_{n - 1}, a_0, a_1, \ldots, a_{n - d - 1}]$$
- Then each element of the array $a_i$ is replaced by $a_i \,\&\, a^{\rightarrow d}_i$, where $\&$ is a logical "AND" operator.
For example, if $a = [0, 0, 1, 1]$ and $d = 1$, then $a^{\rightarrow d} = [1, 0, 0, 1]$ and the value of $a$ after the first step will be $[0 \,\&\, 1, 0 \,\&\, 0, 1 \,\&\, 0, 1 \,\&\, 1]$, that is $[0, 0, 0, 1]$.
The process ends when the array stops changing. For a given array $a$, determine whether it will consist of only zeros at the end of the process. If yes, also find the number of steps the process will take before it finishes.
|
We'll consider an arbitrary index $i$ of the array and see what happens to $a_i$ during several steps of the described algorithm. Let's denote by $a^k$ the value of the array after $k$ steps of the algorithm and prove by induction that $a^k_i$ is the logical "AND" of $k + 1$ elements of the array $a$, starting from $i$ with step $d$ to the left, that is $a^k_i = a_i \,\&\, a_{(i - d) \bmod n} \,\&\, \ldots \,\&\, a_{(i - kd) \bmod n}$. Base of induction: for $k = 0$ the element of the original array $a^0_i$ is $a_i$. For clarity we can also show that the statement is true for $k = 1$: during the first step $a_i$ is replaced by $a_i \,\&\, a_{(i - d) \bmod n}$ by the definition of cyclic shift by $d$ to the right. For simplicity, we will omit the "$\bmod n$" operation in the following formulas but will keep it in mind implicitly; that is, $a_{i - kd}$ will mean $a_{(i - kd) \bmod n}$. Induction step: let the above statement be true for $k - 1$; let us prove it for $k$. By the definition of cyclic shift $a^k_i = a^{k-1}_i \,\&\, a^{k-1}_{i - d}$, and by the induction assumption these two numbers are equal to $a^{k-1}_i = a_i \,\&\, \ldots \,\&\, a_{i - (k-1)d}$ and $a^{k-1}_{i - d} = a_{i - d} \,\&\, \ldots \,\&\, a_{i - kd}$, so $a^k_i = a_i \,\&\, a_{i - d} \,\&\, \ldots \,\&\, a_{i - kd}\text{,}$ which completes the induction. It follows from this formula that $a_i$ turns to zero exactly after the $k$-th step if and only if $a_i = 1$, $a_{i - d} = 1$, ..., $a_{i - (k-1)d} = 1$, and $a_{i - kd} = 0$: up to the $k$-th step all these elements are equal to $1$, so their logical "AND" is also $1$, and as soon as a $0$ appears in the sequence in question, the "AND" becomes zero. Thus, we reduced the problem to finding the maximal block of elements equal to $1$ of the pattern $a_i = a_{i - d} = a_{i - 2d} = \ldots = a_{i - (k-1)d} = 1$. Note that under shifts by $d$ the array splits into $\mathtt{gcd}(n, d)$ cyclic sequences of this kind, each of length $\frac{n}{\mathtt{gcd}(n, d)}$.
Let's process these cyclic sequences independently of each other, iterating over each of them in linear time to find the maximal block of consecutive elements equal to $1$ - this gives the answer to the problem. Remember to check whether at least one of these sequences consists entirely of elements equal to $1$: its elements will then never turn to zero, and the answer in such a case is -1. The time complexity is $\mathcal{O}(n)$.
|
[
"brute force",
"graphs",
"math",
"number theory",
"shortest paths"
] | 1,700
|
#include <bits/stdc++.h>
using namespace std;
int main() {
int t;
cin >> t;
while (t--) {
int n, d;
cin >> n >> d;
vector<int> a(n);
for (int i = 0; i < n; i++)
cin >> a[i];
vector<bool> used(n, false);
bool fail = false;
int res = 0;
for (int i = 0; i < n; i++) {
if (used[i])
continue;
int cur = i, pref = 0, last = 0, iter = 0, ans = 0;
do {
used[cur] = true;
if (a[cur] == 0) {
ans = max(ans, last);
last = 0;
} else {
last++;
if (iter == pref) {
pref++;
}
}
cur = (cur + d) % n;
iter++;
} while (cur != i);
if (iter != pref)
ans = max(ans, pref + last);
else {
fail = true;
break;
}
res = max(res, ans);
}
if (fail)
cout << "-1\n";
else
cout << res << '\n';
}
}
|
1579
|
G
|
Minimal Coverage
|
You are given $n$ lengths of segments that need to be placed on an infinite axis with coordinates.
The first segment is placed on the axis so that one of its endpoints lies at the point with coordinate $0$. Let's call this endpoint the "start" of the first segment and let's call its "end" as that endpoint that is not the start.
The "start" of each following segment must coincide with the "end" of the previous one. Thus, if the length of the next segment is $d$ and the "end" of the previous one has the coordinate $x$, the segment can be placed either on the coordinates $[x-d, x]$, and then the coordinate of its "end" is $x - d$, or on the coordinates $[x, x+d]$, in which case its "end" coordinate is $x + d$.
The total coverage of the axis by these segments is defined as their overall union which is basically the set of points covered by at least one of the segments. It's easy to show that the coverage will also be a segment on the axis. Determine the minimal possible length of the coverage that can be obtained by placing all the segments on the axis without changing their order.
|
One possible solution involves the method of dynamic programming. As a state of DP we will use the number of already placed segments $i$ and the distance $l$ from the "end" of the last segment to the current left boundary of the coverage, and in the DP we will store the minimal possible distance from the "end" of the last segment to the current right boundary of the coverage. We can prove that the answer never exceeds $2 \cdot l_{\mathtt{max}}$, where $l_\mathtt{max} = \max(a)$ is the maximal length of the segments. To do this, let us mark a region of length $2 \cdot l_{\mathtt{max}}$, specifically the segment $[-l_{\mathtt{max}}, l_{\mathtt{max}}]$. If the "end" of the last segment has a coordinate $x > 0$, we put the next segment to the left, otherwise we put it to the right. With this algorithm, none of the "end" endpoints of the segments will go beyond the marked boundaries, because to do so a segment would have to be placed from a coordinate of one sign to beyond the boundary of the opposite sign, and thus would have to have a length greater than $l_{\mathtt{max}}$, which contradicts how we defined $l_{\mathtt{max}}$. Using this fact, we will consider the DP $\mathtt{dp}_{i, l}$ for $0 \le i \le n$ and $0 \le l \le 2 \cdot l_{\mathtt{max}}$ as the minimum distance between the "end" of the $i$-th segment and the right boundary of the coverage of the first $i$ segments when the distance to the left boundary of the coverage equals $l$. The "end of the $0$-th segment" here is the "beginning" of the first one, that is, the point $0$. The base of the DP is $\mathtt{dp}_{0, 0} = 0$, since when no segments are placed, the coverage boundaries and the current point $0$ all coincide.
Next, we consider the forward dynamic programming relaxation: for every $(i, l)$ there are two cases to consider, the next segment being placed to the left or to the right (the value $r$ below refers to the distance to the right boundary of the coverage and is an alias for $\mathtt{dp}_{i, l}$). If a segment of length $a_{i + 1}$ is placed to the left side, then the new distance to the left boundary will be equal to $\max(0, l - a_{i + 1})$, and the distance to the right boundary will be $r + a_{i + 1}$, which gives us the relaxation formula $\mathtt{dp}_{i + 1, \max(0, l - a_{i + 1})} \leftarrow \mathtt{dp}_{i, l} + a_{i + 1}$. If a segment of length $a_{i + 1}$ is placed to the right side, then the new distance to the right boundary will be equal to $\max(0, r - a_{i + 1})$, and the distance to the left boundary will be $l + a_{i + 1}$, which gives us the relaxation formula $\mathtt{dp}_{i + 1, l + a_{i + 1}} \leftarrow \max(0, \mathtt{dp}_{i, l} - a_{i + 1})$. The values in the array $\mathtt{dp}$ can be calculated in ascending order of $i$. Then the answer for the problem can be found as the minimum sum of $l$ and $r$ in the last row of $\mathtt{dp}$, that is $\min\limits_{l}\, (l + \mathtt{dp}_{n, l})$. The time complexity is $\mathcal{O}(n \cdot l_{\mathtt{max}})$.
|
[
"dp"
] | 2,200
|
#include <bits/stdc++.h>
using namespace std;
const int INF = 1000000000;
int main() {
int t;
cin >> t;
while (t--) {
int n;
cin >> n;
vector<int> a(n);
int maxl = 0;
for (int i = 0; i < n; i++) {
cin >> a[i];
maxl = max(maxl, a[i]);
}
vector<vector<int>> dp(n + 1, vector<int>(2 * maxl + 1, INF));
dp[0][0] = 0;
for (int i = 0; i < n; i++) {
for (int left = 0; left <= 2 * maxl; left++) {
if (dp[i][left] == INF)
continue;
dp[i + 1][max(left - a[i], 0)] = min(dp[i + 1][max(left - a[i], 0)], dp[i][left] + a[i]);
if (left + a[i] < dp[i + 1].size()) {
dp[i + 1][left + a[i]] = min(dp[i + 1][left + a[i]], max(dp[i][left] - a[i], 0));
}
}
}
int ans = 2 * maxl + 1;
for (int left = 0; left <= 2 * maxl; left++)
ans = min(ans, left + dp[n][left]);
cout << ans << '\n';
}
}
|
1580
|
A
|
Portal
|
CQXYM found a rectangle $A$ of size $n \times m$. There are $n$ rows and $m$ columns of blocks. Each block of the rectangle is an obsidian block or empty. CQXYM can change an obsidian block to an empty block or an empty block to an obsidian block in one operation.
A rectangle $M$ size of $a \times b$ is called a portal if and only if it satisfies the following conditions:
- $a \geq 5,b \geq 4$.
- For all $1 < x < a$, blocks $M_{x,1}$ and $M_{x,b}$ are obsidian blocks.
- For all $1 < x < b$, blocks $M_{1,x}$ and $M_{a,x}$ are obsidian blocks.
- For all $1<x<a,1<y<b$, block $M_{x,y}$ is an empty block.
- $M_{1, 1}, M_{1, b}, M_{a, 1}, M_{a, b}$ \textbf{can be any type}.
Note that there must be $a$ rows and $b$ columns, not $b$ rows and $a$ columns. \textbf{Note that corners can be any type.}
CQXYM wants to know the minimum number of operations he needs to make at least one sub-rectangle a portal.
|
We can enumerate two corners of the submatrix and calculate the answer using precomputed prefix sums; this has time complexity $O(\sum n^2m^2)$, which is too slow. Instead, once we have enumerated the upper edge and the lower edge of the submatrix, we can calculate the answer by prefix sums. Assume the left edge of the submatrix is $l$ and the right edge is $r$. The parts of the answer contributed by the upper and lower edges are two segments, which we evaluate by prefix sums; the middle empty part is a submatrix, and we can use prefix sums for it too. Since we have fixed the upper and lower edges, the left-edge part depends only on $l$ and the right-edge part only on $r$. Then we enumerate $l$; the best $r$ can be found by precomputing suffix minimums. The time complexity is $O(\sum{n^2m})$ and the space complexity is $O(nm)$.
|
[
"brute force",
"data structures",
"dp",
"greedy",
"implementation"
] | 1,700
|
#include<stdio.h>
char s[402];
int sum[401][401],f[401];
inline int GetSum(int lx,int ly,int rx,int ry){
return sum[rx][ry]-sum[rx][ly-1]-sum[lx-1][ry]+sum[lx-1][ly-1];
}
inline void Solve(){
int n,m,i,j,k,ans=999999,cur;
scanf("%d%d",&n,&m);
for(i=1;i<=n;i++){
scanf("%s",s+1);
for(j=1;j<=m;j++){
sum[i][j]=sum[i-1][j]+sum[i][j-1]-sum[i-1][j-1]+(s[j]=='1');
}
}
for(i=1;i!=n;i++){
for(j=i+4;j<=n;j++){
for(k=4;k<=m;k++){
f[k]=GetSum(i+1,1,j-1,k-1)-GetSum(i,1,i,k-1)-GetSum(j,1,j,k-1)-GetSum(i+1,k,j-1,k)+(k<<1)+j-i-3;
}
for(k=m-1;k!=3;k--){
if(f[k+1]<f[k]){
f[k]=f[k+1];
}
}
for(k=1;k!=m-2;k++){
cur=f[k+3]-GetSum(i+1,1,j-1,k)+GetSum(i,1,i,k)+GetSum(j,1,j,k)-(k<<1)-GetSum(i+1,k,j-1,k)+j-i-1;
if(cur<ans){
ans=cur;
}
}
}
}
printf("%d\n",ans);
}
int main(){
int t;
scanf("%d",&t);
while(t!=0){
Solve();
t--;
}
return 0;
}
|
1580
|
B
|
Mathematics Curriculum
|
Let $c_1, c_2, \ldots, c_n$ be a permutation of integers $1, 2, \ldots, n$. Consider all subsegments of this permutation containing an integer $x$. Given an integer $m$, we call the integer $x$ good if there are exactly $m$ different values of maximum on these subsegments.
Cirno is studying mathematics, and the teacher asks her to count the number of permutations of length $n$ with exactly $k$ good numbers.
Unfortunately, Cirno isn't good at mathematics, and she can't answer this question. Therefore, she asks you for help.
Since the answer may be very big, you only need to tell her the number of permutations modulo $p$.
A permutation is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array) and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
A sequence $a$ is a subsegment of a sequence $b$ if $a$ can be obtained from $b$ by deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end.
|
Define the DP state $f_{l,s,d}$ as the number of permutations of length $l$ with exactly $d$ numbers such that all the subsegments containing them have exactly $s$ different maxima in total. We enumerate the position of the biggest number in the permutation; call this position $a$. The numbers before $a$ and after $a$ are independent, so we transform the state $(l,s,d)$ into $(a-1,x,d+1)$ and $(l-a,y,d+1)$. We also have to distribute the numbers between the two parts, so the DP transition is: $f_{l,s,d}=\sum_{a=1}^{l} \binom{l-1}{a-1} \sum_{x=0}^{s}f_{a-1,x,d+1}f_{l-a,s-x-[d=k],d+1}$. In addition, the answer to the problem is $f_{n,m,k}$. Actually, the DP process works just like a Cartesian tree. The time complexity is $O(n^2m^2k)$ and the space complexity is $O(nmk)$; however, it's enough to pass the tests.
|
[
"brute force",
"combinatorics",
"dp",
"trees"
] | 2,600
|
#include <bits/stdc++.h>
using namespace std;
const int MAX_N = 100 + 5;
int n, m, k, P;
int fac[MAX_N], c[MAX_N][MAX_N], f[MAX_N][MAX_N][MAX_N];
int add(int a, int b) {
return a + b < P ? a + b : a + b - P;
}
void dp(int sz, int cnt, int dep) {
if (f[dep][sz][cnt] != -1) return ;
register int &F = f[dep][sz][cnt] = 0;
if (!sz) {
F = 1;
return ;
}
if ((m - dep < 7 && (1 << (m - dep)) < cnt) || (cnt && sz < m - dep)) return ;
if (dep == m) {
F = (cnt == 1 ? fac[sz] : 0);
return ;
}
for (int i = 0; i < sz; i ++) {
register int fi = 0;
register int *fl = f[dep + 1][i], *fr = f[dep + 1][sz - i - 1];
for (int j = max(0, cnt + i + 1 - sz); j <= min(cnt, i); j ++)
if (fl[j] && fr[cnt - j]) {
dp(i, j, dep + 1);
dp(sz - i - 1, cnt - j, dep + 1);
fi = (fi + 1ll * fl[j] * fr[cnt - j]) % P;
}
F = (F + 1ll * fi * c[sz - 1][i]) % P;
}
}
int main() {
cin >> n >> m >> k >> P; m --;
for (int i = c[0][0] = fac[0] = 1; i <= n; i ++) {
c[i][0] = c[i][i] = 1;
fac[i] = 1ll * fac[i - 1] * i % P;
for (int j = 1; j < i; j ++) c[i][j] = add(c[i - 1][j - 1], c[i - 1][j]);
}
memset(f, -1, sizeof(f));
dp(n, k, 0);
cout << f[0][n][k] << endl;
return 0;
}
|
1580
|
C
|
Train Maintenance
|
Kawasiro Nitori is excellent in engineering. Thus she has been appointed to help maintain trains.
There are $n$ models of trains, and Nitori's department will only have at most one train of each model at any moment. In the beginning, there are no trains, at each of the following $m$ days, one train will be added, or one train will be removed. When a train of model $i$ is added at day $t$, it works for $x_i$ days (day $t$ inclusive), then it is in maintenance for $y_i$ days, then in work for $x_i$ days again, and so on until it is removed.
In order to make management easier, Nitori wants you to help her calculate how many trains are in maintenance in each day.
\textbf{On a day a train is removed, it is not counted as in maintenance.}
|
Let's distinguish the trains according to $x_i + y_i$. If $x_i + y_i > \sqrt{m}$, the total number of maintenance and running periods doesn't exceed $\frac{m}{\sqrt{m}}=\sqrt{m}$. So we can find every date on which the train of model $i$ begins or ends maintenance in $O(\sqrt{m})$, and maintain a difference array: add $1$ at each beginning date and subtract $1$ at each end date; the prefix sum of this array is the number of such trains in maintenance. If $x_i + y_i \le \sqrt{m}$, suppose the train of model $i$ was added on day $s_i$. On a day $t$, the train of model $i$ is in maintenance if and only if $(t-s_i) \bmod (x_i+y_i) \geq x_i$. Thus for each $a \le \sqrt{m}$ we can use an array of length $a$ to record, modulo $a$, the maintenance dates of all trains satisfying $x_i + y_i = a$. Since one period of maintenance lasts at most $\sqrt{m}$ days, we can maintain $(t - s_i) \bmod (x_i + y_i)$ in $O(\sqrt{m})$ per operation. Hence the intended time complexity is $O(m\sqrt{m})$ and the intended memory complexity is $O(n + m)$. After reading the statement, you may have thought of the following easy solution: maintain an array counting the trains in maintenance on each day; for add and remove operations, traverse the array and add the contribution. This algorithm works in $O(nm)$ time (using a difference array, we can modify a segment in $O(1)$). However, we can optimize this solution. For trains with $x+y>\sqrt m$, we can modify every period by brute force, because the number of periods is less than $\sqrt m$. For trains with $x+y \leqslant \sqrt m$, the number of periods can be large. In this case, we split the days into blocks of size $O(\sqrt m)$ each, and merge the modifications that are completely contained in the same block, have the same period length, and start at the same position within the block; an array suffices to record this.
The number of segments not completely contained in a block is $O(\sqrt m)$. The total complexity is $O(m\sqrt m)$.
|
[
"brute force",
"data structures",
"implementation"
] | 2,200
|
#include <bits/stdc++.h>
char BUF_R[1 << 22], *csy1, *csy2;
#define GC (csy1 == csy2 && (csy2 = (csy1 = BUF_R) + fread(BUF_R, 1, 1 << 22, stdin), csy1 == csy2) ? EOF : *csy1 ++)
template <typename Ty>
inline void RI(Ty &t) {
char c = GC;
for (t = 0; c < 48 || c > 57; c = GC);
for (; c > 47 && c < 58; c = GC) t = t * 10 + (c ^ 48);
}
const int MAX_N = 200000 + 5;
const int MAX_M = 256;
int n, m, thre, x[MAX_N], y[MAX_N], cnt[MAX_M][MAX_M], s[MAX_N], a[MAX_N], ans;
void block_modify(int Tm, int k, int v) {
int bl = x[k] + y[k], *c = cnt[bl];
int l = (Tm + x[k]) % bl, r = (Tm - 1) % bl;
if (l <= r) for (int i = l; i <= r; i ++) c[i] += v;
else {
for (int i = 0; i <= r; i ++) c[i] += v;
for (int i = l; i < bl; i ++) c[i] += v;
}
}
int block_query(int Tm) {
register int res = 0;
for (int i = 2; i <= thre; i ++) res += cnt[i][Tm % i];
return res;
}
int main() {
RI(n); RI(m);
thre = std::min((int)(0.5 * sqrt(m)) + 1, 255);
for (int i = 1; i <= n; i ++) {RI(x[i]); RI(y[i]);}
for (int i = 1; i <= m; i ++) {
int opt, k;
RI(opt); RI(k);
if (opt == 1) {
if (thre < x[k] + y[k]) {
for (int j = i + x[k]; j <= m; j += x[k] + y[k]) {
a[j] ++;
if (j + y[k] <= m) a[j + y[k]] --;
}
}else block_modify(i, k, 1);
s[k] = i;
}else {
if (thre < x[k] + y[k]) {
for (int j = s[k] + x[k]; j <= m; j += x[k] + y[k]) {
a[j] --;
if (j + y[k] <= m) a[j + y[k]] ++;
if (j < i && j + y[k] >= i) ans --;
}
}else block_modify(s[k], k, -1);
}
ans += a[i];
printf("%d\n", ans + block_query(i));
}
return 0;
}
|
1580
|
D
|
Subsequence
|
Alice has an integer sequence $a$ of length $n$ and \textbf{all elements are different}. She will choose a subsequence of $a$ of length $m$, and defines the value of a subsequence $a_{b_1},a_{b_2},\ldots,a_{b_m}$ as $$\sum_{i = 1}^m (m \cdot a_{b_i}) - \sum_{i = 1}^m \sum_{j = 1}^m f(\min(b_i, b_j), \max(b_i, b_j)),$$ where $f(i, j)$ denotes $\min(a_i, a_{i + 1}, \ldots, a_j)$.
Alice wants you to help her to maximize the value of the subsequence she choose.
A sequence $s$ is a subsequence of a sequence $t$ if $s$ can be obtained from $t$ by deletion of several (possibly, zero or all) elements.
|
First we can change the way we calculate the value of a subsequence. One can check that the value of a subsequence $a_{b_1},a_{b_2},\ldots ,a_{b_m}$ also equals $\sum_{i = 1}^m \sum_{j = i + 1}^m \left(a_{b_i} + a_{b_j} - 2 \cdot f(b_i, b_j)\right)$, which is very similar to the distance between two nodes of a tree. Thus we can build the Cartesian tree of the sequence and set the weight of an edge between nodes $i$ and $j$ to $|a_i - a_j|$. What we have to compute then becomes: choose $m$ nodes and maximize the total distance between every pair of them. We can solve this with dynamic programming in $O(n^2)$ time. From the statement, we calculate sums of the form $a_i+a_j-\min_{k=i}^j a_k$. On the Cartesian tree, this is $a_i+a_j-a_{\mathrm{LCA}(i,j)}$. Setting the weight of the edge $x \rightarrow y$ to $a_y-a_x$, the quantity $a_i+a_j-a_{\mathrm{LCA}(i,j)}$ equals the distance between nodes $i$ and $j$ on the tree. Define the dp state $f_{i,j}$ as the maximum answer in the subtree rooted at node $i$ when we choose $j$ of its nodes. Enumerate how many chosen nodes lie in the left subtree of node $i$, how many in the right subtree, and whether node $i$ itself is chosen. The contribution of an edge is its weight times the number of chosen nodes in its subtree times the number of chosen nodes outside it. Since a pair of nodes contributes to the running time only when we compute the dp state of their LCA, the total time complexity is $O(n^2)$.
|
[
"brute force",
"divide and conquer",
"dp",
"greedy",
"trees"
] | 2,900
|
#include <cstdio>
#include <algorithm>
typedef long long ll;
const int MAX_N = 4000 + 5;
int N, M, a[MAX_N], ls[MAX_N], rs[MAX_N], lw[MAX_N], rw[MAX_N], sz[MAX_N];
ll f[MAX_N][MAX_N];
inline void umax(ll &a, ll b) {
a = a < b ? b : a;
}
void dfs(int u) {
sz[u] = 1;
if (ls[u]) {
dfs(ls[u]);
for (int i = std::min(M, sz[u]); i >= 0; i --)
for (int j = std::min(M, sz[ls[u]]); j >= 0; j --)
umax(f[u][i + j], f[u][i] + f[ls[u]][j] + 1ll * j * (M - j) * lw[u]);
sz[u] += sz[ls[u]];
}
if (rs[u]) {
dfs(rs[u]);
for (int i = std::min(M, sz[u]); i >= 0; i --)
for (int j = std::min(M, sz[rs[u]]); j >= 0; j --)
umax(f[u][i + j], f[u][i] + f[rs[u]][j] + 1ll * j * (M - j) * rw[u]);
sz[u] += sz[rs[u]];
}
}
int main() {
scanf("%d%d", &N, &M);
static int sta[MAX_N], tp;
for (int i = 1; i <= N; i ++) {
scanf("%d", a + i);
int k = tp;
for (; k && a[i] < a[sta[k]]; k --);
if (k) {
rs[sta[k]] = i;
rw[sta[k]] = a[i] - a[sta[k]];
}
if (k < tp) {
ls[i] = sta[k + 1];
lw[i] = a[sta[k + 1]] - a[i];
}
sta[++ k] = i;
tp = k;
}
dfs(sta[1]);
printf("%lld\n", f[sta[1]][M]);
return 0;
}
|
1580
|
E
|
Railway Construction
|
Because the railway system in Gensokyo is often congested, as an enthusiastic engineer, Kawasiro Nitori plans to construct more railway to ease the congestion.
There are $n$ stations numbered from $1$ to $n$ and $m$ two-way railways in Gensokyo. Every two-way railway connects two different stations and has a positive integer length $d$. No two two-way railways connect the same two stations. Among these $n$ stations, station $1$ is the main station. It is possible to travel from any station to any other using only two-way railways.
Because of the technological limitation, Nitori can only construct one-way railways, whose length can be an arbitrary positive integer. Constructing a one-way railway from station $u$ costs $w_u$ units of resources, no matter where the railway ends. To ease the congestion, Nitori plans that after the construction there are at least two shortest paths from station $1$ to every other station, and these two shortest paths do not pass through the same station except station $1$ and the terminal. Besides, Nitori does not want to change the distance of the shortest path from station $1$ to any other station.
Due to various reasons, sometimes the cost of building a new railway will increase uncontrollably. There will be a total of $q$ such incidents, and the $i$-th incident adds an additional $x_i$ to the cost of building a new railway from station $k_i$.
To save resources, before all incidents and after each incident, Nitori wants you to help her calculate the minimal cost of railway construction.
|
For convenience, we first define $dis[u]$ as the length of the shortest path between node $1$ and node $u$ and call it the "distance" of node $u$. We call node $u$ "deeper" than node $v$ if and only if $dis[u] > dis[v]$, and "lower" than node $v$ if and only if $dis[u] < dis[v]$. We will use $u$ -> $v$ to denote a directed edge starting at node $u$ and ending at node $v$, and $u$ --> $v$ to denote an arbitrary path starting at node $u$ and ending at node $v$. We say two paths "intersect" if and only if they pass through at least one common node. First let's establish several facts. Since all edge weights are positive and we must keep the distance of every node unchanged, for any node $u$ we can only add edges starting at a node whose distance is strictly less than $dis[u]$. Besides, we can always add the edge $1$ -> $u$, but we can never add the edge $u$ -> $1$. Since the distance of every node does not change and trains only travel along shortest paths, if an edge is not on any shortest path initially, then no matter which edges we add, it will never be on any shortest path. Thus we can first compute the shortest paths and then remove all edges which are not on any shortest path. The new graph is clearly a DAG (directed acyclic graph), and in this graph a topological order is given by the ascending order of the nodes' distances. In the rest of this tutorial we work with the new graph instead of the original one. After we add all edges, every node except node $1$ must have at least $2$ incoming edges. Let's prove a lemma: if every node except node $1$ has exactly $2$ incoming edges, the graph meets the requirement. We prove it by induction. For an arbitrary node $u$, suppose that all nodes whose distance is less than $u$'s meet the requirement; we only need to prove that node $u$ meets it as well.
Suppose the two incoming edges of $u$ start at nodes $s$ and $t$ respectively. First, if $s$ or $t$ is node $1$, we can simply choose $1$ --> $u$ and $1$ --> $t$ -> $u$ as the two paths, and obviously they don't intersect, so the requirement is met. Second, suppose neither $s$ nor $t$ is node $1$. Choose an arbitrary path $1$ --> $s$ and call it path 0. By the induction hypothesis we can choose two paths starting at node $1$ and ending at node $t$ that don't intersect; call them path 1 and path 2. If path 0 does not intersect path 1 (or path 2), then we can choose path 0 and path 1 (or path 2) to meet the requirement. So we only need to consider the situation where path 0 intersects both path 1 and path 2. In this case, we first find the lowest and the deepest nodes where path 0 intersects path 1 or path 2, and call them $a$ and $b$ respectively. If both are intersection points of path 0 and path 1, we can choose the path ($1$ --> $a$ --> (along path 1) $b$ --> $s$) and path 2. Otherwise, we can choose the path ($1$ --> $a$ --> $t$) and the path ($1$ --> $b$ --> $s$). Both cases meet the requirement, which proves the lemma. So we only need to make sure that every node except node $1$ has at least $2$ incoming edges. This yields the following solution when $w$ is fixed: for every node which has only $1$ incoming edge, record the start point of that edge in an array. Then we scan all nodes in ascending order of distance, maintain the minimum and the second minimum of $w$ over the scanned prefix in an array $val$, and add edges greedily. Note that we only need to maintain the indices attaining these minima instead of the real values. This takes $O(n)$ for a fixed $w$, so the total complexity is $O(nq)$. To accelerate the solution, let's maintain the array $val$ while $w$ changes, applying all the changes in reverse order. That is, we only consider the case that $w$ is decreasing.
According to the value of $val$, we can split the sequence into many subsegments, and we can use std::set to maintain those subsegments. A change of $w_k$ affects a particular suffix of $val$, so we first find that suffix, then repeatedly update the array $val$ until $w_k$ is greater than the current subsegment's second minimum. Next we prove that this solution works in $O((n+q)\log n)$. When we change a particular subsegment, we split the operation into $3$ types according to the relationship between $w_k$ and the subsegment's $val$. If $w_k$ is exactly the minimum of the subsegment: this kind of operation may occur many times, but it doesn't change $val$ at all, so all of them can be handled at once using a segment tree; thus within one change it is performed at most once. If $w_k$ is the second minimum of the subsegment: every subsegment has a different second minimum, so this kind of operation is also performed at most once per change. If $w_k$ is neither the minimum nor the second minimum of the subsegment: every such operation except the first decreases the number of subsegments by $1$, so this kind of operation is performed no more than $n+q$ times in total. Using std::set and a segment tree, each of these operations can be done in $O(\log n)$. Thus the total complexity is $O((n+q)\log n)$. In conclusion, we can solve this task in $O(m\log m + (n+q)\log n)$.
|
[
"brute force",
"constructive algorithms",
"data structures",
"graphs",
"shortest paths"
] | 3,400
|
#include <cstdio>
#include <algorithm>
#include <queue>
#include <set>
using std::set;
typedef long long ll;
char BUF_R[1 << 22], *csy1, *csy2;
#define GC (csy1 == csy2 && (csy2 = (csy1 = BUF_R) + fread(BUF_R, 1, 1 << 22, stdin), csy1 == csy2) ? EOF : *csy1 ++)
template <class T>
inline void RI(T &t) {
char c = GC;
for (t = 0; c < 48 || c > 57; c = GC);
for (; c > 47 && c < 58; c = GC) t = t * 10 + (c ^ 48);
}
const int MAX_N = 200000 + 5;
const int MAX_M = 600000 + 5;
const int INF32 = 0x7fffffff;
const ll INF64 = 0x3fffffffffffffffll;
int N, M, Q, ques[MAX_N][2];
struct Edge{
int y, prev, len;
}e[MAX_M];
int elast[MAX_N], ecnt = 1;
std::priority_queue < std::pair <ll, int> > pq;
ll dis[MAX_N], w[MAX_N];
unsigned long long res[MAX_N], ans;
int cnt[MAX_N], fa[MAX_N], a[MAX_N], t;
int rt[MAX_N], pos[MAX_N], sum[MAX_N], endpos[MAX_N];
struct Node{
int f, s;
Node (int a = 0, int b = 0) : f(a), s(b) {}
inline bool operator == (const Node &comp) const {return f == comp.f && s == comp.s;}
inline void swap() {
int t = f;
f = s;
s = t;
}
}minv, val[MAX_N];
set <int> s;
namespace SGT{
const int MAX_M = 10000000;
int tot;
struct SegmentNode{
int sum, ls, rs;
}node[MAX_M];
void segment_modify(int &i, int l, int r, int x) {
i = i ? i : ++ tot;
node[i].sum ++;
if (l == r) return ;
int mid = (l + r) >> 1;
mid < x ? segment_modify(node[i].rs, mid + 1, r, x) : segment_modify(node[i].ls, l, mid, x);
}
int segment_query(int i, int l, int r, int ql, int qr) {
if (!i) return 0;
if (l < ql || r > qr) {
int mid = (l + r) >> 1, res = 0;
if (mid >= ql) res = segment_query(node[i].ls, l, mid, ql, qr);
if (mid < qr) res += segment_query(node[i].rs, mid + 1, r, ql, qr);
return res;
}else return node[i].sum;
}
}
int count_illegal(int idx, int l, int r) {
if (r < l || idx == 1) return 0;
return SGT::segment_query(rt[idx], 1, N, l, r);
}
int count_legal(int idx, int l, int r) {
if (r < l) return 0;
return sum[r] - sum[l - 1] - count_illegal(idx, l, r);
}
void Build(int x, int y, int z) {
ecnt ++;
e[ecnt].y = y;
e[ecnt].len = z;
e[ecnt].prev = elast[x];
elast[x] = ecnt;
}
int main() {
RI(N); RI(M); RI(Q);
for (int i = 1; i <= N; i ++) {
RI(w[i]);
pos[i] = N + 1;
dis[i] = INF64;
}
for (int i = 1; i <= M; i ++) {
int x, y, z;
RI(x); RI(y); RI(z);
Build(x, y, z);
Build(y, x, z);
}
for (int i = 1; i <= Q; i ++) {
RI(ques[i][0]); RI(ques[i][1]);
w[ques[i][0]] += ques[i][1];
}
dis[1] = 0;
pq.push(std::make_pair(0, 1));
while (!pq.empty()) {
int u = pq.top().second;
ll d = -pq.top().first;
pq.pop();
if (dis[u] != d) continue;
a[++ t] = u;
for (int i = elast[u]; i; i = e[i].prev) {
int v = e[i].y;
if (d + e[i].len < dis[v]) {
dis[v] = d + e[i].len;
pq.push(std::make_pair(-dis[v], v));
}
}
}
for (int i = 2; i <= ecnt; i ++) {
int u = e[i ^ 1].y, v = e[i].y;
if (dis[u] + e[i].len == dis[v]) {
cnt[v] ++; fa[v] = u;
}
}
ans = 0;
s.insert(1);
pos[1] = 1;
minv.f = 1;
minv.s = N + 1;
val[1] = minv;
w[N + 1] = INF64;
for (int i = 2, j = 2; i <= N; i ++) {
int u = a[i];
for (; dis[a[j]] < dis[u]; j ++) {
int v = a[j];
pos[v] = i;
if (w[v] < w[minv.f]) {
s.insert(i);
endpos[minv.f] = i;
minv.s = minv.f;
minv.f = v;
val[i] = minv;
}else if (w[v] < w[minv.s]) {
s.insert(i);
minv.s = v;
val[i] = minv;
}
}
sum[i] = sum[i - 1];
if (cnt[u] < 2) {
sum[i] ++;
if (fa[u] > 1) SGT::segment_modify(rt[fa[u]], 1, N, i);
ans += w[(minv.f == 1 || minv.f != fa[u]) ? minv.f : minv.s];
}
}
endpos[minv.f] = N + 1;
s.insert(N + 1);
res[Q] = ans;
for (int i = Q; i > 0; i --) {
int k = ques[i][0], x = ques[i][1];
w[k] -= x;
if (pos[k] > N) {
res[i - 1] = ans;
continue;
}
set <int>::iterator it = -- s.upper_bound(pos[k]), lst;
minv = Node();
int response = 0, p = pos[k];
if (k == val[*it].f) response = 1;
else if (k == val[*it].s) response = 2;
while (response >= 0 && *it <= N) {
lst = it ++;
if (response == 0) {
if (w[val[*lst].s] <= w[k]) break;
if (w[k] < w[val[*lst].f]) {
if (!minv.f) endpos[val[*lst].f] = p;
int c = count_illegal(val[*lst].f, p, (*it) - 1), tot = sum[(*it) - 1] - sum[p - 1];
ans -= 1ull * c * w[val[*lst].s] + 1ll * (tot - c) * w[val[*lst].f];
if (p != *lst) val[p] = val[*lst];
val[p].s = val[p].f;
val[p].f = k;
c = count_illegal(k, p, (*it) - 1);
ans += 1ull * c * w[val[p].s] + 1ll * (tot - c) * w[k];
if (p != *lst) {
s.insert(p);
minv = val[p];
}else if (minv == val[p]) s.erase(lst);
else minv = val[p];
endpos[k] = p = *it;
}else {
ans += 1ull * count_illegal(val[*lst].f, p, (*it) - 1) * (w[k] - w[val[*lst].s]);
if (p != *lst) val[p] = val[*lst];
val[p].s = k;
if (p != *lst) {
s.insert(p);
minv = val[p];
}else if (minv == val[p]) s.erase(lst);
else minv = val[p];
p = *it;
}
}else if (response == 1) {
it = s.lower_bound(endpos[k]);
ans -= 1ull * count_legal(k, p, (*it) - 1) * x;
p = *it; lst = it; lst --;
minv = val[*lst];
response = val[p].s == k ? 2 : 0;
}else {
if (w[val[*lst].f] <= w[k]) {
ans -= 1ull * count_illegal(val[*lst].f, p, (*it) - 1) * x;
minv = val[*lst]; p = *it;
}else {
endpos[val[*lst].f] = p;
int c = count_illegal(val[*lst].f, p, (*it) - 1), tot = sum[(*it) - 1] - sum[p - 1];
ans -= 1ull * c * (w[val[*lst].s] + x) + 1ull * (tot - c) * w[val[*lst].f];
if (p != *lst) val[p] = val[*lst];
val[p].swap();
c = count_illegal(val[p].f, p, (*it) - 1);
ans += 1ull * c * w[val[p].s] + 1ull * (tot - c) * w[val[p].f];
if (p != *lst) {
s.insert(p);
minv = val[p];
}else if (minv == val[p]) s.erase(lst);
else minv = val[p];
endpos[k] = p = *it;
}
response = 0;
}
}
res[i - 1] = ans;
}
for (int i = 0; i <= Q; i ++) printf("%llu\n", res[i]);
return 0;
}
|
1580
|
F
|
Problems for Codeforces
|
XYMXYM and CQXYM will prepare $n$ problems for Codeforces. The difficulty of the problem $i$ will be an integer $a_i$, where $a_i \geq 0$. The difficulty of the problems must satisfy $a_i+a_{i+1}<m$ ($1 \leq i < n$), and $a_1+a_n<m$, where $m$ is a fixed integer. XYMXYM wants to know how many plans of the difficulty of the problems there are modulo $998\,244\,353$.
Two plans of difficulty $a$ and $b$ are different only if there is an integer $i$ ($1 \leq i \leq n$) satisfying $a_i \neq b_i$.
|
If two numbers $a,b$ satisfy $a+b<m$, at most one of them can be $\geq \lceil \frac{m}{2} \rceil$. Consider cutting the cycle into a sequence at the first position $p$ satisfying $\max(a_p,a_{p+1})<\lceil \frac{m}{2} \rceil$. When we subtract $\lceil \frac{m}{2} \rceil$ from every number that is at least $\lceil \frac{m}{2} \rceil$, we get a sub-problem. However, if $n$ is even, such a position $p$ may not exist, but we can still derive a sub-problem easily. For this problem on a sequence, we can divide the sequence into many segments, none of which can be cut any further. There may be only one segment, whose first and last elements are both at least $\lceil \frac{m}{2} \rceil$. Or there may be many segments: the lengths of the first and the last ones are even, and the rest are odd. To solve the problem, we define two generating functions $A$ and $B$: $A$ counts the segments of odd length and $B$ the segments of even length. If $m$ is odd, the one-element segment containing only the number $\lfloor \frac{m}{2} \rfloor$ exists, and the generating function for the number of sequences is $B^2\left(\sum_{i \geq 0} (A+x)^i\right)+A=\frac{B^2}{1-x-A}+A$. Otherwise, it is $B^2\left(\sum_{i \geq 0} A^i\right)+A=\frac{B^2}{1-A}+A$. Using NTT and the polynomial inversion algorithm is enough. Each step transforms a problem with limit $m$ into one with limit $\frac{m}{2}$, so the time complexity is $O(n \log n \log m)$. UPD: a Chinese editorial can be found here.
|
[
"combinatorics",
"fft",
"math"
] | 3,300
|
#include<stdio.h>
#include<memory.h>
#define mod 998244353
unsigned long long tmp[131073],invn;
int a_[131072];
inline int ksm(unsigned long long a,int b){int ans=1;while(b)(b&1)&&(ans=a*ans%mod),a=a*a%mod,b>>=1;return ans;}
void init(int n){
for(int i=1;i<n;i++)a_[i]=i&1?a_[i^1]|n>>1:a_[i>>1]>>1;
for(int i=tmp[0]=1,j=ksm(3,(mod-1)/n);i<=n;i++)tmp[i]=tmp[i-1]*j%mod;
invn=ksm(n,mod-2);
}
void ntt(int a[],int n,bool typ){
int p;
for(int i=1;i<n;i++)if(a_[i]<i)p=a[i],a[i]=a[a_[i]],a[a_[i]]=p;
if(typ)for(int i=1,d=n>>1;d;i<<=1,d>>=1)for(int j=0;j<n;j+=i<<1)for(int k=0;k<i;k++)
p=tmp[n-d*k]*a[i+j+k]%mod,(a[i+j+k]=a[j+k]+mod-p)>=mod&&(a[i+j+k]-=mod),(a[j+k]+=p)>=mod&&(a[j+k]-=mod);
else for(int i=1,d=n>>1;d;i<<=1,d>>=1)for(int j=0;j<n;j+=i<<1)for(int k=0;k<i;k++)
p=tmp[d*k]*a[i+j+k]%mod,(a[i+j+k]=a[j+k]+mod-p)>=mod&&(a[i+j+k]-=mod),(a[j+k]+=p)>=mod&&(a[j+k]-=mod);
if(typ)for(int i=0;i<n;i++)a[i]=invn*a[i]%mod;
}
void getinv(int n,int a[],int b[]){
static int tmp[131072];
memset(b,0,sizeof(int)*(n<<1));
b[0]=ksm(a[0],mod-2);
for(int i=1;i<n;i<<=1){
init(i<<2);
memset(tmp,0,sizeof(int)*(i<<2));
memcpy(tmp,a,sizeof(int)*(i<<1));
ntt(tmp,i<<2,0);
ntt(b,i<<2,0);
for(int j=0;j<i<<2;j++)b[j]=(1ull*(mod-tmp[j])*b[j]%mod+2)*b[j]%mod;
ntt(b,i<<2,1);
memset(b+(i<<1),0,sizeof(int)*(i<<1));
}
}
int n,m,a[131072],b[131072],d0[131072],d1[131072],len,ans;
void work(int v){
if(v==1){
for(int i=0;i<len;i++)a[i]=1;
ans=1;
return;
}
work(v>>1);
memset(d0,0,sizeof(int)*(len<<1));
memset(d1,0,sizeof(int)*(len<<1));
for(int i=0;i<len;i++)(i&1?d1:d0)[i]=a[i];
if(v&1)++d1[1]==mod&&(d1[1]=0);
memset(a,0,sizeof(int)*(len<<1));
for(int i=0;i<len;i++)a[i]=d1[i]?mod-d1[i]:0;
++a[0]==mod&&(a[0]=0);
getinv(len,a,b);
if(v==m||!(n&1)){
int Ans=0;
for(int i=1;i<=n;i+=2)
Ans=(1ull*d1[i]*b[n-i]%mod*i+Ans)%mod;
if(!(n&1))ans=(2ull*ans+Ans)%mod;
else ans=Ans;
if(v==m)return;
}
init(len<<1);
ntt(d0,len<<1,0);
for(int i=0;i<len<<1;i++)d0[i]=1ull*d0[i]*d0[i]%mod;
ntt(d0,len<<1,1);
memset(d0+len,0,sizeof(int)*len);
ntt(d0,len<<1,0);
ntt(b,len<<1,0);
for(int i=0;i<len<<1;i++)a[i]=1ull*d0[i]*b[i]%mod;
ntt(a,len<<1,1);
for(int i=1;i<len;i+=2)(a[i]+=d1[i])>=mod&&(a[i]-=mod);
if(v&1)a[1]?a[1]--:a[1]=mod-1;
}
int main(){
scanf("%d%d",&n,&m);
len=n<<1;
while(len^len&-len)len^=len&-len;
work(m);
printf("%d\n",ans);
}
|
1581
|
A
|
CQXYM Count Permutations
|
CQXYM is counting permutations of length $2n$.
A permutation is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array) and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
A permutation $p$ (of length $2n$) will be counted only if the number of $i$ satisfying $p_i<p_{i+1}$ is not less than $n$. For example:
- Permutation $[1, 2, 3, 4]$ will count, because the number of such $i$ that $p_i<p_{i+1}$ equals $3$ ($i = 1$, $i = 2$, $i = 3$).
- Permutation $[3, 2, 1, 4]$ won't count, because the number of such $i$ that $p_i<p_{i+1}$ equals $1$ ($i = 3$).
CQXYM wants you to help him to count the number of such permutations modulo $1000000007$ ($10^9+7$).
In addition, modulo operation is to get the remainder. For example:
- $7 \mod 3=1$, because $7 = 3 \cdot 2 + 1$,
- $15 \mod 4=3$, because $15 = 4 \cdot 3 + 3$.
|
Take a permutation $p$ with $\sum_{i=2}^{2n}[p_{i-1}<p_i]=k$, and define a permutation $q$ by $q_i=2n+1-p_i$ for all $1 \leqslant i \leqslant 2n$. Then $[p_{i-1}<p_i]+[q_{i-1}<q_i]=1$ for all $2 \leqslant i \leqslant 2n$. Thus $\sum_{i=2}^{2n}[q_{i-1}<q_i]=2n-1-k$, so exactly one of $p$ and $q$ is counted. All in all, exactly half of all permutations are counted in the answer, which is therefore $\frac{1}{2}(2n)!$. The time complexity is $O(\sum n)$. If you precalculate the factorials, the complexity becomes $O(t+n)$.
|
[
"combinatorics",
"math",
"number theory"
] | 800
|
#include<stdio.h>
int f[100001];
int main(){
f[1]=1;
for(register int i=2;i!=100001;i++){
f[i]=((i<<1)-1ll)*f[i-1]%1000000007*(i<<1)%1000000007;
}
int n;
scanf("%d",&n);
for(register int i=n;i!=0;i--){
scanf("%d",&n);
printf("%d\n",f[n]);
}
return 0;
}
|
1581
|
B
|
Diameter of Graph
|
CQXYM wants to create a connected undirected graph with $n$ nodes and $m$ edges, and the diameter of the graph must be strictly less than $k-1$. Also, CQXYM doesn't want a graph that contains self-loops or multiple edges (i.e. each edge connects two different vertices and between each pair of vertices there is at most one edge).
The diameter of a graph is the maximum distance between any two nodes.
The distance between two nodes is the minimum number of the edges on the path which endpoints are the two nodes.
CQXYM wonders whether it is possible to create such a graph.
|
If $m < n-1$, the graph can't be connected, so the answer is NO. If $m > \frac{n(n-1)}{2}$, the graph must contain multiple edges, so the answer is NO. If $m=\frac{n(n-1)}{2}$, the graph must be the complete graph, whose diameter is $1$: if $k>2$ the answer is YES, otherwise NO. If $n=1$, the graph has only one node and its diameter is $0$: if $k>1$ the answer is YES, otherwise NO. If $m=n-1$, the graph must be a tree; the diameter is minimized, namely $2$, when every other node shares an edge with node $1$ (a star). If $n-1 < m < \frac{n(n-1)}{2}$, we can add edges to this star and the diameter still won't exceed $2$; since the graph is not complete, the diameter is more than $1$, hence exactly $2$. So in both of these cases, if $k>3$ the answer is YES, otherwise NO. The time complexity is $O(t)$.
|
[
"constructive algorithms",
"graphs",
"greedy",
"math"
] | 1,200
|
#include<stdio.h>
inline void Solve(){
int n,m,k;
scanf("%d%d%d",&n,&m,&k);
if((n-1ll)*n>>1<m||m<n-1){
puts("NO");
return;
}
if(n==1){
if(k>1){
puts("YES");
}else{
puts("NO");
}
}else if(m<(n-1ll)*n>>1){
if(k>3){
puts("YES");
}else{
puts("NO");
}
}else if(k>2){
puts("YES");
}else{
puts("NO");
}
}
int main(){
int t;
scanf("%d",&t);
for(register int i=0;i!=t;i++){
Solve();
}
return 0;
}
|
1582
|
A
|
Luntik and Concerts
|
Luntik has decided to try singing. He has $a$ one-minute songs, $b$ two-minute songs and $c$ three-minute songs. He wants to distribute all songs into two concerts such that every song should be included to exactly one concert.
He wants to make the absolute difference of durations of the concerts as small as possible. The duration of the concert is the sum of durations of all songs in that concert.
Please help Luntik and find the minimal possible difference in minutes between the concerts durations.
|
Let $S$ be the sum of durations of all songs, that is, $S = a + 2 \cdot b + 3 \cdot c$. Notice that since $a, b, c \ge 1$, it is possible to make a concert of any duration from $0$ to $S$ (indeed, if we run a greedy algorithm that takes three-minute songs while possible, then two-minute songs, and then one-minute ones, we can reach any duration we need). Now, the answer is the remainder of $S$ modulo $2$: if $S$ is even, then it's possible to form the first concert with duration $\frac{S}{2}$, leaving the second one with duration $S-\frac{S}{2}=\frac{S}{2}$, so the difference between the durations is $0$. If $S$ is odd, then the smallest possible difference is $1$: form the first concert with duration $\left \lfloor{\frac{S}{2}}\right \rfloor$, and the second one is left with duration $\left \lceil{\frac{S}{2}}\right \rceil$.
|
[
"math"
] | 800
|
#include<bits/stdc++.h>
using namespace std;
int main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
int t, a, b, c;
cin >> t;
while (t--) {
cin >> a >> b >> c;
cout << (a + c) % 2 << '\n';
}
return 0;
}
|
1582
|
B
|
Luntik and Subsequences
|
Luntik came out for a morning stroll and found an array $a$ of length $n$. He calculated the sum $s$ of the elements of the array ($s= \sum_{i=1}^{n} a_i$). Luntik calls a subsequence of the array $a$ nearly full if the sum of the numbers in that subsequence is equal to $s-1$.
Luntik really wants to know the number of nearly full subsequences of the array $a$. But he needs to come home so he asks you to solve that problem!
A sequence $x$ is a subsequence of a sequence $y$ if $x$ can be obtained from $y$ by deletion of several (possibly, zero or all) elements.
|
It can be noticed that all subsequences with sum $s-1$ appear if we erase some $0$-es from the array and also erase exactly one $1$. We can independently calculate the number of ways to erase some $0$-es from the array (that way the sum will remain the same), then calculate the number of ways to erase exactly one $1$ from the array (that way the sum will become equal to $s-1$), and then multiply these two numbers. Therefore, the answer is equal to $2^{c_0} \cdot c_1$, where $c_0$ is the number of $0$-es in the array, and $c_1$ is the number of $1$-s.
|
[
"combinatorics",
"math"
] | 900
|
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
int main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
int t, n, x;
cin >> t;
while (t--) {
cin >> n;
int cnt0 = 0, cnt1 = 0;
for (int i = 1; i <= n; ++i) {
cin >> x;
if (x == 0) cnt0++;
if (x == 1) cnt1++;
}
cout << (1ll << cnt0) * (ll)cnt1 << '\n';
}
return 0;
}
|
1582
|
C
|
Grandma Capa Knits a Scarf
|
Grandma Capa has decided to knit a scarf and asked Grandpa Sher to make a pattern for it, a pattern is a string consisting of lowercase English letters. Grandpa Sher wrote a string $s$ of length $n$.
Grandma Capa wants to knit a beautiful scarf, and in her opinion, a beautiful scarf can only be knit from a string that is a palindrome. She wants to change the pattern written by Grandpa Sher, but to avoid offending him, she will choose one lowercase English letter and erase some (at her choice, possibly none or all) occurrences of that letter in string $s$.
She also wants to minimize the number of erased symbols from the pattern. Please help her and find the minimum number of symbols she can erase to make string $s$ a palindrome, or tell her that it's impossible. Notice that she can only erase symbols equal to the \textbf{one} letter she chose.
A string is a palindrome if it is the same from the left to the right and from the right to the left. For example, the strings 'kek', 'abacaba', 'r' and 'papicipap' are palindromes, while the strings 'abb' and 'iq' are not.
|
Let's iterate over the letter that we will erase from the string (from 'a' to 'z'), and for each letter independently find the minimal number of erased symbols required to make the string a palindrome. Let's say we are currently considering a letter $c$. Let's use the two pointers method: we will maintain two pointers $l$, $r$, initially $l$ points at the beginning of the string, and $r$ points at the end of the string. Now we will form a palindrome: each time we will compare $s_l$ and $s_r$, if they are equal, then we can add both of them to the palindrome at corresponding positions and iterate to symbols $l+1$ and $r-1$. If $s_l \neq s_r$, then we need to erase one of these symbols (otherwise, we won't get a palindrome), if $s_l=c$, let's erase it (we'll add to the number of erased symbols $1$ and iterate to $l+1$-th symbol), similarly, if $s_r=c$, we'll add to the number of the erased symbols $1$ and iterate to $r-1$-th symbol. And the last case, if $s_l \neq c$ and $s_r \neq c$, then it's impossible to get a palindrome from $s$ by erasing only letters equal to $c$. The asymptotic behaviour of this solution is $O(|C| \cdot n)$, where $|C|$ is the size of the alphabet, i.e. $26$.
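The two-pointer routine translates almost directly into Python. The sketch below deviates slightly from the editorial by iterating only over letters that actually occur in $s$ (which suffices, since erasing an absent letter changes nothing); the function name is mine.

```python
def min_erasures(s):
    n = len(s)
    best = None
    for c in set(s):          # candidate letter to erase
        l, r, cnt = 0, n - 1, 0
        ok = True
        while l <= r:
            if s[l] == s[r]:
                l += 1; r -= 1        # both symbols join the palindrome
            elif s[l] == c:
                cnt += 1; l += 1      # erase the left symbol
            elif s[r] == c:
                cnt += 1; r -= 1      # erase the right symbol
            else:
                ok = False; break     # neither symbol is erasable
        if ok and (best is None or cnt < best):
            best = cnt
    return -1 if best is None else best
```

For example, `min_erasures("abb")` is `1` (erase one 'a' or one 'b'), and `min_erasures("abc")` is `-1`.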
|
[
"brute force",
"data structures",
"greedy",
"strings",
"two pointers"
] | 1,200
|
#include<bits/stdc++.h>
using namespace std;
int main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
int t, n;
cin >> t;
while (t--) {
string s;
cin >> n >> s;
int ans = n + 1;
for (int c = 0; c < 26; ++c) {
int l = 0, r = n - 1, cnt = 0;
while (l <= r) {
if (s[l] == s[r]) {
l++, r--;
}
else if (s[l] == char('a' + c)) {
cnt++, l++;
}
else if (s[r] == char('a' + c)) {
cnt++, r--;
}
else {
cnt = n + 1;
break;
}
}
ans = min(ans, cnt);
}
if (ans == n + 1) ans = -1;
cout << ans << '\n';
}
return 0;
}
|
1582
|
D
|
Vupsen, Pupsen and 0
|
Vupsen and Pupsen were gifted an integer array. Since Vupsen doesn't like the number $0$, he threw away all numbers equal to $0$ from the array. As a result, he got an array $a$ of length $n$.
Pupsen, on the contrary, likes the number $0$ and he got upset when he saw the array without zeroes. To cheer Pupsen up, Vupsen decided to come up with another array $b$ of length $n$ such that $\sum_{i=1}^{n}a_i \cdot b_i=0$. Since Vupsen doesn't like number $0$, \textbf{the array $b$ must not contain numbers equal to $0$}. Also, the numbers in that array must not be huge, so \textbf{the sum of their absolute values cannot exceed $10^9$}. Please help Vupsen to find any such array $b$!
|
Let's consider two cases: $n$ even and $n$ odd. If $n$ is even, split all numbers into pairs: $a_1$ with $a_2$, $a_3$ with $a_4$, and so on. In the pair $a_i$, $a_{i+1}$, let $b_i=a_{i+1}$ and $b_{i+1}=-a_i$; then the sum $a_i \cdot b_i + a_{i+1} \cdot b_{i+1}$ in every pair equals $0$ ($a_i \cdot a_{i+1} + a_{i+1} \cdot (-a_i)=0$), so the overall sum $a_1 \cdot b_1 + a_2 \cdot b_2 + \ldots + a_n \cdot b_n$ equals $0$ as well. Notice that the sum of the $|b_i|$ is equal to the sum of the $|a_i|$, so $|b_1| + |b_2| + \ldots + |b_n|$ does not exceed $MAXA \cdot MAXN = 10^9$. If $n$ is odd, cut off the last $3$ numbers and calculate $b$ for the array $a_1, a_2, \ldots, a_{n-3}$ the way we did for even $n$ ($n-3$ is even since $n$ is odd). Then, among the last $3$ numbers $a_{n-2}, a_{n-1}, a_n$, find two numbers whose sum is not $0$ (by the pigeonhole principle, among three nonzero numbers there are two which are both positive or both negative, and the sum of those two cannot be $0$); let them be $a_i$ and $a_j$, and the third one $a_k$. Then let $b_i=-a_k$, $b_j=-a_k$, $b_k=a_i+a_j$; the sum $a_i \cdot b_i + a_j \cdot b_j + a_k \cdot b_k = 0$, and the numbers of $b$ are not equal to $0$. The sum $|b_1| + |b_2| + \ldots + |b_{n-3}|$ does not exceed $MAXA \cdot (MAXN - 1 - 3)$ (since $MAXN$ is even and we consider odd $n$), and the sum $|b_{n-2}|+|b_{n-1}|+|b_n|$ does not exceed $MAXA + MAXA + 2 \cdot MAXA = 4 \cdot MAXA$, so the overall sum of the $|b_i|$ does not exceed $MAXA \cdot MAXN = 10^9$.
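The construction can be packaged as a function and verified mechanically: build $b$, then check that it has the right length, contains no zeros, and gives a zero dot product. A sketch (the function name is mine; it assumes all $a_i \neq 0$, as the statement guarantees):

```python
def build_b(a):
    # Editorial construction: pair up elements; for odd n, handle the
    # first three elements separately via the case analysis.
    n = len(a)
    b = []
    start = 0
    if n % 2 == 1:
        x, y, z = a[0], a[1], a[2]
        if x + y != 0:
            b += [-z, -z, x + y]
        elif y + z != 0:
            b += [y + z, -x, -x]
        else:
            b += [-y, x + z, -y]   # here x = z = -y, so x + z = -2y != 0
        start = 3
    for i in range(start, n, 2):
        b += [a[i + 1], -a[i]]     # a_i*a_{i+1} - a_{i+1}*a_i = 0
    return b

for a in ([1, 2], [3, -3, 4], [1, -1, 2, 5, -7]):
    b = build_b(a)
    assert len(b) == len(a)
    assert all(v != 0 for v in b)
    assert sum(x * y for x, y in zip(a, b)) == 0
```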
|
[
"constructive algorithms",
"math"
] | 1,600
|
ttt = int(input())
for t in range(ttt):
n = int(input())
a = [int(x) for x in input().split()]
start = 0
if n % 2 == 1:
if (a[0] + a[1] != 0):
print(-a[2], -a[2], a[0] + a[1], end = " ")
elif (a[1] + a[2] != 0):
print(a[2] + a[1], -a[0], -a[0], end = " ")
else:
print(-a[1], a[0] + a[2], -a[1], end = " ")
start = 3
while start < n:
print(-a[start + 1], a[start], end = " ")
start += 2
print()
|
1582
|
E
|
Pchelyonok and Segments
|
Pchelyonok decided to give Mila a gift. Pchelyonok has already bought an array $a$ of length $n$, but gifting an array is too common. Instead of that, he decided to gift Mila the segments of that array!
Pchelyonok wants his gift to be beautiful, so he decided to choose $k$ non-overlapping segments of the array $[l_1,r_1]$, $[l_2,r_2]$, $\ldots$ $[l_k,r_k]$ such that:
- the length of the first segment $[l_1,r_1]$ is $k$, the length of the second segment $[l_2,r_2]$ is $k-1$, $\ldots$, the length of the $k$-th segment $[l_k,r_k]$ is $1$
- for each $i<j$, the $i$-th segment occurs in the array earlier than the $j$-th (i.e. $r_i<l_j$)
- the sums in these segments are strictly increasing (i.e. let $sum(l \ldots r) = \sum\limits_{i=l}^{r} a_i$ — the sum of numbers in the segment $[l,r]$ of the array, then $sum(l_1 \ldots r_1) < sum(l_2 \ldots r_2) < \ldots < sum(l_k \ldots r_k)$).
Pchelyonok also wants his gift to be as beautiful as possible, so he asks you to find the maximal value of $k$ such that he can give Mila a gift!
|
Let's notice that $k$ can be the answer only if the sum of lengths of the segments does not exceed the number of elements in the array, that is, $\frac{k \cdot (k + 1)}{2} \le n$. From this inequality we get that $k$ is less than $\sqrt{2n}$, and at the maximal value of $n$ it does not exceed $447$. Let $dp_{i,j}$ be the maximal sum of a segment of length $j$, given that we only use elements on the suffix $i$ (that is, elements with indices $i, i + 1, \ldots, n$) and have already chosen segments with lengths $j, j - 1, \ldots, 1$ with increasing sums. Let's learn how to recalculate the DP values. We can either not include the $i$-th element in the segment of length $j$, in which case we refer to the value $dp_{i+1,j}$, or include it, in which case we are interested in the value $dp_{i+j,j-1}$: if it is greater than the sum on the segment $i, i + 1, \ldots, i + j - 1$, then we can take $j$ segments with lengths from $j$ down to $1$ on the suffix $i$, and otherwise we cannot. We take the maximum of these two cases in order to maximize the sum. To calculate the sum on a segment you can use prefix sums. The asymptotic behaviour of this solution is $O(n \cdot \sqrt{n})$.
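The suffix DP can be sketched in Python as follows (function name mine; it mirrors the C++ solution, using $\pm\infty$ sentinels so that "unreachable" and "no constraint on the last segment" fall out naturally):

```python
def max_gift_k(a):
    # O(n * sqrt(n)) suffix DP from the editorial.
    n = len(a)
    pref = [0] * (n + 1)
    for i in range(1, n + 1):
        pref[i] = pref[i - 1] + a[i - 1]
    K = 1                                  # largest k with k(k+1)/2 <= n
    while (K + 1) * (K + 2) // 2 <= n:
        K += 1
    NEG, INF = float('-inf'), float('inf')
    # dp[i][j]: best sum of a length-j segment starting at position >= i
    # that can be followed by segments of lengths j-1, ..., 1 with
    # strictly increasing sums.
    dp = [[NEG] * (K + 1) for _ in range(n + 2)]
    dp[n + 1][0] = INF
    for i in range(n, 0, -1):
        dp[i][0] = INF                     # empty tail: no constraint
        for j in range(1, K + 1):
            dp[i][j] = dp[i + 1][j]        # skip position i
            if i + j - 1 <= n:
                seg = pref[i + j - 1] - pref[i - 1]
                if seg < dp[i + j][j - 1]: # strictly increasing sums
                    dp[i][j] = max(dp[i][j], seg)
    return max(j for j in range(K + 1) if dp[1][j] > NEG)
```

For instance, for $a = [1, 2, 3, 4]$ the answer is $2$ (e.g. segments $[1,2]$ with sum $3$ and $[4]$ with sum $4$).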
|
[
"binary search",
"data structures",
"dp",
"greedy",
"math"
] | 2,000
|
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
int const maxn = 1e5 + 5, maxk = 450;
int a[maxn], dp[maxn][maxk];
int inf = 1e9 + 7;
ll pref[maxn];
int main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
int t, n;
cin >> t;
while (t--) {
cin >> n;
for (int i = 1; i <= n; ++i) {
cin >> a[i];
pref[i] = pref[i - 1] + a[i];
}
int k = 0;
while (k * (k + 1) / 2 <= n) k++;
for (int j = 0; j < k; ++j) {
dp[n + 1][j] = -inf;
}
dp[n + 1][0] = inf;
for (int i = n; i >= 1; --i) {
for (int j = 0; j < k; ++j) {
dp[i][j] = dp[i + 1][j];
if (j && i + j - 1 <= n && pref[i + j - 1] - pref[i - 1] < dp[i + j][j - 1]) {
dp[i][j] = max(dp[i][j], (int)(pref[i + j - 1] - pref[i - 1]));
}
}
}
int ans = 0;
for (int j = 0; j < k; ++j) {
if (dp[1][j] > 0) ans = j;
}
cout << ans << '\n';
}
return 0;
}
|
1582
|
F2
|
Korney Korneevich and XOR (hard version)
|
\textbf{This is a harder version of the problem with bigger constraints.}
Korney Korneevich dug up an array $a$ of length $n$. Korney Korneevich has recently read about the operation bitwise XOR, so he wished to experiment with it. For this purpose, he decided to find all integers $x \ge 0$ such that there exists an \textbf{increasing} subsequence of the array $a$, in which the bitwise XOR of numbers is equal to $x$.
It didn't take a long time for Korney Korneevich to find all such $x$, and he wants to check his result. That's why he asked you to solve this problem!
A sequence $s$ is a subsequence of a sequence $b$ if $s$ can be obtained from $b$ by deletion of several (possibly, zero or all) elements.
A sequence $s_1, s_2, \ldots , s_m$ is called increasing if $s_1 < s_2 < \ldots < s_m$.
|
Let's iterate over all numbers of the array and for each number $t$ maintain a list $g_t$ of all numbers $y$ such that it is possible to choose an increasing subsequence on the current prefix whose $xor$ of numbers is equal to $y$ and whose last number is less than $t$. Suppose we are currently considering the element $a_i$. Consider the elements of $g_{a_i}$: these are all values of $xor$-s of the subsequences to which we can append the element $a_i$. If $g_{a_i}$ contains a value $f$, then it is possible to get the value $f \oplus a_i$, so let's add the value $f \oplus a_i$ to all lists $g$ from $a_i + 1$ to the maximal value of $a$ (if the value being added is already in some of the $g$-s, it is unnecessary to add it there again). Let's perform some optimizations. First, let's stop considering the lists $g_t$ that have already been considered: if we have already considered $g_t$ at some iteration, we erase it, but remember that we never need to re-add the values of $xor$ that were erased. That optimization is sufficient to get the asymptotic behaviour $O((max\_a)^3)$, where $max\_a$ is the greatest number of the array $a$ (for every number $t$ and its possible value of $xor$ $f$ we pass the value $t \oplus f$ to all states $t + 1, \ldots, max\_a$; the number of different $t$ is $O(max\_a)$, the number of $f$ is $O(max\_a)$ as well, and each pass of a value is performed in $O(max\_a)$). Next, notice that when we pass some value of $xor$ equal to $f$ to elements $t + 1, \ldots, max\_a$ and find an element $r$ that already contains that value, then the value $f$ is already in all elements greater than $r$, so we do not have to add it any further. Using this optimization we finally get a solution in $O((max\_a)^2)$, since for every value of $xor$ (there are $O(max\_a)$ of them) we perform $O(max\_a)$ operations.
In total (considering all optimizations), the asymptotic behaviour of the solution is $O(n + (max\_a)^2)$.
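A faithful Python port of this algorithm can be cross-checked against a brute force over all increasing subsequences on tiny inputs; the sketch below does both (function names and the small `max_value` are mine, for illustration).

```python
from itertools import combinations
import random

def xor_values_brute(a, max_value=16):
    # All xors of (possibly empty) strictly increasing subsequences.
    res = {0}
    n = len(a)
    for r in range(1, n + 1):
        for idx in combinations(range(n), r):
            vals = [a[i] for i in idx]
            if all(u < v for u, v in zip(vals, vals[1:])):
                x = 0
                for v in vals:
                    x ^= v
                res.add(x)
    return sorted(res)

def xor_values_fast(a, max_value=16):
    g = [[0] for _ in range(max_value)]  # g[t]: xors usable before value t
    ans = [False] * max_value
    ans[0] = True
    r = [max_value] * max_value          # r[f]: leftmost list already holding f
    for x in a:
        for key in g[x]:
            to = key ^ x
            ans[to] = True
            while r[to] > x:             # propagate `to` only down to x+1
                r[to] -= 1
                if r[to] != x:
                    g[r[to]].append(to)
        g[x] = []                        # this list is fully processed
    return sorted(i for i in range(max_value) if ans[i])

random.seed(0)
for _ in range(50):
    a = [random.randrange(1, 16) for _ in range(random.randrange(1, 9))]
    assert xor_values_fast(a) == xor_values_brute(a)
```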
|
[
"binary search",
"brute force",
"dp",
"greedy",
"two pointers"
] | 2,400
|
#include<bits/stdc++.h>
using namespace std;
int const max_value = (1 << 13);
vector < int > g[max_value];
int ans[max_value], r[max_value];
int main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
int n, x;
cin >> n;
ans[0] = 1;
for (int i = 0; i < max_value; ++i) {
g[i].push_back(0);
}
for (int i = 0; i < max_value; ++i) {
r[i] = max_value;
}
for (int i = 1; i <= n; ++i) {
cin >> x;
for (auto key : g[x]) {
int to = (key^x);
ans[to] = 1;
while (r[to] > x) {
r[to]--;
if (r[to] != x) g[r[to]].push_back(to);
}
}
g[x] = {};
}
int k = 0;
for (int i = 0; i < max_value; ++i) {
k += ans[i];
}
cout << k << '\n';
for (int i = 0; i < max_value; ++i) {
if (ans[i]) cout << i << " ";
}
cout << '\n';
return 0;
}
|
1582
|
G
|
Kuzya and Homework
|
Kuzya started going to school. He was given math homework in which he was given an array $a$ of length $n$ and an array of symbols $b$ of length $n$, consisting of symbols '*' and '/'.
Let's denote a path of calculations for a segment $[l; r]$ ($1 \le l \le r \le n$) in the following way:
- Let $x=1$ initially. For every $i$ from $l$ to $r$ we will consequently do the following: if $b_i=$ '*', $x=x*a_i$, and if $b_i=$ '/', then $x=\frac{x}{a_i}$. Let's call a path of calculations for the segment $[l; r]$ a list of all $x$ that we got during the calculations (the number of them is exactly $r - l + 1$).
For example, let $a=[7,$ $12,$ $3,$ $5,$ $4,$ $10,$ $9]$, $b=[/,$ $*,$ $/,$ $/,$ $/,$ $*,$ $*]$, $l=2$, $r=6$, then the path of calculations for that segment is $[12,$ $4,$ $0.8,$ $0.2,$ $2]$.
Let's call a segment $[l;r]$ simple if the path of calculations for it contains \textbf{only integer numbers}.
Kuzya needs to find the number of simple segments $[l;r]$ ($1 \le l \le r \le n$). Since he obviously has no time and no interest to do the calculations for each option, he asked you to write a program to get to find that number!
|
Notice that a segment is simple if, for every prime number, we get a bracket sequence whose minimal balance is greater than or equal to $0$. The bracket sequence is formed in the following way: we iterate over the segment and add an opening bracket if we multiply by that prime, and a closing bracket if we divide by it. Let's consider the elements of the array $a$ and calculate the array $nxt_i$, which contains the greatest left bound such that the $i$-th operation stays within integer numbers for every $l \le nxt_i$. To calculate such bounds, for each prime number let's maintain, in a stack, all indices of its occurrences in the numbers of $a$ (if a prime occurs in a number several times, we store the index that many times). If the $i$-th operation is a multiplication, then $nxt_i$ is equal to $i$, and for all prime divisors of the number $a_i$ we add the index $i$; if it is a division, then for all prime divisors of $a_i$ we delete indices (as many as the multiplicity of the prime divisor in $a_i$) and save the smallest erased index in $nxt_i$. If for some prime divisor we had to erase an index from an empty stack, then we got a non-integer result, so $nxt_i=-1$. Now that we know the values of the array $nxt$, we need to calculate the number of segments $1 \le l \le r \le n$ such that $l \le min(l, r)$, where $min(l, r)$ is the minimal value of $nxt_i$ on the segment $[l, r]$. We can do that using a segment tree for minimum in $O(n \log_2 n)$ (iterate over the left bound and, descending from the root, find the greatest right bound for the current left one) or using a linear algorithm with a stack (iterate over all left bounds in decreasing order and maintain a minimum stack on the array $nxt$).
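The whole pipeline (per-prime stacks for $nxt$, then the monotonic-stack count) fits in a short Python sketch, which can be validated against a direct `Fraction`-based brute force on small inputs. Function names are mine; factorization here is trial division, fine for small values.

```python
from fractions import Fraction
from collections import defaultdict
import random

def factorize(x):
    fs, d = [], 2
    while d * d <= x:
        while x % d == 0:
            fs.append(d)
            x //= d
        d += 1
    if x > 1:
        fs.append(x)
    return fs

def count_simple(a, b):
    n = len(a)
    pos = defaultdict(list)      # per-prime stack of '*' positions
    nxt = [0] * n
    for i in range(n):
        nxt[i] = i
        if b[i] == '*':
            for p in factorize(a[i]):
                pos[p].append(i)
        else:
            for p in factorize(a[i]):
                if not pos[p]:
                    nxt[i] = -1  # non-integer for every left bound
                    break
                nxt[i] = min(nxt[i], pos[p].pop())
    ans = 0
    stack = []                   # monotonic stack of (suffix min of nxt, count)
    for i in range(n - 1, -1, -1):
        cnt = 1
        while stack and stack[-1][0] >= nxt[i]:
            cnt += stack.pop()[1]
        stack.append((nxt[i], cnt))
        if nxt[i] == i:          # l = i is a feasible left bound
            ans += cnt
    return ans

def count_simple_brute(a, b):
    total = 0
    for l in range(len(a)):
        x = Fraction(1)
        for r in range(l, len(a)):
            x = x * a[r] if b[r] == '*' else x / a[r]
            if x.denominator != 1:
                break            # all longer segments from l fail too
            total += 1
    return total

random.seed(2)
for _ in range(100):
    n = random.randrange(1, 8)
    a = [random.randrange(1, 11) for _ in range(n)]
    b = [random.choice('*/') for _ in range(n)]
    assert count_simple(a, b) == count_simple_brute(a, b)
```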
|
[
"data structures",
"number theory"
] | 2,600
|
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
int const maxn = 1e6 + 5;
int a[maxn];
int prime[maxn];
int L[maxn];
vector < int > pos[maxn];
inline void add(int x, int l) {
L[l] = l;
while (x > 1) {
pos[prime[x]].push_back(l);
x /= prime[x];
}
}
inline void del(int x, int l) {
if (x == 1) {
L[l] = l;
return;
}
L[l] = l;
while (x > 1) {
if ((int)pos[prime[x]].size() == 0) {
L[l] = 0;
return;
}
L[l] = min(L[l], pos[prime[x]].back());
pos[prime[x]].pop_back();
x /= prime[x];
}
}
int main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
int n;
cin >> n;
for (int i = 1; i <= n; ++i) cin >> a[i];
for (int i = 2; i < maxn; ++i) {
if (prime[i] == 0) {
prime[i] = i;
if (i > 1000) continue;
for (int j = i * i; j < maxn; j += i) {
prime[j] = i;
}
}
}
char type;
for (int i = 1; i <= n; ++i) {
cin >> type;
if (type == '*') add(a[i], i);
else del(a[i], i);
}
ll answer = 0;
vector < pair < int, int > > f_min;
for (int i = n; i >= 1; --i) {
int cnt = 1;
while ((int)f_min.size() > 0 && f_min.back().first >= L[i]) {
cnt += f_min.back().second;
f_min.pop_back();
}
f_min.push_back({L[i], cnt});
if (L[i] == i) answer += (ll)cnt;
}
cout << answer << '\n';
return 0;
}
|
1583
|
A
|
Windblume Ode
|
A bow adorned with nameless flowers that bears the earnest hopes of an equally nameless person.
You have obtained the elegant bow known as the Windblume Ode. Inscribed in the weapon is an array of $n$ ($n \ge 3$) positive \textbf{distinct} integers (i.e. different, no duplicates are allowed).
Find the largest subset (i.e. having the maximum number of elements) of this array such that its sum is a composite number. A positive integer $x$ is called composite if there exists a positive integer $y$ such that $1 < y < x$ and $x$ is divisible by $y$.
If there are multiple subsets with this largest size with the composite sum, you can output any of them. It can be proven that under the constraints of the problem such a non-empty subset always exists.
|
Let $s$ be the sum of the array $a$. If $s$ is composite, then $a$ itself is the largest subset with a composite sum. Otherwise, since $n \geq 3$, $s$ must be a prime number greater than $2$, so $s$ is odd. Also, because all elements of $a$ are distinct positive integers, removing any one number from $a$ leaves a sum strictly greater than $2$. This leads to the following solution: if $s$ is prime, removing any odd number from $a$ gives a composite subset of size $n-1$. Indeed, since $s$ is odd, an odd number must exist in $a$, and subtracting an odd number from the odd $s$ leaves an even sum; that sum is even and greater than $2$, hence composite.
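The case analysis is short enough to implement and check directly; a sketch (function name mine), returning the chosen indices:

```python
def largest_composite_subset(a):
    # a: distinct positive integers, n >= 3 as in the statement.
    def is_composite(x):
        return x > 3 and any(x % d == 0 for d in range(2, int(x ** 0.5) + 1))

    n, s = len(a), sum(a)
    if is_composite(s):
        return list(range(n))
    # s is an odd prime: drop any odd element, leaving an even sum > 2.
    i = next(j for j in range(n) if a[j] % 2 == 1)
    return [j for j in range(n) if j != i]

for a in ([1, 2, 4], [2, 3, 6], [1, 2, 3, 5], [1, 2, 3]):
    idx = largest_composite_subset(a)
    t = sum(a[j] for j in idx)
    assert t > 3 and any(t % d == 0 for d in range(2, t))  # composite
```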
|
[
"math",
"number theory"
] | 800
|
//make sure to make new file!
import java.io.*;
import java.util.*;
public class OmkarAndHeavenlyTreeSolution{
public static void main(String[] args)throws IOException{
BufferedReader f = new BufferedReader(new InputStreamReader(System.in));
PrintWriter out = new PrintWriter(System.out);
int t = Integer.parseInt(f.readLine());
for(int q = 1; q <= t; q++){
StringTokenizer st = new StringTokenizer(f.readLine());
int n = Integer.parseInt(st.nextToken());
int m = Integer.parseInt(st.nextToken());
HashSet<Integer> hset = new HashSet<Integer>();
for(int k = 0; k < m; k++){
st = new StringTokenizer(f.readLine());
int a = Integer.parseInt(st.nextToken());
int b = Integer.parseInt(st.nextToken());
int c = Integer.parseInt(st.nextToken());
hset.add(b);
}
int middle = -1;
for(int k = 1; k <= n; k++){
if(!hset.contains(k)){
middle = k;
break;
}
}
for(int k = 1; k <= n; k++){
if(k == middle)
continue;
out.println(k + " " + middle);
}
}
out.close();
}
}
|
1583
|
B
|
Omkar and Heavenly Tree
|
Lord Omkar would like to have a tree with $n$ nodes ($3 \le n \le 10^5$) and has asked his disciples to construct the tree. However, Lord Omkar has created $m$ ($\mathbf{1 \le m < n}$) restrictions to ensure that the tree will be as heavenly as possible.
A tree with $n$ nodes is a connected undirected graph with $n$ nodes and $n-1$ edges. Note that for any two nodes, there is exactly one simple path between them, where a simple path is a path between two nodes that does not contain any node more than once.
Here is an example of a tree:
A restriction consists of $3$ pairwise distinct integers, $a$, $b$, and $c$ ($1 \le a,b,c \le n$). It signifies that node $b$ cannot lie on the simple path between node $a$ and node $c$.
Can you help Lord Omkar and become his most trusted disciple? You will need to find heavenly trees for multiple sets of restrictions. It can be shown that a heavenly tree will always exist for any set of restrictions under the given constraints.
|
Because the number of restrictions is less than $n$, there is guaranteed to be at least one value from $1$ to $n$ that is not a value of $b$ for any of the restrictions. Find a value that is not $b$ for all of the restrictions and construct a tree that is a "star" with that value in the middle. An easy way to do this is to make an edge from that value to every other number from $1$ to $n$.
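The star construction is a few lines in any language; a Python sketch (function name mine):

```python
def heavenly_tree(n, restrictions):
    # restrictions: list of (a, b, c) triples; m < n guarantees (by
    # pigeonhole) that some value in 1..n is never a middle node b,
    # and a star centred at such a value violates no restriction:
    # the only interior node of any simple path in a star is its centre.
    banned = {b for _, b, _ in restrictions}
    center = next(v for v in range(1, n + 1) if v not in banned)
    return [(center, v) for v in range(1, n + 1) if v != center]

edges = heavenly_tree(5, [(1, 2, 3), (3, 4, 1)])
assert len(edges) == 4
assert edges[0][0] not in {2, 4}   # centre avoids every b
```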
|
[
"constructive algorithms",
"trees"
] | 1,200
|
//Praise our lord and saviour qlf9
//DecimalFormat f = new DecimalFormat("##.00");
import java.util.*;
import java.io.*;
import java.math.*;
import java.text.*;
public class CCorrect{
public static void main(String[] omkar) throws Exception
{
BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
StringTokenizer st = new StringTokenizer(in.readLine());
StringBuilder sb = new StringBuilder();
int n = Integer.parseInt(st.nextToken());
int m = Integer.parseInt(st.nextToken());
int[][] grid = new int[n][m];
String s;
char cc;
for(int i = 0; i < n; i++)
{
s = in.readLine();
for(int j = 0; j < m; j++)
{
if(s.charAt(j) == 'X')
{
grid[i][j] = 1;
}
else
{
grid[i][j] = 0;
}
}
}
int[] numBad = new int[m+1];
int total = 0;
for(int j = 0; j < m-1; j++)
{
for(int i = 1; i< n; i++)
{
if(grid[i][j] == 1 && grid[i-1][j+1] == 1)
{
total++;
}
}
numBad[j+1] = total;
}
numBad[m] = total;
int q = Integer.parseInt(in.readLine());
int a, b;
for(int i = 0; i < q; i++)
{
st = new StringTokenizer(in.readLine());
a = Integer.parseInt(st.nextToken())-1;
b = Integer.parseInt(st.nextToken())-1;
if(numBad[b]-numBad[a] == 0)
{
sb.append("yes\n");
}
else
{
sb.append("no\n");
}
}
System.out.println(sb);
}
}
|
1583
|
C
|
Omkar and Determination
|
The problem statement looms below, filling you with determination.
Consider a grid in which some cells are empty and some cells are filled. Call a cell in this grid \textbf{exitable} if, starting at that cell, you can exit the grid by moving up and left through only empty cells. This includes the cell itself, so all filled in cells are not exitable. Note that you can exit the grid from any leftmost empty cell (cell in the first column) by going left, and from any topmost empty cell (cell in the first row) by going up.
Let's call a grid \textbf{determinable} if, given only which cells are exitable, we can exactly determine which cells are filled in and which aren't.
You are given a grid $a$ of dimensions $n \times m$ , i. e. a grid with $n$ rows and $m$ columns. You need to answer $q$ queries ($1 \leq q \leq 2 \cdot 10^5$). Each query gives two integers $x_1, x_2$ ($1 \leq x_1 \leq x_2 \leq m$) and asks whether the subgrid of $a$ consisting of the columns $x_1, x_1 + 1, \ldots, x_2 - 1, x_2$ is determinable.
|
First notice that in a determinable grid, for any cell, it can't be that both the cell above it and the cell to its left are filled. If that were the case, then the cell wouldn't be exitable regardless of whether it was filled or not, and so we couldn't determine whether it was filled. Now notice that in any grid with the above property, namely that from each cell you can move either up or to the left into an empty cell (or both), every empty cell must be exitable - just keep moving either up or to the left, whichever is possible, until you exit the grid. It follows that for any grid satisfying that property, given only which cells are exitable, starting from the outermost cells you will be able to determine that the nonexitable cells are filled, which implies that the next cells satisfy the property, which further implies that the nonexitable ones there are filled, and so on. This allows you to determine the entire grid (since the exitable cells are obviously empty). Therefore, a grid being determinable is equivalent to all of its cells having an empty cell immediately above and/or to the left of it. You can check this for arbitrary subgrids by precomputing two dimensional prefix sums of the cells that violate this property, then checking whether the sum for a given subgrid is $0$. This solution is $O(nm + q)$. The actual problem only asked for subgrids that contained every row, which allows for a somewhat simpler implementation.
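For the simpler column-range version that was actually asked, the check reduces to one-dimensional prefix sums over "bad" adjacent column pairs, mirroring the idea above. A sketch (names mine; `'X'` filled, `'.'` empty):

```python
def build_checker(grid):
    # A pair of adjacent columns (j, j+1) is "bad" if some cell in column
    # j+1 has both its upper and its left neighbour filled; a subgrid is
    # determinable iff it contains no bad pair entirely inside it.
    n, m = len(grid), len(grid[0])
    bad = [0] * (m + 1)          # bad[j]: bad pairs with left column < j
    total = 0
    for j in range(m - 1):
        for i in range(1, n):
            if grid[i][j] == 'X' and grid[i - 1][j + 1] == 'X':
                total += 1
        bad[j + 1] = total
    bad[m] = total

    def determinable(x1, x2):
        # columns x1..x2, 1-indexed inclusive
        return bad[x2 - 1] - bad[x1 - 1] == 0

    return determinable

det = build_checker([".X.", "X.X"])
assert det(1, 3) is False   # cell (2,2) has filled up- and left-neighbours
assert det(2, 3) is True
assert det(1, 1) is True
```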
|
[
"data structures",
"dp"
] | 1,700
|
/*
“I just don't have any patience for people who would rather hurt others instead of facing their own reality.”
– Mikoto Misaka.
*/
import static java.lang.Math.*;
import static java.lang.System.out;
import java.util.*;
import java.io.*;
import java.math.*;
public class MeaningOfMikotoMisaka
{
public static void main(String hi[]) throws Exception
{
BufferedReader infile = new BufferedReader(new InputStreamReader(System.in));
StringTokenizer st = new StringTokenizer(infile.readLine());
int N = Integer.parseInt(st.nextToken());
//find p[N-1]
int[] res = new int[N];
res[N-1] = N;
for(int v=1; v < N; v++)
{
int[] temp = new int[N];
Arrays.fill(temp, v+1);
temp[N-1] = 1;
if(query(temp, infile) == 0)
{
res[N-1] = v;
break;
}
}
//find all p[i] < p[N-1]
for(int increase=1; increase < res[N-1]; increase++)
{
int[] temp = new int[N];
Arrays.fill(temp, increase+1);
temp[N-1] = 1;
int loc = query(temp, infile)-1;
res[loc] = res[N-1]-increase;
}
//find all p[i] > p[N-1]
for(int decrease=1; decrease <= N-res[N-1]; decrease++)
{
int[] temp = new int[N];
Arrays.fill(temp, 1);
temp[N-1] += decrease;
int loc = query(temp, infile)-1;
res[loc] = res[N-1]+decrease;
}
StringBuilder sb = new StringBuilder("! ");
for(int x: res)
sb.append(x+" ");
System.out.println(sb);
}
public static int query(int[] arr, BufferedReader infile) throws Exception
{
StringBuilder owo = new StringBuilder("? ");
for(int i=0; i < arr.length; i++)
owo.append(arr[i]+" ");
System.out.println(owo);
return Integer.parseInt(infile.readLine());
}
}
|
1583
|
D
|
Omkar and the Meaning of Life
|
It turns out that the meaning of life is a permutation $p_1, p_2, \ldots, p_n$ of the integers $1, 2, \ldots, n$ ($2 \leq n \leq 100$). Omkar, having created all life, knows this permutation, and will allow you to figure it out using some queries.
A query consists of an array $a_1, a_2, \ldots, a_n$ of integers between $1$ and $n$. $a$ is \textbf{not} required to be a permutation. Omkar will first compute the pairwise sum of $a$ and $p$, meaning that he will compute an array $s$ where $s_j = p_j + a_j$ for all $j = 1, 2, \ldots, n$. Then, he will find the smallest index $k$ such that $s_k$ occurs more than once in $s$, and answer with $k$. If there is no such index $k$, then he will answer with $0$.
You can perform at most $2n$ queries. Figure out the meaning of life $p$.
|
Solution 1. We will determine, for each $j$, the index $\text{next}_j$ such that $p_{\text{next}_j} = p_j + 1$. For each index $j$, perform a query with all $1$s except that $a_j = 2$; a collision happens exactly between $j$ and the index holding $p_j + 1$, so if the result $k$ exists and $k \neq j$, we set $\text{next}_j = k$. Also, for each index $j$, perform a query with all $2$s except that $a_j = 1$; here a collision happens between $j$ and the index holding $p_j - 1$, so if the result $k$ exists and $k \neq j$, we set $\text{next}_k = j$. For each $j$ such that $p_j \neq n$, either $\text{next}_j < j$, in which case the first set of queries finds $\text{next}_j$ (the smaller index is returned), or $\text{next}_j > j$, in which case the second set of queries (asked at $\text{next}_j$) finds it. Therefore we fully determine the array $\text{next}$. To compute $p$, note that the index $j$ such that $p_j = 1$ does not appear in the array $\text{next}$. Find this $j$ and set $p_j = 1$; then set $j$ to $\text{next}_j$, set $p_j = 2$, and so on. The total number of queries used is $2n$, which is exactly the limit.
Solution 2. We will first determine $q_j = p_j - p_n$ for all $j$. For each value of $x$ from $-(n - 1)$ to $n - 1$ (these are the only possible values of $p_j - p_n$): if $x$ is nonnegative, make a query where all of $a$ is $1$ except that $a_n = x + 1$; otherwise, make a query where all of $a$ is $1 - x$ except that $a_n = 1$. If the result $k$ exists, then $q_k = x$. Note that there is at most one $k$ such that $q_k = x$ for each $x$, so we fully determine $q$ this way (obviously we set $q_n = 0$ manually). $p_n$ is then equal to the number of $j$ such that $q_j \leq 0$. Using this, we can determine the rest of $p$ as $p_j = q_j + p_n$. The total number of queries used is $2n - 1$, which is $1$ below the limit.
Bonus question: optimize this solution to use $n$ queries.
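Since the problem is interactive, Solution 2 can be illustrated offline by replacing the judge with a local oracle over a known permutation. This is a sketch only (function names mine); a real submission would read and write queries instead of calling `oracle`.

```python
from collections import Counter
import itertools

def oracle(p, a):
    # Local stand-in for the judge: smallest 1-based index whose value in
    # s = p + a repeats, or 0 if all sums are distinct.
    s = [x + y for x, y in zip(p, a)]
    cnt = Counter(s)
    for j, v in enumerate(s):
        if cnt[v] > 1:
            return j + 1
    return 0

def recover(p):
    # Solution 2: determine q_j = p_j - p_n with one query per value x.
    n = len(p)
    q = [None] * n
    q[n - 1] = 0
    for x in range(-(n - 1), n):
        if x == 0:
            continue              # q_n = 0 is known; no other j has q_j = 0
        if x > 0:
            a = [1] * n
            a[n - 1] = x + 1      # collision: p_j + 1 = p_n + x + 1
        else:
            a = [1 - x] * n
            a[n - 1] = 1          # collision: p_j + 1 - x = p_n + 1
        k = oracle(p, a)
        if k:
            q[k - 1] = x
    pn = sum(1 for v in q if v <= 0)  # p_n = #{j : p_j <= p_n}
    return [v + pn for v in q]

for p in itertools.permutations([1, 2, 3, 4]):
    assert recover(list(p)) == list(p)
```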
|
[
"constructive algorithms",
"greedy",
"interactive"
] | 1,800
|
//stan hu tao
//come to K-expo!!!
//watch me get carried in nct ridin
import static java.lang.Math.max;
import static java.lang.Math.min;
import static java.lang.Math.abs;
import static java.lang.System.out;
import java.util.*;
import java.io.*;
import java.math.*;
public class MomentOfBloomModel
{
static ArrayDeque<Integer>[] edges;
static ArrayDeque<Integer>[] tree;
public static void main(String hi[]) throws Exception
{
BufferedReader infile = new BufferedReader(new InputStreamReader(System.in));
StringTokenizer st = new StringTokenizer(infile.readLine());
int N = Integer.parseInt(st.nextToken());
int M = Integer.parseInt(st.nextToken());
//FOR SAMPLE CASE HIDING ONLY
ArrayList<Integer> input = new ArrayList<Integer>();
input.add(N); input.add(M);
edges = new ArrayDeque[N+1];
for(int i=1; i <= N; i++)
edges[i] = new ArrayDeque<Integer>();
for(int i=0; i < M; i++)
{
st = new StringTokenizer(infile.readLine());
int a = Integer.parseInt(st.nextToken());
int b = Integer.parseInt(st.nextToken());
edges[a].add(b); edges[b].add(a);
input.add(a); input.add(b);
}
int[] parity = new int[N+1];
int Q = Integer.parseInt(infile.readLine());
input.add(Q);
int[] queries = new int[2*Q];
for(int i=1; i < 2*Q; i+=2)
{
st = new StringTokenizer(infile.readLine());
int a = Integer.parseInt(st.nextToken());
int b = Integer.parseInt(st.nextToken());
queries[i-1] = a;
queries[i] = b;
parity[a] ^= 1;
parity[b] ^= 1;
input.add(a); input.add(b);
}
int countExtra = 0;
for(int x: parity)
countExtra+=x;
if(countExtra > 0)
{
System.out.println("NO");
System.out.println(countExtra/2);
return;
}
//get dfs tree
tree = new ArrayDeque[N+1];
for(int i=1; i <= N; i++)
tree[i] = new ArrayDeque<Integer>();
seen = new boolean[N+1];
dfs(1, 0);
System.out.println("YES");
StringBuilder sb = new StringBuilder();
for(int qq=1; qq < 2*Q; qq+=2)
{
int a = queries[qq-1];
int b = queries[qq];
//is this too slow? probably not
int[] parents = new int[N+1];
ArrayDeque<Integer> q = new ArrayDeque<Integer>();
q.add(a); parents[a] = -1;
bfs:while(q.size() > 0)
{
int curr = q.poll();
for(int next: tree[curr])
if(parents[next] == 0)
{
parents[next] = curr;
if(next == b)
break bfs;
q.add(next);
}
}
ArrayList<Integer> path = new ArrayList<Integer>();
int curr = b;
while(curr != a)
{
path.add(curr);
curr = parents[curr];
}
path.add(a);
Collections.reverse(path);
sb.append(path.size()+"\n");
for(int x: path)
sb.append(x+" ");
sb.append("\n");
}
System.out.print(sb);
}
static boolean[] seen;
public static void dfs(int curr, int par)
{
seen[curr] = true;
for(int next: edges[curr])
if(!seen[next])
{
tree[curr].add(next);
tree[next].add(curr);
dfs(next, curr);
}
}
}
|
1583
|
E
|
Moment of Bloom
|
She does her utmost to flawlessly carry out a person's last rites and preserve the world's balance of yin and yang.
Hu Tao, being the little prankster she is, has tried to scare you with this graph problem! You are given a connected undirected graph of $n$ nodes with $m$ edges. You also have $q$ queries. Each query consists of two nodes $a$ and $b$.
Initially, all edges in the graph have a weight of $0$. For each query, you must choose a simple path starting from $a$ and ending at $b$. Then you add $1$ to every edge along this path. Determine if it's possible, after processing all $q$ queries, for all edges in this graph to have an even weight. If so, output the choice of paths for each query.
If it is not possible, determine the smallest number of extra queries you could add to make it possible. It can be shown that this number will not exceed $10^{18}$ under the given constraints.
A simple path is defined as any path that does not visit a node more than once.
An edge is said to have an even weight if its value is divisible by $2$.
|
Let $f_v$ be the number of times $v$ appears in the $q$ queries. If $f_v$ is odd for any $1 \leq v \leq n$, then there does not exist an assignment of paths that will force all even edge weights. To see why, notice that a simple path with an endpoint at $v$ uses exactly one edge adjacent to $v$, while a path passing through $v$ uses two. Hence the total weight added to the edges adjacent to $v$ has the same parity as $f_v$, and if $f_v$ is odd, at least one edge adjacent to $v$ will have an odd weight. It turns out that this is the only condition that we need to check. In other words, if $f_v$ is even for all $v$, then there exists an assignment of paths that forces all edge weights to be even. Assume now that every $f_v$ is even. We can find a solution by doing the following: take any spanning tree of the graph and assign each query to the path from $a$ to $b$ in this tree. An intuitive way of thinking about this is the following. Consider the case where the spanning tree is a line. Then each query becomes a range, and we are checking whether every point in this range is covered an even number of times. For all points to be covered an even number of times, every point should occur an even number of times among the query endpoints. To generalize this to a tree: when the first path $a_1$ to $b_1$ is incremented, in order to make these values even again, we need later paths to also overlap the segment from $a_1$ to $b_1$. One way this can be done is with two paths $a_1$ to $c$ and $c$ to $b_1$. Notice that even if a new segment that makes the $a_1$ to $b_1$ path even makes some other edges odd, the later queries will always fix those edges.
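The feasibility test above depends only on the parity of the endpoint counts; a minimal sketch (the helper name is mine, not from the solution code):

```python
from collections import Counter

def min_extra_queries(queries):
    """Given the list of (a, b) query endpoints, return 0 if every f_v is
    even (a valid assignment of paths exists), otherwise the minimum number
    of extra queries, which is (number of odd-count vertices) / 2."""
    parity = Counter()
    for a, b in queries:
        parity[a] ^= 1
        parity[b] ^= 1
    # Each extra query can fix the parity of exactly two vertices.
    return sum(parity.values()) // 2
```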
|
[
"constructive algorithms",
"dfs and similar",
"graph matchings",
"graphs",
"greedy",
"trees"
] | 2,200
|
//stan hu tao
//come to K-expo!!!
//watch me get carried in nct ridin
import static java.lang.Math.max;
import static java.lang.Math.min;
import static java.lang.Math.abs;
import static java.lang.System.out;
import java.util.*;
import java.io.*;
import java.math.*;
public class DefenderModel
{
public static void main(String hi[]) throws Exception
{
BufferedReader infile = new BufferedReader(new InputStreamReader(System.in));
StringTokenizer st = new StringTokenizer(infile.readLine());
int N = Integer.parseInt(st.nextToken());
int K = Integer.parseInt(st.nextToken());
int colors = 1;
int curr = K;
while(curr < N)
{
curr *= K;
colors++;
}
System.out.println(colors);
ArrayList<Integer>[] values = new ArrayList[N];
for(int v=0; v < N; v++)
{
values[v] = new ArrayList<Integer>();
int temp = v;
while(temp > 0)
{
values[v].add(temp%K);
temp /= K;
}
if(v == 0)
values[v].add(0);
}
StringBuilder sb = new StringBuilder();
for(int a=0; a < N; a++)
for(int b=a+1; b < N; b++)
{
if(values[a].size() < values[b].size())
{
int choice = values[b].size();
sb.append(choice+" ");
}
else
{
int choice = values[a].size()-1;
				while(values[a].get(choice).intValue() == values[b].get(choice).intValue()) // compare digit values, not Integer references
choice--;
choice++;
sb.append(choice+" ");
}
}
System.out.println(sb);
}
}
|
1583
|
F
|
Defender of Childhood Dreams
|
Even if you just leave them be, they will fall to pieces all by themselves. So, someone has to protect them, right?
You find yourself playing with Teucer again in the city of Liyue. As you take the eccentric little kid around, you notice something interesting about the structure of the city.
Liyue can be represented as a directed graph containing $n$ nodes. Nodes are labeled from $1$ to $n$. There is a directed edge from node $a$ to node $b$ if and only if $a < b$.
A path between nodes $a$ and $b$ is defined as a sequence of edges such that you can start at $a$, travel along all of these edges in the corresponding direction, and end at $b$. The length of a path is defined by the number of edges. A rainbow path of length $x$ is defined as a path in the graph such that there exists at least 2 distinct colors among the set of $x$ edges.
Teucer's favorite number is $k$. You are curious about the following scenario: If you were to label each edge with a color, what is the minimum number of colors needed to ensure that all paths of length $k$ or longer are rainbow paths?
Teucer wants to surprise his older brother with a map of Liyue. He also wants to know a valid coloring of edges that uses the minimum number of colors. Please help him with this task!
|
The minimum number of colors that you need is $\lceil \log_k n \rceil$. To achieve this, you can divide the nodes into $k$ contiguous subsegments of equal size (or as close as possible). Any edge between nodes in different subsegments, you color with $1$ for example. Then you recursively solve those subsegments excluding the color that you used. All edges of a given color connect different subsegments within a single bigger subsegment (or the whole array); a monochromatic path therefore visits these subsegments in increasing order, so it can have length at most $k - 1$. The highest recursion depth is $\lceil \log_k n \rceil$, so this is the number of colors used as desired. We will now prove that $\lceil \log_k n \rceil$ colors are necessary. We will do this by equivalently proving that if you have a valid coloring using $c$ colors, then $n$ is at most $k^c$. This, in turn, we will prove by induction on $c$. The base case is $c = 0$. If you have no colors, then you can't color any edges, so $n$ must be at most $1 = k^0$. For the inductive step, we assume that any valid coloring using at most $c - 1$ colors can have at most $k^{c - 1}$ nodes, and we desire to show that any valid coloring using at most $c$ colors can have at most $k^c$ nodes. To do this, we will choose an arbitrary color, then partition all our nodes into at most $k$ groups such that inside each group, there are no edges of that color. It follows that each group is colored using at most $c - 1$ colors and so can have at most $k^{c - 1}$ nodes, so overall we can have at most $k \cdot k^{c - 1} = k^c$ nodes. The partition is defined as follows: we will partition the nodes into the sets $s_0, s_1, \ldots, s_{k - 1}$ where $s_j$ contains all nodes $a$ such that the length of the longest path ending in $a$ using only edges of our chosen color is exactly $j$. This length is at most $k - 1$ since our coloring can't have paths of length $k$ of a single color. 
Furthermore, there can't be edges of our chosen color inside a set $s_j$, because otherwise, appending such an edge to the longest monochromatic path ending at its start point would give a path of length $j + 1$ ending at its endpoint, contradicting that both endpoints lie in $s_j$. Therefore, any valid coloring using $c$ colors can have at most $k^c$ nodes, and so we must use at least $\lceil \log_k n \rceil$ colors, which matches the construction we have already seen.
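The recursive construction can be phrased without recursion: write each node index in base $k$ using $\lceil \log_k n \rceil$ digits and color an edge by the most significant digit in which its endpoints differ. A small illustrative sketch (function names are mine, not from the solution above):

```python
def num_colors(n, k):
    # Smallest c with k^c >= n, i.e. ceil(log_k n) for n >= 2.
    c = 1
    while k ** c < n:
        c += 1
    return c

def edge_color(a, b, k, c):
    # a < b are 0-indexed nodes; the color is the position (1..c, most
    # significant first) of the highest base-k digit where a and b differ.
    # Edges of one color then only connect the k blocks of one recursion
    # level, so a monochromatic path has length at most k - 1.
    for d in range(c - 1, -1, -1):
        if a // k ** d != b // k ** d:
            return c - d
    return c  # unreachable for a != b when both fit in c digits
```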
|
[
"bitmasks",
"constructive algorithms",
"divide and conquer"
] | 2,500
|
//khodaya khodet komak kon
# include <bits/stdc++.h>
using namespace std;
typedef long long ll;
typedef long double ld;
typedef pair <int, int> pii;
typedef pair <pii, int> ppi;
typedef pair <int, pii> pip;
typedef pair <pii, pii> ppp;
typedef pair <ll, ll> pll;
# define A first
# define B second
# define endl '\n'
# define sep ' '
# define all(x) x.begin(), x.end()
# define kill(x) return cout << x << endl, 0
# define SZ(x) int(x.size())
# define lc id << 1
# define rc id << 1 | 1
# define fast_io ios::sync_with_stdio(0);cin.tie(0); cout.tie(0);
ll power(ll a, ll b, ll md) {return (!b ? 1 : (b & 1 ? a * power(a * a % md, b / 2, md) % md : power(a * a % md, b / 2, md) % md));}
const int xn = 4e5 + 10;
const int xm = - 20 + 10;
const int sq = 320;
const int inf = 1e9 + 10;
const ll INF = 1e18 + 10;
const ld eps = 1e-15;
const int mod = 1e9 + 7;//998244353;
const int base = 257;
int n, dp[xn], ans, fen[xn], q, seg[xn << 2], pd[xn], b[xn];
pii a[xn];
vector <pii> Q[xn];
bool flag[xn];
void mofen(int pos, int val = 1){
for (pos += 5; pos < xn; pos += pos & - pos)
fen[pos] = (fen[pos] + val) % mod;
}
int gefen(int pos){
int res = 0;
for (pos += 5; pos > 0; pos -= pos & - pos)
res = (res + fen[pos]) % mod;
return res;
}
void modify(int id, int l, int r, int pos, int val){
seg[id] = max(seg[id], val);
if (r - l == 1)
return;
int mid = l + r >> 1;
if (pos < mid)
modify(lc, l, mid, pos, val);
else
modify(rc, mid, r, pos, val);
}
int get(int id, int l, int r, int ql, int qr){
if (qr <= l || r <= ql || qr <= ql)
return 0;
if (ql <= l && r <= qr)
return seg[id];
int mid = l + r >> 1;
return max(get(lc, l, mid, ql, qr), get(rc, mid, r, ql, qr));
}
int main(){
fast_io;
cin >> n;
for (int i = 1; i <= n; ++ i){
int l, r;
cin >> l >> r;
a[r] = {l, i};
}
cin >> q;
while (q --){
int x;
cin >> x;
flag[x] = true;
}
for (int i = 1; i <= n + n; ++ i){
if (!a[i].A)
continue;
dp[i] = (gefen(i) - gefen(a[i].A) + mod + 1) % mod;
mofen(a[i].A, dp[i]);
if (!flag[a[i].B])
continue;
int mx = get(1, 0, xn, a[i].A, i);
b[i] = mx;
Q[mx].push_back({a[i].A, i});
modify(1, 0, xn, a[i].A, i);
}
int mx = get(1, 0, xn, 0, xn);
Q[mx].push_back({0, n + n + 1});
fill(fen, fen + xn, 0);
for (int i = 1; i <= n + n; ++ i){
for (pii x : Q[i])
pd[x.B] = (gefen(i) - gefen(x.A) + mod) % mod;
if (a[i].A)
mofen(a[i].A, dp[i]);
}
for (int i = 1; i <= n + n; ++ i)
if (flag[a[i].B])
pd[i] = (pd[i] + pd[b[i]] + 1) % mod;
cout << (pd[n + n + 1] + pd[mx]) % mod << endl;
return 0;
}
|
1583
|
G
|
Omkar and Time Travel
|
El Psy Kongroo.
Omkar is watching Steins;Gate.
In Steins;Gate, Okabe Rintarou needs to complete $n$ tasks ($1 \leq n \leq 2 \cdot 10^5$). Unfortunately, he doesn't know when he needs to complete the tasks.
Initially, the time is $0$. Time travel will now happen according to the following rules:
- For each $k = 1, 2, \ldots, n$, Okabe will realize at time $b_k$ that he was supposed to complete the $k$-th task at time $a_k$ ($a_k < b_k$).
- When he realizes this, if $k$-th task was already completed at time $a_k$, Okabe keeps the usual flow of time. Otherwise, he time travels to time $a_k$ then immediately completes the task.
- If Okabe time travels to time $a_k$, all tasks completed after this time will become incomplete again. That is, for every $j$, if $a_j>a_k$, the $j$-th task will become incomplete, if it was complete (if it was incomplete, nothing will change).
- Okabe has bad memory, so he can time travel to time $a_k$ \textbf{only immediately after} getting to time $b_k$ and learning that he was supposed to complete the $k$-th task at time $a_k$. That is, even if Okabe already had to perform $k$-th task before, he wouldn't remember it before stumbling on the info about this task at time $b_k$ again.
Please refer to the notes for an example of time travelling.
There is a certain set $s$ of tasks such that the first moment that all of the tasks in $s$ are simultaneously completed (regardless of whether any other tasks are currently completed), a funny scene will take place. Omkar loves this scene and wants to know how many times Okabe will time travel before this scene takes place. Find this number modulo $10^9 + 7$. It can be proven that eventually all $n$ tasks will be completed and so the answer always exists.
|
Each time travel that Okabe performs creates a new set of completed tasks. We will take this as given, but it can be proven using ideas from the rest of the proof. It thus suffices to count the number of distinct sets of tasks that come before the first one that contains $s$ as a subset. We should first figure out what kinds of sets will ever appear as a set of completed tasks at all; we will call these sets valid. We will represent tasks below as intervals $[a_k, b_k]$. First note that clearly not all sets are valid. If you have the intervals $[1, 2]$ and $[3, 4]$, clearly the set $\{[3, 4]\}$ is not a valid set. You can actually see that the same is true for the intervals $[1, 3]$ and $[2, 4]$ ($\{[2, 4]\}$ is not a valid set) by working through Okabe's activities. This generalizes in a very important way: if there are two intervals $[a, b]$ and $[c, d]$ with $a < c$ and $b < d$, then any valid set that contains $[c, d]$ must also contain $[a, b]$. This is because if $d$ is reached to complete the task $[c, d]$, then $b$ will already have been reached to complete the task $[a, b]$ (since $b < d$), and any time travel that undoes $[a, b]$ must also undo $[c, d]$ (since $a < c$). The above property is actually equivalent to being a valid set; we have already seen that it is necessary, and from the next part of the tutorial we will have a way to prove that it is sufficient, but you should have some intuition for why this is true. In order to solve the problem, we want to think about how to determine, given two valid sets, which one Okabe will encounter first. First, for any two valid sets, consider their last interval (i. e. the interval with greatest value of $b$). If these are different, then the one whose largest interval has the greater $b$ will come later. This is because for any valid set, the largest value of $b$ in any interval in that set is equal to the largest value of $b$ that Okabe has ever encountered. 
You can see this by noticing that the only way to undo a task is to perform a task with greater value of $b$; any task with smaller $b$ is either contained inside the first task, in which case it won't undo it, or also has a smaller value of $a$, in which case by the above property of valid sets it must already be completed. Since the maximum value of $b$ that Okabe has ever encountered will only get larger as his activities continue, it follows that the valid set with larger maximum value of $b$ must occur later. We can further see that for any two valid sets where the interval with largest $b$ is equal, we can discard that interval and consider the next largest interval from both valid sets. This gives us an ordering of the valid sets. We can prove that the aforementioned property is sufficient for being a valid set by showing that at any valid set, the next valid set encountered is the immediately next one in the ordering. The details are left to the reader. In order to use this to finally solve the problem, it is useful to represent valid sets in a different way. Specifically, we can represent a valid set $v$ as the set $u$ of intervals that aren't implied to be in $v$ by any other element of $v$. By thinking about the above property, you can see that $u$ is actually a set of recursively containing intervals; i. e. it contains an interval, then another interval inside that one, then another interval inside that one, etc. We will consider the above representation to be ordered, so that the last interval is the one that contains all the others, and the first interval is the one inside of all the others. We can now solve the problem. For the given set $s$, first determine its above representation; this can be done easily using sorting, discarding any redundant interval. 
The valid sets, also in their above representation, that come before $s$ are thus the ones that, excluding their common suffix with $s$, have a last interval whose $b$ is smaller than the $b$ for the last interval in $s$ excluding the common suffix. We can therefore solve this as follows. We will count the number of above sets for each possible common suffix. For each suffix, let the last interval in $s$ not included in the suffix be $x$, and let the first interval in the suffix be $y$. The number of sets for this suffix is equal to the number of recursively containing sets that have a largest interval that is contained in $y$ whose value of $b$ is less than the value of $b$ for $x$. We can compute this as follows. We will maintain a range sum query data structure such as a binary indexed tree. The data structure will store, at each $a$, the number of recursively containing subsets whose largest interval is the one with that $a$ (once that interval has been processed). We will process the intervals in increasing order of $b$. For each interval $x$, to put it into the data structure, we can simply perform a range query of literally the interval $x$, and add $1$ to the result. That will be equal to the number of recursively containing sets with $x$ as the largest interval, so we simply insert that into the data structure at the value of $a$ of $x$. Before putting $x$ into the data structure, if it is in the representation of $s$, then we can find the answer for the suffix of $s$ that contains all intervals to the right of $x$ as follows. Since the intervals currently in the data structure are precisely the ones with value of $b$ less than $x$, the answer for that suffix is simply the range query of $y$ where $y$ is the immediately next interval after $x$ in $s$. We therefore perform this range query then add it to our answer. Note that all of this computation doesn't count $s$ itself, but it does count the empty set which doesn't need to be counted, so our answer is correct. 
The runtime of this solution is $O(n \log n)$.
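The range-sum structure the tutorial refers to can be an ordinary Fenwick (binary indexed) tree with point updates and prefix sums, keeping values modulo $10^9 + 7$; a minimal sketch (class and method names are mine):

```python
class Fenwick:
    # Binary indexed tree for point add / range sum, as used in the
    # counting scheme above (all values kept modulo 10^9 + 7).
    def __init__(self, n, mod=10**9 + 7):
        self.n, self.mod, self.t = n, mod, [0] * (n + 1)

    def add(self, i, v):          # 1-indexed point update
        while i <= self.n:
            self.t[i] = (self.t[i] + v) % self.mod
            i += i & -i

    def prefix(self, i):          # sum of positions 1..i
        s = 0
        while i > 0:
            s = (s + self.t[i]) % self.mod
            i -= i & -i
        return s

    def range_sum(self, l, r):    # sum of positions l..r
        return (self.prefix(r) - self.prefix(l - 1)) % self.mod
```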
|
[
"data structures",
"math"
] | 3,000
|
//
// Created by Danny Mittal on 4/11/21.
//
#define ll int
#define MAXN 200000
#define LGMAXN 18
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
struct edge {
int from;
int to;
ll capacity;
ll toll;
};
bool compareEdges(edge e, edge f) {
return f.capacity < e.capacity;
}
struct query {
int from;
ll vehicles;
int index;
};
bool compareQueries(query query1, query query2) {
return query2.vehicles < query1.vehicles;
}
ll enjoyment[MAXN + 1];
vector<edge> adj[MAXN + 1];
edge edges[MAXN - 1];
query queries[MAXN];
int parent[LGMAXN][MAXN + 1];
ll maxToll[LGMAXN][MAXN + 1];
int depth[MAXN + 1];
void dfs(int a) {
for (edge e : adj[a]) {
if (e.to != parent[0][a]) {
parent[0][e.to] = a;
maxToll[0][e.to] = e.toll;
for (int d = 1; d < LGMAXN; d++) {
parent[d][e.to] = parent[d - 1][parent[d - 1][e.to]];
maxToll[d][e.to] = max(maxToll[d - 1][e.to], maxToll[d - 1][parent[d - 1][e.to]]);
}
depth[e.to] = depth[a] + 1;
dfs(e.to);
}
}
}
ll maxTollOnPath(int a, int b) {
if (depth[b] < depth[a]) {
swap(a, b);
}
ll res = 0;
for (int d = LGMAXN - 1; d >= 0; d--) {
if (depth[b] - depth[a] >= 1 << d) {
res = max(res, maxToll[d][b]);
b = parent[d][b];
}
}
if (a == b) {
return res;
}
for (int d = LGMAXN - 1; d >= 0; d--) {
if (parent[d][b] != parent[d][a]) {
res = max(res, max(maxToll[d][a], maxToll[d][b]));
a = parent[d][a];
b = parent[d][b];
}
}
res = max(res, max(maxToll[0][a], maxToll[0][b]));
return res;
}
int dsu[MAXN + 1];
ll maxTollInside[MAXN + 1];
int find(int u) {
if (dsu[dsu[u]] != dsu[u]) {
dsu[u] = find(dsu[u]);
}
return dsu[u];
}
ll answerEnjoyment[MAXN];
ll answerToll[MAXN];
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
int n, q;
cin >> n >> q;
for (int a = 1; a <= n; a++) {
cin >> enjoyment[a];
dsu[a] = a;
maxTollInside[a] = 0;
}
for (int j = 0; j < n - 1; j++) {
int a, b;
ll c, t;
cin >> a >> b >> c >> t;
edges[j] = {a, b, c, t};
adj[a].push_back({a, b, c, t});
adj[b].push_back({b, a, c, t});
}
dfs(1);
for (int j = 0; j < q; j++) {
ll v;
int x;
cin >> v >> x;
queries[j] = {x, v, j};
}
sort(edges, edges + n - 1, compareEdges);
sort(queries, queries + q, compareQueries);
int k = 0;
for (int h = 0; h < q; h++) {
int x = queries[h].from;
ll v = queries[h].vehicles;
int j = queries[h].index;
while (k < n - 1 && edges[k].capacity >= v) {
int u = find(edges[k].from);
int w = find(edges[k].to);
if (enjoyment[u] > enjoyment[w]) {
dsu[w] = u;
} else if (enjoyment[w] > enjoyment[u]) {
dsu[u] = w;
} else {
dsu[w] = u;
maxTollInside[u] = max(max(maxTollInside[u], maxTollInside[w]), maxTollOnPath(u, w));
}
k++;
}
int u = find(x);
answerEnjoyment[j] = enjoyment[u];
answerToll[j] = max(maxTollInside[u], maxTollOnPath(x, u));
}
for (int j = 0; j < q; j++) {
cout << answerEnjoyment[j] << ' ' << answerToll[j] << '\n';
}
cout << flush;
return 0;
}
|
1583
|
H
|
Omkar and Tours
|
Omkar is hosting tours of his country, Omkarland! There are $n$ cities in Omkarland, and, rather curiously, there are exactly $n-1$ bidirectional roads connecting the cities to each other. It is guaranteed that you can reach any city from any other city through the road network.
Every city has an enjoyment value $e$. Each road has a capacity $c$, denoting the maximum number of vehicles that can be on it, and an associated toll $t$. However, the toll system in Omkarland has an interesting quirk: if a vehicle travels on multiple roads on a single journey, they pay only the highest toll of any single road on which they traveled. (In other words, they pay $\max t$ over all the roads on which they traveled.) If a vehicle traverses no roads, they pay $0$ toll.
Omkar has decided to host $q$ tour groups. Each tour group consists of $v$ vehicles starting at city $x$. (Keep in mind that a tour group with $v$ vehicles can travel only on roads with capacity $\geq v$.) Being the tour organizer, Omkar wants his groups to have as much fun as they possibly can, but also must reimburse his groups for the tolls that they have to pay. Thus, for each tour group, Omkar wants to know two things: first, what is the enjoyment value of the city $y$ with maximum enjoyment value that the tour group can reach from their starting city, and second, how much per vehicle will Omkar have to pay to reimburse the entire group for their trip from $x$ to $y$? (This trip from $x$ to $y$ will always be on the shortest path from $x$ to $y$.)
In the case that there are multiple reachable cities with the maximum enjoyment value, Omkar will let his tour group choose which one they want to go to. Therefore, to prepare for all possible scenarios, he wants to know the amount of money per vehicle that he needs to guarantee that he can reimburse the group regardless of which city they choose.
|
First, note that we can process all the queries offline. We can sort the queries by the number of vehicles in the tour group and process them in decreasing order. Now, consider solving a version of the problem with distinct enjoyment values. Then, there will always be exactly one reachable city with the maximum enjoyment value. To solve this, we can maintain a DSU that stores, for each connected component, the maximum enjoyment value and the index of the node with the maximum enjoyment value, which we denote as $enj[u], mxi[u]$ for a connected component $u$. When merging two connected components $u$, $v$, we simply take $enj[u] = \max(enj[u], enj[v]), mxi[u] = \text{arg}\max(mxi[u], mxi[v])$. Now, when processing a query with starting node $a$ and number of vehicles $x$, we define its "connected component" $u$ as the connected component of $a$ in the graph that contains only edges with capacity $\geq x$. Finding the maximum enjoyment value that can be reached from $a$ is simple; we can just output $enj[u]$. To compute the second value, because there is only one node with the maximum enjoyment value ($mxi[u]$), we can find the maximum edge on the path from $a$ to $mxi[u]$ using binary lifting. (Denote this as $\text{maxEdge}(a, mxi[u])$.) We now consider the original problem, with non-distinct enjoyment values. However, here we make the key observation: for each query, the maximum toll edge always lies on either all the paths from node $a$ to any node with maximum enjoyment value, or on a path between two nodes with maximum enjoyment value. To show this, let $\ell$ be the node with maximum enjoyment value whose path to $a$ contains the maximum toll edge, and let $m$ be an arbitrary node with maximum enjoyment value. The path from $\ell$ to $a$ is completely contained in the union of the path from $m$ to $a$ and the path from $\ell$ to $m$. Therefore, the maximum toll edge lies on at least one of these paths as desired. 
Using this observation, we can modify our DSU to handle the general case. First, we now let $mxi[u]$ be the index of any maximum enjoyment value node in $u$. We also add a new variable, $tol[u]$, which denotes the maximum toll cost among all paths between nodes of maximum enjoyment value in connected component $u$. Now, when merging components $u$ and $v$, if $enj[u]$ is not equal to $enj[v]$, then we can simply take all the above values from the component with a larger $enj$. However, if $enj[u] = enj[v]$, we will only need to update $tol[u]$. To do this, we need to consider edges that could possibly connect the two components along with ones within the components, so we let $tol[u] = \max(tol[u], tol[v], \text{maxEdge}(mxi[u], mxi[v]))$. Again, maxEdge can be computed using binary lifting. Now, to process the queries, we will make use of our observation. For a query with starting node $a$ and connected component $u$, the maximum enjoyment value is again $enj[u]$. However, the second value can now be more easily computed by $\max(\text{maxEdge}(a, mxi[u]), tol[u])$. As the preprocessing necessary for binary lifting takes $O(n \log n)$ time, and all the queries can be answered in $O((n + q) \log n)$ time, the overall complexity is $O((n + q) \log n)$, which is fast enough.
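The merge rule for equal maxima can be sketched as follows. Here `max_edge(u, v)` is assumed to return the maximum toll on the tree path between $u$ and $v$ (the real solution computes it with binary lifting), and all names are illustrative:

```python
class DSU:
    # Sketch of the DSU described above: each component root keeps the
    # maximum enjoyment, one representative max-enjoyment node (mxi),
    # and the max toll between max-enjoyment nodes inside it (tol).
    def __init__(self, n, enj, max_edge):
        self.p = list(range(n + 1))
        self.enj = list(enj)                # enj[v] for v = 1..n (index 0 unused)
        self.mxi = list(range(n + 1))
        self.tol = [0] * (n + 1)
        self.max_edge = max_edge

    def find(self, u):
        while self.p[u] != u:
            self.p[u] = self.p[self.p[u]]   # path halving
            u = self.p[u]
        return u

    def union(self, a, b):
        u, v = self.find(a), self.find(b)
        if u == v:
            return
        if self.enj[u] < self.enj[v]:
            u, v = v, u                     # keep the larger maximum at u
        if self.enj[u] == self.enj[v]:
            # Equal maxima: a path between the two representatives may
            # now carry the largest toll between max-enjoyment nodes.
            self.tol[u] = max(self.tol[u], self.tol[v],
                              self.max_edge(self.mxi[u], self.mxi[v]))
        self.p[v] = u
```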
|
[
"data structures",
"divide and conquer",
"sortings",
"trees"
] | 3,300
|
#ifdef DEBUG
#define _GLIBCXX_DEBUG
#endif
//#pragma GCC optimize("O3")
#include <bits/stdc++.h>
using namespace std;
typedef long double ld;
typedef long long ll;
int n;
const int maxN = 2005;
int a[maxN][maxN];
int b[maxN][maxN];
vector<pair<int,int>> by[maxN];
const int dx[3] = {-1, 0, 0};
const int dy[3] = {0, -1, 1};
int his_val[maxN];
signed main() {
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
//freopen("input.txt", "r", stdin);
cin >> n;
if (n % 2 == 1) {
cout << "NONE";
return 0;
}
for (int j = 0; j < n; j++) {
a[0][j] = j / 2;
b[0][j] = 0;
}
for (int i = 0; i + 1 < n; i++) {
for (int j = 0; j < n; j++) {
int xr = 0;
int c = 0;
vector<int> vals;
vals.emplace_back(a[i][j]);
for (int z = 0; z < 3; z++) {
int ni = i + dx[z];
int nj = j + dy[z];
if (ni < 0 || ni >= n) continue;
if (nj < 0 || nj >= n) continue;
vals.emplace_back(a[ni][nj]);
}
int cnt = 0;
int he = -1;
sort(vals.begin(), vals.end());
vals.erase(unique(vals.begin(), vals.end()), vals.end());
for (int& t : vals) {
int pp = 0;
for (int z = 0; z < 3; z++) {
int ni = i + dx[z];
int nj = j + dy[z];
if (ni < 0 || ni >= n) continue;
if (nj < 0 || nj >= n) continue;
if (a[ni][nj] == t) pp ^= 1;
if (a[i][j] == t) pp ^= 1;
}
if (pp && t != a[i][j]) {
cnt++;
he = t;
}
}
assert(cnt == 0 || cnt == 1);
if (cnt == 0) a[i + 1][j] = a[i][j];
else a[i + 1][j] = he;
int pp = 0;
for (int z = 0; z < 3; z++) {
int ni = i + dx[z];
int nj = j + dy[z];
if (ni < 0 || ni >= n) continue;
if (nj < 0 || nj >= n) continue;
pp ^= b[ni][nj];
pp ^= b[i][j];
pp ^= 1;
}
b[i + 1][j] = (pp ^ 1) ^ b[i][j];
}
}
memset(his_val, -1, sizeof his_val);
bool ok = true;
for (int i = 0; i < n; i++) {
for (int j = 0; j < n; j++) {
char c;
cin >> c;
if (c == '.') continue;
int d = 0;
if (c == 'S') d = 0;
else d = 1;
d ^= b[i][j];
if (his_val[a[i][j]] == -1) his_val[a[i][j]] = d;
if (his_val[a[i][j]] != d) ok = false;
}
}
if (!ok) {
cout << "NONE\n";
return 0;
}
bool hs = false;
for (int z = 0; z < n / 2; z++) {
if (his_val[z] == -1) hs = true;
}
if (hs) {
cout << "MULTIPLE\n";
return 0;
}
cout << "UNIQUE\n";
for (int i = 0; i < n; i++) {
for (int j = 0; j < n; j++) {
int d = his_val[a[i][j]] ^ b[i][j];
if (d == 0) cout << "S";
else cout << "G";
}
cout << '\n';
}
return 0;
}
|
1584
|
A
|
Mathematical Addition
|
Ivan decided to prepare for the test on solving integer equations. He noticed that all tasks in the test have the following form:
- You are given two positive integers $u$ and $v$, find any pair of integers (\textbf{not necessarily positive}) $x$, $y$, such that: $$\frac{x}{u} + \frac{y}{v} = \frac{x + y}{u + v}.$$
- The solution $x = 0$, $y = 0$ is forbidden, so you should find any solution with $(x, y) \neq (0, 0)$.
Please help Ivan to solve some equations of this form.
|
We have the equation $\frac{x}{u} + \frac{y}{v} = \frac{x + y}{u + v}$. Multiply both sides by $u \cdot v \cdot (u + v)$ to get $x \cdot v \cdot (u + v) + y \cdot u \cdot (u + v) = (x + y) \cdot u \cdot v$. After opening the brackets and simplifying, we have $x \cdot v^2 + y \cdot u^2 = 0$. One solution of this equation is $x = -u^2$, $y = v^2$.
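A quick check of the derivation with exact rational arithmetic (helper names are mine):

```python
from fractions import Fraction

def solve(u, v):
    # From x*v^2 + y*u^2 = 0: take x = -u^2, y = v^2.
    return -u * u, v * v

def satisfies(u, v, x, y):
    # Verify x/u + y/v == (x + y)/(u + v) exactly.
    return Fraction(x, u) + Fraction(y, v) == Fraction(x + y, u + v)
```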
|
[
"math"
] | 800
| null |
1584
|
B
|
Coloring Rectangles
|
David was given a \textbf{red} checkered rectangle of size $n \times m$. But he doesn't like it. So David cuts the original or any other rectangle piece obtained during the cutting into two new pieces along the grid lines. He can do this operation as many times as he wants.
As a result, he will get a set of rectangles. Rectangles $1 \times 1$ are \textbf{forbidden}.
David also knows how to paint the cells \textbf{blue}. He wants each rectangle from the resulting set of pieces to be colored such that any pair of adjacent cells by side (from the same piece) have different colors.
What is the minimum number of cells David will have to paint?
|
Rectangles after cutting will be painted in a chess coloring. So, if the area is even, the number of cells of each color is the same, and if it is odd, the counts differ by exactly $1$. Let's find out what fraction of the cells must be painted. For an even area this ratio is always $\frac{1}{2}$; for an odd area $S$ it is $\frac{S - 1}{2 \cdot S}$, and the smaller the odd area, the smaller the ratio. An area equal to $1$ cannot be obtained, so the best ratio is $\frac{1}{3}$, achieved with an area equal to $3$. Then we get the lower bound for the answer: $answer \geq n \cdot m \cdot \frac{1}{3}$. Great! We know that the answer is an integer, so if we manage to construct a cutting that requires painting exactly a number $cnt$ of cells such that $\frac{n \cdot m}{3} \leq cnt < \frac{n \cdot m}{3} + 1$, then $cnt$ will be the answer. After all, $cnt$ is the minimum integer value satisfying the bound. If one of the sides is divisible by $3$, then it is obvious how to cut into $1 \times 3$ rectangles and get the perfect answer. If the side lengths have remainders $1$ and $1$ or $2$ and $2$ modulo $3$, then you can cut into $1 \times 3$ rectangles and one rectangle with an area of $4$, in which you need to paint $2$ cells; the answer again satisfies the bound. If the remainders are $1$ and $2$, then after cutting into $1 \times 3$ rectangles, a $1 \times 2$ rectangle will remain, in which you need to paint one cell; this also satisfies the bound. For all pairs of remainders there is a way to construct an answer satisfying the inequality. Therefore, the answer is $\lceil \frac{n \cdot m}{3} \rceil$.
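The closed form fits in one line; a minimal sketch (the function name is mine):

```python
def min_painted(n, m):
    # The lower bound n*m/3 is achievable for every pair of remainders
    # modulo 3, so the answer is ceil(n*m / 3).
    return (n * m + 2) // 3
```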
|
[
"greedy",
"math"
] | 1,000
| null |
1584
|
C
|
Two Arrays
|
You are given two arrays of integers $a_1, a_2, \ldots, a_n$ and $b_1, b_2, \ldots, b_n$.
Let's define a transformation of the array $a$:
- Choose any non-negative integer $k$ such that $0 \le k \le n$.
- Choose $k$ distinct array indices $1 \le i_1 < i_2 < \ldots < i_k \le n$.
- Add $1$ to each of $a_{i_1}, a_{i_2}, \ldots, a_{i_k}$, all other elements of array $a$ remain unchanged.
- Permute the elements of array $a$ in any order.
Is it possible to perform some transformation of the array $a$ \textbf{exactly once}, so that the resulting array is equal to $b$?
|
Let's sort both arrays first and check the two smallest elements. First, obviously, if $a_1 + 1 < b_1$ (as nothing can be matched with $b_1$) or $a_1 > b_1$ (as nothing can be matched with $a_1$), the answer is No. Then, it's possible that $a_1 = b_1 = x$. In this case, we have to have at least one $x$ in the array $a$ at the end, so we can leave $a_1$ untouched, as it already matches $b_1$. It's also possible that $a_1 + 1 = b_1$; here we have to increase $a_1$ by $1$. In both cases, the task is reduced to the same problem on the remaining elements. Turning this logic into an exact solution, we just have to sort both arrays and check that for each $1 \leq i \leq n$ either $a_i = b_i$ or $a_i + 1 = b_i$. The complexity of the solution is $O(n \log n)$.
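The whole check fits in a few lines; a minimal sketch (the function name is mine):

```python
def can_transform(a, b):
    # Sort both arrays; the transformation exists iff at every position
    # b_i equals a_i or a_i + 1.
    return all(y - x in (0, 1) for x, y in zip(sorted(a), sorted(b)))
```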
|
[
"greedy",
"math",
"sortings"
] | 900
| null |
1584
|
D
|
Guess the Permutation
|
\textbf{This is an interactive problem.}
Jury initially had a sequence $a$ of length $n$, such that $a_i = i$.
The jury chose three integers $i$, $j$, $k$, such that $1 \leq i < j < k \leq n$, $j - i > 1$. After that, Jury reversed subsegments $[i, j - 1]$ and $[j, k]$ of the sequence $a$.
Reversing a subsegment $[l, r]$ of the sequence $a$ means reversing the order of elements $a_l, a_{l+1}, \ldots, a_r$ in the sequence, i. e. $a_l$ is swapped with $a_r$, $a_{l+1}$ is swapped with $a_{r-1}$, etc.
You are given the number $n$ and you should find $i$, $j$, $k$ after asking some questions.
In one question you can choose two integers $l$ and $r$ ($1 \leq l \leq r \leq n$) and ask the number of inversions on the subsegment $[l, r]$ of the sequence $a$. You will be given the number of pairs $(i, j)$ such that $l \leq i < j \leq r$, and $a_i > a_j$.
Find the chosen numbers $i$, $j$, $k$ after at most $40$ questions.
The numbers $i$, $j$, and $k$ are fixed before the start of your program and do not depend on your queries.
|
Note that the number of inversions in a decreasing sequence of length $l$ is $\binom{l}{2}$. As we reversed two non-overlapping subsegments, the number of inversions on each subsegment equals the sum of the numbers of inversions of the decreasing parts of the reversed subsegments. First of all, let's find $A := \binom{k-j+1}{2} + \binom{j-i}{2}$ — the total number of inversions in the sequence. We use $1$ question for that. Now look at the number of inversions on the subsegment $[x, n]$: if it is less than $A$, then the two reversed subsegments do not both fit entirely, so $i < x$; otherwise $i \geq x$. Now we can apply binary search to find $i$, using $\log_2(n)$ questions. Next, ask the number of inversions on the subsegment $[i+1, n]$; call this number $B$ ($1$ more question). From the structure of the sequence: $A - B = |\{x \mid x > i, a_x < a_i\}| = |[i+1, j-1]| = j - i - 1$. Now we can find $j - i$, $j$ and $\binom{j - i}{2}$; by the definition of $A$, we also know $\binom{k - j + 1}{2}$. Finally, we solve a quadratic equation for $k-j+1$ and get $k$. Overall, we used $\log_2(n) + 2 \leq 32$ questions, but the limit is a bit higher, in case your solution uses a few more questions at some stage.
|
[
"binary search",
"combinatorics",
"interactive",
"math"
] | 2,000
| null |
1584
|
E
|
Game with Stones
|
Bob decided to take a break from calculus homework and designed a game for himself.
The game is played on a sequence of piles of stones, which can be described with a sequence of integers $s_1, \ldots, s_k$, where $s_i$ is the number of stones in the $i$-th pile. On each turn, Bob picks a pair of non-empty adjacent piles $i$ and $i+1$ and takes one stone from each. If a pile becomes empty, its adjacent piles \textbf{do not become adjacent}. The game ends when Bob can't make turns anymore. Bob considers himself a winner if at the end all piles are empty.
We consider a sequence of piles \textbf{winning} if Bob can start with it and win with some sequence of moves.
You are given a sequence $a_1, \ldots, a_n$, count the number of subsegments of $a$ that describe a winning sequence of piles. In other words find the number of segments $[l, r]$ ($1 \leq l \leq r \leq n$), such that the sequence $a_l, a_{l+1}, \ldots, a_r$ is winning.
|
This game has a greedy strategy: look at the first pile — all its stones have to be matched with stones from the next pile, because that is its only adjacent pile. If the first pile is non-empty and there is no next pile, or the next pile is smaller than the current one, Bob loses. Otherwise, Bob empties the current pile and removes the corresponding number of stones from the next pile. Now Bob plays the same game with one pile fewer, so we can remove the first pile without changing the game. Bob wins if, by the time he has reduced the game to one pile, that pile is already empty. Now let's iteratively define the array $c$, where $c_i$ is the number of stones left in the $i$-th pile after removing piles $1, \ldots, i - 1$ according to the greedy strategy. Let $c_0 = 0$; then $c_i = a_i - c_{i - 1}$. If the array contains only non-negative numbers, Bob is able to remove the piles all the way through. Otherwise, let $t$ be the first index with $c_t < 0$: Bob was able to remove piles until he met the $t$-th pile, where $c_{t-1} > a_t$ happened, so Bob loses. To check that the last pile ends up empty, we check $c_n = 0$. So the criterion for a winning sequence is: $c_i \geq 0$ for all $i$, and $c_n = 0$. Expanding the recursive notation: $c_i = a_i - a_{i - 1} + a_{i - 2} - \ldots + (-1)^{i-1} \cdot a_1$. We will solve the problem separately for each left bound $l$ of the subsegment. Define the sequence $a^l := a_l, a_{l+1}, \ldots, a_n$ with $a_i^l = a_{l + i - 1}$; it has a similar array $c^l$. We find the first position $t$ of a negative number in $c^l$ ($c_t^l < 0$), and then count how many zeros lie on the prefix $[1, t-1]$. This gives the number of winning subsegments of the form $[l, r]$; summing over all $l$ gives the answer. Note that $c_i^l = a_i^l - a_{i - 1}^l + \ldots + (-1)^{i-1} \cdot a_1^l = a_{l+i-1} - a_{l+i-2} + \ldots + (-1)^{i-1} \cdot a_{l} = c_{l+i-1} + (-1)^{i-1} \cdot c_{l - 1}$. Hence $c_i^l < 0$ if and only if $c_{l + i - 1} < (-1)^{i} \cdot c_{l - 1}$.
Let's split the problem by parity of indices. Now, to find the first position of a negative number in $c^l$, we need to find the first position of a "number less than $x$" on a suffix of $c$. This can be done in many ways, for example by descending a segment tree (one segment tree per parity). Similarly, $c_i^l = 0$ if and only if $c_{l + i - 1} = (-1)^{i} \cdot c_{l - 1}$. With the same split by parity, counting zeros on a subsegment of $c^l$ becomes counting entries "equal to $x$" on a subsegment of $c$, which can be done by storing all positions of each value of $c_i$ in some container (one per parity) and binary search. Overall complexity of the solution: $\mathcal{O}(n \cdot \log(n))$
|
[
"binary search",
"data structures",
"games",
"greedy"
] | 2,300
| null |
1584
|
F
|
Strange LCS
|
You are given $n$ strings $s_1, s_2, \ldots, s_n$, each consisting of lowercase and uppercase English letters. In addition, it's guaranteed that each character occurs in each string \textbf{at most twice}. Find the longest common subsequence of these strings.
A string $t$ is a subsequence of a string $s$ if $t$ can be obtained from $s$ by deletion of several (possibly, zero or all) symbols.
|
Let's define a graph with vertices $(c, msk)$, where $c$ denotes some character and $msk$ is an $n$-bit mask of occurrences (the $i$-th bit is set to $1$ if and only if we take the second occurrence of $c$ in the $i$-th string). Not every $msk$ is possible for a given $c$, since there may be fewer than $2$ occurrences. Note: a vertex defines a choice of character and its positions in all strings. Note that the graph has $\mathcal{O}(|\Sigma| \cdot 2^n)$ vertices. Let's define a strict comparison ($<$) of vertices: $(c_1, msk_1) < (c_2, msk_2)$ if and only if the positions chosen by the first vertex are strictly to the left of those chosen by the second (in every string). Let's define another comparison ($\leq$) the same way, but allowing some positions to be equal. Note: the strict comparison is anti-reflexive, both comparisons are transitive, and $a \leq b < c \Rightarrow a < c$. The graph contains a directed edge from one vertex to another if and only if the first is smaller under the strict comparison. Note: the graph is acyclic, by transitivity of the comparison. Note that for every common subsequence there is a unique corresponding path in this graph and vice versa, so we need to find the longest path; its length in vertices equals the length of the corresponding subsequence. We want to calculate $dp(c, msk)$ — the length of the longest path starting from this vertex. This $dp$ is easy to calculate on a DAG, but the number of edges is too big. We want to remove some edges without changing the values of the $dp$. Note: if the removal of an edge doesn't change the $dp$ of its starting vertex, it doesn't change the $dp$ at all. Let's look at an arbitrary vertex $(c, msk)$ that has at least one outgoing edge, and some longest path starting from it: $(c, msk) \rightarrow (c_2, msk_2) \rightarrow \ldots$. Suppose there exists a mask $MidMsk \neq msk_2$ such that $(c, msk) < (c_2, MidMsk) \leq (c_2, msk_2)$.
By the properties of the defined comparisons, we can replace the second vertex of the longest path with $(c_2, MidMsk)$. Now, for a fixed $c_2$, let's find the mask corresponding to the choice of leftmost positions to the right of the positions chosen by $(c, msk)$; it can be found in $\mathcal{O}(n)$ time. As we noted earlier, without loss of generality the longest path (with fixed $c_2$) goes through this vertex, so we can harmlessly remove all edges going from the current vertex to vertices with character $c_2$ but a different mask. This can be done for every character. Now each vertex of the graph has $\mathcal{O}(|\Sigma|)$ outgoing edges, so the $dp$ can be calculated fast enough. The subsequence itself can be easily recovered now. Note: the solution doesn't need to build the graph explicitly — it's just an abstraction for better understanding. Note: we don't have to calculate the $dp$ for all vertices; it suffices to find $dp(c, 0)$ for all $c$, which can be proven by the same argument. Overall complexity: $\mathcal{O}(n \cdot |\Sigma|^2 \cdot 2^n)$
|
[
"bitmasks",
"dp",
"graphs",
"greedy",
"strings"
] | 2,600
| null |
1584
|
G
|
Eligible Segments
|
You are given $n$ \textbf{distinct} points $p_1, p_2, \ldots, p_n$ on the plane and a positive integer $R$.
Find the number of pairs of indices $(i, j)$ such that $1 \le i < j \le n$, and for every possible $k$ ($1 \le k \le n$) the distance from the point $p_k$ to the \textbf{segment} between points $p_i$ and $p_j$ is at most $R$.
|
The distance from a point $P$ to a segment $[AB]$ equals the maximum of the distance from $P$ to the ray $[AB)$ and the distance from $P$ to the ray $[BA)$. Let's fix a point $P_i$. Now we have to find all points $P_j$ such that the distance from every point $P_k$ ($1 \le k \le n$) to the ray $[P_i, P_j)$ is at most $R$. For every point $P_k$, let's build the two tangents from $P_i$ to the circle with center $P_k$ and radius $R$; these tangents form an angle $A_k$. The distance from the point $P_k$ to the ray $[P_i, P_j)$ is at most $R$ iff the ray $[P_i, P_j)$ lies inside the angle $A_k$. So we can build all these angles $A_k$ and intersect them; after that, we only have to check, for all $1 \le j \le n$, that the ray $[P_i, P_j)$ lies inside the intersection of all angles. Time complexity: $O(n^2)$.
|
[
"geometry"
] | 3,200
| null |
1585
|
A
|
Life of a Flower
|
Petya has got an interesting flower. Petya is a busy person, so he sometimes forgets to water it. You are given $n$ days from Petya's live and you have to determine what happened with his flower in the end.
The flower grows as follows:
- If the flower isn't watered for two days in a row, it dies.
- If the flower is watered in the $i$-th day, it grows by $1$ centimeter.
- If the flower is watered in the $i$-th and in the $(i-1)$-th day ($i > 1$), then it grows by $5$ centimeters instead of $1$.
- If the flower is not watered in the $i$-th day, it does not grow.
At the beginning of the $1$-st day the flower is $1$ centimeter tall. What is its height after $n$ days?
|
Iterate through the array, looking at the current element and the previous one; there are $4$ possible cases: $a_i = 1$ and $a_{i - 1} = 1$: $k$ += $5$; $a_i = 1$ and $a_{i - 1} = 0$: $k$ += $1$; $a_i = 0$ and $a_{i - 1} = 1$: $k$ += $0$; $a_i = 0$ and $a_{i - 1} = 0$: $k$ = $-1$, break.
|
[
"implementation"
] | 800
| null |
1585
|
B
|
Array Eversion
|
You are given an array $a$ of length $n$.
Let's define the eversion operation. Let $x = a_n$. Then array $a$ is partitioned into two parts: left and right. The left part contains the elements of $a$ that are not greater than $x$ ($\le x$). The right part contains the elements of $a$ that are strictly greater than $x$ ($> x$). The order of elements in each part is kept the same as before the operation, i. e. the partition is stable. Then the array is replaced with the concatenation of the left and the right parts.
For example, if the array $a$ is $[2, 4, 1, 5, 3]$, the eversion goes like this: $[2, 4, 1, 5, 3] \to [2, 1, 3], [4, 5] \to [2, 1, 3, 4, 5]$.
We start with the array $a$ and perform eversions on this array. We can prove that after several eversions the array $a$ stops changing. Output the minimum number $k$ such that the array stops changing after $k$ eversions.
|
Lemma: if $x$ is the maximum element of the array, then an eversion doesn't change the array. Proof: all elements go to the left part, and since the partition is stable, their order is unchanged. Lemma: the last element after an eversion is the rightmost element of the array that is greater than $x$. Proof: consider all elements greater than $x$ — this is exactly the right part of the partition, and since the partition is stable, its last element, the rightmost element greater than $x$, becomes the new $x$; there are no elements greater than $x$ to the right of it by the definition of the eversion. Let's build the sequence $x_a, x_{a-1}, \dots, x_0$, where $x_0 = a_n$ and $x_{i + 1}$ is the rightmost element that lies to the left of $x_i$ in the array and is greater than $x_i$. The answer equals $a$, because $\{x_i\}$ is exactly the sequence of last elements over all eversions. Example: $6\ 10\ 4\ 17\ 9\ 2\ 8\ 1$. The sequence $\{x_i\}$ is $1, 8, 9, 17$, so the answer is $3$.
|
[
"greedy"
] | 900
| null |
1585
|
C
|
Minimize Distance
|
A total of $n$ depots are located on a number line. Depot $i$ lies at the point $x_i$ for $1 \le i \le n$.
You are a salesman with $n$ bags of goods, attempting to deliver one bag to each of the $n$ depots. You and the $n$ bags are initially at the origin $0$. You can carry up to $k$ bags at a time. You must collect the required number of goods from the origin, deliver them to the respective depots, and then return to the origin to collect your next batch of goods.
Calculate the minimum distance you need to cover to deliver all the bags of goods to the depots. You do \textbf{not} have to return to the origin after you have delivered all the bags.
|
This problem can be solved with a greedy approach. First, note that it makes sense to handle the positive points $x^p$ and the negative points $x^n$ separately, since we would like to minimize the number of times we cross the origin. Second, when we move to the farthest depot to which we haven't delivered a bag yet, we can carry $k-1$ other bags and deliver them to the $k-1$ next farthest depots from the origin on the way. Thus, to solve the positive set, we sort the positive points and sum the distances of every $k$-th point, starting from the farthest one: $sum_{pos} = \sum_{i \ge 0,\ |x^p| - ki \ge 1} x^p_{|x^p| - ki}$ and $sum_{neg} = \sum_{i \ge 0,\ |x^n| - ki \ge 1} |x^n_{|x^n| - ki}|$, where $x^p$ and $x^n$ are sorted by absolute value. The final answer is $2(sum_{pos} + sum_{neg})$ minus the maximum distance of a positive or negative depot, since we do not have to return to the origin in the end.
|
[
"greedy"
] | 1,300
| null |
1585
|
D
|
Yet Another Sorting Problem
|
Petya has an array of integers $a_1, a_2, \ldots, a_n$. He only likes sorted arrays. Unfortunately, the given array could be arbitrary, so Petya wants to sort it.
Petya likes to challenge himself, so he wants to sort array using only $3$-cycles. More formally, in one operation he can pick $3$ \textbf{pairwise distinct} indices $i$, $j$, and $k$ ($1 \leq i, j, k \leq n$) and apply $i \to j \to k \to i$ cycle to the array $a$. It simultaneously places $a_i$ on position $j$, $a_j$ on position $k$, and $a_k$ on position $i$, without changing any other element.
For example, if $a$ is $[10, 50, 20, 30, 40, 60]$ and he chooses $i = 2$, $j = 1$, $k = 5$, then the array becomes $[\underline{50}, \underline{40}, 20, 30, \underline{10}, 60]$.
Petya can apply arbitrary number of $3$-cycles (possibly, zero). You are to determine if Petya can sort his array $a$, i. e. make it non-decreasing.
|
The set of all $3$-cycles generates the group of even permutations $A_n$. So the answer is "YES" if and only if there is an even permutation that sorts the array $a$. If all elements of $a$ are distinct, then there is a unique sorting permutation, and it has the same parity as $a$. If there are identical elements in $a$, take an arbitrary sorting permutation: if it's odd, compose it with a transposition of two identical elements — the permutation still sorts the array, but becomes even. So in this case an even sorting permutation always exists. Overall, we need to check whether all elements of $a$ are distinct. If not, the answer is "YES". Otherwise, we need to check that the permutation $a$ is even. This can be done in many ways, including some with $\mathcal{O}(n)$ complexity.
|
[
"data structures",
"math"
] | 1,900
| null |
1585
|
E
|
Frequency Queries
|
Petya has a rooted tree with an integer written on each vertex. The vertex $1$ is the root. You are to answer some questions about the tree.
A tree is a connected graph without cycles. A rooted tree has a special vertex called the root. The parent of a node $v$ is the next vertex on the shortest path from $v$ to the root.
Each question is defined by three integers $v$, $l$, and $k$. To get the answer to the question, you need to perform the following steps:
- First, write down the sequence of all integers written on the shortest path from the vertex $v$ to the root (including those written in the $v$ and the root).
- Count the number of times each integer occurs. Remove all integers with less than $l$ occurrences.
- Replace the sequence, removing all duplicates and ordering the elements by the number of occurrences in the original list in increasing order. In case of a tie, you can choose the order of these elements arbitrarily.
- The answer to the question is the $k$-th number in the remaining sequence. Note that the answer is not always uniquely determined, because there could be several orderings. Also, it is possible that the length of the sequence on this step is less than $k$, in this case the answer is $-1$.
For example, if the sequence of integers on the path from $v$ to the root is $[2, 2, 1, 7, 1, 1, 4, 4, 4, 4]$, $l = 2$ and $k = 2$, then the answer is $1$.
Please answer all questions about the tree.
|
Let's traverse the tree with a depth-first search from the root and maintain a counting array ($cnt_x$ := the number of occurrences of $x$ in the current sequence). When the dfs enters a vertex $v$, it increases $cnt_{a_v}$ by $1$, then processes all queries attached to $v$. When the dfs leaves the vertex, it decreases $cnt_{a_v}$ by $1$. Let's maintain these quantities: a sorting permutation $p$ of $cnt$, initially $1, 2, \ldots, n$; the inverse permutation $p^{-1}$; and, for each $x \in \{0, 1, \ldots, n\}$, the "lower bound" $lb_x$ in the sorted array — more formally, the minimal $i$ such that $cnt_{p_i} \geq x$. When we want to increase $cnt_x$ by $1$: move $x$ to the end of the block of equal values in the sorted array, i. e. swap the ($p^{-1}_x$)-th and ($lb_{cnt_x+1}-1$)-th positions of $p$; change $p^{-1}$ accordingly; decrease $lb_{cnt_x + 1}$ by $1$ (note: that's the only $lb$ value that changes during this operation); finally, increase $cnt_x$. The operation of decreasing $cnt_x$ by $1$ is done symmetrically. Note: if an answer exists, then one of the possible answers is $p_{lb_l + k - 1}$. Total complexity: $\mathcal{O}(n + q)$.
|
[
"data structures",
"dfs and similar",
"trees"
] | 2,400
| null |
1585
|
F
|
Non-equal Neighbours
|
You are given an array of $n$ positive integers $a_1, a_2, \ldots, a_n$. Your task is to calculate the number of arrays of $n$ positive integers $b_1, b_2, \ldots, b_n$ such that:
- $1 \le b_i \le a_i$ for every $i$ ($1 \le i \le n$), and
- $b_i \neq b_{i+1}$ for every $i$ ($1 \le i \le n - 1$).
The number of such arrays can be very large, so print it modulo $998\,244\,353$.
|
Let's solve the problem using the inclusion-exclusion formula. Let the $i$-th property mean that the elements $b_i$ and $b_{i+1}$ are equal. Fixing $k$ of these properties ($k = 0, \ldots, n - 1$) divides the array into $n-k$ consecutive segments, where all the numbers within each segment are equal. Next, we use dynamic programming: $dp[i][j]$ — we have already split the prefix of the array $b$ of length $i$ into a number of segments, where $j$ denotes the parity of the number of segments. We iterate over the index $i=1, \ldots, n$. Now, for each $j$ ($1\le j \le i$), we make the update $dp[i][0]$ += $dp[j - 1][1] \cdot f(j, i)$, where $f(j, i)$ is the minimum of the numbers in the array $a$ on the segment $[j, i]$; similarly, $dp[i][1]$ += $dp[j - 1][0] \cdot f(j, i)$. We get a solution with time complexity $O(n^2)$. To speed it up to $O(n)$, it is enough to maintain a stack of minima on the prefix and recalculate $dp[i][0/1]$ with it.
|
[
"combinatorics",
"dp",
"math"
] | 2,400
| null |
1585
|
G
|
Poachers
|
Alice and Bob are two poachers who cut trees in a forest.
A forest is a set of zero or more trees. A tree is a connected graph without cycles. A rooted tree has a special vertex called the root. The parent of a node $v$ is the next vertex on the shortest path from $v$ to the root. Children of vertex $v$ are all nodes for which $v$ is the parent. A vertex is a leaf if it has no children.
In this problem we define the depth of vertex as number of vertices on the simple path from this vertex to the root. The rank of a tree is the minimum depth among its leaves.
Initially there is a forest of rooted trees. Alice and Bob play a game on this forest. They play alternating turns with Alice going first. At the beginning of their turn, the player chooses a tree from the forest. Then the player chooses a positive cutting depth, which should \textbf{not exceed the rank} of the chosen tree. Then the player removes all vertices of that tree whose depth is less than or equal to the cutting depth. All other vertices of the tree form a set of rooted trees with root being the vertex with the smallest depth before the cut. All these trees are included in the game forest and the game continues.
A player loses if the forest is empty at the beginning of his move.
You are to determine whether Alice wins the game if both players play optimally.
|
The solution below uses Sprague-Grundy theory. Make sure you understand this concept before reading the editorial. Note that every position that appears in the game is a subtree of one of the initial trees, or a combination of such subtrees. So it is sufficient to find Grundy values for all subtrees of the initial trees. Dynamic programming on subtrees is applied here; we identify a subtree by its root. In each node we store the array of Grundy values for each possible move in the game on the corresponding subtree. The length of this array equals the rank of the subtree, and moves are arranged in decreasing order of cutting depth. We also store the Grundy value of the subtree itself, which equals the MEX of the Grundy values over all moves. The DP is recalculated as follows. If the node is a leaf, it has one move, leading to the empty game. If the node has exactly one child, we append the Grundy value of the child to the end of the child's array and reuse this array in the current node; we recalculate the MEX explicitly here, using the fact that the MEX can only increase at this point. Suppose the node has multiple children. We choose the one with the smallest rank (shortest array). Conceptually, we cut all arrays to the length of the shortest, since we cannot make moves with cutting depth greater than the rank. Into the array of the current node we put, at each position, the XOR of the Grundy values at the corresponding positions over all children's arrays, and append the XOR of the Grundy values of all children. We then compute the MEX of the constructed array explicitly, starting from $0$; this value is the Grundy value of the current node. Why does this work fast? We use the coin method of amortized analysis.
We maintain the invariant that every array we track carries at least ($3 \cdot$ LENGTH $-$ MEX) coins. Each time we advance the MEX explicitly we can spend one coin, so this operation is fully paid for. In the leaf case we create one array of length $1$ and put $3$ coins on it. In the other cases we keep tracking one of the children's arrays as the array of the current node and stop tracking the arrays of the other children. If there is exactly one child, we extend the array's length by $1$ and put $3$ extra coins on it; remember, we have already shown that the MEX recalculation is paid for. In the case of multiple children: let $h$ be the length of the shortest children's array. We track this array as the current node's array. It carries ($\geq 2 \cdot h$) coins, and since all other arrays are not shorter, each of them also carries ($\geq 2 \cdot h$) coins. Move $h$ coins from each of them to the shortest one; now it has ($\geq 3 \cdot h$) coins. For each array except the shortest we spend $h$ coins to recalculate the Grundy values in the shortest one (the XOR-assignments happen here). At the moment we extend the array by appending the XOR of the children's Grundy values, we put $3$ extra coins on it. Note that we lose track of all arrays used here except the shortest one, and that there are enough coins to pay for the MEX calculation on the shortest array. The only problem left is that, when recalculating the MEX of some array, we have to check whether a number appears in it. We can maintain a set of all elements of the array; each node keeps the set for its array. The number of insert/erase/contains queries over all sets is bounded by $4 \cdot n$, as we pay for every operation except the failed attempt to increase the MEX, which happens once per node. Total complexity: $\mathcal{O}(n \cdot \log(n))$.
|
[
"dp",
"games",
"graphs",
"trees"
] | 2,500
| null |
1586
|
I
|
Omkar and Mosaic
|
Omkar is creating a mosaic using colored square tiles, which he places in an $n \times n$ grid. When the mosaic is complete, each cell in the grid will have either a glaucous or sinoper tile. However, currently he has only placed tiles in some cells.
A completed mosaic will be a \textbf{mastapeece} if and only if each tile is adjacent to exactly $2$ tiles of the same color ($2$ tiles are adjacent if they share a side.) Omkar wants to fill the rest of the tiles so that the mosaic becomes a \textbf{mastapeece}. Now he is wondering, is the way to do this unique, and if it is, what is it?
|
The first main observation to make is that possible mastapeeces don't just consist of square "loops" of the same color; a counterexample exists. Instead, observe that, in a mastapeece: a) the two cells adjacent to a corner cell must be the same color as the corner; b) any cell not on the border must be adjacent to exactly two sinoper tiles and two glaucous tiles. If we start at the two cells adjacent to some corner and keep applying b) to cells on the long diagonal through the corner, we find that the long diagonals starting at those adjacent cells must be identical and tiled alternately with glaucous and sinoper tiles. From here, it's pretty easy to figure out how to implement the solution.
|
[
"combinatorics",
"constructive algorithms",
"math"
] | 3,500
| null |
1588
|
F
|
Jumping Through the Array
|
You are given an array of integers $a$ of size $n$ and a permutation $p$ of size $n$. There are $q$ queries of three types coming to you:
- For given numbers $l$ and $r$, calculate the sum in array $a$ on the segment from $l$ to $r$: $\sum\limits_{i=l}^{r} a_i$.
- You are given two numbers $v$ and $x$. Let's build a directed graph from the permutation $p$: it has $n$ vertices and $n$ edges $i \to p_i$. Let $C$ be the set of vertices that are reachable from $v$ in this graph. You should add $x$ to all $a_u$ such that $u$ is in $C$.
- You are given indices $i$ and $j$. You should swap $p_i$ and $p_j$.
\begin{center}
{\small The graph corresponding to the permutation $[2, 3, 1, 5, 4]$.}
\end{center}
Please, process all queries and print answers to queries of type $1$.
|
Let's call $B = \lfloor \sqrt{n} \rfloor$ and divide the array $a$ into consecutive blocks of size $B$. To answer a query of the first type we have to sum $O(B)$ values $a_i$ near the segment's bounds and $O(\frac{n}{B})$ block sums. Let's learn to maintain them fast. There are two types of cycles: small, with length $< B$, and big, with length $\geq B$. If a second type query hits a small cycle, it is easy to process in $O(B)$ time: iterate over the cycle's elements and add $x$ to each $a_i$ and to the sum of its array block. Big cycles are harder to deal with. Let's divide each big cycle into blocks, each having size in $[B, 2B - 1]$; initially this is possible. After each query of type $3$ the division can be rebuilt fast: split the two blocks containing $i$ and $j$; after that, the divisions of each new cycle into blocks can be reconstructed. Then we should avoid small blocks (of size $< B$): merge two consecutive blocks if one of them has size $< B$, and if the merged block has size $\geq 2B$, split it into two equal blocks. This technique is sometimes called split-rebuild. Maintaining this structure, we have at most $\frac{n}{B}$ cycle blocks at any moment. If a second type query hits a big cycle, we add $x$ to that cycle's blocks. To account for these values while answering first type queries, we maintain one more structure: for each cycle block $t$, the values $pref_{t,0}, pref_{t,1}, \ldots, pref_{t,\lceil \frac{n}{B} \rceil}$, where $pref_{t,i}$ is the number of elements of the $t$-th cycle block among the first $i$ array blocks. Using these values it is easy to account for additions to the cycle blocks in the subsegment sum queries. The values of $pref$ can be recalculated during the cycle updates, because we make $O(1)$ splits and merges; during each split or merge we recalculate $pref$ for $O(1)$ rows (and this can be done in $O(\frac{n}{B})$ for one $t$).
Also, during each split or merge we should zero out the pending additions of the affected blocks by pushing the added value down into the elements of the block (their number is smaller than $2B$). The total complexity of the solution is $O(n \sqrt{n})$.
|
[
"binary search",
"data structures",
"graphs",
"two pointers"
] | 3,500
| null |
1592
|
A
|
Gamer Hemose
|
One day, Ahmed_Hossam went to Hemose and said "Let's solve a gym contest!". Hemose didn't want to do that, as he was playing Valorant, so he came up with a problem and told it to Ahmed to distract him. Sadly, Ahmed can't solve it... Could you help him?
There is an Agent in Valorant, and he has $n$ weapons. The $i$-th weapon has a damage value $a_i$, and the Agent will face an enemy whose health value is $H$.
The Agent will perform one or more moves until the enemy dies.
In one move, he will choose a weapon and decrease the enemy's health by its damage value. The enemy will die when his health will become less than or equal to $0$. However, not everything is so easy: \textbf{the Agent can't choose the same weapon for $2$ times in a row}.
What is the minimum number of times that the Agent will need to use the weapons to kill the enemy?
|
It's always optimal to use the two weapons with the highest damage values and alternate between them. Let $x$ be the highest damage value of a weapon, and $y$ be the second-highest. We decrease the monster's health by $x$ in the first move, by $y$ in the second move, and so on. $ans=\begin{cases} 2 \cdot \frac{H}{x+y}, & \text{if } H \bmod (x+y) = 0\\ 2 \cdot \lfloor{\frac{H}{x+y}}\rfloor + 1, & \text{if } 0 < H \bmod (x+y) \leq x\\ 2 \cdot \lfloor{\frac{H}{x+y}}\rfloor + 2, & \text{otherwise}\\ \end{cases}$
|
[
"binary search",
"greedy",
"math",
"sortings"
] | 800
| null |
1592
|
B
|
Hemose Shopping
|
Hemose was shopping with his friends Samez, AhmedZ, AshrafEzz, TheSawan and O_E in Germany. As you know, Hemose and his friends are problem solvers, so they are very clever. Therefore, they will go to all discount markets in Germany.
Hemose has an array of $n$ integers. He wants Samez to sort the array in the non-decreasing order. Since it would be a too easy problem for Samez, Hemose allows Samez to use only the following operation:
- Choose indices $i$ and $j$ such that $1 \le i, j \le n$, and $\lvert i - j \rvert \geq x$. Then, swap elements $a_i$ and $a_j$.
Can you tell Samez if there's a way to sort the array in the non-decreasing order by using the operation written above some finite number of times (possibly $0$)?
|
The answer is always "YES" if $n \geq 2x$, because then you can reorder the array arbitrarily. Otherwise, you can swap any of the first $n-x$ elements with any of the last $n-x$ elements, so those positions can be reordered arbitrarily, but the rest have to stay in their positions in the sorted array. So the answer is YES if the elements at positions $[n-x+1, x]$ of the original array are at those same positions after sorting, and NO otherwise.
|
[
"constructive algorithms",
"dsu",
"math",
"sortings"
] | 1,200
| null |
1592
|
C
|
Bakry and Partitioning
|
Bakry faced a problem, but since he's lazy to solve it, he asks for your help.
You are given a tree of $n$ nodes, the $i$-th node has value $a_i$ assigned to it for each $i$ from $1$ to $n$. As a reminder, a tree on $n$ nodes is a connected graph with $n-1$ edges.
You want to delete \textbf{at least $1$, but at most $k-1$ edges} from the tree, so that the following condition would hold:
- For every connected component calculate the bitwise XOR of the values of the nodes in it. Then, these values have to be the same for all connected components.
Is it possible to achieve this condition?
|
The most important observation is: if you can partition the tree into $m$ components such that the xor of every component is $x$, then you can partition it into $m-2$ components by merging any $3$ adjacent components into one, and the xor of the new component still equals $x$, since $x \oplus x \oplus x = x$. Notice that the answer is always YES if the xor of all values is $0$, because you can delete any edge of the tree and the $2$ components will have the same xor. Otherwise, we need to partition the tree into $3$ components with the same xor. Let $x$ be the xor of all node values in the tree; then the xor of every component must equal $x$. We need to find $2$ edges to delete from the tree such that the xor of every resulting component equals $x$; if we find them and $k \geq 3$, the answer is "YES", otherwise "NO". To find the $2$ edges, root the tree at node $1$, find the deepest subtree whose xor equals $x$, erase the edge above it, and search again for the second edge.
|
[
"bitmasks",
"constructive algorithms",
"dfs and similar",
"dp",
"graphs",
"trees"
] | 1,700
| null |
1592
|
D
|
Hemose in ICPC ?
|
\textbf{This is an interactive problem!}
{In the last regional contest Hemose, ZeyadKhattab and YahiaSherif — members of the team Carpe Diem — did not qualify to ICPC because of some \sout{un}known reasons. Hemose was very sad and had a bad day after the contest, but ZeyadKhattab is very wise and knows Hemose very well, and does not want to see him sad.}
Zeyad knows that Hemose loves tree problems, so he gave him a tree problem with a very special device.
Hemose has a weighted tree with $n$ nodes and $n-1$ edges. Unfortunately, Hemose doesn't remember the weights of edges.
Let's define $Dist(u, v)$ for $u\neq v$ as the greatest common divisor of the weights of all edges on the path from node $u$ to node $v$.
Hemose has a special device. Hemose can give the device a set of nodes, and the device will return the largest $Dist$ between any two nodes from the set. More formally, if Hemose gives the device a set $S$ of nodes, the device will return the largest value of $Dist(u, v)$ over all pairs $(u, v)$ with $u$, $v$ $\in$ $S$ and $u \neq v$.
Hemose can use this Device \textbf{at most $12$ times}, and wants to find any two distinct nodes $a$, $b$, such that $Dist(a, b)$ is maximum possible. Can you help him?
|
The maximum gcd over all paths equals the maximum weight of an edge in the tree. Let $x$ be the maximum edge weight; we need to find $u$, $v$ such that there is an edge between $u$ and $v$ with weight $x$. Let's find $x$ by putting all $n$ nodes into a single query. Now we need to find $u$ and $v$. Suppose we have an array of edges such that, for any consecutive subarray, the set of nodes touched by the edges of the subarray is connected using only those edges. Then we can binary search on this array for the edge of maximum weight: querying the node set of one half tells us whether that half contains the edge of weight $x$. If we list the edges in the order of an Euler tour traversal (each edge appears twice), the array satisfies the condition above, and we can solve the problem. The total number of queries is $1 + \lceil \log_2(2(n-1)) \rceil$.
|
[
"binary search",
"dfs and similar",
"implementation",
"interactive",
"math",
"number theory",
"trees"
] | 2,300
| null |
1592
|
E
|
Bored Bakry
|
Bakry got bored of solving problems related to xor, so he asked you to solve this problem for him.
You are given an array $a$ of $n$ integers $[a_1, a_2, \ldots, a_n]$.
Let's call a subarray $a_{l}, a_{l+1}, a_{l+2}, \ldots, a_r$ \textbf{good} if $a_l \, \& \, a_{l+1} \, \& \, a_{l+2} \, \ldots \, \& \, a_r > a_l \oplus a_{l+1} \oplus a_{l+2} \ldots \oplus a_r$, where $\oplus$ denotes the bitwise XOR operation and $\&$ denotes the bitwise AND operation.
Find the length of the longest good subarray of $a$, or determine that no such subarray exists.
|
Let $And(l, r)$ denote the bitwise and of the elements of the subarray $[l, r]$, and $Xor(l, r)$ the bitwise xor of the elements of $[l, r]$. If the length of $[l, r]$ is odd, it can't be a good subarray, because then $And(l, r)$ is a submask of $Xor(l, r)$: every bit set in $And(l, r)$ occurs an odd number of times, so it is also set in $Xor(l, r)$. If the length of $[l, r]$ is even, then every bit of $And(l, r)$ is unset in $Xor(l, r)$, so we only care about the most significant bit of $And(l, r)$. Let's solve for every bit $k$; call a bit $m$ important if $m > k$. For every $r$ in the array, we need to find the minimum $l$ such that the length of $[l, r]$ is even, bit $k$ is set in $And(l, r)$, and all important bits are unset in $Xor(l, r)$. Since we only care about the important bits, we can build a prefix array where $pref_i$ is the prefix xor up to index $i$ restricted to the important bits. So for every $r$, we need the minimum $l$ satisfying these conditions: $1$ — $r-l+1$ is even; $2$ — bit $k$ is set in all elements of $[l, r]$; $3$ — $pref_r \oplus pref_{l-1} = 0$. This can be solved easily in $O(n \log n)$.
|
[
"bitmasks",
"greedy",
"math",
"two pointers"
] | 2,400
| null |
1592
|
F1
|
Alice and Recoloring 1
|
\textbf{The difference between the versions is in the costs of operations. Solution for one version won't work for another!}
Alice has a grid of size $n \times m$, \textbf{initially all its cells are colored white}. The cell on the intersection of $i$-th row and $j$-th column is denoted as $(i, j)$. Alice can do the following operations with this grid:
- Choose any subrectangle containing cell $(1, 1)$, and flip the colors of all its cells. (Flipping means changing its color from white to black or from black to white).
\textbf{This operation costs $1$ coin.}
- Choose any subrectangle containing cell $(n, 1)$, and flip the colors of all its cells.
\textbf{This operation costs $2$ coins.}
- Choose any subrectangle containing cell $(1, m)$, and flip the colors of all its cells.
\textbf{This operation costs $4$ coins.}
- Choose any subrectangle containing cell $(n, m)$, and flip the colors of all its cells.
\textbf{This operation costs $3$ coins.}
\textbf{As a reminder, subrectangle is a set of all cells $(x, y)$ with $x_1 \le x \le x_2$, $y_1 \le y \le y_2$ for some $1 \le x_1 \le x_2 \le n$, $1 \le y_1 \le y_2 \le m$}.
Alice wants to obtain her favorite coloring with these operations. What's the smallest number of coins that she would have to spend? It can be shown that it's always possible to transform the initial grid into any other.
|
We will transform favorite coloring to the all-white coloring instead. Let's denote W by $0$ and B by $1$. First observation is that it doesn't make sense to do operations of type $2$ and $3$ at all. Indeed, each of them can be replaced with $2$ operations of type $1$: Instead of flipping subrectangle $[x, n]\times[1, y]$, we can flip subrectangles $[1, n]\times[1, y]$ and $[1, x-1]\times[1, y]$, and similarly for $[1, x]\times[y, n]$. So, from now on we only consider first and last operations. Let's create a grid $a$ of $n$ rows and $m$ columns, where $a[i][j]$ denotes the parity of the sum of numbers in those of cells $(i, j), (i+1, j), (i, j+1), (i+1, j+1)$ of favorite coloring, which are present on the grid. Clearly, all numbers in $a$ are $0$ if and only if current coloring is all $0$, so we will want to just make the grid $a$ all $0$. Let's look at how first and last operations affect the grid $a$. If we flip the subrectangle $[1, x]\times [1, y]$ with the operation of the first type, in grid $a$ the only value that changes is $a[x][y]$. If we flip the subrectangle $[x, n]\times [y, m]$, clearly $x, y>1$ (otherwise we could have instead used $2$ operations of the first type). Then, it's easy to see that the only cells that will change are $a[x-1][y-1], a[n][y-1], a[x-1][m], a[n][m]$. So, now we have the following problem: we have a binary grid $n\times m$. In one operation, we can change some $1$ to $0$, with cost of $1$ coin, or to select some $1 \le x\le n-1, 1 \le y \le m-1$, and flip the numbers in cells $a[x][y], a[n][y], a[x][m], a[n][m]$ with cost of $3$ coins. What's the smallest cost to make all numbers $0$? Now, it's easy to see that it doesn't make sense to apply second operation more than once, as instead of doing it twice, we can apply the operation of the first type at most $6$ times (as cell $a[n][m]$ will be flipped twice). 
Moreover, it only makes sense to apply the second operation if there exist such $1 \le x\le n-1, 1 \le y \le m-1$ for which all cells $a[x][y], a[n][y], a[x][m], a[n][m]$ are $1$. So, the algorithm is to compute the grid $a$, count the total number of ones in it, $ans$, and subtract $1$ from $ans$ if there exist such $1 \le x\le n-1, 1 \le y \le m-1$ for which all cells $a[x][y], a[n][y], a[x][m], a[n][m]$ are $1$. Complexity $O(nm)$.
|
[
"constructive algorithms",
"greedy"
] | 2,600
| null |
1592
|
F2
|
Alice and Recoloring 2
|
\textbf{The difference between the versions is in the costs of operations. Solution for one version won't work for another!}
Alice has a grid of size $n \times m$, \textbf{initially all its cells are colored white}. The cell on the intersection of $i$-th row and $j$-th column is denoted as $(i, j)$. Alice can do the following operations with this grid:
- Choose any subrectangle containing cell $(1, 1)$, and flip the colors of all its cells. (Flipping means changing its color from white to black or from black to white).
\textbf{This operation costs $1$ coin.}
- Choose any subrectangle containing cell $(n, 1)$, and flip the colors of all its cells.
\textbf{This operation costs $3$ coins.}
- Choose any subrectangle containing cell $(1, m)$, and flip the colors of all its cells.
\textbf{This operation costs $4$ coins.}
- Choose any subrectangle containing cell $(n, m)$, and flip the colors of all its cells.
\textbf{This operation costs $2$ coins.}
\textbf{As a reminder, subrectangle is a set of all cells $(x, y)$ with $x_1 \le x \le x_2$, $y_1 \le y \le y_2$ for some $1 \le x_1 \le x_2 \le n$, $1 \le y_1 \le y_2 \le m$}.
Alice wants to obtain her favorite coloring with these operations. What's the smallest number of coins that she would have to spend? It can be shown that it's always possible to transform the initial grid into any other.
|
Everything from the editorial of the first part of the problem stays the same, except one thing: now the second operation on the modified grid costs only $2$ coins. Sadly, it is no longer true that using the second operation more than once is suboptimal. Let's denote the second operation on $a$ by $op(x, y)$ (meaning that we invert the numbers at $a[x][y], a[n][y], a[x][m], a[n][m]$). The new claim is that it is not optimal to make two such operations sharing the same $x$ or the same $y$. Indeed, suppose that we made $op(x, y_1)$ and $op(x, y_2)$. Then cells $a[x][m]$ and $a[n][m]$ haven't changed, so we only flipped $4$ cells at a cost of $2\times 2 = 4$ coins. We could have done this with operations of the first type. Another observation: it doesn't make sense to make $op(x, y)$ unless all the cells $a[x][y], a[x][m], a[n][y]$ are $1$. Indeed, no other operation of this type will affect any of these cells. If some cell here is $0$, and we still make $op(x, y)$ for $2$ coins, then we will have to spend one additional coin on reverting it back to $0$ with the operation of the first type. Instead, we could have just flipped the $3$ other cells from $a[x][y], a[x][m], a[n][y], a[n][m]$ with operations of the first type for the same $3$ coins. How many such operations can we make then? Let's build a bipartite graph, with parts of sizes $n-1$ and $m-1$, and connect node $x$ from the left part with node $y$ from the right part iff $a[x][y] = a[x][m] = a[n][y] = 1$. Find the maximum matching in this graph with your favorite algorithm (for example, in $O(mn\sqrt{m+n})$ with Hopcroft-Karp), let its size be $K$. Then, for each $0 \le k \le K$, we can find the number of ones that will be left in the grid if we make exactly $k$ operations of the second type. The answer to the problem is then the minimum of $(ones\_left+2k)$ over all $k$.
|
[
"constructive algorithms",
"flows",
"graph matchings",
"greedy"
] | 2,800
| null |
1593
|
A
|
Elections
|
The elections in which three candidates participated have recently ended. The first candidate received $a$ votes, the second one received $b$ votes, the third one received $c$ votes. For each candidate, solve the following problem: how many votes should be added to this candidate so that he wins the election (i.e. the number of votes for this candidate was strictly greater than the number of votes for any other candidate)?
Please note that for each candidate it is necessary to solve this problem \textbf{independently}, i.e. the added votes for any candidate \textbf{do not} affect the calculations when getting the answer for the other two candidates.
|
Let's solve the problem for the first candidate. To win the election, he needs to get at least $1$ more votes than every other candidate. Therefore, the first candidate needs to get at least $\max(b, c) + 1$ votes. If $a$ is already greater than this value, then you don't need to add any votes, otherwise, you need to add $\max(b, c) + 1 - a$ votes. So the answer for the first candidate is $\max(0, \max(b,c) + 1 - a)$. Similarly, the answer for the second candidate is $\max(0, \max(a, c) + 1 - b)$, and, for the third one, the answer is $\max(0, \max(a, b) + 1 - c)$.
|
[
"math"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
int solveSingle(int best, int other1, int other2)
{
return max(0, max(other1, other2) + 1 - best);
}
int main()
{
int t;
cin >> t;
while (t--)
{
int a, b, c;
cin >> a >> b >> c;
cout << solveSingle(a, b, c) << ' ' << solveSingle(b, a, c) << ' ' << solveSingle(c, a, b) << '\n';
}
return 0;
}
|
1593
|
B
|
Make it Divisible by 25
|
It is given a positive integer $n$. In $1$ move, one can select any single digit and remove it (i.e. one selects some position in the number and removes the digit located at this position). The operation cannot be performed if only one digit remains. If the resulting number contains leading zeroes, they are automatically removed.
E.g. if one removes from the number $32925$ the $3$-rd digit, the resulting number will be $3225$. If one removes from the number $20099050$ the first digit, the resulting number will be $99050$ (the $2$ zeroes going next to the first digit are automatically removed).
What is the minimum number of steps to get a number such that it is divisible by $25$ and \textbf{positive}? It is guaranteed that, for each $n$ occurring in the input, the answer exists. It is guaranteed that the number $n$ has no leading zeros.
|
A number is divisible by $25$ if and only if its last two digits represent one of the following strings: "00", "25", "50", "75". Let's solve for each string the following subtask: what is the minimum number of characters to be deleted so that the string becomes a suffix of the number. Then, choosing the minimum of the answers for all subtasks, we solve the whole problem. Let's solve the subtask for a string "X Y" where 'X' and 'Y' are digits. We can do it using the following algorithm: let's delete the last digit of the number until it is equal to 'Y', then the second to last digit of the number until it is equal to 'X'. If it is not possible, then this subtask has no solution (i.e. its result will not affect the answer).
|
[
"dfs and similar",
"dp",
"greedy",
"math"
] | 900
|
#include <bits/stdc++.h>
using namespace std;
const string subseqs[] = { "00", "25", "50", "75" };
const int INF = 100;
int solve(string& s, string& t)
{
int sptr = s.length() - 1;
int ans = 0;
while (sptr >= 0 && s[sptr] != t[1])
{
sptr--;
ans++;
}
if (sptr < 0) return INF;
sptr--;
while (sptr >= 0 && s[sptr] != t[0])
{
sptr--;
ans++;
}
return sptr < 0 ? INF : ans;
}
int main()
{
int t;
cin >> t;
while (t--)
{
string n;
cin >> n;
int ans = INF;
for (auto e : subseqs)
ans = min(ans, solve(n, e));
cout << ans << '\n';
}
return 0;
}
|
1593
|
C
|
Save More Mice
|
There are one cat, $k$ mice, and one hole on a coordinate line. The cat is located at the point $0$, the hole is located at the point $n$. All mice are located between the cat and the hole: the $i$-th mouse is located at the point $x_i$ ($0 < x_i < n$). At each point, many mice can be located.
In one second, the following happens. First, \textbf{exactly one} mouse moves to the right by $1$. If the mouse reaches the hole, it hides (i.e. the mouse will no longer move and cannot be eaten by the cat). Then (\textbf{after} the mouse has finished its move) the cat moves to the right by $1$. If some mice are located at the cat's new position, the cat eats them (they will not be able to move after that). The actions are repeated until every mouse has either hidden or been eaten.
In other words, the first move is made by a mouse. If the mouse has reached the hole, it's saved. Then the cat makes a move. The cat eats the mice located at the point it has reached (if the cat has reached the hole, it eats nobody).
Each second, you can select a mouse that will make a move. What is the maximum number of mice that can reach the hole without being eaten?
|
Let's solve the problem using a linear search. Let $m$ be the number of mice we are trying to save. Then it is best to save the $m$ mice closest to the hole. Let $r_i$ be the distance from the $i$-th mouse to the hole ($r_i = n - x_i$). Denote $R := \sum\limits_{i = 1}^m r_i$. Let's prove that these mice can be saved if and only if $R < n$. Necessity. Suppose we can save the mice and $R \ge n$. Since only one mouse can be moved per second, at some point $m - 1$ mice will already be saved and one mouse will remain. After $q$ seconds, the distance from the cat to the hole equals $n - q$, and the distance from the remaining mouse to the hole equals $R - q$ (all other mice are already in the hole, so their distances are $0$, and the sum of the distances of all mice at this moment equals the distance of the one remaining mouse). Since $R - q \ge n - q$, the mouse is at least as far from the hole as the cat. But this is impossible, because both the mice and the cat move only to the right, and every mouse the cat meets is eaten. So, $R < n$. Sufficiency. Suppose $R < n$. If $R = 0$, all the mice are already in the hole, i.e. they are saved. Suppose $R > 0$. Let's move any mouse not yet in the hole, then let the cat move. Suppose the cat ate a mouse on this move. That mouse was at point $1$, and it is not the one that was moved (the moved mouse is now at point at least $2$), so its distance to the hole was $r_s = n - 1$. The moved mouse had distance $r_m \ge 1$ to the hole before the move (otherwise it would already be hidden). So, $R \ge r_s + r_m \ge (n - 1) + 1 = n$, which contradicts $R < n$. 
Therefore, none of the mice will be eaten on the first move. Then the distance from the cat to the hole will be equal to $n - 1$, the total distance from the mice to the hole will be equal to $R - 1$. $R - 1 < n - 1$, i.e. now we have to solve a similar problem for smaller $R$ and $n$. So $R$ will be gradually decreased to $0$, while no mouse will be eaten. So, if $R < n$, all the mice will be saved. Thus, to solve the problem, we need to find the maximum $m$ such that the sum of the distances from the $m$ nearest mice to the hole is less than $n$.
|
[
"binary search",
"greedy"
] | 1,000
|
#include <bits/stdc++.h>
using namespace std;
int main()
{
int t;
cin >> t;
while (t--)
{
int n, k;
cin >> n >> k;
vector<int> x(k);
for (int i = 0; i < k; i++) cin >> x[i];
sort(x.rbegin(), x.rend());
int cnt = 0;
long long sum = 0;
while (cnt < x.size() && sum + n - x[cnt] < n)
{
sum += n - x[cnt++];
}
cout << cnt << '\n';
}
return 0;
}
|
1593
|
D1
|
All are Same
|
This problem is a simplified version of D2, but it has significant differences, so read the whole statement.
Polycarp has an array of $n$ ($n$ is even) integers $a_1, a_2, \dots, a_n$. Polycarp conceived of a positive integer $k$. After that, Polycarp began performing the following operations on the array: take an index $i$ ($1 \le i \le n$) and reduce the number $a_i$ by $k$.
After Polycarp performed some (possibly zero) number of such operations, it turned out that \textbf{all} numbers in the array became the same. Find the maximum $k$ at which such a situation is possible, or print $-1$ if such a number can be arbitrarily large.
|
$k$ can be arbitrarily large if and only if all numbers in the array are the same. In this case, we can choose any number $k$ and subtract it from all the numbers, for example, exactly once. Suppose we fix some $k$. Let $q_i$ be the number of subtractions of $k$ from $a_i$. Then all numbers will be equal if and only if, for any two numbers $a_i$ and $a_j$ of the array, $a_i - k \cdot q_i = a_j - k\cdot q_j$. Let $q_{i_0}$ be the minimum of the $q_i$. Then all numbers still become the same if, for each index $i$, we subtract $k$ from $a_i$ not $q_i$ but $q_i - q_{i_0}$ times. In that case we never subtract $k$ from the element $a_{i_0}$. This means there is always an element of the array from which we never subtract $k$; this element is the minimum of the array, since every other element must be reduced down to its value. Then from $a_i$ we subtract $k$ exactly $\frac{a_i - a_{i_0}}{k}$ times. Thus, for the current $k$, it is possible to make all elements equal if and only if, for every element $a_i$, the value $a_i - a_{i_0}$ (where $a_{i_0}$ is the minimum of the array) is divisible by $k$. So, the maximum $k$ is the greatest common divisor of all values $a_i - a_{i_0}$, $i = \overline{1, n}$.
|
[
"math",
"number theory"
] | 1,100
|
#include <bits/stdc++.h>
using namespace std;
int main()
{
int t;
cin >> t;
while (t--)
{
int n;
cin >> n;
vector<int> a(n);
for (int i = 0; i < n; i++) cin >> a[i];
bool inf = true;
for (int i = 1; i < n; i++)
{
if (a[i] != a[0])
{
inf = false;
break;
}
}
if (inf)
{
cout << "-1\n";
continue;
}
sort(a.begin(), a.end());
int ans = 0;
for (int i = 1; i < n; i++)
ans = gcd(ans, a[i] - a[0]);
cout << ans << '\n';
}
return 0;
}
|
1593
|
D2
|
Half of Same
|
This problem is a complicated version of D1, but it has significant differences, so read the whole statement.
Polycarp has an array of $n$ ($n$ is even) integers $a_1, a_2, \dots, a_n$. Polycarp conceived of a positive integer $k$. After that, Polycarp began performing the following operations on the array: take an index $i$ ($1 \le i \le n$) and reduce the number $a_i$ by $k$.
After Polycarp performed some (possibly zero) number of such operations, it turned out that \textbf{at least half} of the numbers in the array became the same. Find the maximum $k$ at which such a situation is possible, or print $-1$ if such a number can be arbitrarily large.
|
$k$ can be arbitrarily large if and only if at least half of the numbers in the array are the same. In this case, we can choose any number $k$ and subtract it from all these numbers, for example, exactly once. Let's iterate over the element $a_{i_0}$ that will be the minimum among the numbers we want to make equal. Let's count how many numbers in the array are equal to this element. If this count is at least $\frac{n}{2}$, then the answer is $-1$. Otherwise, we iterate over the numbers $a_i$ that are strictly greater than the selected minimum and, for each of them, iterate over the divisors of $a_i - a_{i_0}$. For each divisor found, count the number of $a_i$ for which it was found. Among all divisors for which this count plus the number of elements equal to $a_{i_0}$ is at least $\frac{n}{2}$, we choose the maximum one; the greatest such divisor is the desired $k$. This solution works in $O(n^2 \sqrt{A})$ (where $A$ is the maximum absolute value in the array).
|
[
"brute force",
"math",
"number theory"
] | 1,900
|
#include <bits/stdc++.h>
using namespace std;
#define forn(i, n) for (int i = 0; i < int(n); i++)
set<int> divs(int n) {
set<int> d;
for (int dd = 1; dd * dd <= n; dd++)
if (n % dd == 0) {
d.insert(n / dd);
d.insert(dd);
}
return d;
}
int main() {
int t;
cin >> t;
forn(tt, t) {
int n;
cin >> n;
vector<int> a(n);
forn(i, n)
cin >> a[i];
int k = -1;
forn(i, n) {
int minv = a[i];
int same = 0;
vector<int> d;
forn(j, n) {
if (a[j] == minv)
same++;
else if (a[j] > minv)
d.push_back(a[j] - minv);
}
if (same >= n / 2) {
k = INT_MAX;
continue;
}
map<int,int> div_counts;
for (int di: d)
for (int dd: divs(di))
div_counts[dd]++;
for (auto p: div_counts)
if (p.second >= n / 2 - same)
k = max(k, p.first);
}
cout << (k == INT_MAX ? -1 : k) << endl;
}
}
|
1593
|
E
|
Gardener and Tree
|
A tree is an undirected connected graph in which there are no cycles. This problem is about non-rooted trees. A leaf of a tree is a vertex that is connected to \textbf{at most one} vertex.
The gardener Vitaly grew a tree from $n$ vertices. He decided to trim the tree. To do this, he performs a number of operations. In one operation, he removes \textbf{all} leaves of the tree.
\begin{center}
{\small Example of a tree.}
\end{center}
For example, consider the tree shown in the figure above. The figure below shows the result of applying exactly one operation to the tree.
\begin{center}
{\small The result of applying the operation "remove all leaves" to the tree.}
\end{center}
Note the special cases of the operation:
- applying an operation to an empty tree (of $0$ vertices) does not change it;
- applying an operation to a tree of one vertex removes this vertex (this vertex is treated as a leaf);
- applying an operation to a tree of two vertices removes both vertices (both vertices are treated as leaves).
Vitaly applied $k$ operations sequentially to the tree. How many vertices remain?
|
Let's create two arrays of length $n$. For each vertex $v$, $layer[v]$ will store the number of the operation at which $v$ is deleted. The array $rem$ will store the current number of remaining neighbors of each vertex; it is initialized with the vertex degrees in the original tree. Initially, we will suppose that the gardener performs an infinite number of operations, and we will simply calculate for each vertex the number of the operation at which it will be deleted. Let's create a queue $q$, which will store the order of deleting vertices. The queue will contain only those vertices all of whose neighbors, except maybe one, have been removed (i.e. $rem[v] \le 1$). Let's add all leaves of the original tree to it, and for each of them store the value $1$ in the array $layer$ (because all original leaves will be removed during the first operation). Next, we will take one vertex at a time from the queue and update the data about its neighbors. Consider the neighbors. Since we are deleting the current vertex, we need to update $rem$ of its neighbors. If a neighbor's $rem$ is equal to $1$, then it's already in the queue and it doesn't need to be considered right now. Otherwise, we decrease the neighbor's $rem$ by $1$. If it becomes equal to $1$, then the neighbor must be added to the queue; the number of the operation at which the neighbor will be deleted equals the number of the operation at which the current vertex is deleted plus $1$. After we calculate the operation numbers for all vertices, we need to select those vertices that will not be deleted during operations $1, 2, \dots, k$. Thus, the answer is the number of vertices $v$ such that $layer[v] > k$.
|
[
"brute force",
"data structures",
"dfs and similar",
"greedy",
"implementation",
"trees"
] | 1,600
|
#include <bits/stdc++.h>
using namespace std;
int main()
{
int t;
cin >> t;
while (t--)
{
int n, k;
cin >> n >> k;
vector<vector<int>> g(n, vector<int>());
vector<int> rem(n, 0);
vector<int> layer(n, 0);
for (int i = 1; i < n; i++)
{
int x, y;
cin >> x >> y;
x--;
y--;
g[x].push_back(y);
g[y].push_back(x);
rem[x]++;
rem[y]++;
}
queue<int> q;
for (int i = 0; i < n; i++)
if (rem[i] == 1)
{
q.push(i);
layer[i] = 1;
}
while (!q.empty())
{
int u = q.front();
q.pop();
for (int v : g[u])
{
if (rem[v] != 1)
{
rem[v]--;
if (rem[v] == 1)
{
layer[v] = layer[u] + 1;
q.push(v);
}
}
}
}
int ans = 0;
for (int i = 0; i < n; i++)
if (layer[i] > k)
ans++;
cout << ans << '\n';
}
return 0;
}
|
1593
|
F
|
Red-Black Number
|
It is given a non-negative integer $x$, the decimal representation of which contains $n$ digits. You need to color \textbf{each} its digit in red or black, so that the number formed by the red digits is divisible by $A$, and the number formed by the black digits is divisible by $B$.
\textbf{At least one} digit must be colored in each of two colors. Consider, the count of digits colored in red is $r$ and the count of digits colored in black is $b$. Among all possible colorings of the given number $x$, you need to output any such that the value of $|r - b|$ is \textbf{the minimum possible}.
Note that the number $x$ and the numbers formed by digits of each color, \textbf{may contain leading zeros}.
\begin{center}
{\small Example of painting a number for $A = 3$ and $B = 13$}
\end{center}
The figure above shows an example of painting the number $x = 02165$ of $n = 5$ digits for $A = 3$ and $B = 13$. The red digits form the number $015$, which is divisible by $3$, and the black ones — $26$, which is divisible by $13$. Note that the absolute value of the difference between the counts of red and black digits is $1$, it is impossible to achieve a smaller value.
|
The number $x$ is divisible by the number $y$ if and only if $x \equiv 0$ modulo $y$. Let's solve this problem with dynamic programming over four states: the number of considered digits of the number $x$, the number of those considered digits that we have colored red, the remainder of the red number modulo $A$, and the remainder of the black number modulo $B$. The value corresponding to a state consists of three parts: whether the state is reachable, the color of the last digit, and the parent state. Let's assume that a number containing $0$ digits is equal to $0$.
Initially, let's mark as reachable the state in which $0$ digits are considered, $0$ of them are red, and both remainders are equal to $0$. Next, let's iterate over all states in the following order: first by the number of considered digits, then by the number of considered red digits, then by the remainder of the division by $A$ and by $B$. From the current state, if it is reachable (i.e. the corresponding mark is set), two transitions to new states are possible: in the first transition we paint the next digit red, in the second one black. We also store the current state in the new states as the previous one.
A solution exists if and only if some state in which exactly $n$ digits are considered, of which at least $1$ and at most $n - 1$ are red, and both remainders are equal to $0$, is marked as reachable. Among such states, let's find one minimizing $|r - b|$. Using the stored information about the color of the last digit and the previous state, we can restore the colors of all digits of the number $x$.
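A feasibility-only sketch of this DP in Python (illustrative: it tracks the set of reachable (red count, red remainder mod $A$, black remainder mod $B$) triples and omits the parent pointers needed to reconstruct the coloring):

```python
def best_red_count(x, A, B):
    """Return a red-digit count minimising |r - b|, or None if impossible."""
    n = len(x)
    states = {(0, 0, 0)}  # (red digits taken, red number mod A, black number mod B)
    for ch in x:
        d = int(ch)
        nxt = set()
        for r, ra, rb in states:
            nxt.add((r + 1, (ra * 10 + d) % A, rb))  # paint this digit red
            nxt.add((r, ra, (rb * 10 + d) % B))      # paint this digit black
        states = nxt
    good = [r for r, ra, rb in states if ra == 0 and rb == 0 and 1 <= r <= n - 1]
    # |r - b| = |2r - n| since b = n - r
    return min(good, key=lambda r: abs(2 * r - n)) if good else None

# the example from the statement: x = 02165, A = 3, B = 13, best |r - b| is 1
r = best_red_count("02165", 3, 13)
assert r is not None and abs(2 * r - 5) == 1
```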
|
[
"dfs and similar",
"dp",
"implementation",
"math",
"meet-in-the-middle"
] | 2,100
|
#include <bits/stdc++.h>
using namespace std;
const int MAX_N = 64;
bool dp[MAX_N][MAX_N][MAX_N][MAX_N]; // (taken, red, mod A, mod B) -> may be
pair<bool, int> sert[MAX_N][MAX_N][MAX_N][MAX_N]; // the same -> (true (red) | false(black), prev red/black)
int main()
{
int t;
cin >> t;
while (t--)
{
int n, a, b;
string x;
cin >> n >> a >> b >> x;
for (int i = 0; i <= n; i++)
for (int j = 0; j <= n; j++)
for (int k = 0; k < a; k++)
for (int l = 0; l < b; l++)
dp[i][j][k][l] = false;
dp[0][0][0][0] = true;
for(int taken = 0; taken < n; taken++)
for(int red = 0; red <= taken; red++)
for(int remA = 0; remA < a; remA++)
for(int remB = 0; remB < b; remB++)
if (dp[taken][red][remA][remB])
{
// red transition
dp[taken + 1][red + 1][(remA * 10 + (x[taken] - '0')) % a][remB] = true;
sert[taken + 1][red + 1][(remA * 10 + (x[taken] - '0')) % a][remB] = { true, remA };
// black transition
dp[taken + 1][red][remA][(remB * 10 + (x[taken] - '0')) % b] = true;
sert[taken + 1][red][remA][(remB * 10 + (x[taken] - '0')) % b] = { false, remB };
}
int bestRed = 0;
for (int red = 1; red < n; red++)
if (dp[n][red][0][0] && abs(red - (n - red)) < abs(bestRed - (n - bestRed)))
bestRed = red;
if (bestRed == 0)
{
cout << "-1\n";
}
else
{
int taken = n;
int red = bestRed;
int remA = 0;
int remB = 0;
string ans = "";
while (taken > 0)
{
auto way = sert[taken][red][remA][remB];
if (way.first)
{
red--;
remA = way.second;
ans.push_back('R');
}
else
{
remB = way.second;
ans.push_back('B');
}
taken--;
}
reverse(ans.begin(), ans.end());
cout << ans << '\n';
}
}
return 0;
}
|
1593
|
G
|
Changing Brackets
|
A sequence of round and square brackets is given. You can change the sequence by performing the following operations:
- change the direction of a bracket from opening to closing and vice versa without changing the form of the bracket: i.e. you can change '(' to ')' and ')' to '('; you can change '[' to ']' and ']' to '['. The operation costs $0$ burles.
- change any \textbf{square} bracket to \textbf{round} bracket having the same direction: i.e. you can change '[' to '(' but \textbf{not} from '(' to '['; similarly, you can change ']' to ')' but \textbf{not} from ')' to ']'. The operation costs $1$ burle.
The operations can be performed in any order any number of times.
You are given a string $s$ of the length $n$ and $q$ queries of the type "l r" where $1 \le l < r \le n$. For every substring $s[l \dots r]$, find the minimum cost to pay to make it a correct bracket sequence. It is guaranteed that the substring $s[l \dots r]$ has an even length.
The queries must be processed independently, i.e. the changes made in the string for the answer to a question $i$ don't affect the queries $j$ ($j > i$). In other words, for every query, the substring $s[l \dots r]$ is given from the initially given string $s$.
A correct bracket sequence is a sequence that can be built according the following rules:
- an empty sequence is a correct bracket sequence;
- if "s" is a correct bracket sequence, the sequences "(s)" and "[s]" are correct bracket sequences.
- if "s" and "t" are correct bracket sequences, the sequence "st" (the concatenation of the sequences) is a correct bracket sequence.
E.g. the sequences "", "(()[])", "[()()]()" and "(())()" are correct bracket sequences whereas "(", "[(])" and ")))" are not.
|
Consider a substring $t = s[l \dots r]$. Let's call the square brackets located in odd positions of the substring odd brackets, and the square brackets located in even positions even brackets. Let $cnt_{odd}$ be the number of odd brackets, $cnt_{even}$ the number of even brackets, and $cnt_{all} = cnt_{odd} + cnt_{even}$ the number of all square brackets. Let's prove that the string $t$ can be turned into a correct bracket sequence for $0$ burles if and only if $cnt_{odd} = cnt_{even}$.
Let's prove the necessary condition. Suppose the initial substring has been turned into a correct bracket sequence. Since we have paid $0$ burles, there is no bracket whose form has been changed. Therefore, $cnt_{odd}$ for the new sequence is the same as $cnt_{odd}$ for the initial sequence, and similarly for $cnt_{even}$. Let's say that two square brackets form a pair if the left one is an opening bracket, the right one is a closing bracket, and the substring between them is a correct bracket sequence. A pair can be formed only by one odd bracket and one even bracket, because an even number of brackets is placed between them (since it's a correct bracket sequence), so the difference between their indices is odd. In a correct bracket sequence, each square bracket has a pairwise bracket. Therefore, a correct bracket sequence contains $\frac{cnt_{all}}{2}$ pairs of square brackets, so $cnt_{odd} = cnt_{even} = \frac{cnt_{all}}{2}$.
Let's prove the sufficient condition. Suppose the initial substring contains equal numbers of odd and even brackets. Let's prove by induction that the substring may be turned into a correct bracket sequence for $0$ burles. Suppose $cnt_{odd} = cnt_{even} = 0$, i.e. the initial substring contains only round brackets. Let's make the first $\frac{r - l + 1}{2}$ brackets opening and the other brackets closing. The resulting sequence is a correct bracket sequence, and we haven't changed the form of any bracket, so the cost is equal to $0$.
A correct bracket sequence has two important properties: after deleting a substring that is itself a correct bracket sequence, the resulting string is a correct bracket sequence; and after inserting any correct bracket sequence at any place, the resulting string is a correct bracket sequence. These properties apply to incorrect bracket sequences, too: after deleting a substring that is a correct bracket sequence from an incorrect bracket sequence, or inserting a correct bracket sequence into an incorrect one, the resulting sequence is still an incorrect bracket sequence.
Consider a substring $t$ such that $cnt_{odd} = cnt_{even} > 0$, and suppose we have already proved that every substring whose common value $cnt_{odd} = cnt_{even}$ is smaller by $1$ can be turned into a correct bracket sequence for $0$ burles. Let's find two square brackets such that one of them is odd, the other is even, and there are no square brackets between them. Between them there is an even number of round brackets, which can be turned into a correct bracket sequence for $0$ burles. Let's make the left found bracket opening and the right one closing. Then the substring starting at the left found bracket and ending at the right found bracket is a correct bracket sequence. Let's remove it from $t$. The resulting string contains $cnt_{odd} - 1$ odd brackets and $cnt_{even} - 1$ even brackets, so, by the induction hypothesis, it can be turned into a correct bracket sequence for $0$ burles. Let's do so and then insert the removed string back into its place. Since we insert a correct bracket sequence into a correct bracket sequence, the resulting string is a correct bracket sequence.
Actually, the operations of inserting and removing are not allowed; they have been used for clarity. The string can be turned into a correct bracket sequence without these operations as follows: let's turn the substring we have "removed" into a correct bracket sequence (as described above), then change the other brackets of the string in the same way as it was done with the string that resulted after the removal. The resulting string is a correct bracket sequence. Therefore, the illegal operations of inserting and removing are not necessary; all other operations cost $0$ burles, so the substring $t$ can be turned into a correct bracket sequence for $0$ burles.
Therefore, to turn a substring into a correct bracket sequence, we need to obtain a sequence such that $cnt_{odd} = cnt_{even}$. Suppose that, initially, $cnt_{odd} > cnt_{even}$. Let's pay $cnt_{odd} - cnt_{even}$ burles to replace $cnt_{odd} - cnt_{even}$ odd brackets with round brackets. If $cnt_{odd} < cnt_{even}$, let's replace $cnt_{even} - cnt_{odd}$ even brackets with round brackets instead. Either way, we must pay $|cnt_{odd} - cnt_{even}|$ burles. We cannot pay less than this value because a correct bracket sequence requires $cnt_{odd} = cnt_{even}$, and there is no need to pay more, because once we turn the initial substring into a sequence with $cnt_{odd} = cnt_{even}$, we can turn it into a correct bracket sequence for free. Therefore, the answer for a given query is $|cnt_{odd} - cnt_{even}|$.
Since we must answer the queries fast, let's use the concept of prefix sums. If the given string $s$ contains $n$ brackets, let's create arrays $psumOdd$ and $psumEven$ of length $n + 1$. $psumOdd[i]$ will contain the number of odd brackets on the prefix of the string $s$ of length $i$, and $psumEven[i]$ the same value for even brackets. Let's initialize $psumOdd[0] = psumEven[0] = 0$ and then iterate $i$ from $1$ to $n$.
Let's initialize $psumOdd[i] = psumOdd[i - 1]$ and $psumEven[i] = psumEven[i - 1]$. If the $i$-th bracket is round, then the current values are already correct. Otherwise, let's determine which kind of square bracket it is: if $i$ is odd, the bracket is odd, so we must increase $psumOdd[i]$ by $1$; if $i$ is even, the bracket is even, so we must increase $psumEven[i]$ by $1$. To get the answer for the current $l$ and $r$, let's calculate $cnt_{odd}$ and $cnt_{even}$. $cnt_{odd}$ is the number of odd brackets that belong to the prefix of length $r$ but not to the prefix of length $l - 1$, so $cnt_{odd} = psumOdd[r] - psumOdd[l - 1]$. Similarly, $cnt_{even} = psumEven[r] - psumEven[l - 1]$. The remaining thing is to output $|cnt_{odd} - cnt_{even}|$.
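The prefix-sum answering scheme can be sketched in Python (a minimal illustration using $1$-based global positions; only the absolute difference matters, so global parity works just as well as parity inside the substring):

```python
def bracket_costs(s, queries):
    n = len(s)
    psum_odd = [0] * (n + 1)   # square brackets at odd 1-based positions
    psum_even = [0] * (n + 1)  # square brackets at even 1-based positions
    for i in range(1, n + 1):
        psum_odd[i] = psum_odd[i - 1]
        psum_even[i] = psum_even[i - 1]
        if s[i - 1] in '[]':
            if i % 2 == 1:
                psum_odd[i] += 1
            else:
                psum_even[i] += 1
    answers = []
    for l, r in queries:       # 1-based, inclusive
        odd = psum_odd[r] - psum_odd[l - 1]
        even = psum_even[r] - psum_even[l - 1]
        answers.append(abs(odd - even))
    return answers

# "[(])": both square brackets sit at odd positions, so both must become round
assert bracket_costs("[(])", [(1, 4)]) == [2]
```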
|
[
"constructive algorithms",
"data structures",
"dp",
"greedy"
] | 2,200
|
#include <bits/stdc++.h>
using namespace std;
const int MAX_N = 1'000'000;
int psumOdd[MAX_N + 1];
int psumEven[MAX_N + 1];
void solve()
{
string s;
int q;
cin >> s >> q;
int n = s.length();
psumOdd[0] = psumEven[0] = 0;
for (int i = 0; i < n; i++)
{
psumOdd[i + 1] = psumOdd[i];
psumEven[i + 1] = psumEven[i];
if (s[i] == '[' || s[i] == ']')
{
if (i & 1)
psumOdd[i + 1]++;
else
psumEven[i + 1]++;
}
}
while (q--)
{
int l, r;
cin >> l >> r;
l--;
int odd = psumOdd[r] - psumOdd[l];
int even = psumEven[r] - psumEven[l];
cout << abs(odd - even) << '\n';
}
}
int main()
{
int t;
cin >> t;
while (t--)
solve();
return 0;
}
|
1594
|
A
|
Consecutive Sum Riddle
|
Theofanis has a riddle for you and if you manage to solve it, he will give you a Cypriot snack halloumi for free (Cypriot cheese).
You are given an integer $n$. You need to find two integers $l$ and $r$ such that $-10^{18} \le l < r \le 10^{18}$ and $l + (l + 1) + \ldots + (r - 1) + r = n$.
|
You can take $(-n + 1) + (-n + 2) + \ldots + (n - 1) + n$, so the sum will be $n$. Thus, $l = -n + 1$ and $r = n$.
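A quick check of this construction (Python sketch, assuming $n \ge 1$):

```python
def find_lr(n):
    # the terms -n+1, ..., n-1 cancel in pairs, leaving exactly n
    return -n + 1, n

for n in (1, 7, 10**4):
    l, r = find_lr(n)
    assert l < r and sum(range(l, r + 1)) == n
```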
|
[
"math"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
int main(void){
ios_base::sync_with_stdio(0);
cin.tie(0);
int t;
cin>>t;
while(t--){
ll n;
cin>>n;
cout<<-n+1<<" "<<n<<endl;
}
}
|
1594
|
B
|
Special Numbers
|
Theofanis really likes sequences of positive integers, thus his teacher (Yeltsa Kcir) gave him a problem about a sequence that consists of only special numbers.
Let's call a positive number special if it can be written as a sum of \textbf{different} non-negative powers of $n$. For example, for $n = 4$ number $17$ is special, because it can be written as $4^0 + 4^2 = 1 + 16 = 17$, but $9$ is not.
Theofanis asks you to help him find the $k$-th special number if they are sorted in increasing order. Since this number may be too large, output it modulo $10^9+7$.
|
The problem is the same as finding the $k$-th number that in base $n$ has only zeros and ones. So you can write $k$ in binary and, instead of powers of $2$, add the corresponding powers of $n$.
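In Python this is a few lines (sketch):

```python
def kth_special(n, k, mod=10**9 + 7):
    # write k in binary; replace each power of 2 by the same power of n
    ans, p = 0, 1
    while k:
        if k & 1:
            ans = (ans + p) % mod
        p = p * n % mod
        k >>= 1
    return ans

# n = 4: special numbers in increasing order are 1, 4, 5, 16, 17, ...
assert kth_special(4, 3) == 5    # binary 11 -> 4^0 + 4^1
assert kth_special(4, 5) == 17   # binary 101 -> 4^0 + 4^2, as in the statement
```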
|
[
"bitmasks",
"math"
] | 1,100
|
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll INF = 1e9+7;
const ll MOD = 998244353;
typedef pair<ll,ll> ii;
#define iii pair<ll,ii>
#define f(i,a,b) for(ll i = a;i < b;i++)
#define pb push_back
#define vll vector<ll>
#define F first
#define S second
#define all(x) (x).begin(), (x).end()
int main(void){
ios_base::sync_with_stdio(0);
cin.tie(0);
int t;
cin>>t;
while(t--){
ll n,k;
cin>>n>>k;
ll p = 1;
ll ans = 0;
f(j,0,31){
if(k & (1<<j)){
ans = (ans + p) % INF;
}
p *= n;
p %= INF;
}
cout<<ans<<"\n";
}
}
|
1594
|
C
|
Make Them Equal
|
Theofanis has a string $s_1 s_2 \dots s_n$ and a character $c$. He wants to make all characters of the string equal to $c$ using the minimum number of operations.
In one operation he can choose a number $x$ ($1 \le x \le n$) and \textbf{for every position $i$}, where $i$ is \textbf{not} divisible by $x$, replace $s_i$ with $c$.
Find the minimum number of operations required to make all the characters equal to $c$ and the $x$-s that he should use in his operations.
|
If the whole string is already equal to $c$, then you don't need to make any operations. To find out whether exactly $1$ operation suffices, we can iterate over every $x$ and check that $s_i = c$ for every position $i$ divisible by $x$ (these are the only positions an operation with this $x$ leaves unchanged). Over all $x$ this takes $O(|s| \log |s|)$ time. If such an $x$ exists, the answer is $1$ operation with that $x$. If none of the above conditions hold, you can always make all elements equal with $2$ operations. One possible way is $x = |s|$ and $x = |s| - 1$: after the first operation, only the last element of $s$ may differ from $c$, and since $\gcd(|s|, |s| - 1) = 1$, the position $|s|$ is not divisible by $|s| - 1$, so the second operation makes it equal to $c$ as well. Time complexity: $O(|s| \log |s|)$ per test case.
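A direct Python sketch of this check (assuming $|s| \ge 2$):

```python
def min_ops(s, c):
    """Return the list of x-s for a minimum-length sequence of operations."""
    n = len(s)
    if all(ch == c for ch in s):
        return []
    for x in range(1, n + 1):
        # an operation with this x overwrites every position NOT divisible
        # by x, so only the multiples of x must already hold the letter c
        if all(s[m - 1] == c for m in range(x, n + 1, x)):
            return [x]
    return [n, n - 1]  # two operations always suffice (gcd(n, n-1) = 1)

assert min_ops("aaaaa", "a") == []
assert min_ops("abaaa", "a") == [3]     # only position 3 needs to stay fixed
assert min_ops("bbbbb", "a") == [5, 4]  # no single x works
```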
|
[
"brute force",
"greedy",
"math",
"strings"
] | 1,200
|
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll INF = 1e9+7;
const ll MOD = 998244353;
typedef pair<ll,ll> ii;
#define iii pair<ll,ii>
#define f(i,a,b) for(int i = a;i < b;i++)
#define pb push_back
#define vll vector<ll>
#define F first
#define S second
#define all(x) (x).begin(), (x).end()
int main(void){
ios_base::sync_with_stdio(0);
cin.tie(0);
int t;
cin>>t;
while(t--){
ll n;
cin>>n;
char c;
cin>>c;
string s;
cin>>s;
vector<int>ans;
bool ok = true;
for(auto x:s){
if(x != c){
ok = false;
}
}
if(!ok){
f(i,1,n+1){
ok = true;
f(j,i,n+1){
ok &= (s[j-1] == c);
j += i-1;
}
if(ok){
ans.pb(i);
break;
}
}
if(!ok){
ans.pb(n);
ans.pb(n-1);
}
}
cout<<ans.size()<<"\n";
for(auto x:ans){
cout<<x<<" ";
}
cout<<"\n";
}
}
|
1594
|
D
|
The Number of Imposters
|
Theofanis started playing the new online game called "Among them". However, he always plays with Cypriot players, and they all have the same name: "Andreas" (the most common name in Cyprus).
In each game, Theofanis plays with $n$ other players. Since they all have the same name, they are numbered from $1$ to $n$.
The players write $m$ comments in the chat. A comment has the structure of "$i$ $j$ $c$" where $i$ and $j$ are two distinct integers and $c$ is a string ($1 \le i, j \le n$; $i \neq j$; $c$ is either imposter or crewmate). The comment means that player $i$ said that player $j$ has the role $c$.
An imposter always lies, and a crewmate always tells the truth.
Help Theofanis find the maximum possible number of imposters among all the other Cypriot players, or determine that the comments contradict each other (see the notes for further explanation).
Note that each player has exactly \textbf{one} role: either imposter or crewmate.
|
If person $A$ said in a comment that person $B$ is a crewmate, then $A$ and $B$ belong to the same team (either both imposters or both crewmates). If person $A$ said in a comment that person $B$ is an imposter, then $A$ and $B$ belong to different teams.
Solution $1$: You can build a graph and check that all its components are bipartite. If person $A$ said that $B$ is an imposter, then we add an edge from $A$ to $B$. If person $A$ said that $B$ is a crewmate, then we add an edge from $A$ to a fake node and from the same fake node to $B$. For each component, we check that it is bipartite and take the larger of its two colour classes. If some component is not bipartite, the answer is $-1$.
Solution $2$: We can build the graph in another way: if $A$ and $B$ are in the same team, we add an edge with weight $0$, otherwise with weight $1$. Then you can use DFS and colour the nodes either $0$ or $1$, maintaining the property that $colour(u) \oplus colour(v) = w(u, v)$ for every edge $(u, v)$.
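Solution $2$ can be sketched in Python with an iterative DFS (illustrative):

```python
def max_imposters(n, comments):
    # edge weight 0 = same team, 1 = different teams
    adj = [[] for _ in range(n + 1)]
    for i, j, c in comments:
        w = 1 if c == "imposter" else 0
        adj[i].append((j, w))
        adj[j].append((i, w))
    colour = [-1] * (n + 1)
    total = 0
    for s in range(1, n + 1):
        if colour[s] != -1:
            continue
        colour[s] = 0
        cnt = [1, 0]                 # sizes of the two colour classes
        stack = [s]
        while stack:
            u = stack.pop()
            for v, w in adj[u]:
                if colour[v] == -1:
                    colour[v] = colour[u] ^ w
                    cnt[colour[v]] += 1
                    stack.append(v)
                elif colour[v] != colour[u] ^ w:
                    return -1        # the comments contradict each other
        total += max(cnt)            # either class can be the imposters
    return total

assert max_imposters(3, [(1, 2, "imposter"), (2, 3, "crewmate")]) == 2
assert max_imposters(2, [(1, 2, "imposter"), (2, 1, "crewmate")]) == -1
```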
|
[
"constructive algorithms",
"dfs and similar",
"dp",
"dsu",
"graphs"
] | 1,700
|
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll INF = 1e9+7;
const ll MOD = 998244353;
typedef pair<ll,ll> ii;
#define iii pair<ll,ii>
#define f(i,a,b) for(int i = a;i < b;i++)
#define pb push_back
#define vll vector<ll>
#define F first
#define S second
#define all(x) (x).begin(), (x).end()
vector<vector<ii> >adj;
int c[2];
int colour[200005];
bool ok;
void dfs(int idx){
c[colour[idx]]++;
for(auto x:adj[idx]){
if(colour[x.F] == -1){
colour[x.F] = colour[idx] ^ x.S;
dfs(x.F);
}
else if(colour[x.F] != (colour[idx] ^ x.S)){
///impossible
ok = false;
}
}
}
int main(void){
ios_base::sync_with_stdio(0);
cin.tie(0);
int t;
cin>>t;
while(t--){
int n,m;
cin>>n>>m;
adj.assign(n+5,vector<ii>());
f(i,0,n+5){
colour[i] = -1;
}
f(i,0,m){
int a,b;
string c;
cin>>a>>b>>c;
if(c == "crewmate"){
///same team
adj[a].pb(ii(b,0));
adj[b].pb(ii(a,0));
}
else{
///different team
adj[a].pb(ii(b,1));
adj[b].pb(ii(a,1));
}
}
int ans = 0;
ok = true;
f(i,1,n+1){
if(colour[i] == -1){
colour[i] = 0;
c[0] = c[1] = 0;
dfs(i);
ans += max(c[0],c[1]);
}
}
if(!ok){
ans = -1;
}
cout<<ans<<"\n";
}
}
|
1594
|
E1
|
Rubik's Cube Coloring (easy version)
|
\textbf{It is the easy version of the problem. The difference is that in this version, there are no nodes with already chosen colors.}
Theofanis is starving, and he wants to eat his favorite food, sheftalia. However, he should first finish his homework. Can you help him with this problem?
You have a perfect binary tree of $2^k - 1$ nodes — a binary tree where all vertices $i$ from $1$ to $2^{k - 1} - 1$ have exactly two children: vertices $2i$ and $2i + 1$. Vertices from $2^{k - 1}$ to $2^k - 1$ don't have any children. You want to color its vertices with the $6$ Rubik's cube colors (White, Green, Red, Blue, Orange and Yellow).
Let's call a coloring good when all edges connect nodes with colors that are neighboring sides in the Rubik's cube.
\begin{center}
{\small A picture of Rubik's cube and its 2D map.}
\end{center}
More formally:
- a white node can \textbf{not} be neighboring with white and yellow nodes;
- a yellow node can \textbf{not} be neighboring with white and yellow nodes;
- a green node can \textbf{not} be neighboring with green and blue nodes;
- a blue node can \textbf{not} be neighboring with green and blue nodes;
- a red node can \textbf{not} be neighboring with red and orange nodes;
- an orange node can \textbf{not} be neighboring with red and orange nodes;
You want to calculate the number of the good colorings of the binary tree. Two colorings are considered different if at least one node is colored with a different color.
The answer may be too large, so output the answer modulo $10^9+7$.
|
You have $6$ choices for the first node and $4$ for each other node. Thus, the answer is $6 \cdot 4 ^{2 ^ k - 2}$.
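A direct Python computation of this formula:

```python
MOD = 10**9 + 7

def good_colorings(k):
    # 6 choices for the root, 4 for each of the remaining 2^k - 2 nodes
    return 6 * pow(4, (1 << k) - 2, MOD) % MOD

assert good_colorings(1) == 6              # a single node: any of the 6 colors
assert good_colorings(2) == 6 * 4 * 4      # root plus two children
```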
|
[
"combinatorics",
"math"
] | 1,300
|
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll INF = 1e9+7;
const ll MOD = 998244353;
typedef pair<ll,ll> ii;
#define iii pair<ii,ll>
#define f(i,a,b) for(ll i = a;i < b;i++)
#define pb push_back
#define vll vector<ll>
#define F first
#define S second
#define all(x) (x).begin(), (x).end()
ll power(ll a,ll b,ll mod){
if(b == 0){
return 1;
}
ll ans = power(a,b/2,mod);
ans *= ans;
ans %= mod;
if(b % 2){
ans *= a;
}
return ans % mod;
}
int main(void){
ios_base::sync_with_stdio(0);
cin.tie(0);
ll k;
cin>>k;
ll othernodes = (1LL<<k) - 2;
ll ans = power(4,othernodes,INF);
ans *= 6;
ans %= INF;
cout<<ans<<"\n";
}
|
1594
|
E2
|
Rubik's Cube Coloring (hard version)
|
\textbf{It is the hard version of the problem. The difference is that in this version, there are nodes with already chosen colors.}
Theofanis is starving, and he wants to eat his favorite food, sheftalia. However, he should first finish his homework. Can you help him with this problem?
You have a perfect binary tree of $2^k - 1$ nodes — a binary tree where all vertices $i$ from $1$ to $2^{k - 1} - 1$ have exactly two children: vertices $2i$ and $2i + 1$. Vertices from $2^{k - 1}$ to $2^k - 1$ don't have any children. You want to color its vertices with the $6$ Rubik's cube colors (White, Green, Red, Blue, Orange and Yellow).
Let's call a coloring good when all edges connect nodes with colors that are neighboring sides in the Rubik's cube.
\begin{center}
{\small A picture of Rubik's cube and its 2D map.}
\end{center}
More formally:
- a white node can \textbf{not} be neighboring with white and yellow nodes;
- a yellow node can \textbf{not} be neighboring with white and yellow nodes;
- a green node can \textbf{not} be neighboring with green and blue nodes;
- a blue node can \textbf{not} be neighboring with green and blue nodes;
- a red node can \textbf{not} be neighboring with red and orange nodes;
- an orange node can \textbf{not} be neighboring with red and orange nodes;
However, there are $n$ special nodes in the tree, colors of which are already chosen.
You want to calculate the number of the good colorings of the binary tree. Two colorings are considered different if at least one node is colored with a different color.
The answer may be too large, so output the answer modulo $10^9+7$.
|
Let's define a node as marked if it has a predefined node in its subtree. There is always at least $1$ marked node, since all predefined nodes are definitely marked. The marked nodes are exactly the nodes lying on a path from the root to some predefined node. Thus there are at most $n \cdot k$ marked nodes, and we can run a standard $dp[i][j]$ on them (the number of colorings when node $i$ is colored with $j$). Depending on the implementation, the $dp$ has time complexity $O(n \cdot k \cdot 6 \cdot 4)$, or $O(n \cdot k \cdot \log(n \cdot k) \cdot 6 \cdot 4)$ if you use a map. You multiply the result by $4^{m}$, where $m$ is the number of unmarked nodes. This holds because if its parent has a fixed color, an unmarked node always has $4$ choices, and so on down its subtree.
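The set of marked nodes is just the union of root-to-predefined-node paths, so it can be collected as follows (Python sketch; the $n \cdot k$ bound follows because each path contains at most $k$ nodes):

```python
def marked_nodes(predefined):
    # every ancestor of a predefined node (including the node itself)
    # in the implicit heap numbering (children of v are 2v and 2v+1)
    marked = set()
    for v in predefined:
        while v >= 1 and v not in marked:
            marked.add(v)
            v //= 2          # move to the parent
    return marked

assert marked_nodes([5]) == {5, 2, 1}
assert marked_nodes([5, 6]) == {1, 2, 3, 5, 6}
```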
|
[
"brute force",
"dp",
"implementation",
"math",
"trees"
] | 2,300
|
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll INF = 1e9+7;
const ll MOD = 998244353;
typedef pair<ll,ll> ii;
#define iii pair<ii,ll>
#define f(i,a,b) for(ll i = a;i < b;i++)
#define pb push_back
#define vll vector<ll>
#define F first
#define S second
#define all(x) (x).begin(), (x).end()
ll dp[(60 * 10000) + 5][6];
vll color[6];
vector<vector<int> >adj;
map<ll,int>label;
ll c[(60 * 10000) + 5];
ll solve(int i,int j){
if(c[i] != -1 && c[i] != j){
return 0;
}
if(dp[i][j] != -1){
return dp[i][j];
}
ll ans = 0;
ll sum[2] = {0};
for(auto x:color[j]){
f(j,0,adj[i].size()){
sum[j] += solve(adj[i][j],x);
sum[j] %= INF;
}
}
if(adj[i].empty()){
sum[0] = sum[1] = 1;
}
if((ll)adj[i].size() == 1){
sum[1] = 1;
}
ans = (sum[0] * sum[1]) % INF;
return dp[i][j] = ans;
}
ll power(ll a,ll b,ll mod){
if(b == 0){
return 1;
}
ll ans = power(a,b/2,mod);
ans *= ans;
ans %= mod;
if(b % 2){
ans *= a;
}
return ans % mod;
}
int main(void){
ios_base::sync_with_stdio(0);
cin.tie(0);
color[0] = {1,2,4,5};
color[1] = {0,2,3,5};
color[2] = {0,1,3,4};
color[3] = {1,2,4,5};
color[4] = {0,2,3,5};
color[5] = {0,1,3,4};
map<string,ll>mp;
mp["white"] = 0;
mp["blue"] = 1;
mp["red"] = 2;
mp["yellow"] = 3;
mp["green"] = 4;
mp["orange"] = 5;
memset(dp,-1,sizeof dp);
memset(c,-1,sizeof c);
ll k;
cin>>k;
ll n;
cin>>n;
ll posoi = (1LL<<k) - 1;
int lab = 0;
map<ll,int>ar;
f(i,0,n){
ll x;
cin>>x;
string s;
cin>>s;
ar[x] = mp[s];
ll cur = x;
while(cur >= 1 && !label.count(cur)){
label[cur] = lab;
lab++;
posoi--;
cur /= 2;
}
}
adj.assign(lab + 5,vector<int>());
for(auto x:label){
if(ar.count(x.F)){
c[x.S] = ar[x.F];
}
if(label.count(x.F * 2)){
adj[x.S].pb(label[x.F * 2]);
}
if(label.count(x.F * 2 + 1)){
adj[x.S].pb(label[x.F * 2 + 1]);
}
}
ll ans = power(4,posoi,INF);
ll sum = 0;
f(j,0,6){
sum += solve(label[1],j);
sum %= INF;
}
ans *= sum;
ans %= INF;
cout<<ans<<"\n";
}
|
1594
|
F
|
Ideal Farm
|
Theofanis decided to visit his uncle's farm. There are $s$ animals and $n$ animal pens on the farm. For utility purpose, animal pens are constructed in one row.
Uncle told Theofanis that a farm is lucky if you can distribute all animals in all pens in such a way that there are no empty pens and there is at least one continuous segment of pens that has exactly $k$ animals in total.
Moreover, a farm is ideal if it's lucky for any distribution without empty pens.
Neither Theofanis nor his uncle knows if their farm is ideal or not. Can you help them to figure it out?
|
The problem is the same as the following: we have an array $a$ of length $n$ in which every element is a positive integer and the sum of the whole array is $s$; if no matter how we construct the array $a$ we can find a non-empty subarray with sum equal to $k$, print "YES", else print "NO".
If $s = k$ then the answer is obviously "YES", and if $s < k$ then the answer is obviously "NO".
Let $pre[i] = \sum_{j=1}^{i} a[j]$ ($1$-indexed). All elements of the array $pre$ are distinct, since all $a[i]$ are positive integers. Let $b[i] = pre[i] + k$, and additionally $b[0] = k$. Again, all elements of $b$ are distinct. The array $pre$ has size $n$ and the array $b$ has size $n + 1$. An element of $pre$ equals an element of $b$ if and only if $pre[i] = pre[j] + k$ or $pre[i] = k$ for some indices. In the second case there is obviously a prefix with sum equal to $k$. In the first case, $pre[i] - pre[j] = k$, so the subarray $[j + 1, i]$ has sum $k$.
But when is such a coincidence between the two arrays forced? There are $n + (n + 1) = 2n + 1$ elements in total, and they can only take values from $1$ to $s + k$. If the maximum number of distinct values that we can use is less than $2n + 1$, the answer is "YES"; otherwise the answer is "NO".
Let $m$ be that maximum number of values. We go through the last $k$ values (the range $[s - k + 1, s]$) and, for each of them, count the number of values with the same remainder modulo $k$. If that count is odd, we cannot use all of them, because for every value $x$ that we put into $pre$ we also put $x + k$ into $b$; thus one value of the chain would need its $x + k$ to be out of range. Therefore we count all values in $[s - k + 1, s]$ whose remainder class modulo $k$ contains an odd number of elements and subtract that count from $s + k$ to find $m$.
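The equivalence can be sanity-checked on small inputs with a brute force over all compositions of $s$ into $n$ positive parts (a Python sketch for validation only; exponential in $s$):

```python
from itertools import combinations

def ideal_bruteforce(s, n, k):
    # enumerate every way to split s into n positive parts via cut points
    for cuts in combinations(range(1, s), n - 1):
        parts, prev = [], 0
        for c in list(cuts) + [s]:
            parts.append(c - prev)
            prev = c
        # does some contiguous segment of pens hold exactly k animals?
        found = any(sum(parts[i:j]) == k
                    for i in range(n) for j in range(i + 1, n + 1))
        if not found:
            return "NO"   # a distribution with no segment summing to k exists
    return "YES"

assert ideal_bruteforce(4, 2, 3) == "NO"   # the distribution (2, 2) avoids 3
assert ideal_bruteforce(3, 2, 2) == "YES"
```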
|
[
"constructive algorithms",
"math"
] | 2,400
|
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
ll INF = 1e9+7;
ll MOD = 998244353;
typedef pair<ll,ll> ii;
#define iii pair<ii,ll>
#define f(i,a,b) for(int i = a;i < b;i++)
#define pb push_back
#define vll vector<ll>
#define F first
#define S second
#define all(x) (x).begin(), (x).end()
int main(void){
ios_base::sync_with_stdio(0);
cin.tie(0);
int t;
cin>>t;
while(t--){
ll n,k,s;
cin>>s>>n>>k;
if(s == k){
cout<<"YES\n";
}
else if(k > s){
cout<<"NO\n";
}
else{
ll posa = s+k;
ll l = s-k+1,r = s;
ll siz = r - l + 1;
ll a = 0,b = 0;
ll num = (s / k) * k;
b = r - num + 1;
a = siz - b;
if((s / k) % 2 == 1){
posa -= b;
}
else{
posa -= a;
}
if((2 * n + 1) > posa){
cout<<"YES\n";
}
else{
cout<<"NO\n";
}
}
}
}
|
1598
|
A
|
Computer Game
|
Monocarp is playing a computer game. Now he wants to complete the first level of this game.
A level is a rectangular grid of $2$ rows and $n$ columns. Monocarp controls a character, which starts in cell $(1, 1)$ — at the intersection of the $1$-st row and the $1$-st column.
Monocarp's character can move from one cell to another in one step if the cells are adjacent by side and/or corner. Formally, it is possible to move from cell $(x_1, y_1)$ to cell $(x_2, y_2)$ in one step if $|x_1 - x_2| \le 1$ and $|y_1 - y_2| \le 1$. Obviously, it is prohibited to go outside the grid.
There are traps in some cells. If Monocarp's character finds himself in such a cell, he dies, and the game ends.
To complete a level, Monocarp's character should reach cell $(2, n)$ — at the intersection of row $2$ and column $n$.
Help Monocarp determine if it is possible to complete the level.
|
At first glance, it seems like a graph problem. And indeed, this problem can be solved by explicitly building a graph, considering cells as the vertices, and checking that there is a safe path from start to finish via DFS/BFS/DSU/any other graph algorithm or data structure you know. But there's a much simpler solution. Since there are only two rows in the matrix, it's possible to move from any cell in column $i$ to any cell in column $i + 1$ (if they are both safe, of course). It means that as long as there is at least one safe cell in each column, it is possible to reach any column of the matrix (and the cell $(2, n)$ as well). Conversely, if this condition is not met, there exists a column with two unsafe cells, and this column and the columns to the right of it are unreachable. So, the problem is reduced to checking whether there is a column in which both cells are traps. To implement this, you can read both rows of the matrix as strings (let these strings be $s_1$ and $s_2$) and check whether there is a position $i$ such that both $s_1[i]$ and $s_2[i]$ are equal to 1; if such a position exists, the answer is NO, otherwise it is YES.
|
[
"brute force",
"dfs and similar",
"dp",
"implementation"
] | 800
|
def solve():
n = int(input())
s1 = input()
s2 = input()
bad = False
for i in range(n):
bad |= s1[i] == '1' and s2[i] == '1'
if bad:
print('NO')
else:
print('YES')
t = int(input())
for i in range(t):
solve()
|
1598
|
B
|
Groups
|
$n$ students attended the first meeting of the Berland SU programming course ($n$ is even). All students will be divided into two groups. Each group will be attending exactly one lesson each week during one of the five working days (Monday, Tuesday, Wednesday, Thursday and Friday), and the days chosen for the groups must be different. Furthermore, both groups should contain the same number of students.
Each student has filled a survey in which they told which days of the week are convenient for them to attend a lesson, and which are not.
Your task is to determine if it is possible to choose two different week days to schedule the lessons for the group (the first group will attend the lesson on the first chosen day, the second group will attend the lesson on the second chosen day), and divide the students into two groups, so the groups have equal sizes, and for each student, the chosen lesson day for their group is convenient.
|
Since there are only five days, we can iterate over the two of them that will be the answer. Now we have fixed a pair of days $a$ and $b$ and want to check if it can be the answer. All students can be divided into four groups: marked neither of days $a$ and $b$, marked only day $a$, marked only day $b$, and marked both days. Obviously, if the first group is non-empty, days $a$ and $b$ can't be the answer. Let's call the number of students who only marked day $a$ $\mathit{cnt}_a$, and the number of students who only marked day $b$ $\mathit{cnt}_b$. If either of $\mathit{cnt}_a$ or $\mathit{cnt}_b$ exceeds $\frac{n}{2}$, then days $a$ and $b$ can't be the answer as well. Otherwise, we can always choose $\frac{n}{2} - \mathit{cnt}_a$ students from the ones who marked both days and send them to day $a$. The rest of the students can go to day $b$.
|
[
"brute force",
"implementation"
] | 1,000
|
t = int(input())
for i in range(t):
    n = int(input())
    a = [[] for i in range(n)]
    for j in range(n):
        a[j] = list(map(int, input().split()))
    ans = False
    for j in range(5):
        for k in range(5):
            if k != j:
                cnt1 = 0
                cnt2 = 0
                cntno = 0
                for z in range(n):
                    if a[z][j] == 1:
                        cnt1 += 1
                    if a[z][k] == 1:
                        cnt2 += 1
                    if a[z][j] == 0 and a[z][k] == 0:
                        cntno += 1
                if cnt1 >= n // 2 and cnt2 >= n // 2 and cntno == 0:
                    ans = True
    if ans:
        print('YES')
    else:
        print('NO')
|
1598
|
C
|
Delete Two Elements
|
Monocarp has got an array $a$ consisting of $n$ integers. Let's denote $k$ as the mathematic mean of these elements (note that it's possible that $k$ is not an integer).
The mathematic mean of an array of $n$ elements is the sum of elements divided by the number of these elements (i. e. sum divided by $n$).
Monocarp wants to delete exactly two elements from $a$ so that the mathematic mean of the remaining $(n - 2)$ elements is still equal to $k$.
Your task is to calculate the number of pairs of positions $[i, j]$ ($i < j$) such that if the elements on these positions are deleted, the mathematic mean of $(n - 2)$ remaining elements is equal to $k$ (that is, it is equal to the mathematic mean of $n$ elements of the original array $a$).
|
First of all, instead of the mathematic mean, let's consider the sum of elements. If the mathematic mean is $k$, then the sum of elements of the array is $k \cdot n$. Let's denote the sum of elements in the original array as $s$. Note $s$ is always an integer. If we remove two elements from the array, the resulting sum of elements should become $k \cdot (n - 2) = \frac{s \cdot (n - 2)}{n}$. So, the sum of the elements we remove should be exactly $\frac{2s}{n}$. If $\frac{2s}{n}$ is not an integer, the answer is $0$ (to check that, you can simply compare $(2s) \bmod n$ with $0$). Otherwise, we have to find the number of pairs $(i, j)$ such that $i < j$ and $a_i + a_j = \frac{2s}{n}$. This is a well-known problem. To solve it, you can calculate the number of occurrences of each element and store it in some associative data structure (for example, map in C++). Let $cnt_x$ be the number of occurrences of element $x$. Then, you should iterate on the element $a_i$ you want to remove and check how many elements match it, that is, how many elements give exactly $\frac{2s}{n}$ if you add $a_i$ to them. The number of these elements is just $cnt_{\frac{2s}{n} - a_i}$. Let's sum up all these values for every element in the array. Unfortunately, this sum is not the answer yet. We need to take care of two things: if for some index $i$, $2 \cdot a_i = \frac{2s}{n}$, then $a_i$ matches itself, so you have to subtract the number of such elements from the answer; every pair of elements is counted twice: the first time when we consider the first element of the pair, and the second time - when we consider the second element of the pair. So, don't forget to divide the answer by $2$.
|
[
"data structures",
"dp",
"implementation",
"math",
"two pointers"
] | 1,200
|
#include <bits/stdc++.h>
using namespace std;
int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        scanf("%d", &n);
        vector<int> a(n);
        map<int, int> cnt;
        for (auto &x : a) {
            scanf("%d", &x);
            cnt[x] += 1;
        }
        long long sum = accumulate(a.begin(), a.end(), 0LL);
        if ((2 * sum) % n != 0) {
            puts("0");
            continue;
        }
        long long need = (2 * sum) / n;
        long long ans = 0;
        for (int i = 0; i < n; ++i) {
            int a1 = a[i];
            int a2 = need - a1;
            if (cnt.count(a2)) ans += cnt[a2];
            if (a1 == a2) ans -= 1;
        }
        printf("%lld\n", ans / 2);
    }
}
|
1598
|
D
|
Training Session
|
Monocarp is the coach of the Berland State University programming teams. He decided to compose a problemset for a training session for his teams.
Monocarp has $n$ problems that none of his students have seen yet. The $i$-th problem has a topic $a_i$ (an integer from $1$ to $n$) and a difficulty $b_i$ (an integer from $1$ to $n$). All problems are different, that is, there are no two tasks that have the same topic and difficulty at the same time.
Monocarp decided to select exactly $3$ problems from $n$ problems for the problemset. The problems should satisfy \textbf{at least one} of two conditions (possibly, both):
- the topics of all three selected problems are different;
- the difficulties of all three selected problems are different.
Your task is to determine the number of ways to select three problems for the problemset.
|
There are many different ways to solve this problem, but, in my opinion, the easiest one is to count all possible triples and subtract the number of bad triples. The first part is easy - the number of ways to choose $3$ elements out of $n$ is just $\frac{n \cdot (n - 1) \cdot (n - 2)}{6}$. The second part is a bit tricky. What does it mean that the conditions in the statements are not fulfilled? There is a pair of problems with equal difficulty, and there is a pair of problems with the same topic. Since all problems in the input are different, it means that every bad triple has the following form: $[(x_a, y_a), (x_b, y_a), (x_a, y_b)]$ - i. e. there exists a problem such that it shares the difficulty with one of the other two problems, and the topic with the remaining problem of the triple. This observation allows us to calculate the number of bad triples as follows: we will iterate on the "central" problem (the one that shares the topic with the second problem and the difficulty with the third problem). If we pick $(x_a, y_a)$ as the "central" problem, we need to choose the other two. Counting ways to choose the other problems is easy if we precalculate the number of problems for each topic/difficulty: let $cntT_x$ be the number of problems with topic $x$, and $cntD_y$ be the number of problems with difficulty $y$; then, if we pick the problem $(x, y)$ as the "central one", there are $cntT_x - 1$ ways to choose a problem that shares the topic with it, and $cntD_y - 1$ ways to choose a problem that has the same difficulty - so, we have to subtract $(cntT_x - 1)(cntD_y - 1)$ from the answer for every problem $(x, y)$.
|
[
"combinatorics",
"data structures",
"geometry",
"implementation",
"math"
] | 1,700
|
#include <bits/stdc++.h>
using namespace std;
int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<int> a(n), b(n), ca(n + 1), cb(n + 1);
        for (int i = 0; i < n; ++i) {
            cin >> a[i] >> b[i];
            ca[a[i]]++; cb[b[i]]++;
        }
        long long ans = n * 1LL * (n - 1) * (n - 2) / 6;
        for (int i = 0; i < n; ++i)
            ans -= (ca[a[i]] - 1) * 1LL * (cb[b[i]] - 1);
        cout << ans << '\n';
    }
}
|
1598
|
E
|
Staircases
|
You are given a matrix, consisting of $n$ rows and $m$ columns. The rows are numbered top to bottom, the columns are numbered left to right.
Each cell of the matrix can be either free or locked.
Let's call a path in the matrix a staircase if it:
- starts and ends in the free cell;
- visits only free cells;
- has one of the two following structures:
- the second cell is $1$ to the right from the first one, the third cell is $1$ to the bottom from the second one, the fourth cell is $1$ to the right from the third one, and so on;
- the second cell is $1$ to the bottom from the first one, the third cell is $1$ to the right from the second one, the fourth cell is $1$ to the bottom from the third one, and so on.
In particular, a path, consisting of a single cell, is considered to be a staircase.
Here are some examples of staircases:
Initially all the cells of the matrix are \textbf{free}.
You have to process $q$ queries, each of them flips the state of a single cell. So, if a cell is currently free, it makes it locked, and if a cell is currently locked, it makes it free.
Print the number of different staircases after each query. Two staircases are considered different if there exists such a cell that appears in one path and doesn't appear in the other path.
|
The solution consists of two main parts: calculate the initial number of staircases, and recalculate the number of staircases on each query. The constraints were pretty loose, so we'll do the first part in $O(nm)$ and the second part in $O(n + m)$ per query. However, it's worth mentioning that faster is possible: the first part can surely be done in $O(n + m)$ and can probably be done in $O(1)$, and the second part can be done in $O(\log n)$ per query. It's important to notice that the only staircase that satisfies the requirements for both types is the staircase that consists of a single cell. Thus, staircases of both types can be calculated almost separately. Let's define "base" staircases as the staircases that can't be prolonged further in any direction. There are $O(n + m)$ of them on the grid. If a staircase consists of at least two cells, it's a part of exactly one base staircase. At the same time, every segment of a base staircase is a valid staircase by itself. Thus, the main idea of calculating the initial answer is the following. Isolate each base staircase and determine its length $\mathit{len}$ (possibly, in $O(n + m)$). Add $\binom{\mathit{len}}{2}$ (the number of segments of length at least $2$) to the answer. Add extra $nm$ one-cell staircases afterwards. If you draw the base staircases on the grid, you can easily determine their starting cells: the base staircases that start by going one cell to the right start from the first row, and the base staircases that start by going one cell down start from the first column. Notice that both types can start from cell $(1, 1)$. The updates can be handled the following way. The answer always changes by the number of staircases that pass through cell $(x, y)$ (if you ignore its state). If the cell becomes free, then these staircases are added to the answer; otherwise, they are subtracted from it. That can be calculated for the two types separately. Go first down, then right, as far as possible; let it be $cnt_1$ steps.
Go first left, then up, as far as possible; let it be $cnt_2$ steps. Then $cnt_1 \cdot cnt_2$ staircases are added to the answer. Then change the order of steps in both directions to calculate the other type of staircases. Beware of one-cell staircases again. To achieve $O(n + m)$ for the precalculation, you can calculate the length of each base staircase with a formula. To achieve $O(\log n)$ per query, you can first enumerate the cells in each base staircase separately, then maintain the set of segments of adjacent free cells in it.
|
[
"brute force",
"combinatorics",
"data structures",
"dfs and similar",
"dp",
"implementation",
"math"
] | 2,100
|
#include <bits/stdc++.h>
#define forn(i, n) for (int i = 0; i < int(n); i++)
using namespace std;
int main() {
    int n, m, q;
    scanf("%d%d%d", &n, &m, &q);
    vector<vector<int>> a(n, vector<int>(m, 1));
    long long ans = 0;
    forn(x, n) forn(y, m){
        if (x == 0){
            for (int k = 1;; ++k){
                int nx = x + k / 2;
                int ny = y + (k + 1) / 2;
                if (nx == n || ny == m) break;
                ans += k;
            }
        }
        if (y == 0){
            for (int k = 1;; ++k){
                int nx = x + (k + 1) / 2;
                int ny = y + k / 2;
                if (nx == n || ny == m) break;
                ans += k;
            }
        }
    }
    ans += n * m;
    forn(i, q){
        int x, y;
        scanf("%d%d", &x, &y);
        --x, --y;
        forn(c, 2){
            int l = 1, r = 1;
            while (true){
                int nx = x + (l + c) / 2;
                int ny = y + (l + !c) / 2;
                if (nx == n || ny == m || a[nx][ny] == 0) break;
                ++l;
            }
            while (true){
                int nx = x - (r + !c) / 2;
                int ny = y - (r + c) / 2;
                if (nx < 0 || ny < 0 || a[nx][ny] == 0) break;
                ++r;
            }
            if (a[x][y] == 0){
                ans += l * r;
            }
            else{
                ans -= l * r;
            }
        }
        ans += a[x][y];
        a[x][y] ^= 1;
        ans -= a[x][y];
        printf("%lld\n", ans);
    }
}
|
1598
|
F
|
RBS
|
A bracket sequence is a string containing only characters "(" and ")". A regular bracket sequence (or, shortly, an RBS) is a bracket sequence that can be transformed into a correct arithmetic expression by inserting characters "1" and "+" between the original characters of the sequence. For example:
- bracket sequences "()()" and "(())" are regular (the resulting expressions are: "(1)+(1)" and "((1+1)+1)");
- bracket sequences ")(", "(" and ")" are not.
Let's denote the concatenation of two strings $x$ and $y$ as $x+y$. For example, "()()" $+$ ")(" $=$ "()())(".
You are given $n$ bracket sequences $s_1, s_2, \dots, s_n$. You can rearrange them in any order (you can rearrange only the strings themselves, but not the characters in them).
Your task is to rearrange the strings in such a way that the string $s_1 + s_2 + \dots + s_n$ has as many non-empty prefixes that are RBS as possible.
|
The constraint $n \le 20$ is a clear hint that we need some exponential solution. Of course, we cannot try all $n!$ permutations. Let's instead try to design a solution with bitmask dynamic programming. A string is an RBS if its balance (the difference between the number of opening and closing brackets) is $0$, and the balance of each of its prefixes is non-negative. So, let's introduce the following dynamic programming: $dp_{m,b,f}$ is the greatest number of RBS-prefixes of a string if we considered a mask $m$ of strings $s_i$, the current balance of the prefix is $b$, and $f$ is a flag that denotes whether there already has been a prefix with negative balance. We can already get rid of one of the states: the current balance is uniquely determined by the mask $m$. So, this dynamic programming will have $O(2^{n+1})$ states. To perform transitions, we need to find a way to recalculate the value of $f$ and the answer if we append a new string at the end of the current one. Unfortunately, it's too slow to simply simulate the process. Instead, for every string $s_i$, let's precalculate the value $go(s_i, f, x)$ - how the flag and the answer change if the current flag is $f$ and the current balance is $x$. The resulting flag will be true in one of the following two cases: it is already true, or the string we append creates a new prefix with negative balance. The second case can be checked as follows: let's precalculate the minimum balance over the prefixes of $s_i$; let it be $c$. If $x + c < 0$, the flag will be true. Calculating how the answer changes is a bit trickier. If the current flag is already true, the answer doesn't change. But if it is false, the answer will increase by the number of new RBS-prefixes. If the balance before adding the string $s_i$ is $b$, then we get a new RBS-prefix for every prefix of $s_i$ such that: its balance is exactly $(-b)$ (to compensate the balance we already have), and there is no prefix with balance $(-b-1)$ in $s_i$ before this prefix.
To quickly get the number of prefixes meeting these constraints, we can create a data structure that stores the following information: for every balance $j$, store a sorted vector of positions in $s_i$ with balance equal to $j$. Then, to calculate the number of prefixes meeting the constraints, we can find the first position in $s_i$ with balance equal to $(-b-1)$ by looking at the beginning of the vector for $(-b-1)$, and then get the number of elements less than this one from the vector for balance $(-b)$ by binary search. These optimizations yield a solution in $O(2^n \log A + A \log A)$, although it's possible to improve to $O(2^n + A \log A)$ if you precalculate each value of $go(s_i, f, x)$ for every string $s_i$.
|
[
"binary search",
"bitmasks",
"brute force",
"data structures",
"dp"
] | 2,400
|
#include <bits/stdc++.h>
using namespace std;
const int INF = int(1e9);
const int N = 20;
const int M = (1 << N);
struct BracketSeqn
{
    int balance;
    int lowestBalance;
    vector<int> queryAns;
    pair<int, bool> go(int x, bool f)
    {
        if(f)
            return make_pair(0, true);
        else
            return make_pair(x < queryAns.size() ? queryAns[x] : 0, x + lowestBalance < 0);
    }
    BracketSeqn() {};
    BracketSeqn(string s)
    {
        vector<int> bal;
        int cur = 0;
        int n = s.size();
        for(auto x : s)
        {
            if(x == '(')
                cur++;
            else
                cur--;
            bal.push_back(cur);
        }
        balance = bal.back();
        lowestBalance = min(0, *min_element(bal.begin(), bal.end()));
        vector<vector<int>> negPos(-lowestBalance + 1);
        for(int i = 0; i < n; i++)
        {
            if(bal[i] > 0) continue;
            negPos[-bal[i]].push_back(i);
        }
        queryAns.resize(-lowestBalance + 1);
        for(int i = 0; i < queryAns.size(); i++)
        {
            int lastPos = int(1e9);
            if(i != -lowestBalance)
                lastPos = negPos[i + 1][0];
            queryAns[i] = lower_bound(negPos[i].begin(), negPos[i].end(), lastPos) - negPos[i].begin();
        }
    };
};
int dp[M][2];
char buf[M];
int total_bal[M];
int main()
{
    int n;
    scanf("%d", &n);
    vector<BracketSeqn> bs;
    for(int i = 0; i < n; i++)
    {
        scanf("%s", buf);
        string s = buf;
        bs.push_back(BracketSeqn(s));
    }
    for(int i = 0; i < (1 << n); i++)
        for(int j = 0; j < n; j++)
            if(i & (1 << j))
                total_bal[i] += bs[j].balance;
    for(int i = 0; i < (1 << n); i++)
        for(int j = 0; j < 2; j++)
            dp[i][j] = -int(1e9);
    dp[0][0] = 0;
    for(int i = 0; i < (1 << n); i++)
        for(int f = 0; f < 2; f++)
        {
            if(dp[i][f] < 0) continue;
            for(int k = 0; k < n; k++)
            {
                if(i & (1 << k)) continue;
                pair<int, bool> res = bs[k].go(total_bal[i], f);
                dp[i ^ (1 << k)][res.second] = max(dp[i ^ (1 << k)][res.second], dp[i][f] + res.first);
            }
        }
    printf("%d\n", max(dp[(1 << n) - 1][0], dp[(1 << n) - 1][1]));
}
|
1598
|
G
|
The Sum of Good Numbers
|
Let's call a positive integer good if there is no digit 0 in its decimal representation.
For an array of a good numbers $a$, one found out that the sum of some two neighboring elements is equal to $x$ (i.e. $x = a_i + a_{i + 1}$ for some $i$). $x$ had turned out to be a good number as well.
Then the elements of the array $a$ were written out one after another without separators into one string $s$. For example, if $a = [12, 5, 6, 133]$, then $s = 1256133$.
You are given a string $s$ and a number $x$. Your task is to determine the positions in the string that correspond to the adjacent elements of the array that have sum $x$. If there are several possible answers, you can print any of them.
|
Let's denote $a$ as the larger of the two terms of the sum, and $b$ as the smaller one. Consider $2$ cases: $|a| = |x| - 1$ or $|a| = |x|$. If $|a| = |x| - 1$, then $|b| = |x| - 1$. So we need to find two consecutive substrings of length $|x| - 1$ such that if we convert these substrings into integers, their sum is equal to $x$. If $|a| = |x|$, let $\mathit{lcp}$ be the longest common prefix of $a$ and $x$ if we consider them as strings. Then $|b| = |x| - \mathit{lcp}$ or $|b| = |x| - \mathit{lcp} - 1$. So it is necessary to check only these two cases, and whether $b$ goes before or after $a$ (in the string $s$). Thus, we have reduced the number of variants where the substrings for $a$ and $b$ are located to $O(n)$. It remains to consider how to quickly check whether the selected substrings are suitable. To do this, you can use hashes (preferably with several random moduli).
|
[
"hashing",
"math",
"string suffix structures",
"strings"
] | 3,200
|
#include <bits/stdc++.h>
using namespace std;
#define forn(i, n) for (int i = 0; i < int(n); ++i)
const int MOD[] = { 597804841, 618557587, 998244353 };
const int N = 500 * 1000 + 13;
const int K = 3;
using hs = array<int, K>;
int add(int x, int y, int mod) {
    x += y;
    if (x >= mod) x -= mod;
    if (x < 0) x += mod;
    return x;
}
int mul(int x, int y, int mod) {
    return x * 1LL * y % mod;
}
hs get(const int& x) {
    hs c;
    forn(i, K) c[i] = x;
    return c;
}
hs operator +(const hs& a, const hs& b) {
    hs c;
    forn(i, K) c[i] = add(a[i], b[i], MOD[i]);
    return c;
}
hs operator -(const hs& a, const hs& b) {
    hs c;
    forn(i, K) c[i] = add(a[i], -b[i], MOD[i]);
    return c;
}
hs operator *(const hs& a, const hs& b) {
    hs c;
    forn(i, K) c[i] = mul(a[i], b[i], MOD[i]);
    return c;
}
int n, m;
string s, sx;
hs sum[N], pw[N];
hs get(int l, int r) {
    return sum[r] - sum[l] * pw[r - l];
}
vector<int> zfunction(const string& s) {
    int n = s.size();
    vector<int> z(n);
    int l = 0, r = 0;
    for (int i = 1; i < n; ++i) {
        if (i <= r) z[i] = min(z[i - l], r - i + 1);
        while (i + z[i] < n && s[z[i]] == s[i + z[i]])
            ++z[i];
        if (i + z[i] - 1 > r) {
            l = i;
            r = i + z[i] - 1;
        }
    }
    return z;
}
int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    cin >> s >> sx;
    n = s.size();
    m = sx.size();
    pw[0] = get(1);
    forn(i, N - 1) pw[i + 1] = pw[i] * get(10);
    sum[0] = get(0);
    forn(i, n) sum[i + 1] = sum[i] * get(10) + get(s[i] - '0');
    hs x = get(0);
    forn(i, m) x = x * get(10) + get(sx[i] - '0');
    if (m > 1) for (int i = 0; i + 2 * (m - 1) <= n; ++i) {
        if (get(i, i + m - 1) + get(i + m - 1, i + 2 * (m - 1)) == x) {
            cout << i + 1 << ' ' << i + m - 1 << '\n';
            cout << i + m << ' ' << i + 2 * (m - 1) << '\n';
            return 0;
        }
    }
    auto z = zfunction(sx + "#" + s);
    for (int i = 0; i + m <= n; ++i) {
        int lcp = z[m + i + 1];
        for (int len = m - lcp - 1; len <= m - lcp; ++len) {
            if (len < 1) continue;
            if (i + m + len <= n && get(i, i + m) + get(i + m, i + m + len) == x) {
                cout << i + 1 << ' ' << i + m << '\n';
                cout << i + m + 1 << ' ' << i + m + len << '\n';
                return 0;
            }
            if (i >= len && get(i - len, i) + get(i, i + m) == x) {
                cout << i - len + 1 << ' ' << i << '\n';
                cout << i + 1 << ' ' << i + m << '\n';
                return 0;
            }
        }
    }
    assert(false);
}
|
1601
|
A
|
Array Elimination
|
You are given array $a_1, a_2, \ldots, a_n$, consisting of non-negative integers.
Let's define operation of "elimination" with integer parameter $k$ ($1 \leq k \leq n$) as follows:
- Choose $k$ distinct array indices $1 \leq i_1 < i_2 < \ldots < i_k \le n$.
- Calculate $x = a_{i_1} ~ \& ~ a_{i_2} ~ \& ~ \ldots ~ \& ~ a_{i_k}$, where $\&$ denotes the bitwise AND operation (notes section contains formal definition).
- Subtract $x$ from each of $a_{i_1}, a_{i_2}, \ldots, a_{i_k}$; all other elements remain untouched.
Find all possible values of $k$, such that it's possible to make all elements of array $a$ equal to $0$ using a finite number of elimination operations with parameter $k$. It can be proven that exists at least one possible $k$ for any array $a$.
Note that you \textbf{firstly choose $k$ and only after that perform elimination operations with value $k$ you've chosen initially}.
|
Note that in one elimination operation, for any bit $i$ ($0 \leq i < 30$), we either turn exactly $k$ non-zero $i$-th bits into zero bits (when all $k$ chosen elements have the $i$-th bit set), or that bit changes in no element. So the number of elements with a non-zero $i$-th bit either decreases by $k$ or doesn't change. In the end, all these counts must be equal to $0$. So, to be able to zero out the array, the number of elements with a non-zero $i$-th bit must be divisible by $k$ for every bit $i$. Let's prove that this is also sufficient. Perform operations with non-zero AND while we can. If the array has not been zeroed out, there is at least one non-zero element, so there is at least one bit $i$ whose count in the array is non-zero; this count is at least $k$ (because it is divisible by $k$), so we can select $k$ numbers with a non-zero $i$-th bit for the next operation - a contradiction with the assumption that no operation is possible. So the resulting solution is: for each bit $i$ ($0 \leq i < 30$), find the number of array elements with a non-zero $i$-th bit, then output all common divisors $k$ ($1 \leq k \leq n$) of these counts. Time complexity is $O(n \log{C})$, where $C = 10^9$ is the upper limit on the numbers in the array.
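As an illustration of this solution, a minimal Python sketch (the function name and structure are ours, not an official implementation) could look like:

```python
from math import gcd

def possible_ks(a):
    """All k in [1, n] such that every per-bit popcount is divisible by k."""
    n = len(a)
    g = 0
    for bit in range(30):
        c = sum((x >> bit) & 1 for x in a)  # elements with this bit set
        g = gcd(g, c)
    # k works iff it divides every count, i.e. divides their gcd;
    # the all-zero array gives g == 0, and 0 % k == 0 admits every k
    return [k for k in range(1, n + 1) if g % k == 0]
```

Here $g$ is the gcd of the 30 per-bit counts, so checking divisibility by $g$ is equivalent to checking divisibility by every count at once.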
|
[
"bitmasks",
"greedy",
"math",
"number theory"
] | 1,300
| null |
1601
|
B
|
Frog Traveler
|
Frog Gorf is traveling through Swamp kingdom. Unfortunately, after a poor jump, he fell into a well of $n$ meters depth. Now Gorf is on the bottom of the well and has a long way up.
The surface of the well's walls vary in quality: somewhere they are slippery, but somewhere have convenient ledges. In other words, if Gorf is on $x$ meters below ground level, then in one jump he can go up on any integer distance from $0$ to $a_x$ meters inclusive. (Note that Gorf can't jump down, only up).
Unfortunately, Gorf has to take a break after each jump (including jump on $0$ meters). And after jumping up to position $x$ meters below ground level, he'll slip exactly $b_x$ meters down while resting.
Calculate the minimum number of jumps Gorf needs to reach ground level.
|
Let's denote the sequence of moves $i \Rightarrow i - a_i \dashrightarrow i - a_i + b_{i-a_i}$ as a jump. We will use $dp_i$ - the minimal number of jumps needed to travel from $i$ to $0$. It can be calculated as $dp_i = 1 + \min\limits_{i - a_i \le j \le i} dp_{j + b_j}$ (a jump that lands at $j = 0$ reaches the surface immediately). We compute the values in BFS-style order: if there is a jump to $0$, $dp$ is $1$; if there is no jump to $0$, but there is a jump to a position with $dp = 1$, then $dp$ is $2$, and so on. What happens when we know all $dp$'s with values from $0$ to $d$? We take a position $v$ ($dp_v = d$) and all $u$ with the condition $u + b_u = v$. Then for every not yet assigned $j$ with $j - a_j \le u \le j$ we know for sure that $dp_j = d + 1$. For every $i$ we save the value $i - a_i$ in a minimum segment tree. Then all such $j$'s are just the elements of a suffix whose value is not greater than $u$. We can iterate through all these $j$'s, because each of them is used only once - right after we know $dp_j$, we can replace its value with a neutral one (infinity in our case). Time complexity is $O(n \log n)$. Bonus: try to solve it in linear time :)
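A hedged Python sketch of the jump-count part of this idea, using a two-pointer over unexplored landing depths instead of a segment tree (an equivalent way to visit every landing depth at most once; the naming and 1-indexed layout are our assumptions):

```python
from collections import deque

def min_jumps(n, a, b):
    """a[x]: max jump up from depth x; b[x]: slip after landing at depth x.
    Arrays are 1-indexed (index 0 unused). Returns min jumps, or -1."""
    INF = float('inf')
    dist = [INF] * (n + 1)   # dist[x]: jumps needed to be resting at depth x
    dist[n] = 0
    frm = n                  # smallest landing depth not yet explored
    q = deque([n])
    while q:
        x = q.popleft()
        lo = x - a[x]
        while frm > lo:      # each landing depth is explored at most once
            frm -= 1
            j = frm
            if j == 0:       # reached ground level
                return dist[x] + 1
            y = j + b[j]     # slip down after landing
            if dist[y] == INF:
                dist[y] = dist[x] + 1
                q.append(y)
    return -1
```

The pointer `frm` only moves up (towards the surface); since BFS processes states in non-decreasing distance, a landing depth already consumed can never yield a shorter answer later, which is the same invariant the segment-tree solution maintains.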
|
[
"data structures",
"dfs and similar",
"dp",
"graphs",
"shortest paths",
"two pointers"
] | 1,900
| null |
1601
|
C
|
Optimal Insertion
|
You are given two arrays of integers $a_1, a_2, \ldots, a_n$ and $b_1, b_2, \ldots, b_m$.
You need to insert all elements of $b$ into $a$ in an arbitrary way. As a result you will get an array $c_1, c_2, \ldots, c_{n+m}$ of size $n + m$.
Note that you are not allowed to change the order of elements in $a$, while you can insert elements of $b$ at arbitrary positions. They can be inserted at the beginning, between any elements of $a$, or at the end. Moreover, elements of $b$ can appear in the resulting array in any order.
What is the minimum possible number of inversions in the resulting array $c$? Recall that an inversion is a pair of indices $(i, j)$ such that $i < j$ and $c_i > c_j$.
|
Let's sort array $b$. Let's define $p_i$ as the index of the array $a$ before which we should insert $b_i$. If $b_i$ should be inserted at the end of the array $a$, let $p_i = n + 1$. Note that all inserted elements should go in sorted order in the optimal answer: if this were false and there existed $p_i > p_j$ for $i < j$, we could swap $b_i$ and $b_j$ in the answer and the number of inversions would decrease, a contradiction. So $p_1 \leq p_2 \leq \ldots \leq p_m$. If we find their values, we will be able to restore the array after inserting the elements. Let's use divide and conquer to find them. Let's write a recursive function $solve(l_i, r_i, l_p, r_p)$ that will find $p_i$ for all $l_i \leq i < r_i$, if it is known that $p_i \in [l_p, r_p]$ for all such $i$. To find all values of $p$ we should call the function $solve(1, m + 1, 1, n + 1)$. Implementation of $solve(l_i, r_i, l_p, r_p)$: If $l_i \geq r_i$, we shouldn't do anything. Let $mid = \lfloor \frac{l_i + r_i}{2} \rfloor$. Let's find $p_{mid}$. The number of inversions with $b_{mid}$ will be (the number of $a_i > b_{mid}$ for $i < p_{mid}$) + (the number of $a_i < b_{mid}$ for $i \geq p_{mid}$). This sum differs by a constant from: (the number of $a_i > b_{mid}$ for $l_p \leq i < p_{mid}$) + (the number of $a_i < b_{mid}$ for $p_{mid} \leq i < r_p$). For this sum it is simple to find the minimal optimal $p_{mid}$ in $O(r_p - l_p)$. Let's make two recursive calls $solve(l_i, mid, l_p, p_{mid})$ and $solve(mid + 1, r_i, p_{mid}, r_p)$, which will find all remaining values of $p$. The complexity of this function is $O((n+m)\log{m})$, because there are $O(\log{m})$ levels of recursion and we make $O(n+m)$ operations on each of them. In the end, using the values $p_i$, we restore the array and find the number of inversions in it. Total complexity: $O((n+m)\log{(n+m)})$. Also, there exist other correct solutions with the same complexity, using a segment tree.
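A hedged Python sketch of the divide-and-conquer position search described above (names are ours; it returns the merged array, from which the inversion count can then be computed by any standard method):

```python
def optimal_merge(a, b):
    """Insert elements of b into a (order of a fixed) minimizing inversions.
    Returns the merged array c; positions found by divide and conquer."""
    b = sorted(b)
    n, m = len(a), len(b)
    p = [0] * m  # p[i]: index in a before which b[i] is inserted (0..n)

    def solve(li, ri, lo, hi):
        # find p[i] for li <= i < ri, knowing lo <= p[i] <= hi
        if li >= ri:
            return
        mid = (li + ri) // 2
        x = b[mid]
        # relative cost(pos) = #(a[i] > x, lo <= i < pos)
        #                    + #(a[i] < x, pos <= i < hi)
        cur = sum(a[i] < x for i in range(lo, hi))  # cost at pos = lo
        best, pos = cur, lo
        for q in range(lo + 1, hi + 1):
            cur += (a[q - 1] > x) - (a[q - 1] < x)
            if cur < best:
                best, pos = cur, q
        p[mid] = pos
        solve(li, mid, lo, pos)
        solve(mid + 1, ri, pos, hi)

    solve(0, m, 0, n)
    # merge according to the non-decreasing positions p
    c, j = [], 0
    for i in range(n + 1):
        while j < m and p[j] == i:
            c.append(b[j])
            j += 1
        if i < n:
            c.append(a[i])
    return c
```

The sketch uses 0-indexed half-open index ranges and inclusive position ranges, but the recursion shape matches the editorial: the middle element's position splits both the index range and the position range.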
|
[
"data structures",
"divide and conquer",
"dp",
"greedy",
"sortings"
] | 2,300
| null |
1601
|
D
|
Difficult Mountain
|
A group of $n$ alpinists has just reached the foot of the mountain. The initial difficulty of climbing this mountain can be described as an integer $d$.
Each alpinist can be described by two integers $s$ and $a$, where $s$ is his skill of climbing mountains and $a$ is his neatness.
An alpinist of skill level $s$ is able to climb a mountain of difficulty $p$ only if $p \leq s$. As an alpinist climbs a mountain, they affect the path and thus may change mountain difficulty. Specifically, if an alpinist of neatness $a$ climbs a mountain of difficulty $p$ the difficulty of this mountain becomes $\max(p, a)$.
Alpinists will climb the mountain one by one. And before the start, they wonder, what is the maximum number of alpinists who will be able to climb the mountain if they choose the right order. As you are the only person in the group who does programming, you are to answer the question.
Note that after the order is chosen, each alpinist who can climb the mountain, must climb the mountain at that time.
|
First, discard all $i$ such that $s_i < d$. Instead of climbers, we will consider pairs $(s_i, a_i)$, and say that a set of pairs is correct if there is a permutation $p_1, \ldots, p_n$ such that for every $i$: $\max(d, a_{p_1}, \ldots, a_{p_{i-1}}) \le s_{p_i}$ - which means that there is an order in which all climbers can climb. We will call a pair of indices $i, j$ incompatible if $i \neq j$, $s_j < a_i$ and $s_i < a_j$ - this means that the $i$-th climber cannot climb after the $j$-th and vice versa. Note that if the set does not have an incompatible pair of indices, then it is correct: a suitable order can be reached by sorting the pairs $(s_i, a_i)$ in increasing order of $(\min(s_i, a_i), \max(s_i, a_i))$; after sorting, either the $i$-th climber can go after the $(i-1)$-th, or the pair $(i-1), i$ is incompatible. Let's first solve the problem with an additional restriction, namely: for each $i$, $s_i < a_i$. In this case, you can use the following greedy solution. Let $D = d$; among the pairs $(s_i, a_i)$ with $D \le s_i$, find the pair with the smallest $a_i$ - it will be the next in our order. Replace $D$ by $a_i$, increase the answer by $1$ and repeat the algorithm; if no pair with $D \le s_i$ exists, terminate. The correctness of this algorithm is proved by induction. To implement it efficiently, sort all pairs $(s_i, a_i)$ in increasing order of $a_i$ and go through the indices $i$ from $1$ to $n$: if $D \le s_i$, add $1$ to the answer and replace $D$ with $a_i$.
Let's get back to the main problem. Consider a pair of indices $i, j$ such that $s_i < a_j \le s_j < a_i$. Such a pair is incompatible, and if the optimal answer contains $i$, it can be replaced with $j$ without breaking the sequence: $s_i < s_j$ implies that any value of $D$ that admits $i$ also admits $j$, and $a_j < a_i$ implies that for any $D$: $\max(D, a_j) \le \max(D, a_i)$. Therefore, for any such pair $i, j$, the $i$-th climber can be excluded from the set and the answer will not worsen. To remove all such pairs efficiently, we use the two-pointer method. Put all pairs with $a_i \le s_i$ into array $b$ and the remaining pairs into array $c$. Sort $b$ in increasing order of $a_i$ and $c$ in increasing order of $s_i$. Create an ordered set $M$ that stores pairs $(s_i, a_i)$ in decreasing order of $a_i$, and a pointer $j = 0$. Go through the elements of $b$ with index $i$: while $c_j.s < b_i.a$, add $c_j$ to the set $M$; then, while $b_i.s < M_1.a$, delete the first element of $M$. Among the elements of $b$, the set $M$ and the remaining elements of $c$, there are no more such pairs. Note that among the remaining pairs $(s_i, a_i)$, any pair of indices $i, j$ such that $a_i \le s_i$ or $a_j \le s_j$ is not incompatible. Now, if we find the maximum correct subset of the pairs $(s_i, a_i)$ with $s_i < a_i$ and combine it with the set of pairs with $a_i \le s_i$, we get a correct set of maximum size, and therefore the answer to the problem.
|
[
"data structures",
"dp",
"greedy",
"sortings"
] | 2,700
| null |
1601
|
E
|
Phys Ed Online
|
Students of one unknown college don't have PE courses. That's why $q$ of them decided to visit a gym nearby by themselves. The gym is open for $n$ days and has a ticket system. At the $i$-th day, the cost of one ticket is equal to $a_i$. You are free to buy more than one ticket per day.
You can activate a ticket purchased at day $i$ either at day $i$ or any day later. Each activated ticket is valid only for $k$ days. In other words, if you activate ticket at day $t$, it will be valid only at days $t, t + 1, \dots, t + k - 1$.
You know that the $j$-th student wants to visit the gym at each day from $l_j$ to $r_j$ inclusive. Each student will use the following strategy of visiting the gym at any day $i$ ($l_j \le i \le r_j$):
- a person comes to a desk selling tickets placed near the entrance and buys several tickets with cost $a_i$ apiece (possibly, zero tickets);
- if the person has at least one activated and still valid ticket, they just go in. Otherwise, they activate one of the tickets purchased today or earlier and go in.
Note that each student will visit gym only starting $l_j$, so each student has to buy at least one ticket at day $l_j$.
Help students to calculate the minimum amount of money they have to spend in order to go to the gym.
|
Observe that we need to buy a ticket at day $l$, then we need to buy the cheapest ticket among the first $k + 1$ days of the segment, $\ldots$, then the cheapest among the first $tk + 1$ days. Let's denote by $b_i$ the minimum on the segment $[i - k, i - k + 1, \ldots, i]$; all $b_i$ can be calculated in linear time using a monotonic queue. So the answer for a query is $a_l + b_{l + k} + \textrm{min}(b_{l + k}, b_{l + 2k}) + \ldots + \textrm{min}(b_{l + k}, b_{l + 2k}, \ldots, b_{l + tk})$, where $t$ is maximal with $l + tk \leq r$. Observe that all indices in this sum have the same remainder of division by $k$, so if we split $b$ into $k$ arrays by index modulo $k$, we can solve an easier problem instead: given an array $c$, calculate the sum of prefix minimums on a segment. To solve this, let's calculate an array $nxt$, where $nxt_i$ is the minimum position $nxt_i > i$ such that $c_{nxt_i} < c_i$. Let $dp_i$ be the sum of minimums over the prefixes of the $i$-th suffix. Observe that $dp_i = dp_{nxt_i} + c_i \cdot (nxt_i - i)$. To calculate the sum of prefix minimums on a segment $[l, r]$, find a position $p$ such that $c_p$ is the minimum on the segment $[l, r]$; then the answer is $dp_l - dp_p + (r - p + 1) \cdot c_p$. So we have a solution in $\mathcal{O}(n + q\,\alpha(n))$, where $\alpha$ is the inverse Ackermann function, if we use Tarjan's algorithm for offline rmq. It was enough to use any logarithmic data structure to solve the problem.
|
[
"data structures",
"dp",
"greedy"
] | 2,900
| null |
1601
|
F
|
Two Sorts
|
Integers from $1$ to $n$ (inclusive) were sorted lexicographically (considering integers as strings). As a result, array $a_1, a_2, \dots, a_n$ was obtained.
Calculate the value of $(\sum_{i = 1}^n ((i - a_i) \mod 998244353)) \mod (10^9 + 7)$.
$x \mod y$ here means the remainder after division $x$ by $y$. This remainder is always non-negative and doesn't exceed $y - 1$. For example, $5 \mod 3 = 2$, $(-1) \mod 6 = 5$.
|
Suppose $b$ is the inverse permutation of $a$, that is $a_{b_{i}} = b_{a_i} = i$; in other words, $b_i$ is the index of $i$ in the lexicographical ordering. Rewrite the desired sum replacing $i \to b_i$: $\sum_{i=1 \ldots n} ((i - a_i) \bmod p) = \sum_{i=1 \ldots n} ((b_i - a_{b_i}) \bmod p) = \sum_{i=1 \ldots n} ((b_i - i) \bmod p)$. First, we need to understand how to calculate the $b_i$'s. Observe that $b_i$ equals $1$ plus the number of integers $x$ ($1 \le x \le n$) such that $x <_{lex} i$. These integers are of two possible kinds: proper prefixes of $i$ (the number of such $x$ depends only on the length of $i$), and integers $x$ having a common prefix with $i$ of some length $t$ and a smaller digit $c$ at the $(t+1)$-th position. If we fix the values of $t$, $c$ and the length of $x$, we get a "mask" of the following kind: "123???", and we are interested in the number of $x$ matching this mask. This number almost always depends only on the number of "?", with minor exceptions concerning $n$; e.g. consider $n = 123456$ for the example above. So in the desired sum, we group summands by the following markers of $i$ (brute-forcing the values of these markers): the length of $i$; the position of the first digit where $i$ and $n$ differ (cases where $i$ is a proper prefix of $n$ are considered separately); the value of this digit. So we know a description of $i$ of the following kind: $i = \overline{c_1 c_2 \ldots c_k x_1 \ldots x_l}$, where the $c_j$ are fixed, and $x_1, \ldots, x_l$ are arbitrary integer variables in $[0, 9]$. Observe that both $b_i$ and $i$ are linear combinations of the variables $x_j$ and $1$, so $b_i - i$ is also a linear combination of them. The only issue is computing $b_i - i$ modulo $p$. To do the summation over all $x_1, \ldots, x_l$ we use the meet-in-the-middle method: brute-force separately the values for the first half and the second half, and then match one with the other.
If $n \le 10^L$, the solution works in $\mathcal{O}(10^{\frac{L}{2}} \operatorname{poly}(L))$, or $\mathcal{O}(\sqrt{n}\, \operatorname{poly}(\log n))$.
|
[
"binary search",
"dfs and similar",
"math",
"meet-in-the-middle"
] | 3,400
| null |
1602
|
A
|
Two Subsequences
|
You are given a string $s$. You need to find two non-empty strings $a$ and $b$ such that the following conditions are satisfied:
- Strings $a$ and $b$ are both \textbf{subsequences} of $s$.
- For each index $i$, character $s_i$ of string $s$ must belong to \textbf{exactly one} of strings $a$ or $b$.
- String $a$ is lexicographically minimum possible; string $b$ may be any possible string.
Given string $s$, print any valid $a$ and $b$.
\textbf{Reminder:}
A string $a$ ($b$) is a subsequence of a string $s$ if $a$ ($b$) can be obtained from $s$ by deletion of several (possibly, zero) elements. For example, "dores", "cf", and "for" are subsequences of "codeforces", while "decor" and "fork" are not.
A string $x$ is lexicographically smaller than a string $y$ if and only if one of the following holds:
- $x$ is a prefix of $y$, but $x \ne y$;
- in the first position where $x$ and $y$ differ, the string $x$ has a letter that appears earlier in the alphabet than the corresponding letter in $y$.
|
Note that taking $a$ to be one copy of the minimum character of $s$ is always optimal: $a$ then starts with the minimum possible character, and a single-character string is a prefix of (and hence not greater than) any longer string starting with the same character. In this case, $b$ is simply all characters of $s$ except the one taken for $a$.
|
[
"implementation"
] | 800
| null |
1602
|
B
|
Divine Array
|
Black is gifted with a Divine array $a$ consisting of $n$ ($1 \le n \le 2000$) integers. Each position in $a$ has an initial value. After shouting a curse over the array, it becomes angry and starts an unstoppable transformation.
The transformation consists of infinite steps. Array $a$ changes at the $i$-th step in the following way: for every position $j$, $a_j$ becomes equal to the number of occurrences of $a_j$ in $a$ before starting this step.
Here is an example to help you understand the process better:
\begin{center}
\begin{tabular}{cc}
Initial array: & $2$ $1$ $1$ $4$ $3$ $1$ $2$ \\
After the $1$-st step: & $2$ $3$ $3$ $1$ $1$ $3$ $2$ \\
After the $2$-nd step: & $2$ $3$ $3$ $2$ $2$ $3$ $2$ \\
After the $3$-rd step: & $4$ $3$ $3$ $4$ $4$ $3$ $4$ \\
... & ... \\
\end{tabular}
\end{center}
In the initial array, we had two $2$-s, three $1$-s, only one $4$ and only one $3$, so after the first step, each element became equal to the number of its occurrences in the initial array: all twos changed to $2$, all ones changed to $3$, the four changed to $1$ and the three changed to $1$.
The transformation steps continue \textbf{forever}.
You have to process $q$ queries: in each query, Black is curious to know the value of $a_x$ after the $k$-th step of transformation.
|
It can be shown that after at most $n$ steps of the transformation, array $a$ stops changing. There is an even better upper bound: it can be shown that after at most $O(\log n)$ steps $a$ stops changing, so we can use either of these two facts to simulate the process and answer the queries.
|
[
"constructive algorithms",
"implementation"
] | 1,100
| null |
1603
|
A
|
Di-visible Confusion
|
YouKn0wWho has an integer sequence $a_1, a_2, \ldots, a_n$. He will perform the following operation until the sequence becomes empty: select an index $i$ such that $1 \le i \le |a|$ and $a_i$ is \textbf{not} divisible by $(i + 1)$, and erase this element from the sequence. Here $|a|$ is the length of sequence $a$ at the moment of operation. Note that the sequence $a$ changes and the next operation is performed on this changed sequence.
For example, if $a=[3,5,4,5]$, then he can select $i = 2$, because $a_2 = 5$ is not divisible by $i+1 = 3$. After this operation the sequence is $[3,4,5]$.
Help YouKn0wWho determine if it is possible to erase the whole sequence using the aforementioned operation.
|
Notice that $a_i$ can be erased while it is at any position from $1$ to $i$. So for each $i$, there should be at least one integer from $2$ to $i + 1$ that does not divide $a_i$. If this is not satisfied for some $i$, then there is no solution for sure. Otherwise, it turns out that a solution always exists. Why? We can prove it by induction. Say it is possible to erase the prefix containing $n - 1$ elements. As $a_n$ can be erased at some position from $1$ to $n$ (say, position $k$), then while erasing the prefix of $n - 1$ elements, at the moment when that prefix contains $k - 1$ elements, $a_n$ is at the $k$-th position, so we can erase it there and erase the rest of the sequence accordingly. So we just have to check, for all integers $i$ from $1$ to $n$, whether $a_i$ is not divisible by at least one integer from $2$ to $i + 1$. Notice that if $a_i$ is divisible by all integers from $2$ to $i + 1$, then $a_i$ is divisible by $\operatorname{LCM}(2, 3, \ldots, (i+1))$. But already for $i = 22$, $\operatorname{LCM}(2, 3, \ldots, 23) \gt 10^9 \ge a_i$. So for $i \ge 22$, there will always be an integer from $2$ to $i + 1$ which doesn't divide $a_i$, and we don't have to check those. For $i \lt 22$, use brute force. Complexity: $\mathcal{O}(n + 21^2)$.
|
[
"constructive algorithms",
"math",
"number theory"
] | 1,300
|
#include<bits/stdc++.h>
using namespace std;
int main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
int t; cin >> t;
while (t--) {
int n; cin >> n;
bool ok = true;
for (int i = 1; i <= n; i++) {
int x; cin >> x;
bool found = false;
for (int j = i + 1; j >= 2; j--) { // this loop will run not more than 22 times; in practice it's much fewer than that
if (x % j) {
found = true;
break;
}
}
ok &= found;
}
if (ok) {
cout << "YES\n";
}
else {
cout << "NO\n";
}
}
return 0;
}
|
1603
|
B
|
Moderate Modular Mode
|
YouKn0wWho has two \textbf{even} integers $x$ and $y$. Help him to find an integer $n$ such that $1 \le n \le 2 \cdot 10^{18}$ and $n \bmod x = y \bmod n$. Here, $a \bmod b$ denotes the remainder of $a$ after division by $b$. If there are multiple such integers, output any. It can be shown that such an integer always exists under the given constraints.
|
If $x \gt y$, then $x + y$ works, as $(x + y) \bmod x = y \bmod x = y$ and $y \bmod (x + y) = y$. The challenge arises when $x \le y$; the latter part of the editorial assumes that $x \le y$. Claim $1$: $n$ can't be less than $x$. Proof: assume that for some $n \lt x$, $n \bmod x = y \bmod n$ is satisfied. Then $n \bmod x = n$ but $y \bmod n \lt n$. So $n \bmod x$ can't be equal to $y \bmod n$, which is a contradiction. Claim $2$: $n$ can't be greater than $y$. Proof: assume that for some $n \gt y$, $n \bmod x = y \bmod n$ is satisfied. Then $n \bmod x \lt x$ but $y \bmod n = y \ge x$. So $n \bmod x$ can't be equal to $y \bmod n$, which is a contradiction. So $n$ should be between $x$ and $y$. But what is the exact value of $n$? Let's find it intuitively. Consider the $X$ axis and imagine you are standing at position $0$. Start jumping from $0$ towards $y$ with steps of length $x$. There will be a position from which one more jump would overshoot $y$; this position is $p = y - y \bmod x$. From this position, let's reach $y$ in exactly $2$ equal steps. Notice that $y - p$ is guaranteed to be even as $x$ and $y$ are both even, so we jump with a step of length $\frac{y - p}{2}$, landing at the position $t = p + \frac{y - p}{2}$. And voila! $t$ is our desired $n$, because $t \bmod x = \frac{y - p}{2}$ and $y \bmod t = (y - p) - \frac{y - p}{2} = \frac{y - p}{2}$. To be precise, $n = t = y - \frac{y \bmod x}{2}$.
|
[
"constructive algorithms",
"math",
"number theory"
] | 1,600
|
#include<bits/stdc++.h>
using namespace std;
int main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
int t; cin >> t;
while (t--) {
int x, y; cin >> x >> y;
if (x <= y) {
cout << y - y % x / 2 << '\n';
}
else {
cout << x + y << '\n';
}
}
return 0;
}
|
1603
|
C
|
Extreme Extension
|
For an array $b$ of $n$ integers, the extreme value of this array is the minimum number of times (possibly, zero) the following operation has to be performed to make $b$ \textbf{non-decreasing}:
- Select an index $i$ such that $1 \le i \le |b|$, where $|b|$ is the current length of $b$.
- Replace $b_i$ with two elements $x$ and $y$ such that $x$ and $y$ both are \textbf{positive} integers and $x + y = b_i$.
- This way, the array $b$ changes and the next operation is performed on this modified array.
For example, if $b = [2, 4, 3]$ and index $2$ gets selected, then the possible arrays after this operation are $[2, \underline{1}, \underline{3}, 3]$, $[2, \underline{2}, \underline{2}, 3]$, or $[2, \underline{3}, \underline{1}, 3]$. And consequently, for this array, this single operation is enough to make it non-decreasing: $[2, 4, 3] \rightarrow [2, \underline{2}, \underline{2}, 3]$.
It's easy to see that every array of positive integers can be made non-decreasing this way.
YouKn0wWho has an array $a$ of $n$ integers. Help him find the sum of extreme values of all nonempty subarrays of $a$ modulo $998\,244\,353$. If a subarray appears in $a$ multiple times, its extreme value should be counted the number of times it appears.
An array $d$ is a subarray of an array $c$ if $d$ can be obtained from $c$ by deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end.
|
Let's find out how to calculate the extreme value of an array $a$ of $n$ integers. It turns out that a greedy solution exists! Consider the rightmost index $i$ such that $a_i \gt a_{i + 1}$. We must split $a_i$ into new (let's say $k$) elements $1 \le b_1 \le b_2 \le \ldots \le b_k \le a_{i+1}$ such that $b_1 + b_2 + \ldots + b_k = a_i$. Notice that $k \ge \left\lceil \frac{a_i}{a_{i+1}} \right\rceil$ because $b_k \le a_{i+1}$. But it is always optimal to make $b_1$ as large as possible, and it is not hard to see that the smaller the $k$, the bigger the $b_1$ we can achieve. So let's set $k = \left\lceil \frac{a_i}{a_{i+1}} \right\rceil$. Now, notice that $b_1 \le \left\lfloor \frac{a_i}{k} \right\rfloor$, so let's set $b_1 = \left\lfloor \frac{a_i}{k} \right\rfloor = \left\lfloor \frac{a_i}{\left\lceil \frac{a_i}{a_{i+1}} \right\rceil} \right\rfloor$. We have performed $k - 1 = \left\lceil \frac{a_i}{a_{i+1}} \right\rceil - 1$ operations and will solve the problem analogously for the previous indices after replacing $a_i$ by $[b_1, b_2, \ldots, b_k]$. To sum it all up, we can calculate the extreme value with the following procedure: iterate from $i = n - 1$ down to $1$; add $\left\lceil \frac{a_i}{a_{i+1}} \right\rceil - 1$ to the answer; set $a_i = \left\lfloor \frac{a_i}{\left\lceil \frac{a_i}{a_{i+1}} \right\rceil} \right\rfloor$. Pretty elegant! Let's call it the elegant procedure from now on. So we can calculate the extreme value of an array of $n$ integers in $\mathcal{O}(n)$. To solve it for all subarrays in $\mathcal{O}(n^2)$, we need to fix a prefix and solve each suffix of this prefix in a total of $\mathcal{O}(n)$ operations. We can do that easily because the procedure to calculate the extreme value starts from the end, so we can sum up the contributions on the run. How to solve the problem faster? Think dp.
Let $dp(i, x)$ be the count of subarrays $a[i;j]$ such that $i \le j$ and, after the elegant procedure, $x$ becomes the first element of the final version of that subarray. We only care about the $x$s for which $dp(i, x)$ is non-zero. How many different $x$ are possible? Well, it can be up to $10^5$, right? Wrong! Let's go back to our elegant procedure once again. For the time being, let's say that for all $x = 1$ to $10^5$, $dp(i + 1, x)$ is non-zero. So for each $x$, we will add $dp(i + 1, x)$ to $dp(i, \left\lfloor \frac{a_i}{\left\lceil \frac{a_i}{x} \right\rceil} \right\rfloor)$. But there can be at most $2 \sqrt{m}$ distinct values in the sequence $\left\lfloor \frac{m}{1} \right\rfloor, \left\lfloor \frac{m}{2} \right\rfloor, \ldots, \left\lfloor \frac{m}{m} \right\rfloor$. Check this for a proof. So there can be $\mathcal{O}(\sqrt{10^5})$ distinct $x$s for which $dp(i, x)$ is non-zero. So we can solve this dp in $\mathcal{O}(n \cdot \sqrt{10^5})$. To optimize the space complexity, observe that we only need the dp values of $i + 1$, so we can use just two arrays to maintain everything. Check my solution for more clarity. To get the final answer, we will use the contribution technique. To be precise, for each $(i + 1, x)$ we will add $i \cdot dp(i + 1, x) \cdot (\left\lceil \frac{a_i}{x} \right\rceil - 1)$ to our answer, and it's not hard to see why. Here, $i \cdot dp(i + 1, x)$ is the number of subarrays in which the element at position $i$ is followed (after the procedure) by the value $x$, and $\left\lceil \frac{a_i}{x} \right\rceil - 1$ is the number of operations performed on $a_i$ in each of them. Overall time complexity will be $\mathcal{O}(n \cdot \sqrt{10^5})$ and space complexity will be $\mathcal{O}(n)$.
|
[
"dp",
"greedy",
"math",
"number theory"
] | 2,300
|
#include<bits/stdc++.h>
using namespace std;
const int N = 1e5 + 9, mod = 998244353;
vector<int> v[2];
int dp[2][N];
int a[N];
int32_t main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
int t; cin >> t;
while (t--) {
int n; cin >> n;
for (int i = 1; i <= n; i++) {
cin >> a[i];
}
long long ans = 0;
for (int i = n; i >= 1; i--) {
int k = i & 1;
v[k].push_back(a[i]);
dp[k][a[i]] = 1;
int last = a[i];
for (auto x: v[k ^ 1]) {
int y = dp[k ^ 1][x];
int split = (a[i] + x - 1) / x;
int st = a[i] / split;
ans += 1LL * (split - 1) * y * i;
dp[k][st] += y;
if (last != st) {
v[k].push_back(st), last = st;
}
}
for (auto x: v[k ^ 1]) dp[k ^ 1][x] = 0;
v[k ^ 1].clear();
ans %= mod;
}
cout << ans << '\n';
for (auto x: v[0]) dp[0][x] = 0;
for (auto x: v[1]) dp[1][x] = 0;
v[0].clear(); v[1].clear();
}
return 0;
}
|
1603
|
D
|
Artistic Partition
|
For two positive integers $l$ and $r$ ($l \le r$) let $c(l, r)$ denote the number of integer pairs $(i, j)$ such that $l \le i \le j \le r$ and $\operatorname{gcd}(i, j) \ge l$. Here, $\operatorname{gcd}(i, j)$ is the greatest common divisor (GCD) of integers $i$ and $j$.
YouKn0wWho has two integers $n$ and $k$ where $1 \le k \le n$. Let $f(n, k)$ denote the minimum of $\sum\limits_{i=1}^{k}{c(x_i+1,x_{i+1})}$ over all integer sequences $0=x_1 \lt x_2 \lt \ldots \lt x_{k} \lt x_{k+1}=n$.
Help YouKn0wWho find $f(n, k)$.
|
For now, let $c(l, r)$ denote the number of integer pairs $(i, j)$ such that $l \le i \lt j \le r$ (instead of $i \le j$) and $\operatorname{gcd}(i, j) \ge l$; we can then add $n$ to $f(n, k)$ at the end. We can construct a straightforward dp where $f(n, k) = \min\limits_{i = 1}^{n}{(f(i - 1, k - 1)+c(i, n))}$. As a straightforward implementation of $c(l, r)$ takes $\mathcal{O}(n^2 \log_2 n)$ time, the total complexity of finding $f(n, k)$ will be $\mathcal{O}(n^5 \log_2 n)$ which is quite shameful. Let's see how to do better. Tiny Observation: $c(x, 2 \cdot x - 1) = 0$. It's easy to see why it holds. Cute Observation: $f(n, k) = 0$ when $k \gt \log_2 n$. Proof: Let $L = \log_2 n$. Following the tiny observation, we can split the numbers as $[1, 1], [2, 3], [4, 7], \ldots, [2^{L - 1}, 2^L - 1], [2^L, n]$ without spending a single penny. Now we can solve $f(n, k)$ in $\mathcal{O}(n^4 \log_2^2 n)$ which is still shameful. So we just have to find $f(n, k)$ for $1 \le n \le 10^5$ and $1 \le k \le 17$. Let's optimize the calculation of $c(l, r)$.
$c(l, r) = \sum\limits_{i=l}^{r}\sum\limits_{j=i+1}^{r}{[\gcd(i, j) \ge l]}$ $=\sum\limits_{k=l}^{r}\sum\limits_{i=l}^{r}\sum\limits_{j=i+1}^{r}{[\gcd(i, j) = k]}$ $=\sum\limits_{k=l}^{r}\sum\limits_{i=l, k | i}^{r}\sum\limits_{j=i+1, k | j}^{r}{[\gcd(i, j) = k]}$ $=\sum\limits_{k=l}^{r}\sum\limits_{i=1}^{\left\lfloor \frac{r}{k} \right\rfloor}\sum\limits_{j=i+1}^{\left\lfloor \frac{r}{k} \right\rfloor}{[\gcd(i \cdot k, j \cdot k) = k]}$ $=\sum\limits_{k=l}^{r}\sum\limits_{i=1}^{\left\lfloor \frac{r}{k} \right\rfloor}\sum\limits_{j=i+1}^{\left\lfloor \frac{r}{k} \right\rfloor}{[\gcd(i, j) = 1]}$ $=\sum\limits_{k=l}^{r}\sum\limits_{i=1}^{\left\lfloor \frac{r}{k} \right\rfloor}{\phi(i)}$ $=\sum\limits_{k=l}^{r}{p(\left\lfloor \frac{r}{k} \right\rfloor)}$ But notice that there can be at most $2 \sqrt{m}$ distinct values in the sequence $\left\lfloor \frac{m}{1} \right\rfloor, \left\lfloor \frac{m}{2} \right\rfloor, \ldots, \left\lfloor \frac{m}{m} \right\rfloor$. Check this for a proof. So we can calculate $c(l, r)$ in $\mathcal{O}(\sqrt{n})$, which improves our solution to $\mathcal{O}(n^2 \sqrt{n})$. But notice that as $c(l, r) =\sum\limits_{k=l}^{r}{p(\left\lfloor \frac{r}{k} \right\rfloor)}$, we can precalculate the suffix sums for each $r=1$ to $n$ over all distinct values of $\left\lfloor \frac{r}{k} \right\rfloor$ and then calculate $c(l, r)$ in $\mathcal{O}(1)$. This preprocessing will take $\mathcal{O}(n \sqrt{n})$ time and $\mathcal{O}(n \sqrt{n})$ memory. That means we can solve our problem in $\mathcal{O}(n^2 + n\sqrt{n})$, which is promising. Critical Observation: $c(l, r)$ satisfies the quadrangle inequality, that is, $c(i,k)+c(j,l)\le c(i,l)+c(j,k)$ for $i\le j \le k \le l$. Proof: Let $f(i, j, r) =\sum\limits_{k=i}^{j}{p(\left\lfloor \frac{r}{k} \right\rfloor)}$.
Here, $c(i, l) + c(j, k)= f(i, l, l) + f(j, k, k)$ $= (f(i, j - 1, l) + f(j, l, l)) + (f(i, k, k) - f(i, j - 1, k))$ $=f(i, j - 1, l) + c(j, l) + c(i, k) - f(i, j - 1, k)$ $= c(i, k) + c(j, l) + f(i, j - 1, l) - f(i, j - 1, k)$, and $f(i, j - 1, l) \ge f(i, j - 1, k)$ because $l \ge k$ and $p$ is non-decreasing, which proves the inequality. You can learn more about quadrangle inequality and how it is useful from here. Read it because I won't describe why it helps us here. This suggests that we can solve this problem using Divide and Conquer DP or 1D1D DP which will optimize our $\mathcal{O}(n^2)$ part to $\mathcal{O}(n \log_2^2 n)$. To solve for multiple queries we can just precalculate $f(n, k)$ for $1 \le n \le 10^5$ and $1 \le k \le 17$. Overall complexity: $\mathcal{O}(n \log_2^2 n + n\sqrt{n})$ where $n = 10^5$. This problem can also be solved using Divide and Conquer DP and by calculating $c(l, r)$ in $\mathcal{O}(\sqrt{r-l})$ at each level, which runs pretty fast in practice (for $n = 5 \cdot 10^5$, it takes less than 3s), but I don't have a rigorous upper bound on the time complexity. Check out my solution for more clarity.
|
[
"divide and conquer",
"dp",
"number theory"
] | 3,000
|
#include<bits/stdc++.h>
using namespace std;
const int N = 1e5 + 9;
const long long inf = 1e12;
using ll = long long;
int phi[N];
void totient() {
for (int i = 1; i < N; i++) phi[i] = i;
for (int i = 2; i < N; i++) {
if (phi[i] == i) {
for (int j = i; j < N; j += i) phi[j] -= phi[j] / i;
}
}
}
ll a[N];
ll c(int l, int r) {
if (l > r) return inf;
ll ans = 0;
for (int i = l, last; i <= r; i = last + 1) {
last = r / (r / i);
int x = 0;
if (i >= l) x = last - i + 1;
else if (last >= l) x = last - l + 1;
ans += a[r / i] * x;
}
return ans;
}
ll dp[N][17];
void yo(int i, int l, int r, int optl, int optr) {
if(l > r) return;
int mid = (l + r) / 2;
dp[mid][i] = inf;
int opt = optl;
ll cost = c(min(mid, optr) + 1, mid);
for(int k = min(mid, optr); k >= optl ; k--) {
ll cc = dp[k][i - 1] + cost;
if (cc <= dp[mid][i]) {
dp[mid][i] = cc;
opt = k;
}
if (k <= mid) {
if (cost == inf) cost = a[mid / k];
else cost += a[mid / k];
}
}
yo(i, l, mid - 1, optl, opt);
yo(i, mid + 1, r, opt, optr);
}
int32_t main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
totient();
for (int i = 1; i < N; i++) {
a[i] = a[i - 1] + phi[i];
}
int n = 100000;
dp[0][0] = 0;
for (int i = 1; i <= n; i++) {
dp[i][0] = inf;
}
for(int i = 1; i <= n; i++) dp[i][1] = 1LL * i * (i + 1) / 2;
for(int i = 2; i <= 16; i++) yo(i, 1, n, 1, n);
int q; cin >> q;
while (q--) {
int n, k; cin >> n >> k;
if (k >= 20 or (1 << k) > n) {
cout << n << '\n';
}
else {
cout << dp[n][k] << '\n';
}
}
return 0;
}
|
1603
|
E
|
A Perfect Problem
|
A sequence of integers $b_1, b_2, \ldots, b_m$ is called good if $\max(b_1, b_2, \ldots, b_m) \cdot \min(b_1, b_2, \ldots, b_m) \ge b_1 + b_2 + \ldots + b_m$.
A sequence of integers $a_1, a_2, \ldots, a_n$ is called perfect if every non-empty subsequence of $a$ is good.
YouKn0wWho has two integers $n$ and $M$, where $M$ is prime. Help him find the number, modulo $M$, of perfect sequences $a_1, a_2, \ldots, a_n$ such that $1 \le a_i \le n + 1$ for each integer $i$ from $1$ to $n$.
A sequence $d$ is a subsequence of a sequence $c$ if $d$ can be obtained from $c$ by deletion of several (possibly, zero or all) elements.
|
Let's go deeper into the properties of perfect sequences. Observation $1$: If a sorted sequence $a_1, a_2, \ldots, a_n$ is perfect, then all its permutations are also perfect. This is very easy to see, as the order doesn't matter. So we will work on the sorted version of the sequence. Observation $2$: The sorted sequence $a_1, a_2, \ldots, a_n$ is perfect iff all its subarrays are good. Proof: Let's fix the $\min$ and $\max$ of a subsequence. As $\min \cdot \max$ is fixed, the worst case happens when we insert as many elements as possible between $\min$ and $\max$ into this subsequence, since we are comparing $\min \cdot \max$ with the sum, and this worst case is a subarray of the sorted sequence! Now we will go one step further. Observation $3$: The sorted sequence $a_1, a_2, \ldots, a_n$ is perfect iff all its prefixes are good. Proof: The condition essentially means that $a_1 \cdot a_i \ge a_1 + a_2 + \ldots + a_i$ for all $1 \le i \le n$. Imagine that it's true; we will prove that the sequence is perfect. Consider a subarray $[i; j]$: $a_i \cdot a_j \ge a_1 \cdot a_j \ge a_1 + a_2 + \ldots + a_j \ge a_i + a_{i+1} + \ldots + a_j$. That means all subarrays are good, and by observation $2$ the sequence is perfect. It is also necessary that all the prefixes be good because they are subarrays after all, which proves the iff condition. Now let's exploit the fact that all $a_i$ are $\le n+1$. Let's remind again that we are considering sorted sequences for now. Observation $4$: $a_k \ge k$ is required for every $k$. Proof: If $a_k < k$, then $a_1 + a_2 + \ldots + a_k \ge a_1 + a_1 + \ldots + a_1 = a_1 \cdot k \gt a_1 \cdot a_k$ (as $k \gt a_k$), which violates the good condition. Observation $5$: If $a_k = k$, then $a_i = k$ for all $i \le k$. Proof: $a_1 \cdot a_k = a_1 \cdot k \ge a_1 + a_2 + \ldots + a_k$. This means $(a_1 - a_1) + (a_2 - a_1) + \ldots + (a_k - a_1) \le 0$.
But as all $a_i \ge a_1$, this can only happen when all $a_i = a_1$, and as $a_k = k$, consequently all $a_i$ should be $k$. So $a_n$ can be either $n$ or $n+1$. If $a_n=n$ then, according to observation $5$, there is only one such sequence. From now on, let's consider $a_n = n + 1$. So $a_1 \cdot (n + 1) \ge a_1 + a_2 + \ldots + a_n$. But this formulation is a bit hard to work with. So now we will make a move which I would like to call a programmer move: the inequality is equivalent to $(a_1 - a_1) + (a_2 - a_1) + \ldots + (a_n - a_1) \le a_1$. Following observation $3$, it's necessary that $a_1 \cdot a_i \ge a_1 + a_2 + \ldots + a_i$ for all $i$. Observation $6$: if $a_n=n+1$ and $a_i \ge i+1$, then the prefix $a_1, a_2, \ldots, a_i$ is automatically good if the whole sorted sequence $a_1, a_2, \ldots, a_n$ is good. Proof: If the whole sequence is good, then $(a_1 - a_1) + (a_2 - a_1) + \ldots + (a_n - a_1) \le a_1$. Now, $(a_1 - a_1) + (a_2 - a_1) + \ldots + (a_i - a_1) \le (a_1 - a_1) + (a_2 - a_1) + \ldots + (a_n - a_1) \le a_1$. So $a_1+a_2+\ldots+a_i \le a_1 \cdot (i+1) \le a_1 \cdot a_i$, and the prefix is good! Let's recap: $a_i \ge i$ is required (observation $4$); if $a_i \ge i + 1$, then $a_1, a_2, \ldots, a_i$ is automatically good (observation $6$); and $a_i = i$ is only possible when $i = a_1$ (observation $5$). So the necessary and sufficient conditions for the sorted sequence $a$ to be perfect, assuming $a_n = n + 1$, are: for $i \le a_1$, $a_1 \le a_i \le n + 1$; for $i \gt a_1$, $i + 1 \le a_i \le n + 1$; and $(a_1 - a_1) + (a_2 - a_1) + \ldots + (a_n - a_1) \le a_1$. Notice that we need the answer considering all such sequences, not only sorted sequences, but it is not hard to track that using basic combinatorics. So let's fix the smallest number $a_1$ and subtract $a_1$ from everything.
So we need to count the number of sequences $b_1, b_2, \ldots, b_n$ such that: $0 \le b_i \le n + 1 - a_1$; $b_1 + b_2 + \ldots + b_n \le a_1$; there is at least $1$ number $\ge n+1-a_1$, at least $2$ numbers $\ge n-a_1$, $\ldots$, at least $n - a_1$ numbers $\ge 2$ (this condition is basically the translation of: for $i \le a_1$, $a_1 \le a_i \le n + 1$ and for $i \gt a_1$, $i + 1 \le a_i \le n + 1$). Let's formulate a top-down dp. We will choose the $b_i$ going from $n+1-a_1$ down to $0$. Let $\operatorname{dp}(i, sum, k) =$ the number of sequences when we have already chosen $i$ elements, the total sum of the selected elements is $sum$, and we will select value $k$ now. If we choose $cnt$ copies of $k$, then the transition is $\operatorname{dp}(i, sum, k)$ += $\frac{\operatorname{dp}(i + cnt, sum + cnt \cdot k, k - 1)}{cnt!}$. At the base case, when $i=n$, return $n!$. The factorial terms are easy to make sense of because we are considering all perfect sequences, not only sorted ones. Check my solution for more clarity. This dp takes $\mathcal{O}(n^4 \cdot \log_2 n)$ for each $a_1$ (the $\log_2 n$ term comes from the harmonic series sum, as $cnt \le \frac{a_1}{k}$), yielding a total of $\mathcal{O}(n^5 \cdot \log_2 n)$. Observation $7$: If $a_1 \lt n - 2 \cdot \sqrt{n}$, then no perfect sequences are possible. Proof: Assume that $a_1 \lt n - 2 \cdot \sqrt{n}$, that is, $n - a_1 \gt 2 \cdot \sqrt{n}$. Because of condition $3$, there will be at least $1$ number $\ge n - 2 \cdot \sqrt{n} + 2$, at least $2$ numbers $\ge n - 2 \cdot \sqrt{n} + 1$, $\ldots$, at least $n - 2 \cdot \sqrt{n} + 1$ numbers $\ge 2$. But that clearly means that $b_1 + b_2 + \ldots + b_n \gt n + 1 \gt a_1$, which is a contradiction. So we only need to traverse $\mathcal{O}(\sqrt{n})$ values of $a_1$, which also means that in the dp, $k$ is $\mathcal{O}(\sqrt{n})$.
So the total time complexity will be $\mathcal{O}(n^3 \sqrt{n} \log_2 n)$ or faster, but with such a low constant that it runs instantly! Phew, such a long editorial. I need to take a break and explore the world sometimes...
|
[
"combinatorics",
"dp",
"math"
] | 3,200
|
#include<bits/stdc++.h>
using namespace std;
const int N = 205;
int mod;
int power(long long n, long long k) {
int ans = 1 % mod; n %= mod; if (n < 0) n += mod;
while (k) {
if (k & 1) ans = (long long) ans * n % mod;
n = (long long) n * n % mod;
k >>= 1;
}
return ans;
}
int dp[N][N][40], fac[N], ifac[N];
int vis[N][N][40];
int n, a1;
/*
number of solutions to the equation b1 + b2 + ... bn <= a1
s.t. 0<=bi<=n+1-a1
and there is at least 1 number >= n + 1 - a1,
at least 2 numbers >= (n - a1), ..., at least (n - a1) numbers >= 2
*/
int yo(int i, int sum, int k) { // k <= 2 * sqrt(n)
if (i == n) return fac[n];
if (k == 0) return 1LL * fac[n] * ifac[n - i] % mod;
if (vis[i][sum][k] == a1) return dp[i][sum][k];
vis[i][sum][k] = a1;
int &ans = dp[i][sum][k];
ans = 0;
int r = (a1 - sum) / k;
for (int cnt = r; cnt >= 0; cnt--) {
if (k > 1 and i + cnt < n - a1 + 1 - k + 1) continue;
ans += 1LL * yo(i + cnt, sum + cnt * k, k - 1) * ifac[cnt] % mod;
ans %= mod;
}
return ans;
}
int32_t main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
cin >> n >> mod;
fac[0] = 1;
ifac[0] = 1;
for (int i = 1; i < N; i++) {
fac[i] = 1LL * fac[i - 1] * i % mod;
ifac[i] = power(fac[i], mod - 2);
}
int ans = 0;
int lim = 2 * sqrt(n) + 1;
for (a1 = max(1, n - lim); a1 <= n; a1++) {
ans += yo(0, 0, n + 1 - a1);
ans %= mod;
}
ans %= mod;
cout << ans << '\n';
return 0;
}
|
1603
|
F
|
October 18, 2017
|
It was October 18, 2017. Shohag, a melancholic soul, made a strong determination that he would pursue Competitive Programming seriously, by heart, because he found it fascinating. Fast forward 4 years: he is happy that he took this road. He is now creating a contest on Codeforces. He found an astounding problem but has no idea how to solve it. Help him solve the final problem of the round.
You are given three integers $n$, $k$ and $x$. Find the number, modulo $998\,244\,353$, of integer sequences $a_1, a_2, \ldots, a_n$ such that the following conditions are satisfied:
- $0 \le a_i \lt 2^k$ for each integer $i$ from $1$ to $n$.
- There is no non-empty subsequence in $a$ such that the bitwise XOR of the elements of the subsequence is $x$.
A sequence $b$ is a subsequence of a sequence $c$ if $b$ can be obtained from $c$ by deletion of several (possibly, zero or all) elements.
|
When $x = 0$, we need to count the number of sequences $(a_1, a_2, \ldots, a_n)$ such that the $a_i$ are linearly independent. Clearly, for $n \gt k$ there are no such sequences, and for $n \le k$ the answer is $(2^k-1)\cdot(2^k-2)\cdot \ldots \cdot (2^k - 2^{n-1})$, as $a_i$ can be any element not generated by the elements $a_1, a_2, \ldots, a_{i-1}$, and they generate exactly $2^{i-1}$ elements.

Now let's deal with the case $x \gt 0$. The exact value of $x$ doesn't matter here: it's easy to construct a bijection between such sequences for this problem and such sequences for the case $x = 1$, by a change of basis from one which has $x$ as the first element to one which has $1$ as the first element. So from now on, $x = 1$.

We will show that the answer for given $n, k$ is $(2^{k-1})^n \cdot 2^{k-1} - (2^{k-2})^n \cdot (2^{k-1}-1)\cdot 2^{k-2} + (2^{k-3})^n \cdot (2^{k-1}-1)(2^{k-2}-1)\cdot 2^{k-3} - \ldots$ More generally, it's $\sum_{i = 0}^{k-1}\left((-1)^i \cdot (2^{k-i-1})^n \cdot \prod_{j = 0}^{i-1} (2^{k-j-1}-1) \cdot 2^{k-i-1}\right)$. This is easy to calculate in $O(k + \log{n})$: compute the prefix products $P_i = \prod_{j = 0}^{i-1} (2^{k-j-1}-1)$ in $O(k)$, and find $2^n$ in $O(\log{n})$ and its powers in $O(k)$.

Now, let's prove the formula. It's possible to prove it by some inclusion-exclusion arguments, but here we will present a somewhat more combinatorial approach. Let's look at $(2^{k-i-1})^n \cdot \prod_{j = 0}^{i-1} (2^{k-j-1}-1) \cdot 2^{k-i-1}$ and find a combinatorial interpretation for it.

Consider a linear space of dimension $t$ (with $2^t$ elements) containing $1$, and all its subspaces of dimension $t-1$. Firstly, how many of them are there? Exactly $2^t-1$, each corresponding to the unique nonzero element of the orthogonal complement of the subspace. It's easy to see that exactly $2^{t-1}-1$ of them contain $1$, and the other $2^{t-1}$ don't.
Then, there is a natural interpretation of the number $\prod_{j = 0}^{i-1} (2^{k-j-1}-1) \cdot 2^{k-i-1}$: it's the number of sequences of spaces $(S_0, S_1, \ldots, S_i, S_{i+1})$, where:

- $S_0$ is the whole space of our problem, of dimension $k$.
- For $j$ from $1$ to $i$, $S_j$ is a subspace of $S_{j-1}$ of dimension $k-j$, containing $1$ ($2^{k-j}-1$ choices at each step).
- $S_{i+1}$ is a subspace of $S_i$ of dimension $k-i-1$, not containing $1$ ($2^{k-i-1}$ choices).

Then, $(2^{k-i-1})^n \cdot \prod_{j = 0}^{i-1} (2^{k-j-1}-1) \cdot 2^{k-i-1}$ is the number of tuples $(S_0, S_1, \ldots, S_{i+1}, a)$, where the spaces are as described above and all elements of $a$ lie in $S_{i+1}$ (as there are $2^{k-i-1}$ ways to choose each of them). This starts to resemble what we need: in such a structure, the space generated by $a$ cannot contain $1$, and these are exactly the arrays we are interested in.

So, we want to show that the actual number of arrays from the statement equals the number of tuples $(S_0, S_1, \ldots, S_{i+1}, a)$, where a tuple is counted with a plus sign for even $i$ and with a minus sign for odd $i$. It's enough to prove that each array $a$ is counted exactly once this way (meaning that it appears in tuples taken with a plus one more time than in tuples taken with a minus).

Fine, let's consider an array $a$ such that the subspace spanned by it doesn't contain $1$, and look at the sequences of spaces $(S_0, S_1, \ldots, S_i, S_{i+1})$ such that $a$ is contained in $S_{i+1}$. If the subspace generated by $a$ is $T$, we just need all the spaces to contain $T$.
If the dimension of $T$ is $t$, there is a bijection between such sequences and sequences $(S_0, S_1, \ldots, S_i, S_{i+1})$, where:

- $S_0$ is the space orthogonal to the subspace $T$, of dimension $k-t$.
- For $j$ from $1$ to $i$, $S_j$ is a subspace of $S_{j-1}$ of dimension $k-t-j$, containing $1$.
- $S_{i+1}$ is a subspace of $S_i$ of dimension $k-t-i-1$, not containing $1$.

But the number of such sequences for a fixed $i$ is precisely $\prod_{j = 0}^{i-1} (2^{k-t-j-1}-1) \cdot 2^{k-t-i-1}$! So we need to show that the sum of this (with the corresponding signs) over $i$ from $0$ to $k-t-1$ is $1$. Replacing $k-t$ by $k$, we need to show that $\sum_{i = 0}^{k-1} \left((-1)^i \cdot 2^{k-i-1} \cdot \prod_{j = 0}^{i-1}(2^{k-j-1}-1)\right) = 1$. This is easy to prove by induction: after moving the $i = 0$ term $2^{k-1}$ to the right-hand side, it's equivalent to $\sum_{i = 1}^{k-1} \left((-1)^i \cdot 2^{k-i-1} \cdot \prod_{j = 0}^{i-1}(2^{k-j-1}-1)\right) = 1 - 2^{k-1}$.
|
[
"combinatorics",
"dp",
"implementation",
"math"
] | 2,700
|
#include<bits/stdc++.h>
using namespace std;
const int N = 1e7 + 9, mod = 998244353;
template <const int32_t MOD>
struct modint {
int32_t value;
modint() = default;
modint(int32_t value_) : value(value_) {}
inline modint<MOD> operator + (modint<MOD> other) const { int32_t c = this->value + other.value; return modint<MOD>(c >= MOD ? c - MOD : c); }
inline modint<MOD> operator - (modint<MOD> other) const { int32_t c = this->value - other.value; return modint<MOD>(c < 0 ? c + MOD : c); }
inline modint<MOD> operator * (modint<MOD> other) const { int32_t c = (int64_t)this->value * other.value % MOD; return modint<MOD>(c < 0 ? c + MOD : c); }
inline modint<MOD> & operator += (modint<MOD> other) { this->value += other.value; if (this->value >= MOD) this->value -= MOD; return *this; }
inline modint<MOD> & operator -= (modint<MOD> other) { this->value -= other.value; if (this->value < 0) this->value += MOD; return *this; }
inline modint<MOD> & operator *= (modint<MOD> other) { this->value = (int64_t)this->value * other.value % MOD; if (this->value < 0) this->value += MOD; return *this; }
inline modint<MOD> operator - () const { return modint<MOD>(this->value ? MOD - this->value : 0); }
modint<MOD> pow(uint64_t k) const { modint<MOD> x = *this, y = 1; for (; k; k >>= 1) { if (k & 1) y *= x; x *= x; } return y; }
modint<MOD> inv() const { return pow(MOD - 2); } // MOD must be a prime
inline modint<MOD> operator / (modint<MOD> other) const { return *this * other.inv(); }
inline modint<MOD> operator /= (modint<MOD> other) { return *this *= other.inv(); }
inline bool operator == (modint<MOD> other) const { return value == other.value; }
inline bool operator != (modint<MOD> other) const { return value != other.value; }
inline bool operator < (modint<MOD> other) const { return value < other.value; }
inline bool operator > (modint<MOD> other) const { return value > other.value; }
};
template <int32_t MOD> modint<MOD> operator * (int64_t value, modint<MOD> n) { return modint<MOD>(value) * n; }
template <int32_t MOD> modint<MOD> operator * (int32_t value, modint<MOD> n) { return modint<MOD>(value % MOD) * n; }
template <int32_t MOD> istream & operator >> (istream & in, modint<MOD> &n) { return in >> n.value; }
template <int32_t MOD> ostream & operator << (ostream & out, modint<MOD> n) { return out << n.value; }
using mint = modint<mod>;
mint pw[N];
mint genius(int n, int k) {
if (k == 0) return 1;
vector<mint> c(k, 0), d(k + 1, 0);
d[k] = 1;
for (int i = k - 1; i >= 0; i--) {
c[i] = pw[i] * d[i + 1];
d[i] = (pw[i] - 1) * d[i + 1];
}
mint ans = 0, pwn = mint(2).pow(n), cur = 1;
for (int i = 0; i < k; i++) {
ans += ((k - 1 - i) & 1 ? mod - 1 : 1) * c[i] * cur;
cur *= pwn;
}
return ans;
}
mint f(int n, int k) {
if (k == 0 or n > k) return 0;
mint ans = 1;
for (int i = 0; i < n; i++) {
ans *= pw[k] - pw[i];
}
return ans;
}
int32_t main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
pw[0] = 1;
for (int i = 1; i < N; i++) {
pw[i] = pw[i - 1] * 2;
}
int t; cin >> t;
while (t--) {
int n, k, x; cin >> n >> k >> x;
if (x) {
cout << genius(n, k) << '\n';
}
else {
cout << f(n, k) << '\n';
}
}
return 0;
}
|