1916
A
2023
In a sequence $a$, whose product was equal to $2023$, $k$ numbers were removed, leaving a sequence $b$ of length $n$. Given the resulting sequence $b$, find any suitable sequence $a$ and output which $k$ elements were removed from it, or state that such a sequence could not have existed. Notice that you are not guaranteed that such an array exists.
Let the product of the numbers in our array be $x$. If $2023$ is not divisible by $x$, then the answer is NO; otherwise the answer is YES. One way to construct the removed numbers is: one number equal to $\frac{2023}{x}$ and $k - 1$ numbers equal to $1$.
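The whole construction fits in a few lines of C++; a sketch (the helper `removedNumbers` and its interface are our own naming, and it assumes the elements of $b$ are positive):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the editorial's construction: returns the k removed numbers,
// or nullopt if no valid sequence a exists.
optional<vector<long long>> removedNumbers(const vector<long long>& b, int k) {
    long long x = 1;
    for (long long v : b) {
        x *= v;
        if (x <= 0 || x > 2023) return nullopt;  // product must divide 2023
    }
    if (2023 % x != 0) return nullopt;
    vector<long long> res(k, 1);   // k - 1 numbers equal to 1 ...
    res[0] = 2023 / x;             // ... and one number 2023 / x
    return res;
}
```

Multiplying the recovered numbers back into the product of $b$ gives exactly $2023$.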
[ "constructive algorithms", "implementation", "math", "number theory" ]
800
null
1916
B
Two Divisors
A certain number $1 \le x \le 10^9$ is chosen. You are given two integers $a$ and $b$, which are the two largest divisors of the number $x$. At the same time, the condition $1 \le a < b < x$ is satisfied. For the given numbers $a$, $b$, you need to find the value of $x$. $^{\dagger}$ The number $y$ is a divisor of the number $x$ if there is an integer $k$ such that $x = y \cdot k$.
First case: $b \mod a = 0$. In this case, $b = a \cdot p$, where $p$ is the smallest prime factor of $x$. Then $x = b \cdot p = b \cdot \frac{b}{a}$. Second case: $b \mod a \neq 0$. In this case, $b = \frac{x}{p}, a = \frac{x}{q}$, where $p, q$ are the two smallest prime factors of $x$. Then $\gcd(a, b) = \frac{x}{p \cdot q}, x = b \cdot p = b \cdot \frac{a}{\gcd(a, b)}$.
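Both cases collapse into two lines of C++; a sketch (the function name `recoverX` is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Recover x from its two largest divisors a < b smaller than x,
// following the editorial's two cases.
long long recoverX(long long a, long long b) {
    if (b % a == 0)                  // case 1: b = a * p, p = smallest prime of x
        return b * (b / a);          // x = b * p = b * (b / a)
    return b * (a / gcd(a, b));      // case 2: x = b * a / gcd(a, b)
}
```

For example, for $x = 100$ the inputs are $a = 25$, $b = 50$ (case 1), and for $x = 12$ they are $a = 4$, $b = 6$ (case 2).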
[ "constructive algorithms", "math", "number theory" ]
1000
null
1916
C
Training Before the Olympiad
Masha and Olya have an important team olympiad coming up soon. In honor of this, Masha, for warm-up, suggested playing a game with Olya: There is an array $a$ of size $n$. Masha goes first, and the players take turns. Each move is described by the following sequence of actions: $\bullet$ If the size of the array is $1$, the game ends. $\bullet$ The player who is currently playing chooses two \textbf{different} indices $i$, $j$ ($1 \le i, j \le |a|$), and performs the following operation — removes $a_i$ and $a_j$ from the array and adds to the array a number equal to $\lfloor \frac{a_i + a_j}{2} \rfloor \cdot 2$. In other words, first divides the sum of the numbers $a_i$, $a_j$ by $2$ rounding down, and then multiplies the result by $2$. Masha aims to maximize the final number, while Olya aims to minimize it. Masha and Olya decided to play on each non-empty prefix of the initial array $a$, and asked for your help. For each $k = 1, 2, \ldots, n$, answer the following question. Let only the first $k$ elements of the array $a$ be present in the game, with indices $1, 2, \ldots, k$ respectively. What number will remain at the end with optimal play by both players?
Note that our operation replaces two numbers with their sum if they have the same parity, and with their sum $-1$ otherwise. Therefore, the second player wants to perform as many operations as possible in which an even and an odd number are used. Also note that on any move of the second player, there is at least one even number: the one the first player produced on the previous move. So the first player's goal is to remove two odd numbers as often as possible. Observe also that the number of odd numbers in the array never increases (every operation produces an even number), which means all operations on two odd numbers can be assumed to happen before operations on two even numbers. After each such move, the second player will remove one odd number from the array; that is, in two moves the number of odd numbers decreases by $3$. Let $sum$ be the sum of all numbers and $cnt$ the number of odd numbers, and consider $cnt \bmod 3$. If the remainder is $0$, the answer is $sum - \frac{cnt}{3}$. If the remainder is $1$, two situations are possible: if the size of the array is $1$, the answer is the single number in the array; otherwise, at the moment when $1$ odd number is left there will be one more move of the second player, so he will reduce the total sum once more, and the answer is $sum - \lfloor \frac{cnt}{3} \rfloor - 1$. If the remainder is $2$, the number of moves on which the second player reduces the sum does not change, so the answer is $sum - \lfloor \frac{cnt}{3} \rfloor$.
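The case analysis turns into a single linear scan over prefixes; a sketch in C++ (the function name is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// For each prefix of a, the final number under optimal play, following the
// editorial's case analysis on (number of odd elements) mod 3.
vector<long long> solvePrefixes(const vector<long long>& a) {
    vector<long long> res;
    long long sum = 0, odd = 0;
    for (size_t k = 0; k < a.size(); ++k) {
        sum += a[k];
        odd += a[k] & 1;
        long long ans;
        if (odd % 3 == 1 && k == 0) ans = a[0];          // one-element game
        else if (odd % 3 == 1) ans = sum - odd / 3 - 1;  // one extra loss of 1
        else ans = sum - odd / 3;                        // remainders 0 and 2
        res.push_back(ans);
    }
    return res;
}
```

For example, on the prefix $[1, 2]$ the single move merges $1$ and $2$ into $\lfloor 3/2 \rfloor \cdot 2 = 2$, matching the formula.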
[ "constructive algorithms", "games", "greedy", "implementation", "math" ]
1200
null
1916
D
Mathematical Problem
The mathematicians of the 31st lyceum were given the following task: You are given an \textbf{odd} number $n$, and you need to find $n$ different numbers that are squares of integers. But it's not that simple. Each number should have a length of $n$ (and should not have leading zeros), and the multiset of digits of all the numbers should be the same. For example, for $\mathtt{234}$ and $\mathtt{432}$, and $\mathtt{11223}$ and $\mathtt{32211}$, the multisets of digits are the same, but for $\mathtt{123}$ and $\mathtt{112233}$, they are not. The mathematicians couldn't solve this problem. Can you?
$1.$ For $n = 1$ the answer is $1$. $2.$ For $n = 3$ the answers are $169, 961, 196$. $3.$ How to obtain the answer for $n + 2$ from the answer for $n$: multiply each number of the length-$n$ answer by $100$ (the square of the number $10$); each number then remains a square. We still have $2$ numbers left; let's compose them as $1...6...9$ and $9...6...1$, where each of the two gaps between the digits holds $(n - 1) / 2$ zeros (so that their length is $n + 2$). These are respectively the squares of the numbers $1...3$ and $3...1$ with the same number of zeros in the gap between their digits. Alternative solution: for $n = 11$ generate $99$ such numbers, for $n \ge 11$ pad them with zeros, and for $n < 11$ solve in $O(\sqrt{10^n})$.
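The inductive construction can be carried out directly on digit strings; a sketch in C++ (the function name is ours; it assumes $n$ is odd):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Build n n-digit squares sharing one digit multiset, per the editorial's
// induction: multiply previous answers by 100 and append the squares of
// 10...03 and 30...01 (zeros in the gaps).
vector<string> buildAnswers(int n) {
    if (n == 1) return {"1"};
    vector<string> ans = {"169", "961", "196"};   // base case n = 3
    for (int len = 5; len <= n; len += 2) {
        for (auto& s : ans) s += "00";            // multiply by 100 = 10^2
        string z((len - 3) / 2, '0');             // zeros in each gap
        ans.push_back("1" + z + "6" + z + "9");   // (10...03)^2, e.g. 103^2 = 10609
        ans.push_back("9" + z + "6" + z + "1");   // (30...01)^2, e.g. 301^2 = 90601
    }
    return ans;
}
```

For $n = 5$ this yields $16900, 96100, 19600, 10609, 90601$, all squares with digit multiset $\{0, 0, 1, 6, 9\}$.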
[ "brute force", "constructive algorithms", "geometry", "math" ]
1700
null
1916
E
Happy Life in University
Egor and his friend Arseniy are finishing school this year and will soon enter university. And since they are very responsible guys, they have started preparing for admission already. First of all, they decided to take care of where they will live for the long four years of study, and after visiting the university's website, they found out that the university dormitory can be represented as a rooted tree with $n$ vertices with the root at vertex $1$. In the tree, each vertex represents a recreation with some type of activity $a_i$. The friends need to choose $2$ recreations (not necessarily different) in which they will settle. The guys are convinced that the greater the value of the following function $f(u, v) = diff(u, lca(u, v)) \cdot diff(v, lca(u, v))$, the more fun their life will be. Help Egor and Arseniy and find the maximum value of $f(u, v)$ among all pairs of recreations! $^{\dagger} diff(u, v)$ — the number of different activities listed on the simple path from vertex $u$ to vertex $v$. $^{\dagger} lca(u, v)$ — a vertex $p$ such that it is at the maximum distance from the root and is an ancestor of both vertex $u$ and vertex $v$.
Let's start a tree traversal from the root, in which for each color on the path from the root to the current vertex we maintain the nearest vertex of that color, or note that it does not exist. This can be maintained using an array that is updated in $O(1)$. Now, for each vertex, let's remember all the vertices for which it is the nearest ancestor of the same color. Build a segment tree over the Euler tour that supports two operations: adding on a segment and finding the maximum. Let's start another traversal of the tree, in which, for each vertex, the segment tree maintains the count of different colors on the path from it to the vertex we are currently visiting in the depth-first search. When entering a vertex, we add $1$ to its entire subtree and subtract $1$ from the subtrees of all vertices for which it is the nearest ancestor of the same color. After that, we find the $2$ maximums among all the subtrees of the children and update the answer using them.
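The workhorse here is the segment tree with range add and range max over the Euler tour; a standalone sketch of just that component (not the full solution):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Segment tree with lazy propagation: add x on [l, r], query max on [l, r].
struct SegTree {
    int n;
    vector<long long> mx, lz;
    SegTree(int n) : n(n), mx(4 * n, 0), lz(4 * n, 0) {}
    void push(int v) {
        for (int c : {2 * v, 2 * v + 1}) { mx[c] += lz[v]; lz[c] += lz[v]; }
        lz[v] = 0;
    }
    void add(int v, int tl, int tr, int l, int r, long long x) {
        if (r < tl || tr < l) return;
        if (l <= tl && tr <= r) { mx[v] += x; lz[v] += x; return; }
        push(v);
        int tm = (tl + tr) / 2;
        add(2 * v, tl, tm, l, r, x);
        add(2 * v + 1, tm + 1, tr, l, r, x);
        mx[v] = max(mx[2 * v], mx[2 * v + 1]);
    }
    void add(int l, int r, long long x) { add(1, 0, n - 1, l, r, x); }
    long long query(int v, int tl, int tr, int l, int r) {
        if (r < tl || tr < l) return LLONG_MIN;
        if (l <= tl && tr <= r) return mx[v];
        push(v);
        int tm = (tl + tr) / 2;
        return max(query(2 * v, tl, tm, l, r), query(2 * v + 1, tm + 1, tr, l, r));
    }
    long long query(int l, int r) { return query(1, 0, n - 1, l, r); }
};
```

In the solution, "add $1$ to a subtree" becomes a range add over that subtree's Euler-tour interval, and "best in a child's subtree" becomes a range max query.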
[ "data structures", "dfs and similar", "greedy", "trees" ]
2300
null
1916
F
Group Division
In the $31$st lyceum, there were two groups of olympiad participants: computer science and mathematics. The number of computer scientists was $n_1$, and the number of mathematicians was $n_2$. It is not known for certain who belonged to which group, but it is known that there were friendly connections between some pairs of people (these connections could exist between a pair of people from the same group or from different groups). The connections were so strong that even if one person is removed along with all their friendly connections, any pair of people still remains acquainted either directly or through mutual friends. $^{\dagger}$ More formally, two people $(x, y)$ are acquainted in the following case: there are people $a_1, a_2, \ldots,a_n$ ($1 \le a_i \le n_1 + n_2$) such that the following conditions are simultaneously met: $\bullet$ Person $x$ is directly acquainted with $a_1$. $\bullet$ Person $a_n$ is directly acquainted with $y$. $\bullet$ Person $a_i$ is directly acquainted with $a_{i + 1}$ for any ($1 \le i \le n - 1$). The teachers were dissatisfied with the fact that computer scientists were friends with mathematicians and vice versa, so they decided to divide the students into two groups in such a way that the following two conditions are met: $\bullet$ There were $n_1$ people in the computer science group, and $n_2$ people in the mathematics group. $\bullet$ Any pair of computer scientists should be acquainted (acquaintance involving mutual friends, who must be from the same group as the people in the pair, is allowed), the same should be true for mathematicians. Help them solve this problem and find out who belongs to which group.
We will build the first set by adding one vertex at a time. Initially, we include any vertex. Then, to add a vertex from the second set, it must not be a cut vertex (of the graph on the not-yet-chosen vertices) and must be connected to some vertex in the first set. Statement: such a vertex always exists. Proof: if there is no cut vertex, such a vertex exists. Otherwise, consider a cut vertex such that, when it is removed, at least one of the resulting components has no other cut vertices. Then, if none of the vertices of this component were connected to the first set, this cut vertex would be a cut vertex in the original graph. Contradiction. Time: $O(n_1 \cdot (n_1 + n_2 + m))$.
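The primitive this greedy re-runs at every step is a cut-vertex search; a standard Tarjan lowlink sketch (a standalone component, with our own naming):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns a 0/1 vector marking the cut vertices (articulation points) of an
// undirected graph given as an adjacency list. O(n + m) per call; the greedy
// described above reruns it on the remaining graph before each step.
vector<char> cutVertices(int n, const vector<vector<int>>& g) {
    vector<int> tin(n, -1), low(n, 0);
    vector<char> cut(n, 0);
    int timer = 0;
    function<void(int, int)> dfs = [&](int v, int p) {
        tin[v] = low[v] = timer++;
        int children = 0;
        for (int to : g[v]) {
            if (to == p) continue;
            if (tin[to] != -1) low[v] = min(low[v], tin[to]);  // back edge
            else {
                dfs(to, v);
                low[v] = min(low[v], low[to]);
                if (low[to] >= tin[v] && p != -1) cut[v] = 1;
                ++children;
            }
        }
        if (p == -1 && children > 1) cut[v] = 1;  // root with >1 DFS children
    };
    for (int v = 0; v < n; v++)
        if (tin[v] == -1) dfs(v, -1);
    return cut;
}
```

On a path $0 - 1 - 2$ the middle vertex is a cut vertex; on a triangle there are none.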
[ "constructive algorithms", "dfs and similar", "graphs", "greedy" ]
2900
null
1916
G
Optimizations From Chelsu
You are given a tree with $n$ vertices, whose vertices are numbered from $1$ to $n$. Each edge is labeled with some integer $w_i$. Define $len(u, v)$ as the number of edges in the simple path between vertices $u$ and $v$, and $gcd(u, v)$ as the Greatest Common Divisor of all numbers written on the edges of the simple path between vertices $u$ and $v$. For example, $len(u, u) = 0$ and $gcd(u, u) = 0$ for any $1 \leq u \leq n$. You need to find the maximum value of $len(u, v) \cdot gcd(u, v)$ over all pairs of vertices in the tree.
Let's perform centroid decomposition and maintain the currently found answer $ans$. On vertical paths we simply evaluate the function and immediately update the answer. We need to learn how to update the answer for a pair of vertical paths, combining pairs $(g_1, len_1)$ and $(g_2, len_2)$. Let's make several observations. First: if $len_1 \ge len_2$, we may assume $gcd(g_1, g_2) = g_1$. Indeed, if $gcd(g_1, g_2)$ is not equal to $g_1$, then it is at least twice smaller than $g_1$, so $gcd(g_1, g_2) \cdot (len_1 + len_2) \le \frac{g_1}{2} \cdot 2 \cdot len_1 = g_1 \cdot len_1$, and the pair cannot beat the single path $(g_1, len_1)$, which is already accounted for. Second: from the first fact we conclude that $g_2 = g_1 \cdot k$, so for the pair to improve the answer we need $g_1 \cdot (len_1 + len_2) > g_1 \cdot k \cdot len_2$, from which $len_1 > len_2 \cdot (k - 1)$. This means that $k \leq len_1$. For a fixed centroid, do the following for each component: for each $g$, keep only the largest value of $len$; iterate over all such pairs and suppose we have fixed the pair $(g_1, len_1)$; then we need to iterate over $1 \leq k \leq len_1$. Let's make the following optimization: if $g \cdot (len + d) \leq ans$, where $d$ is the farthest distance from the centroid to any vertex, there is no need to iterate over $k$, because we are guaranteed not to improve the answer. We have an upper bound of $O(n \sqrt n \log n)$ for this solution. Let's prove it. For the pair $(len_i, gcd_i)$, we iterate over $k$ such that $1 \leq k \leq k_i$, where $k_i \leq len_i$, and we also cut off by the answer. Mark $k_i$ edges on the path from the centroid to the vertex from which this pair came. Look at some edge; suppose it is marked $c$ times, its distance from the centroid is $L$, and the $\gcd$ from the centroid up to and including this edge is $g$. This means that we also marked it for some $gcd_i \leq \frac{g}{c}$. But we already knew that the answer was at least $g$, which means the size of the current graph is at least $L \cdot c$.
It turns out that we could not have marked an edge at a distance $L$ from the centroid more than $\frac{n}{L}$ times. We need to estimate in the tree the sum of $min(\frac{n}{depth[v]}, sz[v])$. It can be estimated as $n \sqrt n$. For $depth[v] \leq \sqrt n$, we have a total size of subtrees at the level of $n$, and then the value $\frac{n}{depth[v]} \leq \sqrt n$.
[ "divide and conquer", "dp", "number theory", "trees" ]
3500
null
1916
H2
Matrix Rank (Hard Version)
\textbf{This is the hard version of the problem. The only differences between the two versions of this problem are the constraints on $k$. You can make hacks only if all versions of the problem are solved.} You are given integers $n$, $p$ and $k$. $p$ is guaranteed to be a prime number. For each $r$ from $0$ to $k$, find the number of $n \times n$ matrices $A$ over the field$^\dagger$ of integers modulo $p$ such that the rank$^\ddagger$ of $A$ is exactly $r$. Since these values are big, you are only required to output them modulo $998\,244\,353$. $^\dagger$ https://en.wikipedia.org/wiki/Field_(mathematics) $^\ddagger$ https://en.wikipedia.org/wiki/Rank_(linear_algebra)
Let $dp_{m,k}$ denote the number of $m \times n$ matrices whose rank is $k$. The recurrence is $dp_{m,k} = dp_{m-1,k} \cdot p^k + dp_{m-1,k-1} \cdot (p^n-p^{k-1})$ with the base case $dp_{0,0}=1$. Consider enumerating $m \times n$ matrices $A$ such that $\text{rank}(A)=k$ based on an $(m-1) \times n$ matrix $B$ and a row vector $C$ such that $A$ is made by appending $C$ to $B$: If $\text{rank}(B) = \text{rank}(A)$, then $C$ must be inside the span of $B$, which has size $p^k$. If $\text{rank}(B)+1=\text{rank}(A)$, then $C$ must be outside the span of $B$, which has size $p^n-p^{k-1}$. The $p^n-p^{k-1}$ term in our recurrence is annoying to handle, so we will define $g_{m,k}$ by $g_{m,k} = g_{m-1,k} \cdot p^k + g_{m-1,k-1}$. It should be easy to see that $dp_{m,k}=g_{m,k} \cdot \prod\limits_{i=0}^{k-1} (p^n-p^i)$.

Method 1: Easy Version Only

We will show how to compute $g_{2m}$ from $g_m$. I claim that $g_{2m,k} = \sum\limits_{a+b=k} g_{m,a} \cdot g_{m,b} \cdot p^{a(m-b)}$. The justification is to consider moving the DP state from $(0,0) \to (m,a) \to (2m,a+b)$. The contribution of moving $(m,a) \to (2m,a+b)$ is very similar to that of moving $(0,0) \to (m,b)$, except every step that would multiply the value by $p^k$ multiplies it by $p^{k+a}$ instead. And conveniently, the number of such steps is $m-b$, contributing the extra factor $p^{a(m-b)}$. To convolve these sequences, we will use a technique similar to the chirp-z transform. Note that $ab = \binom{a+b}{2} - \binom{a}{2} - \binom{b}{2}$; you can prove this geometrically by viewing $\binom{x}{2}$ as a triangle of dots.
$\begin{aligned} \sum\limits_{a+b=k} g_{m,a} \cdot g_{m,b} \cdot p^{a(m-b)} &= \sum\limits_{a+b=k} g_{m,a} \cdot g_{m,b} \cdot p^{am + \binom{a}{2} + \binom{b}{2} - \binom{a+b}{2}} \\ &= p^{-\binom{k}{2}} \sum\limits_{a+b=k} (g_{m,a} \cdot p^{am + \binom{a}{2}}) \cdot (g_{m,b} \cdot p^{\binom{b}{2}}) \end{aligned}$ Therefore, we can get $g_{2m}$ from $g_m$ by convoluting $(g_{m,i} \cdot p^{im+\binom{i}{2}})$ and $(g_{m,i} \cdot p^{\binom{i}{2}})$ in $O(k \log k)$ time. By performing a process similar to fast exponentiation, we can compute $g_n$ in $O(k \log k \log n)$.

Method 2: Easy Version Only

Consider the ordinary generating function $G_k = \sum\limits g_{m,k} x^m$. Based on the recurrence above, we have $G_k = x(p^kG_k + G_{k-1})$. This gives us $G_k = \frac{xG_{k-1}}{1-p^kx} = x^k \prod\limits_{i=0}^k \frac{1}{1-p^ix}$. We want to find $[x^n] G_0, [x^n] G_1, \ldots, [x^n] G_k$. Notice that $[x^n] G_k = [x^{n-k}] \prod\limits_{i=0}^k \frac{1}{1-p^ix} = [x^k] x^n ~\%~ (\prod\limits_{i=0}^k x-p^i)$, where $A ~\%~ B$ denotes the unique polynomial $C$ with $\text{deg}(C)<\text{deg}(B)$ such that there exists $D$ with $A = BD + C$. We can compute all required coefficients in a divide-and-conquer way. Note that $A ~\%~ B = (A ~\%~ BC)~\%~B$. We want $\text{dnc}(l,r)$ to find all the answers $[x^n] G_l, [x^n] G_{l+1}, \ldots, [x^n] G_r$. $\text{dnc}(l,r)$ will be provided with $P(x)=x^n ~\%~ (\prod\limits_{i=0}^r x-p^i)$ and $Q(x) = \prod\limits_{i=0}^{l} x-p^i$. A direct implementation of this DnC is at least quadratic. However, we only need to keep the last $(r-l+1)$ non-zero coefficients of both $P$ and $Q$. If implemented properly, this runs in $O(k \log k \log n)$.

Method 3: Hard Version

As with the first part of method $2$, we want to find $[x^n] G_0, [x^n] G_1, \ldots, [x^n] G_k$ where $G_k = x^k \prod\limits_{i=0}^k \frac{1}{1-p^ix}$.
We will try to decompose $\prod\limits_{i=0}^k \frac{1}{1-p^ix}$ into partial fractions. Repeated factors in partial fractions are hard to deal with; it turns out we do not need to deal with them at all. Suppose $ord$ is the smallest positive integer such that $p^{ord} \equiv 1 \pmod{998244353}$. Then $p^0,p^1,\ldots,p^{ord-1}$ are all distinct. Furthermore, $dp_{n,k}=0$ for $k \geq ord$, as $dp_{m,k}=g_{m,k} \cdot \prod\limits_{i=0}^{k-1} (p^n-p^i)$. Therefore, we can safely assume $k < ord$, so that $p^0,p^1,\ldots,p^k$ are all distinct. By using the cover-up rule on $\prod\limits_{i=0}^k \frac{1}{1-p^ix}$, we obtain a partial fraction decomposition $\sum\limits_{i=0}^k \frac{c_i}{1-p^ix}$ where $c_i = \prod\limits_{j=0, j\neq i}^k \frac{1}{1-p^{j-i}}$. Let $L_i=\prod_{j=1}^i\frac 1{1-p^{-j}}$, $R_i=\prod_{j=1}^i\frac 1{1-p^j}$, so that $c_i=L_iR_{k-i}$. $\begin{aligned}[x^n] G_k &= [x^{n-k}] \prod\limits_{i=0}^k \frac{1}{1-p^ix} \\ &= [x^{n-k}] \sum\limits_{a+b=k} \frac{L_a R_b}{1-p^ax} \\ &= \sum\limits_{a+b=k} p^{a(n-k)} L_a R_b \\ &=\sum\limits_{a+b=k} p^{\binom{n-b}{2} - \binom{n-k}{2} - \binom{a}{2}} L_a R_b \\ &=p^{-\binom{n-k}{2}} \sum\limits_{a+b=k} (L_a \cdot p^{-\binom{a}{2}}) (R_{b} \cdot p^{\binom{n-b}{2}}) \end{aligned}$ So we can find $[x^n] G_0, [x^n] G_1, \ldots, [x^n] G_k$ by convoluting $(L_i \cdot p^{-\binom{i}{2}})$ and $(R_{i} \cdot p^{\binom{n-i}{2}})$. The final time complexity is $O(k \log k)$.

Method 4: Hard Version

Thanks to Endagorion for finding this solution during testing. For an $n$-dimensional vector space over $\mathbb{F}_q$, the number of $k$-dimensional vector subspaces is denoted $\binom{n}{k}_q$ and is known as the q-analogue of the binomial coefficients. It is known that $\binom{n}{k}_q = \frac{(q^n-1)\ldots(q^{n-k+1}-1)}{(q-1)\ldots(q^k-1)}$. This fact is not too hard to derive by considering the number of possible ordered independent sets divided by the number of such sets that result in the same span.
Here it is helpful to use the form $\binom{n}{k}_q = \frac{[n!]_q}{[k!]_q\,[(n-k)!]_q}$ where $[n!]_q = (1) \cdot (1+q) \cdot \ldots \cdot (1+q+\ldots+q^{n-1})$. If you want to read more about these series, I recommend checking out Enumerative Combinatorics. We will proceed with PIE (the principle of inclusion-exclusion) on these $q$-binomials. We want to find the number of ways to choose $n$ vectors that span exactly some $k$-dimensional vector space, but this is hard. However, we are able to count the number of ways to choose $n$ vectors that span a subspace of some $k$-dimensional vector space. Let $f_k$ and $g_k$ denote these values respectively. Then we have $g_k = \sum\limits_{i=0}^k f_i \cdot \binom{k}{i}_q$. Note that this is very similar to an egf-convolution with the identity sequence. Indeed, if we define the generating functions $F = \sum f_k \frac{x^k}{[k!]_q}$, $G = \sum g_k \frac{x^k}{[k!]_q}$ and the identity sequence $I = \sum \frac{x^k}{[k!]_q}$, we have $F \cdot I = G$. We are able to calculate $I$ and $G$, so we can find $F = G \cdot I^{-1}$. Then, the number of matrices with rank $k$ is simply $f_k \cdot \binom{n}{k}_q$. Of course, a caveat is that division might not be defined when some $[n!]_q=0$, and it is a simple exercise for the reader to figure out why defining $0^{-1}=0$ (which aligns with our usual computation $a^{-1}=a^{MOD-2}$) does not break anything in the solution. The final time complexity is $O(k \log k)$.
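The base recurrence from the first paragraph is easy to sanity-check against brute force for tiny parameters; a sketch (names are ours) comparing it with direct enumeration of all $2 \times 2$ matrices over $\mathbb{F}_2$:

```cpp
#include <bits/stdc++.h>
using namespace std;

// dp[m][k]: number of m x n matrices over F_p of rank k, via the editorial's
// recurrence dp[m][k] = dp[m-1][k]*p^k + dp[m-1][k-1]*(p^n - p^(k-1)).
// Plain long long arithmetic, so only suitable for tiny n and p.
vector<vector<long long>> rankCounts(int n, long long p) {
    vector<vector<long long>> dp(n + 1, vector<long long>(n + 1, 0));
    dp[0][0] = 1;
    vector<long long> pw(n + 1, 1);
    for (int i = 1; i <= n; i++) pw[i] = pw[i - 1] * p;
    for (int m = 1; m <= n; m++)
        for (int k = 0; k <= m; k++) {
            dp[m][k] = dp[m - 1][k] * pw[k];
            if (k > 0) dp[m][k] += dp[m - 1][k - 1] * (pw[n] - pw[k - 1]);
        }
    return dp;
}

// Brute force for n = 2, p = 2: rank of [[a,b],[c,d]] over F_2.
array<long long, 3> bruteForce2x2mod2() {
    array<long long, 3> cnt{0, 0, 0};
    for (int a = 0; a < 2; a++) for (int b = 0; b < 2; b++)
    for (int c = 0; c < 2; c++) for (int d = 0; d < 2; d++) {
        int r;
        if (!a && !b && !c && !d) r = 0;
        else if ((a * d - b * c) % 2 != 0) r = 2;  // nonzero determinant mod 2
        else r = 1;
        cnt[r]++;
    }
    return cnt;
}
```

Both ways give $1$, $9$, and $6$ matrices of rank $0$, $1$, and $2$ respectively among the $16$ matrices.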
[ "combinatorics", "dp", "math", "matrices", "string suffix structures" ]
2700
null
1917
A
Least Product
You are given an array of integers $a_1, a_2, \dots, a_n$. You can perform the following operation any number of times (possibly zero): - Choose any element $a_i$ from the array and change its value to any integer between $0$ and $a_i$ (inclusive). More formally, if $a_i < 0$, replace $a_i$ with any integer in $[a_i, 0]$, otherwise replace $a_i$ with any integer in $[0, a_i]$. Let $r$ be the minimum possible product of all the $a_i$ after performing the operation any number of times. Find the minimum number of operations required to make the product equal to $r$. Also, print one such shortest sequence of operations. If there are multiple answers, you can print any of them.
What is the minimum product that we can get when one of the given numbers is equal to $0$? How does the absolute value of an integer change when we apply the given operation to it? We can always make the product as small as possible with at most $1$ operation. First, let's find the minimum product we can get. If one of the numbers is or becomes $0$, then the product will be $0$. Otherwise, none of the numbers changes its sign during an operation, so the initial product won't change its sign either. Also, note that the absolute value of an element does not increase after an operation. That means if the initial product is negative, we cannot decrease it; in this case the necessary number of operations is $0$. If the initial product is positive, then the product cannot become negative, so we can make it zero with $1$ operation, which will be the answer; the operation is changing any number to $0$. If the initial product is zero, then we don't need to change anything, so the number of operations needed is $0$.
[ "constructive algorithms", "math" ]
800
#include <iostream>
using namespace std;

const int N = 105;
int t, n;
int a[N];

int main() {
    cin >> t;
    while (t--) {
        cin >> n;
        int c_neg = 0, c_zero = 0, c_pos = 0;
        for (int i = 1; i <= n; i++) {
            cin >> a[i];
            if (a[i] < 0) c_neg++;
            else if (a[i] == 0) c_zero++;
            else c_pos++;
        }
        if (c_zero || c_neg % 2) {
            cout << 0 << endl;
        } else {
            cout << 1 << endl;
            cout << 1 << " " << 0 << endl;
        }
    }
    return 0;
}
1917
B
Erase First or Second Letter
You are given a string $s$ of length $n$. Let's define two operations you can apply on the string: - remove the first character of the string; - remove the second character of the string. Your task is to find the number of distinct \textbf{non-empty} strings that can be generated by applying the given operations on the initial string any number of times (possibly zero), in any order.
Do we need to use the first operation after the second operation? Try fixing the number of applied operations of the first type. How many different strings can be obtained? When can two reached strings be the same? Try to consider the first occurrence of each letter. Let's first see that applying the second operation and then the first is equivalent to applying the first operation twice. In the former case the string becomes $s_1s_2s_3 \ldots s_n \to s_1s_3 \ldots s_n \to s_3 \ldots s_n$, and in the latter case: $s_1s_2s_3 \ldots s_n \to s_2s_3 \ldots s_n \to s_3 \ldots s_n$. As we are concerned only with the number of distinct resulting strings, let's assume that the second operation is never done before the first operation. This means we do $op_1$ first operations (possibly zero) and then $op_2$ second operations (possibly zero). Let's now find the result of applying $i$ of the first and then $j$ of the second operations. It's easy to see that the result is $s_{i+1}s_{i+j+2}s_{i+j+3} \ldots s_n$. The only remaining question is in which cases two sequences of operations (with all first operations before all second operations) result in the same string. Suppose that for the pair $(i_1, j_1)$ the resulting string is the same as for the pair $(i_2, j_2)$. We can see that $i_1+j_1 = i_2+j_2$, because the number of erased letters should be the same to get strings of the same length. Next, $s_{i_1+1} = s_{i_2+1}$, as those are the first letters of two equal resulting strings. It's easy to see that these conditions are also sufficient for the result to be the same string. If after applying the first operation $op_1$ times the first letter is not the first occurrence of that letter, then any subsequent result could have been achieved by fewer operations of the first type: remove first characters until reaching that letter's first occurrence, and then remove second characters until $op_1$ operations in total are done.
This means we need to consider using the second operation only when the first character is the first occurrence of its letter. The final solution can look like this: for each letter $a \ldots z$, find its first occurrence. If the letter is found, any number of second-type operations leads to a different result. Thus we can just calculate the number of valid second operations and add that to the answer.
[ "brute force", "combinatorics", "data structures", "dp", "strings" ]
1100
#include <iostream>
#include <vector>
#include <string>
using namespace std;

int n, t;

int main() {
    cin >> t;
    while (t--) {
        cin >> n;
        string s;
        cin >> s;
        vector<long long> ans(n, 0);
        vector<int> nxt(26, n);  // next occurrence of each letter to the right
        ans[n - 1] = 1;
        nxt[s[n - 1] - 'a'] = n - 1;
        for (int i = n - 2; i >= 0; i--) {
            ans[i] = ans[i + 1] + (nxt[s[i] - 'a'] - i);
            nxt[s[i] - 'a'] = i;
        }
        cout << ans[0] << endl;
    }
    return 0;
}
1917
C
Watering an Array
You have an array of integers $a_1, a_2, \ldots, a_n$ of length $n$. On the $i$-th of the next $d$ days you are going to do exactly one of the following two actions: - Add $1$ to each of the first $b_i$ elements of the array $a$ (i.e., set $a_j := a_j + 1$ for each $1 \le j \le b_i$). - Count the elements which are equal to their position (i.e., $a_j = j$). Denote the number of such elements as $c$. Then, you add $c$ to your score, and reset the entire array $a$ to a $0$-array of length $n$ (i.e., set $[a_1, a_2, \ldots, a_n] := [0, 0, \ldots, 0]$). Your score is equal to $0$ in the beginning. Note that on each day you should perform exactly one of the actions above: you cannot skip a day or perform both actions on the same day. What is the maximum score you can achieve at the end? Since $d$ can be quite large, the sequence $b$ is given to you in the compressed format: - You are given a sequence of integers $v_1, v_2, \ldots, v_k$. The sequence $b$ is a concatenation of infinitely many copies of $v$: $b = [v_1, v_2, \ldots, v_k, v_1, v_2, \ldots, v_k, \ldots]$.
Assume that you are starting with array $a=[0, 0, \ldots, 0]$. Can your score increase by more than $1$ in this case? Note that array $a$ is non-increasing after each operation and $[1, 2, \ldots, n]$ is strictly increasing. Try fixing the first day you make the reset operation on. Can your score increase by more than $n$ on a reset operation? Let's first solve this problem if we start with the array $a=[0, 0, \ldots, 0]$. This array is non-increasing, and adding $1$ to any of its prefixes keeps it non-increasing. On the other hand, the array $[1, 2, \ldots, n]$ is strictly increasing. This means that if $a_i=i$ and $a_j=j$ then $i=j$ (because if $i<j$ then $a_i \ge a_j$, and both conditions cannot hold simultaneously). Thus you cannot increase your score by more than $1$ in one reset operation. Also, you cannot gain $1$ on each of two consecutive reset operations, because array $a$ will be equal to $[0, 0, \ldots, 0]$ before the second of them. Similarly, you cannot increase your score on the first day. Thus, the maximum score you can get is $\lfloor \frac{d}{2} \rfloor$, and you can always achieve this score by alternating the two operations. So once we have fixed the first day $i$ we make the reset operation on, we can easily compute the maximum total score we will get for all the further days. Trying all $d$ possibilities for the first reset day $i$ is too slow. But do we need to wait for it for more than $2n$ days? Actually no: waiting $2n+1$ days gains at most $n$ score, but we can get the same (or a greater) score by doing the first reset operation on the first day. Thus, we can solve this problem in $\mathcal{O}(n\min(n,d))$. Bonus: find the number of ways to achieve the maximum score. Bonus: solve this problem for larger $n$.
[ "brute force", "greedy", "implementation", "math" ]
1600
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n, k, s;  // s is the number of days d from the statement
    cin >> n >> k >> s;
    vector<int> a(n);
    for (int i = 0; i < n; i++) cin >> a[i];
    vector<int> b(k);
    for (int i = 0; i < k; i++) cin >> b[i];
    int ans = 0;
    for (int i = 0; i < s && i <= 2 * n; i++) {  // candidate first reset day
        int cur = 0;
        for (int j = 0; j < n; j++) cur += (a[j] == j + 1);
        cur += (s - i - 1) / 2;  // alternate add/reset on the remaining days
        ans = max(ans, cur);
        for (int j = 0; j < b[i % k]; j++) a[j]++;
    }
    cout << ans << endl;
}

int main() {
    int t;
    cin >> t;
    while (t--) solve();
}
1917
D
Yet Another Inversions Problem
You are given a permutation $p_0, p_1, \ldots, p_{n-1}$ of odd integers from $1$ to $2n-1$ and a permutation $q_0, q_1, \ldots, q_{k-1}$ of integers from $0$ to $k-1$. An array $a_0, a_1, \ldots, a_{nk-1}$ of length $nk$ is defined as follows: \begin{center} $a_{i \cdot k+j}=p_i \cdot 2^{q_j}$ for all $0 \le i < n$ and all $0 \le j < k$ \end{center} For example, if $p = [3, 5, 1]$ and $q = [0, 1]$, then $a = [3, 6, 5, 10, 1, 2]$. Note that all arrays in the statement are zero-indexed. Note that each element of the array $a$ is uniquely determined. Find the number of inversions in the array $a$. Since this number can be very large, you should find only its remainder modulo $998\,244\,353$. An inversion in array $a$ is a pair $(i, j)$ ($0 \le i < j < nk$) such that $a_i > a_j$.
How to count the number of inversions in a permutation of length $n$ in $\mathcal{O}(n \log n)$? Consider two arrays $x_1, x_2, \ldots, x_k$ and $\alpha x_1, \alpha x_2, \ldots, \alpha x_k$ for some positive $\alpha$. How does the number of inversions in one of them correspond to the number of inversions in the other? Consider splitting array $a$ into subarrays of length $k$. Let's say you have two arrays $[11, 22, 44, \ldots, 11 \cdot 2^m]$ and $[13, 26, 52, \ldots, 13 \cdot 2^m]$ of the same length. How many inversions are there in their concatenation $[11, 22, 44, \ldots, 11 \cdot 2^m, 13, 26, 52, \ldots, 13 \cdot 2^m]$? Consider the merging process of arrays $[11, 22, 44, \ldots, 11 \cdot 2^m]$ and $[13, 26, 52, \ldots, 13 \cdot 2^m]$ into a sorted array (as in merge sort). What if the first elements of the arrays you concatenate are not $11$ and $13$ but some odd positive integers $x$ and $y$? Merging processes for some pairs $(x, y)$ look quite similar. The number of inversions in the array $[x, 2x, 4x, \ldots, 2^m x, y, 2y, 4y, \ldots, 2^m y]$ depends only on $\lfloor \log_2(\frac{y}{x}) \rfloor$ and $m$. You don't need to think about the rounding of $\log_2(\frac{y}{x})$ because $x$ and $y$ are odd in this problem. Also, $m$ is the same for all merges. Consider the following $\mathcal{O}(n \log n)$ algorithm to find the number of inversions in a permutation: make a segment tree corresponding to this permutation and fill it with zeroes. For all $i$ from $1$ to $n$, find $j$ such that $p_j=i$, increase the $j$-th element of this segment tree by $1$, and add the sum of the elements to the right of the $j$-th element to the number of inversions. Improve this algorithm to count the number of inversions in array $a$ assuming $q=[0,1,\ldots,k-1]$. The problem can be solved in $\mathcal{O}(n \log n \min(\log n, k) + k \log k)$. The order of elements in $q$ matters only for the inversions inside the blocks of length $k$ introduced in the hints above.
Let's split the array $a$ into subarrays of length $k$. The relative order of the elements in each of them is the same (as in permutation $q$), so the number of inversions is the same, too. You can find the number of inversions in one of them as described in hint $7$. By multiplying this number by $n$, you count all the in-block inversions. All the remaining inversions are formed by pairs of elements from distinct blocks. You may assume that $q=[0, 1, \ldots, k-1]$ now for simplicity: it won't change the number of such inversions. Let's fix two elements $x$ and $y$ of $p$ and count the number of inversions $(i, j)$ such that $a_i = x \cdot 2^{\alpha}$ and $a_j = y \cdot 2^{\beta}$ for some $\alpha$ and $\beta$. It is equivalent to counting the number of inversions in the array $[x, 2x, 4x, \ldots, 2^mx, y, 2y, 4y, \ldots, 2^my]$. Consider merging two arrays $[x, 2x, 4x, \ldots, 2^mx]$ and $[y, 2y, 4y, \ldots, 2^my]$ with $x < y$ ($x$ and $y$ are odd) into one sorted array: if $\color{blue}{x} < \color{red}{y} < \color{blue}{2x}$, then the resulting array would look like $[\color{blue}{x}, \color{red}{y}, \color{blue}{2x}, \color{red}{2y}, \color{blue}{4x}, \color{red}{4y}, \ldots, \color{blue}{2^mx}, \color{red}{2^my}]$; if $\color{blue}{2x} < \color{red}{y} < \color{blue}{4x}$, then the resulting array would look like $[\color{blue}{x}, \color{blue}{2x}, \color{red}{y}, \color{blue}{4x}, \color{red}{2y}, \color{blue}{8x}, \color{red}{4y}, \ldots, \color{blue}{2^{m-1}x}, \color{red}{2^{m-2}y}, \color{blue}{2^mx}, \color{red}{2^{m-1}y}, \color{red}{2^my}]$; if $\color{blue}{4x} < \color{red}{y} < \color{blue}{8x}$, then the resulting array would look like $[\color{blue}{x}, \color{blue}{2x}, \color{blue}{4x}, \color{red}{y}, \color{blue}{8x}, \color{red}{2y}, \color{blue}{16x}, \color{red}{4y}, \ldots, \color{blue}{2^{m-1}x}, \color{red}{2^{m-3}y}, \color{blue}{2^mx}, \color{red}{2^{m-2}y}, \color{red}{2^{m-1}y}, \color{red}{2^my}]$; ... 
You can see several blue elements in the beginning, followed by alternating blue and red elements, which are followed by several red elements. The number of blue elements in the beginning is equal to the number of red elements in the end, and equal to the number of nonnegative integers $z$ such that $2^z x < y$. Furthermore, this count is also limited by $\log n + 1$ because $x$ and $y$ are both positive integers less than $2n$. If $x > y$ the situation is similar, but the order of colors is reversed. Going back to inversions, we have some array $[\color{blue}{x}, \color{blue}{2x}, \color{blue}{4x}, \ldots, \color{blue}{2^mx}, \color{red}{y}, \color{red}{2y}, \color{red}{4y}, \ldots, \color{red}{2^my}]$. Inversions are formed by a large blue element and a small red element. If $x < y < 2x$, then there are $0+1+2+\ldots+m=\frac{m(m+1)}{2}$ inversions; if $2x < y < 4x$, then there are $0+0+1+2+\ldots+(m-1)=\frac{m(m+1)}{2} - m$ inversions; if $4x < y < 8x$, then there are $0+0+0+1+2+\ldots+(m-2)=\frac{m(m+1)}{2} - m - (m-1)$ inversions; ... For $x > y$ the situation is similar, but we will start with $1+2+3+\ldots+m+(m+1)=\frac{(m+1)(m+2)}{2}$ inversions for $y < x < 2y$ and we will add $m, (m-1), \ldots$ terms instead of subtracting them. Well, now we can solve this problem in $\mathcal{O}(n^2 \log n)$: enumerate pairs $(x, y)$, find $\log_2(\frac{y}{x})$ and add some value to the answer. Now let's add the inversion counting algorithm to solve this problem. Again, let's solve the problem for $x < y$ first. Let's enumerate the value of $y$ from $1$ to $2n-1$ (over odd values). The $x$-s to the right of $y$ should not be counted yet; each of the $x$-s to the left of $y$ such that $x < y$ adds $\frac{m(m+1)}{2}$ to the answer; each of the $x$-s to the left of $y$ such that $2x < y$ adds $\frac{m(m+1)}{2} - m$ to the answer; each of the $x$-s to the left of $y$ such that $4x < y$ adds $\frac{m(m+1)}{2} - m - (m-1)$ to the answer; ... 
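The case analysis above can be sanity-checked by brute force. The sketch below (function names are ours) compares a direct quadratic count on $[x, 2x, \ldots, 2^m x, y, 2y, \ldots, 2^m y]$ against the closed form, including the guard for the case when the alternating segment is empty:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Brute-force inversion count of [x, 2x, ..., 2^m x, y, 2y, ..., 2^m y].
long long brute(long long x, long long y, int m) {
    vector<long long> a;
    for (int i = 0; i <= m; i++) a.push_back(x << i);
    for (int i = 0; i <= m; i++) a.push_back(y << i);
    long long inv = 0;
    for (size_t i = 0; i < a.size(); i++)
        for (size_t j = i + 1; j < a.size(); j++)
            if (a[i] > a[j]) inv++;
    return inv;
}

// Closed form for odd x < y: with z = floor(log2(y / x)),
// inversions = m(m+1)/2 - (m + (m-1) + ... + (m-z+1)),
// where terms are clamped at 0 once the alternating segment is empty.
long long formula(long long x, long long y, int m) {
    int z = 0;
    while (z < 60 && (x << (z + 1)) < y) z++;
    long long res = 1ll * m * (m + 1) / 2;
    for (int i = 0; i < z; i++) res -= max(m - i, 0);
    return res;
}
```

For instance, with $x = 3$, $y = 13$, $m = 3$ (so $4x < y < 8x$) both give $6 - 3 - 2 = 1$ inversion.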
We can maintain a segment tree to compute the sum of the values we should sum up. To update this segment tree, let's additionally maintain $\Theta(\log n)$ pointers that track the largest $x$ such that $2^z x < y$ for each $z$ from $0$ to $\lceil\log n\rceil$. The solution is similar for $x > y$ pairs. You should be careful when implementing this, because for small $k$ at some moment the alternating segment of the blue-red array becomes empty and you shouldn't subtract anything further. You are also given $s$ queries of the following two types: Swap $p_i$ and $p_j$; Swap $q_i$ and $q_j$. Perform these queries and maintain the answer. You are given three permutations $p$, $q$ and $r$ of lengths $l_1$, $l_2$, $l_3$ (of the first $l_1$, $l_2$ and $l_3$ positive integers correspondingly) and array $a$ is defined as $a[i \cdot l_2 \cdot l_3 + j \cdot l_3 + k] = p[i] \cdot 2^{q[j]} \cdot 2^{2^{r[k]}}$. Find the number of inversions in it.
[ "combinatorics", "data structures", "dp", "implementation", "math", "number theory" ]
2,300
const int MOD = 998'244'353;

#include <bits/stdc++.h>
using namespace std;

void add(int pos, long long val, vector<long long>& fenw) {
    while (pos < fenw.size()) {
        fenw[pos] += val;
        fenw[pos] = (fenw[pos] % MOD + MOD) % MOD;
        pos |= (pos + 1);
    }
}

long long get(int pos, vector<long long>& fenw) {
    long long res = 0;
    while (pos >= 0) {
        res += fenw[pos];
        res %= MOD;
        pos &= (pos + 1);
        pos--;
    }
    return res;
}

long long get(int l, int r, vector<long long>& fenw) {
    long long vr = get(r, fenw);
    long long vl = get(l - 1, fenw);
    return ((vr - vl) % MOD + MOD) % MOD;
}

void solve() {
    int n, k;
    cin >> n >> k;
    vector<int> p(n), q(k);
    for (int i = 0; i < n; i++) cin >> p[i];
    for (int i = 0; i < k; i++) cin >> q[i];
    long long ans = 0;
    {
        vector<long long> fenwE(k);
        for (int i = 0; i < k; i++) {
            ans += i - get(q[i], fenwE);
            add(q[i], 1, fenwE);
        }
        ans = ans * n % MOD;
    }
    vector<int> pos(2 * n);
    for (int i = 0; i < n; i++) pos[p[i]] = i;
    vector<long long> fenwT(n), fenw(n);
    vector<long long> shT(n);
    const int LG = 20;
    vector<long long> shiftContrib(LG);
    for (int i = 0; i < LG && i < k; i++) {
        shiftContrib[i] = 1ll * (k - i + 1) * (k - i) / 2;
        shiftContrib[i] %= MOD;
    }
    vector<int> pnt(min(LG, k), 1);
    for (int num = 1; num <= 2 * n - 1; num += 2) {
        for (int j = 0; j < LG && j < k; j++) {
            while ((1ll << j) * pnt[j] < num) {
                int p = pos[pnt[j]];
                shT[p]++;
                add(p, shiftContrib[shT[p]] - shiftContrib[shT[p] - 1], fenwT);
                pnt[j] += 2;
            }
        }
        int i = pos[num];
        add(i, shiftContrib[0], fenwT);
        add(i, 1, fenw);
        ans += get(0, i - 1, fenwT);
        ans += get(i + 1, n - 1, fenw) * (1ll * k * k % MOD) - get(i + 1, n - 1, fenwT);
        ans = (ans % MOD + MOD) % MOD;
    }
    cout << ans << "\n";
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    int t;
    cin >> t;
    while (t--) {
        solve();
    }
}
1917
E
Construct Matrix
You are given an \textbf{even} integer $n$ and an integer $k$. Your task is to construct a matrix of size $n \times n$ consisting of numbers $0$ and $1$ in such a way that the following conditions are true, or report that it is impossible: - the sum of all the numbers in the matrix is exactly $k$; - the bitwise $XOR$ of all the numbers in the row $i$ is the same for each $i$; - the bitwise $XOR$ of all the numbers in the column $j$ is the same for each $j$.
Does a solution exist when $k$ is odd? What can you understand when $k = 2$ or $k = n^2 - 2$? How can we easily fill the matrix with $1$s so that all problem conditions are satisfied when $k$ is divisible by $4$? How will you solve the problem when $k = 6$? Try to merge the ideas of Hint $3$ and Hint $4$ together. First, let's note that when $k$ is odd, no solution exists: since the XORs of all the rows are the same, the parity of the number of $1$s in each row is the same, and because $n$ is even, it follows that a solution exists only when $k$ is even. Second, let's note that for $k = 2$ or $k = n^2 - 2$, the solution exists only for $n = 2$. For all other cases, a solution always exists. When $k \equiv 0 \pmod{4}$, we can fill $\frac{k}{4}$ submatrices of size $2 \times 2$. When $k \equiv 2 \pmod{4}$, note that $k \geq 6$. Let's write $1$ in the following positions: $(1, 1)$, $(1, 2)$, $(2, 1)$, $(2, 3)$, $(3, 2)$, $(3, 3)$. After this, we should place the remaining $(k - 6)$ ones, and note that $(k - 6) \equiv 0 \pmod{4}$. There are $\frac{n^2 - 16}{4}$ obvious $2 \times 2$ submatrices which aren't filled yet, outside the top left $4 \times 4$ submatrix. If $k < n^2 - 6$, then we can fill as many of those $2 \times 2$ submatrices as necessary; otherwise, if $k = n^2 - 6$, we also fill the following $4$ positions with $1$s: $(1, 3)$, $(1, 4)$, $(4, 3)$, $(4, 4)$. Can you solve the problem for odd $n$? tourist solved it!
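A small self-check of the $k \equiv 0 \pmod{4}$ construction from the hints; the function name and the verification loop are ours, a sketch rather than the reference solution:

```cpp
#include <bits/stdc++.h>
using namespace std;

// For k divisible by 4: fill k/4 disjoint 2x2 submatrices with ones, then
// verify the three statement conditions: total sum is k, all row XORs are
// equal, and all column XORs are equal.
bool check(int n, int k) {
    vector<vector<int>> a(n, vector<int>(n, 0));
    int c = k / 4;
    for (int i = 0; i < n && c > 0; i += 2)
        for (int j = 0; j < n && c > 0; j += 2, c--)
            a[i][j] = a[i][j + 1] = a[i + 1][j] = a[i + 1][j + 1] = 1;
    if (c > 0) return false;  // k/4 blocks don't fit into the n x n grid
    int sum = 0;
    vector<int> rx(n, 0), cx(n, 0);
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            sum += a[i][j];
            rx[i] ^= a[i][j];
            cx[j] ^= a[i][j];
        }
    if (sum != k) return false;
    for (int i = 1; i < n; i++)
        if (rx[i] != rx[0] || cx[i] != cx[0]) return false;
    return true;
}
```

Every $2 \times 2$ block contributes $0$ to the XOR of the two rows and two columns it touches, which is why the conditions hold automatically.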
[ "constructive algorithms", "math" ]
2,500
/////////////////////////////// _LeMur_
#define _CRT_SECURE_NO_WARNINGS
#include <unordered_map>
#include <unordered_set>
#include <algorithm>
#include <iostream>
#include <cstring>
#include <cassert>
#include <complex>
#include <chrono>
#include <random>
#include <bitset>
#include <cstdio>
#include <vector>
#include <string>
#include <stack>
#include <tuple>
#include <queue>
#include <ctime>
#include <cmath>
#include <list>
#include <map>
#include <set>

using namespace std;

const int N = 300005;
const int inf = 1000 * 1000 * 1000;
const int mod = 998244353;

// mt19937 myrand(chrono::steady_clock::now().time_since_epoch().count());
mt19937 myrand(373);

int t;
int n, k;

void print(vector < vector<int> > &answ) {
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            cout << answ[i][j] << " ";
        }
        cout << endl;
    }
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    cin >> t;
    while (t--) {
        cin >> n >> k;
        if (k % 2) {
            cout << "No" << endl;
            continue;
        }
        if (k == 2 || k == n * n - 2) {
            if (n != 2) {
                cout << "No" << endl;
            } else {
                cout << "Yes" << endl;
                cout << "1 0" << endl;
                cout << "1 0" << endl;
            }
            continue;
        }
        cout << "Yes" << endl;
        vector < vector<int> > answ(n, vector<int>(n, 0));
        if (k % 4 == 0) {
            int c = k / 4;
            for (int i = 0; i < n; i += 2) {
                for (int j = 0; j < n; j += 2) {
                    if (c > 0) {
                        answ[i][j] = 1;
                        answ[i][j + 1] = 1;
                        answ[i + 1][j] = 1;
                        answ[i + 1][j + 1] = 1;
                        --c;
                    }
                }
            }
            print(answ);
            continue;
        }
        assert(k >= 6);
        answ[0][0] = 1;
        answ[0][1] = 1;
        answ[1][0] = 1;
        answ[1][2] = 1;
        answ[2][1] = 1;
        answ[2][2] = 1;
        int c = (k - 6) / 4;
        for (int i = 0; i < n; i += 2) {
            for (int j = 0; j < n; j += 2) {
                if (i == 0 && j == 0) continue;
                if (i == 0 && j == 2) continue;
                if (i == 2 && j == 2) continue;
                if (i == 2 && j == 0) continue;
                if (c > 0) {
                    answ[i][j] = 1;
                    answ[i][j + 1] = 1;
                    answ[i + 1][j] = 1;
                    answ[i + 1][j + 1] = 1;
                    --c;
                }
            }
        }
        assert(c <= 1);
        if (c == 1) {
            answ[0][2] = answ[0][3] = 1;
            answ[3][2] = answ[3][3] = 1;
        }
        print(answ);
    }
    return 0;
}
1917
F
Construct Tree
You are given an array of integers $l_1, l_2, \dots, l_n$ and an integer $d$. Is it possible to construct a tree satisfying the following three conditions? - The tree contains $n + 1$ nodes. - The length of the $i$-th edge is equal to $l_i$. - The (weighted) diameter of the tree is equal to $d$.
If a solution exists, then we can always construct a tree containing a diameter and edges incident to a vertex $v$ ($v$ lies on the diameter). Try to consider the maximum of the given lengths. What can we say when there exist two lengths with a sum greater than $d$? If a solution exists, then there should be a subset of lengths with the sum equal to $d$. Consider two cases: when there exists a subset of lengths containing the maximum length with the sum equal to $d$, and when there doesn't. Knapsack with bitset. Let's consider the lengths in increasing order: $l_1 \leq l_2 \leq \ldots \leq l_n$. We will discuss some cases depending on the maximum length $l_n$: If $l_n + l_{n - 1} > d$, then the solution doesn't exist, since an arbitrary tree will have a diameter greater than $d$. Suppose there exists a subset of the given lengths $l$ such that the sum of the lengths of that subset is equal to $d$ (for making a diameter) and $l_n$ is in that subset. In this case, the solution always exists, since we can construct a tree, for example, in the following way: let the size of the found subset be equal to $k$; then we connect the vertices from $1$ to $k + 1$, such that the vertices $i$ and $i + 1$ are connected by an edge for each $1 \leq i \leq k$ and $length(1, 2) = l_n$. We have some remaining lengths that we haven't used yet, so we can add an edge for each such length, incident to the vertex $2$. The added edges will not increase the diameter, since $l_n$ is greater than or equal to all the remaining edges and $l_n + l_{n - 1} \leq d$. Checking whether there exists a subset of lengths that contains $l_n$ and sums to $d$ can easily be done by the knapsack algorithm. Otherwise, we need to find a subset of the given lengths $l$ such that the sum of the lengths of that subset is equal to $d$ (for making a diameter; we also know that $l_n$ cannot be in that subset). 
Let's consider that diameter, consisting of the vertices $v_1$, $v_2$, $\ldots$, $v_k$, such that $v_i$ and $v_{i + 1}$ are connected by an edge for each $1 \leq i \leq k - 1$. Now, we need to attach an edge of length $l_n$ to some vertex $v'$ of the diameter, such that both $dist(v', v_1) + l_n < d$ and $dist(v', v_k) + l_n < d$. All the other unused lengths can also be attached to the vertex $v'$. To check this, we can run a knapsack with two states: $dist(v', v_1)$ and $dist(v', v_k)$. The knapsack can be done using a bitset, and the final complexity will be $O(\frac{n \cdot d^2}{64})$. You can also speed this up by a factor of two, since we know that the minimum of $dist(v', v_1)$ and $dist(v', v_k)$ is at most $\frac{d}{2}$.
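The "knapsack with bitset" mentioned above is the standard subset-sum reachability trick; below is a one-dimensional sketch (the bound $D$ and the function name are assumptions of ours; the editorial's actual check carries two distance states):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Bit s of the result is set iff some subset of `lens` sums to s.
// D is an assumed cap on the target sum for this sketch.
const int D = 2001;

bitset<D> subset_sums(const vector<int>& lens) {
    bitset<D> reach;
    reach[0] = 1;                          // empty subset
    for (int l : lens) reach |= reach << l;  // take or skip each length
    return reach;
}
```

With lengths $\{3, 5, 7\}$ the reachable sums are $0, 3, 5, 7, 8, 10, 12, 15$; each `|=` with a shift processes one item in $O(D/64)$ word operations, which is where the $\frac{1}{64}$ factor in the complexity comes from.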
[ "bitmasks", "constructive algorithms", "dp", "trees" ]
2,500
/////////////////////////////// _LeMur_
#define _CRT_SECURE_NO_WARNINGS
#include <unordered_map>
#include <unordered_set>
#include <algorithm>
#include <iostream>
#include <cstring>
#include <cassert>
#include <complex>
#include <chrono>
#include <random>
#include <bitset>
#include <cstdio>
#include <vector>
#include <string>
#include <stack>
#include <tuple>
#include <queue>
#include <ctime>
#include <cmath>
#include <list>
#include <map>
#include <set>

using namespace std;

const int N = 2005;
const int D = 2005;
const int inf = 1000 * 1000 * 1000;
const int mod = 998244353;

// mt19937 myrand(chrono::steady_clock::now().time_since_epoch().count());
mt19937 myrand(373);

int t;
int n, d;
pair <int, int> p[N];
bitset<D> dp[D / 2];

void add(int id) {
    int len = p[id].first;
    for (int i = d / 2; i >= 0; i--) {
        if (i + len <= d / 2) dp[i + len] |= dp[i];
        dp[i] = (dp[i] | (dp[i] << len));
    }
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    cin >> t;
    while (t--) {
        cin >> n >> d;
        int mx = 0;
        int sum = 0;
        for (int i = 1; i <= n; i++) {
            cin >> p[i].first;
            sum += p[i].first;
            p[i].second = i;
            mx = max(mx, p[i].first);
        }
        sort(p + 1, p + n + 1);
        if (sum == d) {
            cout << "Yes" << endl;
            continue;
        }
        for (int i = 0; i <= d / 2; i++) {
            for (int j = 0; j <= d; j++) {
                dp[i][j] = 0;
            }
        }
        dp[0][0] = 1;
        bool found = false;
        int it = 1;
        for (int x = 1; x <= d / 2; x++) {
            while (it <= n && p[it].first <= x) {
                add(it);
                ++it;
            }
            int big = 0;
            for (int i = it; i <= n; i++) {
                big += p[i].first;
            }
            if (big <= d - x && dp[x][d - x - big]) {
                found = true;
                break;
            }
        }
        if (!found) {
            cout << "No" << endl;
        } else {
            cout << "Yes" << endl;
        }
    }
    return 0;
}
1918
A
Brick Wall
A brick is a strip of size $1 \times k$, placed horizontally or vertically, where $k$ can be an arbitrary number that is at least $2$ ($k \ge 2$). A brick wall of size $n \times m$ is such a way to place several bricks inside a rectangle $n \times m$, that all bricks lie either horizontally or vertically in the cells, do not cross the border of the rectangle, and that each cell of the $n \times m$ rectangle belongs to exactly one brick. Here $n$ is the height of the rectangle $n \times m$ and $m$ is the width. \textbf{Note} that there can be bricks with different values of k in the same brick wall. The wall stability is the difference between the number of horizontal bricks and the number of vertical bricks. \textbf{Note} that if you used $0$ horizontal bricks and $2$ vertical ones, then the stability will be \textbf{$-2$, not $2$}. What is the maximal possible stability of a wall of size $n \times m$? It is guaranteed that under restrictions in the statement at least one $n \times m$ wall exists.
The stability of the wall is the number of horizontal bricks minus the number of vertical bricks. Since a horizontal brick has a length of at least $2$, no more than $\lfloor\frac{m}{2}\rfloor$ horizontal bricks can be placed in one row. Therefore, the answer does not exceed $n \cdot \lfloor\frac{m}{2}\rfloor$. On the other hand, if horizontal bricks of length $2$ are placed in a row, and when $m$ is odd, the last brick has a length of $3$, then in each row there will be exactly $\lfloor\frac{m}{2}\rfloor$ horizontal bricks, and there will be no vertical bricks in the wall at all. This achieves the maximum stability of $n \cdot \lfloor\frac{m}{2}\rfloor$. The solution is one formula, so it works in $O(1)$ time.
[ "constructive algorithms", "greedy", "implementation", "math" ]
800
#include <bits/stdc++.h>
using namespace std;

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    int t;
    cin >> t;
    while (t--) {
        int64_t n, m;
        cin >> n >> m;
        cout << n * (m / 2) << '\n';
    }
    return 0;
}
1918
B
Minimize Inversions
You are given two permutations $a$ and $b$ of length $n$. A permutation is an array of $n$ elements from $1$ to $n$ where all elements are distinct. For example, an array [$2,1,3$] is a permutation, but [$0,1$] and [$1,3,1$] aren't. You can (as many times as you want) choose two indices $i$ and $j$, then swap $a_i$ with $a_j$ and $b_i$ with $b_j$ simultaneously. You hate inversions, so you want to minimize the total number of inversions in both permutations. An inversion in a permutation $p$ is a pair of indices $(i, j)$ such that $i < j$ and $p_i > p_j$. For example, if $p=[3,1,4,2,5]$ then there are $3$ inversions in it (the pairs of indices are $(1,2)$, $(1,4)$ and $(3,4)$).
Notice that by performing operations of the form: swap $a_i$ with $a_j$ and $b_i$ with $b_j$ simultaneously, we can rearrange the array $a$ how we want, but the same $a_i$ will correspond to the same $b_i$ (because we are changing both $a_i$ and $b_i$ at the same time). Let's sort the array $a$ using these operations. Then the sum of the number of inversions in $a$ and $b$ will be the number of inversions in $b$, since $a$ is sorted. It is claimed that this is the minimum sum that can be achieved. Proof: Consider two pairs of elements $a_i$ with $a_j$ and $b_i$ with $b_j$ ($i < j$). In each of these pairs, there can be either $0$ or $1$ inversions, so among the two pairs, there can be $0$, $1$, or $2$ inversions. If there were $0$ inversions before the operation, then there will be $2$ after the operation; if there was $1$, then there will still be $1$; if there were $2$, then it will become $0$. If the permutation $a$ is sorted, then each pair of indices $i$ and $j$ gives a maximum of $1$ inversion, so any pair of indices will give no more inversions than if they were swapped. Since the number of inversions in each pair is the minimum possible, the total number of inversions is also the minimum possible. Time complexity: $O(n \log n)$ per test case.
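The claimed optimality can be verified exhaustively for small $n$; in the sketch below (names are ours) `brute_best` tries every simultaneous reordering and `sorted_answer` implements the editorial's strategy:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Quadratic inversion count, enough for tiny arrays.
long long inv(const vector<int>& p) {
    long long r = 0;
    for (size_t i = 0; i < p.size(); i++)
        for (size_t j = i + 1; j < p.size(); j++)
            if (p[i] > p[j]) r++;
    return r;
}

// Minimum of inv(a)+inv(b) over all simultaneous reorderings of (a, b).
long long brute_best(const vector<int>& a, const vector<int>& b) {
    int n = a.size();
    vector<int> idx(n);
    iota(idx.begin(), idx.end(), 0);
    long long best = LLONG_MAX;
    do {
        vector<int> x(n), y(n);
        for (int i = 0; i < n; i++) { x[i] = a[idx[i]]; y[i] = b[idx[i]]; }
        best = min(best, inv(x) + inv(y));
    } while (next_permutation(idx.begin(), idx.end()));
    return best;
}

// The editorial's answer: sort pairs by a, count inversions of b alone.
long long sorted_answer(const vector<int>& a, const vector<int>& b) {
    int n = a.size();
    vector<pair<int,int>> ab(n);
    for (int i = 0; i < n; i++) ab[i] = {a[i], b[i]};
    sort(ab.begin(), ab.end());
    vector<int> y(n);
    for (int i = 0; i < n; i++) y[i] = ab[i].second;
    return inv(y);
}
```

Running both over all $4! \times 4!$ pairs of permutations of length $4$ confirms they always agree.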
[ "constructive algorithms", "data structures", "greedy", "implementation", "sortings" ]
900
#include <bits/stdc++.h>
using namespace std;

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        pair<int,int> ab[n];
        for (int i = 0; i < n; ++i) {
            cin >> ab[i].first;
        }
        for (int i = 0; i < n; ++i) {
            cin >> ab[i].second;
        }
        sort(ab, ab + n);
        for (int i = 0; i < n; ++i) {
            cout << ab[i].first << ' ';
        }
        cout << "\n";
        for (int i = 0; i < n; ++i) {
            cout << ab[i].second << ' ';
        }
        cout << "\n";
    }
}
1918
C
XOR-distance
You are given integers $a$, $b$, $r$. Find the smallest value of $|({a \oplus x}) - ({b \oplus x})|$ among all $0 \leq x \leq r$. $\oplus$ is the operation of bitwise XOR, and $|y|$ is absolute value of $y$.
Let's consider the bitwise representation of the numbers $a$, $b$, $x$. Look at any $2$ bits at the same position in $a$ and $b$: if they are the same, then regardless of the bit of $x$ at this position, the position contributes nothing to $({a \oplus x}) - ({b \oplus x})$. Therefore, it is advantageous to set $0$ at all such positions in $x$ (since we want $x \leq r$, and the answer does not depend on the bit). If the bits in $a$ and $b$ at the same position are different, then at this position there will be a $1$ either in $a \oplus x$ or in $b \oplus x$, depending on what is at this position in $x$. Let $a < b$; if not, we swap them. Then at the highest position where the bits differ, there is a $0$ in $a$ and a $1$ in $b$. There are $2$ options: either to set a $1$ at this position in $x$ (and then there will be a $1$ in $a \oplus x$), or to set a $0$ in $x$ (and then there will be a $0$ in $a \oplus x$). Suppose we set $0$ in $x$; then $a \oplus x$ will definitely be less than $b \oplus x$ (because in the highest differing bit, $a \oplus x$ has $0$, and $b \oplus x$ has $1$). Therefore, it is advantageous to set $1$ in $a \oplus x$ at all following positions, as this will make their difference smaller. Therefore, we can go through the positions in descending order, and if the bits differ at this position, we will set a $1$ in $a \oplus x$ there if possible (if after this $x$ does not exceed $r$). The second case (when we set $1$ in $x$ at the position of the first differing bit) is analyzed similarly, but in fact it is not needed, because the answer will not be smaller, and $x$ will become larger. Time complexity: $O(\log 10^{18})$ per test case.
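The greedy described above mirrors the reference solution and can be cross-checked against an exhaustive search over $x \in [0, r]$ for small inputs (function names are ours):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Greedy from the editorial: keep the top differing bit, then flip every
// lower differing bit where a (the smaller number) has 0, while x fits in r.
ll smallest_diff(ll a, ll b, ll r) {
    if (a > b) swap(a, b);
    ll x = 0;
    bool top = true;  // the highest differing bit is left untouched
    for (int i = 59; i >= 0; i--) {
        bool ba = a >> i & 1, bb = b >> i & 1;
        if (ba == bb) continue;
        if (top) { top = false; continue; }
        if (!ba && x + (1ll << i) <= r) {
            x += 1ll << i;
            a ^= 1ll << i;
            b ^= 1ll << i;
        }
    }
    return b - a;
}

// Exhaustive check over all x in [0, r] (small r only).
ll brute(ll a, ll b, ll r) {
    ll best = LLONG_MAX;
    for (ll x = 0; x <= r; x++) best = min(best, llabs((a ^ x) - (b ^ x)));
    return best;
}
```

The loop over small $a$, $b$, $r$ in the test confirms the greedy matches the exhaustive minimum.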
[ "bitmasks", "greedy", "implementation", "math" ]
1,400
#include <bits/stdc++.h>
using namespace std;

const int maxb = 60;

bool get_bit(int64_t a, int i) {
    return a & (1ll << i);
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    int t;
    cin >> t;
    while (t--) {
        int64_t a, b, r;
        cin >> a >> b >> r;
        int64_t x = 0;
        bool first_bit = 1;
        if (a > b) swap(a, b);
        for (int i = maxb - 1; i >= 0; --i) {
            bool bit_a = get_bit(a, i);
            bool bit_b = get_bit(b, i);
            if (bit_a != bit_b) {
                if (first_bit) {
                    first_bit = 0;
                } else {
                    if (!bit_a && x + (1ll << i) <= r) {
                        x += (1ll << i);
                        a ^= (1ll << i);
                        b ^= (1ll << i);
                    }
                }
            }
        }
        cout << b - a << "\n";
    }
}
1918
D
Blocking Elements
You are given an array of numbers $a_1, a_2, \ldots, a_n$. Your task is to block some elements of the array in order to minimize its cost. Suppose you block the elements with indices $1 \leq b_1 < b_2 < \ldots < b_m \leq n$. Then the cost of the array is calculated as the maximum of: - the sum of the blocked elements, i.e., $a_{b_1} + a_{b_2} + \ldots + a_{b_m}$. - the maximum sum of the segments into which the array is divided when the blocked elements are removed. That is, the maximum sum of the following ($m + 1$) subarrays: [$1, b_1 - 1$], [$b_1 + 1, b_2 - 1$], [$\ldots$], [$b_{m-1} + 1, b_m - 1$], [$b_m + 1, n$] (the sum of numbers in a subarray of the form [$x, x - 1$] is considered to be $0$). For example, if $n = 6$, the original array is [$1, 4, 5, 3, 3, 2$], and you block the elements at positions $2$ and $5$, then the cost of the array will be the maximum of the sum of the blocked elements ($4 + 3 = 7$) and the sums of the subarrays ($1$, $5 + 3 = 8$, $2$), which is $\max(7, 1, 8, 2) = 8$. You need to output the minimum cost of the array after blocking.
Let's do a binary search. Suppose we know that the minimum possible cost is at least $l$ and not greater than $r$. Let's choose $m = (l+r)/2$. We need to learn how to check if the answer is less than or equal to $m$. We will calculate $dp_i$: the minimum sum of blocked elements in the prefix up to $i$ if position $i$ is blocked and, on each of the subsegments without blocked elements, the sum of elements is less than or equal to $m$. Then $dp_i = a_i + \min(dp_j)$ over all $j$ such that the sum on the subsegment from $j+1$ to $i-1$ is less than or equal to $m$. Such $j$ form a segment, since all $a_j$ are positive. We will maintain the boundaries of this segment. We will also maintain all $dp_j$ for $j$ inside this segment in a set. When moving from $i$ to $i+1$, we will move the left boundary of the segment until the sum on it becomes less than or equal to $m$, removing the corresponding $dp_j$ from the set, and also add $dp_i$ to the set. The minimum sum of blocked elements, under the condition that the sum on every subsegment without blocked elements is less than or equal to $m$, can be found as the minimum among all $dp_i$ such that the sum from $i+1$ to $n$ is less than or equal to $m$. If this answer is less than or equal to $m$, then the answer to the problem is less than or equal to $m$; otherwise, the answer is greater than $m$. Time complexity: $O(n \log n \log 10^9)$ per test case.
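The feasibility check for a fixed $m$ can be sketched as follows, mirroring the dp with a sliding window and a multiset minimum (the names and the plain multiset are our choices; the reference solution uses a set of (value, index) pairs):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Can we block elements so that the blocked sum and every free segment
// sum are both <= m? dp[j] = min blocked sum covering the suffix from j,
// assuming j itself is blocked; dp[n] is a virtual blocked position.
bool feasible(const vector<ll>& a, ll m) {
    int n = a.size();
    vector<ll> dp(n + 1);
    multiset<ll> window;   // dp values of candidate next blocked positions
    dp[n] = 0;
    window.insert(dp[n]);
    int right = n;         // candidates are positions (j, right]
    ll seg = 0;            // sum of a[j+1 .. right-1]
    for (int j = n - 1; j >= 0; j--) {
        while (seg > m) {  // drop candidates whose free gap would exceed m
            seg -= a[right - 1];
            window.erase(window.find(dp[right]));
            right--;
        }
        dp[j] = *window.begin() + a[j];
        window.insert(dp[j]);
        seg += a[j];
    }
    ll prefix = 0;         // free prefix before the first blocked position
    for (int j = 0; j < n; j++) {
        if (prefix <= m && dp[j] <= m) return true;
        prefix += a[j];
    }
    return false;
}
```

On the statement's example $[1, 4, 5, 3, 3, 2]$ the check fails for $m = 6$ and succeeds for $m = 7$ (block positions $2$ and $4$: blocked sum $7$, segment sums $1$, $5$, $5$), so the binary search returns $7$.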
[ "binary search", "data structures", "dp", "implementation", "two pointers" ]
1,900
#include <bits/stdc++.h>
using namespace std;

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        int64_t a[n + 1];
        for (int i = 0; i < n; ++i) {
            cin >> a[i];
        }
        int64_t l = 0, r = int64_t(1e9) * n;
        while (l < r) {
            int64_t m = (l + r) / 2;
            set<pair<int64_t,int>> pos;
            int64_t dp[n + 1];
            int p2 = n;
            dp[n] = 0;
            pos.insert({dp[n], n});
            int64_t sum = 0;
            for (int j = n - 1; j >= 0; --j) {
                while (sum > m) {
                    sum -= a[p2 - 1];
                    pos.erase({dp[p2], p2});
                    p2--;
                }
                dp[j] = pos.begin()->first + a[j];
                pos.insert({dp[j], j});
                sum += a[j];
            }
            sum = 0;
            int yes = 0;
            for (int j = 0; j < n; ++j) {
                if (sum <= m && dp[j] <= m) yes = 1;
                sum += a[j];
            }
            if (yes) r = m;
            else l = m + 1;
        }
        cout << l << "\n";
    }
}
1918
E
ace5 and Task Order
This is an interactive problem! In the new round, there were $n$ tasks with difficulties from $1$ to $n$. The coordinator, who decided to have the first round with tasks in unsorted order of difficulty, rearranged the tasks, resulting in a permutation of difficulties from $1$ to $n$. After that, the coordinator challenged ace5 to guess the permutation in the following way. Initially, the coordinator chooses a number $x$ from $1$ to $n$. ace5 can make queries of the form: $?\ i$. The answer will be: - $>$, if $a_i > x$, after which $x$ increases by $1$. - $<$, if $a_i < x$, after which $x$ decreases by $1$. - $=$, if $a_i = x$, after which $x$ remains unchanged. The task for ace5 is to guess the permutation in no more than $40n$ queries. Since ace5 is too busy writing the announcement, he has entrusted this task to you.
Randomized solution: We will use the quicksort algorithm. We will choose a random element from the array, let its index be $i$, and we will perform $?$ $i$ until we get the answer $=$ (i.e., $x = a_i$). Now we will ask about all the other elements, thereby finding out whether they are greater than or less than $a_i$ (don't forget to restore $x = a_i$, i.e., perform $?$ $i$ after each query about another element). After this, we will divide all the elements into two parts: those greater than $a_i$ and those less than $a_i$. We will recursively run the algorithm on each part. The parts will become smaller and smaller, and in the end, we will sort our permutation, allowing us to guess it. Non-randomized solution: We will find the element $1$ in the array in $3n$ queries. To do this, we will go through the array, asking about each element. If the answer is $<$, we will continue asking the same element until the answer becomes $=$, and if the answer is $=$ or $>$, we will move on to the next element. Then the last element on which the answer was $<$ is the element $1$. $x$ will increase by a maximum of $n$ in the process (a maximum of $1$ from each element), so it will decrease by a maximum of $2n$, i.e., there will be a maximum of $3n$ queries. Similarly, we will find the element $n$. Now we will run an algorithm similar to the randomized solution, but now we can set $x = n/2$ instead of taking $x$ as a random element. Both solutions comfortably fit within the limit of $40n$ queries.
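Since the problem is interactive, the scan that locates the element $1$ can be exercised locally against a simulated interactor (the `Judge` struct below is our test harness, not part of the protocol; the scan itself mirrors the reference solution's first loop):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Local stand-in for the interactor: a hidden permutation a (1-indexed)
// and the hidden counter x, updated exactly as in the statement.
struct Judge {
    vector<int> a;
    int x;
    long long queries = 0;
    char ask(int i) {
        queries++;
        if (a[i] > x) { x++; return '>'; }
        if (a[i] < x) { x--; return '<'; }
        return '=';
    }
};

// Scan from the editorial: on '<', keep asking the same index until '=';
// after a '>', pull x back down by re-asking the last '=' position.
int find_pos_of_one(Judge& j, int n) {
    int pos1 = -1;
    for (int i = 1; i <= n; i++) {
        char c = j.ask(i);
        if (c == '<') { i--; continue; }
        if (c == '=') pos1 = i;        // x settled at a new, smaller value
        else if (pos1 != -1) j.ask(pos1);
    }
    return pos1;
}
```

After the loop finishes, $x$ equals the minimum seen so far, so the last position where `=` was reached holds the element $1$.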
[ "constructive algorithms", "divide and conquer", "implementation", "interactive", "probabilities", "sortings" ]
2,200
#include <bits/stdc++.h>
using namespace std;

char query(int pos) {
    cout << "? " << pos << endl;
    char ans;
    cin >> ans;
    return ans;
}

void dnq(int l, int r, vector<int> pos, vector<int>& res, int pos1, int posn) {
    int m = (l + r) / 2;
    vector<int> lh;
    vector<int> rh;
    for (int i = 0; i < pos.size(); ++i) {
        char x = query(pos[i]);
        if (x == '>') {
            rh.push_back(pos[i]);
            query(pos1);
        } else if (x == '<') {
            lh.push_back(pos[i]);
            query(posn);
        } else {
            res[pos[i]] = m;
        }
    }
    if (lh.size() != 0) {
        int m2 = (l + m - 1) / 2;
        for (int j = 0; j < m - m2; ++j) query(pos1);
        dnq(l, m - 1, lh, res, pos1, posn);
        query(posn);
    }
    if (rh.size() != 0) {
        int m2 = (m + 1 + r) / 2;
        for (int j = 0; j < m2 - m; ++j) query(posn);
        dnq(m + 1, r, rh, res, pos1, posn);
    }
    return;
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        int pos1 = -1;
        for (int i = 1; i <= n; ++i) {
            char ans = query(i);
            if (ans == '<') {
                i--;
            } else if (ans == '=') {
                pos1 = i;
            } else {
                if (pos1 != -1) {
                    query(pos1);
                }
            }
        }
        int posn = -1;
        for (int i = 1; i <= n; ++i) {
            char ans = query(i);
            if (ans == '>') {
                i--;
            } else if (ans == '=') {
                posn = i;
            } else {
                if (posn != -1) {
                    query(posn);
                }
            }
        }
        vector<int> res(n + 1);
        vector<int> pos(n);
        for (int j = 0; j < n; ++j) pos[j] = j + 1;
        int m = (1 + n) / 2;
        for (int k = 0; k < n - m; ++k) {
            query(pos1);
        }
        dnq(1, n, pos, res, pos1, posn);
        cout << "! ";
        for (int j = 1; j <= n; ++j) cout << res[j] << ' ';
        cout << endl;
    }
}
1918
F
Caterpillar on a Tree
The caterpillar decided to visit every node of the tree. Initially, it is sitting at the root. The tree is represented as a rooted tree with the root at the node $1$. Each crawl to a neighboring node takes $1$ minute for the caterpillar. And there is a trampoline under the tree. If the caterpillar detaches from the tree and falls onto the trampoline, it will end up at the root of the tree in $0$ seconds. But the trampoline is old and can withstand no more than $k$ caterpillar's falls. What is the minimum time the caterpillar can take to visit all the nodes of the tree? More formally, we need to find the minimum time required to visit all the nodes of the tree, if the caterpillar starts at the root (node $1$) and moves using two methods. - Crawl along an edge to one of the neighboring nodes: takes $1$ minute. - Teleport to the root: takes no time, no new nodes become visited. The second method (teleportation) can be used at most $k$ times. The caterpillar can finish the journey at any node.
First, it can be noticed that it is enough to visit all the leaves of the tree. After all, if the caterpillar skipped some internal node, it would not be able to reach the subtree of this node and visit the leaves in it. Therefore, it makes no sense to teleport to the root from a non-leaf (otherwise it would be more profitable to move on to a leaf first, and all the leaves would still be visited). The optimal path of the caterpillar on the tree can be divided into movements from the root to a leaf, movements from one leaf to another, and teleportations from a leaf to the root. Let the order of visiting the leaves in the optimal path be fixed. Then it makes no sense to teleport from the last leaf, as all the leaves have already been visited. In addition, it is never profitable to deviate from the shortest path on the sections between the root and a leaf, or between two consecutive leaves, when no other leaves are visited in between. If the leaf $v$ is visited after the leaf $u$, then teleporting from $u$ saves the time of the transition from $u$ to $v$ minus the time of moving from the root to $v$. It is possible to choose the $k$ leaves, excluding the last visited leaf, which give the maximum savings (if there are fewer leaves in the tree, or the savings become negative, then take fewer than $k$ leaves), and teleport from them. Thus, if the order of visiting the leaves is known, the optimal time can be found. It turns out that if you take the tree and sort the children of each node in ascending (not descending) order of the depth of the subtree, and then write down all the leaves from left to right (in depth-first order), then this will be one of the optimal leaf orders. This order of sorting the tree and its leaves will be called the order of sorting by the subtree depth. The tree can be sorted in this way in one depth-first traversal. For each leaf, it is possible to calculate how much time teleporting from it saves. 
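The leaf order from the theorem can be computed in one DFS; in the sketch below (names are ours, and the tree is given as a rooted child-adjacency list) children are sorted by subtree depth and the leaves are emitted left to right:

```cpp
#include <bits/stdc++.h>
using namespace std;

// d[v] = depth of the subtree of v (0 for a leaf).
int subtree_depth(int v, const vector<vector<int>>& ch, vector<int>& d) {
    d[v] = 0;
    for (int u : ch[v]) d[v] = max(d[v], subtree_depth(u, ch, d) + 1);
    return d[v];
}

// Sort children of every node by subtree depth (ascending, stably) and
// collect the leaves in DFS order.
vector<int> leaf_order(int root, vector<vector<int>> ch) {
    int n = ch.size();
    vector<int> d(n);
    subtree_depth(root, ch, d);
    vector<int> leaves;
    function<void(int)> dfs = [&](int v) {
        if (ch[v].empty()) { leaves.push_back(v); return; }
        stable_sort(ch[v].begin(), ch[v].end(),
                    [&](int a, int b) { return d[a] < d[b]; });
        for (int u : ch[v]) dfs(u);
    };
    dfs(root);
    return leaves;
}
```

For the tree with root $0$, children $\{1, 2\}$, where $2$ has children $\{3, 4\}$ and $4$ has the child $5$, the shallow branch through $1$ comes first and the order is $1, 3, 5$.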
To do this, it is enough to walk from the leaf towards the root until the first node at which the previous node is not the rightmost child. The savings are then the length of the path traveled minus the remaining distance to the root. Such paths for different leaves do not share edges, and the remaining distance to the root can be precalculated for all nodes at once in one depth-first search. Therefore, the algorithm works in $O(n \log n)$ time, where the logarithm comes from sorting the children of each node by subtree depth. Theorem. There exists a shortest route for the caterpillar in which the leaves are visited in the order obtained by sorting the children of each node by subtree depth. Let $u_1, \ldots, u_m$ be all the leaves of the tree in the order that results when the children of each node are sorted in ascending order of subtree depth. Consider a shortest route of the caterpillar visiting all the nodes of the tree, and let $v_1, \ldots, v_m$ be the leaves of the tree in the order they are visited by this route. Consider the longest prefix of leaves that coincides with the order of sorting by subtree depth: $v_1 = u_1, \ldots, v_i = u_i$. If $i = m$, then the theorem is proven. Now, let's assume that the next leaf is the wrong one: $v_{i+1} \neq u_{i+1}$. The goal is to change the route in such a way that the traversal time does not increase, the first $i$ visited leaves do not change and remain in the same order, and the leaf $u_{i+1}$ is encountered earlier in the route than before the change. Then, by repeating the change, the leaf $u_{i+1}$ can be moved to its $(i+1)$-st place while maintaining the order of the first $i$ visited leaves. In the same way, all the leaves $u_{i+1}, \ldots, u_m$ can be put in their places one by one, yielding a shortest route of the caterpillar with the desired order of visiting leaves. Lemma. Let the node $w$ be an ancestor of the node $u$.
Let the caterpillar, in a shortest route on the tree, crawl from $u$ to $w$. Then the caterpillar enters the subtree of the node $u$ only once, traverses this subtree depth-first, and returns to $w$. Proof of the lemma. If the caterpillar crawls from $w$ to $u$ only once, then it cannot leave the subtree of $u$ until all the nodes in this subtree are visited, and it cannot jump on the trampoline, as it still has to walk from $u$ to $w$. All of this cannot be done in fewer steps than twice the number of edges in the subtree of $u$, because the caterpillar needs to reach each node via the edge from its ancestor and to return over that same edge. And any route without teleportations that uses each edge exactly twice is one of the depth-first traversals. If the caterpillar crawls from $w$ to $u$ two or more times, then the route can be shortened, as shown in the figure. The lemma is proven. At the moment, the leaf $v_{i+1}$ occupies, in the order of visiting the leaves in the optimal route, the place of the leaf $u_{i+1}$. The goal is to move the leaf $u_{i+1}$ closer to the beginning of the route without changing the first $i$ visited leaves $u_1, \ldots, u_i$. Let $w$ be the least common ancestor of the leaves $u_{i+1}$ and $v_{i+1}$, let $u$ be the child of $w$ whose subtree contains the node $u_{i+1}$, and let $v$ be the child of $w$ whose subtree contains the node $v_{i+1}$, as shown in the figure. To move the leaf $u_{i+1}$ closer to the beginning of the route, consider three cases. Case 1: in the current version of the optimal route, the caterpillar crawls from $u$ to $w$. In this case, according to the lemma, the caterpillar enters the subtree of the vertex $u$ only once, and traverses it depth-first before returning to $w$.
There are no leaves among $u_1, \ldots, u_i$ in the subtree of $u$: all the leaves of the subtree of $u$ are visited consecutively during the depth-first traversal, while the leaf $v_{i+1}$, which is not from the subtree of $u$, is visited after $u_1, \ldots, u_i$ but before $u_{i+1}$. The route can then be changed as follows: the cycle $w \to \text{(traversal of the subtree of } u\text{)} \to w$ is cut out from where it is located and inserted at the moment of the first visit to $w$ after the visit to the leaf $u_i$. The leaf $u_i$ is not in the subtree of the node $v$, because the subtree of $u$ has a smaller depth ($u_{i+1}$ is earlier in the desired leaf order than $v_{i+1}$) and there are still unvisited leaves in it. Therefore, before entering $v_{i+1}$, the caterpillar has to come from the leaf $u_i$ to the node $w$, and at that moment the depth-first traversal of the subtree of $u$, including the visit to the leaf $u_{i+1}$, takes place. This traversal has been moved to an earlier time, before the visit to $v_{i+1}$, which means that in the order of visiting the leaves in the caterpillar's route, the leaf $u_{i+1}$ has moved closer to the beginning. Case 2: the caterpillar in the current version of the optimal route does not crawl from $u$ to $w$, but does crawl from $v$ to $w$. Then, by the lemma, the entire subtree of $v$ is traversed in a single depth-first traversal. Since the leaf $u_{i+1}$ is earlier in the desired order than $v_{i+1}$, the subtree of $v$ is deeper than the subtree of $u$, and in the desired order all the leaves of $v$ come after $u_{i+1}$. Moreover, since the caterpillar does not crawl from $u$ to $w$, it cannot leave the subtree of $u$ except by teleporting to the root. Consider the last trampoline jump from the subtree of the node $u$ (or the stop at the end of the route, if it lies in this subtree). At this moment, all the leaves of the subtree of $u$ have been visited.
The route can be changed as follows: cut out the depth-first traversal of the subtree of $v$, cancel the last jump or stop in the subtree of $u$, descend from there to $w$, perform a depth-first traversal of the subtree of $v$ in such a way that the deepest node of this subtree is visited last, and teleport to the root (or stop at the end of the route). This will not be longer: a segment from a leaf in the subtree of $u$ down to $w$ has been added, while the segment from the deepest leaf in the subtree of $v$ up to $w$ has disappeared; here it is important that the subtree of $v$ is deeper than the subtree of $u$. And the node $u_{i+1}$ has moved closer to the beginning of the list of visited leaves, because all the leaves of the subtree of $v$, including $v_{i+1}$, have moved after all the leaves of the subtree of $u$. Case 3: the caterpillar in the current version of the optimal route crawls neither from $u$ to $w$ nor from $v$ to $w$. Then all the sections of the route that enter the subtrees of the nodes $u$ and $v$ do not leave these subtrees and end with a teleportation to the root or with the end of the route. Among them, there is a section that starts with a step from $w$ to $u$ in which the leaf $u_{i+1}$ is visited, and a section that starts with a step from $w$ to $v$ in which the leaf $v_{i+1}$ is visited. In the current route, the section with $v_{i+1}$ comes earlier. The route can be changed very simply: swap the section visiting the leaf $u_{i+1}$ and the section visiting the leaf $v_{i+1}$. If both sections end with teleportations, the result is a valid caterpillar route. If the caterpillar stopped at the end of the section visiting $u_{i+1}$ and teleported to the root from the section with $v_{i+1}$, then now it teleports after completing the section with $u_{i+1}$ and stops at the end of the section with $v_{i+1}$.
The positions of the leaves $u_1, \ldots, u_i$ in the route will not change: they are not in the subtree of $v$, and they are not in the section with $u_{i+1}$, visited after $v_{i+1}$. And the leaf $u_{i+1}$ will get a place closer to the beginning in the order of visiting the leaves, because the section with its visit now occurs earlier in the route. In all cases, it was possible to move the leaf $u_{i+1}$ in the optimal route of the caterpillar closer to the beginning, while maintaining the order of the first $i$ visited leaves, which means that there exists an optimal route of the caterpillar in which the leaves are visited in the order of sorting the subtrees of the tree by depth. The theorem is proven.
[ "dfs and similar", "graphs", "greedy", "implementation", "sortings", "trees" ]
2,500
#include <bits/stdc++.h>
using namespace std;

const int maxn = 200005;

int d[maxn]; // depth of the subtree rooted at the node
int h[maxn]; // distance from the root to the node
int p[maxn]; // parent of the node
vector<int> leaf_jump_gains;
vector<vector<int> > children;

bool comp_by_depth(int u, int v) {
    return d[u] < d[v];
}

void sort_subtrees_by_depth(int v) {
    d[v] = 0;
    if (v == 1)
        h[v] = 0;
    else
        h[v] = h[p[v]] + 1;
    for (int i = 0; i < int(children[v].size()); ++i) {
        int u = children[v][i];
        sort_subtrees_by_depth(u);
        d[v] = max(d[v], d[u] + 1);
    }
    sort(children[v].begin(), children[v].end(), comp_by_depth);
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    int n, k;
    cin >> n >> k;
    children.resize(n + 1);
    for (int i = 2; i <= n; ++i) {
        cin >> p[i];
        children[p[i]].push_back(i);
    }
    sort_subtrees_by_depth(1);
    for (int i = 1; i <= n; ++i) {
        if (children[i].size() == 0) {
            int jump_gain = 0;
            int v = i;
            while (v != 1) {
                int s = children[p[v]].size();
                if (children[p[v]][s - 1] == v) {
                    v = p[v];
                    ++jump_gain;
                } else {
                    jump_gain = jump_gain + 1 - h[p[v]];
                    break;
                }
            }
            leaf_jump_gains.push_back(jump_gain);
        }
    }
    sort(leaf_jump_gains.begin(), leaf_jump_gains.end());
    int s = leaf_jump_gains.size();
    ++k; // non-returning from the last leaf is like one more jump
    int res = 2 * (n - 1);
    for (int i = s - 1; i >= max(0, s - k); --i)
        res -= max(leaf_jump_gains[i], 0);
    cout << res << '\n';
    return 0;
}
1918
G
Permutation of Given
You were given only one number, $n$. It didn't seem interesting to you, so you wondered if it's possible to come up with an array of length $n$ consisting of non-zero integers, such that if each element of the array is replaced by the sum of its neighbors (the elements on the ends are replaced by their only neighbors), you obtain a permutation of the numbers in the original array.
At first, you can find answers for small values of $n$ by hand. For $n = 2, 4, 6$, the answer is "YES", with arrays $[-1, 1]$, $[-1, -2, 2, 1]$, $[-1, -2, 2, 1, -1, 1]$. It is not difficult to prove by case analysis that the answer is "NO" for $n = 3$ and $n = 5$. One might conjecture that the answer is "NO" for all odd $n$, but an attempt to prove non-existence for $n = 7$ instead turns up an array that works: $[-5, 8, 1, -3, -4, -2, 5]$. In fact, an array exists for all $n$ except $3$ and $5$. The task would be easy if the number in each cell could simply stay unchanged, but the array edges and the prohibition of zeros make this impossible. However, it can be noticed that an infinite array in which the sequence of six numbers $1, -3, -4, -1, 3, 4$ is repeated reproduces in each cell the same number that was already there. In general, any sequence of six numbers of the form $a, -b, -a-b, -a, b, a+b$ has this property. Thus, the internal cells of the array can be made to map to cells with the same numbers; the question is what to do at the edges. In the author's solution, suitable edges (possibly consisting of several numbers) were selected by hand for each remainder of $n$ modulo $6$. The solution for each value of $n$ is then assembled as follows: take the edges for $n \bmod 6$ and insert into the middle as many self-reproducing six-number blocks as needed. Solution by green_gold_dog: The idea is that any correct array can be extended by $2$ elements and remain correct. Let the array end with the numbers $a$ and $b$. Then it can be extended by two elements as follows: $\begin{equation*} [\ldots,\;a,\;b] \to [\ldots, a,\; b,\; -b,\; a-b] \end{equation*}$ To start, two arrays suffice: $[1, 2]$ for $n = 2$ and $[1, 2, -3, 2, 4, -5, -2]$ for $n = 7$. These arrays can then be repeatedly extended by $2$ to obtain the answer for any even or odd $n$.
Both solutions print an array using simple rules and work in $O(n)$ time.
[ "constructive algorithms", "math" ]
2,700
//#pragma GCC optimize("Ofast")
//#pragma GCC target("avx,avx2,sse,sse2,sse3,ssse3,sse4,abm,popcnt,mmx")
#include <bits/stdc++.h>
using namespace std;

typedef long long ll;
typedef double db;
typedef long double ldb;
typedef complex<double> cd;

constexpr ll INF64 = 9'000'000'000'000'000'000, INF32 = 2'000'000'000, MOD = 1'000'000'007;
constexpr db PI = acos(-1);
constexpr bool IS_FILE = false, IS_TEST_CASES = false;

random_device rd;
mt19937 rnd32(rd());
mt19937_64 rnd64(rd());

template<typename T>
bool assign_max(T& a, T b) {
    if (b > a) {
        a = b;
        return true;
    }
    return false;
}

template<typename T>
bool assign_min(T& a, T b) {
    if (b < a) {
        a = b;
        return true;
    }
    return false;
}

template<typename T>
T square(T a) {
    return a * a;
}

template<>
struct std::hash<pair<ll, ll>> {
    ll operator()(pair<ll, ll> p) const {
        return ((__int128)p.first * MOD + p.second) % INF64;
    }
};

void solve() {
    ll n;
    cin >> n;
    if (n == 5 || n == 3) {
        cout << "NO\n";
        return;
    }
    cout << "YES\n";
    vector<ll> arr;
    if (n % 2 == 0) {
        arr.push_back(1);
        arr.push_back(2);
    } else {
        arr.push_back(1);
        arr.push_back(2);
        arr.push_back(-3);
        arr.push_back(2);
        arr.push_back(4);
        arr.push_back(-5);
        arr.push_back(-2);
    }
    while (arr.size() != n) {
        ll x = arr[arr.size() - 2];
        ll y = x - arr.back();
        ll z = y - x;
        arr.push_back(z);
        arr.push_back(y);
    }
    for (auto i : arr) {
        cout << i << ' ';
    }
    cout << '\n';
}

int main() {
    if (IS_FILE) {
        freopen("", "r", stdin);
        freopen("", "w", stdout);
    }
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    cout.tie(0);
    ll t = 1;
    if (IS_TEST_CASES) {
        cin >> t;
    }
    for (ll i = 0; i < t; i++) {
        solve();
    }
}
1919
A
Wallet Exchange
Alice and Bob are bored, so they decide to play a game with their wallets. Alice has $a$ coins in her wallet, while Bob has $b$ coins in his wallet. Both players take turns playing, with Alice making the first move. In each turn, the player will perform the following steps \textbf{in order}: - Choose to exchange wallets with their opponent, or to keep their current wallets. - Remove $1$ coin from the player's current wallet. The current wallet cannot have $0$ coins before performing this step. The player who cannot make a valid move on their turn loses. If both Alice and Bob play optimally, determine who will win the game.
When does the game end? Depending on whether the player chooses to exchange wallets with their opponent in step $1$, $1$ coin will be removed from either the opponent's wallet or the player's own wallet. This means that if either of the players still has remaining coins, the game will not end, as at least one of the two choices is still valid. The only way the game ends is when both players have $0$ coins. Since each move decreases the total number of coins by exactly $1$, the game lasts exactly $a + b$ moves, and the player who is due to move when both wallets are empty loses. Alice makes the odd-numbered moves, so she makes the last, $(a+b)$-th, move exactly when $a + b$ is odd; hence Alice wins the game if and only if $a + b$ is odd.
[ "games", "math" ]
800
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int a, b;
        cin >> a >> b;
        if ((a + b) % 2 == 0) {
            cout << "Bob\n";
        } else {
            cout << "Alice\n";
        }
    }
}
1919
B
Plus-Minus Split
You are given a string $s$ of length $n$ consisting of characters "+" and "-". $s$ represents an array $a$ of length $n$ defined by $a_i=1$ if $s_i=$ "+" and $a_i=-1$ if $s_i=$ "-". You will do the following process to calculate your penalty: - Split $a$ into non-empty arrays $b_1,b_2,\ldots,b_k$ such that $b_1+b_2+\ldots+b_k=a^\dagger$, where $+$ denotes array concatenation. - The penalty of a single array is the absolute value of its sum multiplied by its length. In other words, for some array $c$ of length $m$, its penalty is calculated as $p(c)=|c_1+c_2+\ldots+c_m| \cdot m$. - The total penalty that you will receive is $p(b_1)+p(b_2)+\ldots+p(b_k)$. If you perform the above process optimally, find the minimum possible penalty you will receive. $^\dagger$ Some valid ways to split $a=[3,1,4,1,5]$ into $(b_1,b_2,\ldots,b_k)$ are $([3],[1],[4],[1],[5])$, $([3,1],[4,1,5])$ and $([3,1,4,1,5])$ while some invalid ways to split $a$ are $([3,1],[1,5])$, $([3],[\,],[1,4],[1,5])$ and $([3,4],[5,1,1])$.
Try to find a lower bound. The answer is $|a_1 + a_2 + \ldots + a_n|$. Intuitively, whenever we have a subarray with a sum equal to $0$, it is helpful for us, as its penalty becomes $0$. Hence, we can split $a$ into subarrays with sum $0$ and group the remaining elements into individual subarrays of size $1$. A formal proof is given below. Let us define an alternative penalty function $p2(l, r) = |a_l + a_{l+1} + \ldots + a_r|$. We can see that $p2(l, r) \le p(l, r)$ for all $1\le l\le r\le n$. Since the alternative penalty function does not have the $(r - l + 1)$ factor, there is no reason for us to partition $a$ into two or more subarrays, as $|x| + |y| \ge |x + y|$ for all integers $x$ and $y$, so the answer for the alternative penalty function is $|a_1 + a_2 + \ldots + a_n|$. Since $p2(l, r)\le p(l, r)$, the answer to our original problem cannot be smaller than $|a_1 + a_2 + \ldots + a_n|$. In fact, this lower bound is always achievable. Let us prove this by construction. Note that if we flip every "$\mathtt{+}$" to "$\mathtt{-}$" and every "$\mathtt{-}$" to "$\mathtt{+}$", our answer remains the same, since the penalty function involves absolute values. Hence, we can assume that the sum of the elements of $a$ is non-negative. If the sum of the elements of $a$ is $0$, we can split $a$ into a single array equal to itself, $b_1 = a$, and obtain a penalty of $0$. Otherwise, we find the largest index $i$ such that $a_1 + a_2 + \ldots + a_i = 0$. Then we let the first subarray be $b_1 = [a_1, a_2, \ldots, a_i]$ and the second subarray be $b_2 = [a_{i + 1}]$, so we have $p(b_1) = 0$ and $p(b_2) = 1$. Since $i$ is the largest such index, $a_{i + 1}$ must equal $1$: if $a_{i+1}$ were $-1$, the prefix sum would become negative, and on its way to the positive total sum it would have to return to $0$ at some index larger than $i$, contradicting the choice of $i$.
This means that for the remaining elements of the array $a_{i+2\ldots n}$, the sum of its elements decreases by $1$, so we can continue to use the same procedure to split the remaining elements which decrease the sum by $1$ and increase the penalty by $1$ each time until the sum of elements becomes $0$. Hence, the total penalty will be equal to the sum of elements of $a$.
[ "greedy" ]
800
#include <bits/stdc++.h>
using namespace std;

int t;
int n;
string s;

int main() {
    cin >> t;
    while (t--) {
        cin >> n;
        cin >> s;
        int sm = 0;
        for (int i = 0; i < n; i++) {
            sm += s[i] == '+' ? 1 : -1;
        }
        cout << abs(sm) << '\n';
    }
}
1919
C
Grouping Increases
You are given an array $a$ of size $n$. You will do the following process to calculate your penalty: - Split array $a$ into two (possibly empty) subsequences$^\dagger$ $s$ and $t$ such that every element of $a$ is either in $s$ or $t^\ddagger$. - For an array $b$ of size $m$, define the penalty $p(b)$ of an array $b$ as the number of indices $i$ between $1$ and $m - 1$ where $b_i < b_{i + 1}$. - The total penalty you will receive is $p(s) + p(t)$. If you perform the above process optimally, find the minimum possible penalty you will receive. $^\dagger$ A sequence $x$ is a subsequence of a sequence $y$ if $x$ can be obtained from $y$ by the deletion of several (possibly, zero or all) elements. $^\ddagger$ Some valid ways to split array $a=[3,1,4,1,5]$ into $(s,t)$ are $([3,4,1,5],[1])$, $([1,1],[3,4,5])$ and $([\,],[3,1,4,1,5])$ while some invalid ways to split $a$ are $([3,4,5],[1])$, $([3,1,4,1],[1,5])$ and $([1,3,4],[5,1])$.
Consider a greedy approach. Consider the following approach. We start with empty arrays $b$ and $c$, then insert elements of the array $a$ one by one to the back of $b$ or $c$. Our penalty function only depends on adjacent elements, so at any point in time, we only care about the value of the last element of arrays $b$ and $c$. Suppose we already inserted $a_1, a_2, \ldots, a_{i - 1}$ into arrays $b$ and $c$ and we now want to insert $a_i$. Let $x$ and $y$ be the last element of arrays $b$ and $c$ respectively (if they are empty, use $\infty$). Note that swapping arrays $b$ and $c$ does not matter, so without loss of generality, assume that $x\le y$. We will use the following greedy approach. If $a_i\le x$, insert $a_i$ to the back of the array with a smaller last element. If $y < a_i$, insert $a_i$ to the back of the array with a smaller last element. If $x < a_i\le y$, insert $a_i$ to the back of the array with a bigger last element. The proof of why the greedy approach is optimal is given below: $a_i\le x$. In this case, $a_i$ is not greater than the last element of both arrays, so inserting $a_i$ to the back of either array will not add additional penalties. However, it is better to insert $a_i$ into the array with a smaller last element so that in the future, we can insert a wider range of values into the new array without additional penalty. $y < a_i$. In this case, $a_i$ is greater than the last element of both arrays, so inserting $a_i$ to the back of either array will contribute to $1$ additional penalty. However, it is better to insert $a_i$ into the array with a smaller last element so that in the future, we can insert a wider range of values into the new array without additional penalty. $x < a_i\le y$. In this case, if we insert $a_i$ to the back of the array with the larger last element, there will not be any additional penalty. 
However, if we insert $a_i$ to the back of the array with the smaller last element, there will be an additional penalty of $1$. The former option is always better than the latter. This is because if we consider making the same choices for the remaining elements $a_{i+1}$ to $a_n$ in both scenarios, there will be at most one time where the former scenario will add one penalty more than the latter scenario as the former scenario has a smaller last element after inserting $a_i$. After that happens, the back of the arrays in both scenarios will become the same and hence, the former case will never be less optimal. Following the greedy approach for all 3 cases will result in a correct solution that runs in $O(n)$ time. Consider a dynamic programming approach. Let $dp_{i, v}$ represent the minimum penalty when we are considering splitting $a_{1\ldots i}$ into two subarrays where the last element of one subarray is $a_i$ while the last element of the second subarray is $v$. Speed up the transition by storing the state in a segment tree. Let us consider a dynamic programming solution. Let $dp_{i, v}$ represent the minimum penalty when we are considering splitting $a_{1\ldots i}$ into two subarrays where the last element of one subarray is $a_i$ while the last element of the second subarray is $v$. Then, our transition will be $dp_{i, v} = dp_{i - 1, v} + [a_{i - 1} < a_i]$ for all $1\le v\le n, v\neq a_{i - 1}$ and $dp_{i, a_{i - 1}} = \min(dp_{i - 1, a_{i - 1}} + [a_{i - 1} < a_i], \min_{1\le x\le n}(dp_{i - 1, x} + [x < a_i]))$. To speed this up, we use a segment tree to store the value of $dp_{i - 1, p}$ at position $p$. To transition to $dp_i$, notice that the first transition is just a range increment on the entire range $[1, n]$ of the segment tree if $a_{i - 1} < a_i$. For the second transition, we can do two range minimum queries on ranges $[1, a_i - 1]$ and $[a_i, n]$. The final time complexity is $O(n\log n)$. 
Solve the problem if you have to split the array into $k$ subsequences, where $k$ is given in the input ($k = 2$ for the original problem). There is an array $A$ of size $N$ and an array $T$ of size $K$. Initially, $T_i = \infty$ for all $1 \le i \le K$. For each time $t$ from $1$ to $N$, the following will happen: Select an index $1 \le i \le K$. If $A_t > T_i$, we increase the cost by $1$. Then, we set $T_i := A_t$. Find the minimum possible cost after time $N$ if we select the indices optimally. The order of $T$ does not matter. Hence, for convenience, we will maintain $T$ in non-decreasing order; note that component-wise larger values of $T$ can only help, since a cost is paid exactly when $A_t$ exceeds the chosen $T_i$. At each time $t$, we will use the following algorithm: If $A_t > T_K$, do the operation on index $1$. Otherwise, find the smallest index $1 \le i \le K$ where $A_t \le T_i$ and do the operation on index $i$. Suppose there exists an optimal solution that does not follow our algorithm. We will let $OT_{t, i}$ denote the value of $T_i$ before the operation was done at time $t$ in the optimal solution. Let $et$ be the earliest time at which the operation done by the optimal solution differs from that of the greedy solution. Case 1: $A_{et} > OT_{et,K}$. Since we are maintaining $T$ in sorted order, having $A_{et} > OT_{et,K}$ means that $A_{et}$ is larger than all elements of $T$. This means that no matter which index $i$ we choose to do the operation on, the cost will always increase by $1$. Suppose an index $i > 1$ was chosen in the optimal solution. We can always choose to do the operation on index $1$ instead of index $i$ and the answer will not be less optimal. This is because if we let $T'$ be the array $T$ after the operation was done on index $1$, then $T'_p \ge OT_{et+1,p}$ for all $1 \le p \le K$, since $T'_p = \begin{cases}OT_{et,p+1}&\text{if }p<K\newline A_{et}&\text{if }p=K\end{cases}$ while $OT_{et+1,p} = \begin{cases}OT_{et,p}&\text{if }p<i\newline OT_{et,p+1}&\text{if }i\le p<K\newline A_{et}&\text{if }p=K\end{cases}$. Case 2: $A_{et} \le OT_{et,K}$.
For convenience, we will denote that the operation was done on index $i$ in the greedy solution while the operation was done on index $j$ in the optimal solution during time $et$. Case 2A: $i < j$. In this case, the cost does not increase for either the optimal solution or the greedy solution. However, we can always do an operation on index $i$ instead of index $j$ and the answer will not be less optimal. This is because if we let $T'$ be the array $T$ after the operation was done on index $i$, then $T'_p \ge OT_{et+1,p}$ for all $1\le p\le K$, since $T'_p = \begin{cases}OT_{et,p}&\text{if }p\neq i\newline A_{et}&\text{if }p=i\end{cases}$ while $OT_{et+1,p} = \begin{cases}OT_{et,p}&\text{if }p<i\newline A_{et}&\text{if }p=i\newline OT_{et,p-1}&\text{if }i< p\le j\newline OT_{et,p}&\text{if }j<p\le K\end{cases}$. Case 2B: $i > j$. For this case, the cost increases for the optimal solution while the cost does not change for the greedy solution. However, it is not trivial to prove that the greedy solution is at least as good: even though it has a smaller cost, it results in a less favourable array $T$. Hence, we prove this case below. Proof of Case 2B. We want to come up with a modified solution that does the same operations as the optimal solution for time $1\le t<et$ and does an operation on index $i$ during time $et$. Adopting a notation similar to $OT$, we will let $MT_{t, p}$ denote the value of $T_p$ before the operation was done at time $t$ in this modified solution. Then, $MT_{et+1,p} = \begin{cases}OT_{et,p}&\text{if }p\neq i\newline A_{et}&\text{if } p=i\end{cases}$ and $OT_{et+1,p}=\begin{cases}OT_{et,p} &\text{if } p<j\newline OT_{et,p+1}&\text{if }j\le p<i-1\newline A_{et}&\text{if }p=i-1\newline OT_{et,p}&\text{if }i\le p\le K\end{cases}$. Note that in this case, $MT_{et+1,p}\le OT_{et+1,p}$ for all $1\le p\le K$, which means that our modified solution results in a less favourable state than the optimal solution. However, since our modified solution has paid one less cost up to this point, we will be able to show that it does not perform worse than the optimal solution overall. Notice that $OT_{et+1,p}\le MT_{et+1,p+1}$ for all $1\le p<K$. Denote by $x_t$ the index that the optimal solution operates on during time $t$. Let $r$ be the minimum time where $et+1\le r\le N$ and $x_r=K$. Due to the above property that $OT_{et+1,p}\le MT_{et+1,p+1}$ for all $1\le p<K$, we can let our modified solution do the operation on index $x_t+1$ for all time $et+1\le t<r$ and the cost will not be more than that of the optimal solution. This is because the property that $OT_{t+1,p}\le MT_{t+1,p+1}$ for all $1\le p<K$ still holds throughout that time range even after each update.
Note that if such an $r$ does not exist, we can let our modified solution do the operation on index $x_t+1$ for all time $et+1\le t\le N$, and we have completed constructing a modified solution with a cost not more than that of the optimal solution. However, if such an $r$ exists, then at time $r$, since $x_r=K$, we are no longer able to use the same method. Instead, let us consider what happens if we let our modified solution do an operation on index $1$ during time $r$. If $A_r>MT_{r,K}$, it will mean that $MT_{r+1,p}=\begin{cases}MT_{r,p+1}&\text{if }p<K\newline A_r&\text{if }p=K\end{cases}$ while $OT_{r+1,p}=\begin{cases}OT_{r,p}&\text{if }p<K\newline A_r&\text{if }p=K\end{cases}$ since $OT_{r,K-1}\le MT_{r,K}<A_r$. Even though during this time it is possible that the cost of the modified solution increases by $1$ while the cost of the optimal solution remains the same, recall that earlier, during time $et$, our modified solution used one less cost than the optimal solution. As a result, the modified solution will end up having a cost of not more than the optimal solution. At the same time, $OT_{r+1,p}\le MT_{r+1,p}$ for all $1\le p\le K$. Hence, for all time $r<t\le N$, we can let our modified solution do the operation on the same index as the optimal solution $x_t$, and the cost of our modified solution will not be more than that of the optimal solution. On the other hand, suppose $A_r\le MT_{r,K}$. Let $v$ be the minimum position such that $A_r\le MT_{r,v}$ and let $w$ be the minimum position such that $A_r\le OT_{r,w}$. Then, $MT_{r+1,p}=\begin{cases}MT_{r,p+1}&\text{if }p<v-1\newline A_r&\text{if }p=v-1\newline MT_{r,p}&\text{if }p\ge v\end{cases}$ and $OT_{r+1,p}=\begin{cases}OT_{r,p}&\text{if }p<w\newline A_r&\text{if }p=w\newline OT_{r,p-1}&\text{if }p>w\end{cases}$. In the same way, the cost of our modified solution might increase while the cost of the optimal solution stays the same; however, $OT_{r+1,p}\le MT_{r+1,p}$ for all $1\le p\le K$.
- For $p<v-1$ and $p>w$, the condition holds since $OT_{r,p}\le MT_{r,p+1}$ for all $1\le p<K$. Note that $v-1\le w$ because of the same inequality as well. - Suppose $v-1=w$. Then for $p=v-1$, $OT_{r+1,p}=A_r\le A_r=MT_{r+1,p}$. From now on, we suppose $v-1\neq w$ - For $p=v-1$, $OT_{r,v-1}\le A_r$ as $w$ is defined as the minimum position that $A_r\le OT_{r,w}$ and $v-1< w$. - For $v\le p<w$, $OT_{r,p}\le MT_{r,p}$ as $OT_{r,p}<A_r\le MT_{r,p}$ - For $p=w$, $A_r\le MT_{r,w}$ as $v$ is defined as the minimum position that $A_r\le MT_{r,v}$ and $v-1<w$ Now that we managed to construct a modified solution which follows the greedy algorithm from time $1\le t\le et$ and is not less optimal than the optimal solution, we can let the optimal solution be our modified solution and find the new $et$ to get a new modified solution. Hence by induction, our greedy solution is optimal.
[ "data structures", "dp", "greedy" ]
1,400
#include <bits/stdc++.h> using namespace std; const int INF = 1000000005; const int MAXN = 200005; int t; int n; int a[MAXN]; int mn[MAXN * 4], lz[MAXN * 4]; void init(int u = 1, int lo = 1, int hi = n) { mn[u] = lz[u] = 0; if (lo != hi) { int mid = lo + hi >> 1; init(u << 1, lo, mid); init(u << 1 ^ 1, mid + 1, hi); } } void propo(int u) { if (lz[u] == 0) { return; } lz[u << 1] += lz[u]; lz[u << 1 ^ 1] += lz[u]; mn[u << 1] += lz[u]; mn[u << 1 ^ 1] += lz[u]; lz[u] = 0; } void incre(int s, int e, int x, int u = 1, int lo = 1, int hi = n) { if (lo >= s && hi <= e) { mn[u] += x; lz[u] += x; return; } propo(u); int mid = lo + hi >> 1; if (s <= mid) { incre(s, e, x, u << 1, lo, mid); } if (e > mid) { incre(s, e, x, u << 1 ^ 1, mid + 1, hi); } mn[u] = min(mn[u << 1], mn[u << 1 ^ 1]); } int qmn(int s, int e, int u = 1, int lo = 1, int hi = n) { if (s > e) { return INF; } if (lo >= s && hi <= e) { return mn[u]; } propo(u); int mid = lo + hi >> 1; int res = INF; if (s <= mid) { res = min(res, qmn(s, e, u << 1, lo, mid)); } if (e > mid) { res = min(res, qmn(s, e, u << 1 ^ 1, mid + 1, hi)); } return res; } int main() { ios::sync_with_stdio(0), cin.tie(0); cin >> t; while (t--) { cin >> n; for (int i = 1; i <= n; i++) { cin >> a[i]; } init(); for (int i = 1; i <= n; i++) { int ndp = min(qmn(1, a[i] - 1) + 1, qmn(a[i], n)); if (i > 1) { if (a[i - 1] < a[i]) { incre(1, n, 1); } int dp = qmn(a[i - 1], a[i - 1]); if (ndp < dp) { incre(a[i - 1], a[i - 1], ndp - dp); } } } cout << qmn(1, n) << '\n'; } }
1919
D
01 Tree
There is an edge-weighted complete binary tree with $n$ leaves. A complete binary tree is defined as a tree where every non-leaf vertex has exactly 2 children. For each non-leaf vertex, we label one of its children as the left child and the other as the right child. The binary tree has a very strange property. For every non-leaf vertex, one of the edges to its children has weight $0$ while the other edge has weight $1$. Note that the edge with weight $0$ can be connected to either its left or right child. You forgot what the tree looks like, but luckily, you still remember some information about the leaves in the form of an array $a$ of size $n$. For each $i$ from $1$ to $n$, $a_i$ represents the distance$^\dagger$ from the root to the $i$-th leaf in dfs order$^\ddagger$. Determine whether there exists a complete binary tree which satisfies array $a$. Note that you \textbf{do not} need to reconstruct the tree. $^\dagger$ The distance from vertex $u$ to vertex $v$ is defined as the sum of weights of the edges on the path from vertex $u$ to vertex $v$. $^\ddagger$ The dfs order of the leaves is found by calling the following $dfs$ function on the root of the binary tree. \begin{verbatim} dfs_order = [] function dfs(v): if v is leaf: append v to the back of dfs_order else: dfs(left child of v) dfs(right child of v) dfs(root) \end{verbatim}
What does the distance of two leaves that share the same parent look like? What happens if we delete two leaves that share the same parent? Consider two leaves that share the same parent. They will be adjacent to each other in the dfs order, so their distances will be adjacent in array $a$. Furthermore, their distances to the root will differ by exactly $1$, since one of the edges from the parent to its children has weight $0$ while the other has weight $1$. If we delete two leaves that share the same parent, the parent itself becomes a leaf. Since one of the edges from the parent to a child has weight $0$, the distance from the parent to the root is equal to the smaller of its two children's distances. This means that deleting two leaves that share the same parent is the same as selecting an index $i$ such that $a_i = a_{i - 1} + 1$ or $a_i = a_{i + 1} + 1$, then removing $a_i$ from array $a$. Consider the largest value in $a$. If it is possible to delete it (meaning there is a value exactly one smaller than it to its left or right), we can delete it immediately. This is because keeping the largest value will not help to enable future operations, as it can only help to delete elements that are $1$ greater than it, which is impossible for the largest value. Now, all we have to do is maintain all elements that can be deleted and choose the element with the largest value each time. Then, whenever we delete an element, we need to check whether the two adjacent elements become deletable and update accordingly. The array is valid if and only if the process ends with a single element equal to $0$. We can do this using a \texttt{priority\_queue} and a linked list in $O(n\log n)$. Note that many other implementations exist, including several $O(n)$ solutions.
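To make the deletion rule concrete, here is a minimal brute-force sketch (hypothetical helper name `feasible`; a plain $O(n^2)$ scan, fine for small $n$ but not the editorial's $O(n\log n)$ priority-queue implementation):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Brute-force check of the editorial's greedy for small n: repeatedly
// delete the largest element that has a neighbour exactly one smaller.
// The array is a valid leaf-distance array iff we end with the single
// element 0.
bool feasible(vector<int> a) {
    bool changed = true;
    while (a.size() > 1 && changed) {
        changed = false;
        int best = -1;
        for (int i = 0; i < (int)a.size(); i++) {
            bool del = (i > 0 && a[i] == a[i - 1] + 1) ||
                       (i + 1 < (int)a.size() && a[i] == a[i + 1] + 1);
            if (del && (best == -1 || a[i] > a[best])) best = i;
        }
        if (best != -1) {
            a.erase(a.begin() + best);
            changed = true;
        }
    }
    return a.size() == 1 && a[0] == 0;
}
```

For $n = 3$, the valid distance arrays are exactly those that reduce to $[0]$, e.g. $[0, 1, 2]$ and $[1, 0, 1]$, while $[1, 1, 1]$ admits no deletion at all.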
[ "constructive algorithms", "data structures", "dsu", "greedy", "sortings", "trees" ]
2,100
#include<bits/stdc++.h> using namespace std; const int MAXN = 200005; int n; int a[MAXN]; int prv[MAXN],nxt[MAXN]; bool in[MAXN]; bool good(int i) { if (i < 1 || i > n) { return 0; } return a[prv[i]] == a[i] - 1 || a[nxt[i]] == a[i] - 1; } int main(){ ios::sync_with_stdio(0), cin.tie(0); int t; cin >> t; while (t--) { cin >> n; priority_queue<pair<int, int>> pq; for (int i = 1; i <= n; i++) { prv[i] = i - 1; nxt[i] = i + 1; in[i] = 0; cin >> a[i]; } a[n + 1] = a[0] = -2; for (int i = 1; i <= n; i++) { if (good(i)) { in[i] = 1; pq.push({a[i], i}); } } while (!pq.empty()) { auto [_, i] = pq.top(); pq.pop(); nxt[prv[i]] = nxt[i]; prv[nxt[i]] = prv[i]; if (!in[prv[i]] && good(prv[i])) { in[prv[i]]=1; pq.push({a[prv[i]], prv[i]}); } if (!in[nxt[i]] && good(nxt[i])) { in[nxt[i]]=1; pq.push({a[nxt[i]], nxt[i]}); } } int mn = n, bad = 0; for (int i = 1; i <= n; i++) { bad += !in[i]; mn = min(a[i], mn); } if (bad == 1 && mn == 0) { cout << "YES\n"; } else { cout << "NO\n"; } } }
1919
E
Counting Prefixes
There is a hidden array $a$ of size $n$ consisting of only $1$ and $-1$. Let $p$ be the prefix sums of array $a$. More formally, $p$ is an array of length $n$ defined as $p_i = a_1 + a_2 + \ldots + a_i$. Afterwards, array $p$ is sorted in non-decreasing order. For example, if $a = [1, -1, -1, 1, 1]$, then $p = [1, 0, -1, 0, 1]$ before sorting and $p = [-1, 0, 0, 1, 1]$ after sorting. You are given the prefix sum array $p$ after sorting, but you do not know what array $a$ is. Your task is to count the number of initial arrays $a$ such that the above process results in the given sorted prefix sum array $p$. As this number can be large, you are only required to find it modulo $998\,244\,353$.
Try solving the problem if the sum of elements of array $a$ is equal to $s$. If we can do this in $O(n)$ time, we can iterate through all possible values of $p_1 \le s \le p_n$ and sum up the number of ways for each possible sum. Consider starting with array $a = [1, 1, \ldots, 1, -1, -1, \ldots, -1]$ where there are $p_n$ occurrences of $1$ and $p_n - s$ occurrences of $-1$. Then, try inserting $(-1, 1)$ into the array to fix the number of occurrences of each prefix sum, starting from the largest value ($p_n$) down to the smallest value ($p_1$). Let us try to solve the problem if the sum of elements of array $a$ is equal to $s$. Consider the starting array $a = [1, 1, \ldots, 1, -1, -1, \ldots, -1]$ where there are $p_n$ occurrences of $1$ and $p_n - s$ occurrences of $-1$. Notice that when we insert $(-1, 1)$ into array $a$ between positions $i$ and $i + 1$ where the prefix sum of $a$ up to position $i$ is $v$, two new prefix sums $v - 1$ and $v$ will be formed while the remaining prefix sums stay the same. Let us fix the prefix sums starting from the largest prefix sum down to the smallest prefix sum. In the starting array $a$, we only have $1$ occurrence of the prefix sum with value $p_n$. We can insert $(-1, 1)$ right after it $k$ times to increase the number of occurrences of the prefix sum with value $p_n$ by $k$. In the process, the number of occurrences of the prefix sum with value $p_n - 1$ also increases by $k$. Now, we want to fix the number of occurrences of the prefix sum with value $p_n - 1$. If we already have $x$ occurrences but require $y > x$ occurrences, we can choose to insert $y - x$ pairs of $(-1, 1)$ right after any of the $x$ occurrences. We can count the number of ways using stars and bars to obtain the formula $y - 1\choose y - x$. We continue using a similar idea to fix the number of occurrences of $p_n - 2, p_n - 3, \ldots, p_1$, each time accounting for the additional occurrences contributed by the previous layer. 
Each layer can be calculated in $O(1)$ time after precomputing binomial coefficients, so the entire calculation to count the number of arrays $a$ whose sum is $s$ and whose prefix sums are consistent with the input takes $O(n)$ time. Then, we can iterate through all possible $p_1 \le s \le p_n$ and sum up the answers to obtain a solution that works in $O(n^2)$ time. Bonus: solve the problem in $O(n)$ time. Unfortunately, it seems like ARC146E is identical to this problem :(
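As a small sanity check on the stars-and-bars count used above (hypothetical helpers `ncr` and `brute`): distributing $k = y - x$ identical insertions among $x$ slots gives $\binom{k + x - 1}{k} = \binom{y - 1}{y - x}$ multisets.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Exact binomial coefficient for small inputs (no modular arithmetic here).
ll ncr(int n, int r) {
    if (r < 0 || n < r) return 0;
    ll res = 1;
    // After step i, res == C(n - r + i, i); each division is exact.
    for (int i = 1; i <= r; i++) res = res * (n - r + i) / i;
    return res;
}

// Brute force: number of ways to write k as an ordered sum of x
// non-negative integers, i.e. distribute k insertions among x slots.
ll brute(int x, int k) {
    if (x == 0) return k == 0;
    ll tot = 0;
    for (int t = 0; t <= k; t++) tot += brute(x - 1, k - t);
    return tot;
}
```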
[ "combinatorics", "constructive algorithms", "dp", "implementation", "math" ]
2,600
#include <bits/stdc++.h> using namespace std; typedef long long ll; const int INF = 1000000005; const int MAXN = 200005; const int MOD = 998244353; ll fact[MAXN * 2], ifact[MAXN * 2]; int t; int n; int f[MAXN * 2], d[MAXN * 2]; inline ll ncr(int n, int r) { if (r < 0 || n < r) { return 0; } return fact[n] * ifact[r] % MOD * ifact[n - r] % MOD; } // count number of a_1 + a_2 + ... + a_n = x inline ll starbar(int n, int x) { if (n == 0 && x == 0) { return 1; } return ncr(x + n - 1, x); } int main() { ios::sync_with_stdio(0), cin.tie(0); fact[0] = 1; for (int i = 1; i < MAXN * 2; i++) { fact[i] = fact[i - 1] * i % MOD; } ifact[0] = ifact[1] = 1; for (int i = 2; i < MAXN * 2; i++) { ifact[i] = MOD - MOD / i * ifact[MOD % i] % MOD; } for (int i = 2; i < MAXN * 2; i++) { ifact[i] = ifact[i - 1] * ifact[i] % MOD; } cin >> t; while (t--) { cin >> n; for (int i = 0; i < n * 2 + 5; i++) { f[i] = 0; } n++; for (int i = 1; i < n; i++) { int s; cin >> s; f[s + n]++; } f[n]++; int mn = INF, mx = -INF; for (int i = 0; i <= 2 * n; i++) { if (f[i]) { mn = min(mn, i); mx = max(mx, i); } } bool bad = 0; for (int i = mn; i <= mx; i++) { if (!f[i]) { bad = 1; break; } } if (bad || mn == mx) { cout << 0 << '\n'; continue; } ll ans = 0; for (int x = mx; x >= mn; x--) { d[mx - 1] = f[mx] + (mx > n) - (mx == x); for (int i = mx - 2; i >= mn - 1; i--) { d[i] = f[i + 1] - d[i + 1] + (i >= x) + (i >= n); } if (d[mn - 1] != 0) { continue; } ll res = 1; for (int i = mx - 1; i >= mn; i--) { res = res * starbar(d[i], f[i] - d[i]) % MOD; } ans += res; if (ans >= MOD) { ans -= MOD; } } cout << ans << '\n'; } }
1919
F1
Wine Factory (Easy Version)
\textbf{This is the easy version of the problem. The only difference between the two versions is the constraint on $c_i$ and $z$. You can make hacks only if both versions of the problem are solved.} There are three arrays $a$, $b$ and $c$. $a$ and $b$ have length $n$ and $c$ has length $n-1$. Let $W(a,b,c)$ denote the liters of wine created from the following process. Create $n$ water towers. The $i$-th water tower initially has $a_i$ liters of water and has a wizard with power $b_i$ in front of it. Furthermore, for each $1 \le i \le n - 1$, there is a valve connecting water tower $i$ to $i + 1$ with capacity $c_i$. For each $i$ from $1$ to $n$ in this order, the following happens: - The wizard in front of water tower $i$ removes at most $b_i$ liters of water from the tower and turns the removed water into wine. - If $i \neq n$, at most $c_i$ liters of the remaining water left in water tower $i$ flows through the valve into water tower $i + 1$. There are $q$ updates. In each update, you will be given integers $p$, $x$, $y$ and $z$ and you will update $a_p := x$, $b_p := y$ and $c_p := z$. After each update, find the value of $W(a,b,c)$. Note that previous updates to arrays $a$, $b$ and $c$ persist throughout future updates.
When $c_i$ and $z$ equal $10^{18}$, all the remaining water will always flow into the next water tower. Hence, the answer will be the sum of $a_i$ minus the amount of water remaining at tower $n$ after the process. From hint 1, our new task is to determine the amount of water remaining at tower $n$ after the process. Let $v_i = a_i - b_i$. The amount of water remaining at tower $n$ is the maximum suffix sum of $v$, or more formally $\max\limits_{1\le k\le n}\ \sum\limits_{i = k}^n v_i$, clamped at $0$ from below. We can use a segment tree where position $p$ of the segment tree stores $\sum\limits_{i = p}^n v_i$. The updates can be done using range increment on a prefix, and the queries can be done using range maximum. Code ReLU segment tree. A similar method can be used to solve the full problem if you combine even more ReLUs. However, it is not very elegant and is much more complicated than the intended solution below. ReLU is a common activation function used in neural networks, defined by $f(x) = \max(x, 0)$. The objective of the ReLU segment tree is to compose ReLU-like functions together. More precisely, the ReLU segment tree can solve the following problem: You are given two arrays $a$ and $b$ of length $n$. You are required to answer the following queries: - $\texttt{1 p x y}$. Update $a_p = x$ and $b_p = y$. - $\texttt{2 l r c}$. Output the value of $f_l(f_{l+1}(\ldots f_{r-1}(f_r(c))))$, where $f_i(x) = \max(x - a_i, b_i)$. The main idea is to observe that composing ReLU functions still results in a ReLU function, so we just need to store in each node the resultant function $f(x) = \max(x - p, q)$ after composing the functions that fall in the range of the segment tree node. For the merge function, we just need to figure out the details of composing two ReLU functions together.
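The key fact behind the ReLU segment tree described above is that functions of the form $f(x) = \max(x - p,\ q)$ are closed under composition: $g(f(x)) = \max(x - (p_f + p_g),\ \max(q_f - p_g,\ q_g))$. A minimal sketch of the merge rule (assumed struct name `Relu`):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// A ReLU-like function f(x) = max(x - p, q).
struct Relu {
    ll p, q;
};

ll eval(const Relu &f, ll x) { return max(x - f.p, f.q); }

// Compose: apply f first, then g.
// g(f(x)) = max(max(x - f.p, f.q) - g.p, g.q)
//         = max(x - (f.p + g.p), max(f.q - g.p, g.q))
Relu compose(const Relu &g, const Relu &f) {
    return {f.p + g.p, max(f.q - g.p, g.q)};
}
```

Each segment-tree node would store one such composed `Relu`; the merge of a node is exactly `compose` applied to its two children.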
[ "data structures", "greedy" ]
2,300
#include <bits/stdc++.h> using namespace std; typedef long long ll; const ll LINF = 1000000000000000005; const int MAXN = 500005; int n, q; int a[MAXN], b[MAXN]; ll c[MAXN]; ll v[MAXN], sv[MAXN]; ll mx[MAXN * 4], lz[MAXN * 4]; void init(int u = 1, int lo = 1, int hi = n) { lz[u] = 0; if (lo == hi) { mx[u] = sv[lo]; } else { int mid = lo + hi >> 1; init(u << 1, lo, mid); init(u << 1 ^ 1, mid + 1, hi); mx[u] = max(mx[u << 1], mx[u << 1 ^ 1]); } } void propo(int u) { if (lz[u] == 0) { return; } lz[u << 1] += lz[u]; lz[u << 1 ^ 1] += lz[u]; mx[u << 1] += lz[u]; mx[u << 1 ^ 1] += lz[u]; lz[u] = 0; } void incre(int s, int e, ll x, int u = 1, int lo = 1, int hi = n) { if (lo >= s && hi <= e) { mx[u] += x; lz[u] += x; return; } propo(u); int mid = lo + hi >> 1; if (s <= mid) { incre(s, e, x, u << 1, lo, mid); } if (e > mid) { incre(s, e, x, u << 1 ^ 1, mid + 1, hi); } mx[u] = max(mx[u << 1], mx[u << 1 ^ 1]); } int main() { ios::sync_with_stdio(0), cin.tie(0); cin >> n >> q; for (int i = 1; i <= n; i++) { cin >> a[i]; } for (int i = 1; i <= n; i++) { cin >> b[i]; } for (int i = 1; i < n; i++) { cin >> c[i]; } ll sma = 0; for (int i = n; i >= 1; i--) { v[i] = a[i] - b[i]; sv[i] = v[i] + sv[i + 1]; sma += a[i]; } init(); while (q--) { int p, x, y; ll z; cin >> p >> x >> y >> z; sma -= a[p]; incre(1, p, -v[p]); a[p] = x; b[p] = y; v[p] = a[p] - b[p]; sma += a[p]; incre(1, p, v[p]); cout << sma - max(0ll, mx[1]) << '\n'; } }
1919
F2
Wine Factory (Hard Version)
\textbf{This is the hard version of the problem. The only difference between the two versions is the constraint on $c_i$ and $z$. You can make hacks only if both versions of the problem are solved.} There are three arrays $a$, $b$ and $c$. $a$ and $b$ have length $n$ and $c$ has length $n-1$. Let $W(a,b,c)$ denote the liters of wine created from the following process. Create $n$ water towers. The $i$-th water tower initially has $a_i$ liters of water and has a wizard with power $b_i$ in front of it. Furthermore, for each $1 \le i \le n - 1$, there is a valve connecting water tower $i$ to $i + 1$ with capacity $c_i$. For each $i$ from $1$ to $n$ in this order, the following happens: - The wizard in front of water tower $i$ removes at most $b_i$ liters of water from the tower and turns the removed water into wine. - If $i \neq n$, at most $c_i$ liters of the remaining water left in water tower $i$ flows through the valve into water tower $i + 1$. There are $q$ updates. In each update, you will be given integers $p$, $x$, $y$ and $z$ and you will update $a_p := x$, $b_p := y$ and $c_p := z$. After each update, find the value of $W(a,b,c)$. Note that previous updates to arrays $a$, $b$ and $c$ persist throughout future updates.
Try modelling this problem as a maximum flow problem. Try using minimum cut to find the maximum flow. Speed up finding the minimum cut using a segment tree. Consider a flow graph with $n + 2$ vertices. Let the source vertex be $s = n + 1$ and the sink vertex be $t = n + 2$. For each $i$ from $1$ to $n$, add edge $s\rightarrow i$ with capacity $a_i$ and another edge $i\rightarrow t$ with capacity $b_i$. Then for each $i$ from $1$ to $n - 1$, add edge $i\rightarrow i + 1$ with capacity $c_i$. The maximum flow from $s$ to $t$ will be the answer to the problem. Let us try to find the minimum cut of the above graph instead. Claim: The minimum cut will contain exactly one of $s\rightarrow i$ or $i\rightarrow t$ for all $1\le i\le n$. Proof: If the minimum cut contains neither $s\rightarrow i$ nor $i\rightarrow t$, then $s$ can reach $t$ through vertex $i$, so it is not a valid cut. Now, we will prove why the minimum cut cannot contain both $s\rightarrow i$ and $i\rightarrow t$. Suppose there exists a minimum cut containing some vertex $1\le i\le n$ for which both $s\rightarrow i$ and $i\rightarrow t$ are in the cut. We will consider two cases: Case 1: $s$ can reach $i$ (through some sequence of vertices $s\rightarrow j\rightarrow j+1\rightarrow \ldots \rightarrow i$ where $j < i$). If our cut keeps only $i\rightarrow t$ without $s\rightarrow i$, nothing changes, as $s$ was already able to reach $i$ even when $s\rightarrow i$ was cut. Hence, $s$ will still be unable to reach $t$, and we have found a cut of equal or smaller cost. Case 2: $s$ cannot reach $i$. If our cut keeps only $s\rightarrow i$ without $i\rightarrow t$, nothing changes, as $s$ is still unable to reach $i$, so the edge $i\rightarrow t$ cannot be used to reach $t$ from $s$. Hence, $s$ will still be unable to reach $t$, and we have found a cut of equal or smaller cost. 
Now, all we have to do is select for each $1\le i\le n$, whether to cut the edge $s\rightarrow i$ or the edge $i\rightarrow t$. Let us use a string $x$ consisting of characters $\texttt{A}$ and $\texttt{B}$ to represent this. $x_i = \texttt{A}$ means we decide to cut the edge $s\rightarrow i$ for a cost of $a_i$ and $x_i = \texttt{B}$ means we decide to cut the edge from $i\rightarrow t$ for a cost of $b_i$. Notice that whenever we have $x_i = \texttt{B}$ and $x_{i + 1} = \texttt{A}$, $s$ can reach $t$ through $s\rightarrow i\rightarrow i + 1\rightarrow t$. To prevent this, we have to cut the edge $i\rightarrow i + 1$ for a cost of $c_i$. To handle updates, we can use a segment tree. Each node of the segment tree stores the minimum possible cost for each of the four combinations of the two endpoints being $\texttt{A}$ or $\texttt{B}$. When merging the segment tree nodes, add a cost of $c$ when the right endpoint of the left node is $\texttt{B}$ and the left endpoint of the right node is $\texttt{A}$. The final time complexity is $O(n\log n)$ as only a segment tree is used.
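For intuition, on a tiny instance ($n = 2$) the cut formulation can be checked directly against a straightforward simulation of the process (hypothetical names `simulate` and `mincut2`; the greedy of removing as much water as possible at every tower is assumed optimal here):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Direct simulation of the statement's process, always removing as much
// water as possible at each tower before letting at most c_i liters flow on.
ll simulate(const vector<ll> &a, const vector<ll> &b, const vector<ll> &c) {
    ll wine = 0, carry = 0;
    int n = a.size();
    for (int i = 0; i < n; i++) {
        ll water = a[i] + carry;
        ll take = min(b[i], water);
        wine += take;
        carry = (i + 1 < n) ? min(c[i], water - take) : 0;
    }
    return wine;
}

// Minimum cut for n = 2, enumerating the four A/B assignments from the
// editorial; c1 is only paid for the pattern B followed by A.
ll mincut2(ll a1, ll b1, ll a2, ll b2, ll c1) {
    return min({a1 + a2, a1 + b2, b1 + a2 + c1, b1 + b2});
}
```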
[ "data structures", "dp", "flows", "greedy", "matrices" ]
2,800
#include <bits/stdc++.h> using namespace std; typedef long long ll; const ll LINF = 1000000000000000005ll; const int MAXN = 500005; int n, q; int a[MAXN], b[MAXN]; ll c[MAXN]; ll st[MAXN * 4][2][2]; void merge(int u, int lo, int hi) { int mid = lo + hi >> 1, lc = u << 1, rc = u << 1 ^ 1; for (int l = 0; l < 2; l++) { for (int r = 0; r < 2; r++) { st[u][l][r] = min({st[lc][l][0] + st[rc][0][r], st[lc][l][1] + st[rc][1][r], st[lc][l][0] + st[rc][1][r], st[lc][l][1] + st[rc][0][r] + c[mid]}); } } } void init(int u = 1, int lo = 1, int hi = n) { if (lo == hi) { st[u][0][0] = a[lo]; st[u][1][1] = b[lo]; st[u][1][0] = st[u][0][1] = LINF; return; } int mid = lo + hi >> 1, lc = u << 1, rc = u << 1 ^ 1; init(lc, lo, mid); init(rc, mid + 1, hi); merge(u, lo, hi); } void upd(int p, int u = 1, int lo = 1, int hi = n) { if (lo == hi) { st[u][0][0] = a[lo]; st[u][1][1] = b[lo]; st[u][1][0] = st[u][0][1] = LINF; return; } int mid = lo + hi >> 1, lc = u << 1, rc = u << 1 ^ 1; if (p <= mid) { upd(p, lc, lo, mid); } else { upd(p, rc, mid + 1, hi); } merge(u, lo, hi); } int main() { ios::sync_with_stdio(0), cin.tie(0); cin >> n >> q; for (int i = 1; i <= n; i++) { cin >> a[i]; } for (int i = 1; i <= n; i++) { cin >> b[i]; } for (int i = 1; i < n; i++) { cin >> c[i]; } init(); while (q--) { int p, x, y; ll z; cin >> p >> x >> y >> z; a[p] = x; b[p] = y; c[p] = z; upd(p); cout << min({st[1][0][0], st[1][0][1], st[1][1][0], st[1][1][1]}) << '\n'; } }
1919
G
Tree LGM
In TreeWorld, there is a popular two-player game played on a tree with $n$ vertices labelled from $1$ to $n$. In this game, the tournament leaders first choose a vertex to be the root of the tree and choose another vertex (possibly the same vertex as the root) to place a coin on. Then, each player will take turns moving the coin to any child$^\dagger$ of the vertex that the coin is currently on. The first player who is unable to make a move loses. Alice wants to be a tree LGM, so she spends a lot of time studying the game. She wrote down an $n$ by $n$ matrix $s$, where $s_{i,j} = \mathtt{1}$ if the first player can win with the root of the tree chosen to be vertex $i$, and the coin was initially placed on vertex $j$. Otherwise, $s_{i, j} = \mathtt{0}$. Alice is a perfectionist, so she assumes that both players play perfectly in the game. However, she accidentally knocked her head on the way to the tournament and forgot what the tree looked like. Determine whether there exists a tree that satisfies the winning and losing states represented by matrix $s$, and if it exists, construct a valid tree. $^\dagger$ A vertex $c$ is a child of vertex $u$ if there is an edge between $c$ and $u$, and $c$ does not lie on the unique simple path from the root to vertex $u$.
Think about how you would construct matrix $s$ if you were given the tree. If $s_{i, i} = \mathtt{0}$, what should the value of $s_{j, i}$ be for all $1\le j\le n$? For some $i$ where $s_{i, i} = \mathtt{1}$, let $Z$ be a set containing all the vertices $j$ where $s_{j, i} = \mathtt{0}$. More formally, $Z = \{j\ |\ 1\le j\le n\text{ and } s_{j, i} = \mathtt{0}\}$. Does $Z$ have any special properties? Using hint 3, can we break down the problem into smaller sub-problems? Try solving the problem if the values in each column are constant. In other words, $s_{i, j} = s_{j, j}$ for all $1\le i\le n$ and $1\le j\le n$. Let us consider how we can code the checker for this problem. In other words, if we are given a tree, how can we construct matrix $s$? We can solve this using dynamic programming. $s_{i, j} = \mathtt{1}$ if and only if at least one child $c$ of vertex $j$ (when the tree is rooted at vertex $i$) has $s_{i, c} = \mathtt{0}$. This is because the player can move the coin from vertex $j$ to vertex $c$, which puts the opponent in a losing state. For convenience, we will call a vertex $i$ special if there exists some $1\le j\le n$ where $s_{j, i} \neq s_{i, i}$. Suppose there exists some $i$ where $s_{i, i} = \mathtt{0}$. This means that moving the coin to any of the neighbours of $i$ results in a winning state for the opponent. If the tree were rooted at some other vertex $j\neq i$, it would still be a losing state, as rooting elsewhere only reduces the options that the player can move the coin to, so $s_{j, i}$ should be $\mathtt{0}$ for all $1\le j\le n$. This means that special vertices must have $s_{i, i} = \mathtt{1}$. Now, let us take a look at special vertices. Let $x$ be a special vertex, meaning $s_{x, x} = \mathtt{1}$ and there exists some $j$ where $s_{j, x} = \mathtt{0}$. Let $Z$ be a set containing all the vertices $j$ where $s_{j, x} = \mathtt{0}$. More formally, $Z = \{j\ |\ 1\le j\le n\text{ and } s_{j, x} = \mathtt{0}\}$. 
$Z$ cannot be empty due to the property of special vertices. Notice that whenever we choose to root at some vertex $j\neq x$, the number of children of $x$ decreases by exactly $1$. This is because the neighbour that lies on the path from vertex $x$ to vertex $j$ becomes the parent of $x$ instead of a child of $x$. If rooting the tree at vertex $x$ is a winning state but rooting the tree at some other vertex $j$ results in a losing state instead, it means that the only winning move is to move the coin from vertex $x$ to the neighbour that is on the path from vertex $x$ to $j$. Let $y$ denote the only neighbour of vertex $x$ where we can move the coin from vertex $x$ to vertex $y$ and win. In other words, $y$ is the neighbour of vertex $x$ that lies on the paths between $x$ and the vertices in set $Z$. This means that $Z$ is the set of vertices in the subtree of $y$ when the tree is rooted at vertex $x$. Now, let us try to find vertex $y$. Notice that $s_{y, y} = \mathtt{1}$. This is because $s_{y, x} = \mathtt{0}$, so the coin can be moved from vertex $y$ to vertex $x$ to put the opponent in a losing state. Furthermore, $s_{j, y} = \mathtt{0}$ if and only if $j$ is not in $Z$. This is because $s_{x, y} = \mathtt{0}$, since moving the coin from vertex $x$ to vertex $y$ is a winning move for the first player. For every other vertex $u\in Z$ that is not $y$, this property will not hold: even if $s_{u, u} = \mathtt{1}$ and $s_{x, u} = \mathtt{0}$, $s_{y, u}$ will be equal to $\mathtt{0}$ as well, since rooting the tree at $x$ has the same effect on $u$ as rooting it at $y$. Since $y \in Z$, $s_{y, u} = \mathtt{0}$ violates the requirement that $s_{j, u} = \mathtt{1}$ for all $j$ in $Z$. Since $y$ is a neighbour of vertex $x$, we know that there is an edge between vertex $y$ and $x$. 
Furthermore, we know that if the edge between vertex $y$ and $x$ is removed, the set of vertices $Z$ forms a single connected component containing $y$, while the set of vertices not in $Z$ forms another connected component containing $x$. This means that we can recursively solve the problem for the two connected components to check whether the values in matrix $s$ are valid within their components. After recursively solving for each connected component, we are only left with non-special vertices ($s_{j, i} = s_{i, i}$ for all $1\le j\le n$) and some special vertices that already have an edge connecting to outside their component. Each non-special vertex with $s_{i, i} = \mathtt{1}$ has to be connected to at least $2$ non-special vertices with $s_{i, i} = \mathtt{0}$. The most efficient way to do this is to form a line 0 - 1 - 0 - 1 - 0, as it requires the fewest vertices with $s_{i, i} = \mathtt{0}$. If there are not enough vertices with $s_{i, i} = \mathtt{0}$ to form the line, a solution does not exist. Otherwise, connect the left-over vertices with $s_{i, i} = \mathtt{0}$ to any vertex with $s_{i, i} = \mathtt{1}$. On the other hand, special vertices can either be connected to nothing, connected to other special vertices, or connected to non-special vertices with $s_{i, i} = \mathtt{1}$. For the final step, we need to check whether $s_{i, j}$ is consistent when $i$ and $j$ are in different components (i.e. ($i\in Z$ and $j\notin Z$) or ($i\notin Z$ and $j\in Z$)). Notice that $s_{i, j} = s_{x, j}$ for all $i\in Z$ and $j\notin Z$ and $j\neq x$, and $s_{i, j} = s_{y, j}$ for all $i\notin Z$ and $j\in Z$ and $j\neq y$. From the steps above, we managed to account for every value in the matrix; hence, if matrix $s$ is consistent through all the steps, the constructed tree will be valid as well. We can make use of xor hashing to find vertex $x$ together with its corresponding vertex $y$. With xor hashing, the time complexity is $O(n^2)$. 
Well-optimised bitset code with a time complexity of $O(\frac{n^3}{w})$ can pass as well.
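A sketch of the checker described at the start of the editorial (hypothetical name `buildMatrix`; 0-indexed adjacency-list input assumed): compute $s$ for a given tree by running the game DP from every root, where a position is winning iff at least one child is losing.

```cpp
#include <bits/stdc++.h>
using namespace std;

// For every root i, run a DFS computing s[i][j]: the position with the
// coin on j (tree rooted at i) is winning iff at least one child of j
// is a losing position. Overall O(n^2) for an n-vertex tree.
vector<vector<int>> buildMatrix(const vector<vector<int>> &adj) {
    int n = adj.size();
    vector<vector<int>> s(n, vector<int>(n, 0));
    for (int root = 0; root < n; root++) {
        function<int(int, int)> dfs = [&](int u, int par) {
            int win = 0;
            for (int v : adj[u])
                if (v != par && dfs(v, u) == 0) win = 1;  // losing child found
            return s[root][u] = win;
        };
        dfs(root, -1);
    }
    return s;
}
```

On the path $1 - 2 - 3$, every row of $s$ comes out as $\mathtt{010}$: only the middle vertex is winning, regardless of the root.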
[ "constructive algorithms", "divide and conquer", "games", "trees" ]
3,500
#include <bits/stdc++.h> using namespace std; const int MAXN = 5005; mt19937_64 rnd(chrono::high_resolution_clock::now().time_since_epoch().count()); int n; unsigned long long r[MAXN], hsh[MAXN], totr; string s[MAXN]; vector<pair<int, int>> ans; bool done[MAXN]; bool solve(vector<int> grp) { int pr = -1, pl = -1; vector<int> lft, rht; for (int i : grp) { if (s[i][i] == '0' || done[i] || hsh[i] == totr) { continue; } rht.clear(); for (int j : grp) { if (s[j][i] == '0') { lft.push_back(j); } else { rht.push_back(j); } } if (!lft.empty()) { pr = i; break; } } if (pr == -1) { vector<int> dv, zero, one; for (int i : grp) { if (done[i]) { dv.push_back(i); } else if (s[i][i] == '0') { zero.push_back(i); } else { one.push_back(i); } } for (int i = 1; i < dv.size(); i++) { ans.push_back({dv[i - 1], dv[i]}); } if (one.empty() && zero.empty()) { return 1; } if (one.size() >= zero.size()) { return 0; } if (one.empty()) { if (zero.size() >= 2 || !dv.empty()) { return 0; } return 1; } for (int i = 0; i < one.size(); i++) { ans.push_back({zero[i], one[i]}); ans.push_back({one[i], zero[i + 1]}); } for (int i = one.size() + 1; i < zero.size(); i++) { ans.push_back({one[0], zero[i]}); } if (!dv.empty()) { ans.push_back({one[0], dv[0]}); } return 1; } for (int i : lft) { if (s[i][i] == '0' || done[i] || ((hsh[i] ^ hsh[pr]) != totr)) { continue; } vector<int> trht; for (int j : grp) { if (s[j][i] == '0') { trht.push_back(j); } } if (trht == rht) { pl = i; break; } } if (pl == -1) { return 0; } for (int i : lft) { for (int j : rht) { if (j == pr) { continue; } if (s[i][j] != s[pr][j]) { return 0; } } } for (int i : rht) { for (int j : lft) { if (j == pl) { continue; } if (s[i][j] != s[pl][j]) { return 0; } } } ans.push_back({pl, pr}); done[pl] = done[pr] = 1; return solve(lft) && solve(rht); } int main() { ios::sync_with_stdio(0), cin.tie(0); cin >> n; for (int i = 0; i < n; i++) { cin >> s[i]; } for (int i = 0; i < n; i++) { r[i] = rnd(); totr ^= r[i]; } for (int i = 0; i < n; i++) { 
for (int j = 0; j < n; j++) { if (s[i][j] == '1') { hsh[j] ^= r[i]; } } } bool pos = 1; for (int i = 0; i < n; i++) { if (s[i][i] == '1') { continue; } for (int j = 0; j < n; j++) { if (s[j][i] == '1') { pos = 0; break; } } } if (!pos) { cout << "NO\n"; return 0; } vector<int> v(n, 0); iota(v.begin(), v.end(), 0); if (!solve(v)) { cout << "NO\n"; return 0; } cout << "YES\n"; for (auto [u, v] : ans) { cout << u + 1 << ' ' << v + 1 << '\n'; } }
1919
H
Tree Diameter
There is a hidden tree with $n$ vertices. The $n-1$ edges of the tree are numbered from $1$ to $n-1$. You can ask the following queries of two types: - Give the grader an array $a$ with $n - 1$ \textbf{positive} integers. For each edge from $1$ to $n - 1$, the weight of edge $i$ is set to $a_i$. Then, the grader will return the length of the diameter$^\dagger$. - Give the grader two indices $1 \le a, b \le n - 1$. The grader will return the number of edges between edges $a$ and $b$. In other words, if edge $a$ connects $u_a$ and $v_a$ while edge $b$ connects $u_b$ and $v_b$, the grader will return $\min(\text{dist}(u_a, u_b), \text{dist}(v_a, u_b), \text{dist}(u_a, v_b), \text{dist}(v_a, v_b))$, where $\text{dist}(u, v)$ represents the number of edges on the path between vertices $u$ and $v$. Find any tree isomorphic$^\ddagger$ to the hidden tree after at most $n$ queries of type 1 and $n$ queries of type 2 in any order. $^\dagger$ The distance between two vertices is the sum of the weights on the unique simple path that connects them. The diameter is the largest of all those distances. $^\ddagger$ Two trees, consisting of $n$ vertices each, are called isomorphic if there exists a permutation $p$ containing integers from $1$ to $n$ such that edge ($u$, $v$) is present in the first tree if and only if the edge ($p_u$, $p_v$) is present in the second tree.
Full solution: dario2994. The original problem allowed $5n$ type 1 queries and $n$ type 2 queries and was used as the last problem of a Div. 2 round. When we opened the round for testing, dario2994 solved the problem using $2n$ type 1 queries, and a few days later, he managed to improve his solution to use only $n$ type 1 queries. This was what made us decide to change this round into a Div. 1 round instead of a Div. 2 round. Try rooting the tree at an edge. Use $n - 2$ type $2$ queries to find the distance of every edge to the root. For convenience, we will call the distance of an edge to the root the depth of the edge. We start with only the root edge, then add edges of depth $0$, followed by depth $1, 2, \ldots$ If we want to add a new edge of depth $i$, we need to attach it to one of the edges of depth $i - 1$. We can let the weight of the edge we want to attach be $10^9$ to force the diameter to pass through it, then give the edges of depth $i - 1$ distinct multiples of $n$ as weights. This way, we can determine which edges of depth $i - 1$ are used in the diameter (unless the two largest edges are used). Make use of isomorphism to handle the case in hint 4 where the two largest edges are used. If isomorphism cannot be used, find a leaf edge of a lower depth than the query edge and force the diameter to pass through the leaf edge and the query edge. Then, only 1 edge of depth $i - 1$ will be used, and the edge weights for edges of depth $i - 1$ can follow a similar structure to hint 4. We will root the tree at edge $1$. Then, use $n - 2$ type $2$ queries to find the distance of every edge to the root. For convenience, we will call the distance of an edge to the root the depth of the edge. Our objective is to add the edges in increasing order of depth, so when we are inserting an edge of depth $i$, all edges of depth $i - 1$ are already inserted and we just have to figure out which edge of depth $i - 1$ we have to attach the edge of depth $i$ to. 
For convenience, the edge weights used in query $1$ will be $1$ by default unless otherwise stated. Let $c_i$ store the list of edges with depth $i$. Suppose we want to insert edge $u$ into the tree and the depth of edge $u$ is $d$. We let the weight of the edge $u$ be $10 ^ 9$ and the weight of edges in $c_{d - 1}$ be $n, 2n, 3n, \ldots, (|c_{d - 1}| - 2)n, (|c_{d - 1}| - 1)n, (|c_{d - 1}| - 1)n$. The diameter will pass through edge $u$, the parent edge of $u$, as well as one edge of weight $(|c_{d - 1}| - 1)n$. If we calculate $\left\lfloor\frac{\text{diameter} - 10^9}{n}\right\rfloor - (|c_{d - 1}| - 1)$, we will be able to tell the index of the parent edge of $u$. However, there is one exception. When the parent edge of $u$ is one of the last 2 edges of $c_{d - 1}$, we are unable to differentiate between the two of them as they have the same weight. This is not a problem if the last 2 edges are isomorphic to each other, as attaching $u$ to either parent results in the same tree. For now, we will assume that the last 2 edges of $c_{d - 1}$ are isomorphic to each other. However, after attaching edge $u$ to one of the last 2 edges in $c_{d - 1}$, they are no longer isomorphic. Hence, we need to use a different method to insert the remaining edges of depth $d$. Let the new edge that we want to insert be $v$. Let the weight of edges $u$ and $v$ be $10^9$ and the weights of edges in $c_{d - 1}$ be the same as before. Now, we can use $\left\lfloor\frac{\text{diameter} - 2\cdot 10^9}{n}\right\rfloor$ to determine whether edge $v$ shares the same parent as $u$, and if it does not share the same parent, it can still determine the index of the parent edge of $v$. With the additional information of whether edge $v$ shares the same parent as edge $u$, we will be able to differentiate the last 2 edges of $c_{d - 1}$ from each other. Now, we just need to handle the issue where the last 2 edges of $c_{d - 1}$ are not isomorphic. 
When we only have the root edge at the start, the left and right ends of the edge are isomorphic (note that for the root edge, we consider it as 2 separate edges, one with the left endpoint and one with the right endpoint). We try to maintain the isomorphism as we add edges of increasing depth. Suppose the last two edges of $c_{d - 1}$ are isomorphic. Let the two edges be $a$ and $b$. Then, we insert edges of depth $d$ using the above method. Let the child edges attached to $a$ and $b$ be represented by sets $S_a$ and $S_b$ respectively. If either $S_a$ or $S_b$ has size at least $2$, the two edges in the same set will be isomorphic, so we can let those 2 edges be the last 2 edges of $c_d$. Now, the sizes of $S_a$ and $S_b$ are both strictly smaller than $2$. If the sizes of both sets are exactly $1$, the two edges from each set will be isomorphic as well, since $a$ and $b$ are isomorphic. Now, the only case left is if at least one of the sets is empty. Without loss of generality, assume that $S_a$ is empty. Since it is no longer possible to maintain two isomorphic edges, we now change our objective to finding a leaf (it will be clear why in the following paragraphs). If $S_b$ is empty as well, both $a$ and $b$ are leaves so we can choose any one of them. If $S_b$ is not empty, then $a$ and $b$ are no longer isomorphic due to their children. This means that we cannot simply declare $a$ to be the leaf: since we did not differentiate $a$ and $b$ in the previous paragraphs, the edges we placed in $S_b$ might actually be children of $a$ instead of $b$. To determine whether $S_b$ belongs to $a$ or $b$, we can make use of one type 2 query to find the distance between one of the edges in $S_b$ and $b$. If the distance is $0$, $S_b$ belongs to $b$ and $a$ is the leaf. Otherwise, the distance will be $1$, so $S_b$ belongs to $a$ and $b$ is the leaf. Now that we found a leaf, we can use the following method to insert an edge $u$ of depth $d$. 
We let the weight of the edge $u$ and the leaf edge be $10 ^ 9$ and the weight of edges in $c_{d - 1}$ be $n, 2n, 3n, \ldots, (|c_{d - 1}| - 2)n, (|c_{d - 1}| - 1)n, |c_{d - 1}|n$. The diameter will pass through edge $u$, the leaf edge, and only one edge of depth $d - 1$ which is the parent edge of $u$. Hence, after finding a leaf edge, we can uniquely determine the parent edge from $\left\lfloor\frac{\text{diameter} - 2\cdot 10^9}{n}\right\rfloor$. We used $n - 2$ type 1 queries and $n - 1$ type 2 queries in total. This is because we used a single type 1 query for each non-root edge. We used $n - 2$ type 2 queries at the start, and we only used $1$ additional type 2 query when we were no longer able to maintain two isomorphic edges and changed our methodology to use a leaf edge instead.
[ "interactive", "trees" ]
2,000
#include <bits/stdc++.h> using namespace std; typedef long long ll; const int INF = 1000000000; const int MAXN = 1000; int n; int lvl[MAXN + 5]; int pe[MAXN + 5]; vector<int> ch[MAXN + 5]; ll query(vector<int> a) { cout << "? 1"; for (int i = 1; i < n; i++) { cout << ' ' << a[i]; } cout << endl; ll res; cin >> res; return res; } int query(int a, int b) { cout << "? 2 " << a << ' ' << b << endl; int res; cin >> res; return res; } int main() { cin >> n; for (int i = 2; i < n; i++) { lvl[i] = query(1, i); } int ptr = 3; vector<int> base = {1, 2}; pe[1] = pe[2] = 1; bool iso = 1; int piv = -1; for (int l = 0; l < n; l++) { vector<int> a(n, 1); int m = base.size(); for (int i = 0; i < m; i++) { a[pe[base[i]]] = min(i + 1, m - iso) * MAXN; } if (!iso) { a[pe[piv]] = INF; } bool ciso = 0; for (int u = 2; u < n; u++) { if (lvl[u] != l) { continue; } a[u] = INF; ll res = query(a) - INF; a[u] = 1; if (!iso || ciso) { res -= INF; } int id = res / MAXN; if (iso && l) { id -= m - 1; } int v = ptr++; pe[v] = u; if (ciso) { if ((l == 0 && id == 0) || id == -(m - 1)) { ch[base[m - 2]].push_back(v); } else if (id == m - 1) { ch[base[m - 1]].push_back(v); } else { ch[base[id - 1]].push_back(v); } } else if (iso && id == m - 1) { ch[base[m - 2]].push_back(v); ciso = 1; a[u] = INF; } else { ch[base[id - 1]].push_back(v); } } if (m >= 2 && ch[base[m - 2]].size() > ch[base[m - 1]].size()) { swap(base[m - 2], base[m - 1]); } vector<int> nbase; for (int i = 0; i < m; i++) { for (int j : ch[base[i]]) { nbase.push_back(j); } } if (!iso || ch[base[m - 1]].size() >= 2 || ch[base[m - 2]].size() == 1) { base = nbase; continue; } if (ch[base[m - 1]].empty()) { piv = base[m - 1]; } else { ll res = query(pe[ch[base[m - 1]][0]], pe[base[m - 1]]); if (res) { swap(base[m - 2], base[m - 1]); swap(ch[base[m - 2]], ch[base[m - 1]]); } piv = base[m - 2]; } iso = 0; base = nbase; } cout << '!' 
<< endl; cout << 1 << ' ' << 2 << endl; for (int u = 1; u <= n; u++) { for (int v : ch[u]) { cout << u << ' ' << v << endl; } } }
1920
A
Satisfying Constraints
Alex is solving a problem. He has $n$ constraints on what the integer $k$ can be. There are three types of constraints: - $k$ must be \textbf{greater than or equal to} some integer $x$; - $k$ must be \textbf{less than or equal to} some integer $x$; - $k$ must be \textbf{not equal to} some integer $x$. Help Alex find the number of integers $k$ that satisfy all $n$ constraints. It is guaranteed that the \textbf{answer is finite} (there exists at least one constraint of type $1$ and at least one constraint of type $2$). Also, it is guaranteed that \textbf{no two constraints are the exact same}.
Suppose there are no $\neq$ constraints. How would you solve the problem? How would you then factor in the $\neq$ constraints into your solution? Let's first only consider the $\geq$ and $\leq$ constraints. The integers satisfying those two constraints will be some contiguous interval $[l, r]$. To find $[l,r]$, for each $\geq x$ constraint, we do $l := \max{(l, x)}$ and for each $\leq x$ constraint, we do $r := \min{(r, x)}$. Now, for each $\neq x$ constraint, we check if $x$ is in $[l, r]$. If so, we subtract one from the answer (remember that there are no duplicate constraints). Let the total number of times we subtract be $s$. Then our answer is $\max{(r-l+1-s,0)}$. The time complexity of this solution is $O(n)$.
[ "brute force", "greedy", "math" ]
800
#include <bits/stdc++.h> using namespace std; void solve(){ int n; cin >> n; int l = 1; int r = 1e9; int s = 0; vector<int> neq; for (int i = 0; i < n; i++){ int a, x; cin >> a >> x; if (a == 1) l = max(l, x); if (a == 2) r = min(r, x); if (a == 3) neq.push_back(x); } for (int x : neq) if (x >= l and x <= r) s++; cout<<max(r - l + 1 - s, 0)<<"\n"; } int main(){ ios_base::sync_with_stdio(0); cin.tie(0); int tc; cin >> tc; while (tc--) solve(); }
1920
B
Summation Game
Alice and Bob are playing a game. They have an array $a_1, a_2,\ldots,a_n$. The game consists of two steps: - First, Alice will remove \textbf{at most} $k$ elements from the array. - Second, Bob will multiply \textbf{at most} $x$ elements of the array by $-1$. Alice wants to maximize the sum of elements of the array while Bob wants to minimize it. Find the sum of elements of the array after the game if both players play optimally.
What is the optimal strategy for Bob? It is optimal for Bob to negate the $x$ largest elements of the array. So what should Alice do? It is optimal for Bob to negate the $x$ largest elements of the array. So in order to minimize the damage Bob will do, Alice should always remove some number of largest elements. To solve the problem, we can sort the array and iterate over $i$ ($0 \leq i \leq k$) where $i$ is the number of elements Alice removes. For each $i$, we know that Alice will remove the $i$ largest elements of the array and Bob will then negate the $x$ largest remaining elements. So the sum at the end can be calculated quickly with prefix sums. The time complexity is $O(n \log n)$ because of sorting.
[ "games", "greedy", "math", "sortings" ]
1,100
#include <bits/stdc++.h> using namespace std; void solve(){ int n, k, x; cin >> n >> k >> x; int A[n + 1] = {}; for (int i = 1; i <= n; i++) cin >> A[i]; sort(A + 1, A + n + 1, greater<int>()); for (int i = 1; i <= n; i++) A[i] += A[i - 1]; int ans = -1e9; for (int i = 0; i <= k; i++) ans = max(ans, A[n] - 2 * A[min(i + x, n)] + A[i]); cout<<ans<<"\n"; } int main(){ ios_base::sync_with_stdio(0); cin.tie(0); int tc; cin >> tc; while (tc--) solve(); }
1920
C
Partitioning the Array
Allen has an array $a_1, a_2,\ldots,a_n$. For every positive integer $k$ that is a divisor of $n$, Allen does the following: - He partitions the array into $\frac{n}{k}$ disjoint subarrays of length $k$. In other words, he partitions the array into the following subarrays: $$[a_1,a_2,\ldots,a_k],[a_{k+1}, a_{k+2},\ldots,a_{2k}],\ldots,[a_{n-k+1},a_{n-k+2},\ldots,a_{n}]$$ - Allen earns one point if there exists some positive integer $m$ ($m \geq 2$) such that if he replaces every element in the array with its remainder when divided by $m$, then all subarrays will be identical. Help Allen find the number of points he will earn.
Try to solve the problem for just two integers $x$ and $y$. Under what $m$ are they equal (modulo $m$)? How can we use the previous hint and gcd to solve the problem? For some $x$ and $y$, let's try to find all $m$ such that $x \bmod m \equiv y \bmod m$. We can rearrange the equation into $(x-y) \equiv 0 \pmod m$. Thus, if $m$ is a factor of $|x-y|$, then $x$ and $y$ will be equal modulo $m$. Let's solve for some $k$. A valid partition exists if there exists some $m>1$ such that the following is true: $a_1 \equiv a_{1+k} \pmod m$ $a_2 \equiv a_{2+k} \pmod m$ ... $a_{n-k} \equiv a_{n} \pmod m$ The first condition $a_1 \equiv a_{1+k} \pmod m$ is satisfied if $m$ is a factor of $|a_1-a_{1+k}|$. The second condition $a_2 \equiv a_{2+k} \pmod m$ is satisfied if $m$ is a factor of $|a_2-a_{2+k}|$. And so on... Thus, all conditions are satisfied if $m$ is a factor of: $|a_1-a_{1+k}|, |a_2-a_{2+k}|,...,|a_{n-k}-a_n|$ In other words, all conditions are satisfied if $m$ is a factor of: $\gcd(|a_1-a_{1+k}|, |a_2-a_{2+k}|,...,|a_{n-k}-a_n|)$ So a valid $m$ exists for some $k$ if the aforementioned $\gcd$ is greater than $1$. We can iterate over all possible $k$ (remember that $k$ is a divisor of $n$) and solve for each $k$ to get our answer. The time complexity of this will be $O((n + \log n) \cdot d(n))$, where $d(n)$ is the number of divisors of $n$. Note that each pass through the array takes $n + \log n$ time because the running gcd either stays the same or at least halves each time it changes, and it can halve only logarithmically many times.
[ "brute force", "math", "number theory" ]
1,600
#include <bits/stdc++.h> using namespace std; void solve(){ int n; cin >> n; int A[n]; for (int &i : A) cin >> i; int ans = 0; for (int k = 1; k <= n; k++){ if (n % k == 0){ int g = 0; for (int i = 0; i + k < n; i++) g = __gcd(g, abs(A[i + k] - A[i])); ans += (g != 1); } } cout<<ans<<"\n"; } int main(){ ios_base::sync_with_stdio(0); cin.tie(0); int tc; cin >> tc; while (tc--) solve(); }
1920
D
Array Repetition
Jayden has an array $a$ which is initially empty. There are $n$ operations of two types he must perform in the given order. - Jayden appends an integer $x$ ($1 \leq x \leq n$) to the end of array $a$. - Jayden appends $x$ copies of array $a$ to the end of array $a$. In other words, array $a$ becomes $[a,\underbrace{a,\ldots,a}_{x}]$. It is guaranteed that he has done at least one operation of the first type before this. Jayden has $q$ queries. For each query, you must tell him the $k$-th element of array $a$. The elements of the array are numbered from $1$.
For some query, try to trace your way back to where the $k$-th number was added. Here's an example of tracing back: $\underbrace{[l_1, l_2,...l_{x}]}_{\text{length } x} \underbrace{[l_1, l_2,...l_{x}]}_{\text{length } x} \underbrace{[l_1, l_2,...l_{x}]}_{\text{length } x} ... \underbrace{[l_1, \mathbf{l_2},...l_{x}]}_{\text{length } x}$ Suppose the $k$-th element is the bolded $l_2$. Finding the $k$-th element is equivalent to finding the ($k \bmod x$)-th element (unless $k \bmod x$ is $0$). First, let's precalculate some things: $lst_i=\text{last element after performing the first $i$ operations}$ $dp_i=\text{number of elements after the first $i$ operations}$ Now, let's try answering some query $k$. If we have some $dp_i=k$ then the answer is $lst_i$. Otherwise, let's find the first $i$ such that $dp_i > k$. This $i$ will be a repeat operation and our answer will lie within one of the repetitions. Our list at this point will look like: $\underbrace{[l_1, l_2,...l_{dp_{i-1}}]}_{\text{length } dp_{i-1}} \underbrace{[l_1, l_2,...l_{dp_{i-1}}]}_{\text{length } dp_{i-1}} \underbrace{[l_1, l_2,...l_{dp_{i-1}}]}_{\text{length } dp_{i-1}} ... \underbrace{[l_1, \mathbf{l_2},...l_{dp_{i-1}}]}_{\text{length } dp_{i-1}}$ Let the $k$-th element be the bolded $l_2$ of the final repetition. As you can see, finding the $k$-th element is equivalent to finding the $(k \bmod dp_{i-1})$-th element. Thus, we should do $k:=k \bmod dp_{i-1}$ and repeat! But there is one more case! If $k \equiv 0 \pmod {dp_{i-1}}$ then the answer is $lst_{i-1}$. At this point there are 2 ways we can go about solving this: Notice that after $\log{(\max{k})}$ operations of the second type, the number of elements will exceed $\max{k}$. So we only care about the first $\log{(\max{k})}$ operations of the second type. Thus, iterate through the $\log{(\max{k})}$ operations of the second type backwards and perform the casework described above. 
This leads to an $O(n+q\log{(\max{k})})$ solution or an $O(n+q(\log{(\max{k})}+\log n))$ solution depending on implementation details. Observe that $k:=k \bmod dp_{i-1}$ will reduce $k$ by at least half. If we repeatedly binary search for the first $i$ such that $dp_i \geq k$, and then do $k:=k \bmod dp_{i-1}$ (or stop if it's one of the other cases), then each query will take $O(\log n\log k)$ time so the total time complexity will be $O(n+q\log n\log {(\max{k})})$.
[ "binary search", "brute force", "dsu", "implementation", "math" ]
1,900
#include <bits/stdc++.h> using namespace std; #define ll long long void solve(){ int n, q; cin >> n >> q; ll dp[n + 1] = {}; int lstAdd[n + 1] = {}; for (int i = 1; i <= n; i++){ int a, v; cin >> a >> v; if (a == 1){ lstAdd[i] = v; dp[i] = dp[i - 1] + 1; } else{ lstAdd[i] = lstAdd[i - 1]; dp[i] = ((v + 1) > 2e18 / dp[i - 1]) ? (ll)2e18 : dp[i - 1] * (v + 1); } } while (q--){ ll k; cin >> k; while (true){ int pos = lower_bound(dp + 1, dp + n + 1, k) - dp; if (dp[pos] == k){ cout<<lstAdd[pos]<<" \n"[q == 0]; break; } if (k % dp[pos - 1] == 0){ cout<<lstAdd[pos - 1]<<" \n"[q == 0]; break; } k %= dp[pos - 1]; } } } int main(){ ios_base::sync_with_stdio(0); cin.tie(0); int tc; cin >> tc; while (tc--) solve(); }
1920
E
Counting Binary Strings
Patrick calls a substring$^\dagger$ of a binary string$^\ddagger$ good if this substring contains exactly one 1. Help Patrick count the number of binary strings $s$ such that $s$ contains exactly $n$ good substrings and has no good substring of length strictly greater than $k$. Note that substrings are differentiated by their location in the string, so if $s =$ 1010 you should count both occurrences of 10. $^\dagger$ A string $a$ is a substring of a string $b$ if $a$ can be obtained from $b$ by the deletion of several (possibly, zero or all) characters from the beginning and several (possibly, zero or all) characters from the end. $^\ddagger$ A binary string is a string that only contains the characters 0 and 1.
How do you count the number of good substrings in a string? We can count the number of good substrings in a string with the counting contribution technique. Now try to solve this problem with dynamic programming. $\frac{n}{1} + \frac{n}{2} + \cdots + \frac{n}{n} = O(n\log n)$ Let's first solve the problem where we are given some string $s$ and must count the number of good substrings. To do this we use the technique of counting contributions. For every $1$ in $s$, we find the number of good substrings containing that $1$. Consider the following example: $\underbrace{00001}_{a_1} \! \! \! \, \underbrace{ \; \, \! 0001}_{a_2} \! \! \! \, \underbrace{ \; \, \! 00000001}_{a_3} \! \! \! \, \underbrace{ \; \, \! 0001}_{a_4} \! \! \! \, \underbrace{ \; \, \! 000}_{a_5}$ The number of good substrings in this example is $a_1 a_2 + a_2 a_3 + a_3 a_4 + a_4 a_5$. We can create such an array for any string $s$ and the number of good substrings of $s$ is the sum of the products of adjacent elements of the array. This motivates us to reformulate the problem. Instead, we count the number of arrays $a_1,a_2,...,a_m$ such that every element is positive and the sum of the products of adjacent elements is exactly equal to $n$. Furthermore, every pair of adjacent elements should have sum minus $1$ less than or equal to $k$. We can solve this with dynamic programming. $dp_{i,j} = \text{number of arrays with sum $i$ and last element $j$}$ $\displaystyle dp_{i, j} = \sum_{p=1}^{\min({\lfloor \frac{i}{j} \rfloor},\, k-j+1)}{dp_{i - j \cdot p,p}}$ The key observation is that we only have to iterate $p$ up to $\lfloor \frac{i}{j} \rfloor$ (since if $p$ is any greater, $j \cdot p$ will exceed $i$). At $j=1$, we will iterate over at most $\lfloor \frac{i}{1} \rfloor$ values of $p$. At $j=2$, we will iterate over at most $\lfloor \frac{i}{2} \rfloor$ values of $p$. 
In total, at each $i$, we will iterate over at most $\lfloor \frac{i}{1} \rfloor + \lfloor \frac{i}{2} \rfloor +\cdots + \lfloor \frac{i}{i} \rfloor \approx i \log i$ values of $p$. Thus, the time complexity of our solution is $O(nk\log n)$.
[ "combinatorics", "dp", "math" ]
2,100
#include <bits/stdc++.h> using namespace std; const int md = 998244353; void solve(){ int n, k; cin >> n >> k; int dp[n + 1][k + 1] = {}; int ans = 0; fill(dp[0] + 1, dp[0] + k + 1, 1); for (int sum = 1; sum <= n; sum++){ for (int cur = 1; cur <= k; cur++){ for (int prv = 1; cur * prv <= sum and cur + prv - 1 <= k; prv++) dp[sum][cur] = (dp[sum][cur] + dp[sum - cur * prv][prv]) % md; if (sum == n) ans = (ans + dp[sum][cur]) % md; } } cout<<ans<<"\n"; } int main(){ ios_base::sync_with_stdio(0); cin.tie(0); int tc; cin >> tc; while (tc--) solve(); }
1920
F1
Smooth Sailing (Easy Version)
\textbf{The only difference between the two versions of this problem is the constraint on $q$. You can make hacks only if both versions of the problem are solved.} Thomas is sailing around an island surrounded by the ocean. The ocean and island can be represented by a grid with $n$ rows and $m$ columns. The rows are numbered from $1$ to $n$ from top to bottom, and the columns are numbered from $1$ to $m$ from left to right. The position of a cell at row $r$ and column $c$ can be represented as $(r, c)$. Below is an example of a valid grid. \begin{center} {\small Example of a valid grid} \end{center} There are three types of cells: island, ocean and underwater volcano. Cells representing the island are marked with a '#', cells representing the ocean are marked with a '.', and cells representing an underwater volcano are marked with a 'v'. It is guaranteed that there is at least one island cell and at least one underwater volcano cell. It is also guaranteed that the set of all island cells forms a single connected component$^{\dagger}$ and the set of all ocean cells and underwater volcano cells forms a single connected component. Additionally, it is guaranteed that there are no island cells at the edge of the grid (that is, at row $1$, at row $n$, at column $1$, and at column $m$). Define a round trip starting from cell $(x, y)$ as a path Thomas takes which satisfies the following conditions: - The path starts and ends at $(x, y)$. - If Thomas is at cell $(i, j)$, he can go to cells $(i+1, j)$, $(i-1, j)$, $(i, j-1)$, and $(i, j+1)$ as long as the destination cell \textbf{is an ocean cell or an underwater volcano cell} and is still inside the grid. Note that it is allowed for Thomas to visit the same cell multiple times in the same round trip. - The path must go around the island and fully encircle it. 
Some path $p$ fully encircles the island if it is impossible to go from an island cell to a cell on the grid border by only traveling \textbf{to adjacent on a side or diagonal} cells without visiting a cell on path $p$. In the image below, the path starting from $(2, 2)$, going to $(1, 3)$, and going back to $(2, 2)$ the other way does \textbf{not} fully encircle the island and is not considered a round trip. \begin{center} {\small Example of a path that does \textbf{not} fully encircle the island} \end{center} The safety of a round trip is the minimum Manhattan distance$^{\ddagger}$ from a cell on the round trip to an underwater volcano (note that the presence of island cells does not impact this distance). You have $q$ queries. A query can be represented as $(x, y)$ and for every query, you want to find the maximum safety of a round trip starting from $(x, y)$. It is guaranteed that $(x, y)$ is an ocean cell or an underwater volcano cell. $^{\dagger}$A set of cells forms a single connected component if from any cell of this set it is possible to reach any other cell of this set by moving only through the cells of this set, each time going to a cell \textbf{with a common side}. $^{\ddagger}$Manhattan distance between cells $(r_1, c_1)$ and $(r_2, c_2)$ is equal to $|r_1 - r_2| + |c_1 - c_2|$.
Use the fact that some path $p$ fully encircles the island if it is impossible to go from an island cell to a cell on the border by only travelling to adjacent or diagonal cells without touching a cell on path $p$. Binary search! For each non-island cell $(i, j)$, let $d_{i,j}$ be the minimum Manhattan distance of cell $(i, j)$ to an underwater volcano. We can find all $d_{i,j}$ with a multisource BFS from all underwater volcanoes. The safety of a round trip is the smallest value of $d_{u,v}$ over all $(u, v)$ in the path. For each query, binary search on the answer $k$ - we can only visit cell ($i, j$) if $d_{i,j} \geq k$. Now, let's mark all cells ($i, j$) ($d_{i,j} \geq k$) reachable from ($x, y$). There exists a valid round trip if it is not possible to go from an island cell to a border cell without touching a marked cell. The time complexity of this solution is $O(nm \log{(n+m)})$ per query.
[ "binary search", "brute force", "data structures", "dfs and similar", "dsu", "graphs", "shortest paths" ]
2,500
#include <bits/stdc++.h> using namespace std; const int mx = 3e5 + 5; const int diAdj[4] = {-1, 0, 1, 0}, djAdj[4] = {0, -1, 0, 1}; const int diDiag[8] = {0, 0, -1, 1, -1, -1, 1, 1}, djDiag[8] = {-1, 1, 0, 0, -1, 1, 1, -1}; int n, m, q, islandi, islandj; string A[mx]; vector<int> dist[mx]; vector<bool> reachable[mx], islandVis[mx]; queue<pair<int, int>> bfsQ; bool inGrid(int i, int j){ return i >= 0 and i < n and j >= 0 and j < m; } bool onBorder(int i, int j){ return i == 0 or i == n - 1 or j == 0 or j == m - 1; } void getReach(int i, int j, int minVal){ if (!inGrid(i, j) or reachable[i][j] or dist[i][j] < minVal or A[i][j] == '#') return; reachable[i][j] = true; for (int dir = 0; dir < 4; dir++) getReach(i + diAdj[dir], j + djAdj[dir], minVal); } bool reachBorder(int i, int j){ if (!inGrid(i, j) or reachable[i][j] or islandVis[i][j]) return false; if (onBorder(i, j)) return true; islandVis[i][j] = true; bool ok = false; for (int dir = 0; dir < 8; dir++) ok |= reachBorder(i + diDiag[dir], j + djDiag[dir]); return ok; } bool existsRoundTrip(int x, int y, int minVal){ // Reset for (int i = 0; i < n; i++){ reachable[i] = vector<bool>(m, false); islandVis[i] = vector<bool>(m, false); } // Get all valid cells you can reach from (x, y) getReach(x, y, minVal); // Check if the valid cells you can reach from (x, y) blocks the island from the border return !reachBorder(islandi, islandj); } int main(){ ios_base::sync_with_stdio(0); cin.tie(0); cin >> n >> m >> q; for (int i = 0; i < n; i++){ cin >> A[i]; dist[i] = vector<int>(m, 1e9); for (int j = 0; j < m; j++){ if (A[i][j] == 'v'){ dist[i][j] = 0; bfsQ.push({i, j}); } if (A[i][j] == '#'){ islandi = i; islandj = j; } } } // Multisource BFS to find min distance to volcano while (bfsQ.size()){ auto [i, j] = bfsQ.front(); bfsQ.pop(); for (int dir = 0; dir < 4; dir++){ int ni = i + diAdj[dir], nj = j + djAdj[dir]; if (inGrid(ni, nj) and dist[i][j] + 1 < dist[ni][nj]){ dist[ni][nj] = dist[i][j] + 1; bfsQ.push({ni, nj}); } } } 
while (q--){ int x, y; cin >> x >> y; x--; y--; int L = 0, H = n + m; while (L < H){ int M = (L + H + 1) / 2; existsRoundTrip(x, y, M) ? L = M : H = M - 1; } cout<<L<<"\n"; } }
1920
F2
Smooth Sailing (Hard Version)
\textbf{The only difference between the two versions of this problem is the constraint on $q$. You can make hacks only if both versions of the problem are solved.} Thomas is sailing around an island surrounded by the ocean. The ocean and island can be represented by a grid with $n$ rows and $m$ columns. The rows are numbered from $1$ to $n$ from top to bottom, and the columns are numbered from $1$ to $m$ from left to right. The position of a cell at row $r$ and column $c$ can be represented as $(r, c)$. Below is an example of a valid grid. \begin{center} {\small Example of a valid grid} \end{center} There are three types of cells: island, ocean and underwater volcano. Cells representing the island are marked with a '#', cells representing the ocean are marked with a '.', and cells representing an underwater volcano are marked with a 'v'. It is guaranteed that there is at least one island cell and at least one underwater volcano cell. It is also guaranteed that the set of all island cells forms a single connected component$^{\dagger}$ and the set of all ocean cells and underwater volcano cells forms a single connected component. Additionally, it is guaranteed that there are no island cells at the edge of the grid (that is, at row $1$, at row $n$, at column $1$, and at column $m$). Define a round trip starting from cell $(x, y)$ as a path Thomas takes which satisfies the following conditions: - The path starts and ends at $(x, y)$. - If Thomas is at cell $(i, j)$, he can go to cells $(i+1, j)$, $(i-1, j)$, $(i, j-1)$, and $(i, j+1)$ as long as the destination cell \textbf{is an ocean cell or an underwater volcano cell} and is still inside the grid. Note that it is allowed for Thomas to visit the same cell multiple times in the same round trip. - The path must go around the island and fully encircle it. 
Some path $p$ fully encircles the island if it is impossible to go from an island cell to a cell on the grid border by only traveling \textbf{to adjacent on a side or diagonal} cells without visiting a cell on path $p$. In the image below, the path starting from $(2, 2)$, going to $(1, 3)$, and going back to $(2, 2)$ the other way does \textbf{not} fully encircle the island and is not considered a round trip. \begin{center} {\small Example of a path that does \textbf{not} fully encircle the island} \end{center} The safety of a round trip is the minimum Manhattan distance$^{\ddagger}$ from a cell on the round trip to an underwater volcano (note that the presence of island cells does not impact this distance). You have $q$ queries. A query can be represented as $(x, y)$ and for every query, you want to find the maximum safety of a round trip starting from $(x, y)$. It is guaranteed that $(x, y)$ is an ocean cell or an underwater volcano cell. $^{\dagger}$A set of cells forms a single connected component if from any cell of this set it is possible to reach any other cell of this set by moving only through the cells of this set, each time going to a cell \textbf{with a common side}. $^{\ddagger}$Manhattan distance between cells $(r_1, c_1)$ and $(r_2, c_2)$ is equal to $|r_1 - r_2| + |c_1 - c_2|$.
How do we check if a point is inside a polygon using a ray? If you draw a line from an island cell extending all the way to the right, an optimal round trip will cross this line an odd number of times. What can your state be? How can we simplify this problem down into finding the path that maximizes the minimum node? How can we solve this classic problem? For each non-island cell $(i, j)$, let $d_{i,j}$ be the minimum Manhattan distance of cell $(i, j)$ to an underwater volcano. We can find all $d_{i,j}$ with a multisource BFS from all underwater volcanoes. The safety of a round trip is the smallest value of $d_{u,v}$ over all $(u, v)$ in the path. Consider any island cell. We can take inspiration from how we check whether a point is in a polygon - if a point is inside the polygon then a ray starting from the point and going in any direction will intersect the polygon an odd number of times. Draw an imaginary line along the top border of the cell and extend it all the way to the right of the grid. We can observe that an optimal round trip will always cross the line an odd number of times. Using this observation, we can let our state be $(\text{row}, \, \text{column}, \, \text{parity of the number of times we crossed the line})$. Naively, we can binary search for our answer and BFS to check if $(x, y, 0)$ and $(x, y, 1)$ are connected. This solves the easy version of the problem. To fully solve this problem, we can add states (and their corresponding edges to already added states) one at a time from highest $d$ to lowest $d$. For each query $(x, y)$, we want to find the first time when $(x, y, 0)$ and $(x, y, 1)$ become connected. This is a classic DSU with small-to-large merging problem. In short, we drop a token labeled with the index of the query at both $(x, y, 0)$ and $(x, y, 1)$. Each time we merge, we also merge the sets of tokens small to large and check if merging has caused two tokens of the same label to be in the same component. 
The time complexity of our solution is $O(nm \log{(nm)} + q\log^2 q)$ with the $\log{(nm)}$ coming from sorting the states or edges. Note that there exists a $O((nm \cdot \alpha{(nm)} + q) \cdot \log{(n+m)})$ parallel binary search solution as well as a $O(nm \log{(nm)} + q\log{(nm)})$ solution that uses LCA queries on the Kruskal's reconstruction tree or min path queries on the MSTs. In fact, with offline LCA queries, we can reduce the complexity to $O(nm \cdot \alpha{(nm)} + q)$.
[ "binary search", "data structures", "dsu", "geometry", "graphs", "trees" ]
3,000
#include <bits/stdc++.h> using namespace std; const int mx = 3e5 + 5, di[4] = {-1, 0, 1, 0}, dj[4] = {0, -1, 0, 1}; int n, m, q, id, linei, linej, par[mx * 4], dep[mx], up[mx * 4][21], val[mx * 4]; string A[mx]; queue<pair<int, int>> bfsQ; vector<int> dist[mx], adj[mx * 4]; vector<array<int, 3>> edges; int enc(int i, int j, bool crossParity){ // Note that nodes are 1 indexed return 1 + i * m + j + crossParity * n * m; } bool inGrid(int i, int j){ return i >= 0 and i < n and j >= 0 and j < m; } int getR(int i){ return i == par[i] ? i : par[i] = getR(par[i]); } void merge(int a, int b, int w){ a = getR(a); b = getR(b); if (a == b) return; adj[id].push_back(a); adj[id].push_back(b); val[id] = w; par[a] = par[b] = id; id++; } void dfs(int i){ for (int l = 1; l < 21; l++) up[i][l] = up[up[i][l - 1]][l - 1]; for (int to : adj[i]){ if (to != up[i][0]){ up[to][0] = i; dep[to] = dep[i] + 1; dfs(to); } } } int qry(int x, int y){ if (dep[x] < dep[y]) swap(x, y); for (int l = 20, jmp = dep[x] - dep[y]; ~l; l--){ if (jmp & (1 << l)){ x = up[x][l]; } } if (x == y) return val[x]; for (int l = 20; ~l; l--){ if (up[x][l] != up[y][l]){ x = up[x][l]; y = up[y][l]; } } return val[up[x][0]]; } int main(){ ios_base::sync_with_stdio(0); cin.tie(0); cin >> n >> m >> q; for (int i = 0; i < n; i++){ cin >> A[i]; for (int j = 0; j < m; j++){ dist[i].push_back(1e9); if (A[i][j] == 'v'){ dist[i][j] = 0; bfsQ.push({i, j}); } if (A[i][j] == '#'){ linei = i; linej = j; } } } // Multisource BFS to find min distance to volcano while (bfsQ.size()){ auto [i, j] = bfsQ.front(); bfsQ.pop(); for (int dir = 0; dir < 4; dir++){ int ni = i + di[dir], nj = j + dj[dir]; if (inGrid(ni, nj) and dist[i][j] + 1 < dist[ni][nj]){ dist[ni][nj] = dist[i][j] + 1; bfsQ.push({ni, nj}); } } } // Get the edges for (int i = 0; i < n; i++){ for (int j = 0; j < m; j++){ // Look at cells to the up and left (so dir = 0 and dir = 1) for (int dir = 0; dir < 2; dir++){ int ni = i + di[dir], nj = j + dj[dir]; if (inGrid(ni, nj) 
and A[i][j] != '#' and A[ni][nj] != '#'){ int w = min(dist[i][j], dist[ni][nj]); // Crosses the line if (i == linei and ni == linei - 1 and j > linej){ edges.push_back({w, enc(i, j, 0), enc(ni, nj, 1)}); edges.push_back({w, enc(i, j, 1), enc(ni, nj, 0)}); } // Doesn't cross the line else{ edges.push_back({w, enc(i, j, 0), enc(ni, nj, 0)}); edges.push_back({w, enc(i, j, 1), enc(ni, nj, 1)}); } } } } } // We merge from largest w to smallest sort(edges.begin(), edges.end(), greater<array<int, 3>>()); // Init DSU stuff id = n * m * 2 + 1; iota(par, par + mx * 4, 0); // Merge for (auto [w, u, v] : edges) merge(u, v, w); // DFS to construct the Kruskal's reconstruction trees for (int i = n * m * 4; i; i--) if (!up[i][0]) dfs(i); // Answer queries via LCA queries for (int i = 0; i < q; i++){ int x, y; cin >> x >> y; x--; y--; cout<<qry(enc(x, y, 0), enc(x, y, 1))<<"\n"; } }
1921
A
Square
A square of positive (strictly greater than $0$) area is located on the coordinate plane, with sides parallel to the coordinate axes. You are given the coordinates of its corners, in random order. Your task is to find the area of the square.
There are many ways to solve this problem, the simplest way is as follows. Let's find the minimum and maximum coordinate $x$ among all the corners of the square. The difference of these coordinates will give us the length of the square side $d = x_{max} - x_{min}$. After that, we can calculate the area of the square as $s = d^2$.
[ "greedy", "math" ]
800
t = int(input()) for _ in range(t): a = [[int(x) for x in input().split()] for i in range(4)] x = [p[0] for p in a] dx = max(x) - min(x) print(dx * dx)
1921
B
Arranging Cats
In order to test the hypothesis about the cats, the scientists must arrange the cats in the boxes in a specific way. Of course, they would like to test the hypothesis and publish a sensational article as quickly as possible, because they are too engrossed in the next hypothesis about the phone's battery charge. Scientists have $n$ boxes in which cats may or may not sit. Let the current state of the boxes be denoted by the sequence $b_1, \dots, b_n$: $b_i = 1$ if there is a cat in box number $i$, and $b_i = 0$ otherwise. Fortunately, the unlimited production of cats has already been established, so in one day, the scientists can perform one of the following operations: - Take a new cat and place it in a box (for some $i$ such that $b_i = 0$, assign $b_i = 1$). - Remove a cat from a box and send it into retirement (for some $i$ such that $b_i = 1$, assign $b_i = 0$). - Move a cat from one box to another (for some $i, j$ such that $b_i = 1, b_j = 0$, assign $b_i = 0, b_j = 1$). It has also been found that some boxes were immediately filled with cats. Therefore, the scientists know the initial position of the cats in the boxes $s_1, \dots, s_n$ and the desired position $f_1, \dots, f_n$. Due to the large amount of paperwork, the scientists do not have time to solve this problem. Help them for the sake of science and indicate the minimum number of days required to test the hypothesis.
Denote the number of indices $i$ such that $s_i = 0$ and $f_i = 1$ as $add\_amnt$. Since it is impossible to change $0$ to $1$ at two different positions in one turn, the answer is not less than $add\_amnt$. Analogously, if $rmv\_amnt$ is the number of indices such that $s_i = 1$ and $f_i = 0$, the answer is not less than $rmv\_amnt$. It turns out that the answer is actually equal to $\max (add\_amnt, rmv\_amnt)$. We can simply apply the move operation from an index $i$ with $s_i = 1, f_i = 0$ to an index $j$ with $s_j = 0, f_j = 1$ while both types of indices exist (that is $\min (rmv\_amnt, add\_amnt)$ operations) and then add or remove the rest of the unsatisfied indices (that is exactly $|rmv\_amnt - add\_amnt|$ operations).
[ "greedy", "implementation" ]
800
t = int(input()) for _ in range(t): n = int(input()) start = [int(x) for x in input()] finish = [int(x) for x in input()] pairs = list(zip(start, finish)) add_amnt = sum(int(a < b) for a, b in pairs) rmv_amnt = sum(int(a > b) for a, b in pairs) print(max(add_amnt, rmv_amnt))
1921
C
Sending Messages
Stepan is a very busy person. Today he needs to send $n$ messages at moments $m_1, m_2, \dots m_n$ ($m_i < m_{i + 1}$). Unfortunately, by the moment $0$, his phone only has $f$ units of charge left. At the moment $0$, the phone is turned on. The phone loses $a$ units of charge for each unit of time it is on. Also, at any moment, Stepan can turn off the phone and turn it on later. This action consumes $b$ units of energy each time. Consider turning on and off to be instantaneous, so you can turn it on at moment $x$ and send a message at the same moment, and vice versa, send a message at moment $x$ and turn off the phone at the same moment. If at any point the charge level drops to $0$ (becomes $\le 0$), it is impossible to send a message at that moment. Since all messages are very important to Stepan, he wants to know if he can send all the messages without the possibility of charging the phone.
The most challenging part of this problem was probably carefully understanding the problem statement. The problem can be reduced to the following. There are $n$ time intervals: from 0 to $m_1$, from $m_1$ to $m_2$, ..., from $m_{n-1}$ to $m_n$. For each interval, we need to find a way to spend as little charge of the phone as possible, and check that the total amount of charge we spend is less than the initial charge of the phone. To spend the minimum amount of charge for one time interval, we can act in one of two ways. Let the length of the interval be $t$. We either leave the phone on and spend $a\cdot t$ units of charge, or turn off the phone at the very beginning of the interval and turn it on at the very end, spending $b$ units of charge. The total time complexity of this solution is $O(n)$.
[ "greedy", "math" ]
900
t = int(input()) for _ in range(t): n, f, a, b = map(int, input().split()) m = [0] + [int(x) for x in input().split()] for i in range(1, n + 1): f -= min(a * (m[i] - m[i - 1]), b) print('YES' if f > 0 else 'NO')
1921
D
Very Different Array
Petya has an array $a_i$ of $n$ integers. His brother Vasya became envious and decided to make his own array of $n$ integers. To do this, he found $m$ integers $b_i$ ($m\ge n$), and now he wants to choose some $n$ integers of them and arrange them in a certain order to obtain an array $c_i$ of length $n$. To avoid being similar to his brother, Vasya wants to make his array as different as possible from Petya's array. Specifically, he wants the total difference $D = \sum_{i=1}^{n} |a_i - c_i|$ to be as large as possible. Help Vasya find the maximum difference $D$ he can obtain.
Let's sort the array $a$ in ascending order, and the array $b$ in descending order. Notice that small elements of array $a$ need to be matched with large elements of array $b$ and vice versa. Thus, for some $k$, we need to take the prefix of array $b$ of length $k$ (matched with the $k$ smallest elements of $a$) and a suffix of length $n-k$, and form array $c$ from them. We iterate over the value of $k$ from $0$ to $n$, and each time $k$ increases by $1$, only one element of array $c$ changes, so we can recalculate the value of $D$ in $O(1)$. We select the maximum value of $D$ over all $k$ to get the answer. This solution works in $O(n)$ time plus the initial sorting in $O(n \log n)$. There are other ways to solve the problem in the same time complexity.
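The sweep over $k$ with $O(1)$ recalculation can be sketched in Python as follows (a hedged illustration mirroring the editorial's idea; the function name is an assumption):

```python
def max_difference(a, b):
    """Max sum of |a_i - c_i| where c is n of the m values of b.

    Sort a ascending and b descending; for each k, match the k largest
    values of b to the k smallest values of a, and the n - k smallest
    values of b to the remaining (largest) values of a.
    """
    n, m = len(a), len(b)
    a = sorted(a)
    b = sorted(b, reverse=True)
    # Start with k = 0: c is the suffix of descending b (its n smallest values).
    c = b[m - n:]
    s = sum(abs(ci - ai) for ci, ai in zip(c, a))
    best = s
    for k in range(n):
        # Moving from k to k + 1 changes exactly one element of c.
        s -= abs(c[k] - a[k])
        c[k] = b[k]
        s += abs(c[k] - a[k])
        best = max(best, s)
    return best
```

Each step of the loop swaps one element of $c$ from the suffix choice to the prefix choice, so the whole sweep is linear after sorting.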
[ "data structures", "greedy", "sortings", "two pointers" ]
1,100
#include <bits/stdc++.h> using namespace std; struct test { void solve() { int n, m; cin >> n >> m; vector<int> a(n); for (int i = 0; i < n; i++) cin >> a[i]; vector<int> b(m); for (int i = 0; i < m; i++) cin >> b[i]; sort(a.begin(), a.end()); sort(b.rbegin(), b.rend()); vector<int> c(n); long long s = 0; for (int i = 0; i < n; i++) { c[i] = b[m - n + i]; s += abs(c[i] - a[i]); } long long res = 0; for (int k = 0; k <= n; k++) { res = max(res, s); if (k < n) { s -= abs(c[k] - a[k]); c[k] = b[k]; s += abs(c[k] - a[k]); } } cout << res << "\n"; } }; int main() { ios::sync_with_stdio(false); int n; cin >> n; for (int i = 0; i < n; i++) { test().solve(); } return 0; }
1921
E
Eat the Chip
Alice and Bob are playing a game on a checkered board. The board has $h$ rows, numbered from top to bottom, and $w$ columns, numbered from left to right. Both players have a chip each. Initially, Alice's chip is located at the cell with coordinates $(x_a, y_a)$ (row $x_a$, column $y_a$), and Bob's chip is located at $(x_b, y_b)$. It is guaranteed that the initial positions of the chips do not coincide. Players take turns making moves, with Alice starting. On her turn, Alice can move her chip one cell down or one cell down-right or down-left (diagonally). Bob, on the other hand, moves his chip one cell up, up-right, or up-left. It is not allowed to make moves that go beyond the board boundaries. More formally, if at the beginning of Alice's turn she is in the cell with coordinates $(x_a, y_a)$, then she can move her chip to one of the cells $(x_a + 1, y_a)$, $(x_a + 1, y_a - 1)$, or $(x_a + 1, y_a + 1)$. Bob, on his turn, from the cell $(x_b, y_b)$ can move to $(x_b - 1, y_b)$, $(x_b - 1, y_b - 1)$, or $(x_b - 1, y_b + 1)$. The new chip coordinates $(x', y')$ must satisfy the conditions $1 \le x' \le h$ and $1 \le y' \le w$. \begin{center} Example game state. Alice plays with the white chip, Bob with the black one. Arrows indicate possible moves. \end{center} A player immediately wins if they place their chip in a cell occupied by the other player's chip. If either player cannot make a move (Alice—if she is in the last row, i.e. $x_a = h$, Bob—if he is in the first row, i.e. $x_b = 1$), the game immediately ends in a draw. What will be the outcome of the game if both opponents play optimally?
First, let's note that the difference $x_b - x_a$ decreases by exactly one on each turn (both Alice's and Bob's). In the end, if one of the players was able to win, $x_b - x_a = 0$. In particular, that means that if $x_b - x_a$ is initially odd, then only Alice has a chance of winning the match, and if it is even, only Bob does. Suppose that $x_b - x_a$ is initially even (so the outcome is either Bob's win or a draw). If $x_a \ge x_b$, the answer is immediately a draw. Otherwise, the players will make $t = (x_b - x_a) / 2$ moves each before $x_a = x_b$. If at some point during these $2t$ moves Bob can achieve $y_a = y_b$, he is winning, as he can continue with symmetrical responses to Alice's moves. If $y_a > y_b$ and Bob cannot reach the right border ($w > y_b + t$), Alice can always choose the rightmost option for herself, and after each of the $2t$ moves $y_a$ will remain greater than $y_b$, which means Bob cannot win. Otherwise, if Bob always chooses the rightmost option for himself, he will eventually achieve $y_a = y_b$. The case when $y_a$ is initially less than $y_b$, as well as the case when Alice has a chance to win ($x_b - x_a$ is odd), can be covered in a similar way.
[ "brute force", "games", "greedy", "math" ]
1,600
def solve(): h, w, xA, yA, xB, yB = map(int, input().split()) if (xA - xB) % 2 == 0: winner = "Bob" if xA >= xB: win = False elif yA == yB: win = True else: if yA < yB: n_turns = yB - 1 else: n_turns = w - yB win = xB - 2 * n_turns >= xA else: winner = "Alice" xA += 1 yA += 0 if yB - yA == 0 else 1 if yB - yA > 0 else -1 if xA > xB: win = False elif yA == yB: win = True else: if yA < yB: n_turns = w - yA else: n_turns = yA - 1 win = xB - 2 * n_turns >= xA print(winner if win else "Draw") t = int(input()) for _ in range(t): solve()
1921
F
Sum of Progression
You are given an array $a$ of $n$ numbers. There are also $q$ queries of the form $s, d, k$. For each query $q$, find the sum of elements $a_s + a_{s+d} \cdot 2 + \dots + a_{s + d \cdot (k - 1)} \cdot k$. In other words, for each query, it is necessary to find the sum of $k$ elements of the array with indices starting from the $s$-th, taking steps of size $d$, multiplying it by the serial number of the element in the resulting sequence.
The key idea is that we know how to quickly calculate the sum of $(i - l + 1) \cdot a_i$ for $l \le i \le r$: we calculate the prefix sums of $i \cdot a_i$ and of $a_i$ for $1 \le i \le n$, then take the difference between the $r$-th and $(l-1)$-th prefix sums of $i \cdot a_i$ and subtract the difference between the $r$-th and $(l-1)$-th prefix sums of $a_i$ multiplied by $l - 1$. This way, queries with step $1$ will be processed in $O(n + q)$ time, where $q$ is the total number of queries with step $1$. This idea can be generalized as follows: we can precalculate all the prefix sums and all the prefix sums with multiplication by index for every step $d_0 \le d$ in $O(n \cdot d)$ time, and then process every query with step $d_0 \le d$ in $O(1)$ time. For all other queries, we can process a single query in $O(n / d)$ time, because the gap between consecutive indices of the resulting sequence is greater than $d$. Combining these two ideas, we get a solution with time complexity $O(n \cdot d + q \cdot n / d)$. Setting $d = \sqrt{q}$, we get a time complexity of $O(n \sqrt{q})$. The model solution fixes the value $d = 322$, which is approximately $\sqrt{MAX}$. Interestingly, this solution can be generalized to calculate the sums $(i + 1) ^ 2 \cdot a_{s + d \cdot i}$.
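For a single fixed step $d$, the precomputation and $O(1)$ query can be sketched like this (a hedged Python illustration of the idea with hypothetical helper names; the model solution builds such tables for every step up to $322$):

```python
def build_tables(a, d):
    """Prefix sums along step-d chains (0-indexed array a).

    pref[j]  = sum of a[i] over i < j with i == j (mod d)
    prefw[j] = same sum, with each a[i] weighted by i // d + 1
               (its 1-based position within its chain)
    Both tables are shifted so that pref[j + d] is the first entry
    that includes a[j].
    """
    n = len(a)
    pref = [0] * (n + d)
    prefw = [0] * (n + d)
    for j in range(n):
        pref[j + d] = pref[j] + a[j]
        prefw[j + d] = prefw[j] + a[j] * (j // d + 1)
    return pref, prefw

def query(pref, prefw, s, d, k):
    """Sum of a[s] * 1 + a[s + d] * 2 + ... + a[s + d * (k - 1)] * k."""
    last = s + d * k  # table index just past the last chain element
    # Subtracting (s // d) * (chain sum) renormalizes the weights so that
    # position s gets weight 1.
    return prefw[last] - prefw[s] - (pref[last] - pref[s]) * (s // d)
```

This mirrors the renormalization trick from the step-$1$ case: the weighted prefix difference counts positions from the start of the chain, so the plain prefix difference times $s / d$ is subtracted to shift the weights.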
[ "brute force", "data structures", "dp", "implementation", "math" ]
1,900
#include <bits/stdc++.h> using namespace std; long long precalc[322][200322]; long long precalci[322][200322]; void solve() { int n, q; cin >> n >> q; vector<long long> a(n); int pivot = 1; while (pivot * pivot < n) { pivot++; } for (int i = 0; i < n; i++) { cin >> a[i]; } for (int i = 0; i < pivot; i++) { for (int j = 0; j <= i; j++) { precalc[i][j] = 0LL; precalci[i][j] = 0LL; } for (int j = 0; j < n; j++) { precalci[i][j + i + 1] = precalci[i][j] + a[j] * (j / (i + 1) + 1); precalc[i][j + i + 1] = precalc[i][j] + a[j]; } } while (q--) { int s, d; long long k; long long ans = 0; cin >> s >> d >> k; s--; if (d > pivot) { for (int i = s; i <= s + (k - 1) * d; i += d) { ans += a[i] * ((i - s) / d + 1); } cout << ans << " "; continue; } long long last = s + d * k - d; int first = s; cout << precalci[d - 1][last + d] - precalci[d - 1][first] - (precalc[d - 1][last + d] - precalc[d - 1][first]) * (first / d) << ' '; } } int main() { ios_base::sync_with_stdio(false); cin.tie(nullptr); cout.tie(nullptr); int tests; cin >> tests; while (tests--) { solve(); if (tests) cout << "\n"; } return 0; }
1921
G
Mischievous Shooter
Once the mischievous and wayward shooter named Shel found himself on a rectangular field of size $n \times m$, divided into unit squares. Each cell either contains a target or not. Shel only had a lucky shotgun with him, with which he can shoot in one of the four directions: right-down, left-down, left-up, or right-up. When fired, the shotgun hits all targets in the chosen direction, the Manhattan distance to which does not exceed a fixed constant $k$. The Manhattan distance between two points $(x_1, y_1)$ and $(x_2, y_2)$ is equal to $|x_1 - x_2| + |y_1 - y_2|$. \begin{center} {\small Possible hit areas for $k = 3$.} \end{center} Shel's goal is to hit as many targets as possible. Please help him find this value.
First of all, notice that we can consider only the case where the triangle of affected cells is oriented left-up. To solve the remaining cases, we can solve the problem for four different board orientations and choose the maximum result from the obtained results. We will store several arrays with prefix sums: for the sum of all numbers in the cells of the current column above the given one, as well as for the sum of all numbers in the cells up and to the right of the given one. Using such prefix sums, we can easily recalculate the answer in cell $(i, j)$ through the answer in cell $(i, j-1)$ in $O(1)$. To do this, we need to add the sum of the cells marked in green and subtract the sum of the cells marked in red. The total time complexity of this solution is $O(nm)$. Note that the problem could also be solved in time $O(nm\min(n, m))$ by computing prefix sums in the minimum of the directions and calculating the sums in the triangle in $O(\min(n, m))$. This solution also fits within the time limit.
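For validating a fast implementation on tiny grids, a direct $O(nm k^2)$ brute force over the four orientations can be written as follows (a hedged sketch; the function name is an assumption, and it is intended only for checking, not for the actual constraints):

```python
def best_shot_bruteforce(grid, k):
    """Try every apex cell and every diagonal orientation, counting the
    targets '#' whose Manhattan distance from the apex is at most k
    within the chosen quadrant."""
    n, m = len(grid), len(grid[0])
    best = 0
    for i in range(n):
        for j in range(m):
            for si, sj in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
                cnt = 0
                for a in range(k + 1):
                    for b in range(k + 1 - a):  # a + b <= k
                        ni, nj = i + si * a, j + sj * b
                        if 0 <= ni < n and 0 <= nj < m and grid[ni][nj] == '#':
                            cnt += 1
                best = max(best, cnt)
    return best
```

Checking a clever $O(nm)$ solution against this brute force on random small grids is a standard way to catch off-by-one errors in the triangle boundaries.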
[ "brute force", "data structures", "divide and conquer", "dp", "implementation" ]
2,200
//ciao_chill #include<bits/stdc++.h> using namespace std; #define int long long int n, m, k; vector<vector<int>> a; bool prov(int i, int j) { return 0 <= i && i < n && 0 <= j && j < m; } int ans() { int cnt = 0; int dp[n][m]; int pref[n][m]; int pref_up[n][m]; for (int i = 0; i < n; i++) { for (int j = 0; j < m; j++) { pref_up[i][j] = a[i][j]; if (prov(i - 1, j)) pref_up[i][j] += pref_up[i - 1][j]; } } for (int i = 0; i < n; i++) { for (int j = m - 1; j >= 0; j--) { pref[i][j] = a[i][j]; if (prov(i - 1, j + 1)) pref[i][j] += pref[i - 1][j + 1]; } } for (int i = 0; i < n; i++) { for (int j = 0; j < m; j++) { dp[i][j] = pref_up[i][j]; if (prov(i - k, j)) dp[i][j] -= pref_up[i - k][j]; if (prov(i, j - 1)) dp[i][j] += dp[i][j - 1]; if (j < k) { int i1 = j - k + i; if (i1 >= 0) dp[i][j] -= pref[i1][0]; } else dp[i][j] -= pref[i][j - k]; if (prov(i - k, j)) dp[i][j] += pref[i - k][j]; cnt = max(cnt, dp[i][j]); } } return cnt; } void solve() { cin >> n >> m >> k; k++; char c; bool st[n][m]; a.resize(n); for (int i = 0; i < n; i++) { a[i].resize(m); } for (int i = 0; i < n; i++) { for (int j = 0; j < m; j++) { cin >> c; st[i][j] = (c == '#'); a[i][j] = st[i][j]; } } int mx = ans(); for (int i = 0; i < n; i++) { for (int j = 0; j < m; j++) { a[i][j] = st[n - i - 1][j]; } } mx = max(mx, ans()); for (int i = 0; i < n; i++) { for (int j = 0; j < m; j++) { a[i][j] = st[i][m - j - 1]; } } mx = max(mx, ans()); for (int i = 0; i < n; i++) { for (int j = 0; j < m; j++) { a[i][j] = st[n - i - 1][m - j - 1]; } } mx = max(mx, ans()); cout << mx << '\n'; } signed main() { cin.tie(nullptr); cout.tie(nullptr); ios_base::sync_with_stdio(false); int tt; cin >> tt; while (tt--) solve(); return 0; }
1922
A
Tricky Template
You are given an integer $n$ and three strings $a, b, c$, each consisting of $n$ lowercase Latin letters. Let a template be a string $t$ consisting of $n$ lowercase and/or uppercase Latin letters. The string $s$ matches the template $t$ if the following conditions hold for all $i$ from $1$ to $n$: - if the $i$-th letter of the template is \textbf{lowercase}, then $s_i$ must be \textbf{the same} as $t_i$; - if the $i$-th letter of the template is \textbf{uppercase}, then $s_i$ must be \textbf{different} from the \textbf{lowercase version} of $t_i$. For example, if there is a letter 'A' in the template, you cannot use the letter 'a' in the corresponding position of the string. Accordingly, the string doesn't match the template if the condition doesn't hold for at least one $i$. Determine whether there exists a template $t$ such that the strings $a$ and $b$ match it, while the string $c$ does not.
In order for a string not to match the template, there must be at least one position $i$ from $1$ to $n$ where the condition doesn't hold. Let's iterate over this position and check if it is possible to pick a letter in the template such that $a_i$ and $b_i$ match, while $c_i$ doesn't match. If $a_i$ or $b_i$ equals $c_i$, then it is definitely not possible: since $c_i$ must not match, the equal letter wouldn't match either. And if both are different from $c_i$, then we can always pick the uppercase letter $c_i$ to prohibit only that letter. Great, now the string $c$ definitely doesn't match the template. Now we should guarantee that the strings $a$ and $b$ match. Complete the template as follows: for all other positions, we pick uppercase letters that differ from both $a_i$ and $b_i$. Obviously, among $26$ letters, there will always be such a letter. Therefore, the solution is as follows: iterate over the positions and check that there is at least one where $a_i$ differs from $c_i$ and $b_i$ differs from $c_i$. If there is, the answer is "YES". Otherwise, the answer is "NO". Overall complexity: $O(n)$ for each testcase.
[ "constructive algorithms", "implementation", "strings" ]
800
for _ in range(int(input())): n = int(input()) a = input() b = input() c = input() print("YES" if any([a[i] != c[i] and b[i] != c[i] for i in range(n)]) else "NO")
1922
B
Forming Triangles
You have $n$ sticks, numbered from $1$ to $n$. The length of the $i$-th stick is $2^{a_i}$. You want to choose \textbf{exactly} $3$ sticks out of the given $n$ sticks, and form a \textbf{non-degenerate} triangle out of them, using the sticks as the sides of the triangle. A triangle is called non-degenerate if its area is \textbf{strictly} greater than $0$. You have to calculate the number of ways to choose exactly $3$ sticks so that a triangle can be formed out of them. Note that the order of choosing sticks does not matter (for example, choosing the $1$-st, $2$-nd and $4$-th stick is the same as choosing the $2$-nd, $4$-th and $1$-st stick).
At first, let's figure out which sticks can be used to make a triangle. Let's denote the length of the longest stick as $2^{s_0}$, the middle stick as $2^{s_1}$ and the shortest stick as $2^{s_2}$ (in other words, $s$ is an array of the three exponents of the sticks of the triangle, sorted in non-ascending order). Important fact: $s_0 = s_1$. It's true because if $s_0 > s_1$, then $2^{s_0} \ge 2^{s_1} + 2^{s_2}$ and the triangle is degenerate. At the same time, the value of $s_2$ can be any integer from $0$ to $s_0$. So all we have to do is calculate the number of triples of sticks such that there are two or three maximums in the triple. Let's create an array $cnt$, where $cnt_i$ is the number of sticks of length $2^i$, and the array $sumCnt$, where $sumCnt_i$ is the number of sticks no longer than $2^i$. Now let's iterate over the length of the longest stick in the triangle (denote its exponent as $m$). Then there are two cases: All three sticks in the triangle are equal. Then the number of such triangles is the binomial coefficient $\frac{cnt_m \cdot (cnt_m - 1) \cdot (cnt_m - 2)}{6}$; Exactly two sticks (the two longest) are equal. Then the number of such triangles is $\frac{cnt_m \cdot (cnt_m - 1)}{2} \cdot sumCnt_{m - 1}$.
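The counting described above can be sketched in Python as follows (a hedged illustration; `count_triangles` is a hypothetical name, and a running total replaces the explicit $sumCnt$ array):

```python
from collections import Counter

def count_triangles(a):
    """Count triples of sticks with lengths 2^a_i forming a non-degenerate
    triangle: the two largest exponents in the triple must be equal."""
    cnt = Counter(a)
    res = 0
    smaller = 0  # number of sticks with a strictly smaller exponent so far
    for _, c in sorted(cnt.items()):
        res += c * (c - 1) * (c - 2) // 6      # all three exponents equal
        res += c * (c - 1) // 2 * smaller      # two equal maxima + one smaller
        smaller += c
    return res
```

Iterating over exponents in increasing order lets the running `smaller` play the role of $sumCnt_{m-1}$ without precomputing the whole array.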
[ "combinatorics", "constructive algorithms", "math", "sortings" ]
1,200
#include <bits/stdc++.h> using namespace std; int t; int main() { cin >> t; for (int tc = 0; tc < t; ++tc) { int n; cin >> n; map<int, int> numOfLens; for (int i = 0; i < n; ++i){ int x; cin >> x; ++numOfLens[x]; } long long res = 0; int sum = 0; for (auto it : numOfLens) { long long cnt = it.second; if(cnt >= 3) res += cnt * (cnt - 1) * (cnt - 2) / 6; if(cnt >= 2) res += cnt * (cnt - 1) / 2 * sum; sum += cnt; } cout << res << endl; } return 0; }
1922
C
Closest Cities
There are $n$ cities located on the number line, the $i$-th city is in the point $a_i$. The coordinates of the cities are given in ascending order, so $a_1 < a_2 < \dots < a_n$. The distance between two cities $x$ and $y$ is equal to $|a_x - a_y|$. For each city $i$, let's define the \textbf{closest} city $j$ as the city such that the distance between $i$ and $j$ is not greater than the distance between $i$ and each other city $k$. For example, if the cities are located in points $[0, 8, 12, 15, 20]$, then: - the closest city to the city $1$ is the city $2$; - the closest city to the city $2$ is the city $3$; - the closest city to the city $3$ is the city $4$; - the closest city to the city $4$ is the city $3$; - the closest city to the city $5$ is the city $4$. The cities are located in such a way that for every city, the closest city is unique. For example, it is impossible for the cities to be situated in points $[1, 2, 3]$, since this would mean that the city $2$ has two closest cities ($1$ and $3$, both having distance $1$). You can travel between cities. Suppose you are currently in the city $x$. Then you can perform one of the following actions: - travel to any other city $y$, paying $|a_x - a_y|$ coins; - travel to the city which is the closest to $x$, paying $1$ coin. You are given $m$ queries. In each query, you will be given two cities, and you have to calculate the minimum number of coins you have to spend to travel from one city to the other city.
Important observation: the answer will not change if you are allowed to move only to adjacent cities. It is true because if you move to a non-adjacent city, you can split the path to that city into parts without increasing its cost. So, the shortest way from $x$ to $y$ (consider the case $x < y$) is to move from city $x$ to city $x+1$ for $1$ coin if it's possible or for $a_{x+1} - a_x$ if it's not. Then move from city $x+1$ to city $x+2$ for $1$ coin if it's possible, or for $a_{x+2} - a_{x+1}$ coins if it's not. And so on, until we reach the city $y$. Now let's calculate two arrays: $l$ and $r$. $r_i$ is equal to the minimum amount of coins to reach the city $i$ from city $1$ (from left to right), $l_i$ is equal to the minimum amount of coins to reach the city $i$ from city $n$ (from right to left). Both of these arrays can be precalculated just like the arrays of prefix sums are calculated. For example, $r_1 = 0$, $r_2 = dist(1, 2)$, $r_3 = r_2 + dist(2, 3)$, $r_4 = r_3 + dist(3, 4)$ and so on. Here, $dist(s, t)$ denotes the cheapest way to travel between two adjacent cities $s$ and $t$. Then, the cheapest way between two cities $x$ and $y$ can be calculated in the same way as the sum on subarray is calculated for the prefix sum array. There are two cases: If $x < y$ then the answer is $r_y - r_x$; If $x > y$ then the answer is $l_y - l_x$;
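The two prefix arrays can be built as sketched below (a hedged Python illustration with 0-indexed cities; function names are assumptions). The assertions below use the example from the statement, $a = [0, 8, 12, 15, 20]$:

```python
def travel_costs(a):
    """Build r (cumulative cost moving rightward from city 1) and
    l (cumulative cost moving leftward from city n).

    A step between adjacent cities costs 1 coin when the destination is
    the closest city of the origin, otherwise the full distance. The
    problem guarantees the closest city is unique, so no ties occur.
    """
    n = len(a)
    INF = float('inf')

    def closest_is_right(i):
        left = a[i] - a[i - 1] if i > 0 else INF
        right = a[i + 1] - a[i] if i + 1 < n else INF
        return right < left

    r = [0] * n
    for i in range(1, n):
        r[i] = r[i - 1] + (1 if closest_is_right(i - 1) else a[i] - a[i - 1])
    l = [0] * n
    for i in range(n - 2, -1, -1):
        l[i] = l[i + 1] + (1 if not closest_is_right(i + 1) else a[i + 1] - a[i])
    return l, r

def cost(l, r, x, y):
    """Minimum coins to travel from city x to city y (0-indexed)."""
    return r[y] - r[x] if x < y else l[y] - l[x]
```

As with ordinary prefix sums, each query is then answered with a single subtraction.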
[ "greedy", "implementation", "math" ]
1,300
#include <bits/stdc++.h> using namespace std; const int N = 200'000; const int INF = 1'000'000'009; int t; char type(const vector <int>& a, int id) { int distL = (id == 0? INF : a[id] - a[id - 1]); int distR = (id + 1 == a.size()? INF : a[id + 1] - a[id]); if(distL < distR) return 'L'; if(distL > distR) return 'R'; assert(false); } int main() { ios::sync_with_stdio(false); cin >> t; for (int tc = 0; tc < t; ++tc) { int n; cin >> n; vector <int> a(n); for (int i = 0; i < n; ++i) cin >> a[i]; vector <int> l(n), r(n); for (int i = 1; i < n; ++i) r[i] = r[i - 1] + (type(a, i - 1) == 'R'? 1 : a[i] - a[i - 1]); for (int i = n - 2; i >= 0; --i) l[i] = l[i + 1] + (type(a, i + 1) == 'L'? 1 : a[i + 1] - a[i]); int m; cin >> m; for (int i = 0; i < m; ++i) { int x, y; cin >> x >> y; --x, --y; if (x < y) cout << r[y] - r[x] << endl; else cout << l[y] - l[x] << endl; } } return 0; }
1922
D
Berserk Monsters
Monocarp is playing a computer game (yet again). Guess what is he doing? That's right, killing monsters. There are $n$ monsters in a row, numbered from $1$ to $n$. The $i$-th monster has two parameters: attack value equal to $a_i$ and defense value equal to $d_i$. In order to kill these monsters, Monocarp put a berserk spell on them, so they're attacking each other instead of Monocarp's character. The fight consists of $n$ rounds. Every round, the following happens: - first, every alive monster $i$ deals $a_i$ damage to the \textbf{closest} alive monster to the left (if it exists) and the \textbf{closest} alive monster to the right (if it exists); - then, every alive monster $j$ which received more than $d_j$ damage \textbf{during this round} dies. I. e. the $j$-th monster dies if and only if its defense value $d_j$ is \textbf{strictly less} than the total damage it received \textbf{this round}. For each round, calculate the number of monsters that will die during that round.
It is important to note that if during the $j$-th round the $i$-th monster did not die and none of its alive neighbors died, then there is no point in checking this monster in the $(j+1)$-th round. Therefore, we can solve the problem as follows: let's maintain a list of candidates (those who can die) for the current round; if a monster dies in the current round, then add its neighbors to the list of candidates for the next round. Since killing a monster adds no more than $2$ candidates, the total size of the candidate lists over all rounds does not exceed $3n$ (the size of the list for the first round is $n$, plus no more than $2$ for each killed monster). Therefore, we can simply iterate through these lists to check whether each monster will be killed. The only problem left is finding the alive neighbors of a monster (to check whether it is killed during the round). This can be done by keeping the indices of the alive monsters in an ordered set: std::set allows us to remove the killed ones and find neighboring monsters in $O(\log{n})$. Thus, the solution works in $O(n\log{n})$.
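As an alternative to an ordered set, the alive neighbors can be maintained with doubly linked prev/next arrays, which keeps the same candidate-list algorithm but with $O(1)$ neighbor lookups (a hedged Python sketch; names and the sentinel setup are assumptions):

```python
def berserk_rounds(a, d):
    """Per-round death counts, using prev/next links instead of a set.

    Sentinels 0 and n + 1 have attack 0 and infinite defense, so border
    monsters need no special cases.
    """
    n = len(a)
    A = [0] + list(a) + [0]
    D = [float('inf')] + list(d) + [float('inf')]
    prv = list(range(-1, n + 1))   # prv[i] = i - 1
    nxt = list(range(1, n + 3))    # nxt[i] = i + 1
    alive = [True] * (n + 2)
    cand = set(range(1, n + 1))
    res = []
    for _ in range(n):
        # Damage is dealt simultaneously: compute all deaths first.
        dead = {i for i in cand if alive[i] and A[prv[i]] + A[nxt[i]] > D[i]}
        res.append(len(dead))
        for i in dead:
            alive[i] = False
        cand = set()
        for i in dead:
            p, q = prv[i], nxt[i]
            nxt[p], prv[q] = q, p
            cand.update((p, q))
        # Only alive, non-sentinel neighbors are candidates next round.
        cand = {i for i in cand if alive[i] and 1 <= i <= n}
    return res
```

Unlinking all of a round's dead monsters before the next round is safe in any order: each removal splices its current neighbors together, so survivors end up linked across any run of dead monsters.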
[ "brute force", "data structures", "dsu", "implementation", "math" ]
1,900
#include <bits/stdc++.h> using namespace std; int main() { ios::sync_with_stdio(false); cin.tie(0); int t; cin >> t; while (t--) { int n; cin >> n; vector<int> a(n + 2), d(n + 2, INT_MAX); for (int i = 1; i <= n; ++i) cin >> a[i]; for (int i = 1; i <= n; ++i) cin >> d[i]; set<int> lft, cur; for (int i = 0; i < n + 2; ++i) { lft.insert(i); cur.insert(i); } for (int z = 0; z < n; ++z) { set<int> del, ncur; for (int i : cur) { auto it = lft.find(i); if (it == lft.end()) continue; int prv = *prev(it); int nxt = *next(it); if (a[prv] + a[nxt] > d[i]) { del.insert(i); ncur.insert(prv); ncur.insert(nxt); } } cout << del.size() << ' '; for (auto it : del) lft.erase(it); cur = ncur; } cout << '\n'; } }
1922
E
Increasing Subsequences
Let's recall that an increasing subsequence of the array $a$ is a sequence that can be obtained from it by removing some elements without changing the order of the remaining elements, and the remaining elements are strictly increasing (i. e $a_{b_1} < a_{b_2} < \dots < a_{b_k}$ and $b_1 < b_2 < \dots < b_k$). Note that an empty subsequence is also increasing. You are given a positive integer $X$. Your task is to find an array of integers of length \textbf{at most $200$}, such that it has exactly $X$ increasing subsequences, or report that there is no such array. If there are several answers, you can print any of them. If two subsequences consist of the same elements, but correspond to different positions in the array, they are considered different (for example, the array $[2, 2]$ has two different subsequences equal to $[2]$).
Let's consider one way of constructing the required array. Suppose array $a$ has $x$ increasing subsequences. If we append a new minimum to the end of the array, the number of increasing subsequences becomes $x+1$ (the new element cannot extend any existing increasing subsequence, so only the subsequence consisting of this single element is added). If we append a new maximum to the end of the array, the number of increasing subsequences becomes $2x$ (the new element can extend every existing increasing subsequence). Using these facts, let's define a recursive function $f(x)$ that returns an array with exactly $x$ increasing subsequences. For an odd value of $x$, $f(x) = f(x-1) + min$ (here $+$ denotes appending an element to the end of the array); for an even value of $x$, $f(x) = f(\frac{x}{2}) + max$. Now we need to estimate the number of elements in the array obtained by this algorithm. Note that there cannot be two consecutive operations of the first type ($x \rightarrow x-1$), so over any two consecutive operations the value of $x$ is at least halved. Thus, the size of the array fits within the limit of $200$.
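The construction can be sketched in Python, together with a brute-force counter used only to validate it (both function names are ours):

```python
def build(x):
    """Array with exactly x increasing subsequences (the empty one counts).
    Mirrors f(x): odd x -> f(x-1) + new minimum, even x -> f(x/2) + new maximum;
    base case f(2) = [0]. Assumes x >= 2."""
    ops = []
    while x > 2:
        if x % 2:
            ops.append('min')
            x -= 1
        else:
            ops.append('max')
            x //= 2
    res = [0]                      # [0] has 2 increasing subsequences: [] and [0]
    for op in reversed(ops):
        res.append(min(res) - 1 if op == 'min' else max(res) + 1)
    return res

def count_increasing(a):
    """Number of increasing subsequences of a, including the empty one (O(n^2))."""
    ends = []                      # ends[i] = #increasing subsequences ending at i
    for i, v in enumerate(a):
        ends.append(1 + sum(ends[j] for j in range(i) if a[j] < v))
    return 1 + sum(ends)
```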
[ "bitmasks", "constructive algorithms", "divide and conquer", "greedy", "math" ]
1,800
#include <bits/stdc++.h> using namespace std; vector<int> f(long long x) { vector<int> res; if (x == 2) { res.push_back(0); } else if (x & 1) { res = f(x - 1); res.push_back(*min_element(res.begin(), res.end()) - 1); } else { res = f(x / 2); res.push_back(*max_element(res.begin(), res.end()) + 1); } return res; } int main() { int t; cin >> t; while (t--) { long long x; cin >> x; auto ans = f(x); cout << ans.size() << '\n'; for (int i : ans) cout << i << ' '; cout << '\n'; } }
1922
F
Replace on Segment
You are given an array $a_1, a_2, \dots, a_n$, where each element is an integer from $1$ to $x$. You can perform the following operation with it any number of times: - choose three integers $l$, $r$ and $k$ such that $1 \le l \le r \le n$, $1 \le k \le x$ and \textbf{each} element $a_i$ such that $l \le i \le r$ is different from $k$. Then, for each $i \in [l, r]$, replace $a_i$ with $k$. In other words, you choose a subsegment of the array and an integer from $1$ to $x$ which does not appear in that subsegment, and replace every element in the subsegment with that chosen integer. Your goal is to make all elements in the array equal. What is the minimum number of operations that you have to perform?
First of all, we claim the following: if you apply an operation on a segment, you may treat the resulting segment as one element (i. e. we can "merge" the elements affected by an operation into one). This is quite intuitive, but the formal proof is somewhat long, so if you're not interested in it, feel free to skip the next paragraphs written in italic. Formal proof: suppose we merged several adjacent equal elements into one. Let's show that it doesn't change the answer for the array. Let the array before merging a segment of adjacent equal elements be $a$, and the array after merging be $a'$. We will show that $f(a) = f(a')$, where $f(x)$ is the minimum number of operations to solve the problem on the array $x$. $f(a) \ge f(a')$: suppose we built a sequence of operations that turns all elements of $a$ equal. Consider the segment of adjacent equal elements we merged to get $a'$. Let's discard all elements of that segment, except for the first one, from all operations in the sequence, and remove all operations which now affect zero elements. We will get a valid sequence of operations that turns all elements of $a'$ equal. So, any valid sequence of operations for $a$ can be transformed into a valid sequence of operations for $a'$ (possibly discarding some operations), and that's why $f(a) \ge f(a')$; $f(a) \le f(a')$: suppose we built a sequence of operations that turns all elements of $a'$ equal. It can be transformed into a valid sequence of operations for $a$ if we "expand" the merged element back into the whole segment in every operation that affects it. So, $f(a) \le f(a')$; since $f(a) \ge f(a')$ and $f(a) \le f(a')$, we get $f(a) = f(a')$. This means that after you've done an operation on a segment, the next operations will either affect that whole segment, or not affect any element from the segment at all. 
This allows us to use the following dynamic programming idea: let $dp[l][r][k]$ be the minimum number of operations required to turn all elements on the segment $[l, r]$ into $k$. If we want to transform all elements into $k$, then there are two options: either the last operation will turn the whole segment into $k$, so we need to calculate the number of operations required to get rid of all elements equal to $k$ from the segment; or the segment $[l, r]$ can be split into several segments which we will turn into $k$ separately. The second option can be modeled quite easily: we iterate on the splitting point between two parts $i$, and update $dp[l][r][k]$ with $dp[l][i][k] + dp[i+1][r][k]$. However, the first option is a bit more complicated. Let's introduce a second dynamic programming to our solution: let $dp2[l][r][k]$ be the minimum number of operations to remove all occurrences of $k$ from the segment $[l, r]$. Then, the first option for computing $dp[l][r][k]$ can be implemented by simply updating $dp[l][r][k]$ with $dp2[l][r][k] + 1$. Now, let's show how to calculate $dp2[l][r][k]$. It's quite similar to the first dynamic programming: either the last operation on the segment will turn the whole segment into some other element $m$, so we can iterate on it and update $dp2[l][r][k]$ with $dp[l][r][m]$; or the segment $[l, r]$ can be split into two parts, and we will get rid of elements equal to $k$ from these parts separately (so we update $dp2[l][r][k]$ with $dp2[l][i][k] + dp2[i+1][r][k]$). Okay, it looks like we got a solution working in $O(n^4)$. There's just one problem, though. There are cyclic dependencies in our dynamic programming: $dp[l][r][k]$ depends on $dp2[l][r][k]$; $dp2[l][r][k]$ depends on $dp[l][r][m]$, where $m \ne k$; $dp[l][r][m]$ depends on $dp2[l][r][m]$; $dp2[l][r][m]$ depends on $dp[l][r][k]$. We have to either somehow deal with them, or get rid of them. 
The model solution eliminates these cyclic dependencies as follows: when we need to calculate $dp[l][r][k]$, let's discard all elements equal to $k$ from the ends of the segment (i. e. move $l$ to $l'$ and $r$ to $r'$, where $l'$ and $r'$ are the first and last occurrences of elements not equal to $k$). Similarly, when we need to calculate $dp2[l][r][k]$, let's discard all elements not equal to $k$ from the ends of the segment. It's quite easy to show that these operations won't make the answer worse (if you remove an element from an array, the minimum number of operations to "fix" the array doesn't increase). It's also not that hard to show that this method gets rid of all cyclic dependencies: if we consider the cyclic dependency we described earlier, we can see that the element $a_l$ will be discarded from the segment either when computing $dp[l][r][k]$ (if $a_l = k$) or when computing $dp2[l][r][k]$ (if $a_l \ne k$). That way, we get a dynamic programming solution working in $O(n^4)$.
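The two mutually recursive DPs, including the end-trimming that breaks the cyclic dependencies, can be prototyped with memoized recursion. This is an illustrative sketch for small inputs (names are ours), not the model solution:

```python
import sys
from functools import lru_cache

def min_ops(a, x):
    """Minimum number of operations to make all of a equal; colors are 0..x-1."""
    sys.setrecursionlimit(100000)
    n = len(a)

    @lru_cache(maxsize=None)
    def dp(l, r, k):
        # min operations to turn a[l..r] into all k's
        while l <= r and a[l] == k: l += 1   # trim elements already equal to k
        while l <= r and a[r] == k: r -= 1
        if l > r:
            return 0
        best = dp2(l, r, k) + 1              # last op covers the whole segment
        for i in range(l, r):                # or split into two parts
            best = min(best, dp(l, i, k) + dp(i + 1, r, k))
        return best

    @lru_cache(maxsize=None)
    def dp2(l, r, k):
        # min operations to remove all occurrences of k from a[l..r]
        while l <= r and a[l] != k: l += 1   # trim elements that are not k
        while l <= r and a[r] != k: r -= 1
        if l > r:
            return 0
        best = min(dp(l, r, m) for m in range(x) if m != k)
        for i in range(l, r):
            best = min(best, dp2(l, i, k) + dp2(i + 1, r, k))
        return best

    return min(dp(0, n - 1, k) for k in range(x))
```

The trimming guarantees termination: every pass through the dp → dp2 → dp cycle strictly shrinks the segment, exactly as argued above.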
[ "dp", "graph matchings" ]
2,500
#include <bits/stdc++.h> using namespace std; const int N = 111; int n, k; int a[N]; int nxtx[N][N], prvx[N][N]; int nxtnx[N][N], prvnx[N][N]; int dp1[N][N][N], dp2[N][N][N]; int calc2(int, int, int); int calc1(int l, int r, int x) { l = nxtnx[l][x], r = prvnx[r][x]; if (l > r) return 0; if (dp1[l][r][x] != -1) return dp1[l][r][x]; dp1[l][r][x] = calc2(l, r, x) + 1; for (int i = l; i < r; ++i) dp1[l][r][x] = min(dp1[l][r][x], calc1(l, i, x) + calc1(i + 1, r, x)); return dp1[l][r][x]; } int calc2(int l, int r, int x) { l = nxtx[l][x], r = prvx[r][x]; if (l > r) return 0; if (dp2[l][r][x] != -1) return dp2[l][r][x]; dp2[l][r][x] = n; for (int i = l; i < r; ++i) dp2[l][r][x] = min(dp2[l][r][x], calc2(l, i, x) + calc2(i + 1, r, x)); for (int y = 0; y < k; ++y) if (x != y) dp2[l][r][x] = min(dp2[l][r][x], calc1(l, r, y)); return dp2[l][r][x]; } void solve() { cin >> n >> k; for (int i = 0; i < n; ++i) cin >> a[i], --a[i]; for (int x = 0; x < k; ++x) prvx[0][x] = prvnx[0][x] = -1; for (int i = 0; i < n; ++i) { prvx[i][a[i]] = i; for (int x = 0; x < k; ++x) prvx[i + 1][x] = prvx[i][x]; for (int x = 0; x < k; ++x) if (x != a[i]) prvnx[i][x] = i; for (int x = 0; x < k; ++x) prvnx[i + 1][x] = prvnx[i][x]; } for (int x = 0; x < k; ++x) nxtx[n][x] = nxtnx[n][x] = n; for (int i = n - 1; i >= 0; --i) { for (int x = 0; x < k; ++x) nxtx[i][x] = nxtx[i + 1][x]; nxtx[i][a[i]] = i; for (int x = 0; x < k; ++x) nxtnx[i][x] = nxtnx[i + 1][x]; for (int x = 0; x < k; ++x) if (x != a[i]) nxtnx[i][x] = i; } memset(dp1, -1, sizeof(dp1)); memset(dp2, -1, sizeof(dp2)); int ans = n; for (int x = 0; x < k; ++x) ans = min(ans, calc1(0, n - 1, x)); cout << ans << '\n'; } int main() { int t; cin >> t; while (t--) solve(); }
1923
A
Moving Chips
There is a ribbon divided into $n$ cells, numbered from $1$ to $n$ from left to right. Each cell either contains a chip or is free. You can perform the following operation any number of times (possibly zero): choose a chip and move it to the \textbf{closest free cell to the left}. You can choose any chip that you want, provided that there is at least one free cell to the left of it. When you move the chip, the cell where it was before the operation becomes free. Your goal is to move the chips in such a way that \textbf{they form a single block, without any free cells between them}. What is the minimum number of operations you have to perform?
Denote the position of the leftmost chip as $l$, the position of the rightmost chip as $r$, and the number of chips as $c$. Having all chips in one block without free cells between them means reaching a situation where $r - l = c - 1$. Since $r - l \ge c - 1$ always holds (equality means the chips are packed as tightly as possible), we need to decrease the value of $r-l$ down to $c-1$. One approach is to decrease $r$ every time, i. e. to code a greedy solution that always applies an operation to the current rightmost chip. This is actually one of the correct solutions to the problem, but if we prove it, we can design an even easier solution. Whenever we apply an operation to any chip other than the rightmost one, the value of $r-l$ does not decrease (we either don't change $l$ and $r$ at all, or decrease $l$). But whenever we apply an operation to the rightmost chip, $r$ decreases by exactly $1$: the new rightmost chip will be at position $r-1$, because either a chip is already there before the operation, or the chip from $r$ moves there. So, only operations applied to the rightmost chip decrease $r-l$, and each of them decreases it by exactly $1$. Hence, the answer to the problem is $(r-l) - (c-1)$.
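As a sanity check of the closed-form answer, one can compare it against a breadth-first search over all chip configurations (purely illustrative and exponential in the number of chips; names are ours):

```python
from collections import deque

def min_moves_bfs(cells):
    """BFS over configurations; cells = iterable of occupied positions >= 1."""
    start = frozenset(cells)
    dist = {start: 0}
    q = deque([start])
    while q:
        s = q.popleft()
        if max(s) - min(s) == len(s) - 1:   # single block, no gaps
            return dist[s]
        for p in s:
            f = p - 1
            while f >= 1 and f in s:        # closest free cell to the left of p
                f -= 1
            if f >= 1:
                ns = (s - {p}) | {f}
                if ns not in dist:
                    dist[ns] = dist[s] + 1
                    q.append(ns)

def formula(cells):
    # the editorial's closed form: (r - l) - (c - 1)
    return (max(cells) - min(cells)) - (len(cells) - 1)
```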
[ "greedy", "implementation" ]
800
t = int(input()) for i in range(t): n = int(input()) a = list(map(int, input().split())) l, r = a.index(1), n - a[::-1].index(1) - 1 c = a.count(1) print(r - l - c + 1)
1923
B
Monsters Attack!
You are playing a computer game. The current level of this game can be modeled as a straight line. Your character is in point $0$ of this line. There are $n$ monsters trying to kill your character; the $i$-th monster has health equal to $a_i$ and is initially in the point $x_i$. Every second, the following happens: - first, you fire up to $k$ bullets at monsters. Each bullet targets exactly one monster and decreases its health by $1$. For each bullet, you choose its target arbitrary (for example, you can fire all bullets at one monster, fire all bullets at different monsters, or choose any other combination). Any monster can be targeted by a bullet, regardless of its position and any other factors; - then, all alive monsters with health $0$ or less die; - then, all alive monsters move $1$ point closer to you (monsters to the left of you increase their coordinates by $1$, monsters to the right of you decrease their coordinates by $1$). If any monster reaches your character (moves to the point $0$), you lose. Can you survive and kill all $n$ monsters without letting any of them reach your character?
Let's look at the monsters at distance $1$ (i. e. at positions $1$ and $-1$). We must kill them during the $1$-st second, so their total hp (denote it as $s_1$) should not exceed $k$. If this condition is not met, the answer is NO. Otherwise, $k - s_1$ bullets remain unused after the first second (denote this as $lft$). Now let's look at the monsters at distance $2$. We must kill them no later than the $2$-nd second. We have $k$ bullets for the $2$-nd second plus $lft$ unused bullets from the $1$-st second, so the total hp of the monsters at distance $2$ should not exceed $k+lft$. The situation is the same as for the $1$-st second: if the condition is not met, the answer is NO; otherwise we move on to the next distance with the updated value of $lft$. If all $n$ distances are considered and all conditions are met, the answer is YES. Therefore, we get a solution that works in $O(n)$.
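The described scan can be sketched as follows (a Python sketch with our own function name, assuming $|x_i| \le n$ as the constraints guarantee):

```python
def survives(k, a, x):
    """Can all monsters be killed? a[i] = health, x[i] = nonzero position."""
    n = len(a)
    s = [0] * (n + 1)                 # s[d] = total hp of monsters at distance d
    for hp, pos in zip(a, x):
        s[abs(pos)] += hp
    lft = 0                           # bullets carried over from earlier seconds
    for d in range(1, n + 1):
        lft += k - s[d]               # k new bullets, s[d] hp must be destroyed
        if lft < 0:                   # not enough: monsters at distance d get through
            return False
    return True
```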
[ "dp", "greedy", "implementation" ]
1,100
#include <bits/stdc++.h> using namespace std; int main() { int t; cin >> t; while (t--) { int n, k; cin >> n >> k; vector<int> a(n), x(n); for (auto& it : a) cin >> it; for (auto& it : x) cin >> it; vector<long long> s(n + 1); for (int i = 0; i < n; ++i) s[abs(x[i])] += a[i]; bool ok = true; long long lft = 0; for (int i = 1; i <= n; ++i) { lft += k - s[i]; ok &= (lft >= 0); } cout << (ok ? "YES" : "NO") << '\n'; } }
1923
C
Find B
An array $a$ of length $m$ is considered good if there exists an integer array $b$ of length $m$ such that the following conditions hold: - $\sum\limits_{i=1}^{m} a_i = \sum\limits_{i=1}^{m} b_i$; - $a_i \neq b_i$ for every index $i$ from $1$ to $m$; - $b_i > 0$ for every index $i$ from $1$ to $m$. You are given an array $c$ of length $n$. Each element of this array is greater than $0$. You have to answer $q$ queries. During the $i$-th query, you have to determine whether the subarray $c_{l_{i}}, c_{l_{i}+1}, \dots, c_{r_{i}}$ is good.
At first, let's precalculate the prefix sums and the number of elements equal to one on each prefix (denote them as $sum[]$ and $cnt1[]$). More formally, $sum_i = \sum_{k=1}^{i} c_k$ and $cnt1_i = \sum_{k=1}^{i} is\_1(c_k)$, where the function $is\_1(x)$ returns $1$ if $x$ equals $1$ and $0$ otherwise. For example, if $c = [5, 1, 1, 2, 1, 10]$, then $sum = [0, 5, 6, 7, 9, 10, 20]$ and $cnt1 = [0, 0, 1, 2, 2, 3, 3]$. Now we can answer the query $l$, $r$. For this, let's calculate the sum of the subarray $c_l, \dots, c_r$ and the number of elements equal to $1$ in it (denote them as $cur\_sum$ and $cur\_cnt1$). We can do it with the precalculated arrays: $cur\_sum = sum_r - sum_{l - 1}$; $cur\_cnt1 = cnt1_r - cnt1_{l - 1}$. To answer the query, let's first try to find the array $b$ with the minimum value of $\sum b_i$: for indices $i$ where $c_i = 1$ we have to set $b_i = 2$, and for indices $i$ where $c_i > 1$ we have to set $b_i = 1$. Thus, the minimal sum of the elements of $b$ is equal to $cur\_cnt1 \cdot 2 + (r - l + 1) - cur\_cnt1 = (r - l + 1) + cur\_cnt1$. Note also that a subarray of length $1$ is never good: the equal-sum condition would force $b_1 = a_1$. Otherwise, there are three cases. If this minimal sum is greater than $\sum_{i=l}^{r} c_i$, then the answer is NO. If it is equal to $\sum_{i=l}^{r} c_i$, then the answer is YES. If it is less than $\sum_{i=l}^{r} c_i$, then we can add the excess to the maximal element of $b$, so the answer is YES.
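The precomputation and the per-query check can be sketched as follows (function names are ours):

```python
from itertools import accumulate

def answer_queries(c, queries):
    """Answers (l, r) queries, 1-based inclusive, using the prefix arrays
    described above; True means the subarray is good."""
    sums = [0] + list(accumulate(c))
    cnt1 = [0] + list(accumulate(int(v == 1) for v in c))
    res = []
    for l, r in queries:
        m = r - l + 1                      # subarray length
        cur_sum = sums[r] - sums[l - 1]
        cur_cnt1 = cnt1[r] - cnt1[l - 1]
        # minimal achievable sum of b is m + cur_cnt1;
        # a length-1 subarray is never good
        res.append(m > 1 and m + cur_cnt1 <= cur_sum)
    return res
```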
[ "constructive algorithms", "greedy" ]
1,400
#include <bits/stdc++.h> using namespace std; const int N = 300043; int t; int n, m; int a[N]; long long sum[N]; int cnt1[N]; int main() { ios_base::sync_with_stdio(false); cin >> t; for (int tc = 0; tc < t; ++tc) { cin >> n >> m; memset(sum, 0, sizeof(sum[0]) * (n + 5)); memset(cnt1, 0, sizeof(cnt1[0]) * (n + 5)); for (int i = 0; i < n; ++i) { cin >> a[i]; sum[i + 1] = sum[i] + a[i]; cnt1[i + 1] = cnt1[i] + (a[i] == 1); } for (int i = 0; i < m; ++i) { int l, r; cin >> l >> r; --l; int cur_cnt1 = cnt1[r] - cnt1[l]; long long cur_sum = sum[r] - sum[l]; if((r - l) + cur_cnt1 <= cur_sum && r - l > 1) cout << "YES\n"; else cout << "NO\n"; } } return 0; }
1923
D
Slimes
There are $n$ slimes placed in a line. The slimes are numbered from $1$ to $n$ in order from left to right. The size of the $i$-th slime is $a_i$. Every second, the following happens: \textbf{exactly one} slime eats one of its neighbors and increases its size by the eaten neighbor's size. A slime can eat its neighbor only if it is strictly bigger than this neighbor. If there is no slime which is strictly bigger than one of its neighbors, the process ends. For example, suppose $n = 5$, $a = [2, 2, 3, 1, 4]$. The process can go as follows: - first, the $3$-rd slime eats the $2$-nd slime. The size of the $3$-rd slime becomes $5$, the $2$-nd slime is eaten. - then, the $3$-rd slime eats the $1$-st slime (they are neighbors since the $2$-nd slime is already eaten). The size of the $3$-rd slime becomes $7$, the $1$-st slime is eaten. - then, the $5$-th slime eats the $4$-th slime. The size of the $5$-th slime becomes $5$, the $4$-th slime is eaten. - then, the $3$-rd slime eats the $5$-th slime (they are neighbors since the $4$-th slime is already eaten). The size of the $3$-rd slime becomes $12$, the $5$-th slime is eaten. For each slime, calculate the minimum number of seconds it takes for this slime to be eaten by another slime (among all possible ways the process can go), or report that it is impossible.
Let's solve the problem independently for each slime (denote it as $i$). After any number of seconds, the size of each slime equals the sum of some subarray. In order to eat the $i$-th slime, the "eater" should be its neighbor, so the eater's size equals the sum of a subarray $[j, i-1]$ for some $j < i$ or $[i + 1, j]$ for some $j > i$. It remains to understand which $j$ can be the answer. First of all, the sum of the subarray should be strictly greater than $a_i$. Also, there should be a sequence of operations that combines the chosen segment of slimes into one slime. Such a sequence exists in exactly two cases: the length of the subarray is $1$, or the subarray contains at least two distinct values. It is not difficult to prove that these are the only conditions. If the length is $1$, the subarray already consists of a single slime. If all the slimes have the same size, then no neighboring pair can be combined, which means it is impossible to merge the subarray into one slime. If there are at least two distinct values, there exists an adjacent pair of the form (maximum, non-maximum). After combining such a pair, the result is the unique maximum of the subarray, which can then eat all the other slimes in the subarray. It remains to find such a $j$ that satisfies the aforementioned conditions and is as close to $i$ as possible (the number of required operations is $|i-j|$) faster than iterating over all values of $j$. Notice that if the subarray $[j, i-1]$ contains two distinct values and has sum greater than $a_i$, then so does $[j-1, i-1]$; handling the length-$1$ subarray as a separate check, we can use binary search on $j$. It is enough to precalculate two arrays: the prefix sums, used to find the sum of a subarray, and an array $p$, where $p_i$ is the position of the nearest element to the left of $i$ that differs from $a_i$ (used to determine whether a subarray contains two distinct elements). So we can find the answer for one slime in $O(\log{n})$. 
And the total running time is $O(n\log{n})$.
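The criterion can be cross-checked with a quadratic brute force that scans outward from each slime (the actual solution replaces the inner scan with prefix sums and binary search; names are ours):

```python
def min_seconds(a):
    """For each slime, the minimal number of seconds until it can be eaten,
    or -1: smallest |i - j| such that the segment between j and i (exclusive
    of i) has sum > a[i] and is either a single slime or contains two
    distinct values."""
    n = len(a)
    res = []
    for i in range(n):
        best = None
        for step in (-1, 1):          # try eaters growing on the left / right
            s = 0
            seen = set()
            j = i + step
            while 0 <= j < n:
                s += a[j]
                seen.add(a[j])
                length = abs(j - i)
                # both conditions are monotone in the segment length,
                # so the first success on each side is optimal
                if s > a[i] and (length == 1 or len(seen) > 1):
                    best = length if best is None else min(best, length)
                    break
                j += step
        res.append(-1 if best is None else best)
    return res
```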
[ "binary search", "constructive algorithms", "data structures", "greedy", "two pointers" ]
1,800
#include <bits/stdc++.h> using namespace std; int main() { int t; cin >> t; while (t--) { int n; cin >> n; vector<int> a(n); for (auto& x : a) cin >> x; vector<int> ans(n, n); for (int z = 0; z < 2; ++z) { vector<long long> s(n + 1); for (int i = 0; i < n; ++i) s[i + 1] = s[i] + a[i]; vector<int> p(n, -1); for (int i = 1; i < n; ++i) { int j = (z ? n - i - 1 : i); int l = 1, r = i; while (l <= r) { int m = (l + r) / 2; if (s[i] - s[i - m] > a[i] && p[i - 1] >= i - m) { ans[j] = min(ans[j], m); r = m - 1; } else { l = m + 1; } } if (a[i - 1] > a[i]) ans[j] = 1; p[i] = (a[i] == a[i - 1] ? p[i - 1] : i - 1); } reverse(a.begin(), a.end()); } for (int i = 0; i < n; ++i) cout << (ans[i] == n ? -1 : ans[i]) << ' '; cout << '\n'; } }
1923
E
Count Paths
You are given a tree, consisting of $n$ vertices, numbered from $1$ to $n$. Every vertex is colored in some color, denoted by an integer from $1$ to $n$. A simple path of the tree is called beautiful if: - it consists of at least $2$ vertices; - the first and the last vertices of the path have the same color; - no other vertex on the path has the same color as the first vertex. Count the number of the beautiful simple paths of the tree. Note that paths are considered undirected (i. e. the path from $x$ to $y$ is the same as the path from $y$ to $x$).
Let's consider what the paths passing through some vertex $v$ look like. First, root the tree arbitrarily. Let $u_1, u_2, \dots, u_k$ be the children of $v$. Then, for some color $x$, there are $\mathit{cnt}[u_1][x], \mathit{cnt}[u_2][x], \dots, \mathit{cnt}[u_k][x]$ top-level vertices in their subtrees. Top-level here means that there are no other vertices of color $x$ on the path from them to $u_i$. If the color of $v$ is not $x$, then any pair of top-level vertices from two different children can be combined into a beautiful path. If the color of $v$ is $x$, then each top-level vertex can only be paired with $v$ itself. Moreover, the only top-level vertex of color $x$ in the subtree of $v$ is now $v$ itself. With these ideas, a small-to-large merging technique can be implemented. Store $\mathit{cnt}[v][x]$ for all colors $x$ such that there exist top-level vertices of this color. In order to recalculate $\mathit{cnt}[v]$ from the values of its children, you can first calculate the sum of $\mathit{cnt}$ for each $x$, then replace $\mathit{cnt}[v][c_v]$ with $1$ (regardless of whether it appeared in the children or not). This can be done by adding all values to the values of the largest child of $v$ (largest by the size of its $\mathit{cnt}$, for example). During this process, you can calculate the number of paths as well. The complexity will be $O(n \log^2 n)$ for each testcase, and that should pass freely. There's also an idea for a faster solution. Two words: virtual trees. Basically, you can build a virtual tree of all vertices of each color. Now, it contains the vertices colored $x$ and some auxiliary vertices. The answer for that color can be calculated with some sort of dynamic programming. Similar to the first solution, for each vertex, store the number of top-level vertices of color $x$ in its subtree. All the calculations are exactly the same. You can build all virtual trees in $O(n \log n)$ in total.
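The small-to-large solution can be sketched in Python, with plain dictionaries playing the role of the maps (function names are ours):

```python
def count_beautiful_paths(colors, edges):
    """colors[v] is the color of vertex v (0-indexed); edges are (u, v) pairs.
    cnt[v] maps a color to the number of top-level vertices of that color
    in the subtree of v."""
    import sys
    n = len(colors)
    sys.setrecursionlimit(10 * n + 1000)
    g = [[] for _ in range(n)]
    for u, v in edges:
        g[u].append(v)
        g[v].append(u)
    cnt = [None] * n
    ans = 0

    def dfs(v, p):
        nonlocal ans
        big = None                       # child with the largest cnt dictionary
        for u in g[v]:
            if u != p:
                dfs(u, v)
                if big is None or len(cnt[big]) < len(cnt[u]):
                    big = u
        cnt[v] = cnt[big] if big is not None else {}
        for u in g[v]:
            if u != p and u != big:      # merge smaller children into the largest
                for x, y in cnt[u].items():
                    if x != colors[v]:
                        ans += cnt[v].get(x, 0) * y   # pair across children
                    cnt[v][x] = cnt[v].get(x, 0) + y
        ans += cnt[v].get(colors[v], 0)  # pair top-level vertices of c_v with v
        cnt[v][colors[v]] = 1            # v is the only top-level vertex of c_v now

    dfs(0, -1)
    return ans
```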
[ "data structures", "dfs and similar", "dp", "dsu", "graphs", "trees" ]
2,000
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; int n; vector<int> a; vector<vector<int>> g; long long ans; vector<map<int, int>> cnt; void dfs(int v, int p = -1){ int bst = -1; for (int u : g[v]) if (u != p){ dfs(u, v); if (bst == -1 || cnt[bst].size() < cnt[u].size()) bst = u; } for (int u : g[v]) if (u != p && u != bst){ for (auto it : cnt[u]){ int x = it.first, y = it.second; if (x != a[v]) ans += cnt[bst][x] * 1ll * y; cnt[bst][x] += y; } } if (bst != -1) swap(cnt[bst], cnt[v]); ans += cnt[v][a[v]]; cnt[v][a[v]] = 1; } int main() { cin.tie(0); ios::sync_with_stdio(false); int t; cin >> t; while (t--){ int n; cin >> n; a.resize(n); forn(i, n) cin >> a[i]; g.assign(n, {}); forn(_, n - 1){ int v, u; cin >> v >> u; --v, --u; g[v].push_back(u); g[u].push_back(v); } ans = 0; cnt.assign(n, {}); dfs(0); cout << ans << '\n'; } return 0; }
1923
F
Shrink-Reverse
You are given a binary string $s$ of length $n$ (a string consisting of $n$ characters, and each character is either 0 or 1). Let's look at $s$ as at a binary representation of some integer, and name that integer as the value of string $s$. For example, the value of 000 is $0$, the value of 01101 is $13$, "100000" is $32$ and so on. You can perform at most $k$ operations on $s$. Each operation should have one of the two following types: - SWAP: choose two indices $i < j$ in $s$ and swap $s_i$ with $s_j$; - SHRINK-REVERSE: delete all leading zeroes from $s$ and reverse $s$. For example, after you perform SHRINK-REVERSE on 000101100, you'll get 001101.What is the minimum value of $s$ you can achieve by performing at most $k$ operations on $s$?
In order to solve the task, let's observe and prove several facts. Fact 0: suppose you have two strings $s_1$ and $s_2$ without leading zeroes. The value of $s_1$ is smaller than the value of $s_2$ if either $|s_1| < |s_2|$, or $|s_1| = |s_2|$ and $s_1 < s_2$ lexicographically. Fact 1: there is an optimal strategy where you first perform all swaps and only after that perform reverses. Proof: take some optimal strategy and split its operations into blocks separated by reverse operations. Look at the last block of swaps: if there is no reverse before it, we have found the required strategy. Otherwise, we can "push" all swaps from this block into the previous block (with indices adjusted accordingly), since all positions that exist at the current moment also existed earlier. The answer won't increase after this "push", so we can transform any optimal strategy into one of the required form. Fact 2: there is no need to make more than $2$ reverses. Since we first swap and then reverse, making $3$ reverses is equivalent to making $1$ reverse; analogously, making $4$ reverses is equivalent to making $2$ reverses. Fact 3: there is no need to make more than $1$ reverse. If, after the first reverse, we grouped all $1$-s into one segment, then the second reverse won't change anything. Otherwise, instead of the second reverse, we could make one more swap and "group the $1$-s tighter", thus lowering the value of $s$. Fact 4: suppose you have two binary strings $s_1$ and $s_2$ with equal length and an equal number of $1$-s, and you replace the last $k - 1$ zeroes of each with ones, obtaining strings $s'_1$ and $s'_2$. If $s_1 \le s_2$, then $s'_1 \le s'_2$ (lexicographically). Proof: if $s_1 = s_2$, then $s'_1 = s'_2$. Otherwise, there is a first position $p$ where $s_1[p] = 0$ and $s_2[p] = 1$. After replacing the last $k - 1$ zeroes with ones, if $s_1[p]$ was replaced, then $s'_1 = s'_2$; otherwise $s'_1 < s'_2$, since $s'_1[p] = 0$ and $s'_2[p] = 1$. Using the facts above, we can finally solve the problem. 
Let's check $2$ cases: $0$ reverses or $1$ reverse. $0$ reverses: it's optimal to be greedy. Let's take the leftmost $1$ and swap it with the rightmost $0$ as many times as we can. $1$ reverse: leading zeroes will be deleted; trailing zeroes will become leading, so they won't affect the value of the resulting string. That's why the answer will depend only on positions of leftmost and rightmost $1$-s. For convenience, let's reverse $s$. Then, let's iterate over all possible positions $i$ of now leftmost one. For each left position, let's find the minimum possible position $u$ of the rightmost one. Minimizing $u$ will minimize the value of the answer, so it's optimal. There are two conditions that should be satisfied: the interval $[i, u)$ should be long enough to contain all $1$-s from the whole string $s$; the number of $1$-s outside the interval should be at most $k - 1$. Last step is to choose the minimum interval among all of them. Due to fact 0, we should, firstly, choose the shortest interval. Then (due to fact 4) it's enough to choose the lexicographically smallest one among them. In order to compare two intervals of reversed string $s$ it's enough to use their "class" values from Suffix Array built on reversed $s$. The complexity is $O(n \log{n})$ or $O(n \log^2{n})$ for building Suffix Array + $O(n)$ for checking each case and comparing answers from both cases.
[ "binary search", "brute force", "greedy", "hashing", "implementation", "string suffix structures", "strings" ]
2,800
#include<bits/stdc++.h> using namespace std; #define fore(i, l, r) for(int i = int(l); i < int(r); i++) #define sz(a) int((a).size()) #define all(a) (a).begin(), (a).end() typedef long long li; typedef pair<int, int> pt; const int INF = int(1e9); const int MOD = int(1e9) + 7; int add(int a, int b) { a += b; if (a >= MOD) a -= MOD; return a; } int mul(int a, int b) { return int(a * 1ll * b % MOD); } namespace SuffixArray { string s; vector< array<int, 2> > classes; vector<int> build(const string &bs) { s = bs; s += '$'; vector<int> c(all(s)), ord(sz(s)); iota(all(ord), 0); classes.resize(sz(s)); for (int len = 1; len < 2 * sz(s); len <<= 1) { int half = len >> 1; fore (i, 0, sz(s)) classes[i] = {c[i], c[(i + half) % sz(s)]}; sort(all(ord), [&](int i, int j) { return classes[i] < classes[j]; }); c[ord[0]] = 0; fore (i, 1, sz(ord)) c[ord[i]] = c[ord[i - 1]] + (classes[ord[i - 1]] != classes[ord[i]]); } c.pop_back(); for (int &cc : c) cc--; return c; } }; int n, k; string s; inline bool read() { if(!(cin >> n >> k)) return false; cin >> s; return true; } string calcZero(string s) { int rem = k; int pos = 0; for (int i = sz(s) - 1; rem > 0 && i >= 0; i--) { if (s[i] == '1') continue; while (pos < sz(s) && s[pos] == '0') pos++; if (pos >= i) break; swap(s[pos], s[i]); rem--; } return s.substr(s.find('1')); } string calcOne(string s) { reverse(all(s)); auto c = SuffixArray::build(s); int cntOnes = count(all(s), '1'); array<int, 3> mn = { 2 * sz(s), INF, -1 }; int u = 0, curOnes = 0; fore (i, 0, n) { while (u < sz(s) && (u - i < cntOnes || cntOnes - curOnes > k - 1)) { curOnes += s[u] == '1'; u++; } if (u - i < cntOnes || cntOnes - curOnes > k - 1) break; array<int, 3> curAns = { u - i, c[i], i }; mn = min(mn, curAns); curOnes -= s[i] == '1'; } assert(mn[2] >= 0); string res = s.substr(mn[2], mn[0]); int toAdd = cntOnes - count(all(res), '1'); for (int i = sz(res) - 1; toAdd > 0 && i >= 0; i--) { if (res[i] == '0') { res[i] = '1'; toAdd--; } } return res; } inline void 
solve() { auto s1 = calcZero(s); auto s2 = calcOne(s); if (sz(s1) > sz(s2) || (sz(s1) == sz(s2) && s1 > s2)) swap(s1, s2); int res = 0; for (char c : s1) res = add(mul(res, 2), c - '0'); cout << res << endl; } int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); int tt = clock(); #endif ios_base::sync_with_stdio(false); cin.tie(0), cout.tie(0); cout << fixed << setprecision(15); if(read()) { solve(); #ifdef _DEBUG cerr << "TIME = " << clock() - tt << endl; tt = clock(); #endif } return 0; }
1924
A
Did We Get Everything Covered?
You are given two integers $n$ and $k$ along with a string $s$. Your task is to check whether all possible strings of length $n$ that can be formed using the first $k$ lowercase English alphabets occur as a subsequence of $s$. If the answer is NO, you also need to print a string of length $n$ that can be formed using the first $k$ lowercase English alphabets which does not occur as a subsequence of $s$. If there are multiple answers, you may print any of them. \textbf{Note:} A string $a$ is called a subsequence of another string $b$ if $a$ can be obtained by deleting some (possibly zero) characters from $b$ without changing the order of the remaining characters.
We will try to construct a counter-example greedily; if we can't, the answer is YES, otherwise it is NO. When building the counter-example, it is always optimal to choose the next character as the one (among the first $k$ English letters) whose first occurrence in the remaining part of $s$ is the furthest. Add this character to our counter-example, remove the prefix of $s$ up to and including this character, and repeat until the length of the counter-example reaches $n$ or we reach the end of $s$. If the length of the counter-example is less than $n$, find a character which does not appear in the last remaining suffix of $s$, and keep appending this character until the length becomes $n$; this yields a valid string which does not occur as a subsequence of $s$. Otherwise, all possible strings of length $n$ formed using the first $k$ English letters occur as subsequences of $s$. This problem is essentially "implement the checker for Div. 2 A". It was not among the problems that were initially proposed for the round; while preparing problem 2A, I realized that writing the checker might be a more interesting problem than 2A itself.
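The greedy construction can be sketched as follows (function names are ours; `is_subseq` is only a validation helper):

```python
def check(n, k, s):
    """Returns (True, None) if every length-n string over the first k lowercase
    letters is a subsequence of s; otherwise (False, witness), where witness
    is such a string that is not a subsequence of s."""
    res = []
    found = set()
    for c in s:
        if len(res) == n:
            break
        found.add(c)
        if len(found) == k:       # a block containing all k letters is complete;
            res.append(c)         # c is the letter whose first occurrence is furthest
            found.clear()
    if len(res) == n:
        return True, None
    # pick a letter absent from the unfinished last block and pad with it
    missing = next(ch for ch in map(chr, range(ord('a'), ord('a') + k))
                   if ch not in found)
    res.extend(missing * (n - len(res)))
    return False, ''.join(res)

def is_subseq(t, s):
    """Validation helper: is t a subsequence of s?"""
    it = iter(s)
    return all(c in it for c in t)
```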
[ "constructive algorithms", "dp", "greedy", "shortest paths", "strings" ]
1,500
#include <bits/stdc++.h> using namespace std; int main() { std::ios::sync_with_stdio(false); cin.tie(NULL);cout.tie(NULL); int t; cin>>t; while(t--) { int n, k, m; cin>>n>>k>>m; string s; cin>>s; string res=""; bool found[k]; memset(found, false, sizeof(found)); int count=0; for(char c:s) { if(res.size()==n) break; count+=(!found[c-'a']); found[c-'a']=true; if(count==k) { memset(found, false, sizeof(found)); count=0; res+=c; } } if(res.size()==n) cout<<"YES\n"; else { cout<<"NO\n"; for(int i=0;i<k;i++) { if(!found[i]) { while(res.size()<n) res+=(char)('a'+i); } } cout<<res<<"\n"; } } }
1924
B
Space Harbour
There are $n$ points numbered $1$ to $n$ on a straight line. Initially, there are $m$ harbours. The $i$-th harbour is at point $X_i$ and has a value $V_i$. \textbf{It is guaranteed that there are harbours at the points $1$ and $n$.} There is exactly one ship on each of the $n$ points. The cost of moving a ship from its current location to the next harbour is the product of the value of the nearest harbour to its left and the distance from the nearest harbour to its right. Specifically, if a ship is already at a harbour, the cost of moving it to the next harbour is $0$. Additionally, there are $q$ queries, each of which is either of the following $2$ types: - $1$ $x$ $v$ — Add a harbour at point $x$ with value $v$. It is guaranteed that before adding the harbour, there is no harbour at point $x$. - $2$ $l$ $r$ — Print the sum of the cost of moving all ships at points from $l$ to $r$ to their next harbours. \textbf{Note that you just need to calculate the cost of moving the ships but not actually move them.}
Segment Tree with Lazy Propagation. We can maintain a segment tree of size $n$ which initially stores the cost of all the ships. Now there are 2 types of updates when we add a harbour: The ships to the left of the new harbour have their cost decreased by a fixed amount. The ships to the right of the harbour have their cost changed by the product of the distance from the harbour on their right (which remains unchanged) and the difference between the new and previous harbour values to their left. Let $n=8$ and let there be harbours on points $1$ and $8$ with values $10$ and $15$ respectively. Now we add a harbour at point $x=4$ with value $5$. Case 1 ($x=2, 3$): the cost of both ships decreases by $10 \times (8-4)$. Case 2 ($x=5,6,7$): the costs of the ships change by $(5-10) \times 3, (5-10) \times 2, (5-10) \times 1$ respectively. There are multiple ways to handle both updates simultaneously; a simple way is to use a struct simulating an arithmetic progression. It can contain two values. Base: simply a value which has to be added to all values belonging to the segment. Difference: for each node of the segment, $dif \times dist$ will be added to the node, where $dist$ is the distance of the node from the end of the segment. Using the summation of an arithmetic progression we can make sure that the updates on Difference can be applied to an entire segment lazily. You can see the code for more details.
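The arithmetic-progression lazy node can be sanity-checked in isolation. Below is a hypothetical standalone version of the node's aggregate (mirroring the convert helper in the solution code): a segment of length $n$ whose positions, at distance $dist$ from the segment's right end ($0$-based), each receive $base + dif \cdot dist$, contributes $n \cdot base + \frac{n(n-1)}{2} \cdot dif$ in total.

```cpp
// Hypothetical standalone sketch of the lazy AP node described above.
struct AP {
    long long base, dif;
};

// Total value added to a segment of length n when position i (counted
// 0-based from the segment's right end) receives base + dif * i:
//   n * base + dif * (0 + 1 + ... + (n - 1))
long long convert(const AP& a, long long n) {
    return n * a.base + n * (n - 1) / 2 * a.dif;
}
```

Because the total over a segment is a closed-form function of its length, both the "fixed decrease" and the "linear-in-distance" updates can be pushed down lazily, adjusting only base when a lazy value is split between children.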
[ "data structures", "implementation", "math", "sortings" ]
2,100
#include <bits/stdc++.h> #define int long long #define IOS std::ios::sync_with_stdio(false); cin.tie(NULL);cout.tie(NULL); using namespace std; const long long N=300005; struct ap { int base, dif; }; ap add(ap a, ap b) { ap res; res.base = a.base + b.base; res.dif = a.dif + b.dif; return res; } int convert(ap a, int n) { int res = (n*a.base); res += ((n*(n-1))/2ll)*a.dif; return res; } int st[4*N], cost[N]; ap lazy[4*N]; ap zero = {0, 0}; void propogate(int node, int l, int r) { st[node] += convert(lazy[node], r-l+1); if(l!=r) { lazy[node*2+1] = add(lazy[node*2+1], lazy[node]); int mid = (l+r)/2; int rig = (r-mid); lazy[node].base += (rig*lazy[node].dif); lazy[node*2] = add(lazy[node*2], lazy[node]); } lazy[node] = zero; } void build(int node, int l, int r) { if(l==r) { st[node] = cost[l]; lazy[node] = zero; return; } int mid=(l+r)/2; build(node*2, l, mid); build(node*2+1, mid+1, r); st[node] = (st[node*2] + st[node*2+1]); lazy[node] = zero; return; } void update(int node, int l, int r, int x, int y, ap val) { if(lazy[node].base != 0 || lazy[node].dif != 0) propogate(node, l, r); if(y<x||x>r||y<l) return; if(l>=x&&r<=y) { st[node] += convert(val, r-l+1); if(l!=r) { lazy[node*2+1] = add(lazy[node*2+1], val); int mid = (l+r)/2; int rig = (r-mid); val.base += (rig*val.dif); lazy[node*2] = add(lazy[node*2], val); } return; } int mid=(l+r)/2; update(node*2+1, mid+1, r, x, y, val); if(y>mid) { int rig = (min(y, r)-mid); val.base += (rig*val.dif); } update(node*2, l, mid, x, y, val); st[node] = st[node*2] + st[node*2+1]; return; } int query(int node, int l, int r, int x, int y) { if(lazy[node].base != 0 || lazy[node].dif != 0) propogate(node, l, r); if(y<x||y<l||x>r) return 0; if(l>=x&&r<=y) return st[node]; int mid=(l+r)/2; return query(node*2, l, mid, x, y) + query(node*2+1, mid+1, r, x, y); } int32_t main() { IOS; int n, m, q; cin>>n>>m>>q; set <int> harbours; int X[m], V[n+1]; for(int i=0;i<m;i++) { cin>>X[i]; harbours.insert(X[i]); } for(int i=0;i<m;i++) { int v; 
cin>>v; cost[X[i]] = v; V[X[i]] = v; } for(int i=1;i<=n;i++) { if(cost[i] == 0) cost[i] = cost[i-1]; } int dist=0; for(int i=n;i>0;i--) { if(harbours.find(i) != harbours.end()) dist=0; else dist++; cost[i] *= dist; } build(1, 1, n); while(q--) { int typ; cin>>typ; if(typ == 1) { int x, v; cin>>x>>v; V[x] = v; auto it = harbours.lower_bound(x); int nxt = (*it); it--; int prev = (*it); ap lef = {(V[prev]*(x-nxt)), 0}; ap rig = {0, V[x]-V[prev]}; update(1, 1, n, prev+1, x, lef); update(1, 1, n, x+1, nxt, rig); harbours.insert(x); } else { int l, r; cin>>l>>r; cout<<query(1, 1, n, l, r)<<'\n'; } } }
1924
C
Fractal Origami
You have a square piece of paper with a side length equal to $1$ unit. In one operation, you fold each corner of the square to the center of the paper, thus forming another square with a side length equal to $\dfrac{1}{\sqrt{2}}$ units. By taking this square as a new square, you do the operation again and repeat this process a total of $N$ times. \begin{center} {\small Performing operations for $N = 2$.} \end{center} After performing the set of operations, you open the paper with the same side up you started with and see some crease lines on it. Every crease line is one of two types: a mountain or a valley. A mountain is when the paper folds outward, and a valley is when the paper folds inward. You calculate the sum of the length of all mountain crease lines on the paper and call it $M$. Similarly, you calculate for valley crease lines and call it $V$. You want to find the value of $\dfrac{M}{V}$. It can be proved that this value can be represented in the form of $A + B\sqrt{2}$, where $A$ and $B$ are rational numbers. Let this $B$ be represented as an irreducible fraction $\dfrac{p}{q}$, your task is to print $p*inv(q)$ modulo $999\,999\,893$ \textbf{(note the unusual modulo)}, where $inv(q)$ is the modular inverse of $q$.
You need more math and less paper :) The length of mountain crease lines and valley crease lines created in each operation is the same, except for the first operation. Let there be an upside of the paper and a downside of the paper; initially, the upside of the paper is facing up. With a little imagination, we can see that the mountain crease lines on the upside of the paper will be valley crease lines on the downside of the paper, and vice versa. (In the accompanying figure, grey is the upside and orange is the downside.) After the first operation, what once was a single layer of square paper turns into a square with two overlapping layers of paper. The layer at the bottom has its upside facing up, and the layer at the top has its downside facing up. After this first operation, whatever crease lines are formed on the upside of the bottom layer will be the same as the ones formed on the downside of the top layer, which means when the paper is unfolded and the upside of the entire paper is facing up, the mountain crease lines and the valley crease lines created after the first operation will be equal. Let $M$ be the length of mountain crease lines and $V$ be the length of valley crease lines after $N$ moves, and let the side of the square paper be $1$. Let $diff = V - M =$ length of valley crease lines created in the first operation $= 2\sqrt{2}$. It is easy to calculate the total length of crease lines (mountain and valley) created in $N$ operations. It is the sum of a GP. Let $sum = V + M = {\displaystyle\sum_{i = 1}^{N}}2^{i - 1}\cdot2\sqrt{2}\cdot{(\dfrac{1}{\sqrt{2}})}^{i - 1} = {\displaystyle\sum_{i = 1}^{N}}(\sqrt{2})^{i + 2}$ Now to find $\dfrac{M}{V}$, we use the age-old componendo and dividendo. $\dfrac{M}{V} = \dfrac{sum - diff}{sum + diff}$ And then rationalize it to find the coefficient of $\sqrt{2}$. The reason for the uncommon modulo in the problem is that the irrational part of $\dfrac{M}{V}$ is of the form $\dfrac{a\sqrt{2}}{b^2 - 2}$, where $a$ and $b$ are some integers. 
If there exists any $b$ such that $b^2 \equiv 2 \pmod m$, then the denominator becomes $0$. To avoid this situation, such a prime modulo $m$ was taken for which $2$ is not a quadratic residue modulo $m$. It can be seen that $999999893\bmod 8 = 5$ and so, the Legendre symbol $\bigg(\dfrac{2}{999999893}\bigg) = -1$ meaning that $2$ is a quadratic non-residue modulo $999999893$.
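As a quick numerical sanity check of the formula (our own sketch, not part of the editorial), $\frac{M}{V} = \frac{sum - diff}{sum + diff}$ can be evaluated in floating point: for $N=1$ the ratio is $0$ (the first fold creates only valleys on the side that was initially up), and for $N=2$ we get $sum = 2\sqrt{2}+4$, $diff = 2\sqrt{2}$, so $\frac{M}{V} = \frac{4}{4+4\sqrt{2}} = \sqrt{2}-1$, i.e. $A=-1$, $B=1$.

```cpp
#include <cmath>

// Floating-point check of the editorial's closed form:
//   sum  = V + M = sum over i = 1..N of (sqrt(2))^(i+2)
//   diff = V - M = 2 * sqrt(2)
//   M/V  = (sum - diff) / (sum + diff)
double mountainOverValley(int N) {
    double s2 = std::sqrt(2.0), sum = 0.0;
    for (int i = 1; i <= N; i++) sum += std::pow(s2, i + 2);
    double diff = 2.0 * s2;
    return (sum - diff) / (sum + diff);
}
```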
[ "geometry", "math", "matrices" ]
2,400
// library link: https://github.com/manan-grover/My-CP-Library/blob/main/library.cpp #include <bits/stdc++.h> #include <ext/pb_ds/assoc_container.hpp> #include <ext/pb_ds/tree_policy.hpp> using namespace std; using namespace __gnu_pbds; #define asc(i,a,n) for(I i=a;i<n;i++) #define dsc(i,a,n) for(I i=n-1;i>=a;i--) #define forw(it,x) for(A it=(x).begin();it!=(x).end();it++) #define bacw(it,x) for(A it=(x).rbegin();it!=(x).rend();it++) #define pb push_back #define mp make_pair #define fi first #define se second #define lb(x) lower_bound(x) #define ub(x) upper_bound(x) #define fbo(x) find_by_order(x) #define ook(x) order_of_key(x) #define all(x) (x).begin(),(x).end() #define sz(x) (I)((x).size()) #define clr(x) (x).clear() #define U unsigned #define I long long int #define S string #define C char #define D long double #define A auto #define B bool #define CM(x) complex<x> #define V(x) vector<x> #define P(x,y) pair<x,y> #define OS(x) set<x> #define US(x) unordered_set<x> #define OMS(x) multiset<x> #define UMS(x) unordered_multiset<x> #define OM(x,y) map<x,y> #define UM(x,y) unordered_map<x,y> #define OMM(x,y) multimap<x,y> #define UMM(x,y) unordered_multimap<x,y> #define BS(x) bitset<x> #define L(x) list<x> #define Q(x) queue<x> #define PBS(x) tree<x,null_type,less<I>,rb_tree_tag,tree_order_statistics_node_update> #define PBM(x,y) tree<x,y,less<I>,rb_tree_tag,tree_order_statistics_node_update> #define pi (D)acos(-1) #define md 999999893 #define rnd randGen(rng) I modex(I a,I b,I m){ a=a%m; if(b==0){ return 1; } I temp=modex(a,b/2,m); temp=(temp*temp)%m; if(b%2){ temp=(temp*a)%m; } return temp; } I mod(I a,I b,I m){ a=a%m; b=b%m; I c=__gcd(a,b); a=a/c; b=b/c; c=modex(b,m-2,m); return (a*c)%m; } int main(){ mt19937_64 rng(chrono::steady_clock::now().time_since_epoch().count()); uniform_int_distribution<I> randGen; ios_base::sync_with_stdio(false);cin.tie(NULL);cout.tie(NULL); #ifndef ONLINE_JUDGE freopen("input.txt", "r", stdin); freopen("out04.txt", "w", stdout); #endif 
I temp=md; I t; cin>>t; while(t--){ I n; cin>>n; I b=modex(2,n/2,md)-1; I a; if(n%2){ a=modex(2,n/2+1,md)-1; }else{ a=modex(2,n/2,md)-1; } I temp=(a+1)*(a+1); temp%=md; temp-=2*b*b; temp%=md; I ans=mod(2*b,temp,md); ans+=md; ans%=md; cout<<ans<<"\n"; } return 0; }
1924
D
Balanced Subsequences
A sequence of brackets is called balanced if one can turn it into a valid math expression by adding characters '+' and '1'. For example, sequences '(())()', '()', and '(()(()))' are balanced, while ')(', '(()', and '(()))(' are not. A subsequence is a sequence that can be derived from the given sequence by deleting zero or more elements without changing the order of the remaining elements. You are given three integers $n$, $m$ and $k$. Find the number of sequences consisting of $n$ '(' and $m$ ')', such that the longest balanced subsequence is of length $2 \cdot k$. Since the answer can be large calculate it modulo $1\,000\,000\,007$ ($10^9 + 7$).
Consider a function $f(n, m, k)$ which returns the number of strings such that the length of the longest regular bracket subsequence is at most $2 \cdot k$, where $n$ is the number of '(' and $m$ is the number of ')'. The answer to the problem is $f(n, m, k) - f(n, m, k-1)$. Now to compute $f(n, m, k)$, we can consider the following cases: Case 1: $k \ge min(n, m)\rightarrow$ Here, the answer is ${n+m} \choose {m}$, since none of the strings can have a balanced subsequence of length greater than $2 \cdot k$. Case 2: $k < min(n, m)\rightarrow$ Here we can write $f(n, m, k) = f(n-1, m, k-1) + f(n, m-1, k)$ since all strings will either start with: a) ')' $\rightarrow$ Here, count is equal to $f(n, m-1, k)$. b) '(' $\rightarrow$ Here, count is equal to $f(n-1, m, k-1)$; the first opening bracket will always add $1$ to the optimal subsequence length because all strings where each of the $m$ ')' is paired to some '(' don't contribute to the count since $k<m$. After this recurrence relation we have 2 base cases: $k=0\rightarrow$ Here, count is $1$ which can also be written as ${n+m} \choose {k}$. $k=min(n, m)\rightarrow$ Here, count is ${n+m} \choose {m}$ (as proved in Case 1) which can also be written as ${n+m} \choose {k}$. Now using induction we can prove that the value of $f(n, m, k)$ for Case 2 is ${n+m} \choose {k}$.
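The resulting identity can be verified by brute force on small cases (a sketch we added; it is not part of the editorial). The longest balanced subsequence of a bracket string has length $2\times$ the number of pairs matched by the standard counter scan, so we can enumerate all arrangements of $n$ '(' and $m$ ')' and compare the count for exact value $k$ with $\binom{n+m}{k}-\binom{n+m}{k-1}$:

```cpp
#include <vector>
using namespace std;

// Longest balanced subsequence has length 2 * (matched pairs), where
// pairs are counted by the standard counter scan.
int matchedPairs(const vector<int>& s) {  // 0 = '(' , 1 = ')'
    int open = 0, pairs = 0;
    for (int c : s) {
        if (c == 0) open++;
        else if (open > 0) { open--; pairs++; }
    }
    return pairs;
}

long long choose(int n, int r) {
    if (r < 0 || r > n) return 0;
    long long res = 1;
    for (int i = 1; i <= r; i++) res = res * (n - i + 1) / i;
    return res;
}

// Brute-force count of strings with n '(' and m ')' whose longest
// balanced subsequence has length exactly 2k.
long long bruteCount(int n, int m, int k) {
    int len = n + m;
    long long cnt = 0;
    for (int mask = 0; mask < (1 << len); mask++) {
        vector<int> s;
        int ones = 0;
        for (int i = 0; i < len; i++) {
            s.push_back((mask >> i) & 1);
            ones += (mask >> i) & 1;
        }
        if (ones != m) continue;          // need exactly m ')'
        if (matchedPairs(s) == k) cnt++;
    }
    return cnt;
}
```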
[ "combinatorics", "dp", "math" ]
2,700
#include <bits/stdc++.h> #define int long long using namespace std; const long long N=4005, mod=1000000007; int power(int a, int b, int p) { if(a==0) return 0; int res=1; a%=p; while(b>0) { if(b&1) res=(1ll*res*a)%p; b>>=1; a=(1ll*a*a)%p; } return res; } int fact[N], inv[N]; void pre() { fact[0]=inv[0]=1; for(int i=1;i<N;i++) fact[i]=(fact[i-1]*i)%mod; for(int i=1;i<N;i++) inv[i]=power(fact[i], mod-2, mod); } int nCr(int n, int r) { if(min(n, r)<0 || r>n) return 0; if(n==r) return 1; return (((fact[n]*inv[r])%mod)*inv[n-r])%mod; } int f(int n, int m, int k) { if(k>=min(n, m)) return nCr(n+m, m); return nCr(n+m, k); } int32_t main() { pre(); int t; cin>>t; while(t--) { int n, m, k; cin>>n>>m>>k; cout<<(f(n, m, k) - f(n, m, k-1) + mod)%mod<<"\n"; } }
1924
E
Paper Cutting Again
There is a rectangular sheet of paper with initial height $n$ and width $m$. Let the current height and width be $h$ and $w$ respectively. We introduce an $xy$-coordinate system so that the four corners of the sheet are $(0, 0), (w, 0), (0, h)$, and $(w, h)$. The sheet can then be cut along the lines $x = 1,2,\ldots,w-1$ and the lines $y = 1,2,\ldots,h-1$. In each step, the paper is cut randomly along any one of these $h+w-2$ lines. After each vertical and horizontal cut, the right and bottom piece of paper respectively are discarded. Find the expected number of steps required to make the area of the sheet of paper strictly less than $k$. It can be shown that this answer can always be expressed as a fraction $\dfrac{p}{q}$ where $p$ and $q$ are coprime integers. Calculate $p\cdot q^{-1} \bmod (10^9+7)$.
Can you solve it for $k=2$? Using linearity of expectation, we can say that the expected number of cuts is the same as the sum of the probabilities of each line being cut, and for $k=2$, we need to cut the paper until we achieve a $1 \times 1$ piece of paper. Probability of each horizontal line being cut is $\dfrac {1}{count \thinspace of \thinspace lines \thinspace above \thinspace it + 1}$ Probability of each vertical line being cut is $\dfrac {1}{count \thinspace of \thinspace lines \thinspace left \thinspace to \thinspace it + 1}$ Now for $k=2$ we just have to calculate the sum of all these probabilities. For the given problem we can solve it by dividing it into 4 cases: Only Vertical Cuts: Let the largest integer $y$ for which $n\cdot y<k$ be $reqv$. The probability of achieving the goal using only vertical cuts is $\frac{reqv}{reqv+n-1}$ since one of the $reqv$ lines needs to be cut before all the horizontal lines. Now we will multiply this with the sum of conditional probabilities for each of the vertical lines. The sum of conditional probabilities of being cut for all lines from $1$ to $reqv$ is exactly $1$, since for the area to be smaller than $k$, one of them needs to be cut, and as soon as one is cut, the area becomes less than $k$ so no further cuts are required. Now for the lines from $reqv+1$ to $m-1$, their conditional probabilities will form the following harmonic progression: $\frac{1}{n+reqv} + \frac{1}{n+reqv+1} + .... + \frac{1}{n+m-2}$ This is due to the fact that for a line to be cut it needs to occur before all horizontal lines and all vertical lines smaller than it. The case with only horizontal cuts can be handled similarly. Case when the overall last cut is a horizontal one and there is at least one vertical cut: For this, we iterate over the last cut among all the vertical cuts. Let the last vertical cut be $x$. We now find the largest $y$ such that $(y\cdot x)<k$. Let this value be $reqh$. The objective can be achieved using this case if the vertical line $x$ occurs before all the horizontal lines from $1$ to $reqh$ and all the vertical lines from $1$ to $x-1$, and after that any one of the horizontal lines from $1$ to $reqh$ occurs before all the vertical lines from $1$ to $x-1$. The probability of this happening is $\frac{1}{x+reqh}\times\frac{reqh}{reqh+x-1}$. Now we just add the conditional probabilities of being cut for every line and multiply this with the above probability to find the contribution of this case to the final answer. a. Firstly, the conditional probability of the vertical line $x$ being cut is $1$. b. The sum of conditional probabilities for horizontal lines $1$ to $reqh$ is also $1$. The order of the first $x$ vertical cuts and the first $reqh$ horizontal cuts for the given case would look like: $x_v$, $y1_h$, ($(x-1)$ vertical lines and $(reqh-1)$ horizontal lines in any relative ordering) [Total $x+reqh$ elements in the sequence]. $x_v$ denotes the $x^{th}$ vertical cut. $y1_h$ denotes the horizontal cut which occurs first among all the first $reqh$ horizontal cuts. c. Vertical cuts from $x+1$ to $m-1$: For the $(x+1)^{th}$ cut, we can look at this like it needs to occur before the $x^{th}$ vertical cut (or it has one gap in the sequence to choose from a total of $(x+reqh+1)$ gaps). So the probability is $\frac{1}{x+reqh+1}$. For the $(x+2)^{th}$ cut, we can first place the $(x+2)^{th}$ cut with probability $\frac{1}{x+reqh+1}$; now we need to place the $(x+1)^{th}$ cut after this, which will happen with probability $\frac{x+reqh+1}{x+reqh+2}$. So the overall probability is the product, i.e., $\frac{1}{x+reqh+2}$. Similarly, the probability for the $(x+i)^{th}$ cut is $\frac{1}{x+reqh+i}$. The sum of this harmonic progression can be computed in $\mathcal{O}(1)$ using precomputation. d. Horizontal cuts from $reqh+1$ to $n-1$ (trickiest case imo): Let us see the case for the $(reqh+1)^{th}$ cut $\rightarrow$ it has $2$ optimal gaps (before $x_v$ and the gap between $x_v$ and $y1_h$), so the probability is $\frac{2}{x+reqh+1}$. Now for finding the probability for the $(reqh+i)^{th}$ cut, we first place this cut into one of the $2$ gaps and handle both cases separately. Gap before $x_v$: this case is similar to case 3c and the answer is just $\frac{1}{x+reqh+i}$. Gap between $x_v$ and $y1_h$: here we again have to ensure that all lines from $reqh+1$ to $reqh+i-1$ occur after $reqh+i$. So we multiply $\frac{reqh+x}{reqh+x+2}\times\frac{reqh+x+1}{reqh+x+3}\cdots$ since after we place the $(reqh+i)^{th}$ cut, we have $(reqh+x)$ good gaps among a total of $(reqh+x+2)$, and so on for all the other lines we place (their relative ordering does not matter since we are only concerned with the $(reqh+i)^{th}$ cut). The final term for the $(reqh+i)^{th}$ cut works out to be $\frac{x+reqh}{(x+reqh+i)\cdot(x+reqh+i-1)}$. This forms a quadratic harmonic progression, the sum of which can be computed in $\mathcal{O}(1)$ using precomputation. Case when the overall last cut is a vertical one and there is at least one horizontal cut: This case can be handled in a similar way as the previous case.
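The $k=2$ claim can be cross-checked against a direct DP (our own sketch, not part of the editorial): if $E[h][w]$ is the expected number of cuts to shrink an $h\times w$ sheet to $1\times 1$, then $E[h][w] = 1 + \frac{1}{h+w-2}\big(\sum_{j=1}^{h-1}E[h-j][w] + \sum_{j=1}^{w-1}E[h][w-j]\big)$, and summing the per-line probabilities above predicts $E[n][m] = H_{n-1} + H_{m-1}$:

```cpp
#include <vector>

// Exact expected number of cuts to reach a 1 x 1 sheet (the k = 2 case),
// by DP over the current dimensions; each of the h + w - 2 cut lines is
// equally likely, and a cut keeps one side of the line.
double expectedCuts(int h, int w) {
    std::vector<std::vector<double>> E(h + 1, std::vector<double>(w + 1, 0.0));
    for (int a = 1; a <= h; a++)
        for (int b = 1; b <= w; b++) {
            if (a == 1 && b == 1) continue;  // E[1][1] = 0
            double s = 0;
            for (int j = 1; j < a; j++) s += E[a - j][b];  // horizontal cuts
            for (int j = 1; j < b; j++) s += E[a][b - j];  // vertical cuts
            E[a][b] = 1.0 + s / (a + b - 2);
        }
    return E[h][w];
}

// Harmonic number H_n = 1 + 1/2 + ... + 1/n.
double harmonic(int n) {
    double s = 0;
    for (int i = 1; i <= n; i++) s += 1.0 / i;
    return s;
}
```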
[ "combinatorics", "probabilities" ]
3,100
#include <bits/stdc++.h> #define int long long using namespace std; const long long N=2000005, mod=1000000007; int power(int a, int b, int p) { if(a==0) return 0; int res=1; a%=p; while(b>0) { if(b&1) res=(1ll*res*a)%p; b>>=1; a=(1ll*a*a)%p; } return res; } int pref[N], inv[N], pref2[N], hp2[N]; // Returns (1/l + 1/(l+1) + ... + 1/r) int cal(int l, int r) { if(l>r) return 0; return (pref[r]-pref[l-1]+mod)%mod; } // Returns (1/(l*(l+1)) + ... + 1/(r(r+1))) int cal2(int l, int r) { if(l>r) return 0; return (pref2[r]-pref2[l-1]+mod)%mod; } void pre() { pref[0]=0; pref2[0]=0; for(int i=1;i<N;i++) { inv[i]=power(i, mod-2, mod); pref[i]=(pref[i-1]+inv[i])%mod; int mul2=(i*(i+1))%mod; pref2[i]=(pref2[i-1] + power(mul2, mod-2, mod))%mod; hp2[i]=((i*(i-1))%mod)%mod; } } int solve2(int n, int m, int k) { if((n*m)<k) return 0; int ans=0; int reqv=(k-1)/n; if(reqv<m) { int prob=(reqv*inv[reqv+n-1])%mod; int exp=(prob*(1ll + cal(n+reqv, m+n-2)))%mod; ans=(ans + exp)%mod; } int reqh=(k-1)/m; if(reqh<n) { int prob=(reqh*inv[reqh+m-1])%mod; int exp=(prob*(1ll + cal(m+reqh, m+n-2)))%mod; ans=(ans + exp)%mod; } for(int i=1;i<min(n, k);i++) { reqv=(k-1)/i; if(reqv>=m) continue; int prob=(inv[i+reqv]*reqv)%mod; prob=(prob*inv[i-1+reqv])%mod; int num1=(hp2[i+reqv+1]*cal2(i+reqv, i+m-2))%mod; int num2=((i+reqv+1)*cal(i+reqv+1, i+m-1))%mod; int num=((num1+num2)*inv[i+reqv+1])%mod; int exp=(prob*(2ll + cal(i+reqv+1, reqv+(n-1))+num))%mod; ans=(ans + exp)%mod; } for(int i=1;i<min(m, k);i++) { reqh=(k-1)/i; if(reqh>=n) continue; int prob=(inv[i+reqh]*reqh)%mod; prob=(prob*inv[i-1+reqh])%mod; // Sum for cases when the (reqh+i)th horizontal line occurs in the gap b/w x_v and y1_h int num1=(hp2[i+reqh+1]*cal2(i+reqh, i+n-2))%mod; // Sum for cases when the (reqh+i)th horizontal line occurs in the gap to the left of x_v int num2=((i+reqh+1)*cal(i+reqh+1, i+n-1))%mod; int num=((num1+num2)*inv[i+reqh+1])%mod; int exp=(prob*(2ll + cal(i+reqh+1, reqh+(m-1))+num))%mod; ans=(ans + exp)%mod; } return 
ans; } int32_t main() { pre(); int t; cin>>t; while(t--) { int n, m, k; cin>>n>>m>>k; cout<<solve2(n, m, k)<<"\n"; } }
1924
F
Anti-Proxy Attendance
This is an interactive problem! Mr. 1048576 is one of those faculty who hates wasting his time in taking class attendance. Instead of taking attendance the old-fashioned way, he decided to try out something new today. There are $n$ students in his class, having roll numbers $1$ to $n$. He knows that \textbf{exactly $1$ student is absent} today. In order to determine who is absent, he can ask some queries to the class. In each query, he can provide two integers $l$ and $r$ ($1\leq l\leq r\leq n$) and all students whose roll numbers are between $l$ and $r$ (inclusive) will raise their hands. He then counts them to determine if the roll number of the absent student lies between these values. Things seemed fine until his teaching assistant noticed something — the students are dishonest! Some students whose roll numbers lie in the given range may not raise their hands, while some other students whose roll number does not lie in the given range may raise their hands. But the students don't want to raise much suspicion. So, only the following $4$ cases are possible for a particular query $(l,r)$ — - True Positive: $r-l+1$ students are present and $r-l+1$ students raised their hands. - True Negative: $r-l$ students are present and $r-l$ students raised their hands. - False Positive: $r-l$ students are present but $r-l+1$ students raised their hands. - False Negative: $r-l+1$ students are present but $r-l$ students raised their hands. In the first two cases, the students are said to be answering honestly, while in the last two cases, the students are said to be answering dishonestly. The students can mutually decide upon their strategy, not known to Mr. 1048576. Also, the students do not want to raise any suspicion and at the same time, want to create a lot of confusion. So, their strategy always meets the following two conditions — - The students will never answer honestly $3$ times in a row. - The students will never answer dishonestly $3$ times in a row. Mr. 
1048576 is frustrated by this act of students. So, he is willing to mark at most $2$ students as absent (though he knows that only one is). The attendance is said to be successful if the student who is actually absent is among those two. Also, due to limited class time, he can only ask up to $\lceil\log_{1.116}{n}\rceil-1$ queries (weird numbers but okay). Help him complete a successful attendance.
$\sqrt[4]{1.5}\approx 1.1067$ Try to do a $3\rightarrow 2$ reduction in $4$ queries. Try to do a $3\rightarrow 2$ reduction in such a way that if the middle part is eliminated $3$ queries are used, otherwise $4$ queries are used. Instead of dividing the search space into $3$ equal parts, divide it into parts having size $36\%$, $28\%$ and $36\%$. There might be multiple strategies to solve this problem. I will describe one of them. First, let's try to solve a slightly easier version where something like $\lceil \log_{1.1} n\rceil$ queries are allowed and subset queries are allowed instead of range queries. The main idea is to maintain a search space of size $S$ and reduce it to a search space of size $\big\lceil \frac{2S}{3}\big\rceil$ using at most $4$ queries. At the end, there will be exactly $2$ elements remaining in the search space which can be guessed. The number of queries required to reduce a search space of size $n$ to a search space of size $2$ using the above strategy will be equal to $4\cdot\log_{\frac{3}{2}}\frac{n}{2}$ $= \frac{\ln\frac{n}{2}}{0.25\ln 1.5}$ $\approx \frac{\ln n - \ln 2}{\ln 1.1067}$ $< \frac{\ln n - \ln 2}{\ln 1.1}$ $= \log_{1.1} n - \log_{1.1} 2$ $< \lceil\log_{1.1} n\rceil$. Given below is one of the strategies of how this can be achieved. Let the current search space be $T$. Divide $T$ into $3$ disjoint exhaustive subsets $T_1$, $T_2$ and $T_3$ of nearly equal size. Then follow the decision tree given below to discard one of the three subsets using at most $4$ queries. It can be seen that all the leaf nodes discard at least one-third of the search space based on the previous three queries. Now, coming back to the problem where only ranges are allowed to be queried. This can be easily solved by choosing $T_1$, $T_2$ and $T_3$ in such a way that all elements of $T_1$ are less than all elements of $T_2$ and all elements of $T_2$ are less than all elements of $T_3$. 
Then all queries used in the above decision tree can be reduced to range queries, since it really doesn't matter what the actual elements of $T_1$, $T_2$ and $T_3$ are. Finally, there is just one small optimization left to be done. Notice that when $T_2$ gets eliminated, only $3$ queries are used, and when $T_1$ or $T_3$ gets eliminated, $4$ queries are used. So, it must be more optimal to keep the size of $T_2$ smaller than $T_1$ and $T_3$ — but by how much? The answer is given by the equation $(1-x)^3 = (2x)^4$. It has two imaginary and two real roots, out of which only one is positive, $x\approx 0.35843$. So by taking the sizes of the segments approximately $36\%$, $28\%$ and $36\%$, you can do it in a smaller number of queries, which is less than $\max(\lceil\log_{\sqrt[4]{0.64^{-1}}} n\rceil, \lceil\log_{\sqrt[3]{0.72^{-1}}} n\rceil)-\log_{1.116} 2$, which is less than $\lceil\log_{1.116} n\rceil-1$. During testing, the testers were able to come up with harder versions requiring fewer queries. Specifically, try to solve the problem under the following constraints assuming $n\leq 10^5$: Harder version: $70$ queries by dario2994 Even harder version: $55$ queries by kevinsogo
[ "constructive algorithms", "dp", "interactive", "ternary search" ]
3,500
#include <bits/stdc++.h> using namespace std; void assemble(vector<int> &x,vector<int> &x1,vector<int> &x2,vector<int> &x3,string mask) { x.clear(); if(mask[0]=='1') { for(int i=0;i<x1.size();i++) x.push_back(x1[i]); } if(mask[1]=='1') { for(int i=0;i<x2.size();i++) x.push_back(x2[i]); } if(mask[2]=='1') { for(int i=0;i<x3.size();i++) x.push_back(x3[i]); } } bool query(vector<int> &x1,vector<int> &x2,vector<int> &x3,string mask) { vector<int> x; assemble(x,x1,x2,x3,mask); int l=x[0],r=x.back(); cout << "? " << l << " " << r << endl; int y; cin >> y; return (y==r-l); } void guess(int x) { cout << "! " << x << endl; int y; cin >> y; } void finish() { cout << "#" << endl; } int main() { int tc; cin >> tc; while(tc--) { int n; cin >> n; vector<int> x; for(int i=1;i<=n;i++) x.push_back(i); while(x.size()>2) { int m = x.size(); int k = round(0.36*m); vector<int> x1,x2,x3; for(int i=0;i<k;i++) x1.push_back(x[i]); for(int i=k;i<m-k;i++) x2.push_back(x[i]); for(int i=m-k;i<m;i++) x3.push_back(x[i]); if(query(x1,x2,x3,"110")) { if(query(x1,x2,x3,"100")) { if(query(x1,x2,x3,"111")) { assemble(x,x1,x2,x3,"011"); } else { assemble(x,x1,x2,x3,"110"); } } else { if(query(x1,x2,x3,"110")) { assemble(x,x1,x2,x3,"101"); } else { if(query(x1,x2,x3,"111")) { assemble(x,x1,x2,x3,"110"); } else { assemble(x,x1,x2,x3,"011"); } } } } else { if(query(x1,x2,x3,"100")) { if(query(x1,x2,x3,"110")) { if(query(x1,x2,x3,"111")) { assemble(x,x1,x2,x3,"011"); } else { assemble(x,x1,x2,x3,"110"); } } else { assemble(x,x1,x2,x3,"101"); } } else { if(query(x1,x2,x3,"111")) { assemble(x,x1,x2,x3,"110"); } else { assemble(x,x1,x2,x3,"011"); } } } } guess(x[0]); guess(x[1]); finish(); } }
1925
A
We Got Everything Covered!
You are given two positive integers $n$ and $k$. Your task is to find a string $s$ such that all possible strings of length $n$ that can be formed using the first $k$ lowercase English alphabets occur as a subsequence of $s$. If there are multiple answers, print the one with the smallest length. If there are still multiple answers, you may print any of them. \textbf{Note:} A string $a$ is called a subsequence of another string $b$ if $a$ can be obtained by deleting some (possibly zero) characters from $b$ without changing the order of the remaining characters.
The smallest length possible for such a string is $n\cdot k$. To have the string $\texttt{aaa}\ldots\texttt{a}$ as a subsequence, you need at least $n$ characters $\texttt{a}$ in the string, and similarly for each of the $k$ different characters. So, that gives a total length of at least $n\cdot k$. In fact, it is always possible to construct a string of length $n\cdot k$ that satisfies this property. One such string is $(a_1a_2a_3\ldots a_k)(a_1a_2a_3\ldots a_k)(a_1a_2a_3\ldots a_k)\ldots$ $n$ times, where $a_i$ is the $i$-th letter of the English alphabet. For example, the answer for $n=3,k=4$ can be $\texttt{abcdabcdabcd}$. It is not hard to see that the first letter of any target subsequence can be taken from the first group of $k$ letters, the second letter from the second group, and so on.
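A quick Python sketch that builds this string and verifies the subsequence property for $n=3, k=4$ (illustrative, not the reference solution):

```python
from itertools import product

def build(n, k):
    # (a_1 a_2 ... a_k) repeated n times
    return "".join(chr(ord('a') + i) for i in range(k)) * n

def is_subsequence(a, b):
    # standard idiom: 'c in it' consumes the iterator, preserving order
    it = iter(b)
    return all(c in it for c in a)

s = build(3, 4)
# every length-3 string over {a, b, c, d} must occur as a subsequence of s
ok = all(is_subsequence("".join(t), s) for t in product("abcd", repeat=3))
```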
[ "constructive algorithms", "greedy", "strings" ]
800
#include <bits/stdc++.h> using namespace std; int main() { int tc; cin >> tc; while(tc--) { int n,k; cin >> n >> k; for(int i=0;i<n;i++) for(char c='a';c<'a'+k;c++) cout << c; cout << '\n'; } }
1925
B
A Balanced Problemset?
Jay managed to create a problem of difficulty $x$ and decided to make it the second problem for Codeforces Round #921. But Yash fears that this problem will make the contest highly unbalanced, and the coordinator will reject it. So, he decided to break it up into a problemset of $n$ sub-problems such that the difficulties of all the sub-problems are a positive integer and their sum is equal to $x$. The coordinator, Aleksey, defines the balance of a problemset as the GCD of the difficulties of all sub-problems in the problemset. Find the maximum balance that Yash can achieve if he chooses the difficulties of the sub-problems optimally.
$GCD(a_1,a_2,a_3,\ldots,a_n) = GCD(a_1,a_1+a_2,a_1+a_2+a_3,\ldots,a_1+a_2+a_3+\ldots+a_n)$ The maximum GCD that can be achieved is always a divisor of $x$. Let the difficulties of the $n$ sub-problems be $a_1,a_2,a_3,\ldots,a_n$. By properties of GCD, $GCD(a_1,a_2,a_3,\ldots,a_n)$ $= GCD(a_1,a_1+a_2,a_1+a_2+a_3,\ldots,a_1+a_2+a_3+\ldots+a_n)$ $= GCD(a_1,a_1+a_2,a_1+a_2+a_3,\ldots,x)$. So, the final answer will always be a divisor of $x$. Now, consider a divisor $d$ of $x$. If $n\cdot d\leq x$, you can choose the difficulties of the sub-problems to be $d,d,d,\ldots,x-(n-1)d$ each of which is a multiple of $d$ and hence, the balance of this problemset will be $d$. Otherwise you cannot choose the difficulties of $n$ sub-problems such that each of them is a multiple of $d$. Find the maximum $d$ for which this condition holds. This can be done in $\mathcal{O}(\sqrt{x})$ using trivial factorization.
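The divisor check can be sketched in Python (illustrative, mirroring the reference implementation):

```python
def max_balance(x, n):
    # largest divisor d of x such that n * d <= x;
    # then difficulties d, d, ..., x - (n-1)d are all multiples of d
    ans = 1
    i = 1
    while i * i <= x:
        if x % i == 0:
            if n <= x // i:      # d = i works
                ans = max(ans, i)
            if n <= i:           # d = x // i works
                ans = max(ans, x // i)
        i += 1
    return ans
```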
[ "brute force", "greedy", "math", "number theory" ]
1,200
#include <bits/stdc++.h> using namespace std; int main() { int tc; cin >> tc; while(tc--) { int x,n; cin >> x >> n; int ans = 1; for(int i=1;i*i<=x;i++) { if(x%i==0) { if(n<=x/i) ans=max(ans,i); if(n<=i) ans=max(ans,x/i); } } cout << ans << '\n'; } }
1925
D
Good Trip
There are $n$ children in a class, $m$ pairs among them are friends. The $i$-th pair who are friends have a friendship value of $f_i$. The teacher has to go for $k$ excursions, and for each of the excursions she chooses a pair of children randomly, equiprobably and independently. If a pair of children who are friends is chosen, their friendship value increases by $1$ for all subsequent excursions (the teacher can choose a pair of children more than once). The friendship value of a pair who are not friends is considered $0$, and it does not change for subsequent excursions. Find the expected value of the sum of friendship values of all $k$ pairs chosen for the excursions (at the time of being chosen). It can be shown that this answer can always be expressed as a fraction $\dfrac{p}{q}$ where $p$ and $q$ are coprime integers. Calculate $p\cdot q^{-1} \bmod (10^9+7)$.
Since expected value is linear, we can consider the contribution of the initial friendship values and the contribution of the increase in friendship values due to repeated excursions independently. The contribution of the initial friendship values will be $\dfrac{k\cdot \sum_{i=1}^{m}f_i}{\binom{n}{2}}$. Now we can assume that there are $m$ pairs of friends out of a total of $\binom{n}{2}$ pairs of students, each with an initial friendship value of $0$. Since expected value is linear, we can consider the contribution of initial friendship values and the contribution of increase in friendship values by repeated excursions independently. Let $d=\binom{n}{2}$ denote the total number of pairs of students that can be formed. The contribution of the initial friendship values will be $s=\dfrac{k\cdot \sum_{i=1}^{m}f_i}{d}$. Now, to calculate the contribution to the answer by the increase in friendship values due to excursions: for each pair of friends, it is $\sum_{x=0}^{k}\dfrac{x(x-1)}{2}\cdot P(x)$, where $P(x)$ is the probability that a pair of friends is selected for exactly $x$ out of the $k$ excursions, given by $P(x)=\binom{k}{x}\cdot \bigg(\dfrac{1}{d}\bigg)^{x}\cdot \bigg(\dfrac{d-1}{d}\bigg)^{k-x}$. Since the increase is uniform for all pairs of friends, we just have to multiply this value by $m$ and add it to the answer. The time complexity is $\mathcal{O}(m+k\log k)$ per test case.
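Incidentally, the inner sum has a closed form: for $X\sim\text{Binomial}(k,\frac{1}{d})$, the factorial moment $\mathbb{E}\big[\frac{X(X-1)}{2}\big]=\binom{k}{2}\cdot\frac{1}{d^2}$. A quick exact check with rationals (illustrative Python; not the modular-arithmetic solution):

```python
from fractions import Fraction
from math import comb

def expected_pairs(k, d):
    # sum over x of C(x,2) * P(X = x) for X ~ Binomial(k, 1/d),
    # computed exactly with rational arithmetic
    p = Fraction(1, d)
    return sum(Fraction(x * (x - 1), 2) * comb(k, x) * p**x * (1 - p)**(k - x)
               for x in range(k + 1))

k, d = 7, 5
lhs = expected_pairs(k, d)          # the sum from the editorial
rhs = Fraction(comb(k, 2), d * d)   # the closed form C(k,2) / d^2
```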
[ "combinatorics", "dp", "math", "probabilities" ]
1,900
#include <bits/stdc++.h> #define int long long #define IOS std::ios::sync_with_stdio(false); cin.tie(NULL);cout.tie(NULL); #define mod 1000000007ll using namespace std; const long long N=200005, INF=2000000000000000000; int power(int a, int b, int p) { if(b==0) return 1; if(a==0) return 0; int res=1; a%=p; while(b>0) { if(b&1) res=(1ll*res*a)%p; b>>=1; a=(1ll*a*a)%p; } return res; } int fact[N],inv[N]; void pre() { fact[0]=1; inv[0]=1; for(int i=1;i<N;i++) fact[i]=(i*fact[i-1])%mod; for(int i=1;i<N;i++) inv[i]=power(fact[i], mod-2, mod); } int nCr(int n, int r, int p) { if(r>n || r<0) return 0; if(n==r) return 1; if (r==0) return 1; return (((fact[n]*inv[r]) % p )*inv[n-r])%p; } int32_t main() { IOS; pre(); int t; cin>>t; while(t--) { int n, m, k; cin>>n>>m>>k; int sum=0; for(int i=0;i<m;i++) { int a, b, f; cin>>a>>b>>f; sum=(sum + f)%mod; } int den=((n*(n-1))/2ll)%mod; int den_inv=power(den, mod-2, mod); int base=(((sum*k)%mod)*den_inv)%mod; int avg_inc=0; for(int i=1;i<=k;i++) { // Extra sum added to ans if a particular pair of friends is picked i times. int sum=((i*(i-1))/2ll)%mod; int prob = (nCr(k, i, mod)*power(den_inv, i, mod))%mod; // Probability that a particular pair is unpicked for a given excursion. int unpicked_prob = ((den-1)*den_inv)%mod; // Probability that a particular pair is picked exactly i times. prob=(prob * power(unpicked_prob, k-i, mod))%mod; avg_inc = (avg_inc + (sum*prob)%mod)%mod; } int ans = (base + (m*avg_inc)%mod)%mod; cout<<ans<<'\n'; } }
1926
A
Vlad and the Best of Five
Vladislav has a string of length $5$, whose characters are each either $A$ or $B$. Which letter appears most frequently: $A$ or $B$?
Since the string is of odd length, the number of $\texttt{A}$s can't be equal to the number of $\texttt{B}$s, so there is always exactly one possible answer. Denote by a_counter and b_counter the counts of $\texttt{A}$s and $\texttt{B}$s in the string, respectively. Let's iterate through all 5 characters of the string, increasing a_counter every time we see an $\texttt{A}$ and b_counter every time we see a $\texttt{B}$. If a_counter is greater than b_counter, we output $\texttt{A}$, and $\texttt{B}$ otherwise.
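The same count-and-compare can be written as a tiny Python sketch (illustrative; the reference C++ solution does the same thing):

```python
def best_of_five(s):
    # the string has odd length 5, so the A-count and B-count can never tie
    return 'A' if s.count('A') > s.count('B') else 'B'
```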
[ "implementation" ]
800
#include <bits/stdc++.h> using namespace std; const int MAX = 200'007; const int MOD = 1'000'000'007; void solve() { string s; cin >> s; int a = 0, b = 0; for (char c : s) { if (c == 'A') {a++;} else {b++;} } cout << (a > b ? 'A' : 'B') << '\n'; } int main() { ios::sync_with_stdio(false); cin.tie(nullptr); int tt; cin >> tt; for (int i = 1; i <= tt; i++) {solve();} // solve(); }
1926
B
Vlad and Shapes
Vladislav has a binary square grid of $n \times n$ cells. A triangle or a square is drawn on the grid with symbols $1$. As he is too busy being cool, he asks you to tell him which shape is drawn on the grid. - A triangle is a shape consisting of $k$ ($k>1$) consecutive rows, where the $i$-th row has $2 \cdot i-1$ consecutive characters $1$, and the central 1s are located in one column. An upside down triangle is also considered a valid triangle (but not rotated by 90 degrees). \begin{center} {\small Two left pictures contain examples of triangles: $k=4$, $k=3$. The two right pictures don't contain triangles.} \end{center} - A square is a shape consisting of $k$ ($k>1$) consecutive rows, where the $i$-th row has $k$ consecutive characters $1$, which are positioned at an equal distance from the left edge of the grid. \begin{center} {\small Examples of two squares: $k=2$, $k=4$.} \end{center} For the given grid, determine the type of shape that is drawn on it.
Let's draw some examples on paper and notice a pattern. What we notice is that in the case of a triangle there is a row with exactly one $\texttt{1}$, while a square has no such row. So, this is what we need to check. Iterate through all rows, and check if there is a row with exactly one $\texttt{1}$. If this is the case for at least one row, then the answer is "TRIANGLE", and "SQUARE" otherwise. Another solution. Check if any $2 \times 2$ square of cells has sum $3$. If it does, then we must be at one of the sloped sides of a triangle, so the answer is "TRIANGLE". If there is no such square, the answer is "SQUARE". Why does it work?
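A minimal Python sketch of the first approach (illustrative):

```python
def shape(grid):
    # grid: list of '0'/'1' strings.
    # A triangle always has a row containing exactly one '1' (its tip),
    # while every row of a square contains k >= 2 ones.
    if any(row.count('1') == 1 for row in grid):
        return "TRIANGLE"
    return "SQUARE"
```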
[ "geometry", "implementation" ]
800
#include <iostream> #include <algorithm> #include <vector> #include <array> #include <set> #include <map> #include <queue> #include <stack> #include <list> #include <chrono> #include <random> #include <cstdlib> #include <cmath> #include <ctime> #include <cstring> #include <iomanip> #include <bitset> #include <cassert> using namespace std; void solve() { int n; cin >> n; vector<string> g; for(int i = 0; i < n; i++) { string s; cin >> s; g.push_back(s); } bool triangle = false; for(int i = 0; i < n; i++) { int cnt = 0; for(int j = 0; j < n; j++) { if(g[i][j] == '1') { cnt++; } } if(cnt == 1) { triangle = true; } else if(cnt > 1) { break; } } reverse(g.begin(), g.end()); for(int i = 0; i < n; i++) { int cnt = 0; for(int j = 0; j < n; j++) { if(g[i][j] == '1') { cnt++; } } if(cnt == 1) { triangle = true; } else if(cnt > 1) { break; } } if(triangle) { cout << "TRIANGLE" << endl; } else { cout << "SQUARE" << endl; } } int32_t main(){ int t = 1; cin >> t; while (t--) { solve(); } }
1926
C
Vlad and a Sum of Sum of Digits
{\textbf{Please note that the time limit for this problem is only 0.5 seconds per test.}} Vladislav wrote the integers from $1$ to $n$, inclusive, on the board. Then he replaced each integer with the sum of its digits. What is the sum of the numbers on the board now? For example, if $n=12$ then initially the numbers on the board are: $$1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12.$$ Then after the replacement, the numbers become: $$1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 2, 3.$$ The sum of these numbers is $1+2+3+4+5+6+7+8+9+1+2+3=51$. Thus, for $n=12$ the answer is $51$.
Let's denote $S(x)$ as the sum of digits of the number $x$. Since $n \leq 2 \cdot 10^5$, for a single test case we can brute force $S(1) + S(2) + S(3) + \dots + S(n)$ and output the answer. However, since the number of test cases is large, we can't afford to compute this value from scratch for each $n$. This calls for the standard idea of precomputation: we will compute the answer for each value from $1$ to $n$ and store it in an array $\mathrm{ans}$: $\mathrm{ans}(n) = S(n) + \mathrm{ans}(n-1)$. Then to answer each test case we just output $\mathrm{ans}(n)$. No math is needed! The precomputation takes $\mathcal{O}(n \log n)$ time (it takes $\mathcal{O}(\log n)$ time to find the sum of digits), but now we can answer queries in $\mathcal{O}(1)$ per test case, so overall the complexity is $\mathcal{O}(n \log n + t)$.
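The precomputation can be sketched in Python (illustrative, mirroring the reference solution):

```python
MAX = 200_001

def digit_sum(x):
    s = 0
    while x:
        s += x % 10
        x //= 10
    return s

# ans[i] = S(1) + S(2) + ... + S(i), computed once up front;
# each test case is then answered in O(1)
ans = [0] * MAX
for i in range(1, MAX):
    ans[i] = ans[i - 1] + digit_sum(i)
```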
[ "dp", "implementation" ]
1,200
#include <bits/stdc++.h> using namespace std; const int MAX = 200'007; const int MOD = 1'000'000'007; int res[MAX]; int S(int x) { int res = 0; while (x) { res += (x % 10); x /= 10; } return res; } void solve() { int x; cin >> x; cout << res[x] << '\n'; } int main() { res[0] = 0; for (int i = 1; i < MAX; i++) { res[i] = res[i - 1] + S(i); } int tt; cin >> tt; for (int i = 1; i <= tt; i++) {solve();} // solve(); }
1926
D
Vlad and Division
Vladislav has $n$ non-negative integers, and he wants to divide \textbf{all} of them into several groups so that in any group, any pair of numbers does not have matching bit values among bits from $1$-st to $31$-st bit (i.e., considering the $31$ least significant bits of the binary representation). For an integer $k$, let $k_2(i)$ denote the $i$-th bit in its binary representation (from right to left, indexing from 1). For example, if $k=43$, since $43=101011_2$, then $43_2(1)=1$, $43_2(2)=1$, $43_2(3)=0$, $43_2(4)=1$, $43_2(5)=0$, $43_2(6)=1$, $43_2(7)=0$, $43_2(8)=0, \dots, 43_2(31)=0$. \textbf{Formally, for any two numbers $x$ and $y$ in the same group, the condition $x_2(i) \neq y_2(i)$ must hold for all $1 \leq i < 32$.} What is the minimum number of groups Vlad needs to achieve his goal? Each number must fall into exactly one group.
We can notice that a group contains either one or two numbers: if $x$ and $y$ differ in every bit, then $y$ is uniquely determined by $x$, so a third number cannot differ from both of them in every bit simultaneously. Now, we check how many numbers we can pair together. The condition that all bits in a pair differ is equivalent to the XOR of the two numbers having all $31$ bits set, i.e., being equal to $2^{31} - 1$ (as this is the number with all bits set). So, we need to count the pairs whose XOR equals $2^{31} - 1$. Now, we iterate through the numbers in order from left to right and check whether the current number can be paired with some previous one: this is possible exactly when we encountered the value $(2^{31} - 1)$ XOR $a_i$ in the past. If we have, we mark that value and the current value as taken and don't start a new group; otherwise we start a new group and continue the process.
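A short Python sketch of the greedy pairing (illustrative; a dictionary plays the role of the map in the reference solution):

```python
def min_groups(a):
    # pair x with a previously unmatched value ((2^31 - 1) ^ x) when possible
    FULL = (1 << 31) - 1
    waiting = {}          # value -> number of unmatched earlier occurrences
    groups = 0
    for x in a:
        if waiting.get(x, 0) > 0:
            waiting[x] -= 1                          # x completes an earlier group
        else:
            groups += 1                              # start a new group
            waiting[FULL ^ x] = waiting.get(FULL ^ x, 0) + 1
    return groups
```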
[ "bitmasks", "greedy" ]
1,300
#include "bits/stdc++.h" using namespace std; void solve() { int n; cin >> n; map<int, int> cnt; int ans = 0; for(int i = 0, x; i < n; ++i) { cin >> x; if(!cnt[x]) ++ans, ++cnt[((1 << 31) - 1) ^ x]; else --cnt[x]; } cout << ans << "\n"; } main() { int t = 1; cin >> t; while(t--) { solve(); } }
1926
E
Vlad and an Odd Ordering
Vladislav has $n$ cards numbered $1, 2, \dots, n$. He wants to lay them down in a row as follows: - First, he lays down all the odd-numbered cards from smallest to largest. - Next, he lays down all cards that are twice an odd number from smallest to largest (i.e. $2$ multiplied by an odd number). - Next, he lays down all cards that are $3$ times an odd number from smallest to largest (i.e. $3$ multiplied by an odd number). - Next, he lays down all cards that are $4$ times an odd number from smallest to largest (i.e. $4$ multiplied by an odd number). - And so on, until all cards are laid down. What is the $k$-th card he lays down in this process? Once Vladislav puts a card down, he cannot use that card again.
Idea. The problem is very recursive; after we lay all the odd cards down, we have the same problem as we started with, but every card is multiplied by $2$. Can you solve the problem from here? We present two different solutions, which are actually the same idea, but presented a little differently, so you can understand the problem better ;). Solution 1. Note that we will never lay cards down on moves that are not powers of $2$. Why? Well, for example, if a number is $3 \times \mathrm{odd}$, then this number is also $1 \times \mathrm{odd}$, and similarly for $5 \times \mathrm{odd}$, $7 \times \mathrm{odd}$, $\dots$. This same logic shows that $6 \times \mathrm{odd}$ numbers will be laid down in the $2 \times \mathrm{odd}$ step, etc. This means that all our cards are divided into "blocks": $1 \times \mathrm{odd}$ numbers (odd numbers), $2 \times \mathrm{odd}$ numbers (multiples of $2$ but not $4$), $4 \times \mathrm{odd}$ numbers (multiples of $4$ but not $8$), $8 \times \mathrm{odd}$ numbers (multiples of $8$ but not $16$), and so on. This leads to the solution. We can find the number of cards in each of these groups by repeatedly counting the odd cards, removing them, and continuing the process on the remaining deck. So to find the $k$-th card, we find the first block whose prefix sum reaches $k$, and take the corresponding card within that block. Solution 2. Let's make some observations: Of the cards $1, 2, \dots, n$, there are $\lceil \frac{n}{2} \rceil$ odd ones. The even-numbered cards cannot be laid down on an odd-numbered step ($1 \times \mathrm{odd}$, $3 \times \mathrm{odd}$, $5 \times \mathrm{odd}$, $\dots$), because all those values are odd. In other words, even-numbered cards can only be laid down at even-numbered steps $2 \times \mathrm{odd}$, $4 \times \mathrm{odd}$, $6 \times \mathrm{odd}$, $\dots$. 
If $k \le \lceil \frac{n}{2} \rceil$, the answer is simply the $k$-th odd number. Otherwise, once we are finished laying down all the odd numbers, $\lceil \frac{n}{2} \rceil$ turns have passed, our remaining cards are the numbers $2, 4, 6, \dots$, and we will only lay down cards in the steps $2 \times \mathrm{odd}$, $4 \times \mathrm{odd}$, $6 \times \mathrm{odd}$, $\dots$. Since all the remaining numbers have a factor of $2$, we will divide it out and get the following equivalent problem: our remaining cards are the numbers $1, 2, 3, \dots$ and we will only lay down cards in the steps $1 \times \mathrm{odd}$, $2 \times \mathrm{odd}$, $3 \times \mathrm{odd}$, $\dots$. But this is exactly the same as the original problem! Thus we can solve this problem recursively. More formally, let the answer be $\mathrm{ans}(n, k)$. Then if $k \leq \lceil \frac{n}{2} \rceil$, output the $k$-th odd number; otherwise, output $2 \cdot \mathrm{ans}(\lfloor \frac{n}{2} \rfloor, k - \lceil \frac{n}{2} \rceil)$. This works in $\mathcal{O}(\log n)$ per test case.
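Solution 2's recursion can be written directly (an illustrative Python sketch):

```python
def kth_card(n, k):
    # odd cards come first; otherwise divide out the factor of 2 and recurse
    odd_count = (n + 1) // 2          # ceil(n / 2) odd cards among 1..n
    if k <= odd_count:
        return 2 * k - 1              # the k-th odd number
    return 2 * kth_card(n // 2, k - odd_count)
```

For $n=7$ the laying-down order is $1,3,5,7$ (odds), then $2,6$ ($2\times\mathrm{odd}$), then $4$ ($4\times\mathrm{odd}$).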
[ "binary search", "bitmasks", "data structures", "dp", "implementation", "math", "number theory" ]
1,500
#include <bits/stdc++.h> using namespace std; const int MAX = 200'007; const int MOD = 1'000'000'007; void solve() { int n, k; cin >> n >> k; vector<int> v; while (n) { v.push_back((n + 1) / 2); n /= 2; } int tot = 0, pow2 = 1; for (int x : v) { if (tot < k && k <= tot + x) { cout << pow2 * (2 * (k - tot) - 1) << '\n'; return; } tot += x; pow2 *= 2; } } int main() { int tt; cin >> tt; for (int i = 1; i <= tt; i++) {solve();} // solve(); }
1926
F
Vlad and Avoiding X
Vladislav has a grid of size $7 \times 7$, where each cell is colored black or white. In one operation, he can choose any cell and change its color (black $\leftrightarrow$ white). Find the minimum number of operations required to ensure that there are no black cells with four diagonal neighbors also being black. \begin{center} {\small The left image shows that initially there are two black cells violating the condition. By flipping one cell, the grid will work.} \end{center}
Notice that we can split the grid into two parts, red and blue, in a chessboard coloring fashion as shown, and that Xs from one color only influence cells of that color. This means that we can solve the problem for the red and blue parts independently, and combine them at the end. Let's brute force the number of cells in the blue part that we need to flip using backtracking (i.e. let's try all ways we can flip $0$ cells and see if all the black cells work, then try flipping $1$, then $2$, and so on). In fact, it can be shown that this number does not exceed $4$, by running this algorithm on an all-black grid. Similarly, the number of cells in the red part that we need to flip is also not more than $4$. The backtracking will try $\sim \binom{25}{4} + \binom{24}{4}$ flip sets, and each takes around $50$ operations to check the whole grid, for a total on the order of $10^6$ simple operations. This is not very big, and even with $200$ test cases, it does not take more than 140 milliseconds. You can run the worst case locally to check. Bonus. Can this problem be solved in polynomial time?
[ "bitmasks", "brute force", "dfs and similar", "dp", "implementation" ]
2,200
#include <bits/stdc++.h> using namespace std; const int MAX = 200'007; const int MOD = 1'000'000'007; vector<pair<int, int>> odd, even; bool valid(int gc[7][7], bool odd) { for (int r = 1; r < 6; r++) { for (int c = 1; c < 6; c++) { if (gc[r][c] && ((r + c) % 2 == odd)) { if (gc[r - 1][c - 1] && gc[r - 1][c + 1] && gc[r + 1][c - 1] && gc[r + 1][c + 1]) { return false; } } } } return true; } bool check(int g[7][7], int flips_left, int idx, vector<pair<int, int>>& vec, int valid_val) { if (flips_left == 0) { return valid(g, valid_val); } if (idx == vec.size()) { return false; } bool ok = false; ok |= check(g, flips_left, idx + 1, vec, valid_val); g[vec[idx].first][vec[idx].second] ^= 1; ok |= check(g, flips_left - 1, idx + 1, vec, valid_val); g[vec[idx].first][vec[idx].second] ^= 1; return ok; } void solve() { int g[7][7]; for (int i = 0; i < 7; i++) { for (int j = 0; j < 7; j++) { char c; cin >> c; g[i][j] = (c == 'B'); } } int res = 0; for (int i = 0; i <= 4; i++) { if (check(g, i, 0, odd, 1)) {res += i; break;} } for (int i = 0; i <= 4; i++) { if (check(g, i, 0, even, 0)) {res += i; break;} } cout << res << '\n'; } int main() { for (int i = 0; i < 7; i++) { for (int j = 0; j < 7; j++) { if ((i + j) % 2) { odd.emplace_back(i, j); } else { even.emplace_back(i, j); } } } int tt; cin >> tt; for (int i = 1; i <= tt; i++) {solve();} }
1926
G
Vlad and Trouble at MIT
Vladislav has a son who really wanted to go to MIT. The college dormitory at MIT (Moldova Institute of Technology) can be represented as a tree with $n$ vertices, each vertex being a room with exactly one student. A tree is a connected undirected graph with $n$ vertices and $n-1$ edges. Tonight, there are three types of students: - students who want to party and play music (marked with $P$), - students who wish to sleep and enjoy silence (marked with $S$), and - students who don't care (marked with $C$). Initially, all the edges are thin walls which allow music to pass through, so when a partying student puts music on, it will be heard in every room. However, we can place some thick walls on any edges — thick walls don't allow music to pass through them. The university wants to install some thick walls so that every partying student can play music, and no sleepy student can hear it. Because the university lost a lot of money in a naming rights lawsuit, they ask you to find the minimum number of thick walls they will need to use.
Let's think of the problem as trying to separate some rooms (with P and possibly some C) that will hear music from the other ones (with S and possibly some C) that will not hear music. Imagine red "water" flowing from the P nodes and blue "water" flowing from the S nodes, flowing freely until hitting a thick wall. We don't want these two "waters" to mix. Let's rotate the tree upside down so that the leaves are all on top and the root node is at the bottom, and start letting blue and red water flow down. We want to ensure that, at no point, these waters mix. Let's go through the tree layer by layer and do dynamic programming $dp_{i,c}$ ($c \in \{0, 1, 2\}$), where for each node $i$ we remember the minimum number of walls we have to add such that no water flows there, only red water flows there, or only blue water flows there, ensuring there is no mixing in the nodes above. Using the $dp$ values for the nodes above, we can calculate the $dp$ value of the node below. Of course, for the "pumping" nodes (P or S), the $dp$ values of the other colors will be infinite. The final answer will be the minimum of $dp_{1, 0}$, $dp_{1, 1}$, $dp_{1, 2}$: the minimum number of walls needed for the root to reach no "water", only red "water", or only blue "water" (with no mixing above, in each case). The final complexity is $\mathcal{O}(n)$.
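The DP can be sketched in Python (illustrative; it assumes, as in the input format, that every vertex's parent has a smaller index, so processing vertices in decreasing order visits children before parents; states: 0 = no music reaches the room, 1 = music/party, 2 = silence/sleep):

```python
INF = float('inf')

def min_walls(parent, s):
    # parent[v] is the parent of vertex v for v = 2..n; the root is vertex 1.
    # s is 1-indexed with s[0] unused; s[v] is one of 'P', 'S', 'C'.
    n = len(s) - 1
    children = [[] for _ in range(n + 1)]
    for v in range(2, n + 1):
        children[parent[v]].append(v)
    dp = [[0, 0, 0] for _ in range(n + 1)]
    for u in range(n, 0, -1):          # children (larger indices) first
        party = 0 if s[u] != 'S' else INF   # music must reach P, must not reach S
        sleep = 0 if s[u] != 'P' else INF
        quiet = 0 if s[u] == 'C' else INF   # only a C room may hear nothing
        for v in children[u]:
            # either match the child's state, cut the edge (+1 wall),
            # or attach a child that hears nothing
            party += min(dp[v][1], dp[v][2] + 1, dp[v][0])
            sleep += min(dp[v][2], dp[v][1] + 1, dp[v][0])
            quiet += dp[v][0]
        dp[u][1], dp[u][2] = party, sleep
        dp[u][0] = min(quiet, party + 1, sleep + 1)
    return min(dp[1])
```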
[ "dfs and similar", "dp", "flows", "graphs", "greedy", "implementation", "trees" ]
1,900
#include <bits/stdc++.h> using namespace std; #define int long long #define INF (int)1e18 int n; const int N = 1e5 + 69; int dp[N][3]; string s; vector <int> adj[N]; void dfs(int u){ // dp[u][0] = nothing open // dp[u][1] = P open // dp[u][2] = S open dp[u][0] = INF; if (s[u] != 'S') dp[u][1] = 0; else dp[u][1] = INF; if (s[u] != 'P') dp[u][2] = 0; else dp[u][2] = INF; int tot = 0; for (int v : adj[u]){ dfs(v); dp[u][1] = dp[u][1] + min({dp[v][1], dp[v][2] + 1, dp[v][0]}); dp[u][2] = dp[u][2] + min({dp[v][2], dp[v][1] + 1, dp[v][0]}); tot += dp[v][0]; } if (s[u] != 'C') tot = INF; dp[u][0] = min({tot, dp[u][1] + 1, dp[u][2] + 1}); //cout << u << " " << dp[u][0] << " " << dp[u][1] << " " << dp[u][2] << "\n"; } void Solve() { cin >> n; for (int i = 1; i <= n; i++) adj[i].clear(); for (int i = 2; i <= n; i++){ int x; cin >> x; adj[x].push_back(i); } cin >> s; s = "0" + s; dfs(1); cout << min({dp[1][0], dp[1][1], dp[1][2]}) << "\n"; } int32_t main() { int t = 1; // freopen("in", "r", stdin); // freopen("out", "w", stdout); cin >> t; for(int i = 1; i <= t; i++) { //cout << "Case #" << i << ": "; Solve(); } return 0; }
1927
A
Make it White
You have a horizontal strip of $n$ cells. Each cell is either white or black. You can choose a \textbf{continuous} segment of cells once and paint them all white. After this action, all the black cells in this segment will become white, and the white ones will remain white. What is the minimum length of the segment that needs to be painted white in order for all $n$ cells to become white?
To repaint all the black cells white, it is necessary to select a segment from $l$ to $r$ that contains all the black cells. Let's start by choosing the entire strip as the segment ($l=1, r=n$). As long as the segment starts with a white cell, that cell can be left unpainted, i.e., $l$ can be increased by one. Otherwise, we cannot exclude the cell from the segment. Similarly, we can exclude the last cell from the segment while it is white. After all exclusions, the answer will be $r - l + 1$. The selected segment contains all the black cells, as we have excluded only white cells. The segment is also minimal, because reducing it from either side would leave one of the black cells black.
[ "greedy", "strings" ]
800
from collections import deque def solve(): n = int(input()) s = deque(input()) while len(s) > 0 and s[0] == 'W': s.popleft() while len(s) > 0 and s[-1] == 'W': s.pop() print(len(s)) for _ in range(int(input())): solve()
1927
B
Following the String
Polycarp lost the string $s$ of length $n$ consisting of lowercase Latin letters, but he still has its trace. The trace of the string $s$ is an array $a$ of $n$ integers, where $a_i$ is the number of such indices $j$ ($j < i$) that $s_i=s_j$. For example, the trace of the string abracadabra is the array [$0, 0, 0, 1, 0, 2, 0, 3, 1, 1, 4$]. Given a trace of a string, find \textbf{any} string $s$ from which it could have been obtained. The string $s$ should consist only of lowercase Latin letters a-z.
To build the desired string, we will move from left to right and add characters. In order to add a character at each step with the required number of occurrences, we will keep track of the number of times each character is added.
[ "constructive algorithms", "greedy", "strings" ]
900
def solve(): n = int(input()) a = [int(x) for x in input().split()] cnt = [0] * 26 s = '' for i in range(n): for j in range(26): if cnt[j] == a[i]: cnt[j] += 1 s += chr(97 + j) break print(s) for _ in range(int(input())): solve()
1927
C
Choose the Different Ones!
Given an array $a$ of $n$ integers, an array $b$ of $m$ integers, and an even number $k$. Your task is to determine whether it is possible to choose \textbf{exactly} $\frac{k}{2}$ elements from both arrays in such a way that among the chosen elements, every integer from $1$ to $k$ is included. For example: - If $a=[2, 3, 8, 5, 6, 5]$, $b=[1, 3, 4, 10, 5]$, $k=6$, then it is possible to choose elements with values $2, 3, 6$ from array $a$ and elements with values $1, 4, 5$ from array $b$. In this case, all numbers from $1$ to $k=6$ will be included among the chosen elements. - If $a=[2, 3, 4, 5, 6, 5]$, $b=[1, 3, 8, 10, 3]$, $k=6$, then it is not possible to choose elements in the required way. Note that you are not required to find a way to choose the elements — your program should only check whether it is possible to choose the elements in the required way.
Notice that elements with a value greater than $k$ are not relevant to us. Let's divide the values into three categories: Occurring only in array $a$; occurring only in array $b$; occurring in both arrays. The answer will be NO if any of the following conditions are met: the number of values of the first type is greater than $\frac{k}{2}$ (this implies that we cannot select all such elements); the number of values of the second type is greater than $\frac{k}{2}$ (this implies that we cannot select all such elements); the total number of values of all three types is less than $k$ (this implies that some values do not occur in either of the arrays). Otherwise, the answer is YES.
[ "brute force", "greedy", "math" ]
1,000
def solve(): n, m, k = map(int, input().split()) a = [int(x) for x in input().split()] b = [int(x) for x in input().split()] cnt = [0] * (k + 1) for e in a: if e <= k: cnt[e] |= 1 for e in b: if e <= k: cnt[e] |= 2 c = [0] * 4 for e in cnt: c[e] += 1 if c[1] > k // 2 or c[2] > k // 2 or c[1] + c[2] + c[3] != k: print("NO") else: print("YES") for _ in range(int(input())): solve()
1927
D
Find the Different Ones!
You are given an array $a$ of $n$ integers, and $q$ queries. Each query is represented by two integers $l$ and $r$ ($1 \le l \le r \le n$). Your task is to find, for each query, two indices $i$ and $j$ (or determine that they do not exist) such that: - $l \le i \le r$; - $l \le j \le r$; - $a_i \ne a_j$. In other words, for each query, you need to find a pair of different elements among $a_l, a_{l+1}, \dots, a_r$, or report that such a pair does not exist.
Before processing the queries, let's calculate for each element the position $p_i$ of the nearest element to its left that is not equal to it (we will consider $p_1 = -1$). To do this in linear time, we will traverse the array from left to right. For the $i$-th element ($1 < i \le n$), if $a_i \ne a_{i - 1}$, then $p_i=i-1$, otherwise $p_i=p_{i-1}$ (all elements between $p_{i - 1}$ and $i$ will be equal to $a_i$ by construction). To answer a query, we will fix the right boundary $r$ as one of the positions for the answer. Then the best candidate to pair with it is $p_r$, as it is the maximum suitable position, and we just need to check that $l \le p_r$. (Note that by fixing any other position, you may not find the answer for a segment of the form $[1, 1, 1, \dots, 1, 2]$.)
[ "binary search", "brute force", "data structures", "dp", "dsu", "greedy", "two pointers" ]
1,300
def solve():
    n = int(input())
    a = [int(x) for x in input().split()]
    p = [-1] * n
    for i in range(1, n):
        p[i] = p[i - 1]
        if a[i] != a[i - 1]:
            p[i] = i - 1
    for i in range(int(input())):
        l, r = map(int, input().split())
        l -= 1
        r -= 1
        if p[r] < l:
            print("-1 -1")
        else:
            print(p[r] + 1, r + 1)


t = int(input())
for _ in range(t):
    solve()
    if _ + 1 != t:
        print()
1927
E
Klever Permutation
You are given two integers $n$ and $k$ ($k \le n$), where $k$ is even. A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in any order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation (as $2$ appears twice in the array) and $[0,1,2]$ is also not a permutation (as $n=3$, but $3$ is not present in the array). Your task is to construct a $k$-level permutation of length $n$. A permutation is called $k$-level if, among all the sums of continuous segments of length $k$ (of which there are exactly $n - k + 1$), any two sums differ by no more than $1$. More formally, to determine if the permutation $p$ is $k$-level, first construct an array $s$ of length $n - k + 1$, where $s_i=\sum_{j=i}^{i+k-1} p_j$, i.e., the $i$-th element is equal to the sum of $p_i, p_{i+1}, \dots, p_{i+k-1}$. A permutation is called $k$-level if $\max(s) - \min(s) \le 1$. Find \textbf{any} $k$-level permutation of length $n$.
To construct a permutation, let's make an important observation: $s_i$ cannot be equal to $s_{i + 1}$, because $s_{i+1} - s_i = p_{i+k} - p_i \ne 0$ (the elements of a permutation are distinct), so neighboring sums differ by at least $1$. Since the array $s$ can only contain two different values, it always has the form $[x, x+1, x, x+1, \dots]$ or $[x, x-1, x, x-1, \dots]$. Let's construct a permutation of the first form. Since $s_1 + 1 = s_2$, then $p_1 + 1=p_{k+1}$; since $s_2 = s_3 + 1$, then $p_2 = p_{k+2} + 1$; since $s_3 + 1 = s_4$, then $p_3 + 1 = p_{k+3}$; $\ldots$ since $s_k = s_{k + 1} + 1$, then $p_k = p_{2k} + 1$; $\ldots$ Thus, for all odd positions $i$, it must hold that $p_i + 1 = p_{i + k}$, and for even positions, $p_i = p_{i + k} + 1$. To construct such a permutation, we will iterate through all positions $i$ from $1$ to $k$ and fill the permutation in positions $i, i+k, i+2\cdot k, \dots$.
[ "constructive algorithms", "math", "two pointers" ]
1,400
def solve():
    n, k = map(int, input().split())
    l, r = 1, n
    ans = [0] * n
    for i in range(k):
        for j in range(i, n, k):
            if i % 2 == 0:
                ans[j] = l
                l += 1
            else:
                ans[j] = r
                r -= 1
    print(*ans)


for _ in range(int(input())):
    solve()
1927
F
Microcycle
Given an undirected weighted graph with $n$ vertices and $m$ edges. There is at most one edge between each pair of vertices in the graph, and the graph does not contain loops (edges from a vertex to itself). The graph is not necessarily connected. A cycle in the graph is called simple if it doesn't pass through the same vertex twice and doesn't contain the same edge twice. Find any simple cycle in this graph in which the weight of the lightest edge is minimal.
Let's use the disjoint sets union (DSU). We will add edges to the DSU in descending order of weight. At the same time, we will build a graph containing only the edges that, when added to the DSU, unite different sets. We will also remember the last edge that we did not add to the graph, as it will be the lightest edge of the desired cycle. To find the cycle itself, we will find a path in the constructed graph leading from one end of this edge to the other (due to the construction of the graph, it does not contain cycles, so we need to find a path in the tree).
[ "data structures", "dfs and similar", "dsu", "graphs", "greedy", "implementation", "sortings", "trees" ]
1,900
#include <bits/stdc++.h>

#define int long long
#define pb emplace_back
#define mp make_pair
#define x first
#define y second
#define all(a) a.begin(), a.end()
#define rall(a) a.rbegin(), a.rend()

typedef long double ld;
typedef long long ll;

using namespace std;

mt19937 rnd(time(nullptr));

const ll inf = 1e18;
const ll M = 998244353;
const ld pi = atan2(0, -1);
const ld eps = 1e-6;

struct dsu {
    vector<int> p, lvl;

    dsu(int n) {
        p.resize(n);
        iota(p.begin(), p.end(), 0);
        lvl.assign(n, 0);
    }

    int get(int i) {
        if (p[i] == i) return i;
        return p[i] = get(p[i]);
    }

    bool unite(int a, int b) {
        a = get(a);
        b = get(b);
        if (a == b) return false;
        if (lvl[a] < lvl[b]) swap(a, b);
        p[b] = a;
        if (lvl[a] == lvl[b]) lvl[a]++;
        return true;
    }
};

bool found;
vector<int> ans, path;

void dfs(int v, int p, vector<vector<int>> &g, int f) {
    path.push_back(v);
    if (v == f) {
        ans = path;
        found = true;
        return;
    }
    for (int u : g[v]) {
        if (u != p) dfs(u, v, g, f);
        if (found) return;
    }
    path.pop_back();
}

void solve(int tc) {
    int n, m;
    cin >> n >> m;
    vector<vector<int>> sl(n);
    vector<pair<int, pair<int, int>>> edges;
    for (int i = 0; i < m; ++i) {
        int u, v, w;
        cin >> u >> v >> w;
        --u, --v;
        edges.push_back({w, {u, v}});
    }
    sort(rall(edges));
    dsu g(n);
    pair<int, int> fin;
    int best = INT_MAX;
    for (auto e : edges) {
        if (!g.unite(e.y.x, e.y.y)) {
            fin = e.y;
            best = e.x;
        } else {
            sl[e.y.x].push_back(e.y.y);
            sl[e.y.y].push_back(e.y.x);
        }
    }
    found = false;
    path.resize(0);
    dfs(fin.x, -1, sl, fin.y);
    cout << best << " " << ans.size() << "\n";
    for (int e : ans) cout << e + 1 << " ";
}

bool multi = true;

signed main() {
    int t = 1;
    if (multi) cin >> t;
    for (int i = 1; i <= t; ++i) {
        solve(i);
        cout << "\n";
    }
    return 0;
}
1927
G
Paint Charges
A horizontal grid strip of $n$ cells is given. In the $i$-th cell, there is a paint charge of size $a_i$. This charge can be: - either used to the left — then all cells to the left at a distance less than $a_i$ (from $\max(i - a_i + 1, 1)$ to $i$ inclusive) will be painted, - or used to the right — then all cells to the right at a distance less than $a_i$ (from $i$ to $\min(i + a_i - 1, n)$ inclusive) will be painted, - or not used at all. Note that a charge can be used no more than once (that is, it \textbf{cannot} be used simultaneously to the left and to the right). It is allowed for a cell to be painted more than once. What is the minimum number of times a charge needs to be used to paint all the cells of the strip?
Let's use the method of dynamic programming. Let $dp[i][j][k]$ be the minimum number of operations required for the distance from $i$ to the farthest unpainted cell on the left to be $j$, and the distance from $i$ to the farthest painted cell on the right to be $k$. We will update the values forward, that is, for all reachable states, we will find the states reachable from them and update the answer for those. In this case, we will move from the current $i$ to $i + 1$, recalculating $j$ and $k$ depending on the action: not spraying paint from $i$, spraying paint from $i$ to the left, or spraying paint from $i$ to the right. The problem could also have been solved in $O(n^2)$; however, the constraints did not require this.
[ "data structures", "dp", "greedy", "math" ]
2,300
#include <bits/stdc++.h>

using namespace std;

#define forn(i, n) for (int i = 0; i < int(n); i++)

int main() {
    int t;
    cin >> t;
    forn(tt, t) {
        int n;
        cin >> n;
        vector<int> a(n);
        forn(i, n) cin >> a[i];
        vector<vector<vector<int>>> d(n + 1, vector<vector<int>>(n + 1, vector<int>(n + 1, INT_MAX)));
        d[0][0][0] = 0;
        forn(i, n) forn(j, n) forn(k, n + 1) if (d[i][j][k] < INT_MAX) {
            int ai = a[i];
            // Z
            {
                int ni = i + 1;
                int nj = j > 0 ? j + 1 : (k == 0 ? 1 : 0);
                int nk = max(0, k - 1);
                d[ni][nj][nk] = min(d[ni][nj][nk], d[i][j][k]);
            }
            // L
            {
                int ni = i + 1;
                int nj = j > 0 ? j + 1 : 0;
                if (nj <= ai) nj = 0;
                int nk = max(0, k - 1);
                d[ni][nj][nk] = min(d[ni][nj][nk], d[i][j][k] + 1);
            }
            // R
            {
                int ni = i + 1;
                int nj = j > 0 ? j + 1 : 0;
                int nk = max(ai - 1, k - 1);
                d[ni][nj][nk] = min(d[ni][nj][nk], d[i][j][k] + 1);
            }
        }
        cout << *min_element(d[n][0].begin(), d[n][0].end()) << endl;
    }
}
1928
A
Rectangle Cutting
Bob has a rectangle of size $a \times b$. He tries to cut this rectangle into two rectangles with integer sides by making a cut parallel to one of the sides of the original rectangle. Then Bob tries to form some \textbf{other} rectangle from the two resulting rectangles, and he can rotate and move these two rectangles as he wishes. Note that if two rectangles differ only by a $90^{\circ}$ rotation, they are considered \textbf{the same}. For example, the rectangles $6 \times 4$ and $4 \times 6$ are considered the same. Thus, from the $2 \times 6$ rectangle, another rectangle can be formed, because it can be cut into two $2 \times 3$ rectangles, and then these two rectangles can be used to form the $4 \times 3$ rectangle, which is different from the $2 \times 6$ rectangle. However, from the $2 \times 1$ rectangle, another rectangle cannot be formed, because it can only be cut into two rectangles of $1 \times 1$, and from these, only the $1 \times 2$ and $2 \times 1$ rectangles can be formed, which are considered the same. Help Bob determine if he can obtain some other rectangle, or if he is just wasting his time.
Let $a \le b$. Let's consider several cases: If $a$ is even, then we can cut the rectangle into two rectangles of size $\frac{a}{2} \times b$ and combine them into a rectangle of size $\frac{a}{2} \times 2b$, which is definitely different from $a \times b$. If $b$ is even and $b \ne 2a$, then we can cut the rectangle into two rectangles of size $a \times \frac{b}{2}$ and combine them into a rectangle of size $2a \times \frac{b}{2}$. Note that here we use the fact that $b \ne 2a$, because if $b = 2a$, then we will get the same rectangle of size $b \times a$. If $a$ and $b$ are both odd, or $b = 2a$ and $a$ is odd, then the rectangle is not interesting. It is easy to understand that if we cut the rectangle of size $a \times b$ into two rectangles of size $a \times c$ and $a \times d$, where $c \ne d$, then we can always only combine the original rectangle (similarly if we cut it into rectangles $c \times b$ and $d \times b$). And from here it follows that we must divide one of the sides of the rectangle in half, so at least one side must be even.
[ "geometry", "math" ]
800
#include <vector>
#include <iostream>
#include <numeric>
#include <algorithm>
#include <cassert>
#include <map>

using namespace std;

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    cout.tie(0);
    int t;
    cin >> t;
    while (t--) {
        int a, b;
        cin >> a >> b;
        if (a > b) {
            swap(a, b);
        }
        if (((a % 2 == 1) && (b % 2 == 1)) || ((a % 2 == 1) && (b == 2 * a))) {
            cout << "No\n";
        } else {
            cout << "Yes\n";
        }
    }
    return 0;
}
1928
B
Equalize
Vasya has two hobbies — adding permutations$^{\dagger}$ to arrays and finding the most frequently occurring element. Recently, he found an array $a$ and decided to find out the maximum number of elements equal to the same number in the array $a$ that he can obtain after adding some permutation to the array $a$. More formally, Vasya must choose exactly one permutation $p_1, p_2, p_3, \ldots, p_n$ of length $n$, and then change the elements of the array $a$ according to the rule $a_i := a_i + p_i$. After that, Vasya counts how many times each number occurs in the array $a$ and takes the maximum of these values. You need to determine the maximum value he can obtain. $^{\dagger}$A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
Suppose we already know the permutation that needs to be added. Let's consider the elements that will become equal after the addition. Notice that among them there cannot be equal elements, because among the numbers we are adding, there are no duplicates. Thus, only a set of numbers among which there are no equal ones, and the difference between the maximum and minimum does not exceed $n - 1$, can become equal. It is easy to see that any set of numbers satisfying these conditions can be equalized, and any set of numbers that became equal after adding the permutation satisfies these constraints. So let's sort the array, remove the equal elements from it. After that, we can use two pointers to find the maximum length subarray where the difference between the maximum and minimum does not exceed $n - 1$. The answer will be the length of such a subarray. The complexity of the solution is $O(n \log n)$.
[ "binary search", "greedy", "sortings", "two pointers" ]
1,200
#include <bits/stdc++.h>

using namespace std;

void solve() {
    int n;
    cin >> n;
    vector<int> a(n);
    for (int i = 0; i < n; i++) {
        cin >> a[i];
    }
    sort(a.begin(), a.end());
    a.resize(unique(a.begin(), a.end()) - a.begin());
    int pnt = 0, ans = 0;
    for (int i = 0; i < (int)a.size(); i++) {
        while (a[i] - a[pnt] >= n) {
            pnt++;
        }
        ans = max(ans, i - pnt + 1);
    }
    cout << ans << endl;
}

signed main() {
    int t = 1;
    cin >> t;
    for (int i = 0; i < t; ++i) {
        solve();
    }
    return 0;
}
1928
C
Physical Education Lesson
In a well-known school, a physical education lesson took place. As usual, everyone was lined up and asked to settle in "the first–$k$-th" position. As is known, settling in "the first–$k$-th" position occurs as follows: the first $k$ people have numbers $1, 2, 3, \ldots, k$, the next $k - 2$ people have numbers $k - 1, k - 2, \ldots, 2$, the next $k$ people have numbers $1, 2, 3, \ldots, k$, and so on. Thus, the settling repeats every $2k - 2$ positions. Examples of settling are given in the "Note" section. The boy Vasya constantly forgets everything. For example, he forgot the number $k$ described above. But he remembers the position he occupied in the line, as well as the number he received during the settling. Help Vasya understand how many natural numbers $k$ fit under the given constraints. Note that the settling exists if and only if $k > 1$. In particular, this means that the settling \textbf{does not exist} for $k = 1$.
All numbers repeat every $2k - 2$ positions. If the boy Vasya's number is $x$, then it can occur at positions of the form $(2k - 2) \cdot t + x$ or $(2k - 2) \cdot t + 2k - x$ for some non-negative $t$. This is true for all $x$ except $x = 1$ and $x = k$; for these values, only one of the two options remains. Let's fix one of the options; the second one is analogous. We need to find how many different values of $k$ satisfy the equation $(2k - 2) \cdot t + x = n$ for some non-negative $t$. It is not difficult to see that this holds if and only if $n - x$ is divisible by $2k - 2$. Therefore, it is necessary to find the number of \textbf{even} divisors of the number $n - x$ (an even divisor $d$ corresponds to $k = \frac{d}{2} + 1$). To consider the second case, proceed similarly with the number $n + x - 2$, and keep only those $k$ with $k \ge x$. The solution's complexity: $O(\sqrt{n})$.
[ "brute force", "math", "number theory" ]
1,600
#include <iostream>
#include <unordered_set>

using namespace std;

unordered_set<int> solve(int a) {
    unordered_set<int> candidates;
    for (int i = 1; i * i <= a; i++) {
        if (a % i == 0) {
            if (i % 2 == 0)  // segment len should be even
                candidates.insert(i);
            if ((a / i) % 2 == 0)
                candidates.insert(a / i);
        }
    }
    unordered_set<int> answer;
    for (int i : candidates) {
        answer.insert(1 + i / 2);
    }
    return answer;
}

int main() {
    int t;
    cin >> t;
    for (int _ = 1; _ <= t; _++) {
        int n, pos;
        cin >> n >> pos;
        unordered_set<int> candidates = solve(n - pos);
        for (int i : solve(n + pos - 2)) {
            candidates.insert(i);
        }
        int answer = 0;
        for (int i : candidates) {
            if (i >= pos) {
                answer++;
            }
        }
        cout << answer << endl;
    }
}
1928
D
Lonely Mountain Dungeons
Once, the people, elves, dwarves, and other inhabitants of Middle-earth gathered to reclaim the treasures stolen from them by Smaug. In the name of this great goal, they rallied around the powerful elf Timothy and began to plan the overthrow of the ruler of the Lonely Mountain. The army of Middle-earth inhabitants will consist of several squads. It is known that each pair of creatures of \textbf{the same race}, which are in different squads, adds $b$ units to the total strength of the army. But since it will be difficult for Timothy to lead an army consisting of a large number of squads, the total strength of an army consisting of $k$ squads is reduced by $(k - 1) \cdot x$ units. Note that the army always consists \textbf{of at least one squad}. It is known that there are $n$ races in Middle-earth, and the number of creatures of the $i$-th race is equal to $c_i$. Help the inhabitants of Middle-earth determine the maximum strength of the army they can assemble.
Let's learn how to solve the problem when $n = 1$. Suppose there is only one race and the number of its representatives is $c$. Notice that for a fixed $k$, it is advantageous for us to divide the representatives of the race almost evenly into squads. If $c$ is divisible by $k$, then it is advantageous for us to take exactly $y = \frac{c}{k}$ beings in each squad. Then the total number of pairs of beings in different squads is equal to $\frac{k(k-1)}{2} \cdot y^2$ (there are a total of $\frac{k(k-1)}{2}$ pairs of squads, and for each pair of squads there are $y^2$ pairs of beings from different squads). In the general case, when $c$ may not be divisible by $k$, let's denote $y = \left\lfloor \frac{c}{k} \right\rfloor$ and $y' = \left\lceil \frac{c}{k} \right\rceil$. Then it is advantageous for us to make squads of size $y$ and $y'$, where the number of squads of size $y'$ is equal to $c \bmod k$ (we essentially make all squads of size $y$, and then add 1 to some squads from the remaining part). In this case, the total number of pairs of beings in different squads is equal to $C_{k - c \bmod k}^2 \cdot y^2 + C_{c \bmod k}^2 \cdot y'^2 + (k - c \bmod k) \cdot (c \bmod k) \cdot y \cdot y'$. It remains to notice that it makes no sense to have $k > c$, so we can simply iterate through $k$ from $1$ to $c$ and choose the optimal one. When $n > 1$, we can notice that for a fixed $k$, we can solve the problem independently for each race. Let the number of representatives of the $i$-th race be $c_i$. Then we will iterate through $k$ from $1$ to $c_i$ for it and add the maximum total strength to the value of $cnt_k$ (the array $cnt$ is common for all races). Also, notice that for $k > c_i$, we will get the same total strength as for $k = c_i$. Then in the additional array $add$ (again, common for all races), we will add the maximum total strength for $k = c_i$ to $add_{c_i}$. We get the following solution: first, calculate the described arrays $cnt$ and $add$. 
After that, iterate through $k$ from $1$ to the maximum $c_i$. The maximum total strength of the squads for a fixed $k$ will be equal to $(cnt_k + (\text{sum of values } add_i \text{ for } i < k)) \cdot b - (k - 1) \cdot x$. From these values, we need to choose the maximum.
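As a sanity check (ours, not part of the editorial), the per-race pair count under an almost-even split can be computed directly as "all pairs minus same-squad pairs", which equals the $C_{k - c \bmod k}^2 \cdot y^2 + C_{c \bmod k}^2 \cdot y'^2 + (k - c \bmod k)(c \bmod k) \cdot y \cdot y'$ expression above; the helper name `cross_pairs` is hypothetical:

```python
def cross_pairs(c, k):
    # Split c creatures into k squads as evenly as possible:
    # (c mod k) squads of size ceil(c/k), the rest of size floor(c/k).
    y, rem = divmod(c, k)
    sizes = [y + 1] * rem + [y] * (k - rem)
    # Pairs in different squads = all pairs minus same-squad pairs.
    total = c * (c - 1) // 2
    same = sum(s * (s - 1) // 2 for s in sizes)
    return total - same

# For c = 7, k = 3 the squads are (3, 2, 2), giving 21 - 5 = 16 cross pairs.
print(cross_pairs(7, 3))
```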
[ "brute force", "data structures", "greedy", "math", "ternary search" ]
1,900
#include <iostream>
#include <vector>

using namespace std;
using ll = long long;

ll pairs(ll n, ll k) {
    if (n == 0 || k == 0) {
        return 0;
    }
    ll x = n / k;
    ll l = n % k;
    ll r = k - l;
    ll L = (x + 1) * l, R = x * r;
    return R * L + (R - x) * R / 2 + L * (L - x - 1) / 2;
}

void solve() {
    int m;
    long long c, b;
    cin >> m >> b >> c;
    int n = 0;
    vector<int> cnt(m);
    for (int i = 0; i < m; ++i) {
        cin >> cnt[i];
        n = max(n, cnt[i]);
    }
    vector<long long> pair(n + 1);
    vector<long long> add(n + 1);
    for (int i = 0; i < m; ++i) {
        for (int j = 1; j <= cnt[i]; ++j) {
            pair[j] += pairs(cnt[i], j);
        }
        add[cnt[i]] += pairs(cnt[i], cnt[i]);
    }
    long long ans = 0;
    long long other = 0;
    for (int i = 1; i <= n; ++i) {
        ans = max(ans, b * (pair[i] + other) - c * (i - 1));
        other += add[i];
    }
    cout << ans << endl;
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        solve();
    }
}
1928
E
Modular Sequence
You are given two integers $x$ and $y$. A sequence $a$ of length $n$ is called modular if $a_1=x$, and for all $1 < i \le n$ the value of $a_{i}$ is either $a_{i-1} + y$ or $a_{i-1} \bmod y$. Here $x \bmod y$ denotes the remainder from dividing $x$ by $y$. Determine if there exists a modular sequence of length $n$ with the sum of its elements equal to $S$, and if it exists, find any such sequence.
Let's see what the answer will look like: first, there will be a prefix of the form $x, x + y, \ldots, x + k\cdot y$, and then there will be some number of blocks of the form $x \bmod y, x \bmod y + y, \ldots, x \bmod y + k \cdot y$. We can subtract the number $x \bmod y$ from all the elements of the sequence, and then divide all the elements by $y$ (all the elements will be divisible by $y$, since they initially had a remainder of $x \bmod y$). Let $b_1 = \frac{x - x \bmod y}{y}$. Then our sequence will start with $b_1, b_1 + 1, \ldots, b_1 + k_1$, and then there will be blocks of the form $0, 1, \ldots, k_i$. Let's calculate these values: $dp_i$ is the minimum length of a sequence of blocks of the form $0, 1, \ldots, k_j$ with a sum of $i$. This value can be calculated for all numbers from $0$ to $s$ using dynamic programming. If we have processed all values from $0$ to $k-1$, then for $k$ we have calculated the minimum length, and we can update the value of $dp$ for $k + 1, k + 1 + 2, \ldots$ (a total of $O(\sqrt{s})$ values not exceeding $s$). In this same $dp$, we can store through which values we were recalculated, for the restoration of the answer. Now, we can iterate over the length of the first block of the form $b_1, b_1 + 1, \ldots, b_1 + k_1$. Then we know the sum of the remaining blocks, and using the precalculated $dp$, we can determine whether the desired sequence can be formed or not.
[ "brute force", "constructive algorithms", "dp", "graphs", "greedy", "math", "number theory" ]
2,300
#include <cassert>
#include <initializer_list>
#include <numeric>
#include <vector>
#include <iostream>
#include <utility>
#include <stack>
#include <queue>
#include <set>
#include <map>
#include <algorithm>
#include <math.h>

using namespace std;

#define pii pair<int, int>
#define mp make_pair
#define fi first
#define se second
#define all(x) (x).begin(), (x).end()
#define ll long long
#define pb emplace_back

const int INF = 1e9 + 10;
const ll INFLL = 1e18;

void solve() {
    int n, x, y, S;
    cin >> n >> x >> y >> S;
    vector<int> dp(S + 1, INF);
    dp[0] = 0;
    for (int k = 1; k <= S; k++) {
        for (int l = 2; (l * (l - 1)) / 2 <= k; l++) {  // just 0 is never optimal
            dp[k] = min(dp[k], dp[k - (l * (l - 1)) / 2] + l);
        }
        assert(dp[k] <= 2 * k);
    }
    for (ll k = 0; k < n; k++) {
        ll prevSum = (k + 1) * x + (k * (k + 1)) / 2 * y;
        if (prevSum > S) {
            continue;
        }
        ll needSum = S - prevSum;
        needSum -= (n - k - 1) * (x % y);
        if (needSum < 0) {
            continue;
        }
        if (needSum % y != 0) {
            continue;
        }
        needSum /= y;
        assert(needSum <= S);
        if (dp[needSum] <= n - k - 1) {
            // we found the answer
            vector<int> a(n);
            a[0] = x;
            for (int i = 1; i <= k; i++) {
                // construct prefix
                a[i] = a[i - 1] + y;
            }
            for (int i = k + 1; i <= k + (n - k - 1) - dp[needSum]; i++) {
                // fill the rest like 0 0 0 ...
                a[i] = x % y;
            }
            int i = k + (n - k - 1) - dp[needSum] + 1;  // first free index
            vector<int> lens;  // recover lengths of the segments
            while (needSum != 0) {
                for (int l = 2; (l * (l - 1)) / 2 <= needSum; l++) {
                    if (dp[needSum] == dp[needSum - (l * (l - 1)) / 2] + l) {
                        lens.pb(l);
                        needSum -= (l * (l - 1)) / 2;
                        break;
                    }
                }
            }
            for (auto &len : lens) {
                for (int j = 0; j < len; j++) {
                    a[i] = (x % y) + y * j;
                    i++;
                }
            }
            cout << "YES\n";
            for (auto &c : a) {
                cout << c << " ";
            }
            cout << "\n";
            return;
        }
    }
    cout << "NO\n";
}

int main() {
    cin.tie(0);
    cout.tie(0);
    ios_base::sync_with_stdio(0);
    int t = 1;
    cin >> t;
    while (t--) {
        solve();
    }
    return 0;
}
1928
F
Digital Patterns
Anya is engaged in needlework. Today she decided to knit a scarf from semi-transparent threads. Each thread is characterized by a single integer — the transparency coefficient. The scarf is made according to the following scheme: horizontal threads with transparency coefficients $a_1, a_2, \ldots, a_n$ and vertical threads with transparency coefficients $b_1, b_2, \ldots, b_m$ are selected. Then they are interwoven as shown in the picture below, forming a piece of fabric of size $n \times m$, consisting of exactly $nm$ nodes: \begin{center} {\small Example of a piece of fabric for $n = m = 4$.} \end{center} After the interweaving tightens and there are no gaps between the threads, each node formed by a horizontal thread with number $i$ and a vertical thread with number $j$ will turn into a cell, which we will denote as $(i, j)$. Cell $(i, j)$ will have a transparency coefficient of $a_i + b_j$. The interestingness of the resulting scarf will be the number of its sub-squares$^{\dagger}$ in which there are no pairs of neighboring$^{\dagger \dagger}$ cells with the same transparency coefficients. Anya has not yet decided which threads to use for the scarf, so you will also be given $q$ queries to increase/decrease the coefficients for the threads on some ranges. After each query of which you need to output the interestingness of the resulting scarf. $^{\dagger}$A sub-square of a piece of fabric is defined as the set of all its cells $(i, j)$, such that $x_0 \le i \le x_0 + d$ and $y_0 \le j \le y_0 + d$ for some integers $x_0$, $y_0$, and $d$ ($1 \le x_0 \le n - d$, $1 \le y_0 \le m - d$, $d \ge 0$). $^{\dagger \dagger}$. Cells $(i_1, j_1)$ and $(i_2, j_2)$ are neighboring if and only if $|i_1 - i_2| + |j_1 - j_2| = 1$.
Let's assume that $a_i = a_{i+1}$ for some $1 \le i < n$; then for any $1 \le j \le m$, the cells $(i, j)$ and $(i+1, j)$ will have the same transparency. A similar statement can be made if there is an index $j$ with $b_j = b_{j+1}$. Then the positions $a_i = a_{i+1}$ divide the array $a$ into \textit{blocks}, in each of which all neighboring pairs are not equal to each other. It is clear that if there is a square $(x, y, d)$ consisting of cells $(i, j)$ such that $x \le i < x+d$ and $y \le j < y+d$, then the segment $[x, x+d-1]$ is entirely contained in one of these \textit{blocks} of the array $a$. Similarly, the array $b$ can also be divided into blocks, and then the segment $[y, y+d-1]$ will also be entirely contained in one of the blocks. Let's try to solve the problem in $O(1)$ time, if there are no neighboring elements with the same values in the arrays $a$ and $b$ (also assuming that $n \le m$): $f(n, m) = \sum \limits_{k=1}^{n} (n-k+1)(m-k+1) = \sum_{k=1}^n \left( k^2 + (m-n)k \right) = \frac{n(n + 1)(2n + 1)}{6} + (m - n) \cdot \frac{n(n+1)}{2}$. This formula can be further transformed by introducing a quadruple of numbers for each natural number $n$: $a_n = 1$, $b_n = n$, $c_n = \frac12 n(n+1)$, $d_n = \frac16 n(n+1)(2n+1) - \frac12 n^2(n+1)$. Then $f(n, m) = d_n a_m + c_n b_m$ if $n \le m$, and $f(n, m) = a_n d_m + b_n c_m$ if $n > m$. But if there are neighboring identical elements in the arrays $a$ and $b$, then this means that they are somehow divided into blocks. If these are blocks of lengths $n_1, \ldots, n_k$ in the array $a$ and blocks of lengths $m_1, \ldots, m_l$ in the array $b$, then the answer to the problem is $\textrm{ans} = \sum_{i=1}^k \sum_{j=1}^l f(n_i, m_j)$. Let's learn how to quickly calculate sums of the form $f(x, m_1) + \ldots + f(x, m_l)$. 
To do this, we will create 4 segment trees to quickly calculate the sums $\sum a_y$, $\sum b_y$, $\sum c_y$, $\sum d_y$ over segments of $y$, taking into account the multiplicity of $y$ in the array $m_1, \ldots, m_l$. Now the calculation of $f(x, m_1) + \ldots + f(x, m_l)$ is reduced to $4$ segment tree queries: $f(x, m_1) + \ldots + f(x, m_l) = a_x \cdot \sum_{m_i < x} d_{m_i} + b_x \cdot \sum_{m_i < x} c_{m_i} + c_x \cdot \sum_{m_i \ge x} b_{m_i} + d_x \cdot \sum_{m_i \ge x} a_{m_i}$. The sum $f(n_1, y) + \ldots + f(n_k, y)$ is calculated similarly. Now we just need to put our solution together. We will maintain the blocks of arrays $a$ and $b$ in an online mode. It is very convenient to do this by storing the positions where $a_i = a_{i+1}$ in a data structure like std::set, and also by working with the difference array of $a$ (i.e., maintaining not the array $a$ itself, but the array of differences between neighboring elements $c_i = a_{i+1} - a_i$). To recalculate the answer, we will count the number of squares that involve a specific block of the array $a$ or $b$, using the above result. As a result, we have a solution in $O((n+q) (\log n + \log m))$. P.S. A solution in $O(q \sqrt n)$ will not work due to a large constant. I tried very hard to rule it out :D.
[ "combinatorics", "data structures", "implementation", "math" ]
2,900
#include <bits/stdc++.h>

using namespace std;
using ll = long long;
using pi = pair<int, int>;

struct SegmentTree {
    int n;
    vector<ll> t;

    SegmentTree(int n) : n(n), t(2 * n) {}

    void Add(int i, ll x) {
        for (i += n; i != 0; i >>= 1) t[i] += x;
    }

    ll Query(int l, int r) {
        ll ans = 0;
        for (l += n, r += n - 1; l <= r; l >>= 1, r >>= 1) {
            if ((l & 1) == 1) ans += t[l++];
            if ((r & 1) == 0) ans += t[r--];
        }
        return ans;
    }
};

struct SegmentContainer {
    int side;
    SegmentTree sgt_a, sgt_b, sgt_c, sgt_d;
    int id;

    // sgt_a: sum(1)
    // sgt_b: sum(m)
    // sgt_c: sum(m*(m+1)/2)
    // sgt_d: sum(m*(m-1)*(2*m-1)/6 - m*(m-1)/2*m)
    SegmentContainer(int side) : side(side), sgt_a(side), sgt_b(side), sgt_c(side), sgt_d(side) {}

    tuple<ll, ll, ll, ll> GetABCD(ll m) {
        return make_tuple(1, m, m * (m + 1) / 2, m * (m - 1) * (2 * m - 1) / 6 - m * (m - 1) / 2 * m);
    }

    void Insert(int m) {
        auto [a, b, c, d] = GetABCD(m);
        sgt_a.Add(m - 1, +a);
        sgt_b.Add(m - 1, +b);
        sgt_c.Add(m - 1, +c);
        sgt_d.Add(m - 1, +d);
    }

    void Erase(int m) {
        auto [a, b, c, d] = GetABCD(m);
        sgt_a.Add(m - 1, -a);
        sgt_b.Add(m - 1, -b);
        sgt_c.Add(m - 1, -c);
        sgt_d.Add(m - 1, -d);
    }

    ll SquaresCount(int n) {
        const int mid = min(side, n);
        auto sum_a = sgt_a.Query(mid, side);  // m > n
        auto sum_b = sgt_b.Query(mid, side);  // m > n
        auto sum_c = sgt_c.Query(0, mid);     // m <= n
        auto sum_d = sgt_d.Query(0, mid);     // m <= n
        auto [a, b, c, d] = GetABCD(n);
        return d * sum_a + c * sum_b + b * sum_c + a * sum_d;
    }
};

struct SegmentMaintainer {
    SegmentContainer &my;
    SegmentContainer &other;
    ll &ans;
    set<int> pos_zero;
    vector<ll> diff_array;

    SegmentMaintainer(vector<int> a, SegmentContainer &my, SegmentContainer &other, ll &ans)
        : pos_zero(), diff_array(a.size()), my(my), other(other), ans(ans) {
        pos_zero.insert(0);
        for (int i = 1; i < (int)a.size(); ++i) {
            diff_array[i] = a[i] - a[i - 1];
            if (diff_array[i] == 0) pos_zero.insert(i);
        }
        pos_zero.insert(a.size());
        for (auto it = pos_zero.begin(); *it != my.side; ++it) {
            OnSegmentAppear(*next(it) - *it);
        }
    }

    void OnSegmentAppear(int len) {
        my.Insert(len);
        ans += other.SquaresCount(len);
    }

    void OnSegmentDissapear(int len) {
        my.Erase(len);
        ans -= other.SquaresCount(len);
    }

    void ChangeBound(int pos, ll dx) {
        if (pos == 0 || pos == my.side) return;
        bool was_zero = diff_array[pos] == 0;
        diff_array[pos] += dx;
        bool now_zero = diff_array[pos] == 0;
        if (was_zero && !now_zero) {
            auto mid = pos_zero.find(pos);
            auto prv = prev(mid), nxt = next(mid);
            OnSegmentDissapear(*mid - *prv);
            OnSegmentDissapear(*nxt - *mid);
            OnSegmentAppear(*nxt - *prv);
            pos_zero.erase(mid);
        }
        if (!was_zero && now_zero) {
            auto mid = pos_zero.insert(pos).first;
            auto prv = prev(mid), nxt = next(mid);
            OnSegmentAppear(*mid - *prv);
            OnSegmentAppear(*nxt - *mid);
            OnSegmentDissapear(*nxt - *prv);
        }
    }

    void RangeAdd(int l, int r, int x) {
        ChangeBound(l, +x);
        ChangeBound(r, -x);
    }
};

int main() {
    ios::sync_with_stdio(0);
    cin.tie(0);
    int n, m, q;
    cin >> n >> m >> q;
    vector<int> a(n), b(m);
    for (int &x : a) cin >> x;
    for (int &x : b) cin >> x;
    ll ans = 0;
    SegmentContainer a_segments(n), b_segments(m);
    a_segments.id = 1;
    b_segments.id = 2;
    SegmentMaintainer a_maintainer(a, a_segments, b_segments, ans);
    SegmentMaintainer b_maintainer(b, b_segments, a_segments, ans);
    cout << ans << '\n';
    while (q--) {
        int t, l, r, x;
        cin >> t >> l >> r >> x;
        --l;
        if (t == 1) a_maintainer.RangeAdd(l, r, x);
        if (t == 2) b_maintainer.RangeAdd(l, r, x);
        cout << ans << '\n';
    }
}
1929
A
Sasha and the Beautiful Array
Sasha decided to give his girlfriend an array $a_1, a_2, \ldots, a_n$. He found out that his girlfriend evaluates the beauty of the array as the sum of the values $(a_i - a_{i - 1})$ for all integers $i$ from $2$ to $n$. Help Sasha and tell him the maximum beauty of the array $a$ that he can obtain, if he can rearrange its elements in any way.
$a_2 - a_1 + a_3 - a_2 + \ldots + a_n - a_{n - 1} = a_n - a_1$. So we just need to maximize this value, which means the answer is the maximum number in the array minus the minimum.
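The telescoping observation above reduces to a one-liner. A minimal sketch (the helper name `maxBeauty` is mine, not from the editorial):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Beauty of the best rearrangement: the sum telescopes to a_n - a_1,
// so we put the minimum element first and the maximum element last.
long long maxBeauty(const std::vector<long long>& a) {
    auto [mn, mx] = std::minmax_element(a.begin(), a.end());
    return *mx - *mn;
}
```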
[ "constructive algorithms", "greedy", "math", "sortings" ]
800
null
1929
B
Sasha and the Drawing
Even in kindergarten, Sasha liked a girl. Therefore, he wanted to give her a drawing and attract her attention. As a drawing, he decided to draw a square grid of size $n \times n$, in which some cells are colored. But coloring the cells is difficult, so he wants to color as few cells as possible. But at the same time, he wants \textbf{at least} $k$ diagonals to have at least one colored cell. Note that the square grid of size $n \times n$ has a total of $4n - 2$ diagonals. Help little Sasha to make the girl fall in love with him and tell him the minimum number of cells he needs to color.
Notice that each cell intersects at most two diagonals, so the answer is at least $\lceil \frac{k}{2} \rceil$. Claim: consider the construction where we color all cells in the first row and all cells in the last row except the two side cells. Each of these cells covers exactly two new diagonals, so if $k \leq (2n - 2) \cdot 2 = 4n - 4$, the answer is exactly $\lceil \frac{k}{2} \rceil$. Now notice that if we must cover $4n - 3$ or $4n - 2$ diagonals, the corner cells of the last row have to be colored, since they are the only cells on the length-$1$ corner diagonals; but the other diagonal through such a corner cell is already covered, so each of these cells contributes only one new diagonal. Therefore, the answer for $k = 4n - 3$ remains $\lceil \frac{k}{2} \rceil = 2n - 1$ due to parity, and for $k = 4n - 2$ it is $\frac{k}{2} + 1 = 2n$.
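The case analysis collapses into a small formula. A sketch of the editorial's answer (the function name `minCells` is my own):

```cpp
#include <cassert>

// Minimum number of cells to cover at least k of the 4n-2 diagonals,
// following the editorial's case analysis: ceil(k/2) cells suffice while
// k <= 4n-3 (parity keeps the bound tight at k = 4n-3), and covering
// every diagonal (k = 4n-2) costs one extra cell.
long long minCells(long long n, long long k) {
    if (k == 4 * n - 2) return k / 2 + 1;  // all diagonals: 2n cells
    return (k + 1) / 2;                    // ceil(k/2)
}
```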
[ "constructive algorithms", "greedy", "math" ]
800
null
1929
C
Sasha and the Casino
Sasha decided to give his girlfriend the best handbag, but unfortunately for Sasha, it is very expensive. Therefore, Sasha wants to earn it. After looking at earning tips on the internet, he decided to go to the casino. Sasha knows that the casino operates under the following rules. If Sasha places a bet of $y$ coins (where $y$ is a positive integer), then in case of winning, he will receive $y \cdot k$ coins (i.e., his number of coins will increase by $y \cdot (k - 1)$). And in case of losing, he will lose the entire bet amount (i.e., his number of coins will decrease by $y$). Note that the bet amount must always be a positive ($> 0$) integer and cannot exceed Sasha's current number of coins. Sasha also knows that there is a promotion at the casino: he cannot lose more than $x$ times in a row. Initially, Sasha has $a$ coins. He wonders whether he can place bets such that he is guaranteed to win any number of coins. In other words, is it true that for any integer $n$, Sasha can make bets so that for any outcome that does not contradict the rules described above, at some moment of time he will have at least $n$ coins.
Notice that being able to reach arbitrarily large amounts is equivalent to being able to guarantee a net gain of at least $+1$ coin by the moment of our first win: we can then repeat this strategy indefinitely. Also notice that if we have lost a total of $z$ so far, then in the next round we must bet an amount $y$ such that $y \cdot (k - 1) > z$, because otherwise the casino can simply let us win: the streak of losses resets, the constraint of not losing more than $x$ times in a row no longer helps us, and we end up at a net loss. So betting any less is never correct. Therefore, the strategy is as follows: bet $1$ at first, and then always bet the minimum amount whose win covers all previous losses plus one. If we have enough coins to make such bets $x + 1$ times in a row, the casino is forced to let us win at a profit; otherwise we cannot guarantee a win. The solution works in $O(x)$ time: we simply compute these bets in a loop.
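The loop above can be sketched as follows (the function name `canWin` is mine, not from the editorial):

```cpp
#include <cassert>

// True iff Sasha can guarantee unbounded winnings: always bet just enough
// so that a win covers all previous losses plus one. After x losses in a
// row the next bet is guaranteed to win, so it suffices to be able to
// afford x + 1 such escalating bets.
bool canWin(long long k, long long x, long long a) {
    long long lost = 0;                      // total coins lost so far
    for (long long i = 0; i <= x; ++i) {
        long long bet = lost / (k - 1) + 1;  // smallest bet with bet*(k-1) > lost
        if (bet > a - lost) return false;    // cannot afford the required bet
        lost += bet;
    }
    return true;
}
```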
[ "binary search", "brute force", "constructive algorithms", "games", "greedy", "math" ]
1,400
null
1929
D
Sasha and a Walk in the City
Sasha wants to take a walk with his girlfriend in the city. The city consists of $n$ intersections, numbered from $1$ to $n$. Some of them are connected by roads, and from any intersection, there is exactly one simple path$^{\dagger}$ to any other intersection. In other words, the intersections and the roads between them form a tree. Some of the intersections are considered dangerous. Since it is unsafe to walk alone in the city, Sasha does not want to visit three or more dangerous intersections during the walk. Sasha calls a set of intersections good if the following condition is satisfied: - If in the city only the intersections contained in this set are dangerous, then any simple path in the city contains \textbf{no more than two} dangerous intersections. However, Sasha does not know which intersections are dangerous, so he is interested in the number of different good sets of intersections in the city. Since this number can be very large, output it modulo $998\,244\,353$. $^{\dagger}$A simple path is a path that passes through each intersection at most once.
Let $dp_v$ be the number of non-empty sets of vertices in the subtree rooted at $v$ such that there are no pairs of vertices in the set where one vertex is the ancestor of the other. Then $dp_v = (dp_{u_1} + 1) \cdot (dp_{u_2} + 1) \cdot \ldots \cdot (dp_{u_k} + 1)$, where $u_1, \ldots, u_k$ are the children of vertex $v$. This is because from each child's subtree we can choose either any non-empty such set or the empty set; choosing the empty set from every subtree corresponds to selecting the single vertex $v$ (since our dynamic programming state cannot be empty). Now the claim is: the answer to the problem is $dp_1 + dp_2 + \ldots + dp_n + 1$. Indeed, consider a good set in which some vertex $v$ is an ancestor of another chosen vertex. Then $v$ must be an ancestor of all the other chosen vertices, and they must all lie in a single child subtree of $v$ (otherwise a path through $v$ would contain three dangerous vertices) and contain no ancestor-descendant pairs themselves. So the number of such sets with topmost vertex $v$ is $dp_{u_1} + \ldots + dp_{u_k}$; here it matters that the dynamic programming states are non-empty, since otherwise the sets with no ancestor-descendant pairs would be double-counted. Summing this over all $v$ gives $\sum_{v \neq 1} dp_v$, and $dp_1 + 1$ (where $1$ is the root of the tree) accounts for the sets, including the empty one, in which no chosen vertex is an ancestor of another.
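The dp above can be sketched as follows. A minimal illustration assuming $1$-indexed vertices and the tree given as an edge list (the function name `countGoodSets` is mine):

```cpp
#include <cassert>
#include <utility>
#include <vector>

const long long MOD = 998244353;

// Answer = sum of dp_v over all v, plus 1 for the empty set, where
// dp_v = prod over children (dp_child + 1), computed bottom-up.
long long countGoodSets(int n, const std::vector<std::pair<int, int>>& edges) {
    std::vector<std::vector<int>> adj(n + 1);
    for (auto [u, v] : edges) { adj[u].push_back(v); adj[v].push_back(u); }
    std::vector<long long> dp(n + 1, 1);
    std::vector<int> order, parent(n + 1, -1), stk = {1};
    while (!stk.empty()) {                 // root the tree at vertex 1
        int v = stk.back(); stk.pop_back();
        order.push_back(v);
        for (int u : adj[v])
            if (u != parent[v]) { parent[u] = v; stk.push_back(u); }
    }
    for (int i = (int)order.size() - 1; i >= 0; --i) {  // children before parents
        int v = order[i];
        for (int u : adj[v])
            if (u != parent[v]) dp[v] = dp[v] * (dp[u] + 1) % MOD;
    }
    long long ans = 1;                     // the empty set
    for (int v = 1; v <= n; ++v) ans = (ans + dp[v]) % MOD;
    return ans;
}
```

For a path $1 - 2 - 3$ this gives $dp = [3, 2, 1]$ and answer $7$: all $8$ subsets are good except $\{1, 2, 3\}$.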
[ "combinatorics", "dp", "math", "trees" ]
1,900
null
1929
E
Sasha and the Happy Tree Cutting
Sasha was given a tree$^{\dagger}$ with $n$ vertices as a prize for winning yet another competition. However, upon returning home after celebrating his victory, he noticed that some parts of the tree were missing. Sasha remembers that he colored some of the edges of this tree. He is certain that for any of the $k$ pairs of vertices $(a_1, b_1), \ldots, (a_k, b_k)$, he colored at least one edge on the simple path$^{\ddagger}$ between vertices $a_i$ and $b_i$. Sasha does not remember how many edges he exactly colored, so he asks you to tell him the minimum number of edges he could have colored to satisfy the above condition. $^{\dagger}$A tree is an undirected connected graph without cycles. $^{\ddagger}$A simple path is a path that passes through each vertex at most once.
Consider each edge $i$ and the set of pairs $S_i$ whose paths it covers. The claim is that there are only $O(k)$ distinct such sets: we are only interested in the edges that appear in the compressed (virtual) tree built on these $k$ pairs of vertices, and, as is well known, the number of edges in the compressed tree is $O(k)$. Then we need to find the minimum number of these sets such that each pair is contained in at least one of them. This can be done by dynamic programming over subsets: let $dp[mask]$ be the minimum number of edges to color so that every pair corresponding to a set bit of $mask$ has a colored edge on its path. The update is $dp[mask \mid S_i] = \min(dp[mask \mid S_i], dp[mask] + 1)$ for every distinct set $S_i$, where $S_i$ is the bitmask of the pairs whose paths pass through edge $i$; the update corresponds to coloring one more edge. As a result, we obtain a solution with time complexity $O(nk + 2^k k)$, where $O(nk)$ is for precomputing, for each edge, the set of pairs it covers, and $O(2^k k)$ is for the dynamic programming.
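The subset dp can be sketched in isolation, assuming the distinct masks $S_i$ have already been extracted from the compressed tree (the function name `minEdgesToCover` is mine):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Given the distinct edge masks (bit j of a mask is set iff that edge lies
// on the path of pair j), find the minimum number of edges covering all k
// pairs. This is the O(2^k * |sets|) subset dp from the editorial;
// computing the O(k) distinct masks via LCA / the compressed tree is
// assumed to be done beforehand.
int minEdgesToCover(int k, const std::vector<int>& sets) {
    const int full = (1 << k) - 1;
    const int INF = 1e9;
    std::vector<int> dp(1 << k, INF);
    dp[0] = 0;
    for (int mask = 0; mask <= full; ++mask) {
        if (dp[mask] == INF) continue;
        for (int s : sets)  // color one more edge whose pair-mask is s
            dp[mask | s] = std::min(dp[mask | s], dp[mask] + 1);
    }
    return dp[full];  // INF if the pairs cannot all be covered
}
```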
[ "bitmasks", "brute force", "dfs and similar", "dp", "graphs", "greedy", "math", "trees" ]
2,300
null
1929
F
Sasha and the Wedding Binary Search Tree
Having overcome all the difficulties and hardships, Sasha finally decided to marry his girlfriend. To do this, he needs to give her an engagement ring. However, his girlfriend does not like such romantic gestures, but she does like binary search trees$^{\dagger}$. So Sasha decided to give her such a tree. After spending a lot of time on wedding websites for programmers, he found the perfect binary search tree with the root at vertex $1$. In this tree, the value at vertex $v$ is equal to $val_v$. But after some time, he forgot the values in some vertices. Trying to remember the found tree, Sasha wondered — how many binary search trees could he have found on the website, if it is known that the values in all vertices are integers in the segment $[1, C]$. Since this number can be very large, output it modulo $998\,244\,353$. $^{\dagger}$A binary search tree is a rooted binary tree in which for any vertex $x$, the following property holds: the values of all vertices in the left subtree of vertex $x$ (if it exists) are less than or equal to the value at vertex $x$, and the values of all vertices in the right subtree of vertex $x$ (if it exists) are greater than or equal to the value at vertex $x$.
Let's list the vertices in the order of their values (the in-order traversal of the BST). Let it be $v_1, \ldots, v_n$; then we must have $val_{v_i} \leq val_{v_{i + 1}}$. In this order, there are some maximal segments of vertices for which we do not know the values. For each such segment, we know the minimum and maximum values its entries may take, say $L$ and $R$. We then need to choose a non-decreasing sequence of values from $[L, R]$ for the vertices of this segment in order to maintain the relative order. This is a well-known problem, and there are $\binom{R - L + len}{len}$ ways to do it, where $len$ is the length of the segment. The answer is the product of these binomial coefficients over all segments. Finally, notice that $R - L + len$ can be huge, so we compute each coefficient by the formula $\binom{n}{k} = \frac{n \cdot (n - 1) \cdot \ldots \cdot (n - k + 1)}{k!}$, which is fast enough since the sum of all $len$ does not exceed $n$.
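The binomial computation for huge $n$ and small $k$ can be sketched as follows (the names `power` and `binom` are mine; Fermat's little theorem is used to invert $k!$ modulo the prime $998\,244\,353$):

```cpp
#include <cassert>

const long long MOD = 998244353;

// Fast modular exponentiation, used to invert k! via Fermat's little theorem.
long long power(long long b, long long e) {
    long long r = 1;
    for (b %= MOD; e > 0; e >>= 1, b = b * b % MOD)
        if (e & 1) r = r * b % MOD;
    return r;
}

// C(n, k) mod MOD for possibly huge n but small k, via the product formula
// n * (n-1) * ... * (n-k+1) / k! from the editorial.
long long binom(long long n, long long k) {
    if (k < 0 || k > n) return 0;
    long long num = 1, fact = 1;
    for (long long i = 0; i < k; ++i) {
        num = num * ((n - i) % MOD) % MOD;
        fact = fact * ((i + 1) % MOD) % MOD;
    }
    return num * power(fact, MOD - 2) % MOD;
}
```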
[ "brute force", "combinatorics", "data structures", "dfs and similar", "math", "trees" ]
2,300
null
1930
A
Maximise The Score
There are $2n$ positive integers written on a whiteboard. Being bored, you decided to play a one-player game with the numbers on the whiteboard. You start with a score of $0$. You will increase your score by performing the following move \textbf{exactly} $n$ times: - Choose two integers $x$ and $y$ that are written on the whiteboard. - Add $\min(x,y)$ to your score. - Erase $x$ and $y$ from the whiteboard. Note that after performing the move $n$ times, there will be no more integers written on the whiteboard. Find the maximum final score you can achieve if you optimally perform the $n$ moves.
Selecting the smallest two elements on the whiteboard is a good choice in the first move. Let $b$ denote the sorted array $a$. Assume that $b$ contains only distinct elements for convenience. We prove by induction on $n$ that the maximum final score is $b_1 + b_3 + \ldots + b_{2n-1}$. For the base case $n = 1$, the final and only possible score that can be achieved is $b_1$. Now let $n > 1$. Claim: It is optimal to choose $b_{1}$ with $b_{2}$ for some move. Suppose that in some move, $b_{1}$ is chosen with $b_i$ and $b_{2}$ is chosen with $b_j$, for some $2 < i, j \leq 2n$, $i \neq j$. The contribution to the score according to these choices is $\min(b_{1}, b_{i}) + \min(b_{2}, b_{j}) = b_{1} + b_{2}$. However, if we had chosen $b_{1}$ and $b_{2}$ in one move, and $b_i$ and $b_j$ in the other move, the score according to these choices is $\min(b_{1}, b_{2}) + \min(b_{i}, b_{j}) = b_{1} + \min(b_{i}, b_{j})$. As $i, j > 2$, $b_i > b_{2}$ and $b_j > b_{2} \implies \min(b_{i}, b_{j}) > b_2$. Thus, we can achieve a strictly larger score by choosing $b_1$ with $b_2$ in some move. The choice of selecting $b_1$ and $b_2$ contributes a value of $b_1$ to the score. The maximum score that can be achieved for the remaining numbers $[b_3, b_4, \ldots, b_{2n}]$ on the whiteboard in the remaining moves is $b_3 + b_5 + b_7 + \ldots + b_{2n-1}$ by the induction hypothesis. Note that the arguments extend to the case where $a$ has duplicate elements.
[ "greedy", "sortings" ]
800
#include <bits/stdc++.h> using namespace std; #define ll long long void solve(){ ll n; cin>>n; vector<ll> a(2*n); ll ans=0; for(auto &it:a){ cin>>it; } sort(a.begin(),a.end()); for(ll i=0;i<2*n;i+=2){ ans+=a[i]; } cout<<ans<<"\n"; return; } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); #ifndef ONLINE_JUDGE freopen("input.txt", "r", stdin); freopen("output.txt", "w", stdout); freopen("error.txt", "w", stderr); #endif ll test_cases=1; cin>>test_cases; while(test_cases--){ solve(); } cout<<fixed<<setprecision(10); cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n"; }
1930
B
Permutation Printing
You are given a positive integer $n$. Find a permutation$^\dagger$ $p$ of length $n$ such that there do \textbf{not} exist two \textbf{distinct} indices $i$ and $j$ ($1 \leq i, j < n$; $i \neq j$) such that $p_i$ divides $p_j$ and $p_{i+1}$ divides $p_{j+1}$. Refer to the Notes section for some examples. Under the constraints of this problem, it can be proven that at least one $p$ exists. $^\dagger$ A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
For any integer $x$ ($\lfloor \frac{n}{2} \rfloor < x \leq n$), there does not exist an integer $y$ ($x < y \leq n$) that is divisible by $x$. Consider the permutation $p = [1, n, 2, n - 1, \ldots, \lceil \frac{n+1}{2} \rceil]$. It is valid. Why? We have $\max(p_a, p_{a+1}) > \lfloor \frac{n}{2} \rfloor$ for all $1 \leq a < n - 1$. So we can never have a pair of indices $(a, b)$ such that: $1 \leq a < n - 1$; $1 \leq b < n$; $a \neq b$; $p_a$ divides $p_b$ and $p_{a+1}$ divides $p_{b+1}$. Now we just need to check $a = n - 1$. First, notice that $p_a$ does not divide $p_1$. Also, there does not exist an integer $b$ ($2 \leq b < n - 1$) such that $p_{a+1}$ divides $p_{b+1}$, since $2 \cdot p_{a+1} \ge n$ while $p_{c+1} < n$ for all $c$ ($2 \leq c < n - 1$). Thus we have covered all possible pairs of indices and found no two distinct indices $i$ and $j$ ($1 \leq i, j < n$; $i \neq j$) such that $p_i$ divides $p_j$ and $p_{i+1}$ divides $p_{j+1}$.
[ "brute force", "constructive algorithms", "math" ]
1,000
#include <bits/stdc++.h> using namespace std; #define ll long long void solve(){ ll n; cin>>n; ll l=1,r=n; for(ll i=1;i<=n;i++){ if(i&1){ cout<<l<<" "; l++; } else{ cout<<r<<" "; r--; } } cout<<"\n"; return; } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); #ifndef ONLINE_JUDGE freopen("input.txt", "r", stdin); freopen("output.txt", "w", stdout); freopen("error.txt", "w", stderr); #endif ll test_cases=1; cin>>test_cases; while(test_cases--){ solve(); } cout<<fixed<<setprecision(10); cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n"; }
1930
C
Lexicographically Largest
Stack has an array $a$ of length $n$. He also has an empty set $S$. Note that $S$ is \textbf{not} a multiset. He will do the following three-step operation exactly $n$ times: - Select an index $i$ such that $1 \leq i \leq |a|$. - Insert$^\dagger$ $a_i + i$ into $S$. - Delete $a_i$ from $a$. Note that the indices of all elements to the right of $a_i$ will decrease by $1$. Note that after $n$ operations, $a$ will be empty. Stack will now construct a new array $b$ which is $S$ \textbf{sorted in decreasing order}. Formally, $b$ is an array of size $|S|$ where $b_i$ is the $i$-th largest element of $S$ for all $1 \leq i \leq |S|$. Find the lexicographically largest$^\ddagger$ $b$ that Stack can make. $^\dagger$ A set can only contain unique elements. \textbf{Inserting an element that is already present in a set will not change the elements of the set.} $^\ddagger$ An array $p$ is lexicographically larger than a sequence $q$ if and only if one of the following holds: - $q$ is a prefix of $p$, but $p \ne q$; or - in the first position where $p$ and $q$ differ, the array $p$ has a larger element than the corresponding element in $q$. Note that $[3,1,4,1,5]$ is lexicographically larger than $[3,1,3]$, $[\,]$, and $[3,1,4,1]$ but not $[3,1,4,1,5,9]$, $[3,1,4,1,5]$, and $[4]$.
Consider an array $c$ of length $n$ such that $c_i :=$ the number of indices smaller than $i$ which were chosen before index $i$. Then the set $S$ will be the collection of $a_i + i - c_i$ over all $1 \leq i \leq n$. Now one might wonder what kinds of arrays $c$ it is possible to get. First, it is easy to see that we must have $0 \leq c_i < i$ for all $i$. Call an array $c$ of length $n$ good if $0 \leq c_i < i$ for all $1 \leq i \leq n$. The claim is that all good arrays of length $n$ can be obtained. We can prove it by induction on $n$. $c_1 = 0$ always holds. Now $c_2$ can be either $0$ or $1$: we obtain $c_2 = 0$ by deleting the element at index $2$ before the element at index $1$, and $c_2 = 1$ by deleting it after the element at index $1$. Thus, all good arrays of length $2$ can be obtained. Now assume that it is possible to obtain all good arrays of length at most $k$, and choose an integer $x$ ($0 \leq x \leq k$) arbitrarily. Consider the following order of deletion: the elements at indices $1, 2, \ldots, x$ in that order, then the element at index $k + 1$, then the elements at indices $x + 1, \ldots, k$ in that order. The array obtained by this sequence of operations is a good array of length $k + 1$ with $c_{k+1} = x$; combining this with the induction hypothesis shows that every good array of length $k + 1$ can be obtained. So we have the following subproblem. We have a set $S$. We iterate $i$ from $1$ to $n$, select an integer $c_i$ ($0 \leq c_i \leq i - 1$), insert $a_i + i - c_i$ into the set $S$, and move to $i + 1$. Using an exchange argument, one can prove that it is always safe to select the smallest integer $v$ ($0 \leq v \leq i - 1$) such that $a_i + i - v$ is not already present in the set $S$, and assign it to $c_i$. 
Note that as we have $i$ options for $v$, and we would have inserted exactly $i-1$ elements before index $i$, there always exists an integer $v$ ($0 \leq v \leq i - 1$) such that $a_i + i - v$ is not present in the set $S$. You can refer to the attached submission to see how to find $v$ efficiently for each $i$.
[ "binary search", "constructive algorithms", "data structures", "greedy", "sortings" ]
1,700
#include <bits/stdc++.h> using namespace std; #define ll long long void solve(){ ll n; cin>>n; set<ll> used,not_used; vector<ll> ans; for(ll i=1;i<=n;i++){ ll x; cin>>x; x+=i; if(!used.count(x)){ not_used.insert(x); } ll cur=*(--not_used.upper_bound(x)); //find the largest element(<= x) which is not in set "used" not_used.erase(cur); ans.push_back(cur); used.insert(cur); if(!used.count(cur-1)){ not_used.insert(cur-1); } } sort(ans.begin(), ans.end()); reverse(ans.begin(), ans.end()); for(auto i:ans){ cout<<i<<" "; } cout<<"\n"; return; } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); #ifndef ONLINE_JUDGE freopen("input.txt", "r", stdin); freopen("output.txt", "w", stdout); freopen("error.txt", "w", stderr); #endif ll test_cases=1; cin>>test_cases; while(test_cases--){ solve(); } cout<<fixed<<setprecision(10); cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n"; }
1930
D1
Sum over all Substrings (Easy Version)
\textbf{This is the easy version of the problem. The only difference between the two versions is the constraint on $t$ and $n$. You can make hacks only if both versions of the problem are solved.} For a binary$^\dagger$ pattern $p$ and a binary string $q$, both of length $m$, $q$ is called $p$-good if for every $i$ ($1 \leq i \leq m$), there exist indices $l$ and $r$ such that: - $1 \leq l \leq i \leq r \leq m$, and - $p_i$ is a mode$^\ddagger$ of the string $q_l q_{l+1} \ldots q_{r}$. For a pattern $p$, let $f(p)$ be the minimum possible number of $\mathtt{1}$s in a $p$-good binary string (of the same length as the pattern). You are given a binary string $s$ of size $n$. Find $$\sum_{i=1}^{n} \sum_{j=i}^{n} f(s_i s_{i+1} \ldots s_j).$$ In other words, you need to sum the values of $f$ over all $\frac{n(n+1)}{2}$ substrings of $s$. $^\dagger$ A binary pattern is a string that only consists of characters $\mathtt{0}$ and $\mathtt{1}$. $^\ddagger$ Character $c$ is a mode of string $t$ of length $m$ if the number of occurrences of $c$ in $t$ is at least $\lceil \frac{m}{2} \rceil$. For example, $\mathtt{0}$ is a mode of $\mathtt{010}$, $\mathtt{1}$ is not a mode of $\mathtt{010}$, and both $\mathtt{0}$ and $\mathtt{1}$ are modes of $\mathtt{011010}$.
To find $f(s)$, we can partition $s$ into multiple independent substrings of length at most $3$ and find the best answer for them separately. There always exists a string $g$ such that: $g$ is $s$-good; there are exactly $f(s)$ $\mathtt{1}$s in $g$; $g$ is of the form $b_1 + b_2 + \ldots + b_q$, where each $b_i$ is equal to either $\mathtt{0}$ or $\mathtt{010}$. First of all, append $n$ $\mathtt{0}$s to the back of $s$ for convenience. Note that this does not change the answer. Now let us call a binary string $p$ of size $d$ nice if: there exists a positive integer $k$ such that $d = 3k$, and $p$ is of the form $f(\mathtt{0}, k) + f(\mathtt{1}, k) + f(\mathtt{0}, k)$, where $f(c, z)$ denotes a string consisting of exactly $z$ characters equal to $c$. Suppose the binary string $t$ is one of the $s$-good strings containing exactly $f(s)$ $\mathtt{1}$s. We claim that for any valid $t$, there always exists a binary string $t'$ such that: $t'$ is a permutation of $t$; $t'$ is $s$-good; $t'$ is of the form $f(\mathtt{0}, d_1) + z_1 + f(\mathtt{0}, d_2) + z_2 + f(\mathtt{0}, d_3) + \ldots + z_g + f(\mathtt{0}, d_{g+1})$, where $z_1, z_2, \ldots, z_g$ are nice binary strings and $d_1, d_2, \ldots, d_{g+1}$ are non-negative integers. Initially, all the $\mathtt{1}$s in $s$ are unmarked. We will mark all of them and modify the string $t$ in the process, repeating the following procedure until all the $\mathtt{1}$s in $s$ are marked. Find the index of the leftmost unmarked $\mathtt{1}$ in $s$; suppose its index is $x$. Now let $y$ be the largest index such that the substring $t[x, y]$ contains an equal number of $\mathtt{0}$s and $\mathtt{1}$s. Note that $y$ always exists because of the extra $\mathtt{0}$s appended at the end. 
Now we may rearrange the characters within the substring $t[x,y]$: it will still contain an equal number of $\mathtt{0}$s and $\mathtt{1}$s, and $\mathtt{1}$ will still be a mode of $t[x,y]$. Clearly, rearranging the characters of $t[x,y]$ into $\mathtt{0} \ldots \mathtt{0} \mathtt{1} \ldots \mathtt{1}$ is the best we can do. We mark all the $\mathtt{1}$s in the substring $s[x,y]$. Suppose $y-x+1 = 2v$. Now $t[y+1,y+v]$ might contain some $\mathtt{1}$s; say there are $z$ of them initially. We perform the following operation exactly $z$ times: find the leftmost $\mathtt{1}$ in the substring $t[y+1,y+v]$, find the leftmost $\mathtt{0}$ in the substring $t[y+v+1,2n]$, and swap these two characters. Note that now $t[x,x+3v-1]$ is of the form $f(\mathtt{0}, v) + f(\mathtt{1}, v) + f(\mathtt{0}, v)$. It is easy to verify that in the updated $t$ there is no index $i$ for which there do not exist two indices $1 \le l \le i \le r \le 2n$ such that $s_i$ is a mode of $t[l,r]$. Now we can mark all the $\mathtt{1}$s in the substring $s[x+2v,x+3v-1]$ too, as $t[x+v,x+3v-1]$ contains an equal number of $\mathtt{0}$s and $\mathtt{1}$s. It is not hard to conclude that the updated $t$ will be of the form $f(\mathtt{0}, d_1) + z_1 + f(\mathtt{0}, d_2) + z_2 + f(\mathtt{0}, d_3) + \ldots + z_g + f(\mathtt{0}, d_{g+1})$, where $z_1, z_2, \ldots, z_g$ are nice binary strings and $d_1, d_2, \ldots, d_{g+1}$ are non-negative integers. Note that the $\mathtt{1}$s in $t[x, x + 3v - 1]$ do not help the $\mathtt{1}$s in $s[x + 3v, 2n]$, so we can solve for $s[x + 3v, 2n]$ independently. Let $t'$ be the updated $t$ and observe its structure carefully: we can replace every substring of the form $f(\mathtt{0}, k) + f(\mathtt{1}, k) + f(\mathtt{0}, k)$ in $t'$ with $\mathtt{0} \mathtt{1} \mathtt{0} \mathtt{0} \mathtt{1} \mathtt{0} \ldots \mathtt{0} \mathtt{1} \mathtt{0} \mathtt{0} \mathtt{1} \mathtt{0}$. 
So the updated $t'$ (say $t''$) will be of the form $b_1 + b_2 + \ldots + b_q$, where each $b_i$ is equal to either $\mathtt{0}$ or $\mathtt{010}$. Hence, whenever we need to find $f(e)$ for some binary string $e$, we can always look for a string of the form $t''$ using as few $\mathtt{1}$s as possible. Notice that we can construct $t''$ greedily: scan $e$ from the left, and whenever an uncovered $\mathtt{1}$ is met, place a $\mathtt{010}$ block, which covers three positions. You can look at the attached code for the implementation details. Also, we don't actually need to append the $\mathtt{0}$s at the back of $s$; that was only for proof purposes.
[ "brute force", "dp", "greedy", "strings" ]
1,800
#include <bits/stdc++.h> #include <ext/pb_ds/tree_policy.hpp> #include <ext/pb_ds/assoc_container.hpp> using namespace __gnu_pbds; using namespace std; #define ll long long #define pb push_back #define mp make_pair #define nline "\n" #define f first #define s second #define pll pair<ll,ll> #define all(x) x.begin(),x.end() const ll MOD=1e9+7; const ll MAX=500500; ll f(string s){ ll len=s.size(),ans=0,pos=0; while(pos<len){ if(s[pos]=='1'){ ans++; pos+=2; } pos++; } return ans; } void solve(){ ll n,ans=0; cin>>n; string s; cin>>s; for(ll i=0;i<n;i++){ string t; for(ll j=i;j<n;j++){ t.push_back(s[j]); ans+=f(t); } } cout<<ans<<nline; return; } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); #ifndef ONLINE_JUDGE freopen("input.txt", "r", stdin); freopen("output.txt", "w", stdout); freopen("error.txt", "w", stderr); #endif ll test_cases=1; cin>>test_cases; while(test_cases--){ solve(); } cout<<fixed<<setprecision(10); cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n"; }
1930
D2
Sum over all Substrings (Hard Version)
\textbf{This is the hard version of the problem. The only difference between the two versions is the constraint on $t$ and $n$. You can make hacks only if both versions of the problem are solved.} For a binary$^\dagger$ pattern $p$ and a binary string $q$, both of length $m$, $q$ is called $p$-good if for every $i$ ($1 \leq i \leq m$), there exist indices $l$ and $r$ such that: - $1 \leq l \leq i \leq r \leq m$, and - $p_i$ is a mode$^\ddagger$ of the string $q_l q_{l+1} \ldots q_{r}$. For a pattern $p$, let $f(p)$ be the minimum possible number of $\mathtt{1}$s in a $p$-good binary string (of the same length as the pattern). You are given a binary string $s$ of size $n$. Find $$\sum_{i=1}^{n} \sum_{j=i}^{n} f(s_i s_{i+1} \ldots s_j).$$ In other words, you need to sum the values of $f$ over all $\frac{n(n+1)}{2}$ substrings of $s$. $^\dagger$ A binary pattern is a string that only consists of characters $\mathtt{0}$ and $\mathtt{1}$. $^\ddagger$ Character $c$ is a mode of string $t$ of length $m$ if the number of occurrences of $c$ in $t$ is at least $\lceil \frac{m}{2} \rceil$. For example, $\mathtt{0}$ is a mode of $\mathtt{010}$, $\mathtt{1}$ is not a mode of $\mathtt{010}$, and both $\mathtt{0}$ and $\mathtt{1}$ are modes of $\mathtt{011010}$.
We can use the idea of D1 and dynamic programming to solve the problem in $O(n)$. Suppose $dp[i][j]$ denotes $f(s[i,j])$ for all $1 \le i \le j \le n$. The transition is easy: if $s_i = \mathtt{1}$, then $dp[i][j] = 1 + dp[i+3][j]$; otherwise $dp[i][j] = dp[i+1][j]$. Note that $dp[i][j] = 0$ if $i > j$. So if we fix $j$, we can find $dp[i][j]$ for all $1 \le i \le j$ in $O(n)$, and solve the original problem in $O(n^2)$. Now we need to optimise this. Let $track[i] = \sum_{j=i}^{n} dp[i][j]$ for all $1 \le i \le n$, with the base condition $track[i] = 0$ for $i > n$. There are two cases. If $s_i = \mathtt{0}$: $track[i] = \sum_{j=i}^{n} dp[i][j] = dp[i][i] + \sum_{j=i+1}^{n} dp[i][j] = dp[i][i] + \sum_{j=i+1}^{n} dp[i+1][j] = track[i+1]$, as $dp[i][i] = 0$. If $s_i = \mathtt{1}$: $track[i] = \sum_{j=i}^{n} dp[i][j] = \sum_{j=i}^{n} (1 + dp[i+3][j]) = n - i + 1 + \sum_{j=i+3}^{n} dp[i+3][j] = n - i + 1 + track[i+3]$, as $dp[i+3][i] = dp[i+3][i+1] = dp[i+3][i+2] = 0$. So the answer to the original problem is $\sum_{i=1}^{n} track[i]$, which we can compute in $O(n)$.
[ "bitmasks", "divide and conquer", "dp", "dsu", "greedy", "implementation", "strings" ]
2,100
#include <bits/stdc++.h> #include <ext/pb_ds/tree_policy.hpp> #include <ext/pb_ds/assoc_container.hpp> using namespace __gnu_pbds; using namespace std; #define ll long long #define pb push_back #define mp make_pair #define nline "\n" #define f first #define s second #define pll pair<ll,ll> #define all(x) x.begin(),x.end() const ll MOD=1e9+7; const ll MAX=500500; void solve(){ ll n,ans=0; cin>>n; string s; cin>>s; s=" "+s; vector<ll> dp(n+5,0); for(ll i=n;i>=1;i--){ if(s[i]=='1'){ dp[i]=dp[i+3]+n-i+1; } else{ dp[i]=dp[i+1]; } ans+=dp[i]; } cout<<ans<<nline; return; } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); #ifndef ONLINE_JUDGE freopen("input.txt", "r", stdin); freopen("output.txt", "w", stdout); freopen("error.txt", "w", stderr); #endif ll test_cases=1; cin>>test_cases; while(test_cases--){ solve(); } cout<<fixed<<setprecision(10); cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n"; }
1930
E
2..3...4.... Wonderful! Wonderful!
Stack has an array $a$ of length $n$ such that $a_i = i$ for all $i$ ($1 \leq i \leq n$). He will select a positive integer $k$ ($1 \leq k \leq \lfloor \frac{n-1}{2} \rfloor$) and do the following operation on $a$ any number (possibly $0$) of times. - Select a subsequence$^\dagger$ $s$ of length $2 \cdot k + 1$ from $a$. Now, he will delete the first $k$ elements of $s$ from $a$. To keep things perfectly balanced (as all things should be), he will also delete the last $k$ elements of $s$ from $a$. Stack wonders how many arrays $a$ can he end up with for each $k$ ($1 \leq k \leq \lfloor \frac{n-1}{2} \rfloor$). As Stack is weak at counting problems, he needs your help. Since the number of arrays might be too large, please print it modulo $998\,244\,353$. $^\dagger$ A sequence $x$ is a subsequence of a sequence $y$ if $x$ can be obtained from $y$ by deleting several (possibly, zero or all) elements. For example, $[1, 3]$, $[1, 2, 3]$ and $[2, 3]$ are subsequences of $[1, 2, 3]$. On the other hand, $[3, 1]$ and $[2, 1, 3]$ are not subsequences of $[1, 2, 3]$.
Suppose you are given some array $b$ of length $m$ and a positive integer $k$. How to check whether we can get the array $b$ if we start with an array $a$ of length $n$ such that $a_i = i$ for all $i$ ($1 \leq i \leq n$)? First of all, array $b$ should be a subsequence of $a$. Now consider an increasing array $c$ (possibly empty) that contains all the elements of $a$ which are not present in $b$. Let us look at some trivial necessary conditions. The length of array $c$ should be divisible by $2k$, as exactly $2k$ elements are deleted in one operation. There should be at least one element $v$ in $b$ such that there are at least $k$ elements smaller than $v$ in the array $c$, and at least $k$ elements greater than $v$ in the array $c$. Why? Think about the last operation. We can consider the case of empty $c$ separately. In fact, it turns out that these necessary conditions are sufficient (why?). Now, we need to find the number of possible $b$. We can instead find the number of binary strings $s$ of length $n$ such that $s_i = 1$ if $i$ is present in $b$, and $s_i=0$ otherwise. For given $n$ and $k$, let us call $s$ good if there exists some $b$ which can be achieved from $a$. Instead of counting strings $s$ which are good, let us count the number of strings which are not good. For convenience, we will only consider strings $s$ having the number of $\mathtt{0}$'s divisible by $2k$. Now, based on the conditions above, we can conclude that $s$ is bad if and only if there does not exist any $1$ between the $k$-th $0$ from the left and the $k$-th $0$ from the right in $s$.
Let us compress all the $\mathtt{0}$'s between the $k$-th $0$ from the left and the $k$-th $0$ from the right into a single $0$ and call the new string $t$. Note that $t$ will have exactly $2k - 1$ $\mathtt{0}$'s. We can also observe that for each $t$, a unique $s$ exists. This is only because we have already fixed the parameters $n$ and $k$. Thus the number of bad $s$ having exactly $x$ $\mathtt{1}$'s is ${{x + 2k - 1} \choose {2k - 1}}$ as there are exactly ${{x + 2k - 1} \choose {2k - 1}}$ binary strings $t$ having $2k - 1$ $\mathtt{0}$'s and $x$ $\mathtt{1}$'s. Finally, there are exactly $\binom{n}{x} - \binom{x + 2k - 1}{2k - 1}$ good binary strings $s$ having $x$ $\mathtt{1}$'s and $n-x$ $\mathtt{0}$'s. Now, do we need to find this value for each $x$ from $1$ to $n$? No, as the number ($n-x$) of $\mathtt{0}$'s in $s$ should be a multiple of $2k$. There are only $O(\frac{n}{2k})$ useful candidates for $x$. Thus, our overall complexity is $O(n \log(n))$ (as $\sum_{i=1}^{n} O(\frac{n}{i}) = O(n \log(n))$).
[ "combinatorics", "dp", "math" ]
2,400
#pragma GCC optimize("O3,unroll-loops") #include <bits/stdc++.h> #include <ext/pb_ds/tree_policy.hpp> #include <ext/pb_ds/assoc_container.hpp> using namespace __gnu_pbds; using namespace std; #define ll long long const ll INF_ADD=1e18; #define pb push_back #define mp make_pair #define nline "\n" #define f first #define s second #define pll pair<ll,ll> #define all(x) x.begin(),x.end() const ll MOD=998244353; const ll MAX=5000500; vector<ll> fact(MAX+2,1),inv_fact(MAX+2,1); ll binpow(ll a,ll b,ll MOD){ ll ans=1; a%=MOD; while(b){ if(b&1) ans=(ans*a)%MOD; b/=2; a=(a*a)%MOD; } return ans; } ll inverse(ll a,ll MOD){ return binpow(a,MOD-2,MOD); } void precompute(ll MOD){ for(ll i=2;i<MAX;i++){ fact[i]=(fact[i-1]*i)%MOD; } inv_fact[MAX-1]=inverse(fact[MAX-1],MOD); for(ll i=MAX-2;i>=0;i--){ inv_fact[i]=(inv_fact[i+1]*(i+1))%MOD; } } ll nCr(ll a,ll b,ll MOD){ if(a==b){ return 1; } if((a<0)||(a<b)||(b<0)) return 0; ll denom=(inv_fact[b]*inv_fact[a-b])%MOD; return (denom*fact[a])%MOD; } ll n,k; ll ways(ll gaps,ll options){ gaps--; ll now=nCr(gaps+options-1,options-1,MOD); return now; } void solve(){ cin>>n; for(ll k=1;k<=(n-1)/2;k++){ ll ans=1; for(ll deleted=2*k;deleted<=n-1;deleted+=2*k){ ll options=2*k,left_elements=n-deleted; ans=(ans+ways(left_elements+1,deleted+1)-ways(left_elements+1,options)+MOD)%MOD; } cout<<ans<<" "; } cout<<nline; return; } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); #ifndef ONLINE_JUDGE freopen("input.txt", "r", stdin); freopen("output.txt", "w", stdout); freopen("error.txt", "w", stderr); #endif ll test_cases=1; cin>>test_cases; precompute(MOD); while(test_cases--){ solve(); } cout<<fixed<<setprecision(10); cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n"; }
1930
F
Maximize the Difference
For an array $b$ of $m$ non-negative integers, define $f(b)$ as the \textbf{maximum} value of $\max\limits_{i = 1}^{m} (b_i | x) - \min\limits_{i = 1}^{m} (b_i | x)$ over all possible non-negative integers $x$, where $|$ is bitwise OR operation. You are given integers $n$ and $q$. You start with an empty array $a$. Process the following $q$ queries: - $v$: append $v$ to the back of $a$ and then output $f(a)$. It is guaranteed that $0 \leq v < n$. \textbf{The queries are given in a modified way.}
For an array $b$ consisting of $m$ non-negative integers, $f(b) = \max\limits_{p=1}^{m} ( \max\limits_{i = 1}^{m} (b_i | b_p) - \min\limits_{i = 1}^{m} (b_i | b_p))$. In other words, we can get the maximum possible value by choosing $x=b_p$ for some $p$ ($1 \leq p \leq m$). $f(b)$ is the maximum value of $b_i \land$ ~$b_j$ over all pairs ($i,j$) ($1 \leq i,j \leq m$), where $\land$ is the bitwise AND operator, and ~ is the bitwise one's complement operator. Let us see how to find $f(b)$ in $O(n \log(n))$ first. We will focus on updates later. Let us have two sets $S_1$ and $S_2$ such that $S_1$ contains all submasks of $b_i$ for all $1 \leq i \leq m$, and $S_2$ contains all submasks of ~$b_i$ for all $1 \leq i \leq m$. We can observe that $f(b)$ is the largest element present in both sets $S_1$ and $S_2$. Now, we could insert all submasks naively, but that would be pretty inefficient. Note that we need to insert any submask at most once in either of the sets. Can you think of some approach in which you efficiently insert all the non-visited submasks of some mask? Insert a mask only if it has not been visited before, and then recurse into the submasks obtained by dropping one set bit; this inserts all submasks efficiently. As every mask is visited at most once, the amortised complexity will be $O(n \log(n)^2)$. Note that instead of using a set, we can use a boolean array of size $n$ to reduce the complexity to $O(n \log(n))$. Thus, we can use the above idea to find $f(b)$ in $O(n \log(n))$. For each $i$ from $1$ to $m$, we insert all submasks of $b_i$ into set $S_1$ and all submasks of ~$b_i$ into set $S_2$. The above idea hints at how to deal with updates: if we need to append an element $z$ to $b$, we just insert all submasks of $z$ into set $S_1$ and all submasks of ~$z$ into set $S_2$. Hence, the overall complexity is $O(n \log(n))$.
[ "bitmasks", "brute force", "dfs and similar" ]
2,700
#pragma GCC optimize("O3,unroll-loops") #include <bits/stdc++.h> #include <ext/pb_ds/tree_policy.hpp> #include <ext/pb_ds/assoc_container.hpp> using namespace __gnu_pbds; using namespace std; #define ll long long #define pb push_back #define mp make_pair #define nline "\n" #define f first #define s second #define pll pair<ll,ll> #define all(x) x.begin(),x.end() const ll MAX=100100; void solve(){ ll n,q; cin>>n>>q; ll till=1,len=1; while(till<n){ till*=2; len++; } ll ans=0; vector<vector<ll>> track(2,vector<ll>(till+5,0)); auto add=[&](ll x,ll p){ queue<ll> trav; if(track[p][x]){ return; } trav.push(x); track[p][x]=1; while(!trav.empty()){ auto it=trav.front(); trav.pop(); if(track[0][it]&track[1][it]){ ans=max(ans,it); } for(ll j=0;j<len;j++){ if(it&(1<<j)){ ll cur=(it^(1<<j)); if(!track[p][cur]){ track[p][cur]=1; trav.push(cur); } } } } }; ll supermask=till-1; vector<ll> a(q+5); for(ll i=1;i<=q;i++){ ll h; cin>>h; a[i]=(h+ans)%n; add(a[i],0); add(supermask^a[i],1); cout<<ans<<" \n"[i==q]; } return; } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); #ifndef ONLINE_JUDGE freopen("input.txt", "r", stdin); freopen("output.txt", "w", stdout); freopen("error.txt", "w", stderr); #endif ll test_cases=1; cin>>test_cases; while(test_cases--){ solve(); } cout<<fixed<<setprecision(10); cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n"; }
1930
G
Prefix Max Set Counting
Define a function $f$ such that for an array $b$, $f(b)$ returns the array of prefix maxima of $b$. In other words, $f(b)$ is an array containing only those elements $b_i$, for which $b_i=\max(b_1,b_2,\ldots,b_i)$, without changing their order. For example, $f([3,10,4,10,15,1])=[3,10,10,15]$. You are given a tree consisting of $n$ nodes rooted at $1$. A permutation$^\dagger$ $p$ of $[1, 2, \ldots, n]$ is considered a pre-order of the tree if for all $i$ the following condition holds: - Let $k$ be the number of proper descendants$^\ddagger$ of node $p_i$. - For all $x$ such that $i < x \leq i+k$, $p_x$ is a proper descendant of node $p_i$. Find the number of distinct values of $f(a)$ over all possible pre-orders $a$. Since this value might be large, you only need to find it modulo $998\,244\,353$. $^\dagger$ A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array). $^\ddagger$ Node $t$ is a proper descendant of node $s$ if $s \neq t$ and $s$ is on the unique simple path from $t$ to $1$.
Consider some subsequence $S$ of $[1,2, \ldots, n]$ such that there exists at least one pre-order $a$ with $f(a)=S$. Look for some non-trivial properties of $S$. Node $S_i$ will be visited before node $S_{i+1}$. Assume $|S|=k$. First of all, we should have $S_1=1$ and $S_k = n$. There cannot exist three distinct indices $a, b$ and $c$ ($1 \leq a < b < c \leq k$) such that $S_c$ lies on the path from $S_a$ to $S_b$. Assume $LCA_i$ is the lowest common ancestor of $S_i$ and $S_{i+1}$. For all $i$ ($1 \leq i < k$), the largest value over all the nodes on the unique path from $S_i$ to $S_{i+1}$ should be $S_{i+1}$. Suppose $nax_p$ is the maximum value in the subtree of $p$. There is one more important restriction if $S_i$ does not lie on the path from $S_{i+1}$ to $1$. Suppose $m$ is the child of $LCA_i$ which lies on the path from $S_i$ to $LCA_i$. We should have $S_{i+1} > nax_m$. In fact, if you observe carefully, you will realise that we should have $S_i = nax_m$. Let us use dynamic programming. Suppose $dp[i]$ gives the number of valid subsequences (say $S$) such that the last element of $S$ is $i$. Note that the answer to our original problem will be $dp[n]$. Suppose $nax(u,v)$ denotes the maximum value on the path from $u$ to $v$ (including the endpoints). Let us have an $O(n^2)$ solution first. We have $dp[1]=1$. Suppose we are at some node $i$ ($i > 1$), and we need to find $dp[i]$. Let us consider some node $j$ ($1 \le j < i$) and see if we can append $i$ to all the subsequences which end with node $j$. If we can, we just need to add $dp[j]$ to $dp[i]$. But how do we check if we can append $i$ to all the subsequences that end with node $j$? Check the conditions above. So, we have an $O(n^2)$ solution now, and we need to optimise it. We will move in increasing order of the nodes (from $2$ to $n$) and calculate the $dp$ values. Suppose $nax(1, par_i) = v$, where $par_i$ denotes the parent of $i$. Assume we go from node $j$ ($j < i$) to node $i$.
There are two cases: $j$ lies on the path from $1$ to $i$: this case is easy, as we just need to ensure that $nax(j,par_i) = j$. We can add $dp[j]$ to $dp[i]$ if we have $nax(j,par_i) = j$. Note that there exists only one node (node $v$) for which we might add $dp[v]$ to $dp[i]$. $j$ does not lie on the path from $1$ to $i$: we will only consider the case in which $dp[j]$ will be added to $dp[i]$. Suppose $lca$ is the lowest common ancestor of $j$ and $i$, and $m$ is the child of $lca$ which lies on the path from $j$ to $lca$. Notice that the largest value in the subtree of $m$ should be $j$. This observation is quite helpful. We can traverse over all the ancestors of $i$. Suppose that we are at ancestor $u$. We will iterate over all the children (say $c$) of $u$ such that $nax_c < i$, and add $dp[nax_c]$ to $dp[i]$ if $nax(u,par_i) < nax_c$. Suppose $track[u][i]$ stores the sum of $dp[nax_c]$ for all $c$ such that $nax_c < i$. So we should add $track[u][i]$ to $dp[i]$. But there is a catch: this way, $dp[nax_c]$ will also get added to $dp[i]$ even when $nax_c < nax(u, par_i)$. So, we need to subtract some values, which is left as an exercise for the reader. Now, $track[u][d] = track[u][d-1] + dp[nax_c]$ if $nax_c = d$. So, instead of keeping a two-dimensional array $track$, we can just maintain a one-dimensional array $track$. Note that we will only need the sum of $track[u]$ for all the ancestors of $i$, which we can easily calculate using the Euler tour. You can look at the attached code for the implementation details. The intended time complexity is $O(n \cdot \log(n))$.
[ "data structures", "dp", "trees" ]
3,100
#include <bits/stdc++.h> #include <ext/pb_ds/tree_policy.hpp> #include <ext/pb_ds/assoc_container.hpp> using namespace __gnu_pbds; using namespace std; #define ll long long #define pb push_back #define mp make_pair #define nline "\n" #define f first #define s second #define pll pair<ll,ll> #define all(x) x.begin(),x.end() const ll MOD=998244353; const ll MAX=1000100; struct FenwickTree{ vector<ll> bit; ll n; FenwickTree(ll n){ this->n = n; bit.assign(n, 0); } FenwickTree(vector<ll> a):FenwickTree(a.size()){ ll x=a.size(); for(size_t i=0;i<x;i++) add(i,a[i]); } ll sum(ll r) { ll ret=0; for(;r>=0;r=(r&(r+1))-1){ ret+=bit[r]; ret%=MOD; } return ret; } ll sum(ll l,ll r) { if(l>r) return 0; ll val=sum(r)-sum(l-1)+MOD; val%=MOD; return val; } void add(ll idx,ll delta) { for(;idx<n;idx=idx|(idx+1)){ bit[idx]+=delta; bit[idx]%=MOD; } } }; vector<vector<ll>> adj; vector<ll> tin(MAX,0),tout(MAX,0); vector<ll> parent(MAX,0); vector<ll> overall_max(MAX,0); ll now=1; vector<ll> jump_to(MAX,0),sub(MAX,0); void dfs(ll cur,ll par){ parent[cur]=par; tin[cur]=now++; overall_max[cur]=cur; for(auto chld:adj[cur]){ if(chld!=par){ jump_to[chld]=max(jump_to[cur],cur); dfs(chld,cur); overall_max[cur]=max(overall_max[cur],overall_max[chld]); } } tout[cur]=now++; } vector<ll> dp(MAX,0); void solve(){ ll n; cin>>n; adj.clear(); adj.resize(n+5); for(ll i=1;i<=n-1;i++){ ll u,v; cin>>u>>v; adj[u].push_back(v); adj[v].push_back(u); } now=1; dfs(1,0); ll ans=0; FenwickTree enter_time(now+5),exit_time(now+5); overall_max[0]=MOD; dp[0]=1; for(ll i=1;i<=n;i++){ ll p=jump_to[i]; dp[i]=(enter_time.sum(0,tin[i])-exit_time.sum(0,tin[i])-sub[p]+dp[p]+MOD+MOD)%MOD; if(p>i){ dp[i]=0; } ll node=i; while(overall_max[node]==i){ node=parent[node]; } enter_time.add(tin[node],dp[i]); exit_time.add(tout[node],dp[i]); sub[i]=(enter_time.sum(0,tin[i])-exit_time.sum(0,tin[i])+MOD)%MOD; } cout<<dp[n]<<nline; return; } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); #ifndef ONLINE_JUDGE 
freopen("input.txt", "r", stdin); freopen("output.txt", "w", stdout); freopen("error.txt", "w", stderr); #endif ll test_cases=1; cin>>test_cases; while(test_cases--){ solve(); } cout<<fixed<<setprecision(10); cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n"; }
1930
H
Interactive Mex Tree
This is an interactive problem. Alice has a tree $T$ consisting of $n$ nodes, numbered from $1$ to $n$. Alice will show $T$ to Bob. After observing $T$, Bob needs to tell Alice two permutations $p_1$ and $p_2$ of $[1, 2, \ldots, n]$. Then, Alice will play $q$ rounds of the following game. - Alice will create an array $a$ that is a permutation of $[0,1,\ldots,n-1]$. The value of node $v$ will be $a_v$. - Alice will choose two nodes $u$ and $v$ ($1 \leq u, v \leq n$, $u \neq v$) of $T$ and tell them to Bob. Bob will need to find the $\operatorname{MEX}^\dagger$ of the values on the unique simple path between nodes $u$ and $v$. - To find this value, Bob can ask Alice at most $5$ queries. In each query, Bob should give three integers $t$, $l$ and $r$ to Alice such that $t$ is either $1$ or $2$, and $1 \leq l \leq r \leq n$. Alice will then tell Bob the value equal to $$\min_{i=l}^{r} a[p_{t,i}].$$ Note that all rounds are independent of each other. In particular, the values of $a$, $u$ and $v$ can be different in different rounds. Bob is puzzled as he only knows the HLD solution, which requires $O(\log(n))$ queries per round. So he needs your help to win the game. $^\dagger$ The $\operatorname{MEX}$ (minimum excludant) of a collection of integers $c_1, c_2, \ldots, c_k$ is defined as the smallest non-negative integer $x$ which does not occur in the collection $c$.
$\operatorname{MEX}$ of the path from $u$ to $v$ will be the minimum value over all the nodes of $T$ which do not lie on the path from $u$ to $v$. $p_1$ and $p_2$ are associated with the Euler tour: $p_1$ is the permutation of $[1,2, \ldots, n]$ sorted in increasing order of the $tin$ time observed during the Euler tour. Similarly, $p_2$ is the permutation of $[1,2, \ldots, n]$ sorted in increasing order of the $tout$ time. Note that any Euler tour is fine. Now we have $p_1$ and $p_2$ with us. Suppose we need to find the $\operatorname{MEX}$ of the path from $u$ to $v$. Assume that $tin_u < tin_v$ for convenience. Assume $T'$ is the forest we get if we remove all the nodes on the path from $u$ to $v$ from $T$. Our goal is to find the minimum value over all the nodes in $T'$. Assume that $T'$ is non-empty, as the $\operatorname{MEX}$ will be $n$ if $T'$ is empty. Assume $LCA$ is the lowest common ancestor of $u$ and $v$. Suppose $m$ is the child of $LCA$ which lies on the path from $v$ to $LCA$. Let us consider the groups of nodes $p$ such that: (1) $tout_p < tin_u$; (2) $tin_u < tin_p < tin_m$; (3) $tin_m < tout_p < tin_v$; (4) $tin_v < tin_p$; (5) $tout_{LCA} < tout_p$. Note that we have precisely $5$ groups, with the $i$-th group consisting of only those nodes which satisfy the $i$-th condition. Here comes the interesting claim: all nodes in $T'$ are present in at least one of the above $5$ groups, and there does not exist a node $p$ such that $p$ is on the path from $u$ to $v$ and $p$ is present in any of the groups. Now, it is not hard to observe that if we consider the nodes of any group, they will form a contiguous segment in either $p_1$ or $p_2$. So we can cover each group in a single query. Hence, we can find the $\operatorname{MEX}$ of the path in any round using at most $5$ queries.
[ "constructive algorithms", "dfs and similar", "interactive", "trees" ]
3,300
#include <bits/stdc++.h> #include <ext/pb_ds/tree_policy.hpp> #include <ext/pb_ds/assoc_container.hpp> using namespace __gnu_pbds; using namespace std; #define ll long long const ll INF_ADD=1e18; #define pb push_back #define mp make_pair #define nline "\n" #define f first #define s second #define pll pair<ll,ll> #define all(x) x.begin(),x.end() const ll MOD=998244353; const ll MAX=200200; vector<ll> adj[MAX]; ll now=0,till=20; vector<ll> tin(MAX,0),tout(MAX,0),depth(MAX,0); vector<vector<ll>> jump(MAX,vector<ll>(till+1,0)); void dfs(ll cur,ll par){ jump[cur][0]=par; for(ll i=1;i<=till;i++) jump[cur][i]=jump[jump[cur][i-1]][i-1]; tin[cur]=++now; for(ll chld:adj[cur]){ if(chld!=par){ depth[chld]=depth[cur]+1; dfs(chld,cur); } } tout[cur]=++now; } bool is_ancestor(ll a,ll b){ if((tin[a]<=tin[b])&&(tout[a]>=tout[b])) return 1; return 0; } ll lca(ll a,ll b){ if(is_ancestor(a,b)) return a; for(ll i=till;i>=0;i--){ if(!is_ancestor(jump[a][i],b)) a=jump[a][i]; } return jump[a][0]; } void solve(){ ll n; cin>>n; ll m; cin>>m; vector<ll> a(n+5); for(ll i=1;i<=n-1;i++){ ll u,v; cin>>u>>v; adj[u].push_back(v); adj[v].push_back(u); } now=1; dfs(1,1); vector<ll> p(n),q(n); for(ll i=0;i<n;i++){ p[i]=q[i]=i+1; } sort(all(p),[&](ll l,ll r){ return tin[l]<tin[r]; }); sort(all(q),[&](ll l,ll r){ return tout[l]<tout[r]; }); for(auto it:p){ cout<<it<<" "; } cout<<endl; for(auto it:q){ cout<<it<<" "; } cout<<endl; auto query_p=[&](ll l,ll r){ ll left_pos=n+1,right_pos=-1; for(ll i=0;i<n;i++){ ll node=p[i]; if((tin[node]>=l) and (tin[node]<=r)){ left_pos=min(left_pos,i); right_pos=i; } } ll now=n+5; if(right_pos!=-1){ left_pos++,right_pos++; cout<<"? 1 "<<left_pos<<" "<<right_pos<<endl; cin>>now; } return now; }; auto query_q=[&](ll l,ll r){ ll left_pos=n+1,right_pos=-1; for(ll i=0;i<n;i++){ ll node=q[i]; if((tout[node]>=l) and (tout[node]<=r)){ left_pos=min(left_pos,i); right_pos=i; } } ll now=n+5; if(right_pos!=-1){ left_pos++,right_pos++; cout<<"? 
2 "<<left_pos<<" "<<right_pos<<endl; cin>>now; } return now; }; while(m--){ ll u,v; cin>>u>>v; if(tout[u]>tout[v]){ swap(u,v); } ll lca_node=lca(u,v); ll ans=n; if(lca_node==v){ ans=min(ans,query_q(1,tin[u])); ans=min(ans,query_p(tin[u]+1,tout[v])); ans=min(ans,query_q(tout[v]+1,now)); cout<<"! "<<ans<<endl; ll x; cin>>x; continue; } ans=min(ans,query_q(1,tin[u])); ll consider=v; for(auto it:adj[lca_node]){ if(is_ancestor(lca_node,it) and is_ancestor(it,v)){ consider=it; } } ans=min(ans,query_p(tin[u]+1,tin[consider]-1)); ans=min(ans,query_q(tin[consider],tin[v])); ans=min(ans,query_p(tin[v]+1,tout[lca_node])); ans=min(ans,query_q(tout[lca_node]+1,now)); cout<<"! "<<ans<<endl; ll x; cin>>x; } for(ll i=1;i<=n;i++){ adj[i].clear(); } return; } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); #ifndef ONLINE_JUDGE freopen("input.txt", "r", stdin); freopen("output.txt", "w", stdout); freopen("error.txt", "w", stderr); #endif ll test_cases=1; cin>>test_cases; while(test_cases--){ solve(); } cout<<fixed<<setprecision(10); cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n"; }