2023
B
Skipping
It is already the year $3024$, ideas for problems have long run out, and the olympiad now takes place in a modified individual format. The olympiad consists of $n$ problems, numbered from $1$ to $n$. The $i$-th problem has its own score $a_i$ and a certain parameter $b_i$ ($1 \le b_i \le n$). Initially, the testing system gives the participant the \textbf{first} problem. When the participant is given the $i$-th problem, they have two options: - They can submit the problem and receive $a_i$ points; - They can skip the problem, in which case they will never be able to submit it. Then, the testing system selects the next problem for the participant from problems with indices $j$, such that: - If they submitted the $i$-th problem, it looks at problems with indices $j < i$; - If they skipped the $i$-th problem, it looks at problems with indices $j \leq b_i$. Among these problems, it selects the problem with the \textbf{maximum} index that it has \textbf{not previously given} to the participant (they have neither submitted nor skipped it before). If there is no such problem, then the competition for the participant \textbf{ends}, and their result is equal to the sum of points for all submitted problems. In particular, if the participant submits the first problem, then the competition for them ends. Note that the participant receives each problem \textbf{at most once}. Prokhor has prepared thoroughly for the olympiad, and now he can submit any problem. Help him determine the maximum number of points he can achieve.
Notice that it makes no sense for us to skip problems for which $b_i \leq i$, since we can solve the problem, earn points for it, and the next problem will be chosen from the numbers $j < i$. If we skip it, we do not earn points, and the next problem will be chosen from the same set of problems or even a smaller one. We also note that if we are at problem $i$ and the maximum problem number that has been assigned to us earlier in the competition is $j \geqslant b_i$, then it makes no sense to skip this problem, because after skipping, we could have reached the next problem from problem $j$ simply by solving problems. Under these conditions, it turns out that all the problems assigned to us, after the competition ends, are on some prefix of the set of problems (i.e., there exists some number $i$ from $1$ to $n$ such that all problems with numbers $j \leq i$ were received by us, and problems with numbers $j > i$ were not received). This is indeed the case; let $i$ be the maximum problem number that has been assigned to us. After this problem is assigned, we will not skip any more problems, as we have already proven that it is not beneficial, which means we will only solve problems and will solve all problems with numbers $j < i$ that have not been visited before. Instead of trying to maximize the total score for the solved problems, we will aim to minimize the total score for the skipped problems. We will incur a penalty equal to $a_i$ for a skipped problem and $0$ if we solve it. We know that the answer lies on some prefix, so now we want to determine the minimum penalty required to reach each problem. Let's solve the following subproblem. We are given the same problems, and the following two options if we are at problem $i$: Pay a penalty of $0$ and move to problem $i - 1$, if such a problem exists; Pay a penalty of $a_i$ and move to problem $b_i$. Now we are allowed to visit each problem as many times as we want. 
In this case, we can construct a weighted directed graph of the following form: The graph has $n$ vertices, each vertex $i$ corresponds to problem $i$; For each $i > 1$, there is an edge of weight $0$ from vertex $i$ to vertex $i - 1$; For each $i$, there is an edge of weight $a_i$ from vertex $i$ to vertex $b_i$. Thus, our task reduces to finding the shortest distance from vertex $1$ to each vertex. Recall that the shortest distance guarantees that on the way to vertex $i$, we visited each vertex at most once, which means that if we reached problem $i$ with some penalty, we can solve all problems on the prefix up to $i$ (inclusive), since the points for all skipped problems will be compensated by the penalty. Since we already know that the optimal answer lies on one of the prefixes, we need to know the total points for the problems for each prefix, which can be easily done using prefix sums. After that, from all the values of the difference between the prefix sum and the minimum penalty needed to reach vertex $i$, we will choose the maximum across all prefixes $i$, and this will be the answer. We will find the shortest distance using Dijkstra's algorithm in $O(n \log n)$. Prefix sums are calculated in $O(n)$. Final asymptotic complexity: $O(n \log n)$.
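As a sketch of this reduction, here is a minimal Python implementation (the function name `max_points` and the 1-indexed array convention are assumptions made for illustration, not part of the editorial):

```python
import heapq

def max_points(n, a, b):
    # a, b are 1-indexed (a[0] and b[0] are unused placeholders).
    INF = float('inf')
    dist = [INF] * (n + 1)   # dist[i] = minimum penalty to reach problem i
    dist[1] = 0
    pq = [(0, 1)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist[v]:
            continue
        # weight-0 edge v -> v-1 (submitting moves us down the prefix)
        if v > 1 and d < dist[v - 1]:
            dist[v - 1] = d
            heapq.heappush(pq, (d, v - 1))
        # weight-a[v] edge v -> b[v] (skipping problem v)
        if d + a[v] < dist[b[v]]:
            dist[b[v]] = d + a[v]
            heapq.heappush(pq, (d + a[v], b[v]))
    best, pref = 0, 0
    for i in range(1, n + 1):
        pref += a[i]                      # prefix sum of scores
        if dist[i] < INF:
            best = max(best, pref - dist[i])
    return best
```

For example, with $n = 2$, $a = [15, 16]$, $b = [2, 1]$: skipping problem $1$ costs a penalty of $15$ but unlocks problem $2$, so the best score is $31 - 15 = 16$.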
[ "binary search", "dp", "graphs", "shortest paths" ]
1,700
null
2023
C
C+K+S
You are given two strongly connected$^{\dagger}$ directed graphs, each with exactly $n$ vertices, but possibly different numbers of edges. Upon closer inspection, you noticed an important feature — the length of any cycle in these graphs is divisible by $k$. Each of the $2n$ vertices belongs to exactly one of two types: incoming or outgoing. For each vertex, its type is known to you. You need to determine whether it is possible to draw exactly $n$ directed edges between the two graphs such that the following four conditions are met: - The ends of any added edge lie in different graphs. - From each outgoing vertex, exactly one added edge originates. - Into each incoming vertex, exactly one added edge enters. - In the resulting graph, the length of any cycle is divisible by $k$. $^{\dagger}$A strongly connected graph is a graph in which there is a path from every vertex to every other vertex.
Consider a strongly connected graph in which the lengths of all cycles are multiples of $k$. It can be observed that it is always possible to color this graph with $k$ colors in such a way that any edge connects a vertex of color $color$ to a vertex of color $(color + 1) \mod k$. It turns out that we can add edges to this graph only if they preserve the described color invariant. Let's fix some colorings of the original graphs. With fixed colorings, it is quite easy to check whether the required edges can be added: we create counting arrays for each color and each class of vertices, and then compare the elements of the arrays according to the criterion mentioned above. However, we could have initially colored the second graph differently, for example, by adding $1$ to the color of each vertex modulo $k$. It is not difficult to verify that all the values of the counting arrays for the second graph would then shift cyclically by $1$. Similarly, depending on the coloring, all values could shift cyclically by an arbitrary amount. To handle this, we construct the counting arrays in such a way that, for fixed colorings, it suffices to check them for equality. Then equality is achieved for some choice of coloring exactly when one array is a cyclic shift of the other, and this condition can be checked, for example, using the Knuth-Morris-Pratt algorithm.
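The final step, checking whether one counting array is a cyclic shift of the other, can be sketched with the prefix function (KMP); the helper name and the trick of searching `a` inside `b + b` with a non-matching separator are assumptions of this sketch, and the surrounding coloring and counting-array construction is omitted:

```python
def is_cyclic_shift(a, b):
    """Return True iff list b is a cyclic shift of list a (prefix-function/KMP)."""
    if len(a) != len(b):
        return False
    if not a:
        return True
    # None acts as a separator that never equals an array element
    s = a + [None] + b + b
    pi = [0] * len(s)            # prefix function of the combined sequence
    for i in range(1, len(s)):
        j = pi[i - 1]
        while j > 0 and s[i] != s[j]:
            j = pi[j - 1]
        if s[i] == s[j]:
            j += 1
        pi[i] = j
        if j == len(a):          # full occurrence of a inside b + b
            return True
    return False
```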
[ "constructive algorithms", "dfs and similar", "graphs", "greedy", "hashing", "implementation", "strings" ]
2,400
null
2023
D
Many Games
Recently, you received a rare ticket to the only casino in the world where you can actually earn something, and you want to take full advantage of this opportunity. The conditions in this casino are as follows: - There are a total of $n$ games in the casino. - You can play each game \textbf{at most once}. - Each game is characterized by two parameters: $p_i$ ($1 \le p_i \le 100$) and $w_i$ — the probability of winning the game in percentage and the winnings for a win. - If you lose in any game you decide to play, you will receive nothing at all (even for the games you won). You need to choose a set of games in advance that you will play in such a way as to maximize the expected value of your winnings. In this case, if you choose to play the games with indices $i_1 < i_2 < \ldots < i_k$, you will win in all of them with a probability of $\prod\limits_{j=1}^k \frac{p_{i_j}}{100}$, and in that case, your winnings will be equal to $\sum\limits_{j=1}^k w_{i_j}$. That is, the expected value of your winnings will be $\left(\prod\limits_{j=1}^k \frac{p_{i_j}}{100}\right) \cdot \left(\sum\limits_{j=1}^k w_{i_j}\right)$. To avoid going bankrupt, the casino owners have limited the expected value of winnings for each individual game. Thus, for all $i$ ($1 \le i \le n$), it holds that $w_i \cdot p_i \le 2 \cdot 10^5$. Your task is to find the maximum expected value of winnings that can be obtained by choosing some set of games in the casino.
Claim 1: Suppose we took at least one item with $p_i < 100$. Then it is claimed that the sum of $w_i$ over all taken elements is $\le 200\,000 \cdot \frac{100}{99} = C$. To prove this, let's assume the opposite and try to remove any element with $p_i < 100$ from the taken set, denoting $q_i = \frac{p_i}{100}$. The answer was $(W + w_i) \cdot Q \cdot q_i$, but it became $W \cdot Q$. The answer increased if $w_i \cdot Q \cdot q_i < W \cdot Q \cdot (1 - q_i)$, that is, $w_i \cdot q_i < W \cdot (1 - q_i)$, which is true if $w_i \cdot p_i < W \cdot (100 - p_i)$. This holds since $w_i \cdot p_i \le 200\,000$, while $W \cdot (100 - p_i) > 200\,000$. Let's isolate all items with $p_i = 100$. If their sum alone exceeds $C$, then we know the answer; otherwise, for each weight sum, we will find the maximum probability with which such a weight can be obtained, using a dynamic programming approach similar to the knapsack problem. To do this quickly, we will reduce the number of considered items. For each $p$, the answer will contain some prefix of the elements with that $p$, sorted in descending order by $w_i$. If the optimal answer contains $c_p$ elements with $p_i = p$, then $c_p \cdot q^{c_p} > (c_p - 1) \cdot q^{c_p - 1}$; otherwise, the smallest element can definitely be removed. Rewriting the inequality gives us $c_p \cdot q > c_p - 1$, which means $c_p < \frac{1}{1-q}$. Thus, among the elements with a given $p$, it is sufficient to keep the top $\frac{100}{100-p}$ items, or about $450$ (i.e., $99 \cdot \ln{99}$) items across all $p$. In the end, it is enough to go through the dynamic programming table and find the cell with the highest answer. The total running time is $C \cdot 99 \cdot \ln{99}$.
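A hedged sketch of the described solution (the helper name `best_expected` is an assumption; for brevity the dp is capped by the total weight of the kept items rather than by $C$, which is equivalent on small inputs):

```python
from collections import defaultdict

def best_expected(games):
    # games: list of (p, w) pairs with 1 <= p <= 100
    base = sum(w for p, w in games if p == 100)   # sure wins: always take them
    by_p = defaultdict(list)
    for p, w in games:
        if p < 100:
            by_p[p].append(w)
    items = []
    for p, ws in by_p.items():
        ws.sort(reverse=True)
        # at most floor(100 / (100 - p)) games with this p can be optimal
        items.extend((p, w) for w in ws[:100 // (100 - p)])
    cap = sum(w for _, w in items)
    dp = [0.0] * (cap + 1)    # dp[W] = best win probability for weight sum W
    dp[0] = 1.0
    for p, w in items:
        q = p / 100
        for W in range(cap, w - 1, -1):   # knapsack, iterate weights downward
            if dp[W - w] * q > dp[W]:
                dp[W] = dp[W - w] * q
    return max(dp[W] * (W + base) for W in range(cap + 1))
```

For instance, with the single game $(80, 100)$ the expected value is $0.8 \cdot 100 = 80$; with $(50, 200)$ and $(50, 100)$ it is best to play only the first game, for $0.5 \cdot 200 = 100$.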
[ "brute force", "dp", "greedy", "math", "probabilities" ]
2,900
null
2023
E
Tree of Life
In the heart of an ancient kingdom grows the legendary Tree of Life — the only one of its kind and the source of magical power for the entire world. The tree consists of $n$ nodes. Each node of this tree is a magical source, connected to other such sources through magical channels (edges). In total, there are $n-1$ channels in the tree, with the $i$-th channel connecting nodes $v_i$ and $u_i$. Moreover, there exists a unique simple path through the channels between any two nodes in the tree. However, the magical energy flowing through these channels must be balanced; otherwise, the power of the Tree of Life may disrupt the natural order and cause catastrophic consequences. The sages of the kingdom discovered that when two magical channels converge at a single node, a dangerous "magical resonance vibration" occurs between them. To protect the Tree of Life and maintain its balance, it is necessary to select several paths and perform special rituals along them. A path is a sequence of distinct nodes $v_1, v_2, \ldots, v_k$, where each pair of adjacent nodes $v_i$ and $v_{i+1}$ is connected by a channel. When the sages perform a ritual along such a path, the resonance vibration between the channels $(v_i, v_{i+1})$ and $(v_{i+1}, v_{i+2})$ is blocked for each $1 \leq i \leq k - 2$. The sages' task is to select the minimum number of paths and perform rituals along them to block all resonance vibrations. This means that for every pair of channels emanating from a single node, there must exist \textbf{at least one} selected path that contains \textbf{both} of these channels. Help the sages find the minimum number of such paths so that the magical balance of the Tree of Life is preserved, and its power continues to nourish the entire world!
This problem has several solutions that are similar to varying degrees. We will describe one of them. We will apply the following greedy approach. We will construct the answer by combining the answers for the subtrees. To do this, we will perform a depth-first traversal, and when exiting a vertex, we return a triplet $(ans, up, bonus)$, where $ans$ is the minimum number of paths needed to cover all pairs of adjacent edges in the subtree (considering the edge upwards), $up$ is the number of edges upwards, and $bonus$ is the number of paths that are connected at some vertex of the subtree but can be separated into two paths upwards without violating the coverage. Then, if we are at vertex $v$ and receive from child $u$ the triplet $(ans_u, up_u, bonus_u)$, we need to increase $up_u$ to at least $deg_v - 1$ (to satisfy the coverage). Next, we will effectively reduce it by $deg_v - 1$, implying that we have satisfied all such pairs. Meanwhile, we will sum $ans_u$ and $bonus_u$, and subtract from $ans$ when connecting. We first increase using $bonus$ ($ans_u += 1, up_u += 2, bonus_u -= 1$), and then simply by adding new paths. After this, we have remaining excess paths ($up$) leading to $v$, which we might be able to combine to reduce the answer. This is represented by the set $U = \{up_{u_1}, up_{u_2}, \ldots, up_{u_k}\}$. If $\max(U) * 2 \leq sum(U)$, we can combine all pairs (leaving at most $1$ path upwards) in $U$, adding these paths to $bonus_v$. Otherwise, we increase all $up_{u_i} \neq \max(U)$ using $bonus_{u_i}$ until $\max(U) * 2 \leq sum(U)$ is satisfied. Finally, if the condition is met, we return $up_v = (sum(U) \mod 2) + deg_v - 1$; otherwise, we return $up_v = 2 * \max(U) - sum(U) + deg_v - 1$. Here $bonus_v$ and $ans_v$ are the sums that may have changed during the process. Don't forget to account for the paths that merged at $v$. The root needs to be handled separately, as there are no paths upwards there.
To prove this solution, one can consider $dp_{v, up}$ - the minimum number of paths needed to cover subtree $v$ if $up$ paths go upwards. It is not hard to notice that the triplet $(ans, up, bonus)$ in the greedy solution describes all optimal states of the dynamic programming. P.S. Strict proofs are left as an exercise for the reader.
[ "dp", "greedy", "trees" ]
3,300
null
2023
F
Hills and Pits
In a desert city with a hilly landscape, the city hall decided to level the road surface by purchasing a dump truck. The road is divided into $n$ sections, numbered from $1$ to $n$ from left to right. The height of the surface in the $i$-th section is equal to $a_i$. If the height of the $i$-th section is greater than $0$, then the dump truck must take sand from the $i$-th section of the road, and if the height of the $i$-th section is less than $0$, the dump truck must fill the pit in the $i$-th section of the road with sand. It is guaranteed that the initial heights are not equal to $0$. When the dump truck is in the $i$-th section of the road, it can either take away $x$ units of sand, in which case the height of the surface in the $i$-th section will decrease by $x$, or it can fill in $x$ units of sand (provided that it currently has at least $x$ units of sand in its bed), in which case the height of the surface in the $i$-th section of the road will increase by $x$. The dump truck can start its journey from any section of the road. Moving to an adjacent section on the left or right takes $1$ minute, and the time for loading and unloading sand can be neglected. The dump truck has an infinite capacity and is initially empty. You need to find the minimum time required for the dump truck to level the sand so that the height in each section becomes equal to $0$. Note that after all movements, the dump truck \textbf{may still have sand left in its bed}. You need to solve this problem \textbf{independently} for the segments numbered from $l_i$ to $r_i$. Sand outside the segment cannot be used.
To begin, let's solve the problem for a single query, where $l = 1$ and $r = n$. We will introduce an additional constraint - the dump truck must start on the first segment and finish on the last one. We will calculate the prefix sums of the array $a_1, a_2, \ldots, a_n$, denoted as $p_1, p_2, \ldots, p_n$. If $p_n < 0$, then the answer is $-1$, as there won't be enough sand to cover all $a_i < 0$; otherwise, an answer exists. Let $x_i$ be the number of times the dump truck travels between segments $i$ and $i+1$ (in either direction). Notice that if $p_i < 0$, then $x_i \geq 3$. Since the dump truck starts to the left of segment $i+1$, it will have to cross this boundary at least once. Additionally, since $p_i < 0$, it will have to return to the prefix to balance it out (otherwise it won't have enough sand to cover the negative $a_i$). Then it will have to cross the boundary a third time, as the dump truck must finish its journey to the right of segment $i$. The dump truck can travel such that when $p_i < 0$, $x_i = 3$, and when $p_i \geq 0$, $x_i = 1$. To achieve this, it simply travels from left to right; while the prefix sums are negative, it postpones filling the pits and keeps moving, and as soon as it reaches a position $j$ with $p_j \geq 0$, it returns left, nullifying the negative $a_i$ (the prefix further to the left has already been nullified, as we maintain this invariant), and then returns right to position $j$. With this algorithm, we achieve $x_i = 3$ when $p_i < 0$ and $x_i = 1$ otherwise, which is the optimal answer to the problem. Now, let's remove the constraint on the starting and ending positions of the dump truck. Let the dump truck start at segment $s$ and finish at segment $f$, with $s \leq f$. Then, if $i < s$, $x_i \geq 2$, since the dump truck starts and finishes to the right of segment $i$, meaning it needs to reach it and then return right. Similarly, if $i \geq f$, then $x_i \geq 2$.
For all $s \leq i < f$, the constraints on $x_i$ described earlier still hold. How can we now achieve optimal $x_i$? Let the dump truck travel from $s$ to $1$, nullifying all $a_i > 0$ and collecting sand, and then travel right to $s$, covering all $a_i < 0$ (here we used the fact that $p_s \geq 0$, which we will prove later). From $s$ to $f$, the dump truck follows the previously described algorithm for $s = 1$ and $f = n$. Next, it will travel from $f$ to $n$, nullifying $a_i > 0$, and then return to $f$, covering $a_i < 0$. Thus, we have obtained optimal $x_i$. We still need to understand why $p_s \geq 0$. Suppose this is not the case; then, according to our estimates, $x_s \geq 3$. But if we increase $s$ by $1$, this estimate weakens to $x_s \geq 2$, meaning we have no reason to consider this $s$. If we cannot increase $s$ by $1$, it means that $s = f$. Then for all $i$, we have the estimate $x_i \geq 2$. But then we can simply traverse all segments from right to left and then from left to right, resulting in all $x_i = 2$. The case $s \leq f$ has been analyzed. If $s > f$, we simply need to reverse the array $a_i$ and transition to the case $s \leq f$. Let's try to better understand what we have actually done by removing the constraints on the starting and ending positions of the dump truck. Let's look at the array $x_i$. Initially, it contains only the values $1$ and $3$; we can replace some prefix and suffix with $2$. The answer to the problem is the sum of the resulting array. First, let's understand how to recalculate the $1$s and $3$s. We will sort the queries by $p_{l-1}$ (assuming $p_0 = 0$). Let's consider the segment $a_l, a_{l+1}, \ldots, a_r$. Its prefix sums are $p_l - p_{l-1}, p_{l+1} - p_{l-1}, \ldots, p_r - p_{l-1}$. Notice that as $p_{l-1}$ increases, the values $p_i - p_{l-1}$ will decrease.
Therefore, if we consider the queries in increasing order of $p_{l-1}$, initially $x_i = 1$ for any $i$, and then they gradually become $x_i = 3$ (when $p_i$ becomes less than $p_{l-1}$). We need to come up with a structure that can efficiently find the optimal replacement over a segment. We can simply use a Segment Tree, where each node will store $5$ values - the sum over the segment without replacements; the sum over the segment if everything is replaced; the sum over the segment if the prefix is optimally replaced; the sum over the segment if the suffix is optimally replaced; and the sum over the segment if both the prefix and suffix are optimally replaced. It is not difficult to figure out how to combine $2$ segments (this problem is very similar to the well-known problem of finding the maximum subarray of ones in a binary array). To solve the problem, we just need to make replacements of $1$ with $3$ at a point and make a query over the segment in this Segment Tree. This solution needs to be run $2$ times, once for the original array $a_i$ and once for its reversed version. The answer to the query is the minimum of the two obtained results. The complexity of the solution is $O((n + q) \log(n))$ time and $O(n + q)$ space.
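For intuition, the restricted single-query version from the beginning of the analysis (the truck starts on the first section and ends on the last) follows directly from the $x_i$ counting; `min_time_fixed_ends` is a hypothetical helper illustrating only that special case, not the full segment-tree solution:

```python
def min_time_fixed_ends(a):
    """Minimum time for one segment when the truck must start at section 1
    and finish at section n: each boundary i/i+1 costs 3 if p_i < 0, else 1."""
    p = 0
    total = 0
    for i, v in enumerate(a):
        p += v
        if i < len(a) - 1:            # boundary between sections i+1 and i+2
            total += 3 if p < 0 else 1
    return -1 if p < 0 else total     # p_n < 0 means not enough sand
```

For example, on $a = [-2, 2]$ the first prefix sum is negative, so the single boundary is crossed three times and the answer is $3$; on $a = [2, -2]$ it is crossed once.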
[ "data structures", "greedy", "math", "matrices" ]
3,500
null
2024
A
Profitable Interest Rate
Alice has $a$ coins. She can open a bank deposit called "Profitable", but the minimum amount required to open this deposit is $b$ coins. There is also a deposit called "Unprofitable", which can be opened with \textbf{any} amount of coins. Alice noticed that if she opens the "Unprofitable" deposit with $x$ coins, the minimum amount required to open the "Profitable" deposit decreases by $2x$ coins. However, these coins cannot later be deposited into the "Profitable" deposit. Help Alice determine the maximum number of coins she can deposit into the "Profitable" deposit if she first deposits some amount of coins (possibly $0$) into the "Unprofitable" deposit. If Alice can never open the "Profitable" deposit, output $0$.
Let's say we have deposited $x$ coins into the "Unprofitable" deposit; then we can open the "Profitable" deposit if $a - x \geq b - 2x$ is satisfied, which is equivalent to the inequality $x \geq b - a$. Thus, we should deposit $\text{max}(0, b - a)$ coins into the "Unprofitable" deposit, and open the "Profitable" deposit with the rest of the coins.
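The whole solution fits in a couple of lines; `max_profitable` is an assumed name for this sketch:

```python
def max_profitable(a, b):
    x = max(0, b - a)      # coins deposited into "Unprofitable" first
    return max(0, a - x)   # the rest opens "Profitable" (0 if impossible)
```

For example, $a = 7$, $b = 9$: deposit $x = 2$ coins, after which $b - 2x = 5$ and the remaining $5$ coins open the deposit.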
[ "greedy", "math" ]
800
null
2024
B
Buying Lemonade
There is a vending machine that sells lemonade. The machine has a total of $n$ slots. You know that initially, the $i$-th slot contains $a_i$ cans of lemonade. There are also $n$ buttons on the machine, each button corresponds to a slot, with exactly one button corresponding to each slot. Unfortunately, the labels on the buttons have worn off, so you \textbf{do not know} which button corresponds to which slot. When you press the button corresponding to the $i$-th slot, one of two events occurs: - If there is a can of lemonade in the $i$-th slot, it will drop out and you will take it. At this point, the number of cans in the $i$-th slot decreases by $1$. - If there are no cans of lemonade left in the $i$-th slot, nothing will drop out. After pressing, the can drops out so quickly that it is impossible to track from which slot it fell. The contents of the slots are hidden from your view, so you cannot see how many cans are left in each slot. The only thing you know is the initial number of cans in the slots: $a_1, a_2, \ldots, a_n$. Determine the minimum number of button presses needed to guarantee that you receive at least $k$ cans of lemonade. Note that you can adapt your strategy during the button presses based on whether you received a can or not. It is guaranteed that there are at least $k$ cans of lemonade in total in the machine. In other words, $k \leq a_1 + a_2 + \ldots + a_n$.
Let's make a few simple observations about the optimal strategy of actions. First, if after pressing a certain button no can was obtained, there is no point in pressing that button again. Second, among the buttons that have not yet resulted in a failure, it is always advantageous to press the button that has been pressed the least number of times. This can be loosely justified by the fact that the fewer times a button has been pressed, the greater the chance that the next press will be successful, as we have no other information to distinguish these buttons from one another. From this, our strategy clearly emerges: let's sort the array, so that $a_1 \leq a_2 \leq \ldots \leq a_n$. First, we will press all buttons $a_1$ times. It is clear that all these presses will yield cans, and in total, we will collect $a_1 \cdot n$ cans. If $k \leq a_1 \cdot n$, no further presses are needed. However, if $k > a_1 \cdot n$, we need to make at least one more press. Since all buttons are still indistinguishable to us, it may happen that this press will be made on the button corresponding to $a_1$ and will be unsuccessful. Next, we will press all remaining buttons $a_2 - a_1$ times; these presses are also guaranteed to be successful. After that, again, if $k$ does not exceed the number of cans already collected, we finish; otherwise, we need to make at least one more press, which may hit the empty slot corresponding to $a_2$. And so on. In total, the answer to the problem will be $k + x$, where $x$ is the smallest number from $0$ to $n-1$ such that the following holds: $\displaystyle\sum_{i=0}^{x} (a_{i+1}-a_i) \cdot (n-i) \geq k$ (here we consider $a_0 = 0$). Overall complexity: $O(n \log n)$.
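The closed-form answer $k + x$ can be computed directly; `min_presses` is an assumed helper name for this sketch:

```python
def min_presses(a, k):
    a = sorted(a)
    n = len(a)
    prev = 0
    got = 0                    # cans guaranteed after x possible failures
    for x in range(n):
        got += (a[x] - prev) * (n - x)   # press the n - x live buttons
        prev = a[x]
        if got >= k:
            return k + x       # k successful presses plus x failed ones
    # unreachable: the statement guarantees k <= sum(a)
```

For example, $a = [2, 1, 3]$, $k = 4$: pressing every button once yields $3$ cans; one more press may fail, after which the remaining buttons guarantee the fourth can, so the answer is $4 + 1 = 5$.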
[ "binary search", "constructive algorithms", "sortings" ]
1,100
null
2025
A
Two Screens
There are two screens which can display sequences of uppercase Latin letters. Initially, both screens display nothing. In one second, you can do one of the following two actions: - choose a screen and an uppercase Latin letter, and append that letter to \textbf{the end} of the sequence displayed on that screen; - choose a screen and copy the sequence from it to the other screen, \textbf{overwriting the sequence that was displayed on the other screen}. You have to calculate the minimum number of seconds you have to spend so that the first screen displays the sequence $s$, and the second screen displays the sequence $t$.
Whenever we perform the second action (copy from one screen to the other screen), we overwrite what was written on the other screen. It means that we can consider the string on the other screen to be empty before the copying, and that we only need to copy at most once. So, the optimal sequence of operations should look as follows: add some characters to one of the screens, copy them to the other screen, and then finish both strings. We need to copy as many characters as possible. Since after copying, we can only append new characters to the end of the strings on the screens, the string we copy must be a prefix of both of the given strings. So, we need to find the length of the longest common prefix of the given strings. This can be done in linear time if we scan these strings until we find a pair of different characters in the same positions; however, the constraints were low enough so that you could do it slower (like, for example, iterate on the length of the prefix and check that both prefixes of this length are equal). Okay, now let $l$ be the length of the longest common prefix. Using $l+1$ operations, we write $2l$ characters in total; so the number of operations we need will be $|s|+|t|-2l+l+1 = |s|+|t|+1-l$. However, if the longest common prefix is empty, there's no need to copy anything, and we can write both strings in $|s|+|t|$ seconds.
[ "binary search", "greedy", "strings", "two pointers" ]
800
for _ in range(int(input())):
    s = input()
    t = input()
    lcp = 0
    n = len(s)
    m = len(t)
    for i in range(1, min(n, m) + 1):
        if s[:i] == t[:i]:
            lcp = i
    print(n + m - max(lcp, 1) + 1)
2025
B
Binomial Coefficients, Kind Of
Recently, akshiM met a task that needed binomial coefficients to solve. He wrote a code he usually does that looked like this: \begin{verbatim} for (int n = 0; n < N; n++) { // loop over n from 0 to N-1 (inclusive) C[n][0] = 1; C[n][n] = 1; for (int k = 1; k < n; k++) // loop over k from 1 to n-1 (inclusive) C[n][k] = C[n][k - 1] + C[n - 1][k - 1]; } \end{verbatim} Unfortunately, he made an error, since the right formula is the following: \begin{verbatim} C[n][k] = C[n - 1][k] + C[n - 1][k - 1] \end{verbatim} But his team member keblidA is interested in values that were produced using the wrong formula. Please help him to calculate these coefficients for $t$ various pairs $(n_i, k_i)$. Note that they should be calculated according to the first (wrong) formula. Since values $C[n_i][k_i]$ may be too large, print them modulo $10^9 + 7$.
In order to solve the task, just try to generate values and find a pattern. The pattern is easy: $C[n][k] = 2^k$ for all $k \in [0, n)$. The last step is to calculate $C[n][k]$ fast enough. For example, we can precalculate all powers of two in some array $p$ as $p[k] = 2 \cdot p[k - 1] \bmod (10^9 + 7)$ for all $k < 10^5$ and print the necessary values when asked. Proof: $C[n][k] = C[n][k - 1] + C[n - 1][k - 1] =$ $= C[n][k - 2] + 2 \cdot C[n - 1][k - 2] + C[n - 2][k - 2] =$ $= C[n][k - 3] + 3 \cdot C[n - 1][k - 3] + 3 \cdot C[n - 2][k - 3] + C[n - 3][k - 3] =$ $= \sum_{i = 0}^{j}{\binom{j}{i} \cdot C[n - i][k - j]} = \sum_{i = 0}^{k}{\binom{k}{i} \cdot C[n - i][0]} = \sum_{i = 0}^{k}{\binom{k}{i}} = 2^k$
[ "combinatorics", "dp", "math" ]
1,100
#include <bits/stdc++.h>
using namespace std;

const int MOD = int(1e9) + 7;

int main() {
    int t;
    cin >> t;
    vector<int> ks(t);
    // the n_i values are read first and then overwritten by the k_i values,
    // since only k_i matters
    for (int _ = 0; _ < 2; _++)
        for (int i = 0; i < t; i++)
            cin >> ks[i];
    vector<int> ans(1 + *max_element(ks.begin(), ks.end()), 1);
    for (int i = 1; i < (int)ans.size(); i++)
        ans[i] = (2LL * ans[i - 1]) % MOD;
    for (int k : ks)
        cout << ans[k] << '\n';
    return 0;
}
2025
C
New Game
There's a new game Monocarp wants to play. The game uses a deck of $n$ cards, where the $i$-th card has exactly one integer $a_i$ written on it. At the beginning of the game, on the first turn, Monocarp can take any card from the deck. During each subsequent turn, Monocarp can take exactly one card that has either the same number as on the card taken on the previous turn or a number that is one greater than the number on the card taken on the previous turn. In other words, if on the previous turn Monocarp took a card with the number $x$, then on the current turn he can take either a card with the number $x$ or a card with the number $x + 1$. Monocarp can take any card which meets that condition, regardless of its position in the deck. After Monocarp takes a card on the current turn, it is removed from the deck. According to the rules of the game, the number of distinct numbers written on the cards that Monocarp has taken must not exceed $k$. If, after a turn, Monocarp cannot take a card without violating the described rules, the game ends. Your task is to determine the maximum number of cards that Monocarp can take from the deck during the game, given that on the first turn he can take any card from the deck.
Let's fix the value of the first selected card $x$. Then it is optimal to take the cards as follows: take all cards with the number $x$, then all cards with the number $x + 1$, ..., all cards with the number $x + k - 1$. If any of the intermediate numbers are not present in the deck, we stop immediately. Let's sort the array $a$. Then the answer can be found as follows. Start at some position $i$ and move to the right from it as long as the following conditions are met: the difference between the next number and the current one does not exceed $1$ (otherwise, some number between them occurs $0$ times) and the difference between the next number and $a_i$ is strictly less than $k$ (otherwise, we will take more than $k$ different numbers). It is easy to notice that as $i$ increases, the position we reach through this process also increases. Therefore, we can use two pointers to solve the problem. We will maintain a pointer to this position and update the answer at each $i$ with the distance between $i$ and the pointer. Overall complexity: $O(n \log n)$ for each test case.
[ "binary search", "brute force", "greedy", "implementation", "sortings", "two pointers" ]
1,300
for _ in range(int(input())):
    n, k = map(int, input().split())
    a = list(map(int, input().split()))
    a.sort()
    ans = 0
    j = 0
    for i in range(n):
        j = max(i, j)
        while j + 1 < n and a[j + 1] - a[j] <= 1 and a[j + 1] - a[i] < k:
            j += 1
        ans = max(ans, j - i + 1)
    print(ans)
2025
D
Attribute Checks
Imagine a game where you play as a character that has two attributes: "Strength" and "Intelligence", both at zero level initially. During the game, you'll acquire $m$ attribute points that allow you to increase your attribute levels — one point will increase one of the attributes by one level. But sometimes, you'll encounter so-called "Attribute Checks": if your corresponding attribute is high enough, you'll pass it; otherwise, you'll fail it. Spending some time, you finally prepared a list which contains records of all points you got and all checks you've met. And now you're wondering: what is the maximum number of attribute checks you can pass in a single run if you spend points wisely? Note that you can't change the order of records.
For the start, let's introduce a slow but correct solution. Let $d[i][I]$ be the answer to the task if we processed the first $i$ records and the current Intelligence level is $I$. If we know Intelligence level $I$, then we also know the current Strength level $S = P - I$, where $P$ is just the total number of points in the first $i$ records. Since we want to use dp, let's discuss transitions: If the last record $r_i = 0$, then it was a point, and there are only two options: we either raised Intelligence, so the last state was $d[i - 1][I - 1]$; or raised Strength, coming from state $d[i - 1][I]$. In other words, we can calculate $d[i][I] = \max(d[i - 1][I - 1], d[i - 1][I])$. If the last record $r_i > 0$, then it's an Intelligence check, and it doesn't affect the state, only its answer. For all $I \ge r_i$, $d[i][I] = d[i - 1][I] + 1$; otherwise, it's $d[i][I] = d[i - 1][I]$. If $r_i < 0$, then it's a Strength check and affects the values in a similar way. For all $I \le P + r_i$, $d[i][I] = d[i - 1][I] + 1$; otherwise, it's also $d[i][I] = d[i - 1][I]$. OK, we've got a solution with $O(nm)$ time and memory, but we can speed it up. Note that the first case appears only $O(m)$ times, while the second and third cases are just range additions. So, if we can do addition in $O(1)$, then processing the first case in linear time is enough to achieve $O(m^2 + n)$ complexity. How to process range additions in $O(1)$ time? Let's use some difference array $a$ to do it lazily. Instead of adding some value $v$ to the segment $[l, r]$, we'll only add value $v$ to $a[l]$ and $-v$ to $a[r + 1]$. And when we meet $r_i = 0$, we'll "push" all accumulated operations at once. The total value you need to add to some position $i$ is $\sum_{j=0}^{i}{a[j]}$. So, we can calculate all of them in $O(m)$ just going from left to right, maintaining the prefix sum. 
The last question is reducing the space complexity. As usual, you can store only the last two layers: the current layer $i$ and the previous layer $i - 1$. But actually, you can store only one layer and update it in-place. Let's store only the last layer of dp as $d[I]$. Attribute checks don't change array $d$ at all. In case $r_i = 0$, you first push all accumulated data from $a$ to $d$, and then you need to recalculate the values in $d$. But since the formula is $d[I] = \max(d[I], d[I - 1])$, you can just iterate over $I$ in descending order and everything works! In total, we have a solution with $O(m^2 + n)$ time and $O(m)$ space complexity.
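The lazy range-addition trick described above can also be sketched in isolation (a minimal sketch; the helper name and the example values are illustrative):

```python
# Minimal sketch of the lazy difference-array trick: O(1) range additions,
# applied all at once with one prefix-sum pass.
def make_range_adder(size):
    diff = [0] * (size + 1)

    def add(l, r, v):
        # add v to every position in [l, r] in O(1)
        if l <= r:
            diff[l] += v
            diff[r + 1] -= v

    def push(target):
        # apply all pending additions to target in O(size)
        acc = 0
        for i in range(size):
            acc += diff[i]
            target[i] += acc
        for i in range(size + 1):
            diff[i] = 0

    return add, push

d = [0] * 5
add, push = make_range_adder(5)
add(1, 3, 2)   # two pending range additions
add(0, 2, 1)
push(d)        # d becomes [1, 3, 3, 2, 0]
```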
[ "brute force", "data structures", "dp", "implementation", "math", "two pointers" ]
1,800
n, m = map(int, input().split())
rs = map(int, input().split())
d = [-int(1e9)] * (m + 1)
d[0] = 0
add = [0] * (m + 2)

def addSegment(l, r):
    if l <= r:
        add[l] += 1
        add[r + 1] -= 1

def pushAll():
    sum = 0
    for i in range(m + 1):
        sum += add[i]
        d[i] += sum
    for i in range(m + 2):
        add[i] = 0

cntPoints = 0
for r in rs:
    if r == 0:
        pushAll()
        for i in range(m, 0, -1):
            d[i] = max(d[i], d[i - 1])
        cntPoints += 1
    else:
        lf, rg = 0, 0
        if r > 0:
            lf = min(r, m + 1)
            rg = m
        else:
            lf = 0
            rg = max(-1, cntPoints + r)
        addSegment(lf, rg)
pushAll()
print(max(d))
2025
E
Card Game
In the most popular card game in Berland, a deck of $n \times m$ cards is used. Each card has two parameters: suit and rank. Suits in the game are numbered from $1$ to $n$, and ranks are numbered from $1$ to $m$. There is exactly one card in the deck for each combination of suit and rank. A card with suit $a$ and rank $b$ can beat a card with suit $c$ and rank $d$ in one of two cases: - $a = 1$, $c \ne 1$ (a card of suit $1$ can beat a card of any other suit); - $a = c$, $b > d$ (a card can beat any other card of the same suit but of a lower rank). Two players play the game. Before the game starts, they receive exactly half of the deck each. The first player wins if for every card of the second player, he can choose his card that can beat it, and there is no card that is chosen twice (i. e. there exists a matching of the first player's cards with the second player's cards such that in each pair the first player's card beats the second player's card). Otherwise, the second player wins. Your task is to calculate the number of ways to distribute the cards so that the first player wins. Two ways are considered different if there exists a card such that in one way it belongs to the first player and in the other way it belongs to the second player. The number of ways can be very large, so print it modulo $998244353$.
Suppose we're solving the problem for one suit. Consider a distribution of cards between two players; how to check if at least one matching between cards of the first player and cards of the second player exists? Let's order the cards from the highest rank to the lowest rank and go through them in that order. If we get a card of the first player, we can add it to the "pool" of cards to be matched; if we get a card belonging to the second player, we match it with one of the cards from the "pool" (if there are none - there is no valid matching). So, if there exists a prefix of this order where the number of cards of the second player exceeds the number of cards belonging to the first player, it is not a valid distribution. Does this sound familiar? Let's say that a card belonging to the first player is represented by an opening bracket, and a card belonging to the second player is represented by a closing bracket. Then, if we need to solve the problem for just one suit, the distribution must be a regular bracket sequence. So, in this case, we just need to count the number of regular bracket sequences. However, if there are at least $2$ suits, there might be "extra" cards of the $1$-st suit belonging to the first player, which we can match with "extra" cards of other suits belonging to the second player. To resolve this issue, we can use the following dynamic programming: let $dp_{i,j}$ be the number of ways to distribute the cards of the first $i$ suits so that there are $j$ extra cards of the $1$-st suit belonging to the $1$-st player. To calculate $dp_{1,j}$, we have to count the number of bracket sequences such that the balance on each prefix is at least $0$, and the balance of the whole sequence is exactly $j$. 
In my opinion, the easiest way to do this is to run another dynamic programming (something like "$w_{x,y}$ is the number of sequences with $x$ elements and balance $y$"); however, you can try solving it in a combinatorial way similar to how Catalan numbers are calculated. What about transitions from $dp_{i,j}$ to $dp_{i+1, j'}$? Let's iterate on $k$ - the number of "extra" cards we will use to match the cards of the $(i+1)$-th suit belonging to the second player, so we transition from $dp_{i,j}$ to $dp_{i+1,j-k}$. Now we need to count the ways to distribute $m$ cards of the same suit so that the second player receives $k$ cards more than the first player, and all cards of the first player can be matched. Consider that we ordered the cards from the lowest rank to the highest rank. Then, on every prefix, the number of cards belonging to the first player should not exceed the number of cards belonging to the second player (otherwise we won't be able to match all cards belonging to the first player), and in total, the number of cards belonging to the second player should be greater by $k$. So this is exactly the number of bracket sequences with balance $\ge 0$ on every prefix and balance equal to $k$ in total (and we have already calculated that)! So, the solution consists of two steps. First, for every $k \in [0, m]$, we calculate the number of bracket sequences with non-negative balance on each prefix and total balance equal to $k$; then we run dynamic programming of the form "$dp_{i,j}$ is the number of ways to distribute the cards of the first $i$ suits so that there are $j$ extra cards of the $1$-st suit belonging to the $1$-st player". The most time-consuming part of the solution is this dynamic programming, and it works in $O(nm^2)$, so the whole solution works in $O(nm^2)$.
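The auxiliary dynamic programming "$w_{x,y}$ = number of sequences with $x$ elements and balance $y$" mentioned above can be sketched as follows (a minimal sketch; the function name is illustrative):

```python
MOD = 998244353

def balance_counts(m):
    # w[y] = number of +1/-1 sequences of the current length with all
    # prefix balances >= 0 and final balance exactly y
    w = [0] * (m + 1)
    w[0] = 1
    for _ in range(m):
        nw = [0] * (m + 1)
        for y in range(m + 1):
            if not w[y]:
                continue
            if y + 1 <= m:
                nw[y + 1] = (nw[y + 1] + w[y]) % MOD  # opening bracket
            if y:
                nw[y - 1] = (nw[y - 1] + w[y]) % MOD  # closing bracket
        w = nw
    return w

# for m = 4: balance 0 -> 2 (the Catalan number C_2), balance 2 -> 3, balance 4 -> 1
```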
[ "combinatorics", "dp", "fft", "greedy", "math" ]
2,200
#include <bits/stdc++.h>
using namespace std;

const int MOD = 998244353;

int add(int x, int y) {
    x += y;
    if (x >= MOD) x -= MOD;
    return x;
}

int mul(int x, int y) {
    return x * 1LL * y % MOD;
}

int main() {
    int n, m;
    cin >> n >> m;
    vector<vector<int>> ways(m + 1, vector<int>(m + 1));
    ways[0][0] = 1;
    for (int i = 0; i < m; ++i) {
        for (int j = 0; j <= i; ++j) {
            ways[i + 1][j + 1] = add(ways[i + 1][j + 1], ways[i][j]);
            if (j) ways[i + 1][j - 1] = add(ways[i + 1][j - 1], ways[i][j]);
        }
    }
    vector<vector<int>> dp(n + 1, vector<int>(m + 1));
    dp[0][0] = 1;
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j <= m; ++j) {
            for (int k = 0; k <= m; ++k) {
                int nj = i ? j - k : j + k;
                if (0 <= nj && nj <= m) {
                    dp[i + 1][nj] = add(dp[i + 1][nj], mul(dp[i][j], ways[m][k]));
                }
            }
        }
    }
    cout << dp[n][0] << '\n';
}
2025
F
Choose Your Queries
You are given an array $a$, consisting of $n$ integers (numbered from $1$ to $n$). Initially, they are all zeroes. You have to process $q$ queries. The $i$-th query consists of two different integers $x_i$ and $y_i$. During the $i$-th query, you have to choose an integer $p$ (which is either $x_i$ or $y_i$) and an integer $d$ (which is either $1$ or $-1$), and assign $a_p = a_p + d$. After each query, every element of $a$ should be a non-negative integer. Process all queries in such a way that the sum of all elements of $a$ after the last query is the minimum possible.
We will use the classical method for solving maximization/minimization problems: we will come up with an estimate for the answer and try to achieve it constructively. The first idea. Since we have $n$ objects connected by binary relations, we can model the problem as a graph. Let the $n$ elements of the array be the vertices, and the $q$ queries be the edges. For each query, we would like to choose the direction of the edge (let the edge be directed towards the vertex to which the operation is applied) and the sign of the operation. It would be great if, in each connected component, we could choose an equal number of pluses and minuses so that the sum equals zero. Or, if the number of edges in the component is odd, to make the sum equal to one. Obviously, it is impossible to achieve less than this. It turns out that this is always possible. There is a well-known graph problem: to split the edges of an undirected connected graph into pairs of edges sharing a common endpoint. We will reduce our problem to this one. If we split the graph into pairs, we will construct the answer from them as follows: we will direct the edges of each pair towards any common vertex, write + on the edge with the smaller number (query index), and write - on the edge with the larger number. This construction guarantees that each pair will not add anything to the sum, and each element will be non-negative after each query. The problem of splitting into pairs can be solved using the following algorithm. We will perform a DFS from any vertex and construct a DFS tree. Now we will divide the edges into pairs in the order from the leaves to the root. Let us stand at some vertex $v$. First, run the algorithm recursively for the children of $v$. When we finish processing the children, we will consider the following edges from the current vertex $v$: tree edges to the children, the edge to the parent, and back edges with the lower endpoint at $v$. 
Now, we form as many pairs as possible from the edges to the children and the back edges. We will remove all such edges from the graph. If their total number was odd, then one edge remains unpaired. Form a pair from it and the edge to the parent, and again remove them from the graph. It turns out that when we exit the recursive call, either there will be no edges left in the subtree at all, or there will be one tree edge that will be processed by the parent later. If the number of edges in the component is odd, then at the end, one edge without a pair will remain at the root. I would also like to mention the following implementation details. In the adjacency list, it is better to store the indices of the queries rather than the vertices, and then derive the vertex numbers from them. This will allow for careful handling of multiple edges. Removing edges from the graph directly is also not very convenient. The most convenient way to work around this, in my opinion, is to maintain an array that marks that the edge with such a query number is now removed. Alternatively, for back edges, we can check whether $v$ is the upper or lower endpoint, as well as return a flag from the depth-first search indicating "is the edge to the child removed". Overall complexity: $O(n + q)$.
[ "constructive algorithms", "dfs and similar", "dp", "graphs", "greedy", "trees" ]
2,700
#include<bits/stdc++.h>
using namespace std;

const int N = 300043;

string choice = "xy";
string sign = "+-";
int qs[N][2];
string ans[N];
int n, q;
vector<int> g[N];
int color[N];

void pair_queries(int q1, int q2) {
    if(q1 > q2) swap(q1, q2);
    for(int i = 0; i < 2; i++)
        for(int j = 0; j < 2; j++)
            if(qs[q1][i] == qs[q2][j]) {
                ans[q1] = { choice[i], sign[0] };
                ans[q2] = { choice[j], sign[1] };
                return;
            }
}

bool dfs(int v, int pe = -1) {
    // return true if parent edge still exists
    color[v] = 1;
    vector<int> edge_nums;
    for(auto e : g[v]) {
        int u = v ^ qs[e][0] ^ qs[e][1];
        if(color[u] == 1) continue;
        if(color[u] == 0) {
            if(dfs(u, e)) edge_nums.push_back(e);
        }
        else
            edge_nums.push_back(e);
    }
    bool res = true;
    if(edge_nums.size() % 2 != 0) {
        if(pe != -1) edge_nums.push_back(pe);
        else edge_nums.pop_back();
        res = false;
    }
    for(int i = 0; i < edge_nums.size(); i += 2)
        pair_queries(edge_nums[i], edge_nums[i + 1]);
    color[v] = 2;
    return res;
}

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    cin >> n >> q;
    for(int i = 0; i < q; i++) {
        cin >> qs[i][0] >> qs[i][1];
        --qs[i][0]; --qs[i][1];
        g[qs[i][0]].push_back(i);
        g[qs[i][1]].push_back(i);
        ans[i] = "x+";
    }
    for(int i = 0; i < n; i++)
        if(color[i] == 0) dfs(i);
    for(int i = 0; i < q; i++)
        cout << ans[i] << endl;
}
2025
G
Variable Damage
Monocarp is gathering an army to fight a dragon in a videogame. The army consists of two parts: the heroes and the defensive artifacts. Each hero has one parameter — his health. Each defensive artifact also has one parameter — its durability. Before the battle begins, Monocarp distributes artifacts to the heroes so that each hero receives at most one artifact. The battle consists of rounds that proceed as follows: - first, the dragon deals damage equal to $\frac{1}{a + b}$ (\textbf{a real number without rounding}) to each hero, where $a$ is the number of heroes alive and $b$ is the number of active artifacts; - after that, all heroes with health $0$ or less die; - finally, some artifacts are deactivated. An artifact with durability $x$ is deactivated when one of the following occurs: the hero holding the artifact either dies or receives $x$ total damage (from the start of the battle). If an artifact is not held by any hero, it is inactive from the beginning of the battle. The battle ends when there are no heroes left alive. Initially, the army is empty. There are $q$ queries: add a hero with health $x$ or an artifact with durability $y$ to the army. After each query, determine the maximum number of rounds that Monocarp can survive if he distributes the artifacts optimally.
Let's start unraveling the solution from the end. Suppose we currently have $n$ heroes with health $a_1, a_2, \dots, a_n$ and $m$ artifacts with durability $b_1, b_2, \dots, b_m$. Let's assume we have already distributed the artifacts to the heroes, forming pairs $(a_i, b_i)$. If $m > n$, we will discard the excess artifacts with the lowest durability. If $m < n$, we will add artifacts with durability $0$ (they will deactivate at the start). How many rounds will the battle last? Notice the following: a hero with health $a$ and an artifact with durability $b$ can be replaced with two heroes with health $a$ and $\min(a, b)$, respectively, and the answer will not change. Thus, it is sufficient to analyze the case when there are no artifacts, and there are only heroes with health $a_1, \min(a_1, b_1), a_2, \min(a_2, b_2), \dots, a_n, \min(a_n, b_n)$. The idea is as follows: in each round, the heroes take exactly $1$ point of damage in total. Therefore, the battle will last $\displaystyle \sum_{i=1}^n a_i + \sum_{i=1}^n \min(a_i, b_i)$ rounds. The first sum is easy to maintain; let's focus on the second one. Next, we need to learn how to distribute the artifacts in such a way that maximizes this sum. Intuitively, it seems that the healthiest heroes should receive the most durable artifacts. That is, we should sort the heroes in descending order of health and the artifacts in descending order of durability. We will show that this is indeed the case. Suppose there are two heroes with health $a_1$ and $a_2$ ($a_1 \ge a_2$). They receive artifacts with durability $b_1$ and $b_2$ ($b_1 \ge b_2$). We will show that it is optimal to give the first artifact to the first hero and the second one to the second hero. If there is at least one artifact with durability not greater than $a_2$ (that is, the minimum will always be equal to this artifact's durability), it is always optimal to give it to the second hero and the other artifact to the first hero. 
If there is at least one artifact with durability not less than $a_1$ (that is, the minimum will always be equal to the hero's health), it is always optimal to give it to the first hero. Otherwise, the durabilities of the artifacts lie between the health values of the heroes, meaning that the minimum for the first hero will always be equal to his artifact, and the minimum for the second hero will be his health. Therefore, it is again optimal to give the larger artifact to the first hero. It follows that if for a pair of heroes the condition that the larger hero has the larger artifact is not met, we can swap their artifacts, and the answer will not decrease. Thus, the task is as follows. After each query, we need to maintain a sorted sequence of heroes, artifacts, and the sum of $\min(a_i, b_i)$. This sounds quite complicated because there are a lot of changes with each query. Some suffix of one of the arrays shifts one position to the right after inserting a new element, affecting many terms. Let's consider an idea of a sweep line instead. We will combine the heroes and artifacts into one array and sort it in descending order. For simplicity, let's assume all durability and health values are distinct integers. Then we will iterate over this array while maintaining the number of heroes who have not yet received an artifact and the number of artifacts that have not been assigned to a hero. If we encounter an artifact and there are previously encountered heroes who have not received an artifact, we will give this artifact to any of them. Since all these heroes have health greater than the durability of this artifact, the minimum will always be equal to the durability of the artifact. Thus, the sum will increase by the durability of the artifact. Otherwise, we will remember that there is one more free artifact. The same goes for a hero. 
If we encounter a hero and there are previously encountered artifacts that have not been assigned to a hero, we will give any of these artifacts to the hero. The sum will then increase by the hero's health. It can be shown that this process of assigning artifacts yields the same result as sorting. Note that at any moment in time, there are either no free heroes or no free artifacts. Thus, it is sufficient to maintain a "balance": the difference between the number of heroes and artifacts on the prefix. If the balance is positive, and we encounter an artifact, we add its durability to the answer. If the balance is negative, and we encounter a hero, we add his health to the answer. Note that equal elements do not break this algorithm. For simplicity, I suggest sorting not just the values but pairs of values and query indices to maintain a strict order. How does this reduction help? It turns out that it is now possible to use square root decomposition to solve the problem. Read all queries in advance and sort them by the value $(v_i, i)$ in descending order. We will divide all queries into blocks of size $B$. Initially, all queries are deactivated. When processing the next query in the input order, we will activate it within the block and recalculate the entire block. What should we store for each block? Notice that all balance checks for the terms depend on two values: the balance at the start of the block and the balance before this term within the block. Therefore, for the block, we can maintain the following values: the total balance in the block, as well as the total contribution of the terms from this block for each balance at the start of the block. Obviously, the balance at the start of the block can range from $-q$ to $q$. 
However, if the absolute value of the balance exceeds $B$, the contribution of the block will be the same as if this balance were limited by $-B$ or $B$, respectively (since within the block it will either always be positive or always negative). Thus, it is sufficient to calculate the answer only for balances from $-B$ to $B$. Knowing these values for each block, the answer can be calculated in $O(\frac{q}{B})$. We will go through the blocks while maintaining the current balance. We will add the block's contribution for the current balance to the answer and add the total balance of the block to the current balance. We still need to learn how to quickly recalculate the answers for all balances within the block. We will iterate over the elements within the block while maintaining the current balance inside the block. Let the balance be $k$. Then if the current element is an artifact, its durability will be added to the sum if the balance at the start of the block is at least $-k + 1$. Similarly, if the current element is a hero, his health will be added if the balance at the start of the block is at most $-k - 1$. Thus, we need to add a value over some range of balances: either from $-k + 1$ to $B$, or from $-B$ to $-k - 1$. This can be done using a difference array. That is, we can make two updates in $O(1)$ for each element of the block, and then compute the prefix sum once in $O(B)$. Therefore, for each query, we can update the structure in $O(B)$, and recalculate the sum in $O(\frac{q}{B})$. Hence, it is optimal to choose $B = \sqrt{q}$. Overall complexity: $O(q \sqrt{q})$.
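For a fixed set of heroes and artifacts, the descending sweep with a balance counter can be sketched on its own (an illustrative sketch assuming distinct values; the full solution maintains this quantity incrementally with square root decomposition):

```python
# Sketch of the descending sweep: computes the optimal sum of min(a_i, b_i)
# for fixed multisets of hero healths and artifact durabilities.
def optimal_min_sum(heroes, artifacts):
    events = [(h, 0) for h in heroes] + [(d, 1) for d in artifacts]
    events.sort(reverse=True)          # descending by value (assumed distinct)
    balance = 0                        # heroes minus artifacts seen so far
    total = 0
    for value, is_artifact in events:
        if is_artifact:
            if balance > 0:            # a larger hero is waiting
                total += value
            balance -= 1
        else:
            if balance < 0:            # a larger artifact is waiting
                total += value
            balance += 1
    return total

# matches sorting both sides in descending order and pairing them up:
# heroes [10, 4], artifacts [6, 3] -> min(10, 6) + min(4, 3) = 9
```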
[ "data structures", "flows" ]
3,000
#include <bits/stdc++.h>

#define forn(i, n) for (int i = 0; i < int(n); i++)

using namespace std;

struct query {
    int t, v;
};

int main() {
    cin.tie(0);
    ios::sync_with_stdio(false);
    int m;
    cin >> m;
    vector<query> q(m);
    forn(i, m) cin >> q[i].t >> q[i].v;
    vector<pair<int, int>> xs;
    forn(i, m) xs.push_back({q[i].v, i});
    sort(xs.rbegin(), xs.rend());
    // q[i].v becomes the rank of (value, index) in descending order
    forn(i, m) q[i].v = xs.rend() - lower_bound(xs.rbegin(), xs.rend(), make_pair(q[i].v, i)) - 1;
    const int p = sqrt(m + 10);
    const int siz = (m + p - 1) / p;
    vector<int> tp(m);
    vector<int> val(m);
    vector<vector<long long>> dp(p, vector<long long>(2 * siz + 1));
    vector<int> blbal(p);
    auto upd = [&](const query &q) {
        tp[q.v] = q.t;
        val[q.v] = xs[q.v].first;
        blbal[q.v / siz] += q.t == 1 ? 1 : -1;
    };
    auto recalc = [&](int b) {
        dp[b].assign(2 * siz + 1, 0);
        int bal = 0;
        for (int i = b * siz; i < m && i < (b + 1) * siz; ++i) {
            if (tp[i] == 1) {             // hero
                dp[b][0] += val[i];       // unconditional part: sum of a_i
                dp[b][0] += val[i];       // min part, counted for start balances <= -bal - 1
                dp[b][-bal + siz] -= val[i];
                ++bal;
            } else if (tp[i] == 2) {      // artifact: counted for start balances >= -bal + 1
                dp[b][-bal + 1 + siz] += val[i];
                --bal;
            }
        }
        forn(i, 2 * siz) dp[b][i + 1] += dp[b][i]; // turn the difference array into prefix sums
    };
    auto get = [&](int b, int bal) {
        bal += siz;
        if (bal < 0) return dp[b][0];
        if (bal >= 2 * siz + 1) return dp[b].back();
        return dp[b][bal];
    };
    for (auto it : q) {
        upd(it);
        recalc(it.v / siz);
        int bal = 0;
        long long ans = 0;
        for (int i = 0; i * siz < m; ++i) {
            ans += get(i, bal);
            bal += blbal[i];
        }
        cout << ans << '\n';
    }
    return 0;
}
2026
A
Perpendicular Segments
You are given a coordinate plane and three integers $X$, $Y$, and $K$. Find two line segments $AB$ and $CD$ such that - the coordinates of points $A$, $B$, $C$, and $D$ are integers; - $0 \le A_x, B_x, C_x, D_x \le X$ and $0 \le A_y, B_y, C_y, D_y \le Y$; - the length of segment $AB$ is at least $K$; - the length of segment $CD$ is at least $K$; - segments $AB$ and $CD$ are perpendicular: if you draw lines that contain $AB$ and $CD$, they will cross at a right angle. Note that it's \textbf{not} necessary for segments to intersect. Segments are perpendicular as long as the lines they induce are perpendicular.
Let's look at all segments with a fixed angle between them and the X-axis. Let's take the shortest one with integer coordinates and length at least $K$ as $AB$. Let's say that the bounding box of $AB$ has width $w$ and height $h$. It's easy to see that the bounding box of segment $CD$ will have width at least $h$ and height at least $w$, since the shortest segment $CD$ is just the segment $AB$ rotated by ninety degrees. So, in order to fit both segments $AB$ and $CD$, both $h$ and $w$ should be at most $M = \min(X, Y)$. But if both $w \le M$ and $h \le M$, then what is the longest segment that can fit in such a bounding box? The answer is to set $h = w = M$; then the length $|AB| \le M \sqrt{2}$. Thus, we found out that $K$ must not exceed $M \sqrt{2}$, but if $K \le M \sqrt{2}$, then we can always take the following two segments: $(0, 0) - (M, M)$ and $(0, M) - (M, 0)$ where $M = \min(X, Y)$. They are perpendicular, fit in the allowed rectangle, and have length exactly $M \sqrt{2}$.
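A quick sanity check of this construction (illustrative only, not the contest I/O): the two diagonals of the $M \times M$ square are perpendicular and both have squared length $2M^2$.

```python
# Sanity check of the construction: diagonals of the M x M square.
def construct(X, Y):
    M = min(X, Y)
    return (0, 0), (M, M), (0, M), (M, 0)  # A, B, C, D

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

A, B, C, D = construct(5, 3)  # M = 3
AB = (B[0] - A[0], B[1] - A[1])
CD = (D[0] - C[0], D[1] - C[1])
print(dot(AB, CD), dot(AB, AB))  # 0 (perpendicular) and 18 = 2 * 3^2
```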
[ "constructive algorithms", "geometry", "greedy", "math" ]
900
#include<bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int X, Y, K;
        cin >> X >> Y >> K;
        int M = min(X, Y);
        cout << "0 0 " << M << " " << M << endl;
        cout << "0 " << M << " " << M << " 0" << endl;
    }
}
2026
B
Black Cells
You are given a strip divided into cells, numbered from left to right from $0$ to $10^{18}$. Initially, all cells are white. You can perform the following operation: choose two \textbf{white} cells $i$ and $j$, such that $i \ne j$ and $|i - j| \le k$, and paint them black. A list $a$ is given. All cells from this list must be painted black. Additionally, \textbf{at most one} cell that is not in this list can also be painted black. Your task is to determine the minimum value of $k$ for which this is possible.
First, let's consider the case when $n$ is even. It is not difficult to notice that to minimize the value of $k$, the cells have to be painted in the following pairs: $(a_1, a_2)$, $(a_3, a_4)$, ..., $(a_{n-1}, a_n)$. Then the answer is equal to the maximum of the distances between the cells in the pairs. For odd $n$, it is necessary to add one more cell (so that cells can be divided into pairs). If we add a new cell from the segment $(a_i, a_{i+1})$, it's paired either with the $i$-th cell or with the $(i+1)$-th cell (depending on the parity of $i$). Therefore, to minimize the value of $k$, the new cell should be chosen among $(a_i+1)$ and $(a_{i+1}-1)$ (in fact, only one of the two options is needed, but we can keep both). Note that one of these options may already be an existing cell, in which case it does not need to be considered. Thus, there are $O(n)$ options for the new cell, and for each of them, we can calculate the answer in $O(n)$ or $O(n\log{n})$, and take the minimum among them. Thus, we obtained a solution in $O(n^2)$ (or $O(n^2\log{n})$ depending on the implementation). There is also a faster solution in $O(n)$, but it was not required for this problem.
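The candidate search described above can be sketched as follows (an illustrative sketch in the $O(n^2)$ spirit; `a` is assumed sorted and duplicate-free, and cells are non-negative):

```python
# Sketch of the candidate search: for odd n, try inserting a[i] - 1 or
# a[i] + 1, then pair consecutive cells and take the largest gap in a pair.
def min_k(a):
    def cost(cells):
        cells = sorted(cells)
        if len(set(cells)) < len(cells):
            return None  # the candidate coincides with an existing cell
        return max(cells[i + 1] - cells[i] for i in range(0, len(cells), 2))
    if len(a) % 2 == 0:
        return cost(a)
    best = None
    for i in range(len(a)):
        for x in (a[i] - 1, a[i] + 1):
            if x < 0:
                continue  # cells are numbered from 0
            c = cost(a + [x])
            if c is not None and (best is None or c < best):
                best = c
    return best
```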
[ "binary search", "brute force", "constructive algorithms", "greedy" ]
1,300
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<long long> a(n);
        for (auto& x : a) cin >> x;
        long long ans = 1e18;
        auto upd = [&](vector<long long> a) {
            sort(a.begin(), a.end());
            for (int i = 1; i < (int)a.size(); ++i)
                if (a[i - 1] == a[i]) return;
            long long res = 0;
            for (int i = 0; i < (int)a.size(); i += 2)
                res = max(res, a[i + 1] - a[i]);
            ans = min(ans, res);
        };
        if (n % 2 == 0) {
            upd(a);
            cout << ans << '\n';
            continue;
        }
        for (int i = 0; i < n; ++i) {
            for (int x : {-1, 1}) {
                a.push_back(a[i] + x);
                upd(a);
                a.pop_back();
            }
        }
        cout << ans << '\n';
    }
}
2026
C
Action Figures
There is a shop that sells action figures near Monocarp's house. A new set of action figures will be released shortly; this set contains $n$ figures, the $i$-th figure costs $i$ coins and is available for purchase from day $i$ to day $n$. For each of the $n$ days, Monocarp knows whether he can visit the shop. Every time Monocarp visits the shop, he can buy any number of action figures which are sold in the shop (of course, he cannot buy an action figure that is not yet available for purchase). If Monocarp buys \textbf{at least two} figures during the same day, he gets a discount equal to the cost of \textbf{the most expensive} figure he buys (in other words, he gets the most expensive of the figures he buys for free). Monocarp wants to buy \textbf{exactly} one $1$-st figure, one $2$-nd figure, ..., one $n$-th figure from the set. He cannot buy the same figure twice. What is the minimum amount of money he has to spend?
Consider the following solution: we iterate on the number of figures we get for free (let this number be $k$), and for each value of $k$, we try to check if it is possible to get $k$ figures for free, and if it is, find the best figures which we get for free. For a fixed value of $k$, it is optimal to visit the shop exactly $k$ times: if we visit the shop more than $k$ times, then during some visits, we buy only one figure - instead of that, we can buy figures from these visits during the last day, so there are no visits during which we buy only one figure. It is quite obvious that if we want to visit the shop $k$ times, we can always do it during the last $k$ days with $s_i=1$. Let the last $k$ days with $s_i=1$ be $x_1, x_2, \dots, x_k$ (from right to left, so $x_1 > x_2 > \dots > x_k$). It is impossible to get a total discount of more than $(x_1 + x_2 + \dots + x_k)$ if we visit the shop only $k$ times, since when we visit the shop on day $i$, the maximum discount we can get during that day is $i$. Now suppose we can't get the figures $\{x_1, x_2, \dots, x_k\}$ for free, but we can get some other set of figures $\{y_1, y_2, \dots, y_k\}$ ($y_1 > y_2 > \dots > y_k$) for free. Let's show that this is impossible. Consider the first $j$ such that $x_j \ne y_j$: if $y_j > x_j$, it means that on some suffix of days (from day $y_j$ to day $n$), we visit the shop $j$ times. But since $x_1, x_2, \dots, x_{j-1}, x_j$ are the last $j$ days when we visit the shop, then we can't visit the shop $j$ times from day $y_j$ to day $n$, so this is impossible; otherwise, if $y_j < x_j$, it means that during the day $x_j$, we get some figure $f$ for free, but it is not $x_j$. Let's get the figure $x_j$ for free during that day instead (swap the figures $f$ and $x_j$). Using a finite number of such transformations, we can show that we can get the figures $\{x_1, x_2, \dots, x_k\}$ for free. Now, for a fixed value of $k$, we know which figures we should get for free. 
And if we increase the value of $k$, our total discount increases as well. Let's find the greatest possible $k$ with binary search, and we will get a solution working in $O(n \log n)$. The only thing that's left is checking that some value of $k$ is achievable. To do it, we can mark the $k$ figures we try to get for free, and simulate the process, iterating the figures from left to right. If on some prefix, the number of figures we want to get for free is greater than the number of figures we pay for, then it is impossible, since we can't find a "match" for every figure we want to get for free.
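The feasibility check used inside the binary search can be sketched as a standalone predicate (a minimal sketch; the name `feasible` is mine, mirroring the `can` function in the solution code):

```cpp
#include <cassert>
#include <string>
#include <vector>
using namespace std;

// Check whether k figures can be taken for free: greedily mark the last k
// days with s[i] == '1' as the free figures (one per visit), then scan left
// to right and verify every free figure can be paired with an earlier
// (cheaper) figure that we pay for during the same visits.
bool feasible(const string& s, int k) {
    int n = s.size();
    vector<bool> freeFig(n, false);
    for (int i = n - 1; i >= 0 && k > 0; i--)
        if (s[i] == '1') { freeFig[i] = true; k--; }
    int paid = 0; // figures seen so far that we pay for
    for (int i = 0; i < n; i++) {
        if (freeFig[i]) { if (--paid < 0) return false; }
        else paid++;
    }
    return true;
}
```

Binary searching over this predicate assumes feasibility is monotone in $k$, which the exchange argument above suggests.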
[ "binary search", "brute force", "constructive algorithms", "data structures", "greedy", "implementation" ]
1,500
#include<bits/stdc++.h> using namespace std; const int N = 400043; char buf[N]; bool can(const string& s, int k) { int n = s.size(); vector<int> used(n); for(int i = n - 1; i >= 0; i--) if(k > 0 && s[i] == '1') { used[i] = 1; k--; } int cur = 0; for(int i = 0; i < n; i++) if(used[i]) { cur--; if(cur < 0) return false; } else cur++; return true; } void solve() { int n; scanf("%d", &n); scanf("%s", buf); if(n == 1) { puts("1"); return; } string s = buf; int count_1 = 0; for(auto x : s) if (x == '1') count_1++; int l = 1; int r = count_1 + 1; while(r - l > 1) { int mid = (l + r) / 2; if(can(s, mid)) l = mid; else r = mid; } long long ans = 0; for(int i = n - 1; i >= 0; i--) if(s[i] == '1' && l > 0) l--; else ans += (i + 1); printf("%lld\n", ans); } int main() { int t; scanf("%d", &t); for(int i = 0; i < t; i++) solve(); }
2026
D
Sums of Segments
You are given a sequence of integers $[a_1, a_2, \dots, a_n]$. Let $s(l,r)$ be the sum of elements from $a_l$ to $a_r$ (i. e. $s(l,r) = \sum\limits_{i=l}^{r} a_i$). Let's construct another sequence $b$ of size $\frac{n(n+1)}{2}$ as follows: $b = [s(1,1), s(1,2), \dots, s(1,n), s(2,2), s(2,3), \dots, s(2,n), s(3,3), \dots, s(n,n)]$. For example, if $a = [1, 2, 5, 10]$, then $b = [1, 3, 8, 18, 2, 7, 17, 5, 15, 10]$. You are given $q$ queries. During the $i$-th query, you are given two integers $l_i$ and $r_i$, and you have to calculate $\sum \limits_{j=l_i}^{r_i} b_j$.
In the editorial, we will treat all elements as $0$-indexed. The array $b$ consists of $n$ "blocks", the first block has the elements $[s(0,0), s(0,1), \dots, s(0,n-1)]$, the second block contains $[s(1,1), s(1,2), \dots, s(1,n-1)]$, and so on. Each position of an element in $b$ can be converted into a pair of the form "index of the block, index of element in the block" using either a formula or binary search. Let's analyze a query of the form "get the sum from the $i_1$-th element in the $j_1$-th block to the $i_2$-th element in the $j_2$-th block". Let's initialize the result with the sum of all blocks from $j_1$ to $j_2$ (inclusive), then drop some first elements from the block $j_1$, and then drop some last elements from the block $j_2$. We have to be able to calculate the following values: given an index of the block $b$ and the indices of elements $l$ and $r$ from this block, calculate the sum from the $l$-th element to the $r$-th element in that block; given two indices of blocks $l$ and $r$, calculate the sum from block $l$ to block $r$. In the first case, we need to calculate $s(b,l+b) + s(b,l+b+1) + \dots + s(b,r+b)$. Let $p_i$ be the sum of the first $i$ elements from the given array. The sum can be rewritten as $(p_{l+b+1} - p_b) + (p_{l+b+2} - p_b) + \dots + (p_{r+b+1}-p_b)$. This is equal to $\sum\limits_{i=l+b+1}^{r+b+1} p_i - (r-l+1) \cdot p_b$; the first value can be calculated in $O(1)$ by building prefix sums over the array $p$. The easiest way to calculate the sum over several blocks is the following one: for each block, calculate the sum in it the same way as we calculate the sum over part of the block; build prefix sums over sums in blocks. Depending on the implementation, the resulting complexity will be either $O(n)$ or $O(n \log n)$.
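The in-block sum can be checked numerically on the example $a = [1, 2, 5, 10]$ from the statement (a minimal sketch; `prefix` and `blockRangeSum` are hypothetical helper names, with $p$ and its prefix sums $pp$ as in the editorial):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// prefix(v)[i] = sum of the first i elements of v.
vector<long long> prefix(const vector<long long>& v) {
    vector<long long> res(v.size() + 1, 0);
    for (size_t i = 0; i < v.size(); i++) res[i + 1] = res[i] + v[i];
    return res;
}

// Sum of s(b, l+b) + s(b, l+b+1) + ... + s(b, r+b) inside block b
// (0-indexed), i.e. sum_{i=l+b+1}^{r+b+1} p_i - (r-l+1) * p_b.
long long blockRangeSum(const vector<long long>& p, const vector<long long>& pp,
                        int b, int l, int r) {
    return (pp[r + b + 2] - pp[l + b + 1]) - (long long)(r - l + 1) * p[b];
}
```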
[ "binary search", "data structures", "dp", "implementation", "math" ]
1,900
#include<bits/stdc++.h> using namespace std; const int MOD = 998244353; int n; vector<long long> a; vector<long long> pa; vector<long long> ppa; vector<long long> start; vector<long long> block; vector<long long> pblock; vector<long long> prefix_sums(vector<long long> v) { int k = v.size(); vector<long long> res(k + 1); for(int i = 0; i < k; i++) res[i + 1] = res[i] + v[i]; return res; } long long get_partial(int l, int r1, int r2) { // s(l, r1) + s(l, r1 + 1) + ... + s(l, r2 - 1) if(r2 <= r1) return 0ll; int cnt = r2 - r1; long long rem = pa[l] * cnt; long long add = ppa[r2 + 1] - ppa[r1 + 1]; return add - rem; } pair<int, int> convert(long long i) { int idx = upper_bound(start.begin(), start.end(), i) - start.begin() - 1; pair<int, int> res = {idx, i - start[idx] + idx}; return res; } long long query(long long l, long long r) { pair<int, int> lf = convert(l); pair<int, int> rg = convert(r); long long res = pblock[rg.first + 1] - pblock[lf.first]; if(lf.second != lf.first) res -= get_partial(lf.first, lf.first, lf.second); if(rg.second != n - 1) res -= get_partial(rg.first, rg.second + 1, n); return res; } int main() { scanf("%d", &n); a.resize(n); for(int i = 0; i < n; i++) scanf("%lld", &a[i]); pa = prefix_sums(a); ppa = prefix_sums(pa); start = {0}; for(int i = n; i >= 1; i--) start.push_back(start.back() + i); block.resize(n); for(int i = 0; i < n; i++) block[i] = get_partial(i, i, n); pblock = prefix_sums(block); int q; scanf("%d", &q); for(int i = 0; i < q; i++) { long long l, r; scanf("%lld %lld", &l, &r); printf("%lld\n", query(l - 1, r - 1)); } }
2026
E
Best Subsequence
Given an integer array $a$ of size $n$. Let's define the value of the array as its size minus the number of set bits in the bitwise OR of all elements of the array. For example, for the array $[1, 0, 1, 2]$, the bitwise OR is $3$ (which contains $2$ set bits), and the value of the array is $4-2=2$. Your task is to calculate the maximum possible value of some subsequence of the given array.
Let the number of chosen elements be $k$, and their bitwise OR be $x$. The answer is equal to $k - popcount(x)$, where $popcount(x)$ is the number of bits equal to $1$ in $x$. However, we can rewrite $popcount(x)$ as $60 - zeroes(x)$, where $zeroes(x)$ is equal to the number of bits equal to $0$ among the first $60$ bits in $x$. So, we have to maximize the value of $k + zeroes(x)$. Consider the following bipartite graph: the left part consists of elements of the given array, the right part consists of bits. An edge between vertex $i$ from the left part and vertex $j$ from the right part means that the $j$-th bit in $a_i$ is set to $1$. Suppose we've chosen a set of vertices in the left part (the elements of our subsequence). The bits which are equal to $0$ in the bitwise OR are represented by those vertices $j$ such that no chosen vertex from the left part is connected to $j$. So, if we unite the chosen vertices from the left part with the vertices in the right part representing bits equal to $0$, this will be an independent subset of vertices. Thus, our problem is reduced to finding the maximum size of an independent subset of vertices. In general, this problem is NP-hard. However, our graph is bipartite. So, we can use the following: by Konig's theorem, in a bipartite graph, the size of the minimum vertex cover is equal to the size of the maximum matching; and in every graph, independent subsets and vertex covers are complements of each other (if $S$ is a vertex cover, then the set of all vertices not belonging to $S$ is an independent subset, and vice versa). So, the maximum independent subset is the complement of the minimum vertex cover. It means that we can calculate the size of the maximum independent subset as $V - MM$, where $V$ is the number of vertices in the graph, and $MM$ is the size of the maximum matching. We can compute $MM$ using any maximum matching algorithm or network flow.
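For small arrays, the reduction can be sanity-checked against a direct exponential search over all subsequences (a sketch; `bruteBest` is a hypothetical checker, not part of the intended solution; it uses the GCC/Clang builtin `__builtin_popcountll`):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>
using namespace std;

// Value of the best subsequence by brute force: try every subset mask,
// compute its size minus the popcount of its bitwise OR. Exponential,
// so only usable to validate the flow solution on tiny inputs.
int bruteBest(const vector<long long>& a) {
    int n = a.size(), best = 0;
    for (int mask = 0; mask < (1 << n); mask++) {
        long long orv = 0;
        int k = 0;
        for (int i = 0; i < n; i++)
            if (mask >> i & 1) { orv |= a[i]; k++; }
        best = max(best, k - __builtin_popcountll(orv));
    }
    return best;
}
```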
[ "bitmasks", "dfs and similar", "flows", "graph matchings", "graphs" ]
2,500
#include <bits/stdc++.h> using namespace std; #define sz(a) int((a).size()) template<typename T = int> struct Dinic { struct edge { int u, rev; T cap, flow; }; int n, s, t; T flow; vector<int> lst; vector<int> d; vector<vector<edge>> g; Dinic() {} Dinic(int n, int s, int t) : n(n), s(s), t(t) { g.resize(n); d.resize(n); lst.resize(n); flow = 0; } void add_edge(int v, int u, T cap, bool directed = true) { g[v].push_back({u, sz(g[u]), cap, 0}); g[u].push_back({v, sz(g[v]) - 1, directed ? 0 : cap, 0}); } T dfs(int v, T flow) { if (v == t) return flow; if (flow == 0) return 0; T result = 0; for (; lst[v] < sz(g[v]); ++lst[v]) { edge& e = g[v][lst[v]]; if (d[e.u] != d[v] + 1) continue; T add = dfs(e.u, min(flow, e.cap - e.flow)); if (add > 0) { result += add; flow -= add; e.flow += add; g[e.u][e.rev].flow -= add; } if (flow == 0) break; } return result; } bool bfs() { fill(d.begin(), d.end(), -1); queue<int> q({s}); d[s] = 0; while (!q.empty() && d[t] == -1) { int v = q.front(); q.pop(); for (auto& e : g[v]) { if (d[e.u] == -1 && e.cap - e.flow > 0) { q.push(e.u); d[e.u] = d[v] + 1; } } } return d[t] != -1; } T calc() { T add; while (bfs()) { fill(lst.begin(), lst.end(), 0); while((add = dfs(s, numeric_limits<T>::max())) > 0) flow += add; } return flow; } }; const int B = 60; const int INF = 1e9; int main() { int t; cin >> t; while (t--) { int n; cin >> n; int s = n + B, t = n + B + 1; Dinic mf(t + 1, s, t); for (int i = 0; i < n; ++i) { long long x; cin >> x; mf.add_edge(s, i, 1); for (int j = 0; j < B; ++j) { if ((x >> j) & 1) mf.add_edge(i, j + n, INF); } } for (int i = 0; i < B; ++i) mf.add_edge(i + n, t, 1); cout << n - mf.calc() << '\n'; } }
2026
F
Bermart Ice Cream
In the Bermart chain of stores, a variety of ice cream is sold. Each type of ice cream has two parameters: price and tastiness. Initially, there is one store numbered $1$, which sells nothing. You have to process $q$ queries of the following types: - $1~x$ — a new store opens, that sells the same types of ice cream as store $x$. It receives the minimum available positive index. The order of the types of ice cream in the new store is the same as in store $x$. - $2~x~p~t$ — a type of ice cream with price $p$ and tastiness $t$ becomes available in store $x$. - $3~x$ — a type of ice cream that was available the longest (appeared the earliest) in store $x$ is removed. - $4~x~p$ — for store $x$, find the maximum total tastiness of a subset of types of ice cream that are sold there, such that the total price does not exceed $p$ (each type can be used in the subset no more than once).
Let's try to solve the problem without queries of type $1$. This leads to a data structure where elements are added to the end and removed from the beginning. In other words, a queue. The query is a dynamic programming problem of the "knapsack" type. To implement a knapsack on a queue, we can use the technique of implementing a queue with two stacks. This is commonly referred to as a "queue with minimum", as it is usually used to maintain the minimum. You can read more about it on cp-algorithms. We replace the operation of taking the minimum with the operation of adding a new item to the knapsack in $O(P)$, where $P$ is the maximum price in the queries. This gives us a solution with a time complexity of $O(qP)$. Now, let's return to the original problem. A type $1$ operation essentially adds the necessity to make the data structure persistent. However, this is only true if we need to respond to queries in the order they are given. In this problem, this is not the case - we can try to transition to an "offline" solution. Let's build a version tree of our persistent structure. For each query of type $1$, $2$, or $3$, create a new version. The edge between versions will store information about the type of change: a simple copy, an addition of an element, or a deletion of an element. Queries of type $4$ will be stored in a list for each vertex to which they are addressed. We will perform a depth-first traversal of the version tree. During the transition along an edge, we need to be able to do the following. If the transition is of the "add element" type, we need to add the element to the end, then process the subtree of the vertex, answer the queries for that vertex, and upon exiting, remove the element from the end. For a transition of the "remove element" type, we need to remove the element from the beginning, process the subtree, and upon exiting, add the element back to the beginning. Thus, a structure of the "deque" type is sufficient. It turns out that a deque that maintains a minimum also exists. 
You can read about it in a blog by k1r1t0. Overall complexity: $O(qP)$, where $P$ is the maximum price in the queries.
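The core building block, a "knapsack on a stack", can be sketched in isolation (assumptions: the struct name `KnapsackStack` is mine and the budget bound `P` is kept tiny for illustration; the full solution combines such stacks into a queue/deque as described above):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>
using namespace std;

const int P = 8; // budgets 0..P-1; kept small for illustration

// Each push stores a full dp array, so popping the top item restores the
// knapsack over the remaining items for free (at the cost of O(P) memory
// per element) - exactly what the two-stack queue construction needs.
struct KnapsackStack {
    vector<vector<int>> dp; // dp.back()[c] = best tastiness within budget c
    void push(int price, int taste) {
        vector<int> cur = dp.empty() ? vector<int>(P, 0) : dp.back();
        for (int c = P - 1; c >= price; c--)
            cur[c] = max(cur[c], cur[c - price] + taste);
        dp.push_back(cur);
    }
    void pop() { dp.pop_back(); }
    int query(int budget) const { return dp.empty() ? 0 : dp.back()[budget]; }
};
```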
[ "data structures", "dfs and similar", "divide and conquer", "dp", "implementation", "trees" ]
2,700
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; const int P = 2000 + 5; struct item{ int p, t; }; struct minstack { stack<item> st; stack<array<int, P>> dp; int get(int p) {return dp.empty() ? 0 : dp.top()[p];} bool empty() {return st.empty();} int size() {return st.size();} void push(item it) { if (empty()){ dp.push({}); for (int i = 0; i < P; ++i) dp.top()[i] = 0; } else{ dp.push(dp.top()); } st.push(it); for (int i = P - it.p - 1; i >= 0; --i) dp.top()[i + it.p] = max(dp.top()[i + it.p], dp.top()[i] + it.t); } void pop() { st.pop(); dp.pop(); } item top() { return st.top(); } void swap(minstack &x) { st.swap(x.st); dp.swap(x.dp); } }; struct mindeque { minstack l, r, t; void rebalance() { bool f = false; if (r.empty()) {f = true; l.swap(r);} int sz = r.size() / 2; while (sz--) {t.push(r.top()); r.pop();} while (!r.empty()) {l.push(r.top()); r.pop();} while (!t.empty()) {r.push(t.top()); t.pop();} if (f) l.swap(r); } int get(int p) { int ans = 0; for (int i = 0; i <= p; ++i) ans = max(ans, l.get(i) + r.get(p - i)); return ans; } bool empty() {return l.empty() && r.empty();} int size() {return l.size() + r.size();} void push_front(item it) {l.push(it);} void push_back(item it) {r.push(it);} void pop_front() {if (l.empty()) rebalance(); l.pop();} void pop_back() {if (r.empty()) rebalance(); r.pop();} item front() {if (l.empty()) rebalance(); return l.top();} item back() {if (r.empty()) rebalance(); return r.top();} void swap(mindeque &x) {l.swap(x.l); r.swap(x.r);} }; struct edge{ int u, tp, p, t; }; struct query{ int i, p; }; vector<vector<edge>> g; vector<vector<query>> qs; vector<int> ans; mindeque ks; void dfs(int v){ for (auto& [i, p] : qs[v]){ ans[i] = ks.get(p); } for (auto& [u, tp, p, t] : g[v]){ if (tp == 0){ dfs(u); } else if (tp == -1){ auto it = ks.front(); ks.pop_front(); dfs(u); ks.push_front({it.p, it.t}); } else{ ks.push_back({p, t}); dfs(u); ks.pop_back(); } } } int main() { cin.tie(0); 
ios::sync_with_stdio(false); int q; cin >> q; vector<int> st(1); vector<int> where(q, -1); int cnt = 1, cnt_real = 1; where[0] = 0; g.push_back({}); qs.push_back({}); auto copy_shop = [&](int x, bool real){ g.push_back({}); qs.push_back({}); st.push_back(st[x]); if (real){ where[cnt_real] = cnt; ++cnt_real; } ++cnt; }; forn(i, q){ int tp, x; cin >> tp >> x; --x; int v = where[x], u = -1; if (tp != 4){ copy_shop(v, tp == 1); u = cnt - 1; } if (tp == 1){ g[v].push_back({u, 0, -1, -1}); continue; } if (tp == 3){ g[v].push_back({u, -1, -1, -1}); ++st[u]; where[x] = u; continue; } int p; cin >> p; if (tp == 4){ qs[v].push_back({i, p}); continue; } int t; cin >> t; g[v].push_back({u, 1, p, t}); where[x] = u; } ans.assign(q, -1); dfs(0); forn(i, q) if (ans[i] != -1) cout << ans[i] << '\n'; return 0; }
2027
A
Rectangle Arrangement
You are coloring an infinite square grid, in which all cells are initially white. To do this, you are given $n$ stamps. Each stamp is a rectangle of width $w_i$ and height $h_i$. You will use \textbf{each} stamp exactly \textbf{once} to color a rectangle of the same size as the stamp on the grid in black. You cannot rotate the stamp, and for each cell, the stamp must either cover it fully or not cover it at all. You can use the stamp at any position on the grid, even if some or all of the cells covered by the stamping area are already black. What is the minimum sum of the \textbf{perimeters} of the connected regions of black squares you can obtain after all the stamps have been used?
We must minimize the perimeter, and an obvious way to attempt this is to maximize the overlap. To achieve this, we can place each stamp such that the lower left corner of each stamp is at the same position, as shown in the sample explanation. Now, we can observe that the perimeter of this shape is determined solely by the maximum height and width of any stamp, and these values cannot be reduced further. Therefore, the answer is $2 \cdot \bigl( \max(w_i) + \max(h_i) \bigr)$. Furthermore, it's true that any arrangement of stamps which are fully enclosed in an outer rectangular area of $\max(w_i)$ by $\max(h_i)$ will yield the same perimeter.
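The closed-form answer can be written directly (a sketch; `minPerimeter` is a hypothetical name, and the test input below is my own small example):

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>
using namespace std;

// Stacking every stamp at a common lower-left corner leaves a region whose
// perimeter depends only on the maximum width and maximum height.
int minPerimeter(const vector<pair<int, int>>& stamps) {
    int maxw = 0, maxh = 0;
    for (auto [w, h] : stamps) {
        maxw = max(maxw, w);
        maxh = max(maxh, h);
    }
    return 2 * (maxw + maxh);
}
```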
[ "geometry", "implementation", "math" ]
800
#include <bits/stdc++.h> using namespace std; int main() { cin.tie(0)->sync_with_stdio(0); int t; cin >> t; while (t--) { int n; cin >> n; int maxw = 0, maxh = 0; for (int i=0; i<n; i++) { int w, h; cin >> w >> h; maxw = max(maxw, w); maxh = max(maxh, h); } cout << 2 * (maxw + maxh) << "\n"; } return 0; }
2027
B
Stalin Sort
Stalin Sort is a humorous sorting algorithm designed to eliminate elements which are out of place instead of bothering to sort them properly, lending itself to an $\mathcal{O}(n)$ time complexity. It goes as follows: starting from the second element in the array, if it is strictly smaller than the previous element (ignoring those which have already been deleted), then delete it. Continue iterating through the array until it is sorted in non-decreasing order. For example, the array $[1, 4, 2, 3, 6, 5, 5, 7, 7]$ becomes $[1, 4, 6, 7, 7]$ after a Stalin Sort. We define an array as vulnerable if you can sort it in \textbf{non-increasing} order by repeatedly applying a Stalin Sort to \textbf{any of its subarrays$^{\text{∗}}$}, as many times as is needed. Given an array $a$ of $n$ integers, determine the minimum number of integers which must be removed from the array to make it vulnerable. \begin{footnotesize} $^{\text{∗}}$An array $a$ is a subarray of an array $b$ if $a$ can be obtained from $b$ by the deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end. \end{footnotesize}
An array is vulnerable if and only if the first element is the largest. To prove that this condition is sufficient, we can trivially perform a single operation on the entire range, which will clearly make it non-increasing (only elements equal to the maximum survive). Now, let's prove that it is necessary. Consider any array in which the maximum is not the first element. Note that a Stalin Sort on any subarray will never remove the first element, and also will never remove the maximum. So if the first is not the maximum, this will always break the non-increasing property. Therefore, we just need to find the longest subsequence in which the first element is the largest. This can be done easily in $\mathcal{O}(n^2)$ - consider each index being the first item in the subsequence, and count all items to the right of it which are smaller or equal to it. Find the maximum over all of these, then subtract this from $n$. Bonus: Solve this task in $\mathcal{O}(n \log n)$.
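One pass of the Stalin Sort from the statement is easy to simulate, which is handy for checking the example and the claim that the first element and the maximum always survive (a sketch; `stalinSort` is a hypothetical name):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// One application of Stalin Sort: keep an element iff it is not strictly
// smaller than the last kept element (the first element is always kept).
vector<int> stalinSort(const vector<int>& a) {
    vector<int> res;
    for (int x : a)
        if (res.empty() || x >= res.back()) res.push_back(x);
    return res;
}
```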
[ "brute force", "greedy" ]
1,100
#include <bits/stdc++.h> using namespace std; int main() { cin.tie(0)->sync_with_stdio(0); int t; cin >> t; while (t--) { int n; cin >> n; vector<int> A(n); for (int i=0; i<n; i++) cin >> A[i]; int best = 0; for (int i=0; i<n; i++) { int curr = 0; for (int j=i; j<n; j++) { if (A[j] <= A[i]) { curr += 1; } } best = max(best, curr); } cout << n - best << "\n"; } return 0; }
2027
C
Add Zeros
You're given an array $a$ initially containing $n$ integers. In one operation, you must do the following: - Choose a position $i$ such that $1 < i \le |a|$ and $a_i = |a| + 1 - i$, where $|a|$ is the \textbf{current} size of the array. - Append $i - 1$ zeros onto the end of $a$. After performing this operation as many times as you want, what is the maximum possible length of the array $a$?
Let's rearrange the equation given in the statement to find a 'required' $|a|$ value in order to use an operation at that index. We have $a_i = |a| + 1 - i$, so $|a| = a_i + i - 1$. Note that if $a_i = 0$, then the condition is never true, since $i - 1 < |a|$ always. So actually, we just need to consider the first $n$ positions. Once we use an operation at position $i$, the length of the array will increase by $i - 1$. So, for each position we require some length $u_i = a_i + i - 1$ and make a length of $v_i = u_i + i - 1$. So, let's create a graph containing all $n - 1$ of these edges (one for each position $i \ge 2$), and run some DFS or BFS on this graph starting from node $n$ (the starting value of $|a|$). It then follows that the largest node visited is the maximum length of the array we can possibly get. We should use a map to store the graph, since the length of the array can grow up to $\approx n^2$ and so the graph is very sparse. If we don't want to use a map, we can also take advantage of the fact that all edges are directed from $u_i$ to $v_i$ where $u_i < v_i$ and that all edges and nodes are fixed, so we can actually iterate through all $u_i$ and maintain a boolean array of which $v_i$ values are visited so far, updating it at any point using a binary search.
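For tiny inputs, the graph answer can be cross-checked by brute-forcing the operations on the actual array (a sketch; `bruteMaxLen` is a hypothetical checker; the recursion terminates because $|a|$ only grows, so each position can trigger at most once):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>
using namespace std;

// Try every applicable operation (1-indexed position i+1 with
// a_{i+1} = |a| + 1 - (i+1), i.e. a[i] == |a| - i), append i zeros,
// and recurse; return the best final length reachable.
long long bruteMaxLen(vector<long long> a) {
    long long best = a.size();
    for (size_t i = 1; i < a.size(); i++) {
        if (a[i] == (long long)a.size() - (long long)i) {
            vector<long long> b = a;
            b.insert(b.end(), i, 0LL); // operation at position i+1 appends i zeros
            best = max(best, bruteMaxLen(b));
        }
    }
    return best;
}
```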
[ "brute force", "data structures", "dfs and similar", "dp", "graphs", "greedy" ]
1,500
#include <bits/stdc++.h> using namespace std; using ll = long long; int main() { cin.tie(0)->sync_with_stdio(0); int t; cin >> t; while (t--) { int n; cin >> n; vector<ll> A(n); for (int i=0; i<n; i++) cin >> A[i]; map<ll,vector<ll>> adj; for (int i=1; i<n; i++) { ll u = A[i] + i; ll v = u + i; adj[u].push_back(v); } set<ll> vis; function<void(ll)> dfs = [&](ll u) -> void { if (vis.count(u)) return; vis.insert(u); for (ll v : adj[u]) dfs(v); }; dfs(n); cout << *vis.rbegin() << "\n"; } return 0; }
2027
D1
The Endspeaker (Easy Version)
\textbf{This is the easy version of this problem. The only difference is that you only need to output the minimum total cost of operations in this version. You must solve both versions to be able to hack.} You're given an array $a$ of length $n$, and an array $b$ of length $m$ ($b_i > b_{i+1}$ for all $1 \le i < m$). Initially, the value of $k$ is $1$. Your aim is to make the array $a$ empty by performing one of these two operations repeatedly: - Type $1$ — If the value of $k$ is less than $m$ and the array $a$ is \textbf{not empty}, you can increase the value of $k$ by $1$. This does not incur any cost. - Type $2$ — You remove a non-empty prefix of array $a$, such that its sum does not exceed $b_k$. This incurs a cost of $m - k$. You need to minimize the total cost of the operations to make array $a$ empty. If it's impossible to do this through any sequence of operations, output $-1$. Otherwise, output the minimum total cost of the operations.
Let's use dynamic programming. We will have $\operatorname{dp}_{i,j}$ be the minimum cost to remove the prefix of length $i$, where the current value of $k$ is $j$. By a type $1$ operation, we can transition from $\operatorname{dp}_{i,j}$ to $\operatorname{dp}_{i,j+1}$ at no cost. Otherwise, by a type $2$ operation, we need to remove some contiguous subarray $a_{i+1}, a_{i+2}, \dots, a_{x}$ (a prefix of the current array), to transition to $\operatorname{dp}_{x,j}$ with a cost of $m - j$. Let $r$ be the largest value of $x$ possible. Given we're spending $m - j$ whatever value of $x$ we choose, it's clear that we only need to transition to $\operatorname{dp}_{r,j}$. To find $r$ for each value of $i$ and $j$, we can either binary search over the prefix sums or simply maintain $r$ as we increase $i$ for a fixed value of $j$. The answer is then $\min\limits_{j} \operatorname{dp}_{n,j}$. The latter method solves the problem in $\mathcal{O}(nm)$.
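The "maintain $r$ as we increase $i$" part can be sketched separately (assumptions: 0-indexed positions, `reachable` is a hypothetical name, and `cap` plays the role of $b_k$ for one fixed $k$):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// reach[i] = largest x such that a[i..x-1] has sum <= cap. Both window
// endpoints only move forward, so one fixed k costs O(n) overall.
vector<int> reachable(const vector<long long>& a, long long cap) {
    int n = a.size();
    vector<int> reach(n);
    long long sum = 0;
    int r = 0;
    for (int i = 0; i < n; i++) {
        if (r < i) { r = i; sum = 0; } // window collapsed past i
        while (r < n && sum + a[r] <= cap) sum += a[r++];
        reach[i] = r;
        if (r > i) sum -= a[i]; // slide the left endpoint
    }
    return reach;
}
```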
[ "binary search", "dp", "graphs", "greedy", "implementation", "two pointers" ]
1,700
#include <bits/stdc++.h> using namespace std; using ll = long long; const int inf = 1 << 30; void chmin(int &a, int b) { a = min(a, b); } int main() { cin.tie(0)->sync_with_stdio(0); int t; cin >> t; while (t--) { int n, m; cin >> n >> m; vector<int> A(n+1); for (int i=0; i<n; i++) cin >> A[i]; vector<int> B(m); for (int i=0; i<m; i++) cin >> B[i]; vector nxt(n, vector<int>(m)); for (int k=0; k<m; k++) { int r = -1, sum = 0; for (int i=0; i<n; i++) { while (r < n && sum <= B[k]) sum += A[++r]; nxt[i][k] = r; sum -= A[i]; } } vector dp(n+1, vector<int>(m, inf)); dp[0][0] = 0; for (int k=0; k<m; k++) { for (int i=0; i<n; i++) { chmin(dp[nxt[i][k]][k], dp[i][k] + m - k - 1); if (k < m-1) chmin(dp[i][k+1], dp[i][k]); } } int ans = inf; for (int k=0; k<m; k++) { chmin(ans, dp[n][k]); } if (ans == inf) { cout << "-1\n"; } else { cout << ans << "\n"; } } return 0; }
2027
D2
The Endspeaker (Hard Version)
\textbf{This is the hard version of this problem. The only difference is that you need to also output the number of optimal sequences in this version. You must solve both versions to be able to hack.} You're given an array $a$ of length $n$, and an array $b$ of length $m$ ($b_i > b_{i+1}$ for all $1 \le i < m$). Initially, the value of $k$ is $1$. Your aim is to make the array $a$ empty by performing one of these two operations repeatedly: - Type $1$ — If the value of $k$ is less than $m$ and the array $a$ is \textbf{not empty}, you can increase the value of $k$ by $1$. This does not incur any cost. - Type $2$ — You remove a non-empty prefix of array $a$, such that its sum does not exceed $b_k$. This incurs a cost of $m - k$. You need to minimize the total cost of the operations to make array $a$ empty. If it's impossible to do this through any sequence of operations, output $-1$. Otherwise, output the minimum total cost of the operations, and the number of sequences of operations which yield this minimum cost modulo $10^9 + 7$. Two sequences of operations are considered different if you choose a different type of operation at any step, or the size of the removed prefix is different at any step.
Following on from the editorial for D1. Let's have the $\operatorname{dp}$ table store a pair of the minimum cost and the number of ways. Since we're now counting the ways, it's not enough just to consider the transition to $\operatorname{dp}_{r,j}$; we also need to transition to all $\operatorname{dp}_{x,j}$. Doing this naively is too slow, so let's instead find a way to perform range updates. Let's say we want to range update $\operatorname{dp}_{l,j}, \operatorname{dp}_{l+1,j}, ..., \operatorname{dp}_{r,j}$. We'll store some updates in another table at either end of the range. Then, for a fixed value of $k$ as we iterate through increasing $i$-values, let's maintain a map of cost to ways. Whenever the number of ways falls to zero, we can remove it from the map. On each iteration, we can set $\operatorname{dp}_{i,j}$ to the smallest entry in the map, and then perform the transitions. This works in $\mathcal{O}(nm \log n)$. Bonus: It's also possible to solve without the $\log n$ factor. We can use the fact that $\operatorname{dp}_{i,k}$ is non-increasing for a fixed value of $k$ to make the range updates non-intersecting by updating a range strictly after the previous iteration. Then we can just update a prefix sum array, instead of using a map. There exists an alternative solution for D1 & D2, using a segment tree. We can actually consider the process in reverse; let's reformulate $\operatorname{dp}_{i,j}$ to represent the minimum score required to remove all elements after the $i$-th element, given that the current value of $k$ is $j$. Instead of using a dp table, we maintain $m$ segment trees, each of length $n$. The $i$-th segment tree will represent the $i$-th column of the dp table. We precalculate for each $i$ and $j$ the furthest position we can remove starting from $i$ - specifically, the maximum subarray starting from $i$ with a sum not exceeding $b_j$. We store this in $\operatorname{nxt}_{i,j}$. 
This calculation can be done in $\mathcal{O}(nm)$ time using a sliding window. To transition in the dp, we have: $\operatorname{dp}_{i,j} = \min\left( \operatorname{dp}_{i, j + 1}, \, \min(\operatorname{dp}_{i + 1, j}, \operatorname{dp}_{i + 2, j}, \ldots, \operatorname{dp}_{{\operatorname{nxt}_{i,j}}, j}) + m - j \right)$ This transition can be computed in $\mathcal{O}(\log n)$ time thanks to range querying on the segment tree, so our total complexity is $\mathcal{O}(nm \log n)$. For D2, we can store the count of minimums within each segment, and simply sum these counts to get the total number of ways.
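The segment-tree nodes only need a (minimum, count) pair with the merge below (a sketch; `combine` is a hypothetical name, with counts taken modulo $10^9+7$ as in the statement):

```cpp
#include <cassert>

// Merge two (min cost, number of ways) pairs: the smaller cost wins, and
// equal costs add their way-counts modulo 1e9+7.
const long long MOD = 1000000007;
struct Node { long long val; long long cnt; };
Node combine(const Node& a, const Node& b) {
    if (a.val < b.val) return a;
    if (b.val < a.val) return b;
    return {a.val, (a.cnt + b.cnt) % MOD};
}
```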
[ "binary search", "data structures", "dp", "greedy", "implementation", "two pointers" ]
2,200
#include <bits/stdc++.h> using namespace std; #define int long long int modN = 1e9 + 7; int mod(int n) { return (n + modN) % modN; } struct SegmentTree { struct Node { int val = 1e18; int cnt = 1; }; vector<Node> st; int n; SegmentTree(int n): n(n) { st.resize(4 * n + 1, Node()); } SegmentTree(vector<int> a): n(a.size()) { st.resize(4 * n + 1, Node()); build(a, 1, 0, n - 1); } void merge(Node& a, Node& b, Node& c) { a.val = min(b.val, c.val); if (b.val == c.val) a.cnt = mod(b.cnt + c.cnt); else if (b.val < c.val) a.cnt = b.cnt; else if (b.val > c.val) a.cnt = c.cnt; } void build(vector<int>& a, int id, int l, int r) { if (l == r) { st[id].val = a[l]; return; } int mid = (l + r) / 2; build(a, id * 2, l, mid); build(a, id * 2 + 1, mid + 1, r); merge(st[id], st[id * 2], st[id * 2 + 1]); } void update(int id, int l, int r, int u, int val, int cnt) { if (l == r) { st[id].val = val; // or st[id].sum += val st[id].cnt = cnt; return; } int mid = (l + r) / 2; if (u <= mid) update(id * 2, l, mid, u, val, cnt); else update(id * 2 + 1, mid + 1, r, u, val, cnt); merge(st[id], st[id * 2], st[id * 2 + 1]); } void update(int idx, int val, int cnt) { //wrapper update(1, 0, n - 1, idx, val, cnt); } Node query(int id, int l, int r, int u, int v) { //give 0, n - 1 as l and r and 1 as id if (v < l || r < u) return Node(); if (u <= l && r <= v) { return st[id]; } int mid = (l + r) / 2; auto a = query(id * 2, l, mid, u, v); auto b = query(id * 2 + 1, mid + 1, r, u, v); Node res; merge(res, a, b); return res; } Node query(int l, int r) { //wrapper return query(1, 0, n - 1, l, r); } }; void solve() { int n, m; cin >> n >> m; vector<int> a(n), b(m); for (int &A : a) cin >> A; for (int &B : b) cin >> B; if (*max_element(a.begin(), a.end()) > b[0]) { cout << -1 << '\n'; return; } vector<vector<int>> nxt(m, vector<int>(n)); for (int i = 0; i < m; i++) { int curr = 0, r = -1; for (int j = 0; j < n; j++) { while (r + 1 < n && curr + a[r + 1] <= b[i]) curr += a[r + 1], r += 1; nxt[i][j] = r + 1; 
if (j <= r) curr -= a[j]; r = max(r, j); } } vector<SegmentTree> dp(m, SegmentTree(vector<int>(n + 1, 1e18))); for (int i = 0; i < m; i++) dp[i].update(n, 0, 1); for (int i = n - 1; i >= 0; i--) { for (int j = m - 1; j >= 0; j--) { auto q1 = dp[j].query(i + 1, nxt[j][i]); int v1 = q1.val + m - (j + 1), ps1 = q1.cnt; if (i + 1 <= nxt[j][i]) dp[j].update(i, v1, ps1); if (j != m - 1) { auto q2 = dp[j + 1].query(i, i); int v2 = q2.val, ps2 = q2.cnt; auto q3 = dp[j].query(i, i); if (v2 < q3.val) dp[j].update(i, v2, ps2); else if (v2 == q3.val) dp[j].update(i, v2, mod(ps2 + q3.cnt)); } } } cout << dp[0].query(0, 0).val << ' ' << dp[0].query(0, 0).cnt << '\n'; } signed main() { cin.tie(0) -> sync_with_stdio(false); int t; cin >> t; while (t--) solve(); return 0; }
2027
E1
Bit Game (Easy Version)
\textbf{This is the easy version of this problem. The only difference is that you need to output the winner of the game in this version, and the number of stones in each pile are fixed. You must solve both versions to be able to hack.} Alice and Bob are playing a familiar game where they take turns removing stones from $n$ piles. Initially, there are $x_i$ stones in the $i$-th pile, and it has an associated value $a_i$. A player can take $d$ stones away from the $i$-th pile if and only if both of the following conditions are met: - $1 \le d \le a_i$, and - $x \, \& \, d = d$, where $x$ is the current number of stones in the $i$-th pile and $\&$ denotes the bitwise AND operation. The player who cannot make a move loses, and Alice goes first. You're given the $a_i$ and $x_i$ values for each pile, please determine who will win the game if both players play optimally.
Each pile is an independent game, so let's try to find the nimber of a pile with value $a$ and size $x$ - let's denote this by $f(x, a)$. Suppose $x = 2^k - 1$ for some integer $k$; in this case, the nimber is entirely dependent on $a$. If $x$ is not in that form, consider the binary representation of $a$. If any bit in $a$ is on where $x$ is off, then it's equivalent to if $a$ had that bit off, but all lower bits were on. Let's call such a bit a 'good bit'. So we can build some value $a'$ by iterating along the bits from highest to lowest; if $x$ has an on bit in this position, then $a'$ will have an on bit in this position if and only if $a$ has one there, or we've already found a good bit. Now that the bits of $x$ and $a'$ align, we can remove all zeros from both representations and it's clear that this is an equivalent game with $x = 2^k - 1$. One small observation to make is that we can remove the ones in $x$ which correspond to leading zeros in $a'$, since we can never use these. So actually, after calculating $a'$ we only need $g(a') = f(2^k - 1, a')$, where $k$ is the smallest integer such that $2^k - 1 \ge a'$. By running a brute force using the Sprague-Grundy theorem to find the nimbers you can observe some patterns which encompass all cases: $g(2^k - 2) = 0$, for all $k \ge 1$. $g(2^k - 1) = k$, for all $k \ge 0$. $g(2^k) = k \oplus 1$, for all $k \ge 0$. $g(2^k+1) = g(2^k+2) = \ldots = g(2^{k+1} - 3) = k + 1$, for all $k \ge 2$. The first $3$ cases are nice and relatively simple to prove, but the last is a lot harder (at least, I couldn't find an easier way). Anyway, I will prove all of them here, for completeness. First, note that if at any point we have $x \le a$, then we have a standard nim-game over the $k$ bits of the number, since we can remove any of them on each operation, so the nimber of such a game would be $k$. The second case is one example of this; we have $x = a = 2^k - 1$, so its nimber is $k$. Now let's prove the first case. 
When $k = 1$, $g(0) = 0$ since there are no moves. For larger values of $k$, no matter which value of $d$ we choose, the resulting game has $x \le a$ where $x$ has a positive number of bits. Overall, we have the $\operatorname{mex}$ of a bunch of numbers which are all positive, so the nimber is $0$ as required. Now let's prove the third case. To reiterate, since $a' = 2^k$ we have $x = 2^{k+1} - 1$ by definition. This case is equivalent to having a nim-game with one pile of $k$ bits (all bits below the most significant one), and another pile of $1$ bit (the most significant one). This is because we can remove any amount from either pile in one move, but never both. Therefore, the nimber is $k \oplus 1$ as required. The last case is the hardest to prove. We need to show that there is some move that lands you in a game with each nimber from $0$ to $k$, and never in one with nimber $k + 1$. Well, the only way that you could get a nimber of $k + 1$ is by landing in case $3$ without changing the number of bits in $x$/$a'$. Any move will change this though, since any move will remove some bits of $x$, and when $a'$ is recalculated it'll have fewer bits. Okay, now one way of getting a nimber of $0$ is just by removing all bits in $x$ which are strictly after the first zero-bit in $a'$. For example, if $a' = 101101$, then let's choose $d = 001111$; then we'll land in case $1$. As for getting the rest of the nimbers from $1$ to $k$, it helps if we can just land in case $2$ but with a varying number of bits. If $a' \ge 2^k + 2^{k-1} - 1$ this is easily possible by considering all $d$ in the range $2^k \le d < 2^k + 2^{k-1}$. On the other hand, if $a' < 2^k + 2^{k-1} - 1$, let's first consider the resulting value of $x$ to be equal to $2^k + 2^m$, where $2^m$ is the lowest set bit in $a'$. This will give a nimber of $2$. Now, we can repeatedly add the highest unset bit in this $x$ to it to generate the next nimber, until we have obtained all nimbers from $2$ to $k$. 
For the nimber of $1$, just have the resulting $x$ be $2^k$. Time complexity: $\mathcal{O}(n \log C)$, where $C$ is the upper limit on $x_i$.
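The brute force mentioned above ("running a brute force using the Sprague-Grundy theorem") can be sketched as follows. `grundy` is a hypothetical helper name; the search is exponential in the number of bits, so it is only usable for small $x$, but it is enough to observe the four patterns:

```python
from functools import lru_cache

def grundy(x, a):
    """Nimber f(x, a) of one pile, computed straight from the game rules.
    A move removes d stones where 1 <= d <= a and d is a submask of x."""
    @lru_cache(maxsize=None)
    def f(x):
        reach = set()
        d = x
        while d:                      # enumerate nonzero submasks d of x
            if d <= a:
                reach.add(f(x ^ d))   # removing submask d leaves x ^ d stones
            d = (d - 1) & x
        g = 0                         # mex of the reachable nimbers
        while g in reach:
            g += 1
        return g
    return f(x)
```

For instance, `grundy(7, 6)` and `grundy(7, 4)` reproduce the claimed values $g(2^3 - 2) = 0$ and $g(2^2) = 2 \oplus 1 = 3$.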
[ "bitmasks", "brute force", "games", "math" ]
2,800
#include <bits/stdc++.h> using namespace std; int nimber(int x, int a) { int aprime = 0; bool goodbit = false; for (int bit=30; bit>=0; bit--) { if (x & (1 << bit)) { aprime *= 2; if (goodbit || (a & (1 << bit))) { aprime += 1; } } else if (a & (1 << bit)) { goodbit = true; } } // g(2^k - 2) = 0, for all k >= 1. for (int k=1; k<=30; k++) { if (aprime == (1 << k) - 2) { return 0; } } // g(2^k - 1) = k, for all k >= 1. for (int k=1; k<=30; k++) { if (aprime == (1 << k) - 1) { return k; } } // g(2^k) = k + (-1)^k, for all k >= 0. for (int k=1; k<=30; k++) { if (aprime == (1 << k)) { if (k % 2) return k - 1; else return k + 1; } } // g(2^k+1) = g(2^k+2) = ... = g(2^{k+1} - 3) = k + 1, for all k >= 2. for (int k=2; k<=30; k++) { if ((1 << k) < aprime && aprime <= (2 << k) - 3) { return k + 1; } } // should never get to this point assert(false); return -1; } int main() { cin.tie(0)->sync_with_stdio(0); int t; cin >> t; while (t--) { int n; cin >> n; vector<int> A(n); for (int i=0; i<n; i++) cin >> A[i]; vector<int> X(n); for (int i=0; i<n; i++) cin >> X[i]; int curr = 0; for (int i=0; i<n; i++) curr ^= nimber(X[i], A[i]); cout << (curr ? "Alice" : "Bob") << "\n"; } return 0; }
2027
E2
Bit Game (Hard Version)
\textbf{This is the hard version of this problem. The only difference is that you need to output the number of choices of games where Bob wins in this version, where the number of stones in each pile are not fixed. You must solve both versions to be able to hack.} Alice and Bob are playing a familiar game where they take turns removing stones from $n$ piles. Initially, there are $x_i$ stones in the $i$-th pile, and it has an associated value $a_i$. A player can take $d$ stones away from the $i$-th pile if and only if both of the following conditions are met: - $1 \le d \le a_i$, and - $x \, \& \, d = d$, where $x$ is the current number of stones in the $i$-th pile and $\&$ denotes the bitwise AND operation. The player who cannot make a move loses, and Alice goes first. You're given the $a_i$ values of each pile, but the number of stones in the $i$-th pile has not been determined yet. For the $i$-th pile, $x_i$ can be any integer between $1$ and $b_i$, inclusive. That is, you can choose an array $x_1, x_2, \ldots, x_n$ such that the condition $1 \le x_i \le b_i$ is satisfied for all piles. Your task is to count the number of games where Bob wins if both players play optimally. Two games are considered different if the number of stones in any pile is different, i.e., the arrays of $x$ differ in at least one position. Since the answer can be very large, please output the result modulo $10^9 + 7$.
Let's continue on from the editorial for E1. We figured out that the nimbers lie in one of these four cases: $g(2^k - 2) = 0$, for all $k \ge 1$. $g(2^k - 1) = k$, for all $k \ge 0$. $g(2^k) = k + (-1)^k$, for all $k \ge 0$. $g(2^k+1) = g(2^k+2) = \ldots = g(2^{k+1} - 3) = k + 1$, for all $k \ge 2$. From these, it's easy to see that the nimber of each pile is always $\le 31$ given the constraints. So, suppose that for each $p$ we can count the number of values $x_i \le b_i$ such that $f(x_i, a_i) = p$; then we can easily maintain a knapsack to count the number of games where the nimbers of the piles have a bitwise XOR of $0$. All that's left is to count the number of values of $x_i$ which yield some $a'$ in each of the cases above, which is probably the hardest part of the problem. We can use digit DP for this. The state should store the position of the most significant one in $a'$, and whether $a'$ is $0$ or of the form $1$, $11 \cdots 1$, $11 \cdots 0$, $100 \cdots 0$, or anything else. To find whether a $1$ added to $x$ becomes $0$ or $1$ in $a'$, have another flag to check whether a good bit has occurred yet. Also, to ensure that $x \le b$, have one more flag to check whether the current prefix has fallen strictly below $b$ yet. Please see the model code's comments for a better understanding of how the transitions are computed. Time complexity: $\mathcal{O}(n \log^2 C)$, where $C$ is the upper limit on $a_i$ and $b_i$.
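The knapsack step over pile nimbers can be sketched in isolation, assuming the per-pile counts have already been produced by the digit DP. `combine` is a hypothetical name, not part of the model solution:

```python
MOD = 10**9 + 7

def combine(counts_per_pile):
    """counts_per_pile[i][p] = number of choices of x_i giving pile i nimber p.
    Returns the number of games whose pile nimbers XOR to 0 (Bob wins)."""
    curr = [0] * 32
    curr[0] = 1                        # no piles yet: XOR is 0
    for cnt in counts_per_pile:
        nxt = [0] * 32
        for j in range(32):
            if curr[j]:
                for p in range(32):
                    if cnt[p]:
                        nxt[j ^ p] = (nxt[j ^ p] + curr[j] * cnt[p]) % MOD
        curr = nxt
    return curr[0]
```

For two piles that each admit one choice with nimber $0$ and two choices with nimber $1$, Bob wins in $1 \cdot 1 + 2 \cdot 2 = 5$ games.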
[ "bitmasks", "dp", "math" ]
3,100
#include <bits/stdc++.h> using namespace std; int dp[32][32][6][2][2]; const int mod = 1000000007; int main() { cin.tie(0)->sync_with_stdio(0); int t; cin >> t; while (t--) { int n; cin >> n; vector<int> A(n); for (int i=0; i<n; i++) cin >> A[i]; vector<int> B(n); for (int i=0; i<n; i++) cin >> B[i]; vector<int> curr(32); curr[0] = 1; // identity for (int i=0; i<n; i++) { memset(dp, 0, sizeof dp); dp[0][0][0][0][0] = 1; for (int j=0; j<=29; j++) { int p = 29 - j; // place we are going to add a bit in for (int k=0; k<=29; k++) { // position of most significant one in a' for (int type=0; type<6; type++) { // 0, 1, 11111, 11110, 10000, else for (int good=0; good<2; good++) { // good=1 iff good bit has occured for (int low=0; low<2; low++) { // low=1 iff prefix below b for (int bit=0; bit<2; bit++) { // bit in x if (dp[j][k][type][good][low] == 0) continue; // no point in transition since count is 0 if (!low && (B[i] & (1 << p)) == 0 && bit == 1) continue; // x can't go higher than B[i] if (bit == 0) { int good2 = good || (A[i] & (1 << p)) != 0; // check good bit int low2 = low || (B[i] & (1 << p)) != 0; // check if low // nothing added to a' so nothing else changes (dp[j+1][k][type][good2][low2] += dp[j][k][type][good][low]) %= mod; } else { int bita = good || (A[i] & (1 << p)) != 0; // bit in a' int k2 = type == 0 ? 0 : k + 1; // increase if MSOne exists int type2 = bita ? ( type == 0 ? 1 : // add first one (type == 1 || type == 2) ? 2 : // 11111 5 // can't add 1 after a 0 ) : ( (type == 0) ? 0 : // 0 (type == 1 || type == 4) ? 4 : // 10000 (type == 2) ? 
3 : // 11110 5 // can't have a zero in any other case ); (dp[j+1][k2][type2][good][low] += dp[j][k][type][good][low]) %= mod; } } } } } } } vector<int> count(32); // number of x-values for each nimber for (int k=0; k<=29; k++) { // position of MSOne for (int good=0; good<2; good++) { // doesn't matter for (int low=0; low<2; low++) { // doesn't matter (count[0] += dp[30][k][0][good][low]) %= mod; // 0 (count[1] += dp[30][k][1][good][low]) %= mod; // 1 (count[k+1] += dp[30][k][2][good][low]) %= mod; // 11111 (count[0] += dp[30][k][3][good][low]) %= mod; // 11110 (count[k+(k%2?-1:1)] += dp[30][k][4][good][low]) %= mod; // 10000 (count[k+1] += dp[30][k][5][good][low]) %= mod; // else } } } count[0] -= 1; // remove when x=0 vector<int> next(32); // knapsack after adding this pile for (int j=0; j<32; j++) for (int k=0; k<32; k++) (next[j ^ k] += 1LL * curr[j] * count[k] % mod) %= mod; swap(curr, next); } cout << curr[0] << "\n"; } return 0; }
2028
A
Alice's Adventures in ''Chess''
Alice is trying to meet up with the Red Queen in the countryside! Right now, Alice is at position $(0, 0)$, and the Red Queen is at position $(a, b)$. Alice can only move in the four cardinal directions (north, east, south, west). More formally, if Alice is at the point $(x, y)$, she will do one of the following: - go north (represented by N), moving to $(x, y+1)$; - go east (represented by E), moving to $(x+1, y)$; - go south (represented by S), moving to $(x, y-1)$; or - go west (represented by W), moving to $(x-1, y)$. Alice's movements are predetermined. She has a string $s$ representing a sequence of moves that she performs from left to right. Once she reaches the end of the sequence, she repeats the same pattern of moves forever. Can you help Alice figure out if she will ever meet the Red Queen?
We can run the whole pattern $100 \gg \max(a, b, n)$ times, which gives a total runtime of $O(100tn)$ (be careful: running the pattern only $10$ times is not enough!). To prove that $100$ repeats suffice, suppose that Alice's steps on the first run of the pattern are $(0, 0), (x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$ (we will take $x_0 = 0, y_0 = 0$ for convenience). Then, Alice ends up at position $(a, b)$ if there exists a $t\ge 0$ (the number of extra repeats) such that for some $i$, $x_i + tx_n = a$ and $y_i + ty_n = b$. Certainly, if $x_n = y_n = 0$, we only need one repeat, so assume WLOG that $x_n \neq 0$. Then, it must be the case that $t = \frac{a - x_i}{x_n}$. However, $a - x_i \le 20$ (since $x_i \ge -10$) and $|x_n|\ge 1$, so $t \le 20$ and therefore $21$ repeats always suffice. In fact, the above proof shows that we can solve each testcase in time $O(n)$.
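The closing remark about an $O(n)$ solution can be sketched directly from the proof: record the prefix positions and the per-run displacement $(x_n, y_n)$, then solve for an integer number of extra repeats. `meets` is a hypothetical helper name; the model solution simply simulates $100$ repeats instead:

```python
def meets(a, b, s):
    """O(n) check: does Alice, repeating pattern s forever from (0, 0),
    ever stand on (a, b)?"""
    step = {'N': (0, 1), 'E': (1, 0), 'S': (0, -1), 'W': (-1, 0)}
    pos = [(0, 0)]
    x = y = 0
    for c in s:
        dx, dy = step[c]
        x += dx
        y += dy
        pos.append((x, y))
    xn, yn = x, y                       # displacement of one full run
    for xi, yi in pos:
        if (xi, yi) == (a, b):          # reached during the first run
            return True
        if (xn, yn) == (0, 0):
            continue                    # later runs retrace the first one
        # need xi + t*xn == a and yi + t*yn == b for some integer t >= 1
        if xn != 0:
            if (a - xi) % xn:
                continue
            t = (a - xi) // xn
        else:
            if xi != a or (b - yi) % yn:
                continue
            t = (b - yi) // yn
        if t >= 1 and xi + t * xn == a and yi + t * yn == b:
            return True
    return False
```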
[ "brute force", "implementation", "math" ]
900
def solve(): [n, a, b] = list(map(int, input().split())) s = str(input()) x, y = 0, 0 for __ in range(100): for c in s: if c == 'N': y += 1 elif c == 'E': x += 1 elif c == 'S': y -= 1 else: x -= 1 if x == a and y == b: print("YES") return print("NO") t = int(input()) for _ in range(t): solve()
2028
B
Alice's Adventures in Permuting
Alice mixed up the words transmutation and permutation! She has an array $a$ specified via three integers $n$, $b$, $c$: the array $a$ has length $n$ and is given via $a_i = b\cdot (i - 1) + c$ for $1\le i\le n$. For example, if $n=3$, $b=2$, and $c=1$, then $a=[2 \cdot 0 + 1, 2 \cdot 1 + 1, 2 \cdot 2 + 1] = [1, 3, 5]$. Now, Alice really enjoys permutations of $[0, \ldots, n-1]$$^{\text{∗}}$ and would like to transform $a$ into a permutation. In one operation, Alice replaces the maximum element of $a$ with the $\operatorname{MEX}$$^{\text{†}}$ of $a$. If there are multiple maximum elements in $a$, Alice chooses the leftmost one to replace. Can you help Alice figure out how many operations she has to do for $a$ to become a permutation for the first time? If it is impossible, you should report it. \begin{footnotesize} $^{\text{∗}}$A permutation of length $n$ is an array consisting of $n$ distinct integers from $0$ to $n-1$ in arbitrary order. \textbf{Please note, this is slightly different from the usual definition of a permutation.} For example, $[1,2,0,4,3]$ is a permutation, but $[0,1,1]$ is not a permutation ($1$ appears twice in the array), and $[0,2,3]$ is also not a permutation ($n=3$ but there is $3$ in the array). $^{\text{†}}$The $\operatorname{MEX}$ of an array is the smallest non-negative integer that does not belong to the array. For example, the $\operatorname{MEX}$ of $[0, 3, 1, 3]$ is $2$ and the $\operatorname{MEX}$ of $[5]$ is $0$. \end{footnotesize}
Suppose that $b = 0$. Then, if $c \ge n$, the answer is $n$; if $c = n - 1$ or $c = n - 2$, the answer is $n - 1$; and otherwise, it is $-1$ (for example, consider $c = n - 3$, in which case we will end up with $a = [0, 1, \ldots, n - 4, n - 3, n - 3, n - 3] \rightarrow [0, 1, \ldots, n - 4, n - 3, n - 3, n - 2] \rightarrow [0, 1, \ldots, n - 4, n - 3, n - 3, n - 1]$, after which the last two states alternate forever, so $a$ never becomes a permutation). Otherwise, since $a$ has distinct elements, we claim that the answer is $n - m$, where $m$ is the number of elements in $0, 1, \ldots, n - 1$ already present in the array. Equivalently, it is the number of steps until $\max(a) < n$ since we always preserve the distinctness of the elements of $a$. So, we want to find the maximum $i$ such that $a_i < n$; since $a_i = b \cdot (i - 1) + c$, this happens exactly when $i - 1 < \frac{n - c}{b}$. The expected complexity is $O(1)$ per testcase.
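The full $O(1)$ casework can be sketched as below; `min_ops` is a hypothetical name, and the count $m$ follows the same formula as the model solution:

```python
def min_ops(n, b, c):
    """Operations until a = [b*(i-1)+c] becomes a permutation of 0..n-1, or -1."""
    if b == 0:                    # all elements equal c
        if c >= n:
            return n
        if c >= n - 2:            # c = n-1 or n-2 still converges
            return n - 1
        return -1                 # the last states cycle forever
    if c >= n:
        return n                  # no element is already in {0, ..., n-1}
    # m = number of 1-based indices i with a_i = b*(i-1) + c < n
    m = min(n, (n - c - 1) // b + 1)
    return n - m
```

On the statement's example $n=3$, $b=2$, $c=1$ (so $a=[1,3,5]$), only $a_1=1$ is already in range, giving $3 - 1 = 2$ operations.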
[ "binary search", "implementation", "math" ]
1,400
#include <bits/stdc++.h> using namespace std; using ll = long long; using ld = long double; using pii = pair<int, int>; using vi = vector<int>; #define rep(i, a, b) for(int i = a; i < (b); ++i) #define all(x) (x).begin(), (x).end() #define sz(x) (int)(x).size() #define smx(a, b) a = max(a, b) #define smn(a, b) a = min(a, b) #define pb push_back #define endl '\n' const ll MOD = 1e9 + 7; const ld EPS = 1e-9; mt19937 rng(time(0)); int main() { cin.tie(0)->sync_with_stdio(0); int t; cin >> t; while (t--) { ll n, b, c; cin >> n >> b >> c; if (b == 0) { if (c >= n) { cout << n << "\n"; } else if (c >= n - 2) { cout << n - 1 << "\n"; } else { cout << -1 << "\n"; } } else { if (c >= n) cout << n << "\n"; else cout << n - max(0ll, 1 + (n - c - 1) / b) << "\n"; } } }
2028
C
Alice's Adventures in Cutting Cake
Alice is at the Mad Hatter's tea party! There is a long sheet cake made up of $n$ sections with tastiness values $a_1, a_2, \ldots, a_n$. There are $m$ creatures at the tea party, excluding Alice. Alice will cut the cake into $m + 1$ pieces. Formally, she will partition the cake into $m + 1$ subarrays, where each subarray consists of some number of adjacent sections. The tastiness of a piece is the sum of tastiness of its sections. Afterwards, she will divvy these $m + 1$ pieces up among the $m$ creatures and herself (her piece can be empty). However, each of the $m$ creatures will only be happy when the tastiness of its piece is $v$ or more. Alice wants to make sure every creature is happy. Limited by this condition, she also wants to maximize the tastiness of her own piece. Can you help Alice find the maximum tastiness her piece can have? If there is no way to make sure every creature is happy, output $-1$.
Alice's piece of cake will be some subsegment $a[i:j]$. For a fixed $i$, how large can $j$ be? To determine this, let $pfx[i]$ be the maximum number of creatures that can be fed on $a[:i]$ and $sfx[j]$ the maximum number of creatures on $a[j:]$. Then, for a given $i$, the maximum possible $j$ is exactly the largest $j$ such that $pfx[i] + sfx[j] \ge m$. If we compute the $pfx$ and $sfx$ arrays, we can then compute these largest $j$ for all $i$ with two pointers in $O(n)$ (or with binary search in $O(n\log n)$, since $sfx$ is monotonically non-increasing). To compute $pfx[i]$, we can use prefix sums and binary search to find the maximum $k < i$ such that $\sum_{\ell=k}^{i}a[\ell] \ge v$: then $pfx[i] = 1 + pfx[k]$. We can compute $sfx$ similarly by reversing $a$. This takes time $O(n\log n)$ (it is also possible to do this with two pointers in $O(n)$, which you can see in the model solution). Expected complexity: $O(n)$ or $O(n\log n)$.
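The prefix/suffix counts and the final two-pointer scan can be sketched as follows. `max_tastiness` is a hypothetical name; the sketch uses the greedy "cut a piece as soon as its sum reaches $v$", which is what makes the prefix counts maximal:

```python
def max_tastiness(m, v, a):
    """Maximum sum Alice can keep while feeding m creatures pieces of sum >= v."""
    n = len(a)

    def counts(arr):
        # res[i] = max creatures fed using only arr[:i]
        res = [0] * (len(arr) + 1)
        s = cnt = 0
        for i, x in enumerate(arr):
            s += x
            if s >= v:            # greedy: close the piece immediately
                cnt += 1
                s = 0
            res[i + 1] = cnt
        return res

    pre = counts(a)
    suf = counts(a[::-1])[::-1]   # suf[j] = max creatures fed on a[j:]
    if pre[n] < m:
        return -1
    sums = [0] * (n + 1)
    for i in range(n):
        sums[i + 1] = sums[i] + a[i]
    best, j = 0, 0
    for i in range(n + 1):        # Alice takes the subsegment a[i:j]
        j = max(j, i)
        while j < n and pre[i] + suf[j + 1] >= m:
            j += 1                # extend while the creatures stay happy
        if pre[i] + suf[j] >= m:
            best = max(best, sums[j] - sums[i])
    return best
```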
[ "binary search", "dp", "greedy", "two pointers" ]
1,600
#include <bits/stdc++.h> using namespace std; using ll = long long; using ld = long double; using pii = pair<int, int>; using vi = vector<int>; #define rep(i, a, b) for(int i = a; i < (b); ++i) #define all(x) (x).begin(), (x).end() #define sz(x) (int)(x).size() #define smx(a, b) a = max(a, b) #define smn(a, b) a = min(a, b) #define pb push_back #define endl '\n' const ll MOD = 1e9 + 7; const ld EPS = 1e-9; mt19937 rng(time(0)); int main() { cin.tie(0)->sync_with_stdio(0); int t; cin >> t; while (t--) { int n, m; cin >> n >> m; ll v; cin >> v; vector<ll> a(n); rep(i,0,n) cin >> a[i]; vector<ll> sums(n + 1); rep(i,0,n) sums[i + 1] = sums[i] + a[i]; auto query = [&](int i, int j) { // [i, j) return sums[j] - sums[i]; }; auto compute_pfx = [&]() -> vector<int> { vector<int> pfx(n + 1, 0); int end = 0, val = 0; ll sum = 0; for (int start = 0; start < n; start++) { while (end < n && sum < v) { sum += a[end]; ++end; pfx[end] = max(pfx[end], pfx[end - 1]); } if (sum >= v) { pfx[end] = 1 + pfx[start]; } sum -= a[start]; } rep(i,1,n+1) { pfx[i] = max(pfx[i], pfx[i - 1]); } return pfx; }; auto pfx = compute_pfx(); reverse(all(a)); auto sfx = compute_pfx(); reverse(all(a)); reverse(all(sfx)); if (pfx[n] < m) { cout << "-1\n"; continue; } int end = 0; ll ans = 0; for (int start = 0; start < n; start++) { while (end < n && pfx[start] + sfx[end + 1] >= m) ++end; if (pfx[start] + sfx[end] >= m) ans = max(ans, query(start, end)); } cout << ans << "\n"; } }
2028
D
Alice's Adventures in Cards
Alice is playing cards with the Queen of Hearts, King of Hearts, and Jack of Hearts. There are $n$ different types of cards in their card game. Alice currently has a card of type $1$ and needs a card of type $n$ to escape Wonderland. The other players have one of each kind of card. In this card game, Alice can trade cards with the three other players. Each player has different preferences for the $n$ types of cards, which can be described by permutations$^{\text{∗}}$ $q$, $k$, and $j$ for the Queen, King, and Jack, respectively. A player values card $a$ more than card $b$ if for their permutation $p$, $p_a > p_b$. Then, this player is willing to trade card $b$ to Alice in exchange for card $a$. Alice's preferences are straightforward: she values card $a$ more than card $b$ if $a > b$, and she will also only trade according to these preferences. Determine if Alice can trade up from card $1$ to card $n$ subject to these preferences, and if it is possible, give a possible set of trades to do it. \begin{footnotesize} $^{\text{∗}}$A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array). \end{footnotesize}
We will use DP to answer the following question for each $a$ from $n$ down to $1$: is it possible to trade up from card $a$ to card $n$? To answer this question efficiently, we need to determine for each of the three players whether there exists some $b > a$ such that $p(b) < p(a)$ and it is possible to trade up from card $b$ to card $n$. We can do this efficiently by keeping track for each player the minimum value $x = p(b)$ over all $b$ that can reach $n$: then, for a given $a$ we can check for each player if $p(a)$ exceeds $x$. If it does for some player, we can then update the values $x$ for each of the three players. Alongside these minimum values $x$ we can keep track of the $b$ achieving them to be able to reconstruct a solution. This takes time $O(n)$ since each iteration takes time $O(1)$. Solve the same problem, but now with the additional requirement that the solution must use the minimum number of trades (same constraints).
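The backwards scan with reconstruction can be sketched as follows, mirroring the model solution's layout (1-indexed permutations with a dummy entry at index $0$; `trade_up` is a hypothetical name):

```python
def trade_up(n, q, k, j):
    """Return a list of (player, card) trades taking Alice from card 1 to n,
    or None if impossible. q, k, j are 1-indexed lists of length n+1."""
    perms = [q, k, j]
    names = "qkj"
    mins = [n, n, n]          # card reaching n minimizing each player's value
    nxt = [None] * (n + 1)    # (player, card traded to), for reconstruction
    for a in range(n - 1, 0, -1):
        win = -1
        for t in range(3):
            if perms[t][a] > perms[t][mins[t]]:
                win = t       # player t will give card mins[t] for card a
        if win == -1:
            continue          # card a cannot reach n
        nxt[a] = (names[win], mins[win])
        for t in range(3):    # a itself now reaches n; update the minima
            if perms[t][a] < perms[t][mins[t]]:
                mins[t] = a
    if n > 1 and nxt[1] is None:
        return None
    path, a = [], 1
    while a != n:
        path.append(nxt[a])
        a = nxt[a][1]
    return path
```

For example, with $q=[1,3,2]$, $k=[2,1,3]$, $j=[1,2,3]$ (padded with a leading $0$), Alice trades $1 \to 2$ with the King and $2 \to 3$ with the Queen.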
[ "constructive algorithms", "data structures", "dp", "graphs", "greedy", "implementation", "ternary search" ]
2,000
#include <bits/stdc++.h> using namespace std; using ll = long long; using ld = long double; using pii = pair<int, int>; using vi = vector<int>; #define rep(i, a, b) for(int i = a; i < (b); ++i) #define all(x) (x).begin(), (x).end() #define sz(x) (int)(x).size() #define smx(a, b) a = max(a, b) #define smn(a, b) a = min(a, b) #define pb push_back #define endl '\n' const ll MOD = 1e9 + 7; const ld EPS = 1e-9; mt19937 rng(time(0)); int main() { cin.tie(0)->sync_with_stdio(0); int t; cin >> t; std::string s = "qkj"; while (t--) { int n; cin >> n; vector p(3, vector<int>(n + 1)); rep(i,0,3) rep(j,1,n + 1) cin >> p[i][j]; vector<pair<char, int>> sol(n + 1, {'\0', -1}); array<int, 3> mins = {n, n, n}; // minimizing index for (int i = n - 1; i >= 1; i--) { int win = -1; rep(j,0,3) if (p[j][i] > p[j][mins[j]]) win = j; if (win == -1) continue; sol[i] = {s[win], mins[win]}; rep(j,0,3) if (p[j][i] < p[j][mins[j]]) mins[j] = i; } if (sol[1].second == -1) { cout << "NO\n"; continue; } cout << "YES\n"; vector<pair<char, int>> ans = {sol[1]}; while (ans.back().second >= 0) { ans.push_back(sol[ans.back().second]); } ans.pop_back(); cout << sz(ans) << "\n"; for (auto && [c, i] : ans) { cout << c << " " << i << "\n"; } } }
2028
E
Alice's Adventures in the Rabbit Hole
Alice is at the bottom of the rabbit hole! The rabbit hole can be modeled as a tree$^{\text{∗}}$ which has an exit at vertex $1$, and Alice starts at some vertex $v$. She wants to get out of the hole, but unfortunately, the Queen of Hearts has ordered her execution. Each minute, a fair coin is flipped. If it lands heads, Alice gets to move to an adjacent vertex of her current location, and otherwise, the Queen of Hearts gets to pull Alice to an adjacent vertex of the Queen's choosing. If Alice ever ends up on any of the non-root leaves$^{\text{†}}$ of the tree, Alice loses. Assuming both of them move optimally, compute the probability that Alice manages to escape for every single starting vertex $1\le v\le n$. Since these probabilities can be very small, output them modulo $998\,244\,353$. Formally, let $M = 998\,244\,353$. It can be shown that the exact answer can be expressed as an irreducible fraction $\frac{p}{q}$, where $p$ and $q$ are integers and $q \not \equiv 0 \pmod{M}$. Output the integer equal to $p \cdot q^{-1} \bmod M$. In other words, output such an integer $x$ that $0 \le x < M$ and $x \cdot q \equiv p \pmod{M}$. \begin{footnotesize} $^{\text{∗}}$A tree is a connected simple graph which has $n$ vertices and $n-1$ edges. $^{\text{†}}$A leaf is a vertex that is connected to exactly one edge. \end{footnotesize}
Note that Alice should always aim to move to the root, as this maximizes her probability of escaping (i.e., for any path from leaf to root, Alice's probability of escaping increases moving up the path). Furthermore, the Queen should always move downward for the same reason as above. Furthermore, the Queen should always move to the closest leaf in the subtree rooted at the current node. There are now a few ways to compute this, but basically all of them are $O(n)$. For one such slick way, define $d(v)$ to be the distance to the closest leaf of the subtree of $v$, $p(v)$ to be the parent of $v$, and $t(v)$ to be the probability Alice gets the treasure starting at node $v$. Then, we claim that $t(v) = \frac{d(v)}{d(v)+1}\cdot t(p(v))$ for all vertices except the root. We can populate these values as we DFS down the tree. Indeed, suppose that the tree is just a path with $d + 1$ vertices, labelled from $1$ through $d + 1$. Then, Alice's probability of getting the treasure at node $i$ is $1 - \frac{i - 1}{d}$ (this comes from solving a system of linear equations). This lines up with the above $t(v)$ calculation. Now, we can construct the answer inductively. Let $P$ be a shortest root-to-leaf path in $T$ and consider the subtrees formed by removing the edges of $P$ from $T$. Each such subtree is rooted at some node $v\in P$. Then, in a subtree $T'$ rooted at $v$, the probability of Alice getting the treasure is exactly the probability that Alice gets to node $v$ and then to node $1$ from $v$, by the chain rule of conditioning and noting that the Queen will never re-enter the subtree $T'$ and the game will only play out along $P$ from this point forward. Therefore, it suffices to deconstruct $T$ into shortest root-to-leaf paths and note that at a given vertex $v$, we only have to play along the sequence of shortest root-to-leaf paths leading from $v$ to $1$. Along each of these paths, the above probability calculation holds, so we are done.
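The recurrence $t(v) = \frac{d(v)}{d(v)+1}\cdot t(p(v))$ can be checked with exact rational arithmetic; this is a sketch using `fractions` in place of the modular inverses of the real problem, with `escape_probs` as a hypothetical name:

```python
from fractions import Fraction

def escape_probs(n, edges):
    """t[v] = escape probability from v via t(v) = d(v)/(d(v)+1) * t(parent(v)),
    where d(v) is the distance to the closest leaf in v's subtree."""
    adj = [[] for _ in range(n)]
    for x, y in edges:
        adj[x].append(y)
        adj[y].append(x)
    par = [-1] * n
    order = []
    seen = [False] * n
    seen[0] = True
    stack = [0]
    while stack:                          # iterative DFS from the root
        u = stack.pop()
        order.append(u)
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                par[v] = u
                stack.append(v)
    d = [0] * n
    for u in reversed(order):             # children come before parents
        ch = [v for v in adj[u] if v != par[u]]
        d[u] = 1 + min(d[v] for v in ch) if ch else 0
    t = [Fraction(0)] * n
    t[0] = Fraction(1)
    for u in order[1:]:                   # parents come before children
        t[u] = t[par[u]] * d[u] / (d[u] + 1)
    return t
```

On a path with $4$ vertices this reproduces the closed form $1 - \frac{i-1}{d}$, i.e. $1, \frac{2}{3}, \frac{1}{3}, 0$.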
[ "combinatorics", "dfs and similar", "dp", "games", "greedy", "math", "probabilities", "trees" ]
2,300
#include <bits/stdc++.h> using namespace std; using ll = long long; using ld = long double; using pii = pair<int, int>; using vi = vector<int>; #define rep(i, a, b) for(int i = a; i < (b); ++i) #define all(x) (x).begin(), (x).end() #define sz(x) (int)(x).size() #define smx(a, b) a = max(a, b) #define smn(a, b) a = min(a, b) #define pb push_back #define endl '\n' const ll MOD = 1e9 + 7; const ld EPS = 1e-9; // mt19937 rng(time(0)); ll euclid(ll a, ll b, ll &x, ll &y) { if (!b) return x = 1, y = 0, a; ll d = euclid(b, a % b, y, x); return y -= a/b * x, d; } const ll mod = 998244353; struct mint { ll x; mint(ll xx) : x(xx) {} mint operator+(mint b) { return mint((x + b.x) % mod); } mint operator-(mint b) { return mint((x - b.x + mod) % mod); } mint operator*(mint b) { return mint((x * b.x) % mod); } mint operator/(mint b) { return *this * invert(b); } mint invert(mint a) { ll x, y, g = euclid(a.x, mod, x, y); assert(g == 1); return mint((x + mod) % mod); } mint operator^(ll e) { if (!e) return mint(1); mint r = *this ^ (e / 2); r = r * r; return e&1 ? *this * r : r; } }; void solve() { int n; cin >> n; vector<vector<int>> t(n); rep(i,0,n-1) { int x, y; cin >> x >> y; --x, --y; t[x].push_back(y); t[y].push_back(x); } vector<int> d(n, n + 1); function<int(int, int)> depths = [&](int curr, int par) { for (auto v : t[curr]) { if (v == par) continue; d[curr] = min(d[curr], 1 + depths(v, curr)); } if (d[curr] > n) d[curr] = 0; return d[curr]; }; depths(0, -1); vector<mint> ans(n, 0); function<void(int, int, mint)> dfs = [&](int curr, int par, mint val) { ans[curr] = val; for (auto v : t[curr]) { if (v == par) continue; dfs(v, curr, val * d[v] / (d[v] + 1)); } }; dfs(0, -1, mint(1)); for (auto x : ans) { cout << x.x << " "; } cout << "\n"; } int main() { cin.tie(0)->sync_with_stdio(0); int t; cin >> t; while (t--) solve(); }
2028
F
Alice's Adventures in Addition
\textbf{Note that the memory limit is unusual.} The Cheshire Cat has a riddle for Alice: given $n$ integers $a_1, a_2, \ldots, a_n$ and a target $m$, is there a way to insert $+$ and $\times$ into the circles of the expression $$a_1 \circ a_2 \circ \cdots \circ a_n = m$$ to make it true? We follow the usual order of operations: $\times$ is done before $+$. Although Alice is excellent at chess, she is not good at math. Please help her so she can find a way out of Wonderland!
Let $dp[i][j]$ be whether $a_1\circ a_2\circ \ldots \circ a_i = j$ can be satisfied. Then, let's case on $a_i$. If $a_i = 0$, then $dp[i][j] = dp[i - 1][j] \lor dp[i - 2][j]\lor \cdots \lor dp[0][j]$ (where we will take $dp[0][j]= \boldsymbol{1}[j = 0]$, the indicator that $j = 0$). This is because we can multiply together any suffix to form $0$. We can do this in time $O(1)$ by keeping these prefix ORs. If $a_i = 1$, then $dp[i][j] = dp[i - 1][j] \lor dp[i - 1][j - 1]$ (since $m \ge 1$ we don't have to worry about accidentally allowing $0$s). This is because we can either multiply $a_{i-1}$ by $1$ or add $1$. Otherwise, note that we can only multiply together at most $\log_2(j)$ many $a_i > 1$ before the result exceeds $j$. So, for each $i$ let $\text{back}(i)$ denote the biggest $k < i$ such that $a_k \neq 1$. Then, we can write $dp[i][j] = dp[i - 1][j - a_i] \lor dp[\text{back}(i) - 1][j - a_i \cdot a_{\text{back}(i)}] \lor \cdots$, where we continue until we either reach $i = 0$, the running product hits $0$, or the running product exceeds $j$. There is one special case: if $a_k = 0$ for any $k < i$, then we should also allow $dp[i][j] |= dp[k - 1][j] \lor dp[k - 2][j] \lor \cdots \lor dp[0][j]$. We can keep track of the last time $a_k = 0$ and use the same prefix OR idea as above. Note that all of these operations are "batch" operations: that is, we can do them for all $j$ simultaneously for a given $i$. Thus, this gives a bitset solution in time $O(\frac{nm\log m}{w})$ and with space complexity $O(\frac{nm}{w})$. However, this uses too much space. We can optimize the space complexity to only store $\log_2(m) + 4$ bitsets (with some extra integers) instead. 
To do this, note that for the $0$ case we only require one bitset which we can update after each $i$, for the $1$ case we only have to keep the previous bitset, and for the $> 1$ case we only need to store the most recent $\log_2(m)$ bitsets for indices $i$ with $a_{i+1} > 1$. We can keep this in a deque and pop from the back if the size exceeds $\log_2(m)$. For the special case at the end, we can keep track of a prefix bitset for the last occurrence of a $0$. Overall, this uses space complexity $O(\frac{m\log_2 m}{w})$ which is sufficient (interestingly, we don't even have to store the input!)
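Python's arbitrary-precision integers can stand in for the bitsets, which makes the rolling-window DP easy to sketch: bit $j$ of a mask means "value $j$ is reachable". `can_make` is a hypothetical name, and the window bound is an assumption mirroring the model's fixed `LOG`:

```python
def can_make(a, m):
    """Can '+' and '*' be inserted into a_1 ... a_n so the expression equals m?"""
    LOG = m.bit_length() + 2        # products of >1 terms overflow m after ~log m
    mask = (1 << (m + 1)) - 1       # only keep values 0..m
    prev = 1                        # dp before any element: only value 0
    pfx = 1                         # OR of all dp rows so far
    zero = 0                        # pfx as it stood at the last a_i = 0
    q = []                          # newest-first: (a_i, dp row just before i)
    for x in a:
        curr = zero                 # multiplying through the last 0 gives 0
        if x == 0:
            curr |= pfx
            zero = curr
            q.insert(0, (0, prev))
        elif x == 1:
            curr |= prev | (prev << 1)      # multiply by 1, or add 1
        else:
            q.insert(0, (x, prev))
            prod = 1
            for val, row in q:              # extend the trailing product
                if prod == 0 or prod * val > m:
                    break
                prod *= val
                curr |= row << prod
        curr &= mask
        pfx |= curr
        prev = curr
        if len(q) > LOG:
            q.pop()
    return bool((prev >> m) & 1)
```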
[ "bitmasks", "brute force", "dp", "implementation" ]
2,700
#include <bits/stdc++.h> using namespace std; using ll = long long; using ld = long double; using pii = pair<int, int>; using vi = vector<int>; #define rep(i, a, b) for(int i = a; i < (b); ++i) #define all(x) (x).begin(), (x).end() #define sz(x) (int)(x).size() #define smx(a, b) a = max(a, b) #define smn(a, b) a = min(a, b) #define pb push_back #define endl '\n' const ll MOD = 1e9 + 7; const ld EPS = 1e-9; mt19937 rng(time(0)); const int LOG = 14; const int MAX = 10000 + 1; int main() { cin.tie(0)->sync_with_stdio(0); int t; cin >> t; while (t--) { int n, m; cin >> n >> m; bitset<MAX> prev(1), pfx(1), zero(0); list<pair<int, bitset<MAX>>> q; rep(i,0,n) { int x; cin >> x; bitset<MAX> curr = zero; if (x == 0) { curr |= pfx; zero = curr; q.push_front({0, prev}); } else if (x == 1) { curr |= prev | (prev << 1); } else { int prod = 1; q.push_front({x, prev}); for (auto const& val : q) { if (prod == 0 || prod * val.first > m) break; prod *= val.first; curr |= val.second << prod; } } pfx |= curr; prev = curr; if (sz(q) > LOG) q.pop_back(); } cout << (prev[m] ? "YES" : "NO") << "\n"; } }
2029
A
Set
You are given a positive integer $k$ and a set $S$ of all integers from $l$ to $r$ (inclusive). You can perform the following two-step operation any number of times (possibly zero): - First, choose a number $x$ from the set $S$, such that there are at least $k$ multiples of $x$ in $S$ (including $x$ itself); - Then, remove $x$ from $S$ (note that nothing else is removed). Find the maximum possible number of operations that can be performed.
Greedy from small to large. We can delete the numbers from small to large. Thus, previously removed numbers will not affect future choices (if $x<y$, then $x$ cannot be a multiple of $y$). So an integer $x$ ($l\le x\le r$) can be removed if and only if $k\cdot x\le r$, that is, $x\le \left\lfloor\frac{r}{k}\right\rfloor$. The answer is $\max\left(\left\lfloor\frac{r}{k}\right\rfloor-l+1,0\right)$. Time complexity: $\mathcal{O}(1)$ per test case.
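A quick hedged sketch in Python (tiny bounds only, not the intended $\mathcal{O}(1)$ solution): simulate the small-to-large greedy described above and compare it with the closed form.

```python
def greedy(l, r, k):
    """Remove numbers small-to-large whenever >= k multiples remain in S."""
    S = set(range(l, r + 1))
    ops = 0
    for x in range(l, r + 1):
        if sum(1 for y in range(x, r + 1, x) if y in S) >= k:
            S.remove(x)
            ops += 1
    return ops

# exhaustive comparison against the formula on small ranges
for l in range(1, 15):
    for r in range(l, 15):
        for k in range(1, 6):
            assert greedy(l, r, k) == max(r // k - l + 1, 0)
```

This checks the counting step; optimality of the small-to-large order is what the argument above establishes.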
[ "greedy", "math" ]
800
for _ in range(int(input())): l, r, k = map(int, input().split()) print(max(r // k - l + 1, 0))
2029
B
Replacement
You have a binary string$^{\text{∗}}$ $s$ of length $n$, and Iris gives you another binary string $r$ of length $n-1$. Iris is going to play a game with you. During the game, you will perform $n-1$ operations on $s$. In the $i$-th operation ($1 \le i \le n-1$): - First, you choose an index $k$ such that $1\le k\le |s| - 1$ and $s_{k} \neq s_{k+1}$. If it is impossible to choose such an index, you lose; - Then, you replace $s_ks_{k+1}$ with $r_i$. Note that this decreases the length of $s$ by $1$. If all the $n-1$ operations are performed successfully, you win. Determine whether it is possible for you to win this game. \begin{footnotesize} $^{\text{∗}}$A binary string is a string where each character is either $\mathtt{0}$ or $\mathtt{1}$. \end{footnotesize}
($\texttt{01}$ or $\texttt{10}$ exists) $\Longleftrightarrow$ (both $\texttt{0}$ and $\texttt{1}$ exist). Each time we do an operation, if $s$ consists only of $\texttt{0}$-s or only of $\texttt{1}$-s, we surely cannot find any valid index. Otherwise, we can always perform the operation successfully. In the $i$-th operation, if $r_i=\texttt{0}$, we actually decrease the number of $\texttt{1}$-s by $1$ (we delete one $\texttt{0}$ and one $\texttt{1}$ and insert a $\texttt{0}$), and vice versa. Thus, we only need to maintain the number of $\texttt{0}$-s and $\texttt{1}$-s in $s$. If either count drops to $0$ while operations remain, the answer is NO; otherwise, the answer is YES. Time complexity: $\mathcal{O}(n)$ per test case.
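For tiny $n$, the counting criterion can be cross-checked against an exhaustive search over every possible play-out; a verification sketch in Python (not needed for the actual solution):

```python
from itertools import product

def wins_fast(s, r):
    """Criterion from above: both characters must survive before each operation."""
    one, zero = s.count('1'), s.count('0')
    for c in r:
        if one == 0 or zero == 0:
            return False
        one -= 1; zero -= 1          # a '01' or '10' pair disappears...
        if c == '1': one += 1        # ...and r_i takes its place
        else: zero += 1
    return True

def wins_brute(s, r):
    """Try every legal index k at every step."""
    if not r:
        return True
    return any(wins_brute(s[:k] + r[0] + s[k + 2:], r[1:])
               for k in range(len(s) - 1) if s[k] != s[k + 1])

for n in range(2, 6):
    for sb in product('01', repeat=n):
        for rb in product('01', repeat=n - 1):
            s, r = ''.join(sb), ''.join(rb)
            assert wins_fast(s, r) == wins_brute(s, r)
```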
[ "constructive algorithms", "games", "strings" ]
1,100
for _ in range(int(input())): n = int(input()) s = input() one = s.count("1") zero = s.count("0") ans = "YES" for ti in input(): if one == 0 or zero == 0: ans = "NO" break one -= 1 zero -= 1 if ti == "1": one += 1 else: zero += 1 print(ans)
2029
C
New Rating
\begin{quote} {{Hello, \sout{Codeforces} Forcescode!}} \end{quote} Kevin used to be a participant of Codeforces. Recently, the KDOI Team has developed a new Online Judge called Forcescode. Kevin has participated in $n$ contests on Forcescode. In the $i$-th contest, his performance rating is $a_i$. Now he has hacked into the backend of Forcescode and will select an interval $[l,r]$ ($1\le l\le r\le n$), then skip all of the contests in this interval. After that, his rating will be recalculated in the following way: - Initially, his rating is $x=0$; - For each $1\le i\le n$, after the $i$-th contest, - If $l\le i\le r$, this contest will be skipped, and the rating will remain unchanged; - Otherwise, his rating will be updated according to the following rules: - If $a_i>x$, his rating $x$ will increase by $1$; - If $a_i=x$, his rating $x$ will remain unchanged; - If $a_i<x$, his rating $x$ will decrease by $1$. You have to help Kevin to find his maximum possible rating after the recalculation if he chooses the interval $[l,r]$ optimally. Note that Kevin has to skip at least one contest.
Binary search. Do something backward. First, do binary search on the answer. Suppose we're checking whether the answer can be $\ge k$ now. Let $f_i$ be the current rating after participating in the $1$-st to the $i$-th contest (without skipping). Let $g_i$ be the minimum rating before the $i$-th contest to make sure that the final rating is $\ge k$ (without skipping). $f_i$-s can be calculated easily by simulating the process in the statement. For $g_i$-s, it can be shown that $g_i= \begin{cases} g_{i+1}-1, & a_i\ge g_{i+1}\\ g_{i+1}+1, & a_i< g_{i+1} \end{cases}$ where $g_{n+1}=k$. Then, we should check if there exists an interval $[l,r]$ ($1\le l\le r\le n$), such that $f_{l-1}\ge g_{r+1}$. If so, we can choose to skip $[l,r]$ and get a rating of $\ge k$. Otherwise, it is impossible to make the rating $\ge k$. We can enumerate on $r$ and use a prefix max to check whether valid $l$ exists. Time complexity: $\mathcal{O}(n\log n)$ per test case. Consider DP. There are only three possible states for each contest: before, in, or after the skipped interval. Consider $dp_{i,0/1/2}=$ the maximum rating after the $i$-th contest, where the $i$-th contest is before/in/after the skipped interval. Let $f(a,x)=$ the result rating when current rating is $a$ and the performance rating is $x$, then $\begin{cases} dp_{i,0} = f(dp_{i-1,0}, a_i),\\ dp_{i,1} = \max(dp_{i-1,1}, dp_{i-1,0}), \\ dp_{i,2} = \max(f(dp_{i-1,1}, a_i), f(dp_{i-1,2}, a_i)). \end{cases}$ And the final answer is $\max(dp_{n,1}, dp_{n,2})$. Time complexity: $\mathcal{O}(n)$ per test case.
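The $\mathcal{O}(n)$ DP can be cross-checked against a direct enumeration of all skipped intervals on small random inputs; a Python sketch using the same update function $f$ as the model code:

```python
import random

def f(a, x):
    """One rating update: +1, 0, or -1 depending on performance x."""
    return a + (a < x) - (a > x)

def brute(perf):
    """Try every skipped interval [l, r] directly (0-based, inclusive)."""
    n = len(perf)
    best = -n
    for l in range(n):
        for r in range(l, n):
            cur = 0
            for i, v in enumerate(perf):
                if not (l <= i <= r):
                    cur = f(cur, v)
            best = max(best, cur)
    return best

def dp_solve(perf):
    """Linear DP: before / inside / after the skipped interval."""
    n = len(perf)
    d0, d1, d2 = 0, -n, -n             # -n acts as minus infinity
    for v in perf:
        d2 = max(f(d1, v), f(d2, v))   # order matters: uses the previous d1
        d1 = max(d1, d0)
        d0 = f(d0, v)
    return max(d1, d2)

random.seed(7)
for _ in range(500):
    n = random.randint(1, 8)
    perf = [random.randint(0, n) for _ in range(n)]
    assert brute(perf) == dp_solve(perf)
```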
[ "binary search", "data structures", "dp", "greedy" ]
1,700
for _ in range(int(input())): n = int(input()) def f(a, x): return a + (a < x) - (a > x) dp = [0, -n, -n] for x in map(int, input().split()): dp[2] = max(f(dp[1], x), f(dp[2], x)) dp[1] = max(dp[1], dp[0]) dp[0] = f(dp[0], x) print(max(dp[1], dp[2]))
2029
D
Cool Graph
You are given an undirected graph with $n$ vertices and $m$ edges. You can perform the following operation at most $2\cdot \max(n,m)$ times: - Choose three distinct vertices $a$, $b$, and $c$, then for each of the edges $(a,b)$, $(b,c)$, and $(c,a)$, do the following: - If the edge does not exist, add it. On the contrary, if it exists, remove it. A graph is called cool if and only if one of the following holds: - The graph has no edges, or - The graph is a tree. You have to make the graph cool by performing the above operations. Note that you can use at most $2\cdot \max(n,m)$ operations. It can be shown that there always exists at least one solution.
There are many different approaches to this problem. Only the easiest one (at least I think so) is shared here. Try to make the graph into a forest first. ($deg_i\le 1$ for every $i$) $\Longrightarrow$ (The graph is a forest). Let $d_i$ be the degree of vertex $i$. First, we keep doing the following until it is impossible: Choose a vertex $u$ with $d_u\ge 2$, then find any two vertices $v,w$ adjacent to $u$. Perform the operation on $(u,v,w)$. Since each operation decreases the number of edges by at least $1$, at most $m$ operations will be performed. After these operations, $d_i\le 1$ holds for every $i$. Thus, the resulting graph consists only of components with size $\le 2$. If there are no edges, the graph is already cool, and we don't need to do any more operations. Otherwise, let's pick an arbitrary edge $(u,v)$ as the base of the final tree, and then merge everything else to it. For a component with size $=1$ (i.e. it is a single vertex $w$), perform the operation on $(u, v, w)$, and set $(u, v) \gets (u, w)$. For a component with size $=2$ (i.e. it is an edge connecting $a$ and $b$), perform the operation on $(u, a, b)$. It is clear that the graph is transformed into a tree now. The total number of operations won't exceed $n+m\le 2\cdot \max(n,m)$. In the author's solution, we used some data structures to maintain the edges, thus, the time complexity is $\mathcal{O}(n+m\log m)$ per test case.
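Below is a hedged Python sketch of the whole construction, plus a checker that replays the operations and verifies the result is cool; variable names are mine, and the random tests use tiny graphs only:

```python
import random

def solve(n, edges):
    """Editorial construction: first make every degree <= 1 (each operation
    deletes at least one edge), then hang leftover components on a base edge."""
    adj = [set() for _ in range(n + 1)]
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    ops = []

    def op(a, b, c):
        ops.append((a, b, c))
        for x, y in ((a, b), (b, c), (c, a)):
            if y in adj[x]:
                adj[x].remove(y); adj[y].remove(x)
            else:
                adj[x].add(y); adj[y].add(x)

    for u in range(1, n + 1):          # other vertices' degrees never grow
        while len(adj[u]) >= 2:
            v, w = sorted(adj[u])[:2]
            op(u, v, w)

    pairs = [(u, min(adj[u])) for u in range(1, n + 1)
             if adj[u] and min(adj[u]) > u]
    singles = [u for u in range(1, n + 1) if not adj[u]]
    if pairs:
        x, y = pairs.pop()             # base edge of the final tree
        for w in singles:              # attach isolated vertices
            op(x, y, w)
            y = w                      # the base edge becomes (x, w)
        for a, b in pairs:             # attach remaining size-2 components
            op(y, a, b)
    return ops

def is_cool(n, edges, ops):
    """Replay the operations and check the result is empty or a tree."""
    E = set(frozenset(e) for e in edges)
    for a, b, c in ops:
        assert len({a, b, c}) == 3
        for e in ((a, b), (b, c), (c, a)):
            E ^= {frozenset(e)}        # toggle the edge
    if not E:
        return True
    if len(E) != n - 1:
        return False
    parent = list(range(n + 1))        # DSU: n-1 edges + acyclic => tree
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for e in E:
        a, b = tuple(e)
        ra, rb = find(a), find(b)
        if ra == rb:
            return False
        parent[ra] = rb
    return True

random.seed(3)
for _ in range(200):
    n = random.randint(3, 8)
    cand = [(u, v) for u in range(1, n + 1) for v in range(u + 1, n + 1)]
    edges = random.sample(cand, random.randint(0, len(cand)))
    ops = solve(n, edges)
    assert len(ops) <= 2 * max(n, len(edges))
    assert is_cool(n, edges, ops)
```

A single ascending pass suffices in phase one because an operation never increases the degree of a vertex other than the chosen $u$.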
[ "constructive algorithms", "data structures", "dfs and similar", "dsu", "graphs", "greedy", "trees" ]
1,900
#include <bits/stdc++.h> using namespace std; #ifdef DEBUG #include "debug.hpp" #else #define debug(...) (void)0 #endif using i64 = int64_t; constexpr bool test = false; int main() { cin.tie(nullptr)->sync_with_stdio(false); int t; cin >> t; for (int ti = 0; ti < t; ti += 1) { int n, m; cin >> n >> m; vector<set<int>> adj(n + 1); for (int i = 0, u, v; i < m; i += 1) { cin >> u >> v; adj[u].insert(v); adj[v].insert(u); } vector<tuple<int, int, int>> ans; for (int i = 1; i <= n; i += 1) { while (adj[i].size() >= 2) { int u = *adj[i].begin(); adj[i].erase(adj[i].begin()); int v = *adj[i].begin(); adj[i].erase(adj[i].begin()); adj[u].erase(i); adj[v].erase(i); ans.emplace_back(i, u, v); if (adj[u].contains(v)) { adj[u].erase(v); adj[v].erase(u); } else { adj[u].insert(v); adj[v].insert(u); } } } vector<int> s; vector<pair<int, int>> p; for (int i = 1; i <= n; i += 1) { if (adj[i].size() == 0) { s.push_back(i); } else if (*adj[i].begin() > i) { p.emplace_back(i, *adj[i].begin()); } } if (not p.empty()) { auto [x, y] = p.back(); p.pop_back(); for (int u : s) { ans.emplace_back(x, y, u); tie(x, y) = pair(x, u); } for (auto [u, v] : p) { ans.emplace_back(y, u, v); } } println("{}", ans.size()); for (auto [x, y, z] : ans) println("{} {} {}", x, y, z); } }
2029
E
Common Generator
For two integers $x$ and $y$ ($x,y\ge 2$), we will say that $x$ is a generator of $y$ if and only if $x$ can be transformed to $y$ by performing the following operation some number of times (possibly zero): - Choose a divisor $d$ ($d\ge 2$) of $x$, then increase $x$ by $d$. For example, - $3$ is a generator of $8$ since we can perform the following operations: $3 \xrightarrow{d = 3} 6 \xrightarrow{d = 2} 8$; - $4$ is a generator of $10$ since we can perform the following operations: $4 \xrightarrow{d = 4} 8 \xrightarrow{d = 2} 10$; - $5$ is not a generator of $6$ since we cannot transform $5$ into $6$ with the operation above. Now, Kevin gives you an array $a$ consisting of $n$ pairwise distinct integers ($a_i\ge 2$). You have to find an integer $x\ge 2$ such that for each $1\le i\le n$, $x$ is a generator of $a_i$, or determine that such an integer does not exist.
$2$ is powerful. Consider primes. How did you prove that $2$ can generate every integer except odd primes? Can you generalize it? In this problem, we do not take the integer $1$ into consideration. Claim 1. $2$ can generate every integer except odd primes. Proof. For a certain non-prime $x$, let $\operatorname{mind}(x)$ be the smallest divisor of $x$ greater than $1$ (i.e., its smallest prime factor). Then $x-\operatorname{mind}(x)$ must be an even number (the smallest prime factor has the same parity as $x$), which is $\ge 2$. So $x-\operatorname{mind}(x)$ can be generated by $2$ (by induction, since it is even), and $x$ can be generated by $x-\operatorname{mind}(x)$. Thus, $2$ is a generator of $x$. Claim 2. Primes can only be generated by themselves. Proof. If $v+d=q$ for a prime $q$, where $d\ge 2$ and $d\mid v$, then $d\mid q$, so $d=q$ and $v=0$, a contradiction. According to the above two claims, we can first check if there exist primes in the array $a$. If not, then $2$ is a common generator. Otherwise, let the prime be $p$; the only possible generator is $p$ itself (in particular, if $a$ contains two distinct primes, there is no common generator). So we only need to check whether $p$ is a generator of the remaining integers. For an even integer $x$, it is easy to see that $p$ is a generator of $x$ if and only if $x\ge 2\cdot p$. Claim 3. For a prime $p$ and an odd integer $x$, $p$ is a generator of $x$ if and only if $x - \operatorname{mind}(x)\ge 2\cdot p$. Proof. First, $x-\operatorname{mind}(x)$ is the largest integer other than $x$ itself that can generate $x$, and any number that generates an odd $x$ in one step must be even. Moreover, the even numbers generated by $p$ are exactly the even numbers $\ge 2\cdot p$ ($x-\operatorname{mind}(x)$ is even). That ends the proof. Thus, we have found a good way to check if a certain number can be generated from $p$. We can use the linear sieve to pre-calculate all the $\operatorname{mind}(i)$-s. Time complexity: $\mathcal{O}(\sum n+V)$, where $V=\max a_i$. Some other solutions with worse time complexity can also pass, such as $\mathcal{O}(V\log V)$ and $\mathcal{O}(t\sqrt{V})$.
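The three claims can be checked by brute force for small values: a BFS over everything a starting value can generate (values only increase, so intermediates never exceed the target) versus the closed-form criterion. A Python sketch with small, arbitrary limits:

```python
def spf(x):
    """Smallest prime factor (equals x when x is prime)."""
    d = 2
    while d * d <= x:
        if x % d == 0:
            return d
        d += 1
    return x

def reachable_from(x, limit):
    """All y <= limit generated by x (the value only grows, so this is exact)."""
    seen, stack = set(), [x]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        for d in range(2, v + 1):
            if v % d == 0 and v + d <= limit:
                stack.append(v + d)
    return seen

def criterion(p, y):
    """Editorial test: can the prime p generate y?"""
    if y == p:
        return True
    if y % 2 == 0:
        return y >= 2 * p
    return y - spf(y) >= 2 * p    # 0 >= 2p is false, so primes y != p fail

LIMIT = 80
# Claim 1: 2 generates everything except odd primes
gen2 = reachable_from(2, LIMIT)
assert gen2 == {y for y in range(2, LIMIT + 1) if not (y % 2 and spf(y) == y)}
# Claims 2 and 3 for a few small primes
for p in (2, 3, 5, 7, 11):
    gen = reachable_from(p, LIMIT)
    assert all((y in gen) == criterion(p, y) for y in range(2, LIMIT + 1))
```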
[ "brute force", "constructive algorithms", "math", "number theory" ]
2,100
#include <bits/stdc++.h> #define all(s) s.begin(), s.end() using namespace std; using ll = long long; using ull = unsigned long long; const int _N = 4e5 + 5; int vis[_N], pr[_N], cnt = 0; void init(int n) { vis[1] = 1; for (int i = 2; i <= n; i++) { if (!vis[i]) { pr[++cnt] = i; } for (int j = 1; j <= cnt && i * pr[j] <= n; j++) { vis[i * pr[j]] = pr[j]; if (i % pr[j] == 0) continue; } } } int T; void solve() { int n; cin >> n; vector<int> a(n + 1); for (int i = 1; i <= n; i++) cin >> a[i]; int p = 0; for (int i = 1; i <= n; i++) { if (!vis[a[i]]) p = a[i]; } if (!p) { cout << 2 << '\n'; return; } for (int i = 1; i <= n; i++) { if (a[i] == p) continue; if (vis[a[i]] == 0) { cout << -1 << '\n'; return; } if (a[i] & 1) { if (a[i] - vis[a[i]] < 2 * p) { cout << -1 << '\n'; return; } } else { if (a[i] < 2 * p) { cout << -1 << '\n'; return; } } } cout << p << '\n'; return; } int main() { ios::sync_with_stdio(false), cin.tie(0), cout.tie(0); init(400000); cin >> T; while (T--) { solve(); } }
2029
F
Palindrome Everywhere
You are given a cycle with $n$ vertices numbered from $0$ to $n-1$. For each $0\le i\le n-1$, there is an undirected edge between vertex $i$ and vertex $((i+1)\bmod n)$ with the color $c_i$ ($c_i=R$ or $B$). Determine whether the following condition holds for every pair of vertices $(i,j)$ ($0\le i<j\le n-1$): - There exists a palindrome route between vertex $i$ and vertex $j$. Note that the route may \textbf{not} be simple. Formally, there must exist a sequence $p=[p_0,p_1,p_2,\ldots,p_m]$ such that: - $p_0=i$, $p_m=j$; - For each $0\leq x\le m-1$, either $p_{x+1}=(p_x+1)\bmod n$ or $p_{x+1}=(p_{x}-1)\bmod n$; - For each $0\le x\le y\le m-1$ satisfying $x+y=m-1$, the edge between $p_x$ and $p_{x+1}$ has the same color as the edge between $p_y$ and $p_{y+1}$.
If there are both consecutive $\texttt{R}$-s and $\texttt{B}$-s, does the condition hold for all $(i,j)$? Why? Suppose that there are only consecutive $\texttt{R}$-s, check the parity of the number of $\texttt{R}$-s in each consecutive segment of $\texttt{R}$-s. If for each consecutive segment of $\texttt{R}$-s, the parity of the number of $\texttt{R}$-s is odd, does the condition hold for all $(i,j)$? Why? If for at least two of the consecutive segments of $\texttt{R}$-s, the parity of the number of $\texttt{R}$-s is even, does the condition hold for all $(i,j)$? Why? Is this the necessary and sufficient condition? Why? Don't forget some trivial cases like $\texttt{RRR...RB}$ and $\texttt{RRR...R}$. For each $k>n$ or $k\leq0$, let $c_k$ be $c_{k\bmod n}$. Lemma 1: If there are both consecutive $\texttt{R}$-s and $\texttt{B}$-s, the answer is NO. Proof 1: Suppose that $c_{i-1}=c_i=\texttt{R}$ and $c_{j-1}=c_j=\texttt{B}$, it's obvious that there doesn't exist a palindrome route between $i$ and $j$. Imagine there are two persons on vertex $i$ and $j$. They want to meet each other (they are on the same vertex or adjacent vertex) and can only travel through an edge of the same color. Lemma 2: Suppose that there are only consecutive $\texttt{R}$-s, if for each consecutive segment of $\texttt{R}$-s, the parity of the number of $\texttt{R}$-s is odd, the answer is NO. Proof 2: Suppose that $c_i=c_j=\texttt{B}$, $i\not\equiv j\pmod n$ and $c_{i+1}=c_{i+2}=\dots=c_{j-1}=\texttt{R}$. The two persons on $i$ and $j$ have to "cross" $\texttt{B}$ simultaneously. As for each consecutive segment of $\texttt{R}$-s, the parity of the number of $\texttt{R}$-s is odd, they can only get to the same side of their current consecutive segment of $\texttt{R}$-s. After "crossing" $\texttt{B}$, they will still be on different consecutive segments of $\texttt{R}$-s separated by exactly one $\texttt{B}$ and can only get to the same side. Thus, they will never meet. 
Lemma 3: Suppose that there are only consecutive $\texttt{R}$-s; if, for at least two of the consecutive segments of $\texttt{R}$-s, the parity of the number of $\texttt{R}$-s is even, the answer is NO. Proof 3: Suppose that $c_i=c_j=\texttt{B}$, $i\not\equiv j \pmod n$, and vertices $i$ and $j$ are both in consecutive segments of $\texttt{R}$-s with an even number of $\texttt{R}$-s. Let the starting points of the two persons be $i$ and $j-1$; then they won't be able to "cross" any $\texttt{B}$. Thus, they will never meet. The only case left is that there is exactly one consecutive segment of $\texttt{R}$-s with an even number of $\texttt{R}$-s. Lemma 4: Suppose that there are only consecutive $\texttt{R}$-s; if, for exactly one of the consecutive segments of $\texttt{R}$-s, the parity of the number of $\texttt{R}$-s is even, the answer is YES. Proof 4: Let the starting points of the two persons be $i,j$. Consider the following cases: Case 1: If vertices $i$ and $j$ are in the same consecutive segment of $\texttt{R}$-s, the two persons can meet each other by traveling through the $\texttt{R}$-s between them. Case 2: If vertices $i$ and $j$ are in different consecutive segments of $\texttt{R}$-s and both segments contain an odd number of $\texttt{R}$-s, the two persons may cross $\texttt{B}$-s in the way described in Proof 2. However, when one of them reaches a consecutive segment with an even number of $\texttt{R}$-s, the only thing they can do is let the one in the even segment cross the whole segment and "cross" the next $\texttt{B}$ in front, while letting the other one travel back and forth and "cross" the $\texttt{B}$ he just "crossed". Thus, unlike the situation in Proof 2, we successfully changed the side they can both get to, and thus they will be able to meet each other, as they are traveling toward each other and there are only odd segments between them. 
Case 3: If vertices $i$ and $j$ are in different consecutive segments of $\texttt{R}$-s and exactly one of the segments contains an odd number of $\texttt{R}$-s, we can move both of them to one side of their segments and reduce the situation to the one discussed in Case 2 (when one of them reaches a consecutive segment with an even number of $\texttt{R}$-s). As a result, the answer is YES if: at least $n-1$ of $c_1,c_2,\dots,c_n$ are the same (Hint 6), or, supposing that there are only consecutive $\texttt{R}$-s, exactly one of the consecutive segments of $\texttt{R}$-s contains an even number of $\texttt{R}$-s. Both conditions can be checked in $\mathcal{O}(n)$ time. Extra challenges: count the number of strings of length $n$ satisfying the condition; solve the problem for $c_i\in\{\texttt{A},\texttt{B},\dots,\texttt{Z}\}$, and solve the counting version.
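For small $n$, the whole criterion can be verified exhaustively: model the two "persons" as a pair state $(i,j)$ that moves simultaneously along equal-colored edges, accepting when they stand on the same or adjacent vertices (an odd-length route's middle edge pairs with itself, so its color is free). A Python sketch, with the fast check ported from the model solution:

```python
from collections import deque
from itertools import product

def brute(c):
    """A palindrome route between i and j exists iff two tokens starting at
    (i, j), moving simultaneously over equal-colored edges, can become equal
    (even route) or adjacent (odd route)."""
    n = len(c)
    good = {(i, j) for i in range(n) for j in range(n)
            if i == j or (i - j) % n in (1, n - 1)}
    q = deque(good)
    while q:                          # moves are reversible: BFS from targets
        i, j = q.popleft()
        for di, ei in ((1, c[i]), (-1, c[i - 1])):
            for dj, ej in ((1, c[j]), (-1, c[j - 1])):
                if ei == ej:
                    s = ((i + di) % n, (j + dj) % n)
                    if s not in good:
                        good.add(s); q.append(s)
    return len(good) == n * n

def fast(s):
    """Port of the model solution's O(n) check."""
    n = len(s)
    visr = any(s[i] == s[(i + 1) % n] == 'R' for i in range(n))
    visb = any(s[i] == s[(i + 1) % n] == 'B' for i in range(n))
    if visr and visb:
        return False
    if sum(s[i] == s[(i + 1) % n] for i in range(n)) == n:
        return True                   # all edges share one color
    if visb:                          # make R the only repeated color
        s = ''.join('R' if ch == 'B' else 'B' for ch in s)
    st = max(i for i in range(n) if s[i] == 'B') + 1
    runs, run = [], 0
    for t in range(n):
        if s[(st + t) % n] == 'B':
            runs.append(run); run = 0
        else:
            run += 1
    even = sum(r % 2 == 0 for r in runs)
    return len(runs) == 1 or even == 1

for n in range(3, 8):
    for cs in product('RB', repeat=n):
        assert brute(cs) == fast(''.join(cs))
```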
[ "constructive algorithms", "graphs", "greedy" ]
2,500
#include <bits/stdc++.h> using namespace std; void solve(){ int n; cin>>n; string s; cin>>s; int visr=0,visb=0,ok=0; for(int i=0;i<n;i++){ if(s[i]==s[(i+1)%n]){ if(s[i]=='R') visr=1; else visb=1; ok++; } } if(visr&visb){ cout<<"NO\n"; return ; } if(ok==n){ cout<<"YES\n"; return ; } if(visb) for(int i=0;i<n;i++) s[i]='R'+'B'-s[i]; int st=0; for(int i=0;i<n;i++) if(s[i]=='B') st=(i+1)%n; vector<int> vc; int ntot=0,cnt=0; for(int i=0,j=st;i<n;i++,j=(j+1)%n){ if(s[j]=='B') vc.push_back(ntot),cnt+=(ntot&1)^1,ntot=0; else ntot++; } if(vc.size()==1||cnt==1){ cout<<"YES\n"; return ; } cout<<"NO\n"; return ; } signed main(){ int t; cin>>t; while(t--) solve(); return 0; }
2029
G
Balanced Problem
There is an array $a$ consisting of $n$ integers. Initially, all elements of $a$ are equal to $0$. Kevin can perform several operations on the array. Each operation is one of the following two types: - Prefix addition — Kevin first selects an index $x$ ($1\le x\le n$), and then for each $1\le j\le x$, increases $a_j$ by $1$; - Suffix addition — Kevin first selects an index $x$ ($1\le x\le n$), and then for each $x\le j\le n$, increases $a_j$ by $1$. In the country of KDOI, people think that the integer $v$ is balanced. Thus, Iris gives Kevin an array $c$ consisting of $n$ integers and defines the beauty of the array $a$ as follows: - Initially, set $b=0$; - For each $1\le i\le n$, if $a_i=v$, add $c_i$ to $b$; - The beauty of $a$ is the final value of $b$. Kevin wants to maximize the beauty of $a$ after all the operations. However, he had already performed $m$ operations when he was sleepy. Now, he can perform an arbitrary number (possibly zero) of new operations. You have to help Kevin find the maximum possible beauty if he optimally performs the new operations. However, to make sure that you are not just rolling the dice, Kevin gives you an integer $V$, and you need to solve the problem for each $1\le v\le V$.
This problem has two approaches. The first one is the authors' solution, and the second one was found during testing. The array $a$ is constructed with the operations. How can we use this property? If we want to make all $a_i=x$, what is the minimum value of $x$? Use the property mentioned in Hint 1. (For the subproblem in Hint 2), try to find an algorithm related to the positions of $\texttt{L/R}$-s directly. (For the subproblem in Hint 2), the conclusion is that, the minimum $x$ equals $\text{# of }\texttt{L}\text{-s} + \text{# of }\texttt{R}\text{-s} - \text{# of adjacent }\texttt{LR}\text{-s}$. Think why. Go for DP. Read the hints first. Then, note that there are only $\mathcal{O}(V)$ useful positions: If (after the initial operations) $a_i>V$ or $a_{i}=a_{i-1}$, we can simply ignore $a_i$, or merge $c_i$ into $c_{i-1}$. Now let $dp(i,s)$ denote the answer when we consider the prefix of length $i$, and we have "saved" $s$ pairs of $\texttt{LR}$. Then, $dp(i,s)=\displaystyle\max_{j< i} dp(j,s-|\mathrm{cntL}(j,i-1)-\mathrm{cntR}(j+1,i)|)+c_i$ Write $\mathrm{cntL}$ and $\mathrm{cntR}$ as prefix sums: $dp(i,s)=\displaystyle\max_{j< i} dp(j,s-|\mathrm{preL}(i-1)-\mathrm{preR}(i)+\mathrm{preR}(j)-\mathrm{preL}(j-1)|)+c_i$ Do casework on the sign of the things inside the $\mathrm{abs}$, and you can maintain both cases with 1D Fenwick trees. Thus, you solved the problem in $\mathcal{O}(V^2\log V)$. Solve the problem for a single $v$ first. Don't think too much, just go straight for a DP solution. Does your time complexity in DP contain $n$ or $m$? In fact, both $n$ and $m$ are useless. There are only $\mathcal{O}(V)$ useful positions. Use some data structures to optimize your DP. Even $\mathcal{O}(v^2\log^2 v)$ is acceptable. Here is the final step: take a look at your DP carefully. Can you change the definition of states a little, so that it can get the answer for each $1\le v\le V$? 
First, note that there are only $\mathcal{O}(V)$ useful positions: If (after the initial operations) $a_i>V$ or $a_{i}=a_{i-1}$, we can simply ignore $a_i$, or merge $c_i$ into $c_{i-1}$. Now, let's solve the problem for a single $v$. Denote $dp(i,j,k)$ as the maximum answer when considering the prefix of length $i$, and there are $j$ prefix additions covering $i$, $k$ suffix additions covering $i$. Enumerate on $i$, and it is easy to show that the state changes if and only if $j+k+a_i=v$, and $dp(i,j,k)=\displaystyle\max_{p\le j,q\ge k} dp(i-1,p,q) + c_i$ You can use a 2D Fenwick tree to get the 2D prefix max. Thus, you solved the single $v$ case in $\mathcal{O}(v^2\log^2 v)$. In fact, we can process the DP in $\mathcal{O}(v^2 \log v)$ by further optimization: $dp(i,j,k)=\displaystyle\max_{p\le i-1,q\ge j,v-a_p-q\le k} dp(p,q,v-a_p-q) + c_i$ This only requires $a_p+q\ge a_i+j$ when $a_p\le a_i$, and $q\le j$ when $a_p\ge a_i$. So you can use 1D Fenwick trees to process the dp in $\mathcal{O}(v^2 \log v)$. Now, let's go for the whole solution. Let's modify the DP state a bit: now $dp(i,j,k)$ is the state when using $v-k$ suffix operations (note that $v$ is not a constant here). The transformation is similar. Then the answer for $v=i$ will be $\max dp(*,*,i)$.
[ "data structures", "dp" ]
3,000
#include <bits/stdc++.h> using namespace std; typedef long long ll; #define pb push_back #define pii pair<int, int> #define all(a) a.begin(), a.end() const int mod = 1e9 + 7, N = 5005; void solve() { int n, m, V; cin >> n >> m >> V; vector <int> c(n); for (int i = 0; i < n; ++i) { cin >> c[i]; } vector <int> pre(n + 1); for (int i = 0; i < m; ++i) { char x; int v; cin >> x >> v, --v; if (x == 'L') { pre[0]++, pre[v + 1]--; } else { pre[v]++; } } for (int i = 0; i < n; ++i) { pre[i + 1] += pre[i]; } vector <pair <ll, int>> vec; for (int i = 0, j = 0; i < n; i = j) { ll tot = 0; while (j < n && pre[i] == pre[j]) { tot += c[j], j++; } if (pre[i] <= V) { vec.emplace_back(tot, pre[i]); } } vector bit(V + 5, vector <ll>(V + 5, -1ll << 60)); auto upd = [&](int x, int y, ll v) { for (int i = x + 1; i < V + 5; i += i & (-i)) { for (int j = y + 1; j < V + 5; j += j & (-j)) { bit[i][j] = max(bit[i][j], v); } } }; auto query = [&](int x, int y) { ll ans = -1ll << 60; for (int i = x + 1; i > 0; i -= i & (-i)) { for (int j = y + 1; j > 0; j -= j & (-j)) { ans = max(ans, bit[i][j]); } } return ans; }; upd(0, 0, 0); vector <ll> tmp(V + 1); for (auto [val, diff] : vec) { for (int i = 0; i + diff <= V; ++i) { tmp[i] = query(i, i + diff); } for (int i = 0; i + diff <= V; ++i) { upd(i, i + diff, tmp[i] + val); } } for (int i = 1; i <= V; ++i) { cout << query(i, i) << " \n"[i == V]; } } int main() { ios::sync_with_stdio(false), cin.tie(0); int t; cin >> t; while (t--) { solve(); } }
2029
H
Message Spread
Given is an undirected graph with $n$ vertices and $m$ edges. Each edge connects two vertices $(u, v)$ and has a probability of $\frac{p}{q}$ of appearing each day. Initially, vertex $1$ has a message. At the end of the day, a vertex has a message if and only if itself or at least one of the vertices adjacent to it had the message the day before. Note that each day, each edge chooses its appearance independently. Calculate the expected number of days before all the vertices have the message, modulo $998\,244\,353$.
It's hard to calculate the expected number directly. Try to change it to a probability. Consider an $\mathcal{O}(3^n)$ dp first. Use inclusion and exclusion. Write out the transition. Try to optimize it. Let $dp_S$ be the probability that, at some point, exactly the vertices in $S$ have the message. The answer is $\sum dp_S\cdot tr_S$, where $tr_S$ is the expected number of days for at least one vertex outside $S$ to receive the message (counting from the day on which exactly the vertices in $S$ have the message). For the transition, enumerate $S,T$ such that $S\cap T=\varnothing$. Transfer $dp_S$ to $dp_{S\cup T}$. It's easy to precalculate the coefficient, and the time complexity ranges from $\mathcal{O}(3^n)$ and $\mathcal{O}(n\cdot 3^n)$ to $\mathcal{O}(n^2\cdot 3^n)$, depending on the implementation. The transition is hard to optimize. Enumerate $T$ and calculate the probability that the vertices in $S$ are only connected to vertices in $T$, which means that the real status $R$ satisfies $S\subseteq R\subseteq (S\cup T)$. Use inclusion and exclusion to calculate the real probability. List out the coefficient: $\frac{\displaystyle\prod_{e\in\{1,2,3,\dots,n\}}(1-w_e)\cdot\prod_{e\in(S\cup T)}\frac{1}{1-w_e}\cdot\prod_{e\in \{1,2,\dots,n\}\setminus S}\frac{1}{1-w_e}\cdot\prod_{e\in T}(1-w_e)}{\displaystyle1-\prod_{e\in\{1,2,3,\dots,n\}}(1-w_e)\cdot\prod_{e\in S}\frac{1}{1-w_e}\cdot\prod_{e\in\{1,2,3,\dots,n\}\setminus S}\frac{1}{1-w_e}}$ (Note that $w_e$ denotes the probability of the appearance of the edge $e$.) We can express it as $\mathrm{Const}\cdot f_{S\cup T}\cdot g_S\cdot h_T$. Use a subset convolution to optimize it. The total time complexity is $\mathcal{O}(2^n\cdot n^2)$. It's easy to see that every $S$ with $dp_S\neq0$ satisfies $1\in S$, so the time complexity can be $\mathcal{O}(2^{n-1}\cdot n^2)$ if well implemented.
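The expectation-to-probability reduction behind the first hint, $E[X]=\sum_{t\ge 0}\Pr[X>t]$, can be illustrated on the smallest instance ($n=2$, a single edge; $p=1/10$ is an arbitrary sample value). A hedged Python sketch:

```python
from fractions import Fraction

# With n = 2 and one edge appearing each day with probability p, the process
# finishes on the first day the edge appears, so
#   E[days] = sum_{t >= 0} Pr[still unfinished after t days]
#           = sum_{t >= 0} (1 - p)^t = 1 / p.
p = Fraction(1, 10)
expected = 1 / p                                # closed form: 10 days
partial = sum((1 - p) ** t for t in range(500)) # truncated series
assert abs(float(partial) - float(expected)) < 1e-9

# the judge wants the answer modulo 998244353: report 1/p as a modular inverse
MOD = 998244353
ans = expected.numerator * pow(expected.denominator, MOD - 2, MOD) % MOD
assert ans == 10
```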
[ "bitmasks", "brute force", "combinatorics", "dp" ]
3,500
#include <bits/stdc++.h> #define int long long using namespace std; const int N=(1<<21),mod=998244353; const int Lim=8e18; inline void add(signed &i,int j){ i+=j; if(i>=mod) i-=mod; } int qp(int a,int b){ int ans=1; while(b){ if(b&1) (ans*=a)%=mod; (a*=a)%=mod; b>>=1; } return ans; } int dp[N],f[N],g[N],h[N]; int s1[N],s2[N]; signed pre[22][N/2],t[22][N/2],pdp[N][22]; int p[N],q[N],totp,totq; signed main(){ int n,m; cin>>n>>m; int totprod=1; for(int i=0;i<(1<<n);i++) s1[i]=s2[i]=1; for(int i=1;i<=m;i++){ int u,v,p,q; cin>>u>>v>>p>>q; int w=p*qp(q,mod-2)%mod; (s1[(1<<(u-1))+(1<<(v-1))]*=(mod+1-w))%=mod; (s2[(1<<(u-1))+(1<<(v-1))]*=qp(mod+1-w,mod-2))%=mod; (totprod*=(mod+1-w))%=mod; } for(int j=1;j<=n;j++) for(int i=0;i<(1<<n);i++) if((i>>(j-1))&1) (s1[i]*=s1[i^(1<<(j-1))])%=mod,(s2[i]*=s2[i^(1<<(j-1))])%=mod; for(int i=0;i<(1<<n);i++) f[i]=s2[i],g[i]=totprod*s2[((1<<n)-1)^i]%mod*qp(mod+1-totprod*s2[i]%mod*s2[((1<<n)-1)^i]%mod,mod-2)%mod,h[i]=s1[i]; for(int i=1;i<(1<<n);i++) pre[__builtin_popcount(i)][i>>1]=h[i]; dp[1]=1; for(int j=1;j<=n;j++){ if(!((1>>(j-1))&1)) add(pdp[1|(1<<(j-1))][j],mod-(dp[1]*g[1]%mod*f[1]%mod)); } t[0][0]=dp[1]*g[1]%mod; for(int k=1;k<=n;k++) for(int j=1;j<n;j++) for(int i=0;i<(1<<(n-1));i++) if((i>>(j-1))&1) add(pre[k][i],pre[k][i^(1<<(j-1))]); for(int j=1;j<n;j++) for(int i=0;i<(1<<(n-1));i++) if((i>>(j-1))&1) add(t[0][i],t[0][i^(1<<(j-1))]); for(int k=1;k<n;k++){ totp=totq=0; for(int i=0;i<(1<<(n-1));i++) if(__builtin_popcount(i)<=k) p[++totp]=i; else q[++totq]=i; for(int l=1,i=p[l];l<=totp;l++,i=p[l]) for(int j=0;j<k;j++) add(t[k][i],1ll*t[j][i]*pre[k-j][i]%mod); for(int i=0;i<(1<<(n-1));i++) t[k][i]%=mod; for(int j=1;j<n;j++) for(int l=1,i=p[l];l<=totp;l++,i=p[l]) if((i>>(j-1))&1) add(t[k][i],mod-t[k][i^(1<<(j-1))]); for(int i=0;i<(1<<(n-1));i++){ if(__builtin_popcount(i)==k){ add(pdp[(i<<1)|1][0],t[k][i]*f[(i<<1)|1]%mod); int pre=0; for(int j=1;j<=n;j++){ (pre+=pdp[(i<<1)|1][j-1])%=mod; if(!((((i<<1)|1)>>(j-1))&1)) 
add(pdp[(i<<1)|1|(1<<(j-1))][j],mod-pre); } (pre+=pdp[(i<<1)|1][n])%=mod; dp[(i<<1)|1]=pre; for(int j=1;j<=n;j++){ if(!((((i<<1)|1)>>(j-1))&1)) add(pdp[(i<<1)|1|(1<<(j-1))][j],mod-(dp[(i<<1)|1]*g[(i<<1)|1]%mod*f[(i<<1)|1]%mod)); } t[k][i]=pre*g[(i<<1)|1]%mod; } else t[k][i]=0; } for(int j=1;j<n;j++) for(int l=1,i=q[l];l<=totq;l++,i=q[l]) if((i>>(j-1))&1) add(t[k][i],t[k][i^(1<<(j-1))]); } int ans=0; for(int i=1;i<(1<<n)-1;i+=2) (ans+=dp[i]*qp(mod+1-totprod*s2[i]%mod*s2[((1<<n)-1)^i]%mod,mod-2)%mod)%=mod; cout<<ans; return 0; }
2029
I
Variance Challenge
Kevin has recently learned the definition of variance. For an array $a$ of length $n$, the variance of $a$ is defined as follows: - Let $x=\dfrac{1}{n}\displaystyle\sum_{i=1}^n a_i$, i.e., $x$ is the mean of the array $a$; - Then, the variance of $a$ is $$ V(a)=\frac{1}{n}\sum_{i=1}^n(a_i-x)^2. $$ Now, Kevin gives you an array $a$ consisting of $n$ integers, as well as an integer $k$. You can perform the following operation on $a$: - Select an interval $[l,r]$ ($1\le l\le r\le n$), then for each $l\le i\le r$, increase $a_i$ by $k$. For each $1\le p\le m$, you have to find the minimum possible variance of $a$ after exactly $p$ operations are performed, independently for each $p$. For simplicity, you only need to output the answers multiplied by $n^2$. It can be proven that the results are always integers.
The intended solution has nothing to do with dynamic programming. This is the key observation of the problem. Suppose we have an array $b$ and a function $f(x)=\sum (b_i-x)^2$; then the minimum value of $f(x)$ is $n$ times the variance of $b$. Using the observation mentioned in Hint 2, we can reduce the problem to minimizing $\sum(a_i-x)^2$ for a given $x$. There are only $\mathcal{O}(n\cdot m)$ possible $x$-s. The rest of the problem is somewhat easy. Try flows, or just find a greedy algorithm! If you are trying flows: the quadratic function is always convex. Key Observation. Suppose we have an array $b$ and a function $f(x)=\sum (b_i-x)^2$; then the minimum value of $f(x)$ is $n$ times the variance of $b$. Proof. This is a quadratic function of $x$, and its axis of symmetry is $x=\frac{1}{n}\sum b_i$. So the minimum value is $f\left(\frac{1}{n}\sum b_i\right)$. That is exactly the definition of variance, up to the factor $n$. Thus, we can enumerate all possible $x$-s, and find the minimum $\sum (a_i-x)^2$ after the operations, then take the minimum across them. That will give the correct answer to the original problem. Note that there are only $\mathcal{O}(n\cdot m)$ possible $x$-s. More formally, let $k_{x,c}$ be the minimum value of $\sum (a_i-x)^2$ after exactly $c$ operations. Then $\mathrm{ans}_i=\displaystyle\min_{\text{any possible }x} k_{x,i}$. So we only need to solve the following (reduced) problem: Given a (maybe non-integer) number $x$. For each $1\le i\le m$, find the minimum value of $\sum (a_i-x)^2$ after exactly $i$ operations. To solve this, we can use the MCMF model: Set a source node $s$ and a target node $t$. For each $1\le i\le n$, add an edge from $s$ to $i$ with cost $0$. For each $1\le i\le n$, add an edge from $i$ to $t$ with cost $0$. For each $1\le i< n$, add an edge from $i$ to $i+1$ with cost being a function $\mathrm{cost}(f)=(a_i+f-x)^2-(a_i-x)^2$, where $f$ is the flow on this edge. 
Note that the $\mathrm{cost}$ function is convex, so this model is correct, as you can split an edge into several edges with costs $\mathrm{cost}(1)$, $\mathrm{cost}(2)-\mathrm{cost}(1)$, $\mathrm{cost}(3)-\mathrm{cost}(2)$, and so on. Take a look at the model again. We don't need to run MCMF. We can see that it is just a regret-greedy process. So you only need to find the LIS (Largest Interval Sum :) ) for each operation. Thus, we have solved the reduced problem in $\mathcal{O}(n\cdot m)$. Overall time complexity: $\mathcal{O}((n\cdot m)^2)$ per test case. Reminder: don't forget to use __int128 if you didn't handle the numbers carefully!
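As a quick numeric sanity check of the key observation (an illustrative Python sketch, not the intended $\mathcal{O}((n\cdot m)^2)$ solution; the array `b` is an arbitrary example):

```python
# Check: f(x) = sum((b_i - x)^2) is a convex quadratic in x, minimized at
# the mean of b, where it equals n * V(b).
def f(b, x):
    return sum((bi - x) ** 2 for bi in b)

def variance(b):
    n = len(b)
    mean = sum(b) / n
    return sum((bi - mean) ** 2 for bi in b) / n

b = [1, 2, 4, 9]          # arbitrary example array
n = len(b)
mean = sum(b) / n
# f at the mean equals n times the variance
assert abs(f(b, mean) - n * variance(b)) < 1e-9
# no other x does better, since the axis of symmetry is x = mean
for x in (mean - 1, mean + 0.5, 0.0, 10.0):
    assert f(b, x) >= f(b, mean)
```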
[ "flows", "graphs", "greedy" ]
3400
#include <bits/stdc++.h> #define all(s) s.begin(), s.end() using namespace std; using ll = long long; using ull = unsigned long long; const int _N = 1e5 + 5; int T; void solve() { ll n, m, k; cin >> n >> m >> k; vector<ll> a(n + 1); for (int i = 1; i <= n; i++) cin >> a[i]; vector<ll> kans(m + 1, LLONG_MAX); vector<__int128> f(n + 1), g(n + 1), v(n + 1); vector<int> vis(n + 1), L(n + 1), L2(n + 1); ll sum = 0; for (int i = 1; i <= n; i++) sum += a[i]; __int128 pans = 0; for (int i = 1; i <= n; i++) pans += n * a[i] * a[i]; auto work = [&](ll s) { __int128 ans = pans; ans += s * s - 2ll * sum * s; f.assign(n + 1, LLONG_MAX); g.assign(n + 1, LLONG_MAX); for (int i = 1; i <= n; i++) { v[i] = n * (2 * a[i] * k + k * k) - 2ll * s * k; vis[i] = 0; } for (int i = 1; i <= m; i++) { for (int j = 1; j <= n; j++) { L[j] = L2[j] = j; if (f[j - 1] < 0) f[j] = f[j - 1] + v[j], L[j] = L[j - 1]; else f[j] = v[j]; if (!vis[j]) { g[j] = LLONG_MAX; continue; } if (g[j - 1] < 0) g[j] = g[j - 1] + 2ll * n * k * k - v[j], L2[j] = L2[j - 1]; else g[j] = 2ll * n * k * k - v[j]; } __int128 min_sum = LLONG_MAX; int l = 1, r = n, type = 0; for (int j = 1; j <= n; j++) { if (f[j] < min_sum) { min_sum = f[j], r = j, l = L[j]; } } for (int j = 1; j <= n; j++) { if (g[j] < min_sum) { min_sum = g[j], r = j, l = L2[j]; type = 1; } } ans += min_sum; if (type == 0) { for (int j = l; j <= r; j++) vis[j]++, v[j] += 2 * n * k * k; } else { for (int j = l; j <= r; j++) vis[j]--, v[j] -= 2 * n * k * k; } kans[i] = min((__int128)kans[i], ans); } }; for (ll x = sum; x <= sum + n * m * k; x += k) { work(x); } for (int i = 1; i <= m; i++) cout << kans[i] << " \n"[i == m]; return; } int main() { ios::sync_with_stdio(false), cin.tie(0), cout.tie(0); cin >> T; while (T--) { solve(); } }
2030
A
A Gift From Orangutan
While exploring the jungle, you have bumped into a rare orangutan with a bow tie! You shake hands with the orangutan and offer him some food and water. In return... The orangutan has gifted you an array $a$ of length $n$. Using $a$, you will construct two arrays $b$ and $c$, both containing $n$ elements, in the following manner: - $b_i = \min(a_1, a_2, \ldots, a_i)$ for each $1 \leq i \leq n$. - $c_i = \max(a_1, a_2, \ldots, a_i)$ for each $1 \leq i \leq n$. Define the score of $a$ as $\sum_{i=1}^n c_i - b_i$ (i.e. the sum of $c_i - b_i$ over all $1 \leq i \leq n$). Before you calculate the score, you can \textbf{shuffle} the elements of $a$ however you want. Find the maximum score that you can get if you shuffle the elements of $a$ optimally.
First, what is the maximum possible value of $c_i-b_j$ for any $i,j$? Since $c_i$ is the maximum element of some subset of $a$ and $b_j$ is the minimum element of some subset of $a$, the maximum possible value of $c_i-b_j$ is $\max(a)-\min(a)$. Also note that $c_1=b_1$ for any reordering of $a$, so the first term of the score is always $0$. By reordering such that the largest element of $a$ appears first and the smallest element of $a$ appears second, every remaining term attains this maximum, so the maximum possible value of the score is achieved. This results in a score of $(\max(a)-\min(a))\cdot(n-1)$.
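The claim can be checked by brute force over all shuffles of a small array (illustrative Python, not part of the editorial):

```python
from itertools import permutations

def score(a):
    # running prefix minimum b_i and prefix maximum c_i, summed difference
    b = c = a[0]
    total = 0
    for x in a:
        b, c = min(b, x), max(c, x)
        total += c - b
    return total

a = [3, 1, 4, 1, 5]
best = max(score(list(p)) for p in permutations(a))
assert best == (max(a) - min(a)) * (len(a) - 1)  # (5 - 1) * 4 = 16
```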
[ "constructive algorithms", "greedy", "math", "sortings" ]
800
for i in range(int(input())):
    n = int(input())
    mx = 0
    mn = 1000000
    lst = input().split()
    for j in range(n):
        x = int(lst[j])
        mx = max(mx, x)
        mn = min(mn, x)
    print((mx - mn) * (n - 1))
2030
B
Minimise Oneness
For an arbitrary binary string $t$$^{\text{∗}}$, let $f(t)$ be the number of non-empty subsequences$^{\text{†}}$ of $t$ that contain only $\mathtt{0}$, and let $g(t)$ be the number of non-empty subsequences of $t$ that contain at least one $\mathtt{1}$. Note that for $f(t)$ and for $g(t)$, each subsequence is counted as many times as it appears in $t$. E.g., $f(\mathtt{000}) = 7, g(\mathtt{100}) = 4$. We define the oneness of the binary string $t$ to be $|f(t)-g(t)|$, where for an arbitrary integer $z$, $|z|$ represents the absolute value of $z$. You are given a positive integer $n$. Find a binary string $s$ of length $n$ such that its oneness is as small as possible. If there are multiple strings, you can print any of them. \begin{footnotesize} $^{\text{∗}}$A binary string is a string that only consists of characters $0$ and $1$. $^{\text{†}}$A sequence $a$ is a subsequence of a sequence $b$ if $a$ can be obtained from $b$ by the deletion of several (possibly, zero or all) elements. For example, subsequences of $\mathtt{1011101}$ are $\mathtt{0}$, $\mathtt{1}$, $\mathtt{11111}$, $\mathtt{0111}$, but not $\mathtt{000}$ nor $\mathtt{11100}$. \end{footnotesize}
Observation: $f(t)-g(t)$ is always odd. Proof: $f(t)+g(t)$ counts all non-empty subsequences of $t$, so $f(t)+g(t)=2^{|t|}-1$, which is odd. The sum and difference of two integers have the same parity, so $f(t)-g(t)$ is always odd. By including exactly one $1$ in the string $s$, we can make $f(s)=2^{n-1}-1$ and $g(s)=2^{n-1}$ by the multiplication principle (each subsequence counted by $g$ must contain the single $1$ together with any subset of the $n-1$ zeros), so $|f(s)-g(s)|=1$. Since the oneness is always odd, this is the best we can do. So, we print out any string with exactly one $1$.
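Both the counts and the parity observation can be verified by brute force on small strings (illustrative Python, exponential in $n$, so only for tiny inputs):

```python
from itertools import combinations

def f_and_g(t):
    # count non-empty subsequences with only '0's (f) / with a '1' (g)
    f = g = 0
    n = len(t)
    for r in range(1, n + 1):
        for idx in combinations(range(n), r):
            if any(t[i] == '1' for i in idx):
                g += 1
            else:
                f += 1
    return f, g

s = '0100'                              # exactly one '1', n = 4
f, g = f_and_g(s)
assert (f, g) == (2 ** 3 - 1, 2 ** 3)   # f = 7, g = 8
assert abs(f - g) == 1
# f - g is odd for every binary string of length 4
for mask in range(16):
    f, g = f_and_g(format(mask, '04b'))
    assert (f - g) % 2 == 1
```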
[ "combinatorics", "constructive algorithms", "games", "math" ]
800
#include <bits/stdc++.h> using namespace std; int main() { int t; cin >> t; while(t--) { int n; cin >> n; cout << '1'; for(int i = 1; i < n; i++) cout << '0'; cout << endl; } }
2030
C
A TRUE Battle
Alice and Bob are playing a game. There is a list of $n$ booleans, each of which is either true or false, given as a binary string $^{\text{∗}}$ of length $n$ (where $1$ represents true, and $0$ represents false). Initially, there are no operators between the booleans. Alice and Bob will take alternate turns placing and or or between the booleans, with Alice going first. Thus, the game will consist of $n-1$ turns since there are $n$ booleans. Alice aims for the final statement to evaluate to true, while Bob aims for it to evaluate to false. Given the list of boolean values, determine whether Alice will win if both players play optimally. To evaluate the final expression, repeatedly perform the following steps until the statement consists of a single true or false: - If the statement contains an and operator, choose any one and replace the subexpression surrounding it with its evaluation. - Otherwise, the statement contains an or operator. Choose any one and replace the subexpression surrounding the or with its evaluation. For example, the expression true or false and false is evaluated as true or (false and false) $=$ true or false $=$ true. It can be shown that the result of any compound statement is unique.\begin{footnotesize} $^{\text{∗}}$A binary string is a string that only consists of characters $0$ and $1$ \end{footnotesize}
Let's understand what Alice wants to do. She wants to isolate a clause that evaluates to true between two or's. This guarantees her victory since or is evaluated after all and's. First, if the first or last boolean is true, then Alice instantly wins by placing or between the first and second, or between the second to last and last, booleans. Otherwise, if there are two true's in a row, Alice can also win. Alice may place or before the first of the two on her first move. If Bob does not put his operator between the two true's, then Alice will put an or between the two true's on her next move and win. Otherwise, Bob does place his operator between the two true's. However, no matter what Bob placed, the two true's will always evaluate to true, so on her second move Alice can just place an or on the other side of the two true's to win. We claim these are the only two cases where Alice wins. This is because otherwise, the string does not contain two consecutive true's. Now, whenever Alice places an or adjacent to a true, Bob will respond by placing and after that true, forcing the clause to evaluate to false.
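The claimed winning condition can be checked against a full minimax search for all short strings (an illustrative Python sketch, far too slow for the real constraints):

```python
from itertools import product

def evaluate(vals, ops):
    # 'and' binds tighter than 'or': the result is an OR over AND-groups
    groups, cur = [], vals[0]
    for v, op in zip(vals[1:], ops):
        if op == '&':
            cur = cur and v
        else:
            groups.append(cur)
            cur = v
    groups.append(cur)
    return any(groups)

def alice_wins(vals):
    n = len(vals)
    def go(ops):
        free = [i for i, o in enumerate(ops) if o is None]
        if not free:
            return evaluate(vals, ops)
        alice_turn = (n - 1 - len(free)) % 2 == 0   # Alice moves first
        outcomes = (go(ops[:i] + (o,) + ops[i + 1:])
                    for i in free for o in '&|')
        return any(outcomes) if alice_turn else all(outcomes)
    return go((None,) * (n - 1))

# editorial condition: a true at either end, or two consecutive true's
for n in range(2, 6):
    for bits in product((False, True), repeat=n):
        pred = bits[0] or bits[-1] or \
            any(bits[i] and bits[i + 1] for i in range(n - 1))
        assert alice_wins(list(bits)) == pred
```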
[ "brute force", "games", "greedy" ]
1100
#include <bits/stdc++.h> using namespace std; int main() { int t; cin >> t; while(t--) { int n; cin >> n; string s; cin >> s; vector<int> v(n); for(int i = 0; i < n; i++) { if(s[i]=='1') v[i]=1; } bool win = false; if(v[0]||v[n-1]) win=true; for(int i = 1; i < n; i++) { if(v[i]&&v[i-1]) win=true; } if(win) cout << "YES" << endl; else cout << "NO" << endl; } }
2030
D
QED's Favorite Permutation
QED is given a permutation$^{\text{∗}}$ $p$ of length $n$. He also has a string $s$ of length $n$ containing only characters $L$ and $R$. QED only likes permutations that are sorted in non-decreasing order. To sort $p$, he can select any of the following operations and perform them any number of times: - Choose an index $i$ such that $s_i = L$. Then, swap $p_i$ and $p_{i-1}$. It is guaranteed that $s_1 \neq L$. - Choose an index $i$ such that $s_i = R$. Then, swap $p_i$ and $p_{i+1}$. It is guaranteed that $s_n \neq R$. He is also given $q$ queries. In each query, he selects an index $i$ and changes $s_i$ from $L$ to $R$ (or from $R$ to $L$). Note that the changes are \textbf{persistent}. After each query, he asks you if it is possible to sort $p$ in non-decreasing order by performing the aforementioned operations any number of times. Note that before answering each query, the permutation $p$ is reset to its original form. \begin{footnotesize} $^{\text{∗}}$A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array). \end{footnotesize}
Observation: Through a series of swaps, we can move an element from position $i$ to position $j$ (WLOG, assume $i < j$) if there is no $k$ with $i \leq k < j$ such that $s_k = \texttt{L}$ and $s_{k+1} = \texttt{R}$. Let's mark all indices $i$ such that $s_i = \texttt{L}$ and $s_{i+1} = \texttt{R}$ as bad. If $pos_i$ represents the position of $i$ in $p$, then we must make sure it is possible to move from $\min(i, pos_i)$ to $\max(i, pos_i)$. As you can see, we can model these conditions as intervals. We must make sure there are no bad indices included in any interval. We need to gather indices $i$ such that $i$ is included in at least one interval. This can be done with a difference array. Let $d_i$ denote the number of intervals that include $i$. If $d_i > 0$, then we need to make sure $i$ is not a bad index. We can keep all bad indices in a set. Notice that when we update $i$, we can only potentially toggle indices $i$ and $i-1$ from good to bad (or vice versa). For example, if $s_i = \texttt{L}$, $s_{i+1} = \texttt{R}$, $d_i > 0$ and index $i$ is not in the bad set, then we will insert it. After each query, if the bad set is empty, then the answer is "YES".
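The bad-index criterion can be verified against a BFS brute force over all reachable permutations for tiny cases (illustrative Python; the function names are ours, not from the reference solution):

```python
from collections import deque
from itertools import permutations, product

def sortable_bruteforce(p, s):
    # BFS over all permutations reachable via the allowed swaps
    n, goal = len(p), tuple(sorted(p))
    start = tuple(p)
    seen, dq = {start}, deque([start])
    while dq:
        cur = dq.popleft()
        if cur == goal:
            return True
        for i in range(n):
            j = i - 1 if s[i] == 'L' else i + 1
            if 0 <= j < n:
                nxt = list(cur)
                nxt[i], nxt[j] = nxt[j], nxt[i]
                nxt = tuple(nxt)
                if nxt not in seen:
                    seen.add(nxt)
                    dq.append(nxt)
    return False

def sortable_fast(p, s):
    # coverage of each gap by the intervals [min(i, pos_i), max(i, pos_i)]
    n = len(p)
    d = [0] * (n + 1)
    for i, v in enumerate(p):           # value v must travel to index v - 1
        d[min(i, v - 1)] += 1
        d[max(i, v - 1)] -= 1
    cov = 0
    for i in range(n - 1):
        cov += d[i]
        if s[i] == 'L' and s[i + 1] == 'R' and cov > 0:
            return False                # a needed move crosses a bad gap
    return True

for n in (3, 4):
    for p in permutations(range(1, n + 1)):
        for mid in product('LR', repeat=n - 2):
            s = 'R' + ''.join(mid) + 'L'   # guarantees s_1 != L, s_n != R
            assert sortable_fast(list(p), s) == sortable_bruteforce(list(p), s)
```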
[ "data structures", "implementation", "sortings" ]
1700
#include <bits/stdc++.h> using namespace std; int main() { int t; cin >> t; while(t--) { int n,q; cin >> n >> q; vector<int> perm(n); for(int i = 0; i < n; i++) cin >> perm[i]; for(int i = 0; i < n; i++) perm[i]--; vector<int> invperm(n); for(int i = 0; i < n; i++) invperm[perm[i]]=i; vector<int> diffArr(n); for(int i = 0; i < n; i++) { diffArr[min(i, invperm[i])]++; diffArr[max(i, invperm[i])]--; } for(int i = 1; i < n; i++) diffArr[i]+=diffArr[i-1]; string s; cin >> s; set<int> problems; for(int i = 0; i < n-1; i++) { if(s[i]=='L'&&s[i+1]=='R'&&diffArr[i]!=0) { problems.insert(i); } } while(q--) { int x; cin >> x; x--; if(s[x]=='L') { s[x]='R'; } else { s[x]='L'; } if(s[x-1]=='L'&&s[x]=='R'&&diffArr[x-1]!=0) { problems.insert(x-1); } else { problems.erase(x-1); } if(s[x]=='L'&&s[x+1]=='R'&&diffArr[x]!=0) { problems.insert(x); } else { problems.erase(x); } if(problems.size()) { cout << "NO" << endl; } else { cout << "YES" << endl; } } } }
2030
E
MEXimize the Score
Suppose we partition the elements of an array $b$ into any number $k$ of non-empty multisets $S_1, S_2, \ldots, S_k$, where $k$ is an arbitrary positive integer. Define the score of $b$ as the maximum value of $\operatorname{MEX}(S_1)$$^{\text{∗}}$$ + \operatorname{MEX}(S_2) + \ldots + \operatorname{MEX}(S_k)$ over all possible partitions of $b$ for any integer $k$. Envy is given an array $a$ of size $n$. Since he knows that calculating the score of $a$ is too easy for you, he instead asks you to calculate the sum of scores of all $2^n - 1$ non-empty subsequences of $a$.$^{\text{†}}$ Since this answer may be large, please output it modulo $998\,244\,353$. \begin{footnotesize} $^{\text{∗}}$$\operatorname{MEX}$ of a collection of integers $c_1, c_2, \ldots, c_k$ is defined as the smallest non-negative integer $x$ that does not occur in the collection $c$. For example, $\operatorname{MEX}([0,1,2,2]) = 3$ and $\operatorname{MEX}([1,2,2]) = 0$ $^{\text{†}}$A sequence $x$ is a subsequence of a sequence $y$ if $x$ can be obtained from $y$ by deleting several (possibly, zero or all) elements. \end{footnotesize}
Observation: The score of $b$ is equal to $f_0$ + $\min(f_0, f_1)$ + $\ldots$ + $\min(f_0, \ldots, f_{n-1})$, where $f_i$ stores the frequency of integer $i$ in $b$. Intuition: We can greedily construct the $k$ arrays by repeating this step: Select the minimum $j$ such that $f_j = 0$ and $\min(f_0, \ldots, f_{j-1}) > 0$, and construct the array $[0, 1, \ldots, j-1]$. This is optimal because every element we add will increase the MEX by $1$, which will increase the score by $1$. If we add $j$, the MEX will not increase. Also, when we add an element, we cannot increase the score by more than $1$. Adding fewer than $j$ elements cannot increase the MEX for future arrays. From this observation, we can see that only the frequency array of $a$ matters. From now on, let's denote the frequency of $i$ in $a$ as $f_i$. We can find the sum over all subsequences using dynamic programming. Let's denote $dp[i][j]$ as the number of subsequences containing only the first $i$ integers such that $\min(f_0, \ldots, f_i) = j$. Initially, $dp[0][j] = \binom{f_0}{j}$. To transition, we need to consider two cases: In the first case, let's assume $j < \min(f_0, \ldots, f_{i-1})$. The number of subsequences that can be created is $(\sum_{k=j+1}^n dp[i-1][k]) \cdot \binom{f_i}{j}$. That is, all the subsequences from the previous length such that it is possible for $j$ to be the new minimum, multiplied by the number of subsequences where exactly $j$ copies of $i$ are chosen. In the second case, let's assume $j \geq \min(f_0, \ldots, f_{i-1})$. The number of subsequences that can be created is $(\sum_{k=j}^{f_i} \binom{f_i}{k}) \cdot dp[i-1][j]$. That is, all subsequences containing at least $j$ elements of $i$, multiplied by all previous subsequences with minimum already equal to $j$. The total answer is the sum of $dp[i][j] \cdot j \cdot 2^{f_{i+1} + \dots + f_{n-1}}$ over the prefix length $i$ and the prefix minimum $j$. We can speed up the calculations for both cases using suffix sums; however, this still yields an $O(n^2)$ algorithm. 
However, $j$ is bounded by the interval $[1, f_i]$ for each $i$. Since the sum of the $f_i$ is $n$, the total number of secondary states is at most $n$. This becomes just a constant factor, so the total complexity is $O(n)$.
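The score formula from the observation can be checked against an exhaustive search over all partitions of small arrays (illustrative Python; the helper names are ours):

```python
def score_by_formula(b):
    # f_0 + min(f_0, f_1) + ... + min(f_0, ..., f_{n-1})
    n = len(b)
    freq = [0] * (n + 1)
    for v in b:
        if v <= n:
            freq[v] += 1
    total, cur = 0, freq[0]
    for i in range(n):
        cur = min(cur, freq[i])
        total += cur
    return total

def partitions(items):
    # all ways to split a list of distinguishable items into blocks
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield part + [[first]]

def mex(block):
    m = 0
    while m in set(block):
        m += 1
    return m

def score_bruteforce(b):
    return max(sum(mex(g) for g in part) for part in partitions(list(b)))

for b in ([0, 0, 1, 2], [0, 1, 1, 2], [1, 2, 3], [0, 0, 1, 1, 2]):
    assert score_by_formula(b) == score_bruteforce(b)
```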
[ "combinatorics", "data structures", "dp", "greedy", "implementation", "math" ]
2200
#include <bits/stdc++.h> #define int long long #define ll long long #define pii pair<int,int> #define piii pair<pii,pii> #define fi first #define se second #pragma GCC optimize("O3,unroll-loops") #pragma GCC target("avx2,bmi,bmi2,lzcnt,popcnt") using namespace std; const int MX = 2e5; ll fact[MX+1]; ll ifact[MX+1]; ll MOD=998244353; ll binPow(ll base, ll exp) { ll ans = 1; while(exp) { if(exp%2) { ans = (ans*base)%MOD; } base = (base*base)%MOD; exp /= 2; } return ans; } int nCk(int N, int K) { if(K>N||K<0) { return 0; } return (fact[N]*((ifact[K]*ifact[N-K])%MOD))%MOD; } void ICombo() { fact[0] = 1; for(int i=1;i<=MX;i++) { fact[i] = (fact[i-1]*i)%MOD; } ifact[MX] = binPow(fact[MX],MOD-2); for(int i=MX-1;i>=0;i--) { ifact[i] = (ifact[i+1]*(i+1))%MOD; } } void solve() { int n, ans=0; cin >> n; vector<int> c(n); for (int r:c) { cin >> r; c[r]++; } vector<vector<int>> dp(n,vector<int>(1)); vector<int> ps, co; for (int i = 1; i <= c[0]; i++) dp[0].push_back(nCk(c[0],i)); for (int i = 1; i < n; i++) { ps.resize(1); co=ps; for (int r:dp[i-1]) ps.push_back((ps.back()+r)%MOD); int m=ps.size()-1; dp[i].resize(min(m,c[i]+1)); for (int j = 0; j <= c[i]; j++) co.push_back((co.back()+nCk(c[i],j))%MOD); for (int j = 1; j < dp[i].size(); j++) dp[i][j]=nCk(c[i],j)*(ps[m]-ps[j]+MOD)%MOD+(co.back()-co[j+1]+MOD)*dp[i-1][j]%MOD; } int j=0; for (auto r:dp) { n-=c[j++]; for (int i = 1; i < r.size(); i++) (ans+=i*r[i]%MOD*binPow(2,n))%=MOD; } cout << ans << "\n"; } int32_t main() { ios::sync_with_stdio(0); cin.tie(0); ICombo(); int t = 1; cin >> t; while (t--) solve(); }
2030
F
Orangutan Approved Subarrays
Suppose you have an array $b$. Initially, you also have a set $S$ that contains all distinct elements of $b$. The array $b$ is called orangutan-approved if it can be \textbf{emptied} by repeatedly performing the following operation: - In one operation, select indices $l$ and $r$ ($1 \leq l \leq r \leq |b|$) such that $v = b_l = b_{l+1} = \ldots = b_r$ and $v$ is present in $S$. Remove $v$ from $S$, and simultaneously remove all $b_i$ such that $l \leq i \leq r$. Then, reindex the elements $b_{r+1}, b_{r+2}, \ldots$ as $b_l, b_{l+1}, \ldots$ accordingly. You are given an array $a$ of length $n$ and $q$ queries. Each query consists of two indices $l$ and $r$ ($1 \le l \le r \le n$), and you need to determine whether or not the subarray $a_{l}, a_{l+1}, \ldots, a_r$ is orangutan-approved.
First, let's try to find whether a single array is orangutan-approved or not. Claim: The array $b$ of size $n$ is not orangutan-approved if and only if there exist indices $1\leq w < x < y < z \leq n$ such that $b_w=b_y$, $b_x=b_z$, and $b_w\neq b_x$. Proof: Let's prove this with strong induction. For $n=0$, the claim is true because the empty array is orangutan-approved. Now, let's suppose that the claim is true for all $m < n$. Let $s$ be a sorted sequence such that $x$ is in $s$ if and only if $b_x=b_1$. Suppose $s$ has length $k$. We can split the array $b$ into disjoint subarrays $c_1, c_2, \ldots, c_k$ such that $c_i$ is the subarray $b[s_i+1\ldots s_{i+1}-1]$ for all $1 \leq i < k$ and $c_k=b[s_k+1\ldots n]$. That is, $c_i$ is the subarray that lies between consecutive occurrences of $b_1$ in the array $b$. First, we note that the sets of unique elements of $c_i$ and $c_j$ cannot have any elements in common for $i\neq j$. Indeed, suppose for contradiction that there exist $i$ and $j$ such that $i < j$ and the sets of unique values in $c_i$ and $c_j$ both contain $y$. Then, in the original array $b$, there must exist a subsequence $b_1, y, b_1, y$, contradicting our assumption that $b$ contains no such pattern. By our inductive claim, each of the arrays $c_1, c_2, \ldots, c_k$ must be orangutan-approved. Since there are no overlapping elements, we may delete each of the arrays $c_1, c_2, \ldots, c_k$ separately. Finally, the array $b$ is left with $k$ copies of $b_1$, and we can use one operation to delete all remaining elements in the array $b$. Now, how do we solve for all queries? First, precompute the array $last$, which contains, for each $i$, the largest index $j<i$ such that $a[j]=a[i]$ (or $-1$ if no such index exists). Let's then use two pointers to compute the smallest index $j$ such that $a[j\ldots i]$ is orangutan-approved but $a[j-1\ldots i]$ is not, and store this in an array called $left$. Let's also keep a maximum segment tree $next$ such that $next[i]$ is the first index $j>i$ such that $a_j=a_i$. 
As we sweep from $i-1$ to $i$, we do the following: Set $L=left[i-1]$. Then, while $last[i]\neq-1$ and $\max(next[L\ldots last[i]-1])>last[i]$, increment $L$ by $1$. Finally, set $left[i]=L$. When the $left$ array is fully calculated, we can answer each query $(l, r)$ in $O(1)$ by checking whether $left[r]\leq l$.
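The claim can be verified exhaustively for tiny arrays (illustrative Python). Note the brute force only ever removes all remaining occurrences of a value at once: a removal of $v$ deletes $v$ from $S$, so any occurrence of $v$ left behind could never be removed later.

```python
from itertools import product

def approved_bruteforce(b):
    # the array can be emptied iff we can repeatedly pick a value whose
    # remaining occurrences are contiguous and delete them all
    def solve(arr):
        if not arr:
            return True
        for v in set(arr):
            idx = [i for i, x in enumerate(arr) if x == v]
            if idx[-1] - idx[0] + 1 == len(idx):      # contiguous block
                if solve(tuple(x for x in arr if x != v)):
                    return True
        return False
    return solve(tuple(b))

def has_crossing_pattern(b):
    # indices w < x < y < z with b_w = b_y, b_x = b_z, b_w != b_x
    n = len(b)
    return any(b[w] == b[y] and b[x] == b[z] and b[w] != b[x]
               for w in range(n)
               for x in range(w + 1, n)
               for y in range(x + 1, n)
               for z in range(y + 1, n))

for n in range(1, 6):
    for b in product(range(3), repeat=n):
        assert approved_bruteforce(b) == (not has_crossing_pattern(b))
```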
[ "binary search", "data structures", "dp", "greedy", "implementation", "two pointers" ]
2400
#pragma GCC optimize("Ofast") #include <bits/stdc++.h> using namespace std; #define ll long long #define nline "\n" #define f first #define s second #define sz(x) x.size() #define all(x) x.begin(),x.end() mt19937 rng(chrono::steady_clock::now().time_since_epoch().count()); const ll INF_ADD=1e18; const ll MOD=1e9+7; const ll MAX=1048579; class ST { public: vector<ll> segs; ll size = 0; ll ID = 0; ST(ll sz) { segs.assign(2 * sz, ID); size = sz; } ll comb(ll a, ll b) { return max(a, b); } void upd(ll idx, ll val) { segs[idx += size] = val; for(idx /= 2; idx; idx /= 2) segs[idx] = comb(segs[2 * idx], segs[2 * idx + 1]); } ll query(ll l, ll r) { ll lans = ID, rans = ID; for(l += size, r += size + 1; l < r; l /= 2, r /= 2) { if(l & 1) lans = comb(lans, segs[l++]); if(r & 1) rans = comb(segs[--r], rans); } return comb(lans, rans); } }; void solve(){ ll n,q,l=1; cin>>n>>q; ST track(n+5); vector<ll> a(n+5),last(n+5,0),lft(n+5); for(ll i=1;i<=n;i++){ cin>>a[i]; ll till=last[a[i]]; while(track.query(l,till)>=till+1){ l++; } if(till){ track.upd(till,i); } lft[i]=l; last[a[i]]=i; } while(q--) { ll l,r; cin>>l>>r; if(lft[r]<=l){ cout<<"YES\n"; } else{ cout<<"NO\n"; } } } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); ll test_cases=1; cin>>test_cases; while(test_cases--){ solve(); } }
2030
G2
The Destruction of the Universe (Hard Version)
\textbf{This is the hard version of the problem. In this version, $n \leq 10^6$. You can only make hacks if both versions of the problem are solved.} Orangutans are powerful beings—so powerful that they only need $1$ unit of time to destroy every vulnerable planet in the universe! There are $n$ planets in the universe. Each planet has an interval of vulnerability $[l, r]$, during which it will be exposed to destruction by orangutans. Orangutans can also expand the interval of vulnerability of any planet by $1$ unit. Specifically, suppose the expansion is performed on planet $p$ with interval of vulnerability $[l_p, r_p]$. Then, the resulting interval of vulnerability may be either $[l_p - 1, r_p]$ or $[l_p, r_p + 1]$. Given a set of planets, orangutans can destroy all planets if the intervals of vulnerability of all planets in the set intersect at least one common point. Let the score of such a set denote the minimum number of expansions that must be performed. Orangutans are interested in the sum of \textbf{scores} of all non-empty subsets of the planets in the universe. As the answer can be large, output it modulo $998\,244\,353$.
To find the score of a set of intervals $([l_1, r_1], [l_2, r_2], \ldots, [l_v, r_v])$, we follow these steps: Initially, the score is set to $0$. We perform the following process repeatedly: Let $x$ be the interval with the smallest $r_i$ among all active intervals. Let $y$ be the interval with the largest $l_i$ among all active intervals. If $r_x < l_y$, add $l_y - r_x$ to the score, mark intervals $x$ and $y$ as inactive, and continue the process. If $r_x \geq l_y$, stop the process. At the end of this process, all active intervals will intersect at least one common point. Now, we need to prove that the process indeed gives us the minimum possible score. We can prove this by induction. Let $S$ be some set of intervals, and let $x$ and $y$ be the intervals defined above. Consider the set $S' = S \setminus \{x, y\}$ (i.e., $S'$ is the set $S$ excluding $x$ and $y$). We claim that: $\text{score}(S) \geq \text{score}(S') + \text{distance}(x, y)$ This is true because, for $x$ and $y$ to intersect, we must perform at least $\text{distance}(x, y)$ operations. Our construction achieves the lower bound of $\text{score}(S') + \text{distance}(x, y)$. Thus, $\text{score}(S) = \text{score}(S') + \text{distance}(x, y)$. During the process, we pair some intervals (possibly none). Specifically, in the $k$-th step, we pair the interval with the $k$-th smallest $r_i$ with the interval having the $k$-th largest $l_j$, and add the distance between them to the score. In problem G1, we can compute the contribution of each pair of intervals as follows: Suppose we consider a pair $(i, j)$. Without loss of generality, assume that $r_i < l_j$. The pair $(i, j)$ will be considered in some subset $S$ if there are exactly $x$ intervals $p$ such that $r_p < r_i$ and exactly $x$ intervals $p$ such that $l_p > l_j$, for some non-negative integer $x$. Let there be $g$ intervals $p$ such that $r_p < r_i$ and $h$ intervals $p$ such that $l_p > l_j$. 
For $(i, j)$ to be paired in some subset $S$, we must choose $x$ intervals from the $g$ intervals on the left and $x$ intervals from the $h$ intervals on the right, for some non-negative integer $x$. There are no restrictions on the remaining $n - 2 - g - h$ intervals. Therefore, the contribution of $(i, j)$ is: $\sum_{x = 0}^{g} (l_j - r_i) \cdot \binom{g}{x} \cdot \binom{h}{x} \cdot 2^{n - 2 - g - h}$ We can simplify this sum using the identity: $\sum_{x = 0}^{g} \binom{g}{x} \cdot \binom{h}{x} = \binom{g + h}{g}$ (This is a form of the Vandermonde Identity.) Thus, the contribution of $(i, j)$ becomes: $(l_j - r_i) \cdot \binom{g + h}{g} \cdot 2^{n - 2 - g - h}$ This can be computed in $O(1)$ time. Note that in the explanation above, we assumed that the interval endpoints are distinct for simplicity. If they are not, we can order the intervals based on their $l_i$ values to maintain consistency. Let us find the contribution of $r[i]$, the right endpoint of $i$-th interval. Let $x$ be the number of intervals $j$ such that $r[j] < r[i]$, and let $y$ be the number of intervals $j$ such that $l[j] > r[i]$. To determine the contribution of $r[i]$ to the final answer, we consider selecting $p$ intervals from the $x$ intervals to the left and $q$ intervals from the $y$ intervals to the right, with the constraint that $p < q$. We require $p < q$ so that interval $i$ is paired with some other interval on the right (as discussed in the solution for G1). Therefore, the contribution of $r[i]$ can be expressed as: $\text{Contribution} = -r[i] \cdot \text{ways}(x, y) \cdot 2^{n - 1 - x - y}$ Here, $\text{ways}(x, y)$ represents the number of valid selections where $p < q$. 
Calculating $\text{ways}(x, y)$: To compute $\text{ways}(x, y)$, we can use the Vandermonde identity to simplify the expression: $\text{ways}(x, y) = \sum_{0 \leq p < q \leq y} \binom{x}{p} \binom{y}{q}$ This can be rewritten as: $\text{ways}(x, y) = \sum_{p=0}^{x} \sum_{k=1}^{y} \binom{x}{p} \binom{y}{p + k}$ Define the function $g(k)$ as: $g(k) = \sum_{p=0}^{x} \binom{x}{p} \binom{y}{p + k}$ By applying the Vandermonde identity, we get: $g(k) = \binom{x + y}{x + k}$ Thus, the total number of ways is: $\text{ways}(x, y) = \sum_{k=1}^{y} \binom{x + y}{x + k}$ We can simplify this summation using the fact that $\sum_{i=0}^{x+y} \binom{x+y}{i} = 2^{x+y}$: $\text{ways}(x, y) = 2^{x + y} - h(x + y, x)$ where the function $h(p, q)$ is defined as: $h(p, q) = \sum_{i=0}^{q} \binom{p}{i}$ Efficient computation of $h(p, q)$: Note that: $h(p, q) = 2 \cdot h(p - 1, q) - \binom{p - 1}{q}$ Suppose throughout the solution, we call the function $h(p, q)$ $d$ times for pairs $(p_1, q_1), (p_2, q_2), \ldots, (p_d, q_d)$, in that order. We can observe that: $\sum_{i=2}^{d} \left(|p_i - p_{i-1}| + |q_i - q_{i-1}|\right) = O(n)$ Since $h(p_i, q_i)$ can be computed from $h(p_{i-1}, q_{i-1})$ in $|p_i - p_{i-1}| + |q_i - q_{i-1}|$ operations, the amortized time complexity for this part is $O(n)$. Final contribution: Combining the above results, the contribution of $r[i]$ to the answer is: $-r[i] \cdot \left(2^{x + y} - h(x + y, x)\right) \cdot 2^{n - 1 - x - y}$ A similar calculation can be applied to the contribution of $l[i]$. By summing these contributions across all relevant intervals, we obtain the final answer.
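Both the pairing process and the binomial identities above can be checked numerically on small cases (illustrative Python using `math.comb`; `score_direct` brute-forces the minimal number of expansions over all candidate common points):

```python
from itertools import product
from math import comb

def score_greedy(intervals):
    # pair the k-th smallest r with the k-th largest l while they don't meet
    rs = sorted(r for l, r in intervals)
    ls = sorted((l for l, r in intervals), reverse=True)
    total = 0
    for rx, ly in zip(rs, ls):
        if rx >= ly:
            break
        total += ly - rx
    return total

def score_direct(intervals):
    # minimal expansions so that all intervals cover a common integer point
    pts = [v for iv in intervals for v in iv]
    return min(sum(max(0, l - t, t - r) for l, r in intervals)
               for t in range(min(pts), max(pts) + 1))

all_ivs = [(l, r) for l in range(5) for r in range(l, 5)]
for m in range(1, 4):
    for iv in product(all_ivs, repeat=m):
        assert score_greedy(iv) == score_direct(iv)

def ways(x, y):                 # number of selections with p < q
    return sum(comb(x, p) * comb(y, q)
               for p in range(x + 1) for q in range(p + 1, y + 1))

def h(p, q):                    # prefix sum of a binomial row
    return sum(comb(p, i) for i in range(q + 1))

for x in range(6):
    for y in range(6):
        assert ways(x, y) == 2 ** (x + y) - h(x + y, x)
for p in range(1, 8):
    for q in range(p + 1):
        assert h(p, q) == 2 * h(p - 1, q) - comb(p - 1, q)
```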
[ "combinatorics", "math" ]
3100
#pragma GCC optimize("Ofast") #include <bits/stdc++.h> #include <ext/pb_ds/tree_policy.hpp> #include <ext/pb_ds/assoc_container.hpp> using namespace __gnu_pbds; using namespace std; #define ll long long #define ld long double #define nline "\n" #define f first #define s second #define sz(x) (ll)x.size() #define vl vector<ll> const ll INF_MUL=1e13; const ll INF_ADD=1e18; #define all(x) x.begin(),x.end() mt19937 rng(chrono::steady_clock::now().time_since_epoch().count()); typedef tree<ll, null_type, less<ll>, rb_tree_tag, tree_order_statistics_node_update> ordered_set; //-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- const ll MOD=998244353; const ll MAX=5000500; vector<ll> fact(MAX+2,1),inv_fact(MAX+2,1); ll binpow(ll a,ll b,ll MOD){ ll ans=1; a%=MOD; while(b){ if(b&1) ans=(ans*a)%MOD; b/=2; a=(a*a)%MOD; } return ans; } ll inverse(ll a,ll MOD){ return binpow(a,MOD-2,MOD); } void precompute(ll MOD){ for(ll i=2;i<MAX;i++){ fact[i]=(fact[i-1]*i)%MOD; } inv_fact[MAX-1]=inverse(fact[MAX-1],MOD); for(ll i=MAX-2;i>=0;i--){ inv_fact[i]=(inv_fact[i+1]*(i+1))%MOD; } } ll nCr(ll a,ll b,ll MOD){ if((a<0)||(a<b)||(b<0)) return 0; ll denom=(inv_fact[b]*inv_fact[a-b])%MOD; return (denom*fact[a])%MOD; } ll l[MAX],r[MAX],power[MAX]; ll ans=0,on_left=0,on_right,len; ll x=0,y=0,ways=0,inv2; ll getv(){ while(x>=len+1){ ways=((ways+nCr(x-1,y,MOD))*inv2)%MOD; x--; } while(x<=len-1){ ways=(2ll*ways-nCr(x,y,MOD)+MOD)%MOD; x++; } while(y<=on_left-1){ ways=(ways+nCr(x,y+1,MOD))%MOD; y++; } return ways; } void solve(){ ll n; cin>>n; power[0]=1; vector<array<ll,2>> track; multiset<ll> consider; for(ll i=1;i<=n;i++){ cin>>l[i]>>r[i]; track.push_back({r[i],l[i]}); consider.insert(l[i]); power[i]=(power[i-1]*2ll)%MOD; } ans=on_left=0; on_right=len=n; x=y=0; ways=1; sort(all(track)); for(auto it:track){ while(!consider.empty()){ if(*consider.begin() <= it[0]){ 
consider.erase(consider.begin()); on_right--,len--; } else{ break; } } ll now=power[len]-getv(); now=(now*power[n-1-len])%MOD; now=(now*it[0])%MOD; ans=(ans+MOD-now)%MOD; on_left++,len++; } track.clear(); consider.clear(); for(ll i=1;i<=n;i++){ track.push_back({l[i],r[i]}); consider.insert(r[i]); } sort(all(track)); reverse(all(track)); on_left=0; on_right=len=n; x=y=0; ways=1; for(auto it:track){ while(!consider.empty()){ if(*(--consider.end()) >= it[0]){ consider.erase(--consider.end()); on_right--,len--; } else{ break; } } ll now=power[len]-getv(); now=(now*power[n-1-len])%MOD; now=(now*it[0])%MOD; ans=(ans+now)%MOD; on_left++,len++; } ans=(ans+MOD)%MOD; cout<<ans<<nline; return; } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); ll test_cases=1; cin>>test_cases; precompute(MOD); inv2=inverse(2,MOD); while(test_cases--){ solve(); } cout<<fixed<<setprecision(12); cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n"; }
2031
A
Penchick and Modern Monument
Amidst skyscrapers in the bustling metropolis of Metro Manila, the newest Noiph mall in the Philippines has just been completed! The construction manager, Penchick, ordered a state-of-the-art monument to be built with $n$ pillars. The heights of the monument's pillars can be represented as an array $h$ of $n$ positive integers, where $h_i$ represents the height of the $i$-th pillar for all $i$ between $1$ and $n$. Penchick wants the heights of the pillars to be in \textbf{non-decreasing} order, i.e. $h_i \le h_{i + 1}$ for all $i$ between $1$ and $n - 1$. However, due to confusion, the monument was built such that the heights of the pillars are in \textbf{non-increasing} order instead, i.e. $h_i \ge h_{i + 1}$ for all $i$ between $1$ and $n - 1$. Luckily, Penchick can modify the monument and do the following operation on the pillars as many times as necessary: - Modify the height of a pillar to any positive integer. Formally, choose an index $1\le i\le n$ and a positive integer $x$. Then, assign $h_i := x$. Help Penchick determine the minimum number of operations needed to make the heights of the monument's pillars \textbf{non-decreasing}.
Consider the maximum number of pillars that can be left untouched instead. Under what conditions can $k>1$ pillars be untouched? Note that if pillars $i<j$ with different heights are both untouched, then $h_i > h_j$ (since the initial heights are non-increasing), which contradicts the requirement that the heights be non-decreasing after the adjustment. Thus, all untouched pillars must be of the same height. Therefore, the required number of pillars to adjust is $n-k$, where $k$ is the maximum number of pillars with the same height $h$. This bound is reachable by, for example, adjusting all other pillars to height $h$. To find $k$, we can go through each index $i$ and count the number of pillars with the same height as pillar $i$; this gives $O(n^2)$ time complexity, which is good enough for this problem. Alternatively, you can use a frequency array or std::map, or sequentially go through the list and find the longest run of equal heights, all of which have better time complexities.
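The frequency-count approach can be sketched in a few lines; this is an illustrative Python sketch (the function name is ours, not the editorial's official solution):

```python
from collections import Counter

def min_operations(h):
    # k = the largest number of pillars sharing one height;
    # adjusting the remaining n - k pillars to that height suffices.
    return len(h) - max(Counter(h).values())
```

For example, with heights `[5, 4, 4, 1]` the height `4` appears twice, so two operations are needed.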
[ "constructive algorithms", "dp", "greedy", "math" ]
800
#include <bits/stdc++.h> using namespace std; void solve(){ int n; cin >> n; vector<int> arr(n); for(auto &x : arr) cin >> x; int ans = 0, cnt = 1; for(int i = 1; i < n; i++){ if(arr[i] == arr[i - 1]) cnt++; else{ ans = max(ans, cnt); cnt = 1; } } ans = max(ans, cnt); cout << n - ans << "\n"; } int main(){ ios_base::sync_with_stdio(false); cin.tie(nullptr); int t; cin >> t; while(t--){ solve(); } }
2031
B
Penchick and Satay Sticks
Penchick and his friend Kohane are touring Indonesia, and their next stop is in Surabaya! In the bustling food stalls of Surabaya, Kohane bought $n$ satay sticks and arranged them in a line, with the $i$-th satay stick having length $p_i$. It is given that $p$ is a permutation$^{\text{∗}}$ of length $n$. Penchick wants to sort the satay sticks in increasing order of length, so that $p_i=i$ for each $1\le i\le n$. For fun, they created a rule: they can only swap neighboring satay sticks whose lengths differ by exactly $1$. Formally, they can perform the following operation any number of times (including zero): - Select an index $i$ ($1\le i\le n-1$) such that $|p_{i+1}-p_i|=1$; - Swap $p_i$ and $p_{i+1}$. Determine whether it is possible to sort the permutation $p$, thus the satay sticks, by performing the above operation. \begin{footnotesize} $^{\text{∗}}$A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array). \end{footnotesize}
Consider which permutations you can get by reversing the operations and starting from the identity permutation. After $p_i$ and $p_{i+1}$ have been swapped, i.e. $p_i=i+1$ and $p_{i+1}=i$, neither of them can then be involved in another different swap. Suppose we begin with the identity permutation. Consider what happens after swapping $p_i=i$ and $p_{i+1}=i+1$. After this swap, elements $p_1$ to $p_{i-1}$ will consist of $1$ to $i-1$, and $p_{i+2}$ to $p_n$ will consist of $i+2$ to $n$. Thus, it is impossible for $p_i=i+1$ to swap with $p_{i-1}\lt i$, or for $p_{i+1}=i$ to swap with $p_{i+2}\gt i+1$. Therefore, the swaps made must be independent of each other; in other words, the indices $i$ chosen in the process must differ from each other by at least $2$. These permutations satisfy the following: for each index $i$, either $p_i=i$, or $p_i=i+1$ and $p_{i+1}=i$, or $p_i=i-1$ and $p_{i-1}=i$. One way to check for this is to iterate over $i$ from $1$ to $n$. If $p_i=i$, continue; if $p_i=i+1$, check whether $p_{i+1}=i$, and if so swap $p_i$ and $p_{i+1}$. Otherwise, the permutation cannot be sorted. Time complexity: $O(n)$
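The linear check described in the editorial can be sketched as follows (a hypothetical Python helper mirroring the scan; the name is ours):

```python
def sortable(p):
    # Each position either already holds its value, or starts an
    # adjacent transposition (i+2, i+1) fixed by one allowed swap.
    n, i = len(p), 0
    while i < n:
        if p[i] == i + 1:
            i += 1
        elif i + 1 < n and p[i] == i + 2 and p[i + 1] == i + 1:
            i += 2
        else:
            return False
    return True
```

For instance, `[2, 1, 4, 3]` is sortable (two disjoint adjacent swaps), while `[3, 1, 2]` is not.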
[ "brute force", "greedy", "sortings" ]
900
#include <bits/stdc++.h> using namespace std; void solve(){ int n; cin >> n; vector<int> arr(n); for(auto &x : arr) cin >> x; for(int i = 0; i < n - 1; i++){ if(arr[i] != i + 1){ if(arr[i + 1] == i + 1 && arr[i] == i + 2) swap(arr[i], arr[i + 1]); else{ cout << "NO\n"; return; } } } cout << "YES\n"; } int main(){ ios_base::sync_with_stdio(false); cin.tie(nullptr); int t; cin >> t; while(t--){ solve(); } }
2031
C
Penchick and BBQ Buns
Penchick loves two things: square numbers and Hong Kong-style BBQ buns! For his birthday, Kohane wants to combine them with a gift: $n$ BBQ buns arranged from left to right. There are $10^6$ available fillings of BBQ buns, numbered from $1$ to $10^6$. To ensure that Penchick would love this gift, Kohane has a few goals: - No filling is used exactly once; that is, each filling must either not appear at all or appear at least twice. - For any two buns $i$ and $j$ that have the same filling, the distance between them, which is $|i-j|$, must be a perfect square$^{\text{∗}}$. Help Kohane find a valid way to choose the filling of the buns, or determine if it is impossible to satisfy her goals! If there are multiple solutions, print any of them. \begin{footnotesize} $^{\text{∗}}$A positive integer $x$ is a perfect square if there exists a positive integer $y$ such that $x = y^2$. For example, $49$ and $1$ are perfect squares because $49 = 7^2$ and $1 = 1^2$ respectively. On the other hand, $5$ is not a perfect square as no integer squared equals $5$ \end{footnotesize}
Solve the problem for $n=2$ and for even $n$ in general. For odd $n$, there exists a color that appears at least thrice. What does this mean? Note that $1$ is a square number; thus, for even $n$, the construction $1~1~2~2~3~3\ldots\frac{n}{2}~\frac{n}{2}$ works. For odd $n$, note that there exists a color that appears at least thrice (the counts cannot all be even, since $n$ is odd), say at positions $x\lt y \lt z$. Then $y-x$, $z-y$ and $z-x$ are all square numbers. Note that $z-x=(z-y)+(y-x)$, whose smallest solution is $z-x=5^2=25$, with $\{z-y,y-x\}=\{9,16\}$. Therefore, there is no solution if $n\le 25$. We devise a solution for $n=27$. By the above, we have the following positions filled in: $1\text{ (8 blanks) }1\text{ (15 blanks) }1~\underline{ }$ We can use the same color for positions $11$ and $27$, to obtain the following: $1\text{ (8 blanks) }1~2\text{ (14 blanks) }1~2$ The remaining even-length blanks can be filled in similarly to the even case. The result is as follows and can be hard-coded: $\mathtt{1~3~3~4~4~5~5~6~6~1~2~7~7~8~8~9~9~10~10~11~11~12~12~13~13~1~2}$ Then, for odd $n\ge 27$, add $\frac{n-27}{2}$ pairs with distance $1$ to complete the construction. Note that there are different ways to construct this starting array for $n=27$ as well. Time complexity: $O(n)$
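The hard-coded $n=27$ base array can be checked mechanically. A small verification sketch in Python (the checker itself is ours, for illustration):

```python
def valid(fill):
    # no filling used exactly once; equal fillings at square distances
    pos = {}
    for i, f in enumerate(fill, start=1):
        pos.setdefault(f, []).append(i)
    for ps in pos.values():
        if len(ps) < 2:
            return False
        for a in ps:
            for b in ps:
                d = b - a
                if d > 0 and int(d ** 0.5) ** 2 != d:
                    return False
    return True

# the editorial's base construction for n = 27
base27 = [1, 3, 3, 4, 4, 5, 5, 6, 6, 1, 2, 7, 7, 8, 8,
          9, 9, 10, 10, 11, 11, 12, 12, 13, 13, 1, 2]
```

Filling $1$ sits at positions $1$, $10$, $26$ (gaps $9$, $16$, $25$) and filling $2$ at positions $11$, $27$ (gap $16$), all perfect squares.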
[ "constructive algorithms", "math", "number theory" ]
1,300
#include <bits/stdc++.h> using namespace std; int main() { int t;cin>>t; while (t--) { int n;cin>>n; if (n%2) { if (n<27) cout<<-1<<endl; else { cout<<"1 3 3 4 4 5 5 6 6 1 2 7 7 8 8 9 9 10 10 11 11 12 12 13 13 1 2 "; for (int i=14;i<=n/2;i++) cout<<i<<" "<<i<<" "; cout<<endl; } } else { for (int i=1;i<=n/2;i++) cout<<i<<" "<<i<<" "; cout<<endl; } } }
2031
D
Penchick and Desert Rabbit
Dedicated to pushing himself to his limits, Penchick challenged himself to survive the midday sun in the Arabian Desert! While trekking along a linear oasis, Penchick spots a desert rabbit preparing to jump along a line of palm trees. There are $n$ trees, each with a height denoted by $a_i$. The rabbit can jump from the $i$-th tree to the $j$-th tree if exactly one of the following conditions is true: - $j < i$ and $a_j > a_i$: the rabbit can jump backward to a taller tree. - $j > i$ and $a_j < a_i$: the rabbit can jump forward to a shorter tree. For each $i$ from $1$ to $n$, determine the maximum height among all trees that the rabbit can reach if it starts from the $i$-th tree.
Suppose that you have found the maximum height reachable from tree $i+1$. How do you find the maximum height reachable from tree $i$? Let $p$ be the highest height among trees indexed from $1$ to $i$, and $s$ be the lowest height among trees indexed from $i+1$ to $n$. When can tree $i$ be reachable from tree $i+1$? First, observe that a rabbit at tree $n$ can reach the highest tree; if that tree has index $i<n$, then the rabbit can jump from tree $n$ backward to $i$. Let $ans_k$ denote the tallest height reachable from tree $k$; then $ans_n=\max(a_1,a_2,\ldots,a_n)$. We iteratively look at trees $n-1$ down to $1$. Suppose we have found the tallest height $ans_{i+1}$ reachable from tree $i+1$. Note that from tree $i$ we can reach the tallest tree with index between $1$ and $i$, and from tree $i+1$ we can reach the shortest tree with index between $i+1$ and $n$. Let $a_x=p_i=\max(a_1,a_2,\ldots,a_i)$ and $a_y=s_i=\min(a_{i+1},a_{i+2},\ldots,a_n)$. If $p_i>s_i$, then tree $i+1$ is reachable from tree $i$ by the sequence $i\leftrightarrow x\leftrightarrow y\leftrightarrow i+1$. Thus, any tree reachable from tree $i$ is reachable from tree $i+1$, and vice versa; hence $ans_i=ans_{i+1}.$ On the other hand, if $p_i\le s_i$, then for any $r\le i$ and $s\ge i+1$, we have $r<s$ and $a_r\le p_i\le s_i\le a_s$. Thus, no tree between index $i+1$ and $n$ inclusive is reachable from any tree between $1$ and $i$ inclusive. Similar to the first paragraph, we have $ans_i=\max(a_1,a_2,\ldots,a_i)=p_i$. Time complexity: $O(n)$
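The recurrence translates directly into prefix maxima and suffix minima; an illustrative Python sketch of the same computation:

```python
def max_reach(a):
    n = len(a)
    pref = a[:]                      # pref[i] = max(a[0..i])
    for i in range(1, n):
        pref[i] = max(pref[i - 1], a[i])
    suff = a[:]                      # suff[i] = min(a[i..n-1])
    for i in range(n - 2, -1, -1):
        suff[i] = min(suff[i + 1], a[i])
    ans = [0] * n
    ans[n - 1] = pref[n - 1]         # tree n reaches the global maximum
    for i in range(n - 2, -1, -1):
        # trees i and i+1 reach the same set iff pref[i] > suff[i+1]
        ans[i] = ans[i + 1] if pref[i] > suff[i + 1] else pref[i]
    return ans
```

For the heights `[2, 3, 1]` every starting tree can reach height $3$; for a strictly increasing array no jump is possible at all.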
[ "binary search", "data structures", "dfs and similar", "dp", "dsu", "greedy", "implementation", "two pointers" ]
1,700
#include <bits/stdc++.h> using namespace std; int main() { cin.tie(0)->sync_with_stdio(0); int t; cin >> t; while (t--) { int n; cin >> n; vector<int> ar(n), pref(n), suff(n), ans(n); for (int i = 0; i < n; i++) cin >> ar[i]; // prefix maximum pref[0] = ar[0]; for (int i = 1; i < n; i++) pref[i] = max(pref[i-1], ar[i]); // suffix minimum suff[n-1] = ar[n-1]; for (int i = n - 2; i >= 0; i--) suff[i] = min(suff[i+1], ar[i]); ans[n-1] = pref[n-1]; // maximum of all a[i] for (int i=n-2;i>=0;i--) { if (pref[i]>suff[i+1]) ans[i] = ans[i+1]; else ans[i] = pref[i]; } for (int i = 0; i < n; i++) cout << ans[i] << " "; cout << "\n"; } }
2031
E
Penchick and Chloe's Trees
With just a few hours left until Penchick and Chloe leave for Singapore, they could hardly wait to see the towering trees at the Singapore Botanic Gardens! Attempting to contain their excitement, Penchick crafted a rooted tree to keep Chloe and himself busy. Penchick has a rooted tree$^{\text{∗}}$ consisting of $n$ vertices, numbered from $1$ to $n$, with vertex $1$ as the root, and Chloe can select a non-negative integer $d$ to create a perfect binary tree$^{\text{†}}$ of depth $d$. Since Penchick and Chloe are good friends, Chloe wants her tree to be isomorphic$^{\text{‡}}$ to Penchick's tree. To meet this condition, Chloe can perform the following operation on her own tree any number of times: - Select an edge $(u,v)$, where $u$ is the parent of $v$. - Remove vertex $v$ and all the edges connected to $v$, then connect all of $v$'s previous children directly to $u$. In particular, doing an operation on an edge $(u, v)$ where $v$ is a leaf will delete vertex $v$ without adding any new edges. Since constructing a perfect binary tree can be time-consuming, Chloe wants to choose the minimum $d$ such that a perfect binary tree of depth $d$ can be made isomorphic to Penchick's tree using the above operation. Note that she can't change the roots of the trees. \begin{footnotesize} $^{\text{∗}}$A tree is a connected graph without cycles. A rooted tree is a tree where one vertex is special and called the root. The parent of vertex $v$ is the first vertex on the simple path from $v$ to the root. The root has no parent. A child of vertex $v$ is any vertex $u$ for which $v$ is the parent. A leaf is any vertex without children. $^{\text{†}}$A full binary tree is rooted tree, in which each node has $0$ or $2$ children. A perfect binary tree is a full binary tree in which every leaf is at the same distance from the root. The depth of such a tree is the distance from the root to a leaf. 
$^{\text{‡}}$Two rooted trees, rooted at $r_1$ and $r_2$ respectively, are considered isomorphic if there exists a permutation $p$ of the vertices such that an edge $(u, v)$ exists in the first tree if and only if the edge $(p_u, p_v)$ exists in the second tree, and $p_{r_1} = r_2$. \end{footnotesize}
Consider the tree where root $1$ has $k$ children which are all leaves. What is its minimum depth? Consider undoing the operations from the given tree back to the perfect binary tree. Where can each child of the tree go? As in Hint 2, suppose that there exists a finite sequence of operations that converts a perfect binary tree $T_d$ of depth $d$ to our target tree $T$. We consider where each child of vertex $1$ in $T$ is mapped to in the binary tree; specifically, for each such child $c$, let $v_c$ be the highest vertex in $T_d$ that maps to $c$ under the operations. Then we can see that the subtree rooted at $v_c$ in $T_d$ maps to the subtree rooted at $c$ in $T$. Suppose that the minimum depth required for the subtree rooted at $c$ in $T$ is $d_c$. Claim 1: $2^d\ge \sum_{c}2^{d_c}$, where the sum is taken across all children $c$ of $1$. Note that no two of the $v_c$ are ancestors or descendants of each other; otherwise, if $v_{c_1}$ were an ancestor of $v_{c_2}$, then $c_1$ would be an ancestor of $c_2$, contradicting the fact that both are children of the root. Consider the $2^d$ leaves of $T_d$. For each $c$, at least $2^{d_c}$ of them are descendants of $v_c$. As no leaf can be a descendant of two $v_c$'s, the inequality follows. Claim 2: If $1$ has only one child $c$, then $d=d_c+1$; otherwise, $d$ is the least integer that satisfies the inequality of Claim 1. Suppose $1$ only has one child $c$. Then $d\le d_c$ clearly does not suffice, but $d=d_c+1$ does, as we can merge the entire right subtree into the root $1$. Suppose now that $1$ has multiple children $c_1,c_2,\ldots,c_k$, sorted by descending $d_c$. For each child $c_i$ from $c_1$ to $c_k$, allocate $v_{c_i}$ to be the leftmost possible vertex at height $d_{c_i}$. Then the leaves that are descendants of $v_{c_i}$ form a contiguous segment, so this construction ensures that each child $c_i$ can be placed on the tree. Thus, we can apply tree dp with the transition function from $d_{c_i}$ to $d$ described by Claim 2. 
However, naively implementing it has a worst-case time complexity of $O(n^2).$ Consider a tree constructed this way: $2$ and $3$ are children of $1$, $4$ and $5$ are children of $3$, $6$ and $7$ are children of $5$, and so on; the odd-numbered vertices form a chain with the even-numbered vertices being leaves. In such a graph, the depth $d$ is the length of the odd-numbered chain. Thus, during the computation of $d$, we would have to evaluate $2^x+1$ for $x$ from $1$ to $d\approx\frac{n}{2}.$ However, evaluating $2^x+1$ naively requires at least $O(x)$ time, so the algorithm runs in $O(n^2)$. There are several ways to improve the time complexity of the dp transition. For example, sort $d_{c_i}$ in increasing order, then maintain the sum as $a\times 2^b$, where initially $a=b=0$. For each $c_i$ in order, replace $a$ by $\lceil \frac{a}{2^{d_{c_i}-b}}\rceil$ and replace $b$ by $d_{c_i}$, which essentially serves to round up the sum to the nearest multiple of $2^{d_{c_i}}$. Then, increment $a$ to add $2^{d_{c_i}}$ to the sum. At the end of these operations, we have $d=\lceil \log_2 a\rceil+b$. Another way to do so is to "merge" terms from smallest to largest; importantly, since we just need to crudely bound $S=\sum_{c}2^{d_c}$, we can replace $2^x+2^y$ by $2^{\max(x,y)+1}$. Then we can repeat this process until only one element remains. Suppose that $x>y$. Then the above operation rounds up $S$ to the nearest multiple of $2^x$. Since $2^d\ge S>2^x$, rounding up $S$ will not cause it to exceed the next power of $2$, so $2^d\ge S$ remains true after the operation.
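One way to realise the $a\times 2^b$ bookkeeping for a single dp transition, sketched in Python (the function name and interface are ours; it keeps $a$ no larger than the number of children instead of building huge powers of two):

```python
def depth_needed(child_depths):
    # Minimum d with 2^d >= sum of 2^{d_c} over the children
    # (or d_c + 1 when there is a single child), per Claim 2.
    if not child_depths:
        return 0
    if len(child_depths) == 1:
        return child_depths[0] + 1
    a, b = 0, 0                      # running sum maintained as a * 2^b
    for d in sorted(child_depths):
        a = -(-a // (1 << (d - b)))  # round the sum up to a multiple of 2^d
        b = d
        a += 1                       # add the term 2^d
    return (a - 1).bit_length() + b  # ceil(log2(a)) + b
```

For example, three leaf children (depths $0,0,0$) need $2^d\ge 3$, i.e. $d=2$; a single child of depth $2$ needs $d=3$.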
[ "data structures", "dfs and similar", "dp", "greedy", "implementation", "math", "sortings", "trees" ]
2,100
#include <bits/stdc++.h> using namespace std; const int maxn = 1'000'000; void dfs(int u, int p, vector<vector<int>>& adj, vector<int>& depth) { priority_queue<int,vector<int>,greater<int>> pq; for (int v : adj[u]) { if (v != p) { dfs(v, u, adj, depth); pq.push(depth[v]); } } if (pq.size() == 0) { depth[u] = 0; } else if (pq.size() == 1) { depth[u] = pq.top() + 1; } else { int hold = -1; bool add = false; while (pq.size()) { if (pq.top() != hold) { if (hold != -1) { add = true; } hold = pq.top(); } else { pq.push(hold + 1); hold = -1; } pq.pop(); } depth[u] = hold + add; } } int main() { cin.tie(0)->sync_with_stdio(0); int t;cin>>t;while (t--) { int n; cin >> n; vector<int> depth(n); vector<vector<int>> adj(n); for (int i = 1; i < n; i++) { int p; cin >> p; adj[p - 1].push_back(i); adj[i].push_back(p - 1); } dfs(0, 0, adj, depth); cout << depth[0] << "\n"; } }
2031
F
Penchick and Even Medians
This is an interactive problem. Returning from a restful vacation on Australia's Gold Coast, Penchick forgot to bring home gifts for his pet duck Duong Canh! But perhaps a beautiful problem crafted through deep thought on the scenic beaches could be the perfect souvenir. There is a hidden permutation$^{\text{∗}}$ $p$ of length $n$, where $n$ is even. You are allowed to make the following query: - Choose a subsequence$^{\text{†}}$ of the permutation $p$ with even length $4\le k\le n$. The interactor will return the \textbf{value} of the two medians$^{\text{‡}}$ in the chosen subsequence. Find the \textbf{index} of the two medians in permutation $p$ using at most $80$ queries. Note that the interactor is \textbf{non-adaptive}. This means that the permutation $p$ is fixed at the beginning and will not change based on your queries. \begin{footnotesize} $^{\text{∗}}$A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array). $^{\text{†}}$A sequence $a$ is a subsequence of a sequence $b$ if $a$ can be obtained from $b$ by the deletion of several (possibly, zero or all) element from arbitrary positions. $^{\text{‡}}$The two medians of an array $a$ with even length $k$ are defined as the $\frac{k}{2}$-th and $\left(\frac{k}{2} + 1\right)$-th \textbf{smallest} element in the array ($1$-indexed). \end{footnotesize}
Querying $n - 2$ elements is very powerful. Try to find two indices $x$ and $y$ such that one of $p_x, p_y$ is strictly smaller than $\frac{n}{2}$ and the other is strictly greater than $\frac{n}{2} + 1$. What can you do after finding these two elements? This solution is non-deterministic and uses $\frac{n}{2} + O(1)$ queries. Part 1: Suppose we select all elements except for two indices $1 \le i, j \le n$ to be used in the query. Let the result we receive be $(a, b)$ where $a \lt b$. If $a = \frac{n}{2}$ and $b = \frac{n}{2} + 1$, it means that one of $p_i, p_j$ is strictly smaller than $\frac{n}{2}$ and the other is strictly larger than $\frac{n}{2}$. If we do the above query randomly, there is around a $50\%$ chance of getting the above outcome. So we can just randomly select two indices to exclude from the query until we get the above result. Part 2: Now that we have two elements $x$ and $y$ such that one of $p_x, p_y$ is strictly smaller than $\frac{n}{2}$ and the other is strictly greater than $\frac{n}{2} + 1$, we can query $[x, y, i, j]$. The medians of $[p_x, p_y, p_i, p_j]$ will include $\frac{n}{2}$ if and only if one of $p_i, p_j$ is equal to $\frac{n}{2}$. The same is true for $\frac{n}{2} + 1$. We can iterate through all $\frac{n}{2} - 1$ pairs to find the two pairs that contain the medians, then iterate through all ${4\choose 2}$ combinations of candidates to find the answer. This solution is deterministic and uses $\frac{3n}{4}$ queries. To make the solution deterministic, we need a deterministic solution to Part 1. Part 2 is already deterministic, so we can just use the same solution. Let us analyse all the possible cases if we select all elements except for two indices $1 \le i, j \le n$ to be used in the query. Let the result we receive be $(a, b)$ where $a \lt b$. Case 1: $a = \frac{n}{2}$ and $b = \frac{n}{2} + 1$. 
In this case, one of $p_i, p_j$ is strictly smaller than $\frac{n}{2}$ and the other is strictly larger than $\frac{n}{2}$, which is exactly what we wanted to find. Case 2: $a = \frac{n}{2}$ and $b = \frac{n}{2} + 2$. In this case, one of $p_i, p_j$ is strictly smaller than $\frac{n}{2}$ and the other is equal to $\frac{n}{2} + 1$. Case 3: $a = \frac{n}{2} - 1$ and $b = \frac{n}{2} + 1$. In this case, one of $p_i, p_j$ is strictly larger than $\frac{n}{2} + 1$ and the other is equal to $\frac{n}{2}$. Case 4: $a = \frac{n}{2} - 1$ and $b = \frac{n}{2}$. In this case, both of $p_i, p_j$ are larger than or equal to $\frac{n}{2} + 1$. Case 5: $a = \frac{n}{2} + 1$ and $b = \frac{n}{2} + 2$. In this case, both $p_i, p_j$ are smaller than or equal to $\frac{n}{2}$. Case 6: $a = \frac{n}{2} - 1$ and $b = \frac{n}{2} + 2$. In this case, $p_i$ and $p_j$ are the two medians. If we have two queries such that one falls in case 4 and another in case 5, then we can use one element from each pair to form the desired $x$ and $y$. We have to use one additional query to make sure that the chosen elements are not among the medians. We will ask at most $\frac{n}{4}$ queries before there is at least one query of case 4 and one query of case 5. Cases 2 and 3 can be treated together with cases 4 and 5 with some careful handling. This solution is deterministic and uses $\frac{n}{2}+\log_2 n$ queries. Special thanks to SHZhang for discovering this solution. Call the elements of the permutation less than or equal to $\frac{n}{2}$ lower elements, and call the rest upper elements. Pair up the permutation's elements (we can just pair index 1 with index 2, index 3 with index 4, and so on). Do $\frac{n}{2}$ queries where the $i$-th query consists of all elements except those in the $i$-th pair. This lets us determine whether (1) both elements in the pair are lower elements, (2) both are upper elements, or (3) there is one lower and one upper element in the pair. 
In case (3), we can also tell if the pair contains one or both of the desired medians (Take a look at solution 2 for a more in-depth case analysis). Our goal now is to identify the pairs that the lower and upper medians belong to. It suffices to be able to find them from the pairs of type (1) and (2), since we have already found the ones in type (3) pairs. This can be done with binary search on the type (1) and (2) pairs, by balancing the number of pairs of both types and checking if the median is in the result (The two binary searches can be performed simultaneously). This is similar to Part 2 of solution 1, but instead of just using $4$ elements, we can generalise to use more elements if there is an equal number of lower and upper elements. After figuring out which pair each median is in, there are four possibilities remaining for the answer. For each one, make a query consisting of all the elements but the two candidates for the two medians that we are checking. When we see $\left(\frac{n}{2} - 1, \frac{n}{2} + 2\right)$ as the response, we know we found the answer.
[ "binary search", "constructive algorithms", "interactive", "probabilities" ]
2,800
#include <cstdio> #include <algorithm> #include <vector> #include <set> #include <queue> #include <cmath> #include <cstdlib> #include <utility> #include <cstring> using namespace std; #define ll long long #define MOD // Insert modulo here #define mul(a, b) (((ll)(a) * (ll)(b)) % MOD) #ifndef ONLINE_JUDGE #define debug(format, ...) fprintf(stderr, \ "%s:%d: " format "\n", __func__, __LINE__,##__VA_ARGS__) #else #define debug(format, ...) #define NDEBUG #endif int n, k; pair<int, int> query(vector<int>& v) { printf("? %d ", (int)v.size()); for (int x: v) printf("%d ", x); printf("\n"); fflush(stdout); int m1, m2; scanf("%d%d", &m1, &m2); if (m1 == -1) { exit(0); } return make_pair(m1, m2); } pair<int, int> query_missing(int x, int y) { vector<int> v; for (int i = 1; i <= n; i++) { if (i != x && i != y) v.push_back(i); } return query(v); } void work() { scanf("%d", &n); k = n / 2; vector<int> lower_pairs, upper_pairs; int lower_mid_pair = -1; int upper_mid_pair = -1; for (int i = 1; i <= k; i++) { pair<int, int> pr = query_missing(2*i-1, 2*i); if (pr.first == k+1 && pr.second == k+2) { lower_pairs.push_back(i); } else if (pr.first == k-1 && pr.second == k) { upper_pairs.push_back(i); } else { bool has_lower_mid = (pr.first == k-1); bool has_upper_mid = (pr.second == k+2); if (has_lower_mid && has_upper_mid) { printf("! 
%d %d\n", 2*i-1, 2*i); fflush(stdout); return; } else if (has_lower_mid) { lower_mid_pair = i; } else if (has_upper_mid) { upper_mid_pair = i; } } } if (lower_mid_pair == -1 || upper_mid_pair == -1) { int loweridx = 0, upperidx = 0; for (int bit = 0; bit < 10; bit++) { vector<int> qr; for (int i = 0; i < lower_pairs.size(); i++) { if (!(i & (1 << bit))) { qr.push_back(2*lower_pairs[i]); qr.push_back(2*lower_pairs[i]-1); qr.push_back(2*upper_pairs[i]); qr.push_back(2*upper_pairs[i]-1); } } pair<int, int> result = query(qr); if (result.first < k) loweridx |= (1 << bit); if (result.second > k+1) upperidx |= (1 << bit); } if (lower_mid_pair == -1) lower_mid_pair = lower_pairs[loweridx]; if (upper_mid_pair == -1) upper_mid_pair = upper_pairs[upperidx]; } /*if (lower_mid_pair == upper_mid_pair) { printf("! %d %d\n", 2*lower_mid_pair-1, 2*lower_mid_pair); return; }*/ for (int i = 2*lower_mid_pair-1; i <= 2*lower_mid_pair; i++) { for (int j = 2*upper_mid_pair-1; j <= 2*upper_mid_pair; j++) { pair<int, int> result = query_missing(i, j); if (result.first == k-1 && result.second == k+2) { printf("! %d %d\n", i, j); fflush(stdout); return; } } } } int main() { int t; scanf("%d", &t); for (int i = 1; i <= t; i++) { work(); fflush(stdout); } return 0; }
2032
A
Circuit
Alice has just crafted a circuit with $n$ lights and $2n$ switches. Each component (a light or a switch) has two states: on or off. The lights and switches are arranged in a way that: - Each light is connected to \textbf{exactly two} switches. - Each switch is connected to \textbf{exactly one} light. It's \textbf{unknown} which light each switch is connected to. - When all switches are off, all lights are also off. - If a switch is toggled (from on to off, or vice versa), the state of the light connected to it will also toggle. Alice brings the circuit, which shows only the states of the $2n$ switches, to her sister Iris and gives her a riddle: what is the minimum and maximum number of lights that can be turned on? Knowing her little sister's antics too well, Iris takes no more than a second to give Alice a correct answer. Can you do the same?
Observe that an even number of switch toggles on the same light will not change that light's status. In other words, a light is on if and only if exactly one of the two switches connected to it is on. Let's denote by $\text{cnt}_0$ and $\text{cnt}_1$ the number of off switches and on switches in the circuit. We see that the maximum number of on lights is $\min(\text{cnt}_0, \text{cnt}_1)$: we can't achieve more than this amount, since each on light consumes one off switch and one on switch, decreasing both $\text{cnt}_0$ and $\text{cnt}_1$ by $1$; and we can achieve this amount by matching $\min(\text{cnt}_0, \text{cnt}_1)$ pairs of one off switch and one on switch, with the rest matched arbitrarily. The minimum number of on lights is $\text{cnt}_0 \bmod 2$. Since $\text{cnt}_0 + \text{cnt}_1 = 2n$, they have the same parity. When both $\text{cnt}_0$ and $\text{cnt}_1$ are even, we can match $n$ pairs of the same type of switches, so there are no on lights in this case. When both $\text{cnt}_0$ and $\text{cnt}_1$ are odd, we must match one on switch with one off switch to make $\text{cnt}_0$ and $\text{cnt}_1$ even, so there is one on light in this case. The calculation of $\text{cnt}_0$ and $\text{cnt}_1$ can be easily done by a simple iteration over the switches. Time complexity: $\mathcal{O}(n)$.
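The whole solution reduces to two counters; a minimal Python sketch (the function name is ours, for illustration):

```python
def lights(switch_states):
    # switch_states: the 0/1 states of the 2n switches
    cnt1 = sum(switch_states)
    cnt0 = len(switch_states) - cnt1
    return cnt0 % 2, min(cnt0, cnt1)  # (min lights on, max lights on)
```

With three on and three off switches the answer is minimum $1$, maximum $3$.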
[ "greedy", "implementation", "math", "number theory" ]
800
class Solution: hasMultipleTests = True n: int = None a: list = None @classmethod def preprocess(cls): pass @classmethod def input(cls, testcase): cls.n = int(input()) cls.a = list(map(int, input().split())) @classmethod def solve(cls, testcase): cnt0 = sum(cls.a) print(cnt0 & 1, min(cnt0, cls.n*2 - cnt0)) # end Solution
2032
B
Medians
You are given an array $a = [1, 2, \ldots, n]$, where $n$ is \textbf{odd}, and an integer $k$. Your task is to choose an \textbf{odd} positive integer $m$ and to split $a$ into $m$ subarrays$^{\dagger}$ $b_1, b_2, \ldots, b_m$ such that: - Each element of the array $a$ belongs to exactly one subarray. - For all $1 \le i \le m$, $|b_i|$ is \textbf{odd}, i.e., the length of each subarray is odd. - $\operatorname{median}([\operatorname{median}(b_1), \operatorname{median}(b_2), \ldots, \operatorname{median}(b_m)]) = k$, i.e., the median$^{\ddagger}$ of the array of medians of all subarrays must equal $k$. $\operatorname{median}(c)$ denotes the median of the array $c$. $^{\dagger}$A subarray of the array $a$ of length $n$ is the array $[a_l, a_{l + 1}, \ldots, a_r]$ for some integers $1 \le l \le r \le n$. $^{\ddagger}$A median of the array of odd length is the middle element after the array is sorted in non-decreasing order. For example: $\operatorname{median}([1,2,5,4,3]) = 3$, $\operatorname{median}([3,2,1]) = 2$, $\operatorname{median}([2,1,2,1,2,2,2]) = 2$.
For $n = 1$ (and thus $k = 1$), the obvious answer is not to partition anything, i.e., a partition with $1$ subarray being the array itself. For $n > 1$, we see that $k = 1$ and $k = n$ cannot yield a satisfactory construction. Proof is as follows: $m = 1$ yields $ans = \frac{n+1}{2}$, which will never equal $1$ or $n$ when $n \ge 3$. If $m > 1$, consider the case $k = 1$: since the original array $a$ is an increasingly-sorted permutation, the medians $\operatorname{median}(b_1) < \operatorname{median}(b_2) < \ldots < \operatorname{median}(b_m)$ are strictly increasing, so for their median to be $1$ we would need $\operatorname{median}(b_2) \le 1$, and hence $\operatorname{median}(b_1) < 1$. This is not possible. Similarly, $k = n$ also doesn't work with $m > 1$, as it would require $\operatorname{median}(b_m) > n$. Apart from these cases, any other $k$ can yield an answer with $m = 3$: a prefix subarray $b_1$, a middle subarray $b_2$ containing $k$ ($b_2$ will be centered at $k$, of course), and a suffix subarray $b_3$. This way, the answer will be $\operatorname{median}(b_2) = k$. The length of $b_2$ can be either $1$ or $3$, depending on the parity of $k$ (so that $b_1$ and $b_3$ can have odd lengths). In detail: $b_2$ will have length $1$ (i.e., $[k]$) if $k$ is an even integer, and length $3$ (i.e., $[k-1, k, k+1]$) if $k$ is an odd integer. Time complexity: $\mathcal{O}(1)$.
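The case analysis can be packaged as follows; this Python sketch returns the left endpoints of the chosen subarrays, or None when no partition exists (the interface is ours, for illustration):

```python
def split(n, k):
    if n == 1:
        return [1]                # k = 1 here; one subarray suffices
    if k == 1 or k == n:
        return None               # impossible for n > 1
    if k % 2 == 0:
        return [1, k, k + 1]      # middle subarray [k]
    return [1, k - 1, k + 2]      # middle subarray [k-1, k, k+1]
```

For $n=15$, $k=8$ this gives subarrays $[1..7]$, $[8]$, $[9..15]$ with medians $4, 8, 12$, whose median is $8$.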
[ "constructive algorithms", "greedy", "implementation", "math" ]
1,100
class Solution: hasMultipleTests = True n: int = None k: int = None @classmethod def preprocess(cls): pass @classmethod def input(cls, testcase): cls.n, cls.k = map(int, input().split()) @classmethod def solve(cls, testcase): if cls.n == 1: return(print('1\n1')) if cls.k in {1, cls.n}: return(print(-1)) p2, p3 = cls.k - cls.k % 2, cls.k + 1 + cls.k % 2 print(f'3\n1 {p2} {p3}') # end Solution
2032
C
Trinity
You are given an array $a$ of $n$ elements $a_1, a_2, \ldots, a_n$. You can perform the following operation any number (possibly $0$) of times: - Choose two integers $i$ and $j$, where $1 \le i, j \le n$, and assign $a_i := a_j$. Find the minimum number of operations required to make the array $a$ satisfy the condition: - For every pairwise distinct triplet of indices $(x, y, z)$ ($1 \le x, y, z \le n$, $x \ne y$, $y \ne z$, $x \ne z$), there exists a non-degenerate triangle with side lengths $a_x$, $a_y$ and $a_z$, i.e. $a_x + a_y > a_z$, $a_y + a_z > a_x$ and $a_z + a_x > a_y$.
Without loss of generality, we assume that every array mentioned below is sorted in non-descending order. An array $b$ of $k$ elements ($k \ge 3$) will satisfy the problem's criteria iff $b_1 + b_2 > b_k$. The proof is that $b_1 + b_2$ is the minimum possible sum of any pair of distinct elements of array $b$, and if it is larger than the largest element of $b$, the sum of every pair of distinct elements of $b$ will be larger than any element of $b$ on its own. The upper bound for our answer is $n - 2$. This can be achieved as follows: we turn every value from $a_2$ to $a_{n-1}$ into $a_n$; this way, we only have two types of triangles: $(a_1, a_n, a_n)$ and $(a_n, a_n, a_n)$. Since $a_1 \ge 1 > 0$, we have $a_1 + a_n > a_n$, which means the former type of triangle is non-degenerate. The latter is also trivially non-degenerate, as it is a regular/equilateral triangle. Otherwise, we'll need a pair of indices $(i, j)$ ($1 \le i \le n-2$, $i+2 \le j \le n$), so that in the final array after applying operations to $a$, $a_i$ and $a_{i+1}$ will be respectively the smallest and second smallest elements, and $a_j$ will be the largest element. Such indices must satisfy $a_i + a_{i+1} > a_j$. Consider a pair $(i, j)$ that satisfies the above condition; we then need to turn the elements outside of it (i.e. those before $i$ or after $j$) into elements within the range $[a_{i+1}, a_j]$, and indeed we can change them into $a_{i+1}$: this way, we have everything in place while keeping the relative rankings of $a_i$, $a_{i+1}$ and $a_j$ as they are initially. Therefore, for such a pair, the number of operations needed is $n - (j - i + 1)$. This means that for every $i$, we need to find the largest $j > i$ that satisfies the condition, which can easily be done using two pointers. Sorting complexity: $\mathcal{O}(n \log n)$. Two-pointer complexity: $\mathcal{O}(n)$.
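The two-pointer scan described above can be sketched as a standalone Python function (illustrative; same logic as the editorial's solution):

```python
def min_ops(a):
    a = sorted(a)
    n = len(a)
    ans = n - 2                      # upper bound: copy a[n-1] everywhere
    l = 0
    for r in range(2, n):
        # shrink until a[l] + a[l+1] > a[r]; keep window length >= 3
        while r - l >= 2 and a[l] + a[l + 1] <= a[r]:
            l += 1
        ans = min(ans, n - (r - l + 1))
    return ans
```

For example, `[3, 4, 5]` already forms a valid triangle ($3+4>5$), so no operation is needed; `[1, 2, 3]` needs one.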
[ "binary search", "math", "sortings", "two pointers" ]
1,400
class Solution:
    hasMultipleTests = True

    n: int = None
    a: list = None

    @classmethod
    def preprocess(cls):
        pass

    @classmethod
    def input(cls, testcase):
        cls.n = int(input())
        cls.a = list(map(int, input().split()))

    @classmethod
    def solve(cls, testcase):
        cls.a.sort()
        l, r, ans = 0, 2, cls.n - 2
        while r < cls.n:
            # shrink the window until a[l] + a[l+1] > a[r]
            while r - l >= 2 and cls.a[l] + cls.a[l+1] <= cls.a[r]:
                l += 1
            ans = min(ans, cls.n - (r - l + 1))
            r += 1
        print(ans)
# end Solution
2032
D
Genokraken
This is an interactive problem. Upon clearing the Waterside Area, Gretel has found a monster named Genokraken, and she's keeping it contained for her scientific studies. The monster's nerve system can be structured as a tree$^{\dagger}$ of $n$ nodes (really, everything should stop resembling trees all the time$\ldots$), numbered from $0$ to $n-1$, with node $0$ as the root. Gretel's objective is to learn the exact structure of the monster's nerve system — more specifically, she wants to know the values $p_1, p_2, \ldots, p_{n-1}$ of the tree, where $p_i$ ($0 \le p_i < i$) is the direct parent node of node $i$ ($1 \le i \le n - 1$). She doesn't know exactly how the nodes are placed, but she knows a few convenient facts: - If we remove root node $0$ and all adjacent edges, this tree will turn into a forest consisting of only paths$^{\ddagger}$. Each node that was initially adjacent to the node $0$ \textbf{will be the end of some path}. - The nodes are indexed in a way that if $1 \le x \le y \le n - 1$, then $p_x \le p_y$. - Node $1$ has \textbf{exactly two} adjacent nodes (including the node $0$). (The statement is accompanied by three example pictures with the following captions.) {\small The tree in this picture \textbf{does not} satisfy the condition, because if we remove node $0$, then node $2$ (which was initially adjacent to the node $0$) will not be the end of the path $4-2-5$.} {\small The tree in this picture \textbf{does not} satisfy the condition, because $p_3 \le p_4$ must hold.} {\small The tree in this picture \textbf{does not} satisfy the condition, because node $1$ has only one adjacent node.} Gretel can make queries to the containment cell: - "? a b" ($1 \le a, b < n$, $a \ne b$) — the cell will check if the simple path between nodes $a$ and $b$ contains the node $0$. However, to avoid unexpected consequences by overstimulating the creature, Gretel wants to query at most $2n - 6$ times.
Though Gretel is gifted, she can't do everything all at once, so can you give her a helping hand? $^{\dagger}$A tree is a connected graph where every pair of distinct nodes has exactly one simple path connecting them. $^{\ddagger}$A path is a tree whose vertices can be listed in the order $v_1, v_2, \ldots, v_k$ such that the edges are $(v_i, v_{i+1})$ ($1 \le i < k$).
For simplicity, we'll use the term "tentacle" for each path tree in the forest made by cutting off node $0$. We also notice that in each tentacle, no two nodes have the same distance from root node $0$. The condition $p_x \le p_y$ whenever $x \le y$ leads to a crucial observation about the system: it is indexed in accordance with a BFS order of the tree. Hence, we now have two goals: determine $m$ - the number of tentacles; then, from node $m+1$ to $n-1$, assign every node to its respective tentacle. Due to the BFS order, at the moment of assignment, the previous tip of the tentacle is the parent of the current node, and the current node becomes the new tip of the tentacle. For the first objective, we see that node $1$ is guaranteed to be connected with nodes $0$ and $m + 1$. Furthermore, $m + 1$ is the first non-zero node where the path between it and $1$ does not cross $0$. Therefore, you can keep querying $(1, j)$ for increasing $j$ until you get a $0$, which reveals $m$. For the second objective, we need to find the tentacle that each node $j$ ($m + 2 \le j \le n - 1$) belongs to; in other words, find the node $1 \le i \le m$ such that query $(i, j)$ yields a $0$. Denote $t(j)$ as the tentacle associated with node $j$, then note that: if $j-1$ and $j$ share the same distance from $0$, then obviously $t(j-1) < t(j)$, i.e., $t(j)$ is a tentacle in the forward direction from $t(j-1)$ in the tentacle list; if $j-1$ and $j$ don't share the same distance from $0$, then $t(j)$ can be any tentacle in the list. So we can approach this objective as follows: denote $\text{next}(t(j-1))$ as the next tentacle in the list after $t(j-1)$ (or $1$ if $t(j-1)$ was at the end of the list); starting from $i = t(j-1)$, we keep re-assigning $i = \text{next}(i)$ until query $(i, j)$ yields a $0$.
At first glance, it looks like we'd need on the order of $\mathcal{O}(n^2)$ queries to finish this part, but there is another crucial observation: due to the nodes being indexed in BFS order, if any tentacle yields a $1$ during probing, that tentacle will never be extended again - the proof is fairly intuitive but a bit lengthy to express in words, so we'll leave it as an exercise for the reader - thus if you reach an $i$ that has already been deactivated before, you ignore it and call $\text{next}$ again, which doesn't count towards the queries as it is internal processing. Let's count the number of queries used. Let $m \le n - 2$ be the number of tentacles. If $m = n-2$, the second objective isn't needed, so we end up with $n - 2 \le 2n - 6$ queries in total (as $n \ge 4$). If $m \le n - 3$, note that each time we process a query, either a node is appended to a tentacle, or a tentacle is removed. Since at most $m - 1$ tentacles can be removed and there are $n - m - 2$ nodes to be processed, the second phase uses at most $n - 3$ queries, so in total we use $n + m - 3 \le 2n - 6$ queries. To process the list of tentacles, there are a few options: naively mark the tentacles as active/inactive to know when to stop for queries and when to skip - time complexity will be $\mathcal{O}(n^2)$, and though it can still pass (in fact one such solution from the author passed nicely), it is not recommended; maintain the list of tentacles in a set, removing a tentacle once it is known to be inactive - time complexity will be $\mathcal{O}(n \log n)$; or maintain the list in a similar manner, but using a doubly linked list - time complexity will be $\mathcal{O}(n)$.
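The two phases and the query-count bound can be sanity-checked offline by replacing the interactor with a local oracle over a known valid tree; here a rotating deque plays the role of the doubly linked list, and the sample tree, names, and oracle are our own illustration, not part of the original problem:

```python
from collections import deque

def tentacle_root(par, v):
    # walk up until the child of the root: identifies v's tentacle
    while par[v] != 0:
        v = par[v]
    return v

def make_oracle(par):
    calls = [0]
    def ask(a, b):
        # the path a-b passes through node 0 iff a, b lie in different tentacles
        calls[0] += 1
        return 1 if tentacle_root(par, a) != tentacle_root(par, b) else 0
    return ask, calls

def reconstruct(n, ask):
    p = [-1] * n
    p[1] = 0
    r = 2
    while ask(1, r) == 1:            # phase 1: r is still a child of the root
        p[r] = 0
        r += 1
    m = r - 1                        # number of tentacles
    tips = deque(range(1, m + 1))    # active tentacle tips, cyclic order
    p[r] = tips[0]
    tips[0] = r
    tips.rotate(-1)
    r += 1
    while r < n:                     # phase 2: extend a tentacle or retire it
        if ask(tips[0], r) == 1:
            tips.popleft()           # this tentacle will never be extended again
        else:
            p[r] = tips[0]
            tips[0] = r
            tips.rotate(-1)
            r += 1
    return p

# hidden tree: two tentacles 1-3-5 and 2-4-6 hanging off the root 0
hidden = [-1, 0, 0, 1, 2, 3, 4]
ask, calls = make_oracle(hidden)
found = reconstruct(7, ask)
```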
[ "constructive algorithms", "data structures", "graphs", "greedy", "implementation", "interactive", "trees", "two pointers" ]
1,800
class DeleteOnly_DLL:
    def __init__(self, size: int):
        self.values = [None for _ in range(size)]
        self.prev = [(i + size - 1) % size for i in range(size)]
        self.next = [(i + 1) % size for i in range(size)]
        self.pointer = 0

    def current(self):
        return self.values[self.pointer]

    # Set value at pointer and move pointer to next
    def set_and_move(self, val):
        self.values[self.pointer] = val
        self.pointer = self.next[self.pointer]

    # "Delete" node and move pointer to next
    def erase(self):
        if self.prev[self.pointer] != -1:
            self.next[self.prev[self.pointer]] = self.next[self.pointer]
        if self.next[self.pointer] != -1:
            self.prev[self.next[self.pointer]] = self.prev[self.pointer]
        next_id = self.next[self.pointer]
        self.prev[self.pointer] = self.next[self.pointer] = -1
        self.pointer = next_id
# end DeleteOnly_DLL

class Solution:
    hasMultipleTests = True

    n: int = None

    @classmethod
    def ask(cls, a: int, b: int):
        print(f'? {a} {b}', flush=True)
        return int(input())

    @classmethod
    def answer(cls, p: list):
        print(f'! {" ".join(map(str, p[1:]))}', flush=True)

    @classmethod
    def preprocess(cls):
        pass

    @classmethod
    def input(cls, testcase):
        cls.n = int(input())

    @classmethod
    def solve(cls, testcase):
        p = [-1 for _ in range(cls.n)]
        p[1] = 0
        r = 2
        while True:
            response = cls.ask(1, r)
            if response == -1:
                exit(2226)
            if response == 1:
                p[r] = 0
                r += 1
            else:
                break
        tentacle_count = r - 1
        tentacles = DeleteOnly_DLL(size = tentacle_count)
        for i in range(tentacle_count):
            tentacles.set_and_move(i + 1)
        p[r] = tentacles.current()
        tentacles.set_and_move(r)
        r += 1
        while r < cls.n:
            response = cls.ask(tentacles.current(), r)
            if response == -1:
                exit(2226)
            if response == 1:
                tentacles.erase()
            else:
                p[r] = tentacles.current()
                tentacles.set_and_move(r)
                r += 1
        cls.answer(p)
# end Solution
2032
E
Balanced
You are given a \textbf{cyclic} array $a$ with $n$ elements, where $n$ is \textbf{odd}. In each operation, you can do the following: - Choose an index $1 \le i \le n$ and increase $a_{i - 1}$ by $1$, $a_i$ by $2$, and $a_{i + 1}$ by $1$. The element before the first element is the last element because this is a cyclic array. A cyclic array is called balanced if all its elements are equal to each other. Find any sequence of operations to make this cyclic array balanced or determine that it is impossible. Please note that you \textbf{do not} have to minimize the number of operations.
To simplify this problem a little before starting, we will temporarily allow "negative" operations: choose an index $1 \le i \le n$ and increase $a_{i - 1}$ by $-1$, $a_i$ by $-2$, and $a_{i + 1}$ by $-1$. This is counted as $-1$ operation on index $i$. Should we get negative entries in the array of operation counts $v$ in the end, we can normalize it just fine by subtracting $\min v_i$ from every $v_i$, so that the final array $v$ is valid - it's easy to prove that applying the same number of extra operations at every index does not change the relative difference between any two values in the array. Imagine we have $n = 3$ and array $a = [a_1, a_2, a_3]$ where $a_1 \ge a_2 \le a_3$; i.e., a trench. This array always has at least one solution: first balance $a_1$ and $a_3$ by applying operations on one side according to their difference - this gives us something we'll call a "balanced trench" - then apply operations on index $2$ to balance all three, which works thanks to the cyclic nature of $a$. In fact, every array with $n = 3$, regardless of the relative values, can be treated this way - if $a_2$ is higher than both $a_1$ and $a_3$, the act of "raising" $a_2$ is actually applying a negative number of operations to index $2$. How to make a "balanced trench" for $n > 3$? At the very least, we can balance $a_1$ and $a_n$ in the same fashion as we did for $n = 3$. Can we balance $a_2$ and $a_{n-1}$ without breaking the balance achieved between $a_1$ and $a_n$? Assume we have an array $[0, x, y, x + 1, 0]$. By logic, we want to increase the value at index $2$. Applying an operation to index $1$ won't do, as the new array would be $[2, x+1, y, x+1, 1]$ - we would be balancing the inner elements by sacrificing the outer ones. Applying an operation to index $3$ also won't do, as it increases both sides. Applying an operation to index $2$ makes the array $[1, x+2, y+1, x+1, 0]$.
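The normalization step can be sanity-checked numerically: each element receives $v_{i-1} + 2v_i + v_{i+1}$ in total, so shifting every entry of $v$ by the same constant $c$ adds exactly $4c$ to every element and leaves all pairwise differences intact. A minimal sketch (helper name is ours):

```python
def apply_ops(a, v):
    # v[i] = number of operations applied at (0-based, cyclic) index i
    n = len(a)
    b = list(a)
    for i, k in enumerate(v):
        b[(i - 1) % n] += k
        b[i] += 2 * k
        b[(i + 1) % n] += k
    return b

a = [3, 1, 2]
v = [0, -2, 1]                  # possibly negative operation counts
w = [x - min(v) for x in v]     # normalized: all counts >= 0
b1 = apply_ops(a, v)
b2 = apply_ops(a, w)
# shifting v by -min(v) = 2 raises every element by 4 * 2 = 8
```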
By applying another operation to index $5$, we'll reach our desired goal with array $[2, x+2, y+1, x+2, 2]$. In fact, a series of operations at "consecutive" indices of the same parity has this effect, regardless of how long the series is. To be precise, without loss of generality, a series of operations at indices $2, 4, \ldots, i$, with $i \le n-1$, will increase $a_1$ and $a_{i+1}$ by $1$, and all values with indices in the range $[2, i]$ by $2$. The catch is that we mitigate $1$ unit of difference between the two sides with each operation series by adding just $1$ unit to the higher side, while the corresponding other $1$ falls beyond the lower side. If we balance the sides from outwards to inwards, that exceeding $1$ will either fall into a deeper-inwards layer, or into the center of the array (since $n$ is odd), which does not harm what we have achieved so far. Take an example with array $[48, 18, 26, 57, 39]$. First, we'll balance index $1$ and index $5$. We can simply apply $9$ operations to index $5$. The new array is $[57, 18, 26, 66, 57]$. Then, we'll balance index $2$ and index $4$. From index $2$, we'll move to the left until we reach index $5$, and apply $48$ operations for every $2$ steps. In other words, apply $48$ operations to index $2$ and $48$ operations to index $5$. This array is now a balanced trench: $[153, 114, 74, 114, 153]$. Now, achieving the desired array (we'll call it a "plateau") from a balanced trench is easy: starting from the rightmost element of the left side before the center and going leftwards, compare each value to its adjacent element to the right, and apply a corresponding number of operations. Take the balanced trench we just acquired. First, we'll check index $2$. Clearly, we want to raise index $3$ to close the $40$ unit gap, thus we'll apply $40$ operations to index $3$. The new array becomes $[153, 154, 154, 154, 153]$. Then, we'll check index $1$.
Our objective is now to decrease all elements with indices in the range $[2, 4]$ by $1$. Using a similar operation series as discussed earlier, this can be done as follows: apply $-1$ operations to index $2$, then apply $-1$ operations to index $4$. The final array is $[152, 152, 152, 152, 152]$. That operation series can be used here because the range of elements changed by $2$ units per series has an odd size, and since we're growing the plateau from the center point outwards, its size is always odd as well. With this, the non-normalized array $v$ is $[0, 47, 40, -1, 57]$. Implementing this method can be separated into two steps: Step $1$ (creating the balanced trench): for each pair of indices $(i, n+1-i)$ with difference $a_{n+1-i} - a_i = d$, apply $d$ operations to each index of the cyclic range $[n+3-i, i]$ with step $2$. Step $2$ (creating the plateau): for each pair of indices $(i, i+1)$ with difference $a_i - a_{i+1} = d$, apply $d$ operations to each index of the range $[i+1, n-i]$ with step $2$. Some extra notes: Each step requires an independent prefix-sum structure to quickly maintain the operation updates. Notice that the prefix sums here take parity into account, since only every other index in a range is updated, not all of them. Remember that after each index is considered, its value changes based on the number of operations just applied to it, so keep track of it properly. To avoid confusion, it's advised to apply the operations of step $1$ directly to array $a$ before proceeding with step $2$. Remember to normalize array $v$ before outputting to get rid of negative values. Refer to the model solution for more details. Time complexity: $\mathcal{O}(n)$.
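The series claim is easy to verify mechanically: one operation at each of the 1-based indices $2, 4, \ldots, 8$ of a length-$9$ array should add $1$ to $a_1$ and $a_9$, and $2$ to everything in between. A small sketch in 0-based indices (helper name is ours):

```python
def series_effect(n, idxs):
    # net contribution of one operation at each index in idxs (cyclic, 0-based)
    b = [0] * n
    for i in idxs:
        b[(i - 1) % n] += 1
        b[i] += 2
        b[(i + 1) % n] += 1
    return b

# 1-based indices 2, 4, 6, 8  <->  0-based 1, 3, 5, 7
res = series_effect(9, range(1, 8, 2))
```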
[ "constructive algorithms", "data structures", "greedy", "implementation", "math" ]
2,400
class Solution:
    hasMultipleTests = True

    n: int = None
    a: list = None

    @classmethod
    def apply_prefixes(cls, prefixes, v):
        for i in range(2, cls.n*2):
            prefixes[i] += prefixes[i - 2]
        for i in range(cls.n*2):
            v[i % cls.n] += prefixes[i]

    @classmethod
    def construct_trench(cls, arr, v):
        prefixes = [0 for _ in range(cls.n * 2)]
        delta = [0 for _ in range(cls.n)]
        for i in range(cls.n // 2):
            diff = arr[cls.n - 1 - i] - (arr[i] + delta[i])
            delta[i] += 2 * diff
            delta[i + 1] += diff
            prefixes[cls.n - i] += diff
            prefixes[cls.n + i + 2] -= diff
        cls.apply_prefixes(prefixes, v)
        # apply the step-1 operations directly into arr before step 2
        for i in range(cls.n):
            arr[i] += v[i] * 2
            arr[(i + 1) % cls.n] += v[i]
            arr[(i + cls.n - 1) % cls.n] += v[i]

    @classmethod
    def construct_plateau(cls, arr, v):
        prefixes = [0 for _ in range(cls.n * 2)]
        delta = [0 for _ in range(cls.n)]
        for i in range(cls.n // 2 - 1, -1, -1):
            diff = arr[i] - (arr[i + 1] + delta[i + 1])
            delta[i] += diff
            prefixes[i + 1] += diff
            prefixes[cls.n - i] -= diff
        cls.apply_prefixes(prefixes, v)

    @classmethod
    def preprocess(cls):
        pass

    @classmethod
    def input(cls, testcase):
        cls.n = int(input())
        cls.a = list(map(int, input().split()))

    @classmethod
    def solve(cls, testcase):
        if cls.n == 1:
            return(print(0))
        v = [0 for _ in range(cls.n)]
        cls.construct_trench(cls.a, v)
        cls.construct_plateau(cls.a, v)
        offset = min(v)
        v = list(map(lambda x: x - offset, v))
        print(*v)
# end Solution
2032
F
Peanuts
Having the magical beanstalk, Jack has been gathering a lot of peanuts lately. Eventually, he has obtained $n$ pockets of peanuts, conveniently numbered $1$ to $n$ from left to right. The $i$-th pocket has $a_i$ peanuts. Jack and his childhood friend Alice decide to play a game around the peanuts. First, Alice divides the pockets into some boxes; each box will have a non-zero number of \textbf{consecutive} pockets, and each pocket will, obviously, belong to exactly one box. At the same time, Alice does not change the order of the boxes, that is, the boxes are numbered in ascending order of the indices of the pockets in them. After that, Alice and Jack will take turns alternately, with Alice going first. At each turn, the current player will remove a positive number of peanuts from \textbf{exactly one} pocket which belongs to the \textbf{leftmost non-empty box} (i.e., the leftmost box containing at least one non-empty pocket). In other words, if we number the boxes from left to right, then each player can pick peanuts from a pocket in the $j$-th box ($j \ge 2$) only if the $(j - 1)$-th box has no peanuts left. The player who cannot make a valid move loses. Alice is sure she will win since she has the advantage of dividing the pockets into boxes herself. Thus, she wanted to know how many ways there are for her to divide the peanuts into boxes at the start of the game so that she will win, assuming both players play optimally. Can you help her with the calculation? As the result can be very large, output it modulo $998\,244\,353$.
Let's get the trivial case out of the way: if the peanut pockets all contain $1$ nut each, then partitioning the pockets doesn't affect the game's outcome at all: Alice will always win if $n$ is odd, and there are $2^{n-1}$ ways to partition $n$ pockets; Jack will always win if $n$ is even. Proof for the trivial case is, indeed, trivial. For the main problem, we see that this is a derivative of a game of Nim. To be exact, each box is a vanilla Nim game. Determining the winner of a vanilla Nim game when both players play optimally is trivial - if it is not for you, I strongly suggest reading about the game and the Sprague-Grundy theorem before continuing. In short, the Nim-sum of a Nim game is the xor sum of all pile sizes, and if that value is nonzero, the first player wins with optimal play. The original game of this problem is a series of consecutive Nim games, with the loser of the previous game becoming the first player of the next game. Clearly, trying to win every box isn't a correct approach - one of the simplest counterexamples is a partition with two boxes, both won by their first player under optimal play: if the first player "wins" the first box, they immediately lose the second one and thus lose the whole game. In short, sometimes tactically "losing" some boxes is required. But how do we know which player would lose a box if both aimed for it? Now, introducing the "mirrored" version of a Nim game - a Misère Nim game, where the winning condition is the original Nim game's losing condition. If the peanut pockets all contain $1$ nut each, then the winner of a Misère Nim game is decided simply by the parity of $n$.
Otherwise, the winner of a Misère Nim game can be decided using the same nimber as in a regular Nim game: if the nimber is not $0$, the first player wins both the original and the Misère version; otherwise, the second player wins - the optimal strategies to achieve this outcome mirror exactly the intents of those in a regular Nim game. Also, past the leading $1$s in array $a$, both Alice and Jack have the right to tactically lose. Thus, either of them wins the game if and only if they can win the first box containing non-trivial pockets (here defined as pockets with more than $1$ nut; we'll call a box having at least one non-trivial pocket a non-trivial box) when both play optimally until there - as proven above, if they could theoretically win it, they could also tactically lose it, thus they would have full control of the game, and they could make a decision in accordance with whatever partition comes next in the remaining pockets. We'll denote $l$ as the number of trivial pockets (i.e. pockets with $1$ nut each) at the left end of array $a$, i.e., the $(l+1)^{th}$ pocket is the leftmost one with more than $1$ nut. We'll consider all possible options for the first box containing non-trivial pockets, and thus we'll iterate $r$ in the range $[l+1, n]$: First, we denote $P(r)$ as the xor sum of all elements of the prefix of array $a$ up to the $r^{th}$ element. This value determines how much control Alice has. If $P(r) = 0$, Alice loses in all cases with the first non-trivial box ending at $r$. Proof is simple: if this box has an even number of $1$s before it, Alice is the starting player of a game with nimber $0$ and thus cannot control it to her will; and if the number of preceding $1$s is odd, then the first non-trivial box is a game with nimber $1$ with Jack as first player, thus Jack retains full control. If $P(r) = 1$, Alice wins in all cases with the first non-trivial box ending at $r$.
Proof is literally the reverse of the above case. If $P(r) > 1$, both Alice and Jack have full control to win it, thus Alice wins if and only if she is the starting player of the game at the first non-trivial box. So we have the detailed winning condition. Now, towards the maths. First, whatever comes after the first non-trivial box doesn't matter. Thus, for each $r$, there are $2^{\max(0, n-r-1)}$ different partitions of the pockets following the $r^{th}$ one. We don't consider cases with $P(r) = 0$, obviously. If $P(r) = 1$, all partitions involving only the first $l$ pockets are allowed. In fact, there are $l+1$ items here: $l$ trivial pockets, and the first non-trivial blob always coming last, thus the number of different partitions of the pockets preceding the $r^{th}$ one in this case is $2^l$. If $P(r) > 1$, we'll consider all even $l_0$ in the range $[0, l]$, with $l_0$ denoting the number of $1$s not within the first non-trivial box. Clearly, for each $l_0$, the number of different partitions is $2^{\max(0, l_0-1)}$. And since $l$ is fixed and this process has no relation to $r$, this value can be pre-calculated. In more detail, denoting that value as $M$, we have $M = \sum_{i = 0}^{\lfloor \frac{l}{2} \rfloor} 2^{\max(0, 2i-1)}$. All powers of $2$ can be pre-calculated as well, saving a considerable amount of runtime. All pre-calculations have time complexity linear in the maximum size of array $a$. Time complexity: $\mathcal{O}(n)$.
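The Misère Nim facts used above (with some pile larger than $1$, the first player wins iff the xor is nonzero; with all piles equal to $1$, the first player wins iff the number of piles is even) can be cross-checked by brute force on small states. This checker is our own illustration, not part of the model solution:

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def misere_first_wins(piles):
    # Misère play: whoever removes the last object LOSES,
    # so an empty position is a win for the player to move.
    if sum(piles) == 0:
        return True
    for i, p in enumerate(piles):
        for take in range(1, p + 1):
            nxt = tuple(sorted(piles[:i] + (p - take,) + piles[i+1:]))
            if not misere_first_wins(nxt):
                return True
    return False

# exhaustively compare against the closed-form rule on small states
for piles in product(range(5), repeat=3):
    xor = piles[0] ^ piles[1] ^ piles[2]
    expected = (xor != 0) if max(piles) > 1 else (sum(piles) % 2 == 0)
    assert misere_first_wins(tuple(sorted(piles))) == expected
```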
[ "combinatorics", "dp", "games", "math" ]
2,700
class Solution:
    hasMultipleTests = True

    n: int = None
    a: list = None

    MAXN: int = 1000000
    MOD: int = 998244353
    pow2: list = None

    @classmethod
    def preprocess(cls):
        cls.pow2 = [None for _ in range(cls.MAXN)]
        for i in range(cls.MAXN):
            cls.pow2[i] = 1 if i == 0 else (2 * cls.pow2[i-1]) % cls.MOD

    @classmethod
    def input(cls, testcase):
        cls.n = int(input())
        cls.a = list(map(int, input().split()))

    @classmethod
    def solve(cls, testcase):
        if max(cls.a) == 1:
            return(print(cls.pow2[cls.n-1] if cls.n & 1 else 0))
        ans = 0
        # The critical layer (assuming prefix Grundy > 1) can only fall into Alice's control
        # if and only if before it is an even amount of pockets
        alice_at_critical = 1
        prefix_1 = 0
        while cls.a[prefix_1] == 1:
            prefix_1 += 1
            if prefix_1 % 2 == 0:
                alice_at_critical = (alice_at_critical + cls.pow2[prefix_1 - 1]) % cls.MOD
        grundy = prefix_1 & 1
        for r in range(prefix_1, cls.n):
            grundy ^= cls.a[r]
            if grundy == 0:
                continue
            post_critical = cls.pow2[cls.n - 2 - r] if r < cls.n - 1 else 1
            pre_critical = cls.pow2[prefix_1] if grundy == 1 else alice_at_critical
            ans = (ans + pre_critical * post_critical) % cls.MOD
        print(ans)
# end Solution
2033
A
Sakurako and Kosuke
Sakurako and Kosuke decided to play some games with a dot on a coordinate line. The dot is currently located in position $x=0$. They will be taking turns, and \textbf{Sakurako will be the one to start}. On the $i$-th move, the current player will move the dot in some direction by $2\cdot i-1$ units. Sakurako will always be moving the dot in the negative direction, whereas Kosuke will always move it in the positive direction. In other words, the following will happen: - Sakurako will change the position of the dot by $-1$, $x = -1$ now - Kosuke will change the position of the dot by $3$, $x = 2$ now - Sakurako will change the position of the dot by $-5$, $x = -3$ now - $\cdots$ They will keep on playing while the absolute value of the coordinate of the dot does not exceed $n$. More formally, the game continues while $-n\le x\le n$. It can be proven that the game will always end. Your task is to determine who will be the one who makes the last turn.
For this task we can simply brute-force the answer by repeatedly adding or subtracting the consecutive odd numbers starting from the initial position $0$, exactly as described in the statement. This results in $O(n)$ time per test case, which is fast enough.
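The brute force is only a few lines; as a sanity check, note that after the $i$-th move the dot sits at $x = -i$ for odd $i$ and $x = i$ for even $i$, so the loop always stops right after move $n + 1$. A standalone sketch (function name is ours):

```python
def last_mover(n):
    # simulate: Sakurako moves on odd turns (negative direction),
    # Kosuke on even turns (positive direction)
    x, i = 0, 0
    while -n <= x <= n:
        i += 1
        x += (2 * i - 1) * (1 if i % 2 == 0 else -1)
    return "Sakurako" if i % 2 == 1 else "Kosuke"
```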
[ "constructive algorithms", "implementation", "math" ]
800
def solve():
    n = int(input())
    x = 0
    c = 1
    while -n <= x <= n:
        if c % 2 == 1:
            x -= 2 * c - 1
        else:
            x += 2 * c - 1
        c += 1
    if c % 2 == 0:
        print("Sakurako")
    else:
        print("Kosuke")

for tc in range(int(input())):
    solve()
2033
B
Sakurako and Water
During her journey with Kosuke, Sakurako and Kosuke found a valley that can be represented as a matrix of size $n \times n$, where at the intersection of the $i$-th row and the $j$-th column is a mountain with a height of $a_{i,j}$. If $a_{i,j} < 0$, then there is a lake there. Kosuke is very afraid of water, so Sakurako needs to help him: - With her magic, she can select a square area of mountains and increase the height of each mountain on the main diagonal of that area by exactly one. More formally, she can choose a submatrix with the upper left corner located at $(i, j)$ and the lower right corner at $(p, q)$, such that $p-i=q-j$. She can then add one to each element at the intersection of the $(i + k)$-th row and the $(j + k)$-th column, for all $k$ such that $0 \le k \le p-i$. Determine the minimum number of times Sakurako must use her magic so that there are no lakes.
In this task we need to find the minimum number of moves Sakurako must make so that all elements of the matrix become non-negative. The key observation is that Sakurako can only add simultaneously to elements that lie on one diagonal. For cell $(i,j)$, let the "index" of the diagonal it lies on be $d(i,j) = i - j$. Indeed, $d(i,j) = d(i+1,j+1)$, so all cells of one diagonal share the same index, and we can add to a pair of elements $(x,y)$ and $(x_1,y_1)$ simultaneously if and only if $d(x,y) = d(x_1,y_1)$. This reduces our problem to finding, for each diagonal, the number of times we need to add $1$ to it so that all of its elements become non-negative. For each diagonal we find the minimal element in it, and there are two cases: 1. The minimal element is non-negative: we don't need to add anything to that diagonal. 2. The minimal element is negative and equal to $x$: we need to add one at least $-x$ times (remember that $x$ is negative). The answer for the task is then the sum of the answers for each individual diagonal. Total time complexity: $O(n^2)$.
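A direct transcription of this counting (function name is ours): group cells by $d = i - j$, track the minimum on each diagonal, and sum up the deficits.

```python
def min_magic(grid):
    # worst[d] = minimum value seen on diagonal d = i - j (capped at 0)
    worst = {}
    n = len(grid)
    for i in range(n):
        for j in range(n):
            d = i - j
            worst[d] = min(worst.get(d, 0), grid[i][j])
    # a diagonal whose minimum is -x needs exactly x applications of the magic
    return -sum(worst.values())
```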
[ "brute force", "constructive algorithms", "greedy" ]
900
def solve():
    n = int(input())
    mn = dict()
    for i in range(n):
        a = [int(x) for x in input().split()]
        for j in range(n):
            mn[i - j] = min(a[j], mn.get(i - j, 0))
    ans = 0
    for value in mn.values():
        ans -= value
    print(ans)

t = int(input())
for _ in range(t):
    solve()
2033
C
Sakurako's Field Trip
Even in university, students need to relax. That is why Sakurako's teacher decided to go on a field trip. It is known that all of the students will be walking in one line. The student with index $i$ has some topic of interest which is described as $a_i$. As a teacher, you want to minimise the disturbance of the line of students. The disturbance of the line is defined as the number of neighbouring people with the same topic of interest. In other words, disturbance is the number of indices $j$ ($1 \le j < n$) such that $a_j = a_{j + 1}$. In order to do this, you can choose index $i$ ($1\le i\le n$) and swap students at positions $i$ and $n-i+1$. You can perform any number of swaps. Your task is to determine the minimal amount of disturbance that you can achieve by doing the operation described above any number of times.
Note that the answer is influenced only by neighboring elements. This allows us to optimally place elements $i$ and $n - i + 1$ with respect to elements $i - 1$ and $n - i + 2$. Thus, we need to be able to choose the best order for an array of $4$ elements. Let's consider several types of arrays (the ones denote equal elements): $[1, x, y, 1]$ or $[x, 1, 1, y]$: swaps will not change the answer; $[1, 1, y, 2]$: a swap will improve the answer if $y \ne 1$, otherwise the answer will not change. Thus, if $a[i - 1] = a[i]$ or $a[n - i + 2] = a[n - i + 1]$, then swapping elements $a[i]$ and $a[n - i + 1]$ will either not change the answer or improve it. After all swaps, we only need to calculate the final disturbance.
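The rule above amounts to one greedy pass over the pairs $(i, n-i+1)$, swapping whenever a member of the pair equals a fixed neighbour. A Python sketch of the same 1-based logic (function name is ours):

```python
def min_disturbance(a):
    a = list(a)
    n = len(a)
    for i in range(n // 2 - 1, 0, -1):       # 1-based pair (i, n-i+1)
        # neighbours on the centre side are positions i+1 and n-i (1-based)
        if a[i - 1] == a[i] or a[n - i] == a[n - i - 1]:
            a[i - 1], a[n - i] = a[n - i], a[i - 1]
    return sum(a[k] == a[k + 1] for k in range(n - 1))
```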
[ "dp", "greedy", "two pointers" ]
1,400
#include <bits/stdc++.h>
using namespace std;

int main(){
    int t;
    cin >> t;
    while(t--){
        int n;
        cin >> n;
        int a[n+1];
        for(int i = 1; i <= n; i++){
            cin >> a[i];
        }
        // fix each pair (i, n-i+1) against its centre-side neighbours
        for(int i = n/2 - 1; i >= 1; i--){
            if(a[i] == a[i+1] || a[n-i+1] == a[n-i]){
                swap(a[i], a[n-i+1]);
            }
        }
        int re = 0;
        for(int i = 1; i < n; i++){
            re += (a[i] == a[i+1]);
        }
        cout << re << endl;
    }
}
2033
D
Kousuke's Assignment
After a trip with Sakurako, Kousuke was very scared because he forgot about his programming assignment. In this assignment, the teacher gave him an array $a$ of $n$ integers and asked him to calculate the number of \textbf{non-overlapping} segments of the array $a$, such that each segment is considered beautiful. A segment $[l,r]$ is considered beautiful if $a_l + a_{l+1} + \dots + a_{r-1} + a_r=0$. For a fixed array $a$, your task is to compute the maximum number of non-overlapping beautiful segments.
For this task we need to find the largest number of non-intersecting segments, each of which has sum equal to zero. The problem "find the maximal number of non-intersecting segments" is an example of a classic dynamic programming problem. First, we sort all our segments by their right end in increasing order. After that, we process all segments one by one, and for a segment with fixed ends $l, r$ we update our answer as $dp_r=\max(dp_{r-1},dp_{l-1}+1)$. Because we process the segments in that order, when we start computing $dp_r$, the maximal possible answer for all $i<r$ is already computed. By filling the $dp$ array in this way, we are sure that the answer is always contained in $dp_n$. Now back to our task: if we construct an array $p$ where $p_i=\sum^i_{j=1}a_j$ ($p_0=0$), then for every segment with sum equal to $0$ we have $p_{l-1}=p_r$. It can easily be proven that for fixed $r$ there is at most one segment which is useful for the optimal answer: if there is no $p_l$ with $l<r$ such that $p_l=p_r$, then there is no segment that ends at position $r$; otherwise, it is sufficient to choose the one with the largest such $l$. That is because if we chose the one with $l_1<l$, then we would have missed the segment $[l_1+1,l]$, and because of that miss, we would not have found the correct answer for our $r$. So our final algorithm is to find the shortest segment ending at each position $r$ with sum equal to $0$, and then compute the answer by solving the maximal-number-of-non-intersecting-segments problem with dynamic programming. Total time complexity: $O(n \log n)$ or $O(n)$ depending on implementation.
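The prefix-sum dp can be sketched directly in Python (function name is ours): `last[s]` stores the latest index at which prefix sum `s` occurred, which by the argument above is the only candidate left end worth taking.

```python
def max_zero_segments(a):
    n = len(a)
    last = {0: 0}        # last[s] = latest index i with p_i = s
    dp = [0] * (n + 1)
    pref = 0
    for i in range(1, n + 1):
        pref += a[i - 1]
        dp[i] = dp[i - 1]
        if pref in last:                       # segment (last[pref], i] sums to 0
            dp[i] = max(dp[i], dp[last[pref]] + 1)
        last[pref] = i                         # keep only the latest occurrence
    return dp[n]
```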
[ "data structures", "dp", "dsu", "greedy", "math" ]
1,300
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        int a[n + 1];
        map<int, int> mp;
        for (int i = 1; i <= n; i++) {
            cin >> a[i];
        }
        int p_su[n + 1];
        p_su[0] = 0;
        int lst[n + 1];
        mp[0] = 0;
        for (int i = 1; i <= n; i++) {
            p_su[i] = p_su[i - 1] + a[i];
            if (mp.find(p_su[i]) == mp.end()) {
                lst[i] = -1;
            } else {
                lst[i] = mp[p_su[i]];
            }
            mp[p_su[i]] = i;
        }
        int dp[n + 1];
        memset(dp, 0, sizeof dp);
        for (int i = 1; i <= n; i++) {
            dp[i] = max(dp[i], dp[i - 1]);
            if (lst[i] != -1) {
                dp[i] = max(dp[i], dp[lst[i]] + 1);
            }
        }
        cout << *max_element(dp, dp + n + 1) << endl;
    }
}
2033
E
Sakurako, Kosuke, and the Permutation
Sakurako's exams are over, and she did excellently. As a reward, she received a permutation $p$. Kosuke was not entirely satisfied because he failed one exam and did not receive a gift. He decided to sneak into her room (thanks to the code for her lock) and spoil the permutation so that it becomes simple. A permutation $p$ is considered simple if for every $i$ $(1\le i \le n)$ one of the following conditions holds: - $p_i=i$ - $p_{p_i}=i$ For example, the permutations $[1, 2, 3, 4]$, $[5, 2, 4, 3, 1]$, and $[2, 1]$ are simple, while $[2, 3, 1]$ and $[5, 2, 1, 4, 3]$ are not. In one operation, Kosuke can choose indices $i,j$ $(1\le i,j\le n)$ and swap the elements $p_i$ and $p_j$. Sakurako is about to return home. Your task is to calculate the minimum number of operations that Kosuke needs to perform to make the permutation simple.
Let's make this the shortest editorial of all. Observation $1$: every permutation can be split into cycles, and all cycles of a permutation can be traversed in $O(n)$ total time. Observation $2$: when we swap $2$ elements that belong to one cycle, we split the cycle into $2$ parts. If we rephrase the definition of a simple permutation, we see that a permutation is simple exactly when every cycle in it has length at most $2$. Observation $3$: by repeatedly splitting an initial cycle of length $x$, we can divide it into cycles of length at most $2$ in $\lfloor\frac{x-1}{2}\rfloor$ swaps (each swap splits off a cycle of length $2$, decreasing the size of the remaining cycle by $2$). Observation $4$: all cycles are independent, so the answer for the initial task is the sum of the answers for every cycle. Total time complexity is $O(n)$.
[ "brute force", "data structures", "dfs and similar", "dsu", "graphs", "greedy", "math" ]
1,400
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    ios_base::sync_with_stdio(false);
    cout.tie(nullptr);
    cin.tie(nullptr);
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        int p[n + 1];
        for (int i = 1; i <= n; i++) {
            cin >> p[i];
        }
        bool us[n + 1];
        memset(us, 0, sizeof us);
        int re = 0;
        for (int i = 1; i <= n; i++) {
            if (!us[i]) {
                int cu = i;
                int le = 0;
                while (us[cu] == 0) {
                    le++;
                    us[cu] = 1;
                    cu = p[cu];
                }
                re += (le - 1) / 2;
            }
        }
        cout << re << '\n';
    }
}
2033
F
Kosuke's Sloth
Kosuke is too lazy. He will not give you any legend, just the task: Fibonacci numbers are defined as follows: - $f(1)=f(2)=1$. - $f(n)=f(n-1)+f(n-2)$ $(3\le n)$ We denote $G(n,k)$ as an index of the $n$-th Fibonacci number that is divisible by $k$. For given $n$ and $k$, compute $G(n,k)$.As this number can be too big, output it by modulo $10^9+7$. For example: $G(3,2)=9$ because the $3$-rd Fibonacci number that is divisible by $2$ is $34$. $[1,1,\textbf{2},3,5,\textbf{8},13,21,\textbf{34}]$.
This was one of my favourite tasks until I realised that the number of Fibonacci numbers divisible by $k$ within one period is either $1$, $2$ or $4$... First of all, the length of the period after which the sequence repeats modulo $k$ (the Pisano period) is at most $6k$. (We will just take this as a fact for now; the proof is too long to include here, but you can read about the Pisano period.) Since the sequence enters a cycle within at most $6k$ steps, we can brute-force our solution in $O(k)$ time. Also, one last fact we will need: if $F_i$ is divisible by $k$, then for every $j$, $F_{i\cdot j}$ is also divisible by $k$. Proof: as we know, $\gcd(F_n,F_m)=F_{\gcd(n,m)}$, so if we take any multiple of $i$ as $n$ and set $m=i$, then $\gcd(F_{n},F_{m})$ equals $F_{i}$. And because $F_{i}$ is divisible by $k$, $F_n$ is also divisible by $k$ for every multiple $n$ of $i$. So, our final solution is to brute-force the first $6k$ Fibonacci numbers and their remainders modulo $k$ in order to find the first one that is divisible by $k$. Then just multiply that index by $n$ and we get the answer for our task. Also, don't forget to take everything modulo $10^9+7$. (Someone kept whining about it in comments) Total time complexity $O(k)$.
[ "brute force", "math", "number theory" ]
1,800
#include <bits/stdc++.h> using namespace std; using LL = long long; #define ssize(x) (int)(x.size()) #define ALL(x) (x).begin(), (x).end() mt19937 rng(chrono::steady_clock::now().time_since_epoch().count()); int rd(int l, int r) { return uniform_int_distribution<int>(l, r)(rng); } const LL MOD = 1e9 + 7; int bp(int a, int n) { if (n == 0) return 1; if (n % 2 == 0) return bp(1LL * a * a % MOD, n / 2); else return 1LL * bp(a, n - 1) * a % MOD; } int inv(int a) { return bp(a, MOD - 2); } void solve() { LL n, k; cin >> n >> k; n %= MOD; if (k == 1) { cout << n << "\n"; return; } vector<int> fib(3); fib[0] = fib[1] = 1; int cnt = 0; for (int i = 2; i <= 10 * k; i++) { fib[i % 3] = (fib[(i + 2) % 3] + fib[(i + 1) % 3]) % k; if (fib[i % 3] == 0) cnt++; if (fib[i % 3] == 1 && fib[(i + 2) % 3] == 0) { cout << 1LL * i * n % MOD * inv(cnt) % MOD << "\n"; return; } } } int main() { ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int t = 1; cin >> t; while (t--) { solve(); } }
2033
G
Sakurako and Chefir
Given a tree with $n$ vertices rooted at vertex $1$. While walking through it with her cat Chefir, Sakurako got distracted, and Chefir ran away. To help Sakurako, Kosuke recorded his $q$ guesses. In the $i$-th guess, he assumes that Chefir got lost at vertex $v_i$ and had $k_i$ stamina. Also, for each guess, Kosuke assumes that Chefir could move along the edges an arbitrary number of times: - from vertex $a$ to vertex $b$, if $a$ \textbf{is an ancestor}$^{\text{∗}}$ of $b$, the stamina will not change; - from vertex $a$ to vertex $b$, if $a$ \textbf{is not an ancestor} of $b$, then Chefir's stamina decreases by $1$. If Chefir's stamina is $0$, he cannot make a move of the second type. For each assumption, your task is to find the distance to the farthest vertex that Chefir could reach from vertex $v_i$, having $k_i$ stamina. \begin{footnotesize} $^{\text{∗}}$Vertex $a$ is an ancestor of vertex $b$ if the shortest path from $b$ to the root passes through $a$. \end{footnotesize}
In each query, Chefir can ascend from vertex $v$ by no more than $k$. To maximize the distance, we should first ascend $x$ times ($0 \le x \le k$), and then descend to the deepest vertex. For each vertex $u$, we will find $maxd[u].x$ - the distance to the farthest descendant of vertex $u$. We will also need $maxd[u].y$ - the distance to the farthest descendant of $u$ reached through a different child subtree than the one giving $maxd[u].x$, which will allow us to avoid counting any edges twice when searching for the answer. Now we can construct binary lifts. The lift by $2^i$ from vertex $u$ will store information about all vertices on the path from $u$ to its $(2^i)$-th ancestor, excluding vertex $u$ itself. The value of the lift from vertex $u$ by $1$ (i.e., to its parent $p$) will be calculated as follows: if $maxd[u].x + 1 < maxd[p].x$, then the value is equal to $maxd[p].x - h(p)$, and $maxd[p].y - h(p)$ otherwise (the latter case means the deepest path from $p$ goes down through $u$ itself). Here, $h(p)$ is the distance from vertex $p$ to the root. The subsequent lifts are computed as the maximums of the corresponding values. The constructed maximums will not account for cases where any edge is traversed twice. Thus, by ascending from vertex $v$ by at most $k$, we will be able to find the best value of the form $\text{max\_depth} - h(u)$, and by adding $h(v)$ to it, we will obtain the distance to the desired vertex.
[ "data structures", "dfs and similar", "dp", "greedy", "trees" ]
2,200
#include <bits/stdc++.h> //#define int long long #define pb emplace_back #define mp make_pair #define x first #define y second #define all(a) a.begin(), a.end() #define rall(a) a.rbegin(), a.rend() typedef long double ld; typedef long long ll; using namespace std; mt19937 rnd(time(nullptr)); const int inf = 1e9; const int M = 1e9 + 7; const ld pi = atan2(0, -1); const ld eps = 1e-6; void precalc(int v, int p, vector<vector<int>> &sl, vector<pair<int, int>> &maxd, vector<int> &h){ maxd[v] = {0, 0}; if (v != p) h[v] = h[p] + 1; for(int u: sl[v]){ if (u == p) continue; precalc(u, v, sl, maxd, h); if (maxd[v].y < maxd[u].x + 1) { maxd[v].y = maxd[u].x + 1; } if (maxd[v].y > maxd[v].x) { swap(maxd[v].x, maxd[v].y); } } } void calc_binups(int v, int p, vector<vector<int>> &sl, vector<vector<pair<int, int>>> &binup, vector<pair<int, int>> &maxd, vector<int> &h){ binup[v][0] = {maxd[p].x, p}; if (maxd[p].x == maxd[v].x + 1) { binup[v][0].x = maxd[p].y; } binup[v][0].x -= h[p]; for(int i = 1; i < 20; ++i){ binup[v][i].y = binup[binup[v][i - 1].y][i - 1].y; binup[v][i].x = max(binup[v][i - 1].x, binup[binup[v][i - 1].y][i - 1].x); } for(int u: sl[v]){ if (u == p) continue; calc_binups(u, v, sl, binup, maxd, h); } } int get_ans(int v, int k, vector<vector<pair<int, int>>> &binup, vector<pair<int, int>> &maxd, vector<int> &h){ k = min(k, h[v]); int res = maxd[v].x - h[v]; int ini = h[v]; for(int i = 19; i >= 0; --i){ if ((1 << i) <= k) { res = max(res, binup[v][i].x); v = binup[v][i].y; k -= (1 << i); } } return res + ini; } void solve(int tc){ int n; cin >> n; vector<vector<int>> sl(n); for(int i = 1; i < n; ++i){ int u, v; cin >> u >> v; sl[--u].emplace_back(--v); sl[v].emplace_back(u); } vector<pair<int, int>> maxd(n); vector<int> h(n); precalc(0, 0, sl, maxd, h); vector<vector<pair<int, int>>> binup(n, vector<pair<int, int>>(20)); calc_binups(0, 0, sl, binup, maxd, h); int q; cin >> q; for(int _ = 0; _ < q; ++_){ int v, k; cin >> v >> k; cout << get_ans(v - 1, k, binup, 
maxd, h) << " "; } } bool multi = true; signed main() { int t = 1; if (multi)cin >> t; for (int i = 1; i <= t; ++i) { solve(i); cout << "\n"; } return 0; }
2034
A
King Keykhosrow's Mystery
There is a tale about the wise King Keykhosrow who owned a grand treasury filled with treasures from across the Persian Empire. However, to prevent theft and ensure the safety of his wealth, King Keykhosrow's vault was sealed with a magical lock that could only be opened by solving a riddle. The riddle involves two sacred numbers $a$ and $b$. To unlock the vault, the challenger must determine the smallest key number $m$ that satisfies two conditions: - $m$ must be greater than or equal to at least one of $a$ and $b$. - The remainder when $m$ is divided by $a$ must be equal to the remainder when $m$ is divided by $b$. Only by finding the smallest correct value of $m$ can one unlock the vault and access the legendary treasures!
Step 1: Prove that for the minimum value of $m$, we must have $m \% a = m \% b = 0$. Step 2: To prove this, show that if $m \% a = m \% b = x > 0$, then $m-1$ will also satisfy the problem's requirements. Step 3: Since $m \ge \min(a , b)$, if $x > 0$, then $m > \min(a , b)$ must hold. Therefore, $m - 1 \ge \min(a , b)$ implies that $m-1$ satisfies the requirements. Step 4: Thus, $m$ must be divisible by both $a$ and $b$. The smallest such $m$ is $lcm(a, b)$ which can be calculated in $O(\log(\max(a, b)))$.
[ "brute force", "chinese remainder theorem", "math", "number theory" ]
800
#include <bits/stdc++.h>
using namespace std;

int main() {
    int tt;
    cin >> tt;
    while (tt--) {
        int a, b;
        cin >> a >> b;
        cout << lcm(a, b) << endl;
    }
}
2034
B
Rakhsh's Revival
Rostam's loyal horse, Rakhsh, has seen better days. Once powerful and fast, Rakhsh has grown weaker over time, struggling to even move. Rostam worries that if too many parts of Rakhsh's body lose strength at once, Rakhsh might stop entirely. To keep his companion going, Rostam decides to strengthen Rakhsh, bit by bit, so no part of his body is too frail for too long. Imagine Rakhsh's body as a line of spots represented by a binary string $s$ of length $n$, where each $0$ means a weak spot and each $1$ means a strong one. Rostam's goal is to make sure that no interval of $m$ consecutive spots is entirely weak (all $0$s). Luckily, Rostam has a special ability called Timar, inherited from his mother Rudabeh at birth. With Timar, he can select any segment of length $k$ and instantly strengthen all of it (changing every character in that segment to $1$). The challenge is to figure out the minimum number of times Rostam needs to use Timar to keep Rakhsh moving, ensuring there are no consecutive entirely weak spots of length $m$.
We will solve the problem using the following approach: Start from the leftmost spot and move rightwards. Whenever a consecutive segment of $m$ weak spots (i.e., $0$'s) is found, apply Timar to a segment of length $k$ starting from the last index of the weak segment. Repeat this process until no segment of $m$ consecutive weak spots remains. The key idea behind this solution is that whenever we encounter a block of $m$ consecutive $0$'s, we need to strengthen it. Since we can apply Timar to a segment of length $k$, the optimal strategy is always to apply Timar starting at the last index of the block of $m$ consecutive $0$'s. Correctness Proof: For any block of $m$ consecutive $0$'s, we must apply Timar to at least one index within this block, hence the strengthened segment of length $k$ must overlap with the block of weak spots. Suppose an optimal solution applies Timar to a segment starting further left within the block. If we shift this segment one step to the right (closer to the end of the block), the solution remains valid and optimal, since it still covers the weak spots of the block while reducing unnecessary overlap with already-strengthened areas. By always starting from the last index of a block of $m$ consecutive $0$'s, this greedy strategy uses the minimum number of applications, making it correct and efficient.
[ "data structures", "greedy", "implementation", "two pointers" ]
1,000
#include <bits/stdc++.h>
using namespace std;

const int xn = 2e5 + 10;
int q, n, m, k, ps[xn];
string s;

int main() {
    cin >> q;
    while (q--) {
        cin >> n >> m >> k >> s;
        fill(ps, ps + n, 0);
        int ans = 0, cnt = 0, sum = 0;
        for (int i = 0; i < n; ++i) {
            sum += ps[i];
            if (sum || s[i] == '1')
                cnt = 0;
            else {
                cnt++;
                if (cnt == m) {
                    sum++, ans++, cnt = 0;
                    if (i + k < n) ps[i + k]--;
                }
            }
        }
        cout << ans << "\n";
    }
}
2034
C
Trapped in the Witch's Labyrinth
In the fourth labor of Rostam, the legendary hero from the Shahnameh, an old witch has created a magical maze to trap him. The maze is a rectangular grid consisting of $n$ rows and $m$ columns. Each cell in the maze points in a specific direction: up, down, left, or right. The witch has enchanted Rostam so that whenever he is in a cell, he will move to the next cell in the direction indicated by that cell. If Rostam eventually exits the maze, he will be freed from the witch's enchantment and will defeat her. However, if he remains trapped within the maze forever, he will never escape. The witch has not yet determined the directions for all the cells. She wants to assign directions to the unspecified cells in such a way that the number of starting cells from which Rostam will be trapped forever is maximized. Your task is to find the maximum number of starting cells which make Rostam trapped.
If a cell has a fixed direction and following the directions from it leads outside the maze, then anyone starting there must eventually exit. Such cells cannot be part of any loop. Once we identify the cells that lead out of the maze, we can analyze the remaining ones. An undetermined ($?$) cell might either lead to the exit or form part of a loop. If all four neighboring cells of a $?$ cell eventually lead out of the maze, then this $?$ cell will also lead out of the maze no matter which direction is assigned to it; the state of such $?$ cells can therefore be determined from their surroundings. For all remaining cells (directed cells that do not lead out of the maze, and $?$ cells that cannot be determined to lead to an exit), we can assign directions such that starting from those cells eventually leads to a loop. These cells will form the loops. To find how many cells eventually escape, we can run a search (DFS or BFS) on the reversed graph, where all directions are reversed. By starting this search from the "out-of-maze" cells, we identify all cells from which the outside is reachable and which thus eventually lead out of the maze. Count the number of cells that can reach the exit, and subtract this number from the total number of cells in the maze to determine how many are part of loops (i.e., cells that cannot reach the exit).
[ "constructive algorithms", "dfs and similar", "graphs", "implementation" ]
1,400
#include <bits/stdc++.h> using namespace std; int main() { int tc; cin >> tc; while(tc--){ int n, m; cin >> n >> m; string c[n+1]; for(int i = 1 ; i <= n ; i++) cin >> c[i] , c[i] = "-" + c[i]; vector<pair<int,int>> jda[n+2][m+2]; for(int i = 1 ; i <= n ; i++){ for(int j = 1 ; j <= m ; j++){ if(c[i][j] == 'U') jda[i-1][j].push_back({i , j}); if(c[i][j] == 'R') jda[i][j+1].push_back({i , j}); if(c[i][j] == 'D') jda[i+1][j].push_back({i , j}); if(c[i][j] == 'L') jda[i][j-1].push_back({i , j}); } } int vis[n+2][m+2] = {}; queue<pair<int,int>> q; for(int j = 0 ; j <= m+1 ; j++) vis[0][j] = 1 , q.push({0 , j}); for(int i = 1 ; i <= n+1 ; i++) vis[i][0] = 1 , q.push({i , 0}); for(int j = 1 ; j <= m+1 ; j++) vis[n+1][j] = 1 , q.push({n+1 , j}); for(int i = 1 ; i <= n ; i++) vis[i][m+1] = 1 , q.push({i , m+1}); while(q.size()){ auto [i , j] = q.front(); q.pop(); for(auto [a , b] : jda[i][j]){ if(vis[a][b] == 0){ vis[a][b] = 1; q.push({a , b}); } } } for(int i = 1 ; i <= n ; i++){ for(int j = 1 ; j <= m ; j++){ if(c[i][j] == '?' and vis[i-1][j] and vis[i][j+1] and vis[i+1][j] and vis[i][j-1]) vis[i][j] = 1; } } int ans = n * m; for(int i = 1 ; i <= n ; i++){ for(int j = 1 ; j <= m ; j++){ if(vis[i][j] == 1) ans -= 1; } } cout << ans << endl; } return 0; }
2034
D
Darius' Wisdom
Darius the Great is constructing $n$ stone columns, each consisting of a base and between $0$, $1$, or $2$ inscription pieces stacked on top. In each move, Darius can choose two columns $u$ and $v$ such that the difference in the number of inscriptions between these columns is exactly $1$, and transfer one inscription from the column with more inscriptions to the other one. It is guaranteed that at least one column contains exactly $1$ inscription. Since beauty is the main pillar of historical buildings, Darius wants the columns to have ascending heights. To avoid excessive workers' efforts, he asks you to plan a sequence of \textbf{at most $n$} moves to arrange the columns in non-decreasing order based on the number of inscriptions. Minimizing the number of moves is \textbf{not required}.
Step 1: Using two moves, we can move an element to any arbitrary position in the array. Thus, we can place all $0$'s in their correct positions with at most $2 \min(count(0), count(1) + count(2))$ moves. Step 2: After placing all $0$'s, the rest of the array will contain only $1$'s and $2$'s. To sort this part of the array, we need at most $\min(count(1), count(2))$ moves. Step 3: The first step takes at most $n$ moves, and the second step takes at most $\frac{n}{2}$ moves. However, it can be proven that the total number of moves is at most $\frac{8n}{7}$. Step 4: We can assume $count(0) \leq count(2)$ without loss of generality (Why?). So, the maximum number of moves is: $2 \min(count(0), count(1)+count(2))+\min(count(1), count(2))$ $= 2 \cdot count(0) + \min(count(1), count(2))$ $\le count(0) + \max(count(1), count(2)) + \min(count(1), count(2))$ $=count(0)+count(1)+count(2)=n$ Better approach: Step 1: Since we are allowed to perform $n$ moves, assign each index one "move" as its "specified cost". Step 2: While there exists an index with a value of $0$ or $2$ that can be fixed with just one move, fix it using its assigned cost. Step 3: After fixing all $0$'s, $2$'s, and all $1$'s except one, the remaining array will have the following structure, and we are now allowed to use $2x+1$ moves: $2\ 2\ \dots\ 2\ (x\ \text{times})\ 1\ 0\ 0\ \dots\ 0\ (x\ \text{times})$ Step 4: First, swap the $1$ with a random element (denote it as $r$). Then, for $2x-1$ moves, swap the index with the value $1$ with any index where the correct value must be placed, except $r$. Finally, swap $1$ and $r$.
[ "constructive algorithms", "greedy", "implementation", "sortings" ]
1,600
// In the name of god #include <bits/stdc++.h> using namespace std; const int N = 200000; int n, cnt[3], a[N]; vector<int> vip[3][3]; // Value In Position vector<pair<int, int>> swaps; inline int Pos(int index) { if(index < cnt[0]) return 0; else if(index < cnt[0]+cnt[1]) return 1; else return 2; } inline void AddBack(int index) { vip[a[index]][Pos(index)].push_back(index); } inline void RemoveBack(int index) { vip[a[index]][Pos(index)].pop_back(); } inline void Swap(int i, int j) { swaps.push_back({i, j}); RemoveBack(i); RemoveBack(j); swap(a[i], a[j]); AddBack(i); AddBack(j); } inline void Fix() { bool change; do { change = false; while ((!vip[1][0].empty()) && (!vip[0][1].empty())) Swap(vip[1][0].back(), vip[0][1].back()), change = true; while ((!vip[1][0].empty()) && (!vip[0][2].empty())) Swap(vip[1][0].back(), vip[0][2-0].back()), change = true; while ((!vip[1][2].empty()) && (!vip[2][1].empty())) Swap(vip[1][2].back(), vip[2][1].back()), change = true; while ((!vip[1][2].empty()) && (!vip[2][0].empty())) Swap(vip[1][2].back(), vip[2][0].back()), change = true; } while (change); } inline void PingPong() { if(vip[0][2].empty()) return; Swap(vip[1][1].back(), vip[0][2].back()); while (true){ Swap(vip[1][2].back(), vip[2][0].back()); if(vip[0][2].empty()) break; Swap(vip[1][0].back(), vip[0][2].back()); } Swap(vip[1][0].back(), vip[0][1].back()); } int main() { ios_base::sync_with_stdio(false), cin.tie(0); int t; cin >> t; while (t--) { cin >> n; for(int i = 0; i < n; i++) cin >> a[i], cnt[a[i]]++; for(int i = 0; i < n; i++) AddBack(i); Fix(); PingPong(); cout << swaps.size() << endl; for(auto [i, j]: swaps) cout << i+1 << ' ' << j+1 << endl; // reset cnt[0] = cnt[1] = cnt[2] = 0; for(int i = 0; i < 3; i++) for(int j = 0; j < 3; j++) vip[i][j].clear(); swaps.clear(); } return 0; }
2034
E
Permutations Harmony
Rayan wants to present a gift to Reyhaneh to win her heart. However, Reyhaneh is particular and will only accept a k-harmonic set of permutations. We define a k-harmonic set of permutations as a set of $k$ \textbf{pairwise distinct} permutations $p_1, p_2, \ldots, p_k$ of size $n$ such that for every pair of indices $i$ and $j$ (where $1 \leq i, j \leq n$), the following condition holds: $$ p_1[i] + p_2[i] + \ldots + p_k[i] = p_1[j] + p_2[j] + \ldots + p_k[j] $$ Your task is to help Rayan by either providing a valid k-harmonic set of permutations for given values of $n$ and $k$ or by determining that such a set does not exist. We call a sequence of length $n$ a permutation if it contains every integer from $1$ to $n$ exactly once.
Step 1: There are $n$ positions whose column sums must be equal, and their total is $\frac{n \cdot (n+1) \cdot k}{2}$. Hence, each column sum must be $\frac{(n+1) \cdot k}{2}$. Additionally, there must be $k$ distinct permutations, so $k \leq n!$. Step 2: For even $k$, we can group the $n!$ permutations into $\frac{n!}{2}$ pairs ("handles"), where each pair is itself a solution for $k = 2$; then we pick any $\frac{k}{2}$ handles. The match for a permutation $a_1, a_2, \ldots, a_n$ is $(n+1)-a_1, (n+1)-a_2, \ldots, (n+1)-a_n$. Step 3: For $k = 1$, $n$ must be $1$. Symmetrically, $k$ cannot be $n! - 1$. Solutions for the other odd $k$ will now be provided. Step 4: To construct an answer for $k = 3$ and $n = 2x + 1$, consider the following construction, derived using a greedy approach: Step 5: Now, combine the solution for even $k$ with the $k = 3$ solution by selecting those 3 permutations and $\frac{k-3}{2}$ other handles.
[ "combinatorics", "constructive algorithms", "greedy", "hashing", "math" ]
2,200
// In the name of God #include <bits/stdc++.h> using namespace std; int main() { ios_base::sync_with_stdio(false), cin.tie(0); int t; cin >> t; int f[8] = {1,1,2,6,24,120,720,5040}; while(t--) { int n, k; cin >> n >> k; if(min(n, k) == 1) { if(n*k == 1) { cout << "Yes\n1\n"; } else cout << "No\n"; } else if(n < 8 and (f[n] < k or f[n] == k+1)) { cout << "No\n"; } else if(n % 2 == 0 and k % 2 == 1) { cout << "No\n"; } else { vector<vector<int>> base, all; vector<int> per(n); for(int i = 0; i < n; i++) per[i] = i+1; if(k % 2) { vector<int> p1(n), p2(n); for(int i = 0; i < n; i += 2) p1[i] = (n+1)/2-i/2, p2[i] = n-i/2; for(int i = 1; i < n; i += 2) p1[i] = n-i/2, p2[i] = n/2-i/2; all = base = {per, p1, p2}; k -= 3; } do { if(k == 0) break; vector<int> mirror(n); for(int i = 0; i < n; i++) mirror[i] = n+1-per[i]; if(per < mirror) { bool used = false; for(auto &p: base) used |= (p == per), used |= (p == mirror); if(not used) { k -= 2; all.push_back(per); all.push_back(mirror); } } } while (next_permutation(per.begin(), per.end())); cout << "Yes\n"; for(auto p: all) { for(int i = 0; i < n; i++) cout << p[i] << (i+1==n?'\n':' '); } } } return 0; } // Thanks God
2034
F1
Khayyam's Royal Decree (Easy Version)
\textbf{This is the easy version of the problem. The only differences between the two versions are the constraints on $k$ and the sum of $k$.} In ancient Persia, Khayyam, a clever merchant and mathematician, is playing a game with his prized treasure chest containing $n$ red rubies worth $2$ dinars each and $m$ blue sapphires worth $1$ dinar each. He also has a satchel, which starts empty, and $k$ scrolls with pairs $(r_1, b_1), (r_2, b_2), \ldots, (r_k, b_k)$ that describe special conditions. The game proceeds for $n + m$ turns as follows: - Khayyam draws a gem uniformly at random from the chest. - He removes the gem from the chest and places it in his satchel. - If there exists a scroll $i$ ($1 \leq i \leq k$) such that the chest contains exactly $r_i$ red rubies and $b_i$ blue sapphires, Khayyam receives a royal decree that doubles the value of all the gems in his satchel as a reward for achieving a special configuration. Note that the value of some gems might be affected by multiple decrees, and in that case the gems' value is doubled multiple times. Determine the expected value of Khayyam's satchel at the end of the game, modulo $998,244,353$. Formally, let $M = 998,244,353$. It can be shown that the exact answer can be expressed as an irreducible fraction $\frac{p}{q}$, where $p$ and $q$ are integers and $q \not \equiv 0 \pmod{M}$. Output the integer equal to $p \cdot q^{-1} \bmod M$. In other words, output such an integer $x$ that $0 \le x < M$ and $x \cdot q \equiv p \pmod{M}$.
Step 1: For simplicity, redefine the special conditions in terms of the number of rubies and sapphires in your satchel (not the chest). Add two dummy states, $(0,0)$ and $(n,m)$, for convenience (the first one indexed as $0$ and the second one as $k+1$). Note that these dummy states do not involve doubling the value. Step 2: Order the redefined conditions $(x, y)$ in increasing order of $x + y$. Step 3: Define $ways_{i,j}$ as the number of ways to move from state $i$ to state $j$ without passing through any other special condition. This can be computed using inclusion-exclusion in $O(k^3)$. Step 4: Define $cost_{i,j}$, the increase in value for moving directly from state $i$ to state $j$ without intermediate doubling, as: $cost_{i,j} = 2|x_i - x_j| + |y_i - y_j|$ Step 5: Define $dp_i$ as the total sum of the value of your satchel across all ways to reach the state defined by the $i$-th condition. This can be computed recursively as: $dp_i = 2 \sum_{0 \leq j < i} ways_{j,i} \times (dp_j + \binom{x_j + y_j}{x_j} \times cost_{j,i})$ (for the dummy final state $i = k+1$, the leading factor $2$ is omitted, since no decree is triggered there). Step 6: Compute the final answer as the value of $dp_{k+1}$ divided by the total number of ways to move from $(0,0)$ to $(n,m)$, which is $\binom{n+m}{n}$.
[ "combinatorics", "dp", "math", "sortings" ]
2,500
#include <bits/stdc++.h> using namespace std; #define nl "\n" #define nf endl #define ll long long #define pb push_back #define _ << ' ' << #define INF (ll)1e18 #define mod 998244353 #define maxn 400010 ll fc[maxn], nv[maxn]; ll fxp(ll b, ll e) { ll r = 1, k = b; while (e != 0) { if (e % 2) r = (r * k) % mod; k = (k * k) % mod; e /= 2; } return r; } ll inv(ll x) { return fxp(x, mod - 2); } ll bnm(ll a, ll b) { if (a < b || b < 0) return 0; ll r = (fc[a] * nv[b]) % mod; r = (r * nv[a - b]) % mod; return r; } int main() { ios::sync_with_stdio(0); cin.tie(0); fc[0] = 1; nv[0] = 1; for (ll i = 1; i < maxn; i++) { fc[i] = (i * fc[i - 1]) % mod; nv[i] = inv(fc[i]); } ll t; cin >> t; while (t--) { ll n, m, k; cin >> n >> m >> k; vector<array<ll, 2>> a(k + 2, {0, 0}); for (ll i = 1; i <= k; i++) { cin >> a[i][0] >> a[i][1]; a[i][0] = n - a[i][0]; a[i][1] = m - a[i][1]; } a[k + 1] = {n, m}; k++; sort(a.begin() + 1, a.end()); auto paths = [&](ll i, ll j) { ll dx = a[j][0] - a[i][0], dy = a[j][1] - a[i][1]; return bnm(dx + dy, dx); }; auto add = [&](ll &x, ll y) { x = (x + y) % mod; x = (x + mod) % mod; }; vector direct(k + 1, vector<ll>(k + 1, 0)); for (ll i = 1; i <= k; i++) { for (ll j = i - 1; j >= 0; j--) { direct[j][i] = paths(j, i); for (ll l = j + 1; l < i; l++) { add(direct[j][i], -paths(j, l) * direct[l][i]); } } } vector<ll> dp(k + 1, 0); for (ll i = 1; i <= k; i++) { for (ll j = 0; j < i; j++) { if (direct[j][i] == 0) continue; ll partial = dp[j]; ll delta = 2 * (a[i][0] - a[j][0]) + (a[i][1] - a[j][1]); add(partial, paths(0, j) * delta); add(dp[i], partial * direct[j][i]); } if (i != k) dp[i] = (2 * dp[i]) % mod; } ll ans = (dp[k] * inv(bnm(n + m, m))) % mod; cout << ans << nl; } return 0; }
2034
F2
Khayyam's Royal Decree (Hard Version)
\textbf{This is the hard version of the problem. The only differences between the two versions are the constraints on $k$ and the sum of $k$.} In ancient Persia, Khayyam, a clever merchant and mathematician, is playing a game with his prized treasure chest containing $n$ red rubies worth $2$ dinars each and $m$ blue sapphires worth $1$ dinar each. He also has a satchel, which starts empty, and $k$ scrolls with pairs $(r_1, b_1), (r_2, b_2), \ldots, (r_k, b_k)$ that describe special conditions. The game proceeds for $n + m$ turns as follows: - Khayyam draws a gem uniformly at random from the chest. - He removes the gem from the chest and places it in his satchel. - If there exists a scroll $i$ ($1 \leq i \leq k$) such that the chest contains exactly $r_i$ red rubies and $b_i$ blue sapphires, Khayyam receives a royal decree that doubles the value of all the gems in his satchel as a reward for achieving a special configuration. Note that the value of some gems might be affected by multiple decrees, and in that case the gems' value is doubled multiple times. Determine the expected value of Khayyam's satchel at the end of the game, modulo $998,244,353$. Formally, let $M = 998,244,353$. It can be shown that the exact answer can be expressed as an irreducible fraction $\frac{p}{q}$, where $p$ and $q$ are integers and $q \not \equiv 0 \pmod{M}$. Output the integer equal to $p \cdot q^{-1} \bmod M$. In other words, output such an integer $x$ that $0 \le x < M$ and $x \cdot q \equiv p \pmod{M}$.
Step 1: For simplicity, redefine the special conditions for the number of rubies and sapphires in your satchel (not chest). Step 2: Order the redefined conditions $(x, y)$ in increasing order based on the value of $x + y$. Step 3: Define $total_{i,j}$ as the total number of ways to move from state $i$ to state $j$ (ignoring special condition constraints). This can be computed as: $total_{i,j} = \binom{|x_i - x_j| + |y_i - y_j|}{|x_i - x_j|}$ Step 4: Define $weight_i$ as the total contribution of all paths passing through condition $i$ to reach the final state $(n, m)$. This can be computed recursively as: $weight_i = \sum_{i < j \leq k} total_{i,j} \times weight_j$ Step 5: The main insight is to account for the doubling effect of passing through multiple scrolls. If a path passes through a sequence of conditions $s_1, \dots, s_c$, each gem collected before entering $s_1$ is counted with multiplicity $2^c$. Instead of explicitly multiplying by $2^c$, consider the number of subsets $q_1, \dots, q_d$ of $s_1, \dots, s_c$. By summing over all subsets, the correct multiplicity is automatically handled. Step 6: Define $dp_i$ as the total value of all paths passing through condition $i$, considering the contribution of each state's rubies and sapphires. This can be computed as: $dp_i = (2x_i + y_i) \times total_{0, i} \times weight_i$ Step 7: Compute the final answer as $\sum dp_i$ divided by the total number of ways to move from $(0,0)$ to $(n,m)$, which is equal to $\binom{n+m}{n}$. Clarification: The approach hinges on the insight that $2^i$ can be derived from the structure of subsets of scrolls $s_1, \dots, s_c$. Generalizations to $3^i$ or other multiplicative factors are possible by appropriately modifying $weight_i$ and adjusting the factor in Step 5. For example, a factor of 3 can be applied by multiplying path contributions by 2 at the relevant steps.
[ "combinatorics", "dp", "math", "sortings" ]
2,800
#include <bits/stdc++.h> using namespace std; #define nl "\n" #define nf endl #define ll long long #define pb push_back #define _ << ' ' << #define INF (ll)1e18 #define mod 998244353 #define maxn 400010 ll fc[maxn], nv[maxn]; ll fxp(ll b, ll e) { ll r = 1, k = b; while (e != 0) { if (e % 2) r = (r * k) % mod; k = (k * k) % mod; e /= 2; } return r; } ll inv(ll x) { return fxp(x, mod - 2); } ll bnm(ll a, ll b) { if (a < b || b < 0) return 0; ll r = (fc[a] * nv[b]) % mod; r = (r * nv[a - b]) % mod; return r; } int main() { ios::sync_with_stdio(0); cin.tie(0); fc[0] = 1; nv[0] = 1; for (ll i = 1; i < maxn; i++) { fc[i] = (i * fc[i - 1]) % mod; nv[i] = inv(fc[i]); } ll t; cin >> t; while (t--) { ll n, m, k; cin >> n >> m >> k; vector<array<ll, 2>> a(k + 2, {0, 0}); for (ll i = 1; i <= k; i++) { cin >> a[i][0] >> a[i][1]; a[i][0] = n - a[i][0]; a[i][1] = m - a[i][1]; } a[k + 1] = {n, m}; k++; sort(a.begin() + 1, a.end()); auto paths = [&](ll i, ll j) { ll dx = a[j][0] - a[i][0], dy = a[j][1] - a[i][1]; return bnm(dx + dy, dx); }; auto add = [&](ll &x, ll y) { x = (x + y) % mod; x = (x + mod) % mod; }; vector<ll> cnt_weighted(k + 1, 0); cnt_weighted[k] = 1; for (ll i = k - 1; i >= 1; i--) { for (ll j = i + 1; j <= k; j++) { add(cnt_weighted[i], paths(i, j) * cnt_weighted[j]); } } ll ans = 0; for (ll i = 1; i <= k; i++) { ll delta = 2 * a[i][0] + a[i][1]; add(ans, delta * paths(0, i) % mod * cnt_weighted[i]); } ans = (ans * inv(bnm(n + m, m))) % mod; cout << ans << nl; } return 0; }
2034
G1
Simurgh's Watch (Easy Version)
\textbf{The only difference between the two versions of the problem is whether overlaps are considered at all points or only at integer points.} The legendary Simurgh, a mythical bird, is responsible for keeping watch over vast lands, and for this purpose, she has enlisted $n$ vigilant warriors. Each warrior is alert during a specific time segment $[l_i, r_i]$, where $l_i$ is the start time (included) and $r_i$ is the end time (included), both positive integers. One of Simurgh's trusted advisors, Zal, is concerned that if multiple warriors are stationed at the same time and all wear the same color, the distinction between them might be lost, causing confusion in the watch. To prevent this, whenever multiple warriors are on guard at the same moment (\textbf{which can be non-integer}), there must be at least one color which is worn by exactly one warrior. So the task is to determine the minimum number of colors required and assign a color $c_i$ to each warrior's segment $[l_i, r_i]$ such that, for every (real) time $t$ contained in at least one segment, there exists one color which belongs to exactly one segment containing $t$.
It is easy to check if the solution can be achieved with only one color. For any time point $x$, there must be at most one interval containing $x$, since if multiple intervals contain $x$, they must be colored differently. A simple strategy is to solve the problem using three colors. First, we color some intervals with colors 1 and 2, then color others with color 3. For each step, we find the leftmost point that has not been colored yet and color the segment that contains this point. We always choose the interval with the largest endpoint that contains the current point. By coloring the intervals alternately with colors 1 and 2, we ensure that all points are covered by exactly one of these colors. Now, we check if we can color the intervals with just two colors using a greedy algorithm: We iterate over the intervals sorted by start (increasingly) and then by end (decreasingly). At each point, we keep track of the number of colors used in previous intervals that are not yet closed. Let this number be $I$, and suppose we are currently at interval $i$. We color the current interval based on the value of $I$: If $I = 0$, color interval $i$ with color 1. If $I = 1$, color interval $i$ with the opposite color of the currently used color. If $I = 2$, color interval $i$ with the opposite color of the interval with the greatest endpoint among the currently open intervals. If it is impossible to assign a unique color between overlapping intervals at any point, it can be shown that coloring the intervals using only 2 colors is impossible.
Solving G1 using G2: It's sufficient to check the integer points and half-points (e.g., 1.5, 2.5, \dots ) to verify whether the coloring is valid (Why?). To handle this, we can multiply all the given points by two, effectively converting the problem into one in which only integer points exist. After this transformation, we solve the problem in the integer system of G2, where the intervals and coloring rules are defined using integer boundaries!
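The doubling trick above suggests a simple brute-force validator: double every endpoint, then at each integer point of the doubled system check that some color is worn by exactly one covering segment. A sketch (function name is ours; this is a checker, not the solution):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Validate a coloring for G1 by checking every integer and half-integer
// point: multiply all endpoints by 2 and test each covered integer point
// of the doubled system. O(n * range) -- brute force, for validation only.
bool valid_coloring(const vector<array<int,3>>& seg /* {l, r, color} */) {
    int lo = INT_MAX, hi = INT_MIN;
    for (auto& s : seg) { lo = min(lo, 2 * s[0]); hi = max(hi, 2 * s[1]); }
    for (int t = lo; t <= hi; t++) {            // t is a doubled coordinate
        map<int,int> cnt;                        // color -> covering count
        for (auto& s : seg)
            if (2 * s[0] <= t && t <= 2 * s[1]) cnt[s[2]]++;
        bool unique_color = cnt.empty();         // uncovered points are fine
        for (auto& [col, c] : cnt) if (c == 1) unique_color = true;
        if (!unique_color) return false;
    }
    return true;
}
```
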
[ "constructive algorithms", "greedy", "implementation", "sortings" ]
3,500
/* In the name of Allah */ // Welcome to the Soldier Side! // Where there's no one here, but me... #include<bits/stdc++.h> using namespace std; const int N = 2e5 + 5; int t, n, l[N], r[N], col[N]; vector<int> st[N << 1], en[N << 1]; void compress_points() { vector<int> help; for (int i = 0; i < n; i++) { help.push_back(l[i]); help.push_back(r[i]); } sort(help.begin(), help.end()); help.resize(unique(help.begin(), help.end()) - help.begin()); for (int i = 0; i < n; i++) { l[i] = lower_bound(help.begin(), help.end(), l[i]) - help.begin(); r[i] = lower_bound(help.begin(), help.end(), r[i]) - help.begin(); } } void record_points() { for (int i = 0; i < n; i++) { st[l[i]].push_back(i); en[r[i] + 1].push_back(i); } for (int i = 0; i < 2 * n; i++) sort(st[i].begin(), st[i].end(), [](int i, int j) { return r[i] > r[j]; }); } void try3_points() { fill(col, col + n, 0); int cur = -1, nxt = -1, c = 2; for (int i = 0; i < 2 * n; i++) { if (st[i].empty()) continue; if (!~cur || i > r[cur]) { if (cur ^ nxt && r[nxt] < i) { col[nxt] = (c ^= 3); cur = nxt; } if (cur ^ nxt) cur = nxt; else { cur = st[i][0]; for (int p: st[i]) if (r[p] > r[cur]) cur = p; nxt = cur; } col[cur] = (c ^= 3); } for (int p: st[i]) if (r[p] > r[nxt]) nxt = p; } if (cur ^ nxt) col[nxt] = c ^ 3; } bool is_bad(set<pair<int, int>> s[2]) { int cnt1 = s[0].size(), cnt2 = s[1].size(); return cnt1 + cnt2 && cnt1 ^ 1 && cnt2 ^ 1; } void try2_points() { set<pair<int, int>> s[2]; for (int i = 0; i <= 2 * n; i++) { for (int p: en[i]) s[col[p]].erase({r[p], p}); if (is_bad(s)) { try3_points(); return; } for (int p: st[i]) { int cnt1 = s[0].size(); int cnt2 = s[1].size(); if (!cnt1 || !cnt2) col[p] = cnt1 > 0; else if (cnt1 ^ cnt2) col[p] = cnt1 < cnt2; else col[p] = s[0].begin()->first > s[1].begin()->first; s[col[p]].insert({r[p], p}); if (is_bad(s)) { try3_points(); return; } } } } void read_input() { cin >> n; for (int i = 0; i < n; i++) cin >> l[i] >> r[i]; } void solve() { compress_points(); record_points(); 
try2_points(); } void write_output() { cout << *max_element(col, col + n) + 1 << endl; for (int i = 0; i < n; i++) cout << col[i] + 1 << "\n "[i < n - 1]; } void reset_variables() { for (int i = 0; i < n; i++) { col[i] = 0; st[l[i]].clear(); en[r[i] + 1].clear(); } } int main() { ios:: sync_with_stdio(0), cin.tie(0), cout.tie(0); for (cin >> t; t--; reset_variables()) read_input(), solve(), write_output(); return 0; }
2034
G2
Simurgh's Watch (Hard Version)
\textbf{The only difference between the two versions of the problem is whether overlaps are considered at all points or only at integer points.} The legendary Simurgh, a mythical bird, is responsible for keeping watch over vast lands, and for this purpose, she has enlisted $n$ vigilant warriors. Each warrior is alert during a specific time segment $[l_i, r_i]$, where $l_i$ is the start time (included) and $r_i$ is the end time (included), both positive integers. One of Simurgh's trusted advisors, Zal, is concerned that if multiple warriors are stationed at the same time and all wear the same color, the distinction between them might be lost, causing confusion in the watch. To prevent this, whenever multiple warriors are on guard at the same \textbf{integer} moment, there must be at least one color which is worn by exactly one warrior. So the task is to determine the minimum number of colors required and assign a color $c_i$ to each warrior's segment $[l_i, r_i]$ such that, for every (integer) time $t$ contained in at least one segment, there exists one color which belongs to exactly one segment containing $t$.
Step 1: It is easy to check if the solution can be achieved with only one color. For any time point $x$, there must be at most one interval containing $x$, since if multiple intervals contain $x$, they must be colored differently. Step 2: A simple strategy is to solve the problem using three colors. First, we color some intervals with colors 1 and 2, then color others with color 3. For each step, we find the leftmost point that has not been colored yet and color the segment that contains this point. We always choose the interval with the largest endpoint that contains the current point. By coloring the intervals alternately with colors 1 and 2, we ensure that all points are covered by exactly one of these colors. Step 3: Now, we check if we can color the intervals with just two colors. For some point $x$, suppose we have already colored the intervals $[l_i, r_i]$ with $l_i \leq x$, such that all points before $x$ have a unique color. At each step, we only need to determine which of the intervals $p$ with $l_p \leq x \leq r_p$ can have a unique color. The key observation is that if an interval can be uniquely colored at time $x$, it can also remain uniquely colored for all times $t$ such that $x \leq t \leq r_i$. Lemma: If an interval $[l_i, r_i]$ can be uniquely colored at time $x$, it can also be uniquely colored at all subsequent times $x \leq t \leq r_i$. Proof: Consider coloring the intervals at time $x$. Intervals starting at $x + 1$ will be colored with the opposite color to interval $i$, ensuring that the interval remains uniquely colored at time $x+1$. With this lemma, we can conclude that the number of changes in the coloring is $O(n)$; it suffices to track the intervals that are added and removed at each point in time.
Step 4: To efficiently move from time $x$ to $x + 1$, we perform the following steps: Remove the intervals that have $r_i = x$ (since they no longer contain $x+1$). Add the intervals that have $l_i = x + 1$. Update the set of intervals that can be uniquely colored at time $x+1$. Step 5: Finally, we observe that only the following points are important for the coloring: $l_i$ and $r_i$ for each interval, as well as $l_i - 1$ and $r_i + 1$, since these points mark the boundaries where intervals start or end. Thus, we can compress the numbers to reduce the range of values we need to process.
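The coordinate compression of Step 5 can be sketched as follows (a minimal helper; the name is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Step 5: only l_i - 1, l_i, r_i, r_i + 1 matter for the coloring,
// so collect them, sort, and deduplicate.
// Returns the sorted list of distinct important points.
vector<int> important_points(const vector<pair<int,int>>& seg) {
    vector<int> pts;
    for (auto [l, r] : seg) {
        pts.push_back(l - 1); pts.push_back(l);
        pts.push_back(r);     pts.push_back(r + 1);
    }
    sort(pts.begin(), pts.end());
    pts.erase(unique(pts.begin(), pts.end()), pts.end());
    return pts;
}
```

Indices into this list then replace the raw coordinates, reducing the sweep range to $O(n)$ points.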
[ "greedy", "implementation" ]
3,500
/* In the name of Allah */ // Welcome to the Soldier Side! // Where there's no one here, but me... #include<bits/stdc++.h> using namespace std; const int N = 2e5 + 5; vector<int> st[N << 2], en[N << 2]; int t, n, k, l[N], r[N], dp[N], col[N], prv[N]; void compress_numbers() { vector<int> help; for (int i = 0; i < n; i++) { help.push_back(l[i] - 1); help.push_back(l[i]); help.push_back(r[i]); help.push_back(r[i] + 1); } sort(help.begin(), help.end()); help.resize(k = unique(help.begin(), help.end()) - help.begin()); for (int i = 0; i < n; i++) { l[i] = lower_bound(help.begin(), help.end(), l[i]) - help.begin(); r[i] = lower_bound(help.begin(), help.end(), r[i]) - help.begin(); } } void save_checkpoints() { for (int i = 0; i < n; i++) { st[l[i]].push_back(i); en[r[i]].push_back(i); } } bool check_one() { for (int i = 0, open = 0; i < k; i++) { open += st[i].size(); if (open > 1) return false; open -= en[i].size(); } return true; } void color_with_two() { for (int i = k - 1, cur = -1; ~i; i--) { if (en[i].empty()) continue; while (!~cur || i < dp[cur]) if (~cur && ~prv[cur]) { col[prv[cur]] = col[cur]; if (r[prv[cur]] >= l[cur]) col[prv[cur]] ^= 1; cur = prv[cur]; } else for (int p: en[i]) if (~dp[p] && (!~cur || dp[p] < dp[cur])) cur = p; for (int p: en[i]) if (p ^ cur) col[p] = col[cur] ^ 1; } } bool check_two() { set<int> goods, bads; fill(dp, dp + n, -1); fill(prv, prv + n, -1); for (int i = 0; i < k; i++) { int prev = -1; if (i) for (int p: en[i - 1]) { bads.erase(p), goods.erase(p); if (~dp[p] && (!~prev || dp[p] < dp[prev])) prev = p; } int open = goods.size() + bads.size(); if (open == 1 || (open == 2 && !goods.empty())) { for (int p: bads) { if (open == 1) prv[p] = prev; else prv[p] = *goods.begin(); goods.insert(p); dp[p] = i; } bads.clear(); } if (open == 1) prev = *goods.begin(); for (int p: st[i]) if (!open || open == 1 || ~prev) { goods.insert(p); prv[p] = prev; dp[p] = i; } else bads.insert(p); open += st[i].size(); if (open && goods.empty()) return 
false; } color_with_two(); return true; } void color_with_three() { int cur = -1, nxt = -1; for (int i = 0; i < k; i++) { if (st[i].empty()) continue; if (~cur && i > r[cur] && nxt ^ cur) { col[nxt] = col[cur] ^ 3; cur = nxt; } if (!~cur || i > r[cur]) { for (int p: st[i]) if (!~cur || r[p] > r[cur]) cur = p; col[nxt = cur] = 1; } for (int p: st[i]) if (r[p] > r[nxt]) nxt = p; } if (cur ^ nxt) col[nxt] = col[cur] ^ 3; } void read_input() { cin >> n; for (int i = 0; i < n; i++) cin >> l[i] >> r[i]; } void solve() { compress_numbers(); save_checkpoints(); if (check_one()) return; if (check_two()) return; color_with_three(); } void write_output() { cout << *max_element(col, col + n) + 1 << endl; for (int i = 0; i < n; i++) cout << col[i] + 1 << "\n "[i < n - 1]; } void reset_variables() { fill(col, col + n, 0); for (int i = 0; i < k; i++) { st[i].clear(); en[i].clear(); } } int main() { ios:: sync_with_stdio(0), cin.tie(0), cout.tie(0); for (cin >> t; t--; reset_variables()) read_input(), solve(), write_output(); return 0; }
2034
H
Rayan vs. Rayaneh
Rayan makes his final efforts to win Reyhaneh's heart by claiming he is stronger than Rayaneh (i.e., computer in Persian). To test this, Reyhaneh asks Khwarizmi for help. Khwarizmi explains that a set is integer linearly independent if no element in the set can be written as an integer linear combination of the others. Rayan is given a set of integers each time and must identify one of the largest possible integer linearly independent subsets. Note that a single element is always considered an integer linearly independent subset. An integer linear combination of $a_1, \ldots, a_k$ is any sum of the form $c_1 \cdot a_1 + c_2 \cdot a_2 + \ldots + c_k \cdot a_k$ where $c_1, c_2, \ldots, c_k$ are integers (which may be zero, positive, or negative).
Step 1: According to Bézout's Identity, we can compute $\gcd(x_1, \ldots, x_t)$ and all its multipliers as an integer linear combination of $x_1, x_2, \ldots, x_t$. Step 2: A set {$a_1, \ldots, a_k$} is good (integer linearly independent) if for every $i$, $\gcd(${$a_j \mid j \neq i$}$) \nmid a_i$. Step 3: A set {$a_1, \ldots, a_k$} is good if and only if there exists a set {${p_1}^{q_1}, {p_2}^{q_2}, \ldots, {p_k}^{q_k}$} such that ${p_i}^{q_i} \mid a_j$ for $j \neq i$ and ${p_i}^{q_i} \nmid a_i$. Step 4: The set {$a_1, \ldots, a_k$} can be identified by determining {${p_1}^{q_1}, {p_2}^{q_2}, \ldots, {p_k}^{q_k}$}. Assume $p_1^{q_1} < p_2^{q_2} < \ldots < p_k^{q_k}$, where $p_i \neq p_j$ and $p_i$ is prime. Step 5: Let $G = {p_1}^{q_1} \cdot {p_2}^{q_2} \ldots \cdot {p_k}^{q_k}.$ Then {$a_1, \ldots, a_k$} is good if and only if $\frac{G}{{p_i}^{q_i}} \mid a_i$ and $G \nmid a_i$ for every $i$. Step 6: The answer is a singleton if, for every pair of numbers $x$ and $y$ in the array, $x \mid y$ or $y \mid x$. Since the numbers are distinct, a good subset {$a_1, a_2$} can always be found by searching the first $\log M + 2$ elements. Step 7: Define $CM[i]$ (count multipliers of $i$) as the number of $x$ such that $i \mid a_x$. This can be computed in $O(n + M \log M)$. Step 8: A corresponding set {$a_1, \ldots, a_k$} exists for a set {${p_1}^{q_1}, {p_2}^{q_2}, \ldots, {p_k}^{q_k}$} if and only if $CM\left[\frac{G}{{p_i}^{q_i}}\right] > CM[G] \geq 0$ for all $i$. Step 9: Iterate over all valid sets of the form {${p_1}^{q_1}, {p_2}^{q_2}, \ldots, {p_k}^{q_k}$}, and check if a corresponding {$a_1, a_2, \ldots, a_k$} exists. Note that $k \geq 3$ since a good subset {$a_1, a_2$} is found using another method. 
Step 10: We know $\frac{G}{{p_1}^{q_1}} \leq M$ and also ${p_1}^{q_1} \leq \sqrt{M},$ as ${p_1}^{q_1} \leq \sqrt{{p_2}^{q_2} \cdot {p_3}^{q_3}} \leq \sqrt{\frac{G}{{p_1}^{q_1}}} \leq \sqrt{M}.$ Step 11: There are $\sum_{i=1}^{\frac{\log M}{2}} P[\lfloor \sqrt[2i]{M} \rfloor]$ numbers in the form ${p_1}^{q_1}$, where $P[i]$ denotes the number of primes in the range $[1, i]$. This count is $O(\frac{\sqrt M}{\log M})$. Step 12: The value of $k$ is at most 6 (denoted as $K$), as ${p_2}^{q_2} \ldots {p_k}^{q_k} = \frac{G}{{p_1}^{q_1}} \leq M,$ and $3 \cdot 5 \cdot 7 \cdot 11 \cdot 13 \leq M < 3 \cdot 5 \cdot 7 \cdot 11 \cdot 13 \cdot 17.$ Step 13: We can determine {$a_1, \ldots, a_k$} from {${p_1}^{q_1}, {p_2}^{q_2}, \ldots, {p_k}^{q_k}$} in $O(n \cdot K)$. The total time complexity is $O\left(T \cdot M \cdot \frac{\sqrt M}{\log M} \cdot K + T \cdot M \cdot \log M + \sum_{i=0}^T n_i \cdot K\right).$
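The divisibility criterion of Step 2 is easy to verify directly for small candidate sets; a quadratic sketch for positive integers (function name is ours, intended for validation only):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Step 2 restated in code: a set is integer linearly independent iff for
// every i, the gcd of the other elements does not divide a_i.
// O(k^2) gcds -- only for checking small candidate sets.
bool is_good(const vector<ll>& a) {
    int k = a.size();
    if (k == 1) return true;  // a single element is always independent
    for (int i = 0; i < k; i++) {
        ll g = 0;
        for (int j = 0; j < k; j++)
            if (j != i) g = __gcd(g, a[j]);
        if (a[i] % g == 0) return false;  // a_i is reachable via Bezout
    }
    return true;
}
```

For example, $\{6, 10, 15\}$ is good: each element escapes the gcd of the other two.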
[ "brute force", "dfs and similar", "dp", "number theory" ]
3,300
/// In the name of God the most beneficent the most merciful #pragma GCC optimize("Ofast,no-stack-protector,unroll-loops,fast-math,O3") #include <bits/stdc++.h> using namespace std; typedef long long ll; constexpr int T = 100; constexpr int M = 100001; constexpr int SQM = 320; constexpr int LGM = 20; vector<pair<int,int>> factor; int t, n[T], count_multipliers[T][M]; bitset<M> is_composite; vector<int> ans[T], a[T]; inline void calculate_importants() { for(int i = 2; i < SQM; i++) if(!is_composite[i]) { for(int j = i; j < M; j *= i) factor.push_back({j,i}); for(int j = i*i; j < M; j += i) is_composite.set(j); } for(int i = SQM; i < M; i++) if(!is_composite[i]) factor.push_back({i,i}); sort(factor.begin(), factor.end()); } void check(vector<int> &factors, int G) { if(factors.size() > 2u) { for(int i = 0; i < t; i++) if(ans[i].size() < factors.size()) { int count_product = (G < M? count_multipliers[i][G] : 0); bool can = true; for(auto u: factors) if(count_multipliers[i][G/factor[u].first] == count_product) { can = false; break; } if(can) ans[i] = factors; } } int bound = (factors.size() == 1 ? 
SQM : M); if(1LL*G/factor[factors[0]].first*factor[factors.back()].first > bound) return; for(int new_factor = factors.back(); G/factor[factors[0]].first*factor[new_factor].first <= bound; new_factor++) if(G%factor[new_factor].second) { factors.push_back(new_factor); check(factors, G*factor[new_factor].first); factors.pop_back(); } } int main() { ios_base :: sync_with_stdio(false); cin.tie(nullptr); calculate_importants(); cin >> t; for(int i = 0; i < t; i++) { cin >> n[i]; a[i].resize(n[i]); for(int j = 0; j < n[i]; j++) { cin >> a[i][j]; count_multipliers[i][a[i][j]]++; } ans[i] = {a[i][0]}; sort(a[i].begin(), a[i].begin()+min(n[i], LGM)); for(int c = 0; c+1 < n[i]; c++) if(a[i][c+1]%a[i][c]) { ans[i] = {a[i][c], a[i][c+1]}; break; } for(int c = 1; c < M; c++) for(int j = c+c; j < M; j += c) count_multipliers[i][c] += count_multipliers[i][j]; } for(int i = 0; factor[i].first < SQM; i++) { vector<int> starter = {i}; check(starter, factor[i].first); } for(int i = 0; i < t; i++) { int k = ans[i].size(); cout << k << '\n'; if(k == 1u) { cout << ans[i][0] << '\n'; } else if(k == 2u) { cout << ans[i][0] << ' ' << ans[i][1] << '\n'; } else { int subset[k]; for(auto u: a[i]) { int ls = -1; for(int j = 0; j < (int)k; j++) if(u%factor[ans[i][j]].first) ls = (ls == -1? j: -2); if(ls >= 0) subset[ls] = u; } for(int j = 0; j < k; j++) cout << subset[j] << (j+1 == k? '\n' : ' '); } } return 0; } /// Thank God . . .
2035
A
Sliding
\begin{quote} Red was ejected. They were not the imposter. \end{quote} There are $n$ rows of $m$ people. Let the position in the $r$-th row and the $c$-th column be denoted by $(r, c)$. Number each person starting from $1$ in row-major order, i.e., the person numbered $(r-1)\cdot m+c$ is initially at $(r,c)$. The person at $(r, c)$ decides to leave. To fill the gap, let the person who left be numbered $i$. Each person numbered $j>i$ will move to the position where the person numbered $j-1$ is initially at. The following diagram illustrates the case where $n=2$, $m=3$, $r=1$, and $c=2$. Calculate the sum of the Manhattan distances of each person's movement. If a person was initially at $(r_0, c_0)$ and then moved to $(r_1, c_1)$, the Manhattan distance is $|r_0-r_1|+|c_0-c_1|$.
The people with a smaller row-major number won't move at all, so we can ignore them. We can break everyone else into $2$ groups: the first group changes rows, while the second group stays in the same row. Each person changing rows moves a Manhattan distance of $m$ (one row up, and from the first column to the last), and there are $n - r$ such people. Each person in the second group moves a Manhattan distance of $1$, and there are $n \cdot m - ((r - 1) \cdot m + c) - (n - r)$ such people. Thus, the final answer is: $m \cdot (n - r) + n \cdot m - ((r - 1) \cdot m + c) - (n - r)$
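The closed form can be cross-checked against a direct simulation of the movement; a small sketch (function names are ours):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Closed-form answer from the editorial.
ll formula(ll n, ll m, ll r, ll c) {
    return m * (n - r) + n * m - ((r - 1) * m + c) - (n - r);
}

// Direct simulation: person j > i moves to the initial seat of person j - 1.
ll brute(ll n, ll m, ll r, ll c) {
    ll i = (r - 1) * m + c, ans = 0;
    for (ll j = i + 1; j <= n * m; j++) {
        ll r0 = (j - 1) / m, c0 = (j - 1) % m;  // initial seat of j
        ll r1 = (j - 2) / m, c1 = (j - 2) % m;  // initial seat of j - 1
        ans += llabs(r0 - r1) + llabs(c0 - c1);
    }
    return ans;
}
```

The statement's example ($n=2$, $m=3$, $r=1$, $c=2$) gives $6$ under both.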
[ "implementation", "math" ]
800
#include <bits/stdc++.h> using namespace std; using int64 = long long; int main() { ios::sync_with_stdio(0); cin.tie(0); int t; cin >> t; while (t--) { int64 n, m, r, c; cin >> n >> m >> r >> c; cout << (n - r) * (m - 1) + n * m - (r - 1) * m - c << endl; } }
2035
B
Everyone Loves Tres
\begin{quote} There are 3 heroes and 3 villains, so 6 people in total. \end{quote} Given a positive integer $n$. Find the \textbf{smallest} integer whose decimal representation has length $n$ and consists only of $3$s and $6$s such that it is divisible by both $33$ and $66$. If no such integer exists, print $-1$.
If we have the lexicographically smallest valid number for some $n$, we can extend it to the lexicographically smallest valid number for $n+2$ by prepending $33$. This is because minimizing the leading digits always outweighs any choice of the digits behind them; in other words, $3666xx$ is always better than $6333xx$. We can observe that $66$ is the lexicographically smallest number for $n = 2$, that $n = 1$ and $n = 3$ are impossible, and that $36366$ is the lexicographically smallest number for $n = 5$. So our final answer is: $n-2$ $3$'s followed by $66$ if $n$ is even; $-1$ if $n = 1$ or $n = 3$; or $n-5$ $3$'s followed by $36366$ if $n$ is odd and at least $5$. For instance, for $n = 8$ the answer is $33333366$, and for $n = 9$ the answer is $333336366$.
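A small sketch that builds the claimed answer and verifies divisibility by $33$ and $66$ digit by digit, since the numbers are too long for 64-bit integers (helper names are ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Build the editorial's answer as a digit string, or "-1" if impossible.
string answer(int n) {
    if (n == 1 || n == 3) return "-1";
    if (n % 2 == 0) return string(n - 2, '3') + "66";
    return string(n - 5, '3') + "36366";
}

// Remainder of a decimal string modulo m, computed digit by digit.
int mod_of(const string& s, int m) {
    int r = 0;
    for (char d : s) r = (r * 10 + (d - '0')) % m;
    return r;
}
```
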
[ "constructive algorithms", "greedy", "math", "number theory" ]
900
#include <iostream> using namespace std; int main(){ int T; cin >> T; while(T--){ int n; cin >> n; if(n == 1 || n == 3){ cout << "-1\n"; }else if(n%2 == 0){ for(int i = 0; i<n-2; i++){ cout << "3"; } cout << "66\n"; }else{ for(int i = 0; i<n-5; i++){ cout << "3"; } cout << "36366\n"; } } }
2035
C
Alya and Permutation
Alya has been given a hard problem. Unfortunately, she is too busy running for student council. Please solve this problem for her. Given an integer $n$, construct a permutation $p$ of integers $1, 2, \ldots, n$ that maximizes the value of $k$ (which is initially $0$) after the following process. Perform $n$ operations, on the $i$-th operation ($i=1, 2, \dots, n$), - If $i$ is odd, $k=k\,\&\,p_i$, where $\&$ denotes the bitwise AND operation. - If $i$ is even, $k=k\,|\,p_i$, where $|$ denotes the bitwise OR operation.
We can make $k$ what it needs to be using at most the last $5$ numbers of the permutation; every other element can be assigned arbitrarily. We split this into several cases. Case 1: $n$ is odd. The last operation is a bitwise AND, and the AND of $k$ with the last element (which is at most $n$) is at most $n$, so the final $k$ is at most $n$; it is always possible to achieve exactly $n$. Let $l$ be the lowest bit of $n$. We can set the last $4$ numbers to be: $l, l + (l == 1 ? 2 : 1), n - l, n$ After the first $2$, $k$ will at least have the bit $l$. After the third one, $k$ will contain all bits of $n$. The bitwise AND of $k$ with the last element $n$ will then be exactly $n$. Case 2: $n$ is even. Now the last operation is a bitwise OR, so we can do better than $n$ here. The maximum possible $k$ is the bitwise OR of every number from $1$ to $n$. Case 2a: $n$ is not a power of $2$. Let $h$ be the value of the highest bit of $n$. We can set the last $3$ numbers to: $n, n - 1, h - 1$ After the first $2$, $k$ will contain the highest bit of $n$. After OR-ing with the third element, it will have all bits. Case 2b: $n$ is a power of $2$. We can set the last $5$ numbers to $1, 3, n - 2, n - 1, n$
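The process itself is cheap to simulate, which makes it easy to sanity-check the constructions above (the ordering of the unconstrained prefix in the examples is our own choice):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Run the n operations on permutation p (p[0] is the 1st element):
// odd-numbered steps apply AND, even-numbered steps apply OR.
ll final_k(const vector<ll>& p) {
    ll k = 0;
    for (size_t i = 0; i < p.size(); i++)
        k = (i % 2 == 0) ? (k & p[i]) : (k | p[i]);
    return k;
}
```

For $n = 5$ (odd, $l = 1$) the suffix $1, 3, 4, 5$ yields $k = 5 = n$; for $n = 6$ (even, $h = 4$) the suffix $6, 5, 3$ yields $k = 7 = 2h - 1$.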
[ "bitmasks", "constructive algorithms", "math" ]
1,400
#include <bits/stdc++.h> using namespace std; using vi = vector<int>; #define FOR(i, a, b) for (int i = (a); i < (b); i++) int main() { ios::sync_with_stdio(0); cin.tie(0); int t; cin >> t; while (t--) { int n; cin >> n; set<int> s; FOR(i, 1, n) s.insert(i); vi a(n + 1); int po2 = 1; while (po2 * 2 <= n) po2 *= 2; if (n & 1) { cout << n << endl; int low = n & (-n); a[n - 3] = low, a[n - 2] = low + (low == 1 ? 2 : 1), a[n - 1] = n - low, a[n] = n; } else { cout << po2 * 2 - 1 << endl; if (n == po2) { a[n - 4] = 1, a[n - 3] = 3, a[n - 2] = n - 2, a[n - 1] = n - 1, a[n] = n; } else { a[n - 2] = n, a[n - 1] = n - 1, a[n] = po2 - 1; } } FOR(i, 1, n + 1) s.erase(a[i]); FOR(i, 1, n + 1) if (!a[i]) a[i] = *s.begin(), s.erase(a[i]); FOR(i, 1, n + 1) cout << a[i] << " "; cout << endl; } }
2035
D
Yet Another Real Number Problem
\begin{quote} Three r there are's in strawberry. \end{quote} You are given an array $b$ of length $m$. You can perform the following operation any number of times (possibly zero): - Choose two distinct indices $i$ and $j$ \textbf{where} $\bf{1\le i < j\le m}$ and $b_i$ is even, divide $b_i$ by $2$ and multiply $b_j$ by $2$. Your task is to maximize the sum of the array after performing any number of such operations. Since it could be large, output this sum modulo $10^9+7$. Since this problem is too easy, you are given an array $a$ of length $n$ and need to solve the problem for each prefix of $a$. In other words, denoting the maximum sum of $b$ after performing any number of such operations as $f(b)$, you need to output $f([a_1])$, $f([a_1,a_2])$, $\ldots$, $f([a_1,a_2,\ldots,a_n])$ modulo $10^9+7$ respectively.
Consider how to solve the problem for an entire array. We iterate backwards over the array, and each $a_i$ gives all of its factors of $2$ to the largest $a_j$ with $j > i$ that exceeds $a_i$; if $a_i$ is itself the largest, we do nothing. To do this for every prefix $x$, notice that once we have iterated backwards from $x$ over $31$ factors of $2$, the largest element will always exceed $10^9$, and thus all remaining factors of $2$ will be given to that largest element. Thus, we can iterate over these $31$ factors of $2$ and assign them manually, then multiply the largest element by $2$ raised to the number of factors remaining. Hence, if we maintain the previous $31$ even numbers for every prefix, we can solve the problem in $O(n \log a_i)$.
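A tiny helper in the spirit of the solution: strip each element into its odd part and its count of factors of $2$ (these are exactly what the operation moves around). The $31$ bound comes from $2^{31} > 10^9$. The name `split_twos` is ours:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Decompose v = odd * 2^e; the e factors of 2 are what the operation
// can hand off to a later element.
pair<ll, int> split_twos(ll v) {
    int e = 0;
    while (v % 2 == 0) { v /= 2; e++; }
    return {v, e};
}
```
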
[ "binary search", "data structures", "divide and conquer", "greedy", "implementation", "math" ]
1,800
#include <bits/stdc++.h> using namespace std; #ifdef DEBUG #include "debug.hpp" #else #define debug(...) (void)0 #endif using i64 = int64_t; using u64 = uint64_t; constexpr bool test = false; constexpr i64 mod = 1000000007; int main() { cin.tie(nullptr)->sync_with_stdio(false); int t; cin >> t; for (int ti = 0; ti < t; ti += 1) { int n; cin >> n; vector<pair<i64, i64>> stack; auto pow = [&](i64 a, i64 r) { i64 res = 1; for (; r; r >>= 1, a = a * a % mod) if (r & 1) res = res * a % mod; return res; }; i64 sum = 0; for (int i = 0, ai; i < n; i += 1) { cin >> ai; i64 r = countr_zero(u64(ai)), a = ai >> r; while (not stack.empty()) { if (r >= 30 or stack.back().first <= (a << r)) { r += stack.back().second; sum += stack.back().first; stack.pop_back(); } else { break; } } if (r == 0) { sum += a; } else { stack.emplace_back(a, r); } i64 res = sum; for (auto [a, r] : stack) res += pow(2, r) * a % mod; cout << res % mod << " "; } cout << "\n"; } }
2035
E
Monster
\begin{quote} Man, this Genshin boss is so hard. Good thing they have a top-up of $6$ coins for only $ \$4.99$. I should be careful and spend no more than I need to, lest my mom catches me... \end{quote} You are fighting a monster with $z$ health using a weapon with $d$ damage. Initially, $d=0$. You can perform the following operations. - Increase $d$ — the damage of your weapon by $1$, costing $x$ coins. - Attack the monster, dealing $d$ damage and costing $y$ coins. You cannot perform the first operation for more than $k$ times in a row. Find the minimum number of coins needed to defeat the monster by dealing at least $z$ damage.
Let $\text{damage}(a, b)$ represent the maximum damage dealt by performing the first operation $a$ times and the second operation $b$ times. Since we should always increase damage if possible, the optimal strategy will be the following: increase damage by $k$, deal damage, increase damage by $k$, deal damage, and so on, until we run out of the first or second operation. Let's treat each group of $k$ increases and one attack as one block. Specifically, we will have $c = \min(\lfloor \frac{a}{k} \rfloor, b)$ blocks. Afterwards, we will increase damage $a\bmod k$ more times (making our weapon's damage $a$) and use our remaining $b - c$ attacks. Hence, $\text{damage}(a, b) = k\cdot \frac{c\cdot (c + 1)}{2} + a\cdot (b - c)$ where $c = \min(\lfloor \frac{a}{k} \rfloor, b)$. The cost of this is $\text{cost}(a, b) = a\cdot x + b\cdot y$. For a fixed $a$, since $\text{damage}(a, b)$ is non-decreasing in $b$, we can binary search the smallest $b = \text{opt\_b}(a)$ where $\text{damage}(a, b)\geq z$. Similarly, for a fixed $b$, we can binary search the smallest $a = \text{opt\_a}(b)$ where $\text{damage}(a, b)\geq z$. Now, the issue is that we can't evaluate all $z$ possible values of $a$ or $b$. However, we can prove that for $(a, b)$ where all operations are useful (when we end with an attack), $\text{damage}(a, b) > \frac{a\cdot b}{2}$: the lower bound of $\text{damage}(a, b)$ is attained at $k = 1$ (when we are limited the most), and since we end with an attack, $a\leq b$, so $c = \min(a, b) = a$. Now, $\text{damage}(a, b) = \frac{a\cdot (a + 1)}{2} + a\cdot(b - a) > \frac{a^2}{2} + a\cdot b - a^2 = a\cdot b - \frac{a^2}{2} \geq \frac{a\cdot b}{2}$, using $b \geq a$. Consequently, any pair with $a\cdot b\geq 2\cdot z$ already deals more than $z$ damage, so an optimal pair never needs both $a > \sqrt{2\cdot z}$ and $b > \sqrt{2\cdot z}$; that is, $\min(a, b)\leq \sqrt{2\cdot z}$. Thus, it suffices to check all fixed $a$ where $a\leq \sqrt{2\cdot z}$ and all fixed $b$ where $b\leq \sqrt{2\cdot z}$.
Alternatively, we can binary search the smallest $x$ where $x\geq \text{opt\_b}(x)$ and check all $a\leq x$ and $b\leq \text{opt\_b}(x)$. Our final solution runs in $O(\sqrt z\log z)$ time.
[ "binary search", "brute force", "constructive algorithms", "greedy", "implementation", "math", "ternary search" ]
2,300
#include <bits/stdc++.h> using namespace std; using i64 = long long; i64 solve(int x, int y, int z, int k) { auto cost = [&](int a, int b) { return (i64) a * x + (i64) b * y; }; auto damage = [&](int a, int b) { int c = min(b, a / k); return (i64) k * c * (c + 1) / 2 + (i64) (b - c) * a; }; auto opt_a = [&](int b) { int low = 1, hi = z + 1; while (low < hi) { int a = (low + hi) / 2; damage(a, b) >= z ? hi = a : low = a + 1; } return low; }; auto opt_b = [&](int a) { int low = 1, hi = z + 1; while (low < hi) { int b = (low + hi) / 2; damage(a, b) >= z ? hi = b : low = b + 1; } return low; }; int low = 0, hi = z; while (low < hi) { int a = (low + hi) / 2; a >= opt_b(a) ? hi = a : low = a + 1; } i64 ans = LLONG_MAX; assert(low > 0 && low <= z); for (int a = 1; a <= low; a++) { int b = opt_b(a); if (b <= z) { ans = min(ans, cost(a, b)); } } for (int b = 1; b <= low; b++) { int a = opt_a(b); if (a <= z) { ans = min(ans, cost(a, b)); } } return ans; } int main() { int T; cin >> T; while (T--) { int x, y, z, k; cin >> x >> y >> z >> k; cout << solve(x, y, z, k) << '\n'; } }
2035
F
Tree Operations
\begin{quote} This really says a lot about our society. \end{quote} One day, a turtle gives you a tree with $n$ nodes rooted at node $x$. Each node has an initial nonnegative value; the $i$-th node has starting value $a_i$. You want to make the values of all nodes equal to $0$. To do so, you will perform a series of operations on the tree, where each operation will be performed on a certain node. Define an operation on node $u$ as choosing a single node in $u$'s subtree$^{\text{∗}}$ and incrementing or decrementing its value by $1$. The order in which operations are performed on nodes is as follows: - For $1 \le i \le n$, the $i$-th operation will be performed on node $i$. - For $i > n$, the $i$-th operation will be performed on the same node as operation $i - n$. More formally, the $i$-th operation will be performed on the $(((i - 1) \bmod n) + 1)$-th node.$^{\text{†}}$ Note that you cannot skip over operations; that is, you cannot perform the $i$-th operation without first performing operations $1, 2, \ldots, i - 1$. Find the minimum number of operations you must perform before you can make the values of all nodes equal to $0$, assuming you pick operations optimally. If it's impossible to make the values of all nodes equal to $0$ after finite operations, output $-1$. \begin{footnotesize} $^{\text{∗}}$The subtree of a node $u$ is the set of nodes for which $u$ lies on the shortest path from this node to the root, including $u$ itself. $^{\text{†}}$Here, $a \bmod b$ denotes the remainder from dividing $a$ by $b$. \end{footnotesize}
The key observation is that if we can make all nodes equal to $0$ using $t$ operations, we can also do it in $t + 2n$ operations by increasing every node by $1$ and then decreasing every node by $1$. This motivates a binary search solution. For each $i$ from $0$ to $2n - 1$, binary search on the set $\{x \in \mathbb{Z} \mid x \equiv i \pmod{2n}\}$ for the first time we can make all nodes $0$. Our answer is then the minimum value over all these binary searches. For a constant-factor speedup, note that we only need to check the values of $i$ whose parity equals the parity of $\sum{a_i}$. All that remains is a suitably fast check function for determining whether we can solve the tree using a fixed number of operations. This can be done with dynamic programming in $O(n)$ time. Suppose we fix the number of operations as $x$. We can then calculate for each node in $O(1)$ time how many operations it can perform on its subtree: for node $i$, this is $\text{has}_i = \lfloor \frac{x}{n} \rfloor + [i \leq (x \bmod n)]$. Define $\text{dp}[i]$ as the number of operations the subtree of $i$ still needs from its ancestors to become all $0$: $\text{dp}[i] = (\sum_{j\in\text{child}(i)}\text{dp}[j]) + a_i - \text{has}_i$. If this value is negative, set $\text{dp}[i] = (-\text{dp}[i]) \bmod 2$, since surplus operations can only be wasted in pairs. Finally, the tree is solvable for a fixed $x$ iff $\text{dp}[\text{root}] = 0$. This solution can actually be improved to $O(n\log(a_i) + n^2\log(n))$ by improving the bounds of each binary search we perform. The improved solution also serves as a proof that it is always possible to make all nodes equal to $0$ after finitely many operations. Consider first the easier subtask where the goal is to make all values of the tree $\le 0$. 
The minimum number of operations for this subtask can be binary searched directly; the only change to the check function is that a non-positive $\text{dp}[i]$ is simply clamped to $0$. As such, this subtask can be solved in $O(n\log(a_i))$, and its answer is clearly a lower bound on the real answer. Denote this number of operations by $t$. Claim: the tree is solvable within $t + n^2 + 2n + 1$ operations. Proof: after using $t$ operations, we can reduce all values in the tree to either $1$ or $0$, depending on the parity of the extra operations used on each node. We now want to make all nodes hold the same number. Consider running through $n$ operations with each node applying its operation to itself; the parity of every node flips. We will fix the parity of one node per $n$ operations (that is, while all other nodes change parity, one node's parity stays the same). We accomplish this by having the root change the parity of the node whose parity we want to maintain, while every other node applies its operation to itself. After $n^2$ operations, all nodes (except possibly the root) have the same parity. We can fix the parity of the root with at most $n + 1$ extra operations: have every node toggle its own parity, except the root toggles the parity of the node that comes directly after it; that node then fixes itself with one more operation, and all nodes have the same parity. If all the nodes are equal to $1$, we use $n$ further operations to toggle every parity and make all nodes equal to $0$. With this bound of $t + n^2 + 2n + 1$, we simply adjust the upper and lower bounds of each binary search in the $O(n^2\log(a_i))$ solution, resulting in an $O(n\log(a_i) + n^2\log(n))$ solution.
[ "binary search", "brute force", "dfs and similar", "dp", "trees" ]
2,500
#include <bits/stdc++.h> using namespace std; #ifdef DEBUG #include "debug.hpp" #else #define debug(...) (void)0 #endif using i64 = int64_t; int main() { cin.tie(nullptr)->sync_with_stdio(false); int t; cin >> t; for (int ti = 0; ti < t; ti += 1) { int n, x; cin >> n >> x; vector<int> a(n + 1); for (int i = 1; i <= n; i += 1) cin >> a[i]; vector<vector<int>> adj(n + 1); for (int i = 1, u, v; i < n; i += 1) { cin >> u >> v; adj[u].push_back(v); adj[v].push_back(u); } vector<int> p(n + 1), o; auto rec = [&](auto& rec, int u) -> void { for (int v : adj[u]) if (v != p[u]) { p[v] = u; rec(rec, v); } o.push_back(u); }; rec(rec, x); vector<int> can_use(n); auto dfs = [&](auto&& self, int cur, int par) -> long long { long long need = a[cur]; for (auto to : adj[cur]) { if (to == par) continue; need += self(self, to, cur); } if (need >= can_use[cur - 1]) { return need - can_use[cur - 1]; } else { return 0; } }; auto check = [&](long long operations) -> bool { for (int i = 0; i < n; i++) { can_use[i] = operations / n + (i < operations % n); } return dfs(dfs, x, x) == 0; }; long long lower, higher; { long long l = 0, r = *max_element(a.begin(), a.end()) * (long long)n; while (l < r) { long long m = l + (r - l) / 2; check(m) ? r = m : l = m + 1; } lower = l, higher = l + n * n; } long long ans = higher; for (int i = 0; i < 2 * n; i += 1) { vector<i64> b(n + 1); i64 pans = *ranges::partition_point(views::iota((lower - i) / (2 * n), (ans - i) / (2 * n) + 1), [&](i64 x) { for (int u : o) { i64 y = x * 2 + (u - 1 < i) + (u - 1 + n < i); b[u] = a[u]; for (int v : adj[u]) if (p[u] != v) b[u] += b[v]; if (y < b[u]) { b[u] = b[u] - y; } else { b[u] = (y - b[u]) % 2; } } return b[o.back()]; }); debug(i, pans); ans = min(ans, pans * (2 * n) + i); } cout << ans << "\n"; } }
2035
G1
Go Learn! (Easy Version)
\textbf{The differences between the easy and hard versions are the constraints on $n$ and the sum of $n$. In this version, $n \leq 3000$ and the sum of $n$ does not exceed $10^4$. You can only make hacks if both versions are solved.} Well, well, well, let's see how Bessie is managing her finances. She seems to be in the trenches! Fortunately, she is applying for a job at Moogle to resolve this issue. Moogle interviews require intensive knowledge of obscure algorithms and complex data structures, but Bessie received a tip-off from an LGM on exactly what she has to go learn. Bessie wrote the following code to binary search for a certain element $k$ in a possibly unsorted array $[a_1, a_2,\ldots,a_n]$ with $n$ elements. \begin{verbatim} let l = 1 let h = n while l < h: let m = floor((l + h) / 2) if a[m] < k: l = m + 1 else: h = m return l \end{verbatim} Bessie submitted her code to Farmer John's problem with $m$ ($1 \leq m \leq n$) tests. The $i$-th test is of the form $(x_i, k_i)$ ($1 \leq x, k \leq n$). It is guaranteed all the $x_i$ are distinct and all the $k_i$ are distinct. Test $i$ is correct if the following hold: - The $x_i$-th element in the array is $k_i$. - If Bessie calls the binary search as shown in the above code for $k_i$, it will return $x_i$. It might not be possible for all $m$ tests to be correct on the same array, so Farmer John will remove some of them so Bessie can AC. Let $r$ be the minimum of tests removed so that there exists an array $[a_1, a_2,\ldots,a_n]$ with $1 \leq a_i \leq n$ so that all remaining tests are correct. In addition to finding $r$, Farmer John wants you to count the number of arrays $[a_1, a_2,\ldots,a_n]$ with $1 \leq a_i \leq n$ such that there exists a way to remove exactly $r$ tests so that all the remaining tests are correct. Since this number may be very large, please find it modulo $998\,244\,353$.
Let's sort the tests by $x_i$. I claim a remaining subsequence of tests $t_1 \dots t_r$ is correct iff $k_{t_i} < k_{t_{i+1}}$ for all $1 \leq i < r$, and either $x_{t_1} = 1$ or $k_{t_1} \neq 1$. In other words, the set of tests is increasing, and if some test has $k_i = 1$, that test's $x_i$ must be $1$, because a binary search for the value $1$ always returns index $1$. Under these conditions, the array $a$ with $a_i = k_{t_j}$ if $i = x_{t_j}$ for some $j$, $a_i = 1$ if $i < x_{t_1}$, and $a_i = a_{i-1}$ otherwise is valid for all of these tests: the binary search behaves normally and correctly lower-bounds each of those occurrences. Suppose instead the tests were not increasing, and consider two indices $i$ and $j$ such that $x_i < x_j$ but $k_i > k_j$. Consider the $l$, $r$, and $m$ in the binary search where $l \leq x_i \leq m$ and $m+1 \leq x_j \leq r$. For test $i$ to search left, $a_m$ must be $\geq k_i$, but test $j$ has to search right, so $a_m$ must be $< k_j$. Then we need $k_i \leq a_m < k_j$, contradicting $k_i > k_j$. $\Box$ Suppose $r$ is the minimum number of tests removed. I claim that for each valid array $a_1 \dots a_n$ there is exactly one set of $r$ tests whose removal makes $a_1 \dots a_n$ valid: if there were two different choices of the $m-r$ tests to keep, then $a$ would actually satisfy at least $m-r+1$ tests simultaneously, so $r$ would not be minimal. Putting these two statements together, we can formulate an $O(m^2 \log n)$ dp. Let $\text{touch}(i)$ denote the set of indices $m$ such that a binary search that terminates at position $i$ checks $a_m$ at some point. Also, let $k_i$ be the $k$ of the test with $x = i$; if no such test exists, $k_i$ is undefined. 
Let $\text{dp}_i$ be a tuple $(x, y)$ where $x$ is the length of the longest increasing set of tests whose last test has $x_j = i$, and $y$ is the number of ways to fill the prefix $a_1 \dots a_{i}$ so that some such choice of $x$ kept tests is satisfied. Note that $\text{dp}_i$ is not defined when there is no test with $x_j = i$. Also, if $i \neq 1$ and $k_i = 1$, then $\text{dp}_i = (0, 0)$. Consider computing some $\text{dp}_i$. Let $mx = \max\limits_{j < i, k_j < k_i} \text{dp}_j.x$. Then $\text{dp}_i = (mx+1, \sum\limits_{j < i, k_j < k_i, \text{dp}_j.x = mx} \text{cnt}(j, i)\cdot \text{dp}_j.y)$, where $\text{cnt}(j, i)$ is the number of ways to fill $a_j \dots a_i$ if both the test with $x = j$ and the test with $x = i$ are kept. Most of the elements between $j$ and $i$ are free, and may be assigned any value from $1 \dots n$; let's determine which (and how many) are not free. Consider all indices $m$ such that $m\in\text{touch}(i)$, $m\in\text{touch}(j)$, and $j < m < i$. Let there be $b$ such $m$; these elements are forced to take values $k_j \leq v < k_i$. Then consider all $m$ such that $m \in \text{touch}(j)$, $m \notin \text{touch}(i)$, and $j < m < i$. Let there be $g$ such $m$; these elements are forced to take values $k_j \leq v$. Finally, consider all $m$ such that $m \in \text{touch}(i)$, $m \notin \text{touch}(j)$, and $j < m < i$. Let there be $l$ such $m$; these elements are forced to take values $v < k_i$. Then $\text{cnt}(j, i) = n^{i - j - 1 - (b + g + l)} (k_i - k_j)^b (n - k_j + 1)^g (k_i - 1)^l$. After we have found all the $\text{dp}$'s, we can try each test as the last one kept in the entire set, and account for the elements in the suffix that have their value forced (relative to $i$). We can precompute $\text{touch}(i)$ or compute it on the fly in $O(\log n)$ time per $i$. We can also choose to use binpow or precompute powers. 
As we have $m$ states with up to $m$ transitions each, each transition taking $O(\log n)$ time, our final complexity is $O(m^2 \log n)$.
[ "dp", "trees" ]
3,300
#include <bits/stdc++.h> using namespace std; #define ll long long #define pii pair<int, int> #define f first #define s second const int mx = 3005, md = 998244353; int n, m, t, A[mx]; pii ans; vector<int> touch[mx]; vector<pii> tests; pii dp[mx]; pii comb(pii a, pii b){ return a.f == b.f ? make_pair(a.f, (a.s + b.s) % md) : max(a, b); } ll bpow(ll a, int p){ ll ans = 1; for (;p; p /= 2, a = (a * a) % md) if (p & 1) ans = (ans * a) % md; return ans; } vector<int> findTouch(int idx){ int l = 1, h = n; vector<int> ret; while (l < h){ int m = (l + h) / 2; ret.push_back(m); if (m < idx) l = m + 1; else h = m; } return ret; } int main(){ ios_base::sync_with_stdio(0); cin.tie(0); int T; cin >> T; while(T--){ cin >> n >> m; for(int i = 1; i<=n; i++){ A[i] = 0; touch[i].clear(); dp[i] = {0, 0}; } tests.clear(); for (int i = 1; i <= m; i++){ int idx, k; cin >> idx >> k; A[idx] = k; } ans = {0, bpow(n, n)}; for (int i = 1; i <= n; i++){ if (!A[i]) continue; touch[i] = findTouch(i); int le = 0, ge = 0; int mark[n + 1] = {}; for (int pos : touch[i]){ mark[pos] = true; le += pos < i; ge += pos > i; } dp[i] = make_pair(1, bpow(A[i] - 1, le) * bpow(n, i - 1 - le) % md); // Corner case -- not possible to have A[i] = 1 if (i > 1 and A[i] == 1) dp[i] = {0, 0}; for (int j = 1; j < i; j++){ if (!A[j] or A[j] > A[i]) continue; int geqj = 0; int lei = 0; int both = 0; int none = i - j - 1; for (int pos : touch[i]){ if (pos > j and pos < i){ lei++; none--; } } for (int pos : touch[j]){ if (pos > j and pos < i){ if (mark[pos]){ both++; lei--; } else{ geqj++; none--; } } } int ways = dp[j].s * bpow(n, none) % md * bpow(n - A[j] + 1, geqj) % md * bpow(A[i] - 1, lei) % md * bpow(A[i] - A[j], both) % md; dp[i] = comb(dp[i], make_pair(dp[j].f + 1, ways)); } ans = comb(ans, make_pair(dp[i].f, dp[i].s * bpow(n - A[i] + 1, ge) % md * bpow(n, n - i - ge) % md)); } cout<<m - ans.f<<" "<<ans.s<<"\n"; } }
2035
G2
Go Learn! (Hard Version)
\textbf{The differences between the easy and hard versions are the constraints on $n$ and the sum of $n$. In this version, $n \leq 3\cdot 10^5$ and the sum of $n$ does not exceed $10^6$. You can only make hacks if both versions are solved.} Well, well, well, let's see how Bessie is managing her finances. She seems to be in the trenches! Fortunately, she is applying for a job at Moogle to resolve this issue. Moogle interviews require intensive knowledge of obscure algorithms and complex data structures, but Bessie received a tip-off from an LGM on exactly what she has to go learn. Bessie wrote the following code to binary search for a certain element $k$ in a possibly unsorted array $[a_1, a_2,\ldots,a_n]$ with $n$ elements. \begin{verbatim} let l = 1 let h = n while l < h: let m = floor((l + h) / 2) if a[m] < k: l = m + 1 else: h = m return l \end{verbatim} Bessie submitted her code to Farmer John's problem with $m$ ($1 \leq m \leq n$) tests. The $i$-th test is of the form $(x_i, k_i)$ ($1 \leq x, k \leq n$). It is guaranteed all the $x_i$ are distinct and all the $k_i$ are distinct. Test $i$ is correct if the following hold: - The $x_i$-th element in the array is $k_i$. - If Bessie calls the binary search as shown in the above code for $k_i$, it will return $x_i$. It might not be possible for all $m$ tests to be correct on the same array, so Farmer John will remove some of them so Bessie can AC. Let $r$ be the minimum of tests removed so that there exists an array $[a_1, a_2,\ldots,a_n]$ with $1 \leq a_i \leq n$ so that all remaining tests are correct. In addition to finding $r$, Farmer John wants you to count the number of arrays $[a_1, a_2,\ldots,a_n]$ with $1 \leq a_i \leq n$ such that there exists a way to remove exactly $r$ tests so that all the remaining tests are correct. Since this number may be very large, please find it modulo $998\,244\,353$.
Read the editorial for G1 first; we will build off of it. The most important observation is that the exponent $b$ defined there is at most $1$. Consider the binary searches for $j$ and $i$: at the first index $m$ that lies in both $\text{touch}(j)$ and $\text{touch}(i)$ and between $j$ and $i$, the search must go left for $j$ and go right for $i$, so the two searches diverge and no later index can occur in both $\text{touch}(i)$ and $\text{touch}(j)$. The only time such an $m$ doesn't exist is when $m = i$ or $m = j$. We call this $m$ the LCA of $i$ and $j$. We can also observe that the indices in $\text{touch}(i)$ but not in $\text{touch}(j)$ lie between $m$ and $i$, and similarly, the indices in $\text{touch}(j)$ but not in $\text{touch}(i)$ lie between $j$ and $m$. We can therefore group all transitions from $j$ to $i$ that share the same LCA and process them simultaneously. We can break our dp into parts depending only on $i$ and only on $j$; only the factor $(k_i - k_j)^b$ depends on both, and since $b \leq 1$ we can account for it by storing $\text{dp}_j \cdot (n - k_j + 1)^g$ and $k_j\cdot \text{dp}_j \cdot (n - k_j + 1)^g$. A little extra casework is needed when $j$ or $i$ is itself the LCA. Then for each $i$, we iterate over all possible LCAs and perform all candidate $j$ transitions at once. To ensure our dp only makes valid transitions, we insert the $i$ in increasing order of $k_i$. For each $i$, there are $O(\log n)$ transitions, making the complexity $O(m \log n)$. Depending on the precomputation used and constant factors, $O(m \log^2 n)$ might pass as well.
[ "divide and conquer", "dp" ]
3,500
//maomao and hanekawa will carry me to red //#pragma GCC target("avx2,bmi,bmi2,lzcnt,popcnt") #include <iostream> #include <iomanip> #include <cmath> #include <utility> #include <cassert> #include <algorithm> #include <vector> #include <array> #include <cstring> #include <functional> #include <numeric> #include <set> #include <queue> #include <map> #include <chrono> #include <random> #define sz(x) ((int)(x.size())) #define all(x) x.begin(), x.end() #define pb push_back #define eb emplace_back #define kill(x, s) {if(x){ cout << s << "\n"; return ; }} #ifndef LOCAL #define cerr while(0) cerr #endif using ll = long long; using lb = long double; const lb eps = 1e-9; //const ll mod = 1e9 + 7, ll_max = 1e18; const ll mod = (1 << (23)) * 119 +1, ll_max = 1e18; const int MX = 3e5 +10, int_max = 0x3f3f3f3f; struct { template<class T> operator T() { T x; std::cin >> x; return x; } } in; using namespace std; //ritwin/jeroen orz template <int M> struct Modint { int v; Modint() : v(0) {} template <typename T> Modint(T x) : v(x%M) { if (v < 0) v += M; } Modint operator+ () { return *this; } Modint operator- () { return Modint() - *this; } Modint& operator++() { if (++v == M) v = 0; return *this; } Modint& operator--() { if (--v < 0) v = M-1; return *this; } Modint& operator++(int) { Modint r = *this; ++*this; return r; } Modint& operator--(int) { Modint r = *this; --*this; return r; } Modint& operator+=(const Modint &r) { if ((v += r.v) >= M) v -= M; return *this; } Modint& operator-=(const Modint &r) { if ((v -= r.v) < 0) v += M; return *this; } Modint& operator*=(const Modint &r) { v = (ll) v * r.v % M; return *this; } friend Modint operator+(const Modint &a, const Modint &b) { return Modint(a) += b; } friend Modint operator-(const Modint &a, const Modint &b) { return Modint(a) -= b; } friend Modint operator*(const Modint &a, const Modint &b) { return Modint(a) *= b; } friend bool operator==(const Modint &a, const Modint &b) { return a.v == b.v; } friend bool operator!=(const 
Modint &a, const Modint &b) { return a.v != b.v; } }; using mint = Modint<mod>; #define info pair<int, mint> int n, m; int arr[MX]; info dp[MX]; info prec[2][MX]; mint pown[MX]; info comb(info a, info b){ int c = max(a.first, b.first); return pair(c, (a.first == c)*a.second + (b.first == c)*b.second); } mint binpow(ll base, ll b = mod-2){ mint ans = 1; mint a = base; for(int i = 1; i<=b; i++){ if(i&b) ans *= a; a *= a; } return ans; } vector<pair<int, int>> touch(int i){ vector<pair<int, int>> ret; int l = 1, h = n; while(l < h){ int mid = (l + h)/2; ret.pb({mid, i>=mid+1}); if(i <= mid) h = mid; else l = mid+1; } reverse(all(ret)); return ret; } void update(int j){ auto p = touch(j); int right = 0; mint pj = 1; for(auto [lca, dir] : p){ if(lca < j) continue; if(dir == 0){ //there are "right" things that are >= j //there are lca-j-1-right things that free mint ways = dp[j].second*pown[lca-j-right-(j != lca)]*pj; cerr << lca << " " << ways.v << "\n"; if(j != lca){ prec[0][lca] = comb(prec[0][lca], pair(dp[j].first, ways)); prec[1][lca] = comb(prec[1][lca], pair(dp[j].first, ways*arr[j])); }else{ prec[0][lca] = comb(prec[0][lca], pair(dp[j].first, 0)); prec[1][lca] = comb(prec[1][lca], pair(dp[j].first, -ways)); } } if(j < lca){ right++; pj *= n-arr[j]+1; } } } info query(int i, info best){ //obtain dp[i] auto p = touch(i); int left = 0; mint pi = 1; for(auto [lca, dir] : p){ if(i < lca) continue; if(dir == 1){ mint ways = prec[0][lca].second*(arr[i]) - prec[1][lca].second; ways *= pown[i-lca-left-(i != lca)]*pi; best = comb(best, make_pair(prec[0][lca].first+1, ways)); } if(lca < i){ left++; pi *= arr[i]-1; } } return best; } void solve(){ n = in, m = in; pown[0] = 1; for(int i = 1; i<=n; i++){ arr[i] = 0; dp[i] = pair(0, mint(0)); prec[0][i] = prec[1][i] = pair(0, mint(0)); pown[i] = pown[i-1]*n; } vector<pair<int, int>> events; for(int i = 1; i<=m; i++){ int x = in; arr[x] = in; events.pb({arr[x], x}); } sort(all(events)); info ans = pair(0, pown[n]); for(auto [x, 
i] : events){ auto p = touch(i); int left = 0; mint pi = 1; for(auto [lca, dir] : p){ if(lca < i){ left++; pi *= arr[i]-1; } } info best = pair(1, pown[i-left-1]*pi); if(x == 1 && i > 1) continue; best = query(i, best); int right = 0; mint pj = 1; for(auto [lca, dir] : p){ if(i < lca){ right++; pj *= (n - arr[i]+1); } } ans = comb(ans, pair(best.first, best.second*pown[n-i-right]*pj)); dp[i] = best; update(i); } cout << m - ans.first << " " << ans.second.v << "\n"; } signed main(){ cin.tie(0) -> sync_with_stdio(0); int T = 1; cin >> T; for(int i = 1; i<=T; i++){ //cout << "Case #" << i << ": "; solve(); } return 0; }
2035
H
Peak Productivity Forces
\begin{quote} I'm peakly productive and this is deep. \end{quote} You are given two permutations$^{\text{∗}}$ $a$ and $b$, both of length $n$. You can perform the following three-step operation on permutation $a$: - Choose an index $i$ ($1 \le i \le n$). - Cyclic shift $a_1, a_2, \ldots, a_{i-1}$ by $1$ to the right. If you had chosen $i = 1$, then this range doesn't exist, and you cyclic shift nothing. - Cyclic shift $a_{i + 1}, a_{i + 2}, \ldots, a_n$ by $1$ to the right. If you had chosen $i = n$, then this range doesn't exist, and you cyclic shift nothing. After the operation, $a_1,a_2,\ldots, a_{i-2},a_{i-1},a_i,a_{i + 1}, a_{i + 2},\ldots,a_{n-1}, a_n$ is transformed into $a_{i-1},a_1,\ldots,a_{i-3},a_{i-2},a_i,a_n, a_{i + 1},\ldots,a_{n-2}, a_{n-1}$. Here are some examples of operations done on the identity permutation $[1,2,3,4,5,6,7]$ of length $7$: - If we choose $i = 3$, it will become $[2, 1, 3, 7, 4, 5, 6]$. - If we choose $i = 1$, it will become $[1, 7, 2, 3, 4, 5, 6]$. - If we choose $i = 7$, it will become $[6, 1, 2, 3, 4, 5, 7]$. Notably, position $i$ is \textbf{not} shifted. Find a construction using at most $2n$ operations to make $a$ equal to $b$ or print $-1$ if it is impossible. The number of operations does not need to be minimized. It can be shown that if it is possible to make $a$ equal to $b$, it is possible to do this within $2n$ operations. \begin{footnotesize} $^{\text{∗}}$A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array). \end{footnotesize}
We can rephrase the problem as sorting the permutation $b^{-1}(a)$. First, it is impossible to sort the permutation $[2, 1]$, because no operation affects it. Instead of completely sorting the permutation, we first reach a semi-sorted state in at most $n + 1$ moves. In this semi-sorted state, some pairs of non-intersecting adjacent elements may be swapped, and $n$ is always in its position at the very end, e.g. $[2, 1, 3, 5, 4, 6]$, where $1, 2$ are swapped and $4, 5$ are swapped. How can we reach the semi-sorted state? We build it up in the prefix from the largest element to the smallest. To move the element $x$ at index $i$ to the front, we can apply the operation at $i + 1$ if $i < n$. Because we build the prefix starting from $n$, if $i = n$ and $x = n$ we can apply the operation at position $1$ and then at position $3$ to insert it into the front. If $i = n$ otherwise, we can move $x - 1$ to the front first and then $x$: let $i'$ be the new position of $x$ after moving $x - 1$ to the front. If $i' = n$, we can move $x$ to its correct position by applying the operation at position $1$; otherwise, we apply the operation at position $i' + 1$ and insert it into the front. From here, $n - 1$ more operations at position $n$ or $n - 1$ (depending on which element we need to send to the front) sort it properly. We now need to simulate the following operations efficiently: apply the operation at index $i$; find the index of element $v$; find the element at index $i$. This can be done in $O(\log n)$ amortized per operation using an implicit balanced BST such as a treap or a splay tree. Alternatively, we can index each value by $\text{current time} - \text{position}$: the general rotation to the right does not change this index, and only the positions that do not simply shift to the right by $1$ need to be updated manually. This takes $O(1)$ per update and $O(1)$ per query (for more information, read the code). 
Depending on the data structure used, the final complexity is either $O(n)$ or $O(n \log n)$.
[ "constructive algorithms" ]
3,500
#include <bits/stdc++.h> using namespace std; // #include "../lib/hori.h" struct ds { vector<int> occ, pos; int counter, n; ds() {} ds(vector<int> perm) { counter = 0; n = ssize(perm); occ = vector<int>(n, 0); pos = vector<int>(2 * n, 0); // store n + i - update for (int i = 0; i < n; i++) { occ[perm[i]] = i; pos[n + i - 0] = perm[i]; } } int query(int x) { // find the i such that a[i] = x; return occ[x] + counter; } void op(int i) { // funny operation int a = pos[n + i - 1 - counter]; int b = pos[n + i - counter]; int c = pos[n + n - 1 - counter]; counter++; if (i) pos[n + 0 - counter] = a, occ[a] = -counter; pos[n + i - counter] = b, occ[b] = i - counter; if (i != n - 1) pos[n + i + 1 - counter] = c, occ[c] = i + 1 - counter; } int at(int i) { // return a[i] return pos[n + i - counter]; } vector<int> get() { // return the new permutation vector<int> arr(n); for (int i = 0; i < n; i++) { arr[i] = pos[n + i - counter]; } return arr; } }; int main() { ios::sync_with_stdio(0); cin.tie(0); int t; cin >> t; for (int tt = 0; tt < t; tt++) { int n; cin >> n; vector<int> aa(n), bb(n), pos_b(n), a(n); for (auto& i : aa) cin >> i, i--; for (auto& i : bb) cin >> i, i--; for (int i = 0; i < n; i++) pos_b[bb[i]] = i; for (int i = 0; i < n; i++) a[i] = pos_b[aa[i]]; if (a == vector<int>{1, 0}) { cout << -1 << endl; continue; } ds d(a); vector<int> ans; auto op = [&] (int i) { d.op(i); ans.push_back(i + 1); if (d.counter > n - 1) d = ds(d.get()); }; for (int i = n - 1; i >= 0; i--) { int j = d.query(i); if (j != n - 1) { op(j + 1); } else if (i != 0) { int k = d.query(i - 1); op(k + 1); int j = d.query(i); if (j == n - 1) op(0); else op(j + 1), i += (i == n - 1); i--; } else { op(0); } } for (int i = 0; i < n - 1; i++) { if (i != n - 2 and d.at(n - 3) > d.at(n - 2)) op(n - 2), i++; op(n - 1); } cout << ans.size() << endl; for (int i : ans) cout << i << " "; cout << endl; } }
2036
A
Quintomania
Boris Notkin composes melodies. He represents them as a sequence of notes, where each note is encoded as an integer from $0$ to $127$ inclusive. The interval between two notes $a$ and $b$ is equal to $|a - b|$ semitones. Boris considers a melody perfect if the interval between each two adjacent notes is either $5$ semitones or $7$ semitones. After composing his latest melodies, he enthusiastically shows you his collection of works. Help Boris Notkin understand whether his melodies are perfect.
If for all $i$ ($1 \leq i \leq n - 1$) it holds that $|a_i - a_{i+1}| = 5$ or $|a_i - a_{i+1}| = 7$, the answer to the problem is "YES"; otherwise it is "NO". Complexity: $O(n)$
[ "implementation" ]
800
#include <bits/stdc++.h> using namespace std; bool solve(){ int n; cin >> n; vector<int>a(n); for(int i = 0; i < n; i++) cin >> a[i]; for(int i = 1; i < n; i++) { if(abs(a[i] - a[i - 1]) != 5 && abs(a[i] - a[i - 1]) != 7) return false; } return true; } int main() { int t; cin >> t; while(t--){ cout << (solve() ? "YES" : "NO") << "\n"; } }
2036
B
Startup
Arseniy came up with another business plan — to sell soda from a vending machine! For this, he purchased a machine with $n$ shelves, as well as $k$ bottles, where the $i$-th bottle is characterized by the brand index $b_i$ and the cost $c_i$. You can place any number of bottles on each shelf, but all bottles on the same shelf must be of the same brand. Arseniy knows that all the bottles he puts on the shelves of the machine will be sold. Therefore, he asked you to calculate the \textbf{maximum} amount he can earn.
Let's create an array brand_cost of length $k$ and fill it so that brand_cost[i] stores the total cost of all bottles of brand $i+1$. Then sort the array in non-increasing order and compute the sum of its first min(n, k) elements, which is the answer to the problem. Complexity: $O(k \cdot \log k)$
[ "greedy", "sortings" ]
800
#include <bits/stdc++.h> using namespace std; void solve() { int n, k; cin >> n >> k; vector<int> brand_cost(k, 0); for (int i = 0; i < k; i++) { int b, c; cin >> b >> c; brand_cost[b - 1] += c; } sort(brand_cost.rbegin(), brand_cost.rend()); long long ans = 0; for (int i = 0; i < min(n, k); i++) { ans += brand_cost[i]; } cout << ans << '\n'; } int main() { int t; cin >> t; while (t--) { solve(); } return 0; }
2036
C
Anya and 1100
While rummaging through things in a distant drawer, Anya found a beautiful string $s$ consisting only of zeros and ones. Now she wants to make it even more beautiful by performing $q$ operations on it. Each operation is described by two integers $i$ ($1 \le i \le |s|$) and $v$ ($v \in \{0, 1\}$) and means that the $i$-th character of the string is assigned the value $v$ (that is, the assignment $s_i = v$ is performed). But Anya loves the number $1100$, so after each query, she asks you to tell her whether the substring "1100" is present in her string (i.e. there exist such $1 \le i \le |s| - 3$ that $s_{i}s_{i + 1}s_{i + 2}s_{i + 3} = 1100$).
With each query, to track how the presence of "1100" in the string changes, you don't have to scan the entire string — it is enough to check a few neighboring positions. First, naively count $count$ — the number of occurrences of "1100" in $s$. Then for each of the $q$ queries update $count$: consider the substring $s[\max(1, i - 3); \min(i + 3, n)]$ before changing $s_i$ and find $before$ — the number of occurrences of "1100" in it. Then apply $s_i = v$ and similarly find $after$ — the number of occurrences of "1100" in $s[\max(1, i - 3); \min(i + 3, n)]$ after the change. Thus, after $count = count + (after - before)$, $count$ is the number of occurrences of "1100" in $s$ after the query is applied. If $count > 0$, the answer to the query is "YES", otherwise it is "NO". Complexity: $O(|s| + q)$
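The local-recount trick can be sketched like this (a sketch; `answer_queries` is our name, and for brevity it takes 0-based indices and character values):

```python
def answer_queries(s, queries):
    # s: binary string; queries: (i, v) with 0-based index i and v in "01".
    s = list(s)
    n = len(s)

    def local(i):
        # occurrences of "1100" whose 4-character window covers position i
        return sum(1 for j in range(max(0, i - 3), min(n - 4, i) + 1)
                   if "".join(s[j:j + 4]) == "1100")

    total = sum(1 for j in range(n - 3) if "".join(s[j:j + 4]) == "1100")
    out = []
    for i, v in queries:
        before = local(i)       # occurrences destroyed by the write
        s[i] = v
        total += local(i) - before  # occurrences created by the write
        out.append("YES" if total else "NO")
    return out
```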
[ "brute force", "implementation" ]
1100
#include <cstdio> #include <cstring> using namespace std; typedef long long l; char buf[1000000]; l n; bool check_1100(l i) { if (i < 0) return false; if (i >= n - 3) return false; if (buf[i] == '1' && buf[i + 1] == '1' && buf[i + 2] == '0' && buf[i + 3] == '0') return true; return false; } void solve() { scanf("%s", buf); n = strlen(buf); l count = 0; for (l i = 0; i < n; i++) if (check_1100(i)) count++; l q; scanf("%lld", &q); while (q--) { l i, v; scanf("%lld %lld", &i, &v); i--; if (buf[i] != '0' + v) { bool before = check_1100(i - 3) || check_1100(i - 2) || check_1100(i - 1) || check_1100(i); buf[i] = '0' + v; bool after = check_1100(i - 3) || check_1100(i - 2) || check_1100(i - 1) || check_1100(i); count += after - before; } printf(count ? "YES\n" : "NO\n"); } } int main() { l t; scanf("%lld", &t); while (t--) solve(); }
2036
D
I Love 1543
One morning, Polycarp woke up and realized that $1543$ is the most favorite number in his life. The first thing that Polycarp saw that day as soon as he opened his eyes was a large wall carpet of size $n$ by $m$ cells; $n$ and $m$ are even integers. Each cell contains one of the digits from $0$ to $9$. Polycarp became curious about how many times the number $1543$ would appear in all layers$^{\text{∗}}$ of the carpet when traversed \textbf{clockwise}. \begin{footnotesize} $^{\text{∗}}$The first layer of a carpet of size $n \times m$ is defined as a closed strip of length $2 \cdot (n+m-2)$ and thickness of $1$ element, surrounding its outer part. Each subsequent layer is defined as the first layer of the carpet obtained by removing all previous layers from the original carpet. \end{footnotesize}
We will go through all layers of the carpet, adding to the answer the number of occurrences of $1543$ encountered on each layer. To do this, we can iterate over, for example, the top-left cells of the layers, which have the form $(i, i)$ for all $i$ in the range $[1, \frac{\min(n, m)}{2}]$, and then traverse each layer with a naive algorithm, writing the encountered digits into some array. Then traverse the array and count the occurrences of $1543$ in that layer. When traversing the array, we should take into account the cyclic nature of the layer, remembering to check for occurrences of $1543$ that wrap around past the starting cell. Complexity: $O(n \cdot m)$
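The clockwise layer traversal with cyclic matching can be sketched as (a sketch; `count_1543` is our name, `grid` is a list of digit strings):

```python
def count_1543(grid):
    # grid: list of equal-length strings of digits; n and m assumed even.
    n, m = len(grid), len(grid[0])
    total = 0
    for d in range(min(n, m) // 2):
        # collect layer d clockwise: top row, right column, bottom row, left column
        ring = [grid[d][j] for j in range(d, m - d)]
        ring += [grid[j][m - d - 1] for j in range(d + 1, n - d - 1)]
        ring += [grid[n - d - 1][j] for j in range(m - d - 1, d - 1, -1)]
        ring += [grid[j][d] for j in range(n - d - 2, d, -1)]
        L = len(ring)
        # cyclic match: indices taken modulo the ring length handle wrap-around
        total += sum(1 for j in range(L)
                     if all(ring[(j + t) % L] == "1543"[t] for t in range(4)))
    return total
```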
[ "brute force", "implementation", "matrices" ]
1300
#include <cstdio> char a[1005][1005]; char layer[4005]; void solve() { int n, m; scanf("%d %d", &n, &m); for (int i = 0; i < n; ++i) scanf("%s", a[i]); int count = 0; for (int i = 0; (i + 1) * 2 <= n && (i + 1) * 2 <= m; ++i) { int pos = 0; for (int j = i; j < m - i; ++j) layer[pos++] = a[i][j]; for (int j = i + 1; j < n - i - 1; ++j) layer[pos++] = a[j][m - i - 1]; for (int j = m - i - 1; j >= i; --j) layer[pos++] = a[n - i - 1][j]; for (int j = n - i - 2; j >= i + 1; --j) layer[pos++] = a[j][i]; for (int j = 0; j < pos; ++j) if (layer[j] == '1' && layer[(j + 1) % pos] == '5' && layer[(j + 2) % pos] == '4' && layer[(j + 3) % pos] == '3') count++; } printf("%d\n", count); } int main() { int t; scanf("%d", &t); while (t--) solve(); }
2036
E
Reverse the Rivers
A conspiracy of ancient sages, who decided to redirect rivers for their own convenience, has put the world on the brink. But before implementing their grand plan, they decided to carefully think through their strategy — that's what sages do. There are $n$ countries, each with exactly $k$ regions. For the $j$-th region of the $i$-th country, they calculated the value $a_{i,j}$, which reflects the amount of water in it. The sages intend to create channels between the $j$-th region of the $i$-th country and the $j$-th region of the $(i + 1)$-th country for all $1 \leq i \leq (n - 1)$ and for all $1 \leq j \leq k$. Since all $n$ countries are on a large slope, water flows towards the country with the highest number. According to the sages' predictions, after the channel system is created, the new value of the $j$-th region of the $i$-th country will be $b_{i,j} = a_{1,j} | a_{2,j} | ... | a_{i,j}$, where $|$ denotes the bitwise "OR" operation. After the redistribution of water, the sages aim to choose the most suitable country for living, so they will send you $q$ queries for consideration. Each query will contain $m$ requirements. Each requirement contains three parameters: the region number $r$, the sign $o$ (either "$<$" or "$>$"), and the value $c$. If $o$ = "$<$", then in the $r$-th region of the country you choose, the new value must be strictly less than the limit $c$, and if $o$ = "$>$", it must be strictly greater. In other words, the chosen country $i$ must satisfy all $m$ requirements. If in the current requirement $o$ = "$<$", then it must hold that $b_{i,r} < c$, and if $o$ = "$>$", then $b_{i,r} > c$. In response to each query, you should output a single integer — the number of the suitable country. If there are multiple such countries, output the smallest one. If no such country exists, output $-1$.
For any non-negative integers $a$ and $b$, $a \leq a | b$, where $|$ is the bitwise "or" operation. After computing the values of $b_{i,j}$ for all countries and regions, we can notice that for a fixed region $j$, the values $b_{i,j}$ are non-decreasing as the index $i$ increases. This is because the bitwise "or" operation cannot decrease a number, only increase it or leave it unchanged. Hence, we can use binary search to quickly find the country that matches the given conditions. For each requirement of each query: if $o$ = "<", we search for the first country where $b_{i,r} \geq c$ (this will be the first country that does not satisfy the condition); if $o$ = ">", we look for the first country where $b_{i,r} > c$ (this will be the first country that does satisfy the condition). In both cases, we can use standard binary search to find the index. If the checks leave at least one country that satisfies all the requirements, we choose the country with the lowest number. Complexity: counting values $O(n\cdot k)$, processing each query using binary search $O(m \log n)$, total $O(n \cdot k + q \cdot m \cdot \log n)$.
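Since each column of prefix-OR values is non-decreasing, every requirement simply shrinks a window of feasible country indices, which can be sketched with `bisect` (function and parameter names are ours):

```python
import bisect

def narrow(col, op, c, lo, hi):
    # col: non-decreasing list of prefix-OR values for one region,
    # (lo, hi): current window of feasible 0-based country indices.
    if op == '<':
        # last index with col[i] < c is bisect_left(col, c) - 1
        hi = min(hi, bisect.bisect_left(col, c) - 1)
    else:
        # first index with col[i] > c is bisect_right(col, c)
        lo = max(lo, bisect.bisect_right(col, c))
    return lo, hi
```

After folding all $m$ requirements of a query through `narrow`, the answer is `lo + 1` (1-based) if `lo <= hi`, and `-1` otherwise.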
[ "binary search", "constructive algorithms", "data structures", "greedy" ]
1600
#include <cstdio> typedef long long l; l ** arr; int main() { l n, k, q; scanf("%lld %lld %lld", &n, &k, &q); arr = new l*[n]; for (l i = 0; i < n; i++) arr[i] = new l[k]; for (l i = 0; i < n; i++) for (l j = 0; j < k; j++) scanf("%lld", &arr[i][j]); for (l i = 1; i < n; i++) for (l j = 0; j < k; j++) arr[i][j] |= arr[i - 1][j]; while (q--) { l m; scanf("%lld", &m); l left_pos = 0, right_pos = n - 1; while (m--) { l r, c; char o; scanf("%lld %c %lld", &r, &o, &c); r--; if (o == '<') { l le = -1, ri = n, mid; while (le + 1 != ri) { mid = (le + ri) / 2; if (arr[mid][r] < c) le = mid; else ri = mid; } if (le < right_pos) right_pos = le; } else { l le = -1, ri = n, mid; while (le + 1 != ri) { mid = (le + ri) / 2; if (arr[mid][r] <= c) le = mid; else ri = mid; } if (ri > left_pos) left_pos = ri; } } if (left_pos <= right_pos) printf("%lld\n", left_pos + 1); else printf("-1\n"); } }
2036
F
XORificator 3000
Alice has been giving gifts to Bob for many years, and she knows that what he enjoys the most is performing bitwise XOR of interesting integers. Bob considers a positive integer $x$ to be interesting if it satisfies $x \not\equiv k (\bmod 2^i)$. Therefore, this year for his birthday, she gifted him a super-powerful "XORificator 3000", the latest model. Bob was very pleased with the gift, as it allowed him to instantly compute the XOR of all interesting integers in any range from $l$ to $r$, inclusive. After all, what else does a person need for happiness? Unfortunately, the device was so powerful that at one point it performed XOR with itself and disappeared. Bob was very upset, and to cheer him up, Alice asked you to write your version of the "XORificator".
Hints: note that the modulus base is a power of two. Can we quickly compute the XOR on a segment $[l, r]$? We also recommend the beautiful tutorial by ne_justlm! Let us introduce the notation $\DeclareMathOperator{\XOR}{XOR}\XOR(l, r) = l \oplus (l+1) \oplus \dots \oplus r$. The first thing that comes to mind when reading the statement is that the XOR of all numbers on the segment $[0, x]$ can be computed in $O(1)$ by the following formula: $\XOR(0, x) = \begin{cases} x & \text{if } x \equiv 0 \pmod{4} \\ 1 & \text{if } x \equiv 1 \pmod{4} \\ x + 1 & \text{if } x \equiv 2 \pmod{4} \\ 0 & \text{if } x \equiv 3 \pmod{4} \end{cases}$ Then $\XOR(l, r)$ can be found as $\XOR(0, r) \oplus \XOR(0, l-1)$. Now note that for the answer it suffices to compute in $O(1)$ the XOR of all uninteresting numbers on the segment: XOR-ing it with the XOR of the whole segment then yields the XOR of all interesting numbers. The modulus base being a power of two is not chosen by chance: we only need to "compress" $l$ and $r$ by a factor of $2^i$ so that the resulting range contains exactly the uninteresting numbers shifted $i$ bits to the right. Computing $\XOR(l', r')$ then gives exactly the desired XOR of the uninteresting numbers, also shifted $i$ bits to the right. To recover the remaining lower $i$ bits, we only need the number of uninteresting numbers on the segment $[l, r]$. If it is odd, these $i$ bits equal $k$: all uninteresting numbers are congruent to $k \bmod 2^i$, so they share the same lower $i$ bits, equal to $k$, and XOR-ing $k$ with itself an odd number of times again gives $k$. Otherwise, the lower $i$ bits of the answer are $0$, since we XOR an even number of times. The number of uninteresting numbers on the segment can be computed similarly to $\XOR(l, r)$: find the counts on the segments $[0, r]$ and $[0, l-1]$ and subtract the latter from the former.
The number of integers congruent to $k$ modulo $m$ and lying in $[k, r]$ is $\left\lfloor \frac{r - k}{m} \right\rfloor + 1$. Time complexity of the solution: $O(\log r)$.
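The two closed-form pieces — the period-4 XOR pattern and the congruence count — can be sketched as (a sketch with our names; `count_cong` counts $t \in [0, r]$ with $t \equiv k \pmod m$, a slightly different bookkeeping than the editorial's difference form):

```python
def xor_upto(x):
    # XOR of 0..x via the period-4 pattern (x % 4 selects the case).
    return [x, 1, x + 1, 0][x % 4]

def xor_range(l, r):
    # XOR of l..r; Python's % keeps xor_upto(l - 1) correct even for l = 0.
    return xor_upto(r) ^ xor_upto(l - 1)

def count_cong(r, k, m):
    # how many t in [0, r] satisfy t ≡ k (mod m), assuming 0 <= k < m
    return (r - k) // m + 1 if r >= k else 0
```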
[ "bitmasks", "dp", "number theory", "two pointers" ]
1900
#include <iostream> using namespace std; #define int uint64_t #define SPEEDY std::ios_base::sync_with_stdio(0); std::cin.tie(0); std::cout.tie(0); int xor_0_n(int n) { int rem = n % 4; if (rem == 0) { return n; } if (rem == 1) { return 1; } if (rem == 2) { return n + 1; } return 0; } int xor_range(int l, int r) { return xor_0_n(r) ^ xor_0_n(l - 1); } int32_t main() { int t; cin >> t; while (t--) { int l, r, i, k; cin >> l >> r >> i >> k; int highBits = xor_range((l - k + (1ULL << i) - 1) >> i, (r - k) >> i) << i; int lowBits = k * (((r - k) / (1ULL << i) - (l - k - 1) / (1ULL << i)) & 1); cout << (xor_range(l, r) ^ highBits ^ lowBits) << '\n'; } return 0; }
2036
G
Library of Magic
This is an interactive problem. The Department of Supernatural Phenomena at the Oxenfurt Academy has opened the Library of Magic, which contains the works of the greatest sorcerers of Redania — $n$ ($3 \leq n \leq 10^{18}$) types of books, numbered from $1$ to $n$. Each book's type number is indicated on its spine. Moreover, each type of book is stored in the library in exactly two copies! And you have been appointed as the librarian. One night, you wake up to a strange noise and see a creature leaving the building through a window. Three thick tomes of different colors were sticking out of the mysterious thief's backpack. Before you start searching for them, you decide to compute the numbers $a$, $b$, and $c$ written on the spines of these books. All three numbers are \textbf{distinct}. So, you have an unordered set of tomes, which includes one tome with each of the pairwise distinct numbers $a$, $b$, and $c$, and two tomes for all numbers from $1$ to $n$, except for $a$, $b$, and $c$. You want to find these values $a$, $b$, and $c$. Since you are not working in a simple library, but in the Library of Magic, you can only use one spell in the form of a query to check the presence of books in their place: - "xor l r" — \textbf{Bitwise XOR query} with parameters $l$ and $r$. Let $k$ be the number of such tomes in the library whose numbers are greater than or equal to $l$ and less than or equal to $r$. You will receive the result of the computation $v_1 \oplus v_2 \oplus ... \oplus v_k$, where $v_1 ... v_k$ are the numbers on the spines of these tomes, and $\oplus$ denotes the operation of bitwise exclusive OR. Since your magical abilities as a librarian are severely limited, you can make no more than $150$ queries.
Have you considered the cases where $a \oplus b \oplus c = 0$? Suppose you are certain that at least one lost number lies on some segment $[le, ri]$. Can you choose a value $mid$ such that from the queries xor {le} {mid} and xor {mid + 1} {ri} you can unambiguously tell on which of the segments ($[le, mid]$ or $[mid + 1, ri]$) at least one lost number lies, even if both of these queries return $0$? To begin with, note that for any number $x$, $x \oplus x = 0$ holds. Therefore, by querying xor l r, you get the bitwise XOR of only those volume numbers that are present in the library in a single copy (within the range $[l, r]$, of course). Also note that for two distinct numbers $x$ and $y$, $x \oplus y \neq 0$ always holds. Initially, our goal is to determine the highest set bit of the maximum of the lost numbers. To do this, we can go through the bits starting from the most significant bit of $n$. For the $i$-th bit, we ask xor {2^i} {min(2^(i + 1) - 1, n)}. Note that all numbers in this interval have the $i$-th bit equal to one. If we get a non-zero result, this bit is the desired highest bit of the maximum of the lost numbers. If we get a result equal to zero, this bit is guaranteed not to be set in any of the numbers, i.e. all three numbers are less than $2^i$. Let's prove it. If one or two of the lost numbers lay in the requested interval, their XOR would be non-zero (see the first paragraph). If all three numbers lie in this interval, then the XOR of their $i$-th bits is $1 \oplus 1 \oplus 1 = 1$, and hence the XOR of the numbers themselves is also non-zero. Now that we know the highest bit $i$ of the desired number, we can find this number by any implementation of binary search inside the interval $[2^i; \min(2^{i + 1} - 1, n)]$.
From the answer to any query on any subinterval of that interval, we can unambiguously tell whether our number is present there or not — the proof is similar to the one above. The first number is found. The second number can be found by another binary search, since the XOR of two distinct numbers is always non-zero. The main thing is not to forget to "exclude" the already found number from the obtained result using the same XOR. The third number can then be found by querying the whole interval from $1$ to $n$ and "excluding" the two already found numbers from the result. Number of queries: $\approx 2 \cdot \log n \approx 120 < 150$
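As a sanity check, the whole strategy can be simulated offline against a mock oracle; `recover_three` and its `query` closure are ours and stand in for the interactor (a sketch, not the interactive submission):

```python
def recover_three(n, secret):
    # secret: the set {a, b, c}. query(l, r) returns the XOR of the numbers
    # present in a single copy in [l, r]; already-found numbers are
    # cancelled out via `exclude`, mirroring the editorial's exclusion step.
    def query(l, r, exclude=()):
        if l > n:
            return 0
        r = min(r, n)
        res = 0
        for v in list(secret) + list(exclude):
            if l <= v <= r:
                res ^= v
        return res

    found = []
    for _ in range(2):
        num = 0
        bit = 1 << (n.bit_length() - 1)
        while bit:
            # zero answer <=> no remaining lost number in [num|bit, num|(2*bit-1)]
            if query(num | bit, num | (bit * 2 - 1), found):
                num |= bit
            bit >>= 1
        found.append(num)
    # third number: XOR over the whole range, excluding the two found ones
    found.append(query(1, n, found))
    return sorted(found)
```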
[ "binary search", "constructive algorithms", "divide and conquer", "interactive", "math", "number theory" ]
2200
#include <cstdio> typedef long long l; l n, num1, num2; l req(l le, l ri, l num) { if (le > n) return 0; if (ri > n) ri = n; printf("xor %lld %lld\n", le, ri); fflush(stdout); l res; scanf("%lld", &res); if (num > 1 && le <= num1 && num1 <= ri) res ^= num1; if (num > 2 && le <= num2 && num2 <= ri) res ^= num2; return res; } void solve() { scanf("%lld", &n); num1 = 0; num2 = 0; l start = 1LL << (63 - __builtin_clzll(n)); for (l i = start; i > 0; i >>= 1) { l res = req(num1 | i, num1 | (i * 2 - 1), 1); if (res) num1 |= i; } for (l i = start; i > 0; i >>= 1) { l res = req(num2 | i, num2 | (i * 2 - 1), 2); if (res) num2 |= i; } printf("ans %lld %lld %lld\n", num1, num2, req(1, n, 3)); fflush(stdout); } int main() { l t; scanf("%lld", &t); while (t--) solve(); }
2037
A
Twice
Kinich wakes up to the start of a new day. He turns on his phone, checks his mailbox, and finds a mysterious present. He decides to unbox the present. Kinich unboxes an array $a$ with $n$ integers. Initially, Kinich's score is $0$. He will perform the following operation any number of times: - Select two indices $i$ and $j$ $(1 \leq i < j \leq n)$ such that neither $i$ nor $j$ has been chosen in any previous operation and $a_i = a_j$. Then, add $1$ to his score. Output the maximum score Kinich can achieve after performing the aforementioned operation any number of times.
We want to count how many times we can choose $i$ and $j$ such that $a_i = a_j$. Suppose $f_x$ stores the frequency of $x$ in $a$. Each time we choose a pair with $a_i = a_j = x$, $f_x$ decreases by $2$. Thus, the answer is the sum of $\lfloor \frac{f_x}{2} \rfloor$ over all $x$.
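The frequency-pairing formula can be sketched with a `Counter` (`max_score` is our name):

```python
from collections import Counter

def max_score(a):
    # each value x with frequency f_x yields floor(f_x / 2) disjoint pairs
    return sum(f // 2 for f in Counter(a).values())
```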
[ "implementation" ]
800
t = int(input())
for _ in range(t):
    n = int(input())
    a = list(map(int, input().split()))
    print(sum([a.count(x) // 2 for x in range(n + 1)]))
2037
B
Intercepted Inputs
To help you prepare for your upcoming Codeforces contest, Citlali set a grid problem and is trying to give you a $n$ by $m$ grid through your input stream. Specifically, your input stream should contain the following: - The first line contains two integers $n$ and $m$ — the dimensions of the grid. - The following $n$ lines contain $m$ integers each — the values of the grid. However, someone has intercepted your input stream, shuffled all given integers, and put them all on one line! Now, there are $k$ integers all on one line, and you don't know where each integer originally belongs. Instead of asking Citlali to resend the input, you decide to determine the values of $n$ and $m$ yourself. Output any possible value of $n$ and $m$ that Citlali could have provided.
It is worth noting that test $4$ is especially made to blow up python dictionaries, sets, and `collections.Counter`. If you are exceeding the time limit, consider using a frequency array of length $k$. You must check whether you can find two integers $n$ and $m$ such that $n \cdot m + 2 = k$. You can either use a counter or two pointers. Do note that $n^2 + 2 = k$ is an edge case that must be handled separately if you use a counter (the value $n$ must occur at least twice in the input). This edge case does not appear in the two-pointers approach. Time complexity is $O(k \log k)$ (assuming you are wise enough not to use a hash table).
[ "brute force", "implementation" ]
800
testcases = int(input())
for _ in range(testcases):
    k = int(input())
    nums = input().split()
    freq = [0] * (k + 1)
    for x in nums:
        freq[int(x)] += 1
    solution = (-1, -1)
    for i in range(1, k + 1):
        if i * i == k - 2:
            if freq[i] > 1:
                solution = (i, i)
        elif (k - 2) % i == 0:
            if freq[i] > 0 and freq[(k - 2) // i] > 0:
                solution = (i, (k - 2) // i)
    print(solution[0], solution[1])
2037
C
Superultra's Favorite Permutation
Superultra, a little red panda, desperately wants primogems. In his dreams, a voice tells him that he must solve the following task to obtain a lifetime supply of primogems. Help Superultra! Construct a permutation$^{\text{∗}}$ $p$ of length $n$ such that $p_i + p_{i+1}$ is composite$^{\text{†}}$ over all $1 \leq i \leq n - 1$. If it's not possible, output $-1$. \begin{footnotesize} $^{\text{∗}}$A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array). $^{\text{†}}$An integer $x$ is composite if it has at least one other divisor besides $1$ and $x$. For example, $4$ is composite because $2$ is a divisor. \end{footnotesize}
Remember that all even numbers greater than $2$ are composite. Since the sum of two numbers of the same parity is even, and the smallest such sum, $1 + 3 = 4$, is already greater than $2$, any two numbers with the same parity sum up to a composite number. Now you only have to find one odd number and one even number that sum to a composite number. One can manually verify that no such pair exists for $n \leq 4$, but for $n = 5$ there is the pair $(4, 5)$, which sums to $9$, a composite number.
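The construction (all evens, then the bridge $4, 5$, then all odds) and its verification can be sketched as (`build` and `is_valid` are our names):

```python
def build(n):
    # evens (except 4), then the bridge 4, 5, then odds (except 5);
    # every adjacent sum is either even and > 2, or equals 4 + 5 = 9
    if n < 5:
        return None
    evens = [i for i in range(2, n + 1, 2) if i != 4]
    odds = [i for i in range(1, n + 1, 2) if i != 5]
    return evens + [4, 5] + odds

def is_valid(p):
    # permutation of 1..n with every adjacent sum composite
    n = len(p)
    if sorted(p) != list(range(1, n + 1)):
        return False
    composite = lambda x: any(x % d == 0 for d in range(2, x))
    return all(composite(p[i] + p[i + 1]) for i in range(n - 1))
```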
[ "constructive algorithms", "greedy", "math", "number theory" ]
1000
for _ in range(int(input())):
    n = int(input())
    if n < 5:
        print(-1)
        continue
    for i in range(2, n + 1, 2):
        if i != 4:
            print(i, end=" ")
    print("4 5", end=" ")
    for i in range(1, n + 1, 2):
        if i != 5:
            print(i, end=" ")
    print()
2037
D
Sharky Surfing
Mualani loves surfing on her sharky surfboard! Mualani's surf path can be modeled by a number line. She starts at position $1$, and the path ends at position $L$. When she is at position $x$ with a jump power of $k$, she can jump to any \textbf{integer} position in the interval $[x, x+k]$. Initially, her jump power is $1$. However, her surf path isn't completely smooth. There are $n$ hurdles on her path. Each hurdle is represented by an interval $[l, r]$, meaning she cannot jump to any position in the interval $[l, r]$. There are also $m$ power-ups at certain positions on the path. Power-up $i$ is located at position $x_i$ and has a value of $v_i$. When Mualani is at position $x_i$, she has the option to collect the power-up to increase her jump power by $v_i$. There may be multiple power-ups at the same position. When she is at a position with some power-ups, she may choose to take or ignore each individual power-up. No power-up is in the interval of any hurdle. What is the minimum number of power-ups she must collect to reach position $L$ to finish the path? If it is not possible to finish the surf path, output $-1$.
Process positions from left to right. Maintain a priority queue of the power-ups seen so far but not yet taken. When Mualani meets a power-up, add it to the priority queue. When she meets a hurdle, take power-ups from the priority queue, strongest first, until she can jump over the hurdle. This guarantees that each time Mualani jumps over a hurdle, she takes the minimum number of power-ups necessary. Time complexity is $O((n+m)\log m)$, where the $O(\log m)$ factor comes from the priority queue. Note that the hurdle intervals are inclusive: if there is a hurdle at $[l, r]$, she must jump from position $l-1$ to $r+1$, which requires jump power at least $r - l + 2$.
[ "data structures", "greedy", "two pointers" ]
1300
import sys
import heapq
input = sys.stdin.readline
for _ in range(int(input())):
    n, m, L = map(int, input().split())
    EV = []
    for _ in range(n):
        EV.append((*list(map(int, input().split())), 1))
    for _ in range(m):
        EV.append((*list(map(int, input().split())), 0))
    EV.sort()
    k = 1
    pwr = []
    for a, b, t in EV:
        if t == 0:
            heapq.heappush(pwr, -b)
        else:
            while pwr and k < b - a + 2:
                k -= heapq.heappop(pwr)
            if k < b - a + 2:
                print(-1)
                break
    else:
        print(m - len(pwr))
2037
E
Kachina's Favorite Binary String
This is an interactive problem. Kachina challenges you to guess her favorite binary string$^{\text{∗}}$ $s$ of length $n$. She defines $f(l, r)$ as the number of subsequences$^{\text{†}}$ of $01$ in $s_l s_{l+1} \ldots s_r$. \textbf{Two subsequences are considered different if they are formed by deleting characters from different positions in the original string, even if the resulting subsequences consist of the same characters.} To determine $s$, you can ask her some questions. In each question, you can choose two indices $l$ and $r$ ($1 \leq l < r \leq n$) and ask her for the value of $f(l, r)$. Determine and output $s$ after asking Kachina no more than $n$ questions. However, it may be the case that $s$ is impossible to be determined. In this case, you would need to report $IMPOSSIBLE$ instead. Formally, $s$ is impossible to be determined if after asking $n$ questions, there are always multiple possible strings for $s$, regardless of what questions are asked. \textbf{Note that if you report} $IMPOSSIBLE$ \textbf{when there exists a sequence of at most $n$ queries that will uniquely determine the binary string, you will get the Wrong Answer verdict.} \begin{footnotesize} $^{\text{∗}}$A binary string only contains characters $0$ and $1$. $^{\text{†}}$A sequence $a$ is a subsequence of a sequence $b$ if $a$ can be obtained from $b$ by the deletion of several (possibly, zero or all) elements. For example, subsequences of $\mathtt{1011101}$ are $\mathtt{0}$, $\mathtt{1}$, $\mathtt{11111}$, $\mathtt{0111}$, but not $\mathtt{000}$ nor $\mathtt{11100}$. \end{footnotesize}
Notice that if for some $r$ we have $f(1, r) < f(1, r + 1)$, then we can conclude that $s_{r + 1} = 1$ (if it were $0$, we would have $f(1, r) = f(1, r + 1)$), and if $f(1, r)$ is non-zero and $f(1, r) = f(1, r + 1)$, then $s_{r + 1} = 0$. Unfortunately, this is only useful if there is a $0$ among $s_1, s_2, \ldots, s_r$, so the next thing to try is to find the longest prefix with $f(1, r) = 0$ (beyond this point every prefix contains a zero). See that if $f(1, r) = 0$ and $f(1, r + 1) = k$, then $s_{r + 1} = 1$, the last $k$ characters of $s_1, s_2, \ldots, s_r$ must be $0$, and the first $r - k$ characters must be $1$. To prove this we can argue by contradiction: suppose it is not true, and it becomes apparent that some shorter prefix would give a non-zero value when queried. The one case this does not cover is when all prefixes give zero; by a similar contradiction argument the string must look like $111...1100....000$, and in this case it is not hard to see that all queries return zero, so we can report that it is impossible. So we should query all prefixes; at the first non-zero one (if none exists we report impossible) we can deduce the prefix as discussed above, after which there is a $0$ in the prefix and we can deduce all subsequent characters as discussed at the start.
[ "dp", "greedy", "interactive", "two pointers" ]
1600
def qu(a, b):
    if (a, b) not in d:
        print("?", a + 1, b + 1)
        d[(a, b)] = int(input())
    return d[(a, b)]

for _ in range(int(input())):
    d = dict()
    n = int(input())
    SOL = ["0"] * n
    last = qu(0, n - 1)
    if last:
        z = 1
        for i in range(n - 2, 0, -1):
            nw = qu(0, i)
            if nw != last:
                SOL[i + 1] = "1"
            last = nw
            if last == 0:
                z = i + 1
                break
        if last:
            SOL[1] = "1"
            SOL[0] = "0"
        else:
            last = 1
            for j in range(z - 2, -1, -1):
                nw = qu(j, z)
                if nw == last:
                    SOL[j] = "1"
                last = nw
        print("!", "".join(SOL))
    else:
        print("! IMPOSSIBLE")
2037
F
Ardent Flames
You have obtained the new limited event character Xilonen. You decide to use her in combat. There are $n$ enemies in a line. The $i$'th enemy from the left has health $h_i$ and is currently at position $x_i$. Xilonen has an attack damage of $m$, and you are ready to defeat the enemies with her. Xilonen has a powerful "ground stomp" attack. \textbf{Before you perform any attacks}, you select an integer $p$ and position Xilonen there ($p$ can be any integer position, including a position with an enemy currently). Afterwards, for each attack, she deals $m$ damage to an enemy at position $p$ (if there are any), $m-1$ damage to enemies at positions $p-1$ and $p+1$, $m-2$ damage to enemies at positions $p-2$ and $p+2$, and so on. Enemies that are at least a distance of $m$ away from Xilonen take no damage from attacks. Formally, if there is an enemy at position $x$, she will deal $\max(0,m - |p - x|)$ damage to that enemy each hit. \textbf{Note that you may not choose a different $p$ for different attacks.} Over all possible $p$, output the minimum number of attacks Xilonen must perform to defeat at least $k$ enemies. If it is impossible to find a $p$ such that eventually at least $k$ enemies will be defeated, output $-1$ instead. Note that an enemy is considered to be defeated if its health reaches $0$ or below.
Let's perform binary search on the minimum number of attacks needed to kill at least $k$ enemies. How do we check whether a specific answer is achievable? Consider a single enemy for now. If its health is $h_i$ and we need to kill it in at most $q$ attacks, then we need to deal at least $\lceil\frac{h_i}{q}\rceil$ damage per attack to this enemy. If this number is greater than $m$, then obviously we cannot kill this enemy in at most $q$ attacks, as the maximum damage Xilonen can deal is $m$ per hit. Otherwise, we can model the enemy as an interval of valid positions for Xilonen. Specifically, the inequality $m-|p-x_i|\geq\lceil\frac{h_i}{q}\rceil$ must be satisfied. Now that we have modeled each enemy as an interval, the problem reduces to determining whether there exists a point covered by at least $k$ intervals. This is a classic problem solved by a sweep line: sort the events of intervals opening and closing by coordinate, add $1$ to a counter when an interval opens and subtract $1$ when an interval closes. Note that the maximum possible answer for any solvable setup is $\max(h_i) \leq 10^9$, so if we cannot kill at least $k$ enemies in $10^9$ attacks, we can output $-1$. The total time complexity is $O(n \log n \log(\max h_i))$.
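The feasibility check inside the binary search can be sketched as a sweep over interval endpoints (`feasible` is our name; it answers "can some position $p$ kill at least $k$ enemies within $q$ attacks?"):

```python
def feasible(q, h, x, m, k):
    # enemy i dies within q attacks iff per-hit damage >= ceil(h[i]/q);
    # that damage is achieved iff |p - x[i]| <= m - ceil(h[i]/q)
    events = []
    for hp, pos in zip(h, x):
        need = -(-hp // q)                  # ceil(hp / q)
        if need > m:
            continue                        # this enemy cannot die in q attacks
        rad = m - need
        events.append((pos - rad, 1))       # interval opens
        events.append((pos + rad + 1, -1))  # interval closes (exclusive)
    events.sort()
    cover = 0
    for _, delta in events:
        cover += delta
        if cover >= k:
            return True
    return False
```

Sorting the `(position, delta)` pairs puts `-1` events before `+1` events at equal positions, which is the right order for exclusive close events.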
[ "binary search", "data structures", "math", "sortings", "two pointers" ]
2100
import sys
input = sys.stdin.readline
from collections import defaultdict
for _ in range(1):
    n, m, k = map(int, input().split())
    h = list(map(int, input().split()))
    x = list(map(int, input().split()))
    lo = 0
    hi = int(1e10)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        ev = defaultdict(int)
        for i in range(n):
            ma = (h[i] + mid - 1) // mid
            if ma > m:
                continue
            ev[x[i] - m + ma] += 1
            ev[x[i] + m - ma + 1] -= 1
        sc = 0
        for y in sorted(ev.keys()):
            sc += ev[y]
            if sc >= k:
                hi = mid
                break
        else:
            lo = mid
    if hi == int(1e10):
        print(-1)
    else:
        print(hi)