| contest_id | index | title | statement | tutorial | tags | rating | code |
|---|---|---|---|---|---|---|---|
1130
|
C
|
Connect
|
Alice lives on a flat planet that can be modeled as a square grid of size $n \times n$, with rows and columns enumerated from $1$ to $n$. We represent the cell at the intersection of row $r$ and column $c$ with ordered pair $(r, c)$. Each cell in the grid is either land or water.
\begin{center}
{\small An example planet with $n = 5$. It also appears in the first sample test.}
\end{center}
Alice resides in land cell $(r_1, c_1)$. She wishes to travel to land cell $(r_2, c_2)$. At any moment, she may move to one of the cells adjacent to where she is—in one of the four directions (i.e., up, down, left, or right).
Unfortunately, Alice cannot swim, and there is no viable transportation means other than by foot (i.e., she can walk only on land). As a result, Alice's trip may be impossible.
To help Alice, you plan to create \textbf{at most one} tunnel between some two land cells. The tunnel will allow Alice to freely travel between its two endpoints. However, creating a tunnel is a lot of effort: the cost of creating a tunnel between cells $(r_s, c_s)$ and $(r_t, c_t)$ is $(r_s-r_t)^2 + (c_s-c_t)^2$.
For now, your task is to find the minimum possible cost of creating at most one tunnel so that Alice could travel from $(r_1, c_1)$ to $(r_2, c_2)$. If no tunnel needs to be created, the cost is $0$.
|
Let $S$ be the set of cells accessible from $(r_1, c_1)$. Similarly, let $T$ be the set of cells accessible from $(r_2, c_2)$. We can find $S$ and $T$ using a search algorithm such as a DFS or a BFS. If $(r_2, c_2) \in S$ (that is, $S = T$), then a tunnel is not needed, so the answer is $0$. Otherwise, we need to create a tunnel with one endpoint in $S$ and the other in $T$. Now, we can simply iterate through all possible pairs of cells $((x_1, y_1), (x_2, y_2))$ where $(x_1, y_1) \in S$ and $(x_2, y_2) \in T$ to find one that minimizes the cost (i.e., $(x_1-x_2)^2 + (y_1-y_2)^2$). The time complexity is $\mathcal{O}(n^4)$.
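The editorial's approach can be sketched in a few lines. The following is an illustrative Python version (not the official solution), assuming the grid is given as strings of '0' (land) and '1' (water) with 0-indexed coordinates:

```python
from collections import deque

def min_tunnel_cost(grid, r1, c1, r2, c2):
    # grid: list of strings, '0' = land, '1' = water (0-indexed coordinates)
    n = len(grid)

    def component(r, c):
        # BFS flood fill over land cells reachable from (r, c)
        seen = {(r, c)}
        q = deque([(r, c)])
        while q:
            x, y = q.popleft()
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < n and 0 <= ny < n and grid[nx][ny] == '0' \
                        and (nx, ny) not in seen:
                    seen.add((nx, ny))
                    q.append((nx, ny))
        return seen

    S = component(r1, c1)
    if (r2, c2) in S:
        return 0                      # already connected, no tunnel needed
    T = component(r2, c2)
    # O(n^4) scan over all endpoint pairs
    return min((xs - xt) ** 2 + (ys - yt) ** 2
               for xs, ys in S for xt, yt in T)
```

On the first sample grid this returns $10$ (a tunnel between $(1, 4)$ and $(4, 5)$ in the problem's 1-indexed coordinates).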
|
[
"brute force",
"dfs and similar",
"dsu"
] | 1,400
| null |
1131
|
A
|
Sea Battle
|
In order to make the "Sea Battle" game more interesting, Boris decided to add a new ship type to it. The ship consists of two rectangles. The first rectangle has a width of $w_1$ and a height of $h_1$, while the second rectangle has a width of $w_2$ and a height of $h_2$, where $w_1 \ge w_2$. In this game, exactly one ship is used, made up of two rectangles. There are no other ships on the field.
The rectangles are placed on the field in the following way:
- the second rectangle is on top of the first rectangle;
- they are aligned to the left, i.e. their left sides are on the same line;
- the rectangles are adjacent to each other without a gap.
See the pictures in the notes: the first rectangle is colored red, the second rectangle is colored blue.
Formally, let's introduce a coordinate system. Then, the leftmost bottom cell of the first rectangle has coordinates $(1, 1)$, the rightmost top cell of the first rectangle has coordinates $(w_1, h_1)$, the leftmost bottom cell of the second rectangle has coordinates $(1, h_1 + 1)$ and the rightmost top cell of the second rectangle has coordinates $(w_2, h_1 + h_2)$.
After the ship is completely destroyed, all cells neighboring the ship by a side or a corner are marked. Of course, only cells that don't belong to the ship are marked. On the pictures in the notes such cells are colored green.
Find out how many cells should be marked after the ship is destroyed. The field of the game is infinite in any direction.
|
Let's classify the marked cells into two groups. The first group consists of cells touching the ship only by a corner; when $w_1 > w_2$ the ship has exactly $5$ convex corners, each contributing one such cell. The second group consists of cells neighboring the ship by a side; their number equals the perimeter of the ship, $2 \cdot (w_1 + h_1 + h_2)$, minus one for the cell at the inner (reflex) corner, which touches the ship by a side twice. Thus the answer is equal to $2 \cdot (w_1 + h_1 + h_2) + 4$. (When $w_1 = w_2$ the ship is a plain rectangle with $4$ corner cells and no double-counted cell, giving the same formula.)
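Since the whole answer is a closed-form expression, the check is a one-liner. Here is a small Python sketch (illustrative; the function name is my own):

```python
def marked_cells(w1, h1, w2, h2):
    # side-adjacent cells: perimeter 2*(w1 + h1 + h2), minus one cell counted
    # twice at the reflex corner when w1 > w2; corner-only cells: the convex
    # corners. Both cases collapse to the same closed form, and w2 cancels out:
    return 2 * (w1 + h1 + h2) + 4
```

For the samples: a ship with $w_1 = 2, h_1 = 1, w_2 = 2, h_2 = 1$ gives $12$ marked cells, and $w_1 = 2, h_1 = 2, w_2 = 1, h_2 = 2$ gives $16$.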
|
[
"math"
] | 800
| null |
1131
|
B
|
Draw!
|
You still have partial information about the score during the historic football match. You are given a set of pairs $(a_i, b_i)$, indicating that at some point during the match the score was "$a_i$:$b_i$". It is known that if the current score is "$x$:$y$", then after the next goal it will change to "$x+1$:$y$" or "$x$:$y+1$". What is the largest number of times a draw could appear on the scoreboard?
The pairs "$a_i$:$b_i$" are given in chronological order (time increases), but you are given the score only for some moments of time. The last pair corresponds to the end of the match.
|
Since some scores are already fixed, we only have freedom between them. One can easily see that we basically need to solve the problem between each neighboring pair of known scores and then sum up all the answers (a known score that is itself a draw may be counted twice, in the "left" pair and in the "right" one, but it's easy to subtract it back). How to solve the problem between scores $(a, b)$ and $(c, d)$? We want to fit as many draws $(x, x)$ in between as possible. We need $a \le x \le c$ and $b \le x \le d$, hence $\max(a, b) \le x \le \min(c, d)$; it's easy to count the number of such $x$'s, and one can check that there is a goal sequence achieving all such $(x, x)$'s together.
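A compact Python sketch of this counting (illustrative; the function name is my own). The $0{:}0$ score at kick-off is counted once up front:

```python
def max_draws(pairs):
    # pairs: known scores in chronological order; the match starts at 0:0
    a, b = 0, 0          # previous known score
    draws = 1            # 0:0 is always on the scoreboard
    for c, d in pairs:
        lo, hi = max(a, b), min(c, d)
        if hi >= lo:
            draws += hi - lo + 1
            # don't count a draw twice when the previous known score was
            # itself the draw (lo, lo)
            if a == b == lo:
                draws -= 1
        a, b = c, d
    return draws
```

On the samples: the sequence $(2,0), (3,1), (3,4)$ gives $2$ draws, a single known score $(0,0)$ gives $1$, and $(5,4)$ gives $5$.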
|
[
"greedy",
"implementation"
] | 1,400
| null |
1131
|
C
|
Birthday
|
Cowboy Vlad has a birthday today! There are $n$ children who came to the celebration. In order to greet Vlad, the children decided to form a circle around him. Among the children there are both tall and short ones, so if they stand in a circle in arbitrary order, a tall child may end up next to a short one, and it will be difficult for them to hold hands. Therefore, the children want to stand in a circle so that the maximum difference between the heights of two neighboring children is as small as possible.
Formally, let's number the children from $1$ to $n$ in circle order, that is, for every $i$ the child with number $i$ stands next to the child with number $i+1$; also, the child with number $1$ stands next to the child with number $n$. Then we will call the discomfort of the circle the maximum absolute difference of heights of children standing next to each other.
Please help children to find out how they should reorder themselves, so that the resulting discomfort is smallest possible.
|
The solution is a greedy one. Suppose we have sorted $a_i$, so that $a_i \le a_{i + 1}$. Then I claim that the answer can be built as follows: first write all elements with odd indices in increasing order, then all elements with even indices in reversed order. For example, if $n = 5$ we get $a_1, a_3, a_5 \mid a_4, a_2$, and if $n = 6$: $a_1, a_3, a_5 \mid a_6, a_4, a_2$. One can "check on many tests" that it works in practice, but here goes the proof. Note that this arrangement gives an answer of at most $\max_i (a_{i + 2} - a_{i})$. Let's show that for every $i$ the answer must be at least $a_{i + 2} - a_{i}$. To do this, view the children as vertices of a graph; then a solution to the problem is some Hamiltonian cycle. Suppose all jumps of length at least $a_{i + 2} - a_{i}$ are prohibited to us: then every edge between $\{1, \ldots, i\}$ and $\{i + 2, \ldots, n\}$ is forbidden. One can easily see that $a_{i+1}$ becomes a cut vertex, and no Hamiltonian cycle is possible. This concludes our proof!
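The construction is easy to express in Python (an illustrative sketch; 0-indexed slices stand in for the 1-indexed $a_1, a_3, \ldots$ description):

```python
def arrange_circle(heights):
    a = sorted(heights)
    # odd 1-indexed positions ascending, then even positions descending:
    # a1, a3, a5, ... | ..., a4, a2
    return a[0::2] + a[1::2][::-1]

def discomfort(circle):
    # maximum height difference of neighbors, including the wrap-around pair
    return max(abs(circle[i] - circle[i - 1]) for i in range(len(circle)))
```

For the sample heights $[2, 1, 1, 3, 2]$ this produces the circle $1, 2, 3, 2, 1$ with discomfort $1$.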
|
[
"binary search",
"greedy",
"sortings"
] | 1,200
| null |
1131
|
D
|
Gourmet choice
|
Mr. Apple, a gourmet, works as editor-in-chief of a gastronomic periodical. He travels around the world, tasting new delights of famous chefs from the most fashionable restaurants. Mr. Apple has his own signature method of review — in each restaurant Mr. Apple orders two sets of dishes on two different days. All the dishes are different, because Mr. Apple doesn't like to eat the same food. For each pair of dishes from different days he remembers exactly which was better, or that they were of the same quality. After this the gourmet evaluates each dish with a positive integer.
Once, during a review of a restaurant of Celtic medieval cuisine named «Poisson», which serves chestnut soup with fir, warm soda bread, spicy lemon pie and other folk food, the gourmet was very pleasantly surprised by the variety of the menu, and hence ordered too much. Now he's confused about evaluating the dishes.
The gourmet tasted a set of $n$ dishes on the first day and a set of $m$ dishes on the second day. He made a table $a$ of size $n \times m$, in which he described his impressions. If, according to the expert, dish $i$ from the first set was better than dish $j$ from the second set, then $a_{ij}$ is equal to ">", in the opposite case $a_{ij}$ is equal to "<". Dishes also may be equally good, in this case $a_{ij}$ is "=".
Now Mr. Apple wants you to help him to evaluate every dish. Since Mr. Apple is very strict, he will evaluate the dishes so that the maximal number used is as small as possible. But Mr. Apple also is very fair, so he never evaluates the dishes so that it goes against his feelings. In other words, if $a_{ij}$ is "<", then the number assigned to dish $i$ from the first set should be less than the number of dish $j$ from the second set, if $a_{ij}$ is ">", then it should be greater, and finally if $a_{ij}$ is "=", then the numbers should be the same.
Help Mr. Apple to evaluate each dish from both sets so that it is consistent with his feelings, or determine that this is impossible.
|
This task has different possible solutions. One of them is as follows: build a DSU over all $n+m$ dishes (Disjoint Set Union data structure, https://en.wikipedia.org/wiki/Disjoint-set_data_structure) and unite all dishes that must be evaluated with the same number according to the table (unite dishes $i$ and $n+j$ if $a_{ij}$ equals "="). Then create a graph on the resulting sets: iterate over all $i$, $j$ and, whenever one dish is better than the other, add a directed edge between the sets corresponding to $i$ and $n+j$, oriented from the worse dish to the better one. If the graph has a self-loop or a cycle, it's easy to see that the answer is impossible. Otherwise, process vertices in topological order and assign each vertex the least number greater than the numbers of all vertices it must exceed. This is the answer.
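An illustrative Python sketch of this solution (names are my own): DSU with path compression for the equality classes, then longest-path numbering via Kahn's topological sort.

```python
from collections import deque

def evaluate(table, n, m):
    # table[i][j] in '<', '>', '='; dishes 0..n-1 (day one), n..n+m-1 (day two)
    total = n + m
    parent = list(range(total))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    # merge equality classes
    for i in range(n):
        for j in range(m):
            if table[i][j] == '=':
                parent[find(i)] = find(n + j)

    # directed edge from the worse class to the better one
    adj = [set() for _ in range(total)]
    indeg = [0] * total
    for i in range(n):
        for j in range(m):
            if table[i][j] == '=':
                continue
            u, v = find(i), find(n + j)
            if table[i][j] == '>':
                u, v = v, u
            if u == v:
                return None  # contradiction inside one equality class
            if v not in adj[u]:
                adj[u].add(v)
                indeg[v] += 1

    # longest-path numbering via Kahn's topological sort over representatives
    roots = [v for v in range(total) if find(v) == v]
    num = [1] * total
    q = deque(v for v in roots if indeg[v] == 0)
    processed = 0
    while q:
        u = q.popleft()
        processed += 1
        for v in adj[u]:
            num[v] = max(num[v], num[u] + 1)
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    if processed != len(roots):
        return None  # a cycle remains, so no valid evaluation exists
    return [num[find(v)] for v in range(total)]
```

The longest-path numbering simultaneously satisfies all constraints and minimizes the largest number used.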
|
[
"dfs and similar",
"dp",
"dsu",
"graphs",
"greedy"
] | 2,000
| null |
1131
|
E
|
String Multiplication
|
Roman and Denis are on a trip to a programming competition. Since the trip was long, they soon got bored, and hence decided to come up with something. Roman invented a pizza recipe, while Denis invented string multiplication. According to Denis, the product of a string $s$ of length $m$ and a string $t$ is the string $t + s_1 + t + s_2 + \ldots + t + s_m + t$, where $s_i$ denotes the $i$-th symbol of the string $s$, and "+" denotes string concatenation. For example, the product of strings "abc" and "de" is the string "deadebdecde", while the product of the strings "ab" and "z" is the string "zazbz". Note that, unlike number multiplication, the product of strings $s$ and $t$ is not necessarily equal to the product of $t$ and $s$.
Roman was jealous of Denis, since he invented such a cool operation, and hence decided to invent something string-related too. Since Roman is a beauty-lover, he decided to define the beauty of a string as the length of its longest substring consisting of only one letter. For example, the beauty of the string "xayyaaabca" is equal to $3$, since there is a substring "aaa", while the beauty of the string "qwerqwer" is equal to $1$, since all neighboring symbols in it are different.
In order to entertain Roman, Denis wrote down $n$ strings $p_1, p_2, p_3, \ldots, p_n$ on the paper and asked him to calculate the beauty of the string $(\ldots((p_1 \cdot p_2) \cdot p_3) \cdot \ldots) \cdot p_n$, where $s \cdot t$ denotes the multiplication of strings $s$ and $t$. Roman hasn't fully realized how Denis's multiplication works, so he asked you for help. Denis knows that Roman is very impressionable; he guarantees that the beauty of the resulting string is at most $10^9$.
|
Let's notice that the string multiplication is associative, that is, $(a \cdot b) \cdot c = a \cdot (b \cdot c)$. So instead of the left-associative "$(a \cdot b) \cdot c$" given in the statement, let's use "$a \cdot (b \cdot c)$". That is, we start with $s_n$, then go to $s_{n-1} \cdot s_n$, then $s_{n-2} \cdot (s_{n-1} \cdot s_n)$ and so on. One can also solve the problem without observing the associativity property, going with $s_1 \to s_1 \cdot s_2 \to (s_1 \cdot s_2) \cdot s_3$ and so on. However, there is one caveat. Since the string grows very fast, "an answer" will grow as well. And while you are promised that the final answer is at most $10^9$, observe the following situation: if $s_1, s_2, \ldots, s_{100}$ are all "aaaaaaaaaaaaaaaaaaaa", the intermediate answer becomes quite large, but if you append $s_{101}$ equal to "c", it collapses to a mere $1$. So this direction requires careful handling: basically, for every value $x$ you want to store $\min(x, 10^9)$ instead. Going in the other direction has the advantage that once some value is large it stays large forever, so since the answer is at most $10^9$, no overflows will happen. Now back to the suggested solution. Let's proceed as $s_n \to s_{n-1} \cdot s_n \to s_{n-2} \cdot (s_{n-1} \cdot s_n)$ and so on. Note that it's enough to store not the whole current string $s_{i} \cdot \ldots \cdot s_n$, but just some basic information about it:
- the length of the longest substring consisting of a single character;
- whether the whole string consists of a single character;
- its leftmost and rightmost characters;
- the length of its longest single-character prefix, and the same for the suffix.
It's easy to see that if you know these values for some string $s_{i} \cdot \ldots \cdot s_n$, you can also compute them for $s_{i - 1} \cdot (s_{i} \cdot \ldots \cdot s_n)$.
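Before implementing the optimized state merge, it helps to have a naive reference that directly builds the product and computes the beauty. This is only feasible for tiny inputs, since the product's length grows exponentially, but it is handy for testing the fast solution (illustrative Python, not the editorial's solution):

```python
def product(s, t):
    # Denis's multiplication: t + s[0] + t + s[1] + ... + t + s[m-1] + t
    return t + t.join(s) + t

def beauty(s):
    # length of the longest run of a single character
    best = run = 1
    for i in range(1, len(s)):
        run = run + 1 if s[i] == s[i - 1] else 1
        best = max(best, run)
    return best

def naive_answer(strings):
    # left-associative product (...((p1*p2)*p3)...)*pn, built explicitly
    res = strings[0]
    for p in strings[1:]:
        res = product(res, p)
    return beauty(res)
```

For example, `product("abc", "de")` is `"deadebdecde"`, matching the statement, and the samples $[a, b, a]$ and $[bnn, a]$ give beauties $3$ and $1$.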
|
[
"dp",
"greedy",
"strings"
] | 2,300
| null |
1131
|
F
|
Asya And Kittens
|
Asya loves animals very much. Recently, she purchased $n$ kittens, enumerated them from $1$ to $n$ and then put them into a cage. The cage consists of one row of $n$ cells, enumerated with integers from $1$ to $n$ from left to right. Adjacent cells had a partially transparent partition wall between them, hence there were $n - 1$ partitions originally. Initially, each cell contained exactly one kitten with some number.
Observing the kittens, Asya noticed, that they are very friendly and often a pair of kittens in neighboring cells wants to play together. So Asya started to remove partitions between neighboring cells. In particular, on the day $i$, Asya:
- Noticed, that the kittens $x_i$ and $y_i$, located in neighboring cells want to play together.
- Removed the partition between these two cells, effectively creating a single cell having all kittens from the two original cells.
Since Asya never put partitions back, after $n - 1$ days the cage contained a single cell holding all the kittens.
For every day, Asya remembers numbers of kittens $x_i$ and $y_i$, who wanted to play together, however she doesn't remember how she placed kittens in the cage in the beginning. Please help her and find any possible initial arrangement of the kittens into $n$ cells.
|
In this problem we are given a disjoint set union process with $n - 1$ steps, merging $n$ initial one-element sets into one $n$-element set. We have to put the elements into a linear array of cells so that the cells joined at each step of the process are immediate neighbours (i.e. not separated by other cells). This problem can be solved in $O(n \log n)$ or in $O(n \alpha(n))$ (where $\alpha(n)$ is the inverse Ackermann function) via the standard disjoint-set data structure, additionally storing the list of elements of each set. The simplest solution is based on the set-size version of the rank heuristic: store a mapping from each item to the id (representative) of its current set, and the inverse mapping from each set to the list of its elements. When we have to merge two sets $A$ and $B$, we make the smaller set part of the larger one and update the mappings, assigning new set ids to the moved elements in $O(\min(|A|, |B|))$ and concatenating the lists (which can be done in $O(1)$ or in $O(\min(|A|, |B|))$). This gives us $O(n \log n)$: an element cannot change its set more than $\log n$ times, because each change at least doubles the size of the element's set. In order to get $O(n \alpha(n))$, we have to use the disjoint set structure with both path compression and the rank heuristic, plus concatenation of lists done in $O(1)$.
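A Python sketch of the $O(n \log n)$ small-to-large variant described above (illustrative; the list concatenation here is eager, i.e. $O(\min(|A|, |B|))$ rather than $O(1)$, which is still within the $O(n \log n)$ bound):

```python
def initial_arrangement(n, merges):
    # merges: n-1 pairs (x, y) of kittens (1-based) whose cells were joined
    leader = list(range(n + 1))                  # kitten -> id of its current set
    members = {i: [i] for i in range(1, n + 1)}  # set id -> its cells, left to right
    for x, y in merges:
        a, b = leader[x], leader[y]
        if len(members[a]) < len(members[b]):
            a, b = b, a                          # relabel the smaller set
        for k in members[b]:
            leader[k] = a
        members[a].extend(members[b])            # the two blocks become adjacent
        del members[b]
    return members[leader[1]]                    # a valid left-to-right arrangement
```

Concatenating the two member lists in either order is valid: each set always occupies a contiguous block, and a merge just places the two blocks side by side.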
|
[
"constructive algorithms",
"dsu"
] | 1,700
| null |
1131
|
G
|
Most Dangerous Shark
|
Semyon participates in the most prestigious competition of the world ocean for the title of the most dangerous shark. During this competition sharks compete in different subjects: speed swimming, masking, map navigation and many others. Now Semyon is taking part in «destruction» contest.
During it, $m$ dominoes are placed in front of the shark. All dominoes are on the same line, but the heights of the dominoes may vary. The distance between adjacent dominoes is $1$. Moreover, each domino has its own cost value, expressed as an integer. The goal is to drop all the dominoes. To do this, the shark can push any domino to the left or to the right, and it will begin falling in this direction. If during the fall the domino touches other dominoes, they will also start falling in the same direction in which the original domino is falling, thus beginning a chain reaction, as a result of which many dominoes can fall. A falling domino touches another one if and only if the distance between them \textbf{was strictly less than} the height of the falling domino; the dominoes do not necessarily have to be adjacent.
Of course, any shark can easily drop all the dominoes in this way, so the goal is not to drop all the dominoes, but do it with a minimum cost. The cost of the destruction is the sum of the costs of dominoes that the shark needs to push to make all the dominoes fall.
Semyon has already won in the previous subjects, but is not smart enough to win in this one. Help Semyon and determine the minimum total cost of the dominoes he will have to push to make all the dominoes fall.
|
This problem can be solved using dynamic programming. Let $dp_i$ be the minimum cost to make the first $i$ dominoes fall. If the $i$-th domino itself was pushed (to the left), then $dp_i = \min(dp_j + c_i)$ over such $j$ that the pushed $i$-th domino fells all dominoes from $j + 1$ to $i$. Otherwise, some other domino was pushed to the right and felled domino $i$; then $dp_i = \min(dp_{j - 1} + c_j)$ over such $j$ that the $j$-th domino, pushed to the right, fells the $i$-th domino. Such a solution works in $O(m^2)$. We need to speed it up, i.e. find the possible $j$ for each $i$ faster. First, notice that the possible $j$'s form a subsegment, so we only need to find its boundary. This can be done with a stack, similarly to finding the nearest element greater than the current one. The other part of the solution we need to optimize is the range minimum query over the $dp$ values. That can easily be done with a segment tree or a Fenwick tree, but this requires $O(\log m)$ time per query, which is too slow. To make it faster we can use a stack again! Let's maintain a stack of increasing values of $dp_i$ (or $dp_i + c_i$, depending on the case). Because the segments on which we take minima are nested or non-intersecting, we can always drop all values after the optimum for each query. By amortized analysis, such a solution works in $O(m)$.
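As a reference point before the stack optimizations, the quadratic DP can be written directly. This illustrative Python sketch assumes the dominoes are already expanded into plain arrays of heights and costs with unit spacing (the actual problem's input is block-compressed), and simulates the chain reaction to find how far each push reaches:

```python
def min_destruction_cost(h, c):
    # h[i]: height of domino i, c[i]: cost of pushing it; unit spacing
    m = len(h)
    INF = float('inf')

    # Lf[j] / Rf[j]: leftmost / rightmost domino felled by pushing j
    # to the left / right, found by simulating the chain reaction
    Lf = [0] * m
    for j in range(m):
        l, k = j, j
        while k >= l:
            l = min(l, max(0, k - h[k] + 1))
            k -= 1
        Lf[j] = l
    Rf = [0] * m
    for j in range(m):
        r, k = j, j
        while k <= r:
            r = max(r, min(m - 1, k + h[k] - 1))
            k += 1
        Rf[j] = r

    dp = [0] + [INF] * m  # dp[i]: min cost so that dominoes 0..i-1 all fall
    for i in range(1, m + 1):
        # case 1: domino i-1 is pushed to the left, felling [Lf[i-1], i-1]
        for j in range(Lf[i - 1], i):
            dp[i] = min(dp[i], dp[j] + c[i - 1])
        # case 2: some domino p <= i-1 is pushed to the right and reaches i-1
        for p in range(i):
            if Rf[p] >= i - 1:
                dp[i] = min(dp[i], dp[p] + c[p])
    return dp[m]
```

Each DP transition's pushed domino lies inside the newly covered segment, and the segments along a DP path are disjoint, so no domino is ever pushed twice.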
|
[
"data structures",
"dp",
"two pointers"
] | 2,700
| null |
1132
|
A
|
Regular Bracket Sequence
|
A string is called a bracket sequence if it does not contain any characters other than "(" and ")". A bracket sequence is called regular if it is possible to obtain a correct arithmetic expression by inserting characters "+" and "1" into this sequence. For example, "", "(())" and "()()" are regular bracket sequences; "))" and ")((" are bracket sequences (but not regular ones), and "(a)" and "(1)+(1)" are not bracket sequences at all.
You have a number of strings; each string is a bracket sequence of length $2$. So, overall you have $cnt_1$ strings "((", $cnt_2$ strings "()", $cnt_3$ strings ")(" and $cnt_4$ strings "))". You want to write all these strings in some order, one after another; after that, you will get a long bracket sequence of length $2(cnt_1 + cnt_2 + cnt_3 + cnt_4)$. You wonder: is it possible to choose some order of the strings you have such that you will get a regular bracket sequence? \textbf{Note that you may not remove any characters or strings, and you may not add anything either}.
|
For a bracket sequence to be regular, it should have an equal number of opening and closing brackets. So, if $cnt_1 \ne cnt_4$, then it's impossible to construct any regular bracket sequence. $cnt_2$ is completely irrelevant to us, since inserting or removing a "()" substring doesn't change the regularity of the string we get. Almost the same applies to $cnt_3$, but we should have at least one "((" substring before it. So, if $cnt_3 > 0$ but $cnt_1 = 0$, there is no solution. In all other cases it is possible to order all the strings as follows: all strings "((", then all strings "()", then all strings ")(", then all strings "))".
|
[
"greedy",
"implementation"
] | 1,100
|
cnt = []
for i in range(4):
    cnt.append(int(input()))
# regular iff cnt1 == cnt4, and any ")(" piece needs at least one "((" piece first
if cnt[0] == cnt[3] and (cnt[2] == 0 or cnt[0] > 0):
    print(1)
else:
    print(0)
|
1132
|
B
|
Discounts
|
You came to a local shop and want to buy some chocolate bars. There are $n$ bars in the shop, $i$-th of them costs $a_i$ coins (and you want to buy all of them).
You have $m$ different coupons that allow you to buy chocolate bars. $i$-th coupon allows you to buy $q_i$ chocolate bars while you have to pay only for the $q_i - 1$ most expensive ones (so, the cheapest bar of those $q_i$ bars is for free).
You can use only one coupon; if you use coupon $i$, you have to choose $q_i$ bars and buy them using the coupon, and buy all the remaining $n - q_i$ bars without any discounts.
To decide which coupon to choose, you want to know what will be the minimum total amount of money you have to pay if you use one of the coupons optimally.
|
When using the $i$-th coupon, the bar we get for free should have at least $q_i - 1$ bars not cheaper than it. So, if we consider $a$ sorted in non-decreasing order, we cannot get a discount greater than $a_{n - q_i + 1}$. On the other hand, we can always get exactly this discount if we pick the $q_i$ most expensive bars to buy using the $i$-th coupon.
|
[
"greedy",
"sortings"
] | 900
|
#include <bits/stdc++.h>
using namespace std;

const int N = 300009;

int n;
int a[N];
long long res[N];

int main() {
    long long sum = 0;
    scanf("%d", &n);
    for (int i = 0; i < n; ++i) {
        scanf("%d", a + i);
        sum += a[i];
    }
    sort(a, a + n);
    memset(res, -1, sizeof res);
    int m;
    scanf("%d", &m);
    for (int i = 0; i < m; ++i) {
        int c;
        scanf("%d", &c);
        // the cheapest of the c most expensive bars is a[n - c] (0-indexed)
        if (res[c] == -1)
            res[c] = sum - a[n - c];
        printf("%lld\n", res[c]);
    }
    return 0;
}
|
1132
|
C
|
Painting the Fence
|
You have a long fence which consists of $n$ sections. Unfortunately, it is not painted, so you decided to hire $q$ painters to paint it. $i$-th painter will paint all sections $x$ such that $l_i \le x \le r_i$.
Unfortunately, you are on a tight budget, so you may hire only $q - 2$ painters. Obviously, only painters you hire will do their work.
You want to maximize the number of painted sections if you choose $q - 2$ painters optimally. A section is considered painted if at least one painter paints it.
|
Let $c_i$ be the number of painters painting the $i$-th section. Let's fix the first painter we won't take (denote his index as $x$) and decrease the values of the array $c$ on the range he paints. Then we build a new array $d$, such that $d_i$ is equal to $1$ if and only if $c_i = 1$, and $0$ otherwise. This array marks the sections that are painted by exactly one painter. After that we build the prefix sum array $p$ over $d$: $p_i = \sum\limits_{j=1}^{i} d_j$. This is done in $O(n)$. Now, for each remaining painter we can count the number of sections painted only by him: for painter $i$ it equals $p_{r_i} - p_{l_i - 1}$. Let's denote it as $res_i$. Finally, find the painter with the minimum value of $res_i$, and denote this minimum as $MinRes$. The answer (if we choose painter $x$ as one of the two that won't be hired) is equal to $cnt - MinRes$, where $cnt$ is the number of elements greater than $0$ in the array $c$ after removing painter $x$. And, of course, we should do the same for all painters $x$.
|
[
"brute force"
] | 1,700
|
#include <bits/stdc++.h>
using namespace std;
// try removing each painter in turn; solve(idx) recounts coverage (p2) and
// drops the best second painter using prefix sums of singly-covered sections (p3)
const int N = 5043;
int p1[N];
int p2[N];
int p3[N];
int n, q;
int l[N], r[N];
int solve(int idx)
{
memset(p1, 0, sizeof p1);
for(int i = 0; i < q; i++)
{
if(i == idx) continue;
p1[l[i]]++;
p1[r[i]]--;
}
memset(p2, 0, sizeof p2);
int c = 0;
for(int i = 0; i < n; i++)
{
c += p1[i];
p2[i] = c;
}
memset(p3, 0, sizeof p3);
for(int i = 0; i < n; i++)
p3[i + 1] = p3[i] + (p2[i] == 1 ? 1 : 0);
int ans = -int(1e9);
for(int i = 0; i < q; i++)
{
if(i == idx) continue;
ans = max(ans, p3[l[i]] - p3[r[i]]);
}
for(int i = 0; i < n; i++)
if(p2[i] > 0)
ans++;
return ans;
}
int main()
{
cin >> n >> q;
for(int i = 0; i < q; i++)
{
cin >> l[i] >> r[i];
l[i]--;
}
int ans = 0;
for(int i = 0; i < q; i++)
ans = max(ans, solve(i));
cout << ans << endl;
}
|
1132
|
D
|
Stressful Training
|
Berland SU holds yet another training contest for its students today. $n$ students came, each of them brought his laptop. However, it turned out that everyone has forgotten their charger!
Let students be numbered from $1$ to $n$. Laptop of the $i$-th student has charge $a_i$ at the beginning of the contest and it uses $b_i$ of charge per minute (i.e. if the laptop has $c$ charge at the beginning of some minute, it becomes $c - b_i$ charge at the beginning of the next minute). The whole contest lasts for $k$ minutes.
Polycarp (the coach of Berland SU) decided to buy a \textbf{single} charger so that all the students would be able to successfully finish the contest. He buys the charger at the same moment the contest starts.
Polycarp can choose to buy the charger with any non-negative (zero or positive) integer power output. The power output is chosen before the purchase, it can't be changed afterwards. Let the chosen power output be $x$. \textbf{At the beginning of each minute} (from the minute contest starts to the last minute of the contest) he can plug the charger into any of the student's laptops and use it for some \textbf{integer} number of minutes. If the laptop is using $b_i$ charge per minute then it will become $b_i - x$ per minute while the charger is plugged in. Negative power usage rate means that the laptop's charge is increasing. The charge of any laptop isn't limited, it can become infinitely large. The charger can be plugged in no more than one laptop at the same time.
A student successfully finishes the contest if the charge of his laptop is never below zero at the beginning of any minute (from the minute the contest starts to the last minute of the contest; zero charge is allowed). The charge of the laptop at the minute the contest ends doesn't matter.
Help Polycarp to determine the minimal possible power output the charger should have so that all the students are able to successfully finish the contest. Also report if no such charger exists.
|
The easiest part of the solution is to notice that if a charger of power $x$ works, then a charger of power $x + 1$ also works. Thus, binary search is applicable to the problem. $k$ is really small and only one laptop can be charged during each minute, so the check function can work in something polynomial in $k$ by searching for the right laptop to charge every minute. I claim that the greedy algorithm works: find the laptop whose charge drops below zero first, charge it for one minute as early as possible, and repeat until either you don't have time to charge the laptop (check returns false) or the contest is over (check returns true). Why does the greedy work? Well, consider any case where check returns false. If some laptop runs out of power, then all the minutes up to the current one are used to charge something. Moreover, you can free none of these minutes: the greedy charged all laptops as late as possible, so freeing some minute would lead to another laptop dying earlier. One way to implement this is the following: keep a heap of events (the time the $i$-th laptop dies), pop its head, add $x$ to its charge if its death time is greater than the number of charges already made, and push it back to the heap. That simulates the entire process in $O((n + k) \log n)$. Unfortunately, this may be too slow in some implementations. Let's try the following linear approach: maintain, instead of the heap, an array whose $i$-th cell contains the indices of all laptops that run out of charge at the beginning of minute $i$, and keep an iterator to the first non-empty position. Pop a single index out of this cell, charge that laptop and push it to its new position. You'll still make $k$ steps, and on each step you'll make $O(1)$ instant operations, so the simulation takes $O(n + k)$.
I'm not really sure how to build the maximal answer case; however, I can estimate the upper bound of the binary search. You can set $x$ so large that one minute of charging keeps any laptop alive until the end of the contest. Taking the smallest $a_i$, the greatest $b_i$ and the greatest $k$, the total usage is about $2 \cdot 10^5 \cdot 10^7 = 2 \cdot 10^{12}$. Thus, $2 \cdot 10^{12}$ will always be enough. Overall complexity: $O((n + k) \cdot \log ANS)$ (or $O((n + k) \cdot \log ANS \cdot \log n)$ if you are skillful enough to squeeze it in :D).
|
[
"binary search",
"greedy"
] | 2,300
|
#include <bits/stdc++.h>
#define forn(i, n) for (int i = 0; i < int(n); i++)
using namespace std;
// binary search on the charger power x; check(x) greedily charges the
// laptop that dies first (qr[t] = laptops dying at the start of minute t)
const int N = 200 * 1000 + 13;
const long long INF64 = 1e18;
int n, k;
long long a[N];
int b[N];
long long cur[N];
vector<int> qr[N];
bool check(long long x){
forn(i, k) qr[i].clear();
forn(i, n) cur[i] = a[i];
forn(i, n){
long long t = cur[i] / b[i] + 1;
cur[i] %= b[i];
if (t < k) qr[t].push_back(i);
}
int lst = 0;
forn(t, k){
while (lst < k && qr[lst].empty())
++lst;
if (lst <= t)
return false;
if (lst == k)
return true;
int i = qr[lst].back();
if (cur[i] + x < b[i]){
cur[i] += x;
continue;
}
qr[lst].pop_back();
long long nt = (cur[i] + x) / b[i];
cur[i] = (cur[i] + x) % b[i];
if (lst + nt < k)
qr[lst + nt].push_back(i);
}
return true;
}
int main() {
scanf("%d%d", &n, &k);
forn(i, n) scanf("%lld", &a[i]);
forn(i, n) scanf("%d", &b[i]);
long long l = 0, r = INF64;
while (l < r - 1){
long long m = (l + r) / 2;
if (check(m))
r = m;
else
l = m;
}
if (!check(r))
puts("-1");
else
printf("%lld\n", check(l) ? l : r);
return 0;
}
|
1132
|
E
|
Knapsack
|
You have a set of items, each having some integer weight not greater than $8$. You denote a subset of items as good if the total weight of items in the subset does not exceed $W$.
You want to calculate the maximum possible weight of a good subset of items. Note that you have to consider the empty set and the original set when calculating the answer.
|
Let's consider the optimal answer. Suppose we take $c_i$ items of weight $i$. Let $L$ be the least common multiple of all weights (that is, $840$). Then we may represent $c_i$ as $c_i = \frac{L}{i} p_i + q_i$, where $0 \le q_i < \frac{L}{i}$. Let's do the following trick: we will take $q_i$ items of weight $i$, and all the remaining items of this weight can be merged into some items of weight $L$. Then we can write a brute force solution that picks fewer than $\frac{L}{i}$ items of each weight, transforms the remaining ones into items of weight $L$ as much as possible and, once the whole subset is fixed, adds the maximum possible number of items of weight $L$ to the answer. This works in something like $\prod \limits_{i = 1}^8 \frac{L}{i} = \frac{L^8}{8!}$ operations, which is too much. How can we speed it up? Rewrite it using dynamic programming! When we have fixed the number of items we take from the first $x$ weights, the only two things that matter are the current total weight of taken items and the number of items of weight $L$ we can still use; and obviously, the more items of weight $L$ we can use, the better. So let's write the following dynamic programming solution: $dp[x][y]$ is the maximum number of items of weight $L$ we can have, if we processed the first $x$ types of items and the current total weight is $y$. Note that the second dimension should have size $8L$.
|
[
"dfs and similar",
"dp",
"greedy"
] | 2,300
|
#include <bits/stdc++.h>
using namespace std;
const int N = 9;
const int L = 840;
typedef long long li;
li dp[N][L * N];
li W;
li cnt[N];
int main()
{
cin >> W;
for(int i = 0; i < 8; i++)
cin >> cnt[i];
for(int i = 0; i < N; i++) for(int j = 0; j < L * N; j++) dp[i][j] = -1;
dp[0][0] = 0;
for(int i = 0; i < 8; i++)
{
for(int j = 0; j < L * N; j++)
{
if(dp[i][j] == -1) continue;
int b = L / (i + 1);
if(cnt[i] < b)
b = cnt[i];
for(int k = 0; k <= b; k++)
{
li& d = dp[i + 1][j + k * (i + 1)];
d = max(d, dp[i][j] + (cnt[i] - k) / (L / (i + 1)));
}
}
}
li ans = 0;
for(int j = 0; j < L * N; j++)
{
if(j > W || dp[8][j] == -1)
continue;
ans = max(ans, j + L * (min(dp[8][j], (W - j) / L)));
}
cout << ans << endl;
}
|
1132
|
F
|
Clear the String
|
You are given a string $s$ of length $n$ consisting of lowercase Latin letters. You may apply some operations to this string: in one operation you can delete some contiguous substring of this string, if all letters in the substring you delete are equal. For example, after deleting substring bbbb from string abbbbaccdd we get the string aaccdd.
Calculate the minimum number of operations to delete the whole string $s$.
|
We will solve the problem by dynamic programming. Let $dp_{l, r}$ be the answer for the substring $s_{l, l + 1, \dots, r}$. Then we have two cases: The first letter of the substring is deleted separately from the rest, then $dp_{l, r} = 1 + dp_{l + 1, r}$; The first letter of the substring is deleted alongside some other equal letter, then $dp_{l,r} = \min \limits_{l < i \le r, s_i = s_l} dp_{l + 1, i - 1} + dp_{i, r}$.
|
[
"dp"
] | 2,000
|
#include <bits/stdc++.h>
using namespace std;
const int N = 505;
int n;
string s;
int dp[N][N];
int calc(int l, int r){
int &res = dp[l][r];
if(res != -1) return res;
if(l > r) return res = 0;
if(l == r) return res = 1;
res = 1 + calc(l + 1, r);
for(int i = l + 1; i <= r; ++ i)
if(s[l] == s[i])
res = min(res, calc(l + 1, i - 1) + calc(i, r));
return res;
}
int main(){
cin >> n >> s;
memset(dp, -1, sizeof dp);
cout << calc(0, n - 1);
return 0;
}
|
1132
|
G
|
Greedy Subsequences
|
For some array $c$, let's denote a greedy subsequence as a sequence of indices $p_1$, $p_2$, ..., $p_l$ such that $1 \le p_1 < p_2 < \dots < p_l \le |c|$, and for each $i \in [1, l - 1]$, $p_{i + 1}$ is the minimum number such that $p_{i + 1} > p_i$ and $c[p_{i + 1}] > c[p_i]$.
You are given an array $a_1, a_2, \dots, a_n$. For each its subsegment of length $k$, calculate the length of its longest greedy subsequence.
|
Let's calculate for each position $i$ the position $nxt_i$ ("the closest greater from the right" element for $i$) and add a directed edge from $i$ to $nxt_i$. Then we get an oriented forest (or a tree, if we add a fictive root vertex) where all edges are directed towards the roots. So we can view the current subsegment we need to answer for as a set of marked vertices in the tree. The answer itself is then the longest upward path in the tree consisting only of marked vertices. The key observation is the following: if $u$ and $v$ are marked and $u$ is an ancestor of $v$, then any vertex $y$ on the path from $v$ to $u$ is also marked. So "the longest upward path consisting only of marked vertices" has length equal to the maximum number of marked vertices on a path to the root. And we have three types of queries: mark a vertex, unmark a vertex, and calculate the maximum number of marked vertices among all paths to the root. It can be done with a Segment Tree over the Euler Tour of the tree: if we calculate $T_{in}$ and $T_{out}$ for each vertex in DFS order, then marking/unmarking is just adding $\pm 1$ on the segment $[T_{in}, T_{out})$, and the maximum among all paths is the maximum over the whole tree. The resulting time complexity is $O(n \log{n})$ and the space complexity is $O(n)$.
|
[
"data structures",
"dp",
"trees"
] | 2,400
|
#include<bits/stdc++.h>
using namespace std;
#define fore(i, l, r) for(int i = int(l); i < int(r); i++)
#define sz(a) int((a).size())
#define x first
#define y second
int n, k;
vector<int> a;
inline bool read() {
if(!(cin >> n >> k))
return false;
a.assign(n, 0);
fore(i, 0, n)
cin >> a[i];
return true;
}
int T = 0;
vector< vector<int> > g;
vector<int> tin, tout;
void dfs(int v) {
tin[v] = T++;
for(int to : g[v])
dfs(to);
tout[v] = T;
}
vector<int> Tmax, Tadd;
inline void push(int v) {
Tadd[2 * v + 1] += Tadd[v];
Tadd[2 * v + 2] += Tadd[v];
Tadd[v] = 0;
}
inline int getmax(int v) {
return Tmax[v] + Tadd[v];
}
void addVal(int v, int l, int r, int lf, int rg, int val) {
if(l == lf && r == rg) {
Tadd[v] += val;
return;
}
int mid = (l + r) >> 1;
push(v);
if(lf < mid) addVal(2 * v + 1, l, mid, lf, min(mid, rg), val);
if(rg > mid) addVal(2 * v + 2, mid, r, max(lf, mid), rg, val);
Tmax[v] = max(getmax(2 * v + 1), getmax(2 * v + 2));
}
inline void solve() {
g.resize(n + 1);
tin.resize(n + 1, 0);
tout.resize(n + 1, 0);
vector<int> st;
for(int i = n - 1; i >= 0; i--) {
while(!st.empty() && a[st.back()] <= a[i])
st.pop_back();
int nxt = st.empty() ? n : st.back();
g[nxt].push_back(i);
st.push_back(i);
}
dfs(n);
Tmax.assign(4 * (n + 1), 0);
Tadd.assign(4 * (n + 1), 0);
fore(i, 0, k - 1)
addVal(0, 0, n + 1, tin[i], tout[i], +1);
for(int l = 0; l + k <= n; l++) {
addVal(0, 0, n + 1, tin[l + k - 1], tout[l + k - 1], +1);
cout << getmax(0) << " ";
addVal(0, 0, n + 1, tin[l], tout[l], -1);
}
cout << endl;
}
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
int tt = clock();
#endif
ios_base::sync_with_stdio(0);
cin.tie(0), cout.tie(0);
cout << fixed << setprecision(15);
if(read()) {
solve();
#ifdef _DEBUG
cerr << "TIME = " << clock() - tt << endl;
tt = clock();
#endif
}
return 0;
}
|
1133
|
A
|
Middle of the Contest
|
Polycarp is going to participate in the contest. It starts at $h_1:m_1$ and ends at $h_2:m_2$. It is guaranteed that the contest lasts an even number of minutes (i.e. $m_1 \% 2 = m_2 \% 2$, where $x \% y$ is $x$ modulo $y$). It is also guaranteed that the entire contest is held during a single day. And finally it is guaranteed that the contest lasts at least two minutes.
Polycarp wants to know the time of the midpoint of the contest. For example, if the contest lasts from $10:00$ to $11:00$ then the answer is $10:30$, if the contest lasts from $11:10$ to $11:12$ then the answer is $11:11$.
|
Firstly, let's parse both strings into four integers ($h_1, m_1, h_2, m_2$). Just read them and then use some standard functions to transform them into integers (or we can do it manually). The second part is to obtain $t_1 = h_1 \cdot 60 + m_1$ and $t_2 = h_2 \cdot 60 + m_2$. Then let $t_3 = \frac{t_1 + t_2}{2}$ (the sum is even because the contest lasts an even number of minutes). It is the answer. We have to print $h_3 = \lfloor \frac{t_3}{60} \rfloor$ and $m_3 = t_3 \% 60$, where $\lfloor \frac{a}{b} \rfloor$ is $a$ divided by $b$ rounding down and $a \% b$ is $a$ modulo $b$. The only thing we should do more carefully is to print one leading zero before $h_3$ if it is less than $10$, and do the same for $m_3$.
|
[
"implementation"
] | 1,000
|
#include <bits/stdc++.h>
using namespace std;
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
// freopen("output.txt", "w", stdout);
#endif
int h1, m1;
scanf("%d:%d", &h1, &m1);
int h2, m2;
scanf("%d:%d", &h2, &m2);
int t1 = h1 * 60 + m1;
int t2 = h2 * 60 + m2;
int t3 = (t1 + t2) / 2;
printf("%02d:%02d\n", t3 / 60, t3 % 60);
return 0;
}
|
1133
|
B
|
Preparation for International Women's Day
|
International Women's Day is coming soon! Polycarp is preparing for the holiday.
There are $n$ candy boxes in the shop for sale. The $i$-th box contains $d_i$ candies.
Polycarp wants to prepare the maximum number of gifts for $k$ girls. Each gift will consist of \textbf{exactly two} boxes. The girls should be able to share each gift equally, so the total amount of candies in a gift (in a pair of boxes) should be divisible by $k$. In other words, two boxes $i$ and $j$ ($i \ne j$) can be combined as a gift if $d_i + d_j$ is divisible by $k$.
How many boxes will Polycarp be able to give? Of course, each box can be a part of no more than one gift. Polycarp cannot use boxes "partially" or redistribute candies between them.
|
Let $cnt_i$ be the number of boxes with $i$ candies modulo $k$. Firstly, the number of pairs of boxes we can obtain using two boxes with remainder $0$ modulo $k$ is $\lfloor\frac{cnt_0}{2}\rfloor$. Secondly, if $k$ is even then we also can obtain pairs of boxes using two boxes with remainder $\frac{k}{2}$ modulo $k$, and their number is $\lfloor\frac{cnt_{\frac{k}{2}}}{2}\rfloor$. And for any other remainder $i$ from $1$ to $\lceil\frac{k}{2}\rceil - 1$ the number of pairs of boxes is $\min(cnt_{i}, cnt_{k - i})$, because a box with remainder $i$ must be paired with a box with remainder $k - i$. So if we sum up all these values, the answer is this sum multiplied by two (because we have to print the number of boxes, not pairs).
|
[
"math",
"number theory"
] | 1,200
|
#include <bits/stdc++.h>
using namespace std;
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
// freopen("output.txt", "w", stdout);
#endif
int n, k;
cin >> n >> k;
vector<int> cnt(k);
for (int i = 0; i < n; ++i) {
int x;
cin >> x;
++cnt[x % k];
}
int ans = cnt[0] / 2;
if (k % 2 == 0) ans += cnt[k / 2] / 2;
for (int i = 1; i < (k + 1) / 2; ++i) {
int j = k - i;
ans += min(cnt[i], cnt[j]);
}
cout << ans * 2 << endl;
return 0;
}
|
1133
|
C
|
Balanced Team
|
You are a coach at your local university. There are $n$ students under your supervision, the programming skill of the $i$-th student is $a_i$.
You have to create a team for a new programming competition. As you know, the more students some team has the more probable its victory is! So you have to create a team with the maximum number of students. But you also know that a team should be balanced. It means that the programming skill of each pair of students in a created team should differ by no more than $5$.
Your task is to report the maximum possible number of students in a balanced team.
|
Let's sort all values in non-decreasing order. Then we can use two pointers to calculate for each student $i$ the maximum number of students $j$ such that $a_j - a_i \le 5$ ($j > i$). This is a pretty standard approach. We can also use binary search to do it (or we can store for each programming skill the number of students with this skill and just iterate from some skill $x$ to $x + 5$ and sum up all the numbers of students).
|
[
"sortings",
"two pointers"
] | 1,200
|
#include <bits/stdc++.h>
using namespace std;
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
// freopen("output.txt", "w", stdout);
#endif
int n;
cin >> n;
vector<int> a(n);
for (int i = 0; i < n; ++i) {
cin >> a[i];
}
sort(a.begin(), a.end());
int ans = 0;
int j = 0;
for (int i = 0; i < n; ++i) {
while (j < n && a[j] - a[i] <= 5) {
++j;
ans = max(ans, j - i);
}
}
cout << ans << endl;
return 0;
}
|
1133
|
D
|
Zero Quantity Maximization
|
You are given two arrays $a$ and $b$, each contains $n$ integers.
You want to create a new array $c$ as follows: choose some real (i.e. not necessarily integer) number $d$, and then for every $i \in [1, n]$ let $c_i := d \cdot a_i + b_i$.
Your goal is to maximize the number of zeroes in array $c$. What is the largest possible answer, if you choose $d$ optimally?
|
For each index $i \in [1,n]$ let's try to find which $d$ we should use in order to make the $i$-th element of $c$ equal to zero. If $a_i = 0$, then $c_i = b_i$ no matter which $d$ we choose. So we should just ignore this index and add $1$ to the answer if $b_i = 0$. Otherwise, we should choose $d = -\frac{b_i}{a_i}$. Let's calculate the required fraction for each index, and among all fractions find the one that fits the most indices (this can be done, for example, by storing all fractions in a map). The only thing that's left to analyze is how to compare the fractions, because floating-point numbers may be not precise enough. Let's store each fraction as a pair of integers $(x, y)$, where $x$ is the numerator and $y$ is the denominator. We should normalize each fraction as follows: firstly, we reduce it by finding the greatest common divisor of $x$ and $y$ and dividing both numbers by it. Secondly, we should ensure that the numerator is non-negative, and if the numerator is zero, then the denominator should also be non-negative (this can be achieved by multiplying both numbers by $-1$).
|
[
"hashing",
"math",
"number theory"
] | 1,500
|
#include<bits/stdc++.h>
using namespace std;
#define x first
#define y second
const int N = 200043;
void norm(pair<int, int>& p)
{
if(p.x < 0)
{
p.x *= -1;
p.y *= -1;
}
else if (p.x == 0 && p.y < 0)
{
p.y *= -1;
}
int d = __gcd(abs(p.x), abs(p.y));
p.x /= d;
p.y /= d;
}
map<pair<int, int>, int> m;
int a[N];
int b[N];
int n;
int main()
{
scanf("%d", &n);
for(int i = 0; i < n; i++)
scanf("%d", &a[i]);
for(int i = 0; i < n; i++)
scanf("%d", &b[i]);
int ans = 0;
int cnt0 = 0;
for(int i = 0; i < n; i++)
{
if(a[i] == 0)
{
if(b[i] == 0)
cnt0++;
}
else
{
pair<int, int> p = make_pair(-b[i], a[i]);
norm(p);
m[p]++;
ans = max(ans, m[p]);
}
}
cout << cnt0 + ans << endl;
}
|
1133
|
E
|
K Balanced Teams
|
You are a coach at your local university. There are $n$ students under your supervision, the programming skill of the $i$-th student is $a_i$.
You have to form $k$ teams for yet another new programming competition. As you know, the more students are involved in competition the more probable the victory of your university is! So you have to form no more than $k$ (and at least one) \textbf{non-empty} teams so that the \textbf{total} number of students in them is maximized. But you also know that \textbf{each} team should be balanced. It means that the programming skill of each pair of students in \textbf{each} team should differ by no more than $5$. Teams are independent from one another (it means that the difference between programming skills of two students from two different teams does not matter).
It is possible that some students are not included in any team at all.
Your task is to report the maximum possible \textbf{total} number of students in no more than $k$ (and at least one) \textbf{non-empty} balanced teams.
\textbf{If you are Python programmer, consider using PyPy instead of Python when you submit your code.}
|
Firstly, let's sort all students in order of non-decreasing programming skill. Then let's calculate the following dynamic programming: $dp_{i, j}$ is the maximum number of students in at most $j$ non-empty teams if we consider the first $i$ students. How to do transitions from $dp_{i, j}$? The first transition is pretty intuitive: just skip the $i$-th (0-indexed) student. Then we can set $dp_{i + 1, j} := \max(dp_{i + 1, j}, dp_{i, j})$. The second possible transition is to take some team starting from the $i$-th student. The only assumption we need is the following: taking the maximum-by-size team starting from the $i$-th student is always optimal. Why is it so? If we consider the student with the maximum programming skill in the team, we can take him into this team instead of forming a new team with this student, because this is always not worse. So the second transition is the following: let $cnt_i$ be the number of students in a team if the $i$-th student is the first in it. We can calculate this part in $O(n^2)$ naively or in $O(n)$ using two pointers. We can set $dp_{i + cnt_i, j + 1} := \max(dp_{i + cnt_i, j + 1}, dp_{i, j} + cnt_i)$. Time complexity: $O(n(n + k))$.
|
[
"dp",
"sortings",
"two pointers"
] | 1,800
|
#include <bits/stdc++.h>
using namespace std;
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
// freopen("output.txt", "w", stdout);
#endif
int n, k;
cin >> n >> k;
vector<int> a(n);
for (int i = 0; i < n; ++i) {
cin >> a[i];
}
sort(a.begin(), a.end());
vector<int> cnt(n);
for (int i = 0; i < n; ++i) {
while (i + cnt[i] < n && a[i + cnt[i]] - a[i] <= 5) {
++cnt[i];
}
}
vector<vector<int>> dp(n + 1, vector<int>(k + 1));
for (int i = 0; i < n; ++i) {
for (int j = 0; j <= k; ++j) {
dp[i + 1][j] = max(dp[i + 1][j], dp[i][j]);
if (j + 1 <= k) {
dp[i + cnt[i]][j + 1] = max(dp[i + cnt[i]][j + 1], dp[i][j] + cnt[i]);
}
}
}
cout << dp[n][k] << endl;
return 0;
}
|
1133
|
F1
|
Spanning Tree with Maximum Degree
|
You are given an undirected unweighted connected graph consisting of $n$ vertices and $m$ edges. It is guaranteed that there are no self-loops or multiple edges in the given graph.
Your task is to find \textbf{any} spanning tree of this graph such that the maximum degree over all vertices is maximum possible. Recall that the degree of a vertex is the number of edges incident to it.
|
We can take any vertex with the maximum degree together with all its neighbours, and complete the rest of the spanning tree arbitrarily. To implement it, just run a BFS from a vertex with the maximum degree. See the author's solution for better understanding.
|
[
"graphs"
] | 1,600
|
#include <bits/stdc++.h>
using namespace std;
int n, m;
vector<vector<int>> g;
vector<int> deg, used;
vector<pair<int, int>> ans;
mt19937 rnd(time(NULL));
void bfs(int s) {
used = vector<int>(n);
used[s] = 1;
queue<int> q;
q.push(s);
while (!q.empty()) {
int v = q.front();
q.pop();
for (auto to : g[v]) {
if (used[to]) continue;
if (rnd() & 1) ans.push_back(make_pair(v, to));
else ans.push_back(make_pair(to, v));
used[to] = 1;
q.push(to);
}
}
}
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
// freopen("output.txt", "w", stdout);
#endif
cin >> n >> m;
g = vector<vector<int>>(n);
deg = vector<int>(n);
for (int i = 0; i < m; ++i) {
int x, y;
cin >> x >> y;
--x, --y;
g[x].push_back(y);
g[y].push_back(x);
++deg[x];
++deg[y];
}
int pos = 0;
for (int i = 0; i < n; ++i) {
if (deg[pos] < deg[i]) {
pos = i;
}
}
bfs(pos);
shuffle(ans.begin(), ans.end(), rnd);
for (auto it : ans) cout << it.first + 1 << " " << it.second + 1 << endl;
return 0;
}
|
1133
|
F2
|
Spanning Tree with One Fixed Degree
|
You are given an undirected unweighted connected graph consisting of $n$ vertices and $m$ edges. It is guaranteed that there are no self-loops or multiple edges in the given graph.
Your task is to find \textbf{any} spanning tree of this graph such that the \textbf{degree of the first vertex (vertex with label $1$ on it)} is equal to $D$ (or say that there are no such spanning trees). Recall that the degree of a vertex is the number of edges incident to it.
|
Firstly, let's remove the vertex $1$ from the graph. Then let's calculate the number of connected components; let it be $cnt$. The answer is "NO" if and only if $cnt > D$ or $D$ is greater than the number of edges incident to the first vertex. Otherwise let's construct the answer. Firstly, let's add into the new graph spanning trees of the components of the initial graph without vertex $1$. Then let's add into the new graph $cnt$ edges from vertex $1$: one edge to each component. Then let's add any $D - cnt$ of the remaining edges from vertex $1$. The last thing we need is to construct a spanning tree of the new graph such that all edges incident to the vertex $1$ are in this spanning tree (the other edges don't matter). How to do it? Let's run a BFS from the vertex $1$ in the new graph!
|
[
"constructive algorithms",
"dfs and similar",
"dsu",
"graphs",
"greedy"
] | 1,900
|
#include <bits/stdc++.h>
using namespace std;
int n, m, D;
vector<vector<int>> g;
int cnt;
vector<int> p, color;
vector<pair<int, int>> ans;
mt19937 rnd(time(NULL));
void bfs(int s, int bad) {
queue<int> q;
q.push(s);
color[s] = cnt;
while (!q.empty()) {
int v = q.front();
q.pop();
if (p[v] != -1) {
if (rnd() & 1) ans.push_back(make_pair(p[v], v));
else ans.push_back(make_pair(v, p[v]));
}
for (auto to : g[v]) {
if (to == bad || color[to] != -1) continue;
p[to] = v;
color[to] = cnt;
q.push(to);
}
}
++cnt;
}
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
// freopen("output.txt", "w", stdout);
#endif
cin >> n >> m >> D;
g = vector<vector<int>>(n);
for (int i = 0; i < m; ++i) {
int x, y;
cin >> x >> y;
--x, --y;
g[x].push_back(y);
g[y].push_back(x);
}
p = color = vector<int>(n, -1);
cnt = 0;
for (int i = 1; i < n; ++i) {
if (color[i] == -1) {
bfs(i, 0);
}
}
if (cnt > D || D > int(g[0].size())) {
cout << "NO" << endl;
return 0;
}
sort(g[0].begin(), g[0].end(), [](int a, int b) {
return color[a] < color[b];
});
for (int i = 0; i < int(g[0].size()); ++i) {
if (i == 0 || color[g[0][i]] != color[g[0][i - 1]]) {
ans.push_back(make_pair(0, g[0][i]));
}
}
D -= cnt;
for (int i = 1; i < int(g[0].size()); ++i) {
if (D == 0) break;
if (color[g[0][i]] == color[g[0][i - 1]]) {
ans.push_back(make_pair(0, g[0][i]));
--D;
}
}
g = vector<vector<int>>(n);
for (auto it : ans) {
g[it.first].push_back(it.second);
g[it.second].push_back(it.first);
}
ans.clear();
p = color = vector<int>(n, -1);
cnt = 0;
bfs(0, -1);
shuffle(ans.begin(), ans.end(), rnd);
cout << "YES" << endl;
for (auto it : ans) {
cout << it.first + 1 << " " << it.second + 1 << endl;
}
return 0;
}
|
1136
|
A
|
Nastya Is Reading a Book
|
After lessons Nastya decided to read a book. The book contains $n$ chapters, going one after another, so that one page of the book belongs to exactly one chapter and each chapter contains at least one page.
Yesterday evening Nastya did not manage to finish reading the book, so she marked the page with number $k$ as the first page which was not read (i.e. she read all pages from the $1$-st to the $(k-1)$-th).
The next day Nastya's friend Igor came and asked her, how many chapters remain to be read by Nastya? Nastya is too busy now, so she asks you to compute the number of chapters she has not completely read yet (i.e. the number of chapters she has not started to read or has finished reading somewhere in the middle).
|
It is easy to see that the first not completely read chapter is the last chapter with $l_i \leq k$, where $l_i$ is the number of its first page. In order to find it we can iterate through the list of chapters in increasing order.
|
[
"implementation"
] | 800
|
"#include <bits/stdc++.h>\nusing namespace std;\n#define int long long\nconst int N = 101;\nint n;\npair <int, int> a[N];\nsigned main() {\n #ifdef HOME\n freopen(\"input.txt\", \"r\", stdin);\n #else\n ios_base::sync_with_stdio(0); cin.tie(0);\n #endif\n cin >> n;\n for (int i = 0; i < n; ++i) cin >> a[i].first >> a[i].second;\n int p;\n cin >> p;\n for (int i = n - 1; i >= 0; --i) {\n if (a[i].first <= p) {\n cout << n - i << '\\n';\n exit(0);\n } \n } \n}"
|
1136
|
B
|
Nastya Is Playing Computer Games
|
Finished her homework, Nastya decided to play computer games. Passing levels one by one, Nastya eventually faced a problem. Her mission is to leave a room, where a lot of monsters live, as quickly as possible.
There are $n$ manholes in the room which are situated on one line, but, unfortunately, all the manholes are closed, and there is one stone on every manhole. There is exactly one coin under every manhole, and to win the game Nastya should pick all the coins. Initially Nastya stands near the $k$-th manhole from the left. She is thinking what to do.
In one turn, Nastya can do one of the following:
- if there is at least one stone on the manhole Nastya stands near, throw exactly one stone from it onto any other manhole (yes, Nastya is strong).
- go to a neighboring manhole;
- if there are no stones on the manhole Nastya stays near, she can open it and pick the coin from it. After it she must close the manhole immediately (it doesn't require additional moves).
\begin{center}
{\small The figure shows the intermediate state of the game. At the current position Nastya can throw the stone to any other manhole or move left or right to the neighboring manholes. If she were near the leftmost manhole, she could open it (since there are no stones on it).}
\end{center}
Nastya can leave the room when she picks all the coins. Monsters are everywhere, so you need to compute the minimum number of moves Nastya has to make to pick all the coins.
Note once more that Nastya can open a manhole only when there are no stones on it.
|
Note that in any case we will open all $n$ hatches, which takes $n$ moves. Also, the initial position of stones is $1, 1, 1, \dots, 1$ ($1$ is the number of stones on the $i$-th hatch). After any throw we get a configuration like $2, 0, 1, 1, \dots$, so to open the hatch with $2$ stones on it we need at least $2$ throws; in total we need at least $n + 1$ throws. To get to all the hatches we need at least $\min(n - k, k - 1) + n - 1$ movements (since we can only go to neighboring hatches). So, in total, the answer is $(n + 1) + n + (n - 1) + \min(n - k, k - 1) = 3n + \min(n - k, k - 1)$.
|
[
"constructive algorithms",
"math"
] | 1,000
|
"#include<bits/stdc++.h>\nusing namespace std;\n#define int long long\nsigned main() {\n #ifdef HOME\n freopen(\"input.txt\", \"r\", stdin);\n #else\n ios_base::sync_with_stdio(0); cin.tie(0); cout.precision(20);\n #endif\n int n, p;\n cin >> n >> p;\n cout << (2 * n + 1) + (n - 1) + min(p - 1, n - p) << '\\n';\n}"
|
1136
|
C
|
Nastya Is Transposing Matrices
|
Nastya came to her informatics lesson, and her teacher who is, by the way, a little bit famous here gave her the following task.
Two matrices $A$ and $B$ are given, each of them has size $n \times m$. Nastya can perform the following operation to matrix $A$ unlimited number of times:
- take any square submatrix of $A$ and transpose it (i.e. the element of the submatrix which was in the $i$-th row and $j$-th column of the submatrix will be in the $j$-th row and $i$-th column after transposing, and the transposed submatrix itself will keep its place in the matrix $A$).
Nastya's task is to check whether it is possible to transform the matrix $A$ to the matrix $B$.
\begin{center}
{\small Example of the operation}
\end{center}
As it may require a lot of operations, you are asked to answer this question for Nastya.
A square submatrix of matrix $M$ is a matrix which consists of all elements which come from one of the rows with indices $x, x+1, \dots, x+k-1$ of matrix $M$ and from one of the columns with indices $y, y+1, \dots, y+k-1$ of matrix $M$. $k$ is the size of the square submatrix. In other words, a square submatrix is the set of elements of the source matrix which form a solid square (i.e. without holes).
|
Let's note that after applying the operation the multiset of numbers on each diagonal (which goes up and to the right) stays the same. Also, we can get any permutation of numbers on any diagonal, because we can swap neighboring elements on a diagonal by transposing a $2 \times 2$ submatrix. So we just need to check that the multisets of numbers on corresponding diagonals are the same.
|
[
"constructive algorithms",
"sortings"
] | 1,500
|
"#include <iostream>\n#include <vector>\n#include <algorithm>\nusing namespace std;\n#define int long long\nconst int MAXN = 500;\nint a[MAXN][MAXN];\nint b[MAXN][MAXN];\nvector<int> aa[MAXN * 2];\nvector<int> bb[MAXN * 2];\nsigned main() {\n\tios_base::sync_with_stdio(false);\n\n\tint n, m;\n\tcin >> n >> m;\n\tfor (int i = 0; i < n; ++i)\n\t\tfor (int j = 0; j < m; ++j) {\n\t\t\tcin >> a[i][j];\n\t\t\taa[i + j].push_back(a[i][j]);\n\t\t}\n\tfor (int i = 0; i < n; ++i)\n\t\tfor (int j = 0; j < m; ++j) {\n\t\t\tcin >> b[i][j];\n\t\t\tbb[i + j].push_back(b[i][j]);\n\t\t}\n\tbool ok = 1;\n\tfor (int i = 0; i < MAXN * 2; ++i) {\n\t\tsort(aa[i].begin(), aa[i].end());\n\t\tsort(bb[i].begin(), bb[i].end());\n\t\tif (aa[i] != bb[i])\n \t\t\tok = 0;\n\t}\n\tif (ok)\n\t\tcout << \"YES\";\n\telse\n\t\tcout << \"NO\";\n\treturn 0;\n}"
|
1136
|
D
|
Nastya Is Buying Lunch
|
At the big break Nastya came to the school dining room. There are $n$ pupils in the school, numbered from $1$ to $n$. Unfortunately, Nastya came pretty late, so that all pupils had already stood in the queue, i.e. Nastya took the last place in the queue. Of course, it's a little bit sad for Nastya, but she is not going to despond because some pupils in the queue can agree to change places with some other pupils.
Formally, there are some pairs $u$, $v$ such that if the pupil with number $u$ stands directly in front of the pupil with number $v$, Nastya can ask them and they will change places.
Nastya asks you to find the maximal number of places in queue she can move forward.
|
Solution 1: Let's solve the problem iterating from the end, adding pupils one by one, i.e. for every suffix we solve the original problem restricted to the pupils of this suffix. What happens when we add pupil $i$ to the suffix? By that time we have the answer for the previous suffix. In this answer there are, probably, pupils whom Nastya can't overtake; let this subset of pupils be $P$. Then, if the $i$-th pupil can give way to Nastya and to all pupils from $P$, we swap them. Otherwise, we add this pupil to $P$. In order to check this condition we can iterate through the pupils who can swap with the $i$-th pupil and count how many of them are contained in $P$. This solution works in $O(n+m)$. Obviously, when we have considered all suffixes, the answer is $n-1-|P|$. Solution 2: Let's build a directed graph where the $i$-th vertex corresponds to the $i$-th pupil and an edge from $u$ to $v$ exists if and only if pupil $v$ can't give way to pupil $u$ and $v$ is closer to the beginning of the queue than $u$. We can note that the answer is the number of vertices in this graph which are unreachable from Nastya's vertex. Proof: (1) Obviously, if an edge from $u$ to $v$ exists, pupil $v$ will always stay in front of $u$. (2) If vertex $v$ is reachable from vertex $u$, the same condition holds. Let's prove that Nastya can overtake the pupils who are unreachable in the graph by giving an algorithm for how to do it. Suppose there are unreachable vertices in front of Nastya, and let $u$ be the closest of them. If $u$ is directly in front of Nastya, they can swap and the number of such vertices decreases. Otherwise, let $v$ be the pupil next after $u$ (further from the beginning). Because $u$ is the closest unreachable vertex, $v$ is reachable. So there is no edge from $u$ to $v$, and they can change places. We can similarly move $v$ further and then swap him with Nastya. Using this algorithm, Nastya can overtake all pupils corresponding to unreachable vertices.
Fine, now we just have to calculate the number of such vertices. It can be done with the standard algorithm "DFS over the complement graph".
|
[
"greedy"
] | 1,800
|
"#include <bits/stdc++.h>\n//#pragma comment(linker, \u201d/STACK:36777216\u201c)\n \nusing namespace std;\n \ntypedef long long ll;\n#define mp make_pair\n#define pb push_back\n#define x first\n#define y second\n#define all(a) a.begin(), a.end()\n#define db long double\n\nint n, m;\nvector<int> a, was;\nvector<vector<int> > g;\n\nint main(){\n //freopen(\"input.txt\", \"r\", stdin);\n //freopen(\"output.txt\", \"w\", stdout);\n ios_base::sync_with_stdio(0); cin.tie(0);\n cin >> n >> m;\n a.resize(n);\n g.resize(n);\n was.resize(n);\n for (int i = 0; i < n; i++) cin >> a[i], a[i]--;\n for (int i = 0; i < m; i++){\n \tint w1, w2;\n \tcin >> w1 >> w2;\n \tw1--; w2--;\n \tg[w1].pb(w2);\n }\n\n reverse(all(a));\n int ans = 0;\n\n for (int i = 0; i < n; i++) was[i] = 0;\n was[a[0]] = 1;\n\tint cnt = 1;\n\tfor (int i = 1; i < n; i++){\n\t\tint cnt2 = 0;\n\t\tfor (int to : g[a[i]]){\n\t\t\tif (was[to]) cnt2++;\n\t\t}\n\t\tif (cnt == cnt2){\n\t\t\tans++;\n\t\t} else {\n\t\t\twas[a[i]] = 1;\n\t\t\tcnt++;\n\t\t}\n\t}\n\n cout << ans;\n}"
|
1136
|
E
|
Nastya Hasn't Written a Legend
|
In this task, Nastya asked us to write a formal statement.
An array $a$ of length $n$ and an array $k$ of length $n-1$ are given. Two types of queries should be processed:
- increase $a_i$ by $x$. Then if $a_{i+1} < a_i + k_i$, $a_{i+1}$ becomes exactly $a_i + k_i$; again, if $a_{i+2} < a_{i+1} + k_{i+1}$, $a_{i+2}$ becomes exactly $a_{i+1} + k_{i+1}$, and so far for $a_{i+3}$, ..., $a_n$;
- print the sum of the contiguous subarray from the $l$-th element to the $r$-th element of the array $a$.
It's guaranteed that initially $a_i + k_i \leq a_{i+1}$ for all $1 \leq i \leq n-1$.
|
Let $t_{i} = k_{1} + k_{2} + \dots + k_{i - 1}$ and $b_{i} = a_{i} - t_{i}$. We can rewrite the condition $a_{i+1} \geq a_{i} + k_{i}$ using the array $b$: $a_{i+1} \geq a_{i} + k_{i}$ $\Leftrightarrow$ $a_{i+1} - k_{i} \geq a_{i}$ $\Leftrightarrow$ $a_{i+1} - k_{i} - k_{i-1} - \dots - k_{1} \geq a_{i} - k_{i-1} - \dots - k_{1}$ $\Leftrightarrow$ $a_{i+1} - t_{i+1} \geq a_{i} - t_{i}$ $\Leftrightarrow$ $b_{i+1} \geq b_{i}$. Let's calculate the arrays $t$ and $b$. Since $a_{i} = b_{i} + t_{i}$, in order to get a sum over a subarray of $a$, we can sum the corresponding sums over $b$ and $t$. Now let's find out what happens with $b$ after adding $x$ at position $i$. $b_{i}$ increases by exactly $x$. Then, if $b_{i+1} < b_{i}$, $b_{i+1}$ becomes equal to $b_{i}$, and so on for $i+2$, $i+3$, ..., $n$. Note that the array $b$ is always sorted, and each addition sets the value $b_{i} + x$ on the half-interval $[i, pos)$, where $pos$ is the lowest index such that $b_{pos} \geq b_{i} + x$. To handle these modifications, let's build a segment tree on the array $b$ with the operation "set value on a segment", which stores the sum and the maximum in every vertex. The only problem is how to find $pos$. This can be done by descending the segment tree: if the maximum in the left son of the current vertex is greater than or equal to $b_{i} + x$, we go to the left son, otherwise we go to the right son. BONUS: solve it with modifications of elements of $k$.
|
[
"binary search",
"data structures"
] | 2,200
|
"/*\n\nCode for problem E by cookiedoth\nGenerated 23 Feb 2019 at 02.32 P\n\n\n\u2585\u2588\u2588\u2588\u2588\u2588\u2588\u2588 ]\u2584\u2584\u2584\u2584\u2584\u2584\u2584 \n\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2585\u2584\u2583 \nIl\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588] \n\u25e5\u2299\u25b2\u2299\u25b2\u2299\u25b2\u2299\u25b2\u2299\u25b2\u2299\u25b2\u2299\u25e4\n\n~_^\n=_=\n\u00af\\_(\u30c4)_/\u00af\n\n*/\n\n#include <iostream>\n#include <fstream>\n#include <vector>\n#include <set>\n#include <map>\n#include <bitset>\n#include <algorithm>\n#include <iomanip>\n#include <cmath>\n#include <ctime>\n#include <functional>\n#include <unordered_set>\n#include <unordered_map>\n#include <string>\n#include <queue>\n#include <deque>\n#include <stack>\n#include <complex>\n#include <cassert>\n#include <random>\n#include <cstring>\n#include <numeric>\n#define ll long long\n#define ld long double\n#define null NULL\n#define all(a) a.begin(), a.end()\n#define debug(a) cerr << #a << \" = \" << a << endl\n#define forn(i, n) for (int i = 0; i < n; ++i)\n#define sz(a) (int)a.size()\n\nusing namespace std;\n\ntemplate<class T> int chkmax(T &a, T b) {\n\tif (b > a) {\n\t\ta = b;\n\t\treturn 1;\n\t}\n\treturn 0;\n}\n\ntemplate<class T> int chkmin(T &a, T b) {\n\tif (b < a) {\n\t\ta = b;\n\t\treturn 1;\n\t}\n\treturn 0;\n}\n\ntemplate<class iterator> void output(iterator begin, iterator end, ostream& out = cerr) {\n\twhile (begin != end) {\n\t\tout << (*begin) << \" \";\n\t\tbegin++;\n\t}\n\tout << endl;\n}\n\ntemplate<class T> void output(T x, ostream& out = cerr) {\n\toutput(x.begin(), x.end(), out);\n}\n\nvoid fast_io() {\n\tios_base::sync_with_stdio(0);\n\tcin.tie(0);\n\tcout.tie(0);\n}\n\nconst ll DEFAULT = -1e18;\n\nstruct st {\n\tvector<ll> t, len, sum, add_mod, set_mod;\n\tint n;\n\n\tvoid build(vector<ll> &a, int v, int tl, int tr) {\n\t\tif (tl == tr) {\n\t\t\tt[v] = sum[v] = a[tl];\n\t\t\tlen[v] = 
1;\n\t\t}\n\t\telse {\n\t\t\tint tm = (tl + tr) >> 1;\n\t\t\tbuild(a, v * 2, tl, tm);\n\t\t\tbuild(a, v * 2 + 1, tm + 1, tr);\n\t\t\tt[v] = max(t[v * 2], t[v * 2 + 1]);\n\t\t\tsum[v] = sum[v * 2] + sum[v * 2 + 1];\n\t\t\tlen[v] = (tr - tl + 1);\n\t\t}\n\t}\n\n\tst(vector<ll> &a) {\n\t\tn = a.size();\n\t\tt.resize(4 * n);\n\t\tsum.resize(4 * n);\n\t\tadd_mod.resize(4 * n);\n\t\tset_mod.resize(4 * n, DEFAULT);\n\t\tlen.resize(4 * n);\n\t\tbuild(a, 1, 0, n - 1);\n\t}\n\n\tst() {}\n\n\tvoid apply_add_mod(int v, ll val) {\n\t\tif (set_mod[v] != DEFAULT) {\n\t\t\tset_mod[v] += val;\n\t\t}\n\t\telse {\n\t\t\tadd_mod[v] += val;\n\t\t}\n\t}\n\n\tvoid apply_set_mod(int v, ll val) {\n\t\tadd_mod[v] = 0;\n\t\tset_mod[v] = val;\n\t}\n\n\tvoid push(int v) {\n\t\tassert(add_mod[v] == 0 || set_mod[v] == DEFAULT);\n\t\tif (add_mod[v]) {\n\t\t\tt[v] += add_mod[v];\n\t\t\tsum[v] += add_mod[v] * len[v];\n\t\t\tapply_add_mod(v * 2, add_mod[v]);\n\t\t\tapply_add_mod(v * 2 + 1, add_mod[v]);\n\t\t\tadd_mod[v] = 0;\n\t\t}\n\t\tif (set_mod[v] != DEFAULT) {\n\t\t\tt[v] = set_mod[v];\n\t\t\tsum[v] = set_mod[v] * len[v];\n\t\t\tapply_set_mod(v * 2, set_mod[v]);\n\t\t\tapply_set_mod(v * 2 + 1, set_mod[v]);\n\t\t\tset_mod[v] = DEFAULT;\n\t\t}\n\t}\n\n\tll get_sum(int v) {\n\t\tif (set_mod[v] != DEFAULT) {\n\t\t\treturn set_mod[v] * len[v];\n\t\t}\n\t\tif (add_mod[v]) {\n\t\t\treturn add_mod[v] * len[v] + sum[v];\n\t\t}\n\t\treturn sum[v];\n\t}\n\n\tll get_t(int v) {\n\t\tif (set_mod[v] != DEFAULT) {\n\t\t\treturn set_mod[v];\n\t\t}\n\t\tif (add_mod[v]) {\n\t\t\treturn add_mod[v] + t[v];\n\t\t}\n\t\treturn t[v];\n\t}\n\n\tvoid recalc(int v) {\n\t\tt[v] = max(get_t(v * 2), get_t(v * 2 + 1));\n\t\tsum[v] = get_sum(v * 2) + get_sum(v * 2 + 1);\n\t}\n\n\tvoid add_update(int l, int r, ll val, int v, int tl, int tr) {\n\t\tif (l > r) {\n\t\t\treturn;\n\t\t}\n\t\tif (l == tl && r == tr) {\n\t\t\tapply_add_mod(v, val);\n\t\t\treturn;\n\t\t}\n\t\tint tm = (tl + tr) >> 1;\n\t\tpush(v);\n\t\tadd_update(l, 
min(r, tm), val, v * 2, tl, tm);\n\t\tadd_update(max(l, tm + 1), r, val, v * 2 + 1, tm + 1, tr);\n\t\trecalc(v);\n\t}\n\n\tvoid set_update(int l, int r, ll val, int v, int tl, int tr) {\n\t\tif (l > r) {\n\t\t\treturn;\n\t\t}\n\t\tif (l == tl && r == tr) {\n\t\t\tapply_set_mod(v, val);\n\t\t\treturn;\n\t\t}\n\t\tint tm = (tl + tr) >> 1;\n\t\tpush(v);\n\t\tset_update(l, min(r, tm), val, v * 2, tl, tm);\n\t\tset_update(max(l, tm + 1), r, val, v * 2 + 1, tm + 1, tr);\n\t\trecalc(v);\n\t}\n\n\tll get_sum(int l, int r, int v, int tl, int tr) {\n\t\tif (l > r) {\n\t\t\treturn 0;\n\t\t}\n\t\tif (l == tl && r == tr) {\n\t\t\treturn get_sum(v);\n\t\t}\n\t\tint tm = (tl + tr) >> 1;\n\t\tpush(v);\n\t\tll res_l = get_sum(l, min(r, tm), v * 2, tl, tm);\n\t\tll res_r = get_sum(max(l, tm + 1), r, v * 2 + 1, tm + 1, tr);\n\t\treturn res_l + res_r;\n\t}\n\n\tint lower_bound(ll val, int v, int tl, int tr) {\n\t\tif (tl == tr) {\n\t\t\treturn tl;\n\t\t}\n\t\tpush(v);\n\t\tint tm = (tl + tr) >> 1;\n\t\tif (get_t(v * 2) >= val) {\n\t\t\treturn lower_bound(val, v * 2, tl, tm);\n\t\t}\n\t\telse {\n\t\t\treturn lower_bound(val, v * 2 + 1, tm + 1, tr);\n\t\t}\n\t}\n\n\t//interface\n\n\tvoid add_update(int l, int r, ll val) {\n\t\tadd_update(l, r, val, 1, 0, n - 1);\n\t}\n\n\tvoid set_update(int l, int r, ll val) {\n\t\tset_update(l, r, val, 1, 0, n - 1);\n\t}\n\n\tll get_sum(int l, int r) {\n\t\tll res = get_sum(l, r, 1, 0, n - 1);\n\t\treturn res;\n\t}\n\n\tint lower_bound(ll val) {\n\t\tif (get_t(1) < val) {\n\t\t\treturn n;\n\t\t}\n\t\telse {\n\t\t\treturn lower_bound(val, 1, 0, n - 1);\n\t\t}\n\t}\n};\n\nint n;\nvector<ll> a, k, prefK, b;\n\nsigned main() {\n\tfast_io();\n\t\n\tcin >> n;\n\ta.resize(n);\n\tk.resize(n - 1);\n\tprefK.resize(n);\n\tfor (int i = 0; i < n; ++i) {\n\t\tcin >> a[i];\n\t}\n\tfor (int i = 0; i < n - 1; ++i) {\n\t\tcin >> k[i];\n\t}\n\tfor (int i = 1; i < n; ++i) {\n\t\tprefK[i] = prefK[i - 1] + k[i - 1];\n\t}\n\n\tb.resize(n);\n\tfor (int i = 0; i < n; ++i) 
{\n\t\tb[i] = a[i] - prefK[i];\n\t}\n\tst t(b), t1(prefK);\n\n\tint q;\n\tcin >> q;\n\tfor (int i = 0; i < q; ++i) {\n\t\tchar type;\n\t\tcin >> type;\n\t\tif (type == 's') {\n\t\t\tint l, r;\n\t\t\tcin >> l >> r;\n\t\t\tl--;\n\t\t\tr--;\n\t\t\tcout << t.get_sum(l, r) + t1.get_sum(l, r) << '\\n';\n\t\t}\n\t\tif (type == '+') {\n\t\t\tint pos;\n\t\t\tll val;\n\t\t\tcin >> pos >> val;\n\t\t\tpos--;\n\t\t\tval += t.get_sum(pos, pos);\n\t\t\tint pos1 = t.lower_bound(val);\n\t\t\tt.set_update(pos, pos1 - 1, val);\n\t\t}\n\t}\n}"
|
1137
|
A
|
Skyscrapers
|
Dora loves adventures quite a lot. During some journey she encountered an amazing city, which is formed by $n$ streets along the Eastern direction and $m$ streets across the Southern direction. Naturally, this city has $nm$ intersections. At the intersection of the $i$-th Eastern street and the $j$-th Southern street there is a monumental skyscraper. Dora instantly became curious and decided to explore the heights of the city's buildings.
When Dora passes through the intersection of the $i$-th Eastern and $j$-th Southern street she examines those two streets. After Dora learns the heights of all the skyscrapers on those two streets she wonders: how one should reassign heights to the skyscrapers on those two streets, so that the maximum height would be as small as possible and the result of comparing the heights of any two skyscrapers on one street wouldn't change.
Formally, on every of the $nm$ intersections Dora solves an independent problem. She sees $n + m - 1$ skyscrapers and for each of them she knows its real height. Moreover, any two heights can be compared to get a result "greater", "smaller" or "equal". Now Dora wants to select some integer $x$ and assign every skyscraper a height from $1$ to $x$. When assigning heights, Dora wants to preserve the relative order of the skyscrapers in both streets. That is, the result of any comparison of heights of two skyscrapers in the current Eastern street shouldn't change and the result of any comparison of heights of two skyscrapers in the current Southern street shouldn't change as well. Note that a skyscraper located only on the Southern street is never compared with a skyscraper located only on the Eastern street. However, the skyscraper located at the streets' intersection can be compared with both Southern and Eastern skyscrapers. For every intersection Dora wants to \textbf{independently} calculate the minimum possible $x$.
For example, if the intersection and the two streets corresponding to it look as follows:
Then it is optimal to replace the heights of the skyscrapers as follows (note that all comparisons "less", "equal", "greater" inside the Eastern street and inside the Southern street are preserved)
The largest used number is $5$, hence the answer for this intersection would be $5$.
Help Dora to compute the answers for each intersection.
|
Let's examine the $i$-th row and the $j$-th column, and suppose the element at their intersection is $x$. Denote the number of distinct elements less than $x$ in the row by $L_{row}$, and in the column by $L_{col}$. Similarly, let the number of distinct elements greater than $x$ be $G_{row}$ in the row and $G_{col}$ in the column (L = Less, G = Greater). Then the answer is $ans = \max(L_{row}, L_{col}) + 1 + \max(G_{row}, G_{col})$: the first summand is necessary to fit all elements $< x$, the second is for $x$ itself, and the last one for the elements $> x$. Now let's find a way to compute these $4$ values. For each row and each column, write down all its elements, sort them and delete the duplicates. Now how do we find $L_{row}$ and $G_{row}$? Simply do a binary search over this list to find the element equal to $x$. If the length of the list is $k$ and the position of the found element is $pos$ ($0$-based), then $L_{row} = pos$ and $G_{row} = k - 1 - pos$. Similarly we can find $L_{col}$ and $G_{col}$ and solve the problem. The complexity is $\mathcal{O}(nm \log(n + m))$.
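The whole computation fits in a few lines. The sketch below (with an illustrative function name and 0-based indices) follows the editorial exactly: sorted deduplicated lists per row and column, plus a binary search per cell.

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// For every intersection (i, j): ans = max(L_row, L_col) + 1 + max(G_row, G_col),
// where L_*/G_* count distinct values smaller/greater than a[i][j] in that line.
vector<vector<int>> solve_skyscrapers(const vector<vector<int>>& a) {
    int n = a.size(), m = a[0].size();
    vector<vector<int>> rows(n), cols(m);
    for (int i = 0; i < n; ++i) {
        rows[i] = a[i];
        sort(rows[i].begin(), rows[i].end());
        rows[i].erase(unique(rows[i].begin(), rows[i].end()), rows[i].end());
    }
    for (int j = 0; j < m; ++j) {
        for (int i = 0; i < n; ++i) cols[j].push_back(a[i][j]);
        sort(cols[j].begin(), cols[j].end());
        cols[j].erase(unique(cols[j].begin(), cols[j].end()), cols[j].end());
    }
    vector<vector<int>> ans(n, vector<int>(m));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < m; ++j) {
            // Position of a[i][j] in the deduplicated row/column lists.
            int lr = lower_bound(rows[i].begin(), rows[i].end(), a[i][j]) - rows[i].begin();
            int lc = lower_bound(cols[j].begin(), cols[j].end(), a[i][j]) - cols[j].begin();
            int gr = (int)rows[i].size() - 1 - lr;
            int gc = (int)cols[j].size() - 1 - lc;
            ans[i][j] = max(lr, lc) + 1 + max(gr, gc);
        }
    return ans;
}
```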
|
[
"implementation",
"sortings"
] | 1,600
| null |
1137
|
B
|
Camp Schedule
|
The new camp by widely-known over the country Spring Programming Camp is going to start soon. Hence, all the team of friendly curators and teachers started composing the camp's schedule. After some continuous discussion, they came up with a schedule $s$, which can be represented as a binary string, in which the $i$-th symbol is '1' if students will write the contest in the $i$-th day and '0' if they will have a day off.
At the last moment Gleb said that the camp will be the most productive if it runs with the schedule $t$ (which can be described in the same format as schedule $s$). Since the number of days in the current schedule may differ from the number of days in schedule $t$, Gleb required that the camp's schedule must be altered so that the number of occurrences of $t$ in it as a substring is maximum possible. At the same time, \textbf{the number of contest days and days off shouldn't change}, only their order may change.
Could you rearrange the schedule in the best possible way?
|
If we can't make any occurrences of the string $t$ in the string $s$, just output any permutation of $s$. Otherwise, we can show that there is an optimal answer $x$ that starts with the string $t$: supposing the opposite, remove all characters of the answer before the first occurrence of $t$ and append them to the end; the number of occurrences clearly didn't decrease. Obviously, we then want to make each next occurrence of $t$ as far to the left as possible: if we place it anywhere else, we can move the extra characters out to the end and only improve the answer. To achieve this, we need to find the largest suffix of the string $t$ that matches the prefix of $t$ of the same length, and append only the remaining part of $t$ each time. This border can be found using the prefix function, the z-function or hashes.
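A possible implementation of this greedy (function names are illustrative): `rebuild_schedule` emits the first copy of $t$ in full, then keeps appending only the non-overlapping tail of $t$ while the remaining character counts allow it, and dumps the leftover characters at the end.

```cpp
#include <string>
#include <vector>
#include <algorithm>
using namespace std;

// Prefix function of s: pi[i] = length of the longest proper suffix of
// s[0..i] that is also a prefix of s.
vector<int> prefix_function(const string& s) {
    vector<int> pi(s.size(), 0);
    for (size_t i = 1; i < s.size(); ++i) {
        int j = pi[i - 1];
        while (j > 0 && s[i] != s[j]) j = pi[j - 1];
        if (s[i] == s[j]) ++j;
        pi[i] = j;
    }
    return pi;
}

string rebuild_schedule(const string& s, const string& t) {
    long long c0 = count(s.begin(), s.end(), '0');
    long long c1 = count(s.begin(), s.end(), '1');
    int border = prefix_function(t).back();  // overlap between consecutive copies
    string res;
    bool first = true;
    while (true) {
        int from = first ? 0 : border;       // skip the part shared with the previous copy
        long long need0 = count(t.begin() + from, t.end(), '0');
        long long need1 = count(t.begin() + from, t.end(), '1');
        if (need0 > c0 || need1 > c1) break;
        res.append(t.begin() + from, t.end());
        c0 -= need0; c1 -= need1;
        first = false;
    }
    res.append(c0, '0');                     // leftovers in any order
    res.append(c1, '1');
    return res;
}
```

On the pair $s = $ "101101", $t = $ "110" this produces "110110" (two occurrences); with $t = $ "101" the border of length $1$ lets copies overlap, as in "10101".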
|
[
"greedy",
"hashing",
"strings"
] | 1,600
| null |
1137
|
C
|
Museums Tour
|
In the country $N$, there are $n$ cities connected by $m$ one-way roads. Although this country seems unremarkable, there are two interesting facts about it. At first, a week lasts $d$ days here. At second, there is exactly one museum in each city of the country $N$.
Travel agency "Open museums" is developing a new program for tourists interested in museums. The agency's employees know on which days each of the museums is open. The tour should start in the capital, the city number $1$, and the first day of the tour must be the first day of a week. Each day the tourist is in some city, watching the exposition of its museum (in case the museum is open that day), and by the end of the day, the tour either ends or the tourist goes to another city connected by a road with the current one. The road system of $N$ is designed in such a way that traveling by a road always takes one night and all the roads are \textbf{one-way}. It's allowed to visit a city multiple times during the trip.
You should develop such route for the trip that the number of \textbf{distinct} museums, possible to visit during it, is maximum.
|
Let's build a graph whose vertices are pairs $(u, t)$, where $u$ is a node of the original graph and $t$ is a day modulo $d$ (days are indexed from $0$ to $d - 1$). Connect $(u, t)$ to $(v, (t + 1) \bmod d)$ for every edge $(u, v)$ of the original graph and find the strongly connected components of this graph. For each SCC compute the number of distinct open museums in it. Then we just need to find a path of maximum cost that begins in the SCC containing $(1, 0)$. We can do it with dynamic programming on the condensation DAG. The key fact is that if there is a path from $(u, j)$ to $(u, j')$, then there is also a path from $(u, j')$ to $(u, j)$ (simply repeat the corresponding walk in the original graph $d - 1$ more times), so no museum is counted twice in the dynamic programming on this graph.
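The product-graph construction can be sketched as follows (an assumed vertex numbering $id(u, t) = u \cdot d + t$ with 0-based cities; the SCC condensation and the DP are omitted):

```cpp
#include <vector>
#include <utility>
using namespace std;

// Product graph on pairs (u, t): vertex id(u, t) = u * d + t, and every
// original edge (u, v) yields the d edges (u, t) -> (v, (t + 1) mod d).
vector<vector<int>> build_product_graph(int n, int d,
                                        const vector<pair<int, int>>& edges) {
    vector<vector<int>> g(n * d);
    for (const auto& e : edges)
        for (int t = 0; t < d; ++t)
            g[e.first * d + t].push_back(e.second * d + (t + 1) % d);
    return g;
}
```

For $n = 2$, $d = 3$ and the 2-cycle $0 \to 1 \to 0$, vertex $(0, 0)$ points to $(1, 1)$, which points to $(0, 2)$, and so on, so repeating the original cycle eventually returns to $(0, 0)$, matching the key fact above.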
|
[
"dp",
"graphs",
"implementation"
] | 2,500
| null |
1137
|
D
|
Cooperative Game
|
This is an interactive problem.
Misha likes to play cooperative games with incomplete information. Today he suggested that ten of his friends play a cooperative game called "Lake".
Misha has already come up with a field for the upcoming game. The field for this game is a directed graph consisting of two parts. The first part is a road along the coast of the lake which is a cycle of $c$ vertices. The second part is a path from home to the lake which is a chain of $t$ vertices, and there is an edge from the last vertex of this chain to the vertex of the road along the coast which has the most beautiful view of the lake, also known as the finish vertex. Misha decided to keep the field secret, so nobody knows either $t$ or $c$.
Note that each vertex of the field has exactly one outgoing edge and all the vertices except the home vertex and the finish vertex have exactly one ingoing edge. The home vertex has no incoming edges, the finish vertex has two incoming edges.
At the beginning of the game pieces of all the ten players, indexed with consecutive integers from $0$ to $9$, are at the home vertex. After that on each turn some of the players can ask Misha to simultaneously move their pieces along the corresponding edges. Misha will not answer more than $q$ such queries. After each move Misha will tell players whose pieces are at the same vertices and whose pieces are at different vertices.
The goal of the game is to move all the pieces to the finish vertex. Misha's friends have no idea how to win in such a game without knowledge of $c$, $t$ and $q$, but luckily they are your friends. Help them: coordinate their actions to win the game.
Misha has drawn such a field that $1 \le t, c$, $(t+c) \leq 1000$ and $q = 3 \cdot (t+c)$.
|
The count of friends you have in this problem is actually misleading: here is how to solve it with only three of them. Let's name them $fast$, $slow$ and $lazy$, and consider the following process: $fast$ and $slow$ move forward, then $fast$ only, then $fast$ and $slow$ again, and so on, until at some moment they appear in the same vertex on the cycle ($fast$ takes the lead, makes it to the cycle, circles there until $slow$ makes it to the cycle too, and then catches up with him, reducing the distance between them by $1$ every $2$ moves). Here you can notice that $slow$ had no time to make even one full circle on the cycle, because in that case $fast$ would have managed to make at least two full circles and they would have met earlier. Let $x$ denote the distance from the finish vertex to the vertex where $fast$ and $slow$ met, and let $slow$ and $fast$ also denote the total numbers of moves made. Then $slow = t + x$ ($1$) and $fast = t + q\cdot{}c + x$ for some positive integer $q$ ($2$). Also $fast = 2\cdot{}slow$ ($3$), since $fast$ moved at every step while $slow$ moved only at every other one. Substitute ($1$) and ($2$) into ($3$) and you get $t + q\cdot{}c + x = 2\cdot{}t + 2\cdot{}x$. Simplify it and take it modulo $c$ to get $-x \equiv t \pmod{c}$, i.e. if you now apply $t$ moves to $fast$ and $slow$, they end up in the finish vertex; and if we instead somehow manage to apply exactly $t$ moves to all friends, all of them end up in the finish vertex. Here is the last bit of the solution: instead of trying to compute $t$, let's just move all friends until all of them meet in one vertex: that vertex will be the finish one. The described solution takes less than $2 \cdot (c + t)$ steps in the first stage and exactly $t$ steps in the second stage, so in total it makes less than $3 \cdot (c + t)$ queries.
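The two phases can be checked offline by simulating them on a known field (the helper below is illustrative and not the interactive protocol itself; positions are compared only at round ends, so that $fast = 2 \cdot slow$ holds exactly when the meeting is detected):

```cpp
#include <utility>
using namespace std;

// Hidden field: vertices 0..t-1 form the chain (home = 0), vertices
// t..t+c-1 form the cycle, vertex t is the finish.
// Returns {meeting vertex, number of queries used}.
pair<int, int> simulate(int t, int c) {
    auto nxt = [&](int v) { return v + 1 < t + c ? v + 1 : t; };
    int fast = 0, slow = 0, lazy = 0, queries = 0;
    // Phase 1: one round = (fast and slow move) + (fast moves alone);
    // comparing positions only at round ends keeps fast = 2 * slow.
    do {
        fast = nxt(fast); slow = nxt(slow); ++queries;
        fast = nxt(fast);                   ++queries;
    } while (fast != slow);
    // Phase 2: move everybody until all pieces coincide; this happens
    // exactly t moves later, at the finish vertex.
    while (fast != lazy) {
        fast = nxt(fast); slow = nxt(slow); lazy = nxt(lazy); ++queries;
    }
    return {fast, queries};
}
```

For example, with $t = 4$, $c = 3$ the pieces meet after $6$ rounds ($12$ queries), and $4$ more all-move queries gather everyone at the finish vertex $4$, well within the budget $3 \cdot (t + c) = 21$.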
|
[
"constructive algorithms",
"interactive",
"number theory"
] | 2,400
| null |
1137
|
E
|
Train Car Selection
|
Vasya likes to travel by train, but doesn't like when the car he travels in is located in the tail of the train.
Vasya gets on the train at the station. The train consists of $n$ cars indexed from $1$ to $n$ counting from the locomotive (head of the train). Three types of events occur while the train is moving:
- Some number of cars are added to the head of the train;
- Some number of cars are added to the tail of the train;
- Vasya recalculates the values of the convenience of the cars (read more about it below).
At each moment of time we will index the cars from the head of the train, starting from $1$. Note that when adding new cars to the head of the train, the indexing of the old ones may shift.
To choose which car to go in, Vasya will use the value $A_i$ for each car (where $i$ is a car index), which is calculated as follows:
- At the beginning of the trip $A_i=0$, as well as for the new cars at the time of their addition.
- During the next recalculation Vasya chooses some \textbf{positive} integers $b$ and $s$ and adds the value $b + (i - 1) \cdot s$ to every $A_i$.
Vasya hasn't decided yet where he will get on the train and where he will get off, so after each event of one of the three types he wants to know the least index of a car whose value $A_i$ is minimal. Since there are a lot of cars, Vasya asked you to write a program that answers his question.
|
There are many approaches to this problem, many of them taking $\mathcal{O}(q \log q)$ time, but we will describe a purely linear solution. First, notice that for every group of cars added together, we only need to care about the first car in the group: the remaining ones will never be the answer. Second, notice that if some cars are appended to the head of the train, then all previous cars will never be the answer again, so we can simply replace them with the new cars with $A_i = 0$. So now we only need to care about operations of adding cars to the tail and of adding a progression. Suppose the cars are located at positions $x$ and have convenience values $A_x$. Consider the lower-left convex hull of the points $(x, A_x)$. One can see that the points not lying on this hull will never be the answer. Also note that we can handle all progressions implicitly: if the progressions are described by $b_i$, $s_i$, let's simply store the current sums of the $b_i$ and of the $s_i$. Then the operation of adding a progression is done by simply adding to those sums; we also don't have to track the moment the cars are added, since we can subtract from $A_i$ based on the sums at the moment of addition. So when we add cars to the tail, we simply append a point and possibly drop some points from the end of the current convex hull. When we add a new progression, we may also need to drop some elements from the hull, but since it's a convex hull, the slopes between neighboring points are monotonic, so we only need to drop some points from the end of the hull.
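The "drop only from the end" claim for tail insertions can be sketched as follows (a minimal fragment with illustrative names; the implicit progression offsets and the head resets are omitted). A point survives only if it minimizes $y + s \cdot x$ for some $s \ge 0$, with ties broken towards smaller $x$, matching the smaller-index rule.

```cpp
#include <vector>
using namespace std;
typedef long long ll;

struct Pt { ll x, y; };
bool operator==(const Pt& a, const Pt& b) { return a.x == b.x && a.y == b.y; }

// Cross product of (b - a) and (c - a); for points in order of increasing x
// it is positive iff b lies strictly below the segment a-c.
ll cross(const Pt& a, const Pt& b, const Pt& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Append a point with x larger than all current ones, keeping only the
// lower-left hull.
void add_point(vector<Pt>& hull, Pt p) {
    if (!hull.empty() && p.y >= hull.back().y) return;  // never beats the back
    while (hull.size() >= 2 &&
           cross(hull[hull.size() - 2], hull.back(), p) <= 0)
        hull.pop_back();                                // middle point not below
    hull.push_back(p);
}

vector<Pt> build_hull(const vector<Pt>& pts) {
    vector<Pt> hull;
    for (const Pt& p : pts) add_point(hull, p);
    return hull;
}
```

For instance, appending $(4, -2)$ after $(1, 0)$ and $(3, -1)$ evicts $(3, -1)$: it lies above the segment between its neighbors, so no positive progression can ever make it the unique minimum.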
|
[
"data structures",
"greedy"
] | 2,700
| null |
1137
|
F
|
Matches Are Not a Child's Play
|
Lena is playing with matches. The natural question arising in the head of any child playing with matches is whether it's possible to set a tree on fire with matches or not.
Let's say, that the tree is a connected graph without cycles and the vertices are labeled with integers $1, 2, \ldots, n$. Also every vertex $v$ has some integer priority $p_v$ associated with it. All priorities are distinct.
It turns out that if you set a tree on fire, it will burn down to nothing. However, this process doesn't happen instantly. At the beginning, the leaf with the minimum priority burns out (a vertex is called a leaf if it has only one adjacent vertex). Then the leaf with the minimal priority in the remaining tree burns out, and so on. This way, vertices turn into leaves and burn out until only one vertex remains. Then this vertex burns out as well.
Lena has prepared a tree of $n$ vertices and every vertex in it has a priority $p_v = v$. Lena is very curious about burning out this tree. However, she understands that if she burns the tree now, it will disappear completely. Lena is a kind girl and she would feel bad for the burned tree, so she wants to study the process of burning the tree only in her mind. Lena wants to process $q$ queries, each of them being one of the three following types:
- "up $v$", assign the vertex $v$ priority $1 + \max\{p_1, p_2, \ldots, p_n\}$;
- "when $v$", find the step at which the vertex $v$ will burn out, if the tree would be set on fire now;
- "compare $v$ $u$", find out which of the vertices $v$ and $u$ will burn out first, if the tree would be set on fire now.
Notice that if all priorities are distinct, then after an "up" query they stay distinct as well. Initially all priorities are distinct, hence during any (purely hypothetical, of course) burning of the tree, all leaves have distinct priorities.
|
First, let's notice that the operation "compare" is redundant and can be implemented as two "when" operations (we didn't remove it from the onsite olympiad's version, as a possible hint). Suppose we know the order in which the vertices burn. How will it change after an "up" operation? Quite simply, actually: the path from the previous maximum to the new one will burn last. Everything except this path burns out in the same relative order as before, and then the path burns, in order from the older maximum to the new one. Let's say that the $i$-th "up" paints its path in color $i$. Then to calculate $when[v]$, let's first get the color of vertex $v$. Then $when[v]$ equals the number of vertices of smaller colors plus the number of vertices of the same color that burn before this one; the latter is simply the number of vertices preceding $v$ on the path from the older maximum to the new one in the corresponding "up" query. To implement coloring on a path, we can use heavy-light decomposition. Inside every path of the HLD let's store a set of segments of vertices of the same color. Then the operation of coloring a path works in $O(\log^2 n)$ amortized. The number of vertices with a smaller color can be maintained with a Fenwick tree (which stores, for each color, the number of such vertices). There are also small technical details to handle: you need to account for the original burning order, before all "up"s. But since each vertex changes its color from zero to nonzero at most once, you can do it in $O(\text{number of such vertices})$.
|
[
"data structures",
"trees"
] | 3,400
| null |
1138
|
A
|
Sushi for Two
|
Arkady invited Anna for a dinner to a sushi restaurant. The restaurant is a bit unusual: it offers $n$ pieces of sushi aligned in a row, and a customer has to choose a continuous subsegment of these sushi to buy.
The pieces of sushi are of two types: either with tuna or with eel. Let's denote the type of the $i$-th from the left sushi as $t_i$, where $t_i = 1$ means it is with tuna, and $t_i = 2$ means it is with eel.
Arkady does not like tuna, Anna does not like eel. Arkady wants to choose such a continuous subsegment of sushi that it has equal number of sushi of each type and each half of the subsegment has only sushi of one type. For example, subsegment $[2, 2, 2, 1, 1, 1]$ is valid, but subsegment $[1, 2, 1, 2, 1, 2]$ is not, because both halves contain both types of sushi.
Find the length of the longest continuous subsegment of sushi Arkady can buy.
|
It is more or less obvious that the answer is twice the maximum, over all pairs of consecutive segments of equal elements, of the minimum of the two segment lengths. As for implementation, just go from left to right and keep the last element, the length of the previous segment and the length of the current segment. When the current element is not the same as the last element, update the answer.
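A direct implementation of this scan (the function name is illustrative):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// Answer = 2 * best min(previous run length, current run length) over all
// adjacent runs of equal elements.
int longest_sushi(const vector<int>& t) {
    int best = 0, prev = 0, cur = 1;
    for (size_t i = 1; i <= t.size(); ++i) {
        if (i < t.size() && t[i] == t[i - 1]) {
            ++cur;
        } else {                       // a run ends here
            best = max(best, min(prev, cur));
            prev = cur;
            cur = 1;
        }
    }
    return 2 * best;
}
```

On $[2, 2, 2, 1, 1, 2, 2]$ the runs have lengths $3, 2, 2$, so the best adjacent pair gives $2 \cdot \min(3, 2) = 4$.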
|
[
"binary search",
"greedy",
"implementation"
] | 900
| null |
1138
|
B
|
Circus
|
Polycarp is the head of a circus troupe. There are $n$ — an even number — artists in the troupe. It is known whether the $i$-th artist can perform as a clown (if yes, then $c_i = 1$, otherwise $c_i = 0$), and whether they can perform as an acrobat (if yes, then $a_i = 1$, otherwise $a_i = 0$).
Split the artists into two performances in such a way that:
- each artist plays in exactly one performance,
- the number of artists in the two performances is equal (i.e. equal to $\frac{n}{2}$),
- the number of artists that can perform as clowns in the first performance is the same as the number of artists that can perform as acrobats in the second performance.
|
Note that there are only four types of artists: $(0, 0)$, $(0, 1)$, $(1, 0)$ and $(1, 1)$, where the pair is $(c_i, a_i)$. So the whole problem can be described with four integers: the number of artists of each type. Let's say that there are $n_a$ artists of type $(0, 0)$, $n_b$ of type $(0, 1)$, $n_c$ of type $(1, 0)$ and $n_d$ of type $(1, 1)$. In the same manner, the selection of artists for the first performance can be described with four integers $0 \le a \le n_a$, $0 \le b \le n_b$, $0 \le c \le n_c$, $0 \le d \le n_d$. Note that we have some restrictions on $a$, $b$, $c$, $d$. In particular, we need to select exactly half of the artists: $a + b + c + d = \frac{n}{2}$. Also the number of clowns in the first performance ($c + d$) must be equal to the number of acrobats in the second ($n_b - b + n_d - d$): $c + d = n_b - b + n_d - d$, hence $b + c + 2d = n_b + n_d$. These equations are necessary and sufficient. So we have $4$ unknown variables and $2$ equations. We can brute force any two variables and compute the other two from the equations. If the computed values fit into their bounds, we print the answer. Complexity: $\mathcal{O}(n^2)$.
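A sketch of this brute force, fixing $b$ and $d$ and deriving $c$ and $a$ from the two equations (the function name is illustrative; it returns $\{a, b, c, d\}$, or four $-1$'s if no valid split exists):

```cpp
#include <vector>
using namespace std;

// na, nb, nc, nd = counts of artist types (0,0), (0,1), (1,0), (1,1).
vector<int> split_circus(int na, int nb, int nc, int nd) {
    int n = na + nb + nc + nd;
    if (n % 2 != 0) return {-1, -1, -1, -1};
    for (int b = 0; b <= nb; ++b)
        for (int d = 0; d <= nd; ++d) {
            int c = nb + nd - b - 2 * d;   // from b + c + 2d = nb + nd
            int a = n / 2 - b - c - d;     // from a + b + c + d = n / 2
            if (c >= 0 && c <= nc && a >= 0 && a <= na)
                return {a, b, c, d};
        }
    return {-1, -1, -1, -1};
}
```

For example, with one artist of each type the split $\{1, 0, 0, 1\}$ puts the $(0,0)$ artist and the $(1,1)$ artist in the first performance: one clown there, and one acrobat (the remaining $(0,1)$ artist) in the second.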
|
[
"brute force",
"greedy",
"math",
"strings"
] | 1,800
| null |
1139
|
A
|
Even Substrings
|
You are given a string $s=s_1s_2\dots s_n$ of length $n$, which only contains digits $1$, $2$, ..., $9$.
A substring $s[l \dots r]$ of $s$ is a string $s_l s_{l + 1} s_{l + 2} \ldots s_r$. A substring $s[l \dots r]$ of $s$ is called even if the number represented by it is even.
Find the number of even substrings of $s$. Note, that even if some substrings are equal as strings, but have different $l$ and $r$, they are counted as \textbf{different} substrings.
|
Any substring ending in $2$, $4$, $6$ or $8$ forms an even substring. Thus, iterate over all positions $i$ of the string $s$, and if the digit at the $i$-th index is even, add $i + 1$ to the answer, since all the substrings starting at $0, 1, \ldots, i$ and ending at $i$ are even substrings. Overall complexity: $O(n)$.
|
[
"implementation",
"strings"
] | 800
|
import java.util.*;
public class SolutionA {
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
int n = sc.nextInt();
char[] s = sc.next().toCharArray();
int ans = 0;
for(int i = 0; i < n; ++i) {
if((s[i] - '0') % 2 == 0)
ans += (i + 1);
}
System.out.print(ans);
}
}
|
1139
|
B
|
Chocolates
|
You went to a store selling $n$ types of chocolates. There are $a_i$ chocolates of type $i$ in stock.
You have unlimited amount of cash (so you are not restricted by any prices) and want to buy as many chocolates as possible. However if you buy $x_i$ chocolates of type $i$ (clearly, $0 \le x_i \le a_i$), then for all $1 \le j < i$ at least one of the following must hold:
- $x_j = 0$ (you bought zero chocolates of type $j$)
- $x_j < x_i$ (you bought fewer chocolates of type $j$ than of type $i$)
For example, the array $x = [0, 0, 1, 2, 10]$ satisfies the requirement above (assuming that all $a_i \ge x_i$), while arrays $x = [0, 1, 0]$, $x = [5, 5]$ and $x = [3, 2]$ don't.
Calculate the maximum number of chocolates you can buy.
|
It is optimal to proceed greedily from the back of the array. If we have taken $x$ chocolates of type $i + 1$, then we can take at most $\min(x - 1, a_i)$ chocolates of type $i$. If this value is negative, we take $0$ here. Overall complexity: $O(n)$.
|
[
"greedy",
"implementation"
] | 1,000
|
import java.util.*;
import static java.lang.Math.*;
public class SolutionB {
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
int n = sc.nextInt();
long a[] = new long[n];
for(int i = 0; i < n; ++i)
a[i] = sc.nextLong();
long ans = 0;
long curV = 1000000001;
for(int i = n - 1; i >= 0; --i) {
curV = max(0, min(curV - 1, a[i]));
ans += curV;
}
System.out.print(ans);
}
}
|
1139
|
C
|
Edgy Trees
|
You are given a tree (a connected undirected graph without cycles) of $n$ vertices. Each of the $n - 1$ edges of the tree is colored in either black or red.
You are also given an integer $k$. Consider sequences of $k$ vertices. Let's call a sequence $[a_1, a_2, \ldots, a_k]$ good if it satisfies the following criterion:
- We will walk a path (possibly visiting same edge/vertex multiple times) on the tree, starting from $a_1$ and ending at $a_k$.
- Start at $a_1$, then go to $a_2$ using the shortest path between $a_1$ and $a_2$, then go to $a_3$ in a similar way, and so on, until you travel the shortest path between $a_{k-1}$ and $a_k$.
- If you walked over at least one black edge during this process, then the sequence is good.
Consider the tree on the picture. If $k=3$ then the following sequences are good: $[1, 4, 7]$, $[5, 5, 3]$ and $[2, 3, 7]$. The following sequences are not good: $[1, 4, 6]$, $[5, 5, 5]$, $[3, 7, 3]$.
There are $n^k$ sequences of vertices, count how many of them are good. Since this number can be quite large, print it modulo $10^9+7$.
|
Let's find the number of bad sequences: sequences of length $k$ that do not pass through any black edges. Then the answer is the number of all possible sequences minus the number of bad sequences. Thus, we can remove the black edges from the tree, splitting it into connected components. For every connected component, if $a_1$ is a node from this component, then we cannot step outside the component, since doing so would mean visiting a black edge. But we can visit all the nodes of the component in any order. So if the size of the component is $p$, there are $p^k$ bad sequences corresponding to it. Thus, the overall answer is $n^k - \sum p^k$, where $p$ ranges over the sizes of the connected components formed by the red edges.
|
[
"dfs and similar",
"dsu",
"graphs",
"math",
"trees"
] | 1,500
|
import java.util.*;
import static java.lang.Math.*;
public class SolutionC {
static void dfs(int i) {
vis[i] = 1;
cnt++;
for(int j : adj[i]) {
if(vis[j] == 0)
dfs(j);
}
}
static long fast_pow(long a, long b) {
if(b == 0)
return 1L;
long val = fast_pow(a, b / 2);
if(b % 2 == 0)
return val * val % mod;
else
return val * val % mod * a % mod;
}
static long mod = (long)1e9 + 7;
static ArrayList<Integer> adj[];
static int vis[];
static long cnt = 0;
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
int n = sc.nextInt();
int k = sc.nextInt();
adj = new ArrayList[n];
for(int i = 0; i < n; ++i)
adj[i] = new ArrayList<>();
for(int i = 0; i < n - 1; ++i) {
int u = sc.nextInt() - 1;
int v = sc.nextInt() - 1;
int col = sc.nextInt();
if(col == 0) {
adj[u].add(v);
adj[v].add(u);
}
}
long ans = fast_pow(n, k);
long rem = 0;
vis = new int[n];
for(int i = 0; i < n; ++i) {
if(vis[i] == 0) {
cnt = 0;
dfs(i);
rem += fast_pow(cnt, k);
}
}
rem %= mod;
ans = (ans - rem + mod) % mod;
System.out.print(ans);
}
}
|
1139
|
D
|
Steps to One
|
Vivek initially has an empty array $a$ and some integer constant $m$.
He performs the following algorithm:
- Select a random integer $x$ uniformly in range from $1$ to $m$ and append it to the end of $a$.
- Compute the greatest common divisor of integers in $a$.
- In case it equals to $1$, break
- Otherwise, return to step $1$.
Find the expected length of $a$. It can be shown that it can be represented as $\frac{P}{Q}$ where $P$ and $Q$ are coprime integers and $Q\neq 0 \pmod{10^9+7}$. Print the value of $P \cdot Q^{-1} \pmod{10^9+7}$.
|
Let $dp[x]$ be the expected number of additional steps to get a gcd of $1$ if the gcd of the current array is $x$. Suppose the current gcd of the array $a$ is $x$; in the next iteration of the algorithm we append some uniformly chosen $j$ (each with probability $\frac{1}{m}$) and move to state $\gcd(x, j)$, and since the length increases by $1$ on appending, this contributes $dp[\gcd(x, j)] + 1$ steps for this $j$. So the recurrence is: $dp[x] = 1 + \sum_{j=1}^{m}\frac{dp[\gcd(j,x)]}{m}$. I recommend this Expectation tutorial to get more understanding of the basics. We can group together all terms having the same $\gcd(j, x)$, move the terms having $\gcd(j, x) = x$ to the left side of the equation, and use that to calculate $dp[x]$. This is an $O(m^2)$ solution. Here we notice that $\gcd(j,x)$ is a divisor of $x$, so the recurrence can be rewritten as: $dp[x] = 1+\sum_{y \in divisors(x)}{\frac{dp[y] \cdot f(y,x)}{m}}$, where $f(y, x)$ is the number of $j \in [1, m]$ with $\gcd(j, x) = y$. Let's express $x = y \cdot a$ and $j = y \cdot b$, where $a$, $b$ are positive integers. So we want to find the number of $j$ with $1 \le j \le m$ and $j = y \cdot b$ such that $\gcd(b,a) = 1$, i.e. the number of $b$ with $1 \le b \le m/y$ such that $\gcd(b,a) = 1$. Let's find the factorization of $a$; then $b$ must not be divisible by any of the prime factors of $a$. We can count the $b \le m/y$ not divisible by any prime of a given set by inclusion-exclusion. Since there are at most $6$ such primes, we have complexity: $O(m \log{m} \cdot 2^6 \cdot 6)$. For an alternative solution using the Möbius function, refer to code 2.
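The final Möbius-based formula used by code 2, $E = 1 - \sum_{i \ge 2} \mu(i) \cdot \frac{f}{m - f}$ with $f = \lfloor m/i \rfloor$, can be sanity-checked in floating point (a Python sketch; the Möbius values come from a divisor-sum sieve):

```python
def expected_length(m):
    """Expected length of a (as a float): 1 - sum over i >= 2 of mu(i)*f/(m-f)."""
    mu = [0] * (m + 1)
    mu[1] = 1
    for i in range(1, m + 1):          # mu[n] = [n==1] - sum of mu over proper divisors
        for j in range(2 * i, m + 1, i):
            mu[j] -= mu[i]
    ans = 1.0
    for i in range(2, m + 1):
        f = m // i
        if f and mu[i]:                # zero terms contribute nothing
            ans -= mu[i] * f / (m - f)
    return ans
```

For $m = 2$ this gives $2$, and for $m = 4$ it gives $7/3$, matching the known answers.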
|
[
"dp",
"math",
"number theory",
"probabilities"
] | 2,300
|
#include <bits/stdc++.h>
using namespace std;
#define IOS ios::sync_with_stdio(0); cin.tie(0); cout.tie(0);
#define endl "\n"
#define int long long
const int N = 1e5 + 5;
const int MOD = 1e9 + 7;
int pow(int a, int b, int m)
{
int ans=1;
while(b)
{
if(b&1)
ans=(ans*a)%m;
b/=2;
a=(a*a)%m;
}
return ans;
}
int modinv(int k)
{
return pow(k, MOD - 2, MOD);
}
int mu[N], prime[N];
vector<int> primes;
void precompute()
{
fill(prime + 2, prime + N, 1);
mu[1] = 1;
for(int i=2;i<N;i++)
{
if(prime[i])
primes.push_back(i), mu[i] = -1;
for(auto &it: primes)
{
if(i * it >= N)
break;
prime[i * it] = 0;
if(i % it == 0)
{
mu[i * it] = 0;
break;
}
else
mu[i * it] = mu[i] * mu[it];
}
}
}
int n;
int ans = 0;
double dans = 0;
int32_t main()
{
IOS;
precompute();
cin>>n;
ans = 1, dans = 1;
for(int i=2;i<=n;i++)
{
int f = n / i;
dans -= 1.0 * mu[i] * f / (n -f);
ans -= (mu[i] * f * modinv(n - f)) % MOD;
ans += MOD;
ans %= MOD;
}
cout<<ans;
return 0;
}
|
1139
|
E
|
Maximize Mex
|
There are $n$ students and $m$ clubs in a college. The clubs are numbered from $1$ to $m$. Each student has a potential $p_i$ and is a member of the club with index $c_i$. Initially, each student is a member of exactly one club. A technical fest starts in the college, and it will run for the next $d$ days. There is a coding competition every day in the technical fest.
Every day, in the morning, exactly one student of the college leaves their club. Once a student leaves their club, they will never join any club again. Every day, in the afternoon, the director of the college will select one student from each club (in case some club has no members, nobody is selected from that club) to form a team for this day's coding competition. The strength of a team is the mex of the potentials of the students in the team. The director wants to know the maximum possible strength of the team for each of the coming $d$ days. Thus, every day the director chooses a team such that the team strength is maximized.
The mex of the multiset $S$ is the smallest non-negative integer that is not present in $S$. For example, the mex of $\{0, 1, 1, 2, 4, 5, 9\}$ is $3$, the mex of $\{1, 2, 3\}$ is $0$ and the mex of $\varnothing$ (the empty set) is $0$.
|
Let's reverse the queries so that instead of removing edges we add them. Now consider a bipartite graph with values $0 \ldots m$ on the left side and clubs $1 \ldots m$ on the right side. For each student $i$ who has not been removed, we add an edge from $p_i$ on the left to $c_i$ on the right. Start finding a matching for the values $0, 1, \ldots$ in sequence. Say we could not find a matching for the $i$-th value after query $x$; then the answer for query $x$ is $i$. Now, we add the student of query $x$: if the index of the student is $ind$, we add an edge from $p_{ind}$ on the left to $c_{ind}$ on the right. Then we resume finding matchings for the values $i, i + 1, \ldots$, until matching fails again. We repeat this process until all queries are answered. Overall complexity: $O(d \cdot n + m \cdot n)$
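The matching step can be sketched with Kuhn's augmenting-path algorithm (a simplified one-shot Python sketch; `adj` maps each potential value to the clubs of the remaining students, a hypothetical representation, and the returned mex is the first value that cannot be matched):

```python
def mex_matching(adj, m):
    """Match values 0..m-1 to clubs greedily via augmenting paths; return the mex."""
    match = {}                       # club -> value currently matched to it

    def try_match(v, seen):
        for club in adj.get(v, ()):
            if club in seen:
                continue
            seen.add(club)
            # club is free, or its current value can be re-matched elsewhere
            if club not in match or try_match(match[club], seen):
                match[club] = v
                return True
        return False

    mex = 0
    while mex < m and try_match(mex, set()):
        mex += 1
    return mex
```

In the full solution this routine is not restarted from scratch: after each added edge, augmenting continues from the first unmatched value.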
|
[
"flows",
"graph matchings",
"graphs"
] | 2,400
|
import java.util.*;
import static java.lang.Math.*;
public class SolutionE {
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
int n = sc.nextInt();
int m = sc.nextInt();
int deg[] = new int[m];
int p[] = new int[n];
for(int i = 0; i < n; ++i) {
p[i] = sc.nextInt();
if(p[i] < m)
deg[p[i]]++;
}
int c[] = new int[n];
for(int i = 0; i < n; ++i)
c[i] = sc.nextInt();
int q = sc.nextInt();
int qind[] = new int[q];
int vis[] = new int[n];
for(int i = 0; i < q; ++i) {
qind[i] = sc.nextInt() - 1;
vis[qind[i]] = 1;
}
Kunh kunh = new Kunh(2 * m, deg);
for(int i = 0; i < n; ++i) {
if(vis[i] == 0) {
if(p[i] < m)
kunh.addEdge(p[i], m + c[i] - 1);
}
}
int ans = 0;
while(ans < m && kunh.addFlow(ans))
ans++;
int fans[] = new int[q];
for(int i = q - 1; i >= 0; --i) {
fans[i] = ans;
int ind = qind[i];
if(p[ind] < m) {
kunh.addEdge(p[ind], m + c[ind] - 1);
}
while(ans < m && kunh.addFlow(ans))
ans++;
}
for(int i : fans)
System.out.println(i);
}
}
class Kunh {
int pair[];
int adj[][];
int ptr[];
int vis[];
int level = 0;
Kunh(int n, int deg[]) {
pair = new int[n];
Arrays.fill(pair, -1);
adj = new int[n / 2][];
for(int i = 0; i < n / 2; ++i) {
adj[i] = new int[deg[i]];
}
ptr = new int[n / 2];
vis = new int[n / 2];
}
void addEdge(int i, int j) {
adj[i][ptr[i]++] = j;
}
boolean addFlow(int source) {
level++;
return dfs(source);
}
boolean dfs(int i) {
vis[i] = level;
for(int k = 0; k < ptr[i]; ++k) {
int j = adj[i][k];
if(pair[j] == -1) {
pair[j] = i;
return true;
}
else if(vis[pair[j]] != level && dfs(pair[j])) {
pair[j] = i;
return true;
}
}
return false;
}
}
|
1139
|
F
|
Dish Shopping
|
There are $m$ people living in a city. There are $n$ dishes sold in the city. Each dish $i$ has a price $p_i$, a standard $s_i$ and a beauty $b_i$. Each person $j$ has an income of $inc_j$ and a preferred beauty $pref_j$.
A person would never buy a dish whose standard is less than the person's income. Also, a person can't afford a dish with a price greater than the income of the person. In other words, a person $j$ can buy a dish $i$ only if $p_i \leq inc_j \leq s_i$.
Also, a person $j$ can buy a dish $i$, only if $|b_i-pref_j| \leq (inc_j-p_i)$. In other words, if the price of the dish is less than the person's income by $k$, the person will only allow the absolute difference of at most $k$ between the beauty of the dish and his/her preferred beauty.
Print the number of dishes that can be bought by each person in the city.
|
Let's consider a matrix where the $i$-th row represents income $i$ and the $j$-th column represents preferred beauty $j$, such that the value of $c[i][j]$ is the number of dishes that can be bought by a person having income $i$ and preferred beauty $j$. Then, adding a dish with price $p$, standard $s$ and beauty $b$ corresponds to adding $+1$ to each cell in the triangle formed by vertices $P(p, b)$, $Q(s, b - s + p)$ and $R(s, b + s - p)$. Now, we need to apply these triangle updates of $+1$ to the matrix efficiently. Let's do this offline. Note that for a triangle with vertices $P(p, b)$, $Q(s, b - s + p)$ and $R(s, b + s - p)$, the only column updated in the $p$-th row is $b$, while the columns updated in the $(p + 1)$-th row are $b - 1$, $b$ and $b + 1$. So, if the columns updated in the $i$-th row are $x \ldots y$, then the columns updated in the $(i + 1)$-th row are $(x - 1) \ldots (y + 1)$. To update a range $l \ldots r$ in a row $i$, we can add $+1$ at $mat[i][l]$ and $-1$ at $mat[i][r + 1]$; taking a prefix sum over the row then yields the actual values. We can use a similar approach here. Note that instead of updating $mat[i][l]$ with $+1$, we can update some array $larr[i + l]$ with $+1$; similarly, instead of updating $mat[i][r + 1]$ with $-1$, we can update some array $rarr[r + 1 - i]$ with $-1$. Since the left boundary moves along a diagonal of constant $i + l$ and the right marker along a diagonal of constant $(r + 1) - i$, one such update covers the corresponding range in every row of the triangle at once. The value of a cell $mat[i][j]$ is the sum of the prefix sum of $larr$ over $[0, i+j]$ and the prefix sum of $rarr$ over $[0, j-i]$. Let's create four events for each triangle. For a triangle with vertices $P(p, b)$, $Q(s, b - s + p)$ and $R(s, b + s - p)$, the events added are $larr[p + b] \texttt{+=} 1$ at cell $(p, b)$, $rarr[b + 1 - p] \texttt{-=} 1$ at cell $(p, b + 1)$, $larr[p + b] \texttt{-=} 1$ at cell $(s + 1, b - s - 1 + p)$, and $rarr[b + 1 - p] \texttt{+=} 1$ at cell $(s + 1, b + s + 1 - p)$. Also, we add a query event for each person $i$ at cell $(inc_i, pref_i)$.
Let's sort these events by the row of the cell, and within the same row handle triangle events before query events. To maintain the prefix sums of $larr$ and $rarr$, we can use coordinate compression and a data structure like a Fenwick tree. Overall complexity: $O((n + m) \cdot \log(n + m))$
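The triangle condition can be checked directly, and the two diagonal keys used for $larr$ and $rarr$ fall out of rewriting it (a small Python sketch with hypothetical helper names, cross-checking the two formulations):

```python
def in_triangle(p, s, b, x, y):
    """Person (income x, preferred beauty y) can buy dish (price p, standard s, beauty b)."""
    return p <= x <= s and abs(b - y) <= x - p

def in_triangle_diagonals(p, s, b, x, y):
    """Same region via the two diagonals tracked by larr (key x + y) and rarr (key y - x)."""
    return x <= s and x + y >= p + b and y - x <= b - p
```

Note that $x \ge p$ follows from adding the two diagonal inequalities, so it need not be checked separately.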
|
[
"data structures",
"divide and conquer"
] | 2,500
|
import java.util.*;
import static java.lang.Math.*;
public class SolutionF {
static int queryBit(int[] bit, int ind) {
int ans = 0;
while(ind > 0) {
ans += bit[ind];
ind -= Integer.lowestOneBit(ind);
}
return ans;
}
static void updateBit(int[] bit, int ind, int val) {
while(ind < bit.length) {
bit[ind] += val;
ind += Integer.lowestOneBit(ind);
}
}
static TreeMap<Integer, Integer> lmap = new TreeMap<>();
static TreeMap<Integer, Integer> rmap = new TreeMap<>();
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
int n = sc.nextInt();
int m = sc.nextInt();
ArrayList<Event> list = new ArrayList<>();
int p[] = new int[n];
for(int i = 0; i < n; ++i) {
p[i] = sc.nextInt();
}
int s[] = new int[n];
for(int i = 0; i < n; ++i) {
s[i] = sc.nextInt();
}
int b[] = new int[n];
for(int i = 0; i < n; ++i) {
b[i] = sc.nextInt();
}
for(int i = 0; i < n; ++i) {
list.add(new Event(0, p[i], b[i] + p[i], 1));
list.add(new Event(0, s[i] + 1, b[i] + p[i], -1));
list.add(new Event(1, p[i], b[i] + 1 - p[i], -1));
list.add(new Event(1, s[i] + 1, b[i] + 1 - p[i], 1));
}
int inc[] = new int[m];
for(int i = 0; i < m; ++i)
inc[i] = sc.nextInt();
int pb[] = new int[m];
for(int i = 0; i < m; ++i)
pb[i] = sc.nextInt();
for(int i = 0; i < m; ++i) {
lmap.put(pb[i] + inc[i], 1);
rmap.put(pb[i] - inc[i], 1);
list.add(new Event(2, inc[i], pb[i], i));
}
Collections.sort(list, new Comparator<Event>() {
public int compare(Event e1, Event e2) {
if(e1.x < e2.x)
return -1;
if(e1.x > e2.x)
return 1;
if(e1.type < e2.type)
return -1;
if(e1.type > e2.type)
return 1;
return 0;
}
});
int ptr = 1;
for(int i : lmap.keySet()) {
lmap.put(i, ptr++);
}
int lbit[] = new int[ptr];
ptr = 1;
for(int i : rmap.keySet()) {
rmap.put(i, ptr++);
}
int rbit[] = new int[ptr];
int ans[] = new int[m];
for(Event e : list) {
int type = e.type;
if(type == 0) {
if(lmap.higherKey(e.y - 1) != null) {
updateBit(lbit, lmap.get(lmap.higherKey(e.y - 1)), e.val);
}
}
else if(type == 1) {
if(rmap.higherKey(e.y - 1) != null) {
updateBit(rbit, rmap.get(rmap.higherKey(e.y - 1)), e.val);
}
}
else {
int lind = lmap.get(e.y + e.x);
int rind = rmap.get(e.y - e.x);
int ind = e.val;
ans[ind] = queryBit(lbit, lind) + queryBit(rbit, rind);
}
}
for(int i : ans)
System.out.print(i + " ");
}
}
class Event {
// type
// 0 : lupd, 1 : rupd, 2 : people query
int type, x, y, val;
Event(int a, int b, int c, int d) {
type = a;
x = b;
y = c;
val = d;
}
}
|
1140
|
A
|
Detective Book
|
Ivan recently bought a detective book. The book is so interesting that each page of this book introduces some sort of a mystery, which will be explained later. The $i$-th page contains some mystery that will be explained on page $a_i$ ($a_i \ge i$).
Ivan wants to read the whole book. Each day, he reads the first page he didn't read earlier, and continues to read the following pages one by one, until all the mysteries he read about are explained and clear to him (Ivan stops if there does not exist any page $i$ such that Ivan already has read it, but hasn't read page $a_i$). After that, he closes the book and continues to read it on the following day from the next page.
How many days will it take to read the whole book?
|
The solution is pure implementation: simulate the algorithm given in the statement, maintaining the maximum of $a_i$ over the prefix read so far, and ending the current day when this maximum becomes smaller than the index of the next page.
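The simulation can be sketched in a few lines (a Python sketch mirroring the C++ solution; pages are converted to 0-based indices inside):

```python
def days_to_read(a):
    """a[i] (1-based page numbers) explains page i's mystery; return number of days."""
    n = len(a)
    days = pos = 0
    while pos < n:
        days += 1
        mx = pos                      # furthest page that must be read today
        while pos < n and pos <= mx:
            mx = max(mx, a[pos] - 1)  # 0-based index of the explaining page
            pos += 1
    return days
```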
|
[
"implementation"
] | 1,000
|
#include<bits/stdc++.h>
using namespace std;
int n;
vector<int> a;
inline bool read() {
if(!(cin >> n))
return false;
a.resize(n);
for(int i = 0; i < n; i++) {
cin >> a[i];
a[i]--;
}
return true;
}
inline void solve() {
int cnt = 0, pos = 0;
while(pos < n) {
cnt++;
int mx = pos;
while(pos < n && pos <= mx) {
mx = max(mx, a[pos]);
pos++;
}
}
cout << cnt << endl;
}
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
#endif
if(read()) {
solve();
}
return 0;
}
|
1140
|
B
|
Good String
|
You have a string $s$ of length $n$ consisting of only characters > and <. You may do some operations with this string, for each operation you have to choose some character that still remains in the string. If you choose a character >, the character that comes right after it is deleted (if the character you chose was the last one, nothing happens). If you choose a character <, the character that comes right before it is deleted (if the character you chose was the first one, nothing happens).
For example, if we choose character > in string {> \textbf{>} < >}, the string will become > > >. And if we choose character < in string {> \textbf{<}}, the string will become <.
The string is good if there is a sequence of operations such that after performing it only one character will remain in the string. For example, the strings >, > > are good.
\textbf{Before applying the operations}, you may remove any number of characters from the given string (possibly none, possibly up to $n - 1$, but not the whole string). You need to calculate the minimum number of characters to be deleted from string $s$ so that it becomes good.
|
A string is good when either its first character is > or its last character is <. Strings of the form < $\dots$ > are not good, as their first and last characters never change, and such a string eventually reduces to < >. So, the answer is the minimum of two quantities: the number of characters to remove from the beginning of the string so that the first remaining character is >, and the number of characters to remove from the end so that the last remaining character is <.
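A Python sketch of computing this minimum (it scans the prefix and suffix simultaneously, stopping at the first index satisfying either condition):

```python
def min_deletions(s):
    """Minimum characters to delete so the first remaining char is '>' or the last is '<'."""
    n = len(s)
    for i in range(n):
        # deleting i chars from the front exposes s[i]; from the back, s[n-1-i]
        if s[i] == '>' or s[n - 1 - i] == '<':
            return i
    return n - 1   # unreachable for valid inputs, kept as a safe default
```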
|
[
"implementation",
"strings"
] | 1,200
|
#include <bits/stdc++.h>
using namespace std;
int t, n;
string s;
int main(){
cin >> t;
for(int tc = 0; tc < t; ++tc){
cin >> n >> s;
int res = n - 1;
for(int i = 0; i < n; ++i)
if(s[i] == '>' || s[n - 1 - i] == '<')
res = min(res, i);
cout << res << endl;
}
return 0;
}
|
1140
|
C
|
Playlist
|
You have a playlist consisting of $n$ songs. The $i$-th song is characterized by two numbers $t_i$ and $b_i$ — its length and beauty respectively. The pleasure of listening to set of songs is equal to the total length of the songs in the set multiplied by the minimum beauty among them. For example, the pleasure of listening to a set of $3$ songs having lengths $[5, 7, 4]$ and beauty values $[11, 14, 6]$ is equal to $(5 + 7 + 4) \cdot 6 = 96$.
You need to choose \textbf{at most} $k$ songs from your playlist, so that the pleasure of listening to the set of these songs is maximum possible.
|
If we fix the song with minimum beauty in the answer, then we need to take up to $k - 1$ more songs among those having beauty greater than or equal to the beauty of the fixed song, and the longer they are, the better. So, we iterate over the songs in order of decreasing beauty, and for the current song we maintain the $k$ longest songs having greater or equal beauty. This can be done using standard containers: set in C++ or TreeSet in Java.
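A min-heap works just as well as an ordered set for keeping the $k$ longest songs (a Python sketch of the sweep described above):

```python
import heapq

def max_pleasure(songs, k):
    """songs: list of (length t, beauty b); pick at most k songs maximizing (sum t) * min b."""
    songs = sorted(songs, key=lambda tb: -tb[1])   # decreasing beauty
    heap, total, best = [], 0, 0
    for t, b in songs:
        heapq.heappush(heap, t)
        total += t
        if len(heap) > k:                          # drop the shortest kept song
            total -= heapq.heappop(heap)
        best = max(best, total * b)                # b is the minimum beauty so far
    return best
```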
|
[
"brute force",
"data structures",
"sortings"
] | 1,600
|
#include <bits/stdc++.h>
using namespace std;
const int N = 300009;
int n, k;
pair<int, int> a[N];
int main() {
cin >> n >> k;
for(int i = 0; i < n; ++i)
cin >> a[i].second >> a[i].first;
sort(a, a + n);
long long res = 0;
long long sum = 0;
set<pair<int, int> > s;
for(int i = n - 1; i >= 0; --i){
s.insert(make_pair(a[i].second, i));
sum += a[i].second;
while(s.size() > k){
auto it = s.begin();
sum -= it->first;
s.erase(it);
}
res = max(res, sum * a[i].first);
}
cout << res << endl;
return 0;
}
|
1140
|
D
|
Minimum Triangulation
|
You are given a regular polygon with $n$ vertices labeled from $1$ to $n$ in counter-clockwise order. The triangulation of a given polygon is a set of triangles such that each vertex of each triangle is a vertex of the initial polygon, there is no pair of triangles such that their intersection has non-zero area, and the total area of all triangles is equal to the area of the given polygon. The weight of a triangulation is the sum of weights of the triangles it consists of, where the weight of a triangle is defined as the product of the labels of its vertices.
Calculate the minimum weight among all triangulations of the polygon.
|
You can use the straightforward approach and calculate the answer with an "l-r" interval DP in $O(n^3)$. But there is an easier claim: it's optimal to split the $n$-gon with diagonals coming from vertex $1$, so the answer is $\sum\limits_{i = 2}^{n - 1}{i \cdot (i + 1)}$. Proof: let's look at the triangle which contains edge $1-n$. Let's name it $1-n-x$. If $x = n - 1$, we can delete this triangle and reduce to an $(n-1)$-gon. Otherwise, $1 < x < n - 1$. Let's look at triangle $n-x-k$. It always exists and $x < k < n$. Finally, if we replace the pair of triangles ($1-n-x$, $n-x-k$) with ($1-n-k$, $1-k-x$), the answer decreases, since $nx > kx$ and $nxk > nk$, hence $nx + nxk > nk + kx$. Note that triangle $1-n-x$ changes to $1-n-k$ with $k > x$, so repeating this step eventually leads to the situation $x = n - 1$. As a result, we can morph any triangulation into the one mentioned above without increasing its weight.
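Both the closed formula and the $O(n^3)$ interval DP it replaces are short enough to sketch and cross-check (a Python sketch; the DP is only here to validate the greedy claim on small $n$):

```python
from functools import lru_cache

def min_triangulation(n):
    """Fan triangulation from vertex 1: sum of i*(i+1) for i = 2..n-1."""
    return sum(i * (i + 1) for i in range(2, n))

def min_triangulation_dp(n):
    """O(n^3) interval DP over vertices l..r, cross-checking the formula."""
    @lru_cache(maxsize=None)
    def f(l, r):
        if r - l < 2:
            return 0
        # choose the third vertex m of the triangle on edge (l, r)
        return min(f(l, m) + f(m, r) + l * m * r for m in range(l + 1, r))
    return f(1, n)
```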
|
[
"dp",
"greedy",
"math"
] | 1,200
|
#include<bits/stdc++.h>
using namespace std;
int main() {
int n; cin >> n;
long long ans = 0;
for(int id = 2; id < n; id++)
ans += 1ll * id * (id + 1);
cout << ans << endl;
}
|
1140
|
E
|
Palindrome-less Arrays
|
Let's denote that some array $b$ is bad if it contains a subarray $b_l, b_{l+1}, \dots, b_{r}$ of odd length more than $1$ ($l < r$ and $r - l + 1$ is odd) such that $\forall i \in \{0, 1, \dots, r - l\}$ $b_{l + i} = b_{r - i}$.
If an array is not bad, it is \textbf{good}.
Now you are given an array $a_1, a_2, \dots, a_n$. Some elements are replaced by $-1$. Calculate the number of good arrays you can obtain by replacing each $-1$ with some integer from $1$ to $k$.
Since the answer can be large, print it modulo $998244353$.
|
At first, "array contains a palindromic subarray of length $\ge 3$" is equivalent to "array contains a palindromic subarray of length exactly $3$". So we need to calculate the number of arrays without palindromes of length $3$, which is equivalent to counting arrays where $a[i] \neq a[i + 2]$ for all appropriate $i$. Note that $i$ and $i + 2$ have the same parity, so the odd and the even positions of the array are independent, and the answer is the product of the number of ways to fill the odd positions and the number of ways to fill the even positions. Within one parity class the condition becomes $a[i] \neq a[i + 1]$, and we need to count all ways to replace the ($-1$)-s such that all pairs of consecutive elements differ. To calculate it, let's look at maximal runs of consecutive ($-1$)-s. They look like $a, -1, -1, \dots, -1, b$ with $l$ ($-1$)-s, where $a$ and $b$ are fixed values (the case where $a$ is absent is handled by treating the first $-1$ as a new endpoint and summing over its $k$ possible values; the case with absent $b$ is solved the same way). In the end we need a way to count such sequences. There are only two fundamental types of runs: $a, -1, \dots, -1, a$ (same value at both ends) and $a, -1, \dots, -1, b$ ($a \neq b$); the exact values of $a$ and $b$ don't really matter. Let's find a way to calculate both counts (name them $cntSame$ and $cntDiff$) for $l$ consecutive ($-1$)-s in $O(\log{l})$ time. Base values: $cntSame(0) = 0, cntDiff(0) = 1$. Let's choose the value of the $-1$ in the middle of the run: if $l \bmod 2 = 1$, then we can split the run into two runs of length $\lfloor l / 2 \rfloor$, and $cntSame(l) = cntSame(l / 2)^2 + (k - 1) \cdot cntDiff(l / 2)^2$ and $cntDiff(l) = 2 \cdot cntSame(l / 2) \cdot cntDiff(l / 2) + (k - 2) \cdot cntDiff(l / 2)^2$. 
If $l \bmod 2 = 0$, then just iterate over the value of the last $-1$: $cntSame(l) = (k - 1) \cdot cntDiff(l - 1)$ and $cntDiff(l) = cntSame(l - 1) + (k - 2) \cdot cntDiff(l - 1)$. The resulting complexity is $O(n)$, since the total cost of the $calc$ calls, $\sum O(\log l_i)$, is bounded by $\sum l_i \le n$.
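The recurrences can be sketched directly (a Python sketch mirroring the editorial's $cntSame$/$cntDiff$ divide and conquer; values below are small enough to verify by hand):

```python
def calc(l, k, MOD=998244353):
    """(cntSame, cntDiff) for a run of l free cells between two fixed endpoints,
    where adjacent values must differ and values come from 1..k."""
    if l == 0:
        return 0, 1
    if l % 2 == 1:
        # split at the middle cell: its value equals the endpoint, or not
        s, d = calc(l // 2, k, MOD)
        return ((s * s + (k - 1) * d * d) % MOD,
                (2 * s * d + (k - 2) * d * d) % MOD)
    # even length: iterate over the value of the last free cell
    s, d = calc(l - 1, k, MOD)
    return ((k - 1) * d % MOD, (s + (k - 2) * d) % MOD)
```

For instance, with $k = 3$ and $l = 3$ the counts $(6, 5)$ agree with counting walks of length $4$ on $K_3$.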
|
[
"combinatorics",
"divide and conquer",
"dp"
] | 2,200
|
#include<bits/stdc++.h>
using namespace std;
#define fore(i, l, r) for(int i = int(l); i < int(r); i++)
#define sz(a) int((a).size())
#define x first
#define y second
typedef long long li;
typedef pair<int, int> pt;
const int MOD = 998244353;
int norm(int a) {
while(a >= MOD) a -= MOD;
while(a < 0) a += MOD;
return a;
}
int mul(int a, int b) {
return int(a * 1ll * b % MOD);
}
int n, k;
vector<int> a;
inline bool read() {
if(!(cin >> n >> k))
return false;
a.resize(n);
fore(i, 0, n)
cin >> a[i];
return true;
}
pair<int, int> calc(int len) {
if(len == 0) return {0, 1};
if(len & 1) {
auto res = calc(len >> 1);
return {norm(mul(res.x, res.x) + mul(k - 1, mul(res.y, res.y))),
norm(mul(2, mul(res.x, res.y)) + mul(k - 2, mul(res.y, res.y)))};
}
auto res = calc(len - 1);
return {mul(k - 1, res.y), norm(res.x + mul(k - 2, res.y))};
}
vector<int> curArray;
int calcSeg(int l, int r) {
if(r >= sz(curArray)) {
int len = r - l - 1, cf = 1;
if(l < 0)
len--, cf = k;
if(len == 0) return cf;
auto res = calc(len - 1);
return mul(cf, norm(res.x + mul(k - 1, res.y)));
}
if(l < 0) {
if(r - l == 1) return 1;
auto res = calc(r - l - 2);
return norm(res.x + mul(k - 1, res.y));
}
auto res = calc(r - l - 1);
return curArray[l] == curArray[r] ? res.x : res.y;
}
inline void solve() {
int ans = 1;
fore(k, 0, 2) {
curArray.clear();
for(int i = 0; 2 * i + k < n; i++)
curArray.push_back(a[2 * i + k]);
int lst = -1;
fore(i, 0, sz(curArray)){
if(curArray[i] == -1) continue;
ans = mul(ans, calcSeg(lst, i));
lst = i;
}
ans = mul(ans, calcSeg(lst, sz(curArray)));
}
cout << ans << endl;
}
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
int tt = clock();
#endif
ios_base::sync_with_stdio(0);
cin.tie(0), cout.tie(0);
cout << fixed << setprecision(15);
if(read()) {
solve();
#ifdef _DEBUG
cerr << "TIME = " << clock() - tt << endl;
tt = clock();
#endif
}
return 0;
}
|
1140
|
F
|
Extending Set of Points
|
For a given set of two-dimensional points $S$, let's denote its extension $E(S)$ as the result of the following algorithm:
Create another set of two-dimensional points $R$, which is initially equal to $S$. Then, while there exist four numbers $x_1$, $y_1$, $x_2$ and $y_2$ such that $(x_1, y_1) \in R$, $(x_1, y_2) \in R$, $(x_2, y_1) \in R$ and $(x_2, y_2) \notin R$, add $(x_2, y_2)$ to $R$. When it is impossible to find such four integers, let $R$ be the result of the algorithm.
Now for the problem itself. You are given a set of two-dimensional points $S$, which is initially empty. You have to process two types of queries: add some point to $S$, or remove some point from it. After each query you have to compute the size of $E(S)$.
|
Let's analyze how the size of $E(S)$ can be calculated. Connect points having the same $x$-coordinate to each other, and do the same for points having the same $y$-coordinate. Then we can solve the problem for each connected component separately: after the algorithm runs, a component will contain exactly the points $(X, Y)$ such that at least one point in the component has $x$-coordinate equal to $X$, and at least one point in the component (maybe the same, maybe another one) has $y$-coordinate equal to $Y$. So the answer for each component is the product of the number of distinct $x$-coordinates and the number of distinct $y$-coordinates in the component. Now we can process insertion queries: there are many ways to do it, but, in my opinion, the easiest way to handle them is to create a separate vertex for every $x$-coordinate and every $y$-coordinate, and treat each point as an edge connecting the vertices corresponding to its coordinates (edges can be added easily using a DSU with the rank heuristic). To handle removals, we get rid of them completely: transform the input into a set of $O(q)$ events "some point exists from query $l$ to query $r$". Then build a segment tree over the queries, and break each event into $O(\log q)$ segments of this segment tree. Then we can initialize a DSU and run a DFS on the vertices of the segment tree to get the answers for all queries. When we enter a node, we add all edges that exist on the corresponding segment into the DSU. If we are in a leaf node, we can compute $|E(S)|$ for the corresponding query. And when we leave a node, we roll back all the changes made when entering it. One important point is that path compression in the DSU is useless here, since it doesn't work well with rollbacks. This solution works in $O(q \log^2 q)$.
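A minimal sketch of a DSU with rollbacks (union by size, no path compression), as required by the DFS over the segment tree; names are illustrative, not the model solution's:

```python
class RollbackDSU:
    """DSU with union by size and an undo stack; find does no path compression,
    so every union changes at most one parent pointer and can be reverted."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.history = []            # merged root per union, or None for a no-op

    def find(self, x):
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def union(self, a, b):
        a, b = self.find(a), self.find(b)
        if a == b:
            self.history.append(None)
            return False
        if self.size[a] < self.size[b]:
            a, b = b, a
        self.history.append(b)       # b becomes a child of a
        self.parent[b] = a
        self.size[a] += self.size[b]
        return True

    def rollback(self):
        b = self.history.pop()
        if b is not None:
            a = self.parent[b]
            self.parent[b] = b
            self.size[a] -= self.size[b]
```

In the full solution, the union would also maintain the $\sum (\#x\text{-coords}) \cdot (\#y\text{-coords})$ answer, updated and reverted alongside the sizes.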
|
[
"data structures",
"divide and conquer",
"dsu"
] | 2,600
|
#include <bits/stdc++.h>
using namespace std;
typedef long long li;
#define x first
#define y second
const int N = 300043;
const int K = 300000;
int p[2 * N];
int s1[2 * N];
int s2[2 * N];
li ans = 0;
int* where[80 * N];
int val[80 * N];
int cur = 0;
void change(int& x, int y)
{
where[cur] = &x;
val[cur] = x;
x = y;
cur++;
}
void rollback()
{
cur--;
(*where[cur]) = val[cur];
}
int get(int x)
{
if(p[x] == x)
return x;
return get(p[x]);
}
void merge(int x, int y)
{
x = get(x);
y = get(y);
if(x == y) return;
ans -= s1[x] * 1ll * s2[x];
ans -= s1[y] * 1ll * s2[y];
if(s1[x] + s2[x] < s1[y] + s2[y])
swap(x, y);
change(p[y], x);
change(s1[x], s1[x] + s1[y]);
change(s2[x], s2[x] + s2[y]);
ans += s1[x] * 1ll * s2[x];
}
void init()
{
for(int i = 0; i < K; i++)
{
p[i] = i;
p[i + K] = i + K;
s1[i] = 1;
s2[i + K] = 1;
}
}
vector<pair<int, int> > T[4 * N];
void add(int v, int l, int r, int L, int R, pair<int, int> val)
{
if(L >= R) return;
if(L == l && R == r)
T[v].push_back(val);
else
{
int m = (l + r) / 2;
add(v * 2 + 1, l, m, L, min(R, m), val);
add(v * 2 + 2, m, r, max(m, L), R, val);
}
}
li res[N];
void dfs(int v, int l, int r)
{
li last_ans = ans;
int state = cur;
for(auto x : T[v])
merge(x.x, x.y + K);
if(l == r - 1)
res[l] = ans;
else
{
int m = (l + r) / 2;
dfs(v * 2 + 1, l, m);
dfs(v * 2 + 2, m, r);
}
while(cur != state) rollback();
ans = last_ans;
}
int main()
{
int q;
scanf("%d", &q);
map<pair<int, int>, int> last;
for(int i = 0; i < q; i++)
{
int x, y;
scanf("%d %d", &x, &y);
--x;
--y;
pair<int, int> p = make_pair(x, y);
if(last.count(p))
{
add(0, 0, q, last[p], i, p);
last.erase(p);
}
else
last[p] = i;
}
for(auto x : last)
add(0, 0, q, x.y, q, x.x);
init();
dfs(0, 0, q);
for(int i = 0; i < q; i++)
printf("%lld ", res[i]);
}
|
1140
|
G
|
Double Tree
|
You are given a special undirected graph. It consists of $2n$ vertices numbered from $1$ to $2n$. The following properties hold for the graph:
- there are exactly $3n-2$ edges in the graph: $n$ edges connect vertices having odd numbers with vertices having even numbers, $n - 1$ edges connect vertices having odd numbers with each other, and $n - 1$ edges connect vertices having even numbers with each other;
- for each edge $(u, v)$ between a pair of vertices with odd numbers, there exists an edge $(u + 1, v + 1)$, and vice versa;
- for each odd number $u \in [1, 2n - 1]$, there exists an edge $(u, u + 1)$;
- the graph is connected; moreover, if we delete all vertices with even numbers from it, and all edges incident to them, the graph will become a tree (the same applies to deleting odd vertices).
So, the graph can be represented as two trees having the same structure, and $n$ edges connecting each vertex of the first tree to the corresponding vertex of the second tree.
Edges of the graph are weighted. The length of some simple path in the graph is the sum of weights of traversed edges.
You are given $q$ queries to this graph; in each query, you are asked to compute the length of the shortest path between some pair of vertices in this graph. Can you answer all of the queries?
|
Suppose we want to minimize the number of traversed edges of the second type (edges that connect odd vertices to each other or even vertices to each other), and minimizing the length of the path has lower priority. Then we exactly know the number of edges of the second type we will use to get from one vertex to another; and when building a path, we each time either jump from one "tree" to another using an edge of the first type, or use the only edge of the second type that brings us closer to the vertex we want to reach. So, in this case the problem can be solved either by binary lifting or by centroid decomposition. The model solution uses the latter: merge the graph into one tree (vertices $2i - 1$ and $2i$ of the original graph merge into vertex $i$ in the tree), build its centroid decomposition, and for each centroid $c$ and vertex $v$ of its centroid-subtree calculate the length of the shortest path from $2c - 1$ and $2c$ to $2v - 1$ and $2v$ using dynamic programming. Then the answer for each pair of vertices $u$ and $v$ may be calculated as follows: find the deepest centroid $c$ controlling both vertices, and try either the shortest path $u \rightarrow 2c - 1 \rightarrow v$ or the shortest path $u \rightarrow 2c \rightarrow v$. But this solution won't work in the original problem, because sometimes we want to choose an edge of the second type that leads us further from the vertex we want to reach in the merged tree, but allows us to use a cheaper edge of the first type to jump from one tree to another. Let's make this situation impossible! We may change the weights of all edges of the first type so that the weight of the edge between $2i - 1$ and $2i$ becomes the length of the shortest path between $2i - 1$ and $2i$. This can be done by solving an SSSP problem: build a graph of $n + 1$ vertices, where each vertex $i$ from $1$ to $n$ represents the pair of vertices $2i - 1$ and $2i$. 
Add a directed edge with weight equal to $w_{2i - 1, 2i}$ going from vertex $0$ to vertex $i$. And finally, for every pair $i, j$ such that $2i - 1$ and $2j - 1$ are connected by an edge of weight $w_1$, and $2i$ and $2j$ are connected by an edge of weight $w_2$, add an undirected edge connecting $i$ and $j$ in the new graph (its weight should be $w_1 + w_2$). Then the distance from $0$ to $i$ in this graph will be equal to the length of the shortest path from $2i - 1$ to $2i$ in the original graph.
|
[
"data structures",
"divide and conquer",
"shortest paths",
"trees"
] | 2,700
|
#include <bits/stdc++.h>
using namespace std;
typedef long long li;
#define x first
#define y second
const int N = 600043;
li d1[N];
li d2[N][2];
li dist_temp[N][2];
vector<pair<int, int> > g[N];
vector<int> qs[N];
int qq[N][2];
li ans[N];
int n;
int used[N];
int cnt[N];
int last[N];
int cc = 1;
void preprocess()
{
set<pair<li, int> > q;
for(int i = 0; i < n; i++)
q.insert(make_pair(d1[i], i));
while(!q.empty())
{
int k = q.begin()->second;
q.erase(q.begin());
for(auto e : g[k])
{
int to = e.first;
li w = d2[e.second][0] + d2[e.second][1];
if(d1[to] > w + d1[k])
{
q.erase(make_pair(d1[to], to));
d1[to] = w + d1[k];
q.insert(make_pair(d1[to], to));
}
}
}
}
void dfs1(int x, int p = -1)
{
if(used[x]) return;
cnt[x] = 1;
for(auto y : g[x])
{
int to = y.first;
if(!used[to] && to != p)
{
dfs1(to, x);
cnt[x] += cnt[to];
}
}
}
vector<int> cur_queries;
pair<int, int> c;
int S = 0;
void find_centroid(int x, int p = -1)
{
if(used[x]) return;
int mx = 0;
for(auto y : g[x])
{
int to = y.first;
if(!used[to] && to != p)
{
find_centroid(to, x);
mx = max(mx, cnt[to]);
}
}
if(p != -1)
mx = max(mx, S - cnt[x]);
c = min(c, make_pair(mx, x));
}
void dfs2(int x, int p = -1, int e = -1)
{
if(used[x]) return;
if(p == -1)
{
dist_temp[x * 2][0] = dist_temp[x * 2 + 1][1] = 0ll;
dist_temp[x * 2][1] = dist_temp[x * 2 + 1][0] = d1[x];
}
else
{
for(int k = 0; k < 2; k++)
{
li& D0 = dist_temp[x * 2][k];
li& D1 = dist_temp[x * 2 + 1][k];
D0 = dist_temp[p * 2][k] + d2[e][0];
D1 = dist_temp[p * 2 + 1][k] + d2[e][1];
D0 = min(D0, D1 + d1[x]);
D1 = min(D1, D0 + d1[x]);
}
}
for(auto y : qs[x])
{
if(ans[y] != -1) continue;
if(last[y] == cc)
cur_queries.push_back(y);
else
last[y] = cc;
}
for(auto y : g[x])
{
int to = y.first;
if(!used[to] && to != p)
dfs2(to, x, y.second);
}
}
void centroid(int v)
{
if(used[v]) return;
dfs1(v);
S = cnt[v];
c = make_pair(int(1e9), -1);
find_centroid(v);
int C = c.second;
used[C] = 1;
for(auto y : g[C])
centroid(y.first);
cc++;
cur_queries.clear();
used[C] = 0;
dfs2(C);
for(auto x : cur_queries)
{
int u = qq[x][0];
int v = qq[x][1];
ans[x] = min(dist_temp[u][0] + dist_temp[v][0], dist_temp[u][1] + dist_temp[v][1]);
}
}
int main()
{
scanf("%d", &n);
for(int i = 0; i < n; i++)
scanf("%lld", &d1[i]);
for(int i = 0; i < n - 1; i++)
{
int x, y;
li w1, w2;
scanf("%d %d %lld %lld", &x, &y, &w1, &w2);
--x;
--y;
d2[i][0] = w1;
d2[i][1] = w2;
g[x].push_back(make_pair(y, i));
g[y].push_back(make_pair(x, i));
}
int q;
scanf("%d", &q);
for(int i = 0; i < q; i++)
{
scanf("%d", &qq[i][0]);
scanf("%d", &qq[i][1]);
--qq[i][0];
--qq[i][1];
qs[qq[i][0] / 2].push_back(i);
qs[qq[i][1] / 2].push_back(i);
}
preprocess();
for(int i = 0; i < q; i++)
ans[i] = -1;
centroid(0);
for(int i = 0; i < q; i++)
cout << ans[i] << endl;
}
|
1141
|
A
|
Game 23
|
Polycarp plays "Game 23". Initially he has a number $n$ and his goal is to transform it to $m$. In one move, he can multiply $n$ by $2$ or multiply $n$ by $3$. He can perform any number of moves.
Print the number of moves needed to transform $n$ to $m$. Print -1 if it is impossible to do so.
It is easy to prove that any way to transform $n$ to $m$ contains the same number of moves (i.e. number of moves doesn't depend on the way of transformation).
|
If $m$ is not divisible by $n$ then just print -1 and stop the program. Otherwise, calculate $d=m/n$, the factor by which $n$ must be multiplied. It is easy to see that $d$ should be a product of zero or more $2$'s and zero or more $3$'s, i.e. $d=2^x3^y$ for integers $x,y \ge 0$. To find $x$, just use a loop to divide $d$ by $2$ while it is divisible by $2$. Similarly, to find $y$, use a loop to divide $d$ by $3$ while it is divisible by $3$. After the divisions, the expected value of $d$ is $1$. If $d \ne 1$, print -1. Otherwise, print the total number of loop iterations.
|
[
"implementation",
"math"
] | 1,000
|
int n, m;
cin >> n >> m;
int result = -1;
if (m % n == 0) {
result = 0;
int d = m / n;
while (d % 2 == 0)
d /= 2, result++;
while (d % 3 == 0)
d /= 3, result++;
if (d != 1)
result = -1;
}
cout << result << endl;
|
1141
|
B
|
Maximal Continuous Rest
|
Each day in Berland consists of $n$ hours. Polycarp likes time management. That's why he has a fixed schedule for each day — it is a sequence $a_1, a_2, \dots, a_n$ (each $a_i$ is either $0$ or $1$), where $a_i=0$ if Polycarp works during the $i$-th hour of the day and $a_i=1$ if Polycarp rests during the $i$-th hour of the day.
Days go one after another endlessly and Polycarp uses the same schedule for each day.
What is the maximal number of continuous hours during which Polycarp rests? It is guaranteed that there is at least one working hour in a day.
|
At first, consider processing only one day. In this case just iterate over the hours and maintain $len$ - the length of the current rest block (i.e. if the element equals $1$ then increase $len$; if the element equals $0$ then reset $len$ to $0$). The maximum intermediate value of $len$ is the answer. In case of multiple days, consider the given sequence as a cyclic sequence. Concatenate the sequence with itself and solve the previous case. Of course, it is not necessary to concatenate it explicitly: just use $a[i~\%~n]$ instead of $a[i]$ and process $i=0 \dots 2\cdot n-1$.
|
[
"implementation"
] | 900
|
int n;
cin >> n;
vector<int> a(n);
for (int i = 0; i < n; i++)
cin >> a[i];
int result = 0;
int len = 0;
for (int i = 0; i < 2 * n; i++) {
if (a[i % n] == 1) {
len++;
result = max(result, len);
} else {
len = 0;
}
}
cout << result << endl;
|
1141
|
C
|
Polycarp Restores Permutation
|
An array of integers $p_1, p_2, \dots, p_n$ is called a permutation if it contains each number from $1$ to $n$ exactly once. For example, the following arrays are permutations: $[3, 1, 2]$, $[1]$, $[1, 2, 3, 4, 5]$ and $[4, 3, 1, 2]$. The following arrays are not permutations: $[2]$, $[1, 1]$, $[2, 3, 4]$.
Polycarp invented a really cool permutation $p_1, p_2, \dots, p_n$ of length $n$. It is very disappointing, but he forgot this permutation. He only remembers the array $q_1, q_2, \dots, q_{n-1}$ of length $n-1$, where $q_i=p_{i+1}-p_i$.
Given $n$ and $q=q_1, q_2, \dots, q_{n-1}$, help Polycarp restore the invented permutation.
|
Let $p[1]=x$. Thus, $p[2]=p[1]+(p[2]-p[1])=x+q[1]$, $p[3]=p[1]+(p[2]-p[1])+(p[3]-p[2])=x+q[1]+q[2]$, ..., $p[n]=p[1]+(p[2]-p[1])+(p[3]-p[2])+\dots+(p[n]-p[n-1])=x+q[1]+q[2]+\dots+q[n-1]$. It means that the sequence of $n$ partial sums $p'=[0, q[1], q[1]+q[2], \dots, q[1]+q[2]+\dots+q[n-1]]$ becomes the required permutation if we add $x$ to each element. The value of $x$ is unknown yet. Find such $i$ that $p'[i]$ is minimal. Then $x=1-p'[i]$: exactly this value makes $p'[i]$ equal to $1$ after you add $x$. So, add $x$ to each element of $p'$ and check that the result is a permutation. You probably need to use long long to avoid possible integer overflows.
|
[
"math"
] | 1,500
|
int n;
cin >> n;
vector<int> q(n - 1);
long long sum = 0;
long long min_val = 0;
for (int i = 0; i + 1 < n; i++) {
cin >> q[i];
sum += q[i];
if (sum < min_val)
min_val = sum;
}
vector<long long> p(n);
p[0] = 1 - min_val;
for (int i = 0; i + 1 < n; i++)
p[i + 1] = p[i] + q[i];
bool ok = true;
for (int i = 0; i < n; i++)
if (p[i] < 1 || p[i] > n)
ok = false;
if (ok)
ok = set<long long>(p.begin(), p.end()).size() == n;
if (ok) {
for (int i = 0; i < n; i++)
cout << p[i] << " ";
} else
cout << -1 << endl;
|
1141
|
D
|
Colored Boots
|
There are $n$ left boots and $n$ right boots. Each boot has a color which is denoted as a lowercase Latin letter or a question mark ('?'). Thus, you are given two strings $l$ and $r$, both of length $n$. The character $l_i$ stands for the color of the $i$-th left boot and the character $r_i$ stands for the color of the $i$-th right boot.
A lowercase Latin letter denotes a specific color, but the question mark ('?') denotes an indefinite color. Two specific colors are compatible if they are exactly the same. An indefinite color is compatible with any (specific or indefinite) color.
For example, the following pairs of colors are compatible: ('f', 'f'), ('?', 'z'), ('a', '?') and ('?', '?'). The following pairs of colors are not compatible: ('f', 'g') and ('a', 'z').
Compute the maximum number of pairs of boots such that there is one left and one right boot in a pair and their colors are compatible.
Print the maximum number of such pairs and the pairs themselves. A boot can be part of at most one pair.
|
Use a greedy approach in this problem. At first, match the pairs whose colors are exactly the same (and specific, not indefinite). After that, match each indefinitely colored left boot (if any) with any specifically colored right boot. Possibly, some indefinitely colored left boots stay unmatched. Similarly, match each indefinitely colored right boot (if any) with any specifically colored left boot. And finally, match the remaining indefinitely colored left and right boots (if any).
|
[
"greedy",
"implementation"
] | 1,500
|
#define forn(i, n) for (int i = 0; i < int(n); i++)
const int A = 26;
...
int n;
cin >> n;
string l;
cin >> l;
vector<vector<int>> left(A);
vector<int> wl;
forn(i, n)
if (l[i] != '?')
left[l[i] - 'a'].push_back(i);
else
wl.push_back(i);
string r;
cin >> r;
vector<vector<int>> right(A);
vector<int> wr;
forn(i, n)
if (r[i] != '?')
right[r[i] - 'a'].push_back(i);
else
wr.push_back(i);
vector<pair<int,int>> p;
vector<int> cl(A), cr(A);
forn(i, A) {
forn(j, min(left[i].size(), right[i].size()))
p.push_back(make_pair(left[i][j] + 1, right[i][j] + 1));
cl[i] = cr[i] = min(left[i].size(), right[i].size());
}
forn(i, A)
while (cl[i] < left[i].size() && !wr.empty()) {
p.push_back(make_pair(left[i][cl[i]] + 1, wr[wr.size() - 1] + 1));
cl[i]++;
wr.pop_back();
}
forn(i, A)
while (cr[i] < right[i].size() && !wl.empty()) {
p.push_back(make_pair(wl[wl.size() - 1] + 1, right[i][cr[i]] + 1));
wl.pop_back();
cr[i]++;
}
forn(j, min(wl.size(), wr.size()))
p.push_back(make_pair(wl[j] + 1, wr[j] + 1));
cout << p.size() << endl;
for (auto pp: p)
cout << pp.first << " " << pp.second << endl;
|
1141
|
E
|
Superhero Battle
|
A superhero fights with a monster. The battle consists of rounds, each of which lasts exactly $n$ minutes. After a round ends, the next round starts immediately. This is repeated over and over again.
Each round has the same scenario. It is described by a sequence of $n$ numbers: $d_1, d_2, \dots, d_n$ ($-10^6 \le d_i \le 10^6$). The $i$-th element means that monster's hp (hit points) changes by the value $d_i$ during the $i$-th minute of each round. Formally, if before the $i$-th minute of a round the monster's hp is $h$, then after the $i$-th minute it changes to $h := h + d_i$.
The monster's initial hp is $H$. It means that before the battle the monster has $H$ hit points. Print the first minute after which the monster dies. The monster dies if its hp is less than or equal to $0$. Print -1 if the battle continues infinitely.
|
In general the answer looks like: some number of complete (full) round cycles plus some prefix of the round. Check the corner case that there are no complete rounds at all (just check the first round in a naive way). If no solution is found, the answer has at least one complete cycle and some prefix. If the total sum in one round is not negative, then a complete cycle doesn't help and it is again the no-solution case. Let's find the number of complete cycles. We need such a number of cycles $x$ that if you multiply $x$ by the total sum and add some prefix partial sum, the result, taken with a negative sign (the sums are hp changes, so damage makes them negative), is greater than or equal to $H$. So, to find $x$, add the minimal prefix partial sum to $H$ and divide the result by minus the total sum. Now that you know the number of complete cycles, just iterate over the last round in a naive way to find the answer.
|
[
"math"
] | 1,700
|
long long H;
int n;
cin >> H >> n;
vector<long long> a(n);
long long sum = 0;
long long gap = 0;
long long h = H;
for (int i = 0; i < n; i++) {
cin >> a[i];
sum -= a[i];
h += a[i];
if (h <= 0) {
cout << i + 1 << endl;
return 0;
}
gap = max(gap, sum);
}
if (sum <= 0) {
cout << -1 << endl;
return 0;
}
long long whole = (H - gap) / sum;
H -= whole * sum;
long long result = whole * n;
for (int i = 0;; i++) {
H += a[i % n];
result++;
if (H <= 0) {
cout << result << endl;
break;
}
}
|
1141
|
F2
|
Same Sum Blocks (Hard)
|
This problem is given in two editions, which differ exclusively in the constraints on the number $n$.
You are given an array of integers $a[1], a[2], \dots, a[n].$ A block is a sequence of contiguous (consecutive) elements $a[l], a[l+1], \dots, a[r]$ ($1 \le l \le r \le n$). Thus, a block is defined by a pair of indices $(l, r)$.
Find a set of blocks $(l_1, r_1), (l_2, r_2), \dots, (l_k, r_k)$ such that:
- They do not intersect (i.e. they are disjoint). Formally, for each pair of blocks $(l_i, r_i)$ and $(l_j, r_j$) where $i \neq j$ either $r_i < l_j$ or $r_j < l_i$.
- For each block the sum of its elements is the same. Formally, $$a[l_1]+a[l_1+1]+\dots+a[r_1]=a[l_2]+a[l_2+1]+\dots+a[r_2]=$$ $$\dots =$$ $$a[l_k]+a[l_k+1]+\dots+a[r_k].$$
- The number of the blocks in the set is maximum. Formally, there does not exist a set of blocks $(l_1', r_1'), (l_2', r_2'), \dots, (l_{k'}', r_{k'}')$ satisfying the above two requirements with $k' > k$.
\begin{center}
{\small The picture corresponds to the first example. Blue boxes illustrate blocks.}
\end{center}
Write a program to find such a set of blocks.
|
Let $x$ be the common sum of the blocks in the answer. Obviously, $x$ can be represented as a sum of some adjacent elements of $a$, i.e. $x=a[l]+a[l+1]+\dots+a[r]$ for some $l$ and $r$. Iterate over all possible blocks in $O(n^2)$ and for each sum store all the blocks. You can use 'map<int, vector<pair<int,int>>>' to store the blocks grouped by sum (see the code below). Note that within each group the blocks are sorted by the right end. After that you can independently try each group (there are $O(n^2)$ of them) and find the maximal disjoint set of blocks of a group. You can do it greedily, each time taking into the answer the segment with the smallest right end. Since in each group the blocks are ordered by the right end, you can find the required maximal disjoint block set in one pass: assuming $pp$ is the current group of blocks (ordered by the right end), the code below constructs the maximal disjoint set. Finally, choose the maximum among the maximal disjoint sets of all groups.
|
[
"data structures",
"greedy"
] | 1,900
|
int n;
cin >> n;
vector<int> a(n);
for (int i = 0; i < n; i++)
cin >> a[i];
map<int, vector<pair<int,int>>> segs;
for (int r = 0; r < n; r++) {
int sum = 0;
for (int l = r; l >= 0; l--) {
sum += a[l];
segs[sum].push_back({l, r});
}
}
int result = 0;
vector<pair<int,int>> best;
for (const auto& p: segs) {
const vector<pair<int,int>>& pp = p.second;
int cur = 0;
int r = -1;
vector<pair<int,int>> now;
for (auto seg: pp)
if (seg.first > r) {
cur++;
now.push_back(seg);
r = seg.second;
}
if (cur > result) {
result = cur;
best = now;
}
}
cout << result << endl;
for (auto seg: best)
cout << seg.first + 1 << " " << seg.second + 1 << endl;
|
1141
|
G
|
Privatization of Roads in Treeland
|
Treeland consists of $n$ cities and $n-1$ roads. Each road is bidirectional and connects two distinct cities. From any city you can get to any other city by roads. Yes, you are right — the country's topology is an undirected tree.
There are some private road companies in Treeland. The government decided to sell roads to the companies. Each road will belong to one company and a company can own multiple roads.
The government is afraid to look unfair. They think that people in a city can consider them unfair if there is one company which owns two or more roads entering the city. The government wants to make such privatization that the number of such cities doesn't exceed $k$ and the number of companies taking part in the privatization is minimal.
Choose the number of companies $r$ such that it is possible to assign each road to one company in such a way that the number of cities that have two or more roads of one company is at most $k$. In other words, if for a city all the roads belong to the different companies then the city is good. Your task is to find the minimal $r$ that there is such assignment to companies from $1$ to $r$ that the number of cities which are not good doesn't exceed $k$.
\begin{center}
{\small The picture illustrates the first example ($n=6, k=2$). The answer contains $r=2$ companies. Numbers on the edges denote edge indices. Edge colors mean companies: red corresponds to the first company, blue corresponds to the second company. The gray vertex (number $3$) is not good. The number of such vertices (just one) doesn't exceed $k=2$. It is impossible to have at most $k=2$ not good cities in case of one company.}
\end{center}
|
Formally, the problem is to paint tree edges in the minimal number of colors in such a way that the number of improper vertices doesn't exceed $k$. A vertex is improper if it has at least two incident edges of the same color. It is easy to show that $D$ colors are always enough to paint a tree so that all the vertices are proper, where $D$ is the maximum vertex degree. Actually, this is true for any bipartite graph. Indeed, if the number of colors is less than the maximum degree, a vertex of maximum degree will have at least two incident edges of the same color (Dirichlet's principle). If the number of colors equals the maximum degree, you can use a depth-first search traversal to paint the edges in different colors. In this problem you can have up to $k$ improper vertices, so just choose the minimal $d$ such that the number of vertices of degree greater than $d$ is at most $k$. In an alternative solution, you can use binary search to find such $d$, but it makes the implementation harder and the solution slower by a $\log$ factor. After that, paint the edges with $d$ colors, each time choosing the next color (skipping a color if it equals the color of the incoming edge of the traversal).
|
[
"binary search",
"constructive algorithms",
"dfs and similar",
"graphs",
"greedy",
"trees"
] | 1,900
|
int n, k, r;
vector<vector<pair<int,int>>> g;
int D;
vector<int> col;
void dfs(int u, int p, int f) {
int color = 0;
for (auto e: g[u])
if (p != e.first) {
if (color == f) {
color = (color + 1) % D;
f = -1;
}
col[e.second] = color;
dfs(e.first, u, color);
color = (color + 1) % D;
}
}
int main() {
cin >> n >> k;
g.resize(n);
vector<int> d(n);
for (int i = 0; i + 1 < n; i++) {
int x, y;
cin >> x >> y;
x--, y--;
g[x].push_back({y, i});
g[y].push_back({x, i});
d[x]++, d[y]++;
}
map<int,int> cnt;
for (int dd: d)
cnt[dd]++;
int kk = n;
D = 0;
for (auto p: cnt)
if (kk > k)
D = p.first,
kk -= p.second;
else
break;
col = vector<int>(n - 1);
dfs(0, -1, -1);
cout << D << endl;
for (int i = 0; i + 1 < n; i++)
cout << col[i] + 1 << " ";
}
|
1142
|
A
|
The Beatles
|
Recently a Golden Circle of Beetlovers was found in Byteland. It is a circle route going through $n \cdot k$ cities. The cities are numerated from $1$ to $n \cdot k$, the distance between the neighboring cities is exactly $1$ km.
Sergey does not like beetles, he loves burgers. Fortunately for him, there are $n$ fast food restaurants on the circle, they are located in the $1$-st, the $(k + 1)$-st, the $(2k + 1)$-st, and so on, the $((n-1)k + 1)$-st cities, i.e. the distance between the neighboring cities with fast food restaurants is $k$ km.
Sergey began his journey at some city $s$ and traveled along the circle, making stops at cities each $l$ km ($l > 0$), until he stopped in $s$ once again. Sergey then forgot numbers $s$ and $l$, but he remembers that the distance from the city $s$ to the nearest fast food restaurant was $a$ km, and the distance from the city he stopped at after traveling the first $l$ km from $s$ to the nearest fast food restaurant was $b$ km. Sergey always traveled in the same direction along the circle, but when he calculated distances to the restaurants, he considered both directions.
Now Sergey is interested in two integers. The first integer $x$ is the minimum number of stops (excluding the first) Sergey could have done before returning to $s$. The second integer $y$ is the maximum number of stops (excluding the first) Sergey could have done before returning to $s$.
|
Let's assume that we know the length of the jump, and it is equal to $l$. Then, in order to get back to the starting point, Sergey will need to make exactly $n \cdot k / \gcd(n \cdot k, l)$ moves, where $\gcd$ is the greatest common divisor. Let $l = kx + c$, where $c$ and $x$ are non-negative integers and $c < k$. So, if we know $a$ and $b$, then $c$ can only take $4$ values: $(a + b) \bmod k$, $(a - b) \bmod k$, $(b - a) \bmod k$ and $(-a - b) \bmod k$. It is clear that only $x \le n$ (keeping $1 \le l \le n \cdot k$) needs to be considered. Then we iterate over all the $4n$ variants of the pair $(x, c)$, and for each we find the number of moves to return to the starting point. The minimum and maximum of the resulting numbers form the answer.
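This brute force over the candidate offsets might be sketched as follows (a hypothetical helper, not the official implementation; the inner loop goes up to $x = n$ so that $l = n \cdot k$, a single full circle, is also covered):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long li;

// Returns {min_stops, max_stops} for given n, k, a, b.
pair<li, li> solve_beatles(li n, li k, li a, li b) {
    li nk = n * k;
    li best_min = LLONG_MAX, best_max = LLONG_MIN;
    li cands[4] = {a + b, a - b, b - a, -a - b};
    for (li raw : cands) {
        li c = ((raw % k) + k) % k;        // normalize the offset into [0, k)
        for (li x = 0; x <= n; x++) {      // x = n covers l = n*k when c = 0
            li l = k * x + c;
            if (l < 1 || l > nk) continue;
            li moves = nk / __gcd(nk, l);
            best_min = min(best_min, moves);
            best_max = max(best_max, moves);
        }
    }
    return {best_min, best_max};
}
```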
|
[
"brute force",
"math"
] | 1,700
| null |
1142
|
B
|
Lynyrd Skynyrd
|
Recently Lynyrd and Skynyrd went to a shop where Lynyrd bought a permutation $p$ of length $n$, and Skynyrd bought an array $a$ of length $m$, consisting of integers from $1$ to $n$.
Lynyrd and Skynyrd became bored, so they asked you $q$ queries, each of which has the following form: "does the subsegment of $a$ from the $l$-th to the $r$-th positions, inclusive, have a subsequence that is a cyclic shift of $p$?" Please answer the queries.
A permutation of length $n$ is a sequence of $n$ integers such that each integer from $1$ to $n$ appears exactly once in it.
A cyclic shift of a permutation $(p_1, p_2, \ldots, p_n)$ is a permutation $(p_i, p_{i + 1}, \ldots, p_{n}, p_1, p_2, \ldots, p_{i - 1})$ for some $i$ from $1$ to $n$. For example, a permutation $(2, 1, 3)$ has three distinct cyclic shifts: $(2, 1, 3)$, $(1, 3, 2)$, $(3, 2, 1)$.
A subsequence of a subsegment of array $a$ from the $l$-th to the $r$-th positions, inclusive, is a sequence $a_{i_1}, a_{i_2}, \ldots, a_{i_k}$ for some $i_1, i_2, \ldots, i_k$ such that $l \leq i_1 < i_2 < \ldots < i_k \leq r$.
|
For each $a_i$, if the number $a_i$ has position $j$ in $p$, let's find the greatest $l$ such that $l$ is less than $i$ and $a_l = p_{j - 1}$ (define $p_0 = p_n$). We will call this position $b_i$. This can be done in $O(n)$ time: while iterating over $a$, for each value of $p$ we keep its last position in $a$. Now let's notice that using this info, for each $i$ we can find the beginning of the rightmost subsequence of $a$ which is a cyclic shift of $p$ and ends exactly at $a_i$. This can easily be done because if $a_{i_1}, a_{i_2}, \ldots, a_{i_{n - 1}}, a_i$ is the rightmost such subsequence, then $i_{n - 1}$ is $b_i$, $i_{n - 2}$ is $b_{i_{n - 1}}$, and so on. So to find such a subsequence and the position of its beginning, we need to calculate $b[b[b \ldots b[i] \ldots ]]$ applied $n - 1$ times. To do it we can use binary lifting. Then we will have an $O(m \log n)$ precalculation and we will get the beginning of such a subsequence in $O(\log n)$ time. Now for each prefix of $a$ let's calculate the beginning of the rightmost subsequence of it which is a cyclic shift of $p$. This can be calculated in linear time: first we look at the answer for this prefix without the last number, and then update it with the rightmost subsequence ending at the end of the prefix. Now we can answer each query in $O(1)$ time, because we just need to find the beginning of the rightmost subsequence which ends within the prefix of length $r$ and compare it with $l$.
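A compact sketch of this approach (function and variable names such as `answer_queries`, `up` and `best` are ours, not from any official code; it assumes $n \le 2^{20}$):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns one character per query: '1' if the segment [l, r] (1-based)
// of a contains a subsequence that is a cyclic shift of p.
string answer_queries(vector<int> p, vector<int> a, vector<pair<int,int>> qs) {
    int n = p.size(), m = a.size();
    const int LOG = 20;                        // assumes n <= 2^20
    vector<int> pos(n + 1);
    for (int i = 0; i < n; i++) pos[p[i]] = i;
    // up[0][i] = b_i: latest j < i with a[j] preceding a[i] cyclically in p;
    // index m is a "none" sentinel that maps to itself
    vector<vector<int>> up(LOG, vector<int>(m + 1, m));
    vector<int> last(n + 1, m);
    for (int i = 0; i < m; i++) {
        int prev_val = p[(pos[a[i]] - 1 + n) % n];
        up[0][i] = last[prev_val];
        last[a[i]] = i;
    }
    for (int k = 1; k < LOG; k++)
        for (int i = 0; i <= m; i++)
            up[k][i] = up[k - 1][up[k - 1][i]];
    // apply b exactly n - 1 times via binary lifting to get the beginning of
    // the rightmost cyclic-shift subsequence ending at i; keep prefix maxima
    vector<int> best(m, -1);
    for (int i = 0; i < m; i++) {
        int cur = i, steps = n - 1;
        for (int k = 0; k < LOG; k++)
            if (steps >> k & 1) cur = up[k][cur];
        int start_at = (cur == m ? -1 : cur);
        best[i] = max(start_at, i ? best[i - 1] : -1);
    }
    string ans;
    for (auto [l, r] : qs)
        ans += (best[r - 1] >= l - 1 ? '1' : '0');
    return ans;
}
```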
|
[
"data structures",
"dfs and similar",
"dp",
"math",
"trees"
] | 2,000
| null |
1142
|
C
|
U2
|
Recently Vasya learned that, given two points with different $x$ coordinates, you can draw through them exactly one parabola with equation of type $y = x^2 + bx + c$, where $b$ and $c$ are reals. Let's call such a parabola an $U$-shaped one.
Vasya drew several distinct points with integer coordinates on a plane and then drew an $U$-shaped parabola through each pair of the points that have different $x$ coordinates. The picture became somewhat messy, but Vasya still wants to count how many of the parabolas drawn don't have any drawn point inside their internal area. Help Vasya.
The internal area of an $U$-shaped parabola is the part of the plane that lies strictly above the parabola when the $y$ axis is directed upwards.
|
Let's rewrite the parabola equation $y = x^2 + bx + c$ as $y - x^2 = bx + c$. This means that if you change each point $(x_i, y_i)$ to $(x_i, y_i - x_i^2)$, then the parabolas turn into straight lines, and the task reduces to constructing the top part of the convex hull of the obtained points and calculating the number of segments on it.
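The reduction can be sketched as below (a hypothetical helper, not the official code). One detail the editorial leaves implicit: among points sharing an $x$ coordinate only the highest transformed point matters, and collinear transformed points all lie on one common parabola, so they contribute a single hull segment:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long li;
typedef __int128 lll;   // cross products of transformed points can near 2^63

// Map (x, y) -> (x, y - x^2); the answer is the number of segments of the
// strict upper convex hull of the mapped points.
long long count_parabolas(vector<pair<li, li>> pts) {
    for (auto& p : pts) p.second -= p.first * p.first;
    sort(pts.begin(), pts.end());
    // for equal x keep only the highest point
    vector<pair<li, li>> u;
    for (auto& p : pts) {
        if (!u.empty() && u.back().first == p.first)
            u.back().second = max(u.back().second, p.second);
        else
            u.push_back(p);
    }
    // monotone chain, upper hull; ties (cross == 0) drop collinear middles
    vector<pair<li, li>> h;
    for (auto& p : u) {
        while (h.size() >= 2) {
            auto& a = h[h.size() - 2];
            auto& b = h[h.size() - 1];
            lll cross = (lll)(b.first - a.first) * (p.second - a.second)
                      - (lll)(b.second - a.second) * (p.first - a.first);
            if (cross >= 0) h.pop_back();  // b is not strictly above chord a-p
            else break;
        }
        h.push_back(p);
    }
    return (long long)h.size() - 1;        // hull edges = drawn "free" parabolas
}
```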
|
[
"geometry"
] | 2,400
| null |
1142
|
D
|
Foreigner
|
Traveling around the world you noticed that many shop owners raise prices to inadequate values if they see you are a foreigner.
You define inadequate numbers as follows:
- all integers from $1$ to $9$ are inadequate;
- for an integer $x \ge 10$ to be inadequate, it is required that the integer $\lfloor x / 10 \rfloor$ is inadequate, but that's not the only condition. Let's sort all the inadequate integers. Let $\lfloor x / 10 \rfloor$ have number $k$ in this order. Then, the integer $x$ is inadequate only if the last digit of $x$ is strictly less than the remainder of division of $k$ by $11$.
Here $\lfloor x / 10 \rfloor$ denotes $x/10$ rounded down.
Thus, if $x$ is the $m$-th in increasing order inadequate number, and $m$ gives the remainder $c$ when divided by $11$, then integers $10 \cdot x + 0, 10 \cdot x + 1 \ldots, 10 \cdot x + (c - 1)$ are inadequate, while integers $10 \cdot x + c, 10 \cdot x + (c + 1), \ldots, 10 \cdot x + 9$ are not inadequate.
The first several inadequate integers are $1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 21, 30, 31, 32 \ldots$. After that, since $4$ is the fourth inadequate integer, $40, 41, 42, 43$ are inadequate, while $44, 45, 46, \ldots, 49$ are not inadequate; since $10$ is the $10$-th inadequate number, integers $100, 101, 102, \ldots, 109$ are all inadequate. And since $20$ is the $11$-th inadequate number, none of $200, 201, 202, \ldots, 209$ is inadequate.
You wrote down all the prices you have seen in a trip. Unfortunately, all integers got concatenated in one large digit string $s$ and you lost the bounds between the neighboring integers. You are now interested in the number of substrings of the resulting string that form an inadequate number. If a substring appears more than once at different positions, all its appearances are counted separately.
|
Let's take any inadequate number $x$ of length $n$. Let's keep 3 parameters for it: $a_x$, the number of inadequate numbers less than or equal to $x$; $b_x$, the number of inadequate numbers less than $x$ which have length $n$; and $c_x$, the number of inadequate numbers greater than $x$ which have length $n$. We know that there are exactly $a_x \bmod 11$ inadequate numbers that come from $x$ by adding a new digit to the end. Also, because we know $b_x$ and $c_x$, we can find the $a$, $b$ and $c$ parameters for each of those numbers. Now let's notice that instead of keeping the exact parameters $a$, $b$ and $c$, we can keep all of them modulo 11. The parameters of all numbers that come from $x$ will still be defined: if we increase $a$ by 11, these parameters stay the same modulo 11, and if we increase $b$ by 11, the $a$ and $b$ parameters of all numbers that come from $x$ increase by $0 + 1 + 2 + \ldots + 10 = (11 \cdot 10) / 2 = 55$, which is $0$ modulo 11. The same holds for the $c$ parameter. So these 3 parameters modulo 11 exactly define which new numbers will come from $x$ and the values of these 3 parameters for them. Now we can create a kind of automaton of size $11^3$, where we have 9 starting nodes, all paths from these nodes are inadequate numbers, and for any inadequate number there is a path which defines it. Now let's create a dynamic programming table $dp[i][j]$, which is how far we can go in the automaton along the characters of the suffix of length $i$ of the given string $s$, starting from node $j$ of the automaton. We can calculate this $dp$ in $n \cdot 11^3$ time, and using it we can check for each suffix of $s$ what its longest prefix forming an inadequate number is, which solves the problem. Actually, during the contest it turned out that this problem has a shorter solution, but this is a more general one which doesn't depend on the starting numbers (in our case they were $1, 2, 3, \ldots, 9$).
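The definition itself can be turned into a direct generator, handy for cross-checking the automaton (this only illustrates the numbering from the statement, it does not solve the string problem; the helper name is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Generates the first `cnt` inadequate numbers in increasing order,
// following the definition literally.
vector<long long> first_inadequate(int cnt) {
    vector<long long> seq;
    for (long long d = 1; d <= 9 && (int)seq.size() < cnt; d++)
        seq.push_back(d);
    // seq itself works as a queue: children of the m-th number x (1-based m)
    // are 10*x + 0 .. 10*x + (m % 11 - 1), and they appear in sorted order
    for (size_t i = 0; i < seq.size() && (int)seq.size() < cnt; i++) {
        long long x = seq[i];
        long long c = (long long)((i + 1) % 11);   // 1-based index mod 11
        for (long long d = 0; d < c && (int)seq.size() < cnt; d++)
            seq.push_back(10 * x + d);
    }
    return seq;
}
```

Running it reproduces the prefix $1, 2, \ldots, 9, 10, 20, 21, 30, 31, 32, \ldots$ given in the statement, with $40$–$43$ present and $44$–$49$ absent.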
|
[
"dp"
] | 2,800
| null |
1142
|
E
|
Pink Floyd
|
This is an interactive task.
Scientists are about to invent a new optimization for the Floyd-Warshall algorithm, which will allow it to work in linear time. There is only one part of the optimization still unfinished.
It is well known that the Floyd-Warshall algorithm takes a graph with $n$ nodes and exactly one edge between each pair of nodes. The scientists have this graph, what is more, they have directed each edge in one of the two possible directions.
To optimize the algorithm, exactly $m$ edges are colored in pink color and all the rest are colored in green. You know the direction of all $m$ pink edges, but the direction of green edges is unknown to you. In one query you can ask the scientists about the direction of exactly one green edge, however, you can perform at most $2 \cdot n$ such queries.
Your task is to find the node from which every other node can be reached by a path consisting of edges of same color. Be aware that the scientists may have lied that they had fixed the direction of all edges beforehand, so their answers may depend on your queries.
|
First let's assume that the graph has only the green edges and we still don't know their directions. Then we can do the following. We will keep some node which we will call the "top node" and the list of the nodes that can be reached from the top node. Let's take some node $A$ which is not reachable and ask about the direction of the edge between the top node and $A$. If the edge goes from the top node to $A$, then we can add $A$ to the list of reachable nodes, and the size of the list increases by one. If the edge goes from $A$ to the top node, then we make $A$ the new top node; since the old top node is reachable from $A$, the list of reachable nodes remains the same, and node $A$, being reachable from itself, is also added to the list. This way, after each query we increase the number of reachable nodes, and in $n - 1$ queries we get the answer. In our graph we also have pink edges, so if we repeat the algorithm described above, we may find that the edge between the top node and $A$ is pink. To avoid this, let's build the condensation of the pink graph (the graph of its strongly connected components, SCCs for short). As it is a condensation, there always exist some SCCs that have no incoming edges. Let's call them free SCCs. We pick some free SCC, pick some node from it, and make it the top node. Now we repeat the following: if there is no free SCC other than the one that contains the top node, all others are reachable from the only free SCC, which means that we have solved the problem. If there is any other free SCC, let's pick some node $A$ from it. Because this SCC has no incoming edges, the edge between $A$ and the top node is green. So we can repeat the algorithm described above. After that, as the node $A$ is reachable by green color, we can assume that this node is deleted. If it was the last node in its SCC, we delete this SCC as well and delete all outgoing edges from it.
Now some more SCCs may become free. And then we repeat our algorithm. After at most $n-1$ steps all nodes will be reachable from the top one by a single-colored path, which means that we have solved the problem. In our algorithm we first need to build the condensation of the graph, which can be done in $O(n + m)$ time; then we need to remember how many nodes are in each SCC and keep a set of all free SCCs. That can also be done in $O(n + m)$ time.
|
[
"graphs",
"interactive"
] | 3,200
| null |
1143
|
A
|
The Doors
|
Three years have passed and nothing has changed. It is still raining in London, and Mr. Black has to close all the doors in his home in order not to be flooded. Once, however, Mr. Black became so nervous that he opened one door, then another, then one more, and so on until he had opened all the doors in his house.
There are exactly two exits from Mr. Black's house, let's name them left and right exits. There are several doors in each of the exits, so each door in Mr. Black's house is located either in the left or in the right exit. You know where each door is located. Initially all the doors are closed. Mr. Black can exit the house if and only if all doors in at least one of the exits are open. You are given a sequence in which Mr. Black opened the doors, please find the smallest index $k$ such that Mr. Black can exit the house after opening the first $k$ doors.
We have to note that Mr. Black opened each door at most once, and in the end all doors became open.
|
Let's walk through the array and, for each exit, find the door that was opened last. The answer is the minimum of the positions of these two doors.
|
[
"implementation"
] | 800
| null |
1143
|
B
|
Nirvana
|
Kurt reaches nirvana when he finds the product of all the digits of some positive integer. The greater the product, the deeper the nirvana.
Help Kurt find the maximum possible product of digits among all integers from $1$ to $n$.
|
Let $x$ be an answer and $\overline{y_0 y_1 \ldots y_l}$ the number from the input. Then $x = \overline{y_0 y_1 \ldots y_k (y_{k + 1} - 1) 9 9 \ldots 9}$ for some $k$ (otherwise one of the digits of $x$ could be increased by $1$ so that $x$ would still be no more than $y$). So, let's iterate over $k$ from $0$ to the length of $y$ and return the candidate with the maximum product of digits.
|
[
"brute force",
"math",
"number theory"
] | 1,200
| null |
1143
|
C
|
Queen
|
You are given a rooted tree with vertices numbered from $1$ to $n$. A tree is a connected graph without cycles. A rooted tree has a special vertex named root.
Ancestors of the vertex $i$ are all vertices on the path from the root to the vertex $i$, except the vertex $i$ itself. The parent of the vertex $i$ is the nearest to the vertex $i$ ancestor of $i$. Each vertex is a child of its parent. In the given tree the parent of the vertex $i$ is the vertex $p_i$. For the root, the value $p_i$ is $-1$.
\begin{center}
{\small An example of a tree with $n=8$, the root is vertex $5$. The parent of the vertex $2$ is vertex $3$, the parent of the vertex $1$ is vertex $5$. The ancestors of the vertex $6$ are vertices $4$ and $5$, the ancestors of the vertex $7$ are vertices $8$, $3$ and $5$}
\end{center}
You noticed that some vertices do not respect others. In particular, if $c_i = 1$, then the vertex $i$ does not respect any of its ancestors, and if $c_i = 0$, it respects all of them.
You decided to delete vertices from the tree one by one. On each step you select such a non-root vertex that it does not respect its parent and none of its children respects it. If there are several such vertices, you select the one with the \textbf{smallest number}. When you delete this vertex $v$, all children of $v$ become connected with the parent of $v$.
\begin{center}
{\small An example of deletion of the vertex $7$.}
\end{center}
Once there are no vertices matching the criteria for deletion, you stop the process. Print the order in which you will delete the vertices. Note that this order is unique.
|
Since each vertex either respects all of its ancestors or none of them, each vertex is either always a candidate for deletion or can never be deleted. That is because when we delete some vertex, all vertices that become children of its parent also disrespect that parent. So we can iterate over all vertices, and whenever a vertex respects its parent, mark that parent as undeletable. This gives the answer.
|
[
"dfs and similar",
"trees"
] | 1,400
| null |
1144
|
A
|
Diverse Strings
|
A string is called diverse if it contains consecutive (adjacent) letters of the Latin alphabet and each letter occurs exactly once. For example, the following strings are diverse: "fced", "xyz", "r" and "dabcef". The following strings are \textbf{not} diverse: "az", "aa", "bad" and "babc". Note that the letters 'a' and 'z' are not adjacent.
Formally, consider positions of all letters in the string in the alphabet. These positions should form a contiguous segment, i.e. they should come one by one without any gaps. And all letters in the string should be distinct (duplicates are not allowed).
You are given a sequence of strings. For each string, if it is diverse, print "Yes". Otherwise, print "No".
|
The string is diverse if it is a permutation of some substring of the Latin alphabet ("abcd ... xyz"). In other words, the string is diverse if all its letters are distinct (we can check this using a structure like std::set or an array of used elements) and if the difference between the maximum and minimum alphabet positions of its letters, plus one, is exactly the length of the string. For example, the alphabet position of letter 'a' is one, the position of letter 'f' is six, and so on.
|
[
"implementation",
"strings"
] | 800
|
#include <bits/stdc++.h>
using namespace std;

#define forn(i, n) for (int i = 0; i < int(n); i++)

int main() {
    int n;
    cin >> n;
    forn(i, n) {
        string s;
        cin >> s;
        if (set<char>(s.begin(), s.end()).size() == s.length()
                && *max_element(s.begin(), s.end()) == char(*min_element(s.begin(), s.end()) + (s.length() - 1)))
            cout << "Yes" << endl;
        else
            cout << "No" << endl;
    }
}
|
1144
|
B
|
Parity Alternated Deletions
|
Polycarp has an array $a$ consisting of $n$ integers.
He wants to play a game with this array. The game consists of several moves. On the first move he chooses any element and deletes it (after the first move the array contains $n-1$ elements). For each of the next moves he chooses any element with the only restriction: its parity should differ from the parity of the element deleted on the previous move. In other words, he alternates parities (even-odd-even-odd-... or odd-even-odd-even-...) of the removed elements. Polycarp stops if he can't make a move.
Formally:
- If it is the first move, he chooses any element and deletes it;
- If it is the second or any next move:
- if the last deleted element was \textbf{odd}, Polycarp chooses any \textbf{even} element and deletes it;
- if the last deleted element was \textbf{even}, Polycarp chooses any \textbf{odd} element and deletes it.
- If after some move Polycarp cannot make a move, the game ends.
Polycarp's goal is to \textbf{minimize} the sum of \textbf{non-deleted} elements of the array after end of the game. If Polycarp can delete the whole array, then the sum of \textbf{non-deleted} elements is zero.
Help Polycarp find this value.
|
Let's calculate the sum of the whole array $sum$ and then split its elements by parity into two arrays $odd$ and $even$ ($odd$ for odd, $even$ for even). Sort both of them in non-increasing order. Then what do we see? We can always delete the first $k = \min(|odd|, |even|)$ elements from both arrays (where $|x|$ is the size of $x$). So let's decrease $sum$ by the sum of the first $k$ elements of the array $odd$ and do the same for the array $even$. If one of the arrays has more than $k$ elements (only one of them can, because otherwise $k$ would be greater), let's also decrease $sum$ by the $(k+1)$-th element of this array (because this is the maximum possible extra element we can remove). Now $sum$ is the answer to the problem.
|
[
"greedy",
"implementation",
"sortings"
] | 900
|
#include <bits/stdc++.h>
using namespace std;

int main() {
#ifdef _DEBUG
    freopen("input.txt", "r", stdin);
    // freopen("output.txt", "w", stdout);
#endif
    int n;
    cin >> n;
    int sum = 0;
    vector<int> even, odd;
    for (int i = 0; i < n; ++i) {
        int x;
        cin >> x;
        sum += x;
        if (x & 1) odd.push_back(x);
        else even.push_back(x);
    }
    sort(odd.rbegin(), odd.rend());
    sort(even.rbegin(), even.rend());
    int k = min(odd.size(), even.size());
    int rem = sum;
    rem -= accumulate(odd.begin(), odd.begin() + k, 0);
    rem -= accumulate(even.begin(), even.begin() + k, 0);
    if (int(odd.size()) > k) {
        rem -= odd[k];
    }
    if (int(even.size()) > k) {
        rem -= even[k];
    }
    cout << rem << endl;
    return 0;
}
|
1144
|
C
|
Two Shuffled Sequences
|
Two integer sequences existed initially — one of them was \textbf{strictly} increasing, and the other one — \textbf{strictly} decreasing.
Strictly increasing sequence is a sequence of integers $[x_1 < x_2 < \dots < x_k]$. And strictly decreasing sequence is a sequence of integers $[y_1 > y_2 > \dots > y_l]$. Note that the empty sequence and the sequence consisting of one element can be considered as increasing or decreasing.
They were merged into one sequence $a$. After that sequence $a$ got shuffled. For example, some of the possible resulting sequences $a$ for an increasing sequence $[1, 3, 4]$ and a decreasing sequence $[10, 4, 2]$ are sequences $[1, 2, 3, 4, 4, 10]$ or $[4, 2, 1, 10, 4, 3]$.
This shuffled sequence $a$ is given in the input.
Your task is to find \textbf{any} two suitable initial sequences. One of them should be \textbf{strictly} increasing and the other one — \textbf{strictly} decreasing. Note that the empty sequence and the sequence consisting of one element can be considered as increasing or decreasing.
If there is a contradiction in the input and it is impossible to split the given sequence $a$ to increasing and decreasing sequences, print "NO".
|
Let's count the number of occurrences of each element in the array $cnt$. Because the maximum possible element is $2 \cdot 10^5$, it can be done without any data structures. Then let's check if $cnt_i$ is greater than $2$ for some $i$ from $0$ to $2 \cdot 10^5$, and if it is, then the answer is "NO", because this element should occur at least twice in one of the sequences. Now let's output the increasing sequence. The number of elements in it is the number of elements $i$ such that $cnt_i = 2$. Let's iterate from left to right, print the suitable elements and decrease their $cnt$. The number of elements in the decreasing sequence is just the number of elements with non-zero $cnt$. So let's iterate from right to left and just print suitable elements.
|
[
"constructive algorithms",
"sortings"
] | 1,000
|
#include <bits/stdc++.h>
using namespace std;

int main() {
#ifdef _DEBUG
    freopen("input.txt", "r", stdin);
    // freopen("output.txt", "w", stdout);
#endif
    int n;
    cin >> n;
    vector<int> cnt(200 * 1000 + 1);
    for (int i = 0; i < n; ++i) {
        int x;
        cin >> x;
        ++cnt[x];
        if (cnt[x] > 2) {
            cout << "NO" << endl;
            return 0;
        }
    }
    cout << "YES" << endl << count(cnt.begin(), cnt.end(), 2) << endl;
    for (int i = 0; i <= 200 * 1000; ++i) {
        if (cnt[i] == 2) {
            cout << i << " ";
            --cnt[i];
        }
    }
    cout << endl << count(cnt.begin(), cnt.end(), 1) << endl;
    for (int i = 200 * 1000; i >= 0; --i) {
        if (cnt[i] == 1) {
            cout << i << " ";
            --cnt[i];
        }
    }
    cout << endl;
    assert(count(cnt.begin(), cnt.end(), 0) == 200 * 1000 + 1);
    return 0;
}
|
1144
|
D
|
Equalize Them All
|
You are given an array $a$ consisting of $n$ integers. You can perform the following operations arbitrary number of times (possibly, zero):
- Choose a pair of indices $(i, j)$ such that $|i-j|=1$ (indices $i$ and $j$ are adjacent) and set $a_i := a_i + |a_i - a_j|$;
- Choose a pair of indices $(i, j)$ such that $|i-j|=1$ (indices $i$ and $j$ are adjacent) and set $a_i := a_i - |a_i - a_j|$.
The value $|x|$ means the absolute value of $x$. For example, $|4| = 4$, $|-3| = 3$.
Your task is to find the minimum number of operations required to obtain the array of equal elements and print the order of operations to do it.
\textbf{It is guaranteed that you always can obtain the array of equal elements using such operations}.
\textbf{Note that after each operation each element of the current array should not exceed $10^{18}$ by absolute value}.
|
Let's find the most frequent element in the array (using the array of frequencies $cnt$, of course). Let this element be $x$. Looking at the operations more carefully, we can see that they amount to "set the element at position $p$ to the element at position $q$": if $a_p < a_q$ this operation is "$1\ p\ q$", otherwise it is "$2\ p\ q$". Now let's consider the number of operations in the optimal answer. It is obvious that we need at least $n - cnt_x$ operations to equalize all the elements, and it is also obvious that we can always do it with exactly $n - cnt_x$ such operations. How to restore the answer? There is an easy way: find the first occurrence of $x$, let it be $pos$. Then go from $pos - 1$ down to $1$ and set each element to the next element (the element at position $pos - 1$ to the one at $pos$, the one at $pos - 2$ to $pos - 1$, and so on), printing the right type of operation. Then go from left to right from $1$ to $n$ and, if the $i$-th element does not equal $x$, set it to the $(i-1)$-th element using the right operation.
|
[
"constructive algorithms",
"greedy"
] | 1,400
|
#include <bits/stdc++.h>
using namespace std;

int main() {
#ifdef _DEBUG
    freopen("input.txt", "r", stdin);
    // freopen("output.txt", "w", stdout);
#endif
    int n;
    cin >> n;
    vector<int> a(n), cnt(200 * 1000 + 1);
    for (int i = 0; i < n; ++i) {
        cin >> a[i];
        ++cnt[a[i]];
    }
    int val = max_element(cnt.begin(), cnt.end()) - cnt.begin();
    int pos = find(a.begin(), a.end(), val) - a.begin();
    cout << n - cnt[val] << endl;
    int siz = 0;
    for (int i = pos - 1; i >= 0; --i) {
        cout << 1 + (a[i] > a[i + 1]) << " " << i + 1 << " " << i + 2 << " " << endl;
        a[i] = a[i + 1];
        ++siz;
    }
    for (int i = 0; i < n - 1; ++i) {
        if (a[i + 1] != val) {
            assert(a[i] == val);
            cout << 1 + (a[i + 1] > a[i]) << " " << i + 2 << " " << i + 1 << " " << endl;
            a[i + 1] = a[i];
            ++siz;
        }
    }
    assert(siz == n - cnt[val]);
    return 0;
}
|
1144
|
E
|
Median String
|
You are given two strings $s$ and $t$, both consisting of exactly $k$ lowercase Latin letters, $s$ is lexicographically less than $t$.
Let's consider list of all strings consisting of exactly $k$ lowercase Latin letters, lexicographically not less than $s$ and not greater than $t$ (including $s$ and $t$) in lexicographical order. For example, for $k=2$, $s=$"az" and $t=$"bf" the list will be ["az", "ba", "bb", "bc", "bd", "be", "bf"].
Your task is to print the median (the middle element) of this list. For the example above this will be "bc".
\textbf{It is guaranteed that there is an odd number of strings lexicographically not less than $s$ and not greater than $t$}.
|
This problem was supposed to be an easy long-arithmetic problem. Let's represent our strings as huge numbers in base $26$; let $s$ correspond to $l$ and $t$ to $r$. Looking more carefully at the problem statement, we can see that the answer is $\frac{l + r}{2}$. Adding two long numbers can be done in $O(k)$, and dividing a long number by two can also be done in $O(k)$. All implementation details are in the author's solution.
|
[
"bitmasks",
"math",
"number theory",
"strings"
] | 1,900
|
#include <bits/stdc++.h>
using namespace std;

vector<int> get(const string &s) {
    vector<int> res(s.size() + 1);
    for (int i = 0; i < int(s.size()); ++i) {
        res[i + 1] = s[i] - 'a';
    }
    return res;
}

int main() {
#ifdef _DEBUG
    freopen("input.txt", "r", stdin);
    // freopen("output.txt", "w", stdout);
#endif
    int k;
    string s, t;
    cin >> k >> s >> t;
    vector<int> a = get(s), b = get(t);
    for (int i = k; i >= 0; --i) {
        a[i] += b[i];
        if (i) {
            a[i - 1] += a[i] / 26;
            a[i] %= 26;
        }
    }
    for (int i = 0; i <= k; ++i) {
        int rem = a[i] % 2;
        a[i] /= 2;
        if (i + 1 <= k) {
            a[i + 1] += rem * 26;
        } else {
            assert(rem == 0);
        }
    }
    for (int i = 1; i <= k; ++i) {
        cout << char('a' + a[i]);
    }
    cout << endl;
    return 0;
}
|
1144
|
F
|
Graph Without Long Directed Paths
|
You are given a connected undirected graph consisting of $n$ vertices and $m$ edges. There are no self-loops or multiple edges in the given graph.
You have to direct its edges in such a way that the obtained directed graph does not contain any paths of length two or greater (where the length of path is denoted as the number of traversed edges).
|
What if the given graph contains a cycle of odd length? Then some two consecutive edges of this cycle would be oriented the same way and would form a path of length two. What if there are no odd-length cycles? Then the graph is bipartite. Let's color it and see what we get: some vertices in the left part, some vertices in the right part, and all edges connecting vertices from different parts. Let's orient all edges so that they go from the left part to the right part. That's it.
|
[
"dfs and similar",
"graphs"
] | 1,700
|
#include <bits/stdc++.h>
using namespace std;

const int N = 200 * 1000 + 11;

int n, m;
vector<int> g[N];
vector<pair<int, int>> e;
bool bipartite;
vector<int> color;

void dfs(int v, int c) {
    color[v] = c;
    for (auto to : g[v]) {
        if (color[to] == -1) {
            dfs(to, c ^ 1);
        } else {
            if (color[to] == color[v]) {
                bipartite = false;
            }
        }
    }
}

int main() {
#ifdef _DEBUG
    freopen("input.txt", "r", stdin);
    // freopen("output.txt", "w", stdout);
#endif
    cin >> n >> m;
    for (int i = 0; i < m; ++i) {
        int x, y;
        cin >> x >> y;
        --x, --y;
        g[x].push_back(y);
        g[y].push_back(x);
        e.push_back(make_pair(x, y));
    }
    bipartite = true;
    color = vector<int>(n, -1);
    dfs(0, 0);
    if (!bipartite) {
        cout << "NO" << endl;
        return 0;
    }
    cout << "YES" << endl;
    for (int i = 0; i < m; ++i) {
        cout << (color[e[i].first] < color[e[i].second]);
    }
    cout << endl;
    return 0;
}
|
1144
|
G
|
Two Merged Sequences
|
Two integer sequences existed initially, one of them was \textbf{strictly} increasing, and another one — \textbf{strictly} decreasing.
Strictly increasing sequence is a sequence of integers $[x_1 < x_2 < \dots < x_k]$. And strictly decreasing sequence is a sequence of integers $[y_1 > y_2 > \dots > y_l]$. Note that the empty sequence and the sequence consisting of one element can be considered as increasing or decreasing.
Elements of increasing sequence were inserted between elements of the decreasing one (and, possibly, before its first element and after its last element) \textbf{without changing the order}. For example, sequences $[1, 3, 4]$ and $[10, 4, 2]$ can produce the following resulting sequences: $[10, \textbf{1}, \textbf{3}, 4, 2, \textbf{4}]$, $[\textbf{1}, \textbf{3}, \textbf{4}, 10, 4, 2]$. The following sequence cannot be the result of these insertions: $[\textbf{1}, 10, \textbf{4}, 4, \textbf{3}, 2]$ because the order of elements in the increasing sequence was changed.
Let the obtained sequence be $a$. This sequence $a$ is given in the input. Your task is to find \textbf{any} two suitable initial sequences. One of them should be \textbf{strictly} increasing, and another one — \textbf{strictly} decreasing. \textbf{Note that the empty sequence and the sequence consisting of one element can be considered as increasing or decreasing.}
If there is a contradiction in the input and it is impossible to split the given sequence $a$ into one increasing sequence and one decreasing sequence, print "NO".
|
I know about greedy solutions and other approaches, but I'll describe my solution: dynamic programming. I'll consider all positions $0$-indexed. Let $dp_{i, 0}$ be the maximum possible minimal element in the decreasing sequence if the last element (the $(i-1)$-th) was in the increasing sequence, and $dp_{i, 1}$ be the minimum possible maximum element in the increasing sequence if the last element was in the decreasing sequence. Initially, all $dp_{i, 0}$ are $-\infty$ and all $dp_{i, 1}$ are $\infty$ (except two values: $dp_{0, 0} = \infty$ and $dp_{0, 1} = -\infty$). What about transitions? Let's consider four cases. The previous element was in the increasing sequence and we want to add the current element to the increasing sequence: we can do $dp_{i, 0} := \max(dp_{i, 0}, dp_{i - 1, 0})$ if $a_i > a_{i - 1}$. The previous element was in the increasing sequence and we want to add the current element to the decreasing sequence: we can do $dp_{i, 1} := \min(dp_{i, 1}, a_{i - 1})$ if $a_i < dp_{i - 1, 0}$. The previous element was in the decreasing sequence and we want to add the current element to the decreasing sequence: we can do $dp_{i, 1} := \min(dp_{i, 1}, dp_{i - 1, 1})$ if $a_i < a_{i - 1}$. The previous element was in the decreasing sequence and we want to add the current element to the increasing sequence: we can do $dp_{i, 0} := \max(dp_{i, 0}, a_{i - 1})$ if $a_i > dp_{i - 1, 1}$. The logic behind these transitions takes some thought but is understandable. If $dp_{n - 1, 0} = -\infty$ and $dp_{n - 1, 1} = \infty$ then the answer is "NO". Otherwise we can restore any valid answer using parent links in the dynamic programming.
|
[
"dp",
"greedy"
] | 2,400
|
#include <bits/stdc++.h>
using namespace std;

const int INF = 1e9;

int main() {
#ifdef _DEBUG
    freopen("input.txt", "r", stdin);
    // freopen("output.txt", "w", stdout);
#endif
    int n;
    cin >> n;
    vector<int> a(n);
    for (int i = 0; i < n; ++i) {
        cin >> a[i];
    }
    vector<vector<int>> dp(n, vector<int>({-INF, INF}));
    vector<vector<int>> p(n, vector<int>(2, -1));
    dp[0][0] = INF;
    dp[0][1] = -INF;
    for (int i = 1; i < n; ++i) {
        // transitions from "previous element in the increasing sequence"
        {
            if (a[i] > a[i - 1] && dp[i][0] < dp[i - 1][0]) {
                dp[i][0] = dp[i - 1][0];
                p[i][0] = 0;
            }
            if (a[i] < dp[i - 1][0] && dp[i][1] > a[i - 1]) {
                dp[i][1] = a[i - 1];
                p[i][1] = 0;
            }
        }
        // transitions from "previous element in the decreasing sequence"
        {
            if (a[i] < a[i - 1] && dp[i][1] > dp[i - 1][1]) {
                dp[i][1] = dp[i - 1][1];
                p[i][1] = 1;
            }
            if (a[i] > dp[i - 1][1] && dp[i][0] < a[i - 1]) {
                dp[i][0] = a[i - 1];
                p[i][0] = 1;
            }
        }
    }
    int pos = -1;
    if (dp[n - 1][0] != -INF) {
        pos = 0;
    }
    if (dp[n - 1][1] != INF) {
        pos = 1;
    }
    if (pos == -1) {
        cout << "NO" << endl;
        return 0;
    }
    vector<int> inInc(n);
    for (int i = n - 1; i >= 0; --i) {
        if (pos == 0) {
            inInc[i] = 1;
        }
        pos = p[i][pos];
    }
    cout << "YES" << endl;
    for (int i = 0; i < n; ++i) {
        cout << !inInc[i] << " ";
    }
    cout << endl;
    return 0;
}
|
1146
|
A
|
Love "A"
|
Alice has a string $s$. She really likes the letter "a". She calls a string good if strictly more than half of the characters in that string are "a"s. For example "aaabb", "axaa" are good strings, and "baca", "awwwa", "" (empty string) are not.
Alice can erase some characters from her string $s$. She would like to know what is the longest string remaining after erasing some characters (possibly zero) to get a good string. It is guaranteed that the string has at least one "a" in it, so the answer always exists.
|
In this problem, it is only ever optimal to erase characters that are not "a". Let $x$ be the number of "a" characters in the string $s$, and let $n$ be the total number of characters in $s$. In order for the $a$s to be a strict majority, we can have at most $x-1$ characters that are not "a", so the maximum string length is bounded from above by $2x-1$. The string length is also bounded above by the length of $s$, so the answer is $\min(n, 2x-1)$.
|
[
"implementation",
"strings"
] | 800
|
#include <iostream>
#include <algorithm>
using namespace std;

int main()
{
    string t;
    cin >> t;
    cout << min(2 * (int)count(t.begin(), t.end(), 'a') - 1, (int)t.size());
}
|
1146
|
B
|
Hate "A"
|
Bob has a string $s$ consisting of lowercase English letters. He defines $s'$ to be the string after removing all "a" characters from $s$ (keeping all other characters in the same order). He then generates a new string $t$ by concatenating $s$ and $s'$. In other words, $t=s+s'$ (look at notes for an example).
You are given a string $t$. Your task is to find some $s$ that Bob could have used to generate $t$. It can be shown that if an answer exists, it will be unique.
|
There are a few different ways to approach this. In one way, we can approach it by looking at the frequency of all characters. We want to find a split point so that all "a"s lie on the left side of the split, and all other characters are split evenly between the left and right side. This split can be uniquely determined and found in linear time by keeping track of prefix sums (it also might be possible there is no split, in which case the answer is impossible). After finding a split, we still need to make sure the characters appear in the same order, which is another linear time pass. In another way, let's count the number of "a" and non-"a" characters. Let these numbers be $c_0$ and $c_1$. If $c_1$ is not divisible by $2$, then the answer is impossible. Otherwise, we know the suffix of length $c_1/2$ of $t$ must be $s'$, and we can check there are no "a"s there. We also check that after removing all "a"s from $t$, the first and second half of the strings are equal.
|
[
"implementation",
"strings"
] | 1,100
|
#include <iostream>
using namespace std;

int main()
{
    string t;
    cin >> t;
    int cnt = 0, pos = -1;
    for (int i = 0; i < t.size(); i++)
    {
        if (t[i] == 'a')
            cnt++;
        if (2 * (i + 1) - cnt == t.size())
        {
            pos = i;
            break;
        }
    }
    if (pos == -1)
    {
        cout << ":(";
        return 0;
    }
    int cur = 0;
    for (int j = pos + 1; j < t.size(); j++)
    {
        if (t[j] == 'a')
        {
            cout << ":(";
            return 0;
        }
        while (t[cur] == 'a')
            cur++;
        if (t[cur] != t[j])
        {
            cout << ":(";
            return 0;
        }
        cur++;
    }
    cout << t.substr(0, pos + 1);
}
|
1146
|
C
|
Tree Diameter
|
There is a weighted tree with $n$ nodes and $n-1$ edges. The nodes are conveniently labeled from $1$ to $n$. The weights are positive integers at most $100$. Define the distance between two nodes to be the sum of edges on the unique path between the nodes. You would like to find the diameter of the tree. Diameter is the maximum distance between a pair of nodes.
Unfortunately, the tree isn't given to you, but you can ask some questions about it. In one question, you can specify two nonempty disjoint sets of nodes $p$ and $q$, and the judge will return the maximum distance between a node in $p$ and a node in $q$. In other words, the maximum distance between $x$ and $y$, where $x \in p$ and $y \in q$. After asking not more than $9$ questions, you must report the maximum distance between any pair of nodes.
|
The standard algorithm for finding the diameter of a tree is to find the node farthest from node $1$, then find the farthest distance from that node. We can use the same technique in this problem. We use one question to get the farthest distance from node $1$ in the graph. Then, we use binary search to figure out which node is actually the farthest away (if there are ties, it doesn't matter which one we choose). More specifically, if we know the farthest node lies in some subset $S$, we can ask about half of the elements of $S$. If the distance doesn't decrease, then the farthest node must be in the half we asked about; otherwise, it is in the other half. This uses an additional 7 questions. We use the last question to get the actual diameter. There's another way that uses fewer questions and also ignores the fact that the graph is a tree. We can restate the problem as finding $7$ splits of the node labels so that every pair of labels is separated in at least one split. We can do this by looking at the binary representations of the labels. In the $i$-th question, let $p$ be the set of all labels with the $i$-th bit set and $q$ be the other labels. Every label fits in at most 7 bits, and each pair of labels differs in at least one bit, so this satisfies what we are looking for. After asking all questions, we can print the maximum result we found from all of them.
|
[
"bitmasks",
"graphs",
"interactive"
] | 1,700
|
/**
 * code generated by JHelper
 * More info: https://github.com/AlexeyDmitriev/JHelper
 * @author Azat Ismagilov
 */
#include <fstream>
#include <iostream>
#include <algorithm>
#include <vector>
#include <numeric>

//#define int long long
#define fs first
#define sc second
#define pb push_back
#define ppb pop_back
#define pf push_front
#define ppf pop_front
#define mp make_pair
#define len(v) ((int)v.size())
#define vc vector
#define pr pair
#define all(v) v.begin(), v.end()

//#pragma GCC optimize("Ofast")
//#pragma GCC target("sse,sse2,sse3,ssse3,sse4,popcnt,abm,mmx,tune=native")
//#pragma GCC optimize("unroll-loops")

using namespace std;

template<typename T, typename U>
inline ostream &operator<<(ostream &_out, const pair<T, U> &_p) {
    _out << _p.first << ' ' << _p.second;
    return _out;
}

template<typename T, typename U>
inline istream &operator>>(istream &_in, pair<T, U> &_p) {
    _in >> _p.first >> _p.second;
    return _in;
}

template<typename T>
inline ostream &operator<<(ostream &_out, const vector<T> &_v) {
    if (_v.empty()) { return _out; }
    _out << _v.front();
    for (auto _it = ++_v.begin(); _it != _v.end(); ++_it) { _out << ' ' << *_it; }
    return _out;
}

template<typename T>
inline istream &operator>>(istream &_in, vector<T> &_v) {
    for (auto &_i : _v) { _in >> _i; }
    return _in;
}

const int MAXN = 1e5;
const int INF = 1e9;
const int MOD = 1e9 + 7;

class CTreeDiameter {
public:
    void solve(std::istream &in, std::ostream &out) {
        int n;
        in >> n;
        int ans = 0;
        for (int k = 1; k <= 7; k++) {
            int k1 = 0;
            for (int i = 0; i < n; i++) {
                if ((i % (1 << k)) >= (1 << (k - 1))) {
                    k1++;
                }
            }
            if (k1 > 0) {
                out << k1 << ' ' << n - k1 << ' ';
                for (int i = 0; i < n; i++) {
                    if ((i % (1 << k)) >= (1 << (k - 1))) {
                        out << i + 1 << ' ';
                    }
                }
                for (int i = 0; i < n; i++) {
                    if ((i % (1 << k)) < (1 << (k - 1))) {
                        out << i + 1 << ' ';
                    }
                }
                out << endl;
                int x;
                in >> x;
                if (x == -1) {
                    exit(1);
                }
                ans = max(ans, x);
            }
        }
        out << "-1 " << ans << endl;
    }
};

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    CTreeDiameter solver;
    std::istream &in(std::cin);
    std::ostream &out(std::cout);
    int n;
    in >> n;
    for (int i = 0; i < n; ++i) {
        solver.solve(in, out);
    }
    return 0;
}
|
1146
|
D
|
Frog Jumping
|
A frog is initially at position $0$ on the number line. The frog has two positive integers $a$ and $b$. From a position $k$, it can either jump to position $k+a$ or $k-b$.
Let $f(x)$ be the number of distinct integers the frog can reach if it never jumps on an integer outside the interval $[0, x]$. The frog doesn't need to visit all these integers in one trip, that is, an integer is counted if the frog can somehow reach it if it starts from $0$.
Given an integer $m$, find $\sum_{i=0}^{m} f(i)$. That is, find the sum of all $f(i)$ for $i$ from $0$ to $m$.
|
Split this into $a$ subproblems. For each residue $i$ modulo $a$, we want to find the leftmost position the frog can reach with that residue, and the smallest $x$ needed to reach it. We can do this greedily: starting at $0$, jump to the right until we can jump left by $b$, and repeat until we see a repeated value modulo $a$. For each residue $i$ we thus obtain $x_i$, the smallest $x$ needed, and $d_i$, the leftmost reachable position congruent to $i$ modulo $a$. We can then compute the summation $\sum_{j=x_i}^{m} \lfloor (j-d_i+1)/a \rfloor$. To find this sum in constant time, it's easiest to first reduce it to the case where the lower and upper bounds of the summation are divisible by $a$; then the sum consists of groups of $a$ equal values, i.e. a sum of consecutive integers, which can be computed in constant time. The overall complexity is $O(a)$.
|
[
"dfs and similar",
"math",
"number theory"
] | 2,100
|
#include <iostream>
#include <algorithm>
#include <set>
using namespace std;
int dist[200005];
long long sum[200005];
long long f(long long x)
{
return x*(x+1)/2;
}
int main()
{
int m,a,b;
scanf("%d%d%d",&m,&a,&b);
int g=__gcd(a,b),n=min(m,a+b-1);
for (int i=0;i<=n;i++)
dist[i]=n+1;
set<pair<int,int> > s;
s.insert({0,0});
dist[0]=0;
while (!s.empty())
{
auto p=*s.begin();
s.erase(s.begin());
if (dist[p.second]!=p.first)
continue;
sum[p.first]++;
if (p.second>=b && p.first<dist[p.second-b])
{
dist[p.second-b]=p.first;
s.insert({p.first,p.second-b});
}
if (p.second+a<=n && max(p.first,p.second+a)<dist[p.second+a])
{
dist[p.second+a]=max(p.first,p.second+a);
s.insert({max(p.first,p.second+a),p.second+a});
}
}
long long ans=0;
for (int i=0;i<=n;i++)
{
if (i)
sum[i]+=sum[i-1];
ans+=sum[i];
}
if (n!=m)
{
while ((m+1)%g)
{
ans+=m/g+1;
m--;
}
ans+=g*(f(m/g+1)-f(n/g+1));
}
printf("%lld",ans);
}
|
1146
|
E
|
Hot is Cold
|
You are given an array of $n$ integers $a_1, a_2, \ldots, a_n$.
You will perform $q$ operations. In the $i$-th operation, you have a symbol $s_i$ which is either "<" or ">" and a number $x_i$.
You make a new array $b$ such that $b_j = -a_j$ if $a_j s_i x_i$ and $b_j = a_j$ otherwise (i.e. if $s_i$ is '>', then all $a_j > x_i$ will be flipped). After doing all these replacements, $a$ is set to be $b$.
You want to know what your final array looks like after all operations.
|
You can keep track of an array $w_i$ for $i = 0 \ldots 10^5$, where each entry is a number from $0$ to $3$ meaning 'keep sign', 'flip sign', 'positive', or 'negative', respectively. You can use this to determine the final value: if $a_i = j$, then we look at $w_{|j|}$ to see what the final answer will be. Each operation affects some contiguous range of $w_i$'s. For example, if the operation is $< x$ where $x$ is positive, then each $w_i$ for $i$ from $0$ to $x-1$ goes from "keep sign" <-> "flip sign" or "positive" <-> "negative" (i.e., the lowest bit of the number is flipped, since both $+i$ and $-i$ are less than $x$), and each $w_i$ from $x$ to $10^5$ is set to 'positive' (only $-i$ is less than $x$, so every such value ends up positive). You can use a segment tree with range flip/set to implement these operations efficiently.
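The four states compose under a new range operation with a simple rule; a minimal sketch using the encoding above (0 = keep, 1 = flip, 2 = positive, 3 = negative):

```cpp
#include <cassert>

// Apply a new operation `op` on top of an accumulated state `st`.
// 'flip' toggles keep<->flip and positive<->negative (the lowest bit),
// while 'set positive'/'set negative' discard the previous state.
int compose(int st, int op) {
    if (op == 0) return st;      // keep sign: no change
    if (op == 1) return st ^ 1;  // flip sign: toggle the lowest bit
    return op;                   // force positive/negative: overwrite
}
```

This is exactly what a lazy segment tree needs: a pending tag on a node is itself one of these four values, and a new range operation composes with it in $O(1)$.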
|
[
"bitmasks",
"data structures",
"divide and conquer",
"implementation"
] | 2,400
|
#include <iostream>
#include <vector>
using namespace std;
#define sh 100000
vector<int> v[200005];
int arr[100005],tree[2][400005],ans[200005],cur=-sh;
pair<char,int> qu[100005];
void update(int node,int st,int en,int idx,pair<char,int> val)
{
if (st==en)
{
tree[0][node]=0;
if ((val.first=='>' && cur>val.second) || (val.first=='<' && cur<val.second))
tree[0][node]=1;
tree[1][node]=1;
if ((val.first=='>' && -cur>val.second) || (val.first=='<' && -cur<val.second))
tree[1][node]=0;
}
else
{
int mid=(st+en)/2;
if (st<=idx && idx<=mid)
update(2*node,st,mid,idx,val);
else
update(2*node+1,mid+1,en,idx,val);
for (int i=0;i<2;i++)
tree[i][node]=tree[tree[i][2*node]][2*node+1];
}
}
int main()
{
int n,q;
scanf("%d%d",&n,&q);
for (int i=0;i<n;i++)
scanf("%d",&arr[i]);
for (int i=0;i<q;i++)
{
scanf(" %c%d",&qu[i].first,&qu[i].second);
if (qu[i].first=='>')
{
v[qu[i].second+1+sh].push_back(i);
v[-qu[i].second+sh].push_back(i);
}
if (qu[i].first=='<')
{
v[qu[i].second+sh].push_back(i);
v[-qu[i].second+1+sh].push_back(i);
}
update(1,0,q-1,i,qu[i]);
}
while (1)
{
ans[cur+sh]=tree[0][1];
if (cur==sh)
break;
cur++;
for (int i:v[cur+sh])
update(1,0,q-1,i,qu[i]);
}
for (int i=0;i<n;i++)
printf("%d ",(!ans[arr[i]+sh]-ans[arr[i]+sh])*arr[i]);
}
|
1146
|
F
|
Leaf Partition
|
You are given a rooted tree with $n$ nodes, labeled from $1$ to $n$. The tree is rooted at node $1$. The parent of the $i$-th node is $p_i$. A leaf is node with no children. For a given set of leaves $L$, let $f(L)$ denote the smallest connected subgraph that contains all leaves $L$.
You would like to partition the leaves such that for any two different sets $x, y$ of the partition, $f(x)$ and $f(y)$ are disjoint.
Count the number of ways to partition the leaves, modulo $998244353$. Two ways are different if there are two leaves such that they are in the same set in one way but in different sets in the other.
|
We can solve this with tree dp. Here, we extend the partition to include the internal nodes in the $f$ values of the leaf sets. The dp state counts the number of partitions of the leaves in the subtree rooted at node $i$, subject to some conditions. We keep $3$ values for each node. $dp[i][0]$ counts the number of ways to partition the leaves given that node $i$ does not belong to any part. $dp[i][1]$ is the number of ways given that node $i$ is in some part but still needs to be connected upward, because it is directly connected to only one child below. $dp[i][2]$ is the number of ways given that node $i$ is in some part and does not need to connect upward. Note that the three cases are disjoint. As a base case, $dp[leaf][2]$ is $1$ and all other values are zero. To compute the dp values of a node, we iterate through its direct children. If we choose to include a child in the same part, there are $dp[child][1] + dp[child][2]$ ways to do so; otherwise, if the child is in a different part, there are $dp[child][0] + dp[child][2]$ ways. See the attached code for more details on how to compute the transitions. The final answer is $dp[0][0] + dp[0][2]$.
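A minimal sketch of these transitions, using the state convention of this paragraph (the attached code uses the same transitions with states $0$ and $2$ relabeled); `leafPartitions` and the adjacency-list format are my own framing:

```cpp
#include <cassert>
#include <vector>

const long long MOD = 998244353;

// d[0] = node in no part, d[1] = in a part that must extend upward,
// d[2] = in a part that need not extend upward. Leaves start as {0, 0, 1};
// an internal node starts as {1, 0, 0} before merging its children.
std::vector<long long> solveRec(int u, const std::vector<std::vector<int>>& g) {
    if (g[u].empty()) return {0, 0, 1};
    std::vector<long long> d = {1, 0, 0};
    for (int v : g[u]) {
        auto c = solveRec(v, g);
        long long join = (c[1] + c[2]) % MOD;  // child joins u's part
        long long sep  = (c[0] + c[2]) % MOD;  // child's part is sealed off
        std::vector<long long> nd(3);
        nd[0] = d[0] * sep % MOD;                          // u stays out
        nd[1] = (d[0] * join + d[1] * sep) % MOD;          // exactly one child joined
        nd[2] = (d[1] * join + d[2] * (join + sep)) % MOD; // two or more joined
        d = nd;
    }
    return d;
}

// number of valid leaf partitions of the tree rooted at node 0
long long leafPartitions(const std::vector<std::vector<int>>& g) {
    auto d = solveRec(0, g);
    return (d[0] + d[2]) % MOD;
}
```

For a root with two leaf children this gives $2$ (the two set partitions of the leaves), and for a star with three leaves it gives $5$.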
|
[
"dp",
"trees"
] | 2,500
|
/**
* code generated by JHelper
* More info: https://github.com/AlexeyDmitriev/JHelper
* @author Azat Ismagilov
*/
#include <fstream>
#include <iostream>
#include <algorithm>
#include <vector>
#include <numeric>
#define int long long
#define fs first
#define sc second
#define pb push_back
#define ppb pop_back
#define pf push_front
#define ppf pop_front
#define mp make_pair
#define len(v) ((int)v.size())
#define vc vector
#define pr pair
#define all(v) v.begin(), v.end()
//#pragma GCC optimize("Ofast")
//#pragma GCC target("sse,sse2,sse3,ssse3,sse4,popcnt,abm,mmx,tune=native")
//#pragma GCC optimize("unroll-loops")
using namespace std;
template<typename T, typename U>
inline ostream &operator<<(ostream &_out, const pair<T, U> &_p) {
_out << _p.first << ' ' << _p.second;
return _out;
}
template<typename T, typename U>
inline istream &operator>>(istream &_in, pair<T, U> &_p) {
_in >> _p.first >> _p.second;
return _in;
}
template<typename T>
inline ostream &operator<<(ostream &_out, const vector<T> &_v) {
if (_v.empty()) { return _out; }
_out << _v.front();
for (auto _it = ++_v.begin(); _it != _v.end(); ++_it) { _out << ' ' << *_it; }
return _out;
}
template<typename T>
inline istream &operator>>(istream &_in, vector<T> &_v) {
for (auto &_i : _v) { _in >> _i; }
return _in;
}
const int MAXN = 2e5;
const int INF = 1e9;
const int MOD = 998244353;
int sum(int a, int b) {
if (a + b >= MOD) {
return a + b - MOD;
}
return a + b;
}
int mul(int a, int b) {
if (a * b >= MOD) {
return (a * b) % MOD;
}
return a * b;
}
int sqr(int a) {
return (a * a) % MOD;
}
int bin_pow(int a, int b) {
if (b == 0) {
return 1;
}
if (b % 2) {
return mul(a, bin_pow(a, b - 1));
}
return sqr(bin_pow(a, b / 2));
}
vc<int> g[MAXN];
class FLeafPartition {
public:
void solve(std::istream &in, std::ostream &out) {
int n;
in >> n;
vc<int> p(n - 1);
for (int i = 0; i < n - 1; i++) {
in >> p[i];
p[i]--;
g[p[i]].pb(i + 1);
}
vc<vc<int>> dp(n, vc<int>(3));
for (int i = n - 1; i >= 0; i--) {
if (len(g[i]) > 0) {
dp[i][2] = 1;
for (auto v : g[i]) {
dp[i][0] = sum(mul(sum(dp[i][0], dp[i][1]), sum(dp[v][0], dp[v][1])),
mul(dp[i][0], sum(dp[v][0], dp[v][2])));
dp[i][1] = sum(mul(dp[i][2], sum(dp[v][0], dp[v][1])), mul(dp[i][1], sum(dp[v][0], dp[v][2])));
dp[i][2] = mul(dp[i][2], sum(dp[v][0], dp[v][2]));
}
} else
dp[i][0] = 1;
}
out << sum(dp[0][0], dp[0][2]);
}
};
signed main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
FLeafPartition solver;
std::istream &in(std::cin);
std::ostream &out(std::cout);
solver.solve(in, out);
return 0;
}
|
1146
|
G
|
Zoning Restrictions
|
You are planning to build housing on a street. There are $n$ spots available on the street on which you can build a house. The spots are labeled from $1$ to $n$ from left to right. In each spot, you can build a house with an integer height between $0$ and $h$.
In each spot, if a house has height $a$, you can gain $a^2$ dollars from it.
The city has $m$ zoning restrictions though. The $i$-th restriction says that if the tallest house from spots $l_i$ to $r_i$ is strictly more than $x_i$, you must pay a fine of $c_i$.
You would like to build houses to maximize your profit (sum of dollars gained minus fines). Determine the maximum profit possible.
|
There are two different solutions. The first is min cut. Build the graph as follows. There is a source and a sink: the source side represents things that are "true", and the sink side represents things that are "false". There are $O(nh)$ nodes $(i,j)$ for $1 \leq i \leq n$, $0 \leq j \leq h$, where $i$ is a spot and $j$ is a height. Node $(i,j)$ represents the condition "the height at spot $i$ is at least $j$". We connect $(i,j) \to (i,j-1)$ with infinite capacity (since "spot $i \geq j$" implies "spot $i \geq j-1$"). This is needed to make sure that, for every $i$, some edge $(i,j) \to (i,j+1)$ is cut exactly once (though it might not be needed in practice). We draw an edge $(i,j) \to (i,j+1)$ with capacity $h^2-j^2$, meaning that if $(i,j)$ is on the true side and $(i,j+1)$ on the false side, we incur that cost. There are also $m$ nodes, one per restriction, each representing true if the restriction is violated and false otherwise. Call the $k$-th such node $k$. We connect $k$ to the sink with capacity $c_k$, and we connect $(a, x_k+1)$ for $l_k \leq a \leq r_k$ to $k$ with infinite capacity (a house of height at least $x_k+1$, i.e., strictly more than $x_k$, somewhere in the range implies the restriction is violated). To get the answer, run max flow and subtract the result from $h^2 n$. The max flow is bounded above by $h^2 n$, the number of nodes is $O(hn)$, and the number of edges is $O(hn+mn)$, so by Ford-Fulkerson the worst-case running time is $O(h^2 n (hn+mn))$, which is about $50^5$. We can speed this up slightly with an RMQ structure to get $hn+m$ edges rather than $hn+mn$, but this optimization isn't necessary. The second approach is dp. Let $dp[i][j][k]$ be the max profit given that we only consider buildings $i..j$ and the heights are at most $k$. To compute this, iterate through a middle point $f$ between $i$ and $j$ as the tallest building and fix its height as some $x \leq k$. We can compute which restrictions lie entirely inside the interval and are violated, and add the resulting profit to our sum.
Now, this splits into two subproblems, $dp[i][f-1][x]$ and $dp[f+1][j][x]$, and the main observation is that these subproblems are independent.
|
[
"dp",
"flows",
"graphs"
] | 2,700
|
import java.io.OutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.PrintWriter;
import java.util.Arrays;
import java.io.BufferedWriter;
import java.util.InputMismatchException;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;
import java.io.Writer;
import java.io.OutputStreamWriter;
import java.io.InputStream;
/**
* Built using CHelper plug-in
* Actual solution is at the top
*
* @author lewin
*/
public class Main {
public static void main(String[] args) {
InputStream inputStream = System.in;
OutputStream outputStream = System.out;
InputReader in = new InputReader(inputStream);
OutputWriter out = new OutputWriter(outputStream);
ZoningRestrictions solver = new ZoningRestrictions();
solver.solve(1, in, out);
out.close();
}
static class ZoningRestrictions {
int INF = 1 << 29;
int nnodes;
int n;
int h;
int m;
int[] l;
int[] r;
int[] x;
int[] c;
int SOURCE;
int SINK;
int getPositionNode(int i, int j) {
if (j == 0) return SOURCE;
if (j >= h + 1) return SINK;
return j * n + i;
}
int getRestrictionNode(int x) {
return nnodes - 3 - x;
}
public void solve(int testNumber, InputReader in, OutputWriter out) {
n = in.nextInt();
h = in.nextInt();
m = in.nextInt();
l = new int[m];
r = new int[m];
x = new int[m];
c = new int[m];
for (int i = 0; i < m; i++) {
l[i] = in.nextInt() - 1;
r[i] = in.nextInt() - 1;
x[i] = in.nextInt();
c[i] = in.nextInt();
}
nnodes = n * (h + 1) + m + 2;
SOURCE = nnodes - 1;
SINK = nnodes - 2;
List<MaxFlowDinic.Edge>[] gg = LUtils.genArrayList(nnodes);
for (int i = 0; i < n; i++) {
for (int j = 0; j <= h; j++) {
int a1 = getPositionNode(i, j), a2 = getPositionNode(i, j + 1);
MaxFlowDinic.addEdge(gg, a2, a1, INF); // maybe not needed...
MaxFlowDinic.addEdge(gg, a1, a2, h * h - j * j);
}
}
for (int i = 0; i < m; i++) {
int cpos = getRestrictionNode(i);
MaxFlowDinic.addEdge(gg, cpos, SINK, c[i]);
for (int j = l[i]; j <= r[i]; j++) { // can reduce to constant edges if we add log(n)*n*h nodes
MaxFlowDinic.addEdge(gg, getPositionNode(j, x[i] + 1), cpos, INF);
}
}
out.println(h * h * n - MaxFlowDinic.maxFlow(gg, SOURCE, SINK));
}
}
static class InputReader {
private InputStream stream;
private byte[] buf = new byte[1 << 16];
private int curChar;
private int numChars;
public InputReader(InputStream stream) {
this.stream = stream;
}
public int read() {
if (this.numChars == -1) {
throw new InputMismatchException();
} else {
if (this.curChar >= this.numChars) {
this.curChar = 0;
try {
this.numChars = this.stream.read(this.buf);
} catch (IOException var2) {
throw new InputMismatchException();
}
if (this.numChars <= 0) {
return -1;
}
}
return this.buf[this.curChar++];
}
}
public int nextInt() {
int c;
for (c = this.read(); isSpaceChar(c); c = this.read()) {
;
}
byte sgn = 1;
if (c == 45) {
sgn = -1;
c = this.read();
}
int res = 0;
while (c >= 48 && c <= 57) {
res *= 10;
res += c - 48;
c = this.read();
if (isSpaceChar(c)) {
return res * sgn;
}
}
throw new InputMismatchException();
}
public static boolean isSpaceChar(int c) {
return c == 32 || c == 10 || c == 13 || c == 9 || c == -1;
}
}
static class MaxFlowDinic {
public static void addEdge(List<MaxFlowDinic.Edge>[] graph, int s, int t, int cap) {
graph[s].add(new MaxFlowDinic.Edge(t, graph[t].size(), cap));
graph[t].add(new MaxFlowDinic.Edge(s, graph[s].size() - 1, 0));
}
static boolean dinicBfs(List<MaxFlowDinic.Edge>[] graph, int src, int dest, int[] dist) {
Arrays.fill(dist, -1);
dist[src] = 0;
int[] Q = new int[graph.length];
int sizeQ = 0;
Q[sizeQ++] = src;
for (int i = 0; i < sizeQ; i++) {
int u = Q[i];
for (MaxFlowDinic.Edge e : graph[u]) {
if (dist[e.t] < 0 && e.f < e.cap) {
dist[e.t] = dist[u] + 1;
Q[sizeQ++] = e.t;
}
}
}
return dist[dest] >= 0;
}
static int dinicDfs(List<MaxFlowDinic.Edge>[] graph, int[] ptr, int[] dist, int dest, int u, int f) {
if (u == dest)
return f;
for (; ptr[u] < graph[u].size(); ++ptr[u]) {
MaxFlowDinic.Edge e = graph[u].get(ptr[u]);
if (dist[e.t] == dist[u] + 1 && e.f < e.cap) {
int df = dinicDfs(graph, ptr, dist, dest, e.t, Math.min(f, e.cap - e.f));
if (df > 0) {
e.f += df;
graph[e.t].get(e.rev).f -= df;
return df;
}
}
}
return 0;
}
public static int maxFlow(List<MaxFlowDinic.Edge>[] graph, int src, int dest) {
int flow = 0;
int[] dist = new int[graph.length];
while (dinicBfs(graph, src, dest, dist)) {
int[] ptr = new int[graph.length];
while (true) {
int df = dinicDfs(graph, ptr, dist, dest, src, Integer.MAX_VALUE);
if (df == 0)
break;
flow += df;
}
}
return flow;
}
public static class Edge {
public int t;
public int rev;
public int cap;
public int f;
public int idx;
public Edge(int t, int rev, int cap) {
this.t = t;
this.rev = rev;
this.cap = cap;
}
public Edge(int t, int rev, int cap, int idx) {
this.t = t;
this.rev = rev;
this.cap = cap;
this.idx = idx;
}
}
}
static class OutputWriter {
private final PrintWriter writer;
public OutputWriter(OutputStream outputStream) {
writer = new PrintWriter(new BufferedWriter(new OutputStreamWriter(outputStream)));
}
public OutputWriter(Writer writer) {
this.writer = new PrintWriter(writer);
}
public void close() {
writer.close();
}
public void println(int i) {
writer.println(i);
}
}
static class LUtils {
public static <E> List<E>[] genArrayList(int size) {
return Stream.generate(ArrayList::new).limit(size).toArray(List[]::new);
}
}
}
|
1146
|
H
|
Satanic Panic
|
You are given a set of $n$ points in a 2D plane. No three points are collinear.
A pentagram is a set of $5$ points $A,B,C,D,E$ that can be arranged as follows. Note the length of the line segments don't matter, only that those particular intersections exist.
Count the number of ways to choose $5$ points from the given set that form a pentagram.
|
Originally, I only had an $O(n^4)$ solution, but [user:y0105w49] recognized an easy solution with bitsets and helped me find an $O(n^3)$ solution instead. We do this by counting bad configurations. There are initially $\binom{n}{5}$ tuples of $5$ points, and we want to subtract out the bad configurations. There are two classes of bad configurations: one is a point inside a convex quadrilateral; the other is a pair of points inside a triangle. The first class contains two instances of "point in a triangle", and the second class contains four such instances. See the following image for more details. Let's precompute, for each number $i$, the number of triangles from the chosen set that contain exactly $i$ points inside their interior. Let this number be $f_i$. It can be found using bitsets in $O(n^4 / 64)$ time, or with some other geometric method in $O(n^3)$ time. First, we subtract $\sum i \cdot f_i \cdot (n-4) / 2$. This removes all instances of the first class of bad configurations, but subtracts instances of the second class twice. To fix that, we add back $\sum \binom{i}{2} f_i$. The final formulas take only $O(n)$ time to evaluate, so the time is dominated by computing the $f$ values, which is $O(n^3)$. There is also another approach using angle sweeps that I found later (though it is a bit harder to explain and to implement). The main idea is to fix the bottom point of the pentagram as well as the third and fourth points. Then the problem reduces to counting the number of points that lie in some intersection of half-planes. To do it efficiently, if we fix the first and third points, we can do an angle sweep to count the candidates for the second point as we iterate through the fourth point (and symmetrically count the fifth point, multiplying the two together).
We can precompute these numbers to get a runtime of $O(n^3 \log n)$. A third solution from Scott: the goal is to find the number of convex pentagons. We will do this through constructive DP. Take all directed segments between points and sort them by angle. Then maintain an array $dp[i][j][k]$, the number of polylines that start at vertex $i$, end at vertex $j$, contain $k$ segments, and only make clockwise turns. Loop over each directed segment in order and update the DP array accordingly. Each convex pentagon has exactly one traversal that satisfies the above constraints, so the answer is just the sum of all $dp[i][i][5]$.
|
[
"dp",
"geometry"
] | 2,900
|
#include <cstdio>
#include <cstring>
#include <cstdlib>
#include <algorithm>
#include <complex>
using namespace std;
#define FR(i, a, b) for(int i = (a); i < (b); ++i)
#define FOR(i, n) FR(i, 0, n)
#define MP make_pair
#define A first
#define B second
typedef long long ll;
typedef complex<ll> pnt;
const int MAXN = 400;
#define X real
#define Y imag
pnt lis[MAXN];
int n;
int num[MAXN][MAXN];
int ans[MAXN];
ll cross(pnt a, pnt b) {
return imag(conj(a) * b);
}
pnt getPoint() {
int x, y;
scanf("%d%d", &x, &y);
return pnt(x, y);
}
int below(int i, int j) {
return (X(lis[i]) == X(lis[j])) && (Y(lis[i]) < Y(lis[j]));
}
int betweenBelow(int i, int j, int x) {
if (X(lis[i]) < X(lis[j])) {
return X(lis[i]) < X(lis[x]) && X(lis[x]) < X(lis[j]) &&
cross(lis[j] - lis[i], lis[x] - lis[i]) < 0;
} else {
return X(lis[j]) < X(lis[x]) && X(lis[x]) < X(lis[i]) &&
cross(lis[i] - lis[j], lis[x] - lis[j]) < 0;
}
}
int main() {
scanf("%d", &n);
FOR(i, n) {
lis[i] = getPoint();
}
memset(num, 0, sizeof(num));
FOR(i, n) {
FOR(j, n) if(X(lis[i]) < X(lis[j])){
FOR(k, n) if(k != i && k != j) {
if(below(k, i)) num[i][j]++;
if(below(k, j)) num[i][j]++;
if(betweenBelow(i, j, k)) {
num[i][j] += 2;
}
}
num[j][i] = -num[i][j];
}
}
memset(ans, 0, sizeof(ans));
FOR(i, n) FOR(j, i) FOR(k, j) {
int temp = abs(num[i][j] + num[j][k] + num[k][i]) / 2;
temp -= betweenBelow(i, j, k);
temp -= betweenBelow(j, k, i);
temp -= betweenBelow(k, i, j);
ans[temp]++;
}
ll x = (ll) n * (n - 1) * (n - 2) * (n - 3) * (n - 4) / 120;
ll r = 0;
FOR(i, n - 2)
r -= (ll) ans[i] * i * (n - 4);
x += r/2;
FOR(i, n - 2)
x += (ll) i * (i - 1) / 2 * ans[i];
printf("%lld\n", x);
return 0;
}
|
1147
|
A
|
Hide and Seek
|
Alice and Bob are playing a game on a line with $n$ cells. There are $n$ cells labeled from $1$ through $n$. For each $i$ from $1$ to $n-1$, cells $i$ and $i+1$ are adjacent.
Alice initially has a token on some cell on the line, and Bob tries to guess where it is.
Bob guesses a sequence of line cell numbers $x_1, x_2, \ldots, x_k$ in order. In the $i$-th question, Bob asks Alice if her token is currently on cell $x_i$. That is, Alice can answer either "YES" or "NO" to each Bob's question.
\textbf{At most one time} in this process, before or after answering a question, Alice is allowed to move her token from her current cell to some \textbf{adjacent} cell. Alice acted in such a way that she was able to answer "NO" to \textbf{all} of Bob's questions.
Note that Alice can even move her token before answering the first question or after answering the last question. Alice can also choose to not move at all.
You are given $n$ and Bob's questions $x_1, \ldots, x_k$. You would like to count the number of scenarios that let Alice answer "NO" to all of Bob's questions.
Let $(a,b)$ denote a scenario where Alice starts at cell $a$ and ends at cell $b$. Two scenarios $(a_i, b_i)$ and $(a_j, b_j)$ are different if $a_i \neq a_j$ or $b_i \neq b_j$.
|
Another way to phrase this problem is to count the number of ordered pairs $(p,q)$ that satisfy the following: $|p-q| \leq 1$, and there exists an index $i$ such that $p$ doesn't appear in $x[0\ldots i]$ and $q$ doesn't appear in $x[i+1\ldots k]$. There are only $O(n)$ pairs that satisfy the first condition, so we can loop over all of them and efficiently check the second condition. If $p=q$, the pair is valid exactly when $p$ doesn't appear in $x$ at all, which we can check for all values $1\ldots n$ with one sweep through $x$. For $p,q$ that differ by exactly one, we check that the first occurrence of $p$ comes after the last occurrence of $q$. This is because Alice can start at $p$, wait until the last question about $q$ has been asked, and then move to $q$; this is guaranteed to avoid all of Bob's questions. To answer these queries efficiently, precompute the first and last occurrence of each number, which takes one sweep through the array. Checking a pair then amounts to checking $first[p] > last[q]$. The total time complexity is $O(k)$ for pre-computation and $O(n)$ for answering all the questions.
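A compact sketch of the whole check (the function name and the 1-indexed occurrence positions are my choices):

```cpp
#include <cassert>
#include <vector>

// Count scenarios (p, q) with |p - q| <= 1: for p == q, p must never be
// asked; for adjacent p and q, every question about p must come after every
// question about q (Alice starts at p and moves to q after last[q]).
long long countScenarios(int n, const std::vector<int>& x) {
    int k = x.size();
    std::vector<int> first(n + 1, k + 1), last(n + 1, 0);  // 1-indexed positions
    for (int i = 0; i < k; i++) {
        if (first[x[i]] == k + 1) first[x[i]] = i + 1;
        last[x[i]] = i + 1;
    }
    long long ans = 0;
    for (int p = 1; p <= n; p++) {
        if (last[p] == 0) ans++;  // p never asked: scenario (p, p) works
        for (int q : {p - 1, p + 1})
            if (q >= 1 && q <= n && first[p] > last[q]) ans++;
    }
    return ans;
}
```

With $n = 5$ and questions $5, 1, 4$ this counts $9$ scenarios, matching the logic of the attached solution.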
|
[
"graphs"
] | 1,500
|
/**
* code generated by JHelper
* More info: https://github.com/AlexeyDmitriev/JHelper
* @author Azat Ismagilov
*/
#include <fstream>
#include <iostream>
#include <algorithm>
#include <vector>
#include <numeric>
#define int long long
#define fs first
#define sc second
#define pb push_back
#define ppb pop_back
#define pf push_front
#define ppf pop_front
#define mp make_pair
#define len(v) ((int)v.size())
#define vc vector
#define pr pair
#define all(v) v.begin(), v.end()
//#pragma GCC optimize("Ofast")
//#pragma GCC target("sse,sse2,sse3,ssse3,sse4,popcnt,abm,mmx,tune=native")
//#pragma GCC optimize("unroll-loops")
using namespace std;
template<typename T, typename U>
inline ostream &operator<<(ostream &_out, const pair<T, U> &_p) {
_out << _p.first << ' ' << _p.second;
return _out;
}
template<typename T, typename U>
inline istream &operator>>(istream &_in, pair<T, U> &_p) {
_in >> _p.first >> _p.second;
return _in;
}
template<typename T>
inline ostream &operator<<(ostream &_out, const vector<T> &_v) {
if (_v.empty()) { return _out; }
_out << _v.front();
for (auto _it = ++_v.begin(); _it != _v.end(); ++_it) { _out << ' ' << *_it; }
return _out;
}
template<typename T>
inline istream &operator>>(istream &_in, vector<T> &_v) {
for (auto &_i : _v) { _in >> _i; }
return _in;
}
const int MAXN = 1e5;
const int INF = 1e9;
const int MOD = 1e9 + 7;
class Hideandseek {
public:
void solve(std::istream &in, std::ostream &out) {
int n, k;
in >> n >> k;
vc<int> x(k);
in >> x;
vc<int> firsts(n + 1, k), lasts(n + 1, -1);
for (int i = 1; i <= k; i++) {
lasts[x[i - 1]] = i;
}
for (int i = k; i >= 1; i--) {
firsts[x[i - 1]] = i;
}
int ans = 0;
for (int i = 1; i <= n; i++) {
for (int j = max(i - 1, 1ll); j <= min(n, i + 1); j++)
if (j == i) {
if (lasts[i] == -1)
ans++;
} else {
ans += (firsts[j] - lasts[i]) >= 0;
//out << i << ' ' << j << ' ' << firsts[i] << ' ' << lasts[j] << endl;
}
}
out << ans;
}
};
signed main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
Hideandseek solver;
std::istream &in(std::cin);
std::ostream &out(std::cout);
solver.solve(in, out);
return 0;
}
|
1147
|
B
|
Chladni Figure
|
Inaka has a disc, the circumference of which is $n$ units. The circumference is equally divided by $n$ points numbered clockwise from $1$ to $n$, such that points $i$ and $i + 1$ ($1 \leq i < n$) are adjacent, and so are points $n$ and $1$.
There are $m$ straight segments on the disc, the endpoints of which are all among the aforementioned $n$ points.
Inaka wants to know if her image is rotationally symmetrical, i.e. if there is an integer $k$ ($1 \leq k < n$), such that if all segments are rotated clockwise around the center of the circle by $k$ units, the new image will be the same as the original one.
|
Let's brute force the value of $k$ and check whether rotating the image by $k$ units yields the same image. We can do this by iterating through all segments $(a,b)$ and checking that $(a+k,b+k)$ is also a segment (endpoints taken modulo $n$ if needed). This gives an $O(nm)$ solution; however, you can notice that we only need to check divisors of $n$ rather than all values from $1$ to $n$. This is because the set of segments $(a,b), (a+k,b+k), (a+2k,b+2k), \ldots$ is exactly equal to $(a,b), (a+\gcd(n,k), b+\gcd(n,k)), (a+2\gcd(n,k), b+2\gcd(n,k)), \ldots$. Thus, this takes $O(m \cdot d(n))$ time, where $d(n)$ denotes the number of divisors of $n$, which is fast enough to pass this problem. There is also a faster, almost-linear solution. We can reduce this to the problem of finding the smallest period of a string. For every point, sort the lengths of the segments starting from that point (length here means clockwise distance), and add a null character to delimit each point. For instance, the first sample case's string might start like $2, -1, -1, 4, 8, 10, -1, \ldots$, representing the points from $1$ to $3$. Such a string can be computed in $O(m \log m)$ time. After building this string, of length $w$, we just want to check whether it has a period smaller than $w$. We can do this by concatenating the string with itself, then using the Z-algorithm to check whether any index $i$ from $1$ to $w-1$ has a Z-value of at least $w$.
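The Z-algorithm period check from the last step can be sketched as follows (shown on a plain `std::string` for brevity; the real encoding would use a sequence of integers):

```cpp
#include <cassert>
#include <string>
#include <vector>
#include <algorithm>

// Does the cyclic sequence s equal some nontrivial rotation of itself?
// Equivalent to s occurring inside s + s at a shift in [1, |s| - 1],
// which the Z-algorithm detects in linear time.
bool hasNontrivialRotation(const std::string& s) {
    std::string t = s + s;
    int n = t.size(), w = s.size();
    std::vector<int> z(n, 0);
    for (int i = 1, l = 0, r = 0; i < n; i++) {
        if (i < r) z[i] = std::min(r - i, z[i - l]);
        while (i + z[i] < n && t[z[i]] == t[i + z[i]]) z[i]++;
        if (i + z[i] > r) { l = i; r = i + z[i]; }
    }
    for (int i = 1; i < w; i++)
        if (z[i] >= w) return true;  // shift i maps the sequence onto itself
    return false;
}
```

Here `"abab"` has the nontrivial rotation by $2$, while `"abc"` has none.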
|
[
"brute force",
"strings"
] | 1,900
|
#include <iostream>
#include <algorithm>
#include <vector>
using namespace std;
int n,m;
vector<pair<int,int> > s;
bool test(int k)
{
if (k==n)
return 0;
vector<pair<int,int> > cur;
for (auto p:s)
{
p.first=(p.first+k)%n;
p.second=(p.second+k)%n;
if (p.first>p.second)
swap(p.first,p.second);
cur.push_back(p);
}
sort(cur.begin(),cur.end());
return (cur==s);
}
int main()
{
scanf("%d%d",&n,&m);
while (m--)
{
int a,b;
scanf("%d%d",&a,&b);
a--;
b--;
if (a>b)
swap(a,b);
s.push_back({a,b});
}
sort(s.begin(),s.end());
for (int i=1;i*i<=n;i++)
{
if (n%i==0 && (test(i) || test(n/i)))
{
printf("Yes");
return 0;
}
}
printf("No");
}
|
1147
|
C
|
Thanos Nim
|
Alice and Bob are playing a game with $n$ piles of stones. It is guaranteed that $n$ is an even number. The $i$-th pile has $a_i$ stones.
Alice and Bob will play a game alternating turns with Alice going first.
On a player's turn, they must choose \textbf{exactly} $\frac{n}{2}$ nonempty piles and independently remove a positive number of stones from each of the chosen piles. They \textbf{can} remove a \textbf{different} number of stones from the piles in a single turn. The first player unable to make a move loses (when there are less than $\frac{n}{2}$ nonempty piles).
Given the starting configuration, determine who will win the game.
|
The main claim is that a player who is forced to reduce the minimum number of stones over all piles loses. Intuitively, every time a player reduces the minimum, the other player has a move that doesn't reduce the minimum, and a player who isn't forced to reduce the minimum has a move that forces the other player to do so. More formally, let $m$ be the minimum number of stones in a pile, and let $x$ be the number of piles with exactly $m$ stones. Alice can win if and only if $x \leq n/2$. Let's call the positions where Alice can win "winning positions", and all other positions "losing positions". To see why this works, we need to show that from a winning position we can reach some losing position, and that from a losing position we can only reach winning positions. If we are at a winning position, at least $n/2$ piles have strictly more than $m$ stones, so we can choose any $n/2$ of them and reduce them all to $m$ stones; this is now a losing position. If we are at a losing position, no matter what we do, we must include a pile of size $m$ in our chosen subset. If $m$ is zero, this means we have no available moves. Otherwise, the minimum will strictly decrease, but at most $n/2$ piles (those we chose) can attain the new minimum, so losing positions can only reach winning positions. The solution is then as follows: find the minimum of the array and its frequency; print "Alice" if the frequency is at most $n/2$, and "Bob" otherwise. The time complexity is $O(n)$. Alternatively, you can sort and check whether the elements at indices $0$ and $n/2$ are equal, for a short three-line solution.
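The sort-based variant mentioned at the end might look like this (a sketch; the function name is mine):

```cpp
#include <cassert>
#include <algorithm>
#include <vector>

// After sorting, more than n/2 piles share the minimum exactly when the
// element at index n/2 still equals the minimum, in which case Bob wins.
bool aliceWins(std::vector<int> a) {
    std::sort(a.begin(), a.end());
    return a[0] != a[a.size() / 2];
}
```

For piles $\{1,1,2,2\}$ the minimum occurs $n/2 = 2$ times, so Alice wins; for $\{1,1,1,2\}$ it occurs $3 > n/2$ times, so Bob wins.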
|
[
"games"
] | 2,000
|
#include <iostream>
using namespace std;
int main()
{
int n,mn=1e9,cnt=0;
scanf("%d",&n);
for (int i=0;i<n;i++)
{
int a;
scanf("%d",&a);
if (a<mn)
{
mn=a;
cnt=0;
}
if (a==mn)
cnt++;
}
if (cnt>n/2)
printf("Bob");
else
printf("Alice");
}
|
1147
|
D
|
Palindrome XOR
|
You are given a string $s$ consisting of characters "1", "0", and "?". The first character of $s$ is guaranteed to be "1". Let $m$ be the number of characters in $s$.
Count the number of ways we can choose a pair of integers $a, b$ that satisfies the following:
- $1 \leq a < b < 2^m$
- When written without leading zeros, the base-2 representations of $a$ and $b$ are both palindromes.
- The base-2 representation of bitwise XOR of $a$ and $b$ matches the pattern $s$. We say that $t$ matches $s$ if the lengths of $t$ and $s$ are the same and for every $i$, the $i$-th character of $t$ is equal to the $i$-th character of $s$, or the $i$-th character of $s$ is "?".
Compute this count modulo $998244353$.
|
Since the leading character of $s$ is a "1", then that means $a < 2^{m-1}$ and $2^{m-1} \leq b < 2^m$. Let's fix the length of $a$ as $k$. I'll describe a more general solution, so there might be simpler solutions that work for this specific problem. Let's make a graph with $n+k+2$ nodes. The first $n$ nodes represent the $n$ bits of $b$, and the next $k$ nodes represent the $k$ bits of $a$. The last two nodes represent a $0$ node and $1$ node (which we will explain later). We want to find the number of ways to color the graph with two colors $0$ and $1$ such that they satisfy some conditions. Let's draw two different types of edges $0$-edges and $1$-edges. If two nodes are connected by a $0$-edge, then that means they must be the same color. If two nodes are connected by a $1$-edge, then that means they must be a different color. We will draw some edges as follows: Draw a $1$ edge between the $0$ node and $1$ node to represent they must be different colors. Draw a $0$ edge between $b_i$ and $b_{n-i-1}$ to represent the palindrome conditions (similarly we can do this for $a$). For the $i$-th bit, if $s_i$ is "1", draw a $1$ edge between $a_i$ and $b_i$ (if $i > k$, we instead draw an edge from $b_i$ to $1$). If $s_i$ is "0", then draw a $0$ edge between $a_i$ and $b_i$. If $s_i$ is "?", then don't draw any edges, since there are no explicit constraints. Now, we want to count the number of valid colorings. We want to split the graph into a two colors, which is a bipartite graph. We want all edges that cross the bipartition to be $1$ edges and all edges within the same bipartition to be $0$ edges. To count this, we first collapse all connected components of $0$ edges, then check if the remaining $1$ edges form a bipartite graph. If there is a non-bipartite graph, return $0$ immediately, since this means it's impossible to fulfill the conditions. Otherwise, let $C$ be the number of connected components. We add $2^{C-1}$ to our answer. 
The reason we subtract $1$ is that the coloring of the component containing the $0$ and $1$ nodes is fixed, but every other component can be colored in either of two ways. There are $n$ different lengths to try, each of which takes a linear amount of time to process, so the overall time complexity is $O(n^2)$.
|
[
"dfs and similar",
"graphs"
] | 2,400
|
#include <bits/stdc++.h>
#define gcd(a, b) ((!a || !b) ? (a || b) : __gcd(a, b))
using namespace std;
typedef long long ll;
mt19937 rnd(1);
const int N = 2e3 + 7;
vector <pair <int, int> > g[N];
int val[N];
void add_edge(int a, int b, int c)
{
g[a].push_back({b, c});
g[b].push_back({a, c});
}
vector <int> t;
bool u[N];
void dfs(int v)
{
t.push_back(v);
u[v] = true;
for (auto c : g[v])
{
int to = c.first;
if (!u[to])
{
dfs(to);
}
}
}
const int M = 998244353;
bool check(int s, int col)
{
u[s] = true;
val[s] = col;
for (auto c : g[s])
{
int to = c.first;
if (val[to] != -1 && (val[s] + c.second) % 2 != val[to])
{
return false;
}
if (!u[to])
{
if (!check(to, (val[s] + c.second) % 2))
{
return false;
}
}
}
return true;
}
void solve()
{
string s;
cin >> s;
reverse(s.begin(), s.end());
int n = (int) s.size();
int sum = 0;
for (int m = 1; m < n; m++)
{
vector <int> ind_a(n);
for (int j = 0; j < n; j++)
{
ind_a[j] = j;
}
vector <int> ind_b(m);
for (int j = 0; j < m; j++)
{
ind_b[j] = n + j;
}
for (int x = 0; x < n + m; x++)
{
val[x] = -1;
g[x].clear();
}
for (int i = 0; i < n; i++)
{
add_edge(ind_a[i], ind_a[n - 1 - i], 0);
}
for (int i = 0; i < m; i++)
{
add_edge(ind_b[i], ind_b[m - 1 - i], 0);
}
for (int i = 0; i < n; i++)
{
if (s[i] == '?') continue;
if (i < m)
{
add_edge(ind_a[i], ind_b[i], s[i] - '0');
}
else
{
val[ind_a[i]] = s[i] - '0';
}
}
val[ind_b.back()] = 1;
for (int i = 0; i < n + m; i++)
{
u[i] = false;
}
int ans = 1;
for (int i = 0; i < n + m; i++)
{
if (!u[i])
{
t.clear();
dfs(i);
int start = i;
int value = -1;
for (int x : t)
{
if (val[x] != -1)
{
start = x;
value = val[x];
}
u[x] = 0;
}
if (value != -1)
{
ans *= check(start, value);
}
else
{
ans *= (2 * check(start, 0));
ans %= M;
}
}
}
sum = (sum + ans) % M;
}
cout << sum << endl;
}
int main()
{
#ifdef ONPC
freopen("a.in", "r", stdin);
#endif
ios::sync_with_stdio(0);
cin.tie(0);
solve();
}
|
1147
|
E
|
Rainbow Coins
|
Carl has $n$ coins of various colors, and he would like to sort them into piles. The coins are labeled $1,2,\ldots,n$, and each coin is exactly one of red, green, or blue. He would like to sort the coins into three different piles so one pile contains all red coins, one pile contains all green coins, and one pile contains all blue coins.
Unfortunately, Carl is colorblind, so this task is impossible for him. Luckily, he has a friend who can take a pair of coins and tell Carl if they are the same color or not. Using his friend, Carl believes he can now sort the coins. The order of the piles doesn't matter, as long as all same-colored coins are in one pile, and no two different-colored coins are in the same pile.
His friend will answer questions about multiple pairs of coins in batches, and will answer about all of those pairs in parallel. Each coin should be in at most one pair in each batch. The same coin can appear in different batches.
Carl can use only $7$ batches. Help him find the piles of coins after sorting.
|
Let's define a new question that takes a list of coins $t_1, t_2, \ldots, t_k$, and returns the answers about all adjacent pairs of coins in this list (i.e. the answers to $(t_1, t_2), (t_2, t_3), (t_3, t_4), \ldots$). We can do this in two questions: one question is $(t_1, t_2), (t_3, t_4), \ldots$, and the other is $(t_2,t_3), (t_4,t_5), \ldots$, and we can interleave the results to get the answers for all adjacent pairs in order. We show how to use only three questions of this type (so the total number of questions in the original problem is $6$). First, let's ask about all adjacent pairs in $1, 2, \ldots, n$. This splits the coins into contiguous groups of the same color. Let's take an arbitrary representative from each group and put them in a line, so we now have a problem where all adjacent coins in the line are different colors. Now, we ask two more questions about adjacent pairs: one for coins $1,3,5,\ldots$ and one for coins $2,4,6,\ldots$. This is enough to reconstruct the colors of all the coins. WLOG, coin $1$ is red and coin $2$ is blue. We know coin $3$ cannot be blue (since it must differ from coin $2$), and we compared it with coin $1$, so we know whether it is red or green. We can repeat this for all coins in sequence.
|
[
"interactive"
] | 3,000
|
#include <bits/stdc++.h>
#define gcd(a, b) ((!a || !b) ? (a || b) : __gcd(a, b))
using namespace std;
typedef long long ll;
mt19937 rnd(1);
struct ev
{
int a, b, eq;
};
int col[123456];
vector <ev> ask(vector <pair <int, int> > e)
{
if (e.empty()) return {};
cout << "Q " << e.size() << ' ';
string str = "";
for (auto c : e)
{
cout << c.first + 1 << ' ' << c.second + 1 << ' ';
str += (col[c.first] == col[c.second]) + '0';
}
cout << endl;
string s;
#ifdef ONPC
s = str;
#else
cin >> s;
#endif
vector <ev> q;
for (int i = 0; i < (int) e.size(); i++)
{
q.push_back({e[i].first, e[i].second, s[i] - '0'});
}
return q;
}
void solve()
{
int t = 1;
cin >> t;
while (t--)
{
int n = 10;
cin >> n;
for (int i = 0; i < n; i++)
{
col[i] = rnd() % 3;
}
vector <int> x(n);
for (int i = 0; i < n; i++)
{
x[i] = (1 << 0) + (1 << 1) + (1 << 2);
}
x[0] = (1 << 0);
vector <ev> q;
vector <ev> temp;
auto flex = [&] (vector <int> arr)
{
vector <pair <int, int> > e;
for (int i= 0; i + 1 < (int) arr.size(); i += 2)
{
e.push_back({arr[i], arr[i + 1]});
}
temp = ask(e);
for (auto c : temp) q.push_back(c);
e.clear();
for (int i= 1; i + 1 < (int) arr.size(); i += 2)
{
e.push_back({arr[i], arr[i + 1]});
}
temp = ask(e);
for (auto c : temp) q.push_back(c);
};
vector <int> p(n);
for (int i = 0; i < n; i++) p[i] = i;
flex(p);
vector <int> kek;
for (auto c : q)
{
if (c.eq == 0)
{
kek.push_back(c.a);
}
}
sort(kek.begin(), kek.end());
kek.push_back(n-1);
kek.resize(unique(kek.begin(), kek.end()) - kek.begin());
vector <int> l, r;
for (int i = 0; i < (int) kek.size(); i++)
{
if (i % 2 == 0) l.push_back(kek[i]);
else r.push_back(kek[i]);
}
flex(l);
flex(r);
auto bunt = [&] ()
{
vector <vector <ev> > tupa_flex(n);
queue <int> que;
for (int i = 0; i < n; i++)
{
que.push(i);
}
for (auto c : q)
{
tupa_flex[c.a].push_back(c);
tupa_flex[c.b].push_back(c);
}
while (!que.empty())
{
int i = que.front();
que.pop();
for (auto c : tupa_flex[i])
{
int j = (c.a == i ? c.b : c.a);
if (c.eq)
{
if ((x[i] & x[j]) != x[j])
{
x[j] = (x[i] & x[j]);
que.push(j);
}
}
else
{
if (__builtin_popcount(x[i]) == 1)
{
if ((~x[i] & x[j]) != x[j])
{
x[j] = (~x[i] & x[j]);
que.push(j);
}
}
}
}
}
while (true)
{
auto was = x;
for (auto c : q)
{
if (c.eq)
{
x[c.a] &= x[c.b];
x[c.b] &= x[c.a];
}
else
{
if (__builtin_popcount(x[c.a]) == 1) x[c.b] &= ~x[c.a];
if (__builtin_popcount(x[c.b]) == 1) x[c.a] &= ~x[c.b];
}
}
if (x == was) break;
}
};
bunt();
for (int i = 0; i < n; i++)
{
if (__builtin_popcount(x[i]) > 1)
{
x[i] = (x[i] & -x[i]);
break;
}
}
bunt();
vector <vector <int> > kok(8);
for (int i = 0; i < n; i++) kok[x[i]].push_back(i);
cout << "A " << kok[1 << 0].size() << ' ' << kok[1 << 1].size() << ' ' << kok[1 << 2].size() << endl;
for (int i = 0; i < 3; i++)
{
for (int x : kok[1 << i]) cout << x + 1 << ' ';
cout << endl;
}
}
}
int main()
{
#ifdef ONPC
//freopen("a.in", "r", stdin);
#endif
ios::sync_with_stdio(0);
cin.tie(0);
solve();
}
|
1147
|
F
|
Zigzag Game
|
You are given a complete bipartite graph with $2n$ nodes, with $n$ nodes on each side of the bipartition. Nodes $1$ through $n$ are on one side of the bipartition, and nodes $n+1$ to $2n$ are on the other side. You are also given an $n \times n$ matrix $a$ describing the edge weights. $a_{ij}$ denotes the weight of the edge between nodes $i$ and $j+n$. Each edge has a distinct weight.
Alice and Bob are playing a game on this graph. First Alice chooses to play as either "increasing" or "decreasing" for herself, and Bob gets the other choice. Then she places a token on any node of the graph. Bob then moves the token along any edge incident to that node. They now take turns playing the following game, with Alice going first.
The current player must move the token from the current vertex to some adjacent unvisited vertex. Let $w$ be the weight of the last edge that was traversed. The weight of the edge that is traversed next must be strictly greater than $w$ if the player is playing as "increasing"; otherwise, it must be strictly less. The first player unable to make a move loses.
You are given $n$ and the edge weights of the graph. You can choose to play as either Alice or Bob, and you will play against the judge. You must win all the games for your answer to be judged correct.
|
I first got the idea for this problem from this problem: https://www.codechef.com/problems/HAMILG. There might be an easier version of the problem on a bipartite graph out there somewhere as well. Anyways, the solution for that problem is conceptually simple: find a matching of the graph, and then play according to the matching. You don't even need to remember which vertices you've seen before, since the main idea is that if your opponent can make a move, then you can too. The same idea can be extended to this problem. The intuition is to find some matching showing that Bob can always win. It's not clear how to find such a matching directly, so let's look at what properties the matching needs. Let's call nodes $1$ to $n$ the "left" side of the bipartite graph and nodes $n+1$ to $2n$ the "right" side. Without loss of generality, suppose Alice chose "increasing" and placed her token somewhere on the left side. Let's take any two edges in our matching, one connecting $(w,x)$ ($w$ is on the left, $x$ is on the right), and the other connecting $(y,z)$ ($y$ is on the left, $z$ is on the right). We want it to be the case that if it is legal for Alice to move from $x$ to $y$, then it is legal for Bob to move from $y$ to $z$. This way, Bob can always respond by playing according to the matching. Let's see what this means in terms of edge weights. If Alice can move from $x$ to $y$, that means $edge(x,y) > edge(w,x)$, since the last edge traversed was $(w,x)$ and Alice is increasing. We want $edge(x,y) > edge(w,x)$ to imply $edge(x,y) > edge(y,z)$. Thus, the matching is bad if and only if there are two matching edges $(w,x)$ and $(y,z)$ such that $edge(x,y) > edge(w,x)$ and $edge(x,y) < edge(y,z)$. We can solve this with stable marriage: the bad case is exactly an instability, which happens if $x$ prefers $y$ over $w$ and $y$ prefers $x$ over $z$. 
We can construct the preference list of the nodes as follows. For each node on the left side, create a preference list of nodes on the right in decreasing order of weight. For each node on the right side, create a preference list of nodes on the left in increasing order of weight. We now find any stable marriage in $O(n^2)$ time. It is guaranteed that one exists and has no instabilities. Thus, we've found a matching that guarantees Bob's win. As a followup, can you devise a strategy that guarantees Alice can beat Bob if Bob makes a non-optimal move?
|
[
"games",
"interactive"
] | 3,500
|
#include <bits/stdc++.h>
#define gcd(a, b) ((!a || !b) ? (a || b) : __gcd(a, b))
using namespace std;
typedef long long ll;
mt19937 rnd(1);
bool wPrefersM1OverM(const vector <vector <int> > &prefer, int w, int m, int m1)
{
int n = (int) prefer.size() / 2;
// Check whether w prefers her current engagement m1 over m
for (int i = 0; i < n; i++)
{
// If m1 comes before m in w's list, then w prefers her
// current engagement; don't do anything
if (prefer[w][i] == m1)
return true;
// If m comes before m1 in w's list, then free her current
// engagement and engage her with m
if (prefer[w][i] == m)
return false;
}
// Unreachable in practice: both m and m1 appear in w's list.
// Returning here avoids undefined behavior from falling off the end.
return true;
}
vector <int> marriage(vector <vector <int> > prefer)
{
int n = (int) prefer.size() / 2;
// wPartner stores pairing information: wPartner[i] is the
// partner assigned to woman n+i (women are numbered from n
// to 2*n-1). The value -1 means that woman n+i is free.
vector<int> wPartner(n, -1);
// mFree stores availability of men: if mFree[i] is false,
// man i is free, otherwise he is engaged.
vector<bool> mFree(n);
// Initially all men and women are free.
int freeCount = n;
// While there are free men
while (freeCount > 0)
{
// Pick the first free man (we could pick any)
int m;
for (m = 0; m < n; m++)
if (mFree[m] == false)
break;
// One by one go to all women according to m's preferences.
// Here m is the picked free man
for (int i = 0; i < n && mFree[m] == false; i++)
{
int w = prefer[m][i];
// The woman of preference is free, w and m become
// partners (Note that the partnership maybe changed
// later). So we can say they are engaged not married
if (wPartner[w - n] == -1)
{
wPartner[w - n] = m;
mFree[m] = true;
freeCount--;
} else // If w is not free
{
// Find current engagement of w
int m1 = wPartner[w - n];
// If w prefers m over her current engagement m1,
// then break the engagement between w and m1 and
// engage m with w.
if (wPrefersM1OverM(prefer, w, m, m1) == false)
{
wPartner[w - n] = m;
mFree[m] = true;
mFree[m1] = false;
}
} // End of Else
} // End of the for loop that goes to all women in m's list
} // End of main while loop
return wPartner;
}
void solve()
{
int t;
cin >> t;
while (t--)
{
int n;
cin >> n;
vector <vector <int> > a(n, vector <int> (n));
for (int i = 0; i < n; i++)
{
for (int j = 0; j < n; j++)
{
cin >> a[i][j];
}
}
cout << "B" << endl;
char move;
cin >> move;
int v;
cin >> v;
v--;
int kek = (move == 'D') ^ (v >= n) ^ 1;
vector <vector <int> > prefer;
for (int i = 0; i < n; i++)
{
vector <int> guys;
for (int j = 0; j < n; j++)
{
guys.push_back(n + j);
}
sort(guys.begin(), guys.end(), [&] (int x, int y)
{
if (kek)
{
return a[i][x - n] < a[i][y - n];
}
else
{
return a[i][x - n] > a[i][y - n];
}
});
prefer.push_back(guys);
}
for (int i = n; i < n + n; i++)
{
vector <int> guys;
for (int j = 0; j < n; j++)
{
guys.push_back(j);
}
sort(guys.begin(), guys.end(), [&] (int x, int y)
{
if (!kek)
{
return a[x][i - n] < a[y][i - n];
}
else
{
return a[x][i - n] > a[y][i - n];
}
});
prefer.push_back(guys);
}
/*
shuffle(p.begin(), p.end(), rnd);
bool ch = true;
while (ch)
{
ch = false;
vector<pair<int, int >> e;
for (int i = 0; i < n; i++)
{
for (int j = 0; j < n; j++)
{
if (i != j)
e.push_back({i, j});
}
}
shuffle(e.begin(), e.end(), rnd);
for (auto c : e)
{
int i = c.first, j = c.second;
if ((move == 'D') ^ (v >= n))
{
if (a[i][p[i]] > a[j][p[i]] && a[j][p[i]] > a[j][p[j]])
{
swap(p[i], p[j]);
ch = true;
//break;?
}
} else
{
if (a[i][p[i]] < a[j][p[i]] && a[j][p[i]] < a[j][p[j]])
{
swap(p[i], p[j]);
ch = true;
//break;?
}
}
}
}
*/
vector <int> p = marriage(prefer);
while (true)
{
if (v >= n)
{
cout << p[v - n] + 1 << endl;
}
else
{
for (int i = 0; i < n; i++)
{
if (p[i] == v)
{
cout << n + i + 1 << endl;
break;
}
}
}
cin >> v;
if (v == -1)
{
break;
}
if (v == -2)
{
return;
}
v--;
}
}
}
int main()
{
#ifdef ONPC
//freopen("a.in", "r", stdin);
#endif
ios::sync_with_stdio(0);
cin.tie(0);
solve();
}
|
1148
|
A
|
Another One Bites The Dust
|
Let's call a string good if and only if it consists of only two types of letters — 'a' and 'b' — and every two consecutive letters are distinct. For example, "baba" and "aba" are good strings, while "abb" is a bad string.
You have $a$ strings "a", $b$ strings "b" and $c$ strings "ab". You want to choose some subset of these strings and concatenate them in any order.
What is the length of the longest good string you can obtain this way?
|
The answer is $2 \cdot c + \min(a, b) + \min(\max(a, b), \min(a, b) + 1)$. First, you can place all "ab" strings together. Then, if there are more "a" strings than "b" strings, you can start appending "a" and "b" strings one by one at the end in alternating order. If there are more "b" strings than "a" strings, you can do the same thing but add them to the beginning of the string.
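The formula can be sanity-checked with a tiny function (a sketch; the function name `longest_good` is mine):

```python
def longest_good(a, b, c):
    # 2c letters come from the "ab" strings, then we alternate the
    # single-letter strings: min(a, b) of the rarer letter, and at
    # most min(a, b) + 1 of the more common one.
    return 2 * c + min(a, b) + min(max(a, b), min(a, b) + 1)
```

For instance, with $a = b = c = 1$ the longest good string is "abab" of length $4$.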
|
[
"greedy"
] | 800
| null |
1148
|
B
|
Born This Way
|
Arkady bought an air ticket from a city A to a city C. Unfortunately, there are no direct flights, but there are a lot of flights from A to a city B, and from B to C.
There are $n$ flights from A to B, they depart at time moments $a_1$, $a_2$, $a_3$, ..., $a_n$ and arrive at B $t_a$ moments later.
There are $m$ flights from B to C, they depart at time moments $b_1$, $b_2$, $b_3$, ..., $b_m$ and arrive at C $t_b$ moments later.
The connection time is negligible, so one can use the $i$-th flight from A to B and the $j$-th flight from B to C if and only if $b_j \ge a_i + t_a$.
You can cancel at most $k$ flights. If you cancel a flight, Arkady can not use it.
Arkady wants to be in C as early as possible, while you want him to be in C as late as possible. Find the earliest time Arkady can arrive at C, if you optimally cancel $k$ flights. If you can cancel $k$ or less flights in such a way that it is not possible to reach C at all, print $-1$.
|
If $k \geq n$, we can cancel all flights between A and B, and Arkady won't be able to get to C, so the answer is $-1$. Otherwise, $k < n$. Once the subset of canceled flights is chosen, Arkady's strategy is clear: use the earliest available flight from A to B, arrive at B at some time moment $t$, and take the earliest non-canceled flight from B to C with departure time greater than or equal to $t$. Suppose we have chosen $x$, the number of canceled flights between A and B. No matter what subset of flights is canceled between B and C, we should cancel the first $x$ flights from A to B. Now we know the exact moment of Arkady's arrival at B: it is equal to $a_{x+1} + t_a$. Our optimal strategy is then to cancel the first $k-x$ flights between B and C that Arkady can use. If there are no flights remaining, the answer is $-1$. Otherwise, Arkady will be in C at time moment $b_j + t_b$, where $j$ is the index of the earliest flight Arkady can use; $j$ is equal to $pos + (k - x)$, where $pos$ is the index of the first flight from B to C whose departure time is greater than or equal to $a_{x+1} + t_a$. Now we can just iterate through all possible $x$ and take the largest resulting time moment. Since the array $b$ is sorted, you can find $pos$ using binary search; the complexity of this solution is $O(n \log n)$. The array of values $a_{x + 1} + t_a$ is also sorted, so you can use the two-pointers method instead, which gives $O(n)$ complexity.
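A Python sketch of the binary-search variant (function and variable names are mine; `a` and `b` are assumed to be sorted, as in the input):

```python
import bisect

def earliest_arrival(n, m, ta, tb, k, a, b):
    if k >= n:
        return -1  # cancel every flight from A to B
    best = -1
    for x in range(k + 1):          # cancel the first x flights A -> B
        arrive_b = a[x] + ta        # Arkady lands in B at this moment
        pos = bisect.bisect_left(b, arrive_b)  # first usable flight B -> C
        j = pos + (k - x)           # then cancel the next k - x usable flights
        if j >= m:
            return -1               # Arkady cannot reach C at all
        best = max(best, b[j] + tb)
    return best
```

For example, with $n=4$, $m=5$, $t_a=t_b=1$, $k=2$, $a=[1,3,5,7]$, $b=[1,2,3,9,10]$ it returns $11$.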
|
[
"binary search",
"brute force",
"two pointers"
] | 1,600
| null |
1148
|
C
|
Crazy Diamond
|
You are given a permutation $p$ of integers from $1$ to $n$, where $n$ is an even number.
Your goal is to sort the permutation. To do so, you can perform zero or more operations of the following type:
- take two indices $i$ and $j$ such that $2 \cdot |i - j| \geq n$ and swap $p_i$ and $p_j$.
There is \textbf{no need to minimize} the number of operations, however you should use no more than $5 \cdot n$ operations. One can show that it is always possible to do that.
|
The problem can be solved in at most $4n$ swaps in the worst case (there might be a more optimal solution, but it wasn't required). Let's go from left to right through the array. Suppose our current position is $i$ and the position of the number $i$ is $j$. If $i \ne j$, we want to swap them. If $|i - j| \cdot 2 \geq n$, you can just swap(i, j). If $\frac{n}{2} \leq i - 1$, then you can do swap(i, 1), swap(1, j), swap(i, 1). If $\frac{n}{2} \leq n - j$, then you can do swap(i, n), swap(j, n), swap(i, n). In the last case, $\frac{n}{2} \leq j - 1$ and $\frac{n}{2} \leq n - i$, so you do $5$ swaps: swap(i, n), swap(n, 1), swap(1, j), swap(1, n), swap(i, n). One can see that the last case happens at most $\frac{n}{2}$ times, since when $i \geq \frac{n}{2} + 1$ you only need $3$ swaps or fewer. So in total it's no more than $5 \cdot \frac{n}{2} + 3 \cdot \frac{n}{2} = 4 n$ swaps.
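A direct transcription of this case analysis (a sketch; names are mine). It relies on the fact that positions before $i$ are already sorted, so $j > i$ whenever a swap is needed, which makes every emitted swap legal:

```python
def sort_permutation(p):
    # Sort p in place using only swaps with 2*|i - j| >= n (1-based);
    # returns the list of swaps performed.
    n = len(p)
    pos = [0] * (n + 1)
    for idx, v in enumerate(p, 1):
        pos[v] = idx
    swaps = []

    def do_swap(i, j):
        swaps.append((i, j))
        p[i - 1], p[j - 1] = p[j - 1], p[i - 1]
        pos[p[i - 1]], pos[p[j - 1]] = i, j

    for i in range(1, n + 1):
        j = pos[i]  # current position of value i; j > i if not placed yet
        if i == j:
            continue
        if 2 * abs(i - j) >= n:
            do_swap(i, j)
        elif 2 * (i - 1) >= n:
            do_swap(1, i); do_swap(1, j); do_swap(1, i)
        elif 2 * (n - j) >= n:
            do_swap(i, n); do_swap(j, n); do_swap(i, n)
        else:  # here n/2 <= j - 1 and n/2 <= n - i
            do_swap(i, n); do_swap(1, n); do_swap(1, j)
            do_swap(1, n); do_swap(i, n)
    return swaps
```

Each emitted swap satisfies $2 \cdot |i - j| \geq n$, and the total number of swaps stays within $5n$.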
|
[
"constructive algorithms",
"sortings"
] | 1,700
| null |
1148
|
D
|
Dirty Deeds Done Dirt Cheap
|
You are given $n$ pairs of integers $(a_1, b_1), (a_2, b_2), \ldots, (a_n, b_n)$. All of the integers in the pairs are distinct and are in the range from $1$ to $2 \cdot n$ inclusive.
Let's call a sequence of integers $x_1, x_2, \ldots, x_{2k}$ good if either
- $x_1 < x_2 > x_3 < \ldots < x_{2k-2} > x_{2k-1} < x_{2k}$, or
- $x_1 > x_2 < x_3 > \ldots > x_{2k-2} < x_{2k-1} > x_{2k}$.
You need to choose a subset of distinct indices $i_1, i_2, \ldots, i_t$ and \textbf{their order} in a way that if you write down all numbers from the pairs in a single sequence (the sequence would be $a_{i_1}, b_{i_1}, a_{i_2}, b_{i_2}, \ldots, a_{i_t}, b_{i_t}$), this sequence is good.
What is the largest subset of indices you can choose? You also need to construct the corresponding index sequence $i_1, i_2, \ldots, i_t$.
|
First, notice that there are two types of pairs: ones with $a_i < b_i$ and ones with $a_i > b_i$. Note that we can't form an answer using pairs of different types. Now let's notice that we can take all pairs of a fixed type. Suppose the type is $a_i < b_i$: sort these pairs by $a_i$ in decreasing order, and that is the answer. For the type $a_i > b_i$, sort them by $b_i$ in increasing order. Then select the longer of the two answers.
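This can be sketched in a few lines (names are mine; the function returns the chosen 1-based indices in order):

```python
def best_subset(pairs):
    # Split pairs by type, sort each type as the editorial says,
    # and keep the larger group.
    inc = [i for i, (a, b) in enumerate(pairs, 1) if a < b]
    dec = [i for i, (a, b) in enumerate(pairs, 1) if a > b]
    inc.sort(key=lambda i: -pairs[i - 1][0])  # a < b: by a, decreasing
    dec.sort(key=lambda i: pairs[i - 1][1])   # a > b: by b, increasing
    return inc if len(inc) >= len(dec) else dec
```

For pairs of type $a_i < b_i$ sorted by decreasing $a_i$, each $b$ exceeds its own $a$, which in turn exceeds the next pair's $a$, so the written-out sequence alternates as required.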
|
[
"greedy",
"sortings"
] | 1,800
| null |
1148
|
E
|
Earth Wind and Fire
|
There are $n$ stones arranged on an axis. Initially the $i$-th stone is located at the coordinate $s_i$. There may be more than one stone in a single place.
You can perform zero or more operations of the following type:
- take two stones with indices $i$ and $j$ so that $s_i \leq s_j$, choose an integer $d$ ($0 \leq 2 \cdot d \leq s_j - s_i$), and replace the coordinate $s_i$ with $(s_i + d)$ and replace coordinate $s_j$ with $(s_j - d)$. In other words, draw stones closer to each other.
You want to move the stones so that they are located at positions $t_1, t_2, \ldots, t_n$. The order of the stones is not important — you just want for the multiset of the stones resulting positions to be the same as the multiset of $t_1, t_2, \ldots, t_n$.
Detect whether it is possible to move the stones this way, and if yes, construct a way to do so. You don't need to minimize the number of moves.
|
Consider the original and target positions of the stones in sorted order. Note that we can always construct an answer preserving the original order of the stones (if an answer doesn't preserve the order, then at some moment two stones occupy the same spot, and we can simply swap their roles). This simplifies the problem: we want to move stone $s_i$ into $t_i$. We can now compute $\delta_i = t_i - s_i$, the relative movement of each stone. To begin with, observe that if $\sum \delta_i \ne 0$ there is no solution, since the operations we can use preserve the sum of the coordinates. However, this is not enough: for example, if $\delta_1 < 0$ (you need to move the leftmost stone to the left), there is clearly no answer. The real condition is that the elements $\delta_i$ should form a "balanced bracket sequence", that is, the sum of every prefix of $\delta$ must be $\ge 0$. In this and only this case we can match the corresponding right-movements with left-movements. To construct the exact answer, we go from left to right maintaining a stack of not-yet-closed "move-to-the-right" actions; formally, we put pairs (stone_index, steps_to_move) on the stack. When we see a stone with $\delta_i > 0$, we put it on the stack, and when we see a stone with $\delta_i < 0$, we match it with the stones on the stack, removing elements from the stack once they have moved the full required amount.
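The stack construction can be sketched as follows (a sketch over sorted positions; names are mine). An operation triple $(i, j, d)$ means stone $i$ moves right and stone $j$ moves left by $d$, indices in sorted order; the function returns `None` when no solution exists:

```python
def move_stones(s, t):
    # Work on sorted positions; stone i (sorted order) must move by delta[i].
    s, t = sorted(s), sorted(t)
    delta = [ti - si for si, ti in zip(s, t)]
    if sum(delta) != 0:
        return None  # operations preserve the coordinate sum
    ops = []
    stack = []  # pending [index, remaining right-steps]
    for i, d in enumerate(delta):
        if d > 0:
            stack.append([i, d])
        while d < 0:
            if not stack:
                return None  # a prefix sum of delta went negative
            j, steps = stack[-1]
            take = min(steps, -d)
            ops.append((j, i, take))  # stone j moves right, stone i left
            d += take
            stack[-1][1] -= take
            if stack[-1][1] == 0:
                stack.pop()
    return ops
```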
|
[
"constructive algorithms",
"greedy",
"math",
"sortings",
"two pointers"
] | 2,300
| null |
1148
|
F
|
Foo Fighters
|
You are given $n$ objects. Each object has two integer properties: $val_i$ — its price — and $mask_i$. It is guaranteed that the sum of all prices is initially non-zero.
You want to select a positive integer $s$. All objects will be modified after that. The $i$-th object will be modified using the following procedure:
- Consider $mask_i$ and $s$ in binary notation,
- Compute the bitwise AND of $s$ and $mask_i$ ($s \,\&\, mask_i$),
- If ($s \,\&\, mask_i$) contains an odd number of ones, replace the $val_i$ with $-val_i$. Otherwise do nothing with the $i$-th object.
You need to find such an integer $s$ that when the modification above is done the sum of all prices changes sign (if it was negative, it should become positive, and vice-versa; it is not allowed for it to become zero). The absolute value of the sum can be arbitrary.
|
Let's solve this problem by induction: solve it for all objects with masks less than $2^{K}$. Base case: $K = 1$; we can multiply the prices by $-1$ (i.e., add the single bit to the answer) if their sum has the same sign as the initial sum. Inductive step: we have solved the problem for all masks less than $2^{K - 1}$. Now look at all masks less than $2^{K}$ and not less than $2^{K - 1}$. If the sum of their prices has the same sign as the initial sum, then we add bit $K - 1$ to the answer and multiply the prices of all objects with this bit in the mask by $-1$. It's important to notice that when we choose whether we need this bit, we look only at objects where this bit is the highest one, but once we add it, we need to recalculate the prices of all objects with this bit in the mask.
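A compact sketch of this induction (names are mine; masks up to $2^{62}$ assumed). Objects are grouped by the highest set bit of their mask; once a bit is decided, its group can no longer change, since later flips only touch higher bits:

```python
def find_s(objects):
    # objects: list of (val, mask), mask >= 1; returns s > 0 that
    # flips the sign of the total sum per the bit-by-bit induction.
    vals = [v for v, _ in objects]
    masks = [m for _, m in objects]
    positive = sum(vals) > 0  # sign of the initial sum
    s = 0
    for bit in range(62):
        # current sum over objects whose highest set bit is `bit`
        part = sum(v for v, m in zip(vals, masks)
                   if m.bit_length() - 1 == bit)
        if part != 0 and (part > 0) == positive:
            s |= 1 << bit
            # flip every object that has this bit in its mask
            vals = [-v if m >> bit & 1 else v for v, m in zip(vals, masks)]
    return s
```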
|
[
"bitmasks",
"constructive algorithms"
] | 2,700
| null |
1148
|
G
|
Gold Experience
|
Consider an undirected graph $G$ with $n$ vertices. There is a value $a_i$ in each vertex.
Two vertices $i$ and $j$ are connected with an edge if and only if $gcd(a_i, a_j) > 1$, where $gcd(x, y)$ denotes the greatest common divisor (GCD) of integers $x$ and $y$.
Consider a set of vertices. Let's call a vertex in this set fair if it is connected with an edge with all other vertices in this set.
You need to find a set of $k$ vertices (where $k$ is a given integer, $2 \cdot k \le n$) where all vertices are fair or all vertices are not fair. One can show that such a set always exists.
|
Firstly, let's discuss how to find the answer in $O(N^2)$ time. Assume $k$ is even. Pick any set of $k$ vertices. If it is a clique, just print it as the answer; if it is not, there must be two vertices in this set that don't share an edge. Remove these two vertices and replace them with two arbitrary ones from the rest of the vertices. Notice that if we remove $\frac{k}{2}$ such pairs, then the removed vertices form a set where all vertices are not fair. If $k$ is odd, the above algorithm has a problem, because we remove two vertices in one step. To fix it, we can find a triplet of vertices $a$, $b$, $c$ such that there is no edge between $a$ and $b$, and no edge between $b$ and $c$. With this triplet we can get a set of odd size. If there is no such triplet, our graph has a quite simple structure which we can handle separately. Now to the $O(N \log N \cdot 2^8)$ solution. Assume we have a set of integers $a_1, a_2, \ldots, a_n$ and an integer $x$, and we want to find how many $a_i$ have $gcd(a_i, x) \neq 1$. This can be done using the inclusion-exclusion principle. Let $p_1, p_2, \ldots, p_k$ be the distinct prime divisors of $x$. At first we count how many $a_i$ are divisible by each of $p_1, p_2, \ldots, p_k$, but then the $a_i$ divisible by both $p_1$ and $p_2$ are counted twice, so we subtract the count of $a_i$ divisible by $p_1 \cdot p_2$, and so on. We can maintain the set $a_1, a_2, \ldots, a_n$ and answer queries dynamically in $O(2^{NumberOfPrimeDivisors})$ time. This is the only place where we use the graph structure defined by: an edge between two vertices $i$ and $j$ exists if and only if $gcd(a_i, a_j) \neq 1$. As in the $O(N^2)$ solution, let's find a triplet of vertices $a$, $b$, $c$ such that there is no edge between $a$ and $b$, and no edge between $b$ and $c$, and remove this triplet from the graph. Let $b$ be the number of vertices that are connected to all other $n - 3$ vertices. 
If $b \geq k$, we can print any $k$ of them as a clique. Otherwise, let's remove all of these $b$ vertices. Notice that the remaining vertices plus the removed triplet form an antifair set. Let's estimate the size of this set: $S = (n - 3) - b + 3 = n - b$. Recall that $b < k$ and $2 \cdot k \leq n$, so we have $S = n - b > n - k \geq k$. So now we get an antifair set of size $k$ or greater. Let's show that we can always construct an antifair set of size exactly $k$. In the previous paragraph we estimated the maximum size of an antifair set on vertices $1, 2, \ldots, n - 3$. Let $f(R)$ be the maximum size of an antifair set on vertices with indices $1, 2, \ldots, R$. Using binary search, find the lowest integer $c$ such that $f(c - 1) + 3 < k$ and $f(c) + 3 \geq k$. Look at the vertex $c$: the size of the maximal antifair set without it is $f(c - 1)$, and with it is $f(c)$. So among vertices $1, 2, \ldots, c - 1$ there are exactly $f(c) - f(c - 1) - 1$ vertices connected to all vertices from $1$ to $c - 1$ and not connected to vertex $c$. It is easy to see that we can delete any $f(c) - f(c - 1) - 2$ of them or fewer and the remaining set of vertices will still be antifair. With this observation we can achieve either a set of size $k$ or a set of size $k + 1$. In the first case, we have found the answer. In the second case, remember that we have the reserved triplet and can delete one of its vertices to make the total size $k$.
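The inclusion-exclusion counting step can be sketched as follows (a static version over a plain list; the real solution maintains divisor counts so values can be added and removed dynamically):

```python
def count_not_coprime(values, x):
    # Count v in values with gcd(v, x) > 1 via inclusion-exclusion
    # over the distinct prime divisors of x.
    primes, d, y = [], 2, x
    while d * d <= y:
        if y % d == 0:
            primes.append(d)
            while y % d == 0:
                y //= d
        d += 1
    if y > 1:
        primes.append(y)
    total = 0
    for sub in range(1, 1 << len(primes)):  # nonempty subsets of primes
        prod, bits = 1, 0
        for b, p in enumerate(primes):
            if sub >> b & 1:
                prod *= p
                bits += 1
        cnt = sum(1 for v in values if v % prod == 0)
        total += cnt if bits % 2 else -cnt  # odd subsets add, even subtract
    return total
```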
|
[
"constructive algorithms",
"graphs",
"math",
"number theory",
"probabilities"
] | 3,300
| null |
1148
|
H
|
Holy Diver
|
You are given an array which is initially empty. You need to perform $n$ operations of the given format:
- "$a$ $l$ $r$ $k$": append $a$ to the end of the array. After that count the number of integer pairs $x, y$ such that $l \leq x \leq y \leq r$ and $\operatorname{mex}(a_{x}, a_{x+1}, \ldots, a_{y}) = k$.
The elements of the array are numerated from $1$ in the order they are added to the array.
To make this problem more tricky we don't say your real parameters of the queries. Instead your are given $a'$, $l'$, $r'$, $k'$. To get $a$, $l$, $r$, $k$ on the $i$-th operation you need to perform the following:
- $a := (a' + lans) \bmod(n + 1)$,
- $l := (l' + lans) \bmod{i} + 1$,
- $r := (r' + lans) \bmod{i} + 1$,
- if $l > r$ swap $l$ and $r$,
- $k := (k' + lans) \bmod(n + 1)$,
where $lans$ is the answer to the previous operation, initially $lans$ is equal to zero. $i$ is the id of the operation, operations are numbered from $1$.The $\operatorname{mex}(S)$, where $S$ is a multiset of non-negative integers, is the smallest non-negative integer which does not appear in the set. For example, $\operatorname{mex}(\{2, 2, 3\}) = 0$ and $\operatorname{mex} (\{0, 1, 4, 1, 6\}) = 2$.
|
First, let's try to solve the problem assuming we know the whole array initially. Let's move from the last element to the first. When we are at element $i$, we store in a set all segments of right borders $[r', r'']$ such that the segment $[i, r]$ has the same mex value for all $r' \le r \le r''$. The mex values of those segments increase from left to right, so you can recalculate the segments efficiently when you add another element from the left. How? Suppose we added element $k$ on the left. Then we must remove the segment with mex $k$ and replace it with (possibly many) segments of greater mex; the last of those segments might get merged with the segment following the removed one. It's easy to see that we add/remove only $O(n)$ segments in total, since each step removes exactly one segment (though it may add quite a lot). Informally, we simply can't add too many segments over all steps, since all mex values are distinct at every moment. After that, for each segment of right borders we can compute when it was added to the set and when it was deleted. Then we can view it as a rectangle, where the $x$ coordinates correspond to left borders and the $y$ coordinates correspond to right borders. The problem is now reduced to: add a value on a rectangle, and compute the sum of values over a query rectangle. Next, notice that those rectangles lie in separate stripes of the plane: no two different rectangles (for a fixed mex) share an $x$ coordinate. That's why, to count the total area lying inside a query rectangle, you can keep a segment tree with the operations: add a number on a segment, and compute the sum of numbers on a segment. This works because there is at most one rectangle intersecting your query without lying completely inside it. So when you meet a rectangle during your scan line, you just add its width to the proper segment. 
Notice that there are actually $O(n)$ scan lines, because you need to keep one for every mex value. For a query, you need the state of your structure after some prefix of the rectangles has been processed. Those rectangles lie completely inside your query, so you just take the sum on a segment in your structure. There is also at most one rectangle that can partially intersect your query, and you need to process it by hand. One more case: as you process the queries one by one, your set may contain current segments of mex that were not deleted yet, so you need to take care of them too, because they can contain a segment with the proper mex value for your query.
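For intuition, here is a small brute-force Python sketch (function names are mine, not from the editorial) of the segment structure described above: for a fixed left border, the mex of $[i, r]$ is non-decreasing in $r$, so the right borders split into maximal segments of constant mex.

```python
def mex(values):
    """Smallest non-negative integer absent from the multiset."""
    present = set(values)
    m = 0
    while m in present:
        m += 1
    return m

def mex_segments(a, left):
    """Maximal segments [r', r''] of right borders on which
    mex(a[left..r]) is constant (brute force, for illustration only)."""
    segments = []
    for r in range(left, len(a)):
        m = mex(a[left:r + 1])
        if segments and segments[-1][2] == m:
            segments[-1][1] = r  # extend the current constant-mex segment
        else:
            segments.append([r, r, m])
    return [tuple(seg) for seg in segments]

print(mex_segments([0, 2, 1, 0], 0))  # -> [(0, 1, 1), (2, 3, 3)]
```

The mex values of the returned segments are strictly increasing from left to right, matching the invariant the editorial relies on.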
|
[
"data structures"
] | 3,500
| null |
1149
|
A
|
Prefix Sum Primes
|
We're giving away nice huge bags containing number tiles! A bag we want to present to you contains $n$ tiles. Each of them has a single number written on it — either $1$ or $2$.
However, there is one condition you must fulfill in order to receive the prize. You will need to put all the tiles from the bag in a sequence, in any order you wish. We will then compute the sums of all prefixes in the sequence, and then count how many of these sums are prime numbers. If you want to keep the prize, you will need to maximize the number of primes you get.
Can you win the prize? Hurry up, the bags are waiting!
|
There are at least a couple of correct solutions I had in mind. Let me present the one I find the most straightforward, which doesn't even require implementing any sieve. If all the numbers on the tiles are equal, we have no choice but to output the only possible permutation. In the remaining cases, we'll show that the following solution is optimal: Start with $2$ and $1$. Use all remaining $2$s. Finish with all remaining $1$s.
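As a sanity check, the claimed construction can be compared against brute force over all orderings for small bag sizes; a Python sketch (helper names are mine):

```python
from itertools import permutations

def is_prime(x):
    if x < 2:
        return False
    d = 2
    while d * d <= x:
        if x % d == 0:
            return False
        d += 1
    return True

def prime_count(seq):
    """Number of prefix sums of seq that are prime."""
    total, count = 0, 0
    for v in seq:
        total += v
        count += is_prime(total)
    return count

def construct(ones, twos):
    """The arrangement claimed optimal by the editorial."""
    if ones == 0 or twos == 0:
        return [1] * ones + [2] * twos
    return [2, 1] + [2] * (twos - 1) + [1] * (ones - 1)

def brute_best(ones, twos):
    tiles = [1] * ones + [2] * twos
    return max(prime_count(p) for p in set(permutations(tiles)))

# the construction matches the brute-force optimum for all small inputs
for ones in range(5):
    for twos in range(5):
        if ones + twos > 0:
            assert prime_count(construct(ones, twos)) == brute_best(ones, twos)
```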
|
[
"constructive algorithms",
"greedy",
"math",
"number theory"
] | 1,200
|
N = int(input())
num_ones = input().count('1')
num_twos = N - num_ones
if not num_ones or not num_twos:
solution = [1] * num_ones + [2] * num_twos
else:
solution = [2, 1] + [2] * (num_twos - 1) + [1] * (num_ones - 1)
print(*solution)
|
1149
|
B
|
Three Religions
|
During the archaeological research in the Middle East you found the traces of three ancient religions: First religion, Second religion and Third religion. You compiled the information on the evolution of each of these beliefs, and you now wonder if the followers of each religion could coexist in peace.
The \underline{Word of Universe} is a long word containing the lowercase English characters only. At each moment of time, each of the religion beliefs could be described by a word consisting of lowercase English characters.
The three religions can coexist in peace if their descriptions form disjoint subsequences of the \underline{Word of Universe}. More formally, one can paint some of the characters of the \underline{Word of Universe} in three colors: $1$, $2$, $3$, so that each character is painted in \underline{at most} one color, and the description of the $i$-th religion can be constructed from the \underline{Word of Universe} by removing all characters that aren't painted in color $i$.
The religions however evolve. In the beginning, each religion description is empty. Every once in a while, either a character is appended to the end of the description of a single religion, or the last character is dropped from the description. After each change, determine if the religions could coexist in peace.
|
For our convenience, construct a two-dimensional helper array $N$ where $N(i, c)$ is the location of the first occurrence of character $c$ in the Word of Universe at position $i$ or later (or $\infty$ if no such character exists). This array can be created straightforwardly in $O(26 \cdot n)$ time, by iterating the word from the end to the beginning. Example. Consider the Word of Universe equal to abccaab. Then, for instance, $N(1, \texttt{a}) = 4$ and $N(4, \texttt{c}) = \infty$. Actually, for our purposes it's easier to set $\infty = n$ (in our example, $n=7$) and also consider additional links from indices $n$ and $n+1$ to index $\infty = n$. Why? We'll later need the index of the first occurrence of some character after some location $p$. If this $p$ already happens to be $\infty$ ($n$, as we already established), we can easily see that no requested occurrence exists. Let's now try to answer each query in $O(250^3)$ time. We can do it using dynamic programming: let $D(n_1, n_2, n_3)$ be the length of the shortest prefix of the Word of Universe that contains disjoint occurrences of the prefix of length $n_1$ of the first religion's description, the prefix of length $n_2$ of the second religion's description, and the prefix of length $n_3$ of the third religion's description. Each state can be evaluated in constant time by checking, for each religion $i \in \{1, 2, 3\}$, what the prefix length would be if the last character of the prefix is a part of the $i$-th religion's description. (We use the helper array $N$ to speed up the search.) How to write the state transitions? For each $i \in \{1, 2, 3\}$: chop the last character of the $i$-th description's prefix, find the shortest prefix of the Word of Universe containing all three descriptions, and then reappend this last character.
We can do that using our helper array: $D(n_1, n_2, n_3) = \min \begin{cases} N(D(n_1 - 1, n_2, n_3) + 1, \text{description}_1[n_1]) & \text{if } n_1 \geq 1, \\ N(D(n_1, n_2 - 1, n_3) + 1, \text{description}_2[n_2]) & \text{if } n_2 \geq 1, \\ N(D(n_1, n_2, n_3 - 1) + 1, \text{description}_3[n_3]) & \text{if } n_3 \geq 1. \end{cases}$ Now, if the lengths of the descriptions are $\ell_1$, $\ell_2$, and $\ell_3$, respectively, then the embedding of these descriptions as disjoint subsequences exists if and only if $D(\ell_1, \ell_2, \ell_3) < \infty = n$. However, due to the nature of queries, we can do a single update in $O(250^2)$ time: if we drop a character, we don't need to recompute any states; if we add a character to the $i$-th description, we only need to recompute the states with $n_i$ equal to the length of the description, and there are at most $251^2$ of them! This allows us to solve the problem in $O(q \cdot 250^2)$ time.
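The helper array can be built as follows; a Python sketch of the same idea used by the C++ solution below (function name is mine):

```python
def build_next_occurrence(word, alphabet_size=26):
    """next_occur[i][c] = smallest j >= i with word[j] == chr(c + 'a'),
    or n if no such position exists. Rows n and n+1 act as the
    'infinity' sentinels used by the DP."""
    n = len(word)
    inf = n
    next_occur = [[inf] * alphabet_size for _ in range(n + 2)]
    for i in range(n - 1, -1, -1):
        for c in range(alphabet_size):
            next_occur[i][c] = next_occur[i + 1][c]
        next_occur[i][ord(word[i]) - ord('a')] = i
    return next_occur

nxt = build_next_occurrence("abccaab")
a, c = ord('a') - ord('a'), ord('c') - ord('a')
print(nxt[1][a], nxt[4][c])  # -> 4 7  (no 'c' at position 4 or later, so inf = n = 7)
```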
|
[
"dp",
"implementation",
"strings"
] | 2,200
|
#include <bits/stdc++.h>
using namespace std;
const int AlphaSize = 26;
const int MaxN = 1e5 + 100;
const int MaxM = 256;
int next_occur[MaxN][AlphaSize];
int dp[MaxM][MaxM][MaxM];
string pattern;
string words[3];
int N, Q;
void CreateLinks() {
for (int i = 0; i < AlphaSize; ++i) {
next_occur[N][i] = next_occur[N + 1][i] = N;
}
for (int pos = N - 1; pos >= 0; --pos) {
for (int i = 0; i < AlphaSize; ++i) {
next_occur[pos][i] = (pattern[pos] == 'a' + i ? pos : next_occur[pos + 1][i]);
}
}
}
void RecomputeDp(int a, int b, int c) {
int &val = dp[a][b][c];
val = N;
if (a) { val = min(val, next_occur[dp[a-1][b][c] + 1][words[0][a-1] - 'a']); }
if (b) { val = min(val, next_occur[dp[a][b-1][c] + 1][words[1][b-1] - 'a']); }
if (c) { val = min(val, next_occur[dp[a][b][c-1] + 1][words[2][c-1] - 'a']); }
}
int main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
cin >> N >> Q >> pattern;
CreateLinks();
dp[0][0][0] = -1;
for (int i = 0; i < Q; ++i) {
char type;
int word_id;
cin >> type >> word_id;
--word_id;
if (type == '+') {
char ch;
cin >> ch;
words[word_id] += ch;
int max0 = words[0].size(), max1 = words[1].size(), max2 = words[2].size();
int min0 = word_id == 0 ? max0 : 0;
int min1 = word_id == 1 ? max1 : 0;
int min2 = word_id == 2 ? max2 : 0;
for (int a = min0; a <= max0; ++a) {
for (int b = min1; b <= max1; ++b) {
for (int c = min2; c <= max2; ++c) {
RecomputeDp(a, b, c);
}
}
}
} else {
words[word_id].pop_back();
}
bool answer = dp[words[0].size()][words[1].size()][words[2].size()] < N;
cout << (answer ? "YES\n" : "NO\n");
}
}
|
1149
|
C
|
Tree Generator™
|
Owl Pacino has always been into trees — unweighted rooted trees in particular. He loves determining the diameter of every tree he sees — that is, the maximum length of any simple path in the tree.
Owl Pacino's owl friends decided to present him the Tree Generator™ — a powerful machine creating rooted trees from their descriptions. An $n$-vertex rooted tree can be described by a bracket sequence of length $2(n - 1)$ in the following way: find any walk starting and finishing in the root that traverses each edge exactly twice — once down the tree, and later up the tree. Then follow the path and write down "(" (an opening parenthesis) if an edge is followed down the tree, and ")" (a closing parenthesis) otherwise.
The following figure shows sample rooted trees and their descriptions:
Owl wrote down the description of an $n$-vertex rooted tree. Then, he rewrote the description $q$ times. However, each time he wrote a new description, he picked two different characters in the description he wrote the last time, swapped them and wrote down the resulting string. He always made sure that each written string was the description of a rooted tree.
Pacino then used Tree Generator™ for each description he wrote down. What is the diameter of each constructed tree?
|
Take any rooted tree and its description. For any vertices $u$, $v$, let $h(v)$ be the depth of $v$ in the tree, and $d(u, v) = h(u) + h(v) - 2h(\mathrm{lca}(u, v))$ be the distance between $u$ and $v$. Consider the traversal of the tree represented by the description. Let's say we're processing the parentheses one by one each second, and let $\mu(t)$ be the current depth after $t$ seconds. Moreover, let $t_u$ and $t_v$ be the moments of time when we're in vertices $u, v$, respectively (there might be multiple such moments; pick any). Therefore, $h(u) = \mu(t_u)$ and $h(v) = \mu(t_v)$. Assume without loss of generality that $t_u < t_v$ and consider the part of the description between the $t_u$-th and $t_v$-th second. What is the shallowest vertex we visit during such a traversal? As the description represents a depth-first search of the tree, it must be $\mathrm{lca}(u, v)$. Therefore, $h(\mathrm{lca}(u, v)) = \min_{t_l \in [t_u, t_v]} \mu(t_l)$. It follows that $d(u, v) = \max_{t_l \in [t_u, t_v]} \left(\mu(t_u) + \mu(t_v) - 2\mu(t_l)\right)$. Eventually, the diameter is equal to $\max_{u, v} d(u, v) = \max_{t_u \leq t_l \leq t_v} \left(\mu(t_u) - 2\mu(t_l) + \mu(t_v)\right)$. This leads to a slow $O(n)$ solution for computing a single diameter without constructing the tree: consider the parentheses one by one, and maintain the current depth and the maximum values of $\mu(a)$, $\mu(a) - 2\mu(b)$ and $\mu(a) - 2\mu(b) + \mu(c)$ for $a \leq b \leq c$ on the prefix. However, we still need to be able to process the updates quicker than in linear time. It turns out we can maintain a segment tree. Each node will maintain some information about a substring of the description. Note that such a substring doesn't have to describe a whole tree, so the numbers of opening and closing parentheses don't have to match, and it can happen that we're at a negative depth when following the description. We'll hold the following information about the substring.
We assume everywhere that $a \leq b \leq c$: $\delta$ (the final depth, might be non-zero), $\max \mu(a)$, $\max (-2\mu(b))$, $\max (\mu(a) - 2\mu(b))$, $\max(-2\mu(b) + \mu(c))$, $\max (\mu(a) - 2\mu(b) + \mu(c))$. It's now pretty straightforward to combine the information about two neighboring substrings into the information about their concatenation. Note for example that $\delta = \delta_L + \delta_R$ and $\max \mu(a) = \max\left\{ \max_L \mu(a),\ \max_R \mu(a) + \delta_L\right\}$. This allows us to maintain the segment tree over the description and process a single character replacement in $O(\log n)$ time. Therefore, we can process each query in $O(\log n)$ time, and solve the whole task in $O(n + q \log n)$ time. Notably, square-root decomposition isn't much slower in this task; $O(\sqrt{n})$ per query should pass if you aren't deliberately trying to write code as slow as possible.
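The slow single-pass diameter computation mentioned above can be sketched in Python as follows (function name is mine):

```python
def diameter_from_brackets(desc):
    """One O(n) scan over the description: maintain max mu(a),
    max (mu(a) - 2 mu(b)) and max (mu(a) - 2 mu(b) + mu(c))
    over all times a <= b <= c seen so far (time 0 has depth 0)."""
    depth = 0
    best_a = 0    # max mu(a)
    best_ab = 0   # max mu(a) - 2 mu(b), a <= b
    best_abc = 0  # max mu(a) - 2 mu(b) + mu(c): the diameter
    for ch in desc:
        depth += 1 if ch == '(' else -1
        best_a = max(best_a, depth)
        best_ab = max(best_ab, best_a - 2 * depth)
        best_abc = max(best_abc, best_ab + depth)
    return best_abc

print(diameter_from_brackets("(())"))    # path on 3 vertices -> 2
print(diameter_from_brackets("()()()"))  # star with 3 leaves -> 2
```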
|
[
"data structures",
"implementation",
"trees"
] | 2,700
|
#include <algorithm>
#include <cassert>
#include <iostream>
#include <string>
#include <vector>
using namespace std;
struct Tree {
// Node maximizing the value of depth(l) - 2*depth(v) + depth(r) for l<=v<=r
struct Node {
int delta; // num '(' - num ')'
int min_depth; // minimum value of [num '(' - num ')'] on prefix
int max_depth; // maximum ...
int max_lv; // max value of [depth(l) - 2 * depth(v)], l <= v
int max_rv; // max value of [depth(r) - 2 * depth(v)], v <= r
int max_lvr; // max value of [depth(l) - 2 * depth(v) + depth(r)]
static Node Empty() {
return Node{0, 0, 0, 0, 0, 0};
}
static Node LeftParenthesis() {
return Node{1, 0, 1, 0, 1, 1};
}
static Node RightParenthesis() {
return Node{-1, -1, 0, 2, 1, 1};
}
static Node FromCharacter(char ch) {
if (ch == '(')
return LeftParenthesis();
else
return RightParenthesis();
}
Node ShiftHeight(int displacement) const {
return Node{delta + displacement,
min_depth + displacement,
max_depth + displacement,
max_lv - displacement,
max_rv - displacement,
max_lvr};
}
static Node Merge(const Node &lhs, const Node &rhs) {
Node rhs_shifted = rhs.ShiftHeight(lhs.delta);
Node result;
result.delta = rhs_shifted.delta;
result.min_depth = min(lhs.min_depth, rhs_shifted.min_depth);
result.max_depth = max(lhs.max_depth, rhs_shifted.max_depth);
result.max_lv = max(
{lhs.max_lv, rhs_shifted.max_lv, lhs.max_depth - 2 * rhs_shifted.min_depth});
result.max_rv = max(
{lhs.max_rv, rhs_shifted.max_rv, rhs_shifted.max_depth - 2 * lhs.min_depth});
result.max_lvr = max(
{lhs.max_lvr, rhs_shifted.max_lvr,
lhs.max_lv + rhs_shifted.max_depth,
rhs_shifted.max_rv + lhs.max_depth});
return result;
}
};
vector<Node> data;
int Base;
Tree(int N) : Base(1) {
while (Base < N + 1) { Base *= 2; }
data.resize(Base * 2, Node::Empty());
}
void Replace(int pos, char ch) {
pos += Base;
data[pos] = Node::FromCharacter(ch);
pos /= 2;
while (pos) {
data[pos] = Node::Merge(data[pos * 2], data[pos * 2 + 1]);
pos /= 2;
}
}
int GetMaxPath() const {
return data[1].max_lvr;
}
};
int main() {
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
int N, Q;
string paren_string;
cin >> N >> Q >> paren_string;
--N;
Tree path_tree(N * 2);
for (int i = 0; i < N * 2; ++i) {
path_tree.Replace(i, paren_string[i]);
}
cout << path_tree.GetMaxPath() << "\n";
for (int query_idx = 0; query_idx < Q; ++query_idx) {
int first_swap, second_swap;
cin >> first_swap >> second_swap;
--first_swap;
--second_swap;
const char next_first = paren_string[second_swap];
const char next_second = paren_string[first_swap];
assert(next_first != next_second);
path_tree.Replace(first_swap, next_first);
paren_string[first_swap] = next_first;
path_tree.Replace(second_swap, next_second);
paren_string[second_swap] = next_second;
cout << path_tree.GetMaxPath() << "\n";
}
}
|
1149
|
D
|
Abandoning Roads
|
Codefortia is a small island country located somewhere in the West Pacific. It consists of $n$ settlements connected by $m$ bidirectional gravel roads. Curiously enough, the beliefs of the inhabitants require the time needed to pass each road to be equal either to $a$ or $b$ seconds. It's guaranteed that one can go between any pair of settlements by following a sequence of roads.
Codefortia was recently struck by the financial crisis. Therefore, the king decided to abandon some of the roads so that:
- it will be possible to travel between each pair of cities using the remaining roads only,
- the sum of times required to pass each remaining road will be minimum possible (in other words, remaining roads must form minimum spanning tree, using the time to pass the road as its weight),
- among all the plans minimizing the sum of times above, the time required to travel between the king's residence (in settlement $1$) and the parliament house (in settlement $p$) using the remaining roads only will be minimum possible.
The king, however, forgot where the parliament house was. For each settlement $p = 1, 2, \dots, n$, can you tell what is the minimum time required to travel between the king's residence and the parliament house (located in settlement $p$) after some roads are abandoned?
|
Let's partition the edges of the graph into two classes: light (weight $a$) and heavy (weight $b$). Let's now fix a single settlement $p$ as the location of the parliament house, and consider what the path between settlements $1$ and $p$ could look like. Lemma 1. If there is a path between two settlements $u$, $v$ consisting of light edges only, then they will be connected using the light edges only in any minimum spanning tree. Proof. Consider the Kruskal minimum spanning tree algorithm (which can produce any minimum spanning tree, depending on the order in which we consider edges of equal weight). After we process all light edges, settlements $u$ and $v$ will be connected. Lemma 2. Consider the connected components in the graph with heavy edges removed. In the original graph, a path can be a part of the minimum spanning tree if and only if it doesn't leave and then reenter any of the components. Proof. If we leave and then reenter some connected component, there are two vertices $u$, $v$ in a single connected component (that is, connected by a light path in the original graph) with at least one heavy edge on the path between them. Lemma 1 asserts that this is impossible. However, if no such situation happens, it's straightforward to extend the selected path to a minimum spanning tree: first add all possible light edges, then all possible heavy edges, so that the graph becomes a spanning tree. This leads us to an (inefficient) $O(2^n m \log(2^n m))$ solution: find the connected components in the graph with light edges only. We now want to find the shortest path between $1$ and all other vertices that doesn't revisit any component multiple times. In order to do so, create a graph where each state is of the form $(\text{mask of components visited so far}, \text{current vertex})$. This information allows us to check that we don't reenter any previously visited component. After we do that, we run Dijkstra's shortest path algorithm from vertex $1$.
The shortest path between vertices $1$ and $p$ is then the cheapest cost over all states at vertex $p$. The beautiful thing now is that the algorithm can be sped up by the following greedy observation: Lemma 3. Consider any component of size $3$ or less. It's not optimal to leave and then reenter this component, even if we don't explicitly forbid it. Proof. We need to use at least two heavy edges to leave and then reenter the component, and this costs us $2b$ or more. However, as the component has at most three vertices, the path between any pair of its vertices costs at most $2a$. As $a < b$, it's always better to take the path inside the component. Therefore, we can simply ignore all components of size $3$ or less. As we now need to remember only the components having at least $4$ vertices, the number of states drops down to $O(2^{n/4} m)$. This immediately allows us to finish the solution in a number of ways: Implement the vanilla Dijkstra algorithm on the state graph, with time complexity $O(2^{n/4} m \log(2^{n/4} m))$. Alternatively, notice that the edges of the graph have only two weights ($a$ and $b$), and therefore the priority queue can be implemented using two queues; the time complexity then drops to $O(2^{n/4} m)$.
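As an illustrative sketch of the state-graph Dijkstra (Python; the toy graph, weights, and helper names below are my own assumptions, and a plain binary heap is used rather than the two-queue optimization):

```python
import heapq

def solve(n, light, heavy, a, b, bound=4):
    """Dijkstra over states (vertex, mask of visited large light components).
    Reentering a light component of size >= bound is forbidden (Lemma 2);
    smaller components are ignored (Lemma 3)."""
    adj_l = [[] for _ in range(n + 1)]
    adj_h = [[] for _ in range(n + 1)]
    for u, v in light:
        adj_l[u].append(v)
        adj_l[v].append(u)
    for u, v in heavy:
        adj_h[u].append(v)
        adj_h[v].append(u)
    comp = [0] * (n + 1)     # light-component id of each vertex
    mask_of = [0] * (n + 1)  # bitmask bit for large components only
    seen = [False] * (n + 1)
    comps = big = 0
    for s in range(1, n + 1):
        if seen[s]:
            continue
        comps += 1
        seen[s] = True
        stack, members = [s], []
        while stack:
            v = stack.pop()
            members.append(v)
            comp[v] = comps
            for u in adj_l[v]:
                if not seen[u]:
                    seen[u] = True
                    stack.append(u)
        if len(members) >= bound:
            for v in members:
                mask_of[v] = 1 << big
            big += 1
    INF = float('inf')
    dist = {(1, mask_of[1]): 0}
    pq = [(0, 1, mask_of[1])]
    best = [INF] * (n + 1)
    while pq:
        d, v, mask = heapq.heappop(pq)
        if d > dist.get((v, mask), INF):
            continue
        best[v] = min(best[v], d)
        for u in adj_l[v]:  # light edge: stays inside the same component
            if d + a < dist.get((u, mask), INF):
                dist[(u, mask)] = d + a
                heapq.heappush(pq, (d + a, u, mask))
        for u in adj_h[v]:  # heavy edge: may enter a new component
            if comp[u] == comp[v] or (mask_of[u] & mask):
                continue    # would reenter a large light component
            nmask = mask | mask_of[u]
            if d + b < dist.get((u, nmask), INF):
                dist[(u, nmask)] = d + b
                heapq.heappush(pq, (d + b, u, nmask))
    return best[1:]
```

For example, with a light chain 1-2-3-4 (a tracked component of size 4) and heavy edges 1-5 and 5-4 under $a = 4 < b = 5$, the tempting detour 1-5-4 of cost 10 is rejected because it reenters the component, so vertex 4 is answered with the in-component cost 12.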
|
[
"brute force",
"dp",
"graphs",
"greedy"
] | 3,000
|
#include <bits/stdc++.h>
using namespace std;
const int MaxN = 80;
const int kCompoSizeBound = 4;
int N, M, A, B;
vector<int> adj_cheap[MaxN];
vector<int> adj_expensive[MaxN];
vector<vector<int>> large_cheap_compos;
unsigned large_compo_mask[MaxN];
bool is_same_compo[MaxN][MaxN];
bool visited[MaxN];
vector<int> cur_component;
void DfsCheap(int v) {
visited[v] = true;
cur_component.push_back(v);
for (int s : adj_cheap[v]) {
if (!visited[s]) {
DfsCheap(s);
}
}
}
void FindCheapComponents() {
fill_n(visited, N + 1, false);
for (int v = 1; v <= N; ++v) {
if (!visited[v]) {
cur_component.clear();
DfsCheap(v);
for (int a : cur_component) for (int b : cur_component) {
is_same_compo[a][b] = true;
}
if ((int)cur_component.size() >= kCompoSizeBound) {
const int compo_idx = (int)large_cheap_compos.size();
large_cheap_compos.push_back(cur_component);
for (int vert : cur_component) {
large_compo_mask[vert] = 1U << compo_idx;
}
}
}
}
}
int32_t main() {
cin >> N >> M >> A >> B;
for (int i = 0; i < M; ++i) {
int u, v, c;
cin >> u >> v >> c;
if (c == A) {
adj_cheap[u].push_back(v);
adj_cheap[v].push_back(u);
} else {
adj_expensive[u].push_back(v);
adj_expensive[v].push_back(u);
}
}
FindCheapComponents();
const int S = large_cheap_compos.size();
using State = pair<int, unsigned>;
using QueueElem = pair<int, State>;
const State init_state{1, large_compo_mask[1]};
priority_queue<QueueElem, vector<QueueElem>, greater<QueueElem>> que;
que.emplace(0, init_state);
const int kInfty = 1e9;
vector<int> answers(N + 1, kInfty);
vector<vector<int>> distances(N + 1, vector<int>(1 << S, kInfty));
distances[1][large_compo_mask[1]] = 0;
vector<vector<bool>> dij_visited(N + 1, vector<bool>(1 << S));
while (!que.empty()) {
auto [cost, state] = que.top();
que.pop();
auto &&[vert, mask] = state;
if (dij_visited[vert][mask]) {
continue;
}
dij_visited[vert][mask] = true;
answers[vert] = min(answers[vert], cost);
for (int s_cheap : adj_cheap[vert]) {
const int new_dist = cost + A;
if (distances[s_cheap][mask] > new_dist) {
que.emplace(new_dist, State{s_cheap, mask});
distances[s_cheap][mask] = new_dist;
}
}
for (int s_large : adj_expensive[vert]) {
if (is_same_compo[vert][s_large]) { continue; }
if (large_compo_mask[s_large] & mask) { continue; }
const unsigned new_mask = mask | large_compo_mask[s_large];
const int new_dist = cost + B;
if (distances[s_large][new_mask] > new_dist) {
que.emplace(new_dist, State{s_large, new_mask});
distances[s_large][new_mask] = new_dist;
}
}
}
for (int i = 1; i <= N; ++i) {
cout << answers[i] << (i == N ? '\n' : ' ');
}
}
|
1149
|
E
|
Election Promises
|
In Byteland, there are two political parties fighting for seats in the Parliament in the upcoming elections: Wrong Answer Party and Time Limit Exceeded Party. As they want to convince as many citizens as possible to cast their votes on them, they keep promising lower and lower taxes.
There are $n$ cities in Byteland, connected by $m$ one-way roads. Interestingly enough, the road network has no cycles — it's impossible to start in any city, follow a number of roads, and return to that city. Last year, citizens of the $i$-th city had to pay $h_i$ bourles of tax.
Parties will now alternately hold the election conventions in various cities. If a party holds a convention in city $v$, the party needs to decrease the taxes in this city to a non-negative integer amount of bourles. However, at the same time they can arbitrarily modify the taxes in each of the cities that can be reached from $v$ using a single road. The only condition that must be fulfilled is that the tax in each city has to remain a non-negative integer amount of bourles.
The first party to hold the convention is Wrong Answer Party. It's predicted that the party to hold the last convention will win the election. Can Wrong Answer Party win regardless of Time Limit Exceeded Party's moves?
|
Let's get straight to the main lemma: Lemma. For any vertex $v$, let's define the level of $v$ as $M(v) = \mathrm{mex}\,\{M(u)\,\mid\,v\to u\,\text{is an edge}\}$ where $\mathrm{mex}$ is the minimum-excluded function. Moreover, for any $t \in \mathbb{N}$, let $X(t)$ be the xor-sum of the taxes in all cities $v$ fulfilling $M(v) = t$; that is, $X(t) = \bigoplus\,\{h_v\,\mid\,M(v)=t\}$. The starting party loses if and only if $X(t) = 0$ for all $t$'s; that is, if all xor-sums are equal to $0$. Proof. Let's consider any move from a position in which all xor-sums are zero. If we're holding the convention in city $v$, then we must decrease the amount of taxes in this city. Note however that there is no direct connection from $v$ to any other city $u$ having the same level as $v$, as $M(v)$ is the smallest integer outside of the set $\{M(u)\,\mid\,v\to u\,\text{is an edge}\}$. Therefore, exactly one tax value changes at level $M(v)$, and thus the value $X(M(v))$ must change. As it was zero before, it must become non-zero. Now consider any configuration where some xor-sums are non-zero. Let $t$ be the highest level at which $X(t) > 0$. We want to hold the convention in a selected city at level $t$. Which one? Notice that no two cities at the same level are connected to each other, and therefore we can only afford to pick one city and strictly decrease its tax. This is equivalent to the game of Nim where each city corresponds to a single stack of stones. We perform a single optimal move in this game: pick a city and decrease the tax in the city by a non-zero amount of bourles, leading to the position where the new xor-sum of the taxes at this level, $X'(t)$, is equal to zero. We also need to take care of zeroing all $X(l)$'s for $l < t$. This is however straightforward: for each $l < t$, pick a single city $v_l$ at level $l$ directly reachable from $v$ (there must be one by the definition of $M(v)$), and alter the value of its tax in order to nullify $X(l)$.
The proof above is actually constructive and allows us to compute a single winning move in $O(n + m)$ time. As a small bonus: if you're into advanced game theory, you can come up with the lemma above without much guesswork. One can compute the nimbers for the state where there are no taxes anywhere except a single city $v$ where the tax is equal to $h_v$, and it turns out to be equal to $\omega^{M(v)} \cdot h_v$ where $\omega$ is the smallest infinite ordinal number. Moreover, it's not that hard to see that the nimber for the state where there are more taxed cities is a nim-sum of the nimbers corresponding to the states with only one taxed city. This all should quite naturally lead to the lemma above.
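The lemma can be checked directly on a tiny DAG; a Python sketch (helper names are mine):

```python
def analyse(n, edges, taxes):
    """Compute levels M(v) (mex of successors' levels in the DAG) and the
    xor-sums X(t) of taxes over each level; the party to move loses iff
    every X(t) equals zero."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    level = [None] * n

    def M(v):
        if level[v] is None:
            succ = {M(u) for u in adj[v]}
            m = 0
            while m in succ:
                m += 1
            level[v] = m
        return level[v]

    xors = {}
    for v in range(n):
        xors[M(v)] = xors.get(M(v), 0) ^ taxes[v]
    return level, xors, all(x == 0 for x in xors.values())

# Chain 0 -> 1 -> 2: levels are 0, 1, 0; with taxes 2, 0, 2 we get
# X(0) = 2 ^ 2 = 0 and X(1) = 0, so the first party loses.
print(analyse(3, [(0, 1), (1, 2)], [2, 0, 2]))
```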
|
[
"games",
"graphs"
] | 3,200
|
#include <bits/stdc++.h>
using namespace std;
const int MaxN = 2e5 + 100;
int stack_sizes[MaxN];
vector<int> adj[MaxN];
int N, M;
int vertex_group_idx[MaxN];
bool visited[MaxN];
int group_xor[MaxN];
vector<int> vertex_groups[MaxN];
void DfsMakeLayers(int vert) {
visited[vert] = true;
vector<int> seen_layers;
for (int s : adj[vert]) {
if (!visited[s]) {
DfsMakeLayers(s);
}
seen_layers.push_back(vertex_group_idx[s]);
}
sort(seen_layers.begin(), seen_layers.end());
seen_layers.resize(unique(seen_layers.begin(), seen_layers.end()) -
seen_layers.begin());
seen_layers.push_back(1e9);
while (seen_layers[vertex_group_idx[vert]] == vertex_group_idx[vert]) {
++vertex_group_idx[vert];
}
const int group_id = vertex_group_idx[vert];
group_xor[group_id] ^= stack_sizes[vert];
vertex_groups[group_id].push_back(vert);
}
int main() {
ios_base::sync_with_stdio(0);
cin >> N >> M;
for (int i = 0; i < N; ++i) {
cin >> stack_sizes[i];
}
for (int i = 0; i < M; ++i) {
int u, v;
cin >> u >> v;
--u; --v;
adj[u].push_back(v);
}
for (int vert = 0; vert < N; ++vert) {
if (!visited[vert]) {
DfsMakeLayers(vert);
}
}
if (count(group_xor, group_xor + N, 0) == N) {
cout << "LOSE\n";
return 0;
}
int last_idx = N;
while (group_xor[last_idx] == 0)
--last_idx;
int touched_vertex = -1;
for (int vert : vertex_groups[last_idx]) {
if ((stack_sizes[vert] ^ group_xor[last_idx]) < stack_sizes[vert]) {
touched_vertex = vert;
break;
}
}
assert(touched_vertex != -1);
stack_sizes[touched_vertex] ^= group_xor[last_idx];
group_xor[last_idx] = 0;
for (int neigh : adj[touched_vertex]) {
const int group_idx = vertex_group_idx[neigh];
stack_sizes[neigh] ^= group_xor[group_idx];
group_xor[group_idx] = 0;
}
assert(count(group_xor, group_xor + N, 0) == N);
cout << "WIN\n";
for (int i = 0; i < N; ++i)
cout << stack_sizes[i] << " ";
cout << "\n";
}
|
1150
|
A
|
Stock Arbitraging
|
Welcome to Codeforces Stock Exchange! We're pretty limited now as we currently allow trading on one stock, Codeforces Ltd. We hope you'll still be able to make profit from the market!
In the morning, there are $n$ opportunities to buy shares. The $i$-th of them allows to buy as many shares as you want, each at the price of $s_i$ bourles.
In the evening, there are $m$ opportunities to sell shares. The $i$-th of them allows to sell as many shares as you want, each at the price of $b_i$ bourles. You can't sell more shares than you have.
It's morning now and you possess $r$ bourles and no shares.
What is the maximum number of bourles you can hold after the evening?
|
The main observation is that we always want to buy shares as cheaply as possible, and sell them as expensively as possible. Therefore, we should pick the lowest price at which we can buy shares, $s_{\min} = \min(s_1, s_2, \dots, s_n)$, and the highest price at which we can sell the shares, $b_{\max} = \max(b_1, b_2, \dots, b_m)$. Now, we have two cases: If $s_{\min} < b_{\max}$, it's optimal to buy as many shares as possible in the morning and sell them all in the evening. We can buy as many as $\left\lfloor \frac{r}{s_{\min}} \right\rfloor$ shares and gain $b_{\max} - s_{\min}$ bourles of profit on each of them. Therefore, the final balance is $r + \left\lfloor \frac{r}{s_{\min}} \right\rfloor (b_{\max} - s_{\min})$. If $s_{\min} \geq b_{\max}$, we're not gaining any profit on the shares and therefore we shouldn't care about trading stocks at all. The final balance is then $r$. The solution can therefore be implemented in $O(n + m)$ time. However, the constraints even allowed brute-forcing the seller, the buyer and the amount of stock we're buying in $O(nmr)$ time. Note that many programming languages have routines for finding the minima/maxima of collections of integers: e.g. min_element in C++, Collections.min in Java, or min in Python. This should make the code shorter and quicker to write.
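The closed-form answer can be cross-checked against the $O(nmr)$ brute force over every buy price, sell price, and amount; a sketch (function names are mine):

```python
def max_balance(r, buys, sells):
    """Closed-form answer described above."""
    s_min, b_max = min(buys), max(sells)
    if s_min < b_max:
        return r + (r // s_min) * (b_max - s_min)
    return r

def brute(r, buys, sells):
    """O(n * m * r) brute force over buy price, sell price and amount."""
    best = r
    for s in buys:
        for b in sells:
            for k in range(r // s + 1):
                best = max(best, r - k * s + k * b)
    return best

print(max_balance(11, [4, 2, 5], [4, 4, 5, 4]))  # buy 5 shares at 2, sell at 5 -> 26
```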
|
[
"greedy",
"implementation"
] | 800
|
R = int(input().split()[-1])
best_buy = min(map(int, input().split()))
best_sell = max(map(int, input().split()))
num_buy = R // best_buy
print(max(R, R + (best_sell - best_buy) * num_buy))
|
1150
|
B
|
Tiling Challenge
|
One day Alice was cleaning up her basement when she noticed something very curious: an \textbf{infinite} set of wooden pieces! Each piece was made of five square tiles, with four tiles adjacent to the fifth center tile:
By the pieces lay a large square wooden board. The board is divided into $n^2$ cells arranged into $n$ rows and $n$ columns. Some of the cells are already occupied by single tiles stuck to it. The remaining cells are free. Alice started wondering whether she could fill the board completely using the pieces she had found. Of course, each piece has to cover exactly five distinct cells of the board, no two pieces can overlap and every piece should fit in the board entirely, without some parts laying outside the board borders. The board however was too large for Alice to do the tiling by hand. Can you help determine if it's possible to fully tile the board?
|
Notice that there is only one orientation of the wooden piece (and this is the orientation depicted in the statement). Moreover, we can notice the following fact: Take any (untiled) topmost cell of the board. If there is any correct tiling of the board, this cell must be covered by the topmost tile of a piece. The fact is pretty obvious: there is simply no other way to cover the cell. Having that, it's straightforward to implement an $O(n^4)$ solution: while the board isn't fully covered, take any topmost untiled cell and try to cover it with a piece. If it's impossible, declare failure. If it's possible, lay the piece and repeat the procedure. We can lay at most $O(n^2)$ pieces, and we're looking for the topmost cell in $O(n^2)$ time. This gives us $O(n^4)$ running time. While this is enough to solve the task, one can also notice that we don't have to scan the whole board in search of the cell each time. In fact, we can do a single scan through the board in row-major order, and as soon as we find any uncovered cell, we try to cover it with a wooden piece. This allows us to implement the solution in $O(n^2)$ time.
|
[
"greedy",
"implementation"
] | 900
|
vectors = [(0, 0), (1, -1), (1, 0), (1, 1), (2, 0)]
N = int(input())
board = [list(input()) for row_id in range(N)]
for row in range(N - 2):
for col in range(1, N - 1):
can_place = all(board[row + dr][col + dc] == '.' for (dr, dc) in vectors)
if can_place:
for (dr, dc) in vectors:
board[row + dr][col + dc] = '#'
has_dots = any(row.count('.') for row in board)
print('NO' if has_dots else 'YES')
|
1151
|
A
|
Maxim and Biology
|
Today in the scientific lyceum of the Kingdom of Kremland, there was a biology lesson. The topic of the lesson was the genomes. Let's call the genome the string "ACTG".
Maxim was very bored sitting in class, so the teacher came up with a task for him: on a given string $s$ consisting of uppercase letters and of length at least $4$, you need to find the minimum number of operations that you need to apply so that the genome appears in it as a substring. In one operation, you can replace any letter in the string $s$ with the next or previous letter in the alphabet. For example, for the letter "D" the previous one will be "C", and the next one will be "E". In this problem, we assume that for the letter "A", the previous one will be the letter "Z", and the next one will be "B", and for the letter "Z", the previous one is the letter "Y", and the next one is the letter "A".
Help Maxim solve the problem that the teacher gave him.
A string $a$ is a substring of a string $b$ if $a$ can be obtained from $b$ by deletion of several (possibly, zero or all) characters from the beginning and several (possibly, zero or all) characters from the end.
|
Check every substring of string $s$ of length $4$ and find the minimum number of operations to transform it into "ACTG" and update the answer. Complexity is $\mathcal{O}(n)$.
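A minimal Python sketch of this brute force (function name is mine): for each length-4 window, each letter moves to its target via the cheaper of the two directions around the 26-letter cycle.

```python
def min_ops_to_genome(s, genome="ACTG"):
    """Minimum operations so that `genome` appears in s as a substring."""
    def circular(a, b):
        # cost to turn letter a into letter b on the cyclic alphabet
        d = abs(ord(a) - ord(b))
        return min(d, 26 - d)
    return min(
        sum(circular(s[i + j], genome[j]) for j in range(len(genome)))
        for i in range(len(s) - len(genome) + 1)
    )

print(min_ops_to_genome("ZCTH"))  # Z->A (1), C->C (0), T->T (0), H->G (1) -> 2
```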
|
[
"brute force",
"strings"
] | 1,000
| null |