| contest_id | index | title | statement | tutorial | tags | rating | code |
|---|---|---|---|---|---|---|---|
583
|
B
|
Robot's Task
|
Robot Doc is located in a hall where $n$ computers stand in a line, numbered from left to right from $1$ to $n$. Each computer contains \textbf{exactly one} piece of information, each of which Doc wants to get eventually. The computers are equipped with a security system, so to crack the $i$-th of them, the robot needs to have already collected at least $a_{i}$ pieces of information from the other computers. Doc can hack a computer only if he is right next to it.
The robot is assembled using modern technologies and can move along the line of computers in either of the two possible directions, but a change of direction requires a large amount of resources from Doc. Tell the minimum number of changes of direction the robot will have to make to collect all $n$ pieces of information, if initially it is next to the computer with number $1$.
\textbf{It is guaranteed} that there exists at least one sequence of the robot's actions, which leads to the collection of all information. Initially Doc doesn't have any pieces of information.
|
It is always optimal to pass along the whole row of computers, first from the $1$-st to the $n$-th, then from the $n$-th back to the first, then again from the first to the $n$-th, and so on, collecting every piece of information that is currently accessible, until all of them are collected. This way the robot makes maximal use of every change of direction. An $O(n^{2})$ solution using this approach passes the system tests.
|
[
"greedy",
"implementation"
] | 1,200
|
#include <bits/stdc++.h>
using namespace std;
const int N = 5000;
int n, a[N];
int main() {
    cin >> n;
    for (int i = 0; i < n; ++i) cin >> a[i];
    int s = 0, res = 0;
    while (true) {
        for (int i = 0; i < n; ++i)
            if (a[i] <= s) a[i] = n + 1, s++;
        if (s == n) break;
        res++;
        for (int i = n - 1; i >= 0; --i)
            if (a[i] <= s) a[i] = n + 1, s++;
        if (s == n) break;
        res++;
    }
    cout << res << endl;
    return 0;
}
|
584
|
A
|
Olesya and Rodion
|
Olesya loves numbers consisting of $n$ digits, and Rodion only likes numbers that are divisible by $t$. Find some number that satisfies both of them.
Your task is: given $n$ and $t$, print an integer strictly larger than zero, consisting of $n$ digits, that is divisible by $t$. If no such number exists, print $-1$.
|
Two cases: $t = 10$ and $t \neq 10$. If $t = 10$, there are again two cases :). If $n = 1$, the answer is $-1$ (because no single digit is divisible by $10$). If $n > 1$, an answer is, for example, '111...10' ($n - 1$ ones followed by a zero). If $t \neq 10$, an answer is, for example, 'ttt...t' ($n$ copies of the digit $t$, which is obviously divisible by $t$).
|
[
"math"
] | 1,000
| null |
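The case analysis above fits in a few lines. A minimal sketch (the helper name `solve` and the function-only form are my own, not from the editorial):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns an n-digit positive number divisible by t, or "-1" if none exists.
string solve(int n, int t) {
    if (t == 10) {
        if (n == 1) return "-1";           // no single digit is divisible by 10
        return string(n - 1, '1') + "0";   // "111...10": n - 1 ones and a zero
    }
    return string(n, char('0' + t));       // "ttt...t" is divisible by t
}
```

For instance, `solve(3, 10)` produces "110".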
584
|
B
|
Kolya and Tanya
|
Kolya loves putting gnomes at a round table and giving them coins, and Tanya loves studying triplets of gnomes sitting in the vertices of an equilateral triangle.
More formally, there are $3n$ gnomes sitting in a circle. Each gnome can have from $1$ to $3$ coins. Let's number the places in the order they occur in the circle by numbers from $0$ to $3n - 1$, let the gnome sitting on the $i$-th place have $a_{i}$ coins. If there is an integer $i$ ($0 ≤ i < n$) such that $a_{i} + a_{i + n} + a_{i + 2n} ≠ 6$, then Tanya is satisfied.
Count the number of ways to choose $a_{i}$ so that Tanya is satisfied. As there can be many ways of distributing coins, print the remainder of this number modulo $10^{9} + 7$. Two ways, $a$ and $b$, are considered distinct if there is index $i$ ($0 ≤ i < 3n$), such that $a_{i} ≠ b_{i}$ (that is, some gnome got different number of coins in these two ways).
|
The number of ways to choose the $a_{i}$ without any conditions is $3^{3n}$. Let $A$ be the number of ways to choose the $a_{i}$ such that in every triangle the gnomes have exactly $6$ coins in total. Then the answer is $3^{3n} - A$. Note that the gnomes split into $n$ independent triangles (the $i$-th triangle consists of $a_{i}$, $a_{i + n}$, $a_{i + 2n}$, $i < n$). So we can count the number of "bad" assignments for one triangle and multiply the results. For one triangle there are $7$ ways to give the gnomes exactly $6$ coins (all permutations of $1, 2, 3$, plus $2, 2, 2$). So the answer is $3^{3n} - 7^{n}$, which we compute in $O(n)$.
|
[
"combinatorics"
] | 1,500
| null |
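The closed formula $3^{3n} - 7^{n} \bmod (10^{9} + 7)$ can be sketched with fast exponentiation (this makes it $O(\log n)$ rather than the $O(n)$ the editorial mentions; names are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 1000000007;

ll power(ll b, ll e) {                 // fast modular exponentiation
    ll r = 1; b %= MOD;
    for (; e > 0; e >>= 1) {
        if (e & 1) r = r * b % MOD;
        b = b * b % MOD;
    }
    return r;
}

// Number of distributions satisfying Tanya: 3^(3n) - 7^n (mod 1e9+7).
ll countWays(ll n) {
    return (power(3, 3 * n) - power(7, n) + MOD) % MOD;
}
```

For $n = 1$ this gives $27 - 7 = 20$.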
584
|
C
|
Marina and Vasya
|
Marina loves strings of the same length and Vasya loves when there is a third string, different from them in exactly $t$ characters. Help Vasya find at least one such string.
More formally, you are given two strings $s_{1}$, $s_{2}$ of length $n$ and a number $t$. Let's denote by $f(a, b)$ the number of characters in which strings $a$ and $b$ differ. Then your task is to find any string $s_{3}$ of length $n$, such that $f(s_{1}, s_{3}) = f(s_{2}, s_{3}) = t$. If there is no such string, print $-1$.
|
Let's find a string that coincides with $s_{1}$ in $k = n - t$ positions and with $s_{2}$ in $k = n - t$ positions. Denote by $q$ the number of positions $i$ with $s1_{i} = s2_{i}$. If $k \le q$, take $k$ positions with $s1_{pos} = s2_{pos}$ and put the common symbols into the answer there; in the remaining $n - k$ positions put symbols different from the corresponding symbols of both $s_{1}$ and $s_{2}$ (we can do it, because we have 26 letters). Now let $k > q$. If some answer disagrees with $s_{1}$ (and $s_{2}$) at a position with $s1_{pos} = s2_{pos}$, then, since $k > q$, it must coincide with $s_{1}$ at some position where $s1_{i} \neq s2_{i}$ (and likewise for $s_{2}$); swapping these choices, that is, agreeing at the common position and picking a symbol different from both strings at the other, yields another valid answer. So for every one of the $q$ common positions we may put the symbol equal to the corresponding symbols of $s_{1}$ and $s_{2}$. Now we are left with strings $s_{1}, s_{2}$ of length $n - q$ (with $s1_{i} \neq s2_{i}$ for every $i$), and we need a string $ans$ that coincides with $s_{1}$ in $k - q$ positions and with $s_{2}$ in $k - q$ positions. Since $s1_{i} \neq s2_{i}$, each $ans_{i}$ can coincide with $s1_{i}$, or with $s2_{i}$, or with neither, so we need at least $2(k - q)$ positions to get $k - q$ coincidences with each string. Consequently, if $n - q < 2(k - q)$, the answer is $-1$; otherwise fill the first $k - q$ positions from $s_{1}$, the next $k - q$ from $s_{2}$, and make the rest differ from both. The solution works in $O(n)$.
|
[
"constructive algorithms",
"greedy",
"strings"
] | 1,700
| null |
584
|
D
|
Dima and Lisa
|
Dima loves representing an odd number as the sum of several primes, and Lisa loves it when there are at most three primes. Help them represent the given number as a sum of at most three primes.
More formally, you are given an \textbf{odd} number $n$. Find a set of numbers $p_{i}$ ($1 ≤ i ≤ k$), such that
- $1 ≤ k ≤ 3$
- $p_{i}$ is a prime
- $\sum_{i=1}^{k}p_{i}=n$
The numbers $p_{i}$ do not necessarily have to be distinct. It is guaranteed that at least one possible solution exists.
|
There is a fact that the distance between adjacent prime numbers is not big: for $n = 10^{9}$ the maximal distance is $282$. So let's find the maximal prime $p$ such that $p < n - 1$ (we can just decrease $n$ while it is not prime, checking primality in $O({\sqrt{n}})$). We know that $n - p < 300$. Now we have an even (because $n$ and $p$ are odd) number $n - p$, and we should represent it as a sum of two primes (if $n - p = 2$, we simply output $p$ and $2$). As $n - p < 300$, we can just iterate over small numbers $P$ and check whether both $P$ and $n - p - P$ are prime. One can verify by brute force that a solution exists for every even number less than $300$.
|
[
"brute force",
"math",
"number theory"
] | 1,800
| null |
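The search for $p$ and the split of the even remainder can be sketched as follows (helper names are mine; `isPrime` is the $O(\sqrt{n})$ trial-division check the editorial mentions):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

bool isPrime(ll x) {                       // O(sqrt(x)) trial division
    if (x < 2) return false;
    for (ll d = 2; d * d <= x; ++d)
        if (x % d == 0) return false;
    return true;
}

// Represent an odd n >= 3 as a sum of at most three primes.
vector<ll> represent(ll n) {
    if (isPrime(n)) return {n};            // one prime suffices
    ll p = n - 2;                          // largest prime p <= n - 2
    while (!isPrime(p)) --p;
    ll m = n - p;                          // m is even and small
    if (m == 2) return {p, 2};             // two primes suffice
    for (ll q = 2; q <= m; ++q)            // split m into two primes
        if (isPrime(q) && isPrime(m - q)) return {p, q, m - q};
    return {};                             // unreachable for valid inputs
}
```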
584
|
E
|
Anton and Ira
|
Anton loves transforming one permutation into another one by swapping elements for money, and Ira doesn't like paying for stupid games. Help them obtain the required permutation by paying as little money as possible.
More formally, we have two permutations, $p$ and $s$ of numbers from $1$ to $n$. We can swap $p_{i}$ and $p_{j}$, by paying $|i - j|$ coins for it. Find and print the smallest number of coins required to obtain permutation $s$ from permutation $p$. Also print the sequence of swap operations at which we obtain a solution.
|
We can consider that we pay $2|i - j|$ coins for a swap (we can divide the answer by two in the end). Then we can consider that we pay $|i - j|$ coins for moving $p_{i}$ and $|i - j|$ for moving $p_{j}$. So, if $x$ was on position $i$ and then came to position $j$, we will pay at least $|i - j|$ coins for it. Hence the (doubled) answer is at least $\sum_{k=1}^{n}|pp_{k}-ps_{k}|$, where $pp_{k}$ is the position of $k$ in permutation $p$ and $ps_{k}$ is the position of $k$ in permutation $s$. Let's prove that this bound is attained by showing an algorithm of making swaps. We may assume that permutation $s$ is sorted (the task is equivalent). We will put numbers from $n$ down to $1$ on their positions. How can we put $n$ on its position? Denote $p_{pos} = n$. Let's prove that there exists a position $pos2$ such that $pos < pos2$ and $p_{pos2} \le pos$; then we can swap $p_{pos2}$ with $n$ (both numbers move towards their final positions and $n$ moves to the right, so we can repeat this process until $n$ reaches its position). Note that there are only $n - pos$ positions bigger than $pos$. How many numbers greater than $pos$ can stand on these positions? One could say $n - pos$, but that is incorrect: $n$ is greater than $pos$, yet it stands on position $pos$ itself, so at most $n - pos - 1$ of these positions hold numbers greater than $pos$. By the pigeonhole principle a position $x$ with $x > pos$ and $p_{x} \le pos$ exists. But so far our algorithm is $O(n^{3})$. How can we put $n$ on its position in $O(n)$ operations? Move a pointer to the right while the number under it is greater than $pos$, then swap $n$ with the found number. Afterwards we continue moving the pointer from $n$'s new position, so the pointer only moves to the right and makes no more than $n$ steps.
|
[
"constructive algorithms",
"greedy",
"math"
] | 2,300
| null |
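The lower bound from the first paragraph is straightforward to compute; per the editorial's proof it is also the attained minimum, so here is a sketch of the cost alone (1-based positions, names `pp`/`ps` as in the editorial; producing the swap sequence itself is omitted):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Minimum total cost: (1/2) * sum |pp_k - ps_k|, where pp_k / ps_k are the
// positions of value k in permutations p and s.
ll minCost(const vector<int>& p, const vector<int>& s) {
    int n = p.size();
    vector<int> pp(n + 1), ps(n + 1);
    for (int i = 0; i < n; ++i) pp[p[i]] = i + 1;
    for (int i = 0; i < n; ++i) ps[s[i]] = i + 1;
    ll sum = 0;
    for (int k = 1; k <= n; ++k) sum += abs(pp[k] - ps[k]);
    return sum / 2;                 // the doubled-cost bound, halved
}
```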
585
|
A
|
Gennady the Dentist
|
Gennady is one of the best child dentists in Berland. Today $n$ children got an appointment with him, they lined up in front of his office.
All children love to cry loudly at the reception at the dentist. We enumerate the children with integers from $1$ to $n$ in the order they stand in the line. Every child is associated with the value of his confidence $p_{i}$. The children take turns one after another to come into the office; each time the child that is first in the line goes to the doctor.
While Gennady treats the teeth of the $i$-th child, the child is crying with the volume of $v_{i}$. At that the confidence of the first child in the line is reduced by the amount of $v_{i}$, the second one — by value $v_{i} - 1$, and so on. The children in the queue after the $v_{i}$-th child almost do not hear the crying, so their confidence remains unchanged.
If at any point in time the confidence of the $j$-th child is less than zero, he begins to cry with the volume of $d_{j}$ and leaves the line, running towards the exit, without going to the doctor's office. At this the confidence of all the children after the $j$-th one in the line is reduced by the amount of $d_{j}$.
All these events occur immediately one after the other in some order. Some cries may lead to other cries, causing a chain reaction. Once it is quiet in the hallway, the child who is first in the line goes into the doctor's office.
Help Gennady the Dentist to determine the numbers of kids, whose teeth he will cure. Print their numbers in the chronological order.
|
Let's store, for each child, his current confidence value and a boolean flag indicating whether the child has left the queue (or visited the dentist's office). Then one can easily process the children one by one, considering only the children still in the queue (using the boolean array) and updating the stored values. Such a solution has complexity $O(n^{2})$ and requires some care, especially regarding possible overflow of the confidence values. Of course there are much faster solutions, not required in this case.
|
[
"brute force",
"implementation"
] | 1,800
| null |
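A sketch of the $O(n^{2})$ simulation with 64-bit confidence values (the function name and the accumulated-`hall` handling of the chain reaction are my own choices, not from the editorial):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Returns the 1-based indices of the children Gennady actually treats.
// v = crying volume in the office, d = crying volume in the hall, p = confidence.
vector<int> treated(vector<ll> v, vector<ll> d, vector<ll> p) {
    int n = v.size();
    vector<bool> alive(n, true);
    vector<int> res;
    for (int i = 0; i < n; ++i) {
        if (!alive[i]) continue;
        res.push_back(i + 1);
        ll dec = v[i];                       // office cry: v_i, v_i - 1, ...
        for (int j = i + 1; j < n && dec > 0; ++j)
            if (alive[j]) { p[j] -= dec; --dec; }
        ll hall = 0;                         // accumulated hall crying d_j
        for (int j = i + 1; j < n; ++j)
            if (alive[j]) {
                p[j] -= hall;
                if (p[j] < 0) { alive[j] = false; hall += d[j]; }
            }
    }
    return res;
}
```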
585
|
B
|
Phillip and Trains
|
The mobile application store has a new game called "Subway Roller".
The protagonist of the game, Philip, is located at one end of the tunnel and wants to get out at the other one. The tunnel is a rectangular field consisting of three rows and $n$ columns. At the beginning of the game the hero is in some cell of the leftmost column. Some number of trains ride towards the hero. Each train consists of two or more neighbouring cells in some row of the field.
All trains move from right to left at a speed of two cells per second, and the hero runs from left to right at a speed of one cell per second. For simplicity, the game is implemented so that the hero and the trains move in turns. First, the hero moves one cell to the right, then one cell up or down, or stays idle. Then all the trains simultaneously move two cells to the left. Thus, in one move Philip definitely moves to the right and may move up or down. If at any point Philip is in the same cell as a train, he loses. If a train reaches the left column, it continues to move as before, leaving the tunnel.
Your task is to answer the question whether there is a sequence of movements of Philip, such that he would be able to get to the rightmost column.
|
One can consider a graph with a vertex for every position $(x, y)$. Notice that the train positions for each of Philip's positions are fully restorable from his $y$ coordinate. An edge between vertices $u$ and $v$ means that we can get from the position corresponding to $u$ to the position corresponding to $v$ in one turn, without stepping onto a train cell or onto a cell that will be occupied by some train before our next turn. All we need next is to find whether any finishing position is reachable from the unique starting position (using BFS or DFS, or, since the graph is a DAG, dynamic programming). Since the graph has $O(n)$ vertices and $O(n)$ edges, the solution complexity equals $O(n)$.
|
[
"dfs and similar",
"graphs",
"shortest paths"
] | 1,700
| null |
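A sketch of the graph search, using the common reformulation in which the trains stay still and the hero advances three columns per turn (one step of his own plus two cells of relative train movement); the function name and grid encoding are my assumptions, not from the editorial:

```cpp
#include <bits/stdc++.h>
using namespace std;

// g has 3 rows of length n; 's' marks Philip's start, '.' is free, any other
// letter is a train cell.
bool canEscape(vector<string> g) {
    int n = g[0].size(), sr = 0;
    for (int r = 0; r < 3; ++r)
        if (g[r][0] == 's') { sr = r; g[r][0] = '.'; }
    auto isFree = [&](int r, int c) { return c >= n || g[r][c] == '.'; };
    vector<vector<bool>> vis(3, vector<bool>(n, false));
    queue<pair<int,int>> bfs;
    bfs.push({sr, 0});
    vis[sr][0] = true;
    while (!bfs.empty()) {
        auto [r, c] = bfs.front(); bfs.pop();
        if (c == n - 1) return true;            // rightmost column reached
        if (!isFree(r, c + 1)) continue;        // the forced step right
        for (int dr = -1; dr <= 1; ++dr) {      // then up / down / stay
            int nr = r + dr;
            if (nr < 0 || nr > 2) continue;
            // the hero must avoid trains currently at columns c+1, c+2, c+3
            if (isFree(nr, c + 1) && isFree(nr, c + 2) && isFree(nr, c + 3)) {
                int nc = min(c + 3, n - 1);
                if (!vis[nr][nc]) { vis[nr][nc] = true; bfs.push({nr, nc}); }
            }
        }
    }
    return false;
}
```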
585
|
C
|
Alice, Bob, Oranges and Apples
|
Alice and Bob decided to eat some fruit. In the kitchen they found a large bag of oranges and apples. Alice immediately took an orange for herself, Bob took an apple. To make the process of sharing the remaining fruit more fun, the friends decided to play a game. They laid out several cards, and on each one they wrote a letter, either 'A' or 'B'. Then they began to remove the cards one by one from left to right. Every time they removed a card with the letter 'A', Alice gave Bob all the fruits she had at that moment and took out of the bag as many apples and as many oranges as she had before; thus the number of oranges and apples Alice had did not change. If the card had the letter 'B' written on it, then Bob did the same: he gave Alice all the fruit he had and took the same set of fruit from the bag. After the last card was removed, the bag was empty.
You know how many oranges and apples was in the bag at first. Your task is to find any sequence of cards that Alice and Bob could have played with.
|
Firstly, let's understand the process described in the problem statement. If one writes out the tree of the pairs $(x, y)$ with letters A and B, he gets the Stern-Brocot tree. Let the number of oranges be the numerator and the number of apples the denominator of a fraction. At every step we have two fractions (initially $\frac{0}{1}$ and $\frac{1}{0}$) and replace exactly one of them with their mediant. In this way the first fraction is the nearest parent to the left of the mediant, while the second fraction is the parent to the right. The process described in the statement is thus the process of finding a fraction in the Stern-Brocot tree, finishing when the current mediant equals the current node of the tree, and the pair $(x, y)$ is the fraction we are searching for. This means that if $\operatorname*{gcd}(x,y)\neq1$, the pair $(x, y)$ does not correspond to any correct fraction and the answer is "Impossible". Otherwise, we can find it in the tree. If $x > y$, we should first go into the right subtree; moreover, we can then consider that we are searching for $\frac{x-y}{y}$ from the root. If $x < y$, we go left and next search for $\frac{x}{y-x}$ from the root. This gives us the Euclidean algorithm, which can be implemented to work in $O(\log n)$. Complexity: $O(\log n)$.
|
[
"number theory"
] | 2,400
| null |
585
|
D
|
Lizard Era: Beginning
|
In the game Lizard Era: Beginning the protagonist will travel with three companions: Lynn, Meliana and Worrigan. Overall the game has $n$ mandatory quests. To perform each of them, you need to take \textbf{exactly two} companions.
The attitude of each of the companions to the hero is an integer. Initially, the attitude of each of them to the hero is neutral and equal to $0$. As the hero completes quests, he makes actions that change the attitude of the companions whom he took to perform the task, in the positive or negative direction.
Tell us what companions the hero needs to choose to make their attitude equal after completing all the quests. If this can be done in several ways, choose the one in which the value of resulting attitude is greatest possible.
|
To solve the problem we will use the meet-in-the-middle approach. For the first $\lfloor{\frac{n}{2}}\rfloor$ quests we consider all $3^{\lfloor{\frac{n}{2}}\rfloor}$ variants. Let the approval values of the three companions in some variant be $a, b, c$ respectively. If we consider some variant from the other half (there are $3^{\lceil{\frac{n}{2}}\rceil}$ of them) with approval values $a', b', c'$, then to ``merge'' the two parts correctly, the two conditions $a - b = b' - a'$ and $b - c = c' - b'$ must hold (together they are equivalent to $a + a' = b + b' = c + c'$), and the value we are maximizing is $a + a'$. This way, one can enumerate every variant of the first half and store, for every possible pair $(a - b, b - c)$, the maximum achievable value of $a$ (using, for example, the $\mathrm{map}$ structure or any fast sorting algorithm). Then, considering every variant of the second half, one just needs to look up the pair $(b' - a', c' - b')$ in the structure to get the maximum $a$, if it exists, and update the answer with $a + a'$. Restoring the answer follows the same scheme. Such a solution has $O(3^{\lfloor{\frac{n}{2}}\rfloor}\log 3^{\lfloor{\frac{n}{2}}\rfloor})$ complexity.
|
[
"meet-in-the-middle"
] | 2,300
| null |
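A sketch of the meet-in-the-middle value computation; it finds only the maximal equal attitude, not the actual sequence of choices, and the names (`bestEqualAttitude`, the `NONE` sentinel, the `q[i] = {l, m, w}` input format) are my own assumptions:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll NONE = LLONG_MIN;

// q[i] = {l, m, w}: attitude changes of Lynn, Meliana, Worrigan for quest i
// (the excluded companion gets nothing). Returns the maximum achievable
// common attitude, or NONE if the attitudes cannot be made equal.
ll bestEqualAttitude(const vector<array<ll,3>>& q) {
    int n = q.size(), h = n / 2;
    auto enumerate = [&](int from, int to) {
        vector<array<ll,3>> res = {{0, 0, 0}};
        for (int i = from; i < to; ++i) {
            vector<array<ll,3>> nxt;
            for (auto& s : res)
                for (int skip = 0; skip < 3; ++skip) {  // companion left behind
                    auto t = s;
                    for (int c = 0; c < 3; ++c)
                        if (c != skip) t[c] += q[i][c];
                    nxt.push_back(t);
                }
            res = move(nxt);
        }
        return res;
    };
    map<pair<ll,ll>, ll> best;                // (a-b, b-c) -> max a, first half
    for (auto& s : enumerate(0, h)) {
        auto key = make_pair(s[0] - s[1], s[1] - s[2]);
        auto it = best.find(key);
        if (it == best.end() || it->second < s[0]) best[key] = s[0];
    }
    ll ans = NONE;
    for (auto& s : enumerate(h, n)) {         // second half: look up negated key
        auto it = best.find(make_pair(s[1] - s[0], s[2] - s[1]));
        if (it != best.end()) ans = max(ans, it->second + s[0]);
    }
    return ans;
}
```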
585
|
E
|
Present for Vitalik the Philatelist
|
Vitalik the philatelist has a birthday today!
As he is a regular customer in a stamp store called 'Robin Bobin', the store management decided to make him a gift.
Vitalik wants to buy one stamp, and the store will give him a non-empty set of the remaining stamps such that the greatest common divisor (GCD) of the prices of the stamps they give him is more than one. If the GCD of the price of the purchased stamp and the prices of the present set equals $1$, then Vitalik will leave the store completely happy.
The store management asks you to count the number of different situations in which Vitalik will leave the store completely happy. Since the required number of situations can be very large, you need to find the remainder of this number modulo $10^{9} + 7$. The situations are different if the stamps purchased by Vitalik are different, or if one of the present sets contains a stamp that the other present does not contain.
|
Let's calculate the number of subsets with gcd equal to $1$, the value $A$. Let's do that using the principle of inclusion-exclusion: firstly we say that all subsets are good; the total number of subsets is $2^{n}$. Now let's subtract the subsets with gcd divisible by $2$: there are $2^{cnt_{2}} - 1$ of them, where $cnt_{i}$ is the number of elements divisible by $i$. Next we subtract $2^{cnt_{3}} - 1$. Subsets with gcd divisible by $4$ have already been counted when we processed $2$. Next we subtract $2^{cnt_{5}} - 1$. Now notice that subsets with gcd divisible by $6$ have been processed twice, first with $2$ and then with $3$, so we add the number of such subsets, $2^{cnt_{6}} - 1$, back. Continuing this process, we get that for every number $d$ we should add the value $\mu(d)(2^{cnt_{d}} - 1)$, where $\mu(d)$ equals $0$ if $d$ is divisible by the square of some prime, $1$ if the number of primes in the factorization of $d$ is even, and $-1$ otherwise. So numbers divisible by a square of a prime can be ignored, because they have coefficient $0$. To calculate the values $cnt_{i}$ we factorize all numbers and iterate over the $2^{k}$ divisors with $\mu(d) \neq 0$. Now it is easy to see that the number of subsets with gcd greater than $1$ equals $B = 2^{n} - A$. To solve the problem, let's fix the stamp $a_{i}$ that Vitalik will buy. We recalculate the number $B$ for the array $a$ without the element $a_{i}$: for that we only have to subtract the terms affected by the number $a_{i}$, which can be done in $2^{k}$, where $k$ is the number of primes in the factorization of $a_{i}$. It is easy to see that only the subsets with gcd greater than $1$ but not divisible by any prime divisor of $a_{i}$ should be counted in the answer. To calculate the number of such subsets, let's again use the principle of inclusion-exclusion: for every squarefree divisor $d$ of $a_{i}$ we subtract the value $\mu(d)(2^{cnt_{d}} - 1)$ from $B$.
So now we have $B_{i}$, the number of subsets with gcd greater than $1$ but coprime with $a_{i}$. The answer to the problem is the sum of all $B_{i}$. The maximum number of distinct primes in the factorization of a number not greater than $10^{7}$ is $8$. We can factorize all the numbers in linear time by the algorithm that finds the smallest prime divisor of every integer up to the maximum value, or by the sieve of Eratosthenes in $O(n\log\log n)$ time. Complexity: $O(C + n2^{K})$, where $C=\max_{1\le i\le n}a_{i}$ and $K$ is the largest number of primes in the factorization of an $a_{i}$.
|
[
"combinatorics",
"math",
"number theory"
] | 2,900
| null |
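The first ingredient, counting subsets with gcd exactly $1$ by inclusion-exclusion over squarefree $d$, can be sketched for small inputs. This is only the computation of $A$, not the full per-stamp solution, and the sieve-based `mu` and multiple-counting `cnt` are my own implementation choices:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 1000000007;

ll power(ll b, ll e) {
    ll r = 1; b %= MOD;
    for (; e > 0; e >>= 1) { if (e & 1) r = r * b % MOD; b = b * b % MOD; }
    return r;
}

// A = number of non-empty subsets of a with gcd exactly 1:
// A = sum over d of mu(d) * (2^{cnt_d} - 1), cnt_d = #{a_i divisible by d}.
ll subsetsWithGcdOne(const vector<int>& a) {
    int mx = *max_element(a.begin(), a.end());
    vector<int> mu(mx + 1, 1);
    vector<bool> composite(mx + 1, false);   // simple Mobius sieve
    for (int p = 2; p <= mx; ++p) {
        if (composite[p]) continue;
        for (int m = p; m <= mx; m += p) { composite[m] = (m != p); mu[m] = -mu[m]; }
        for (ll sq = (ll)p * p, m = sq; m <= mx; m += sq) mu[m] = 0;
    }
    vector<int> cnt(mx + 1, 0);
    for (int v : a) cnt[v]++;
    for (int d = 1; d <= mx; ++d)            // cnt[d] := count of multiples of d
        for (int m = 2 * d; m <= mx; m += d) cnt[d] += cnt[m];
    ll A = 0;
    for (int d = 1; d <= mx; ++d)
        if (mu[d] != 0)
            A = (A + mu[d] * (power(2, cnt[d]) - 1) % MOD + MOD) % MOD;
    return A;
}
```

For $a = \{2, 3, 4\}$ the gcd-1 subsets are $\{2,3\}$, $\{3,4\}$, $\{2,3,4\}$, so the value is $3$.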
585
|
F
|
Digits of Number Pi
|
Vasily has recently learned about the amazing properties of number $π$. In one of the articles it has been hypothesized that, whatever the sequence of numbers we have, in some position, this sequence is found among the digits of number $π$. Thus, if you take, for example, the epic novel "War and Peace" of famous Russian author Leo Tolstoy, and encode it with numbers, then we will find the novel among the characters of number $π$.
Vasily was absolutely delighted with this, because it means that all the books, songs and programs have already been written and encoded in the digits of $π$. Vasily is, of course, a bit wary that this is only a hypothesis and it hasn't been proved, so he decided to check it out.
To do this, Vasily downloaded from the Internet an archive with a sequence of digits of number $π$, starting from a certain position, and began to check different strings of digits for presence in the downloaded archive. Vasily quickly found short strings of digits, but each time he took a longer string, it turned out that it was not in the archive. Vasily came up with a definition: a string of length $d$ is a half-occurrence if it contains a substring of length at least $\left\lfloor{\frac{d}{2}}\right\rfloor$ which occurs in the archive.
To complete the investigation, Vasily took $2$ large numbers $x, y$ ($x ≤ y$) with the same number of digits and now he wants to find the number of numbers in the interval from $x$ to $y$, which are half-occurrences in the archive. Help Vasily calculate this value modulo $10^{9} + 7$.
|
Consider all substrings of the archive string $s$ of length $\left\lfloor{\frac{d}{2}}\right\rfloor$. Let's add them all to a trie, calculate the failure links and build the automaton over digits; we can do that in linear time using the Aho-Corasick algorithm. Now, to solve the problem, we calculate the dp $z_{i, v, b_{1}, b_{2}, b}$. A state is described by five numbers: $i$, the number of digits we have already put into our number; $v$, the vertex of the trie we are in; $b_{1}$, which equals one if the prefix we built equals the corresponding prefix of $x$; $b_{2}$, which equals one if the prefix we built equals the corresponding prefix of $y$; and $b$, which equals one if the prefix we built already contains some substring of length $\left\lfloor{\frac{d}{2}}\right\rfloor$ from the archive. The value of the dp is the number of ways to build a prefix with the given set of properties. To make a transition, we iterate over the digit we want to append, check that we still stay within the segment $[x, y]$, and move from vertex $v$ to the next vertex of the automaton. The answer is the sum of $z_{d, v, b_{1}, b_{2}, 1}$ over all $v, b_{1}, b_{2}$. Complexity: $O(nd^{2})$.
|
[
"dp",
"implementation",
"strings"
] | 3,200
| null |
586
|
A
|
Alena's Schedule
|
Alena has successfully passed the entrance exams to the university and is now looking forward to start studying.
One two-hour lesson at the Russian university is traditionally called a pair, it lasts for two academic hours (an academic hour is equal to 45 minutes).
The University works in such a way that every day it holds exactly $n$ lessons. Depending on the schedule of a particular group of students, on a given day, some pairs may actually contain classes, but some may be empty (such pairs are called breaks).
The official website of the university has already published the schedule for tomorrow for Alena's group. Thus, for each of the $n$ pairs she knows if there will be a class at that time or not.
Alena's house is far from the university, so if there are breaks, she doesn't always go home. Alena has time to go home only if the break consists of at least two free pairs in a row; otherwise she waits for the next pair at the university.
Of course, Alena does not want to be sleepy during pairs, so she will sleep as long as possible, and will only come to the first pair that is presented in her schedule. Similarly, if there are no more pairs, then Alena immediately goes home.
Alena appreciates the time spent at home, so she always goes home when it is possible, and returns to the university only at the beginning of the next pair. Help Alena determine for how many pairs she will stay at the university. Note that during some pairs Alena may be at the university waiting for the upcoming pair.
|
To solve this problem one should remove all leading and trailing zeroes from the array, and then calculate the number of ones plus the number of zeroes that have ones on both sides. The sum of these values is the answer. Complexity: $O(n)$.
|
[
"implementation"
] | 900
| null |
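The trim-and-count rule above can be sketched directly (function name mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// a[i] = 1 if pair i has a class. Pairs spent at the university:
// all classes, plus lone free pairs squeezed between two classes.
int pairsAtUniversity(vector<int> a) {
    while (!a.empty() && a.front() == 0) a.erase(a.begin());   // sleeps in
    while (!a.empty() && a.back() == 0) a.pop_back();          // leaves early
    int n = a.size(), res = 0;
    for (int i = 0; i < n; ++i) {
        if (a[i] == 1) res++;
        else if (i > 0 && i + 1 < n && a[i - 1] == 1 && a[i + 1] == 1)
            res++;                        // single break: she waits at the uni
    }
    return res;
}
```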
586
|
B
|
Laurenty and Shop
|
A little boy Laurenty has been playing his favourite game Nota for quite a while and is now very hungry. The boy wants to make sausage and cheese sandwiches, but first, he needs to buy a sausage and some cheese.
The town where Laurenty lives is not large. The houses in it are located in two rows of $n$ houses each. Laurenty lives in the very last house of the second row. The only shop in town is placed in the first house of the first row.
The first and second rows are separated with the main avenue of the city. The adjacent houses of one row are separated by streets.
Each crosswalk of a street or an avenue has some traffic lights. In order to cross the street, you need to press a button on the traffic light, wait for a while for the green light and cross the street. Different traffic lights can have different waiting time.
The traffic light on the crosswalk from the $j$-th house of the $i$-th row to the $(j + 1)$-th house of the same row has waiting time equal to $a_{ij}$ ($1 ≤ i ≤ 2, 1 ≤ j ≤ n - 1$). For the traffic light on the crossing from the $j$-th house of one row to the $j$-th house of another row the waiting time equals $b_{j}$ ($1 ≤ j ≤ n$). The city doesn't have any other crossings.
The boy wants to get to the store, buy the products and go back. The main avenue of the city is wide enough, so the boy wants to cross it \textbf{exactly once} on the way to the store and \textbf{exactly once} on the way back home. The boy would get bored if he had to walk the same way again, so he wants the way home to be different from the way to the store in at least one crossing.
\begin{center}
{\small Figure to the first sample.}
\end{center}
Help Laurenty determine the minimum total time he needs to wait at the crossroads.
|
Let's call a path the $i$-th path if it starts by going left $i$ times along the second row, then crosses the avenue, and then goes left the remaining $n - 1 - i$ times along the first row. Let $d_{i}$ be the total time we wait at traffic lights while following the $i$-th path. Any way from the shop back home is the reversal of exactly one path from home to the shop, so we need to find two distinct paths from home to the shop. The answer to the problem is therefore the sum of the smallest and the second smallest values among the $d_{i}$. One can easily calculate $d_{i}$ from $d_{i - 1}$, so all $d_{i}$ are found in one for loop, and keeping only the two minimum values among the $d_{i}$ gives an $O(n)$ solution. Complexity: $O(n)$.
|
[
"implementation"
] | 1,300
| null |
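Computing all $d_{j}$ in one pass with a running suffix/prefix sum can be sketched as follows (0-based indices and names are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// a1, a2: street waiting times within rows 1 and 2 (size n-1 each);
// b: avenue crossing times (size n). Path j crosses the avenue at house j:
// d_j = a2[j..n-2] (left along row 2) + b[j] + a1[0..j-1] (left along row 1).
// The way home reverses some path, so the answer is the sum of the two
// smallest d_j.
ll totalWait(const vector<ll>& a1, const vector<ll>& a2, const vector<ll>& b) {
    int n = b.size();
    ll best1 = LLONG_MAX, best2 = LLONG_MAX;
    ll suf2 = accumulate(a2.begin(), a2.end(), 0LL);  // row-2 part of d_0
    ll pre1 = 0;                                      // row-1 part of d_0
    for (int j = 0; j < n; ++j) {
        ll d = suf2 + b[j] + pre1;
        if (d < best1) { best2 = best1; best1 = d; }
        else if (d < best2) best2 = d;
        if (j < n - 1) { suf2 -= a2[j]; pre1 += a1[j]; }
    }
    return best1 + best2;
}
```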
587
|
A
|
Duff and Weight Lifting
|
Recently, Duff has been practicing weight lifting. As a hard practice, Malek gave her a task: he gave her a sequence of weights, the $i$-th of which weighs $2^{w_{i}}$ pounds. In each step, Duff can lift some of the remaining weights and throw them away. She does this until there is no weight left. Malek asked her to minimize the number of steps.
Duff is a competitive programming fan. That's why in each step she can only lift and throw away a sequence of weights $2^{a_{1}}, ..., 2^{a_{k}}$ if and only if there exists a non-negative integer $x$ such that $2^{a_{1}} + 2^{a_{2}} + ... + 2^{a_{k}} = 2^{x}$, i.e. the sum of those numbers is a power of two.
Duff is a competitive programming fan, but not a programmer. That's why she asked for your help. Help her minimize the number of steps.
|
Problem is: you have to find the minimum number $k$ such that there is a sequence $a_{1}, a_{2}, ..., a_{k}$ with $2^{a_{1}} + 2^{a_{2}} + ... + 2^{a_{k}} = S = 2^{w_{1}} + 2^{w_{2}} + ... + 2^{w_{n}}$. Obviously, the minimum value of $k$ is the number of set bits in the binary representation of $S$ (the proof is easy, you can do it as a practice :P). Our only problem is how to count the number of set bits in the binary representation of $S$. Building the binary representation of $S$ as an array in $O(n+\max(w_{i}))$ is easy:

    MAXN = 1000000 + log(1000000)
    bit[0..MAXN] = {}              // all equal to zero
    ans = 0
    for i = 1 to n
        bit[w[i]]++                // after this, some bits may be greater than 1; we fix them below
    for i = 0 to MAXN - 1
        bit[i + 1] += bit[i] / 2   // floor(bit[i]/2)
        bit[i] %= 2                // bit[i] = bit[i] modulo 2
        ans += bit[i]              // if bit[i] = 0 the answer doesn't change, otherwise it increases by 1

Time complexity: $O(n+\max(w_{i}))$
|
[
"greedy"
] | 1,500
|
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace __gnu_pbds;
using namespace std;
#define Foreach(i, c) for(__typeof((c).begin()) i = (c).begin(); i != (c).end(); ++i)
#define For(i,a,b) for(int (i)=(a);(i) < (b); ++(i))
#define rof(i,a,b) for(int (i)=(a);(i) > (b); --(i))
#define rep(i, c) for(auto &(i) : (c))
#define x first
#define y second
#define pb push_back
#define PB pop_back()
#define iOS ios_base::sync_with_stdio(false)
#define sqr(a) (((a) * (a)))
#define all(a) a.begin() , a.end()
#define error(x) cerr << #x << " = " << (x) <<endl
#define Error(a,b) cerr<<"( "<<#a<<" , "<<#b<<" ) = ( "<<(a)<<" , "<<(b)<<" )\n";
#define errop(a) cerr<<#a<<" = ( "<<((a).x)<<" , "<<((a).y)<<" )\n";
#define coud(a,b) cout<<fixed << setprecision((b)) << (a)
#define L(x) ((x)<<1)
#define R(x) (((x)<<1)+1)
#define umap unordered_map
#define double long double
typedef long long ll;
typedef pair<int,int>pii;
typedef vector<int> vi;
typedef complex<double> point;
template <typename T> using os = tree<T, null_type, less<T>, rb_tree_tag, tree_order_statistics_node_update>;
template <class T> inline void smax(T &x,T y){ x = max((x), (y));}
template <class T> inline void smin(T &x,T y){ x = min((x), (y));}
const int maxn = 2e6 + 100;
int a, cnt[maxn + 10], n;
int main(){
	scanf("%d", &n);
	For(i,0,n){
		scanf("%d", &a);
		cnt[a] ++;
	}
	int ans = 0;
	For(i,0,maxn)if(cnt[i]){
		cnt[i + 1] += cnt[i]/2;
		cnt[i] %= 2;
		if(cnt[i])
			++ ans;
	}
	printf("%d\n", ans);
	return 0;
}
|
587
|
B
|
Duff in Beach
|
While Duff was resting on the beach, she accidentally found a strange array $b_{0}, b_{1}, ..., b_{l - 1}$ consisting of $l$ positive integers. This array was strange because it was extremely long, but there was another (maybe shorter) array, $a_{0}, ..., a_{n - 1}$, such that $b$ can be built from $a$ with the formula: $b_{i} = a_{i \bmod n}$, where $a \bmod b$ denotes the remainder of dividing $a$ by $b$.
Duff is so curious that she wants to know the number of subsequences of $b$ like $b_{i_{1}}, b_{i_{2}}, ..., b_{i_{x}}$ ($0 ≤ i_{1} < i_{2} < ... < i_{x} < l$), such that:
- $1 ≤ x ≤ k$
- For each $1 ≤ j ≤ x - 1$, $\lfloor{\frac{i_{j}}{n}}\rfloor+1=\lfloor{\frac{i_{j+1}}{n}}\rfloor$
- For each $1 ≤ j ≤ x - 1$, $b_{i_{j}} ≤ b_{i_{j + 1}}$, i.e. this subsequence is non-decreasing.
Since this number can be very large, she wants to know it modulo $10^{9} + 7$.
Duff is not a programmer, and Malek is unavailable at the moment. So she asked for your help. Please tell her this number.
|
If we fix $x$ and $i_{x} \bmod n$, then the problem is solved (because we can then multiply by the number of valid distinct values of $\lfloor{\frac{i_{1}}{n}}\rfloor$). So let $dp[i][j]$ be the number of valid subsequences of $b$ where $x = j$, $\lfloor{\frac{i_{1}}{n}}\rfloor=0$ and $i_{x} \bmod n = i$. Of course, for every $i$, $dp[i][1] = 1$. For calculating the value of $dp[i][j]$: $dp[i][j]=\sum_{0 \le t \le n-1,\ a_{t} \le a_{i}} dp[t][j-1]$. For this purpose, we can sort the array $a$ and use two pointers: if $p_{0}, p_{1}, ..., p_{n - 1}$ is a permutation of $0, ..., n - 1$ such that for each $0 \le t < n - 1$, $a_{p_{t}} \le a_{p_{t + 1}}$:

for i = 0 to n-1
    dp[i][1] = 1
for j = 2 to k
    pointer = 0
    sum = 0
    for t = 0 to n-1 // iterate over indices in sorted order
        i = p[t]
        while pointer < n and a[p[pointer]] <= a[i]
            sum = (sum + dp[p[pointer ++]][j - 1]) % MOD
        dp[i][j] = sum

Now, if $c=\lceil{\frac{l}{n}}\rceil$ and $x = (l - 1) \bmod n$, then the answer equals $\sum_{i=0}^{x}\sum_{j=1}^{\min(c,k)}dp[i][j]\times(c-j+1) + \sum_{i=x+1}^{n-1}\sum_{j=1}^{\min(c-1,k)}dp[i][j]\times(c-j)$ (there are $c - j + 1$ valid different values of $\lfloor{\frac{i_{1}}{n}}\rfloor$ for the first group and $c - j$ for the second group). Time complexity: $O(nk)$
|
[
"dp"
] | 2,100
|
"#include <bits/stdc++.h>\n#include <ext/pb_ds/assoc_container.hpp>\n#include <ext/pb_ds/tree_policy.hpp>\nusing namespace __gnu_pbds;\nusing namespace std;\n#define Foreach(i, c) for(__typeof((c).begin()) i = (c).begin(); i != (c).end(); ++i)\n#define For(i,a,b) for(int (i)=(a);(i) < (b); ++(i))\n#define rof(i,a,b) for(int (i)=(a);(i) > (b); --(i))\n#define rep(i, c) for(auto &(i) : (c))\n#define x first\n#define y second\n#define pb push_back\n#define PB pop_back()\n#define iOS ios_base::sync_with_stdio(false)\n#define sqr(a) (((a) * (a)))\n#define all(a) a.begin() , a.end()\n#define error(x) cerr << #x << \" = \" << (x) <<endl\n#define Error(a,b) cerr<<\"( \"<<#a<<\" , \"<<#b<<\" ) = ( \"<<(a)<<\" , \"<<(b)<<\" )\\n\";\n#define errop(a) cerr<<#a<<\" = ( \"<<((a).x)<<\" , \"<<((a).y)<<\" )\\n\";\n#define coud(a,b) cout<<fixed << setprecision((b)) << (a)\n#define L(x) ((x)<<1)\n#define R(x) (((x)<<1)+1)\n#define umap unordered_map\n#define double long double\ntypedef long long ll;\ntypedef pair<int,int>pii;\ntypedef vector<int> vi;\ntypedef complex<double> point;\ntemplate <typename T> using os = tree<T, null_type, less<T>, rb_tree_tag, tree_order_statistics_node_update>;\ntemplate <class T> inline void smax(T &x,T y){ x = max((x), (y));}\ntemplate <class T> inline void smin(T &x,T y){ x = min((x), (y));}\nconst int maxn = 1e6 + 100, mod = 1e9 + 7;\nint a[maxn], dp[maxn][2], p[maxn];\nll l, c;\nint n, k;\nll ans;\nint last;\ninline void calc(int i, int j){\n\tll t = (c - j + 1LL) % mod;\n\tif(i > last && l % (ll)n)\n\t\tt = (t - 1LL + mod) % mod;\n\tll x = (1LL * t * (ll)dp[i][j & 1]) % mod;\n\tans = (ans + x) % mod;\n}\nint main(){\n\tscanf(\"%d %lld %d\", &n, &l, &k);\n\tc = l / (ll)n;\n\tlast = ((ll)(l - 1LL) % (ll)n);\n\tif(l % n)\n\t\t++ c;\n\tk = (int)min((ll)k, c);\n\tFor(i,0,n){\n\t\tscanf(\"%d\", a + i);\n\t\tp[i] = i;\n\t\tdp[i][1] = 1;\n\t\tcalc(i, 1);\n\t}\n\tsort(p, p + n, [](const int &i, const int &j){return pii(a[i], i) < pii(a[j], j);});\n\tint po 
= 0;\n\tFor(j,2,k + 1){\n\t\tpo = 0;\n\t\tint x = j & 1, y = !x;\n\t\tint sum = 0;\n\t\tFor(iii,0,n){\n\t\t\tint i = p[iii];\n\t\t\twhile(po < n && a[p[po]] <= a[i])\n\t\t\t\tsum = (sum + dp[p[po ++]][y]) % mod;\n\t\t\tdp[i][x] = sum;\n\t\t\tcalc(i, j);\n\t\t}\n\t}\n\tans %= mod;\n\tans = (ans + mod) % mod;\n\tprintf(\"%d\\n\", (int)ans);\n\treturn 0;\n}"
|
587
|
C
|
Duff in the Army
|
Recently Duff has been a soldier in the army. Malek is her commander.
Their country, Andarz Gu, has $n$ cities (numbered from $1$ to $n$) and $n - 1$ bidirectional roads. Each road connects two different cities. There exists a unique path between any two cities.
There are also $m$ people living in Andarz Gu (numbered from $1$ to $m$). Each person has an ID number. The ID number of the $i$-th person is $i$, and he/she lives in city number $c_{i}$. Note that there may be more than one person in a city, and there may also be cities with no people.
Malek loves to order. That's why he asks Duff to answer to $q$ queries. In each query, he gives her numbers $v, u$ and $a$.
To answer a query:
Assume there are $x$ people living in the cities lying on the path from city $v$ to city $u$. Assume these people's IDs are $p_{1}, p_{2}, ..., p_{x}$ in increasing order.
If $k = min(x, a)$, then Duff should tell Malek the numbers $k, p_{1}, p_{2}, ..., p_{k}$ in this order. In other words, Malek wants to know the $a$ minimums on that path (or fewer, if there are fewer than $a$ people).
Duff is very busy at the moment, so she asked you to help her and answer the queries.
|
The solution is something like the fourth LCA approach discussed here. For each $1 \le i \le n$ and $0 \le j \le \lg(n)$, store the minimum 10 people on the path from city (vertex) $i$ to its $2^{j}$-th parent in an array. Now all that is needed is: how to merge the arrays of two paths? Keep these (at most 10) values in increasing order and, for merging, use a merge function which works in $O(20)$; then delete the extra values (more than 10). For a query, do the same as described in the article (just like the preprocessing part). Time complexity: $O((n+q) \cdot a \cdot \lg(n))$
|
[
"data structures",
"trees"
] | 2,200
|
"#include <bits/stdc++.h>\n#include <ext/pb_ds/assoc_container.hpp>\n#include <ext/pb_ds/tree_policy.hpp>\nusing namespace __gnu_pbds;\nusing namespace std;\n#define Foreach(i, c) for(__typeof((c).begin()) i = (c).begin(); i != (c).end(); ++i)\n#define For(i,a,b) for(int (i)=(a);(i) < (b); ++(i))\n#define rof(i,a,b) for(int (i)=(a);(i) > (b); --(i))\n#define rep(i, c) for(auto &(i) : (c))\n#define x first\n#define y second\n#define pb push_back\n#define PB pop_back()\n#define iOS ios_base::sync_with_stdio(false)\n#define sqr(a) (((a) * (a)))\n#define all(a) a.begin() , a.end()\n#define error(x) cerr << #x << \" = \" << (x) <<endl\n#define Error(a,b) cerr<<\"( \"<<#a<<\" , \"<<#b<<\" ) = ( \"<<(a)<<\" , \"<<(b)<<\" )\\n\";\n#define errop(a) cerr<<#a<<\" = ( \"<<((a).x)<<\" , \"<<((a).y)<<\" )\\n\";\n#define coud(a,b) cout<<fixed << setprecision((b)) << (a)\n#define L(x) ((x)<<1)\n#define R(x) (((x)<<1)+1)\n#define umap unordered_map\ntypedef long long ll;\ntypedef pair<int,int>pii;\ntypedef vector<int> vi;\ntypedef complex<double> point;\ntemplate <class T> inline void smax(T &x,T y){ x = max((x), (y));}\ntemplate <class T> inline void smin(T &x,T y){ x = min((x), (y));}\ntemplate <typename T> using os = tree<T, null_type, less<T>, rb_tree_tag, tree_order_statistics_node_update>;\nconst int maxn = 1e5 + 100, maxl = 20;\nstruct myvec{\n\tint n = 0;\n\tint a[10] = {};\n\tinline void pop_back(){if(n)--n;}\n\tinline int & operator [](const int &x){return a[x];}\n\tinline int back(){if(n)return a[n-1];return -1;}\n\tinline void push_back(const int &x){if(n < 10 && x != back())a[n ++] = x;}\n\tinline int size(){return n;}\n\tinline bool empty(){return !n;}\n}vec[maxn][maxl], ins[maxn];\nvi adj[maxn];\nint par[maxn][maxl], h[maxn];\ninline myvec operator +(myvec &v,myvec &u){\n\tmyvec ans;\n\tint i = 0, j = 0;\n\twhile(i < v.size() && j < u.size()){\n\t\tif(v[i] <= u[j])\n\t\t\tans.pb(v[i ++]);\n\t\telse\n\t\t\tans.pb(u[j ++]);\n\t}\n\twhile(i < v.size())\n\t\tans.pb(v[i 
++]);\n\twhile(j < u.size())\n\t\tans.pb(u[j ++]);\n//\tans.n = unique(ans.a, ans.a + ans.n) - ans.a;\n\treturn ans;\n}\ninline void dfs(int v = 0, int p = -1){\n\tpar[v][0] = p;\n\tvec[v][0] = ins[v];\n\tif(p + 1){\n\t\th[v] = h[p] + 1;\n\t\tvec[v][0] = ins[v] + ins[p];\n\t}\n\tFor(i,1,maxl)\n\t\tif(par[v][i-1] + 1){\n\t\t\tpar[v][i] = par[par[v][i-1]][i-1];\n\t\t\tvec[v][i] = vec[v][i-1] + vec[par[v][i-1]][i-1];\n\t\t}\n\trep(u, adj[v])\tif(p - u)\n\t\tdfs(u, v);\n}\ninline myvec lca(int v, int u){\n\tmyvec ans = ins[v] + ins[u];\n\tif(h[v] < h[u])\tswap(v, u);\n\trof(i,maxl-1,-1)\n\t\tif(par[v][i] + 1 && h[par[v][i]] >= h[u]){\n\t\t\tans = ans + vec[v][i];\n\t\t\tv = par[v][i];\n\t\t}\n\tif(v == u)\treturn ans;\n\trof(i,maxl-1,-1)\n\t\tif(par[v][i] - par[u][i]){\n\t\t\tans = ans + vec[v][i];\n\t\t\tans = ans + vec[u][i];\n\t\t\tv = par[v][i], u = par[u][i];\n\t\t}\n\tans = ans + ins[par[v][0]];\n\treturn ans;\n}\nint n, m, q;\nint main(){\n\tmemset(par, -1, sizeof par);\n\tscanf(\"%d %d %d\", &n, &m, &q);\n\tFor(i,1,n){\n\t\tint v, u;\n\t\tscanf(\"%d %d\", &v, &u);\n\t\t-- v, -- u;\n\t\tadj[v].pb(u);\n\t\tadj[u].pb(v);\n\t}\n\tFor(i,1,m + 1){\n\t\tint v;\n\t\tscanf(\"%d\", &v);\n\t\t-- v;\n\t\tif((int)ins[v].size() < 10)\n\t\t\tins[v].pb(i);\n\t}\n\tdfs();\n\twhile(q--){\n\t\tint v, u, a;\n\t\tscanf(\"%d %d %d\", &v, &u, &a);\n\t\t-- v, -- u;\n\t\tmyvec ans = lca(v, u);\n\t\tif(ans.size() > a)\n\t\t\tans.n = a;\n\t\tvi vec;\n\t\tvec.pb(ans.size());\n\t\tFor(i,0,ans.size())\n\t\t\tvec.pb(ans[i]);\n\t\tFor(i,0,vec.size()){\n\t\t\tprintf(\"%d\", vec[i]);\n\t\t\tif(i + 1 == vec.size())\n\t\t\t\tputs(\"\");\n\t\t\telse\n\t\t\t\tprintf(\" \");\n\t\t}\n\t}\n\treturn 0;\n}"
|
587
|
D
|
Duff in Mafia
|
Duff is one of the heads of the Mafia in her country, Andarz Gu. Andarz Gu has $n$ cities (numbered from $1$ to $n$) connected by $m$ bidirectional roads (numbered from $1$ to $m$).
Each road has a destructing time and a color. The $i$-th road connects cities $v_{i}$ and $u_{i}$; its color is $c_{i}$ and its destructing time is $t_{i}$.
Mafia wants to destruct a matching in Andarz Gu. A matching is a subset of roads such that no two roads in this subset share an endpoint. They can destruct these roads in parallel, i.e. the total destruction time is the maximum over the destruction times of all selected roads.
They want two conditions to be satisfied:
- The remaining roads form a proper coloring.
- Destructing time of this matching is minimized.
The remaining roads after destructing this matching form a proper coloring if and only if no two roads of the same color share an endpoint; in other words, the edges of each color should form a matching.
There are no programmers in the Mafia. That's why Duff asked for your help. Please help her determine which matching to destruct in order to satisfy those conditions (or state that this is not possible).
|
Run binary search on the answer ($t$). For checking whether the answer is less than or equal to $x$ (check(x)): First of all, delete all edges with destructing time greater than $x$. Now, if there is more than one pair of edges with the same color connected to a vertex (because we can delete at most one of them), the answer is "No". Use 2-SAT. Consider a literal for each edge $e$ ($x_{e}$). If $x_{e} = TRUE$, it means it should be destructed, and it shouldn't otherwise. There are some conditions: For each vertex $v$, if there is one (exactly one) pair of edges like $i$ and $j$ with the same color connected to $v$, then we should have the clause $x_{i}\vee x_{j}$. For each vertex $v$, if the edges connected to it are $e_{1}, e_{2}, ..., e_{k}$, we should make sure that there is no pair $(i, j)$ with $1 \le i < j \le k$ and $x_{e_{i}} = x_{e_{j}} = TRUE$. The naive approach is to add a clause $\neg x_{e_{i}}\vee\neg x_{e_{j}}$ for each pair, but that's not efficient. The efficient way to satisfy the second condition is to use a prefix OR: add $k$ new literals $p_{1}, p_{2}, ..., p_{k}$ and, for each $j \le i$, make sure $x_{e_{j}}\Rightarrow p_{i}$. To ensure this, we can add two clauses for each $p_{i}$: $\neg x_{e_{i}}\vee p_{i}$ and $\neg p_{i-1}\vee p_{i}$ (the second one only for $i > 1$). The only thing left is to make sure $\neg p_{i}\vee\neg x_{e_{i+1}}$ (there are no two TRUE edges). This way the numbers of literals and clauses are $O(n+m)$. After the binary search is over, we should run check(t) once more to get a sample matching. Time complexity: $O((n+m)\lg(m))$ (but slow, because of the constant factor)
|
[
"2-sat",
"binary search"
] | 3,100
|
"#include <bits/stdc++.h>\n#include <ext/pb_ds/assoc_container.hpp>\n#include <ext/pb_ds/tree_policy.hpp>\nusing namespace __gnu_pbds;\nusing namespace std;\n#define Foreach(i, c) for(__typeof((c).begin()) i = (c).begin(); i != (c).end(); ++i)\n#define For(i,a,b) for(int (i)=(a);(i) < (b); ++(i))\n#define rof(i,a,b) for(int (i)=(a);(i) > (b); --(i))\n#define rep(i, c) for(auto &(i) : (c))\n#define x first\n#define y second\n#define pb push_back\n#define PB pop_back()\n#define iOS ios_base::sync_with_stdio(false)\n#define sqr(a) (((a) * (a)))\n#define all(a) a.begin() , a.end()\n#define error(x) cerr << #x << \" = \" << (x) <<endl\n#define Error(a,b) cerr<<\"( \"<<#a<<\" , \"<<#b<<\" ) = ( \"<<(a)<<\" , \"<<(b)<<\" )\\n\";\n#define errop(a) cerr<<#a<<\" = ( \"<<((a).x)<<\" , \"<<((a).y)<<\" )\\n\";\n#define coud(a,b) cout<<fixed << setprecision((b)) << (a)\n#define L(x) ((x)<<1)\n#define R(x) (((x)<<1)+1)\n#define umap unordered_map\n#define double long double\ntypedef long long ll;\ntypedef pair<int,int>pii;\ntypedef vector<int> vi;\ntypedef complex<double> point;\ntemplate <typename T> using os = tree<T, null_type, less<T>, rb_tree_tag, tree_order_statistics_node_update>;\ntemplate <class T> inline void smax(T &x,T y){ x = max((x), (y));}\ntemplate <class T> inline void smin(T &x,T y){ x = min((x), (y));}\nconst int maxn = 5e4 + 10, maxN = 6 * maxn;\nint n, m, comp[maxN];\nvi adj[maxN], adl[maxN];\nbool mark[maxN];\nstruct edge{\n\tint v, u, c, t;\n}e[maxn];\nvi ed[maxn];\ninline int neg(int x){\n\treturn x ^ 1;\n}\ninline void add_edge(int v, int u){\n\tadj[v].pb(u);\n\tadl[u].pb(v);\n}\ninline void add_clause(int v, int u){\n\tadd_edge(neg(v), u);\n\tadd_edge(neg(u), v);\n}\nint sz[maxN], SZ[maxN];\nstack <int> s;\ninline void dfs(int v){\n\tmark[v] = true;\n\trep(u, adj[v])\tif(!mark[u])\n\t\tdfs(u);\n\ts.push(v);\n}\nbool flag = true;\ninline void dfs(int v, int c){\n\tcomp[v] = c;\n\tif(comp[v] == comp[neg(v)]){\n\t\tflag = false;\n\t\treturn ;\n\t}\n\trep(u, 
adl[v])\tif(!comp[u]){\n\t\tdfs(u, c);\n\t\tif(!flag)\treturn ;\n\t}\n}\nint nex;\ninline bool check(int T){\n\tflag = true;\n\tFor(i,0,nex){\n\t\tmark[i] = false;\n\t\tadj[i].resize(sz[i]);\n\t\tadl[i].resize(SZ[i]);\n\t\tcomp[i] = 0;\n\t}\n\twhile(!s.empty())\ts.pop();\n\tFor(i,0,m)\tif(e[i].t > T)\n\t\tadd_edge(L(i), R(i));\n\tFor(i,0,nex)\n\t\tif(!mark[i])\n\t\t\tdfs(i);\n\tint cnt = 1;\n\twhile(!s.empty()){\n\t\tint v = s.top();\n\t\ts.pop();\n\t\tif(comp[v])\tcontinue;\n\t\tdfs(v, cnt ++);\n\t\tif(!flag)\treturn false;\n\t}\n\treturn flag;\n}\nvi ts = {0};\nint main(){\n\tscanf(\"%d %d\", &n, &m);\n\tFor(i,0,m){\n\t\tscanf(\"%d %d %d %d\", &e[i].v, &e[i].u, &e[i].c, &e[i].t);\n\t\t-- e[i].v;\n\t\t-- e[i].u;\n\t\ted[e[i].v].pb(i);\n\t\ted[e[i].u].pb(i);\n\t\tts.pb(e[i].t);\n\t}\n\tnex = L(m);\n\tFor(i,0,n){\n\t\tsort(all(ed[i]), [](const int &x, const int &y){return e[x].c < e[y].c;});\n\t\tint cnt = 0;\n\t\tint prv;\n\t\tif(!ed[i].empty()){\n\t\t\tprv = nex;\n\t\t\tnex += 2;\n\t\t\tadd_clause(R(ed[i][0]), prv);\n\t\t}\n\t\tFor(j,1,ed[i].size()){\n\t\t\tint x = ed[i][j-1], y = ed[i][j];\n\t\t\tint cur = nex;\n\t\t\tnex += 2;\n\t\t\tif(e[x].c == e[y].c){\n\t\t\t\t++ cnt;\n\t\t\t\tif(cnt > 1){\n\t\t\t\t\tputs(\"No\");\n\t\t\t\t\treturn 0;\n\t\t\t\t}\n\t\t\t\tadd_clause(L(x), L(y));\n\t\t\t}\n\t\t\tadd_clause(R(y), cur);\n\t\t\tadd_clause(neg(prv), cur);\n\t\t\tadd_clause(neg(prv), R(y));\n\t\t\tprv = cur;\n\t\t}\n\t}\n\tFor(i,0,maxN)\n\t\tsz[i] = adj[i].size(),SZ[i] = adl[i].size();\n\tsort(all(ts));\n\tts.resize(unique(all(ts)) - ts.begin());\n\tint lo = 0, hi = ts.size() - 1;\n\twhile(lo < hi){\n\t\tint mid = (lo + hi)/2;\n\t\tif(check(ts[mid]))\n\t\t\thi = mid;\n\t\telse\n\t\t\tlo = mid + 1;\n\t}\n\tif(lo >= (int)ts.size() or !check(ts[lo])){\n\t\tputs(\"No\");\n\t\treturn 0;\n\t}\n\tputs(\"Yes\");\n\tfill(mark, mark + nex, false);\n\tint T = ts[lo];\n\tFor(i,0,nex){\n\t\tint v = comp[i], u = comp[neg(i)];\n\t\tmark[min(v, u)] = false;\n\t\tmark[max(v, u)] = 
true;\n\t}\n\tvi ans;\n\tFor(i,0,m)\n\t\tif(mark[comp[L(i)]])\n\t\t\tans.pb(i + 1);\n\tint K = ans.size();\n\tprintf(\"%d %d\\n\", T, K);\n\tint cnt = 0;\n\trep(i, ans){\n\t\tprintf(\"%d\", i);\n\t\t++ cnt;\n\t\tif(cnt == K)\n\t\t\tputs(\"\");\n\t\telse\n\t\t\tprintf(\" \");\n\t}\n\treturn 0;\n}"
|
587
|
E
|
Duff as a Queen
|
Duff is the queen of her country, Andarz Gu. She's a competitive programming fan. That's why, when she saw her minister, Malek, free, she gave him a sequence consisting of $n$ non-negative integers, $a_{1}, a_{2}, ..., a_{n}$, and asked him to perform $q$ queries on this sequence for her.
There are two types of queries:
- given numbers $l, r$ and $k$, Malek should perform $a_{i}=a_{i}\oplus k$ for each $l ≤ i ≤ r$ ($a\oplus b$ denotes the bitwise exclusive OR of numbers $a$ and $b$).
- given numbers $l$ and $r$, Malek should tell her the score of the sequence $a_{l}, a_{l + 1}, ... , a_{r}$.
Score of a sequence $b_{1}, ..., b_{k}$ is the number of its different Kheshtaks. A non-negative integer $w$ is a Kheshtak of this sequence if and only if there exists a subsequence of $b$, let's denote it as $b_{i_{1}}, b_{i_{2}}, ... , b_{i_{x}}$ (possibly empty), such that $w=b_{i_{1}}\oplus b_{i_{2}}\oplus\ldots\oplus b_{i_{x}}$ ($1 ≤ i_{1} < i_{2} < ... < i_{x} ≤ k$). If this subsequence is empty, then $w = 0$.
Unlike Duff, Malek is not a programmer. That's why he asked for your help. Please help him perform these queries.
|
Lemma #1: If numbers $b_{1}, b_{2}, ..., b_{k}$ are $k$ Kheshtaks of $a_{1}, ..., a_{n}$, then $b_{1}\oplus b_{2}\oplus\ldots\oplus b_{k}$ is a Kheshtak of $a_{1}, ..., a_{n}$. Proof: For each $1 \le i \le k$, let $mask_{i}$ be a binary bitmask of length $n$ whose $j$-th bit shows whether $a_{j}$ belongs to a subsequence (subset) of $a_{1}, ..., a_{n}$ with xor equal to $b_{i}$. Then the xor of the elements of the subsequence (subset) of $a_{1}, ..., a_{n}$ with bitmask equal to $mask_{1}\oplus mask_{2}\oplus\ldots\oplus mask_{k}$ equals $b_{1}\oplus b_{2}\oplus\ldots\oplus b_{k}$. So it's a Kheshtak of this sequence. From this lemma we get another result: if all numbers $b_{1}, b_{2}, ..., b_{k}$ are Kheshtaks of $a_{1}, ..., a_{n}$, then every Kheshtak of $b_{1}, b_{2}, ..., b_{k}$ is a Kheshtak of $a_{1}, ..., a_{n}$.

Lemma #2: The score of sequence $a_{1}, a_{2}, ..., a_{n}$ is equal to the score of sequence $a_{1},a_{1}\oplus a_{2},a_{2}\oplus a_{3},...,a_{n-1}\oplus a_{n}$. Proof: If we denote the second sequence by $b_{1}, b_{2}, ..., b_{n}$, then for each $1 \le i \le n$: $b_{i} = a_{i-1}\oplus a_{i}$ (with $a_{0} = 0$) and $a_{i} = b_{1}\oplus b_{2}\oplus\ldots\oplus b_{i}$. So each element of sequence $b$ is a Kheshtak of sequence $a$ and vice versa. Hence, by the result of Lemma #1, each Kheshtak of sequence $b$ is a Kheshtak of sequence $a$ and vice versa. So $score(b_{1}, ..., b_{n}) \le score(a_{1}, ..., a_{n})$ and $score(a_{1}, ..., a_{n}) \le score(b_{1}, ..., b_{n})$, therefore $score(a_{1}, ..., a_{n}) = score(b_{1}, ..., b_{n})$.

Back to the original problem: denote another array $b_{2}, ..., b_{n}$ where $b_{i}=a_{i-1}\oplus a_{i}$. Let's solve these two problems:

1. We have array $a_{1}, ..., a_{n}$ and $q$ queries of two types: $upd(l, r, k)$: given numbers $l, r$ and $k$, for each $l \le i \le r$, perform $a_{i}=a_{i}\oplus k$; $ask(i)$: given number $i$, return the value of $a_{i}$. This problem can be solved easily with a simple segment tree using lazy propagation.

2. We have array $b_{2}, ..., b_{n}$ and queries of two types: $modify(p, k)$: perform $b_{p} = k$; $basis(l, r)$: find and return the basis vector of $b_{l}, b_{l + 1}, ..., b_{r}$ (using Gaussian elimination; its size is at most 32). This problem can be solved by a segment tree where each node stores the basis of its substring (node $[l, r)$ has the basis of sequence $b_{l}, ..., b_{r - 1}$). This is how we insert into a basis vector $v$:

insert(x, v)
    for a in v
        if a & -a & x
            x ^= a
    if !x
        return
    for a in v
        if x & -x & a
            a ^= x
    v.push(x)

The size of $v$ will always be at most 32. For merging two nodes (of the segment tree), we can insert the elements of one into the other. For handling the two query types, we act like this:

Type one: call $upd(l, r, k)$, $modify(l,b_{l}\oplus k)$ and $modify(r+1,b_{r+1}\oplus k)$.

Type two: let $b = basis(l + 1, r)$. Call $insert(a_{l}, b)$ and then print $2^{b.size()}$ as the answer.

Time complexity: $O(n\lg(n)\times32^{2})$ = $O(n\lg(n)\lg^{2}(\max(a_{1},...,a_{n})))$
|
[
"data structures"
] | 2,900
|
"#include <bits/stdc++.h>\n#include <ext/pb_ds/assoc_container.hpp>\n#include <ext/pb_ds/tree_policy.hpp>\nusing namespace __gnu_pbds;\nusing namespace std;\n#define Foreach(i, c) for(__typeof((c).begin()) i = (c).begin(); i != (c).end(); ++i)\n#define For(i,a,b) for(int (i)=(a);(i) < (b); ++(i))\n#define rof(i,a,b) for(int (i)=(a);(i) > (b); --(i))\n#define rep(i, c) for(auto &(i) : (c))\n#define x first\n#define y second\n#define pb push_back\n#define PB pop_back()\n#define iOS ios_base::sync_with_stdio(false)\n#define sqr(a) (((a) * (a)))\n#define all(a) a.begin() , a.end()\n#define error(x) cerr << #x << \" = \" << (x) <<endl\n#define Error(a,b) cerr<<\"( \"<<#a<<\" , \"<<#b<<\" ) = ( \"<<(a)<<\" , \"<<(b)<<\" )\\n\";\n#define errop(a) cerr<<#a<<\" = ( \"<<((a).x)<<\" , \"<<((a).y)<<\" )\\n\";\n#define coud(a,b) cout<<fixed << setprecision((b)) << (a)\n#define L(x) ((x)<<1)\n#define R(x) (((x)<<1)+1)\n#define umap unordered_map\n#define double long double\ntypedef long long ll;\ntypedef pair<int,int>pii;\ntypedef vector<int> vi;\ntypedef complex<double> point;\ntemplate <typename T> using os = tree<T, null_type, less<T>, rb_tree_tag, tree_order_statistics_node_update>;\ntemplate <class T> inline void smax(T &x,T y){ x = max((x), (y));}\ntemplate <class T> inline void smin(T &x,T y){ x = min((x), (y));}\nconst int maxn = 2e5 + 10;\ninline int sbit(const int &x){return x & -x;}\nstruct basis{\n\tint t = 0, a[32] = {};\n\tinline void pb(int x){\n\t\tFor(i,0,t)\n\t\t\tif(x & sbit(a[i]))\n\t\t\t\tx ^= a[i];\n\t\tif(!x)\treturn ;\n\t\tFor(i,0,t)\n\t\t\tif(sbit(x) & a[i])\n\t\t\t\ta[i] ^= x;\n\t\ta[t ++] = x;\n\t}\n\tinline void perform(int k){\n\t\tFor(i,0,t)\n\t\t\tif(a[i] & 1)\n\t\t\t\ta[i] ^= k;\n\t}\n\tinline basis operator + (basis b){\n\t\tif(!t)\treturn b;\n\t\tif(!b.t)\treturn *this;\n\t\tbasis ans;\n\t\tFor(i,0,t)\n\t\t\tans.pb(a[i]);\n\t\tFor(i,0,b.t)\n\t\t\tans.pb(b.a[i]);\n\t\treturn ans;\n\t}\n}v[maxn * 4], emp;\nint lz[maxn * 4], a[maxn], n, q, 
p2[35];\ninline void upd(int id, int k){\n\tif(!k)\treturn ;\n\tlz[id] ^= k;\n\tv[id].perform(k);\n}\ninline void shift(int id){\n\tupd(L(id), lz[id]);\n\tupd(R(id), lz[id]);\n\tlz[id] = 0;\n}\ninline void build(int id = 1, int l = 0, int r = n){\n\tif(r - l < 2){\n\t\tv[id].pb(a[l]);\n\t\treturn ;\n\t}\n\tint mid = (l + r)/2;\n\tbuild(L(id), l, mid);\n\tbuild(R(id), mid, r);\n\tv[id] = v[L(id)] + v[R(id)];\n}\ninline void upd(int x, int y, int k, int id = 1, int l = 0, int r = n){\n\tif(x >= r or l >= y)\treturn ;\n\tif(x <= l && r <= y){\n\t\tupd(id, k);\n\t\treturn ;\n\t}\n\tint mid = (l + r)/2;\n\tshift(id);\n\tupd(x, y, k, L(id), l, mid);\n\tupd(x, y, k, R(id), mid, r);\n\tv[id] = v[L(id)] + v[R(id)];\n}\ninline basis ask(int x, int y, int id = 1, int l = 0, int r = n){\n\tif(x >= r or l >= y)\treturn emp;\n\tif(x <= l && r <= y)\treturn v[id];\n\tint mid = (l + r)/2;\n\tshift(id);\n\treturn ask(x, y, L(id), l, mid) + ask(x, y, R(id), mid, r);\n}\nint main(){\n\tscanf(\"%d %d\", &n, &q);\n\tFor(i,0,n){\n\t\tscanf(\"%d\", a + i);\n\t\ta[i] = 2*a[i] + 1;\n\t}\n\tbuild();\n\tFor(i,0,35)\tp2[i] = (1 << i);\n\twhile(q --){\n\t\tint t, l, r, k;\n\t\tscanf(\"%d %d %d\", &t, &l, &r);\n\t\t-- l;\n\t\tif(t == 1){\n\t\t\tscanf(\"%d\", &k);\n\t\t\tupd(l, r, k << 1);\n\t\t}\n\t\telse{\n\t\t\tbasis ans, b = ask(l, r);\n\t\t\tFor(i,0,b.t)\n\t\t\t\tans.pb(b.a[i] >> 1);\n\t\t\tprintf(\"%d\\n\", p2[ans.t]);\n\t\t}\n\t}\n\treturn 0;\n}\n"
|
587
|
F
|
Duff is Mad
|
Duff is mad at her friends. That's why she sometimes makes Malek take candy from one of her friends for no reason!
She has $n$ friends. Her $i$-th friend's name is $s_{i}$ (their names are not necessarily unique). $q$ times, she asks Malek to take candy from her friends. She's angry, but she also acts by rules. When she wants to ask Malek to take candy from one of her friends, say friend $k$, she chooses two numbers $l$ and $r$ and tells Malek to take exactly $\sum_{i=l}^{r}occur(s_{i},s_{k})$ candies from him/her, where $occur(t, s)$ is the number of occurrences of string $t$ in $s$.
Malek is not able to calculate how many candies to take in each of Duff's requests. That's why he asked for your help. Please tell him how many candies to take in each request.
|
Use Aho-Corasick. Assume first of all we build the trie of our strings (function $t$). If $t(v, c) \neq - 1$, it means that there is an edge in the trie outgoing from vertex $v$ with $c$ written on it. For building Aho-Corasick, consider $f(v)$ = the vertex we go into in case of failure ($t(v, c) = - 1$), i.e. the deepest vertex $u$ such that $v \neq u$ and the path from the root to $u$ is a suffix of the path from the root to $v$. Now we can build an automaton (Aho-Corasick), function $g$. For each $i$, do this (in the automaton):

cur = root
for c in s[i]
    cur = g(cur, c)
    push i in q[cur] // q is a vector; we also do this for cur = root
end[cur].push(i) // end is also a vector, consisting of the indices of strings ending in vertex cur (state cur of the automaton)
last[i] = cur // last[i] is the final state we reach searching string s[i] in automaton g

Assume $cnt(v, i)$ is the number of occurrences of number $i$ in $q[v]$. Also denote $count(v,i)=\sum_{u \in subtree(v)} cnt(u,i)$. Build another tree: for each $i$ that is not the root of the trie, let $par[i] = f(i)$ (the vertex we go to in the trie in case of failure), and call it the C-Tree. So now the problem is on a tree. Each query gives numbers $l, r, k$ and you have to find the number $\sum_{i=l}^{r}count(last[i],k)$. Act offline. If $N = 10^{5}$, then:

1. For each $i$ such that $s[i].size()>\sqrt{N}$, collect the queries (as structs) in a vector of queries $query[i]$, then run dfs on the C-Tree and, using a partial sum, answer all queries with $k = i$. There are at most $\sqrt{N}$ such numbers, so it can be done in $O(N{\sqrt{N}})$. After doing this, erase $i$ from all of $q[1], q[2], ..., q[N]$. The code (a dfs on the C-Tree) would be like this:

partial_sum[n] = {} // all 0
dfs(v, i){
    cnt = 0
    for x in q[v]
        if x == i
            ++ cnt
    for u in children[v]
        cnt += dfs(u, i)
    for x in end[v]
        partial_sum[x] += cnt
    return cnt
}
calc(i){
    dfs(root, i)
    for j = 2 to n
        partial_sum[j] += partial_sum[j-1]
    for query u in query[i]
        u.ans = partial_sum[u.r] - partial_sum[u.l - 1]
}

We just run calc(i) for each such $i$.

2. For each $i$ such that $s[i].size()\leq{\sqrt{N}}$, collect the queries (as structs) in a vector of queries $query[i]$ (each element of this vector holds three integers: $l, r$ and $ans$). Consider this problem: we have an array $a$ of length $N$ (initially all elements equal to 0) and some queries of two types: $increase(i, val)$: increase $a[i]$ by $val$; $sum(i)$: tell the value of $a[1] + a[2] + ... + a[i]$. We know that the number of queries of the first type is $O(N)$ and of the second type is $O(N{\sqrt{N}})$. Using sqrt decomposition, we can solve this problem in $O(N{\sqrt{N}})$:

K = sqrt(N)
tot[K] = {}, a[N] = {} // this code is 0-based
increase(i, val)
    while i < N and i % K > 0
        a[i ++] += val
    while i < N
        tot[i/K] += val
        i += K
sum(i)
    return a[i] + tot[i/K]

Back to our problem. Run dfs once on the C-Tree and act like this:

dfs(vertex v):
    for i in end[v]
        increase(i, 1)
    for i in q[v]
        for query u in query[i]
            u.ans += sum(u.r) - sum(u.l - 1)
    for u in children[v]
        dfs(u)
    for i in end[v]
        increase(i, -1)

Then the answer to a query $q$ is $q.ans$. Time complexity: $O(N{\sqrt{N}})$
|
[
"data structures",
"strings"
] | 3,000
|
"#include <bits/stdc++.h>\n#include <ext/pb_ds/assoc_container.hpp>\n#include <ext/pb_ds/tree_policy.hpp>\nusing namespace __gnu_pbds;\nusing namespace std;\n#define Foreach(i, c) for(__typeof((c).begin()) i = (c).begin(); i != (c).end(); ++i)\n#define For(i,a,b) for(int (i)=(a);(i) < (b); ++(i))\n#define rof(i,a,b) for(int (i)=(a);(i) > (b); --(i))\n#define rep(i, c) for(auto &(i) : (c))\n#define x first\n#define y second\n#define pb push_back\n#define PB pop_back()\n#define iOS ios_base::sync_with_stdio(false)\n#define sqr(a) (((a) * (a)))\n#define all(a) a.begin() , a.end()\n#define error(x) cerr << #x << \" = \" << (x) <<endl\n#define Error(a,b) cerr<<\"( \"<<#a<<\" , \"<<#b<<\" ) = ( \"<<(a)<<\" , \"<<(b)<<\" )\\n\";\n#define errop(a) cerr<<#a<<\" = ( \"<<((a).x)<<\" , \"<<((a).y)<<\" )\\n\";\n#define coud(a,b) cout<<fixed << setprecision((b)) << (a)\n#define L(x) ((x)<<1)\n#define R(x) (((x)<<1)+1)\n#define umap unordered_map\n#define double long double\ntypedef long long ll;\ntypedef pair<int,int>pii;\ntypedef vector<int> vi;\ntypedef complex<double> point;\ntemplate <typename T> using os = tree<T, null_type, less<T>, rb_tree_tag, tree_order_statistics_node_update>;\ntemplate <class T> inline void smax(T &x,T y){ x = max((x), (y));}\ntemplate <class T> inline void smin(T &x,T y){ x = min((x), (y));}\nconst int maxn = 1e5 + 100;\nint TRIE[maxn][26];\nstring s[maxn];\nint f[maxn], aut[maxn][26], root, nx = 1;\nvi tof[maxn];\nvi adj[maxn];\nint end[maxn];\nvi ends[maxn];\ninline void build(int x){\n\tint v = root;\n\trep(ch, s[x]){\n\t\tint c = ch - 'a';\n\t\tif(TRIE[v][c] == -1){\n\t\t\tTRIE[v][c] = nx;\n\t\t\t++ nx;\n\t\t}\n\t\tv = TRIE[v][c];\n\t}\n\t::end[x] = v;\n\t::ends[v].pb(x);\n}\ninline void ahoc(){\n\tf[root] = root;\n\tqueue<int> q;\n\tq.push(root);\n\twhile(!q.empty()){\n\t\tint v = q.front();\n\t\tq.pop();\n\t\tFor(c,0,26){\n\t\t\tif(TRIE[v][c] != -1){\n\t\t\t\taut[v][c] = TRIE[v][c];\t\n\t\t\t\tif(v != root)\n\t\t\t\t\tf[aut[v][c]] = 
aut[f[v]][c];\n\t\t\t\telse\n\t\t\t\t\tf[aut[v][c]] = root;\n\t\t\t\tq.push(TRIE[v][c]);\n\t\t\t}\n\t\t\telse{\n\t\t\t\tif(v == root)\n\t\t\t\t\taut[v][c] = root;\n\t\t\t\telse\n\t\t\t\t\taut[v][c] = aut[f[v]][c];\n\t\t\t}\n\t\t}\n\t}\n}\ninline void go(int x){\n\tint v = root;\n\trep(ch, s[x]){\n\t\tint c = ch - 'a';\n\t\tv = aut[v][c];\n\t\ttof[v].pb(x);\n\t}\n}\nint par[maxn];\nchar ch[maxn];\nconst int K = 350;\nstruct sqrt_decomposition{\n\tll sum[maxn] = {}, tot[K] = {};\n\tinline void add(int p, ll val){\n\t\twhile(p < maxn && p % K)\n\t\t\tsum[p ++] += val;\n\t\twhile(p < maxn){\n\t\t\ttot[p/K] += val;\n\t\t\tp += K;\n\t\t}\n\t}\n\tinline ll ask(int p){\n\t\treturn (p < 0 ? 0 : sum[p] + tot[p/K]);\n\t}\n}SQRT;\nstruct query{\n\tint l, r, k;\n\tll ans = 0LL;\n\tquery(){}\n\tquery(int L, int R, int K){\n\t\tl = L;\n\t\tr = R;\n\t\tk = K;\n\t}\n}Q[maxn];\nvi queries[maxn];\ninline void dfs(int v){\n\trep(a, ::ends[v])\n\t\tSQRT.add(a, 1);\n\trep(i, tof[v])\tif((int)s[i].size() < K){\n\t\trep(j, queries[i])\n\t\t\tQ[j].ans += SQRT.ask(Q[j].r) - SQRT.ask(Q[j].l - 1);\n\t}\n\trep(u, adj[v])\tdfs(u);\n\trep(a, ::ends[v])\n\t\tSQRT.add(a, -1);\n}\nll ps[maxn];\ninline int dfs(int v, int x){\n\tint cnt = 0;\n\trep(a, tof[v])\tif(a == x)\n\t\t++ cnt;\n\trep(u, adj[v])\n\t\tcnt += dfs(u, x);\n\trep(a, ::ends[v])\n\t\tps[a] += (ll)cnt;\n\treturn cnt;\n}\nint main(){\n\tmemset(TRIE, -1, sizeof TRIE);\n\tmemset(par, -1, sizeof par);\n\tint n, q;\n\tscanf(\"%d %d\", &n, &q);\n\tFor(i,0,n){\n\t\tscanf(\"%s\", ch);\n\t\ts[i] = (string)ch;\n\t\tbuild(i);\n\t}\n\tahoc();\n\tFor(i,0,n)\n\t\tgo(i);\n\tFor(i,1,nx){\n\t\tpar[i] = f[i];\n\t\tadj[par[i]].pb(i);\n\t}\n\tFor(i,0,q){\n\t\tint l, r, k;\n\t\tscanf(\"%d %d %d\", &l, &r, &k);\n\t\t-- l, -- r, -- k;\n\t\tQ[i] = query(l, r, k);\n\t\tqueries[k].pb(i);\n\t}\n\tFor(i,0,n)\tif((int)s[i].size() >= K){\n\t\tfill(ps, ps + maxn, 0LL);\n\t\tdfs(root, i);\n\t\tFor(i,1,maxn)\n\t\t\tps[i] += ps[i-1];\n\t\trep(j, 
queries[i])\n\t\t\tQ[j].ans += ps[Q[j].r] - (Q[j].l ? ps[Q[j].l-1] : 0LL);\n\t}\n\tdfs(root);\n\tFor(i,0,q)\n\t\tprintf(\"%lld\\n\", Q[i].ans);\n\treturn 0;\n}\n\n"
|
588
|
A
|
Duff and Meat
|
Duff is addicted to meat! Malek wants to keep her happy for $n$ days. In order to be happy on the $i$-th day, she needs to eat exactly $a_{i}$ kilograms of meat.
There is a big shop uptown and Malek wants to buy meat for her from there. On the $i$-th day, they sell meat for $p_{i}$ dollars per kilogram. Malek knows all numbers $a_{1}, ..., a_{n}$ and $p_{1}, ..., p_{n}$. On each day, he can buy an arbitrary amount of meat, and he can also keep some meat he has for the future.
Malek is a little tired from cooking meat, so he asked for your help. Help him to minimize the total money he spends to keep Duff happy for $n$ days.
|
The idea is a simple greedy: buy the meat needed for the $i$-th day at the cheapest price among days $1, 2, ..., i$. So, the pseudo code below will work:

    ans = 0
    price = infinity
    for i = 1 to n:
        price = min(price, p[i])
        ans += price * a[i]

Time complexity: ${\mathcal{O}}(n)$
|
[
"greedy"
] | 900
|
"#include <bits/stdc++.h>\n#include <ext/pb_ds/assoc_container.hpp>\n#include <ext/pb_ds/tree_policy.hpp>\nusing namespace __gnu_pbds;\nusing namespace std;\n#define Foreach(i, c) for(__typeof((c).begin()) i = (c).begin(); i != (c).end(); ++i)\n#define For(i,a,b) for(int (i)=(a);(i) < (b); ++(i))\n#define rof(i,a,b) for(int (i)=(a);(i) > (b); --(i))\n#define rep(i, c) for(auto &(i) : (c))\n#define x first\n#define y second\n#define pb push_back\n#define PB pop_back()\n#define iOS ios_base::sync_with_stdio(false)\n#define sqr(a) (((a) * (a)))\n#define all(a) a.begin() , a.end()\n#define error(x) cerr << #x << \" = \" << (x) <<endl\n#define Error(a,b) cerr<<\"( \"<<#a<<\" , \"<<#b<<\" ) = ( \"<<(a)<<\" , \"<<(b)<<\" )\\n\";\n#define errop(a) cerr<<#a<<\" = ( \"<<((a).x)<<\" , \"<<((a).y)<<\" )\\n\";\n#define coud(a,b) cout<<fixed << setprecision((b)) << (a)\n#define L(x) ((x)<<1)\n#define R(x) (((x)<<1)+1)\n#define umap unordered_map\n#define double long double\ntypedef long long ll;\ntypedef pair<int,int>pii;\ntypedef vector<int> vi;\ntypedef complex<double> point;\ntemplate <typename T> using os = tree<T, null_type, less<T>, rb_tree_tag, tree_order_statistics_node_update>;\ntemplate <class T> inline void smax(T &x,T y){ x = max((x), (y));}\ntemplate <class T> inline void smin(T &x,T y){ x = min((x), (y));}\nint main(){\n\tiOS;\n\tint n;\n\tcin >> n;\n\tint price = 1e9, ans = 0;\n\twhile(n --){\n\t\tint a, p;\n\t\tcin >> a >> p;\n\t\tsmin(price, p);\n\t\tans += a * price;\n\t}\n\tcout << ans << endl;\n\treturn 0;\n}\n"
|
588
|
B
|
Duff in Love
|
Duff is in love with lovely numbers! A positive integer $x$ is called lovely if and only if there is no positive integer $a > 1$ such that $a^{2}$ is a divisor of $x$.
Malek has a number store! In his store, he has only divisors of positive integer $n$ (and he has all of them). As a birthday present, Malek wants to give her a lovely number from his store. He wants this number to be as big as possible.
Malek always had issues in math, so he asked for your help. Please tell him what is the biggest lovely number in his store.
|
Find all prime divisors of $n$. Assume they are $p_{1}, p_{2}, ..., p_{k}$ (found in $O({\sqrt{n}})$). If the answer is $a$, then we know that for each $1 \le i \le k$, obviously $a$ is not divisible by $p_{i}^{2}$ (nor by any greater power of $p_{i}$). So $a \le p_{1} \times p_{2} \times ... \times p_{k}$. And we know that $p_{1} \times p_{2} \times ... \times p_{k}$ is itself lovely. So, the answer is $p_{1} \times p_{2} \times ... \times p_{k}$. Time complexity: $O({\sqrt{n}})$
|
[
"math"
] | 1,300
|
"#include <bits/stdc++.h>\n#include <ext/pb_ds/assoc_container.hpp>\n#include <ext/pb_ds/tree_policy.hpp>\nusing namespace __gnu_pbds;\nusing namespace std;\n#define Foreach(i, c) for(__typeof((c).begin()) i = (c).begin(); i != (c).end(); ++i)\n#define For(i,a,b) for(int (i)=(a);(i) < (b); ++(i))\n#define rof(i,a,b) for(int (i)=(a);(i) > (b); --(i))\n#define rep(i, c) for(auto &(i) : (c))\n#define x first\n#define y second\n#define pb push_back\n#define PB pop_back()\n#define iOS ios_base::sync_with_stdio(false)\n#define sqr(a) (((a) * (a)))\n#define all(a) a.begin() , a.end()\n#define error(x) cerr << #x << \" = \" << (x) <<endl\n#define Error(a,b) cerr<<\"( \"<<#a<<\" , \"<<#b<<\" ) = ( \"<<(a)<<\" , \"<<(b)<<\" )\\n\";\n#define errop(a) cerr<<#a<<\" = ( \"<<((a).x)<<\" , \"<<((a).y)<<\" )\\n\";\n#define coud(a,b) cout<<fixed << setprecision((b)) << (a)\n#define L(x) ((x)<<1)\n#define R(x) (((x)<<1)+1)\n#define umap unordered_map\n#define double long double\ntypedef long long ll;\ntypedef pair<int,int>pii;\ntypedef vector<int> vi;\ntypedef complex<double> point;\ntemplate <typename T> using os = tree<T, null_type, less<T>, rb_tree_tag, tree_order_statistics_node_update>;\ntemplate <class T> inline void smax(T &x,T y){ x = max((x), (y));}\ntemplate <class T> inline void smin(T &x,T y){ x = min((x), (y));}\nint main(){\n\tiOS;\n\tll n;\n\tcin >> n;\n\tll ans = 1;\n\tll x = n;\n\tfor(ll i = 2;i * i <= n; i ++)\tif(x % i == 0){\n\t\tans *= (ll) i;\n\t\twhile(x % i == 0)\n\t\t\tx /= i;\n\t}\n\tif(x > 1)\n\t\tans *= x;\n\tcout << ans << endl;\n\treturn 0;\n}\n"
|
590
|
A
|
Median Smoothing
|
A schoolboy named Vasya loves reading books on programming and mathematics. He has recently read an encyclopedia article that described the method of median smoothing (or median filter) and its many applications in science and engineering. Vasya liked the idea of the method very much, and he decided to try it in practice.
Applying the simplest variant of median smoothing to the sequence of numbers $a_{1}, a_{2}, ..., a_{n}$ will result in a new sequence $b_{1}, b_{2}, ..., b_{n}$ obtained by the following algorithm:
- $b_{1} = a_{1}$, $b_{n} = a_{n}$, that is, the first and the last number of the new sequence match the corresponding numbers of the original sequence.
- For $i = 2, ..., n - 1$ value $b_{i}$ is equal to the median of three values $a_{i - 1}$, $a_{i}$ and $a_{i + 1}$.
The median of a set of three numbers is the number that goes on the second place, when these three numbers are written in the non-decreasing order. For example, the median of the set 5, 1, 2 is number 2, and the median of set 1, 0, 1 is equal to 1.
In order to make the task easier, Vasya decided to apply the method to sequences consisting of zeros and ones only.
Having made the procedure once, Vasya looked at the resulting sequence and thought: what if I apply the algorithm to it once again, and then apply it to the next result, and so on? Vasya tried a couple of examples and found out that after some number of median smoothing algorithm applications the sequence can stop changing. We say that the sequence is stable, if it does not change when the median smoothing is applied to it.
Now Vasya wonders whether the sequence always eventually becomes stable. He asks you to write a program that, given a sequence of zeros and ones, will determine whether it ever becomes stable. Moreover, if it ever becomes stable, then you should determine what it will look like and how many times one needs to apply the median smoothing algorithm to the initial sequence in order to obtain a stable one.
|
We will call an element of a sequence stable if it doesn't change after applying the median smoothing algorithm any number of times. Both ends are stable by the definition of median smoothing. Also, it is easy to notice that two equal consecutive elements are both stable. Now let's take a look at how stable elements affect their neighbors. Suppose $a_{i - 1} = a_{i}$, meaning positions $i - 1$ and $i$ are stable. Additionally assume that $a_{i + 1}$ is not a stable element, hence $a_{i + 1} \neq a_{i}$ and $a_{i + 1} \neq a_{i + 2}$. Keeping in mind that only the values $0$ and $1$ are possible, we conclude that $a_{i} = a_{i + 2}$, and applying the median smoothing algorithm to this sequence will result in $a_{i + 1} = a_{i}$. That means, if there is a stable element at position $i$, both $i + 1$ and $i - 1$ are guaranteed to be stable after one application of median smoothing. Now we can conclude that every sequence turns stable at some point. Note that if there are two stable elements $i$ and $j$ with no other stable elements located between them, the sequence of elements between them is alternating, i.e. $a_{k} = (a_{i} + k - i) \bmod 2$, where $k \in [i, j]$. One can check that stable elements may occur only at the ends of an alternating segment, meaning the segment will remain alternating until it is consumed by the effect spreading from the stable elements at its ends. The solution is: calculate $\max_i(\min_j(|i - s_{j}|))$, where $s_{j}$ are the initial stable elements. Time complexity is $O(n)$.
|
[
"implementation"
] | 1,700
|
#include<bits/stdc++.h>
#include<cmath>
#include<cstdio>
#include<cstring>
#include<cstdlib>
#include<iostream>
#include<algorithm>
#include<vector>
using namespace std;
#define FZ(n) memset((n),0,sizeof(n))
#define FMO(n) memset((n),-1,sizeof(n))
#define MC(n,m) memcpy((n),(m),sizeof(n))
#define F first
#define S second
#define MP make_pair
#define PB push_back
#define FOR(x,y) for(__typeof(y.begin())x=y.begin();x!=y.end();x++)
#define IOS ios_base::sync_with_stdio(0); cin.tie(0)
// Let's Fight!
const int MAXN = 505050;
int N;
int arr[MAXN];
int modify(int l, int r)
{
if(l == r) return 0;
if(arr[l] == arr[r])
{
for(int i=l+1; i<r; i++)
arr[i] = arr[l];
return (r-l)/2;
}
else
{
int m = (l+r+1)/2;
for(int i=l+1; i<m; i++)
arr[i] = arr[l];
for(int i=m; i<r; i++)
arr[i] = arr[r];
return (r-l-1)/2;
}
}
int main()
{
IOS;
cin>>N;
for(int i=0; i<N; i++)
cin>>arr[i];
int ans = 0;
int lb = 0;
for(int i=0; i<N; i++)
{
if(i == N-1 || arr[i] == arr[i+1])
{
ans = max(ans, modify(lb, i));
lb = i+1;
}
}
cout<<ans<<"\n";
for(int i=0; i<N; i++)
cout<<arr[i]<<(i==N-1 ? "\n" : " ");
return 0;
}
|
590
|
B
|
Chip 'n Dale Rescue Rangers
|
A team of furry rescue rangers was sitting idle in their hollow tree when suddenly they received a signal of distress. In a few moments they were ready, and the dirigible of the rescue chipmunks hit the road.
We assume that the action takes place on a Cartesian plane. The headquarters of the rescuers is located at point $(x_{1}, y_{1})$, and the distress signal came from the point $(x_{2}, y_{2})$.
Due to Gadget's engineering talent, the rescuers' dirigible can instantly change its current velocity and direction of movement at any moment and as many times as needed. The only limitation is: the speed of the aircraft relative to the air can not exceed $v_{\mathrm{max}}$ meters per second.
Of course, Gadget is a true rescuer and wants to reach the destination as soon as possible. The matter is complicated by the fact that the wind is blowing in the air and it affects the movement of the dirigible. According to the weather forecast, the wind will be defined by the vector $(v_{x}, v_{y})$ for the nearest $t$ seconds, and then will change to $(w_{x}, w_{y})$. These vectors give both the direction and velocity of the wind. Formally, if a dirigible is located at the point $(x, y)$, while its own velocity relative to the air is equal to zero and the wind $(u_{x}, u_{y})$ is blowing, then after $\tau$ seconds the new position of the dirigible will be $(x+\tau\cdot u_{x},\,y+\tau\cdot u_{y})$.
Gadget is busy piloting the aircraft, so she asked Chip to calculate how long it will take them to reach the destination if they fly optimally. He coped with the task easily, but Dale is convinced that Chip has given a random value, aiming only not to lose face in front of Gadget. Dale has asked you to find the right answer.
It is guaranteed that the speed of the wind at any moment of time is strictly less than the maximum possible speed of the airship relative to the air.
|
If the velocity of the dirigible relative to the air is given by the vector $(a_{x}, a_{y})$, while the velocity of the wind is $(b_{x}, b_{y})$, the resulting velocity of the dirigible relative to the ground is $(a_{x} + b_{x}, a_{y} + b_{y})$. The main idea here is that the answer function is monotonic. If the dirigible is able to reach the target in $\tau$ seconds, then it can do so in $\tau + x$ seconds, for any $x \ge 0$. That is an obvious consequence of the fact that the maximum self speed of the dirigible is strictly greater than the speed of the wind at any moment of time. For any monotonic predicate we can use binary search. Now we only need to check whether, for some given value $\tau$, it is possible for the dirigible to reach the target in $\tau$ seconds. Let's separate the movement of the air and the movement of the dirigible in the air. The displacement caused by the air alone is: $(x_{n}, y_{n}) = (x_{1}+v_{x}\cdot\tau,\ y_{1}+v_{y}\cdot\tau)$, if $\tau \leq t$; $(x_{n}, y_{n}) = (x_{1}+v_{x}\cdot t+w_{x}\cdot(\tau-t),\ y_{1}+v_{y}\cdot t+w_{y}\cdot(\tau-t))$, for $\tau > t$. The only thing we need to check now is that the distance between the point $(x_{n}, y_{n})$ and the target coordinates $(x_{2}, y_{2})$ can be covered moving with the speed $v_{max}$ in $\tau$ seconds, assuming there is no wind. Time complexity is $O(\log\frac{C}{\epsilon})$, where $C$ stands for the maximum coordinate, and $\epsilon$ is the desired accuracy.
|
[
"binary search",
"geometry",
"math"
] | 2,100
|
#include <iostream>
#include <vector>
#include <cmath>
#include <ctime>
#include <cassert>
#include <cstdio>
#include <queue>
#include <set>
#include <map>
#include <fstream>
#include <cstdlib>
#include <string>
#include <cstring>
#include <algorithm>
#include <numeric>
#define mp make_pair
#define mt make_tuple
#define fi first
#define se second
#define pb push_back
#define all(x) (x).begin(), (x).end()
#define rall(x) (x).rbegin(), (x).rend()
#define forn(i, n) for (int i = 0; i < (int)(n); ++i)
#define for1(i, n) for (int i = 1; i <= (int)(n); ++i)
#define ford(i, n) for (int i = (int)(n) - 1; i >= 0; --i)
#define fore(i, a, b) for (int i = (int)(a); i <= (int)(b); ++i)
using namespace std;
typedef pair<int, int> pii;
typedef vector<int> vi;
typedef vector<pii> vpi;
typedef vector<vi> vvi;
typedef long long i64;
typedef vector<i64> vi64;
typedef vector<vi64> vvi64;
template<class T> bool uin(T &a, T b) { return a > b ? (a = b, true) : false; }
template<class T> bool uax(T &a, T b) { return a < b ? (a = b, true) : false; }
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
cout.precision(10);
cout << fixed;
#ifdef LOCAL_DEFINE
freopen("input.txt", "rt", stdin);
#endif
double x1, y1, x2, y2, v, t, vx, vy, wx, wy;
cin >> x1 >> y1 >> x2 >> y2 >> v >> t >> vx >> vy >> wx >> wy;
double L = 0.0, R = 1e9;
forn(its, 100) {
double M = 0.5 * (L + R);
double x = x1 + min(M, t) * vx + max(M - t, 0.0) * wx;
double y = y1 + min(M, t) * vy + max(M - t, 0.0) * wy;
(hypot(x - x2, y - y2) < v * M ? R : L) = M;
}
cout << R << '\n';
#ifdef LOCAL_DEFINE
cerr << "Time elapsed: " << 1.0 * clock() / CLOCKS_PER_SEC << " s.\n";
#endif
return 0;
}
|
590
|
C
|
Three States
|
The famous global economic crisis is approaching rapidly, so the states of Berman, Berance and Bertaly formed an alliance and allowed the residents of all member states to freely pass through the territory of any of them. In addition, it was decided that roads should be built between the states to guarantee that any point of any country can be reached from any point of any other state.
Since roads are always expensive, the governments of the states of the newly formed alliance asked you to help them assess the costs. To do this, you have been issued a map that can be represented as a rectangle table consisting of $n$ rows and $m$ columns. Any cell of the map either belongs to one of three states, or is an area where it is allowed to build a road, or is an area where the construction of the road is not allowed. A cell is called passable, if it belongs to one of the states, or the road was built in this cell. From any passable cells you can move up, down, right and left, if the cell that corresponds to the movement exists and is passable.
Your task is to construct a road inside a minimum number of cells, so that it would be possible to get from any cell of any state to any cell of any other state using only passable cells.
It is guaranteed that initially it is possible to reach any cell of any state from any cell of this state, moving only along its cells. It is also guaranteed that for any state there is at least one cell that belongs to it.
|
Claim. Suppose we are given an undirected unweighted connected graph and three distinct chosen vertices $u$, $v$, $w$ of this graph. We state that at least one minimum connecting network for these three vertices has the following form: some vertex $c$ is chosen and the resulting network is obtained as the union of shortest paths from $c$ to each of the chosen vertices. Proof. One of the optimal subgraphs connecting these three vertices is always a tree. Really, we can take any connecting subgraph and, while there are cycles remaining in it, take any cycle and throw away any edge of this cycle. Moreover, only vertices $u$, $v$ and $w$ are allowed to be leaves of this tree, as we can delete any other leaves and the network will still be connected. Since the tree has no more than three leaves, it has no more than one vertex with degree greater than $2$. This vertex is the $c$ from the statement above. Any path from $c$ to a leaf may obviously be replaced with a shortest path. A special case is when there is no vertex with degree greater than $2$, meaning one of $u$, $v$ or $w$ lies on the shortest path connecting the two other vertices. The solution is: find the shortest path from each of the chosen vertices to all other vertices, and then try every vertex of the graph as $c$. Time complexity is $O(|V| + |E|)$. To apply this solution to the given problem we should first build a graph where the cells of the table stand for the vertices and two vertices are connected by an edge if the corresponding cells are neighboring. Now we should merge all vertices of a single state into one in order to obtain the task described in the beginning. Time complexity is linear in the size of the table: $O(nm)$.
|
[
"dfs and similar",
"graphs",
"shortest paths"
] | 2,200
|
//#include <iostream>
#include <fstream>
#include <vector>
#include <set>
#include <map>
#include <cstring>
#include <string>
#include <cmath>
#include <cassert>
#include <ctime>
#include <algorithm>
#include <sstream>
#include <list>
#include <queue>
#include <deque>
#include <stack>
#include <cstdlib>
#include <cstdio>
#include <iterator>
#include <functional>
#include <bitset>
#define mp make_pair
#define pb push_back
#ifdef LOCAL
#define eprintf(...) fprintf(stderr,__VA_ARGS__)
#else
#define eprintf(...)
#endif
#define TIMESTAMP(x) eprintf("["#x"] Time : %.3lf s.\n", clock()*1.0/CLOCKS_PER_SEC)
#define TIMESTAMPf(x,...) eprintf("[" x "] Time : %.3lf s.\n", __VA_ARGS__, clock()*1.0/CLOCKS_PER_SEC)
#if ( ( _WIN32 || __WIN32__ ) && __cplusplus < 201103L)
#define LLD "%I64d"
#else
#define LLD "%lld"
#endif
using namespace std;
#define TASKNAME "C"
#ifdef LOCAL
static struct __timestamper {
string what;
__timestamper(const char* what) : what(what){};
__timestamper(const string& what) : what(what){};
~__timestamper(){
TIMESTAMPf("%s", what.data());
}
} __TIMESTAMPER("end");
#else
struct __timestamper {};
#endif
typedef long long ll;
typedef long double ld;
const int MAXN = 1010;
char s[MAXN][MAXN];
int n, m;
int dist[3][MAXN][MAXN];
const int dx[4] = {0, 1, 0, -1};
const int dy[4] = {-1, 0, 1, 0};
int main(){
#ifdef LOCAL
assert(freopen(TASKNAME".in","r",stdin));
assert(freopen(TASKNAME".out","w",stdout));
#endif
scanf("%d%d",&n,&m);
for (int i = 0; i < n; i++)
scanf("%s", s[i]);
memset(dist, -1, sizeof(dist));
for (int c = '1'; c <= '3'; c++) {
deque<pair<int, int>> q;
for (int i = 0; i < n; i++)
for (int j = 0; j < m; j++)
if (s[i][j] == c) {
dist[c - '1'][i][j] = 0;
q.push_back(make_pair(i, j));
}
while (!q.empty()) {
int x = q.front().first;
int y = q.front().second;
q.pop_front();
for (int i = 0; i < 4; i++) {
int nx = x + dx[i];
int ny = y + dy[i];
if (0 <= nx && nx < n && 0 <= ny && ny < m && s[nx][ny] != '#') {
int nd = dist[c - '1'][x][y] + (s[nx][ny] == '.');
if (dist[c - '1'][nx][ny] == -1 || dist[c - '1'][nx][ny] > nd) {
dist[c - '1'][nx][ny] = nd;
if (s[nx][ny] == '.') {
q.push_back(make_pair(nx, ny));
} else {
q.push_front(make_pair(nx, ny));
}
}
}
}
}
}
int ans = -1;
for (int i = 0; i < n; i++)
for (int j = 0; j < m; j++) {
if (dist[0][i][j] != -1 && dist[1][i][j] != -1 && dist[2][i][j] != -1) {
int nval = dist[0][i][j] + dist[1][i][j] + dist[2][i][j] - 2 * (s[i][j] == '.');
//fprintf(stderr, "%d %d %d: %d %d %d\n", i, j, nval, dist[0][i][j], dist[1][i][j], dist[2][i][j]);
if (ans == -1 || ans > nval) {
ans = nval;
}
}
}
printf("%d\n", ans);
return 0;
}
|
590
|
D
|
Top Secret Task
|
A top-secret military base under the command of Colonel Zuev is expecting an inspection from the Ministry of Defence. According to the charter, each top-secret military base must include a top-secret troop that should... well, we cannot tell you exactly what it should do, it is a top-secret troop, after all. The problem is that Zuev's base is missing this top-secret troop for some reason.
The colonel decided to deal with the problem immediately and ordered to line up in a single line all $n$ soldiers of the base entrusted to him. Zuev knows that the loquacity of the $i$-th soldier from the left is equal to $q_{i}$. Zuev wants to form the top-secret troop using $k$ leftmost soldiers in the line, thus he wants their total loquacity to be as small as possible (as the troop should remain top-secret). To achieve this, he is going to choose a pair of \textbf{consecutive} soldiers and swap them. He intends to do so no more than $s$ times. Note that any soldier can be a participant of such swaps for any number of times. The problem turned out to be unusual, and colonel Zuev asked you to help.
Determine, what is the minimum total loquacity of the first $k$ soldiers in the line, that can be achieved by performing no more than $s$ swaps of two consecutive soldiers.
|
If $s > \frac{n(n-1)}{2}$, then the sum of the $k$ smallest elements is obviously the answer. Let $i_{1} < i_{2} < \ldots < i_{k}$ be the indices of the elements that will form the answer. Note that the relative order of the chosen subset will remain the same, as there is no reason to swap two elements that will both be included in the answer. The minimum number of swaps required to place these $k$ elements at the beginning is equal to $T = (i_{1}-1) + (i_{2}-2) + \ldots + (i_{k}-k) = \sum_{p=1}^{k} i_{p} - \frac{k\cdot(k+1)}{2}$. The condition $T \le s$ is equivalent to $\sum_{p=1}^{k} i_{p} \le \frac{k\cdot(k+1)}{2} + s$. Denote $M = \frac{k\cdot(k+1)}{2} + s$. Calculate the dynamic programming $d[i][j][p]$ — the minimum possible sum if we chose $j$ elements among the first $i$ with the total sum of indices no greater than $p$. In order to optimize the memory consumption we keep in memory only the two latest layers of the dp. Time complexity is $O(n^{4})$, with $O(n^{3})$ memory consumption.
|
[
"dp"
] | 2,300
|
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <algorithm>
#include <iostream>
#include <cstring>
#include <vector>
#include <set>
#include <map>
#include <cassert>
#include <ctime>
#include <string>
#include <queue>
using namespace std;
#ifdef _WIN32
#define LLD "%I64d"
#else
#define LLD "%lld"
#endif
typedef long double ld;
long long rdtsc() {
long long tmp;
asm("rdtsc" : "=A"(tmp));
return tmp;
}
inline int myrand() {
return abs((rand() << 15) ^ rand());
}
inline int rnd(int x) {
return myrand() % x;
}
#define pb push_back
#define mp make_pair
#define eprintf(...) fprintf(stderr, __VA_ARGS__)
#define sz(x) ((int)(x).size())
#define TASKNAME "text"
const int INF = (int) 1.01e9;
const ld EPS = 1e-9;
void precalc() {
}
int n, k, s;
const int maxn = 150 + 10;
int a[maxn];
bool read() {
if (scanf("%d%d%d", &n, &k, &s) < 3) {
return 0;
}
for (int i = 0; i < n; ++i) {
scanf("%d", a + i);
}
return 1;
}
int d[maxn][maxn * maxn];
int nd[maxn][maxn * maxn];
int getMaxc(int x, int t) {
return min(s, (x - t) * t);
}
void update(int &x, int val) {
x = min(x, val);
}
void solve() {
d[0][0] = 0;
for (int x = 0; x < n; ++x) {
for (int t = 0; t <= x + 1 && t <= k; ++t) {
int maxc = getMaxc(x + 1, t);
for (int c = 0; c <= maxc; ++c) {
nd[t][c] = INF;
}
}
for (int t = 0; t <= x && t <= k; ++t) {
int maxc = getMaxc(x, t);
for (int c = 0; c <= maxc; ++c) {
int &cur = d[t][c];
if (cur == INF) {
continue;
}
update(nd[t][c], cur);
update(nd[t + 1][c + (x - t)], cur + a[x]);
}
}
for (int t = 0; t <= x + 1 && t <= k; ++t) {
int maxc = getMaxc(x + 1, t);
for (int c = 0; c <= maxc; ++c) {
d[t][c] = nd[t][c];
}
}
}
int maxc = getMaxc(n, k);
int res = INF;
for (int c = 0; c <= maxc; ++c) {
res = min(res, d[k][c]);
}
printf("%d\n", res);
}
int main() {
srand(rdtsc());
#ifdef DEBUG
freopen(TASKNAME".out", "w", stdout);
assert(freopen(TASKNAME".in", "r", stdin));
#endif
precalc();
while (1) {
if (!read()) {
break;
}
solve();
#ifdef DEBUG
eprintf("Time: %.18lf\n", (double)clock() / CLOCKS_PER_SEC);
#endif
}
return 0;
}
|
590
|
E
|
Birthday
|
Today is the birthday of little Dasha — she is now 8 years old! On this occasion, each of her $n$ friends and relatives gave her a ribbon with a greeting written on it, and, as it turned out, all the greetings are different. Dasha gathered all the ribbons and decided to throw away some of them in order to make the remaining set stylish. The birthday girl considers a set of ribbons stylish if no greeting written on some ribbon is a substring of another greeting written on some other ribbon. Let us recall that a substring of the string $s$ is a contiguous segment of $s$.
Help Dasha to keep as many ribbons as possible, so that she could brag about them to all of her friends. Dasha cannot rotate or flip ribbons, that is, each greeting can be read in a single way given in the input.
|
The given problem actually consists of two separate problems: build the directed graph of the substring relation and find the maximum independent set in it. Note that if the string $s_{2}$ is a substring of some string $s_{1}$, while string $s_{3}$ is a substring of the string $s_{2}$, then $s_{3}$ is a substring of $s_{1}$. That means the graph of the substring relation defines a partially ordered set. To build the graph one can use the Aho-Corasick algorithm. This structure allows building all essential arcs of the graph in time $O(L)$, where $L$ stands for the total length of all strings in the input. We will call an arc $uv \in E$ essential if there is no $w$ such that $uw \in E$ and $wv \in E$. One of the ways to do so is: build Aho-Corasick using all strings in the input; for every node of the Aho-Corasick structure find and remember the nearest terminal node on the suffix-link path; once again traverse all strings through Aho-Corasick — every time a new symbol is added, add an arc from the node corresponding to the current string (in the graph we build, not in Aho-Corasick) to the node of the graph corresponding to the nearest terminal on the suffix-link path. The previous step builds all essential arcs plus some other arcs, but they do not affect the next step in any way. Finally, find the transitive closure of the graph. To solve the second part of the problem one should use the Dilworth theorem. The way to restore the answer subset comes from the constructive proof of the theorem. Time complexity is $O(L + n^{3})$ to build the graph plus $O(n^{3})$ to find the maximum antichain. The overall complexity is $O(L + n^{3})$.
|
[
"graph matchings",
"strings"
] | 3,200
|
#include <algorithm>
#include <cassert>
#include <cstring>
#include <iostream>
#include <cstdio>
#include <queue>
using namespace std;
#define FOR(i, a, b) for (int i = (a); i < (b); ++i)
#define REP(i, n) FOR(i, 0, n)
#define TRACE(x) cout << #x << " = " << x << endl
#define _ << " _ " <<
typedef long long llint;
const int MAXN = 777;
const int MAX = 1e7 + 1000;
bool e[MAXN][MAXN];
bool matched[MAXN];
int dad[MAXN];
int bio[MAXN], cookie;
int a[MAXN];
int n;
bool match(int x) {
if (x == -1) return true;
if (bio[x] == cookie) return false;
bio[x] = cookie;
REP(i, n)
if (e[x][i] && match(dad[i])) {
dad[i] = x;
return true;
}
return false;
}
bool was[MAXN][2];
void dfs(int x, int side) {
if (was[x][side]) return;
was[x][side] = true;
REP(i, n) {
if (side == 0 && e[x][i]) dfs(i, 1);
if (side == 1 && dad[x] == i) dfs(i, 0);
}
}
char* s[MAXN];
int len[MAXN];
struct node {
node* fail;
node* parent;
node* to[2];
node* finfail;
int fin;
bool mark;
bool e;
node () { fail = 0; fin = -1; };
} mem[MAX];
node* alloc = mem;
node* insert(node* x, char* s, int i) {
while (*s) {
int c = *s - 'a';
if (x->to[c] == 0) {
x->to[c] = alloc++;
x->to[c]->e = c;
x->to[c]->parent = x;
}
x = x->to[c];
s++;
}
x->fin = i;
return x;
}
void push_links(node *root) {
static node* Q[MAX];
int b, e;
b = e = 0;
Q[e++] = root;
while (b < e) {
node *x = Q[b++];
REP(i, 2)
if (x->to[i]) Q[e++] = x->to[i];
if (x == root || x->parent == root) {
x->fail = root;
x->finfail = NULL;
} else {
x->fail = x->parent->fail;
while (x->fail != root && !x->fail->to[x->e]) x->fail = x->fail->fail;
if (x->fail->to[x->e]) x->fail = x->fail->to[x->e];
if (x->fail->fin != -1) {
x->finfail = x->fail;
} else {
x->finfail = x->fail->finfail;
}
}
}
}
int main(void) {
scanf("%d", &n);
static char buf[MAX];
static node* w[MAXN];
char* cur = buf;
node* root = alloc++;
REP(i, n) {
scanf("%s", cur);
s[i] = cur;
len[i] = strlen(cur);
cur += len[i] + 1;
w[i] = insert(root, s[i], i);
}
push_links(root);
vector<int> eq[MAXN];
REP(i, n) REP(j, n)
if (w[i]->fin == i)
if (i != j && w[i] == w[j]) {
eq[i].push_back(j);
}
REP(i, n) {
node *cur = root;
REP(j, len[i]) {
while (cur != root && !cur->to[s[i][j]-'a']) cur = cur->fail;
if (cur->to[s[i][j]-'a']) cur = cur->to[s[i][j]-'a'];
node *x = cur;
if (x->fin == -1) x = x->finfail;
while (x && !e[i][x->fin]) {
e[i][x->fin] = true;
x = x->finfail;
}
}
REP(j, n)
if (e[i][j])
for (int k: eq[j]) e[i][k] = true;
e[i][i] = false;
}
REP(i, n) dad[i] = -1;
REP(i, n) {
cookie++;
matched[i] = match(i);
}
REP(i, n)
if (!matched[i]) dfs(i, 0);
bool vc[MAXN];
REP(i, n) vc[i] = false;
REP(i, n) {
if (matched[i] && !was[i][0]) vc[i] = true;
if (dad[i] != -1 && was[i][1]) vc[i] = true;
}
vector<int> ans;
REP(i, n)
if (!vc[i]) ans.push_back(i);
printf("%d\n", (int)ans.size());
for (int x: ans) printf("%d ", x+1);
printf("\n");
return 0;
}
|
591
|
A
|
Wizards' Duel
|
Harry Potter and He-Who-Must-Not-Be-Named engaged in a fight to the death once again. This time they are located at opposite ends of the corridor of length $l$. Two opponents simultaneously charge a deadly spell in the enemy. We know that the impulse of Harry's magic spell flies at a speed of $p$ meters per second, and the impulse of You-Know-Who's magic spell flies at a speed of $q$ meters per second.
The impulses are moving through the corridor toward each other, and at the time of the collision they turn around and fly back to those who cast them without changing their original speeds. Then, as soon as an impulse gets back to its caster, the wizard reflects it and sends it again towards the enemy, without changing its original speed.
Since Harry has perfectly mastered the basics of magic, he knows that after the second collision both impulses will disappear, and a powerful explosion will occur exactly in the place of their collision. However, the young wizard isn't good at math, so he asks you to calculate the distance from his position to the place of the second meeting of the spell impulses, provided that the opponents do not change positions during the whole fight.
|
Let's start with determining the position of the first collision. The two impulses converge with a speed $p + q$, so the first collision will occur after $\frac{L}{p+q}$ seconds. The coordinate of this collision is given by the formula $x_{1}=p\cdot{\frac{L}{p+q}}$. Note that the distance one impulse passes while returning to its caster is equal to the distance it has passed from the caster to the first collision. That means the impulses will reach their casters simultaneously, and the situation will be identical to the beginning of the duel. Hence, the second collision (third, fourth, etc.) will occur at exactly the same place as the first one.
|
[
"implementation",
"math"
] | 900
|
#include<cstdio>
#include<cstdlib>
#include<cstring>
#include<cmath>
#include<algorithm>
#define ll long long
#define inf 1e9
#define eps 1e-10
#define md
#define N
using namespace std;
int main()
{
double len,a,b;
scanf("%lf%lf%lf",&len,&a,&b);
printf("%lf\n",len*(a/(a+b)));
return 0;
}
|
591
|
B
|
Rebranding
|
The name of one small but proud corporation consists of $n$ lowercase English letters. The Corporation has decided to try rebranding — an active marketing strategy, that includes a set of measures to change either the brand (both for the company and the goods it produces) or its components: the name, the logo, the slogan. They decided to start with the name.
For this purpose the corporation has consecutively hired $m$ designers. Once a company hires the $i$-th designer, he immediately contributes to the creation of a new corporation name as follows: he takes the newest version of the name and replaces all the letters $x_{i}$ by $y_{i}$, and all the letters $y_{i}$ by $x_{i}$. This results in the new version. It is possible that some of these letters do not occur in the string. It may also happen that $x_{i}$ coincides with $y_{i}$. The version of the name received after the work of the last designer becomes the new name of the corporation.
Manager Arkady has recently got a job in this company, but is already soaked in the spirit of teamwork and is very worried about the success of the rebranding. Naturally, he can't wait to find out what is the new name the Corporation will receive.
Satisfy Arkady's curiosity and tell him the final version of the name.
|
The trivial solution simply emulates the work of all designers, every time considering all characters of the string one by one and replacing all $x_{i}$ with $y_{i}$ and vice versa. This works in $O(n \cdot m)$ and gets TL. First, note that identical characters always stay identical, so the position of a letter does not affect the result in any way. One should only remember the mapping for all distinct characters. Let $p(i, c)$ be the mapping of $c$ after $i$ designers have finished their job. Now: $p(0, c) = c$; if $p(i - 1, c) = x_{i}$, then $p(i, c) = y_{i}$; likewise, if $p(i - 1, c) = y_{i}$, then $p(i, c) = x_{i}$. This solution's complexity is $O(| \Sigma | \cdot m + n)$ and is enough to pass all the tests.
|
[
"implementation",
"strings"
] | 1,200
|
#include <bits/stdc++.h>
#define fi first
#define sc second
using namespace std;
typedef pair<int,int> pi;
char s[300009];
char ch[30];
int main(){
int n,m;scanf("%d%d",&n,&m);
scanf("%s",s);
for(int i=0;i<26;i++)ch[i]=i+'a';
for(int i=0;i<m;i++){
char a,b;scanf(" %c %c",&a,&b);
for(int j=0;j<26;j++){
if(ch[j]==a)ch[j]=b;
else if(ch[j]==b)ch[j]=a;
}
}
for(int i=0;i<n;i++){
printf("%c",ch[s[i]-'a']);
}
}
|
592
|
A
|
PawnChess
|
Galois is one of the strongest chess players of Byteforces. He has even invented a new variant of chess, which he named «PawnChess».
This new game is played on a board consisting of 8 rows and 8 columns. At the beginning of every game some black and white pawns are placed on the board. The number of black pawns placed is not necessarily equal to the number of white pawns placed.
Lets enumerate rows and columns with integers from 1 to 8. Rows are numbered from top to bottom, while columns are numbered from left to right. Now we denote as $(r, c)$ the cell located at the row $r$ and at the column $c$.
There are always two players A and B playing the game. Player A plays with white pawns, while player B plays with black ones. The goal of player A is to put any of his pawns to the row $1$, while player B tries to put any of his pawns to the row $8$. As soon as any of the players completes his goal the game finishes immediately and the succeeded player is declared a winner.
Player A moves first and then they alternate turns. On his move player A must choose exactly one white pawn and move it one step upward and player B (at his turn) must choose exactly one black pawn and move it one step down. Any move is possible only if the targeted cell is empty. It's guaranteed that for any scenario of the game there will always be at least one move available for any of the players.
Moving upward means that the pawn located in $(r, c)$ will go to the cell $(r - 1, c)$, while moving down means the pawn located in $(r, c)$ will go to the cell $(r + 1, c)$. Again, the corresponding cell must be empty, i.e. not occupied by any other pawn of any color.
Given the initial disposition of the board, determine who wins the game if both players play optimally. Note that there will always be a winner due to the restriction that for any game scenario both players will have some moves available.
|
Player A wins if the distance of his nearest pawn to the top of the board is less than or equal to the distance of Player B's nearest pawn to the bottom of the board (note that you should only consider pawns that are not blocked by other pawns).
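A hedged sketch of this check (my own helper and board convention, not a jury solution): the board is 8 strings of `'W'`, `'B'`, `'.'` with row 0 on top; a pawn counts only if its path to its goal row is clear.

```cpp
#include <string>
#include <vector>
#include <algorithm>
using namespace std;

// Returns 'A' if white wins, 'B' otherwise. A moves first, so ties go to A.
// Assumes (as the statement guarantees) each side can eventually reach its goal.
char pawnChessWinner(const vector<string>& board) {
    int bestA = 100, bestB = 100;
    for (int c = 0; c < 8; ++c) {
        // topmost white pawn in this column with a clear path to row 0
        for (int r = 0; r < 8; ++r) {
            if (board[r][c] == 'B') break;                       // blocked from above
            if (board[r][c] == 'W') { bestA = min(bestA, r); break; }
        }
        // bottom-most black pawn with a clear path to row 7
        for (int r = 7; r >= 0; --r) {
            if (board[r][c] == 'W') break;                       // blocked from below
            if (board[r][c] == 'B') { bestB = min(bestB, 7 - r); break; }
        }
    }
    return bestA <= bestB ? 'A' : 'B';
}
```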
|
[
"implementation"
] | 1,200
| null |
592
|
B
|
The Monster and the Squirrel
|
Ari the monster always wakes up very early with the first ray of the sun and the first thing she does is feeding her squirrel.
Ari draws a regular convex polygon on the floor and numbers its vertices $1, 2, ..., n$ in clockwise order. Then starting from the vertex $1$ she draws a ray in the direction of each other vertex. The ray stops when it reaches a vertex or intersects with another ray drawn before. Ari repeats this process for vertex $2, 3, ..., n$ (in this particular order). And then she puts a walnut in each region inside the polygon.
Ada the squirrel wants to collect all the walnuts, but she is not allowed to step on the lines drawn by Ari. That means Ada has to perform a small jump if she wants to go from one region to another. Ada can jump from one region P to another region Q if and only if P and Q \textbf{share a side or a corner}.
Assuming that Ada starts from outside of the picture, what is the minimum number of jumps she has to perform in order to collect all the walnuts?
|
After drawing the rays from the first vertex, $(n - 2)$ triangles are formed. The subsequent rays will independently generate sub-regions in these triangles. Let's analyse the triangle determined by vertices $1, i, i + 1$: after drawing the rays from vertices $i$ and $(i + 1)$, the triangle will be divided into $(n - i) + (i - 2) = n - 2$ regions. Therefore the total number of convex regions is $(n - 2)^{2}$. If the squirrel starts from a region that has $1$ as a vertex, then she can go through each region of triangle $(1, i, i + 1)$ exactly once. That implies the squirrel can collect all the walnuts in $(n - 2)^{2}$ jumps.
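The closed form derived above is a one-liner; a minimal sketch (my own function name):

```cpp
// Number of jumps needed to collect all walnuts: (n - 2)^2 regions,
// each entered with exactly one jump.
long long walnutJumps(long long n) {
    return (n - 2) * (n - 2);
}
```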
|
[
"math"
] | 1,100
| null |
592
|
C
|
The Big Race
|
Vector Willman and Array Bolt are the two most famous athletes of Byteforces. They are going to compete in a race with a distance of $L$ meters today.
Willman and Bolt have exactly the same speed, so when they compete the result is always a tie. That is a problem for the organizers because they want a winner.
While watching previous races the organizers have noticed that Willman can perform \textbf{only} steps of length equal to $w$ meters, and Bolt can perform \textbf{only} steps of length equal to $b$ meters. Organizers decided to slightly change the rules of the race. Now, at the end of the racetrack there will be an abyss, and the winner will be declared the athlete, who manages to run farther from the starting point of the racetrack (which is not the subject to change by any of the athletes).
Note that none of the athletes can run infinitely far, as they both will at some moment of time face the point, such that only one step further will cause them to fall in the abyss. In other words, the athlete \textbf{will not} fall into the abyss if the total length of all his steps will be less or equal to the chosen distance $L$.
Since the organizers are very fair, they are going to set the length of the racetrack as an integer chosen randomly and uniformly in range from 1 to $t$ (both are included). What is the probability that Willman and Bolt tie again today?
|
Let $D$ be the length of the racetrack. Since both athletes should tie, $D \bmod B = D \bmod W$. Let $M = lcm(B, W)$; then $D = k \cdot M + r$. Neither athlete should be able to take one step further, therefore $r \le \min(B - 1, W - 1, T) = X$. $D$ must be greater than $0$ and less than or equal to $T$, so $-r / M < k \le (T - r) / M$. For $r = 0$ the number of valid racetracks is $\lfloor{\frac{T}{M}}\rfloor$, and for $r > 0$ the number of racetracks is $\lfloor{\frac{T-r}{M}}\rfloor+1$. Iterating over all possible $r$, we can count the number of racetracks in which Willman and Bolt tie: $X\,+\,\sum_{r=0}^{X}\lfloor{\frac{T-r}{M}}\rfloor$. Note that $\lfloor{\frac{T-r}{M}}\rfloor=v\Leftrightarrow T-v\cdot M-M<r\leq T-v\cdot M$. That means that $\lfloor{\frac{T-r}{M}}\rfloor=v$ holds for exactly $M$ consecutive values of $r$. We can count the number of values of $r$ for which $\lfloor{\frac{T-r}{M}}\rfloor=\lfloor{\frac{T-0}{M}}\rfloor=v_{1}$, and the values of $r$ for which $\lfloor{\frac{T-r}{M}}\rfloor=\lfloor{\frac{T-X}{M}}\rfloor=v_{2}$. Each of the remaining values $v_{1} - 1, v_{1} - 2, ..., v_{2} + 1$ appears exactly $M$ times.
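The same count can be computed without iterating over $r$ at all, since $D$ ties exactly when $D \bmod lcm(B, W) < \min(B, W)$. A hedged sketch (my own helper, with an overflow guard because $lcm(B, W)$ can exceed the 64-bit range); the requested probability is this count divided by $t$, reduced to lowest terms:

```cpp
#include <algorithm>
using namespace std;
typedef long long ll;

// Number of D in [1, t] with D mod lcm(b, w) < min(b, w).
ll tieCount(ll b, ll w, ll t) {
    ll g = __gcd(b, w);
    ll m;
    if (b / g > t / w) m = t + 1;   // lcm exceeds t; any value > t works
    else m = b / g * w;             // lcm(b, w), no overflow past t
    ll x = min(b, w);               // residues 0 .. x-1 tie
    ll full = t / m, rem = t % m;
    return full * x + min(rem + 1, x) - 1;   // "-1" drops D = 0
}
```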
|
[
"math"
] | 1,800
| null |
592
|
D
|
Super M
|
Ari the monster is not an ordinary monster. She is the hidden identity of Super M, the Byteforces’ superhero. Byteforces is a country that consists of $n$ cities, connected by $n - 1$ bidirectional roads. Every road connects exactly two distinct cities, and the whole road system is designed in a way that one is able to go from any city to any other city using only the given roads. There are $m$ cities being attacked by humans. So Ari... we meant Super M has to immediately go to each of the cities being attacked to scare those bad humans. Super M can pass from one city to another only using the given roads. Moreover, passing through one road takes her exactly one kron - the time unit used in Byteforces.
However, Super M is not on Byteforces now - she is attending a training camp located in a nearby country Codeforces. Fortunately, there is a special device in Codeforces that allows her to instantly teleport from Codeforces to any city of Byteforces. The way back is too long, so for the purpose of this problem teleportation is used exactly once.
You are to help Super M, by calculating the city in which she should teleport at the beginning in order to end her job in the minimum time (measured in krons). Also, provide her with this time so she can plan her way back to Codeforces.
|
Observation 1: Ari should teleport to one of the attacked cities (it isn't worth starting in a city that is not attacked, since she then has to walk to an attacked one anyway). Observation 2: The nodes visited by Ari determine a subtree $T$ of the original tree; this subtree is unique and is the union of all paths between pairs of attacked cities. Observation 3: If Ari had to return to the city from where she started, the total distance would be $2e$, where $e$ is the number of edges of $T$, because she traverses each edge forward and backward. Observation 4: If Ari does not have to return to the starting city (the root of $T$), then the total distance is $2e - L$, where $L$ is the distance from the root to the farthest node. Observation 5: To get the minimum total distance, Ari should choose one diameter of $T$ and teleport to one of its endpoints. The problem is thus transformed into finding a diameter of the tree that contains the smallest index for one of its leaves. Note that all diameters pass through the center of the tree, so we can find all the farthest nodes from the center... and [details omitted].
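Observations 1-5 can be sketched as code. This is an illustrative sketch under my own naming (the smallest-index tie-breaking for the starting city is omitted): it builds the induced subtree $T$ via subtree counts of attacked cities, then computes $2e - D$ with a double BFS for the diameter $D$.

```cpp
#include <vector>
#include <queue>
#include <utility>
using namespace std;

// Cities are 1..n; 'attacked' is non-empty. Returns the minimal time in krons.
long long superMTime(int n, const vector<pair<int,int>>& edges,
                     const vector<int>& attacked) {
    vector<vector<int>> adj(n + 1);
    for (auto& ed : edges) { adj[ed.first].push_back(ed.second); adj[ed.second].push_back(ed.first); }
    vector<char> isAtt(n + 1, 0);
    for (int v : attacked) isAtt[v] = 1;
    // Root the tree at an attacked city and record a BFS order.
    int root = attacked[0];
    vector<int> par(n + 1, -1), order;
    vector<char> seen(n + 1, 0);
    queue<int> q; q.push(root); seen[root] = 1;
    while (!q.empty()) {
        int v = q.front(); q.pop(); order.push_back(v);
        for (int to : adj[v]) if (!seen[to]) { seen[to] = 1; par[to] = v; q.push(to); }
    }
    // cnt[v] = attacked cities in v's subtree. Since the root is attacked,
    // the edge (par[v], v) belongs to T iff cnt[v] > 0.
    vector<int> cnt(n + 1, 0);
    for (int i = (int)order.size() - 1; i > 0; --i) {
        int v = order[i];
        cnt[v] += isAtt[v];
        cnt[par[v]] += cnt[v];
    }
    vector<vector<int>> tAdj(n + 1);
    long long e = 0;
    for (int v : order) if (v != root && cnt[v] > 0) {
        tAdj[v].push_back(par[v]); tAdj[par[v]].push_back(v); ++e;
    }
    // Diameter of T by double BFS: farthest node from root, then farthest from it.
    auto bfs = [&](int s) {
        vector<int> d(n + 1, -1);
        queue<int> qq; qq.push(s); d[s] = 0;
        int far = s;
        while (!qq.empty()) {
            int v = qq.front(); qq.pop();
            if (d[v] > d[far]) far = v;
            for (int to : tAdj[v]) if (d[to] < 0) { d[to] = d[v] + 1; qq.push(to); }
        }
        return make_pair(far, d[far]);
    };
    int u = bfs(root).first;
    long long diam = bfs(u).second;
    return 2 * e - diam;
}
```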
|
[
"dfs and similar",
"dp",
"graphs",
"trees"
] | 2,200
| null |
592
|
E
|
BCPC
|
BCPC stands for Byteforces Collegiate Programming Contest, and is the most famous competition in Byteforces.
BCPC is a team competition. Each team is composed by a coach and three contestants. Blenda is the coach of the Bit State University(BSU), and she is very strict selecting the members of her team.
In BSU there are $n$ students numbered from 1 to $n$. Since all BSU students are infinitely smart, the only important parameters for Blenda are their reading and writing speed. After a careful measuring, Blenda has found that the $i$-th student has a \textbf{reading} speed equal to $r_{i}$ (words per minute), and a \textbf{writing} speed of $w_{i}$ (symbols per minute). Since BSU students are very smart, the measured speeds are sometimes very big and Blenda has decided to subtract some constant value $c$ from all the values of reading speed and some value $d$ from all the values of writing speed. Therefore she considers $r_{i}' = r_{i} - c$ and $w_{i}' = w_{i} - d$.
The student $i$ is said to overwhelm the student $j$ if and only if $r_{i}'·w_{j}' > r_{j}'·w_{i}'$. Blenda doesn’t like fights in teams, so she thinks that a team consisting of three distinct students $i, j$ and $k$ is good if $i$ overwhelms $j$, $j$ overwhelms $k$, and $k$ overwhelms $i$. Yes, the relation of overwhelming is not transitive as it often happens in real life.
Since Blenda is busy preparing a training camp in Codeforces, you are given a task to calculate the number of different good teams in BSU. Two teams are considered to be different if there is at least one student that is present in one team but is not present in the other. In other words, two teams are different if the sets of students that form these teams are different.
|
Let's represent the reading and writing speeds of the students as points in a plane. Two students $i, j$ are compatible if $r_i' \cdot w_j' - r_j' \cdot w_i' > 0$; this expression is exactly the cross product $(r_i', w_i') \times (r_j', w_j') > 0$. Using this fact it is easy to see that three students $i, j, k$ form a good team if the triangle $(r_{i}, w_{i}), (r_{j}, w_{j}), (r_{k}, w_{k})$ contains the point $(c, d)$. So the problem is reduced to counting the number of triangles that contain the origin (after shifting by $(c, d)$). Let's count the triangles that have two known vertices $i$ and $j$ (look at the picture above). It is easy to see that the third vertex should be inside the region $S$. So we have to be able to count points that lie between two rays, which can be done using binary search (ordering the points first by slope and then by the distance to the origin). Now, given a point $i$, let's count the triangles that have $i$ as a vertex (look at the picture above again). We have to count the points that lie between the ray $iO$ and every other ray $jO$ (the angle between $iO$ and $jO$ must be $\le 180$). Let $S_{j}$ denote the number of points between the rays $iO$ and $jO$; then the number of triangles that have $i$ as a vertex is $\textstyle\sum_{j:Oi\times Oj>0}S_{j}-S_{i}$. This summation can be calculated if we pre-calculate the cumulative sums of $S_{j}$. The overall complexity is $O(n \cdot log(n))$.
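The core reduction — counting triangles that contain the origin — can be sketched with an angular sort and two pointers. This is a hedged sketch assuming general position (no point at the origin, no two points collinear with it), which the full problem additionally has to handle; it counts the complement: a triangle misses the origin iff all three vertices fit in an open half-plane, and each such triangle is counted once by its "first" vertex.

```cpp
#include <vector>
#include <algorithm>
using namespace std;
typedef long long ll;

struct P { ll x, y; };
ll cross(const P& a, const P& b) { return a.x * b.y - a.y * b.x; }
// 0 = angle in [0, pi), 1 = angle in [pi, 2*pi)
int half(const P& p) { return (p.y < 0 || (p.y == 0 && p.x < 0)) ? 1 : 0; }

ll trianglesContainingOrigin(vector<P> pts) {
    int n = pts.size();
    sort(pts.begin(), pts.end(), [](const P& a, const P& b) {
        if (half(a) != half(b)) return half(a) < half(b);
        return cross(a, b) > 0;                 // counterclockwise within a half
    });
    ll bad = 0;
    int j = 0;
    for (int i = 0; i < n; ++i) {
        if (j < i + 1) j = i + 1;
        // advance while pts[j % n] lies strictly CCW within pi of pts[i]
        while (j < i + n && cross(pts[i], pts[j % n]) > 0) ++j;
        ll c = j - i - 1;                        // points in i's open half-plane
        bad += c * (c - 1) / 2;                  // triangles led by i missing O
    }
    ll total = (ll)n * (n - 1) * (n - 2) / 6;
    return total - bad;
}
```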
|
[
"binary search",
"geometry",
"two pointers"
] | 2,700
| null |
593
|
A
|
2Char
|
Andrew often reads articles in his favorite magazine 2Char. The main feature of these articles is that each of them uses at most two distinct letters. Andrew decided to send an article to the magazine, but as he hasn't written any article, he just decided to take a random one from magazine 26Char. However, before sending it to the magazine 2Char, he needs to adapt the text to the format of the journal. To do so, he removes some words from the chosen article, in such a way that the remaining text can be written using no more than two distinct letters.
Since the payment depends on the number of non-space characters in the article, Andrew wants to keep the words with the maximum total length.
|
For each letter maintain the total length of words that contain only it ($cnt1_{ci}$), and for each pair of letters maintain the total length of words that contain only those two letters ($cnt2_{ci, cj}$). For each word, count the number of distinct letters in it. If there is one, add the word's length to that letter's total; if there are two, add the word's length to the pair's total. Now find the pair of letters that gives the answer: for a pair $c_{i}, c_{j}$ the answer is $cnt1_{ci} + cnt1_{cj} + cnt2_{ci, cj}$. Among all these pairs find the maximum — this is the answer. The overall complexity is O(total length of all strings + 26 * 26).
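The counting above can be sketched as follows (a hedged sketch, my own function name):

```cpp
#include <string>
#include <vector>
#include <set>
#include <iterator>
#include <algorithm>
using namespace std;

// Maximum total length of kept words using at most two distinct letters.
int maxArticleLength(const vector<string>& words) {
    long long cnt1[26] = {}, cnt2[26][26] = {};
    for (const string& w : words) {
        set<char> s(w.begin(), w.end());         // distinct letters of the word
        if (s.size() == 1) {
            cnt1[*s.begin() - 'a'] += w.size();
        } else if (s.size() == 2) {
            int a = *s.begin() - 'a', b = *next(s.begin()) - 'a';  // a < b
            cnt2[a][b] += w.size();
        }                                        // 3+ letters: word is useless
    }
    long long best = 0;
    for (int a = 0; a < 26; ++a)
        for (int b = a + 1; b < 26; ++b)
            best = max(best, cnt1[a] + cnt1[b] + cnt2[a][b]);
    return (int)best;
}
```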
|
[
"brute force",
"implementation"
] | 1,200
| null |
593
|
B
|
Anton and Lines
|
The teacher gave Anton a large geometry homework, but he didn't do it (as usual) as he participated in a regular round on Codeforces. In the task he was given a set of $n$ lines defined by the equations $y = k_{i}·x + b_{i}$. It was necessary to determine whether there is at least one point of intersection of two of these lines that lies strictly inside the strip between $x_{1} < x_{2}$. In other words, is it true that there are $1 ≤ i < j ≤ n$ and $x', y'$, such that:
- $y' = k_{i} * x' + b_{i}$, that is, point $(x', y')$ belongs to the line number $i$;
- $y' = k_{j} * x' + b_{j}$, that is, point $(x', y')$ belongs to the line number $j$;
- $x_{1} < x' < x_{2}$, that is, point $(x', y')$ lies inside the strip bounded by $x_{1} < x_{2}$.
You can't leave Anton in trouble, can you? Write a program that solves the given task.
|
Note that if the $i$-th line intersects the $j$-th inside the strip, and at $x = x_{1}$ the $i$-th line is higher, then at $x = x_{2}$ the $j$-th line will be higher. Sort the lines by $y$ coordinate at $x = x_{1} + eps$ and at $x = x_{2} - eps$, and verify that the order of the lines is the same in both cases. If there is a line whose index in the first order does not coincide with its index in the second, output Yes; otherwise output No. The only thing that can stop us is an intersection exactly at the borders, since in that case the sort order is ambiguous. So we add a small $eps$ to the border $x_{1}$ and subtract $eps$ from $x_{2}$, and the sort order becomes unique. The overall complexity is $O(n\log n)$.
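An equivalent, exact-arithmetic variant of the eps trick (my own sketch, assuming integer coefficients): sort the pairs (y at $x_1$, y at $x_2$) lexicographically; ties at $x_1$ are then broken by the value at $x_2$, which plays the role of the eps shift, and an interior intersection exists iff the second components have an inversion.

```cpp
#include <vector>
#include <algorithm>
#include <utility>
using namespace std;

bool hasIntersectionInside(const vector<long long>& k, const vector<long long>& b,
                           long long x1, long long x2) {
    int n = k.size();
    vector<pair<long long,long long>> ys(n);
    for (int i = 0; i < n; ++i)
        ys[i] = { k[i] * x1 + b[i], k[i] * x2 + b[i] };   // (y at x1, y at x2)
    sort(ys.begin(), ys.end());
    for (int i = 1; i < n; ++i)
        if (ys[i].second < ys[i-1].second) return true;    // order flipped inside
    return false;
}
```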
|
[
"geometry",
"sortings"
] | 1,600
| null |
593
|
C
|
Beautiful Function
|
Every day Ruslan tried to count sheep to fall asleep, but this didn't help. Now he has found a more interesting thing to do. First, he thinks of some set of circles on a plane, and then tries to choose a beautiful set of points, such that there is at least one point from the set inside or on the border of each of the imagined circles.
Yesterday Ruslan tried to solve this problem for the case when the set of points is considered beautiful if it is given as $(x_{t} = f(t), y_{t} = g(t))$, where argument $t$ takes all integer values from $0$ to $50$. Moreover, $f(t)$ and $g(t)$ should be correct functions.
Assume that $w(t)$ and $h(t)$ are some correct functions, and $c$ is an integer ranging from $0$ to $50$. The function $s(t)$ is correct if it's obtained by one of the following rules:
- $s(t) = abs(w(t))$, where $abs(x)$ means taking the absolute value of a number $x$, i.e. $|x|$;
- $s(t) = (w(t) + h(t))$;
- $s(t) = (w(t) - h(t))$;
- $s(t) = (w(t) * h(t))$, where $ * $ means multiplication, i.e. $(w(t)·h(t))$;
- $s(t) = c$;
- $s(t) = t$;
Yesterday Ruslan thought on and on, but he could not cope with the task. Now he asks you to write a program that computes the appropriate $f(t)$ and $g(t)$ for any set of at most $50$ circles.
In each of the functions $f(t)$ and $g(t)$ you are allowed to use no more than $50$ multiplications. The length of any function should not exceed $100·n$ characters. The function \textbf{should not contain spaces.}
Ruslan can't keep big numbers in his memory, so you should choose $f(t)$ and $g(t)$, such that for all integer $t$ from $0$ to $50$ value of $f(t)$ and $g(t)$ and all the intermediate calculations won't exceed $10^{9}$ by their absolute value.
|
One possible answer is, for each circle, a summand of the following form in the coordinate $x$ (and similarly in the coordinate $y$): $\lfloor{\frac{x_{i}}{2}}\rfloor * (1 - abs(t-i) + abs(abs(t-i)-1))$. For $a = 1$, $b = abs(t - i)$, it can be written as $\lfloor{\frac{x_{i}}{2}}\rfloor * (a - b + abs(a - b))$. Consider $a - b + abs(a - b)$: if $a \le b$, then $a - b + abs(a - b) = 0$; if $a > b$, then $a - b + abs(a - b) = 2a - 2b$. Now consider what $a > b$ means: $1 > abs(t - i)$, i.e. $i > t - 1$ and $i < t + 1$. For integer $i$ this is possible only if $i = t$. That is, the bracket is non-zero only when $i = t$, and then $2a - 2b = 2 - 2 \cdot abs(t - i) = 2$. So $2 \cdot \lfloor{\frac{x_{i}}{2}}\rfloor$ differs from the wanted position by no more than 1, but since all the radii are at least 2, the point belongs to the circle. The overall complexity is $O(n)$.
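A quick check of the key identity (my own helper): with $a = 1$, $b = abs(t - i)$, the bracket $(a - b + abs(a - b))$ evaluates to 2 exactly when $t = i$ and to 0 otherwise.

```cpp
#include <cstdlib>

// The selector bracket from the editorial's construction.
int bracket(int t, int i) {
    int b = abs(t - i);
    return 1 - b + abs(1 - b);   // 2 iff t == i, else 0
}
```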
|
[
"constructive algorithms",
"math"
] | 2,200
| null |
593
|
D
|
Happy Tree Party
|
Bogdan has a birthday today and mom gave him a tree consisting of $n$ vertices. For every edge of the tree $i$, some number $x_{i}$ was written on it. In case you forget, a tree is a connected non-directed graph without cycles. After the present was granted, $m$ guests consecutively come to Bogdan's party. When the $i$-th guest comes, he performs exactly one of the two possible operations:
- Chooses some number $y_{i}$, and two vertices $a_{i}$ and $b_{i}$. After that, he moves along the edges of the tree from vertex $a_{i}$ to vertex $b_{i}$ using the shortest path (of course, such a path is unique in the tree). Every time he moves along some edge $j$, he replaces his current number $y_{i}$ by $y_{i}=\lfloor{\frac{y_{i}}{x_{j}}}\rfloor$, that is, by the result of integer division $y_{i}$ div $x_{j}$.
- Chooses some edge $p_{i}$ and replaces the value written in it $x_{pi}$ by some positive integer $c_{i} < x_{pi}$.
As Bogdan cares about his guests, he decided to ease the process. Write a program that performs all the operations requested by guests and outputs the resulting value $y_{i}$ for each $i$ of the first type.
|
Consider the problem ignoring the requests of the second type. Note that if all the numbers on the edges are $> 1$, the maximum number of assignments $y=\lfloor{\frac{y}{x_{j}}}\rfloor$ before $y$ turns into $0$ does not exceed $64$. Indeed, even if all $x_{j} = 2$, the number of operations is bounded by $log_{2}(y)$. Hang the tree at some vertex and call it the root. Let's first learn to solve the problem under the assumption that every edge value is $> 1$ and there are no requests of the second type. For each vertex except the root, define its parent as the neighbour closest to the root. Suppose we get a request of the first type from vertex $a$ to vertex $b$ with the initial number $y$. We divide the path into two vertical parts, one going up towards the root and the other going down from it. To find all the edges on this path, we precompute the depth of each vertex (its distance to the root) and then lift the two endpoints in parallel until they meet. If during this walk we have passed more than $64$ edges, the assignments $y=\lfloor{\frac{y}{x_{j}}}\rfloor$ have already produced $y = 0$, and we can stop the walk at the current step. Thus, we make no more than $O(log(y))$ operations. Let's now turn to the full problem, where edge values equal to $1$ are allowed. Our previous solution can then take $O(n)$ per query, since passing an edge with value $1$ does not change the number. We reduce this problem to the one considered above by compressing the graph, that is, by skipping all edges with value $1$: for each vertex $v$ we maintain $P_{v}$, the closest ancestor reached through an edge with value $> 1$, and use the idea of path compression, recalculating $P_{v}$ while answering requests of the first type. We introduce a recursive function $F(v)$,
which returns $v$ if the edge from $v$ to its parent has value $> 1$; otherwise it performs the assignment $P_{v} = F(P_{v})$ and returns $F(P_{v})$. Each edge is contracted at most once, so in total all calls of $F(v)$ run in $O(n)$. The final time is $O(log\,y)$ per request of the first type and $O(1)$ amortized per request of the second type.
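The path-compression function $F(v)$ can be sketched like this (my own struct and naming; $P_v$ is $v$'s parent, $R_v$ the value of the edge to it, and the root carries a sentinel $R > 1$ so the recursion terminates):

```cpp
#include <vector>
using namespace std;

// Skips chains of value-1 edges with path compression.
struct SkipOnes {
    vector<int> P;        // P[v]: parent of v (root points to itself)
    vector<long long> R;  // R[v]: value on the edge (P[v], v); R[root] > 1
    int F(int v) {
        if (R[v] > 1) return v;
        return P[v] = F(P[v]);   // compress: remember the answer
    }
};
```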
|
[
"data structures",
"dfs and similar",
"graphs",
"math",
"trees"
] | 2,400
| null |
593
|
E
|
Strange Calculation and Cats
|
Gosha's universe is a table consisting of $n$ rows and $m$ columns. Both the rows and columns are numbered with consecutive integers starting with $1$. We will use $(r, c)$ to denote a cell located in the row $r$ and column $c$.
Gosha is often invited somewhere. Every time he gets an invitation, he first calculates the number of ways to get to this place, and only then he goes. Gosha's house is located in the cell $(1, 1)$.
At any moment of time, Gosha moves from the cell he is currently located in to a cell adjacent to it (two cells are adjacent if they share a common side). Of course, the movement is possible only if such a cell exists, i.e. Gosha will not go beyond the boundaries of the table. Thus, from the cell $(r, c)$ he is able to make a move to one of the cells $(r - 1, c)$, $(r, c - 1)$, $(r + 1, c)$, $(r, c + 1)$. Also, Gosha can skip a move and stay in the current cell $(r, c)$.
Besides the love of strange calculations, Gosha is allergic to cats, so he never goes to the cell that has a cat in it. Gosha knows exactly where and when he will be invited and the schedule of cats travelling along the table. Formally, he has $q$ records, the $i$-th of them has one of the following forms:
- $1$, $x_{i}$, $y_{i}$, $t_{i}$ — Gosha is invited to come to cell $(x_{i}, y_{i})$ at the moment of time $t_{i}$. It is guaranteed that there is no cat inside cell $(x_{i}, y_{i})$ at this moment of time.
- $2$, $x_{i}$, $y_{i}$, $t_{i}$ — at the moment $t_{i}$ a cat appears in cell $(x_{i}, y_{i})$. It is guaranteed that no other cat is located in this cell $(x_{i}, y_{i})$ at that moment of time.
- $3$, $x_{i}$, $y_{i}$, $t_{i}$ — at the moment $t_{i}$ a cat leaves cell $(x_{i}, y_{i})$. It is guaranteed that there is cat located in the cell $(x_{i}, y_{i})$.
Gosha plans to accept only one invitation, but he has not yet decided, which particular one. In order to make this decision, he asks you to calculate for each of the invitations $i$ the number of ways to get to the cell $(x_{i}, y_{i})$ at the moment $t_{i}$. For every invitation, assume that Gosha starts moving from cell $(1, 1)$ at the moment $1$.
Moving between two neighboring cells takes Gosha exactly one unit of time. In particular, this means that Gosha can come into the cell only if a cat sitting in it leaves the moment when Gosha begins his movement from the neighboring cell, and if none of the cats comes to the cell at the time when Gosha is in it.
Two ways to go from cell $(1, 1)$ to cell $(x, y)$ at time $t$ are considered distinct if for at least one moment of time from $1$ to $t$ Gosha's positions are distinct for the two ways at this moment. Note, that during this travel Gosha is allowed to visit both $(1, 1)$ and $(x, y)$ multiple times. Since the number of ways can be quite large, print it modulo $10^{9} + 7$.
|
Let's first learn to solve the problem for small $t$. Use the standard dynamic programming $dp_{x, y, t}$ = number of ways to get into the cell $(x, y)$ at time $t$. The transition is the sum over all valid ways to be in a neighbouring cell (or the same cell) at time $t - 1$. Note that this dp can be computed by matrix exponentiation. Build the transition matrix $T$ with $T_{i, j} = 1$ if we can move from cell $i$ to cell $j$. If $G$ is the vector where $G_{i}$ equals the number of ways to get into cell $i$, then the new vector $G'$ after $dt$ seconds is $G' = G \cdot T^{dt}$. So we can solve the problem without changes in O(log $dt \cdot S^{3}$), where $dt$ is the elapsed time and $S$ is the area of the table. Now consider what happens when a cat is added or removed: the transition matrix changes. Between such requests $T$ is constant, so we can raise it to the required power. Thus, at the moment of a change we recalculate $T$, and between changes we exponentiate the matrix. The solution is O($m \cdot S^{3}$ log $dt$), where $m$ is the number of requests.
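The building block — matrix multiplication and fast exponentiation modulo $10^9 + 7$ — can be sketched as follows (a generic sketch, not tied to the grid layout):

```cpp
#include <vector>
using namespace std;
typedef vector<vector<long long>> Mat;
const long long MOD = 1000000007LL;

Mat mul(const Mat& a, const Mat& b) {
    int n = a.size();
    Mat c(n, vector<long long>(n, 0));
    for (int i = 0; i < n; ++i)
        for (int k = 0; k < n; ++k)
            if (a[i][k])                                  // skip zero entries
                for (int j = 0; j < n; ++j)
                    c[i][j] = (c[i][j] + a[i][k] * b[k][j]) % MOD;
    return c;
}

Mat mpow(Mat a, long long p) {
    int n = a.size();
    Mat r(n, vector<long long>(n, 0));
    for (int i = 0; i < n; ++i) r[i][i] = 1;              // identity
    for (; p; p >>= 1, a = mul(a, a))
        if (p & 1) r = mul(r, a);
    return r;
}
```

For a sanity check, `mpow({{1,1},{1,0}}, n)` yields Fibonacci numbers: entry `[0][1]` equals $F(n)$.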
|
[
"dp",
"matrices"
] | 2,400
| null |
594
|
A
|
Warrior and Archer
|
In the official contest this problem had a different statement, for which the jury's solution worked incorrectly, and for this reason it was excluded from the contest. This mistake has been fixed, and the currently given problem statement and model solution correspond to what the jury wanted it to be during the contest.
Vova and Lesha are friends. They often meet at Vova's place and compete against each other in a computer game named The Ancient Papyri: Swordsink. Vova always chooses a warrior as his fighter and Lesha chooses an archer. After that they should choose initial positions for their characters and start the fight. A warrior is good at melee combat, so Vova will try to make the distance between fighters as small as possible. An archer prefers to keep the enemy at a distance, so Lesha will try to make the initial distance as large as possible.
There are $n$ ($n$ is always even) possible starting positions for characters marked along the $Ox$ axis. The positions are given by their distinct coordinates $x_{1}, x_{2}, ..., x_{n}$, two characters cannot end up at the same position.
Vova and Lesha take turns banning available positions, Vova moves first. During each turn one of the guys bans exactly one of the remaining positions. Banned positions cannot be used by \textbf{both} Vova and Lesha. They continue to make moves until there are only two possible positions remaining (thus, the total number of moves will be $n - 2$). After that Vova's character takes the position with the lesser coordinate and Lesha's character takes the position with the bigger coordinate and the guys start fighting.
Vova and Lesha are already tired by the game of choosing positions, as they need to play it before every fight, so they asked you (the developer of the The Ancient Papyri: Swordsink) to write a module that would automatically determine the distance at which the warrior and the archer will start fighting if both Vova and Lesha play optimally.
|
Let's sort the points by increasing $x$ coordinate and work with the sorted points array next. Suppose that after optimal play the points numbered $l$ and $r$ ($l < r$) are left. It's true that the first player didn't ban any of the points numbered $i$, $l < i < r$: otherwise he could redirect the corresponding move to point $l$ or point $r$ (one can prove this doesn't depend on the second player's optimal moves) and change the optimal answer. It turns out that all the $\frac{n}{2} - 1$ points banned by the first player have numbers outside of the $[l, r]$ segment, therefore $r-l\leq{\frac{n}{2}}$. We should notice that if the first player chose any $[l, r]$ with $r-l={\frac{n}{2}}$, he could always make the final points' numbers lie inside this segment. The second player wants to make $r-l={\frac{n}{2}}$ (he couldn't make it less), which is equivalent to always banning points inside the final $[l, r]$ segment (numbered $l < i < r$). Since the second player doesn't know which segment the first player has chosen, after every first player's move he must find a point which satisfies him for every first player's choice. This point is the median of the set of point numbers left (an odd number of them) after the first player's move. The number of moves the first player has left is smaller by one than the second player's, so the first player can later ban some points from the left and some points from the right, except the three middle ones. Two of them (the leftmost and rightmost ones) shouldn't be banned by the second player, as that could increase the number of points banned from the left (or from the right), but the third, middle point satisfies the second player for every first player's choice. This way the second player always bans a point inside the final point segment. Thus the answer is the minimum among the values $x_{i+{\frac{n}{2}}}-x_{i}$.
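The final formula can be sketched directly (a hypothetical helper name):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// Minimum of x[i + n/2] - x[i] over the sorted coordinates; n is even.
long long warriorArcher(vector<long long> x) {
    sort(x.begin(), x.end());
    int n = x.size();
    long long best = x[n / 2] - x[0];
    for (int i = 0; i + n / 2 < n; ++i)
        best = min(best, x[i + n / 2] - x[i]);
    return best;
}
```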
|
[
"games"
] | 2,300
| null |
594
|
B
|
Max and Bike
|
For months Maxim has been coming to work on his favorite bicycle. And quite recently he decided that he is ready to take part in a cyclists' competitions.
He knows that this year $n$ competitions will take place. During the $i$-th competition the participant must as quickly as possible complete a ride along a straight line from point $s_{i}$ to point $f_{i}$ ($s_{i} < f_{i}$).
Measuring time is a complex process related to usage of a special sensor and a time counter. Think of the front wheel of a bicycle as a circle of radius $r$. Let's neglect the thickness of a tire, the size of the sensor, and all physical effects. The sensor is placed on the rim of the wheel, that is, on some fixed point on a circle of radius $r$. After that the counter moves just like the chosen point of the circle, i.e. moves forward and rotates around the center of the circle.
At the beginning each participant can choose \textbf{any} point $b_{i}$, such that his bike is fully behind the starting line, that is, $b_{i} < s_{i} - r$. After that, he starts the movement, instantly accelerates to his maximum speed and at time $ts_{i}$, when the coordinate of the sensor is equal to the coordinate of the start, the time counter starts. The cyclist makes a complete ride, moving with his maximum speed and at the moment the sensor's coordinate is equal to the coordinate of the finish (moment of time $tf_{i}$), the time counter deactivates and records the final time. Thus, the counter records that the participant made a complete ride in time $tf_{i} - ts_{i}$.
Maxim is good at math and he suspects that the total result doesn't only depend on his maximum speed $v$, but also on his choice of the initial point $b_{i}$. Now Maxim is asking you to calculate for each of $n$ competitions the minimum possible time that can be measured by the time counter. The radius of the wheel of his bike is equal to $r$.
|
The key observation in this problem: at the midpoint of every competition the sensor must be either at the top point of the wheel or at the bottom point of the wheel. To compute the answer we use binary search. If the center of the wheel moved a distance $c$, then the sensor moved a distance $c + r\sin(c / r)$ if the sensor was at the top point of the wheel at the midpoint, or $c - r\sin(c / r)$ if it was at the bottom point, where $r$ is the radius of the wheel. The asymptotic behavior of this solution is $O(n\log n)$.
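A sketch of the binary-search step under one reading of the formula above: for half the ride distance $d/2$ we look for the half-travel $c$ of the wheel's center satisfying $c + r\sin(c/r) = d/2$ (the top-point case); the left-hand side is non-decreasing in $c$ (its derivative is $1 + \cos(c/r) \ge 0$), so bisection applies. The function name and this exact formulation are illustrative assumptions, not the original solution.

```cpp
#include <cmath>

// Solve c + r*sin(c/r) = target for c >= 0 by bisection.
// Since r*sin(c/r) >= -r, any c with f(c) >= target lies in [0, target + r].
double solveHalfTravel(double target, double r) {
    double lo = 0.0, hi = target + r;
    for (int it = 0; it < 100; ++it) {
        double mid = 0.5 * (lo + hi);
        if (mid + r * std::sin(mid / r) < target) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);
}
```

With the half-travel $c$ found, the measured time would be $2c/v$.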
|
[
"binary search",
"geometry"
] | 2,500
| null |
594
|
C
|
Edo and Magnets
|
Edo has got a collection of $n$ refrigerator magnets!
He decided to buy a refrigerator and hang the magnets on the door. The shop can make the refrigerator with any size of the door that meets the following restrictions: the refrigerator door must be rectangle, and both the length and the width of the door must be \textbf{positive integers}.
Edo figured out how he wants to place the magnets on the refrigerator. He introduced a system of coordinates on the plane, where each magnet is represented as a rectangle with sides parallel to the coordinate axes.
Now he wants to remove no more than $k$ magnets (he may choose to keep all of them) and attach all remaining magnets to the refrigerator door, and the area of the door should be as small as possible. A magnet is considered to be attached to the refrigerator door if \textbf{its center} lies on the door or on its boundary. The relative positions of all the remaining magnets must correspond to the plan.
Let us explain the last two sentences. Let's suppose we want to hang two magnets on the refrigerator. If the magnet in the plan has coordinates of the lower left corner ($x_{1}$, $y_{1}$) and the upper right corner ($x_{2}$, $y_{2}$), then its center is located at ($\scriptstyle{\frac{x_{1}+x_{2}}{2}}$, $\textstyle{\frac{y_{1}+y_{2}}{2}}$) (may not be integers). By saying the relative position should correspond to the plan we mean that the only available operation is translation, i.e. the vector connecting the centers of two magnets in the original plan, must be equal to the vector connecting the centers of these two magnets on the refrigerator.
\textbf{The sides of the refrigerator door must also be parallel to coordinate axes.}
|
Let's find the center of every rectangle and multiply all coordinates by 2 (to make them integers). Then we need to find the rectangular door that contains all of these points, where the side lengths of the door are rounded up to the nearest integer (and must be positive). Now let's delete the magnets one by one; the door will gradually shrink. Obviously it is always optimal to delete only points that lie on the sides of the current bounding rectangle. Let's brute-force the $4^{k}$ ways of deleting the magnets: with a recursion, at every step we delete a point with the minimum or maximum value of one of the coordinates. If we store 4 sorted arrays (or 2 deques), each deletion takes $O(1)$, so this solution works in $O(4^{k})$. It can easily be shown that this algorithm always deletes some number of the leftmost, rightmost, uppermost and lowermost points. So we can instead brute-force how $k$ is distributed among these four counts and simulate the deletions using the 4 sorted arrays. This solution has asymptotic behavior $O(k^{4})$.
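For the base case with no deletions, the door area follows directly from the doubled centers; a minimal sketch of just that rounding step (the helper name and interface are illustrative assumptions, not the full $O(k^{4})$ solution):

```cpp
#include <algorithm>
#include <vector>

// Area of the smallest valid door given doubled center coordinates:
// spans are in half-units, so each door side is ceil(span / 2),
// but at least 1 because the sides must be positive integers.
long long doorArea(const std::vector<long long>& xs2,
                   const std::vector<long long>& ys2) {
    auto side = [](long long lo, long long hi) {
        return std::max(1LL, (hi - lo + 1) / 2);   // ceil((hi-lo)/2), min 1
    };
    auto [xlo, xhi] = std::minmax_element(xs2.begin(), xs2.end());
    auto [ylo, yhi] = std::minmax_element(ys2.begin(), ys2.end());
    return side(*xlo, *xhi) * side(*ylo, *yhi);
}
```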
|
[
"brute force",
"greedy",
"implementation",
"two pointers"
] | 2,300
| null |
594
|
D
|
REQ
|
Today on a math lesson the teacher told Vovochka that the Euler function of a positive integer $φ(n)$ is an arithmetic function that counts the positive integers less than or equal to n that are relatively prime to n. The number $1$ is coprime to all the positive integers and $φ(1) = 1$.
Now the teacher gave Vovochka an array of $n$ positive integers $a_{1}, a_{2}, ..., a_{n}$ and a task to process $q$ queries $l_{i}$, $r_{i}$ — to calculate and print $\varphi\left(\prod_{j=l_{i}}^{r_{i}}a_{j}\right)$ modulo $10^{9} + 7$. As it is too hard for a second grade school student, you've decided to help Vovochka.
|
To compute the answer to each query we use the formula $\varphi(n)=n\cdot{\frac{p_{1}-1}{p_{1}}}\cdot{\frac{p_{2}-1}{p_{2}}}\cdots{\frac{p_{k}-1}{p_{k}}}$, where $p_{1}, p_{2}, ..., p_{k}$ are all the prime divisors of $n$. All calculations are done modulo $10^{9} + 7$. Suppose first that we solve the problem for a fixed left end $l$ of the range; every query then becomes a query on a prefix of the array. In the formula, for every prime $p$ we only care about its leftmost occurrence. Convert the query into a product query on a prefix: first initialize a Fenwick tree with ones, then multiply in the points $l, l + 1, ..., n$ by the values $a_{l}, a_{l + 1}, ..., a_{n}$, and for every leftmost occurrence of a prime $p$ at position $i$ additionally multiply in point $i$ by $\frac{p-1}{p}$. This preparation can be done in $O(n\log n\log C)$, where $C$ is the maximum value of an element (this logarithm bounds the number of prime divisors of any $a_{i}$). We are interested in all left ends, so let's see how to move from one left end to the next. Suppose everything is known for the left end $l$; how do we update to $l + 1$? Multiply the Fenwick tree at point $l$ by $a_{l}^{-1}$. We are also no longer interested in the primes of $a_{l}$, so multiply at point $l$ by the values $\frac{p}{p-1}$. But each of these primes may have further occurrences which now become leftmost; add them back with the multiplication described above. With sorting, the transitions between left ends can be done in $O(n\log n\log C)$ total. To answer the queries we sort them in the required order (or collect them in dynamic arrays), and each answer is extracted in $O(\log n)$. So the total asymptotic behavior of the solution is $O(n\log n\log C+q\log n)$ time and $O(n\log C+q)$ additional memory.
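The central data structure here is a Fenwick tree under multiplication modulo a prime; a minimal sketch (interface names are illustrative):

```cpp
#include <vector>

// Fenwick tree storing products modulo MOD: point-multiply and
// prefix-product queries, both in O(log n). "Division" (undoing a
// factor) is multiplication by a modular inverse, valid since MOD
// is prime (Fermat's little theorem).
struct MulFenwick {
    static constexpr long long MOD = 1000000007LL;
    std::vector<long long> t;
    explicit MulFenwick(int n) : t(n + 1, 1) {}
    void mul(int i, long long v) {            // 1-based point update
        for (; i < (int)t.size(); i += i & -i) t[i] = t[i] * v % MOD;
    }
    long long prefix(int i) const {           // product of positions 1..i
        long long r = 1;
        for (; i > 0; i -= i & -i) r = r * t[i] % MOD;
        return r;
    }
    static long long inv(long long a) {       // a^(MOD-2) mod MOD
        long long r = 1, e = MOD - 2;
        for (; e; e >>= 1, a = a * a % MOD) if (e & 1) r = r * a % MOD;
        return r;
    }
};
```

Moving the left end from $l$ to $l+1$ corresponds to `mul(l, inv(a[l]))` plus the prime-factor adjustments described above.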
|
[
"data structures",
"number theory"
] | 2,500
| null |
594
|
E
|
Cutting the Line
|
You are given a non-empty line $s$ and an integer $k$. The following operation is performed with this line exactly once:
- A line is split into \textbf{at most} $k$ non-empty substrings, i.e. string $s$ is represented as a concatenation of a set of strings $s = t_{1} + t_{2} + ... + t_{m}$, $1 ≤ m ≤ k$.
- Some of strings $t_{i}$ are replaced by strings $t_{i}^{r}$, that is, their record from right to left.
- The lines are concatenated back in the same order, we get string $s' = t'_{1}t'_{2}... t'_{m}$, where $t'_{i}$ equals $t_{i}$ or $t_{i}^{r}$.
Your task is to determine the lexicographically smallest string that could be the result of applying the given operation to the string $s$.
|
Let's describe a greedy algorithm that solves the problem for every $k > 2$ and every string $S$. Assume that we always reverse some prefix of the string $S$ (possibly of length one). Since we want to minimize the string lexicographically, it is easy to see that we should reverse the prefix which, after reversing, equals the lexicographically minimal suffix of the reversed string $S^{r}$ — that is, the prefix whose length equals the length of the minimal suffix of $S^{r}$ (reversing a prefix of $S$ is the same as replacing it with a suffix of $S^{r}$). Let the lexicographically minimal suffix of $S^{r}$ be $s$. It can be shown that no two occurrences of $s$ in $S^{r}$ intersect, since otherwise $s$ would be periodic and a shorter minimal suffix would exist. So the string $S^{r}$ looks like $t_{p}s^{a_{p}}t_{p-1}s^{a_{p-1}}t_{p-2}\ldots t_{1}s^{a_{1}}$, where $s^{x}$ means the concatenation of $x$ copies of $s$, $a_{1}, a_{2}, \ldots, a_{p}$ are integers, and $t_{1}, t_{2}, \ldots, t_{p}$ are non-empty strings (except possibly $t_{p}$) that do not contain $s$. If we reverse the needed prefix of $S$, we obtain a string $S'$, and the minimal suffix $s'$ of the reversed string $S'^{r}$ cannot be lexicographically smaller than $s$; therefore we keep $s' = s$. This lets us grow a prefix of the form $s^{b}$ in the answer (and minimize what follows it as well). It is easy to show that the maximum $b$ we can obtain equals $a_{1}$ when $p = 1$, and $a_{1}+\max_{i=2}^{p}a_{i}$ when $p \ge 2$. After these operations the prefix of the answer looks like $s^{a_{1}}s^{a_{i}}t_{i}s^{a_{i-1}}\ldots s^{a_{2}}t_{2}$. Since the $t_{i}$ are non-empty, we cannot increase the number of copies of $s$ in the prefix of the answer any further.
Reversing the second prefix ($s^{a_{i}}\ldots$) is possible because $k > 2$. From the above we know that for $k > 2$, for the remaining string we always reverse the prefix which after reversing equals the suffix of $S^{r}$ of the form $s^{a_{1}}$. To find this suffix each time, it suffices to build the Lyndon decomposition of the reversed string once (using Duval's algorithm) and carefully merge equal factors. One case remains: the prefix of the remaining string may not need to be reversed at all; this is covered by concatenating consecutive reversed prefixes of length 1. Since for $k = 1$ the problem is trivial, we still need to solve $k = 2$: cut the string into two pieces (a prefix and a suffix) and choose which of them to reverse. The case where nothing is reversed is uninteresting, so consider the others. If the prefix is not reversed, the suffix always is; the candidate strings with reversed suffix can be compared in $O(1)$ using the $z$-function of the string $S^{r}\#S$. If the prefix is reversed, we can use the claims from the editorial of problem F of Yandex.Algorithm 2015 Round 2.2 (written by GlebsHP) and check only 2 candidate prefixes to reverse, brute-forcing the reversal of the suffix for each. In the end we choose the better answer of the two cases. The asymptotic behavior of the solution is $O(|s|)$.
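Duval's algorithm mentioned above is standard; a minimal sketch computing the Lyndon factorization $s = w_{1}w_{2}\ldots w_{m}$ with $w_{1} \ge w_{2} \ge \ldots \ge w_{m}$ in linear time:

```cpp
#include <string>
#include <vector>

// Duval's algorithm: factor s into a non-increasing sequence of
// Lyndon words in O(|s|) time and O(1) extra memory.
std::vector<std::string> lyndonFactorization(const std::string& s) {
    int n = (int)s.size(), i = 0;
    std::vector<std::string> factors;
    while (i < n) {
        int j = i + 1, k = i;
        while (j < n && s[k] <= s[j]) {
            if (s[k] < s[j]) k = i;   // still inside one Lyndon word
            else ++k;                 // extending a periodic run
            ++j;
        }
        while (i <= k) {              // output the found word, possibly repeated
            factors.push_back(s.substr(i, j - k));
            i += j - k;
        }
    }
    return factors;
}
```

For example, "banana" factors into "b", "an", "an", "a".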
|
[
"string suffix structures",
"strings"
] | 3,100
| null |
595
|
A
|
Vitaly and Night
|
One day Vitaly was going home late at night and wondering: how many people aren't sleeping at that moment? To estimate, Vitaly decided to look which windows are lit in the house he was passing by at that moment.
Vitaly sees a building of $n$ floors and $2·m$ windows on each floor. On each floor there are $m$ flats numbered from $1$ to $m$, and two consecutive windows correspond to each flat. If we number the windows from $1$ to $2·m$ from left to right, then the $j$-th flat of the $i$-th floor has windows $2·j - 1$ and $2·j$ in the corresponding row of windows (as usual, floors are enumerated from the bottom). Vitaly thinks that people in the flat aren't sleeping at that moment if \textbf{at least one} of the windows corresponding to this flat has lights on.
Given the information about the windows of the given house, your task is to calculate the number of flats where, according to Vitaly, people aren't sleeping.
|
This was an easy implementation problem. Loop $i$ from 1 to $n$ and, inside, loop $j$ from 1 to $2 \cdot m$, increasing $j$ by 2 on every iteration. If on the current iteration $a[i][j] = '1'$ or $a[i][j + 1] = '1'$, increase the answer by one. Asymptotic behavior of this solution: $O(nm)$.
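The loop above is short enough to sketch directly (the function name and row-of-characters input format are illustrative):

```cpp
#include <string>
#include <vector>

// Count flats with at least one lit window; each flat owns two
// consecutive characters of its floor's row ('1' = lit).
int awakeFlats(const std::vector<std::string>& rows) {
    int cnt = 0;
    for (const auto& row : rows)
        for (std::size_t j = 0; j + 1 < row.size(); j += 2)
            if (row[j] == '1' || row[j + 1] == '1') ++cnt;
    return cnt;
}
```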
|
[
"constructive algorithms",
"implementation"
] | 800
| null |
595
|
B
|
Pasha and Phone
|
Pasha has recently bought a new phone jPager and started adding his friends' phone numbers there. Each phone number consists of exactly $n$ digits.
Also Pasha has a number $k$ and two sequences of length $n / k$ ($n$ is divisible by $k$) $a_{1}, a_{2}, ..., a_{n / k}$ and $b_{1}, b_{2}, ..., b_{n / k}$. Let's split the phone number into blocks of length $k$. The first block will be formed by digits from the phone number that are on positions $1$, $2$,..., $k$, the second block will be formed by digits from the phone number that are on positions $k + 1$, $k + 2$, ..., $2·k$ and so on. Pasha considers a phone number good, if the $i$-th block doesn't start from the digit $b_{i}$ and is divisible by $a_{i}$ if represented as an integer.
To represent the block of length $k$ as an integer, let's write it out as a sequence $c_{1}$, $c_{2}$,...,$c_{k}$. Then the integer is calculated as the result of the expression $c_{1}·10^{k - 1} + c_{2}·10^{k - 2} + ... + c_{k}$.
Pasha asks you to calculate the number of good phone numbers of length $n$, for the given $k$, $a_{i}$ and $b_{i}$. As this number can be too big, print it modulo $10^{9} + 7$.
|
Let's compute the answer for every block independently and multiply the answers of all blocks together. Consider a block of length $k$ whose value must be divisible by $x$ and must not start with the digit $y$. Its answer is the number of blocks of exactly $k$ digits (leading zeros allowed) that are divisible by $x$, minus the number of such blocks whose first digit equals $y$. Both quantities reduce to counting the multiples of $x$ in a range of consecutive integers: all blocks correspond to the values $0, 1, \ldots, 10^{k}-1$, and the blocks starting with digit $y > 0$ correspond to the values $y \cdot 10^{k-1}, \ldots, (y+1) \cdot 10^{k-1}-1$ (for $y = 0$ they are the values $0, \ldots, 10^{k-1}-1$). Asymptotic behavior of this solution: $O(n / k)$.
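The per-block count can be sketched as follows (helper names are illustrative; the sketch assumes $k$ is small enough that $10^{k}$ fits in 64 bits, whereas the full problem needs this computed modulo $10^9+7$ for large $k$):

```cpp
// Multiples of x in [lo, hi] (0 <= lo <= hi); 0 counts as a multiple.
long long multiplesIn(long long lo, long long hi, long long x) {
    auto upTo = [&](long long v) { return v < 0 ? 0 : v / x + 1; }; // multiples in [0, v]
    return upTo(hi) - upTo(lo - 1);
}

// Blocks of k digits (leading zeros allowed) divisible by x and not
// starting with digit y.
long long goodBlocks(int k, long long x, int y) {
    long long p = 1;
    for (int i = 0; i + 1 < k; ++i) p *= 10;          // p = 10^(k-1)
    long long total = multiplesIn(0, p * 10 - 1, x);  // all k-digit blocks
    long long lo = (long long)y * p, hi = lo + p - 1; // first digit == y
    if (y == 0) { lo = 0; hi = p - 1; }               // leading-zero blocks
    return total - multiplesIn(lo, hi, x);
}
```

For example, with $k=2$, $x=38$, $y=7$: the multiples 0, 38, 76 are candidates and 76 starts with 7, leaving 2.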
|
[
"binary search",
"math"
] | 1,600
| null |
596
|
A
|
Wilbur and Swimming Pool
|
After making bad dives into swimming pools, Wilbur wants to build a swimming pool in the shape of a rectangle in his backyard. He has set up coordinate axes, and he wants the sides of the rectangle to be parallel to them. Of course, the area of the rectangle must be positive. Wilbur had all four vertices of the planned pool written on a paper, until his friend came along and erased some of the vertices.
Now Wilbur is wondering, if the remaining $n$ vertices of the initial rectangle give enough information to restore the area of the planned swimming pool.
|
It is a necessary and sufficient condition that we have exactly 2 distinct values for $x$ and $y$. If we have less than 2 distinct values for any variable, then there is no way to know the length of that dimension. If there are at least 3 distinct values for any variable, then that means more than 3 vertices lie on that dimension, which cannot happen since there can be at most 2 vertices in a line segment. The area, if it can be found, is just the difference of values of the $x$ coordinates times the difference of values of the $y$ coordinates. Complexity: $O(1)$
|
[
"geometry",
"implementation"
] | 1,100
|
#include <bits/stdc++.h>
using namespace std;
int n;
set<int> a, b;
int value(set<int>& s) {
return (*s.rbegin())-(*s.begin());
}
int main(void) {
cin >> n;
for (int i=0; i<n; i++) {
int x, y;
cin >> x >> y;
a.insert(x);
b.insert(y);
}
if (a.size()!=2 || b.size()!=2)
cout << "-1\n";
else
cout << value(a)*value(b) << "\n";
return 0;
}
|
596
|
B
|
Wilbur and Array
|
Wilbur the pig is tinkering with arrays again. He has the array $a_{1}, a_{2}, ..., a_{n}$ initially consisting of $n$ zeros. At one step, he can choose any index $i$ and either add $1$ to all elements $a_{i}, a_{i + 1}, ... , a_{n}$ or subtract $1$ from all elements $a_{i}, a_{i + 1}, ..., a_{n}$. His goal is to end up with the array $b_{1}, b_{2}, ..., b_{n}$.
Of course, Wilbur wants to achieve this goal in the minimum number of steps and asks you to compute this value.
|
No matter what, we make $|b_{1}|$ operations to make $a_{1}$ equal to $b_{1}$. Once this is done, $a_{2}, a_{3}, ... a_{n} = b_{1}$. Then no matter what, we must make $|b_{2} - b_{1}|$ operations to make $a_{2}$ equal to $b_{2}$. In general, to make $a_{i} = b_{i}$ we need to make $|b_{i} - b_{i - 1}|$ operations, so in total we make $|b_{1}| + |b_{2} - b_{1}| + |b_{3} - b_{2}| + ... + |b_{n} - b_{n - 1}|$ operations. Complexity: $O(n)$
|
[
"greedy",
"implementation"
] | 1,100
|
#include <bits/stdc++.h>
#define MAX_N 200010
#define ll long long
using namespace std;
int n;
ll arr[MAX_N], tot=0;
int main(void) {
scanf("%d",&n);
for (int i=0; i<n; i++) {
scanf("%lld",arr+i);
if (i>0) tot+=abs(arr[i]-arr[i-1]);
else tot+=abs(arr[i]);
}
cout << tot << "\n";
return 0;
}
|
596
|
C
|
Wilbur and Points
|
Wilbur is playing with a set of $n$ points on the coordinate plane. All points have non-negative integer coordinates. Moreover, if some point ($x$, $y$) belongs to the set, then all points ($x'$, $y'$), such that $0 ≤ x' ≤ x$ and $0 ≤ y' ≤ y$ also belong to this set.
Now Wilbur wants to number the points in the set he has, that is assign them distinct integer numbers from $1$ to $n$. In order to make the numbering aesthetically pleasing, Wilbur imposes the condition that if some point ($x$, $y$) gets number $i$, then all ($x'$,$y'$) from the set, such that $x' ≥ x$ and $y' ≥ y$ must be assigned a number not less than $i$. For example, for a set of four points ($0$, $0$), ($0$, $1$), ($1$, $0$) and ($1$, $1$), there are two aesthetically pleasing numberings. One is $1$, $2$, $3$, $4$ and another one is $1$, $3$, $2$, $4$.
Wilbur's friend comes along and challenges Wilbur. For any point he defines it's special value as $s(x, y) = y - x$. Now he gives Wilbur some $w_{1}$, $w_{2}$,..., $w_{n}$, and asks him to find an aesthetically pleasing numbering of the points in the set, such that the point that gets number $i$ has it's special value equal to $w_{i}$, that is $s(x_{i}, y_{i}) = y_{i} - x_{i} = w_{i}$.
Now Wilbur asks you to help him with this challenge.
|
Note that if there is an integer $d$ such that the number of $w_{i}$ equal to $d$ differs from the number of given points whose special value equals $d$, then the answer is automatically "NO". This can easily be checked by building a map for the $w_{i}$ and a map for the special values of the points and checking that the maps are equal; this step takes $O(n\log(n))$ time. Let $d$ be an integer, and let $D$ be the set of all $i$ such that $w_{i} = d$. Let $W$ be the set of all points $(x, y)$ whose special value is $d$. Note that $W$ and $D$ have the same number of elements. Suppose that $i_{1} < i_{2} < ... < i_{k}$ are the elements of $D$. Let $(a, b) < (c, d)$ if $a < c$, or $a = c$ and $b < d$. Suppose that $(x_{1}, y_{1}) < (x_{2}, y_{2}) < ... < (x_{k}, y_{k})$ are the elements of $W$. Then the point $(x_{j}, y_{j})$ must be labeled $i_{j}$ for $1 \le j \le k$. Now every point is labeled, and it remains to check that this labeling is valid. This can be done with an array of vectors, where vector $arr[i]$ holds the points with $x$-coordinate $i$. This array can easily be built from the given points in $O(n)$ time, and since the points are already labeled, $arr[i][j]$ denotes the label of the point $(i, j)$. Now, for every point $(i, j)$, the point $(i, j + 1)$ (if it belongs to the set) and the point $(i + 1, j)$ (if it belongs to the set) must have a greater label than $(i, j)$. This step takes a total of $O(n)$ time. Complexity: $O(n\log(n))$
|
[
"combinatorics",
"greedy",
"sortings"
] | 1,700
|
#include <bits/stdc++.h>
#define MAX_N 100010
using namespace std;
int n;
map<int,vector<pair<int,int> > > weights;
map<int,vector<int> > cnt;
pair<int,int> ans[MAX_N];
vector<int> diag[MAX_N];
int main(void) {
scanf("%d",&n);
for (int i=0; i<n; i++) {
int a, b;
scanf("%d %d",&a,&b);
while (diag[a].size()<=b) diag[a].push_back(0);
weights[b-a].push_back(make_pair(a,b));
}
for (auto& v: weights) {
sort(v.second.begin(),v.second.end());
}
for (int i=0; i<n; i++) {
int a;
cin >> a;
cnt[a].push_back(i);
}
for (auto v: cnt) {
if (weights.count(v.first)==0 || weights[v.first].size()!=v.second.size()) {
printf("NO\n");
return 0;
}
for (int i=0; i<v.second.size(); i++) {
ans[v.second[i]]=weights[v.first][i];
diag[weights[v.first][i].first][weights[v.first][i].second]=v.second[i];
}
}
for (int i=0; i<MAX_N; i++) {
for (int j=0; j<(int)diag[i].size(); j++) {
if ((int)diag[i+1].size()>j && diag[i+1][j]<diag[i][j]) {
printf("NO\n");
return 0;
}
if (j+1<(int)diag[i].size() && diag[i][j]>diag[i][j+1]) {
printf("NO\n");
return 0;
}
}
}
printf("YES\n");
for (int i=0; i<n; i++) {
printf("%d %d\n",ans[i].first,ans[i].second);
}
return 0;
}
|
596
|
D
|
Wilbur and Trees
|
Wilbur the pig really wants to be a beaver, so he decided today to pretend he is a beaver and bite at trees to cut them down.
There are $n$ trees located at various positions on a line. Tree $i$ is located at position $x_{i}$. All the given positions of the trees are distinct.
The trees are equal, i.e. each tree has height $h$. Due to the wind, when a tree is cut down, it either falls left with probability $p$, or falls right with probability $1 - p$. If a tree hits another tree while falling, that tree will fall in the same direction as the tree that hit it. A tree can hit another tree only if the distance between them is strictly less than $h$.
For example, imagine there are $4$ trees located at positions $1$, $3$, $5$ and $8$, while $h = 3$ and the tree at position $1$ falls right. It hits the tree at position $3$ and it starts to fall too. In it's turn it hits the tree at position $5$ and it also starts to fall. The distance between $8$ and $5$ is exactly $3$, so the tree at position $8$ will not fall.
As long as there are still trees standing, Wilbur will select either the leftmost standing tree with probability $0.5$ or the rightmost standing tree with probability $0.5$. Selected tree is then cut down. If there is only one tree remaining, Wilbur always selects it. As the ground is covered with grass, Wilbur wants to know the expected total length of the ground covered with fallen trees after he cuts them all down because he is concerned about his grass-eating cow friends. Please help Wilbur.
|
Let us solve this problem using dynamic programming. First reindex the trees by sorting them by $x$-coordinate. Let $f(i, j, b_{1}, b_{2})$ be the expected covered length for the subproblem in which only trees $i ... j$ are standing, where $b_{1} = 1$ indicates that tree $i - 1$ fell right ($b_{1} = 0$ if it fell left) and $b_{2} = 1$ indicates that tree $j + 1$ fell right ($b_{2} = 0$ if it fell left). We start with the case where Wilbur chooses the left tree and it falls right. The plan is to calculate the expected length in this scenario and multiply by the probability of this case occurring, which is $\frac{(1-p)}{2}$. We can easily precompute the farthest right tree that falls as a result; call it $w_{i}$. If $w_{i} \ge j$, the entire segment falls, from which the length of the ground covered by trees $i ... j$ can be calculated. However, be careful when $b_{2} = 0$, as the covered regions may overlap when tree $j$ falls right but tree $j + 1$ fell left. If instead $w_{i} < j$, we add the length of ground covered by trees $i ... w_{i}$ falling right to the value of the subproblem $f(w_{i} + 1, j, 1, b_{2})$. Another interesting case is when Wilbur chooses the left tree and it falls left. Here we calculate the expected length and multiply by the probability of this occurring, which is $\frac{p}{2}$. The expected length of ground covered here is just the length contributed by tree $i$ falling left, which must be computed carefully since the covered regions may overlap when the $i$-th tree falls left and the $(i-1)$-th tree fell right. Then we also add the value of subproblem $f(i + 1, j, 0, b_{2})$. Doing this naively would take $O(n^{3})$ time, but this can be lowered to $O(n^{2})$ by precalculating what happens when tree $i$ falls left or right.
We should also consider the cases that Wilbur chooses the right tree, but these cases are analogous by symmetry. Complexity: $O(n^{2})$
|
[
"dp",
"math",
"probabilities",
"sortings"
] | 2,300
|
#include <bits/stdc++.h>
#define MAX_N 2010
#define INF 500000000
using namespace std;
int n, h, arr[MAX_N], sz[2][MAX_N];
double dp[MAX_N][MAX_N][2][2], p;
double compute(int lef, int rig, int fl, int fr) {
// 0 means fall left, 1 means fall right
if (lef>rig) return 0;
if (dp[lef][rig][fl][fr]!=-1) return dp[lef][rig][fl][fr];
dp[lef][rig][fl][fr]=0;
double ans=0;
// wilbur chooses left
// falls right
int nl=min(rig,lef+sz[0][lef]-1);
if (nl==rig) {
if (fr==0) ans+=(1-p)*(min(h,arr[rig+1]-arr[rig]-h)+arr[rig]-arr[lef]);
else ans+=(1-p)*(min(h,arr[rig+1]-arr[rig])+arr[rig]-arr[lef]);
}
else {
ans+=(1-p)*compute(nl+1,rig,1,fr);
ans+=(1-p)*(arr[nl]-arr[lef]+h);
}
// falls left
if (fl==1) {
ans+=p*min(h,arr[lef]-arr[lef-1]-h);
}
else {
ans+=p*min(h,arr[lef]-arr[lef-1]);
}
ans+=p*compute(lef+1,rig,0,fr);
// wilbur chooses right
// falls left
int nr=max(lef,rig-sz[1][rig]+1);
if (nr==lef) {
if (fl==1) ans+=p*(min(h,arr[lef]-arr[lef-1]-h)+arr[rig]-arr[lef]);
else ans+=p*(min(h,arr[lef]-arr[lef-1])+arr[rig]-arr[lef]);
}
else {
ans+=p*compute(lef,nr-1,fl,0);
ans+=p*(arr[rig]-arr[nr]+h);
}
// falls right
if (fr==0) {
ans+=(1-p)*min(h,arr[rig+1]-arr[rig]-h);
}
else {
ans+=(1-p)*min(h,arr[rig+1]-arr[rig]);
}
ans+=(1-p)*compute(lef,rig-1,fl,1);
// divide by 2 for equiprobably choosing left or right
ans/=2.0;
dp[lef][rig][fl][fr]=ans;
return ans;
}
int main(void) {
for (int i=0; i<MAX_N; i++) {
for (int j=0; j<MAX_N; j++) {
for (int k=0; k<2; k++) {
for (int l=0; l<2; l++) {
dp[i][j][k][l]=-1;
}
}
}
}
cin >> n >> h >> p;
for (int i=0; i<n; i++) {
cin >> arr[i];
}
arr[n]=INF;
arr[n+1]=-INF;
n+=2;
sort(arr,arr+n);
sz[1][0]=1;
for (int i=1; i<n; i++) {
if (arr[i]-arr[i-1]<h) sz[1][i]=sz[1][i-1]+1;
else sz[1][i]=1;
}
sz[0][n]=1;
for (int i=n-1; i>=0; i--) {
if (arr[i+1]-arr[i]<h) sz[0][i]=sz[0][i+1]+1;
else sz[0][i]=1;
}
printf("%.9f\n",compute(1,n-2,0,1));
return 0;
}
|
596
|
E
|
Wilbur and Strings
|
Wilbur the pig now wants to play with strings. He has found an $n$ by $m$ table consisting only of the digits from $0$ to $9$ where the rows are numbered $1$ to $n$ and the columns are numbered $1$ to $m$. Wilbur starts at some square and makes certain moves. If he is at square ($x$, $y$) and the digit $d$ ($0 ≤ d ≤ 9$) is written at position ($x$, $y$), \textbf{then he must} move to the square ($x + a_{d}$, $y + b_{d}$), if that square lies within the table, and he stays in the square ($x$, $y$) otherwise. Before Wilbur makes a move, he can choose whether or not to write the digit written in this square on the white board. All digits written on the whiteboard form some string. Every time a new digit is written, it goes to the end of the current string.
Wilbur has $q$ strings that he is worried about. For each string $s_{i}$, Wilbur wants to know whether there exists a starting position ($x$, $y$) so that by making finitely many moves, Wilbur can end up with the string $s_{i}$ written on the white board.
|
Solution 1: Suppose that $s$ is a query string. Reverse $s$ and the direction of all the moves that can be made on the table. Note that starting at any point that is part of a cycle, there is a loop and then edges that go out of the loop. So, for every point, it can be checked by dfs whether $s$ can be produced starting at that point, by storing which digits occur in the cycle. Moreover, note that in the reversed graph each point can be part of only one cycle. Therefore, the total time for the dfs in a query is $O(nm \cdot \Sigma + |s|)$. This is good enough for $q$ queries to run in time. Complexity: $O(nmq\cdot \Sigma+\sum_{i}|s_{i}|)$, where $\Sigma = 10$ is the number of distinct characters in the table and $s_{i}$ is the query string of the $i$-th query. Solution 2 (actually too slow; see the comment by waterfalls below for more details): for each string $s$, dfs from every node that has in-degree equal to $0$ in the original graph. There will be a path which leads into a cycle, after which anything in the cycle can be used any number of times in $s$. Only nodes with in-degree equal to $0$ have to be checked, because every path which leads to a cycle is part of a larger path that starts at a vertex of in-degree $0$ and leads into a cycle. This solution is slower, but it works in practice since it is really hard for a string to match so many times in the table. Each query takes $O(n^{2} m^{2} + |s_{i}|)$ time in the worst case, but it is much faster in practice. Complexity: $O(n^{2}m^{2}q\cdot \Sigma+\sum_{i}|s_{i}|)$, where $\Sigma = 10$ is the number of distinct characters in the table and $s_{i}$ is the query string of the $i$-th query.
|
[
"dfs and similar",
"dp",
"graphs",
"strings"
] | 2,500
|
#include <bits/stdc++.h>
#define MAX_N 210
#define SIGMA 10
#define MLEN 1000010
using namespace std;
int n, m, q, values[MAX_N*MAX_N];
int x[SIGMA], y[SIGMA], nxt[MAX_N*MAX_N];
bool in_cycle[MAX_N*MAX_N];
int vis[MAX_N*MAX_N], num_cyc=0, idx[MAX_N*MAX_N];
bool has[MAX_N*MAX_N][SIGMA], vis2[MAX_N*MAX_N];
int first_val[SIGMA];
vector<int> rev[MAX_N*MAX_N], cycles[MAX_N*MAX_N];
void dfs_cycles(int a) {
if (vis[a]==2) return ;
if (vis[a]==1) {
in_cycle[a]=true;
vis[a]=2;
return ;
}
vis[a]=1;
dfs_cycles(nxt[a]);
vis[a]=2;
}
void getcycles() {
memset(vis,0,sizeof(vis));
memset(vis2,0,sizeof(vis2));
memset(in_cycle,0,sizeof(in_cycle));
memset(has,0,sizeof(has));
for (int i=0; i<n; i++) {
dfs_cycles(i);
if (in_cycle[i] && !vis2[i]) {
int j=i;
do {
in_cycle[j]=true;
vis2[j]=true;
idx[j]=num_cyc;
has[num_cyc][values[j]]=true;
cycles[num_cyc].push_back(j);
j=nxt[j];
} while (j!=i);
num_cyc++;
}
}
}
void reverse_graph() {
for (int i=0; i<n; i++) {
if (!in_cycle[i]) {
rev[nxt[i]].push_back(i);
}
}
}
bool dfs_query(int a, int pos, string& s) {
if (pos<s.length() && values[a]==s[pos]-'0') pos++;
if (pos==s.length()) return true;
vis2[a]=true;
for (auto val: rev[a]) {
if (dfs_query(val,pos,s)) return true;
}
return false;
}
void query(string& s) {
memset(vis2,0,sizeof(vis2));
for (int i=0; i<n; i++) {
if (in_cycle[i] && !vis2[i]) {
int pos=-1;
for (int j=0; j<SIGMA; j++) {
if (first_val[j]!=-1 && !has[idx[i]][j]) {
if (pos==-1) pos=first_val[j];
else pos=min(pos,first_val[j]);
}
}
if (pos==-1) {
cout << "YES\n";
return ;
}
for (int j: cycles[idx[i]]) {
if (dfs_query(j,pos,s)) {
cout << "YES\n";
return ;
}
}
}
}
cout << "NO\n";
}
int main(void) {
char ex[MAX_N][MAX_N];
cin >> n >> m >> q;
for (int i=0; i<n; i++) {
for (int j=0; j<m; j++) {
cin >> ex[i][j];
values[m*i+j]=ex[i][j]-'0';
}
}
for (int i=0; i<SIGMA; i++) {
cin >> x[i] >> y[i];
}
for (int i=0; i<n; i++) {
for (int j=0; j<m; j++) {
int ni=i+x[values[m*i+j]], nj=j+y[values[m*i+j]];
if (ni<0 || nj<0 || ni>=n || nj>=m) {
ni=i;
nj=j;
}
nxt[m*i+j]=ni*m+nj;
}
}
n*=m;
getcycles();
reverse_graph();
for (int i=0; i<q; i++) {
memset(first_val,-1,sizeof(first_val));
string s="", t;
cin >> t;
for (int j=0; j<t.length(); j++) {
s+=t[t.length()-1-j];
first_val[t[j]-'0']=t.length()-1-j;
}
query(s);
}
return 0;
}
|
599
|
A
|
Patrick and Shopping
|
Today Patrick waits for a visit from his friend Spongebob. To prepare for the visit, Patrick needs to buy some goodies in two stores located near his house. There is a $d_{1}$ meter long road between his house and the first shop and a $d_{2}$ meter long road between his house and the second shop. Also, there is a road of length $d_{3}$ directly connecting these two shops to each other. Help Patrick calculate the minimum distance that he needs to walk in order to go to both shops and return to his house.
Patrick always starts at his house. He should visit both shops moving only along the three existing roads and return back to his house. He doesn't mind visiting the same shop or passing the same road multiple times. The only goal is to minimize the total distance traveled.
|
All you need to do is check a few similar cases: Home $\to$ the first shop $\to$ the second shop $\to$ home Home $\to$ the first shop $\to$ the second shop $\to$ the first shop $\to$ home Home $\to$ the second shop $\to$ home $\to$ the first shop $\to$ home Home $\to$ the second shop $\to$ the first shop $\to$ the second shop $\to$ home The answer is the minimum of these four distances. Time: $O(1)$
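The case analysis above can be sketched in a few lines (the function name is my own):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimum walk: the best of the four candidate routes from the editorial.
long long minWalk(long long d1, long long d2, long long d3) {
    return min({d1 + d3 + d2,     // home -> shop1 -> shop2 -> home
                2 * (d1 + d3),    // home -> shop1 -> shop2 -> shop1 -> home
                2 * (d1 + d2),    // home -> shop2 -> home -> shop1 -> home
                2 * (d2 + d3)});  // home -> shop2 -> shop1 -> shop2 -> home
}
```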
|
[
"implementation"
] | 800
| null |
599
|
B
|
Spongebob and Joke
|
While Patrick was gone shopping, Spongebob decided to play a little trick on his friend. The naughty Sponge browsed through Patrick's personal stuff and found a sequence $a_{1}, a_{2}, ..., a_{m}$ of length $m$, consisting of integers from $1$ to $n$, not necessarily distinct. Then he picked some sequence $f_{1}, f_{2}, ..., f_{n}$ of length $n$ and for each number $a_{i}$ got number $b_{i} = f_{ai}$. To finish the prank he erased the initial sequence $a_{i}$.
It's hard to express how sad Patrick was when he returned home from shopping! We will just say that Spongebob immediately got really sorry about what he has done and he is now trying to restore the original sequence. Help him do this or determine that this is impossible.
|
First of all, read the statement carefully. Then, for every value from $1$ to $n$, build the list of positions from which this value can be obtained (i.e. the positions $j$ with $f_{j}$ equal to it). After that check the following cases, keeping a special mark for the answer Ambiguity. Let the current element of the given array be $b_{i}$: If two or more positions exist from which it's possible to get $b_{i}$, set the special mark that the answer is Ambiguity. If no position exists from which it's possible to get $b_{i}$, print Impossible. If exactly one position exists from which it's possible to get $b_{i}$, just change $b_{i}$ to that position. Finally, if the special mark was set, print Ambiguity; otherwise print Possible and the restored sequence. Time: $O(n + m)$
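A sketch of this case analysis (function name and return convention are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns {"Impossible", {}}, {"Ambiguity", {}}, or {"Possible", a}.
// f[i] (0-indexed here) is the value that position i+1 maps to.
pair<string, vector<int>> restoreSequence(int n, const vector<int>& f,
                                          const vector<int>& b) {
    vector<int> cnt(n + 1, 0), preimage(n + 1, 0);
    for (int i = 0; i < (int)f.size(); ++i) {
        cnt[f[i]]++;             // how many positions map to this value
        preimage[f[i]] = i + 1;  // remember one such position
    }
    bool ambiguous = false;
    vector<int> a;
    for (int x : b) {
        if (cnt[x] == 0) return {"Impossible", {}};
        if (cnt[x] > 1) ambiguous = true;
        a.push_back(preimage[x]);
    }
    if (ambiguous) return {"Ambiguity", {}};
    return {"Possible", a};
}
```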
|
[
"implementation"
] | 1,500
| null |
599
|
C
|
Day at the Beach
|
One day Squidward, Spongebob and Patrick decided to go to the beach. Unfortunately, the weather was bad, so the friends were unable to ride waves. However, they decided to spent their time building sand castles.
At the end of the day there were $n$ castles built by friends. Castles are numbered from $1$ to $n$, and the height of the $i$-th castle is equal to $h_{i}$. When friends were about to leave, Squidward noticed, that castles are not ordered by their height, and this looks ugly. Now friends are going to reorder the castles in a way to obtain that condition $h_{i} ≤ h_{i + 1}$ holds for all $i$ from $1$ to $n - 1$.
Squidward suggested the following process of sorting castles:
- Castles are split into blocks — groups of \textbf{consecutive} castles. Therefore the block from $i$ to $j$ will include castles $i, i + 1, ..., j$. A block may consist of a single castle.
- The partitioning is chosen in such a way that every castle is a part of \textbf{exactly} one block.
- Each block is sorted independently from other blocks, that is the sequence $h_{i}, h_{i + 1}, ..., h_{j}$ becomes sorted.
- The partitioning should satisfy the condition that after each block is sorted, the sequence $h_{i}$ becomes sorted too. This may always be achieved by saying that the whole sequence is a single block.
Even Patrick understands that increasing the number of blocks in partitioning will ease the sorting process. Now friends ask you to count the maximum possible number of blocks in a partitioning that satisfies all the above requirements.
|
Let's take a minute to see what the best answer looks like. Let $H_{i}$ be the sorted sequence $h_{i}$. Let $E$ be the set of indices of the last elements of the blocks. Then for every $e \in E$, the first $e$ elements of the sequence $h_{i}$, after sorting, are equal to the first $e$ elements of the sequence $H_{i}$. So, it is not difficult to notice that the size of $E$ is the answer to the problem. We need to calculate two arrays: $prefmax$ and $suffmin$, where $prefmax_{i}$ is the maximum among $h_{1}$, $h_{2}$, $...$, $h_{i}$, and $suffmin_{i}$ is the minimum among $h_{i}$, $h_{i + 1}$, $...$, $h_{n}$. To get the answer, just count the number of indices $i$ such that $prefmax_{i}$ $ \le $ $suffmin_{i + 1}$. Time: $O(N)$
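The prefmax/suffmin counting can be sketched as follows (function name is my own):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Maximum number of blocks: count the positions i where
// max(h[0..i]) <= min(h[i+1..n-1]) - a block may end there.
int maxBlocks(const vector<int>& h) {
    int n = h.size();
    vector<int> suffmin(n + 1, INT_MAX);
    for (int i = n - 1; i >= 0; --i) suffmin[i] = min(suffmin[i + 1], h[i]);
    int prefmax = INT_MIN, blocks = 0;
    for (int i = 0; i < n; ++i) {
        prefmax = max(prefmax, h[i]);
        if (prefmax <= suffmin[i + 1]) ++blocks;
    }
    return blocks;
}
```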
|
[
"sortings"
] | 1,600
| null |
599
|
D
|
Spongebob and Squares
|
Spongebob is already tired trying to reason his weird actions and calculations, so he simply asked you to find all pairs of n and m, such that there are exactly $x$ distinct squares in the table consisting of $n$ rows and $m$ columns. For example, in a $3 × 5$ table there are $15$ squares with side one, $8$ squares with side two and $3$ squares with side three. The total number of distinct squares in a $3 × 5$ table is $15 + 8 + 3 = 26$.
|
First of all, let's solve this problem for $n \le m$, and then just swap $n$ and $m$ and print the answer. Important: do not print square tables twice! We can use this formula for fixed $n$ and $m$ $(n \le m)$ for calculating the value of $x$: $x=\sum_{i=0}^{n-1}(n-i)(m-i)=\sum_{i=0}^{n-1}(nm-i(n+m)+i^{2})$. Then $x=n^{2}m-(n+m)\sum_{i=0}^{n-1}i+\sum_{i=0}^{n-1}i^{2}$. Using the formulas for the sum of the first $k$ numbers and the sum of their squares we get $6x = 3mn^{2} + 3mn - n^{3} + n$, so $m = \frac{6x + n^{3} - n}{3n^{2} + 3n}$. As we solve this task for $n \le m$, we have $2n^{3} \le 6x$, which means $n = O(\sqrt[3]{x})$, so we can simply try every such $n$. Time: $O(\sqrt[3]{x})$
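A sketch of the resulting enumeration (function name is my own). It solves for $n \le m$ and emits the swapped pair once, as the editorial warns:

```cpp
#include <bits/stdc++.h>
using namespace std;

// All pairs (n, m) with exactly x squares. From 6x = 3mn^2 + 3mn - n^3 + n
// we get m = (6x + n^3 - n) / (3n^2 + 3n); the condition n <= m is
// equivalent to 2n^3 + 3n^2 + n <= 6x, so n = O(cbrt(x)).
vector<pair<long long, long long>> tablesWithXSquares(long long x) {
    vector<pair<long long, long long>> res;
    for (long long n = 1; 2 * n * n * n + 3 * n * n + n <= 6 * x; ++n) {
        long long num = 6 * x + n * n * n - n, den = 3 * n * n + 3 * n;
        if (num % den) continue;           // m must be an integer
        long long m = num / den;
        res.push_back({n, m});
        if (m != n) res.push_back({m, n}); // the swapped table, once
    }
    sort(res.begin(), res.end());
    return res;
}
```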
|
[
"brute force",
"math"
] | 1,900
| null |
599
|
E
|
Sandy and Nuts
|
Rooted tree is a connected graph without any simple cycles with one vertex selected as a root. In this problem the vertex number $1$ will always serve as a root.
Lowest common ancestor of two vertices $u$ and $v$ is the farthest from the root vertex that lies on both the path from $u$ to the root and on path from $v$ to the root. We will denote it as $LCA(u, v)$.
Sandy had a rooted tree consisting of $n$ vertices that she used to store her nuts. Unfortunately, the underwater storm broke her tree and she doesn't remember all it's edges. She only managed to restore $m$ edges of the initial tree and $q$ triples $a_{i}$, $b_{i}$ and $c_{i}$, for which she supposes $LCA(a_{i}, b_{i}) = c_{i}$.
Help Sandy count the number of trees of size $n$ with vertex $1$ as a root, that match all the information she remembered. If she made a mess and there are no such trees then print $0$. Two rooted trees are considered to be distinct if there exists an edge that occur in one of them and doesn't occur in the other one.
|
The solution for this problem is dynamic programming. Let $f_{root, mask}$ be the number of ways to build a tree rooted at vertex $root$ using the vertices from the mask $mask$ so that all restrictions are met. For convenience we shall number the vertices from zero. The answer is $f_{0, 2^{n} - 1}$. Trivial states are the states where the mask has only one single bit; in such cases $f_{root, mask} = 1$. Let's solve this task recursively with memoization. To make the transition, we choose some mask $newMask$, which must be a submask of the mask $mask$. Then we try every new root $newRoot$ in the mask $newMask$. Also, in order not to count the same tree repeatedly, we impose a condition on the mask $newMask$: namely, we take only such masks $newMask$ in which the highest bit (excluding the bit of $root$) coincides with the highest bit (excluding the bit of $root$) of the mask $mask$. After that, you need to check the fulfillment of all conditions on the edges and on the lca. If everything is OK, update $f_{root,mask} \mathrel{+}= f_{newRoot,newMask} \cdot f_{root,mask \oplus newMask}$, where $\oplus$ means $xor$. As for checking the lca conditions, it's possible to do it in $O(N^{2})$ time with previously memorized lca for each pair, or in the worst case in $O(Q)$ time by just iterating through all triples for which some vertex $v$ must be the lca. Time: $O(3^{N} \cdot N^{3})$ or $O(3^{N} \cdot N \cdot Q)$
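A sketch of just the submask enumeration used in the transition (helper name is mine). Fixing the highest non-root bit inside $newMask$ is what guarantees each partition of the remaining vertices is generated exactly once:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count the newMask candidates for a given (root, mask): submasks of
// mask without the root bit that contain the highest remaining bit.
int countCandidates(int mask, int root) {
    int rest = mask ^ (1 << root);
    if (rest == 0) return 0;
    int high = 1 << (31 - __builtin_clz(rest)); // highest non-root bit
    int cnt = 0;
    for (int sub = rest; sub > 0; sub = (sub - 1) & rest)
        if (sub & high) ++cnt; // only submasks containing that bit qualify
    return cnt;
}
```

Exactly half of the nonempty submasks of `rest` contain the fixed bit, i.e. $2^{popcount(rest)-1}$ of them.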
|
[
"bitmasks",
"dp",
"trees"
] | 2,600
| null |
600
|
A
|
Extract Numbers
|
You are given string $s$. Let's call word any largest sequence of consecutive symbols without symbols ',' (comma) and ';' (semicolon). For example, there are four words in string "aba,123;1a;0": "aba", "123", "1a", "0". A word can be empty: for example, the string $s$=";;" contains three empty words separated by ';'.
You should find all words in the given string that are nonnegative INTEGER numbers without leading zeroes and build by them new string $a$. String $a$ should contain all words that are numbers separating them by ',' (the order of numbers should remain the same as in the string $s$). By all other words you should build string $b$ in the same way (the order of numbers should remain the same as in the string $s$).
Here strings "101", "0" are INTEGER numbers, but "01" and "1.0" are not.
For example, for the string aba,123;1a;0 the string $a$ would be equal to "123,0" and string $b$ would be equal to "aba,1a".
|
This is a technical problem. You should do exactly what is written in problem statement.
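The two pieces worth getting exactly right are the word splitting and the integer test; a sketch of both (function names are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// A word is a valid INTEGER iff it is non-empty, consists of digits only,
// and has no leading zero (the single word "0" is allowed).
bool isNumber(const string& w) {
    if (w.empty()) return false;
    if (w.size() > 1 && w[0] == '0') return false;
    for (char c : w)
        if (!isdigit((unsigned char)c)) return false;
    return true;
}

// Split s on ',' and ';', keeping empty words (";;" has three of them).
vector<string> words(const string& s) {
    vector<string> res{""};
    for (char c : s) {
        if (c == ',' || c == ';') res.push_back("");
        else res.back() += c;
    }
    return res;
}
```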
|
[
"implementation",
"strings"
] | 1,600
| null |
600
|
B
|
Queries about less or equal elements
|
You are given two arrays of integers $a$ and $b$. For each element of the second array $b_{j}$ you should find the number of elements in array $a$ that are less than or equal to the value $b_{j}$.
|
Let's sort all numbers in $a$. Now let's iterate over the elements of $b$ and for each element $b_{j}$ find, using binary search, the index of the first number that is greater than $b_{j}$. That index is exactly the number of elements not exceeding $b_{j}$, i.e. the answer for value $b_{j}$. Complexity: $O((n + m)\log n)$.
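This is a direct application of `std::upper_bound` (function name is my own):

```cpp
#include <bits/stdc++.h>
using namespace std;

// For each b[j], the count of elements of a that are <= b[j].
vector<int> countNotGreater(vector<int> a, const vector<int>& b) {
    sort(a.begin(), a.end());
    vector<int> res;
    for (int x : b)
        // index of the first element > x == number of elements <= x
        res.push_back(upper_bound(a.begin(), a.end(), x) - a.begin());
    return res;
}
```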
|
[
"binary search",
"data structures",
"sortings",
"two pointers"
] | 1,300
| null |
600
|
C
|
Make Palindrome
|
A string is called palindrome if it reads the same from left to right and from right to left. For example "kazak", "oo", "r" and "mikhailrubinchikkihcniburliahkim" are palindroms, but strings "abb" and "ij" are not.
You are given string $s$ consisting of lowercase Latin letters. At once you can choose any position in the string and change letter in that position to any other lowercase letter. So after each changing the length of the string doesn't change. At first you can change some letters in $s$. Then you can permute the order of letters as you want. Permutation doesn't count as changes.
You should obtain palindrome with the minimal number of changes. If there are several ways to do that you should get the lexicographically (alphabetically) smallest palindrome. So firstly you should minimize the number of changes and then minimize the palindrome lexicographically.
|
Let's denote by $cnt_{c}$ the number of occurrences of symbol $c$. Let's consider odd values $cnt_{c}$. A palindrome cannot contain more than one symbol $c$ with odd $cnt_{c}$. Let's denote the symbols with odd $cnt_{c}$ as $a_{1}, a_{2}...a_{k}$ (in alphabetical order). Let's replace one occurrence of symbol $a_{k}$ with symbol $a_{1}$, one occurrence of $a_{k - 1}$ with $a_{2}$ and so on until the middle of this list. Now we have no more than one symbol with an odd count; if we have one, let's place it in the middle of the answer. The first half of the answer will contain $\lfloor cnt_{c} / 2 \rfloor$ occurrences of each symbol $c$ in alphabetical order. The second half will contain the same symbols in reverse order. For example, for the string $s = aabcd$ we will first replace $d$ by $b$ and obtain the answer "abcba". Complexity: $O(n)$.
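A sketch of this construction (function name is my own):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimal-change, lexicographically smallest palindrome rearrangement.
string makePalindrome(const string& s) {
    array<int, 26> cnt{};
    for (char c : s) cnt[c - 'a']++;
    vector<int> odd; // letters with odd counts, in alphabetical order
    for (int c = 0; c < 26; ++c)
        if (cnt[c] & 1) odd.push_back(c);
    // pair up odd letters: change one occurrence of the largest
    // into the smallest, working towards the middle of the list
    for (int i = 0, j = (int)odd.size() - 1; i < j; ++i, --j) {
        cnt[odd[i]]++;
        cnt[odd[j]]--;
    }
    string half, mid;
    for (int c = 0; c < 26; ++c) {
        half += string(cnt[c] / 2, 'a' + c); // alphabetical first half
        if (cnt[c] & 1) mid = string(1, 'a' + c);
    }
    string rev(half.rbegin(), half.rend());
    return half + mid + rev;
}
```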
|
[
"constructive algorithms",
"greedy",
"strings"
] | 1,800
| null |
600
|
D
|
Area of Two Circles' Intersection
|
You are given two circles. Find the area of their intersection.
|
If the circles don't intersect then the answer is $0$. We can check that case with only integer calculations (simply by comparing the squared distance between the centers with the square of the sum of the radii). If one of the circles lies fully inside the other then the answer is the area of the smaller one. We can check this case also with only integer calculations (simply by comparing the squared distance between the centers with the square of the difference of the radii). So now let's consider the general case. The answer will be equal to the sum of the areas of two circular segments. Let's consider the triangle with apexes in the centers of the circles and in some intersection point of the circles. In that triangle we know all three sides, so we can compute the angle of the circular segment. So we can compute the area of the circular sector. And the only thing that we should do now is to subtract the area of the triangle with apexes in the center of the circle and in the intersection points of the circles. We can do that by computing half the absolute value of a cross product. So we have the following formulas: $\alpha=\arccos\left({\textstyle\frac{r_{2}^{2}+d^{2}-r_{1}^{2}}{2r_{2}d}}\right),\ s=\alpha r_{2}^{2},\ t=r_{2}^{2}\sin\alpha\cos\alpha,\ ans_{1}=s-t$, where $d$ is the distance between the centers of the circles. And also we should do the same thing with the second circle by swapping the indices $1 \leftrightarrow 2$. Complexity: $O(1)$.
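A sketch of the whole computation (function name is my own). The editorial recommends doing the two degenerate-case checks in integers; this sketch uses doubles throughout for brevity:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Area of intersection of circles (x1,y1,r1) and (x2,y2,r2).
double circleIntersection(double x1, double y1, double r1,
                          double x2, double y2, double r2) {
    double d2 = (x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2);
    double d = sqrt(d2);
    if (d >= r1 + r2) return 0.0;   // disjoint circles
    if (d <= fabs(r1 - r2)) {       // one circle inside the other
        double r = min(r1, r2);
        return M_PI * r * r;
    }
    // general case: sum of two circular segments, one per circle
    double a1 = acos((r1 * r1 + d2 - r2 * r2) / (2 * r1 * d));
    double a2 = acos((r2 * r2 + d2 - r1 * r1) / (2 * r2 * d));
    double seg1 = a1 * r1 * r1 - r1 * r1 * sin(a1) * cos(a1);
    double seg2 = a2 * r2 * r2 - r2 * r2 * sin(a2) * cos(a2);
    return seg1 + seg2;
}
```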
|
[
"geometry"
] | 2,000
| null |
600
|
E
|
Lomsat gelral
|
You are given a rooted tree with root in vertex $1$. Each vertex is coloured in some colour.
Let's call colour $c$ dominating in the subtree of vertex $v$ if there are no other colours that appear in the subtree of vertex $v$ more times than colour $c$. So it's possible that two or more colours will be dominating in the subtree of some vertex.
The subtree of vertex $v$ is the vertex $v$ and all other vertices that contains vertex $v$ in each path to the root.
For each vertex $v$ find the sum of all dominating colours in the subtree of vertex $v$.
|
The name of this problem is an anagram of ''Small to large''. There is a reason for that :-) The author's solution uses the classic technique for merging sets in a tree. The simple solution is the following: for each vertex $v$ compute a ''map<int, int>'' - the number of occurrences of each colour, a ''set<pair<int, int>>'' - pairs of the number of occurrences and the colour, and the number $sum$ - the sum of the most frequent colours in the subtree of $v$. To find these, first find the same things for all children of $v$ and then merge them into one. This solution is correct but too slow (it works in $O(n^{2}\log n)$ time). Let's improve it: every time we want to merge two maps $a$ and $b$, merge the smaller one into the larger one, simply by iterating over all elements of the smaller one (this is the ''small to large''). Consider some vertex $v$: every time vertex $v$ is moved from one map to another, the size of the new map is at least twice as large, so each vertex can be moved no more than $\log n$ times. Each move can be done in $O(\log n)$ time. Summing these values over all vertices gives the complexity $O(n\log^{2}n)$. I saw solutions that differ from the author's, but this technique can be used in a lot of other problems.
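A minimal sketch of the merge primitive (function name is my own); the full solution would also maintain the set of (count, colour) pairs and the running sum:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Small to large: always move elements of the smaller map into the
// larger one, so each element moves O(log n) times overall.
map<int, int>* mergeSmallToLarge(map<int, int>* a, map<int, int>* b) {
    if (a->size() < b->size()) swap(a, b); // a is now the larger map
    for (auto& [colour, cnt] : *b) (*a)[colour] += cnt;
    b->clear();
    return a; // the vertex keeps a pointer to the merged map
}
```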
|
[
"data structures",
"dfs and similar",
"dsu",
"trees"
] | 2,300
| null |
600
|
F
|
Edge coloring of bipartite graph
|
You are given an undirected bipartite graph without multiple edges. You should paint the edges of graph to minimal number of colours, so that no two adjacent edges have the same colour.
|
Let's denote by $d$ the maximum degree of a vertex in the graph. Let's prove that the answer is $d$. We will build a constructive algorithm for that (it will be the solution to the problem). Let's colour the edges one by one in some order. Let $(x, y)$ be the current edge. If there exists a colour $c$ that is free at vertex $x$ and at vertex $y$, then we can simply colour $(x, y)$ with $c$. If there is no such colour, then there is a pair of colours $c_{1}, c_{2}$ such that $c_{1}$ is used at $x$ and not at $y$, and $c_{2}$ is used at $y$ but not at $x$. Let's free vertex $y$ from colour $c_{2}$. Denote by $z$ the other end of the edge from $y$ with colour $c_{2}$. If $z$ is free from colour $c_{1}$, then we can colour $(x, y)$ with $c_{2}$ and recolour $(y, z)$ with $c_{1}$. So we make an alternation. If $z$ is not free from colour $c_{1}$, let's denote by $w$ the other end of the edge from $z$ with colour $c_{1}$. If $w$ is free from colour $c_{2}$, then again we can do the alternation. And so on. We will find an alternating chain because the graph is bipartite. To find the chain we can use depth first search. Each chain contains no more than $O(n)$ vertices. So we have: Complexity: $O(nm)$.
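A sketch of the alternating-chain recolouring (struct and names are mine). It walks the $c_{1}/c_{2}$ chain from $y$, swaps the colours along it (which frees $c_{1}$ at $y$; in a bipartite graph the chain cannot end at $x$), and then colours the new edge:

```cpp
#include <bits/stdc++.h>
using namespace std;

// at[v][c] = neighbour joined to v by an edge of colour c, or -1.
// Both parts share one id space; D = maximum degree = number of colours.
struct EdgeColoring {
    int D;
    vector<vector<int>> at;
    EdgeColoring(int n, int maxDeg) : D(maxDeg), at(n, vector<int>(maxDeg, -1)) {}
    int freeColour(int v) {
        for (int c = 0; c < D; ++c)
            if (at[v][c] == -1) return c;
        return -1; // cannot happen while degrees stay <= D
    }
    void addEdge(int x, int y) {
        int c1 = freeColour(x), c2 = freeColour(y);
        if (c1 != c2) {
            // collect the alternating c1/c2 chain starting from y
            vector<pair<int, int>> chain;
            for (int v = y, c = c1; at[v][c] != -1; c ^= c1 ^ c2) {
                int u = at[v][c];
                chain.push_back({v, u});
                v = u;
            }
            // swap colours along the chain: clear every edge first,
            // then re-insert, to avoid overwriting shared slots
            for (int i = 0; i < (int)chain.size(); ++i) {
                int c = (i % 2 == 0 ? c1 : c2);
                at[chain[i].first][c] = at[chain[i].second][c] = -1;
            }
            for (int i = 0; i < (int)chain.size(); ++i) {
                int c = (i % 2 == 0 ? c2 : c1);
                at[chain[i].first][c] = chain[i].second;
                at[chain[i].second][c] = chain[i].first;
            }
        }
        at[x][c1] = y;
        at[y][c1] = x;
    }
};
```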
|
[
"graphs"
] | 2,800
| null |
601
|
A
|
The Two Routes
|
In Absurdistan, there are $n$ towns (numbered $1$ through $n$) and $m$ bidirectional railways. There is also an absurdly simple road network — for each pair of different towns $x$ and $y$, there is a bidirectional road between towns $x$ and $y$ \textbf{if and only if} there is no railway between them. Travelling to a different town using one railway or one road always takes exactly one hour.
A train and a bus leave town $1$ at the same time. They both have the same destination, town $n$, and don't make any stops on the way (but they can wait in town $n$). The train can move only along railways and the bus can move only along roads.
You've been asked to plan out routes for the vehicles; each route can use any road/railway multiple times. One of the most important aspects to consider is safety — in order to avoid accidents at railway crossings, the train and the bus must not arrive at the same town (except town $n$) simultaneously.
Under these constraints, what is the minimum number of hours needed for both vehicles to reach town $n$ (the maximum of arrival times of the bus and the train)? Note, that bus and train are not required to arrive to the town $n$ at the same moment of time, but are allowed to do so.
|
The condition that the train and bus can't meet at one vertex except the final one is just trolling. If there's a railway $1 - N$, then the train can take it and wait in town $N$. If there's no such railway, then there's a road $1 - N$; the bus can take it and wait in $N$ instead. There's nothing forbidding this :D. The route of one vehicle is clear. How about the other one? Well, it can move as it wants, so the answer is the length of its shortest path from $1$ to $N$... or $- 1$ if no such path exists. It can be found by BFS in time $O(N + M) = O(N^{2})$. In order to avoid casework, we can just compute the answer as the maximum of the train's and the bus's shortest distance from $1$ to $N$. That way, we compute $\operatorname{max}(1,\operatorname{answer})$; since the answer is $ \ge 1$, it works well. In summary, time and memory complexity: $O(N^{2})$.
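A sketch of the BFS over either graph (function name is my own); the final answer is the maximum of the two distances, or $-1$ if either is unreachable:

```cpp
#include <bits/stdc++.h>
using namespace std;

// BFS distance from town 1 to town n over the railway graph
// (complement = false) or the road graph (complement = true).
int bfsDist(int n, const vector<pair<int, int>>& rails, bool complement) {
    vector<vector<bool>> adj(n + 1, vector<bool>(n + 1, complement));
    for (auto [x, y] : rails) adj[x][y] = adj[y][x] = !complement;
    for (int v = 1; v <= n; ++v) adj[v][v] = false;
    vector<int> dist(n + 1, -1);
    queue<int> q;
    dist[1] = 0;
    q.push(1);
    while (!q.empty()) {
        int v = q.front(); q.pop();
        for (int u = 1; u <= n; ++u)
            if (adj[v][u] && dist[u] == -1) {
                dist[u] = dist[v] + 1;
                q.push(u);
            }
    }
    return dist[n]; // -1 if unreachable
}
```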
|
[
"graphs",
"shortest paths"
] | 1,600
| null |
601
|
B
|
Lipshitz Sequence
|
A function $f:\mathbb{R}\rightarrow\mathbb{R}$ is called Lipschitz continuous if there is a real constant $K$ such that the inequality $|f(x) - f(y)| ≤ K·|x - y|$ holds for all $x,y\in\mathbb{R}$. We'll deal with a more... discrete version of this term.
For an array $\operatorname{h}[1..n]$, we define it's Lipschitz constant $L(\mathbf{h})$ as follows:
- if $n < 2$, $L(\mathbf{h})=0$
- if $n ≥ 2$, $L(\mathbf{h})=\operatorname*{max}\left[{\frac{|\mathbf{h}[j]-\mathbf{h}[i]|}{j-i}}\right]$ over all $1 ≤ i < j ≤ n$
In other words, $L=L(\mathrm{h})$ is the smallest non-negative integer such that $|h[i] - h[j]| ≤ L·|i - j|$ holds for all $1 ≤ i, j ≤ n$.
You are given an array $\bar{\mathbf{A}}$ of size $n$ and $q$ queries of the form $[l, r]$. For each query, consider the subarray $s=\mathbf{a}[l..r]$; determine the sum of Lipschitz constants of \textbf{all subarrays} of $\underline{{\land}}$.
|
Let $L_{1}(i,j)={\frac{|A_{i}-A_{j}|}{|i-j|}}$ for $i \neq j$. Key observation: it's sufficient to consider $j = i + 1$ when calculating the Lipschitz constant. It can be seen if you draw points $(i, A_{i})$ and lines between them on paper - the steepest lines must be between adjacent pairs of points. In order to prove it properly, we'll consider three numbers $A_{i}, A_{j}, A_{k}$ ($i < j < k$) and show that one of the numbers $L_{1}(i, j)$, $L_{1}(j, k)$ is $ \ge L_{1}(i, k)$. W.l.o.g., we may assume $A_{i} \le A_{k}$. There are 3 cases depending on the position of $A_{j}$ relative to $A_{i}, A_{k}$: $A_{j} > A_{i}, A_{k}$ - we can see that $L_{1}(i, j) > L_{1}(i, k)$, since $|A_{j} - A_{i}| = A_{j} - A_{i} > A_{k} - A_{i} = |A_{k} - A_{i}|$ and $j - i < k - i$; we just need to divide those inequalities $A_{j} < A_{i}, A_{k}$ - this is similar to the previous case, we can prove that $L_{1}(j, k) > L_{1}(i, k)$ in the same way $A_{i} \le A_{j} \le A_{k}$ - this case requires more work: we'll denote $d_{1y} = A_{j} - A_{i}, d_{2y} = A_{k} - A_{j}$, $d_{1x} = j - i, d_{2x} = k - j$ then, $L_{1}(i, j) = d_{1y} / d_{1x}$, $L_{1}(j, k) = d_{2y} / d_{2x}$, $L_{1}(i, k) = (d_{1y} + d_{2y}) / (d_{1x} + d_{2x})$ let's prove it by contradiction: assume that $L_{1}(i, j), L_{1}(j, k) < L_{1}(i, k)$ $d_{1y} + d_{2y} = L_{1}(i, j)d_{1x} + L_{1}(j, k)d_{2x} < L_{1}(i, k)d_{1x} + L_{1}(i, k)d_{2x} = L_{1}(i, k)(d_{1x} + d_{2x}) = d_{1y} + d_{2y}$, which is a contradiction We've just proved that for any $L_{1}$ computed for two elements $A[i], A[k]$ with $k > i + 1$, we can replace one of $i, k$ by a point $j$ between them without decreasing $L_{1}$; a sufficient amount of such operations will give us $k = i + 1$. Therefore, the max. $L_{1}$ can be found by only considering differences between adjacent points. This is actually a huge simplification - the Lipschitz constant of an array is the maximum abs. difference of adjacent elements! 
If we replace the array $A[1..n]$ by an array $D[1..n - 1]$ of differences, $D[i] = A[i + 1] - A[i]$, then the Lipschitz constant of a subarray $A[l, r]$ is the max. element in the subarray $D[l..r - 1]$. Finding subarray maxima actually sounds quite standard, doesn't it? No segment trees, of course - there are still too many subarrays to consider. So, what do we do next? There are queries to answer, but not too many of them, so we can process each of them in $O(N)$ time. One approach that works is assigning a max. difference $D[i]$ to each subarray - since there can be multiple max. $D[i]$, let's take the leftmost one. We can invert it to determine the subarrays for which a given $D[i]$ is maximum: if $D[a_{i}]$ is the closest difference to the left of $D[i]$ that's $ \ge D[i]$ or $a_{i} = 0$ if there's none, and $D[b_{i}]$ is the closest difference to the right that's $> D[i]$ or $b_{i} = n - 1$ if there's none (note the strict/non-strict inequality signs - we don't care about differences equal to $D[i]$ to its right, but there can't be any to its left, or it wouldn't be the leftmost max.), then those are all subarrays $D[j..k]$ such that $a_{i} < j \le i \le k < b_{i}$. If we don't have the whole array $D[1..n - 1]$, but only some subarray $D[l..r]$, then we can simply replace $a_{i}$ by $\operatorname*{max}(a_{i},l-1)$ and $b_{i}$ by $\operatorname*{min}(b_{i},r+1)$. The number of those subarrays is $P_{i} = (i - a_{i})(b_{i} - i)$, since we can choose $j$ and $k$ independently. All we have to do to answer a query is check all differences, take $a_{i}$, $b_{i}$ (as the max/min with some precomputed values) and compute $P_{i}$; the answer to the query is $\textstyle\sum_{i=l}^{r}D[i]\cdot P_{i}$. We only need to precompute all $a_{i}, b_{i}$ for the whole array $D[1..n - 1]$ now; that's a standard problem, solvable using stacks in $O(N)$ time or using maps + Fenwick trees in $O(N\log N)$ time. The total time complexity is $O(NQ)$, memory $O(N)$.
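The core of each query is "sum of subarray maxima of $D[l..r]$" with the leftmost-maximum convention (non-strict to the left, strict to the right). A sketch that rebuilds the monotonic stacks per query in $O(r-l)$, rather than precomputing $a_{i}, b_{i}$ globally as the editorial suggests (function name is mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sum of max over all subarrays of D[l..r] (0-based, inclusive).
long long sumSubarrayMax(const vector<long long>& D, int l, int r) {
    int m = r - l + 1;
    if (m <= 0) return 0;
    vector<int> left(m), right(m), st;
    // left[i]: distance to the previous element >= D[i] (leftmost max)
    for (int i = 0; i < m; ++i) {
        while (!st.empty() && D[l + st.back()] < D[l + i]) st.pop_back();
        left[i] = st.empty() ? i + 1 : i - st.back();
        st.push_back(i);
    }
    st.clear();
    // right[i]: distance to the next element strictly greater than D[i]
    for (int i = m - 1; i >= 0; --i) {
        while (!st.empty() && D[l + st.back()] <= D[l + i]) st.pop_back();
        right[i] = st.empty() ? m - i : st.back() - i;
        st.push_back(i);
    }
    long long ans = 0;
    for (int i = 0; i < m; ++i)
        ans += D[l + i] * left[i] * right[i]; // D[i] is max in these subarrays
    return ans;
}
```

For a Lipschitz query $[l, r]$ on $A$, apply this to the difference array $D[i]=|A[i+1]-A[i]|$ restricted to $[l, r-1]$.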
|
[
"data structures",
"math"
] | 2,100
| null |
601
|
C
|
Kleofáš and the n-thlon
|
Kleofáš is participating in an $n$-thlon - a tournament consisting of $n$ different competitions in $n$ different disciplines (numbered $1$ through $n$). There are $m$ participants in the $n$-thlon and each of them participates in all competitions.
In each of these $n$ competitions, the participants are given \underline{ranks} from $1$ to $m$ in such a way that no two participants are given the same rank - in other words, the ranks in each competition form a permutation of numbers from $1$ to $m$. The \underline{score} of a participant in a competition is equal to his/her rank in it.
The \underline{overall score} of each participant is computed as the sum of that participant's scores in all competitions.
The \underline{overall rank} of each participant is equal to $1 + k$, where $k$ is the number of participants with \textbf{strictly smaller} overall score.
The $n$-thlon is over now, but the results haven't been published yet. Kleofáš still remembers his ranks in each particular competition; however, he doesn't remember anything about how well the other participants did. Therefore, Kleofáš would like to know his expected overall rank.
All competitors are equally good at each discipline, so all rankings (permutations of ranks of everyone except Kleofáš) in each competition are equiprobable.
|
As it usually happens with computing expected values, the solution is dynamic programming. There are 2 things we could try to compute: probabilities of individual overall ranks of Kleofáš or just some expected values. In this case, the latter option works. "one bit is 8 bytes?" "no, the other way around" "so 8 bytes is 1 bit?" After some attempts, one finds out that there's no reasonable way to make a DP for an expected rank or score of one person (or multiple people). What does work, and will be the basis of our solution, is the exact opposite: we can compute the expected number of people with a given score. The most obvious DP for it would compute $E(i, s)$ - the exp. number of people other than Kleofáš with score $s$ after the first $i$ competitions. Initially, $E(0, 0) = m - 1$ and $E(0, s > 0) = 0$. How can we get someone with score $s + k$ after competition $i$? That person can have any rank $k$ from 1 to $m$ except $x_{i}$ (since Kleofáš has that one) with the same probability $\frac{1}{m-1}$. The expected values are sums with probabilities $P(i, s, j)$ that there are $j$ people with score $s$: $E(i,s)=\sum_{j}j\cdot P(i,s,j)$. Considering that the probability that one of them will get rank $k$ is $\frac{j}{m-1}$, we know that with probability $P(i,s,j)\frac{j}{m-1}$, we had $j$ people with score $s$ before the competition and one of them has score $s + k$ after that competition - adding 1 to $E(i + 1, s + k)$. By summation over $j$, we'll find the exp. number of people who had overall score $s$ and scored $k$ more: $\sum_{j}P(i,s,j)\frac{j}{m-1}=\frac{E(i,s)}{m-1}$. Lol, it turns out to be so simple. 
We can find $E(i + 1, t)$ afterwards: since getting overall score $t$ after $i + 1$ competitions means getting rank $k$ in the currently processed competition and overall score $s = t - k$ before, and both distinct $k$ and expectations for people with distinct $s$ are totally independent of each other, we just need to sum up the exp. numbers of people with those scores (which we just computed) over the allowed $k$: $E(i+1,t)=\sum_{k}\frac{E(i,t-k)}{m-1}$. The formulas for our DP are now complete and we can use them to compute $E(n, s)$ for all $1 \le s \le mn$. Since the $E(n, s)$ people with score $s$ smaller than the overall score $s_{k}$ of Kleofáš add $E(n, s)$ to the overall rank of Kleofáš (the rank counts people with strictly smaller overall score) and people with $s \ge s_{k}$ add nothing, we can find the answer as $1+\sum_{s<s_{k}}E(n,s)$. This takes $O(m^{2}n^{2})$ time, since there are $O(mn)$ scores, $O(mn^{2})$ states of the DP and directly computing each of them takes $O(m)$ time. Too slow. We can do better, of course. Let's forget about dividing by $m - 1$ for a while; then, $E(i + 1, t)$ is a sum of $E(i, s)$ for one or two ranges of scores - or for one range minus one value. If you can solve div1C, then you should immediately know what to do: compute prefix sums of $E(i, s)$ over $s$ and find $E(i + 1, t)$ for each $t$ using them. And now, computing one state takes $O(1)$ time and the problem is solved in $O(mn^{2})$ time (and memory).
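A sketch of the $O(mn^{2})$ DP with prefix sums (function name and the double-precision choice are mine; per the statement's rank definition, only opponents with strictly smaller overall score contribute):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Expected overall rank of Kleofáš; x[i] is his rank in competition i.
double expectedRank(int n, int m, const vector<int>& x) {
    if (m == 1) return 1.0; // no opponents at all
    int maxS = n * m;
    vector<double> E(maxS + 1, 0.0);
    E[0] = m - 1; // expected number of opponents with score 0 initially
    for (int i = 0; i < n; ++i) {
        vector<double> pre(maxS + 2, 0.0); // prefix sums of the previous row
        for (int s = 0; s <= maxS; ++s) pre[s + 1] = pre[s] + E[s];
        vector<double> nE(maxS + 1, 0.0);
        for (int t = 1; t <= maxS; ++t) {
            int lo = max(0, t - m), hi = t - 1; // previous score s = t - k
            double sum = pre[hi + 1] - pre[lo];
            int s = t - x[i];                   // rank x[i] is Kleofáš's
            if (s >= lo && s <= hi) sum -= E[s];
            nE[t] = sum / (m - 1);
        }
        E = nE;
    }
    long long sk = accumulate(x.begin(), x.end(), 0LL);
    double rank = 1.0;
    for (long long s = 0; s < sk && s <= maxS; ++s)
        rank += E[s]; // opponents with strictly smaller overall score
    return rank;
}
```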
|
[
"dp",
"math",
"probabilities"
] | 2,300
| null |
601
|
D
|
Acyclic Organic Compounds
|
You are given a tree $T$ with $n$ vertices (numbered $1$ through $n$) and a letter in each vertex. The tree is rooted at vertex $1$.
Let's look at the subtree $T_{v}$ of some vertex $v$. It is possible to read a string along each simple path starting at $v$ and ending at some vertex in $T_{v}$ (possibly $v$ itself). Let's denote the number of \textbf{distinct} strings which can be read this way as $\operatorname{dif}(v)$.
Also, there's a number $c_{v}$ assigned to each vertex $v$. We are interested in vertices with the maximum value of $\operatorname{dif}(v)+c_{v}$.
You should compute two statistics: the maximum value of $\operatorname{dif}(v)+c_{v}$ and the number of vertices $v$ with the maximum $\operatorname{dif}(v)+c_{v}$.
|
The name is really almost unrelated - it's just what a tree with arbitrary letters typically is in chemistry. If you solved problem TREEPATH from the recent Codechef November Challenge, this problem should be easier for you - it uses the same technique, after all. Let's figure out how to compute $\operatorname{dif}(v)$ for just one fixed $v$. One more or less obvious way is computing hashes of our strings in a DFS and then counting the number of distinct hashes (which is why there are anti-hash tests :D). However, there's another, deterministic and faster way. Compressing the subtree $T_{v}$ into a trie. Recall that a trie is a rooted tree with a letter in each vertex (or possibly nothing in the root), where each vertex encodes a unique string read along the path from the root to it; it has at most $ \sigma $ sons, where $ \sigma = 26$ is the size of the alphabet, and each son contains a different letter. Adding a son is done trivially in $O( \sigma )$ (each vertex contains an array of 26 links to - possibly non-existent - sons) and moving down to a son with the character $c$ is then possible in $O(1)$. Compressing a subtree can be done in a DFS. Let's build a trie $H_{v}$ (because $T_{v}$ is already used), initially consisting only of one vertex - the root containing the letter $s_{v}$. In the DFS, we'll remember the current vertex $R$ of the tree $T$ and the current vertex $cur$ of the trie. We'll start the DFS at $v$ with $cur$ being the root of $H_{v}$; all we need to do is look at each son $S$ of $R$ in DFS, create the son $cur_{s}$ of $cur$ corresponding to the character $s_{S}$ (if it didn't exist yet) and run $DFS(S, cur_{s})$. This DFS does nothing but construct $H_{v}$ that encodes all strings read down from $v$ in $T_{v}$. And since each vertex of $H_{v}$ encodes a distinct string, $\operatorname{dif}(v)$ is the number of vertices of $H_{v}$. This runs in $O(|T_{v}| \sigma )$ time, since it can create a trie with $|T_{v}|$ vertices in the worst case. 
Overall, it'd be $O(N^{2}\sigma)$ if $T$ looks sufficiently like a path. The HLD trick: Well, what can we do to improve it? This trick is really the same - find the son $w$ of $v$ that has the maximum $|T_{w}|$, add $s_{v}$ to $H_{w}$ and make it $H_{v}$; then, DFS through the rest of $T_{v}$ and complete the trie $H_{v}$ as in the slow solution. The trick resembles HLD a lot, since we're basically remembering tries on HLD-paths. If $v$ is a leaf, of course, we can just create $H_{v}$ consisting of one vertex. How do we "add" $v$ to a trie $H_{w}$ of its son $w$? Well, $v$ should be the root of the trie afterwards and the original $H_{w}$'s root should become its son, so we're rerooting $H_{w}$. We'll just create a new vertex in $H_{w}$ with $s_{v}$ in it, make it the root of $H_{w}$ and make the previous root of $H_{w}$ its son. And if we number the tries somehow, then we can just set the number of $H_{v}$ to be the number of $H_{w}$. It remains true that $\operatorname{dif}(v)$ is $|H_{v}|$ - the number of vertices in the trie $H_{v}$, which allows us to compute those values directly. After computing $\operatorname{dif}(v)$ for each $v$, we can just compute both statistics directly in $O(N)$. Since each vertex of $T$ corresponds to vertices in at most $O(\log N)$ tries (one for each heavy edge on the path from it to the root), we aren't creating tries with a total of $O(N^{2})$ vertices, but $O(N\log N)$. The time complexity is therefore $O(N\log N\sigma)$. However, the same is true for the memory, so you can't waste it too much!
|
[
"data structures",
"dfs and similar",
"dsu",
"hashing",
"strings",
"trees"
] | 2,400
| null |
601
|
E
|
A Museum Robbery
|
There's a famous museum in the city where Kleofáš lives. In the museum, $n$ exhibits (numbered $1$ through $n$) had been displayed for a long time; the $i$-th of those exhibits has value $v_{i}$ and mass $w_{i}$.
Then, the museum was bought by a large financial group and started to vary the exhibits. At about the same time, Kleofáš... gained interest in the museum, so to say.
You should process $q$ events of three types:
- type $1$ — the museum displays an exhibit with value $v$ and mass $w$; the exhibit displayed in the $i$-th event of this type is numbered $n + i$ (see sample explanation for more details)
- type $2$ — the museum removes the exhibit with number $x$ and stores it safely in its vault
- type $3$ — Kleofáš visits the museum and wonders (for no important reason at all, of course): if there was a robbery and exhibits with total mass at most $m$ were stolen, what would their maximum possible total value be?
For each event of type 3, let $s(m)$ be the maximum possible total value of stolen exhibits with total mass $ ≤ m$.
Formally, let $D$ be the set of numbers of all exhibits that are currently displayed (so initially $D$ = {1, ..., n}). Let $P(D)$ be the set of all subsets of $D$ and let
\[
G=\left\{S\in P(D)\left|\sum_{i\in S}w_{i}\leq m\right.\right\}\,.
\]
Then, $s(m)$ is defined as
\[
s(m)=\operatorname*{max}_{S\in G}\left(\sum_{i\in S}v_{i}\right)\,.
\]
Compute $s(m)$ for each $m\in\{1,2,\ldots,k\}$. Note that the output follows a special format.
|
In this problem, we are supposed to solve the 0-1 knapsack problem for a set of items which changes over time. We'll solve it offline - each query (event of type 3) is asked about a subset of all $N$ exhibits appearing on the input. Introduction: If we just had one query and nothing else, it's just the standard knapsack DP. We'll add the exhibits one by one and update $s(m)$ (initially, $s(m) = 0$ for all $m$). When processing an exhibit with $(v, w)$, in order to get loot with mass $m$, we can either take that exhibit and get value at least $s(m - w) + v$, or not take it and get $s(m)$; therefore, we need to replace $s(m)$ by $\operatorname*{max}(s(m),s(m-w)+v)$; the right way to do it is in decreasing order of $m$. In fact, it's possible to merge 2 knapsacks with any number of items in $O(k^{2})$, but that's not what we want here. Note that we can add exhibits this way. Thus, if there were no queries of type 2, we would be able to solve the whole problem in $O(Nk)$ time by just remembering the current $s(m)$ and updating it when adding an exhibit. Even if all queries were of type 2 (with larger $n$), we'd be able to solve it in $O(nk)$ time in a similar way after sorting the exhibits in the order of their removal and processing queries/removals in reverse chronological order. The key: Let's have $Q$ queries numbered $1$ through $Q$ in the order in which they're asked; query $q$ is asked on some subset $S_{q}$ of exhibits. MAGIC TRICK: Compute the values $s(m)$ only for subsets $S_{2q}\cap S_{2q+1}$ - the intersections of pairs of queries $2q, 2q + 1$ (intersection of the first and the second query, of the third and fourth etc.), recursively. Then, recompute $s(m)$ for all individual queries in $O((N + Q)k)$ time by adding elements which are missing in the intersection, using the standard knapsack method. What?! How does this work?! Shouldn't it be more like $O(N^{2})$ time? Well, no - just look at one exhibit and the queries where it appears.
It'll be a contiguous range of them - since it's displayed until it's removed (or the events end). This element will only be missing in the intersection, but present in one query (so it'll be one of the elements added using knapsack DP), if query $2q + 1$ is the one where it appears first or query $2q$ the one where it appears last. That makes at most two additions of each element and $O(N)$ over all of them; adding each of them takes $O(k)$ time, which gives $O(Nk)$. The second part of the complexity, $O(Qk)$ time, is spent by first copying the values of $s(m)$ from the intersection of queries $2q$ and $2q + 1$ to those individual queries. If we're left with just one query, we can solve it in $O(Nk)$ as the usual 0-1 knapsack. Since we're halving the number of queries when recursing deeper, we can only recurse to depth $O(\log Q)$ and the time complexity is $O((N+Q)k\log Q)$. A different point of view (Baklazan's): We can also look at this as building a perfect binary tree with sets $S_{1}, ..., S_{Q}$ in leaves and the intersection of sets of children in every other vertex. For each vertex $v$ of this tree, we're solving the knapsack - computing $s(m)$ - for the set $D_{v}$ of displayed exhibits in it. We will solve the knapsack for the root directly and then proceed to the leaves. In each vertex $v$, we will take $s(m)$, the set $D_{p}$ of its parent $p$ and find $s(m)$ for $v$ by adding exhibits which are in $D_{v}$, but not in $D_{p}$. We know that the set $D_{p}$ is of the form $\textstyle\bigcap_{i=a}^{b}S_{i}$ for some $a, b$ and $D_{v}$ is either of the form $\textstyle\bigcap_{i=a}^{m}S_{i}$ or $\textstyle\bigcap_{i=m+1}^{b}S_{i}$ for $m={\frac{b+a-1}{2}}$ (depending on whether it's the left or the right son). In the first case, only elements removed between the $m$-th and $b$-th query have to be added and in the second case, it's only elements added between the $a$-th and $m + 1$-th query.
Since each element will only be added/removed once and the ranges of queries on the same level of the tree are disjoint, we will do $O((N + Q)k)$ work on each level and the overall time complexity is $O((N+Q)k\log Q)$. Finding the intersections and exhibits not in the intersections: Of course, brute-forcing them in $O(NQ)$ isn't such a bad idea, but it'd probably TLE - and we can do better. We've just described how we can pick those exhibits based on the queries between which they were added/removed. Therefore, we can find - for each exhibit - the interval of queries for which it was displayed and remember for each two consecutive queries the elements added and removed between them; finding the exhibits added/removed in some range is then just a matter of iterating over them. Since we're actually adding all of them, this won't worsen the time complexity. In order to efficiently determine the exhibits in some set $\textstyle\bigcap_{i=a}^{b}S_{i}$, we can remember for each exhibit the interval of time when it was displayed. The exhibit is in the set $\textstyle\bigcap_{i=a}^{b}S_{i}$ if and only if it was displayed before the $a$-th query and remained displayed at least until the $b$-th query. To conclude, the time complexity is $O((N+Q)k\log Q)=O(qk\log q)$ and since we don't need to remember all levels of the perfect binary tree, but just the one we're computing and the one above it, the memory complexity is $O(qk)$.
|
[
"data structures",
"dp"
] | 2,800
| null |
602
|
A
|
Two Bases
|
After seeing the "ALL YOUR BASE ARE BELONG TO US" meme for the first time, numbers $X$ and $Y$ realised that they have different bases, which complicated their relations.
You're given a number $X$ represented in base $b_{x}$ and a number $Y$ represented in base $b_{y}$. Compare those two numbers.
|
It's easy to compare two numbers if they are written in the same base. And our numbers can be converted to a common base - just use the formulas $X\,=\,\sum_{i=1}^{N}\,x_{i}b_{x}^{N-i};\quad Y\,=\sum_{i=1}^{M}\,y_{i}b_{y}^{M-i}\,.$ A straightforward implementation takes $O(N + M)$ time and memory. Watch out, you need 64-bit integers! And don't use pow - iterating $X\rightarrow X b_{x}+x_{i}$ is better.
|
[
"brute force",
"implementation"
] | 1,100
| null |
602
|
B
|
Approximating a Constant Range
|
When Xellos was doing a practice course in university, he once had to measure the intensity of an effect that slowly approached equilibrium. A good way to determine the equilibrium intensity would be choosing a sufficiently large number of consecutive data points that seems as constant as possible and taking their average. Of course, with the usual sizes of data, it's nothing challenging — but why not make a similar programming contest problem while we're at it?
You're given a sequence of $n$ data points $a_{1}, ..., a_{n}$. There aren't any big jumps between consecutive data points — for each $1 ≤ i < n$, it's guaranteed that $|a_{i + 1} - a_{i}| ≤ 1$.
A range $[l, r]$ of data points is said to be almost constant if the difference between the largest and the smallest value in that range is at most $1$. Formally, let $M$ be the maximum and $m$ the minimum value of $a_{i}$ for $l ≤ i ≤ r$; the range $[l, r]$ is almost constant if $M - m ≤ 1$.
Find the length of the longest almost constant range.
|
Let's process the numbers from left to right and recompute the longest range ending at the currently processed number. One option would be remembering the last position of each integer using STL map<>/set<> data structures, looking at the first occurrences of $A_{i}$ plus/minus 1 or 2 to the left of the current $A_{i}$ and deciding on the almost constant range ending at $A_{i}$ based on the second closest of those numbers. However, there's a simpler and more efficient option - notice that if we look at non-zero differences in any almost constant range, then they must alternate: $.., + 1, - 1, + 1, - 1, ..$. If there were two successive differences of $+ 1$-s or $- 1$-s (possibly separated by some differences of $0$), then we'd have numbers $a - 1, a, a, ..., a, a + 1$, so a range that contains them isn't almost constant. Let's remember the latest non-zero difference (whether it was +1 or -1 and where it happened); it's easy to update this info when encountering a new non-zero difference. When doing that update, we should also check whether the new non-zero difference is the same as the latest one (if $A_{i} - A_{i - 1} = A_{j + 1} - A_{j}$). If it is, then we know that any almost constant range that contains $A_{i}$ can't contain $A_{j}$. Therefore, we can keep the current left endpoint $l$ of an almost constant range and update it to $j + 1$ in any such situation; the length of the longest almost constant range ending at $A_{i}$ will be $i - l + 1$. This only needs a constant number of operations per each $A_{i}$, so the time complexity is $O(N)$. Memory: $O(N)$, but it can be implemented in $O(1)$.
|
[
"dp",
"implementation",
"two pointers"
] | 1,400
| null |
603
|
A
|
Alternative Thinking
|
Kevin has just received his disappointing results on the USA Identification of Cows Olympiad (USAICO) in the form of a binary string of length $n$. Each character of Kevin's string represents Kevin's score on one of the $n$ questions of the olympiad—'1' for a correctly identified cow and '0' otherwise.
However, all is not lost. Kevin is a big proponent of alternative thinking and believes that his score, instead of being the sum of his points, should be the length of the longest alternating subsequence of his string. Here, we define an \underline{alternating subsequence} of a string as a \textbf{not-necessarily contiguous} subsequence where no two consecutive elements are equal. For example, ${0, 1, 0, 1}$, ${1, 0, 1}$, and ${1, 0, 1, 0}$ are alternating sequences, while ${1, 0, 0}$ and ${0, 1, 0, 1, 1}$ are not.
Kevin, being the sneaky little puffball that he is, is willing to hack into the USAICO databases to improve his score. In order to be subtle, he decides that he will flip exactly one substring—that is, take a \textbf{contiguous} non-empty substring of his score and change all '0's in that substring to '1's and vice versa. After such an operation, Kevin wants to know the length of the longest possible alternating subsequence that his string could have.
|
Hint: Is there any easy way to describe the longest alternating subsequence of a string? What happens at the endpoints of the substring that we flip? Imagine compressing each contiguous block of the same character into a single character. For example, the first sample case 10000011 gets mapped to 101. Then the longest alternating subsequence of our string is equal to the length of our compressed string. So what does flipping a substring do to our compressed string? To answer this, we can think about flipping a substring as flipping two (possibly empty) prefixes. As an example, consider the string 10000011. Flipping the substring consisting of the fourth and fifth characters is equivalent to flipping the prefix of length 5 and then the prefix of length 3. For the most part, flipping the prefix of a string also flips the corresponding portion of the compressed string. The interesting case occurs at the endpoint of the prefix. Here, we have two possibilities: the two characters on either side of the endpoint are the same or different. If they are the same (00 or 11), then flipping this prefix adds an extra character into our compressed string. If they are different (01 or 10), we merge two characters in our compressed string. These increase and decrease, respectively, the length of the longest alternating subsequence by one. There is actually one more case that we left out: when the endpoint of our prefix is also an endpoint of the string. Then it is easy to check that the length of the longest alternating subsequence doesn't change. With these observations, we see that we want to flip prefixes that end between 00 or 11 substrings. Each such substring allows us to increase our result by one, up to a maximum of two, since we only have two flips. If there exist no such substrings that we can flip, we can always flip the entire string and have our result stay the same.
Thus our answer is the length of the initial longest alternating subsequence plus $\min(2, \text{number of 00 and 11 substrings})$. A very easy way to simplify the above is to notice that if the initial longest alternating subsequence has length at most $n - 2$, then there will definitely be two 00 or 11 substrings. If it has length $n - 1$, then it has exactly one 00 or 11 substring. So our answer can be seen as the even easier $\min(n,\text{longest alternating subsequence}+2)$.
|
[
"dp",
"greedy",
"math"
] | 1,600
|
#include <bits/stdc++.h>
using namespace std;
int N, res = 1;
string S;
int main(){
cin >> N >> S;
for(int i = 1; i < N; i++){
res += (S[i] != S[i - 1]);
}
cout << min(res + 2, N) << '\n';
}
|
603
|
B
|
Moodular Arithmetic
|
As behooves any intelligent schoolboy, Kevin Sun is studying psycowlogy, cowculus, and cryptcowgraphy at the Bovinia State University (BGU) under Farmer Ivan. During his Mathematics of Olympiads (MoO) class, Kevin was confronted with a weird functional equation and needs your help. For two fixed integers $k$ and $p$, where $p$ is an odd prime number, the functional equation states that
\[
f(k x\ \ \mathrm{mod}\ p)\equiv k\cdot f(x)\ \ \mathrm{mod}\ p
\]
for some function $f:\{0,1,2,\ldots,p-1\}\rightarrow\{0,1,2,\ldots,p-1\}$. (This equation should hold for any integer $x$ in the range $0$ to $p - 1$, inclusive.)
It turns out that $f$ can actually be many different functions. Instead of finding a solution, Kevin wants you to count the number of distinct functions $f$ that satisfy this equation. Since the answer may be very large, you should print your result modulo $10^{9} + 7$.
|
Hint: First there are special cases $k = 0$ and $k = 1$. After clearing these out, think about the following: given the value of $f(n)$ for some $n$, how many other values of $f$ can we find? We first have the degenerate cases where $k = 0$ and $k = 1$. If $k = 0$, then the functional equation is equivalent to $f(0) = 0$. Therefore, $p^{p - 1}$ functions satisfy this, because the values $f(1), f(2), ..., f(p - 1)$ can be anything in $\{0, 1, 2, ..., p - 1\}$. If $k = 1$, then the equation is just $f(x) = f(x)$. Therefore $p^{p}$ functions satisfy this, because the values $f(0), f(1), f(2), ..., f(p - 1)$ can be anything in $\{0, 1, 2, ..., p - 1\}$. Now assume that $k \ge 2$, and let $m$ be the least positive integer such that $k^{m}\equiv1 \pmod p$. This is called the \emph{order} of $k \bmod p$. First, plug in $x = 0$ to find that $f(0)=k\cdot f(0)\implies(k-1)f(0)\equiv0\implies f(0)\equiv0 \pmod p$, as $p$ is prime and $k\neq1$. Now for some integer $n\neq0$, choose a value for $f(n)$. Given this value, we can easily show that $f(k^{i}n \bmod p)\equiv k^{i}f(n) \pmod p$ just by plugging in $x = k^{i - 1}n$ into the functional equation and using induction. Note that the numbers $n, kn, k^{2}n, ..., k^{m - 1}n$ are distinct $\bmod p$, since $m$ is the smallest number such that $k^{m}\equiv1 \pmod p$. Therefore, if we choose the value of $f(n)$, we get the values of $m$ numbers: $f(n),f(kn \bmod p),\ldots,f(k^{m-1}n \bmod p)$. Therefore, if we choose $f(n)$ for $\frac{p-1}{m}$ suitable integers $n$, we get the values of all $p - 1$ nonzero integers. Since $f(n)$ can be chosen in $p$ ways for each of the $\frac{p-1}{m}$ integers, the answer is $p^{\frac{p-1}{m}}$. Another way to think about this idea is to view each integer from $0$ to $p - 1$ as a vertex in a graph, where $n$ is connected to $k^{i}n \bmod p$ for every integer $i$.
If we fix the value of $f(n)$ for some $n$, then $f$ also becomes fixed for all other vertices in its connected component. Thus our answer is $p$ raised to the number of connected components in the graph.
|
[
"combinatorics",
"dfs and similar",
"dsu",
"math",
"number theory"
] | 1,800
|
#include <bits/stdc++.h>
using namespace std;
const int MOD = 1000000007;
int P, K;
long long modpow(long long a, long long p){
// iterative binary exponentiation: the naive linear recursion is O(p)
// calls deep, which risks stack overflow for large p
long long r = 1;
for(a %= MOD; p > 0; p >>= 1){
if(p & 1) r = r * a % MOD;
a = a * a % MOD;
}
return r;
}
int main(){
cin >> P >> K;
if(K == 0){
cout << modpow(P, P - 1) << '\n';
} else if(K == 1){
cout << modpow(P, P) << '\n';
} else {
int ord = 1, cur = K;
for( ; cur != 1; ord += 1){
cur = (long long) cur * K % P;
}
cout << modpow(P, (P - 1) / ord) << '\n';
}
}
|
603
|
C
|
Lieges of Legendre
|
Kevin and Nicky Sun have invented a new game called Lieges of Legendre. In this game, two players take turns modifying the game state with Kevin moving first. Initially, the game is set up so that there are $n$ piles of cows, with the $i$-th pile containing $a_{i}$ cows. During each player's turn, that player calls upon the power of Sunlight, and uses it to either:
- Remove a single cow from a chosen non-empty pile.
- Choose a pile of cows with even size $2·x$ ($x > 0$), and replace it with $k$ piles of $x$ cows each.
The player who removes the last cow wins. Given $n$, $k$, and a sequence $a_{1}, a_{2}, ..., a_{n}$, help Kevin and Nicky find the winner, given that both sides play in optimal way.
|
Hint: Is there a way to determine the winner of a game with many piles by looking at only one pile at a time? We'll use the concepts of Grundy numbers and the Sprague-Grundy theorem in this solution. The idea is that every game state can be assigned an integer number, and if there are many piles of a game, then the value assigned to that total game state is the xor of the values of each pile individually. The Grundy number of a state is the minimum number that is not achieved among any state that the state can move to. Given this brief intro (which you can read more about in many places), we have to separate the problem into 2 cases, $k$ even and odd. Let $f(n)$ denote the Grundy number of a pile of size $n$. By definition $f(0) = 0.$ If $k$ is even, then when you split the pile of size $2n$ into $k$ piles of size $n$, the resulting Grundy number of that state is $\underbrace{f(n)\oplus f(n)\oplus\cdots\oplus f(n)}_{k\ \mathrm{times}}=0,$ as $k$ is even. Given this, it is easy to compute that $f(0) = 0, f(1) = 1, f(2) = 2, f(3) = 0, f(4) = 1.$ Now I will show by induction that for $n \ge 2, f(2n - 1) = 0, f(2n) = 1.$ The base cases are clear. For $f(2n - 1)$, the only state that can be moved to from here is that with $2n - 2$ cows. By induction, $f(2n - 2) = 1 > 0,$ so $f(2n - 1) = 0.$ On the other hand, for $2n$, removing one stone gets to a state with $2n - 1$ stones, with Grundy number $f(2n - 1) = 0$ by induction. Using the second operation gives a Grundy number of $0$ as explained above, so the smallest non-negative integer not achievable is $1$, so $f(2n) = 1$. The case where $k$ is odd is similar but requires more work. Let's look at the splitting operation first. This time, from a pile of size $2n$ we can move to $k$ piles of size $n$, with Grundy number $\underbrace{f(n)\oplus f(n)\oplus\cdots\oplus f(n)}_{k\ \mathrm{times}}=f(n),$ as $k$ is odd.
So from $2n$ we can achieve the Grundy numbers $f(2n - 1)$ and $f(n).$ Using this discussion, we can easily compute the first few Grundy numbers. $f(0) = 0, f(1) = 1, f(2) = 0, f(3) = 1, f(4) = 2, f(5) = 0$. I'll prove that for $n \ge 2$, $f(2n) > 0, f(2n + 1) = 0$ by induction. The base cases are clear. Now, for $n \ge 3$, since a pile of size $2n + 1$ can only move to a pile of size $2n$, which by induction has Grundy number $f(2n) > 0$, $f(2n + 1) = 0$. Similarly, because from a pile of size $2n$, you can move to a pile of size $2n - 1$, which has Grundy number $f(2n - 1) = 0$, $f(2n) > 0$. Now computing the general Grundy number $f(n)$ for any $n$ is easy. If $n \le 4$, we have them precomputed. If $n$ is odd and $n > 4$, $f(n) = 0$. If $n$ is even and $n \ge 6$, then $f(n)$ is the minimum excludant (mex) of $f(n - 1) = 0$ and $f(n / 2)$ (because $n - 1$ is odd and $\ge 5$, so $f(n - 1) = 0$). We can do this recursively. The complexity is $O(n)$ in the $k$ even case and $O(n\log MAX)$ in the $k$ odd case.
|
[
"games",
"math"
] | 2,200
|
#include <bits/stdc++.h>
using namespace std;
const int pre[] = {0, 1, 0, 1, 2};
int N, K, A, res;
int grundy(int a){
if(K % 2 == 0){
if(a == 1) return 1;
if(a == 2) return 2;
return (a % 2) ^ 1;
} else {
if(a < 5) return pre[a];
if(a % 2 == 1) return 0;
return (grundy(a / 2) == 1 ? 2 : 1);
}
}
int main(){
cin >> N >> K;
for(int i = 0; i < N; i++){
cin >> A;
res ^= grundy(A);
}
cout << (res ? "Kevin" : "Nicky") << '\n';
}
|
603
|
D
|
Ruminations on Ruminants
|
Kevin Sun is ruminating on the origin of cows while standing at the origin of the Cartesian plane. He notices $n$ lines $\ell_{1},\ell_{2},\ldots,\ell_{n}$ on the plane, each representable by an equation of the form $ax + by = c$. He also observes that no two lines are parallel and that no three lines pass through the same point.
For each triple $(i, j, k)$ such that $1 ≤ i < j < k ≤ n$, Kevin considers the triangle formed by the three lines $\ell_{i},\ell_{j},\ell_{k}$ . He calls a triangle \underline{original} if the circumcircle of that triangle passes through the origin. Since Kevin believes that the circles of bovine life are tied directly to such triangles, he wants to know the number of original triangles formed by unordered triples of distinct lines.
Recall that the circumcircle of a triangle is the circle which passes through all the vertices of that triangle.
|
Hint: It seems like this would be $O(n^{3})$ because of triples of lines. Can you reduce that with some geometric observations? Think of results from Euclidean geometry relating to 4 points lying on the same circle. First, we will prove a lemma, known as the Simson Line: Lemma: Given points $A, B, C, P$ in the plane with $D, E, F$ on lines $BC, CA,$ and $AB$, respectively, such that $PD\perp BC$, $PE\perp AC$, $PF\perp AB$, then $P$ lies on the circumcircle of $\triangle ABC$ if and only if $D, E,$ and $F$ are collinear. Proof: Assume that the points are configured as shown, and the other configurations follow similarly. Recall that a quadrilateral $ABCP$ is cyclic if and only if $\angle BAC=180^{\circ}-\angle BPC$. Note that this implies that a quadrilateral with two opposite right angles is cyclic, so in particular quadrilaterals $AEPF, BFPD, CDPE$ are cyclic. Because $\angle FPE=180^{\circ}-\angle BAC$ we get that $ABPC$ is cyclic if and only if $\angle FPE=\angle BPC$, if and only if $\angle FPB=\angle EPC$. Now note that $\angle FPB=\angle FDB$ (again since $BFPD$ is cyclic) and $\angle EPC=\angle EDC$, so $\angle BDF=\angle EDC$ if and only if $\angle FPB=\angle EPC$, if and only if $ABPC$ is cyclic. Thus the lemma is proven. This lemma provides us with an efficient way to test if the circumcircle of the triangle formed by three lines in the plane passes through the origin. Specifically, for a line $\ell_{i}$, let $X_{i}$ be the projection of the origin onto $\ell_{i}$. Then $\ell_{i},\ell_{j},\ell_{k}$ form an original triangle if and only if $X_{i}, X_{j},$ and $X_{k}$ are collinear. Thus the problem reduces to finding the number of triples $i < j < k$ with $X_{i}, X_{j}, X_{k}$ collinear. The points $X_{i}$ are all distinct, except that possibly two lines may pass through the origin, so we can have up to two points $X_{i}$ at the origin. Let us first solve the problem in the case that all points $X_{i}$ are distinct.
In this case, consider each $i$, and store for each slope how many points $X_{j}$ with $i < j$ the line $X_{i}X_{j}$ has this slope. This storage can be done in $O(1)$ or $O(\log n)$, depending on how hashing is done. Note also that we must consider a vertical line to have the valid slope $\frac{1}{0}$. If $a_{1},a_{2},\ldots,a_{\ell}$ are the numbers of points corresponding to the distinct slopes $S_{1},S_{2},\ldots,S_{\ell}$ through $X_{i}$ (for points $X_{j}$ with $i < j$), then for $X_{i}$ we add to the total count ${\binom{a_{1}}{2}}+{\binom{a_{2}}{2}}+\cdots+{\binom{a_{\ell}}{2}}$. If the $X_{i}$ are not distinct, we only encounter an issue in the above solution when we consider the slope through points $X_{i}$ and $X_{j}$ where $X_{i} = X_{j}$. In this case, for any third $k$, $X_{i}, X_{j},$ and $X_{k}$ are collinear. So when considering slopes from $X_{i}$ in the original algorithm, we simply run the algorithm on all slopes other than the one through $X_{j}$, and simply add $n - 2$ to the count afterwards to account for the $n - 2$ points which are collinear with $X_{i}$ and $X_{j}$. Running the above, we get an algorithm that runs in $O(n^{2})$ or $O(n^{2}\log{n})$. Another approach which doesn't involve the Simson line follows a similar idea: we want to find some property $f(i, j)$ between points $X_{i}$ and $X_{j}$ such that the triangle formed by indices $i, j, k$ is original if and only if $f(i, j) = f(i, k)$. Then we can use the same argument as above to solve the problem in $O(n^{2})$ or $O(n^{2}\log{n})$. Instead of using the slope between points $i, j$ as the property, suppose $\ell_{i}$ and $\ell_{j}$ meet at some point $P$, and let $O$ be the origin (again $O = P$ is a special case). Then we let $f(i, j)$ be the angle between $\ell_{j}$ and $OP$.
Because of the properties of cyclic quadrilaterals explained above, the triangle is original if and only if $f(i, j) = f(i, k)$, up to defining the angle as positive or negative and modulo $180^{\circ}$. Following this approach carefully, we can finish as before.
|
[
"geometry",
"math"
] | 2,900
|
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
typedef pair<ll,ll> pll;
ll gcd(ll x, ll y){
for( ; y > 0; swap(x, y)) x %= y;
return x;
}
pll reduce(ll x, ll y){
ll g = gcd(abs(x), abs(y));
if(g != 0) x /= g, y /= g;
if((y < 0) || (y == 0 && x < 0)) x = -x, y = -y;
return make_pair(x, y);
}
const int MAXN = 2005;
int N, A, B, C;
ll X[MAXN], Y[MAXN], Z[MAXN], res;
int main(){
cin >> N;
for(int i = 0; i < N; i++){
cin >> A >> B >> C;
X[i] = A * C;
Y[i] = B * C;
Z[i] = A * A + B * B;
}
for(int i = 0; i < N; i++){
int tot = 0, zero = 0;
map<pll,int> freq;
for(int j = i+1; j < N; j++){
ll u = X[i] * Z[j] - X[j] * Z[i];
ll v = Y[i] * Z[j] - Y[j] * Z[i];
pll p = reduce(u, v);
if(p == pll()){
res += tot;
zero += 1;
} else {
res += freq[p] + zero;
freq[p] += 1;
}
tot += 1;
}
}
cout << res << '\n';
}
|
603
|
E
|
Pastoral Oddities
|
In the land of Bovinia there are $n$ pastures, but no paths connecting the pastures. Of course, this is a terrible situation, so Kevin Sun is planning to rectify it by constructing $m$ undirected paths connecting pairs of distinct pastures. To make transportation more efficient, he also plans to pave some of these new paths.
Kevin is very particular about certain aspects of path-paving. Since he loves odd numbers, he wants each pasture to have an odd number of paved paths connected to it. Thus we call a paving \underline{sunny} if each pasture is incident to an odd number of paved paths. He also enjoys short paths more than long paths, so he would like the longest paved path to be as short as possible. After adding each path, Kevin wants to know if a sunny paving exists for the paths of Bovinia, and if at least one does, the minimum possible length of the longest path in such a paving. Note that "longest path" here means maximum-weight edge.
|
Hint: What is a necessary and sufficient condition for Kevin to be able to pave paths so that each pasture is incident to an odd number of them? Does this problem remind you of constructing a minimum spanning tree? We represent this problem on a graph with pastures as vertices and paths as edges. Call a paving where each vertex is incident to an odd number of paved edges an \emph{odd paving}. We start with a lemma about such pavings: A connected graph has an odd paving if and only if it has an even number of vertices. For connected graphs with even numbers of vertices, we can prove this observation by considering a spanning tree of the graph. To construct an odd paving, start from the leaves of the tree and greedily pave edges so that each vertex but the root is incident to an odd number of paved edges. Now consider the graph consisting only of paved edges. Since the sum of all vertex degrees in this graph must be even, it follows that the root is also incident to an odd number of paved edges, so the paving is odd. Now we prove that no odd paving exists in the case of an odd number of vertices. Suppose for the sake of contradiction that one existed. Then the sum of the vertex degrees in the graph consisting only of paved edges would be odd, which is impossible. Thus no odd paving exists for graphs with odd numbers of vertices. Note that this observation turns the degree condition into a condition on the parity of connected component sizes. We finish the problem using this equivalent condition. Suppose we only want to solve this problem once, after all $m$ edges are added. Then we can use Kruskal's algorithm to build a minimum spanning forest by adding edges in order of increasing length. We stop once each tree in the forest contains an even number of vertices, since the graph now satisfies the conditions of the lemma. If there are still odd-sized components by the time we add all the edges, then no odd paving exists.
This algorithm, however, runs in $O(m\log m)$ per query, which is too slow if we want to answer after adding each edge. To speed things up, we maintain the ending position of our version of Kruskal's as we add edges online. We do this using a data structure called a link-cut tree. This data structure allows us to add and delete edges from a forest while handling path and connectivity queries. All of these operations take only $O(\log n)$ time per operation. (A path query asks for something like maximum-weight edge on the path between $u$ and $v$; a connectivity query asks if $u$ and $v$ are connected.) First, let's look at how we can solve the online minimum spanning tree problem with a link-cut tree. We augment our data structure to support finding the maximum-weight edge on the path between vertices $u$ and $v$ in $O(\log n)$. Adding an edge then works as follows: If $u$ and $v$ are not connected, connect $u$ and $v$; otherwise, if the new edge is cheaper, delete the maximum-weight edge on the path between $u$ and $v$ and add the new edge. To make implementation easier, we can represent edges as vertices in the link-cut tree. For example, if $u$ and $v$ are connected, in the link-cut tree they would be connected as $u$--$e$--$v$, where $e$ is a vertex representing edge $u$--$v$. We solve our original problem with a similar idea. Note that the end state of our variation on Kruskal's is a minimum spanning forest after adding $k$ edges. (We no longer care about the edges that are longer than the longest of these $k$ edges, since the answer is monotonically decreasing---more edges never hurt.) So when we add another edge to the forest, we can use the online minimum spanning tree idea to get the minimum spanning forest that uses the old cheapest $k$ edges and our new edge. Note that inserting the edge never increases the number of odd components: even linked to even is even, even linked to odd is odd, odd linked to odd is even. 
Now, pretend that we got this arrangement by running Kruskal's algorithm, adding the edges one-by-one. We can "roll back" the steps of the algorithm by deleting the longest edge until deleting another edge would give us an odd-sized component. (If we started with an odd-sized component, we don't delete anything.) This gives us an ending position for our Kruskal-like algorithm that uses a minimal number of edges so that all components have even size---we're ready to add another edge. ("But wait a minute!" you may say. "What if edges have the same weight?" In this case, if we can't remove one of possibly many longest edges, then we can't lower our answer anyway, so we stop.) Note that all of this takes amortized $O(\log n)$ time per edge. The path queries and the insertion of the new edge involve a constant number of link-cut tree operations. To know which edge to delete, the set of edges currently in the forest can be stored easily in an STL set sorted by length. When adding an edge, we also pay for the cost of deleting that edge, so the "rolling back" phase gets accounted for. Therefore, this algorithm runs in $O(m\log n)$. You may have noticed that executing this algorithm involves checking the size of a connected component in the link-cut tree. This is a detail that needs to be resolved carefully, since link-cut trees usually only handle path operations, not operations that involve subtrees. Here, we stop treating link-cut trees as a black box. (If you aren't familiar with the data structure, you should read about it at https://courses.csail.mit.edu/6.851/spring12/scribe/L19.pdf ) At each vertex, we track the size of its virtual subtree, as well as the sum of the real subtree sizes of its non-preferred children. We can update these values while linking and exposing (a.k.a. accessing), allowing us to perform root-change operations while keeping real subtree sizes. To get the size of a component, we just query for the real subtree size of the root. 
Since the implementation of this algorithm can be rather challenging, here is a link to a documented version of my code:
|
[
"data structures",
"divide and conquer",
"dsu",
"math",
"trees"
] | 3,000
|
#include <bits/stdc++.h>
using namespace std;
const int MAXN = 100005;
const int MAXM = 300005;
struct node{
node *l, *r, *p, *m;
int f, v, w, s, id;
node(int v, int w, int id) : l(), r(), p(), m(this), f(), v(v), w(w), s(w), id(id) {}
};
inline bool is_root(node *n){
return n -> p == NULL || (n -> p -> l != n && n -> p -> r != n);
}
inline bool left(node *n){
return n == n -> p -> l;
}
inline int sum(node *n){ return n ? n -> s : 0; }
inline node* max(node *n){ return n ? n -> m : NULL; }
inline node* max(node *a, node *b){
if(a == NULL || b == NULL) return a ? a : b;
return a -> v > b -> v ? a : b;
}
inline void push(node *n){
if(!n -> f) return;
n -> f = 0;
swap(n -> l, n -> r);
if(n -> l) n -> l -> f ^= 1;
if(n -> r) n -> r -> f ^= 1;
}
inline void pull(node *n){
n -> m = max(max(max(n -> l), max(n -> r)), n);
n -> s = sum(n -> l) + sum(n -> r) + n -> w;
}
inline void connect(node *n, node *p, bool l){
(l ? p -> l : p -> r) = n;
if(n) n -> p = p;
}
inline void rotate(node *n){
node *p = n -> p, *g = p -> p;
bool l = left(n);
connect(l ? n -> r : n -> l, p, l);
if(!is_root(p)) connect(n, g, left(p));
else n -> p = g;
connect(p, n, !l);
pull(p), pull(n);
}
inline void splay(node *n){
while(!is_root(n)){
node *p = n -> p;
if(!is_root(p)) push(p -> p);
push(p), push(n);
if(!is_root(p)) rotate(left(n) ^ left(p) ? n : p);
rotate(n);
}
push(n);
}
inline void expose(node *n){
node *last = NULL;
for(node *m = n; m; m = m -> p){
splay(m);
m -> w -= sum(last);
m -> w += sum(m -> r);
m -> r = last;
last = m;
pull(m);
}
splay(n);
}
inline void evert(node *n){
expose(n);
n -> f ^= 1;
}
inline void link(node *m, node *n){
evert(m);
expose(n);
m -> p = n;
n -> w += sum(m);
}
inline void cut(node *m, node *n){
evert(m);
expose(n);
n -> l -> p = NULL;
n -> l = NULL;
}
inline node* path_max(node *m, node *n){
evert(m);
expose(n);
return n -> m;
}
inline int size(node *n){
evert(n);
return sum(n);
}
inline bool connected(node *m, node *n){
expose(m);
expose(n);
return m -> p != NULL;
}
struct edge{
int a, b, w, id;
bool operator<(const edge e) const {
return w != e.w ? w > e.w : id > e.id;
}
};
int n, m, o;
node *v[MAXN], *ev[MAXM];
vector<edge> ed;
set<edge> s;
int delete_edge(edge e){
cut(v[e.a], ev[e.id]);
cut(v[e.b], ev[e.id]);
if(size(v[e.a]) & 1 && size(v[e.b]) & 1) o += 2;
return !(size(v[e.a]) & 1);
}
void add_edge(edge e){
if(size(v[e.a]) & 1 && size(v[e.b]) & 1) o -= 2;
link(v[e.a], ev[e.id]);
link(v[e.b], ev[e.id]);
}
void new_edge(int a, int b, int w){
int id = ed.size();
edge e = {a, b, w, id};
ed.push_back(e);
ev[id] = new node(w, 0, id);
if(connected(v[a], v[b])){
node* m = path_max(v[a], v[b]);
if(m -> v <= w) return;
delete_edge(ed[m -> id]);
s.erase(ed[m -> id]);
}
add_edge(e);
s.insert(e);
}
int max_cost(){
if(o > 0) return -1;
while(delete_edge(*s.begin())){
s.erase(s.begin());
}
add_edge(*s.begin());
return s.begin() -> w;
}
int main(){
ios::sync_with_stdio(0);
cin.tie(0);
cin >> n >> m;
for(int i = 1; i <= n; i++){
v[i] = new node(0, 1, -i);
}
o = n;
for(int i = 0; i < m; i++){
int a, b, w;
cin >> a >> b >> w;
new_edge(a, b, w);
cout << max_cost() << '\n';
}
}
|
604
|
A
|
Uncowed Forces
|
Kevin Sun has just finished competing in Codeforces Round #334! The round was 120 minutes long and featured five problems with maximum point values of 500, 1000, 1500, 2000, and 2500, respectively. Despite the challenging tasks, Kevin was uncowed and bulldozed through all of them, distinguishing himself from the herd as the best cowmputer scientist in all of Bovinia. Kevin knows his submission time for each problem, the number of wrong submissions that he made on each problem, and his total numbers of successful and unsuccessful hacks. Because Codeforces scoring is complicated, Kevin wants you to write a program to compute his final score.
Codeforces scores are computed as follows: If the maximum point value of a problem is $x$, and Kevin submitted correctly at minute $m$ but made $w$ wrong submissions, then his score on that problem is $\operatorname*{max}\left(0.3x,\left(1-{\frac{m}{250}}\right)x-50w\right)$. His total score is equal to the sum of his scores for each problem. In addition, Kevin's total score gets increased by $100$ points for each successful hack, but gets decreased by $50$ points for each unsuccessful hack.
All arithmetic operations are performed with absolute precision and no rounding. It is guaranteed that Kevin's final score is an integer.
|
Hint: Just do it! But if you're having trouble, try doing your computations using only integers. This problem is straightforward implementation---just code what's described in the problem statement. However, floating point error is one place where you can trip up. Avoid it by rounding (adding $0.5$ before casting to int), or by doing all calculations with integers. The latter is possible since $250$ always divides the maximum point value of a problem. Thus when we rewrite our formula for the score as $\operatorname*{max}\left(3\cdot x/10,\left(250-m\right)\cdot x/250-50w\right)$, it is easy to check that we only have integers as intermediate values.
|
[
"implementation"
] | 1,000
|
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
int pt[5] = {500, 1000, 1500, 2000, 2500}; int tot = 0;
int m[10], w[10];
int main() {
for (int i = 1; i <= 5; ++i)
cin >> m[i];
for (int i = 1; i <= 5; ++i)
cin >> w[i];
int s, u; cin >> s >> u; tot = 100*s-50*u;
for (int i = 1; i <= 5; ++i)
tot += max(pt[i-1]-pt[i-1]*m[i]/250-50*w[i], pt[i-1]/10*3);
cout << tot << endl;
}
|
604
|
B
|
More Cowbell
|
Kevin Sun wants to move his precious collection of $n$ cowbells from Naperthrill to Exeter, where there is actually grass instead of corn. Before moving, he must pack his cowbells into $k$ boxes of a fixed size. In order to keep his collection safe during transportation, he won't place more than \textbf{two} cowbells into a single box. Since Kevin wishes to minimize expenses, he is curious about the smallest size box he can use to pack his entire collection.
Kevin is a meticulous cowbell collector and knows that the size of his $i$-th ($1 ≤ i ≤ n$) cowbell is an integer $s_{i}$. In fact, he keeps his cowbells sorted by size, so $s_{i - 1} ≤ s_{i}$ for any $i > 1$. Also an expert packer, Kevin can fit one or two cowbells into a box of size $s$ if and only if the sum of their sizes does not exceed $s$. Given this information, help Kevin determine the smallest $s$ for which it is possible to put all of his cowbells into $k$ boxes of size $s$.
|
Hint: Try thinking about a sorted list of cowbells. What do we do with the largest ones? Intuitively, we want to use as many boxes as we can and put the largest cowbells by themselves. Then, we want to pair the leftover cowbells so that the largest sum of a pair is minimized. This leads to the following greedy algorithm: First, if $k \ge n$, then each cowbell can go into its own box, so our answer is $max(s_{1}, s_{2}, ..., s_{n})$. Otherwise, we can have at most $2k - n$ boxes that contain one cowbell. So as the cowbells are sorted by size, we put the $2k - n$ largest into their own boxes. For the remaining $n - (2k - n) = 2(n - k)$ cowbells, we pair the $i$-th largest cowbell with the $(2(n - k) - i + 1)$-th largest. In other words, we match the smallest remaining cowbell with the largest, the second smallest with the second largest, and so on. Given these pairings, we can loop through them to find the largest box we'll need. The complexity of this algorithm is $O(n)$ in all cases. To prove that this greedy works, think about the cowbell that the largest one gets paired with. If it's not the smallest, we can perform a swap so that the largest cowbell is paired with the smallest without making our answer worse. After we've paired the largest cowbell, we can apply the same logic to the second largest, third largest, etc. until we're done.
|
[
"binary search",
"greedy"
] | 1,400
|
#include <bits/stdc++.h>
using namespace std;
const int MAXN = 100000;
int N, K, S[MAXN], res;
int main(){
cin >> N >> K;
for(int i = 0; i < N; i++){
cin >> S[i];
res = max(res, S[i]);
}
for(int i = 0; i < N - K; i++){
res = max(res, S[i] + S[2 * (N - K) - 1 - i]);
}
cout << res << '\n';
}
|
605
|
A
|
Sorting Railway Cars
|
An infinitely long railway has a train consisting of $n$ cars, numbered from $1$ to $n$ (the numbers of all the cars are distinct) and positioned in arbitrary order. David Blaine wants to sort the railway cars in the order of increasing numbers. In one move he can make one of the cars disappear from its place and teleport it either to the beginning of the train, or to the end of the train, at his desire. What is the minimum number of actions David Blaine needs to perform in order to sort the train?
|
Let's suppose we removed from the array all the elements we would move. What remains? A sequence of consecutive numbers: a, a+1, \dots , b. The length of this sequence must be maximal to minimize the number of elements to move. Consider the array pos, where pos[p[i]] = i. Look at its subsegment pos[a], pos[a+1], \dots , pos[b]. This sequence must be increasing and, as mentioned above, its length must be maximal. So we must find the longest subsegment of pos such that pos[a], pos[a+1], \dots , pos[b] is increasing; the answer is n minus its length.
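A minimal sketch of this approach (assuming the train is given as a 0-indexed vector holding a permutation of 1..n; `min_moves` is a hypothetical helper name):

```cpp
#include <bits/stdc++.h>
using namespace std;

// p holds a permutation of 1..n. The cars we keep form a run of consecutive
// values a, a+1, ..., b whose positions increase; everything else is moved.
int min_moves(const vector<int>& p) {
    int n = p.size();
    vector<int> pos(n + 1);
    for (int i = 0; i < n; i++) pos[p[i]] = i; // pos[value] = index in train
    int best = 1, cur = 1;
    for (int v = 2; v <= n; v++) {
        cur = (pos[v] > pos[v - 1]) ? cur + 1 : 1; // extend or restart the run
        best = max(best, cur);
    }
    return n - best;
}
```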
|
[
"constructive algorithms",
"greedy"
] | 1,600
| null |
605
|
B
|
Lazy Student
|
Student Vladislav came to his programming exam completely unprepared as usual. He got a question about some strange algorithm on a graph — something that will definitely never be useful in real life. He asked a girl sitting next to him to lend him some cheat papers for this questions and found there the following definition:
The minimum spanning tree $T$ of graph $G$ is such a tree that it contains all the vertices of the original graph $G$, and the sum of the weights of its edges is the minimum possible among all such trees.
Vladislav drew a graph with $n$ vertices and $m$ edges containing no loops and multiple edges. He found one of its minimum spanning trees and then wrote for each edge its weight and whether it is included in the found tree or not. Unfortunately, the piece of paper where the graph was painted is gone and the teacher is getting very angry and demands to see the original graph. Help Vladislav come up with a graph so that the information about the minimum spanning tree remains correct.
|
Let's order the edges by ascending length, in case of a tie placing first the edges we were asked to include in the MST. Let's start adding them to the graph in this order. If we are asked to include the current edge in the MST, use this edge to link vertex 1 with the least currently isolated vertex. If we are asked NOT to include the current edge in the MST, use this edge to link some vertices that are already linked but have no edge between them. To do this it's convenient to keep two pointers on vertices (call them FROM and TO). At the beginning, FROM=2, TO=3. When we are to link two already linked vertices, we add the new edge (FROM, TO) and increment FROM. If FROM becomes equal to TO, we can assume we have already added all possible edges to TO, so we increment TO and set FROM to 2. This means that from this moment we will use non-MST edges to connect TO with all previous vertices starting from 2. If it appears that TO points at a currently isolated vertex, there is no place for a non-MST edge in the graph, so the answer is Impossible. Proceeding in the described way, we'll be adding MST edges as (1,2), \dots , (1,n) and non-MST edges as (2,3), (2,4), (3,4), (2,5), (3,5), (4,5), ...
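A hedged sketch of this construction (assuming edges arrive as (weight, in-MST flag, original index) and `build_graph` is a hypothetical helper returning the endpoints per input edge, or an empty list when impossible):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Edges arrive as {weight, in_mst, original index}; vertices are 1..n.
// Returns, per input edge, the endpoints to output, or {} if impossible.
vector<pair<int, int>> build_graph(int n, vector<array<int, 3>> e) {
    // ascending weight; on ties, MST edges first
    sort(e.begin(), e.end(), [](const array<int, 3>& a, const array<int, 3>& b) {
        return a[0] != b[0] ? a[0] < b[0] : a[1] > b[1];
    });
    vector<pair<int, int>> res(e.size());
    int nxt = 2;           // least currently isolated vertex
    int from = 2, to = 3;  // pointers for non-MST edges
    for (auto& ed : e) {
        if (ed[1]) {                   // MST edge: link 1 with a new vertex
            res[ed[2]] = {1, nxt++};
        } else {                       // non-MST edge between linked vertices
            if (to >= nxt) return {};  // TO is still isolated: impossible
            res[ed[2]] = {from, to};
            if (++from == to) { from = 2; to++; }
        }
    }
    return res;
}
```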
|
[
"constructive algorithms",
"data structures",
"graphs"
] | 1,700
| null |
605
|
C
|
Freelancer's Dreams
|
Mikhail the Freelancer dreams of two things: to become a cool programmer and to buy a flat in Moscow. To become a cool programmer, he needs at least $p$ experience points, and a desired flat in Moscow costs $q$ dollars. Mikhail is determined to follow his dreams and registered at a freelance site.
He has suggestions to work on $n$ distinct projects. Mikhail has already evaluated that the participation in the $i$-th project will increase his experience by $a_{i}$ per day and bring $b_{i}$ dollars per day. As freelance work implies flexible working hours, Mikhail is free to stop working on one project at any time and start working on another project. Doing so, he receives the respective share of experience and money. Mikhail is only trying to become a cool programmer, so he is able to work only on one project at any moment of time.
Find the real value, equal to the minimum number of days Mikhail needs to make his dream come true.
For example, suppose Mikhail is suggested to work on three projects and $a_{1} = 6$, $b_{1} = 2$, $a_{2} = 1$, $b_{2} = 3$, $a_{3} = 2$, $b_{3} = 6$. Also, $p = 20$ and $q = 20$. In order to achieve his aims Mikhail has to work for $2.5$ days on both first and third projects. Indeed, $a_{1}·2.5 + a_{2}·0 + a_{3}·2.5 = 6·2.5 + 1·0 + 2·2.5 = 20$ and $b_{1}·2.5 + b_{2}·0 + b_{3}·2.5 = 2·2.5 + 3·0 + 6·2.5 = 20$.
|
We can let our hero not receive money or experience for some projects. This new opportunity does not change the answer. Suppose the hero spent time T to achieve his dream. On each project he spent some part of this time (possibly zero). So the average speed of gaining experience and money was a linear combination of the speeds on all these projects, weighted by the parts of time spent on each of the projects. Let's build the set P on the plane of points (x, y) such that the hero can gain x experience and y money per time unit. Place points (a[i], b[i]) on the plane. Add also two points (max(a[i]), 0) and (0, max(b[i])). All these points are certainly included in P. Find their convex hull. After that, any point inside or on the border of the convex hull corresponds to some linear combination of projects. Now we should select the point which the hero should use as the average speed of gaining resources during the whole time of achieving his dream. This point should be non-strictly inside the convex hull. The dream is realized if we get to point (A,B). The problem lets us end up above or to the right of it, but doing so is no easier than reaching (A,B) itself. So let's direct a ray from (0,0) through (A,B) and find the farthest point where this ray is still inside our convex hull. This point corresponds to the largest available speed of gaining resources in the direction of point (A,B). The coordinates of this point are the speeds of gaining resources. To find the point, we intersect the ray with the convex hull.
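Since the optimal speed lies on an edge of the hull, the hero never needs more than two projects; a hedged O(n^2) sketch over pairs (hypothetical helper `min_days`, slower than the editorial's hull-and-ray method but computing the same value):

```cpp
#include <bits/stdc++.h>
using namespace std;

// pr[i] = (a_i, b_i): experience and money gained per day on project i.
// The optimal time split uses at most two projects, so checking every pair
// (and every single project) suffices for small n.
double min_days(double p, double q, const vector<pair<double, double>>& pr) {
    double best = 1e18;
    int n = pr.size();
    for (int i = 0; i < n; i++) {
        auto [a, b] = pr[i];
        best = min(best, max(p / a, q / b)); // project i alone
        for (int j = i + 1; j < n; j++) {
            auto [c, d] = pr[j];
            double det = a * d - c * b;        // Cramer's rule for the 2x2 system
            if (fabs(det) < 1e-12) continue;   // parallel speeds: singles cover it
            double t1 = (p * d - q * c) / det; // days on project i
            double t2 = (q * a - p * b) / det; // days on project j
            if (t1 >= 0 && t2 >= 0) best = min(best, t1 + t2);
        }
    }
    return best;
}
```

On the example from the statement (projects (6,2), (1,3), (2,6) with p = q = 20), the pair of the first and third projects gives 2.5 + 2.5 = 5 days.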
|
[
"geometry"
] | 2,400
| null |
605
|
D
|
Board Game
|
You are playing a board card game. In this game the player has two characteristics, $x$ and $y$ — the white magic skill and the black magic skill, respectively. There are $n$ spell cards lying on the table, each of them has four characteristics, $a_{i}$, $b_{i}$, $c_{i}$ and $d_{i}$. In one move a player can pick one of the cards and cast the spell written on it, but only if first two of it's characteristics meet the requirement $a_{i} ≤ x$ and $b_{i} ≤ y$, i.e. if the player has enough magic skill to cast this spell. However, after casting the spell the characteristics of a player change and become equal to $x = c_{i}$ and $y = d_{i}$.
At the beginning of the game both characteristics of a player are equal to zero. The goal of the game is to cast the $n$-th spell. Your task is to make it in as few moves as possible. You are allowed to use spell in any order and any number of times (for example, you may not use some spells at all).
|
Consider n vectors starting at points (a[i], b[i]) and ending at points (c[i], d[i]). Run BFS. On each of its stages we must be able to perform the following operation: get the set of vectors starting inside the rectangle 0 <= x <= c[i], 0 <= y <= d[i] and never consider these vectors again. It can be managed like this. Compress the x-coordinates. For each x we hold the list of vectors whose first coordinate is x. Create a segment tree indexed by the first coordinate with the second coordinate as value. The segment tree must be able to find the index of the minimum on a segment and to set the value at a point. Now suppose we have to find all the vectors with first coordinate from 0 to x and second coordinate from 0 to y. Let's find the index of the minimum in the segment tree for the segment [0, x]. This minimum points us to the vector whose first coordinate is that index and whose second coordinate is the value of the minimum. Remove it from the list of vectors (also adding it to the BFS queue) and set, at this index of the segment tree, the second coordinate of the next vector with the same first coordinate. Continue this way while the minimum on the segment is at most y. So, on each step we find the list of not yet visited vectors in the rectangle, and each vector is considered only once, after which it is deleted from the data structures.
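A hedged sketch of just the extraction primitive described above (hypothetical `Collector` over points grouped by compressed x; `pop_rect(qx, qy)` reports and deletes every unused point with x <= qx and y <= qy):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Segment tree over x-indices storing (smallest unused y at that x, x);
// "pop everything in the rectangle" repeatedly extracts the minimum.
struct Collector {
    int n;
    vector<vector<int>> ys;     // ys[x]: unused y's, descending (back = min)
    vector<pair<int, int>> seg; // iterative tree of (min y, x index)
    Collector(vector<vector<int>> pts)
        : n(pts.size()), ys(move(pts)), seg(2 * n, {INT_MAX, -1}) {
        for (int x = 0; x < n; x++) {
            sort(ys[x].rbegin(), ys[x].rend());
            if (!ys[x].empty()) seg[n + x] = {ys[x].back(), x};
        }
        for (int i = n - 1; i >= 1; i--) seg[i] = min(seg[2 * i], seg[2 * i + 1]);
    }
    void update(int x) { // refresh leaf x after a pop
        seg[n + x] = ys[x].empty() ? make_pair(INT_MAX, -1)
                                   : make_pair(ys[x].back(), x);
        for (int i = (n + x) / 2; i >= 1; i /= 2)
            seg[i] = min(seg[2 * i], seg[2 * i + 1]);
    }
    pair<int, int> query(int l, int r) { // min over x in [l, r]
        pair<int, int> res = {INT_MAX, -1};
        for (l += n, r += n + 1; l < r; l /= 2, r /= 2) {
            if (l & 1) res = min(res, seg[l++]);
            if (r & 1) res = min(res, seg[--r]);
        }
        return res;
    }
    vector<pair<int, int>> pop_rect(int qx, int qy) { // all x<=qx, y<=qy
        vector<pair<int, int>> out;
        while (true) {
            auto [y, x] = query(0, qx);
            if (x < 0 || y > qy) break; // nothing left in the rectangle
            out.push_back({x, y});
            ys[x].pop_back();
            update(x);
        }
        return out;
    }
};
```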
|
[
"data structures",
"dfs and similar"
] | 2,500
| null |
605
|
E
|
Intergalaxy Trips
|
The scientists have recently discovered wormholes — objects in space that allow to travel very long distances between galaxies and star systems.
The scientists know that there are $n$ galaxies within reach. You are in the galaxy number $1$ and you need to get to the galaxy number $n$. To get from galaxy $i$ to galaxy $j$, you need to fly onto a wormhole $(i, j)$ and in exactly one galaxy day you will find yourself in galaxy $j$.
Unfortunately, the required wormhole is not always available. Every galaxy day they disappear and appear at random. However, the state of wormholes does not change within one galaxy day. A wormhole from galaxy $i$ to galaxy $j$ exists during each galaxy day taken separately with probability $p_{ij}$. You can always find out what wormholes exist at the given moment. At each moment you can either travel to another galaxy through one of wormholes that exist at this moment or you can simply wait for one galaxy day to see which wormholes will lead from your current position at the next day.
Your task is to find the expected value of time needed to travel from galaxy $1$ to galaxy $n$, if you act in the optimal way. It is guaranteed that this expected value exists.
|
A vertex is better the smaller the expected number of moves from it to reach the finish. The overall strategy is: if it is possible to move to a vertex better than the current one, you should move to it; otherwise stay in place. Just like in Dijkstra, we will keep estimates of the answer for each vertex, and fix these estimates as the final answer for the vertices one by one, starting from the best vertices to the worst. On the first step we fix vertex N (the answer for it is zero). On the second step - the vertex from which it's easiest to reach N. On the third step - the vertex from which it's easiest to finish, moving to the vertices determined on the first two steps. And so on. On each step we find the vertex which gives the best expected number of moves if we are to move from it only to vertices better than it, and then we fix this expected number - it cannot change from now on. For each not-yet-fixed vertex we can find an estimate of the expected time it takes to reach the finish from it. In this estimate we take into account the knowledge about the vertices we know the answer for. We iterate through vertices in order of non-decreasing answer, so the answer for the vertex being estimated is not better than for the vertices we have already iterated through. Let's write the expression for the expected time of getting to the finish from vertex x, assuming the tactic "move to the best of the i accessible vertices we know the answer for, or stay in place": m(x) = 1 + p(x, v[0]) * ans(v[0]) + (1 - p(x, v[0])) * p(x, v[1]) * ans(v[1]) + (1 - p(x, v[0])) * (1 - p(x, v[1])) * p(x, v[2]) * ans(v[2]) + \dots + (1 - p(x, v[0])) * (1 - p(x, v[1])) * \dots * (1 - p(x, v[i-1])) * m(x). Here m(x) is the estimate for vertex x, p(a,b) the probability of existence of edge (a,b), and ans(v) the known answer for vertex v. Note that m(x) is expressed through itself, because there is a probability of staying in place. We will keep the estimating expression for each vertex in the form m(x) = A[x] * m(x) + B[x]. For each vertex we keep A[x] and B[x]. 
This means that with some probabilities it is possible to move to some better vertex, and this opportunity contributes B[x] to the expected time; also with some probability we have to stay in place, and this probability is A[x] (this is just the coefficient before m(x) in the expression). So, on each step we select one currently non-fixed vertex v with minimal estimate, then fix it and do relaxation from it, refreshing the estimates of the other vertices. When we refresh the estimate of some vertex x, we change its A[x] and B[x]. B[x] is increased by A[x] * p(x,v) * ans(v), where A[x] is the probability that it's not possible to use some vertex better than v, A[x] * p(x,v) is the probability that it's also possible to use vertex v, and ans(v) is the answer we just fixed for vertex v. After that, A[x] is multiplied by (1 - p(x,v)), because staying in place now also requires that it is not possible to move to v. To calculate the value of the estimate for some vertex x, we use the expression m(x) = A[x] * m(x) + B[x] and express m(x) from it: m(x) = B[x] / (1 - A[x]). Exactly this m(x) is the value we keep on the priority queue in our Dijkstra analogue, and exactly m(x) is the value to fix as the final answer for vertex x when this vertex is announced as the vertex with minimal estimate at the start of a step.
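An O(n^2) sketch of this process (hypothetical helper `expected_days`; p[i][j] is the daily probability of wormhole (i, j), the target is the last vertex, and m(x) = B[x] / (1 - A[x]) is the estimate kept for each non-fixed vertex):

```cpp
#include <bits/stdc++.h>
using namespace std;

// A[x] = probability that no already-fixed (better) vertex is reachable on a
// given day; B[x] = 1 + accumulated A * p * ans contributions.
double expected_days(const vector<vector<double>>& p) {
    int n = p.size();
    vector<double> A(n, 1.0), B(n, 1.0), ans(n, 1e18);
    vector<bool> done(n, false);
    ans[n - 1] = 0.0; // answer for the target is 0
    for (int it = 0; it < n; it++) {
        int v = -1;
        double best = 1e18;
        for (int x = 0; x < n; x++) { // pick the non-fixed vertex with min estimate
            if (done[x]) continue;
            double est = (x == n - 1) ? 0.0
                       : (A[x] >= 1.0 - 1e-12 ? 1e18 : B[x] / (1.0 - A[x]));
            if (est < best) { best = est; v = x; }
        }
        if (v < 0) break; // remaining vertices unreachable
        done[v] = true;
        ans[v] = best;
        for (int x = 0; x < n; x++) { // relax all non-fixed vertices through v
            if (done[x] || p[x][v] <= 0) continue;
            B[x] += A[x] * p[x][v] * ans[v];
            A[x] *= 1.0 - p[x][v];
        }
    }
    return ans[0];
}
```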
|
[
"probabilities",
"shortest paths"
] | 2,700
| null |
606
|
A
|
Magic Spheres
|
Carl is a beginner magician. He has $a$ blue, $b$ violet and $c$ orange magic spheres. In one move he can transform two spheres \textbf{of the same color} into one sphere of any other color. To make a spell that has never been seen before, he needs at least $x$ blue, $y$ violet and $z$ orange spheres. Can he get them (possible, in multiple actions)?
|
Let's count how many spheres of each type are lacking to reach the goal. We must do at least that many transformations. Let's also count how many spheres of each type are extra relative to the goal. Every two extra spheres give us the opportunity to do one transformation. So to find out how many transformations can be done from a given type of spheres, one must look at how many extra spheres there are, divide this number by 2 and round down. Let's sum the transformation opportunities over all types of spheres, and sum all the lacks. If there are at least as many transformation opportunities as lacking spheres, the answer is positive. Otherwise, it's negative.
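A minimal sketch of this count (hypothetical helper `can_get`):

```cpp
#include <bits/stdc++.h>
using namespace std;

// a, b, c: spheres owned; x, y, z: spheres required.
bool can_get(long long a, long long b, long long c,
             long long x, long long y, long long z) {
    long long have[3] = {a, b, c}, need[3] = {x, y, z};
    long long moves = 0, lack = 0;
    for (int i = 0; i < 3; i++) {
        if (have[i] >= need[i])
            moves += (have[i] - need[i]) / 2; // two extras -> one transformation
        else
            lack += need[i] - have[i];
    }
    return moves >= lack;
}
```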
|
[
"implementation"
] | 1,200
| null |
606
|
B
|
Testing Robots
|
The Cybernetics Failures (CF) organisation made a prototype of a bomb technician robot. To find the possible problems it was decided to carry out a series of tests. At the beginning of each test the robot prototype will be placed in cell $(x_{0}, y_{0})$ of a rectangular squared field of size $x × y$, after that a mine will be installed into one of the squares of the field. It is supposed to conduct exactly $x·y$ tests, each time a mine is installed into a square that has never been used before. The starting cell of the robot always remains the same.
After placing the objects on the field the robot will have to run a sequence of commands given by string $s$, consisting only of characters 'L', 'R', 'U', 'D'. These commands tell the robot to move one square to the left, to the right, up or down, or stay idle if moving in the given direction is impossible. As soon as the robot fulfills all the sequence of commands, it will blow up due to a bug in the code. But if at some moment of time the robot is at the same square with the mine, it will also blow up, but not due to a bug in the code.
Moving to the left decreases coordinate $y$, and moving to the right increases it. Similarly, moving up decreases the $x$ coordinate, and moving down increases it.
The tests can go on for very long, so your task is to predict their results. For each $k$ from $0$ to $length(s)$ your task is to find in how many tests the robot will run exactly $k$ commands before it blows up.
|
Let's prepare a matrix where for each cell we hold the moment at which the robot visits it for the first time while moving along its route. To find these values, let's follow the whole route. Each time we move to a cell we have never visited before, we save into the corresponding cell of the matrix how many actions have been done so far. Let's prepare an array of counters in which, for each possible number of actions, we hold how many tests there are in which the robot explodes after exactly this number of actions. Now let's iterate through all possible cells where the mine could be placed. For each cell, if it was never visited by the robot, add one test with N actions, where N is the total length of the route. If it was visited, add one test with as many actions as written in this cell (the moment of time when it was first visited). Indeed, if there is a mine in this cell, the robot explodes right after visiting it for the first time. The array of counters is now the answer to the problem.
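A sketch of this simulation (hypothetical helper `explosion_counts`; coordinates are 1-based as in the statement and the returned vector has length |s| + 1):

```cpp
#include <bits/stdc++.h>
using namespace std;

// first[r][c] = step at which the robot first stands on (r, c), or -1.
// cnt[k] = number of mine placements that blow the robot up after exactly
// k commands (k = |s| covers both unvisited mines and the final bug).
vector<long long> explosion_counts(int x, int y, int x0, int y0, const string& s) {
    vector<vector<int>> first(x, vector<int>(y, -1));
    int r = x0 - 1, c = y0 - 1; // convert to 0-based
    first[r][c] = 0;
    for (int k = 0; k < (int)s.size(); k++) {
        if (s[k] == 'L' && c > 0) c--;
        else if (s[k] == 'R' && c < y - 1) c++;
        else if (s[k] == 'U' && r > 0) r--;
        else if (s[k] == 'D' && r < x - 1) r++;
        if (first[r][c] < 0) first[r][c] = k + 1;
    }
    vector<long long> cnt(s.size() + 1, 0);
    for (int i = 0; i < x; i++)
        for (int j = 0; j < y; j++)
            cnt[first[i][j] < 0 ? s.size() : first[i][j]]++;
    return cnt;
}
```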
|
[
"implementation"
] | 1,600
| null |
607
|
A
|
Chain Reaction
|
There are $n$ beacons located at distinct positions on a number line. The $i$-th beacon has position $a_{i}$ and power level $b_{i}$. When the $i$-th beacon is activated, it destroys all beacons to its left (direction of decreasing coordinates) within distance $b_{i}$ inclusive. The beacon itself is not destroyed however. Saitama will activate the beacons one at a time from right to left. If a beacon is destroyed, it cannot be activated.
Saitama wants Genos to add a beacon \textbf{strictly to the right} of all the existing beacons, with any position and any power level, such that the least possible number of beacons are destroyed. Note that Genos's placement of the beacon means it will be the first beacon activated. Help Genos by finding the minimum number of beacons that could be destroyed.
|
It turns out that it is actually easier to compute the complement of the problem - the maximum number of objects not destroyed. We can subtract this from the total number of objects to obtain our final answer. We can solve this problem using dynamic programming. Let $dp[x]$ be the maximum number of objects not destroyed in the range $[0, x]$ given that position $x$ is unaffected by an explosion. We can compute $dp[x]$ using the following recurrence: $dp[x]=\begin{cases}dp[x-1]&\text{if there is no object at position }x\\1&\text{if the object at }x\text{ has power }b\geq x\\dp[x-b-1]+1&\text{if the object at }x\text{ has power }b<x\end{cases}$ where $b$ is the power level of the object at position $x$. Now, if we can place an object to the right of all objects with any power level, we can destroy some suffix of the (sorted list of) objects. The answer is thus the maximum number of objects not destroyed given that we destroy some suffix of the objects first. This can be easily evaluated as $\operatorname*{max}_{1\leq i\leq n}dp[a_{i}]$. Since this is the complement of our answer, our final answer is actually $n-\operatorname*{max}_{1\leq i\leq n}dp[a_{i}]$. Time Complexity - $O(max(a_{i}))$, Memory Complexity - $O(max(a_{i}))$
|
[
"binary search",
"dp"
] | 1,600
|
"#include <iostream>\nusing namespace std;\nconst int maxn = 1e6 + 5;\n\nint n, b[maxn], dp[maxn];\n\nint main() {\n ios_base::sync_with_stdio(0);\n cin.tie(NULL);\n cin >> n;\n for (int i = 0, a; i < n; i++) {\n cin >> a;\n cin >> b[a];\n }\n if (b[0] > 0) {\n dp[0] = 1;\n }\n int mx = 0;\n for (int i = 1; i < maxn; i++) {\n if (b[i] == 0) {\n dp[i] = dp[i - 1];\n } else {\n if (b[i] >= i) {\n dp[i] = 1;\n } else {\n dp[i] = dp[i - b[i] - 1] + 1;\n }\n }\n if (dp[i] > mx) {\n mx = dp[i];\n }\n }\n cout << n - mx << '\\n';\n return 0;\n}\n"
|
607
|
B
|
Zuma
|
Genos recently installed the game Zuma on his phone. In Zuma there exists a line of $n$ gemstones, the $i$-th of which has color $c_{i}$. The goal of the game is to destroy all the gemstones in the line as quickly as possible.
In one second, Genos is able to choose exactly one continuous substring of colored gemstones that is a palindrome and remove it from the line. After the substring is removed, the remaining gemstones shift to form a solid line again. What is the minimum number of seconds needed to destroy the entire line?
Let us remind that a string (or substring) is called a palindrome if it reads the same backwards and forwards. In our case this means the color of the first gemstone is equal to the color of the last one, the color of the second gemstone is equal to the color of the next to last and so on.
|
We use dp on contiguous ranges to calculate the answer. Let $D[i][j]$ denote the number of seconds it takes to collapse some range $[i, j]$. Let us work out a transition for this definition. Consider the left-most gemstone. This gemstone will either be destroyed individually or as part of a non-singular range. In the first case, we destroy the left-most gemstone and reduce to the subproblem $[i + 1, j]$. In the second case, notice that the left-most gemstone will match up with some gemstone to its right. We can iterate through every gemstone with the same color as the left-most (let $k$ be the index of this matching gemstone) and reduce to two subproblems $[i + 1, k - 1]$ and $[k + 1, j]$. We can reduce to the subproblem $[i + 1, k - 1]$ because we can just remove gemstones $i$ and $k$ with the last removal of $[i + 1, k - 1]$. We must also make a special case for when the first two elements in a range are equal and consider the subproblem $[i + 2, j]$. Here is a formalization of the dp (with $D[i][j] = 0$ whenever $i > j$): $D[i][j] = 1$ if $i = j$; otherwise $D[i][j]$ is the minimum of $1 + D[i + 1][j]$, of $1 + D[i + 2][j]$ if $c_{i} = c_{i + 1}$, and of $D[i + 1][k - 1] + D[k + 1][j]$ over all $k$ with $i + 2 \le k \le j$ and $c_{k} = c_{i}$. Why is this dp correct (see also http://codeforces.com/blog/entry/22256?#comment-268876)? Notice that the recursive version of our dp will come across the optimal solution in its search. Moreover, every path in the recursive search tree corresponds to some valid sequence of deletions.
Since our dp only searches across valid deletions and will at some point come across the optimal sequence of deletions, the answer it produces will be optimal. Time Complexity - $O(n^{3})$, Space Complexity - $O(n^{2})$
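A direct memoised implementation of this recurrence in Python (names are ours), useful as a reference for the $O(n^{3})$ table version:

```python
from functools import lru_cache

def zuma(c):
    """Minimum seconds to delete the whole line by repeatedly removing
    palindromic substrings: straight interval dp on ranges [i, j]."""
    n = len(c)

    @lru_cache(maxsize=None)
    def d(i, j):
        if i > j:
            return 0
        if i == j:
            return 1
        best = 1 + d(i + 1, j)                   # delete c[i] on its own
        if c[i] == c[i + 1]:
            best = min(best, 1 + d(i + 2, j))    # delete the equal pair together
        for k in range(i + 2, j + 1):
            if c[k] == c[i]:
                # c[i] and c[k] vanish with the last removal inside (i, k)
                best = min(best, d(i + 1, k - 1) + d(k + 1, j))
        return best

    return d(0, n - 1)
```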
|
[
"dp"
] | 1,900
|
"#include<algorithm>\n#include<cstdio>\nusing namespace std;\nconst int maxn=505;\n\nint n, a[maxn], d[maxn][maxn];\n\nvoid read() {\n\tscanf(\"%d\", &n);\n\tfor (int i = 0; i < n; i++) {\n\t\tscanf(\"%d\", a + i);\n\t}\n}\n\nvoid fun() {\n\tfor (int len = 1; len <= n; len++) {\n\t\tfor (int beg = 0, end = len - 1; end < n; beg++, end++) {\n\t\t\tif (len == 1) {\n\t\t\t\td[beg][end] = 1;\n\t\t\t} else {\n\t\t\t\td[beg][end] = 1 + d[beg + 1][end];\n\t\t\t\tif (a[beg] == a[beg + 1]) {\n\t\t\t\t\td[beg][end] = min(1 + d[beg + 2][end], d[beg][end]);\n\t\t\t\t}\n\t\t\t\tfor (int match = beg + 2; match <= end; match++) {\n\t\t\t\t\tif (a[beg] == a[match]) {\n\t\t\t\t\t\td[beg][end] = min(d[beg + 1][match - 1] + d[match + 1][end], d[beg][end]);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\nint main() {\n\tread();\n\tfun();\n\tprintf(\"%d\\n\", d[0][n-1]);\n\treturn 0;\n}\n"
|
607
|
C
|
Marbles
|
In the spirit of the holidays, Saitama has given Genos two grid paths of length $n$ (a weird gift even by Saitama's standards). A grid path is an ordered sequence of neighbouring squares in an infinite grid. Two squares are neighbouring if they share a side.
One example of a grid path is $(0, 0) → (0, 1) → (0, 2) → (1, 2) → (1, 1) → (0, 1) → ( - 1, 1)$. Note that squares in this sequence might be repeated, i.e. path has self intersections.
Movement within a grid path is restricted to adjacent squares within the sequence. That is, from the $i$-th square, one can \textbf{only move} to the $(i - 1)$-th or $(i + 1)$-th squares of this path. Note that there is only a single valid move from the first and last squares of a grid path. Also note, that even if there is some $j$-th square of the path that coincides with the $i$-th square, only moves to $(i - 1)$-th and $(i + 1)$-th squares are available. For example, from the second square in the above sequence, one can only move to either the first or third squares.
To ensure that movement is not ambiguous, the two grid paths will not have an alternating sequence of three squares. For example, a contiguous subsequence $(0, 0) → (0, 1) → (0, 0)$ \textbf{cannot occur} in a valid grid path.
One marble is placed on the first square of each grid path. Genos wants to get both marbles to the last square of each grid path. However, there is a catch. Whenever he moves one marble, the other marble will copy its movement if possible. For instance, if one marble moves east, then the other marble will try and move east as well. By try, we mean if moving east is a valid move, then the marble will move east.
Moving north increases the second coordinate by $1$, while moving south decreases it by $1$. Similarly, moving east increases first coordinate by $1$, while moving west decreases it.
Given these two valid grid paths, Genos wants to know if it is possible to move both marbles to the ends of their respective paths. That is, if it is possible to move the marbles such that both marbles rest on the last square of their respective paths.
|
Define the reverse of a sequence as the sequence of moves needed to negate the movement. For example, EEE and WWW are reverses, and WWSSSEE and WWNNNEE are reverses. I claim it is impossible to get both balls to the end if and only if some suffix of the first sequence is the reverse of a suffix of the second sequence. Let us prove the forward case first, that if two suffixes are reverses, then it is impossible to get both balls to the end. Consider a sequence and its reverse, and note that they share the same geometric structure, except that the direction of travel is opposite. Now imagine laying the two grid paths over each other so that their reverse suffixes are laying on top of each other. It becomes apparent that in order to move both balls to their ends, they must cross over at some point within the confines of the suffix. However, this is impossible under the movement rules, as in order for this to happen, the two balls need to move in different directions at a single point in time, which is not allowed. Now let us prove the backwards case: that if no suffixes are reverses, then it is possible for both balls to reach the end. There is a simple algorithm that achieves this goal, which is to move the first ball to its end, then move the second ball to its end, then move the first ball to its end, and so on. Let's call each of these "move ball $x$ to its end" operations one step of the algorithm. After every step, the combined distance of both balls from the start is strictly increasing. Without loss of generality, consider a step where you move the first ball to the end; this increases the distance of the first ball by some value $k$. However, the second ball can move back at most $k - 1$ steps (only a reverse sequence could move it back $k$ steps), so the minimum change in distance is $+ 1$. Hence, at some point the combined distance will increase to $2(n - 1)$ and both balls will be at the end. 
In order to check if suffixes are reverses of each other, we can reverse the first sequence and see if one of its prefixes matches a suffix of the second sequence. This can be done using string hashing or KMP in linear time. Time Complexity - $O(n)$, Memory Complexity - $O(n)$
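A sketch of this check in Python using the prefix function (names are ours; unlike the Java solution below, a `#` separator is inserted so the matched length can never spill across the two strings):

```python
def can_finish(s1, s2):
    """True iff both marbles can reach the ends, i.e. no nonempty suffix
    of s1 is the reverse of a suffix of s2 (KMP prefix-function check)."""
    flip = {'N': 'S', 'S': 'N', 'E': 'W', 'W': 'E'}
    rev = ''.join(flip[ch] for ch in reversed(s1))   # the reverse of s1
    t = rev + '#' + s2                               # '#' never occurs in moves
    p = [0] * len(t)                                 # prefix function of t
    for i in range(1, len(t)):
        k = p[i - 1]
        while k and t[i] != t[k]:
            k = p[k - 1]
        if t[i] == t[k]:
            k += 1
        p[i] = k
    # a nonzero value at the end means a prefix of rev(s1) is a suffix of s2
    return p[-1] == 0
```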
|
[
"hashing",
"strings"
] | 2,500
|
"import java.io.*;\n\npublic class Main {\n final static String dirs = \"NSEW\";\n\n public static void solve(Input in, PrintWriter out) throws IOException {\n int n = in.nextInt() - 1;\n String s1 = in.next();\n String s2 = in.next();\n StringBuilder sb = new StringBuilder();\n for (int i = 0; i < n; ++i) {\n sb.append(dirs.charAt(dirs.indexOf(s1.charAt(n - i - 1)) ^ 1));\n }\n sb.append(s2);\n int[] p = new int[2 * n];\n for (int i = 1; i < 2 * n; ++i) {\n p[i] = p[i - 1];\n while (p[i] != 0 && sb.charAt(i) != sb.charAt(p[i])) {\n p[i] = p[p[i] - 1];\n }\n if (sb.charAt(i) == sb.charAt(p[i])) {\n p[i]++;\n }\n }\n out.println(p[2 * n - 1] == 0 ? \"YES\" : \"NO\");\n }\n\n public static void main(String[] args) throws IOException {\n PrintWriter out = new PrintWriter(System.out);\n solve(new Input(new BufferedReader(new InputStreamReader(System.in))), out);\n out.close();\n }\n\n static class Input {\n BufferedReader in;\n StringBuilder sb = new StringBuilder();\n\n public Input(BufferedReader in) {\n this.in = in;\n }\n\n public Input(String s) {\n this.in = new BufferedReader(new StringReader(s));\n }\n\n public String next() throws IOException {\n sb.setLength(0);\n while (true) {\n int c = in.read();\n if (c == -1) {\n return null;\n }\n if (\" \\n\\r\\t\".indexOf(c) == -1) {\n sb.append((char)c);\n break;\n }\n }\n while (true) {\n int c = in.read();\n if (c == -1 || \" \\n\\r\\t\".indexOf(c) != -1) {\n break;\n }\n sb.append((char)c);\n }\n return sb.toString();\n }\n\n public int nextInt() throws IOException {\n return Integer.parseInt(next());\n }\n\n public long nextLong() throws IOException {\n return Long.parseLong(next());\n }\n\n public double nextDouble() throws IOException {\n return Double.parseDouble(next());\n }\n }\n}\n"
|
607
|
D
|
Power Tree
|
Genos and Saitama went shopping for Christmas trees. However, a different type of tree caught their attention, the exalted Power Tree.
A Power Tree starts out as a single root vertex indexed $1$. A Power Tree grows through a magical phenomenon known as an update. In an update, a single vertex is added to the tree as a child of some other vertex.
Every vertex in the tree (the root and all the added vertices) has some value $v_{i}$ associated with it. The power of a vertex is defined as the strength of the multiset composed of the value associated with this vertex ($v_{i}$) and the powers of its direct children. The strength of a multiset is defined as the sum of all elements in the \textbf{multiset} multiplied by the number of elements in it. Or in other words for some \textbf{multiset} $S$:
\[
S t r e n g t h(S)=|S|\cdot\sum_{d\in S}d
\]
Saitama knows the updates that will be performed on the tree, so he decided to test Genos by asking him queries about the tree during its growth cycle.
An update is of the form $1 p v$, and adds a new vertex with value $v$ as a child of vertex $p$.
A query is of the form $2 u$, and asks for the power of vertex $u$.
Please help Genos respond to these queries modulo $10^{9} + 7$.
|
Let's solve a restricted version of the problem where all queries are about the root. First however, let us define some notation. In this editorial, we will use $d(x)$ to denote the number of children of vertex $x$. If there is an update involved, $d(x)$ refers to the value prior to the update. To deal with these queries, notice that each vertex within the tree has some contribution $c_{i}$ to the root power. This contribution is an integer multiple $m_{i}$ of each vertex's value $v_{i}$, such that $c_{i} = m_{i} \cdot v_{i}$. If we sum the contributions of every vertex, we get the power of the root. To deal with updates, notice that adding a vertex $u$ as a child of a vertex $p$ scales the multiplier of every vertex in $p$'s subtree by a factor of $\frac{d(p)+2}{d(p)+1}$. As for the contribution of $u$, notice that $m_{u} = m_{p}$. Now, in order to handle both queries and updates efficiently, we need a fast way to sum all contributions, a way to scale contributions in a subtree, and a way to add new vertices. This sounds like a job for ... a segment tree! We all know segment trees hate insertions, so instead of inserting new vertices, we pre-build the tree with initial values $0$, updating values instead of inserting new vertices. In order to efficiently support subtree modification, we construct a segment tree on the preorder walk of the tree, so that every subtree corresponds to a contiguous segment within the segment tree. This segment tree will store the contributions of each vertex and needs to support range-sum-query, range-multiply-update, and point-update (updating a single element). The details of implementing such a segment tree are left as an exercise to the reader. Armed with this segment tree, queries become a single range-sum. Scaling the contribution in a subtree becomes a range-multiply (we don't need to worry about multiplying un-added vertices because they are set to $0$). 
And adding a new vertex becomes a range-sum-query to retrieve the contribution of the parent, and then a point-set to set the contribution of the added vertex. Finally, to solve the full version of the problem, notice that the power of a non-root vertex $w$ is a scaled down range sum in the segment tree. The value of the scale is $\frac{d(w)+1}{m_{w}}$, the proof of which is left as an exercise to the reader. Time Complexity - $O(q\cdot\log q)$, Space Complexity - $O(q)$
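The power definition itself is easy to state as a brute-force reference (a sketch with our own names), which is handy for checking the segment-tree solution and the multiplier claims on small trees:

```python
def power(values, children, v):
    """Direct evaluation of the definition: the multiset S for vertex v
    holds v's own value plus the powers of its direct children, and
    Strength(S) = |S| * sum(S)."""
    s = [values[v]] + [power(values, children, c) for c in children[v]]
    return len(s) * sum(s)
```

For example, a root 1 (value 2) with leaf children 2 (value 3) and 3 (value 1) has multiset $\{2, 3, 1\}$ and power $3 \cdot 6 = 18$.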
|
[
"data structures",
"trees"
] | 2,600
|
"//tonynater\n\n#include <iostream>\n#include <vector>\n\nusing namespace std;\n\ntypedef long long ll;\n\nconst ll mod = 1000000007;\n\nconst int maxn = 200010;\n\nint v[maxn], q, n = 1, qstore[maxn][2]; //input\n\nint parent[maxn]; //tree\nvector<int> children[maxn]; //tree\n\nint curtravpos, traversal[maxn], bounds[maxn][2]; //traversal\n\nint activechildren[maxn]; //processing\n\nvoid dfs(int u) {\n traversal[curtravpos] = u;\n bounds[u][0] = curtravpos;\n ++curtravpos;\n for(auto v : children[u]) {\n dfs(v);\n }\n bounds[u][1] = curtravpos-1;\n}\n\nstruct segment_tree {\n int b, e;\n segment_tree *lst, *rst;\n \n int sum;\n ll mult;\n \n segment_tree(int _b, int _e) {\n b = _b;\n e = _e;\n sum = 0;\n mult = 1;\n lst = rst = NULL;\n if(b < e) {\n lst = new segment_tree(b,(b+e)/2);\n rst = new segment_tree((b+e)/2+1,e);\n merge();\n }\n }\n \n void prop() {\n if(mult > 1) {\n sum = sum*mult%mod;\n if(lst != NULL) {\n lst->mult = lst->mult*mult%mod;\n rst->mult = rst->mult*mult%mod;\n }\n mult = 1;\n }\n }\n \n void merge() {\n if(lst != NULL) {\n sum = (lst->sum+rst->sum)%mod;\n }\n }\n \n void rmult(int l, int r, int m) {\n if(m == 1) {\n return;\n }\n prop();\n if(e < l || r < b) {\n return;\n }else if(l <= b && e <= r) {\n mult = m;\n prop();\n }else {\n lst->rmult(l,r,m);\n rst->rmult(l,r,m);\n merge();\n }\n }\n \n void set(int idx, int v) {\n prop();\n if(b == idx && idx == e) {\n sum = v;\n }else if(idx <= (b+e)/2) {\n lst->set(idx,v);\n rst->prop();\n merge();\n }else {\n lst->prop();\n rst->set(idx,v);\n merge();\n }\n }\n \n int get(int l, int r) {\n prop();\n if(e < l || r < b) {\n return 0;\n }else if(l <= b && e <= r) {\n return sum;\n }else {\n int lsum = lst->get(l,r);\n int rsum = rst->get(l,r);\n return (lsum+rsum)%mod;\n }\n }\n};\n\nll binpow(ll base, int exp) {\n if(exp == 0) {\n return 1;\n }else {\n ll half = binpow(base,exp/2);\n ll full = half*half%mod;\n if(exp%2) {\n full = full*base%mod;\n }\n return full;\n }\n}\n\nint main() {\n 
ios_base::sync_with_stdio(0);\n cin.tie(NULL);\n \n cin >> v[0] >> q;\n \n for(int i = 0; i < q; i++) {\n cin >> qstore[i][0] >> qstore[i][1];\n --qstore[i][1];\n if(qstore[i][0] == 1) {\n parent[n] = qstore[i][1];\n cin >> v[n];\n children[parent[n]].push_back(n);\n qstore[i][1] = n;\n ++n;\n }\n }\n \n dfs(0);\n \n segment_tree *st_root = new segment_tree(0,n-1);\n st_root->set(0,v[0]);\n \n for(int i = 0; i < q; i++) {\n int x = qstore[i][1];\n if(qstore[i][0] == 1) {\n int par = parent[x];\n int lb = bounds[par][0], rb = bounds[par][1];\n int curchildren = ++activechildren[par];\n int mult = binpow(curchildren,mod-2)*(curchildren+1)%mod;\n st_root->rmult(lb,rb,mult);\n \n int parmultfactor = st_root->get(lb,lb)*binpow(v[par],mod-2)%mod;\n int childval = ll(parmultfactor)*v[x]%mod;\n st_root->set(bounds[x][0],childval);\n }else {\n int rootres = st_root->get(bounds[x][0],bounds[x][1]);\n int scale = 1;\n if(x > 0) {\n scale = st_root->get(bounds[parent[x]][0],bounds[parent[x]][0])\n *binpow(v[parent[x]],mod-2)%mod;\n }\n int res = rootres*binpow(scale,mod-2)%mod;\n cout << res << '\\n';\n }\n }\n \n return 0;\n}\n"
|
607
|
E
|
Cross Sum
|
Genos has been given $n$ distinct lines on the Cartesian plane. Let $\mathbf{Z}$ be a list of intersection points of these lines. A single point might appear multiple times in this list if it is the intersection of multiple pairs of lines. The order of the list does not matter.
Given a query point $(p, q)$, let ${\mathcal{D}}$ be the corresponding list of distances of all points in $\mathbf{Z}$ to the query point. Distance here refers to euclidean distance. As a refresher, the euclidean distance between two points $(x_{1}, y_{1})$ and $(x_{2}, y_{2})$ is $\sqrt{(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2}}$.
Genos is given a point $(p, q)$ and a positive integer $m$. He is asked to find the sum of the $m$ smallest elements in ${\mathcal{D}}$. Duplicate elements in ${\mathcal{D}}$ are treated as separate elements. Genos is intimidated by Div1 E problems so he asked for your help.
|
The problem boils down to summing the $k$ closest intersections to a given query point. We binary search on the distance $d$ of the $k$-th closest point. For a given distance $d$, the number of points within distance $d$ of our query point is equivalent to the number of pairwise intersections that lie within a circle of radius $d$ centered at our query point. To count the number of intersections, we can find the intersection points of the lines with the circle and sort them. Two lines which intersect inside the circle will have interleaved intersection points on the circle (i.e. of the form ABAB, where the As and Bs are the circle intersection points of the two lines). Counting the number of such interleaved pairs can be done with a sweep over the sorted points. Once we have $d$, we once again draw a circle of radius $d$, but this time we loop through all points inside it in $O(k)$ instead of merely counting them. It may happen that there are $I < k$ intersections inside the circle of radius $d$ but also $I' > k$ inside a circle of radius $d + \epsilon$. In this case, we should calculate the answer for $d$ and add $d \cdot (k - I)$. Time Complexity - $O(n\cdot\log n\cdot\log d+k)$, Space Complexity - $O(n)$
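The key counting fact - two chords of the circle cross iff their endpoints interleave on it (the ABAB pattern) - can be sketched directly in Python (names are ours; a brute-force quadratic pair check, where the real solution replaces it with the sorted sweep and a Fenwick tree):

```python
def count_chord_intersections(chords):
    """chords: list of (angle1, angle2) endpoint pairs on a circle
    (angles given as comparable numbers along the circumference).
    Two chords cross iff exactly one endpoint of one chord lies
    strictly between the endpoints of the other."""
    def crosses(c1, c2):
        a, b = sorted(c1)
        inside = sum(a < t < b for t in c2)  # endpoints of c2 inside arc (a, b)
        return inside == 1
    n = len(chords)
    return sum(crosses(chords[i], chords[j])
               for i in range(n) for j in range(i + 1, n))
```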
|
[
"binary search",
"geometry"
] | 3,300
|
"#include <algorithm>\n#include <cmath>\n#include <iomanip>\n#include <iostream>\n#include <vector>\nusing namespace std;\nconst int maxn = 5e4 + 5;\nconst long double pi = acos((long double)(-1));\n\nint n, m, beg[maxn], fen[2 * maxn], pos[maxn][2], prv[2 * maxn], nxt[2 * maxn];\nlong double x, y, a[maxn], b[maxn];\nvector<pair<long double, int> > circx, span;\n\nvoid computeCircleIntersections(long double r) {\n circx.clear();\n for (int i = 0; i < n; i++) {\n long double ca = a[i], cb = b[i];\n long double discrim = ca * ca * r * r - cb * cb + r * r;\n if (discrim > 0) {\n long double sq = sqrt(discrim);\n long double x1 = (-sq - ca * cb) / (ca * ca + 1);\n long double x2 = ( sq - ca * cb) / (ca * ca + 1);\n long double y1 = ca * x1 + cb, y2 = ca * x2 + cb;\n long double a1 = atan2(y1, x1), a2 = atan2(y2, x2);\n circx.push_back(make_pair(a1, i));\n circx.push_back(make_pair(a2, i));\n }\n }\n sort(circx.begin(), circx.end());\n}\n\nvoid add(int idx, int d) {\n ++idx;\n while (idx < 2 * maxn) {\n fen[idx] += d;\n idx += (idx & -idx);\n }\n}\n\nint sum(int idx) {\n ++idx;\n int ret = 0;\n while (idx > 0) {\n ret += fen[idx];\n idx -= (idx & -idx);\n }\n return ret;\n}\n\nlong long countCircleIntersections() {\n long long ret = 0;\n for (int i = 0, tot = 0; i < circx.size(); i++) {\n int idx = circx[i].second;\n if (beg[idx] == -1) {\n ++tot;\n beg[idx] = i;\n add(i, 1);\n } else {\n --tot;\n add(beg[idx], -1);\n ret += tot - sum(beg[idx]);\n beg[idx] = -1;\n }\n }\n return ret;\n}\n\nvoid initCyclicList() {\n for (int i = 0; i < circx.size(); i++) {\n prv[i] = (i + circx.size() - 1) % circx.size();\n nxt[i] = (i + 1) % circx.size();\n }\n}\n\nvoid deleteCyclicListElement(int idx) {\n int pidx = prv[idx];\n int nidx = nxt[idx];\n prv[nidx] = pidx;\n nxt[pidx] = nidx;\n}\n\nlong double intersectionDistance(int idx1, int idx2) {\n --m;\n long double ix = (b[idx1] - b[idx2]) / (a[idx2] - a[idx1]);\n long double iy = a[idx1] * ix + b[idx1];\n long double dist = sqrt(ix 
* ix + iy * iy);\n return dist;\n}\n\nlong double sumCyclicList(int start, int end, int *it) {\n long double sum = 0;\n for (int idx = it[start]; idx != end; idx = it[idx]) {\n sum += intersectionDistance(circx[start].second, circx[idx].second);\n }\n return sum;\n}\n\nlong double sumCircleIntersections(long double r) {\n computeCircleIntersections(r);\n if (countCircleIntersections() > m) {\n return 0; //degenerate case: a lot of intersections on query point\n }\n for (int i = 0; i < circx.size(); i++) {\n int idx = circx[i].second;\n if (beg[idx] == -1) {\n beg[idx] = i;\n pos[idx][0] = i;\n } else {\n pos[idx][1] = i;\n long double sp = circx[i].first - circx[beg[idx]].first;\n if (2 * pi - sp < sp) {\n sp = 2 * pi - sp;\n int tmp = pos[idx][0];\n pos[idx][0] = pos[idx][1];\n pos[idx][1] = tmp;\n }\n span.push_back(make_pair(sp, idx));\n }\n }\n sort(span.begin(), span.end());\n initCyclicList();\n long double sum = 0;\n for (int i = 0; i < span.size(); i++) {\n int idx = span[i].second;\n if (pos[idx][0] < pos[idx][1]) {\n sum += sumCyclicList(pos[idx][0], pos[idx][1], nxt);\n } else {\n sum += sumCyclicList(pos[idx][0], pos[idx][1], nxt);\n }\n deleteCyclicListElement(pos[idx][0]);\n deleteCyclicListElement(pos[idx][1]);\n }\n return sum + m * r;\n}\n\nint main() {\n ios_base::sync_with_stdio(0);\n cin.tie(NULL);\n cin >> n >> x >> y >> m;\n x /= 1000, y /= 1000;\n for (int i = 0; i < n; i++) {\n cin >> a[i] >> b[i];\n a[i] /= 1000, b[i] /= 1000;\n b[i] += a[i] * x - y;\n }\n fill_n(beg, maxn, -1);\n long double low = 0, high = 1e10;\n for (int i = 0; i < 70; i++) {\n long double mid = (low + high) / 2;\n computeCircleIntersections(mid);\n if (countCircleIntersections() < m) {\n low = mid;\n } else {\n high = mid;\n }\n }\n cout << fixed << setprecision(9) << sumCircleIntersections(low) << '\\n';\n return 0;\n}\n"
|
608
|
A
|
Saitama Destroys Hotel
|
Saitama accidentally destroyed a hotel again. To repay the hotel company, Genos has volunteered to operate an elevator in one of its other hotels. The elevator is special — it starts on the top floor, can only move down, and has infinite capacity. Floors are numbered from $0$ to $s$ and elevator initially starts on floor $s$ at time $0$.
The elevator takes exactly $1$ second to move down exactly $1$ floor and negligible time to pick up passengers. Genos is given a list detailing when and on which floor passengers arrive. Please determine how long in seconds it will take Genos to bring all passengers to floor $0$.
|
The minimum amount of time required is the maximum among $s$ and all values $t_{i} + f_{i}$, where $t_{i}$ and $f_{i}$ are the arrival time and the floor of the $i$-th passenger respectively. The initial observation that should be made for this problem is that only the latest passenger on each floor matters. So, we can ignore all passengers that aren't the latest passenger on their floor. Now, assume there is only a passenger on floor $s$. Call this passenger $a$. The time taken for this passenger is clearly $t_{a} + f_{a}$ (the time spent waiting for the passenger plus the time for the elevator to reach the bottom). Now, add in one passenger on a floor lower than $s$. Call this new passenger $b$. There are 2 possibilities for this passenger: either the elevator reaches the passenger's floor after the passenger's time of arrival, or before it. In the first case, no time is added to the solution, and the solution remains $t_{a} + f_{a}$. In the second case, the passenger on floor $s$ doesn't matter, and the time taken is $t_{b} + f_{b}$ for the new passenger. The only thing left is to determine which case applies. The elevator reaches the new passenger at time $t_{a} + (f_{a} - f_{b})$, so it arrives after $t_{b}$ exactly when $t_{a} + (f_{a} - f_{b}) > t_{b}$, which is equivalent to $t_{a} + f_{a} > t_{b} + f_{b}$. Thus, the solution for these two passengers is $max(t_{a} + f_{a}, t_{b} + f_{b})$. A similar line of reasoning can be applied to the rest of the passengers. Thus, the solution is the maximum among $s$ and all values $t_{i} + f_{i}$.
|
[
"implementation",
"math"
] | 1,000
|
"n,s = map(int, raw_input().split())\nans = s\nfor i in range(n):\n\tf,t = map(int, raw_input().split())\n\tans = max(ans, t+f)\nprint ans\n"
|
608
|
B
|
Hamming Distance Sum
|
Genos needs your help. He was asked to solve the following programming problem by Saitama:
The length of some string $s$ is denoted $|s|$. The Hamming distance between two strings $s$ and $t$ of equal length is defined as $\sum_{i=1}^{|s|}|s_{i}-t_{i}|$, where $s_{i}$ is the $i$-th character of $s$ and $t_{i}$ is the $i$-th character of $t$. For example, the Hamming distance between string "0011" and string "0110" is $|0 - 0| + |0 - 1| + |1 - 1| + |1 - 0| = 0 + 1 + 0 + 1 = 2$.
Given two binary strings $a$ and $b$, find the sum of the Hamming distances between $a$ and all contiguous substrings of $b$ of length $|a|$.
|
We are trying to find $\sum_{i=0}^{|b|-|a|}\sum_{j=0}^{|a|-1}|a[j]-b[i+j]|$. Swapping the sums, we see that this is equivalent to $\sum_{j=0}^{|a|-1}\sum_{i=0}^{|b|-|a|}|a[j]-b[i+j]|$. Summing up the answer in the naive fashion will give an $O(n^{2})$ solution. However, notice that for a fixed $j$ we can actually find $\sum_{i=0}^{|b|-|a|}|a[j]-b[i+j]|$ without going through each individual character. Rather, all we need is a frequency count of different characters. To obtain this frequency count, we can simply build prefix count arrays of all characters of $b$. Let's call this prefix count array $F$, where $F[x][c]$ gives the number of occurrences of the character $c$ in the prefix $[0, x)$ of $b$. We can then write $\sum_{j=0}^{|a|-1}\sum_{i=0}^{|b|-|a|}|a[j]-b[i+j]|$ as $\sum_{j=0}^{|a|-1}\sum_{c=0}^{1}|a[j]-c|\cdot(F[|b|-|a|+j+1][c]-F[j][c])$. This gives us a linear solution. Time Complexity - $O(|a| + |b|)$, Memory Complexity - $O(|b|)$
|
[
"combinatorics",
"strings"
] | 1,500
|
"#include <cstdlib>\n#include <iostream>\n#include <string>\nusing namespace std;\ntypedef long long ll;\nconst int MAXS = 200010;\n\nstring A, B;\nint F[MAXS][2];\n\nint main() {\n ios_base::sync_with_stdio(0);\n cin.tie(NULL);\n cin >> A >> B;\n for (int i = 1; i <= B.size(); i++) {\n for (int j = 0; j < 2; j++) {\n F[i][j] = F[i - 1][j];\n }\n ++F[i][B[i - 1] - '0'];\n }\n ll res = 0;\n for (int i = 0, c; i < A.size(); i++) {\n c = A[i]-'0';\n for (int j = 0; j < 2; j++) {\n res += abs(c - j) * (F[B.size() - A.size() + i + 1][j] - F[i][j]);\n }\n }\n cout << res << '\\n';\n return 0;\n}\n"
|
609
|
A
|
USB Flash Drives
|
Sean is trying to save a large file to a USB flash drive. He has $n$ USB flash drives with capacities equal to $a_{1}, a_{2}, ..., a_{n}$ megabytes. The file size is equal to $m$ megabytes.
Find the minimum number of USB flash drives needed to write Sean's file, if he can split the file between drives.
|
Let's sort the array in nonincreasing order. The answer is then some prefix of the sorted flash drives: iterate over the array from left to right until the running sum becomes at least $m$. The number of elements we took is the answer to the problem. Complexity: $O(n \log n)$.
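A minimal sketch of this greedy in Python (function name is ours; the problem guarantees the file fits across all drives):

```python
def min_drives(capacities, m):
    """Greedy: take the largest drives first until total capacity >= m."""
    total, used = 0, 0
    for cap in sorted(capacities, reverse=True):
        if total >= m:
            break
        total += cap
        used += 1
    return used
```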
|
[
"greedy",
"implementation",
"sortings"
] | 800
| null |
609
|
B
|
The Best Gift
|
Emily's birthday is next week and Jack has decided to buy a present for her. He knows she loves books so he goes to the local bookshop, where there are $n$ books on sale from one of $m$ genres.
In the bookshop, Jack decides to buy two books of different genres.
Based on the genre of books on sale in the shop, find the number of options available to Jack for choosing two books of different genres for Emily. Options are considered different if they differ in at least one book.
The books are given by indices of their genres. The genres are numbered from $1$ to $m$.
|
Let's denote $cnt_{i}$ - the number of books of the $i$-th genre. The answer to the problem equals $\sum_{i=1}^{m}\sum_{j=i+1}^{m}cnt_{i}\cdot cnt_{j}={\frac{n\cdot(n-1)}{2}}-\sum_{i=1}^{m}{\frac{cnt_{i}\cdot(cnt_{i}-1)}{2}}$. The first sum counts the good pairs directly, while on the right-hand side we subtract the number of bad pairs (two books of the same genre) from the number of all pairs. Complexity: $O(n + m^{2})$ or $O(n + m)$.
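The subtraction form of the formula translates to a few lines of Python (names are ours):

```python
from collections import Counter

def count_pairs(genres):
    """Ways to pick two books of different genres:
    all pairs minus same-genre pairs."""
    n = len(genres)
    total = n * (n - 1) // 2
    same = sum(c * (c - 1) // 2 for c in Counter(genres).values())
    return total - same
```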
|
[
"constructive algorithms",
"implementation"
] | 1,100
| null |
609
|
C
|
Load Balancing
|
In the school computer room there are $n$ servers which are responsible for processing several computing tasks. You know the number of scheduled tasks for each server: there are $m_{i}$ tasks assigned to the $i$-th server.
In order to balance the load for each server, you want to reassign some tasks to make the difference between the most loaded server and the least loaded server as small as possible. In other words you want to minimize expression $m_{a} - m_{b}$, where $a$ is the most loaded server and $b$ is the least loaded one.
In one second you can reassign a single task. Thus in one second you can choose any pair of servers and move a single task from one server to another.
Write a program to find the minimum number of seconds needed to balance the load of servers.
|
Denote $s$ - the sum of elements in the array. If $s$ is divisible by $n$ then the balanced array consists of $n$ elements equal to $\frac{s}{n}$. In this case the difference between the maximal and minimal elements is $0$. It is easy to see that in any other case the answer is greater than $0$. On the other hand, the array consisting of $s \bmod n$ numbers $\left\lceil\frac{s}{n}\right\rceil$ and $n - (s \bmod n)$ numbers $\left\lfloor\frac{s}{n}\right\rfloor$ is balanced with the difference equal to $1$. Let's denote this balanced array $b$. To get array $b$, let's sort array $a$ in nonincreasing order and match element $a_{i}$ to element $b_{i}$ (with $b$ also in nonincreasing order). Now we should increase some elements and decrease others. In one operation we can increase some element and decrease another, so the answer is $\frac{\sum_{i=1}^{n}|a_{i}-b_{i}|}{2}$. Complexity: $O(n \log n)$.
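This matching argument fits in a short Python sketch (names are ours):

```python
def min_seconds(a):
    """Match the sorted loads against the balanced target array and sum
    the absolute differences; each move fixes one surplus and one deficit,
    so the answer is half that sum."""
    n, s = len(a), sum(a)
    # s % n servers get ceil(s/n) tasks, the rest get floor(s/n)
    target = [s // n + 1] * (s % n) + [s // n] * (n - s % n)
    a = sorted(a, reverse=True)                    # target is also nonincreasing
    return sum(abs(x - t) for x, t in zip(a, target)) // 2
```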
|
[
"implementation",
"math"
] | 1,500
| null |
609
|
D
|
Gadgets for dollars and pounds
|
Nura wants to buy $k$ gadgets. She has only $s$ burles for that. She can buy each gadget for dollars or for pounds. So each gadget is selling only for some type of currency. The type of currency and the cost in that currency are not changing.
Nura can buy gadgets for $n$ days. For each day you know the exchange rates of dollar and pound, so you know the cost of conversion burles to dollars or to pounds.
Each day (from $1$ to $n$) Nura can buy some gadgets by current exchange rate. Each day she can buy any gadgets she wants, but each gadget can be bought no more than once during $n$ days.
Help Nura to find the minimum day index when she will have $k$ gadgets. Nura always pays with burles, which are converted according to the exchange rate of the purchase day. Nura can't buy dollars or pounds, she always stores only burles. Gadgets are numbered with integers from $1$ to $m$ in order of their appearing in input.
|
If Nura can buy $k$ gadgets in $x$ days then she can do that in $x + 1$ days. So the function of the answer is monotonic, and we can find the minimal day with binary search. Denote $lf = 0$ - the left bound of the binary search and $rg = n + 1$ - the right one. We will maintain the invariant that at the left bound we can't buy $k$ gadgets and at the right bound we can. Denote the function $f(d)$, equal to $1$ if we can buy $k$ gadgets in $d$ days and $0$ otherwise. As usual in binary search we will choose $d={\frac{lf+rg}{2}}$. If $f(d) = 1$ then we should move the right bound $rg = d$, and the left bound $lf = d$ in the other case. When the binary search finishes, if $rg = n + 1$ then the answer is $- 1$, otherwise the answer is $rg$. Before the binary search we can create two arrays of the gadgets which are sold for dollars and for pounds, and sort them. It is easy to see that we should buy the gadgets for dollars on a day $i \le d$ when the dollar costs as little as possible, and the gadgets for pounds on a day $j \le d$ when the pound costs as little as possible. Suppose now we want to buy $x$ gadgets for dollars and $k - x$ gadgets for pounds. Of course we will buy the cheapest of them (we have already sorted the arrays for that). Let's iterate over $x$ from $0$ to $k$ and maintain the sum of the gadgets bought for dollars $s_{1}$ and the sum of the gadgets bought for pounds $s_{2}$. For $x = 0$ we can calculate the sums in $O(k)$. For other $x$'s we can recalculate the sums in $O(1)$ time from the sums for $x - 1$ by adding one gadget for dollars and removing one gadget for pounds. Complexity: $O(k \log n)$.
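A sketch of this binary search in Python (all names are ours; `dollar[i]`/`pound[i]` are the day-$i$ exchange rates, `gadgets` holds `(currency, cost)` pairs with currency $1$ = dollars, $2$ = pounds; for brevity the feasibility check rescans the rate prefix and sums via prefix arrays, rather than the $O(1)$ rolling update described above):

```python
def earliest_day(n, k, s, dollar, pound, gadgets):
    """Earliest 1-based day by which k gadgets are affordable with s
    burles, or -1 if impossible within n days."""
    du = sorted(c for t, c in gadgets if t == 1)   # dollar gadget costs
    po = sorted(c for t, c in gadgets if t == 2)   # pound gadget costs
    dsum, psum = [0], [0]                          # prefix sums of cheapest
    for c in du:
        dsum.append(dsum[-1] + c)
    for c in po:
        psum.append(psum[-1] + c)

    def feasible(d):
        rd = min(dollar[:d])                       # best dollar rate in d days
        rp = min(pound[:d])                        # best pound rate in d days
        best = None
        for x in range(k + 1):                     # x for dollars, k-x for pounds
            if x > len(du) or k - x > len(po):
                continue
            cost = dsum[x] * rd + psum[k - x] * rp
            best = cost if best is None else min(best, cost)
        return best is not None and best <= s

    lo, hi = 0, n + 1          # invariant: lo infeasible, hi (if <= n) feasible
    while hi - lo > 1:
        mid = (lo + hi) // 2
        hi, lo = (mid, lo) if feasible(mid) else (hi, mid)
    return hi if hi <= n else -1
```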
|
[
"binary search",
"greedy",
"two pointers"
] | 2,000
| null |
609
|
E
|
Minimum spanning tree for each edge
|
A connected undirected weighted graph without self-loops or multiple edges is given. The graph contains $n$ vertices and $m$ edges.
For each edge $(u, v)$ find the minimal possible weight of the spanning tree that contains the edge $(u, v)$.
The weight of the spanning tree is the sum of weights of all edges included in spanning tree.
|
This problem was prepared by dalex. Let's build any MST with any fast algorithm (for example, Kruskal's algorithm). For all edges in the MST the answer is the weight of the MST. Now consider any other edge $(x, y)$. There is exactly one path between $x$ and $y$ in the MST; let's remove the heaviest edge on this path and add the edge $(x, y)$. The resulting tree is a minimum spanning tree containing the edge $(x, y)$ (this can be proven by the Tarjan criterion). Fix some root of the MST (for example, vertex $1$). To find the heaviest edge on the path from $x$ to $y$ we can first find the heaviest edge on the path from $x$ to $l = lca(x, y)$ and then on the path from $y$ to $l$, where $l$ is the lowest common ancestor of vertices $x$ and $y$. To find $l$ we can use the binary lifting method, and during the computation of $l$ we can also maintain the weight of the heaviest edge. Of course, this problem can also be solved with more involved data structures, for example heavy-light decomposition or link-cut trees. Complexity: $O(m \log n)$. It's very strange, but I can't find any article about the Tarjan criterion in English (although there are articles in Russian), so here it is: a spanning tree is minimal if and only if the weight of any edge $(x, y)$ not in the spanning tree is not less than the weight of the heaviest edge on the path from $x$ to $y$ in the spanning tree.
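A possible C++ sketch of the binary lifting part (the `Lift` struct and its `build`/`maxOnPath` names are assumptions; `par[v]` and `w[v]` are the parent of $v$ in the rooted MST and the weight of the edge to it, with the root being its own parent with edge weight $0$):

```cpp
#include <array>
#include <vector>
#include <algorithm>

// up[v][j] is the 2^j-th ancestor of vertex v in the rooted MST and
// mx[v][j] the heaviest edge weight on that upward jump.
constexpr int LOG = 20;

struct Lift {
    std::vector<std::array<int, LOG>> up;
    std::vector<std::array<long long, LOG>> mx;
    std::vector<int> depth;

    void build(const std::vector<int>& par, const std::vector<long long>& w,
               const std::vector<int>& dep) {
        int n = par.size();
        up.assign(n, {}); mx.assign(n, {}); depth = dep;
        for (int v = 0; v < n; ++v) { up[v][0] = par[v]; mx[v][0] = w[v]; }
        for (int j = 1; j < LOG; ++j)
            for (int v = 0; v < n; ++v) {
                up[v][j] = up[up[v][j - 1]][j - 1];
                mx[v][j] = std::max(mx[v][j - 1], mx[up[v][j - 1]][j - 1]);
            }
    }

    // Heaviest MST edge on the path x..y: lift both ends to lca(x, y),
    // maximizing edge weights along the way.
    long long maxOnPath(int x, int y) const {
        long long best = 0;
        if (depth[x] < depth[y]) std::swap(x, y);
        for (int j = LOG - 1; j >= 0; --j)
            if (depth[x] - (1 << j) >= depth[y]) {
                best = std::max(best, mx[x][j]);
                x = up[x][j];
            }
        if (x == y) return best;
        for (int j = LOG - 1; j >= 0; --j)
            if (up[x][j] != up[y][j]) {
                best = std::max({best, mx[x][j], mx[y][j]});
                x = up[x][j]; y = up[y][j];
            }
        return std::max({best, mx[x][0], mx[y][0]});
    }
};
```

The answer for a non-tree edge $(x, y)$ of weight $c$ is then (MST weight) $-$ `maxOnPath(x, y)` $+ c$.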
|
[
"data structures",
"dfs and similar",
"dsu",
"graphs",
"trees"
] | 2,100
| null |
609
|
F
|
Frogs and mosquitoes
|
There are $n$ frogs sitting on the coordinate axis $Ox$. For each frog two values $x_{i}, t_{i}$ are known — the position and the initial length of the tongue of the $i$-th frog (it is guaranteed that all positions $x_{i}$ are different). $m$ mosquitoes land on the coordinate axis one by one. For each mosquito two values are known: $p_{j}$ — the coordinate of the position where the $j$-th mosquito lands, and $b_{j}$ — the size of the $j$-th mosquito. Frogs and mosquitoes are represented as points on the coordinate axis.
A frog can eat a mosquito if the mosquito is at the same position as the frog or to its right, and the distance between them is not greater than the length of the frog's tongue.
If at some moment several frogs can eat a mosquito, the leftmost frog (the one with minimal $x_{i}$) eats it. After eating a mosquito the length of the frog's tongue increases by the size of the eaten mosquito. It is possible that the frog will then be able to eat further mosquitoes (in that case it must eat them).
For each frog print two values — the number of eaten mosquitoes and the length of the tongue after all mosquitoes have landed and the frogs have eaten everything they can.
Each mosquito lands on the coordinate axis only after the frogs have eaten all possible mosquitoes that landed before it. Mosquitoes are given in order of their landing.
|
Let's maintain the set of not-yet-eaten mosquitoes (for example, with set in C++ or TreeSet in Java) and process the mosquitoes in order of their landing. We also maintain the set of segments $(a_{i}, b_{i})$, where $a_{i}$ is the position of the $i$-th frog and $b_{i} = a_{i} + l_{i}$, with $l_{i}$ the current length of the tongue of the $i$-th frog. Let the current mosquito land at position $x$. Choose the segment $(a_{i}, b_{i})$ with minimal $a_{i}$ such that $b_{i} \ge x$. If $a_{i} \le x$, we have found the frog that will eat the mosquito; otherwise the mosquito is not eaten and we add it to our set. If the $i$-th frog eats the mosquito, its tongue length increases by the size of the mosquito and we update the segment $(a_{i}, b_{i})$. After that we should pick the nearest remaining mosquito to the right of the frog and, if possible, let the $i$-th frog eat it as well (this can be done with lower_bound in C++); possibly several mosquitoes get eaten this way, so we repeat the step. We can store the segments $(a_{i}, b_{i})$ in a segment tree indexed by position $a_{i}$ with value $b_{i}$. To find the needed segment we can binary-search over $a_{i}$ and check that the maximum $b_{i}$ on the prefix is at least $x$; this works in $O(n \log^{2} n)$ time. We can improve this solution: descend the segment tree directly — if the maximum value $b_{i}$ in the left subtree is at least $x$, go left, otherwise go right. Complexity: $O((n + m)\log(n + m))$.
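The segment-tree descent can be sketched as follows (a minimal illustration; the struct layout and the helper names `set`/`query` are assumptions):

```cpp
#include <vector>
#include <algorithm>

// Leaves store b_i = a_i + l_i for frogs sorted by position a_i; internal
// nodes store the subtree maximum. query(x) returns the leftmost frog index
// whose reach b_i >= x, or -1 if no frog reaches x.
struct SegTree {
    int n;
    std::vector<long long> t;                 // max of each segment
    SegTree(int n_) : n(n_), t(4 * n_, -1) {}
    void set(int pos, long long val) { update(1, 0, n - 1, pos, val); }
    int query(long long x) const { return leftmost(1, 0, n - 1, x); }
private:
    void update(int node, int l, int r, int pos, long long val) {
        if (l == r) { t[node] = val; return; }
        int m = (l + r) / 2;
        if (pos <= m) update(2 * node, l, m, pos, val);
        else update(2 * node + 1, m + 1, r, pos, val);
        t[node] = std::max(t[2 * node], t[2 * node + 1]);
    }
    int leftmost(int node, int l, int r, long long x) const {
        if (t[node] < x) return -1;           // nothing here reaches x
        if (l == r) return l;
        int m = (l + r) / 2;
        if (t[2 * node] >= x) return leftmost(2 * node, l, m, x);
        return leftmost(2 * node + 1, m + 1, r, x);
    }
};
```

After finding the candidate frog, the caller still has to check $a_{i} \le x$ as described above.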
|
[
"data structures",
"greedy"
] | 2,500
| null |
610
|
A
|
Pasha and Stick
|
Pasha has a wooden stick of some positive integer length $n$. He wants to perform exactly three cuts to get four parts of the stick. Each part must have some positive integer length and the sum of these lengths will obviously be $n$.
Pasha likes rectangles but hates squares, so he wonders how many ways there are to split a stick into four parts so that it's possible to form a rectangle using these parts, but impossible to form a square.
Your task is to help Pasha and count the number of such ways. Two ways to cut the stick are considered distinct if there exists some integer $x$, such that the number of parts of length $x$ in the first way differ from the number of parts of length $x$ in the second way.
|
If the given $n$ is odd the answer is $0$, because the perimeter of any rectangle is an even number. If $n$ is even, the number of rectangles that can be constructed equals $\lfloor n / 4 \rfloor$. If $n$ is divisible by $4$ this count also includes the square, which is forbidden, so we subtract $1$ from the answer. Asymptotic behavior: $O(1)$.
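The formula fits in a few lines of C++ (the helper name `countWays` is an assumption):

```cpp
// Count the ways to cut a stick of length n into four parts that form a
// rectangle but not a square.
long long countWays(long long n) {
    if (n % 2 != 0) return 0;   // a rectangle's perimeter is always even
    long long ways = n / 4;     // pairs (a, n/2 - a) with 1 <= a <= n/2 - a
    if (n % 4 == 0) --ways;     // exclude the square a = n / 4
    return ways;
}
```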
|
[
"combinatorics",
"math"
] | 1,000
| null |
610
|
B
|
Vika and Squares
|
Vika has $n$ jars with paints of distinct colors. All the jars are numbered from $1$ to $n$ and the $i$-th jar contains $a_{i}$ liters of paint of color $i$.
Vika also has an infinitely long rectangular piece of paper of width $1$, consisting of squares of size $1 \times 1$. Squares are numbered $1$, $2$, $3$ and so on. Vika decided that she will paint squares one by one from left to right, starting from square number $1$ in some arbitrary color. If a square was painted in color $x$, then the next square is painted in color $x + 1$; in case $x = n$, the next square is painted in color $1$. If there is no more paint of the color Vika needs next, she stops.
Each square is painted in exactly one color and takes exactly $1$ liter of paint. Your task is to calculate the maximum number of squares that might be painted if Vika chooses the right color for the first square.
|
At first let's find the minimum of the given array and store it in the variable $minimum$. It is easy to see that we can always paint $n \cdot minimum$ squares. Beyond that, we should start right after a minimum that is followed (cyclically) by the longest run of elements strictly greater than the minimum; in other words, we need to find two minima in the array that are farthest from each other (do not forget that the array is cyclic). If there is only one minimum, we start painting from the color right after it (again, cyclically). This can be done with a single left-to-right pass: store the position of the most recent minimum in a variable, and update it and the answer whenever we meet an element equal to the minimum. Asymptotic behavior: $O(n)$, where $n$ is the number of different colors.
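A compact C++ sketch of this scan (the helper name `maxPainted` is an assumption; it expresses the answer as $n \cdot minimum$ plus the longest circular run of jars above the minimum, which is equivalent to the two-farthest-minima formulation):

```cpp
#include <vector>
#include <algorithm>

// Maximum number of painted squares: n * minimum full cycles plus the
// longest circular run of jars holding strictly more paint than the minimum.
long long maxPainted(const std::vector<long long>& a) {
    int n = a.size();
    long long mn = *std::min_element(a.begin(), a.end());
    long long best = 0, run = 0;
    // Scan 2n positions to handle the circular wrap-around; at least one
    // jar equals the minimum, so a run resets and never exceeds n - 1.
    for (int i = 0; i < 2 * n; ++i) {
        if (a[i % n] > mn) best = std::max(best, ++run);
        else run = 0;
    }
    return (long long)n * mn + best;
}
```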
|
[
"constructive algorithms",
"implementation"
] | 1,300
| null |
610
|
C
|
Harmony Analysis
|
The semester is already ending, so Danil made an effort and decided to visit a lesson on harmony analysis, to at least find out what the professor looks like. Danil was very bored at this lesson until the teacher gave the group a simple task: find $4$ vectors in $4$-dimensional space, such that every coordinate of every vector is $1$ or $ - 1$ and any two vectors are orthogonal. Just as a reminder, two vectors in $n$-dimensional space are considered orthogonal if and only if their scalar product is equal to zero, that is:
\[
\sum_{i=1}^{n}a_{i}\cdot b_{i}=0
\]
Danil quickly managed to come up with a solution for this problem, and the teacher noticed that it can be solved in a more general case: for $2^{k}$ vectors in $2^{k}$-dimensional space. When Danil came home, he quickly came up with a solution for this general problem as well. Can you cope with it?
|
Let's build the answer recursively. For $k = 0$ the answer is a single $-1$ or $+1$. Suppose we want to build the answer for some $k > 0$: first build the matrix for $k - 1$, then take four copies of it arranged in a $2 \times 2$ block layout, inverting the values in the last (bottom-right) block. So we get a fractal with the $2 \times 2$ base $++$ / $+-$. Let's prove the correctness of this construction by induction. Consider two vectors from the top (or bottom) half: their scalar products over the left half and over the right half are both zero, so the total scalar product is also zero. Consider a vector from the top half and one from the bottom half: their scalar products over the left and right halves differ only in sign, so the total scalar product is again zero. Note that the answer is also the matrix whose element $(i, j)$ equals \texttt{+} exactly when the number of one bits in the number $i \,\&\, j$ is even. Complexity: $O((2^{k})^{2})$.
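The closed form from the last observation can be sketched directly in C++ (the function name `buildVectors` is an assumption; the returned entries are $\pm 1$ rather than the \texttt{+}/\texttt{*} characters of the expected output format):

```cpp
#include <vector>

// Entry (i, j) of the 2^k x 2^k matrix is +1 when popcount(i & j) is even
// and -1 otherwise (the Sylvester/Hadamard construction).
std::vector<std::vector<int>> buildVectors(int k) {
    int n = 1 << k;
    std::vector<std::vector<int>> h(n, std::vector<int>(n));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            h[i][j] = (__builtin_popcount(i & j) % 2 == 0) ? 1 : -1;
    return h;
}
```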
|
[
"constructive algorithms"
] | 1,800
| null |
610
|
D
|
Vika and Segments
|
Vika has an infinite sheet of squared paper. Initially all squares are white. She introduced a two-dimensional coordinate system on this sheet and drew $n$ black horizontal and vertical segments parallel to the coordinate axes. All segments have width equal to $1$ square, which means every segment occupies some set of neighbouring squares situated in one row or one column.
Your task is to calculate the number of painted cells. If a cell was painted more than once, it should be calculated exactly once.
|
At first let's unite all segments lying on the same vertical or horizontal line. Now the answer to the problem is the sum of the lengths of all segments minus the number of intersections. Let's count the intersections. For this we use a horizontal scan line from top to bottom (this can be done with events: a vertical segment opens, a vertical segment closes, and a horizontal segment is handled), and in some data structure we store the set of $x$-coordinates of the open vertical segments, for example a Fenwick tree over the compressed coordinates. For the current horizontal segment we query the number of open vertical segments whose coordinate $x$ lies in the range $[x_{1}, x_{2}]$, where $x$ is the vertical line on which a vertical segment is located and $x_{1}, x_{2}$ are the $x$-coordinates of the current horizontal segment's endpoints. Asymptotic behavior: $O(n \log n)$.
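The Fenwick tree used by the scan line might look like this (a building block only; segment merging, coordinate compression, and event handling are omitted):

```cpp
#include <vector>

// Point-update / range-query Fenwick tree: add(i, +1) when a vertical
// segment opens at compressed x-coordinate i, add(i, -1) when it closes,
// and range(x1, x2) counts the open verticals crossed by a horizontal
// segment spanning [x1, x2].
struct Fenwick {
    std::vector<int> t;
    Fenwick(int n) : t(n + 1, 0) {}
    void add(int i, int d) { for (++i; i < (int)t.size(); i += i & -i) t[i] += d; }
    int prefix(int i) const { int s = 0; for (++i; i > 0; i -= i & -i) s += t[i]; return s; }
    int range(int l, int r) const { return prefix(r) - (l ? prefix(l - 1) : 0); }
};
```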
|
[
"constructive algorithms",
"data structures",
"geometry",
"two pointers"
] | 2,300
| null |