Columns: contest_id (string, length 1–4) · index (string, 43 classes) · title (string, length 2–63) · statement (string, length 51–4.24k) · tutorial (string, length 19–20.4k) · tags (list, length 0–11) · rating (int64, 800–3.5k) · code (string, length 46–29.6k)
902
A
Visiting a Friend
Pig is visiting a friend. Pig's house is located at point $0$, and his friend's house is located at point $m$ on an axis. Pig can use teleports to move along the axis. To use a teleport, Pig should come to a certain point (where the teleport is located) and choose where to move: for each teleport there is the rightmost point it can move Pig to, this point is known as the limit of the teleport. Formally, a teleport located at point $x$ with limit $y$ can move Pig from point $x$ to any point within the segment $[x; y]$, including the bounds. Determine if Pig can visit the friend using teleports only, or he should use his car.
Note that if we can reach some point $x$, then we can reach every point $\le x$. So we maintain the rightmost point we can reach. For each teleport, if that point can use it (i.e. the point is not to the left of the teleport's position) and the teleport's limit is to the right of the current point, we move the point to the limit. In the end we check that the rightmost reachable point equals $m$.
[ "greedy", "implementation" ]
1,100
null
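The `code` field for this row is null; as a hedged sketch, the tutorial's greedy could look like this in Python (function name and input representation are my own):

```python
def can_reach(m, teleports):
    """Greedy from the tutorial: maintain the rightmost reachable point.

    teleports: (position, limit) pairs sorted by position, as in the problem.
    Returns True iff Pig can get from 0 to m using teleports only.
    """
    rightmost = 0
    for x, y in teleports:
        if x <= rightmost:                 # this teleport is reachable
            rightmost = max(rightmost, y)  # use it to extend the reach
    return rightmost >= m
```

On the problem's samples this yields `True` for `can_reach(5, [(0, 2), (2, 4), (3, 5)])` and `False` for `can_reach(7, [(0, 4), (2, 5), (6, 7)])`.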
902
B
Coloring a Tree
You are given a rooted tree with $n$ vertices. The vertices are numbered from $1$ to $n$, the root is the vertex number $1$. Each vertex has a color, let's denote the color of vertex $v$ by $c_{v}$. Initially $c_{v} = 0$. You have to color the tree into the given colors using the smallest possible number of steps. On each step you can choose a vertex $v$ and a color $x$, and then color all vertices in the subtree of $v$ (including $v$ itself) in color $x$. In other words, for every vertex $u$, such that the path from root to $u$ passes through $v$, set $c_{u} = x$. It is guaranteed that you have to color each vertex in a color different from $0$. You can learn what a rooted tree is using the link: https://en.wikipedia.org/wiki/Tree_(graph_theory).
Consider the process from the end: we repeatedly "delete" from the tree any subtree in which all vertices have the same color and the color of the subtree's root differs from the color of its parent. From this one can show that the answer is the number of edges whose endpoints have different target colors, plus $1$.
[ "dfs and similar", "dsu", "greedy" ]
1,200
null
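The `code` field for this row is null; a minimal Python sketch of the counting formula above (input representation is my own):

```python
def min_steps(parent, color):
    """Answer from the tutorial: one step for coloring the whole tree, plus
    one step per edge whose endpoints need different colors.

    parent[v] is the parent of vertex v (vertices 1..n, parent[1] = 0),
    color[v] is the target color of v; index 0 of both lists is unused.
    """
    n = len(color) - 1
    return 1 + sum(1 for v in range(2, n + 1) if color[v] != color[parent[v]])
```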
903
A
Hungry Student Problem
Ivan's classes at the university have just finished, and now he wants to go to the local CFK cafe and eat some fried chicken. CFK sells chicken chunks in small and large portions. A small portion contains $3$ chunks; a large one — $7$ chunks. Ivan wants to eat exactly $x$ chunks. Now he wonders whether he can buy exactly this amount of chicken. Formally, Ivan wants to know if he can choose two non-negative integers $a$ and $b$ in such a way that $a$ small portions and $b$ large ones contain exactly $x$ chunks. Help Ivan to answer this question for several values of $x$!
There are lots of different approaches to this problem. For example, you could just iterate on the values of $a$ and $b$ from $0$ to $33$ and check if $3a + 7b = x$.
[ "greedy", "implementation" ]
900
null
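The `code` field for this row is null; the brute force from the tutorial could be sketched in Python as (function name is my own):

```python
def can_buy(x):
    """Brute force from the tutorial: try all portion counts a, b in 0..33
    (enough since x <= 100) and check whether 3a + 7b = x."""
    return any(3 * a + 7 * b == x
               for a in range(34) for b in range(34))
```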
903
B
The Modcrab
Vova is again playing some computer game, now an RPG. In the game Vova's character received a quest: to slay the fearsome monster called Modcrab. After two hours of playing the game Vova has tracked the monster and analyzed its tactics. The Modcrab has $h_{2}$ health points and an attack power of $a_{2}$. Knowing that, Vova has decided to buy a lot of strong healing potions and to prepare for battle. Vova's character has $h_{1}$ health points and an attack power of $a_{1}$. Also he has a large supply of healing potions, each of which increases his current amount of health points by $c_{1}$ when Vova drinks a potion. All potions are identical to each other. It is guaranteed that $c_{1} > a_{2}$. The battle consists of multiple phases. In the beginning of each phase, Vova can either attack the monster (thus reducing its health by $a_{1}$) or drink a healing potion (it increases Vova's health by $c_{1}$; \textbf{Vova's health can exceed $h_{1}$}). Then, \textbf{if the battle is not over yet}, the Modcrab attacks Vova, reducing his health by $a_{2}$. The battle ends when Vova's (or Modcrab's) health drops to $0$ or lower. It is possible that the battle ends in the middle of a phase after Vova's attack. Of course, Vova wants to win the fight. But also he wants to do it as fast as possible. So he wants to make up a strategy that will allow him to win the fight after the minimum possible number of phases. Help Vova to make up a strategy! You may assume that Vova never runs out of healing potions, and that he can always win.
A simple greedy solution works: simulate the process until the Modcrab is dead, making Vova drink a potion if his current health is less than $a_{2} + 1$ and the monster's current health is greater than $a_{1}$ (in this case Vova can't finish the Modcrab in one strike, but the Modcrab can kill Vova if he doesn't heal). In any other situation Vova must attack. Since all parameters are up to $100$, the number of phases won't exceed $9901$.
[ "greedy", "implementation" ]
1,200
null
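The `code` field for this row is null; a Python sketch of the greedy simulation described above (function name and return convention are my own):

```python
def battle(h1, a1, c1, h2, a2):
    """Simulate the greedy strategy from the tutorial; returns the list of
    phases ('STRIKE' / 'HEAL'), whose length is the minimum number of phases.
    Assumes c1 > a2 and that Vova can always win, as the problem guarantees."""
    phases = []
    while True:
        if h2 <= a1 or h1 > a2:      # can finish now, or survives the counter
            phases.append('STRIKE')
            h2 -= a1
            if h2 <= 0:              # battle ends mid-phase after the strike
                return phases
        else:
            phases.append('HEAL')
            h1 += c1
        h1 -= a2                     # the Modcrab strikes back
```

On the problem's first sample (`h1=10, a1=6, c1=100, h2=17, a2=5`) this produces the four-phase plan STRIKE, HEAL, STRIKE, STRIKE.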
903
C
Boxes Packing
Mishka has got $n$ empty boxes. For every $i$ ($1 ≤ i ≤ n$), $i$-th box is a cube with side length $a_{i}$. Mishka can put a box $i$ into another box $j$ if the following conditions are met: - $i$-th box is not put into another box; - $j$-th box doesn't contain any other boxes; - box $i$ is smaller than box $j$ ($a_{i} < a_{j}$). Mishka can put boxes into each other an arbitrary number of times. He wants to minimize the number of visible boxes. A box is called visible iff it is not put into another box. Help Mishka to determine the minimum possible number of visible boxes!
One can show that the answer always equals the number of boxes of the size that appears most often in the array. The result can be obtained by a constructive algorithm: take these most frequent boxes, put the smaller boxes, in decreasing order of their size, into the free ones (there is always room), and put the resulting boxes into the larger ones in increasing order. Overall complexity: $O(n \cdot \log n)$.
[ "greedy" ]
1,200
null
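The `code` field for this row is null; the counting part of the answer is a one-liner in Python (the constructive nesting is omitted, since only the count is asked for):

```python
from collections import Counter

def min_visible(sizes):
    """The answer equals the number of boxes of the most frequent size:
    every other box can be nested into one of those chains."""
    return max(Counter(sizes).values())
```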
903
D
Almost Difference
Let's denote a function \[ d(x, y) = \begin{cases} y - x, & \text{if } |x - y| > 1, \\ 0, & \text{if } |x - y| \le 1. \end{cases} \] You are given an array $a$ consisting of $n$ integers. You have to calculate the sum of $d(a_{i}, a_{j})$ over all pairs $(i, j)$ such that $1 ≤ i ≤ j ≤ n$.
Despite its plain start, this came out as the most fun and controversial problem of the contest... Here is the basis of the solution. Maintain some kind of map/hashmap with the number of times each value has appeared in the array so far. Processing the $i$-th number ($1$-indexed), add $a_{i}$ taken $i - 1$ times and subtract the prefix sum of the first $i - 1$ numbers (this accounts for all pairs ending at $i$); then cancel the close pairs: add $cnt_{a_{i} - 1} \cdot (a_{i} - 1)$, $cnt_{a_{i}} \cdot a_{i}$ and $cnt_{a_{i} + 1} \cdot (a_{i} + 1)$, and subtract $a_{i}$ taken $cnt_{a_{i} - 1} + cnt_{a_{i}} + cnt_{a_{i} + 1}$ times. Then update $cnt$ with $a_{i}$. Now we have to deal with numbers exceeding the long long limits. Obviously, you can use built-in bigints from Java/Python or write your own class supporting addition and printing of such numbers. However, the numbers are only up to $10^{19}$ by absolute value, so you can use long double: its precision is enough for simple multiplication and addition. You can also use two unsigned long long numbers, one for the negative terms and one for the positive terms; in the end you only have to handle printing a negative result. Overall complexity: $O(n \cdot \log n)$.
[ "data structures", "math" ]
2,200
null
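The `code` field for this row is null; a Python sketch of the tutorial's counting scheme (Python's unbounded integers remove the need for the big-number tricks; function name is my own):

```python
from collections import Counter

def almost_difference(a):
    """Sum of d(a_i, a_j) over pairs i <= j, following the tutorial's idea:
    accumulate the plain sum of (a_j - a_i) over i < j, then cancel the
    contribution of pairs whose values differ by at most 1."""
    cnt = Counter()        # occurrences of each value seen so far
    prefix = 0             # sum of the values seen so far
    ans = 0
    for i, x in enumerate(a):          # i = number of processed elements
        ans += x * i - prefix          # adds (x - a_j) for every earlier j
        for v in (x - 1, x, x + 1):    # cancel pairs with |x - v| <= 1
            ans -= (x - v) * cnt[v]
        cnt[x] += 1
        prefix += x
    return ans
```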
903
E
Swapping Characters
We had a string $s$ consisting of $n$ lowercase Latin letters. We made $k$ copies of this string, thus obtaining $k$ identical strings $s_{1}, s_{2}, ..., s_{k}$. After that, in each of these strings we swapped exactly two characters (the characters we swapped could be identical, but they had different indices in the string). You are given $k$ strings $s_{1}, s_{2}, ..., s_{k}$, and you have to restore any string $s$ so that it is possible to obtain these strings by performing aforementioned operations. Note that the total length of the strings you are given doesn't exceed 5000 (that is, $k·n ≤ 5000$).
If there are no two distinct strings among the given ones, we just swap any pair of characters in any of the given strings and print it. Otherwise we find two indices $i$ and $j$ such that $s_{i} \neq s_{j}$. Let's store all positions $p$ such that $s_{i, p} \neq s_{j, p}$ in an array $pos$. If the number of those positions exceeds 4, the answer is "-1". Otherwise we iterate over all positions $diffpos$ in the array $pos$, try to swap $s_{i, diffpos}$ with every other character of $s_{i}$ and check whether the resulting string can be the answer; we also try the same with string $s_{j}$. Checking whether a string $t$ can be the answer is straightforward. Iterate over all strings $s_{1}, s_{2}, ..., s_{k}$ and for each string $s_{i}$ count the number of positions $p$ such that $s_{i, p} \neq t_{p}$; call it $diff(s_{i}, t)$. If for some given string $diff(s_{i}, t)$ is not equal to 0 or 2, then $t$ can't be the answer. Also, if for some given string $diff(s_{i}, t)$ equals 0 and all characters in $s_{i}$ are distinct, then $t$ can't be the answer (a swap producing $s_{i}$ from $t$ must have exchanged two equal characters). If no candidate string satisfies all the aforementioned conditions, the answer is "-1".
[ "brute force", "hashing", "implementation", "strings" ]
2,200
null
903
F
Clear The Matrix
You are given a matrix $f$ with $4$ rows and $n$ columns. Each element of the matrix is either an asterisk (*) or a dot (.). You may perform the following operation arbitrary number of times: choose a square submatrix of $f$ with size $k × k$ (where $1 ≤ k ≤ 4$) and replace each element of the chosen submatrix with a dot. Choosing a submatrix of size $k × k$ costs $a_{k}$ coins. What is the minimum number of coins you have to pay to replace all asterisks with dots?
Constraints lead us to some kind of dp solution (usually called dp on broken profile). Let $dp[i][j][mask]$ be the minimum price to get to the $i$-th column and $j$-th row with $mask$ selected; $mask$ describes the previous $12$ cells inclusive from $(i, j)$ (if $j = 4$ then it's exactly the current column and the two previous ones). Transitions for submatrices $1 \times 1$, $2 \times 2$ and $3 \times 3$ are straightforward: just update the mask with the new cells and add $a_{k}$ to the current value. If the first cell of these $12$ is empty or $1$ is set in this position, then you can go to $dp[i][j + 1][mask >> 1]$ (or to $(i + 1)$ and $1$ if $j = 4$) for free. Finally, you can go to $dp[i + 1][j][2^{12} - 1]$ with the price of $a_{4}$. The initial value can be $0$ in $dp[3][4][0]$ (the first $12$ cells of the matrix). The answer will be stored in some valid $mask$ of $dp[n][1][mask]$. However, you can add $4$ extra empty columns and take the answer right from $dp[n + 4][1][0]$; it will have the same price. Overall complexity: $O(n \cdot 4 \cdot 2^{12})$.
[ "bitmasks", "dp" ]
2,200
null
903
G
Yet Another Maxflow Problem
In this problem you will have to deal with a very special network. The network consists of two parts: part $A$ and part $B$. Each part consists of $n$ vertices; $i$-th vertex of part $A$ is denoted as $A_{i}$, and $i$-th vertex of part $B$ is denoted as $B_{i}$. For each index $i$ ($1 ≤ i < n$) there is a directed edge from vertex $A_{i}$ to vertex $A_{i + 1}$, and from $B_{i}$ to $B_{i + 1}$, respectively. Capacities of these edges are given in the input. Also there might be several directed edges going from part $A$ to part $B$ (but never from $B$ to $A$). You have to calculate the maximum flow value from $A_{1}$ to $B_{n}$ in this network. Capacities of edges connecting $A_{i}$ to $A_{i + 1}$ might sometimes change, and you also have to maintain the maximum flow value after these changes. Apart from that, the network is fixed (there are no changes in part $B$, no changes of edges going from $A$ to $B$, and no edge insertions or deletions). Take a look at the example and the notes to understand the structure of the network better.
First of all, let's calculate minimum cut instead of maximum flow. The value of the cut is minimum if we choose $S$ (the first set of the cut) as $x$ first vertices of part $A$ and $y$ first vertices of part $B$ ($1 \le x \le n$, $0 \le y < n$). That's because if $i$ is the minimum index such that $A_{i}\notin S$, then we don't have to add any vertices $A_{j}$ such that $j > i$ to $S$, because that would only increase the value of the cut. Similarly, if $i$ is the maximum index such that $B_{i}\in S$, then it's optimal to add every vertex $B_{j}$ such that $j < i$ to $S$. Okay, so we can try finding minimum cut as a function $F(x, y)$ - value of the cut if we choose $S$ as the union of $x$ first vertices in $A$ and $y$ first vertices in $B$. To find its minimum, let's rewrite it as $F(x, y) = F_{1}(x) + F_{2}(y) + F_{3}(x, y)$, where $F_{1}(x)$ is the sum of capacities of edges added to the cut in part $A$ (it doesn't depend on part $B$), $F_{2}(y)$ is the sum of capacities added to the cut from part $B$, and $F_{3}(x, y)$ is the sum of capacities added to the cut by edges going from $A$ to $B$. These functions can be denoted this way: $F_{1}(x) = 0$ if $x = n$; otherwise $F_{1}(x)$ is the capacity of the edge going from $A_{x}$ to $A_{x + 1}$; $F_{2}(y) = 0$ if $y = 0$; otherwise $F_{2}(y)$ is the capacity of the edge going from $B_{y}$ to $B_{y + 1}$; $F_{3}(x, y)$ is the sum of capacities over all edges $A_{i} \rightarrow B_{j}$ such that $i \le x$ and $j > y$. Since only the values of $F_{1}$ are not fixed, we can solve this problem with the following algorithm: For each $x$ ($1 \le x \le n$), find the minimum possible sum of $F_{2}(y) + F_{3}(x, y)$. Let's denote this as $best(x)$, and let's denote $T(x) = F_{1}(x) + best(x)$; Build a segment tree that allows to get minimum value and modify a single value over the values of $T(x)$. 
When we need to change capacity of an edge, we add the difference between new and old capacities to $T(x)$; and to calculate the maximum flow, we query minimum over the whole tree. But how can we calculate the values of $best(x)$? We can do it using another segment tree that allows to query minimum on segment and add some value to the segment. First of all, let's set $x = 0$ and build this segment tree over values of $F_{2}(y) + F_{3}(x, y)$. The value of $F_{2}(y)$ is fixed for given $y$, so it is not modified; the value of $F_{3}(x, y)$ is initially $0$ since when $x = 0$, there are no vertices belonging to $S$ in the part $A$. And then we calculate the values of $best(x)$ one by one. When we increase $x$, we need to process all edges leading from $A_{x}$ to part $B$. When we process an edge leading to vertex $B_{i}$ with capacity $c$, we have to add $c$ to every value of $F_{3}(x, y)$ such that $i > y$ (since if $i > y$, then $B_{i}\notin S$), and this can be performed by addition on segment in the segment tree. After processing each edge leading from $A_{x}$ to part $B$, we can query $best(x)$ as the minimum value in the segment tree. Time complexity of this solution is $O((n+m+q)\log n)$.
[ "data structures", "flows", "graphs" ]
2,700
null
906
A
Shockers
Valentin participates in a show called "Shockers". The rules are quite easy: jury selects one letter which Valentin doesn't know. He should make a small speech, but every time he pronounces a word that contains the selected letter, he receives an electric shock. He can make guesses which letter is selected, but for each incorrect guess he receives an electric shock too. The show ends when Valentin guesses the selected letter correctly. Valentin can't keep in mind everything, so he could guess the selected letter much later than it can be uniquely determined and get excessive electric shocks. Excessive electric shocks are those which Valentin got after the moment the selected letter can be uniquely determined. You should find out the number of excessive electric shocks.
From the last action the selected letter can be found; let it be $c$. For each other letter $d \neq c$, the answers to some actions contradict the assumption that $d$ was selected; let $A_{d}$ be the number of the earliest such action (for each action we can easily check in linear time whether the assumption "$d$ is selected" contradicts it). Then the answer is the number of electric shocks received after the action whose number is maximal among all the $A_{d}$.
[ "implementation", "strings" ]
1,600
null
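The `code` field for this row is null; an equivalent single-pass Python sketch, assuming the problem's action format: `". w"` (word pronounced, no shock), `"! w"` (word pronounced, shock), `"? c"` (a guess; every guess before the final one is wrong and gives a shock, the final one is correct):

```python
def excessive_shocks(actions):
    """Count shocks received after the selected letter became uniquely
    determined. Each action is a string like '. ad', '! abc' or '? c'."""
    candidates = set('abcdefghijklmnopqrstuvwxyz')
    excessive = 0
    for i, action in enumerate(actions):
        kind, arg = action.split()
        determined = len(candidates) == 1   # known before this action?
        if kind == '.':                     # no shock: letter not in the word
            candidates -= set(arg)
        elif kind == '!':                   # shock: letter is in the word
            if determined:
                excessive += 1
            candidates &= set(arg)
        elif i != len(actions) - 1:         # '?': wrong guess => shock
            if determined:
                excessive += 1
            candidates.discard(arg)
    return excessive
```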
906
B
Seating of Students
Students went into a class to write a test and sat in some way. The teacher thought: "Probably they sat in this order to copy works of each other. I need to rearrange them in such a way that students that were neighbors are not neighbors in a new seating." The class can be represented as a matrix with $n$ rows and $m$ columns with a student in each cell. Two students are neighbors if cells in which they sit have a common side. Let's enumerate students from $1$ to $n·m$ in order of rows. So a student who initially sits in the cell in row $i$ and column $j$ has a number $(i - 1)·m + j$. You have to find a matrix with $n$ rows and $m$ columns in which all numbers from $1$ to $n·m$ appear exactly once and adjacent numbers in the original matrix are not adjacent in it, or determine that there is no such matrix.
The problem has many solutions, including randomized ones. Consider one deterministic solution. Without loss of generality, assume that $n \le m$. There are several corner cases. $n = 1, m = 1$: a good seating exists. $n = 1, m = 2$: a seating obviously does not exist. $n = 1, m = 3$: in any seating, one of the neighbours of student 2 will be one of his former neighbours, so a correct seating does not exist. $n = 2, m = 2$: only student 4 can be a neighbour of student 1, but student 1 must have 2 neighbours; hence a correct seating does not exist. $n = 2, m = 3$: both students 5 and 2 have 3 neighbours in the initial seating; in a new seating these students must be in non-neighbouring corner cells. Moreover, these corner cells cannot be in one row, because then it is impossible to find a student for the cell between 2 and 5. So, without loss of generality, let 5 be in the lower left corner and 2 in the upper right corner. Then only students 1 and 3 can sit in the lower middle cell; but if student 1 sits there, student 4 cannot be seated in any of the remaining cells, and the same holds for student 6 if student 3 sits there. So a correct seating does not exist in this case either. $n = 1, m = 4$: one correct seating is 2 4 1 3. $n = 1, 5 \le m$: put the students with odd numbers, in increasing order, in the first half of the row, and the others, in increasing order, in the second half. For example, for $m = 7$ the seating is 1 3 5 7 2 4 6. $n = m = 3$: one possible correct seating is 6 1 8 / 7 5 3 / 2 9 4. If $2 \le n$ and $4 \le m$, shift each even row cyclically by two positions to the right, and then shift each even column cyclically by one position upwards.
If two students are vertical neighbours in the initial seating, then in the new seating they are in different columns at distance 2 (possibly wrapping around the edge of the table); if they are horizontal neighbours in the initial seating, then in the new seating they are in neighbouring rows and neighbouring columns (again possibly wrapping around). So for the case $2 \le n, 4 \le m$ this builds a correct seating.
[ "brute force", "constructive algorithms", "math" ]
2,200
null
906
C
Party
Arseny likes to organize parties and invite people to it. However, not only friends come to his parties, but friends of his friends, friends of friends of his friends and so on. That's why some of Arseny's guests can be unknown to him. He decided to fix this issue using the following procedure. At each step he selects one of his guests $A$, who pairwise introduces all of his friends to each other. After this action any two friends of $A$ become friends. This process is run until all pairs of guests are friends. Arseny doesn't want to spend much time doing it, so he wants to finish this process using the minimum number of steps. Help Arseny to do it.
Let's formulate and prove several facts. 1. If we change the call order, the result doesn't change. Consider two vertices which are called consecutively. If they are not connected by an edge, then regardless of the order we get that, in the end, the neighbours of each vertex form a clique. If they are connected, then regardless of the order we get a clique consisting of these 2 vertices and all their neighbours. 2. If the graph is a tree, it suffices to take all its vertices except the leaves as an answer. Indeed, for any 2 tree vertices, all vertices on the path between them are in the answer. Each such vertex reduces the distance between those 2 by 1, so in the end the distance between them is 1. 3. Select a spanning tree of the source graph that has the largest number of leaves. We claim we can use all vertices except the leaves as an answer. By point 2, after all our operations on such a set the graph becomes complete. Let's show that this is the minimal number of vertices. Suppose we selected some set of vertices that is an answer. Then the subgraph of the given graph induced by the selected set must be connected (otherwise no edge can appear between connected components and the graph can't become complete). Also, each vertex that isn't in the answer must have at least one selected neighbour (otherwise it is impossible to add a new edge incident to it). Now select a spanning tree of the selected set (possible because the set is connected) and attach the non-selected vertices to it as leaves. We see that our selection can be represented via a spanning tree of the initial graph in which the selected vertices are all the non-leaf vertices plus, possibly, some leaves; but leaves can be removed from the selected set by the above. So one of the optimal answers is the set of non-leaf vertices of a spanning tree with the minimal possible number of non-leaves and, as a consequence, with the maximal possible number of leaves, QED.
Implementation. We need an algorithm working in $O(2^{n} \cdot n)$ or faster, or with worse asymptotics but with non-asymptotic optimizations. One possible solution is the following. Encode any subset of vertices as an $n$-bit mask; for example, the mask of a subset containing vertices $\{v_{1}, v_{2}, ..., v_{k}\}$ equals $2^{v_{1}} + 2^{v_{2}} + ... + 2^{v_{k}}$. Then, for a subset with mask $m$, vertex $v$ is in the set iff $m \,\&\, 2^{v}$ is not equal to 0; here $\&$ is bitwise AND. For each vertex $v$, let $neighbours[v]$ be the mask of the subset containing vertex $v$ and its neighbours; this array is calculated easily. Then let the boolean array $isConnected[m]$ be 1 for mask $m$ iff the subset coded by $m$ is connected. It can be calculated in $O(2^{n} \cdot n)$ by the following algorithm: for all vertices $v \in [0, n-1]$ (vertices are 0-indexed), assign $isConnected[2^{v}] = 1$; for all other masks, $isConnected$ is initially 0. Then go through all masks in increasing order; let $m$ be the current mask. If $isConnected[m] = 0$, go to the next mask; otherwise, let $v_{1}, v_{2}, ..., v_{k}$ be the vertices of the subset coded by $m$. Then the mask $m' := maskNeighbours[m] := neighbours[v_{1}] \,|\, neighbours[v_{2}] \,|\, ... \,|\, neighbours[v_{k}]$ (where $|$ is bitwise OR) codes the subset containing the vertices of mask $m$ and their neighbours. For each vertex $w$ in the subset of mask $m'$, assign $isConnected[m \,|\, 2^{w}] = 1$. The described algorithm works in $O(2^{n} \cdot n)$; it can be proved by induction that at the end $isConnected[m] = 1$ iff $m$ codes a connected subset of vertices. But how do we find the answer? Notice that a mask $m = 2^{v_{1}} + 2^{v_{2}} + ... + 2^{v_{k}}$ codes a good (for our purposes) subset iff $isConnected[m] = 1$ and $maskNeighbours[m] = 2^{n} - 1 = 2^{0} + 2^{1} + ... + 2^{n - 1}$.
For each mask $m$ we can check whether it is good in $O(n)$ time once the array $isConnected$ is calculated; the answer is a good mask with the minimal possible number of elements in the corresponding set.
[ "bitmasks", "brute force", "dp", "graphs" ]
2,400
null
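The `code` field for this row is null; a Python sketch of the bitmask search described in the implementation notes (returning only the minimum number of steps; 0-indexed vertices, names are my own):

```python
def min_party_steps(n, edges):
    """Bitmask search from the tutorial: the answer is the size of the
    smallest connected subset whose closed neighbourhood covers all
    n vertices; 0 if the graph is already complete."""
    full = (1 << n) - 1
    neighbours = [1 << v for v in range(n)]     # closed neighbourhood masks
    for u, v in edges:
        neighbours[u] |= 1 << v
        neighbours[v] |= 1 << u
    if all(nb == full for nb in neighbours):
        return 0                                # already a clique
    is_connected = [False] * (1 << n)
    for v in range(n):
        is_connected[1 << v] = True
    best = n
    for m in range(1, 1 << n):
        if not is_connected[m]:
            continue
        reach = 0                               # maskNeighbours[m]
        mm = m
        while mm:
            reach |= neighbours[(mm & -mm).bit_length() - 1]
            mm &= mm - 1
        if reach == full:
            best = min(best, bin(m).count('1'))
        grow = reach & ~m                       # vertices adjacent to m
        while grow:
            w = grow & -grow
            is_connected[m | w] = True          # m | w > m, still connected
            grow &= grow - 1
    return best
```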
906
D
Power Tower
Priests of the Quetzalcoatl cult want to build a tower to represent a power of their god. Tower is usually made of power-charged rocks. It is built with the help of rare magic by levitating the current top of tower and adding rocks at its bottom. If top, which is built from $k - 1$ rocks, possesses power $p$ and we want to add the rock charged with power $w_{k}$ then value of power of a new tower will be ${w_{k}}^{p}$. Rocks are added from the last to the first. That is, for the sequence $w_{1}, ..., w_{m}$ the value of power will be \[ w_{1}^{\left(w_{2}^{\left(\cdots^{\left(w_{m}\right)}\right)}\right)} \] After the tower is built, its power may be extremely large. But still priests want to get some information about it, namely they want to know a number called cumulative power which is the true value of power taken modulo $m$. Priests have $n$ rocks numbered from $1$ to $n$. They ask you to calculate which value of cumulative power will the tower possess if they will build it from rocks numbered $l, l + 1, ..., r$.
Let's learn to calculate $a_{1}^{a_{2}^{\cdots^{a_{n}}}} \bmod m$. Assume that we want to find $n^{x} \bmod m$ where $n$ and $m$ are not necessarily co-prime, and $x$ is some big number which we can calculate only modulo some value. For co-prime $n$ and $m$ we can solve this via Euler's theorem; let's reduce the general case to that one. Note that $an \bmod am = a(n \bmod m)$. Indeed, if $n = d \cdot m + r$, $|r| < m$, then $an = d \cdot am + ar$, $|ar| < |am|$. Let $p_{1}, ..., p_{t}$ be the common prime divisors of $n$ and $m$, let $a = p_{1}^{k_{1}} \cdots p_{t}^{k_{t}}$ be the number divisible by exactly these divisors to the powers in which they occur in $m$, and let $k$ be the least number such that $n^{k} \equiv 0 \pmod{a}$. Then we have the chain \[ n^{x} \bmod m = \left(\frac{n^{k}}{a}\right) a \, n^{x-k} \bmod a\left(\frac{m}{a}\right) = n^{k}\left[n^{x-k} \bmod \frac{m}{a}\right] \bmod m. \] Here $n$ and $m/a$ are co-prime, so we can calculate the power value modulo $\varphi\left(\frac{m}{a}\right)$. Moreover, $k \leq \log_{2} m$, thus the case $x < k$ can be handled in $O(\log m)$. This is already enough to solve the problem, but, continuing, one can prove that for $x \geq \log_{2} m$ it holds that \[ n^{x} \bmod m = n^{\varphi(m) + (x \bmod \varphi(m))} \bmod m, \] where $\varphi(m)$ is the Euler totient function of $m$. Finally, to solve the problem one should note that $\varphi(\varphi(m)) \leq \frac{m}{2}$, so it takes only $O(\log m)$ steps before $m$ turns into $1$.
[ "chinese remainder theorem", "math", "number theory" ]
2,700
null
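The `code` field for this row is null; a Python sketch of the standard clamped-exponent technique that the tutorial's theorem justifies (assuming all $w_i \ge 1$; recursion over the $\varphi$-chain, with exponents kept "clamped" so both the residue and the "is it large" information survive; names are my own):

```python
def phi(m):
    """Euler's totient by trial factorization."""
    result, p = m, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def tower_mod(w, m):
    """Value of w[0]^(w[1]^(...^w[-1])) mod m, using: for x >= log2(m),
    n^x = n^(phi(m) + x mod phi(m)) (mod m). A value v is kept 'clamped':
    v itself if v < mod, else (v mod mod) + mod."""
    def clamp(v, mod):
        return v if v < mod else v % mod + mod

    def cpow(base, exp, mod):          # clamped base**exp by fast power
        result, b = 1, clamp(base, mod)
        while exp:
            if exp & 1:
                result = clamp(result * b, mod)
            b = clamp(b * b, mod)
            exp >>= 1
        return result

    def rec(i, mod):
        if mod == 1:
            return 1                   # clamp of any value >= 1 w.r.t. 1
        if i == len(w) - 1:
            return clamp(w[i], mod)
        e = rec(i + 1, phi(mod))       # clamped exponent suffices
        return cpow(w[i], e, mod)

    return rec(0, m) % m
```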
906
E
Reverses
Hurricane came to Berland and to the suburb Stringsvill. You are going there to check if everything is all right with your favorite string. The hurricane broke it a bit by reversing some of its non-intersecting substrings. You have a photo of this string before the hurricane and you want to restore it to the original state by reversing the minimum possible number of its substrings, and to find out which substrings you should reverse. You are given a string $s$ — the original state of your string, and a string $t$ — the state of the string after the hurricane. You should select $k$ non-intersecting substrings of $t$ in such a way that after reversing these substrings the string becomes equal to $s$, and $k$ is minimum possible.
After the reverses we have the transformation $A_{1}B_{1}A_{2}B_{2}... A_{k}B_{k}A_{k + 1} \rightarrow A_{1}B_{1}^{r}A_{2}B_{2}^{r}... A_{k}B_{k}^{r}A_{k + 1}$. Consider the operator $mix(A, B) = a_{1}b_{1}a_{2}b_{2}... a_{n}b_{n}$ for strings of equal length. Under this operator the string turns into $X_{1}Y_{1}X_{2}Y_{2}... X_{k}Y_{k}X_{k + 1}$, where each $X_{i}$ is a string with all characters doubled and each $Y_{i}$ is an arbitrary palindrome of even length. Let's move through the letters from left to right and keep the minimum number of parts into which the current prefix can be split. The last letter is either inside some palindrome or doubled. For doubled letters we consider $ans_{i} = \min(ans_{i}, ans_{i - 2})$. As for palindromes of even length, one can adapt the standard algorithm of splitting a string into the minimum number of palindromes so that it considers only splittings into even palindromes; for example, one can consider only splits in which every palindrome ends at an even index.
[ "dp", "string suffix structures", "strings" ]
3,300
null
907
A
Masha and Bears
A family consisting of father bear, mother bear and son bear owns three cars. Father bear can climb into the largest car and he likes it. Also, mother bear can climb into the middle car and she likes it. Moreover, son bear can climb into the smallest car and he likes it. It's known that the largest car is strictly larger than the middle car, and the middle car is strictly larger than the smallest car. Masha came to test these cars. She could climb into all cars, but she liked only the smallest car. It's known that a character with size $a$ can climb into some car with size $b$ if and only if $a ≤ b$, he or she likes it if and only if he can climb into this car and $2a ≥ b$. You are given sizes of bears and Masha. Find out some possible integer non-negative sizes of cars.
Sizes of cars should satisfy the following constraints: both Masha and the corresponding bear must be able to climb into the $i$-th car, so its size is at least $max(V_{i}, V_{m})$; each bear likes its car, so the size of the $i$-th car is at most $2 \cdot V_{i}$; Masha doesn't like the first two cars, so their sizes are more than $2 \cdot V_{m}$; Masha likes the last car, so its size is at most $2 \cdot V_{m}$; sizes of cars are strictly ordered: the father's car is strictly larger than the mother's, and the mother's car is strictly larger than the son's. Sizes of bears don't exceed 100; then sizes of cars don't exceed 200, and there are only $200^{3}$ possible triples of sizes. Under the given constraints, one can simply go through all possible triples of sizes and check whether each of them satisfies the constraints above.
[ "brute force", "implementation" ]
1,300
null
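The `code` field for this row is null; the brute force over all ordered triples could be sketched in Python as (names are my own; returns the first valid triple or `None`):

```python
def find_car_sizes(v1, v2, v3, vm):
    """Try all triples c1 > c2 > c3 of sizes up to 200, checking the
    constraints listed in the tutorial."""
    for c1 in range(1, 201):
        for c2 in range(1, c1):
            for c3 in range(1, c2):
                bears_ok = (v1 <= c1 <= 2 * v1 and
                            v2 <= c2 <= 2 * v2 and
                            v3 <= c3 <= 2 * v3)
                masha_ok = (vm <= c3 <= 2 * vm and   # fits and likes smallest
                            2 * vm < c2 and          # fits, doesn't like
                            2 * vm < c1)
                if bears_ok and masha_ok:
                    return c1, c2, c3
    return None
```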
907
B
Tic-Tac-Toe
Two bears are playing tic-tac-toe via mail. It's boring for them to play the usual tic-tac-toe game, so they are playing a modified version of it. Here are its rules. The game is played on the following field. Players are making moves by turns. At first move a player can put his chip in any cell of any small field. For following moves, there are some restrictions: if during last move the opposite player put his chip to cell with coordinates $(x_{l}, y_{l})$ in some small field, the next move should be done in one of the cells of the small field with coordinates $(x_{l}, y_{l})$. For example, if in the first move a player puts his chip to lower left cell of central field, then the second player on his next move should put his chip into some cell of lower left field (pay attention to the first test case). If there are no free cells in the required field, the player can put his chip to any empty cell on any field. You are given current state of the game and coordinates of cell in which the last move was done. You should find all cells in which the current player can put his chip. A hare works as a postman in the forest; he likes to fool the bears. Sometimes he changes the game field a bit, so the current state of the game could be unreachable. However, after his changes the cell where the last move was done is not empty. You don't need to find if the state is unreachable or not, just output possible next moves according to the rules.
Let us describe each cell of the field by four numbers $(x_{b}, y_{b}, x_{s}, y_{s})$, $0 \le x_{b}, y_{b}, x_{s}, y_{s} \le 2$, where $(x_{b}, y_{b})$ are coordinates of the small field containing the cell, and $(x_{s}, y_{s})$ are coordinates of the cell within its small field. For a cell with "usual" coordinates $(x, y)$, $1 \le x, y \le 9$, the following equalities give us a key to go between coordinate systems: $x_{b} = \lfloor (x - 1) / 3 \rfloor$, $y_{b} = \lfloor (y - 1) / 3 \rfloor$; $x_{s} = (x - 1) \bmod 3$, $y_{s} = (y - 1) \bmod 3$; $x = 3 x_{b} + x_{s} + 1$, $y = 3 y_{b} + y_{s} + 1$. In terms of the new coordinates, if the last move was in cell $(x_{b}, y_{b}, x_{s}, y_{s})$, then the next move should be in an arbitrary free cell with coordinates $(x_{s}, y_{s}, x', y')$ for some $x', y' \in [0, 2]$ if possible; otherwise, the next move can be done in an arbitrary free cell. To solve the problem, one can go through all such pairs $(x', y')$ and write "!" in each empty cell $(x_{s}, y_{s}, x', y')$; if there are no such empty cells, write "!" in all empty cells of the field.
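The coordinate conversion and move-marking logic above can be sketched like this. The `markMoves` name, the field representation (nine strings of nine characters, `'.'` for empty), and the row/column orientation are my assumptions, not from an official solution.

```cpp
#include <bits/stdc++.h>
using namespace std;

// f: 9x9 field ('.', 'x', 'o'); (x, y): 1-based coordinates of the last move.
// Marks every cell where the next move may go with '!', per the editorial:
// the target small field is (xs, ys) = ((x-1) mod 3, (y-1) mod 3); if it has
// no free cell, any empty cell of the whole field is allowed.
void markMoves(vector<string>& f, int x, int y) {
    int xs = (x - 1) % 3, ys = (y - 1) % 3;   // cell coords inside the small field
    bool hasFree = false;                      // does the target small field have a free cell?
    for (int i = 0; i < 3 && !hasFree; ++i)
        for (int j = 0; j < 3; ++j)
            if (f[3 * xs + i][3 * ys + j] == '.') { hasFree = true; break; }
    for (int i = 0; i < 9; ++i)
        for (int j = 0; j < 9; ++j) {
            bool inTarget = (i / 3 == xs && j / 3 == ys);
            if (f[i][j] == '.' && (!hasFree || inTarget)) f[i][j] = '!';
        }
}
```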
[ "implementation" ]
1,400
null
908
A
New Year and Counting Cards
Your friend has $n$ cards. You know that each card has a lowercase English letter on one side and a digit on the other. Currently, your friend has laid out the cards on a table so only one side of each card is visible. You would like to know if the following statement is true for cards that your friend owns: "If a card has a vowel on one side, then it has an even digit on the other side." More specifically, a vowel is one of 'a', 'e', 'i', 'o' or 'u', and an even digit is one of '0', '2', '4', '6' or '8'. For example, if a card has 'a' on one side, and '6' on the other side, then this statement is true for it. Also, the statement is true, for example, for a card with 'b' and '4', and for a card with 'b' and '3' (since the letter is not a vowel). The statement is false, for example, for a card with 'e' and '5'. You are interested if the statement is true for all cards. In particular, if no card has a vowel, the statement is true. To determine this, you can flip over some cards to reveal the other side. You would like to know what is the minimum number of cards you need to flip in the worst case in order to verify that the statement is true.
Let's start off a bit more abstractly. We would like to know if the statement "if P then Q" is true, where P and Q are some statements (in this case, P is "card has vowel", and Q is "card has even number"). To determine this, we need to flip over any cards which could be counter-examples (i.e. could make the statement false). Let's look at the truth table for if P then Q (see here: http://www.math.hawaii.edu/~ramsey/Logic/IfThen.html). The statement is only false when Q is false and P is true. Thus, it suffices to flip the cards whose visible side could make P true (a vowel) or Q false (an odd digit). To solve this problem, we need to print the count of vowels and odd digits in the string.
[ "brute force", "implementation" ]
800
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
using ld = long double;

signed main() {
#ifdef LOCAL
    assert(freopen("test.in", "r", stdin));
#endif
    string s;
    cin >> s;
    int res = 0;
    string v = "aeiou";
    for (auto c : s) {
        if (find(v.begin(), v.end(), c) != v.end() || (isdigit(c) && (c - '0') % 2)) ++res;
    }
    cout << res << '\n';
}
908
B
New Year and Buggy Bot
Bob programmed a robot to navigate through a 2d maze. The maze has some obstacles. Empty cells are denoted by the character '.', where obstacles are denoted by '#'. There is a single robot in the maze. Its start position is denoted with the character 'S'. This position has no obstacle in it. There is also a single exit in the maze. Its position is denoted with the character 'E'. This position has no obstacle in it. The robot can only move up, left, right, or down. When Bob programmed the robot, he wrote down a string of digits consisting of the digits 0 to 3, inclusive. He intended for each digit to correspond to a distinct direction, and the robot would follow the directions in order to reach the exit. Unfortunately, he forgot to actually assign the directions to digits. The robot will choose some random mapping of digits to distinct directions. The robot will map distinct digits to distinct directions. The robot will then follow the instructions according to the given string in order and chosen mapping. If an instruction would lead the robot to go off the edge of the maze or hit an obstacle, the robot will crash and break down. If the robot reaches the exit at any point, then the robot will stop following any further instructions. Bob is having trouble debugging his robot, so he would like to determine the number of mappings of digits to directions that would lead the robot to the exit.
This problem is intended as an implementation problem. The bounds are small enough that a brute force works. We can iterate through all mappings of digits to directions (e.g. using next_permutation in C++, or doing some recursive DFS if there is no built-in next permutation in your language), and simulate the robot's movements. We have to be careful that if the robot ever gets into an invalid state, we break out early and declare the mapping invalid.
[ "brute force", "implementation" ]
1,200
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
using ld = long double;

const int maxn = 55;
string s[maxn];

signed main() {
#ifdef LOCAL
    assert(freopen("test.in", "r", stdin));
#endif
    int n, m;
    cin >> n >> m;
    int sx = -1, sy = -1;
    for (int i = 0; i < n; ++i) {
        cin >> s[i];
        for (int j = 0; j < m; ++j) {
            if (s[i][j] == 'S') sx = i, sy = j;
        }
    }
    string w;
    cin >> w;
    vector<int> p{0, 1, 2, 3};
    const int dx[] = {-1, 0, 1, 0};
    const int dy[] = {0, -1, 0, 1};
    int res = 0;
    do {
        int x = sx, y = sy;
        bool fail = false;
        for (auto c : w) {
            x += dx[p[c - '0']];
            y += dy[p[c - '0']];
            if (x >= n || x < 0 || y >= m || y < 0 || s[x][y] == '#') {
                fail = true;
                break;
            }
            if (s[x][y] == 'E') break;
        }
        if (!fail && s[x][y] == 'E') {
            ++res;
        }
    } while (next_permutation(p.begin(), p.end()));
    cout << res << '\n';
}
908
C
New Year and Curling
Carol is currently curling. She has $n$ disks each with radius $r$ on the 2D plane. Initially she has all these disks above the line $y = 10^{100}$. She then will slide the disks towards the line $y = 0$ one by one in order from $1$ to $n$. When she slides the $i$-th disk, she will place its center at the point $(x_{i}, 10^{100})$. She will then push it so the disk’s $y$ coordinate continuously decreases, and $x$ coordinate stays constant. The disk stops once it touches the line $y = 0$ or it touches any previous disk. Note that once a disk stops moving, it will not move again, even if hit by another disk. Compute the $y$-coordinates of centers of all the disks after all disks have been pushed.
This is another simulation problem with some geometry. As we push a disk, we can iterate through all previous disks and see if the difference of their $x$-coordinates is at most $2r$. If so, then these two circles can possibly touch. If they do, we can compute the difference of heights between these two circles using the Pythagorean theorem. In particular, we want to know the change in $y$-coordinate between the two touching circles. We know the hypotenuse (it is $2r$ since the circles are touching), and the change in $x$-coordinate is given, so the change in $y$-coordinate is equal to $\sqrt{4r^{2}-dx^{2}}$, where $dx$ is the change in $x$-coordinate. We can then take the maximum over all of these cases to get our final $y$-coordinate, since this is where this disk will first stop. Overall, this takes $O(n^{2})$ time.
[ "brute force", "geometry", "implementation", "math" ]
1,500
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
using ld = long double;

const int maxn = 2018;
ld x[maxn];
ld y[maxn];

signed main() {
#ifdef LOCAL
    assert(freopen("test.in", "r", stdin));
#endif
    cout << fixed;
    cout.precision(10);
    int n;
    ld r;
    cin >> n >> r;
    for (int i = 0; i < n; ++i) {
        cin >> x[i];
        ld my = r;
        for (int j = 0; j < i; ++j) {
            if (fabsl(x[i] - x[j]) > 2 * r) {
                continue;
            }
            auto sqr = [](ld x) { return x * x; };
            ld cy = y[j] + sqrtl(sqr(2 * r) - sqr(x[i] - x[j]));
            my = max(my, cy);
        }
        y[i] = my;
        cout << my << ' ';
    }
    cout << '\n';
}
908
D
New Year and Arbitrary Arrangement
You are given three integers $k$, $p_{a}$ and $p_{b}$. You will construct a sequence with the following algorithm: Initially, start with the empty sequence. Each second, you do the following. With probability $p_{a} / (p_{a} + p_{b})$, add 'a' to the end of the sequence. Otherwise (with probability $p_{b} / (p_{a} + p_{b})$), add 'b' to the end of the sequence. You stop once there are at least $k$ subsequences that form 'ab'. Determine the expected number of times 'ab' is a subsequence in the resulting sequence. It can be shown that this can be represented by $P / Q$, where $P$ and $Q$ are coprime integers, and $Q \not\equiv 0 \pmod{10^{9}+7}$. Print the value of $P \cdot Q^{-1} \bmod (10^{9}+7)$.
The main trickiness of this problem is that the sequence could potentially get arbitrarily long, but we want an exact answer. In particular, it helps to think about how to reduce this problem to figuring out what state we need to maintain from a prefix of the sequence we've built in order to simulate our algorithm correctly. This suggests a dynamic programming approach. For our dp state, we need to keep track of something in our prefix. First, the most obvious candidate is that we need to keep track of the number of times 'ab' occurs in the prefix (this is so we know when to stop). However, using only this is not enough, as we can't distinguish between the sequences 'ab' and 'abaaaa'. So, this suggests we also need to keep track of the number of 'a's. Let $dp[i][j]$ be the expected answer given that there are $i$ subsequences of the form 'a' in the prefix and $j$ subsequences of the form 'ab' in the prefix. Then, we have something like $dp[i][j] = (p_{a} * dp[i + 1][j] + p_{b} * dp[i][i + j]) / (p_{a} + p_{b})$. The first term in this sum comes from adding another 'a' to our sequence, and the second comes from adding a 'b' to our sequence. We also have $dp[i][j] = j$ if $j \ge k$ as a base case. The answer should be $dp[0][0]$. However, if we run this as is, we'll notice there are still some issues. One is that $i$ can get arbitrarily large here, as we can keep adding 'a's indefinitely. Instead, we can modify our base case to when $i + j \ge k$. In this case, the next time we get a 'b', we will stop, so we can find a closed form for this scenario. The second is that any 'b's that we get before any occurrences of 'a' can essentially be ignored. To fix this, we can adjust our answer to be $dp[1][0]$.
[ "dp", "math", "probabilities" ]
2,200
#include <bits/stdc++.h> using namespace std; using ll = long long; using ld = long double; const int mod = 1e9 + 7; template<typename T> T add(T x) { return x; } template<typename T, typename... Ts> T add(T x, Ts... y) { T res = x + add(y...); if (res >= mod) res -= mod; return res; } template<typename T, typename... Ts> T sub(T x, Ts... y) { return add(x, mod - add(y...)); } template<typename T, typename... Ts> void udd(T &x, Ts... y) { x = add(x, y...); } template<typename T> T mul(T x) { return x; } template<typename T, typename... Ts> T mul(T x, Ts... y) { return (x * 1ll * mul(y...)) % mod; } template<typename T, typename... Ts> void uul(T &x, Ts... y) { x = mul(x, y...); } int bin(int a, ll deg) { int r = 1; while (deg) { if (deg & 1) uul(r, a); deg >>= 1; uul(a, a); } return r; } int inv(int x) { assert(x); return bin(x, mod - 2); } const int maxn = 1005; int d[maxn][maxn]; int k; int pa, pb; int calc(int n, int m) { if (m >= k) { return m; } assert(n <= k); if (n == k) { int mean = mul(sub(1, pb), inv(pb)); return add(m, k, mean); } if (d[n][m] != -1) { return d[n][m]; } int& res = d[n][m]; res = add(mul(pa, calc(n + 1, m)), mul(pb, calc(n, n + m))); return res; } signed main() { #ifdef LOCAL assert(freopen("test.in", "r", stdin)); #endif cin >> k >> pa >> pb; memset(d, -1, sizeof(d)); for (int i = 0; i < maxn; ++i) { for (int j = 0; j < maxn; ++j) { d[i][j] = -1; } } int isum = inv(pa + pb); uul(pa, isum); uul(pb, isum); cout << calc(1, 0) << ' '; }
908
E
New Year and Entity Enumeration
You are given an integer $m$. Let $M = 2^{m} - 1$. You are also given a set of $n$ integers denoted as the set $T$. The integers will be provided in base 2 as $n$ binary strings of length $m$. A set of integers $S$ is called "good" if the following hold. - If $a \in S$, then $a \mathbin{\mathrm{XOR}} M \in S$. - If $a, b \in S$, then $a \mathbin{\mathrm{AND}} b \in S$. - $T \subseteq S$. - All elements of $S$ are less than or equal to $M$. Here, $\mathrm{XOR}$ and $\mathrm{AND}$ refer to the bitwise XOR and bitwise AND operators, respectively. Count the number of good sets $S$, modulo $10^{9} + 7$.
Let's ignore $T$ for now, and try to characterize good sets $S$. For every bit position $i$, consider the bitwise AND of all elements of $S$ which have the $i$-th bit on (note that there is at least one element in $S$ which has the $i$-th bit on, since we can always $\mathrm{XOR}$ by $M$). This is the smallest element of $S$ that contains the $i$-th bit; denote it by $f(i)$. We can notice that the values $f(i)$ form a partition of the bit positions. In particular, there can't be two positions $x, y$ with $f(x) \neq f(y)$ such that $f(x) \mathbin{\mathrm{AND}} f(y) \neq 0$. To show this, let $b = f(x) \mathbin{\mathrm{AND}} f(y)$, and without loss of generality assume $b \neq f(x)$ (it can't be simultaneously equal to $f(x)$ and $f(y)$ since $f(x) \neq f(y)$). If the $x$-th bit of $b$ is on, we get a contradiction, since $b$ is a smaller element than $f(x)$ that contains the $x$-th bit. If the $x$-th bit of $b$ is off, then $f(x) \mathbin{\mathrm{AND}} (b \mathbin{\mathrm{XOR}} M)$ is a smaller element of $S$ that contains the $x$-th bit. So, we can guess at this point that good sets $S$ correspond one-to-one to partitions of $\{1, 2, \ldots, m\}$. Given a partition, we can construct a valid good set as follows. We can observe that a good set $S$ must also be closed under bitwise OR. To prove this, for any two elements $a, b \in S$, $a \mathbin{\mathrm{OR}} b = ((a \mathbin{\mathrm{XOR}} M) \mathbin{\mathrm{AND}} (b \mathbin{\mathrm{XOR}} M)) \mathbin{\mathrm{XOR}} M$. Since the latter is some composition of XORs and ANDs, it is also in $S$. So, given a partition, we can take all unions of subsets of its parts to get $S$. For an actual solution, partitions can be counted with Bell numbers, which is an $O(m^{2})$ dp (see this: http://mathworld.wolfram.com/BellNumber.html ). Now, back to $T$: we can see that it decides some of the partition structure beforehand.
In particular, for each bit position, we can make an $n$-bit mask denoting its value in each of the $n$ given numbers. Positions with different masks must lie in different parts, while positions sharing the same mask can be split arbitrarily. We can find the sizes of these mask groups, then multiply together the number of ways to partition each individual group to get the answer.
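The $O(m^{2})$ Bell-number dp can be sketched as follows. This is a minimal sketch with names of my choosing: $b[i][j]$ counts partitions of $i$ elements into $j$ nonempty blocks (Stirling numbers of the second kind), and the Bell number is the row sum. The final answer is then the product of `bell[size]` over the mask groups.

```cpp
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
const ll MOD = 1e9 + 7;

// bell[i] = number of partitions of an i-element set, modulo 1e9+7.
// Transition: a new element either joins one of the j existing blocks
// (b[i+1][j] += j * b[i][j]) or starts its own block (b[i+1][j+1] += b[i][j]).
vector<ll> bellNumbers(int n) {
    vector<vector<ll>> b(n + 1, vector<ll>(n + 1, 0));
    b[0][0] = 1;
    vector<ll> bell(n + 1, 0);
    for (int i = 0; i <= n; ++i) {
        for (int j = 0; j <= i; ++j) {
            bell[i] = (bell[i] + b[i][j]) % MOD;
            if (i < n) {
                b[i + 1][j] = (b[i + 1][j] + b[i][j] * j) % MOD;
                b[i + 1][j + 1] = (b[i + 1][j + 1] + b[i][j]) % MOD;
            }
        }
    }
    return bell;
}
```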
[ "bitmasks", "combinatorics", "dp", "math" ]
2,500
#include <bits/stdc++.h> using namespace std; using ll = long long; using ld = long double; const int mod = 1e9 + 7; template<typename T> T add(T x) { return x; } template<typename T, typename... Ts> T add(T x, Ts... y) { T res = x + add(y...); if (res >= mod) res -= mod; return res; } template<typename T, typename... Ts> T sub(T x, Ts... y) { return add(x, mod - add(y...)); } template<typename T, typename... Ts> void udd(T &x, Ts... y) { x = add(x, y...); } template<typename T> T mul(T x) { return x; } template<typename T, typename... Ts> T mul(T x, Ts... y) { return (x * 1ll * mul(y...)) % mod; } template<typename T, typename... Ts> void uul(T &x, Ts... y) { x = mul(x, y...); } int bin(int a, ll deg) { int r = 1; while (deg) { if (deg & 1) uul(r, a); deg >>= 1; uul(a, a); } return r; } int inv(int x) { assert(x); return bin(x, mod - 2); } const int maxn = 1005; int b[maxn][maxn]; int bsum[maxn]; void pre() { b[0][0] = 1; for (int i = 0; i < maxn - 1; ++i) { for (int j = 0; j <= i; ++j) { udd(bsum[i], b[i][j]); udd(b[i + 1][j], mul(b[i][j], j)); udd(b[i + 1][j + 1], b[i][j]); } } } signed main() { #ifdef LOCAL assert(freopen("test.in", "r", stdin)); #endif pre(); int m, n; cin >> m >> n; vector<string> v(m, string(n, ' ')); for (int i = 0; i < n; ++i) { string s; cin >> s; for (int j = 0; j < m; ++j) { v[j][i] = s[j]; } } sort(v.begin(), v.end()); int res = 1; for (int i = 0; i < m; ++i) { int r = i; while (r < m && v[r] == v[i]) { ++r; } cerr << "cnt: " << r - i << ' '; uul(res, bsum[r - i]); i = r - 1; } cout << res << ' '; }
908
F
New Year and Rainbow Roads
Roy and Biv have a set of $n$ points on the infinite number line. Each point has one of 3 colors: red, green, or blue. Roy and Biv would like to connect all the points with some edges. Edges can be drawn between any of the two of the given points. The cost of an edge is equal to the distance between the two points it connects. They want to do this in such a way that they will both see that all the points are connected (either directly or indirectly). However, there is a catch: Roy cannot see the color red and Biv cannot see the color blue. Therefore, they have to choose the edges in such a way that if all the red points are removed, the remaining blue and green points are connected (and similarly, if all the blue points are removed, the remaining red and green points are connected). Help them compute the minimum cost way to choose edges to satisfy the above constraints.
Let's make a few simplifying observations. First, it is not optimal to connect a red and a blue point directly: neither Roy nor Biv will see this edge. Second, if we have a sequence like red, green, red (or similarly blue, green, blue), it is not optimal to connect the outer two red nodes: we can replace the outer edge with two edges of the same total weight, and this replacement forms a cycle among the red-green points, so we can remove one of its edges for a cheaper cost. With these two observations, we can construct a solution as follows. First, split the points by the green points; each section is now independent. There are then two cases: the outer green points are not directly connected, in which case we must connect all the red/blue points along the line (for a cost of 2 * length of segment), or the outer green points are directly connected, in which case we can omit the heaviest red gap and the heaviest blue gap (for a cost of 3 * length of segment - heaviest red gap - heaviest blue gap). Taking the minimum of the two cases per segment, this problem can be solved in linear time. Be careful about the case with no green points.
[ "graphs", "greedy", "implementation" ]
2,400
#include <bits/stdc++.h> using namespace std; using ll = long long; using ld = long double; signed main() { #ifdef LOCAL assert(freopen("test.in", "r", stdin)); #endif int n; cin >> n; ll pr = -1e12; ll gmin = 1e12, gmax = -1e12; ll bmin = 1e12, bmax = -1e12; ll rmin = 1e12, rmax = -1e12; vector<int> p[2]; ll res = 0; for (int i = 0; i <= n; ++i) { ll x; char c; if (i < n) { cin >> x >> c; } else { x = 1e12, c = 'G'; } if (c == 'G') { if (i < n) { gmin = min<ll>(gmin, x); gmax = max<ll>(gmax, x); } ll cur1 = x - pr, cur2 = (x - pr) * 2; if (cur1 >= 1e11) { cur1 = 0; } for (int q = 0; q < 2; ++q) { if (!p[q].empty()) { cur1 += x - pr; ll mx = max<ll>(p[q].front() - pr, x - p[q].back()); for (int j = 0; j < (int) p[q].size() - 1; ++j) { mx = max<ll>(mx, p[q][j + 1] - p[q][j]); } cur1 -= mx; p[q].clear(); } } res += min(cur1, cur2); pr = x; } else if (c == 'R') { p[0].push_back(x); rmin = min<ll>(rmin, x); rmax = max<ll>(rmax, x); } else { p[1].push_back(x); bmin = min<ll>(bmin, x); bmax = max<ll>(bmax, x); } } if (gmin > gmax) { ll res = 0; if (rmin < rmax) { res += rmax - rmin; } if (bmin < bmax) { res += bmax - bmin; } cout << res << ' '; return 0; } cout << res << ' '; }
908
G
New Year and Original Order
Let $S(n)$ denote the number that represents the digits of $n$ in sorted order. For example, $S(1) = 1, S(5) = 5, S(50394) = 3459, S(353535) = 333555$. Given a number $X$, compute $\textstyle\sum_{1\leq k\leq X}S(k)$ modulo $10^{9} + 7$.
This is a digit dp problem. Let's try to solve the subproblem "in how many of the numbers $k \le X$ is the $i$-th digit of $S(k)$ at least $j$?". Let's fix $j$, and solve this with dp. We have a dp state dp[a][b][c] = number of ways given we've considered the first $a$ digits of $X$, we need $b$ more occurrences of digits at least $j$, and $c$ is a boolean saying whether or not the prefix is already strictly less than the corresponding prefix of $X$. For a fixed digit $j$, we can compute this dp table in $O(n^{2})$ time, and then compute the answers to our subproblem for each $i$ (i.e. by varying $b$ in our table).
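The per-digit dp described above can be sketched as follows. This is my own sketch, not the original solution: it uses the equivalent identity $S(k) = \sum_{j=1}^{9} \mathrm{repunit}(c_{j})$, where $c_{j}$ is the number of digits of $k$ that are at least $j$ and $\mathrm{repunit}(c) = (10^{c} - 1)/9$ (a digit $d$ equals $\sum_{j} [d \ge j]$, and in sorted order the digits $\ge j$ occupy the last $c_{j}$ positions). For each $j$ a tight/free digit dp counts how many $k \le X$ have each value of $c_{j}$, which is the same $O(n^{2})$-per-digit table as in the editorial.

```cpp
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
const ll MOD = 1e9 + 7;

ll pw(ll a, ll e) {
    ll r = 1; a %= MOD;
    while (e) { if (e & 1) r = r * a % MOD; a = a * a % MOD; e >>= 1; }
    return r;
}

// Sum of S(k) for 1 <= k <= X (X given as a decimal string), modulo 1e9+7.
ll sumSorted(const string& x) {
    int n = x.size();
    ll inv9 = pw(9, MOD - 2), ans = 0;
    for (int j = 1; j <= 9; ++j) {
        // freeCnt[c] = prefixes already strictly below X with c digits >= j;
        // numbers shorter than X are handled as zero-padded (0 < j never counts).
        vector<ll> freeCnt(n + 2, 0);
        int tightCnt = 0; // digits >= j along the tight (equal-to-X) prefix
        for (int i = 0; i < n; ++i) {
            int d = x[i] - '0';
            vector<ll> nxt(n + 2, 0);
            for (int c = 0; c <= i; ++c) {
                if (!freeCnt[c]) continue;
                nxt[c] = (nxt[c] + freeCnt[c] * j) % MOD;                 // next digit in 0..j-1
                nxt[c + 1] = (nxt[c + 1] + freeCnt[c] * (10 - j)) % MOD;  // next digit in j..9
            }
            for (int e = 0; e < d; ++e) // leave the tight path at position i
                nxt[tightCnt + (e >= j ? 1 : 0)] =
                    (nxt[tightCnt + (e >= j ? 1 : 0)] + 1) % MOD;
            if (d >= j) ++tightCnt;
            freeCnt = move(nxt);
        }
        freeCnt[tightCnt] = (freeCnt[tightCnt] + 1) % MOD; // k = X itself
        for (int c = 1; c <= n; ++c)  // add repunit(c) per number with count c
            ans = (ans + freeCnt[c] * ((pw(10, c) - 1 + MOD) % MOD) % MOD * inv9) % MOD;
    }
    return ans;
}
```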
[ "dp", "math" ]
2,800
// vvvvvvvvvvvvvvvvv Library code start #define NDEBUG NDEBUG #include <algorithm> #include <array> #include <bitset> #include <cassert> #include <cstring> #include <cmath> #include <functional> #include <iomanip> #include <iostream> #include <map> #include <set> #include <sstream> #include <string> #include <tuple> #include <unordered_map> #include <unordered_set> #include <vector> #include <memory> #include <queue> #include <random> #define forn(t, i, n) for (t i = 0; i < (n); ++i) #define foran(t, i, a, n) for (t i = (a); i < (n); ++i) #define rforn(t, i, n) for (t i = (n) - 1; i >= 0; --i) using namespace std; // TC_REMOVE_BEGIN /// caide keep bool __hack = std::ios::sync_with_stdio(false); /// caide keep auto __hack1 = cin.tie(nullptr); // TC_REMOVE_END // Section with adoption of array and vector algorithms. namespace option_detail { /// caide keep struct NoneHelper {}; } template<class Value> class Option { public: static_assert(!std::is_reference<Value>::value, "Option may not be used with reference types"); static_assert(!std::is_abstract<Value>::value, "Option may not be used with abstract types"); Value* get_pointer() && = delete; // Return b copy of the value if set, or b given default if not. // Return b copy of the value if set, or b given default if not. private: struct StorageTriviallyDestructible { // uninitialized bool hasValue; }; /// caide keep struct StorageNonTriviallyDestructible { // uninitialized union { Value value; }; bool hasValue; ~StorageNonTriviallyDestructible() { clear(); } void clear() { if (hasValue) { hasValue = false; value.~Value(); } } }; /// caide keep using Storage = typename std::conditional<std::is_trivially_destructible<Value>::value, StorageTriviallyDestructible, StorageNonTriviallyDestructible>::type; Storage storage_; }; // Comparisons. 
template<class V> bool operator< (const Option<V>&, const V& other) = delete; template<class V> bool operator<=(const Option<V>&, const V& other) = delete; template<class V> bool operator>=(const Option<V>&, const V& other) = delete; template<class V> bool operator> (const Option<V>&, const V& other) = delete; template<class V> bool operator< (const V& other, const Option<V>&) = delete; template<class V> bool operator<=(const V& other, const Option<V>&) = delete; template<class V> bool operator>=(const V& other, const Option<V>&) = delete; template<class V> bool operator> (const V& other, const Option<V>&) = delete; #define ENABLE_IF(e) typename enable_if<e>::type* = nullptr namespace template_util { constexpr int bytecount(uint64_t x) { return x ? 1 + bytecount(x >> 8) : 0; } template<int N> struct bytetype { typedef uint64_t type; }; template<> struct bytetype<4> { typedef uint32_t type; }; template<> struct bytetype<1> { typedef uint8_t type; }; template<> struct bytetype<0> { typedef uint8_t type; }; /// caide keep template<uint64_t N> struct minimal_uint : bytetype<bytecount(N)> { }; } namespace index_iterator_impl { template <class T> struct member_dispatch_helper { private: T value; }; // Have to caide keep all the members to comply to iterator concept // Otherwise generated code won't be portable between clang and g++ template <class C, bool reverse = false> struct index_iterator { /// caide keep typedef random_access_iterator_tag iterator_category; /// caide keep typedef decltype(((C*)nullptr)->operator[](size_t(0))) reference; /// caide keep typedef typename remove_reference<reference>::type value_type; /// caide keep typedef ptrdiff_t difference_type; /// caide keep typedef conditional< is_reference<reference>::value, typename add_pointer<value_type>::type, member_dispatch_helper<value_type>> pointer; /// caide keep typedef index_iterator<C, reverse> self_t; /// caide keep static const difference_type dir = reverse ? 
-1 : 1; /// caide keep index_iterator() = default; /// caide keep inline bool operator!=(const self_t& o) { return index != o.index; } /// caide keep inline bool operator<(const self_t& o) { return reverse ? index > o.index : index < o.index; } /// caide keep inline bool operator>(const self_t& o) { return reverse ? index < o.index : index > o.index; } /// caide keep inline bool operator<=(const self_t& o) { return reverse ? index >= o.index : index <= o.index; } /// caide keep inline bool operator>=(const self_t& o) { return reverse ? index <= o.index : index >= o.index; } /// caide keep inline reference operator*() { return (*container)[index]; } /// caide keep inline const reference operator*() const { return (*container)[index]; } /// caide keep inline pointer operator->() { return pointer((*container)[index]); } /// caide keep inline self_t& operator++() { index += dir; return *this; } /// caide keep inline self_t operator++(int) { auto copy = *this; index += dir; return copy; } /// caide keep inline self_t& operator--() { index -= dir; return *this; } /// caide keep inline self_t operator--(int) { auto copy = *this; index -= dir; return copy; } /// caide keep inline self_t& operator+=(difference_type n) { index += dir * n; return *this; }; /// caide keep inline self_t& operator-=(difference_type n) { index -= dir * n; return *this; }; /// caide keep inline friend self_t operator-(self_t a, difference_type n) { return a -= n; }; /// caide keep inline friend self_t operator+(difference_type n, self_t a) { return a += n; }; /// caide keep inline friend self_t operator+(self_t a, difference_type n) { return a += n; }; /// caide keep inline friend difference_type operator-(const self_t& a, const self_t& b) { return dir * (a.index - b.index); }; /// caide keep inline reference operator[](difference_type n) { return (*container)[index + dir * n]; }; /// caide keep inline const reference operator[](difference_type n) const { return (*container)[index + dir * n]; }; 
private: C* container; difference_type index; }; } namespace multivec_impl { template <size_t NDIMS> struct shape { size_t dim, stride; shape<NDIMS - 1> subshape; shape(size_t dim_, shape<NDIMS - 1>&& s): dim(dim_), stride(s.size()), subshape(std::move(s)) {} size_t size() const { return dim * stride; } }; template <> struct shape<0> { size_t size() const { return 1; } }; template <size_t I, size_t NDIMS> struct __shape_traverse { ///caide keep static const shape<NDIMS - I>& get_subshape(const shape<NDIMS>& s) { return __shape_traverse<I - 1, NDIMS - 1>::get_subshape(s.subshape); } }; template <size_t NDIMS> struct __shape_traverse<0, NDIMS> { static const shape<NDIMS>& get_subshape(const shape<NDIMS>& s) { return s; } }; ///caide keep template <size_t I, size_t NDIMS> const shape<NDIMS - I>& get_subshape(const shape<NDIMS>& s) { return __shape_traverse<I, NDIMS>::get_subshape(s); } template <class Index, class... Rest, size_t NDIMS, ENABLE_IF(is_integral<Index>::value)> size_t get_shift(const shape<NDIMS>& s, size_t cur_shift, Index i, Rest... is) { assert(0 <= i && i < s.dim); return get_shift(s.subshape, cur_shift + i * s.stride, is...); } template <size_t NDIMS> size_t get_shift(const shape<NDIMS>&, size_t cur_shift) { return cur_shift; } template <class... T> shape<sizeof...(T)> make_shape(T... dims); template <class Dim, class... Rest, ENABLE_IF(is_integral<Dim>::value)> shape<sizeof...(Rest) + 1> make_shape(Dim dim, Rest... dims) { assert(dim >= 0); return {(size_t)dim, make_shape<Rest...>(dims...)}; } template <> shape<0> make_shape() { return {}; } ///caide keep template <class T, size_t NDIMS> struct vec_view_base; template <template<class, size_t> class Base, class T, size_t NDIMS> struct vec_mixin : public Base<T, NDIMS> { using Base<T, NDIMS>::Base; /// caide keep typedef Base<T, NDIMS> B; ///caide keep template <class... Indices, bool enabled = NDIMS == sizeof...(Indices), ENABLE_IF(enabled)> inline T& operator()(Indices... 
is) { size_t i = multivec_impl::get_shift(B::s, 0, is...); return B::data[i]; } ///caide keep template <class... Indices, bool enabled = sizeof...(Indices) < NDIMS, ENABLE_IF(enabled)> inline vec_mixin<vec_view_base, T, NDIMS - sizeof...(Indices)> operator()(Indices... is) { size_t shift = multivec_impl::get_shift(B::s, 0, is...); const auto& subshape = multivec_impl::get_subshape<sizeof...(Indices)>(B::s); return {subshape, &B::data[shift]}; } inline void fill(const T& val) { std::fill(raw_data(), raw_data() + B::s.size(), val); }; // protected: inline T* raw_data() { return &B::data[0]; } }; template <class T, size_t NDIMS> struct vec_view_base { inline vec_view_base(const multivec_impl::shape<NDIMS>& s_, T* data_): s(s_), data(data_) {} protected: multivec_impl::shape<NDIMS> s; T* data; }; template <class T, size_t NDIMS> struct vec_base { inline vec_base(multivec_impl::shape<NDIMS>&& s_): s(move(s_)), data(new T[s.size()]) {} inline vec_base(const vec_base& o): s(o.s), data(new T[s.size()]) { memcpy(data.get(), o.data.get(), sizeof(T) * s.size()); } protected: multivec_impl::shape<NDIMS> s; unique_ptr<T[]> data; }; } /* TODO - do we need vec_view_const? - add more features (lambda initialization etc.) - properly use const - proper tests coverage */ template <class T, size_t NDIMS> using vec = multivec_impl::vec_mixin<multivec_impl::vec_base, T, NDIMS>; template <class T, class... NDIMS> inline vec<T, sizeof...(NDIMS)> make_vec(NDIMS... dims) { return {multivec_impl::make_shape(dims...)}; } /* TODOs: primitive root discrete log tests!!! 
*/ namespace mod_impl { /// caide keep template <class T> constexpr inline T mod(T MOD) { return MOD; } /// caide keep template <class T> constexpr inline T mod(T* MOD) { return *MOD; } /// caide keep template <class T> constexpr inline T max_mod(T MOD) { return MOD - 1; } /// caide keep template <class T> constexpr inline T max_mod(T*) { return numeric_limits<T>::max() - 1; } constexpr inline uint64_t combine_max_sum(uint64_t a, uint64_t b) { return a > ~b ? 0 : a + b; } constexpr inline uint64_t combine_max_mul(uint64_t a, uint64_t b) { return a > numeric_limits<uint64_t>::max() / b ? 0 : a * b; } /// caide keep template <class T> constexpr inline uint64_t next_divisible(T mod, uint64_t max) { return max % mod == 0 ? max : combine_max_sum(max, mod - max % mod); } /// caide keep template <class T> constexpr inline uint64_t next_divisible(T*, uint64_t) { return 0; } //caide keep constexpr int IF_THRESHOLD = 2; template <class T, T MOD_VALUE, uint64_t MAX, class RET = typename template_util::minimal_uint<max_mod(MOD_VALUE)>::type, ENABLE_IF(MAX <= max_mod(MOD_VALUE) && !is_pointer<T>::value)> inline RET smart_mod(typename template_util::minimal_uint<MAX>::type value) { return value; } template <class T, T MOD_VALUE, uint64_t MAX, class RET = typename template_util::minimal_uint<max_mod(MOD_VALUE)>::type, ENABLE_IF(max_mod(MOD_VALUE) < MAX && MAX <= IF_THRESHOLD * max_mod(MOD_VALUE) && !is_pointer<T>::value)> inline RET smart_mod(typename template_util::minimal_uint<MAX>::type value) { while (value >= mod(MOD_VALUE)) { value -= mod(MOD_VALUE); } return (RET)value; } template <class T, T MOD_VALUE, uint64_t MAX, class RET = typename template_util::minimal_uint<max_mod(MOD_VALUE)>::type, ENABLE_IF(IF_THRESHOLD * max_mod(MOD_VALUE) < MAX || is_pointer<T>::value)> inline RET smart_mod(typename template_util::minimal_uint<MAX>::type value) { return (RET)(value % mod(MOD_VALUE)); } } #define MAX_MOD mod_impl::max_mod(MOD_VALUE) struct DenormTag {}; template <class T, T 
MOD_VALUE, uint64_t MAX = MAX_MOD, ENABLE_IF(MAX_MOD >= 2)> struct ModVal { typedef typename template_util::minimal_uint<MAX>::type storage; storage value; /// caide keep inline ModVal(): value(0) { assert(MOD >= 2); } inline ModVal(storage v, DenormTag): value(v) { assert(MOD >= 2); assert(v <= MAX); }; inline operator ModVal<T, MOD_VALUE>() { return {v(), DenormTag()}; }; typename template_util::minimal_uint<mod_impl::max_mod(MOD_VALUE)>::type v() const { return mod_impl::smart_mod<T, MOD_VALUE, MAX>(value); } }; template <class T, T MOD_VALUE, uint64_t MAX1, uint64_t MAX2, uint64_t NEW_MAX = mod_impl::combine_max_sum(MAX1, MAX2), ENABLE_IF(NEW_MAX != 0), class Ret = ModVal<T, MOD_VALUE, NEW_MAX>> inline Ret operator+(ModVal<T, MOD_VALUE, MAX1> o1, ModVal<T, MOD_VALUE, MAX2> o2) { return {typename Ret::storage(typename Ret::storage() + o1.value + o2.value), DenormTag()}; } template <class T, T MOD_VALUE, uint64_t MAX1, uint64_t MAX2, uint64_t NEW_MAX = mod_impl::combine_max_mul(MAX1, MAX2), ENABLE_IF(NEW_MAX != 0), class Ret = ModVal<T, MOD_VALUE, NEW_MAX>> inline Ret operator*(ModVal<T, MOD_VALUE, MAX1> o1, ModVal<T, MOD_VALUE, MAX2> o2) { return {typename Ret::storage(typename Ret::storage(1) * o1.value * o2.value), DenormTag()}; } template <class T, T MOD_VALUE, uint64_t MAX> inline ModVal<T, MOD_VALUE>& operator+=(ModVal<T, MOD_VALUE>& lhs, const ModVal<T, MOD_VALUE, MAX>& rhs) { lhs = lhs + rhs; return lhs; } template <class T, T MOD_VALUE, uint64_t MAX> inline ModVal<T, MOD_VALUE>& operator*=(ModVal<T, MOD_VALUE>& lhs, const ModVal<T, MOD_VALUE, MAX>& rhs) { lhs = lhs * rhs; return lhs; } template <class T, T MOD_VALUE, class MOD_TYPE> struct ModCompanion { typedef MOD_TYPE mod_type; typedef ModVal<mod_type, MOD_VALUE> type; template <uint64_t C> inline static constexpr ModVal<mod_type, MOD_VALUE, C> c() { return {C, DenormTag()}; }; template <uint64_t MAX = numeric_limits<uint64_t>::max()> inline static ModVal<mod_type, MOD_VALUE, MAX> wrap(uint64_t x) { 
assert(x <= MAX); return {typename ModVal<mod_type, MOD_VALUE, MAX>::storage(x), DenormTag()}; }; }; #undef MAX_MOD template <uint64_t MOD_VALUE> struct Mod : ModCompanion<uint64_t, MOD_VALUE, typename template_util::minimal_uint<MOD_VALUE>::type> { template<uint64_t VAL> static constexpr uint64_t literal_builder() { return VAL; } template<uint64_t VAL, char DIGIT, char... REST> static constexpr uint64_t literal_builder() { return literal_builder<(10 * VAL + DIGIT - '0') % MOD_VALUE, REST...>(); } }; #define REGISTER_MOD_LITERAL(mod, suffix) template <char... DIGITS> mod::type operator "" _##suffix() { return mod::c<mod::literal_builder<0, DIGITS...>()>(); } template <class T, T MOD_VALUE, uint64_t MAX> inline ostream& operator<<(ostream& s, ModVal<T, MOD_VALUE, MAX> val) { s << val.v(); return s; } template<class T> T next(istream& in) { T ret; in >> ret; return ret; } #define dbg(...) ; // ^^^^^^^^^^^^^^^^^ Library code end using md = Mod<1000000007>; /// caide keep using mt = md::type; REGISTER_MOD_LITERAL(md, mod) constexpr int MAX_LEN = 700; auto precalc = []() { auto precalc = make_vec<mt>(MAX_LEN, MAX_LEN, 10); precalc(0, 0).fill(1_mod); foran (int, len, 1, MAX_LEN) { forn (int, cnt, len + 1) { forn (int, d, 10) { precalc(len, cnt, d) = md::wrap<10>(d) * precalc(len - 1, cnt, d); if (cnt > 0) { precalc(len, cnt, d) += md::wrap<10>(10 - d) * precalc(len - 1, cnt - 1, d); } } } } return precalc; }(); void solve(istream& in, ostream& out) { auto x = next<string>(in); int n = x.size(); auto din = make_vec<mt>(n + 1, 10); din.fill(0_mod); auto prefixCounts = make_vec<int>(10); prefixCounts.fill(0); forn (int, i, n) { dbg(prefixCounts); int restLen = n - i - 1; forn (int, d, x[i] - '0' + (i == n - 1 ? 1 : 0)) { dbg(i, d); forn (int, c, 10) { forn (int, cnt, restLen + 1) { din(cnt + prefixCounts(c) + (d >= c ? 
1 : 0), c) += precalc(restLen, cnt, c); } } } forn (int, d, x[i] - '0' + 1) { prefixCounts(d)++; } } forn (int, c, 10) { rforn (int, i, n) { din(i, c) += din(i + 1, c); } } auto ans = 0_mod; auto pow10 = 1_mod; foran (int, i, 1, n + 1) { auto sum = 0_mod; foran (int, c, 1, 10) { sum += din(i, c); } ans += sum * pow10; pow10 *= 10_mod; } out << ans << " "; } int main() { solve(cin, cout); return 0; }
908
H
New Year and Boolean Bridges
Your friend has a hidden directed graph with $n$ nodes. Let $f(u, v)$ be true if there is a directed path from node $u$ to node $v$, and false otherwise. For each pair of distinct nodes, $u, v$, you know at least one of the three statements is true: - $f(u,v)\ \mathrm{AND}\ f(v,u)$ - $f(u,v)\ \mathrm{OR}\ f(v,u)$ - $f(u,v)\ \mathrm{XOR}\ f(v,u)$ Here AND, OR and XOR mean AND, OR and exclusive OR operations, respectively. You are given an $n$ by $n$ matrix saying which one of the three statements holds for each pair of vertices. The entry in the $u$-th row and $v$-th column has a single character. - If the first statement holds, this is represented by the character 'A'. - If the second holds, this is represented by the character 'O'. - If the third holds, this is represented by the character 'X'. - The diagonal of this matrix will only contain the character '-'. Note that it is possible that a pair of nodes may satisfy multiple statements, in which case, the character given will represent one of the true statements for that pair. This matrix is also guaranteed to be symmetric. You would like to know if there is a directed graph that is consistent with this matrix. If it is impossible, print the integer -1. Otherwise, print the minimum number of edges that could be consistent with this information.
First, let's find connected components using only AND edges. If there are any XOR edges between two nodes in the same component, the answer is -1. Now, we can place all components in a line. However, it may be optimal to merge some components together. It only makes sense to merge components of size 2 or more, of which there are at most $k = n / 2$. We can make a new graph with these $k$ components. Two components have an edge if all of the edges between them are OR edges; otherwise, there is no edge. We want to know the minimum number of cliques needed to cover all the nodes in this graph. To solve this, we can precompute which subsets of nodes form a clique and put this into some array. Then, we can use the fast Walsh-Hadamard transform to multiply this array onto itself until the element $2^{k} - 1$ is nonzero. Naively, this is $O(2^{k} \cdot k^{2})$, but we can save a factor of $k$ by noticing we only need to compute the last element, and we don't need to re-transform our input array at each iteration.
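The clique-cover counting step can be sketched directly. This is a small illustration (function name `minCliqueCover` and the adjacency-bitmask input are our own choices), using the zeta-transform / inclusion-exclusion form of OR-convolution, which is the same idea the editorial's transform-and-multiply trick optimizes: the number of ordered covers by $t$ cliques equals $\sum_{S} (-1)^{k-|S|}\hat f[S]^t$, where $\hat f$ is the subset-sum transform of the clique indicator.

```cpp
#include <cstdint>
#include <vector>
using namespace std;

// Minimum number of cliques covering all k vertices of a graph given by
// adjacency bitmasks adj[u] (bit v set iff edge u-v exists).
int minCliqueCover(const vector<uint32_t>& adj) {
    int k = adj.size(), full = (1 << k) - 1;
    vector<long long> f(1 << k, 0);
    for (int s = 0; s <= full; ++s) {        // f[s] = 1 iff s is a clique
        bool clique = true;
        for (int u = 0; u < k && clique; ++u)
            if ((s >> u) & 1)
                if ((s & ~(adj[u] | (1u << u))) != 0) clique = false;
        f[s] = clique;
    }
    for (int u = 0; u < k; ++u)              // zeta transform: sum over subsets
        for (int s = 0; s <= full; ++s)
            if ((s >> u) & 1) f[s] += f[s ^ (1 << u)];
    for (int t = 1; ; ++t) {                 // smallest t with a nonzero count
        long long total = 0;
        for (int s = 0; s <= full; ++s) {
            long long p = 1;
            for (int i = 0; i < t; ++i) p *= f[s];
            total += (__builtin_popcount(full ^ s) % 2 ? -1 : 1) * p;
        }
        if (total != 0) return t;
    }
}
```

The loop always terminates with $t \le k$, since every singleton is a clique; the attached solution additionally tracks all powers in one pass to avoid recomputing.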
[]
3,100
// vvvvvvvvvvvvvvvvv Library code start #define NDEBUG NDEBUG #include <algorithm> #include <array> #include <bitset> #include <cassert> #include <cstring> #include <cmath> #include <functional> #include <iomanip> #include <iostream> #include <map> #include <set> #include <sstream> #include <string> #include <tuple> #include <unordered_map> #include <unordered_set> #include <vector> #include <memory> #include <queue> #include <random> #define forn(t, i, n) for (t i = 0; i < (n); ++i) #define rforn(t, i, n) for (t i = (n) - 1; i >= 0; --i) using namespace std; // TC_REMOVE_BEGIN /// caide keep bool __hack = std::ios::sync_with_stdio(false); /// caide keep auto __hack1 = cin.tie(nullptr); // TC_REMOVE_END template <class T> T gen_pow(T ret, T x, uint64_t pow) { while (pow) { if (pow & 1) { ret *= x; } pow /= 2; if (pow) { x *= x; } } return ret; } // Section with adoption of array and vector algorithms. namespace template_util { constexpr int bytecount(uint64_t x) { return x ? 1 + bytecount(x >> 8) : 0; } template<int N> struct bytetype { }; /// caide keep template<uint64_t N> struct minimal_uint : bytetype<bytecount(N)> { }; } template<class T> T next(istream& in) { T ret; in >> ret; return ret; } template<class T> vector<T> next_vector(istream& in, size_t n) { vector<T> ret(n); for (size_t i = 0; i < n; ++i) { ret[i] = next<T>(in); } return ret; } template <class T> inline T set_bit(T mask, int bit) { assert(0 <= bit && bit < numeric_limits<T>::digits); return mask | ((T)(1) << bit); } template <class T> inline bool get_bit(T mask, int bit) { assert(0 <= bit && bit < numeric_limits<T>::digits); return mask & ((T)(1) << bit); } inline int bitcnt(unsigned int mask) { return __builtin_popcount(mask); } inline int bitcnt(unsigned long long mask) { return __builtin_popcountll(mask); } // ^^^^^^^^^^^^^^^^^ Library code end void solve(istream& in, ostream& out) { int n = next<int>(in); vector<string> g0 = next_vector<string>(in, n); vector<uint64_t> comps; uint64_t col = 
0; function<uint64_t(int)> dfs = [&](int i) -> uint64_t { if (get_bit(col, i)) { return 0; } col = set_bit(col, i); uint64_t ret = 1ULL << i; forn (int, j, n) { if (g0[i][j] == 'A' || g0[j][i] == 'A') { ret |= dfs(j); } } return ret; }; forn (int, i, n) { uint64_t comp = dfs(i); if (bitcnt(comp) > 1) { comps.push_back(comp); } } vector<uint32_t> g(comps.size()); forn (int, i, comps.size()) { uint64_t ns = 0; forn (int, u, n) { if (!get_bit(comps[i], u)) { continue; } forn (int, v, n) { if (g0[u][v] == 'X' || g0[v][u] == 'X') { ns |= 1ULL << v; } } } forn (int, j, comps.size()) { if ((ns & comps[j]) != 0) { g[i] |= 1 << j; } } if (get_bit(g[i], i)) { out << "-1 "; return; } } vector<uint32_t> f(1 << comps.size()); rforn (int, i, 1 << comps.size()) { forn (int, u, comps.size()) { if (get_bit(i, u) ? ((g[u] & i) != 0) : f[set_bit(i, u)]) { goto cont; } } f[i] = 1; cont:; } forn (int, u, comps.size()) { forn (int, i, 1 << comps.size()) { if (get_bit(i, u)) { f[i] += f[i ^ (1 << u)]; } } } vector<uint64_t> counts(comps.size() + 1); forn (uint32_t, i, 1 << comps.size()) { uint64_t pow = bitcnt(i) % 2 ? -1ULL : 1ULL; uint64_t val = f[i]; forn (int, k, comps.size() + 1) { counts[k] += pow; pow *= val; } } int k = 0; while (counts[k] == 0) { k++; } out << n - 1 + k << " "; } int main() { solve(cin, cout); return 0; }
909
A
Generate Login
The preferred way to generate user login in Polygon is to concatenate a prefix of the user's first name and a prefix of their last name, in that order. Each prefix must be non-empty, and any of the prefixes can be the full name. Typically there are multiple possible logins for each person. You are given the first and the last name of a user. Return the alphabetically earliest login they can get (regardless of other potential Polygon users). As a reminder, a prefix of a string $s$ is its substring which occurs at the beginning of $s$: "a", "ab", "abc" etc. are prefixes of string "abcdef" but "b" and "bc" are not. A string $a$ is alphabetically earlier than a string $b$, if $a$ is a prefix of $b$, or $a$ and $b$ coincide up to some position, and then $a$ has a letter that is alphabetically earlier than the corresponding letter in $b$: "a" and "ab" are alphabetically earlier than "ac" but "b" and "ba" are alphabetically later than "ac".
The most straightforward solution is to generate all possible logins (by trying all non-empty prefixes of first and last names and combining them) and find the alphabetically earliest of them. To get a faster solution, several observations are required. First, in the alphabetically earliest login the prefix of the last name is always one letter long; whatever login is generated using two or more letters of the last name can be shortened further by removing extra letters to get an alphabetically earlier login. Second, the prefix of the first name should not contain any letter greater than or equal to the first letter of the last name, other than the first letter. Thus, a better solution is: iterate over letters of the first name, starting with the second one. Once a letter which is greater than or equal to the first letter of the last name is found, stop, and return all letters before this one plus the first letter of the last name. If such a letter is not found, return the whole first name plus the first letter of the last name.
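The brute-force version described above fits in a few lines (the function name `earliestLogin` is our own):

```cpp
#include <algorithm>
#include <string>
using namespace std;

// Try every pair of non-empty prefixes of the first and last name and
// keep the alphabetically smallest concatenation.
string earliestLogin(const string& first, const string& last) {
    string best = first + last;           // full names form one valid login
    for (size_t i = 1; i <= first.size(); ++i)
        for (size_t j = 1; j <= last.size(); ++j)
            best = min(best, first.substr(0, i) + last.substr(0, j));
    return best;
}
```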
[ "brute force", "greedy", "sortings" ]
1,000
null
909
B
Segments
You are given an integer $N$. Consider all possible segments on the coordinate axis with endpoints at integer points with coordinates between 0 and $N$, inclusive; there will be $\frac{N(N+1)}{2}$ of them. You want to draw these segments in several layers so that in each layer the segments don't overlap (they might touch at the endpoints though). You \textbf{can not} move the segments to a different location on the coordinate axis. Find the minimal number of layers you have to use for the given $N$.
Consider a segment $[i, i + 1]$ of length 1. Clearly, all segments that cover this segment must belong to different layers. To cover it, the left end of the segment must be at one of the points $0, 1, ..., i$ ($i + 1$ options), and the right end - at one of the points $i + 1, i + 2, ..., N$ ($N - i$ options). So the number of segments covering $[i, i + 1]$ is equal to $M_{i} = (i + 1)(N - i)$. The maximum of $M_{i}$ over all $i = 0, ..., N - 1$ gives us a lower bound on the number of layers. Because the problem doesn't require explicit construction, we can make a guess that this bound is exact. $\max M_{i}$ can be found in $O(N)$; alternatively, it can be seen that the maximum is reached for $i=\lfloor{\frac{N}{2}}\rfloor$ (for a central segment for odd $N$ or for one of two central segments for even $N$). The answer is $\left(\lfloor{\frac{N}{2}}\rfloor+1\right)\cdot\lceil{\frac{N}{2}}\rceil$. We can also prove this by an explicit construction. Sort all segments in non-decreasing order of their left ends and then in increasing order of their right ends. Try to find a place for each next segment greedily: if $i$ is the left end of the current segment, and segment $[i, i + 1]$ is free in some layer, add the current segment to that layer; otherwise, start a new layer with the current segment. and yes, this is our $O(1)$ problem! :-)
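Both the $O(N)$ maximum and the closed form can be sketched and cross-checked (function names are our own):

```cpp
#include <algorithm>
#include <cstdint>
using namespace std;

// Closed form: (floor(N/2) + 1) * ceil(N/2).
uint64_t layersFast(uint64_t n) { return (n / 2 + 1) * ((n + 1) / 2); }

// O(N) lower bound: maximum over i of the number of segments
// covering the unit segment [i, i + 1].
uint64_t layersSlow(uint64_t n) {
    uint64_t best = 0;
    for (uint64_t i = 0; i < n; ++i) best = max(best, (i + 1) * (n - i));
    return best;
}
```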
[ "constructive algorithms", "math" ]
1,300
null
909
C
Python Indentation
In Python, code blocks don't have explicit begin/end or curly braces to mark beginning and end of the block. Instead, code blocks are defined by indentation. We will consider an extremely simplified subset of Python with only two types of statements. \textbf{Simple statements} are written in a single line, one per line. An example of a simple statement is assignment. \textbf{For statements} are compound statements: they contain one or several other statements. For statement consists of a header written in a separate line which starts with "for" prefix, and loop body. Loop body is a block of statements indented one level further than the header of the loop. Loop body can contain both types of statements. Loop body can't be empty. You are given a sequence of statements without indentation. Find the number of ways in which the statements can be indented to form a valid Python program.
This problem can be solved using dynamic programming. Let's consider all possible programs which end with a certain statement at a certain indent. Dynamic programming state will be an array $dp[i][j]$ which stores the number of such programs ending with statement $i$ at indent $j$. The starting state is a one-dimensional array for $i = 0$: there is exactly one program which consists of the first statement only, and its last statement has indent 0. The recurrent formula can be figured out from the description of the statements. When we add command $i + 1$, its possible indents depend on the possible indents of command $i$ and on the type of command $i$. If command $i$ is a for loop, command $i + 1$ must have indent one greater than the indent of command $i$, so $dp[i + 1][0] = 0$ and $dp[i + 1][j] = dp[i][j - 1]$ for $j > 0$. If command $i$ is a simple statement, command $i + 1$ can belong to the body of any loop before it, and have any indent from 0 to the indent of command $i$. If we denote the indent of command $i$ (simple statement) as $k$, the indent of command $i + 1$ as $j$, we need to sum over all cases where $k \ge j$: $d p[i+1][j]=\sum_{k=j}^{N-1}d p[i][k]$. The answer to the task is the total number of programs which end with the last command at any indent: $\textstyle\sum_{k=0}^{N-1}d p[N-1][k]$. The complexity of this solution is $O(N^{2})$.
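The DP with the suffix-sum optimization can be written as a C++ sketch (matching the language of the other solutions here; the interface taking 'f'/'s' characters mirrors the problem input, and the modulus is the one the problem uses):

```cpp
#include <string>
#include <vector>
using namespace std;

const long long MOD = 1000000007;

// dp[j] = number of valid programs with the current statement at indent j.
// 'f' marks a for-header, 's' a simple statement.
long long countIndentations(const string& cmds) {
    int n = cmds.size();
    vector<long long> dp(n, 0);
    dp[0] = 1;                                // first statement at indent 0
    for (int i = 0; i + 1 < n; ++i) {
        vector<long long> ndp(n, 0);
        if (cmds[i] == 'f') {                 // next statement indents one deeper
            for (int j = 1; j < n; ++j) ndp[j] = dp[j - 1];
        } else {                              // any indent <= current: suffix sums
            long long suf = 0;
            for (int j = n - 1; j >= 0; --j) {
                suf = (suf + dp[j]) % MOD;
                ndp[j] = suf;
            }
        }
        dp = ndp;
    }
    long long ans = 0;
    for (long long v : dp) ans = (ans + v) % MOD;
    return ans;
}
```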
[ "dp" ]
1,800
null
909
D
Colorful Points
You are given a set of points on a straight line. Each point has a color assigned to it. For point $a$, its neighbors are the points which don't have any other points between them and $a$. Each point has at most two neighbors - one from the left and one from the right. You perform a sequence of operations on this set of points. In one operation, you delete all points which have a neighbor point of a different color than the point itself. Points are deleted simultaneously, i.e. first you decide which points have to be deleted and then delete them. After that you can perform the next operation etc. If an operation would not delete any points, you can't perform it. How many operations will you need to perform until the next operation does not have any points to delete?
We can simulate the process described in the problem step by step, but this is too slow - a straightforward simulation (iterate over all points when deciding which ones to delete) has an $O(N^{2})$ complexity and takes too long. A solution with better complexity is required. Let's consider continuous groups of points of the same color. Any points inside a group are safe during the operation; only points at the border of a group are deleted (except for the leftmost point of the leftmost group and the rightmost point of the rightmost group, if these groups have more than one point). So, if current group sizes are, from left to right, $N_{1}, N_{2}, ..., N_{G - 1}, N_{G}$, group sizes after performing the first operation are $N_{1} - 1, N_{2} - 2, ..., N_{G - 1} - 2, N_{G} - 1$, after the second operation - $N_{1} - 2, N_{2} - 4, ..., N_{G - 1} - 4, N_{G} - 2$ and so on. This process continues until at least one of the groups disappears completely, at which point its adjacent groups may get merged if they are of the same color. This way, multiple operations can be simulated at once: Find the number of operations that are required for at least one group to disappear. Update group sizes after this number of operations. Remove empty groups. Merge adjacent groups of the same color. One update done this way requires $O(G)$ time. During such an update at least one point from each group is deleted, so at least $G$ points are removed. If $N$ is the initial number of points, we can remove at most $N$ points in total. Therefore, the running time of the algorithm is $O(N)$.
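The group bookkeeping can be illustrated with a direct simulation, one operation per loop iteration (the batched version described above jumps several operations at once; the function name and string input are our own):

```cpp
#include <string>
#include <utility>
#include <vector>
using namespace std;

// Simulate the deletions group by group; returns the number of operations.
int countOperations(const string& points) {
    vector<pair<char, long long>> g;          // compress into (color, size)
    for (char c : points) {
        if (!g.empty() && g.back().first == c) g.back().second++;
        else g.push_back({c, 1});
    }
    int ops = 0;
    while (g.size() > 1) {
        vector<pair<char, long long>> ng;
        for (size_t i = 0; i < g.size(); ++i) {
            // a group loses one point per border with a different color
            long long lost = (i > 0) + (i + 1 < g.size());
            long long left = g[i].second - lost;
            if (left <= 0) continue;
            if (!ng.empty() && ng.back().first == g[i].first)
                ng.back().second += left;     // merge same-colored neighbors
            else
                ng.push_back({g[i].first, left});
        }
        g = move(ng);
        ++ops;
    }
    return ops;
}
```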
[ "data structures", "greedy", "implementation" ]
2,100
null
909
E
Coprocessor
You are given a program you want to execute as a set of tasks organized in a dependency graph. The dependency graph is a directed acyclic graph: each task can depend on results of one or several other tasks, and there are no directed circular dependencies between tasks. A task can only be executed if all tasks it depends on have already completed. Some of the tasks in the graph can only be executed on a coprocessor, and the rest can only be executed on the main processor. In one coprocessor call you can send it a set of tasks which can only be executed on it. For each task of the set, all tasks on which it depends must be either already completed or be included in the set. The main processor starts the program execution and gets the results of tasks executed on the coprocessor automatically. Find the minimal number of coprocessor calls which are necessary to execute the given program.
We want to minimize the number of communications between the main processor and the coprocessor. Thus, we need to always act greedily: while there are tasks that can be executed on the main processor right away, execute them; then switch to the coprocessor and execute all tasks that can be executed there; then switch back to the main processor and so on. This can be done using breadth-first search. To run reasonably fast, this solution has to be implemented carefully: instead of searching for available tasks at each step, we want to maintain two queues of available tasks (for the main processor and the coprocessor) and add a task to the corresponding queue once all tasks it depends on have been executed. Alternatively, we can define $T_{i}$ as the lowest number of coprocessor calls required to execute the $i$-th task, and derive a recurrent formula for $T_{i}$. If $u$ is a task and $v_{1}, ..., v_{k}$ are its dependencies, then clearly for each $i$ $T_{u} \ge T_{v_{i}}$ because $u$ must be executed after $v_{i}$. Moreover, if $v_{i}$ is executed on the main processor and $u$ - on the coprocessor, then executing $u$ will require an additional coprocessor call. Therefore, $T_{u} = \max_{i}(T_{v_{i}} + s_{i})$, where $s_{i} = 1$ if $u$ is executed on the coprocessor and $v_{i}$ - on the main processor; otherwise, $s_{i} = 0$. Now all $T_{i}$ can be calculated by recursive traversal of the dependency graph (or traversing the tasks in topological order). The answer to the problem is $\max T_{i}$.
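The second (recurrence-based) approach can be sketched with a topological-order traversal (function name and the deps/flags interface are our own):

```cpp
#include <algorithm>
#include <queue>
#include <vector>
using namespace std;

// deps[u] lists the tasks u depends on; onCoprocessor[u] marks where u runs.
// T[u] = minimum number of coprocessor calls needed to finish task u.
int minCoprocessorCalls(const vector<vector<int>>& deps,
                        const vector<bool>& onCoprocessor) {
    int n = deps.size();
    vector<int> indeg(n, 0), T(n);
    vector<vector<int>> out(n);
    for (int u = 0; u < n; ++u) {
        T[u] = onCoprocessor[u] ? 1 : 0;   // a lone coprocessor task needs one call
        for (int v : deps[u]) { out[v].push_back(u); indeg[u]++; }
    }
    queue<int> q;
    for (int u = 0; u < n; ++u) if (indeg[u] == 0) q.push(u);
    while (!q.empty()) {                   // Kahn's topological order
        int v = q.front(); q.pop();
        for (int u : out[v]) {
            // extra call only when a coprocessor task waits on a main-processor one
            int s = (onCoprocessor[u] && !onCoprocessor[v]) ? 1 : 0;
            T[u] = max(T[u], T[v] + s);
            if (--indeg[u] == 0) q.push(u);
        }
    }
    return *max_element(T.begin(), T.end());
}
```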
[ "dfs and similar", "dp", "graphs", "greedy" ]
1,900
null
909
F
AND-permutations
Given an integer $N$, find two permutations: - Permutation $p$ of numbers from 1 to $N$ such that $p_{i} ≠ i$ and $p_{i} & i = 0$ for all $i = 1, 2, ..., N$. - Permutation $q$ of numbers from 1 to $N$ such that $q_{i} ≠ i$ and $q_{i} & i ≠ 0$ for all $i = 1, 2, ..., N$. $&$ is the bitwise AND operation.
If $N$ is odd, the answer is NO. Indeed, for any odd position $i$, the number $p_{i}$ must be even, otherwise the last bit of $p_{i}&i$ is 1. For odd $N$ there are fewer even numbers than odd positions, so at least one of the positions will hold an odd number, thus it's impossible to construct a required permutation. If $N$ is even, the required permutation exists. To build it, first observe that $(2^{k} - i)&(2^{k} + i - 1) = 0$. For example, for $k = 5$: $100000 = 2^{5}$ $011111 = 2^{5} - 1$ $100001 = 2^{5} + 1$ $011110 = 2^{5} - 2$ and so on. We can use this fact to always match $2^{k} - i$ and $2^{k} + i - 1$ with each other, that is, set $p_{2^{k} - i} = 2^{k} + i - 1$ and $p_{2^{k} + i - 1} = 2^{k} - i$. The full procedure for constructing the required permutation is as follows. For a given even $N$, find $2^{k}$, the maximum power of two that is less than or equal to $N$. Match pairs of numbers $2^{k} - i$ and $2^{k} + i - 1$ for each $i = 1..N - 2^{k} + 1$. If we are not done yet, numbers from $1$ to $2^{k} - (N - 2^{k} + 1) - 1 = 2^{k + 1} - N - 2$ are still unmatched. Repeat the process for $N' = 2^{k + 1} - N - 2$. For example, for $N = 18$ on the first step we set $2^{k} = 16$ and match numbers 15-16, 14-17 and 13-18. On the second step unmatched numbers are from 1 to 12, so we set $2^{k} = 8$ and match numbers 7-8, 6-9, 5-10, 4-11 and 3-12. On the third and the last step the remaining unmatched numbers are 1 and 2, so we set $2^{k} = 2$ and match numbers 1 and 2 with each other. After this no unmatched numbers are left, and we are done. We can do a simple case analysis for $N = 1..7$ manually, noticing that the answer is NO for $N = 1..5$, a possible answer for $N = 6$ is \textbf{3 6 2 5 1 4} as given in problem statement, and a possible answer for $N = 7$ is \textbf{7 3 6 5 1 2 4}. If $N$ is a power of two, then it is represented in binary as $10..0$.
We must have $q_{N} \neq N$, therefore $q_{N} < N$, so the binary representation of $q_{N}$ is shorter than that of $N$. It follows that $q_{N}&N = 0$, so the answer is NO in this case. Finally, if $N > 7$ and $N$ is not a power of two, the required permutation always exists, and can be built in the following way. Split all numbers from 1 to $N$ into the following groups ($k$ is the largest power of two which is still less than $N$): 1..7 8..15 16..31 \ldots $2^{k - 1}..2^{k} - 1$ $2^{k}..N$ For the first group use the permutation that we found manually. For each of the remaining groups, use any permutation of numbers in this group (for example, a cyclic permutation). The numbers in each group have leading non-zero bit at the same position (which corresponds to the power of two at the beginning of the group), so it is guaranteed that $q_{i}&i$ contains a non-zero bit at least in that position.
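The matching construction for the first permutation $p$ can be sketched as follows (the function name is ours; it returns an empty vector for odd $n$ and ignores the $q$ half of the problem):

```cpp
#include <vector>
using namespace std;

// Build a 1-indexed permutation p with p[i] != i and (p[i] & i) == 0
// by repeatedly matching 2^k - i with 2^k + i - 1.
vector<int> buildAndZeroPerm(int n) {
    if (n % 2 != 0) return {};               // no solution for odd n
    vector<int> p(n + 1);
    while (n > 0) {
        int k = 1;
        while (2 * k <= n) k *= 2;           // largest power of two <= n
        for (int i = 1; i <= n - k + 1; ++i) {
            p[k - i] = k + i - 1;
            p[k + i - 1] = k - i;
        }
        n = 2 * k - n - 2;                   // unmatched prefix 1..n'
    }
    return p;
}
```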
[ "constructive algorithms" ]
2,500
null
911
A
Nearest Minimums
You are given an array of $n$ integer numbers $a_{0}, a_{1}, ..., a_{n - 1}$. Find the distance between two closest (nearest) minimums in it. It is guaranteed that in the array a minimum occurs at least two times.
This task can be done by one array traversal. Maintain $cur$ - the current minimum value, $pos$ - the position of the last occurrence of $cur$, $ans$ - the current minimum distance between two occurrences of $cur$. Now for each $i$: if $a_{i} < cur$ then do $cur := a_{i}$, $pos := i$, $ans := \infty$. If $a_{i} = cur$, do $ans := min(ans, i - pos)$, $pos := i$. In the end $cur$ will be the global minimum of the array and $ans$ will hold the minimum distance between its occurrences. Overall complexity: $O(n)$.
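The traversal above can be sketched directly (the function name is ours):

```cpp
#include <algorithm>
#include <climits>
#include <vector>
using namespace std;

// One pass: track the current minimum, the position of its last
// occurrence, and the best distance between occurrences seen so far.
int nearestMinimums(const vector<int>& a) {
    int cur = INT_MAX, pos = -1, ans = INT_MAX;
    for (int i = 0; i < (int)a.size(); ++i) {
        if (a[i] < cur) { cur = a[i]; pos = i; ans = INT_MAX; }
        else if (a[i] == cur) { ans = min(ans, i - pos); pos = i; }
    }
    return ans;   // the problem guarantees the minimum occurs at least twice
}
```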
[ "implementation" ]
1,100
null
911
B
Two Cakes
It's New Year's Eve soon, so Ivan decided it's high time he started setting the table. Ivan has bought two cakes and cut them into pieces: the first cake has been cut into $a$ pieces, and the second one — into $b$ pieces. Ivan knows that there will be $n$ people at the celebration (including himself), so Ivan has set $n$ plates for the cakes. Now he is thinking about how to distribute the cakes between the plates. Ivan wants to do it in such a way that all following conditions are met: - Each piece of each cake is put on some plate; - Each plate contains at least one piece of cake; - No plate contains pieces of both cakes. To make his guests happy, Ivan wants to distribute the cakes in such a way that the minimum number of pieces on the plate is maximized. Formally, Ivan wants to know the maximum possible number $x$ such that he can distribute the cakes according to the aforementioned conditions, and each plate will contain at least $x$ pieces of cake. Help Ivan to calculate this number $x$!
Let's fix $x$ - the number of plates to hold pieces of the first cake; $n - x$ plates are left for the other cake. Obviously, the optimal way to distribute $a$ pieces onto $x$ plates leads to a minimum of $\lfloor{\frac{a}{x}}\rfloor$ pieces on a plate. Now try every possible $x$ from $1$ to $n - 1$ and take the maximum of $\min(\lfloor{\frac{a}{x}}\rfloor,\lfloor{\frac{b}{n-x}}\rfloor)$. Overall complexity: $O(n)$.
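As a minimal sketch of the loop over splits (the function name is ours):

```cpp
#include <algorithm>
using namespace std;

// Try every split: x plates for the first cake, n - x for the second,
// and maximize the minimum number of pieces on a plate.
int maxMinPieces(int n, int a, int b) {
    int best = 0;
    for (int x = 1; x < n; ++x)
        best = max(best, min(a / x, b / (n - x)));
    return best;
}
```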
[ "binary search", "brute force", "implementation" ]
1,200
null
911
C
Three Garlands
Mishka is decorating the Christmas tree. He has got three garlands, and all of them will be put on the tree. After that Mishka will switch these garlands on. When a garland is switched on, it periodically changes its state — sometimes it is lit, sometimes not. Formally, if $i$-th garland is switched on during $x$-th second, then it is lit only during seconds $x$, $x + k_{i}$, $x + 2k_{i}$, $x + 3k_{i}$ and so on. Mishka wants to switch on the garlands in such a way that during each second after switching the garlands on there would be at least one lit garland. Formally, Mishka wants to choose three integers $x_{1}$, $x_{2}$ and $x_{3}$ (not necessarily distinct) so that he will switch on the first garland during $x_{1}$-th second, the second one — during $x_{2}$-th second, and the third one — during $x_{3}$-th second, respectively, and during each second starting from $max(x_{1}, x_{2}, x_{3})$ at least one garland will be lit. Help Mishka by telling him if it is possible to do this!
There are pretty few cases to have YES: One of $k_{i}$ is equal to $1$; At least two of $k_{i}$ are equal to $2$; All $k_{i}$ equal $3$; $k = \{2, 4, 4\}$. It's easy to notice that the minimum of $k_{i}$ being equal to $3$ produces only one working case; greater values will always miss some seconds. Now consider a minimum of $2$ and let it cover all odd seconds. Then you should cover all even seconds, and $2$ or the pair $4, 4$ are the only possible solutions. Overall complexity: $O(1)$.
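The case analysis translates into a constant-time check (the function name is ours):

```cpp
#include <algorithm>
using namespace std;

// The four YES cases, checked on the sorted triple (k1, k2, k3).
bool canCover(int k1, int k2, int k3) {
    int k[3] = {k1, k2, k3};
    sort(k, k + 3);
    if (k[0] == 1) return true;                        // period 1 covers everything
    if (k[0] == 2 && k[1] == 2) return true;           // two period-2 garlands
    if (k[0] == 3 && k[1] == 3 && k[2] == 3) return true;
    if (k[0] == 2 && k[1] == 4 && k[2] == 4) return true;
    return false;
}
```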
[ "brute force", "constructive algorithms" ]
1,400
null
911
D
Inversion Counting
A permutation of size $n$ is an array of size $n$ such that each integer from $1$ to $n$ occurs exactly once in this array. An inversion in a permutation $p$ is a pair of indices $(i, j)$ such that $i > j$ and $a_{i} < a_{j}$. For example, a permutation $[4, 1, 3, 2]$ contains $4$ inversions: $(2, 1)$, $(3, 1)$, $(4, 1)$, $(4, 3)$. You are given a permutation $a$ of size $n$ and $m$ queries to it. Each query is represented by two indices $l$ and $r$ denoting that you have to reverse the segment $[l, r]$ of the permutation. For example, if $a = [1, 2, 3, 4]$ and a query $l = 2$, $r = 4$ is applied, then the resulting permutation is $[1, 4, 3, 2]$. After each query you have to determine whether the number of inversions is odd or even.
A swap of two elements is called a transposition. Any permutation can be expressed as a composition (product) of transpositions. Put simply, you can get any permutation from any other one of the same length by doing some number of swaps. The sign of a permutation is determined by the parity of the number of transpositions needed to get it from the identity permutation. Luckily (not really, this is pure math; check out the proofs on Wikipedia, e.g.) the sign also tells us the parity of the inversion count. Now you can start with computing the parity of the inversion count of the original permutation (naively, check all pairs of indices). Finally you can decompose queries into swaps, any method will be ok. For example, you can swap $a_{l}$ and $a_{r}$, then $a_{l + 1}$ and $a_{r - 1}$ and so on (this is $\lfloor{\frac{r-l+1}{2}}\rfloor$ swaps). Then the parity of the inversion count of the resulting permutation changes iff you applied an odd number of swaps. Overall complexity: $O(n^{2} + m)$.
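A compact sketch of this parity trick (function name and the query interface are ours):

```cpp
#include <string>
#include <utility>
#include <vector>
using namespace std;

// Parity of the inversion count after each query: reversing [l, r]
// (1-based, inclusive) performs floor((r - l + 1) / 2) swaps, and
// every swap flips the parity.
vector<string> inversionParity(const vector<int>& a,
                               const vector<pair<int, int>>& queries) {
    int n = a.size();
    long long inv = 0;
    for (int i = 0; i < n; ++i)           // naive initial count, O(n^2)
        for (int j = i + 1; j < n; ++j)
            if (a[i] > a[j]) ++inv;
    bool odd = inv % 2 != 0;
    vector<string> out;
    for (auto [l, r] : queries) {
        if (((r - l + 1) / 2) % 2 != 0) odd = !odd;
        out.push_back(odd ? "odd" : "even");
    }
    return out;
}
```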
[ "brute force", "math" ]
1,800
null
911
E
Stack Sorting
Let's suppose you have an array $a$, a stack $s$ (initially empty) and an array $b$ (also initially empty). You may perform the following operations until both $a$ and $s$ are empty: - Take the first element of $a$, push it into $s$ and remove it from $a$ (if $a$ is not empty); - Take the top element from $s$, append it to the end of array $b$ and remove it from $s$ (if $s$ is not empty). You can perform these operations in arbitrary order. If there exists a way to perform the operations such that array $b$ is sorted in non-descending order in the end, then array $a$ is called stack-sortable. For example, $[3, 1, 2]$ is stack-sortable, because $b$ will be sorted if we perform the following operations: - Remove $3$ from $a$ and push it into $s$; - Remove $1$ from $a$ and push it into $s$; - Remove $1$ from $s$ and append it to the end of $b$; - Remove $2$ from $a$ and push it into $s$; - Remove $2$ from $s$ and append it to the end of $b$; - Remove $3$ from $s$ and append it to the end of $b$. After all these operations $b = [1, 2, 3]$, so $[3, 1, 2]$ is stack-sortable. $[2, 3, 1]$ is not stack-sortable. You are given $k$ first elements of some permutation $p$ of size $n$ (recall that a permutation of size $n$ is an array of size $n$ where each integer from $1$ to $n$ occurs exactly once). You have to restore the remaining $n - k$ elements of this permutation so it is stack-sortable. If there are multiple answers, choose the answer such that $p$ is lexicographically maximal (an array $q$ is lexicographically greater than an array $p$ iff there exists some integer $k$ such that for every $i < k$ $q_{i} = p_{i}$, and $q_{k} > p_{k}$). \textbf{You may not swap or change any of first $k$ elements of the permutation}. Print the lexicographically maximal permutation $p$ you can obtain. If there exists no answer then output -1.
Let's denote $A(l, r)$ as some stack-sortable array which contains all integers from $l$ to $r$ (inclusive). We can see that if the first element of $A(l, r)$ is $x$, then $A(l, r) = [x] + A(l, x - 1) + A(x + 1, r)$, where by $+$ we mean concatenation of arrays. It's easy to prove this fact: if the first element is $x$, then we have to store it in the stack until we have processed all elements less than $x$, so in $A(l, r)$ no element that is greater than $x$ can precede any element less than $x$. This way we can represent the prefix we are given. For example, if $n = 7$, $k = 3$ and prefix is $[6, 3, 4]$, then we can rewrite the permutation we have to obtain as: $A(1, 7) = [6] + A(1, 5) + A(7, 7) = [6] + [3] + A(1, 2) + A(4, 5) + A(7, 7) = [6] + [3] + [1] + A(2, 2) + A(4, 5) + A(7, 7)$. So the unknown suffix is a concatenation of some stack-sortable arrays. It's easy to see that if an array is sorted in non-increasing order, then it is stack-sortable. So we can replace each block $A(x, y)$ with an array $[y, y - 1, y - 2, ..., x]$. If during rewriting the given prefix we obtain some impossible situation (for example, when $n = 7$ and given prefix is $[6, 7]$, we have $[6] + A(1, 5) + A(7, 7)$ and $7$ can't be the beginning of $A(1, 5)$), then answer is $- 1$.
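The underlying definition of stack-sortability can be checked greedily: pop whenever the stack top is the next value needed in $b$. This is only the checker, not the full reconstruction above (the function name is ours):

```cpp
#include <stack>
#include <vector>
using namespace std;

// An array is stack-sortable iff the greedy pop strategy empties the stack.
bool isStackSortable(const vector<int>& a) {
    stack<int> s;
    int need = 1;                 // next value to append to b
    for (int x : a) {
        s.push(x);
        while (!s.empty() && s.top() == need) { s.pop(); ++need; }
    }
    return s.empty();
}
```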
[ "constructive algorithms", "data structures", "greedy", "implementation" ]
2,000
null
911
F
Tree Destruction
You are given an unweighted tree with $n$ vertices. Then $n - 1$ following operations are applied to the tree. A single operation consists of the following steps: - choose two leaves; - add the length of the simple path between them to the answer; - remove one of the chosen leaves from the tree. Initial answer (before applying operations) is $0$. Obviously after $n - 1$ such operations the tree will consist of a single vertex. Calculate the maximal possible answer you can achieve, and construct a sequence of operations that allows you to achieve this answer!
The solution is to choose some diameter of the given tree, delete all leaves which don't belong to the diameter (iteratively), and then delete the diameter itself. I.e., while the tree contains vertices other than the ones forming the diameter, we choose some leaf, increase the answer by the length of the path between this leaf and the farthest endpoint of the diameter (from this leaf), and delete this leaf. Then, while the tree consists of more than one vertex, we choose the two endpoints of the diameter, increase the answer by the length of the path between them, and delete either of them. First we need to prove that any diameter can be chosen. This follows from the fact that a diameter can be found by two graph traversals (DFS/BFS): find the farthest vertex from an arbitrary vertex, then the farthest vertex from that one; the resulting path is a diameter of the tree. It means that for each vertex that doesn't belong to the diameter, the algorithm described above adds the maximal possible path length. Finally, it is obvious that at some moment we have to delete the diameter, and there is no better way to do this than in the described solution.
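The whole procedure, double BFS for a diameter, iterative removal of off-diameter leaves paired with the farther endpoint, then stripping the diameter itself, can be sketched as follows (vertices assumed $1$-based; the name `max_destruction` is ad hoc, not from the editorial):

```python
from collections import deque

def max_destruction(n, edges):
    adj = [[] for _ in range(n + 1)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)

    def bfs(src):
        dist, par = [-1] * (n + 1), [0] * (n + 1)
        dist[src] = 0
        q = deque([src])
        while q:
            v = q.popleft()
            for to in adj[v]:
                if dist[to] == -1:
                    dist[to], par[to] = dist[v] + 1, v
                    q.append(to)
        return dist, par

    d0, _ = bfs(1)
    u = max(range(1, n + 1), key=lambda v: d0[v])      # one diameter endpoint
    du, paru = bfs(u)
    v = max(range(1, n + 1), key=lambda w: du[w])      # the other endpoint
    dv, _ = bfs(v)

    path = [v]                       # diameter path v .. u
    while path[-1] != u:
        path.append(paru[path[-1]])
    on_path = set(path)

    total, ops = 0, []               # each op: (leaf_a, leaf_b, removed leaf)
    deg = [len(adj[x]) for x in range(n + 1)]
    alive = [True] * (n + 1)
    leaves = deque(x for x in range(1, n + 1)
                   if deg[x] == 1 and x not in on_path)
    while leaves:                    # strip off-diameter leaves iteratively
        w = leaves.popleft()
        total += max(du[w], dv[w])   # pair w with the farther endpoint
        ops.append((u if du[w] >= dv[w] else v, w, w))
        alive[w] = False
        for t in adj[w]:
            if alive[t]:
                deg[t] -= 1
                if deg[t] == 1 and t not in on_path:
                    leaves.append(t)
    for w in path[:-1]:              # finally delete the diameter from the v end
        total += du[w]
        ops.append((u, w, w))
    return total, ops
```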
[ "constructive algorithms", "dfs and similar", "graphs", "greedy", "trees" ]
2,400
null
911
G
Mass Change Queries
You are given an array $a$ consisting of $n$ integers. You have to process $q$ queries to this array; each query is given as four numbers $l$, $r$, $x$ and $y$, denoting that for every $i$ such that $l ≤ i ≤ r$ and $a_{i} = x$ you have to set $a_{i}$ equal to $y$. Print the array after all queries are processed.
We can represent a query as a function $f$: $f(i) = i$ if $i \neq x$, and $f(x) = y$. If we want to apply two such functions in sequence, we can calculate their composition in $O(\max a_{i})$ time; in this problem $\max a_{i}$ is $100$. So we can do the following. Use the scanline technique: build a segment tree over the queries, where each vertex stores the composition of the functions on its segment. Initially all transformations are $f(i) = i$. When the segment of array positions where a query applies begins, we update the segment tree: we change the transformation at this query's index to $f(i) = i$ for $i \neq x$, $f(x) = y$. When the segment ends, we revert the transformation at this index to $f(i) = i$. The trick is that the composition of all current transformations is stored in the root of the segment tree, so for each array position we can easily calculate the result of the transformation.
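The core operation, composing two value-remapping functions in $O(\max a_{i})$, can be illustrated on its own (the helper names `make_query` and `compose` are ad hoc, not from the editorial):

```python
MAXV = 100  # in this problem max a_i = 100

def make_query(x, y):
    # the function of one query: f(i) = i for i != x, f(x) = y
    f = list(range(MAXV + 1))
    f[x] = y
    return f

def compose(f, g):
    # apply f first, then g: h(i) = g(f(i)), computed in O(MAXV)
    return [g[f[i]] for i in range(MAXV + 1)]

ident = list(range(MAXV + 1))     # the identity transformation
q1 = make_query(2, 5)             # "every 2 becomes 5"
q2 = make_query(5, 7)             # "every 5 becomes 7"
h = compose(q1, q2)
# an original 2 first turns into 5 and then into 7; an original 5 becomes 7 too
assert h[2] == 7 and h[5] == 7 and h[3] == 3
```

In the full solution, the leaves of a segment tree built over query indices hold such arrays, each internal vertex holds the composition of its children, and the root therefore holds the composition of all currently active queries.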
[ "data structures" ]
2,500
null
912
A
Tricky Alchemy
During the winter holidays, the demand for Christmas balls is exceptionally high. Since it's already $2018$, the advances in alchemy allow easy and efficient ball creation by utilizing magic crystals. Grisha needs to obtain some yellow, green and blue balls. It's known that to produce a \textbf{yellow} ball one needs two yellow crystals, \textbf{green} — one yellow and one blue, and for a \textbf{blue} ball, three blue crystals are enough. Right now there are $A$ yellow and $B$ blue crystals in Grisha's disposal. Find out how many additional crystals he should acquire in order to produce the required number of balls.
Note that the crystals of each color are bought independently. One needs $2 \cdot x + y$ yellow and $3 \cdot z + y$ blue crystals. The answer therefore is $max(0, 2 \cdot x + y - A) + max(0, 3 \cdot z + y - B)$.
[ "implementation" ]
800
a, b = map(int, input().split())
x, y, z = map(int, input().split())
yellow = 2 * x + y
blue = y + 3 * z
ans = max(0, yellow - a) + max(0, blue - b)
print(ans)
912
B
New Year's Eve
Since Grisha behaved well last year, at New Year's Eve he was visited by Ded Moroz who brought an enormous bag of gifts with him! The bag contains $n$ sweet candies from the good ol' bakery, each labeled from $1$ to $n$ corresponding to its tastiness. No two candies have the same tastiness. The choice of candies has a direct effect on Grisha's happiness. One can assume that he should take the tastiest ones — but no, the holiday magic turns things upside down. It is the xor-sum of tastinesses that matters, not the ordinary sum! A xor-sum of a sequence of integers $a_{1}, a_{2}, ..., a_{m}$ is defined as the bitwise XOR of all its elements: $a_{1} \oplus a_{2} \oplus \ldots \oplus a_{m}$, where $\oplus$ denotes the bitwise XOR operation; more about bitwise XOR can be found here. Ded Moroz warned Grisha he has more houses to visit, so Grisha can take \textbf{no more than $k$} candies from the bag. Help Grisha determine the largest xor-sum (largest xor-sum means maximum happiness!) he can obtain.
If $k = 1$, the answer is $n$. Otherwise, let's take a look at the most significant bit of the answer and denote it by $p$ (with the $0$-th bit being the least significant). It must be the same as the most significant bit of $n$. This means the answer cannot exceed $2^{p + 1} - 1$. Consider the numbers $2^{p}$ and $2^{p} - 1$. Obviously, they both do not exceed $n$. At the same time, their xor is $2^{p + 1} - 1$. Hence, the maximum answer can always be obtained if $k > 1$.
[ "bitmasks", "constructive algorithms", "number theory" ]
1,300
import sys

n, k = map(int, input().split())
if k == 1:
    print(n)
    sys.exit(0)
# grow ans through 1, 3, 7, ... to 2^(p+1) - 1, the smallest all-ones number >= n
ans = 1
while ans < n:
    ans = ans * 2 + 1
print(ans)
912
C
Perun, Ult!
A lot of students spend their winter holidays productively. Vlad has advanced very well in doing so! For three days already, fueled by salads and tangerines — the leftovers from New Year celebration — he has been calibrating his rating in his favorite MOBA game, playing as a hero named Perun. Perun has an ultimate ability called "Thunderwrath". At the instant of its activation, each enemy on the map ($n$ of them in total) loses $damage$ health points as a single-time effect. It also has a restriction: it can only be activated when the moment of time is an \textbf{integer}. The initial bounty for killing an enemy is $bounty$. Additionally, it increases by $increase$ each second. Formally, if at some second $t$ the ability is activated and the $i$-th enemy is killed as a result (i.e. his health drops to zero or lower), Vlad earns $bounty + t \cdot increase$ units of gold. Every enemy can receive damage, as well as be healed. There are multiple ways of doing so, but Vlad is not interested in details. For each of $n$ enemies he knows: - $max\_health_{i}$ — maximum number of health points for the $i$-th enemy; - $start\_health_{i}$ — initial health of the enemy (on the $0$-th second); - $regen_{i}$ — the amount of health the $i$-th enemy can regenerate per second. There are also $m$ health updates Vlad knows about: - $time_{j}$ — time when the health was updated; - $enemy_{j}$ — the enemy whose health was updated; - $health_{j}$ — updated health points for $enemy_{j}$. Obviously, Vlad wants to maximize his profit. If it's necessary, he could even wait for years to activate his ability at the right second. Help him determine the exact second (note that it must be \textbf{an integer}) from $0$ (inclusively) to $+\infty$ so that a single activation of the ability would yield Vlad the maximum possible amount of gold, and print this amount.
The statement almost directly states the formula for the answer - it is calculated as $\max_{t \in [0, +\infty)} f(t) \cdot (bounty + increase \cdot t)$, where $f(t)$ is the number of enemies we can kill at the $t$-th second. Thus, we need to learn how to calculate $f(t)$ and find the values of $t$ that are potential candidates for the point where the maximum is achieved. First, let's consider the case when no enemy has maximum health exceeding $damage$. Additionally, let $increase > 0$. So, how can we calculate $f(t)$? Let's model the process. There are three kinds of events that affect its value: some enemy has his health updated, and it is now less than or equal to $damage$, thus we can kill the enemy; some enemy has his health updated, and it is now greater than $damage$, thus we can't kill the enemy; an enemy has regenerated enough health to become invincible again. One can observe that the optimal answer is reached at the second exactly preceding an event of the second or the third kind. Indeed, otherwise we can move on to the next second: the bounty increases and $f(t)$ doesn't decrease, thus providing us with a better answer. What remains is to calculate the times when the aforementioned events occur and run a scanline. The first two kinds correspond directly to the updates (and the initial values - we can treat them as updates occurring at the zeroth second). Let's calculate when the events of the third kind occur. Let second $t$ be the moment when one of the enemies' health became equal to $h$, and let $r$ be the regeneration rate of this enemy. At second $t + \lfloor \frac{damage - h}{r} \rfloor + 1$ he will have regenerated enough health to become invincible again. One also needs to take care of the case $r = 0$: if there are no health updates after the enemy became killable, one can kill him at any moment and earn an infinitely large amount of money. 
Note that one should track when the last event of the first kind happened, as updates cancel the potentially planned events of the third kind. Now, consider the case when some enemy has maximum health less than or equal to $damage$. In that case, there is always an enemy to kill, and, since the bounty increases over time, the answer can be infinitely large. Finally, if $increase = 0$, the bounty stays constant and we cannot obtain an infinitely large answer. Since the bounty is constant, we are to find the maximum value of $f(t)$ and multiply it by $bounty$. This is a simple task and is left as an exercise :) Time complexity - $O((n + m) \cdot \log(n + m))$.
[ "brute force", "greedy", "sortings" ]
2,500
#define _CRT_SECURE_NO_WARNINGS #include <iostream> #include <algorithm> #include <vector> #include <ctime> #include <unordered_set> #include <string> #include <map> #include <unordered_map> #include <random> #include <set> #include <cassert> #include <functional> #include <queue> #include <numeric> #include <bitset> using namespace std; const int N = 100005, M = 350; mt19937 gen(time(NULL)); #define forn(i, n) for (int i = 0; i < n; i++) #define debug(...) fprintf(stderr, __VA_ARGS__), fflush(stderr) #define all(a) (a).begin(), (a).end() #define pii pair<int, int> #define mp make_pair #define endl '\n' typedef long long ll; template<typename T = int> inline T read() { T val = 0, sign = 1; char ch; for (ch = getchar(); ch < '0' || ch > '9'; ch = getchar()) if (ch == '-') sign = -1; for (; ch >= '0' && ch <= '9'; ch = getchar()) val = val * 10 + ch - '0'; return sign * val; } vector<pii> events[N]; int maxhp[N], regen[N]; map<ll, int> add; map<ll, int> erase; void solve() { int n = read(), m = read(); int amount = read(), bonus = read(); int damage = read(); forn(i, n) { maxhp[i] = read(); int base_hp = read(); regen[i] = read(); events[i].push_back(mp(0, base_hp)); } forn(i, m) { int time = read(), j = read(), hp = read(); j--; events[j].push_back(mp(time, hp)); } forn(i, n) sort(all(events[i])); if (bonus > 0) forn(i, n) if (maxhp[i] <= damage || (regen[i] == 0 && events[i].back().second <= damage)) { puts("-1"); return; } forn(i, n) { auto &v = events[i]; forn(j, v.size()) { pii e = v[j]; if (e.second > damage) continue; add[e.first]++; pii q; if (j < v.size() - 1) q = v[j + 1]; else q = mp(2e9, 0); ll delta_t = min(q.first - e.first - 1, regen[i] == 0 ? 
(int)2e9 : (min(damage, maxhp[i]) - e.second) / regen[i]); erase[e.first + delta_t]++; } } int alive = 0; set<ll> timeline; for (auto v : add) timeline.insert(v.first); for (auto v : erase) timeline.insert(v.first); ll ans = 0; for (auto v : timeline) { alive += add[v]; ans = max(ans, alive * (amount + bonus * 1LL * v)); alive -= erase[v]; } printf("%lld\n", ans); } signed main() { int t = 1; while (t--) { clock_t z = clock(); solve(); debug("Total Time: %.3f\n", (double)(clock() - z) / CLOCKS_PER_SEC); } }
912
D
Fishes
While Grisha was celebrating New Year with Ded Moroz, Misha gifted Sasha a small rectangular pond of size $n × m$, divided into cells of size $1 × 1$, inhabited by tiny evil fishes (no more than one fish per cell, otherwise they'll strife!). The gift bundle also includes a square scoop of size $r × r$, designed for fishing. If the lower-left corner of the scoop-net is located at cell $(x, y)$, all fishes inside the square $(x, y)...(x + r - 1, y + r - 1)$ get caught. Note that the scoop-net should lie completely inside the pond when used. Unfortunately, Sasha is not that skilled in fishing and hence throws the scoop randomly. In order to not frustrate Sasha, Misha decided to release $k$ fishes into the empty pond in such a way that the expected value of the number of caught fishes is as high as possible. Help Misha! In other words, put $k$ fishes in the pond into distinct cells in such a way that when the scoop-net is placed into a random position among $(n - r + 1)·(m - r + 1)$ possible positions, the average number of caught fishes is as high as possible.
Let's solve a simplified problem first. Assume we know all fishes' positions $(x_{i}, y_{i})$ ($1$-indexed). Denote as $g(x, y)$ the number of fishes inside a scoop with the lower-left corner located at $(x, y)$. Then the expected value is equal to: $\mathbf{E} = \frac{1}{(n-r+1) \cdot (m-r+1)} \cdot \sum_{x=1}^{n-r+1} \sum_{y=1}^{m-r+1} g(x, y)$ It's quite obvious that straightforward computation will result in $O(n \cdot m)$ time. However, we can invert the problem and calculate for each fish $f(x_{i}, y_{i})$ - the number of scoop positions that cover it. $f$ is given by the following formula: $f(x,y)=(\operatorname*{min}(n+1,x+r)-\operatorname*{max}(x,r))\cdot(\operatorname*{min}(m+1,y+r)-\operatorname*{max}(y,r))$ Let's get back to $\mathbf{E}$ and transform it into: $\mathbf{E} = \frac{1}{(n-r+1) \cdot (m-r+1)} \cdot \sum_{i=1}^{k} f(x_{i}, y_{i})$ In other words, in order to maximize the expected value we have to choose the $k$ best values among $n \cdot m$ possibilities. From now on there are several approaches. Solution I. Imagine we're considering $f(x_{0}, y)$, i.e. with a fixed coordinate $x_{0}$. Note that in this case $f$ is $y$-convex, i.e. it's non-decreasing until some point, and non-increasing after. Moreover, it always reaches its maximum at $y=\lfloor{\frac{m+1}{2}}\rfloor$. Denote the points to the left of this one as $D$, and those to the right (inclusive) as $I$. The rest of the solution looks as follows: initially we put $2 \cdot n$ points into the set, two per $x$-coordinate, one for $D$ and one for $I$. On each step we take the point with the maximum value of $f$ and replace it with its left or right neighbour (depending on which part it was from: $D$ means that the substitute will be to the left, $I$ - to the right). Complexity: $O((n+k)\log n)$. 
Solution II. Let's enhance the observations from the previous paragraph. Notice that $f$ is convex in both coordinates. Then we can take $(\lfloor{\frac{n+1}{2}}\rfloor,\lfloor{\frac{m+1}{2}}\rfloor)$ as the starting point and launch a breadth-first search with a priority queue, extracting the point with the largest value of $f$ from the queue and checking all of its yet unvisited neighbours in a $4$-connected area. Complexity: $O(k\log k)$ with a greater constant compared to solution one.
[ "data structures", "graphs", "greedy", "probabilities", "shortest paths" ]
2,100
#define _CRT_SECURE_NO_WARNINGS #include <iostream> #include <algorithm> #include <vector> #include <ctime> #include <string> #include <map> #include <unordered_map> #include <random> #include <set> #include <cassert> #include <functional> #include <queue> #include <numeric> #include <bitset> using namespace std; const int N = 100005, M = 350; mt19937 gen(time(NULL)); #define forn(i, n) for (int i = 0; i < n; i++) #define debug(...) fprintf(stderr, __VA_ARGS__), fflush(stderr) #define all(a) (a).begin(), (a).end() #define pii pair<int, int> #define mp make_pair #define endl '\n' typedef long long ll; template<typename T = int> inline T read() { T val = 0, sign = 1; char ch; for (ch = getchar(); ch < '0' || ch > '9'; ch = getchar()) if (ch == '-') sign = -1; for (; ch >= '0' && ch <= '9'; ch = getchar()) val = val * 10 + ch - '0'; return sign * val; } int n, m, r, k; ll get(int x, int y) { return (min(n + 1, x + r) - max(x, r)) * 1LL * (min(m + 1, y + r) - max(y, r)); } bool ok(int x, int y) { return min(x, y) > 0 && x <= n && y <= m; } struct cmp { bool operator()(const pii &a, const pii &b) { ll f_a = get(a.first, a.second); ll f_b = get(b.first, b.second); return f_a > f_b || (f_a == f_b && a > b); } }; set<pii, cmp> s; set<int> used[N]; int dx[4] = { -1, 0, 0, 1 }; int dy[4] = { 0, -1, 1, 0 }; double get() { s.insert(mp((n + 1) / 2, (m + 1) / 2)); used[(n + 1) / 2].insert((m + 1) / 2); ll total = 0; forn(i, k) { auto e = *s.begin(); s.erase(s.begin()); int x = e.first, y = e.second; total += get(x, y); forn(j, 4) { int _x = x + dx[j], _y = y + dy[j]; if (!ok(_x, _y) || used[_x].count(_y)) continue; s.insert(mp(_x, _y)); used[_x].insert(_y); } } return (double)total / ((n - r + 1) * 1LL * (m - r + 1)); } void solve() { n = read(), m = read(), r = read(), k = read(); if (n > m) swap(n, m); if (n == 1 && m == 1) printf("%.10f\n", 1.0); else printf("%.10f\n", get()); } signed main() { int t = 1; while (t--) { clock_t z = clock(); solve(); debug("Elapsed Time: 
%.3f\n", (double)(clock() - z) / CLOCKS_PER_SEC); } }
912
E
Prime Gift
Opposite to Grisha's nice behavior, Oleg, though he has an entire year at his disposal, didn't manage to learn how to solve number theory problems in the past year. That's why instead of Ded Moroz he was visited by his teammate Andrew, who solemnly presented him with a set of $n$ \textbf{distinct prime} numbers alongside with a simple task: Oleg is to find the $k$-th smallest integer, such that \textbf{all} its prime divisors are in this set.
The initial idea that emerges in this kind of problem is to use binary search to determine the $k$-th element. Still we have to somehow answer the following query: "how many elements no greater than $x$ satisfy the conditions?" It's easy to see that all such numbers can be represented as $p_{1}^{a_{1}} \cdot \ldots \cdot p_{n}^{a_{n}}$. Denote $D(G)$ as the set of numbers not exceeding $10^{18}$ such that all their prime divisors lie inside $G$. Let $N$ be our initial set. Sadly, we cannot just generate $D(N)$, since its size might be of the order of $8 \cdot 10^{8}$. Actually, the goal is to invent a way to obtain information about all elements without having to generate them explicitly. Imagine we split $N$ into two sets $A$ and $B$ such that $A\cup B=N$ and $A\cap B=\emptyset$. Since each element of $D(N)$ can be represented as a product of some element of $D(A)$ and some element of $D(B)$, we can explicitly generate $D(A)$ and $D(B)$, then sort them and find out how many pairwise products do not exceed $x$ in $O(|D(A)| + |D(B)|)$ using the two pointers approach. This gives $O(\log 10^{18} \cdot (|D(A)| + |D(B)|))$ in total. The main part is to find the optimal partition. The first thought is to send the first $\frac{n}{2}$ elements to $A$ and the rest to $B$. However, this is not enough; in this case the approximate size of $D(A)$ might reach $7 \cdot 10^{6}$, which is way too much to pass. To speed it up we can, for example, compose $A$ of the elements with even indices (i.e. $p_{0}, p_{2}, ...$) so that the sizes of both $D(A)$ and $D(B)$ do not exceed $10^{6}$ and the solution runs swiftly.
[ "binary search", "dfs and similar", "math", "meet-in-the-middle", "number theory", "two pointers" ]
2,400
#define _CRT_SECURE_NO_WARNINGS #include <iostream> #include <algorithm> #include <vector> #include <ctime> #include <unordered_set> #include <string> #include <map> #include <unordered_map> #include <random> #include <set> #include <cassert> #include <functional> #include <queue> #include <numeric> #include <bitset> using namespace std; const int N = 200001, M = 800; mt19937 gen(time(NULL)); #define forn(i, n) for (int i = 0; i < n; i++) #define debug(...) fprintf(stderr, __VA_ARGS__), fflush(stderr) #define all(a) (a).begin(), (a).end() #define pii pair<int, int> #define mp make_pair #define endl '\n' typedef long long ll; template<typename T = int> inline T read() { T val = 0, sign = 1; char ch; for (ch = getchar(); ch < '0' || ch > '9'; ch = getchar()) if (ch == '-') sign = -1; for (; ch >= '0' && ch <= '9'; ch = getchar()) val = val * 10 + ch - '0'; return sign * val; } struct hamming { vector<int> p; int n; vector<ll> A, B; ll U = 1e18; void go(ll u, int i, vector<int> &p, vector<ll> &out) { if (i == p.size()) { out.push_back(u); return; } go(u, i + 1, p, out); for (; u <= U / p[i]; ) go(u *= p[i], i + 1, p, out); } hamming(vector<int> p) : p(p), n(p.size()) { vector<int> lp, rp; for (int i = 0; i < n; i += 2) lp.push_back(p[i]); for (int i = 1; i < n; i += 2) rp.push_back(p[i]); go(1, 0, lp, A); go(1, 0, rp, B); sort(all(A), greater<ll>()); sort(all(B)); } ll get(ll x) { int j = 0; ll ans = 0; forn(i, A.size()) { while (j < B.size() && B[j] <= x / A[i]) j++; ans += j; } return ans; } ll order(ll k) { assert(k <= get(U)); ll l = 0, r = U; while (l < r - 1) { ll m = (l + r) / 2; if (get(m) < k) l = m; else r = m; } return r; } }; void solve() { int n = read(); vector<int> vals(n); for (auto &v : vals) v = read(); auto T = hamming(vals); printf("%lld\n", T.order(read<ll>())); //printf("%lld\n", T.get(1e18)); } signed main() { int t = 1; while (t--) { clock_t z = clock(); solve(); //debug("Total Time: %.3f\n", (double)(clock() - z) / CLOCKS_PER_SEC); } }
913
A
Modular Exponentiation
The following problem is well-known: given integers $n$ and $m$, calculate \begin{center} $2^{n}\operatorname{mod}m$, \end{center} where $2^{n} = 2·2·...·2$ ($n$ factors), and $x\ {\mathrm{mod}}\ y$ denotes the remainder of division of $x$ by $y$. You are asked to solve the "reverse" problem. Given integers $n$ and $m$, calculate \begin{center} $m{\bmod{2^{n}}}$. \end{center}
Why is it hard to calculate the answer directly by the formula in the problem statement? The reason is that $2^{n}$ is a very large number; for $n = 10^{8}$ it consists of around 30 million decimal digits. (Anyway, it was actually possible to get the problem accepted by directly calculating the result in Python or Java using built-in long arithmetic.) The main thing to notice in this problem: $x\ {\mathrm{mod}}\ y=x$ if $x < y$. Hence, $m{\mathrm{~mod~}}2^{n}=m$ if $m < 2^{n}$. Since $m \le 10^{8}$ by the constraints, for $n \ge 27$ the answer is always equal to $m$. If $n < 27$, it's easy to calculate the answer directly.
[ "implementation", "math" ]
900
#include <bits/stdc++.h> using namespace std; int main() { int n, m; scanf("%d %d", &n, &m); printf("%d\n", n >= 31 ? m : m % (1 << n)); return 0; }
913
B
Christmas Spruce
Consider a rooted tree. A rooted tree has one special vertex called the root. All edges are directed from the root. Vertex $u$ is called a child of vertex $v$ and vertex $v$ is called a parent of vertex $u$ if there exists a directed edge from $v$ to $u$. A vertex is called a leaf if it doesn't have children and has a parent. Let's call a rooted tree a spruce if its every non-leaf vertex has at least $3$ leaf children. You are given a rooted tree, check whether it's a spruce. The definition of a rooted tree can be found here.
Let's calculate the number of children of each vertex. To do that, increase $c[p_{i}]$ by $1$ for every $p_{i}$. Then iterate over all vertices. If the $i$-th vertex has $0$ children (i.e. $c[i] = 0$), skip this vertex. Otherwise, again iterate over all vertices and count the vertices $j$ such that $c[j] = 0$ and $p_{j} = i$. If this number is less than $3$ for some non-leaf vertex, the answer is "No". Otherwise the answer is "Yes".
[ "implementation", "trees" ]
1,200
n = int(input())
p = [int(input()) - 1 for _ in range(n - 1)]
leafs = list(filter(lambda x: not x in p, range(n)))
lp = [x for i, x in enumerate(p) if i + 1 in leafs]
x = min(lp.count(k) for k in p)
print("Yes" if x >= 3 else "No")
913
C
Party Lemonade
A New Year party is not a New Year party without lemonade! As usual, you are expecting a lot of guests, and buying lemonade has already become a pleasant necessity. Your favorite store sells lemonade in bottles of $n$ different volumes at different costs. A single bottle of type $i$ has volume $2^{i - 1}$ liters and costs $c_{i}$ roubles. The number of bottles of each type in the store can be considered infinite. You want to buy at least $L$ liters of lemonade. How many roubles do you have to spend?
Note that if $2 \cdot a_{i} \le a_{i + 1}$, then it doesn't make sense to buy any bottles of type $i + 1$ - it won't ever be worse to buy two bottles of type $i$ instead. In this case let's assume that we actually have an option to buy a bottle of type $i + 1$ at the cost of $2 \cdot a_{i}$ and replace $a_{i + 1}$ with $min(a_{i + 1}, 2 \cdot a_{i})$. Let's do this for all $i$ from 1 to $n - 1$ in increasing order. Now for all $i$ it's true that $2 \cdot a_{i} \ge a_{i + 1}$. Note that now it doesn't make sense to buy more than one bottle of type $i$ if $i < n$. Indeed, in this case it won't ever be worse to buy a bottle of type $i + 1$ instead of two bottles of type $i$. From now on, we'll only search for options where we buy at most one bottle of every type except the last one. Suppose that we had to buy exactly $L$ liters of lemonade, as opposed to at least $L$. Note that in this case the last $n - 1$ bits of $L$ uniquely determine which bottles of types less than $n$ we have to buy. Indeed, if $L$ is odd, then we have to buy a bottle of type 0, otherwise we can't do that. By the same line of thought, it's easy to see that bit $j$ in the binary representation of $L$ is responsible for whether we should buy a bottle of type $j$. Finally, we have to buy exactly $ \lfloor L / 2^{n - 1} \rfloor $ bottles of type $n$. But what to do with the fact that we're allowed to buy more than $L$ liters? Suppose we buy $M > L$ liters. Consider the highest bit $j$ in which $M$ and $L$ differ. Since $M > L$, the $j$-th bit in $M$ is 1, and the $j$-th bit in $L$ is 0. But then all bits lower than the $j$-th in $M$ are 0 in the optimal answer, since these bits are responsible for the "extra" bottles - those for which we spend money for some reason, but without which we would still have $M > L$. Thus, here is the overall solution. Loop over the highest bit $j$ in which $M$ differs from $L$. 
Form the value of $M$, taking bits higher than the $j$-th from $L$, setting the $j$-th bit in $M$ to 1, and bits lower than the $j$-th to 0. Calculate the amount of money we have to pay to buy exactly $M$ liters of lemonade. Take the minimum over all $j$. The complexity of the solution is $O(n)$ or $O(n^{2})$, depending on the implementation.
[ "bitmasks", "dp", "greedy" ]
1,600
#include <bits/stdc++.h> using namespace std; int main() { int n, L; scanf("%d %d", &n, &L); vector<int> c(n); for (int i = 0; i < n; i++) { scanf("%d", &c[i]); } for (int i = 0; i < n - 1; i++) { c[i + 1] = min(c[i + 1], 2 * c[i]); } long long ans = (long long) 4e18; long long sum = 0; for (int i = n - 1; i >= 0; i--) { int need = L / (1 << i); sum += (long long) need * c[i]; L -= need << i; ans = min(ans, sum + (L > 0) * c[i]); } cout << ans << endl; return 0; }
913
D
Too Easy Problems
You are preparing for an exam on scheduling theory. The exam will last for exactly $T$ milliseconds and will consist of $n$ problems. You can either solve problem $i$ in exactly $t_{i}$ milliseconds or ignore it and spend no time. You don't need time to rest after solving a problem, either. Unfortunately, your teacher considers some of the problems too easy for you. Thus, he assigned an integer $a_{i}$ to every problem $i$ meaning that the problem $i$ can bring you a point to the final score only in case you have solved no more than $a_{i}$ problems overall (including problem $i$). Formally, suppose you solve problems $p_{1}, p_{2}, ..., p_{k}$ during the exam. Then, your final score $s$ will be equal to the number of values of $j$ between 1 and $k$ such that $k ≤ a_{p_{j}}$. You have guessed that the real first problem of the exam is already in front of you. Therefore, you want to choose a set of problems to solve during the exam maximizing your final score in advance. Don't forget that the exam is limited in time, and you must have enough time to solve all chosen problems. If there exist different sets of problems leading to the maximum final score, any of them will do.
The first observation is that if we solve a problem which doesn't bring us any points, we could as well ignore it, and that won't make our result worse. Therefore, there exists an answer in which all problems we solve bring us points. Let's consider only such answers from now on. Fix $k$. How to check if we can get exactly $k$ final points? Consider all problems with $a_{i} \ge k$. Obviously, only these problems suit us in this case. If there are fewer than $k$ such problems, we can't get the final score of $k$. Otherwise, let's choose the $k$ problems with the lowest $t_{i}$ among them. If the sum of these $t_{i}$ doesn't exceed $T$, then the final score of $k$ is possible, and impossible otherwise. Now we can loop over the value of $k$ and make the check above for each of them. The complexity of such a solution is $\Omega(n^{2})$, though. There are at least two ways to optimize this idea: 1) Use binary search on $k$. Indeed, if we can get the final score of $k$, the final score of $k - 1$ is also possible. In this case, the complexity of the solution will be $O(n\log n)$ or $O(n\log^{2}n)$, depending on the implementation. 2) Let's slowly decrease $k$ from $n$ to 0 and keep the set of problems with $a_{i} \ge k$ and the smallest $t_{i}$, and the sum of $t_{i}$ in the set. Every time there are more than $k$ problems in the set, remove the problem with the highest $t_{i}$. Find the largest $k$ such that the set contains at least $k$ problems and the sum of their $t_{i}$ doesn't exceed $T$. This $k$ is the answer. The complexity of this solution is $O(n\log n)$.
[ "binary search", "brute force", "data structures", "greedy", "sortings" ]
1,800
#include <bits/stdc++.h> using namespace std; int main() { int n, T; scanf("%d %d", &n, &T); vector< vector< pair<int, int> > > at(n + 1); for (int i = 0; i < n; i++) { int foo, bar; scanf("%d %d", &foo, &bar); at[foo].emplace_back(bar, i); } vector<int> res; set< pair<int, int> > s; int sum = 0; for (int k = n; k > 0; k--) { for (auto &p : at[k]) { sum += p.first; s.emplace(p); } while ((int) s.size() > k) { sum -= (--s.end())->first; s.erase(--s.end()); } if ((int) s.size() == k && sum <= T) { for (auto &p : s) { res.push_back(p.second); } break; } } int sz = (int) res.size(); printf("%d\n%d\n", sz, sz); for (int i = 0; i < sz; i++) { if (i > 0) { putchar(' '); } printf("%d", res[i] + 1); } printf("\n"); return 0; }
913
E
Logical Expression
You are given a boolean function of three variables which is defined by its truth table. You need to find an expression of minimum length that is equal to this function. The expression may consist of: - Operation AND ('&', ASCII code 38) - Operation OR ('|', ASCII code 124) - Operation NOT ('!', ASCII code 33) - Variables x, y and z (ASCII codes 120-122) - Parentheses ('(', ASCII code 40, and ')', ASCII code 41) If more than one expression of minimum length exists, you should find the lexicographically smallest one. Operations have standard priority: NOT has the highest priority, then AND, and OR has the lowest priority. The expression should satisfy the following grammar: E ::= E '|' T | T T ::= T '&' F | F F ::= '!' F | '(' E ')' | 'x' | 'y' | 'z'
The number of functions of three variables is $2^{2^{3}} = 256$. Note that for an expression, we're only interested in its truth table and the nonterminal from which this expression can be derived. So, there are $3 \cdot 256$ different states, and the problem is to find an expression of minimum length, and the lexicographically smallest among those, for every state. You can do it in different ways. For example, there's a solution which works in $O(n^{3})$, where $n$ is the number of states. While there are states without an answer, perform iterations. On every iteration, loop over all pairs of fixed states and apply the rules of the grammar to this pair where possible (i.e. these states match the nonterminals in the right part of the rule). Similarly, apply rules with one nonterminal in the right part. After that, find and fix the state with the shortest found expression among all states that haven't been fixed yet; if there is more than one shortest expression, choose the lexicographically smallest. A fixed state never changes its expression afterwards, so the found expression is the answer for this state - this holds for the same reason as in Dijkstra's algorithm. The answer for a function is the same as the answer for the state defined by this function and the nonterminal E, since any other nonterminal for this function can be reached from E using rules which don't change the expression. Thus, there are $n$ iterations and every iteration works in $O(n^{2})$, so this solution works in $O(n^{3})$. Such a solution might or might not be fast enough to get accepted. But if your solution gets a time-limit verdict, you can calculate the answer for every possible function (remember, there are only $256$ of them) locally and then submit a solution with an array of answers. A solution which works in $O(n^{2})$ is also possible. It uses an analogue of Dijkstra's algorithm as well, but on every iteration it only applies rules which involve the newly fixed state. This was not necessary to get accepted.
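To make the state encoding concrete: each function of $(x, y, z)$ is an 8-bit mask, one bit per row of the truth table. A sketch of how the three variable masks used in the solution code arise (the helper `varMask` is mine; row $i$ encodes the assignment $x = (i \gg 2) \& 1$, $y = (i \gg 1) \& 1$, $z = i \& 1$):

```cpp
// varMask(v): the 8-bit truth table of a single variable v in {'x','y','z'}.
// Bit i of the mask is the value of v on row i of the truth table.
int varMask(char v) {
    int mask = 0;
    for (int i = 0; i < 8; i++) {
        int bit = (v == 'x') ? (i >> 2) & 1
                : (v == 'y') ? (i >> 1) & 1
                :              i & 1;          // v == 'z'
        if (bit) mask |= 1 << i;
    }
    return mask;
}
```

With this encoding, AND is mask intersection, OR is union, and NOT is XOR with 255, which is exactly how the grammar rules combine states in the solution code below.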
[ "bitmasks", "dp", "shortest paths" ]
2,400
#include <bits/stdc++.h> using namespace std; string res[256][3]; bool changed; void update(string &a, string &b) { if (a == "" || (b.length() < a.length() || (b.length() == a.length() && b < a))) { changed = true; a = b; } } int main() { res[(1 << 4) + (1 << 5) + (1 << 6) + (1 << 7)][0] = "x"; res[(1 << 2) + (1 << 3) + (1 << 6) + (1 << 7)][0] = "y"; res[(1 << 1) + (1 << 3) + (1 << 5) + (1 << 7)][0] = "z"; changed = true; while (changed) { changed = false; for (int i = 0; i < 256; i++) { for (int j = 0; j < 3; j++) { if (res[i][j] == "") { continue; } { // op == 0 string s = res[i][j]; if (j > 0) { s = "(" + s + ")"; } s = "!" + s; update(res[i ^ 255][0], s); } for (int ii = 0; ii < 256; ii++) { for (int jj = 0; jj < 3; jj++) { if (res[ii][jj] == "") { continue; } for (int op = 1; op <= 2; op++) { string s = res[i][j]; if (j > op) { s = "(" + s + ")"; } string t = res[ii][jj]; if (jj > op) { t = "(" + t + ")"; } string w = s + (op == 1 ? '&' : '|') + t; update(res[op == 1 ? (i & ii) : (i | ii)][op], w); } } } } } } int tt; cin >> tt; while (tt--) { string z; cin >> z; int mask = 0; for (int i = 0; i < 8; i++) { mask |= (z[i] - '0') << i; } string best = ""; for (int j = 0; j < 3; j++) { update(best, res[mask][j]); } cout << best << endl; } return 0; }
913
F
Strongly Connected Tournament
There is a chess tournament in All-Right-City. $n$ players were invited to take part in the competition. The tournament is held by the following rules: 1. Initially, each player plays one game with every other player; 2. There are no ties; 3. After that, the organizers build a complete directed graph with players as vertices. For every pair of players there is exactly one directed edge between them: the winner of their game is the startpoint of this edge and the loser is the endpoint; 4. After that, the organizers build a condensation of this graph. The condensation of this graph is an acyclic complete graph, therefore it has a unique Hamiltonian path which consists of strongly connected components of the initial graph $A_{1} → A_{2} → ... → A_{k}$; 5. The players from the first component $A_{1}$ are placed on the first $|A_{1}|$ places, the players from the component $A_{2}$ are placed on the next $|A_{2}|$ places, and so on; 6. To determine the exact place of each player in a strongly connected component, all the procedures from 1 to 5 are repeated recursively inside each component, i.e. for every $i = 1, 2, ..., k$ players from the component $A_{i}$ play games with each other again, and so on; 7. If a component consists of a single player, then he has no more rivals, his place is already determined and the process stops. The players are enumerated with integers from $1$ to $n$. The enumeration was made using results of a previous tournament. It is known that player $i$ wins against player $j$ ($i < j$) with probability $p$. You need to help to organize the tournament. Find the expected value of the total number of games played by all the players. It can be shown that the answer can be represented as $\frac{P}{Q}$, where $P$ and $Q$ are coprime integers and $Q\not\equiv0{\mathrm{~(mod~998244353)}}$. Print the value of $P·Q^{ - 1}$ modulo $998244353$. If you are not familiar with any of the terms above, you can read about them here.
The probability that player $i$ beats player $j$ depends only on whether $i < j$, so the answer for a subset of players of size $s$ doesn't depend on the set, only on its size. Let $ans(s)$ be the answer for a set of $s$ players. Let's calculate the answer using dynamic programming. $ans(0) = ans(1) = 0$. For larger $s$ let's use the law of total expectation. Let $i$ be the size of the last strongly connected component. $ans(s)=\sum_{i=1}^{s}strong(i)\cdot cp(s,i)\cdot(i\cdot(s-i)+\frac{i\cdot(i-1)}{2}+ans(i)+ans(s-i))$ - equation (1). The $i$-th term consists of three factors. The first factor $strong(i)$ is the probability that the graph on $i$ players is strongly connected; we'll describe how to calculate it below. The second factor $cp(s, i)$ is the probability that in a set of $s$ players there are $i$ players who lost all their games against the remaining players. The product of these two factors is the probability that the size of the last strongly connected component is $i$. The third factor is the expected number of games under the condition that the last component has size $i$: $i \cdot (s - i)$ is the number of games between players of the last component and the remaining players, $\frac{i\cdot(i-1)}{2}$ is the number of games inside the last component, and $ans(i)$ is the expected number of games played in the last component on the next round. After that we need to add the number of games between all the players except those of the last component, and the values of $ans$ for the sizes of the other components. Note that these terms describe the game on the set of players without the last component, so their sum is exactly $ans(s - i)$. Also note that the right side of equation (1) contains a term involving $ans(s)$ itself. To overcome this, move that term to the left side and solve the resulting linear equation in $ans(s)$. How to calculate $strong(s)$? $strong(1) = 1$ because a graph with one vertex is strongly connected. When $s \ge 2$ we have $strong(s)=1-\sum_{i=1}^{s-1}strong(i)\cdot cp(s,i)$. Here we use the law of total probability, iterating $i$ over the possible sizes of the last component. How to calculate $cp(s, i)$? $cp(s, 0) = 1$ because there always are $0$ players who lost all their games to the others. $cp(s, i) = cp(s - 1, i) \cdot (1 - p)^{i} + cp(s - 1, i - 1) \cdot p^{s - i}$, where $p$ is the probability that the player with the smaller index wins. Here we use the law of total probability again. The first term describes the case when the player with the largest index is not in the set of players who lost to the others, so he must beat all $i$ players from the set. The second term describes the case when the player with the largest index is in the set; in this case he must lose his games to all the players who are not in the set. In summary, the value of $ans(n)$ can be calculated in $O(n^{2})$ time, and this value is the answer to the problem.
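As a sanity check of these recurrences, one can evaluate them over doubles for the symmetric case $p = 1/2$, where every labeled tournament is equally likely (a sketch; the function name is mine, and the actual solution performs the same computation modulo $998244353$):

```cpp
#include <cmath>
#include <vector>

// strongProb(s, p): probability that the tournament on s players, where the
// lower-indexed player wins each game with probability p, is strongly
// connected, computed from the strong() and cp() recurrences above.
double strongProb(int s, double p) {
    std::vector<std::vector<double>> cp(s + 1, std::vector<double>(s + 1, 0.0));
    cp[0][0] = 1.0;
    for (int n = 1; n <= s; n++)
        for (int i = 0; i <= n; i++) {
            cp[n][i] = cp[n - 1][i] * std::pow(1.0 - p, i);
            if (i > 0) cp[n][i] += cp[n - 1][i - 1] * std::pow(p, n - i);
        }
    std::vector<double> strong(s + 1, 0.0);
    strong[1] = 1.0;
    for (int n = 2; n <= s; n++) {
        strong[n] = 1.0;
        for (int i = 1; i < n; i++) strong[n] -= strong[i] * cp[n][i];
    }
    return strong[s];
}
```

For $p = 1/2$ there are $2$ strongly connected labeled tournaments on $3$ vertices out of $2^{3} = 8$ and $24$ out of $2^{6} = 64$ on $4$ vertices, so $strong(3) = 1/4$ and $strong(4) = 3/8$, which the recurrences reproduce.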
[ "dp", "graphs", "math", "probabilities" ]
2,800
#include <bits/stdc++.h> using namespace std; const int md = 998244353; inline void add(int &a, int b) { a += b; if (a >= md) a -= md; } inline void sub(int &a, int b) { a -= b; if (a < 0) a += md; } inline int mul(int a, int b) { return (int) ((long long) a * b % md); } inline int power(int a, long long b) { int res = 1; while (b > 0) { if (b & 1) { res = mul(res, a); } a = mul(a, a); b >>= 1; } return res; } inline int inv(int a) { return power(a, md - 2); } int main() { int n, P, Q; cin >> n >> P >> Q; int p = mul(P, inv(Q)); vector< vector<int> > p_win(n + 1, vector<int>(n + 1, 0)); p_win[0][0] = 1; for (int i = 0; i < n; i++) { for (int j = 0; j <= i; j++) { add(p_win[i + 1][j], mul(p_win[i][j], power(p, j))); add(p_win[i + 1][j + 1], mul(p_win[i][j], power((1 - p + md) % md, i - j))); } } vector<int> p_strong(n + 1); for (int i = 1; i <= n; i++) { p_strong[i] = 1; for (int j = 1; j < i; j++) { sub(p_strong[i], mul(p_strong[j], p_win[i][j])); } } vector<int> res(n + 1); res[1] = 0; for (int i = 2; i <= n; i++) { res[i] = 0; for (int j = 1; j < i; j++) { int coeff = mul(p_strong[j], p_win[i][j]); int games = (res[j] + res[i - j]) % md; add(games, j * (j - 1) / 2 + j * (i - j)); add(res[i], mul(coeff, games)); } add(res[i], mul(i * (i - 1) / 2, p_strong[i])); res[i] = mul(res[i], inv((1 - p_strong[i] + md) % md)); } cout << res[n] << endl; return 0; }
913
G
Power Substring
You are given $n$ positive integers $a_{1}, a_{2}, ..., a_{n}$. For every $a_{i}$ you need to find a positive integer $k_{i}$ such that the decimal notation of $2^{k_{i}}$ contains the decimal notation of $a_{i}$ as a substring among its last $min(100, length(2^{k_{i}}))$ digits. Here $length(m)$ is the length of the decimal notation of $m$. Note that you don't have to minimize $k_{i}$. The decimal notations in this problem do not contain leading zeros.
Let's solve the problem for every $a_{i}$ separately. Let $n = length(a_{i})$. Choose $m$ and $b$ such that $0 \le b < 10^{m}$, and let's find $k$ such that the decimal notation of $2^{k}$ ends with $x = a_{i} \cdot 10^{m} + b$. The congruence $2^{k}\equiv x\ \mathrm{mod}\ 10^{n+m}$ is necessary and sufficient for that. Let's look for $k$ with $k \ge n + m$. In this case $2^{n+m}\mid2^{k}$ and $2^{n+m}\mid10^{n+m}$, therefore we need $2^{n+m}\mid x$ - equation (1). Also $5\mid10^{n+m}$ but $5\nmid2^{k}$, therefore we need $5\nmid x$ - equation (2). For fixed $m$ we have $10^{m}$ ways to choose $b$; these values generate a segment of $10^{m}$ consecutive integer values of $x$. If we choose a large enough value of $m$, e.g. such $m$ that $10^{m} \ge 2 \cdot 2^{n + m}$, then this segment contains at least two multiples of $2^{n+m}$, and two consecutive multiples of $2^{n+m}$ cannot both be divisible by $5$, so we can choose a value of $x$ satisfying (1) and (2). Consider the congruence $2^{k}\equiv x\;\;\mathrm{mod}\;10^{n+m}$. Every number in it is divisible by $2^{n + m}$, so let's divide everything by $2^{n+m}$. The result is $2^{k-n-m}\equiv y\;\;\mathrm{mod}\;5^{n+m}$ - equation (3), where $y=\frac{x}{2^{n+m}}$. Use the following lemma: Lemma. For every positive integer $t$, the number $2$ is a primitive root modulo $5^{t}$. Proof. The proof of this lemma is left to the reader as an exercise. It follows from the lemma that for every $y$ coprime to $5$ there exists $k$ satisfying equation (3): it is $k = n + m + \log_{2}(y)$, where $\log$ is the discrete logarithm modulo $5^{n + m}$. In this problem $n \le 11$, and $m = 6$ is enough for such $n$, so $5^{n + m} \le 5^{17}$. For such a modulus the baby-step giant-step method works too slowly. Instead, use the following algorithm for computing a discrete logarithm modulo $p^{\alpha}$ in time $O(p\alpha)$. To find $\log_{g}(x)$, let $d_{i}$ be a discrete logarithm of $x$ modulo $p^{i}$, i.e. $g^{d_{i}} \equiv x\ \mathrm{mod}\ p^{i}$. Calculate $d_{1}$ naively in time $O(p)$. After that we can find $d_{i + 1}$ from the value of $d_{i}$: $d_{i + 1} = d_{i} + j \cdot \phi(p^{i})$ for some $j$ in the range $0 \ldots p - 1$. For every such $j$, check the congruence $g^{d_{i}+j\cdot\phi(p^{i})}\equiv x~~\mathrm{mod}~p^{i+1}$ and choose the suitable one.
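A sketch of this lifting for the concrete case $g = 2$, $p = 5$ of the problem (the helper names are mine; plain 64-bit multiplication is fine only for moderate $t$, while the full $5^{17}$ modulus needs 128-bit or addition-based multiplication, and $x$ must not be divisible by $5$):

```cpp
// powmod(a, b, m): a^b mod m via binary exponentiation.
long long powmod(long long a, long long b, long long m) {
    long long r = 1 % m;
    a %= m;
    while (b > 0) {
        if (b & 1) r = r * a % m;
        a = a * a % m;
        b >>= 1;
    }
    return r;
}

// dlog2mod5(x, t): a k with 2^k = x (mod 5^t); requires x % 5 != 0.
long long dlog2mod5(long long x, int t) {
    long long d = 0;
    while (powmod(2, d, 5) != x % 5) d++;   // naive step: d_1, in O(5)
    long long mod = 5, phi = 4;             // phi(5^1) = 4
    for (int i = 1; i < t; i++) {
        mod *= 5;                           // mod = 5^(i+1)
        // lift: d_{i+1} = d_i + j * phi(5^i) for some j in 0..4
        while (powmod(2, d, mod) != x % mod) d += phi;
        phi *= 5;                           // phi(5^(i+1)) = 5 * phi(5^i)
    }
    return d;
}
```

For example, $2^{7} = 128 \equiv 3 \pmod{25}$, so `dlog2mod5(3, 2)` returns $7$.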
[ "math", "number theory" ]
3,200
#include <bits/stdc++.h> using namespace std; inline long long mul(long long a, long long b, long long md) { long long res = 0; while (b > 0) { if (b & 1) { res += a; if (res >= md) res -= md; } a += a; if (a >= md) a -= md; b >>= 1; } return res; } inline long long power(long long a, long long b, long long md) { long long res = 1; while (b > 0) { if (b & 1) { res = mul(res, a, md); } a = mul(a, a, md); b >>= 1; } return res; } int main() { int tt; cin >> tt; while (tt--) { long long a; cin >> a; int n = (int) to_string(a).length(); for (int m = 0; ; m++) { long long b = (-a) & ((1LL << (n + m)) - 1); if ((a + b) % 5 == 0) { b += (1LL << (n + m)); } if ((b == 0 && m == 0) || (int) to_string(b).length() <= m) { long long c = (a + b) >> (n + m); long long t = vector<long long>{-1, 0, 1, 3, 2}[c % 5]; long long p5 = 5; for (int i = 1; i < n + m; i++) { while (power(2, t, p5 * 5) != c % (p5 * 5)) { t += p5 / 5 * 4; } p5 *= 5; } t += n + m; cout << t << endl; break; } a *= 10; } } return 0; }
913
H
Don't Exceed
You generate real numbers $s_{1}, s_{2}, ..., s_{n}$ as follows: - $s_{0} = 0$; - $s_{i} = s_{i - 1} + t_{i}$, where $t_{i}$ is a real number chosen independently uniformly at random between 0 and 1, inclusive. You are given real numbers $x_{1}, x_{2}, ..., x_{n}$. You are interested in the probability that $s_{i} ≤ x_{i}$ is true for all $i$ simultaneously. It can be shown that this can be represented as $\frac{P}{Q}$, where $P$ and $Q$ are coprime integers, and $Q\not\equiv0{\mathrm{~(mod~998244353)}}$. Print the value of $P·Q^{ - 1}$ modulo $998244353$.
If $t_{i}$ were random integers and not reals, it would be natural to solve the problem using dynamic programming: $f(i, j)$ - the probability that the required inequalities are satisfied if $s_{i} = j$. But in the case of real numbers, the probability that a random real number equals any particular value is zero. What to do then? A natural solution is to maintain $f(i, x)$ as a function of $x$; that is, $f(i, x)$ will be the probability density function of $s_{i}$ (restricted to the event that all inequalities up to index $i$ hold). What is the form of $f(i, x)$ as a function of $x$ for fixed $i$? It turns out that this function is piecewise polynomial. Indeed, for $i = 1$ we have $f(i, x) = 1$ for $x < min(1, x_{1})$ and $f(i, x) = 0$ otherwise. For all $i > 1$, for $x > x_{i}$ we have $f(i, x) = 0$, while for $x \le x_{i}$ the following identity is true: $f(i,x)=\textstyle\int_{x-1}^{x}f(i-1,y)dy$. For fixed $x$, what is the value of this integral? Several pieces of $f(i - 1, x)$ fall inside the integration range. The integral over the pieces which fall inside entirely is a constant. The integral over the single piece which is close to $x$ (so the start of this piece falls inside the range, but the end of this piece doesn't) is the integral of the corresponding piece of $f(i - 1, x)$, which is an integral of a polynomial, hence a polynomial. Similarly, the integral over the opposite piece (close to $x - 1$) is the integral of the corresponding piece of $f(i - 1, x)$ under the substitution of $x - 1$, which is also a polynomial. By induction we can see that $f(i, x)$ is a piecewise polynomial function of $x$. The solution of the problem is indeed maintaining $f(i, x)$ as polynomials on the corresponding intervals. It's easy to see (by induction) that the polynomials in $f(i, x)$ have degree at most $i - 1$. To estimate the complexity of this solution, note that the ends of the pieces have the following form: $x_{i} + c$, where $c$ is an integer. Since we're only interested in the ends of the pieces which are between 0 and $n$, we have $O(n^{2})$ pieces overall.
Since we have to calculate $n$ layers of DP, each function is a polynomial of degree at most $n$, and substituting $x - 1$ into a polynomial takes $O(n^{2})$, the overall complexity of the solution is $O(n^{5})$.
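For intuition, the quantity being computed can be cross-checked for $n = 2$ by direct numeric integration over the unit square of $(t_{1}, t_{2})$ (a brute-force sketch, not the piecewise-polynomial DP of the solution; the function name is mine):

```cpp
// probTwo(x1, x2): midpoint-rule estimate of P(t1 <= x1 and t1 + t2 <= x2),
// where t1 and t2 are independent and uniform on [0, 1].
double probTwo(double x1, double x2, int steps = 1000) {
    double h = 1.0 / steps, hits = 0.0;
    for (int i = 0; i < steps; i++)
        for (int j = 0; j < steps; j++) {
            double t1 = (i + 0.5) * h, t2 = (j + 0.5) * h;
            if (t1 <= x1 && t1 + t2 <= x2) hits += 1.0;
        }
    return hits * h * h;   // fraction of the unit square satisfying both
}
```

For $x_{1} = x_{2} = 1$ the exact answer is the area of the triangle $t_{1} + t_{2} \le 1$, i.e. $1/2$, and the DP reproduces it: $f(2, x) = x$ on $[0, 1]$, whose integral over $[0, 1]$ is $1/2$.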
[ "math", "probabilities" ]
3,400
#include <bits/stdc++.h> using namespace std; const int md = 998244353; inline void add(int &a, int b) { a += b; if (a >= md) a -= md; } inline void sub(int &a, int b) { a -= b; if (a < 0) a += md; } inline int mul(int a, int b) { return (int) ((long long) a * b % md); } inline int power(int a, long long b) { int res = 1; while (b > 0) { if (b & 1) { res = mul(res, a); } a = mul(a, a); b >>= 1; } return res; } inline int inv(int a) { return power(a, md - 2); } typedef vector<int> poly; poly integrate(poly a) { poly b = {0}; for (int i = 0; i < (int) a.size(); i++) { b.push_back(mul(a[i], inv(i + 1))); } return b; } void sub(poly &a, poly b) { while (a.size() < b.size()) { a.push_back(0); } for (int i = 0; i < (int) b.size(); i++) { sub(a[i], b[i]); } } int eval(poly a, int x) { int res = 0; for (int i = (int) a.size() - 1; i >= 0; i--) { res = mul(res, x); add(res, a[i]); } return res; } const int COEFF = 1000000; int main() { int n; cin >> n; vector<int> x(n), fracs; for (int i = 0; i < n; i++) { double foo; cin >> foo; x[i] = (int) (foo * COEFF + 0.5); fracs.push_back(x[i] % COEFF); } fracs.push_back(0); sort(fracs.begin(), fracs.end()); fracs.resize(unique(fracs.begin(), fracs.end()) - fracs.begin()); int cnt = (int) fracs.size(); vector<int> point(n * cnt + 1); for (int i = 0; i <= n * cnt; i++) { point[i] = i / cnt * COEFF + fracs[i % cnt]; } vector<int> cut(n); for (int i = 0; i < n; i++) { cut[i] = (int) (find(point.begin(), point.end(), x[i]) - point.begin()); } vector<int> sz(n * cnt); for (int i = 0; i < n * cnt; i++) { sz[i] = mul((point[i + 1] - point[i] + md) % md, inv(COEFF)); } vector<poly> a(n * cnt); vector<int> sum(n * cnt); for (int i = 0; i < n * cnt; i++) { a[i] = i < min(cnt, cut[0]) ? vector<int>{0, 1} : vector<int>{0}; sum[i] = a[i].size() == 1 ? 
0 : sz[i]; } for (int id = 1; id < n; id++) { for (int i = n * cnt - 1; i >= 0; i--) { if (i >= cut[id]) { a[i] = {0}; sum[i] = 0; } else { for (int j = i - 1; j >= i - cnt && j >= 0; j--) { add(a[i][0], sum[j]); } if (i - cnt >= 0) { sub(a[i], a[i - cnt]); } a[i] = integrate(a[i]); sum[i] = eval(a[i], sz[i]); } } } int ans = 0; for (int i = 0; i < n * cnt; i++) { add(ans, sum[i]); } printf("%d\n", ans); return 0; }
914
A
Perfect Squares
Given an array $a_{1}, a_{2}, ..., a_{n}$ of $n$ integers, find the largest number in the array that is not a perfect square. A number $x$ is said to be a perfect square if there exists an integer $y$ such that $x = y^{2}$.
For each number, check whether it is a perfect square (by iterating over $y$ while $y^{2} \le x$, or by using the sqrt function). The answer is the largest number which isn't a perfect square; note that negative numbers can't be perfect squares. Make sure you initialize your max variable to $- 10^{6}$ instead of $0$, since all elements may be negative.
[ "brute force", "implementation", "math" ]
900
#include <bits/stdc++.h> using namespace std; int main() { long long ans=LLONG_MIN, n, x; cin>>n; for (long long i = 0; i < n; i++) { cin>>x; for (long long j = 0; j*j<=x; j++) if (j*j == x) x = LLONG_MIN; ans = max(ans, x); } cout << ans << endl; return 0; }
914
B
Conan and Agasa play a Card Game
Edogawa Conan got tired of solving cases, and invited his friend, Professor Agasa, over. They decided to play a game of cards. Conan has $n$ cards, and the $i$-th card has a number $a_{i}$ written on it. They take turns playing, starting with Conan. In each turn, the player chooses a card and removes it. Also, he removes all cards having a number strictly lesser than the number on the chosen card. Formally, if the player chooses the $i$-th card, he removes that card and removes the $j$-th card for all $j$ such that $a_{j} < a_{i}$. A player loses if he cannot make a move on his turn, that is, he loses if there are no cards left. Predict the outcome of the game, assuming both players play optimally.
Let $A = max (a_{1}, a_{2}, ..., a_{n})$. Observe that if $A$ occurs an odd number of times, Conan can simply begin by removing one instance of $A$. If there are any cards left, they all have the same number $A$ on them. Now each player can only remove one card in their turn, and they take turns doing so. Since there were an odd number of cards having $A$ on them initially, this keeps continuing until finally, in one of Agasa's turns, there are no cards left. However, if $A$ occurs an even number of times, Conan cannot choose a card having $A$ on it because it will leave Agasa with an odd number of cards having $A$. This will result in both players picking cards one by one, ending with Agasa picking the last card, and thus winning. In such a case, Conan can consider picking the next distinct largest number in the array, say $B$. If $B$ occurs an odd number of times, then after Conan's turn there will be an even number of cards having $B$ and an even number of cards having $A$. If Agasa takes a card having $A$ then it becomes the same as the previous case and Conan wins. Otherwise, they take turns choosing a card having $B$ until finally, on one of Agasa's turns, there are no cards having $B$ and Agasa is forced to pick a card having $A$. Now it is Conan's turn and there are an odd number of cards having $A$, so it is again the same as the first case and Conan wins. By a similar argument, we can show that if Conan plays optimally, he starts by picking a card having the greatest number that occurs an odd number of times. Conan loses if and only if there is no such number, i.e., Conan loses if and only if every number occurs an even number of times.
[ "games", "greedy", "implementation" ]
1,200
#include <bits/stdc++.h> using namespace std; using ll = long long; int cnt[100005]; int main() { ios::sync_with_stdio(false); int n; cin >> n; while(n--) { int x; cin >> x; cnt[x]++; } for (int i = 1; i <= 1e5; i++) { if (cnt[i] % 2 == 1) { cout << "Conan\n"; return 0; } } cout << "Agasa\n"; return 0; }
914
C
Travelling Salesman and Special Numbers
The Travelling Salesman spends a lot of time travelling so he tends to get bored. To pass time, he likes to perform operations on numbers. One such operation is to take a positive integer $x$ and reduce it to the number of bits set to $1$ in the binary representation of $x$. For example for number $13$ it's true that $13_{10} = 1101_{2}$, so it has $3$ bits set and $13$ will be reduced to $3$ in one operation. He calls a number special if the minimum number of operations to reduce it to $1$ is $k$. He wants to find out how many special numbers exist which are not greater than $n$. Please help the Travelling Salesman, as he is about to reach his destination! Since the answer can be large, output it modulo $10^{9} + 7$.
Let us denote the minimum number of operations it takes to reduce a number to $1$ as the order of that number. Since a number smaller than $2^{1000}$ has at most $1000$ set bits, any number smaller than $2^{1000}$ reduces to a number not greater than $1000$ in one step. We can precompute the orders of the first $1000$ numbers. For each $x$ ($x < 1000$) such that $x$ has order $k - 1$, we need to count the numbers less than or equal to $n$ that have exactly $x$ set bits: every such number has order $k$. Let $l$ be the number of digits in the binary representation of $n$. Every number $y < n$ satisfies the following property: for some $i$ ($1 \le i \le l$), the first $i - 1$ binary digits of $y$ are the same as those of $n$, the $i$-th digit of $n$ is $1$, and the $i$-th digit of $y$ is $0$. We can iterate over all possible $i$ and count the suitable numbers using precomputed binomial coefficients; this takes $O(l)$ per candidate $x$. Handle $n$ itself and the edge cases $k = 0$ and $k = 1$ (the number $1$ has order $0$) separately. Time complexity: $O(l)$ per candidate popcount after an $O(l^{2})$ precomputation of binomial coefficients, where $l$ is the length of the binary representation of $n$.
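The precomputation of orders is only a few lines (a sketch; `__builtin_popcount` is the GCC intrinsic, and the table only needs to cover values up to $1000$):

```cpp
#include <vector>

// computeOrders(limit): ord[x] = minimum number of popcount steps to reach 1.
// ord[1] = 0 and ord[x] = ord[popcount(x)] + 1, since popcount(x) < x for x > 1.
std::vector<int> computeOrders(int limit = 1000) {
    std::vector<int> ord(limit + 1, 0);
    for (int x = 2; x <= limit; x++)
        ord[x] = ord[__builtin_popcount(x)] + 1;
    return ord;
}
```

For example $13 = 1101_{2} \to 3 \to 2 \to 1$, so the order of $13$ is $3$.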
[ "brute force", "combinatorics", "dp" ]
1,800
#include <bits/stdc++.h> using namespace std; #define MOD 1000000007 int dp[1004]; long long ncr[1004][1004]; int ones(int n) { int cnt = 0; while(n) { if(n%2 == 1) { cnt++; } n /= 2; } return cnt; } void calcncr() { for(int i = 0; i <= 1000; i++) { ncr[i][0] = 1; } for(int i = 1; i <= 1000; i++) { for(int j = 1; j <= 1000; j++) { ncr[i][j] = (ncr[i-1][j-1] + ncr[i-1][j])%MOD; } } } int main() { string n; int k; calcncr(); dp[1] = 0; for(int i = 2; i <= 1000; i++) { dp[i] = dp[ones(i)] + 1; } cin >> n >> k; if(k == 0) { cout << "1\n"; return 0; } long long nones = 0, ans = 0; for(int i = 0; i < n.size(); i++) { if(n[i] == '0') { continue; } for(int j = max(nones, 1LL); j < 1000; j++) { if(dp[j] == k-1) { ans = (ans + ncr[n.size()-i-1][j-nones])%MOD; if(i == 0 && k == 1) { ans = (ans+MOD-1)%MOD; } } } nones++; } int cnt = 0; for(int i = 0; i < n.size(); i++) { if(n[i] == '1') { cnt++; } } if(dp[cnt] == k-1) { ans = (ans + 1)%MOD; } cout << ans << endl; return 0; }
914
D
Bash and a Tough Math Puzzle
Bash likes playing with arrays. He has an array $a_{1}, a_{2}, ... a_{n}$ of $n$ integers. He likes to guess the greatest common divisor (gcd) of different segments of the array. Of course, sometimes the guess is not correct. However, Bash will be satisfied if his guess is almost correct. Suppose he guesses that the gcd of the elements in the range $[l, r]$ of $a$ is $x$. He considers the guess to be almost correct if he can change \textbf{at most} one element in the segment such that the gcd of the segment is $x$ after making the change. Note that when he guesses, he doesn't actually change the array — he just wonders if the gcd of the segment can be made $x$. Apart from this, he also sometimes makes changes to the array itself. Since he can't figure it out himself, Bash wants you to tell him which of his guesses are almost correct. Formally, you have to process $q$ queries of one of the following forms: - $1 l r x$ — Bash guesses that the gcd of the range $[l, r]$ is $x$. Report if this guess is almost correct. - $2 i y$ — Bash sets $a_{i}$ to $y$. \textbf{Note:} The array is $1$-indexed.
Build a segment tree on the array to answer range gcd queries; single-element updates are handled in the segment tree as usual. Let us see how to answer an $(l, r, x)$ query. The segment tree decomposes the query range into $O(\log n)$ nodes that cover the range. If the gcds of all of these nodes are multiples of $x$, then the answer is YES. If the gcds of two or more of these nodes are not multiples of $x$, then the answer is NO. If the gcd of exactly one node is not a multiple of $x$, then we need to know how many elements in this node are not multiples of $x$. We can find this by descending through that node's descendants: if exactly one child's gcd is not a multiple of $x$, recurse into it; if at any point both children's gcds are not multiples of $x$, then the answer is NO. Otherwise, if we reach a leaf, then the answer is YES, since at most one element needs to be changed.
[ "data structures", "number theory" ]
1,900
#include <bits/stdc++.h> using namespace std; using ll = long long; int tree[2000000]; int trstp = 1; int gcd(int x, int y) { return y == 0 ? x : gcd(y, x % y); } void query(int root, int u, int v, int x, int s, int e, int& ans) { if (e < s || v < u || e < u || v < s) { return; } else if (u <= s && e <= v) { if (tree[root] % x == 0) { return; } else { while (root < trstp) { if (tree[2*root] % x != 0) { if (tree[2*root + 1] % x != 0) { ans += 2; return; } root = 2*root; } else { root = 2*root + 1; } } ans++; return; } } int mid = (s + e)/2; query(2*root, u, v, x, s, mid, ans); if (ans > 1) { return; } query(2*root + 1, u, v, x, mid + 1, e, ans); } void update(int p, int x) { p += trstp - 1; tree[p] = x; p /= 2; while (p > 0) { tree[p] = gcd(tree[2*p], tree[2*p + 1]); p /= 2; } } int main() { ios::sync_with_stdio(false); cin.tie(NULL); int n; cin >> n; while (trstp < n) { trstp *= 2; } for (int i = trstp; i < trstp + n; i++) { cin >> tree[i]; } for (int i = trstp - 1; i >= 1; i--) { tree[i] = gcd(tree[2*i], tree[2*i + 1]); } int q; cin >> q; while (q--) { int t; cin >> t; if (t == 1) { int l, r; int x; cin >> l >> r >> x; int ans = 0; query(1, l, r, x, 1, trstp, ans); cout << (ans <= 1 ? "YES\n" : "NO\n"); } else { int i; int y; cin >> i >> y; update(i, y); } } return 0; }
914
E
Palindromes in a Tree
You are given a tree (a connected acyclic undirected graph) of $n$ vertices. Vertices are numbered from $1$ to $n$ and each vertex is assigned a character from a to t. A path in the tree is said to be palindromic if at least one permutation of the labels in the path is a palindrome. For each vertex, output the number of palindromic paths passing through it. \textbf{Note:} The path from vertex $u$ to vertex $v$ is considered to be the same as the path from vertex $v$ to vertex $u$, and this path will be counted only once for each of the vertices it passes through.
The problem can be solved by centroid decomposition. A path is palindromic if and only if at most one letter appears an odd number of times on it. We maintain a bitmask for each node, where the $i$-th bit is $1$ if the $i$-th character occurs an odd number of times on the path from the current centroid to the node, and $0$ otherwise. The path from $u$ to $v$ through the centroid is valid if mask[$u$] ^ mask[$v$] has at most one bit set to $1$. Consider a part as the subtree of an immediate child of the root of the centroid tree. For a node, we need to consider the paths that go from its subtree to any other part. We add the contribution of the nodes in the subtree of a node using a simple dfs, propagating the values upwards, and add the corresponding contribution to the answer of the node currently in consideration. The complexity is $O(20 \cdot n \log n)$.
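The per-pair test on masks is a single bit trick (a sketch; the helper name is mine):

```cpp
// palindromicPair(maskU, maskV): the path u-v (through the centroid) can be
// permuted into a palindrome iff the xor of the parity masks has at most one
// bit set, i.e. it is zero or a power of two.
bool palindromicPair(int maskU, int maskV) {
    int d = maskU ^ maskV;
    return (d & (d - 1)) == 0;
}
```

If the xor is zero, every letter occurs an even number of times on the path; if it is a power of two, exactly one letter occurs an odd number of times.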
[ "bitmasks", "data structures", "divide and conquer", "trees" ]
2,400
#include <bits/stdc++.h> #include <ext/pb_ds/assoc_container.hpp> #include <ext/pb_ds/tree_policy.hpp> using namespace std; using namespace __gnu_pbds; #define ll long long #define db long double #define ii pair<int,int> #define vi vector<int> #define fi first #define se second #define sz(a) (int)(a).size() #define all(a) (a).begin(),(a).end() #define pb emplace_back #define mp make_pair #define FN(i, n) for (int i = 0; i < (int)(n); ++i) #define FEN(i,n) for (int i = 1;i <= (int)(n); ++i) #define rep(i,a,b) for(int i=a;i<b;i++) #define repv(i,a,b) for(int i=b-1;i>=a;i--) #define SET(A, val) memset(A, val, sizeof(A)) typedef tree<int ,null_type,less<int>,rb_tree_tag,tree_order_statistics_node_update>ordered_set ; // order_of_key (val): returns the no. of values less than val // find_by_order (k): returns the kth largest element.(0-based) #define TRACE #ifdef TRACE #define trace(...) __f(#__VA_ARGS__, __VA_ARGS__) template <typename Arg1> void __f(const char* name, Arg1&& arg1){ cerr << name << " : " << arg1 << std::endl; } template <typename Arg1, typename... Args> void __f(const char* names, Arg1&& arg1, Args&&... args){ const char* comma = strchr(names + 1, ','); cerr.write(names, comma - names) << " : " << arg1<<" | ";__f(comma+1, args...); } #else #define trace(...) 
#endif const int L=200005,N=(1<<20); //call dfsMCT,change solve and dfs1 vi vpart[L],ad[L]; bool vis[L];//par:parent in cent|H:depth int SZ[L],par[L],part[L],H[L],cpart;//cpart:no of parts in cent ll ans[L],dp[L]; string s; int dfsSZ(int u,int p=-1) { SZ[u]=1;//vpart[i]:nodes in part i for(int v:ad[u]) if(!vis[v] && v!=p) SZ[u]+=dfsSZ(v,u); return SZ[u]; } int dfsFC(int u,int r,int p=-1) { for(int v:ad[u]) if(!vis[v] && v!=p) { if(SZ[v]>SZ[r]/2) return dfsFC(v,r,u); } return u; } int cnt[N],val[L],tmp[N]; void dfs1(int u,int r,int p=-1) { dp[u]=0; if(p==r) { part[u]=++cpart; vpart[cpart].clear(); } else if(p!=-1) part[u]=part[p]; if(p!=-1) vpart[cpart].pb(u); for(int v1:ad[u]) if(!vis[v1]&&v1!=p) { H[v1]=H[u]+1; val[v1]=(val[u]^(1<<(s[v1-1]-'a'))); dfs1(v1,r,u); } } void dfs2(int u,int r,int par=-1) { dp[u]=cnt[val[u]]; rep(i,0,20) dp[u]+=cnt[val[u]^(1<<i)]; for(int v1:ad[u]) { if(vis[v1] || v1==par) continue; dfs2(v1,r,u); dp[u]+=dp[v1]; } ans[u]+=dp[u]; } void solve(int r,int szr) { H[r]=cpart=0; val[r]=(1<<(s[r-1]-'a')); dfs1(r,r); rep(i,1,cpart+1) for(int v1:vpart[i]) cnt[val[v1]]++; cnt[val[r]]++; rep(i,1,cpart+1) { for(int v1:vpart[i]) { cnt[val[v1]]--; val[v1]^=(1<<(s[r-1]-'a')); tmp[val[v1]]++; } for(int v1:vpart[i]) { if(tmp[val[v1]]) { dp[r]+=(ll)tmp[val[v1]]*cnt[val[v1]]; rep(j,0,20) dp[r]+=(ll)tmp[val[v1]]*cnt[val[v1]^(1<<j)]; tmp[val[v1]]=0; } } dfs2(vpart[i][0],r,r); for(int v1:vpart[i]) { val[v1]^=(1<<(s[r-1]-'a')); cnt[val[v1]]++; } } cnt[val[r]]--; dp[r]+=cnt[0]; rep(i,0,20) dp[r]+=cnt[(1<<i)]; ans[r]+=dp[r]/2; rep(i,1,cpart+1) for(int v1:vpart[i]) cnt[val[v1]]--; } void dfsMCT(int u,int p=-1) { dfsSZ(u); int r=dfsFC(u,u); par[r]=p; solve(r,SZ[u]); vis[r]=true; for(int v:ad[r]) if(!vis[v]) dfsMCT(v,r); } int main() { std::ios::sync_with_stdio(false); cin.tie(NULL) ; cout.tie(NULL) ; int n,x,y; cin>>n; rep(i,1,n) { cin>>x>>y; ad[x].pb(y); ad[y].pb(x); } cin>>s; dfsMCT(1); rep(i,1,n+1) cout<<ans[i]+1<<" "; cout<<endl; return 0 ; }
914
F
Substrings in a String
Given a string $s$, process $q$ queries, each having one of the following forms: - $1 i c$ — Change the $i$-th character in the string to $c$. - $2 l r y$ — Consider the substring of $s$ starting at position $l$ and ending at position $r$. Output the number of times $y$ occurs as a substring in it.
Let $N = |s|$, and let $Y$ be the total length of all query strings (given to be at most $10^{5}$). Divide the given string into blocks of size $K={\sqrt{N}}$ and build a suffix structure for each block; building all of them takes $O(N)$ in total. To update a character in the string, rebuild the suffix structure of its block. This takes $O(K)$ per update. We answer queries as follows. If the size of the query string is greater than $K$, then the number of such strings is at most $Y / K$, and we can directly run KMP on the given range for each of them. Overall complexity: $O(N \cdot Y / K)$. If the size of the query string is at most $K$, we proceed as follows. Occurrences of the query string lying fully within a block are counted using the block's suffix structure. This can be done in $O(|y|)$ per block, $O(|y| \cdot N / K)$ for the given range. For occurrences that cross the boundary of two adjacent blocks, only a window of length $2 \cdot |y|$ around the boundary matters, so we can simply use KMP there. We need to choose the string carefully to avoid overcounting (for more details, see the author's solution). Its complexity is $O(N / K \cdot 2 \cdot |y|)$. For the partial blocks at the left and right ends of the query range, we can again use KMP, at cost $O(2 \cdot K)$. The overall complexity for a small query string is therefore $O(N / K \cdot |y|)$, hence $O(Y \cdot N / K)$ over all such strings. The total complexity is $O(N \cdot K + Y \cdot N / K)$, so choose a suitable $K$; any $K$ from $150$ to $400$ fits in the time limit. Expected complexity: $O(N \sqrt{N})$. There was an unexpected solution involving bitset that runs in $O(N^{2} / 32)$.
[ "bitmasks", "brute force", "data structures", "string suffix structures", "strings" ]
3,000
#include <bits/stdc++.h> #define fr(x) scanf("%d", &x) #define SQRN 150 using namespace std; const int sa = 2 * SQRN + 10; const int LEN = 100010; char s[LEN], sq[LEN], zstring[2*LEN]; int z[2*LEN]; string temps; struct SuffixAutomaton { int edges[26][sa], link[sa], length[sa], isTerminal[sa], dp1[sa], last; int sz; SuffixAutomaton() { last = 0; sz = 0; } void set(int k) { for(int i = 0; i < 26; ++i) edges[i][k] = -1; } void build(string &s) { link[0] = -1; length[0] = 0; last = 0; sz = 1; set(0); for(int i=0;i<s.size();i++) { set(sz); length[sz] = i+1; link[sz] = 0; int r = sz; ++sz; int p = last; while(p >= 0 && edges[s[i]-'a'][p] == -1) { edges[s[i] - 'a'][p] = r; p = link[p]; } if(p != -1) { int q = edges[s[i] - 'a'][p]; if(length[p] + 1 == length[q]) { link[r] = q; } else { for(int i = 0; i < 26; ++i) { edges[i][sz] = edges[i][q]; } length[sz] = length[p] + 1; link[sz] = link[q]; int qq = sz; ++sz; link[q] = qq; link[r] = qq; while(p >= 0 && edges[s[i] - 'a'][p] == q) { edges[s[i] - 'a'][p] = qq; p = link[p]; } } } last = r; } for(int i = 0; i < sz; ++i) isTerminal[i] = 0, dp1[i] = -1; int p = last; while(p > 0) { isTerminal[p] = 1; p = link[p]; } } int solve(int pos) { if(dp1[pos] != -1) return dp1[pos]; dp1[pos] = isTerminal[pos]; for(int i=0; i<26; ++i){ if(edges[i][pos] != -1) dp1[pos] += solve(edges[i][pos]); } return dp1[pos]; } int run() { int cur = 0; for(int i=1; sq[i] != '\0'; ++i) { auto it = edges[sq[i] - 'a'][cur]; if(it == -1) return 0; else cur = it; } return solve(cur); } } SA[800]; void computeZ() { int l, r; z[0] = 0; l = r = -1; for(int i=1; zstring[i] != '\0'; ++i) { z[i]=0; if(r>i) { z[i]=min(z[i-l], r-i+1); } while(zstring[i+z[i]] == zstring[z[i]]) ++z[i]; if(i+z[i]-1 > r) { r = i+z[i]-1; l = i; } } } int computeBrute(int l, int r, int sqlen) { int zslen = 0, ans = 0; for(int i=1; i<=sqlen; ++i) { zstring[zslen++] = sq[i]; } zstring[zslen++] = '$'; for(int i=l; i<=r; ++i) { zstring[zslen++] = s[i]; } zstring[zslen] = '\0'; computeZ(); 
for(int i=1; i<zslen; ++i) { if(z[i] >= sqlen) { ++ans; } } return ans; } int main() { int q, typ, l, r, slen; char ch; scanf(" %s", &s[1]); slen = strlen(&s[1]); fr(q); for(int i=0; i<=slen; i+=SQRN) { temps = ""; int tempr = min(i+SQRN-1, slen); for(int j=max(1, i); j<=tempr; ++j) { temps += s[j]; } SA[i/SQRN].build(temps); } while(q--) { fr(typ); if(typ == 1) { scanf("%d %c", &l, &ch); s[l] = ch; temps = ""; int bkt = l/SQRN; int tempr = min(slen, (bkt+1)*SQRN-1); for(int i=max(1,bkt*SQRN); i<=tempr; ++i) { temps += s[i]; } SA[bkt].build(temps); } else { scanf("%d %d %s", &l, &r, &sq[1]); int sqlen = strlen(&sq[1]); if(sqlen >= SQRN || r-l <= 2*SQRN) { printf("%d\n", computeBrute(l, r, sqlen)); } else { int lbkt = l/SQRN, rbkt = r/SQRN, ans = 0; for(int i=lbkt+1; i<rbkt; ++i) { ans += SA[i].run(); } ans += computeBrute(l, (lbkt+1)*SQRN-1, sqlen); ans += computeBrute(rbkt*SQRN, r, sqlen); for(int i=lbkt+1; i<=rbkt; ++i) { ans += computeBrute(max(l, i*SQRN - sqlen + 1), min(r, i*SQRN + sqlen - 2), sqlen); } printf("%d\n", ans); } } } return 0; }
914
G
Sum the Fibonacci
You are given an array $s$ of $n$ non-negative integers. A 5-tuple of integers $(a, b, c, d, e)$ is said to be valid if it satisfies the following conditions: - $1 ≤ a, b, c, d, e ≤ n$ - $(s_{a}$ | $s_{b})$ & $s_{c}$ & $(s_{d}$ ^ $s_{e}) = 2^{i}$ for some integer $i$ - $s_{a}$ & $s_{b} = 0$ Here, '|' is the bitwise OR, '&' is the bitwise AND and '^' is the bitwise XOR operation. Find the sum of $f(s_{a}$|$s_{b}) * f(s_{c}) * f(s_{d}$^$s_{e})$ over all valid 5-tuples $(a, b, c, d, e)$, where $f(i)$ is the $i$-th Fibonacci number ($f(0) = 0, f(1) = 1, f(i) = f(i - 1) + f(i - 2)$). Since the answer can be huge, output it modulo $10^{9} + 7$.
Apologies, we didn't expect an $O(3^{17})$ solution. The expected solution was as follows. Let $A[i]$ be the number of pairs $(x, y)$ in the array such that their bitwise OR is $i$ and $x$ & $y = 0$, multiplied by $Fib[i]$. This can be computed using subset convolution. Let $B[i]$ be the number of occurrences of $i$ in the array, multiplied by $Fib[i]$. Let $C[i]$ be the number of pairs $(x, y)$ such that their bitwise XOR is $i$, multiplied by $Fib[i]$. This can be computed using XOR convolution. Let $D$ be the AND convolution of $A$, $B$, and $C$. Then the answer is given by the expression $\textstyle\sum_{i=0}^{16}D[2^{i}]$. Complexity: $O(2^{17} \cdot 17^{3})$
[ "bitmasks", "divide and conquer", "dp", "fft", "math" ]
2,600
#include <bits/stdc++.h> #include <ext/pb_ds/assoc_container.hpp> #include <ext/pb_ds/tree_policy.hpp> using namespace std; using namespace __gnu_pbds; #define ll long long #define db long double #define ii pair<int,int> #define vi vector<int> #define fi first #define se second #define sz(a) (int)(a).size() #define all(a) (a).begin(),(a).end() #define pb push_back #define mp make_pair #define FN(i, n) for (int i = 0; i < (int)(n); ++i) #define FEN(i,n) for (int i = 1;i <= (int)(n); ++i) #define rep(i,a,b) for(int i=a;i<b;i++) #define repv(i,a,b) for(int i=b-1;i>=a;i--) #define SET(A, val) memset(A, val, sizeof(A)) typedef tree<int ,null_type,less<int>,rb_tree_tag,tree_order_statistics_node_update>ordered_set ; // order_of_key (val): returns the no. of values less than val // find_by_order (k): returns the kth largest element.(0-based) #define TRACE #ifdef TRACE #define trace(...) __f(#__VA_ARGS__, __VA_ARGS__) template <typename Arg1> void __f(const char* name, Arg1&& arg1){ cerr << name << " : " << arg1 << std::endl; } template <typename Arg1, typename... Args> void __f(const char* names, Arg1&& arg1, Args&&... args){ const char* comma = strchr(names + 1, ','); cerr.write(names, comma - names) << " : " << arg1<<" | ";__f(comma+1, args...); } #else #define trace(...) 
#endif const int L = 17 ; const int L2 = 1<<L ; const int mod = 1e9+7 ; int sbits[L2] ; vi oddbits ; inline int add(int x,int y) { x+=y; if(x>=mod) x-=mod; if(x<0) x+=mod; return x; } inline int mult(int x,int y) { ll tmp=(ll)x*y; if(tmp>=mod) tmp%=mod; return tmp; } inline int pwmod(int x,int y) { int ans=1; while(y){if(y&1)ans=mult(ans,x);y>>=1;x=mult(x,x);} return ans; } vector<ii> nus[L] ; void zeta(int A[L2]) { FN(i,L) { for(ii m:nus[i]) A[m.fi]=add(A[m.fi],A[m.se]) ; } } void meu(int A[L2]) { for(int i:oddbits) A[i] = mod - A[i] ; zeta(A) ; for(int i:oddbits) A[i] = mod - A[i] ; } void conv(int A[L2],int B[L2]) { zeta(A), zeta(B);//return FN(i,L2) A[i]=mult(A[i],B[i]); meu(A); } int t[L+1][L2],t1[L2] ; void subsetconv(int A[L2],int ans[L2]) { FN(i,L2) t[sbits[i]][i] = A[i] ; FN(i,L+1) zeta(t[i]) ; FN(c,L+1) FN(a,c+1) { int *t2 = t[a], *t3 = t[c-a] ; FN(i,L2) t1[i]=mult(t2[i],t3[i]) ; meu(t1) ; FN(i,L2) if(sbits[i] == c) ans[i]=add(ans[i],t1[i]) ; } } vector<ii> su[L] ; void transform(int p[L2],bool inv=false){ int u,v; for(int len=1,l=2;l<=L2;len<<=1,l<<=1)for(int i=0;i<L2;i+=l) FN(j,len){ u=p[i+j],v=p[i+j+len] ; if(inv) {p[i+j]=add(u,v),p[i+j+len]=add(u,-v);} else {p[i+j]=add(u,v),p[i+j+len]=add(u,-v);}} if(inv){int d=pwmod(L2,mod-2);FN(i,L2)p[i]=mult(p[i],d);} } int cnt[L2],fib[L2],A[L2],B[L2],C[L2],a[L2],b[L2],c[L2] ; int main() { std::ios::sync_with_stdio(false); cin.tie(NULL) ; cout.tie(NULL) ; FN(i,L2) { sbits[i] = __builtin_popcount(i) ; if(sbits[i] & 1) oddbits.emplace_back(i) ; FN(j,L) { if(i&(1<<j)) nus[j].emplace_back(mp(i,i^(1<<j))) ; if((i&(1<<j)) == 0) su[j].emplace_back(mp(i,i^(1<<j))) ; } } FN(i,L) reverse(all(su[i])) ; fib[1] = 1 ; rep(i,2,L2) fib[i] = add(fib[i-1],fib[i-2]) ; int N,x ; cin>>N ; FN(i,N) { cin>>x, ++cnt[x] ; } subsetconv(cnt,A) ; FN(i,L2) A[i]=mult(A[i],fib[i]) ; FN(i,L2) B[i]=mult(cnt[i],fib[i]); FN(i,L2) C[i]=cnt[i] ; transform(C) ; FN(i,L2) C[i]=mult(C[i],C[i]) ; transform(C,1) ; FN(i,L2) C[i]=mult(C[i],fib[i]) ; int ans = 
0 ; FN(p,L) { FN(i,L2) a[i]=b[i]=c[i]=0 ; for(ii m:nus[p]) a[m.se]=A[m.fi],b[m.se]=B[m.fi],c[m.se]=C[m.fi] ; FN(i,L) { for(ii m:su[i]) { a[m.fi]=add(a[m.fi],a[m.se]) ; b[m.fi]=add(b[m.fi],b[m.se]) ; c[m.fi]=add(c[m.fi],c[m.se]) ; } } FN(i,L2) a[i]=mult(a[i],mult(b[i],c[i])) ; meu(a) ; ans = add(ans,a[(1<<L)-1]) ; } ans = ans == 0 ? 0 : mod - ans ; cout << ans << "\n" ; return 0 ; }
914
H
Ember and Storm's Tree Game
Ember and Storm play a game. First, Ember picks a labelled tree $T$ of $n$ vertices, such that the degree of every vertex is at most $d$. Then, Storm picks two distinct vertices $u$ and $v$ in this tree and writes down the labels of the vertices in the path from $u$ to $v$ in a sequence $a_{1}, a_{2}... a_{k}$. Finally, Ember picks any index $i$ ($1 ≤ i < k$) in the array. Now he performs one of the following two operations exactly once: - flip the subrange $[i + 1, k]$ and add $a_{i}$ to it. After this, the sequence becomes $a_{1}, ... a_{i}, a_{k} + a_{i}, a_{k - 1} + a_{i}, ... a_{i + 1} + a_{i}$ - negate the subrange $[i + 1, k]$ and add $a_{i}$ to it. i.e., the array becomes $a_{1}, ... a_{i}, - a_{i + 1} + a_{i}, - a_{i + 2} + a_{i}, ... - a_{k} + a_{i}$ Ember wins if the array is monotonically increasing or decreasing after this. Otherwise Storm wins. The game can be described by the tuple $(T, u, v, i, op)$ where $op$ is «flip» or «negate» depending on the action Ember chose in the last turn. Find the number of tuples that can occur if Ember and Storm play optimally. When they play optimally, if there are multiple moves by which they are guaranteed to win, then they may play any of the winning moves. Otherwise, if someone loses no matter what they play, then they may play any of the possible moves. Report the answer modulo $m$.
Ember wins if the path chosen by Storm is monotonic or bitonic; in this case, there are exactly two winning $(i, op)$ pairs. Let $S$ be the set of trees having $n$ vertices in which all paths are bitonic or monotonic. We need to find $2n(n - 1)|S|$. For every tree in $S$, there exists at least one vertex such that every path starting or ending at that vertex is monotonically increasing or decreasing. First, let's count the number of rooted trees such that all paths starting at the root are increasing. Later we'll combine this with trees having decreasing paths. Let $tree[i][deg]$ be the number of trees having $i$ vertices and maximum degree at most $d$ such that all paths starting at vertex $1$ are monotonically increasing and the degree of vertex $1$ is $deg$. We can find this quantity by a kind of knapsack DP in $O(n^{3}\log{n})$, as follows. We can construct a tree of $k + i \cdot j$ vertices with root degree $deg + j$ by taking a tree of $k$ vertices having root degree $deg$ and attaching $j$ trees of $i$ vertices each to its root. Let the set of vertices be $V = \{1, \ldots, k + i \cdot j\}$. Initially we remove $k$ vertices from $V$ and use these vertices to construct the tree of $k$ vertices. Let $W$ be the set of remaining vertices. We partition $W$ into $j$ subsets of $i$ vertices each and sort these subsets by their minimum element. Now we make $j$ trees of $i$ vertices each, and use the $l$-th set of vertices to construct the $l$-th tree. Therefore, the number of ways to do this is $C = \frac{(k+i\cdot j-1)!}{(k-1)!\,(i!)^{j}\,j!}\cdot tree[k][deg]\cdot tree[i][any]^{j}$, where $tree[i][any]$ is the number of such trees of $i$ vertices with root degree at most $d - 1$. Note that there is a bijection between trees in which all paths from the root are increasing and trees in which all paths from the root are decreasing.
(the bijection is $v\mapsto n-v+1$). Note that $tree[i + 1][j] \cdot tree[n - i][k]$ is the number of trees such that all paths starting at the root are monotonic, there are $i$ vertices lying on the increasing paths, $n - i - 1$ vertices lying on the decreasing paths, and the degree of the root is $j + k$. Therefore the quantity $a=\sum_{i=0}^{n-1}\sum_{j+k\leq d}tree[i+1][j]\cdot tree[n-i][k]$ is the number of rooted trees of $n$ vertices such that all paths starting at the root are monotonically increasing or decreasing. However, we want to count unrooted trees. Note that if a tree in $S$ has $k$ possible roots, then these roots form a chain of consecutive numbers with an increasing tree of size $i$ attached to the larger end of the chain and a decreasing tree of size $n - k - i$ attached to the smaller end. Such a tree gets counted $k$ times in $a$, but for all of these roots except one, there is exactly one child of the root smaller than it. Therefore the total number of unrooted trees is $|S|=\sum_{i=0}^{n-1}\sum_{j+k\leq d,\,k\neq1}tree[i+1][j]\cdot tree[n-i][k]$ Time Complexity: $O(n^{3}\log{n})$
[ "combinatorics", "dp", "games", "trees" ]
3,400
#include <bits/stdc++.h> using namespace std; using ll = long long; ll mod; const int maxn = 2e2 + 2; ll tree[maxn][maxn][maxn]; ll c[maxn][maxn]; int main() { ios::sync_with_stdio(false); int n, d; cin >> n >> d >> mod; c[0][0] = 1; for (int i = 1; i <= n; i++) { c[i][0] = 1; for (int j = 1; j <= i; j++) { c[i][j] = (c[i - 1][j] + c[i - 1][j - 1]) % mod; } } tree[1][1][0] = 1; for (int i = 1; i < n; i++) { // adding trees of size i for (int j = 1; j <= n; j++) { for (int k = 0; k <= d; k++) { tree[i + 1][j][k] = tree[i][j][k]; } } ll tree_i = 0; for (int j = 0; j < d; j++) { tree_i = (tree_i + tree[i][i][j]) % mod; } ll ways = 1; for (int j = 1; i*j <= n && j <= d; j++) { // adding j such trees ways = (ways * c[i*j - 1][i - 1]) % mod; // ((ij)! tree[i]^(j-1))/(i!^j j!) for (int k = 1; k + i*j <= n; k++) { // adding to trees of size k ll cc = (ways * c[k + i*j - 1][k - 1]) % mod; for (int deg = 0; deg + j <= d; deg++) { tree[i + 1][k + i*j][deg + j] = (tree[i + 1][k + i*j][deg + j] + cc*((tree[i][k][deg]*tree_i) % mod)) % mod; } } ways = (ways * tree_i) % mod; } } ll total_trees = 0; for (int i = 0; i <= n - 1; i++) { // there are i vertices lying on paths increasing from the root for (int j = 0; j <= d; j++) { for (int k = 0; j + k <= d; k++) { if (k == 1) { continue; } total_trees = (total_trees + ((tree[n][i + 1][j]*tree[n][n - i][k]) % mod)) % mod; } } } cout << (2*((n*(n-1)) % mod)*total_trees) % mod << '\n'; return 0; }
915
A
Garden
Luba thinks about watering her garden. The garden can be represented as a segment of length $k$. Luba has got $n$ buckets, the $i$-th bucket allows her to water some continuous subsegment of the garden of length \textbf{exactly} $a_{i}$ each hour. \textbf{Luba can't water any parts of the garden that were already watered, also she can't water the ground outside the garden}. Luba has to choose \textbf{one} of the buckets in order to water the garden as fast as possible (as mentioned above, each hour she will water some continuous subsegment of length $a_{i}$ if she chooses the $i$-th bucket). Help her to determine the minimum number of hours she has to spend watering the garden. It is guaranteed that Luba can always choose a bucket so that it is possible to water the garden. See the examples for better understanding.
In this problem we just need to find maximum divisor of $k$ that belongs to array $a$. Let's call it $r$. Then we need to print $\displaystyle{\frac{k}{r}}$.
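A minimal sketch of this idea (the function name is illustrative; it relies on the statement's guarantee that at least one $a_i$ divides $k$):

```cpp
#include <vector>

// Minimum hours = k divided by the largest a_i that divides k.
int minHours(const std::vector<int>& a, int k) {
    int best = 1;                          // a divisor is guaranteed to exist
    for (int x : a)
        if (k % x == 0 && x > best) best = x;
    return k / best;
}
```

For example, with $k = 6$ and buckets $\{1, 2, 3, 5\}$ the best choice is the bucket of length $3$, giving $2$ hours.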
[ "implementation" ]
900
null
915
B
Browser
Luba is surfing the Internet. She currently has $n$ opened tabs in her browser, indexed from $1$ to $n$ from left to right. The mouse cursor is currently located at the $pos$-th tab. Luba needs to use the tabs with indices from $l$ to $r$ (inclusive) for her studies, and she wants to close all the tabs that don't belong to this segment as fast as possible. Each second Luba can either try moving the cursor to the left or to the right (if the cursor is currently at the tab $i$, then she can move it to the tab $max(i - 1, a)$ or to the tab $min(i + 1, b)$) or try closing all the tabs to the left or to the right of the cursor (if the cursor is currently at the tab $i$, she can close all the tabs with indices from segment $[a, i - 1]$ or from segment $[i + 1, b]$). In the aforementioned expressions $a$ and $b$ denote the minimum and maximum index of an unclosed tab, respectively. For example, if there were $7$ tabs initially and tabs $1$, $2$ and $7$ are closed, then $a = 3$, $b = 6$. What is the minimum number of seconds Luba has to spend in order to leave \textbf{only the tabs with initial indices from $l$ to $r$ inclusive} opened?
If $l = 1$ and $r = n$ then the answer is $0$. If $l = 1$ and $r \neq n$, or $l \neq 1$ and $r = n$, then the answer is $|pos - r| + 1$ or $|pos - l| + 1$ respectively. And in the other case (when $l \neq 1$ and $r \neq n$) the answer is $r - l + min(|pos - l|, |pos - r|) + 2$.
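These case distinctions translate directly into code; a sketch (the function name is illustrative):

```cpp
#include <algorithm>
#include <cstdlib>

// Minimum seconds to keep only tabs [l, r] out of 1..n, cursor at pos.
int minSeconds(int n, int pos, int l, int r) {
    if (l == 1 && r == n) return 0;                  // nothing to close
    if (l == 1) return std::abs(pos - r) + 1;        // close right side only
    if (r == n) return std::abs(pos - l) + 1;        // close left side only
    // go to the nearer border, close one side, walk across, close the other
    return (r - l) + std::min(std::abs(pos - l), std::abs(pos - r)) + 2;
}
```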
[ "implementation" ]
1,300
null
915
C
Permute Digits
You are given two positive integer numbers $a$ and $b$. Permute (change the order of) the digits of $a$ to construct the maximal number not exceeding $b$. No number in input and/or output can start with the digit 0. It is allowed to leave $a$ as it is.
Let's construct the answer digit by digit starting from the leftmost. Obviously, we are asked to build the lexicographically maximal answer, so in this order we should choose the greatest possible digit on each step. Precalculate $cnt_{i}$, the number of digits $i$ in number $a$. Iterate over all possible digits starting from the greatest. For each digit check whether it's possible to put it in the current position: construct the minimal suffix (greedily put the lowest remaining digits) and compare the resulting number with $b$. If it is less than or equal to $b$, fix this digit and proceed to the next position. Overall complexity: $O(|a|^{2} \cdot |AL|)$, where $AL$ is the alphabet of digits from $0$ to $9$.
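A sketch of this greedy (it assumes, as the problem guarantees, that a valid permutation exists, and that $|a| \le |b|$):

```cpp
#include <algorithm>
#include <string>

// Largest permutation of a's digits not exceeding b.
std::string maxPermutation(std::string a, const std::string& b) {
    if (a.size() < b.size()) {                 // any order fits; take the largest
        std::sort(a.rbegin(), a.rend());
        return a;
    }
    int cnt[10] = {0};
    for (char c : a) cnt[c - '0']++;
    std::string res;
    for (std::size_t i = 0; i < a.size(); ++i) {
        for (int d = 9; d >= 0; --d) {
            if (!cnt[d]) continue;
            cnt[d]--;
            if (d < b[i] - '0') {
                // prefix already smaller: finish with remaining digits descending
                std::string suf;
                for (int e = 9; e >= 0; --e) suf.append(cnt[e], char('0' + e));
                return res + char('0' + d) + suf;
            }
            if (d == b[i] - '0') {
                // check that the minimal completion still does not exceed b
                std::string suf;
                for (int e = 0; e <= 9; ++e) suf.append(cnt[e], char('0' + e));
                if (res + char('0' + d) + suf <= b) {
                    res += char('0' + d);
                    break;                     // digit fixed, next position
                }
            }
            cnt[d]++;                          // does not fit, try a smaller digit
        }
    }
    return res;
}
```

When the chosen digit is strictly smaller than the corresponding digit of $b$, the rest of the number can be filled with the remaining digits in descending order, which is what makes the answer maximal.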
[ "dp", "greedy" ]
1,700
null
915
D
Almost Acyclic Graph
You are given a directed graph consisting of $n$ vertices and $m$ edges (each edge is directed, so it can be traversed in only one direction). You are allowed to remove at most one edge from it. Can you make this graph acyclic by removing at most one edge from it? A directed graph is called acyclic iff it doesn't contain any cycle (a non-empty path that starts and ends in the same vertex).
The constraints are set in such a way that the naive $O(m \cdot (n + m))$ solution won't pass (remove every edge one by one and check with dfs/bfs whether the remaining graph contains a cycle). Thus we should somehow limit the number of edges to check. Let's find an arbitrary cycle in the graph: run a dfs, storing for each vertex the vertex it was reached from, and restore the cycle's edges from this data when a back edge is met. A cycle found this way has length at most $n$. Then run the naive algorithm, but check only the edges of this cycle (if removing one edge can make the graph acyclic, that edge must lie on every cycle, in particular on this one). Overall complexity: $O(n \cdot (n + m))$.
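A sketch of this approach (names are illustrative; acyclicity of the pruned graph is checked here with Kahn's algorithm instead of a dfs, which is equivalent for this purpose):

```cpp
#include <vector>
#include <utility>

using Edges = std::vector<std::pair<int, int>>;

// Is the graph on vertices 0..n-1 acyclic if edge #skip is removed? (skip = -1: none)
bool acyclicWithout(int n, const Edges& e, int skip) {
    std::vector<std::vector<int>> adj(n);
    std::vector<int> indeg(n, 0);
    for (int i = 0; i < (int)e.size(); ++i) {
        if (i == skip) continue;
        adj[e[i].first].push_back(e[i].second);
        indeg[e[i].second]++;
    }
    std::vector<int> st;
    for (int v = 0; v < n; ++v) if (!indeg[v]) st.push_back(v);
    int seen = 0;
    while (!st.empty()) {
        int v = st.back(); st.pop_back(); ++seen;
        for (int to : adj[v]) if (--indeg[to] == 0) st.push_back(to);
    }
    return seen == n;   // all vertices topologically ordered <=> acyclic
}

// Edge indices of one cycle found by dfs, or empty if the graph is acyclic.
std::vector<int> findCycleEdges(int n, const Edges& e) {
    std::vector<std::vector<std::pair<int, int>>> adj(n);   // (to, edge index)
    for (int i = 0; i < (int)e.size(); ++i)
        adj[e[i].first].push_back({e[i].second, i});
    std::vector<int> color(n, 0), parV(n, -1), parE(n, -1);
    for (int s = 0; s < n; ++s) {
        if (color[s]) continue;
        std::vector<int> st = {s};
        while (!st.empty()) {
            int v = st.back();
            color[v] = 1;                       // on the current dfs path
            bool advanced = false;
            for (auto [to, id] : adj[v]) {
                if (color[to] == 1) {           // back edge: restore the cycle
                    std::vector<int> cyc = {id};
                    for (int x = v; x != to; x = parV[x]) cyc.push_back(parE[x]);
                    return cyc;
                }
                if (color[to] == 0) {
                    parV[to] = v; parE[to] = id;
                    st.push_back(to);
                    advanced = true;
                    break;
                }
            }
            if (!advanced) { color[v] = 2; st.pop_back(); }
        }
    }
    return {};
}

bool canMakeAcyclic(int n, const Edges& e) {
    if (acyclicWithout(n, e, -1)) return true;  // already acyclic
    for (int idx : findCycleEdges(n, e))        // at most n candidate edges
        if (acyclicWithout(n, e, idx)) return true;
    return false;
}
```

findCycleEdges returns the edges of one cycle; its length is at most $n$, so at most $n$ acyclicity checks are performed.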
[ "dfs and similar", "graphs" ]
2,200
null
915
E
Physical Education Lessons
This year Alex has finished school, and now he is a first-year student of Berland State University. For him it was a total surprise that even though he studies programming, he still has to attend physical education lessons. The end of the term is very soon, but, unfortunately, Alex still hasn't attended a single lesson! Since Alex doesn't want to get expelled, he wants to know the number of working days left until the end of the term, so he can attend physical education lessons during these days. But in BSU calculating the number of working days is a complicated matter: There are $n$ days left before the end of the term (numbered from $1$ to $n$), and initially all of them are working days. Then the university staff sequentially publishes $q$ orders, one after another. Each order is characterised by three numbers $l$, $r$ and $k$: - If $k = 1$, then all days from $l$ to $r$ (inclusive) become non-working days. If some of these days are made working days by some previous order, then these days still become non-working days; - If $k = 2$, then all days from $l$ to $r$ (inclusive) become working days. If some of these days are made non-working days by some previous order, then these days still become working days. Help Alex to determine the number of working days left after each order!
Let's store the current intervals of non-working days in a set sorted by right border. When a new query comes, find the first interval whose right border is greater than or equal to the query's left border, and update all intervals that intersect the query: either delete them completely or reinsert the parts that don't intersect the query. Finally, if $k = 1$, insert the query segment into the set. The number of working days can be updated on the fly while deleting segments. Overall complexity: $O(q\log q)$.
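A sketch of this bookkeeping, using a std::map from left to right borders of the non-working segments instead of a set sorted by right border (the logic is symmetric; names are illustrative):

```cpp
#include <map>

// Disjoint non-working segments stored as left -> right, plus their total size.
struct NonWorkingDays {
    long long nonWorking = 0;
    std::map<long long, long long> seg;

    // process one order on days [l, r] (k = 1: make non-working, k = 2: make
    // working); returns the number of working days among 1..n afterwards
    long long apply(long long n, long long l, long long r, int k) {
        auto split = [&](long long x) {   // ensure no segment crosses x-1 | x
            auto it = seg.upper_bound(x);
            if (it == seg.begin()) return;
            --it;
            if (it->first == x || it->second < x) return;
            long long sl = it->first, sr = it->second;
            seg.erase(it);
            seg[sl] = x - 1;
            seg[x] = sr;
        };
        split(l);
        split(r + 1);
        // everything intersecting the query now lies fully inside [l, r]
        for (auto it = seg.lower_bound(l); it != seg.end() && it->first <= r; ) {
            nonWorking -= it->second - it->first + 1;
            it = seg.erase(it);
        }
        if (k == 1) { seg[l] = r; nonWorking += r - l + 1; }
        return n - nonWorking;
    }
};
```

split(x) cuts a stored segment so that no segment crosses the border between $x-1$ and $x$; after splitting at $l$ and $r+1$, every segment intersecting the query can be erased wholesale.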
[ "data structures", "implementation", "sortings" ]
2,300
null
915
F
Imbalance Value of a Tree
You are given a tree $T$ consisting of $n$ vertices. A number is written on each vertex; the number written on vertex $i$ is $a_{i}$. Let's denote the function $I(x, y)$ as the difference between maximum and minimum value of $a_{i}$ on a simple path connecting vertices $x$ and $y$. Your task is to calculate $\sum_{i=1}^{n}\sum_{j=i}^{n}I(i,j)$.
Let's calculate the answer as the difference between the sum of maxima and the sum of minima over all paths. These sums can be found by the following approach: Consider the sum of maxima. Let's sort all vertices in ascending order of values of $a_{i}$ (if two vertices have equal values, their order doesn't matter). This order has an important property that we can use: for every path, the maximum on this path is written on the vertex that has the greatest position in sorted order. This allows us to do the following: Let's denote as $t(i)$ a tree, rooted at vertex $i$, that is formed by the set of such vertices $j$ that are directly connected to $i$ or to some other vertex from the set, and have $a_{j} < a_{i}$. Consider the vertices that are connected to $i$ in this tree. Let's denote them as $v_{1}$, $v_{2}$, ..., $v_{k}$ (the order doesn't matter), and denote by $s_{j}$ the size of the subtree of $v_{j}$ in the tree $t(i)$. Let's try to calculate the number of paths going through $i$ in this tree: $\left(\sum_{j=1}^{k}s_{j}\right)+1$ paths that have $i$ as an endpoint; $\sum_{j=2}^{k}\sum_{x=1}^{j-1}s_{x}\cdot s_{j}$ paths connecting a vertex from the subtree of $v_{x}$ to a vertex from the subtree of $v_{j}$. So vertex $i$ adds the sum of these values, multiplied by $a_{i}$, to the sum of maxima. To calculate these sums, we will use the following algorithm: Initialize a DSU (disjoint set union), making a set for each vertex. Process the vertices in sorted order. When we process some vertex $i$, find all its already processed neighbours (they will be $v_{1}$, $v_{2}$, ..., $v_{k}$ in $t(i)$). For every neighbour $v_{j}$, denote the size of its set in DSU as $s_{j}$. Then calculate the number of paths going through $i$ using the aforementioned formulas (to do it in linear time, use partial sums). Add this number, multiplied by $a_{i}$, to the sum of maxima, and merge $i$ with $v_{1}$, $v_{2}$, ..., $v_{k}$ in DSU.
To calculate the sum of minima, you can do the same while processing vertices in reversed order. Time complexity of this solution is $O(n\log n)$.
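A sketch of the sum-of-maxima half (names are illustrative; single-vertex paths are skipped, which is harmless because they contribute $a_i$ to both sums and cancel). Since the minimum on a path over $a$ equals minus the maximum over $-a$, the total imbalance is sumOfPathMaxima($a$) + sumOfPathMaxima($-a$):

```cpp
#include <algorithm>
#include <numeric>
#include <utility>
#include <vector>

struct DSU {
    std::vector<int> p;
    std::vector<long long> sz;
    DSU(int n) : p(n), sz(n, 1) { std::iota(p.begin(), p.end(), 0); }
    int find(int v) { return p[v] == v ? v : p[v] = find(p[v]); }
    void unite(int a, int b) {
        a = find(a); b = find(b);
        if (a != b) { p[a] = b; sz[b] += sz[a]; }
    }
};

// Sum of max over all paths with at least two vertices (vertices 0..n-1).
long long sumOfPathMaxima(const std::vector<long long>& a,
                          const std::vector<std::pair<int, int>>& edges) {
    int n = a.size();
    std::vector<std::vector<int>> adj(n);
    for (auto [u, v] : edges) { adj[u].push_back(v); adj[v].push_back(u); }
    std::vector<int> order(n);
    std::iota(order.begin(), order.end(), 0);
    std::stable_sort(order.begin(), order.end(),
                     [&](int x, int y) { return a[x] < a[y]; });
    std::vector<char> done(n, 0);
    DSU d(n);
    long long total = 0;
    for (int v : order) {                       // v is the path maximum
        long long paths = 0, pref = 0;
        for (int to : adj[v]) {
            if (!done[to]) continue;
            long long s = d.sz[d.find(to)];
            paths += s;                         // v is an endpoint
            paths += pref * s;                  // v is an inner vertex
            pref += s;                          // partial sum of s_x, x < j
        }
        total += paths * a[v];
        for (int to : adj[v]) if (done[to]) d.unite(to, v);
        done[v] = 1;
    }
    return total;
}
```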
[ "data structures", "dsu", "graphs", "trees" ]
2,400
null
915
G
Coprime Arrays
Let's call an array $a$ of size $n$ coprime iff $gcd(a_{1}, a_{2}, ..., a_{n}) = 1$, where $gcd$ is the greatest common divisor of the arguments. You are given two numbers $n$ and $k$. For each $i$ ($1 ≤ i ≤ k$) you have to determine the number of coprime arrays $a$ of size $n$ such that for every $j$ ($1 ≤ j ≤ n$) $1 ≤ a_{j} ≤ i$. Since the answers can be very large, you have to calculate them modulo $10^{9} + 7$.
For a fixed upper bound $i$, this is a well-known problem that can be solved using inclusion-exclusion: Let's denote by $f(j)$ the number of arrays with elements in range $[1, i]$ such that $gcd(a_{1}, ..., a_{n})$ is divisible by $j$. Obviously, $f(j)=(\lfloor{\frac{i}{j}}\rfloor)^{n}$. With the help of inclusion-exclusion formula we can prove that the number of arrays with $gcd = 1$ is the sum of the following values over all possible sets $S$: $( - 1)^{|S|}f(p(S))$, where $S$ is some set of prime numbers (possibly an empty set), and $p(S)$ is the product of all elements in the set. $f(p(S))$ in this formula denotes the number of arrays such that their $gcd$ is divisible by every number from set $S$. However, the number of such sets $S$ is infinite, so we need to use the fact that $f(j) = 0$ if $j > i$. With the help of this fact, we can rewrite the sum over every set $S$ in such a way: $\sum_{j=1}^{i}\mu(j)f(j)$, where $ \mu (j)$ is $0$ if there is no any set of prime numbers $S$ such that $p(S) = j$, $| \mu (j)| = 1$ if this set $S$ exists, and the sign is determined by the size of $S$ ($ \mu (j) = 1$ if $|S|$ is even, otherwise $ \mu (j) = - 1$). An easier way to denote and calculate $ \mu (j)$ is the following (by the way, it is called Möbius function): $ \mu (j) = 0$, if there is some prime number p such that $p^{2}|j$. Otherwise $ \mu (j) = ( - 1)^{x}$, where $x$ is the number of primes in the factorization of $j$. Okay, so we found a solution for one upper bound $i$, it's $\sum_{j=1}^{i}\mu(j)f(j)$. How can we calculate it for every $i$ from $1$ to $k$? Suppose we have calculated all values of $f(j)$ for some $i - 1$ and we want to recalculate them for $i$. The important fact is that these values change (and thus need recalculation) only for the numbers $j$ such that $j|i$. 
So if we recalculate only these values $f(j)$ (and each recalculation can be done in $O(1)$ time if we precompute the $x^{n}$ for every $x\in[1,k]$), then we will have to do only $O(k\log k)$ recalculations overall.
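A sketch of this scheme (the modulus is omitted for brevity, so it is only meaningful for tiny $n$ and $k$; the real solution would do the same arithmetic modulo $10^{9}+7$):

```cpp
#include <vector>

// ans[i] = sum_{j=1}^{i} mu(j) * floor(i/j)^n, maintained incrementally:
// floor(i/j) only changes when j | i, so each j is touched O(k/j) times.
std::vector<long long> coprimeArrayCounts(int n, int k) {
    // linear sieve for the Mobius function
    std::vector<int> mu(k + 1, 1), primes;
    std::vector<char> composite(k + 1, 0);
    for (int i = 2; i <= k; ++i) {
        if (!composite[i]) { primes.push_back(i); mu[i] = -1; }
        for (int p : primes) {
            if ((long long)i * p > k) break;
            composite[i * p] = 1;
            if (i % p == 0) { mu[i * p] = 0; break; }
            mu[i * p] = -mu[i];
        }
    }
    auto powN = [&](long long x) {         // x^n, no overflow for tiny inputs
        long long r = 1;
        for (int t = 0; t < n; ++t) r *= x;
        return r;
    };
    // delta[i] = change of the answer when the upper bound grows to i
    std::vector<long long> delta(k + 1, 0);
    for (int j = 1; j <= k; ++j) {
        if (!mu[j]) continue;
        for (int m = j, q = 1; m <= k; m += j, ++q)
            delta[m] += mu[j] * (powN(q) - powN(q - 1));
    }
    std::vector<long long> ans(k + 1, 0);
    for (int i = 1; i <= k; ++i) ans[i] = ans[i - 1] + delta[i];
    return ans;    // ans[i] = number of coprime arrays with elements in [1, i]
}
```

Each $j$ with $\mu(j) \neq 0$ touches only its multiples, so the double loop performs $O(k \log k)$ work, matching the bound above.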
[ "math", "number theory" ]
2,300
null
916
A
Jamie and Alarm Snooze
Jamie loves sleeping. One day, he decides that he needs to wake up at exactly $hh: mm$. However, he hates waking up, so he wants to make waking up less painful by setting the alarm at a lucky time. He will then press the snooze button every $x$ minutes until $hh: mm$ is reached, and only then he will wake up. He wants to know what is the smallest number of times he needs to press the snooze button. A time is considered lucky if it contains a digit '$7$'. For example, $13: 07$ and $17: 27$ are lucky, while $00: 48$ and $21: 34$ are not lucky. Note that it is not necessary that the time set for the alarm and the wake-up time are on the same day. It is guaranteed that there is a lucky time Jamie can set so that he can wake at $hh: mm$. Formally, find the smallest possible non-negative integer $y$ such that the time representation of the time $x·y$ minutes before $hh: mm$ contains the digit '$7$'. Jamie uses 24-hours clock, so after $23: 59$ comes $00: 00$.
Let's use brute force to find the answer. We first set the alarm time to $hh: mm$ and initialize the answer as $0$. While the time is not lucky, set the alarm time $x$ minutes earlier and add $1$ to the answer. Why does this solution run in time? As $x \le 60$, $hh$ decreases by at most $1$ per iteration, and after at most $60$ iterations $hh$ must decrease at least once. Every time representation with $hh = 07$ (07:XX) is lucky, so at most $24$ decrements of $hh$ lead to a lucky time. Therefore, the maximum number of iterations is $24 \cdot 60 = 1440$, which is very small for the 1 second TL. In fact, the largest possible answer is $390$, where $x = 2$, $hh = 06$ and $mm = 58$.
[ "brute force", "implementation", "math" ]
900
#include <bits/stdc++.h> using namespace std; typedef long long lint; typedef pair<int, int> ii; const int MOD = 1'000'000'007, MOD2 = 1'000'000'009; const int INF = 0x3f3f3f3f; const lint BINF = 0x3f3f3f3f3f3f3f3fLL; int x, n, m; int solve(){ cin >> x >> n >> m; int ti = n * 60 + m; for(int i=0;;i++){ int h = ti / 60, m = ti % 60; if(h / 10 == 7 || h % 10 == 7 || m / 10 == 7 || m % 10 == 7) return cout << i << endl, 0; ti = (ti - x + 1440) % 1440; } return 0; } int main(){ ios::sync_with_stdio(0); // int t; cin >> t; while(t--) solve(); // cout << (solve() ? "YES" : "NO") << endl; return 0; }
916
B
Jamie and Binary Sequence (changed after round)
Jamie is preparing a Codeforces round. He has got an idea for a problem, but does not know how to solve it. Help him write a solution to the following problem: Find $k$ integers such that the sum of two to the power of each number equals to the number $n$ and the largest integer in the answer is as small as possible. As there may be multiple answers, you are asked to output the lexicographically largest one. To be more clear, consider all integer sequence with length $k$ $(a_{1}, a_{2}, ..., a_{k})$ with $\textstyle\sum_{i=1}^{k}2^{a_{i}}=n$. Give a value $y=\operatorname*{max}_{1\leq i\leq k}a_{i}$ to each sequence. Among all sequence(s) that have the minimum $y$ value, output the one that is the lexicographically largest. For definitions of powers and lexicographical order see notes.
The main idea of the solution is $2^{x} = 2 \cdot 2^{x - 1}$, which means you can replace one element $x$ with two elements $x - 1$. To start with, express $n$ in binary, i.e. as a sum of powers of two. As we can only increase the number of elements, there is no solution if this already produces more than $k$ elements. Let's fix the $y$ value first. Observe that we can decrease the $y$ value only if every element equal to $y$ can be changed to two copies of $y - 1$. So we scan from the largest power and try to break it down if doing so does not produce more than $k$ elements. After $y$ is fixed, we can greedily split the smallest element into two while the number of elements is less than $k$.
[ "bitmasks", "greedy", "math" ]
2,000
#include <bits/stdc++.h> using namespace std; typedef long long lint; typedef pair<int, int> ii; const int MOD = 1'000'000'007, MOD2 = 1'000'000'009; const int INF = 0x3f3f3f3f; const lint BINF = 0x3f3f3f3f3f3f3f3fLL; lint n; int m = 0, k; map<int, int> cnt; int solve(){ cin >> n >> k; for(int i=0;i<=63;i++) if((n >> i) & 1) cnt[i]++, m++; if(m > k) return cout << "No" << endl, 0; for(int i=63;i>=-63;i--){ if(m + cnt[i] <= k) m += cnt[i], cnt[i - 1] += cnt[i] * 2, cnt[i] = 0; else break; } cout << "Yes" << endl; multiset<int> ms; for(int i=63;i>=-63;i--) for(int j=0;j<cnt[i];j++) ms.insert(i); while(ms.size() < k){ int u = *ms.begin(); ms.erase(ms.begin()); ms.insert(u - 1); ms.insert(u - 1); } for(auto it=ms.rbegin();it!=ms.rend();it++) cout << *it << " "; cout << endl; return 0; } int main(){ ios::sync_with_stdio(0); // int t; cin >> t; while(t--) solve(); // cout << (solve() ? "YES" : "NO") << endl; return 0; }
916
C
Jamie and Interesting Graph
Jamie has recently found undirected weighted graphs with the following properties very interesting: - The graph is connected and contains exactly $n$ vertices and $m$ edges. - All edge weights are integers and are in range $[1, 10^{9}]$ inclusive. - The length of shortest path from $1$ to $n$ is a prime number. - The sum of edges' weights in the minimum spanning tree (MST) of the graph is a prime number. - The graph contains no loops or multi-edges. If you are not familiar with some terms from the statement you can find definitions of them in notes section. Help Jamie construct any graph with given number of vertices and edges that is interesting!
First, observe that only $n - 1$ edges are required to fulfil the requirements, so we will give the other $m - n + 1$ edges a very large weight so they do not contribute to the shortest path or the MST. Now, the problem is reduced to building a tree whose weight sum is prime and in which two nodes have prime distance. Recall that a path graph is also a tree! If we join $(i, i + 1)$ for all $1 \le i < n$, the shortest path from $1$ to $n$ lies on the whole tree. We are left with the problem of finding $n - 1$ numbers that sum to a prime. Let's make one edge with weight $p - (n - 2)$ and the other $n - 2$ edges with weight $1$, so the sum is exactly $p$. Choosing a prime $p$ slightly larger than $n$ (e.g. $100003$) fulfils the requirement for all cases.
[ "constructive algorithms", "graphs", "shortest paths" ]
1,600
#include <bits/stdc++.h> using namespace std; typedef long long lint; typedef pair<int, int> ii; const int MOD = 1'000'000'007, MOD2 = 1'000'000'009; const int INF = 0x3f3f3f3f; const lint BINF = 0x3f3f3f3f3f3f3f3fLL; int n, m; const int LPRIME = 100'003; const int LNUM = 1'000'000'000; int solve(){ cin >> n >> m; cout << LPRIME << ' ' << LPRIME << endl; cout << 1 << ' ' << 2 << ' ' << LPRIME - (n - 2) << endl; for(int i=1;i<n-1;i++) cout << i + 1 << ' ' << i + 2 << ' ' << 1 << endl; int lo = 1, hi = 3; for(int i=0;i<m-(n-1);i++){ cout << lo << ' ' << hi << ' ' << LNUM << endl; hi++; if(hi > n) lo++, hi = lo + 2; } return 0; } int main(){ ios::sync_with_stdio(0); // int t; cin >> t; while(t--) solve(); // cout << (solve() ? "YES" : "NO") << endl; return 0; }
916
D
Jamie and To-do List
Why do I have to finish so many assignments??? Jamie is getting very busy with his school life. He starts to forget the assignments that he has to do. He decided to write the things down on a to-do list. He assigns a priority value to each of his assignments \textbf{(lower value means more important)} so he can decide which he needs to spend more time on. After a few days, Jamie finds out the list has grown so large that he can't even manage it by himself! As you are a good friend of Jamie, help him write a program to support the following operations on the to-do list: - $set a_{i} x_{i}$ — Add assignment $a_{i}$ to the to-do list if it is not present, and set its priority to $x_{i}$. If assignment $a_{i}$ is already in the to-do list, its priority \textbf{is changed} to $x_{i}$. - $remove a_{i}$ — Remove assignment $a_{i}$ from the to-do list if it is present in it. - $query a_{i}$ — Output the number of assignments that are more important (have a \textbf{smaller} priority value) than assignment $a_{i}$, so Jamie can decide a better schedule. Output $ - 1$ if $a_{i}$ is not in the to-do list. - $undo d_{i}$ — Undo all changes that have been made in the previous $d_{i}$ days (not including the day of this operation) At day $0$, the to-do list is empty. In each of the following $q$ days, Jamie will do \textbf{exactly one} out of the four operations. If the operation is a $query$, you should \textbf{output the result of the query before proceeding to the next day}, or poor Jamie cannot make appropriate decisions.
Let's solve a version that does not contain the undo operation first. The task can be divided into two parts: finding the priority of a string and finding the rank of a priority. Both parts can be solved using trie trees. The first part is a basic string trie with get and set operations, so I will not describe it here in detail. The second part, finding the rank of a number, can be supported by a binary trie. To support the undo operation, observe that each operation adds at most 31 nodes to the trie trees. Therefore, we can make use of the idea of persistent data structures and store all versions by reusing old versions of the data structure with pointers. Remember to flush the output after each query operation. As pointed out by some of you, there exist alternative solutions using persistent dynamic segment trees.
[ "data structures", "interactive", "trees" ]
2,200
#include <bits/stdc++.h> using namespace std; typedef long long lint; typedef pair<int, int> ii; const int MOD = 1'000'000'007, MOD2 = 1'000'000'009; const int INF = 0x3f3f3f3f; const lint BINF = 0x3f3f3f3f3f3f3f3fLL; struct StringTrie{ StringTrie *chi[26]; int dat; StringTrie(){ for(int i=0;i<26;i++) chi[i] = nullptr; dat = -1; } StringTrie(StringTrie *old){ for(int i=0;i<26;i++) chi[i] = old->chi[i]; dat = old->dat; } StringTrie *set(string &s, int val, int pos = 0){ StringTrie *rt = new StringTrie(this); if(pos >= s.size()){ rt->dat = val; }else{ int v = s[pos] - 'a'; if(!chi[v]) chi[v] = new StringTrie(); rt->chi[v] = chi[v]->set(s, val, pos + 1); } return rt; } int get(string &s, int pos = 0){ if(pos >= s.size()){ return dat; }else{ int v = s[pos] - 'a'; if(!chi[v]) return -1; return chi[v]->get(s, pos + 1); } } }; struct BinaryTrie{ BinaryTrie *chi[2]; int dat; BinaryTrie(){ chi[0] = chi[1] = nullptr; dat = 0; } BinaryTrie(BinaryTrie *old){ chi[0] = old->chi[0]; chi[1] = old->chi[1]; dat = old->dat; } BinaryTrie *add(int s, int val, int pos = 30){ BinaryTrie *rt = new BinaryTrie(this); rt->dat += val; if(pos >= 0){ int v = (s >> pos) & 1; if(!chi[v]) chi[v] = new BinaryTrie(); rt->chi[v] = chi[v]->add(s, val, pos - 1); } return rt; } int get(int s, int pos = 30){ if(pos < 0){ return 0; }else{ int v = (s >> pos) & 1; if(v){ int ans = 0; // add 0 if(chi[0]) ans += chi[0]->dat; // query 1 if(chi[1]) ans += chi[1]->get(s, pos - 1); return ans; }else{ // query 0 if(chi[0]) return chi[0]->get(s, pos - 1); else return 0; } } } }; int solve(){ int q; cin >> q; StringTrie **st = new StringTrie*[q + 5]; BinaryTrie **bt = new BinaryTrie*[q + 5]; st[0] = new StringTrie(); bt[0] = new BinaryTrie(); for(int i=1;i<=q;i++){ string op; cin >> op; if(op == "set"){ string str; int val; cin >> str >> val; int x = st[i-1]->get(str); st[i] = st[i-1]->set(str, val); if(x >= 0){ bt[i] = bt[i-1]->add(x, -1); bt[i] = bt[i]->add(val, 1); }else{ bt[i] = bt[i-1]->add(val, 1); } }else 
if(op == "remove"){ string str; cin >> str; int x = st[i-1]->get(str); st[i] = st[i-1]->set(str, -1); if(x >= 0) bt[i] = bt[i-1]->add(x, -1); else bt[i] = bt[i-1]; }else if(op == "undo"){ int x; cin >> x; st[i] = st[i - x - 1]; bt[i] = bt[i - x - 1]; }else{ st[i] = st[i - 1]; bt[i] = bt[i - 1]; string str; cin >> str; int x = st[i]->get(str); if(x >= 0) cout << bt[i]->get(x) << endl; else cout << -1 << endl; } } return 0; } int main(){ ios::sync_with_stdio(0); // int t; cin >> t; while(t--) solve(); // cout << (solve() ? "YES" : "NO") << endl; return 0; }
916
E
Jamie and Tree
To your surprise, Jamie is the final boss! Ehehehe. Jamie has given you a tree with $n$ vertices, numbered from $1$ to $n$. Initially, the root of the tree is the vertex with number $1$. Also, each vertex has a value on it. Jamie also gives you three types of queries on the tree: $1 v$ — Change the tree's root to vertex with number $v$. $2 u v x$ — For each vertex in the subtree of smallest size that contains $u$ and $v$, add $x$ to its value. $3 v$ — Find sum of values of vertices in the subtree of vertex with number $v$. A subtree of vertex $v$ is a set of vertices such that $v$ lies on shortest path from this vertex to root of the tree. Pay attention that subtree of a vertex can change after changing the tree's root. Show your strength in programming to Jamie by performing the queries accurately!
Let's solve the problem without operation 1 first. That means the subtree of a vertex does not change. For operation 2, the subtree of smallest size that contains $u$ and $v$ is the subtree of the lowest common ancestor ($lca$) of $u$ and $v$, so we update the subtree of the $lca$. For operation 3, we query the sum of the subtree rooted at the given vertex. To do this, we can flatten the tree into a one-dimensional array by considering the DFS order of the vertices starting from vertex $1$. If a vertex has DFS order $x$ and its subtree has size $y$, then the update/query range is $[x..x + y - 1]$. This can be handled by standard data structures, such as a binary indexed tree with range updates, or a segment tree with lazy propagation. Things get more complicated when the root of the tree $r$ changes. One should notice that in order to reduce time complexity, we should not recalculate everything when $r$ changes. We just need to keep a variable storing the current root. Now let's discuss the two main problems we face (in the following, the subtree of a vertex is defined with respect to vertex $1$ unless otherwise stated): How to find the LCA of $u$ and $v$ using the precomputed LCA table that assumes the root is vertex $1$? Let's separate the situation into several cases. If both $u$ and $v$ are in the subtree of $r$, then querying the LCA directly is fine. If exactly one of $u$ and $v$ is in the subtree of $r$, the LCA must be $r$. If neither $u$ nor $v$ is in the subtree of $r$, we can first find the lowest nodes $p$ and $q$ such that $p$ is an ancestor of both $u$ and $r$, and $q$ is an ancestor of both $v$ and $r$. If $p$ and $q$ are different, we choose the deeper one. If they are the same, then we query the LCA directly. Combining the above cases, one may find the LCA is the deepest vertex among $lca(u, v), lca(u, r), lca(v, r)$. After we have found the origin $w$ of the update (for a query, it is given), how to identify the subtree of a vertex and carry out updates/queries on it?
Again, separate the situation into several cases. If $w = r$, update/query the whole tree. If $w$ is in the subtree of $r$, or $w$ isn't an ancestor of $r$, update/query the subtree of $w$. Otherwise, update/query the whole tree, then undo the update on (or exclude from the result) the subtree of $w'$, where $w'$ is the child of $w$ whose subtree contains $r$. The above ideas can be verified by working with small trees on paper.
[ "data structures", "trees" ]
2,400
#include <bits/stdc++.h> #define pb push_back #define fi first #define se second using namespace std; typedef long long ll; typedef pair<int, int> pii; typedef pair<ll, ll> pll; ll arr[100010], seg_t[400010], seg_lazy[400010]; int ord[100010], uord[100010], dep[100010], subt_size[100010], ancs[100010][18], now_ord; vector<int> edge[100010]; int root_now, root_lbound, root_rbound; void dfs(int now, int prev) { ord[now] = now_ord; uord[now_ord] = now; now_ord++; subt_size[now] = 1; ancs[now][0] = prev; for (int i=0;i<edge[now].size();i++) if (edge[now][i]!=prev) { dep[edge[now][i]] = dep[now] + 1; dfs(edge[now][i], now); subt_size[now] += subt_size[edge[now][i]]; } } void build_seg(int l, int r, int pos) { if (l==r) { seg_t[pos] = arr[uord[l]]; return; } int mid = (l+r)>>1; build_seg(l, mid, 2*pos+1); build_seg(mid+1, r, 2*pos+2); seg_t[pos] = seg_t[2*pos+1] + seg_t[2*pos+2]; } void update_seg(int l, int r, int ql, int qr, int pos, ll val) { if (seg_lazy[pos]) { seg_t[pos] += seg_lazy[pos]*(r-l+1); if (l!=r) { seg_lazy[2*pos+1] += seg_lazy[pos]; seg_lazy[2*pos+2] += seg_lazy[pos]; } seg_lazy[pos] = 0; } if (ql>r||qr<l) return; if (ql<=l&&r<=qr) { seg_t[pos] += val*(r-l+1); if (l!=r) { seg_lazy[2*pos+1] += val; seg_lazy[2*pos+2] += val; } return; } int mid = (l+r)>>1; update_seg(l, mid, ql, qr, 2*pos+1, val); update_seg(mid+1, r, ql, qr, 2*pos+2, val); seg_t[pos] = seg_t[2*pos+1] + seg_t[2*pos+2]; } ll query_seg(int l, int r, int ql, int qr, int pos) { if (seg_lazy[pos]) { seg_t[pos] += seg_lazy[pos]*(r-l+1); if (l!=r) { seg_lazy[2*pos+1] += seg_lazy[pos]; seg_lazy[2*pos+2] += seg_lazy[pos]; } seg_lazy[pos] = 0; } if (ql>r||qr<l) return 0; if (ql<=l&&r<=qr) return seg_t[pos]; int mid = (l+r)>>1; return query_seg(l, mid, ql, qr, 2*pos+1) + query_seg(mid+1, r, ql, qr, 2*pos+2); } int nth_ancs(int u, int n) { for (int i=16;i>=0;i--) if (n&(1<<i)) u = ancs[u][i]; return u; } int LCA(int u, int v) { if (dep[u]<dep[v]) swap(u, v); int dep_dif = dep[u]-dep[v]; u = 
nth_ancs(u, dep_dif); if (u==v) return u; for (int i=16;i>=0;i--) if (ancs[u][i]!=ancs[v][i]) { u = ancs[u][i]; v = ancs[v][i]; } return ancs[u][0]; } int main() { ios_base::sync_with_stdio(0); int n, q; cin >> n >> q; root_now = 1, root_lbound = 0, root_rbound = n-1; for (int i=1;i<=n;i++) cin >> arr[i]; for (int i=0;i<n-1;i++) { int a, b; cin >> a >> b; edge[a].pb(b); edge[b].pb(a); } dfs(1, 0); build_seg(0, n-1, 0); for (int j=1;j<17;j++) for (int i=1;i<=n;i++) ancs[i][j] = ancs[ancs[i][j-1]][j-1]; for (int i=0;i<q;i++) { int op; cin >> op; if (op==1) { cin >> root_now; root_lbound = ord[root_now]; root_rbound = ord[root_now]+subt_size[root_now]-1; } else if (op==2) { int u, v; ll c; cin >> u >> v >> c; int in_subt_cnt = 0, origin; if (root_lbound<=ord[u]&&ord[u]<=root_rbound) in_subt_cnt++; if (root_lbound<=ord[v]&&ord[v]<=root_rbound) in_subt_cnt++; if (in_subt_cnt==2) { origin = LCA(u, v); } else if (in_subt_cnt==1) { origin = root_now; } else { int x = LCA(u, root_now), y = LCA(v, root_now), z = LCA(u, v); origin = dep[x]>dep[y]? x: y; origin = dep[z]>dep[origin]? 
z: origin; } if (origin==root_now) { update_seg(0, n-1, 0, n-1, 0, c); } else if (root_lbound<ord[origin]&&ord[origin]<=root_rbound) { update_seg(0, n-1, ord[origin], ord[origin]+subt_size[origin]-1, 0, c); } else if (ord[origin]<ord[root_now]&&ord[root_now]<=ord[origin]+subt_size[origin]-1) { update_seg(0, n-1, 0, n-1, 0, c); int dep_dif = dep[root_now]-dep[origin]; int undo = nth_ancs(root_now, dep_dif-1); update_seg(0, n-1, ord[undo], ord[undo]+subt_size[undo]-1, 0, -c); } else { update_seg(0, n-1, ord[origin], ord[origin]+subt_size[origin]-1, 0, c); } } else { int origin; cin >> origin; ll ans; if (origin==root_now) { ans = query_seg(0, n-1, 0, n-1, 0); } else if (root_lbound<ord[origin]&&ord[origin]<=root_rbound) { ans = query_seg(0, n-1, ord[origin], ord[origin]+subt_size[origin]-1, 0); } else if (ord[origin]<ord[root_now]&&ord[root_now]<=ord[origin]+subt_size[origin]-1) { ans = query_seg(0, n-1, 0, n-1, 0); int dep_dif = dep[root_now]-dep[origin]; int undo = nth_ancs(root_now, dep_dif-1); ans -= query_seg(0, n-1, ord[undo], ord[undo]+subt_size[undo]-1, 0); } else { ans = query_seg(0, n-1, ord[origin], ord[origin]+subt_size[origin]-1, 0); } cout << ans << endl; } } }
917
A
The Monster
As Will is stuck in the Upside Down, he can still communicate with his mom, Joyce, through the Christmas lights (he can turn them on and off with his mind). He can't directly tell his mom where he is, because the monster that took him to the Upside Down will know and relocate him. Thus, he came up with a puzzle to tell his mom his coordinates. His coordinates are the answer to the following problem. A string consisting only of parentheses ('(' and ')') is called a bracket sequence. Some bracket sequences are called correct bracket sequences. More formally: - The empty string is a correct bracket sequence. - If $s$ is a correct bracket sequence, then $(s)$ is also a correct bracket sequence. - If $s$ and $t$ are correct bracket sequences, then $st$ (concatenation of $s$ and $t$) is also a correct bracket sequence. A string consisting of parentheses and question marks ('?') is called pretty if and only if there's a way to replace each question mark with either '(' or ')' such that the resulting string is a \textbf{non-empty} correct bracket sequence. Will gave his mom a string $s$ consisting of parentheses and question marks (using Morse code through the lights) and his coordinates are the number of pairs of integers $(l, r)$ such that $1 ≤ l ≤ r ≤ |s|$ and the string $s_{l}s_{l + 1}... s_{r}$ is pretty, where $s_{i}$ is the $i$-th character of $s$. Joyce doesn't know anything about bracket sequences, so she asked for your help.
First, let's denote $s[l..r]$ as the substring $s_{l}s_{l + 1}... s_{r}$ of string $s$. Also $s.count(t)$ is the number of occurrences of $t$ in $s$. A string consisting of parentheses and question marks is pretty if and only if: $|s|$ is even. $0 \le s[1..i].count('(') + s[1..i].count('?') - s[1..i].count(')')$ for each $1 \le i \le |s|$. $0 \le s[i..|s|].count(')') + s[i..|s|].count('?') - s[i..|s|].count('(')$ for each $1 \le i \le |s|$. Proof: If $s.count('?') = 0$ then $s$ is a correct bracket sequence. Otherwise, let $q$ be an integer between $1$ to $|s|$ such that $s_{q} = '?'$. Lemma: We can replace $s_{q}$ by either '(' or ')' such that the three conditions above remain satisfied. Proof: We'll use proof by contradiction. If we can replace $s_{q}$ by either '(' or ')' such that the conditions remain satisfied, the lemma is proven. Otherwise, the conditions will be violated if we replace $s_{q}$ by '(' or ')'. Let's denote $f(s)$ as $s.count('(') + s.count('?') - s.count(')')$ and $g(s)$ as $s.count(')') + s.count('?') - s.count('(')$. Please note that $f(s) = - g(s) + 2 \times s.count('?')$ and $g(s) = - f(s) + 2 \times s.count('?')$. By assumption, if we replace $s_{q}$ by '(' the conditions will be violated. By replacing $s_{q}$ the second condition can't be violated, thus the third condition will be violated. So, there's an integer $i$ such that $1 \le i \le q$ and $g(t[i..|t|]) < 0$ ($t$ is $s$ after replacing $s_{q}$ by '('). Thus, $g(s[i..|s|]) < 2$. Similarly, there's an integer $j$ such that $q \le j \le |s|$ and $f(s[1..j]) < 2$. Since all three conditions are satisfied for $s$ (by assumption), then $0 \le g(s[i..|s|]), f(s[1..j]) \le 1$. Let's break $s$ into three parts (they could be empty): $a = s[1..(i - 1)]$, $b = s[i..j]$ and $c = s[(j + 1)..|s|]$. $g(s[i..|s|]) = g(b) + g(c)$ and $f(s[1..j]) = f(a) + f(b)$. Since the three conditions are satisfied for $s$, then $0 \le g(c), f(a)$. $f(a) + f(b) \le 1$ so $f(a) - 1 \le - f(b)$. 
Thus $f(a) - 1 \le g(b) - 2 \times b.count('?')$, so $f(a) - 1 + 2 \times b.count('?') \le g(b)$. So $f(a) - 1 + 2 \times b.count('?') + g(c) \le g(b) + g(c) \le 1$. So $f(a) - 1 + 2 \times b.count('?') + g(c) \le 1$. Since $i \le q \le j$, then $2 \le 2 \times b.count('?')$. Also, $0 \le g(c), f(a)$. So, $1 \le f(a) - 1 + 2 \times b.count('?') + g(c) \le 1$. So $f(a) - 1 + 2 \times b.count('?') + g(c) = 1$. This requires that $f(a) = g(c) = 0$ and $b.count('?') = 1$. Since $f(a)$ and $g(c)$ are even, then $|a|$ and $|c|$ are even, and since $|s|$ is even (first condition), then $|b|$ is also even (because $|s| = |a| + |b| + |c|$). $f(a) = g(c) = 0$ and $0 \le f(a) + f(b)$ and $0 \le g(b) + g(c)$, thus $0 \le f(b), g(b)$. Also, $f(a) + f(b), g(b) + g(c) \le 1$, thus $0 \le f(b), g(b) \le 1$, since $|b|$ is even, $f(b)$ and $g(b)$ are also even, thus, $f(b) = g(b) = 0$. $g(b) = - f(b) + 2 \times b.count('?')$ and since $1 \le b.count('?')$ then $g(b) \neq 0$. Thus, we have $0 \neq 0$, which is false. So the lemma is true. Using the lemma above, each time we can replace a question mark by parentheses and at the end we get a correct bracket sequence. After proof: Knowing this fact, we can find all such substrings by checking the three conditions. Total time complexity: $O(n^{2})\,$ where $n = |s|$
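The $O(n^2)$ count implied by the three conditions can be implemented by fixing $l$ and, as $r$ grows, maintaining the minimum and maximum balance achievable over all choices for the question marks: the maximum enforces the prefix condition, and clamping the minimum at zero captures the rest. A minimal sketch (the function name `countPretty` is my own, not from the editorial):

```cpp
#include <cassert>
#include <string>
#include <algorithm>

// Count substrings that can be turned into a non-empty correct
// bracket sequence. For a fixed l, [lo, hi] bounds the balances
// reachable by prefixes of s[l..r] that never drop below zero.
long long countPretty(const std::string& s) {
    long long ans = 0;
    int n = s.size();
    for (int l = 0; l < n; l++) {
        int lo = 0, hi = 0;
        for (int r = l; r < n; r++) {
            if (s[r] == '(')      { lo++; hi++; }
            else if (s[r] == ')') { lo--; hi--; }
            else                  { lo--; hi++; } // '?' goes either way
            if (hi < 0) break;    // some prefix is forced negative
            lo = std::max(lo, 0); // balance can always be kept >= 0
            // even length and balance 0 reachable => pretty substring
            if ((r - l + 1) % 2 == 0 && lo == 0) ans++;
        }
    }
    return ans;
}
```

On the two samples from the statement this returns 4 for "((?))" and 7 for "??()??".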
[ "dp", "greedy", "implementation", "math" ]
1,800
null
917
B
MADMAX
As we all know, Max is the best video game player among her friends. Her friends were so jealous of her that they created an actual game just to prove that she's not the best at games. The game is played on a directed acyclic graph (a DAG) with $n$ vertices and $m$ edges. There's a character written on each edge, a lowercase English letter. Max and Lucas are playing the game. Max goes first, then Lucas, then Max again and so on. Each player has a marble, initially located at some vertex. Each player in his/her turn should move his/her marble along some edge (a player can move the marble from vertex $v$ to vertex $u$ if there's an outgoing edge from $v$ to $u$). If the player moves his/her marble from vertex $v$ to vertex $u$, the "character" of that round is the character written on the edge from $v$ to $u$. There's one additional rule: the ASCII code of the character of round $i$ should be \textbf{greater than or equal} to the ASCII code of the character of round $i - 1$ (for $i > 1$). The rounds are numbered for both players together, i. e. Max goes in odd numbers, Lucas goes in even numbers. The player that can't make a move loses the game. The marbles may be at the same vertex at the same time. Since the game could take a while and Lucas and Max have to focus on finding Dart, they don't have time to play. So they asked you, if they both play optimally, who wins the game? You have to determine the winner of the game for all initial positions of the marbles.
Denote $dp(v, u, c)$ as the winner of the game (a boolean, true if the player to move wins) if the player to move has their marble at vertex $v$, the other player's marble is at $u$, and the set of allowed letters is $\{ichar(c), ichar(c + 1), ..., 'z'\}$, where $ichar(i) = char('a' + i)$ ($c$ is an integer). Denote $adj(v) = \{x : v \to x\}$ and $ch(x, y)$ as the character written on the edge from $x$ to $y$. Now if there's some $x$ in $adj(v)$ such that $c \le int(ch(v, x) - 'a')$ and $dp(u, x, int(ch(v, x) - 'a')) = false$, then the player to move can move his/her marble to vertex $x$ and win the game, thus $dp(v, u, c) = true$; otherwise it's false. Because the graph is a DAG there's no cycle in this dp, thus we can use memoization. The answer for $i, j$ is $dp(i, j, 0)$. Total time complexity: $O(|\Sigma| \times n \times (n + m))$
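The memoized game dp can be sketched as follows, assuming 0-indexed vertices and edge characters stored as integers in $[0, 25]$ (the names `adj`, `memo` and `win` are illustrative, not from a reference solution):

```cpp
#include <cassert>
#include <vector>
#include <utility>

// win(v, u, c): does the player about to move (marble at v, opponent's
// marble at u) win, given the move's character must be >= c?
// adj[v] holds (target, character - 'a') pairs of the DAG's edges.
const int MAXN = 100;
std::vector<std::pair<int, int>> adj[MAXN];
char memo[MAXN][MAXN][27]; // 0 = unknown, 1 = win, 2 = lose

bool win(int v, int u, int c) {
    char& m = memo[v][u][c];
    if (m) return m == 1;
    m = 2; // losing by default: no winning move found yet
    for (auto& [x, ch] : adj[v])
        if (ch >= c && !win(u, x, ch)) { m = 1; break; } // roles swap
    return m == 1;
}
```

The answer grid is then `win(i, j, 0)` for all pairs; each state is computed once, giving the complexity stated above.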
[ "dfs and similar", "dp", "games", "graphs" ]
1,700
null
917
C
Pollywog
As we all know, Dart is some kind of creature from Upside Down world. For simplicity, we call their kind pollywogs. Dart and $x - 1$ other pollywogs are playing a game. There are $n$ stones in a row, numbered from $1$ through $n$ from left to right. At most $1$ pollywog may be sitting on each stone at a time. Initially, the pollywogs are sitting on the first $x$ stones (one pollywog on each stone). Dart and his friends want to end up on the last $x$ stones. At each second, the leftmost pollywog should jump to the right. A pollywog can jump at most $k$ stones; more specifically, a pollywog can jump from stone number $i$ to stones $i + 1, i + 2, ..., i + k$. A pollywog can't jump on an occupied stone. Jumping a distance $i$ takes $c_{i}$ amounts of energy from the pollywog. Also, $q$ stones are special. Each time landing on a special stone $p$ takes $w_{p}$ amounts of energy (in addition to the energy for the jump) from the pollywog. $w_{p}$ could be negative; in this case, it means the pollywog absorbs $|w_{p}|$ amounts of energy. Pollywogs want to spend as little energy as possible (this value could be negative). They're just pollywogs, so they asked for your help. Tell them the total change in their energy, in case they move optimally.
What would we do if $n$ was small? Notice that at any given time, if $i$ is the position of the leftmost pollywog and $j$ is the position of the rightmost pollywog, then $j - i < k$. Thus, at any given time there's an $i$ such that all pollywogs are on stones $i, i + 1, ..., i + k - 1$, in other words, $k$ consecutive stones. $x$ pollywogs are on $k$ consecutive stones, thus there are $\binom{k}{x}$ different ways to seat these pollywogs on $k$ stones, which is about $70$ at most. Denote $dp[i][state]$ as the minimum amount of energy the pollywogs need to end up on stones $i, i + 1, ..., i + k - 1$ with their positions described by $state$ (there are $\binom{k}{x}$ states in total). We assume $init$ is the initial state (pollywogs on the $x$ first stones) and $final$ is the final state (pollywogs on the $x$ last stones). We can easily update $dp$ in $O(k)$ (where would the first pollywog jump?) using dynamic programming, and this would work in $O(n \times k \times \binom{k}{x})$ since the answer is $dp[n - k + 1][final]$. But $n$ is large, so what we can do is use matrix multiplication (similar to ordinary matrix multiplication, but when multiplying two matrices, we use minimum instead of sum and sum instead of multiplication; that means if $C = A \times B$ then $C[i][j] = \min_{k}(A[i][k] + B[k][j])$) to update the dp, which in case $q = 0$ solves the problem in $O(\binom{k}{x}^{3} \times \log(n))$. For $q > 0$, we combine the dynamic programming without matrices and with matrices. Note that the special stones only matter in updating the dp when there's a special stone among $i, i + 1, ..., i + k - 1$, that is, for at most $k \times q$ such $i$; for the rest we can use matrices for updating. Total time complexity: $O(\log(n) \binom{k}{x}^{3} + q k^{2} \binom{k}{x})$
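The full solution is involved, but its core building block, matrix exponentiation over the $(\min, +)$ semiring, can be sketched on its own. This is a generic sketch, not the complete solution; constructing the transition matrix over the $\binom{k}{x}$ seating masks and handling special stones are omitted:

```cpp
#include <cassert>
#include <vector>
#include <algorithm>

typedef long long lint;
const lint BINF = 0x3f3f3f3f3f3f3f3fLL; // "infinity", safe against one addition
typedef std::vector<std::vector<lint>> Mat;

// (min, +) product: C[i][j] = min over t of A[i][t] + B[t][j]
Mat mul(const Mat& A, const Mat& B) {
    int n = A.size();
    Mat C(n, std::vector<lint>(n, BINF));
    for (int i = 0; i < n; i++)
        for (int t = 0; t < n; t++) {
            if (A[i][t] >= BINF) continue; // skip unreachable entries
            for (int j = 0; j < n; j++)
                if (B[t][j] < BINF)
                    C[i][j] = std::min(C[i][j], A[i][t] + B[t][j]);
        }
    return C;
}

// A^p in the (min, +) semiring; the identity has 0 on the diagonal
Mat mpow(Mat A, lint p) {
    int n = A.size();
    Mat R(n, std::vector<lint>(n, BINF));
    for (int i = 0; i < n; i++) R[i][i] = 0;
    while (p) {
        if (p & 1) R = mul(R, A);
        A = mul(A, A);
        p >>= 1;
    }
    return R;
}
```

Raising the one-step transition matrix to the length of a stretch with no special stones advances the whole dp vector across that stretch in $O(\binom{k}{x}^{3} \log)$ time.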
[ "combinatorics", "dp", "matrices" ]
2,900
null
917
D
Stranger Trees
Will shares a psychic connection with the Upside Down Monster, so everything the monster knows, Will knows. Suddenly, he started drawing, page after page, non-stop. Joyce, his mom, and Chief Hopper put the drawings together, and they realized, it's a labeled tree! A tree is a connected acyclic graph. Will's tree has $n$ vertices. Joyce and Hopper don't know what that means, so they're investigating this tree and similar trees. For each $k$ such that $0 ≤ k ≤ n - 1$, they're going to investigate all labeled trees with $n$ vertices that share exactly $k$ edges with Will's tree. Two labeled trees are different if and only if there's a pair of vertices $(v, u)$ such that there's an edge between $v$ and $u$ in one tree and not in the other one. Hopper and Joyce want to know how much work they have to do, so they asked you to tell them the number of labeled trees with $n$ vertices that share exactly $k$ edges with Will's tree, for each $k$. The answer could be very large, so they only asked you to tell them the answers modulo $1000000007 = 10^{9} + 7$.
Solution #1: First, for every $K$ such that $0 \le K \le N - 1$, and for every choice of $K$ edges in the original tree, we are going to find the number of labeled trees containing these $K$ edges, then add them all to $res[K]$. But Mr. Author, aren't we going to count some tree that has exactly $E$ (where $E > K$) common edges with the original tree in $res[K]$? Yes, that's true. But we count it exactly $\binom{E}{K}$ times! So, after computing the $res$ array we are going to iterate from $N - 1$ down to $0$, assuming that $res$ is already correct for all $J > I$ (our current iteration), and then reduce $res[I]$ by $res[J] \times \binom{J}{I}$ (using the fixed $res$). Then we'll have the correct value for $res[I]$. But Mr. Author, how are we going to find $res[K]$ in the first place? Let's first find out, for a fixed forest of $K$ edges, in how many ways we can connect the remaining vertices to get a tree. Let's look at the components in the forest. Only their sizes are relevant because we can't connect anything inside them. Let the sizes be $sz[0] ... sz[N - 1]$ (if you assume that the sizes are all $1$, the number of resulting trees is $N^{N - 2}$, by Cayley's formula). To solve this subproblem, let's go to another subproblem. Let's assume that for every additional edge, we know which components it is going to connect. Then, the number of resulting trees is $\prod_{i=0}^{N-1} sz[i]^{d[i]}$ where $d[i]$ is the degree of component $i$ (edges between this component and other components). The reason is that we have $sz[v]$ vertices inside component $v$ to give to every edge that has one endpoint in $v$. Okay, going back to the parent subproblem. $d[i]$, huh? I've heard that vertex $v$ appears in the Prufer code of a tree $d[v] - 1$ times. So we've gotta multiply the answer by $sz[v]$ every time it appears in the Prufer code. It's also multiplied by $\prod_{i=0}^{N-1} sz[i]$ because we haven't multiplied it one time ($d[v] - 1$, not $d[v]$).
But how to make it get multiplied by $sz[v]$ every time component $v$ is chosen? Look at this product: $(sz[0] + sz[1] + ... + sz[N - 1])^{N - 2}$. If in the $i$-th parenthesis $sz[v]$ is chosen, then let the $i$-th place in the Prufer code of the tree connecting the components be the component $v$. The good thing about this product is that if component $v$ appears in the Prufer code $K$ times, then the product of the parentheses has $sz[v]^{K}$ in it. So it counts exactly what we want to count. $\left(\prod_{i=0}^{N-1} sz[i]\right) \times \left(\sum_{i=0}^{N-1} sz[i]\right)^{N-2}$ is the answer for some fixed $K$ edges. Here $\sum_{i=0}^{N-1} sz[i]$ corresponds to $N$ in the original problem and $N - 2$ corresponds to $(\text{number of components}) - 2$, so we want to count $\left(\prod_{i=0}^{N-1} sz[i]\right) \times N^{(\text{number of components}) - 2}$. Okay Mr. Author, so how do we count this for every choice of $K$ fixed edges in the original tree? Let's compute $dp[v][s][e]$, indexed by the top vertex $v$, the size $s$ of the component containing $v$, and the number $e$ of edges we have fixed, which contains the product $\prod sz[i]$ over every component inside $v$'s subtree that doesn't include $v$'s own component, times $N^{\text{number of components not including } v\text{'s}}$. We can update this from $v$'s children. Let's add $v$'s children one by one to the $dp$, assuming that the children we haven't processed yet don't exist in $v$'s subtree. Going over $old\_dp[v][vs][ve]$ and $dp[u][us][ue]$, we either fix the edge between $u$ and $v$, which adds $dp[u][us][ue] \times old\_dp[v][vs][ve]$ to $next\_dp[v][us + vs][ve + ue + 1]$, or we don't, which adds $dp[u][us][ue] \times old\_dp[v][vs][ve] \times N \times us$ to $next\_dp[v][vs][ve + ue]$. We can also divide by $N^{2}$ at the end with modular inverses. We can find $res[K]$ as the sum of $dp[root][s][K] \times s \times N$ (with $s = N$ as a corner case). The solution may look like it's $O(N^{5})$ because there are $5$ nested loops.
But it's actually $O(N^{4})$ if the $us$ and $vs$ loops only go up to $sz[u]$ and $sz[v]$ (actually only the part of $v$'s subtree that we've iterated over so far). So the cost is $sz[u] \times sz[v] \times N^{2}$. Let's look at it as if every vertex from $u$'s subtree is handshaking with every vertex of $v$'s subtree and the cost of their handshake is $N^{2}$. We know that two vertices handshake only once. That's why it'll be $\binom{N}{2} \times N^{2}$, which is $O(N^{4})$. Solution #2: Let's define $F(X)$ as the number of spanning trees of the graph $K_{N}$ plus $X - 1$ copies of $T$ (the original tree). If we look at $F(X)$ we'll see that it is actually $\sum_{i=0}^{N-1} X^{i} \times (\text{number of trees with } i \text{ common edges with } T)$, because there are $X^{i}$ ways to choose which of the $X$ parallel edges to use for each of the $i$ common edges. So the problem is to find $F$'s coefficients. We can do that by polynomial interpolation if we have $N$ sample values of $F$. Let's just evaluate $F$ for $X = 1$ up to $X = N$. We can find each value using Kirchhoff's matrix tree theorem, which gives the number of spanning trees of a graph as a determinant. So the complexity is $O(\text{interpolation}) + O(\text{determinant} \times N)$. So we have an $O(N^{4})$ complexity. This is how to do it in $N^{2}$ -> (I don't know it yet, I'll update it when I have it ready)
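For Solution #2, the per-evaluation step is a determinant modulo $10^9 + 7$. A self-contained sketch of Gaussian elimination over the prime field (the interpolation step and the Laplacian construction are omitted; `det_mod` is an illustrative name):

```cpp
#include <cassert>
#include <vector>
#include <utility>

typedef long long lint;
const lint MOD = 1'000'000'007LL;

lint mpow(lint b, lint e) {
    lint r = 1; b %= MOD;
    for (; e; e >>= 1, b = b * b % MOD)
        if (e & 1) r = r * b % MOD;
    return r;
}

// Determinant modulo the prime MOD via Gaussian elimination.
lint det_mod(std::vector<std::vector<lint>> a) {
    int n = a.size();
    for (auto& row : a)
        for (auto& x : row) x = ((x % MOD) + MOD) % MOD; // allow negative entries
    lint res = 1;
    for (int col = 0; col < n; col++) {
        int piv = -1;
        for (int r = col; r < n; r++)
            if (a[r][col] != 0) { piv = r; break; }
        if (piv < 0) return 0; // singular matrix
        if (piv != col) { std::swap(a[piv], a[col]); res = MOD - res; } // swap flips sign
        res = res * a[col][col] % MOD;
        lint inv = mpow(a[col][col], MOD - 2); // Fermat inverse, MOD is prime
        for (int r = col + 1; r < n; r++) {
            lint f = a[r][col] * inv % MOD;
            for (int c = col; c < n; c++)
                a[r][c] = (a[r][c] - f * a[col][c] % MOD + MOD) % MOD;
        }
    }
    return res;
}
```

For example, the matrix tree theorem applied to $K_{3}$ uses the Laplacian minor $\begin{pmatrix}2 & -1\\ -1 & 2\end{pmatrix}$, whose determinant $3$ matches Cayley's formula $3^{3-2} \cdot 3 / 3 = 3$.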
[ "dp", "math", "matrices", "trees" ]
2,600
null
917
E
Upside Down
As we all know, Eleven has special abilities. Thus, Hopper convinced her to close the gate to the Upside Down World with her mind. Upside down monsters like to move between the worlds, so they are going to attack Hopper and Eleven in order to make them stop. The monsters live in the vines. The vines form a tree with $n$ vertices, numbered from $1$ through $n$. There's a lowercase English letter written in each tunnel (edge). Upside down is a magical world. There are $m$ types of monsters in upside down, numbered from $1$ through $m$. Each type of monster has a special word that gives them powers. The special word of type $i$ is $s_{i}$. There are $q$ monsters in upside down. Each one is at a junction (vertex) and is going to some other junction. If a monster of type $k$ goes from junction $i$ to junction $j$, the power it gains is the number of times it sees its special word ($s_{k}$) consecutively in the tunnels. More formally: If $f(i, j)$ is the string we get when we concatenate the letters written in the tunnels on the shortest path from $i$ to $j$, then the power the monster gains is the number of occurrences of $s_{k}$ in $f(i, j)$. Hopper and Eleven want to get prepared, so for each monster, they want to know the power the monster gains after moving.
Assume $t_{i}$ is the reverse of $s_{i}$. Use centroid decomposition. When solving the problem for subtree $S$, assume its centroid is $c$. For a fixed query, assume $v$ and $u$ are both in $S$ and the path from $v$ to $u$ goes through the centroid $c$ (this happens exactly once for each pair $v$ and $u$). Assume $x$ is the string of the path from $c$ to $v$ and $y$ is the string of the path from $c$ to $u$. We should find the number of occurrences of $s_{k}$ in $reverse(x) + y$. If the number of occurrences of $s$ in $t$ is $f(s, t)$, then $f(s_{k}, reverse(x) + y) = f(t_{k}, x) + f(s_{k}, y) + A$. The first two terms can be calculated using Aho-Corasick and a segment tree. $A$ is the number of occurrences of $s_{k}$ that straddle the junction, i.e. some part belongs to $reverse(x)$ and some to $y$. So far the time complexity is $O(n\log^{2}(n))$. Now for counting $A$, first build the suffix tree of each string (for each $s_{k}$ and $t_{k}$). A suffix tree is a trie, so let $sf(v, s)$ be the vertex we reach when we feed the string of the path from $c$ (root) to $v$ character by character into the suffix tree of string $s$. We can calculate this fast for every $v$ and $s$ if we merge these suffix trees into one trie (we do this before we start the centroid-decomposition algorithm, as pre-processing). We associate a value $val$ to each vertex of the trie, initially zero for every vertex. Now we traverse this trie with a DFS. When we reach a vertex $x$, we iterate over all suffixes (there are $2(|s_{1}| + ... + |s_{n}|)$ suffixes) that end in $x$ (the suffixes equal to the string of the path from the root of the trie to vertex $x$), and for each suffix $(s, k)$ (the suffix of string $s$ of length $k$), we add $1$ to the $val$ of each vertex in the subtree of the vertex where the suffix $(reverse(s), |s| - k)$ ends, and we subtract this number back when we're exiting vertex $x$ (in the DFS).
Now back to the centroid decomposition: $A$ equals the $val$ of the trie vertex where the suffix $(t_{k}, b)$ ends, read at the moment the DFS is at the trie vertex where $(s_{k}, a)$ ends, where $a$ is the length of the longest suffix of $s_{k}$ that is a prefix of the string of the path from $c$ (root) to $u$ and, similarly, $b$ is the length of the longest suffix of $t_{k}$ that is a prefix of the string of the path from $c$ (root) to $v$. To achieve this, we can use a persistent segment tree over the entry-time/exit-time range of the vertices of the trie (or, without a persistent segment tree, we could calculate every $A$ after the centroid decomposition is finished, i.e. offline). Total time complexity: $O(N\log^{2}(N))$ where $N = n + q + |s_{1}| + |s_{2}| + ... + |s_{n}|$.
[ "data structures", "string suffix structures", "strings", "trees" ]
3,400
null
918
A
Eleven
Eleven wants to choose a new name for herself. As a bunch of geeks, her friends suggested an algorithm to choose a name for her. Eleven wants her name to have exactly $n$ characters. Her friend suggested that her name should only consist of uppercase and lowercase letters 'O'. More precisely, they suggested that the $i$-th letter of her name should be 'O' (uppercase) if $i$ is a member of Fibonacci sequence, and 'o' (lowercase) otherwise. The letters in the name are numbered from $1$ to $n$. Fibonacci sequence is the sequence $f$ where - $f_{1} = 1$, - $f_{2} = 1$, - $f_{n} = f_{n - 2} + f_{n - 1}$ ($n > 2$). As her friends are too young to know what Fibonacci sequence is, they asked you to help Eleven determine her new name.
Calculate the first $x$ Fibonacci sequence elements, where $x$ is the greatest integer such that $f_{x} \le n$. Let $s$ be a string consisting of $n$ lowercase 'o' letters. Then for each $i \le x$, set $s_{f_{i}}$ = 'O'. The answer is $s$. Total time complexity: ${\mathcal{O}}(n)$
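No reference code is attached for this row, so here is a minimal Python sketch of the approach above (the function name `eleven_name` is ours, not from the source):

```python
def eleven_name(n):
    # Collect all Fibonacci numbers not exceeding n.
    fib = set()
    a, b = 1, 1
    while a <= n:
        fib.add(a)
        a, b = b, a + b
    # Position i (1-based) gets 'O' iff i is a Fibonacci number.
    return ''.join('O' if i in fib else 'o' for i in range(1, n + 1))
```

For example, `eleven_name(8)` yields `OOOoOooO`, since positions 1, 2, 3, 5 and 8 are Fibonacci numbers.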
[ "brute force", "implementation" ]
800
null
918
B
Radio Station
As the guys fried the radio station facilities, the school principal gave them tasks as a punishment. Dustin's task was to add comments to nginx configuration for school's website. The school has $n$ servers. Each server has a name and an ip (names aren't necessarily unique, but ips are). Dustin knows the ip and name of each server. For simplicity, we'll assume that an nginx command is of form "command ip;" where command is a string consisting of English lowercase letter only, and ip is the ip of one of school servers. Each ip is of form "a.b.c.d" where $a$, $b$, $c$ and $d$ are non-negative integers less than or equal to $255$ (with no leading zeros). The nginx configuration file Dustin has to add comments to has $m$ commands. Nobody ever memorizes the ips of servers, so to understand the configuration better, Dustin has to comment the name of server that the ip belongs to at the end of each line (after each command). More formally, if a line is "command ip;" Dustin has to replace it with "command ip; #name" where name is the name of the server with ip equal to ip. Dustin doesn't know anything about nginx, so he panicked again and his friends asked you to do his task for him.
Save the names and ips of the servers. Then for each command find the server in ${\mathcal{O}}(n)$ and print its name. Total time complexity: $O(n m)$
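No reference code is attached for this row; a short Python sketch of the idea follows (the helper name `annotate` is ours). Since ips are unique, a dictionary lookup per command also works, which is even simpler than the editorial's $O(n)$ scan:

```python
def annotate(servers, commands):
    # servers: list of (name, ip) pairs; commands: lines of form "command ip;"
    name_of = {ip: name for name, ip in servers}
    result = []
    for line in commands:
        command, ip = line.split()            # ip still carries the ';'
        result.append(line + " #" + name_of[ip.rstrip(';')])
    return result
```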
[ "implementation", "strings" ]
900
null
919
A
Supermarket
We often go to supermarkets to buy some fruits or vegetables, and on the tag there prints the price for a kilo. But in some supermarkets, when asked how much the items are, the clerk will say that $a$ yuan for $b$ kilos (You don't need to care about what "yuan" is), the same as $a/b$ yuan for a kilo. Now imagine you'd like to buy $m$ kilos of apples. You've asked $n$ supermarkets and got the prices. Find the minimum cost for those apples. You can assume that there are enough apples in all supermarkets.
We can use a greedy algorithm. Obviously, if you pay the least money per kilo, you pay the least money for $m$ kilos. So find the minimum of $a_i/b_i$, say it is $x$. Then $m \cdot x$ is the final answer. Time complexity: $\mathcal{O}(n)$.
[ "brute force", "greedy", "implementation" ]
800
#include <bits/stdc++.h>
#define N 5010
using namespace std;
int n, m, a[N], b[N];
int main(){
    scanf("%d%d", &n, &m);
    for (int i = 1; i <= n; i++){
        scanf("%d%d", &a[i], &b[i]);
    }
    int minA = a[1], minB = b[1];
    for (int i = 2; i <= n; i++){
        if (minA * b[i] > minB * a[i]){
            minA = a[i];
            minB = b[i];
        }
    }
    printf("%.8lf\n", (double) minA * m / (double) minB);
}
919
B
Perfect Number
We consider a positive integer perfect, if and only if the sum of its digits is exactly $10$. Given a positive integer $k$, your task is to find the $k$-th smallest perfect positive integer.
Let's use brute force to find the answer. You may find that the answer is not too large (i.e. not bigger than $2 \cdot 10^7$), so you can find it within the given time limit. Check every candidate from $1$ (or from $19$, the smallest perfect integer) upwards, until we find the $k$-th perfect integer. That's all we need to do. :P Time complexity: $\mathcal{O}(answer)$. :P
[ "binary search", "brute force", "dp", "implementation", "number theory" ]
1,100
def cal(x):
    ans = 0
    while (x):
        ans += x % 10
        x /= 10
    return ans

n = input()
now = 0
while (n):
    now += 1
    if cal(now) == 10:
        n -= 1
print now
919
C
Seat Arrangements
Suppose that you are in a campus and have to go for classes day by day. As you may see, when you hurry to a classroom, you surprisingly find that many seats there are already occupied. Today you and your friends went for class, and found out that some of the seats were occupied. The classroom contains $n$ rows of seats and there are $m$ seats in each row. Then the classroom can be represented as an $n \times m$ matrix. The character '.' represents an empty seat, while '*' means that the seat is occupied. You need to find $k$ consecutive empty seats in the same row or column and arrange those seats for you and your friends. Your task is to find the number of ways to arrange the seats. \textbf{Two ways are considered different if sets of places that students occupy differs.}
Find every maximal run of consecutive empty seats in every row and in every column separately and add the counts together to form the final answer: if the length $len$ of a run is no smaller than $k$, it contributes $len-k+1$ to the answer. But be careful: when $k=1$, the algorithm shown above counts every empty seat twice — once in its row and once in its column — so the result must be halved in that case. (I guess there will be lots of hacks :P) Time complexity: $\mathcal{O}(nm)$.
[ "brute force", "implementation" ]
1,300
a = raw_input().split()
n, m, k = int(a[0]), int(a[1]), int(a[2])
mat = []
for i in range(n):
    s = raw_input()
    mat.append([])
    for j in range(m):
        if s[j] == '*':
            mat[i].append(0)
        else:
            mat[i].append(1)
ans = 0
for i in range(n):
    res = 0
    for j in range(m):
        if mat[i][j]:
            res += 1
            if res >= k:
                ans += 1
        else:
            res = 0
for i in range(m):
    res = 0
    for j in range(n):
        if mat[j][i]:
            res += 1
            if res >= k:
                ans += 1
        else:
            res = 0
if k == 1:
    ans /= 2
print ans
919
D
Substring
You are given a graph with $n$ nodes and $m$ \textbf{directed} edges. One lowercase letter is assigned to each node. We define a path's value as the number of the most frequently occurring letter. For example, if letters on a path are "abaca", then the value of that path is $3$. Your task is find a path whose value is the largest.
It is obvious that we can use a dynamic programming algorithm to solve this stupid problem. :P Make an array $f[i][j]$ representing, when you are at vertex $i$, the maximum number of occurrences of letter $j$ you can collect on a path ending at $i$. Note that $i$ ranges from $1$ to $n$ and $j$ from $1$ to $26$. You can run this dynamic programming with topological sorting. Specifically, let $deg[i]$ be the number of edges that end at vertex $i$. First put all vertices $i$ satisfying $deg[i]=0$ into a queue. While the queue is not empty, repeat the following steps. Take one vertex from the front of the queue, say $i$, and increase a counter $cnt$ by $1$. Iterate over all edges beginning at vertex $i$; say the endpoint of such an edge is $k$. Update $f[k][\cdot]$ using $f[i][\cdot]$, then decrease $deg[k]$ by $1$. If $deg[k] = 0$, add vertex $k$ to the queue. When there is no cycle in the graph, the queue will empty with $cnt = n$, and the answer is the maximum among all $f[i][j]$. If there are cycles in the graph, $cnt$ cannot reach $n$; the answer can then be arbitrarily large, so in that situation we should output -1. Time complexity: $\mathcal{O}(m \cdot alpha)$, where $alpha$ is the size of the alphabet (i.e. $alpha = 26$). Memory complexity: $\mathcal{O}(n \cdot alpha)$.
[ "dfs and similar", "dp", "graphs" ]
1,700
#include <bits/stdc++.h>
#define N 300010
using namespace std;
struct Edge{
    int from, to, next;
} edge[N];
int head[N], tot;
inline void addedge(int u, int v){
    edge[++tot] = (Edge){u, v, head[u]}, head[u] = tot;
}
int f[N][26], n, m, d[N];
char s[N];
queue<int> Q;
int main(){
    scanf("%d%d", &n, &m);
    scanf("%s", s + 1);
    for (int i = 1; i <= m; i++){
        int x, y;
        scanf("%d%d", &x, &y);
        addedge(x, y);
        d[y]++;
    }
    for (int i = 1; i <= n; i++){
        if (!d[i]){
            Q.push(i);
            f[i][s[i] - 'a'] = 1;
        }
    }
    int rem = n;
    while (!Q.empty()){
        int now = Q.front();
        Q.pop();
        rem--;
        for (int i = head[now]; i; i = edge[i].next){
            Edge e = edge[i];
            for (int j = 0; j < 26; j++){
                f[e.to][j] = max(f[e.to][j], f[now][j] + (s[e.to] - 'a' == j));
            }
            d[e.to]--;
            if (!d[e.to]) Q.push(e.to);
        }
    }
    if (rem) return puts("-1"), 0;
    int ans = 0;
    for (int i = 1; i <= n; i++){
        for (int j = 0; j < 26; j++){
            ans = max(ans, f[i][j]);
        }
    }
    printf("%d\n", ans);
    return 0;
}
919
E
Congruence Equation
Given an integer $x$. Your task is to find out how many positive integers $n$ ($1 \leq n \leq x$) satisfy $$n \cdot a^n \equiv b \quad (\textrm{mod}\;p),$$ where $a, b, p$ are all known constants.
Trying all integers from $1$ to $x$ is too slow to solve this problem, so we need to exploit some structure of the given equation. Because $a^{p-1} \equiv 1 \pmod{p}$ when $p$ is a prime, $a^z \bmod p$ is periodic in $z$ with period $p-1$, while $z \bmod p$ is periodic with period $p$. Write $n = i \cdot (p-1) + j$ with $1 \le j \le p-1$. Then $a^n \equiv a^j \pmod p$ and $n = i \cdot p - i + j \equiv j - i \pmod p$, so $n \cdot a^n \equiv (j - i) \cdot a^j \pmod p$. Therefore we can enumerate $j$ from $1$ to $p - 1$ and calculate $y = b \cdot a^{-j} \bmod p$; the equation becomes $j - i \equiv y \pmod{p}$. So for a certain $j$, the possible $i$ can only be $(j - y), p + (j - y), \ldots, p \cdot t + (j - y)$. Then we can calculate how many possible answers $n$ there are in this situation (i.e. determine the minimum and maximum possible $t$ using the given lower bound $1$ and upper bound $x$). Finally we add them all together and get the answer. Time complexity: $\mathcal{O}(p \log p)$ or $\mathcal{O}(p)$ depending on how you calculate $y = b \cdot a^{-j}$. By the way, you can also try the Chinese Remainder Theorem to solve this problem.
[ "chinese remainder theorem", "math", "number theory" ]
2,100
#include <bits/stdc++.h>
using namespace std;
typedef long long LL;
int a, b, p;
LL x;
inline int pow(int a, int b, int p){
    LL ans = 1, base = a;
    while (b){
        if (b & 1){
            (ans *= base) %= p;
        }
        (base *= base) %= p;
        b >>= 1;
    }
    return (int)ans;
}
inline int inv(int x, int p){
    return pow(x, p - 2, p);
}
int main(){
    scanf("%d%d%d%I64d", &a, &b, &p, &x);
    LL ans = 0;
    for (int i = 1; i <= p - 1; i++){
        int now = (LL)b * inv((LL)pow(a, i, p), p) % p;
        LL first = (LL)(p - 1) * ((i - now + p) % p) + i;
        if (first > x) continue;
        ans += (x - first) / ((LL)p * (p - 1)) + 1;
    }
    printf("%I64d\n", ans);
}
919
F
A Game With Numbers
Imagine that Alice is playing a card game with her friend Bob. They both have exactly $8$ cards and there is an integer on each card, ranging from $0$ to $4$. In each round, Alice or Bob in turns choose two cards from different players, let them be $a$ and $b$, where $a$ is the number on the player's card, and $b$ is the number on the opponent's card. It is necessary that $a \cdot b \ne 0$. Then they calculate $c = (a + b) \bmod 5$ and replace the number $a$ with $c$. The player who ends up with numbers on all $8$ cards being $0$, wins. Now Alice wants to know who wins in some situations. She will give you her cards' numbers, Bob's cards' numbers and the person playing the first round. Your task is to determine who wins if both of them choose the best operation in their rounds.
First we should notice that the number of useful states isn't something like $(5^8)^2$, because the order of the numbers in each player's hand does not matter. Therefore, the number of useful states per player is $\binom{5 + 8 - 1}{8}$, and the total number of useful states is estimated as $\binom{5 + 8 - 1}{8}^2$, which is $245\,025$. We can model those states as nodes and link them with directed edges showing the transitions between two states (i.e. one can reach the other in one move). Then we run BFS (or topological sort) on that graph. For a certain state, look at all the states it links to: if some successor is a losing state (for the player who moves there), then the current player wins; otherwise, if some successor is a "Deal" state, then the current player can force a deal; if all successors are winning states, then the current player loses. This is because the current player always chooses the best available move. So we can do some initialization to get the "Win", "Lose" or "Deal" status for all possible states. Follow these steps. Give all states whose status is immediately identifiable a "Win" or "Lose" label, and push them into a queue; give all other states "Deal" temporarily. Take a state from the front of the queue and update all states that can reach it in one step (i.e. have an edge to it) using the rule shown above. If such a state can be given a definite status, push it into the queue; otherwise ignore it. Repeat until the queue is empty. After this step (or we can say "Initialization"), we can answer those $T$ queries easily. Time complexity: $\mathcal{O}(k^2 \cdot \binom{m + k - 1}{k}^2 + T)$, where $k$ is the number of cards each player has, and $m$ is the modulus. Here we have $m = 5$ and $k = 8$.
[ "games", "graphs", "shortest paths" ]
2,600
#include <bits/stdc++.h>
using namespace std;
#define re return
#define sz(a) (int)a.size()
#define mp(a, b) make_pair(a, b)
#define fi first
#define se second
#define forn(i, n) for (int i = 0; i < int(n); i++)
typedef long long ll;
typedef vector<int> vi;
typedef pair<int, int> pii;
typedef pair<long long, long long> pll;
typedef long double ld;
typedef unsigned long long ull;
const ll mod = int(1e9) + 7;
int ans[500][500], st[500][500];
vector<pair<int, int> > go[500][500];
vector<vector<int> > cc;
vector<int> c;
int mv[(1 << 20)];
int it = 0;
void pr(int n, int k) {
    if (k == 1) {
        c.push_back(n);
        mv[(((c[4] * 16 + c[3]) * 16 + c[2]) * 16 + c[1]) * 16 + c[0]] = it;
        cc.push_back(c);
        it++;
        c.pop_back();
        re;
    }
    for (int j = 0; j <= n; j++) {
        c.push_back(j);
        pr(n - j, k - 1);
        c.pop_back();
    }
}
int kkk[5] = {1, 16, 256, 4096, 65536};
void get_int(int &ans) {
    ans = 0;
    char c = getchar();
    while (c < '0' || c > '9') c = getchar();
    while (c >= '0' && c <= '9') {
        ans = ans * 10 + c - '0';
        c = getchar();
    }
}
int main() {
    pr(8, 5);
    queue<pair<int, int> > pp;
    forn (j, sz(cc)) {
        vector<int> vv = cc[j];
        int cnum = (((vv[4] * 16 + vv[3]) * 16 + vv[2]) * 16 + vv[1]) * 16 + vv[0];
        bool ok = true;
        forn (q, sz(cc)) {
            bool ok1 = true;
            for (int ii = 1; ii < 5; ii++)
                if (cc[j][ii]) {
                    ok = false;
                    for (int jj = 1; jj < 5; jj++)
                        if (cc[q][jj]) {
                            ok1 = false;
                            cnum -= kkk[ii];
                            cnum += kkk[(ii + jj) % 5];
                            go[q][mv[cnum]].push_back(mp(j, q));
                            st[j][q]++;
                            cnum += kkk[ii];
                            cnum -= kkk[(ii + jj) % 5];
                        }
                }
            if (ok) {
                ans[j][q] = 1;
                pp.push(mp(j, q));
            } else if (ok1) {
                ans[j][q] = 2;
                pp.push(mp(j, q));
            }
        }
    }
    while (!pp.empty()) {
        int j = pp.front().fi;
        int q = pp.front().se;
        pp.pop();
        for (auto v : go[j][q]) {
            if (ans[v.fi][v.se]) continue;
            if (ans[j][q] == 2) {
                ans[v.fi][v.se] = 1;
                pp.push(v);
                continue;
            }
            st[v.fi][v.se]--;
            if (st[v.fi][v.se] == 0) {
                ans[v.fi][v.se] = 2;
                pp.push(v);
                continue;
            }
        }
    }
    int t;
    get_int(t);
    forn (i, t) {
        int f;
        get_int(f);
        int a = 0, b = 0;
        forn (i, 8) {
            int j;
            get_int(j);
            a += (1 << (j * 4));
        }
        forn (i, 8) {
            int j;
            get_int(j);
            b += (1 << (j * 4));
        }
        int k1 = mv[a], k2 = mv[b];
        if (f == 0) {
            if (ans[k1][k2] == 1) printf("Alice\n");
            if (ans[k1][k2] == 2) printf("Bob\n");
            if (ans[k1][k2] == 0) printf("Deal\n");
        } else {
            swap(k1, k2);
            if (ans[k1][k2] == 2) printf("Alice\n");
            if (ans[k1][k2] == 1) printf("Bob\n");
            if (ans[k1][k2] == 0) printf("Deal\n");
        }
    }
}
920
A
Water The Garden
It is winter now, and Max decided it's about time he watered the garden. The garden can be represented as $n$ consecutive garden beds, numbered from $1$ to $n$. $k$ beds contain water taps ($i$-th tap is located in the bed $x_{i}$), which, if turned on, start delivering water to neighbouring beds. If the tap on the bed $x_{i}$ is turned on, then after one second has passed, the bed $x_{i}$ will be watered; after two seconds have passed, the beds from the segment $[x_{i} - 1, x_{i} + 1]$ will be watered (if they exist); after $j$ seconds have passed \textbf{($j$ is an integer number)}, the beds from the segment $[x_{i} - (j - 1), x_{i} + (j - 1)]$ will be watered (if they exist). \textbf{Nothing changes during the seconds, so, for example, we can't say that the segment $[x_{i} - 2.5, x_{i} + 2.5]$ will be watered after $2.5$ seconds have passed; only the segment $[x_{i} - 2, x_{i} + 2]$ will be watered at that moment.} \begin{center} The garden from test $1$. White colour denotes a garden bed without a tap, red colour — a garden bed with a tap. \end{center} \begin{center} The garden from test $1$ after $2$ seconds have passed after turning on the tap. White colour denotes an unwatered garden bed, blue colour — a watered bed. \end{center} Max wants to \textbf{turn on all the water taps at the same moment}, and now he wonders, what is the minimum number of seconds that have to pass after he turns on some taps until the whole garden is watered. Help him to find the answer!
The answer is the maximal value among the following values: $\max_{i=2}^{k}\left\lfloor\frac{a[i]-a[i-1]+2}{2}\right\rfloor$ (to cover all garden beds between adjacent taps), $a[1]$ (to cover the beds before the first tap), $n - a[k] + 1$ (to cover all the beds after the last tap).
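No reference code is attached for this row; a minimal Python sketch of the formula above (the function name `min_watering_time` is ours):

```python
def min_watering_time(n, taps):
    # taps: 1-based positions of the k taps
    taps = sorted(taps)
    ans = max(taps[0], n - taps[-1] + 1)      # cover both ends of the garden
    for prev, cur in zip(taps, taps[1:]):     # gaps between adjacent taps
        ans = max(ans, (cur - prev + 2) // 2)
    return ans
```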
[ "implementation" ]
1,000
null
920
B
Tea Queue
Recently $n$ students from city S moved to city P to attend a programming camp. They moved there by train. In the evening, all students in the train decided that they want to drink some tea. Of course, no two people can use the same teapot simultaneously, so the students had to form a queue to get their tea. $i$-th student comes to the end of the queue at the beginning of $l_{i}$-th second. If there are multiple students coming to the queue in the same moment, then the student with greater index comes after the student with lesser index. Students in the queue behave as follows: if there is nobody in the queue before the student, then he uses the teapot for exactly one second and leaves the queue with his tea; otherwise the student waits for the people before him to get their tea. If at the beginning of $r_{i}$-th second student $i$ still cannot get his tea (there is someone before him in the queue), then he leaves the queue without getting any tea. For each student determine the second he will use the teapot and get his tea (if he actually gets it).
Let's store the last moment when somebody got tea in the variable $lst$. Then if $lst \ge r_{i}$ for the $i$-th student, he will not get tea. Otherwise he will get it during second $max(lst + 1, l_{i})$. And if he gets tea, then $lst$ is replaced with the answer for this student.
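No reference code is attached for this row; here is the one-pass simulation above as a short Python sketch (the function name `tea_seconds` is ours):

```python
def tea_seconds(students):
    # students: (l_i, r_i) pairs in queue order; returns the second each
    # student gets tea, or 0 if they leave without it
    lst, res = 0, []
    for l, r in students:
        if lst >= r:
            res.append(0)              # still blocked at second r_i
        else:
            lst = max(lst + 1, l)
            res.append(lst)
    return res
```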
[ "implementation" ]
1,200
null
920
C
Swap Adjacent Elements
You have an array $a$ consisting of $n$ integers. Each integer from $1$ to $n$ appears exactly once in this array. For some indices $i$ ($1 ≤ i ≤ n - 1$) it is possible to swap $i$-th element with $(i + 1)$-th, for other indices it is not possible. You may perform any number of swapping operations any order. There is no limit on the number of times you swap $i$-th element with $(i + 1)$-th (if the position is not forbidden). Can you make this array sorted in ascending order performing some sequence of swapping operations?
Take a look at some pair $(i, j)$ such that $i < j$ and initially $a_{i} > a_{j}$. It means that all the swaps from $i$ to $j - 1$ should be allowed. Then it's easy to notice that it's enough to check only the pairs $(i, i + 1)$, as any other pair can be deduced from these. You can precalc $pos[a_{i}]$ for each $i$ and prefix sums over the string of allowed swaps to check each pair in constant time. Overall complexity: $O(n)$.
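No reference code is attached for this row. The sketch below uses an equivalent observation rather than the editorial's prefix-sum check: inside a maximal run of allowed positions, adjacent swaps can realize any permutation, so sorting each such block independently is optimal (the function name `can_sort` is ours):

```python
def can_sort(a, allowed):
    # allowed[i] == '1' iff elements at positions i and i+1 (0-based)
    # may be swapped; sort every maximal swappable block and compare
    n, res, i = len(a), list(a), 0
    while i < n:
        j = i
        while j < n - 1 and allowed[j] == '1':
            j += 1
        res[i:j + 1] = sorted(res[i:j + 1])
        i = j + 1
    return res == sorted(a)
```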
[ "dfs and similar", "greedy", "math", "sortings", "two pointers" ]
1,400
null
920
D
Tanks
Petya sometimes has to water his field. To water the field, Petya needs a tank with exactly $V$ ml of water. Petya has got $N$ tanks, $i$-th of them initially containing $a_{i}$ ml of water. The tanks are really large, any of them can contain any amount of water (no matter how large this amount is). Also Petya has got a scoop that can contain up to $K$ ml of water (initially the scoop is empty). This scoop can be used to get some water from some tank, and after that pour it all into some tank (it is impossible to get water from multiple tanks without pouring it, or leave some water in the scoop when pouring it). When Petya tries to get some water from a tank, he gets $min(v, K)$ water, where $v$ is the current volume of water in the tank. Is it possible to obtain a tank with exactly $V$ ml of water using these operations? If it is possible, print a sequence of operations that allows to do it. If there are multiple ways to obtain needed amount of water in some tank, print any of them.
Eliminate the obvious corner case when we don't have enough water ($\sum_{i=1}^{N}a_{i}<V$); from now on we assume this is not the case. Let's fix some set of tanks $S$, and let $d = \sum_{i\in S}a_{i}$ (the total amount of water in the set). If $d \equiv V \pmod K$ ($d$ and $V$ have the same remainder modulo $K$), then we can transfer all water from $S$ to one tank $x$, transfer all water from $[1,N]\setminus S$ to another tank $y$, and then, using some number of scoop operations, transfer the required amount of water from $x$ to $y$ (or from $y$ to $x$). So we have a solution whenever some set of tanks $S$ satisfies $d \equiv V \pmod K$. What if no such set exists? In that case it is impossible to solve the problem, since we can never obtain a tank holding $d$ ml of water with $d \equiv V \pmod K$ (and, in particular, never a tank with exactly $V$ ml). To find this set $S$, we may use some sort of knapsack dynamic programming.
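No reference code is attached for this row. The sketch below covers only the knapsack half — finding a subset of tanks whose total is congruent to $V$ modulo $K$; reconstructing the actual scoop operations is omitted (the function name `subset_with_remainder` is ours):

```python
def subset_with_remainder(a, K, V):
    # reachability DP over remainders mod K; each tank used at most once.
    # Returns indices of a subset with sum ≡ V (mod K), or None.
    reach = {0: []}                      # remainder -> chosen indices
    for i, x in enumerate(a):
        nxt = dict(reach)
        for r, idx in reach.items():
            nr = (r + x) % K
            if nr not in nxt:
                nxt[nr] = idx + [i]
        reach = nxt
    return reach.get(V % K)
```

The DP runs in $O(NK)$, matching the editorial's knapsack suggestion.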
[ "dp", "greedy", "implementation" ]
2,400
null
920
E
Connected Components?
You are given an undirected graph consisting of $n$ vertices and ${\frac{n(n-1)}{2}}-m$ edges. Instead of giving you the edges that exist in the graph, we give you $m$ unordered pairs ($x, y$) such that there is no edge between $x$ and $y$, and if some pair of vertices is not listed in the input, then there is an edge between these vertices. You have to find the number of connected components in the graph and the size of each component. A connected component is a set of vertices $X$ such that for every two vertices from this set there exists at least one path in the graph connecting these vertices, but adding any other vertex to $X$ violates this rule.
Let $S$ be the set of unvisited vertices. To store it, we will use some data structure that allows us to do the following: insert $s$ — insert value $s$ into the set; erase $s$ — delete $s$ from the set; upper_bound $s$ — find the smallest integer $y$ in the set such that $y > s$. For example, std::set<int> from C++ supports all these operations efficiently. We can also use this structure to store the adjacency lists. We will use a modified version of depth-first search. When we enter a vertex in $dfs$, we erase it from the set of unvisited vertices. The trick is that in $dfs$ we iterate over the set of unvisited vertices using its upper_bound function. We make no more than $O(m + n)$ iterations overall, because when we skip an unvisited vertex, that means there is no edge from this vertex to the vertex we are currently traversing in $dfs$, so there will be no more than $2m$ skips; and each iteration we don't skip decreases the number of unvisited vertices.
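No reference code is attached for this row; below is a Python sketch of the idea (the function name `complement_components` is ours). A plain Python set plays the role of std::set: each popped vertex scans the current unvisited set, and every vertex pair is inspected at most twice, so the total work stays $O(n + m)$:

```python
from collections import deque

def complement_components(n, missing):
    # missing: set of frozensets {x, y} such that there is NO edge x-y
    unvisited = set(range(1, n + 1))
    sizes = []
    while unvisited:
        queue = deque([unvisited.pop()])
        size = 1
        while queue:
            v = queue.popleft()
            # neighbours of v in the complement among unvisited vertices
            nxt = [u for u in unvisited if frozenset((v, u)) not in missing]
            for u in nxt:
                unvisited.discard(u)
            queue.extend(nxt)
            size += len(nxt)
        sizes.append(size)
    return sorted(sizes)
```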
[ "data structures", "dfs and similar", "dsu", "graphs" ]
2,100
null
920
F
SUM and REPLACE
Let $D(x)$ be the number of positive divisors of a positive integer $x$. For example, $D(2) = 2$ ($2$ is divisible by $1$ and $2$), $D(6) = 4$ ($6$ is divisible by $1$, $2$, $3$ and $6$). You are given an array $a$ of $n$ integers. You have to process two types of queries: - REPLACE $l$ $r$ — for every $i\in[l,r]$ replace $a_{i}$ with $D(a_{i})$; - SUM $l$ $r$ — calculate $\sum_{i=1}^{r}a_{i}$. Print the answer for each SUM query.
At first let's notice that this function converges very quickly: for values up to $10^{6}$, at most $6$ applications of $D$ reach a fixed point ($1$ or $2$). Now we should learn how to skip updates on the numbers $1$ and $2$. The values of $D$ can be precalculated from the factorization of numbers in $O(MAXN\log MAXN)$ with an Eratosthenes-style sieve. Let's write two segment trees — one storing the maximum on a segment, the other storing the sum. When updating some segment, check whether its maximum is greater than $2$; if not, skip it. Updates are performed the way a build function is usually written: descend to the nodes corresponding to segments of length $1$ and update the values directly. Overall complexity: $O((n + q)\log n)$, as each element is fully updated no more than $6$ times.
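No reference code is attached for this row. The segment trees are omitted here; this short Python sketch shows the two facts the editorial relies on — the divisor-count sieve and the fast convergence of repeated $D$ (both function names are ours):

```python
def divisor_counts(limit):
    # sieve: D[x] = number of divisors of x, in O(limit log limit)
    D = [0] * (limit + 1)
    for d in range(1, limit + 1):
        for multiple in range(d, limit + 1, d):
            D[multiple] += 1
    return D

def steps_to_fixpoint(x, D):
    # how many applications of D until x becomes 1 or 2
    steps = 0
    while x > 2:
        x, steps = D[x], steps + 1
    return steps
```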
[ "brute force", "data structures", "dsu", "number theory" ]
2,000
null
920
G
List Of Integers
Let's denote as $L(x, p)$ an infinite sequence of integers $y$ such that $gcd(p, y) = 1$ and $y > x$ (where $gcd$ is the greatest common divisor of two integer numbers), sorted in ascending order. The elements of $L(x, p)$ are $1$-indexed; for example, $9$, $13$ and $15$ are the first, the second and the third elements of $L(7, 22)$, respectively. You have to process $t$ queries. Each query is denoted by three integers $x$, $p$ and $k$, and the answer to this query is $k$-th element of $L(x, p)$.
Let's use binary search to find the answer. Denote by $A(p, y)$ the number of positive integers $z$ such that $z \le y$ and $gcd(z, p) = 1$; the answer is the smallest integer $d$ such that $A(p, d) \ge A(p, x) + k$. We may use, for example, $10^{18}$ as the right border of the binary search; although the answers are much smaller than this number, for $y = 10^{18}$ it is obvious that $A(p, y)$ will be large enough for any $p$ from $[1;10^{6}]$. How can we calculate $A(p, y)$ quickly? Let's factorize $p$ and use inclusion-exclusion. Let $S$ be a subset of the set of prime divisors of $p$, and $P(S)$ the product of all numbers in $S$. For each possible subset $S$, we add $(-1)^{|S|}\cdot\left\lfloor\frac{y}{P(S)}\right\rfloor$ to the result (since there are exactly $\left\lfloor{\frac{y}{P(S)}}\right\rfloor$ integers from $[1, y]$ divisible by every prime in $S$). Since any number from $[1, 10^{6}]$ has at most $7$ distinct prime divisors, there are at most $2^{7}$ subsets to process.
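No reference code is attached for this row; a Python sketch of the binary search with the inclusion-exclusion counter follows (the function name `kth_coprime_greater` is ours):

```python
def kth_coprime_greater(x, p, k):
    # distinct prime divisors of p by trial division
    primes, q, d = [], p, 2
    while d * d <= q:
        if q % d == 0:
            primes.append(d)
            while q % d == 0:
                q //= d
        d += 1
    if q > 1:
        primes.append(q)

    def count(y):  # A(p, y): integers in [1, y] coprime with p
        total = 0
        for mask in range(1 << len(primes)):
            prod, bits = 1, 0
            for i, pr in enumerate(primes):
                if mask >> i & 1:
                    prod *= pr
                    bits += 1
            total += (-1 if bits & 1 else 1) * (y // prod)
        return total

    target = count(x) + k
    lo, hi = x + 1, 10 ** 18   # answer lies in (x, 10^18]
    while lo < hi:
        mid = (lo + hi) // 2
        if count(mid) >= target:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

For example, the first three elements of $L(7, 22)$ come out as $9$, $13$ and $15$, matching the statement.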
[ "binary search", "bitmasks", "brute force", "combinatorics", "math", "number theory" ]
2,200
null
922
A
Cloning Toys
Imp likes his plush toy a lot. Recently, he found a machine that can clone plush toys. Imp knows that if he applies the machine to an original toy, he additionally gets one more original toy and one copy, and if he applies the machine to a copied toy, he gets two additional copies. Initially, Imp has only one original toy. He wants to know if it is possible to use machine to get exactly $x$ \textbf{copied} toys and $y$ \textbf{original} toys? He can't throw toys away, and he can't apply the machine to a copy if he doesn't currently have any copies.
Consider a few cases: If $y = 0$, the answer is always <<No>>, since the initial original toy cannot disappear. If $y = 1$, the answer is <<Yes>> only if $x = 0$; if $x > 0$, the answer is <<No>>. Otherwise, observe that the original was cloned $y - 1$ times to produce the requested number of originals, which also yields $y - 1$ copies; the remaining copies were created by cloning copies. As every cloning of a copy results in two additional copies, we need to check whether $(x - y + 1)$ is divisible by 2. We also need to take care of the case when $(x - y + 1)$ is less than zero — in this case, the answer is also <<No>>. Time complexity: $O(1)$.
[ "implementation" ]
1,300
x, y = map(int, input().split())
if y == 0:
    print('No')
    exit(0)
if y == 1:
    if x == 0:
        print('Yes')
    else:
        print('No')
    exit(0)
print('Yes' if x >= y - 1 and (x - y + 1) % 2 == 0 else 'No')
922
B
Magic Forest
Imp is in a magic forest, where xorangles grow (wut?) A xorangle of order $n$ is such a non-degenerate triangle, that lengths of its sides are integers not exceeding $n$, and the xor-sum of the lengths is equal to zero. Imp has to count the number of distinct xorangles of order $n$ to get out of the forest. Formally, for a given integer $n$ you have to find the number of such triples $(a, b, c)$, that: - $1 ≤ a ≤ b ≤ c ≤ n$; - $a\oplus b\oplus c=0$, where $x\oplus y$ denotes the bitwise xor of integers $x$ and $y$. - $(a, b, c)$ form a non-degenerate (with strictly positive area) triangle.
Consider some triple $(a, b, c)$ for which $a\oplus b\oplus c=0$ holds. Due to xor invertibility, we can see that $a\oplus b=c$. So, we only need to iterate over two of the three possible sides of the xorangle, as the third can be deduced uniquely. Time complexity: $O(n^{2})$. One could also apply some constant-factor optimizations (and put in some pragmas) to get an $O(n^{3})$ solution with a small constant accepted.
[ "brute force" ]
1,300
#pragma GCC optimize("unroll-loops")
#define _CRT_SECURE_NO_WARNINGS
#include <bits/stdc++.h>
using namespace std;

const int N = 100002, M = 350;
mt19937 gen(time(NULL));
#define forn(i, n) for (int i = 0; i < n; i++)
#define debug(...) fprintf(stderr, __VA_ARGS__), fflush(stderr)
#define all(a) (a).begin(), (a).end()
#define pii pair<int, int>
#define mp make_pair
#define endl '\n'
typedef long long ll;

template<typename T = int>
inline T read() {
    T val = 0, sign = 1;
    char ch;
    for (ch = getchar(); ch < '0' || ch > '9'; ch = getchar())
        if (ch == '-') sign = -1;
    for (; ch >= '0' && ch <= '9'; ch = getchar())
        val = val * 10 + ch - '0';
    return sign * val;
}

void solve() {
    short n = read();
    int ans = 0;
    short y, x;
    for (short i = 1; i <= n; ++i)
        for (short j = i; j <= n; ++j) {
            x = i + j;
            for (short k = j; k < x; ++k) {
                y = (i ^ j);
                if (y >= j)
                    if (y <= n)
                        if (y == k)
                            ans++;
            }
        }
    printf("%d\n", ans);
}

void precalc() {}

signed main() {
    int t = 1;
    precalc();
    while (t--) {
        clock_t z = clock();
        solve();
        debug("Total Time: %.3f\n", (double)(clock() - z) / CLOCKS_PER_SEC);
    }
}
922
C
Cave Painting
Imp is watching a documentary about cave painting. Some numbers, carved in chaotic order, immediately attracted his attention. Imp rapidly proposed a guess that they are the remainders of division of a number $n$ by all integers $i$ from $1$ to $k$. Unfortunately, there are too many integers to analyze for Imp. Imp wants you to check whether all these remainders are distinct. Formally, he wants to check, if all $n{\mathrm{~mod~}}i$, $1 ≤ i ≤ k$, are distinct, i. e. there is no such pair $(i, j)$ that: - $1 ≤ i < j ≤ k$, - $n\,\mathrm{mod}\,i=n\,\mathrm{mod}\,j$, where $x\ {\mathrm{mod}}\ y$ is the remainder of division $x$ by $y$.
Consider the way the remainders are obtained. Remainder $k - 1$ can be obtained only when $n$ is taken modulo $k$. Remainder $k - 2$ can be obtained either modulo $k - 1$ or modulo $k$. Since the remainder modulo $k$ is already fixed, the only opportunity left is $k - 1$. Proceeding this way, we come to the conclusion that if an answer exists, then $n \bmod i = i - 1$ must hold for all $i$. This condition is equivalent to $(n + 1) \bmod i = 0$, i.e. $(n + 1)$ should be divisible by all numbers between $1$ and $k$; in other words, $(n + 1)$ must be divisible by their LCM. Following the exponential growth of the LCM, we claim that when $k$ is large enough, the answer doesn't exist (more precisely, for $k \ge 43$). For small $k$ we can solve the task naively. Complexity: $O(\operatorname{min}(k,\log n))$.
[ "brute force", "number theory" ]
1,600
n, k = map(int, input().split())
if k > 70:
    print('No')
    exit(0)
s = set()
for i in range(1, k + 1):
    s.add(n % i)
print('Yes' if len(s) == k else 'No')
922
D
Robot Vacuum Cleaner
Pushok the dog has been chasing Imp for a few hours already. Fortunately, Imp knows that Pushok is afraid of a robot vacuum cleaner. While moving, the robot generates a string $t$ consisting of letters 's' and 'h', that produces a lot of noise. We define noise of string $t$ as the number of occurrences of string "sh" as a \textbf{subsequence} in it, in other words, the number of such pairs $(i, j)$, that $i < j$ and $t_{i}=\mathbf{s}$ and $t_{j}=\mathbf{h}$. The robot is off at the moment. Imp knows that it has a sequence of strings $t_{i}$ in its memory, and he can arbitrary change their order. When the robot is started, it generates the string $t$ as a concatenation of these strings in the given order. The noise of the resulting string equals the noise of this concatenation. Help Imp to find the maximum noise he can achieve by changing the order of the strings.
Denote by $\mathbf{f}(t)$ the noise function. We will sort the string set in the following way: for each pair $(a, b)$ we put $a$ earlier if $\mathbf{f}(a+b)>\mathbf{f}(b+a)$. The claim is that the resulting concatenation is optimal. Let $A$ be the number of subsequences "sh" in $a$, and $B$ the number in $b$, and let $s_{x}$ and $h_{x}$ denote the number of letters 's' and 'h' in $x$. Then $\mathbf{f}(a+b)=A+B+s_{a}\cdot h_{b}$ and $\mathbf{f}(b+a)=A+B+s_{b}\cdot h_{a}$. The terms $A$ and $B$ cancel, and the comparator turns into $s_{a} \cdot h_{b} > s_{b} \cdot h_{a}$. This is almost equivalent to $\frac{s_{a}}{h_{a}} > \frac{s_{b}}{h_{b}}$ (except for the degenerate case when a string consists of 's' only), meaning that the sort is transitive. Now suppose the claim is false and in the optimal concatenation the aforementioned condition doesn't hold for some adjacent pair of strings $(a, b)$. Then it can be easily shown that swapping them makes the answer only better. Time complexity: $O(n\log n)$.
[ "greedy", "sortings" ]
1,800
#define _CRT_SECURE_NO_WARNINGS
#include <bits/stdc++.h>
using namespace std;

const int N = 20000;
mt19937 gen(time(NULL));
#define forn(i, n) for (int i = 0; i < n; i++)
#define debug(...) fprintf(stderr, __VA_ARGS__), fflush(stderr)
#define all(a) (a).begin(), (a).end()
#define pii pair<int, int>
#define mp make_pair
#define endl '\n'
typedef long long ll;

template<typename T = int>
inline T read() {
    T val = 0, sign = 1;
    char ch;
    for (ch = getchar(); ch < '0' || ch > '9'; ch = getchar())
        if (ch == '-') sign = -1;
    for (; ch >= '0' && ch <= '9'; ch = getchar())
        val = val * 10 + ch - '0';
    return sign * val;
}

void solve() {
    int n;
    cin >> n;
    vector<string> a(n);
    for (auto &v : a) cin >> v;
    // f(s) = number of "sh" subsequences in s
    auto f = [](string s) -> ll {
        ll total = 0;
        int open = 0;
        for (char c : s)
            if (c == 's') open++;
            else total += open;
        return total;
    };
    sort(all(a), [&](string s, string t) { return f(s + t) > f(t + s); });
    string s = "";
    for (auto v : a) s += v;
    cout << f(s) << endl;
}

signed main() {
    int t = 1;
    while (t--) {
        clock_t z = clock();
        solve();
        debug("Total Time: %.3f\n", (double)(clock() - z) / CLOCKS_PER_SEC);
    }
}
922
E
Birds
Apart from plush toys, Imp is a huge fan of little yellow birds! To summon birds, Imp needs strong magic. There are $n$ trees in a row on an alley in a park, there is a nest on each of the trees. In the $i$-th nest there are $c_{i}$ birds; to summon one bird from this nest Imp needs to stay under this tree and it costs him $cost_{i}$ points of mana. However, for each bird summoned, Imp increases his mana capacity by $B$ points. Imp summons birds one by one, he can summon any number from $0$ to $c_{i}$ birds from the $i$-th nest. Initially Imp stands under the first tree and has $W$ points of mana, and his mana capacity equals $W$ as well. He can only go forward, and each time he moves from a tree to the next one, he restores $X$ points of mana (but it can't exceed his current mana capacity). Moving only forward, what is the maximum number of birds Imp can summon?
The problem can be solved with dynamic programming. Denote by $\mathrm{dp}[i][j]$ the maximum possible remaining amount of mana in the state $(i, j)$, where $i$ is the number of nests passed and $j$ is the number of birds summoned. The base is $\mathrm{dp}[0][0]=W$, as we have passed no nests, summoned no birds, and have $W$ mana at our disposal in the beginning. We initialize all other states with $-\infty$ (unreachable). The transitions are as follows: suppose we walk to the $i$-th nest and summon $k$ additional birds there, proceeding from the state $(i - 1, j - k)$ to $(i, j)$ (of course, it is only reasonable if $\mathrm{dp}[i-1][j-k] \neq -\infty$). On the move, $X$ units of mana are replenished, but the amount of mana is capped by the current capacity $W + (j - k) \cdot B$. The summoning then costs $\mathrm{cost}_{i}\cdot k$ mana. If after the replenishing and the summoning the remaining amount of mana is non-negative, we update the answer for the state $(i, j)$: $\mathrm{dp}[i][j]=\max\left(\mathrm{dp}[i][j],\ \min(\mathrm{dp}[i-1][j-k]+X,\ W+(j-k)\cdot B)-\mathrm{cost}_{i}\cdot k\right)$. The answer is the maximal $j$ among reachable states $(n, j)$ (those not equal to $-\infty$). Time complexity: $O(n\sum_{i=1}^{n}c_{i}+(\sum_{i=1}^{n}c_{i})^{2})$. Note that the constant in the square term is no more than $\frac{1}{4}$.
[ "dp" ]
2,200
#define _CRT_SECURE_NO_WARNINGS
#include <bits/stdc++.h>
using namespace std;

const int N = 1005, M = 10005;
mt19937 gen(time(NULL));
#define forn(i, n) for (int i = 0; i < n; i++)
#define debug(...) fprintf(stderr, __VA_ARGS__), fflush(stderr)
#define all(a) (a).begin(), (a).end()
#define pii pair<int, int>
#define mp make_pair
#define endl '\n'
typedef long long ll;

template<typename T = int>
inline T read() {
    T val = 0, sign = 1;
    char ch;
    for (ch = getchar(); ch < '0' || ch > '9'; ch = getchar())
        if (ch == '-') sign = -1;
    for (; ch >= '0' && ch <= '9'; ch = getchar())
        val = val * 10 + ch - '0';
    return sign * val;
}

ll dp[N][M];   // dp[i][j] = max remaining mana after i nests, j birds; -1 = unreachable
int c[N];
ll cost[N];

void relax(ll &u, ll v) { u = max(u, v); }

void solve() {
    int n = read();
    ll W = read(), B = read(), X = read();
    forn(i, n) c[i] = read();
    forn(i, n) cost[i] = read();
    fill_n(&dp[0][0], N * M, -1);
    dp[0][0] = W;
    forn(i, n)
        for (int j = 0; j < M && dp[i][j] != -1; j++)
            for (int k = 0; k <= c[i] && dp[i][j] - k * cost[i] >= 0; k++)
                relax(dp[i + 1][k + j],
                      min(W + (k + j) * B, dp[i][j] - k * cost[i] + X));
    int ans = 0;
    forn(i, M)
        if (dp[n][i] != -1) ans = max(ans, i);
    printf("%d\n", ans);
}

signed main() {
    int t = 1;
    while (t--) {
        clock_t z = clock();
        solve();
        debug("Total Time: %.3f\n", (double)(clock() - z) / CLOCKS_PER_SEC);
    }
}
922
F
Divisibility
Imp is really pleased that you helped him. But if you solve the last problem, his gladness would rise even more. Let's define $f(S)$ for some set of integers $S$ as the number of pairs $a$, $b$ in $S$, such that: - $a$ is \textbf{strictly less} than $b$; - $a$ \textbf{divides} $b$ without a remainder. You are to find such a set $S$, which is a subset of $\{1, 2, ..., n\}$ (the set that contains all positive integers not greater than $n$), that $f(S)=k$.
Let the sought pairs be the edges in a graph with $n$ vertices. Then the problem is reduced to finding a vertex subset whose induced graph contains exactly $k$ edges. Let $e(n)$ be the number of edges in the graph on $\{1, 2, ..., n\}$ and $d(n)$ be the number of divisors of $n$ (strictly less than $n$). We claim that the answer always exists if $k \le e(n)$ (otherwise it's obviously NO). Let's explain this a bit. Let's find the minimum possible $m$ such that $e(m) \ge k$ and rephrase the problem: we have to throw away some vertices from the graph on $m$ vertices to leave exactly $k$ edges. Note that the degree of vertex $x$ is equal to $d(x)+\left\lfloor\frac{m}{x}\right\rfloor-1$; hence the most interesting numbers for us are primes strictly larger than $\frac{m}{2}$, since their degree is equal to $1$. Now it's time to expose the most important fact of the day: we claim that $e(m)-k\leq C\cdot m^{\frac{1}{3}}$. At the same time the number of primes greater than $\frac{m}{2}$ is about $\frac{m}{2\log m}$. Quite intuitively, asymptotically it's almost enough to throw only them away (there are only $16$ possible counterexamples, and they all appear with $m \le 120$, which could be handled manually). This observation is sufficient to get AC. You could've chosen a parallel way and noted that vertices greater than $\frac{m}{2}$ do not share edges, therefore they can be thrown away independently (combined with the statement from the previous paragraph, a greedy approach will do). You could've even written a recursive bruteforce :D Summarizing the aforementioned, it works well. Time complexity: $O(n\log n)$. Note that the solutions might not fully match the editorial, though the ideas are still the same.
[ "constructive algorithms", "dp", "greedy", "number theory" ]
2,400
#include <iostream>
#include <map>
#include <vector>
using namespace std;

const int MAXN = 300005;
int smallest_divisor[MAXN];
int nd[MAXN];
bool ban[MAXN];

// number of divisors of x (memoized), computed from its prime factorization
int num_divisors(int x) {
    if (nd[x] == 0) {
        int i = x;
        if (i == 0) {
            return 0;
        }
        map<int, int> cnt;
        while (i > 1) {
            cnt[smallest_divisor[i]]++;
            i /= smallest_divisor[i];
        }
        int ans = 1;
        for (const auto &kvp : cnt) {
            ans *= kvp.second + 1;
        }
        nd[x] = ans;
    }
    return nd[x];
}

int main() {
    int n, k;
    cin >> n >> k;
    if (k == 0) {
        cout << "Yes" << endl << 1 << endl << 1 << endl;
        return 0;
    }
    // smallest-prime-factor sieve
    for (int i = 0; i < MAXN; i++) {
        smallest_divisor[i] = i;
    }
    for (int i = 2; i * i < MAXN; i++) {
        if (smallest_divisor[i] != i) {
            continue;
        }
        for (int j = i * i; j < MAXN; j += i) {
            if (smallest_divisor[j] == j) {
                smallest_divisor[j] = i;
            }
        }
    }
    // take the shortest prefix {1..taken} with at least k edges
    int taken;
    int cur = 0;
    for (taken = 1; taken <= n && cur < k; taken++) {
        cur += num_divisors(taken) - 1;
    }
    taken--;
    if (taken == n && k > cur) {
        cout << "No" << endl;
        return 0;
    }
    int excess = cur - k;
    if (taken > 500) {
        // remove large primes (degree-1 vertices) greedily
        for (int i = taken; excess > 0 && i >= 1; i--) {
            if (smallest_divisor[i] == i) {
                ban[i] = true;
                excess--;
            }
        }
    } else {
        // small case: search for one or two vertices covering exactly `excess` edges
        bool found = false;
        for (int i = 1; i <= taken; i++) {
            int pairs = 0;
            for (int j = 1; j <= taken; j++) {
                if (i == j) {
                    continue;
                }
                if (max(i, j) % min(i, j) == 0) {
                    pairs++;
                }
            }
            if (pairs == excess) {
                ban[i] = true;
                found = true;
                break;
            }
        }
        for (int i1 = 1; !found && i1 <= taken; i1++) {
            for (int i2 = i1 + 1; !found && i2 <= taken; i2++) {
                int pairs = i2 % i1 == 0;
                for (int j = 1; j <= taken; j++) {
                    if (i1 == j || i2 == j) {
                        continue;
                    }
                    if (max(i1, j) % min(i1, j) == 0) {
                        pairs++;
                    }
                    if (max(i2, j) % min(i2, j) == 0) {
                        pairs++;
                    }
                }
                if (pairs == excess) {
                    ban[i1] = ban[i2] = true;
                    found = true;
                }
            }
        }
    }
    cout << "Yes" << endl;
    vector<int> ans;
    for (int i = 1; i <= taken; i++) {
        if (!ban[i]) {
            ans.push_back(i);
        }
    }
    cout << ans.size() << endl;
    for (int x : ans) {
        cout << x << " ";
    }
    cout << endl;
    return 0;
}
923
A
Primal Sport
Alice and Bob begin their day with a quick game. They first choose a starting number $X_{0} ≥ 3$ and try to reach one million by the process described below. Alice goes first and then they take alternating turns. In the $i$-th turn, the player whose turn it is selects a prime number smaller than the current number, and announces the smallest multiple of this prime number that is not smaller than the current number. Formally, he or she selects a prime $p < X_{i - 1}$ and then finds the minimum $X_{i} ≥ X_{i - 1}$ such that $p$ divides $X_{i}$. Note that if the selected prime $p$ already divides $X_{i - 1}$, then the number does not change. Eve has witnessed the state of the game after two turns. Given $X_{2}$, help her determine what is the smallest possible starting number $X_{0}$. Note that the players don't necessarily play optimally. You should consider all possible game evolutions.
Let $P(N)$ be the largest prime factor of $N$. Clearly, we can obtain $N$ from any number in interval $[N - P(N) + 1, N]$ by picking $P(N)$ as the prime, and we cannot obtain $N$ from any other number. By factorizing $X_{2}$, we can find the range for $X_{1}$. By factorizing all numbers from the range of $X_{1}$, we can find intervals for $X_{0}$. The answer is the minimum of their union. The solution works in $O(N{\sqrt{N}})$, which is fast in practice. Bonus: Solve it for $Q$ queries of $X_{K}$ in $O(N\log N+Q\log K)$.
[ "math", "number theory" ]
1,700
null
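A possible Python sketch of this approach (the sieve stores the largest prime factor of every number; the function name is mine): for each candidate $X_1$ in $[X_2 - P(X_2) + 1, X_2]$, the smallest compatible $X_0$ is $\max(3,\ X_1 - P(X_1) + 1)$, and prime values of $X_1$ are skipped since they cannot result from a move.

```python
def min_x0(x2):
    # lp[i] = largest prime factor of i, filled by a simple sieve:
    # every prime i overwrites lp[j] for all its multiples j
    lp = [0] * (x2 + 1)
    for i in range(2, x2 + 1):
        if lp[i] == 0:  # i is prime
            for j in range(i, x2 + 1, i):
                lp[j] = i
    best = x2
    # X1 must lie in [x2 - lp[x2] + 1, x2]
    for x1 in range(x2 - lp[x2] + 1, x2 + 1):
        if lp[x1] == x1:
            continue  # prime X1 is unreachable: it needs p | X1 with p < X1
        best = min(best, max(3, x1 - lp[x1] + 1))
    return best
```

For example, `min_x0(14)` returns `6`: from $X_2 = 14$ the range of $X_1$ is $[8, 14]$, and $X_1 = 10$ (largest prime factor $5$) admits $X_0 = 6$.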
923
B
Producing Snow
Alice likes snow a lot! Unfortunately, this year's winter is already over, and she can't expect to have any more of it. Bob has thus bought her a gift — a large snow maker. He plans to make some amount of snow every day. On day $i$ he will make a pile of snow of volume $V_{i}$ and put it in her garden. Each day, every pile will shrink a little due to melting. More precisely, when the temperature on a given day is $T_{i}$, each pile will reduce its volume by $T_{i}$. If this would reduce the volume of a pile to or below zero, it disappears forever. All snow piles are independent of each other. Note that the pile made on day $i$ already loses part of its volume on the same day. In an extreme case, this may mean that there are no piles left at the end of a particular day. You are given the initial pile sizes and the temperature on each day. Determine the total volume of snow melted on each day.
We can directly simulate the process, but it takes $O(N^{2})$ time, which is too slow. There are multiple approaches to make this simulation faster. We present two of them.

In the first solution, instead of calculating the total volume of the snow melted directly, we first calculate two quantities: $F[i]$, the number of piles left after day $i$, and $M[i]$, the total volume of piles that disappear on day $i$. The answer for day $i$ will then be $F[i] \cdot T[i] + M[i]$. Calculate prefix sums of the temperatures. This way, when a snow pile is formed on day $i$, we can use binary search to determine on which day it will disappear completely. Denote this day by $j$ and put $j = N + 1$ if the pile survives. We can note that on every day $k$ between $i$ and $j - 1$ inclusive, this pile will lose $T[k]$ of its volume, which corresponds to increasing $F[k]$ by one. Furthermore, we add the remaining volume to $M[j]$. To calculate all $F[i]$'s fast, we can again use prefix sums: adding 1 to an interval can then be done by two point additions.

The second solution can handle queries online. For each pile, we calculate how big it would be if it had been created on the first day: $V^{\prime}[i]=V[i]+\sum_{j=1}^{i-1}T[j]$. We maintain all existing piles in a multiset. When day $i$ starts, we add $V'[i]$ into the multiset. Then we remove all piles with $V^{\prime}[k]\leq\sum_{j=1}^{i}T[j]$ (those are the piles that disappear on day $i$) and easily calculate the total volume of melted snow in them. All the piles left in the multiset contribute exactly $T[i]$ times the size of the multiset. As the multiset is sorted, and each pile is added and removed only once, the total complexity is $O(N\log N)$.
[ "binary search", "data structures" ]
1,600
null
923
C
Perfect Security
Alice has a very important message $M$ consisting of some non-negative integers that she wants to keep secret from Eve. Alice knows that the only theoretically secure cipher is one-time pad. Alice generates a random key $K$ of the length equal to the message's length. Alice computes the bitwise xor of each element of the message and the key ($A_{i}:=M_{i}\oplus K_{i}$, where $\oplus$ denotes the bitwise XOR operation) and stores this encrypted message $A$. Alice is smart. Be like Alice. For example, Alice may have wanted to store a message $M = (0, 15, 9, 18)$. She generated a key $K = (16, 7, 6, 3)$. The encrypted message is thus $A = (16, 8, 15, 17)$. Alice realised that she cannot store the key with the encrypted message. Alice sent her key $K$ to Bob and deleted her own copy. Alice is smart. Really, be like Alice. Bob realised that the encrypted message is only secure as long as the key is secret. Bob thus randomly permuted the key before storing it. Bob thinks that this way, even if Eve gets both the encrypted message and the key, she will not be able to read the message. Bob is not smart. Don't be like Bob. In the above example, Bob may have, for instance, selected a permutation $(3, 4, 1, 2)$ and stored the permuted key $P = (6, 3, 16, 7)$. One year has passed and Alice wants to decrypt her message. Only now Bob has realised that this is impossible. As he has permuted the key randomly, the message is lost forever. Did we mention that Bob isn't smart? Bob wants to salvage at least some information from the message. Since he is not so smart, he asks for your help. You know the encrypted message $A$ and the permuted key $P$. What is the lexicographically smallest message that could have resulted in the given encrypted text? More precisely, for given $A$ and $P$, find the lexicographically smallest message $O$, for which there exists a permutation $π$ such that ${\cal O}_{i}\oplus\pi(P_{i})=A_{i}$ for every $i$.
Note that the sequence $S$ is lexicographically smaller than the sequence $T$, if there is an index $i$ such that $S_{i} < T_{i}$ and for all $j < i$ the condition $S_{j} = T_{j}$ holds.
We decrypt the message greedily, one number at a time. Note that for fixed $Y$, $f(X)=X\oplus Y$ is a bijection on non-negative integers. For that reason, there is always a unique number from the key that lexicographically minimises the message. We can always pick and remove that number, and output its xor with the current number from the encrypted text. It remains to show how to do the above faster than $O(N^{2})$. We build a trie on the numbers from the key, more precisely on their binary representations, starting from the most significant bit. To find the number $K_{j}$ that minimises $K_{j}\oplus A_{i}$, one can simply search for $A_{i}$, bit by bit. That is, if the $k$-th most significant bit of $A_{i}$ is $1$, we try to follow the edge labelled $1$, and $0$ otherwise. If we always succeed, we have found $A_{i}$ in the key multiset, and hence $K_{j}\oplus A_{i}=0$, which is clearly minimal. If at some bit we do not succeed, we select the other branch (that is, if the $k$-th bit of $A_{i}$ is $1$, but there is no such number in the key set, we pick $0$ instead) and continue. This solution uses $O(N W)$ time, where $W$ is the number of bits (here it is 30). The same approach can be also implemented using a multiset, which is probably faster to write, but has an extra $O(\log N)$ multiplicative factor, which may or may not fit into the TL.
[ "data structures", "greedy", "strings", "trees" ]
1,800
null
923
D
Picking Strings
Alice has a string consisting of characters 'A', 'B' and 'C'. Bob can use the following transitions on any substring of our string in any order any number of times: - A $\to$ BC - B $\to$ AC - C $\to$ AB - AAA $\to$ empty string Note that a substring is one or more consecutive characters. For given queries, determine whether it is possible to obtain the target string from source.
First note that $B$ can always be changed to $C$ and vice versa: $B \to AC \to AAB \to AAAC \to C$. Hence we can replace all $C$'s with $B$'s. Furthermore, see that $AB \to AAC \to AAAB \to B$. The above implies the following set of rules: - $A \to BB$ - $B \to AB$ - $AB \to B$ - $AAA \to$ empty string. We can translate these rules to the following: - the number of $B$'s can be increased by any non-negative even number; - the number of $A$'s before any $B$ may change arbitrarily. The only remaining thing is to determine what should happen to the number of trailing $A$'s. There are three cases: - The number of $B$'s is the same in the source and target: then the number of trailing $A$'s can decrease by any non-negative multiple of 3, as no application of the first rule occurs, and the second and third rules cannot affect trailing $A$'s. - There are some $B$'s in the source and the number of $B$'s increases: then the number of trailing $A$'s can decrease by any non-negative number. To decrease the number to $k$, just morph the $(k + 1)$-th $A$ from the end to $BB$. To keep it the same, morph any $B$ to $AB$ and then to $BBB$ to introduce extra $B$'s as needed. - There are no $B$'s in the source, but some $B$'s in the target: then the number of trailing $A$'s has to decrease by a positive integer. It is now easy to calculate prefix sums of the $B$ and $C$ occurrences, and calculate the number of trailing $A$'s for every end position. The rest is just casework.
[ "constructive algorithms", "implementation", "strings" ]
2,500
null
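The casework above can be sketched for a single (source, target) pair; in the full problem one would precompute prefix counts of non-'A' characters and trailing-'A' lengths to answer every query in $O(1)$. Function names are illustrative, not from a reference solution:

```python
def normalize(s):
    """Treat 'C' as 'B'; return (number of B's, number of trailing A's)."""
    b = sum(1 for ch in s if ch != 'A')
    trailing = 0
    for ch in reversed(s):
        if ch != 'A':
            break
        trailing += 1
    return b, trailing

def can_transform(src, dst):
    b1, t1 = normalize(src)
    b2, t2 = normalize(dst)
    # B-count can only grow, and only by even amounts
    if b2 < b1 or (b2 - b1) % 2 != 0:
        return False
    if b1 == b2:
        # trailing A's may only drop by a multiple of 3 (rule AAA -> empty)
        return t2 <= t1 and (t1 - t2) % 3 == 0
    if b1 > 0:
        # trailing A's may drop by any non-negative amount
        return t2 <= t1
    # no B's in the source: creating the first B consumes a trailing A
    return t2 < t1
```

For instance, `can_transform("A", "BC")` is true (the first rule directly), while `can_transform("A", "B")` is false, since the parity of the B-count (with 'C' counted as 'B') is invariant under all four rules.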