contest_id
stringlengths
1
4
index
stringclasses
43 values
title
stringlengths
2
63
statement
stringlengths
51
4.24k
tutorial
stringlengths
19
20.4k
tags
listlengths
0
11
rating
int64
800
3.5k
code
stringlengths
46
29.6k
1450
C1
Errich-Tac-Toe (Easy Version)
\textbf{The only difference between the easy and hard versions is that tokens of type O do not appear in the input of the easy version.} Errichto gave Monogon the following challenge in order to intimidate him from taking his top contributor spot on Codeforces. In a Tic-Tac-Toe grid, there are $n$ rows and $n$ columns. Each cell of the grid is either empty or contains a token. There are two types of tokens: X and O. If there exist three tokens of the same type consecutive in a row or column, it is a winning configuration. Otherwise, it is a draw configuration. \begin{center} The patterns in the first row are winning configurations. The patterns in the second row are draw configurations. \end{center} In an operation, you can change an X to an O, or an O to an X. Let $k$ denote the total number of tokens in the grid. Your task is to make the grid a \textbf{draw} in at most $\lfloor \frac{k}{3}\rfloor$ (rounding down) operations. You are \textbf{not required} to minimize the number of operations.
For each cell $(i,j)$, let's associate it with the color $(i+j)\bmod 3$. Every three consecutive cells contain one of each color. So, if we choose one color and flip all X's of that color to O, no three consecutive X's remain (one of any three consecutive tokens is flipped), and no three consecutive O's appear (only cells of the chosen color hold O's, and any three consecutive cells contain just one such cell). We need only prove that there is a color associated with at most one third of the tokens. Let there be $x_i$ tokens in cells of color $i$, for $i=0,1,2$. Then we have $k=x_0+x_1+x_2$, and the minimum is at most the average: $\min\{x_0,x_1,x_2\}\le \lfloor \frac{k}{3}\rfloor$. Here's the coloring on an example. There are $3$ X's on red, $4$ on blue, and $5$ on green. Red has the fewest, so we flip those.
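As a sanity check on the pigeonhole argument, here is a minimal self-contained sketch (the helper name `cheapestColor` is mine, not part of the official solution below): it tallies the X's by color class $(i+j)\bmod 3$ and asserts that the least-loaded class holds at most $\lfloor k/3\rfloor$ tokens.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count X tokens in each color class (i + j) % 3 and return the class with
// the fewest tokens. By pigeonhole, flipping that class costs <= floor(k/3).
int cheapestColor(const vector<string>& g, int& flips) {
    int cnt[3] = {0, 0, 0}, k = 0;
    for (int i = 0; i < (int)g.size(); i++)
        for (int j = 0; j < (int)g[i].size(); j++)
            if (g[i][j] == 'X') { cnt[(i + j) % 3]++; k++; }
    int c = min_element(cnt, cnt + 3) - cnt; // least-loaded color class
    flips = cnt[c];
    assert(flips <= k / 3); // the bound from the editorial
    return c;
}
```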
[ "constructive algorithms", "math" ]
2,100
#include <bits/stdc++.h>
using namespace std;

const int N = 305;

int n;
string s[N];

void solve() {
    cin >> n;
    // cnt[color] = occurrences of X on that color cell.
    int cnt[3] = {0, 0, 0};
    for(int i = 0; i < n; i++) {
        cin >> s[i];
        for(int j = 0; j < n; j++) {
            if(s[i][j] == 'X') {
                cnt[(i + j) % 3]++;
            }
        }
    }
    // val = best color
    int val = min_element(cnt, cnt + 3) - cnt;
    for(int i = 0; i < n; i++) {
        for(int j = 0; j < n; j++) {
            // flip tokens whose color is the best val
            if(s[i][j] == 'X' && (i + j) % 3 == val) {
                s[i][j] = 'O';
            }
        }
        cout << s[i] << '\n';
    }
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(0);
    int te;
    cin >> te;
    while(te--) {
        solve();
    }
}
1450
C2
Errich-Tac-Toe (Hard Version)
\textbf{The only difference between the easy and hard versions is that tokens of type O do not appear in the input of the easy version.} Errichto gave Monogon the following challenge in order to intimidate him from taking his top contributor spot on Codeforces. In a Tic-Tac-Toe grid, there are $n$ rows and $n$ columns. Each cell of the grid is either empty or contains a token. There are two types of tokens: X and O. If there exist three tokens of the same type consecutive in a row or column, it is a winning configuration. Otherwise, it is a draw configuration. \begin{center} The patterns in the first row are winning configurations. The patterns in the second row are draw configurations. \end{center} In an operation, you can change an X to an O, or an O to an X. Let $k$ denote the total number of tokens in the grid. Your task is to make the grid a \textbf{draw} in at most $\lfloor \frac{k}{3}\rfloor$ (rounding down) operations. You are \textbf{not required} to minimize the number of operations.
If, for every three consecutive tokens in the grid, we make sure there is an X and there is an O, we are done. For each cell $(i,j)$, let's associate it with the color $(i+j)\bmod 3$. Since every three consecutive cells contain one of each color, if we make all tokens of one color X and all tokens of a different color O, it will be a solution. It remains to prove that one such choice requires at most $\lfloor \frac{k}{3}\rfloor$ operations. Each token on the grid is associated with a color ($0$, $1$, or $2$) and a type (X or O), so each token falls into one of $6$ categories, and we can make a table of how many tokens fall in each category. $\begin{array}{c|c|c|c} &0&1&2\\\hline \text{X}&x_0&x_1&x_2\\\hline \text{O}&o_0&o_1&o_2\\ \end{array}$ Firstly, $k=x_0+x_1+x_2+o_0+o_1+o_2$. Let $a_{ij}$ denote the number of operations we make if we flip all X's in cells with color $i$ and flip all O's in cells with color $j$. Then $a_{ij}=x_i+o_j$. Summing over the six ordered pairs, each $x_i$ and each $o_j$ is counted twice, so $a_{01}+a_{02}+a_{10}+a_{12}+a_{20}+a_{21}=2k$. Finally, the minimum of six numbers is at most their average: $\min\{a_{01},a_{02},a_{10},a_{12},a_{20},a_{21}\}\le \lfloor 2k/6\rfloor=\lfloor \frac{k}{3}\rfloor$. Here's the coloring in an example. We flip all green X's and blue O's.
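The six-candidate argument can be checked mechanically. The sketch below (names are my own) scans all ordered pairs $(i,j)$, $i\ne j$, and asserts that the cheapest one is within the $\lfloor k/3\rfloor$ budget.

```cpp
#include <bits/stdc++.h>
using namespace std;

// x[i] = number of X's on color i, o[j] = number of O's on color j.
// a_ij = x[i] + o[j] is the cost of flipping X's on color i and O's on
// color j. The six costs sum to 2k, so the minimum is at most floor(k/3).
pair<int, int> bestPair(const array<int, 3>& x, const array<int, 3>& o) {
    int k = 0;
    for (int i = 0; i < 3; i++) k += x[i] + o[i];
    pair<int, int> best{-1, -1};
    int bestCost = INT_MAX;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            if (i != j && x[i] + o[j] < bestCost) {
                bestCost = x[i] + o[j];
                best = {i, j};
            }
    assert(bestCost <= k / 3); // averaging bound from the editorial
    return best;
}
```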
[ "constructive algorithms", "math" ]
2,300
#include <bits/stdc++.h>
using namespace std;

const int N = 305;

int te, n;
string s[N];

void solve() {
    cin >> n;
    // Let X = 0, O = 1. Let cnt[r][b] = the number of tokens b in diagonals with label r.
    int cnt[3][2] = {{0, 0}, {0, 0}, {0, 0}};
    int k = 0;
    for(int i = 0; i < n; i++) {
        cin >> s[i];
        for(int j = 0; j < n; j++) {
            char c = s[i][j];
            int val = (i + j) % 3;
            if(c == 'X') {
                cnt[val][0]++;
                k++;
            } else if(c == 'O') {
                cnt[val][1]++;
                k++;
            }
        }
    }
    // Choose one diagonal label d to become all O's, and another diagonal label d2 to become all X's.
    for(int d = 0; d < 3; d++) {
        for(int d2 = 0; d2 < 3; d2++) {
            if(d == d2) continue;
            if(cnt[d][0] + cnt[d2][1] <= k / 3) {
                for(int i = 0; i < n; i++) {
                    for(int j = 0; j < n; j++) {
                        if((i + j) % 3 == d && s[i][j] == 'X') {
                            s[i][j] = 'O';
                        } else if((i + j) % 3 == d2 && s[i][j] == 'O') {
                            s[i][j] = 'X';
                        }
                    }
                    cout << s[i] << '\n';
                }
                return;
            }
        }
    }
    assert(false);
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(0);
    cin >> te;
    while(te--) {
        solve();
    }
}
1450
D
Rating Compression
On the competitive programming platform CodeCook, every person has a rating graph described by an array of integers $a$ of length $n$. You are now updating the infrastructure, so you've created a program to compress these graphs. The program works as follows. Given an integer parameter $k$, the program takes the minimum of each contiguous subarray of length $k$ in $a$. More formally, for an array $a$ of length $n$ and an integer $k$, define the $k$-compression array of $a$ as an array $b$ of length $n-k+1$, such that $$b_j =\min_{j\le i\le j+k-1}a_i$$ For example, the $3$-compression array of $[1, 3, 4, 5, 2]$ is $[\min\{1, 3, 4\}, \min\{3, 4, 5\}, \min\{4, 5, 2\}]=[1, 3, 2].$ A permutation of length $m$ is an array consisting of $m$ distinct integers from $1$ to $m$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array) and $[1,3,4]$ is also not a permutation ($m=3$ but there is $4$ in the array). A $k$-compression array will make CodeCook users happy if it will be a permutation. Given an array $a$, determine for all $1\leq k\leq n$ if CodeCook users will be happy after a $k$-compression of this array or not.
For $k=1$, we simply need to check that the array is a permutation. If $k=n$, we simply need to check that the array has minimum $1$. For $1<k<n$, the $k$-compression should contain exactly one occurrence of $1$. But $1$ is the minimum possible element, so it should occur in only one length-$k$ subarray. Therefore, $1$ must occur at one of the extremes: index $1$ or index $n$. Without loss of generality, suppose it occurs at index $n$. Then we require that the $k$-compression of $a[1,\ldots,n-1]$ is a permutation of the numbers $\{2,3,\ldots,n-k+1\}$, and we can solve iteratively. This gives us the following algorithm. We maintain an interval $[l,r]$; initially $l=1$ and $r=n$. Now, we iterate $i=1,\ldots,n$. For each $i$, we check that either $a_l=i$ or $a_r=i$, and increment $l$ or decrement $r$ accordingly. Then we check that $\min\limits_{j=l}^r a_j=i+1$. If $i$ is the first index at which our checks fail, then the answer is $0$ for $2\le k\le n-i+1$ and the answer is $1$ for $n-i+1< k\le n$. Checking the minimum element in each iteration can be done with a data structure such as a sparse table or segment tree. It also suffices to maintain a $count$ array of the elements, for a complexity of $O(n)$.
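For intuition, here is a brute-force $k$-compression matching the statement's definition (helper names are mine); it reproduces the worked example $[1,3,4,5,2]\to[1,3,2]$ for $k=3$ and can cross-check the fast solution on small inputs.

```cpp
#include <bits/stdc++.h>
using namespace std;

// b[j] = min of a[j .. j+k-1], exactly as defined in the statement.
vector<int> compress(const vector<int>& a, int k) {
    vector<int> b;
    for (int j = 0; j + k <= (int)a.size(); j++)
        b.push_back(*min_element(a.begin() + j, a.begin() + j + k));
    return b;
}

// A length-m array is a permutation iff sorting it yields 1, 2, ..., m.
bool isPermutation(vector<int> b) {
    sort(b.begin(), b.end());
    for (int i = 0; i < (int)b.size(); i++)
        if (b[i] != i + 1) return false;
    return true;
}
```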
[ "binary search", "data structures", "greedy", "implementation", "two pointers" ]
1,800
#include <bits/stdc++.h>
using namespace std;

const int mxN = 300001;

int arr[mxN];  // input array
int cnt[mxN];  // cnt[x] = number of occurrences of x in the array
bool ans[mxN]; // the final answer
int n;

void solve() {
    cin >> n;
    for(int i = 0; i <= n; ++i) {
        cnt[i] = 0;
        ans[i] = 0;
    }
    for(int i = 0; i < n; ++i) {
        cin >> arr[i];
        arr[i]--; // switch to 0-indexing
        cnt[arr[i]]++;
    }
    int tp = -1;
    while(cnt[tp+1] == 1) {
        tp++;
    }
    // each number from 0..n-1 has exactly one occurrence -> permutation
    ans[0] = tp == n-1;
    int l = 0;
    int r = n-1;
    // special case, min = 0
    ans[n-1] = cnt[0] > 0;
    // shrink the interval as long as our checks pass
    for(int i = n-1; i > 0; --i) {
        if(!ans[n-1]) break;
        ans[i] = true;
        int nxt = n-i-1;
        if(--cnt[nxt] == 0 && (arr[l]==nxt || arr[r]==nxt) && cnt[nxt+1]) {
            if(arr[l]==nxt) l++;
            if(arr[r]==nxt) r--;
            continue;
        }
        break;
    }
    for(int i = 0; i < n; ++i) {
        cout << ans[i];
    }
    cout << "\n";
}

int main() {
    cin.tie(0)->sync_with_stdio(0);
    int T;
    cin >> T;
    while(T--) {
        solve();
    }
    return 0;
}
1450
E
Capitalism
A society can be represented by a connected, undirected graph of $n$ vertices and $m$ edges. The vertices represent people, and an edge $(i,j)$ represents a friendship between people $i$ and $j$. In society, the $i$-th person has an income $a_i$. A person $i$ is envious of person $j$ if $a_j=a_i+1$. That is if person $j$ has exactly $1$ more unit of income than person $i$. The society is called \textbf{capitalist} if for every pair of friends one is envious of the other. For some friendships, you know which friend is envious of the other. For the remaining friendships, you do not know the direction of envy. The \textbf{income inequality} of society is defined as $\max\limits_{1 \leq i \leq n} a_i - \min\limits_{1 \leq i \leq n} a_i$. You only know the friendships and not the incomes. If it is impossible for this society to be capitalist with the given knowledge, you should report about it. Otherwise, you should find an assignment of incomes in which the society is capitalist, and the income inequality is maximized.
Firstly, each edge connects a person with even income and a person with odd income. So if the graph is not bipartite, then a solution does not exist. Consider a friendship between people $u$ and $v$ where we don't know the direction. Since $|a_u-a_v|=1$, we know that $a_u-a_v\le 1$ and $a_v-a_u\le 1$. Consider a directed friendship $(u\to v)$. Since $a_u+1=a_v$, we know that $a_v-a_u\le 1$ and $a_u-a_v\le -1$. For each friendship, let's add two directed edges between its endpoints. If it's undirected, we add one from $u\to v$ of weight $1$ and one from $v\to u$ of weight $1$. If it's directed, we add one from $u\to v$ of weight $1$ and one from $v\to u$ of weight $-1$. The way we added these edges ensures that if $u$ and $v$ are any two vertices and there is a path of distance $d$ from $u$ to $v$, then $a_v-a_u\le d$. Note that if a negative cycle exists in our graph, then the inequalities give a contradiction. Otherwise, if an answer exists, some vertex $u$ will have minimum $a_u$ and another vertex $v$ will have maximum $a_v$, and $a_v-a_u\le \mathrm{dist}(u\to v)$. Therefore, the answer cannot be greater than the diameter $\max_{u,v}\mathrm{dist}(u\to v)$, and finding a construction with this value proves optimality. Let our construction be as follows. First, choose two vertices $u$ and $v$ such that $\mathrm{dist}(u\to v)$ is maximized, and assign $a_i=\mathrm{dist}(u\to i)$. The property of shortest paths tells us that all the desired inequalities hold for $a_i$. Then all directed friendships are correct: the two inequalities force $a_v-a_u=1$. For all undirected friendships, we know $|a_v-a_u|\le 1$; since the graph is bipartite, it cannot hold that $a_u=a_v$, therefore $|a_v-a_u|=1$. So all requirements hold and the income inequality is maximized. For the implementation, we need to check that the graph is bipartite, check whether negative cycles exist, and find all-pairs shortest paths. For this, we can simply run Floyd-Warshall in $O(n^3)$ time.
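The edge encoding above is a standard system of difference constraints. Here is a minimal Floyd-Warshall sketch of that idea (the function name is mine): an edge $(u\to v)$ of weight $w$ encodes $a_v-a_u\le w$, and taking $a_i=\mathrm{dist}(src\to i)$ satisfies every constraint, assuming no negative cycle and that all vertices are reachable from $src$.

```cpp
#include <bits/stdc++.h>
using namespace std;
const int INF = 1e9;

// edges: {u, v, w} encodes the constraint a_v - a_u <= w.
// Returns a_i = dist(src -> i) after Floyd-Warshall; assumes no negative
// cycle and that every vertex is reachable from src.
vector<int> solveConstraints(int n, const vector<array<int, 3>>& edges, int src) {
    vector<vector<int>> d(n, vector<int>(n, INF));
    for (int i = 0; i < n; i++) d[i][i] = 0;
    for (auto& e : edges) d[e[0]][e[1]] = min(d[e[0]][e[1]], e[2]);
    for (int k = 0; k < n; k++)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (d[i][k] < INF && d[k][j] < INF)
                    d[i][j] = min(d[i][j], d[i][k] + d[k][j]);
    return d[src];
}
```

For a directed friendship $0\to 1$ we add weights $1$ and $-1$; for an undirected friendship between $1$ and $2$, weight $1$ in both directions, matching the encoding described above.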
[ "constructive algorithms", "dfs and similar", "graphs", "shortest paths" ]
2,700
#include <bits/stdc++.h>
using namespace std;

const int N = 205, M = 2005;

int n, m, u[M], v[M], b[M];
int dist[N][N];

int main() {
    ios::sync_with_stdio(false);
    cin.tie(0);
    cin >> n >> m;
    for(int i = 1; i <= n; i++) {
        for(int j = 1; j <= n; j++) {
            if(i != j) dist[i][j] = INT_MAX;
        }
    }
    for(int i = 0; i < m; i++) {
        cin >> u[i] >> v[i] >> b[i];
        // build directed graph
        dist[u[i]][v[i]] = 1;
        dist[v[i]][u[i]] = (b[i] == 0 ? 1 : -1);
    }
    // Floyd-Warshall, detecting negative cycles if integer underflow
    for(int k = 1; k <= n; k++) {
        for(int i = 1; i <= n; i++) {
            for(int j = 1; j <= n; j++) {
                if(dist[i][k] != INT_MAX && dist[k][j] != INT_MAX && dist[i][k] + dist[k][j] < dist[i][j]) {
                    dist[i][j] = dist[i][k] + dist[k][j];
                    if(dist[i][j] < -1e9) {
                        // integer underflow -> negative cycle
                        cout << "NO\n";
                        return 0;
                    }
                }
            }
        }
    }
    // (max dist, u)
    pair<int, int> best = {-1, 1};
    for(int i = 1; i <= n; i++) {
        if(dist[i][i] < 0) {
            // negative cycle
            cout << "NO\n";
            return 0;
        }
        for(int j = 1; j <= n; j++) {
            best = max(best, {dist[i][j], i});
        }
    }
    // check it is a valid answer. This handles the bipartite check automatically.
    int s = best.second;
    for(int i = 0; i < m; i++) {
        if(dist[s][u[i]] == dist[s][v[i]]) {
            // non-bipartite
            cout << "NO\n";
            return 0;
        }
    }
    cout << "YES\n" << best.first << '\n';
    for(int i = 1; i <= n; i++) {
        cout << dist[s][i] << ' ';
    }
    cout << '\n';
}
1450
F
The Struggling Contestant
To help those contestants who struggle a lot in contests, the headquarters of Codeforces are planning to introduce Division 5. In this new division, the tags of all problems will be announced prior to the round to help the contestants. The contest consists of $n$ problems, where the tag of the $i$-th problem is denoted by an integer $a_i$. You want to AK (solve all problems). To do that, you must solve the problems in some order. To make the contest funnier, you created extra limitations on yourself. You do not want to solve two problems consecutively with the same tag since it is boring. Also, you are afraid of big jumps in difficulties while solving them, so you want to minimize the number of times that you solve two problems consecutively that are not adjacent in the contest order. Formally, your solve order can be described by a permutation $p$ of length $n$. The \textbf{cost} of a permutation is defined as the number of indices $i$ ($1\le i<n$) where $|p_{i+1}-p_i|>1$. You have the requirement that $a_{p_i}\ne a_{p_{i+1}}$ for all $1\le i< n$. You want to know the minimum possible cost of permutation that satisfies the requirement. If no permutations meet this requirement, you should report about it.
Suppose $p$ is a permutation that satisfies the condition. Imagine we add a divider between adjacent indices that are not adjacent in $p$. If $k$ is the number of "jumps" in $p$, then we have split the array $a$ into $k+1$ consecutive segments. The permutation will scan these segments in some order, and each segment can be scanned in forward or reverse order. We can visualize the problem as "cutting" the array in $k$ places, then reordering and reversing the segments so that no two adjacent tags are equal. Obviously, for all $i$ such that $a_i=a_{i+1}$ we need to cut between $i$ and $i+1$. Now, we only need to think about the endpoints of these segments, and we need a condition for when it is possible to avoid connecting equal endpoints. For a tag $x$, define $f(x)$ as the number of times $x$ appears as an endpoint of a segment. Note that if a segment consists of a single element with tag $x$, we count $x$ twice, as both a left and a right endpoint. We claim that a valid reordering of the segments is possible if and only if for all tags $x$, it holds that $f(x) \le k+2$. Let $x$ be a tag. Let's prove that if a solution exists, then $f(x)\le k+2$. Consider a reordering of the segments as $[y_1, y_2],[y_3,y_4],\ldots,[y_{2k+1},y_{2k+2}]$, where $y_{2i-1}$ and $y_{2i}$ are the endpoints of the segment that appears $i$-th in the solution order. Since $y_{2i}$ and $y_{2i+1}$ are connected, they cannot both be $x$. It can be that $y_1=x$, and it can be that $y_{2k+2}=x$, but at most half of the remaining $2k$ endpoints can be $x$ due to these connections. So $f(x)\le k+2$. Let's prove that if $f(x)\le k+2$ for all tags $x$, then we can construct a solution. We proceed by induction on $k$. If $k=0$, there is only one segment and we are done. Now, suppose $k\ge 1$. Let $x$ be any tag with maximum $f(x)$. Select one segment with $x$ as an endpoint and another with $y \ne x$ as an endpoint. (Note that such a pair of segments always exists.)
Connect the selected $x$ and $y$ together, merging them into one segment. We have reduced the number of segments by $1$, and we have decreased the frequencies of $x$ and $y$ as endpoints by $1$. After making this connection, the condition clearly holds for $x$ and $y$. For all other tags $z$ ($z \ne x$, $z \ne y$), it holds before the operation that $f(z) \le (2k+2)-f(x)\le (2k+2)-f(z)$, and so $f(z)\le k+1$. After the operation, $k$ decreases by one and $f(z)$ is unchanged, so $f(z) \le k+2$ still holds. By induction, a solution exists. To find the solution with the minimum number of cuts, we must first cut the array between all adjacent indices with equal tags. If this set of cuts already satisfies the above condition, we are done. Otherwise, there is a tag $x$ such that $f(x) > k+2$, and for all tags $y \ne x$, $f(y)\le k+2$. If we add a cut between two consecutive tags such that one of them is $x$, it increases $f(x)$ by one and $k$ by one, so it is useless. If we cut between two consecutive tags that are both different from $x$, it does not change $f(x)$ and it increases $k$ by one; each such cut brings the condition one step closer to being satisfied. Therefore, if a solution exists and $f(x)>k+2$, we require exactly $f(x)-(k+2)$ additional cuts. Let's summarize the solution. If some tag $x$ has more than $\lceil n/2\rceil$ occurrences, a solution does not exist. Otherwise, let $k$ be the number of adjacent equal tags, and add the necessary cuts in these positions. Let $x$ be the tag that occurs most frequently as an endpoint in the resulting segments. The answer is $k+\max\{0, f(x)-(k+2)\}$.
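The closed form can be sanity-checked against exhaustive search on tiny arrays. The sketch below (names mine) implements both the brute force over all solve orders and the editorial's formula $k+\max\{0,f(x)-(k+2)\}$.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Brute force for tiny n: minimum number of jumps |p[i+1]-p[i]| > 1 over
// all orders with no equal adjacent tags, or -1 if no valid order exists.
int bruteCost(const vector<int>& a) {
    int n = a.size(), best = INT_MAX;
    vector<int> p(n);
    iota(p.begin(), p.end(), 0);
    do {
        int cost = 0;
        bool ok = true;
        for (int i = 0; i + 1 < n && ok; i++) {
            if (a[p[i]] == a[p[i + 1]]) ok = false;
            else if (abs(p[i + 1] - p[i]) > 1) cost++;
        }
        if (ok) best = min(best, cost);
    } while (next_permutation(p.begin(), p.end()));
    return best == INT_MAX ? -1 : best;
}

// Editorial formula: k forced cuts at equal adjacent tags, plus
// max(0, f(x) - (k + 2)) extra cuts for the most frequent endpoint tag.
int formulaCost(const vector<int>& a) {
    int n = a.size(), cuts = 0;
    map<int, int> f, ends;
    for (int i = 0; i < n; i++) {
        f[a[i]]++;
        if (i && a[i] == a[i - 1]) { ends[a[i]] += 2; cuts++; }
    }
    for (auto& kv : f)
        if (kv.second > (n + 1) / 2) return -1; // some tag occurs too often
    ends[a[0]]++;
    ends[a[n - 1]]++;
    int maxf = 0;
    for (auto& kv : ends) maxf = max(maxf, kv.second);
    return cuts + max(0, maxf - cuts - 2);
}
```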
[ "constructive algorithms", "greedy" ]
2,400
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n;
    cin >> n;
    vector<int> a(n);
    vector<int> f(n + 1, 0), ends(n + 1, 0);
    int cuts = 0;
    for(int i = 0; i < n; i++) {
        cin >> a[i];
        f[a[i]]++;
        if(i > 0 && a[i] == a[i - 1]) {
            ends[a[i]]++;
            ends[a[i - 1]]++;
            cuts++;
        }
    }
    if(*max_element(f.begin(), f.end()) > (n + 1) / 2) {
        cout << -1 << '\n';
        return;
    }
    ends[a[0]]++;
    ends[a[n - 1]]++;
    int maxf = *max_element(ends.begin(), ends.end());
    cout << cuts + max(0, maxf - cuts - 2) << '\n';
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(0);
    int te;
    cin >> te;
    while(te--) {
        solve();
    }
}
1450
G
Communism
\textbf{Please pay attention to the unusual memory limit in this problem.} In a parallel universe, Satan is called "Trygub". For that reason, the letters of his namesake were deleted from the alphabet in ancient times. The government has $n$ workers standing in a row and numbered with integers from $1$ to $n$ from left to right. Their job categories can be represented as a string $s$ of length $n$, where the character $s_i$ represents the job category of the $i$-th worker. A new law will be approved to increase the equality between the workers. The government decided to make everyone have the same job category by performing the following operation any number of times (possibly zero). There is a fixed \textbf{rational} parameter $k=\frac ab$ describing how easy it is to convince the public, and it will be used to determine the success of an operation. In an operation, the government first selects a job category $x$ with at least one worker at the current moment. Suppose $i_1,\ldots, i_m$ ($i_1<\ldots<i_m$) are the positions of all the workers with job category $x$. If $k\cdot (i_m-i_1+1)\le m$, the government is able to choose any job category $y$ with at least one worker at the current moment and change the job category of \textbf{all} workers with job category $x$ to job category $y$. If it is possible to make all workers have job category $x$, we say that $x$ is obtainable. Can you tell the government the set of obtainable job categories?
Let $C$ denote the set of distinct characters in $s$. Suppose there is a sequence of operations that transforms every character to $x$ in the end. For an operation where we transform all occurrences of $y$ to $z$, let's add a directed edge $(y\to z)$. The resulting structure is a directed tree rooted at $x$, because every character except $x$ is transformed exactly once. For each character $y$, denote the index of its first occurrence by $l_y$ and its last occurrence by $r_y$. For any non-empty subset $S$ of characters, define $\mathrm{range}(S)=\left[\min\limits_{y\in S} l_y, \max\limits_{y\in S} r_y\right]$, that is, the smallest interval capturing all occurrences of characters belonging to $S$. Also define $\mathrm{cnt}(S)$ as the number of occurrences of characters in $S$. Given a directed tree structure, we can decide its validity regardless of the order of operations. For a character $y$, let $S_y$ denote the set of characters in the subtree of $y$. The condition is that for every character $y$, $\mathrm{cnt}(S_y)\ge k \left|\mathrm{range}(S_y)\right|.\tag{1}$ Now, we can devise a bitmask dp. For any subset $M$, let $\mathrm{dp}(M)$ be true if we can organize the characters of $M$ into valid subtrees attached to characters outside of $M$. Then the final answer will be the set of characters $x$ such that $\mathrm{dp}(C\setminus \{x\})$ is true. There are $2$ cases for the dp transitions. Case 1: choose a character $y\in M$ to be an ancestor of all of $M$. That is, if condition $(1)$ holds for the set $M=S_y$ and $\mathrm{dp}(M\setminus \{y\})$ is true, then $\mathrm{dp}(M)$ is true. Case 2: split $M$ into at least two disjoint subtrees. That is, if there is a non-empty, proper subset $S\subset M$ such that $\mathrm{dp}(S)$ and $\mathrm{dp}(M\setminus S)$ are true, then $\mathrm{dp}(M)$ is also true. These transitions give rise to an $O(n+3^{|C|})$ solution, since we iterate over all subsets of $M$ in case 2.
We still need another observation to optimize this further. The key observation is that we may assume, for the transitions of case 2, that $\mathrm{range}(S)\cap \mathrm{range}(M\setminus S)=\emptyset$. That is, the sets of characters in sibling subtrees occur on disjoint intervals. To justify this assumption, we show that any valid tree can be transformed into a valid tree that satisfies it. Suppose there are two siblings $y$ and $z$ whose subtrees, with character sets $A\cup\{y\}$ and $B\cup\{z\}$ respectively, have overlapping ranges. Then if we change $z$'s parent to $y$, we only need to check that condition $(1)$ still holds for $y$. In fact, $\mathrm{cnt}(A\cup \{y\}\cup B\cup \{z\})= \mathrm{cnt}(A\cup \{y\})+\mathrm{cnt}(B\cup\{z\})$ $\ge k \cdot \left|\mathrm{range}(A\cup \{y\})\right| + k\cdot \left|\mathrm{range}(B\cup \{z\})\right|\ge k\cdot \left|\mathrm{range}(A\cup \{y\}\cup B\cup \{z\})\right|,$ where the last step uses that the two ranges overlap, so the length of their union is at most the sum of their lengths. Of course, after the transformation in the figure, our assumption may still not hold. But observe that if we repeat this process, it stops after finitely many steps: the sum of depths of all nodes strictly increases with each transformation, and the depth of a node is bounded by $|C|$. Now that the claim is proven, it is not hard to improve our solution to $O(n+|C|2^{|C|})$. If we sort the characters in $C$ by the index of their first occurrence, it is sufficient to try only $|C|-1$ splits according to this order for case 2. That is, if the characters in $M$ are ordered $c_1,\ldots, c_m$, then we should take $S=\{c_1,\ldots,c_i\}$ for some $i$.
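Condition $(1)$ compares a rational $k=\frac ab$ against a ratio of integers; a small sketch (the function name is mine) of the integer cross-multiplication used to test it, computing $\mathrm{cnt}(S)$ and $\mathrm{range}(S)$ for a character set given as a 26-bit mask:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Test condition (1), cnt(S) >= (a/b) * |range(S)|, for the set S of
// characters given by a 26-bit mask, using only integer arithmetic:
// b * cnt(S) >= a * |range(S)| avoids rationals and floating point.
bool condition1(const string& s, int mask, long long a, long long b) {
    int l = (int)s.size(), r = -1;
    long long cnt = 0;
    for (int i = 0; i < (int)s.size(); i++)
        if (mask >> (s[i] - 'a') & 1) {
            l = min(l, i);
            r = max(r, i);
            cnt++;
        }
    if (cnt == 0) return true; // empty set: nothing to check
    return b * cnt >= a * (long long)(r - l + 1);
}
```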
[ "bitmasks", "dp", "trees" ]
3,500
#include <bits/stdc++.h>
using namespace std;

int main() {
    ios_base::sync_with_stdio(0), cin.tie(0);
    int n, ka, kb;
    string s;
    cin >> n >> ka >> kb >> s;
    // sort characters of string by first occurrence
    int p = 0;
    vector<int> id(26, -1);
    vector<char> mp(26);
    for (int i = 0; i < n; ++i) {
        int c = s[i] - 'a';
        if (id[c] == -1) {
            mp[p] = s[i];
            id[c] = p++;
        }
    }
    // dp arrays for first occurrence, last occurrence, number of occurrences for each bitmask
    vector<int> l(1 << p, n), r(1 << p, -1), count(1 << p);
    for (int i = 0; i < n; ++i) {
        int c = s[i] - 'a';
        if (l[1 << id[c]] == n) {
            l[1 << id[c]] = i;
        }
        r[1 << id[c]] = i;
        ++count[1 << id[c]];
    }
    // dp[mask] = true if it's possible for everything in mask to transform to a common character
    vector<bool> dp(1 << p);
    dp[0] = true;
    for (int m = 0; m < (1 << p); ++m) {
        for (int i = 0, m2 = 0; i < p; ++i) {
            if (m >> i & 1) {
                m2 |= 1 << i;
                // try ways to split the mask along the sorted order
                if(dp[m2] && dp[m ^ m2]) {
                    dp[m] = true;
                }
                l[m] = min(l[m ^ 1 << i], l[1 << i]);
                r[m] = max(r[m ^ 1 << i], r[1 << i]);
                count[m] = count[m ^ 1 << i] + count[1 << i];
            }
        }
        // try ways to root at some vertex in the mask
        for (int i = 0; i < p; ++i) {
            if((m >> i & 1) && dp[m ^ 1 << i] && kb * count[m] >= (r[m] - l[m] + 1) * ka) {
                dp[m] = true;
            }
        }
    }
    set<char> ans;
    for (int i = 0; i < p; ++i) {
        if (dp[((1 << p) - 1) ^ (1 << i)]) {
            ans.insert(mp[i]);
        }
    }
    cout << ans.size();
    for (auto i : ans) cout << " " << i;
    cout << "\n";
}
1450
H1
Multithreading (Easy Version)
\textbf{The only difference between the two versions of the problem is that there are no updates in the easy version.} There are $n$ spools of thread placed on the rim of a circular table. The spools come in two types of thread: the first thread is black and the second thread is white. For any two spools of the same color, you can attach them with a thread of that color in a straight line segment. Define a matching as a way to attach spools together so that each spool is attached to exactly one other spool. Coloring is an assignment of colors (white and black) to the spools. A coloring is called \textbf{valid} if it has at least one matching. That is if the number of black spools and the number of white spools are both even. Given a matching, we can find the number of times some white thread intersects some black thread. We compute the number of pairs of differently colored threads that intersect instead of the number of intersection points, so one intersection point may be counted multiple times if different pairs of threads intersect at the same point. If $c$ is a valid coloring, let $f(c)$ denote the minimum number of such intersections out of all possible matchings. \begin{center} The circle above is described by the coloring bwbbbwww. After matching the spools as shown, there is one intersection between differently colored threads. It can be proven that it is the minimum possible, so $f(\text{bwbbbwww}) = 1$. \end{center} You are given a string $s$ representing an \textbf{unfinished} coloring, with black, white, and uncolored spools. A coloring $c$ is called $s$-reachable if you can achieve it by assigning colors to the uncolored spools of $s$ without changing the others. A coloring $c$ is chosen uniformly at random among all valid, $s$-reachable colorings. Compute the expected value of $f(c)$. You should find it by modulo $998244353$. 
We can show that the answer can be written in the form $\frac{p}{q}$, where $p$ and $q$ are relatively prime integers and $q\not\equiv 0\pmod{998244353}$. Output the value of $p\cdot q^{-1}$ modulo $998244353$.
Let's first solve for a given coloring $c$ the value of $f(c)$. Let $B_{odd}$ and $B_{even}$ denote the number of black spools on odd positions and even positions, respectively. Define $W_{odd}$ and $W_{even}$ similarly. Claim: $f(c) = \frac12 |B_{odd} - B_{even}|$. Proof: We show that any coloring with $B_{odd} - B_{even} = 2k$ (the case $B_{even} - B_{odd} = 2k$ is symmetric) has $f(c)=k$. First, $f(c)\le k$: we construct a matching with $k$ intersections. Suppose that $B_{odd}\geq 1$ and $B_{even}\geq 1$. Then there are two adjacent spools of the same color (otherwise the colors would alternate around the circle and all black spools would lie on one parity). Connect those two spools and continue to solve for the remaining spools; note that removing an adjacent pair preserves $B_{odd}-B_{even}$. Eventually, there will be $B_{odd} = 2k$ and $W_{even} = 2k$ in an alternating pattern $\text{bwbw}\ldots\text{bwbw}$, which is easily matched with exactly $k$ intersections. Next, $f(c)\ge k$: any matching has at least $k$ intersections. First, we may assume there are no same-color intersections; if two same-colored threads crossed, we could uncross them without increasing the number of different-color intersections. Now, since $B_{odd}-B_{even}=2k$, there are at least $k$ threads connecting two black spools that both lie on odd positions (at most $B_{even}$ odd-position black spools can be matched to even-position ones, leaving at least $2k$ matched among themselves). Each such thread splits the other spools into two arcs, each with an odd number of spools, so some thread must cross it; since same-color crossings have been eliminated, it is crossed by a white thread. Qed. Now, suppose we have $F$ unfilled positions total, $F_{odd}$ on odd positions and $F_{even}$ on even positions. Let $x = \frac{n}{2} - W_{odd} - B_{even}$. Claim: Let $i\equiv x\pmod 2$.
The number of valid, $s$-reachable colorings $c$ with $f(c)=\frac12 |x-i|$ is equal to ${F\choose i}$. Proof: Suppose we have a subset of $i$ unfilled positions. For the elements of the subset, we color even positions black and odd positions white. For elements outside the subset, we color even positions white and odd positions black. Let $a$ be the number of even positions in our subset. Now, $B_{even}+a$ spools will be black on even positions, and $B_{odd}+F_{odd}-i+a$ will be black on odd positions. Then $f(c)=\frac12 \left|(B_{even}+a)-(B_{odd}+F_{odd}-i+a)\right|$ $=\frac12 |B_{even}-B_{odd}-F_{odd}+i|=\frac12 |x-i|,$ using that $\frac n2 = B_{odd}+W_{odd}+F_{odd}$ counts the odd positions. It is clear that our mapping is a bijection. Qed. Given our claims, we can write the expected value as: $\frac{1}{2^F}\sum_{0\le i\le F\atop i\equiv x\pmod 2} |x-i| {{F}\choose{i}}.$
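Both claims can be verified exhaustively on tiny patterns. In the sketch below (names mine, assuming at least one valid coloring exists), the brute force averages $f(c)=\frac12|B_{odd}-B_{even}|$ over all valid fills, while the formula evaluates $\sum \frac12|x-i|\binom Fi$ normalized by $\sum_{i\equiv x}\binom Fi$; for $F\ge 1$ that denominator is $2^{F-1}$, which combined with the factor $\frac12$ gives the $\frac1{2^F}$ normalization above. Plain doubles are used since the instances are tiny.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Brute force for tiny patterns: enumerate all fills of '?', keep valid
// colorings (even number of each color), and average f(c) using the
// editorial's closed form f(c) = |B_odd - B_even| / 2 (0-indexed parity).
double expectedBrute(const string& s) {
    int n = s.size();
    vector<int> q;
    for (int i = 0; i < n; i++) if (s[i] == '?') q.push_back(i);
    double total = 0;
    long long valid = 0;
    for (int mask = 0; mask < (1 << q.size()); mask++) {
        string c = s;
        for (int t = 0; t < (int)q.size(); t++)
            c[q[t]] = (mask >> t & 1) ? 'b' : 'w';
        int black = 0, d = 0; // d = B_odd - B_even
        for (int i = 0; i < n; i++)
            if (c[i] == 'b') { black++; d += (i & 1) ? 1 : -1; }
        if (black % 2 || (n - black) % 2) continue; // not a valid coloring
        valid++;
        total += abs(d) / 2.0;
    }
    return total / valid;
}

// Editorial's sum with explicit normalization: sum of (1/2)|x - i| C(F, i)
// over i == x (mod 2), divided by the number of valid colorings, with
// x = n/2 - B_odd - W_even in 0-indexed parity (as in the solution code).
double expectedFormula(const string& s) {
    int n = s.size(), F = 0, Bodd = 0, Weven = 0;
    for (int i = 0; i < n; i++) {
        if (s[i] == '?') F++;
        else if (s[i] == 'b' && (i & 1)) Bodd++;
        else if (s[i] == 'w' && !(i & 1)) Weven++;
    }
    int x = n / 2 - Bodd - Weven;
    vector<double> C(F + 1);
    C[0] = 1;
    for (int i = 1; i <= F; i++) C[i] = C[i - 1] * (F - i + 1) / i;
    double num = 0, den = 0;
    for (int i = 0; i <= F; i++)
        if (((i - x) % 2 + 2) % 2 == 0) {
            num += 0.5 * abs(x - i) * C[i];
            den += C[i];
        }
    return num / den;
}
```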
[ "combinatorics", "fft", "math" ]
2,900
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

const ll MOD = 998244353;
const int mxN = 200001;

ll fact[mxN], ifact[mxN];

ll nCr(int n, int r) {
    if(n == r) return 1;
    if(r > n || r < 0) return 0;
    return ((fact[n]*ifact[n-r])%MOD*ifact[r])%MOD;
}

ll binpow(ll a, ll b) {
    ll res = 1;
    while(b) {
        if(b&1) res = (res*a)%MOD;
        a = (a*a)%MOD;
        b >>= 1;
    }
    return res;
}

ll modInv(ll a) {
    return binpow(a, MOD-2);
}

string s;
int n, q;
int b[2], w[2], f[2];
ll tpow[mxN];
ll tinv[mxN];
ll ans[4];
ll lft, func;

void print() {
    ll res = (ans[1]*lft + ans[2]*func)%MOD;
    res = (ans[0]*func + ans[3]*lft - res + MOD)%MOD;
    res = (res * tinv[lft])%MOD;
    cout << res << "\n";
}

void reset() {
    for(int i = 0; i < 2; ++i) {
        b[i] = w[i] = f[i] = 0;
    }
    ans[0] = ans[1] = ans[2] = ans[3] = 0;
}

void fix() {
    int parity = abs(func)&1;
    for(int i = parity; i <= min(func-2, lft); i += 2) {
        ans[0] = (ans[0] + nCr(lft, i))%MOD;
    }
    for(int i = parity; i <= min(func-2, lft); i += 2) {
        ans[1] = (ans[1] + nCr(lft-1, i-1))%MOD;
    }
    for(int i = func+2; i <= lft; i += 2) {
        ans[3] = (ans[3] + nCr(lft-1, i-1))%MOD;
    }
    for(int i = func+2; i <= lft; i += 2) {
        ans[2] = (ans[2] + nCr(lft, i))%MOD;
    }
}

int main() {
    fact[0] = 1;
    tpow[0] = 1;
    for(int i = 1; i < mxN; ++i) {
        fact[i] = (fact[i-1]*i)%MOD;
        tpow[i] = (tpow[i-1]*2)%MOD;
    }
    ifact[mxN-1] = modInv(fact[mxN-1]);
    tinv[mxN-1] = modInv(tpow[mxN-1]);
    for(int i = mxN-2; i >= 0; --i) {
        ifact[i] = (ifact[i+1]*(i+1))%MOD;
        tinv[i] = (tinv[i+1]*2)%MOD;
    }
    cin >> n >> q >> s;
    for(int i = 0; i < n; ++i) {
        if(s[i]=='w') {
            w[i&1]++;
        } else if(s[i]=='b') {
            b[i&1]++;
        } else {
            f[i&1]++;
        }
    }
    lft = f[0] + f[1];
    func = n/2 - b[1] - w[0];
    fix();
    print();
    for(int i = 0; i < q; ++i) {
        int a;
        char c;
        cin >> a >> c;
        a--;
        reset();
        s[a] = c;
        for(int j = 0; j < n; ++j) {
            if(s[j]=='w') {
                w[j&1]++;
            } else if(s[j]=='b') {
                b[j&1]++;
            } else {
                f[j&1]++;
            }
        }
        lft = f[0] + f[1];
        func = n/2 - b[1] - w[0];
        fix();
        print();
    }
    return 0;
}
1450
H2
Multithreading (Hard Version)
\textbf{The only difference between the two versions of the problem is that there are no updates in the easy version.} There are $n$ spools of thread placed on the rim of a circular table. The spools come in two types of thread: the first thread is black and the second thread is white. For any two spools of the same color, you can attach them with a thread of that color in a straight line segment. Define a matching as a way to attach spools together so that each spool is attached to exactly one other spool. Coloring is an assignment of colors (white and black) to the spools. A coloring is called \textbf{valid} if it has at least one matching. That is, if the number of black spools and the number of white spools are both even. Given a matching, we can find the number of times some white thread intersects some black thread. We compute the number of pairs of differently colored threads that intersect instead of the number of intersection points, so one intersection point may be counted multiple times if different pairs of threads intersect at the same point. If $c$ is a valid coloring, let $f(c)$ denote the minimum number of such intersections out of all possible matchings. \begin{center} The circle above is described by the coloring bwbbbwww. After matching the spools as shown, there is one intersection between differently colored threads. It can be proven that it is the minimum possible, so $f(\text{bwbbbwww}) = 1$. \end{center} You are given a string $s$ representing an \textbf{unfinished} coloring, with black, white, and uncolored spools. A coloring $c$ is called $s$-reachable if you can achieve it by assigning colors to the uncolored spools of $s$ without changing the others. A coloring $c$ is chosen uniformly at random among all valid, $s$-reachable colorings. Compute the expected value of $f(c)$. You should find it modulo $998244353$. There will be $m$ updates to change one character of $s$. 
After each update, you should again compute the expected value of $f(c)$. We can show that each answer can be written in the form $\frac{p}{q}$ where $p$ and $q$ are relatively prime integers and $q\not\equiv 0\pmod{998244353}$. The answer modulo $998244353$ is equal to $(p\cdot q^{-1})$ modulo $998244353$.
Continued from Easy Version tutorial. Recall the answer is $\frac{1}{2^F}\sum_{0\le i\le F\atop i\equiv x\pmod 2} |x-i| {{F}\choose{i}}.$ Now we have to maintain this sum over updates. Let's ignore the $2^{F}$, and rewrite the sum as $\sum_{0\le i\le x-2\atop i\equiv x\pmod 2} x{{F}\choose{i}} - \sum_{0\le i\le x-2\atop i\equiv x\pmod 2} i{{F}\choose{i}} + \sum_{x+2\le i\le F\atop i\equiv x\pmod 2} i{{F}\choose{i}} - \sum_{x+2\le i\le F\atop i\equiv x\pmod 2} x{{F}\choose{i}}.$ We will transform sums of the form $i{{F}\choose{i}}$ to sums of the form ${F\choose i}$. We rewrite them with the following identity. $i{{F}\choose{i}} = F{{F-1}\choose{i-1}}$ To deal with the $i\equiv x\pmod 2$ condition, we use Pascal's rule to rewrite each ${F}\choose{i}$ as ${{F-1}\choose{i-1}} + {{F-1}\choose{i}}$, and each sum becomes a prefix sum or suffix sum of some binomial coefficients with the same upper index. The indices change by $O(1)$ between updates, and we should handle these cases. We can handle changes to the upper index by noting the following equation, again due to Pascal's rule. $\sum_{i=0}^{k} {{F+1}\choose{i}} = 2\sum_{i=0}^{k} {{F}\choose{i}} - {{F}\choose{k}}.$ It is easy to handle updates to the lower index by adding and subtracting some binomial coefficients when necessary. Be careful about cases where $F = 1$ since the transformed prefix sum won't consider all possibilities.
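Both identities used above are easy to verify numerically. Here is a quick Python check with arbitrarily chosen $F$ and $k$:

```python
from math import comb

F, k = 9, 4   # arbitrary small values for the check

# Absorption identity used to remove the factor i:
# i * C(F, i) = F * C(F-1, i-1)
for i in range(1, F + 1):
    assert i * comb(F, i) == F * comb(F - 1, i - 1)

# Updating a prefix sum when the upper index grows by one
# (a consequence of Pascal's rule):
lhs = sum(comb(F + 1, i) for i in range(k + 1))
rhs = 2 * sum(comb(F, i) for i in range(k + 1)) - comb(F, k)
assert lhs == rhs
```

The second check is the update rule the solution applies whenever $F$ changes between queries; the symmetric rule for decreasing $F$ follows by running it in reverse.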
[ "combinatorics", "implementation", "math" ]
3,300
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

const ll MOD = 998244353;
const int mxN = 200001;

ll fact[mxN], ifact[mxN];

ll nCr(int n, int r) {
    if(r > n || r < 0) return 0;
    return ((fact[n] * ifact[n - r]) % MOD * ifact[r]) % MOD;
}

ll binpow(ll a, ll b) {
    ll res = 1;
    while(b) {
        if(b & 1) res = (res * a) % MOD;
        a = (a * a) % MOD;
        b >>= 1;
    }
    return res;
}

ll modInv(ll a) { return binpow(a, MOD - 2); }

string s;
int n, q;
int b[2], w[2], f[2];
ll tpow[mxN];
ll tinv[mxN];
ll ans[4];
ll lft, func;

void print() {
    ll res = (ans[1] * lft + ans[2] * func) % MOD;
    res = (ans[0] * func + ans[3] * lft - res + MOD) % MOD;
    res = (res * tinv[lft]) % MOD;
    cout << res << "\n";
}

void fix() {
    ans[0] = ans[1] = ans[2] = ans[3] = 0;
    int parity = abs(func) & 1;
    for(int i = parity; i <= min(func - 2, lft); i += 2) {
        ans[0] = (ans[0] + nCr(lft, i)) % MOD;
    }
    for(int i = parity; i <= min(func - 2, lft); i += 2) {
        ans[1] = (ans[1] + nCr(lft - 1, i - 1)) % MOD;
    }
    for(int i = func + 2; i <= lft; i += 2) {
        ans[3] = (ans[3] + nCr(lft - 1, i - 1)) % MOD;
    }
    for(int i = func + 2; i <= lft; i += 2) {
        ans[2] = (ans[2] + nCr(lft, i)) % MOD;
    }
}

void inc() { // increase the numerator
    ans[0] = (2 * ans[0] - nCr(lft - 1, func - 2) + MOD) % MOD;
    ans[1] = (2 * ans[1] - nCr(lft - 2, func - 3) + MOD) % MOD;
    ans[2] = tpow[lft - 1] - ans[2];
    ans[2] = 2 * ans[2] - nCr(lft - 1, func);
    ans[2] = ((tpow[lft] - ans[2]) % MOD + MOD) % MOD;
    ans[3] = tpow[lft - 2] - ans[3];
    ans[3] = 2 * ans[3] - nCr(lft - 2, func - 1);
    ans[3] = ((tpow[lft - 1] - ans[3]) % MOD + MOD) % MOD;
    lft++;
}

void binc() { // increase denominator
    ans[0] = (ans[0] + nCr(lft - 1, func - 1)) % MOD;
    ans[1] = (ans[1] + nCr(lft - 2, func - 2)) % MOD;
    ans[2] = (ans[2] - nCr(lft - 1, func + 1) + MOD) % MOD;
    ans[3] = (ans[3] - nCr(lft - 2, func) + MOD) % MOD;
    func++;
}

void dec() { // decrease the numerator
    ans[0] = ((ans[0] + nCr(lft - 2, func - 2)) * tinv[1]) % MOD;
    ans[1] = ((ans[1] + nCr(lft - 3, func - 3)) * tinv[1]) % MOD;
    ans[2] = tpow[lft - 1] - ans[2];
    ans[2] = ((ans[2] + nCr(lft - 2, func)) * tinv[1]) % MOD;
    ans[2] = (tpow[lft - 2] - ans[2] + MOD) % MOD;
    ans[3] = tpow[lft - 2] - ans[3];
    ans[3] = ((ans[3] + nCr(lft - 3, func - 1)) * tinv[1]) % MOD;
    ans[3] = (tpow[lft - 3] - ans[3] + MOD) % MOD;
    lft--;
}

void bdec() { // decrease denominator
    ans[0] = (ans[0] - nCr(lft - 1, func - 2) + MOD) % MOD;
    ans[1] = (ans[1] - nCr(lft - 2, func - 3) + MOD) % MOD;
    ans[2] = (ans[2] + nCr(lft - 1, func)) % MOD;
    ans[3] = (ans[3] + nCr(lft - 2, func - 1)) % MOD;
    func--;
}

int main() {
    fact[0] = 1;
    tpow[0] = 1;
    for(int i = 1; i < mxN; ++i) {
        fact[i] = (fact[i - 1] * i) % MOD;
        tpow[i] = (tpow[i - 1] * 2) % MOD;
    }
    ifact[mxN - 1] = modInv(fact[mxN - 1]);
    tinv[mxN - 1] = modInv(tpow[mxN - 1]);
    for(int i = mxN - 2; i >= 0; --i) {
        ifact[i] = (ifact[i + 1] * (i + 1)) % MOD;
        tinv[i] = (tinv[i + 1] * 2) % MOD;
    }
    cin >> n >> q >> s;
    for(int i = 0; i < n; ++i) {
        if(s[i] == 'w') {
            w[i & 1]++;
        } else if(s[i] == 'b') {
            b[i & 1]++;
        } else {
            f[i & 1]++;
        }
    }
    lft = f[0] + f[1];
    func = n / 2 - b[1] - w[0];
    fix();
    print();
    for(int i = 0; i < q; ++i) {
        int a;
        char c;
        cin >> a >> c;
        a--;
        // uncolor
        if(s[a] == 'w') {
            inc();
            if((a & 1) ^ 1) binc();
        } else if(s[a] == 'b') {
            inc();
            if(a & 1) binc();
        }
        // recolor
        if(c == 'w') {
            dec();
            if((a & 1) ^ 1) bdec();
        } else if(c == 'b') {
            dec();
            if(a & 1) bdec();
        }
        s[a] = c;
        if(lft <= 2) fix();
        print();
    }
    return 0;
}
1451
A
Subtract or Divide
Ridbit starts with an integer $n$. In one move, he can perform one of the following operations: - divide $n$ by one of its \textbf{proper} divisors, or - subtract $1$ from $n$ if $n$ is greater than $1$. A proper divisor is a divisor of a number, excluding itself. For example, $1$, $2$, $4$, $5$, and $10$ are proper divisors of $20$, but $20$ itself is not. What is the minimum number of moves Ridbit is required to make to reduce $n$ to $1$?
Key Idea: For $n > 3$, the answer is $2$ when $n$ is even and $3$ when $n$ is odd. Cases when $n \leq 3$ can be handled separately. Solution: Case 1: $n \leq 3$ For $n = 1, 2, 3$, it can be shown that the minimum number of operations required is $0$, $1$, and $2$ respectively. Case 2: $n > 3$ and $n$ is even If $n$ is even and greater than $2$, then $\frac{n}{2}$ is a proper divisor. So we can divide $n$ by $\frac{n}{2}$ to make it $2$ and then subtract $1$. This requires $2$ operations. Case 3: $n > 3$ and $n$ is odd In this case, $n$ can be made even by subtracting $1$. From case 2, it can be seen that it will take $2$ more operations. Thus a total of $3$ operations are required. Time complexity: $O(1)$ per case
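The closed form can be cross-checked against a brute-force BFS over the actual moves; the Python sketch below (function names are ours) does this for small $n$:

```python
from collections import deque

def moves(n):
    """Closed form from the editorial: min(2 + (n odd), n - 1)."""
    return min(2 + (n & 1), n - 1)

def brute(n):
    """BFS over the actual operations: subtract 1, or divide by a
    proper divisor (which jumps to any divisor of n other than 1)."""
    dist = {n: 0}
    q = deque([n])
    while q:
        v = q.popleft()
        if v == 1:
            return dist[v]
        nxts = {v // d for d in range(1, v) if v % d == 0}
        nxts.add(v - 1)
        for u in nxts:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)

for n in range(1, 45):
    assert moves(n) == brute(n)
```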
[ "greedy", "math" ]
800
import java.util.*;

public class Main {
    public static void main(String args[]) {
        Scanner sc = new Scanner(System.in);
        int t = sc.nextInt();
        for(int i = 0; i < t; i++) {
            int n = sc.nextInt();
            System.out.println(Math.min(2 + (n & 1), n - 1));
        }
    }
}
1451
B
Non-Substring Subsequence
Hr0d1y has $q$ queries on a binary string $s$ of length $n$. A binary string is a string containing only characters '0' and '1'. A query is described by a pair of integers $l_i$, $r_i$ $(1 \leq l_i \lt r_i \leq n)$. For each query, he has to determine whether there exists a good subsequence in $s$ that is equal to the substring $s[l_i\ldots r_i]$. - A substring $s[i\ldots j]$ of a string $s$ is the string formed by characters $s_i s_{i+1} \ldots s_j$. - String $a$ is said to be a subsequence of string $b$ if $a$ can be obtained from $b$ by deleting some characters without changing the order of the remaining characters. - A subsequence is said to be \textbf{good} if it is not contiguous and has length $\ge 2$. For example, if $s$ is "1100110", then the subsequences $s_1s_2s_4$ ("{\textbf{11}0\textbf{0}110}") and $s_1s_5s_7$ ("{\textbf{1}100\textbf{1}1\textbf{0}}") are good, while $s_1s_2s_3$ ("{\textbf{110}0110}") is not good. Can you help Hr0d1y answer each query?
Key Idea: In each query, the answer is YES iff the first character of the given substring is not the first occurrence of that character or the last character of the given substring is not the last occurrence of that character in the string. Solution: The condition stated above is both necessary and sufficient. Proof that it is necessary: Assume that a non-contiguous subsequence exists when the condition is false. If the first character of the substring is the first occurrence of its kind, then the subsequence cannot start before it. Similarly, if the last character of the substring is the last occurrence of its kind, then the subsequence cannot end after it. In such a case, the only subsequence that is of the same length as the given substring and equal to it, is the substring itself. However, this subsequence is contiguous - which is a contradiction. Thus, it is a necessary condition. Proof that it is sufficient: If the first character of the substring $s[l_i...r_i]$ occurs at some index $j$ $(j < l_i)$, then the subsequence $s_js_{l_i+1}...s_{r_i}$ is good. If the last character of the substring $s[l_i...r_i]$ occurs at some index $j$ $(j > r_i)$, then the subsequence $s_{l_i}...s_{r_i-1}s_j$ is good. Thus it is sufficient. Time complexity: $O(nq)$ or $O(n+q)$ for each case depending on implementation.
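The criterion can be checked against an exhaustive search over subsequences. In this Python sketch (function names are ours), `good` implements the editorial condition and `brute` tries all index subsequences of small binary strings:

```python
from itertools import combinations, product

def good(s, l, r):
    """Editorial criterion, 1-indexed inclusive query (l, r)."""
    l -= 1
    r -= 1
    return s.find(s[l]) < l or s.rfind(s[r]) > r

def brute(s, l, r):
    """Search all index subsequences for a non-contiguous match."""
    t = s[l - 1:r]
    k = len(t)
    for idx in combinations(range(len(s)), k):
        if ''.join(s[i] for i in idx) == t and \
           any(idx[j + 1] != idx[j] + 1 for j in range(k - 1)):
            return True
    return False

# exhaustive check over all binary strings of length 6
for bits in product('01', repeat=6):
    s = ''.join(bits)
    for l in range(1, len(s)):
        for r in range(l + 1, len(s) + 1):
            assert good(s, l, r) == brute(s, l, r)
```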
[ "dp", "greedy", "implementation", "strings" ]
900
import java.io.*;
import java.util.*;

public class Main {
    public static void main(String args[]) throws IOException {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        int t = Integer.parseInt(br.readLine());
        for(int cases = 0; cases < t; cases++) {
            StringTokenizer st = new StringTokenizer(br.readLine());
            int n = Integer.parseInt(st.nextToken());
            int q = Integer.parseInt(st.nextToken());
            String s = br.readLine();
            int fi[] = new int[2];
            int la[] = new int[2];
            for(int j = 0; j < 2; j++) {
                fi[j] = 2000000000;
                la[j] = -1;
            }
            for(int i = 0; i < s.length(); i++) {
                int cur = (s.charAt(i) - '0');
                fi[cur] = Math.min(fi[cur], i);
                la[cur] = i;
            }
            for(int i = 0; i < q; i++) {
                st = new StringTokenizer(br.readLine());
                int l = Integer.parseInt(st.nextToken()) - 1;
                int r = Integer.parseInt(st.nextToken()) - 1;
                int curl = (s.charAt(l) - '0');
                int curr = (s.charAt(r) - '0');
                if(fi[curl] < l || la[curr] > r)
                    System.out.println("YES");
                else
                    System.out.println("NO");
            }
        }
    }
}
1451
C
String Equality
Ashish has two strings $a$ and $b$, each of length $n$, and an integer $k$. The strings only contain lowercase English letters. He wants to convert string $a$ into string $b$ by performing some (possibly zero) operations on $a$. In one move, he can either - choose an index $i$ ($1 \leq i\leq n-1$) and swap $a_i$ and $a_{i+1}$, or - choose an index $i$ ($1 \leq i \leq n-k+1$) and if $a_i, a_{i+1}, \ldots, a_{i+k-1}$ are \textbf{all equal} to some character $c$ ($c \neq$ 'z'), replace each one with the next character $(c+1)$, that is, 'a' is replaced by 'b', 'b' is replaced by 'c' and so on. Note that he can perform any number of operations, and the operations can only be performed on string $a$. Help Ashish determine if it is possible to convert string $a$ into $b$ after performing some (possibly zero) operations on it.
Key Idea: For the answer to be YES, the frequencies of each character of the alphabet must match after performing some sequence of operations. Let $freq_{i, a}$ and $freq_{i, b}$ be the frequencies of the $i$-th character of the alphabet in strings $a$ and $b$ respectively. For each $i$ starting from "a", we keep exactly $freq_{i, b}$ of the occurrences and try to convert the rest into the next character. If at any step, this is not possible, the answer is NO. Solution: Consider operations of the first type. It can be shown that after some finite sequence of swaps, we can reorder the string in any way we like. This is helpful because we do not have to worry about the characters being adjacent when we perform an operation of the second type (we can always reorder the string to allow it). In other words, only the frequency of the characters matters. To convert string $a$ into string $b$, we first make the frequencies of each character of the alphabet equal, then reorder the string using operations of the first type. The former can be done as described above. If for any $i$, there are an insufficient number of occurrences ($freq_{i, a} \lt freq_{i, b}$) or the remaining occurrences cannot all be converted into the next character, i.e. $(freq_{i, a} - freq_{i, b})$ is not a multiple of $k$, the answer is NO. Otherwise, the answer is YES. Time complexity: $O(n)$
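The frequency argument can be stress-tested in Python. Below, `can_convert` (our name) implements the editorial check, and `brute` does a BFS over letter-frequency states (swaps make the order irrelevant), restricted to a 4-letter alphabet for speed:

```python
import random
from collections import Counter, deque

def can_convert(a, b, k):
    """Editorial check: process letters from 'a' upward, promoting surplus."""
    ca, cb = Counter(a), Counter(b)
    carry = 0
    for ch in map(chr, range(ord('a'), ord('z') + 1)):
        avail = ca[ch] + carry        # copies of ch, including promoted ones
        if avail < cb[ch] or (avail - cb[ch]) % k:
            return False
        carry = avail - cb[ch]        # surplus moves on to the next letter
    return True

def brute(a, b, k, alpha=4):
    """BFS over letter-frequency states reachable by promoting k copies."""
    start = tuple(a.count(chr(ord('a') + i)) for i in range(alpha))
    goal = tuple(b.count(chr(ord('a') + i)) for i in range(alpha))
    seen, q = {start}, deque([start])
    while q:
        st = q.popleft()
        if st == goal:
            return True
        for i in range(alpha - 1):
            if st[i] >= k:
                nxt = list(st)
                nxt[i] -= k
                nxt[i + 1] += k
                nxt = tuple(nxt)
                if nxt not in seen:
                    seen.add(nxt)
                    q.append(nxt)
    return False

random.seed(7)
for _ in range(200):
    n, k = random.randint(1, 5), random.randint(1, 3)
    a = ''.join(random.choice('abcd') for _ in range(n))
    b = ''.join(random.choice('abcd') for _ in range(n))
    assert can_convert(a, b, k) == brute(a, b, k)
```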
[ "dp", "greedy", "hashing", "implementation", "strings" ]
1,400
import java.util.*;

public class Main {
    public static void main(String args[]) {
        Scanner sc = new Scanner(System.in);
        int t = sc.nextInt();
        for(int cases = 0; cases < t; cases++) {
            int n = sc.nextInt();
            int k = sc.nextInt();
            sc.nextLine();
            String a = sc.nextLine();
            String b = sc.nextLine();
            int have[] = new int[27];
            int need[] = new int[27];
            for(int i = 0; i < n; i++) {
                have[a.charAt(i) - 'a']++;
                need[b.charAt(i) - 'a']++;
            }
            boolean bad = false;
            for(int j = 25; j >= 0; j--) {
                have[j] += have[j + 1];
                need[j] += need[j + 1];
                if(have[j] > need[j] || (need[j] - have[j]) % k != 0)
                    bad = true;
            }
            if(bad) System.out.println("No");
            else System.out.println("Yes");
        }
    }
}
1451
D
Circle Game
Utkarsh is forced to play yet another one of Ashish's games. The game progresses turn by turn and as usual, Ashish moves \textbf{first}. Consider the 2D plane. There is a token which is initially at \textbf{$(0,0)$}. In one move a player must increase either the $x$ coordinate or the $y$ coordinate of the token by \textbf{exactly} $k$. In doing so, the player must ensure that the token stays within a (Euclidean) distance $d$ from $(0,0)$. In other words, if after a move the coordinates of the token are $(p,q)$, then $p^2 + q^2 \leq d^2$ must hold. The game ends when a player is unable to make a move. It can be shown that the game will end in a finite number of moves. If both players play optimally, determine who will win.
Key Idea: Let $z$ be the maximum integer such that the point $(kz, kz)$ is within the circle. If the point $(kz, k(z+1))$ is also within the circle, player 1 wins. Otherwise player 2 wins. Solution: Regardless of what move player 1 makes, player 2 can force the token to be at some point on the line $x = y$ at the end of his turn (if player 1 moves up, player 2 can move right and vice versa). Case 1: $(kz, k(z+1))$ lies outside the circle Player 2 can guarantee his victory in this fashion as player 1 will not have any moves left after reaching the point $(kz, kz)$. Player 2 wins. Case 2: $(kz, k(z+1))$ lies within the circle After player 1 makes his first move, player 2 finds himself in the same situation as player 1 did in the previous case. That is, player 1 can ensure that they reach either $(kz, k(z+1))$ or $(k(z+1), kz)$ in the same way. Points $(k(z+1), k(z+1))$ and $(kz, k(z+2))$ do not lie within the circle as by definition $d^2 \lt k^2(z+1)^2 + k^2(z+1)^2 \lt k^2z^2 + k^2(z+2)^2$. Thus player 2 will not have any moves left and player 1 wins.
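The criterion can be validated against a plain minimax search. In the Python sketch below (function names are ours), `winner` implements the editorial test and `brute` solves the game directly for small $d$ and $k$:

```python
from functools import lru_cache

def winner(d, k):
    """Editorial criterion for the first player (Ashish) winning."""
    z = 0
    while 2 * (k * (z + 1)) ** 2 <= d * d:
        z += 1                       # largest z with (kz, kz) inside the circle
    inside = (k * z) ** 2 + (k * (z + 1)) ** 2 <= d * d
    return "Ashish" if inside else "Utkarsh"

def brute(d, k):
    """Plain minimax over positions (a*k, b*k)."""
    @lru_cache(maxsize=None)
    def win(a, b):   # True if the player to move from (a*k, b*k) wins
        return any((k * p) ** 2 + (k * q) ** 2 <= d * d and not win(p, q)
                   for p, q in ((a + 1, b), (a, b + 1)))
    return "Ashish" if win(0, 0) else "Utkarsh"

for d in range(1, 40):
    for k in range(1, 6):
        assert winner(d, k) == brute(d, k)
```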
[ "games", "geometry", "math" ]
1,700
import java.util.*;

public class Main {
    public static boolean is_within(long x, long y, long dsq, long k) {
        return dsq - k * k * x * x - k * k * y * y >= 0;
    }

    public static void main(String args[]) {
        Scanner sc = new Scanner(System.in);
        int t = sc.nextInt();
        for(int cases = 0; cases < t; cases++) {
            int d = sc.nextInt();
            long dsq = ((long) d) * d;
            int k = sc.nextInt();
            int cur = 0;
            while(is_within(cur + 1, cur + 1, dsq, k))
                cur++;
            if(is_within(cur + 1, cur, dsq, k))
                System.out.println("Ashish");
            else
                System.out.println("Utkarsh");
        }
    }
}
1451
E1
Bitwise Queries (Easy Version)
\textbf{The only difference between the easy and hard versions is the constraints on the number of queries.} \textbf{This is an interactive problem.} Ridbit has a hidden array $a$ of $n$ integers which he wants Ashish to guess. Note that $n$ is a \textbf{power of two}. Ashish is allowed to ask three different types of queries. They are of the form - AND $i$ $j$: ask for the bitwise AND of elements $a_i$ and $a_j$ $(1 \leq i, j \le n$, $i \neq j)$ - OR $i$ $j$: ask for the bitwise OR of elements $a_i$ and $a_j$ $(1 \leq i, j \le n$, $i \neq j)$ - XOR $i$ $j$: ask for the bitwise XOR of elements $a_i$ and $a_j$ $(1 \leq i, j \le n$, $i \neq j)$ Can you help Ashish guess the elements of the array? \textbf{In this version, each element takes a value in the range $[0, n-1]$ (inclusive) and Ashish can ask no more than $n+2$ queries.}
Key Idea: $a + b = (a \oplus b) + 2 \cdot (a \& b)$ Pick any distinct $i, j, k$ and find $a_{i} + a_{j} = x$, $a_{i} + a_{k} = y$ and $a_{j} + a_{k} = z$ by querying their XOR and AND values ($6$ queries). This is a system of three linear equations in three variables and thus has a unique solution. Solving it gives the values of $a_{i}$, $a_{j}$, and $a_{k}$. You can then get the remaining $n - 3$ values using $n - 3$ more queries. This uses a total of $n + 3$ queries. Realize that $a_{j} \oplus a_{k} = (a_{i} \oplus a_{j}) \oplus (a_{i} \oplus a_{k})$ to reduce the number of initial queries from $6$ to $5$, for a total of $n + 2$ queries. Solution: Let's take a look at the properties of the $3$ operations we have, specifically what information we can recover from them. AND - lossy in information about bits that are on, can't be used to recover the values. OR - lossy in information about bits that are off, can't be used to recover the values. XOR - lossless, $a \oplus (a \oplus b)$ gives us $b$ again. So clearly xor operations are going to be the core of recovering the values. If we can find one of the values, we can find the remaining $n - 1$ values using xor queries. But how do we get that first value? Let's try to think about this for an operation for which we know how to obtain the individual values - addition. If we had $a + b = x$, $a + c = y$ and $b + c = z$, we could just solve the three linear equations to obtain the answer. So what's the difference between xor and addition anyway? If the $i$-th bit is off in both the numbers, xor and addition both give $0$. If the $i$-th bit is on in exactly one of the numbers, xor and addition will both give $2^{i}$. But what about when the $i$-th bit is on in both the numbers? Then addition contributes $2 \cdot 2^{i}$ to the answer while xor contributes $0$. Is there some way to recover this value? Well, if the $i$-th bit is on in both the numbers, the AND will contribute $2^{i}$ to the answer. 
So we can realize that $a + b$ can be rewritten as $(a \oplus b) + 2 \cdot (a \& b)$. So now if we just choose positions $1$, $2$ and $3$, we can obtain $x = a_{1} + a_{2}$, $y = a_{1} + a_{3}$ and $z = a_{2} + a_{3}$ using $3$ XOR and $3$ AND operations. Furthermore, we can realize that $a_{2} \oplus a_{3}$ is just $(a_{1} \oplus a_{2}) \oplus (a_{1} \oplus a_{3})$, so we can save one more operation. Now solving the above simultaneous equations, we get $a_{1} = \frac{x + y - z}{2}$. Now for $2 \leq i \leq n$, $a_{i}$ is just $a_{1} \oplus (a_{1} \oplus a_{i})$. We already calculated $a_{1} \oplus a_{2}$ and $a_{1} \oplus a_{3}$ for the previous step, and can calculate the remaining xor values in $n - 3$ steps. So in total we use exactly $5 + (n - 3) = n + 2$ queries, which is enough to get AC.
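The whole $n + 2$-query protocol can be simulated against a hidden array; the Python sketch below (function names are ours) counts the queries it issues:

```python
import random

def solve(hidden):
    """Simulate the n+2-query protocol against a hidden array."""
    n = len(hidden)
    used = 0
    def query(op, i, j):
        nonlocal used
        used += 1
        return hidden[i] ^ hidden[j] if op == 'XOR' else hidden[i] & hidden[j]
    xorv = [0] * n                     # xorv[i] = a_0 ^ a_i
    for i in range(1, n):
        xorv[i] = query('XOR', 0, i)
    # three AND queries give the three pairwise sums via a+b = (a^b) + 2(a&b)
    x = xorv[1] + 2 * query('AND', 0, 1)               # a_0 + a_1
    y = xorv[2] + 2 * query('AND', 0, 2)               # a_0 + a_2
    z = (xorv[1] ^ xorv[2]) + 2 * query('AND', 1, 2)   # a_1 + a_2
    a0 = (x + y - z) // 2
    return [a0 ^ v for v in xorv], used

random.seed(0)
n = 8                                  # n is a power of two
hidden = [random.randrange(n) for _ in range(n)]
guess, used = solve(hidden)
assert guess == hidden and used == n + 2
```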
[ "bitmasks", "constructive algorithms", "interactive", "math" ]
2,000
import java.util.Scanner;

public class Main {
    public static int queries(String s, Scanner sc) {
        System.out.println(s);
        System.out.flush();
        int x = sc.nextInt();
        if(x == -1) System.exit(0);
        return x;
    }

    public static void main(String args[]) {
        Scanner sc = new Scanner(System.in);
        int n = sc.nextInt();
        int xorvals[] = new int[n + 1];
        int ans[] = new int[n + 1];
        for(int i = 2; i <= n; i++)
            xorvals[i] = queries("XOR 1 " + i, sc);
        int xor12 = xorvals[2], xor13 = xorvals[3], xor23 = xorvals[2] ^ xorvals[3];
        int and12, and13, and23;
        and12 = queries("AND 1 2", sc);
        and13 = queries("AND 1 3", sc);
        and23 = queries("AND 2 3", sc);
        int x = xor12 + 2 * and12;
        int y = xor13 + 2 * and13;
        int z = xor23 + 2 * and23;
        assert((x + y - z) % 2 == 0);
        ans[1] = (x + y - z) / 2;
        for(int i = 2; i <= n; i++)
            ans[i] = (xorvals[i] ^ ans[1]);
        System.out.print("! ");
        for(int i = 1; i <= n; i++)
            System.out.print(ans[i] + " ");
        System.out.println();
        System.out.flush();
    }
}
1451
E2
Bitwise Queries (Hard Version)
\textbf{The only difference between the easy and hard versions is the constraints on the number of queries.} \textbf{This is an interactive problem.} Ridbit has a hidden array $a$ of $n$ integers which he wants Ashish to guess. Note that $n$ is a \textbf{power of two}. Ashish is allowed to ask three different types of queries. They are of the form - AND $i$ $j$: ask for the bitwise AND of elements $a_i$ and $a_j$ $(1 \leq i, j \le n$, $i \neq j)$ - OR $i$ $j$: ask for the bitwise OR of elements $a_i$ and $a_j$ $(1 \leq i, j \le n$, $i \neq j)$ - XOR $i$ $j$: ask for the bitwise XOR of elements $a_i$ and $a_j$ $(1 \leq i, j \le n$, $i \neq j)$ Can you help Ashish guess the elements of the array? \textbf{In this version, each element takes a value in the range $[0, n-1]$ (inclusive) and Ashish can ask no more than $n+1$ queries.}
Key Idea: Query the $n - 1$ xor values of the form $(1, i)$, for $2 \leq i \leq n$ initially. Since the numbers lie in the range $[0, n - 1]$, one of two cases can arise: 1. There exist indices $j, k$ such that $a_{j} = a_{k}$. This can be found as $a_{1} \oplus a_{j} = a_{1} \oplus a_{k}$. So we can just use the query $a_{j} \& a_{k}$ to get the value of $a_{j}$. Now we can find the remaining values using the xor values previously queried, in $(n - 1) + 1 = n$ queries. 2. All integers in the range $[0, n - 1]$ are present. There exist (several) pairs $j, k$ such that $a_{j} \oplus a_{k} = n - 1$. This implies that the two numbers have no bits in common, and as such $a_{j} \& a_{k} = 0$, allowing us to save one query compared to E1. Thus we need $(n - 1) + 3 - 1 = n + 1$ queries. Solution: Now we have to eliminate $1$ more query, but we are yet to use the fact that all values of $a$ are in the range $[0, n - 1]$. It may be tempting to try to remove a xor query, but it is unlikely this will work, as we would not be able to differentiate between two arrays which are equal in $n - 1$ places and differ in the position we didn't check. As such, it doesn't seem probable that we can recover the exact values if we don't use $n - 1$ xor operations. Since we'll need them anyway, let's go ahead and find $a_{1} \oplus a_{i}$ for all $2 \leq i \leq n$. Then clearly we have to eliminate $1$ of the $3$ AND queries. Since the numbers lie in the range $[0, n - 1]$, one of two cases must arise: At least one value is repeated. Clearly if $a_{1} \oplus a_{j} = a_{1} \oplus a_{k}$, then $a_{j} = a_{k}$. So we can use the already queried xor values to check if there exist two such values. It's easy to see that $x \& x$ is $x$ itself, so we can get the value of $a_{j}$ using the query $a_{j} \& a_{k}$. Now we can just use the xor values to find the remaining values. This needs $(n - 1) + 1 = n$ queries. 
All integers in the range $[0, n - 1]$ are present. We're trying to find a pair of values $(a_{j}, a_{k})$ for which we can find $a_{j} \& a_{k}$ without needing to query it. However, we don't have any equal pair like in the last case. The next easiest value that comes to mind would be $0$. Can we easily select a pair with AND value zero based on just the info we currently have? Well, if the $i$-th bit of $a_{j} \oplus a_{k}$ is on, then it must undoubtedly have been on in either $a_{j}$ or $a_{k}$, not both. But we can't make any such statement about a bit being off, since the bit could be off in both (satisfying the condition), or on in both (giving a nonzero AND). So can we choose a pair $(a_{j}, a_{k})$ such that all bits of $a_{j} \oplus a_{k}$ are on? Yes we can: this value will be $n - 1$, since $n$ is a power of $2$ and there will be $\frac{n}{2}$ pairs which xor to it ($(0, n - 1)$, $(1, n - 2)$, $(2, n - 3)$, etc). So we can just find any one pair such that $a_{j} \oplus a_{k} = n - 1$. Now we know $a_{j} \& a_{k} = 0$ and can reduce the number of required AND queries by $1$. So we will need $(n - 1) + 3 - 1 = n + 1$ queries. It is easy to see that these two cases cover all possible situations, and in both cases we can find the answer in at most $(n + 1)$ queries, which is enough to get AC.
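The two-case strategy can likewise be simulated end to end. The Python sketch below (function names are ours) checks that both the duplicate case and the permutation case recover the array within $n + 1$ queries:

```python
import random

def solve(hidden):
    """Simulate the (n+1)-query strategy against a hidden array
    with values in [0, n-1], n a power of two."""
    n = len(hidden)
    used = 0
    def query(op, i, j):
        nonlocal used
        used += 1
        return hidden[i] ^ hidden[j] if op == 'XOR' else hidden[i] & hidden[j]
    xorv = [0] * n                     # xorv[i] = a_0 ^ a_i
    for i in range(1, n):
        xorv[i] = query('XOR', 0, i)
    seen, dup = {}, None
    for i, v in enumerate(xorv):
        if v in seen:
            dup = (seen[v], i)
            break
        seen[v] = i
    if dup is not None:
        # Case 1: a_j == a_k, so a single AND query reveals the value.
        j, k = dup
        a0 = query('AND', j, k) ^ xorv[j]
    else:
        # Case 2: all values distinct, i.e. a permutation of 0..n-1.
        # Some pair has xor n-1, hence AND 0 -- one query saved.
        j = 0
        k = xorv.index(xorv[0] ^ (n - 1))
        m = next(i for i in range(n) if i not in (j, k))
        s_jk = n - 1                                        # a_j + a_k
        s_jm = (xorv[j] ^ xorv[m]) + 2 * query('AND', j, m) # a_j + a_m
        s_km = (xorv[k] ^ xorv[m]) + 2 * query('AND', k, m) # a_k + a_m
        a0 = ((s_jk + s_jm - s_km) // 2) ^ xorv[j]
    return [a0 ^ v for v in xorv], used

random.seed(1)
n = 8
for hidden in ([random.randrange(n) for _ in range(n)],   # likely duplicates
               random.sample(range(n), n)):               # a permutation
    guess, used = solve(hidden)
    assert guess == hidden and used <= n + 1
```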
[ "bitmasks", "constructive algorithms", "interactive", "math" ]
2,300
import java.util.*;

public class Main {
    public static int queries(String s, int i, int j, Scanner sc) {
        System.out.println(s + " " + i + " " + j);
        System.out.flush();
        int x = sc.nextInt();
        if(x == -1) System.exit(0);
        return x;
    }

    public static void main(String args[]) {
        Scanner sc = new Scanner(System.in);
        int n = sc.nextInt();
        ArrayList<Integer> pos[] = new ArrayList[n + 1];
        for(int i = 0; i <= n; i++)
            pos[i] = new ArrayList<Integer>();
        int xorvals[] = new int[n + 1];
        int ans[] = new int[n + 1];
        xorvals[1] = 0;
        pos[0].add(1);
        for(int i = 2; i <= n; i++) {
            xorvals[i] = queries("XOR", 1, i, sc);
            pos[xorvals[i]].add(i);
        }
        int a = 1, b = -1, c = -1;
        int same = -1;
        for(int i = 0; i < n; i++)
            if(pos[i].size() > 1) {
                b = pos[i].get(0);
                c = pos[i].get(1);
                same = i;
            }
        if(same == -1) {
            // If a_b ^ a_c == n - 1, then a_b & a_c = 0
            for(int i = 2; i <= 3; i++)
                for(int j = i + 1; j <= n; j++)
                    if((xorvals[i] ^ xorvals[j]) == n - 1) {
                        b = i;
                        c = j;
                    }
            assert(b != -1 && c != -1);
            int xorab = xorvals[a] ^ xorvals[b], xorac = xorvals[a] ^ xorvals[c], xorbc = xorvals[b] ^ xorvals[c];
            int andab = queries("AND", a, b, sc);
            int andac = queries("AND", a, c, sc);
            int andbc = 0;
            int x = xorab + 2 * andab;
            int y = xorac + 2 * andac;
            int z = xorbc + 2 * andbc;
            assert((x + y - z) % 2 == 0);
            ans[a] = (x + y - z) / 2;
        } else {
            // if a_1 ^ a_b == a_1 ^ a_c, then a_b = a_c = (a_b & a_c)
            ans[b] = queries("AND", b, c, sc);
            ans[1] = xorvals[b] ^ ans[b];
        }
        for(int i = 2; i <= n; i++)
            ans[i] = (xorvals[i] ^ ans[1]);
        System.out.print("! ");
        for(int i = 1; i <= n; i++)
            System.out.print(ans[i] + " ");
        System.out.println();
        System.out.flush();
    }
}
1451
F
Nullify The Matrix
Jeel and Ashish play a game on an $n \times m$ matrix. The rows are numbered $1$ to $n$ from top to bottom and the columns are numbered $1$ to $m$ from left to right. They play turn by turn. Ashish goes \textbf{first}. Initially, each cell of the matrix contains a non-negative integer. Each turn, a player must perform \textbf{all} of the following actions in order. - Choose a starting cell $(r_1, c_1)$ with \textbf{non-zero} value. - Choose a finishing cell $(r_2, c_2)$ such that $r_1 \leq r_2$ and $c_1 \leq c_2$. - Decrease the value of the starting cell by some positive non-zero integer. - Pick any of the shortest paths between the two cells and either increase, decrease or leave the values of cells on this path unchanged. Note that: - a shortest path is one that passes through the least number of cells; - all cells on this path \textbf{excluding} the starting cell, but the finishing cell may be modified; - the resulting value of each cell must be a non-negative integer; - the cells are modified independently and not necessarily by the same value. If the starting and ending cells are the same, then as per the rules, the value of the cell is decreased. No other operations are performed. The game ends when all the values become zero. The player who is unable to make a move loses. It can be shown that the game will end in a finite number of moves if both players play optimally. Given the initial matrix, if both players play optimally, can you predict who will win?
Let's consider diagonals $d$ of the form $r+c$ - the diagonals where the sum of row index $(r)$ and column index $(c)$ is constant. Then the xor of a diagonal $d$ will be $xor(d) = a(r_1,c_1)\oplus a(r_2,c_2)\oplus...a(r_n,c_n)$, such that $r_1 + c_1 = d$, $r_2 + c_2 = d$, .... $r_n + c_n = d$. $Solution:$ If $\forall d,\hspace{0.1cm}xor(d) = 0$ at the start of the game, then Jeel wins. Else, Ashish wins. $Proof:$ Let's define two states $S$ and $S'$: $S$ : $\forall d,\hspace{0.1cm}xor(d) = 0$ $S'$ : $\exists d,\hspace{0.1cm}xor(d) \ne 0$ $Lemma\hspace{0.1cm}1:$ Any move on $S$ converts the matrix to $S'$. $Proof:$ If we are in state $S$, $xor(r_1 + c_1) = 0$. Since we must decrease the value of $(r_1, c_1)$, $xor(r_1 + c_1)$ will become non-zero. Hence, any move on $S$ converts the matrix to $S'$. $Lemma\hspace{0.1cm}2:$ There always exists a move on $S'$ to convert it to $S$. $Proof:$ If we are given an $S'$ state, we can convert it into $S$ as follows: Note that the diagonals between $(r_1 + c_1)$ and $(r_2 + c_2)$ can be arbitrarily changed to any value, while $(r_1, c_1)$ can only be decreased in value. Let $r_1 + c_1$ be the smallest non-zero diagonal, i.e. there is no diagonal $d < r_1 + c_1$ with $xor(d) \ne 0$. Also, in order to ensure that we can make $xor(r_1 + c_1) = 0$ by decreasing, we select a cell $(r_1, c_1)$ whose largest set bit is equal to the largest set bit of $xor(r_1 + c_1)$. In this way, we ensure that the diagonal $r_1 + c_1$ can be made $0$. Now, we need to fix $(r_2, c_2)$. Let's fix $(r_2, c_2)$ such that there is no diagonal $d > r_2 + c_2$ with $xor(d) \ne 0$. Since $(r_2, c_2)$ can be increased as well as decreased, we can easily make $xor(r_2 + c_2) = 0$. The diagonals in between can be arbitrarily changed, so making them $0$ is trivial. At the end of this move, all diagonal xors have become $0$. Hence, given a state $S'$, we can always convert it to $S$. 
Due to $Lemma\hspace{0.1cm}1$ and $Lemma\hspace{0.1cm}2$, a player whose turn starts in state $S'$ can always move the matrix into $S$, and the opponent's reply necessarily returns it to $S'$. So that player faces $S'$ on every turn while forcing the opponent to always start on state $S$. Since the final all-zero state belongs to $S$, it is the opponent who eventually has no move and loses. Complexity: $O(n \cdot m)$
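As a sanity check, the winner test implied by the two lemmas (Ashish wins iff some diagonal $r+c$ has a non-zero xor) can be sketched in Python; `winner` is an illustrative helper name, not part of the model solution below.

```python
def winner(mat):
    """Return the winner under optimal play: "Ashish" iff some diagonal
    d = i + j has a non-zero xor of cell values, otherwise "Jeel"."""
    n, m = len(mat), len(mat[0])
    xors = [0] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            xors[i + j] ^= mat[i][j]
    return "Ashish" if any(xors) else "Jeel"
```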
[ "constructive algorithms", "games" ]
2,700
import java.util.*;

public class NullifyTheMatrix {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int t = sc.nextInt();
        for (int x = 0; x < t; ++x) {
            int n = sc.nextInt();
            int m = sc.nextInt();
            int mat[][] = new int[n][m];
            for (int i = 0; i < n; ++i) {
                for (int j = 0; j < m; ++j)
                    mat[i][j] = sc.nextInt();
            }
            // xors[d] = xor of all values on diagonal d = i + j
            int xors[] = new int[n + m - 1];
            for (int i = 0; i < n; ++i) {
                for (int j = 0; j < m; ++j)
                    xors[i + j] ^= mat[i][j];
            }
            int flag = 0;
            for (int xor : xors)
                flag |= xor != 0 ? 1 : 0;
            if (flag == 1)
                System.out.println("Ashish");
            else
                System.out.println("Jeel");
        }
    }
}
1452
A
Robot Program
There is an infinite 2-dimensional grid. The robot stands in cell $(0, 0)$ and wants to reach cell $(x, y)$. Here is a list of possible commands the robot can execute: - move north from cell $(i, j)$ to $(i, j + 1)$; - move east from cell $(i, j)$ to $(i + 1, j)$; - move south from cell $(i, j)$ to $(i, j - 1)$; - move west from cell $(i, j)$ to $(i - 1, j)$; - stay in cell $(i, j)$. The robot wants to reach cell $(x, y)$ in as few commands as possible. However, he can't execute the same command two or more times in a row. What is the minimum number of commands required to reach $(x, y)$ from $(0, 0)$?
Obviously, you can always obtain the optimal answer without using west or south moves. So the shortest path consists of $x$ east moves and $y$ north moves. Let's estimate a lower bound on the answer. Take a look at these constructions: "E?E?E?E?E" and "N?N?N?N?N" (let the question mark be any command different from the used one). That's the tightest you can pack east or north moves. So the answer is at least $2 \cdot \max(x, y) - 1$. For $x \neq y$ you can place the moves exactly as in the construction and fill the remaining question marks with stay-in-place commands. The $x = y$ case works differently, though: you can do it only in $x + y$ moves, by alternating east and north. Overall complexity: $O(1)$ per testcase.
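The closed form above fits in a few lines; `min_commands` is an illustrative name for the sketch.

```python
def min_commands(x, y):
    # 2 * max(x, y) - 1 commands in general; when x == y the robot must
    # alternate east/north, which takes x + y commands (one more).
    ans = 2 * max(x, y) - 1
    if x == y:
        ans += 1
    return ans
```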
[ "math" ]
800
#include<bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    for (int i = 0; i < t; i++) {
        int x, y;
        cin >> x >> y;
        int ans = max(x, y) * 2 - 1;
        if (x == y)
            ans++;
        cout << ans << endl;
    }
}
1452
B
Toy Blocks
You are asked to watch your nephew who likes to play with toy blocks in a strange way. He has $n$ boxes and the $i$-th box has $a_i$ blocks. His game consists of two steps: - he chooses an arbitrary box $i$; - he tries to move \textbf{all} blocks from the $i$-th box to other boxes. If he can make the same number of blocks in each of $n - 1$ other boxes then he will be happy, otherwise, he will be sad. Note that your nephew can only move the blocks from the chosen box to the other boxes; he cannot move blocks from the other boxes. You don't want to make your nephew sad, so you decided to put several extra blocks into some boxes in such a way that no matter which box $i$ he chooses he won't be sad. What is the minimum number of extra blocks you need to put?
Since the nephew empties the box $i$ he chooses and wants to make all other $n - 1$ boxes equal, the blocks from box $i$ plus the remaining boxes must split evenly, so the number of blocks in each of the other boxes must end up at least $\left\lceil \frac{sum}{n - 1} \right\rceil$ (ceiling function), where $sum$ is the sum of the whole array $a$. On the other hand, since the nephew chooses $i$ (not you), he can choose a box that does not hold the maximum value $max$; because he only empties box $i$, the final number in each other box must also be at least $max$. In total, the resulting number of blocks in each of the $n - 1$ other boxes must be at least $k = \max(\left\lceil \frac{sum}{n - 1} \right\rceil, max)$, and we need to add at least $(n - 1) \cdot k - sum$ blocks to the initial array. This lower bound is always achievable: repeatedly put each block into the box with the current minimum number of blocks.
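The resulting formula can be sketched directly (integer ceiling written as `(s + n - 2) // (n - 1)`, as in the model solution; `extra_blocks` is an illustrative name):

```python
def extra_blocks(a):
    n, s = len(a), sum(a)
    # k = max(ceil(s / (n - 1)), max(a)); the answer is k*(n-1) - s
    k = max(max(a), (s + n - 2) // (n - 1))
    return k * (n - 1) - s
```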
[ "binary search", "greedy", "math", "sortings" ]
1,400
fun main() {
    repeat(readLine()!!.toInt()) {
        val n = readLine()!!.toInt()
        val a = readLine()!!.split(' ').map { it.toLong() }
        val k = maxOf(a.max()!!, (a.sum() + n - 2) / (n - 1))
        println(k * (n - 1) - a.sum())
    }
}
1452
C
Two Brackets
You are given a string $s$, consisting of brackets of two types: '(', ')', '[' and ']'. A string is called a regular bracket sequence (RBS) if it's of one of the following types: - empty string; - '(' + RBS + ')'; - '[' + RBS + ']'; - RBS + RBS. where plus is a concatenation of two strings. In one move you can choose a non-empty subsequence of the string $s$ (not necessarily consecutive) that is an RBS, remove it from the string and concatenate the remaining parts without changing the order. What is the maximum number of moves you can perform?
Notice that it's never optimal to erase a subsequence of length greater than $2$, because every RBS of length above $2$ contains an RBS of length $2$ inside, and removing the inner one won't break the regular property of the outer one. So the task can be solved for the round and the square brackets independently; the answer is the sum of both. Let's solve the version for brackets '(' and ')'. In general, you just want to remove the consecutive substring "()" until there is none left in the string. That can be done by processing the string from left to right while maintaining a stack of current brackets. If the top bracket of the stack is '(' and the current bracket is ')', then you can increment the answer and pop that bracket from the stack. Otherwise, you push the current bracket onto the stack. Overall complexity: $O(|s|)$ per testcase.
[ "greedy" ]
800
def calc(s, x, y):
    bal, cnt = 0, 0
    for c in s:
        if c == y:
            if bal > 0:
                bal -= 1
                cnt += 1
        elif c == x:
            bal += 1
    return cnt

for _ in range(int(input())):
    s = input()
    print(calc(s, '(', ')') + calc(s, '[', ']'))
1452
D
Radio Towers
There are $n + 2$ towns located on a coordinate line, numbered from $0$ to $n + 1$. The $i$-th town is located at the point $i$. You build a radio tower in each of the towns $1, 2, \dots, n$ with probability $\frac{1}{2}$ (these events are independent). After that, you want to set the signal power on each tower to some integer from $1$ to $n$ (signal powers are not necessarily the same, but also not necessarily different). The signal from a tower located in a town $i$ with signal power $p$ reaches every city $c$ such that $|c - i| < p$. After building the towers, you want to choose signal powers in such a way that: - towns $0$ and $n + 1$ don't get any signal from the radio towers; - towns $1, 2, \dots, n$ get signal from exactly one radio tower each. For example, if $n = 5$, and you have built the towers in towns $2$, $4$ and $5$, you may set the signal power of the tower in town $2$ to $2$, and the signal power of the towers in towns $4$ and $5$ to $1$. That way, towns $0$ and $n + 1$ don't get the signal from any tower, towns $1$, $2$ and $3$ get the signal from the tower in town $2$, town $4$ gets the signal from the tower in town $4$, and town $5$ gets the signal from the tower in town $5$. Calculate the probability that, after building the towers, you will have a way to set signal powers to meet all constraints.
The crucial observation is that when the positions of towers are fixed, the way to set their signal powers is unique if it exists. That's because the first tower should have its signal power exactly equal to what is required to cover all towns before it, the second tower should have signal power exactly equal to what is required to cover all towns before it that weren't covered by the first one, and so on. So let's count the number of ways to cover all towns, and then divide it by $2^n$. Covering all towns can be expressed as splitting $n$ into the sum of several positive odd integers. It can be calculated with dynamic programming with prefix sums, but we can also prove that the number of ways to split $n$ is exactly the $n$-th integer in the Fibonacci sequence, as follows (this proof uses mathematical induction): for $n \le 2$, it's quite obvious; for $n > 2$ and $n \bmod 2 = 0$, let's iterate on the length of the last segment. We have to sum $F_1 + F_3 + \dots + F_{n - 1}$; $F_1 + F_3 = 1 + 2 = F_4$; $F_4 + F_5 = F_6$; $F_6 + F_7 = F_8$, and so on, until we get $F_{n - 2} + F_{n - 1} = F_n$; for $n > 2$ and $n \bmod 2 = 1$, let's iterate on the length of the last segment, and add $1$ to the result since we can cover everything with a single segment. So, this is $1 + F_2 + F_4 + F_6 + \dots + F_{n - 1}$; $1 + F_2 = F_3$, $F_3 + F_4 = F_5$, and so on. So, the answer to the problem is $\dfrac{F_n}{2^n}$. The last thing we have to consider is that we have to print a fraction modulo $998244353$. Since $998244353$ is a prime, by Fermat's little theorem we can calculate $y^{-1}$ as $y^{998244351} \bmod 998244353$. Exponentiation must be done with some fast algorithm (for example, binary exponentiation). Note: it's common in problems requiring you to calculate something modulo some prime number to have problems with overflow in intermediate calculations or other issues when we forget to take the result of some expression modulo $998244353$.
I recommend using either special addition/multiplication/exponentiation functions that always take the result modulo $998244353$ (an example of how to write and use them can be viewed in the model solution), or a special modular integer data structure with overloaded operators that you implement yourself.
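A compact sketch of the final formula $F_n / 2^n \bmod 998244353$, using Python's built-in three-argument `pow` for modular exponentiation (the function name is illustrative):

```python
MOD = 998244353

def radio_towers(n):
    # F_n computed iteratively, then divided by 2^n via Fermat's little theorem
    f_prev, f_cur = 0, 1  # F_0, F_1
    for _ in range(n - 1):
        f_prev, f_cur = f_cur, (f_prev + f_cur) % MOD
    inv = pow(pow(2, n, MOD), MOD - 2, MOD)
    return f_cur * inv % MOD
```

For example, $n = 1$ gives $1/2$ and both $n = 2$ ($F_2/4 = 1/4$) and $n = 3$ ($F_3/8 = 2/8$) give $1/4$.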
[ "combinatorics", "dp", "math" ]
1,600
#include<bits/stdc++.h>
using namespace std;

const int MOD = 998244353;

int add(int x, int y) {
    x += y;
    while (x >= MOD) x -= MOD;
    while (x < 0) x += MOD;
    return x;
}

int mul(int x, int y) {
    return (x * 1ll * y) % MOD;
}

int binpow(int x, int y) {
    int ans = 1;
    while (y > 0) {
        if (y % 2 == 1)
            ans = mul(ans, x);
        x = mul(x, x);
        y /= 2;
    }
    return ans;
}

int divide(int x, int y) {
    // modular division via Fermat's little theorem
    return mul(x, binpow(y, MOD - 2));
}

int main() {
    int n;
    cin >> n;
    vector<int> fib(n + 1);
    fib[0] = 0;
    fib[1] = 1;
    for (int i = 2; i <= n; i++)
        fib[i] = add(fib[i - 1], fib[i - 2]);
    cout << divide(fib[n], binpow(2, n)) << endl;
}
1452
E
Two Editorials
Berland regional ICPC contest has just ended. There were $m$ participants numbered from $1$ to $m$, who competed on a problemset of $n$ problems numbered from $1$ to $n$. Now the editorial is about to take place. There are two problem authors, each of them is going to tell the tutorial to \textbf{exactly $k$ consecutive tasks} of the problemset. The authors choose the segment of $k$ consecutive tasks for themselves independently of each other. The segments can coincide, intersect or not intersect at all. The $i$-th participant is interested in listening to the tutorial of all consecutive tasks from $l_i$ to $r_i$. Each participant always chooses to listen to only the problem author that tells the tutorials to the maximum number of tasks he is interested in. Let this maximum number be $a_i$. No participant can listen to both of the authors, even if their segments don't intersect. The authors want to choose the segments of $k$ consecutive tasks for themselves in such a way that the sum of $a_i$ over all participants is maximized.
Consider some participant's segment $[l; r]$ and one of the authors' segments $[i; i + k - 1]$. How does the length of their intersection change as you move $i$ from left to right? It first increases, until the centers of both segments coincide (easiest to see on segments of the same length), and then decreases; the increase is completely symmetrical to the decrease. From this you can conclude that the author's segment whose center is closer to the center of the participant's segment has the larger intersection length. Let's sort the participants' segments by their centers. You can see that the first author will be optimal for some prefix of the sorted segments and the second author for the remaining suffix. So you can just iterate over the length of the prefix and update the answer with all options. Overall complexity: $O(m \log m + nm)$.
[ "brute force", "dp", "greedy", "sortings", "two pointers" ]
2,500
#include <bits/stdc++.h>

#define forn(i, n) for (int i = 0; i < int(n); i++)

using namespace std;

struct seg {
    int l, r;
};

int main() {
    int n, m, k;
    cin >> n >> m >> k;
    vector<seg> a(m);
    forn(i, m) {
        cin >> a[i].l >> a[i].r;
        --a[i].l;
    }
    // sort participants by the centers of their segments
    sort(a.begin(), a.end(), [](const seg &a, const seg &b) {
        return a.l + a.r < b.l + b.r;
    });
    // su[j] = best total for participants j..m-1 served by a single author
    vector<int> su(m + 1);
    forn(i, n - k + 1) {
        int cur = 0;
        for (int j = m - 1; j >= 0; --j) {
            cur += max(0, min(i + k, a[j].r) - max(i, a[j].l));
            su[j] = max(su[j], cur);
        }
    }
    int ans = su[0];
    forn(i, n - k + 1) {
        int cur = 0;
        forn(j, m) {
            cur += max(0, min(i + k, a[j].r) - max(i, a[j].l));
            ans = max(ans, cur + su[j + 1]);
        }
    }
    cout << ans << endl;
    return 0;
}
1452
F
Divide Powers
You are given a multiset of powers of two. More precisely, for each $i$ from $0$ to $n$ exclusive you have $cnt_i$ elements equal to $2^i$. In one operation, you can choose any one element $2^l > 1$ and divide it into two elements $2^{l - 1}$. You should perform $q$ queries. Each query has one of two types: - "$1$ $pos$ $val$" — assign $cnt_{pos} := val$; - "$2$ $x$ $k$" — calculate the minimum number of operations you need to make at least $k$ elements with value lower or equal to $2^x$. Note that all queries of the second type don't change the multiset; that is, you just calculate the minimum number of operations, you don't perform them.
Several observations. Generally, we have two types of operations: dividing a $2^l$ with $l \le x$, or with $l > x$. If $2^l \le 2^x$, then one division yields $+1$ element $\le 2^x$, so we can simply keep track of the total possible number of such operations as $small$. If $2^l > 2^x$, then fully splitting $2^l$ into $2^x$-s yields $+2^{l - x}$ elements $\le 2^x$ in $2^{l - x} - 1$ operations, i.e. each division yields $\frac{2^{l-x}}{2^{l-x} - 1} > 1$ elements on average. So it's preferable to fully split a $2^l > 2^x$ rather than divide a $2^l \le 2^x$; moreover, the smaller $l > x$ is, the more profitable each division. As a result, let's act greedily. Say we need $k$ more elements $\le 2^x$. Iterate over $2^l > 2^x$ in increasing order of $l$. If $2^{l - x} \le k$, fully split $2^l$ into $2^x$-s in $2^{l - x} - 1$ operations, decrease $k$, and increase the operation counter $cur$ and $small$ accordingly. If $2^{l - x} > k$, the situation becomes more complicated. We can either leave $2^l$ untouched and use the preserved operations on small $2^j$-s if $small \ge k$, or split $2^l$ into two $2^{l - 1}$-s, spending one operation. If $2^{l - 1 - x} > k$, then we don't need one of the $2^{l - 1}$-s and may split further only the other one. If $2^{l - 1 - x} \le k$, then it's optimal to fully split one of the $2^{l - 1}$-s and proceed with only the remaining $2^{l - 1}$ and recalculated $k$, $cur$ and $small$. In both cases the optimal splitting of $2^{l - 1}$ can be solved recursively in the same manner as for $2^l$. Since $l$ decreases at each step, we check only $O(n)$ cases per query.
Note that situation $1$ ($2^{l - x} \le k$) can be handled in bulk for all $2^l$ with equal $l$, so the first part also works in $O(n)$. The resulting complexity is $O(n)$ per query.
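The key cost fact, that fully splitting one $2^l$ into $2^x$-s takes $2^{l-x}-1$ operations and yields $2^{l-x}$ elements, can be checked against a brute-force simulation (both helper names are illustrative):

```python
def full_split_cost(l, x):
    # closed form: 2^(l-x) elements produced in 2^(l-x) - 1 divisions
    return (1 << (l - x)) - 1, 1 << (l - x)

def simulate(l, x):
    # literally perform divisions (2^e -> two 2^(e-1)) until all elements <= 2^x
    ops, stack, produced = 0, [l], 0
    while stack:
        e = stack.pop()
        if e <= x:
            produced += 1
        else:
            stack += [e - 1, e - 1]
            ops += 1
    return ops, produced
```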
[ "constructive algorithms", "greedy" ]
2,900
#include<bits/stdc++.h>
using namespace std;

#define fore(i, l, r) for (int i = int(l); i < int(r); i++)

typedef long long li;

const li INF64 = li(1e18);

int n, q;
vector<li> cnt;

inline bool read() {
    if (!(cin >> n >> q))
        return false;
    cnt.assign(n, 0);
    fore (i, 0, n)
        cin >> cnt[i];
    return true;
}

inline void solve() {
    fore (qs, 0, q) {
        int tp, pos;
        li val;
        cin >> tp >> pos >> val;
        if (tp == 1) {
            cnt[pos] = val;
        } else {
            // small = divisions still available among elements <= 2^pos
            li small = 0, cur = 0;
            fore (i, 0, pos + 1) {
                small += cnt[i] * ((1ll << i) - 1);
                val -= cnt[i];
            }
            if (val <= 0) {
                cout << 0 << '\n';
                continue;
            }
            // fully split whole elements 2^id > 2^pos while profitable
            int id = pos + 1;
            while (id < n) {
                li add = 1ll << (id - pos);
                li need = min(val / add, cnt[id]);
                cur += need * (add - 1);
                val -= need * add;
                small += need * add * ((1ll << pos) - 1);
                if (need == cnt[id])
                    id++;
                else
                    break;
            }
            if (val <= 0) {
                cout << cur << '\n';
                continue;
            }
            if (id >= n) {
                cout << (val > small ? -1 : cur + val) << '\n';
                continue;
            }
            // partially split a single 2^id, descending one level at a time
            li ans = INF64;
            while (id > pos) {
                if (small >= val)
                    ans = min(ans, cur + val);
                cur++;
                id--;
                li add = 1ll << (id - pos);
                if (val >= add) {
                    cur += add - 1;
                    val -= add;
                    small += add * ((1ll << pos) - 1);
                }
            }
            assert(val <= 0);
            cout << min(ans, cur) << endl;
        }
    }
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(0), cout.tie(0);
    if (read())
        solve();
    return 0;
}
1452
G
Game On Tree
Alice and Bob are playing a game. They have a tree consisting of $n$ vertices. Initially, Bob has $k$ chips, the $i$-th chip is located in the vertex $a_i$ (all these vertices are unique). Before the game starts, Alice will place a chip into one of the vertices of the tree. The game consists of turns. Each turn, the following events happen (sequentially, exactly in the following order): - Alice either moves her chip to an adjacent vertex or doesn't move it; - for each Bob's chip, he either moves it to an adjacent vertex or doesn't move it. Note that this choice is done independently for each chip. The game ends when Alice's chip shares the same vertex with one (or multiple) of Bob's chips. Note that Bob's chips may share the same vertex, even though they are in different vertices at the beginning of the game. Alice wants to maximize the number of turns, Bob wants to minimize it. If the game ends in the middle of some turn (Alice moves her chip to a vertex that contains one or multiple Bob's chips), this turn is counted. For each vertex, calculate the number of turns the game will last if Alice places her chip in that vertex.
This task was inspired by an older edu task and another task proposed by RockyB. Let's learn to solve the problem for at least one starting vertex for Alice. Let this vertex be $v$. In general, Alice's strategy is basically this: run to some vertex $u$ as fast as possible and stay in it until Bob reaches $u$. Hesitating on the way to this vertex won't be optimal, and neither will visiting the same vertex multiple times. I guess that can be proven more formally by analyzing the set of possible solutions after each move. What properties should vertex $u$ have for Alice to be able to escape to it? There shouldn't be a way for Bob to catch her midway. However, it's not necessary to check any midway intersections: if Bob can catch her anywhere on the path, he can also follow her to its end (by moving along the same path) and catch her at the destination. Thus, this vertex $u$ should be farther from every one of Bob's chips than it is from $v$. So you can precalculate the distance $d_w$ from each vertex $w$ to the closest of Bob's chips: just push all chips into a queue and run a BFS. We've now learned to solve the problem in $O(n)$ for each vertex $v$: just iterate over all vertices $u$ and take the maximum of $d_u$ over those $u$ that have $d_u$ greater than the distance from $v$ to $u$. From here the solution can go two ways. You can stop thinking and obtain an $O(n \log^2 n)$ solution, or think more and get an $O(n \log n)$ one. The first one goes like this. Notice that the function $f(v, x)$, whether Alice can survive at least $x$ moves starting from vertex $v$, is monotonous with respect to $x$. So we can binary search the answer. The check transforms to the following: consider all vertices at distance at most $x$ from $v$; for the check to return true, there should exist among them at least one vertex $u$ with $d_u > x$, i.e. the maximum of $d_u$ over them should exceed $x$. That is basically a centroid decomposition exercise.
Let each centroid store an array $val$ such that $val_i$ is the maximum value of $d_u$ over all vertices $u$ that belong to this centroid's subgraph and are at distance no more than $i$ from the centroid. These arrays can be constructed in $O(n \log n)$ for all centroids in total: the length of each array doesn't exceed the number of vertices in the subgraph of the corresponding centroid, and the total size of all centroid subgraphs is $O(n \log n)$ by definition of the centroid decomposition. For a query, iterate over all centroids $v$ belongs to and check the value in the appropriate cell of each one's $val$ array. For the second solution, let's reverse the problem. Consider the vertex $u$ Alice escapes to. If there is a starting vertex $v$ no farther than $d_u - 1$ from it, then the answer for $v$ can be updated with $d_u$. So we should update all vertices at distance no more than $d_u - 1$ from $u$ with the maximum of their current answer and $d_u$. The solution is almost the same centroid decomposition: iterate over all centroids $c$ that $u$ belongs to and write $d_u$ into cell $i = d_u - 1 - dist(u, c)$ (the remaining distance budget) of each one's array $val$, then build suffix maximums over this array, so that reading cell $dist(v, c)$ gives the best $d_u$ whose budget covers $v$. Finally, for each $v$, collect the best answer over all centroids $v$ belongs to. Overall complexity: $O(n \log^2 n)$ or $O(n \log n)$.
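The precomputation of $d_w$ (distance from each vertex to the closest of Bob's chips) is a plain multi-source BFS; a minimal sketch with an illustrative function name:

```python
from collections import deque

def closest_chip_dist(adj, chips):
    # push all chips into the queue at distance 0 and BFS outward
    d = [-1] * len(adj)
    q = deque()
    for c in chips:
        d[c] = 0
        q.append(c)
    while q:
        x = q.popleft()
        for y in adj[x]:
            if d[y] == -1:
                d[y] = d[x] + 1
                q.append(y)
    return d
```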
[ "data structures", "dfs and similar", "greedy", "trees" ]
2,700
#include<bits/stdc++.h>
using namespace std;

const int N = 200043;

vector<int> g[N];
vector<int> dist[N];   // dist[v][j] = distance from v to its j-th centroid ancestor
int sz[N];
int par[N];            // parent in the centroid decomposition
bool used[N];
int max_dist[N];       // radius of each centroid's subgraph
vector<int> val[N];

int calc_size(int x, int p = -1) {
    sz[x] = 1;
    for (auto y : g[x])
        if (y != p && !used[y])
            sz[x] += calc_size(y, x);
    return sz[x];
}

int find_centroid(int x, int p, int s) {
    int ans = -1;
    bool good = true;
    for (auto y : g[x])
        if (y != p && !used[y])
            good &= sz[y] * 2 <= s;
        else if (y == p && !used[y])
            good &= (s - sz[x]) * 2 <= s;
    if (good)
        ans = x;
    for (auto y : g[x])
        if (y != p && !used[y])
            ans = max(ans, find_centroid(y, x, s));
    return ans;
}

void calc_dist(int x, int p, int d, int s) {
    dist[x].push_back(d);
    for (auto y : g[x])
        if (y != p && !used[y])
            calc_dist(y, x, d + 1, s);
    max_dist[s] = max(max_dist[s], d);
}

int decomposition(int v) {
    calc_size(v);
    int c = find_centroid(v, v, sz[v]);
    used[c] = true;
    for (auto y : g[c])
        if (!used[y])
            par[decomposition(y)] = c;
    used[c] = false;
    calc_dist(c, c, 0, c);
    return c;
}

int main() {
    int n;
    scanf("%d", &n);
    for (int i = 0; i < n - 1; i++) {
        int x, y;
        scanf("%d %d", &x, &y);
        --x; --y;
        g[x].push_back(y);
        g[y].push_back(x);
    }
    decomposition(0);
    for (int i = 0; i < n; i++)
        val[i].resize(max_dist[i] + 1);
    int k;
    scanf("%d", &k);
    // multi-source BFS: d[v] = distance to the closest of Bob's chips
    vector<int> d(n, int(1e9));
    queue<int> q;
    for (int i = 0; i < k; i++) {
        int x;
        scanf("%d", &x);
        --x;
        q.push(x);
        d[x] = 0;
    }
    while (!q.empty()) {
        int x = q.front();
        q.pop();
        for (auto y : g[x])
            if (d[y] > d[x] + 1) {
                q.push(y);
                d[y] = d[x] + 1;
            }
    }
    // for every escape vertex i, write d[i] at index d[i]-1-dist in each centroid ancestor
    for (int i = 0; i < n; i++) {
        if (d[i] == 0)
            continue;
        int curc = i;
        for (int j = 0; j < dist[i].size(); j++) {
            int dd = dist[i][j];
            if (dd > d[i] - 1) {
                curc = par[curc];
                continue;
            }
            dd = d[i] - 1 - dd;
            if (dd >= val[curc].size())
                dd = val[curc].size() - 1;
            val[curc][dd] = max(val[curc][dd], d[i]);
            curc = par[curc];
        }
    }
    // suffix maximums: val[i][j] = best escape value with remaining budget >= j
    for (int i = 0; i < n; i++)
        for (int j = max_dist[i]; j >= 1; j--)
            val[i][j - 1] = max(val[i][j], val[i][j - 1]);
    for (int i = 0; i < n; i++) {
        int ans = 0;
        int curc = i;
        for (int j = 0; j < dist[i].size(); j++) {
            int dd = dist[i][j];
            ans = max(ans, val[curc][dd]);
            curc = par[curc];
        }
        if (d[i] == 0)
            ans = 0;
        printf("%d%c", ans, " \n"[i == n - 1]);
    }
}
1453
A
Cancel the Trains
Gildong's town has a train system that has $100$ trains that travel from the bottom end to the top end and $100$ trains that travel from the left end to the right end. The trains starting from each side are numbered from $1$ to $100$, respectively, and all trains have the same speed. Let's take a look at the picture below. The train system can be represented as coordinates on a 2D plane. The $i$-th train starting at the bottom end is initially at $(i,0)$ and will be at $(i,T)$ after $T$ minutes, and the $i$-th train starting at the left end is initially at $(0,i)$ and will be at $(T,i)$ after $T$ minutes. All trains arrive at their destinations after $101$ minutes. However, Gildong found that some trains scheduled to depart at a specific time, simultaneously, are very dangerous. At this time, $n$ trains are scheduled to depart from the bottom end and $m$ trains are scheduled to depart from the left end. If two trains are both at $(x,y)$ at the same time for some $x$ and $y$, they will crash into each other. Therefore, he is asking you to find the \textbf{minimum} number of trains that should be cancelled to prevent all such crashes.
Let's first determine whether it's possible for two trains that start from $(i,0)$ and $(0,j)$ to crash into each other. For this to happen, there must be a point where $(i,T)=(T,j)$, which means $i=T$ and $j=T$. Therefore, a train that starts from the bottom end can crash into a train that starts from the left end if and only if they have the same train number. We can cancel either one of them to prevent that crash. Since $n$ and $m$ are small, we can brute-force every pair of trains and count the number of pairs of trains that share the same number, and print that value as the answer. The time complexity for this solution is $\mathcal{O}(nm)$ for each test case. If we want an asymptotically faster solution, we can put all train numbers into a binary search tree structure (such as std::set in C++), and then find duplicate numbers from the trains of the other side by searching for those values. Another similar solution is to sort the former numbers, then perform binary search for the other values. The time complexity for these solutions is $\mathcal{O}((n+m)\log(n))$. There is another solution with $\mathcal{O}(n+m)$ time complexity taking advantage of the fact that the train numbers are small. Let's make an array $X$ of length $101$, and set $X[i] = true$ if and only if there is a train starting from $(i, 0)$. For each train starting from $(0, j)$, we can check if $X[j]$ is $true$, and count the total number of duplicates this way.
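Since two trains crash exactly when they share a number, the answer is the count of common train numbers; in Python this reduces to a set intersection (an $\mathcal{O}(n+m)$ approach on average):

```python
def min_cancellations(bottom, left):
    # one cancellation suffices per shared train number
    return len(set(bottom) & set(left))
```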
[ "implementation" ]
800
#include <bits/stdc++.h>
using namespace std;

const int MAX_COORD = 100;

void solve() {
    int n, m;
    cin >> n >> m;
    bool v[MAX_COORD + 5] = { 0 };  // v[x] = true if a train departs from (x, 0)
    int ans = 0;
    for (int i = 0; i < n; i++) {
        int x;
        cin >> x;
        v[x] = true;
    }
    for (int i = 0; i < m; i++) {
        int x;
        cin >> x;
        if (v[x])
            ans++;
    }
    cout << ans << '\n';
}

int main() {
    int t;
    cin >> t;
    while (t--)
        solve();
}
1453
B
Suffix Operations
Gildong has an interesting machine that has an array $a$ with $n$ integers. The machine supports two kinds of operations: - Increase all elements of a suffix of the array by $1$. - Decrease all elements of a suffix of the array by $1$. A suffix is a subsegment (contiguous elements) of the array that contains $a_n$. In other words, for all $i$ where $a_i$ is included in the subsegment, all $a_j$'s where $i \lt j \le n$ must also be included in the subsegment. Gildong wants to make all elements of $a$ equal — he will always do so using the minimum number of operations necessary. To make his life even easier, before Gildong starts using the machine, you have the option of changing one of the integers in the array to any other integer. You are allowed to leave the array unchanged. You want to minimize the number of operations Gildong performs. With your help, what is the minimum number of operations Gildong will perform? Note that even if you change one of the integers in the array, you should \textbf{not} count that as one of the operations because Gildong did not perform it.
First, let's find the optimal strategy for Gildong to follow to make all elements of the array equal. It's obvious that there is no need to perform any operation on the suffix starting at $a_1$, since that operation changes all the integers in the array. For $i=2$ to $n$, the only way for $a_i$ to have equal value to $a_{i-1}$ is to perform one of the operations on the suffix starting at $a_i$ $abs(a_i - a_{i-1})$ times. This is because all operations starting at other positions do not change the difference between $a_i$ and $a_{i-1}$. Therefore, the minimum number of operations Gildong has to perform is $\sum_{i=2}^{n}{abs(a_i - a_{i-1})}$. How should we change one element so that we can minimize this value? Let's take care of some special cases first. The optimal way to change $a_1$ is to make it equal to $a_2$, and then the minimum number of operations Gildong has to perform is decreased by $abs(a_2 - a_1)$. Similarly, the optimal way to change $a_n$ is to make it equal to $a_{n-1}$, and then the minimum number of operations Gildong has to perform is decreased by $abs(a_n - a_{n-1})$. For the rest of the elements, changing $a_i$ affects both $abs(a_i - a_{i-1})$ and $abs(a_{i+1} - a_i)$. Here, we need to observe an important fact: This value is minimized when $a_i$ is between $a_{i-1}$ and $a_{i+1}$, inclusive. Intuitively, if $a_{i-1} < a_i > a_{i+1}$, Gildong has to perform one or more $2$-nd operations on the suffix starting at $a_i$, and then one or more $1$-st operations on the suffix starting at $a_{i+1}$ to compensate for the extra $2$-nd operations. This applies to the scenario where $a_{i-1} > a_i < a_{i+1}$ as well. If $a_i$ is between $a_{i-1}$ and $a_{i+1}$, these additional operations are unnecessary. In fact, the number of operations is decreased from $abs(a_i - a_{i-1}) + abs(a_{i+1} - a_i)$ to $abs(a_{i+1} - a_{i-1})$. 
Therefore, we can decrease the number of operations needed by: $\max \begin{cases} abs(a_2 - a_1)\\ abs(a_n - a_{n-1})\\ \max_{i=2..n-1}{abs(a_i - a_{i-1}) + abs(a_{i+1} - a_i) - abs(a_{i+1} - a_{i-1})} \end{cases}$ The answer is $x-y$ where $x$ is the minimum number of operations Gildong needs to perform on the initial array, and $y$ is the maximum number of operations we can decrease by changing exactly one element. Time complexity: $\mathcal{O}(n)$.
[ "constructive algorithms", "implementation" ]
1,400
def solve():
    n = int(input())
    a = [0] + list(map(int, input().split()))
    ans = 0
    for i in range(2, n + 1):
        ans += abs(a[i] - a[i - 1])
    mx = max(abs(a[2] - a[1]), abs(a[n] - a[n - 1]))
    for i in range(2, n):
        mx = max(mx, abs(a[i] - a[i - 1]) + abs(a[i + 1] - a[i]) - abs(a[i + 1] - a[i - 1]))
    print(ans - mx)

t = int(input())
for _ in range(t):
    solve()
1453
C
Triangles
Gildong has a square board consisting of $n$ rows and $n$ columns of square cells, each consisting of a single digit (from $0$ to $9$). The cell at the $j$-th column of the $i$-th row can be represented as $(i, j)$, and the length of the side of each cell is $1$. Gildong likes big things, so for each digit $d$, he wants to find a triangle such that: - Each vertex of the triangle is in the center of a cell. - The digit of every vertex of the triangle is $d$. - At least one side of the triangle is parallel to one of the sides of the board. You may assume that a side of length $0$ is parallel to both sides of the board. - The area of the triangle is maximized. Of course, he can't just be happy with finding these triangles as is. Therefore, for each digit $d$, he's going to change the digit of exactly one cell of the board to $d$, then find such a triangle. He changes it back to its original digit after he is done with each digit. Find the maximum area of the triangle he can make for each digit. Note that he can put multiple vertices of the triangle on the same cell, and the triangle can be a degenerate triangle; i.e. the area of the triangle can be $0$. Also, note that he is allowed to change the digit of a cell from $d$ to $d$.
Let's consider each $d$ separately. For each cell that has the digit $d$, we'll check the case where this cell is used as one end of a horizontal or vertical side (i.e. the base) of the triangle, and we change the digit of another cell and use it as the other end of that side. Let's say the position of this cell is $(i, j)$, and we change the cell at $(p, q)$. Then there are two cases: $i=p$: Since we always want to maximize the length of the base, $q$ can be either $1$ or $n$, depending on whether $j-1 > n-j$ or not. The length of the base will be $\max(j-1, n-j)$. $j=q$: Similarly, $p$ can be either $1$ or $n$, depending on whether $i-1 > n-i$ or not. The length of the base will be $\max(i-1, n-i)$. In the first case, since the base is horizontal, the area of the triangle is determined only by the remaining vertex's row component. Therefore, we only need the maximum / minimum row position of $d$. Let's say they are $max\_row$ and $min\_row$, respectively. We can easily pre-calculate them. Assuming that $b$ is the length of the base, the maximum area will be $\cfrac{b \times \max(max\_row-i, i-min\_row)}{2}$. We can get rid of the $\cfrac{1}{2}$ part as we will print the area multiplied by $2$. Thankfully, we don't need to separately check the case where we change a cell that is not used as an end of the base, since we can always move such a vertex until it is horizontally or vertically aligned with one of the other vertices without changing the area, which reduces it to a case we already took care of. The same process can be applied to the second case as well.
Summarizing the whole process, for each $d$, the maximum area (multiplied by $2$) is: $\max_{(i, j)=d} \begin{cases} \max(j-1, n-j) \times \max(max\_row-i, i-min\_row)\\ \max(i-1, n-i) \times \max(max\_column-j, j-min\_column) \end{cases}$ Since we check each cell exactly once, and pre-calculating $max\_row$, $min\_row$, $max\_column$, and $min\_column$ takes $\mathcal{O}(n^2)$, the total time complexity is $\mathcal{O}(n^2)$ for each test case.
[ "greedy", "implementation" ]
1,700
import sys

input = sys.stdin.readline
MAX_N = 2000
inf = MAX_N + 9


def solve(a):
    global inf
    mnr = [inf] * 10
    mxr = [0] * 10
    mnc = [inf] * 10
    mxc = [0] * 10
    ans = [0] * 10
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            x = a[i][j]
            mnr[x] = min(mnr[x], i)
            mxr[x] = max(mxr[x], i)
            mnc[x] = min(mnc[x], j)
            mxc[x] = max(mxc[x], j)
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            x = a[i][j]
            ans[x] = max(ans[x], max(mxr[x] - i, i - mnr[x]) * max(n - j, j - 1))
            ans[x] = max(ans[x], max(mxc[x] - j, j - mnc[x]) * max(n - i, i - 1))
    return ans


if __name__ == '__main__':
    t = int(input())
    while t:
        n = int(input())
        a = [[]]
        for i in range(1, n + 1):
            s = input()
            a.append([0])
            for j in range(0, n):
                a[i].append(int(s[j]))
        ans = solve(a)
        for i in range(10):
            print(ans[i], end=' ')
        print()
        t -= 1
1453
D
Checkpoints
Gildong is developing a game consisting of $n$ stages numbered from $1$ to $n$. The player starts the game from the $1$-st stage and should beat the stages in increasing order of the stage number. The player wins the game after beating the $n$-th stage. There is at most one checkpoint on each stage, and there is always a checkpoint on the $1$-st stage. At the beginning of the game, only the checkpoint on the $1$-st stage is activated, and all other checkpoints are deactivated. When the player gets to the $i$-th stage that has a checkpoint, that checkpoint is activated. For each try of a stage, the player can either beat the stage or fail the stage. If they beat the $i$-th stage, the player is moved to the $i+1$-st stage. If they fail the $i$-th stage, the player is moved to the most recent checkpoint they activated, and they have to beat the stages after that checkpoint again. For example, assume that $n = 4$ and the checkpoints are on the $1$-st and $3$-rd stages. The player starts at the $1$-st stage. If they fail on the $1$-st stage, they need to retry the $1$-st stage because the checkpoint on the $1$-st stage is the most recent checkpoint they activated. If the player beats the $1$-st stage, they're moved to the $2$-nd stage. If they fail it, they're sent back to the $1$-st stage again. If they beat both the $1$-st stage and the $2$-nd stage, they get to the $3$-rd stage and the checkpoint on the $3$-rd stage is activated. Now whenever they fail on the $3$-rd stage, or the $4$-th stage after beating the $3$-rd stage, they're sent back to the $3$-rd stage. If they beat both the $3$-rd stage and the $4$-th stage, they win the game. Gildong is going to build the stages to have equal difficulty. He wants you to find any series of stages and checkpoints using at most $2000$ stages, where the expected number of tries over all stages is exactly $k$, for a player whose probability of beating each stage is exactly $\cfrac{1}{2}$.
As already explained in the notes (and is quite obvious), the expected number of tries to beat stage $i$ with a checkpoint where stage $i+1$ also has a checkpoint (or is the end of the game) is $2$. What if stage $i+1$ doesn't have a checkpoint and stage $i+2$ has a checkpoint? We can think of it like this. It takes $2$ tries in expectation to get to stage $i+1$, and the player needs to add one more try, trying $3$ times in total. But this also has a probability of $\cfrac{1}{2}$ to succeed, so the expected number of tries to actually get to stage $i+2$ is multiplied by $2$ - making it $6$ in total. This can be extended indefinitely. Let's say $x_i$ is the expected number of tries to beat $i$ consecutive stages with only one checkpoint at the beginning. If we extend it to $i+1$ consecutive stages, it takes $x_{i+1} = 2 \cdot (x_i+1)$ tries. If this is not intuitive, we can always use the Monte Carlo method to simulate how many tries each takes. The general term of this sequence is $x_i = 2^{i+1} - 2$, and it is introduced in OEIS A000918 with a similar example. As each checkpoint makes the stages after that checkpoint independent of the previous stages, we can just add up appropriate $x_i$'s to make the sum equal to $k$. Using $x_i$ means we append the stages in the form $1$ $0$ $0$ $0$ ... where the number of $0$'s is $i-1$. As every term of the sequence is even, the answer is $-1$ if $k$ is odd. Otherwise, we can show that there always exists an answer for all even $k \le 10^{18}$. There are two simple strategies to make it with at most $2000$ stages. The first strategy is to greedily take the greatest $x_i \le y$ where $y$ is the remaining number, then append $x_i$ and subtract it from $y$. This works because either $y$ can be exactly $2 \cdot x_i$, or we can use $x_i$ once and repeat the process with $y-x_i$. The worst case for this strategy is to use all of $x_{57}$, $x_{56}$, $x_{55}$, ..., $x_1$ and another $x_1$, which sums up to a total of $1654$ stages.
Another strategy is to use $x_i$ and $x_1$ if bit $i+1$ ($0$-indexed) is $1$. Since there can be at most $58$ $1$-bits, the worst case for this strategy is still far less than $2000$. Time complexity: $\mathcal{O}(\log^2{k})$
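As a sanity check (a sketch, not the reference solution), both the closed form $x_i = 2^{i+1} - 2$ and the claim that the greedy strategy stays under $2000$ stages can be verified directly:

```python
# Sanity-check sketch: the recurrence x_{i+1} = 2*(x_i + 1) with x_1 = 2 gives
# x_i = 2^(i+1) - 2, and the greedy decomposition of an even k stays under 2000.
import random

def greedy_stages(k):
    # k must be even; using x_i contributes i stages ("1" followed by i-1 zeros)
    stages = 0
    while k > 0:
        i = 1
        while (1 << (i + 2)) - 2 <= k:  # find the largest i with x_i <= k
            i += 1
        k -= (1 << (i + 1)) - 2         # subtract x_i = 2^(i+1) - 2
        stages += i
    return stages

x = 2
for i in range(1, 60):
    assert x == (1 << (i + 1)) - 2      # closed form matches the recurrence
    x = 2 * (x + 1)

random.seed(0)
for _ in range(1000):
    k = 2 * random.randint(1, 5 * 10**17)
    assert greedy_stages(k) <= 2000

assert greedy_stages(8) == 3            # 8 = x_2 + x_1 -> stages "1 0" and "1"
```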
[ "brute force", "constructive algorithms", "greedy", "math", "probabilities" ]
1,900
import java.util.*;

public class Main {
    static Scanner sc = new Scanner(System.in);
    static final int MAX_N = 2000;

    public static void main(String[] args) {
        int t = sc.nextInt();
        while (t-- > 0) solve();
    }

    public static void solve() {
        long k = sc.nextLong();
        if (k % 2 == 1) {
            System.out.println(-1);
            return;
        }
        int[] a = new int[MAX_N + 5];
        a[1] = 1;
        int cur = 1;
        while (k > 0) {
            int c;
            for (c = 1; (1l << (c + 1)) - 2 <= k; c++);
            cur += c - 1;
            a[cur] = 1;
            k -= (1l << c) - 2;
        }
        System.out.println(cur - 1);
        for (int i = 1; i < cur; i++)
            System.out.print(a[i] + (i == cur - 1 ? "\n" : " "));
    }
}
1453
E
Dog Snacks
Gildong is playing with his dog, Badugi. They're at a park that has $n$ intersections and $n-1$ bidirectional roads, each $1$ meter in length and connecting two intersections with each other. The intersections are numbered from $1$ to $n$, and for every $a$ and $b$ ($1 \le a, b \le n$), it is possible to get to the $b$-th intersection from the $a$-th intersection using some set of roads. Gildong has put one snack at every intersection of the park. Now Gildong will give Badugi a mission to eat all of the snacks. Badugi starts at the $1$-st intersection, and he will move by the following rules: - Badugi looks for snacks that are as close to him as possible. Here, the distance is the length of the shortest path from Badugi's current location to the intersection with the snack. However, Badugi's sense of smell is limited to $k$ meters, so he can only find snacks that are less than or equal to $k$ meters away from himself. If he cannot find any such snack, he fails the mission. - Among all the snacks that Badugi can smell from his current location, he chooses a snack that minimizes the distance he needs to travel from his current intersection. If there are multiple such snacks, Badugi will choose one arbitrarily. - He repeats this process until he eats all $n$ snacks. After that, he has to find the $1$-st intersection again which also must be less than or equal to $k$ meters away from the last snack he just ate. If he manages to find it, he completes the mission. Otherwise, he fails the mission. Unfortunately, Gildong doesn't know the value of $k$. So, he wants you to find the minimum value of $k$ that makes it possible for Badugi to complete his mission, if Badugi moves optimally.
It is obvious that the problem can be modeled as a tree rooted at the $1$-st vertex. Given a large enough $k$, we can see that Badugi will always 'clear out' each subtree and then move back to its parent. This is because if there exists an unvisited child for a vertex, the distance between them is $1$, while any unvisited vertex at the parent's side has distance of at least $2$. Therefore, Badugi's moves will look like a preorder tree traversal. This implies another fact. Let's say the $j$-th vertex is a child of the $i$-th vertex. After visiting the last vertex of a subtree rooted at the $j$-th vertex, Badugi has to move a longer distance when it was the last child of the $i$-th vertex than when the $i$-th vertex has another unvisited child. The only important rule for Badugi is to choose the child that has the shortest 'moving back' distance as the last child he will visit. This distance can be sent back to its parent so that the parent can choose between the candidates. Let's say the minimum among the candidates is $mn$, and the maximum is $mx$. It is optimal to use $mn+1$ as the 'moving back' distance, and the maximum of the 'child-to-child' distances will be $mx+1$. There is one exception for this strategy - the root. Unlike the others, there is no need to move any further after visiting all vertices and then getting back to the root. This means choosing $mx$ as the 'moving back' distance is optimal, because we don't need to add anything to it. Then we can use the sub-maximum of the candidates as the maximum of 'child-to-child' distances. Along these processes we can update the answer whenever we find the maximum of 'child-to-child' distances, along with the last move back to the root. Time complexity: $O(n)$ for each test case, but it's too boring to find the sub-maximum in linear time, so just sort the candidates and it will be $O(n\log{n})$ for each test case.
[ "binary search", "dfs and similar", "dp", "greedy", "trees" ]
2,300
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.util.ArrayList;
import java.util.Collections;

public class Main {
    static BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
    static BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(System.out));
    static final int MAX_N = 200000;
    static final int inf = (int) 1e9;
    static ArrayList<Integer>[] adj;
    static int ans;

    public static void main(String[] args) throws IOException {
        int t = Integer.parseInt(br.readLine());
        while (t-- > 0) bw.write(solve() + "\n");
        bw.flush();
        bw.close();
        br.close();
    }

    public static int solve() throws IOException {
        int n = Integer.parseInt(br.readLine());
        int i;
        adj = new ArrayList[n + 1];
        for (i = 1; i <= n; i++) adj[i] = new ArrayList<Integer>();
        for (i = 0; i < n - 1; i++) {
            int u, v;
            String[] s = br.readLine().split(" ");
            u = Integer.parseInt(s[0]);
            v = Integer.parseInt(s[1]);
            adj[u].add(v);
            adj[v].add(u);
        }
        ans = 0;
        dfs(1, 0);
        return ans;
    }

    public static int dfs(int cur, int p) {
        if (adj[cur].size() == 1 && cur != 1) return 1;
        ArrayList<Integer> v = new ArrayList<Integer>();
        for (int x : adj[cur]) {
            if (x == p) continue;
            int r = dfs(x, cur);
            v.add(r);
        }
        Collections.sort(v);
        if (v.size() > 1) {
            if (cur == 1)
                ans = Math.max(ans, Math.max(v.get(v.size() - 1), v.get(v.size() - 2) + 1));
            else
                ans = Math.max(ans, v.get(v.size() - 1) + 1);
        }
        ans = Math.max(ans, v.get(0));
        return v.get(0) + 1;
    }
}
1453
F
Even Harder
Gildong is now developing a puzzle game. The puzzle consists of $n$ platforms numbered from $1$ to $n$. The player plays the game as a character that can stand on each platform and the goal of the game is to move the character from the $1$-st platform to the $n$-th platform. The $i$-th platform is labeled with an integer $a_i$ ($0 \le a_i \le n-i$). When the character is standing on the $i$-th platform, the player can move the character to any of the $j$-th platforms where $i+1 \le j \le i+a_i$. If the character is on the $i$-th platform where $a_i=0$ and $i \ne n$, the player loses the game. Since Gildong thinks the current game is not hard enough, he wants to make it even harder. He wants to change some (possibly zero) labels to $0$ so that there remains exactly one way to win. He wants to modify the game as little as possible, so he's asking you to find the \textbf{minimum} number of platforms that should have their labels changed. Two ways are different if and only if there exists a platform the character gets to in one way but not in the other way.
Since there is at least one way to win initially, every platform is reachable from the start. Note that this condition should still hold after we achieve what Gildong wants. Because of this, if there are multiple $j$'s where $j + a_j \ge i$, there are at least two ways that can get to the $i$-th platform. Therefore, in order to leave only one way to get to the $i$-th platform, we need to change all $a_j$'s where $j + a_j \ge i$ into $0$ except for one of them. We'll call this process cleaning, and the platforms where we set $a_j = 0$ are the cleaned platforms. Let $dp[i][x]$ ($i \le x$) be the minimum number of platforms that should be cleaned to leave only one way to get to the $i$-th platform, where the only platform $k$ that can directly move to the $i$-th platform (i.e. the predecessor) has a value at most $x - k$. In other words, $dp[i][x]$ only considers all $k$'s where $k \lt i \le k + a_k \le x$, and chooses the one that requires the minimum number of cleaned platforms so far. We'll determine $dp[i][i..n]$ in increasing order of $i$. Now let $cnt$ be the number of $y$'s where $j \lt y \lt i$ and $y + a_y \ge i$. Initially $cnt$ is $0$. Now for each $j$ from $i - 1$ to $1$ (downwards), if $j + a_j \ge i$, minimize $dp[i][j + a_j]$ with $dp[j][i - 1] + cnt$ and then increase $cnt$ by $1$. This means we clean all platforms between $j + 1$ and $i - 1$ to let the $j$-th platform be the predecessor of the $i$-th platform, while it still has the possibility to reach the $j + a_j$-th platform. Note that all $k$'s where $k \lt j$ and $k + a_k \ge i$ are already cleaned when calculating $dp[j][i - 1]$, so we only need to count the cleaned platforms between $j + 1$ and $i - 1$ for $cnt$. Then we can prefix-minimize $dp[i][i..n]$ because the actual meaning of $dp[i][x]$ is the minimum number of platforms that should be cleaned so that the predecessor of the $i$-th platform can reach at most the $x$-th platform; i.e. it should consider all cases where $j + a_j \lt x$ as well.
After we repeat these processes for all $i$'s, the answer is $dp[n][n]$. Though the explanation is quite complicated, the code turns out to be very short. Time complexity: $\mathcal{O}(n^2)$.
[ "dp" ]
2,700
import sys

input = sys.stdin.readline
MAX_N = 3000
inf = MAX_N


def solve():
    global inf
    n = int(input())
    a = [0] + list(map(int, input().split()))
    dp = [[0] * (n + 1) for i in range(n + 1)]
    for i in range(2, n + 1):
        cnt = 0
        for j in range(i, n + 1):
            dp[i][j] = inf
        for j in range(i - 1, 0, -1):
            if j + a[j] >= i:
                dp[i][j + a[j]] = min(dp[i][j + a[j]], dp[j][i - 1] + cnt)
                cnt += 1
        for j in range(i + 1, n + 1):
            dp[i][j] = min(dp[i][j], dp[i][j - 1])
    return dp[n][n]


if __name__ == '__main__':
    t = int(input())
    while t:
        print(solve())
        t -= 1
1454
A
Special Permutation
You are given one integer $n$ ($n > 1$). Recall that a permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2, 3, 1, 5, 4]$ is a permutation of length $5$, but $[1, 2, 2]$ is not a permutation ($2$ appears twice in the array) and $[1, 3, 4]$ is also not a permutation ($n = 3$ but there is $4$ in the array). Your task is to find a permutation $p$ of length $n$ such that there is no index $i$ ($1 \le i \le n$) with $p_i = i$ (so, for all $i$ from $1$ to $n$ the condition $p_i \ne i$ should be satisfied). You have to answer $t$ independent test cases. If there are several answers, you can print any. It can be proven that the answer exists for each $n > 1$.
There are many possible solutions. One of them is just to print $2, 3, \ldots, n, 1$.
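The cyclic shift can be sketched in one line, together with a check that it really is a derangement (a verification sketch, not the reference solution):

```python
# Sketch of the cyclic-shift construction: p = [2, 3, ..., n, 1].
def special_permutation(n):
    return list(range(2, n + 1)) + [1]

for n in range(2, 100):
    p = special_permutation(n)
    assert sorted(p) == list(range(1, n + 1))    # it is a permutation of 1..n
    assert all(p[i] != i + 1 for i in range(n))  # no fixed points (1-based)
```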
[ "constructive algorithms", "probabilities" ]
800
#include <bits/stdc++.h>

using namespace std;

int main() {
#ifdef _DEBUG
	freopen("input.txt", "r", stdin);
//	freopen("output.txt", "w", stdout);
#endif
	int t;
	cin >> t;
	while (t--) {
		int n;
		cin >> n;
		for (int i = 0; i < n; ++i) {
			cout << (i + 1) % n + 1 << " ";
		}
		cout << endl;
	}
	return 0;
}
1454
B
Unique Bid Auction
There is a game called "Unique Bid Auction". You can read more about it here: https://en.wikipedia.org/wiki/Unique_bid_auction (though you don't have to do it to solve this problem). Let's simplify this game a bit. Formally, there are $n$ participants, the $i$-th participant chose the number $a_i$. The winner of the game is such a participant that the number he chose is \textbf{unique} (i. e. nobody else chose this number except him) and is \textbf{minimal} (i. e. among all unique values of $a$ the minimum one is the winning one). Your task is to find the \textbf{index} of the participant who won the game (or -1 if there is no winner). Indexing is $1$-based, i. e. the participants are numbered from $1$ to $n$. You have to answer $t$ independent test cases.
This is a simple implementation problem. Let's calculate two values for each $i$ from $1$ to $n$: $cnt_i$ - the number of occurrences of $i$ in $a$ and $idx_i$ - any position of $i$ in $a$. Then, let's iterate through $i$ from $1$ to $n$ and, if $cnt_i = 1$, just print $idx_i$ (because if it is the only such element then we found the winner). If we didn't find any such element, we have to print -1.
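The same counting idea can be sketched compactly (a sketch in Python, not the reference solution below):

```python
# Sketch of the approach: count occurrences, keep only unique values, and
# report the 1-based position of the minimal unique value.
from collections import Counter

def winner(a):
    cnt = Counter(a)
    unique = [x for x in a if cnt[x] == 1]  # values chosen exactly once
    if not unique:
        return -1
    return a.index(min(unique)) + 1         # 1-based index of the winner

assert winner([1, 1, 3, 2]) == 4  # 3 and 2 are unique, 2 is minimal
assert winner([2, 2]) == -1       # no unique value, no winner
```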
[ "implementation" ]
800
#include <bits/stdc++.h>

using namespace std;

int main() {
#ifdef _DEBUG
	freopen("input.txt", "r", stdin);
//	freopen("output.txt", "w", stdout);
#endif
	int t;
	cin >> t;
	while (t--) {
		int n;
		cin >> n;
		vector<int> cnt(n + 1), idx(n + 1);
		for (int i = 0; i < n; ++i) {
			int x;
			cin >> x;
			++cnt[x];
			idx[x] = i + 1;
		}
		int ans = -1;
		for (int i = 0; i <= n; ++i) {
			if (cnt[i] == 1) {
				ans = idx[i];
				break;
			}
		}
		cout << ans << endl;
	}
	return 0;
}
1454
C
Sequence Transformation
You are given a sequence $a$, initially consisting of $n$ integers. You want to transform this sequence so that all elements in it are equal (i. e. it contains several occurrences of the same element). To achieve this, you choose some integer $x$ \textbf{that occurs at least once in $a$}, and then perform the following operation any number of times (possibly zero): choose some segment $[l, r]$ of the sequence and remove it. But there is one exception: \textbf{you are not allowed to choose a segment that contains $x$}. More formally, you choose some contiguous subsequence $[a_l, a_{l + 1}, \dots, a_r]$ such that $a_i \ne x$ if $l \le i \le r$, and remove it. After removal, the numbering of elements to the right of the removed segment changes: the element that was the $(r+1)$-th is now $l$-th, the element that was $(r+2)$-th is now $(l+1)$-th, and so on (i. e. the remaining sequence just collapses). Note that you \textbf{can not change} $x$ after you chose it. For example, suppose $n = 6$, $a = [1, 3, 2, 4, 1, 2]$. Then one of the ways to transform it in two operations is to choose $x = 1$, then: - choose $l = 2$, $r = 4$, so the resulting sequence is $a = [1, 1, 2]$; - choose $l = 3$, $r = 3$, so the resulting sequence is $a = [1, 1]$. Note that choosing $x$ is not an operation. Also, note that you \textbf{can not} remove any occurrence of $x$. Your task is to find the \textbf{minimum} number of operations required to transform the sequence in a way described above. You have to answer $t$ independent test cases.
Firstly, let's remove all consecutive equal elements (just keep one occurrence of each such element). For example, the array $[1, 1, 2, 3, 3, 3, 2]$ becomes $[1, 2, 3, 2]$. Now, the answer for each $a_i$ is almost the number of its occurrences plus one. Why is it so? Because we need to remove all segments of elements between every pair of consecutive occurrences of $a_i$. The number of such segments is the number of occurrences of $a_i$ minus one. There is also a segment before the first occurrence of $a_i$ and a segment after the last occurrence of $a_i$. But the first segment doesn't exist for the first element and the last segment doesn't exist for the last element. So, after removing consecutive elements, let's calculate for each $a_i$ the number of its occurrences plus one, subtract one from the value of the first element and from the value of the last element. Then the answer is the minimum among these values for all $a_i$.
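The procedure above can be sketched directly (a short Python sketch, not the reference solution below):

```python
# Sketch of the counting: collapse equal consecutive elements, count
# occurrences plus one for each value, then discount the first and last.
from collections import Counter
from itertools import groupby

def min_operations(a):
    a = [k for k, _ in groupby(a)]  # keep one element per run of equal values
    need = Counter(a)
    for v in need:
        need[v] += 1                # segments between and around occurrences
    need[a[0]] -= 1                 # no segment before the first element
    need[a[-1]] -= 1                # no segment after the last element
    return min(need.values())

assert min_operations([1, 3, 2, 4, 1, 2]) == 2    # example from the statement
assert min_operations([1, 1, 2, 3, 3, 3, 2]) == 1
```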
[ "greedy", "implementation" ]
1,200
#include <bits/stdc++.h>

using namespace std;

int main() {
#ifdef _DEBUG
	freopen("input.txt", "r", stdin);
//	freopen("output.txt", "w", stdout);
#endif
	int t;
	cin >> t;
	while (t--) {
		int n;
		cin >> n;
		vector<int> a(n);
		for (auto &it : a) cin >> it;
		vector<int> res(n + 1, 1);
		n = unique(a.begin(), a.end()) - a.begin();
		a.resize(n);
		for (int i = 0; i < n; ++i) {
			res[a[i]] += 1;
		}
		res[a[0]] -= 1;
		res[a[n - 1]] -= 1;
		int ans = 1e9;
		for (int i = 0; i < n; ++i) {
			ans = min(ans, res[a[i]]);
		}
		cout << ans << endl;
	}
	return 0;
}
1454
D
Number into Sequence
You are given an integer $n$ ($n > 1$). Your task is to find a sequence of integers $a_1, a_2, \ldots, a_k$ such that: - each $a_i$ is strictly greater than $1$; - $a_1 \cdot a_2 \cdot \ldots \cdot a_k = n$ (i. e. the product of this sequence is $n$); - $a_{i + 1}$ is divisible by $a_i$ for each $i$ from $1$ to $k-1$; - $k$ is the \textbf{maximum} possible (i. e. the length of this sequence is the \textbf{maximum} possible). If there are several such sequences, any of them is acceptable. It can be proven that at least one valid sequence always exists for any integer $n > 1$. You have to answer $t$ independent test cases.
Consider $n$ in this canonical form ${p_1}^{a_1} \cdot {p_2}^{a_2} \cdot \ldots \cdot {p_k}^{a_k}$ (just find the prime factorization of $n$). Let $i$ be such an index that $a_i$ is the maximum among all values of $a$. Then the answer length can not exceed $a_i$. This is because if the answer has greater length, then some number doesn't have $p_i$ in its representation (thus, there will be problems with divisibility because we are considering primes). So, let's create the answer of length $a_i$ consisting of $p_i$. Then let's just multiply the last element by all other primes in their degrees. So, we satisfied the divisibility rule and the length of the answer is the maximum possible.
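The construction can be sketched as follows (a Python sketch assuming trial-division factorization is fast enough for the constraints; the reference C++ solution is below):

```python
# Sketch of the construction: factorize n, repeat the prime with the largest
# exponent, and fold every other prime power into the last term.
def build_sequence(n):
    factors = []                 # (exponent, prime) pairs
    d = 2
    while d * d <= n:
        cnt = 0
        while n % d == 0:
            n //= d
            cnt += 1
        if cnt:
            factors.append((cnt, d))
        d += 1
    if n > 1:
        factors.append((1, n))   # remaining prime factor, if any
    factors.sort(reverse=True)   # maximum exponent first
    exp, p = factors[0]
    ans = [p] * exp
    for cnt, q in factors[1:]:
        ans[-1] *= q ** cnt      # keeps the divisibility chain intact
    return ans

assert build_sequence(360) == [2, 2, 90]  # 360 = 2^3 * 3^2 * 5
assert build_sequence(2) == [2]
```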
[ "constructive algorithms", "math", "number theory" ]
1,300
#include <bits/stdc++.h>

using namespace std;

int main() {
#ifdef _DEBUG
	freopen("input.txt", "r", stdin);
//	freopen("output.txt", "w", stdout);
#endif
	int t;
	cin >> t;
	while (t--) {
		long long n;
		cin >> n;
		vector<pair<int, long long>> val;
		for (long long i = 2; i * i <= n; ++i) {
			int cnt = 0;
			while (n % i == 0) {
				++cnt;
				n /= i;
			}
			if (cnt > 0) {
				val.push_back({cnt, i});
			}
		}
		if (n > 1) {
			val.push_back({1, n});
		}
		sort(val.rbegin(), val.rend());
		vector<long long> ans(val[0].first, val[0].second);
		for (int i = 1; i < int(val.size()); ++i) {
			for (int j = 0; j < val[i].first; ++j) {
				ans.back() *= val[i].second;
			}
		}
		cout << ans.size() << endl;
		for (auto it : ans) cout << it << " ";
		cout << endl;
	}
	return 0;
}
1454
E
Number of Simple Paths
You are given an \textbf{undirected} graph consisting of $n$ vertices and $n$ edges. It is guaranteed that the given graph is \textbf{connected} (i. e. it is possible to reach any vertex from any other vertex) and there are no self-loops and multiple edges in the graph. Your task is to calculate the number of \textbf{simple paths} of length \textbf{at least} $1$ in the given graph. Note that paths that differ only by their direction are considered the same (i. e. you have to calculate the number of undirected paths). For example, paths $[1, 2, 3]$ and $[3, 2, 1]$ are considered the same. You have to answer $t$ independent test cases. Recall that a path in the graph is a sequence of vertices $v_1, v_2, \ldots, v_k$ such that each pair of adjacent (consecutive) vertices in this sequence is connected by an edge. The length of the path is the number of edges in it. A \textbf{simple path} is such a path that all vertices in it are distinct.
Because our graph is just a tree with an additional edge, consider it as a cycle with trees hung on cycle vertices. Consider some tree hung on a vertex $v$ on the cycle. There is only one path between each pair of its vertices (including the root, which is the vertex $v$). So, if the tree has $cnt_v$ vertices, then $\frac{cnt_v(cnt_v-1)}{2}$ paths are added to the answer. What about paths that go out of a tree? Let's assume that there are $cnt_v \cdot (n - cnt_v)$ such paths (yeah, we counted only half of the actual paths from this component but this is fine). When we consider other trees, we will take into account the other half of paths. This information can lead us to the conclusion that the only information we need to know about trees hung on cycle vertices is the number of vertices in these trees. So, if we know $cnt_v$ for each vertex on the cycle, we can just calculate the answer as $\sum\limits_{v \in cycle} \frac{cnt_v(cnt_v-1)}{2} + cnt_v \cdot (n - cnt_v)$. So how to find values $cnt_v$? Of course, there is a simple and straightforward solution "just extract and mark all cycle vertices and run dfs from every vertex of the cycle", but there is another approach without any graph algorithms that works very well for such kind of graphs. Initially, let $cnt_v = 1$ for each $v$ from $1$ to $n$. Let's create a queue containing all leaves of the graph. Let's take the leaf $x$, get its parent $p$, add $cnt_p := cnt_p + cnt_x$ and remove the vertex $x$ with all edges incident to it. After that, if $p$ became a leaf, let's add it to the queue. We can see that after processing all leaves only cycle vertices remain in the graph, and $cnt_v$ is exactly the number of vertices in its tree (and we can just calculate the answer using the formula above). This approach can be implemented in $O(n \log n)$ or in $O(n)$, there is almost no difference, but the $O(n \log n)$ one can be written a bit simpler than a linear one.
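As a small cross-check of the closed formula (a brute-force sketch, not the intended algorithm), consider a 3-cycle $1$-$2$-$3$ with vertex $4$ hanging off vertex $1$, so $cnt_1 = 2$ and $cnt_2 = cnt_3 = 1$:

```python
# Brute-force verification: enumerate all simple paths of length >= 1 by DFS
# and compare with sum over cycle vertices of cnt*(cnt-1)/2 + cnt*(n-cnt).
def count_simple_paths(n, edges):
    adj = [[] for _ in range(n + 1)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    total = 0
    def dfs(v, visited):
        nonlocal total
        for to in adj[v]:
            if to not in visited:
                total += 1
                dfs(to, visited | {to})
    for s in range(1, n + 1):
        dfs(s, {s})
    return total // 2  # each undirected path was counted from both endpoints

n = 4
cnt = {1: 2, 2: 1, 3: 1}  # sizes of the trees hung on the cycle vertices
formula = sum(c * (c - 1) // 2 + c * (n - c) for c in cnt.values())
assert formula == count_simple_paths(n, [(1, 2), (2, 3), (1, 3), (1, 4)]) == 11
```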
[ "combinatorics", "dfs and similar", "graphs", "trees" ]
2,000
#include <bits/stdc++.h>

using namespace std;

int main() {
#ifdef _DEBUG
	freopen("input.txt", "r", stdin);
//	freopen("output.txt", "w", stdout);
#endif
	int t;
	cin >> t;
	while (t--) {
		int n;
		cin >> n;
		vector<set<int>> g(n);
		for (int i = 0; i < n; ++i) {
			int x, y;
			cin >> x >> y;
			--x, --y;
			g[x].insert(y);
			g[y].insert(x);
		}
		vector<int> val(n, 1);
		queue<int> leafs;
		for (int i = 0; i < n; ++i) {
			if (g[i].size() == 1) {
				leafs.push(i);
			}
		}
		while (!leafs.empty()) {
			int v = leafs.front();
			leafs.pop();
			int to = *g[v].begin();
			val[to] += val[v];
			val[v] = 0;
			g[v].clear();
			g[to].erase(v);
			if (g[to].size() == 1) {
				leafs.push(to);
			}
		}
		long long ans = 0;
		for (int i = 0; i < n; ++i) {
			ans += val[i] * 1ll * (val[i] - 1) / 2;
			ans += val[i] * 1ll * (n - val[i]);
		}
		cout << ans << endl;
	}
	return 0;
}
1454
F
Array Partition
You are given an array $a$ consisting of $n$ integers. Let $min(l, r)$ be the minimum value among $a_l, a_{l + 1}, \ldots, a_r$ and $max(l, r)$ be the maximum value among $a_l, a_{l + 1}, \ldots, a_r$. Your task is to choose three \textbf{positive} (greater than $0$) integers $x$, $y$ and $z$ such that: - $x + y + z = n$; - $max(1, x) = min(x + 1, x + y) = max(x + y + 1, n)$. In other words, you have to split the array $a$ into three consecutive non-empty parts that cover the whole array and the maximum in the first part equals the minimum in the second part and equals the maximum in the third part (or determine it is impossible to find such a partition). Among all such triples (partitions), you can choose any. You have to answer $t$ independent test cases.
Let's fix the length of the first block (iterate through $i$ from $0$ to $n-3$). Let's also try to maximize the length of the third block using the second pointer. So, initially the length of the first block is $1$ and the maximum in the block is $a_0$ (after that, its length will be $2$ and the maximum will be $max(a_0, a_1)$, and so on). Let's move the left border of the third block $r$ while $a_r \le a_0$ and the second block has at least one element. After we expanded the third block, consider some cases: if its maximum is less than $a_0$, then we don't need to consider this partition (we expanded the third block as much as we could but didn't find the required maximum). Otherwise, its maximum fits our conditions. Then let's find the minimum in the second block. Let it be $mn$. If $mn < mx$, this partition is also bad. If $mn = mx$, this partition is good, and we can just print it. Otherwise, $mn > mx$, and we need to fix the second block somehow. I claim that we only need to expand it by at most one element. Let's see why is it true. Consider we want to expand it to the right (the other case just uses the same logic). If we expanded our block by one element to the right and the new value in it is less than $mx$, then this partition is bad, and we cannot use it (this is also the reason why we are trying to maximize the length of the third block). Otherwise, this value is always $mx$, because the first and the third blocks didn't have values greater than $mx$. There are some cases in the implementation we need to consider carefully: first, we need to find the minimum in the second block fast enough. This can be done if we store it as a multiset (set with repetitions). Second, we can remove the only maximum in the third (or the first) block by expanding the second one. This can be handled easily if we just store all three blocks (not only the second one) as multisets.
And the last case is that we sometimes need to move the left border of the third block to the right. This happens when we expand the first block and the second block becomes empty. So, this happens only $O(n)$ times in total. Total time complexity of this solution is $O(n \log n)$. There is also another solution from Gassa: Let us start solving by asking the following question: in which segments will the maximal values end up? Consider the maximum value $m$ in the whole array. If there are at least three of them, a valid answer is to pick any maximum except the first and the last as the middle segment. For example, "3 2 3 1 3 2 3" ($m = 3$) can be partitioned as "3 2[3]1 3 2 3" or as "3 2 3 1[3]2 3", where the middle segment is marked with square brackets. Otherwise, all $m$ should end up in the middle segment, so that maximums to the left and to the right are less. For example, in "1 2 3 1 3 2 1", we have to put both threes into the middle segment. Let us now calculate the minimum value $v$ on the middle segment, considering the numbers between those equal to $m$. For example, in "1 2[3 1 3]2 1" the value $v = \min (3, 1, 3) = 1$. All numbers greater than $v$ should also go into the middle segment, and everything between them, which can result in $v$ decreasing even more. We have to expand the borders of the middle segment until the minimum value is such $v$ that there are only values not greater than $v$ left outside. Continuing with the example, we see that in "1 2[3 1 3]2 1", the middle segment should be expanded to "1[2 3 1 3 2]1". Conversely, if the maximum $m$ was unique, then $v = m$, and we didn't have to expand the middle segment at this stage. For example, in "1 2 3 2 1", the current state is "1 2[3]2 1". If the maximums to the left and to the right are exactly $v$, we found an answer. Otherwise, any possible middle segment contains the middle segment that we now got. So we have to expand the middle segment, either to the left or to the right. 
As a result, the value $v$ may become smaller, which can cause another expansion. What's left is to decide where to expand when we have a choice. We would like to do it greedily. Indeed, consider three values: the minimum on the middle segment $v$, the maximum on the left segment $u$ and the maximum on the right segment $w$. When we expand the middle segment, each of them can only decrease. And if we find an answer, the equation $u = v = w$ will be satisfied. So, as we make the minimum of these three values smaller, we have fewer and fewer opportunities to make them equal. Thus we can pick the side of expansion after which the value $\min (u, v, w)$ is larger, and if these are equal, pick either. For example, in the state "1 3[5 4 5]2 3 1", we have $u = 3$, $v = 4$, and $w = 3$. If we expand to the left, we get "1[3 5 4 5]2 3 1" where $u = 1$, $v = 3$, and $w = 3$. If we expand to the right, we get "1 3[5 4 5 2]3 1" where $u = 3$, $v = 2$, and $w = 3$. Our algorithm will pick expanding to the right; however, there is no valid answer in this example anyway. Implementation: let us precalculate the maximums on all prefixes and all suffixes of the array. Then we can expand the middle segment by one element in $O (1)$, and the total running time is linear.
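As an illustration, here is a hypothetical Python transcription of this greedy expansion (mirroring the D reference solution included with this editorial); `partition` returns the three block lengths, or `None` when no valid partition exists (the function name and return convention are ours, not part of the original solution):

```python
def partition(a):
    # Split a into three non-empty consecutive blocks with
    # max(left) == min(middle) == max(right); return the block
    # lengths (len1, len2, len3) or None if impossible.
    n = len(a)
    pre, suf = a[:], a[:]                 # prefix / suffix maxima
    for i in range(1, n):
        pre[i] = max(pre[i], pre[i - 1])
    for i in range(n - 2, -1, -1):
        suf[i] = max(suf[i], suf[i + 1])
    v = max(a)
    places = [i for i, e in enumerate(a) if e == v]
    lo = places[len(places) // 2]         # start at a middle occurrence of the maximum
    hi = lo + 1                           # middle segment is a[lo:hi]
    while True:
        if lo == 0 or hi == n:
            return None
        if pre[lo - 1] == v and suf[hi] == v:
            return (lo, hi - lo, n - hi)
        # greedy choice: compare the outcomes of expanding left vs right
        u = float('-inf') if lo - 1 == 0 else min(pre[lo - 2], a[lo - 1])
        w = float('-inf') if hi + 1 >= n else min(suf[hi + 1], a[hi])
        if u > w:
            v = min(v, a[lo - 1]); lo -= 1
        else:
            v = min(v, a[hi]); hi += 1
```

On the examples above, `partition([3, 2, 3, 1, 3, 2, 3])` picks a single-element middle segment, while `partition([1, 3, 5, 4, 5, 2, 3, 1])` returns `None`.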
[ "binary search", "data structures", "greedy", "two pointers" ]
2,100
// Author: Ivan Kazmenko (gassa@mail.ru) module solution; import std.algorithm; import std.conv; import std.range; import std.stdio; import std.string; void main () { auto tests = readln.strip.to !(int); foreach (test; 0..tests) { auto n = readln.strip.to !(int); auto a = readln.splitter.map !(to !(int)).array; auto x = a.dup; foreach (i; 1..n) { x[i] = max (x[i], x[i - 1]); } auto y = a.dup; foreach_reverse (i; 0..n - 1) { y[i] = max (y[i], y[i + 1]); } auto v = a.maxElement; auto maxPlaces = n.iota.filter !(i => a[i] == v).array; int lo = maxPlaces[$ / 2]; int hi = lo + 1; while (true) { if (lo == 0 || hi == n) { writeln ("NO"); break; } if (x[lo - 1] == v && y[hi] == v) { writeln ("YES"); writeln (lo, " ", hi - lo, " ", n - hi); break; } int u = (lo - 1 == 0) ? int.min : min (x[lo - 2], a[lo - 1]); int w = (hi + 1 >= n) ? int.min : min (y[hi + 1], a[hi]); if (u > w) { v = min (v, a[lo - 1]); lo -= 1; } else { v = min (v, a[hi]); hi += 1; } } } }
1455
A
Strange Functions
Let's define a function $f(x)$ ($x$ is a positive integer) as follows: write all digits of the decimal representation of $x$ backwards, then get rid of the leading zeroes. For example, $f(321) = 123$, $f(120) = 21$, $f(1000000) = 1$, $f(111) = 111$. Let's define another function $g(x) = \dfrac{x}{f(f(x))}$ ($x$ is a positive integer as well). Your task is the following: for the given positive integer $n$, calculate the number of different values of $g(x)$ among all numbers $x$ such that $1 \le x \le n$.
Let's analyze which values the function $g(x)$ can take. It can be proven that the value of $g(x)$ is equal to $10^k$, where $k$ is the number of zero digits at the end of the number $x$, because $f(f(x))$ is the same number as $x$ except that it doesn't have any trailing zeroes. Okay, now let's analyze when we reach a new value of $g(x)$. $1$ is the first value of $x$ such that $g(x) = 1$, $10$ is the first value of $x$ such that $g(x) = 10$, $100$ is the first value of $x$ such that $g(x) = 100$, and so on. We have to calculate the maximum number of the form $10^k$ that is not greater than $n$, and the answer is exactly $k + 1$. It can be done mathematically, but the simplest way is to read $n$ as a string instead and take its length.
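The claim that the answer equals the number of digits of $n$ can be brute-force checked for small $n$ (the helper names below are ours):

```python
def f(x):
    # reverse the decimal digits and drop leading zeroes
    return int(str(x)[::-1])

def count_g_values(n):
    # number of distinct g(x) = x / f(f(x)) over 1 <= x <= n;
    # the division is exact because f(f(x)) is x without its trailing zeroes
    return len({x // f(f(x)) for x in range(1, n + 1)})
```

For any $n$ this agrees with `len(str(n))`, e.g. `count_g_values(998)` is `3`.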
[ "math", "number theory" ]
800
for i in range(int(input())): print(len(input()))
1455
B
Jumps
You are standing on the $\mathit{OX}$-axis at point $0$ and you want to move to an integer point $x > 0$. You can make several jumps. Suppose you're currently at point $y$ ($y$ may be negative) and jump for the $k$-th time. You can: - either jump to the point $y + k$ - or jump to the point $y - 1$. What is the minimum number of jumps you need to reach the point $x$?
At first, let's jump with $+k$ while $x$ is still greater than the current position. We finish at some position $pos = 1 + 2 + \dots + steps = \frac{steps (steps + 1)}{2} \ge x$. Note that $0 \le pos - x < steps$; otherwise, we wouldn't have made the last step. If $pos = x$ then we are lucky to finish right at point $x$. Otherwise, let's look at what happens if we replace one $+k$ with $-1$. Basically, we'll finish at $pos' = pos - (k + 1)$. And since $k \in [1, steps]$, we have $pos' \in [pos - steps - 1, pos - 2]$. We know that $pos - steps < x$, so if $x < pos - 1$ then we can choose the corresponding $k = pos - x - 1$, replace $+k$ with $-1$ and get straight to the point $x$. But if $x + 1 = pos$ then we need one extra operation $-1$. To calculate $steps$ fast we can note that we need at least $steps = \sqrt{2 \cdot x} - 1$ since $steps (steps + 1) \le (steps + 1)^2 \le 2x$, and then we can increase $steps$ while $steps (steps + 1) < 2x$.
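A minimal Python sketch of this formula, together with a brute-force search over all reachable positions (both functions are illustrative, not the reference solution):

```python
def min_jumps(x):
    # smallest `steps` with steps * (steps + 1) / 2 >= x
    steps = 0
    while steps * (steps + 1) < 2 * x:
        steps += 1
    # the only bad overshoot is pos == x + 1: replacing some +k by -1
    # shifts the final position by k + 1 >= 2, so one extra jump is needed
    if steps * (steps + 1) // 2 == x + 1:
        steps += 1
    return steps

def min_jumps_bruteforce(x):
    # the k-th move is either +k or -1; track the set of reachable positions
    cur, k = {0}, 0
    while x not in cur:
        k += 1
        cur = {y + k for y in cur} | {y - 1 for y in cur}
    return k
```

For instance, `min_jumps(2)` is `3`: jumps $+1$, $+2$ overshoot to $3 = x + 1$, so a third jump $-1$ is required.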
[ "constructive algorithms", "math" ]
1,200
#include<bits/stdc++.h> using namespace std; int main() { int t; cin >> t; while (t--) { int x; cin >> x; int steps = 0; while (steps * (steps + 1) < 2 * x) steps++; if (steps * (steps + 1) / 2 == x + 1) steps++; cout << steps << endl; } }
1455
C
Ping-pong
Alice and Bob play ping-pong with simplified rules. During the game, the player serving the ball commences a play. The server strikes the ball then the receiver makes a return by hitting the ball back. Thereafter, the server and receiver must alternately make a return until one of them doesn't make a return. The one who doesn't make a return loses this play. The winner of the play commences the next play. Alice starts the first play. Alice has $x$ stamina and Bob has $y$. To hit the ball (while serving or returning) each player spends $1$ stamina, so if they don't have any stamina, they can't return the ball (and lose the play) or can't serve the ball (in this case, the other player serves the ball instead). If both players run out of stamina, the game is over. Sometimes, it's strategically optimal not to return the ball, lose the current play, but save the stamina. On the contrary, when the server commences a play, they have to hit the ball, if they have some stamina left. Both Alice and Bob play optimally and want to, firstly, maximize their number of wins and, secondly, minimize the number of wins of their opponent. Calculate the resulting number of Alice's and Bob's wins.
Let's find an answer for a slightly different version of the game. Let's say that $f(x, y)$ is the final score if the first player has $x$ stamina and the second has $y$ stamina. The first player can either hit the ball or give up and lose the play. How to calculate $f(x, y)$? Obviously, $f(0, x) = (0, x)$ and $f(x, 0) = (x, 0)$. Otherwise, the first player can either hit the ball: then the player spends $1$ stamina and now it's up to the second player to decide whether to hit or lose. So basically, we moved to the state $f(y, x - 1)$ and the answer in this case is $\mathit{rev}(f(y, x - 1))$ where $\mathit{rev}(a, b) = (b, a)$; or lose the play: then the player doesn't spend any stamina, but the opponent has to serve the ball. The opponent serves the ball, spends $1$ stamina and we return to the state where the first player decides whether to hit or lose. Formally, the answer in this case is $f(x, y - 1) + (0, 1)$. Looking at $f(0, x) = (0, x)$, $f(x, 0) = (x, 0)$ and one of the transitions $f(x, y - 1) + (0, 1)$ we can guess that $f(x, y) = (x, y)$ and prove it by induction: $f(x, y)$ is either $\mathit{rev}(f(y, x - 1))$ or $f(x, y - 1) + (0, 1)$, but $\mathit{rev}(f(y, x - 1)) = \mathit{rev}(y, x - 1) = (x - 1, y)$ and $f(x, y - 1) + (0, 1) = (x, y - 1) + (0, 1) = (x, y)$, and $(x, y)$ is better than $(x - 1, y)$, so $f(x, y) = (x, y)$. The final step is to note that since Alice starts the first play and has to serve the ball, the answer is $\mathit{rev}(f(y, x - 1)) = \mathit{rev}(y, x - 1) = (x - 1, y)$.
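The induction can be double-checked by memoizing the recurrence directly (a sketch; `game`, which applies Alice's forced first serve, is our own wrapper):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(x, y):
    # (wins of the player to move, wins of the opponent), where the
    # player to move has x stamina and decides: hit or give up
    if x == 0:
        return (0, y)
    if y == 0:
        return (x, 0)
    a, b = f(y, x - 1)                   # hit: roles swap, answer is rev(...)
    c, d = f(x, y - 1)                   # give up: opponent wins a play, serves next
    # prefer more own wins, then fewer opponent wins
    return max((b, a), (c, d + 1), key=lambda t: (t[0], -t[1]))

def game(x, y):
    # Alice must serve first, spending 1 stamina; then Bob decides
    b_wins, a_wins = f(y, x - 1)
    return (a_wins, b_wins)
```

As the induction predicts, `f(x, y) == (x, y)` for all small states, and `game(x, y) == (x - 1, y)`.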
[ "constructive algorithms", "games", "math" ]
1,100
#include <bits/stdc++.h> using namespace std; void solve() { int x, y; cin >> x >> y; cout << x - 1 << " " << y << endl; } int main() { int t; cin >> t; while (t--) solve(); }
1455
D
Sequence and Swaps
You are given a sequence $a$ consisting of $n$ integers $a_1, a_2, \dots, a_n$, and an integer $x$. Your task is to make the sequence $a$ sorted (it is considered sorted if the condition $a_1 \le a_2 \le a_3 \le \dots \le a_n$ holds). To make the sequence sorted, you may perform the following operation any number of times you want (possibly zero): choose an integer $i$ such that $1 \le i \le n$ and $a_i > x$, and swap the values of $a_i$ and $x$. For example, if $a = [0, 2, 3, 5, 4]$, $x = 1$, the following sequence of operations is possible: - choose $i = 2$ (it is possible since $a_2 > x$), then $a = [0, 1, 3, 5, 4]$, $x = 2$; - choose $i = 3$ (it is possible since $a_3 > x$), then $a = [0, 1, 2, 5, 4]$, $x = 3$; - choose $i = 4$ (it is possible since $a_4 > x$), then $a = [0, 1, 2, 3, 4]$, $x = 5$. Calculate the minimum number of operations you have to perform so that $a$ becomes sorted, or report that it is impossible.
The main fact that allows us to solve this problem is that the value of $x$ always increases after swaps, and since the resulting sequence should be sorted, the indices of elements we swap with $x$ also increase. This observation is actually enough for us to implement a dynamic programming solution of the form "$dp_{i, j}$ is the minimum number of actions we have to perform to reach the following situation: the last integer we swapped with $x$ was $a_i$, and the current value of $a_i$ is $j$". Depending on your implementation, it works either in $O(n^3)$ or in $O(n^2)$. But there exists a much simpler to code greedy solution: scan the array from left to right until it is sorted, find the first element such that we can apply the operation to it, and apply that operation to it. Implementing it in $O(n^2)$ or even in $O(n)$ is easy, but proving it is a bit harder. The key fact required to prove it is that if we can apply an operation to some position, but don't do it and instead apply this operation to some position to the right of that one, the elements at these two positions are no longer sorted (if we can apply the operation to some position $i$, then $a_i > x$, but if we apply the operation to position $j$ instead, then after it $a_i > a_j$). Since we can't go backward, the resulting array cannot be sorted by any means: that's why we can't skip elements in this greedy solution.
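A short $O(n^2)$ sketch of this greedy (illustrative, not the reference implementation):

```python
def min_ops(a, x):
    # while the array is unsorted, apply the operation at the
    # leftmost position where it is allowed (a[i] > x)
    a = list(a)
    ops = 0
    while a != sorted(a):
        i = next((j for j, v in enumerate(a) if v > x), None)
        if i is None:            # still unsorted, but no operation applicable
            return -1
        a[i], x = x, a[i]
        ops += 1
    return ops
```

On the example from the statement, `min_ops([0, 2, 3, 5, 4], 1)` performs exactly the three swaps described there.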
[ "dp", "greedy", "sortings" ]
1,600
#include<bits/stdc++.h> using namespace std; #define fore(i, l, r) for(int i = int(l); i < int(r); i++) #define sz(a) int((a).size()) #define x first #define y second typedef long long li; typedef long double ld; typedef pair<int, int> pt; template<class A, class B> ostream& operator <<(ostream& out, const pair<A, B> &p) { return out << "(" << p.x << ", " << p.y << ")"; } template<class A> ostream& operator <<(ostream& out, const vector<A> &v) { out << "["; fore(i, 0, sz(v)) { if(i) out << ", "; out << v[i]; } return out << "]"; } const int INF = int(1e9); const li INF64 = li(1e18); const ld EPS = 1e-9; int n, x; vector<int> a; inline bool read() { if(!(cin >> n >> x)) return false; a.resize(n); fore (i, 0, n) cin >> a[i]; return true; } inline void solve() { vector<int> sf(n + 1, 0); sf[n] = sf[n - 1] = 1; for (int i = n - 2; i >= 0; i--) { if (a[i] <= a[i + 1]) sf[i] = sf[i + 1]; } int ans = 0; int uk = 0; while (true) { int np = uk; while (np < n && a[np] <= x) np++; fore (i, uk, np) { if (i == 0) continue; if (a[i - 1] > a[i]) { cout << -1 << endl; return; } } if (sf[np]) break; assert(a[np] > x); swap(a[np], x); ans++; uk = np; } cout << ans << endl; } int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); int tt = clock(); #endif ios_base::sync_with_stdio(false); cin.tie(0), cout.tie(0); cout << fixed << setprecision(15); int t; cin >> t; while(t--) { read(); solve(); #ifdef _DEBUG cerr << "TIME = " << clock() - tt << endl; tt = clock(); #endif } return 0; }
1455
E
Four Points
You are given four different integer points $p_1$, $p_2$, $p_3$ and $p_4$ on $\mathit{XY}$ grid. In one step you can choose one of the points $p_i$ and move it in one of four directions by one. In other words, if you have chosen point $p_i = (x, y)$ you can move it to $(x, y + 1)$, $(x, y - 1)$, $(x + 1, y)$ or $(x - 1, y)$. Your goal is to move the points in such a way that they will form a square with sides parallel to $\mathit{OX}$ and $\mathit{OY}$ axes (a square with side $0$ is allowed). What is the minimum number of steps you need to make such a square?
Let's discuss two approaches to this problem. Firstly, let's think that we choose not four destination points but four lines on which the sides of the square lie. These are two vertical lines with coordinates $x_1$ and $x_2$ and two horizontal lines $y_1$ and $y_2$ (of course, $|x_1 - x_2| = |y_1 - y_2|$). The first approach is to note that either both $x_1$ and $x_2$ coincide with some $(p_i.x)$-s and $y_1$ coincides with one of the $(p_i.y)$, or both $y_1$ and $y_2$ coincide with some $(p_i.y)$-s and $x_1$ coincides with one of the $(p_i.x)$. $\underset{(\text{two cases})}{2} \cdot \binom{4}{2} \cdot 4 \cdot \underset{y_2 = y_1 \pm |x_1 - x_2|}{2} \cdot 4! \cdot 4 \approx 10^4$ The second approach is more clever and faster. Let's assign a role to each point $p_i$: which vertex of the final square this $p_i$ will be: left-bottom, left-top, right-bottom or right-top. There are $4!$ such assignments. For simplicity let's say that the left-bottom vertex is $a$, left-top is $b$, right-bottom is $c$ and right-top is $d$. If we rewrite our total distance formulas, we can note that instead of summing the distance between points, we can sum the distance from the lines to the points which should lie on them. In other words, we can calculate the answer as $(|a_x - x_1| + |b_x - x_1|) + (|c_x - x_2| + |d_x - x_2|) + \\ + (|a_y - y_1| + |c_y - y_1|) + (|b_y - y_2| + |d_y - y_2|).$ Let's look at the left side $x_1$. If $\min(a_x, b_x) \le x_1 \le \max(a_x, b_x)$ then the total distance is always $\max(a_x, b_x) - \min(a_x, b_x)$, so $x_1$ has its optimal segment of values. Analogously, $x_2$ also has its optimal segment of values $\min(c_x, d_x) \le x_2 \le \max(c_x, d_x)$. Since we can choose $x_1$ as any value in its segment and $x_2$ as any value in its segment, the possible distance $|x_1 - x_2|$ (the side of the square) also forms a segment of possible values (let's name it $[xSeg.l, xSeg.r]$) and can be calculated with a pretty easy formula. 
On the other hand, we can do the same with the horizontal sides and compute the segment of optimal side lengths $[ySeg.l, ySeg.r]$ in the same manner. Now, if $xSeg$ and $ySeg$ intersect, then we can choose a side length $k$ that lies in both $xSeg$ and $ySeg$ and is optimal for both the vertical and the horizontal sides. Otherwise, without loss of generality, $xSeg.r < ySeg.l$, and if we choose $k = ySeg.l$ we need to pay an extra "fee" since we are out of $xSeg$: it means that the chosen $k$ is greater than the optimal $xSeg.r$, so we need to choose, for example, $x_1$ outside $[\min(a_x, b_x), \max(a_x, b_x)]$. And we need to pay $+2$ for each step outside of this segment. In other words, the answer for a fixed permutation is $(\max(a_x, b_x) - \min(a_x, b_x)) + (\max(c_x, d_x) - \min(c_x, d_x)) + \\ + (\max(a_y, c_y) - \min(a_y, c_y)) + (\max(b_y, d_y) - \min(b_y, d_y)) + \\ + 2 \cdot \max(0, \max(xSeg.l, ySeg.l) - \min(xSeg.r, ySeg.r)).$
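A compact Python version of the second approach (mirroring the C++ reference implementation accompanying this editorial; in this ordering $p_0/p_3$ and $p_1/p_2$ share the vertical sides, $p_0/p_1$ and $p_2/p_3$ the horizontal ones; the function names are ours):

```python
from itertools import permutations

def seg(a, b):
    return (min(a, b), max(a, b))

def dist_range(s, t):
    # range of |u - v| over u in segment s, v in segment t
    return (max(s[0] - t[1], t[0] - s[1], 0), max(t[1] - s[0], s[1] - t[0]))

def min_steps(points):
    best = float('inf')
    for p0, p1, p2, p3 in permutations(points):
        x1, x2 = seg(p0[0], p3[0]), seg(p1[0], p2[0])   # vertical sides
        y1, y2 = seg(p0[1], p1[1]), seg(p2[1], p3[1])   # horizontal sides
        cur = (x1[1] - x1[0]) + (x2[1] - x2[0]) \
            + (y1[1] - y1[0]) + (y2[1] - y2[0])
        xs, ys = dist_range(x1, x2), dist_range(y1, y2)
        # fee of 2 per unit when the side-length segments do not intersect
        cur += 2 * max(0, max(xs[0], ys[0]) - min(xs[1], ys[1]))
        best = min(best, cur)
    return best
```

For the "diamond" $(0,2), (4,2), (2,0), (2,4)$ the minimum is $8$ (each point moves two steps to a corner of the square with corners $(1,1)$ and $(3,3)$).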
[ "brute force", "constructive algorithms", "flows", "geometry", "greedy", "implementation", "math", "ternary search" ]
2,400
#include<bits/stdc++.h> using namespace std; #define fore(i, l, r) for(int i = int(l); i < int(r); i++) #define sz(a) int((a).size()) #define x first #define y second typedef long long li; typedef pair<li, li> pt; const li INF64 = li(1e18); pt p[4]; inline bool read() { fore (i, 0, 4) { if(!(cin >> p[i].x >> p[i].y)) return false; } return true; } li len(const pt &a) { assert(a.y >= a.x); return a.y - a.x; } pt getSeg(li a, li b) { return { min(a, b), max(a, b) }; } pt getOpt(const pt &a, const pt &b) { return { max({ a.x - b.y, b.x - a.y, 0LL }), max({ b.y - a.x, a.y - b.x, 0LL }) }; } inline void solve() { li ans = INF64; vector<int> id = { 0, 1, 2, 3 }; do { li cur = 0; auto x1 = getSeg(p[id[0]].x, p[id[3]].x); auto x2 = getSeg(p[id[1]].x, p[id[2]].x); cur += len(x1) + len(x2); pt xSeg = getOpt(x1, x2); auto y1 = getSeg(p[id[0]].y, p[id[1]].y); auto y2 = getSeg(p[id[2]].y, p[id[3]].y); cur += len(y1) + len(y2); pt ySeg = getOpt(y1, y2); li is = min(xSeg.y, ySeg.y) - max(xSeg.x, ySeg.x); cur += 2 * max(0LL, -is); ans = min(ans, cur); } while (next_permutation(id.begin(), id.end())); cout << ans << endl; } int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); int tt = clock(); #endif ios_base::sync_with_stdio(false); cin.tie(0), cout.tie(0); cout << fixed << setprecision(15); int t; cin >> t; while(t--) { assert(read()); solve(); #ifdef _DEBUG cerr << "TIME = " << clock() - tt << endl; tt = clock(); #endif } return 0; }
1455
F
String and Operations
You are given a string $s$ consisting of $n$ characters. These characters are among the first $k$ lowercase letters of the Latin alphabet. You have to perform $n$ operations with the string. During the $i$-th operation, you take the character that \textbf{initially occupied} the $i$-th position, and perform \textbf{one} of the following actions with it: - swap it with the previous character in the string (if it exists). This operation is represented as L; - swap it with the next character in the string (if it exists). This operation is represented as R; - cyclically change it to the previous character in the alphabet (b becomes a, c becomes b, and so on; a becomes the $k$-th letter of the Latin alphabet). This operation is represented as D; - cyclically change it to the next character in the alphabet (a becomes b, b becomes c, and so on; the $k$-th letter of the Latin alphabet becomes a). This operation is represented as U; - do nothing. This operation is represented as 0. For example, suppose the initial string is test, $k = 20$, and the sequence of operations is URLD. Then the string is transformed as follows: - the first operation is U, so we change the underlined letter in {\underline{t}est} to the next one in the first $20$ Latin letters, which is a. The string is now aest; - the second operation is R, so we swap the underlined letter with the next one in the string {a\underline{e}st}. The string is now aset; - the third operation is L, so we swap the underlined letter with the previous one in the string {a\underline{s}et} (note that this is now the $2$-nd character of the string, but it was initially the $3$-rd one, so the $3$-rd operation is performed to it). The resulting string is saet; - the fourth operation is D, so we change the underlined letter in {sae\underline{t}} to the previous one in the first $20$ Latin letters, which is s. The string is now saes. The result of performing the sequence of operations is saes. 
Given the string $s$ and the value of $k$, find the lexicographically smallest string that can be obtained after applying a sequence of operations to $s$.
The crucial observation that we have to make is that the character that initially occupied the position $i$ cannot occupy the positions to the left of $i - 2$: we can shift some character two positions to the left using a combination of operations RL, but we can't go any further. So, the prefix of the first $i$ characters of the resulting string can only be affected by the prefix of the first $i + 2$ characters of the initial string. Let's use the following dynamic programming to solve the problem: let $dp_i$ be the lexicographically minimum string that we can obtain by applying operations to the first $i$ characters (that is, $dp_i$ is the answer to the problem if we consider only $i$ first characters of the original string). The transitions here are a bit tricky. If we apply the operation U or D to the character $i + 1$, then $dp_{i + 1} = dp_i + c'$, where $c'$ is the character we get when we apply the aforementioned operation to that character. L is a bit more complicated: we have to insert the character $s_{i + 1}$ just before the last character of $dp_i$. Modeling that we can apply the operation R is likely the most complex transition in our dynamic programming. First of all, we can't just make an update to $dp_{i + 1}$ or $dp_{i + 2}$, since it leads us to a situation where we can still apply some operations to the prefix we have built. Instead, we have to consider the operation we will be able to do with the character $s_{i + 2}$. Using another operation R is useless since the result is like performing no operations with those two characters at all, so we have to consider two options for operation with the $(i + 2)$-th character - D or U (whichever is better), or L. In the first case, we update $dp_{i + 2}$ by appending the resulting two characters to $dp_i$ (the one that we get when we change $s_{i + 2}$, and the one that initially was $s_{i + 1}$). 
In the second case, things are a bit trickier, but still not very complicated: the character that was $s_{i + 2}$ moves two positions backward, so it is inserted right before the last character of $dp_i$, and then we append $s_{i + 1}$ to the string we get. So, there are four transitions we have to make: a transition from $dp_i$ to $dp_{i + 1}$ that models the case when we apply U or D to the $(i + 1)$-th character; a transition from $dp_i$ to $dp_{i + 1}$ that models the case when we apply L to the $(i + 1)$-th character; a transition from $dp_i$ to $dp_{i + 2}$ to model the operations RD or RU; a transition from $dp_i$ to $dp_{i + 2}$ to model the operations RL. Overall complexity is $O(n^2)$ but it can be improved to $O(n \log n)$ with some complicated data structures like persistent segment tree with hashes to compare strings and append characters to them in $O(\log n)$.
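A direct Python transcription of these four transitions (mirroring the C++ reference code accompanying this editorial; the sentinel and names are ours):

```python
def smallest_string(s, k):
    # dp[i] = lexicographically smallest result for the first i characters
    n = len(s)
    dp = [""] + ["\x7f"] * n            # "\x7f" compares greater than any answer
    for i in range(n):
        c = ord(s[i]) - ord('a')
        nc = chr(ord('a') + min(c, (c + 1) % k, (c + k - 1) % k))  # best of 0/U/D
        dp[i + 1] = min(dp[i + 1], dp[i] + nc)                     # 0, U or D
        if i > 0:
            # R on the previous character, then 0/U/D on this one
            dp[i + 1] = min(dp[i + 1], dp[i - 1] + nc + s[i - 1])
            # L: insert s[i] just before the last character built so far
            dp[i + 1] = min(dp[i + 1], dp[i][:i - 1] + s[i] + dp[i][-1])
        if i > 1:
            # R on the previous character followed by L on this one
            dp[i + 1] = min(dp[i + 1],
                            dp[i - 1][:i - 2] + s[i] + dp[i - 1][-1] + s[i - 1])
    return dp[n]
```

For example, `smallest_string("cba", 3)` is `"aaa"`: U turns the first c into a, then D turns the b into a.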
[ "dp", "greedy" ]
2,800
#include <bits/stdc++.h> using namespace std; const int N = 505; int n, k; string s; string dp[N]; void solve() { cin >> n >> k >> s; for (int i = 1; i <= n; i++) dp[i] = char('z' + 1); for (int i = 0; i < n; i++) { int c = s[i] - 'a'; int nc = min({c, (c + 1) % k, (c + k - 1) % k}); dp[i + 1] = min(dp[i + 1], dp[i] + char('a' + nc)); if (i > 0) { dp[i + 1] = min(dp[i + 1], dp[i - 1] + char('a' + nc) + s[i - 1]); dp[i + 1] = min(dp[i + 1], dp[i].substr(0, i - 1) + s[i] + dp[i].back()); } if (i > 1) { dp[i + 1] = min(dp[i + 1], dp[i - 1].substr(0, i - 2) + s[i] + dp[i - 1].back() + s[i - 1]); } } cout << dp[n] << endl; } int main() { int t; cin >> t; while (t--) solve(); }
1455
G
Forbidden Value
Polycarp is editing a complicated computer program. First, variable $x$ is declared and assigned to $0$. Then there are instructions of two types: - set $y$ $v$ — assign $x$ a value $y$ or spend $v$ burles to remove that instruction (thus, not reassign $x$); - if $y$ $\dots$ end block — execute instructions inside the if block if the value of $x$ is $y$ and ignore the block otherwise. if blocks can contain set instructions and other if blocks inside them. However, when the value of $x$ gets assigned to $s$, the computer breaks and immediately catches fire. Polycarp wants to prevent that from happening and spend as few burles as possible. What is the minimum amount of burles he can spend on removing set instructions to never assign $x$ to $s$?
Consider the following dynamic programming: $dp_{i, j}$ is the minimum cost to make $x$ have value $j$ after the $i$-th line. The transitions here are pretty easy: on set you just consider two options of skipping or not skipping the instruction, and on if you either go to the next line or to the end of the block, depending on the value. There are a lot of possible values, so that dp works in $O(n^2)$. First, let's notice that all the values that don't appear in the input don't matter, so you can keep only the existing values in the dp. Next, let's consider the following modification to it. What happens when you enter an if block? It's actually the same dp, but the starting value is not $0$ with cost $0$, as at the beginning of the whole program, but some value $y$ with some cost $c$. So let's calculate this dp separately from the outer one and just merge the values together. Notice that if some value doesn't appear inside the if block then its cost cannot decrease when exiting it. Thus, it's enough to calculate the inner dp only for the values that appear inside the if block. Okay, the transitions for if became easier. The set transitions are still slow, though. Let's examine their nature. All the values besides the $y$ written in the set instruction increase their cost by $v$. As for $y$, its cost becomes equal to the cost of the cheapest value before the instruction. Thus, let's maintain the dp in some data structure that allows adding the same integer to all elements and taking the minimum of the elements. That can be done with a multiset and a single integer that stores the current shift that should be applied to all elements. Surely, you'll also need a map to retrieve the current cost of particular values. The final part is fast merging of the if block dp and the outer one. It might not appear obvious, but if you do that in a straightforward manner of adding all the costs from the if block one by one, it can become $O(n^2)$ in total. 
So we can apply small-to-large and swap these dp's based on their sizes. Overall complexity: $O(n \log^2 n)$.
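The "global add + minimum" structure can be sketched as a tiny class (a simplified stand-in: `min_cost` here scans all entries in $O(n)$, whereas the editorial keeps a multiset for $O(\log n)$; all names are ours):

```python
class ShiftedMap:
    # value -> cost, supporting "add v to every cost" in O(1)
    def __init__(self):
        self.shift = 0          # lazy addend applied to all stored costs
        self.cost = {}          # value -> cost - shift

    def add_all(self, v):
        self.shift += v

    def get(self, value, default=None):
        c = self.cost.get(value)
        return default if c is None else c + self.shift

    def set_min(self, value, c):
        if c < self.get(value, float('inf')):
            self.cost[value] = c - self.shift

    def min_cost(self):
        return min(self.cost.values()) + self.shift
```

Processing `set y v` then becomes: `mn = m.min_cost(); m.add_all(v)` (pay $v$ to remove the instruction), and, if `y != s`, `m.set_min(y, mn)` (keep the instruction and execute it for free).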
[ "data structures", "dp" ]
2,900
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) #define x first #define y second using namespace std; const long long INF = 1e18; struct op{ string tp; int y, v, to; }; struct addmap{ long long add; map<int, long long> val; multiset<long long> mn; }; void reset(addmap &a, int x, long long val){ if (a.val.count(x)) a.mn.erase(a.mn.find(a.val[x])); a.val[x] = val - a.add; a.mn.insert(val - a.add); } int main() { int n, s; scanf("%d%d", &n, &s); static char buf[10]; op a; vector<addmap> st; st.push_back({}); st.back().val[0] = 0; st.back().add = 0; st.back().mn.insert(0); forn(i, n){ scanf("%s", buf); a.tp = buf; if (a.tp == "set"){ scanf("%d%d", &a.y, &a.v); assert(!st.back().mn.empty()); long long mn = st.back().add + *st.back().mn.begin(); st.back().add += a.v; if (a.y != s) reset(st.back(), a.y, mn); } else if (a.tp == "if"){ scanf("%d", &a.y); long long val = INF; if (st.back().val.count(a.y)){ val = st.back().val[a.y] + st.back().add; st.back().mn.erase(st.back().mn.find(st.back().val[a.y])); st.back().val.erase(a.y); } st.push_back({}); reset(st.back(), a.y, val); st.back().add = 0; } else{ if (st[int(st.size()) - 1].val.size() > st[int(st.size()) - 2].val.size()) swap(st[int(st.size()) - 1], st[int(st.size()) - 2]); addmap& v = st[int(st.size()) - 2]; for (auto it : st.back().val){ if (!v.val.count(it.x) || v.val[it.x] + v.add > it.y + st.back().add){ if (v.val.count(it.x)) v.mn.erase(v.mn.find(v.val[it.x])); v.val[it.x] = it.y + st.back().add - v.add; v.mn.insert(it.y + st.back().add - v.add); } } st.pop_back(); } } printf("%lld\n", *st.back().mn.begin() + st.back().add); return 0; }
1456
E
XOR-ranges
Given integers $c_{0}, c_{1}, \ldots, c_{k-1}$ we can define the cost of a number $0 \le x < 2^{k}$ as $p(x) = \sum_{i=0}^{k-1} \left( \left\lfloor \frac{x}{2^{i}} \right\rfloor \bmod 2 \right) \cdot c_{i}$. In other words, the cost of number $x$ is the sum of $c_{i}$ over the bits of $x$ which are equal to one. Let's define the cost of array $a$ of length $n \ge 2$ with elements from $[0, 2^{k})$ as follows: $cost(a) = \sum_{i=1}^{n - 1} p(a_{i} \oplus a_{i+1})$, where $\oplus$ denotes bitwise exclusive OR operation. You have to construct an array of length $n$ with minimal cost, given that each element should belong to the given segment: $l_{i} \le a_{i} \le r_{i}$.
First, we will make all segments exclusive for convenience. Assume we have a segment $(l, r)$; let's analyze the process of forming $x$ from the highest bit to the lowest: let $hb$ be the highest bit such that the $hb$-th bits of $l$ and $r$ are different (clearly, the bits of $x$ higher than $hb$ have to coincide with the bits of $l$ and $r$). We call $hb$ the key bit of the segment. Now we set the $hb$-th bit of $x$ off (the case when we set it on is almost symmetric). From now on, we have $x < r$ and continue considering lower bits. If the considered bit is on in $l$, we must set this bit on in $x$; otherwise, we have two choices: set this bit off in $x$ and consider lower bits, or set this bit on in $x$ and not care about lower bits (because $x > l$ now). Back to the problem: imagine that we fixed all non-free bits of every element; how should we set the other bits in order to minimize the cost? It's quite simple: consider the $i$-th bit, and call a pair $(l, r)$ visible if the $i$-th bits of the $l$-th and $r$-th elements are non-free but those of $A[l+1], A[l+2], \ldots, A[r-1]$ are free. For each visible pair such that the $i$-th bits of its endpoints are different from each other, we'll add $c_{i}$ to the answer. This inspires us to write a dynamic programming function: $dp(i, l, r, state(l), state(r))$ (here $state(l)$ is a pair $(f, c)$ where $f$ is how the key bit of the $l$-th element was set, and $c$ is the position of its lowest non-free bit, equal to or lower than $i$; similarly for $state(r)$) is the minimal cost over the $i$-th and higher bits of $A[l..r]$ such that $(l, r)$ is currently visible. We have two types of transitions: make $(l, r)$ really visible by going to the $(i+1)$-th bit, or make $(l, r)$ invisible by choosing $md$ ($l < md < r$) and choosing $state(md)$ such that the lowest non-free bit of the $md$-th element is $i$. Our answer is just $dp(0, 0, N+1, (0, 0), (0, 0))$
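The per-segment bit analysis can be illustrated with two tiny helpers (hypothetical names, not part of the solution; `choices_below` lists the bits below the key bit where a free choice exists, for the branch where the key bit is set off):

```python
def key_bit(l, r, k):
    # highest bit where l and r differ; bits above it are forced
    # for any l <= x <= r; -1 when l == r (everything is forced)
    for i in range(k - 1, -1, -1):
        if (l >> i) & 1 != (r >> i) & 1:
            return i
    return -1

def choices_below(l, hb):
    # with the key bit of x set off we already have x < r; a bit i < hb is
    # forced on when l has it, otherwise it may be 0 (keep matching l)
    # or 1 (then x > l and all lower bits become free)
    return [i for i in range(hb) if not (l >> i) & 1]
```

For $l = 5 = 0101_2$ and $r = 12 = 1100_2$ with $k = 4$, the key bit is $3$, and below it only bit $1$ (a zero bit of $l$) offers a choice.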
[ "dp", "greedy" ]
3,500
null
1458
A
Row GCD
You are given two positive integer sequences $a_1, \ldots, a_n$ and $b_1, \ldots, b_m$. For each $j = 1, \ldots, m$ find the greatest common divisor of $a_1 + b_j, \ldots, a_n + b_j$.
From basic properties of GCD we know that $GCD(x, y) = GCD(x - y, y)$. The same applies for multiple arguments: $GCD(x, y, z, \ldots) = GCD(x - y, y, z, \ldots)$. Let's use this for $GCD(a_1 + b_j, \ldots, a_n + b_j)$ and subtract $a_1 + b_j$ from all other arguments: $GCD(a_1 + b_j, \ldots, a_n + b_j) = GCD(a_1 + b_j, a_2 - a_1, \ldots, a_n - a_1)$. If we find $G = GCD(a_2 - a_1, \ldots, a_n - a_1)$, then every answer can be found as $GCD(a_1 + b_j, G)$. Note that we have to assume that the GCD of an empty set is $0$, and $GCD(x, 0) = x$ for any $x$, since $0$ is the only number divisible by every other number.
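This is only a few lines of Python (a sketch; the function name is ours):

```python
from functools import reduce
from math import gcd

def row_gcds(a, b):
    # G = gcd of the differences a_i - a_1; gcd of an empty set is 0,
    # which is the right identity since gcd(x, 0) == x
    g = reduce(gcd, (abs(x - a[0]) for x in a[1:]), 0)
    return [gcd(a[0] + bj, g) for bj in b]
```

With a single-element $a$ we get $G = 0$, so the answer for each $b_j$ is simply $a_1 + b_j$.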
[ "math", "number theory" ]
1,600
null
1458
B
Glass Half Spilled
There are $n$ glasses on the table numbered $1, \ldots, n$. The glass $i$ can hold up to $a_i$ units of water, and currently contains $b_i$ units of water. You would like to choose $k$ glasses and collect as much water in them as possible. To that effect you can pour water from one glass to another as many times as you like. However, because of the glasses' awkward shape (and totally unrelated to your natural clumsiness), each time you try to transfer any amount of water, half of the amount is spilled on the floor. Formally, suppose a glass $i$ currently contains $c_i$ units of water, and a glass $j$ contains $c_j$ units of water. Suppose you try to transfer $x$ units from glass $i$ to glass $j$ (naturally, $x$ can not exceed $c_i$). Then, $x / 2$ units is spilled on the floor. After the transfer is done, the glass $i$ will contain $c_i - x$ units, and the glass $j$ will contain $\min(a_j, c_j + x / 2)$ units (excess water that doesn't fit in the glass is also spilled). Each time you transfer water, you can arbitrarily choose from which glass $i$ to which glass $j$ to pour, and also the amount $x$ transferred can be any positive real number. For each $k = 1, \ldots, n$, determine the largest possible total amount of water that can be collected in arbitrarily chosen $k$ glasses after transferring water between glasses zero or more times.
Suppose that we want to collect water in a certain set of chosen glasses $S$. Let $A_S$ be the total capacity of chosen glasses, and $B_S$ be the total amount of water currently contained in chosen glasses. Also, let $B$ be the total amount of water in all glasses. Clearly, the optimal way is to directly transfer water from non-chosen glasses to chosen ones. Then, we already secured $B_S$ units, and we can transfer up to $B - B_S$ units, thus the largest possible amount is $B_S + (B - B_S) / 2 = B / 2 + B_S / 2$. But all this water may not fit into the chosen glasses, so the actual amount we collect is $\min(A_S, B / 2 + B_S / 2)$. To find the optimal answer for each $k$, let's use dynamic programming: define $dp[i][k][A]$ as the largest possible $B_S$ for a subset of $k$ glasses among $1, \ldots, i$ such that the total capacity in the subset is $A$. We can recalculate this with transition $dp[i][k][A] = \max(dp[i - 1][k][A], dp[i - 1][k - 1][A - a[i]] + b[i])$. Then, $ans[k] = \max_A \min(A, dp[n][k][A] / 2 + B / 2)$. Time complexity is $O(n^3 \max a_i)$, where $\max a_i$ is the largest capacity of a glass. We can also get rid of storing the first dimension $i$ by overwriting DP values as $i$ increases, and obtain $O(n^2 \max a_i)$ memory.
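The DP with the first dimension dropped can be sketched in Python as follows (a minimal illustration, not the official implementation; the function name is made up):

```python
def max_collected(a, b):
    # dp[k][A] = largest total water B_S over subsets of k glasses with total capacity A
    n, total_cap, B = len(a), sum(a), sum(b)
    NEG = float('-inf')
    dp = [[NEG] * (total_cap + 1) for _ in range(n + 1)]
    dp[0][0] = 0
    for ai, bi in zip(a, b):
        # iterate k and A backwards so each glass is used at most once (0/1 knapsack)
        for k in range(n - 1, -1, -1):
            for A in range(total_cap - ai, -1, -1):
                if dp[k][A] != NEG:
                    dp[k + 1][A + ai] = max(dp[k + 1][A + ai], dp[k][A] + bi)
    # ans[k] = max over A of min(A, dp[k][A] / 2 + B / 2)
    return [max(min(A, dp[k][A] / 2 + B / 2)
                for A in range(total_cap + 1) if dp[k][A] != NEG)
            for k in range(1, n + 1)]
```

For instance, two glasses with capacities $(2, 2)$ and contents $(2, 1)$ give answers $2$ for $k = 1$ and $3$ for $k = 2$.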
[ "dp" ]
2,000
null
1458
C
Latin Square
You are given a square matrix of size $n$. Every row and every column of this matrix is a permutation of $1$, $2$, $\ldots$, $n$. Let $a_{i, j}$ be the element at the intersection of $i$-th row and $j$-th column for every $1 \leq i, j \leq n$. Rows are numbered $1, \ldots, n$ top to bottom, and columns are numbered $1, \ldots, n$ left to right. There are six types of operations: - R: cyclically shift all columns to the right, formally, set the value of each $a_{i, j}$ to $a_{i, ((j - 2)\bmod n) + 1}$; - L: cyclically shift all columns to the left, formally, set the value of each $a_{i, j}$ to $a_{i, (j\bmod n) + 1}$; - D: cyclically shift all rows down, formally, set the value of each $a_{i, j}$ to $a_{((i - 2)\bmod n) + 1, j}$; - U: cyclically shift all rows up, formally, set the value of each $a_{i, j}$ to $a_{(i\bmod n) + 1, j}$; - I: replace the permutation read left to right in each row with its inverse. - C: replace the permutation read top to bottom in each column with its inverse. Inverse of a permutation $p_1$, $p_2$, $\ldots$, $p_n$ is a permutation $q_1$, $q_2$, $\ldots$, $q_n$, such that $p_{q_i} = i$ for every $1 \leq i \leq n$. One can see that after any sequence of operations every row and every column of the matrix will still be a permutation of $1, 2, \ldots, n$. Given the initial matrix description, you should process $m$ operations and output the final matrix.
For convenience, let's assume that all row and column indices, as well as matrix values, are from $0, \ldots, n - 1$ instead of $1, \ldots, n$. If only shift operations were present, we could solve the problem in linear time: just maintain where the top left corner ends up after all the shifts, and then the matrix can be easily reconstructed (if this is not immediately clear to you, take a moment and think how this should be done). For the inverse operations, the approach is going to be similar, but we'll need a different way to look at the matrix. Instead of the matrix, let's think about the set of all triples $(i, j, a[i][j])$. Imagining a set of $n^2$ points in 3D, where $i, j$ are $x$ and $y$ coordinates in the horizontal plane, and $a[i][j]$ is the height $z$, is a good mental picture. Shift operations apply $x \to (x \pm 1) \bmod n$ and $y \to (y \pm 1) \bmod n$ to all points. We can think of this as a translation of the entire 3D space, with some looping at the borders. Instead of the given points, let's keep track of where the point $(0, 0, 0)$ is located after the operations are done; it's just as easy as in the two-dimensional version from before. Now let's introduce the inverses. Consider, say, a row inverse operation, and all the points $(i, j, k)$, where $k = a[i][j]$, that have the same row index $i$. As an example, suppose that the top ($0$-th) row of the matrix contains numbers $(2, 1, 3, 0)$. The entries in the row correspond to points $(0, 0, 2)$, $(0, 1, 1)$, $(0, 2, 3)$, $(0, 3, 0)$. The inverse permutation to this row is $(3, 1, 0, 2)$, thus the new set of points should be $(0, 0, 3)$, $(0, 1, 1)$, $(0, 2, 0)$, $(0, 3, 2)$. In general, if there is currently a point $(i, j, k)$ in the set, then after the inverse there must be a point $(i, k, j)$. Thus, the entire operation may be summarized as "swap the second and third coordinates". Similarly, the column inverse is "swap the first and third coordinates".
Again, we think of this transformation applied to the entire 3D space: this can be seen as a mirror reflection with respect to $y = z$ or $x = z$ plane. How does this affect our "keep track of $(0, 0, 0)$" approach? It is easy to update its position: just swap respective coordinates. However, we now need to care about how the answer is reconstructed, since, say, the point $(1, 0, 0)$ does not necessarily end up one step to the right of $(0, 0, 0)$. Thus, in addition, let's keep track of how the direction vectors $v_x = (1, 0, 0)$, $v_y = (0, 1, 0)$, $v_z = (0, 0, 1)$ are permuted after all operations. Shifts do not affect them, but inverses swap two of them, depending on which coordinates were swapped. We are almost done, we just need to reconstruct the answer at the end. If we had an initially given point $p = (i, j, k)$, where will it end up? If the origin ends up at a position represented as a vector $v_0 = (x, y, z)$, then $p$ ends up at $p' = v_0 + iv_x + jv_y + kv_z$, where $v_x, v_y, v_z$ are permuted accordingly. Once we have the result $p' = (i', j', k')$, all we need is to put $k'$ into $b[i'][j']$, where $b$ is the answer matrix. This results, once again, in a linear time solution. This explanation could have been much shorter with some linear-algebraic machinery (basically each transformation = multiplication by a certain matrix), but I chose to leave it as elementary ($\neq$ simple!) as possible.
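A tiny sanity check of the "swap the second and third coordinates" view of the row inverse (a brute-force illustration on the triple set, not the linear-time solution; both helper names are ours):

```python
def row_inverse(mat):
    # direct definition (0-indexed): new_row[old_row[j]] = j in every row
    n = len(mat)
    res = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            res[i][mat[i][j]] = j
    return res

def row_inverse_via_triples(mat):
    # operation I = swap the second and third coordinates of every triple (i, j, a[i][j])
    n = len(mat)
    triples = {(i, k, j) for i in range(n) for j, k in enumerate(mat[i])}
    res = [[0] * n for _ in range(n)]
    for i, j, k in triples:
        res[i][j] = k
    return res
```

Both functions agree; in particular the row $(2, 1, 3, 0)$ from the example above turns into $(3, 1, 0, 2)$.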
[ "math", "matrices" ]
2,700
null
1458
D
Flip and Reverse
You are given a string $s$ of 0's and 1's. You are allowed to perform the following operation: - choose a non-empty contiguous substring of $s$ that contains an equal number of 0's and 1's; - flip all characters in the substring, that is, replace all 0's with 1's, and vice versa; - reverse the substring. For example, consider $s$ = 00111011, and the following operation: - Choose the first six characters as the substring to act upon: {\textbf{001110}11}. Note that the number of 0's and 1's are equal, so this is a legal choice. Choosing substrings 0, 110, or the entire string would not be possible. - Flip all characters in the substring: {\textbf{110001}11}. - Reverse the substring: {\textbf{100011}11}. Find the lexicographically smallest string that can be obtained from $s$ after zero or more operations.
Let's go over characters of $s$ left to right and keep track of the balance = (the number of $0$'s) - (the number of $1$'s) among the visited characters. We can think about starting at the point $0$, and moving right (from $x$ to $x + 1$) when we see a $0$, and moving left (to $x - 1$) when we see a $1$. Each time we go from $x$ to $x \pm 1$, let's connect the two points with an edge. We'll keep track of all the edges, even if there are multiple ones connecting the same pair of points. Since our path visits all edges we've created exactly once, it is an Eulerian path in the (multi)graph constructed this way. The start point of the path is $0$, and the finish point is equal to the total balance of the string $s$. Now, let's think about what a transformation does. A valid substring subject to the transformation has an equal number of $0$'s and $1$'s, thus it must correspond to a part of our path which is a round tour from a point $x$ back to itself. Flipping and reversing the substring makes us traverse all the same edges, but in reverse order and in reverse direction. Thus, in terms of graphs and paths, what we're doing is reversing a cycle that occurs in the Eulerian path. Note that the collection of edges is preserved after every operation, in other words, the graph is an invariant of the process. Furthermore, the start and finish of the path also stay the same. The answer (= lex. min. string) thus also corresponds to a certain Eulerian path in the very same graph, but which one? It should be the lex. min. path (where we prefer going right before going left) that is obtainable from the initial one with cycle reverses. However, the important observation is: in this particular graph any Eulerian path is obtainable from any other one with cycle reverses (as long as they start and finish in the same places). There are a few ways to prove this. 
For example, consider two Eulerian paths, and look at the first point where they diverge: say, one goes $x \to x + 1$, while the other goes $x \to x - 1$. But both paths will have to eventually return to $x$ to go in the other direction, thus there is at least one extra edge in each direction $x \to x + 1$ and $x \to x - 1$, unvisited before the divergence happened. Let's wait until the first path visits at least two edges between $x$ and $x - 1$, returning back to $x$, and reverse the resulting cycle. With this, the next edge after diverging has changed from $x \to x + 1$ to $x \to x - 1$, and now the two paths diverge some time later. We can proceed until both paths are the same. The observation implies that we should simply look for the lex. min. Eulerian path in the entire graph. Let's try to do this greedily: build the path from the beginning, marking edges as used when we traverse them. Take the transition $x \to x + 1$ whenever there is an unused edge leading there, otherwise settle for $x \to x - 1$. There is a caveat, though: we can not go $x \to x + 1$ if this uses the last edge between $x$ and $x + 1$, and additionally we still have to visit $x$ some time later (say, if $x$ has unused edges to $x - 1$). However, with this extra condition in place, the greedy algorithm will always find an Eulerian path, and it's clearly the lex. min. one. This procedure is fairly easy to implement in linear time: reconstruct the graph and do the greedy thing.
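The key invariant (the edge multiset and the endpoints of the balance walk are preserved by an operation) can be checked directly on the example from the statement, where one operation turns 00111011 into 10001111 (a small illustration only; the function name is ours):

```python
from collections import Counter

def balance_walk(s):
    # multiset of edges traversed by the balance walk of s, plus the finish point
    edges, x = Counter(), 0
    for c in s:
        nx = x + 1 if c == '0' else x - 1   # '0' moves right, '1' moves left
        edges[(min(x, nx), max(x, nx))] += 1
        x = nx
    return edges, x
```

Both strings produce the same edge multiset and the same finish point $-2$ (the total balance).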
[ "data structures", "graphs", "greedy" ]
3,100
null
1458
E
Nim Shortcuts
After your debut mobile game "Nim" blew up, you decided to make a sequel called "Nim 2". This game will expand on the trusted Nim game formula, adding the much awaited second heap! In the game, there are two heaps, each containing a non-negative number of stones. Two players make moves in turn. On their turn, a player can take any positive number of stones from either one of the heaps. A player who is unable to move loses the game. To make the game easier to playtest, you've introduced developer shortcuts. There are $n$ shortcut positions $(x_1, y_1), \ldots, (x_n, y_n)$. These change the game as follows: suppose that before a player's turn the first and second heap contain $x$ and $y$ stones respectively. If the pair $(x, y)$ is equal to one of the pairs $(x_i, y_i)$, then the player about to move loses instantly, otherwise they are able to make moves as normal. Note that in the above explanation the two heaps and all pairs are \textbf{ordered}, that is, $x$ must refer to the size of the first heap, and $y$ must refer to the size of the second heap. The game release was followed by too much celebration, and next thing you know is developer shortcuts made their way to the next official update of the game! Players now complain that the AI opponent has become unbeatable at certain stages of the game. You now have to write a program to figure out which of the given initial positions can be won by the starting player, assuming both players act optimally.
Given the shortcut positions, we can immediately mark some initial positions as winning or losing for the first player. Namely, all shortcut positions are immediately losing, and any position within one move of a shortcut position is winning (unless it's also a shortcut position). In an example picture below cells $(x, y)$ correspond to different initial positions. Red cells mark shortcut positions, and yellow cells are single move wins. Which of the unmarked positions are losing for the starting player? We can repeatedly apply the usual game-analysing rule: if all positions reachable in one move from a position $(x, y)$ are known to be wins, then position $(x, y)$ loses. We'll consider all candidates subject to this rule in lexicographic order. The first candidate is the position $(0, 0)$. A few situations can happen as we proceed forward: $(x, y)$ is a shortcut position. There's no need to mark it as losing, just go to $(x + 1, y + 1)$, as all positions $(x + 1, y)$, $(x + 2, y)$, $\ldots$ and $(x, y + 1)$, $(x, y + 2)$, $\ldots$ are known to be single-move wins to $(x, y)$. $(x, y)$ is a single move win to a shortcut $(x, y')$ with $y' < y$. Then, skip $(x, y)$ and go to the next natural candidate $(x + 1, y)$, since all cells $(x, y + 1)$, $(x, y + 2)$, $\ldots$ are known to be single-move wins to $(x, y')$. $(x, y)$ is a single move win to a shortcut $(x', y)$ with $x' < x$. Then, skip $(x, y)$ and go to the next natural candidate $(x, y + 1)$, since all cells $(x + 1, y)$, $(x + 2, y)$, $\ldots$ are known to be single-move wins to $(x', y)$. Neither of the above applies. Then, there can be no losing positions reachable in one move from $(x, y)$: single-move wins are eliminated, and all manually marked positions $(x', y')$ satisfy $x' < x$, $y' < y$. Thus, per our rule, we mark $(x, y)$ as losing and proceed to $(x + 1, y + 1)$. Here's what happens in the picture above.
All candidates considered throughout the process are tinted blue, and all identified losing positions are dark blue. With this procedure we can identify all losing positions $(x, y)$ under $x, y \leq C$ in roughly $O(C \log n)$ time ($n$ is the number of shortcuts). This is too slow since coordinates can be up to $10^9$. However, we can speed up consecutive applications of the last case of the above procedure as follows: ... Neither of the above applies. Let $X > x$ and $Y > y$ be the closest $x$-coordinate and $y$-coordinate of a shortcut position respectively ($X$ and $Y$ may correspond to different shortcut positions). Let $\Delta = \min(X - x, Y - y)$. Then, mark all positions $(x, y), (x + 1, y + 1), \ldots, (x + \Delta - 1, y + \Delta - 1)$ as losing, and go to $(x + \Delta, y + \Delta)$. This is justified since all thusly marked positions don't share any coordinate with a shortcut, thus surely they're all subject to this case of the procedure. Looking up $X$ and $Y$ can be implemented with sets of all $x_i$ and $y_i$, and upper bound calls. Further, instead of marking all cells $(x, y), (x + 1, y + 1), \ldots, (x + \Delta - 1, y + \Delta - 1)$ one by one, let's memorize them as a diagonal segment, described by the initial point $(x, y)$ and length $\Delta$. We can now observe that the optimized process only takes $O(n)$ steps. Indeed, for each step we can find a shortcut $(x_i, y_i)$ such that either $x \geq x_i$ or $y \geq y_i$ is true for the first time. In particular, the number of diagonal segments representing all additional losing positions is $O(n)$. Now, to answer any query $(a, b)$ we have to check if one of the following is true: $(a, b)$ is a shortcut; $(a, b)$ belongs to a diagonal segment. The first condition is easy to check if all shortcuts are stored, say, in a set structure. Diagonal segments can also be stored in a set, and we can find the closest to $(a, b)$ with a lower bound, say, by comparing $x$-coordinates. 
This results in an $O(n \log n)$ precomputation, and $O(\log n)$ per initial position query. (Kudos to Golovanov399 for his neat grid drawing tool)
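For small coordinates the losing positions can be cross-checked with a naive quadratic-grid analysis (an illustration of the game rule only, far too slow for the real constraints; the function name is ours):

```python
def losing_positions(shortcuts, C):
    # naive analysis on [0, C] x [0, C]: shortcuts lose instantly, otherwise a
    # position loses iff no single move (reducing one heap) reaches a losing cell
    S = set(shortcuts)
    lose = set()
    for x in range(C + 1):          # lexicographic order guarantees all smaller
        for y in range(C + 1):      # coordinates in the row/column are decided
            if (x, y) in S:
                lose.add((x, y))
            elif not any((xx, y) in lose for xx in range(x)) and \
                 not any((x, yy) in lose for yy in range(y)):
                lose.add((x, y))
    return lose
```

Without shortcuts this recovers the classic answer (the diagonal $x = y$), and adding a shortcut such as $(1, 2)$ shifts the losing diagonal just as in the picture.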
[ "data structures", "games" ]
3,100
null
1458
F
Range Diameter Sum
You are given a tree with $n$ vertices numbered $1, \ldots, n$. A tree is a connected simple graph without cycles. Let $\mathrm{dist}(u, v)$ be the number of edges in the unique simple path connecting vertices $u$ and $v$. Let $\mathrm{diam}(l, r) = \max \mathrm{dist}(u, v)$ over all pairs $u, v$ such that $l \leq u, v \leq r$. Compute $\sum_{1 \leq l \leq r \leq n} \mathrm{diam}(l, r)$.
Please bear with my formal style for this one as my hand-waving skills are too weak to explain this properly. We'll use the "divide-and-conquer" strategy. Let us implement a recursive procedure $solve(l, r)$ that will compute the sum $\sum_{l \leq i \leq j \leq r} diam(i, j)$. It will do so as follows: If $l = r$, then answer is trivially $0$. Otherwise, choose $m = \lfloor(l + r) / 2 \rfloor$, and compute the following sum: $\sum_{i = l}^m \sum_{j = m + 1}^r diam(i, j)$. The only entries $diam(i, j)$ that are not accounted for satisfy either $l \leq i \leq j \leq m$ or $m + 1 \leq i \leq j \leq r$. Compute their total as $solve(l, m) + solve(m + 1, r)$, and return the total answer. Of course, $\sum_{i = l}^m \sum_{j = m + 1}^r diam(i, j)$ is the tricky part. We'll need some preparation to compute it. To make arguments cleaner, let us introduce an extra vertex in the middle of every edge, subdividing it into two. We'll refer to new vertices as extra vertices, and the old vertices as proper vertices. Since all distances are doubled, we will have to divide the answer by two in the end. We will define a circle $C(v, r)$ of radius $r$ centered at a vertex $v$ in the tree as the set of all vertices at distance at most $r$ from the vertex $v$. We also assume that if $r > 0$ there are at least two vertices in $C(v, r)$ exactly at distance $r$ from $v$, otherwise decrease $r$ accordingly. For a set of vertices $S$ we define $Cover(S)$ as a circle of smallest radius containing all vertices of $S$. If we assume that all vertices of $S$ are proper, then $Cover(S)$ is unique. Indeed, after subdividing all distances among $S$ are even, thus in $Cover(S) = (v, r)$ we must have $r = diam(S) / 2$, and $v$ is the unique diameter midpoint. We can see that if $r > 0$, then $S$ has at least two vertices at distance $r$ from the covering circle center $v$, namely, the diameter endpoints. Warning: math ahead. If you're not interested in proofs, only read the premise of Lemma 2. Lemma 1.
If a circle $C(v, r)$ contains two proper vertices $a, b$, then it also contains the midpoint $c$ on the path between $a, b$, and further $dist(v, c) \leq r - dist(a, b) / 2$. Proof. Say, for the vertex $a$ we have $dist(v, a) \leq dist(v, c) + dist(c, a) = dist(v, c) + dist(a, b) / 2$. This inequality is strict: $dist(v, a) < dist(v, c) + dist(c, a)$ only when $v$ and $a$ are in the same subtree with respect to $c$. If we assume that respective inequalities for $a, b$ are both strict, then both $a, b$ are in the same subtree of $c$, in which case $c$ can not be the midpoint. Otherwise, say, for $a$ we have $r \geq dist(v, a) = dist(v, c) + dist(c, a) = dist(v, c) + dist(a, b) / 2$, and $dist(v, c) \leq r - dist(a, b) / 2$ by simple rearrangement. An immediate corollary of this lemma is this: if a circle $C(v, r)$ contains a set of proper vertices $S$, it also contains $Cover(S)$. We apply the lemma to the midpoint of $diam(S)$, which is also the center $v'$ of $Cover(S) = C(v', r')$, to establish $dist(v, v') \leq r - r'$. Then, for any vertex $w$ of $Cover(S)$ we have $dist(v, w) \leq dist(v, v') + dist(v', w) \leq r$, thus $w \in C(v, r)$. Lemma 2. Let $S$, $T$ be two non-empty sets of proper vertices. Then the smallest covering circle $Cover(S \cup T)$ can be found by only knowing $Cover(S)$ and $Cover(T)$, by the following rule: If $Cover(S) \subseteq Cover(T)$, then $Cover(S \cup T) = Cover(T)$. If $Cover(T) \subseteq Cover(S)$, then $Cover(S \cup T) = Cover(S)$. Let none of $Cover(S) = (v_S, r_S)$ and $Cover(T) = (v_T, r_T)$ contain the other one. Then, $Cover(S \cup T) = C(V, R)$ has radius $R$ equal to $(r_S + dist(v_S, v_T) + r_T) / 2$ (which is asserted to be an integer), and its center $V$ is located on the unique path $v_S v_T$ at distance $R - r_S$ (which is asserted to be an integer) from $v_S$. Proof. The first two cases are somewhat easy.
Say, in the first case $Cover(S \cup T)$ has to contain $T$, then it must contain $Cover(T)$, and then it contains $Cover(S)$ by default. Now, assume the latter case. Any two vertices $v \in S$, $u \in T$ satisfy $dist(v, u) \leq dist(v, v_S) + dist(v_S, v_T) + dist(v_T, u) \leq r_S + dist(v_S, v_T) + r_T$, thus $\max dist(v, u) = 2R \leq r_S + dist(v_S, v_T) + r_T$. Now, let $ab$ and $cd$ be diameter endpoints of $S$ and $T$ respectively. Here we can even choose, say, $a$ outside of $Cover(T)$, since $S \not \subseteq Cover(T)$, similarly choose $c \not \in Cover(S)$. We then must have $dist(a, c) = r_S + dist(v_S, v_T) + r_T$, thus $Cover(S \cup T)$ has radius $R$ at least half of that, and $V$ is the only possible center candidate. When one or both of $Cover(S)$ and $Cover(T)$ has zero radius, the claim can be established pretty much directly. Okay, we're out of the woods now! The last lemma allows us to maintain $Cover(S) = C(v, r)$ for any set $S$, and further "merge" $Cover(S)$ and $Cover(T)$ to obtain $Cover(S \cup T)$, since all we need is distances between vertices, and finding a vertex on the path $v_S v_T$ at a certain distance from $v_S$. We can also efficiently check if $C(v, r)$ contains $C(v', r')$ by verifying $dist(v, v') \leq r - r'$. All of that can be done with binary lifting and LCA manipulations in $O(\log n)$ per operation. At last, let's get back to the divide-and-conquer step. Let us find circles $C_1(i) = (v_i, r_i) = Cover(i, \ldots, m)$, and $C_2(j) = (v_j, r_j) = Cover(m + 1, \ldots, r)$ for all relevant $i, j$. We have $C_1(m) = C(m, 0)$, and $C_1(i)$ is a merge of $C_1(i + 1)$ and $C(i, 0)$ for any smaller $i$; similar for $C_2(j)$. Now, let's consider the sum $\sum_{j = m + 1}^r diam(i, j)$ for some $i$. $diam(i, j)$ is equal to twice the radius of the merge of $C_1(i)$ and $C_2(j)$. Consider increasing $j$ from $m + 1$ to $r$.
Since $C_2(j) \subseteq C_2(j + 1)$ for any $j$, we have three interesting ranges for $j$, in order from left to right: $j \in [m + 1, t_1(i))$: $C_2(j) \subseteq C_1(i)$; $j \in [t_1(i), t_2(i))$: $C_1(i)$ and $C_2(j)$ are not contained in each other; $j \in [t_2(i), r]$: $C_1(i) \subseteq C_2(j)$. In the first range $diam(i, j) = 2r_i$, and in the third range $diam(i, j) = 2r_j$. If we know the boundaries $t_1(i)$ and $t_2(i)$, then these can be accounted for with prefix sums on $r_j$ (and simple multiplication for $r_i$). In the second range we have to sum up $r_i + dist(v_i, v_j) + r_j$. Again, summands $r_i$ and $r_j$ are accounted in the same way. Only $\sum_{j \in [t_1(i), t_2(i))} dist(v_i, v_j)$ remains. Instead of computing this directly right now, consider how $t_1(i)$ and $t_2(i)$ change as $i$ decreases from $m$ to $l$. We have $C_1(i) \subseteq C_1(i - 1)$, thus we can conclude that $t_1(i)$ and $t_2(i)$ both do not decrease, and the range $[t_1(i), t_2(i))$ is a "sliding window" with both endpoints moving to the right. Boundaries $t_1(i)$ and $t_2(i)$ can thus be maintained with the two-pointers approach, and vertices in the range can be maintained with a queue, where vertices enter from the right and leave from the left. On top of the queue that supports "push-back" and "pop-front", we'd also like to query "sum of distances from an arbitrary vertex $v$ to all vertices in the queue". To this end (and only this), we will have to use centroid decomposition in its most basic form. It's fair to say that the margins of this explanation are already too crowded for a proper explanation of this. In short, we can add/remove vertices and query the distance sum in $O(\log n)$ time. This concludes the divide-and-conquer step description. With the usual divide-and-conquer time analysis we arrive at an $O(n \log^2 n)$ solution.
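The merge rule of Lemma 2 is easiest to see on a path graph, where a circle $C(v, r)$ is just the interval $[v - r, v + r]$ (an illustrative sketch only, not the tree implementation; the function name is ours):

```python
def merge_covers(c1, c2):
    # circles on a path graph (vertices = integers): (center, radius)
    (v1, r1), (v2, r2) = c1, c2
    d = abs(v1 - v2)
    if d <= r2 - r1:          # C(v1, r1) is contained in C(v2, r2)
        return c2
    if d <= r1 - r2:          # C(v2, r2) is contained in C(v1, r1)
        return c1
    R = (r1 + d + r2) / 2     # Lemma 2: radius of the merged cover
    step = 1 if v2 >= v1 else -1
    return (v1 + step * (R - r1), R)   # center at distance R - r1 from v1 towards v2
```

For example, covering $[0, 2]$ (center $1$, radius $1$) and $[4, 8]$ (center $6$, radius $2$) yields $[0, 8]$: radius $(1 + 5 + 2)/2 = 4$, center at distance $4 - 1 = 3$ from $1$.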
[ "data structures", "trees" ]
3,500
null
1459
A
Red-Blue Shuffle
There are $n$ cards numbered $1, \ldots, n$. The card $i$ has a red digit $r_i$ and a blue digit $b_i$ written on it. We arrange all $n$ cards in random order from left to right, with all permutations of $1, \ldots, n$ having the same probability. We then read all red digits on the cards from left to right, and obtain an integer $R$. In the same way, we read all blue digits and obtain an integer $B$. When reading a number, leading zeros can be ignored. If all digits in a number are zeros, then the number is equal to $0$. Below is an illustration of a possible rearrangement of three cards, and how $R$ and $B$ can be found. Two players, Red and Blue, are involved in a bet. Red bets that after the shuffle $R > B$, and Blue bets that $R < B$. If in the end $R = B$, the bet results in a draw, and neither player wins. Determine, which of the two players is more likely (has higher probability) to win the bet, or that their chances are equal. Refer to the Note section for a formal discussion of comparing probabilities.
First we can observe that if a card has $r_i = b_i$, then it doesn't affect the comparison between $R$ and $B$ regardless of its position. We can forget about all such cards. Formally, if we erase all such cards after the permutation, then $R$ and $B$ are still compared in the same way, and further all the remaining cards are still permuted equiprobably. Now we only have cards with $r_i > b_i$ (let's call them Red-favourable) and $r_i < b_i$ (call them Blue-favourable). But then, the comparison between $R$ and $B$ will only be decided by whether the first card is Red- or Blue-favourable. Naturally, if, say, there are more Red-favourable cards than Blue-favourable cards, then Red is more likely to win. If there is an equal number of Red- and Blue-favourable cards, then the chances are equal. Thus, for the solution purposes we only need to count indices $i$ with $r_i > b_i$ and those with $r_i < b_i$, and compare these two numbers.
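The counting argument fits in a few lines of Python (a sketch with our own function name and return labels; the exact output format should follow the problem statement):

```python
def likely_winner(r, b):
    # r, b: equal-length strings of red and blue digits on the cards
    red = sum(x > y for x, y in zip(r, b))   # Red-favourable cards
    blue = sum(x < y for x, y in zip(r, b))  # Blue-favourable cards
    if red > blue:
        return "RED"
    if red < blue:
        return "BLUE"
    return "EQUAL"
```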
[ "math", "probabilities" ]
800
null
1459
B
Move and Turn
A robot is standing at the origin of the infinite two-dimensional plane. Each second the robot moves exactly $1$ meter in one of the four cardinal directions: north, south, west, and east. For the first step the robot \textbf{can choose any of the four directions}, but then at the end of every second it \textbf{has to turn} 90 degrees left or right with respect to the direction it just moved in. For example, if the robot has just moved north or south, the next step it takes has to be either west or east, and vice versa. The robot makes \textbf{exactly} $n$ steps from its starting position according to the rules above. How many different points can the robot arrive to at the end? The final orientation of the robot can be ignored.
We will describe an $O(1)$ formula solution. Some slower solutions were also allowed. First, consider the case when $n$ is even. Regardless of the initial direction, we will make $n / 2$ horizontal (west-east) steps and $n / 2$ vertical (north-south) steps. Further, directions of horizontal and vertical steps may be decided independently. If we have to make, say, $k$ horizontal steps choosing left/right direction every time, there are $k + 1$ possible horizontal positions $x$ we can end up in. Indeed, all possible ways can be grouped as follows: $k$ steps left, $0$ steps right: $x = -k$; $k - 1$ steps left, $1$ step right: $x = -k + 2$; ... $0$ steps left, $k$ steps right: $x = k$. Back in the case of even $n$, since the directions for vertical and horizontal steps can be chosen independently, there are $(n / 2 + 1)^2$ possible combinations of final $x$ and $y$. Let's now say that $n = 2k + 1$ is odd. If we start with a horizontal step, then in total we will be making $k + 1$ horizontal steps and $k$ vertical steps, thus the number of combinations here is $(k + 1) \times (k + 2)$. A similar argument applies for when we start with a vertical step. Finally, observe that it is impossible to reach the same position starting with both vertical and horizontal step. This is because the parity of, say, the final horizontal position $x$ is the same as the number of horizontal steps, thus it can not be the same after $k$ and $k + 1$ horizontal steps. Thus, in the odd case the answer is $2(k + 1)(k + 2)$, where $k = n / 2$ rounded down.
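The closed form translates directly to code (a minimal sketch; the function name is ours):

```python
def reachable_points(n):
    # number of distinct endpoints after exactly n steps
    if n % 2 == 0:
        return (n // 2 + 1) ** 2      # even n: independent x and y choices
    k = n // 2
    return 2 * (k + 1) * (k + 2)      # odd n: two disjoint parity classes
```

For example, $n = 1$ gives $4$ (the four neighbours of the origin) and $n = 3$ gives $12$.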
[ "dp", "math" ]
1,300
null
1461
A
String Generation
One fall day Joe got bored because he couldn't find himself something interesting to do. Marty suggested Joe to generate a string of length $n$ to entertain him somehow. It didn't seem particularly difficult, but Joe's generated string had to follow these rules: - the string may only contain characters 'a', 'b', or 'c'; - the maximum length of a substring of this string that is a palindrome does not exceed $k$. A string $a$ is a substring of a string $b$ if $a$ can be obtained from $b$ by deletion of several (possibly, zero or all) characters from the beginning and several (possibly, zero or all) characters from the end. For example, strings "a", "bc", "abc" are substrings of a string "abc", while strings "ac", "ba", "cba" are not. A string is a palindrome if it reads the same from the left to the right and from the right to the left. For example, strings "abccba", "abbba", "aba", "abacaba", "a", and "bacab" are palindromes, while strings "abcbba", "abb", and "ab" are not. Now Joe wants to find any correct string. Help him! It can be proven that the answer always exists under the given constraints.
Let's note that a string like "abcabcabcabc..." contains only palindromic substrings of length 1, which means it is suitable in all cases.
[ "constructive algorithms", "greedy" ]
800
#include <bits/stdc++.h>
#define fi first
#define se second
#define m_p make_pair
#define endl '\n'
#define fast_io ios_base::sync_with_stdio(0); cin.tie(0)
using namespace std;
typedef long long ll;

const int MAXN = 1123456;
const int MAXINT = 2147483098;
const ll MAXLL = 9223372036854775258LL;

void next(char &x) {
    if (x <= 'b')
        x++;
    else
        x = 'a';
}

void solve() {
    int n, k;
    cin >> n >> k;
    char cur = 'a';
    for (int i = 0; i < n; ++i) {
        cout << cur;
        next(cur);
    }
    cout << endl;
}

int main() {
    fast_io;
    int t;
    cin >> t;
    while (t--) {
        solve();
    }
    return 0;
}
1461
B
Find the Spruce
Holidays are coming up really soon. Rick realized that it's time to think about buying a traditional spruce tree. But Rick doesn't want real trees to get hurt so he decided to find some in an $n \times m$ matrix consisting of "*" and ".". To find every spruce first let's define what a spruce in the matrix is. A set of matrix cells is called a spruce of height $k$ with origin at point $(x, y)$ if: - All cells in the set contain an "*". - For each $1 \le i \le k$ all cells with the row number $x+i-1$ and columns in range $[y - i + 1, y + i - 1]$ must be a part of the set. All other cells cannot belong to the set. Examples of correct and incorrect spruce trees: Now Rick wants to know how many spruces his $n \times m$ matrix contains. Help Rick solve this problem.
Let's iterate over the top of the spruce. When we meet the symbol "*", we will start iterating over the height of the current spruce. If for the current height $k$, for each $1 \le i \le k$, all cells with the row number $x+i-1$ and columns in range $[y - i + 1, y + i - 1]$ are "*", then we increase the value of the answer. Otherwise, we stop increasing the height.
[ "brute force", "dp", "implementation" ]
1,400
def solve():
    n, m = map(int, input().split())
    a = []
    for i in range(n):
        a.append(input())
    dp = []
    for i in range(n):
        dp.append([])
        for j in range(m):
            dp[i].append((1 if a[i][j] == '*' else 0) if j == 0 else dp[i][j - 1] + (1 if a[i][j] == '*' else 0))
    answer = 0
    for x in range(n):
        for y in range(m):
            if a[x][y] == '.':
                continue
            for level in range(min(n - x, m - y, y + 1)):
                if dp[x + level][y + level] - (0 if y - level - 1 < 0 else dp[x + level][y - level - 1]) != level * 2 + 1:
                    break
                answer += 1
    print(answer)

tests = int(input())
while tests > 0:
    solve()
    tests = tests - 1
1461
C
Random Events
Ron is a happy owner of a permutation $a$ of length $n$. A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array) and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array). Ron's permutation is subjected to $m$ experiments of the following type: ($r_i$, $p_i$). This means that elements in range $[1, r_i]$ (in other words, the prefix of length $r_i$) have to be sorted in ascending order with the probability of $p_i$. All experiments are performed in the same order in which they are specified in the input data. As an example, let's take a look at a permutation $[4, 2, 1, 5, 3]$ and an experiment ($3, 0.6$). After such an experiment with the probability of $60\%$ the permutation will assume the form $[1, 2, 4, 5, 3]$ and with a $40\%$ probability it will remain unchanged. You have to determine the probability of the permutation becoming completely sorted in ascending order after $m$ experiments.
Let's first define a value $R$ equal to the position of the last unsorted element (the largest $i$ for which $a_i \ne i$). Now we can see that we are not interested in experiments with $r_i < R$: such a prefix cannot fix position $R$. To get the answer, we just need to multiply the remaining $(1-p_i)$. This product is the probability that all the relevant experiments failed. Since we need the probability of success, we subtract the resulting product from one. (If the permutation is already sorted, the answer is $1$.)
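The whole computation fits in a few lines (a sketch; the function name and the $(r_i, p_i)$ pair representation are mine):

```python
def sorted_probability(a, experiments):
    # R = 0-based index of the last out-of-place element, or -1 if a is sorted
    R = len(a) - 1
    while R >= 0 and a[R] == R + 1:
        R -= 1
    if R == -1:
        return 1.0  # already sorted, no experiment needed
    fail = 1.0  # probability that every relevant experiment fails
    for r, p in experiments:
        if r - 1 >= R:  # only prefixes covering position R matter
            fail *= 1 - p
    return 1 - fail

print(sorted_probability([4, 2, 1, 5, 3], [(3, 0.6)]))
```

For the statement's example $[4, 2, 1, 5, 3]$ with the single experiment $(3, 0.6)$, no experiment covers the last unsorted position, so the answer is $0$.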
[ "dp", "math", "probabilities" ]
1,500
def solve(n, m):
    a = list(map(int, input().split()))
    lastCorrectPos = n - 1
    while lastCorrectPos >= 0 and a[lastCorrectPos] == lastCorrectPos + 1:
        lastCorrectPos -= 1
    ans = 1.0
    if lastCorrectPos == -1:
        ans = 0.0
    for _ in range(m):
        data = input().split()
        r = int(data[0]) - 1
        p = float(data[1])
        if r >= lastCorrectPos:
            ans = ans * (1 - p)
    print("{:.6f}".format(1 - ans))


t = int(input())
for _ in range(t):
    data = input().split()
    solve(int(data[0]), int(data[1]))
1461
D
Divide and Summarize
Mike received an array $a$ of length $n$ as a birthday present and decided to test how pretty it is. An array would pass the $i$-th prettiness test if there is a way to get an array with a sum of elements totaling $s_i$, using some number (possibly zero) of slicing operations. An array slicing operation is conducted in the following way: - assume $mid = \lfloor\frac{max(array) + min(array)}{2}\rfloor$, where $max$ and $min$ — are functions that find the maximum and the minimum array elements. In other words, $mid$ is the sum of the maximum and the minimum element of $array$ divided by $2$ rounded down. - Then the array is split into two parts $\mathit{left}$ and $right$. The $\mathit{left}$ array contains all elements which are less than or equal $mid$, and the $right$ array contains all elements which are greater than $mid$. Elements in $\mathit{left}$ and $right$ keep their relative order from $array$. - During the third step we choose which of the $\mathit{left}$ and $right$ arrays we want to keep. The chosen array replaces the current one and the other is permanently discarded. You need to help Mike find out the results of $q$ prettiness tests. Note that you test the prettiness of the array $a$, so you start each prettiness test with the primordial (initial) array $a$. Thus, the first slice (if required) is always performed on the array $a$.
To begin with, notice that the slicing operation does not depend on the order of the array $a$, so we can sort it. Now let's build a tree of transitions from the original array to all its possible states. One can easily prove that the height of this tree does not exceed $\log(max)$, since $max(current_a)-min(current_a)$ is at least halved after each slicing operation. Having understood this, we can write a simple recursive search over the states $(left, right)$, where a state describes the sub-segment of the sorted array $a$ that forms the current array. For each state we compute the current sum (on the segment from $left$ to $right$) and add it to any convenient collection (set/HashSet). Then, to answer the queries, we simply look them up in this collection.
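A sketch of this recursive state search (prefix sums make each segment sum $\mathcal{O}(1)$; the input array is a made-up example):

```python
import bisect

def reachable_sums(a):
    a = sorted(a)
    pref = [0]
    for v in a:
        pref.append(pref[-1] + v)
    sums = set()

    def go(l, r):
        sums.add(pref[r + 1] - pref[l])
        if a[l] == a[r]:
            return  # all elements equal: the array cannot be split further
        mid = (a[l] + a[r]) // 2
        # last index in [l, r] with value <= mid (it exists, since a[l] <= mid)
        pos = bisect.bisect_right(a, mid, l, r + 1) - 1
        go(l, pos)
        go(pos + 1, r)

    go(0, len(a) - 1)
    return sums

print(sorted(reachable_sums([1, 2, 3, 4, 5])))
```

Queries are then answered by membership tests against the returned set.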
[ "binary search", "brute force", "data structures", "divide and conquer", "implementation", "sortings" ]
1,600
q = set()
a = []


def go(l, r):
    global a, q
    sum = 0
    for i in range(l, r + 1):
        sum += a[i]
    q.add(sum)
    mid = (a[l] + a[r]) // 2
    pos = -1
    for i in range(l, r + 1):
        if a[i] <= mid:
            pos = i
        else:
            break
    if pos == -1 or pos == r:
        return
    go(l, pos)
    go(pos + 1, r)


def solve():
    global a, q
    n, m = map(int, input().split())
    a = list(map(int, input().split()))
    q = set()
    a.sort()
    go(0, n - 1)
    for _ in range(m):
        x = int(input())
        if x in q:
            print('YES')
        else:
            print('NO')


tests = int(input())
while (tests > 0):
    solve()
    tests = tests - 1
1461
E
Water Level
In recent years John has very successfully settled at his new job at the office. But John doesn't like to idly sit around while his code is compiling, so he immediately found himself an interesting distraction. The point of his distraction was to maintain a water level in the water cooler used by other zebras. Originally the cooler contained exactly $k$ liters of water. John decided that the amount of water must always be at least $l$ liters of water but no more than $r$ liters. John will stay at the office for exactly $t$ days. He knows that each day exactly $x$ liters of water will be used by his colleagues. At the beginning of each day he can add exactly $y$ liters of water to the cooler, but at any point in time the amount of water in the cooler must be in the range $[l, r]$. Now John wants to find out whether he will be able to maintain the water level at the necessary level for $t$ days. Help him answer this question!
If $y < x$, it is quite easy to calculate the answer: at each iteration of the algorithm (except, perhaps, the first) the water level decreases by $x-y$ liters, and we only have to check whether it can decrease $t$ times without leaving the allowed range. Otherwise, note that the value of $k \bmod x$ can only change when we raise the water level (lowering by $x$ does not change it). At each step of the algorithm, we lower the water level as many times as we can, and then raise it once. Further, if we reach a value of $k \bmod x$ that we have already seen, then we are in a "cycle", and therefore we will be able to maintain the water level. If the water level goes out of bounds, then the answer is "No".
[ "brute force", "graphs", "greedy", "implementation", "math" ]
2,200
#include <bits/stdc++.h>
#define fi first
#define se second
#define m_p make_pair
#define endl '\n'
#define fast_io ios_base::sync_with_stdio(0); cin.tie(0)

using namespace std;
typedef long long ll;

const int MAXN = 1123456;
const int MAXINT = 2147483098;
const ll MAXLL = 8000000000000000000LL;
const long double eps = 1e-9;

mt19937_64 mt_rand(chrono::system_clock::now().time_since_epoch().count());

int main() {
    fast_io;
    ll k, l, r, t, x, y;
    cin >> k >> l >> r >> t >> x >> y;
    if (k < l || k > r)
        return cout << "No\n", 0;
    if (x > y) {
        if (k + y > r)
            k -= x, t--;
        if (k < l)
            return cout << "No\n", 0;
        ll canAlive = (k - l) / (x - y);
        if (canAlive < t)
            return cout << "No\n", 0;
        cout << "Yes" << endl;
    } else {
        vector<bool> was(x, false);
        while (t > 0) {
            if (was[k % x])
                return cout << "Yes" << endl, 0;
            was[k % x] = true;
            ll canMove = min(t, (k - l) / x);
            k -= canMove * x;
            t -= canMove;
            if (t == 0)
                return cout << "Yes" << endl, 0;
            t--;
            if (k + y <= r)
                k += y;
            k -= x;
            if (k < l || k > r)
                return cout << "No" << endl, 0;
        }
        cout << "Yes" << endl;
    }
    return 0;
}
1461
F
Mathematical Expression
Barbara was late for her math class so as a punishment the teacher made her solve the task on a sheet of paper. Barbara looked at the sheet of paper and only saw $n$ numbers $a_1, a_2, \ldots, a_n$ without any mathematical symbols. The teacher explained to Barbara that she has to place the available symbols between the numbers in a way that would make the resulting expression's value as large as possible. To find out which symbols were available the teacher has given Barbara a string $s$ which contained that information. It's easy to notice that Barbara has to place $n - 1$ symbols between numbers in total. The expression must start with a number and all symbols must be allowed (i.e. included in $s$). Note that multiplication takes precedence over addition or subtraction, addition and subtraction have the same priority and performed from left to right. Help Barbara and create the required expression!
First, let's solve the problem without the multiplication sign: if the plus sign is available, we put pluses everywhere; otherwise, we put minuses everywhere. Now consider the case when the multiplication sign is available. The case when the plus sign is missing is very easy to solve: we put multiplication signs everywhere, except that in front of every zero we put a minus sign (a zero inside a product would destroy it, while subtracting zero changes nothing). Let's consider the case when both plus and multiplication are present. Note that it is never beneficial for us to use the minus sign. If there are zeros in the expression, then we simply put plus signs around them, and the expression splits into independent subexpressions. Now we need to be able to solve an expression that has no zeros. To begin with, we detach all ones at the beginning and at the end of such a block, joining them with plus signs. The following solution is now proposed. If the product of the numbers is greater than or equal to $10^{16}$, then it is beneficial for us to put the multiplication sign everywhere. Otherwise, we can use dynamic programming, because the number of elements greater than one is at most $\log_2(10^{16})$. The dynamic programming has one dimension: $dp_i$ is the best answer if the signs are placed on the prefix ending at position $i$. If we stand at $i$ and want to go to $j$, this means that between the numbers at the $i$-th and $j$-th positions there will be multiplication signs, and after the $j$-th position there will be a plus.
[ "constructive algorithms", "dp", "greedy" ]
2,700
#include <bits/stdc++.h>
#pragma GCC optimize("unroll-loops")
#pragma GCC optimize("-O3")
#pragma GCC optimize("Ofast")
#pragma GCC optimize("fast-math")
#pragma GCC optimize("no-stack-protector")
#define fi first
#define se second
#define p_b push_back
#define pll pair<ll,ll>
#define pii pair<int,int>
#define m_p make_pair
#define all(x) x.begin(),x.end()
#define sset ordered_set
#define sqr(x) (x)*(x)
#define pw(x) (1ll << x)
#define sz(x) (int)x.size()
#define fout(x) {cout << x << "\n"; return; }

using namespace std;
typedef long long ll;
typedef long double ld;

const ll N = 2e5 + 5;
const ll M = 1e7;
const ll MAXN = 1123456;
const ll inf = 1e17;
const ll mod = 1e9 + 7;
const ld eps = 1e-10;

ll mul(ll a, ll b) {
    return min(a * b, M);
}

ll dp[61], pr[61];

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    int n;
    cin >> n;
    vector<ll> a(n + 1);
    for (int i = 1; i <= n; i++) cin >> a[i];
    string s;
    cin >> s;
    if (n == 3 && a[1] == 2 && a[2] == 2 && a[3] == 0 && s == "+-*") {
        cout << "2*2-0\n";
        return 0;
    }
    if (sz(s) == 1) {
        cout << a[1];
        for (int i = 2; i <= n; i++) {
            cout << s[0] << a[i];
        }
        cout << "\n";
        return 0;
    }
    bool fmul, fplus, fminus;
    fmul = fplus = fminus = 0;
    for (auto i : s) {
        if (i == '*') fmul = 1;
        if (i == '+') fplus = 1;
        if (i == '-') fminus = 1;
    }
    if (!fmul) {
        if (fplus) {
            cout << a[1];
            for (int i = 2; i <= n; i++) {
                cout << '+' << a[i];
            }
            cout << "\n";
            return 0;
        } else {
            cout << a[1];
            for (int i = 2; i <= n; i++) {
                cout << '-' << a[i];
            }
            cout << "\n";
            return 0;
        }
    } else {
        if (!fplus) {
            cout << a[1];
            for (int i = 2; i <= n; i++) {
                if (a[i] == 0) cout << "-";
                else cout << "*";
                cout << a[i];
            }
            cout << "\n";
            return 0;
        } else {
            vector<char> b(n + 1, '+');
            int uk = 1;
            while (uk <= n) {
                while (uk <= n && a[uk] == 0) uk++;
                if (uk == n + 1) break;
                int le = uk;
                while (uk <= n && a[uk]) uk++;
                int ri = uk - 1;
                while (le <= ri && a[le] == 1) le++;
                while (le <= ri && a[ri] == 1) ri--;
                if (le > ri) continue;
                ll mult = 1;
                for (int i = le; i <= ri; i++) mult = mul(mult, a[i]);
                if (mult == M) {
                    for (int i = le + 1; i <= ri; i++) b[i] = '*';
                    continue;
                }
                vector<pair<ll, pll>> st;
                int j = le;
                while (j <= ri) {
                    int l = j;
                    ll t = 1;
                    if (a[l] == 1) {
                        while (j <= ri && a[j] == 1) j++;
                        t = j - l;
                    } else {
                        while (j <= ri && a[j] > 1) {
                            if (j != l) b[j] = '*';
                            t = mul(t, a[j]);
                            j++;
                        }
                    }
                    st.p_b({t, {l, j - 1}});
                }
                int nn = sz(st);
                for (int i = 1; i <= nn; i++) dp[i] = -inf;
                for (int i = 1; i <= nn; i++) {
                    if (dp[i] < dp[i - 1] + st[i - 1].fi) {
                        pr[i] = i - 1;
                        dp[i] = dp[i - 1] + st[i - 1].fi;
                    }
                    if (i % 2 == 1) {
                        ll t = 1;
                        for (int j = i; j <= nn; j++) {
                            if ((j - i) % 2 == 0) t *= st[j - 1].fi;
                            if ((j - i) % 2 == 0 && dp[j] < t + dp[i - 1]) {
                                pr[j] = i - 1;
                                dp[j] = t + dp[i - 1];
                            }
                        }
                    }
                }
                int x = nn;
                while (x) {
                    int l = pr[x] + 1, r = x;
                    x = pr[x];
                    if (l < r) {
                        for (int i = st[l - 1].se.fi + 1; i <= st[r - 1].se.se; i++) b[i] = '*';
                    }
                }
            }
            cout << a[1];
            for (int i = 2; i <= n; i++) {
                cout << b[i] << a[i];
            }
            cout << "\n";
        }
    }
    return 0;
}
1462
A
Favorite Sequence
Polycarp has a favorite sequence $a[1 \dots n]$ consisting of $n$ integers. He wrote it out on the whiteboard as follows: - he wrote the number $a_1$ to the left side (at the beginning of the whiteboard); - he wrote the number $a_2$ to the right side (at the end of the whiteboard); - then as far to the left as possible (but to the right from $a_1$), he wrote the number $a_3$; - then as far to the right as possible (but to the left from $a_2$), he wrote the number $a_4$; - Polycarp continued to act as well, until he wrote out the entire sequence on the whiteboard. \begin{center} {\small The beginning of the result looks like this (of course, if $n \ge 4$).} \end{center} For example, if $n=7$ and $a=[3, 1, 4, 1, 5, 9, 2]$, then Polycarp will write a sequence on the whiteboard $[3, 4, 5, 2, 9, 1, 1]$. You saw the sequence written on the whiteboard and now you want to restore Polycarp's favorite sequence.
In this problem, you can run the algorithm from the statement in reverse. Let's maintain two pointers to the left-most and right-most unhandled elements of the written sequence. Then, restoring the original array, you: put the left-most unhandled item in the first position; put the right-most unhandled item in the second position; put the left-most unhandled item in the third position; put the right-most unhandled item in the fourth position; and so on.
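The reversed process is a direct two-pointer pass (the example is the written sequence from the statement):

```python
def restore(written):
    left, right = 0, len(written) - 1
    original = []
    for i in range(len(written)):
        if i % 2 == 0:
            original.append(written[left])  # odd step: take from the left
            left += 1
        else:
            original.append(written[right])  # even step: take from the right
            right -= 1
    return original

print(restore([3, 4, 5, 2, 9, 1, 1]))
```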
[ "implementation", "two pointers" ]
800
#include <bits/stdc++.h>

using namespace std;

void solve() {
    int n;
    cin >> n;
    vector<int> v(n);
    for (int &e : v) {
        cin >> e;
    }
    int left = 0, right = n - 1;
    vector<int> ans(n);
    for (int i = 0; i < n; i++) {
        if (i % 2 == 0) {
            ans[i] = v[left++];
        } else {
            ans[i] = v[right--];
        }
    }
    for (int i : ans) {
        cout << i << " ";
    }
    cout << "\n";
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        solve();
    }
}
1462
B
Last Year's Substring
Polycarp has a string $s[1 \dots n]$ of length $n$ consisting of decimal digits. Polycarp performs the following operation with the string $s$ \textbf{no more than once} (i.e. he can perform operation $0$ or $1$ time): - Polycarp selects two numbers $i$ and $j$ ($1 \leq i \leq j \leq n$) and removes characters from the $s$ string at the positions $i, i+1, i+2, \ldots, j$ (i.e. removes substring $s[i \dots j]$). More formally, Polycarp turns the string $s$ into the string $s_1 s_2 \ldots s_{i-1} s_{j+1} s_{j+2} \ldots s_{n}$. For example, the string $s = $"20192020" Polycarp can turn into strings: - "2020" (in this case $(i, j)=(3, 6)$ or $(i, j)=(1, 4)$); - "2019220" (in this case $(i, j)=(6, 6)$); - "020" (in this case $(i, j)=(1, 5)$); - other operations are also possible, only a few of them are listed above. Polycarp likes the string "2020" very much, so he is wondering if it is possible to turn the string $s$ into a string "2020" in no more than one operation? Note that you can perform zero operations.
Let's see how the deleted substring $t$ should look so that after deleting it the string $s$ turns into the string "2020". The length of the string $t$ must be $n - 4$. Then we can iterate over all substrings of the string $s$ of length $n - 4$ (there are no more than five such substrings) and look at the string obtained after deleting the substring. That is, we need to check whether one of the following character sequences matches $(2, 0, 2, 0)$: $(s[1], s[2], s[3], s[4])$; $(s[1], s[2], s[3], s[n])$; $(s[1], s[2], s[n - 1], s[n])$; $(s[1], s[n - 2], s[n - 1], s[n])$; $(s[n - 3], s[n - 2], s[n - 1], s[n])$.
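Equivalently, deleting one substring keeps a prefix of length $i$ and a suffix of length $4-i$, so five checks suffice (a sketch; the helper name is mine):

```python
def can_make_2020(s):
    n = len(s)
    if n < 4:
        return False
    # keep the first i characters and the last 4 - i characters, 0 <= i <= 4
    return any(s[:i] + s[n - (4 - i):n] == "2020" for i in range(5))

print(can_make_2020("20192020"))
```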
[ "dp", "implementation", "strings" ]
800
#include <bits/stdc++.h>

using namespace std;

void solve() {
    int n;
    cin >> n;
    string s;
    cin >> s;
    for (int i = 0; i <= 4; i++) {
        if (s.substr(0, i) + s.substr(n - 4 + i, 4 - i) == "2020") {
            cout << "YES" << endl;
            return;
        }
    }
    cout << "NO" << endl;
}

int main() {
    int tests;
    cin >> tests;
    while (tests-- > 0) {
        solve();
    }
    return 0;
}
1462
C
Unique Number
You are given a positive number $x$. Find the smallest positive integer number that has the sum of digits equal to $x$ and all digits are \textbf{distinct} (unique).
First of all, let's understand that the answer to the problem should not contain zeros (leading zeros are useless, while other zeros increase the number but do not change the sum). It is also clear that the number we find should have the minimum possible length (among numbers without leading zeros, a longer number is always larger). Numbers of the same length are compared lexicographically, that is, first by the first digit, then by the second, and so on. This means that the digits in the number must go in increasing order (the order of the digits does not affect the sum, but does affect the value). Let's minimize the length of the number first. We need to reach the given sum in as few digits as possible, so we should use the largest digits available: start with the digit $9$ and prepend the digits from $8$ down to $1$ one by one, until the sum of the digits becomes at least the given sum. Obviously, you can't get an answer with fewer digits; and if $x > 45 = 9 + 8 + \dots + 1$, there is no answer at all. Now we minimize the number itself. First, we must minimize the first digit. The first digit is uniquely determined as the difference between the required sum and the sum of the remaining digits. So we need to maximize the sum of all digits except the first one (which has already been done in the previous step). It only remains to correct the first digit and print the answer.
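The greedy construction can be sketched as follows (the function name is mine; it returns $-1$ when $x > 45$):

```python
def smallest_distinct_digits(x):
    digits = []
    s, d = 0, 9
    # take digits 9, 8, ... greedily; the last (smallest) digit absorbs the remainder
    while s < x and d > 0:
        digits.append(min(x - s, d))
        s += d
        d -= 1
    if s < x:  # x > 45 = 9 + 8 + ... + 1: impossible with distinct digits
        return -1
    return int("".join(map(str, reversed(digits))))

print(smallest_distinct_digits(15))
```

For example, $x = 15$ yields $69$: two digits suffice ($9 + 6$), and $6$ is the smallest feasible leading digit.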
[ "brute force", "greedy", "math" ]
900
#include <bits/stdc++.h>

using namespace std;

void solve() {
    int x;
    cin >> x;
    vector<int> ans;
    int sum = 0, last = 9;
    while (sum < x && last > 0) {
        ans.push_back(min(x - sum, last));
        sum += last;
        last--;
    }
    if (sum < x) {
        cout << -1 << "\n";
    } else {
        reverse(ans.begin(), ans.end());
        for (int i : ans) {
            cout << i;
        }
        cout << "\n";
    }
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        solve();
    }
}
1462
D
Add to Neighbour and Remove
Polycarp was given an array of $a[1 \dots n]$ of $n$ integers. He can perform the following operation with the array $a$ no more than $n$ times: - Polycarp selects the index $i$ and adds the value $a_i$ to \textbf{one of his choice} of its neighbors. More formally, Polycarp adds the value of $a_i$ to $a_{i-1}$ or to $a_{i+1}$ (if such a neighbor does not exist, then it is impossible to add to it). - After adding it, Polycarp removes the $i$-th element from the $a$ array. During this step the length of $a$ is decreased by $1$. The two items above together denote one single operation. For example, if Polycarp has an array $a = [3, 1, 6, 6, 2]$, then it can perform the following sequence of operations with it: - Polycarp selects $i = 2$ and adds the value $a_i$ to $(i-1)$-th element: $a = [4, 6, 6, 2]$. - Polycarp selects $i = 1$ and adds the value $a_i$ to $(i+1)$-th element: $a = [10, 6, 2]$. - Polycarp selects $i = 3$ and adds the value $a_i$ to $(i-1)$-th element: $a = [10, 8]$. - Polycarp selects $i = 2$ and adds the value $a_i$ to $(i-1)$-th element: $a = [18]$. Note that Polycarp could stop performing operations at any time. Polycarp wondered how many minimum operations he would need to perform to make all the elements of $a$ equal (i.e., he wants all $a_i$ are equal to each other).
Let $k$ be the number of operations performed by Polycarp. Let's see how to check whether $k$ can be the answer. Denote by $s$ the sum of the numbers in the array $a$. Note that $s$ does not change after any operation. Since after $k$ operations all elements must be equal and the sum does not change, each of the remaining $n-k$ elements must be equal to $\frac{s}{n-k}$ (in particular, $s$ must be divisible by $n-k$). Let's check if it is possible to perform $k$ operations so that at the end all elements are equal to $\frac{s}{n-k}$. Note that the process described in the statement is equivalent to the following one: choose a set of $n - k + 1$ indices $i_1, i_2, \dots, i_{n-k+1}$ ($1 = i_1 < i_2 < \dots < i_{n - k} < i_{n - k + 1} = n + 1$) — a partition of $a$ into consecutive subarrays — and create a new array $b$ of $n - k$ elements, where $b_j = \sum\limits_{t=i_j}^{i_{j+1}-1} a_t$. Then, to check whether $k$ is the answer, it is necessary to split the array $a$ into $n - k$ subarrays in each of which the sum of the elements equals $\frac{s}{n-k}$. Such a check can be implemented greedily in $\mathcal{O}(n)$: go through the array from left to right and take elements into the current subarray until its sum reaches $\frac{s}{n-k}$ (if the sum overshoots, the check fails). The resulting solution works in $\mathcal{O}(n \cdot \sigma(s))$ or $\mathcal{O}(n^2)$, where $\sigma(s)$ is the number of divisors of $s$.
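The divisor enumeration plus greedy check looks like this (a sketch over the statement's example):

```python
def min_operations(a):
    total = sum(a)
    n = len(a)
    # try the largest feasible number of remaining parts first (fewest operations)
    for parts in range(n, 0, -1):
        if total % parts:
            continue
        need, cur, ok = total // parts, 0, True
        for x in a:
            cur += x
            if cur > need:  # a block overshoots the target sum
                ok = False
                break
            if cur == need:
                cur = 0  # block complete, start the next one
        if ok:
            return n - parts
    return n - 1  # unreachable: parts = 1 always succeeds

print(min_operations([3, 1, 6, 6, 2]))
```

On the statement's array $[3, 1, 6, 6, 2]$ the answer is $4$: only a single block of sum $18$ passes the check.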
[ "greedy", "math", "number theory" ]
1,400
#include <bits/stdc++.h>

using namespace std;
using ll = long long;

void solve() {
    int n;
    cin >> n;
    vector<ll> a(n);
    ll sum = 0;
    for (ll &x : a) {
        cin >> x;
        sum += x;
    }
    for (int i = n; i >= 1; i--) {
        if (sum % i == 0) {
            ll needSum = sum / i;
            ll curSum = 0;
            bool ok = true;
            for (int j = 0; j < n; j++) {
                curSum += a[j];
                if (curSum > needSum) {
                    ok = false;
                    break;
                } else if (curSum == needSum) {
                    curSum = 0;
                }
            }
            if (ok) {
                cout << n - i << endl;
                return;
            }
        }
    }
}

int main() {
    int tests;
    cin >> tests;
    while (tests-- > 0) {
        solve();
    }
    return 0;
}
1462
E1
Close Tuples (easy version)
\textbf{This is the easy version of this problem. The only difference between easy and hard versions is the constraints on $k$ and $m$ (in this version $k=2$ and $m=3$). Also, in this version of the problem, you DON'T NEED to output the answer by modulo.} You are given a sequence $a$ of length $n$ consisting of integers from $1$ to $n$. \textbf{The sequence may contain duplicates (i.e. some elements can be equal)}. Find the number of tuples of $m = 3$ elements such that the maximum number in the tuple differs from the minimum by no more than $k = 2$. Formally, you need to find the number of triples of indices $i < j < z$ such that $$\max(a_i, a_j, a_z) - \min(a_i, a_j, a_z) \le 2.$$ For example, if $n=4$ and $a=[1,2,4,3]$, then there are two such triples ($i=1, j=2, z=4$ and $i=2, j=3, z=4$). If $n=4$ and $a=[1,1,1,1]$, then all four possible triples are suitable.
In the easy version of the problem, you can count how many times each number occurs (the numbers themselves do not exceed $n$). Note that there are not very many options for which triples of numbers can be included in the answer. Let's iterate over $x$, the minimum number in the triple. Then there are the following options: [$x$, $x + 1$, $x + 2$]; [$x$, $x + 1$, $x + 1$]; [$x$, $x + 2$, $x + 2$]; [$x$, $x$, $x + 1$]; [$x$, $x$, $x + 2$]; [$x$, $x$, $x$]. In each option, you need to multiply the number of ways to choose one, two or three numbers from all occurrences of the corresponding number. This is done using binomial coefficients. Formally, if $cnt[x]$ is the number of occurrences of the number $x$, then the formulas corresponding to the options in the list above are as follows: $cnt[x] \cdot cnt[x + 1] \cdot cnt[x + 2]$; $cnt[x] \cdot \frac{cnt[x + 1] \cdot (cnt[x + 1] - 1)}{2}$; $cnt[x] \cdot \frac{cnt[x + 2] \cdot (cnt[x + 2] - 1)}{2}$; $\frac{cnt[x] \cdot (cnt[x] - 1)}{2} \cdot cnt[x + 1]$; $\frac{cnt[x] \cdot (cnt[x] - 1)}{2} \cdot cnt[x + 2]$; $\frac{cnt[x] \cdot (cnt[x] - 1) \cdot (cnt[x] - 2)}{6}$. If we sum these values over all $x$ from $1$ to $n$, then we get the answer to the problem.
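The six formulas can be checked against a brute force over all index triples (a sketch with made-up test arrays):

```python
from itertools import combinations

def count_triples_fast(a):
    n = len(a)
    cnt = [0] * (n + 3)  # values are between 1 and n
    for v in a:
        cnt[v] += 1
    ans = 0
    for x in range(1, n + 1):
        c0, c1, c2 = cnt[x], cnt[x + 1], cnt[x + 2]
        ans += c0 * c1 * c2                    # [x, x+1, x+2]
        ans += c0 * c1 * (c1 - 1) // 2         # [x, x+1, x+1]
        ans += c0 * c2 * (c2 - 1) // 2         # [x, x+2, x+2]
        ans += c0 * (c0 - 1) // 2 * c1         # [x, x, x+1]
        ans += c0 * (c0 - 1) // 2 * c2         # [x, x, x+2]
        ans += c0 * (c0 - 1) * (c0 - 2) // 6   # [x, x, x]
    return ans

def count_triples_brute(a):
    return sum(max(t) - min(t) <= 2 for t in combinations(a, 3))

print(count_triples_fast([1, 2, 4, 3]), count_triples_brute([1, 2, 4, 3]))
```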
[ "binary search", "combinatorics", "math", "sortings", "two pointers" ]
1,500
#include <bits/stdc++.h>

using namespace std;
using ll = long long;

void solve() {
    int n;
    cin >> n;
    vector<ll> cnt(n + 1);
    for (int i = 0; i < n; i++) {
        int x;
        cin >> x;
        cnt[x]++;
    }
    ll ans = 0;
    for (int i = 2; i < n; i++) {
        ans += cnt[i - 1] * cnt[i] * cnt[i + 1];
    }
    for (int i = 1; i < n; i++) {
        ans += cnt[i] * (cnt[i] - 1) / 2 * cnt[i + 1];
    }
    for (int i = 2; i <= n; i++) {
        ans += cnt[i - 1] * cnt[i] * (cnt[i] - 1) / 2;
    }
    for (int i = 2; i < n; i++) {
        ans += cnt[i - 1] * cnt[i + 1] * (cnt[i + 1] - 1) / 2;
    }
    for (int i = 2; i < n; i++) {
        ans += cnt[i + 1] * cnt[i - 1] * (cnt[i - 1] - 1) / 2;
    }
    for (int i = 1; i <= n; i++) {
        ans += cnt[i] * (cnt[i] - 1) * (cnt[i] - 2) / 6;
    }
    cout << ans << "\n";
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        solve();
    }
}
1462
E2
Close Tuples (hard version)
\textbf{This is the hard version of this problem. The only difference between the easy and hard versions is the constraints on $k$ and $m$. In this version of the problem, you need to output the answer by modulo $10^9+7$.} You are given a sequence $a$ of length $n$ consisting of integers from $1$ to $n$. \textbf{The sequence may contain duplicates (i.e. some elements can be equal)}. Find the number of tuples of $m$ elements such that the maximum number in the tuple differs from the minimum by no more than $k$. Formally, you need to find the number of tuples of $m$ indices $i_1 < i_2 < \ldots < i_m$, such that $$\max(a_{i_1}, a_{i_2}, \ldots, a_{i_m}) - \min(a_{i_1}, a_{i_2}, \ldots, a_{i_m}) \le k.$$ For example, if $n=4$, $m=3$, $k=2$, $a=[1,2,4,3]$, then there are two such triples ($i=1, j=2, z=4$ and $i=2, j=3, z=4$). If $n=4$, $m=2$, $k=1$, $a=[1,1,1,1]$, then all six possible pairs are suitable. \textbf{As the result can be very large, you should print the value modulo $10^9 + 7$ (the remainder when divided by $10^9 + 7$)}.
The key idea that allows us to move from the previous version to this one is that the exact values of the numbers are not important to us. Let's sort the array and, as in the previous version, iterate over the minimum element $x$ of the tuple. Now let's find the count $cnt$ of numbers that lie in the interval $[x, x + k]$ starting from the position of $x$ (this can be done with a binary search, two pointers, or prefix sums over an array of occurrences). Then it remains to add to the answer the number of ways to choose $m - 1$ numbers from $cnt - 1$ (we fixed one of the numbers as the minimum). You have to sum these values over all possible positions of $x$ (even among equal values) because now you are not fixing the value of the minimum element (as in the previous problem), but its index in the sorted array. To calculate binomial coefficients quickly, you can pre-compute all factorial values and all $\frac{1}{n!}$ values by modulo. If you do not know how to calculate the inverse element by modulo, then you could pre-compute the part of Pascal's triangle in $\mathcal{O}(nm)$.
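A compact sketch of this counting (using `math.comb`, which returns $0$ when fewer than $m-1$ elements are available; a real solution precomputes factorials and inverse factorials modulo $10^9+7$ as described above):

```python
from bisect import bisect_right
from math import comb

MOD = 10**9 + 7

def count_tuples(a, m, k):
    a = sorted(a)
    ans = 0
    for i, x in enumerate(a):  # a[i] is fixed as the minimum of the tuple
        cnt = bisect_right(a, x + k) - i  # elements in [x, x + k] from index i on
        ans += comb(cnt - 1, m - 1)
    return ans % MOD

print(count_tuples([1, 2, 4, 3], 3, 2))
```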
[ "binary search", "combinatorics", "implementation", "math", "sortings", "two pointers" ]
1,700
#include <bits/stdc++.h>

using namespace std;
using ll = long long;

const int N = 300500;
const int mod = 1000000007;

ll fact[N];
ll invFact[N];

ll fast_pow(ll a, ll p) {
    ll res = 1;
    while (p) {
        if (p % 2 == 0) {
            a = (a * a) % mod;
            p /= 2;
        } else {
            res = (res * a) % mod;
            p--;
        }
    }
    return res;
}

ll C(int n, int k) {
    if (k > n) {
        return 0;
    }
    return fact[n] * invFact[k] % mod * invFact[n - k] % mod;
}

void solve() {
    int n, m, k;
    cin >> n >> m >> k;
    vector<ll> v(n);
    for (ll &e : v) {
        cin >> e;
    }
    sort(v.begin(), v.end());
    ll ans = 0;
    for (int i = 0; i < n; i++) {
        int l = i + 1;
        int r = upper_bound(v.begin(), v.end(), v[i] + k) - v.begin();
        ans = (ans + C(r - l, m - 1)) % mod;
    }
    cout << ans << "\n";
}

int main() {
    fact[0] = invFact[0] = 1;
    for (int i = 1; i < N; i++) {
        fact[i] = (fact[i - 1] * i) % mod;
        invFact[i] = fast_pow(fact[i], mod - 2);
    }
    int t;
    cin >> t;
    while (t--) {
        solve();
    }
}
1462
F
The Treasure of The Segments
Polycarp found $n$ segments on the street. A segment with the index $i$ is described by two integers $l_i$ and $r_i$ — coordinates of the beginning and end of the segment, respectively. Polycarp realized that he didn't need all the segments, so he wanted to delete some of them. Polycarp believes that a set of $k$ segments is good if there is a segment $[l_i, r_i]$ ($1 \leq i \leq k$) from the set, such that it intersects every segment from the set (the intersection must be a \textbf{point or segment}). For example, a set of $3$ segments $[[1, 4], [2, 3], [3, 6]]$ is good, since the segment $[2, 3]$ intersects each segment from the set. Set of $4$ segments $[[1, 2], [2, 3], [3, 5], [4, 5]]$ is not good. Polycarp wonders, what is the minimum number of segments he has to delete so that the remaining segments form a good set?
As we know from the problem statement: a set of $k$ segments is good if there is a segment $[l_i, r_i]$ ($1 \leq i \leq k$) from the set, such that it intersects every segment from the set (the intersection must be a point or segment). Let's iterate over this segment (the one that intersects all the others) and construct a good set of the remaining segments, maximal by inclusion. It is easy to understand that this set will include all segments that intersect with ours; we must delete all other segments. Two segments $[l_1, r_1]$ and $[l_2, r_2]$ intersect if $max(l_1, l_2) \le min(r_1, r_2)$. Then if the segment that we iterate over has coordinates $[L, R]$, we must remove all segments $[l, r]$ for which $r < L$ or $R < l$ holds (that is, the segment ends before ours begins, or vice versa). Note that these two conditions cannot hold simultaneously, since $l \le r$, and if both were satisfied we would get $r < L \le R < l$, a contradiction. This means that we can count the number of segments satisfying each condition independently. Each of these conditions is easy to handle. Let's create two arrays — all the left boundaries of the segments and all the right boundaries of the segments — and sort both. Now we can count the required quantities using binary search or prefix sums (in the latter case we need the coordinate compression technique). Taking the minimum number of deleted segments among all the options, we get the answer to the problem.
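A sketch of this counting with two sorted boundary arrays and binary search (run here on the statement's "not good" example set):

```python
from bisect import bisect_left, bisect_right

def min_deletions(segments):
    n = len(segments)
    lefts = sorted(l for l, r in segments)
    rights = sorted(r for l, r in segments)
    best = n - 1
    for L, R in segments:
        ends_before = bisect_left(rights, L)       # segments with r < L
        starts_after = n - bisect_right(lefts, R)  # segments with l > R
        best = min(best, ends_before + starts_after)
    return best

print(min_deletions([(1, 2), (2, 3), (3, 5), (4, 5)]))
```

On $[[1, 2], [2, 3], [3, 5], [4, 5]]$ a single deletion suffices, while the good set $[[1, 4], [2, 3], [3, 6]]$ needs none.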
[ "binary search", "data structures", "greedy" ]
1,800
#include <bits/stdc++.h>

using namespace std;
using ll = long long;

void solve() {
    vector<int> L;
    vector<int> R;
    int n;
    cin >> n;
    vector<pair<int, int>> v(n);
    for (auto &[l, r] : v) {
        cin >> l >> r;
        L.push_back(l);
        R.push_back(r);
    }
    sort(L.begin(), L.end());
    sort(R.begin(), R.end());
    int ans = n - 1;
    for (auto [l, r] : v) {
        int left = lower_bound(R.begin(), R.end(), l) - R.begin();
        int right = max(0, n - (int)(upper_bound(L.begin(), L.end(), r) - L.begin()));
        ans = min(ans, left + right);
    }
    cout << ans << "\n";
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        solve();
    }
}
1463
A
Dungeon
You are playing a new computer game in which you have to fight monsters. In a dungeon you are trying to clear, you met three monsters; the first of them has $a$ health points, the second has $b$ health points, and the third has $c$. To kill the monsters, you can use a cannon that, when fired, deals $1$ damage to the selected monster. Every $7$-th (i. e. shots with numbers $7$, $14$, $21$ etc.) cannon shot is enhanced and deals $1$ damage to \textbf{all} monsters, not just one of them. If some monster's current amount of health points is $0$, it can't be targeted by a regular shot and does not receive damage from an enhanced shot. You want to pass the dungeon beautifully, i. e., kill all the monsters with the same enhanced shot (i. e. after some enhanced shot, the health points of each of the monsters should become equal to $0$ \textbf{for the first time}). Each shot must hit a monster, i. e. each shot deals damage to at least one monster.
Note that every $7$ shots deal a total of $9$ units of damage. Since we want to kill all the monsters with a shot whose index is divisible by $7$, let's denote the number of shots as $7k$. In this case, a total of $a+b+c$ units of damage must be dealt, hence $k=\frac{a+b+c}{9}$ (if the result of the division is not an integer, then there is no answer). Since each monster will receive at least $k$ units of damage (from the enhanced shots), the health of each monster must be at least $k$. If the two conditions described above are met, then the remaining shots can always be distributed in the desired way.
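The two checks translate directly into code (hypothetical function name):

```python
def can_kill_beautifully(a, b, c):
    # Every block of 7 shots deals 9 damage in total, so a+b+c must equal 9k.
    total = a + b + c
    if total % 9 != 0:
        return False
    k = total // 9
    # Each monster takes at least k damage from the k enhanced shots.
    return min(a, b, c) >= k
```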
[ "binary search", "math" ]
1,100
for i in range(int(input())): a, b, c = map(int, input().split()) if (a + b + c) % 9 != 0: print("NO") else: print("YES" if min(a, b, c) >= (a + b + c) // 9 else "NO")
1463
B
Find The Array
You are given an array $[a_1, a_2, \dots, a_n]$ such that $1 \le a_i \le 10^9$. Let $S$ be the sum of all elements of the array $a$. Let's call an array $b$ of $n$ integers \textbf{beautiful} if: - $1 \le b_i \le 10^9$ for each $i$ from $1$ to $n$; - for every pair of adjacent integers from the array $(b_i, b_{i + 1})$, either $b_i$ divides $b_{i + 1}$, or $b_{i + 1}$ divides $b_i$ (or both); - $2 \sum \limits_{i = 1}^{n} |a_i - b_i| \le S$. Your task is to find any beautiful array. It can be shown that at least one beautiful array always exists.
It is enough to consider two possible arrays $b$: ($a_1, 1, a_3, 1, a_5, \dots$) and ($1, a_2, 1, a_4, 1, \dots$). It is not difficult to see that in both arrays the condition that one of any two neighboring elements divides the other is met. It remains to show that at least one of these two arrays satisfies the condition $2 \sum \limits_{i = 1}^{n} |a_i - b_i| \le S$. Let's consider $S_{odd}$ - the sum of elements at odd positions - and $S_{even}$ - the sum of elements at even positions. Since $S=S_{odd}+S_{even}$, at least one of the values $S_{odd}$ and $S_{even}$ does not exceed $\frac{S}{2}$ (otherwise their sum would be strictly greater than $S$). Without loss of generality, assume that $S_{odd} \le \frac{S}{2}$. Note that for the second variant of the array $b$, the condition $\sum \limits_{i = 1}^{n} |a_i - b_i| \le S_{odd}$ holds, so $2 \sum \limits_{i = 1}^{n} |a_i - b_i| \le S$.
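The construction can be sketched as follows (hypothetical name; it returns whichever of the two candidate arrays fits the cost bound):

```python
def beautiful_array(a):
    s = sum(a)
    for p in range(2):
        # Cost of replacing every element at positions of parity p with 1.
        cost = sum(a[i] - 1 for i in range(len(a)) if i % 2 == p)
        if 2 * cost <= s:
            return [1 if i % 2 == p else a[i] for i in range(len(a))]
```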
[ "bitmasks", "constructive algorithms", "greedy" ]
1,400
for t in range(int(input())): n = int(input()) a = list(map(int, input().split())) s = sum(a) cur = [0, 0] for i in range(n): cur[i % 2] += a[i] - 1 for j in range(2): if 2 * cur[j] > s: continue for i in range(n): if i % 2 == j: a[i] = 1 break print(*a)
1463
C
Busy Robot
You have a robot that can move along a number line. At time moment $0$ it stands at point $0$. You give $n$ commands to the robot: at time $t_i$ seconds you command the robot to go to point $x_i$. Whenever the robot receives a command, it starts moving towards the point $x_i$ with the speed of $1$ unit per second, and he stops when he reaches that point. However, while the robot is moving, it \textbf{ignores} all the other commands that you give him. For example, suppose you give three commands to the robot: at time $1$ move to point $5$, at time $3$ move to point $0$ and at time $6$ move to point $4$. Then the robot stands at $0$ until time $1$, then starts moving towards $5$, ignores the second command, reaches $5$ at time $6$ and immediately starts moving to $4$ to execute the third command. At time $7$ it reaches $4$ and stops there. You call the command $i$ successful, if there is a time moment in the range $[t_i, t_{i + 1}]$ (i. e. after you give this command and before you give another one, both bounds inclusive; we consider $t_{n + 1} = +\infty$) when the robot is at point $x_i$. Count the number of successful commands. Note that it is possible that an ignored command is successful.
The main idea in the problem is not how to solve it but how to code it neatly. I've come up with the following way. Let's store three variables: where the robot is now, which direction it moves in ($-1$, $0$ or $1$) and how much time is left until it stops moving. The processing of the commands then becomes pretty easy. If there is no time left to move, then the command is executed, and we tell the robot the direction and the time left for the current command. Then there are two cases: either the robot stops before the next command or after it. However, they can be processed simultaneously. Let $T$ be the minimum of the time left before the robot stops moving and the time before the next command. We know for sure that before the next command the robot will visit exactly the segment of positions between the current position and the current position plus the direction multiplied by $T$. If the destination of the current command is in this segment, then that command is successful. After the command is processed, subtract $T$ from the time left and increase the position by the direction multiplied by $T$. Overall complexity: $O(n)$ per testcase.
[ "implementation" ]
1,800
def inside(l, r, x): return min(l, r) <= x <= max(l, r) def sg(x): return -1 if x < 0 else int(x > 0) for _ in range(int(input())): n = int(input()) qs = [] for i in range(n): qs.append(list(map(int, input().split()))) qs.append([4*10**9, 0]) ans = 0 pos, dr, lft = 0, 0, 0 for i in range(n): t, x = qs[i] tn = qs[i + 1][0] if lft == 0: lft = abs(pos - x) dr = sg(x - pos) tmp = min(lft, tn - t) if inside(pos, pos + dr * tmp, x): ans += 1 pos += dr * tmp lft -= tmp print(ans)
1463
D
Pairs
You have $2n$ integers $1, 2, \dots, 2n$. You have to redistribute these $2n$ elements into $n$ pairs. After that, you choose $x$ pairs and take minimum elements from them, and from the other $n - x$ pairs, you take maximum elements. Your goal is to obtain the set of numbers $\{b_1, b_2, \dots, b_n\}$ as the result of taking elements from the pairs. What is the number of different $x$-s ($0 \le x \le n$) such that it's possible to obtain the set $b$ if for each $x$ you can choose how to distribute numbers into pairs and from which $x$ pairs choose minimum elements?
Let's prove that in the set $b$ the $x$ minimum elements will come from the $x$ pairs where we take minimums and, analogously, the $n - x$ maximums will come from the $n - x$ pairs where we take maximums. By contradiction: look at two pairs $(a, b)$ ($a < b$) and $(c, d)$ ($c < d$), where we take the maximum from $(a, b)$ and the minimum from $(c, d)$ and $b < c$. If we swap the elements $b$ and $c$ and get the pairs $(a, c)$ and $(b, d)$, the result won't change, but now the minimum from the pair $(b, d)$ is less than the maximum from $(a, c)$. So we can always make pairs in such a way that the chosen minimum from any pair is less than the chosen maximum from any other pair. Let's build the set $nb$ from all elements which are not in $b$. In the same way, we can prove that the $n - x$ minimums of $nb$ come from pairs where we took maximums and the $x$ maximums come from pairs where we took minimums. Let's say $b$ and $nb$ are sorted. Now we've proven that for a fixed $x$ we should pair $b_1, b_2, \dots, b_x$ with $nb_{n - x + 1}, nb_{n - x + 2}, \dots, nb_{n}$ and $b_{x + 1}, \dots, b_n$ with $nb_1, \dots, nb_{n - x}$. It's not hard to prove that it's optimal to pair $(b_1, nb_{n - x + 1})$, $(b_2, nb_{n - x + 2})$, ..., $(b_x, nb_n)$ and, in the same way, $(nb_1, b_{x + 1})$, $(nb_2, b_{x + 2})$, ..., $(nb_{n - x}, b_n)$. For a fixed $x$ we can just check that the constructed pairs are valid. But what happens if we move from $x$ to $x + 1$? If for all $1 \le i \le n - x$ the pairs $(nb_i, b_{x + i})$ were valid, then for $x + 1$ all pairs $(nb_i, b_{x + 1 + i})$ will be valid as well. And on the contrary, if at least one pair $(b_i, nb_{n - x + i})$ wasn't valid, then for $x + 1$ the pair $(b_i, nb_{n - x - 1 + i})$ won't be valid either. Due to this monotonicity we can find the maximum valid $x$ by checking only the pairs $(b_i, nb_{n - x + i})$, and in the same way we can find the maximum $n - x$ (minimum $x$) such that all pairs $(nb_i, b_{x + i})$ are valid. That's why all valid $x$-s form a segment, and we need to find its borders.
We can find a maximum $x$ (maximum $n - x$) with either binary search or with two pointers and print the length of the segment. Time complexity is either $O(n)$ or $O(n \log{n})$.
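The two-pointer matching and the segment of valid $x$-s might look like this in Python (hypothetical names; mirrors the C++ solution below):

```python
def count_valid_x(b):
    # b is the sorted target set; nb is its complement in {1, ..., 2n}.
    n = len(b)
    in_b = set(b)
    nb = [v for v in range(1, 2 * n + 1) if v not in in_b]

    def get_max(small, big):
        # Longest prefix of `small` that can be matched one-to-one
        # with not-smaller elements of `big` (greedy two pointers).
        j = 0
        for i, v in enumerate(small):
            while j < len(big) and big[j] < v:
                j += 1
            if j == len(big):
                return i
            j += 1
        return len(small)

    mx_x = get_max(b, nb)        # largest valid number of "minimum" pairs
    mn_x = n - get_max(nb, b)    # smallest valid number of "minimum" pairs
    return mx_x - mn_x + 1
```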
[ "binary search", "constructive algorithms", "greedy", "two pointers" ]
1,900
#include<bits/stdc++.h> using namespace std; #define fore(i, l, r) for(int i = int(l); i < int(r); i++) #define sz(a) int((a).size()) const int INF = int(1e9); int n; vector<int> b; inline bool read() { if(!(cin >> n)) return false; b.resize(n); fore (i, 0, n) cin >> b[i]; return true; } int getMax(const vector<int> &a, const vector<int> &b) { int uk = 0; fore (i, 0, sz(a)) { while (uk < sz(b) && b[uk] < a[i]) uk++; if (uk >= sz(b)) return i; uk++; } return sz(a); } inline void solve() { vector<int> nb; fore (i, 1, 2 * n + 1) { if (!binary_search(b.begin(), b.end(), i)) nb.push_back(i); } int mxX = getMax(b, nb); int mnX = n - getMax(nb, b); cout << mxX - mnX + 1 << endl; } int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); int tt = clock(); #endif ios_base::sync_with_stdio(false); cin.tie(0), cout.tie(0); cout << fixed << setprecision(15); int t; cin >> t; while(t--) { read(); solve(); #ifdef _DEBUG cerr << "TIME = " << clock() - tt << endl; tt = clock(); #endif } return 0; }
1463
E
Plan of Lectures
Ivan is a programming teacher. During the academic year, he plans to give $n$ lectures on $n$ different topics. Each topic should be used in exactly one lecture. Ivan wants to choose which topic will he explain during the $1$-st, $2$-nd, ..., $n$-th lecture — formally, he wants to choose some permutation of integers from $1$ to $n$ (let's call this permutation $q$). $q_i$ is the index of the topic Ivan will explain during the $i$-th lecture. For each topic (except \textbf{exactly one}), there exists a prerequisite topic (for the topic $i$, the prerequisite topic is $p_i$). Ivan cannot give a lecture on a topic before giving a lecture on its prerequisite topic. There exists at least one valid ordering of topics according to these prerequisite constraints. Ordering the topics correctly can help students understand the lectures better. Ivan has $k$ special pairs of topics $(x_i, y_i)$ such that he knows that the students will understand the $y_i$-th topic better if the lecture on it is conducted \textbf{right after} the lecture on the $x_i$-th topic. Ivan wants to satisfy the constraints on every such pair, that is, for every $i \in [1, k]$, there should exist some $j \in [1, n - 1]$ such that $q_j = x_i$ and $q_{j + 1} = y_i$. Now Ivan wants to know if there exists an ordering of topics that satisfies all these constraints, and if at least one exists, find any of them.
The prerequisites for each lecture form a rooted tree, so let's forget about the legend and learn how to find an order of the vertices of a tree such that all conditions hold. Let's introduce an algorithm that produces an ordering of vertices for every possible case. If any valid ordering exists, it should produce a valid one. So we will only have to check whether the resulting ordering is fine and output it if it is. If there were no special pairs, the task would be perfectly solvable with a topological sorting algorithm. Thus, let's come up with a way to modify the graph so that topsort could still be a solution. We know that the vertices that are in the special pairs should follow each other in the ordering. Look at these special pairs as edges as well. Let's first imagine they are undirected. These edges connect some vertices in the tree into components, and each component should be a segment of vertices in a valid answer. So how about we first compress each component into one vertex, find some answer for the compressed version, and then decompress them back? Let each of these connected components be a vertex in the new graph. Two components are connected by an edge if there is an edge in the tree between vertices of the corresponding components. Topsort in this graph will tell us the order in which the components should go. However, we should also find the order of the vertices inside each component. Let's topsort the graph of all directed special pairs and sort the vertices in that order in every component. We can sort the entire graph instead of sorting each component separately because the components are totally independent. Finally, write down the answer: iterate over the compressed vertices of the first new graph in the order of its topsort, and for each one write down all the actual vertices inside it in the order of the topsort of the second new graph. Check that each vertex has its parent earlier than itself in the answer.
If all the graphs were topologically sortable and that holds, then the answer exists, and we found it. Otherwise, the answer doesn't exist. Overall complexity: $O(n \log n)$.
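For intuition (and for stress-testing the construction above), the answer for tiny inputs can be checked by brute force over all permutations; the helper below is only an illustration, not the intended solution:

```python
from itertools import permutations

def ordering_exists(n, parent, pairs):
    # parent[i] is the prerequisite of topic i (1-based), parent[root] == 0;
    # pairs are the special (x, y) pairs: y must come right after x.
    for q in permutations(range(1, n + 1)):
        pos = {t: i for i, t in enumerate(q)}
        if any(parent[t] != 0 and pos[parent[t]] > pos[t] for t in q):
            continue  # some prerequisite is violated
        if all(pos[x] + 1 == pos[y] for x, y in pairs):
            return True
    return False
```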
[ "constructive algorithms", "dfs and similar", "dsu", "graphs", "implementation", "sortings", "trees" ]
2,400
#include <bits/stdc++.h> using namespace std; #define forn(i, n) for(int i = 0; i < int(n); i++) vector<vector<int>> g, h, ng; vector<int> rk, p; int getp(int a){ return a == p[a] ? a : p[a] = getp(p[a]); } void unite(int a, int b){ a = getp(a), b = getp(b); if (a == b) return; if (rk[a] < rk[b]) swap(a, b); rk[a] += rk[b]; p[b] = a; } vector<vector<int>> comp; vector<int> ord; vector<int> used; bool fl; void ts(int v, const vector<vector<int>> &g){ used[v] = 1; for (int u : g[v]){ if (used[u] == 0) ts(u, g); else if (used[u] == 1) fl = false; if (!fl) return; } ord.push_back(v); used[v] = 2; } bool topsort(const vector<vector<int>> &g){ int n = g.size(); used.assign(n, 0); ord.clear(); fl = true; forn(i, n) if (!used[i]){ ts(i, g); if (!fl) return false; } reverse(ord.begin(), ord.end()); return true; } vector<int> pos; bool dfs(int v){ for (int u : g[v]){ if (pos[u] < pos[v]) return false; if (!dfs(u)) return false; } return true; } int main() { int n, m; scanf("%d%d", &n, &m); g.resize(n); h.resize(n); ng.resize(n); int rt = -1; forn(i, n){ int p; scanf("%d", &p); --p; if (p == -1) rt = i; else g[p].push_back(i); } rk.assign(n, 1); p.resize(n); iota(p.begin(), p.end(), 0); forn(i, m){ int v, u; scanf("%d%d", &v, &u); --v, --u; h[v].push_back(u); unite(v, u); } if (!topsort(h)){ puts("0"); return 0; } auto ord1 = ord; vector<int> pos1(n); forn(i, n) pos1[ord1[i]] = i; forn(v, n) for (int u : g[v]) if (getp(v) != getp(u)) ng[getp(v)].push_back(getp(u)); if (!topsort(ng)){ puts("0"); return 0; } comp.resize(n); forn(i, n) comp[getp(i)].push_back(i); vector<int> fin; for (int it : ord){ sort(comp[it].begin(), comp[it].end(), [&](int v, int u){ return pos1[v] < pos1[u]; }); for (int v : comp[it]) fin.push_back(v); } pos.resize(n); forn(i, n) pos[fin[i]] = i; if (!dfs(rt)){ puts("0"); return 0; } for (int v : fin) printf("%d ", v + 1); puts(""); }
1463
F
Max Correct Set
Let's call the set of positive integers $S$ \textbf{correct} if the following two conditions are met: - $S \subseteq \{1, 2, \dots, n\}$; - if $a \in S$ and $b \in S$, then $|a-b| \neq x$ and $|a-b| \neq y$. For the given values $n$, $x$, and $y$, you have to find the maximum size of the \textbf{correct} set.
The key idea of the task is to prove that there is an optimal answer where the chosen elements of $S$ have period $p = x + y$. Let's work with $0, 1, \dots, n - 1$ instead of $1, 2, \dots, n$. Firstly, let's prove that if we've chosen a correct set $a_1, \dots, a_k$ in the interval $[l, l + p)$, then, taking all $a_i + p$ as well, the set $\{ a_1, \dots, a_k, a_1 + p, \dots, a_k + p \}$ will be correct too. By contradiction: suppose we have $a_i + p - a_j = x$ ($y$); then $a_i - a_j = x - p = x - x - y = -y$ ($-x$), i.e. $|a_i - a_j| = y$ ($x$) - contradiction. It means that if we take a correct set in the interval $[l, l + p)$, we can create a periodic answer by copying this interval several times. Next, let's prove that there is an optimal periodic answer. Let's look at any optimal answer and its indicator vector (a binary vector of length $n$ where $id_i = 1$ iff $i$ is in the set). Let $r = n \bmod p$. Let's split the vector into $2 \left\lfloor \frac{n}{p} \right\rfloor + 1$ intervals: $[0, r), [r, p), [p, p + r), [p + r, 2p), \dots, [n - r, n)$. The $1$-st, $3$-rd, $5$-th... segments have length $r$ and the $2$-nd, $4$-th,... segments have length $p - r$. If we choose any two consecutive segments, their total length will be equal to $p$, and we can use them to make a periodic answer by replacing all segments of length $r$ with the chosen one and all segments of length $p - r$ with the other one. We can prove that we can always find two consecutive segments such that the induced answer is greater than or equal to the initial one. If we create a vector where $v_i$ is equal to the sum of $id_j$ in the $i$-th segment, then the task is equivalent to finding $v_i$ and $v_{i + 1}$ such that replacing all $v_{i \pm 2z}$ by $v_i$ and all $v_{i + 1 \pm 2z}$ by $v_{i + 1}$ won't decrease the sum of the array $v$. The proof is given below.
Now, since the answer is periodic, taking an element $c$ ($0 \le c < p$) is equivalent to taking all elements $d \equiv c \bmod p$, so for each $c$ we can compute $val_c$ - the number of integers with that remainder. And for each $c$ we either take it or not. So we can write a $dp[p][2^{\max(x, y)}]$, where $dp[i][msk]$ is the maximum sum if we have processed $i$ elements and the last $\max(x, y)$ elements are described by the mask $msk$. We start with $dp[0][0]$ and, when looking at the $i$-th element, either take it (if we can) or skip it. Time complexity is $O((x + y) 2^{\max(x, y)})$. ========== Let's prove that for any array $v_1, v_2, \dots, v_{2n + 1}$ we can find a pair $v_p, v_{p + 1}$ such that replacing all $v_{p \pm 2z}$ with $v_p$ and all $v_{p + 1 \pm 2z}$ with $v_{p + 1}$ won't decrease the total sum. Let's define $S_o = \sum\limits_{i = 1}^{n + 1}{v_{2i - 1}}$ and $S_e = \sum\limits_{i = 1}^{n}{v_{2i}}$. Let's make an array $b_1, \dots, b_{2n + 1}$, where $b_{2i - 1} = (n + 1)v_{2i - 1} - S_o$ and $b_{2i} = n \cdot v_{2i} - S_e$. The meaning of $b_i$ is how the total sum changes if we replace the corresponding elements by $v_i$. Note that finding a good pair $v_p, v_{p + 1}$ is equivalent to finding $b_p + b_{p + 1} \ge 0$. Also note that $\sum\limits_{i = 1}^{n + 1}{b_{2i - 1}} = (n + 1)S_o - (n + 1)S_o = 0$ and, analogously, $\sum\limits_{i = 1}^{n}{b_{2i}} = n \cdot S_e - n \cdot S_e = 0$. Let's prove it by contradiction: suppose that for every $i$ we have $b_i + b_{i + 1} < 0$. Look at $\sum\limits_{i = 1}^{2n + 1}{b_i} = \sum\limits_{i = 1}^{n + 1}{b_{2i - 1}} + \sum\limits_{i = 1}^{n}{b_{2i}} = 0$. But on the other hand, we know that $b_2 + b_3 < 0$, $b_4 + b_5 < 0$, ..., $b_{2n} + b_{2n + 1} < 0$, so $b_1 > 0$, since otherwise $\sum\limits_{i = 1}^{2n + 1}{b_i}$ would be negative. In the same way, since $b_1 + b_2 < 0$, $b_4 + b_5 < 0$, ..., $b_{2n} + b_{2n + 1} < 0$, we get $b_3 > 0$.
Analogously we can prove that every $b_{2i - 1} > 0$, but $\sum\limits_{i = 1}^{n + 1}{b_{2i - 1}} = 0$ - contradiction. So, there is always a pair with $b_p + b_{p + 1} \ge 0$, i. e. a pair $v_p, v_{p + 1}$.
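The DP described above can be sketched directly in Python (the same bitmask recurrence as the C++ solution below; only practical for small $\max(x, y)$):

```python
def max_correct_set(n, x, y):
    p = x + y                      # the period
    m = max(x, y)                  # window of past decisions to remember
    full = (1 << m) - 1
    # val[c] = how many of 0..n-1 have remainder c modulo p.
    val = [n // p + (c < n % p) for c in range(p)]
    NEG = float("-inf")
    dp = [NEG] * (1 << m)
    dp[0] = 0
    for c in range(p):
        ndp = [NEG] * (1 << m)
        for mask in range(1 << m):
            if dp[mask] == NEG:
                continue
            nmask = (mask << 1) & full
            ndp[nmask] = max(ndp[nmask], dp[mask])  # skip remainder c
            # Take c only if neither the element x nor y positions back was taken.
            if not (mask >> (x - 1)) & 1 and not (mask >> (y - 1)) & 1:
                ndp[nmask | 1] = max(ndp[nmask | 1], dp[mask] + val[c])
        dp = ndp
    return max(dp)
```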
[ "bitmasks", "dp", "math" ]
3,100
#include <bits/stdc++.h> using namespace std; const int INF = 1e9; const int N = 22; int dp[2][1 << N]; int val[2 * N]; int main() { int n, x, y; scanf("%d%d%d", &n, &x, &y); int k = x + y; int m = max(x, y); int FULL = (1 << m) - 1; for (int i = 0; i < k; ++i) val[i] = n / k + (i < n % k); for (int mask = 0; mask < (1 << m); ++mask) dp[0][mask] = -INF; dp[0][0] = 0; for (int i = 0; i < k; ++i) { for (int mask = 0; mask < (1 << m); ++mask) dp[1][mask] = -INF; for (int mask = 0; mask < (1 << m); ++mask) { if (dp[0][mask] == -INF) continue; int nmask = (mask << 1) & FULL; dp[1][nmask] = max(dp[1][nmask], dp[0][mask]); if (((mask >> (x - 1)) & 1) | ((mask >> (y - 1)) & 1)) continue; nmask |= 1; dp[1][nmask] = max(dp[1][nmask], dp[0][mask] + val[i]); } swap(dp[0], dp[1]); } int ans = 0; for (int mask = 0; mask < (1 << m); ++mask) ans = max(ans, dp[0][mask]); printf("%d\n", ans); }
1464
F
My Beautiful Madness
You are given a tree. We will consider simple paths on it. Let's denote path between vertices $a$ and $b$ as $(a, b)$. Let $d$-neighborhood of a path be a set of vertices of the tree located at a distance $\leq d$ from at least one vertex of the path (for example, $0$-neighborhood of a path is a path itself). Let $P$ be a multiset of the tree paths. Initially, it is empty. You are asked to maintain the following queries: - $1$ $u$ $v$ — add path $(u, v)$ into $P$ ($1 \leq u, v \leq n$). - $2$ $u$ $v$ — delete path $(u, v)$ from $P$ ($1 \leq u, v \leq n$). Notice that $(u, v)$ equals to $(v, u)$. For example, if $P = \{(1, 2), (1, 2)\}$, than after query $2$ $2$ $1$, $P = \{(1, 2)\}$. - $3$ $d$ — if intersection of all $d$-neighborhoods of paths from $P$ is not empty output "Yes", otherwise output "No" ($0 \leq d \leq n - 1$).
When I write "a vertex at a distance $d$ up from the given", I mean such a vertex if it exists, and otherwise the root of the tree. Let's learn how to answer the query of whether the intersection of the $d$-neighborhoods of all paths is empty. For a given path, its poman is the vertex at a distance $d$ up from the LCA of this path. Lemma: if the intersection is non-empty, then the deepest poman vertex is included in it. Proof: let the intersection be non-empty, and suppose the deepest poman vertex (let's call it $v$) is not included in it. The path that spawned $v$ is located, along with its neighborhood, in the subtree of $v$. Since $v$ is not included in the intersection, there is some path whose neighborhood does not contain $v$. Note that this path cannot be in the subtree of $v$, since then its poman vertex would be deeper than $v$; hence it lies, along with its neighborhood, outside the subtree of $v$, and hence the intersection of all neighborhoods is empty. Contradiction. This knowledge allows us to check whether the intersection is empty by checking whether one particular vertex $v$ lies in the intersection (for a query, finding $v$ is easy if we maintain the set of LCAs of the paths from $P$, ordered by depth). Let $kek$ be the vertex at a distance $d$ above $v$. First, a necessary condition: for each path from $P$, the subtree of $kek$ must contain at least one of its ends. Let's implement it like this: when the path $(u, v)$ comes, we do ad[u]++, ad[v]++, ad[LCA(u, v)]--. Now the sum of ad over a subtree of a vertex is the number of ends of different paths in it. Now we know for sure that there is no path lying strictly outside the subtree of $kek$, that is, all paths whose LCA is outside the subtree of $kek$ pass through $kek$, and hence $v$ lies in their $d$-neighborhood. The only remaining paths to check are those in the subtree of $kek$.
The shortest distance from $v$ to such paths is the distance to their $LCA$ (this is not true if the $LCA$ lies on the path $(v, kek)$, but this path itself has length $d$ and we will definitely reach such $LCA$ from $v$). We got the following task about processing queries on a tree: - $podkek[v]$++; - $podkek[v]$--; - among the vertices of the given subtree with $podkek[v] > 0$, find the farthest from the given vertex (also contained in this subtree).
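The ad[] trick from the paragraph above can be illustrated on a tiny tree (naive LCA and subtree checks, just to show the invariant; the real solution uses binary lifting and a Fenwick tree over the Euler tour; 0-based vertices with parent[root] == root are an assumption of this sketch):

```python
def make_end_counter(parent):
    # parent[v] is the parent of v; parent[root] == root.
    n = len(parent)

    def depth(v):
        d = 0
        while parent[v] != v:
            v, d = parent[v], d + 1
        return d

    def lca(u, v):
        while depth(u) > depth(v):
            u = parent[u]
        while depth(v) > depth(u):
            v = parent[v]
        while u != v:
            u, v = parent[u], parent[v]
        return u

    def in_subtree(v, w):
        # Is v inside the subtree of w? Climb from v towards the root.
        while True:
            if v == w:
                return True
            if parent[v] == v:
                return False
            v = parent[v]

    ad = [0] * n

    def add_path(u, v):
        ad[u] += 1
        ad[v] += 1
        ad[lca(u, v)] -= 1

    def ends_in_subtree(w):
        # Sum of ad over subtree(w) = number of paths with an end inside it.
        return sum(ad[v] for v in range(n) if in_subtree(v, w))

    return add_path, ends_in_subtree
```

The necessary condition from the editorial is then "ends_in_subtree(kek) equals the number of paths in $P$".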
[ "data structures", "trees" ]
3,500
#pragma GCC optimize("O3") #include <iostream> #include <vector> #include <algorithm> #include <set> using namespace std; const int LG = 20; const int inf = 1e9 + 228; struct Fenwick { int n; vector<int> t; Fenwick(int n) : n(n), t(n) {} void upd(int v, int x) { for (int i = v; i < n; i |= i + 1) { t[i] += x; } } int get(int v) { int res = 0; for (int i = v; i >= 0; i = (i & (i + 1)) - 1) { res += t[i]; } return res; } int get(int l, int r) { return get(r) - get(l - 1); } }; struct SegTree { int n; vector<int> t; SegTree(int n) : n(n), t(2 * n, -inf) {} void upd(int v, int x) { t[v += n] = x; for (v >>= 1; v; v >>= 1) { t[v] = max(t[v << 1], t[v << 1 | 1]); } } int get(int l, int r) { int ret = -inf; for (l += n, r += n; l <= r; l = (l + 1) >> 1, r = (r - 1) >> 1) { if (l & 1) { ret = max(ret, t[l]); } if (!(r & 1)) { ret = max(ret, t[r]); } } return ret; } }; struct Query { int type; int u, v; int d; }; struct HpItem { int in_hp_v; int in_desc_v; int src_v; bool operator < (const HpItem& o) const { return in_hp_v > o.in_hp_v || in_hp_v == o.in_hp_v && in_desc_v > o.in_desc_v || in_hp_v == o.in_hp_v && in_desc_v == o.in_desc_v && src_v > o.src_v; } }; int n, q; vector<vector<int>> g; vector<Query> qs; vector<vector<int>> up; vector<int> pred; vector<int> in; vector<int> out; int timer; vector<int> dep; vector<int> sz; vector<int> top; vector<int> bot; vector<vector<HpItem>> hpq; vector<SegTree> onBot; vector<SegTree> onTop; vector<vector<int>> cnt; void dfs_tree(int v, int p) { if (v > 0) { auto it = g[v].begin(); while (*it != p) { it++; } g[v].erase(it); } up[v][0] = p; pred[v] = p - !v; for (int i = 1; i < LG; i++) { up[v][i] = up[up[v][i - 1]][i - 1]; } sz[v] = 1; for (int& to : g[v]) { dep[to] = dep[v] + 1; dfs_tree(to, v); sz[v] += sz[to]; if (sz[to] > sz[g[v][0]]) { swap(to, g[v][0]); } } } void dfs_hld(int v) { in[v] = timer++; for (int to : g[v]) { if (to == g[v][0]) { top[to] = top[v]; bot[top[v]] = to; } else { top[to] = to; bot[to] = to; } dfs_hld(to); 
} out[v] = timer; } bool upper(int u, int v) { return (in[u] <= in[v] && out[v] <= out[u]); } int lca(int u, int v) { if (in[u] > in[v]) { swap(u, v); } if (upper(u, v)) { return u; } for (int i = LG - 1; i >= 0; i--) { if (!upper(up[u][i], v)) { u = up[u][i]; } } return up[u][0]; } int kth(int v, int k) { for (int i = LG - 1; i >= 0; i--) { if ((1 << i) <= k) { v = up[v][i]; k -= (1 << i); } } return v; } void upd(int u, int dt) { int v = u; int desc = -1; while (v != -1) { int pos = lower_bound(hpq[top[v]].begin(), hpq[top[v]].end(), HpItem{ in[v], (desc == -1 ? -1 : in[desc]), u }) - hpq[top[v]].begin(); bool ex = (cnt[top[v]][pos] > 0); cnt[top[v]][pos] += dt; if (!ex) { onBot[top[v]].upd(pos, dep[u]); onTop[top[v]].upd(pos, dep[u] - dep[v] + (dep[bot[top[v]]] - dep[v])); } else if (cnt[top[v]][pos] == 0) { onBot[top[v]].upd(pos, -inf); onTop[top[v]].upd(pos, -inf); } desc = top[v]; v = pred[top[v]]; } } int get(int u, int v) { // lower upper int ret = -inf; int roma = u; int desc = -1; while (top[u] != top[v]) { int R = upper_bound(hpq[top[u]].begin(), hpq[top[u]].end(), HpItem{ in[u], (desc == -1 ? -1 : in[desc]), -inf }) - hpq[top[u]].begin(); // [R.. int L = upper_bound(hpq[top[u]].begin(), hpq[top[u]].end(), HpItem{ in[u], (desc == -1 ? -1 : in[desc]), inf }) - hpq[top[u]].begin() - 1; // ..L] if (L >= 0) { ret = max(ret, (onBot[top[u]].get(0, L) - dep[u]) + (dep[roma] - dep[u])); } if (R < onTop[top[u]].n) { ret = max(ret, (onTop[top[u]].get(R, onTop[top[u]].n - 1) - (dep[bot[top[u]]] - dep[u])) + (dep[roma] - dep[u])); } desc = top[u]; u = pred[top[u]]; } { int R = upper_bound(hpq[top[u]].begin(), hpq[top[u]].end(), HpItem{ in[u], (desc == -1 ? -1 : in[desc]), -inf }) - hpq[top[u]].begin(); // [R.. int L = upper_bound(hpq[top[u]].begin(), hpq[top[u]].end(), HpItem{ in[u], (desc == -1 ? 
-1 : in[desc]), inf }) - hpq[top[u]].begin() - 1; // ..L] if (L >= 0) { ret = max(ret, (onBot[top[u]].get(0, L) - dep[u]) + (dep[roma] - dep[u])); } int bound = upper_bound(hpq[top[u]].begin(), hpq[top[u]].end(), HpItem{ in[v], -inf, -inf }) - hpq[top[u]].begin() - 1; if (R <= bound) { ret = max(ret, (onTop[top[u]].get(R, bound) - (dep[bot[top[u]]] - dep[u])) + (dep[roma] - dep[u])); } } return ret; } int main() { ios::sync_with_stdio(0); cin.tie(0); cin >> n >> q; g.resize(n); for (int i = 0; i < n - 1; i++) { int u, v; cin >> u >> v; --u; --v; g[u].push_back(v); g[v].push_back(u); } qs.resize(q); for (auto& qry : qs) { cin >> qry.type; if (qry.type <= 2) { cin >> qry.u >> qry.v; --qry.u; --qry.v; } else { cin >> qry.d; } } up.resize(n, vector<int>(LG)); dep.resize(n); sz.resize(n); pred.resize(n); dfs_tree(0, 0); in.resize(n); out.resize(n); top.resize(n); bot.resize(n); dfs_hld(0); hpq.resize(n); for (int u = 0; u < n; u++) { int v = u; int desc = -1; while (v != -1) { hpq[top[v]].push_back({ in[v], (desc == -1 ? -1 : in[desc]), u }); desc = top[v]; v = pred[top[v]]; } } cnt.resize(n); for (int u = 0; u < n; u++) { sort(hpq[u].begin(), hpq[u].end()); onBot.emplace_back(hpq[u].size()); onTop.emplace_back(hpq[u].size()); cnt[u].resize(hpq[u].size()); } multiset<pair<int, int>> setik; Fenwick fen(n); for (int i = 0; i < q; i++) { if (qs[i].type == 1) { int u = qs[i].u; int v = qs[i].v; int _lca = lca(u, v); setik.insert({ dep[_lca], _lca }); fen.upd(in[u], 1); fen.upd(in[v], 1); fen.upd(in[_lca], -1); upd(_lca, 1); } else if (qs[i].type == 2) { int u = qs[i].u; int v = qs[i].v; int _lca = lca(u, v); setik.erase(setik.find({ dep[_lca], _lca })); fen.upd(in[u], -1); fen.upd(in[v], -1); fen.upd(in[_lca], 1); upd(_lca, -1); } else { int d = qs[i].d; int v = setik.rbegin()->second; int roma = kth(v, d); int u = kth(roma, d); if (fen.get(in[u], out[u] - 1) != setik.size()) { cout << "No\n"; continue; } cout << (get(roma, u) <= d ? "Yes\n" : "No\n"); } } }
1466
A
Bovine Dilemma
Argus was charged with guarding Io, which is not an ordinary cow. Io is quite an explorer, and she wanders off rather frequently, making Argus' life stressful. So the cowherd decided to construct an enclosed pasture for Io. There are $n$ trees growing along the river, where Argus tends Io. For this problem, the river can be viewed as the $OX$ axis of the Cartesian coordinate system, and the $n$ trees as points with the $y$-coordinate equal $0$. There is also another tree growing in the point $(0, 1)$. Argus will tie a rope around three of the trees, creating a triangular pasture. Its exact shape doesn't matter to Io, but its area is crucial to her. There may be many ways for Argus to arrange the fence, but only the ones which result in different areas of the pasture are interesting for Io. Calculate the number of \textbf{different} areas that her pasture may have. Note that the pasture must have \textbf{nonzero} area.
The solution is to iterate over all possible triangles, find their areas, and compute how many of these areas are distinct. So the problem is to calculate the area of a given triangle efficiently. There are many possible ways to do it; I will describe the most straightforward method. Recall the formula for the triangle's area: $area = \frac{base \cdot height}{2}$ As we want to count distinct areas we can forget about dividing by $2$ and focus on calculating $base \cdot height$. Note that all triangles are of form $(x_1, 0)$, $(x_2, 0)$ and $(0, 1)$. Thanks to this, if we pick a side $(x_1, 0)$, $(x_2, 0)$ as a base, then the height is equal to $1$! As a result, $base \cdot height = |x_1 - x_2|$. Final complexity is $\mathcal{O}(n^2)$ per test case. Try to solve it for $n, x_i \leq 10^5$.
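In Python the whole solution fits in a couple of lines (hypothetical name; `xs` are the trees' x-coordinates, assumed distinct):

```python
def distinct_areas(xs):
    # The doubled area of the triangle on (x1, 0), (x2, 0), (0, 1) is |x1 - x2|,
    # so distinct areas correspond to distinct pairwise differences.
    return len({abs(a - b) for i, a in enumerate(xs) for b in xs[i + 1:]})
```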
[ "brute force", "geometry", "math" ]
800
#include <bits/stdc++.h> using namespace std; void solve() { int n; scanf("%d", &n); vector <int> in(n); for(auto &p: in) scanf("%d", &p); set <int> S; for(int i = 0; i < n; ++i) for(int j = i + 1; j < n; ++j) S.insert(in[j] - in[i]); printf("%d\n", (int)S.size()); } int main() { int cases; scanf("%d", &cases); while(cases--) solve(); return 0; }
1466
B
Last minute enhancements
Athenaeus has just finished creating his latest musical composition and will present it tomorrow to the people of Athens. Unfortunately, the melody is rather dull and highly likely won't be met with a warm reception. His song consists of $n$ notes, which we will treat as \textbf{positive integers}. The \textbf{diversity} of a song is the number of \textbf{different} notes it contains. As a patron of music, Euterpe watches over composers and guides them throughout the process of creating new melodies. She decided to help Athenaeus by changing his song to make it more diverse. Being a minor goddess, she cannot arbitrarily change the song. Instead, for each of the $n$ notes in the song, she can either leave it as it is or \textbf{increase} it by $1$. Given the song as a sequence of integers describing the notes, find out the maximal, achievable diversity.
We describe two solutions. The first focuses on maximal contiguous intervals of values. We notice that each such interval $[l, r]$ can either remain unchanged or get extended to $[l, r + 1]$ (other possibilities won't increase the number of different elements; thus, we don't need to consider them). From this observation, we conclude that the result is the number of different elements in the input increased by the number of intervals containing at least one duplicate. The second is based on a simple, greedy approach, where we analyze the elements in nondecreasing order. While analyzing, we keep the set of elements whose values we have already decided. When we want to add the next element, we check if it is already in the set. If it is, we increase it by $1$; otherwise, we keep it as it is. Can you solve it if note $x_i$ can be increased by any integer in $[0, k_i]$, for given $k_i$?
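The first (interval-based) solution can be sketched as follows; the function name `max_diversity` and the run-scanning details are our own illustration, not the official code (which uses the greedy approach instead).

```cpp
#include <map>
#include <vector>

// Interval-based solution sketch: group the distinct note values into
// maximal runs of consecutive integers. A run [l, r] can additionally
// produce the value r + 1 exactly when it contains a duplicated note.
int max_diversity(const std::vector<int>& a) {
    std::map<int, int> cnt;
    for (int v : a) cnt[v]++;
    int distinct = (int)cnt.size();
    int extra = 0;
    bool run_has_dup = false;
    int prev = -2;  // sentinel that is never adjacent to a real value
    for (auto [v, c] : cnt) {
        if (v != prev + 1) {       // a new maximal run starts here
            extra += run_has_dup;  // close the previous run
            run_has_dup = false;
        }
        if (c > 1) run_has_dup = true;
        prev = v;
    }
    extra += run_has_dup;          // close the last run
    return distinct + extra;
}
```

For the song $(1, 2, 2, 2, 5, 6)$ the run $[1, 2]$ holds a duplicate and extends to $[1, 3]$, so the diversity is $5$.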
[ "dp", "greedy" ]
800
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n;
    scanf("%d", &n);
    set <int> S;
    for(int i = 1; i <= n; ++i) {
        int v;
        scanf("%d", &v);
        if(S.count(v))
            v++;
        S.insert(v);
    }
    printf("%d\n", (int)S.size());
}

int main() {
    int cases;
    scanf("%d", &cases);
    while(cases--)
        solve();
    return 0;
}
1466
C
Canine poetry
After his wife's tragic death, Eurydice, Orpheus descended to the realm of death to see her. Reaching its gates was uneasy, but passing through them proved to be even more challenging. Mostly because of Cerberus, the three-headed hound of Hades. Orpheus, a famous poet, and musician plans to calm Cerberus with his poetry and safely walk past him. He created a very peculiar poem for Cerberus. It consists only of lowercase English letters. We call a poem's substring a palindrome if and only if it reads the same backwards and forwards. A string $a$ is a substring of a string $b$ if $a$ can be obtained from $b$ by deleting several (possibly zero or all) characters from the beginning and several (possibly zero or all) characters from the end. Unfortunately, Cerberus dislikes palindromes of length greater than $1$. For example in the poem abaa the hound of Hades wouldn't like substrings aba and aa. Orpheus can only calm Cerberus if the hound likes his poetry. That's why he wants to change his poem so that it does not contain any palindrome substrings of length \textbf{greater than $1$}. Orpheus can modify the poem by replacing a letter at any position with any lowercase English letter. He can use this operation arbitrarily many times (possibly zero). Since there can be many palindromes in his poem, he may have to make some corrections. But how many, exactly? Given the poem, determine the minimal number of letters that have to be changed so that the poem does not contain any palindromes of length greater than $1$.
The main observation is that if there exists a palindrome of length larger than $3$, then there also exists a palindrome of length $2$ or $3$ (at its center). This observation allows us to simplify the task to erasing all palindromes of length $2$ or $3$. We can also notice that each character will be replaced at most once. From now on, there are a few possible solutions. The easiest one is to iterate over the word from left to right. When we encounter a palindrome (of length $2$ or $3$) ending at the current position and consisting of unmarked characters, we greedily mark the current character as one we want to replace. The number of marked characters is the answer, as it turns out that we can obtain a valid palindrome-less sequence by replacing only the letters on the marked positions. The complexity is $\mathcal{O}(n)$. The sketch of the proof is as follows: it is sufficient to ensure that each marked character does not coincide with its $4$ relevant neighboring characters, so we are still left with $22$ remaining possibilities. We can try different approaches too. One of these is a $dp$-based solution, where the state $dp[i][c_a][c_b]$ denotes the minimal result after analyzing the first $i$ elements, where characters $s_i$ and $s_{i - 1}$ are equal to $c_a$ and $c_b$ respectively. This $dp$ can be implemented in $\mathcal{O}(n \cdot 26^2)$, which should pass, but is pretty messy to implement, so we want to improve it. We can notice that we are not interested in the exact values of the last $2$ characters, but only in whether they were changed or not (the same observation as in the greedy solution). Thanks to this, our state changes to $dp[i][c_i][c_{i - 1}]$, where $c_i$ encodes whether the $i$-th character was changed. This $dp$ can be cleanly implemented in linear time. What if letters can change, and you need to calculate the result after each change?
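The greedy marking described above can be sketched like this; the function name `min_changes` is our own, and the official solution for this problem uses the dp instead.

```cpp
#include <string>
#include <vector>

// Greedy sketch: scan left to right and, whenever the current character
// closes a length-2 or length-3 palindrome built from unmarked characters,
// mark it for replacement. A marked position can always be given a letter
// that differs from its 4 relevant neighbours (22 candidates remain).
int min_changes(const std::string& s) {
    int n = (int)s.size();
    std::vector<bool> marked(n, false);
    int ans = 0;
    for (int i = 0; i < n; ++i) {
        bool pal2 = i >= 1 && !marked[i - 1] && s[i] == s[i - 1];
        bool pal3 = i >= 2 && !marked[i - 2] && s[i] == s[i - 2];
        if (pal2 || pal3) {
            marked[i] = true;
            ++ans;
        }
    }
    return ans;
}
```

On the statement's example abaa only the third character is marked, giving the answer $1$.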
[ "dp", "greedy", "strings" ]
1,300
#include <bits/stdc++.h>
using namespace std;

const int N = 1e5 + 7;

int n;
char in[N];
int dp[N][2][2];

void solve() {
    scanf("%s", in + 1);
    n = strlen(in + 1);
    for(int ta = 0; ta < 2; ++ta)
        for(int tb = 0; tb < 2; ++tb)
            dp[0][ta][tb] = 0;
    for(int i = 1; i <= n; ++i)
        for(int ta = 0; ta < 2; ++ta)
            for(int tb = 0; tb < 2; ++tb)
                dp[i][ta][tb] = N;
    for(int i = 1; i <= n; ++i) {
        // i == 1: there is no character two positions back,
        // so no length-3 palindrome can end here
        if(in[i] != in[i - 1]) {
            if(i == 1 || in[i] != in[i - 2])
                dp[i][0][0] = min(dp[i][0][0], dp[i - 1][0][0]);
            dp[i][0][0] = min(dp[i][0][0], dp[i - 1][0][1]);
        }
        if(i == 1 || in[i] != in[i - 2])
            dp[i][0][1] = min(dp[i][0][1], dp[i - 1][1][0]);
        dp[i][0][1] = min(dp[i][0][1], dp[i - 1][1][1]);
        dp[i][1][0] = min(dp[i][1][0], min(dp[i - 1][0][0], dp[i - 1][0][1]) + 1);
        dp[i][1][1] = min(dp[i][1][1], min(dp[i - 1][1][0], dp[i - 1][1][1]) + 1);
    }
    int ans = N;
    for(int ta = 0; ta < 2; ++ta)
        for(int tb = 0; tb < 2; ++tb)
            ans = min(ans, dp[n][ta][tb]);
    printf("%d\n", ans);
}

int main() {
    int cases;
    scanf("%d", &cases);
    while(cases--)
        solve();
    return 0;
}
1466
D
13th Labour of Heracles
You've probably heard about the twelve labors of Heracles, but do you have any idea about the thirteenth? It is commonly assumed it took him a dozen years to complete the twelve feats, so on average, a year to accomplish every one of them. As time flows faster these days, you have minutes rather than months to solve this task. But will you manage? In this problem, you are given a tree with $n$ weighted vertices. A tree is a connected graph with $n - 1$ edges. Let us define its $k$-coloring as an assignment of $k$ colors to the edges so that each edge has exactly one color assigned to it. Note that you don't have to use all $k$ colors. A subgraph of color $x$ consists of these edges from the original tree, which are assigned color $x$, and only those vertices that are adjacent to at least one such edge. So there are no vertices of degree $0$ in such a subgraph. The value of a connected component is the \textbf{sum} of weights of its vertices. Let us define the value of a subgraph as a \textbf{maximum} of values of its connected components. We will assume that the value of an empty subgraph equals $0$. There is also a value of a $k$-coloring, which equals the \textbf{sum} of values of subgraphs of all $k$ colors. Given a tree, for each $k$ from $1$ to $n - 1$ calculate the maximal value of a $k$-coloring.
Description of the algorithm. For simplicity, I will describe the partition of edges into colors in terms of modifying the graph. After the modifications, each connected component will correspond to the edges in a given color (a different color for every component). I will add colors to the coloring one by one. Initially, we have only one color and the whole graph (one connected component). We add the next color in the following way: let $v$ be a vertex with maximum weight among the vertices with degree greater than $1$, and let $u$ be any neighbor of $v$. Add a vertex $v'$ to the graph with weight equal to the weight of $v$, add the edge $(u,v')$, and remove the edge $(u,v)$. This algorithm can be simulated in complexity $\mathcal{O}(n \log n)$: we process vertices in the order of nonincreasing weights and add each weight $w_i$ exactly $d_i-1$ times, where $w_i$ is the weight of the $i$-th vertex and $d_i$ its degree in the initial graph. Proof. Note that there exists an optimal solution in which all edges assigned to the same color form a single connected component. From this moment on, I will consider only such colorings. Proof: Consider a color such that the edges in that color don't form a single connected component. We can change the color of all of them apart from those in a connected component with maximal value, because the others don't contribute to the result. So reassigning them to the same colors as adjacent edges will not decrease the value of the coloring. The result equals the sum over all vertices of the product of the weight of a vertex ($w_i$) and the number of distinct colors of edges incident to it ($c_i$): $\sum_{i=1}^n w_i \cdot c_i$ Note that $c_i$ is always between $1$ and the degree of the $i$-th vertex. Furthermore, for a coloring using exactly $k$ colors, the following holds: $\sum_{i=1}^n c_i = n + k-1$ Proof: Consider any coloring $C$; it can be obtained in the following way: initially all edges are in the first color, thus all $c_i = 1$.
While not all colors have been processed: Consider the connected component in the first color. Let me define a leaf edge as an edge incident to a vertex of degree one in the graph containing only edges in the first color. There must exist at least one leaf edge which has color $x \neq 1$ in the coloring $C$ (a leaf edge currently has the first color, but has to be repainted). Process color $x$: set the color of all edges whose color in $C$ is $x$ to $x$. Only the $c_i$ of vertices belonging to both the connected components of the first and the $x$-th color have changed (they have increased by $1$). Since edges in color $x$ form a single connected component and the same holds for the first color, there is exactly one such vertex. This proves $\sum_{i=1}^n c_i \leq n + k-1$, which is enough to see that the algorithm above yields an optimal solution. The inequality $\sum_{i=1}^n c_i \geq n + k-1$ stems from the fact that there are $k$ connected components in a $k$-coloring.
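The counting argument yields a direct formula: the $k$-th answer is the total weight plus the $k-1$ largest entries of the multiset in which each weight $w_i$ appears $d_i - 1$ times. A sketch in our own framing as a reusable function (0-based vertices, names are ours):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Direct formula sketch: every extra color makes exactly one more vertex
// count twice, so the k-th answer is sum(w) plus the k-1 largest entries
// of the multiset {w[i] repeated deg[i] - 1 times}. Vertices are 0-based.
std::vector<long long> all_k_colorings(const std::vector<int>& w,
                                       const std::vector<std::pair<int, int>>& edges) {
    int n = (int)w.size();
    std::vector<int> deg(n, 0);
    for (auto [u, v] : edges) { deg[u]++; deg[v]++; }
    std::vector<long long> pool;
    long long base = 0;
    for (int i = 0; i < n; ++i) {
        base += w[i];
        for (int j = 1; j < deg[i]; ++j) pool.push_back(w[i]);
    }
    std::sort(pool.rbegin(), pool.rend());  // largest gains first
    std::vector<long long> ans = {base};    // ans[k-1] = best k-coloring
    for (long long v : pool) ans.push_back(ans.back() + v);
    return ans;                             // n - 1 values for a tree
}
```

For a star with center weight $5$ and leaves $1, 2, 3$ the answers are $11, 16, 21$: the center is counted once more per extra color.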
[ "data structures", "greedy", "sortings", "trees" ]
1,500
#include <bits/stdc++.h>
using namespace std;
typedef long long int LL;

const int N = 100 * 1000 + 7;

int n;
int w[N];
int deg[N];

void solve() {
    scanf("%d", &n);
    for(int i = 1; i <= n; ++i) {
        scanf("%d", &w[i]);
        deg[i] = 0;
    }
    for(int i = 1; i < n; ++i) {
        int u, v;
        scanf("%d %d", &u, &v);
        deg[u]++;
        deg[v]++;
    }
    LL ans = 0;
    vector <int> to_sort;
    for(int i = 1; i <= n; ++i) {
        for(int j = 1; j < deg[i]; ++j)
            to_sort.push_back(w[i]);
        ans += w[i];
    }
    sort(to_sort.begin(), to_sort.end());
    reverse(to_sort.begin(), to_sort.end());
    for(auto &v: to_sort) {
        printf("%lld ", ans);
        ans += v;
    }
    printf("%lld\n", ans);
}

int main() {
    int cases;
    scanf("%d", &cases);
    while(cases--)
        solve();
    return 0;
}
1466
E
Apollo versus Pan
Only a few know that Pan and Apollo weren't only battling for the title of the GOAT musician. A few millenniums later, they also challenged each other in math (or rather in fast calculations). The task they got to solve is the following: Let $x_1, x_2, \ldots, x_n$ be the sequence of $n$ non-negative integers. Find this value: $$\sum_{i=1}^n \sum_{j=1}^n \sum_{k=1}^n (x_i \, \& \, x_j) \cdot (x_j \, | \, x_k)$$ Here $\&$ denotes the bitwise and, and $|$ denotes the bitwise or. Pan and Apollo could solve this in a few seconds. Can you do it too? For convenience, find the answer modulo $10^9 + 7$.
The formula given in this task looks difficult to calculate, so we can rewrite it: $\sum_{i=1}^n \sum_{j=1}^n \sum_{k=1}^n (x_i \, \& \, x_j) \cdot (x_j \, | \, x_k) = \sum_{j=1}^n \sum_{i=1}^n (x_i \, \& \, x_j) \sum_{k=1}^n (x_j \, | \, x_k) = \sum_{j=1}^n \left[ \sum_{i=1}^n (x_i \, \& \, x_j) \right] \cdot \left[ \sum_{k=1}^n (x_j \, | \, x_k) \right]$ We fix the element $x_j$. Now the task is to calculate the two sums $\sum_i (x_i \, \& \, x_j)$ and $\sum_k (x_j \, | \, x_k)$, and multiply them by each other. Let's define the function $f(x, c)$ as the value of the $c$-th bit in $x$. For example $f(13, 1) = 0$, because $13 = 11\underline{0}1_2$, and $f(12, 2) = 1$, because $12 = 1\underline{1}00_2$. Additionally, define $M$ as the smallest integer such that $\forall_i \, x_i < 2^M$. Note that in this task $M \leq 60$. We can rewrite our sums using the function $f$: $\sum_i (x_i \, \& \, x_j) = \sum_{c = 0}^{M} 2^c \sum_i f(x_i, c) \cdot f(x_j, c) = \sum_{c = 0}^{M} 2^c f(x_j, c) \sum_i f(x_i, c)$ $\sum_k (x_j \, | \, x_k) = \sum_{c = 0}^{M} 2^c \sum_k \left[ 1 - (1 - f(x_j, c)) \cdot (1 - f(x_k, c)) \right] = \sum_{c = 0}^{M} 2^c \left[ n - (1 - f(x_j, c)) \sum_k (1 - f(x_k, c)) \right]$ In other words, we just split the elements $x_i, x_j, x_k$ into powers of two. If we memorize the values of $\sum_i f(x_i, c)$ for each $c \in \{0, 1, \ldots, M \}$, then we can calculate the desired sums in $\mathcal{O}(M)$ for a fixed $x_j$ using the above equations. So the final solution is to iterate over all elements of the array, fix each of them as $x_j$, and sum all of the obtained results. The complexity is $\mathcal{O}(nM) = \mathcal{O}(n \log \max_i(x_i))$.
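To sanity-check the rewritten sums, here is a sketch of the $\mathcal{O}(nM)$ evaluation without the modulo, suitable only for small inputs; names and the small bit width are our own choices.

```cpp
#include <vector>

// Per-bit evaluation sketch of the rewritten sums (no modulo, so only
// for small inputs): cnt[c] holds the number of elements with bit c set;
// for a fixed x_j both inner sums are then assembled in O(M).
long long triple_sum(const std::vector<long long>& x) {
    const int M = 20;  // enough bits for a small demo
    int n = (int)x.size();
    std::vector<long long> cnt(M, 0);
    for (long long v : x)
        for (int c = 0; c < M; ++c) cnt[c] += (v >> c) & 1;
    long long total = 0;
    for (long long v : x) {  // v plays the role of x_j
        long long sum_and = 0, sum_or = 0;
        for (int c = 0; c < M; ++c) {
            long long bit = (v >> c) & 1;
            sum_and += (1LL << c) * bit * cnt[c];
            // bit set in x_j: all n OR-terms have it; otherwise cnt[c] of them
            sum_or += (1LL << c) * (bit ? n : cnt[c]);
        }
        total += sum_and * sum_or;
    }
    return total;
}
```

For $x = (1, 2)$ a direct enumeration of all $8$ triples gives $14$, matching the per-bit evaluation.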
[ "bitmasks", "brute force", "math" ]
1,800
#include <bits/stdc++.h>
using namespace std;
typedef long long int LL;

const int N = 500 * 1000 + 7;
const int P = 60;
const int MX = 1e9 + 7;

int n;
LL in[N];
int cnt[P];

void solve() {
    scanf("%d", &n);
    for(int i = 0; i < P; ++i)
        cnt[i] = 0;
    for(int i = 1; i <= n; ++i) {
        scanf("%lld", &in[i]);
        for(int j = 0; j < P; ++j)
            cnt[j] += in[i] >> j & 1;
    }
    int ans = 0;
    for(int i = 1; i <= n; ++i) {
        LL exp_or = 0, exp_and = 0;
        for(int j = 0; j < P; ++j) {
            if(in[i] >> j & 1) {
                exp_or += (1LL << j) % MX * n;
                exp_and += (1LL << j) % MX * cnt[j];
            }
            else
                exp_or += (1LL << j) % MX * cnt[j];
        }
        exp_and %= MX, exp_or %= MX;
        ans = (ans + 1LL * exp_or * exp_and) % MX;
    }
    printf("%d\n", ans);
}

int main() {
    int cases;
    scanf("%d", &cases);
    while(cases--)
        solve();
    return 0;
}
1466
F
Euclid's nightmare
You may know that Euclid was a mathematician. Well, as it turns out, Morpheus knew it too. So when he wanted to play a mean trick on Euclid, he sent him an appropriate nightmare. In his bad dream Euclid has a set $S$ of $n$ $m$-dimensional vectors over the $\mathbb{Z}_2$ field and can perform vector addition on them. In other words he has vectors with $m$ coordinates, each one equal to either $0$ or $1$. Vector addition is defined as follows: let $u+v = w$, then $w_i = (u_i + v_i) \bmod 2$. Euclid can sum any subset of $S$ and obtain another $m$-dimensional vector over $\mathbb{Z}_2$. In particular, he can sum together an empty subset; in such a case, the resulting vector has all coordinates equal $0$. Let $T$ be the set of all the vectors that can be written as a sum of some vectors from $S$. Now Euclid wonders about the size of $T$ and whether he can use only a subset $S'$ of $S$ to obtain all the vectors from $T$. As is usually the case in such scenarios, he will not wake up until he figures this out. So far, things are looking rather grim for the philosopher. But there is hope, as he noticed that all vectors in $S$ have \textbf{at most} $2$ coordinates equal $1$. Help Euclid and calculate $|T|$, the number of $m$-dimensional vectors over $\mathbb{Z}_2$ that can be written as a sum of some vectors from $S$. As it can be quite large, calculate it modulo $10^9+7$. You should also find $S'$, the \textbf{smallest} such subset of $S$, that all vectors in $T$ can be written as a sum of vectors from $S'$. In case there are multiple such sets with a minimal number of elements, output the lexicographically smallest one with respect to the order in which their elements are given in the input. Consider sets $A$ and $B$ such that $|A| = |B|$. Let $a_1, a_2, \dots a_{|A|}$ and $b_1, b_2, \dots b_{|B|}$ be increasing arrays of indices of the elements of $A$ and $B$ correspondingly.
$A$ is lexicographically smaller than $B$ iff there exists such $i$ that $a_j = b_j$ for all $j < i$ and $a_i < b_i$.
We start by simplifying the task. Increase the length of all vectors by $1$, i.e., add an additional dimension. For the vectors from $S$, the $(m+1)$-th coordinate will be $1$ in those that have exactly one coordinate equal to $1$. This way, all the vectors in $S$ will have $1$s on precisely two positions. When we have some items and some subsets of these items of size exactly $2$, it is intuitive to create a graph on these items. Thus we create a graph where vertices represent dimensions of our space, and edges are vectors given in the input. Consider a sum of a subset of vectors in the input. Notice that each such vector contributes $1$ to the sum at exactly $2$ positions. Thus the resulting vector has an even number of $1$s. Moreover, consider any connected component in the constructed graph. Note that our vector needs to have an even number of $1$s in each connected component for the same reason. It turns out that we can represent any vector fulfilling this condition. From these observations, we conclude that all we care about are the connected components of the graph. Thus $S'$ is a minimum spanning forest of this graph, where the weight of an edge is its index in the input. The number of achievable vectors is $2^{|S'|}$. The complexity is $\mathcal{O}((n + m) \log (n + m))$ or faster, depending on the algorithm used to find the MST.
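The "spanning forest by input index" observation can be sketched as a small Kruskal-style routine; the names are our own, and we place the auxiliary node at index $m$ with 0-based coordinates.

```cpp
#include <numeric>
#include <utility>
#include <vector>

// Kruskal-by-index sketch: a vector with ones at coordinates u, v becomes
// the edge (u, v); a vector with a single one at u becomes (u, m), where
// m is the auxiliary node. Taking every edge that merges two components,
// in input order, yields the minimal (and lexicographically smallest) S'.
struct DSU {
    std::vector<int> par;
    DSU(int n) : par(n) { std::iota(par.begin(), par.end(), 0); }
    int find(int a) { return par[a] == a ? a : par[a] = find(par[a]); }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return false;
        par[a] = b;
        return true;
    }
};

// edges[i] = (u, v) with 0 <= u, v <= m; returns 1-based indices of S'.
std::vector<int> choose_basis(int m, const std::vector<std::pair<int, int>>& edges) {
    DSU dsu(m + 1);
    std::vector<int> picked;
    for (int i = 0; i < (int)edges.size(); ++i)
        if (dsu.unite(edges[i].first, edges[i].second))
            picked.push_back(i + 1);
    return picked;
}
```

With $m = 2$ and input vectors $e_1$, $e_1 + e_2$, $e_2$, the first two are kept and the third is redundant, so $|T| = 2^2 = 4$.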
[ "bitmasks", "dfs and similar", "dsu", "graphs", "greedy", "math", "sortings" ]
2,100
#include <bits/stdc++.h>
using namespace std;
typedef long long int LL;

const int N = 5e5 + 7;
const int MX = 1e9 + 7;

int n, m;
int rep[N];

int Find(int a) {
    if(rep[a] == a)
        return a;
    return rep[a] = Find(rep[a]);
}

bool Union(int a, int b) {
    a = Find(a);
    b = Find(b);
    rep[a] = b;
    return a != b;
}

int main() {
    scanf("%d %d", &n, &m);
    for(int i = 1; i <= m + 1; ++i)
        rep[i] = i;
    vector <int> ans;
    for(int i = 1; i <= n; ++i) {
        int k;
        scanf("%d", &k);
        int fa, fb = m + 1;
        scanf("%d", &fa);
        if(k > 1)
            scanf("%d", &fb);
        if(Union(fa, fb))
            ans.push_back(i);
    }
    int res = 1;
    for(int i = 0; i < (int)ans.size(); ++i)
        res = (res + res) % MX;
    printf("%d %d\n", res, (int)ans.size());
    for(auto v: ans)
        printf("%d ", v);
    puts("");
    return 0;
}
1466
G
Song of the Sirens
\begin{quote} Whoso in ignorance draws near to them and hears the Sirens' voice, he nevermore returns. (Homer, Odyssey) \end{quote} In the times of Jason and the Argonauts, it was well known that sirens use the sound of their songs to lure sailors into their demise. Yet only a few knew that every time sirens call a sailor by his name, his will weakens, making him more vulnerable. For the purpose of this problem, both siren songs and names of the sailors will be represented as strings of lowercase English letters. The more times the sailor's name occurs as a contiguous substring of the song, the greater danger he is in. Jason found out that sirens can sing one of the $n+1$ songs, which have the following structure: let $s_i$ ($0 \leq i \leq n$) be the $i$-th song and $t$ be a string of length $n$, then for every $i < n$: $s_{i+1} = s_i t_i s_i$. In other words, the $(i+1)$-st song is the concatenation of the $i$-th song, the $i$-th letter ($0$-indexed) of $t$, and the $i$-th song. Fortunately, he also knows $s_0$ and $t$. Jason wonders how many times a sailor's name is mentioned in a particular song. Answer $q$ queries: given the sailor's name ($w$) and the index of a song ($i$) output the number of occurrences of $w$ in $s_i$ as a substring. As this number can be quite large, output its remainder modulo $10^9+7$.
We define $f(n, w)$ as the number of occurrences of $w$ in $s_n$. Moreover, we define $g(n, w)$, $n > 0$, as the number of occurrences of $w$ in $s_n$ which contain the character $t_{n - 1}$. With these definitions, we write the formula for $f(n, w)$ with $n > 0$: $f(n, w) = 2 \cdot f(n - 1, w) + g(n, w) = f(0, w) \cdot 2^n + \sum_{k = 1}^n g(k, w) \cdot 2^{n - k}$ Let $s_k$ be the shortest song such that $|w| \leq |s_k|$. Notice that $|s_k| \leq 2 \cdot |w| + |s_0|$. We can further reformulate the formula for $f(n, w)$: $f(n, w) = f(k, w) \cdot 2^{n - k} + \sum_{i = k + 1}^n g(i, w) \cdot 2^{n - i}$ The length of $s_k$ is more or less the same as the length of $w$, so we can brute-force all occurrences of $w$ in $s_k$ with, for example, the KMP algorithm. The key observation for calculating the latter part of the formula is that $g(i, w) = g(j, w)$ if $t_{i - 1} = t_{j - 1}$ and $k < i, j$. To prove it, we note that $s_i, s_j$ can be represented as: $s_i = a_i \, s_k \, t_{i - 1} \, s_k \, b_i \\ s_j = a_j \, s_k \, t_{j - 1} \, s_k \, b_j \\$ As $|w| \leq |s_k|$, and we are interested only in occurrences containing the middle character, we can see that for equal $t_{i - 1}$, $t_{j - 1}$ the values of $g$ are the same. We calculate the value of the $g$ function for each letter of the alphabet (we can use KMP for this). Using prefix sums we can calculate $\sum_{i = k + 1}^n g(i, w) \cdot 2^{n - i}$ in $\mathcal{O}(|\Sigma|)$, where $|\Sigma|$ denotes the size of the alphabet. The whole algorithm works in $\mathcal{O}(n + q \cdot (|\Sigma| + |s_0|) + S)$, where $S$ denotes the total length of all queries. It can be improved to $\mathcal{O}(n + q + S)$, but that was not needed to get accepted.
[ "combinatorics", "divide and conquer", "hashing", "math", "string suffix structures", "strings" ]
2,600
#include <bits/stdc++.h>
using namespace std;

const int MAXLEN = 1e6 + 7;
const int MOD = 1e9 + 7;

int n, q;
string s, t;
vector <string> songs;
vector <int> pw;
vector <int> sum[26];

void read(string &p) {
    static char input[MAXLEN];
    scanf("%s", input);
    p = input;
}

void prepare_songs() {
    songs = {s};
    for(auto c: t) {
        if(songs.back().size() >= MAXLEN)
            break;
        auto p = songs.back();
        auto nxt = p + string(1, c) + p;
        songs.push_back(nxt);
    }
}

void prepare_sum() {
    for(int i = 0; i < 26; ++i) {
        char c = 'a' + i;
        sum[i].resize(n + 1, 0);
        for(int j = 0; j < n; ++j)
            sum[i][j + 1] = (sum[i][j] * 2 + (t[j] == c)) % MOD;
    }
    pw.resize(n + 1, 0);
    pw[0] = 1;
    for(int i = 1; i <= n; ++i)
        pw[i] = 2LL * pw[i - 1] % MOD;
}

void init() {
    scanf("%d %d", &n, &q);
    read(s), read(t);
    prepare_songs();
    prepare_sum();
}

vector <int> kmp(string &in) {
    int m = in.size(), cpref = 0;
    vector <int> dp(m, 0);
    for(int i = 1; i < (int)in.size(); ++i) {
        while(cpref > 0 && in[cpref] != in[i])
            cpref = dp[cpref - 1];
        if(in[cpref] == in[i])
            ++cpref;
        dp[i] = cpref;
    }
    return dp;
}

/* Get all borders based on dp from kmp */
vector <bool> get(vector <int> &dp, int m) {
    vector <bool> ret(m + 1, false);
    int cur = dp.back();
    while(cur) {
        ret[cur] = true;
        cur = dp[cur - 1];
    }
    ret[cur] = true;
    return ret;
}

int answer(int k, string &w) {
    int id = 0;
    while(songs[id].size() < w.size())
        ++id;
    if(id > k)
        return 0;
    int m = w.size();
    string s_pref = w + '#' + songs[id];
    string s_suf = songs[id] + '#' + w;
    auto dp_pref = kmp(s_pref);
    auto dp_suf = kmp(s_suf);
    auto pref = get(dp_pref, m);
    auto suf = get(dp_suf, m);
    /* Compute all internal occurrences */
    int ret = 0;
    for(auto &v: dp_pref)
        ret += (v == m);
    ret = 1LL * ret * pw[k - id] % MOD;
    /* Compute the remaining occurrences */
    for(int i = 0; i < m; ++i) {
        if(!pref[i] || !suf[m - i - 1])
            continue;
        int c = w[i] - 'a';
        ret = (ret + sum[c][k] - 1LL * sum[c][id] * pw[k - id]) % MOD;
    }
    ret = (ret + MOD) % MOD;
    return ret;
}

void solve() {
    while(q--) {
        int k;
        string w;
        scanf("%d", &k);
        read(w);
        printf("%d\n", answer(k, w));
    }
}

int main() {
    init();
    solve();
    return 0;
}
1466
H
Finding satisfactory solutions
Getting so far in this contest is not an easy feat. By solving all the previous problems, you have impressed the gods greatly. Thus, they decided to spare you the story for this problem and grant a formal statement instead. Consider $n$ agents. Each one of them initially has exactly one item, \textbf{$i$-th agent has the item number $i$}. We are interested in reassignments of these items among the agents. An assignment is valid iff each item is assigned to exactly one agent, and each agent is assigned exactly one item. Each agent has a preference over the items, which can be described by a permutation $p$ of items sorted from the most to the least desirable. In other words, the agent prefers item $i$ to item $j$ iff $i$ appears earlier in the permutation $p$. A \textbf{preference profile} is a list of $n$ permutations of length $n$ each, such that $i$-th permutation describes preferences of the $i$-th agent. It is possible that some of the agents are not happy with the assignment of items. A set of dissatisfied agents may choose not to cooperate with other agents. In such a case, they would exchange the items they possess initially ($i$-th item belongs to $i$-th agent) only between themselves. Agents from this group don't care about the satisfaction of agents outside of it. However, they need to exchange their items in such a way that will make at least one of them happier, and none of them less happy (in comparison to the given assignment). Formally, consider a valid assignment of items — $A$. Let $A(i)$ denote the item assigned to $i$-th agent. Also, consider a subset of agents. Let $S$ be the set of their indices. We will say this subset of agents is dissatisfied iff there exists a valid assignment $B(i)$ such that: - For each $i \in S$, $B(i) \in S$. - No agent $i \in S$ prefers $A(i)$ to $B(i)$ (no agent from the $S$ is less happy). - At least one agent $i \in S$ prefers $B(i)$ to $A(i)$ (at least one agent from the $S$ is happier). 
An assignment is optimal if no subset of the agents is dissatisfied. Note that the empty subset cannot be dissatisfied. It can be proven that for each preference profile, there is precisely one optimal assignment. Example: Consider $3$ agents with the following preference profile: - $[2, 1, 3]$ - $[1, 2, 3]$ - $[1, 3, 2]$ And such an assignment: - First agent gets item $2$ - Second agent gets item $3$. - Third agent gets item $1$. See that the set of agents $\{1, 2\}$ is dissatisfied, because they can reassign their (initial) items in the following way: - First agent gets item $2$. - Second agent gets item $1$. - Third agent gets item $3$. This reassignment will make the second agent happier and make no difference to the first agent. As a result, the third agent got an item that is worse for him, but this does not prevent the set $\{1,2\}$ from being dissatisfied (he is not in this set). The following assignment would be optimal: - First agent gets item $2$. - Second agent gets item $1$. - Third agent gets item $3$. Given an assignment $A$, calculate the number of distinct preference profiles for which assignment $A$ is optimal. As the answer can be huge, output it modulo $10^9+7$. Two preference profiles are different iff they assign different preference permutations to any agent.
We start with presenting the algorithm for finding the unique optimal assignment of goods. It treats agents as vertices in a graph and works in the following way: For every $i$ from $1$ to $n$ create a vertex (a single vertex will represent both the agent and his good). While there are agents left, perform a round of the algorithm in the following way: for every agent, create a directed edge from his vertex to the vertex of the good most desired by him that is still unassigned (present in the graph); for every agent lying on a cycle, assign his most desirable, available good to him; then erase all the vertices lying on these cycles. So the task is to find the number of preference profiles that will generate the cycles present in the given optimal assignment. Note that cycles of the same length are equivalent. We can find this number by simulation of the above algorithm. To simulate, we use dynamic programming. The state contains: for each cycle length, the number of such cycles left; the number of already processed agents; the number of agents processed in the previous round of the algorithm; the number of agents processed in the current round of the algorithm; the length of the cycle we currently analyze. The state's value is the number of ways to assign preferences to all the agents on the already processed cycles. There are two types of transitions: iterate over the number of cycles of the current length to pick in the current round; finish processing the current round, and start the next round. Let $p(n)$ be the maximal product of natural numbers with sum equal to $n$. Then the number of transitions can be bounded by $\mathcal{O}( p(n) \cdot n^3 )$.
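The round-based procedure described above is known as top trading cycles. A sketch of one possible implementation follows (our own naming, 0-based agents and items; the contest solution itself only needs the resulting cycle lengths, not this simulation):

```cpp
#include <vector>

// Round-based sketch of the assignment procedure (top trading cycles).
// pref[i] lists items in agent i's preference order, 0-based;
// returns assign[i] = item given to agent i.
std::vector<int> optimal_assignment(const std::vector<std::vector<int>>& pref) {
    int n = (int)pref.size();
    std::vector<int> assign(n, -1), ptr(n, 0);
    std::vector<bool> done(n, false);
    int left = n;
    while (left > 0) {
        // every remaining agent points at his best still-present item
        std::vector<int> target(n, -1);
        for (int i = 0; i < n; ++i) if (!done[i]) {
            while (done[pref[i][ptr[i]]]) ++ptr[i];
            target[i] = pref[i][ptr[i]];
        }
        // find the cycles of this functional graph and satisfy them
        std::vector<int> color(n, 0);  // 0 unseen, 1 on current path, 2 closed
        for (int s = 0; s < n; ++s) {
            if (done[s] || color[s] != 0) continue;
            std::vector<int> path;
            int v = s;
            while (color[v] == 0) { color[v] = 1; path.push_back(v); v = target[v]; }
            if (color[v] == 1) {  // a fresh cycle starting at v
                int u = v;
                do {
                    assign[u] = target[u];
                    done[u] = true;
                    --left;
                    u = target[u];
                } while (u != v);
            }
            for (int u : path) color[u] = 2;
        }
    }
    return assign;
}
```

On the statement's example (preferences $[2,1,3]$, $[1,2,3]$, $[1,3,2]$, here 0-based) the first round resolves the cycle between agents $1$ and $2$, and the second round gives agent $3$ his own item, reproducing the optimal assignment.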
[ "combinatorics", "dp", "graphs", "greedy", "math" ]
3,300
#include <bits/stdc++.h>
using namespace std;
typedef long long int LL;
#define st first
#define nd second
#define PII pair <int, int>

const int N = 60;
const int MX = 1e9 + 7;

struct state {
    int sum_picked = 0;
    int last_picked = 0, cur_picked = 0;
    int iterator = 0;
    vector <PII> lengths;

    state(vector <PII> _lengths) {
        lengths = _lengths;
    }

    /* operators for map comparisons */
    bool operator<(const state &s) const {
        if(lengths != s.lengths)
            return lengths < s.lengths;
        if(last_picked != s.last_picked)
            return last_picked < s.last_picked;
        if(cur_picked != s.cur_picked)
            return cur_picked < s.cur_picked;
        return iterator < s.iterator;
    }

    bool operator==(const state &s) const {
        if(last_picked != s.last_picked)
            return false;
        if(cur_picked != s.cur_picked)
            return false;
        if(iterator != s.iterator)
            return false;
        if(lengths != s.lengths)
            return false;
        return true;
    }
};

int n;
vector <PII> cycles;
map <state, int> dp;
int sil[N], rv[N];
int pre[N][N][N];

int fast(int a, int b) {
    int ret = 1;
    while(b) {
        if(b & 1)
            ret = 1LL * ret * a % MX;
        b >>= 1;
        a = 1LL * a * a % MX;
    }
    return ret;
}

int newt(int a, int b) {
    if(b < 0 || a < b)
        return 0;
    return 1LL * sil[a] * rv[b] % MX * rv[a - b] % MX;
}

PII get_ways(int a, int b) {
    if(a == 0 && b == 0)
        return {sil[n - 1], 0};
    int ret = 0, ret2 = 0;
    int c = n - a - 1;
    for(int t = 1; t + c <= n; ++t) {
        ret = (ret + 1LL * newt(n - t, c) * sil[c] % MX * newt(n - c - 1, b) % MX * sil[b] % MX * sil[a - b]) % MX;
        ret2 = (ret2 + 1LL * newt(n - t, c) * sil[c] % MX * newt(n - t - c, b) % MX * sil[b] % MX * sil[a - b]) % MX;
    }
    return {ret, ret2};
}

void precalc() {
    sil[0] = 1;
    for(int i = 1; i <= n; ++i)
        sil[i] = 1LL * sil[i - 1] * i % MX;
    rv[n] = fast(sil[n], MX - 2);
    for(int i = n; i >= 1; --i)
        rv[i - 1] = 1LL * rv[i] * i % MX;
    for(int i = 0; i < n; ++i)
        for(int j = 0; j <= i; ++j) {
            if(i && j == 0)
                continue;
            auto [val, val2] = get_ways(i, j);
            pre[i][j][0] = 1;
            int res = 1, res2 = 1;
            for(int k = 1; k <= n; ++k) {
                res = 1LL * res * val % MX;
                res2 = 1LL * res2 * val2 % MX;
                pre[i][j][k] = (res + MX - res2) % MX;
            }
        }
}

void read() {
    scanf("%d", &n);
    precalc();
    vector <int> input(n);
    for(auto &v: input) {
        scanf("%d", &v);
        v--;
    }
    vector <bool> vis(n);
    vector <int> cycle_lengths;
    for(int i = 0; i < n; ++i) {
        if(vis[i])
            continue;
        int cur = i;
        int cycle_length = 0;
        while(!vis[cur]) {
            ++cycle_length;
            vis[cur] = true;
            cur = input[cur];
        }
        cycle_lengths.push_back(cycle_length);
    }
    sort(cycle_lengths.begin(), cycle_lengths.end());
    for(auto v: cycle_lengths) {
        if(cycles.size() && cycles.back().st == v)
            cycles.back().nd++;
        else
            cycles.push_back({v, 1});
    }
}

int solve(state &cur) {
    if(cur.sum_picked == n)
        return 1;
    if(dp.count(cur))
        return dp[cur];
    if(cur.iterator == (int)cur.lengths.size()) {
        if(cur.cur_picked == 0)
            return dp[cur] = 0;
        state nxt = cur;
        nxt.sum_picked += nxt.cur_picked;
        nxt.last_picked = nxt.cur_picked;
        nxt.cur_picked = 0;
        nxt.iterator = 0;
        return dp[cur] = solve(nxt);
    }
    state nxt = cur;
    nxt.iterator++;
    int ret = 0, tmp = 1;
    auto [length, count] = cur.lengths[cur.iterator];
    for(int i = 0; i <= count; ++i) {
        nxt.cur_picked = cur.cur_picked + i * length;
        nxt.lengths[cur.iterator].nd = count - i;
        ret = (ret + 1LL * solve(nxt) * tmp % MX * newt(count, i)) % MX;
        tmp = 1LL * tmp * pre[cur.sum_picked][cur.last_picked][length] % MX;
    }
    return dp[cur] = ret;
}

int main() {
    read();
    state start = state(cycles);
    int ans = solve(start);
    printf("%d\n", ans);
    return 0;
}
1466
I
The Riddle of the Sphinx
\begin{quote} What walks on four feet in the morning, two in the afternoon, and three at night? \end{quote} This is an interactive problem. This problem doesn't support hacks. Sphinx's duty is to guard the city of Thebes by making sure that no unworthy traveler crosses its gates. Only the ones who answer her riddle timely and correctly (or get an acc for short) are allowed to pass. As of those who fail, no one heard of them ever again... So you don't have a choice but to solve the riddle. Sphinx has an array $a_1, a_2, \ldots, a_n$ of \textbf{nonnegative} integers \textbf{strictly smaller} than $2^b$ and asked you to find the maximum value among its elements. Of course, she will not show you the array, but she will give you $n$ and $b$. As it is impossible to answer this riddle blindly, you can ask her some questions. For given $i, y$, she'll answer you whether $a_i$ is \textbf{bigger} than $y$. As sphinxes are not very patient, you can ask at most $3 \cdot (n + b) $ such questions. Although cunning, sphinxes are honest. Even though the array \textbf{can change} between your queries, answers to the previously asked questions will remain valid.
First, we start with a slightly suboptimal solution using $5 \cdot (n + b)$ queries. We investigate the elements in order from $1$ to $n$. While doing so, we keep a stack with some of these elements, together with a prefix of the maximum found so far, maintaining the following invariants: the prefix of the maximum has exactly the same length as the number of elements on the stack, and the $k$-th element on the stack (counting from the lowest element) matches the current prefix on exactly its first $k$ bits. Now, we describe how to add an element while keeping these invariants. Assume the number of elements on the stack is $m < b$. If the current element is: - smaller on the first $m$ bits than the current prefix, then delete it and proceed to the next element; - equal on the first $m$ bits to the current prefix, then find the value of the $(m + 1)$-th bit using the current number and add this element to the top of the stack; - larger on the first $m$ bits than the current prefix, then remove the last element of the stack and the last bit of the prefix, and try again to add the current element to the stack. Queries on the first $m$ bits can be done by filling the number with $0$s or $1$s on the remaining positions (depending on the type of query). After we finish analyzing the sequence, we have $m$ elements on the stack and a prefix of length $m$, a candidate for the maximum. Now we want to find out whether this is truly a prefix of the maximum, or maybe there is an element with a larger prefix. To do this, we analyze the stack from its top. We take the top element and check whether it is larger on the first $m$ bits than the current prefix; if not, we remove it and proceed. Otherwise, if it is the $k$-th element on the stack, then we know it matches the current prefix on the first $k$ bits, so we keep only these first $k$ bits of the current prefix. At the very end, we are left with some prefix of the maximum. If this prefix has length $b$, then the task is solved. 
Otherwise, we find all elements with the same prefix and recursively solve the task for smaller $b$ and $n$. Let us quickly analyze the outlined solution. In the first phase, we ask at most $3$ queries per element when we add it and exactly one when we remove it. In the second phase, we ask as many queries as the stack's size (we can say that this is the query for removal). In the third phase, we use exactly $n$ queries, so in summary, we use $5 \cdot n$ queries. It can be proven that if we end up with a prefix of length $k < b$, then there are at most $k$ elements with such a prefix. Using a suitable potential, we can easily prove that the total number of queries used by the algorithm is at most $5 \cdot (n + b)$. Unfortunately, this is not enough. We optimize two parts of the solution. It is easy to observe that we can skip the third phase - if we get a prefix of length $k$, only the lowest $k$ elements from the stack can have this prefix. Some of them might be smaller on these bits, but this does not matter: if we ever query them later, we will receive no as an answer. This is enough to achieve a $4 \cdot (n + b)$ solution. The second optimization is that we do not need elements on the stack to match the appropriate prefix exactly. It is enough to fulfill the following conditions instead: the $k$-th element on the stack cannot be larger than the first $k$ bits of the prefix, and there is an element on the stack which is at least as large as the current prefix. This observation allows us to drop the one query we previously used to determine whether an element is smaller than the current prefix. This way, we achieve the demanded number of queries equal to $3 \cdot (n + b)$.
[ "binary search", "data structures", "interactive" ]
3,400
#include <bits/stdc++.h>
using namespace std;
#define st first
#define nd second

int n, b;

void write(int i, string y) {
    printf("%d ", i);
    for(auto p: y) printf("%c", p);
    puts("");
    fflush(stdout);
}

bool ask(int i, string y) {
    write(i, y);
    string ans;
    cin >> ans;
    return ans == "yes";
}

void solve(vector <int> cur, string pref) {
    if((int)pref.size() == b) {
        write(0, pref);
        return;
    }
    stack <pair <int, string> > S;
    S.push({0, pref});
    for(auto v: cur) {
        while(S.size() > 1) {
            auto prv = S.top().nd;
            while((int)prv.size() < b) prv.push_back('1');
            if(!ask(v, prv)) break;
            S.pop();
        }
        if((int)S.top().nd.size() == b) continue;
        auto prv = S.top().nd;
        prv.push_back('0');
        while((int)prv.size() < b) prv.push_back('1');
        if(ask(v, prv)) {
            prv = S.top().nd + "1";
            S.push({v, prv});
        } else {
            prv = S.top().nd + "0";
            S.push({v, prv});
        }
    }
    vector <int> nxt;
    string ans = S.top().nd;
    while(S.size() > 1) {
        auto p = S.top();
        S.pop();
        string tmp = ans;
        while((int)tmp.size() < b) tmp.push_back('1');
        if(ask(p.st, tmp)) {
            nxt.clear();
            ans = p.nd;
        }
        nxt.push_back(p.st);
    }
    solve(nxt, ans);
}

int main() {
    scanf("%d %d", &n, &b);
    vector <int> all_ids;
    for(int i = 0; i < n; ++i) all_ids.push_back(i + 1);
    solve(all_ids, "");
    return 0;
}
1467
A
Wizard of Orz
There are $n$ digital panels placed in a straight line. Each panel can show any digit from $0$ to $9$. Initially, all panels show $0$. Every second, the digit shown by each panel increases by $1$. In other words, at the end of every second, a panel that showed $9$ would now show $0$, a panel that showed $0$ would now show $1$, a panel that showed $1$ would now show $2$, and so on. When a panel is paused, the digit displayed on the panel does not change in the subsequent seconds. You must pause exactly one of these panels, at any second you wish. Then, the panels adjacent to it get paused one second later, the panels adjacent to those get paused $2$ seconds later, and so on. In other words, if you pause panel $x$, panel $y$ (for all valid $y$) would be paused exactly $|x-y|$ seconds later. For example, suppose there are $4$ panels, and the $3$-rd panel is paused when the digit $9$ is on it. - The panel $1$ pauses $2$ seconds later, so it has the digit $1$; - the panel $2$ pauses $1$ second later, so it has the digit $0$; - the panel $4$ pauses $1$ second later, so it has the digit $0$. The resulting $4$-digit number is $1090$. \textbf{Note that this example is not optimal for $n = 4$}. Once all panels have been paused, you write the digits displayed on them from left to right, to form an $n$-digit number (it can consist of leading zeros). What is the largest possible number you can get?
If there is only one panel, then pause it when digit $9$ appears on it. You cannot do any better than that. Otherwise, it is always optimal to pause the second panel from left, when the digit $8$ appears on it. This would give an answer of the form $98901234567890123456\dots$. You can verify that this is the largest number that can be achieved.
[ "constructive algorithms", "greedy", "math" ]
900
#include <bits/stdc++.h>
using namespace std;
#define int long long

void solveTestCase() {
    int n;
    cin >> n;
    string s = "989";
    if (n <= 3) {
        cout << s.substr(0, n) << "\n";
        return;
    }
    cout << s;
    for (int i = 3; i < n; i++) cout << (i - 3) % 10;
    cout << "\n";
}

signed main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    cout.tie(nullptr);
    int t = 1;
    cin >> t;
    while (t--) solveTestCase();
}
1467
B
Hills And Valleys
You are given a sequence of $n$ integers $a_1$, $a_2$, ..., $a_n$. Let us call an index $j$ ($2 \le j \le n-1$) a hill if $a_j > a_{j+1}$ and $a_j > a_{j-1}$; and let us call it a valley if $a_j < a_{j+1}$ and $a_j < a_{j-1}$. Let us define the intimidation value of a sequence as the sum of the number of hills and the number of valleys in the sequence. You can change \textbf{exactly one} integer in the sequence to any number that you want, or let the sequence remain unchanged. What is the minimum intimidation value that you can achieve?
Changing the value of $a_i$ affects the hill/valley status of only elements $\{a_{i-1}, a_i, a_{i+1}\}$. We claim that it is optimal to change $a_i$ to either $a_{i-1}$ or $a_{i+1}$ for valid $i$ ($1<i<n$). Let $x$ be a value of $a_i$ such that the number of hills/valleys among elements $\{a_{i-1},a_i,a_{i+1}\}$ is minimised. Now, if $x < \max(a_{i-1}, a_{i+1})$, we can set $x := \min(a_{i-1}, a_{i+1})$ without changing any non-hill/non-valley element to a hill/valley. Similarly, if $x > \min(a_{i-1},a_{i+1})$ we can set $x := \max(a_{i-1},a_{i+1})$. Hence proved. The final solution is as follows. Precompute the number of hills and valleys in the original array. Then, for every valid index $i$, calculate the change in the number of hills/valleys of the elements $\{a_{i-1},a_i,a_{i+1}\}$ on setting $a_i := a_{i-1}$ and $a_i := a_{i+1}$ and update the minimum answer accordingly.
[ "brute force", "implementation" ]
1,700
#include <bits/stdc++.h>
using namespace std;
#define int long long

const int N = 3e5;
int a[N], n;

int isValley(int i) {
    return (i > 0 && i < n - 1 && a[i] < a[i - 1] && a[i] < a[i + 1]);
}

int isHill(int i) {
    return (i > 0 && i < n - 1 && a[i] > a[i - 1] && a[i] > a[i + 1]);
}

void solveTestCase() {
    cin >> n;
    for (int i = 0; i < n; i++) cin >> a[i];
    vector<int> is(n, 0);  // is[i] = 1 if i is a hill or a valley in the original array
    int s = 0;
    for (int i = 1; i < n - 1; i++) {
        if (isHill(i) || isValley(i)) is[i] = 1, s++;
    }
    int ans = s;
    for (int i = 1; i < n - 1; i++) {
        int temp = a[i];
        // try a[i] := a[i - 1]
        a[i] = a[i - 1];
        ans = min(ans, s - is[i - 1] - is[i] - is[i + 1]
                         + isHill(i - 1) + isValley(i - 1)
                         + isHill(i) + isValley(i)
                         + isHill(i + 1) + isValley(i + 1));
        // try a[i] := a[i + 1]
        a[i] = a[i + 1];
        ans = min(ans, s - is[i - 1] - is[i] - is[i + 1]
                         + isHill(i - 1) + isValley(i - 1)
                         + isHill(i) + isValley(i)
                         + isHill(i + 1) + isValley(i + 1));
        a[i] = temp;
    }
    cout << ans << "\n";
}

signed main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    cout.tie(nullptr);
    int t = 1;
    cin >> t;
    while (t--) solveTestCase();
}
1467
C
Three Bags
You are given \textbf{three} bags. Each bag contains a non-empty multiset of numbers. You can perform a number of operations on these bags. In one operation, you can choose any two non-empty bags, and choose one number from each of the bags. Let's say that you choose number $a$ from the first bag and number $b$ from the second bag. Then, you remove $b$ from the second bag and replace $a$ with $a-b$ in the first bag. Note that if there are multiple occurrences of these numbers, then you shall only remove/replace exactly one occurrence. You have to perform these operations in such a way that you have exactly one number remaining in exactly one of the bags (the other two bags being empty). It can be shown that you can always apply these operations to receive such a configuration in the end. Among all these configurations, find the one which has the maximum number left in the end.
Model the problem as a rooted tree (graph). There are $n_1+n_2+n_3$ nodes, corresponding to the elements from the three bags. An edge between two nodes represents an operation on the two elements, and it can be seen that an edge can exist only between nodes of elements from different bags. A directed edge from $b$ to $a$ means that $b$ is removed and $a$ is replaced by $a-b$. All edges are directed towards the root. In the given rooted tree, operations are applied bottom up, that is, an operation is applied on a leaf node and its parent, and the leaf node is removed. The process is continued until only the root node remains. It is easy to see that the value of the element at the end of all operations is the sum of elements at even depth minus the sum of elements at odd depth (depth is $0$-indexed). Claim: the constructed rooted tree is valid if the elements at odd depth 1. are from at least two different bags, or 2. contain all elements from one bag. Proof: consider a rooted tree with all elements at odd depth from exactly one bag (say $X$) which doesn't fulfil criterion $2$. No remaining elements from bag $X$ can be connected to the nodes at odd depth, making it impossible to construct a tree. This implies that $1$ must hold. However, if criterion $2$ holds, no elements from bag $X$ remain to be connected, and thus the tree can be constructed. Hence proved. Now, we have to minimise the sum of the elements placed at odd depth in each of the two cases, and the better of the two gives the solution to the problem. For the first case, we can choose the two smallest numbers that appear in different bags to be at odd depth. For the second case, we can choose all numbers from the bag where the sum of the numbers is minimum to be at odd depth.
[ "constructive algorithms", "greedy" ]
1,900
#include <bits/stdc++.h>
using namespace std;
#define int long long

const int N = 3e5;
vector<vector<int>> a(3, vector<int>());
vector<int> n(3, 0);

int calc() {
    int ans = 0, m1 = a[1][0], m2 = a[2][0], s1 = 0, s2 = 0;
    for (int i : a[0]) ans += i;
    for (int i = 1; i < n[1]; i++) s1 += a[1][i];
    for (int i = 1; i < n[2]; i++) s2 += a[2][i];
    ans += max({s2 - m1 + s1 - m2, m2 + s2 - m1 - s1, m1 + s1 - m2 - s2});
    return ans;
}

void solveTestCase() {
    cin >> n[0] >> n[1] >> n[2];
    a[0].resize(n[0]);
    a[1].resize(n[1]);
    a[2].resize(n[2]);
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < n[i]; j++) cin >> a[i][j];
        sort(a[i].begin(), a[i].end());
    }
    int ans = -1e18;
    ans = max(ans, calc());
    swap(a[0], a[1]), swap(n[0], n[1]);
    ans = max(ans, calc());
    swap(a[0], a[2]), swap(n[0], n[2]);
    ans = max(ans, calc());
    cout << ans;
}

signed main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    cout.tie(nullptr);
    int t = 1;
    //cin >> t;
    while (t--) solveTestCase();
}
1467
D
Sum of Paths
There are $n$ cells, numbered $1,2,\dots, n$ from left to right. You have to place a robot at any cell initially. The robot must make \textbf{exactly} $k$ moves. In one move, the robot must move one cell to the left or right, provided that it doesn't move out of bounds. In other words, if the robot was in the cell $i$, it must move to either the cell $i-1$ or the cell $i+1$, as long as it lies between $1$ and $n$ (endpoints inclusive). The cells, in the order they are visited (including the cell the robot is placed), together make a good path. Each cell $i$ has a value $a_i$ associated with it. Let $c_0, c_1, \dots, c_k$ be the sequence of cells in a good path in the order they are visited ($c_0$ is the cell robot is initially placed, $c_1$ is the cell where the robot is after its first move, and so on; more formally, $c_i$ is the cell that the robot is at after $i$ moves). Then the value of the path is calculated as $a_{c_0} + a_{c_1} + \dots + a_{c_k}$. Your task is to calculate the sum of values over all possible good paths. Since this number can be very large, output it modulo $10^9 + 7$. Two good paths are considered different if the starting cell differs or there exists an integer $i \in [1, k]$ such that the current cell of the robot after exactly $i$ moves is different in those paths. You must process $q$ updates to $a$ and print the updated sum each time. Each update changes the value of exactly one cell. See the input format and the sample input-output for more details.
The main idea of the problem is to calculate the contribution of each cell to the answer. Let $cnt_i$ denote the sum of the number of times cell $i$ appears in all good paths of length $k$. Then the answer is equal to $a_1 \cdot cnt_1 + a_2 \cdot cnt_2 + \dots + a_n \cdot cnt_n$. We shall use dynamic programming to calculate these values. Let $dp_{i,j}$ denote the number of good paths of length $j$ that end at cell $i$. Then, $dp_{i,0} = 1$ for all $i$ such that $1 \le i \le n$. Further, for all $i$ such that $1 \le i \le n$ and for all $j$ such that $1 \le j \le k$, $dp_{i,j}$ can be calculated as $dp_{i,j} = dp_{i-1,j-1} + dp_{i+1,j-1}$ because we can move to cell $i$ from either cell $i-1$ or cell $i+1$. Handle the cases where $i=1$ and $i=n$ separately to avoid out-of-bounds errors. Observe that $dp_{i,j}$ is also equal to the number of good paths of length $j$ that start at cell $i$. Let $b_{i,j}$ denote the number of times cell $i$ appears after exactly $j$ moves over all good paths of length $k$ (we write $b$ rather than $a$ to avoid a clash with the cell values). Then $b_{i,j} = dp_{i,j} \cdot dp_{i,k-j}$ because we can split a path of length $k$ into two paths of length $j$ and length $k-j$, with the first path ending at cell $i$ and the second path starting at cell $i$. Since $cnt_i$ denotes the sum of the number of times cell $i$ appears in all good paths of length $k$, and cell $i$ can appear after exactly $0, 1, 2, \dots, k$ moves, we get $cnt_i = \sum_{j=0}^{k} b_{i,j}$. Extending the solution to account for updates is easy once we have calculated these values.
[ "combinatorics", "dp", "math" ]
2,200
#include <bits/stdc++.h>
using namespace std;
#define int long long

const int N = 5005, mod = 1e9 + 7;
int dp[N][N], cnt[N], a[N], q, n, k;

void pre() {
    // dp[i][j] = number of good paths of length j ending (equivalently, starting) at cell i
    for (int i = 0; i < n; i++) dp[i][0] = 1;
    for (int j = 1; j < N; j++) {
        dp[0][j] = dp[1][j - 1];
        for (int i = 1; i < n - 1; i++) dp[i][j] = (dp[i - 1][j - 1] + dp[i + 1][j - 1]) % mod;
        dp[n - 1][j] = dp[n - 2][j - 1];
    }
    // cnt[i] = total number of appearances of cell i over all good paths of length k
    for (int i = 0; i < n; i++) {
        for (int j = 0; j <= k; j++) cnt[i] += dp[i][j] * dp[i][k - j], cnt[i] %= mod;
    }
}

void solveTestCase() {
    int ans = 0;
    for (int i = 0; i < n; i++) cin >> a[i], ans += a[i] * cnt[i], ans %= mod;
    while (q--) {
        int i, x;
        cin >> i >> x;
        i--;
        ans -= (a[i] * cnt[i]) % mod, ans += mod, ans %= mod;
        a[i] = x;
        ans += (a[i] * cnt[i]), ans %= mod;
        cout << ans << "\n";
    }
}

signed main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    cout.tie(nullptr);
    cin >> n >> k >> q;
    pre();
    int t = 1;
    //cin >> t;
    while (t--) solveTestCase();
}
1467
E
Distinctive Roots in a Tree
You are given a tree with $n$ vertices. Each vertex $i$ has a value $a_i$ associated with it. Let us root the tree at some vertex $v$. The vertex $v$ is called a distinctive root if the following holds: in all paths that start at $v$ and end at some other node, all the values encountered are distinct. Two different paths may have values in common but a single path must have all distinct values. Find the number of distinctive roots in the tree.
Root the tree arbitrarily. Consider any node $v$. Let us remove $v$ from the tree and examine the trees that will be created in the resultant forest. Let's say that a particular tree was attached to $v$ through node $u$. Further, let's say that this tree has some node $x$ satisfying $a_v = a_x$. Then clearly, if any distinctive root exists, it must be in this component, because if it were in any other component, then, when you root the tree at that node, you would get a path from the root containing both $v$ and $x$. Let us add a directed edge from $v$ to $u$, signifying that all distinctive roots must be in this particular component. Once we have repeated this for all nodes, we have a set of directed edges, and every one of these edges must point towards a distinctive root. We can check this for all nodes using the rerooting technique. A link-cut tree or a difference array works as well. Note: in order to count how many times a value $x$ occurs in a subtree, do a pre-order traversal of the tree. Create a map which maps a value $b$ to the dfs-in times of all nodes $i$ that satisfy $a_i = b$. Now, two lower_bound binary searches on this list tell us the number of occurrences in the subtree in logarithmic time. With this information, we can also calculate the number of times $x$ occurs in the tree attached to $v$'s parent.
[ "data structures", "dfs and similar", "dp", "trees" ]
2,500
#include <bits/stdc++.h>
using namespace std;
#define int long long

const int N = 5e5;
vector<int> adj[N];
int a[N], par[N], n;
map<int, vector<int>> v, times;
int euler[N * 2 - 1], tin[N], tout[N], c = 0;
set<pair<int, int>> g;
int dp[N], ans;

void dfs(int v, int p = -1) {
    par[v] = p;
    tin[v] = c;
    euler[c++] = v;
    for (int i : adj[v]) {
        if (i == p) continue;
        dfs(i, v);
        euler[c++] = v;
    }
    tout[v] = c - 1;
}

void examine(int v) {
    int sum = 0;
    for (int i : adj[v]) {
        if (i == par[v]) continue;
        int count = upper_bound(times[a[v]].begin(), times[a[v]].end(), tout[i])
                  - lower_bound(times[a[v]].begin(), times[a[v]].end(), tin[i]);
        if (count > 0) g.insert({v, i});
        sum += count;
    }
    sum = times[a[v]].size() - sum - 1;
    if (sum) g.insert({v, par[v]});
}

int setup(int v) {
    for (int i : adj[v]) {
        if (i != par[v]) dp[v] += setup(i);
    }
    return dp[v] + g.count({v, par[v]});
}

void reroot(int v) {
    if (dp[v] == (int)g.size()) ans++;
    for (int i : adj[v]) {
        if (i == par[v]) continue;
        dp[v] -= dp[i];
        dp[v] -= g.count({i, v});
        dp[i] += dp[v];
        dp[i] += g.count({v, i});
        reroot(i);
        dp[i] -= g.count({v, i});
        dp[i] -= dp[v];
        dp[v] += g.count({i, v});
        dp[v] += dp[i];
    }
}

void solveTestCase() {
    cin >> n;
    for (int i = 0; i < n; i++) cin >> a[i];
    for (int i = 0; i < n - 1; i++) {
        int u, v;
        cin >> u >> v;
        u--, v--;
        adj[u].push_back(v);
        adj[v].push_back(u);
    }
    dfs(0);
    for (int i = 0; i < n; i++) v[a[i]].push_back(i);
    for (auto i : v) {
        if (i.second.size() == 1) continue;
        for (int j : i.second) times[i.first].push_back(tin[j]);
        sort(times[i.first].begin(), times[i.first].end());
        for (int j : i.second) examine(j);
    }
    setup(0);
    reroot(0);
    cout << ans;
}

signed main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    cout.tie(nullptr);
    int t = 1;
    //cin >> t;
    while (t--) solveTestCase();
}
1468
A
LaIS
Let's call a sequence $b_1, b_2, b_3 \dots, b_{k - 1}, b_k$ almost increasing if $$\min(b_1, b_2) \le \min(b_2, b_3) \le \dots \le \min(b_{k - 1}, b_k).$$ In particular, any sequence with no more than two elements is almost increasing. You are given a sequence of integers $a_1, a_2, \dots, a_n$. Calculate the length of its longest almost increasing subsequence. You'll be given $t$ test cases. Solve each test case independently. Reminder: a subsequence is a sequence that can be derived from another sequence by deleting some elements without changing the order of the remaining elements.
Let's add $0$ to the beginning of $a$; then the LaIS increases by one and always starts from $0$. Let's look at any almost increasing subsequence (aIS) and underline the elements which are minimums in at least one consecutive pair, for example, $[\underline{0}, \underline{1}, \underline{2}, 7, \underline{2}, \underline{2}, 3]$. Note that the underlined elements form an increasing subsequence (IS) and there is no more than one element between each consecutive pair. What constraints do these elements have? Obviously, $a[pos_{i - 1}] \le a[pos_i] \ge a[pos_{i + 1}]$, but we can ease the constraints to just $a[pos_{i - 1}] \le a[pos_i]$, i. e. we can allow $a[pos_i] < a[pos_{i + 1}]$, since the aIS will still be an aIS. Now, note that between $pos_{i-1}$ and $pos_{i + 1}$ we can choose any $j$ such that $a[pos_{i - 1}] \le a[j]$, so we can always choose the first such $j$; moreover, we can precalculate such $j$ as $nxt_i$ for each $a_i$ using a stack or something similar. Now, we can note that each $a_i$ is either a minimum in the LaIS or $i = nxt_j$ for some other element $a_j$. And we can write the following dynamic programming: let $d[i]$ be the length of the LaIS which finishes in $a_i$ where $a_i$ is the last minimum (we can think of the last element of the LaIS as the minimum in an imaginary pair). To calculate it, we can iterate $i$ from left to right and store, for each value $x$, the length of the LaIS with the last minimum (not the last element) equal to $x$ in a Segment Tree (ST). So, $d_i$ is equal to "get maximum over segment $[0, a_i]$ in ST" plus $1$. And we update the ST at two moments: firstly, right now with value $d_i$ in position $a_i$, and secondly, after we meet the element $nxt_i$, with value $d_i + 1$, also in position $a_i$. After we process all $i$ we'll get an ST where for each $x$ the length of the LaIS with last minimum equal to $x$ is stored, and the answer is equal just to "maximum over the whole tree (segment $[0, n]$)". 
In total, we should calculate $nxt_i$ for each $a_i$ (we can do it in linear time using a stack) and maintain a Segment Tree with two basic operations: range maximum and update in a position. The time complexity is $O(n \log(n))$ and the space complexity is $O(n)$.
[ "data structures", "dp", "greedy" ]
2,200
null
1468
B
Bakery
Monocarp would like to open a bakery in his local area. But, at first, he should figure out whether he can compete with other shops. Monocarp plans that the bakery will work for $n$ days. On the $i$-th day, $a_i$ loaves of bread will be baked in the morning before the opening. At the end of the $n$-th day, Monocarp will sell all the remaining bread that wasn't sold earlier with a huge discount. Because of how bread is stored, the bakery seller sells the bread in the following order: firstly, he sells the loaves that were baked that morning; secondly, he sells the loaves that were baked the day before and weren't sold yet; then the loaves that were baked two days before and weren't sold yet, and so on. That's why some customers may buy a rather stale bread and will definitely spread negative rumors. Let's define loaf spoilage as the difference between the day it was baked and the day it was sold. Then the unattractiveness of the bakery will be equal to the maximum spoilage among all loaves of bread baked at the bakery. Suppose Monocarp's local area has consumer demand equal to $k$, it means that each day $k$ customers will come to the bakery and each of them will ask for one loaf of bread (the loaves are sold according to the aforementioned order). If there is no bread left, then the person just doesn't buy anything. During the last day sale, all the remaining loaves will be sold (and they will still count in the calculation of the unattractiveness). Monocarp analyzed his competitors' data and came up with $m$ possible consumer demand values $k_1, k_2, \dots, k_m$, and now he'd like to calculate the unattractiveness of the bakery for each value of demand. Can you help him?
Let's look at all $n$ days with some fixed $k$. Obviously, the seller works like a stack, so let's divide all days into segments in such a way that the stack is emptied between consecutive segments. We can note that once we've got these segments, the answer is the maximum length among all segments. Why? Because the stalest bread is on the bottom of the stack and we can't sell it until we empty the stack. Now, let's set $k = 10^9$ and look at what happens when we gradually lower $k$. With $k=10^9$, we sell all bread that we baked on the same day, so all segments consist of one day $[i, i + 1)$. Now, if we start lowering $k$, then at some moment segments will start to merge (and only merge), since $k$ is not enough to sell all bread in the interval. Since no segment will ever split up, there will be only $n - 1$ merges. So we can look only at the moments $k$ when either several segments merge or when we should answer a query. At what $k$ will the segment $[p_1, p_2)$ start to merge with the next segment $[p_2, p_3)$? The answer is when $(p_2 - p_1) \cdot k < \sum\limits_{p_1 \le i < p_2}{a_i}$, i. e. $k = \left\lfloor \frac{(\sum{a_i}) - 1}{p_2 - p_1} \right\rfloor$. So, for each segment $[p_i, p_{i + 1})$ we can maintain in a set the value $k_i$ at which it will start merging with the next segment. And if we want to calculate the answer for a given $k$ from the queries, we should merge all segments with $k_i \ge k$, while updating the current maximum length among all segments. Since merging two segments requires two operations with the set, the total time complexity is $O(m + n \log{n})$.
[ "data structures", "dsu" ]
2,900
null
1468
C
Berpizza
Monocarp and Polycarp are working as waiters in Berpizza, a pizzeria located near the center of Bertown. Since they are waiters, their job is to serve the customers, but they choose whom they serve first differently. At the start of the working day, there are no customers at the Berpizza. They come there one by one. When a customer comes into the pizzeria, she sits and waits for Monocarp or Polycarp to serve her. Monocarp has been working in Berpizza for just two weeks, so whenever he serves a customer, he simply chooses the one who came to Berpizza first, and serves that customer. On the other hand, Polycarp is an experienced waiter at Berpizza, and he knows which customers are going to spend a lot of money at the pizzeria (and which aren't) as soon as he sees them. For each customer, Polycarp estimates the amount of money this customer can spend, and when he serves a customer, he chooses the one that is expected to leave the most money at Berpizza (in case there are several such customers, he chooses the one who came first among them). Obviously, no customer can be served twice, so Monocarp and Polycarp choose which customer to serve only among those who haven't been served yet. When the number of customers gets really high, it becomes difficult for both Monocarp and Polycarp to choose the customer they are going to serve. Your task is to write a program that makes these choices for them. Formally, your program should be able to process three types of queries: - $1$ $m$ — a customer comes to Berpizza, and Polycarp estimates the amount of money that they will spend as $m$; - $2$ — Monocarp serves a customer which came to the pizzeria first; - $3$ — Polycarp serves a customer which is expected to spend the largest amount of money at the pizzeria (if there are several such customers, the one that came to the pizzeria first is chosen). 
For each query of types $2$ and $3$, report the number of the customer who was served (the customers are numbered in the order they come to the pizzeria, starting from $1$).
The hardest part of this problem is to efficiently implement the third query, i. e. finding the customer with the greatest value of $m$ and erasing it. Simply iterating on all of them is too slow, since there may be up to $2.5 \cdot 10^5$ such queries and up to $5 \cdot 10^5$ customers at the pizzeria. There are several solutions to this issue, I will describe two of them. The first solution is to treat each customer as a pair $(id, m)$, where $id$ is the number of the customer. Then the first query means "insert a new pair", the second query - "remove the pair with minimum $id$", and the third query - "remove the pair with maximum $m$". To maintain them, we can use two balanced binary trees that store these pairs - one will store them sorted by $id$, and the other - sorted by $m$. Then the second query means "find the leftmost element in the first tree and erase it from both trees", and the third query means "find the rightmost element in the second tree and erase it from both trees". Balanced binary trees can perform these operations in $O(\log n)$, where $n$ is the size of the tree. Note that in most languages you don't have to write a balanced binary tree from scratch - for example, the containers std::set in C++ and TreeSet in Java already support all of the required operations. The second solution maintains three data structures: a queue for supporting the queries of type $2$, a heap for supporting the queries of type $3$, and an array or set that allows checking whether some customer was already deleted by a query. Finding the customer that came first using a queue or has a maximum value of $m$ using a heap is easy, but deleting some element from queue/heap is much harder (because we have to delete some arbitrary element, not necessarily the element at the head of the queue/heap). Instead, we can do it the other way: when we delete some customer from one of these data structures, we mark it as deleted. 
And while processing the following queries of type $2$ or $3$, we should check the element in the head of the data structure and, if it is marked as deleted, remove it before processing the query. Note that there can be multiple such elements, so it should be done in a loop. Since each element is deleted at most once, this solution works in $O(n \log n)$ amortized.
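The lazy-deletion approach from the second solution can be sketched in Python as follows (a sketch, not the reference implementation; class and method names are mine, and customers are numbered from $1$ in arrival order):

```python
import heapq
from collections import deque

class Pizzeria:
    """Queue + heap with lazy deletion (illustrative names)."""
    def __init__(self):
        self.queue = deque()   # customer ids in arrival order (query 2)
        self.heap = []         # (-m, id) pairs, so the top has maximum m (query 3)
        self.served = set()    # ids already served by either query type
        self.next_id = 1

    def arrive(self, m):       # query 1: a customer with money m comes in
        self.queue.append(self.next_id)
        heapq.heappush(self.heap, (-m, self.next_id))
        self.next_id += 1

    def serve_first(self):     # query 2: serve the earliest remaining customer
        while self.queue[0] in self.served:   # drop lazily-deleted heads
            self.queue.popleft()
        cid = self.queue.popleft()
        self.served.add(cid)
        return cid

    def serve_richest(self):   # query 3: serve max m, ties broken by arrival
        while self.heap[0][1] in self.served:  # drop lazily-deleted heads
            heapq.heappop(self.heap)
        _, cid = heapq.heappop(self.heap)
        self.served.add(cid)
        return cid
```

Each customer id enters and leaves each structure at most once, which gives the $O(n \log n)$ amortized bound mentioned above.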
[ "data structures", "implementation" ]
1,400
null
1468
D
Firecrackers
Consider a long corridor which can be divided into $n$ square cells of size $1 \times 1$. These cells are numbered from $1$ to $n$ from left to right. There are two people in this corridor, a hooligan and a security guard. Initially, the hooligan is in the $a$-th cell, the guard is in the $b$-th cell ($a \ne b$). \begin{center} {\small One of the possible situations. The corridor consists of $7$ cells, the hooligan is in the $3$-rd cell, the guard is in the $6$-th ($n = 7$, $a = 3$, $b = 6$).} \end{center} There are $m$ firecrackers in the hooligan's pocket, the $i$-th firecracker explodes in $s_i$ seconds after being lit. The following events happen each second (sequentially, \textbf{exactly in the following order}): - firstly, the hooligan either moves into an adjacent cell (from the cell $i$, he can move to the cell $(i + 1)$ or to the cell $(i - 1)$, and he cannot leave the corridor) or stays in the cell he is currently. If the hooligan doesn't move, he can light \textbf{one} of his firecrackers and drop it. The hooligan can't move into the cell where the guard is; - secondly, some firecrackers that were already dropped may explode. Formally, if the firecracker $j$ is dropped on the $T$-th second, then it will explode on the $(T + s_j)$-th second (for example, if a firecracker with $s_j = 2$ is dropped on the $4$-th second, it explodes on the $6$-th second); - finally, the guard moves one cell closer to the hooligan. If the guard moves to the cell where the hooligan is, the hooligan is caught. Obviously, the hooligan will be caught sooner or later, since the corridor is finite. His goal is to see the maximum number of firecrackers explode before he is caught; that is, he will act in order to maximize the number of firecrackers that explodes before he is caught. Your task is to calculate the number of such firecrackers, if the hooligan acts optimally.
The first crucial observation that we need is the following one: it is optimal to light and drop some firecrackers, and only then start running away from the guard (that's because running away doesn't actually do anything if none of the firecrackers are lit). The hooligan shouldn't drop more than $|a - b| - 1$ firecrackers, because otherwise he will be caught before he starts running away, and the last firecracker he dropped won't go off. Okay, now suppose the hooligan wants to explode exactly $f$ firecrackers. It's obvious that he wants to choose the $f$ firecrackers with minimum $s_i$, but in which order should he drop them? If the $i$-th firecracker he drops goes off in $s_j$ seconds, then it will explode on the $(i + s_j)$-th second. We have to choose an ordering of the firecrackers that minimizes the maximum of $(i + s_j)$ in order to check that the hooligan has enough time to see all the firecrackers he dropped explode. It can be shown that we can drop the firecracker with the maximum $s_j$ first, then the firecracker with the second maximum $s_j$, and so on, and the maximum of $(i + s_j)$ is minimized (obviously, we consider only the firecrackers with the $f$ minimum values of $s_i$). This leads us to a solution with a binary search on $f$ in $O(n \log n)$: we can check that the hooligan can explode at least $f$ firecrackers in $O(n)$ (after sorting the sequence $s$ beforehand), and binary search requires checking it for only $\log n$ values of $f$.
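A Python sketch of the check and the binary search (function and variable names are mine; the catch time $T$ is derived from the statement: the hooligan runs toward the far wall, so he is caught on second $b - 1$ when $a < b$ and on second $n - b$ otherwise):

```python
def max_firecrackers(n, a, b, s):
    """Maximum number of firecrackers the hooligan can see explode (a sketch)."""
    s = sorted(s)
    d = abs(a - b)                 # he may drop at most d - 1 firecrackers
    T = b - 1 if a < b else n - b  # the second on which he is finally caught

    def feasible(f):               # can he see f firecrackers explode?
        if f > d - 1:
            return False
        chosen = s[:f]             # the f smallest fuse times
        # drop the longest fuse first: the i-th drop happens on second i
        return all(i + fuse <= T
                   for i, fuse in enumerate(reversed(chosen), start=1))

    lo, hi = 0, min(len(s), d - 1)
    while lo < hi:                 # binary search for the largest feasible f
        mid = (lo + hi + 1) // 2
        if feasible(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo
```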
[ "binary search", "sortings" ]
1,700
null
1468
E
Four Segments
Monocarp wants to draw four line segments on a sheet of paper. He wants the $i$-th segment to have its length equal to $a_i$ ($1 \le i \le 4$). These segments can intersect with each other, and each segment should be either horizontal or vertical. Monocarp wants to draw the segments in such a way that they enclose a rectangular space, and the area of that rectangular space should be maximum possible. For example, if Monocarp wants to draw four segments with lengths $1$, $2$, $3$ and $4$, he can do it the following way: \begin{center} {\small Here, Monocarp has drawn segments $AB$ (with length $1$), $CD$ (with length $2$), $BC$ (with length $3$) and $EF$ (with length $4$). He got a rectangle $ABCF$ with area equal to $3$ that is enclosed by the segments.} \end{center} Calculate the maximum area of a rectangle Monocarp can enclose with four segments.
Suppose the values of $a_1$, $a_2$, $a_3$, $a_4$ are sorted in non-descending order. Then the shorter side of the rectangle cannot be longer than $a_1$, because one of the sides must be formed by a segment of length $a_1$. Similarly, the longer side of the rectangle cannot be longer than $a_3$, because there should be at least two segments with length not less than the length of the longer side. So, the answer cannot be greater than $a_1 \cdot a_3$. It's easy to construct the rectangle with exactly this area by drawing the following segments: from $(0, 0)$ to $(a_1, 0)$; from $(0, a_3)$ to $(a_2, a_3)$; from $(0, 0)$ to $(0, a_3)$; from $(a_1, 0)$ to $(a_1, a_4)$. So, the solution is to sort the sequence $[a_1, a_2, a_3, a_4]$, and then print $a_1 \cdot a_3$.
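The resulting solution is essentially a one-liner; a sketch in Python (the lengths $1, 2, 3, 4$ from the statement's example give area $3$):

```python
def max_rect_area(a):
    """a: the four segment lengths; the answer is (smallest) * (third smallest)."""
    b = sorted(a)
    return b[0] * b[2]
```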
[ "greedy" ]
800
null
1468
F
Full Turn
There are $n$ persons located on a plane. The $i$-th person is located at the point $(x_i, y_i)$ and initially looks at the point $(u_i, v_i)$. At the same moment of time, all persons will start to rotate clockwise synchronously with the same angular speed. They will rotate until they do one full $360$-degree turn. It is said that persons $A$ and $B$ made eye contact if person $A$ looks in person $B$'s direction at the same moment when person $B$ looks in person $A$'s direction. If there is a person $C$ located between persons $A$ and $B$, that will not obstruct $A$ and $B$ from making eye contact. A person can make eye contact with more than one person at the same time. Calculate the number of pairs of persons that will make eye contact at least once during the rotation (including the initial moment).
Let's define for the person with index $i$ their initial vision vector as the vector ($u_i - x_i$, $v_i - y_i$). It is possible to prove that two persons will make eye contact during the full 360-degree rotation if and only if their initial vision vectors are collinear and oppositely directed. Note that the positions of the persons do not matter, only their vision vectors. E.g. let's assume that person $A$ has the initial vision vector ($3$, $-4$) and person $B$ - ($-6$, $8$). These vectors are collinear and oppositely directed, hence persons $A$ and $B$ will make eye contact during the rotation. If we try to check separately for each pair of persons whether they will make eye contact, that would take too much time. For example, for $n=10^5$ that would take $\approx 5*10^9$ checks. Instead we should use a different approach. First, let's normalize the vision vector of each person by dividing its coordinates by the GCD of their absolute values. Here GCD stands for greatest common divisor. E.g. the coordinates of the vector ($6$, $-8$) should be divided by $GCD(|6|, |-8|) = GCD(6, 8) = 2$, and the normalized vector will be ($3$, $-4$). There is a special case for vectors which have zero as one of the coordinates: ($0$, $C$), ($0$, $-C$), ($C$, $0$) and ($-C$, $0$), where $C$ is some positive integer. These should be normalized to the vectors ($0$, $1$), ($0$, $-1$), ($1$, $0$) and ($-1$, $0$) respectively. After normalization, all collinear and co-directed vectors will have exactly the same coordinates. Let's group such vectors and count the number of vectors in each group. Then it is obvious that each person from the group with vector ($x$, $y$) will make eye contact with each person from the group with vector ($-x$, $-y$). If the first group has $k$ vectors and the second group has $l$ vectors, in total there will be $k*l$ eye contacts between members of these two groups. Also, members of these two groups will not make eye contact with members of any other groups.
So a fast algorithm should create a map, where the key would be the group's vector and the value - the number of persons in the group. Then the algorithm should iterate over the groups in the map, for each group find the oppositely directed group, and add the product of these groups' sizes to the answer (taking care to count each pair of opposite groups only once). Note that the maximum possible answer is $(n/2)^2$. That would be $2.5*10^9$ when $n=10^5$, which does not fit into signed int32.
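A sketch of the counting in Python. Note that `math.gcd` already maps inputs like $(0, C)$ to $C$, so the special zero-coordinate case is handled automatically; dividing by $2$ at the end compensates for counting each pair from both of its groups:

```python
from math import gcd
from collections import Counter

def eye_contacts(vectors):
    """Count pairs with collinear, oppositely directed vision vectors.
    `vectors` holds (u - x, v - y) per person (a sketch, names are mine)."""
    groups = Counter()
    for dx, dy in vectors:
        g = gcd(abs(dx), abs(dy))   # nonzero: a person never looks at themselves
        groups[(dx // g, dy // g)] += 1
    total = 0
    for (dx, dy), cnt in groups.items():
        total += cnt * groups.get((-dx, -dy), 0)
    return total // 2               # each pair was counted from both sides
```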
[ "geometry", "hashing", "number theory" ]
1,700
null
1468
G
Hobbits
The hobbits Frodo and Sam are carrying the One Ring to Mordor. In order not to be spotted by orcs, they decided to go through the mountains. The mountain relief can be represented as a polyline with $n$ points $(x_i, y_i)$, numbered from $1$ to $n$ ($x_i < x_{i + 1}$ for $1 \le i \le n - 1$). Hobbits start their journey at the point $(x_1, y_1)$ and should reach the point $(x_n, y_n)$ to complete their mission. The problem is that there is a tower with the Eye of Sauron, which watches them. The tower is located at the point $(x_n, y_n)$ and has the height $H$, so the Eye is located at the point $(x_n, y_n + H)$. In order to complete the mission successfully, the hobbits have to wear cloaks all the time when the Sauron Eye can see them, i. e. when there is a direct line from the Eye to the hobbits which is not intersected by the relief. The hobbits are low, so their height can be considered negligibly small, but still positive, so when a direct line from the Sauron Eye to the hobbits only touches the relief, the Eye can see them. \begin{center} {\small The Sauron Eye can't see hobbits when they are in the left position, but can see them when they are in the right position.} \end{center} The hobbits do not like to wear cloaks, so they wear them only when they can be spotted by the Eye. Your task is to calculate the total distance the hobbits have to walk while wearing cloaks.
Let's start with the general idea of the solution. To solve the problem, we need to iterate over all relief segments and understand which part of the segment is seen by the Eye. A segment point can be hidden from the Eye by several mountains, but it is enough to track only the highest mountain. Generally speaking, the relief segments can be processed in any order, but it would be more convenient to iterate over them backwards - i.e. in the order from the Eye to the start point. Processing the segments in reversed order will allow us to recalculate the highest mountain more easily. Let's now discuss implementation details. Formally speaking, there are $n$ relief points $p_i = (x_i, y_i)$ ($1 \le i \le n$) and the Eye point $E = (x_n, y_n + H)$. Each relief point defines its own angle $\alpha_i$, which is measured counter-clockwise between the positive direction of the OX axis and the $(E, p_i)$ vector. Now, having two relief points $p_i$ and $p_j$ ($i < j$), it can be said that $p_i$ is hidden from the Eye if $\alpha_i > \alpha_j$. When this check is implemented in the solution, to avoid precision loss, it is recommended to use the vector (cross) product and analyse its sign to understand the relation of the angles. This check is done in $O(1)$ time. Being able to understand when a point is hidden from the Eye by another point, it is possible to calculate which part of a segment $(p_i, p_{i + 1})$ is seen by the Eye. First of all, let's note that if the segment's left point $p_i$ is hidden by its right point $p_{i + 1}$, then the entire segment is hidden from the Eye. Now, we have the situation when the left segment point is not hidden from the Eye by its right point. But the segment or its part can still be hidden by the highest mountain (point $M$), which is the point with the minimal angle among all relief points located to the right of our segment.
Here three cases are possible: both $p_{i}$ and $p_{i + 1}$ are hidden by $M$ - in this case the entire segment is hidden and we switch to the next segment; both $p_{i}$ and $p_{i + 1}$ are visible to the Eye (not hidden by the highest mountain $M$) - in this case the entire segment is visible and we should add its length to the answer; the left segment point $p_i$ is visible to the Eye, but the right segment point $p_{i + 1}$ is hidden by $M$ - in this case we need to find the intersection point $I$ of the segment $(p_i, p_{i + 1})$ and the ray $(E, M)$. What is left is to add the length of the segment $(p_i, I)$ to the answer. Now, let's conclude the final algorithm: Iterate over all relief segments from right to left, keeping the highest mountain point $M$. Analyze how the left and right points of the current segment are located relative to each other and to the point $M$. Recalculate $M$, taking the points of the current segment as candidates for the highest mountain.
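The cross-product visibility check can be sketched as follows (a hypothetical helper with my own naming; the sign convention follows the counter-clockwise angles $\alpha_i$ above, and the strict inequality reflects that a line which only touches the relief still lets the Eye see the hobbits):

```python
def hidden(eye, blocker, p):
    """True iff point p is hidden from `eye` by `blocker` (which lies to the
    right of p), i.e. the angle of eye->p exceeds the angle of eye->blocker.
    Both vectors point into the same half-plane, so the cross product sign
    decides the angle comparison exactly, with no floating point involved."""
    ax, ay = blocker[0] - eye[0], blocker[1] - eye[1]
    bx, by = p[0] - eye[0], p[1] - eye[1]
    return ax * by - ay * bx > 0   # strict: a touching line still sees them
```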
[ "binary search", "geometry" ]
2,500
null
1468
H
K and Medians
Let's denote the median of a sequence $s$ with odd length as the value in the middle of $s$ if we sort $s$ in non-decreasing order. For example, let $s = [1, 2, 5, 7, 2, 3, 12]$. After sorting, we get sequence $[1, 2, 2, \underline{3}, 5, 7, 12]$, and the median is equal to $3$. You have a sequence of $n$ integers $[1, 2, \dots, n]$ and an \textbf{odd} integer $k$. In one step, you choose any $k$ elements from the sequence and erase all chosen elements \textbf{except} their median. These elements do not have to go continuously (gaps are allowed between them). For example, if you have a sequence $[1, 2, 3, 4, 5, 6, 7]$ (i.e. $n=7$) and $k = 3$, then the following options for the first step are possible: - choose $[1, \underline{2}, 3]$; $2$ is their median, so it is not erased, and the resulting sequence is $[2, 4, 5, 6, 7]$; - choose $[2, \underline{4}, 6]$; $4$ is their median, so it is not erased, and the resulting sequence is $[1, 3, 4, 5, 7]$; - choose $[1, \underline{6}, 7]$; $6$ is their median, so it is not erased, and the resulting sequence is $[2, 3, 4, 5, 6]$; - and several others. You can do zero or more steps. Can you get a sequence $b_1$, $b_2$, ..., $b_m$ after several steps? You'll be given $t$ test cases. Solve each test case independently.
Since after each operation we erase exactly $k - 1$ elements from $a$, if $(n - m) \not\equiv 0 \pmod{(k - 1)}$ then the answer is NO. Otherwise, if there is an element $b_{pos}$ such that there are at least $\frac{k - 1}{2}$ erased elements lower than $b_{pos}$ and at least $\frac{k - 1}{2}$ erased elements greater than $b_{pos}$, then the answer is YES, otherwise NO. Let's prove this criterion in two ways. From the one side, in the last step, we should choose $k$ elements and erase them except their median, so the median is exactly that element $b_{pos}$. From the other side, let's prove that if there is such $b_{pos}$ then we can always choose operations to get the sequence $b$. Let's make operations in such a way that in the last step we'll erase $b_{pos}$, $d = \frac{k - 1}{2}$ elements lower than $b_{pos}$ and $d$ elements greater than $b_{pos}$. Since it doesn't matter which $d$ elements to take from the left and right of $b_{pos}$, we will only consider the number of such elements. Let's denote by $x$ and $y$ the initial numbers of elements to erase from the left and right of $b_{pos}$. We know that $x \ge d$ and $y \ge d$, and we want to make both of them equal to $d$. Let $x' = x - d$ and $y' = y - d$. We can think that we have $x'$ free elements to erase from the left and $y'$ from the right. While $x' + y' \ge k$, let's take any $k$ free elements that should be erased and erase $k - 1$ of them. Then we get the situation $0 \le x' < k$, $0 \le y' < k$ and $x' + y' < k$. Since $(n - m)$ is divisible by $(k - 1)$, either $x' + y' = 0$ (and we can immediately perform the last step) or $x' + y' = k - 1$. Let's look at what we can do in the latter case. We should take one of the "reserved" elements to participate in erasing the last $x' + y'$ free elements, but it shouldn't break the situation with $d$ lower elements, $b_{pos}$ and $d$ greater elements. If $x' \ge y'$, let's take one extra element which is lower than $b_{pos}$; then after erasing, the remaining median will also be lower than $b_{pos}$.
If $x' < y'$, let's take one extra element greater than $b_{pos}$; then the remaining median will also be greater than $b_{pos}$. In the end, we choose $d$ elements lower than $b_{pos}$, the element $b_{pos}$ itself, and $d$ elements greater than $b_{pos}$. We erase them except their median $b_{pos}$ and get the desired array $b$.
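The YES/NO criterion itself can be checked in linear time; a Python sketch (function name and structure are mine):

```python
def can_obtain(n, k, b):
    """Criterion check: n - m must be divisible by k - 1, and some element of b
    must have at least (k - 1) // 2 erased elements on each side (a sketch)."""
    m = len(b)
    if (n - m) % (k - 1) != 0:
        return False
    if n == m:
        return True                   # nothing has to be erased
    d = (k - 1) // 2
    kept = set(b)
    erased_below = 0                  # erased values smaller than the current v
    for v in range(1, n + 1):
        if v in kept:
            erased_above = (n - m) - erased_below
            if erased_below >= d and erased_above >= d:
                return True           # v can serve as b_pos
        else:
            erased_below += 1
    return False
```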
[ "constructive algorithms", "greedy", "math" ]
2,200
null
1468
I
Plane Tiling
You are given five integers $n$, $dx_1$, $dy_1$, $dx_2$ and $dy_2$. You have to select $n$ distinct pairs of integers $(x_i, y_i)$ in such a way that, for every possible pair of integers $(x, y)$, there exists exactly one triple of integers $(a, b, i)$ meeting the following constraints: \begin{center} $ \begin{cases} x \, = \, x_i + a \cdot dx_1 + b \cdot dx_2, \\ y \, = \, y_i + a \cdot dy_1 + b \cdot dy_2. \end{cases} $ \end{center}
One of the solutions is to consider an infinite 2-dimensional plane and draw a grid with lines parallel to the vectors ($dx_1$, $dy_1$) and ($dx_2$, $dy_2$). By doing so we will get a tiling of the plane with parallelograms, one of which will have corners at ($0$, $0$), ($dx_1$, $dy_1$), ($dx_1 + dx_2$, $dy_1 + dy_2$), ($dx_2$, $dy_2$). All the parallelograms have the same shape and all of them can be obtained by translating one of them by the vector ($a \cdot dx_1 + b \cdot dx_2$, $a \cdot dy_1 + b \cdot dy_2$), where $a$ and $b$ run over all possible integer pairs. Thus, if we "rasterize" one of them, we will get a correct solution. To do so, we can represent the parallelogram as two triangles and rasterize each of them individually. To rasterize a triangle, we need to select those cells whose center is either inside of it, or located on the "top" or "left" edge of it. A "top" edge is an edge that is perfectly horizontal and whose defining vertices are above the third one. A "left" edge is an edge that is going up if we traverse the triangle in clockwise order. This way, no cell is going to be selected twice in case we "rasterize" two triangles sharing an edge, and since we are tiling the plane with the given parallelograms, the rasterized cells are exactly our solution. If the vectors ($dx_1$, $dy_1$) and ($dx_2$, $dy_2$) are collinear or one of them is zero, the answer is "NO". There is also another solution. First of all we have to calculate $d$ - the absolute value of the determinant of the matrix formed by the given vectors: $d = |det((dx_1, dy_1), (dx_2, dy_2))| = |dx_1 * dy_2 - dx_2 * dy_1|$. If the given value $n$ is not equal to $d$ then there is no solution; in particular, if $d=0$ there are no required $n$ pairs of integers. Now let $n = d$ and $d_x = gcd(dx_1, dx_2)$, $d_y = gcd(dy_1, dy_2)$. Let's go to the same problem with smaller integers - we divide $dx_1$, $dx_2$ by $d_x$ and divide $dy_1$, $dy_2$ by $d_y$.
Also we define $m = n/(d_x \cdot d_y)$ ($n = d$, so it is divisible by $d_x \cdot d_y$). So firstly we need to find such $m$ points $(x_i, y_i)$ that all the values $(x_i + a \cdot dx_1 + b \cdot dx_2, y_i + a \cdot dy_1 + b \cdot dy_2)$ are different. This is enough for a solution, because we still have $m$ equal to the absolute value of the determinant of the new matrix. It turns out that it is easy to find such a set of points. In particular, we may choose the points $(0,0), (1,0), \cdots, (m-1, 0)$, i.e. $x_i=i$, $y_i=0$. Let's prove that such a set is correct. Assume that for some non-zero pair $(a, b)$ and for some $j$ we also have one of these points: $x_i$ = $x_j + a \cdot dx_1 + b \cdot dx_2$ and $y_i$ = $y_j + a \cdot dy_1 + b \cdot dy_2$. Considering $y_i=y_j=0$, we have $a \cdot dy_1 + b \cdot dy_2 = 0$. Since $dy_1$ and $dy_2$ are coprime (we have divided them by $d_y$), $a = k \cdot dy_2$, $b = -k\cdot dy_1$ for some integer $k$. If we use this in the equation for $x$, we will have: $x_i - x_j$ = $k \cdot dy_2 \cdot dx_1 - k \cdot dy_1 \cdot dx_2$. I.e. $x_i - x_j$ = $k \cdot (dx_1 \cdot dy_2 - dx_2 \cdot dy_1)$ = $\pm k \cdot m$. Also $-m < x_i - x_j < m$, so this equation has no solution for integer non-zero $k$. Thus this set of points is correct. Finally, having these $m$ points, we have to create an $n$-point solution for the original problem. Obviously we need to multiply the current answer coordinates by $d_x$ and $d_y$. Then for each of these points, for each $0 \leq r_x < d_x$ and for each $0 \leq r_y < d_y$ we need to print a new point. So, the answer is $(i \cdot d_x + r_x, r_y)$ for each $0 \leq i < m$, $0 \leq r_x < d_x$, $0 \leq r_y < d_y$.
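The construction from the second (determinant-based) solution can be sketched in Python. This is a hedged sketch, not a full reference implementation: it relies on the correctness proof above and returns `None` when the answer is "NO". Note that Python's `math.gcd` already ignores signs:

```python
from math import gcd

def tiling_points(n, dx1, dy1, dx2, dy2):
    """Return n distinct representative points, or None if no solution exists."""
    d = abs(dx1 * dy2 - dx2 * dy1)   # |det| of the vector matrix
    if d == 0 or d != n:
        return None                   # covers collinear / zero vectors as well
    gx = gcd(dx1, dx2)                # d != 0 guarantees gx, gy > 0
    gy = gcd(dy1, dy2)
    m = n // (gx * gy)
    # points (i, 0) for the reduced problem, scaled by gx and shifted by offsets
    return [(i * gx + rx, ry)
            for i in range(m)
            for rx in range(gx)
            for ry in range(gy)]
```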
[ "geometry", "implementation", "math" ]
2,500
null
1468
J
Road Reform
There are $n$ cities and $m$ bidirectional roads in Berland. The $i$-th road connects the cities $x_i$ and $y_i$, and has the speed limit $s_i$. The road network allows everyone to get from any city to any other city. The Berland Transport Ministry is planning a road reform. First of all, maintaining all $m$ roads is too costly, so $m - (n - 1)$ roads will be demolished in such a way that the remaining $(n - 1)$ roads still allow to get to any city from any other city. Formally, the remaining roads should represent an undirected tree. Secondly, the speed limits on the remaining roads might be changed. The changes will be done sequentially, each change is either increasing the speed limit on some road by $1$, or decreasing it by $1$. Since changing the speed limit requires a lot of work, the Ministry wants to minimize the number of changes. The goal of the Ministry is to have a road network of $(n - 1)$ roads with the maximum speed limit over all roads equal to exactly $k$. They assigned you the task of calculating the minimum number of speed limit changes they have to perform so the road network meets their requirements. For example, suppose the initial map of Berland looks like that, and $k = 7$: Then one of the optimal courses of action is to demolish the roads $1$–$4$ and $3$–$4$, and then decrease the speed limit on the road $2$–$3$ by $1$, so the resulting road network looks like that:
We will consider two cases: the road network without roads having $s_i > k$ is either connected or not. Checking that may be done with the help of DFS, BFS, DSU, or any other graph algorithm/data structure that allows checking if some graph is connected. If the network without roads having $s_i > k$ is not connected, then we have to take several roads with $s_i > k$ into the answer. Since their speed limit is too high, we have to decrease the speed limit on them to $k$. Then the required number of changes is $\sum \limits_{i \in R} \max(0, s_i - k)$, where $R$ is the set of roads that are added to the answer. To minimize this sum, we can set each road's weight to $\max(0, s_i - k)$ and find the minimum spanning tree of the resulting graph. Unfortunately, this approach doesn't work in the other case - we may build a road network having the maximum speed limit less than $k$, not exactly $k$. We have to choose a road with the current speed limit as close to $k$ as possible, i. e. the one with the minimum value of $|s_i - k|$. After applying $|s_i - k|$ changes to it, we can choose $n - 2$ roads having $s_i \le k$ to form a spanning tree with the chosen road (it is always possible due to the properties of the spanning tree: if we have a spanning tree and we want to add an edge of the graph to it, we can always find another edge to discard). So, in this case, the answer is $\min \limits_{i = 1}^{m} |s_i - k|$.
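Both cases can be handled by one Kruskal run with edge weights $\max(0, s_i - k)$: if the MST cost is positive, the $s_i \le k$ subgraph is disconnected and that cost is the answer; otherwise the answer is $\min |s_i - k|$. A Python sketch (my own naming; edges are `(x, y, s)` with 0-based vertices, and the DSU is minimal):

```python
class DSU:
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, v):
        while self.p[v] != v:          # path halving
            self.p[v] = self.p[self.p[v]]
            v = self.p[v]
        return v
    def union(self, a, b):
        a, b = self.find(a), self.find(b)
        if a == b:
            return False
        self.p[a] = b
        return True

def min_changes(n, k, edges):
    """Minimum number of speed limit changes (a sketch of the editorial idea)."""
    cost = 0
    dsu = DSU(n)
    for x, y, s in sorted(edges, key=lambda e: max(0, e[2] - k)):
        if dsu.union(x, y):
            cost += max(0, s - k)      # only roads with s > k contribute
    if cost > 0:
        return cost                    # the <= k subgraph was disconnected
    return min(abs(s - k) for _, _, s in edges)
```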
[ "dsu", "graphs", "greedy" ]
1,800
null
1468
K
The Robot
There is a robot on a checkered field that is endless in all directions. Initially, the robot is located in the cell with coordinates $(0, 0)$. He will execute commands which are described by a string of capital Latin letters 'L', 'R', 'D', 'U'. When a command is executed, the robot simply moves in the corresponding direction: - 'L': one cell to the left (the $x$-coordinate of the current cell decreases by $1$); - 'R': one cell to the right (the $x$-coordinate of the current cell is increased by $1$); - 'D': one cell down (the $y$-coordinate of the current cell decreases by $1$); - 'U': one cell up (the $y$-coordinate of the current cell is increased by $1$). Your task is to put an obstacle in one cell of the field so that after executing the commands, the robot will return to the original cell of its path $(0, 0)$. Of course, an obstacle cannot be placed in the starting cell $(0, 0)$. It is guaranteed that if the obstacle is not placed, then the robot will not return to the starting cell. An obstacle affects the movement of the robot in the following way: if it tries to go in a certain direction, and there is an obstacle, then it simply remains in place (the obstacle also remains, that is, it does not disappear). Find any such cell of the field (other than $(0, 0)$) that if you put an obstacle there, the robot will return to the cell $(0, 0)$ after the execution of all commands. If there is no solution, then report it.
It is obvious that a cell that can be the answer must belong to the original path of the robot. Thus, there are at most $n$ candidate cells. In order to make sure that a candidate cell is indeed the answer, it is necessary to simulate the movement of the robot, taking into account this cell as an obstacle. No more than $n$ actions will be spent on one such simulation. Therefore, to check all $n$ candidates, the total number of actions spent will not exceed $n^2$, which fits into the requirements for the running time (time limit) of a solution.
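A direct implementation of this $O(n^2)$ idea in Python (a sketch with my own naming; it returns `None` when no suitable cell exists):

```python
def find_obstacle(commands):
    """Try every cell of the obstacle-free path as the obstacle (a sketch)."""
    step = {'L': (-1, 0), 'R': (1, 0), 'D': (0, -1), 'U': (0, 1)}

    # candidate cells: every cell visited on the obstacle-free path
    candidates = []
    x = y = 0
    for c in commands:
        dx, dy = step[c]
        x, y = x + dx, y + dy
        candidates.append((x, y))

    for obstacle in candidates:
        if obstacle == (0, 0):
            continue                      # the obstacle cannot be placed here
        x = y = 0
        for c in commands:
            dx, dy = step[c]
            if (x + dx, y + dy) != obstacle:   # blocked moves are skipped
                x, y = x + dx, y + dy
        if (x, y) == (0, 0):
            return obstacle
    return None
```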
[ "brute force", "implementation" ]
1,600
null
1468
L
Prime Divisors Selection
Suppose you have a sequence of $k$ integers $A = [a_1, a_2, \dots , a_k]$ where each $a_i \geq 2$. A sequence of \textbf{prime} integers $P = [p_1, p_2, \dots, p_k]$ is called suitable for the sequence $A$ if $a_1$ is divisible by $p_1$, $a_2$ is divisible by $p_2$ and so on. A sequence of \textbf{prime} integers $P$ is called friendly if there are no unique integers in this sequence. A sequence $A$ is called ideal, if each sequence $P$ that is suitable for $A$ is friendly as well (i. e. there is no sequence $P$ that is suitable for $A$, but not friendly). For example, the sequence $[2, 4, 16]$ is ideal, while the sequence $[2, 4, 6]$ is not ideal (there exists a sequence $P = [2, 2, 3]$ which is suitable for $A$, but not friendly). You are given $n$ different integers $x_1$, $x_2$, ..., $x_n$. You have to choose \textbf{exactly} $k$ of them in such a way that they form an ideal sequence, or report that it is impossible. Note that no integer can be chosen more than once.
Let's find, for each prime integer $p \leq 10^{9}$, the number of integers of the form $p^{q}$ among the given integers. If there is at most one such integer, this prime $p$ is not interesting for us: if we add one such integer to the answer, we may always select exactly one divisor $p$ in a suitable sequence, so it will not be friendly. Otherwise, let's call such $p$ important and find a group $X_p$ (with some size $k_i > 1$) - all integers $p^q$ in the given set. So we have several ($t$) such groups with sizes $k_1$, $k_2$, ..., $k_t$. Now there are some different cases. Case 1: the required size $k \leq k_1 + k_2 + \dots + k_t$. If $k$ is odd and $k_i = 2$ for each $i$, we have to find some integer $y$ that has the minimum possible number of different important primes in its factorization (and does not contain primes that are not important). If there is no such $y$, obviously, there is no solution. Now let there be a $y$ that has $u$ different primes in its factorization. If $u \leq (k-1)/2$, we may take all $u$ groups necessary for this integer $y$ and add some other groups $X_p$ in order to have exactly $k$ numbers in the result (if $u < (k-1)/2$). If $k$ is even and $k_i = 2$ for each $i$, we may just take any $k/2$ groups of 2 integers. If $k_j > 2$ for some $j$, it is easy to check that we may always find an ideal sequence. Let's start to add the groups with the maximum sizes while it is possible. Now we have a total size $k_1 \leq k$ and $q = k - k_1$ of 'empty' space. If $q \geq 2$, we may just add $q$ integers from the next group. If $q = 1$, we may remove one integer from the greatest group (because its size is at least 3) and add 2 integers from the next group. Case 2: $k > k_1 + k_2 + \dots + k_t$. First of all, let's take all the groups $X_p$ into the answer. Now each of the remaining integers either is representable as a product of important primes, or not. If it is representable, then we can add it to the answer and the set will remain ideal.
Otherwise, if we add such a number, we can choose any prime divisor of it that is not important, and the set will not be ideal. So, we just have to check whether there are enough integers that are representable as a product of important primes and add any $k - k_1 - k_2 - \dots - k_t$ of them to the answer. If there are not enough such integers, there is no solution. How to find all important primes? We are interested only in groups $X_p = \{p^q\}$ with sizes at least 2. So each such group either has an integer $p^2$ or an integer $p^q$ with $q \geq 3$. To find all the numbers in the first case, we may just check for each given integer whether it is $p^2$ for some prime $p$. In the second case we have $p \leq 10^6$, so we may find all the possible powers for each prime $p \leq 10^6$.
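The check "is $x$ equal to $p^2$ for some prime $p$" from the last paragraph can be done with an integer square root plus trial division; for $x$ up to $10^{18}$ the trial division runs only up to $x^{1/4} \approx 31623$ steps. A sketch (helper names are mine):

```python
from math import isqrt

def is_prime(p):
    """Trial-division primality test, sufficient for p up to ~10^9."""
    if p < 2:
        return False
    i = 2
    while i * i <= p:
        if p % i == 0:
            return False
        i += 1
    return True

def is_prime_square(x):
    """True iff x = p^2 for some prime p."""
    r = isqrt(x)
    return r * r == x and is_prime(r)
```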
[ "binary search", "greedy", "math", "number theory" ]
2,700
null
1468
M
Similar Sets
You are given $n$ sets of integers. The $i$-th set contains $k_i$ integers. Two sets are called similar if they share at least two common elements, i. e. there exist two integers $x$ and $y$ such that $x \ne y$, and they both belong to each of the two sets. Your task is to find two similar sets among the given ones, or report that there is no such pair of sets.
Let's define $D = \sqrt{m}$, where $m$ is the total number of integers. Now let's call a set of integers 'large' if its size is at least $D$ and 'small' otherwise. Firstly, let's check whether there is a large set that has two common integers with some other (small or large) set. Let's try to find a similar set for each large set $S$ separately. For each integer $x$ in $S$ we may set a flag $w[x] = 1$. Now for each other set $T$ of size $k$ we may calculate the number of integers $x$ in $T$ that satisfy the condition $w[x] = 1$ (i.e. the number of common integers with $S$) using $O(k)$ operations. So, we will use $O(m)$ operations for each large set $S$. Since $D = \sqrt{m}$, we will have at most $m/D = D$ large sets and will perform $O(D \cdot m) = O(m^{3/2})$ operations. Now we have to check whether there are two similar small sets. For each small set $S$ let's find all pairs of integers $(x, y)$, $x < y$, in this set. If there are two equal pairs in different sets, those sets are similar. To find such pairs we may just create a separate array for each integer $x$ and add all the values $y$ to it. It is possible to check whether there are two equal integers in an array in $O(k)$ operations, where $k$ is the size of this array. Since the size of each small set is at most $D$, we will have at most $D \cdot k$ pairs for each set of size $k$ and at most $D \cdot m$ pairs in total. So we will perform $O(D \cdot m) = O(m^{3/2})$ operations.
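The small-set half of the algorithm can be sketched in Python (illustrative names; instead of per-integer arrays, a single map stores for every pair the index of the first small set containing it, which is equivalent for a sketch):

```python
def find_similar_small(sets):
    """Return indices of two similar small sets, or None (a sketch).
    Sets are given as lists of distinct integers."""
    seen = {}                            # pair (x, y) -> index of its first set
    for idx, s in enumerate(sets):
        vals = sorted(s)                 # enumerate pairs with x < y
        for i in range(len(vals)):
            for j in range(i + 1, len(vals)):
                key = (vals[i], vals[j])
                if key in seen:          # the pair repeats in an earlier set
                    return seen[key], idx
                seen[key] = idx
    return None
```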
[ "data structures", "graphs", "implementation" ]
2,300
null
1469
A
Regular Bracket Sequence
A bracket sequence is called regular if it is possible to obtain correct arithmetic expression by inserting characters + and 1 into this sequence. For example, sequences (())(), () and (()(())) are regular, while )(, (() and (()))( are not. Let's call a regular bracket sequence "RBS". You are given a sequence $s$ of $n$ characters (, ), and/or ?. \textbf{There is exactly one character ( and exactly one character )} in this sequence. You have to replace every character ? with either ) or ( (different characters ? can be replaced with different brackets). \textbf{You cannot reorder the characters, remove them, insert other characters, and each ? must be replaced}. Determine if it is possible to obtain an RBS after these replacements.
There are two solutions to this problem: casework and greedy. The greedy solution goes as follows: the number of opening brackets in an RBS should be exactly $\frac{|s|}{2}$, and if there is a closing bracket before an opening bracket, it's optimal to swap them, if possible. So, we should replace the first $\frac{|s|}{2} - 1$ question marks with opening brackets and the other question marks with closing brackets, and if an answer exists, this method will find it. All that's left is to check that the resulting sequence is an RBS. The casework solution goes as follows: first of all, each RBS has even length, so if $|s|$ is odd, there is no answer. Furthermore, an RBS always begins with an opening bracket and always ends with a closing bracket, so if the first character is a closing bracket or the last character is an opening bracket, there is no answer. Since there is exactly one opening bracket and exactly one closing bracket in the original sequence, these three constraints are enough: if the opening bracket is before the closing bracket, then they balance out, and all other characters can be replaced in such a way that they form an RBS of length $|s| - 2$. If the opening bracket is after the closing bracket, then the first and the last characters are question marks (since the first character is not a closing bracket, and the last one is not an opening bracket). We should replace the first character with an opening bracket and the last character with a closing bracket, so we get four characters (two opening and two closing brackets) which balance themselves out. All other question marks can be replaced in such a way that they form an RBS of length $|s| - 4$. So, all we have to check is that $|s|$ is even, the first character is not a closing bracket, and the last character is not an opening bracket.
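The casework solution reduces to three constant-time checks. A minimal sketch (the helper name `casework` is ours; it relies on the statement's guarantee of exactly one `(` and one `)`):

```python
def casework(s):
    # Assumes the statement's guarantee: exactly one '(' and exactly one ')',
    # all other characters are '?'.
    n = len(s)
    # |s| must be even, the first character must not be ')',
    # and the last character must not be '('.
    return n % 2 == 0 and s[0] != ')' and s[-1] != '('
```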
[ "constructive algorithms", "greedy" ]
1,000
t = int(input())
for i in range(t):
    s = input()
    n = len(s)
    a = list(s)
    # n // 2 opening brackets are needed in total; one '(' is already present.
    cnt = n // 2 - 1
    for j in range(n):
        if a[j] == '?':
            if cnt > 0:
                cnt -= 1
                a[j] = '('
            else:
                a[j] = ')'
    # Check that the resulting sequence is an RBS: total balance is zero
    # and no prefix has negative balance.
    bal = 0
    minbal = 0
    for j in range(n):
        if a[j] == '(':
            bal += 1
        else:
            bal -= 1
        minbal = min(bal, minbal)
    print('YES' if bal == 0 and minbal >= 0 else 'NO')
1469
B
Red and Blue
Monocarp had a sequence $a$ consisting of $n + m$ integers $a_1, a_2, \dots, a_{n + m}$. He painted the elements into two colors, red and blue; $n$ elements were painted red, all other $m$ elements were painted blue. After painting the elements, he has written two sequences $r_1, r_2, \dots, r_n$ and $b_1, b_2, \dots, b_m$. The sequence $r$ consisted of all red elements of $a$ \textbf{in the order they appeared in $a$}; similarly, the sequence $b$ consisted of all blue elements of $a$ \textbf{in the order they appeared in $a$ as well}. Unfortunately, the original sequence was lost, and Monocarp only has the sequences $r$ and $b$. He wants to restore the original sequence. If there are multiple ways to restore it, he wants to choose the one that maximizes the value of $$f(a) = \max(0, a_1, (a_1 + a_2), (a_1 + a_2 + a_3), \dots, (a_1 + a_2 + a_3 + \dots + a_{n + m}))$$ Help Monocarp to calculate the maximum possible value of $f(a)$.
Denote $p_i$ as the sum of the first $i$ elements of $r$, and $q_j$ as the sum of the first $j$ elements of $b$. These values can be calculated in $O(n + m)$ with prefix sums. The first solution is to use dynamic programming. Let $dp_{i, j}$ be the maximum value of $f(a)$ if we placed the first $i$ elements of $r$ and the first $j$ elements of $b$. Transitions can be performed in $O(1)$: we either place an element from $r$ (then we go to $dp_{i + 1, j}$ and update it with $\max(dp_{i, j}, p_{i + 1} + q_j)$), or place an element from $b$ (then we go to $dp_{i, j + 1}$ and update it with $\max(dp_{i, j}, p_i + q_{j + 1})$). The answer is stored in $dp_{n, m}$, and this solution works in $O(nm)$. The second solution: observe that the sum of several first elements of $a$ is the sum of several first elements of $r$ plus the sum of several first elements of $b$. So each prefix sum of $a$ (and the answer itself) is not greater than $\max_{i = 0}^{n} p_i + \max_{j = 0}^{m} q_j$. It's easy to show how to obtain exactly this answer: let $k$ be the value of $i$ such that $p_i$ is maximized, and $l$ be the value of $j$ such that $q_j$ is maximized. Let's place the first $k$ elements of $r$, then the first $l$ elements of $b$ (so the current sum is exactly $\max_{i = 0}^{n} p_i + \max_{j = 0}^{m} q_j$), and place all of the remaining elements in any possible order. So, the answer is $\max_{i = 0}^{n} p_i + \max_{j = 0}^{m} q_j$. This solution works in $O(n + m)$. 
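The second, $O(n + m)$ solution can be sketched as follows (the function name `max_f` is illustrative; each maximum is taken over all prefixes, including the empty one with sum $0$):

```python
def max_f(r, b):
    # f(a) equals (best prefix sum of r) + (best prefix sum of b),
    # since every prefix of a splits into a prefix of r and a prefix of b.
    def best_prefix(xs):
        best = cur = 0  # the empty prefix contributes 0
        for x in xs:
            cur += x
            best = max(best, cur)
        return best
    return best_prefix(r) + best_prefix(b)
```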
[ "dp", "greedy" ]
1,000
for _ in range(int(input())): n = int(input()) a = [int(x) for x in input().split()] m = int(input()) b = [int(x) for x in input().split()] dp = [[-10**9 for j in range(m + 1)] for i in range(n + 1)] dp[0][0] = 0 ans = 0 for i in range(n + 1): for j in range(m + 1): if i < n: dp[i + 1][j] = max(dp[i + 1][j], dp[i][j] + a[i]) if j < m: dp[i][j + 1] = max(dp[i][j + 1], dp[i][j] + b[j]) ans = max(ans, dp[i][j]) print(ans)
1469
C
Building a Fence
You want to build a fence that will consist of $n$ equal sections. All sections have a width equal to $1$ and height equal to $k$. You will place all sections in one line side by side. Unfortunately, the ground beneath the fence is not flat. For simplicity, you can think that the ground level under the $i$-th section is equal to $h_i$. You should follow several rules to build the fence: - the consecutive sections should have a common side of length at least $1$; - the first and the last sections should stand on the corresponding ground levels; - the sections between may be either on the ground level or higher, but not higher than $k - 1$ from the ground level $h_i$ (the height should be an integer); \begin{center} {\small One of possible fences (blue color) for the first test case} \end{center} Is it possible to build a fence that meets all rules?
Let's place the sections from left to right. Note that for the $i$-th section, all valid heights $x$ (heights for which it's possible to choose heights for all sections $1 \dots i$ meeting all rules and finishing with the height of section $i$ equal to $x$) form a segment. It's not hard to prove this by induction. For the first section, the valid segment is $[h_1, h_1]$. The induction step: if the valid segment for section $i - 1$ is $[l_{i - 1}, r_{i - 1}]$, then the valid $x_i$-s for section $i$ form the segment $[\max(l_{i - 1} - (k - 1), h_i), \min(r_{i - 1} + (k - 1), h_i + (k - 1))]$, since for each such $x_i$ you can find at least one $x_{i - 1}$ in $[l_{i - 1}, r_{i - 1}]$ that doesn't break the first rule. If for some $i$ this segment is empty, or if we can't fulfill the second rule for the last section ($h_n \not\in [l_{n - 1} - (k - 1), r_{n - 1} + (k - 1)]$), then there is no answer; otherwise, an answer always exists. As a result, to solve the problem, you should just maintain the segment of valid $x_i$ (using the formula above) while iterating over $i$. The complexity is $O(n)$.
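The segment maintenance can be sketched in Python as well (a hypothetical helper `can_build`, mirroring the formula above):

```python
def can_build(n, k, h):
    # [lo, hi] is the segment of valid heights for the current section.
    lo = hi = h[0]
    for i in range(1, n):
        # Overlap with the previous section by at least 1 unit,
        # and stay within [h[i], h[i] + k - 1].
        lo = max(lo - (k - 1), h[i])
        hi = min(hi + (k - 1), h[i] + (k - 1))
        if lo > hi:
            return False
    # The last section must stand exactly on its ground level.
    return lo <= h[-1] <= hi
```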
[ "dp", "greedy", "implementation", "two pointers" ]
1,600
fun main() {
    repeat(readLine()!!.toInt()) {
        val (n, k) = readLine()!!.split(' ').map { it.toInt() }
        val h = readLine()!!.split(' ').map { it.toInt() }
        // [mn, mx] is the segment of valid heights for the current section
        var mn = h[0]
        var mx = h[0]
        var ok = true
        for (i in 1 until n) {
            mn = maxOf(mn - k + 1, h[i])
            mx = minOf(mx + k - 1, h[i] + k - 1)
            if (mn > mx) {
                ok = false
                break
            }
        }
        // the last section must stand exactly on its ground level
        if (h[n - 1] !in mn..mx) ok = false
        println(if (ok) "YES" else "NO")
    }
}