Dataset columns (with value ranges):
contest_id — string, length 1 to 4
index — string, 43 classes
title — string, length 2 to 63
statement — string, length 51 to 4.24k
tutorial — string, length 19 to 20.4k
tags — list, length 0 to 11
rating — int64, 800 to 3.5k
code — string, length 46 to 29.6k
1081
G
Mergesort Strikes Back
Chouti thought about his very first days in competitive programming. When he had just learned to write merge sort, he thought that the merge sort is too slow, so he restricted the maximum depth of recursion and modified the merge sort to the following: Chouti found his idea dumb since obviously, this "merge sort" sometimes cannot sort the array correctly. However, Chouti is now starting to think of how good this "merge sort" is. Particularly, Chouti wants to know for a random permutation $a$ of $1, 2, \ldots, n$ the expected number of inversions after calling MergeSort(a, 1, n, k). It can be proved that the expected number is rational. For the given prime $q$, suppose the answer can be denoted by $\frac{u}{d}$ where $\gcd(u,d)=1$, you need to output an integer $r$ satisfying $0 \le r<q$ and $rd \equiv u \pmod q$. It can be proved that such $r$ exists and is unique.
How does "merge" work? What is this "mergesort" actually doing? First, consider how "merge" behaves on two arbitrary arrays. Partition each array into blocks: find all elements that are larger than every element before them and use these elements as the starts of the blocks. Then "merging" simply sorts these blocks by their starting elements. (This is because once the starting element of a block is placed into the result array, the remaining elements of the block are appended immediately afterwards, since they are smaller than the starting element.) Now back to the merge sort. What this "mergesort" does is essentially divide the array into several segments (i.e. the ranges reaching the depth threshold). Within every segment the relative order stays the same while merging, so the expected number of inversions a segment of length $l$ contributes is $\frac{l(l-1)}{4}$. Again, supposing every segment is divided into blocks as above, we are just sorting all those blocks together by their starting elements. Let's consider inversions formed by two blocks from two different segments. Say the elements forming the inversion are initially the $i$-th and $j$-th of those two segments (from left to right); then the blocks they belong to start at the maximum of the first $i$ elements and the maximum of the first $j$ elements of those two segments, respectively. Consider the maximum among these $i + j$ numbers: if it is one of these two elements, the inversion cannot be formed. The probability that the maximum is neither the $i$-th nor the $j$-th element is $\frac{i+j-2}{i+j}$, and conditioned on that, there is a $50\%$ chance that the order of the two elements differs from the order of the two maximums, because the two elements' order can be flipped by swapping them while leaving the maximums' order unchanged. So the probability of forming an inversion is $\frac{i+j-2}{2(i+j)}=\frac{1}{2}-\frac{1}{i+j}$.
Enumerating two blocks and $i$, and precalculating the partial sums of reciprocals, we can calculate the expected number of inversions formed by these two blocks in $O(n)$ time. However, there might be $O(n)$ blocks. But this problem has one more property: the segments are of at most two different lengths. By considering each pair of equal lengths only once, we can get an $O(n)$ or $O(n\log n)$ solution. To see the property, do an induction on $k$. Suppose for $k - 1$ we only have segments of lengths $a$ and $a + 1$. If $2 \mid a$, let $a = 2t$; from $2t$ and $2t + 1$ only segments of lengths $t$ and $t + 1$ will be formed. Otherwise, let $a = 2t + 1$; from $2t + 1$ and $2t + 2$ still only segments of lengths $t$ and $t + 1$ will be formed. Bonus: solve this problem when the segments' lengths can be arbitrary and $n \le 10^{5}$.
[ "math", "probabilities" ]
3,200
#include <bits/stdc++.h> using namespace std; typedef long long ll; #define fi first #define se second #define SZ 123456 int n,k,MOD,ts[SZ],tn=0; ll inv[SZ],invs[SZ]; ll qp(ll a,ll b) { ll x=1; a%=MOD; while(b) { if(b&1) x=x*a%MOD; a=a*a%MOD; b>>=1; } return x; } void go(int l,int r,int h) { if(h<=1||l==r) ts[++tn]=r-l+1; else { int m=(l+r)>>1; go(l,m,h-1); go(m+1,r,h-1); } } map<ll,int> cnt; ll calc(int a,int b) { ll ans=a*(ll)b%MOD; for(int i=1;i<=a;++i) ans-=(invs[i+b]-invs[i])*2LL,ans%=MOD; return ans; } int main() { scanf("%d%d%d",&n,&k,&MOD); for(int i=1;i<=max(n,2);++i) inv[i]=qp(i,MOD-2), invs[i]=(invs[i-1]+inv[i])%MOD; go(1,n,k); ll ans=0; for(int i=1;i<=tn;++i) ++cnt[ts[i]],ans+=ts[i]*(ll)(ts[i]-1)/2,ans%=MOD; for(auto t:cnt) if(t.se>=2) ans+=calc(t.fi,t.fi)*((ll)t.se*(t.se-1)/2%MOD),ans%=MOD; for(auto a:cnt) for(auto b:cnt) if(a.fi<b.fi) ans+=calc(a.fi,b.fi)*((ll)a.se*b.se%MOD),ans%=MOD; ans=ans%MOD*inv[2]%MOD; ans=(ans%MOD+MOD)%MOD; printf("%d\n",int(ans)); }
1081
H
Palindromic Magic
After learning some fancy algorithms about palindromes, Chouti found palindromes very interesting, so he wants to challenge you with this problem. Chouti has got two strings $A$ and $B$. Since he likes palindromes, he would like to pick $a$ as some non-empty palindromic substring of $A$ and $b$ as some non-empty palindromic substring of $B$. Concatenating them, he will get string $ab$. Chouti thinks strings he could get this way are interesting, so he wants to know how many different strings he can get.
What will be counted twice? Warning: this editorial is probably new and arcane for those who are not familiar with this field. If you just want a quick idea of the solution, you can skip all the proofs (they're wrapped in spoiler tags). Some symbols: all indices of strings start from zero. $x^{R}$ stands for the reverse of string $x$ (e.g. $'abc'^{R} = 'cba'$), $xy$ stands for the concatenation of $x$ and $y$ (e.g. $x = 'a', y = 'b', xy = 'ab'$), $x^{a}$ stands for the concatenation of $a$ copies of $x$ (e.g. $x = 'ab'$, $x^{2} = 'abab'$), $x[a, b]$ stands for the substring of $x$ from the $a$-th through the $b$-th character (e.g. $'abc'[1, 2] = 'bc'$). Border of $x$: a string which is a common prefix and suffix of $x$. Formally, $x$ has a border of length $t$ ($x[0, t - 1]$) iff $x_{i} = x_{|x| - t + i}$ ($i\in[0,t-1]$). Period of $x$: $x$ has a period of length $t$ iff $x_{i} = x_{i + t}$ ($0 \le i < |x| - t$). When $t \mid |x|$ we also call $t$ a full period. From the definitions it's easy to see that $x$ has a period of length $t$ iff $x$ has a border of length $|x| - t$ ($t\in\left[1,\left|x\right|\right]$). Power: $x$ is a power iff the minimum full period of $x$ isn't $|x|$; e.g. abab is a power. Lemma 1 (weak periodicity lemma): if $p$ and $q$ are periods of $s$ and $p + q \le |s|$, then $\gcd(p, q)$ is also a period of $s$. Proof: suppose $p < q$ and let $d = q - p$. If $|s| - q \le i < |s| - d$, then $s_{i} = s_{i - p} = s_{i + d}$. If $0 \le i < |s| - q$, then $s_{i} = s_{i + q} = s_{i + q - p} = s_{i + d}$. So $d = q - p$ is also a period. Applying the Euclidean algorithm, we get that $\gcd(p, q)$ is a period. Lemma 2: let $S$ be the set of period lengths $\le |s| / 2$ of $s$; if $S$ is non-empty, then $\forall u\in S,\ \min S \mid u$. Proof: let $v = \min S$; since $v + u \le |s|$, $\gcd(v, u)$ is also a valid period, so $\gcd(v, u) \ge v$, hence $v \mid u$. Let $border(x)$ be the longest (non-self) border of $x$, e.g. $border('aba') = 'a'$, $border('ab') = \varepsilon$ (the empty string).
If $x$ is a palindrome, any palindromic prefix or suffix of it is a border of it. Therefore, its longest (non-self) palindromic prefix (suffix) is exactly $border(x)$. Proof: let $|x| = a$. $x$ has a border of length $b$ iff $x_{i} = x_{a - b + i}$ ($i\in[0,b-1]$), and $x$ has a palindromic prefix of length $b$ iff $x_{i} = x_{b - 1 - i}$ ($i\in[0,b-1]$). Since $x_{i} = x_{a - 1 - i}$, the two conditions are the same. If $S = pq$ where $p$ and $q$ are palindromic and non-empty, we call $(p, q)$ a palindromic decomposition of $S$. If $S = pq$ where $p$ and $q$ are palindromic and $q$ is non-empty ($p$ may be empty), we call $(p, q)$ a non-strict palindromic decomposition of $S$. Lemma 3: if $S = x_{1} x_{2} = y_{1} y_{2} = z_{1} z_{2}$, $|x_{1}| < |y_{1}| < |z_{1}|$, and $x_{2}, y_{1}, y_{2}, z_{1}$ are palindromic and non-empty, then $x_{1}$ and $z_{2}$ are also palindromic. Proof: let $z_{1} = y_{1} v$. $v^{R}$ is a suffix of $y_{2}$, hence also a suffix of $x_{2}$. So $v$ is a prefix of $x_{2}$, and then $x_{1} v$ is a prefix of $z_{1}$. Since $y_{1}$ is a palindromic prefix of $z_{1}$ and $z_{1} = y_{1} v$, $|v|$ is a period of $z_{1}$. Since $x_{1} v$ is a prefix of $z_{1}$, $|v|$ is also a period of $x_{1} v$. Let $t$ be some arbitrarily large number (you can think of it as $\infty$); then $x_{1}$ is a suffix of $v^{t}$. Since $v^{R}$ is a prefix of $z_{1}$, $x_{1}$ is a prefix of $(v^{R})^{t}$. So $x_{1}^{R}$ is a suffix of $v^{t}$, hence $x_{1} = x_{1}^{R}$, i.e. $x_{1}$ is palindromic. $z_{2}$ is palindromic by a symmetric argument. Lemma 4: suppose $S$ has some palindromic decomposition. Let the longest palindromic prefix of $S$ be $a$, $S = ax$, and the longest palindromic suffix of $S$ be $b$, $S = yb$. Then at least one of $x$ and $y$ is palindromic. Proof: if neither of them is palindromic, let $S = pq$ be a valid palindromic decomposition of $S$; then $S = yb = pq = ax$, and by Lemma 3 we get a contradiction. Lemma 5: if $S = p_{1} q_{1} = p_{2} q_{2}$ ($|p_{1}| < |p_{2}|$; $p_{1}, q_{1}, p_{2}, q_{2}$ palindromic; $q_{1}$ and $q_{2}$ non-empty), then $S$ is a power.
Proof: we prove this by showing that $\gcd(|p_{2}| - |p_{1}|, |S|)$ is a period. Let $|S| = n$ and $t = |p_{2}| - |p_{1}|$. Because $p_{1}$ is a palindromic prefix of the palindrome $p_{2}$, it is a border of $p_{2}$, so $t$ is a period of $p_{2}$. Similarly, $t$ is a period of $q_{1}$. Since $p_{2}$ and $q_{1}$ share a common part of length $t$ (namely $S[|p_{1}|, |p_{2}| - 1]$), $t$ is a period of the whole $S$. For $x\in[0,t)$, $s_{x} = s_{|p_{2}| - 1 - x} = s_{n - 1 + |p_{1}| - (|p_{2}| - 1 - x)} = s_{n - t + x}$ (the first two equalities hold because $p_{2}$ and $q_{1}$ are palindromic). So $n - t$ is also a period of $S$. Since $t + (n - t) = n$, $\gcd(t, n - t) = \gcd(t, n)$ is also a period of $S$ (weak periodicity lemma). Lemma 6: let $S = p_{1} q_{1} = p_{2} q_{2} = \ldots = p_{t} q_{t}$ be all non-strict palindromic decompositions of $S$, and let $h$ be the minimum full period of $S$. When $t \neq 0$, $t = |S| / h$. Proof: from Lemma 5 it's clear that $h = |S|$ iff $t = 1$, so in the following $t \ge 2$ is assumed. Let $\alpha = S[0, h - 1]$. Because $\alpha$ is not a power (otherwise $S$ would have a smaller full period), $\alpha$ has at most one non-strict palindromic decomposition (from Lemma 5). Let $S = pq$ be any non-strict palindromic decomposition; then $\max(|p|, |q|) \ge h$. When $|p| \ge h$, $\alpha = p[0, h - 1]$, so $\alpha[0, |p| \bmod h - 1] = p^{R}[0, |p| \bmod h - 1] = (\alpha[0, |p| \bmod h - 1])^{R}$, hence $\alpha[0, |p| \bmod h - 1]$ is palindromic. Similarly, $\alpha[|p| \bmod h, h - 1]$ is also palindromic. When $|q| \ge h$, similar arguments apply. Therefore $\alpha[0, |p| \bmod h - 1]$ and $\alpha[|p| \bmod h, h - 1]$ form a non-strict palindromic decomposition of $\alpha$. Let its only non-strict palindromic decomposition be $\alpha[0, g - 1]$ and $\alpha[g, h - 1]$. Then every $p_{i}$ must satisfy $|p_{i}| \bmod h = g$, so $t \le |S| / h$. Conversely, all of these $|S| / h$ decompositions can indeed be obtained.
Lemma 7: let $S = p_{1} q_{1} = p_{2} q_{2} = \ldots = p_{t} q_{t}$ be all non-strict palindromic decompositions of $S$ ($|p_{i}| < |p_{i + 1}|$). For every $i\in[1,t-1]$, at least one of $p_{i} = border(p_{i + 1})$ and $q_{i + 1} = border(q_{i})$ is true. Proof: instead of proving this directly, we first introduce a way to compute all decompositions. Let the longest palindromic prefix of $S$ be $a$ ($a \neq S$), $S = ax$, and the longest palindromic suffix of $S$ be $b$ (it may be $S$ itself), $S = yb$. If $x = b$, obviously $S = ab$ is the only way to decompose. If $S = pq$ with $p \neq a$, $q \neq b$ and $p, q$ palindromic, by Lemma 3 $x$ and $y$ are also palindromic. So if neither $x$ nor $y$ is palindromic, then $S$ can't be decomposed into two palindromes. If exactly one of $x$ and $y$ is palindromic, it gives the only way to decompose $S$. If both of them are palindromic, we then find the second-longest non-self palindromic prefix of $S$, call it $c$. Let $S = cz$; if $z$ is not palindromic or $c = y$, then $S = ax = yb$ are the only non-strict palindromic decompositions. Otherwise, we can find all palindromic suffixes of $S$ whose lengths lie between $|z|$ and $|b|$; their corresponding prefixes must also be palindromic (using Lemma 3 for $ax$ and $cz$), so $S = ax = cz = \ldots = yb$ (other palindromic suffixes and their corresponding prefixes are omitted). Back to the proof of Lemma 7: the only case left to prove is $S = ax = yb$. Suppose the claim is incorrect; let $p = border(a)$ and $s = border(b)$, so that $S = ax = pq = rs = yb$ ($|a| > |p| > |y|$, $|a| > |r| > |y|$, $p$ and $s$ are palindromic). Continuing the argument of Lemma 6, since $t = 2$, $S = \alpha^{2}$. If $|p| \ge |\alpha|$, then $q = \alpha[|p| \bmod h, h - 1]$, so $q$ would also be palindromic, a contradiction. Therefore $|p| < |\alpha|$ and, similarly, $|s| < |\alpha|$. Let $\alpha = pq' = r's$ and let the non-strict palindromic decomposition of $\alpha$ be $\alpha = \beta \theta$.
Since $\alpha = pq' = \beta \theta = r's$, by Lemma 3 $q'$ and $r'$ should also be palindromic, a contradiction. Now that the lemmas are ready, let's focus on the problem itself. A naive idea is to count the number of palindromic substrings of $A$ and of $B$ and multiply the two counts. This obviously counts some strings many times. By Lemma 7, writing $S = xy$, to avoid overcounting we can check whether replacing $x$ by $border(x)$ or $y$ by $border(y)$ can also produce $S$; if either can, reduce the answer by 1, and we're done. So for a palindromic string $x$ in $A$, we want to count the strings attainable from both $x$ and $border(x)$, and subtract that from the answer. Finding $x$ and $border(x)$ themselves can be done simply with the palindromic tree. Let $x = border(x)w$; we want to count strings $T$ in $B$ such that $T = wS$ and both $T$ and $S$ are palindromic. Since $|w|$ is the shortest period of $x$, $w$ can't be a power. If $|w| > |S|$, then $w = SU$ where $S$ and $U$ are both palindromes. Since $w$ is not a power, it can be decomposed into two palindromes in at most one way (Lemma 5). We can find that only way (by checking the maximal palindromic suffix and prefix) and use hashing to check whether it exists in $B$. If $|w| \le |S|$: if $S$ is not the maximal palindromic suffix of $T$, then $w$ must be a power (Lemma 2), so we only need to check maximal palindromic suffixes (i.e. $S = border(T)$). We need to do the same for the $y$'s in $B$, and then add back the strings attainable from both $border(x)$ and $border(y)$. Adding back can be done in a similar manner, or directly with hashing to find all matching $w$'s. Finding the maximal palindromic suffix and prefix of substrings can be done by binary lifting on two palindromic trees (one for the string and one for its reverse). Let $up_{i, j}$ be the resulting node after jumping through fail links for $2^{j}$ steps from node $i$.
While querying the maximal palindromic suffix of $s[l, r]$, find the node corresponding to the maximal palindromic suffix of $s[1, r]$ (this can be stored while building the palindromic tree). If it fits into $s[l, r]$, we're done. Otherwise, enumerate $j$ from large to small and jump $2^{j}$ steps (with the help of $up$) whenever the resulting node still doesn't fit into $s[l, r]$, then jump one more step to get the answer.
[ "data structures", "hashing", "strings" ]
3,500
#include <bits/stdc++.h> using namespace std; const int N = 234567; const int LOG = 18; const int ALPHA = 26; const int base = 2333; const int md0 = 1e9 + 7; const int md1 = 1e9 + 9; struct hash_t { int hash0, hash1; hash_t(int hash0 = 0, int hash1 = 0):hash0(hash0), hash1(hash1) { } hash_t operator + (const int &x) const { return hash_t((hash0 + x) % md0, (hash1 + x) % md1); }; hash_t operator * (const int &x) const { return hash_t((long long)hash0 * x % md0, (long long)hash1 * x % md1); } hash_t operator + (const hash_t &x) const { return hash_t((hash0 + x.hash0) % md0, (hash1 + x.hash1) % md1); }; hash_t operator - (const hash_t &x) const { return hash_t((hash0 + md0 - x.hash0) % md0, (hash1 + md1 - x.hash1) % md1); }; hash_t operator * (const hash_t &x) const { return hash_t((long long)hash0 * x.hash0 % md0, (long long)hash1 * x.hash1 % md1); } long long get() { return (long long)hash0 * md1 + hash1; } } ha[N], hb[N], power[N]; struct palindrome_tree_t { int n, total, p[N], pos[N], value[N], parent[N], go[N][ALPHA], ancestor[LOG][N]; char s[N]; palindrome_tree_t() { parent[0] = 1; value[1] = -1; total = 1; p[0] = 1; } int extend(int p, int w, int n) { while (s[n] != s[n - value[p] - 1]) { p = parent[p]; } if (!go[p][w]) { int q = ++total, k = parent[p]; while (s[n] != s[n - value[k] - 1]) { k = parent[k]; } value[q] = value[p] + 2; parent[q] = go[k][w]; go[p][w] = q; pos[q] = n; } return go[p][w]; } void init() { for (int i = 1; i <= n; ++i) { p[i] = extend(p[i - 1], s[i] - 'a', i); } for (int i = 0; i <= total; ++i) { ancestor[0][i] = parent[i]; } for (int i = 1; i < LOG; ++i) { for (int j = 0; j <= total; ++j) { ancestor[i][j] = ancestor[i - 1][ancestor[i - 1][j]]; } } } int query(int r, int length) { r = p[r]; if (value[r] <= length) { return value[r]; } for (int i = LOG - 1; ~i; --i) { if (value[ancestor[i][r]] > length) { r = ancestor[i][r]; } } return value[parent[r]]; } bool check(int r, int length) { r = p[r]; for (int i = LOG - 1; ~i; --i) { if 
(value[ancestor[i][r]] >= length) { r = ancestor[i][r]; } } return value[r] == length; } } A, B, RA, RB; map<long long, int> fa, fb, ga, gb; long long answer; char a[N], b[N]; int n, m; hash_t get_hash(hash_t *h, int l, int r) { return h[r] - h[l - 1] * power[r - l + 1]; } int main() { #ifdef wxh010910 freopen("input.txt", "r", stdin); #endif scanf("%s %s", a + 1, b + 1); n = strlen(a + 1); m = strlen(b + 1); A.n = RA.n = n; B.n = RB.n = m; for (int i = 1; i <= n; ++i) { A.s[i] = RA.s[n - i + 1] = a[i]; ha[i] = ha[i - 1] * base + a[i]; } for (int i = 1; i <= m; ++i) { B.s[i] = RB.s[m - i + 1] = b[i]; hb[i] = hb[i - 1] * base + b[i]; } power[0] = hash_t(1, 1); for (int i = 1; i <= max(n, m); ++i) { power[i] = power[i - 1] * base; } A.init(); B.init(); RA.init(); RB.init(); answer = (long long)(A.total - 1) * (B.total - 1); for (int i = 2; i <= A.total; ++i) { ++fa[get_hash(ha, A.pos[i] - A.value[i] + 1, A.pos[i]).get()]; int p = A.parent[i]; if (p < 2) { continue; } int l = A.pos[i] - (A.value[i] - A.value[p]) + 1, r = A.pos[i]; if (A.value[i] <= A.value[p] << 1) { ++ga[get_hash(ha, l, r).get()]; } } for (int i = 2; i <= B.total; ++i) { ++fb[get_hash(hb, B.pos[i] - B.value[i] + 1, B.pos[i]).get()]; int p = B.parent[i]; if (p < 2) { continue; } int l = B.pos[i] - B.value[i] + 1, r = B.pos[i] - B.value[p]; if (B.value[i] <= B.value[p] << 1) { ++gb[get_hash(hb, l, r).get()]; } } for (int i = 2; i <= A.total; ++i) { int p = A.parent[i]; if (p < 2) { continue; } int l = A.pos[i] - (A.value[i] - A.value[p]) + 1, r = A.pos[i]; long long value = get_hash(ha, l, r).get(); if (gb.count(value)) { answer -= gb[value]; } int longest_palindrome_suffix = A.query(r, r - l + 1); if (longest_palindrome_suffix == r - l + 1) { continue; } if (RA.check(n - l + 1, r - l + 1 - longest_palindrome_suffix)) { int length = r - l + 1 - longest_palindrome_suffix; if (fb.count(get_hash(ha, l, l + length - 1).get()) && fb.count((get_hash(ha, l, r) * power[length] + get_hash(ha, l, l + length - 
1)).get())) { --answer; } continue; } int longest_palindrome_prefix = RA.query(n - l + 1, r - l + 1); if (A.check(r, r - l + 1 - longest_palindrome_prefix)) { int length = longest_palindrome_prefix; if (fb.count(get_hash(ha, l, l + length - 1).get()) && fb.count((get_hash(ha, l, r) * power[length] + get_hash(ha, l, l + length - 1)).get())) { --answer; } continue; } } for (int i = 2; i <= B.total; ++i) { int p = B.parent[i]; if (p < 2) { continue; } int l = B.pos[i] - B.value[i] + 1, r = B.pos[i] - B.value[p]; long long value = get_hash(hb, l, r).get(); if (ga.count(value)) { answer -= ga[value]; } int longest_palindrome_suffix = B.query(r, r - l + 1); if (longest_palindrome_suffix == r - l + 1) { continue; } if (RB.check(m - l + 1, r - l + 1 - longest_palindrome_suffix)) { int length = longest_palindrome_suffix; if (fa.count(get_hash(hb, r - length + 1, r).get()) && fa.count((get_hash(hb, r - length + 1, r) * power[r - l + 1] + get_hash(hb, l, r)).get())) { --answer; } continue; } int longest_palindrome_prefix = RB.query(m - l + 1, r - l + 1); if (B.check(r, r - l + 1 - longest_palindrome_prefix)) { int length = r - l + 1 - longest_palindrome_prefix; if (fa.count(get_hash(hb, r - length + 1, r).get()) && fa.count((get_hash(hb, r - length + 1, r) * power[r - l + 1] + get_hash(hb, l, r)).get())) { --answer; } continue; } } for (int i = 2; i <= A.total; ++i) { int p = A.parent[i]; if (p < 2) { continue; } int l = A.pos[i] - (A.value[i] - A.value[p]) + 1, r = A.pos[i]; if (A.value[i] > A.value[p] << 1) { ++ga[get_hash(ha, l, r).get()]; } } for (int i = 2; i <= B.total; ++i) { int p = B.parent[i]; if (p < 2) { continue; } int l = B.pos[i] - B.value[i] + 1, r = B.pos[i] - B.value[p]; if (B.value[i] > B.value[p] << 1) { ++gb[get_hash(hb, l, r).get()]; } } for (auto p : ga) { answer += (long long)p.second * gb[p.first]; } printf("%lld\n", answer); return 0; }
1082
A
Vasya and Book
Vasya is reading an e-book. The file of the book consists of $n$ pages, numbered from $1$ to $n$. The screen is currently displaying the contents of page $x$, and Vasya wants to read page $y$. There are two buttons on the book which allow Vasya to scroll $d$ pages forwards or backwards (but he cannot scroll outside the book). For example, if the book consists of $10$ pages, and $d = 3$, then from the first page Vasya can scroll to the first or to the fourth page by pressing one of the buttons; from the second page — to the first or to the fifth; from the sixth page — to the third or to the ninth; from the eighth — to the fifth or to the tenth. Help Vasya to calculate the minimum number of times he needs to press a button to move to page $y$.
It is easy to see that the optimal answer is achieved in one of three cases: Vasya reaches page $y$ without visiting pages $1$ or $n$; Vasya first goes to page $1$ and then to page $y$; Vasya first goes to page $n$ and then to page $y$. In the first case, Vasya can go directly from page $x$ to page $y$ if $|x-y|$ is divisible by $d$. In the second case, Vasya can get to page $y$ through page $1$ if $y - 1$ is divisible by $d$; the required number of presses equals $\lceil \frac{x - 1}{d} \rceil + \frac{y - 1}{d}$. Similarly, in the third case, Vasya can get to page $y$ through page $n$ if $n - y$ is divisible by $d$; the required number of presses equals $\lceil\frac{n - x}{d} \rceil + \frac{n - y}{d}$. If none of the three options applies, there is no answer.
[ "implementation", "math" ]
1,200
#include <bits/stdc++.h> using namespace std; const int INF = int(2e9) + 99; int n, x, y, d; int dist(int x, int y){ return (abs(x - y) + (d - 1)) / d; } int main() { int t; cin >> t; for(int i = 0; i < t; ++i){ cin >> n >> x >> y >> d; int len = abs(x - y); int res = INF; if(len % d == 0) res = min(res, dist(x, y)); len = y - 1; if(len % d == 0) res = min(res, dist(x, 1) + dist(1, y)); len = n - y; if(len % d == 0) res = min(res, dist(x, n) + dist(n, y)); if(res == INF) res = -1; cout << res << endl; } return 0; }
1082
B
Vova and Trophies
Vova has won $n$ trophies in different competitions. Each trophy is either golden or silver. The trophies are arranged in a row. The beauty of the arrangement is the length of the longest subsegment consisting of golden trophies. Vova wants to swap two trophies (not necessarily adjacent ones) to make the arrangement as beautiful as possible — that means, to maximize the length of the longest such subsegment. Help Vova! Tell him the maximum possible beauty of the arrangement if he is allowed to do at most one swap.
Let $r_i$ be the length of the maximal segment of gold cups beginning at cup $i$, and let $l_i$ be the length of the maximal segment of gold cups ending at cup $i$. Also, let the total number of gold cups be $cntG$. Note that it makes no sense to swap two cups of the same color. So consider the silver cup that will be swapped with a gold cup; let its position be $i$. If $r_{i + 1} + l_{i - 1} < cntG$, there is a spare gold cup to place at position $i$, so we update the answer with the value $r_{i + 1} + l_{i - 1} + 1$; otherwise we update it with the value $r_{i + 1} + l_{i - 1}$. This does not cover the case when all the cups are golden; in that case the answer is simply $n$.
[ "greedy" ]
1,600
#include <bits/stdc++.h> using namespace std; int n; string s; int main() { cin >> n >> s; vector <int> l(n), r(n); for(int i = 0; i < n; ++i){ if(s[i] == 'G'){ l[i] = 1; if(i > 0) l[i] += l[i - 1]; } } for(int i = n - 1; i >= 0; --i){ if(s[i] == 'G'){ r[i] = 1; if(i + 1 < n) r[i] += r[i + 1]; } } int res = 0; int cntG = 0; for(int i = 0; i < n; ++i) cntG += s[i] == 'G'; for(int i = 0; i < n; ++i){ if(s[i] == 'G') continue; int nres = 1; if(i > 0) nres += l[i - 1]; if(i + 1 < n) nres += r[i + 1]; res = max(res, nres); } res = min(res, cntG); if(cntG == n) res = cntG; cout << res << endl; return 0; }
1082
C
Multi-Subject Competition
A multi-subject competition is coming! The competition has $m$ different subjects participants can choose from. That's why Alex (the coach) should form a competition delegation among his students. He has $n$ candidates. For the $i$-th person he knows subject $s_i$ the candidate specializes in and $r_i$ — a skill level in his specialization (this level can be negative!). The rules of the competition require each delegation to choose some subset of subjects they will participate in. The only restriction is that the \textbf{number of students from the team} participating in each of the \textbf{chosen} subjects should be the \textbf{same}. Alex decided that each candidate would participate only in the subject he specializes in. Now Alex wonders whom he has to choose to maximize the total sum of skill levels of all delegates, or just skip the competition this year if every valid non-empty delegation has negative sum. (Of course, Alex doesn't have any spare money so each delegate he chooses must participate in the competition).
First, for a fixed subject it's optimal to take the candidates with maximal levels. Second, if we fix the number of participants per chosen subject, then it's always optimal to choose all subjects with a positive sum of levels. This leads to the following solution: divide all candidates by their $s_i$ and sort each group in non-increasing order. Then we can simply iterate over all prefix sums of each group and, whenever a prefix sum is positive, add it to the global answer for that prefix length.
[ "greedy", "sortings" ]
1,600
#include<bits/stdc++.h> using namespace std; #define fore(i, l, r) for(int i = int(l); i < int(r); i++) #define sz(a) int((a).size()) int n, m; vector<int> s, r; inline bool read() { if(!(cin >> n >> m)) return false; s.assign(n, 0); r.assign(n, 0); fore(i, 0, n) { assert(cin >> s[i] >> r[i]); s[i]--; } return true; } vector< vector<int> > subs; inline void solve() { subs.assign(m + 1, vector<int>()); fore(i, 0, n) subs[s[i]].push_back(r[i]); fore(id, 0, sz(subs)) { sort(subs[id].begin(), subs[id].end()); reverse(subs[id].begin(), subs[id].end()); } vector<int> mx(n + 5, 0); fore(id, 0, sz(subs)) { int curSum = 0; fore(i, 0, sz(subs[id])) { curSum += subs[id][i]; if(curSum < 0) break; mx[i + 1] += curSum; } } cout << *max_element(mx.begin(), mx.end()) << endl; } int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); int tt = clock(); #endif cout << fixed << setprecision(15); if(read()) { solve(); #ifdef _DEBUG cerr << "TIME = " << clock() - tt << endl; tt = clock(); #endif } return 0; }
1082
D
Maximum Diameter Graph
Graph constructive problems are back! This time the graph you are asked to build should match the following properties. The graph is connected if and only if there exists a path between every pair of vertices. The diameter (aka "longest shortest path") of a connected undirected graph is the maximum number of edges in the \textbf{shortest} path between any pair of its vertices. The degree of a vertex is the number of edges incident to it. Given a sequence of $n$ integers $a_1, a_2, \dots, a_n$ construct a \textbf{connected undirected} graph of $n$ vertices such that: - the graph contains no self-loops and no multiple edges; - the degree $d_i$ of the $i$-th vertex doesn't exceed $a_i$ (i.e. $d_i \le a_i$); - the diameter of the graph is maximum possible. Output the resulting graph or report that no solution exists.
Let's construct the graph in the following manner. Take all the vertices with $a_i > 1$ and build a bamboo (a simple path) out of them. All but the two end vertices will have degree $2$, and the diameter is now the number of these vertices minus $1$. One can show that building the graph any other way won't make the diameter greater. How should we distribute the remaining vertices (those with $a_i = 1$)? Two of them can be attached to the ends of the path to increase the diameter. All the others don't matter: each can be paired with any vertex that still has degree to spare. As long as no cycles are added, the diameter won't change: the path that was the longest won't become any shorter. All these facts imply that the graph should be a tree and that the sum of $a_i$ must be at least $2n - 2$. Overall complexity: $O(n)$.
[ "constructive algorithms", "graphs", "implementation" ]
1,800
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; const int N = 1000 + 7; int n; int a[N]; int main() { scanf("%d", &n); forn(i, n) scanf("%d", &a[i]); int sum = 0; forn(i, n) sum += a[i]; if (sum < 2 * n - 2){ puts("NO"); return 0; } vector<int> ones; forn(i, n) if (a[i] == 1){ a[i] = 0; ones.push_back(i); } int t = ones.size(); int dm = (n - t) - 1 + min(2, t); printf("YES %d\n%d\n", dm, n - 1); int lst = -1; if (!ones.empty()){ lst = ones.back(); ones.pop_back(); } forn(i, n){ if (a[i] > 1){ if (lst != -1){ --a[lst]; --a[i]; printf("%d %d\n", lst + 1, i + 1); } lst = i; } } for (int i = n - 1; i >= 0; --i){ while (!ones.empty() && a[i] > 0){ --a[i]; printf("%d %d\n", i + 1, ones.back() + 1); ones.pop_back(); } } return 0; }
1082
E
Increasing Frequency
You are given array $a$ of length $n$. You can choose one segment $[l, r]$ ($1 \le l \le r \le n$) and integer value $k$ (positive, negative or even zero) and change $a_l, a_{l + 1}, \dots, a_r$ by $k$ each (i.e. $a_i := a_i + k$ for each $l \le i \le r$). What is the maximum possible number of elements with value $c$ that can be obtained after one such operation?
Let $cnt(l, r, x)$ be the number of occurrences of $x$ in the subsegment $[l, r]$. The task is equivalent to choosing $[l, r]$ and a value $d$ such that $ans = cnt(1, l - 1, c) + cnt(l, r, d) + cnt(r + 1, n, c)$ is maximum possible. After some transformations, $ans = cnt(1, n, c) + (cnt(l, r, d) - cnt(l, r, c))$, so we need to maximize $cnt(l, r, d) - cnt(l, r, c)$. The key observation is the following: if we fix some value $d$, then we can shrink each gap between consecutive occurrences of $d$ into one element with weight $-cnt(l_i, r_i, c)$, where $[l_i, r_i]$ is the gap. Then we just need to find the subsegment with maximal sum, a standard task which can be solved in $O(cnt(1, n, d))$. Finally, the total complexity is $\sum\limits_{d}{O(cnt(1, n, d))} = O(n)$.
[ "binary search", "dp", "greedy" ]
2,000
#include<bits/stdc++.h> using namespace std; #define fore(i, l, r) for(int i = int(l); i < int(r); i++) #define sz(a) int((a).size()) const int INF = int(1e9); int n, c; vector<int> a; inline bool read() { if(!(cin >> n >> c)) return false; a.assign(n, 0); fore(i, 0, n) assert(scanf("%d", &a[i]) == 1); return true; } vector<int> cntC; int getCnt(int l, int r) { return cntC[r] - cntC[l]; } vector< vector<int> > segs; vector<int> lst; int maxSegment(const vector<int> &s) { int mx = -INF; int bal = 0; fore(i, 0, sz(s)) { bal = max(0, bal + s[i]); mx = max(mx, bal); } return mx; } inline void solve() { cntC.assign(n + 1, 0); fore(i, 0, n) cntC[i + 1] = cntC[i] + (a[i] == c); int cntDif = *max_element(a.begin(), a.end()) + 1; segs.assign(cntDif, vector<int>()); lst.assign(cntDif, -1); fore(i, 0, n) { segs[a[i]].push_back(-getCnt(lst[a[i]] + 1, i)); lst[a[i]] = i; segs[a[i]].push_back(1); } fore(v, 0, cntDif) segs[v].push_back(-getCnt(lst[v] + 1, n)); int ans = 0; fore(v, 0, cntDif) { if(v == c) continue; ans = max(ans, maxSegment(segs[v])); } cout << getCnt(0, n) + ans << endl; } int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); int tt = clock(); #endif cout << fixed << setprecision(15); if(read()) { solve(); #ifdef _DEBUG cerr << "TIME = " << clock() - tt << endl; tt = clock(); #endif } return 0; }
1082
F
Speed Dial
Polycarp's phone book contains $n$ phone numbers, each of them is described by $s_i$ — the number itself and $m_i$ — the number of times Polycarp dials it in daily. Polycarp has just bought a brand new phone with an amazing speed dial feature! More precisely, $k$ buttons on it can have a number assigned to it (not necessary from the phone book). To enter some number Polycarp can press one of these $k$ buttons and then finish the number using usual digit buttons (entering a number with only digit buttons is also possible). Speed dial button can only be used when no digits are entered. No button can have its number reassigned. What is the minimal total number of \textbf{digit number presses} Polycarp can achieve after he assigns numbers to speed dial buttons and enters each of the numbers from his phone book the given number of times in an optimal way?
The first thing to come to one's mind is dynamic programming on a trie. The most naive of the solutions takes $O(S \cdot n^2 \cdot k^2)$, where $S$ is the total length of the strings. I'll introduce a faster approach. Let $dp[x][rem][k]$ be the solution for the subtree of the vertex $x$ with $rem$ buttons remaining, where $k$ is the closest ancestor vertex with a button placed in it. This dp will be recalculated via an auxiliary dp. Let $dp2[x][rem][k][m]$ be the same thing as $dp[x][rem][k]$, but only the first $m$ children of $x$ are taken into consideration and $x$ doesn't have a button in it. Give $z$ buttons to the current child, then update $dp2[x][rem][k][m]$ with $dp[m\text{-th child of } x][z][k] + dp2[x][rem - z][k][m + 1]$. $dp[x][rem][k]$ then has two options: $dp[x][rem - 1][x]$ for $x$ having a button in it, and $dp2[x][rem][k][0]$ plus the cost of typing the numbers ending at $x$ (their count times the distance from $k$ to $x$) for $x$ not having a button in it. $dp[x][rem][k]$ has $O(1)$ transitions and $O(len^2 \cdot p)$ states. $dp2[x][rem][k][m]$ has $O(k)$ total transitions and also $O(len^2 \cdot p)$ states.
[ "dp", "strings", "trees" ]
2,800
#include <bits/stdc++.h>

#define forn(i, n) for (int i = 0; i < int(n); i++)

using namespace std;

const int N = 500 + 7;
const int M = 11;
const int INF = 1e9;

struct node{
    int nxt[10];
    int cnt;
    node(){
        memset(nxt, -1, sizeof(nxt));
        cnt = 0;
    }
};

node trie[N];
int cnt;
int h[N];

void add(string s, int m){
    int cur = 0;
    forn(i, s.size()){
        int c = s[i] - '0';
        if (trie[cur].nxt[c] == -1){
            trie[cur].nxt[c] = cnt;
            h[cnt] = h[cur] + 1;
            ++cnt;
        }
        cur = trie[cur].nxt[c];
    }
    trie[cur].cnt += m;
}

int n, k;
int dp[N][M][N];
int dp2[N][M][N][M];

int calc(int x, int rem, int k){
    if (dp[x][rem][k] != -1)
        return dp[x][rem][k];
    vector<int> ch;
    forn(i, 10)
        if (trie[x].nxt[i] != -1)
            ch.push_back(trie[x].nxt[i]);
    dp[x][rem][k] = INF;
    if (rem > 0)
        dp[x][rem][k] = min(dp[x][rem][k], calc(x, rem - 1, x));
    dp2[x][rem][k][ch.size()] = 0;
    for (int i = int(ch.size()) - 1; i >= 0; --i)
        forn(z, rem + 1)
            dp2[x][rem][k][i] = min(dp2[x][rem][k][i], calc(ch[i], z, k) + dp2[x][rem - z][k][i + 1]);
    dp[x][rem][k] = min(dp[x][rem][k], dp2[x][rem][k][0] + (h[x] - h[k]) * trie[x].cnt);
    return dp[x][rem][k];
}

int main() {
    trie[0] = node();
    cnt = 1;
    cin >> n >> k;
    forn(i, n){
        string s;
        int m;
        cin >> s >> m;
        add(s, m);
    }
    memset(dp, -1, sizeof(dp));
    forn(i, N) forn(j, M) forn(l, N) forn(t, M)
        dp2[i][j][l][t] = INF;
    int ans = calc(0, k, 0);
    cout << ans << endl;
    return 0;
}
1082
G
Petya and Graph
Petya has a simple graph (that is, a graph without loops or multiple edges) consisting of $n$ vertices and $m$ edges. The weight of the $i$-th vertex is $a_i$. The weight of the $i$-th edge is $w_i$. A subgraph of a graph is some set of the graph vertices and some set of the graph edges. The set of edges must meet the condition: both ends of each edge from the set must belong to the chosen set of vertices. The weight of a subgraph is the sum of the weights of its edges, minus the sum of the weights of its vertices. You need to find the maximum weight of subgraph of given graph. \textbf{The given graph does not contain loops and multiple edges}.
This problem can be reduced to one of the well-known flow problems: "Projects and Instruments". In this problem, we have a set of projects we can do, each with its cost, and a set of instruments (each also having some cost). Each project depends on some instruments, and each instrument can be used any number of times. We have to choose a subset of projects and a subset of instruments so that if a project is chosen, all instruments that this project depends on are also chosen, and we have to maximize the difference between the sum of costs of chosen projects and the sum of costs of chosen instruments. The problem about projects and instruments can be solved with the following flow network: for each project, create a vertex and add a directed edge from the source to this vertex with capacity equal to the cost of this project; for each instrument, create a vertex and add a directed edge from this vertex to the sink with capacity equal to the cost of this instrument; for each project, create edges with infinite capacity from the vertex denoting this project to all vertices denoting the required instruments for this project. Let's analyze an $(S, T)$ cut between the source and the sink in this network, and construct some answer based on this cut as follows: if a project-vertex belongs to $S$, then we take this project; if an instrument-vertex belongs to $S$, then we take this instrument; all other projects and instruments are discarded. If an edge between some project and some instrument is cut, then it means that the answer is incorrect (we try to take a project requiring some instrument we don't take), and the cut value is infinite. Otherwise, the value of the cut is equal to the total cost of taken instruments and discarded projects, and we need to minimize it. So the minimum cut in this network denotes the best answer. Reducing the given problem to this problem is easy: edges of the given graph are "projects", vertices of the given graph are "instruments".
Regarding implementation, any flow algorithm with capacity scaling should be sufficient. It seems that Dinic's algorithm also passes, even though its worst-case complexity is $O(n^2 m)$.
[ "flows", "graphs" ]
2,400
#include <bits/stdc++.h>

using namespace std;

typedef long long li;

const int N = 2009;
const int INF = int(1e9) + 777;

struct edge{
    int to, f, c;
    edge () {}
    edge (int to, int f, int c) : to(to), f(f), c(c) {}
};

int n, m;
int s, t;
vector<edge> edges;
vector <int> g[N];
int u[N], cu;

void addEdge(int v, int to, int cap){
    g[v].push_back(edges.size());
    edges.push_back(edge(to, 0, cap));
    g[to].push_back(edges.size());
    edges.push_back(edge(v, 0, 0));
}

int dfs(int v, int need){
    if(v == t)
        return need;
    u[v] = cu;
    for(auto to : g[v]){
        edge &e = edges[to];
        if(u[e.to] != cu && e.c - e.f >= need){
            int add = dfs(e.to, need);
            if(add > 0){
                edges[to].f += add;
                edges[to ^ 1].f -= add;
                return add;
            }
        }
    }
    return 0;
}

li enlarge(int k){
    li res = 0;
    while(true){
        ++cu;
        int add = dfs(s, k);
        res += add;
        if(add == 0)
            break;
    }
    return res;
}

li maxFlow(){
    li flow = 0;
    for(int k = (1 << 29); k > 0; k >>= 1){
        flow += enlarge(k);
    }
    return flow;
}

int main() {
    //freopen("input.txt", "r", stdin);
    int nn, mm;
    cin >> nn >> mm;
    n = nn + mm + 5;
    m = nn + mm + mm + mm + 5;
    s = n - 1, t = n - 2;
    for(int i = 0; i < nn; ++i){
        int a;
        cin >> a;
        addEdge(i + mm, t, a);
    }
    li sum = 0;
    for(int i = 0; i < mm; ++i){
        int u, v, w;
        cin >> u >> v >> w;
        --u, --v;
        sum += w;
        addEdge(s, i, w);
        addEdge(i, u + mm, INF);
        addEdge(i, v + mm, INF);
    }
    li fl = maxFlow();
    cout << sum - fl << endl;
    return 0;
}
1083
A
The Fair Nut and the Best Path
The Fair Nut is going to travel to the Tree Country, in which there are $n$ cities. Most of the land of this country is covered by forest. Furthermore, the local road system forms a tree (connected graph without cycles). Nut wants to rent a car in the city $u$ and go by a simple path to city $v$. He hasn't determined the path, so it's time to do it. Note that chosen path can consist of only one vertex. A filling station is located in every city. Because of strange law, Nut can buy only $w_i$ liters of gasoline in the $i$-th city. We can assume, that he has \textbf{infinite money}. Each road has a length, and as soon as Nut drives through this road, the amount of gasoline decreases by length. Of course, Nut can't choose a path, which consists of roads, where he runs out of gasoline. He can buy gasoline in \textbf{every} visited city, even in \textbf{the first} and \textbf{the last}. He also wants to find the maximum amount of gasoline that he can have at the end of the path. Help him: count it.
Let's write on each edge of length $l$ the number $-l$. Define the sum of a path as the total amount of gasoline that can be bought in the cities on this path, plus the sum of the numbers written on its edges. If we don't run out of gasoline on some path, its sum is equal to the amount of gasoline at the end of the trip. If we do run out of gasoline on a path, we can instead start from the city right after the road where it happened, and the sum of the path won't decrease. So there is a path of maximal sum on which we never run out of gasoline, and this maximal sum is the answer to the problem. How to find it? Let $dp_{i}$ be the maximal sum over vertical paths (paths going down the tree) starting at vertex $i$. It is not difficult to calculate $dp_{i}$ using the $dp$ values of the children of vertex $i$. Every path can be split into two vertical paths, so we can calculate the answer by iterating over $i$, the highest vertex of a path, and taking the two best vertical paths starting from vertex $i$.
[ "data structures", "dp", "trees" ]
1,800
null
1083
B
The Fair Nut and Strings
Recently, the Fair Nut has written $k$ strings of length $n$, consisting of letters "a" and "b". He calculated $c$ — the number of strings that are prefixes of at least one of the written strings. \textbf{Every string was counted only one time}. Then, he lost his sheet with strings. He remembers that all written strings were lexicographically \textbf{not smaller} than string $s$ and \textbf{not bigger} than string $t$. He is interested: what is the maximum value of $c$ that he could get. A string $a$ is lexicographically smaller than a string $b$ if and only if one of the following holds: - $a$ is a prefix of $b$, but $a \ne b$; - in the first position where $a$ and $b$ differ, the string $a$ has a letter that appears earlier in the alphabet than the corresponding letter in $b$.
If $s$ and $t$ are equal, the answer is $n$. Let's cut off the common prefix of $s$ and $t$ and add its length to the answer. Now $s$ starts with "a" and $t$ starts with "b". Let $m$ be the new length of $s$ and $t$. If string $s$ weren't written, we could change the lexicographically smallest written string to $s$, and $c$ would not decrease. We can do the same with $t$. So we may assume $s$ and $t$ are in the answer: increase the answer by $2 \cdot m$, decrease $k$ by $2$, and from now on don't count strings that are prefixes of $s$ or $t$ while calculating $c$. Let's divide the remaining strings into groups: two strings are in one group if and only if their first characters are equal and their longest common prefixes with $s$ or $t$ (depending on the first character) are equal. Let the length of a group be $m - lcp$, where $lcp$ is the length of this common prefix. $c$ is equal to the number of vertices in the trie of the written strings. If we fix $l$, the number of strings that will belong to some group, we have to maximize the size of the union of $l$ root-to-leaf paths in a full binary tree of height $h$ equal to the length of the group. It can be proved by induction that the first path increases the size of the union by $h$, the second by $h - 1$, the next $2$ paths by $h - 2$ each, the next $4$ by $h - 3$ each, etc. We can build an array $p$, where $p_{i}$ is how many paths increase the answer by $h - i$. Note that these values are additive: if we have two independent binary trees and want to distribute some number of paths between them, we can sum their arrays (and this is how the previous fact is proved). We have $O(n)$ independent groups, and we want to sum their arrays quickly. Every binary tree adds the pattern $[1, 1, 2, 4, \ldots]$ on a suffix. Let's forget about the first $1$ and add it at the end. Then put $1$ at the second position of the suffix, and add $2 \cdot a_{i}$ to $a_{i + 1}$ for $1 \le i \le n - 1$ in increasing order, where $a$ is the array we want to obtain.
Note that values in this array can be very big, but if we replace every value bigger than $k$ with $k$, the answer will not change (because there are only $k$ paths). To calculate the answer, let's take the shortest prefix with sum $k$ (if there is no such prefix, take the first prefix whose sum is bigger than $k$ and decrease its last element so that the sum becomes $k$), and fill the other elements with $0$. The answer is then increased by the sum of $a_{i} \cdot i$ for $1 \le i \le m$.
[ "greedy", "strings" ]
2,000
null
1083
C
Max Mex
Once Grisha found a tree (connected graph without cycles) with a root in node $1$. But this tree was not just a tree. A permutation $p$ of integers from $0$ to $n - 1$ is written in nodes, a number $p_i$ is written in node $i$. As Grisha likes to invent some strange and interesting problems for himself, but not always can solve them, you need to help him deal with two types of queries on this tree. Let's define a function $MEX(S)$, where $S$ is a set of non-negative integers, as a smallest non-negative integer that is not included in this set. Let $l$ be a simple path in this tree. So let's define indices of nodes which lie on $l$ as $u_1$, $u_2$, $\ldots$, $u_k$. Define $V(l)$ as a set {$p_{u_1}$, $p_{u_2}$, $\ldots$ , $p_{u_k}$}. Then queries are: - For two nodes $i$ and $j$, swap $p_i$ and $p_j$. - Find the maximum value of $MEX(V(l))$ in all possible $l$.
First, let's restate the MEX query more clearly: we need to find the maximum $a$ such that all nodes with permutation values from $0$ to $a$ lie on one simple path. For that we can use a simple segment tree over permutation values: in a node of the segment tree covering values $[l, r]$, store whether all tree nodes with permutation values between $l$ and $r$ lie on one simple path, and if so, the endpoints of this path. Two such paths can be merged using precalculated LCA and Euler in/out times: for example, check all pairs of endpoints of the two paths as candidates for the endpoints of the merged path. So for a MEX query we descend this segment tree, and for a swap query we update the paths in $O(\log n)$ nodes of the segment tree. Total complexity: $O(n \log n)$.
[ "data structures", "trees" ]
2,900
null
1083
D
The Fair Nut's getting crazy
The Fair Nut has found an array $a$ of $n$ integers. We call subarray $l \ldots r$ a sequence of consecutive elements of an array with indexes from $l$ to $r$, i.e. $a_l, a_{l+1}, a_{l+2}, \ldots, a_{r-1}, a_{r}$. No one knows the reason, but he calls a pair of subsegments good if and only if the following conditions are satisfied: - These subsegments should not be nested. That is, each of the subsegments should contain an element (as an index) that does not belong to the other subsegment. - The subsegments intersect, and each element that belongs to the intersection belongs to each of the segments only once. For example, $a=[1, 2, 3, 5, 5]$. Pairs $(1 \ldots 3; 2 \ldots 5)$ and $(1 \ldots 2; 2 \ldots 3)$ are good, but $(1 \dots 3; 2 \ldots 3)$ and $(3 \ldots 4; 4 \ldots 5)$ are not (subsegment $1 \ldots 3$ contains subsegment $2 \ldots 3$; integer $5$ belongs to both segments, but occurs twice in subsegment $4 \ldots 5$). Help the Fair Nut to find out the number of pairs of good subsegments! The answer can be rather big, so print it modulo $10^9+7$.
Consider an $O(N^2)$ solution: fix the intersection of the two segments, $L \dots R$. We will call the right barrier the integer $Right$ such that the right border of the right segment can range over $R \dots Right$. Similarly, the left barrier is the integer $Left$ such that the left border of the left segment can range over $Left \dots L$. If we precalculate for each element the nearest equal elements to the left and to the right ($go_l[i]$; $go_r[i]$), then $Right = \min(go_r[i])-1$ and $Left = \max(go_l[i])+1$ over $i \in L \dots R$. Add $(L-Left+1) \cdot (Right-R+1)$ to the answer for every segment intersection. Faster solution: let's iterate $i = 1 \dots N$ and keep two arrays, $Left$ and $Right$, in some data structure, where $Left_j$ is the left barrier for the segment $j \dots i$ and $Right_j$ is the right barrier for the segment $j \dots i$. We need to add the sum of $(j-Left_j+1) \cdot (Right_j-i+1)$ over all $j$ from $1$ to $i$. Let's do it using a clever segment tree. Imagine we are at position $i$ and want to recalculate the arrays $Left$ and $Right$ after increasing $i$ by $1$. Element $A_i$ has its nearest equal element to the left at position $l$. We need to perform $Left_j := \max(Left_j, l)$ on the prefix $1 \dots i$ of the array $Left$; with $Right$ everything is similar. Note that $Left$ and $Right$ are monotonous, so this is just an assignment on some segment. Now we want to update the answer. At position $i$, the number of good pairs of segments is $(j-Left_j+1) \cdot (Right_j-i+1) = -Left_j \cdot Right_j + Right_j \cdot (j+1) - (i-1) \cdot (j+1) + Left_j \cdot (i-1)$. $Right_j \cdot (j+1)$ we can keep in a separate segment tree. Calculating $-(i-1) \cdot (j+1) + Left_j \cdot (i-1)$ is easy too. To get $-Left_j \cdot Right_j$ we need a segment tree which can perform a range assignment on the first array, a range assignment on the second array, and return the sum of pairwise products. It can be done by keeping the sum of $Left$'s, the sum of $Right$'s, the sum of $Left \cdot Right$, and some lazy modifiers, using pushes. For the details, see the code.
[ "data structures", "implementation" ]
3,500
null
1083
E
The Fair Nut and Rectangles
The Fair Nut got stuck in the planar world. He should solve this task to get out. You are given $n$ rectangles with vertices in $(0, 0)$, $(x_i, 0)$, $(x_i, y_i)$, $(0, y_i)$. For each rectangle, you are also given a number $a_i$. Choose some of them so that the area of their union minus the sum of $a_i$ of the chosen ones is maximum. It is guaranteed that there are no nested rectangles. Nut has no idea how to find the answer, so he asked for your help.
Let's order the rectangles by $x_i$, so $x_1, \ldots, x_n$ is increasing. Then $y_1, \ldots, y_n$ is decreasing, because there are no nested rectangles. Define $dp_i$ as the maximum value that can be achieved by choosing some subset of the first $i$ rectangles that contains the $i$-th rectangle. It can be calculated as $dp_i = \max\limits_{0 \leq j < i} (dp_j - x_j \cdot y_i) + x_i \cdot y_i - a_i$, where $j$ is the previously chosen rectangle ($j = 0$ with $dp_0 = x_0 = 0$ means no previous rectangle); we subtract $x_j \cdot y_i$ because it is the common area of the subset for $dp_j$ and the $i$-th rectangle. This formula can be optimized using the convex hull trick and calculated in $O(n \log n)$, or in $O(n)$ if the rectangles are already sorted.
[ "data structures", "dp", "geometry" ]
2,400
null
1083
F
The Fair Nut and Amusing Xor
The Fair Nut has two arrays $a$ and $b$, consisting of $n$ numbers. He found them so long ago that no one knows when they came to him. The Fair Nut often changes numbers in his arrays. He also is interested in how similar $a$ and $b$ are after every modification. Let's denote similarity of two arrays as the minimum number of operations to apply to make arrays equal (every operation can be applied for both arrays). If it is impossible, similarity will be equal $-1$. Per one operation you can choose a subarray with length $k$ ($k$ is fixed), and change every element $a_i$, which belongs to the chosen subarray, to $a_i \oplus x$ ($x$ can be chosen), where $\oplus$ denotes the bitwise XOR operation. Nut has already calculated the similarity of the arrays after every modification. Can you do it? Note that you just need to calculate those values, that is you do not need to apply any operations.
Let $c_{i}=a_{i} \oplus b_{i}$. Notice that a list of operations (each applied to either array) makes the arrays equal if and only if applying all of them to $c$ makes every element equal to $0$. So we only need to process modifications of $c$. Let's build an array $d$ of length $n + 1$, consisting of $c_{0}$, the $n - 1$ values $c_{i - 1} \oplus c_{i}$ for $1 \le i \le n - 1$, and $c_{n - 1}$. The only effect on $d$ of applying an operation to $c$ is changing the two elements $d_{i}$, $d_{i + k}$ into $d_{i} \oplus x$, $d_{i + k} \oplus x$. Let's divide the array $d$ into $k$ groups: elements with the same index modulo $k$ go into one group. The tasks for different groups are independent, and each of them matches the initial problem with $k = 2$. Now let's solve the problem for $k = 2$: compute prefix xors. An operation with $k = 2$ changes exactly one prefix xor (and never the last one). So, if the last prefix xor isn't equal to $0$, the answer is $-1$. Otherwise, the minimal number of operations is the number of nonzero prefix xors. Let's see what a modification does to the prefix xors: it changes all elements on some suffix from $a$ to $a \oplus x$. To recalculate the number of zeros quickly, divide the array into blocks of length $O(\sqrt n)$ and keep, for every block, the count of every element and a xor modifier. To process a modification, update the modifier of every block fully covered by the suffix, and rebuild the counts of the block partially covered by it. The number of zeros in a block is the count of elements equal to its modifier. We also have to store the last element of the array, which equals the xor of all $x$ from the modifications. Note that this task is solved for each of the $k$ groups, so we store the total count of nonzero prefix xors and the number of groups whose last prefix xor is nonzero.
[ "data structures" ]
3,300
null
1084
A
The Fair Nut and Elevator
The Fair Nut lives in an $n$-story house. $a_i$ people live on the $i$-th floor of the house. Every person uses the elevator twice a day: to get from the floor where he/she lives to the ground (first) floor, and to get from the first floor back to the floor where he/she lives when he/she comes back home in the evening. It was decided that the elevator, when it is not used, will stay on the $x$-th floor, but $x$ hasn't been chosen yet. When a person needs to get from floor $a$ to floor $b$, the elevator follows the simple algorithm: - Moves from the $x$-th floor (initially it stays on the $x$-th floor) to the $a$-th and takes the passenger. - Moves from the $a$-th floor to the $b$-th floor and lets out the passenger (if $a$ equals $b$, the elevator just opens and closes the doors, \textbf{but still} comes to the floor from the $x$-th floor). - Moves from the $b$-th floor back to the $x$-th. The elevator never transports more than one person and always goes back to floor $x$ before transporting the next passenger. The elevator spends one unit of electricity to move between neighboring floors. So moving from the $a$-th floor to the $b$-th floor requires $|a - b|$ units of electricity. Your task is to help Nut find the minimum number of electricity units that would be enough for one day, by choosing an optimal $x$-th floor. Don't forget that the elevator initially stays on the $x$-th floor.
For each request of a passenger living on the $p$-th floor to get to the first floor, we need $2 \cdot (\max(p, x) - 1)$ units of energy: the elevator moves from the $x$-th floor to the $p$-th, then from the $p$-th to the first, then from the first back to the $x$-th. The total is $|p - x| + |p - 1| + |x - 1|$, which equals $2 \cdot (\max(p, x) - 1)$. If the request is to get from the first floor to the $p$-th, the amount of energy is the same. So the optimal answer is achieved by choosing the first floor as the $x$-th.
[ "brute force", "implementation" ]
1,000
null
1084
B
Kvass and the Fair Nut
The Fair Nut likes kvass very much. On his birthday parents presented him $n$ kegs of kvass. There are $v_i$ liters of kvass in the $i$-th keg. Each keg has a lever. You can pour your glass by \textbf{exactly} $1$ liter pulling this lever. The Fair Nut likes this drink very much, so he wants to pour his glass by $s$ liters of kvass. But he wants to do it, so kvass level in the least keg is as much as possible. Help him find out how much kvass can be in the least keg or define it's not possible to pour his glass by $s$ liters of kvass.
If $\sum v_i < s$, the answer is $-1$. Otherwise, let $v$ be the minimal volume among the kegs; the answer is at most $v$. For every $i$, subtract $v_i - v$ from $s$ (pour the excess above the minimum first). Now all kegs hold $v$ liters. If $s$ has become $\le 0$, the answer is $v$. Otherwise the answer is $v - \lceil s / n \rceil$.
[ "greedy", "implementation" ]
1,200
null
1084
C
The Fair Nut and String
The Fair Nut found a string $s$. The string consists of lowercase Latin letters. The Nut is a curious guy, so he wants to find the number of strictly increasing sequences $p_1, p_2, \ldots, p_k$, such that: - For each $i$ ($1 \leq i \leq k$), $s_{p_i} =$ 'a'. - For each $i$ ($1 \leq i < k$), there is such $j$ that $p_i < j < p_{i + 1}$ and $s_j =$ 'b'. The Nut is upset because he doesn't know how to find the number. Help him. This number should be calculated modulo $10^9 + 7$.
Firstly, let's erase all characters other than 'a' and 'b'. Then let's split the string into maximal blocks of consecutive 'a' characters (separated by at least one 'b'). Now we need to multiply together all block sizes increased by $1$. This product also counts one empty sequence, so we should decrease it by one to get the answer.
[ "combinatorics", "dp", "implementation" ]
1,500
null
1085
A
Right-Left Cipher
Polycarp loves ciphers. He has invented his own cipher called Right-Left. Right-Left cipher is used for strings. To encrypt the string $s=s_{1}s_{2} \dots s_{n}$ Polycarp uses the following algorithm: - he writes down $s_1$, - he appends the current word with $s_2$ (i.e. writes down $s_2$ to the right of the current result), - he prepends the current word with $s_3$ (i.e. writes down $s_3$ to the left of the current result), - he appends the current word with $s_4$ (i.e. writes down $s_4$ to the right of the current result), - he prepends the current word with $s_5$ (i.e. writes down $s_5$ to the left of the current result), - and so on for each position until the end of $s$. For example, if $s$="techno" the process is: "t" $\to$ "te" $\to$ "cte" $\to$ "cteh" $\to$ "ncteh" $\to$ "ncteho". So the encrypted $s$="techno" is "ncteho". Given string $t$ — the result of encryption of some string $s$. Your task is to decrypt it, i.e. find the string $s$.
You can simulate the encryption process while maintaining the indices of the characters of the initial string; this tells you, for each character of $t$, which position of $s$ it came from.
[ "implementation", "strings" ]
800
null
1085
B
Div Times Mod
Vasya likes to solve equations. Today he wants to solve $(x~\mathrm{div}~k) \cdot (x \bmod k) = n$, where $\mathrm{div}$ and $\mathrm{mod}$ stand for integer division and modulo operations (refer to the Notes below for exact definition). In this equation, $k$ and $n$ are positive integer parameters, and $x$ is a positive integer unknown. If there are several solutions, Vasya wants to find the smallest possible $x$. Can you help him?
$n$ has to be divisible by $p = x \bmod k$, which in turn is less than $k$. We can try all options of $p$ (in $O(k)$ time), and for every suitable option restore $x = k \cdot \frac{n}{p} + p$ (since $x~\mathrm{div}~k = \frac{n}{p}$). Choose the smallest such $x$. Note that $p = 1$ always divides $n$, hence at least one option will always be available.
[ "math" ]
1,100
null
1085
C
Connect Three
The Squareland national forest is divided into equal $1 \times 1$ square plots aligned with north-south and east-west directions. Each plot can be uniquely described by integer Cartesian coordinates $(x, y)$ of its south-west corner. Three friends, Alice, Bob, and Charlie are going to buy three distinct plots of land $A, B, C$ in the forest. Initially, all plots in the forest (including the plots $A, B, C$) are covered by trees. The friends want to visit each other, so they want to clean some of the plots from trees. After cleaning, one should be able to reach any of the plots $A, B, C$ from any other one of those by moving through adjacent cleared plots. Two plots are adjacent if they share a side. \begin{center} {\small For example, $A=(0,0)$, $B=(1,1)$, $C=(2,2)$. The minimal number of plots to be cleared is $5$. One of the ways to do it is shown with the gray color.} \end{center} Of course, the friends don't want to strain too much. Help them find out the smallest number of plots they need to clean from trees.
The smallest possible number of plots required to connect all three plots is at least $\Delta_x + \Delta_y + 1$, where $\Delta_x = x_{max} - x_{min}$ and $\Delta_y = y_{max} - y_{min}$ (here $x_{min}$, $x_{max}$, $y_{min}$, $y_{max}$ are the extreme coordinate values among the three given plots). It now suffices to find any suitable collection of plots of this size. Let $x_m$ and $y_m$ be the median values of $(x_A, x_B, x_C)$ and $(y_A, y_B, y_C)$. For each of the plots $A$, $B$, $C$, connect it with the plot $(x_m, y_m)$ by any shortest path (if one of $A$, $B$, $C$ coincides with $(x_m, y_m)$, just do nothing). One can check that the resulting collection has size exactly $\Delta_x + \Delta_y + 1$, and it clearly connects $A$, $B$ and $C$ together. The above solution has complexity $O(C)$, where $C$ is the largest coordinate value. Given that $C$ is quite small, one could go with slower solutions, for instance, instead of $(x_m, y_m)$ try all $C^2$ plots as the connecting plot.
[ "implementation", "math" ]
1,600
null
1085
D
Minimum Diameter Tree
You are given a tree (an undirected connected graph without cycles) and an integer $s$. Vanya wants to put weights on all edges of the tree so that all weights are non-negative real numbers and their sum is $s$. At the same time, he wants to make the diameter of the tree as small as possible. Let's define the diameter of a weighed tree as the maximum sum of the weights of the edges lying on the path between two some vertices of the tree. In other words, the diameter of a weighed tree is the length of the longest simple path in the tree, where length of a path is equal to the sum of weights over all edges in the path. Find the minimum possible diameter that Vanya can get.
Let's denote the number of leaves in this tree by $l$. Let's prove that the answer is $\frac {2s} {l}$. To construct an example with such a diameter, put the weight $\frac s l$ on each edge adjacent to a leaf, and put the weight $0$ on all other edges. It is easy to see that the diameter of this tree is $\frac {2s} {l}$. To prove that it is the minimal possible diameter, denote the leaves by $a_1, \ldots, a_l$, and denote by $dist_{x,y}$ the sum of weights of edges lying on the path between vertices $x$ and $y$. Then $\frac {l(l-1)} 2 \cdot \max\limits_{1 \leq i < j \leq l}dist_{a_i,a_j} \geq \sum\limits_{1 \leq i < j \leq l}dist_{a_i,a_j}$. Note that the contribution of the weight of each edge to the sum on the right side of the inequality is at least $l-1$, because any edge lies on at least $l-1$ paths between the leaves of the tree. So, $\sum\limits_{1 \leq i < j \leq l}dist_{a_i,a_j} \geq (l-1) \cdot \sum\limits_{e \in E} weight_e = (l-1) \cdot s$. Therefore $\frac {l(l-1)} 2 \cdot \max\limits_{1 \leq i < j \leq l}dist_{a_i,a_j} \geq (l-1) \cdot s$, and we get that $\max\limits_{1 \leq i < j \leq l}dist_{a_i,a_j} \geq \frac {2s} l$. So, to solve this problem you need to calculate the number of leaves in the tree. This can be done in linear time by counting the degrees of all vertices. Time complexity: $O(n)$.
[ "constructive algorithms", "implementation", "trees" ]
1,700
null
1085
E
Vasya and Templates
Vasya owns three strings $s$ , $a$ and $b$, each of them consists only of first $k$ Latin letters. Let a template be such a string of length $k$ that each of the first $k$ Latin letters appears in it exactly once (thus there are $k!$ distinct templates). Application of template $p$ to the string $s$ is the replacement of each character in string $s$ with $p_i$, where $i$ is the index of this letter in the alphabet. For example, applying template "bdca" to a string "aabccd" yields string "bbdcca". Vasya wants to know if there exists such a template which yields a string lexicographically greater than or equal to string $a$ and lexicographically less than or equal to string $b$ after applying it to $s$. If there exist multiple suitable templates, print any of them. String $a$ is lexicographically less than string $b$ if there is some $i$ ($1 \le i \le n$) such that $a_i < b_i$ and for any $j$ ($1 \le j < i$) $a_j = b_j$. You are required to answer $t$ testcases \textbf{independently}.
Let's implement the following strategy: obtain the minimal string which is greater than or equal to $a$ and corresponds to at least one template. If such a string exists and it is less than or equal to $b$ then the answer exists, otherwise it doesn't. Let's iterate over the length of the prefix of the answer $s'$ which equals the prefix of $a$. For some fixed length we can restore some part of the template. For example, if the prefix of $s$ is "abd" and the prefix of $a$ is "dba", the template will look like "db?a". Also, sometimes a prefix can have no answer at all. Now we want to expand the prefix with some character $c$ at position $i$. If that character has appeared already, then its substitute is known. If the substitute is less than $a_i$ then the resulting string will be less than $a$, so it will be incorrect. If it equals $a_i$ then put it and proceed to position $i+1$. And if it's greater, then the resulting string will be greater than $a$, so the rest of the string can be filled greedily, minimizing the resulting string $s'$. If character $c$ hasn't appeared previously, then let's try all possible characters to substitute it (let it be some character $c'$). That character should be greater than or equal to $a_i$ and it shouldn't be already taken. If $c'$ is greater than $a_i$, then the rest of the string can be filled greedily, minimizing the resulting string $s'$. Otherwise it's $a_i$ and we proceed to position $i+1$. It's easy to show that the greedy filling will be performed no more than $k$ times, thus the solution works in $O(nk)$.
[ "greedy", "implementation", "strings" ]
2,300
null
1085
F
Rock-Paper-Scissors Champion
$n$ players are going to play a rock-paper-scissors tournament. As you probably know, in a one-on-one match of rock-paper-scissors, two players choose their shapes independently. The outcome is then determined depending on the chosen shapes: "paper" beats "rock", "rock" beats "scissors", "scissors" beat "paper", and two equal shapes result in a draw. At the start of the tournament all players will stand in a row, with their numbers increasing from $1$ for the leftmost player, to $n$ for the rightmost player. Each player has a pre-chosen shape that they will use in every game throughout the tournament. Here's how the tournament is conducted: - If there is only one player left, he is declared the champion. - Otherwise, two adjacent players in the row are chosen arbitrarily, and they play the next match. The losing player is eliminated from the tournament and leaves his place in the row (with his former neighbours becoming adjacent). If the game is a draw, the losing player is determined by a coin toss. The organizers are informed about all players' favoured shapes. They wish to find out the total number of players who have a chance of becoming the tournament champion (that is, there is a suitable way to choose the order of the games and manipulate the coin tosses). However, some players are still optimizing their strategy, and can inform the organizers about their new shapes. Can you find the number of possible champions after each such request?
First, let's determine which players can win in a given configuration. If all players have the same shape, then everyone can win. If there are only two kinds of shapes, then one shape always loses, and everyone with the other shape can win. Let's now assume there are all three shapes present. If a player $i$ can win, then they have to independently eliminate everyone to either side. It suffices to determine if $i$ can win everyone to their left (the other side can be treated symmetrically). Suppose that the player $i$ has the shape R (all the other cases are similar). We will show that $i$ can not eliminate everyone to the left if and only if two conditions hold: there is no S to the left of $i$; there is at least one P to the left of $i$. Indeed, if both of these are true, there is nothing stopping a P from eliminating $i$. Otherwise, if there are no P to the left of $i$, he can potentially win everyone there. Finally, suppose that there is an S to the left of $i$. Here's how $i$ can eliminate everyone to the left. First, eliminate all P's as follows: locate the closest pair of P and S, bring them together by eliminating all R's in between with the P, then eliminate the P. When we've eliminated all P's this way, we are left with the previous case (no P's). Now, how to deal with modifications? Assume that there are currently all three shapes present (otherwise we can easily find the answer as described at the start). How many R's can not win (similar for P's and S's)? According to our criterion, all R's losing because they can not eliminate their respective left-hand sides are located between $P_0$ and $S_0$, where $P_0$ and $S_0$ are positions of the first P and S (that is, when $P_0 < S_0$, otherwise there are no R's losing this way). A similar condition applies to the R's losing because of the right-hand side. Now we simply count the number of R's in these ranges and subtract them from the total. 
Note that we can never subtract an R twice, since there is at least one S on one of its sides. It now suffices to use any data structure that supports range summing and finding extreme values in a set, with modifications. For the range summing we may use a Fenwick tree (BIT) or a segment tree, and for extreme values an std::set will suffice. All operations we need take $O(\log n)$ time, hence each modification and answer takes $O(\log n)$ time.
[]
2,500
null
1085
G
Beautiful Matrix
Petya collects beautiful matrices. A matrix of size $n \times n$ is beautiful if: - All elements of the matrix are integers between $1$ and $n$; - For every row of the matrix, all elements of this row are different; - For every pair of vertically adjacent elements, these elements are different. Today Petya bought a beautiful matrix $a$ of size $n \times n$, and now he wants to determine its rarity. The rarity of the matrix is its index in the list of beautiful matrices of size $n \times n$, sorted in lexicographical order. Matrix comparison is done row by row. (The index of lexicographically smallest matrix is \textbf{zero}). Since the number of beautiful matrices may be huge, Petya wants you to calculate the rarity of the matrix $a$ modulo $998\,244\,353$.
Calculate the following dp: $dp[n]$ - the number of permutations of length $n$ of elements $1 \dots n$ such that $p_i \ne i$ for every $i = 1 \dots n$: $dp[n] = (n - 1) \cdot (dp[n - 1] + dp[n - 2])$. Calculate the following dp: $dp2[n][k]$ - the number of permutations of length $n$ of elements $1, 2, \dots, k, n + 1, n + 2, \dots, 2n - k$ such that $p_i \ne i$ (that is, $k$ of the elements each have one distinct forbidden position, and the remaining $n - k$ are unconstrained): $dp2[n][0] = n!$, $dp2[n][k] = dp2[n][k - 1] - dp2[n - 1][k - 1]$. We can follow the usual process of recovering the lexicographic index. Iterate over the element to put in the current position and add the number of ways to complete the matrix to the answer. How to calculate the number of ways to complete the matrix? The current row can be completed the following way: look at the elements of the previous row (the same suffix as the one we want to complete) and renumber its elements into $1 \dots t$, where $t$ is the length of the suffix. The current row now includes some elements which appeared in the suffix of the previous row and some which didn't. Let's renumber those that appeared correspondingly. Let there be $k$ such elements. Then the number of ways to complete the row is $dp2[t][k]$. The other rows can be completed the following way: for any row, the previous one can be renumbered into a permutation of the form $1 \dots n$, thus the number of ways to choose the current row is $dp[n]$. The only thing left is to raise $dp[n]$ to the power of the number of rows to be completed. That solution is $O(N^3)$ as we were looking at all the candidates for the current position. However, these candidates can be split up into two groups: those that change the value of $k$ by 1 and those that leave it as is. Thus a single cell can be processed in $O(\log N)$ with a couple of data structures.
[ "combinatorics", "data structures", "dp" ]
2,900
null
1086
F
Forest Fires
Berland forest was planted several decades ago in a formation of an infinite grid with a single tree in every cell. Now the trees have grown up and they form a pretty dense structure. So dense, actually, that fire became a real danger for the forest. This season has been abnormally hot in Berland and some trees caught on fire! The second the fire started is considered second $0$. Every second, the fire lights up all intact neighbouring trees of every currently burning tree. A tree is neighbouring if it occupies an adjacent cell, \textbf{by side or by corner}. Luckily, after $t$ seconds the Berland fire department finally reached the location of the fire and instantaneously extinguished it all. Now they want to calculate the destructive power of the fire. Let $val_{x, y}$ be the second the tree in cell $(x, y)$ caught on fire. The destructive power is the sum of $val_{x, y}$ over all $(x, y)$ of burnt trees. Clearly, all the workers of the fire department are firefighters, not programmers, thus they asked you to help them calculate the destructive power of the fire. The result can be rather big, so print it modulo $998244353$.
Let $f(t)$ be the total number of trees burnt during the first $t$ seconds. The answer can be represented as $t f(t) - \sum \limits_{i = 0}^{t - 1} f(i)$. Computing one value of $f(t)$ can be done in $O(n^2)$ or $O(n \log n)$ with a scanline or something like that. Let's analyze how the value of this function changes as time goes on. In the beginning, only the $n$ initial trees are burnt, then the zones around their positions expand, and so on, until two zones start intersecting. Then again, until another pair of zones starts intersecting. And so on. Let $x_1, x_2, \dots, x_k$ be the sorted sequence of moments when two zones start intersecting (this sequence has no more than $O(n^2)$ elements and can easily be computed in $O(n^2 \log n)$). Let's analyze the behavior of the function $f(t)$ on the segments $[0, x_1 - 1], [x_1, x_2 - 1]$ and so on. Why are we interested in such segments? Because on each such segment, $f(t)$ can be represented as a polynomial. This can be proven with the help of inclusion-exclusion: for each subset of zones, the intersection of the zones is either empty or a rectangle. And if we expand a rectangle, then during second $0$ its area is $ab$, during second $1$ it is $(a + 2)(b + 2)$, and during second $t$ it is $(a + 2t)(b + 2t)$, a $2$-nd degree polynomial in $t$. So if we compute the area of affected land through the inclusion-exclusion formula, we get a sum of no more than $2^n$ polynomials, each of degree no more than $2$, so the result is also a $2$-nd degree polynomial. We can compute the coefficients of this polynomial by interpolation or just some pen and paper work. And $f(x_i) + f(x_{i} + 1) + f(x_{i} + 2) + \dots + f(x_{i + 1} - 1)$ is a polynomial of $3$-rd degree, which can be computed using some more pen and paper work. So, to conclude, the solution consists of two steps: find all the moments when two zones affected by different trees start intersecting; consider the function on the segments where it behaves as a polynomial.
[ "math" ]
3,500
null
1088
A
Ehab and another construction problem
Given an integer $x$, find 2 integers $a$ and $b$ such that: - $1 \le a,b \le x$ - $b$ divides $a$ ($a$ is divisible by $b$). - $a \cdot b>x$. - $\frac{a}{b}<x$.
Well, the constraints allow a brute-force solution, but here's an $O(1)$ solution: If $x = 1$, there's no solution. Otherwise, just print $a = x - (x \bmod 2)$ (the largest even number not exceeding $x$) and $b = 2$. Indeed, $b$ divides $a$, $\frac{a}{b} \le \frac{x}{2} < x$, and $a \cdot b = 2a > x$ since $a \ge x - 1$ and $a \ge 2$. Time complexity: $O(1)$.
[ "brute force", "constructive algorithms" ]
800
"#include <iostream>\nusing namespace std;\nint main()\n{\n int x;\n cin >> x;\n if (x==1)\n cout << -1;\n else\n cout << x-x%2 << ' ' << 2;\n}"
1088
B
Ehab and subtraction
You're given an array $a$. You should repeat the following operation $k$ times: find the minimum non-zero element in the array, print it, and then subtract it from all the non-zero elements of the array. If all the elements are 0s, just print 0.
Let $s$ be the set of numbers in the input (sorted and distinct). In the $i^{th}$ step, $s_{i}$ is subtracted from all greater or equal elements, and all smaller elements are already 0. Thus, the answer in the $i^{th}$ step is $s_{i} - s_{i - 1}$ ($s_{0} = 0$). Time complexity: $O(n \log n)$.
[ "implementation", "sortings" ]
1,000
"#include <iostream>\n#include <set>\nusing namespace std;\nint main()\n{\n int n,m;\n scanf(\"%d%d\",&n,&m);\n set<int> s;\n s.insert(0);\n while (n--)\n {\n \tint a;\n \tscanf(\"%d\",&a);\n \ts.insert(a);\n\t}\n\tauto it=s.begin();\n\tfor (int i=0;i<m;i++)\n\t{\n\t\tif (next(it)==s.end())\n\t\tprintf(\"0\\n\");\n\t\telse\n\t\t{\n\t\t\tprintf(\"%d\\n\",*next(it)-*it);\n\t\t\tit=next(it);\n\t\t}\n\t}\n}"
1088
C
Ehab and a 2-operation task
You're given an array $a$ of length $n$. You can perform the following operations on it: - choose an index $i$ $(1 \le i \le n)$, an integer $x$ $(0 \le x \le 10^6)$, and replace $a_j$ with $a_j+x$ for all $(1 \le j \le i)$, which means add $x$ to all the elements in the prefix ending at $i$. - choose an index $i$ $(1 \le i \le n)$, an integer $x$ $(1 \le x \le 10^6)$, and replace $a_j$ with $a_j \% x$ for all $(1 \le j \le i)$, which means replace every element in the prefix ending at $i$ with the remainder after dividing it by $x$. Can you make the array \textbf{strictly increasing} in no more than $n+1$ operations?
The editorial uses 0-indexing. Both solutions make $a_{i} = i$. First, let's make $a_{i} = x \cdot n + i$ (for some $x$). Then, let's mod the whole array by $n$ (making $a_{i} = i$). If the "add" update changed only one index, we could just add $i + n - (a_{i} \bmod n)$ to index $i$. The problem is, if we make $a_{i} = x \cdot n + i$ and then update an index $j > i$, $a_{i}$ will be ruined. Just start from the back of the array! Note: for any $a, b$, if $b > a$, then $a \bmod b = a$. Additionally, if $a \ge b > \frac{a}{2}$, then $a \bmod b = a - b$. Let's add $5 \cdot 10^{5}$ to the whole array, loop over $a_{i}$ (in order), and mod the prefix ending at $i$ by $a_{i} - i$. Why does this work? Notice that $a_{i} \bmod (a_{i} - i) = a_{i} - (a_{i} - i) = i$ (the second note). Also, $a_{i}$ won't be changed afterwards (the first note). Time complexity: $O(n)$.
[ "constructive algorithms", "greedy", "math" ]
1,400
"#include <iostream>\nusing namespace std;\nint a[2005];\nint main()\n{\n\tint n;\n\tscanf(\"%d\",&n);\n\tfor (int i=0;i<n;i++)\n\tscanf(\"%d\",&a[i]);\n\tprintf(\"%d\\n\",n+1);\n\tint sum=0;\n\tfor (int i=n-1;i>=0;i--)\n\t{\n\t\tint add=(i-(a[i]+sum)%n+n)%n;\n\t\tprintf(\"1 %d %d\\n\",i+1,add);\n\t\tsum+=add;\n\t}\n\tprintf(\"2 %d %d\",n,n);\n}"
1088
D
Ehab and another another xor problem
\textbf{This is an interactive problem!} Ehab plays a game with Laggy. Ehab has 2 hidden integers $(a,b)$. Laggy can ask a pair of integers $(c,d)$ and Ehab will reply with: - 1 if $a \oplus c>b \oplus d$. - 0 if $a \oplus c=b \oplus d$. - -1 if $a \oplus c<b \oplus d$. Operation $a \oplus b$ is the bitwise-xor operation of two numbers $a$ and $b$. Laggy should guess $(a,b)$ with \textbf{at most 62 questions}. You'll play this game. You're Laggy and the interactor is Ehab. \textbf{It's guaranteed that $0 \le a,b<2^{30}$.}
This problem is particularly hard to explain :/ I recommend the simulation. Let's build $a$ and $b$ bit by bit from the most significant to the least significant (assume the parts built so far are stored in $curA$ and $curB$). Then, at the $i^{th}$ step, $a \oplus curA$ and $b \oplus curB$ have all bits from the most significant down to the $(i + 1)^{th}$ set to 0. Notice that whether $x$ is greater or less than $y$ is determined by the most significant bit in which they differ (the one that has a 1 is bigger). Let's query with $(curA \oplus 2^{i}, curB)$ and $(curA, curB \oplus 2^{i})$. In these queries, $a \oplus c$ and $b \oplus d$ can only differ in the $i^{th}$ bit (or a less significant one). Now, if the results of the queries are different, $a$ and $b$ have the same value in this bit, and this value can be determined by the answers of the respective queries (1 if the second query's answer is 1, 0 otherwise). If the queries give the same result, $a$ and $b$ must differ in this bit. How to know which of them has a 1 and which has a 0? We know that the greater between them (after setting the processed bits to 0) has a 1 and the other has a 0. The trick is to keep track of the greater between them. Before all queries, we send $(0, 0)$ to know the greater. Every time they differ in a bit, the greater may change. It'll simply change to the answer of the 2 queries we sent! In other words, we know when we sent the queries that after making $a$ and $b$ equal in this bit, some other bit became the most significant bit in which they differ. Also, we know who has a 1 in this bit (the greater in this query). Thus, we'll keep the answer of this query for the future, so when this bit comes, we don't need additional queries. Let's simulate for $a = 6$ and $b = 5$. In the first query, we'll send $(0, 0)$ to know that $a > b$. In the second query, we'll send $(4, 0)$ (the answer is -1) and $(0, 4)$ (the answer is 1). Since the answers differ, this bit has the same value for $a$ and $b$. 
Since the answer to the second query is 1, they both have a 1. Now, $curA = curB = 4$. In the third query, we'll send $(6, 4)$ (the answer is -1) and $(4, 6)$ (the answer is -1). Since the answers are the same, $a$ and $b$ differ in this bit. Since $a$ is currently the greater, $a$ has a 1 and $b$ has a 0. Now, the greater is $b$ (the next time 2 bits differ, $b$ will have a 1). Also, $curA = 6$ and $curB = 4$. In the last query, we'll send $(7, 4)$ (the answer is 0) and $(6, 5)$ (the answer is 0). Since the answers are the same, $a$ and $b$ differ in this bit. Since $b$ is currently greater, $b$ has a 1. Now, $curA = 6$ and $curB = 5$ and we're done :D Time complexity: $O(\log n)$.
[ "bitmasks", "constructive algorithms", "implementation", "interactive" ]
2,000
"#include <iostream>\nusing namespace std;\nint ask(int c,int d)\n{\n\tcout << \"? \" << c << ' ' << d << endl;\n\tint ans;\n\tcin >> ans;\n\treturn ans;\n}\nint main()\n{\n\tcout.flush();\n\tint a=0,b=0,big=ask(0,0);\n\tfor (int i=29;i>=0;i--)\n\t{\n\t\tint f=ask(a^(1<<i),b),s=ask(a,b^(1<<i));\n\t\tif (f==s)\n\t\t{\n\t\t\tif (big==1)\n\t\t\ta^=(1<<i);\n\t\t\telse\n\t\t\tb^=(1<<i);\n\t\t\tbig=f;\n\t\t}\n\t\telse if (f==-1)\n\t\t{\n\t\t\ta^=(1<<i);\n\t\t\tb^=(1<<i);\n\t\t}\n\t}\n\tcout << \"! \" << a << ' ' << b << endl;\n}"
1088
E
Ehab and a component choosing problem
You're given a tree consisting of $n$ nodes. Every node $u$ has a weight $a_u$. You want to choose an integer $k$ $(1 \le k \le n)$ and then choose $k$ connected components of nodes that don't overlap (i.e every node is in at most 1 component). Let the set of nodes you chose be $s$. You want to maximize: $$\frac{\sum\limits_{u \in s} a_u}{k}$$ In other words, you want to maximize the sum of weights of nodes in $s$ divided by the number of connected components you chose. Also, if there are several solutions, you want to \textbf{maximize $k$}. Note that adjacent nodes \textbf{can} belong to different components. Refer to the third sample.
Assume you already chose the components. Let the sum of nodes in the $i^{th}$ component be $b_{i}$. Then, the expression in the problem is equivalent to $average(b_{1}, b_{2}, ..., b_{k})$. Assume we only bother about the fraction maximization problem and don't care about $k$. Then, it'll always be better to choose the component with the maximum $b_{i}$ and throw away the rest! This is because of the famous inequality: $max(b_{1}, b_{2}, ..., b_{k}) \ge average(b_{1}, b_{2}, ..., b_{k})$ and the equality only occurs if all $b_{i}$ are equal! This means that the maximum value of the fraction is simply the maximum sum of a sub-component in the tree. To calculate it, let's root the tree at node 1, and calculate $dp[node]$, the maximum sum of a sub-component that contains node. Now, I'll put the code, and explain it after. $ans$ denotes the maximum sub-component sum. First, we call $dfs(1, 0, 1)$. We calculate the $dp$ of all the children of $node$. For every child $u$, we extend the component of $node$ with the component of $u$ if $dp[u] > 0$, and do nothing otherwise. Now, we solved the first half of our problem, but what about maximizing $k$? Notice that all components you choose must have a sum of weights equal to $ans$ (because the equality occurs if and only if all $b_{i}$ are equal). You just want to maximize their count. Let's calculate our $dp$ again. Assume $dp[node] = ans$. We have 2 choices: either mark the $node$ and its component as a component in the answer (but then other nodes won't be able to use them because the components can't overlap), or wait and extend the component. The idea is that there's no reason to wait. If we extend the component with some nodes, they won't change the sum, and they may even have another sub-component with maximal sum that we're merging to our component and wasting it! Thus, we'll always go with the first choice, making $dp[node] = 0$ so that its parent can't use it, and increasing $k$ :D Time complexity: $O(n)$.
[ "dp", "greedy", "math", "trees" ]
2,400
"#include <iostream>\n#include <vector>\nusing namespace std;\nint a[300005],k;\nlong long dp[300005],ans=-1e9;\nvector<int> v[300005];\nvoid dfs(int node,int p,bool f)\n{\n\tdp[node]=a[node];\n\tfor (int u:v[node])\n\t{\n\t\tif (u!=p)\n\t\t{\n\t\t\tdfs(u,node,f);\n\t\t\tdp[node]+=max(dp[u],0LL);\n\t\t}\n\t}\n\tif (f)\n\tans=max(ans,dp[node]);\n\telse if (dp[node]==ans)\n\t{\n\t\tdp[node]=0;\n\t\tk++;\n\t}\n}\nint main()\n{\n\tint n;\n\tscanf(\"%d\",&n);\n\tfor (int i=1;i<=n;i++)\n\tscanf(\"%d\",&a[i]);\n\tfor (int i=1;i<n;i++)\n\t{\n\t\tint a,b;\n\t\tscanf(\"%d%d\",&a,&b);\n\t\tv[a].push_back(b);\n\t\tv[b].push_back(a);\n\t}\n\tdfs(1,0,1);\n\tdfs(1,0,0);\n\tprintf(\"%I64d %d\",ans*k,k);\n}"
1088
F
Ehab and a weird weight formula
You're given a tree consisting of $n$ nodes. Every node $u$ has a weight $a_u$. It is guaranteed that there is only one node with minimum weight in the tree. For every node $u$ (except for the node with the minimum weight), it must have a neighbor $v$ such that $a_v<a_u$. You should construct a tree to minimize the weight $w$ calculated as follows: - For every node $u$, $deg_u \cdot a_u$ is added to $w$ ($deg_u$ is the number of edges containing node $u$). - For every edge $\{ u,v \}$, $\lceil log_2(dist(u,v)) \rceil \cdot min(a_u,a_v)$ is added to $w$, where $dist(u,v)$ is the number of edges in the path from $u$ to $v$ in the given tree.
First, let's reduce the problem to ordinary MST. We know that each edge $\{u, v\}$ adds $\lceil \log_{2}(dist(u, v)) \rceil \cdot min(a_{u}, a_{v})$ to $w$. In fact, it also adds 1 to $deg_{u}$ and $deg_{v}$. Thus, the problem is ordinary MST on a complete graph where each edge $\{u, v\}$ has weight $(\lceil \log_{2}(dist(u, v)) \rceil + 1) \cdot min(a_{u}, a_{v}) + max(a_{u}, a_{v})$! Let the node with the minimum weight be $m$. Let's root the tree at it. Lemma: for every node $u$ and a child $v$, $a_{v} > a_{u}$. In simpler words, the weights increase as we go down the tree. Proof: by contradiction. Assume $a_{v} \le a_{u}$. Then, the condition in the problem (that every node has an adjacent node with less weight) isn't satisfied yet for $v$. Therefore, $v$ must have a child $k$ such that $a_{k} < a_{v}$. However, the condition isn't satisfied for $k$, so $k$ needs another child and the child needs another child etc. (the tree would be infinite), which is clearly a contradiction. From that, we know that the weights decrease as we go up the tree and increase as we go down. Back to the MST problem. From Kruskal's algorithm, we know that the minimal edge incident to every node will be added to the MST (because the edges are sorted by weight). Let's analyze the minimal edge incident to every node $u$. Let its other end be $v$. Except for node $m$, $v$ will be an ancestor of $u$. Why? Assume we fix the distance part and just want to minimize $a_{v}$. We'll keep going up the tree (it's never optimal to go down, since the weights will increase) until we reach the desired distance. Now, since the minimal edge incident to every node will be added to the MST (by Kruskal's algorithm), and they're distinct (because, otherwise, you're saying that $u$ is an ancestor of $v$ and $v$ is an ancestor of $u$), THEY ARE THE MST. Now, the problem just reduces to finding the minimal edge incident to every node and summing them up (except for $m$). 
To do that, we'll fix $\lceil \log_{2}(dist(u, v)) \rceil$ (let it be $k$), and get the $2^{k}$-th ancestor with the well-known sparse table (binary lifting). Time complexity: $O(n \log n)$.
[ "data structures", "trees" ]
2,800
"#include <iostream>\n#include <string.h>\n#include <vector>\nusing namespace std;\nvector<int> v[500005];\nint m=1,a[500005],dp[20][500005];\nlong long ans;\nvoid dfs(int node,int p)\n{\n\tdp[0][node]=p;\n\tfor (int i=1;i<20;i++)\n\t{\n\t\tif (dp[i-1][node]!=-1)\n\t\tdp[i][node]=dp[i-1][dp[i-1][node]];\n\t}\n\tint d;\n\tlong long mn=(1LL<<60);\n\tfor (d=0;d<20 && dp[d][node]!=-1;d++)\n\tmn=min(mn,(long long)(d+1)*a[dp[d][node]]+a[node]);\n\tmn=min(mn,(long long)(d+1)*a[m]+a[node]);\n\tif (p!=-1)\n\tans+=mn;\n\tfor (int u:v[node])\n\t{\n\t\tif (u!=p)\n\t\tdfs(u,node);\n\t}\n}\nint main()\n{\n\tint n;\n\tscanf(\"%d\",&n);\n\tfor (int i=1;i<=n;i++)\n\t{\n\t\tscanf(\"%d\",&a[i]);\n\t\tif (a[i]<a[m])\n\t\tm=i;\n\t}\n\tfor (int i=1;i<n;i++)\n\t{\n\t\tint a,b;\n\t\tscanf(\"%d%d\",&a,&b);\n\t\tv[a].push_back(b);\n\t\tv[b].push_back(a);\n\t}\n\tmemset(dp,-1,sizeof(dp));\n\tdfs(m,-1);\n\tprintf(\"%I64d\",ans);\n}"
1091
A
New Year and the Christmas Ornament
Alice and Bob are decorating a Christmas Tree. Alice wants only $3$ types of ornaments to be used on the Christmas Tree: yellow, blue and red. They have $y$ yellow ornaments, $b$ blue ornaments and $r$ red ornaments. In Bob's opinion, a Christmas Tree will be beautiful if: - the number of blue ornaments used is greater by \textbf{exactly} $1$ than the number of yellow ornaments, and - the number of red ornaments used is greater by \textbf{exactly} $1$ than the number of blue ornaments. That is, if they have $8$ yellow ornaments, $13$ blue ornaments and $9$ red ornaments, we can choose $4$ yellow, $5$ blue and $6$ red ornaments ($5=4+1$ and $6=5+1$). Alice wants to choose as many ornaments as possible, but she also wants the Christmas Tree to be beautiful according to Bob's opinion. In the example two paragraphs above, we would choose $7$ yellow, $8$ blue and $9$ red ornaments. If we do it, we will use $7+8+9=24$ ornaments. That is the maximum number. Since Alice and Bob are busy with preparing food to the New Year's Eve, they are asking you to find out the maximum number of ornaments that can be used in their \textbf{beautiful} Christmas Tree! It is guaranteed that it is possible to choose at least $6$ ($1+2+3=6$) ornaments.
Consider some solution $Y$, $B$, $R$, where $Y + 1 = B$ and $B + 1 = R$. Let's add two yellow ornaments and one blue ornament both to the solution and to the stock; then $Y + 2 = B + 1 = R$, so the solution uses equal numbers of each colour, and this new problem (with supplies $y + 2$, $b + 1$, $r$) is equivalent to the old one. In this problem, the best solution is achieved when we use $\min(y + 2, b + 1, r)$ ornaments of each colour. Hence, we can find the said minimum, multiply by three and then remove the three extra ornaments again.
[ "brute force", "implementation", "math" ]
800
#include <bits/stdc++.h>
using namespace std;
int main() {
    int a, b, c;
    cin >> a >> b >> c;
    cout << min(a + 2, min(b + 1, c)) * 3 - 3;
}
1091
B
New Year and the Treasure Geolocation
Bob is a pirate looking for the greatest treasure the world has ever seen. The treasure is located at the point $T$, whose coordinates are to be found out. Bob travelled around the world and collected clues of the treasure location at $n$ obelisks. These clues were in an ancient language, and he has only decrypted them at home. Since he does not know which clue belongs to which obelisk, finding the treasure might pose a challenge. Can you help him? As everyone knows, the world is a two-dimensional plane. The $i$-th obelisk is at integer coordinates $(x_i, y_i)$. The $j$-th clue consists of $2$ integers $(a_j, b_j)$ and belongs to the obelisk $p_j$, where $p$ is some (unknown) permutation on $n$ elements. It means that the treasure is located at $T=(x_{p_j} + a_j, y_{p_j} + b_j)$. This point $T$ is the same for all clues. In other words, each clue belongs to exactly one of the obelisks, and each obelisk has exactly one clue that belongs to it. A clue represents the vector from the obelisk to the treasure. The clues must be distributed among the obelisks in such a way that they all point to the same position of the treasure. Your task is to find the coordinates of the treasure. If there are multiple solutions, you may print any of them. Note that you don't need to find the permutation. Permutations are used only in order to explain the problem.
We know that there exists some permutation $p$ such that for all $i$ the following holds: $(t_x, t_y) = (x_{p_i} + a_i, y_{p_i} + b_i)$ Summing this for all $i$ we get: $n \cdot (t_x, t_y) = \sum (x_{p_i} + a_i, y_{p_i} + b_i) = \left(\sum (x_{p_i} + a_i), \sum (y_{p_i} + b_i)\right) = \left(\sum x_i + \sum a_i, \sum y_i + \sum b_i\right)$ We can thus sum all $x$, respectively $y$, coordinates of both obelisks and clues, and divide by $n$. This takes $\mathcal O(n)$ time. Alternative solution: Take the lexicographically smallest obelisk coordinate. It is clear that this needs to be paired with the lexicographically largest clue. We simply find minimum and maximum in $\mathcal O(n)$ and sum.
[ "brute force", "constructive algorithms", "greedy", "implementation" ]
1,200
#include <vector>
#include <algorithm>
#include <iostream>
using namespace std;
typedef pair<int,int> pii;
#define x first
#define y second
int main() {
    int N;
    cin >> N;
    vector<pii> O(N), T(N);
    for (int i = 0; i < N; i++) cin >> O[i].x >> O[i].y;
    for (int i = 0; i < N; i++) cin >> T[i].x >> T[i].y;
    sort(O.begin(), O.end());
    sort(T.begin(), T.end());
    reverse(T.begin(), T.end());
    vector<pii> Ans(N);
    for (int i = 0; i < N; i++) Ans[i] = {O[i].x + T[i].x, O[i].y + T[i].y};
    sort(Ans.begin(), Ans.end());
    cout << Ans[0].x << ' ' << Ans[0].y << endl;
}
1091
C
New Year and the Sphere Transmission
There are $n$ people sitting in a circle, numbered from $1$ to $n$ in the order in which they are seated. That is, for all $i$ from $1$ to $n-1$, the people with id $i$ and $i+1$ are adjacent. People with id $n$ and $1$ are adjacent as well. The person with id $1$ initially has a ball. He picks a positive integer $k$ at most $n$, and passes the ball to his $k$-th neighbour in the direction of increasing ids, that person passes the ball to his $k$-th neighbour in the same direction, and so on until the person with the id $1$ gets the ball back. When he gets it back, people do not pass the ball any more. For instance, if $n = 6$ and $k = 4$, the ball is passed in order $[1, 5, 3, 1]$. Consider the set of all people that touched the ball. The \textbf{fun value} of the game is the sum of the ids of people that touched it. In the above example, the fun value would be $1 + 5 + 3 = 9$. Find and report the set of possible fun values for all choices of positive integer $k$. It can be shown that under the constraints of the problem, the ball always gets back to the $1$-st player after finitely many steps, and there are no more than $10^5$ possible fun values for given $n$.
Subtract $1$ from all values for convenience. Fix the value of $k$. We get the values $a \cdot k \bmod n$ for $a = 0, 1, 2, \dots$ until we reach $0$ again. This value can also be written as $a \cdot k - b \cdot n$. By Bézout's identity, the equation $a \cdot k - b \cdot n = c$ has an integer solution for $a$ and $b$ if and only if $c$ is divisible by $\text{gcd}(k, n)$. Furthermore, all such values of $c$ will be visited before the ball returns to $0$, because the element $k/\text{gcd}(k,n)$ generates the group $\mathbb{Z}_{n/\text{gcd}(k,n)}$. We can thus consider only values of $k$ that divide $n$. We can find all of them by trial division in $\mathcal O(\sqrt n)$. For each of them, we obtain a closed-form answer by summing an arithmetic series.
[ "math", "number theory" ]
1,400
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
typedef long long ll;

int main() {
    ll N;
    cin >> N;
    vector<ll> ans;
    for (ll i = 1; i * i <= N; ++i) {
        if (N % i == 0) {
            ans.push_back(N * (i - 1) / 2 + i);
            if (i * i != N) {
                ans.push_back(N * (N / i - 1) / 2 + N / i);
            }
        }
    }
    sort(ans.begin(), ans.end());
    for (int i = 0; i < (int)ans.size(); ++i) {
        cout << ans[i] << " \n"[i == (int)ans.size() - 1];
    }
}
1091
D
New Year and the Permutation Concatenation
Let $n$ be an integer. Consider all permutations on integers $1$ to $n$ in lexicographic order, and concatenate them into one big sequence $p$. For example, if $n = 3$, then $p = [1, 2, 3, 1, 3, 2, 2, 1, 3, 2, 3, 1, 3, 1, 2, 3, 2, 1]$. The length of this sequence will be $n \cdot n!$. Let $1 \leq i \leq j \leq n \cdot n!$ be a pair of indices. We call the sequence $(p_i, p_{i+1}, \dots, p_{j-1}, p_j)$ a \textbf{subarray} of $p$. Its \textbf{length} is defined as the number of its elements, i.e., $j - i + 1$. Its \textbf{sum} is the sum of all its elements, i.e., $\sum_{k=i}^j p_k$. You are given $n$. Find the number of subarrays of $p$ of length $n$ having sum $\frac{n(n+1)}{2}$. Since this number may be large, output it modulo $998244353$ (a prime number).
There are two types of subarrays with length $n$: those fully formed from one permutation, and those formed as a concatenation of a suffix of length $k$ of one permutation and a prefix of length $n-k$ of the next permutation. There are $n!$ subarrays of the first type, and they all have the correct sum. Let's investigate the second type. Recall the algorithm for finding the next permutation in lexicographic order. We find the longest suffix that is in decreasing order; let its length be $k$. We swap the preceding element $x$ with the smallest element of the decreasing suffix larger than $x$, and sort the suffix in increasing order. The prefix of length $n - k - 1$ is left unchanged, but all longer proper prefixes are different and also change their sum. Coming back to our problem: if the suffix of length $k$ is in decreasing order, then the prefix of length $n - k$ of the next permutation has a different sum from the prefix of the same length of the current permutation, hence the subarray has an incorrect sum. Conversely, if the suffix of length $k$ is not in decreasing order, then the prefix of length $n-k$ of the next permutation equals the prefix of the current permutation, and its sum is $\frac{n(n+1)}{2}$. To find the answer, we must thus subtract the number of suffixes of all permutations that are decreasing. How many of them are there for a fixed $k$? This is simple - we choose the first $n-k$ elements freely and permute them; the rest has to be sorted in a particular way. Hence the number of bad subarrays coming from a suffix of length $k$ equals $\frac{n!}{k!}$. Convince yourself that this approach works correctly even for the last permutation, where there is no next permutation to concatenate its suffixes with. The answer is: $n \cdot n! - \sum_{k=1}^{n-1} \frac{n!}{k!}$. This can be calculated in $\mathcal O(n)$ without the need for modular division. There is also a simple recurrence counting the same answer, found by arsijo: $d(n) = \left(d(n-1) + (n-1)! - 1\right) \cdot n$
[ "combinatorics", "dp", "math" ]
1,700
#include <iostream> #include <vector> using namespace std; template <unsigned int N> class Field { typedef unsigned int ui; typedef unsigned long long ull; inline ui pow(ui a, ui p){ui r=1,e=a;while(p){if(p&1){r=((ull)r*e)%N;}e=((ull)e*e)%N;p>>=1;}return r;} inline ui inv(ui a){return pow(a,N-2);} public: inline Field(int x = 0) : v(x) {} inline Field<N> pow(int p){return (*this)^p; } inline Field<N> operator^(int p){return {(int)pow(v,(ui)p)};} inline Field<N>&operator+=(const Field<N>&o) {if (v+o.v >= N) v += o.v - N; else v += o.v; return *this; } inline Field<N>&operator-=(const Field<N>&o) {if (v<o.v) v -= o.v-N; else v-=o.v; return *this; } inline Field<N>&operator*=(const Field<N>&o) {v=(ull)v*o.v % N; return *this; } inline Field<N>&operator/=(const Field<N>&o) { return *this*=inv(o.v); } inline Field<N> operator+(const Field<N>&o) const {Field<N>r{*this};return r+=o;} inline Field<N> operator-(const Field<N>&o) const {Field<N>r{*this};return r-=o;} inline Field<N> operator*(const Field<N>&o) const {Field<N>r{*this};return r*=o;} inline Field<N> operator/(const Field<N>&o) const {Field<N>r{*this};return r/=o;} inline Field<N> operator-() {if(v) return {(int)(N-v)}; else return {0};}; inline Field<N>& operator++() { ++v; if (v==N) v=0; return *this; } inline Field<N> operator++(int) { Field<N>r{*this}; ++*this; return r; } inline Field<N>& operator--() { --v; if (v==-1) v=N-1; return *this; } inline Field<N> operator--(int) { Field<N>r{*this}; --*this; return r; } inline bool operator==(const Field<N>&o) const { return o.v==v; } inline bool operator!=(const Field<N>&o) const { return o.v!=v; } inline explicit operator ui() const { return v; } inline static vector<Field<N>>fact(int t){vector<Field<N>>F(t+1,1);for(int i=2;i<=t;++i){F[i]=F[i-1]*i;}return F;} inline static vector<Field<N>>invfact(int t){vector<Field<N>>F(t+1,1);Field<N> X{1};for(int i=2;i<=t;++i){X=X*i;}F[t]=1/X;for(int i=t-1;i>=2;--i){F[i]=F[i+1]*(i+1);}return F;} private: ui v; }; 
template<unsigned int N>istream &operator>>(std::istream&is,Field<N>&f){unsigned int v;is>>v;f=v;return is;} template<unsigned int N>ostream &operator<<(std::ostream&os,const Field<N>&f){return os<<(unsigned int)f;} template<unsigned int N>Field<N> operator+(int i,const Field<N>&f){return Field<N>(i)+f;} template<unsigned int N>Field<N> operator-(int i,const Field<N>&f){return Field<N>(i)-f;} template<unsigned int N>Field<N> operator*(int i,const Field<N>&f){return Field<N>(i)*f;} template<unsigned int N>Field<N> operator/(int i,const Field<N>&f){return Field<N>(i)/f;} typedef Field<998244353> FF; int main(int argc, char* argv[]) { int n; cin >> n; auto F = FF::fact(n); auto I = FF::invfact(n); FF ans = n * F[n]; for (int i = 1; i < n; ++i) ans -= F[n]*I[i]; cout << ans << endl; }
1091
E
New Year and the Acquaintance Estimation
Bob is an active user of the social network Faithbug. On this network, people are able to engage in a mutual friendship. That is, if $a$ is a friend of $b$, then $b$ is also a friend of $a$. Each user thus has a non-negative number of friends. This morning, somebody anonymously sent Bob the following link: graph realization problem and Bob wants to know who that was. In order to do that, he first needs to know what the social network looks like. He investigated the profile of every other person on the network and noted down the number of their friends. However, he neglected to note down the number of his own friends. Help him find out how many friends he has. Since there may be many possible answers, print \textbf{all} of them.
The first observation is that using the handshaking lemma, we know the parity of $a_{n+1}$. Secondly, among the integers of the same parity, the answers always form a contiguous interval: if $a_{n+1} = X$ is one possible answer and $a_{n+1} = Y$ is another with $X < Y$, then every $X < Z < Y$ satisfying $X \equiv Z \pmod 2$ is also an answer. We should thus look into binary search approaches. We use the Erdős–Gallai theorem linked in the statement to determine whether a sequence is graphic. If it is not, we must determine whether the answer is too big or too small. This depends on whether $a_{n+1}$ is on the left or the right side of the inequality when it is violated. If it is on the left, the answer is too big - clearly, making it larger is never going to change the sign of the inequality. On the contrary, if $a_{n+1}$ is on the right, it is clearly too small. It can also happen that the inequality is violated for some $k$ where $a_{n+1}$ is on the left and for some other $k$ where it is on the right. Then there clearly is no solution. Checking the inequality naively takes $\mathcal O(n^2)$ time, but we can also do it in $\mathcal O(n)$: for the left side we need a prefix sum, and for the right side we maintain the sum and also how many times each value occurs. The sum of values at most $k$ can then be maintained in $\mathcal O(1)$. This yields an $\mathcal O(n \log n)$ algorithm. Alternatively, we can perform a similar binary search using the Havel–Hakimi algorithm, using a segment tree or a multiset and tracking whether $a_{n+1}$ was already processed or not to find out whether the answer is too small or too big. This yields an $\mathcal O(n \log^2 n)$ algorithm.
[ "binary search", "data structures", "graphs", "greedy", "implementation", "math", "sortings" ]
2,400
#include <cstdio>
#include <algorithm>
#include <vector>
using namespace std;

#define MAXN 500000
int N;
int A[MAXN];
long long sum;

#define TOO_SMALL -1
#define OK 0
#define TOO_BIG 1

int is_score(int value) {
    vector<int> C(N + 1, 0);
    for (int i = 0; i < N; ++i) ++C[A[i]];
    ++C[value];
    int less = 0;
    long long left = 0, right = 0;
    for (int k = 0, i = 0; k <= N; k++) {
        int val = (i == k && (i == N || A[i] < value)) ? value : A[i++];
        left += val;
        --C[val];
        right -= min(val, k);
        less += C[k];
        right += N - k - less;
        if (left > right + (long long)(k + 1) * k) {
            return (i == k) ? TOO_BIG : TOO_SMALL;
        }
    }
    return OK;
}

int main(int, char**) {
    scanf("%d", &N);
    sum = 0;
    for (int i = 0; i < N; i++) {
        scanf("%d", A + i);
        sum += A[i];
    }
    sort(A, A + N, greater<int>());
    int parity = sum & 1;
    int lo = 0, hi = (N - parity) / 2, lores = -1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        if (is_score(2 * mid + parity) == TOO_SMALL) {
            lo = mid + 1;
        } else {
            lores = mid;
            hi = mid - 1;
        }
    }
    lo = lores;
    hi = (N - parity) / 2;
    int hires = -1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        if (is_score(2 * mid + parity) == TOO_BIG) {
            hi = mid - 1;
        } else {
            hires = mid;
            lo = mid + 1;
        }
    }
    if (lores == -1 || hires == -1) printf("-1\n");
    else {
        for (int i = lores; i <= hires; ++i) printf("%d ", 2 * i + parity);
        printf("\n");
    }
}
1091
F
New Year and the Mallard Expedition
Bob is a duck. He wants to get to Alice's nest, so that those two can duck! \begin{center} Duck is the ultimate animal! (Image courtesy of See Bang) \end{center} The journey can be represented as a straight line, consisting of $n$ segments. Bob is located to the left of the first segment, while Alice's nest is on the right of the last segment. Each segment has a length in meters, and also terrain type: grass, water or lava. Bob has three movement types: swimming, walking and flying. He can switch between them or change his direction at any point in time (even when he is located at a non-integer coordinate), and doing so doesn't require any extra time. Bob can swim only on the water, walk only on the grass and fly over \textbf{any} terrain. Flying one meter takes $1$ second, swimming one meter takes $3$ seconds, and finally walking one meter takes $5$ seconds. Bob has a finite amount of energy, called stamina. Swimming and walking is relaxing for him, so he gains $1$ stamina for every meter he walks or swims. On the other hand, flying is quite tiring, and he spends $1$ stamina for every meter flown. Staying in place does not influence his stamina at all. Of course, his stamina can never become negative. Initially, his stamina is zero. What is the shortest possible time in which he can reach Alice's nest?
We start with a greedy solution: fly over lava, swim on water and walk on grass, keeping track of the time and the stamina. Two types of issues may arise - either we don't have enough stamina to fly over lava, or we have some leftover stamina at the end. If we lack stamina, we can walk or swim "in place" for some amount of time to gain it. This "in place" movement can be performed by simply walking or swimming half a meter backwards and then half a meter forwards, which gains $1$ stamina. It is more effective to do on water, as we move faster there, so we may as well do it on the first water of the journey. If there was no water before the lava we are flying across, we use the slightly more expensive grass instead. On the other hand, if we have some unused stamina at the end, we should convert some previous movement to flying to save time. Note that converting $1$ meter of movement costs us $2$ stamina, not one, since we spend $1$ stamina and also gain $1$ stamina less. Since moving on grass is slower, we prefer to convert such movement. However, we must take care not to convert a walking segment that is too early - we should not modify the journey in such a way that we run out of stamina at some point. We thus keep track of the length of grass that we travelled across during our journey. Consider the end of some patch of terrain. Let the total length of grass that we've travelled through until now be $G$, and let the current stamina be $S$. We may never convert more than $G$ meters of grass, and also no more than $S/2$, since otherwise we would have run out of stamina at the current location. We therefore cap the amount of grass $G$ available for conversion at $S/2$. In the end, we convert $G$ meters of walking to flying, and $S/2 - G$ meters of swimming to flying to save some time. This works in $\mathcal O(n)$.
[ "constructive algorithms", "greedy" ]
2,600
#include <iostream>
#include <vector>
#include <string>
#include <algorithm>
typedef long long ll;
using namespace std;

int main() {
    int N;
    cin >> N;
    vector<ll> L(N);
    for (ll &l : L) cin >> l;
    string T;
    cin >> T;
    bool hadWater = false;
    ll time = 0, stamina = 0, twiceGrass = 0;
    for (int i = 0; i < N; ++i) {
        if (T[i] == 'L') {
            time += L[i];
            stamina -= L[i];
            if (stamina < 0) {
                /* not enough stamina, walk or swim "in place" to gain it */
                time -= stamina * (hadWater ? 3 : 5);
                stamina = 0;
            }
        } else if (T[i] == 'W') {
            hadWater = true;
            stamina += L[i];
            time += 3 * L[i];
        } else {
            stamina += L[i];
            time += 5 * L[i];
            twiceGrass += 2 * L[i];
        }
        /* no more than stamina/2 of walking can be converted to flying to save time,
         * otherwise there would not be enough stamina at this point */
        twiceGrass = min(twiceGrass, stamina);
    }
    if (stamina > 0) {
        // convert walking to flying
        time -= (5 - 1) * twiceGrass / 2;
        // convert swimming to flying
        time -= (3 - 1) * (stamina - twiceGrass) / 2;
    }
    cout << time << endl;
}
1091
G
New Year and the Factorisation Collaboration
Integer factorisation is hard. The RSA Factoring Challenge offered $\$100\,000$ for factoring RSA-$1024$, a $1024$-bit long product of two prime numbers. To this day, nobody was able to claim the prize. We want you to factorise a $1024$-bit number. Since your programming language of choice might not offer facilities for handling large integers, we will provide you with a very simple calculator. To use this calculator, you can print queries on the standard output and retrieve the results from the standard input. The operations are as follows: - + x y where $x$ and $y$ are integers between $0$ and $n-1$. Returns $(x+y) \bmod n$. - - x y where $x$ and $y$ are integers between $0$ and $n-1$. Returns $(x-y) \bmod n$. - * x y where $x$ and $y$ are integers between $0$ and $n-1$. Returns $(x \cdot y) \bmod n$. - / x y where $x$ and $y$ are integers between $0$ and $n-1$ and $y$ is coprime with $n$. Returns $(x \cdot y^{-1}) \bmod n$ where $y^{-1}$ is the multiplicative inverse of $y$ modulo $n$. If $y$ is not coprime with $n$, then $-1$ is returned instead. - sqrt x where $x$ is an integer between $0$ and $n-1$ coprime with $n$. Returns $y$ such that $y^2 \bmod n = x$. If there are multiple such integers, only one of them is returned. If there are none, $-1$ is returned instead. - ^ x y where $x$ and $y$ are integers between $0$ and $n-1$. Returns ${x^y \bmod n}$. Find the factorisation of $n$, which is a product of between $2$ and $10$ \textbf{distinct} prime numbers, all of the form $4x + 3$ for some integer $x$. \textbf{Because of technical issues, we restrict the number of requests to $100$.}
Most of the operations we're given are useless, as we can perform them on our own. However, we cannot compute square roots ourselves, so we should use the information given to us by the square root oracle to find the factorisation. First, let's solve the task for a product of two primes. Select $x$ uniformly at random and assign $z = x^2$. Let $y$ be the answer to the query sqrt z. We have $x^2 \equiv y^2 \pmod n$, hence $(x+y)(x-y) \equiv 0 \pmod n$. With probability $1/2$ we have $x \not\equiv \pm y \pmod n$, and then $\text{gcd}(x-y, n)$ is a nontrivial factor of $n$. Let's take a look at what happens when there are more than two prime factors. Some of them will be accumulated in $x+y$, and the others in $x-y$. Collect all these values (again, after taking gcd of them with $n$) over multiple queries. We can find the prime factor $p_j$ if and only if for every $i \neq j$ we have seen a value $t$ such that $\text{gcd}(t, p_i \cdot p_j) = p_j$. This is because we can then take the gcd of exactly those values that are divisible by $p_j$ to obtain $p_j$. We do not know which values contain which prime factors in advance, so we simply calculate the gcd of all subsets of retrieved values. Since gcd is a commutative and associative operation, and all values we get this way must be factors of $n$, we can perform this step in $\mathcal O(f \cdot 4^f)$ time, where $f$ is the number of prime factors of $n$. We can then use a primality checking algorithm to check which of these are primes. Alternatively, we can just greedily pick the smallest of all values (except $1$) that is coprime to all the values selected so far - this will always be a prime. It remains to bound the probability of success. We need each pair of primes to be separated at least once. The probability of that happening in one query is $1/2$, so over the course of $q$ queries, the probability that a fixed pair is still not separated is $2^{-q}$. Taking the union bound over all prime pairs yields an upper bound on the error probability of $4 \cdot 10^{-14}$.
Implementation note: to compute the square root, the interactor uses the Chinese remainder theorem and finds the square roots in the respective finite fields. The primes are of the form $4x + 3$ for a simple reason - for such primes, a square root of $x$ modulo $p$ can be found by simply calculating $x^{\frac{p+1}{2}}$. For primes of the form $4x + 1$, we would need Tonelli–Shanks, which empirically uses about $5$ modular exponentiations; with the number of queries and prime factors, and the size of the numbers, the interactor could then no longer fit into the time limit.
[ "interactive", "math", "number theory" ]
3,200
import random

def isPrime(n):
    """Miller-Rabin primality test. A return value of False means n is certainly
    not prime. A return value of True means n is very likely a prime."""
    if n != int(n):
        return False
    n = int(n)
    # Miller-Rabin test for prime
    if n == 0 or n == 1 or n == 4 or n == 6 or n == 8 or n == 9:
        return False
    if n == 2 or n == 3 or n == 5 or n == 7:
        return True
    s = 0
    d = n - 1
    while d % 2 == 0:
        d >>= 1
        s += 1
    assert(2**s * d == n - 1)

    def trial_composite(a):
        if pow(a, d, n) == 1:
            return False
        for i in range(s):
            if pow(a, 2**i * d, n) == n - 1:
                return False
        return True

    for i in range(20):  # number of trials
        a = random.randrange(2, n)
        if trial_composite(a):
            return False
    return True

def gcd(x, y):
    return x if y == 0 else gcd(y, x % y)

n = int(input())
divs = [n]

def split(parts):
    global divs
    divs = [gcd(d, p) for d in divs for p in parts if gcd(d, p) != 1]

while not all([isPrime(x) for x in divs]):
    x = random.randint(0, n - 1)
    g = gcd(n, x)
    if gcd(n, x) != 1:
        split([g, n // g])
        continue
    y = int(input('sqrt {}\n'.format(x * x % n)))
    if x == y:
        continue
    a, b = abs(x - y), x + y
    g = gcd(x, y)
    split([a // g, b // g, g])

print('!', len(divs), ' '.join(str(d) for d in sorted(divs)))
1091
H
New Year and the Tricolore Recreation
Alice and Bob play a game on a grid with $n$ rows and infinitely many columns. In each row, there are three tokens, blue, white and red one. \textbf{Before} the game starts and \textbf{after every move}, the following two conditions must hold: - Any two tokens are not in the same cell. - In each row, the blue token is to the left of the white token, and the red token is to the right of the white token. First, they pick a positive integer $f$, whose value is valid for the whole game. Second, the starting player is chosen and makes his or her first turn. Then players take alternating turns. The player who is unable to make a move loses. During a move, a player first selects an integer $k$ that is either a prime number or a product of two (not necessarily distinct) primes. The smallest possible values of $k$ are thus $2, 3, 4, 5, 6, 7, 9, 10, 11, 13, 14, 15, 17, 19, \dots$. Furthermore, $k$ must not be equal to the previously picked integer $f$. Each turn, a move is performed in exactly one of the rows. If it is Alice's turn, she chooses a single blue token and moves it $k$ cells to the right. Alternatively, she may move both the blue and the white token in the same row by the same amount $k$ to the right. On the other hand, Bob selects a single red token and moves it $k$ cells to the left. Similarly, he may also move the white and the red token in the corresponding row by $k$ to the left. Note that Alice may never move a red token, while Bob may never move a blue one. Remember that after a move, the two conditions on relative positions of the tokens must still hold. Both players play optimally. Given the initial state of the board, determine who wins for two games: if Alice starts and if Bob starts.
If you represent the state of a row as a pair $(x,y)$, where $x$ is the distance between the blue and the white token and $y$ is the distance between the white and the red token, we can see that each player has one operation that reduces $x$ by $k$ and a second operation that reduces $y$ by $k$. In other words, each $x$ and $y$ defines a symmetric game, and all games are independent. This is an impartial game after all! Once we have the Grundy numbers for each pile, we can use the Sprague–Grundy theorem to find the answer. How do we find the Grundy numbers? We find the set of primes and semiprimes by sieving. To find the value of a pile of size $s$, we can collect the Grundy numbers of each pile size reachable from $s$ (i.e. $s - p$ where $p$ is a prime or semiprime), and then find the mex. This takes $\mathcal O(m^2)$, where $m$ is the maximum distance between two tokens ($2 \cdot 10^5$), and is too slow. To improve upon this, we can use a bitset - but a single one is not enough, we actually need many of them! Maintain $k$ bitsets $G_0, \dots, G_{k-1}$, where the $i$-th bitset contains true at the $j$-th position when there is a transition from state $j$ to a state with Grundy number $i$. How do we build them? We have one bitset $P$ storing all primes and semiprimes. For each state $i$, perform a linear search for the mex; let the Grundy number be $j$. Then OR $G_j$ with $P \ll i$. This works in $\mathcal O(m^2/w)$. The maximum Grundy number under the input constraints is less than $100$, so we can safely pick $k = 100$.
[ "games" ]
3,200
#include <vector>
#include <stack>
#include <iostream>
#include <algorithm>
#include <bitset>
using namespace std;
typedef unsigned int ui;
typedef long long ll;

struct Sieve : public std::vector<bool> {
    // ~10ns * n
    explicit Sieve(ui n) : vector<bool>(n + 1, true), n(n) {
        at(0) = false;
        if (n != 0) at(1) = false;
        for (ui i = 2; i * i <= n; ++i) {
            if (at(i)) for (ui j = i * i; j <= n; j += i) (*this)[j] = false;
        }
    }
    vector<int> primes() const {
        vector<int> ans;
        for (int i = 2; i <= n; ++i) if (at(i)) ans.push_back(i);
        return ans;
    }
private:
    int n;
};

constexpr int M = 2e5;
auto P = Sieve{M}.primes();

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    cout.tie(nullptr);
    vector<int> G(M, 0);
    int Q = P.size();
    for (int i = 0; i < Q; ++i) {
        for (int j = i; j < Q; ++j) {
            if (ll(P[i]) * P[j] >= M) break;
            P.push_back(P[i] * P[j]);
        }
    }
    bitset<M> PB;
    for (int p : P) PB[p] = true;
    int N, F;
    cin >> N >> F;
    PB[F] = false;
    vector<bitset<M>> W(100);
    W[0] = PB;
    for (int i = 1; i < M; ++i) {
        while (W[G[i]][i]) G[i]++;
        W[G[i]] |= PB << i;
    }
    cerr << *max_element(G.begin(), G.end()) << endl;
    int g = 0;
    for (int i = 0; i < N; i++) {
        int r, w, b;
        cin >> r >> w >> b;
        g ^= G[w - r - 1];
        g ^= G[b - w - 1];
    }
    if (g == 0) {
        cout << "Bob\nAlice\n";
    } else {
        cout << "Alice\nBob\n";
    }
}
1092
A
Uniform String
You are given two integers $n$ and $k$. Your task is to construct such a string $s$ of length $n$ that for each $i$ from $1$ to $k$ there is at least one $i$-th letter of the Latin alphabet in this string (the first letter is 'a', the second is 'b' and so on) and there are no other letters except these. You have to \textbf{maximize the minimal frequency} of some letter (the frequency of a letter is the number of occurrences of this letter in a string). If there are several possible answers, you can print \textbf{any}. You have to answer $t$ \textbf{independent} queries.
The only thing you need to do is to place the letters in blocks $1, 2, \dots, k$, $1, 2, \dots, k$ and so on. The last block can contain fewer than $k$ letters, but that is fine. It is easy to see that this distribution of letters is never worse than any other.
[ "implementation" ]
800
#include <bits/stdc++.h>
using namespace std;

int main() {
#ifdef _DEBUG
    freopen("input.txt", "r", stdin);
//    freopen("output.txt", "w", stdout);
#endif
    int t;
    cin >> t;
    for (int i = 0; i < t; ++i) {
        int n, k;
        cin >> n >> k;
        for (int j = 0; j < n; ++j) {
            cout << char('a' + j % k);
        }
        cout << endl;
    }
    return 0;
}
1092
B
Teams Forming
There are $n$ students in a university. The number of students is even. The $i$-th student has programming skill equal to $a_i$. The coach wants to form $\frac{n}{2}$ teams. Each team should consist of exactly two students, and each student should belong to exactly one team. Two students can form a team only if their skills are equal (otherwise they cannot understand each other and cannot form a team). Students can solve problems to increase their skill. One solved problem increases the skill by one. The coach wants to know the minimum total number of problems students should solve to form exactly $\frac{n}{2}$ teams (i.e. each pair of students should form a team). Your task is to find this number.
If we sort the students in non-decreasing order of skill, we can see that the minimum cost of the team with the lowest skill (let's call it the first team) is equal to $a_2 - a_1$ (if $a$ is already sorted), the cost of the second team is $a_4 - a_3$, and so on. So if we sort $a$ in non-decreasing order, the answer is $\sum\limits_{i = 1}^{\frac{n}{2}} a_{2i} - a_{2i - 1}$.
[ "sortings" ]
800
#include <bits/stdc++.h>
using namespace std;

int main() {
#ifdef _DEBUG
    freopen("input.txt", "r", stdin);
//    freopen("output.txt", "w", stdout);
#endif
    int n;
    cin >> n;
    vector<int> a(n);
    for (int i = 0; i < n; ++i) {
        cin >> a[i];
    }
    sort(a.begin(), a.end());
    int res = 0;
    for (int i = 0; i < n; i += 2) {
        res += a[i + 1] - a[i];
    }
    cout << res << endl;
    return 0;
}
1092
C
Prefixes and Suffixes
Ivan wants to play a game with you. He picked some string $s$ of length $n$ consisting only of lowercase Latin letters. You don't know this string. Ivan has informed you about all its improper prefixes and suffixes (i.e. prefixes and suffixes of lengths from $1$ to $n-1$), but he didn't tell you which strings are prefixes and which are suffixes. Ivan wants you to guess which of the given $2n-2$ strings are prefixes of the given string and which are suffixes. It may be impossible to guess the string Ivan picked (since multiple strings may give the same set of suffixes and prefixes), but Ivan will accept your answer if there is at least one string that is consistent with it. Let the game begin!
The first observation: if we take the two strings of length $n-1$, we can almost restore the initial string. Why almost? Because there are two possible options: either the first string of length $n-1$ is the prefix and the second one is the suffix, or vice versa. Let's write a function check(pref, suf) which checks whether the first string can be the prefix of the guessed string and the second one its suffix. After writing this function, we can run it twice (once for each order of the two strings of length $n-1$) and find any suitable answer. If the first string ($pref$) is the prefix and the second one ($suf$) is the suffix, then the whole string is $s = pref + suf_{n - 2}$ ($0$-indexed), where '+' is string concatenation. Let's check if we have all prefixes and suffixes of this string in the input. We can easily do it with two nested loops and a boolean array telling us whether some input string is already used or not. First, iterate over the length of the prefix or suffix, and inside, find any string from the input matching the current prefix or suffix. If we find all $2n - 2$ strings, then the current string is a valid guess and we can print the answer.
[ "strings" ]
1,700
#include <bits/stdc++.h>
using namespace std;

int n;
vector<string> v;
string res;

bool check(const string &pref, const string &suf) {
    string s = pref + suf.substr(n - 2);
    multiset<string> vv, sPref, sSuf;
    for (int i = 0; i < n - 1; ++i) {
        sPref.insert(s.substr(0, n - i - 1));
        vv.insert(s.substr(0, n - i - 1));
        sSuf.insert(s.substr(i + 1));
        vv.insert(s.substr(i + 1));
    }
    if (vv == multiset<string>(v.begin(), v.end())) {
        for (int i = 0; i < 2 * n - 2; ++i) {
            if (sPref.count(v[i])) {
                res += 'P';
                sPref.erase(sPref.find(v[i]));
            } else if (sSuf.count(v[i])) {
                res += 'S';
                sSuf.erase(sSuf.find(v[i]));
            } else {
                assert(false);
            }
        }
        return true;
    }
    return false;
}

int main() {
#ifdef _DEBUG
    freopen("input.txt", "r", stdin);
//    freopen("output.txt", "w", stdout);
#endif
    cin >> n;
    v = vector<string>(2 * n - 2);
    vector<string> big;
    for (int i = 0; i < 2 * n - 2; ++i) {
        cin >> v[i];
        if (int(v[i].size()) == n - 1) {
            big.push_back(v[i]);
        }
    }
    if (check(big[0], big[1])) {
        cout << res << endl;
    } else {
        check(big[1], big[0]);
        cout << res << endl;
    }
    return 0;
}
1092
D1
Great Vova Wall (Version 1)
Vova's family is building the Great Vova Wall (named by Vova himself). Vova's parents, grandparents, grand-grandparents contributed to it. Now it's totally up to Vova to put the finishing touches. The current state of the wall can be represented by a sequence $a$ of $n$ integers, with $a_i$ being the height of the $i$-th part of the wall. Vova can only use $2 \times 1$ bricks to put in the wall (he has an infinite supply of them, however). Vova can put bricks \textbf{horizontally} on the neighboring parts of the wall of equal height. It means that if for some $i$ the current height of part $i$ is the same as for part $i + 1$, then Vova can put a brick there and thus increase both heights by 1. Obviously, Vova can't put bricks in such a way that its parts turn out to be off the borders (to the left of part $1$ of the wall or to the right of part $n$ of it). \textbf{The next paragraph is specific to the version 1 of the problem.} Vova can also put bricks vertically. That means increasing the height of any part of the wall by 2. Vova is a perfectionist, so he considers the wall completed when: - all parts of the wall have the same height; - the wall has no empty spaces inside it. Can Vova complete the wall using any amount of bricks (possibly zero)?
Fairly enough, the solutions of both versions of the problem are pretty similar. The major difference between them is the vertical bricks. As you aren't required to minimize the total height, you can work not with the heights themselves but with their parities instead. A vertical brick now does nothing, and a horizontal brick flips the parity of two neighbouring parts of the same parity. Now imagine the following greedy solution: while you have some segment of equal parities of even length, fill it with horizontal bricks. This operation merges the segment with the ones to its left and right. If a single segment is left at the end, the answer is "YES"; otherwise it's "NO". The proof is left to the readers. Implementing this as-is will be $O(n \log n)$ at best: you'll need to keep the whole set of segments and a set of only the even-length ones. But there exists a more fun approach. We don't even need the lengths of the segments - just the parities of the lengths. Then merging an even segment with something will just erase that segment and xor the lengths of the left and right ones. Moreover, you don't even need to erase the whole even segment; you can do it brick by brick, as these operations are now the same. Let's simulate this with a stack. When a new number comes, push its parity to the stack. If the topmost two elements of the stack have the same parity, pop them both. The answer is "YES" if at the end the stack has no more than one element. When I heard of this problem, I actually had not the stack itself in mind but correct bracket sequences. Let's define parity 0 as '(' and ')' and parity 1 as '[' and ']'. The operations we perform with the stack are "greedily put the closing bracket if the last unclosed bracket was of the same type" and "put an opening one otherwise". Then the stack holds all the brackets which are still to be closed, and you close them as early as you can. This idea helped to both prove the correctness of the algorithm and implement it.
Overall complexity: $O(n)$.
[ "greedy", "implementation", "math" ]
2,200
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; const int N = 200 * 1000 + 13; int n; int a[N]; int main() { scanf("%d", &n); forn(i, n){ scanf("%d", &a[i]); a[i] &= 1; } set<pair<int, int>> seg, even; forn(i, n){ int j = i; while (j + 1 < n && a[j + 1] == a[i]) ++j; seg.insert({i, j}); if ((j - i + 1) % 2 == 0) even.insert({i, j}); i = j; } while (seg.size() > 1 && !even.empty()){ auto cur = *even.begin(); even.erase(cur); seg.erase(cur); auto it = seg.lower_bound(cur); if (it != seg.end()){ cur.second = it->second; if ((it->second - it->first + 1) % 2 == 0) even.erase(*it); seg.erase(it); } it = seg.lower_bound(cur); if (it != seg.begin()){ --it; cur.first = it->first; if ((it->second - it->first + 1) % 2 == 0) even.erase(*it); seg.erase(it); } seg.insert(cur); if ((cur.second - cur.first + 1) % 2 == 0) even.insert(cur); } puts(seg.size() == 1 ? "YES" : "NO"); return 0; }
1092
D2
Great Vova Wall (Version 2)
Vova's family is building the Great Vova Wall (named by Vova himself). Vova's parents, grandparents, grand-grandparents contributed to it. Now it's totally up to Vova to put the finishing touches. The current state of the wall can be represented by a sequence $a$ of $n$ integers, with $a_i$ being the height of the $i$-th part of the wall. Vova can only use $2 \times 1$ bricks to put in the wall (he has an infinite supply of them, however). Vova can put bricks \textbf{only horizontally} on the neighbouring parts of the wall of equal height. It means that if for some $i$ the current height of part $i$ is the same as for part $i + 1$, then Vova can put a brick there and thus increase both heights by 1. Obviously, Vova can't put bricks in such a way that its parts turn out to be off the borders (to the left of part $1$ of the wall or to the right of part $n$ of it). \textbf{Note that Vova can't put bricks vertically.} Vova is a perfectionist, so he considers the wall completed when: - all parts of the wall have the same height; - the wall has no empty spaces inside it. Can Vova complete the wall using any amount of bricks (possibly zero)?
Fairly enough, solutions of both versions of the problem are pretty similar. Read the second part of the previous tutorial first. This problem can also be implemented in the straightforward manner. The greedy solution now is searching for the first minimum in the array and putting a brick there. If it's impossible then the answer is "NO". This can also be simulated with sets, a bit more tedious but still ok and also $O(n \log n)$. Now back to the stack approach. Here you can't go to parities of the numbers (tests $[1, 3]$ and $[1, 1]$ lead to different results, for example). You push the number itself. However, you will also need an extra condition on the stack: you can't push a number greater than the current topmost element onto it. The only problem with this are the maximums of the array. Obviously, the resulting wall (if the answer exists) will be of height equal to the maximum initial height. It means that you shouldn't care about the ability to match all the maximums in the stack. The way I suggest to work around the issue is to process separately each segment between two consecutive maximums. One can easily prove the correctness of it by construction. Overall complexity: $O(n)$.
[ "data structures", "implementation" ]
2,200
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; const int N = 1000 * 1000 + 13; int n; int a[N]; int main() { scanf("%d", &n); forn(i, n) scanf("%d", &a[i]); vector<int> st; int mx = *max_element(a, a + n); forn(i, n){ if (a[i] == mx) continue; int j = i - 1; while (j + 1 < n && a[j + 1] != mx){ ++j; if (!st.empty() && st.back() == a[j]){ st.pop_back(); } else if (st.empty() || st.back() > a[j]){ st.push_back(a[j]); } else{ puts("NO"); return 0; } } if (!st.empty()){ puts("NO"); return 0; } i = j; } puts("YES"); return 0; }
1092
E
Minimal Diameter Forest
You are given a forest — an undirected graph with $n$ vertices such that each of its connected components is a tree. The diameter (aka "longest shortest path") of a connected undirected graph is the maximum number of edges in the \textbf{shortest} path between any pair of its vertices. Your task is to add some edges (possibly zero) to the graph so that it becomes a tree and the diameter of the tree is minimal possible. If there are multiple correct answers, print any of them.
Let's start with the solution and then proceed to the proof. For each tree in the forest find a vertex such that the maximal distance from it to any other vertex is minimal possible (a center of the tree). A tree may have two centers; take either of them in that case. Find the tree with the maximum diameter. Connect the centers of the other trees with its center. Overall complexity is the complexity of looking for a diameter: $O(n)$ or $O(n^2)$. The center is the best vertex in a tree to connect to. The diameter after merging two trees $t_1$ and $t_2$ by $v$ in $t_1$ and $u$ in $t_2$, with $d_1$ being the maximum shortest path from $v$ to any other vertex in $t_1$ and $d_2$ being the same for $u$ in $t_2$, is $max(diam_1, diam_2, d_1 + d_2 + 1)$. Thus minimizing both $d_1$ and $d_2$ will produce the best result. The optimal structure is a star. The center tree will be connected directly to every other tree. The other trees will be connected to each other through a single vertex, which makes the answer at most 1 worse than connecting them directly. And building the answer the other way will not beat this difference, as some tree would be connected to the center tree of the star through one vertex as well. The previous fact implies that the center tree of the star should have the maximal diameter among all trees.
[ "constructive algorithms", "dfs and similar", "greedy", "trees" ]
2,000
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; const int N = 200 * 1000 + 13; const int INF = 1000000000; int n, m; vector<int> g[N]; int bfs(int x, int dist[N]){ queue<int> q; q.push(x); dist[x] = 0; int lst = -1; while (!q.empty()){ int v = q.front(); q.pop(); lst = v; for (auto u : g[v]) if (dist[u] > dist[v] + 1){ dist[u] = dist[v] + 1; q.push(u); } } return lst; } int distx[N], disty[N]; bool used[N]; vector<int> cur; void dfs(int v){ used[v] = true; cur.push_back(v); for (auto u : g[v]) if (!used[u]) dfs(u); } int main() { scanf("%d%d", &n, &m); forn(i, m){ int v, u; scanf("%d%d", &v, &u); --v, --u; g[v].push_back(u); g[u].push_back(v); } forn(i, n) distx[i] = disty[i] = INF; vector<pair<int, int>> comps; forn(i, n) if (!used[i]){ cur.clear(); dfs(i); int x = bfs(i, distx); int y = bfs(x, disty); for (auto v : cur) distx[v] = INF; bfs(y, distx); int d = disty[y], center = i; for (auto v : cur) if (distx[v] == d / 2 && disty[v] == d - d / 2) center = v; comps.push_back({d, center}); } vector<pair<int, int>> ans; nth_element(comps.begin(), comps.end() - 1, comps.end()); forn(i, int(comps.size()) - 1){ g[comps[i].second].push_back(comps.back().second); g[comps.back().second].push_back(comps[i].second); ans.push_back({comps[i].second, comps.back().second}); } forn(i, n) distx[i] = disty[i] = INF; int y = bfs(bfs(comps.back().second, distx), disty); printf("%d\n", disty[y]); for (auto it : ans) printf("%d %d\n", it.first + 1, it.second + 1); return 0; }
1092
F
Tree with Maximum Cost
You are given a tree consisting exactly of $n$ vertices. Tree is a connected undirected graph with $n-1$ edges. Each vertex $v$ of this tree has a value $a_v$ assigned to it. Let $dist(x, y)$ be the distance between the vertices $x$ and $y$. The distance between the vertices is the number of edges on the simple path between them. Let's define the cost of the tree as the following value: firstly, let's fix some vertex of the tree. Let it be $v$. Then the cost of the tree is $\sum\limits_{i = 1}^{n} dist(i, v) \cdot a_i$. Your task is to calculate the \textbf{maximum possible cost} of the tree if you can choose $v$ arbitrarily.
Firstly, let's calculate the answer (let it be $res$) for some fixed vertex. Let this vertex be the vertex $1$. Just run a simple dfs and calculate the result using the formula from the problem statement. Also let's calculate the sum of values (let the sum in the subtree of the vertex $v$ be $sum_v$) in each subtree of the given tree if its root is the vertex $1$. It can be easily done with simple dynamic programming. And now the magic part: let's apply the technique which is called "re-rooting" (at least we called it so). Let's maintain the correct values in subtrees at each step of our algorithm. How will the values and the answer change when we go through the edge $(u, v)$? The following sequence of changes will change all values correctly: firstly, $res$ will decrease by $sum_v$ (because the distance to each vertex in this subtree will decrease by one); then $sum_u$ will decrease by $sum_v$ (because we change the root of the tree; we need this step to maintain the correct values); then $res$ will increase by $sum_u$ (because the distance to each vertex in this subtree will increase by one); and then $sum_v$ will increase by $sum_u$ (because we change the root of the tree; we need this step to maintain the correct values). So, we can recalculate all the values we need when we go through an edge, and now we can write another dfs to try to update the answer for each vertex as a root (as the chosen vertex).
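Combining the four steps, moving the root across the edge $(u, v)$ changes $res$ by exactly $total - 2 \cdot sum_v$, where $total$ is the sum of all values. A small sketch verifying this identity with a brute-force cost computation (`costFrom` is my own name, not the model solution):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Brute-force value of sum over i of dist(i, root) * a[i], via BFS.
long long costFrom(const vector<vector<int>>& g, const vector<long long>& a, int root) {
    int n = g.size();
    vector<int> d(n, -1);
    queue<int> q;
    q.push(root);
    d[root] = 0;
    long long res = 0;
    while (!q.empty()) {
        int v = q.front(); q.pop();
        res += (long long)d[v] * a[v];
        for (int u : g[v])
            if (d[u] < 0) { d[u] = d[v] + 1; q.push(u); }
    }
    return res;
}
```

On the path 0-1-2 with $a = \{1, 2, 3\}$: $res(0) = 8$, and re-rooting to vertex 1 gives $res(1) = res(0) + total - 2 \cdot sum_1 = 8 + 6 - 2 \cdot 5 = 4$, matching the brute force.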
[ "dfs and similar", "dp", "trees" ]
1,900
#include <bits/stdc++.h> using namespace std; long long res, ans; vector<int> a; vector<long long> sum; vector<vector<int>> g; void dfs(int v, int p = -1, int h = 0) { res += h * 1ll * a[v]; sum[v] = a[v]; for (auto to : g[v]) { if (to == p) { continue; } dfs(to, v, h + 1); sum[v] += sum[to]; } } void go(int v, int p = -1) { ans = max(ans, res); for (auto to : g[v]) { if (to == p) { continue; } res -= sum[to]; sum[v] -= sum[to]; res += sum[v]; sum[to] += sum[v]; go(to, v); sum[to] -= sum[v]; res -= sum[v]; sum[v] += sum[to]; res += sum[to]; } } int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif int n; cin >> n; a = vector<int>(n); sum = vector<long long>(n); g = vector<vector<int>>(n); for (int i = 0; i < n; ++i) { cin >> a[i]; } for (int i = 0; i < n - 1; ++i) { int x, y; cin >> x >> y; --x, --y; g[x].push_back(y); g[y].push_back(x); } dfs(0); go(0); cout << ans << endl; return 0; }
1093
A
Dice Rolling
Mishka got a six-faced dice. It has integer numbers from $2$ to $7$ written on its faces (all numbers on faces are different, so this is an \textbf{almost} usual dice). Mishka wants to get exactly $x$ points by rolling his dice. The number of points is just a sum of numbers written at the topmost face of the dice for all the rolls Mishka makes. Mishka doesn't really care about the number of rolls, so he just wants to know \textbf{any} number of rolls he can make to be able to get exactly $x$ points for them. \textbf{Mishka is very lucky, so if the probability to get $x$ points with chosen number of rolls is non-zero, he will be able to roll the dice in such a way.} Your task is to print this number. It is \textbf{guaranteed} that at least one answer exists. Mishka is also very curious about different number of points to score so you have to answer $t$ \textbf{independent} queries.
It is enough to print $\lfloor\frac{x_i}{2}\rfloor$ for each query, where $\lfloor\frac{x}{y}\rfloor$ is $x$ divided by $y$ rounded down. With $\lfloor\frac{x_i}{2}\rfloor$ rolls the achievable sums are exactly the range $[2\lfloor\frac{x_i}{2}\rfloor, 7\lfloor\frac{x_i}{2}\rfloor]$, which always contains $x_i$ (note that $x_i \ge 2$, since the smallest face is $2$).
[ "math" ]
800
#include <bits/stdc++.h> using namespace std; int main() { int t; cin >> t; for (int i = 0; i < t; ++i) { int x; cin >> x; cout << x / 2 << endl; } return 0; }
1093
B
Letters Rearranging
You are given a string $s$ consisting only of lowercase Latin letters. You can rearrange all letters of this string as you wish. Your task is to obtain a \textbf{good} string by rearranging the letters of the given string or report that it is impossible to do it. Let's call a string \textbf{good} if it is not a palindrome. Palindrome is a string which is read from left to right the same as from right to left. For example, strings "abacaba", "aa" and "z" are palindromes and strings "bba", "xd" are not. You have to answer $t$ \textbf{independent} queries.
The only case when the answer is -1 is when all letters of the string are equal. Why is it so? Because if we have at least two different letters, we can place the first one at the first position of the string and the second one at the last position of the string. Then it is clear that the obtained string is good. We can implement this solution in the following way: sort $s_i$ and if $s_{i, 1} = s_{i, |s_i|}$ (the first letter equals the last one) then the answer is -1, otherwise the answer is $s_i$.
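A minimal sketch of this solution (`rearrange` is a hypothetical helper name):

```cpp
#include <bits/stdc++.h>
using namespace std;

// After sorting, the first and last letters coincide only when all
// letters are equal (answer "-1"); otherwise the sorted string itself
// has different letters at its ends, so it is not a palindrome.
string rearrange(string s) {
    sort(s.begin(), s.end());
    return s.front() == s.back() ? string("-1") : s;
}
```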
[ "constructive algorithms", "greedy", "sortings", "strings" ]
900
#include <bits/stdc++.h> using namespace std; int main() { int t; cin >> t; for (int i = 0; i < t; ++i) { string s; cin >> s; sort(s.begin(), s.end()); if (s[0] == s.back()) cout << -1 << endl; else cout << s << endl; } return 0; }
1093
C
Mishka and the Last Exam
Mishka is trying really hard to avoid being kicked out of the university. In particular, he was doing absolutely nothing for the whole semester, miraculously passed some exams so that just one is left. There were $n$ classes of that subject during the semester and on $i$-th class professor mentioned some non-negative integer $a_i$ to the students. It turned out, the exam was to tell the whole sequence back to the professor. Sounds easy enough for those who attended every class, doesn't it? Obviously Mishka didn't attend any classes. However, professor left some clues on the values of $a$ to help out students like Mishka: - $a$ was sorted in non-decreasing order ($a_1 \le a_2 \le \dots \le a_n$); - $n$ was even; - the following sequence $b$, consisting of $\frac n 2$ elements, was formed and given out to students: $b_i = a_i + a_{n - i + 1}$. Professor also mentioned that any sequence $a$, which produces sequence $b$ with the presented technique, will be acceptable. Help Mishka to pass that last exam. Restore any sorted sequence $a$ of non-negative integers, which produces sequence $b$ with the presented technique. It is guaranteed that there exists at least one correct sequence $a$, which produces the given sequence $b$.
Let's present the following greedy approach. The numbers will be restored in pairs $(a_1, a_n)$, $(a_2, a_{n - 1})$ and so on. Thus, we can maintain some limits on the values of the current pair (to satisfy the sortedness requirement). Initially, $l = 0, r = 10^{18}$, and after each pair they are updated with $l = a_i, r = a_{n - i + 1}$. Let $l$ be minimal possible in the answer. Take $a_i = max(l, b_i - r)$ and $a_{n - i + 1} = b_i - a_i$. That way both values are within the restrictions and the left one is also minimal possible. If $l$ were any greater, we would move both the $l$ limit up and the $r$ limit down, leaving less freedom for later choices. Overall complexity: $O(n)$. Funnily enough, I coded a naive solution just to test the main one and, with the restriction of $10^9$ on the numbers, it passed all tests in 300 ms at most. After I saw that, I guessed why it worked in $O(MAXVAL)$, but it looked fun nonetheless.
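The pair-by-pair restoration can be sketched like this (`restore` is my own naming; the initial $r = 10^{18}$ follows the tutorial):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Restore a from b pair by pair. l and r are the bounds inherited from
// the previously placed pair; taking a_i = max(l, b_i - r) keeps a
// sorted and keeps the left value as small as possible.
vector<long long> restore(const vector<long long>& b) {
    int n = 2 * (int)b.size();
    vector<long long> a(n);
    long long l = 0, r = (long long)1e18;
    for (int i = 0; i < (int)b.size(); ++i) {
        long long x = max(l, b[i] - r);
        a[i] = x;
        a[n - 1 - i] = b[i] - x;
        l = x;
        r = b[i] - x;
    }
    return a;
}
```

For $b = \{5, 6\}$ this produces $a = \{0, 1, 5, 5\}$: sorted, with $a_1 + a_4 = 5$ and $a_2 + a_3 = 6$.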
[ "greedy" ]
1,300
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; const long long INF64 = 1'000'000'000'000'000'000ll; const int N = 200 * 1000 + 13; int n; long long a[N], b[N]; void brute(int x, long long l, long long r){ if (x == n / 2){ forn(i, n) printf("%lld ", a[i]); puts(""); exit(0); } for (long long i = l; i <= b[x] / 2; ++i) if (b[x] - i <= r){ a[x] = i; a[n - x - 1] = b[x] - i; brute(x + 1, i, b[x] - i); } } int main() { scanf("%d", &n); forn(i, n / 2) scanf("%lld", &b[i]); brute(0, 0, INF64); return 0; }
1093
D
Beautiful Graph
You are given an undirected unweighted graph consisting of $n$ vertices and $m$ edges. You have to write a number on each vertex of the graph. Each number should be $1$, $2$ or $3$. The graph becomes beautiful if for each edge the sum of numbers on vertices connected by this edge is odd. Calculate the number of possible ways to write numbers $1$, $2$ and $3$ on vertices so the graph becomes beautiful. Since this number may be large, print it modulo $998244353$. \textbf{Note that you have to write exactly one number on each vertex}. The graph does not have any self-loops or multiple edges.
Let's denote a way to distribute numbers as a painting. Let's also call the paintings that meet the constraints good paintings (and all other paintings are bad). We can solve the problem for each connected component of the graph independently and multiply the answers. Let's analyze a painting of some connected component. If some vertex has an odd number written on it, then we should write even numbers on all adjacent vertices, and vice versa. So in fact we need to check if the component is bipartite, and if it is, divide it into two parts. The number of good paintings is $2^{a} + 2^b$, where $a$ is the size of the first part, and $b$ is the size of the second part: we write $2$'s into all vertices of one part and $1$'s or $3$'s into all vertices of the other part, and there are two ways to choose which part receives the $2$'s.
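Putting the pieces together, a compact sketch of the whole count (my own naming; BFS instead of the record's DFS, which changes nothing in the result):

```cpp
#include <bits/stdc++.h>
using namespace std;
const long long MOD = 998244353;

long long modpow(long long b, long long e) {
    long long r = 1; b %= MOD;
    for (; e; e >>= 1, b = b * b % MOD) if (e & 1) r = r * b % MOD;
    return r;
}

// Product over components of 2^{part0} + 2^{part1}; 0 if any component
// is not bipartite (odd cycle found during 2-coloring).
long long countPaintings(int n, const vector<pair<int, int>>& edges) {
    vector<vector<int>> g(n);
    for (auto& e : edges) { g[e.first].push_back(e.second); g[e.second].push_back(e.first); }
    vector<int> col(n, -1);
    long long ans = 1;
    for (int s = 0; s < n; ++s) {
        if (col[s] != -1) continue;
        long long cnt[2] = {0, 0};
        queue<int> q;
        col[s] = 0; q.push(s);
        while (!q.empty()) {
            int v = q.front(); q.pop();
            ++cnt[col[v]];
            for (int u : g[v]) {
                if (col[u] == -1) { col[u] = col[v] ^ 1; q.push(u); }
                else if (col[u] == col[v]) return 0;  // odd cycle
            }
        }
        ans = ans * ((modpow(2, cnt[0]) + modpow(2, cnt[1])) % MOD) % MOD;
    }
    return ans;
}
```

A single edge gives $2^1 + 2^1 = 4$ paintings, an isolated vertex gives $2^1 + 2^0 = 3$, and a triangle gives $0$.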
[ "dfs and similar", "graphs" ]
1,700
#include <bits/stdc++.h> using namespace std; const int N = int(3e5) + 999; const int MOD = 998244353; int n, m; vector <int> g[N]; int p2[N]; int cnt[2]; int col[N]; bool bad; void dfs(int v, int c){ col[v] = c; ++cnt[c]; for(auto to : g[v]){ if(col[to] == -1) dfs(to, 1 - c); if((col[v] ^ col[to]) == 0) bad = true; } } int main() { p2[0] = 1; for(int i = 1; i < N; ++i) p2[i] = (2 * p2[i - 1]) % MOD; int tc; scanf("%d", &tc); while(tc--){ scanf("%d%d", &n, &m); for(int i = 0; i < n; ++i) g[i].clear(); for(int i = 0; i < m; ++i){ int u, v; scanf("%d %d", &u, &v); --u, --v; g[u].push_back(v); g[v].push_back(u); } int res = 1; for(int i = 0; i < n; ++i) col[i] = -1; for(int i = 0; i < n; ++i){ if(col[i] != -1) continue; bad = false; cnt[0] = cnt[1] = 0; dfs(i, 0); if(bad){ puts("0"); break; } int cur = (p2[cnt[0]] + p2[cnt[1]]) % MOD; res = (res * 1LL * cur) % MOD; } if(!bad) printf("%d\n", res); } return 0; }
1093
E
Intersection of Permutations
You are given two permutations $a$ and $b$, both consisting of $n$ elements. Permutation of $n$ elements is such an integer sequence that each value from $1$ to $n$ appears exactly once in it. You are asked to perform two types of queries with them: - $1~l_a~r_a~l_b~r_b$ — calculate the number of values which appear in both segment $[l_a; r_a]$ of positions in permutation $a$ and segment $[l_b; r_b]$ of positions in permutation $b$; - $2~x~y$ — swap values on positions $x$ and $y$ in permutation $b$. Print the answer for each query of the first type. It is guaranteed that there will be at least one query of the first type in the input.
At first, the time limit was not that tight for the problem. We didn't want any sqrt, bitset or straight up $nm$ solution to pass (and close to none of them passes). The jury solution works more than twice as fast as the time limit, so we decided 6 seconds is alright. The task is purely about implementation. You renumber the values in the permutations so that the queries become (segment of values, segment of positions), and then you need a structure for point updates and sums over a rectangle. Renumbering in my case was making the first permutation the identity one and changing the numbers in the second appropriately. You can choose the structure you want; I'll tell about the one I use when the queries are offline. For online queries the common technique is having $n$ BITs, each with a treap in it (the ordered_set template from pbds is usually enough). For offline queries you can precalculate the values that fall into each BIT beforehand and build a BIT on these values inside. Preprocess all the update and get queries $(x, y)$, adding $y$ into all the BITs $x$ falls into. Then sort them and leave only unique occurrences. Overall complexity: $O((n + m) \cdot \log^2 n)$.
[ "data structures" ]
2,400
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; const int N = 200 * 1000 + 13; int n, m; int a[N], b[N], pos[N]; vector<int> f[N]; vector<int> vals[N]; void addupd(int x, int y){ for (int i = x; i < N; i |= i + 1) vals[i].push_back(y); } void addget(int x, int y){ if (x < 0 || y < 0) return; for (int i = x; i >= 0; i = (i & (i + 1)) - 1) vals[i].push_back(y); } void upd(int x, int y, int val){ for (int i = x; i < N; i |= i + 1) for (int j = lower_bound(vals[i].begin(), vals[i].end(), y) - vals[i].begin(); j < int(f[i].size()); j |= j + 1) f[i][j] += val; } int get(int x, int y){ if (x < 0 || y < 0) return 0; int res = 0; for (int i = x; i >= 0; i = (i & (i + 1)) - 1) for (int j = lower_bound(vals[i].begin(), vals[i].end(), y) - vals[i].begin(); j >= 0; j = (j & (j + 1)) - 1) res += f[i][j]; return res; } struct query{ int t, la, ra, lb, rb; query(){}; }; query q[N]; int main() { scanf("%d%d", &n, &m); forn(i, n){ scanf("%d", &a[i]); --a[i]; pos[a[i]] = i; } forn(i, n){ scanf("%d", &b[i]); --b[i]; b[i] = pos[b[i]]; } forn(i, m){ scanf("%d", &q[i].t); if (q[i].t == 1){ scanf("%d%d%d%d", &q[i].la, &q[i].ra, &q[i].lb, &q[i].rb); --q[i].la, --q[i].ra, --q[i].lb, --q[i].rb; } else{ scanf("%d%d", &q[i].lb, &q[i].rb); --q[i].lb, --q[i].rb; } } vector<int> tmp(b, b + n); forn(i, n) addupd(i, b[i]); forn(i, m){ if (q[i].t == 1){ addget(q[i].rb, q[i].ra); addget(q[i].lb - 1, q[i].ra); addget(q[i].rb, q[i].la - 1); addget(q[i].lb - 1, q[i].la - 1); } else{ addupd(q[i].lb, b[q[i].lb]); addupd(q[i].rb, b[q[i].rb]); swap(b[q[i].lb], b[q[i].rb]); addupd(q[i].lb, b[q[i].lb]); addupd(q[i].rb, b[q[i].rb]); } } forn(i, n) b[i] = tmp[i]; forn(i, N){ sort(vals[i].begin(), vals[i].end()); vals[i].resize(unique(vals[i].begin(), vals[i].end()) - vals[i].begin()); f[i].resize(vals[i].size(), 0); } forn(i, n) upd(i, b[i], 1); forn(i, m){ if (q[i].t == 1){ int res = 0; res += get(q[i].rb, q[i].ra); res -= get(q[i].lb - 1, q[i].ra); res -= 
get(q[i].rb, q[i].la - 1); res += get(q[i].lb - 1, q[i].la - 1); printf("%d\n", res); } else{ upd(q[i].lb, b[q[i].lb], -1); upd(q[i].rb, b[q[i].rb], -1); swap(b[q[i].lb], b[q[i].rb]); upd(q[i].lb, b[q[i].lb], 1); upd(q[i].rb, b[q[i].rb], 1); } } return 0; }
1093
F
Vasya and Array
Vasya has got an array consisting of $n$ integers, and two integers $k$ and $len$ in addition. All numbers in the array are either between $1$ and $k$ (inclusive), or equal to $-1$. The array is good if there is no segment of $len$ consecutive \textbf{equal} numbers. Vasya will replace each $-1$ with some number from $1$ to $k$ (inclusive) in such a way that the resulting array is good. Tell him the number of ways to do this replacement. Since the answer may be large, print it modulo $998244353$.
Let's try dynamic programming approach to this problem. Let $dp_{cnt, lst}$ be the number of ways to replace all $-1$ with numbers from $1$ to $k$ in such a way that array $a_{1 \dots cnt}$ is good and the last number of that array is $lst$. Let $sdp_{cnt} = \sum\limits_{i=1}^{k} dp_{cnt, i}$. Then initially it's $dp_{cnt, lst} = sdp_{cnt-1}$ if $a_{cnt}$ equals $-1$ or $lst$. However, we could include incorrect states - such that the segment $[a_{cnt-len+1} \dots a_{cnt}]$ consists of the same value. It happens when: $cnt \ge len$, as we should have at least $len$ elements; segment $[a_{cnt-len+1} \dots a_{cnt}]$ has all its elements either equal to $-1$ or $lst$. If both of these conditions hold then you should subtract all the bad states from $dp_{cnt, lst}$. The number of them is $sdp_{cnt-len} - dp_{cnt-len, lst}$ (and exactly $1$ when $cnt = len$, as the whole prefix consists of the same value).
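A direct transcription of this recurrence, with values shifted to $0..k-1$ (`countGood` is a hypothetical name, not the model code):

```cpp
#include <bits/stdc++.h>
using namespace std;
const long long MOD = 998244353;

// dp[i][j] = number of good fillings of a[0..i] ending with value j
// (-1 in the input means "free"). Start from sdp[i-1] and subtract the
// fillings whose last `len` cells are all j; those exist only if each of
// the last `len` input cells is -1 or j.
long long countGood(const vector<int>& a, int k, int len) {
    int n = a.size();
    // cnt[j][i] = how many of a[0..i-1] are -1 or j
    vector<vector<int>> cnt(k, vector<int>(n + 1, 0));
    for (int j = 0; j < k; ++j)
        for (int i = 0; i < n; ++i)
            cnt[j][i + 1] = cnt[j][i] + (a[i] == -1 || a[i] == j);
    vector<vector<long long>> dp(n, vector<long long>(k, 0));
    vector<long long> sdp(n, 0);
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < k; ++j) {
            if (a[i] != -1 && a[i] != j) continue;
            dp[i][j] = (i > 0 ? sdp[i - 1] : 1);
            if (i + 1 >= len && cnt[j][i + 1] - cnt[j][i + 1 - len] == len) {
                long long bad = (i + 1 == len)
                    ? 1
                    : (sdp[i - len] - dp[i - len][j] + MOD) % MOD;
                dp[i][j] = (dp[i][j] - bad + MOD) % MOD;
            }
        }
        for (int j = 0; j < k; ++j) sdp[i] = (sdp[i] + dp[i][j]) % MOD;
    }
    return sdp[n - 1];
}
```

With $n = 2$, $k = 2$, $len = 2$ and both cells free, only the two non-constant arrays survive.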
[ "dp" ]
2,400
#include <bits/stdc++.h> #define fore(i, l, r) for(int i = int(l); i < int(r); ++i) #define forn(i, n) fore(i, 0, n) #define nfor(i, n) for(int i = int(n) - 1; i >= 0; --i) #define for1(i, n) for(int i = 1; i < int(n); ++i) #define mp make_pair #define pb push_back #define sz(a) int((a).size()) #define all(a) (a).begin(), (a).end() #define x first #define y second #define correct(x, y, xmax, ymax) ((x) >= 0 && (x) < (xmax) && (y) >= 0 && (y) < (ymax)) #define max(a, b) ((a > b)? a : b) #define min(a, b) ((a < b)? a : b) #define abs(a) ((a < 0) ? -a : a) #define sqr(a) (a * a) using namespace std; const int N = int(1e5); const int M = 105; const int MOD = 998244353; int sum(int a, int b){ return (a + b) % MOD; } int n, k, len; int a[N]; int dp[N][M]; int sumdp[N]; int cnt[M][N]; int main() { scanf("%d %d %d", &n, &k, &len); forn(i, n){ scanf("%d", a + i); if(a[i] != -1) --a[i]; } forn(i, k) forn(j, n) cnt[i][j + 1] = cnt[i][j] + (a[j] == i || a[j] == -1); forn(i, n){ forn(j, k){ if(!(a[i] == -1 || a[i] == j)) continue; int add = 1; if(i > 0) add = sumdp[i - 1]; dp[i][j] = add; bool ok = i + 1 >= len; int l = max(0, i - len + 1); int r = i + 1; ok &= (r - l == cnt[j][r] - cnt[j][l]); if(!ok) continue; if(i + 1 == len){ dp[i][j] = sum(dp[i][j], MOD - 1); continue; } add = sum(dp[i - len][j], MOD - sumdp[i - len]); dp[i][j] = sum(dp[i][j], add); } forn(j, k) sumdp[i] = sum(sumdp[i], dp[i][j]); } printf("%d\n", sumdp[n - 1]); return 0; }
1093
G
Multidimensional Queries
You are given an array $a$ of $n$ points in $k$-dimensional space. Let the distance between two points $a_x$ and $a_y$ be $\sum \limits_{i = 1}^{k} |a_{x, i} - a_{y, i}|$ (it is also known as Manhattan distance). You have to process $q$ queries of the following two types: - $1$ $i$ $b_1$ $b_2$ ... $b_k$ — set $i$-th element of $a$ to the point $(b_1, b_2, \dots, b_k)$; - $2$ $l$ $r$ — find the maximum distance between two points $a_i$ and $a_j$, where $l \le i, j \le r$.
Let's rewrite the formula of distance between two points as follows: $\sum \limits_{i = 1}^{k} |a_{x, i} - a_{y, i}| = \sum \limits_{i = 1}^{k} c_i (a_{x, i} - a_{y, i}) = \sum \limits_{i = 1}^{k} c_i a_{x, i} - \sum \limits_{i = 1}^{k} c_i a_{y, i}$, where $c_i = 1$ if $a_{x, i} \ge a_{y, i}$, otherwise $c_i = -1$. Consider what happens if we change some $c_i$ to the opposite value: the result of this formula obviously won't increase, so we may try all possible values of $c_i$ and pick the maximum result. This allows us to consider every option to set the values of $c_i$ (there are $2^k$ such options) separately, and when we fix a set of values of $c_i$, find two points that maximize the distance as it is expressed with the fixed $c_i$. To do so, we have to find the point having maximum $\sum \limits_{i = 1}^{k} c_i a_{x, i}$ and the point having minimum $\sum \limits_{i = 1}^{k} c_i a_{x, i}$. So actually our problem is reduced to the following: we have $2^k$ arrays, and we want to process queries in the form "change an element of some array" and "find maximum and minimum on some segment of some array". This can be done simply by building a segment tree over each array, and then we get a solution with $O((n + q) 2^k \log n)$ time complexity.
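The identity behind the first step can be checked in isolation: the Manhattan distance equals the maximum of the signed-sum differences over all $2^k$ sign assignments. A small sketch (function names are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Signed coordinate sum for one sign assignment: bit i of mask chooses
// +p[i] or -p[i].
long long signedSum(const vector<long long>& p, int mask) {
    long long s = 0;
    for (size_t i = 0; i < p.size(); ++i)
        s += (mask >> i & 1) ? p[i] : -p[i];
    return s;
}

// |x - y|_1 as a maximum over all 2^k sign masks.
long long manhattanViaMasks(const vector<long long>& x, const vector<long long>& y) {
    int k = x.size();
    long long best = LLONG_MIN;
    for (int mask = 0; mask < (1 << k); ++mask)
        best = max(best, signedSum(x, mask) - signedSum(y, mask));
    return best;
}
```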
[ "bitmasks", "data structures" ]
2,300
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; const int N = 200 * 1000 + 13; const int INF = 1e9; struct point{ int a[5]; point(){}; int& operator[](int x){ return a[x]; } }; int n, k; point a[N]; int t[32][4 * N]; int apply(point& p, int mask){ int res = 0; forn(i, k){ res += (mask & 1 ? p[i] : -p[i]); mask >>= 1; } return res; } void build(int v, int l, int r){ if (l == r - 1){ forn(mask, 1 << k) t[mask][v] = apply(a[l], mask); return; } int m = (l + r) / 2; build(v * 2, l, m); build(v * 2 + 1, m, r); forn(mask, 1 << k) t[mask][v] = min(t[mask][v * 2], t[mask][v * 2 + 1]); } void upd(int v, int l, int r, int pos, point& val){ if (l == r - 1){ forn(mask, 1 << k) t[mask][v] = apply(val, mask); return; } int m = (l + r) / 2; if (pos < m) upd(v * 2, l, m, pos, val); else upd(v * 2 + 1, m, r, pos, val); forn(mask, 1 << k) t[mask][v] = min(t[mask][v * 2], t[mask][v * 2 + 1]); } int bst[32]; void get(int v, int l, int r, int L, int R){ if (L >= R) return; if (l == L && r == R){ forn(mask, 1 << k) bst[mask] = min(bst[mask], t[mask][v]); return; } int m = (l + r) / 2; get(v * 2, l, m, L, min(m, R)); get(v * 2 + 1, m, r, max(m, L), R); } int main() { scanf("%d%d", &n, &k); forn(i, n) forn(j, k) scanf("%d", &a[i][j]); build(1, 0, n); int m; scanf("%d", &m); forn(_, m){ int t; scanf("%d", &t); if (t == 1){ int i; scanf("%d", &i); --i; point tmp; forn(j, k) scanf("%d", &tmp[j]); upd(1, 0, n, i, tmp); } else{ int l, r; scanf("%d%d", &l, &r); --l, --r; forn(mask, 1 << k) bst[mask] = INF; get(1, 0, n, l, r + 1); int ans = 0; forn(mask, 1 << k) ans = max(ans, abs(bst[(1 << k) - 1 - mask] + bst[mask])); printf("%d\n", ans); } } return 0; }
1095
A
Repeating Cipher
Polycarp loves ciphers. He has invented his own cipher called repeating. Repeating cipher is used for strings. To encrypt the string $s=s_{1}s_{2} \dots s_{m}$ ($1 \le m \le 10$), Polycarp uses the following algorithm: - he writes down $s_1$ ones, - he writes down $s_2$ twice, - he writes down $s_3$ three times, - ... - he writes down $s_m$ $m$ times. For example, if $s$="bab" the process is: "b" $\to$ "baa" $\to$ "baabbb". So the encrypted $s$="bab" is "baabbb". Given string $t$ — the result of encryption of some string $s$. Your task is to decrypt it, i. e. find the string $s$.
There are many possible approaches to this problem; I will describe one of the easiest. Let's print the initial string by the following algorithm: firstly, init the variable $i = 1$. Then, while the encrypted string isn't empty, print the first character of this string, remove the first $i$ characters from it and increase $i$ by one.
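A sketch of this decryption loop, done with an index instead of actually erasing characters (`decrypt` is a hypothetical helper):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Take the character at `index`, then skip `gap` characters and grow the
// gap by one — equivalent to "print the first character, remove the
// first i characters, increase i".
string decrypt(const string& t) {
    string s;
    size_t index = 0, gap = 1;
    while (index < t.size()) {
        s += t[index];
        index += gap;
        ++gap;
    }
    return s;
}
```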
[ "implementation" ]
800
#include <bits/stdc++.h> using namespace std; int main() { int n; cin >> n; string s; cin >> s; int index = 0; int gap = 1; while (index < n) cout << s[index], index += gap, gap++; }
1095
B
Array Stabilization
You are given an array $a$ consisting of $n$ integer numbers. Let instability of the array be the following value: $\max\limits_{i = 1}^{n} a_i - \min\limits_{i = 1}^{n} a_i$. You have to remove \textbf{exactly one} element from this array to minimize instability of the resulting $(n-1)$-elements array. Your task is to calculate the minimum possible instability.
It is easy to see that we always have to remove either the minimum or the maximum of the array. So we can sort the array and the answer will be $min(a_{n - 1} - a_{1}, a_{n} - a_{2})$. We can also do it without sorting because the two minimal and two maximal elements of the array can be found in linear time.
[ "implementation" ]
900
#include <bits/stdc++.h> using namespace std; int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif int n; cin >> n; vector<int> a(n); for (int i = 0; i < n; ++i) { cin >> a[i]; } sort(a.begin(), a.end()); cout << min(a[n - 2] - a[0], a[n - 1] - a[1]) << endl; return 0; }
1095
C
Powers Of Two
A positive integer $x$ is called a power of two if it can be represented as $x = 2^y$, where $y$ is a non-negative integer. So, the powers of two are $1, 2, 4, 8, 16, \dots$. You are given two positive integers $n$ and $k$. Your task is to represent $n$ as the \textbf{sum} of \textbf{exactly} $k$ powers of two.
First of all, let's analyze how we can calculate the minimum number of powers of two needed to get $n$ as the sum. We can use the binary representation of $n$: each bit in it which is equal to $1$ becomes a summand in the answer. Firstly, if the number of summands is greater than $k$ then the answer is NO (the same holds if $n < k$, since each summand is at least $1$). Okay, what if we don't have enough summands? Every summand $x > 1$ can be broken down into two summands equal to $\frac{x}{2}$. Let's maintain all summands greater than $1$ somewhere (stack, array, queue, multiset, anything you want), and pick an arbitrary summand and break it into two until we have exactly $k$ summands. If $n \ge k$, then this process will terminate since we will have some summand to pick until all of them are equal to $1$.
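A sketch of this splitting process (`splitIntoPowers` is a helper of my own; it splits the largest summand each time, though any summand greater than $1$ works):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Represent n as exactly k powers of two, or return {} if impossible.
// Start from the set bits of n; while there are fewer than k summands,
// split the largest summand x into two copies of x / 2. While the count
// is below k <= n, the largest summand is guaranteed to exceed 1.
vector<int> splitIntoPowers(int n, int k) {
    int need = 0;
    for (int i = 0; i < 31; ++i) need += n >> i & 1;
    if (need > k || n < k) return {};
    priority_queue<int> pq;
    for (int i = 0; i < 31; ++i)
        if (n >> i & 1) pq.push(1 << i);
    while ((int)pq.size() < k) {
        int x = pq.top(); pq.pop();
        pq.push(x / 2);
        pq.push(x / 2);
    }
    vector<int> res;
    while (!pq.empty()) { res.push_back(pq.top()); pq.pop(); }
    return res;
}
```

For example, $9$ as $4$ summands becomes $4 + 2 + 2 + 1$.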
[ "bitmasks", "greedy" ]
1400
#include<bits/stdc++.h> using namespace std; int main() { int n, k; cin >> n >> k; map<int, int> ans; queue<int> q; for(int i = 0; i <= 30; i++) if(n & (1 << i)) { if(i > 0) q.push(1 << i); ans[1 << i]++; } if(int(ans.size()) > k) { puts("NO"); return 0; } int cnt = ans.size(); while(cnt < k && !q.empty()) { int z = q.front(); q.pop(); ans[z]--; ans[z / 2] += 2; if(z / 2 > 1) { q.push(z / 2); q.push(z / 2); } cnt++; } if(cnt < k) { puts("NO"); return 0; } puts("YES"); for(auto x : ans) for(int i = 0; i < x.second; i++) printf("%d ", x.first); puts(""); return 0; }
1095
D
Circular Dance
There are $n$ kids, numbered from $1$ to $n$, dancing in a circle around the Christmas tree. Let's enumerate them in a clockwise direction as $p_1$, $p_2$, ..., $p_n$ (all these numbers are from $1$ to $n$ and are distinct, so $p$ is a permutation). Let the next kid for a kid $p_i$ be kid $p_{i + 1}$ if $i < n$ and $p_1$ otherwise. After the dance, each kid remembered two kids: the next kid (let's call him $x$) and the next kid for $x$. Each kid told you which kids he/she remembered: the kid $i$ remembered kids $a_{i, 1}$ and $a_{i, 2}$. However, the order of $a_{i, 1}$ and $a_{i, 2}$ can differ from their order in the circle. \begin{center} {\small Example: 5 kids in a circle, $p=[3, 2, 4, 1, 5]$ (or any cyclic shift). The information kids remembered is: $a_{1,1}=3$, $a_{1,2}=5$; $a_{2,1}=1$, $a_{2,2}=4$; $a_{3,1}=2$, $a_{3,2}=4$; $a_{4,1}=1$, $a_{4,2}=5$; $a_{5,1}=2$, $a_{5,2}=3$.} \end{center} You have to restore the order of the kids in the circle using this information. If there are several answers, you may print any. It is guaranteed that at least one solution exists. \textbf{If you are Python programmer, consider using PyPy instead of Python when you submit your code.}
Let's write a function check(a, b) that tries to restore the circle assuming that kid number $b$ comes right after kid number $a$. If $b$ comes right after $a$, then we can determine $c$ — the kid who comes next after $b$. So now we have: $b$ right after $a$, $c$ right after $b$. Then we determine $d$ — the kid next after $c$ — and so on. Repeating this operation $n$ times "determines" the answer under the assumption that $b$ comes right after $a$. But the assumption can be wrong, so we have to check that the restored circle is consistent with the input. With this function, we can find the correct answer with at most two calls: check($1, a_{1, 1}$) and check($1, a_{1, 2}$).
[ "implementation" ]
1600
#include <bits/stdc++.h> using namespace std; int n; vector<vector<int>> a; void check(int l, int r) { vector<int> ans; for (int i = 0; i < n; ++i) { int nxt = -1; if (a[l][0] == r) { nxt = a[l][1]; } else if (a[l][1] == r) { nxt = a[l][0]; } else { return; } ans.push_back(nxt); l = r; r = nxt; } for (auto it : ans) { cout << it + 1 << " "; } cout << endl; exit(0); } int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif cin >> n; a = vector<vector<int>> (n, vector<int>(2)); for (int i = 0; i < n; ++i) { cin >> a[i][0] >> a[i][1]; --a[i][0]; --a[i][1]; } check(0, a[0][0]); check(0, a[0][1]); assert(false); return 0; }
1095
E
Almost Regular Bracket Sequence
You are given a bracket sequence $s$ consisting of $n$ opening '(' and closing ')' brackets. A regular bracket sequence is a bracket sequence that can be transformed into a correct arithmetic expression by inserting characters '1' and '+' between the original characters of the sequence. For example, bracket sequences "()()", "(())" are regular (the resulting expressions are: "(1)+(1)", "((1+1)+1)"), and ")(" and "(" are not. You can change the type of some bracket $s_i$. It means that if $s_i = $ ')' then you can change it to '(' and vice versa. Your task is to calculate the number of positions $i$ such that if you change the type of the $i$-th bracket, then the resulting bracket sequence becomes regular.
In this problem we have to calculate the number (count) of positions such that, if we change the type of the bracket at this position, the obtained bracket sequence becomes regular. Let's calculate the balance of each prefix of the bracket sequence and store it in the array $prefBal$: iterate over the string from left to right, increasing the current balance by one on an opening bracket and decreasing it by one on a closing one. For each prefix let's also calculate whether it can be a prefix of a regular bracket sequence (RBS) and store it in the array $prefCan$. The prefix of length $i$ can be a prefix of an RBS if and only if the prefix of length $i-1$ can and $prefBal_i \ge 0$. Let's calculate the same arrays for all suffixes (call them $sufBal$ and $sufCan$), but now a closing bracket increases the balance by one, an opening one decreases it, and we scan the characters from right to left. With these arrays, let's iterate over all positions of the initial bracket sequence. At position $i$, do the following: firstly, if $prefCan_{i-1} = false$ or $sufCan_{i+1} = false$, skip this position. Otherwise, if the current bracket is opening, increase the answer if $prefBal_{i-1} > 0$ and $prefBal_{i - 1} - 1 - sufBal_{i + 1} = 0$ (only in this case the bracket sequence becomes regular). And if the current bracket is closing, increase the answer if $prefBal_{i - 1} + 1 - sufBal_{i + 1} = 0$.
[ "implementation" ]
1900
#include <bits/stdc++.h> using namespace std; int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif int n; string s; cin >> n >> s; string rs(s.rbegin(), s.rend()); for (int i = 0; i < n; ++i) { if (rs[i] == '(') { rs[i] = ')'; } else { rs[i] = '('; } } vector<int> pref(n + 1), suf(n + 1); vector<bool> okp(n + 1), oks(n + 1); okp[0] = oks[n] = true; for (int i = 0; i < n; ++i) { pref[i + 1] = pref[i] + (s[i] == '(' ? +1 : -1); okp[i + 1] = okp[i] & (pref[i + 1] >= 0); suf[n - i - 1] = suf[n - i] + (rs[i] == '(' ? +1 : -1); oks[n - i - 1] = oks[n - i] & (suf[n - i - 1] >= 0); } int ans = 0; for (int i = 0; i < n; ++i) { if (!okp[i] || !oks[i + 1]) { continue; } if (s[i] == '(') { if (pref[i] > 0 && pref[i] - 1 - suf[i + 1] == 0) { ++ans; } } else { if (pref[i] + 1 - suf[i + 1] == 0) { ++ans; } } } cout << ans << endl; return 0; }
1095
F
Make It Connected
You are given an undirected graph consisting of $n$ vertices. A number is written on each vertex; the number on vertex $i$ is $a_i$. Initially there are no edges in the graph. You may add some edges to this graph, but you have to pay for them. The cost of adding an edge between vertices $x$ and $y$ is $a_x + a_y$ coins. There are also $m$ special offers, each of them is denoted by three numbers $x$, $y$ and $w$, and means that you can add an edge connecting vertices $x$ and $y$ and pay $w$ coins for it. You don't have to use special offers: if there is a pair of vertices $x$ and $y$ that has a special offer associated with it, you still may connect these two vertices paying $a_x + a_y$ coins for it. What is the minimum number of coins you have to spend to make the graph connected? Recall that a graph is connected if it's possible to get from any vertex to any other vertex using only the edges belonging to this graph.
Suppose we have found all the edges of the graph explicitly, sorted them, and run Kruskal's algorithm on the sorted list of edges. Each time we add an edge to the MST, it is either a special edge given in the input or an edge generated with cost $a_x + a_y$ (whichever costs less). Let's analyze how to find the cheapest edge of each type that connects two components. For special edges, we may maintain a pointer over the sorted list of special edges: when choosing a new edge, we skip some more (possibly zero) special edges that don't connect anything until we find one that connects something. For the other type of edges, we should take the two components whose minimum vertex values are as small as possible and connect the minimum vertex of the first component with the minimum vertex of the second. We may simulate this by maintaining a data structure (for example, a multiset) storing, for each component, the vertex with minimum $a_i$ in it, and picking the two smallest values from this set. We also have to be able to check whether two vertices are connected (this can be done with DSU) and to merge two components. But this solution can be made simpler. Every time we add a "non-special" edge, one of its endpoints is the vertex with the minimum $a_i$ overall. So we may just find this vertex, generate all edges connecting it to every other vertex, merge this set of edges with the set of special edges, and run any MST algorithm on the resulting set. If there are multiple minimums in the array $a$, we may pick any of them, because in Kruskal's algorithm it doesn't matter which of the equal-cost edges we try to add first.
[ "dsu", "graphs", "greedy" ]
1900
#include<bits/stdc++.h> using namespace std; #define x first #define y second #define mp make_pair typedef pair<long long, pair<int, int> > edge; const int N = 200043; int p[N]; long long a[N]; int get_leader(int x) { return (x == p[x] ? x : (p[x] = get_leader(p[x]))); } bool merge(int x, int y) { x = get_leader(x); y = get_leader(y); if(x == y) return false; p[x] = y; return true; } int main() { int n, m; scanf("%d %d", &n, &m); for(int i = 0; i < n; i++) scanf("%lld", &a[i]); vector<edge> es(m); for(int i = 0; i < m; i++) { scanf("%d %d %lld", &es[i].y.x, &es[i].y.y, &es[i].x); es[i].y.x--; es[i].y.y--; } int z = min_element(a, a + n) - a; for(int i = 0; i < n; i++) if(i != z) es.push_back(mp(a[i] + a[z], mp(i, z))); sort(es.begin(), es.end()); long long ans = 0; for(int i = 0; i < n; i++) p[i] = i; for(auto e : es) if(merge(e.y.x, e.y.y)) ans += e.x; printf("%lld\n", ans); return 0; }
1096
A
Find Divisible
You are given a range of positive integers from $l$ to $r$. Find such a pair of integers $(x, y)$ that $l \le x, y \le r$, $x \ne y$ and $x$ divides $y$. If there are multiple answers, print any of them. You are also asked to answer $T$ independent queries.
Print $l$ and $2l$. Firstly, the smallest value of $\frac{y}{x}$ with $x \ne y$ is $2$, and if any greater ratio fits into the range, then $2$ fits as well. Secondly, the absolute difference between $x$ and $2x$ grows as $x$ increases, lessening the chance of both numbers fitting into the range, so $x = l$ is the best candidate. Overall complexity: $O(1)$ per query.
[ "greedy", "implementation", "math" ]
800
T = int(input()) for i in range(T): l, r = map(int, input().split()) print(l, l * 2)
1096
B
Substring Removal
You are given a string $s$ of length $n$ consisting only of lowercase Latin letters. A substring of a string is a contiguous subsequence of that string. So, string "forces" is substring of string "codeforces", but string "coder" is not. Your task is to calculate the number of ways to remove \textbf{exactly} one substring from this string in such a way that \textbf{all} remaining characters are \textbf{equal} (the number of distinct characters either zero or one). It is guaranteed that there is \textbf{at least two} different characters in $s$. Note that you \textbf{can} remove the whole string and it is correct. Also note that you should \textbf{remove at least one character}. Since the answer can be rather large (not very large though) print it modulo $998244353$. \textbf{If you are Python programmer, consider using PyPy instead of Python when you submit your code.}
Firstly, let's calculate the length of the longest prefix of equal letters (let it be $l$) and the length of the longest suffix of equal letters (let it be $r$). It can be done with two loops with breaks. This prefix and suffix cannot overlap because the string contains at least two different characters. Then let's consider two cases: the first one is $s_1 \ne s_n$ and the second one is $s_1 = s_n$. In the first case we can only keep either a prefix or a suffix of $s$ consisting of equal letters, so the answer is $l + r + 1$ (we can keep from $1$ to $l$ letters of the prefix, from $1$ to $r$ letters of the suffix, or the empty string). In the second case we can keep from $0$ to $l$ letters of the prefix and from $0$ to $r$ letters of the suffix, and because $s_1 = s_n$ we can combine these choices freely, so the answer is $(l + 1) \cdot (r + 1)$. And a bonus (this case does not occur in the given problem): if all letters of the string are equal, then the answer is $\frac{n (n-1)}{2} + n$, because we can remove any substring of $s$ of length at least $2$ or any substring of length $1$.
[ "combinatorics", "math", "strings" ]
1300
#include <bits/stdc++.h> using namespace std; int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif int n; string s; cin >> n >> s; int cntl = 0, cntr = 0; for (int i = 0; i < n; ++i) { if (s[i] == s[0]) { ++cntl; } else { break; } } for (int i = n - 1; i >= 0; --i) { if (s[i] == s[n - 1]) { ++cntr; } else { break; } } if (s[0] == s[n - 1]) { cout << ((cntl + 1) * 1ll * (cntr + 1)) % 998244353 << endl; } else { cout << (cntl + cntr + 1) % 998244353; } return 0; }
1096
C
Polygon for the Angle
You are given an angle $\text{ang}$. The Jury asks You to find such \textbf{regular} $n$-gon (regular polygon with $n$ vertices) that it has three vertices $a$, $b$ and $c$ (they can be non-consecutive) with $\angle{abc} = \text{ang}$ or report that there is no such $n$-gon. If there are several answers, print the \textbf{minimal} one. It is guaranteed that if the answer exists then it doesn't exceed $998244353$.
At first, let's prove that the possible angles in a regular $n$-gon are exactly $\frac{180k}{n}$, where $1 \le k \le n - 2$. To prove it, build the circumscribed circle of the $n$-gon. The circle is divided into $n$ equal arcs of $\frac{360}{n}$ degrees each. Any possible angle in the $n$-gon is an inscribed angle of the circle and equals half of the corresponding central angle. Any central angle, in turn, equals the sum of some $k$ consecutive arcs. As a result, any angle equals $\frac{1}{2} \cdot k \cdot \frac{360}{n} = \frac{180k}{n}$. The maximal possible angle is formed by three consecutive vertices and equals (by properties of regular polygons) $\frac{180(n-2)}{n}$, which explains the bound $k \le n - 2$. So, we need to find the minimal integer $n$ such that $\frac{180k}{n} = \text{ang}$ for some integer $k$ with $1 \le k \le n - 2$. This is equivalent to finding the minimal integer solution of $180k = \text{ang} \cdot n$. Let $g = \text{GCD}(180, \text{ang})$; then we can divide both sides by $g$, obtaining $\frac{180}{g}k = \frac{\text{ang}}{g}n$. Since $\text{GCD}(\frac{180}{g}, \frac{\text{ang}}{g}) = 1$, $\frac{180}{g}$ must divide $n$ and, similarly, $\frac{\text{ang}}{g}$ must divide $k$. So the solutions are: $n = x \cdot \frac{180}{g}$ and $k = x \cdot \frac{\text{ang}}{g}$. We are looking for the minimal solution, so $x$ is almost always $1$; the only exception is the case $k = n - 1$, where we must take $x = 2$ because of the restriction $k \le n - 2$.
[ "brute force", "geometry" ]
1600
#include<bits/stdc++.h> using namespace std; int ang; inline bool read() { if(!(cin >> ang)) return false; return true; } inline void solve() { int g = __gcd(ang, 180); int k = ang / g; int n = 180 / g; if(k + 1 == n) k *= 2, n *= 2; cout << n << endl; } int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); int tt = clock(); #endif cout << fixed << setprecision(15); int tc; cin >> tc; while(tc--) { assert(read()); solve(); #ifdef _DEBUG cerr << "TIME = " << clock() - tt << endl; tt = clock(); #endif } return 0; }
1096
D
Easy Problem
Vasya is preparing a contest, and now he has written a statement for an easy problem. The statement is a string of length $n$ consisting of lowercase Latin letters. Vasya thinks that the statement can be considered hard if it contains a subsequence hard; otherwise the statement is easy. For example, hard, hzazrzd, haaaaard can be considered hard statements, while har, hart and drah are easy statements. Vasya doesn't want the statement to be hard. He may remove some characters from the statement in order to make it easy. But, of course, some parts of the statement can be crucial to understanding. Initially the ambiguity of the statement is $0$, and removing $i$-th character increases the ambiguity by $a_i$ (the index of each character is considered as it was in the original statement, so, for example, if you delete character r from hard, and then character d, the index of d is still $4$ even though you delete it from the string had). Vasya wants to calculate the minimum ambiguity of the statement, if he removes some characters (possibly zero) so that the statement is easy. Help him to do it! Recall that subsequence is a sequence that can be derived from another sequence by deleting some elements without changing the order of the remaining elements.
Denote the string hard by $t$. We will solve this problem with dynamic programming. Denote by $dp_{cnt, len}$ the minimum possible ambiguity if we have considered the first $cnt$ letters of the statement and the longest prefix of $t$ occurring as a subsequence of the kept letters has length $len$ (states with $len = 4$ are forbidden, as they mean hard remains as a subsequence). If the $cnt$-th letter of the statement is not equal to $t_{len + 1}$, keeping it changes nothing: $dp_{cnt, len} = dp_{cnt-1, len}$. Otherwise we either remove this letter, paying $a_{cnt}$, or keep it, which extends the matched prefix: from $dp_{cnt-1, len}$ we update both $dp_{cnt, len} \gets dp_{cnt-1, len} + a_{cnt}$ and $dp_{cnt, len+1} \gets dp_{cnt-1, len}$. The answer is $\min_{0 \le len \le 3} dp_{n, len}$.
[ "dp" ]
1800
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; const int N = 100 * 1000 + 13; const long long INF64 = 1e18; int n; string s; int a[N]; long long dp[N][5]; const string h = "hard"; int main() { scanf("%d", &n); static char buf[N]; scanf("%s", buf); s = buf; forn(i, n) scanf("%d", &a[i]); forn(i, N) forn(j, 5) dp[i][j] = INF64; dp[0][0] = 0; forn(i, n) forn(j, 4){ dp[i + 1][j + (s[i] == h[j])] = min(dp[i + 1][j + (s[i] == h[j])], dp[i][j]); dp[i + 1][j] = min(dp[i + 1][j], dp[i][j] + a[i]); } printf("%lld\n", *min_element(dp[n], dp[n] + 4)); return 0; }
1096
E
The Top Scorer
Hasan loves playing games and has recently discovered a game called TopScore. In this soccer-like game there are $p$ players doing penalty shoot-outs. Winner is the one who scores the most. \textbf{In case of ties, one of the top-scorers will be declared as the winner randomly with equal probability.} They have just finished the game and now are waiting for the result. But there's a tiny problem! The judges have lost the paper of scores! Fortunately they have calculated sum of the scores before they get lost and also for some of the players they have remembered a lower bound on how much they scored. However, the information about the bounds is private, so Hasan only got to know his bound. According to the available data, he knows that his score is at least $r$ and sum of the scores is $s$. Thus the final state of the game can be represented in form of sequence of $p$ integers $a_1, a_2, \dots, a_p$ ($0 \le a_i$) — player's scores. Hasan is player number $1$, so $a_1 \ge r$. Also $a_1 + a_2 + \dots + a_p = s$. Two states are considered different if there exists some position $i$ such that the value of $a_i$ differs in these states. \textbf{Once again, Hasan doesn't know the exact scores (he doesn't know his exact score as well). So he considers each of the final states to be equally probable to achieve.} Help Hasan find the probability of him winning. It can be shown that it is in the form of $\frac{P}{Q}$ where $P$ and $Q$ are non-negative integers and $Q \ne 0$, $P \le Q$. Report the value of $P \cdot Q^{-1} \pmod {998244353}$.
A straightforward dp solution is to calculate $dp_{s,p,m} =$ {number of states at the end of the game in which $p$ players score $s$ goals in total and no one scores more than $m$ goals}. Fix Hasan's score in the game, and using this dp the rest is easy (also described below). But since the time needed to calculate $dp_{s,p,m}$ is $O(s \cdot r \cdot p)$, this solution won't fit in the constraints. With a little help from combinatorics, we can calculate the value of the mentioned dp function without recursion. Define $g(s,p,m) = \sum_{i=0}^{p}{(-1)^i \binom{p}{i} \binom{s+p-1-i(m+1)}{p-1}}$ — the number of ways to distribute $s$ goals among $p$ players so that each scores at most $m$. This formula is a well-known inclusion-exclusion modification of the "Stars and Bars" problem with an upper limit on the terms. Now we can calculate the answer: first fix Hasan's score $t$ and the number $q$ of top scorers, then use $g$ to count the states: $f(s,r,p) = \frac{\sum_{t=r}^s{\sum_{q=1}^p{\binom{p-1}{q-1} \cdot \frac 1 q \cdot g(s-qt,p-q,t-1)}}}{\binom{s-r+p-1}{p-1}}$ Overall complexity: $O(s\cdot p^2)$.
[ "combinatorics", "dp", "math", "probabilities" ]
2500
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; const int MOD = 998244353; const int N = 10 * 1000 + 7; const int M = 100 + 7; int fact[N], rfact[N]; int add(int a, int b){ a += b; if (a >= MOD) a -= MOD; if (a < 0) a += MOD; return a; } int mul(int a, int b){ return (a * 1ll * b) % MOD; } int binpow(int a, int b){ int res = 1; while (b){ if (b & 1) res = mul(res, a); a = mul(a, a); b >>= 1; } return res; } int cnk(int n, int k){ if (n == k) return 1; if (k < 0 || k > n) return 0; return mul(fact[n], mul(rfact[k], rfact[n - k])); } int g(int s, int p, int m){ int res = 0; forn(i, p + 1) res = add(res, mul(i & 1 ? MOD - 1 : 1, mul(cnk(p, i), cnk(s + p - 1 - i * (m + 1), p - 1)))); return res; } int inv(int x){ return mul(rfact[x], fact[x - 1]); } int main() { fact[0] = 1; for (int i = 1; i < N; ++i) fact[i] = mul(fact[i - 1], i); rfact[N - 1] = binpow(fact[N - 1], MOD - 2); for (int i = N - 2; i >= 0; --i) rfact[i] = mul(rfact[i + 1], i + 1); int p, s, r; scanf("%d%d%d", &p, &s, &r); int Q = cnk(s - r + p - 1, p - 1); int P = 0; for (int t = r; t <= s; ++t) for (int q = 1; q <= p; ++q) P = add(P, mul(mul(cnk(p - 1, q - 1), inv(q)), g(s - q * t, p - q, t - 1))); printf("%d\n", mul(P, binpow(Q, MOD - 2))); return 0; }
1096
F
Inversion Expectation
A permutation of size $n$ is an array of size $n$ such that each integer from $1$ to $n$ occurs exactly once in this array. An inversion in a permutation $p$ is a pair of indices $(i, j)$ such that $i > j$ and $a_i < a_j$. For example, a permutation $[4, 1, 3, 2]$ contains $4$ inversions: $(2, 1)$, $(3, 1)$, $(4, 1)$, $(4, 3)$. You are given a permutation $p$ of size $n$. However, the numbers on some positions are replaced by $-1$. Let the valid permutation be such a replacement of $-1$ in this sequence back to numbers from $1$ to $n$ in such a way that the resulting sequence is a permutation of size $n$. The given sequence was turned into a valid permutation randomly with the equal probability of getting each valid permutation. Calculate the expected total number of inversions in the resulting valid permutation. It can be shown that it is in the form of $\frac{P}{Q}$ where $P$ and $Q$ are non-negative integers and $Q \ne 0$. Report the value of $P \cdot Q^{-1} \pmod {998244353}$.
Let's break the problem into four cases. Case 1: inversions between two unknown numbers. Each pair of unknown positions either forms an inversion or not, and the number of permutations for both outcomes is the same. Thus, the expected value is $\frac{cnt(-1) \cdot (cnt(-1) - 1)}{2} \cdot \frac 1 2$. Cases 2 and 3: inversions between a known and an unknown number. Consider the case where the left number is unknown and the right one is known; the opposite case is handled similarly. For each known number $x$, calculate the number of unknown positions to the left of it, $lft_x$, and the number of missing values greater than $x$, $gt_x$. An inversion appears exactly when a greater value is placed into one of those positions, so add $lft_x \cdot \frac{gt_x}{cnt(-1)}$ to the answer. Case 4: inversions between two known numbers. Just count them, ignoring all $-1$, and add the count to the answer. Overall complexity: $O(n \log n)$ (for the last case; all others are done in $O(n)$).
[ "dp", "math", "probabilities" ]
2300
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; const int N = 200 * 1000 + 13; const int MOD = 998244353; int n; int p[N]; bool used[N]; int gt[N]; int add(int a, int b){ a += b; if (a >= MOD) a -= MOD; return a; } int mul(int a, int b){ return (a * 1ll * b) % MOD; } int binpow(int a, int b){ int res = 1; while (b){ if (b & 1) res = mul(res, a); a = mul(a, a); b >>= 1; } return res; } int f[N]; void upd(int x){ for (int i = x; i < N; i |= i + 1) ++f[i]; } int get(int x){ int sum = 0; for (int i = x; i >= 0; i = (i & (i + 1)) - 1) sum += f[i]; return sum; } int main() { scanf("%d", &n); forn(i, n){ scanf("%d", &p[i]); if (p[i] != -1) used[p[i]] = true; } int cur = 0; for (int i = n; i >= 1; --i){ gt[i] = cur; cur += !used[i]; } // case 1 int ans = mul(mul(cur, cur - 1), binpow(4, MOD - 2)); int inv = binpow(cur, MOD - 2); // case 2 int lft = 0; forn(i, n){ if (p[i] == -1) ++lft; else ans = add(ans, mul(lft, mul(gt[p[i]], inv))); } // case 3 int rgh = 0; for (int i = n - 1; i >= 0; --i){ if (p[i] == -1) ++rgh; else ans = add(ans, mul(rgh, mul(cur - gt[p[i]], inv))); } // case 4 int tmp = 0; forn(i, n) if (p[i] != -1){ ans = add(ans, tmp - get(p[i])); upd(p[i]); ++tmp; } printf("%d\n", ans); return 0; }
1096
G
Lucky Tickets
All bus tickets in Berland have their numbers. A number consists of $n$ digits ($n$ is even). Only $k$ decimal digits $d_1, d_2, \dots, d_k$ can be used to form ticket numbers. If $0$ is among these digits, then numbers may have leading zeroes. For example, if $n = 4$ and only digits $0$ and $4$ can be used, then $0000$, $4004$, $4440$ are valid ticket numbers, and $0002$, $00$, $44443$ are not. A ticket is lucky if the sum of first $n / 2$ digits is equal to the sum of remaining $n / 2$ digits. Calculate the number of different lucky tickets in Berland. Since the answer may be big, print it modulo $998244353$.
The naive solution would be $dp_{x, y}$ — the number of sequences of allowed digits with length $x$ and sum $y$. We compute it for $x = \frac{n}{2}$ and for every possible $y$, and the answer is $\sum_y dp_{\frac{n}{2}, y}^2$. Let's speed this up. Let's denote the following polynomial: $f(x) = \sum_{i = 0}^{9} c_i x^i$, where $c_i = 1$ if $i$ is an allowed digit and $c_i = 0$ otherwise. It's easy to see that the coefficients of $f(x)$ are equal to the values of $dp_{1, y}$, and by induction the coefficients of $(f(x))^a$ are equal to $dp_{a, y}$. So now we need to compute $(f(x))^{\frac{n}{2}}$. There are two possible ways to do this in $O(n \log n)$. The first option is to apply binary exponentiation with NTT polynomial multiplication. The second option uses the fact that applying NTT to a polynomial gives its values at some fixed points, so if we raise these values to some power, we get the values of the exponentiated polynomial at the same points. So we may apply NTT to $f(x)$, treating it as a polynomial of degree $5n$ (an upper bound on the degree of the result, since the maximum digit sum is $\frac{9n}{2}$), raise each resulting value to the power $\frac{n}{2}$, and apply the inverse transformation.
[ "divide and conquer", "dp", "fft" ]
2400
#include<bits/stdc++.h> using namespace std; const int LOGN = 21; const int N = (1 << LOGN); const int MOD = 998244353; const int g = 3; #define forn(i, n) for(int i = 0; i < int(n); i++) inline int mul(int a, int b) { return (a * 1ll * b) % MOD; } inline int norm(int a) { while(a >= MOD) a -= MOD; while(a < 0) a += MOD; return a; } inline int binPow(int a, int k) { int ans = 1; while(k > 0) { if(k & 1) ans = mul(ans, a); a = mul(a, a); k >>= 1; } return ans; } inline int inv(int a) { return binPow(a, MOD - 2); } vector<int> w[LOGN]; vector<int> iw[LOGN]; vector<int> rv[LOGN]; void precalc() { int wb = binPow(g, (MOD - 1) / (1 << LOGN)); for(int st = 0; st < LOGN; st++) { w[st].assign(1 << st, 1); iw[st].assign(1 << st, 1); int bw = binPow(wb, 1 << (LOGN - st - 1)); int ibw = inv(bw); int cw = 1; int icw = 1; for(int k = 0; k < (1 << st); k++) { w[st][k] = cw; iw[st][k] = icw; cw = mul(cw, bw); icw = mul(icw, ibw); } rv[st].assign(1 << st, 0); if(st == 0) { rv[st][0] = 0; continue; } int h = (1 << (st - 1)); for(int k = 0; k < (1 << st); k++) rv[st][k] = (rv[st - 1][k & (h - 1)] << 1) | (k >= h); } } inline void fft(int a[N], int n, int ln, bool inverse) { for(int i = 0; i < n; i++) { int ni = rv[ln][i]; if(i < ni) swap(a[i], a[ni]); } for(int st = 0; (1 << st) < n; st++) { int len = (1 << st); for(int k = 0; k < n; k += (len << 1)) { for(int pos = k; pos < k + len; pos++) { int l = a[pos]; int r = mul(a[pos + len], (inverse ? iw[st][pos - k] : w[st][pos - k])); a[pos] = norm(l + r); a[pos + len] = norm(l - r); } } } if(inverse) { int in = inv(n); for(int i = 0; i < n; i++) a[i] = mul(a[i], in); } } int aa[N], bb[N], cc[N]; inline void multiply(int a[N], int sza, int b[N], int szb, int c[N], int &szc) { int n = 1, ln = 0; while(n < (sza + szb)) n <<= 1, ln++; for(int i = 0; i < n; i++) aa[i] = (i < sza ? a[i] : 0); for(int i = 0; i < n; i++) bb[i] = (i < szb ? b[i] : 0); fft(aa, n, ln, false); fft(bb, n, ln, false); for(int i = 0; i < n; i++) cc[i] = mul(aa[i], bb[i]); fft(cc, n, ln, true); szc = n; for(int i = 0; i < n; i++) c[i] = cc[i]; } vector<int> T[N]; int a[N]; int b[N]; int c[N]; #define sz(a) (int(a.size())) int main() { precalc(); int n, k; scanf("%d %d", &n, &k); for(int i = 0; i < k; i++) { int x; scanf("%d", &x); a[x] = 1; } int nn = 1, ln = 0; int nw = (n * 5) + 1; while(nn < nw) { nn *= 2; ln++; } fft(a, nn, ln, false); forn(i, nn) a[i] = binPow(a[i], n / 2); fft(a, nn, ln, true); int ans = 0; forn(i, nn) ans = norm(ans + binPow(a[i], 2)); printf("%d\n", ans); return 0; }
1097
A
Gennady and a Card Game
Gennady owns a small hotel in the countryside where he lives a peaceful life. He loves to take long walks, watch sunsets and play cards with tourists staying in his hotel. His favorite game is called "Mau-Mau". To play Mau-Mau, you need a pack of $52$ cards. Each card has a suit (Diamonds — D, Clubs — C, Spades — S, or Hearts — H), and a rank (2, 3, 4, 5, 6, 7, 8, 9, T, J, Q, K, or A). At the start of the game, there is one card on the table and you have five cards in your hand. You can play a card from your hand if and only if it has the same rank or the same suit as the card on the table. In order to check if you'd be a good playing partner, Gennady has prepared a task for you. Given the card on the table and five cards in your hand, check if you can play at least one card.
The task is to check whether some card in the second line (the hand) has a common character with the card in the first line (the card on the table). The easiest way to do this is to store the table card as a single string and the five hand cards as an array of strings. After reading the input, just iterate with a loop over the array. If any string in the array has a common character with the table card's string (i.e. the same rank or the same suit), output "YES". If no match was found, output "NO".
[ "brute force", "implementation" ]
800
null
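The code field for this entry is missing, so here is a minimal Python sketch of the check described above (not the official solution; the rank-then-suit card encoding such as "QH" is an assumption based on the statement):

```python
def can_play(table, hand):
    """Return True if some hand card shares a rank (index 0) or a suit
    (index 1) with the card on the table."""
    return any(c[0] == table[0] or c[1] == table[1] for c in hand)

# Queen of Hearts on the table; no hand card matches rank 'Q' or suit 'H'.
print("YES" if can_play("QH", ["2D", "3S", "4C", "KD", "AS"]) else "NO")
```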
1097
B
Petr and a Combination Lock
Petr has just bought a new car. He's just arrived at the most known Petersburg's petrol station to refuel it when he suddenly discovered that the petrol tank is secured with a combination lock! The lock has a scale of $360$ degrees and a pointer which initially points at zero: Petr called his car dealer, who instructed him to rotate the lock's wheel exactly $n$ times. The $i$-th rotation should be $a_i$ degrees, either clockwise or counterclockwise, and after all $n$ rotations the pointer should again point at zero. This confused Petr a little bit as he isn't sure which rotations should be done clockwise and which should be done counterclockwise. As there are many possible ways of rotating the lock, help him and find out whether there exists at least one, such that after all $n$ rotations the pointer will point at zero again.
The best way to check the condition in the statement is to use bitmasks. Iterate from $0$ to $2^n-1$ and for each number consider its binary representation. If the $i$-th bit of the representation is $0$, perform the $i$-th rotation clockwise; otherwise, do it counterclockwise. Finally, check whether the total rotation angle is a multiple of $360$ and if so, output "YES". If no good combination of rotations was found, output "NO".
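A sketch of this brute force in Python (the function name is an assumption of mine):

```python
def can_return_to_zero(angles):
    # Try all 2^n sign assignments: bit i = 0 means the i-th rotation is
    # clockwise (+a_i), bit i = 1 means counterclockwise (-a_i).
    n = len(angles)
    for mask in range(1 << n):
        total = sum(-a if (mask >> i) & 1 else a
                    for i, a in enumerate(angles))
        if total % 360 == 0:
            return True
    return False
```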
[ "bitmasks", "brute force", "dp" ]
1200
null
1097
C
Yuhao and a Parenthesis
One day, Yuhao came across a problem about checking if some bracket sequences are correct bracket sequences. A bracket sequence is any non-empty sequence of opening and closing parentheses. A bracket sequence is called a correct bracket sequence if it's possible to obtain a correct arithmetic expression by inserting characters "+" and "1" into this sequence. For example, the sequences "(())()", "()" and "(()(()))" are correct, while the bracket sequences ")(", "(()" and "(()))(" are not correct. Yuhao found this problem too simple for him so he decided to make the problem harder. You are given many (not necessarily correct) bracket sequences. The task is to connect some of them into ordered pairs so that each bracket sequence occurs in at most one pair and the concatenation of the bracket sequences in each pair is a correct bracket sequence. The goal is to create as many pairs as possible. This problem unfortunately turned out to be too difficult for Yuhao. Can you help him and solve it?
It turns out that each bracket sequence can be described by a pair $(a, b)$, where $a$ and $b$ are the minimal non-negative integers such that after adding $a$ opening parentheses "(" to the left of the string and $b$ closing parentheses ")" to the right of the string it becomes a correct bracket sequence. These numbers can be easily found with a single loop. If $a \neq 0$ and $b \neq 0$, we cannot pair this string with any other string, because it needs brackets on both of its sides. If $a=b=0$, then this string is a correct bracket sequence and can be paired with any other correct bracket sequence. If there are $k$ such strings, then $\left\lfloor \frac{k}{2} \right\rfloor$ pairs can be formed. If we want to concatenate two other bracket sequences (say, the $i$-th and the $j$-th) to produce a correct bracket sequence, the excess of the opening parentheses in the $i$-th sequence must equal the excess of the closing parentheses in the $j$-th sequence; that is, $b_i = a_j$. This allows us to pair the sequences greedily.
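A sketch of the reduction and the greedy counting (function names are mine, not from the editorial):

```python
from collections import Counter

def reduce_seq(s):
    """Return (a, b): minimal '(' to prepend and ')' to append to make s correct."""
    bal = min_bal = 0
    for c in s:
        bal += 1 if c == '(' else -1
        min_bal = min(min_bal, bal)
    a = -min_bal          # opening brackets needed on the left
    b = bal + a           # closing brackets needed on the right
    return a, b

def max_pairs(seqs):
    balanced = 0
    need_close = Counter()   # a == 0, b > 0: surplus of b opening brackets
    need_open = Counter()    # b == 0, a > 0: deficit of a opening brackets
    for s in seqs:
        a, b = reduce_seq(s)
        if a == 0 and b == 0:
            balanced += 1
        elif a == 0:
            need_close[b] += 1
        elif b == 0:
            need_open[a] += 1
        # sequences with a > 0 and b > 0 can never be used
    return balanced // 2 + sum(min(c, need_open[x])
                               for x, c in need_close.items())
```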
[ "greedy", "implementation" ]
1400
null
1097
D
Makoto and a Blackboard
Makoto has a big blackboard with a positive integer $n$ written on it. He will perform the following action exactly $k$ times: Suppose the number currently written on the blackboard is $v$. He will randomly pick one of the divisors of $v$ (possibly $1$ and $v$) and replace $v$ with this divisor. As Makoto uses his famous random number generator (RNG) and as he always uses $58$ as his generator seed, each divisor is guaranteed to be chosen with equal probability. He now wonders what is the expected value of the number written on the blackboard after $k$ steps. It can be shown that this value can be represented as $\frac{P}{Q}$ where $P$ and $Q$ are coprime integers and $Q \not\equiv 0 \pmod{10^9+7}$. Print the value of $P \cdot Q^{-1}$ modulo $10^9+7$.
How to solve the problem if $n$ is a prime power $p^{\alpha}$? We can simply use dynamic programming. Let $DP_{i, j}$ denote the probability that after $i$ steps the current number on the blackboard is $p^j$. At the beginning, $DP_{0, \alpha} = 1$. The transitions in the DP are rather straightforward. The expected value of the number on the blackboard is then $\sum_{j=0}^{\alpha} DP_{k, j} p^j.$ What if $n$ is not a prime power? We can observe that the result is multiplicative in $n$. Therefore, we factorize $n = p_1^{\alpha_1} p_2^{\alpha_2} \dots p_\ell^{\alpha_\ell}$, apply the solution above for each prime power $p_1^{\alpha_1}$ through $p_\ell^{\alpha_\ell}$, and finally multiply the results. Another issue is that we need to output a single integer modulo $10^9+7$ even though the answer is a rational number. To cope with this, we notice that $10^9+7$ is prime. Therefore, by Fermat's little theorem, dividing by some number $x$ is equivalent to multiplying by $x^{10^9+5}$ modulo $10^9+7$. We use fast exponentiation to compute the results. The overall complexity is $O(\sqrt{n}+k \cdot \log n)$.
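A sketch of this solution in Python (function names are mine; the per-prime DP is the $DP_{i,j}$ above, and division is done by modular inverses):

```python
MOD = 10**9 + 7

def prime_power_expectation(p, alpha, k):
    # probs[j] = probability that the exponent of p equals j after the steps so far
    probs = [0] * (alpha + 1)
    probs[alpha] = 1
    for _ in range(k):
        new = [0] * (alpha + 1)
        for i in range(alpha + 1):
            # each of the i+1 possible exponents 0..i is equally likely next
            share = probs[i] * pow(i + 1, MOD - 2, MOD) % MOD
            for j in range(i + 1):
                new[j] = (new[j] + share) % MOD
        probs = new
    return sum(probs[j] * pow(p, j, MOD) for j in range(alpha + 1)) % MOD

def expected_value(n, k):
    # The answer is multiplicative, so factorize n and multiply per-prime results.
    res, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            alpha = 0
            while n % d == 0:
                n //= d
                alpha += 1
            res = res * prime_power_expectation(d, alpha, k) % MOD
        d += 1
    if n > 1:
        res = res * prime_power_expectation(n, 1, k) % MOD
    return res
```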
[ "dp", "math", "number theory", "probabilities" ]
2200
null
1097
E
Egor and an RPG game
One Saturday afternoon Egor was playing his favorite RPG game. While discovering new lands and territories, he came across the following sign: Egor is a passionate player, but he is an algorithmician as well. That's why he instantly spotted four common letters in two words on the sign above — if we permute the letters "R", "E", "G", "O" from the first word, we can obtain the letters "O", "G", "R", "E". Egor got inspired by the sign and right away he came up with a problem about permutations. You are given a permutation of length $n$. You have to split it into some non-empty subsequences so that each element of the permutation belongs to exactly one subsequence. Each subsequence must be monotonic — that is, either increasing or decreasing. A sequence is called a subsequence if it can be derived from the permutation by deleting some (possibly zero) elements without changing the order of the remaining elements. The number of subsequences should be small enough — let $f(n)$ be the minimum integer $k$ such that every permutation of length $n$ can be partitioned into at most $k$ monotonic subsequences. You need to split the permutation into at most $f(n)$ monotonic subsequences.
Firstly, consider this testcase: $\{1, 3, 2, 6, 5, 4, 10, 9, 8, 7, 15, 14, 13, 12, 11\}$. It's easy to see that here we need at least $5$ sequences. It's possible to generalize this and see that if $n \geq \frac{k \cdot (k+1)}{2}$ for some integer $k$, then we need at least $k$ sequences. So, maybe if $n < \frac{k \cdot (k+1)}{2}$ it's always possible to use at most $k-1$ sequences? It turns out to be true. How to prove it? Let's compute the LIS (Longest Increasing Subsequence) of the permutation. If $|LIS| \geq k$, we can erase it, and because $\frac{k \cdot (k+1)}{2}-k=\frac{(k-1) \cdot k}{2}$, the remaining permutation satisfies the same condition with $k-1$, so we can solve it recursively with at most $k-2$ sequences. What if $|LIS| < k$? It's known that we can split the permutation into $|LIS|$ decreasing sequences. As this number is not greater than the limit on the number of sequences, we can simply do it and end the process. The proof above also gives a simple algorithm to find the partition. Its running time is $O(n\sqrt{n} \log{n})$.
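A compact (non-optimized) Python sketch of the proof's algorithm, assuming patience sorting both for the LIS and for the split into decreasing sequences (all names are mine):

```python
import bisect

def lis_indices(arr):
    """Indices of one longest strictly increasing subsequence (patience sorting)."""
    tail_vals, tail_idx = [], []
    parent = [-1] * len(arr)
    for i, x in enumerate(arr):
        pos = bisect.bisect_left(tail_vals, x)
        if pos == len(tail_vals):
            tail_vals.append(x)
            tail_idx.append(i)
        else:
            tail_vals[pos] = x
            tail_idx[pos] = i
        parent[i] = tail_idx[pos - 1] if pos > 0 else -1
    res, i = [], tail_idx[-1]
    while i != -1:
        res.append(i)
        i = parent[i]
    return res[::-1]

def split_monotonic(perm):
    """Split a permutation into at most f(n) monotonic subsequences."""
    n = len(perm)
    k = 1
    while n >= k * (k + 1) // 2:   # smallest k with n < k(k+1)/2; f(n) = k - 1
        k += 1
    arr, parts = list(perm), []
    while arr:
        lis = lis_indices(arr)
        if len(lis) >= k:
            parts.append([arr[i] for i in lis])
            chosen = set(lis)
            arr = [v for i, v in enumerate(arr) if i not in chosen]
            k -= 1
        else:
            # |LIS| < k: patience piles give |LIS| strictly decreasing subsequences
            piles, tops = [], []          # tops is kept sorted ascending
            for x in arr:
                pos = bisect.bisect_right(tops, x)  # leftmost pile whose top > x
                if pos == len(tops):
                    piles.append([x])
                    tops.append(x)
                else:
                    piles[pos].append(x)
                    tops[pos] = x
            parts.extend(piles)
            arr = []
    return parts
```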
[ "constructive algorithms", "greedy" ]
3400
null
1097
F
Alex and a TV Show
Alex decided to try his luck in TV shows. He once went to the quiz named "What's That Word?!". After perfectly answering the questions "How is a pseudonym commonly referred to in the Internet?" ("Um... a nick?"), "After which famous inventor we name the unit of the magnetic field strength?" ("Um... Nikola Tesla?") and "Which rock band performs "How You Remind Me"?" ("Um... Nickelback?"), he decided to apply to a little bit more difficult TV show: "What's in This Multiset?!". The rules of this TV show are as follows: there are $n$ multisets numbered from $1$ to $n$. Each of them is initially empty. Then, $q$ events happen; each of them is in one of the four possible types: - 1 x v — set the $x$-th multiset to a singleton $\{v\}$. - 2 x y z — set the $x$-th multiset to a union of the $y$-th and the $z$-th multiset. For example: $\{1, 3\}\cup\{1, 4, 4\}=\{1, 1, 3, 4, 4\}$. - 3 x y z — set the $x$-th multiset to a product of the $y$-th and the $z$-th multiset. The product $A \times B$ of two multisets $A$, $B$ is defined as $\{ \gcd(a, b)\, \mid\, a \in A,\, b \in B \}$, where $\gcd(p, q)$ is the greatest common divisor of $p$ and $q$. For example: $\{2, 2, 3\} \times \{1, 4, 6\}=\{1, 2, 2, 1, 2, 2, 1, 1, 3\}$. - 4 x v — the participant is asked how many times number $v$ occurs in the $x$-th multiset. As the quiz turned out to be too hard in the past, participants should now give the answers \textbf{modulo $2$ only}. Note, that $x$, $y$ and $z$ described above are not necessarily different. In events of types $2$ and $3$, the sum or the product is computed first, and then the assignment is performed. Alex is confused by the complicated rules of the show. Can you help him answer the requests of the $4$-th type?
Firstly, for each multiset let's store in a bitset of size $7000$ the information which elements occur an odd number of times in it. Let's think about the query of the third type as a convolution. We'll solve it similarly to FFT: we'll change both polynomials (arrays) into a "values at points" form, multiply them by simply multiplying corresponding numbers, and then reverse the transformation. It turns out that the equivalent of the "values at points" form is the following: instead of storing the parity of the count of numbers equal to $x$, store the parity of the count of numbers divisible by $x$. Why? If we denote by $a$ the number of elements divisible by some $x$ in the first multiset and by $b$ the number of elements divisible by $x$ in the second multiset, then the number of elements divisible by $x$ in their product is $a \cdot b$. Thanks to that, we can multiply easily now: the multiplication is just a bitwise AND operation. We can also add multisets in this form, as the union is a bitwise XOR operation. But of course, we cannot change the form each time we want to multiply two multisets. It can be done in $O(d \cdot \log{d})$ for $d=7000$, but it's too slow. Instead, we keep each multiset in the transformed form all the time. How to answer the parity of the number of occurrences of some number $x$ in a multiset without changing it back to normal form? The inclusion-exclusion principle tells us that we should take the numbers from positions $i$ which are divisible by $x$ and for which $\frac{i}{x}$ is square-free. Each of these numbers should be multiplied by $1$ or $-1$. Luckily, we are calculating the results modulo $2$, and because $1 \equiv -1~(mod~2)$, we just add everything up without multiplying by $\pm 1$. We can precompute the bitsets for every possible query of types $1$ and $4$. This allows us to answer every query in $O(d/32)$.
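A toy Python model of this representation, with a deliberately small value universe $D = 60$ instead of the problem's $7000$-bit bitsets (all names are mine; integers play the role of bitsets):

```python
from math import gcd

D = 60  # small universe for this demo; the real problem uses d = 7000

def squarefree(x):
    p = 2
    while p * p <= x:
        if x % (p * p) == 0:
            return False
        p += 1
    return True

def div_parity(values):
    """Transformed form: bit x is set iff an odd number of elements are divisible by x."""
    mask = 0
    for x in range(1, D + 1):
        if sum(1 for v in values if v % x == 0) & 1:
            mask |= 1 << x
    return mask

# In the transformed form, multiset union is XOR and gcd-product is AND.

def occurrences_parity(mask, v):
    """Parity of the multiplicity of v, via Mobius inversion taken modulo 2:
    XOR the bits at positions v*m over square-free m."""
    res = 0
    for mult in range(1, D // v + 1):
        if squarefree(mult):
            res ^= (mask >> (v * mult)) & 1
    return res
```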
[ "bitmasks", "combinatorics", "number theory" ]
2500
null
1097
G
Vladislav and a Great Legend
A great legend used to be here, but some troll hacked Codeforces and erased it. Too bad for us, but in the troll society he earned a title of an ultimate-greatest-over troll. At least for them, it's something good. And maybe a formal statement will be even better for us? You are given a tree $T$ with $n$ vertices numbered from $1$ to $n$. For every \textbf{non-empty} subset $X$ of vertices of $T$, let $f(X)$ be the minimum number of edges in the smallest connected subtree of $T$ which contains every vertex from $X$. You're also given an integer $k$. You need to compute the sum of $(f(X))^k$ among all non-empty subsets of vertices, that is: $$ \sum\limits_{X \subseteq \{1, 2,\: \dots \:, n\},\, X \neq \varnothing} (f(X))^k. $$ As the result might be very large, output it modulo $10^9 + 7$.
Firstly, to properly understand what "calculate the sum of the $k$-th powers" means, I recommend getting acquainted with these two lectures (especially the "powers technique"): https://codeforces.com/blog/entry/62690 https://codeforces.com/blog/entry/62792 Here is a shorter version of what you should understand. When we have a set of some objects and we want to do something with the $k$-th power of its size (let's assume $k=2$ for a moment), we should look at $x^2$, which turns out to be equal to $(1+1+1+...+1)\cdot(1+1+1+...+1)$, where $x$ is the size of the set and each parenthesis, of course, contains $x$ ones. Each $1$ stands for some element of this set. It can be observed that $x^2$ is equal to the number of ordered pairs ($k$-tuples in general) of elements in the set, because we should choose one $1$ in the first parenthesis and one $1$ in the second parenthesis. So, let's change the order of the summation: first iterate over pairs ($k$-tuples in general) and for each of them calculate in how many sets it appears. And that's exactly what we will do in the solution. For each $l$ ($1 \leq l \leq k$), and further for each $l$-tuple of $\textbf{different}$ and $\textbf{sorted}$ edges, we will calculate in how many spanning subsets it appears. Next, with some simpler DP, we will calculate in how many ways we can change a tuple of $l$ different and sorted elements into a tuple which doesn't necessarily meet these conditions. So, how to calculate the sum for each $l$? Let's imagine that we want to calculate the number of spanning subsets for some fixed subset of edges. How many of them are there? Each vertex which lies on a path between any two of the chosen edges can be arbitrarily taken or not. The rest of the vertices can be divided into connected groups, and each group must contain at least one chosen vertex. 
The division is rather intuitive, but formally: two vertices (which don't lie on any path between two edges in the subset) are in the same group if and only if the path between them doesn't contain any edge from the chosen subset. So, now we know that we should calculate the sum of $2^{number~of~vertices~which~lie~on~some~path}\cdot\prod_{each~group}(2^{size~of~this~group}-1)$. This can be done with a $DP_{i, j}$, where $i$ stands for a vertex and $j$ stands for the number of chosen edges in its subtree. The complexity is trivially $O(n \cdot k^2)$, but it turns out that if we iterate only up to $\min(k, size~of~the~subtree)$ each time, the complexity decreases to $O(n \cdot k)$ (similarly to the trick that gives $O(n^2)$ instead of $O(n^3)$ in many tree DP problems).
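The powers identity behind this editorial can be sanity-checked by brute force on a tiny universe (this is only a check of the counting trick, not the problem's tree DP; sizes are chosen arbitrarily):

```python
from itertools import combinations, product

# Identity behind the "powers technique": for a universe U,
#   sum over subsets S of |S|^k
# equals the number of (subset, ordered k-tuple of its elements) pairs,
# and a subset containing a fixed tuple t is free on the other
# |U| - |set(t)| elements, so each tuple contributes 2^(|U| - |set(t)|).
U, k = list(range(5)), 3
lhs = sum(len(S) ** k for r in range(len(U) + 1) for S in combinations(U, r))
rhs = sum(2 ** (len(U) - len(set(t))) for t in product(U, repeat=k))
assert lhs == rhs
print(lhs)  # both sides count the same pairs; 800 for |U| = 5, k = 3
```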
[ "combinatorics", "dp", "trees" ]
3000
null
1097
H
Mateusz and an Infinite Sequence
A Thue-Morse-Radecki-Mateusz sequence (Thorse-Radewoosh sequence in short) is an infinite sequence constructed from a finite sequence $\mathrm{gen}$ of length $d$ and an integer $m$, obtained in the following sequence of steps: - In the beginning, we define the one-element sequence $M_0=(0)$. - In the $k$-th step, $k \geq 1$, we define the sequence $M_k$ to be the concatenation of the $d$ copies of $M_{k-1}$. However, each of them is altered slightly — in the $i$-th of them ($1 \leq i \leq d$), each element $x$ is changed to $(x+\mathrm{gen}_i) \pmod{m}$. For instance, if we pick $\mathrm{gen} = (0, \textcolor{blue}{1}, \textcolor{green}{2})$ and $m = 4$: - $M_0 = (0)$, - $M_1 = (0, \textcolor{blue}{1}, \textcolor{green}{2})$, - $M_2 = (0, 1, 2, \textcolor{blue}{1, 2, 3}, \textcolor{green}{2, 3, 0})$, - $M_3 = (0, 1, 2, 1, 2, 3, 2, 3, 0, \textcolor{blue}{1, 2, 3, 2, 3, 0, 3, 0, 1}, \textcolor{green}{2, 3, 0, 3, 0, 1, 0, 1, 2})$, and so on. As you can see, as long as the first element of $\mathrm{gen}$ is $0$, each consecutive step produces a sequence whose prefix is the sequence generated in the previous step. Therefore, we can define the infinite Thorse-Radewoosh sequence $M_\infty$ as the sequence obtained by applying the step above indefinitely. For the parameters above, $M_\infty = (0, 1, 2, 1, 2, 3, 2, 3, 0, 1, 2, 3, 2, 3, 0, 3, 0, 1, \dots)$. Mateusz picked a sequence $\mathrm{gen}$ and an integer $m$, and used them to obtain a Thorse-Radewoosh sequence $M_\infty$. He then picked two integers $l$, $r$, and wrote down a subsequence of this sequence $A := ((M_\infty)_l, (M_\infty)_{l+1}, \dots, (M_\infty)_r)$. Note that we use the $1$-based indexing both for $M_\infty$ and $A$. Mateusz has his favorite sequence $B$ with length $n$, and would like to see how large it is compared to $A$. Let's say that $B$ majorizes sequence $X$ of length $n$ (let's denote it as $B \geq X$) if and only if for all $i \in \{1, 2, \dots, n\}$, we have $B_i \geq X_i$. 
He now asks himself how many integers $x$ in the range $[1, |A| - n + 1]$ there are such that $B \geq (A_x, A_{x+1}, A_{x+2}, \dots, A_{x+n-1})$. As both sequences were huge, answering the question using only his pen and paper turned out to be too time-consuming. Can you help him automate his research?
In the beginning, change the query about an interval into two queries about prefixes. Next, let's generalize the problem: change the condition $A_i \leq B_j$ into $A_i \in X_j$, where $X_j$ is some set of possible values (represented as a bitmask). The problem is that the length of the sequence from the input can be huge. How to decrease it? If we knew the index of the position where we want to compare the sequence from the input taken modulo $d$, we would be able to shrink the sequence approximately $d$ times. How? We would know the partition into groups of at most $d$ elements, and each group can be reduced to one element (using binary shifts and AND operations). We don't know the remainder, as there can be many of them. The idea is to use a recursive function which checks every possibility. The function takes as arguments a sequence $X$ (the sequence of sets of possible elements for each position) and the length of the prefix. It works as follows: if the length of the sequence is equal to one, calculate the result with a DP which will be described later. If the possible prefix is empty, return $0$. Otherwise, iterate over the possible remainder and split into $d$ recursive calls, each with an approximately $d$ times shorter sequence. Why does this reduction work? If the sequence's length is big, the sum of lengths doesn't change much. The tricky part comes when the length of the sequence is $2$: then, in $d-1$ calls the length shrinks to one, but in one call the length stays equal to two. This shouldn't scare us, because these $d-1$ calls will be cut off instantly by the mentioned DP. In this way, we will generate $O(n \cdot log_d(r) \cdot d)$ queries. The next observation is that there will be only $O(log_d(r))$ different lengths of prefixes. To prove it, consider what the set of possible prefixes looks like on each level of the recursion. 
Now, for each mentioned prefix $a$ and each number $b$ from the range $[0, m)$ we should calculate how many times $b$ appears in the prefix of length $a$. To do this, write $a$ in the base-$d$ system and use a simple digit DP of the form $DP[i=number~of~already~considered~digits][j=remainder][l=are~we~already~smaller~than~a]=number~of~ways$, with $0 \leq i \leq log_d(r)$, $0 \leq j < m$ and $0 \leq l \leq 1$.
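A sketch of this counting DP in Python (helper names are mine). It uses the fact that the value of $M_\infty$ at 0-based position $p$ is the sum of $\mathrm{gen}$ over the base-$d$ digits of $p$, taken modulo $m$; leading zeros are harmless because $\mathrm{gen}_1 = 0$:

```python
def thorse_value(p, gen, m):
    """Value of M_infinity at 0-based position p (requires gen[0] == 0)."""
    d, v = len(gen), 0
    while p:
        v = (v + gen[p % d]) % m
        p //= d
    return v

def prefix_counts(a, gen, m):
    """counts[b] = number of positions p in [0, a) with value b, by a base-d digit DP."""
    d = len(gen)
    digits, x = [], a
    while x:
        digits.append(x % d)
        x //= d
    digits.reverse()
    free = [0] * m          # positions already strictly smaller than a's prefix
    tight = 0               # value accumulated along a's own leading digits
    for dig in digits:
        new_free = [0] * m
        for v in range(m):
            if free[v]:
                for g in gen:                 # any further digit keeps us below a
                    new_free[(v + g) % m] += free[v]
        for i in range(dig):                  # branch below the current digit of a
            new_free[(tight + gen[i]) % m] += 1
        tight = (tight + gen[dig]) % m
        free = new_free
    return free
```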
[ "bitmasks", "brute force", "dp", "strings" ]
3400
null
1098
A
Sum in the tree
Mitya has a rooted tree with $n$ vertices indexed from $1$ to $n$, where the root has index $1$. Each vertex $v$ initially had an integer number $a_v \ge 0$ written on it. For every vertex $v$ Mitya has computed $s_v$: the sum of all values written on the vertices on the path from vertex $v$ to the root, as well as $h_v$ — the depth of vertex $v$, which denotes the number of vertices on the path from vertex $v$ to the root. Clearly, $s_1=a_1$ and $h_1=1$. Then Mitya erased all numbers $a_v$, and by accident he also erased all values $s_v$ for vertices with even depth (vertices with even $h_v$). Your task is to restore the values $a_v$ for every vertex, or determine that Mitya made a mistake. In case there are multiple ways to restore the values, you're required to find one which minimizes the total sum of values $a_v$ for all vertices in the tree.
To achieve the minimum possible sum of values in the tree, for vertices with even depth we should assign $a_v = 0$ to leaves and the maximum possible value to the other vertices, because increasing such a value does not make the resulting sum worse: the children compensate for it. Since $a_v \ge 0$, it's obvious that $s_{p_v} \le s_{p_v} + a_v = s_v$. For every child $u$ of vertex $v$ it's also true that $s_v \le s_u$, hence $s_{p_v} \le s_v \le s_u$. From this one can derive that the maximum possible $a_v$ for vertices with even $h_v$ equals $\min\limits_{u\ -\ child\ of\ v}s_{u} - s_{p_v}$. Given the values $a_v$ in the vertices of even depth, we can restore the values in the vertices of odd depth using the formula $a_v = s_v - s_{p_v}$. This requires a simple DFS (which translates to a for-loop given the tree representation in the problem statement).
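A minimal sketch of this restoration, assuming the tree is given as a 1-indexed parent array and erased values are `None` (the helper name is hypothetical):

```python
def restore_min_sum(parents, s):
    """parents[v] = parent of v for v = 2..n (index 0 and 1 unused);
    s[v] is known for odd depths and None for even depths.
    Returns the minimal possible sum of all a_v, or -1 if impossible."""
    n = len(s) - 1
    depth = [0] * (n + 1)
    children = [[] for _ in range(n + 1)]
    depth[1] = 1
    for v in range(2, n + 1):
        depth[v] = depth[parents[v]] + 1
        children[parents[v]].append(v)
    s = list(s)
    # erased s_v (even depth): s of the parent for leaves (a_v = 0),
    # otherwise the minimum of the children's s (maximal possible a_v)
    for v in range(2, n + 1):
        if depth[v] % 2 == 0:
            s[v] = s[parents[v]] if not children[v] else min(s[u] for u in children[v])
    total = s[1]                        # a_1 = s_1
    for v in range(2, n + 1):
        a_v = s[v] - s[parents[v]]
        if a_v < 0:
            return -1
        total += a_v
    return total
```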
[ "constructive algorithms", "dfs and similar", "greedy", "trees" ]
1600
null
1098
B
Nice table
You are given an $n \times m$ table, consisting of characters «A», «G», «C», «T». Let's call a table nice, if every $2 \times 2$ square contains all four distinct characters. Your task is to find a nice table (also consisting of «A», «G», «C», «T»), that differs from the given table in the minimum number of characters.
Key idea: in a nice matrix, either each row contains at most two distinct characters, or each column contains at most two distinct characters. Proof: (it will be here, but for now the editorial fields are too narrow to contain it). In other words, the field looks like (up to a permutation of «AGCT»): «AGAGAG» «CTCTCT» «AGAGAG» (or «GAGAGA») «CTCTCT» (or «TCTCTC») «AGAGAG» (or «GAGAGA») «CTCTCT» (or «TCTCTC») (and so on), or similarly by columns. So, the solution is to assume the alternation is by rows, iterate over the permutations of the letters, and for each row choose, between the row pattern and its inversion, the one that differs from the corresponding row of the original matrix in the minimum number of characters. Then rotate the matrix, solve in the same way again, and take the better of the two answers.
[ "brute force", "constructive algorithms", "greedy", "math" ]
2100
null
1098
C
Construct a tree
Misha walked through the snowy forest and was so fascinated by the trees that he decided to draw his own tree! Misha would like to construct a rooted tree with $n$ vertices, indexed from 1 to $n$, where the root has index 1. Every other vertex has a parent $p_i$, and $i$ is called a child of vertex $p_i$. Vertex $u$ belongs to the subtree of vertex $v$ iff $v$ is reachable from $u$ while iterating over the parents ($u$, $p_{u}$, $p_{p_{u}}$, ...). Clearly, $v$ belongs to its own subtree, and the number of vertices in the subtree is called the size of the subtree. Misha is only interested in trees where every vertex belongs to the subtree of vertex $1$. Below there is a tree with $6$ vertices. The subtree of vertex $2$ contains vertices $2$, $3$, $4$, $5$. Hence the size of its subtree is $4$. The branching coefficient of the tree is defined as the maximum number of children over all vertices. For example, for the tree above the branching coefficient equals $2$. Your task is to construct a tree with $n$ vertices such that the sum of the subtree sizes for all vertices equals $s$, and the branching coefficient is minimum possible.
Note that a vertex belongs to the subtrees of exactly the vertices lying on its path to the root. So the sum of subtree sizes equals $p + n$, where $p$ is the sum of the path lengths from the root to all vertices. Let's consider which sums of subtree sizes are achievable in a tree with branching coefficient at most $k$. The minimal sum is achieved by the complete $k$-ary tree (the root, $k$ vertices at distance $1$, $k^2$ at distance $2$, etc.; the last level may be incomplete). The maximal sum is achieved by the bamboo, a tree consisting of a single path of $n$ vertices. If $s$ is bigger than the sum in the bamboo, the answer is «No». Let's find the minimal $k$ such that the sum of subtree sizes in the $k$-ary tree does not exceed $s$ (using binary search). Now $s$ lies between the minimal and maximal sums achievable with branching coefficient at most $k$. Let's show how to build a tree with the given sum between these two borders. Start with the $k$-ary tree and write a recursive function that rebuilds it. Suppose we are in a subtree and want to increase the sum of subtree sizes by $x$ (within this subtree). If the resulting sum would not be too big, we can replace this subtree by a bamboo with the same number of vertices. Otherwise, we run this function on the children's subtrees in some order. If we reach the needed sum, we terminate the process. Otherwise, every child's subtree is now a bamboo, the current sum is smaller than needed, but merging the bamboos would make it bigger than needed. So we move vertices from the other bamboos to the end of the first one. At some point the next move would increase the sum too much; then we move that vertex to another position in the first bamboo so that the sum becomes exactly $s$, and terminate the process. But if we believe that every sum of subtree sizes between the minimum and the maximum is achievable, there is another solution with $O(n \log n)$ complexity which is easier to implement. 
The sum of subtree sizes is influenced only by the number of vertices at each distance from the root, not by their mutual arrangement: it equals $n + \sum_i i \cdot d_{i}$, where $d_{i}$ is the number of vertices at distance $i$ from the root. Let's build this array of counts. These conditions must be satisfied: if the $i$-th element is positive ($i > 0$), then the $(i-1)$-th element must be positive; if the $i$-th element equals $t$, then the $(i+1)$-th must not exceed $t \cdot k$; and the sum of the elements must equal $n$. Suppose we have already fixed some prefix of this array (i.e. we know that the rest can be filled so that the sum of subtree sizes equals $s$). Let's try to put some value $x$ at the next position $i$. We know that the tree can be rebuilt so that the sum is anything between the minimum and the maximum, so two conditions must be satisfied. First, $x$ must be big enough: if we fill the suffix with the numbers $x$, $x \cdot k$, $x \cdot k^2$, ... (the last non-zero number may be smaller, so that the total is $n$), the sum of subtree sizes must not be greater than $s$. Second, $x$ must be small enough: if we fill the suffix with $1$'s, the sum of subtree sizes must not be less than $s$. Both borders form a segment of values which could be put at this position. We don't have to find the left border: just find the right border by binary search and put the value equal to it. We can then easily restore the tree from the array $d$.
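A hedged sketch of this $d$-array greedy in Python (all names are mine; it relies on the editorial's claim that every sum between the borders is achievable, so taking the right border at every position lands exactly on $s$):

```python
def construct_tree(n, s):
    """Try to build a tree with n vertices and subtree-size sum s.
    Returns (k, parents) with parents[i] = parent of vertex i + 2, or None."""
    p = s - n                       # required sum of depths over all vertices
    if n == 1:
        return (0, []) if s == 1 else None
    if p < 0 or p > n * (n - 1) // 2:
        return None

    def min_p(k):                   # depth sum of the complete k-ary tree
        tot, depth, prev, rem = 0, 0, 1, n - 1
        while rem:
            depth += 1
            cnt = min(rem, prev * k)
            tot += depth * cnt
            rem -= cnt
            prev = cnt
        return tot

    lo, hi = 1, n - 1               # smallest k whose k-ary depth sum is <= p
    while lo < hi:
        mid = (lo + hi) // 2
        if min_p(mid) <= p:
            hi = mid
        else:
            lo = mid + 1
    k = lo
    if min_p(k) > p:
        return None

    def max_from(depth, rem):       # all remaining vertices hang in one chain
        return rem * depth + rem * (rem - 1) // 2

    d, cur, rem, depth = [1], 0, n - 1, 0
    while rem:
        depth += 1
        # right border: the largest level size that still lets a chain reach p
        lo_x, hi_x = 1, min(k * d[-1], rem)
        while lo_x < hi_x:
            mid = (lo_x + hi_x + 1) // 2
            if cur + depth * mid + max_from(depth + 1, rem - mid) >= p:
                lo_x = mid
            else:
                hi_x = mid - 1
        x = lo_x
        cur += depth * x
        rem -= x
        d.append(x)
    if cur != p:
        return None
    parents, prev_ids, nxt = [0] * (n + 1), [1], 2
    for cnt in d[1:]:               # the j-th vertex of a level hangs under parent j // k
        ids = list(range(nxt, nxt + cnt))
        nxt += cnt
        for j, v in enumerate(ids):
            parents[v] = prev_ids[j // k]
        prev_ids = ids
    return k, parents[2:]
```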
[ "binary search", "constructive algorithms", "dfs and similar", "graphs", "greedy", "trees" ]
2400
null
1098
D
Eels
Vasya is a big fish lover, and his parents gave him an aquarium for the New Year. Vasya does not have a degree in ichthyology, so he thinks that filling a new aquarium with eels is a good idea. Unfortunately, eels are predators, so Vasya decided to find out how dangerous this idea was. Getting into one aquarium, eels fight each other until exactly one fish remains. When two eels fight, the big one eats the smaller one (if their weights are equal, then one of them will still eat the other). Namely, let $n$ eels be initially in an aquarium, and the $i$-th of them have a weight of $x_i$. Then $n-1$ battles will occur between them, as a result of which, only one eel will survive. In a battle of two eels with weights $a$ and $b$, where $a \le b$, eel of weight $a$ will be eaten and disappear from the aquarium, and eel of weight $b$ will increase its weight to $a+b$. A battle between two eels with weights $a$ and $b$, where $a \le b$, is considered dangerous if $b \le 2 a$. For a given set of eels, danger is defined as the maximum number of dangerous battles that can occur among these eels if they are placed in one aquarium. Now Vasya is planning, which eels he wants to put into an aquarium. He has some set of eels (initially empty). He makes a series of operations with this set. With each operation, he either adds one eel in the set, or removes one eel from the set. Vasya asks you to calculate the danger of the current set of eels after each operation.
Let's consider a set of fishes of size $k$ and sort it in non-decreasing order: $a_1 \le a_2 \le \ldots \le a_k$. Let's call a fish fat if its weight is greater than twice the sum of all fishes with smaller indices: fish $a_i$ is fat iff $a_i > 2 \sum_{j < i} a_j$. Key fact. Let $t$ be the total number of fat fishes. We'll prove that in this case the set of fishes has danger $k-t$. Observation 1. A fat fish can't dangerously eat a fish with smaller weight. Indeed, even if all the smaller fishes ate each other, the resulting fish would still be too small. We can conclude that the danger is not greater than $k-t$. Observation 2. Let's use the following strategy: at every moment, the fishes with the two smallest weights fight each other. If a battle between fishes with weights $a \le b$ is not dangerous, then fish $b$ has not eaten any other fish yet: otherwise it would be a sum of two smaller fishes $c+d=b$ with $c \le d$, but in that case $d \ge b/2 > a$, so according to the strategy there must have been a fight between fishes $a$ and $c$ before. This indicates that fish $b$ has not eaten any other fish yet, while fish $a$ is the result of battles between all smaller fishes. In this case $b$ is a fat fish by definition, which completes the proof that this strategy yields exactly $k-t$ dangerous battles. Algorithm. Clearly, now we need a data structure that supports three types of operations: add element $x$; remove element $x$; find all elements that are greater than twice the sum of all smaller elements. Let's split the range of allowed values into half-intervals $[1,2)$, $[2,4)$, $[4,8)$, ..., $[2^{30}, 2^{31})$. Observation 3. Every half-interval contains at most $1$ fat fish, and if there is one, it's the one with the minimum value. Indeed, no single half-interval contains two values $x$ and $y$ with $y > 2x$. 
Now for each half-interval we can maintain the number of fishes in it, the sum of their weights, and the minimum fish in it (one could use a set / priority queue or a similar data structure). With this, we can determine the count of fat fishes simply by iterating over all half-intervals. The resulting complexity is $O(\log_2 x_{max})$ per query.
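For a static multiset, the key fact above turns into a tiny computation (the full problem additionally needs the half-interval structure to support additions and removals); a sketch with a name of my choosing:

```python
def danger(weights):
    """Number of dangerous battles: k minus the number of 'fat' fishes,
    where a fish is fat iff it outweighs twice the sum of all smaller fishes."""
    prefix, fat = 0, 0
    for x in sorted(weights):
        if x > 2 * prefix:
            fat += 1
        prefix += x
    return len(weights) - fat
```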
[ "data structures" ]
2800
null
1098
E
Fedya the Potter
Fedya loves problems involving data structures. Especially ones about different queries on subsegments. Fedya had a nice array $a_1, a_2, \ldots a_n$ and a beautiful data structure. This data structure, given $l$ and $r$, $1 \le l \le r \le n$, could find the greatest integer $d$, such that $d$ divides each of $a_l$, $a_{l+1}$, ..., $a_{r}$. Fedya really likes this data structure, so he applied it to every non-empty contiguous subarray of array $a$, put all answers into the array and sorted it. He called this array $b$. It's easy to see that array $b$ contains $n(n+1)/2$ elements. After that, Fedya implemented another cool data structure, that allowed him to find sum $b_l + b_{l+1} + \ldots + b_r$ for given $l$ and $r$, $1 \le l \le r \le n(n+1)/2$. Surely, Fedya applied this data structure to every contiguous subarray of array $b$, called the result $c$ and sorted it. Help Fedya find the lower median of array $c$. Recall that for a sorted array of length $k$ the lower median is an element at position $\lfloor \frac{k + 1}{2} \rfloor$, if elements of the array are enumerated starting from $1$. For example, the lower median of array $(1, 1, 2, 3, 6)$ is $2$, and the lower median of $(0, 17, 23, 96)$ is $17$.
There are $O(N \cdot \log N)$ different values of gcd in total, because if we fix the left bound of a segment and move the right bound from the left bound to the end, the gcd either stays unchanged or decreases by a factor of at least two. We can get a compressed version of array $b$ by merging equal consecutive elements into pairs $(value, countOfValue)$. We can use a segment tree to find all segments of equal elements. Let's estimate how many segments of array $b$ have sum less than or equal to some $M$. If array $b$ were small, we could use the two-pointers technique for this. Unfortunately, the size of array $b$ is $O(N^2)$. However, array $b$ has a lot of equal elements, so the solution is to process each group of equal elements quickly. Let's fix bounds $L$ and $R$ in array $b$; let $T$ be the sum of the elements between $L$ and $R$; after $L$ we have a group of equal elements $X$ of size $xcnt$, and after $R$ a group of equal elements $Y$ of size $ycnt$. Then it is easy to see that our task is to find out how many pairs of integers $(a, b)$ satisfy the condition $0 < b \cdot Y - a \cdot X + T \le M$. This is a standard task of counting all integer points under a line, which can be done with the Euclidean algorithm.
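The "integer points under a line" step is the classic Euclidean-like floor-sum recursion (a standard routine, e.g. in the AtCoder Library; this is my sketch, not code from the editorial). Counting the pairs $(a, b)$ above then reduces to differences of such sums:

```python
def floor_sum(n, m, a, b):
    """Sum of floor((a*i + b) / m) over i = 0 .. n-1, for n, a, b >= 0 and m >= 1,
    computed in logarithmic time by a Euclidean-algorithm-style reduction."""
    ans = 0
    if a >= m:
        ans += (n - 1) * n // 2 * (a // m)
        a %= m
    if b >= m:
        ans += n * (b // m)
        b %= m
    y_max = (a * n + b) // m          # number of horizontal grid lines crossed
    if y_max == 0:
        return ans
    x_max = y_max * m - b
    ans += (n - (x_max + a - 1) // a) * y_max
    ans += floor_sum(y_max, a, m, (a - x_max % a) % a)
    return ans
```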
[ "binary search", "implementation", "math", "number theory" ]
3,400
null
1098
F
Ж-function
The length of the longest common prefix of two strings $s=s_1 s_2 \ldots s_n$ and $t = t_1 t_2 \ldots t_m$ is defined as the maximum $k \le \min(n, m)$ such that $s_1 s_2 \ldots s_k$ equals $t_1 t_2 \ldots t_k$. Let's denote the longest common prefix of two strings $s$ and $t$ as $lcp(s,t)$. Z-function of a string $s_1 s_2 \dots s_n$ is a sequence of integers $z_1, z_2, \ldots, z_n$, where $z_i = lcp(s_1 s_2 \ldots s_n,\ \ s_i s_{i+1} \dots s_n)$. Ж-function of a string $s$ is defined as $z_1 + z_2 + \ldots + z_n$. You're given a string $s=s_1 s_2 \ldots s_n$ and $q$ queries. Each query is described by two integers $l_i$ and $r_i$, where $1 \le l_i \le r_i \le n$. The answer for the query is defined as Ж-function of the string $s_{l_i} s_{l_i +1} \ldots s_{r_i}$.
The answer for query $l$, $r$ equals $\sum^{r}_{i=l}{\min(lcp(l,i), r - i + 1)}$, where $lcp(i, l)$ is the length of the longest common prefix of the $i$-th and $l$-th suffixes. Let's call $\min(lcp(l,i), r - i + 1)$ the «cut $lcp$» of two suffixes. Now we can compute the answer differently: for each $k = 1 \ldots r - l + 1$, count the number of suffixes $i = l \ldots r$ with cut $lcp \geq k$; it equals the number of suffixes $i = l \ldots r - k + 1$ with $lcp \geq k$. The sum of these values is the answer. Let's build the suffix tree of the string. To count this value for fixed $k$ and $l$, consider the ancestor of the $l$-th suffix at depth $k$: the number of sought suffixes equals the number of leaves with numbers from $l$ to $r - k + 1$ in the subtree of that vertex. So suppose that for each vertex $v$ there is a data structure which, for fixed $l$ and $r$, counts the number of leaves with numbers from $l$ to $r-h_v+1$ in the subtree of $v$ (here $h_v$ is the depth of $v$). Then to answer a query we should sum these counts over all ancestors of depth $\leq r - l + 1$ of the vertex corresponding to the query substring. Now we can split the count into two parts: the number of leaves with numbers $\leq r - h_v + 1$ minus the number of leaves with numbers $< l$. The first part amounts to counting leaves (in the subtree) with numbers $x$ such that $x + h_v \leq r + 1$. Here we use heavy-light decomposition of the suffix tree to compute sums along the root path. Consider the $\mathcal{O}(\log n)$ heavy paths above a fixed leaf. The leaf appears in some prefix of each of these paths, and its values along a path are $x+h_v, x+h_v + 1, \ldots, x+h_v+len$. So on each path there is a set of leaves, each described by a pair $(x + h_v, len)$; let's denote these pairs by $(x_i, y_i)$. Now we just need to count the number of pairs $(i, j)$ such that $0 \leq j \leq y_i$ and $x_i + j \leq r + 1$.
I.e. for each query we should calculate $\sum{\max(0, \min(r + 1 - x_i, y_i))}$ on prefixes of the considered heavy paths. Now we can use a scanning line and a Fenwick tree for each path to process all queries offline. It would be easy for you to work out the details if you understood everything above ;). The second part can be done similarly, but in that case each leaf corresponds to a point, not a linear function. The complexity is $\mathcal{O}(n \log^2 n)$, which easily fits into the TL.
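The opening identity can be sanity-checked by brute force. Below is a minimal sketch: a classic Z-function plus a naive `query` (both names made up here) that evaluates $\sum_{i=l}^{r}\min(lcp(l,i), r-i+1)$ directly, which must equal the Ж-function of the substring:

```python
def z_function(s):
    # classic O(n) Z-function; z[i] = lcp(s, s[i:]), with z[0] = n by convention
    n = len(s)
    z = [0] * n
    z[0] = n
    l = r = 0
    for i in range(1, n):
        if i < r:
            z[i] = min(r - i, z[i - l])
        while i + z[i] < n and s[z[i]] == s[i + z[i]]:
            z[i] += 1
        if i + z[i] > r:
            l, r = i, i + z[i]
    return z

def zh(s):
    # Ж-function: the sum of all Z-function values
    return sum(z_function(s))

def query(s, l, r):
    # 1-indexed (l, r): Ж-function of s[l..r], evaluated via the identity
    # sum over i in [l, r] of min(lcp(l, i), r - i + 1)
    z = z_function(s[l - 1:])
    return sum(min(z[i - l], r - i + 1) for i in range(l, r + 1))
```

For example, `query("abacaba", 2, 4)` and `zh("bac")` both give 3.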
[ "string suffix structures", "strings" ]
3,500
null
1099
A
Snowball
Today's morning was exceptionally snowy. Meshanya decided to go outside and noticed a huge snowball rolling down the mountain! Luckily, there are two stones on that mountain. Initially, the snowball is at height $h$ and it has weight $w$. Each second the following sequence of events happens: the snowball's weight increases by $i$, where $i$ is the current height of the snowball, then the snowball hits the stone (if one is present at the current height), then the snowball moves one meter down. If the snowball reaches height zero, it stops. There are exactly two stones on the mountain. The first stone has weight $u_1$ and is located at height $d_1$, the second one has weight $u_2$ and height $d_2$ respectively. When the snowball hits either of the two stones, it loses weight equal to the weight of that stone. If after this the snowball has negative weight, then its weight becomes zero, but the snowball continues moving as before. Find the weight of the snowball when it stops moving, that is, when it reaches height 0.
This problem can be solved in many ways; we will describe one of them. Just iterate over all heights $i$ from $h$ down to $1$. Inside the loop, add $i$ to the weight of the snowball, and then check whether there is a stone at this height. If there is, check whether the weight of the snowball is greater than the weight of the stone: if it is, subtract the stone's weight from the snowball's weight; if not, set the snowball's weight to $0$.
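The loop above is a direct simulation; a minimal sketch (the function name and the `stones` dict are illustrative, the statement guarantees exactly two stones):

```python
def final_weight(w, h, stones):
    # stones: height -> stone weight
    for i in range(h, 0, -1):
        w += i                         # the weight grows by the current height
        if i in stones:
            w = max(0, w - stones[i])  # hit the stone; weight is clamped at zero
    return w
```

E.g. with $w=4$, $h=3$ and stones of weight $1$ at heights $1$ and $2$ the snowball ends up weighing $8$.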
[ "implementation" ]
800
null
1099
B
Squares and Segments
Little Sofia is in fourth grade. Today in the geometry lesson she learned about segments and squares. On the way home, she decided to draw $n$ squares in the snow with a side length of $1$. For simplicity, we assume that Sofia lives on a plane and can draw only segments of length $1$, parallel to the coordinate axes, with vertices at integer points. In order to draw a segment, Sofia proceeds as follows. If she wants to draw a vertical segment with the coordinates of the ends $(x, y)$ and $(x, y+1)$, then Sofia looks if there is already a drawn segment with the coordinates of the ends $(x', y)$ and $(x', y+1)$ for some $x'$. If such a segment exists, then Sofia quickly draws a new segment, using the old one as a guideline. If there is no such segment, then Sofia has to take a ruler and measure a new segment for a long time. The same thing happens when Sofia wants to draw a horizontal segment, but now she checks for the existence of a segment with the same coordinates $x$, $x+1$ and a differing coordinate $y$. For example, if Sofia needs to draw one square, she will have to draw two segments using a ruler: After that, she can draw the remaining two segments, using the first two as a guide: If Sofia needs to draw two squares, she will have to draw three segments using a ruler: After that, she can draw the remaining four segments, using the first three as a guide: Sofia is in a hurry, so she wants to minimize the number of segments that she will have to draw with a ruler without a guide. Help her find this minimum number.
Consider any resulting configuration of the squares. We can safely assume that the sets of non-empty rows and non-empty columns are contiguous (otherwise just move a disconnected part a bit closer to the rest). Clearly, every non-empty row and column can be extended <<for free>> to fill the whole bounding box, the minimum rectangle containing the resulting figure. From this we conclude that we can search for the optimal answer among rectangles. The answer for an $a \times b$ rectangle equals $a+b$ (just draw the first row and the first column with a ruler), so we need to find two values $a$ and $b$ such that $a \cdot b \ge n$ and $a + b$ is minimum possible. It's easy to see that the sum is not optimal if $|a-b| \ge 2$: moving the two values towards each other keeps the sum but increases the product. This observation leads to the following solution: if $s=\lfloor \sqrt n \rfloor$, then the answer is achieved by a rectangle $s \times s$, $s \times (s+1)$, or $(s+1) \times (s+1)$ (because of the rounding). We just need to check which one is better.
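The final check can be sketched in a few lines (the function name is made up; `math.isqrt` gives $\lfloor\sqrt n\rfloor$ exactly):

```python
import math

def min_ruler_segments(n):
    # smallest a + b over integer rectangles with a * b >= n; by the
    # editorial it is enough to try s x s, s x (s+1) and (s+1) x (s+1)
    s = math.isqrt(n)
    for a, b in ((s, s), (s, s + 1), (s + 1, s + 1)):
        if a * b >= n:
            return a + b
```

E.g. one square needs $2$ ruler segments and two squares need $3$, matching the pictures in the statement.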
[ "binary search", "constructive algorithms", "math" ]
1,100
null
1099
C
Postcard
Andrey received a postcard from Irina. It contained only the words "Hello, Andrey!", and a strange string consisting of lowercase Latin letters, snowflakes and candy canes. Andrey thought that this string is an encrypted message, and decided to decrypt it. Andrey noticed that snowflakes and candy canes always stand after the letters, so he supposed that the message was encrypted as follows. Candy cane means that the letter before it can be removed, or can be left. A snowflake means that the letter before it can be removed, left, or repeated several times. For example, consider the following string: This string can encode the message «happynewyear». For this, candy canes and snowflakes should be used as follows: - candy cane 1: remove the letter w, - snowflake 1: repeat the letter p twice, - candy cane 2: leave the letter n, - snowflake 2: remove the letter w, - snowflake 3: leave the letter e. Please note that the same string can encode different messages. For example, the string above can encode «hayewyar», «happpppynewwwwwyear», and other messages. Andrey knows that messages from Irina usually have a length of $k$ letters. Help him to find out if a given string can encode a message of $k$ letters, and if so, give an example of such a message.
If the string in the postcard does not contain any snowflakes or candy canes, $k$ must be equal to the length of the string, because the only message encoded by such a string is the string itself; in this case, if $k$ is not equal to the length of the string, the answer is <<Impossible>>. Let's call a character of the string mandatory if it is not followed by a snowflake or a candy cane. Clearly $k$ should be at least the number of mandatory characters, otherwise the answer is <<Impossible>>. In case there is a snowflake in the string, we can repeat the preceding character enough times to reach length $k$, and remove the rest of the non-mandatory characters. If there are no snowflakes, but only candy canes, we should keep some of the characters followed by candy canes until we get the desired length $k$. In case we don't have enough of them, the answer is <<Impossible>>.
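A minimal sketch of this greedy, assuming for illustration that `'?'` stands for a candy cane and `'*'` for a snowflake (the actual input symbols differ); `decode` is a made-up name and returns `None` for <<Impossible>>:

```python
def decode(s, k):
    groups = []          # (letter, modifier) with modifier in {'', '?', '*'}
    i = 0
    while i < len(s):
        mod = ''
        if i + 1 < len(s) and s[i + 1] in '?*':
            mod = s[i + 1]
        groups.append((s[i], mod))
        i += 2 if mod else 1
    mandatory = sum(1 for _, m in groups if m == '')
    if k < mandatory:
        return None                      # too few letters available
    extra = k - mandatory                # non-mandatory letters still to place
    res = []
    for ch, m in groups:
        if m == '':
            res.append(ch)               # mandatory letter: always kept
        elif m == '?' and extra > 0:
            res.append(ch)               # keep an optional letter if still short
            extra -= 1
        elif m == '*' and extra > 0:
            res.append(ch * extra)       # a snowflake absorbs all the surplus
            extra = 0
    return ''.join(res) if extra == 0 else None
```

For example, `decode("ab?c*", 4)` gives `"abcc"`, while `decode("ab?", 3)` is impossible because there is no snowflake to pump the length.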
[ "constructive algorithms", "implementation" ]
1,200
null
1099
F
Cookies
Mitya and Vasya are playing an interesting game. They have a rooted tree with $n$ vertices, and the vertices are indexed from $1$ to $n$. The root has index $1$. Every other vertex $i \ge 2$ has its parent $p_i$, and vertex $i$ is called a child of vertex $p_i$. There are some cookies in every vertex of the tree: there are $x_i$ cookies in vertex $i$. It takes exactly $t_i$ time for Mitya to eat \textbf{one} cookie in vertex $i$. There is also a chip, which is initially located in the root of the tree, and it takes $l_i$ time to move the chip along the edge connecting vertex $i$ with its parent. Mitya and Vasya take turns playing, Mitya goes first. - Mitya moves the chip from the vertex, where the chip is located, to one of its children. - Vasya can remove an edge from the vertex, where the chip is located, to one of its children. Vasya can also decide to skip his turn. Mitya can stop the game at any his turn. Once he stops the game, he moves the chip up to the root, eating some cookies along his way. Mitya can decide how many cookies he would like to eat in every vertex on his way. The total time spent on descend, ascend and eating cookies should not exceed $T$. Please note that in the end of the game the chip is always located in the root of the tree: Mitya can not leave the chip in any other vertex, even if he has already eaten enough cookies — he must move the chip back to the root (and every move from vertex $v$ to its parent takes $l_v$ time). Find out what is the maximum number of cookies Mitya can eat, regardless of Vasya's actions.
If Mitya moves the chip to vertex $i$ during the game and then moves it back to the root, he will have exactly $T - 2 \cdot (\text{time to reach vertex $i$ from the root})$ time to eat cookies. Let's denote the maximum number of cookies he can eat during this time by $f[i]$. Let's first focus on what to do next, assuming we have already computed $f[i]$. We can use DFS to compute $dp[i]$ - the maximum number of cookies Mitya can eat if he finishes the game in the subtree of vertex $i$ (vertex $i$ itself included). Let $m1[i]$ and $m2[i]$ be the indices of two children of vertex $i$ with maximum value $dp[j]$ among its children. It's simple to compute $dp[i]$: In case vertex $i$ is a leaf, $dp[i] = f[i]$, since we can not move any further. In case vertex $i$ is a root of the tree, we have two choices: either stop the game, or go to the child with the maximum value of $dp$, which means $dp[i] = \max(f[i],\ dp[m1[i]])$ in this case. Otherwise, if we're in the vertex $i$, Vasya can remove an edge to any child $j$ of vertex $i$, and clearly he would remove the one with maximum value of $dp$, meaning he would remove the edge from $i$ to $m1[i]$. This means $dp[i] = \max(f[i],\ dp[m2[i]])$, because we can also decide to stop the game in vertex $i$. The only thing left now is how to compute values $f[i]$ efficiently. We can do it with another DFS: while traversing the tree, for vertex $i$ we maintain the pairs $(t_j,\ x_j)$ for all vertices on the path from the root to vertex $i$. Clearly Mitya would prefer to eat the cookies that he can eat quickly, so we maintain these vertices ordered by $t_j$ in increasing order, and to compute the answer for vertex $i$, we just eat the cookies from "fastest" to "slowest" until we run out of time. To do it efficiently, one can use their favourite data structure (BIT / Segment Tree / Treap) while traversing the tree, which leads to $O(n \log n)$ solution.
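The greedy inside the computation of $f[i]$ (eat the fastest cookies first) can be sketched naively; the real solution performs this over a BIT/segment tree instead of re-sorting, and `max_cookies` is a made-up name:

```python
def max_cookies(pairs, time_left):
    # pairs: (t_j, x_j) for the vertices on the root-to-i path;
    # greedily eat the cookies with the smallest eating time first
    eaten = 0
    for t, x in sorted(pairs):
        can = min(x, time_left // t)   # how many of this kind fit in the budget
        eaten += can
        time_left -= can * t
    return eaten
```

E.g. with pairs $(2,3)$ and $(1,5)$ and $9$ units of time Mitya eats all five $t=1$ cookies and two $t=2$ cookies, i.e. $7$ in total.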
[ "binary search", "data structures", "dfs and similar", "dp", "games", "trees" ]
2,400
null
1100
A
Roman and Browser
This morning, Roman woke up and opened the browser with $n$ opened tabs numbered from $1$ to $n$. There are two kinds of tabs: those with the information required for the test and those with social network sites. Roman decided that there are too many tabs open so he wants to close some of them. He decided to accomplish this by closing every $k$-th ($2 \leq k \leq n - 1$) tab. Only then he will decide whether he wants to study for the test or to chat on the social networks. Formally, Roman will choose one tab (let its number be $b$) and then close all tabs with numbers $c = b + i \cdot k$ that satisfy the following condition: $1 \leq c \leq n$ and $i$ is an integer (it may be positive, negative or zero). For example, if $k = 3$, $n = 14$ and Roman chooses $b = 8$, then he will close tabs with numbers $2$, $5$, $8$, $11$ and $14$. After closing the tabs Roman will calculate the amount of remaining tabs with the information for the test (let's denote it $e$) and the amount of remaining social network tabs ($s$). Help Roman to calculate the maximal absolute value of the difference of those values $|e - s|$ so that it would be easy to decide what to do next.
The constraints in this task allow simply iterating over the closed residue class and recomputing the answer, but we can solve it faster: calculate the sum of tab values in each residue class modulo $k$ and the total sum of the whole array. After that, just iterate over the residue of the tab numbers to delete and update the answer. Complexity: $O(n^2)$ or $O(n + k)$.
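A sketch of the $O(n + k)$ version, encoding test tabs as $+1$ and social tabs as $-1$ (the function name is illustrative):

```python
def best_difference(tabs, k):
    # tabs[i] = +1 for a study tab, -1 for a social-network tab
    total = sum(tabs)
    by_mod = [0] * k                    # sum of values in each class mod k
    for i, t in enumerate(tabs):
        by_mod[i % k] += t
    # closing the class of residue b removes exactly by_mod[b] from the total
    return max(abs(total - by_mod[b]) for b in range(k))
```

E.g. for tabs `[1, 1, -1, 1]` and $k = 2$, closing the even-indexed tabs leaves $|2 - 0| = 2$.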
[ "implementation" ]
1,000
null
1100
B
Build a Contest
Arkady coordinates rounds on some not really famous competitive programming platform. Each round features $n$ problems of distinct difficulty, the difficulties are numbered from $1$ to $n$. To hold a round Arkady needs $n$ new (not used previously) problems, one for each difficulty. As for now, Arkady creates all the problems himself, but unfortunately, he can't just create a problem of a desired difficulty. Instead, when he creates a problem, he evaluates its difficulty from $1$ to $n$ and puts it into the problems pool. At each moment when Arkady can choose a set of $n$ new problems of distinct difficulties from the pool, he holds a round with these problems and removes them from the pool. Arkady always creates one problem at a time, so if he can hold a round after creating a problem, he immediately does it. You are given a sequence of problems' difficulties in the order Arkady created them. For each problem, determine whether Arkady held the round right after creating this problem, or not. Initially the problems pool is empty.
We will keep two arrays: $cnt_i$, the number of created problems of difficulty $i$, and $exist_j$, the number of difficulties for which at least $j$ problems have been created. When we create a problem of difficulty $c$, we update $cnt_c = cnt_c + 1$ and $exist_{cnt_c} = exist_{cnt_c} + 1$. Suppose we have already held $k$ rounds. Then, after adding the next problem, we only need to check whether $exist_{k+1} = n$: if so, we can hold the next round, otherwise we cannot. The complexity is $O(m)$.
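A minimal sketch of this bookkeeping (the function name is made up; it returns the 0/1 answer per created problem):

```python
def rounds_held(n, difficulties):
    m = len(difficulties)
    cnt = [0] * (n + 1)        # cnt[c]: problems of difficulty c created so far
    exist = [0] * (m + 2)      # exist[j]: difficulties with at least j problems
    held = 0
    res = []
    for c in difficulties:
        cnt[c] += 1
        exist[cnt[c]] += 1
        if exist[held + 1] == n:    # every difficulty has one more spare problem
            held += 1
            res.append(1)
        else:
            res.append(0)
    return res
```

E.g. for $n = 3$ and difficulties $2, 3, 1, 2, 2, 2, 3, 2, 2, 3, 1$, rounds are held after the 3rd and the 11th problem.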
[ "data structures", "implementation" ]
1,300
null
1100
C
NN and the Optical Illusion
NN is an experienced internet user and that means he spends a lot of time on the social media. Once he found the following image on the Net, which asked him to compare the sizes of inner circles: It turned out that the circles are equal. NN was very surprised by this fact, so he decided to create a similar picture himself. He managed to calculate the number of outer circles $n$ and the radius of the inner circle $r$. NN thinks that, using this information, you can exactly determine the radius of the outer circles $R$ so that the inner circle touches all of the outer ones externally and each pair of neighboring outer circles also touches each other. While NN tried very hard to guess the required radius, he didn't manage to do that. Help NN find the required radius for building the required picture.
Consider three circles: the inner one and two neighboring outer ones. Since the circles are tangent, the sides of the triangle built on the centers of the circles pass through the tangency points. Denote by $\alpha$ the interior angle of a regular $n$-gon; then $\alpha = \frac{\pi (n - 2)}{n}$. On the other hand, $\frac{R}{r + R} = \cos(\frac{\alpha}{2})$. Note that $\frac{R}{r + R} = \frac{R + r - r}{r + R} = 1 - \frac{r}{r + R}$, i.e. the left side increases with $R$. Hence a binary search can be used to find the answer, or we can derive the formula explicitly: $R = r \cdot \frac{\cos(\frac{\alpha}{2})}{1 - \cos(\frac{\alpha}{2})}$. The complexity is $O(1)$ or $O(\log C)$.
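The closed-form version, noting that $\cos(\frac{\alpha}{2}) = \cos(\frac{\pi}{2} - \frac{\pi}{n}) = \sin(\frac{\pi}{n})$ (the function name is illustrative):

```python
import math

def outer_radius(n, r):
    # alpha = pi (n - 2) / n and R / (r + R) = cos(alpha / 2),
    # hence R = r * cos(alpha / 2) / (1 - cos(alpha / 2))
    c = math.cos(math.pi * (n - 2) / (2 * n))   # cos(alpha / 2) = sin(pi / n)
    return r * c / (1 - c)
```

A quick sanity check: for $n = 6$ the outer circles have the same radius as the inner one, since $\sin(\pi/6) = \frac12$.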
[ "binary search", "geometry", "math" ]
1,200
null
1100
D
Dasha and Chess
This is an interactive task. Dasha and NN like playing chess. While playing a match they decided that normal chess isn't interesting enough for them, so they invented a game described below. There are $666$ black rooks and $1$ white king on the chess board of size $999 \times 999$. The white king wins if he gets checked by a rook, or, in other words, if he moves onto the square which shares either a row or column with a black rook. The sides take turns, starting with white. NN plays as the white king and on each of his turns he moves the king to one of the squares that are adjacent to his current position either by side or diagonally, or, formally, if the king was on the square $(x, y)$, it can move to the square $(nx, ny)$ if and only if $\max(|nx - x|, |ny - y|) = 1$ and $1 \leq nx, ny \leq 999$. NN is also forbidden from moving onto the squares occupied by black rooks, however, he can move onto the same row or column as a black rook. Dasha, however, neglects playing by the chess rules, and instead of moving rooks normally she moves one of her rooks to any square devoid of other chess pieces. It is also possible that the rook would move onto the same square it was before and the position wouldn't change. However, she can't move the rook to the same row or column as the king. Each player makes $2000$ turns; if the white king wasn't checked by a black rook during those turns, black wins. NN doesn't like losing, but thinks the task is too difficult for him, so he asks you to write a program that will always win playing for the white king. Note that Dasha can see your king and play depending on its position.
One of the possible strategies: the king goes to the center, then heads to the corner that has as few rooks as possible behind his back. The solution uses the pigeonhole principle: the three quadrants ahead of the king (the one he moves into and its two neighbors) together contain at least $666 \cdot 3/4 > 499$ rooks, i.e. $\geq 500$ rooks, and since the king reaches the corner in $499$ moves, he will inevitably get checked by a rook.
[ "constructive algorithms", "games", "interactive" ]
2,500
null
1100
E
Andrew and Taxi
Andrew prefers taxi to other means of transport, but recently most taxi drivers have been acting inappropriately. In order to earn more money, taxi drivers started to drive in circles. Roads in Andrew's city are one-way, and people are not necessarily able to travel from one part to another, but it pales in comparison to insidious taxi drivers. The mayor of the city decided to change the direction of certain roads so that the taxi drivers wouldn't be able to increase the cost of the trip endlessly. More formally, if a taxi driver is on a certain crossroads, they shouldn't be able to reach it again after performing a nonzero trip. Traffic controllers are needed in order to change the direction the road goes. For every road it is known how many traffic controllers are needed to change the direction of the road to the opposite one. It is allowed to change the directions of roads one by one, meaning that each traffic controller can participate in reversing two or more roads. You need to calculate the minimum number of traffic controllers that you need to hire to perform the task and the list of the roads that need to be reversed.
Suppose we have $k$ traffic controllers. They can reverse all edges whose weight is at most $k$. Remove all these edges from the graph, topologically sort the remaining graph, and orient the removed edges along the topological order. If cycles remain in the graph after removing the edges, then we cannot get rid of them with $k$ traffic controllers; otherwise, adding the edges back oriented along the order creates no new cycles. The parameter $k$ can be found by binary search; moreover, it is enough to search not over all possible values of $k$, but only over the weights present on the edges. Complexity: $O((n + m) \log C)$ or $O((n + m) \log m)$.
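The feasibility check plus the binary search can be sketched as follows (function names are made up; the check uses Kahn's algorithm for cycle detection, and reconstructing the reversed-road list is omitted):

```python
from collections import deque

def acyclic_after_removing(n, edges, k):
    # keep only the edges with weight > k and check that the rest is a DAG
    adj = [[] for _ in range(n + 1)]
    indeg = [0] * (n + 1)
    for u, v, w in edges:
        if w > k:
            adj[u].append(v)
            indeg[v] += 1
    q = deque(v for v in range(1, n + 1) if indeg[v] == 0)
    seen = 0
    while q:
        u = q.popleft()
        seen += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return seen == n  # Kahn's algorithm visits every vertex iff there is no cycle

def min_controllers(n, edges):
    # binary search over the candidate answers: 0 and the edge weights
    cands = sorted({0} | {w for _, _, w in edges})
    lo, hi = 0, len(cands) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if acyclic_after_removing(n, edges, cands[mid]):
            hi = mid
        else:
            lo = mid + 1
    return cands[lo]
```

Feasibility is monotone in $k$ (a larger $k$ only removes more edges), which justifies the binary search.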
[ "binary search", "dfs and similar", "graphs" ]
2,200
null
1100
F
Ivan and Burgers
Ivan loves burgers and spending money. There are $n$ burger joints on the street where Ivan lives. Ivan has $q$ friends, and the $i$-th friend suggested to meet at the joint $l_i$ and walk to the joint $r_i$ $(l_i \leq r_i)$. While strolling with the $i$-th friend Ivan can visit all joints $x$ which satisfy $l_i \leq x \leq r_i$. For each joint Ivan knows the cost of the most expensive burger in it, it costs $c_i$ burles. Ivan wants to visit some subset of joints on his way, in each of them he will buy the most expensive burger and spend the most money. But there is a small issue: his card broke and instead of charging him for purchases, the amount of money on it changes as follows. If Ivan had $d$ burles before the purchase and he spent $c$ burles at the joint, then after the purchase he would have $d \oplus c$ burles, where $\oplus$ denotes the bitwise XOR operation. Currently Ivan has $2^{2^{100}} - 1$ burles and he wants to go out for a walk. Help him to determine the maximal amount of burles he can spend if he goes for a walk with the friend $i$. The amount of burles he spends is defined as the difference between the initial amount on his account and the final account.
Note that to answer a query on a segment it is enough to know a basis of this segment, that is, a minimal set of numbers whose XOR combinations give exactly the numbers representable on this segment. Since $0 \leq c_i \leq 10^6 < 2^{20}$, the basis contains at most $20$ numbers. To find the maximum representable number, run Gaussian elimination on the basis so that no two basis numbers share the same highest bit. Then the maximum is obtained greedily: consider the basis numbers starting from the one with the highest bit, and XOR each into the answer if it increases the current answer. The whole problem can be solved with the "divide and conquer" method: if we split the segment in half, then every query either lies completely inside one of the halves or spans both. For queries that lie completely in one half, we answer recursively. For queries spanning both halves, we combine two linear hulls, $hull[l, mid]$ and $hull[mid + 1, r]$. All linear hulls of the form $hull[i, mid]$ and $hull[mid + 1, j]$ can be obtained by adding vectors sequentially from right to left (on the right half, from left to right). An alternative solution was proposed during the contest. Note that a query on a segment is a suffix of some prefix of the array. We gradually extend the prefix of the array and maintain the "maximally right" basis of it (for the current prefix, the basis obtained by greedily adding array values while scanning from right to left). When a new element arrives, either it is not representable in the current basis, and then it must be added to the basis; or it is representable, and then we must find the leftmost basis element whose removal does not change the linear hull. This can be done by simulating Gaussian elimination from right to left: the first element at which the new value becomes representable must be removed from the basis.
Since the basis is small, this operation can be performed on every vector addition. This yields the answer to the problem: for a fixed right border, we know at which points the basis changes as the left border moves from right to left. Complexity is $O((n + q) \log^2 C)$ or $O(n \log^2 C + q)$.
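The two basic primitives, inserting into a GF(2) linear basis and extracting the maximum XOR, can be sketched as follows (function names are illustrative):

```python
def insert_vector(basis, x):
    # Gaussian elimination over GF(2): basis[b] is a vector whose
    # highest set bit is b, or 0 if no such vector yet
    for b in range(len(basis) - 1, -1, -1):
        if not (x >> b) & 1:
            continue
        if basis[b] == 0:
            basis[b] = x
            return
        x ^= basis[b]      # reduce by the existing vector and continue

def max_xor(basis):
    # greedy: scan from the highest bit, keep the XOR only if it grows
    res = 0
    for b in range(len(basis) - 1, -1, -1):
        res = max(res, res ^ basis[b])
    return res
```

For example, inserting $12$, $14$ and $23$ gives a basis whose maximum representable XOR is $12 \oplus 23 = 27$.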
[ "data structures", "divide and conquer", "greedy", "math" ]
2,500
null
1101
A
Minimum Integer
You are given $q$ queries in the following form: given three integers $l_i$, $r_i$ and $d_i$, find the minimum \textbf{positive} integer $x_i$ such that it is divisible by $d_i$ and it does not belong to the segment $[l_i, r_i]$. Can you answer all the queries? Recall that a number $x$ belongs to segment $[l, r]$ if $l \le x \le r$.
There are two basic cases we have to consider: either the element we want to find is less than $l_i$, or it is greater than $r_i$. In the first case, we are interested in $d_i$ itself: it is the minimum positive number divisible by $d_i$, and if it is less than $l_i$, then it is the answer. In the second case, we have to find minimum element that is greater than $r_i$ and is divisible by $d_i$. This can be done as follows: we calculate the number of elements divisible by $d_i$ that are not greater than $r_i$ as $c = \lfloor \frac{r_i}{d_i} \rfloor$, and then we take $(c+1)$-th element, which is $d_i(c + 1)$.
[ "math" ]
1,000
q = int(input()) for i in range(q): l, r, d = map(int, input().split()) if(d < l or d > r): print(d) else: print((r // d) * d + d)
1101
B
Accordion
An accordion is a string (yes, in the real world accordions are musical instruments, but let's forget about it for a while) which can be represented as a concatenation of: an opening bracket (ASCII code $091$), a colon (ASCII code $058$), some (possibly zero) vertical line characters (ASCII code $124$), another colon, and a closing bracket (ASCII code $093$). The length of the accordion is the number of characters in it. For example, [::], [:||:] and [:|||:] are accordions having length $4$, $6$ and $7$. (:|:), {{:||:}}, [:], ]:||:[ are not accordions. You are given a string $s$. You want to transform it into an accordion by removing some (possibly zero) characters from it. Note that you may not insert new characters or reorder existing ones. Is it possible to obtain an accordion by removing characters from $s$, and if so, what is the maximum possible length of the result?
No special cases, no clever observations: just greedy. The solution consists of six steps: remove the prefix of the string up to and including the leftmost '[' character (if there is no such character, print -1); remove the prefix up to and including the leftmost ':' character (if there is no such character, print -1); reverse the string; remove the prefix up to and including the leftmost ']' character (if there is no such character, print -1); remove the prefix up to and including the leftmost ':' character (if there is no such character, print -1); print the number of '|' characters in the remaining string plus four.
[ "greedy", "implementation" ]
1,300
#include <bits/stdc++.h> using namespace std; void rem(string &s, const string &c) { auto pos = s.find(c); if (pos == string::npos) { cout << -1 << endl; exit(0); } s.erase(0, pos + 1); } int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif string s; cin >> s; rem(s, "["); rem(s, ":"); reverse(s.begin(), s.end()); rem(s, "]"); rem(s, ":"); cout << count(s.begin(), s.end(), '|') + 4 << endl; return 0; }
1101
C
Division and Union
There are $n$ segments $[l_i, r_i]$ for $1 \le i \le n$. You should divide all segments into two non-empty groups in such a way that there is no pair of segments from different groups which share at least one common point, or say that it's impossible to do it. Each segment should belong to exactly one group. To speed up the testing process, you will be given multiple test cases in one input.
Let's prove that the division is possible if and only if the union of all segments consists of two or more disjoint segments. If the union has at least two components, then we can choose one of them, put all segments it contains into one group, and all other segments into the other group. Conversely, if we can divide all segments into two groups so that no pair from different groups intersects, then the union of the segments of the first group does not intersect the union of the second, and so the union of all segments consists of several non-intersecting components. How does this help? If the union consists of several components, then there exists $x$ such that for any segment $[l_i, r_i]$ either $l_i \le r_i \le x$ or $x < l_i \le r_i$, and both parts are non-empty. Moreover, $x$ equals one of the $r_i$. This leads us straight to one of the possible solutions: sort all segments by $r_i$ in increasing order, and for each $r_i$ check that $r_i < \min\limits_{i < j \le n}{l_j}$ (a suffix minimum). If we've found such an $r_i$, then the whole prefix goes to one group and the suffix to the other.
[ "sortings" ]
1,500
#include<bits/stdc++.h> using namespace std; #define x first #define y second typedef pair<int, int> pt; int n; vector< pair<pt, int> > segs; inline bool read() { if(!(cin >> n)) return false; segs.resize(n); for(int i = 0; i < n; i++) { cin >> segs[i].x.x >> segs[i].x.y; segs[i].y = i; } return true; } bool cmp(const pair<pt, int> &a, const pair<pt, int> &b) { if(a.x.y != b.x.y) return a.x.y < b.x.y; if(a.x.x != b.x.x) return a.x.x < b.x.x; return a.y < b.y; } inline void solve() { sort(segs.begin(), segs.end(), cmp); int mn = segs[n - 1].x.x; for(int i = n - 2; i >= 0; i--) { if(segs[i].x.y < mn) { vector<int> ts(n, 2); for(int id = 0; id <= i; id++) ts[segs[id].y] = 1; for(int t : ts) cout << t << ' '; cout << '\n'; return; } mn = min(mn, segs[i].x.x); } cout << -1 << '\n'; } int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); int tt = clock(); #endif ios_base::sync_with_stdio(0); cin.tie(0), cout.tie(0); cout << fixed << setprecision(15); int tc; cin >> tc; while(tc--) { assert(read()); solve(); #ifdef _DEBUG cerr << "TIME = " << clock() - tt << endl; tt = clock(); #endif } return 0; }
1101
D
GCD Counting
You are given a tree consisting of $n$ vertices. A number is written on each vertex; the number on vertex $i$ is equal to $a_i$. Let's denote the function $g(x, y)$ as the greatest common divisor of the numbers written on the vertices belonging to the simple path from vertex $x$ to vertex $y$ (including these two vertices). Also let's denote $dist(x, y)$ as the number of vertices on the simple path between vertices $x$ and $y$, including the endpoints. $dist(x, x) = 1$ for every vertex $x$. Your task is to calculate the maximum value of $dist(x, y)$ among such pairs of vertices that $g(x, y) > 1$.
I know there exists an $O(n \log MAXN)$ solution, and the author of the problem promises to tell it to you (here he explained it). I'd love to tell you a solution that is easier to code and works in about the same time, $O(n \log^2 MAXN)$. At first, notice that it is enough to check only the paths on which all vertices are divisible by some common prime. For each vertex $v$ let's calculate the longest path passing through it. That means one part of this path goes down into one child of it and another part goes down into another child. For each vertex we will store the lengths of the longest downward paths through vertices with values divisible by each prime in $a_v$. That is $O(n \log MAXN)$ memory. To recalculate the answer we collect all values of the children nodes, sort them and update the answer with the two pointers technique. Don't forget about the case of $n = 1$! Overall complexity: $O(n \log^2 MAXN)$.
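A small helper that the dp above relies on — collecting the distinct prime divisors of each $a_v$ by trial division (a sketch; in the reference code this factorization is inlined rather than a separate function):

```cpp
#include <vector>

// Distinct prime divisors of x, found by trial division in O(sqrt(x)).
// Each vertex stores one dp entry (longest downward path) per prime
// returned here, which bounds the memory by O(n log MAXN).
std::vector<int> distinctPrimes(int x) {
    std::vector<int> primes;
    for (int p = 2; (long long)p * p <= x; p++) {
        if (x % p == 0) {
            primes.push_back(p);
            while (x % p == 0) x /= p; // strip this prime completely
        }
    }
    if (x > 1) primes.push_back(x); // leftover prime factor > sqrt(original x)
    return primes;
}
```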
[ "data structures", "dfs and similar", "dp", "number theory", "trees" ]
2,000
#include <bits/stdc++.h> using namespace std; #define x first #define y second #define mp make_pair #define pb push_back #define sqr(a) ((a) * (a)) #define sz(a) int(a.size()) #define all(a) a.begin(), a.end() #define forn(i, n) for(int i = 0; i < int(n); i++) #define fore(i, l, r) for(int i = int(l); i < int(r); i++) typedef long long li; typedef long double ld; typedef pair<int, int> pt; template <class A, class B> ostream& operator << (ostream& out, const pair<A, B> &a) { return out << "(" << a.x << ", " << a.y << ")"; } template <class A> ostream& operator << (ostream& out, const vector<A> &v) { out << "["; forn(i, sz(v)) { if(i) out << ", "; out << v[i]; } return out << "]"; } mt19937 rnd(time(NULL)); const int INF = int(1e9); const li INF64 = li(1e18); const int MOD = INF + 7; const ld EPS = 1e-9; const ld PI = acos(-1.0); const int N = 200 * 1000 + 13; int n; int a[N]; vector<int> g[N]; bool read () { if (scanf("%d", &n) != 1) return false; forn(i, n) g[i].clear(); forn(i, n) scanf("%d", &a[i]); forn(i, n - 1){ int x, y; scanf("%d%d", &x, &y); --x, --y; g[x].pb(y); g[y].pb(x); } return true; } vector<pt> dp[N]; int ans; void calc(int v, int p = -1){ vector<pt> chd; for (auto u : g[v]) if (u != p){ calc(u, v); for (auto it : dp[u]) chd.pb(it); } sort(all(chd)); forn(i, sz(chd)){ int j = i - 1; int mx1 = 0, mx2 = 0; while (j + 1 < sz(chd) && chd[j + 1].x == chd[i].x){ ++j; if (chd[j].y >= mx1) mx2 = mx1, mx1 = chd[j].y; else if (chd[j].y > mx2) mx2 = chd[j].y; } if (a[v] % chd[i].x == 0){ ans = max(ans, mx1 + mx2 + 1); dp[v].pb(mp(chd[i].x, mx1 + 1)); while (a[v] % chd[i].x == 0) a[v] /= chd[i].x; } else{ ans = max(ans, mx1); } i = j; } for (int i = 2; i * i <= a[v]; ++i) if (a[v] % i == 0){ dp[v].pb(mp(i, 1)); ans = max(ans, 1); while (a[v] % i == 0) a[v] /= i; } if (a[v] > 1){ dp[v].pb(mp(a[v], 1)); ans = max(ans, 1); } } void solve() { forn(i, N) dp[i].clear(); ans = 0; calc(0); printf("%d\n", ans); } int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); int tt = clock(); #endif cerr.precision(15); cout.precision(15); cerr << fixed; cout << fixed; #ifdef _DEBUG while(read()) { #else if (read()){ #endif solve(); #ifdef _DEBUG cerr << "TIME = " << clock() - tt << endl; tt = clock(); #endif } }
1101
E
Polycarp's New Job
Polycarp has recently got himself a new job. He now earns so much that his old wallet can't even store all the money he has. Berland bills somehow come in lots of different sizes. However, all of them are shaped as rectangles (possibly squares). All wallets are also produced in form of rectangles (possibly squares). A bill $x \times y$ fits into some wallet $h \times w$ if either $x \le h$ and $y \le w$ or $y \le h$ and $x \le w$. Bills can overlap with each other in a wallet and an infinite amount of bills can fit into a wallet. That implies that all the bills Polycarp currently has fit into a wallet if every single one of them fits into it independently of the others. Now you are asked to perform queries of two types: - $+~x~y$ — Polycarp earns a bill of size $x \times y$; - $?~h~w$ — Polycarp wants to check if all the bills he has earned to this moment fit into a wallet of size $h \times w$. It is guaranteed that there is at least one query of type $1$ before the first query of type $2$ and that there is at least one query of type $2$ in the input data. For each query of type $2$ print "YES" if all the bills he has earned to this moment fit into a wallet of given size. Print "NO" otherwise.
Let's find the smallest wallet that fits all bills. One of its sides is the maximum side over all bills. Now we orient the bills in such a way that their longer side is put against this side of the wallet; the second side of the wallet is then the maximum of the other sides. More formally, for a set of bills $(a_1, b_1)$, $(a_2, b_2)$, ... ($a_i \le b_i$ for each $i$), the minimum wallet is $(\max a_i, \max b_i)$. The minimum wallet fits into every wallet that fits all the bills, so the solution is maintaining the maximums of all $a_i$ and of all $b_i$ and checking that $h \ge \max a_i$ and $w \ge \max b_i$ (where $h \le w$). Choose your i/o functions wisely. Overall complexity: $O(n)$.
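A minimal sketch of this invariant (the struct and member names are mine; the reference solution keeps the same two maximums in plain variables, normalizing each rectangle so the first side is the larger one):

```cpp
#include <algorithm>
#include <utility>

// The smallest wallet fitting all bills seen so far is
// (max larger side, max smaller side) after normalizing each bill.
struct Wallet {
    int mxa = 0, mxb = 0;           // larger / smaller side of the minimum wallet
    void addBill(int x, int y) {
        if (x < y) std::swap(x, y); // normalize: x >= y
        mxa = std::max(mxa, x);
        mxb = std::max(mxb, y);
    }
    bool fits(int h, int w) const { // do all bills fit into an h x w wallet?
        if (h < w) std::swap(h, w); // normalize the wallet the same way
        return mxa <= h && mxb <= w;
    }
};
```

After adding bills $3 \times 1$ and $2 \times 2$, the minimum wallet is $3 \times 2$: a $3 \times 2$ query fits, a $1 \times 5$ query does not.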
[ "implementation" ]
1,500
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; int main() { int n; scanf("%d", &n); int mxa = 0, mxb = 0; static char buf[5]; forn(i, n){ int x, y; scanf("%s%d%d", buf, &x, &y); if (x < y) swap(x, y); if (buf[0] == '+'){ mxa = max(mxa, x); mxb = max(mxb, y); } else{ puts(mxa <= x && mxb <= y ? "YES" : "NO"); } } }
1101
F
Trucks and Cities
There are $n$ cities along the road, which can be represented as a straight line. The $i$-th city is situated at the distance of $a_i$ kilometers from the origin. All cities are situated in the same direction from the origin. There are $m$ trucks travelling from one city to another. Each truck can be described by $4$ integers: starting city $s_i$, finishing city $f_i$, fuel consumption $c_i$ and number of possible refuelings $r_i$. The $i$-th truck will spend $c_i$ litres of fuel per one kilometer. When a truck arrives in some city, it can be refueled (but refueling is impossible in the middle of nowhere). The $i$-th truck can be refueled at most $r_i$ times. Each refueling makes truck's gas-tank full. All trucks start with full gas-tank. All trucks will have gas-tanks of the same size $V$ litres. You should find minimum possible $V$ such that all trucks can reach their destinations without refueling more times than allowed.
First (bonus) solution: implement the idea from Blogewoosh #6. Time complexity will be roughly $O((q + \log{q} \log{\text{MAX}})n)$ and space complexity is $O(n + q)$. Honest solution: note that for each truck the lower bound on the answer is $\max(c_i \cdot (a[p_{j + 1}] - a[p_{j}])) = c_i \cdot \max(a[p_{j + 1}] - a[p_j])$, where $p_0 = s_i, p_1, p_2, \dots, p_{r_i + 1} = f_i$ is the optimal partition of $[a[s_i], a[f_i]]$ into $r_i + 1$ segments (the partition which minimizes the maximum segment length); this partition doesn't depend on the truck's $c_i$. So it is enough to calculate $d[l][r][k]$ — the value of the optimal partition of the segment $[a[l], a[r]]$ into $k$ segments. Let $opt[l][r][k]$ be the position where the last segment starts in a partition achieving $d[l][r][k]$. Note that $opt[l][r][k] \le opt[l][r + 1][k]$. On the other hand, $d[l][r][k] = \min\limits_{l \le j \le r}(\max(d[l][j][k - 1], a[r] - a[j]))$. But $d[l][j][k - 1] \le d[l][j + 1][k - 1]$ and $a[r] - a[j] > a[r] - a[j + 1]$, so $\max(d[l][j][k - 1], a[r] - a[j])$ is, in a sense, "convex" in $j$. Hence the best $j$ is no more than $opt[l][r][k]$, and we can treat $j$ as a second pointer (along with $r$ as the first pointer): for each $r$ we move $j$ forward while the answer "relaxes" (decreases or stays the same). As a result, for each $l$ and $k$ there are $O(n)$ operations in total. Optimizing memory consumption is easy if we notice that we can iterate over $l$ rather than store it as a state of the dp. In the end, time complexity is $O(n^3 + q)$ and space complexity is $O(n^2 + q)$.
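The bonus (randomized binary search) solution needs a feasibility check: given a tank range in kilometers, greedily count the refuels needed. A hedged sketch of that check, mirroring the getCnt routine in the reference code (the function name is mine):

```cpp
#include <vector>

// With a tank worth `dist` kilometers of range, greedily drive as far as
// possible before each refuel. Returns the number of refuels needed to go
// from city s to city f (positions in a, 0-indexed, s < f), or -1 if some
// gap between reachable cities exceeds the range.
int refuelsNeeded(const std::vector<long long>& a, int s, int f, long long dist) {
    int u = s, cnt = 0;
    while (u < f) {
        int v = u;
        while (v < f && a[v + 1] - a[u] <= dist) v++; // farthest reachable city
        if (v == u) return -1; // even the next city is unreachable on a full tank
        cnt++;
        u = v;
    }
    return cnt - 1; // the initial full tank is not a refuel
}
```

Since the refuel count is non-increasing in `dist`, this check is exactly what a binary search on the tank size $V$ (with `dist = V / c_i`) needs.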
[ "binary search", "dp" ]
2,400
#include<bits/stdc++.h> using namespace std; #define fore(i, l, r) for(int i = int(l); i < int(r); i++) #define sz(a) int((a).size()) #define all(a) (a).begin(), (a).end() #define mp make_pair #define pb push_back #define x first #define y second typedef long long li; typedef pair<int, int> pt; const int INF = int(1e9); const li INF64 = li(1e18); const int N = 411; int n, m; int a[N]; vector< pair<pt, pt> > qs; inline bool read() { if(!(cin >> n >> m)) return false; fore(i, 0, n) cin >> a[i]; qs.resize(m); fore(i, 0, m) { cin >> qs[i].x.x >> qs[i].x.y >> qs[i].y.x >> qs[i].y.y; qs[i].x.x--; qs[i].x.y--; } return true; } int getCnt(const pair<pt, pt> &q, li mx) { li dist = mx / q.y.x; int s = q.x.x, f = q.x.y; int u = s, cnt = 0; while(u < f) { int v = u; while(v < f && a[v + 1] - a[u] <= dist) v++; if(v == u) return INF; cnt++; u = v; } return cnt - 1; } li upd(const pair<pt, pt> &q, li lf, li rg) { if(getCnt(q, lf) <= q.y.y) return lf; while(rg - lf > 1) { li mid = (lf + rg) >> 1; (getCnt(q, mid) <= q.y.y ? rg : lf) = mid; } return rg; } inline unsigned int getHash(const vector<int> &vals) { unsigned int hash = 0; for(int v : vals) hash = hash * 3 + v; return hash; } inline void solve() { auto seed = getHash(vector<int>(a, a + n)); fore(i, 0, m) seed = seed * 3 + getHash({qs[i].x.x, qs[i].x.y, qs[i].y.x, qs[i].y.y}); mt19937 rnd(seed); vector<int> ids(m, 0); iota(all(ids), 0); shuffle(all(ids), rnd); li curv = 0; for(int id : ids) curv = upd(qs[id], curv, 2 * INF64); cout << curv << endl; } int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); int tt = clock(); #endif ios_base::sync_with_stdio(0); cin.tie(0), cout.tie(0); cout << fixed << setprecision(15); if(read()) { solve(); #ifdef _DEBUG cerr << "TIME = " << clock() - tt << endl; tt = clock(); #endif } return 0; }
1101
G
(Zero XOR Subset)-less
You are given an array $a_1, a_2, \dots, a_n$ of integer numbers. Your task is to divide the array into the maximum number of segments in such a way that: - each element is contained in \textbf{exactly one} segment; - each segment contains at least one element; - there doesn't exist a non-empty subset of segments such that bitwise XOR of the numbers from them is equal to $0$. Print the maximum number of segments the array can be divided into. Print -1 if no suitable division exists.
Let's consider some division $[0, i_1)$, $[i_1, i_2)$, ..., $[i_k, n)$. Represent the XOR sum of a subset of segments via prefix-XORs: the segment sums are $pr[i_1] \oplus pr[0]$, $pr[i_2] \oplus pr[i_1]$, ..., $pr[n] \oplus pr[i_k]$. I claim that you can collect any subset that is a XOR of an even number of $pr[x]$ for pairwise distinct values of $x$. Let's take a look at some prefix of processed segments where the last segment is taken into the subset. The previously taken $pr[x]$'s can't be changed; the last taken $pr[x]$ can either be eliminated if we also take the current segment (that segment erases one value and adds one) or added to the answer if we don't take it (but then the next taken segment will add two values). You can see that the parity doesn't change. Moreover, you can collect any subset that is a XOR of an odd number of $pr[x]$ for pairwise distinct values of $x$: just pretend $pr[0]$ is also taken into the answer, as its value is $0$; then all the even subsets which included it become odd. This way we can collect all subsets of the $pr[x]$ for a given division. Now you just want to find the division that produces the maximum number of linearly independent numbers (binary vectors), that is, the size of the basis of the span of the chosen numbers. Now it's time to use the fact that adding a number to the set can only increase the size of the basis of the span. Thus, adding anything to the maximum set won't change the answer (otherwise the result would be greater than the "maximum"). Finally, the maximum basis size equals the basis size of all the prefix-XORs, which is easily calculated in $O(n \log MAXN)$. The only corner case is $pr[n]$ being $0$ itself: then for any division the full subset also gives $0$. That is the only case with answer $-1$. Overall complexity: $O(n \log MAXN)$.
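The basis computation at the end is a standard XOR (GF(2)) linear basis; a minimal sketch, with names of my choosing:

```cpp
#include <array>

// Linear basis over GF(2). insert() returns true iff the value was
// independent of the current basis. The answer to the problem is the
// basis size of all prefix XORs (or -1 when the total XOR pr[n] is 0).
struct XorBasis {
    static const int LOG = 30;
    std::array<int, LOG> base{}; // base[i] has highest set bit i, or 0 if empty
    bool insert(int v) {
        for (int i = LOG - 1; i >= 0; i--) {
            if (!(v >> i & 1)) continue;
            if (!base[i]) { base[i] = v; return true; }
            v ^= base[i]; // reduce by the existing vector with this top bit
        }
        return false; // v was a XOR of existing basis vectors
    }
    int size() const {
        int cnt = 0;
        for (int i = 0; i < LOG; i++) cnt += (base[i] != 0);
        return cnt;
    }
};
```

For instance, after inserting $5$ and $3$, inserting $6 = 5 \oplus 3$ fails and the basis size stays $2$.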
[ "math", "matrices" ]
2,300
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; const int N = 200 * 1000 + 13; const int LOGN = 30; int n; int a[N], pr[N]; int base[LOGN]; void try_gauss(int v){ for(int i = LOGN - 1; i >= 0; i--) if (base[i] != -1 && (v & (1 << i))) v ^= base[i]; if (v == 0) return; for(int i = LOGN - 1; i >= 0; i--) if (v & (1 << i)){ base[i] = v; return; } } int main() { scanf("%d", &n); forn(i, n) scanf("%d", &a[i]); memset(base, -1, sizeof(base)); forn(i, n){ pr[i + 1] = pr[i] ^ a[i]; try_gauss(pr[i + 1]); } if (pr[n] == 0){ puts("-1"); return 0; } int siz = 0; forn(i, LOGN) siz += (base[i] != -1); printf("%d\n", siz); }
1102
A
Integer Sequence Dividing
You are given an integer sequence $1, 2, \dots, n$. You have to divide it into two sets $A$ and $B$ in such a way that each element belongs to \textbf{exactly one} set and $|sum(A) - sum(B)|$ is minimum possible. The value $|x|$ is the absolute value of $x$ and $sum(S)$ is the sum of elements of the set $S$.
The first solution: take $n$ modulo $4$ and solve the problem manually (if $n \bmod 4$ is $0$ or $3$ the answer is $0$; if it is $1$ or $2$ the answer is $1$). Proof: let's see what we can do with the numbers $n$, $n - 1$, $n - 2$ and $n - 3$. We can put $n$ and $n - 3$ into $A$ and put $n - 1$ and $n - 2$ into $B$; then the difference between the sums of these four numbers is $0$. We can process the last four numbers this way while at least four numbers remain, and then we are left with the case $n \le 3$. We can verify the solution for these small cases by brute force. The second solution: if $\sum\limits_{i=1}^{n}i$ is even then the answer is $0$, otherwise the answer is $1$. The sum above is just $\frac{n(n+1)}{2}$. Proof: given the integer sequence $1, 2, \dots, n$, we can obtain every number from $0$ to $\frac{n(n+1)}{2}$ as the sum of some elements of this sequence. How? Greedily! You can see how this greedy works (and prove it, if you want) yourself. So what's next? If $\frac{n(n+1)}{2}$ is even then we can obtain the sum $\frac{n(n+1)}{4}$ in both $A$ and $B$. Otherwise we can only obtain $\lfloor\frac{n(n+1)}{4}\rfloor$ in $A$ and $\lceil\frac{n(n+1)}{4}\rceil$ in $B$ (or vice versa).
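The parity formula of the second solution, together with a brute force over all subsets that can be used to check it for small $n$ (both function names are illustrative, not from a reference solution):

```cpp
#include <algorithm>

// Second solution: |sum(A) - sum(B)| is the parity of n(n+1)/2, because
// every value from 0 to n(n+1)/2 is reachable as a subset sum of 1..n.
long long minDifference(long long n) {
    return (n * (n + 1) / 2) % 2;
}

// Brute force over all 2^n subsets A of {1..n}; usable to verify the
// formula for small n.
long long minDifferenceBrute(int n) {
    long long total = (long long)n * (n + 1) / 2;
    long long best = total;
    for (int mask = 0; mask < (1 << n); mask++) {
        long long sumA = 0;
        for (int i = 1; i <= n; i++)
            if (mask >> (i - 1) & 1) sumA += i;
        long long diff = 2 * sumA - total; // sumA - (total - sumA)
        if (diff < 0) diff = -diff;
        best = std::min(best, diff);
    }
    return best;
}
```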
[ "math" ]
800
#include <bits/stdc++.h> using namespace std; int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif int n; cin >> n; long long sum = n * 1ll * (n + 1) / 2; cout << (sum & 1) << endl; return 0; }
1102
B
Array K-Coloring
You are given an array $a$ consisting of $n$ integer numbers. You have to color this array in $k$ colors in such a way that: - Each element of the array should be colored in some color; - For each $i$ from $1$ to $k$ there should be \textbf{at least one} element colored in the $i$-th color in the array; - For each $i$ from $1$ to $k$ all elements colored in the $i$-th color should be \textbf{distinct}. Obviously, such coloring might be impossible. In this case, print "NO". Otherwise print "YES" and \textbf{any} coloring (i.e. numbers $c_1, c_2, \dots c_n$, where $1 \le c_i \le k$ and $c_i$ is the color of the $i$-th element of the given array) satisfying the conditions above. If there are multiple answers, you can print \textbf{any}.
How can we solve this problem easily? Firstly, let's sort the initial array (but maintain the initial positions of the elements to restore the answer). Then let's just distribute all the colors uniformly: color the first element in the first color, the second one - in the second, the $k$-th element - in the $k$-th color, the $(k+1)$-th - in the first color, and so on. So we color the $i$-th element in the color $(i - 1) \% k + 1$ ($\%$ is just the modulo operation). We can see that the answer is "NO" if there is an element with frequency at least $k+1$ in the array (by the pigeonhole principle, two equal elements would receive the same color). Otherwise our solution builds a correct answer. We can find such an element in the array naively, using counting sort or many other approaches. Time complexity: $O(n \log n)$ or $O(n^2)$.
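The round-robin construction can be sketched as follows (the function name and the extra $n < k$ guard are mine; two equal values landing $k$ positions apart after sorting correspond exactly to a value with frequency at least $k+1$):

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Sort indices by value and hand out colors 1..k round-robin. Returns the
// coloring in the original order, or an empty vector in the "NO" case.
std::vector<int> kColoring(const std::vector<int>& a, int k) {
    int n = a.size();
    if (n < k) return {}; // cannot use every color at least once
    std::vector<int> order(n);
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(), [&](int i, int j) { return a[i] < a[j]; });
    std::vector<int> color(n);
    for (int pos = 0; pos < n; pos++) {
        // equal values are consecutive after sorting; if two of them get the
        // same color, that value occurs at least k+1 times
        if (pos >= k && a[order[pos]] == a[order[pos - k]]) return {};
        color[order[pos]] = pos % k + 1;
    }
    return color;
}
```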
[ "greedy", "sortings" ]
1,400
#include <bits/stdc++.h> using namespace std; int main() { #ifdef _DEBUG freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); #endif int n, k; cin >> n >> k; vector<pair<int, int>> a(n); for (int i = 0; i < n; ++i) { cin >> a[i].first; a[i].second = i; } sort(a.begin(), a.end()); vector<vector<int>> buckets(k); vector<int> res(n); for (int i = 0; i < n; ++i) { buckets[i % k].push_back(a[i].first); res[a[i].second] = i % k; } for (int i = 0; i < k; ++i) { for (int j = 0; j < int(buckets[i].size()) - 1; ++j) { if (buckets[i][j] == buckets[i][j + 1]) { cout << "NO" << endl; return 0; } } } cout << "YES" << endl; for (int i = 0; i < n; ++i) { cout << res[i] + 1 << " "; } cout << endl; return 0; }