contest_id | index | title | statement | tutorial | tags | rating | code
|---|---|---|---|---|---|---|---|
1618
|
G
|
Trader Problem
|
Monocarp plays a computer game (yet again!). This game has a unique trading mechanic.
To trade with a character, Monocarp has to choose one of the items he possesses and trade it for some item the other character possesses. Each item has an integer price. If Monocarp's chosen item has price $x$, then he can trade it for any item \textbf{(exactly one item)} with price not greater than $x+k$.
Monocarp initially has $n$ items, the price of the $i$-th item he has is $a_i$. The character Monocarp is trading with has $m$ items, the price of the $i$-th item they have is $b_i$. Monocarp can trade with this character as many times as he wants (possibly even zero times), each time exchanging one of his items with one of the other character's items according to the aforementioned constraints. Note that if Monocarp gets some item during an exchange, he can trade it for another item (since now the item belongs to him), and vice versa: if Monocarp trades one of his items for another item, he can get his item back by trading something for it.
You have to answer $q$ queries. Each query consists of one integer, which is the value of $k$, and asks you to calculate the maximum possible total cost of items Monocarp can have after some sequence of trades, assuming that he can trade an item of cost $x$ for an item of cost not greater than $x+k$ during each trade. Note that the queries are independent: the trades do not actually occur, Monocarp only wants to calculate the maximum total cost he can get.
|
Suppose we have fixed the value of $k$, so we can trade an item with price $i$ for an item with price $j$ if $j \in [0, i + k]$. It is never optimal to trade an item with a higher price for an item with a lower price, so we could simulate the trading process as follows: find an item owned by Monocarp and a more expensive item owned by the other character which can be traded, and repeat until no suitable pair exists. Unfortunately, this is too slow. Instead, let's analyze: for a given value of $k$, how do we verify that an item of price $x$ can be traded for an item of price $y$ (maybe not right away, but with intermediate trades)? Build a graph of $n + m$ vertices representing items, where two vertices representing items with prices $x$ and $y$ are connected by an edge if and only if $|x - y| \le k$. The edges of the graph represent possible trades, and the paths represent sequences of trades. So, one item can be traded for another (possibly with intermediate trades) if and only if the corresponding vertices belong to the same component. For a fixed value of $k$, we can build this graph, find its components, calculate the number of Monocarp's items in each component and add that many most expensive items from the component to the answer. There are two problems, though. The first is that the graph may have up to $O((n + m)^2)$ edges. But if we sort all items by price, we are only interested in edges between vertices which represent adjacent items in sorted order, so the size of the graph decreases to $O(n + m)$. The other problem is that there are multiple queries for different values of $k$. To handle them, we can sort the values of $k$ in ascending order and process them in that order while maintaining the graph for the current value of $k$. A data structure like DSU, or a method like small-to-large merging, can be used to update the components as they merge.
The last trick: to quickly recalculate the number of items Monocarp has in a component and the sum of most expensive several items, you can build two prefix sum arrays - one over the array storing the costs of the items, and another one over the array which stores values $1$ or $0$ depending on who owns the respective item (the items should still be considered in sorted order). Since each component is a segment of costs of items, prefix sums allow us to calculate the required values in $O(1)$. By the way, knowing that each component is a segment, we can get rid of the graph and the structure that stores it altogether and just maintain a set of segments of items representing the components.
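As an illustration of the whole offline scheme (a sketch with a hypothetical function name `trader_answers` and simplified in-memory I/O, not the solution below), the segment merging with prefix sums might look like:

```python
import bisect

def trader_answers(a, b, queries):
    # Merge both sides' items in sorted order; mark Monocarp's items with 1.
    items = sorted([(v, 1) for v in a] + [(v, 0) for v in b])
    z = len(items)
    pcnt, psum = [0], [0]
    for v, mine in items:
        pcnt.append(pcnt[-1] + mine)
        psum.append(psum[-1] + v)

    def best(l, r):
        # Monocarp owns cnt items in [l, r); he keeps the cnt most expensive ones.
        cnt = pcnt[r] - pcnt[l]
        return psum[r] - psum[r - cnt]

    # Adjacent items with price gap g end up in one component once k >= g.
    events = {}
    for i in range(z - 1):
        events.setdefault(items[i + 1][0] - items[i][0], []).append(i)

    bounds = list(range(z + 1))          # segment boundaries, 0..z
    cur = sum(best(i, i + 1) for i in range(z))
    ans = [(0, cur)]                     # (k threshold, best total value)
    for gap in sorted(events):
        for i in events[gap]:
            j = bisect.bisect_left(bounds, i + 1)   # boundary between i and i+1
            l, m, r = bounds[j - 1], bounds[j], bounds[j + 1]
            cur += best(l, r) - best(l, m) - best(m, r)
            bounds.pop(j)
        ans.append((gap, cur))

    ks = [t for t, _ in ans]
    return [ans[bisect.bisect_right(ks, k) - 1][1] for k in queries]
```

Each query is then answered by a binary search over the precomputed thresholds, exactly as in the C++ solution.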
|
[
"data structures",
"dsu",
"greedy",
"sortings"
] | 2,200
|
#include<bits/stdc++.h>
using namespace std;
long long get(const vector<int>& pcnt, const vector<long long>& psum, pair<int, int> seg)
{
int L = seg.first;
int R = seg.second;
int cnt = pcnt[R] - pcnt[L];
return psum[R] - psum[R - cnt];
}
int main()
{
int n, m, q;
scanf("%d %d %d", &n, &m, &q);
vector<int> a(n), b(m);
for(int i = 0; i < n; i++)
{
scanf("%d", &a[i]);
}
for(int i = 0; i < m; i++)
{
scanf("%d", &b[i]);
}
vector<int> pcnt = {0};
vector<long long> psum = {0ll};
vector<pair<int, int>> order;
for(int i = 0; i < n; i++) order.push_back(make_pair(a[i], 1));
for(int i = 0; i < m; i++) order.push_back(make_pair(b[i], 0));
sort(order.begin(), order.end());
int z = n + m;
for(int i = 0; i < z; i++)
{
pcnt.push_back(pcnt.back() + order[i].second);
psum.push_back(psum.back() + order[i].first);
}
long long cur = 0;
for(int i = 0; i < n; i++)
cur += a[i];
set<pair<int, int>> segs;
for(int i = 0; i < z; i++)
segs.insert(make_pair(i, i + 1));
map<int, vector<int>> events;
for(int i = 0; i < z - 1; i++)
events[order[i + 1].first - order[i].first].push_back(i);
vector<pair<int, long long>> ans = {{0, cur}};
for(auto x : events)
{
int cost = x.first;
vector<int> changes = x.second;
for(auto i : changes)
{
auto itr = segs.upper_bound(make_pair(i, int(1e9)));
auto itl = prev(itr);
pair<int, int> pl = *itl;
pair<int, int> pr = *itr;
cur -= get(pcnt, psum, pl);
cur -= get(pcnt, psum, pr);
pair<int, int> p = make_pair(pl.first, pr.second);
cur += get(pcnt, psum, p);
segs.erase(pl);
segs.erase(pr);
segs.insert(p);
}
ans.push_back(make_pair(cost, cur));
}
for(int i = 0; i < q; i++)
{
int k;
scanf("%d", &k);
int pos = upper_bound(ans.begin(), ans.end(), make_pair(k + 1, -1ll)) - ans.begin() - 1;
printf("%lld\n", ans[pos].second);
}
}
|
1619
|
A
|
Square String?
|
A string is called square if it is some string written twice in a row. For example, the strings "aa", "abcabc", "abab" and "baabaa" are square. But the strings "aaa", "abaaab" and "abcdabc" are not square.
For a given string $s$ determine if it is square.
|
If the length of the given string $s$ is odd, then the answer is NO, since a string written twice in a row always has even length. Otherwise, let $n$ be the length of the string. Let's go through the first half of the string, checking whether its first character equals its $(\frac{n}{2} + 1)$-th, its second equals its $(\frac{n}{2} + 2)$-th, and so on. If the characters in any pair are not equal, the answer is NO; otherwise it is YES.
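The whole check collapses to comparing the two halves; a minimal sketch (hypothetical helper name `is_square`):

```python
def is_square(s):
    # A string is "square" iff it has even length and its halves are identical.
    n = len(s)
    return n % 2 == 0 and s[:n // 2] == s[n // 2:]
```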
|
[
"implementation",
"strings"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
#define forn(i, n) for (int i = 0; i < int(n); i++)
int main() {
int t;
cin >> t;
forn(tt, t) {
string s;
cin >> s;
bool ok = true;
if (s.length() % 2 == 0) {
forn(i, s.length() / 2)
if (s[i] != s[i + s.length() / 2])
ok = false;
} else
ok = false;
cout << (ok ? "YES" : "NO") << endl;
}
}
|
1619
|
B
|
Squares and Cubes
|
Polycarp likes squares and cubes of positive integers. Here is the beginning of the sequence of numbers he likes: $1$, $4$, $8$, $9$, ....
For a given number $n$, count the number of integers from $1$ to $n$ that Polycarp likes. In other words, find the number of such $x$ that $x$ is a square of a positive integer number or a cube of a positive integer number (or both a square and a cube simultaneously).
|
We iterate over positive integers and add their squares (and, separately, their cubes) to a set while these values don't exceed $n$. For $n = 10^9$, the largest square Polycarp can like is $31622^2 = 999950884$, so only about $31622 + 1000$ iterations are needed and the running time is well within the time limit. The answer to the problem is the size of the resulting set.
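A sketch of this counting (hypothetical function name `count_liked`); the set automatically deduplicates sixth powers, which are both squares and cubes:

```python
def count_liked(n):
    liked = set()
    i = 1
    while i * i <= n:          # all squares up to n
        liked.add(i * i)
        i += 1
    i = 1
    while i * i * i <= n:      # all cubes up to n
        liked.add(i * i * i)
        i += 1
    return len(liked)
```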
|
[
"implementation",
"math"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
#define forn(i, n) for (int i = 0; i < int(n); i++)
int main() {
int t;
cin >> t;
forn(tt, t) {
int n;
cin >> n;
set<int> a;
for (int i = 1; i * i <= n; i++)
a.insert(i * i);
for (int i = 1; i * i * i <= n; i++)
a.insert(i * i * i);
cout << a.size() << endl;
}
}
|
1619
|
C
|
Wrong Addition
|
Tanya is learning how to add numbers, but so far she is not doing it correctly. She is adding two numbers $a$ and $b$ using the following algorithm:
- If one of the numbers is shorter than the other, Tanya adds leading zeros so that the numbers are the same length.
- The numbers are processed from right to left (that is, from the least significant digits to the most significant).
- In the first step, she adds the last digit of $a$ to the last digit of $b$ and writes their sum in the answer.
- At each next step, she performs the same operation on each pair of digits in the same place and writes the result to the \textbf{left} side of the answer.
For example, the numbers $a = 17236$ and $b = 3465$ Tanya adds up as follows:
$$ \large{ \begin{array}{r} + \begin{array}{r} 17236\\ 03465\\ \end{array} \\ \hline \begin{array}{r} 1106911 \end{array} \end{array}} $$
- calculates the sum of $6 + 5 = 11$ and writes $11$ in the answer.
- calculates the sum of $3 + 6 = 9$ and writes the result to the left side of the answer to get $911$.
- calculates the sum of $2 + 4 = 6$ and writes the result to the left side of the answer to get $6911$.
- calculates the sum of $7 + 3 = 10$, and writes the result to the left side of the answer to get $106911$.
- calculates the sum of $1 + 0 = 1$ and writes the result to the left side of the answer to get $1106911$.
As a result, she gets $1106911$.
You are given two positive integers $a$ and $s$. Find the number $b$ such that by adding $a$ and $b$ as described above, Tanya will get $s$. Or determine that no suitable $b$ exists.
|
Let's compute the answer into an array $b$, where $b_k$ is the digit at position $k$ of the number we are looking for. Let $i$ be the position of the current digit in the number $a$ and $j$ the position of the current digit in the number $s$ (both starting from the least significant digit). Denote $x = a_i$, $y = s_j$, and consider the cases: if $x \le y$, then the sum $a_i + b_i$ was exactly $y$, so $b_i = y - x$. If $x \gt y$, then the sum $a_i + b_i$ was at least $10$ and we need to look at the next digit of the number $s$. If there isn't one, there is no answer and we output -1. Otherwise we recalculate $y = 10 \cdot s_{j - 1} + s_j$ and decrease $j$ by one. If now $10 \le y \le 18$, then $b_i = y - x$. Otherwise, we output -1, since we cannot get more than $9 + 9 = 18$ when adding two digits, and the cases where $a_i + b_i \lt 10$ have already been considered.
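The digit-by-digit recovery can be sketched as follows (hypothetical function name `find_b`; leading zeros of the recovered number are stripped by the `int` conversion):

```python
def find_b(a, s):
    # Recover b from the least significant digit; return -1 if impossible.
    digits = []
    while s > 0:
        x, y = a % 10, s % 10
        if x <= y:
            digits.append(y - x)
        else:
            s //= 10                      # borrow the next digit of s
            y += 10 * (s % 10)
            if 10 <= y <= 18 and x < y:
                digits.append(y - x)      # y - x <= 9 here, so a valid digit
            else:
                return -1
        a //= 10
        s //= 10
    if a > 0:                             # digits of a left over: impossible
        return -1
    return int(''.join(map(str, reversed(digits))))
```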
|
[
"implementation"
] | 1,200
|
#include<bits/stdc++.h>
#define len(s) (int)s.size()
using namespace std;
using ll = long long;
void solve(){
ll a, s;
cin >> a >> s;
vector<int>b;
while(s){
int x = a % 10;
int y = s % 10;
if(x <= y) b.emplace_back(y - x);
else{
s /= 10;
y += 10 * (s % 10);
if(x < y && y >= 10 && y <= 19) b.emplace_back(y - x);
else{
cout << -1 << '\n';
return;
}
}
a /= 10;
s /= 10;
}
if(a) cout << -1 << '\n';
else{
while (b.size() > 1 && b.back() == 0) b.pop_back();
for(int i = len(b) - 1; i >= 0; i--) cout << b[i];
cout << '\n';
}
}
int main(){
ios_base ::sync_with_stdio(false);
cin.tie(nullptr);
int t;
cin >> t;
while (t){
solve();
t--;
}
return 0;
}
|
1619
|
D
|
New Year's Problem
|
Vlad has $n$ friends, for each of whom he wants to buy one gift for the New Year.
There are $m$ shops in the city, in each of which he can buy a gift for any of his friends. If the $j$-th friend ($1 \le j \le n$) receives a gift bought in the shop with the number $i$ ($1 \le i \le m$), then the friend receives $p_{ij}$ units of joy. The rectangular table $p_{ij}$ is given in the input.
Vlad has time to visit at most $n-1$ shops (where $n$ is the number of \textbf{friends}). He chooses which shops he will visit and for which friends he will buy gifts in each of them.
Let the $j$-th friend receive $a_j$ units of joy from Vlad's gift. Let's find the value $\alpha=\min\{a_1, a_2, \dots, a_n\}$. Vlad's goal is to buy gifts so that the value of $\alpha$ is as large as possible. In other words, Vlad wants to maximize the minimum of the joys of his friends.
For example, let $m = 2$, $n = 2$. Let the joy from the gifts that we can buy in the first shop: $p_{11} = 1$, $p_{12}=2$, in the second shop: $p_{21} = 3$, $p_{22}=4$.
Then it is enough for Vlad to go only to the second shop and buy a gift for the first friend, bringing joy $3$, and for the second — bringing joy $4$. In this case, the value $\alpha$ will be equal to $\min\{3, 4\} = 3$
Help Vlad choose gifts for his friends so that the value of $\alpha$ is as high as possible. Please note that each friend must receive one gift. Vlad can visit at most $n-1$ shops (where $n$ is the number of \textbf{friends}). In the shop, he can buy any number of gifts.
|
Note that if we cannot achieve joy $x$, then we cannot achieve $x+1$, and if we can achieve at least $x$, then we can achieve at least $x-1$. These facts allow us to binary search for the answer. Now we need to understand how to check whether we can guarantee joy at least $x$. We can visit at most $n-1$ shops while buying $n$ gifts, so by the pigeonhole principle we always need to take two gifts from some shop, which means there must be a shop offering two or more gifts with joy at least $x$. Also, each friend should receive a gift with joy at least $x$, so for every friend some shop must offer such a gift. Both conditions can be checked in $O(nm)$. The total solution works in $O(nm \log (nm))$.
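The feasibility check might be sketched like this (hypothetical function name `can_reach`; `p[i][j]` is the joy if friend `j` gets a gift from shop `i`, and the single-friend case is exempted from the pair condition as in the solution below):

```python
def can_reach(p, x):
    # Feasible with <= n-1 shop visits iff every friend has some gift of
    # joy >= x AND some shop offers two or more such gifts.
    n_friends = len(p[0])
    covered = [False] * n_friends
    some_shop_has_two = False
    for row in p:                 # one row per shop
        good = 0
        for j, v in enumerate(row):
            if v >= x:
                covered[j] = True
                good += 1
        if good >= 2:
            some_shop_has_two = True
    return all(covered) and (some_shop_has_two or n_friends == 1)
```

Binary searching `x` over this predicate gives the maximum achievable minimum joy.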
|
[
"binary search",
"greedy",
"sortings"
] | 1,800
|
//
// Created by Vlad on 17.12.2021.
//
#include <bits/stdc++.h>
#define int long long
#define mp make_pair
#define x first
#define y second
#define all(a) (a).begin(), (a).end()
#define rall(a) (a).rbegin(), (a).rend()
/*#pragma GCC optimize("Ofast")
#pragma GCC optimize("no-stack-protector")
#pragma GCC optimize("unroll-loops")
#pragma GCC target("sse,sse2,sse3,ssse3,popcnt,abm,mmx,tune=native")
#pragma GCC optimize("fast-math")
*/
typedef long double ld;
typedef long long ll;
using namespace std;
mt19937 rnd(143);
const int inf = 1e10;
const int M = 998244353;
const ld pi = atan2(0, -1);
const ld eps = 1e-4;
int n, m;
vector<vector<int>> p;
bool check(int x){
vector<bool> abl(m);
bool pair = false;
for(int i = 0; i < n; ++i){
int c = 0;
for(int j = 0; j < m; ++j){
if(p[i][j] >= x){
abl[j] = true;
c++;
}
}
if(c > 1) pair = true;
}
if(!pair && m > 1) return false;
bool ans = true;
for(bool b: abl){
ans = ans && b;
}
return ans;
}
void solve() {
cin >> n >> m;
p.assign(n, vector<int>(m));
for(int i = 0; i < n; ++i){
for(int j = 0; j < m; ++j){
cin >> p[i][j];
}
}
int l = 1, r = 2;
while (check(r)) r *= 2;
while (r - l > 1){
int mid = (r + l) / 2;
if(check(mid)) l = mid;
else r = mid;
}
cout << l;
}
bool multi = true;
signed main() {
//freopen("in.txt", "r", stdin);
//freopen("in.txt", "w", stdout);
int t = 1;
if (multi) {
cin >> t;
}
for (; t != 0; --t) {
solve();
cout << "\n";
}
return 0;
}
|
1619
|
E
|
MEX and Increments
|
Dmitry has an array of $n$ non-negative integers $a_1, a_2, \dots, a_n$.
In one operation, Dmitry can choose any index $j$ ($1 \le j \le n$) and increase the value of the element $a_j$ by $1$. He can choose the same index $j$ multiple times.
For each $i$ from $0$ to $n$, determine whether Dmitry can make the $\mathrm{MEX}$ of the array equal to exactly $i$. If it is possible, then determine the minimum number of operations to do it.
The $\mathrm{MEX}$ of the array is equal to the minimum non-negative integer that is not in the array. For example, the $\mathrm{MEX}$ of the array $[3, 1, 0]$ is equal to $2$, and the $\mathrm{MEX}$ of the array $[3, 3, 1, 4]$ is equal to $0$.
|
First, let's sort the array and consider its elements in non-decreasing order. To make MEX equal to $0$, you need to increase all zeros. To make MEX at least $i$, you first need to make MEX at least $i - 1$, and then, if the value $i - 1$ is missing from the array, you need to create it by increasing some smaller extra element. If there are no extra values less than $i - 1$, then this and all subsequent MEX values cannot be obtained. Otherwise, it is cheapest to use the maximum of the extra values. To track them, you can use a data structure such as a stack: if an element occurs more than once in the array, put its extra occurrences on the stack. Finally, to make MEX exactly $i$, you additionally have to increase every occurrence of $i$ itself.
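A compact sketch of this stack-based sweep (hypothetical function name `mex_costs`; it assumes $0 \le a_i \le n$ as in the solution below, ignoring larger values):

```python
def mex_costs(a):
    # ans[i] = minimum operations to make MEX exactly i, or -1 if impossible.
    n = len(a)
    cnt = [0] * (n + 1)
    for v in a:
        if v <= n:
            cnt[v] += 1
    ans = [-1] * (n + 1)
    stack = []        # extra (duplicate) occurrences, stored by value
    cost = 0          # operations needed so that 0..i-1 are all present
    for i in range(n + 1):
        if i > 0 and cnt[i - 1] == 0:
            if not stack:
                break                     # MEX >= i is unreachable
            cost += (i - 1) - stack.pop() # raise the largest extra to i-1
        ans[i] = cost + cnt[i]            # also bump every copy of i itself
        while i > 0 and cnt[i - 1] > 1:
            cnt[i - 1] -= 1
            stack.append(i - 1)
    return ans
```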
|
[
"constructive algorithms",
"data structures",
"dp",
"greedy",
"implementation",
"math",
"sortings"
] | 1,700
|
#include <iostream>
#include <vector>
#include <stack>
#include <algorithm>
using namespace std;
typedef long long ll;
const int MAX_N = 2e5;
int main() {
int t;
cin >> t;
for (int _ = 0; _ < t; ++_) {
int n;
cin >> n;
vector<int> a(n), cnt(n + 1);
for (int i = 0; i < n; ++i) {
cin >> a[i];
cnt[a[i]]++;
}
sort(a.begin(), a.end());
stack<int> st;
vector<ll> ans(n + 1, -1);
ll sum = 0;
for (int i = 0; i <= n; ++i) {
if (i > 0 && cnt[i - 1] == 0) {
if (st.empty()) {
break;
}
int j = st.top();
st.pop();
sum += i - j - 1;
}
ans[i] = sum + cnt[i];
while (i > 0 && cnt[i - 1] > 1) {
cnt[i - 1]--;
st.push(i - 1);
}
}
for (ll x : ans) {
cout << x << ' ';
}
cout << '\n';
}
}
|
1619
|
F
|
Let's Play the Hat?
|
The Hat is a game of speedy explanation/guessing words (similar to Alias). It's fun. Try it! In this problem, we are talking about a variant of the game when the players are sitting at the table and everyone plays individually (i.e. not teams, but individual gamers play).
$n$ people gathered in a room with $m$ tables ($n \ge 2m$). They want to play the Hat $k$ times. Thus, $k$ games will be played at each table. Each player will play in $k$ games.
To do this, they are distributed among the tables for each game. During each game, one player plays at exactly one table. A player can play at different tables.
Players want to have the most "fair" schedule of games. For this reason, they are looking for a schedule (table distribution for each game) such that:
- At any table in each game there are either $\lfloor\frac{n}{m}\rfloor$ people or $\lceil\frac{n}{m}\rceil$ people (that is, either $n/m$ rounded down, or $n/m$ rounded up). Different numbers of people can play different games at the same table.
- Let's calculate for each player the value $b_i$ — the number of times the $i$-th player played at a table with $\lceil\frac{n}{m}\rceil$ persons ($n/m$ rounded up). Any two values of $b_i$ must differ by no more than $1$. In other words, for any two players $i$ and $j$, it must be true that $|b_i - b_j| \le 1$.
For example, if $n=5$, $m=2$ and $k=2$, then by the first condition either two or three players should play at each table. Consider the following schedules:
- First game: $1, 2, 3$ are played at the first table, and $4, 5$ at the second one. The second game: at the first table they play $5, 1$, and at the second — $2, 3, 4$. This schedule is \textbf{not "fair"} since $b_2=2$ (the second player played twice at a big table) and $b_5=0$ (the fifth player did not play at a big table).
- First game: $1, 2, 3$ are played at the first table, and $4, 5$ at the second one. The second game: at the first table they play $4, 5, 2$, and at the second one — $1, 3$. This schedule is \textbf{"fair"}: $b=[1,2,1,1,1]$ (any two values of $b_i$ differ by no more than $1$).
Find any "fair" game schedule for $n$ people if they play on the $m$ tables of $k$ games.
|
For each game we want to seat $n$ people at $m$ tables; $n \bmod m$ of the tables will be big, with $B = \lceil\frac{n}{m}\rceil$ players each, and the remaining $m - (n \bmod m)$ will be small. Each round, $p = (n \bmod m) \cdot B$ people sit at the big tables. Let's put the people with numbers $0, 1, 2, \dots, p - 1$ at the big tables in the first round (for convenience we index from zero) and the rest at the small ones; in the second round we seat people with numbers $p \bmod n, (p + 1) \bmod n, \dots, (2p - 1) \bmod n$ at the big tables, and so on. We cycle through the players in blocks of $p$. Since $p < n$, no player can be ahead of any other by $2$ or more big-table appearances.
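The block-rotation idea can be sketched as follows (hypothetical function name `fair_schedule`, returning player lists instead of printing them):

```python
def fair_schedule(n, m, k):
    # For each of the k games, return m tables as lists of players 0..n-1.
    big = n % m                      # number of big tables per game
    B = n // m + (1 if big else 0)   # big-table size, ceil(n / m)
    p = list(range(n))
    games = []
    for _ in range(k):
        tables, idx = [], 0
        for t in range(m):
            size = n // m + (1 if t < big else 0)
            tables.append(p[idx:idx + size])
            idx += size
        games.append(tables)
        shift = big * B              # players who sat at big tables this game
        p = p[shift:] + p[:shift]    # rotate so the next block gets big tables
    return games
```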
|
[
"brute force",
"constructive algorithms",
"greedy",
"math"
] | 2,000
|
#include <bits/stdc++.h>
using namespace std;
#define forn(i, n) for (int i = 0; i < int(n); i++)
int main() {
int t;
cin >> t;
forn(tt, t) {
int n, m, k;
cin >> n >> m >> k;
vector<int> p(n);
forn(i, n)
p[i] = i;
if (tt > 0)
cout << endl;
forn(round, k) {
int index = 0;
forn(table, m) {
int size = n / m;
if (table < n % m)
size++;
cout << size;
forn(j, size)
cout << " " << p[index++] + 1;
cout << endl;
}
rotate(p.begin(), p.begin() + (n % m) * ((n + m - 1) / m), p.end());
}
}
}
|
1619
|
G
|
Unusual Minesweeper
|
Polycarp is very fond of playing the game Minesweeper. Recently he found a similar game and there are such rules.
There are mines on the field; for each mine, the coordinates of its location ($x_i, y_i$) are known. Each mine has a lifetime in seconds, after which it will explode. After the explosion, the mine also detonates all mines vertically and horizontally within a distance of $k$ (along two perpendicular lines). As a result, we get an explosion on the field in the form of a "plus" symbol ('\textbf{+}'). Thus, one explosion can cause new explosions, and so on.
Also, Polycarp can detonate any one mine every second, starting from second zero. After that, a chain reaction of explosions also takes place. Mines explode \textbf{instantly} and also \textbf{instantly} detonate other mines according to the rules described above.
Polycarp wants to set a new record and asks you to help him calculate in what minimum number of seconds all mines can be detonated.
|
Our first task is to separate the mines into components. We store in a hashmap $mapx$, for each $x$ coordinate, all $y$ coordinates where there is a mine, and do the same with a hashmap $mapy$. Then, going through the arrays in $mapx$ and $mapy$, we connect adjacent elements into one component if $|mapx[x][i] - mapx[x][i + 1]| \le k$ (and similarly for $mapy$). As a result, we have components such that if you detonate one mine in a component, all mines belonging to it will also explode. Next, we find the mine with the minimum timer in each component and store these minima in an array $a$. Now we know at what time each component will explode if it is left untouched. For the answer, it remains to sort $a$ and find the index $i$ $(0 \le i \le |a| - 1)$ minimizing $\max(a_i, |a| - i - 2)$: the components with the $|a| - i - 1$ largest timers are detonated manually at seconds $0, 1, \dots, |a| - i - 2$, while the rest explode on their own no later than second $a_i$. The overall complexity is $O(n \log n)$.
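Given the per-component minima, the final minimization is tiny; a sketch (hypothetical function name `min_seconds`):

```python
def min_seconds(comp_min):
    # comp_min: for each component, the earliest time it detonates on its own.
    a = sorted(comp_min)
    ans = len(a) - 1                     # detonate everything manually
    for i in range(len(a)):
        # Components i+1..end (largest timers) are triggered manually at
        # seconds 0..len(a)-i-2; the rest self-detonate by second a[i].
        ans = min(ans, max(a[i], len(a) - i - 2))
    return ans
```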
|
[
"binary search",
"dfs and similar",
"dsu",
"greedy",
"sortings"
] | 2,000
|
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
#define forn(i, n) for (int i = 0; i < int(n); i++)
int k;
map <int, vector<int>> mx;
map <int, vector<int>> my;
map <pair<int ,int>, bool> used;
map <pair<int, int>, int> time_of;
int dfs(int x, int y) {
used[{x, y}] = true;
int _min_ = time_of[{x, y}];
auto i = lower_bound(mx[x].begin(), mx[x].end(), y);
auto j = lower_bound(my[y].begin(), my[y].end(), x);
if (++i != mx[x].end() && !used[{x, *i}] && abs(*i - y) <= k) {
_min_ = min(_min_, dfs(x, *i));
}
--i;
if (i != mx[x].begin() && !used[{x, *(--i)}] && abs(*i - y) <= k) {
_min_ = min(_min_, dfs(x, *i));
}
if (++j != my[y].end() && !used[{*j, y}] && abs(*j - x) <= k) {
_min_ = min(_min_, dfs(*j, y));
}
--j;
if (j != my[y].begin() && !used[{*(--j), y}] && abs(*j - x) <= k) {
_min_ = min(_min_, dfs(*j, y));
}
return _min_;
}
void solve() {
mx.clear();
my.clear();
used.clear();
int n;
cin >> n >> k;
vector <pair<int, int>> a(n);
int x, y, timer;
for (int i = 0; i < n; ++i) {
cin >> x >> y >> timer;
a[i] = {x, y};
time_of[{x, y}] = timer;
mx[x].push_back(y);
my[y].push_back(x);
}
vector<int> res;
for (auto now: mx) {
sort(mx[now.first].begin(), mx[now.first].end());
}
for (auto now: my) {
sort(my[now.first].begin(), my[now.first].end());
}
for (auto now: a) {
if (!used[now]) {
res.push_back(dfs(now.first, now.second));
}
}
sort(res.begin(), res.end());
int ans = res.size() - 1;
for (int i = 0; i < res.size(); ++i) {
ans = min(ans, max((int)res.size() - i - 2, res[i]));
}
cout << ans << '\n';
}
int main() {
ios_base::sync_with_stdio(0);
cin.tie(nullptr); cout.tie(nullptr);
int tests;
cin >> tests;
forn(tt, tests) {
solve();
}
}
|
1619
|
H
|
Permutation and Queries
|
You are given a permutation $p$ of $n$ elements. A permutation of $n$ elements is an array of length $n$ containing each integer from $1$ to $n$ exactly once. For example, $[1, 2, 3]$ and $[4, 3, 5, 1, 2]$ are permutations, but $[1, 2, 4]$ and $[4, 3, 2, 1, 2]$ are not permutations. You should perform $q$ queries.
There are two types of queries:
- $1$ $x$ $y$ — swap $p_x$ and $p_y$.
- $2$ $i$ $k$ — print the number that $i$ will become if we assign $i = p_i$ $k$ times.
|
Let's compute an array $a$ of $n$ integers: the answers to all possible second-type queries with $k = Q$, where $Q \approx \sqrt n$. Now any second-type query can be split into at most $n / Q$ jumps with $k = Q$ and at most $Q - 1$ steps with $k = 1$. Let's also compute an array $r$ of $n$ integers: the inverse permutation. If $p_i = j$, then $r_j = i$. To perform a first-type query, we should recompute $p$, $r$ and $a$. We swap $p_x$ and $p_y$ in the array $p$, and $r_{p_x}$ and $r_{p_y}$ in the array $r$. No more than $2 \cdot Q$ elements change in the array $a$: the elements with indices $x, r_x, r_{r_x}, \dots$ ($Q$ elements) and $y, r_y, r_{r_y}, \dots$ ($Q$ elements). We can recompute $a_x$ directly and then assign $a_{r_x} = r_{a_x}$ and $x = r_x$ repeated $Q - 1$ times; similarly for $y$. Time complexity: $O((n+q)\sqrt n)$.
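The same sqrt decomposition, sketched as a small class (hypothetical name `JumpPermutation`, 0-indexed; the key part is that a swap only invalidates the $Q$ predecessors of each swapped position):

```python
import math

class JumpPermutation:
    def __init__(self, p):
        self.n = len(p)
        self.Q = max(1, math.isqrt(self.n))
        self.p = p[:]
        self.r = [0] * self.n             # inverse permutation
        for i, v in enumerate(p):
            self.r[v] = i
        self.a = [self._walk(i, self.Q) for i in range(self.n)]  # Q-step jumps

    def _walk(self, x, steps):
        for _ in range(steps):
            x = self.p[x]
        return x

    def query(self, i, k):
        # apply i = p[i] exactly k times, using Q-sized jumps where possible
        while k >= self.Q:
            i = self.a[i]
            k -= self.Q
        return self._walk(i, k)

    def swap(self, x, y):
        self.r[self.p[x]], self.r[self.p[y]] = self.r[self.p[y]], self.r[self.p[x]]
        self.p[x], self.p[y] = self.p[y], self.p[x]
        for start in (x, y):
            # only the Q predecessors of `start` change their Q-step jump:
            # if a[cur] = ax, then a[r[cur]] = r[ax]
            cur, ax = start, self._walk(start, self.Q)
            for _ in range(self.Q):
                self.a[cur] = ax
                cur = self.r[cur]
                ax = self.r[ax]
```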
|
[
"brute force",
"data structures",
"divide and conquer",
"two pointers"
] | 2,400
|
#include <bits/stdc++.h>
//#define int long long
#define ld long double
#define x first
#define y second
#define pb push_back
using namespace std;
const int Q = 100;
int n, q, p[100005], r[100005], a[100005];
int32_t main()
{
ios_base::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
cin >> n >> q;
for(int i = 0; i < n; i++)
{
cin >> p[i];
p[i]--;
}
for(int i = 0; i < n; i++)
r[p[i]] = i;
for(int i = 0; i < n; i++)
{
int x = i;
for(int j = 0; j < Q; j++)
x = p[x];
a[i] = x;
}
while(q--)
{
int t, x, y;
cin >> t >> x >> y;
if(t == 2)
{
x--;
while(y >= Q)
{
y -= Q;
x = a[x];
}
while(y--)
x = p[x];
cout << x + 1 << "\n";
}
else
{
x--;
y--;
swap(r[p[x]], r[p[y]]);
swap(p[x], p[y]);
int ax = x;
for(int i = 0; i < Q; i++)
ax = p[ax];
int x2 = x;
for(int i = 0; i < Q; i++)
{
a[x] = ax;
x = r[x];
ax = r[ax];
}
swap(x, y);
ax = x;
for(int i = 0; i < Q; i++)
ax = p[ax];
x2 = x;
for(int i = 0; i < Q; i++)
{
a[x] = ax;
x = r[x];
ax = r[ax];
}
}
}
}
|
1620
|
A
|
Equal or Not Equal
|
You had $n$ positive integers $a_1, a_2, \dots, a_n$ arranged in a circle. For each pair of neighboring numbers ($a_1$ and $a_2$, $a_2$ and $a_3$, ..., $a_{n - 1}$ and $a_n$, and $a_n$ and $a_1$), you wrote down: are the numbers in the pair equal or not.
Unfortunately, you've lost a piece of paper with the array $a$. Moreover, you are afraid that even information about equality of neighboring elements may be inconsistent. So, you are wondering: is there any array $a$ which is consistent with information you have about equality or non-equality of corresponding pairs?
|
Let's look at a group of consecutive E relations: all the elements it connects must be equal to the same number. Now let's look at how these groups are distributed on the circle. If there are no N relations, then all $a_i$ are simply equal to each other; that's fine. If there is exactly one N, then on one side all elements are still in one group, so they should all be equal, but on the other side that one pair should have different values. That's a contradiction. If there is more than one N, then the numbers are divided into several groups which can take different values; that's fine too. As a result, an array $a$ exists as long as the number of N isn't exactly $1$.
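The criterion reduces to a single count (hypothetical helper name `exists_array`):

```python
def exists_array(s):
    # s is the circular sequence of 'E'/'N' relations between neighbours;
    # a consistent array exists iff the number of 'N' is not exactly 1.
    return s.count('N') != 1
```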
|
[
"constructive algorithms",
"dsu",
"implementation"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
int main() {
int t;
cin >> t;
while (t--) {
string s;
cin >> s;
cout << (count(s.begin(), s.end(), 'N') == 1 ? "NO\n" : "YES\n");
}
}
|
1620
|
B
|
Triangles on a Rectangle
|
A rectangle with its opposite corners in $(0, 0)$ and $(w, h)$ and sides parallel to the axes is drawn on a plane.
You are given a list of lattice points such that each point lies on a side of a rectangle but not in its corner. Also, there are at least two points on every side of a rectangle.
Your task is to choose three points in such a way that:
- exactly two of them belong to the same side of a rectangle;
- the area of a triangle formed by them is maximum possible.
Print the doubled area of this triangle. It can be shown that the doubled area of any triangle formed by lattice points is always an integer.
|
The area of a triangle is equal to its base multiplied by its height, divided by $2$. Let the two points that have to be on the same side of the rectangle form its base. To maximize it, choose the two points that are farthest apart from each other: the first and the last in the list. The height is then determined by the distance from that side to the remaining point. Since there are points on all sides, the points on the opposite side are the farthest, so the height is always $h$ or $w$, depending on whether we picked a horizontal or a vertical side. Thus we check four options for the side containing the base and take the best answer among them.
|
[
"geometry",
"greedy",
"math"
] | 1,000
|
for _ in range(int(input())):
w, h = map(int, input().split())
ans = 0
for i in range(4):
a = [int(x) for x in input().split()][1:]
ans = max(ans, (a[-1] - a[0]) * (h if i < 2 else w))
print(ans)
|
1620
|
C
|
BA-String
|
You are given an integer $k$ and a string $s$ that consists only of characters 'a' (a lowercase Latin letter) and '*' (an asterisk).
Each asterisk should be replaced with several (from $0$ to $k$ inclusive) lowercase Latin letters 'b'. Different asterisk can be replaced with different counts of letter 'b'.
The result of the replacement is called a BA-string.
Two strings $a$ and $b$ are different if they either have different lengths or there exists such a position $i$ that $a_i \neq b_i$.
A string $a$ is lexicographically smaller than a string $b$ if and only if one of the following holds:
- $a$ is a prefix of $b$, but $a \ne b$;
- in the first position where $a$ and $b$ differ, the string $a$ has a letter that appears earlier in the alphabet than the corresponding letter in $b$.
Now consider all different BA-strings and find the $x$-th lexicographically smallest of them.
|
Find all segments of asterisks in the string. Let there be $t$ of them, and the numbers of asterisks in them be $c_1, c_2, \dots, c_t$. Then the $i$-th segment of asterisks can be replaced with at most $c_i \cdot k$ letters 'b'. Notice that we can compare two BA-strings lexicographically using just the numbers of letters 'b' that replace the $t$ segments of asterisks. Let that sequence for some string $a$ be $A_1, A_2, \dots, A_t$ and for some string $b$ be $B_1, B_2, \dots, B_t$. Then $a < b$ if and only if $A < B$ lexicographically, i.e. there exists a position $i$ such that $A_i < B_i$ and $A_j = B_j$ for all $j < i$. The proof is trivial. So we can look at the sequence $A_1, A_2, \dots, A_t$ as a number in a mixed base. The lowest "digit" $A_t$ can take one of $c_t \cdot k + 1$ values (from $0$ to $c_t \cdot k$), the second lowest one of $c_{t-1} \cdot k + 1$ values, and so on. Comparison of two strings is then the same as comparison of these two mixed-base numbers, so the task is to convert the number $x-1$ to this mixed base. That's not hard: in base $10$, for example, the lowest digit is the remainder of the number modulo $10$; here it is the remainder modulo $c_t \cdot k + 1$. After that, divide the number (rounding down) and proceed to the next "digit". After $t$ steps, the "digits" of the mixed-base number tell exactly how many letters 'b' should replace each segment of asterisks. Overall complexity: $O(n)$ per testcase to recover the string, $O(nk)$ to print it.
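The mixed-base conversion step in isolation (hypothetical helper name `mixed_base_digits`; bases are given from the last segment to the first, so digits come out least significant first):

```python
def mixed_base_digits(x, bases):
    # Convert rank x (0-indexed) into mixed-base "digits": digit i counts
    # the letters 'b' replacing the corresponding asterisk segment.
    digits = []
    for b in bases:
        digits.append(x % b)
        x //= b
    return digits
```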
|
[
"brute force",
"dp",
"greedy",
"implementation",
"math"
] | 1,800
|
for _ in range(int(input())):
    n, k, x = map(int, input().split())
    x -= 1  # switch to 0-indexing: find the x-th string counting from 0
    s = input()[::-1]  # reverse so the lowest "digit" (last segment) comes first
    res = []
    i = 0
    while i < n:
        if s[i] == 'a':
            res.append(s[i])
        else:
            # group a maximal run of asterisks into a single mixed-base digit
            j = i
            while j + 1 < n and s[j + 1] == s[i]:
                j += 1
            cur = (j - i + 1) * k + 1  # base: the run can hold 0..(run length)*k letters 'b'
            res.append('b' * (x % cur))
            x //= cur
            i = j
        i += 1
    print(''.join(res[::-1]))
|
1620
|
D
|
Exact Change
|
One day, early in the morning, you decided to buy yourself a bag of chips in the nearby store. The store has chips of $n$ different flavors. A bag of the $i$-th flavor costs $a_i$ burles.
The store may run out of some flavors, so you'll decide which one to buy after arriving there. But there are two major flaws in this plan:
- you have only coins of $1$, $2$ and $3$ burles;
- since it's morning, the store will ask you to pay in exact change, i. e. if you choose the $i$-th flavor, you'll have to pay exactly $a_i$ burles.
Coins are heavy, so you'd like to take the least possible number of coins in total. That's why you are wondering: what is the minimum total number of coins you should take with you, so you can buy a bag of chips of any flavor in exact change?
|
Let's define $m = \max(a_i)$; then it should be obvious that we need at least $r = \left\lceil \frac{m}{3} \right\rceil$ coins to buy a bag of chips of cost $m$. Now, it's not hard to prove that $r + 1$ coins are always enough to buy a bag of chips of any cost $c \le m$. Proof: if $m \equiv 0 \pmod 3$, we'll take $r - 1$ coins of value $3$, one coin $1$ and one coin $2$; if $m \equiv 2 \pmod 3$, we'll take $r - 1$ coins $3$ and two coins $1$; if $m \equiv 1 \pmod 3$, we'll take $r - 2$ coins $3$, one coin $2$ and two coins $1$. So the question is how to decide whether $r$ coins are enough. The solution is to note that there is no need to take more than $3$ coins $1$ or more than $3$ coins $2$, so we can just brute force the number $c_1$ of coins $1$ and the number $c_2$ of coins $2$ we'll take. Then the number of coins $3$ is $c_3 = \left\lceil \frac{m - c_1 - 2 c_2}{3} \right\rceil$, and we can check whether it is possible to pay exactly $a_i$ using at most $c_1$, $c_2$ and $c_3$ coins of each value respectively. A casework solution exists as well, but it's quite tricky, so brute force is preferable. The main problem for casework is the case $m \equiv 1 \pmod 3$, since there are two different ways to take $r$ coins: either $r - 1$ coins $3$ and one coin $1$, or $r - 2$ coins $3$ and two coins $2$. In the first way, you can't gather exactly $a_i \equiv 2 \pmod 3$, and in the second one, you can gather neither $a_i = m - 1$ nor $a_i = 1$.
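A minimal sketch of this brute force (hypothetical helper names; it follows the $c_3$ formula above and checks every price against the chosen coin counts):

```python
def min_coins(a):
    # Brute force the counts of 1- and 2-burle coins (at most 3 of each is
    # ever needed), take just enough 3-burle coins to cover max(a), and check
    # that every price is payable in exact change.
    m = max(a)

    def payable(v, c1, c2, c3):
        for u1 in range(c1 + 1):
            for u2 in range(c2 + 1):
                rest = v - u1 - 2 * u2
                if rest >= 0 and rest % 3 == 0 and rest // 3 <= c3:
                    return True
        return False

    best = float('inf')
    for c1 in range(4):
        for c2 in range(4):
            c3 = max(0, -((c1 + 2 * c2 - m) // 3))  # ceil((m - c1 - 2*c2) / 3)
            if all(payable(v, c1, c2, c3) for v in a):
                best = min(best, c1 + c2 + c3)
    return best
```

For example, for prices `[1, 2, 3]` the coins `{1, 2}` suffice, so the answer is 2.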
|
[
"brute force",
"constructive algorithms",
"greedy"
] | 2,000
|
#include<bits/stdc++.h>
using namespace std;
#define fore(i, l, r) for(int i = int(l); i < int(r); i++)
#define sz(a) int((a).size())
template<class A, class B> ostream& operator <<(ostream& out, const pair<A, B> &p) {
return out << "(" << p.first << ", " << p.second << ")";
}
template<class A> ostream& operator <<(ostream& out, const vector<A> &v) {
fore(i, 0, sz(v)) {
if(i) out << " ";
out << v[i];
}
return out;
}
int n;
vector<int> a;
inline bool read() {
if(!(cin >> n))
return false;
a.resize(n);
fore (i, 0, n)
cin >> a[i];
return true;
}
bool p(int val, int c1, int c2, int c3) {
fore (cur1, 0, c1 + 1) fore (cur2, 0, c2 + 1) {
if (cur1 + 2 * cur2 > val)
continue;
if ((val - cur1 - 2 * cur2) % 3 != 0)
continue;
if ((val - cur1 - 2 * cur2) / 3 <= c3)
return true;
}
return false;
}
bool possible(int c1, int c2, int c3) {
for (int v : a) {
if (!p(v, c1, c2, c3))
return false;
}
return true;
}
inline void solve() {
int m = *max_element(a.begin(), a.end());
int ans = int(1e9);
const int MAG = 3;
fore (c1, 0, MAG) fore (c2, 0, MAG) {
int c3 = max(0, (m - c1 - 2 * c2 + 2) / 3);
assert(c1 + 2 * c2 + 3 * c3 >= m);
if (possible(c1, c2, c3))
ans = min(ans, c1 + c2 + c3);
}
cout << ans << endl;
}
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
#endif
int t; cin >> t;
while(t--) {
read();
solve();
}
return 0;
}
|
1620
|
E
|
Replace the Numbers
|
You have an array of integers (initially empty).
You have to perform $q$ queries. Each query is of one of two types:
- "$1$ $x$" — add the element $x$ to the end of the array;
- "$2$ $x$ $y$" — replace all occurrences of $x$ in the array with $y$.
Find the resulting array after performing all the queries.
|
Let's solve the problem from the end. We maintain an array $p_x$: what number $x$ will become if we apply to it all the already considered queries of type $2$. If the current query is of the first type, then we simply add $p_x$ to the resulting array. If the current query is of the second type, then we have to change the value of $p_x$. Since all occurrences of $x$ must be replaced with $y$, it is enough to assign $p_x = p_y$. Since we process each query in $O(1)$, the final complexity is $O(q)$. There is also an alternative solution that processes queries in direct order. For each number, let's store all of its positions in an array. Then, for a query of the first type, it is enough to append the new index to the corresponding array of positions. And for a query of the second type, we have to move all the positions of the number $x$ into the array of positions of the number $y$. The naive implementation is obviously too slow, but we can use the small-to-large method, and then the complexity of the solution will be $O(q \log{q})$.
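The first (reverse-order) solution can be sketched like this, assuming queries are given as tuples `(1, x)` and `(2, x, y)` (a hypothetical representation, not the original I/O format):

```python
def process(queries):
    # Scan queries from the end. nxt[x] answers: what will x eventually
    # become under all type-2 queries that come later in the input?
    nxt = {}
    out = []
    for q in reversed(queries):
        if q[0] == 1:            # "add x": its final value is already known
            x = q[1]
            out.append(nxt.get(x, x))
        else:                    # "replace x by y": x now maps to y's final value
            _, x, y = q
            nxt[x] = nxt.get(y, y)
    out.reverse()                # we collected the array back-to-front
    return out
```

For example, the queries add 1, replace 1 with 2, add 1, replace 2 with 3 produce the array `[3, 1]`.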
|
[
"constructive algorithms",
"data structures",
"dsu",
"implementation"
] | 1,900
|
#include <bits/stdc++.h>
using namespace std;
const int N = 500 * 1000 + 13;
int n, q;
vector<int> pos[N];
int main() {
scanf("%d", &q);
while (q--) {
int t, x, y;
scanf("%d", &t);
if (t == 1) {
scanf("%d", &x);
pos[x].push_back(n++);
} else {
scanf("%d%d", &x, &y);
if (x != y) {
if (pos[x].size() > pos[y].size()) pos[x].swap(pos[y]);
for (int &i : pos[x]) pos[y].push_back(i);
pos[x].clear();
}
}
}
vector<int> ans(n);
for (int x = 0; x < N; ++x)
for (int &i : pos[x])
ans[i] = x;
for (int &x : ans) printf("%d ", x);
}
|
1620
|
F
|
Bipartite Array
|
You are given a permutation $p$ consisting of $n$ integers $1, 2, \dots, n$ (a permutation is an array where each element from $1$ to $n$ occurs exactly once).
Let's call an array $a$ bipartite if the following undirected graph is bipartite:
- the graph consists of $n$ vertices;
- two vertices $i$ and $j$ are connected by an edge if $i < j$ and $a_i > a_j$.
Your task is to find a bipartite array of integers $a$ of size $n$, such that $a_i = p_i$ or $a_i = -p_i$, or report that no such array exists. If there are multiple answers, print any of them.
|
To begin with, let's understand that an array is bipartite if and only if there is no decreasing subsequence of length $3$ in the array. Now we can write dynamic programming $dp_{i, x, y}$: is there a valid array $a$ of length $i$ such that $x$ is the maximum last element of a decreasing subsequence of length $1$, and $y$ is the maximum last element of a decreasing subsequence of length $2$? Note that $x > y$. Let's consider all possible transitions from the state $(i, x, y)$ if we are trying to put the number $z$ on position $i + 1$, where $z = \pm p_{i+1}$: if $z > x$, then the new state will be $(i + 1, z, y)$; otherwise, if $z > y$, the new state will be $(i + 1, x, z)$; and if $z < y$, such a transition is not valid, because a decreasing subsequence of length $3$ would be formed in the array. With a naive implementation, such dynamic programming works in $O(n^3)$. We can note that for fixed values of $i$ and $x$ (respectively, $i$ and $y$) it is enough to store only the minimum available value of $y$ (respectively, $x$). So we can write dynamic programming $dp_{i, x}$, which is defined similarly to the above, but now, instead of being Boolean, stores the minimum value of $y$ (or infinity if the state is not valid). We have sped the solution up to $O(n^2)$, but it is still too slow. To speed it up even more, we have to look at the transitions and notice that for a fixed $i$, either $x$ or $y$ is always equal to $\pm p_{i - 1}$. So we can rewrite our dynamic programming in the following form: $dp_{i, pos, sign}$. Here, the $pos$ flag says which of the numbers $x$ and $y$ is equal to $\pm p_{i - 1}$, the $sign$ flag is responsible for the sign of $p_{i - 1}$, and the minimum value of $y$ or $x$ (depending on $pos$) is stored in the value itself. Thus, we get a solution with linear running time. In fact, this solution can be simplified if we see the following relation: the number we use on position $i$ is not less than $dp_{i, 0, sign}$ and not greater than $dp_{i, 1, sign}$. 
This allows us to get rid of one of the states in our dynamic programming altogether, so we get an easier solution. This optimization wasn't required to get AC, but the code becomes shorter.
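The key equivalence can be checked greedily with the same $(x, y)$ state; a sketch, assuming all values are distinct (which holds for $\pm p_i$ choices from a permutation):

```python
def has_no_decreasing_triple(a):
    # Greedy check mirroring the editorial's state: x is the maximum last
    # element of a decreasing subsequence of length 1, y of length 2.
    x = y = float('-inf')
    for z in a:
        if z < y:
            return False  # z would end a decreasing subsequence of length 3
        if z > x:
            x = z         # new maximum element; y is unchanged
        else:
            y = z         # y < z < x, so (x, z) is a decreasing pair
    return True
```

For example, `[1, 3, 2]` passes but `[5, 4, 6, 3]` fails (it contains 5, 4, 3).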
|
[
"dp",
"greedy"
] | 2,800
|
#include <bits/stdc++.h>
using namespace std;
#define forn(i, n) for (int i = 0; i < int(n); ++i)
const int INF = 1e9;
const int N = 1000 * 1000 + 13;
int n;
int p[N], a[N];
int dp[N][2], pr[N][2];
void solve() {
scanf("%d", &n);
forn(i, n) scanf("%d", &p[i]);
forn(i, n) forn(j, 2) dp[i][j] = INF;
dp[0][0] = dp[0][1] = -INF;
forn(i, n - 1) forn(j, 2) if (dp[i][j] != INF) {
int x = j ? -p[i] : p[i];
int y = dp[i][j];
if (x < y) swap(x, y);
forn(nj, 2) {
int z = nj ? -p[i + 1] : p[i + 1];
if (z > x) {
if (dp[i + 1][nj] > y) {
dp[i + 1][nj] = y;
pr[i + 1][nj] = j;
}
} else if (z > y) {
if (dp[i + 1][nj] > x) {
dp[i + 1][nj] = x;
pr[i + 1][nj] = j;
}
}
}
}
int j = -1;
forn(i, 2) if (dp[n - 1][i] != INF) j = i;
if (j == -1) {
puts("NO");
return;
}
for (int i = n - 1; i >= 0; i--) {
a[i] = j ? -p[i] : p[i];
j = pr[i][j];
}
puts("YES");
forn(i, n) printf("%d ", a[i]);
puts("");
}
int main() {
int t;
scanf("%d", &t);
while (t--) solve();
}
|
1620
|
G
|
Subsequences Galore
|
For a sequence of strings $[t_1, t_2, \dots, t_m]$, let's define the function $f([t_1, t_2, \dots, t_m])$ as the number of different strings (\textbf{including the empty string}) that are subsequences of \textbf{at least one} string $t_i$. $f([]) = 0$ (i. e. the number of such strings for an empty sequence is $0$).
You are given a sequence of strings $[s_1, s_2, \dots, s_n]$. Every string in this sequence consists of lowercase Latin letters and is \textbf{sorted} (i. e., each string begins with several (maybe zero) characters a, then several (maybe zero) characters b, ..., ends with several (maybe zero) characters z).
For each of $2^n$ subsequences of $[s_1, s_2, \dots, s_n]$, calculate the value of the function $f$ modulo $998244353$.
|
For a string $t$, let's define its characteristic mask as the mask of $n$ bits, where the $i$-th bit is $1$ if and only if $t$ is a subsequence of $s_i$. Let's suppose we somehow calculated the number of strings for each characteristic mask, and we denote this as $G(x)$ for a mask $x$. How can we use this information to find $f([s_{i_1}, s_{i_2}, \dots, s_{i_k}])$? Suppose this set of strings is represented by a mask $x$; then the strings which are not included in $f$ are the strings whose characteristic mask has bitwise AND with $x$ equal to $0$, i. e. these characteristic masks are submasks of $2^n - 1 \oplus x$. We can use SOS DP to calculate these sums of $G(x)$ over submasks in $O(2^n n)$. The only problem is how to calculate $G(x)$ for every mask. Let's analyze when a string is a subsequence of a sorted string $s_i$. The subsequence should be sorted as well, and the number of occurrences of every character in the subsequence should not exceed the number of occurrences of that character in $s_i$. So, if there are $c_1$ characters a in $s_i$, $c_2$ characters b in $s_i$, and so on, then the number of its subsequences is $\prod \limits_{j=1}^{26} (1 + c_j)$. What about subsequences of every string from a set? These conditions on the number of occurrences should apply to every string in the set, so, for each character, we can take the minimum number of its occurrences over all strings of the set, add $1$, and multiply these numbers to get the number of strings that are subsequences of each string in the set. These values can be calculated in $O(2^n (n + A))$ for all $2^n$ subsequences of $[s_1, s_2, \dots, s_n]$ using a recursive approach. Can these numbers be used as $G(x)$? Not so fast. Unfortunately, these values (let's call them $H(x)$) are the numbers of subsequences of the chosen sets of strings, but they say nothing about the strings that are not included in the chosen set. 
To handle it, we can use the following equation: $H(x) = \sum \limits_{x \subseteq y} G(y)$, where $x \subseteq y$ means that $x$ is a submask of $y$. To transform the values of $H(x)$ into the values of $G(x)$, we can flip all bits in the masks (so $H(x)$ is the sum of $G(y)$ over all submasks of $x$), apply inverse SOS DP (also known as Mobius transformation), and then flip all bits in the masks again. So, we found a way to calculate all values of $G(x)$ in $O(2^n (n + A))$, and we have already discussed what to do with them in the first paragraph of the editorial. The overall complexity of the solution is $O(2^n (n+A))$.
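A sketch of the superset-sum (zeta) transform and its inverse (Mobius) transform used here, written directly over supermasks so no bit flipping is needed; function names are illustrative:

```python
def zeta_supersets(a, n):
    # H[x] = sum of G[y] over all supermasks y of x (bits of x all set in y).
    a = a[:]
    for i in range(n):
        for m in range(1 << n):
            if not m & (1 << i):
                a[m] += a[m | (1 << i)]
    return a

def mobius_supersets(a, n):
    # Inverse transform: recover G from H, one bit at a time.
    a = a[:]
    for i in range(n):
        for m in range(1 << n):
            if not m & (1 << i):
                a[m] -= a[m | (1 << i)]
    return a
```

Each bit is handled independently, so the two transforms are inverses of each other and both run in $O(2^n n)$.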
|
[
"bitmasks",
"combinatorics",
"dp"
] | 2,400
|
#include<bits/stdc++.h>
using namespace std;
const int N = 23;
const int A = 26;
const int S = 20043;
int n;
string inp[N];
char buf[S];
int cnt[N][A];
const int MOD = 998244353;
int add(int x, int y)
{
x += y;
while(x >= MOD) x -= MOD;
while(x < 0) x += MOD;
return x;
}
int sub(int x, int y)
{
return add(x, -y);
}
int mul(int x, int y)
{
return (x * 1ll * y) % MOD;
}
void flip_all(vector<int>& a)
{
int msk = (1 << n) - 1;
for(int i = 0; i < (1 << (n - 1)); i++)
swap(a[i], a[i ^ msk]);
}
int val[S];
int* where[S];
int cur = 0;
void change(int& x, int y)
{
where[cur] = &x;
val[cur] = x;
x = y;
cur++;
}
void rollback()
{
--cur;
(*where[cur]) = val[cur];
}
void zeta_transform(vector<int>& a)
{
for(int i = 0; i < n; i++)
{
for(int j = 0; j < (1 << n); j++)
if((j & (1 << i)) == 0)
a[j ^ (1 << i)] = add(a[j ^ (1 << i)], a[j]);
}
}
void mobius_transform(vector<int>& a)
{
for(int i = n - 1; i >= 0; i--)
{
for(int j = (1 << n) - 1; j >= 0; j--)
if((j & (1 << i)) != 0)
a[j] = sub(a[j], a[j ^ (1 << i)]);
}
}
int cur_max[A];
vector<int> mask_cnt;
void rec(int depth, int mask)
{
if(depth == n)
{
mask_cnt[mask] = 1;
for(int i = 0; i < A; i++)
mask_cnt[mask] = mul(mask_cnt[mask], cur_max[i] + 1);
}
else
{
int state = cur;
for(int i = 0; i < A; i++)
change(cur_max[i], min(cur_max[i], cnt[depth][i]));
rec(depth + 1, mask + (1 << depth));
while(state != cur) rollback();
rec(depth + 1, mask);
}
}
int main()
{
scanf("%d", &n);
for(int i = 0; i < n; i++)
{
scanf("%s", buf);
inp[i] = buf;
for(auto x : inp[i])
cnt[i][x - 'a']++;
}
for(int i = 0; i < A; i++)
cur_max[i] = S;
mask_cnt.resize(1 << n);
rec(0, 0);
flip_all(mask_cnt);
mobius_transform(mask_cnt);
flip_all(mask_cnt);
int sum = 0;
for(int i = 0; i < (1 << n); i++) sum = add(sum, mask_cnt[i]);
zeta_transform(mask_cnt);
vector<int> res(1 << n);
for(int i = 0; i < (1 << n); i++)
res[i] = sub(sum, mask_cnt[((1 << n) - 1) ^ i]);
long long ans = 0;
for(int i = 0; i < (1 << n); i++)
{
int c = 0, s = 0;
for(int j = 0; j < n; j++)
{
if(i & (1 << j))
{
c++;
s += j + 1;
}
}
ans ^= res[i] * 1ll * c * 1ll * s;
}
//for(int i = 0; i < (1 << n); i++) printf("%d\n", res[i]);
printf("%lld\n", ans);
}
|
1621
|
A
|
Stable Arrangement of Rooks
|
You have an $n \times n$ chessboard and $k$ rooks. Rows of this chessboard are numbered by integers from $1$ to $n$ from top to bottom and columns of this chessboard are numbered by integers from $1$ to $n$ from left to right. The cell $(x, y)$ is the cell on the intersection of row $x$ and column $y$ for $1 \leq x \leq n$ and $1 \leq y \leq n$.
The arrangement of rooks on this board is called good, if no rook is beaten by another rook.
A rook beats all the rooks that share the same row or column with it.
The \textbf{good} arrangement of rooks on this board is called not stable, if it is possible to move one rook to the adjacent cell so arrangement becomes not good. Otherwise, the \textbf{good} arrangement is stable. Here, adjacent cells are the cells \textbf{that share a side}.
\begin{center}
{\small Such arrangement of $3$ rooks on the $4 \times 4$ chessboard is good, but it is not stable: the rook from $(1, 1)$ can be moved to the adjacent cell $(2, 1)$ and rooks on cells $(2, 1)$ and $(2, 4)$ will beat each other.}
\end{center}
Please, find any stable arrangement of $k$ rooks on the $n \times n$ chessboard or report that there is no such arrangement.
|
It is easy to see that if there are two rooks in neighbouring rows, we can move one of them into the other of these two rows, so there shouldn't be two rooks in neighbouring rows in a stable arrangement. Let's split the chessboard into $\lceil \frac{n}{2} \rceil$ regions in the following way: region $1$ contains all cells of rows $1$, $2$, region $2$ contains all cells of rows $3$, $4$ and so on. The last region can contain cells of only one row if $n$ is odd. We have shown that there can't be two rooks in one region, so if $k > \lceil \frac{n}{2} \rceil = \lfloor \frac{n+1}{2} \rfloor$ there is no stable arrangement. We can always place $k \leq \lfloor \frac{n+1}{2} \rfloor$ rooks into cells $(1, 1)$, $(3, 3)$, $\ldots$, $(2k-1, 2k-1)$. Such an arrangement is stable because after moving any rook to a neighbouring row it will be in the same column and in an even-numbered row where there are no other rooks. The same applies to moving a rook to a neighbouring column. Actually, we can show that the condition from the first paragraph combined with the same condition for columns is a sufficient and necessary criterion for a stable placement.
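A minimal sketch of this construction (returning `None` instead of $-1$ when no stable arrangement exists):

```python
def stable_rooks(n, k):
    # Place rooks on cells (1,1), (3,3), ..., (2k-1, 2k-1), 0-indexed below.
    if k > (n + 1) // 2:
        return None  # more rooks than regions: impossible
    board = [['.'] * n for _ in range(n)]
    for i in range(k):
        board[2 * i][2 * i] = 'R'
    return [''.join(row) for row in board]
```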
|
[
"constructive algorithms"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
int main()
{
int t;
cin >> t;
while (t--)
{
int n, k;
cin >> n >> k;
if (k > (n + 1) / 2)
{
cout << "-1\n";
continue;
}
vector<string> s(n, string(n, '.'));
for (int i = 0; i < k; i++)
s[2 * i][2 * i] = 'R';
for (int i = 0; i < n; i++)
cout << s[i] << "\n";
}
}
|
1621
|
B
|
Integers Shop
|
The integers shop sells $n$ segments. The $i$-th of them contains all integers from $l_i$ to $r_i$ and costs $c_i$ coins.
Tomorrow Vasya will go to this shop and will buy some segments there. He will get all integers that appear in at least one of bought segments. The total cost of the purchase is the sum of costs of all segments in it.
After shopping, Vasya will get some more integers as a gift. He will get integer $x$ as a gift if and only if all of the following conditions are satisfied:
- Vasya hasn't bought $x$.
- Vasya has bought integer $l$ that is less than $x$.
- Vasya has bought integer $r$ that is greater than $x$.
Vasya can get integer $x$ as a gift only once so he won't have the same integers after receiving a gift.
For example, if Vasya buys segment $[2, 4]$ for $20$ coins and segment $[7, 8]$ for $22$ coins, he spends $42$ coins and receives integers $2, 3, 4, 7, 8$ from these segments. He also gets integers $5$ and $6$ as a gift.
Due to the technical issues only the first $s$ segments (that is, segments $[l_1, r_1], [l_2, r_2], \ldots, [l_s, r_s]$) will be available tomorrow in the shop.
Vasya wants to get (to buy or to get as a gift) as many integers as possible. If he can do this in different ways, he selects the cheapest of them.
For each $s$ from $1$ to $n$, find how many coins will Vasya spend if only the first $s$ segments will be available.
|
Let $L$ be the minimum integer Vasya will buy and $R$ be the maximum integer Vasya will buy. Then it is easy to see that he will get all integers between $L$ and $R$, and only them, after receiving a gift. Because Vasya wants to maximise the number of integers he will get, he should buy the smallest and the largest integers available in the shop. They can appear either in the same segment or in different segments. It is important to note that if they appear in the same segment, then it is the longest one. Let's add the segments to the shop one by one and maintain the following six values: the smallest integer in the shop and the cost of the cheapest segment that contains it; the largest integer in the shop and the cost of the cheapest segment that contains it; the length of the longest segment and the cost of the cheapest such segment. When we know all these values, it is easy to find how many coins Vasya will spend in the shop. The total time complexity of this solution is $\mathcal{O}(n)$. There are other solutions (for example, a solution with sets) that run in $\mathcal{O}(n \log n)$.
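A sketch of the $\mathcal{O}(n)$ solution maintaining the six values (hypothetical function, taking the segments as `(l, r, c)` tuples and returning the answer after each prefix):

```python
def shop_answers(segments):
    INF = float('inf')
    minL, costL = INF, INF      # smallest integer; cheapest segment containing it
    maxR, costR = -INF, INF     # largest integer; cheapest segment containing it
    bestLen, costLen = 0, INF   # longest segment length; cheapest such segment
    out = []
    for l, r, c in segments:
        if l < minL:
            minL, costL = l, c
        elif l == minL:
            costL = min(costL, c)
        if r > maxR:
            maxR, costR = r, c
        elif r == maxR:
            costR = min(costR, c)
        length = r - l + 1
        if length > bestLen:
            bestLen, costLen = length, c
        elif length == bestLen:
            costLen = min(costLen, c)
        ans = costL + costR
        # A segment of length maxR - minL + 1 must span [minL, maxR],
        # so it alone buys both extremes.
        if bestLen == maxR - minL + 1:
            ans = min(ans, costLen)
        out.append(ans)
    return out
```

On the example from the statement, segments $[2,4]$ for 20 and $[7,8]$ for 22 yield answers 20 and then 42.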
|
[
"data structures",
"greedy",
"implementation"
] | 1,500
|
#include <bits/stdc++.h>
using namespace std;
int answer(set<vector<int> > &byL, set<vector<int> > &byR, map<pair<int, int>, set<int> > &all)
{
if (byL.size() == 0)
return 0;
int L = (*byL.begin())[0];
int R = (*byR.rbegin())[0];
int Lc = (*byL.begin())[1];
int Rc = -(*byR.rbegin())[1];
if (all[{L, R}].size() != 0)
{
int x = *all[{L, R}].begin();
if (x < Lc + Rc)
return x;
}
return Lc + Rc;
}
void solve()
{
int q;
cin >> q;
set<vector<int> > byL, byR;
map<pair<int, int>, set<int> > all;
while (q--)
{
char t = '+';
int l, r, c;
cin >> l >> r >> c;
if (t == '+')
{
byL.insert({l, c, r});
byR.insert({r, -c, l});
all[{l, r}].insert(c);
}
else
{
byL.erase({l, c, r});
byR.erase({r, -c, l});
all[{l, r}].erase(c);
}
cout << answer(byL, byR, all) << "\n";
}
}
int main()
{
ios_base::sync_with_stdio(false);
cin.tie(0);
int t;
cin >> t;
while (t--)
{
solve();
}
}
|
1621
|
C
|
Hidden Permutations
|
\textbf{This is an interactive problem.}
The jury has a permutation $p$ of length $n$ and wants you to guess it. For this, the jury created another permutation $q$ of length $n$. Initially, $q$ is an identity permutation ($q_i = i$ for all $i$).
You can ask queries to get $q_i$ for any $i$ you want. After each query, the jury will change $q$ in the following way:
- At first, the jury will create a new permutation $q'$ of length $n$ such that $q'_i = q_{p_i}$ for all $i$.
- Then the jury will replace permutation $q$ with permutation $q'$.
You can make no more than $2n$ queries in order to guess $p$.
|
At first, let's solve this problem when $p$ is a cycle of length $n$. It can be done by asking for $q_1$ $n$ times. After this, we will receive $1$, $p_1$, $p_{p_1}$, $p_{p_{p_1}}, \ldots$ in this order. Since $p$ is a cycle, each $x$ will appear exactly once in this sequence, and the next element after $x$ will be $p_x$. When $p$ is not a cycle, we can determine the cycle containing the first element of the permutation by asking such queries. Actually, we can stop asking queries after receiving the first answer again, thus determining this cycle in $len+1$ queries, where $len$ is the length of this cycle. We can determine the other cycles in the same way. We will ask $n$ queries in total to determine all elements, plus one more query to detect the end of each cycle, which is not more than $2n$ queries.
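The query strategy can be checked offline by simulating the jury; a sketch with 0-indexed values (the local simulator is an assumption for testing, not part of the protocol):

```python
def recover_permutation(p):
    # Simulated interactor: ask(i) returns q[i], then the jury
    # replaces q with q' where q'[j] = q[p[j]].
    n = len(p)
    q = list(range(n))

    def ask(i):
        nonlocal q
        res = q[i]
        q = [q[p[j]] for j in range(n)]
        return res

    myp = [-1] * n
    for i in range(n):
        if myp[i] != -1:
            continue                  # cycle of i already recovered
        first = ask(i)
        cycle = [ask(i)]
        while cycle[-1] != first:     # stop once the first answer repeats
            cycle.append(ask(i))
        for j, v in enumerate(cycle): # consecutive answers follow p
            myp[v] = cycle[(j + 1) % len(cycle)]
    return myp
```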
|
[
"dfs and similar",
"interactive",
"math"
] | 1,700
|
#include <bits/stdc++.h>
using namespace std;
int ask(int i)
{
cout << "? " << i + 1 << endl;
int x;
cin >> x;
return x - 1;
}
void solve()
{
int n;
cin >> n;
vector<int> myp(n, -1);
for (int i = 0; i < n; i++)
{
if (myp[i] == -1)
{
vector<int> cycle;
int answer = ask(i);
int x = ask(i);
cycle.push_back(x);
while (x != answer)
{
x = ask(i);
cycle.push_back(x);
}
for (int j = 0; j < cycle.size(); j++)
{
myp[cycle[j]] = cycle[(j + 1) % cycle.size()];
}
}
}
cout << "! ";
for (int i = 0; i < myp.size(); i++) cout << myp[i] + 1 << " ";
cout << endl;
}
int main()
{
int t;
cin >> t;
while (t--)
{
solve();
}
}
|
1621
|
D
|
The Winter Hike
|
Circular land is a $2n \times 2n$ grid. Rows of this grid are numbered by integers from $1$ to $2n$ from top to bottom and columns of this grid are numbered by integers from $1$ to $2n$ from left to right. The cell $(x, y)$ is the cell on the intersection of row $x$ and column $y$ for $1 \leq x \leq 2n$ and $1 \leq y \leq 2n$.
There are $n^2$ of your friends in the top left corner of the grid. That is, in each cell $(x, y)$ with $1 \leq x, y \leq n$ there is exactly one friend. Some of the other cells are covered with snow.
Your friends want to get to the bottom right corner of the grid. For this in each cell $(x, y)$ with $n+1 \leq x, y \leq 2n$ there should be exactly one friend. It doesn't matter in what cell each of friends will be.
You have decided to help your friends to get to the bottom right corner of the grid.
For this, you can give instructions of the following types:
- You select a row $x$. All friends in this row should move to the next cell in this row. That is, friend from the cell $(x, y)$ with $1 \leq y < 2n$ will move to the cell $(x, y + 1)$ and friend from the cell $(x, 2n)$ will move to the cell $(x, 1)$.
- You select a row $x$. All friends in this row should move to the previous cell in this row. That is, friend from the cell $(x, y)$ with $1 < y \leq 2n$ will move to the cell $(x, y - 1)$ and friend from the cell $(x, 1)$ will move to the cell $(x, 2n)$.
- You select a column $y$. All friends in this column should move to the next cell in this column. That is, friend from the cell $(x, y)$ with $1 \leq x < 2n$ will move to the cell $(x + 1, y)$ and friend from the cell $(2n, y)$ will move to the cell $(1, y)$.
- You select a column $y$. All friends in this column should move to the previous cell in this column. That is, friend from the cell $(x, y)$ with $1 < x \leq 2n$ will move to the cell $(x - 1, y)$ and friend from the cell $(1, y)$ will move to the cell $(2n, y)$.
Note how friends on the grid border behave in these instructions.
\begin{center}
{\small Example of applying the third operation to the second column. Here, colorful circles denote your friends and blue cells are covered with snow.}
\end{center}
You can give such instructions any number of times. You can give instructions of different types. If after any instruction one of your friends is in the cell covered with snow he becomes ill.
In order to save your friends you can remove snow from some cells before giving the first instruction:
- You can select the cell $(x, y)$ that is covered with snow now and remove snow from this cell for $c_{x, y}$ coins.
You can do this operation any number of times.
You want to spend the minimal number of coins and give some instructions to your friends. After this, all your friends should be in the bottom right corner of the grid and none of them should be ill.
Please, find how many coins you will spend.
|
Let's say that if for cell $(i, j)$ we have $c_{i, j}=0$, then it is covered with snow but the cost of removing snow from this cell is $0$. It is obvious that we should remove all the snow in the bottom right corner of the grid. In the case $n=1$ we should additionally remove the snow from exactly one of the remaining cells. Now consider only the friends in cells $(1, 1)$, $(1, n)$, $(n, n)$, $(n, 1)$. The first operation that will affect any of them is either an operation in the $1$-st or the $n$-th row, or an operation in the $1$-st or the $n$-th column. After any of these operations one of them will be in one of the following cells: $(2n, 1)$, $(1, 2n)$, $(2n, n)$, $(1, n+1)$, $(n+1, n)$, $(n, n+1)$, $(n+1, 1)$, $(n, 2n)$. So we should remove snow from at least one of these cells. Now we can show that it is actually enough to remove snow from exactly one of these cells. Let's assume that we removed snow from $(n, n+1)$; all other cells are symmetric. Then you can move your friends from the $n$-th row to the $(n+1)$-th column in $2n$ moves as follows: move all your friends in the $n$-th row to the next cell of this row, then move all your friends in the $(n+1)$-th column to the next cell of this column, and repeat these two operations $n-1$ more times. After this, the friends that were in the $n$-th row occupy the $(n+1)$-th column (the original editorial illustrates the resulting arrangement with a figure in which gray cells are covered with snow and each integer denotes one of your friends). Now it is easy to see how to move the friends that initially were in the $(n-1)$-th row to the bottom right corner. By repeating this sequence of operations you can help your friends to finish the trip.
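The resulting answer is just the cost of clearing the bottom right quadrant plus the cheapest of the eight special cells; a sketch with 0-indexed coordinates:

```python
def min_cost(n, c):
    # c is a 2n x 2n grid of snow-removal costs (0-indexed).
    # Pay for the whole bottom right quadrant...
    total = sum(c[i][j] for i in range(n, 2 * n) for j in range(n, 2 * n))
    # ...plus the cheapest of the eight cells a corner friend can first enter.
    special = [c[n][0], c[n][n - 1], c[2 * n - 1][0], c[2 * n - 1][n - 1],
               c[0][n], c[n - 1][n], c[0][2 * n - 1], c[n - 1][2 * n - 1]]
    return total + min(special)
```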
|
[
"constructive algorithms",
"greedy",
"math"
] | 2,100
|
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
void solve()
{
int n;
cin >> n;
vector<vector<ll> > a(2 * n, vector<ll>(2 * n));
for (int i = 0; i < 2 * n; i++)
{
for (int j = 0; j < 2 * n; j++)
{
cin >> a[i][j];
}
}
ll ans = 0;
for (int i = n; i < 2 * n; i++)
{
for (int j = n; j < 2 * n; j++)
{
ans += a[i][j];
}
}
ll mn = 1e9 + 1;
mn = min(mn, a[n][0]);
mn = min(mn, a[n][n - 1]);
mn = min(mn, a[2 * n - 1][0]);
mn = min(mn, a[2 * n - 1][n - 1]);
mn = min(mn, a[0][n]);
mn = min(mn, a[n - 1][n]);
mn = min(mn, a[0][2 * n - 1]);
mn = min(mn, a[n - 1][2 * n - 1]);
cout << ans + mn << "\n";
}
signed main()
{
ios_base::sync_with_stdio(false);
cin.tie(0);
int t;
cin >> t;
while (t--)
{
solve();
}
}
|
1621
|
E
|
New School
|
You have decided to open a new school. You have already found $n$ teachers and $m$ groups of students. The $i$-th group of students consists of $k_i \geq 2$ students. You know the age of each teacher and each student. The ages of teachers are $a_1, a_2, \ldots, a_n$ and the ages of students of the $i$-th group are $b_{i, 1}, b_{i, 2}, \ldots, b_{i, k_i}$.
To start lessons you should assign the teacher to each group of students. Such assignment should satisfy the following requirements:
- To each group exactly one teacher assigned.
- To each teacher at most $1$ group of students assigned.
- The average of students' ages in each group doesn't exceed the age of the teacher assigned to this group.
The average of set $x_1, x_2, \ldots, x_k$ of $k$ integers is $\frac{x_1 + x_2 + \ldots + x_k}{k}$.
Recently you have heard that one of the students will refuse to study in your school. After this, the size of one group will decrease by $1$ while all other groups will remain unchanged.
You don't know who will refuse to study. For each student determine if you can start lessons in case of his refusal.
Note, that it is \textbf{not guaranteed} that it is possible to start lessons before any refusal.
|
Suppose that you know the average ages of each group of students: $avg_1, avg_2, \ldots, avg_m$. How can we check whether we can start lessons? Let's sort these averages and the ages of teachers in decreasing order. Now we have $avg_1 \geq avg_2 \geq \ldots \geq avg_m$ and $a_1 \geq a_2 \geq \ldots \geq a_n$. In all of the following solution I will assume that the average ages of students and the ages of teachers are sorted in decreasing order. Let's show that we can start lessons if and only if $avg_i \leq a_i$ for all $1 \leq i \leq m$. If $avg_i > a_i$ for some $i$, then the eldest $i$ groups can be assigned only to the $i-1$ eldest teachers, so there is no possible assignment. Otherwise, we can assign the $i$-th group to the $i$-th teacher. When one of the students refuses to study, only one value $avg_i$ changes, to a new value $x$. Let's denote the new position of this group in the sorted list as $j$. Then all groups on positions between $i$ and $j$ will move by $1$ towards the initial position of group $i$. This position $j$ can be easily found with binary search. Then we can compare $a_j$ with $x$, $a_k$ with $avg_k$ for groups that haven't moved, and $a_k$ with $avg_{k \pm 1}$ for groups that have moved to the neighbouring positions. This can be easily done with prefix sum arrays. We can also do a binary search inside each group, but it doesn't decrease the time complexity (for example, in the case when all groups are small).
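The baseline criterion ($avg_i \leq a_i$ after sorting both lists in decreasing order) can be sketched as follows, with averages kept exact as fractions (hypothetical helper, before handling refusals):

```python
from fractions import Fraction

def can_start(teachers, groups):
    # Feasible iff, with both lists sorted in decreasing order, the i-th
    # largest group average does not exceed the age of the i-th oldest teacher.
    # Averages are stored as (sum, size) pairs to avoid any rounding.
    avgs = sorted(((sum(g), len(g)) for g in groups),
                  key=lambda f: Fraction(f[0], f[1]), reverse=True)
    a = sorted(teachers, reverse=True)
    if len(avgs) > len(a):
        return False  # more groups than teachers
    # s <= a[i] * k is the exact form of s / k <= a[i]
    return all(s <= a[i] * k for i, (s, k) in enumerate(avgs))
```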
|
[
"binary search",
"data structures",
"dp",
"greedy",
"implementation",
"sortings"
] | 2,300
|
#include <bits/stdc++.h>
using namespace std;
int comp(pair<long long, int> a, pair<long long, int> b)
{
if (a.first * b.second < b.first * a.second)
return 1;
return 0;
}
void solve()
{
int n, m;
cin >> n >> m;
vector<int> a(n);
for (int i = 0; i < n; i++) cin >> a[i];
vector<vector<int> > g(m);
for (int i = 0; i < m; i++)
{
int k;
cin >> k;
g[i] = vector<int>(k);
for (int j = 0; j < k; j++)
{
cin >> g[i][j];
}
}
vector<pair<pair<long long, int>, int> > avg(m);
for (int i = 0; i < m; i++)
{
avg[i] = {{accumulate(g[i].begin(), g[i].end(), 0LL), g[i].size()}, i};
}
sort(a.rbegin(), a.rend());
sort(avg.rbegin(), avg.rend(), [&](pair<pair<long long, int>, int> A, pair<pair<long long, int>, int> B){
return comp(A.first, B.first);
});
vector<int> pos(m);
for (int i = 0; i < m; i++)
pos[avg[i].second] = i;
vector<int> assign_to_next(m);
vector<int> assign_to_this(m);
vector<int> assign_to_prev(m);
for (int i = 0; i < m - 1; i++)
assign_to_next[i] = comp({1ll * a[i + 1], 1}, avg[i].first);
for (int i = 0; i < m; i++)
assign_to_this[i] = comp({1ll * a[i], 1}, avg[i].first);
for (int i = 1; i < m; i++)
assign_to_prev[i] = comp({1ll * a[i - 1], 1}, avg[i].first);
for (int i = 1; i < m; i++)
assign_to_next[i] += assign_to_next[i - 1],
assign_to_this[i] += assign_to_this[i - 1],
assign_to_prev[i] += assign_to_prev[i - 1];
for (int i = 0; i < m; i++)
{
int id = pos[i];
pair<long long, int> cur = {accumulate(g[i].begin(), g[i].end(), 0LL), g[i].size() - 1};
for (int j = 0; j < g[i].size(); j++)
{
cur.first -= g[i][j];
int L = -1, R = m;
while (L + 1 < R)
{
int M = (L + R) / 2;
if (!comp(avg[M].first, cur))
L = M;
else
R = M;
}
if (R > id) R--;
int tr = 1;
if (comp({1ll * a[R], 1}, cur) == 1) tr = 0;
if (min(R, id) - 1 >= 0 && assign_to_this[min(R, id) - 1] != 0) tr = 0;
if (assign_to_this[max(R, id)] != assign_to_this.back()) tr = 0;
if (R < id && (R ? assign_to_next[R - 1] : 0) != assign_to_next[id - 1]) tr = 0;
if (R > id && assign_to_prev[R] != assign_to_prev[id]) tr = 0;
cout << tr;
cur.first += g[i][j];
}
}
cout << "\n";
}
int main()
{
ios_base::sync_with_stdio(false);
cin.tie(0);
int t;
cin >> t;
while (t--)
{
solve();
}
}
|
1621
|
F
|
Strange Instructions
|
Dasha has $10^{100}$ coins. Recently, she found a binary string $s$ of length $n$ and some operations that allow changing this string (she can do each operation any number of times):
- Replace substring 00 of $s$ by 0 and receive $a$ coins.
- Replace substring 11 of $s$ by 1 and receive $b$ coins.
- Remove 0 from any position in $s$ and \textbf{pay} $c$ coins.
It turned out that while doing these operations Dasha should follow the rule:
- It is forbidden to do two operations with the same parity in a row. Operations are numbered by integers $1$-$3$ in the order they are given above.
Please, calculate what is the maximum profit Dasha can get by doing these operations and following this rule.
|
From the rule it follows that the sequence of operation types looks like $\ldots$, ($1$ or $3$), $2$, ($1$ or $3$), $2$, ($1$ or $3$), $\ldots$. One could suppose that operation $1$ is "better" than operation $3$, so that in an optimal solution all operations of type $3$ come after all operations of type $1$. However, this is not always true: for example, for $s = 00101$ with large $a$ and $b$ and small $c$ it is optimal to do operations of types $3$, $2$ and $1$ in this order to get a profit of $a+b-c$ ($00101 \rightarrow 0011 \rightarrow 001 \rightarrow 01$). But it turns out that this is the only case when we should do an operation of type $1$ after an operation of type $3$: we can do no more than one such operation $1$, and we can do it at the end. I will prove it later. Now we know how the operation sequence looks. Let's think about which zeroes and ones we remove at each step. Obviously, the only case where we should use an operation of type $3$ is to remove the last $0$ in a block (a block of zeroes or ones is an unextendable substring consisting only of zeroes or only of ones); otherwise we can use an operation of type $1$. Let's now view the string as a sequence of blocks; obviously, all zeroes/ones within one block are indistinguishable. Let's look at how the number of possible operations of type $2$ changes after each operation. It equals the number of ones minus the number of blocks of ones. After an operation of type $1$ the block structure doesn't change; after an operation of type $2$ the block structure also doesn't change, but the number of ones decreases by one; after an operation of type $3$ the number of ones doesn't change, but two blocks of ones are merged together (if we removed a block other than the first or the last one). So the number of possible operations of type $2$ decreases by one after an operation of type $2$ and can increase by one after an operation of type $3$. 
Also note that an operation of type $2$ cannot affect any block of zeroes in any way. It follows that all operations of type $2$ are indistinguishable and we should only care about the number of such operations that remain possible. It also follows that using operation $3$ to remove one of the middle blocks is always better than using it to remove a block from the side, and that if no operation of type $2$ is possible, the only way to keep doing operations is to do an operation of type $3$ on one of the middle blocks. However, after it we will do an operation of type $2$ and come back to this situation again. Here, if $b < c$ or no operation of type $3$ is available, we can do an operation of type $1$ (if possible) and stop. I claim that the case above is the only case when we should do an operation of type $1$ after an operation of type $3$. Assume it is not. Then we have consecutive operations of types $3$, $2$, $1$, $2$ in this order. Then operations of type $2$ were possible before the first of them, and the operations of types $1$ and $3$ are applied to different blocks. Thus we can do the same operations in the order $1$, $2$, $3$, $2$ without changing the answer (we don't care to which block the type $2$ operation is applied). So there is an optimal sequence of operations without consecutive operations $3$, $2$, $1$, $2$. We now know enough about operations of types $2$ and $3$, so let's turn to operations of type $1$. When do we start doing operations of type $3$? There are two cases: we cannot do any operation of type $1$, or no operation of type $2$ is possible. In the first case we will do all possible operations of type $1$, so their order doesn't matter. In the second case we will never come back to operations of type $1$ (except possibly the last operation), so we should prepare as many blocks of length $1$ as possible. 
The best way to do this is, at each such operation, to remove a zero from the shortest block of zeroes other than the corner blocks, and only then remove zeroes from the corner blocks. I claim that this is enough to solve the problem. It seems that there are too many cases, but all of them are covered by the algorithm below. Let's fix the parity of the type of the first operation to simplify the implementation, so at each step we know the parity of the type of the next operation. The algorithm is as follows (after each operation we try to update the answer):
- If we should do an operation of type $2$: if we can do it, we do it; otherwise, we terminate.
- If we should do an operation of type $1$ or $3$ and there are no possible operations of type $2$: if we can do an operation of type $1$, we try it (without actually doing it) and update the answer; it is the last operation in this case. Then, if we can do an operation of type $3$ that removes one of the middle blocks, we do it; otherwise, we terminate.
- If we should do an operation of type $1$ or $3$ and there are possible operations of type $2$: if we can do an operation of type $1$ on one of the middle blocks, we do it on one of the shortest middle blocks; otherwise, if we can do an operation of type $1$ on one of the corner blocks, we do it; otherwise, if we can do an operation of type $3$ on one of the middle blocks, we do it; otherwise, if we can do an operation of type $3$ on one of the corner blocks, we do it; otherwise, we terminate.
This covers all the cases and works in $\mathcal{O}(n)$. The total complexity is $\mathcal{O}(n \log n)$ because of sorting.
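The block quantities that the greedy operates on can be extracted as in the following sketch; `decompose` is a hypothetical helper mirroring the setup performed in `solve()` below:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Decompose a binary string into the quantities the greedy works with:
// returns {turns2, singles, middles}, where
//   turns2  = number of possible type-2 operations
//           = (#ones) - (#blocks of ones),
//   singles = middle blocks of zeroes of length exactly 1 (their last
//             zero can only be removed by a type-3 operation),
//   middles = middle blocks of zeroes of length > 1.
array<int, 3> decompose(const string &s)
{
    int ones = 0, oneBlocks = 0, singles = 0, middles = 0;
    int first1 = -1, last1 = -1;
    for (int i = 0; i < (int)s.size(); i++)
        if (s[i] == '1') {
            ones++;
            if (i == 0 || s[i - 1] != '1') oneBlocks++;
            if (first1 < 0) first1 = i;
            last1 = i;
        }
    if (first1 >= 0)
        for (int i = first1 + 1, len = 0; i <= last1; i++) {
            if (s[i] == '0') len++;
            else {
                if (len == 1) singles++;
                else if (len > 1) middles++;
                len = 0;
            }
        }
    return {ones - oneBlocks, singles, middles};
}
```

For $s = 00101$ this gives no possible type-$2$ operations and a single middle block of one zero, which is exactly the situation from the counterexample above.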
|
[
"data structures",
"greedy",
"implementation"
] | 2,700
|
#include <bits/stdc++.h>
typedef long long ll;
const int INF = 1e9;
using namespace std;
#define forn(i, n) for (int i = 0; (i) != (n); (i)++)
#define all(v) (v).begin(), (v).end()
#define rall(v) (v).rbegin(), (v).rend()
void solver(int turn, ll &ans, ll a, ll b, ll c, vector<int> blocks, ll other0, ll single0, ll P, ll turns1)
{
ll cur = 0;
while (1)
{
if (turn == 1)
{
if (turns1 > 0)
{
turns1--;
cur += b;
ans = max(ans, cur);
}
else
{
return;
}
}
else
{
if (turns1 == 0)
{
if (other0 > 0 || blocks.size() > 0)
{
ans = max(ans, cur + a); /// it is the final move
}
if (single0 > 0) /// we are forced to remove single zero
{
single0--;
cur -= c;
ans = max(ans, cur);
turns1++;
}
}
else
{
if (blocks.size() > 0)
{
blocks[blocks.size() - 1]--;
if (blocks.back() == 1)
blocks.pop_back(), single0++;
cur += a;
ans = max(ans, cur);
}
else if (other0 > 0)
{
other0--;
cur += a;
ans = max(ans, cur);
}
else if (single0 > 0)
{
single0--;
turns1++;
cur -= c;
ans = max(ans, cur);
}
else if (P > 0)
{
P--;
cur -= c;
ans = max(ans, cur);
}
else
{
return;
}
}
}
turn ^= 1;
}
}
ll solve(int n, string s, ll a, ll b, ll c)
{
if (n == 1)
{
return 0ll;
}
// +a for "00" -> "0"
// +b for "11" -> "1"
// -c for "0"->""
ll ans = 0;
ll fir1 = INF, lst1 = -1;
forn(i, n) if (s[i] == '1')
{
lst1 = i;
}
forn(i, n) if (s[i] == '1')
{
fir1 = i;
break;
}
if (fir1 == INF)
{
return a;
}
vector<int> blocks;
ll P = 0;
if (s[0] == '0') P++;
if (s.back() == '0') P++;
ll other0 = max(fir1 - 1, 0ll) + max(n - lst1 - 2, 0ll);
ll turns1 = 0;
for (int i = 0; i + 1 < n; i++) if (s[i] == s[i + 1] && s[i] == '1')
turns1++;
ll single0 = 0;
for (int i = fir1; i < lst1; )
{
int j = i + 1;
while (s[j] != '1')
j++;
int len = j - i - 1;
if (len == 1) single0++;
else if (len > 1) blocks.push_back(len);
i = j;
}
sort(rall(blocks));
solver(0, ans, a, b, c, blocks, other0, single0, P, turns1);
solver(1, ans, a, b, c, blocks, other0, single0, P, turns1);
return ans;
}
int main()
{
ios_base::sync_with_stdio(false);
cin.tie(0);
int t;
cin >> t;
while (t--)
{
int n;
cin >> n;
ll a, b, c;
cin >> a >> b >> c;
string s;
cin >> s;
ll ans3 = solve(n, s, a, b, c);
cout << ans3 << "\n";
}
}
|
1621
|
G
|
Weighted Increasing Subsequences
|
You are given the sequence of integers $a_1, a_2, \ldots, a_n$ of length $n$.
The sequence of indices $i_1 < i_2 < \ldots < i_k$ of length $k$ denotes the subsequence $a_{i_1}, a_{i_2}, \ldots, a_{i_k}$ of length $k$ of sequence $a$.
The subsequence $a_{i_1}, a_{i_2}, \ldots, a_{i_k}$ of length $k$ of sequence $a$ is called increasing subsequence if $a_{i_j} < a_{i_{j+1}}$ for each $1 \leq j < k$.
The weight of the increasing subsequence $a_{i_1}, a_{i_2}, \ldots, a_{i_k}$ of length $k$ of sequence $a$ is the number of indices $1 \leq j \leq k$ such that there exists an index $i_k < x \leq n$ with $a_x > a_{i_j}$.
For example, if $a = [6, 4, 8, 6, 5]$, then the sequence of indices $i = [2, 4]$ denotes the increasing subsequence $[4, 6]$ of sequence $a$. The weight of this increasing subsequence is $1$, because for $j = 1$ there exists $x = 5$ with $a_5 = 5 > a_{i_1} = 4$, but for $j = 2$ no such $x$ exists.
Find the sum of weights of all increasing subsequences of $a$ modulo $10^9+7$.
|
At first, this problem can easily be reduced to a problem about permutations by replacing $a_i$ with the pair $(a_i, -i)$. The relative order of such pairs is the same as in the original problem, but all of them are distinct, so we can replace them with a permutation. Let the weight of the increasing subsequence $a_{i_1}, a_{i_2}, \ldots, a_{i_k}$ be $w$. It is obvious that if $a_{i_j}$ affects the weight, then $a_{i_{j-1}}$ also affects the weight (for $j > 1$). So the elements $a_{i_1}, a_{i_2}, \ldots, a_{i_w}$ affect the weight of this subsequence while the others don't. As $x$ in the weight definition it is always valid to select the position of $\max(a_{i_k + 1}, \ldots, a_n)$. Let's force the selection of such $x$ for all increasing subsequences. Then we know that $a_{i_j} < a_x < a_{i_{j+1}}$ for some $j$, and the weight $w$ equals this $j$. Let's determine when $a_i$ affects the weight of an increasing subsequence $a_{i_1}, a_{i_2}, \ldots, a_{i_k}$. Firstly, $a_i$ should appear in this subsequence, that is, $a_i = a_{i_j}$ for some $j$. Secondly, $\max(a_{i_k + 1}, \ldots, a_n)$ should be greater than $a_i$. We can obtain an $\mathcal{O}(n^2)$ solution by fixing both $a_i$ and $a_{i_k}$. Let $a_{x_1} < a_{x_2} < \ldots$ be the sequence of suffix maximums. Note that $x_1 > x_2 > \ldots$ here. Let $y'$ be the smallest $y$ such that $a_{x_y} > a_i$. Then $a_i$ affects the weight of the increasing subsequence $a_{i_1}, a_{i_2}, \ldots, a_{i_k}$ if and only if $i_k \neq x_{y'}$. This is true because if the subsequence ends in $a_{x_{y'}}$, then there is no suffix maximum to the right of $a_{x_{y'}}$ that is greater than $a_i$; otherwise the subsequence ends before $a_{x_{y'}}$, so $a_i$ affects the weight. Let's calculate how many increasing subsequences contain $a_i$. For this we should multiply the number of increasing subsequences that end in $a_i$ by the number of increasing subsequences that start in $a_i$. Both values can be found by standard dynamic programming. 
We should also calculate the number of increasing subsequences that start in $a_i$ and end in $a_{x_{y'}}$. This can be done by almost the same dynamic programming. We can use a binary indexed tree to optimize all calculations to $\mathcal{O}(n \log n)$. It is important to note that when we calculate the number of increasing subsequences that end in $a_{x_{y'}}$, we are only interested in the increasing subsequences that begin in $a_{x_{y'-1}} < a_i \leq a_{x_{y'}}$, so there will be $\mathcal{O}(n)$ calls to the binary indexed tree in total.
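The reduction to a permutation can be sketched as follows; `to_permutation` is a hypothetical name, and the same transformation appears inside `solve()` below:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Map an arbitrary array to a permutation of 0..n-1 with the same set of
// increasing subsequences: sort the pairs (a_i, -i), so that equal values
// receive strictly decreasing ranks from left to right and therefore can
// never both appear in the same increasing subsequence.
vector<int> to_permutation(const vector<int> &a)
{
    int n = a.size();
    vector<pair<int, int>> b(n);
    for (int i = 0; i < n; i++) b[i] = {a[i], -i};
    sort(b.begin(), b.end());
    vector<int> perm(n);
    for (int i = 0; i < n; i++) perm[-b[i].second] = i; // rank of element at index -b[i].second
    return perm;
}
```

For the statement's example $a = [6, 4, 8, 6, 5]$ the earlier $6$ gets a larger rank than the later $6$, so $6, 6$ is still not an increasing subsequence after the reduction.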
|
[
"data structures",
"dp",
"math"
] | 3,200
|
#include <bits/stdc++.h>
using namespace std;
const int MOD = 1e9 + 7;
void add(int pos, int x, vector<int> &fenw)
{
while (pos < fenw.size())
{
fenw[pos] += x;
fenw[pos] %= MOD;
pos |= (pos + 1);
}
}
int get(int pos, vector<int> &fenw)
{
int res = 0;
while (pos >= 0)
{
res = res + fenw[pos];
res %= MOD;
pos &= (pos + 1);
pos--;
}
return res;
}
int weight_of_all_subseq(vector<int> seq)
{
vector<int> dp_up(seq.size());
vector<int> fenw1(seq.size());
for (int i = (int)seq.size() - 1; i >= 0; i--)
{
dp_up[i] = (1ll + get(seq.size() - 1, fenw1) - get(seq[i], fenw1) + MOD) % MOD;
add(seq[i], dp_up[i], fenw1);
}
vector<int> fenw2(seq.size());
vector<int> dp_down(seq.size());
for (int i = 0; i < seq.size(); i++)
{
dp_down[i] = (1ll + get(seq[i], fenw2)) % MOD;
add(seq[i], dp_down[i], fenw2);
}
vector<int> suf_mx;
vector<int> is_suf_mx(seq.size());
int mx = 0;
for (int i = (int)seq.size() - 1; i >= 0; i--)
{
mx = max(mx, seq[i]);
if (seq[i] == mx)
suf_mx.push_back(i), is_suf_mx[i] = 1;
}
vector<int> q(seq.size());
for (int i = 0; i < seq.size(); i++)
q[seq[i]] = i;
vector<int> dp_up_fix(seq.size());
vector<int> fenw3(seq.size());
int lst = seq.size() - 1;
for (int x = (int)seq.size() - 1; x >= 0; x--)
{
if (is_suf_mx[q[x]])
{
dp_up_fix[x] = 1;
for (int j = x + 1; j <= lst; j++)
add(q[j], (MOD - dp_up_fix[j]) % MOD, fenw3);
add(q[x], dp_up_fix[x], fenw3);
lst = x;
continue;
}
dp_up_fix[x] = (get(q[lst], fenw3) - get(q[x], fenw3) + MOD) % MOD;
add(q[x], dp_up_fix[x], fenw3);
}
int ans = 0;
for (int i = (int)seq.size() - 1; i >= 0; i--)
{
ans = (ans + 1ll * dp_down[i] * (dp_up[i] + MOD - dp_up_fix[seq[i]])) % MOD;
}
return ans;
}
void solve()
{
int n;
cin >> n;
vector<pair<int, int> > b(n);
for (int i = 0; i < n; i++)
cin >> b[i].first, b[i].second = -i;
vector<int> a(n);
sort(b.begin(), b.end());
for (int i = 0; i < n; i++)
a[-b[i].second] = i;
cout << weight_of_all_subseq(a) << "\n";
}
int main()
{
ios_base::sync_with_stdio(false);
cin.tie(0);
int t;
cin >> t;
while (t--)
{
solve();
}
}
|
1621
|
H
|
Trains and Airplanes
|
The railway network of a city consists of $n$ stations connected by $n-1$ roads. These stations and roads form a tree. Station $1$ is the city center. For each road you know the time trains spend to pass it. You can assume that trains don't spend time on stops. Let's define $dist(v)$ as the time a train spends to get from station $v$ to station $1$.
This railway network is split into zones named by the first $k$ capital Latin letters. The zone of the $i$-th station is $z_i$. The city center is in zone A. For every other station it is guaranteed that the first station on the road from it to the city center is either in the same zone or in a zone with a lexicographically smaller name. Each road is owned by the zone of its end that is more distant from the city center.
A tourist will arrive at the airport soon and then go to the city center. Here's how the trip from station $v$ to station $1$ happens:
- At the moment $0$, the tourist boards the train that goes directly from station $v$ to station $1$. The trip lasts $dist(v)$ minutes.
- The tourist can buy tickets for any subset of zones at any moment. A ticket for zone $i$ costs $pass_i$ euro.
- Every $T$ minutes after the start of the trip (that is, at the moments $T, 2T, \ldots$) the control system scans the tourist. If at the moment of a scan the tourist is in zone $i$ without a zone $i$ ticket, he has to pay $fine_i$ euro. Formally, the zone of the tourist is determined in the following way:
- If the tourist is at station $1$, then he is already at the city center, so he doesn't pay a fine.
- If the tourist is at station $u \neq 1$, then he is in zone $z_u$.
- If the tourist is moving from station $x$ to station $y$ that are directly connected by a road, then he is in zone $z_x$.
Note that the tourist can pay a fine multiple times in the same zone.
The tourist always buys tickets and pays fines in a way that minimizes the total cost of the trip. Let $f(v)$ be this cost for station $v$.
Unfortunately, the tourist doesn't know the current values of $pass_i$ and $fine_i$ for the different zones, and he has forgotten the location of the airport. He will ask you queries of $3$ types:
- $1$ $i$ $c$ — the cost of \textbf{ticket} in zone $i$ has changed. Now $pass_i$ is $c$.
- $2$ $i$ $c$ — the cost of \textbf{fine} in zone $i$ has changed. Now $fine_i$ is $c$.
- $3$ $u$ — solve the following problem for current values of $pass$ and $fine$:
- You are given the station $u$. Consider all the stations $v$ that satisfy the following conditions:
- $z_v = z_u$
- The station $u$ is on the path from the station $v$ to the station $1$.
Find the value of $\min(f(v))$ over all such stations $v$ with the following assumption: \textbf{tourist has the ticket for the zone of station} $z_u$.
|
Let $scans_{v, i}$ be the number of scans in the $i$-th zone if we start from the vertex $v$, with $scans_{v, z_v} = 0$. Note that $scans$ doesn't change during queries. Now our task is to calculate $\sum_{z \in zones} \min (scans_{v, z} \cdot fine_{z}, pass_{z})$ over some set of vertices $A_i$ described in the $i$-th query. It is important to note that for any two vertices $v_1, v_2 \in A_i$ and any zone $z$ we have $|scans_{v_1, z} - scans_{v_2, z}| \leq 1$. From this we can see that there are no more than $\mathcal{O}(2^k)$ different values of $scans_v$ for $v \in A_i$, so we don't need to check all vertices in $A_i$ to find the answer. However, that is still quite a lot. Let $minscans_z$ be the minimum value of $scans_{v, z}$ for $v \in A_i$. Some vertices $v$ give $minscans_z$ scans in zone $z$ and the others give $minscans_z + 1$. Actually, we can look at the moment $t$ at which we escape the start zone (the zone that contains the airport). For each zone $z$ there is a segment modulo $T$ such that if $t$ lies in it we get $minscans_z$ scans in this zone, and $minscans_z + 1$ scans otherwise. Suppose we do something like a sweepline over these segments. There are $\mathcal{O}(k)$ segments and they split the circle modulo $T$ into at most $\mathcal{O}(k)$ sections. This is how we get that we actually need to check no more than $2k$ vertices to find the answer for each query. As for the implementation, we should be able to find where we escape the start zone (call this vertex $V$) and the time spent on the road from any vertex to vertex $1$; this is easily done with a dfs. We should also be able to compute $scans_{V}$, which can be done in $\mathcal{O}(k)$ by repeatedly jumping to the next vertex of the path that lies in another zone. 
While doing this we can also find the segments modulo $T$ in which we should escape the start zone in order to have fewer scans in each zone: if we start from the vertex $V$ and are in zone $z$ during the segment of moments $[t_l, t_r]$, then we will have to pay an additional fine in zone $z$ if we pass through $V$ at one of the moments $[T-t_r, T-t_l] \pmod T$. Then we run a sweepline over these segments to find the segments of times we are interested in. The last thing we need is to determine which of the previously described segments contain vertices from $A_i$. This can be done by computing the segment for each vertex and then merging these values with a dfs. While answering a query we can iterate over all previously described segments and update the answer whenever a segment contains a suitable vertex. To reduce the time complexity of answering a query from $\mathcal{O}(k^2)$ to $\mathcal{O}(k)$ we should actually perform the sweepline while maintaining the current answer, and precompute whether each section contains suitable vertices. This gives an $\mathcal{O}(nk\log k + qk)$ solution with $\mathcal{O}(nk)$ memory. We can also do all calculations in place and solve the problem with $\mathcal{O}(n)$ memory in $\mathcal{O}(nk\log k + qk\log k)$, which is not much slower. This problem is somewhat connected with real life, so you may be interested in how such a description of the set of vertices in the queries was obtained. Assume the tourist has already been to this airport. He bought a ticket at the station near the airport but activated it only at station $u$ in the same zone. At the moment of the query he had this ticket, and it contained some information about the activation.
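The counting primitive behind $scans_{v,i}$, namely how many scan moments $T, 2T, \ldots$ fall into a given time interval, can be sketched like this (a standalone helper, not taken verbatim from the solution code):

```cpp
// Scans happen at moments T, 2T, 3T, ...; count how many of them fall
// into the closed time interval [l, r], assuming 1 <= l <= r.
// Multiples of T up to r, minus multiples of T strictly before l.
long long scans_between(long long l, long long r, long long T)
{
    return r / T - (l - 1) / T;
}
```

For example, with $T = 3$ there are $3$ scans during $[1, 10]$ (at moments $3$, $6$, $9$) and one scan during $[4, 6]$.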
|
[
"dfs and similar",
"graphs",
"shortest paths",
"trees"
] | 3,500
|
#include <bits/stdc++.h>
using namespace std;
const long long INF = 1e9 + 5;
pair<vector<long long>, vector<pair<int, int> > > get_total_and_changes(int vv, int k, int T, vector<long long> &depth, vector<int> &checkpoint, string &z)
{
//int vv = par;
long long enter_time = 0;
long long cur_time = 0;
vector<pair<int, int> > changes;
vector<long long> total(k);
while (vv != 0)
{
cur_time += depth[vv] - depth[checkpoint[vv]];
int L = enter_time % T, R = (cur_time - 1) % T;
total[z[vv] - 'A'] += (cur_time - 1 + T) / T - (enter_time - 1 + T) / T;
L = (T - L) % T;
R = (T - R) % T;
if ((L + 1) % T != R % T)
{
if (R % T != 0) changes.push_back({R % T, z[vv]});
if ((L + 1) % T != 0) changes.push_back({(L + 1) % T, -z[vv]});
}
enter_time = cur_time;
vv = checkpoint[vv];
}
sort(changes.begin(), changes.end());
return {total, changes};
}
void dfs(int v, int par, vector<vector<pair<int, int> > > &g,
vector<int> &p, vector<long long> &depth,
string &z, vector<int> &checkpoint)
{
checkpoint[v] = checkpoint[par];
p[v] = par;
if (z[par] != z[v])
checkpoint[v] = par;
for (auto e : g[v]) if (e.first != par)
{
depth[e.first] = depth[v] + e.second;
dfs(e.first, v, g, p, depth, z, checkpoint);
}
}
void dfs(int v, int p, vector<vector<pair<int, int> > > &g, string &z, vector<long long> &checkpoint_sectors)
{
for (auto e : g[v]) if (e.first != p)
{
dfs(e.first, v, g, z, checkpoint_sectors);
if (z[e.first] == z[v] && z[v] != 'A')
{
checkpoint_sectors[v] |= checkpoint_sectors[e.first];
}
}
}
int main()
{
ios_base::sync_with_stdio(false);
cin.tie(0);
// read the main input
int n;
cin >> n;
vector<vector<pair<int, int> > > g(n);
for (int i = 0; i < n - 1; i++)
{
int v, u, t;
cin >> v >> u >> t;
v--, u--;
g[v].push_back({u, t});
g[u].push_back({v, t});
}
int k;
cin >> k;
string z;
cin >> z;
vector<int> pass(k), fine(k);
for (int i = 0; i < k; i++)
{
cin >> pass[i];
}
for (int i = 0; i < k; i++)
{
cin >> fine[i];
}
int T;
cin >> T;
// work with main data
vector<int> p(n);
vector<long long> depth(n);
vector<int> checkpoint(n);
dfs(0, 0, g, p, depth, z, checkpoint);
vector<long long> checkpoint_sectors(n);
for (int i = 0; i < n; i++)
{
if (z[i] == 'A') continue;
int CH = checkpoint[i];
int pass = (depth[i] - depth[CH]) % T;
vector<pair<int, int> > table_changes = get_total_and_changes(CH, k, T, depth, checkpoint, z).second;
for (int j = 0; j < table_changes.size(); j++)
{
if (table_changes[j].first <= pass)
checkpoint_sectors[i]++;
}
checkpoint_sectors[i] = (1ll << checkpoint_sectors[i]);
}
dfs(0, 0, g, z, checkpoint_sectors);
// answer queries
int q;
cin >> q;
while (q--)
{
int t;
cin >> t;
if (t == 1)
{
char z;
int c;
cin >> z >> c;
pass[z - 'A'] = c;
continue;
}
else if (t == 2)
{
char z;
int c;
cin >> z >> c;
fine[z - 'A'] = c;
continue;
}
int u;
cin >> u;
u--;
if (z[u] == 'A')
{
cout << 0 << "\n";
continue;
}
int CH = checkpoint[u];
long long ans = 1e18;
pair<vector<long long>, vector<pair<int, int> > > tables = get_total_and_changes(CH, k, T, depth, checkpoint, z);
long long cur = 0;
for (int i = 0; i < k; i++) cur += min(1ll * pass[i], 1ll * fine[i] * min(tables.first[i], INF));
for (int i = 0; i <= tables.second.size(); i++)
{
if (checkpoint_sectors[u] & (1ll << i)) ans = min(ans, cur);
if (i == tables.second.size()) break;
int j = tables.second[i].second;
if (j > 0)
{
j -= 'A';
cur -= min(1ll * pass[j], 1ll * fine[j] * min(tables.first[j], INF));
tables.first[j]++;
cur += min(1ll * pass[j], 1ll * fine[j] * min(tables.first[j], INF));
}
else
{
j *= -1;
j -= 'A';
cur -= min(1ll * pass[j], 1ll * fine[j] * min(tables.first[j], INF));
tables.first[j]--;
cur += min(1ll * pass[j], 1ll * fine[j] * min(tables.first[j], INF));
}
}
cout << ans << "\n";
}
}
|
1621
|
I
|
Two Sequences
|
Consider an array of integers $C = [c_1, c_2, \ldots, c_n]$ of length $n$. Let's build the sequence of arrays $D_0, D_1, D_2, \ldots, D_{n}$ of length $n+1$ in the following way:
- The first element of this sequence will be equal to $C$: $D_0 = C$.
- For each $1 \leq i \leq n$ array $D_i$ will be constructed from $D_{i-1}$ in the following way:
- Let's find the lexicographically smallest subarray of $D_{i-1}$ of length $i$. Then, the first $n-i$ elements of $D_i$ will be equal to the corresponding $n-i$ elements of array $D_{i-1}$, and the last $i$ elements of $D_i$ will be equal to the corresponding elements of the found subarray of length $i$.
Array $x$ is a subarray of array $y$ if $x$ can be obtained by deleting several (possibly zero or all) elements from the beginning of $y$ and several (possibly zero or all) elements from the end of $y$.
For array $C$ let's denote array $D_n$ as $op(C)$.
Alice has an array of integers $A = [a_1, a_2, \ldots, a_n]$ of length $n$. She will build the sequence of arrays $B_0, B_1, \ldots, B_n$ of length $n+1$ in the following way:
- The first element of this sequence will be equal to $A$: $B_0 = A$.
- For each $1 \leq i \leq n$ array $B_i$ will be equal to $op(B_{i-1})$, where $op$ is the transformation described above.
She will ask you $q$ queries about the elements of the sequence of arrays $B_0, B_1, \ldots, B_n$. Each query consists of two integers $i$ and $j$, and the answer to this query is the value of the $j$-th element of array $B_i$.
|
Let's denote the construction of the array $D_i$ from the array $D_{i-1}$ as the $i$-th step of the transformation $op$. Also, let's pretend that all steps modify $C$ in place instead of creating multiple new arrays. The solution consists of $4$ parts: Show that there exists a small $m$ such that $B_m = B_{m-1}$; if $n \leq 10^5$, then $m$ doesn't exceed $7$ ($m = \mathcal{O}(\log\log n)$). Find the lexicographically smallest suffix of each prefix of some array. For a subarray $c_l, \ldots, c_r$, find the smallest $k$ such that the subarray $c_l, \ldots, c_r$ is a period (without a prefix) of the subarray $c_k, \ldots, c_r$; show that on the $(n-r+1)$-th step of the transformation the lexicographically smallest subarray starts at $c_l$ or at $c_k$. Find a way to simulate the transformation quickly enough. The total time complexity is $\mathcal{O}(n \log^2 n \log\log n)$. You can try to solve each of the parts independently! Part $1$. Let's consider an array $C$ we are going to apply $op$ to. It is easy to see that $op(C) = C$ if it is non-increasing (because at each step the suffix of length $i$ is one of the lexicographically smallest subarrays of length $i$). Otherwise let $j$ be the smallest index such that $c_j < c_{j+1}$. Let $x$ be the smallest index such that $c_x = c_j$ (then $c_x = c_{x+1} = \ldots = c_j$: all elements of this subarray of length $j - x + 1$ are equal). The first $n - j - 1$ steps don't affect the prefix of length $j + 1$, and it is easy to see that the last $x$ steps don't change the array (for the same reason as in the paragraph above). Let's understand what happens at the $(n-j)$-th step. We should find the lexicographically smallest subarray that starts within the prefix of length $j+1$. The smallest element of this prefix is $c_j$, so this subarray starts at one of the positions between $x$ and $j$. Since $c_j < c_{j+1}$, the subarray of length $n-j$ that starts at position $x$ is always one of the lexicographically smallest subarrays. 
Thus, after the $(n-j)$-th step: If $x + n - j - 1 \leq j$, then all elements at positions between $j+1$ and $n$ become equal to $c_x$ (because the selected subarray consists only of elements equal to $c_x$). It is easy to see that the array becomes non-increasing, so it never changes again. Otherwise the elements at positions $j+1, j+2, \ldots, j+(j-x-1)$ become equal to $c_x$, and the elements at positions $j+(j-x), \ldots, n$ are equal to the old values of $c_x, c_{x+1}, \ldots, c_{n-(j-x)}$. In particular, $c_{j+(j-x)}$ is equal to the old value of $c_x$. It is important that in the second case the length of the run of equal elements becomes $(j-x+1)+(j-x+1)$ and it is followed by a greater element. Now we can apply the same ideas to the steps between the $(n-j+1)$-th and the $(n-x)$-th. We see that the length of the run of equal elements becomes $(j-x+1) + (j-x+1) + (j-x) + (j-x-1) + \ldots + 1 = \frac{(j-x+3)(j-x+2)}{2}-1$, or this run becomes the suffix and the whole array never changes again. In the initial array $A$ the length of this run (the run of equal elements starting at position $x$) is at least $1$, so in $B_1$ it is at least $2$, in $B_2$ at least $5$, in $B_3$ at least $20$, in $B_4$ at least $230$, in $B_5$ at least $26795$, and in $B_6$ this run becomes the suffix. After that all arrays are equal to $B_6$ (because $n \leq 10^5$). The numbers above are connected with the sequence A007501. Now we know that we need to simulate $op$ only $6$ times to get all the answers. In the following solution I will optimize $op$ assuming the time limit is approximately $1$ second. It is easy to see how to find $B_6$ in linear time and simulate $op$ only $5$ times. Part $2$. If we had no change queries, the problem would be to find the lexicographically smallest substring of length $k$ in the given string for every $k$ between $1$ and $n$. 
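The growth bound from Part $1$ can be checked numerically with a short sketch; `steps_until_stable` is a hypothetical name:

```cpp
// Lower bound on the length of the run of equal elements after each
// application of op: L -> (L+2)(L+1)/2 - 1 = L*(L+3)/2, starting from 1.
// Returns how many applications are needed before the bound reaches n;
// after that many applications the array can no longer change.
int steps_until_stable(long long n)
{
    long long len = 1;
    int steps = 0;
    while (len < n) {
        len = len * (len + 3) / 2; // 1, 2, 5, 20, 230, 26795, ...
        steps++;
    }
    return steps;
}
```

For $n \leq 10^5$ this returns at most $6$, matching the claim that all arrays equal $B_6$.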
The solution of this problem is to build the suffix array and find the lexicographically smallest suffix of length at least $k$. It is important that change queries replace a suffix of the initial array with "something lexicographically small", and that on the $i$-th step the required substring starts at one of the positions on the prefix of length $i-1$. We can guess that we should find the lexicographically smallest suffix of each prefix. I will prove later that this is almost enough. As far as I know this is not a well-known problem, so I will describe the solution. At first we can build the suffix array of the initial array and a sparse table in order to compare subarrays fast and to find the first position where subarrays starting at some positions $i$ and $j$ differ. Now let's consider all prefixes of the initial array in order of increasing length. We want to understand how the suffix array of the current array changes when we append an integer to its right end. There are some prefix-independent subarrays (that is, subarrays whose distinguishing element has already been added), and some segments of prefix-dependent subarrays. It is easy to see that the prefix-independent subarrays won't change their relative order, but prefix-dependent subarrays within one segment can. Suppose that you have a sorted sequence of pairwise different arrays $s_1, s_2, \ldots, s_n$, and each of them is a prefix of the next one. Then appending some integer $c$ to each of them and sorting them again is equivalent to the following operation: split the sequence of arrays into two subsequences $s_{k_1}, s_{k_2}, \ldots, s_{k_x}$ and $s_{m_1}, s_{m_2}, \ldots, s_{m_y}$, where the first of them contains all arrays $s_i$ such that $i < n$ and $s_i + c \leq s_{i+1}$, and the second contains all other arrays. 
The sorted sequence of arrays is equal to $s_{k_1} + c, s_{k_2} + c, \ldots, s_{k_x} + c, s_{m_y} + c, s_{m_{y-1}} + c, \ldots, s_{m_1} + c$. Prefix-dependent arrays can appear only in the first part of the sequence. We can use the sparse table to find the first position where prefix-dependent subarrays differ. Using the suffix array we can find the final order of these arrays. Since we are interested only in the lexicographically smallest suffix, we can maintain only some data about the first segment of prefix-dependent subarrays in the suffix array. Let's call the suffix from the first segment whose position in the final suffix array is the smallest the important suffix. Then the suffixes that can be the lexicographically smallest now differ from the important suffix in some later position. Let's maintain "the minimal suffix candidates" as a set of triples (the position of the beginning of this suffix, the position of this suffix in the final suffix array, the position where this suffix differs from the important suffix). (The code that finds the minimal suffix of each prefix of the given array is the function msep in the implementation below.) This part of the solution works in $\mathcal{O}(n \log^2 n)$ or in $\mathcal{O}(n \log n)$, depending on your implementation of the suffix array. Part $3$. For a given $r$, let $l$ be the position of the beginning of the lexicographically smallest (lexmin) suffix of $c_1, c_2, \ldots, c_r$. For the subarray $c_l, \ldots, c_r$, I want to find the smallest $k$ such that $c_l, \ldots, c_r$ is a period of $c_k, \ldots, c_r$ (without a prefix). It can easily be done by binary search or binary lifting in $\mathcal{O}(n \log n)$ for all $r$. We use the suffix array and sparse table from the previous part here. Now, let's show that on the $(n-r+1)$-th step the lexicographically smallest subarray starts at one of the positions $l$ and $k$ (for this $r$). Let $i = n - r + 1$. Let's assume that the lexmin subarray starts at position $x$ and $x \neq l$. 
If $c_l, \ldots, c_r$ is not a prefix of $c_x, \ldots, c_r$, then the subarray that starts at position $l$ is not larger than the subarray that starts at position $x$, by the choice of $l$. For the same reason $x < l$. Now we know that $c_l, \ldots, c_r$ is a prefix of $c_x, \ldots, c_r$. Let $p$ be the smallest positive integer such that $c_{l+p} \neq c_{x+p}$. Since we assumed that the lexmin subarray starts at position $x$, we have $c_{l+p} > c_{x+p}$. The case $x + p > x + i - 1$ is not interesting, because the subarrays of length $i$ starting at positions $l$ and $x$ are equal in this case. If $l + p \leq r$ then $c_l, \ldots, c_r$ is not a prefix of $c_x, \ldots, c_r$. If $x + p \leq r$ then we are in the following situation: on the first picture the subarray that starts at position $x$ is shown, and on the second picture the subarray that starts at position $l$ is shown. $S$, $T$ are subarrays ($T$ can be empty), $a$, $b$ are the integers at positions $x+p$ and $l+p$. The blue line splits the array into the prefix of length $r$ and the suffix of length $n-r$. Then we can show that the subarray that starts at position $x+(r-l)+1$ is lexicographically smaller than the subarray we selected on the previous step. But that can't be true. 
If $x + p > r$ we can't get such a simple contradiction. But in this case we know that there are some items that appear in both subarrays. I will show that $c_l, \ldots, c_r$ is a period (without a prefix) of $c_x, \ldots, c_r$. Consider arrays $c_x, \ldots, c_{x+p-1}$ and $c_l, \ldots, c_{l+p-1}$. Let $S$ be the subarray $c_l, \ldots, c_r$ and $T$ be the subarray $c_{x+(r-l)+1}, \ldots, c_r$. Then from the equalities $c_y = c_{y-(l-x)}$ for $x + (r-l) \leq y < l+p$ it follows that $T$ is a period of the subarray $c_{r + 1}, \ldots, c_{l+p-1}$. It is easy to see that the length of $T$ is at least the length of $S$ and that $T$ is lexicographically larger than $S$ according to the choice of $l$. But if the prefix of $T$ of length $|S|$ were larger than $S$, we would not have selected the subarray that starts with $T$ on the previous step. Thus $S$ is a prefix of $T$. For the same reason we can show that the concatenation of $\alpha \leq \lfloor\frac{|T|}{|S|}\rfloor$ copies of $S$ is a prefix of $T$. At the same time $S$ is a suffix of $T$. Thus $S$ is a period of $T$. If $|T| \bmod |S| \neq 0$, then the suffix of $S$ of length $|T| \bmod |S|$ is lexicographically smaller than $S$, which contradicts the choice of $l$. Thus $S$ is a period of $T$ (without a prefix). 
The last thing to show is that $x = k$. Otherwise, $x - (r-l+1) \geq k$. It is easy to show that the subarray that starts at position $x-(r-l+1)$ is not larger than the subarray that starts at position $x$. It follows from the selection of $p$: $c_{l+p}$ makes the first difference between $c_{r+1}, \ldots, c_{n}$ and the infinite concatenation of copies of the subarray $c_l, \ldots, c_r$. Part $4$. Now we know that on each step we should compare two subarrays that start at some known positions. Let's deal with copying the subarray to the suffix. Let's split the array into two parts. The first of them consists of the first $n-i+1$ elements on the $i$-th step. The second of them is the subarray we found on the previous step. Let's store the second part as a queue of subarrays of the initial array. If we copy some subarray that doesn't intersect the suffix, we just put it at the beginning of the queue. Otherwise, we can put the suffix of the current prefix into this queue. We can maintain this queue in a segment tree with hashes. To compare two subarrays we use binary search. Total time complexity of this part is $\mathcal{O}(n \log^2 n)$. There will be $M = \mathcal{O}(n \log n \log \log n) \approx 10^7$ hash comparisons in the whole solution. The model solution uses a modulus near $10^{36}$. I guess that it is possible to create a faster solution, or a solution without hashes, using some ideas from part $3$: isn't the "prefix of $T$" from the picture above either really small ($0$ or $1$) or huge ($|T| - 1$ or $|T| - 2$)? 
It is easy to see where the increasing order of subarray lengths is used; but is it necessary that these lengths are exactly $1, \ldots, n$ in parts $2$-$4$ (in other words, is it possible to solve a problem that requires performing only some given steps of $op$)?
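To illustrate part $1$ numerically, the guaranteed length of the block of equal elements after each application of $op$ follows the recurrence $L \mapsto L + \frac{L(L+1)}{2} = \frac{L(L+3)}{2}$ derived above (sequence A007501). A small illustrative Python sketch, not part of the model solution:

```python
def block_lengths(steps):
    # Guaranteed length of the block of equal elements after each
    # application of op, starting from the worst case of length 1.
    L, lengths = 1, [1]
    for _ in range(steps):
        L = L * (L + 3) // 2  # L + L*(L+1)//2, per the editorial's sum
        lengths.append(L)
    return lengths

# block_lengths(6) -> [1, 2, 5, 20, 230, 26795, 359026205],
# so after 6 applications the block covers any array with n <= 10^5.
```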
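For part $2$, a quadratic brute force is convenient for stress-testing a suffix-array based implementation; the function name and $0$-based indexing here are my own choices:

```python
def min_suffix_starts(a):
    # For every prefix a[:r], the start index (0-based) of its
    # lexicographically smallest suffix. Quadratic reference only.
    starts = []
    for r in range(1, len(a) + 1):
        starts.append(min(range(r), key=lambda i: a[i:r]))
    return starts

# min_suffix_starts([2, 1, 2, 1]) -> [0, 1, 1, 3]
```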
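For part $3$, the smallest $k$ can also be checked brute-force. Here I read "$c_l, \ldots, c_r$ is a period (without a prefix) of $c_k, \ldots, c_r$" as "$c_k, \ldots, c_r$ consists of whole copies of $c_l, \ldots, c_r$"; the sketch below uses $0$-based inclusive bounds and is a reference under that assumption, not the $\mathcal{O}(n \log n)$ method:

```python
def smallest_period_start(c, l, r):
    # Smallest k such that c[k..r] consists of whole copies of the block
    # c[l..r] (0-based, inclusive). Walk left one whole block at a time.
    block = c[l:r + 1]
    k = l
    while k >= len(block) and c[k - len(block):k] == block:
        k -= len(block)
    return k

# smallest_period_start([1, 2, 1, 2, 1, 2], 4, 5) -> 0
```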
|
[
"data structures",
"hashing",
"string suffix structures"
] | 3,500
|
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
typedef pair<int, int> pii;
#define all(x) (x).begin(), (x).end()
#define l first
#define r second
const int INF = 1e9 + 1;
const int N = 202000;
const ll MOD1 = 998244391;
const ll M1 = 1e6 + 3;
int deg1[N];
vector<int> lcp(vector<int> &s, vector<int> &sa, vector<int> &pos)
{
int n = s.size();
vector<int> L(n);
int k = 0;
for (int i = 0; i < n; i++)
{
if (k > 0) k--;
if (pos[i] == n - 1)
L[n - 1] = -1, k = 0;
else
{
int j = sa[pos[i] + 1];
while (max(i + k, j + k) < n && s[i + k] == s[j + k])
k++;
L[pos[i]] = k;
}
}
return L;
}
int H[N];
int sp[20][N];
int get(int l, int r)
{
int j = H[r - l + 1];
return min(sp[j][l], sp[j][r - (1 << j) + 1]);
}
void buildsa(vector<int> &s, vector<int> &sa, vector<int> &pos)
{
map<int, int> mm;
for (int i = 0; i < (int)s.size(); i++)
mm[s[i]] = -1;
int x = 0;
for (int c = 0; c <= s.size(); c++)
{
if (mm.find(c) != mm.end())
mm[c] = x++;
}
for (int i = 0; i < (int)s.size(); i++)
{
pos[i] = mm[s[i]];
}
vector<pair<pii, int> > p(s.size());
for (int j = 1; (1 << j) <= 2 * (int)s.size(); j++)
{
for (int i = 0; i < (int)s.size(); i++)
{
p[i] = {{pos[i], pos[(i + (1 << j) / 2) % s.size()]}, i};
}
sort(p.begin(), p.end());
int x = 0;
pos[p[0].second] = x;
for (int i = 1; i < (int)p.size(); i++)
{
if (p[i].first != p[i - 1].first) x++;
pos[p[i].second] = x;
}
}
for (int i = 0; i < (int)s.size(); i++)
{
sa[pos[i]] = i;
}
}
vector<int> msep(vector<int> &s, vector<int> &sa, vector<int> &pos)
{
vector<int> res(s.size());
int n = s.size();
int ri = 0;
int pi = INF;
set<array<int, 3> > waitlist;
set<array<int, 3> > waitlist_sorted;
auto emplace = [&](int z) {
pi = min(pi, z);
if (pi != INF && pi < z && get(pi, z - 1) >= ri - sa[z])
{
waitlist.insert({sa[z] + get(pi, z - 1), z, sa[z]});
waitlist_sorted.insert({-sa[z], sa[z] + get(pi, z - 1), z});
}
};
while (ri < n)
{
ri++;
while (waitlist.size())
{
array<int, 3> it = *waitlist.begin();
int z = it[1];
if (it[0] >= ri) break;
waitlist.erase(it);
waitlist_sorted.erase({-it[2], it[0], it[1]});
}
emplace(pos[ri - 1]);
if (waitlist_sorted.size())
{
res[ri - 1] = -(*waitlist_sorted.begin())[0];
}
else
{
res[ri - 1] = sa[pi];
}
}
return res;
}
double precomputation_time = 0;
vector<int> fast_op(vector<int> s)
{
double st = clock();
s.push_back(0);
vector<int> sa(s.size());
vector<int> pos(s.size());
buildsa(s, sa, pos);
vector<int> L = lcp(s, sa, pos);
for (int i = 0; i < (int)s.size(); i++) sp[0][i] = L[i];
for (int j = 0; j < 20; j++)
for (int i = 0; i + (1 << j) - 1 < (int)s.size(); i++)
sp[j][i] = get(i, i + (1 << j) - 1);
s.pop_back();
auto cmp = [&](int l1, int l2, int len)
{
if (get(min(pos[l1], pos[l2]), max(pos[l1], pos[l2]) - 1) >= len)
return 1;
return 0;
};
vector<int> min_suf_each_pref = msep(s, sa, pos);
vector<int> kek_suf_each_pref(s.size());
for (int i = 0; i < s.size(); i++)
{
int len = i + 1 - min_suf_each_pref[i];
kek_suf_each_pref[i] = min_suf_each_pref[i];
int j = 0;
for (j = 0; j < 20; j++)
{
ll nc = kek_suf_each_pref[i] - (1ll << j) * len;
if (nc >= 0 && cmp(nc, kek_suf_each_pref[i], (1ll << j) * len))
kek_suf_each_pref[i] = nc;
else
break;
}
for (; j >= 0; j--)
{
ll nc = kek_suf_each_pref[i] - (1ll << j) * len;
if (nc >= 0 && cmp(nc, kek_suf_each_pref[i], (1ll << j) * len))
kek_suf_each_pref[i] = nc;
}
}
precomputation_time += 1.0 * (clock() - st) / CLOCKS_PER_SEC;
deque<pair<int, int> > h;
struct SegTree{
struct Node
{
int l, r;
ll len;
ll h1;
};
vector<Node> tree;
vector<Node> hashes;
int pnt;
SegTree(vector<int> s)
{
tree.resize(4 * s.size());
hashes.resize(s.size());
hashes[0].h1 = s[0];
for (int i = 1; i < s.size(); i++)
{
hashes[i].h1 = (hashes[i - 1].h1 * M1 + s[i]) % MOD1;
}
pnt = s.size();
}
Node Get(int l, int r)
{
Node res;
res.l = l;
res.r = r;
res.len = r - l + 1;
res.h1 = (hashes[r].h1 - (l ? hashes[l - 1].h1 : 0) * deg1[r - l + 1]) % MOD1;
res.h1 = (res.h1 + MOD1) % MOD1;
return res;
}
Node Merge(Node L, Node R)
{
if (L.len + R.len < N) L.h1 = (L.h1 * deg1[R.len] + R.h1) % MOD1;
L.len += R.len;
return L;
}
void Set(int pos, Node X, int L, int R, int V)
{
if (L + 1 == R)
{
tree[V] = X;
return;
}
int M = (L + R) / 2;
if (pos < M) Set(pos, X, L, M, 2 * V + 1);
else Set(pos, X, M, R, 2 * V + 2);
tree[V] = Merge(tree[2 * V + 1], tree[2 * V + 2]);
}
void push_front(int l, int r)
{
pnt--;
Set(pnt, Get(l, r), 0, tree.size() / 4, 0);
}
Node Get(int chars, int L, int R, int V)
{
if (L + 1 == R)
{
return Get(tree[V].l, tree[V].l + chars - 1);
}
int M = (L + R) / 2;
if (tree[2 * V + 1].len >= chars)
return Get(chars, L, M, 2 * V + 1);
return Merge(tree[2 * V + 1], Get(chars - tree[2 * V + 1].len, M, R, 2 * V + 2));
}
Node Get(int chars)
{
return Get(chars, 0, tree.size() / 4, 0);
}
};
SegTree TT(s);
for (int i = 1; i <= s.size(); i++)
{
int pos1 = min_suf_each_pref[s.size() - i];
int pos2 = kek_suf_each_pref[s.size() - i];
int len = s.size() - i + 1 - pos1;
if (pos1 == pos2)
{
int LL = min_suf_each_pref[s.size() - i];
int RR = min((int)s.size() - i, min_suf_each_pref[s.size() - i] + i - 1);
h.push_front({LL, RR});
TT.push_front(LL, RR);
continue;
}
int t2 = -1;
int aval = s.size() - i + 1 - pos2 - len;
aval = min(aval, i);
if (TT.Get(aval).h1 != TT.Get(pos2 + len, pos2 + len + aval - 1).h1)
t2 = 1;
int shL = aval, shR = i;
while (t2 == -1 && shL + 1 < shR)
{
int sh = (shL + shR) / 2;
int x1 = TT.Get(sh).h1;
int x2 = TT.Merge(TT.Get(pos2 + len, s.size() - i), TT.Get(sh - aval)).h1;
if (x1 != x2)
shR = sh;
else
shL = sh;
}
if (t2 == -1 && shR < i)
{
int x1 = TT.Get(shR).h1;
int x2 = TT.Merge(TT.Get(pos2 + len, s.size() - i), TT.Get(shR - aval)).h1;
if (x1 != x2)
{
if ((x1 - x2 + MOD1) % MOD1 < MOD1 / 2)
{
t2 = 2;
}
else
{
t2 = 1;
}
}
}
if (t2 == 1)
{
int LL = min_suf_each_pref[s.size() - i];
int RR = min((int)s.size() - i, min_suf_each_pref[s.size() - i] + i - 1);
h.push_front({LL, RR});
TT.push_front(LL, RR);
}
else
{
int LL = kek_suf_each_pref[s.size() - i];
int RR = min((int)s.size() - i, kek_suf_each_pref[s.size() - i] + i - 1);
h.push_front({LL, RR});
TT.push_front(LL, RR);
}
}
vector<int> s2;
for (int j = 0; j < h.size(); j++)
{
for (int k = h[j].l; k <= h[j].r; k++)
{
s2.push_back(s[k]);
if (s2.size() == s.size())
break;
}
if (s2.size() == s.size())
break;
}
return s2;
}
vector<int> op(vector<int> s)
{
for (int i = 1; i < s.size(); i++)
{
vector<int> h = {INF};
for (int j = 0; j + i <= s.size(); j++)
{
vector<int> t;
for (int k = 0; k < i; k++)
t.push_back(s[j + k]);
h = min(h, t);
}
for (int j = 0; j < i; j++)
{
s[s.size() - i + j] = h[j];
}
}
return s;
}
signed main()
{
ios_base::sync_with_stdio(false);
cin.tie(0);
deg1[0] = 1;
for (int i = 1; i < N; i++)
deg1[i] = deg1[i - 1] * M1 % MOD1;
for (int i = 3; i < N; i++)
H[i] = (((i - 1) & (i - 2)) == 0) + H[i - 1];
int n;
cin >> n;
vector<vector<int> > b(7, vector<int>(n));
for (int i = 0; i < n; i++) cin >> b[0][i];
// b[1] - 2
// b[2] - 5
// b[3] - 20
// b[4] - 230
// b[5] - 26795
// b[6] - stable
for (int i = 1; i < 6; i++) b[i] = fast_op(b[i - 1]);
int pos = 0;
while (pos + 1 < n && b[0][pos + 1] <= b[0][pos]) pos++;
for (int i = 0; i < pos; i++) b[6][i] = b[0][i];
for (int i = pos; i < n; i++) b[6][i] = b[0][pos];
int q;
cin >> q;
while (q--)
{
int i, j;
cin >> i >> j;
j--;
i = min(i, 6);
cout << b[i][j] << "\n";
}
return 0;
}
|
1622
|
A
|
Construct a Rectangle
|
There are three sticks with integer lengths $l_1, l_2$ and $l_3$.
You are asked to break exactly one of them into two pieces in such a way that:
- both pieces have positive (strictly greater than $0$) \textbf{integer} length;
- the total length of the pieces is equal to the original length of the stick;
- it's possible to construct a rectangle from the resulting four sticks such that each stick is used as exactly one of its sides.
A square is also considered a rectangle.
Determine if it's possible to do that.
|
First, the condition about being able to construct a rectangle is the same as having two pairs of sticks of equal length. Let's fix the stick that we are going to break into two parts. Now there are two cases. The remaining two sticks can be equal. In that case, you can break the chosen stick into two equal parts to make the second equal pair of sticks. Note, however, that the stick should have an even length, because otherwise the lengths of the resulting parts won't be integer. The remaining two sticks can be different. In that case, the chosen stick should have length equal to their total length, because the only way to make two pairs of equal sticks is to produce the same two sticks as the remaining ones. Overall complexity: $O(1)$ per testcase.
|
[
"geometry",
"math"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
int main() {
int t;
cin >> t;
while (t--) {
vector<int> l(3);
for (int i = 0; i < 3; ++i)
cin >> l[i];
bool ok = false;
for (int i = 0; i < 3; ++i)
ok |= l[i] == l[(i + 1) % 3] + l[(i + 2) % 3];
for (int i = 0; i < 3; ++i) if (l[i] % 2 == 0)
ok |= l[(i + 1) % 3] == l[(i + 2) % 3];
cout << (ok ? "YES\n" : "NO\n");
}
}
|
1622
|
B
|
Berland Music
|
Berland Music is a music streaming service built specifically to support Berland local artists. Its developers are currently working on a song recommendation module.
So imagine Monocarp got recommended $n$ songs, numbered from $1$ to $n$. The $i$-th song had its predicted rating equal to $p_i$, where $1 \le p_i \le n$ and every integer from $1$ to $n$ appears exactly once. In other words, $p$ is a permutation.
After listening to each of them, Monocarp pressed either a like or a dislike button. Let his vote sequence be represented with a string $s$, such that $s_i=0$ means that he disliked the $i$-th song, and $s_i=1$ means that he liked it.
Now the service has to re-evaluate the song ratings in such a way that:
- the new ratings $q_1, q_2, \dots, q_n$ still form a permutation ($1 \le q_i \le n$; each integer from $1$ to $n$ appears exactly once);
- every song that Monocarp liked should have a greater rating than every song that Monocarp disliked (formally, for all $i, j$ such that $s_i=1$ and $s_j=0$, $q_i>q_j$ should hold).
Among all valid permutations $q$ find the one that has the smallest value of $\sum\limits_{i=1}^n |p_i-q_i|$, where $|x|$ is an absolute value of $x$.
Print the permutation $q_1, q_2, \dots, q_n$. If there are multiple answers, you can print any of them.
|
Since we know that every disliked song should have a lower rating than every liked song, we actually know which new ratings should belong to the disliked songs and which should belong to the liked ones. The disliked songs take ratings from $1$ to the number of zeros in $s$. The liked songs take ratings from the number of zeros in $s$ plus $1$ to $n$. Thus, we have two independent tasks to solve. Let the disliked songs have old ratings $d_1, d_2, \dots, d_k$. Their new ratings should be $1, 2, \dots, k$. We can show that if we sort the array $d$ (denote the sorted array by $d'$), then $|d'_1 - 1| + |d'_2 - 2| + \dots + |d'_k - k|$ will be the lowest possible. The general way to prove it is to show that if the order has any inversions, we can always fix the leftmost of them (swap two adjacent values), and the cost doesn't increase. So the solution can be to sort the triples $(s_i, p_i, i)$ and restore $q$ from the order of $i$ in them. Overall complexity: $O(n \log n)$ per testcase.
|
[
"data structures",
"greedy",
"math",
"sortings"
] | 1,000
|
for _ in range(int(input())):
n = int(input())
p = [int(x) for x in input().split()]
s = input()
l = sorted([[s[i], p[i], i] for i in range(n)])
q = [-1 for i in range(n)]
for i in range(n):
q[l[i][2]] = i + 1
print(*q)
|
1622
|
C
|
Set or Decrease
|
You are given an integer array $a_1, a_2, \dots, a_n$ and integer $k$.
In one step you can
- either choose some index $i$ and decrease $a_i$ by one (make $a_i = a_i - 1$);
- or choose two indices $i$ and $j$ and set $a_i$ equal to $a_j$ (make $a_i = a_j$).
What is the minimum number of steps you need to make the sum of array $\sum\limits_{i=1}^{n}{a_i} \le k$? (You are allowed to make values of array negative).
|
First, we can prove that the optimal way to perform operations is: first, decrease the minimum value several (maybe zero) times, then take several (maybe zero) maximums and make them equal to the minimum value. The proof consists of several steps. Prove that we first make decreases and only then sets: if some $a_i = a_i - 1$ is done after some $a_j = a_k$, then, if there was no modification of $a_i$ before it, you can just move $a_i = a_i - 1$ earlier; otherwise, there was $a_i = a_k$, and you can replace (... $a_i = a_k$, $a_i = a_i - 1$ ...) with (... $a_k = a_k - 1$, $a_i = a_k$ ...). This shows how to move decrease operations before set operations. Prove that it's optimal to decrease only one element $a_i$: instead of decreasing $a_i$ by $x$ and $a_j$ by $y$ (where $a_i \le a_j$), we can decrease $a_i$ by $x + y$ and replace all $a_k = a_j$ with $a_k = a_i$. It's optimal to decrease the minimum element; this follows from the proof of the previous step. If we make $y$ set operations, it's optimal to set the minimum value to the $y$ maximum elements; this should be obvious. To use this strategy, we first sort array $a$ in non-decreasing order. Then we decrease $a_1$ by $x$ and set the $y$ elements $a_{n-y+1}, \dots, a_n$ to it. The question is: how to minimize the value of $x + y$? Note that $0 \le y < n$ (since setting the same position multiple times makes no sense). Let's iterate over all possible values of $y$ and determine the minimum $x$ needed. 
The resulting array will consist of $(a_1 - x), a_2, a_3, \dots, a_{n - y}, (a_1 - x), (a_1 - x), \dots, (a_1 - x)$. Let's say that $P(i) = a_1 + a_2 + \dots + a_i$ (all $P(i)$ can be precomputed beforehand). Then the sum of the array will become $(a_1 - x)(y + 1) + P(n - y) - a_1$, and we need $(a_1 - x)(y + 1) + P(n - y) - a_1 \le k$ $(a_1 - x)(y + 1) \le k - P(n - y) + a_1$ $a_1 - x \le \left\lfloor \frac{k - P(n - y) + a_1}{y + 1} \right\rfloor$ $x = a_1 - \left\lfloor \frac{k - P(n - y) + a_1}{y + 1} \right\rfloor$ Using the formula above, we can calculate for each $y$ ($0 \le y < n$) the minimum $x$ required. But be careful: the value $k - P(n - y) + a_1$ may be negative, and, usually in programming languages, integer division $\frac{c}{d}$ for negative $c$ returns $\left\lceil \frac{c}{d} \right\rceil$ instead of $\left\lfloor \frac{c}{d} \right\rfloor$. There is an alternative solution: note that if $\sum{a_i} \le k$, then $a_1 \le \frac{k}{n}$. Note that if $a_1 \ge \frac{k}{n}$, then the resulting value of $a_1 - x$ lies in $\frac{k}{n} - n < a_1 - x \le \frac{k}{n}$, so there are at most $n$ possible values for $x$. So, you can iterate over all possible $x$ and for each $x$ calculate the minimum required $y$ either with binary search or two pointers.
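The derivation above maps almost directly to code. Here is a Python sketch of the first approach (in Python, `//` already rounds toward negative infinity, so the negative-dividend caveat disappears; the function name is illustrative):

```python
def min_steps(a, k):
    # Editorial approach: sort, prefix sums, try every count y of "set"
    # operations; the number x of decreases is forced by the inequality.
    a = sorted(a)
    n = len(a)
    pref = [0]
    for v in a:
        pref.append(pref[-1] + v)
    best = None
    for y in range(n):
        # need (a[0] - x) * (y + 1) + pref[n - y] - a[0] <= k
        x = a[0] - (k - pref[n - y] + a[0]) // (y + 1)  # // floors in Python
        cost = y + max(0, x)
        best = cost if best is None else min(best, cost)
    return best

# min_steps([1, 3, 4, 3], 10) -> 1 (decrease the minimum once: sum 11 -> 10)
```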
|
[
"binary search",
"brute force",
"greedy",
"sortings"
] | 1,600
|
#include<bits/stdc++.h>
using namespace std;
#define fore(i, l, r) for(int i = int(l); i < int(r); i++)
#define sz(a) int((a).size())
typedef long long li;
const int INF = int(1e9);
const li INF64 = li(1e18);
int n;
li k;
vector<int> a;
inline bool read() {
if(!(cin >> n >> k))
return false;
a.resize(n);
fore (i, 0, n)
cin >> a[i];
return true;
}
li accurateFloor(li a, li b) {
li val = a / b;
while (val * b > a)
val--;
return val;
}
inline void solve() {
sort(a.begin(), a.end());
vector<li> pSum(n + 1, 0);
fore (i, 0, n)
pSum[i + 1] = pSum[i] + a[i];
li ans = 1e18;
fore (y, 0, n) {
//(a[0] - x)(y + 1) + (pSum[n - y] - a[0]) <= k
//a[0] - x <= (k - pSum[n - y] + a[0]) / (y + 1)
//x == a[0] - (k - pSum[n - y] + a[0]) / (y + 1)
li x = a[0] - accurateFloor(k - pSum[n - y] + a[0], y + 1);
ans = min(ans, y + max(0LL, x));
}
cout << ans << endl;
}
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
#endif
ios_base::sync_with_stdio(false);
cin.tie(0), cout.tie(0);
int t; cin >> t;
while (t--) {
read();
solve();
}
return 0;
}
|
1622
|
D
|
Shuffle
|
You are given a binary string (i. e. a string consisting of characters 0 and/or 1) $s$ of length $n$. You can perform the following operation with the string $s$ \textbf{at most once}: choose a substring (a contiguous subsequence) of $s$ having \textbf{exactly} $k$ characters 1 in it, and shuffle it (reorder the characters in the substring as you wish).
Calculate the number of different strings which can be obtained from $s$ by performing this operation at most once.
|
We could iterate on the substrings we want to shuffle and try to count the number of ways to reorder their characters, but, unfortunately, there's no easy way to take care of the fact that shuffling different substrings may yield the same result. Instead, we will iterate on the first and the last character that are changed. Let their positions be $i$ and $j$. First of all, let's check that they can belong to the same substring we can shuffle - it is the case if the string contains at least $k$ characters 1, and the substring from the $i$-th character to the $j$-th character contains at most $k$ characters 1. Then, after we've fixed the first and the last characters that are changed, we have to calculate the number of ways to shuffle the characters between them (including them) so that both of these characters are changed. Let's calculate $c_0$ and $c_1$ - the number of characters 0 and 1 respectively in the substring. Then, we need to modify these two values: for example, if the $i$-th character is 0, then since it is the first changed character, it should become 1, so we need to put 1 there and decrease $c_1$ by one. The same for the $j$-th character. Let $c'_0$ and $c'_1$ be the values of $c_0$ and $c_1$ after we take care of the fact that the $i$-th and the $j$-th characters are fixed. The remaining characters can be in any order, so the number of ways to arrange them is ${{c'_0 + c'_1}\choose{c'_0}}$. We can add up these values for all pairs ($i, j$) such that we can shuffle a substring containing these two characters. We won't be counting any string twice because we ensure that $i$ is the first changed character and $j$ is the last changed character. Don't forget to add $1$ to the answer - the string we didn't count is the original one. This solution works in $O(n^2)$, but the problem is solvable in $O(n)$.
|
[
"combinatorics",
"math",
"two pointers"
] | 2,000
|
#include <bits/stdc++.h>
using namespace std;
const int MOD = 998244353;
int add(int x, int y)
{
x += y;
while(x >= MOD) x -= MOD;
while(x < 0) x += MOD;
return x;
}
int main()
{
int n;
cin >> n;
int k;
cin >> k;
string s;
cin >> s;
vector<int> p(n + 1);
for(int i = 0; i < n; i++) p[i + 1] = p[i] + (s[i] - '0');
vector<vector<int>> C(n + 1);
for(int i = 0; i <= n; i++)
{
C[i].resize(i + 1);
C[i][0] = C[i][i] = 1;
for(int j = 1; j < i; j++)
C[i][j] = add(C[i - 1][j], C[i - 1][j - 1]);
}
int ans = 1;
for(int i = 0; i < n; i++)
for(int j = i + 1; j < n; j++)
{
int cnt = j + 1 - i;
int cnt1 = p[j + 1] - p[i];
if(cnt1 > k || p[n] < k) continue;
cnt -= 2;
if(s[i] == '0') cnt1--;
if(s[j] == '0') cnt1--;
if(cnt1 <= cnt && cnt1 >= 0 && cnt >= 0)
ans = add(ans, C[cnt][cnt1]);
}
cout << ans << endl;
}
|
1622
|
E
|
Math Test
|
Petya is a math teacher. $n$ of his students have written a test consisting of $m$ questions. For each student, it is known which questions he has answered correctly and which he has not.
If the student answers the $j$-th question correctly, he gets $p_j$ points (otherwise, he gets $0$ points). Moreover, the points for the questions are distributed in such a way that the array $p$ is a permutation of numbers from $1$ to $m$.
For the $i$-th student, Petya knows that he expects to get $x_i$ points for the test. Petya wonders how unexpected the results could be. Petya believes that the surprise value of the results for students is equal to $\sum\limits_{i=1}^{n} |x_i - r_i|$, where $r_i$ is the number of points that the $i$-th student has got for the test.
Your task is to help Petya find such a permutation $p$ for which the surprise value of the results is maximum possible. If there are multiple answers, print any of them.
|
Note that there are only two ways to resolve the absolute value in the expression $|x_i - r_i|$: $x_i - r_i$ or $r_i - x_i$. The value of $n$ is small enough that we can iterate over all $2^n$ options and choose the one for which the sum is maximum. For each student, let's fix the sign with which their total points will contribute to the answer; then $x_i$ will contribute with the opposite sign. Now, for the question $j$ we can calculate $val_j$ - the coefficient with which $p_j$ will contribute to the answer. It remains to choose such a permutation $p$ that the sum $\sum\limits_{j=1}^m p_j val_j$ is the maximum possible. From here we can see that if $val_j < val_i$ (for some $i$ and $j$), then $p_j < p_i$ must hold, otherwise we can swap $p_j$ and $p_i$, and the answer will increase. This means that we can sort all questions in ascending order of the value in the $val$ array, and assign the value $x$ in the array $p$ to the $x$-th question in that order. For some of the $2^n$ options, the permutations we found may be illegal, because it can happen that we consider the case where some $|x_i - r_i|$ evaluates as $(x_i - r_i)$, but in the best permutation we found for that option, it evaluates as $(r_i - x_i)$. We can just ignore this, because it will never be the case for the option giving the highest possible surprise value: if this happened for some option of choosing the signs of $r_i$, then, by flipping the signs for the students whose conditions are not met in the optimal permutation, we would get a combination of signs that yields a higher surprise value.
|
[
"bitmasks",
"brute force",
"greedy"
] | 2,200
|
#include <bits/stdc++.h>
using namespace std;
#define forn(i, n) for (int i = 0; i < int(n); ++i)
int main() {
int t;
scanf("%d", &t);
while (t--) {
int n, m;
scanf("%d%d", &n, &m);
vector<int> x(n);
forn(i, n) scanf("%d", &x[i]);
vector<vector<int>> a(n, vector<int>(m));
forn(i, n) forn(j, m) scanf("%1d", &a[i][j]);
int ans = -1;
vector<int> best;
forn(mask, 1 << n) {
vector<int> val(m);
forn(i, n) forn(j, m) if (a[i][j]) val[j] += ((mask >> i) & 1) ? +1 : -1;
int res = 0;
forn(i, n) res += ((mask >> i) & 1) ? -x[i] : x[i];
vector<int> p(m);
iota(p.begin(), p.end(), 0);
sort(p.begin(), p.end(), [&](int x, int y) { return val[x] < val[y]; });
forn(i, m) res += val[p[i]] * (i + 1);
if (res > ans) ans = res, best = p;
}
vector<int> ansPerm(m);
forn(i, m) ansPerm[best[i]] = i;
forn(i, m) printf("%d ", ansPerm[i] + 1);
puts("");
}
}
|
1622
|
F
|
Quadratic Set
|
Let's call a set of positive integers $a_1, a_2, \dots, a_k$ quadratic if the product of the factorials of its elements is a square of an integer, i. e. $\prod\limits_{i=1}^{k} a_i! = m^2$, for some integer $m$.
You are given a positive integer $n$.
Your task is to find a quadratic subset of a set $1, 2, \dots, n$ of maximum size. If there are multiple answers, print any of them.
|
A good start to solve the problem would be to check the answers for small values of $n$. One can see that the answers (the sizes of the maximum subsets) are not much different from $n$ itself, namely not less than $n-3$. Let's try to prove that this is true for all $n$. Consider the case when $n$ is even. Let $n=2k$ and look at the product of all factorials from $1$ to $n$: $\prod\limits_{i=1}^{2k} i! = \prod\limits_{i=1}^{k} (2i-1)! (2i)! = \prod\limits_{i=1}^{k} (2i-1)!^2 2i = (\prod\limits_{i=1}^{k} (2i-1)!)^2 \prod\limits_{i=1}^{k} 2i = (\prod\limits_{i=1}^{k} (2i-1)!)^2 2^k k!$ From here we can see that for even $k$ the answer is at least $n-1$, because we can delete $k!$ and the product of the remaining factorials is the square of an integer ($2^k$ is itself a square when $k$ is even); for odd $k$ the answer is at least $n-2$, because we can delete $2!$ and $k!$. It remains to prove that the answer is at least $n-3$ for odd $n$. This is easy, because the answer for $n$ is not less than the answer for $n-1$: we can delete $n!$ and solve the task for $n-1$. Moreover, it can be seen from the previous arguments that an answer of $n-3$ can only occur for $n \equiv 3 \pmod 4$, and in this case we already know that one of the correct answers is to remove the factorials of $2$, $\frac{n-1}{2}$ and $n$. It remains to learn how to check whether it is possible to remove $0$, $1$ or $2$ numbers so that the remaining product of factorials is the square of an integer. To do this, we can use XOR hashes. Let's assign each prime number a random $64$-bit value. For a composite number, the hash is the XOR of the hashes of all prime divisors in its factorization (counted with multiplicity). Thus, if some prime occurs in the number an even number of times, it does not affect the value of the hash, which is exactly what we need: the hash of the product of two numbers equals the XOR of their hashes, and the hash of a perfect square is $0$. Let's denote the hash function as $H$.
Using the above, let's calculate $H(i)$ for all $i$ from $1$ to $n$, as well as $H(i!)$ for all $i$ from $1$ to $n$; this is easy to do, because $H(i!) = H((i-1)!) \oplus H(i)$. We will also store a map $H(i!) \rightarrow i$. Let's calculate the hash $H(1!2! \cdots n!)$ and denote it as $fp$. It remains to consider the following cases: if $fp = 0$, then the current product is already the square of an integer; for an answer of size $n-1$, we have to check that there exists an $i$ such that $H(i!) \oplus fp = 0$, and to find such an $i$ we check whether the map contains $fp$; for an answer of size $n-2$, we have to check that there are $i$ and $j$ such that $H(i!) \oplus H(j!) \oplus fp = 0$, so we iterate over $i$ and check whether the map contains $H(i!) \oplus fp$; otherwise, the answer is $n-3$, and one valid answer takes all numbers except $2, \frac{n-1}{2}, n$.
|
[
"constructive algorithms",
"hashing",
"math",
"number theory"
] | 2,900
|
#include <bits/stdc++.h>
using namespace std;
const int N = 1000 * 1000 + 13;
using li = unsigned long long;
int pr[N];
li hs[N], f[N];
unordered_map<li, int> rf;
int main() {
int n;
scanf("%d", &n);
mt19937_64 rnd(chrono::steady_clock::now().time_since_epoch().count());
iota(pr, pr + N, 0);
for (int i = 2; i <= n; ++i) if (pr[i] == i) {
for (int j = i; j <= n; j += i) pr[j] = min(pr[j], i);
hs[i] = rnd();
}
for (int i = 2; i <= n; ++i) {
f[i] = f[i - 1];
int x = i;
while (x != 1) {
f[i] ^= hs[pr[x]];
x /= pr[x];
}
rf[f[i]] = i;
}
auto solve = [&] (int n) -> vector<int> {
li fp = 0;
for (int i = 2; i <= n; ++i) fp ^= f[i];
if (fp == 0) return {};
if (rf.count(fp)) return {rf[fp]};
for (int i = 2; i <= n; ++i) {
li x = f[i] ^ fp;
if (rf.count(x) && rf[x] != i) return {rf[x], i};
}
return {2, n / 2, n};
};
auto ans = solve(n);
printf("%d\n", n - (int)ans.size());
for (int i = 1; i <= n; ++i)
if (find(ans.begin(), ans.end(), i) == ans.end()) printf("%d ", i);
puts("");
}
|
1623
|
A
|
Robot Cleaner
|
A robot cleaner is placed on the floor of a rectangle room, surrounded by walls. The floor consists of $n$ rows and $m$ columns. The rows of the floor are numbered from $1$ to $n$ from top to bottom, and columns of the floor are numbered from $1$ to $m$ from left to right. The cell on the intersection of the $r$-th row and the $c$-th column is denoted as $(r,c)$. The initial position of the robot is $(r_b, c_b)$.
In one second, the robot moves by $dr$ rows and $dc$ columns, that is, after one second, the robot moves from the cell $(r, c)$ to $(r + dr, c + dc)$. Initially $dr = 1$, $dc = 1$. If there is a vertical wall (the left or the right walls) in the movement direction, $dc$ is reflected before the movement, so the new value of $dc$ is $-dc$. And if there is a horizontal wall (the upper or lower walls), $dr$ is reflected before the movement, so the new value of $dr$ is $-dr$.
Each second (including the moment before the robot starts moving), the robot cleans every cell lying in the same row \textbf{or} the same column as its position. There is only one dirty cell at $(r_d, c_d)$. The job of the robot is to clean that dirty cell.
\begin{center}
Illustration for the first example. The blue arc is the robot. The red star is the target dirty cell. Each second the robot cleans a row and a column, denoted by yellow stripes.
\end{center}
Given the floor size $n$ and $m$, the robot's initial position $(r_b, c_b)$ and the dirty cell's position $(r_d, c_d)$, find the time for the robot to do its job.
|
Let's consider the 1-D version of this problem: there are $n$ cells lying in a row, the robot is at the $x$-th cell, and the dirty cell is at the $y$-th cell. Each second, the robot cleans the cell at its position. The robot initially moves by 1 cell to the right each second, and if there is no cell in the movement direction, its direction is reflected. What is the minimum time for the robot to clean the dirty cell? There are two cases to consider. If $x \le y$, then the answer is $y - x$: the robot just goes straight to the dirty cell. Otherwise, if $x > y$, the robot needs to go to the right endpoint first and then come back to the dirty cell. Going to the right endpoint takes $n - x$ seconds, and going from there to the dirty cell takes $n - y$ seconds, so the answer in this case is $2 \cdot n - x - y$. Going back to the original problem, we can solve it by splitting it into two 1-D versions, projecting the positions of the robot and the dirty cell onto the rows and onto the columns. The dirty cell gets cleaned as soon as one of the two projections of the robot reaches the corresponding projection of the dirty cell, so the answer is the minimum of the answers of the two sub-problems.
|
[
"brute force",
"implementation",
"math"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
int main() {
int ntest;
cin >> ntest;
while (ntest--) {
int n, m, rb, cb, rd, cd;
cin >> n >> m >> rb >> cb >> rd >> cd;
cout << min(
rb <= rd ? rd - rb : 2 * n - rb - rd,
cb <= cd ? cd - cb : 2 * m - cb - cd
) << '\n';
}
return 0;
}
|
1623
|
B
|
Game on Ranges
|
Alice and Bob play the following game. Alice has a set $S$ of disjoint ranges of integers, initially containing only one range $[1, n]$. In one turn, Alice picks a range $[l, r]$ from the set $S$ and asks Bob to pick a number in the range. Bob chooses a number $d$ ($l \le d \le r$). Then Alice removes $[l, r]$ from $S$ and puts into the set $S$ the range $[l, d - 1]$ (if $l \le d - 1$) and the range $[d + 1, r]$ (if $d + 1 \le r$). The game ends when the set $S$ is empty. We can show that the number of turns in each game is exactly $n$.
After playing the game, Alice remembers all the ranges $[l, r]$ she picked from the set $S$, but Bob does not remember any of the numbers that he picked. But Bob is smart, and he knows he can find out his numbers $d$ from Alice's ranges, and so he asks you for help with your programming skill.
Given the list of ranges that Alice has picked ($[l, r]$), for each range, help Bob find the number $d$ that Bob has picked.
We can show that there is always a unique way for Bob to choose his number for a list of valid ranges picked by Alice.
|
If the length of a range $[l, r]$ is 1 (that is, $l = r$), then $d = l = r$. Otherwise, if Bob picks a number $d$, then Alice has to put the ranges $[l, d - 1]$ and $[d + 1, r]$ (if they exist) back into the set. Thus, there will be a moment when Alice picks the range $[l, d - 1]$ (if it exists), and another moment when she picks the range $[d + 1, r]$ (if it exists). Using this observation, for each range $[l, r]$ we can iterate the number $d$ from $l$ to $r$ and check whether both ranges $[l, d - 1]$ (if $d > l$) and $[d + 1, r]$ (if $d < r$) appear among Alice's picked ranges, or in other words, whether they are given in the input. For the check, we can either use a set data structure supported in most programming languages or simply use a 2-dimensional array marking the picked ranges. The time complexity is therefore $O(n^2)$.
|
[
"brute force",
"dfs and similar",
"implementation",
"sortings"
] | 1,100
|
#include <bits/stdc++.h>
using namespace std;
int main() {
cin.tie(0)->sync_with_stdio(0);
int ntest;
cin >> ntest;
while (ntest--) {
int n;
cin >> n;
vector mark(n + 1, vector<bool>(n + 1));
vector<int> l(n), r(n);
for (int i = 0; i < n; ++i) {
cin >> l[i] >> r[i];
mark[l[i]][r[i]] = true;
}
for (int i = 0; i < n; ++i) {
for (int d = l[i]; d <= r[i]; ++d) {
if ((d == l[i] or mark[l[i]][d - 1]) and (d == r[i] or mark[d + 1][r[i]])) {
cout << l[i] << ' ' << r[i] << ' ' << d << '\n';
break;
}
}
}
}
return 0;
}
|
1623
|
C
|
Balanced Stone Heaps
|
There are $n$ heaps of stone. The $i$-th heap has $h_i$ stones. You want to change the number of stones in the heap by performing the following process once:
- You go through the heaps from the $3$-rd heap to the $n$-th heap, in this order.
- Let $i$ be the number of the current heap.
- You can choose a number $d$ ($0 \le 3 \cdot d \le h_i$), move $d$ stones from the $i$-th heap to the $(i - 1)$-th heap, and $2 \cdot d$ stones from the $i$-th heap to the $(i - 2)$-th heap.
- So after that $h_i$ is decreased by $3 \cdot d$, $h_{i - 1}$ is increased by $d$, and $h_{i - 2}$ is increased by $2 \cdot d$.
- You can choose different or same $d$ for different operations. Some heaps may become empty, but they still count as heaps.
What is the maximum number of stones in the smallest heap after the process?
|
The answer can be binary searched: we can find the biggest number $x$ such that we can make every heap have at least $x$ stones. So our job is to check whether we can satisfy this condition for a given number $x$. Let's first consider a reversed problem: we go from $1$ to $n - 2$, pick a number $d$ ($0 \le 3\cdot d \le h_i$) and move $d$ and $2\cdot d$ stones from the $i$-th heap to the $(i+1)$-th and $(i+2)$-th heaps respectively. In this problem, we can move the stones greedily: since $x$ is the minimum required number of stones, if at some moment we have $h_i < x$, then the condition for $x$ can no longer be satisfied. Otherwise, it is always best to move as many stones as we can, that is, choose $d = \left \lfloor \frac {h_i - x} 3 \right \rfloor$ and move $d$ and $2\cdot d$ stones to the $(i+1)$-th and $(i+2)$-th heaps respectively. In the end, if every heap has at least $x$ stones, we conclude that this process can make all heaps hold not less than $x$ stones. Going back to our original problem, it seems like we can solve it by doing the process in the reversed order, as discussed above. But there is a catch: the number of stones moved out of a heap must not exceed the number of stones in the original heap! So, if $h_i$ is the original heap size and $h'_i$ is the current modified heap size, then the number of stones we should move at each step is $d = \left \lfloor \frac {\min \{ h_i, h'_i - x \}} 3 \right\rfloor$.
|
[
"binary search",
"greedy"
] | 1,600
|
#include <bits/stdc++.h>
using namespace std;
const int maxn = 201010;
int n;
int h[maxn];
bool check(int x) {
vector<int> cur_h(h, h + n);
for (int i = n - 1; i >= 2; --i) {
if (cur_h[i] < x) return false;
int d = min(h[i], cur_h[i] - x) / 3;
cur_h[i - 1] += d;
cur_h[i - 2] += 2 * d;
// we don't need to fix cur_h[i] since we are not going back
}
return cur_h[0] >= x and cur_h[1] >= x;
}
int main() {
cin.tie(0)->sync_with_stdio(0);
int ntest;
cin >> ntest;
while (ntest--) {
cin >> n;
for (int i = 0; i < n; ++i) cin >> h[i];
int l = 0, r = *max_element(h, h + n);
while (l < r) {
int mid = l + (r - l + 1) / 2;
if (check(mid)) l = mid;
else r = mid - 1;
}
cout << l << '\n';
}
return 0;
}
|
1623
|
D
|
Robot Cleaner Revisit
|
The statement of this problem shares a lot with problem A. The differences are that in this problem, the probability is introduced, and the constraint is different.
A robot cleaner is placed on the floor of a rectangle room, surrounded by walls. The floor consists of $n$ rows and $m$ columns. The rows of the floor are numbered from $1$ to $n$ from top to bottom, and columns of the floor are numbered from $1$ to $m$ from left to right. The cell on the intersection of the $r$-th row and the $c$-th column is denoted as $(r,c)$. The initial position of the robot is $(r_b, c_b)$.
In one second, the robot moves by $dr$ rows and $dc$ columns, that is, after one second, the robot moves from the cell $(r, c)$ to $(r + dr, c + dc)$. Initially $dr = 1$, $dc = 1$. If there is a vertical wall (the left or the right walls) in the movement direction, $dc$ is reflected before the movement, so the new value of $dc$ is $-dc$. And if there is a horizontal wall (the upper or lower walls), $dr$ is reflected before the movement, so the new value of $dr$ is $-dr$.
Each second (including the moment before the robot starts moving), the robot cleans every cell lying in the same row \textbf{or} the same column as its position. There is only one dirty cell at $(r_d, c_d)$. The job of the robot is to clean that dirty cell.
After a lot of testings in problem A, the robot is now broken. It cleans the floor as described above, but at each second the cleaning operation is performed with probability $\frac p {100}$ only, and not performed with probability $1 - \frac p {100}$. The cleaning or not cleaning outcomes are independent each second.
Given the floor size $n$ and $m$, the robot's initial position $(r_b, c_b)$ and the dirty cell's position $(r_d, c_d)$, find the \textbf{expected time} for the robot to do its job.
It can be shown that the answer can be expressed as an irreducible fraction $\frac x y$, where $x$ and $y$ are integers and $y \not \equiv 0 \pmod{10^9 + 7} $. Output the integer equal to $x \cdot y^{-1} \bmod (10^9 + 7)$. In other words, output such an integer $a$ that $0 \le a < 10^9 + 7$ and $a \cdot y \equiv x \pmod {10^9 + 7}$.
|
In order to see how the solution actually works, let's solve this problem the math way! You can skip to the "In general, ..." part if you don't really care about these concrete examples. First of all, let $\overline{p}$ be the probability of not cleaning, that is, the probability that the robot will not be able to clean at a given second. So $\overline{p} = 1 - \frac p {100}$. Let's revisit the first example. In this example, the robot has 2 states: when it is at position $(1, 1)$, and when it is at $(2, 2)$. Let $x$ be the answer to this problem when the robot starts at $(1, 1)$, and $y$ be the answer when the robot starts at $(2, 2)$. Consider the first state. If the robot cleans, it spends $0$ more seconds. Otherwise, it will spend $1 + y$ seconds. Therefore, we have the equation $x = \overline{p} (1 + y)$. Similarly, we also have the equation $y = \overline{p}(1 + x)$, since these two states are symmetrical. Substituting $y = \overline{p}(1 + x)$ into $x = \overline{p}(1 + y)$, we get $\begin{array}{crcl} & x & = & \overline{p}(1 + \overline{p}(1 + x)) \end{array}$ By substituting the value of $\overline p$, we can find $x$. Let's consider the other example. In this example, the robot has 4 states: when it is at $(1, 1)$, when it is at $(2, 2)$ going to the right, when it is at $(1, 3)$, and when it is at $(2, 2)$ going to the left. Let the answers for these states be $x_1, x_2, x_3,$ and $x_4$.
Similar to the previous example, we can write down the equations: $\begin{cases} x_1 = 1 + x_2 & \text{(because at }(1, 1)\text{ the robot cannot clean the dirty cell)} \\ x_2 = \overline p (1 + x_3) \\ x_3 = \overline p (1 + x_4) \\ x_4 = \overline p (1 + x_1) \end{cases}$ Substituting these equations back to back, we obtain: $x_1 = 1 + \overline p \left( 1 + \overline p \left( 1 + \overline p \left( 1 + x_1 \right) \right) \right)$ And again, by substituting the value of $\overline p$, the answer can be found easily. In general, the path that the robot takes forms a cycle containing the initial position. If we call $x$ the answer to the problem at the initial position, the equation we need to solve has the following form: $x = a_1 \left( 1 + a_2 \left( 1 + a_3 \left( 1 + \ldots a_k \left(1 + x \right ) \ldots \right) \right) \right)$ where $k$ is the number of states in the cycle, and $a_i$ is a coefficient: $a_i = \overline p$ if, at the $i$-th state in the cycle, the robot has an opportunity to clean the dirty cell, and $a_i = 1$ otherwise. The equation is easily solved by expanding the brackets from the innermost to the outermost, i.e. by going through the cycle in reverse order. After the expansion, we get a simple linear equation of the form $x = u + vx$, whose solution is $x = \frac u {1 - v}$. To construct the equation, we can first find the cycle by either DFS or BFS and go through it in reverse order for the expansion. Alternatively, we can do the reverse simulation while maintaining the coefficients $u$ and $v$ right away. Even better, we can forget about finding the cycle and simply iterate exactly $4(n - 1)(m - 1)$ times (not $4nm$ though), since $4(n - 1)(m - 1)$ is always a multiple of the cycle length $k$. The time complexity of this solution is $O(nm)$.
|
[
"implementation",
"math",
"probabilities"
] | 2,300
|
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
// I swear that I type this every time, so it doesn't count as a template :))
#define defop(type, op) \
inline friend type operator op (type u, const type& v) { return u op ##= v; } \
type& operator op##=(const type& o)
template<int mod>
struct modint {
int x;
// note that there is no correction, simply for speed
modint(int xx = 0): x(xx) {}
modint(ll xx): x(int(xx % mod)) {}
defop(modint, +) {
if ((x += o.x) >= mod) x -= mod;
return *this;
}
defop(modint, -) {
if ((x -= o.x) < 0) x += mod;
return *this;
}
defop(modint, * ) { return *this = modint(1ll * x * o.x); }
modint pow(ll exp) const {
modint ans = 1, base = *this;
for (; exp > 0; exp >>= 1, base *= base)
if (exp & 1) ans *= base;
return ans;
}
defop(modint, /) { return *this *= o.pow(mod - 2); }
};
using mint = modint<(int)1e9 + 7>;
int main() {
cin.tie(0)->sync_with_stdio(0);
int ntest;
cin >> ntest;
while (ntest--) {
int n, m, rb, cb, rd, cd;
mint p;
cin >> n >> m >> rb >> cb >> rd >> cd >> p.x;
p = 1 - p / 100; // probability of not cleaning
int dr = -1, dc = -1;
mint u = 0, v = 1;
for (int i = 0; i < 4 * (n - 1) * (m - 1); ++i) {
if (!(1 <= rb + dr and rb + dr <= n)) dr = -dr;
if (!(1 <= cb + dc and cb + dc <= m)) dc = -dc;
rb += dr;
cb += dc;
u += 1;
if (rb == rd or cb == cd) {
u *= p;
v *= p;
}
}
cout << (u / (1 - v)).x << '\n';
}
return 0;
}
|
1623
|
E
|
Middle Duplication
|
A binary tree of $n$ nodes is given. Nodes of the tree are numbered from $1$ to $n$ and the root is the node $1$. Each node can have no child, only one left child, only one right child, or both children. For convenience, let's denote $l_u$ and $r_u$ as the left and the right child of the node $u$ respectively, $l_u = 0$ if $u$ does not have the left child, and $r_u = 0$ if the node $u$ does not have the right child.
Each node has a string label, initially is a single character $c_u$. Let's define the string representation of the binary tree as the concatenation of the labels of the nodes in the in-order. Formally, let $f(u)$ be the string representation of the tree rooted at the node $u$. $f(u)$ is defined as follows: $$ f(u) = \begin{cases} <empty string>, & \text{if }u = 0; \\ f(l_u) + c_u + f(r_u) & \text{otherwise}, \end{cases} $$ where $+$ denotes the string concatenation operation.
This way, the string representation of the tree is $f(1)$.
For each node, we can duplicate its label \textbf{at most once}, that is, assign $c_u$ with $c_u + c_u$, but only if $u$ is the root of the tree, or if its parent also has its label duplicated.
You are given the tree and an integer $k$. What is the lexicographically smallest string representation of the tree, if we can duplicate labels of at most $k$ nodes?
A string $a$ is lexicographically smaller than a string $b$ if and only if one of the following holds:
- $a$ is a prefix of $b$, but $a \ne b$;
- in the first position where $a$ and $b$ differ, the string $a$ has a letter that appears earlier in the alphabet than the corresponding letter in $b$.
|
Firstly, we need to determine whether duplicating a label can help at all. For example, in the string "bac", the characters 'b' and 'c' should never be duplicated, since duplicating them always makes the result worse ("bbac" and "bacc" are both lexicographically greater than "bac"). This is because next to the character 'b' there is 'a', which is smaller than 'b'; and after 'c' there are no more characters, so we should not duplicate it either. Let's call a node good if duplicating its label makes the result lexicographically smaller. To find the good nodes, compute the initial string representation of the tree using DFS: a node is good if the next different character in the string representation exists and is smaller than the node's label. After finding the good nodes, let's find the first label that we should duplicate. It must belong to a good node and must lie as close to the start of the string as possible. We can find it, also by DFS: traversing in the in-order, the first good node whose depth does not exceed $k$ is the first node to have its label duplicated, and by duplicating it we must duplicate the labels of all its ancestors as well. Note that during the DFS, if we do not duplicate a node, we should not go into its right sub-tree. Let's call the cost of duplicating a node the number of its ancestors that are not yet duplicated. The cost of the first duplicated node is its depth, which can be computed during the DFS. The costs of the other nodes can also be maintained during the DFS: if a node is duplicated, then when we descend into its right sub-tree the cost becomes $1$. Overall we have the following DFS algorithm on a node $u$: if $u = 0$, do nothing; if $cost[u] > k$, do nothing; otherwise assign $cost[l[u]] \gets cost[u] + 1$ and run DFS on $l[u]$; if $l[u]$ has its label duplicated, we duplicate the label of $u$ as well.
Otherwise, if $u$ is good, we also duplicate the label of $u$ and decrease $k$ by $cost[u]$. If $u$ is duplicated, then assign $cost[r[u]] \gets 1$ and run DFS on $r[u]$. For the implementation, we can pass the variable $cost$ together with $u$ to the DFS function, so there is no need for a global array. The overall time complexity of this solution is $O(n)$.
|
[
"data structures",
"dfs and similar",
"greedy",
"strings",
"trees"
] | 2,500
|
#include <bits/stdc++.h>
using namespace std;
const int maxn = 202020;
int n, k;
string c;
int l[maxn], r[maxn];
vector<int> in_order;
void build_in_order(int u) {
if (u == 0) return ;
build_in_order(l[u]);
in_order.push_back(u);
build_in_order(r[u]);
}
bool good[maxn];
bool duplicated[maxn];
// the function try to greedily duplicate the nodes.
// u: the current node
// cost: distance to the nearest duplicated ancestor.
// the function will "destroy" the value of k
void dfs(int u, int cost = 1) {
if (u == 0) return ;
if (cost > k) return ;
dfs(l[u], cost + 1);
if (duplicated[l[u]]) {
duplicated[u] = true;
} else if (good[u]) {
duplicated[u] = true;
k -= cost;
}
if (duplicated[u]) dfs(r[u], 1);
}
int main() {
cin.tie(0)->sync_with_stdio(0);
cin >> n >> k;
cin >> c;
c = ' ' + c;
for (int i = 1; i <= n; ++i) {
cin >> l[i] >> r[i];
}
build_in_order(1);
char last_diff = c[in_order.back()];
for (int i = n - 2; i >= 0; --i) {
int u = in_order[i];
int v = in_order[i + 1];
if (c[u] != c[v]) {
last_diff = c[v];
}
if (c[u] < last_diff) {
good[u] = true;
}
}
dfs(1);
for (auto u: in_order) {
cout << c[u];
if (duplicated[u]) cout << c[u];
}
return 0;
}
|
1624
|
A
|
Plus One on the Subset
|
Polycarp got an array of integers $a[1 \dots n]$ as a gift. Now he wants to perform a certain number of operations (possibly zero) so that all elements of the array become the same (that is, to become $a_1=a_2=\dots=a_n$).
- In one operation, he can take some indices in the array and increase the elements of the array at those indices by $1$.
For example, let $a=[4,2,1,6,2]$. He can perform the following operation: select indices 1, 2, and 4 and increase elements of the array in those indices by $1$. As a result, in one operation, he can get a new state of the array $a=[5,3,1,7,2]$.
What is the minimum number of operations it can take so that all elements of the array become equal to each other (that is, to become $a_1=a_2=\dots=a_n$)?
|
Let's sort the numbers in ascending order. It becomes immediately clear that it is never profitable to increase the numbers that are equal to the maximum of the array. So in each operation we should take the subset consisting of all elements except the maximums and increase each of them by one. Every such operation raises the minimum of the array by one while the maximum stays the same, so the number of operations is exactly $max(a) - min(a)$.
|
[
"math"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
#define forn(i, n) for (int i = 0; i < int(n); i++)
void solve() {
int n;
cin >> n;
int a[n];
for (int i = 0; i < n; ++i) {
cin >> a[i];
}
int MIN = INT_MAX;
int MAX = INT_MIN;
for (int i = 0; i < n; ++i) {
MIN = min(MIN, a[i]);
MAX = max(MAX, a[i]);
}
cout << MAX - MIN << '\n';
}
int main() {
int tests;
cin >> tests;
forn(tt, tests) {
solve();
}
}
|
1624
|
B
|
Make AP
|
Polycarp has $3$ positive integers $a$, $b$ and $c$. He can perform the following operation \textbf{exactly once}.
- Choose a \textbf{positive} integer $m$ and multiply \textbf{exactly one} of the integers $a$, $b$ or $c$ by $m$.
Can Polycarp make it so that after performing the operation, the sequence of three numbers $a$, $b$, $c$ (\textbf{in this order}) forms an arithmetic progression? Note that you \textbf{cannot change} the order of $a$, $b$ and $c$.
Formally, a sequence $x_1, x_2, \dots, x_n$ is called an arithmetic progression (AP) if there exists a number $d$ (called "common difference") such that $x_{i+1}=x_i+d$ for all $i$ from $1$ to $n-1$. In this problem, $n=3$.
For example, the following sequences are AP: $[5, 10, 15]$, $[3, 2, 1]$, $[1, 1, 1]$, and $[13, 10, 7]$. The following sequences are not AP: $[1, 2, 4]$, $[0, 1, 0]$ and $[1, 3, 2]$.
You need to answer $t$ independent test cases.
|
Let's iterate over the number that we want to multiply by $m$. How can we check whether the current number can be multiplied so that an AP is formed? Note that the $2$ numbers we do not touch must fit the progression themselves. For instance, suppose at the current step we want to multiply the number $c$. Then $a = x_1$ and $b = x_2 = x_1 + d$. Note that $b - a = x_1 + d - x_1 = d$, so we know what $d$ is. We also know that $c$ must become $x_1 + 2 \cdot d = a + 2 \cdot d$. Let's check whether $a + 2 \cdot d$ is a positive multiple of $c$. If yes, then we have found the answer; if not, we move on to the next number. We do the same for $a$ and $b$. Be careful with non-positive values, integer division and other edge cases.
|
[
"implementation",
"math"
] | 900
|
#include<bits/stdc++.h>
using namespace std;
void solveTest() {
int a, b, c;
cin >> a >> b >> c;
int new_a = b - (c - b);
if(new_a >= a && new_a % a == 0 && new_a != 0) {
cout << "YES\n";
return;
}
int new_b = a + (c - a)/2;
if(new_b >= b && (c-a)%2 == 0 && new_b % b == 0 && new_b != 0) {
cout << "YES\n";
return;
}
int new_c = a + 2*(b - a);
if(new_c >= c && new_c % c == 0 && new_c != 0) {
cout << "YES\n";
return;
}
cout << "NO\n";
return;
}
int main() {
ios_base::sync_with_stdio(false);
cin.tie(0); cout.tie(0);
int tt;
cin >> tt;
while(tt--)
solveTest();
return 0;
}
|
1624
|
C
|
Division by Two and Permutation
|
You are given an array $a$ consisting of $n$ positive integers. You can perform operations on it.
In one operation you can replace any element of the array $a_i$ with $\lfloor \frac{a_i}{2} \rfloor$, that is, by an integer part of dividing $a_i$ by $2$ (rounding down).
See if you can apply the operation some number of times (possible $0$) to make the array $a$ become a permutation of numbers from $1$ to $n$ —that is, so that it contains all numbers from $1$ to $n$, each exactly once.
For example, if $a = [1, 8, 25, 2]$, $n = 4$, then the answer is yes. You could do the following:
- Replace $8$ with $\lfloor \frac{8}{2} \rfloor = 4$, then $a = [1, 4, 25, 2]$.
- Replace $25$ with $\lfloor \frac{25}{2} \rfloor = 12$, then $a = [1, 4, 12, 2]$.
- Replace $12$ with $\lfloor \frac{12}{2} \rfloor = 6$, then $a = [1, 4, 6, 2]$.
- Replace $6$ with $\lfloor \frac{6}{2} \rfloor = 3$, then $a = [1, 4, 3, 2]$.
|
Let's sort the array $a$ in descending order of the values of its elements. Then let's create a boolean array $used$, where $used[i]$ has the value true if we have already obtained element $i$ of the permutation we are looking for, and false otherwise. We loop through the elements of the array $a$ and assign $x = a_i$. We divide $x$ by $2$ as long as it exceeds $n$ or $used[x]$ is true. If it turns out that $x = 0$, then all the numbers that could be obtained from $a_i$ have already been obtained before. Since each element of the array $a$ must produce a new value from $1$ to $n$, the answer cannot be constructed, and we output NO. Otherwise, we set $used[x]$ to true, meaning that the permutation element $x$ will be obtained exactly from the original number $a_i$. After processing all elements of the array $a$, we can output YES.
|
[
"constructive algorithms",
"flows",
"graph matchings",
"greedy",
"math"
] | 1,100
|
#include<bits/stdc++.h>
using namespace std;
void solve(){
int n;
cin >> n;
vector<int>a(n), used(n + 1, false);
for(auto &i : a) cin >> i;
sort(a.begin(), a.end(), [] (int a, int b) {
return a > b;
});
bool ok = true;
for(auto &i : a){
int x = i;
while(x > n or used[x]) x /= 2;
if(x > 0) used[x] = true;
else ok = false;
}
cout << (ok ? "YES" : "NO") << '\n';
}
int main(){
ios_base :: sync_with_stdio(false);
cin.tie(nullptr);
int t;
cin >> t;
while(t--){
solve();
}
return 0;
}
|
1624
|
D
|
Palindromes Coloring
|
You have a string $s$ consisting of lowercase Latin alphabet letters.
You can color some letters in colors from $1$ to $k$. It is not necessary to paint all the letters. But for each color, there must be a letter painted in that color.
Then you can swap any two symbols painted in the same color as many times as you want.
After that, $k$ strings will be created, $i$-th of them will contain all the characters colored in the color $i$, written in the order of their sequence in the string $s$.
Your task is to color the characters of the string so that all the resulting $k$ strings are palindromes, and the length of the shortest of these $k$ strings is as \textbf{large} as possible.
Read the note for the first test case of the example if you need a clarification.
Recall that a string is a palindrome if it reads the same way both from left to right and from right to left. For example, the strings abacaba, cccc, z and dxd are palindromes, but the strings abab and aaabaa — are not.
|
We will solve the problem greedily. First, we will try to add pairs of identical characters to the palindromes. As long as there are at least $k$ pairs left, we add one pair to each palindrome. After that, it is no longer possible to add a pair of characters, but we can try to add one character in the middle of each palindrome. This can be done if there are at least $k$ characters left. There is no need to color the remaining characters.
|
[
"binary search",
"greedy",
"sortings",
"strings"
] | 1,400
|
#include <iostream>
#include <vector>
#include <stack>
#include <algorithm>
using namespace std;
typedef long long ll;
const int MAX_N = 2e5;
int main(int argc, char* argv[]) {
int t;
cin >> t;
for (int _ = 0; _ < t; ++_) {
int n, k;
string s;
cin >> n >> k >> s;
vector<int> cnt(26);
for (char c : s) {
cnt[c - 'a']++;
}
int cntPairs = 0, cntOdd = 0;
for (int c : cnt) {
cntPairs += c / 2;
cntOdd += c % 2;
}
int ans = 2 * (cntPairs / k);
cntOdd += 2 * (cntPairs % k);
if (cntOdd >= k) {
ans++;
}
cout << ans << '\n';
}
}
|
1624
|
E
|
Masha-forgetful
|
Masha meets a new friend and learns his phone number — $s$. She wants to remember it as soon as possible. The phone number — is a string of length $m$ that consists of digits from $0$ to $9$. The phone number may start with 0.
Masha already knows $n$ phone numbers (all numbers have the same length $m$). It will be easier for her to remember a new number if the $s$ is represented as segments of numbers she already knows. Each such segment must be of length \textbf{at least $2$}, otherwise there will be too many segments and Masha will get confused.
For example, Masha needs to remember the number: $s = $ '12345678' and she already knows $n = 4$ numbers: '12340219', '20215601', '56782022', '12300678'. You can represent $s$ as a $3$ segment: '1234' of number one, '56' of number two, and '78' of number three. There are other ways to represent $s$.
Masha asks you for help, she asks you to break the string $s$ into segments of length $2$ or more of the numbers she already knows. If there are several possible answers, print \textbf{any} of them.
|
The key idea is that any string of length greater than $3$ can be obtained by concatenating strings of length $2$ or $3$. So, when reading the data, remember all occurring substrings of length $2$ and $3$; there are at most $10^4$ distinct ones. Now compute a DP over prefixes: $dp[i] = true$ if we can build the prefix of length $i$ of the phone $s$ from segments of length $2$ and $3$ of the phones known to Masha. For a transition, try the lengths $2$ and $3$, take the substring of the corresponding length, and check whether it occurred in the known phones. Recomputing the DP then takes $O(m)$ or $O(m \log m)$ time, depending on the implementation. But reading the data takes even more time, so the final complexity is $O(nm)$ or $O(nm \log m)$.
|
[
"brute force",
"constructive algorithms",
"dp",
"hashing",
"implementation",
"strings"
] | 2,000
|
#include <bits/stdc++.h>
using namespace std;
#define forn(i, n) for (int i = 0; i < int(n); i++)
#define sz(v) (int)v.size()
const int N = 1e4;
map<string, bool> have;
map<string, tuple<int,int,int>> pos;
void solve() {
int n, m; cin >> n >> m;
vector<bool> dp(m+1, false);
vector<int> pr(m+1);
vector<string> cache;
dp[0] = true;
forn(i, n) {
string s; cin >> s;
forn(j, m) {
string t;
t += s[j];
for(int k = 1; k <= 2; k++) {
if (k + j >= m) break;
t += s[j+k];
if (!have[t]) {
have[t] = true;
pos[t] = make_tuple(j, j+k, i);
cache.push_back(t);
}
}
}
}
string s; cin >> s;
forn(i, m) {
string t;
t += s[i];
for (int k = 1; k <= 2; k++) {
if (i - k < 0) break;
t = s[i-k] + t;
if (have[t] && dp[i-k]) {
dp[i+1] = true;
pr[i+1] = i-k;
}
if (dp[i+1]) break;
}
}
for (string t : cache) {
have[t] = false;
}
if (!dp[m]) {
cout << "-1\n";
return;
}
vector<tuple<int,int,int>> ans;
for (int k = m; k > 0; ) {
int p = pr[k];
string t = s.substr(p, k - p);
ans.emplace_back(pos[t]);
k = p;
}
cout << sz(ans) << '\n';
reverse(ans.begin(), ans.end());
for (auto [l,r,i] : ans) cout << l+1 << ' ' << r+1 << ' ' << i+1 << '\n';
}
int main() {
int t;
cin >> t;
forn(tt, t) {
solve();
}
}
|
1624
|
F
|
Interacdive Problem
|
\textbf{This problem is interactive.}
We decided to play a game with you and guess the number $x$ ($1 \le x < n$), where you know the number $n$.
You can make queries like this:
- + c: this command assigns $x = x + c$ ($1 \le c < n$) and then returns you the value $\lfloor\frac{x}{n}\rfloor$ ($x$ divide by $n$ and round down).
You win if you guess the current number with no more than $10$ queries.
|
After each query we know the value $\lfloor\frac{x}{n}\rfloor$, so to recover the current $x$ we need to find $x \mod n$. To find it, we will use binary search. Suppose $x \mod n$ lies in the half-interval $[l, r)$. To determine which half it is in, select the middle $m$ of the half-interval and make the query + $(n - m)$. After that, $\lfloor\frac{x}{n}\rfloor$ either changes, which means $x \mod n$ was in $[m, r)$, or it does not, in which case it was in $[l, m)$. Then we just need to shift the half-interval properly to account for the change caused by the query.
|
[
"binary search",
"constructive algorithms",
"interactive"
] | 2,000
|
#include <bits/stdc++.h>
#define int long long
#define mp make_pair
#define x first
#define y second
#define all(a) (a).begin(), (a).end()
#define rall(a) (a).rbegin(), (a).rend()
typedef long double ld;
typedef long long ll;
using namespace std;
mt19937 rnd(143);
const int inf = 1e10;
const int M = 998244353;
const ld pi = atan2(0, -1);
const ld eps = 1e-4;
signed main(){
int n;
cin >> n;
int l = 1, r = n;
int div = 0;
while(r - l > 1){
int mid = (r + l) / 2;
cout << "+ "<< n - mid << endl;
int d;
cin >> d;
if(d > div)l = mid;
else r = mid;
l = (l + n - mid) % n;
r = (r + n - mid) % n;
if(r == 0) r = n;
div = d;
}
cout << "! " << div * n + l;
return 0;
}
|
1624
|
G
|
MinOr Tree
|
Recently, Vlad has been carried away by spanning trees, so his friends, without hesitation, gave him a connected weighted undirected graph of $n$ vertices and $m$ edges for his birthday.
Vlad defined the ority of a spanning tree as the bitwise OR of all its weights, and now he is interested in what is the minimum possible ority that can be achieved by choosing a certain spanning tree. A spanning tree is a connected subgraph of a given graph that does not contain cycles.
In other words, you want to keep $n-1$ edges so that the graph remains connected and the bitwise OR weights of the edges are as small as possible. You have to find the minimum bitwise OR itself.
|
We need to minimize the result of a bitwise operation, so for convenience we represent the answer as a mask. Initially, assume this mask consists entirely of ones. Go from the most significant bit to the least significant one and try to reduce the answer. To check whether the $j$-th bit can be removed, remove it and test whether the graph restricted to edges whose weights are submasks of the current answer mask is connected; for this, you can use depth-first search or a disjoint set union. If the graph is connected, the bit can safely be dropped; if not, it must be put back.
|
[
"bitmasks",
"dfs and similar",
"dsu",
"graphs",
"greedy"
] | 1,900
|
#include <bits/stdc++.h>
#define int long long
#define mp make_pair
#define x first
#define y second
#define all(a) (a).begin(), (a).end()
#define rall(a) (a).rbegin(), (a).rend()
typedef long double ld;
typedef long long ll;
using namespace std;
mt19937 rnd(143);
const int inf = 1e9;
const int M = 998244353;
const ld pi = atan2(0, -1);
const ld eps = 1e-4;
int n, cur;
vector<vector<pair<int, int>>> sl;
void dfs(int v, vector<bool> &used){
used[v] = true;
for(auto e: sl[v]){
int u = e.x, w = e.y;
if(!used[u] && (cur | w) == cur){
dfs(u, used);
}
}
}
void cnt(int pw){
if(pw < 0) return;
int d = (ll) 1 << pw;
cur -= d;
vector<bool> used(n);
dfs(0, used);
for(bool b: used){
if(!b) {
cur += d;
break;
}
}
cnt(pw - 1);
}
void solve() {
int m;
cin >> n >> m;
sl.assign(n, vector<pair<int, int>>(0));
for(int i = 0; i < m; ++i){
int u, v, w;
cin >> u >> v >> w;
--u, --v;
sl[u].emplace_back(v, w);
sl[v].emplace_back(u, w);
}
cur = 1;
int bit = 0;
for(; cur < inf; bit++){
cur = 2 * cur + 1;
}
cnt(bit);
cout << cur;
}
bool multi = true;
signed main() {
int t = 1;
if (multi) {
cin >> t;
}
for (; t != 0; --t) {
solve();
cout << "\n";
}
return 0;
}
|
1625
|
A
|
Ancient Civilization
|
Martian scientists explore Ganymede, one of Jupiter's numerous moons. Recently, they have found ruins of an ancient civilization. The scientists brought to Mars some tablets with writings in a language unknown to science.
They found out that the inhabitants of Ganymede used an alphabet consisting of two letters, and each word was exactly $\ell$ letters long. So, the scientists decided to write each word of this language as an integer from $0$ to $2^{\ell} - 1$ inclusively. The first letter of the alphabet corresponds to zero bit in this integer, and the second letter corresponds to one bit.
The same word may have various forms in this language. Then, you need to restore the initial form. The process of doing it is described below.
Denote the distance between two words as the number of positions in which these words differ. For example, the distance between $1001_2$ and $1100_2$ (in binary) is equal to two, as these words have different letters in the second and the fourth positions, counting from left to right. Further, denote the distance between words $x$ and $y$ as $d(x, y)$.
Let the word have $n$ forms, the $i$-th of which is described with an integer $x_i$. All the $x_i$ are not necessarily different, as two various forms of the word can be written the same. Consider some word $y$. Then, closeness of the word $y$ is equal to the sum of distances to each of the word forms, i. e. the sum $d(x_i, y)$ over all $1 \le i \le n$.
The initial form is the word $y$ with minimal possible closeness.
You need to help the scientists and write the program which finds the initial form of the word given all its known forms. Note that the initial form is \textbf{not necessarily} equal to any of the $n$ given forms.
|
Note that the problem can be solved independently for each bit, as the bits don't influence each other. Set the $i$-th bit to zero if the numbers in the array contain more zeros than ones in the $i$-th bit. Otherwise, set it to one.
|
[
"bitmasks",
"greedy",
"math"
] | 800
| null |
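The code field is empty for this problem, so here is a minimal Python sketch of the per-bit majority idea from the tutorial; the function name and I/O-free signature are my own:

```python
def initial_form(l, words):
    # For each bit position, pick the majority bit among all given forms:
    # this minimizes the total Hamming distance independently per bit.
    y = 0
    for b in range(l):
        ones = sum((w >> b) & 1 for w in words)
        if 2 * ones > len(words):   # strictly more ones than zeros
            y |= 1 << b
    # on a tie either bit gives the same closeness; we keep zero
    return y
```

For instance, `initial_form(3, [0b101, 0b100, 0b001])` picks bits 0 and 2 (each set in two of the three forms) and returns `0b101`.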
1625
|
B
|
Elementary Particles
|
Martians are actively engaged in interplanetary trade. Olymp City, the Martian city known for its spaceport, has become a place where goods from all the corners of our Galaxy come. To deliver even more freight from faraway planets, Martians need fast spaceships.
A group of scientists conducts experiments to build a fast engine for the new spaceship. In the current experiment, there are $n$ elementary particles, the $i$-th of them has type $a_i$.
Denote a subsegment of the particle sequence ($a_1, a_2, \dots, a_n$) as a sequence ($a_l, a_{l+1}, \dots, a_r$) for some left bound $l$ and right bound $r$ ($1 \le l \le r \le n$). For instance, the sequence $(1\ 4\ 2\ 8\ 5\ 7)$ for $l=2$ and $r=4$ has the sequence $(4\ 2\ 8)$ as a subsegment. Two subsegments are considered different if at least one bound of those subsegments differs.
Note that the subsegments can be equal as sequences but still considered different. For example, consider the sequence $(1\ 1\ 1\ 1\ 1)$ and two of its subsegments: one with $l=1$ and $r=3$ and another with $l=2$ and $r=4$. Both subsegments are equal to $(1\ 1\ 1)$, but still considered different, as their left and right bounds differ.
The scientists want to conduct a reaction to get two different subsegments of the same length. Denote this length $k$. The resulting pair of subsegments must be harmonious, i. e. for \textbf{some} $i$ ($1 \le i \le k$) it must be true that the types of particles on the $i$-th position are the same for these two subsegments. For example, the pair $(1\ 7\ 3)$ and $(4\ 7\ 8)$ is harmonious, as both subsegments have $7$ on the second position. The pair $(1\ 2\ 3)$ and $(3\ 1\ 2)$ is not harmonious.
The longer are harmonious subsegments, the more chances for the scientists to design a fast engine. So, they asked you to calculate the maximal possible length of harmonious pair made of different subsegments.
|
Note the following fact: for each optimal pair of harmonious subsegments, the right one ends at the last position. Proof: suppose not. Then we can extend both subsegments to the right by one character, and they remain harmonious. Now let's prove the statement that solves the problem: the answer is $n - min(v - u)$, where the minimum is over all $u$ and $v$ such that $u < v$ and $a_u = a_v$. Proof: consider two elements $u$ and $v$ with $u < v$ and $a_u = a_v$, and suppose they occupy the same position in a pair of harmonious subsegments. What maximal length can these subsegments have? From the fact proved above, we can extend the subsegments to the right. Take the first subsegment starting at $u$ and the second starting at $v$; after extending, both have length $n - v + 1$. That is still not enough, so we also extend them to the left, and the total length becomes $n - v + 1 + (u - 1) = n - (v - u)$. The smaller $v - u$, the larger the length. To solve the problem, we need to find the closest pair of equal elements quickly. We can do the following: store all the positions of each value (i.e. all positions with $a_i = 1$, with $a_i = 2$, etc.), then iterate over the values, go through the pairs of neighboring positions, and take the minimum.
|
[
"brute force",
"greedy",
"sortings"
] | 1,100
| null |
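The code field is empty for this problem, so here is a short Python sketch of the formula $n - \min(v - u)$ from the tutorial, using a dictionary of last-seen positions instead of explicit position lists; the function name is my own:

```python
def longest_harmonious(a):
    # answer = n - min(v - u) over pairs u < v with a[u] == a[v];
    # -1 if all elements are distinct (no harmonious pair exists)
    last = {}
    best = None
    for i, x in enumerate(a):
        if x in last:
            gap = i - last[x]
            best = gap if best is None else min(best, gap)
        last[x] = i          # only the nearest previous occurrence matters
    return -1 if best is None else len(a) - best
```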
1625
|
C
|
Road Optimization
|
The Government of Mars is not only interested in optimizing space flights, but also wants to improve the road system of the planet.
One of the most important highways of Mars connects Olymp City and Kstolop, the capital of Cydonia. In this problem, we only consider the way from Kstolop to Olymp City, but not the reverse path (i. e. the path from Olymp City to Kstolop).
The road from Kstolop to Olymp City is $\ell$ kilometers long. Each point of the road has a coordinate $x$ ($0 \le x \le \ell$), which is equal to the distance from Kstolop in kilometers. So, Kstolop is located in the point with coordinate $0$, and Olymp City is located in the point with coordinate $\ell$.
There are $n$ signs along the road, $i$-th of which sets a speed limit $a_i$. This limit means that the next kilometer must be passed in $a_i$ minutes and is active until you encounter the next along the road. There is a road sign at the start of the road (i. e. in the point with coordinate $0$), which sets the initial speed limit.
If you know the location of all the signs, it's not hard to calculate how much time it takes to drive from Kstolop to Olymp City. Consider an example:
Here, you need to drive the first three kilometers in five minutes each, then one kilometer in eight minutes, then four kilometers in three minutes each, and finally the last two kilometers must be passed in six minutes each. Total time is $3\cdot 5 + 1\cdot 8 + 4\cdot 3 + 2\cdot 6 = 47$ minutes.
To optimize the road traffic, the Government of Mars decided to remove no more than $k$ road signs. It cannot remove the sign at the start of the road, otherwise, there will be no limit at the start. By removing these signs, the Government also wants to make the time needed to drive from Kstolop to Olymp City as small as possible.
The largest industrial enterprises are located in Cydonia, so it's the priority task to optimize the road traffic from Olymp City. So, the Government of Mars wants you to remove the signs in the way described above.
|
First, you need to understand that this problem is solved with dynamic programming. Note that in this editorial $a_i$ denotes the coordinate of the $i$-th sign and $b_i$ the limit it sets, unlike the statement, where $a_i$ is the limit. Let $dp_{i,j}$ be the minimum time to drive, considering the first $i$ signs, having already removed $j$ signs, and assuming the $i$-th sign is kept. The initial states are $dp_{0,0} = 0$ and $dp_{i,0} = dp_{i-1,0} + b_{i-1} \cdot (a_i - a_{i-1})$: reaching the first sign takes $0$ time, and if we remove no signs the time is computed directly. Initially, fill $dp_{i,j} = \infty$ for all $i = 1 \ldots n$, $j = 1 \ldots k$. The answer is $min(dp_{n,j})$ over all $j = 0 \ldots k$. Now consider the transitions; we compute the DP bottom-up, going from smaller states to larger ones. Loop over all $i = 0 \ldots n-1$ and all $j = 0 \ldots k$. If $dp_{i, j} = \infty$, there is no answer for this state and we skip it (for example, this happens when $i < j$). Otherwise, iterate over the position $pos$ of the next kept sign. The transitions are as follows: $dp_{pos, j + pos - i - 1} = \min(dp_{pos,j + pos - i - 1}, dp_{i, j} + b_i \cdot (a_{pos} - a_i))$. Why this formula? After removing all the signs in $[i+1, pos-1]$ we stand at sign $pos$, having removed $pos - i - 1$ signs, and the time to drive from $i$ to $pos$ depends on the limit at sign $i$ and the distance between $i$ and $pos$. So, we get a solution in $O(n^3)$. There also exists a solution in $O(n^2\cdot \log n)$, which uses the Convex Hull Trick. We don't describe it here, as it's not required to solve the problem.
|
[
"dp"
] | 1,700
| null |
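The code field is empty for this problem, so here is a Python sketch of the $O(n^3)$ DP described above. Names are mine: `pos` holds the sign coordinates (with `pos[0] == 0`), `lim` the minutes-per-km limits, and index `n` in the DP table plays the role of a sentinel "sign" at the destination:

```python
def min_travel_time(l, k, pos, lim):
    # dp[i][j]: minimum time standing at kept sign i with j signs removed.
    INF = float('inf')
    n = len(pos)
    stops = pos + [l]                        # sentinel: the destination
    dp = [[INF] * (k + 1) for _ in range(n + 1)]
    dp[0][0] = 0                             # the first sign cannot be removed
    for i in range(n):
        for j in range(k + 1):
            if dp[i][j] == INF:
                continue
            for nxt in range(i + 1, n + 1):  # next kept sign (or destination)
                removed = j + (nxt - i - 1)  # signs skipped in between
                if removed > k:
                    break
                cost = dp[i][j] + lim[i] * (stops[nxt] - stops[i])
                if cost < dp[nxt][removed]:
                    dp[nxt][removed] = cost
    return min(dp[n])
```

On the example from the statement (signs at $0, 3, 4, 8$ with limits $5, 8, 3, 6$ and $\ell = 10$), removing nothing gives $47$, and removing the signs at coordinates $3$ and $8$ gives $8 \cdot 5 + 2 \cdot 3 = 38$... actually $4 \cdot 5 + 6 \cdot 3 = 38$, the optimum for $k = 2$.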
1625
|
D
|
Binary Spiders
|
Binary Spiders are species of spiders that live on Mars. These spiders weave their webs to defend themselves from enemies.
To weave a web, spiders join in pairs. If the first spider in pair has $x$ legs, and the second spider has $y$ legs, then they weave a web with durability $x \oplus y$. Here, $\oplus$ means bitwise XOR.
Binary Spiders live in large groups. You observe a group of $n$ spiders, and the $i$-th spider has $a_i$ legs.
When the group is threatened, some of the spiders become defenders. Defenders are chosen in the following way. First, there must be at least two defenders. Second, any pair of defenders must be able to weave a web with durability at least $k$. Third, there must be as much defenders as possible.
Scientists have researched the behaviour of Binary Spiders for a long time, and now they have a hypothesis that they can always choose the defenders in an optimal way, satisfying the conditions above. You need to verify this hypothesis on your group of spiders. So, you need to understand how many spiders must become defenders. You are not a Binary Spider, so you decided to use a computer to solve this problem.
|
To solve the problem, we need the following well-known fact: given a set of numbers sorted in non-descending order, the minimal $xor$ over all pairs is attained on a pair of neighboring numbers. In this problem, we need the minimal $xor$ to be at least $k$. For a quadratic solution, we can just sort the numbers and let $dp_i$ be the size of the largest good subset whose greatest element equals $a_i$. Then $dp_i = max(dp_j + 1)$ over $j$ with $a_j \oplus a_i \ge k$. To make this solution faster, we can use a trie over the bits of the numbers. In each vertex of the trie, we store the size of the largest good subset whose greatest element lies in that subtree. When we want to get the answer for $i$, we descend the trie and collect the answers of all the subtrees we don't enter whenever the corresponding bit of $k$ is equal to zero. Time complexity is $O(n\cdot \log \max a_i)$.
|
[
"bitmasks",
"data structures",
"implementation",
"math",
"sortings",
"trees"
] | 2,300
| null |
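The code field is empty for this problem. Here is a Python sketch of the quadratic DP from the tutorial (it returns only the maximal subset size, not the subset itself; the full solution also needs the trie speedup and answer reconstruction). The function name is my own:

```python
def max_defenders(a, k):
    # After sorting, a subset is good iff every pair of neighbours in it has
    # XOR >= k, since the minimum pairwise XOR of a sorted set is attained
    # on neighbouring elements.
    a = sorted(a)
    n = len(a)
    dp = [1] * n                  # dp[i]: largest good subset ending at a[i]
    for i in range(n):
        for j in range(i):
            if (a[j] ^ a[i]) >= k:
                dp[i] = max(dp[i], dp[j] + 1)
    best = max(dp)
    return best if best >= 2 else -1   # at least two defenders are required
```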
1625
|
E1
|
Cats on the Upgrade (easy version)
|
This is the easy version of the problem. The only difference between the easy and the hard versions are removal queries, they are present only in the hard version.
"Interplanetary Software, Inc." together with "Robots of Cydonia, Ltd." has developed and released robot cats. These electronic pets can meow, catch mice and entertain the owner in various ways.
The developers from "Interplanetary Software, Inc." have recently decided to release a software update for these robots. After the update, the cats must solve the problems about bracket sequences. One of the problems is described below.
First, we need to learn a bit of bracket sequence theory. Consider the strings that contain characters "(", ")" and ".". Call a string regular bracket sequence (RBS), if it can be transformed to an empty string by one or more operations of removing either single "." characters, or a continuous substring "()". For instance, the string "(()(.))" is an RBS, as it can be transformed to an empty string with the following sequence of removals:
\begin{center}
"{(()(\underline{.}))}" $\rightarrow$ "{(()\underline{()})}" $\rightarrow$ "{(\underline{()})}" $\rightarrow$ "{\underline{()}}" $\rightarrow$ "".
\end{center}
We got an empty string, so the initial string was an RBS. At the same time, the string ")(" is not an RBS, as it is not possible to apply such removal operations to it.
An RBS is simple if this RBS is not empty, doesn't start with ".", and doesn't end with ".".
Denote the substring of the string $s$ as its sequential subsegment. In particular, $s[l\dots r] = s_ls_{l+1}\dots s_r$, where $s_i$ is the $i$-th character of the string $s$.
Now, move on to the problem statement itself. You are given a string $s$, initially consisting of characters "(" and ")". You need to answer the queries of the following kind.
Given two indices, $l$ and $r$ ($1 \le l < r \le n$), and it's \textbf{guaranteed} that the substring $s[l\dots r]$ is a \textbf{simple RBS}. You need to find the number of substrings in $s[l\dots r]$ such that they are simple RBS. In other words, find the number of index pairs $i$, $j$ such that $l \le i < j \le r$ and $s[i\dots j]$ is a simple RBS.
You are an employee in "Interplanetary Software, Inc." and you were given the task to teach the cats to solve the problem above, after the update.
Note that the "." character cannot appear in the string in this version of the problem. It is only needed for the hard version.
|
First, we need to make the input string an RBS. Consider one of the possible ways to do it. We keep a stack of all the opening brackets and remove a bracket from the stack when we encounter the corresponding closing bracket. Every unpaired closing bracket, and every opening bracket left on the stack, is replaced with a dot. So, the input string becomes an RBS, and it's not hard to see that no query passes through the dots we inserted in this step. Now, build a tree from the brackets in the following way. Initially, there is one vertex. When we encounter an opening bracket, we go one level below and create a new vertex, and when we encounter a closing bracket, we go back to the parent. Each vertex then corresponds to an RBS: the root corresponds to the entire string, and leaves correspond to empty RBSes. Note that we can obtain all the RBSes by taking all subsegments of the children of vertices. Each such subsegment looks like (RBS)(RBS)...(RBS), i.e. it is a concatenation of the RBSes corresponding to children, each put into brackets. Now we can compute a simple DP: the number of RBSes in a vertex is the sum over its children plus $\frac{k \cdot (k + 1)}{2}$, where $k$ is the number of children. The number of RBSes on a segment is calculated in a similar way. Having computed this DP, and being able to find the vertex corresponding to a given index, we can answer the segment queries. The time complexity is $O(q \log n)$, or possibly $O(n + q)$ if we manage to find the vertices corresponding to indices fast.
|
[
"brute force",
"data structures",
"dfs and similar",
"divide and conquer",
"dp",
"graphs",
"trees"
] | 2,500
| null |
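The code field is empty for this problem. As an illustration of the tree DP $ans(v) = \sum ans(children) + \frac{k(k+1)}{2}$, here is a hedged Python sketch that counts all simple RBS substrings of a whole balanced bracket string (the full solution additionally answers arbitrary segment queries); the function name is my own:

```python
def count_simple_rbs(s):
    # Each stack entry holds [child_count, sum_of_child_answers] for one open
    # vertex of the bracket tree; the bottom entry is the root (whole string).
    # Assumes s is a balanced sequence of '(' and ')'.
    stack = [[0, 0]]
    for c in s:
        if c == '(':
            stack.append([0, 0])
        else:
            k, acc = stack.pop()
            ans = acc + k * (k + 1) // 2   # DP value of the closed vertex
            stack[-1][0] += 1
            stack[-1][1] += ans
    k, acc = stack[0]
    return acc + k * (k + 1) // 2          # DP value of the root
```

For example, "(()())" contains the simple RBSes "()", "()", "()()" and "(()())", so the function returns 4.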
1625
|
E2
|
Cats on the Upgrade (hard version)
|
This is the hard version of the problem. The only difference between the easy and the hard versions are removal queries, they are present only in the hard version.
"Interplanetary Software, Inc." together with "Robots of Cydonia, Ltd." has developed and released robot cats. These electronic pets can meow, catch mice and entertain the owner in various ways.
The developers from "Interplanetary Software, Inc." have recently decided to release a software update for these robots. After the update, the cats must solve the problems about bracket sequences. One of the problems is described below.
First, we need to learn a bit of bracket sequence theory. Consider the strings that contain characters "(", ")" and ".". Call a string regular bracket sequence (RBS), if it can be transformed to an empty string by one or more operations of removing either single "." characters, or a continuous substring "()". For instance, the string "(()(.))" is an RBS, as it can be transformed to an empty string with the following sequence of removals:
\begin{center}
"{(()(\underline{.}))}" $\rightarrow$ "{(()\underline{()})}" $\rightarrow$ "{(\underline{()})}" $\rightarrow$ "{\underline{()}}" $\rightarrow$ "".
\end{center}
We got an empty string, so the initial string was an RBS. At the same time, the string ")(" is not an RBS, as it is not possible to apply such removal operations to it.
An RBS is simple if this RBS is not empty, doesn't start with ".", and doesn't end with ".".
Denote the substring of the string $s$ as its sequential subsegment. In particular, $s[l\dots r] = s_ls_{l+1}\dots s_r$, where $s_i$ is the $i$-th character of the string $s$.
Now, move on to the problem statement itself. You are given a string $s$, initially consisting of characters "(" and ")". You need to answer the following queries:
- Given two indices, $l$ and $r$ ($1 \le l < r \le n$). It's \textbf{guaranteed} that the $l$-th character is equal to "(", the $r$-th character is equal to ")", and the characters between them are equal to ".". Then the $l$-th and the $r$-th characters must be set to ".".
- Given two indices, $l$ and $r$ ($1 \le l < r \le n$), and it's \textbf{guaranteed} that the substring $s[l\dots r]$ is a \textbf{simple RBS}. You need to find the number of substrings in $s[l\dots r]$ such that they are simple RBS. In other words, find the number of index pairs $i$, $j$ such that $l \le i < j \le r$ and $s[i\dots j]$ is a simple RBS.
You are an employee in "Interplanetary Software, Inc." and you were given the task to teach the cats to solve the problem above, after the update.
|
Now, we need to see how to handle the removal queries in this task. Build a sqrt decomposition in the following way: rebuild the entire tree and recalculate the DP after every $\sqrt{n}$ queries, and between rebuilds keep a list of removed leaves. Let's see how to recalculate the answer when some leaves are removed. First suppose the leaf is not a direct child of the vertex we are interested in. Then the removal of this leaf decreases the answer by $q$, where $q$ is the number of children of this leaf's parent. Why so? The parent of this leaf had the answer equal to the sum of answers of its children plus $\frac{q \cdot (q + 1)}{2}$. The answer in the leaf is zero, so the new answer is the same sum plus $\frac{(q - 1) \cdot q}{2}$, thus decreased by $q$. When we build the DP, a change of the answer in a vertex is passed to its ancestors unchanged, so the answer decreases by $q$ on the entire path from this leaf to the root. We can easily check whether a removal affects a given query of the second type: it must be applied only if the removed leaf lies strictly inside the query. We also need to handle the case where the leaf is a direct child of the vertex we consider, as the removal described above doesn't fully apply to this case; this is an exercise left to the reader. So, we get a solution in $O((n + q) \cdot \sqrt{n})$. There is also a solution in $O((n + q) \log n)$; we sketch it briefly. Let's hold just $\frac{k \cdot (k + 1)}{2}$ in each vertex, not the sum over children plus $\frac{k \cdot (k + 1)}{2}$, as we did before. Then the answer for each vertex is the sum over its subtree, which we can maintain with a Fenwick tree, calculating subtree sums via an Eulerian tour of the tree. It's not hard to see that each update must be applied only once, to the direct parent.
|
[
"binary search",
"data structures",
"dfs and similar",
"graphs",
"trees"
] | 2,800
| null |
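The code field is empty for this problem. The $O((n + q) \log n)$ variant needs subtree sums under point updates; a standard Fenwick (binary indexed) tree over the Euler tour provides exactly that. A minimal sketch of that building block (the Euler-tour and bracket-tree bookkeeping is omitted):

```python
class Fenwick:
    # Point update and prefix sum in O(log n). With an Euler tour where the
    # subtree of v occupies positions [tin[v], tout[v]], a subtree sum is a
    # single range_sum call.
    def __init__(self, n):
        self.n = n
        self.t = [0] * (n + 1)

    def add(self, i, delta):          # 1-based point update
        while i <= self.n:
            self.t[i] += delta
            i += i & -i

    def pref(self, i):                # sum of positions 1..i
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i
        return s

    def range_sum(self, l, r):        # sum of positions l..r
        return self.pref(r) - self.pref(l - 1)
```

In the solution, removing a leaf whose parent has $k$ children would translate into `add(tin[parent], (k - 1) * k // 2 - k * (k + 1) // 2)`, i.e. a point decrease by $k$.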
1626
|
A
|
Equidistant Letters
|
You are given a string $s$, consisting of lowercase Latin letters. Every letter appears in it no more than twice.
Your task is to rearrange the letters in the string in such a way that for each pair of letters that appear exactly twice, the distance between the letters in the pair is the same. You are not allowed to add or remove letters.
It can be shown that the answer always exists. If there are multiple answers, print any of them.
|
Let's consider a very special case of equal distances. What if all distances were equal to $1$? It implies that if some letter appears exactly twice, both occurrences are placed right next to each other. That construction can be achieved if you sort the string, for example: first write down all letters 'a', then all letters 'b', and so on. If a letter appears multiple times, all its occurrences will be next to each other, just as we wanted. Overall complexity: $O(|s| \log |s|)$ or $O(|s|)$ per testcase.
|
[
"constructive algorithms",
"sortings"
] | 800
|
for _ in range(int(input())):
print(''.join(sorted(input())))
|
1626
|
B
|
Minor Reduction
|
You are given a decimal representation of an integer $x$ without leading zeros.
You have to perform the following reduction on it \textbf{exactly once}: take two neighboring digits in $x$ and replace them with their sum without leading zeros (if the sum is $0$, it's represented as a single $0$).
For example, if $x = 10057$, the possible reductions are:
- choose the first and the second digits $1$ and $0$, replace them with $1+0=1$; the result is $1057$;
- choose the second and the third digits $0$ and $0$, replace them with $0+0=0$; the result is also $1057$;
- choose the third and the fourth digits $0$ and $5$, replace them with $0+5=5$; the result is still $1057$;
- choose the fourth and the fifth digits $5$ and $7$, replace them with $5+7=12$; the result is $10012$.
What's the largest number that can be obtained?
|
Let's think about how a reduction changes the length of $x$. There are two cases. If two adjacent digits sum up to $10$ or greater, then the length doesn't change. Otherwise, the length decreases by one. Obviously, if there exists a reduction that doesn't change the length, then it's better to use it. Which among such reductions should you choose? Well, notice that such a reduction always makes the number strictly smaller (easy to see with some case analysis). Thus, the logical conclusion is to leave the longest possible prefix of $x$ untouched, so the rightmost such reduction changes the number as little as possible. If all reductions decrease the length, then a similar argument can be applied. The sum will be a single digit, but a digit that is greater than or equal to the left one of the adjacent pair. If it is strictly greater, it's easy to see that the leftmost such reduction makes the number the largest possible. The equal case adds more case analysis on top of the proof, but the conclusion remains the same: the leftmost reduction is the best one. As an implementation note, since in this case all the reductions are of the same type, the leftmost reduction always includes the first and the second digits. Overall complexity: $O(|x|)$ per testcase.
|
[
"greedy",
"strings"
] | 1,100
|
for _ in range(int(input())):
x = [ord(c) - ord('0') for c in input()]
n = len(x)
for i in range(n - 2, -1, -1):
if x[i] + x[i + 1] >= 10:
x[i + 1] += x[i] - 10
x[i] = 1
break
else:
x[1] += x[0]
x.pop(0)
print(''.join([chr(c + ord('0')) for c in x]))
|
1626
|
C
|
Monsters And Spells
|
Monocarp is playing a computer game once again. He is a wizard apprentice, who only knows a single spell. Luckily, this spell can damage the monsters.
The level he's currently on contains $n$ monsters. The $i$-th of them appears $k_i$ seconds after the start of the level and has $h_i$ health points. As an additional constraint, $h_i \le k_i$ for all $1 \le i \le n$. All $k_i$ are different.
Monocarp can cast the spell at moments which are positive integer amounts of second after the start of the level: $1, 2, 3, \dots$ The damage of the spell is calculated as follows. If he didn't cast the spell at the previous second, the damage is $1$. Otherwise, let the damage at the previous second be $x$. Then he can choose the damage to be either $x + 1$ or $1$. A spell uses mana: casting a spell with damage $x$ uses $x$ mana. Mana doesn't regenerate.
To kill the $i$-th monster, Monocarp has to cast a spell with damage at least $h_i$ at the exact moment the monster appears, which is $k_i$.
Note that Monocarp can cast the spell even when there is no monster at the current second.
The mana amount required to cast the spells is the sum of mana usages for all cast spells. Calculate the least amount of mana required for Monocarp to kill all monsters.
It can be shown that it's always possible to kill all monsters under the constraints of the problem.
|
Consider the problem with $n=1$. There is a single monster with some health $h$ that appears at some second $k$. In order to kill it, we have to wind up our spell until it has damage $h$. So we have to use it from second $k - h + 1$ to second $k$. Look at it as a segment $[k - h + 1; k]$ on a timeline. Actually, to avoid handling zero-length segments, let's instead say that a segment covers the time from $k - h$ non-inclusive to $k$ inclusive, producing a half-interval $(k - h; k]$. This way, the total mana cost will be $\frac{\mathit{len}(\mathit{len} + 1)}{2}$, where $\mathit{len}$ is the length of the half-interval. Now let $n=2$. There are two time segments. If they don't intersect (half-intervals $(1; 2]$ and $(2; 3]$ don't intersect), then it's always better to wind up the spell for the monsters separately instead of saving the damage. However, if they intersect, then we have no choice other than to carry the damage from the earlier one over to the later one; otherwise, there won't be enough time to wind up the spell. What does that mean in a mathematical sense? The answer is the union of the two half-intervals: if they don't intersect, they are left as is; otherwise, they become one half-interval that covers them both. Now add the third monster into the construction. The same argument applies: while there exists a pair of intersecting half-intervals, keep uniting them. The union of all half-intervals can be found in $O(n \log n)$, but the constraints allowed slower approaches as well.
|
[
"binary search",
"data structures",
"dp",
"greedy",
"implementation",
"math",
"two pointers"
] | 1,700
|
for _ in range(int(input())):
n = int(input())
k = list(map(int, input().split()))
h = list(map(int, input().split()))
st = []
for i in range(n):
st.append([k[i] - h[i], k[i]])
st.sort()
l, r = -1, -1
ans = 0
for it in st:
if it[0] >= r:
ans += (r - l) * (r - l + 1) // 2
l, r = it
else:
r = max(r, it[1])
ans += (r - l) * (r - l + 1) // 2
print(ans)
|
1626
|
D
|
Martial Arts Tournament
|
Monocarp is planning to host a martial arts tournament. There will be three divisions based on weight: lightweight, middleweight and heavyweight. The winner of each division will be determined by a single elimination system.
In particular, that implies that the number of participants in each division should be a power of two. Additionally, each division should have a non-zero amount of participants.
$n$ participants have registered for the tournament so far, the $i$-th of them weighs $a_i$. To split participants into divisions, Monocarp is going to establish two integer weight boundaries $x$ and $y$ ($x < y$).
All participants who weigh strictly less than $x$ will be considered lightweight. All participants who weigh greater or equal to $y$ will be considered heavyweight. The remaining participants will be considered middleweight.
It's possible that the distribution doesn't make the number of participants in each division a power of two. It can also lead to empty divisions. To fix the issues, Monocarp can invite an arbitrary number of participants to each division.
Note that Monocarp can't kick out any of the $n$ participants who have already registered for the tournament.
However, he wants to invite as few extra participants as possible. Help Monocarp choose $x$ and $y$ in such a way that the total amount of extra participants required is as small as possible. Output that amount.
|
Sort the weights; now choosing $x$ and $y$ splits the array into three consecutive segments. Consider a naive solution to the problem. You can iterate over the lengths of the first and second segments; the third segment includes everyone remaining. Now you have to check whether there exist some $x$ and $y$ that produce such segments. $x$ can be taken equal to the first element of the second segment (since exactly the elements of the first segment are smaller than it). Similarly, $y$ can be taken equal to the first element of the third segment. However, if the last element of some segment is equal to the first element of the next segment, no $x$ or $y$ can split the array like that; otherwise, such a split is possible. So you can iterate over the lengths, check the correctness and choose the best answer.

Now let's optimize it using the condition about powers of two. First, iterate over the size of the middle division (which is a power of two). Then iterate over the length of the first segment (which need not be a power of two) and check that the first segment is valid. So we have fixed the length of the first segment and some value which is greater than or equal to the length of the second segment. That value isn't necessarily equal to the length of the second segment, because the produced segment might be invalid. So there is a greedy idea: the second segment should be as long as possible under the constraint that it doesn't exceed the fixed value. The intuition is the following. Consider the longest possible valid segment and take the last element away from it. We will have to invite one more participant to the middle division, and that element will also get added to the third segment, increasing its length. So potentially, you can only increase the required number of participants to invite.

This can be implemented in the following fashion. For each position $i$, precalculate $\mathit{left}_i$: the closest possible segment border to the left of $i$. Iterate over the size of the middle division $\mathit{mid}$ as a power of two. Iterate over the length of the first segment $\mathit{len}_1$. Find the closest border to the left of $\mathit{len}_1 + \mathit{mid}$, which is $\mathit{left}[\mathit{len}_1 + \mathit{mid}]$. Get the lengths of the second and the third segments. Find the closest powers of two to each length and update the answer. Overall complexity: $O(n \log n)$ per testcase.
|
[
"binary search",
"brute force",
"greedy",
"math"
] | 2,100
|
calc = 1
nxt = [1, 0]
for _ in range(int(input())):
n = int(input())
a = sorted(list(map(int, input().split())))
while calc <= n:
for i in range(calc):
nxt.append(calc - i - 1)
calc *= 2
left = []
for i in range(n + 1):
if i == 0 or i == n or a[i] != a[i - 1]:
left.append(i)
else:
left.append(left[-1])
mid = 1
ans = n + 2
while mid <= n:
for len1 in range(n + 1):
if left[len1] == len1:
len2 = left[min(n, len1 + mid)] - len1
len3 = n - len1 - len2
ans = min(ans, nxt[len1] + (mid - len2) + nxt[len3])
mid *= 2
print(ans)
|
1626
|
E
|
Black and White Tree
|
You are given a tree consisting of $n$ vertices. Some of the vertices (at least two) are black, all the other vertices are white.
You place a chip on one of the vertices of the tree, and then perform the following operations:
- let $x$ be the current vertex where the chip is located. You choose a black vertex $y$, and then move the chip along the first edge on the simple path from $x$ to $y$.
You are not allowed to choose the same black vertex $y$ in two operations in a row (i. e., for every two consecutive operations, the chosen black vertex should be different).
You end your operations when the chip moves to the black vertex (if it is initially placed in a black vertex, you don't perform the operations at all), or when the number of performed operations exceeds $100^{500}$.
For every vertex $i$, you have to determine if there exists a (possibly empty) sequence of operations that moves the chip to some black vertex, if the chip is initially placed on the vertex $i$.
|
I think there are some ways to solve this problem with casework, but let's try to come up with an intuitive and easy-to-implement approach. It's always possible to move closer to some black vertex, no matter in which vertex you currently are and which black vertex was used in the previous operation. However, sometimes if you try to move along an edge, you immediately get forced back. Let's analyze when we can move without being forced back. We can move along the edge $x \rightarrow y$ so that our next action is not moving back if: either $y$ is black (there is no next action); or, if we remove the edge between $x$ and $y$, the number of black vertices in $y$'s component is at least $2$ (we can use one of them to go from $x$ to $y$, and another one to continue our path). Note that the cases $x \rightarrow y$ and $y \rightarrow x$ may be different (sometimes it will be possible to move in one direction, and impossible to move in the opposite direction). Let's treat this possible move $x \rightarrow y$ as an arc in a directed graph. We can find all such arcs if we can answer queries of the type "count black vertices in the subtree of some vertex", and this can be done by rooting the tree and calculating this information for each subtree with DFS. Now, if there is a way from some vertex $i$ to some black vertex along these arcs, the answer for the vertex $i$ is $1$. How can we find all such vertices? Let's transpose the graph (change the direction of each arc to the opposite); now we need to find all vertices reachable from black ones, which is easily done with multi-source BFS or DFS. The complexity of this solution is $O(n)$.
|
[
"dfs and similar",
"greedy",
"trees"
] | 2,400
|
#include<bits/stdc++.h>
using namespace std;
const int N = 300043;
vector<int> g[N];
int cnt[N];
int c[N];
vector<int> g2[N];
int par[N];
int used[N];
void dfs(int x, int p = -1)
{
par[x] = p;
for(auto y : g[x])
if(y != p)
{
dfs(y, x);
cnt[x] += cnt[y];
}
cnt[x] += c[x];
}
int main()
{
int n;
scanf("%d", &n);
for(int i = 0; i < n; i++) scanf("%d", &c[i]);
for(int i = 1; i < n; i++)
{
int x, y;
scanf("%d %d", &x, &y);
--x;
--y;
g[x].push_back(y);
g[y].push_back(x);
}
dfs(0);
for(int i = 0; i < n; i++)
for(auto j : g[i])
{
if(i == par[j])
{
if(c[i] == 1 || cnt[0] - cnt[j] > 1)
g2[i].push_back(j);
}
else
{
if(c[i] == 1 || cnt[i] > 1)
g2[i].push_back(j);
}
}
queue<int> q;
for(int i = 0; i < n; i++)
{
if(c[i] == 1)
{
q.push(i);
used[i] = 1;
}
}
while(!q.empty())
{
int k = q.front();
q.pop();
for(auto y : g2[k])
if(used[y] == 0)
{
used[y] = 1;
q.push(y);
}
}
for(int i = 0; i < n; i++)
printf("%d ", used[i]);
puts("");
}
|
1626
|
F
|
A Random Code Problem
|
You are given an integer array $a_0, a_1, \dots, a_{n - 1}$, and an integer $k$. You perform the following code with it:
\begin{verbatim}
long long ans = 0; // create a 64-bit signed variable which is initially equal to 0
for(int i = 1; i <= k; i++)
{
int idx = rnd.next(0, n - 1); // generate a random integer between 0 and n - 1, both inclusive
// each integer from 0 to n - 1 has the same probability of being chosen
ans += a[idx];
a[idx] -= (a[idx] % i);
}
\end{verbatim}
Your task is to calculate the expected value of the variable ans after performing this code.
Note that the input is generated according to special rules (see the input format section).
|
I think it's easier to approach this problem using combinatorics instead of probability theory methods, so we'll calculate the answer as "the sum of values of ans over all ways to choose the index on each iteration of the loop". If a number $a_{idx}$ is chosen on the iteration $i$ of the loop, then it is reduced to the maximum number divisible by $i$ that doesn't exceed the initial value. So, if a number is divisible by all integers from $1$ to $k$, i. e. divisible by $L = LCM(1,2,\dots,k)$, it won't be changed in the operation. Furthermore, if $\lfloor \frac{a_{idx}}{L} \rfloor = x$, then the value of this element won't become less than $x \cdot L$. It means that we can interpret each number $a_i$ as $a_i = x \cdot L + y$, where $x = \lfloor \frac{a_{i}}{L} \rfloor$ and $y = a_i \bmod L$. The part with $x \cdot L$ will always be added to the variable ans when this element is chosen, so let's add $k \cdot n^{k-1} \cdot x \cdot L$ to the answer (which is the contribution of $x \cdot L$ over all ways to choose the indices in the operations), and work with $a_i \bmod L$ instead of $a_i$. Now all elements of the array are less than $L$. We can use this constraint by writing the following dynamic programming to solve the problem: $dp_{i,j}$ is the number of appearances of the integer $i$ in the array $a$ over all ways to choose the indices for the first $j$ iterations. For $j = 0$, $dp$ is just the number of occurrences of each integer in the array $a$. The transitions from $dp_{i,j}$ are the following ones: if this element is chosen in the operation, then it becomes $i' = i - (i \bmod (j + 1))$, and we transition to the state $dp_{i',j+1}$; otherwise, the element is unchanged, and we transition to the state $dp_{i,j+1}$, multiplying the current value by $n-1$, which is the number of ways to choose some other element in the operation. How can we use this dynamic programming to get the answer? 
On the $(j+1)$-th iteration, the number of times we choose the integer $i$ is exactly $dp_{i,j}$, and the number of ways to choose the indices in the remaining operations is $n^{k-j-1}$, so we add $i \cdot dp_{i,j} \cdot n^{k-j-1}$ to the answer for every such state $dp_{i,j}$. This solution runs in $O(n + LCM(1,2,\dots,k) \cdot k)$ time, which may be too slow if not implemented carefully. Fortunately, we have an easy way to optimize it: use $L = LCM(1,2,\dots,k-1)$ instead of $L = LCM(1,2,\dots,k)$, which reduces $L$ by a factor of $17$ in the worst-case scenario for our solution. We can do this because even if an integer is changed on the $k$-th operation, we are not interested in this change, since it is the last operation.
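For intuition, the quantity the DP computes — the sum of `ans` over all $n^k$ equally likely index sequences, divided by $n^k$ — can be checked by brute force on tiny inputs. The sketch below is a verification aid under the assumption that $n^k$ is small, not the intended solution:

```python
from fractions import Fraction
from itertools import product

def expected_ans_bruteforce(a, k):
    # Enumerate all n^k equally likely index sequences, simulate the loop
    # from the statement for each, and average the resulting `ans`.
    n = len(a)
    total = 0
    for seq in product(range(n), repeat=k):
        arr = list(a)
        ans = 0
        for i, idx in enumerate(seq, start=1):
            ans += arr[idx]
            arr[idx] -= arr[idx] % i  # round down to a multiple of i
        total += ans
    return Fraction(total, n ** k)
```

For example, with a single element `[5]` and $k = 2$, the only sequence adds $5$, then rounds $5$ down to $4$ and adds... actually adds $5$ again before the second rounding, giving an expected value of $10$.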
|
[
"combinatorics",
"dp",
"math",
"number theory",
"probabilities"
] | 2,800
|
#include <bits/stdc++.h>
using namespace std;
const int MOD = 998244353;
const int L = 720720;
int add(int x, int y, int m = MOD)
{
x += y;
if(x >= m) x -= m;
return x;
}
int mul(int x, int y, int m = MOD)
{
return (x * 1ll * y) % m;
}
int binpow(int x, int y)
{
int z = 1;
while(y > 0)
{
if(y % 2 == 1) z = mul(z, x);
x = mul(x, x);
y /= 2;
}
return z;
}
int inv(int x)
{
return binpow(x, MOD - 2);
}
int divide(int x, int y)
{
return mul(x, inv(y));
}
int main()
{
int n, a0, x, y, k, M;
cin >> n >> a0 >> x >> y >> k >> M;
vector<int> arr(n);
arr[0] = a0;
for(int i = 1; i < n; i++)
arr[i] = add(mul(arr[i - 1], x, M), y, M);
int ans = 0;
int total_ways = binpow(n, k);
int coeff = mul(divide(total_ways, n), k);
vector<vector<int>> dp(k, vector<int>(L));
for(int i = 0; i < n; i++)
{
int p = arr[i] / L;
int q = arr[i] % L;
dp[0][q]++;
ans = add(ans, mul(p, mul(L, coeff)));
}
int cur_coeff = divide(total_ways, n);
for(int i = 1; i <= k; i++)
{
for(int j = 0; j < L; j++)
{
int cur = dp[i - 1][j];
if(i < k)
dp[i][j] = add(dp[i][j], mul(n - 1, cur));
ans = add(ans, mul(j, mul(cur, cur_coeff)));
if(i < k)
dp[i][j - (j % i)] = add(dp[i][j - (j % i)], cur);
}
cur_coeff = divide(cur_coeff, n);
}
cout << ans << endl;
}
|
1627
|
A
|
Not Shading
|
There is a grid with $n$ rows and $m$ columns. Some cells are colored black, and the rest of the cells are colored white.
In one operation, you can select some \textbf{black} cell and do \textbf{exactly one} of the following:
- color all cells in its row black, or
- color all cells in its column black.
You are given two integers $r$ and $c$. Find the minimum number of operations required to make the cell in row $r$ and column $c$ black, or determine that it is impossible.
|
There are several cases to consider. If all the cells are white, then it is impossible to perform any operations, so you cannot make any cell black; the answer is $-1$. If the cell in row $r$ and column $c$ is already black, then we don't need to perform any operations; the answer is $0$. If any of the cells in row $r$ is already black (that is, the cell we need to turn black shares a row with a black cell), then we can take this black cell and make row $r$ black. The same is true if any of the cells in column $c$ is already black. The answer is $1$. Otherwise, we claim the answer is $2$: take any black cell and make its row black. This means that every column contains a black cell, so now we can take column $c$ and turn it black. Thus the answer is $2$.
|
[
"constructive algorithms",
"implementation"
] | 800
|
for t in range(int(input())):
n, m, r, c = map(int, input().split())
a = [list(input()) for i in range(n)]
if "B" not in str(a):
print(-1)
continue
print(2 - ("B" in a[r-1] + list([*zip(*a)][c-1])) - (a[r-1][c-1] == 'B'))
|
1627
|
B
|
Not Sitting
|
Rahul and Tina are looking forward to starting their new year at college. As they enter their new classroom, they observe the seats of students are arranged in a $n \times m$ grid. The seat in row $r$ and column $c$ is denoted by $(r, c)$, and the distance between two seats $(a,b)$ and $(c,d)$ is $|a-c| + |b-d|$.
As the class president, Tina has access to \textbf{exactly} $k$ buckets of pink paint. The following process occurs.
- First, Tina chooses exactly $k$ seats in the classroom to paint with pink paint. One bucket of paint can paint exactly one seat.
- After Tina has painted $k$ seats in the previous step, Rahul chooses where he sits. He will not choose a seat that has been painted pink due to his hatred of the colour pink.
- After Rahul has chosen his seat, Tina chooses a seat for herself. She can choose any of the seats, painted or not, other than the one chosen by Rahul.
Rahul wants to choose a seat such that he sits as close to Tina as possible. However, Tina wants to sit as far away from Rahul as possible due to some complicated relationship history that we couldn't fit into the statement!
Now, Rahul wonders for $k = 0, 1, \dots, n \cdot m - 1$, if Tina has $k$ buckets of paint, how close can Rahul sit to Tina, if both Rahul and Tina are aware of each other's intentions and they both act as strategically as possible? Please help satisfy Rahul's curiosity!
|
Let's denote Rahul's seat as $(a, b)$ and Tina's seat as $(c, d)$. Notice that in the distance between their seats, $|a-c| + |b-d|$, the terms $|a-c|$ and $|b-d|$ are independent of each other, i.e. both the $x$-coordinate and the $y$-coordinate of Tina's seat can be chosen independently. From the answer to the hint above, we can see that the optimal seat for Tina in a $1$-dimensional classroom is one of the edge seats, and combining this with the previous observation means that the optimal seat for Tina is always one of the corner seats. Since Rahul chooses seats optimally, he knows that Tina will choose one of the corner seats, so he will choose a seat such that the maximum distance from it to one of the corner seats is minimised. As Tina also chooses which seats to paint optimally, the best strategy for her is to paint pink the $k$ seats with the minimum maximum distance to a corner seat. We can implement this by calculating, for each seat, the maximum distance from it to one of the corner seats, and storing these values in an array. After sorting this array in non-decreasing order, we can simply print all $n \cdot m$ values of the array, as the $i$-th value of the array ($0$-indexed) is the optimal answer for $k = i$. This can be implemented in $\mathcal{O}(nm\log(nm))$ time per test case.
|
[
"games",
"greedy",
"sortings"
] | 1,300
|
import sys
input = lambda: sys.stdin.readline().rstrip("\r\n")
for t in range(int(input())):
n, m = map(int, input().split())
print(*sorted(max(x, n-1-x) + max(y, m-1-y) for x in range(n) for y in range(m)))
|
1627
|
C
|
Not Assigning
|
You are given a tree of $n$ vertices numbered from $1$ to $n$, with edges numbered from $1$ to $n-1$. A tree is a connected undirected graph without cycles. You have to assign integer weights to each edge of the tree, such that the resultant graph is a prime tree.
A prime tree is a tree where the weight of every path consisting of \textbf{one or two edges} is prime. A path should not visit any vertex twice. The weight of a path is the sum of edge weights on that path.
Consider the graph below. It is a prime tree as the weight of every path of two or less edges is prime. For example, the following path of two edges: $2 \to 1 \to 3$ has a weight of $11 + 2 = 13$, which is prime. Similarly, the path of one edge: $4 \to 3$ has a weight of $5$, which is also prime.
Print \textbf{any} valid assignment of weights such that the resultant tree is a prime tree. If there is no such assignment, then print $-1$. It can be proven that if a valid assignment exists, one exists with weights between $1$ and $10^5$ as well.
|
Let us first see when a valid assignment does not exist. Claim. If any vertex has $3$ or more edges adjacent to it, no valid assignment exists. Proof. Consider a graph where a vertex has edges to three other vertices with weights $x$, $y$ and $z$ respectively. For a valid assignment, $x$, $y$ and $z$ need to be primes themselves. Also, $x+y$, $y+z$ and $x+z$ need to be primes too. Since $x, y, z \ge 2$ (as $2$ is the smallest prime), we have $x + y, x + z, y + z \ge 4$, so they must be odd primes. This implies that $x$ and $y$ have opposite parity, $y$ and $z$ have opposite parity, and $x$ and $z$ have opposite parity. As these three conditions cannot all hold together, we have a contradiction. Proven. So, we have a tree where every vertex has either one or two edges adjacent to it. Such a tree has exactly two leaf nodes for $n \ge 2$ and the following structure, where $V_1$ and $V_n$ are the leaf nodes: $V_1 \longleftrightarrow V_2 \longleftrightarrow V_3 \dots \longleftrightarrow V_{n-1} \longleftrightarrow V_n$ Thus, starting a DFS from any leaf node, we can assign weights $2$ and $3$ (or $2$ and the first number of any twin prime pair) alternatingly to form a prime tree, as $2$, $3$ and $2+3 = 5$ are all primes. Expected time complexity: $\mathcal{O}(n)$
|
[
"constructive algorithms",
"dfs and similar",
"number theory",
"trees"
] | 1,400
|
import sys
input = lambda: sys.stdin.readline().rstrip("\r\n")
for t in range(int(input())):
n = int(input())
graph = [[] for __ in range(n + 1)]
ans = [-1] * (n - 1)
for i in range(n - 1):
x, y = map(int, input().split())
graph[x] += [(y, i)]
graph[y] += [(x, i)]
if max(len(graph[i]) for i in range(n + 1)) > 2:
print(-1)
continue
cur, prev = 1, None
while len(graph[cur]) != 1: cur += 1
for p in range(n - 1):
for x, i in graph[cur]:
if x != prev:
ans[i] = [17, 2][p % 2]
cur, prev = x, cur
break
print(*ans)
|
1627
|
D
|
Not Adding
|
You have an array $a_1, a_2, \dots, a_n$ consisting of $n$ \textbf{distinct} integers. You are allowed to perform the following operation on it:
- Choose two elements from the array $a_i$ and $a_j$ ($i \ne j$) such that $\gcd(a_i, a_j)$ is not present in the array, and add $\gcd(a_i, a_j)$ to the end of the array. Here $\gcd(x, y)$ denotes greatest common divisor (GCD) of integers $x$ and $y$.
Note that the array changes after each operation, and the subsequent operations are performed on the new array.
What is the \textbf{maximum} number of times you can perform the operation on the array?
|
Note that the $\gcd$ of two numbers cannot exceed their maximum. Let the maximum element of the array be $A$. So for every number from $1$ to $A$, we try to check whether that number can be included in the array after performing some operations or not. How to check for a particular number $x$? For $x$ to be in the final array, either:
- it already exists in the initial array, or
- the $\gcd$ of all multiples of $x$ present in the initial array equals $x$.
Proof. For $x$ to be added after some operations, there must be some subset of the array which has a $\gcd$ equal to $x$. We can perform the operations by taking the current gcd and one element from the subset at a time, and at the end we will obtain $x$. Note that such a subset can only contain multiples of $x$, so it is enough to check that the $\gcd$ of all multiples is equal to $x$. Thus, the overall solution takes $\mathcal{O}(n + A \log{A})$.
|
[
"brute force",
"dp",
"math",
"number theory"
] | 1,900
|
import io, os
from math import gcd
input = io.BytesIO(os.read(0, os.fstat(0).st_size)).readline
n = int(input())
a = list(map(int, input().split()))
MAXN = 1000000
present = [False] * (MAXN + 1)
for each in a:
present[each] = True
ans = 0
for i in range(MAXN, 0, -1):
if present[i]: continue
g = 0
for j in range(2*i, MAXN + 1, i):
if present[j]:
if g == 0:
g = j
elif gcd(g, j) == i:
g = i
break
if g == i:
ans += 1
present[i] = True
print(ans)
|
1627
|
E
|
Not Escaping
|
Major Ram is being chased by his arch enemy Raghav. Ram must reach the top of the building to escape via helicopter. The building, however, is on fire. Ram must choose the optimal path to reach the top of the building to lose the minimum amount of health.
The building consists of $n$ floors, each with $m$ rooms each. Let $(i, j)$ represent the $j$-th room on the $i$-th floor. Additionally, there are $k$ ladders installed. The $i$-th ladder allows Ram to travel from $(a_i, b_i)$ to $(c_i, d_i)$, but \textbf{not in the other direction}. Ram also gains $h_i$ health points if he uses the ladder $i$. \textbf{It is guaranteed $a_i < c_i$ for all ladders.}
If Ram is on the $i$-th floor, he can move either left or right. Travelling across floors, however, is treacherous. If Ram travels from $(i, j)$ to $(i, k)$, he loses $|j-k| \cdot x_i$ health points.
Ram enters the building at $(1, 1)$ while his helicopter is waiting at $(n, m)$. What is the minimum amount of health Ram loses if he takes the most optimal path? Note this answer may be negative (in which case he gains health). Output "NO ESCAPE" if no matter what path Ram takes, he cannot escape the clutches of Raghav.
|
The building plan in the input consists of $n \cdot m$ rooms, which in the worst case is $10^{10}$. However, most of these rooms are unimportant to us. We can instead use a much reduced version of the building consisting of at most $2k + 2$ rooms: both endpoints of each ladder, as well as our starting and target rooms. As every ladder connects a lower floor to a higher floor and is one-directional, we can process the rooms floor by floor, from floor $1$ to floor $n$. On each floor, let's sort all the important rooms in non-decreasing order of column. Now, we can use dynamic programming, together with the compression previously mentioned, to calculate the minimum cost to get to every important room. First, we calculate the minimum cost to get to each room using a room on the same floor as an intermediate. We can do this by iterating over the rooms on a floor twice, once from left to right, and then once from right to left. Then, for each room on the floor, if it has a ladder going up from it, we can update the $dp$ value of the room where the ladder ends. Our answer is the $dp$ value of the target room. This can be implemented in $\mathcal{O}(k\log(k))$ time per test case.
|
[
"data structures",
"dp",
"implementation",
"shortest paths",
"two pointers"
] | 2,200
|
import sys, os, io
from collections import defaultdict
input = io.BytesIO(os.read(0, os.fstat(0).st_size)).readline
inf = 5 * 10**16
for _ in range(int(input())):
n, m, k = map(int, input().split())
a = list(map(int, input().split()))
go_to = defaultdict(list)
coords = set()
coords.add((1, 1))
coords.add((n, m))
for i in range(k):
p, q, r, s, t = map(int, input().split())
go_to[p * m + q] += [[r, s, t]]
coords.add((p, q))
coords.add((r, s))
coords = sorted(list(coords))
N, index = len(coords), {}
dp, prefix, suffix = [-inf] * N, [-inf] * N, [-inf] * N
for i in range(N): index[coords[i]] = i
pairs, i = [], 0
dp[0] = 0
while i < N:
j = i
while j + 1 < N and coords[j + 1][0] == coords[j][0]:
j += 1
if coords[j][0] == 1:
dp[j] = -a[0] * (coords[j][1] - 1)
pairs += [(i, j)]
i = j + 1
for start, end in pairs:
leftVal = rightVal = -inf
for i in range(start, end + 1):
x, y = coords[i]
leftVal = max(leftVal, prefix[i])
dp[i] = max(dp[i], leftVal - a[x - 1] * (y - 1))
for i in range(end, start - 1, -1):
x, y = coords[i]
rightVal = max(rightVal, suffix[i])
dp[i] = max(dp[i], rightVal - a[x - 1] * (m - y))
for i in range(start, end + 1):
if dp[i] == -inf: continue
x, y = coords[i]
for r, s, t in go_to[x * m + y]:
j = index[(r, s)]
dp[j] = max(dp[j], dp[i] + t)
prefix[j] = max(prefix[j], dp[j] + a[r - 1] * (s - 1))
suffix[j] = max(suffix[j], dp[j] + a[r - 1] * (m - s))
print(-dp[-1] if dp[-1] != -inf else "NO ESCAPE")
|
1627
|
F
|
Not Splitting
|
There is a $k \times k$ grid, where $k$ is even. The square in row $r$ and column $c$ is denoted by $(r,c)$. Two squares $(r_1, c_1)$ and $(r_2, c_2)$ are considered adjacent if $\lvert r_1 - r_2 \rvert + \lvert c_1 - c_2 \rvert = 1$.
An array of adjacent pairs of squares is called strong if it is possible to cut the grid along grid lines into two connected, congruent pieces so that each pair is part of the \textbf{same} piece. Two pieces are congruent if one can be matched with the other by translation, rotation, and reflection, or a combination of these.
\begin{center}
The picture above represents the first test case. Arrows indicate pairs of squares, and the thick black line represents the cut.
\end{center}
You are given an array $a$ of $n$ pairs of adjacent squares. Find the size of the largest strong subsequence of $a$. An array $p$ is a subsequence of an array $q$ if $p$ can be obtained from $q$ by deletion of several (possibly, zero or all) elements.
|
Claim. Any cut that splits the square into two congruent parts is rotationally symmetric about the center by $180^{\circ}$. Proof. The case when the cut is a vertical or horizontal line is a trivial special case; assume otherwise. Then: one piece has a row containing more than $\frac{k}{2}$, but fewer than $k$, squares; one piece has a column containing more than $\frac{k}{2}$, but fewer than $k$, squares; and each piece contains exactly two of the corners of the grid. Now consider the isometry of the plane bringing one piece to the other. The corners of the grid belonging to one piece must map to the corners of the grid belonging to the other piece, since there has to be a straight edge of length $k$ connecting them, which only exists between two corners. There are precisely two such isometries that fit within the bounds of the square: a reflection and a $180^{\circ}$ rotation, pictured below, respectively. However, if one piece has a row containing more than $\frac{k}{2}$ but fewer than $k$ squares, then under the reflection that row maps into the same row, which would then have to contain more than $k$ squares. The same argument with a column rules out a reflection across the other axis. Hence the cut must be rotationally symmetric. Now we can turn the problem into a graph problem. Consider the graph whose vertices are vertices of the grid and whose edges are edges of the grid. We need to minimize the number of pairs of squares that we "split up" with our cut. Note that each pair of squares shares an edge. Thus, we want to minimize the number of these edges we pass through. Let's initially weight all edges with $0$, and increase the weight by $1$ for each edge given. Since each cut is rotationally symmetric about the center, we can just consider finding a minimal-weight path from the boundary to the center, and then rotating this path $180^{\circ}$ to complete a valid cut. However, there are three details we need to iron out: The cut may pass through other weighted edges when rotated. 
We need to find an efficient way to find the shortest path from each boundary point to the center. The cut may be self-intersecting. For the first point, whenever a given pair increases the weight of an edge, we also increase the weight of that edge's $180^{\circ}$-rotated copy; the weight of a half-path then already accounts for the pairs cut by its rotated half. The second point can be accounted for by noticing that the boundary of the square has all edges of weight $0$, so we can just run single-source shortest paths from any single point on the boundary. For the third point, consider some path that intersects itself when rotated. Suppose we build the path edge by edge, along with its mirror copy. At some point we will hit the mirror copy. But that means that there is a way with strictly fewer edges to reach the same point: just take the path from this intersection to the start of the mirror copy. See the image below: instead of taking the long path (in blue/orange), we can take the shorter path (in green/purple). So now all our details are successfully ironed out. We can just run Dijkstra's algorithm once between the center and the boundary and find the length of the shortest path, solving the problem in $\mathcal{O}(n + k^2 \log k)$.
|
[
"geometry",
"graphs",
"greedy",
"implementation",
"shortest paths"
] | 2,700
|
import io, os
input = io.BytesIO(os.read(0, os.fstat(0).st_size)).readline
from heapq import heappop, heappush
for _ in range(int(input())):
    q, n = map(int, input().split())
    # graph over the (n+1) x (n+1) grid vertices; edge weights start at 0
    graph = [[[] for j in range(n + 1)] for i in range(n + 1)]
    for x in range(n + 1):
        for y in range(n + 1):
            if x - 1 >= 0: graph[x-1][y] += [[x, y, 0]]
            if x + 1 <= n: graph[x+1][y] += [[x, y, 0]]
            if y - 1 >= 0: graph[x][y-1] += [[x, y, 0]]
            if y + 1 <= n: graph[x][y+1] += [[x, y, 0]]
    def update(x1, y1, x2, y2):
        # add 1 to the weight of the edge (x1, y1)-(x2, y2), in both directions
        for i in range(len(graph[x1][y1])):
            if (graph[x1][y1][i][0], graph[x1][y1][i][1]) == (x2, y2):
                graph[x1][y1][i][2] += 1
        for i in range(len(graph[x2][y2])):
            if (graph[x2][y2][i][0], graph[x2][y2][i][1]) == (x1, y1):
                graph[x2][y2][i][2] += 1
    for i in range(q):
        x1, y1, x2, y2 = map(int, input().split())
        # weight the grid edge separating the pair, and its 180-degree rotated copy
        if x1 == x2:
            y1, y2 = min(y1, y2), max(y1, y2)
            update(x1 - 1, y1, x1, y1)
            update(n - (x1 - 1), n - y1, n - x1, n - y1)
        else:
            x1, x2 = min(x1, x2), max(x1, x2)
            update(x1, y1 - 1, x1, y1)
            update(n - x1, n - (y1 - 1), n - x1, n - y1)
    # Dijkstra from the center; stop at the first boundary vertex reached
    dist = [[float("inf")] * (n + 1) for __ in range(n + 1)]
    queue = []
    heappush(queue, (0, n // 2, n // 2))
    dist[n // 2][n // 2] = 0
    while queue:
        length, x, y = heappop(queue)
        if x == 0 or y == 0:
            print(q - length)
            break
        if dist[x][y] != length:
            continue
        for x2, y2, w in graph[x][y]:
            if dist[x2][y2] > dist[x][y] + w:
                dist[x2][y2] = dist[x][y] + w
                heappush(queue, (dist[x2][y2], x2, y2))
|
1628
|
A
|
Meximum Array
|
Mihai has just learned about the MEX concept and since he liked it so much, he decided to use it right away.
Given an array $a$ of $n$ non-negative integers, Mihai wants to create \textbf{a new array $b$} that is formed in the following way:
While $a$ is not empty:
- Choose an integer $k$ ($1 \leq k \leq |a|$).
- Append the MEX of the first $k$ numbers of the array $a$ to the end of array $b$ and erase them from the array $a$, shifting the positions of the remaining numbers in $a$.
But, since Mihai loves big arrays as much as the MEX concept, he wants the new array $b$ to be the \textbf{lexicographically maximum}. So, Mihai asks you to tell him what the maximum array $b$ that can be created by constructing the array optimally is.
An array $x$ is lexicographically greater than an array $y$ if in the first position where $x$ and $y$ differ $x_i > y_i$ or if $|x| > |y|$ and $y$ is a prefix of $x$ (where $|x|$ denotes the size of the array $x$).
The \textbf{MEX} of a set of non-negative integers is the minimal non-negative integer such that it is not in the set. For example, \textbf{MEX}({${1, 2, 3}$}) $= 0$ and \textbf{MEX}({${0, 1, 2, 4, 5}$}) $= 3$.
|
The splitting points can be picked greedily. Firstly, find the MEX of every suffix; this can easily be done in $O(n \log n)$ or $O(n)$. Instead of removing elements, consider that we need to split the array into some number of subarrays. Let $p$ be the index we are currently at and MEX$(l, r)$ the MEX of the set formed from the numbers $[a_l, a_{l+1}, ..., a_r]$. Start the process by looking at the first element, so $p = 1$ initially. Then repeat the following process as long as $p \leq n$: find the first position $j$ ($p \leq j \leq n$) such that MEX$(p, j) =$ MEX$(p, n)$, append this MEX to the array $b$, and continue the same process from position $j + 1$, so $p = j + 1$. This process always produces the optimal answer: for each element of $b$ we remove the minimum number of elements from $a$ while still obtaining the maximum possible value of $b_i$, so as many elements as possible remain for the future choices. Complexity: $O(n \log n)$ or $O(n)$ depending on implementation.
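As an illustration, here is a minimal Python sketch of this greedy process (the function name and structure are my own; it keeps counts of the still-unprocessed elements so that each block's target MEX and splitting point are found in overall linear time):

```python
def meximum(a):
    n = len(a)
    cnt = [0] * (n + 2)              # counts of values not yet consumed (values capped at n+1)
    for v in a:
        cnt[min(v, n + 1)] += 1
    b, p = [], 0
    while p < n:
        target = 0                   # MEX of the remaining suffix a[p:]
        while cnt[target] > 0:
            target += 1
        seen = [False] * (target + 1)
        need, j = target, p
        while need > 0:              # consume elements until the block's MEX reaches target
            v = min(a[j], n + 1)
            cnt[v] -= 1
            if v < target and not seen[v]:
                seen[v] = True
                need -= 1
            j += 1
        if j == p:                   # target == 0: take a single element
            cnt[min(a[j], n + 1)] -= 1
            j += 1
        b.append(target)
        p = j
    return b
```

Since every block with positive MEX consumes at least `target` elements, the repeated MEX scans amortize to $O(n)$ overall.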
|
[
"binary search",
"constructive algorithms",
"greedy",
"implementation",
"math",
"two pointers"
] | 1,400
| null |
1628
|
B
|
Peculiar Movie Preferences
|
Mihai plans to watch a movie. He only likes palindromic movies, so he wants to skip some (possibly zero) scenes to make the remaining parts of the movie palindromic.
You are given a list $s$ of $n$ non-empty strings of length \textbf{at most $3$}, representing the scenes of Mihai's movie.
A subsequence of $s$ is called awesome if it is non-empty and the concatenation of the strings in the subsequence, in order, is a palindrome.
Can you help Mihai check if there is at least one awesome subsequence of $s$?
A palindrome is a string that reads the same backward as forward, for example strings "z", "aaa", "aba", "abccba" are palindromes, but strings "codeforces", "reality", "ab" are not.
A sequence $a$ is a non-empty subsequence of a non-empty sequence $b$ if $a$ can be obtained from $b$ by deletion of several (possibly zero, but not all) elements.
|
Because of the low constraints on the lengths of the strings, we can prove that it's enough to pair at most $2$ strings to form a palindrome. <proof only checking pairs is enough> Let's assume there is an awesome subsequence of the form xyz, where x and z are single strings from s, and y is anything. If x and z are the same length, they clearly have to be reverses of each other for xyz to be a palindrome, so y is not needed to make it a palindrome. If they are not the same length, one of them is of length 3 and the other is of length 2. Assume x is the string of length 3 and z is the string of length 2. The first two characters of x must be the reverse of z. If x and z are concatenated, the third character of x is in the middle, so it doesn't matter. So in this case too, y is not needed. This proves that if any awesome subsequence exists, there also exists an awesome subsequence of 1 or 2 strings. <proof ends> So, we first check if there exists a palindrome already; if there is, we found a solution! If not, checking every pair would take too long, but we can do it much more efficiently. We can assume that all strings are of length $2$ or $3$, since a string of length $1$ would be a palindrome and we would have found the solution earlier. For each string of length $2$ it's enough to check if before it, we have seen a string of the following $2$ forms: its reverse, or its reverse with a character appended to it (so a string of length $3$), since the last character of a string of length $3$ would be the middle character of the palindrome obtained after concatenation. For each string of length $3$ it's enough to check if before it, we have seen a string of the following $2$ forms: its reverse, or the reverse of the string without considering the first character (so a string of length $2$), since the first character of a string of length $3$ would be the middle character of the palindrome obtained after concatenation. 
All this can be checked using a frequency matrix, map, set or other data structures.
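A sketch of this check in Python (the names are mine: `seen2` and `seen3` hold the strings of length $2$ and $3$ seen so far, and `pref3` holds the two-character prefixes of the length-$3$ strings):

```python
def has_awesome(scenes):
    seen2, seen3, pref3 = set(), set(), set()
    for s in scenes:
        if s == s[::-1]:
            return True                      # the string alone is a palindrome
        r = s[::-1]
        if len(s) == 2:
            # earlier exact reverse, or earlier length-3 string whose
            # first two characters are the reverse (its last char is the middle)
            if r in seen2 or r in pref3:
                return True
            seen2.add(s)
        else:
            # earlier exact reverse, or earlier length-2 string equal to the
            # reversed prefix (this string's first char becomes the middle)
            if r in seen3 or r[:2] in seen2:
                return True
            seen3.add(s)
            pref3.add(s[:2])
    return False
```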
|
[
"greedy",
"strings"
] | 1,700
| null |
1628
|
C
|
Grid Xor
|
Note: The XOR-sum of set $\{s_1,s_2,\ldots,s_m\}$ is defined as $s_1 \oplus s_2 \oplus \ldots \oplus s_m$, where $\oplus$ denotes the bitwise XOR operation.
After almost winning IOI, Victor bought himself an $n\times n$ grid containing integers in each cell. \textbf{$n$ is an even integer.} The integer in the cell in the $i$-th row and $j$-th column is $a_{i,j}$.
Sadly, Mihai stole the grid from Victor and told him he would return it with only one condition: Victor has to tell Mihai the XOR-sum of \textbf{all} the integers in the whole grid.
Victor doesn't remember all the elements of the grid, but he remembers some information about it: For each cell, Victor remembers the XOR-sum of all its neighboring cells.
Two cells are considered neighbors if they share an edge — in other words, for some integers $1 \le i, j, k, l \le n$, the cell in the $i$-th row and $j$-th column is a neighbor of the cell in the $k$-th row and $l$-th column if $|i - k| = 1$ and $j = l$, or if $i = k$ and $|j - l| = 1$.
To get his grid back, Victor is asking you for your help. Can you use the information Victor remembers to find the XOR-sum of the whole grid?
It can be proven that the answer is unique.
|
Let's denote by count$(i,j)$ the number of times cell $(i,j)$ contributed to the chosen queries. We notice that count$(i,j)$ must be odd for all cells $(i,j)$. There are multiple possible solutions for this problem; in the editorial we will describe two of them. <first solution> The following construction satisfies the condition: iterate through all rows from row $2$ to row $n$. For each row, traverse all of its cells, and take cell $(i,j)$ (XOR current_answer, initially $0$, with the remembered value for cell $(i,j)$) if the cell above it, $(i-1,j)$, has so far contributed an even number of times, i.e. count$(i-1,j) \equiv 0 \pmod 2$. Everule gave a proof of correctness of this approach. <first solution> <second solution> Let's try making some pattern that takes all cells exactly once: something like this would work if the board was a triangle instead of a square. But it turns out we can actually completely cover the square by using $4$ copies of such a triangle, rotated: <second solution>
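A Python sketch of the first construction (my own naming, $0$-indexed; `b[i][j]` is the remembered XOR of the neighbours of cell $(i,j)$, and `count` tracks how many chosen cells cover each cell):

```python
def grid_xor(n, b):
    """Recover the XOR of all grid cells from b[i][j] = XOR of the neighbours of (i, j)."""
    count = [[0] * n for _ in range(n)]    # how many chosen cells cover each cell
    ans = 0
    for i in range(1, n):                  # rows 2..n in 1-indexed terms
        for j in range(n):
            if count[i - 1][j] % 2 == 0:   # cell above covered an even number of times
                ans ^= b[i][j]             # choose (i, j): it covers its neighbours
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    x, y = i + di, j + dj
                    if 0 <= x < n and 0 <= y < n:
                        count[x][y] += 1
    return ans
```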
|
[
"constructive algorithms",
"greedy",
"implementation",
"interactive",
"math"
] | 2,300
| null |
1628
|
D1
|
Game on Sum (Easy Version)
|
\textbf{This is the easy version of the problem. The difference is the constraints on $n$, $m$ and $t$. You can make hacks only if all versions of the problem are solved.}
Alice and Bob are given the numbers $n$, $m$ and $k$, and play a game as follows:
The game has a score that Alice tries to maximize, and Bob tries to minimize. The score is initially $0$. The game consists of $n$ turns. Each turn, Alice picks a \textbf{real} number from $0$ to $k$ (inclusive) which Bob either adds to or subtracts from the score of the game. But throughout the game, Bob has to choose to add at least $m$ out of the $n$ turns.
Bob gets to know which number Alice picked before deciding whether to add or subtract the number from the score, and Alice gets to know whether Bob added or subtracted the number for the previous turn before picking the number for the current turn (except on the first turn since there was no previous turn).
If Alice and Bob play optimally, what will the final score of the game be?
|
What is the answer for $n = 2$, $m = 1$? Let's call the number Alice picks on the first turn $x$. If $x$ is small, Bob can add it, and then Alice will have to pick $0$ on the last turn, since Bob will definitely subtract it from the score if it isn't $0$, meaning the score ends up being $x$. If Alice picks a big number, Bob can subtract it. Then Alice will pick the biggest number she can on the last turn, ending up with a score of $k-x$. Since Bob tries to minimize the score of the game, Alice should pick an $x$ that maximizes the value of $\min(x, k-x)$. $x$ and $k-x$ are both linear (straight line) functions of $x$. The $x$ value that maximizes the minimum of two lines is their intersection. The intersection of the lines $x$ and $k-x$ is at $x = k/2$. So Alice should pick $x = k/2$ in the optimal game where $n = 2$, $m = 1$. To generalize the solution to arbitrary $n$ and $m$, we can use DP. Let $DP[i][j]$ be the score of the game with $n = i$, $m = j$. Our base cases will be $DP[i][0] = 0$, since if Bob doesn't have to add anything, Alice has to always pick $0$, and $DP[i][i] = i \cdot k$, since if Bob always has to add, Alice can just pick $k$ every time. When Bob adds to the score, the rest of the game is the same as a game with $1$ fewer turn and $1$ fewer forced add, except the game score is offset by Alice's number. When Bob subtracts from the score, the rest of the game is the same as a game with $1$ fewer turn, except the game score is offset by minus Alice's number. Bob takes the minimum of these, so the $DP$ transition will be $DP[i][j] = \min(DP[i-1][j-1]+x, DP[i-1][j]-x)$ for the $x$ that maximizes this value. This is the same problem as the $n=2$ case, resulting in the intersection between two lines. The score at this intersection simplifies nicely to $DP[i][j] = (DP[i-1][j-1]+DP[i-1][j])/2$. This $O(n\cdot m)$ solution is fast enough to pass the easy version of this problem.
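A direct implementation of this DP as a sketch (the actual problem asks for the answer modulo a prime; `Fraction` is used here only to show the recurrence with exact arithmetic, and the function name is mine):

```python
from fractions import Fraction

def game_score(n, m, k):
    # dp[j] = DP[i][j] for the current i; DP[i][0] = 0 is implicit
    dp = [Fraction(0)] * (m + 1)
    for i in range(1, n + 1):
        new = [Fraction(0)] * (m + 1)
        for j in range(1, min(i, m) + 1):
            if j == i:
                new[j] = Fraction(i * k)           # Bob must always add
            else:
                new[j] = (dp[j - 1] + dp[j]) / 2   # intersection of the two lines
        dp = new
    return dp[m]
```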
|
[
"combinatorics",
"dp",
"games",
"math"
] | 2,100
| null |
1628
|
D2
|
Game on Sum (Hard Version)
|
\textbf{This is the hard version of the problem. The difference is the constraints on $n$, $m$ and $t$. You can make hacks only if all versions of the problem are solved.}
Alice and Bob are given the numbers $n$, $m$ and $k$, and play a game as follows:
The game has a score that Alice tries to maximize, and Bob tries to minimize. The score is initially $0$. The game consists of $n$ turns. Each turn, Alice picks a \textbf{real} number from $0$ to $k$ (inclusive) which Bob either adds to or subtracts from the score of the game. But throughout the game, Bob has to choose to add at least $m$ out of the $n$ turns.
Bob gets to know which number Alice picked before deciding whether to add or subtract the number from the score, and Alice gets to know whether Bob added or subtracted the number for the previous turn before picking the number for the current turn (except on the first turn since there was no previous turn).
If Alice and Bob play optimally, what will the final score of the game be?
|
We have base cases $DP[i][0] = 0$, $DP[i][i] = k\cdot i$, and transition $DP[i][j] = (DP[i-1][j-1]+DP[i-1][j])/2$. Check the explanation for the easy version to see why. This $DP$ can be optimized by looking at contributions from the base cases. If we draw the $DP$ states on a grid and ignore the division by $2$ in the transition, we can see that the number of times state $DP[i][i]$ contributes to state $DP[n][m]$ is the number of paths from $(i, i)$ to $(n, m)$ in the grid such that at each step either both coordinates increase or only the first one does, except we have to exclude paths that go through other base cases. The number of such paths is $\binom{n-i-1}{m-i}$. Since the number of steps in all of these paths is the same, we can account for the division by $2$ in each transition by dividing by $2^{n-i}$ at the end. To find the value of $DP[n][m]$, we sum the contribution from every base case $DP[i][i]$ for $1 \leq i \leq m$ (the case $m = n$ is simply $DP[n][n] = k \cdot n$).
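The resulting closed form, again as an exact-arithmetic sketch (in the real problem the sum is computed modulo $10^9+7$ with precomputed factorials and inverse powers of $2$; the function name is mine):

```python
from fractions import Fraction
from math import comb

def game_score_fast(n, m, k):
    if n == m:
        return Fraction(n * k)       # every turn is a forced add
    total = Fraction(0)
    for i in range(1, m + 1):        # base case DP[i][i] = i*k
        total += Fraction(i * k * comb(n - i - 1, m - i), 2 ** (n - i))
    return total
```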
|
[
"combinatorics",
"dp",
"games",
"math"
] | 2,400
| null |
1628
|
E
|
Groceries in Meteor Town
|
Mihai lives in a town where meteor storms are a common problem. It's annoying, because Mihai has to buy groceries sometimes, and getting hit by meteors isn't fun. Therefore, we ask you to find the most dangerous way to buy groceries so that we can trick him to go there.
The town has $n$ buildings numbered from $1$ to $n$. Some buildings have roads between them, and there is exactly $1$ simple path from any building to any other building. Each road has a certain meteor danger level. The buildings all have grocery stores, but Mihai only cares about the open ones, of course. Initially, all the grocery stores are closed.
You are given $q$ queries of three types:
- Given the integers $l$ and $r$, the buildings numbered from $l$ to $r$ open their grocery stores (nothing happens to buildings in the range that already have an open grocery store).
- Given the integers $l$ and $r$, the buildings numbered from $l$ to $r$ close their grocery stores (nothing happens to buildings in the range that didn't have an open grocery store).
- Given the integer $x$, find the maximum meteor danger level on the simple path from $x$ to \textbf{any} open grocery store, or $-1$ if there is no edge on any simple path to an open store.
|
Consider the edge with the greatest weight. Any path going through that edge has maximum danger level equal to the weight of that edge. If we delete that edge from the tree, a path going through that edge in the original tree has one endpoint in each of the two components that result from the removal. Consider some query $x$ with some set of open stores. If the component not containing $x$ has an open store, then the answer is the weight of the deleted edge. If it does not have an open store, then we recursively solve the problem for the component of $x$. Observe that this structure is the same as finding the $LCA$ of a set of nodes in a tree. When asking for the $LCA$ of some set, we can fix an in-order traversal, ignore all nodes except the leftmost and rightmost ones, and still get the same $LCA$. The solution outline then looks like this: build the binary tree that arises from creating a node representing the edge with the greatest weight, and then doing the same recursively for the two components resulting from deleting that edge, making them the left and right subtrees. Order the nodes by an in-order traversal of this tree. Use a segment tree or another data structure to maintain the leftmost and rightmost open store in the in-order traversal. Find the LCA of the leftmost and rightmost open store in the created tree.
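The binary tree described here is the Kruskal reconstruction tree. A Python sketch of building it (my own naming; leaves $0..n-1$ are the buildings, each internal node represents an edge, and the maximum danger level on the path between two leaves equals the weight of their LCA):

```python
class DSU:
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]  # path halving
            x = self.p[x]
        return x

def kruskal_tree(n, edges):
    """edges: (weight, u, v). Returns (parent, weight) arrays of the 2n-1 node
    tree: processing edges in increasing weight order puts the heaviest edge
    at the root, matching the recursive construction described above."""
    parent = [-1] * (2 * n - 1)
    weight = [0] * (2 * n - 1)
    dsu = DSU(2 * n - 1)
    node = n                               # next internal node id
    for w, u, v in sorted(edges):
        ru, rv = dsu.find(u), dsu.find(v)
        if ru == rv:
            continue
        parent[ru] = parent[rv] = node     # new internal node merges the two components
        weight[node] = w
        dsu.p[ru] = dsu.p[rv] = node
        node += 1
    return parent, weight
```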
|
[
"binary search",
"data structures",
"dsu",
"trees"
] | 3,100
| null |
1628
|
F
|
Spaceship Crisis Management
|
NASA (Norwegian Astronaut Stuff Association) is developing a new steering system for spaceships. But in its current state, it wouldn't be very safe if the spaceship would end up in a bunch of space junk. To make the steering system safe, they need to answer the following:
Given the target position $t = (0, 0)$, a set of $n$ pieces of space junk $l$ described by line segments $l_i = ((a_{ix}, a_{iy}), (b_{ix}, b_{iy}))$, and a starting position $s = (s_x, s_y)$, is there a direction such that floating in that direction from the starting position would lead to the target position?
When the spaceship hits a piece of space junk, what happens depends on the absolute difference in angle between the floating direction and the line segment, $\theta$:
- If $\theta < 45^{\circ}$, the spaceship slides along the piece of space junk in the direction that minimizes the change in angle, and when the spaceship slides off the end of the space junk, it continues floating in the direction it came in (before hitting the space junk).
- If $\theta \ge 45^{\circ}$, the spaceship stops, because there is too much friction to slide along the space junk.
You are only given the set of pieces of space junk once, and the target position is always $(0, 0)$, but there are $q$ queries, each with a starting position $s_j = (s_{jx}, s_{jy})$.
Answer the above question for each query.
|
The exit position at a specific segment is always the same, only the exit direction changes depending on which angle the spaceship comes in at. Therefore, if we know the set of directions that are good to hit a segment at, we know if a path that hits the segment is good or not. Note that it's only useful to consider directions that are either the direction from the closest point on some segment to the target, or from some starting position to the target. Let's call these directions relevant directions. Slow solution: We can use DP to determine the set of useful directions for each segment: Sort the segments by distance to the target, and for each segment try shooting a ray in every relevant direction that is within 45 degrees of the direction of the segment itself, and see if it either hits the target or a segment where that direction is good. The comparison with 45 degrees can be done exactly using e.g. properties of the dot product. Then for each starting position query, the same ray shooting can be done. This is $O((n+q)^2 \cdot n)$ because there are $O(n+q)$ relevant directions, $O(n+q)$ positions from which to shoot rays, and $O(n)$ segments to check intersection with. This is too slow. We found two different ways to optimize this: 1. Since we need to know what a ray hits for many different directions from the same origin, we could do some preprocessing at each origin. A Li-Chao tree traditionally finds the minimum y-value at a certain x-coordinate among a set of line segments. But it doesn't have to contain line segments. It can contain any set of functions such that any pair of them intersect at most once. This includes distance to 2D line segments from a fixed origin as a function of angle. Using this, we can for each origin do $O(n \cdot log (n+q))$ time precomputation to get $O(log (n+q))$ time per query to a single direction, resulting in the time complexity $O((n+q)^2 \log (n+q))$ 2. 
Solution by Maksim1744: if we fix the floating direction, the movement between pieces of space junk forms edges in a functional graph. We can use a sweep to build the graph and then determine, for each starting position, whether it reaches the target in the graph. Doing this for all relevant directions also results in a time complexity of $O((n+q)^2 \log (n+q))$.
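The Li-Chao generalization used in the first optimization can be illustrated with a minimal sketch over ordinary linear functions (my own implementation; the same structure works for the angle-to-segment-distance functions, since any two of those also cross at most once):

```python
class LiChao:
    """Minimal Li-Chao tree over integer x in [lo, hi), for the pointwise
    minimum of a family of functions where any two cross at most once."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.f = None
        self.left = self.right = None

    def add(self, f):
        if self.f is None:
            self.f = f
            return
        lo, hi = self.lo, self.hi
        mid = (lo + hi) // 2
        if f(mid) < self.f(mid):
            self.f, f = f, self.f            # keep the winner at mid in this node
        if hi - lo == 1:
            return
        if f(lo) < self.f(lo):               # loser can still win in the left half
            if self.left is None:
                self.left = LiChao(lo, mid)
            self.left.add(f)
        elif f(hi - 1) < self.f(hi - 1):     # ... or in the right half
            if self.right is None:
                self.right = LiChao(mid, hi)
            self.right.add(f)

    def query(self, x):
        best = self.f(x) if self.f else float("inf")
        child = self.left if x < (self.lo + self.hi) // 2 else self.right
        if child is not None:
            best = min(best, child.query(x))
        return best
```

`add` stores at every node the function that wins at the midpoint and pushes the loser into the one half where it could still win, giving $O(\log)$ insertion and query.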
|
[
"binary search",
"data structures",
"geometry",
"sortings"
] | 3,500
| null |
1629
|
A
|
Download More RAM
|
Did you know you can download more RAM? There is a shop with $n$ different pieces of software that increase your RAM. The $i$-th RAM increasing software takes $a_i$ GB of memory to run (\textbf{temporarily, once the program is done running, you get the RAM back}), and gives you an additional $b_i$ GB of RAM (permanently). \textbf{Each software can only be used once.} Your PC currently has $k$ GB of RAM.
Note that you can't use a RAM-increasing software if it takes more GB of RAM to use than what you currently have.
Since RAM is the most important thing in the world, you wonder, what is the maximum possible amount of RAM achievable?
|
Using some software is never bad: it always ends up increasing your RAM if you can use it. And for any possible order in which a set of software can be used, the result is the same amount of RAM in the end. So we can greedily go through the list, using software whenever we have enough RAM for it. After going through the list, our RAM may have increased, so some of the software we couldn't use at the start may now be usable. Therefore we have to go through the list again (now with the used software removed) until the RAM doesn't increase anymore. This results in time complexity $O(n^2)$, which is fine for these constraints. It turns out we don't actually need to go through the list of software more than once if we sort it by $a$. This results in $O(n \log n)$ time complexity.
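The $O(n \log n)$ variant as a Python sketch (function name is mine; after sorting by the requirement $a_i$, a single pass suffices, because once some software is unaffordable, everything after it is too):

```python
def max_ram(k, software):
    """software: list of (a_i, b_i) pairs; k: starting RAM."""
    for a, b in sorted(software):
        if a > k:
            break          # sorted by a, so nothing later is affordable either
        k += b             # RAM only grows, so earlier picks stay valid
    return k
```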
|
[
"brute force",
"greedy",
"sortings"
] | 800
| null |
1629
|
B
|
GCD Arrays
|
Consider the array $a$ composed of all the integers in the range $[l, r]$. For example, if $l = 3$ and $r = 7$, then $a = [3, 4, 5, 6, 7]$.
Given $l$, $r$, and $k$, is it possible for $\gcd(a)$ to be greater than $1$ after doing the following operation at most $k$ times?
- Choose $2$ numbers from $a$.
- Permanently remove one occurrence of each of them from the array.
- Insert their product back into $a$.
$\gcd(b)$ denotes the greatest common divisor (GCD) of the integers in $b$.
|
For the $\gcd$ of the whole array to be greater than $1$, all elements must share a common prime factor, so we need to find the prime factor that is most common in the array and merge the elements that lack this prime factor into elements that have it; the answer is the size of the array minus the number of occurrences of the most frequent prime factor. Because the numbers are consecutive, the most common prime factor is always $2$. So the minimum number of moves we need is the count of odd numbers in the given range, which is $(r - l + 1) - (r / 2 - (l - 1) / 2)$ with integer division. Now the answer is "YES" when this minimum number of moves is less than or equal to $k$, and "NO" otherwise. The extra case we should take care of is $l = r$, for which the answer is always "YES", except when $l = r = 1$, where it is "NO".
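The whole check fits in a few lines of Python (sketch, my own naming; `//` is the integer division used in the formula above):

```python
def gcd_possible(l, r, k):
    if l == r:
        return l != 1      # a single element already has gcd > 1 unless it is 1
    odds = (r - l + 1) - (r // 2 - (l - 1) // 2)
    return odds <= k       # each odd number needs one merge with an even one
```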
|
[
"greedy",
"math",
"number theory"
] | 800
| null |
1630
|
A
|
And Matching
|
You are given a set of $n$ ($n$ is always a power of $2$) elements containing all integers $0, 1, 2, \ldots, n-1$ exactly once.
Find $\frac{n}{2}$ pairs of elements such that:
- Each element in the set is in exactly one pair.
- The sum over all pairs of the bitwise AND of its elements must be exactly equal to $k$. Formally, if $a_i$ and $b_i$ are the elements of the $i$-th pair, then the following must hold: $$\sum_{i=1}^{n/2}{a_i \& b_i} = k,$$ where $\&$ denotes the bitwise AND operation.
If there are many solutions, print any of them, if there is no solution, print $-1$ instead.
|
Try to find a pairing such that $\sum\limits_1^{n/2}{a_i\&{b_i}}=0$. Try to find a pairing for $k>0$ by changing only a few elements from the previous pairing. Let's define $c(x)$, the complement of $x$, as the number $x$ after changing all bits $0$ to $1$ and vice versa, for example $c(110010_2) = 001101_2$. It can be shown that $c(x) = x\oplus{(n-1)}$. Remember that $n-1 = 11...11_2$ since $n$ is a power of $2$. We will separate the problem into three cases. Case $k = 0$: in this case it is possible to pair $x$ with $c(x)$ for $0\leq{x}<{\frac{n}{2}}$, getting $\sum\limits_{x=0}^{\frac{n}{2}-1} {x\&{c(x)}} = 0$. Case $0 < k < n-1$: in this case it is possible to pair each element with its complement except $0$, $k$, $c(k)$ and $n-1$, and then pair $0$ with $c(k)$ and $k$ with $n-1$; note that $0\& c(k) = 0$ and $k\& (n-1) = k$. Case $k = n-1$: there are many constructions that work in this case. If $n=4$ there is no solution; if $n \geq 8$ it is possible to construct the answer in the following way: pair $n-1$ with $n-2$, $n-3$ with $1$, $0$ with $2$, and all other elements with their complements. $(n-1)\&{(n-2)}=n-2$, for example $1111_2\&{1110_2}=1110_2$. $(n-3)\&{1}=1$, for example $1101_2\&{0001_2}=0001_2$. $0\&{2}=0$, for example $0000_2\&{0010_2}=0000_2$. All other elements can be paired with their complements and $x\&{c(x)}=0$. Note that $(n-2)+1+0+0+ ... +0=n-1$. 
Each case can be implemented in $O(n)$. Let's define $a$ such that $a_i = i-1$ for $1\le i \le \frac{n}{2}$ and $b$ such that $b_i = c(a_i)$ for $1\le i \le \frac{n}{2}$. For example, for $n = 16$ they are: $a = [0000_2, 0001_2, 0010_2, 0011_2, 0100_2, 0101_2, 0110_2, 0111_2]$ $b = [1111_2, 1110_2, 1101_2, 1100_2, 1011_2, 1010_2, 1001_2, 1000_2]$ All swaps are independent and are applied to the original $a$ and $b$. After swapping two adjacent elements of $b$ (that have not been swapped) the sum will change by $2^x-1$ for some positive integer $x$. Then it is possible to solve the problem by repeatedly swapping the pair that maximizes $\sum\limits_{i=1}^{n/2}{ a_i\&{b_i}}$ after the swap, such that $\sum\limits_{i=1}^{n/2}{ a_i\&{b_i}} \leq k$ still holds and none of its elements have been swapped yet. However, this only works for all values of $k$ if $n \geq 32$; the case $n \leq 16$ can be handled with brute force. Please read the previous solution; arrays $a$ and $b$ from it will also be used here. It is possible to start with $a$ and $b$ and repeatedly select an index $x$ randomly and swap $b_x$ with $b_{x+1}$ if $\sum\limits_{i=1}^{n/2}{a_i\&b_i} \leq k$ holds, until $\sum\limits_{i=1}^{n/2}{a_i\&b_i} = k$. We have no proof of this solution, but it was stressed against each possible input to the problem and it worked quickly for $n \geq 16$; the case $n \leq 8$ can be handled with brute force.
|
[
"bitmasks",
"constructive algorithms"
] | 1,500
|
#include<bits/stdc++.h>
using namespace std;
int c(int x,int n){
return ( x ^ ( n - 1 ) );
}
int main(){
int tc;
cin >> tc;
while( tc-- ){
int n, k;
cin >> n >> k;
vector<int> a(n/2), b(n/2);
if( k == 0 ){
for(int i=0; i<n/2; i++){
a[i] = i;
b[i] = c(i,n);
}
}
if( k > 0 && k < n - 1 ){
int small_k = min( k , c(k,n) );
for(int i=0; i<n/2; i++){
if( i != 0 && i != small_k ){
a[i] = i;
b[i] = c(i,n);
}
}
a[0] = 0;
b[0] = c(k,n);
a[small_k] = k;
b[small_k] = n - 1;
}
if( k == n - 1 ){
if( n == 4 ){
cout << -1 << '\n';
continue;
}
a[0] = n - 2;
b[0] = n - 1;
a[1] = 1;
b[1] = n - 3;
a[2] = 0;
b[2] = 2;
for(int i=3; i<n/2; i++){
a[i] = i;
b[i] = c(i,n);
}
}
for(int i=0; i<n/2; i++){
cout << a[i] << ' ' << b[i] << '\n';
}
}
return 0;
}
|
1630
|
B
|
Range and Partition
|
Given an array $a$ of $n$ integers, find a range of values $[x, y]$ ($x \le y$), and split $a$ into \textbf{exactly} $k$ ($1 \le k \le n$) subarrays in such a way that:
- Each subarray is formed by several continuous elements of $a$, that is, it is equal to $a_l, a_{l+1}, \ldots, a_r$ for some $l$ and $r$ ($1 \leq l \leq r \leq n$).
- Each element from $a$ belongs to exactly one subarray.
- In each subarray the number of elements inside the range $[x, y]$ (inclusive) is \textbf{strictly greater} than the number of elements outside the range. An element with index $i$ is inside the range $[x, y]$ if and only if $x \le a_i \le y$.
Print any solution that minimizes $y - x$.
|
Focus on how to solve the problem for a fixed interval $[x,y]$. Think about the numbers inside the interval as $+1$, and the other numbers as $-1$. Try to relate a partition into valid subarrays with an increasing sequence of the prefix sums array. Note that if some value $x$ ($x>0$) appears on the prefix sums array, $x-1$ appears before, since the absolute value of the elements is $1$ (+1 and -1). Focus on how to solve the problem for a fixed interval $[x,y]$: Let us define an array $b$ such that $b_i = 1$ if $x \le a_i \le y$ and $b_i = -1$ otherwise, for all $1\le i\le n$. Let's define $psum_i$ as $b_1 + b_2 + ... + b_i$. We need to find a partition into $k$ subarrays, each with a positive sum of $b_i$. The sum of a subarray $[l,r]$ is $b_l+b_{l+1}+...+b_r = psum_r-psum_{l-1}$. Then a subarray is valid if $psum_r > psum_{l-1}$. We need to find an increasing sequence of $psum$ of length $k+1$ starting at $0$ and ending at $n$. Let's define $firstocc_x$ to be the first occurrence of the integer $x$ in $psum$. If $psum_n < k$ there will be no valid sequence; otherwise the sequence $0, firstocc_1, firstocc_2, ..., firstocc_{k-1}, n$ will satisfy all constraints. Note that, since $|psum_i-psum_{i-1}| = 1$ for $i>0$, $firstocc_v$ exists and $firstocc_v < firstocc_{v+1}$ for $0\leq v \leq psum_n$. This solves the problem for a fixed interval. It remains to find the smallest interval $[x,y]$ such that $psum_n \geq k$. For a given interval $[x,y]$, since $psum_n = b_1 + b_2 + ... + b_n$, $psum_n$ will be equal to the number of elements of $a$ inside the interval minus the number of elements outside. Then for each $x$, it is possible to find the smallest $y$ such that $psum_n \geq k$ using binary search or two pointers.
It is also possible to note that $psum_n \geq k$ $\iff$ $\sum\limits_{i=1}^n b_i \geq k$ $\iff$ $\sum\limits_{i=1}^n (-1 + 2\cdot [x\le a_i\le y]) \geq k$ $\iff$ $\sum\limits_{i=1}^n [x\le a_i\le y] \geq \lceil{\frac{k+n}{2}}\rceil$. We need to find the smallest interval with at least $\lceil{\frac{k+n}{2}}\rceil$ elements inside. Let $A$ be the array $a$ sorted; the answer is the minimum interval among all intervals $[A_i, A_{i+\lceil{\frac{k+n}{2}}\rceil-1}]$ for $1 \leq i \leq n - \lceil{\frac{k+n}{2}}\rceil+1$. Complexity: $O(n\log{n})$ if solved with the previous formula or binary search, or $O(n)$ if solved with two pointers.
|
[
"binary search",
"constructive algorithms",
"data structures",
"greedy",
"two pointers"
] | 1,800
|
#include<bits/stdc++.h>
using namespace std;
int main() {
int tc;
cin >> tc;
while( tc-- ){
int n, k;
cin >> n >> k;
vector<int> a(n), sorted_a(n);
for(int i=0; i<n; i++){
cin >> a[i];
sorted_a[i] = a[i];
}
sort(sorted_a.begin(),sorted_a.end());
int req_sum = ( n + k + 1 ) / 2;
pair<int,pair<int,int>> ans = { n + 1 , { -1 , -1 } };
for(int i=0; i+req_sum-1<n; i++)
ans = min( ans , { sorted_a[i+req_sum-1] - sorted_a[i] , { sorted_a[i] , sorted_a[i+req_sum-1] } } );
cout << ans.second.first << ' ' << ans.second.second << '\n';
int subarrays_found = 0, curr_sum = 0;
int last_uncovered = 1;
for(int i=0; i<n; i++){
if( a[i] >= ans.second.first && a[i] <= ans.second.second ) curr_sum ++;
else curr_sum --;
if( curr_sum > 0 && subarrays_found + 1 < k ){
cout << last_uncovered << ' ' << ( i + 1 ) << '\n';
last_uncovered = i + 2;
subarrays_found ++;
curr_sum = 0;
}
}
subarrays_found ++;
cout << last_uncovered << ' ' << n << '\n';
}
return 0;
}
|
1630
|
C
|
Paint the Middle
|
You are given $n$ elements numbered from $1$ to $n$, the element $i$ has value $a_i$ and color $c_i$, initially, $c_i = 0$ for all $i$.
The following operation can be applied:
- Select three elements $i$, $j$ and $k$ ($1 \leq i < j < k \leq n$), such that $c_i$, $c_j$ and $c_k$ are all equal to $0$ and $a_i = a_k$, then set $c_j = 1$.
Find the maximum value of $\sum\limits_{i=1}^n{c_i}$ that can be obtained after applying the given operation any number of times.
|
Think about all occurrences of some element, what occurrences are important? Think about the first and last occurrence of each element as a segment. Think about the segments that at least one of its endpoints will end up with $c_i = 0$. For each $x$ such that all the elements $a_1, a_2, ..., a_x$ are different from $a_{x+1}, a_{x+2}, ..., a_n$ it is impossible to apply an operation with some indices from the first part, and some other from the second one. Then it is possible to split the array in subarrays for each $x$ such that the previous condition holds, and sum the answers from all of them. Let's solve the problem independently for one of those subarrays, let's denote its length as $m$, the values of its elements as $a_1, ..., a_m$ and their colors as $c_1, ..., c_m$: For every tuple $(x, y, z)$ such that $1 \le x < y < z \le m$ and $a_x = a_y = a_z$ it is possible to apply an operation with indices $x, y$ and $z$. Then only the first and last occurrences of each element are important. For all pairs $(x, y)$ such that $1 \le x < y \le m$, $a_x = a_y$, $a_x$ is the first occurrence and $a_y$ the last occurrence of that value, a segment $[x, y]$ will be created. Let's denote the left border of a segment $i$ as $l_i$ and the right border as $r_i$. Let's say that a set of segments $S$ is connected if the union of its segments is the segment $[\min(l_i, \forall i\in{S}), \max(r_i, \forall i\in{S})]$. Instead of maximizing $\sum\limits_{i=1}^m{c_i}$, it is possible to focus on minimizing $\sum\limits_{i=1}^m{[c_i=0]}$. Lemma 1: If we have a connected set $S$, it is possible to apply some operations to its induced array to end up with at most $|S|+1$ elements with $c_i = 0$. For each segment $x$ in $S$ if there exists a segment $y$ such that $l_y < l_x < r_x < r_y$, it is possible to apply the operation with indices $l_y, l_x, r_y$ and with $l_y, r_x, r_y$. Otherwise, add this segment to a set $T$. 
Then it is possible to repeatedly select the leftmost segment of $T$ that has not been selected yet, and set the color of its right border to $1$; this will always be possible until we select the rightmost segment, since $T$ is connected. In the end, all the left borders of the segments of $T$ will have $c_i = 0$; the same holds for the right border of the rightmost segment of $T$, which leads to a total of $|T|+1$ elements with $c_i = 0$, and $|T| \le |S|$. Let $X$ be a subarray that can be obtained by applying the given operation to the initial subarray any number of times. Let $S(X)$ be the set of segments that includes all segments $i$ such that $c[l_i] = 0$ or $c[r_i] = 0$ (or both), where $c[i]$ is the color of the $i$-th element of the subarray $X$. Lemma 2: There is always an optimal solution in which $S(X)$ is connected. Suppose $S(X)$ is not connected; if there are only two components of segments $A$ and $B$, there will always be a segment from $A$ to $B$ due to the way the subarray was formed. If $A$ or $B$ has some segment $x$ such that there exists a segment $y$ with $l_y < l_x < r_x < r_y$, you can erase it by applying the operation with indices $l_y, l_x, r_y$ and with $l_y, r_x, r_y$. Then we can assume that $\sum\limits_{i\in A}{([c[l_i]=0]+[c[r_i]=0])} = |A|+1$ and similarly for $B$. The solution to $A$ before merging is $|A|+1$, and the solution to $B$ is $|B|+1$; if we merge $A$ and $B$ with a segment we get a component $C$ of size $|A|+|B|+1$, and its answer will be $|A|+|B|+1+1$ (using Lemma 1). The case with more than two components is similar, so we can always merge the components without making the answer worse. Finally, the problem in each subarray can be reduced to finding the smallest set (in number of segments) such that the union of its segments is the whole subarray. This can be computed with dp or sweep line. Let $dp[x]$ be the minimum size of a set such that the union of its segments is the segment $[1,x]$.
To compute $dp$, process all the segments in increasing order of $r_i$, and compute the value of $dp[r_i] = \min(dp[l_i+1], dp[l_i+2], ..., dp[r_i-1]) + 1$. Then the solution to the subarray is $dp[m] + 1$; this $dp$ can be computed in $O(m\log{m})$ with a segment tree. It is possible to compute a similar $dp$ to solve the problem for the whole array without splitting the array; the time complexity is $O(n\log{n})$. Alternatively, it is possible to create an event where a segment starts and an event where a segment ends. Then process the events in order, and each time a segment ends, if it is the rightmost segment added, add to the solution the segment with maximum $r_i$ among the segments whose $l_i$ has already been processed. It is possible to modify the sweep line to solve the problem for the whole array without splitting the array; the time complexity is $O(n)$ or $O(n\log{n})$ depending on the implementation.
|
[
"dp",
"greedy",
"sortings",
"two pointers"
] | 2,200
|
#include<bits/stdc++.h>
using namespace std;
template <typename Tnode,typename Tup>
struct ST{
vector<Tnode> st;
int sz;
ST(int n){
sz = n;
st.resize(4*n);
}
Tnode merge_(Tnode a, Tnode b){
Tnode c;
/// Merge a and b into c
c = min( a , b );
return c;
}
void update_node(int nod,Tup v){
/// how v affects to st[nod]
st[nod] = v;
}
void build(vector<Tnode> &arr){ build(1,0,sz-1,arr); }
void build(int nod,int l,int r,vector<Tnode> &arr){
if( l == r ){
st[nod] = arr[l];
return;
}
int mi = ( l + r ) >> 1;
build((nod<<1),l,mi,arr);
build((nod<<1)+1,mi+1,r,arr);
st[nod] = merge_( st[(nod<<1)] , st[(nod<<1)+1] );
}
void update(int id,Tup v){ update(1,0,sz-1,id,v); }
void update(int nod,int l,int r,int id,Tup v){
if( l == r ){
update_node(nod,v);
return;
}
int mi = ( l + r ) >> 1;
if( id <= mi ) update((nod<<1),l,mi,id,v);
else update((nod<<1)+1,mi+1,r,id,v);
st[nod] = merge_( st[(nod<<1)] , st[(nod<<1)+1] );
}
Tnode query(int l,int r){ return query(1,0,sz-1,l,r); }
Tnode query(int nod,int l,int r,int x,int y){
if( l >= x && r <= y ) return st[nod];
int mi = ( l + r ) >> 1;
if( y <= mi ) return query((nod<<1),l,mi,x,y);
if( x > mi ) return query((nod<<1)+1,mi+1,r,x,y);
return merge_( query((nod<<1),l,mi,x,y), query((nod<<1)+1,mi+1,r,x,y) );
}
};
int main(){
int n;
cin >> n;
vector<int> a(n), fst(n,-1), lst(n,-1);
for(int i=0; i<n; i++){
cin >> a[i];
a[i] --;
if( fst[a[i]] == -1 )
fst[a[i]] = i;
lst[a[i]] = i;
}
vector<pair<int,int>> segments;
for(int i=0; i<n; i++)
if( fst[i] != -1 )
segments.push_back({lst[i]+1,fst[i]+1});
sort(segments.begin(),segments.end());
vector<int> dp(n+1,1000000007);
dp[0] = 0;
ST<int,int> st(n+1);
st.build(dp);
for( auto i : segments ){
dp[i.first] = min( dp[i.first] , dp[i.second-1] + 1 + ( i.first != i.second ) );
if( i.second + 1 <= i.first - 1 )
dp[i.first] = min( dp[i.first] , st.query(i.second+1,i.first-1) + 1 );
st.update(i.first,dp[i.first]);
}
cout << n - dp[n] << '\n';
return 0;
}
|
1630
|
D
|
Flipping Range
|
You are given an array $a$ of $n$ integers and a set $B$ of $m$ positive integers such that $1 \leq b_i \leq \lfloor \frac{n}{2} \rfloor$ for $1\le i\le m$, where $b_i$ is the $i$-th element of $B$.
You can make the following operation on $a$:
- Select some $x$ such that $x$ appears in $B$.
- Select an interval from array $a$ of size $x$ and multiply by $-1$ every element in the interval. Formally, select $l$ and $r$ such that $1\leq l\leq r \leq n$ and $r-l+1=x$, then assign $a_i:=-a_i$ for every $i$ such that $l\leq i\leq r$.
Consider the following example, let $a=[0,6,-2,1,-4,5]$ and $B=\{1,2\}$:
- $[0,6,-2,-1,4,5]$ is obtained after choosing size $2$ and $l=4$, $r=5$.
- $[0,6,2,-1,4,5]$ is obtained after choosing size $1$ and $l=3$, $r=3$.
Find the maximum $\sum\limits_{i=1}^n {a_i}$ you can get after applying such operation any number of times (possibly zero).
|
What is the size of the smallest interval that it is possible to multiply by $-1$ using some operations? Let $s$ be a string such that $s_i=1$ if that element is multiplied by $-1$ and $s_i=0$ otherwise; which such $s$ are reachable? Think about the parity of the sum of all $s_i$ such that $i\mod{g}=constant$, where $g$ is the size of the smallest interval that it is possible to multiply by $-1$ using some operations. If we have $x, y \in B$ (assume $x > y$), since all elements of $B$ are at most $\lfloor\frac{n}{2}\rfloor$, it is possible to multiply any interval of size $x-y$ by $-1$: either multiply an interval of size $x$ that starts at the position of the interval of size $x-y$ and an interval of size $y$ that ends at the same position as the interval of size $x$, or multiply an interval of size $x$ that ends at the same position as the interval of size $x-y$ and another interval of size $y$ that starts at the same position as the interval of size $x$. So for two elements $x, y \in B$ ($x > y$), it is possible to add $x-y$ to $B$; repeatedly doing this, it is possible to get $\gcd(x, y)$. Let $g = \gcd(b_1, b_2, ..., b_m : b_i \in B)$; by applying the previous reduction, $g$ is the smallest element that can be obtained, and all other elements will be its multiples. Then the problem is reduced to: multiplying intervals of size $g$ by $-1$ any number of times, maximize $\sum\limits_{i=1}^n{a_i}$. Let's define the string $s = 000...00$ of size $n$ (0-indexed) such that $s_i = 0$ if the $i$-th element is not multiplied by $-1$ and $s_i = 1$ otherwise. The operation flips all values of $s$ in a substring of size $g$. Let's define $f_x$ as the xor over all values $s_i$ such that $i\mod g = x$; note that $f_x$ is defined for the values $0 \le x \le g-1$. In any operation, all values of $f$ change simultaneously; since they are all $0$ at the beginning, only the states of $s$ such that all $f_i$ are equal are reachable.
To prove that all states of $s$ with all $f_i$ equal are reachable, let's start with any state of $s$ such that $f = 000...00$ and repeatedly select the rightmost $i$ such that $s_i=1$ and $i\geq g-1$, and flip the substring of size $g$ that ends in that position; after doing that as many times as possible, $s_i = 0$ for $g-1\leq i\leq n-1$. If $s_i=1$ for any $0\leq i < g$, then $f_i = 1$, which is a contradiction since $f_{g-1} = 0$ and all $f_i$ change simultaneously; therefore $s = 000...00$. The case with all values of $f$ equal to $1$ is similar. After this, it is possible to solve the problem with dp. Let $dp_{i,0}$ be the maximum value of $\sum\limits_{k\geq 0,\ i-k\cdot g\geq 0}{(-1)^{s_{i-k\cdot g}}\cdot a_{i-k\cdot g}}$ over all sign assignments to the residue class of $i$ modulo $g$ with $\bigoplus\limits_{k\geq 0,\ i-k\cdot g\geq 0} s_{i-k \cdot g}=0$, and let $dp_{i,1}$ be the same with $\bigoplus\limits_{k\geq 0,\ i-k\cdot g\geq 0} s_{i-k \cdot g}=1$. The answer to the problem is $\max(\sum\limits_{i=n-g}^{n-1}{dp_{i,0}}, \sum\limits_{i=n-g}^{n-1}{dp_{i,1}} )$ (0-indexed). This $dp$ can be computed in $O(n)$.
|
[
"constructive algorithms",
"dp",
"greedy",
"number theory"
] | 2,400
|
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
int main()
{
ios_base::sync_with_stdio(0);
cin.tie(0);
int T;
cin >> T;
while(T--)
{
int n,m;
cin >> n >> m;
vector<int> a(n),b(m);
for(int i=0;i<n;i++)
cin >> a[i];
int g=0;
for(int i=0;i<m;i++)
{
cin >> b[i];
g=__gcd(g,b[i]);
}
vector<vector<ll>> dp(g,vector<ll>(2));
for(int i=0;i<g;i++)
dp[i][1]=-2e9;
for(int i=0;i<n;i++)
{
int rem=i%g;
ll v0=max(dp[rem][0]+a[i],dp[rem][1]-a[i]);
ll v1=max(dp[rem][0]-a[i],dp[rem][1]+a[i]);
dp[rem][0]=v0;
dp[rem][1]=v1;
}
ll sum0=0,sum1=0;
for(int i=0;i<g;i++)
{
sum0+=dp[i][0];
sum1+=dp[i][1];
}
cout << max(sum0,sum1) << '\n';
}
return 0;
}
|
1630
|
E
|
Expected Components
|
Given a cyclic array $a$ of size $n$, where $a_i$ is the value of $a$ in the $i$-th position, \textbf{there may be repeated values}. Let us define that a permutation of $a$ is equal to another permutation of $a$ if and only if their values are the same for each position $i$ or we can transform them to each other by performing some cyclic rotation. Let us define for a cyclic array $b$ its number of components as the number of connected components in a graph, where the vertices are the positions of $b$ and we add an edge between each pair of adjacent positions of $b$ with equal values (note that in a cyclic array the first and last position are also adjacent).
Find the expected value of components of a permutation of $a$ if we select it equiprobably over the set of all the different permutations of $a$.
|
Burnside's lemma Think about an easy way to count the number of components in a cyclic array. The number of components in a cyclic array is equal to the number of adjacent positions with different values. The problem can be solved by applying Burnside's lemma. The number of different permutations of the cyclic array $a$ is equal to the sum of number of fixed points for each permutation function divided by the number of permutations functions. Let's focus on two parts. First part (find the number of different permutations of $a$): Let's define a permutation function $F_x(arr)$ as the function that cyclically shifts the array $arr$ by $x$ positions. In this problem for an array of size $n$ we have $n$ possible permutations functions and we would need to find the sum of the number of fixed points for each permutation function. To find the number of fixed points for a permutation function $F_x()$ we have that $arr_i$ must be equal to $arr_{(i+x)\%n}$, if we add an edge $(i,(i+x)\%n)$ for each position $i$ then by number theory we would obtain that $gcd(n,x)$ cycles would be formed and each one of size $\frac{n}{gcd(n,x)}$, then we can note that each position $i$ will belong to the $(i\%gcd(n,x))$-th cycle, so we can say that the problem can be transformed into counting the number of permutations with repetition in an array of size $gcd(n,x)$. Let us denote $cnt[v]$ as the number of values equal to $v$ in array $a$, when we are processing the function $F_x()$ and we reduce the problem to an array of size $gcd(n,x)$ we should also decrease $cnt[v]$ to $\frac{cnt[v]}{n/gcd(n,x)}$ since each component is made up of $\frac{n}{gcd(n,x)}$ values, also we must observe that for solving a problem for an array of size $x$, then $\frac{n}{x}$ should be a divisor of $gcd(cnt[1],cnt[2],\ldots,cnt[n])$. 
Let us denote $cnt_x[v] = \frac{cnt[v]}{n/gcd(n,x)}$. So to count the number of permutations with repetition for $F_x()$ that can be formed with the frequency array $cnt_x$ we can use the formula $\frac{n!}{x_1! \cdot x_2! \cdot \ldots \cdot x_n!}$. Let us denote $G_{all} = gcd(cnt[1],cnt[2],\ldots,cnt[n])$. Let us denote $fdiv(val)$ as the number of divisors of $val$. Let us denote $tot_{sz}$ as the number of permutations with repetition for an array of size $sz$; from what has been said before we have that $\frac{n}{sz}$ must divide $G_{all}$, so we only need to calculate the permutations with repetition for $fdiv(G_{all})$ arrays. Now suppose that the number of different values of array $a$ is $k$; then $G_{all}$ must be at most $\frac{n}{k}$, because the gcd of several numbers is always less than or equal to the smallest of them. Now to calculate the permutations with repetition for a $cnt_x$ we do it in $O(k)$; for that we need to precalculate some factorials and modular inverses beforehand, and since we need to calculate them $fdiv(G_{all})$ times, in total the complexity would be $O(fdiv(G_{all})\cdot k)$, but since $G_{all}$ is at most $\frac{n}{k}$ and $fdiv(\frac{n}{k})$ is at most $\frac{n}{k}$, substituting it would be $O(\frac{n}{k}\cdot k)$, equal to $O(n)$. So to find the sum of the number of fixed points we need the sum of $tot_{gcd(n,x)}$ for $1 \le x \le n$ such that $\frac{n}{gcd(n,x)}$ divides $G_{all}$; at the end of all we divide the sum of the number of fixed points by $n$ and we obtain the number of different permutations of $a$.
To find the $gcd(n,x)$ for $1 \le x \le n$ we use Euclid's algorithm, in complexity $O(n\log{n})$, so in total the complexity is $O(n\log{n})$. Second part (find the expected value of components of different permutations of $a$): Here we will use linearity of expectation and we will focus on calculating the contribution of each component separately. The first thing is to realize that the number of components is equal to the number of different adjacent values, so we only need to focus on two adjacent values, except if it is a single component, which is a special case. If we have $k$ different values we can use each different pair of them, which in total would be $k\cdot(k-1)$ pairs. We can realize that when we put a pair, its contribution would be equal to the number of ways to permute the remaining values, which, if we are in an array of size $\frac{n}{x}$ and we use the values $val_1$ and $val_2$, would be equal to: $tot_{n/x} \cdot \frac{1}{(n/x)\cdot(n/x-1)} \cdot cnt_x[val_1] \cdot cnt_x[val_2]$, because we are removing a value $val_1$ and another value $val_2$ from the set. So if we have the formula: $\frac{n!}{x_1! \cdot x_2! \cdot \ldots \cdot x_n!}$ and $val_1$ and $val_2$ are the first two elements, then it would be: $\frac{(n-2)!}{(x_1-1)! \cdot (x_2-1)! \cdot \ldots \cdot x_n!}$, which would be equivalent to: $\frac{n!}{x_1! \cdot x_2! \cdot \ldots \cdot x_n!} \cdot \frac{1}{n \cdot (n-1)} \cdot x_1 \cdot x_2$. Now to calculate the contribution of the $k\cdot(k-1)$ pairs we can realize that, taking the common factor $tot_{n/x} \cdot \frac{1}{(n/x)\cdot(n/x-1)}$ in the previous expression, it only remains to find the sum of $cnt_x[i]\cdot cnt_x[j]$ for all $i \neq j$; this can be found in $O(k)$ easily by keeping prefix sums and doing some multiplications. Then at the end we multiply by $n$, since there are $n$ possible pairs of adjacent elements in the general array.
Let us define $sum_{sz}$ as the contribution of components of the permutations with repetition for an array of size $sz$; then: $sum_{n/x} = tot_{n/x} \cdot \frac{1}{(n/x)\cdot(n/x-1)} \cdot (sum~of~(cnt_x[i]\cdot cnt_x[j])~for~i \neq j) \cdot n$. Now for each possible permutation with repetition we have, by Burnside's lemma, that in the end we divide it by $n$, so we should also divide by $n$ the contribution of each component. Let's define $tot'_x = \frac{tot_x}{n}$ and $sum'_x = \frac{sum_x}{n}$. Let's define $tot_{all}$ as the sum of $tot'_{gcd(n,x)}$ for $1 \le x \le n$ such that $\frac{n}{gcd(n,x)}$ divides $G_{all}$. Let's define $sum_{all}$ as the sum of $sum'_{gcd(n,x)}$ for $1 \le x \le n$ such that $\frac{n}{gcd(n,x)}$ divides $G_{all}$. The final answer would be: $res = \frac{sum_{all}}{tot_{all}}$. The final complexity then is $O(n\log{n})$.
|
[
"combinatorics",
"math",
"number theory",
"probabilities"
] | 2,900
|
#include <bits/stdc++.h>
using namespace std;
const int MAXN = 1e6 + 100;
const int MOD = 998244353;
long long fact[MAXN];
long long F[MAXN];
long long qpow(long long a, long long b)
{
long long res = 1;
while(b)
{
if(b&1)res = res*a%MOD;
a = a*a%MOD;
b /= 2;
}
return res;
}
long long inv(long long x)
{
return qpow(x,MOD-2);
}
int32_t main()
{
ios_base::sync_with_stdio(0);
cin.tie(0);
fact[0] = 1;
for(int i = 1 ; i < MAXN ; i++)
{
fact[i] = fact[i-1]*i%MOD;
}
int T;
cin >> T;
while(T--)
{
int N;
cin >> N;
for(int i = 1 ; i <= N ; i++)
{
F[i] = 0;
}
for(int i = 1 ; i <= N ; i++)
{
int n;
cin >> n;
F[n]++;
}
vector<long long> vvv;
for(int i = 1 ; i <= N ; i++)
{
if(F[i])vvv.push_back(F[i]);
}
long long G = 0; // long long: __gcd needs both arguments to have the same type as the counts
for(auto x : vvv)
{
G = __gcd(G,x);
}
if(G == N)
{
cout << 1 << '\n';
continue;
}
vector<long long> arr(N+1);
vector<long long> arr2(N+1);
for(int i = 1 ; i <= G ; i++)
{
if(G%i == 0)
{
long long tot = inv(fact[N/i-2]), acum = 0, sum = 0;
for(auto x : vvv)
{
tot = tot*fact[x/i]%MOD;
sum = (sum + acum*(x/i)*2)%MOD;
acum = (acum + (x/i))%MOD;
}
tot = inv(tot);
arr2[i] = tot*(N/i-1)%MOD*(N/i)%MOD;
tot = tot*sum%MOD*N%MOD;
arr[i] = tot;
}
}
long long res = 0;
long long cont = 0;
for(int i = 1 ; i <= N ; i++)
{
long long ggg = N/__gcd(N,i);
if(G%ggg == 0)
{
res = (res + arr[ggg])%MOD;
cont = (cont + arr2[ggg])%MOD;
}
}
cout << res*inv(cont)%MOD << '\n';
}
return 0;
}
|
1630
|
F
|
Making It Bipartite
|
You are given an undirected graph of $n$ vertices indexed from $1$ to $n$, where vertex $i$ has a value $a_i$ assigned to it and all values $a_i$ are \textbf{different}. There is an edge between two vertices $u$ and $v$ if either $a_u$ divides $a_v$ or $a_v$ divides $a_u$.
Find the minimum number of vertices to remove such that the remaining graph is bipartite, when you remove a vertex you remove all the edges incident to it.
|
Think about the directed graph where there is a directed edge from $a$ to $b$ if and only if $b|a$. Let us define the above graph as $G$, make a duplicate graph $G'$ from $G$, and then add directed edges $(x', x)$ for each node $x'$ of the graph $G'$. What happens in this graph? Maximum Antichain. First of all, let's analyze what happens when there are $3$ vertices $x$, $y$ and $z$ such that $a_x|a_y$, $a_x|a_z$ and $a_y|a_z$. If this happens, the graph cannot be bipartite because there would be a cycle of size $3$; therefore there cannot be such a triple ($x$, $y$, $z$). This condition, besides being necessary, is sufficient, since we can separate the graph into two sets — set $A$: vertices that have edges towards multiples, set $B$: vertices that have edges towards divisors. Keep in mind that a vertex cannot exist in two sets at the same time if the condition is fulfilled. Now note that there are no edges between elements of the same set, because if this happened it would mean that they belong to different sets, a contradiction. Then the problem is to find the minimum number of vertices to remove such that in the remaining vertices there is no such triple of numbers ($x$, $y$, $z$). Now instead of minimizing the number of vertices to remove, let's try to maximize the number of vertices that will remain in the graph. Let us define the directed graph $G$ as the graph formed by $n$ vertices, and directed edges ($u$, $v$) such that $a_v|a_u$. Now the problem is reduced to finding the maximum number of vertices such that, in the graph formed among them, no vertex has incoming edges and outgoing edges at the same time; formally, for each vertex $x$ the following property must hold: $indegree_x = 0$ or $outdegree_x = 0$. In this way we guarantee that there is no triple ($x$, $y$, $z$) such that $a_x|a_y$, $a_x|a_z$ and $a_y|a_z$. Now let's define the graph $G'$ as a copy of the graph $G$.
Formally, for each directed edge ($u$, $v$) in the graph $G$ there is a directed edge ($u'$, $v'$) in the graph $G'$. On the other hand, let's define the graph $H = G + G'$, to which we also add new directed edges ($u'$, $u$). This graph $H$ is a $DAG$: it is easy to see that the edges always go from a vertex $u$ to a vertex $v$ only if $a_u > a_v$, except for the edges ($u'$, $u$), in which case $a_{u'} = a_u$. These edges are the ones that connect $G'$ to $G$, but since they always go in one direction, pointing towards $G$, the $DAG$ property is still fulfilled. Now the only thing we have to do is find the largest antichain in the graph $H$. This can be done using Dilworth's theorem, modeling the problem as a bipartite matching; we can use some flow algorithm such as Dinic's algorithm, or the Hopcroft–Karp algorithm, which is specifically designed to find the maximum bipartite matching. First of all, we realize that the graph $G$ is a special graph, since if there is an indirect path from a vertex $u$ to a vertex $v$ then there is always a direct edge between them. This is true because if we have $3$ vertices $x$, $y$ and $z$ such that $a_x|a_y$ and $a_y|a_z$, then always $a_x|a_z$. With this we can say that two elements are not reachable from each other if and only if there are no edges between them. Now let's say that all the vertices in the graph $G$ are white and all the vertices in the graph $G'$ are black, and let us denote by $f(x)$ a function such that $f(u') = u$, where the vertex $u$ from the graph $G$ is the projection of the vertex $u'$ from the graph $G'$. Now let's divide the proof into two parts. Lemma 1: Every antichain of $H$ can be transformed into a valid set of vertices such that they form a bipartite graph. Proof of Lemma 1: Let's divide the antichain of $H$ into two sets, white vertices and black vertices. Let us define the set of white vertices as $W$ and the set of all black vertices as $B$; now we will create a set $S$ = {$f(x)$ | $x \in B$}.
It is easy to see that no element in $S$ belongs to $W$, since if this happened it would mean that there is an element $x$ such that $x$ belongs to $B$ and $f(x)$ belongs to $W$, and by the concept of antichain that would not be possible. It is also easy to see that the elements of the set $S$ are an antichain, since the set $S$ is a projection of vertices from the set $B$ of the graph $G'$ on $G$. Now we have that there are no edges between the vertices of the set $S$ and there are no edges between the vertices of the set $W$; with this it is proved that the graph is bipartite. Lemma 2: Every valid set of vertices such that they form a bipartite graph can be transformed into an antichain of $H$. Proof of Lemma 2: Let us denote by $f^{-1}(x)$ a function such that $f^{-1}(u) = u'$, where vertex $u$ from graph $G$ is the projection of vertex $u'$ from graph $G'$. Let us denote the set $A$ as all vertices that have $indegree$ greater than $0$ and $B$ as all vertices that have $outdegree$ greater than $0$; now we will create a set $C$ = {$f^{-1}(x)$ | $x \in A$}. It is easy to see that set $B$ is an antichain, since if one vertex had an edge to another vertex then one of them would have $indegree$ greater than $0$, contradicting the definition of set $B$. We can also see that the elements in set $A$ are an antichain, since all the elements have $outdegree = 0$, so no vertex points towards any other vertex; with this we can state that all the elements in $C$ are an antichain, since they are a projection of vertices of the set $A$ from the graph $G$ on $G'$. Now we want to prove that the union of sets $B$ and $C$ is an antichain. This is very simple to see, since the vertices of set $B$ belong to $G$ and the vertices of $C$ belong to $G'$; therefore there is no edge from any vertex in $B$ to a vertex in $C$, since there are no edges from $G$ to $G'$.
Now it only remains to prove that from set $C$ no vertex of set $B$ can be reached. This is proved by taking into account that the vertices reachable from the set $C$ in the graph $G$ are the same as the vertices reachable from the set $A$ in the graph $G$, and as no vertex of $A$ has edges towards $B$, this cannot happen. Therefore the union of the sets $B$ and $C$ is an antichain of $H$. Then we can say that the two problems are equivalent, and it is shown that by finding the maximum antichain we obtain the largest bipartite graph. The graph $G$ contains $n$ vertices and around $n\cdot \log(n)$ edges (since the numbers $a_x$ are different and the total number of divisors of the numbers from $1$ to $n$ is around $n\cdot \log(n)$). The graph $G'$ is a duplicate of $G$, so we would have $n\cdot \log(n)\cdot2 + n$ edges and $2\cdot n$ vertices; if we use the Hopcroft–Karp algorithm we obtain a time complexity of $O(n\cdot \log(n)\cdot \sqrt{n})$ and a space complexity of $O(n\cdot \log(n))$.
|
[
"flows",
"graph matchings",
"graphs",
"number theory"
] | 3,400
|
#include <bits/stdc++.h>
using namespace std;
struct HOPCROFT_KARP
{
int n, m;
vector<vector<int>> adj;
vector<int> mu, mv, level, que;
HOPCROFT_KARP(int n, int m) : n(n), m(m), adj(n), mu(n, -1), mv(m, -1), level(n), que(n) {}
void add_edge(int u, int v)
{
adj[u].push_back(v);
}
void bfs()
{
int qf = 0, qt = 0;
for(int u = 0 ; u < n ; ++u)
{
if(mu[u] == -1)que[qt++] = u, level[u] = 0;
else level[u] = -1;
}
for( ; qf < qt ; ++qf)
{
int u = que[qf];
for(auto w : adj[u])
{
int v = mv[w];
if(v != -1 && level[v] == -1)
que[qt++] = v, level[v] = level[u] + 1;
}
}
}
bool dfs(int u)
{
for(auto w : adj[u])
{
int v = mv[w];
if(v == -1 || (level[v] == level[u] + 1 && dfs(v)))
return mu[u] = w, mv[w] = u, true;
}
return false;
}
int max_matching()
{
int match = 0;
for(int c = 1 ; bfs(), c ; match += c)
for(int u = c = 0 ; u < n ; ++u)
if(mu[u] == -1)
c += dfs(u);
return match;
}
pair<vector<int>, vector<int>> min_vertex_cover()
{
max_matching();
vector<int> L, R, inR(m);
for(int u = 0 ; u < n ; ++u)
{
if(level[u] == -1)L.push_back(u);
else if(mu[u] != -1)inR[mu[u]] = true;
}
for(int v = 0 ; v < m ; ++v)
if(inR[v])R.push_back(v);
return { L, R };
}
};
const int MAXN = 5e4 + 100;
int arr[MAXN];
vector<int> dv[MAXN];
int main()
{
ios_base::sync_with_stdio(0);
cin.tie(0);
for(int i = 1 ; i < MAXN ; i++)
{
for(int j = i*2 ; j < MAXN ; j+=i)
{
dv[j].push_back(i);
}
}
for(int i = 0 ; i < MAXN ; i++)
{
arr[i] = -1;
}
int T;
cin >> T;
while(T--)
{
int N;
cin >> N;
vector<int> vect;
for(int i = 0 ; i < N ; i++)
{
int n;
cin >> n;
vect.push_back(n);
arr[n] = i;
}
vector<pair<int,int>> edge;
for(int i = 0 ; i < N ; i++)
{
for(auto x : dv[vect[i]])
{
if(arr[x] != -1)
{
edge.push_back({i, arr[x]});
}
}
}
for(auto x : vect)
{
arr[x] = -1;
}
HOPCROFT_KARP HK(2*N,2*N);
for(auto x : edge)
{
int i = x.first;
int j = x.second;
HK.add_edge(i,j);
HK.add_edge(i+N,j+N);
}
for(int i = 0 ; i < N ; i++)
{
HK.add_edge(i+N,i);
}
cout << HK.max_matching()-N << '\n';
}
return 0;
}
|
1631
|
A
|
Min Max Swap
|
You are given two arrays $a$ and $b$ of $n$ positive integers each. You can apply the following operation to them any number of times:
- Select an index $i$ ($1\leq i\leq n$) and swap $a_i$ with $b_i$ (i. e. $a_i$ becomes $b_i$ and vice versa).
Find the \textbf{minimum} possible value of $\max(a_1, a_2, \ldots, a_n) \cdot \max(b_1, b_2, \ldots, b_n)$ you can get after applying such operation any number of times (possibly zero).
|
Think about how $\max(a_1, a_2, \ldots, a_n, b_1, b_2, \ldots, b_n)$ will contribute to the answer. The maximum of one of the arrays is always $\max(a_1, a_2, \ldots, a_n, b_1, b_2, \ldots, b_n)$. How should you minimize the answer then? Let $m_1 = \max(a_1, a_2, \ldots, a_n, b_1, b_2, \ldots, b_n)$. The answer will always be $m_1 \cdot m_2$, where $m_2$ is the maximum of the array that does not contain $m_1$. Since $m_1$ is fixed, the problem reduces to minimizing $m_2$, that is, minimizing the maximum of the array that does not contain the global maximum. WLOG assume that the global maximum will be in the array $b$; we can swap the elements at each index $x$ such that $a_x > b_x$, ending with $a_i \leq b_i$ for all $i$. It can be shown that the maximum of array $a$ is minimized this way. Time complexity: $O(n)$
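As a sanity check of the greedy above (not part of the editorial; the helper names `greedy` and `brute` are ours), one can compare it against brute force over all $2^n$ swap patterns on small arrays:

```python
from itertools import product

def greedy(a, b):
    # swap at every index where a[i] > b[i], so a[i] <= b[i] holds everywhere
    a, b = list(a), list(b)
    for i in range(len(a)):
        if a[i] > b[i]:
            a[i], b[i] = b[i], a[i]
    return max(a) * max(b)

def brute(a, b):
    # try every subset of indices to swap and keep the best product
    best = None
    for mask in product([False, True], repeat=len(a)):
        x = [bi if m else ai for ai, bi, m in zip(a, b, mask)]
        y = [ai if m else bi for ai, bi, m in zip(a, b, mask)]
        cur = max(x) * max(y)
        best = cur if best is None else min(best, cur)
    return best

assert greedy([3, 1, 2], [2, 4, 1]) == brute([3, 1, 2], [2, 4, 1])
```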
|
[
"greedy"
] | 800
|
#include<bits/stdc++.h>
using namespace std;
int calc_max(vector<int> a){
int res = 0;
for( auto i : a )
res = max( res , i );
return res;
}
int main(){
int tc;
cin >> tc;
while( tc-- ){
int n;
cin >> n;
vector<int> a(n), b(n);
for( auto &i : a )
cin >> i;
for( auto &i : b )
cin >> i;
for(int i=0; i<n; i++)
if( a[i] > b[i] )
swap( a[i] , b[i] );
cout << calc_max(a) * calc_max(b) << '\n';
}
}
|
1631
|
B
|
Fun with Even Subarrays
|
You are given an array $a$ of $n$ elements. You can apply the following operation to it any number of times:
- Select some subarray from $a$ of even size $2k$ that begins at position $l$ ($1\le l \le l+2\cdot{k}-1\le n$, $k \ge 1$) and for each $i$ between $0$ and $k-1$ (inclusive), assign the value $a_{l+k+i}$ to $a_{l+i}$.
For example, if $a = [2, 1, 3, 4, 5, 3]$, then choose $l = 1$ and $k = 2$, applying this operation the array will become $a = [3, 4, 3, 4, 5, 3]$.
Find the minimum number of operations (possibly zero) needed to make all the elements of the array equal.
|
It is not possible to modify $a_n$ using the given operation. Think about the leftmost $x$ such that $a_x \neq a_n$. For simplicity, let $b_1, b_2, \ldots, b_n = a_n, a_{n-1}, \ldots, a_1$ (let $b$ be $a$ reversed). The operation transforms into: select a subarray $[l, r]$ of length $2\cdot{k}$, so $k = \frac{r-l+1}{2}$, then for all $i$ such that $0 \leq i < k$, set $b_{l+k+i} = b_{l+i}$. $b_1$ cannot be changed with the given operation. That reduces the problem to making all elements equal to $b_1$. Let $x$ be the rightmost index such that for all $1 \leq i \leq x$, $b_i = b_1$ holds. The problem will be solved when $x = n$. If an operation is applied with $l + k > x + 1$, $b_{x+1}$ will not change and $x$ will remain the same. The largest range with $l + k \leq x + 1$ is $[1, 2\cdot{x}]$; applying an operation to it will lead to $b_{x+1}, b_{x+2}, \ldots, b_{2\cdot{x}} = b_1, b_2, \ldots, b_x$, so $x$ will become at least $2\cdot{x}$, and no other range leads to a bigger value of $x$. If $2\cdot{x} > n$, it is possible to apply the operation on $[x-(n-x)+1,n]$; after applying it, $b_{x+1}, \ldots, b_n = b_{x-(n-x)+1}, \ldots, b_x$ and all elements become equal. The problem can now be solved by repeatedly finding $x$ and applying the operation on $[1, 2\cdot{x}]$, or on $[x-(n-x)+1,n]$ if $2\cdot{x} > n$. Since $x$ at least doubles in each operation but the last one, the naive implementation takes $O(n\log{n})$; however, it is easy to implement it in $O(n)$.
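The doubling argument can be sketched directly (the helper name `min_ops` is ours); note that checking the original reversed array is enough, because each operation only overwrites positions that the counter has already passed:

```python
def min_ops(a):
    # work on the reversed array: b[0] (originally a[n-1]) can never change
    b = a[::-1]
    n = len(b)
    ans, x = 0, 1  # x = length of the prefix of b already equal to b[0]
    while x < n:
        if b[x] == b[0]:
            x += 1       # extend the matched prefix for free
        else:
            ans += 1     # one operation at least doubles the prefix
            x *= 2
    return ans

assert min_ops([7, 7, 7]) == 0  # already equal
assert min_ops([1, 2]) == 1     # one operation turns [1, 2] into [2, 2]
```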
|
[
"dp",
"greedy"
] | 1,100
|
#include <bits/stdc++.h>
using namespace std;
int main()
{
int tc;
cin >> tc;
while(tc--)
{
int n;
cin >> n;
vector<int> a(n+1);
for(int i=1; i<=n; i++)
cin >> a[i];
vector<int> b = a;
reverse(b.begin()+1,b.end());
int ans = 0, x = 1;
while( x < n )
{
if( b[x+1] == b[1] ){
x ++;
continue;
}
ans ++;
x *= 2;
}
cout << ans << '\n';
}
return 0;
}
|
1632
|
A
|
ABC
|
Recently, the students of School 179 have developed a unique algorithm, which takes in a binary string $s$ as input. However, they soon found out that if some substring $t$ of $s$ is a palindrome of length greater than 1, the algorithm will work incorrectly. Can the students somehow reorder the characters of $s$ so that the algorithm will work correctly on the string?
A binary string is a string where each character is either 0 or 1.
A string $a$ is a substring of a string $b$ if $a$ can be obtained from $b$ by deletion of several (possibly, zero or all) characters from the beginning and several (possibly, zero or all) characters from the end.
A palindrome is a string that reads the same backwards as forwards.
|
Only a few strings have the answer "YES". For $n \ge 3$, the answer is "NO". Let $n \ge 3$ and the resulting string be $a$. For there to be no palindromes of length greater than $1$, at least all of these inequalities must be true: $a_1 \neq a_2$, $a_2 \neq a_3$, and $a_1 \neq a_3$. Since our string is binary, this is impossible, so the answer is "NO". For $n \le 2$, there are 4 strings that have the answer "YES": $0$, $1$, $01$, and $10$; as well as 2 strings that have the answer "NO": $00$ and $11$. Time complexity: $O(n)$
|
[
"implementation"
] | 800
|
#include<bits/stdc++.h>
using namespace std;
int main() {
int t;
cin >> t;
while(t--) {
int n;
cin >> n;
string s;
cin >> s;
if(n > 2 || s == "11" || s == "00") {
cout << "NO\n";
} else {
cout << "YES\n";
}
}
}
|
1632
|
B
|
Roof Construction
|
It has finally been decided to build a roof over the football field in School 179. Its construction will require placing $n$ consecutive vertical pillars. Furthermore, the headmaster wants the heights of all the pillars to form a permutation $p$ of integers from $0$ to $n - 1$, where $p_i$ is the height of the $i$-th pillar from the left $(1 \le i \le n)$.
As the chief, you know that the cost of construction of consecutive pillars is equal to \textbf{the maximum value of the bitwise XOR} of heights of all pairs of adjacent pillars. In other words, the cost of construction is equal to $\max\limits_{1 \le i \le n - 1}{p_i \oplus p_{i + 1}}$, where $\oplus$ denotes the bitwise XOR operation.
Find any sequence of pillar heights $p$ of length $n$ with the smallest construction cost.
In this problem, a permutation is an array consisting of $n$ distinct integers from $0$ to $n - 1$ in arbitrary order. For example, $[2,3,1,0,4]$ is a permutation, but $[1,0,1]$ is not a permutation ($1$ appears twice in the array) and $[1,0,3]$ is also not a permutation ($n=3$, but $3$ is in the array).
|
The cost of construction is a power of two. The cost of construction is $2 ^ k$, where $k$ is the highest set bit in $n - 1$. Let $k$ be the highest set bit in $n - 1$. There will always be a pair of adjacent elements where one of them has the $k$-th bit set and the other one doesn't, so the cost is at least $2^k$. A simple construction that reaches it is $2^k - 1$, $2^k - 2$, $\ldots$, $0$, $2^k$, $2^k + 1$, $\ldots$, $n - 1$. Time complexity: $O(n)$ Bonus: count the number of permutations with the minimum cost.
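As an illustration (the helper names `build` and `cost` are ours), the construction can be checked to produce a permutation whose cost is exactly $2^k$ for every small $n \ge 2$:

```python
def build(n):
    # permutation from the editorial: 2^k - 1, ..., 0, 2^k, ..., n - 1,
    # where k is the highest set bit of n - 1 (n >= 2 assumed)
    k = (n - 1).bit_length() - 1
    return list(range((1 << k) - 1, -1, -1)) + list(range(1 << k, n))

def cost(p):
    # maximum XOR over adjacent pillar heights
    return max(x ^ y for x, y in zip(p, p[1:]))

for n in range(2, 300):
    p = build(n)
    assert sorted(p) == list(range(n))                 # p is a permutation of 0..n-1
    assert cost(p) == 1 << ((n - 1).bit_length() - 1)  # cost is exactly 2^k
```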
|
[
"bitmasks",
"constructive algorithms"
] | 1,000
|
#include<bits/stdc++.h>
using namespace std;
int main() {
int t;
cin >> t;
while(t--) {
int n;
cin >> n;
int k = 0;
while((1 << (k + 1)) <= n - 1) ++k;
for(int i = (1 << k) - 1; i >= 0; i--) {
cout << i << ' ';
}
for(int i = (1 << k); i < n; i++) {
cout << i << ' ';
}
cout << '\n';
}
}
|
1632
|
C
|
Strange Test
|
Igor is in 11th grade. Tomorrow he will have to write an informatics test by the strictest teacher in the school, Pavel Denisovich.
Igor knows how the test will be conducted: first of all, the teacher will give each student two positive integers $a$ and $b$ ($a < b$). After that, the student can apply any of the following operations any number of times:
- $a := a + 1$ (increase $a$ by $1$),
- $b := b + 1$ (increase $b$ by $1$),
- $a := a \ | \ b$ (replace $a$ with the bitwise OR of $a$ and $b$).
To get full marks on the test, the student has to tell the teacher the minimum required number of operations to make $a$ and $b$ equal.
Igor already knows which numbers the teacher will give him. Help him figure out what is the minimum number of operations needed to make $a$ equal to $b$.
|
It is optimal to apply the third operation at most once. It is optimal to apply the third operation at most once, because it does not decrease $a$ and always makes $b \le a$. This means that after we use it, we can only apply the second operation. If we don't apply the third operation, the answer is $b - a$. Suppose we do apply it. Before that, we used the first and second operations some number of times; let the resulting values of $a$ and $b$ be $a'$ and $b'$ respectively $(a \le a', b \le b')$. The answer in this case will be $(a' - a) + (b' - b) + ((a' \ | \ b') - b') + 1$ $=$ $a' + (a' \ | \ b') + (1 - a - b)$. This is equivalent to minimizing $a' + (a' \ | \ b')$, since $(1 - a - b)$ is constant. To do that, we can iterate $a'$ from $a$ to $b$. For a fixed $a'$, we have to minimize $a' \ | \ b'$; the optimal $b'$ can be constructed like this: set $b'$ to zero and iterate over bits from highest to lowest. There are 4 cases: if the current bit of $a'$ is $0$ and that of $b$ is $1$, set the current bit of $b'$ to $1$. If the current bit of $a'$ is $0$ and that of $b$ is $0$, set the current bit of $b'$ to $0$. If the current bit of $a'$ is $1$ and that of $b$ is $1$, set the current bit of $b'$ to $1$. If the current bit of $a'$ is $1$ and that of $b$ is $0$, set the current bit of $b'$ to $1$ and stop. This works in $O(\log b)$ and can also be sped up to $O(1)$ using bit manipulation. Time complexity: $O(b)$ or $O(b \log b)$ Bonus 1: solve the problem in $O(\log b)$ or faster. Bonus 2: prove that it is optimal to have either $a' = a$ or $b' = b$.
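The greedy construction of the optimal $b'$ can be sketched and cross-checked against brute force for small values (the helper name `best_or` is ours; we iterate from the top bit of $\max(a', b)$, which also covers $a' \ge b$, although the solution only needs $a' < b$):

```python
def best_or(a1, b):
    # greedy from the editorial: copy b's bits from high to low; at the first
    # position where b has 0 and a1 has 1, set it and stop (this keeps b1 >= b)
    b1 = 0
    for i in range(max(a1, b).bit_length(), -1, -1):
        if (b >> i) & 1:
            b1 |= 1 << i
        elif (a1 >> i) & 1:
            b1 |= 1 << i
            break
    return a1 | b1

# brute-force check: minimize a1 | b1 over all b1 >= b in a safely large range
for a1 in range(1, 64):
    for b in range(1, 64):
        assert best_or(a1, b) == min(a1 | b1 for b1 in range(b, 256))
```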
|
[
"binary search",
"bitmasks",
"brute force",
"dp",
"math"
] | 1,600
|
for _ in range(int(input())):
a, b = map(int, input().split())
ans = b - a
for a1 in range(a, b):
b1 = 0
for i in range(21, -1, -1):
if (b >> i) & 1:
b1 ^= (1 << i)
else:
if (a1 >> i) & 1:
b1 ^= (1 << i)
break
        ans = min(ans, a1 - a - b + (a1 | b1) + 1)
print(ans)
|
1632
|
D
|
New Year Concert
|
New Year is just around the corner, which means that in School 179, preparations for the concert are in full swing.
There are $n$ classes in the school, numbered from $1$ to $n$, the $i$-th class has prepared a scene of length $a_i$ minutes.
As the main one responsible for holding the concert, Idnar knows that if a concert has $k$ scenes of lengths $b_1$, $b_2$, $\ldots$, $b_k$ minutes, then the audience will get bored if there exist two integers $l$ and $r$ such that $1 \le l \le r \le k$ and $\gcd(b_l, b_{l + 1}, \ldots, b_{r - 1}, b_r) = r - l + 1$, where $\gcd(b_l, b_{l + 1}, \ldots, b_{r - 1}, b_r)$ is equal to the greatest common divisor (GCD) of the numbers $b_l$, $b_{l + 1}$, $\ldots$, $b_{r - 1}$, $b_r$.
To avoid boring the audience, Idnar can ask any number of times (possibly zero) for the $t$-th class ($1 \le t \le k$) to make a new scene $d$ minutes in length, where $d$ can be \textbf{any positive integer}. Thus, after this operation, $b_t$ is equal to $d$. Note that $t$ and $d$ can be different for each operation.
For a sequence of scene lengths $b_1$, $b_2$, $\ldots$, $b_{k}$, let $f(b)$ be the minimum number of classes Idnar has to ask to change their scene if he wants to avoid boring the audience.
Idnar hasn't decided which scenes will be allowed for the concert, so he wants to know the value of $f$ for each non-empty prefix of $a$. In other words, Idnar wants to know the values of $f(a_1)$, $f(a_1$,$a_2)$, $\ldots$, $f(a_1$,$a_2$,$\ldots$,$a_n)$.
|
Let's call a segment $[l, r]$ bad if $\gcd(a_l \ldots a_r) = r - l + 1$. There are at most $n$ bad segments. For a fixed $l$, as $r$ increases, $\gcd(a_l \ldots a_r)$ does not increase. Suppose you change $a_i$ into a big prime. How does this affect the bad segments? Read the hints above. Let's find all of the bad segments. For a fixed $l$, let's find the largest $r$ that has $\gcd(a_l \ldots a_r) \ge r - l + 1$. This can be done with binary search and a sparse table / segment tree. If $\gcd(a_l \ldots a_r) = r - l + 1$, then the segment $[l, r]$ is bad. If we change $a_i$ into a big prime, no new bad segments will appear, and all bad segments that contain $i$ will disappear. So we have to find the minimum number of points to cover all of them. This is a standard problem, which can be solved greedily: choose the segment with the smallest $r$, delete all segments that contain $r$, and repeat. In our case, this is easy to do because our segments are not nested. Time complexity: $O(n \log n \log A)$ with a sparse table, where $A$ is the maximum value of $a_i$. Notes: There are many different modifications to the previous solution; some of them use two pointers (since segments are not nested) and some of them update the answer on the fly while searching for the bad segments. Using a segment tree and two pointers you can get the complexity $O(n (\log n + \log A))$. You can also use the fact that for a prefix, there are at most $O(\log A)$ different suffix $\gcd$ values. This leads to another way to find the bad segments.
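The "minimum number of points to cover all segments" subproblem can be sketched as follows (the helper name is ours); since the bad segments are not nested here, the same greedy applies directly:

```python
def min_points_to_cover(segments):
    # segments is a list of (l, r) pairs; pick the fewest points so that
    # every segment contains at least one chosen point
    points, last = 0, None
    for l, r in sorted(segments, key=lambda s: s[1]):  # by right endpoint
        if last is None or l > last:  # the last chosen point misses this segment
            points += 1
            last = r                  # greedily place the new point at r
    return points

assert min_points_to_cover([(1, 3), (2, 5), (6, 8)]) == 2
assert min_points_to_cover([(1, 10), (2, 3), (4, 7)]) == 2
```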
|
[
"binary search",
"data structures",
"greedy",
"math",
"number theory",
"two pointers"
] | 2,000
|
from math import gcd
n = int(input())
a = list(map(int, input().split()))
t = [0] * (4 * n)
def build(v, tl, tr):
global t
if tl == tr - 1:
t[v] = a[tl]
else:
tm = (tl + tr) // 2
build(2*v + 1, tl, tm)
build(2*v + 2, tm, tr)
t[v] = gcd(t[2*v + 1], t[2*v + 2])
def query(v, tl, tr, l, r):
if l >= r:
return 0
if tl == l and tr == r:
return t[v]
tm = (tl + tr) // 2
r1 = query(2*v + 1, tl, tm, l, min(tm, r))
r2 = query(2*v + 2, tm, tr, max(l, tm), r)
return gcd(r1, r2)
build(0, 0, n)
cur_l = 0
ans = 0
res = []
for i in range(n):
while query(0, 0, n, cur_l, i + 1) < i + 1 - cur_l:
cur_l += 1
if query(0, 0, n, cur_l, i + 1) == i + 1 - cur_l:
ans += 1
cur_l = i + 1
res.append(ans)
print(' '.join(map(str, res)))
|
1632
|
E2
|
Distance Tree (hard version)
|
\textbf{This version of the problem differs from the previous one only in the constraint on $n$}.
A tree is a connected undirected graph without cycles. A weighted tree has a weight assigned to each edge. The distance between two vertices is the minimum sum of weights on the path connecting them.
You are given a weighted tree with $n$ vertices, each edge has a weight of $1$. Denote $d(v)$ as the distance between vertex $1$ and vertex $v$.
Let $f(x)$ be the minimum possible value of $\max\limits_{1 \leq v \leq n} \ {d(v)}$ if you can temporarily add an edge with weight $x$ between any two vertices $a$ and $b$ $(1 \le a, b \le n)$. Note that after this operation, the graph is no longer a tree.
For each integer $x$ from $1$ to $n$, find $f(x)$.
|
It is optimal to add edges of type $(1, v)$. Try to check if for a fixed $x$ the answer is at most $ans$. For a fixed $x$ and answer $ans$, find the distance between nodes that have $depth_v > ans$. For each node, find two children with the deepest subtrees. Read the hints above. Let $f_{ans}$ be the maximum distance between two nodes that have $depth_v > ans$. If for some $x$ the answer is at most $ans$, then either $ans \ge depth$ or $\lceil \frac{f_{ans}}{2} \rceil + x \le ans$, since we can add an edge $(1, u)$ where $u$ is in the middle of the path connecting the two farthest apart nodes with $depth_v > ans$. Since $f_{ans}$ decreases as $ans$ increases, we can use binary search. Also note that we can use two pointers and increase $ans$ as we increase $x$. How to calculate $f_{ans}$? Let's find for each node its two children with the deepest subtrees. Let $a_v$ and $b_v$ be the depths of their subtrees ($a_v \ge b_v$). If there are not enough children, set these values to $depth_v$. If $b_v > 0$, do $f_{b_v-1} := \max(f_{b_v - 1}, a_v + b_v - 2 \cdot depth_v)$. After this, iterate $i$ from $n - 2$ to $0$ and do $f_i = \max(f_i, f_{i + 1})$. Time complexity: $O(n)$ or $O(n \log n)$ with binary search. Note: To solve E1, it is enough to calculate $f_{ans}$ in $O(n)$ or $O(n \log n)$ for each $ans$. One way to do that is to find the diameter of the resulting tree after repeatedly deleting any leaf that has $depth_v \le ans$ ($1$ is also considered a leaf).
|
[
"binary search",
"dfs and similar",
"shortest paths",
"trees"
] | 2,700
|
import sys
g = []
d = []
n = 0
inp = list(sys.stdin)
def dfs_from_hell():
st = [[], [0, 0, 0, -1, 0, 0]]
while len(st) > 1:
if len(st[-1]) == 7:
if st[-1][-1] > st[-1][1]:
st[-1][2] = st[-1][1]
st[-1][1] = st[-1][-1]
elif st[-1][-1] > st[-1][2]:
st[-1][2] = st[-1][-1]
del st[-1][-1]
continue
if len(g[st[-1][0]]) == st[-1][5]:
i = min(st[-1][1], st[-1][2]) - 1
if i >= 0:
d[i] = max(d[i], st[-1][1] + st[-1][2] - 2 * st[-1][4] + 1)
st[-2].append(st[-1][1])
del st[-1]
continue
if g[st[-1][0]][st[-1][5]] == st[-1][3]:
st[-1][5] += 1
continue
st.append([g[st[-1][0]][st[-1][5]], st[-1][4] + 1, st[-1][4] + 1, st[-1][0], st[-1][4] + 1, 0])
st[-2][5] += 1
return st[0][0]
ci = 1
def solve():
global n, g, d, ci
n = int(inp[ci])
ci += 1
g = [[] for _ in range(n)]
d = [0 for _ in range(n)]
for i in range(n - 1):
a, b = map(int, inp[ci].split())
ci += 1
a -= 1
b -= 1
g[a].append(b);
g[b].append(a);
m_ans = dfs_from_hell()
for i in range(n - 2, -1, -1):
d[i] = max(d[i], d[i + 1])
ans = 0
res = []
for k in range(1, n + 1):
while ans < m_ans and d[ans] // 2 + k > ans:
ans += 1
res.append(str(ans))
print(' '.join(res))
for _ in range(int(inp[0])):
solve()
|
1633
|
A
|
Div. 7
|
You are given an integer $n$. You have to change the minimum number of digits in it in such a way that the resulting number \textbf{does not have any leading zeroes} and \textbf{is divisible by $7$}.
If there are multiple ways to do it, print any of them. If the given number is already divisible by $7$, leave it unchanged.
|
A lot of different solutions can be written for this problem. The model solution relies on the fact that every $7$-th integer is divisible by $7$, which means that there is always a way to change the last digit of $n$ (or leave it unchanged) so that the result is divisible by $7$. So, if $n$ is already divisible by $7$, we just print it; otherwise we change its last digit, either using some formulas or by iterating over its value from $0$ to $9$.
|
[
"brute force"
] | 800
|
t = int(input())
for i in range(t):
n = int(input())
if n % 7 == 0:
print(n)
else:
ans = -1
for j in range(10):
if (n - n % 10 + j) % 7 == 0:
ans = n - n % 10 + j
print(ans)
|
1633
|
B
|
Minority
|
You are given a string $s$, consisting only of characters '0' and '1'.
You have to choose a contiguous substring of $s$ and remove all occurrences of the character, which is a strict minority in it, from the substring.
That is, if the amount of '0's in the substring is strictly smaller than the amount of '1's, remove all occurrences of '0' from the substring. If the amount of '1's is strictly smaller than the amount of '0's, remove all occurrences of '1'. If the amounts are the same, do nothing.
You have to apply the operation \textbf{exactly once}. What is the maximum amount of characters that can be removed?
|
Let's try to estimate the maximum possible answer. In the best case, you will be able to remove either all zeros or all ones from the entire string. Whichever has fewer occurrences can be the answer. If the amounts of zeros and ones in the string are different, this bound is actually easy to reach: just choose the substring that is the entire string. If the amounts are the same, the bound is impossible to reach: choosing the entire string will do nothing, and choosing a smaller substring will decrease the answer. The smallest we can decrease the answer by is $1$. If you choose the substring that is the string without the last character, you will decrease one of the amounts by one. That will make the amounts different, and the bound will be reached. Overall complexity: $O(|s|)$ per testcase.
|
[
"greedy"
] | 800
|
for _ in range(int(input())):
s = input()
print(min((len(s) - 1) // 2, s.count('0'), s.count('1')))
|
1633
|
C
|
Kill the Monster
|
Monocarp is playing a computer game. In this game, his character fights different monsters.
A fight between a character and a monster goes as follows. Suppose the character initially has health $h_C$ and attack $d_C$; the monster initially has health $h_M$ and attack $d_M$. The fight consists of several steps:
- the character attacks the monster, decreasing the monster's health by $d_C$;
- the monster attacks the character, decreasing the character's health by $d_M$;
- the character attacks the monster, decreasing the monster's health by $d_C$;
- the monster attacks the character, decreasing the character's health by $d_M$;
- and so on, until the end of the fight.
The fight ends when someone's health becomes non-positive (i. e. $0$ or less). If the monster's health becomes non-positive, the character wins, otherwise the monster wins.
Monocarp's character currently has health equal to $h_C$ and attack equal to $d_C$. He wants to slay a monster with health equal to $h_M$ and attack equal to $d_M$. Before the fight, Monocarp can spend up to $k$ coins to upgrade his character's weapon and/or armor; each upgrade costs exactly one coin, each weapon upgrade increases the character's attack by $w$, and each armor upgrade increases the character's health by $a$.
Can Monocarp's character slay the monster if Monocarp spends coins on upgrades optimally?
|
First of all, let's understand how to solve the problem without upgrades. To do this, it is enough to compare two numbers: $\left\lceil\frac{h_M}{d_C}\right\rceil$ and $\left\lceil\frac{h_C}{d_M}\right\rceil$ - the number of attacks that the character needs to kill the monster and the number of attacks that the monster needs to kill the character, respectively. So, if the first number is not greater than the second number, then the character wins. Note that the number of coins is not very large, which means we can iterate over the number of coins that we will spend on weapon upgrades, and the remaining coins will be spent on armor upgrades. After that, we can use the formula described above to check whether the character will win. The complexity of the solution is $O(k)$.
|
[
"brute force",
"math"
] | 1,100
|
for _ in range(int(input())):
hc, dc = map(int, input().split())
hm, dm = map(int, input().split())
k, w, a = map(int, input().split())
for i in range(k + 1):
nhc = hc + i * a
ndc = dc + (k - i) * w
if (hm + ndc - 1) // ndc <= (nhc + dm - 1) // dm:
print("YES")
break
else:
print("NO")
|
1633
|
D
|
Make Them Equal
|
You have an array of integers $a$ of size $n$. Initially, all elements of the array are equal to $1$. You can perform the following operation: choose two integers $i$ ($1 \le i \le n$) and $x$ ($x > 0$), and then increase the value of $a_i$ by $\left\lfloor\frac{a_i}{x}\right\rfloor$ (i.e. make $a_i = a_i + \left\lfloor\frac{a_i}{x}\right\rfloor$).
After performing all operations, you will receive $c_i$ coins for all such $i$ that $a_i = b_i$.
Your task is to determine the maximum number of coins that you can receive by performing no more than $k$ operations.
|
Let's calculate $d_i$ - the minimum number of operations to get the number $i$ from $1$. To do this, it is enough to use BFS or dynamic programming. Edges in the graph (transitions in dynamic programming) have the form $\left(i, i + \left\lfloor\frac{i}{x}\right\rfloor\right)$ for all $1 \le x \le i$. Now the problem itself can be reduced to a knapsack problem: there are $n$ items, $i$-th item weighs $d_{b_i}$ and costs $c_i$, you have to find a set of items with the total weight of no more than $k$ of the maximum cost. This is a standard problem that can be solved in $O(nk)$, but it is too slow (although some participants passed all the tests with such a solution). However, we can notice that the values of $d_i$ should not grow too fast, namely, the maximum value of $d_i$ for $1 \le i \le 10^3$ does not exceed $12$. This means that the maximum possible weight is no more than $12n$, and we can limit $k$ to this number (i. e. make $k = \min(k, 12n)$).
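The two observations above, that $d_i$ can be computed by a simple forward relaxation (every transition strictly increases $i$) and that $d_i \le 12$ for all $1 \le i \le 10^3$, can be checked directly:

```python
N = 1001
INF = float('inf')
d = [INF] * N  # d[i] = minimum number of operations to get i from 1
d[1] = 0
for i in range(1, N):
    if d[i] == INF:
        continue
    for x in range(1, i + 1):
        j = i + i // x  # j > i, so processing i in ascending order is valid
        if j < N and d[i] + 1 < d[j]:
            d[j] = d[i] + 1

# every value 1..1000 is reachable (x = i gives the transition i -> i + 1),
# and, per the editorial, no value needs more than 12 operations
assert all(d[i] < INF for i in range(1, N))
assert max(d[1:N]) <= 12
```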
|
[
"dp",
"greedy"
] | 1,600
|
#include <bits/stdc++.h>
using namespace std;
const int N = 1001;
int main() {
vector<int> d(N, N);
d[1] = 0;
for (int i = 1; i < N; ++i) {
for (int x = 1; x <= i; ++x) {
int j = i + i / x;
if (j < N) d[j] = min(d[j], d[i] + 1);
}
}
int t;
cin >> t;
while (t--) {
int n, k;
cin >> n >> k;
vector<int> b(n), c(n);
for (int &x : b) cin >> x;
for (int &x : c) cin >> x;
int sum = 0;
for (int x : b) sum += d[x];
k = min(k, sum);
vector<int> dp(k + 1, 0);
for (int i = 0; i < n; ++i) {
for (int j = k - d[b[i]]; j >= 0; j--) {
dp[j + d[b[i]]] = max(dp[j + d[b[i]]], dp[j] + c[i]);
}
}
cout << *max_element(dp.begin(), dp.end()) << '\n';
}
}
|
1633
|
E
|
Spanning Tree Queries
|
You are given a connected weighted undirected graph, consisting of $n$ vertices and $m$ edges.
You are asked $k$ queries about it. Each query consists of a single integer $x$. For each query, you select a spanning tree in the graph. Let the weights of its edges be $w_1, w_2, \dots, w_{n-1}$. The cost of a spanning tree is $\sum \limits_{i=1}^{n-1} |w_i - x|$ (the sum of absolute differences between the weights and $x$). The answer to a query is the lowest cost of a spanning tree.
The queries are given in a compressed format. The first $p$ $(1 \le p \le k)$ queries $q_1, q_2, \dots, q_p$ are provided explicitly. For queries from $p+1$ to $k$, $q_j = (q_{j-1} \cdot a + b) \mod c$.
Print the xor of answers to all queries.
|
Consider a naive solution using Kruskal's algorithm for finding the MST. Given some $x$, you arrange the edges in the increasing order of $|w_i - x|$ and process them one by one. Look closely at the arrangements. At $x=0$ the edges are sorted by $w_i$. How does the arrangement change when $x$ increases? Well, some edges swap places. Consider a pair of edges with different weights $w_1$ and $w_2$ ($w_1 < w_2$). Edge $1$ will go before edge $2$ in the arrangement as long as $x$ is closer to $w_1$ than to $w_2$. So for all $x$ up to $\frac{w_1 + w_2}{2}$, edge $1$ goes before edge $2$. And for all $x$ from $\frac{w_1 + w_2}{2}$ onwards, edge $2$ goes before edge $1$. This tells us that every pair of edges with different weights will swap exactly once. So there will be at most $O(m^2)$ swaps, and thus at most $O(m^2)$ different arrangements. Each of them corresponds to some range of $x$'s. We can extract the ranges of $x$'s for all arrangements and calculate the MST at the start of each range. We can also find the arrangement that corresponds to some $x$ from a query with a binary search. However, only knowing the weight of the MST at the start of the range is not enough. The weights of edges change later in the range, and we can't predict how: some edges have their weight increasing, some decreasing. First, let's add more ranges. We want each edge to behave the same way on the entire range: either increase all the way or decrease all the way. If we also add $x=w_i$ for all $i$ into the MST calculation, this will hold. Second, let's store another value for each range: the number of edges that have their weight increasing on it. With that, we can easily recalculate the change in the cost of the spanning tree. The TL should be free enough for you to sort the edges for each MST calculation, resulting in an $O(m^2 (m \log m + n \log n) + k \log m)$ solution. You can also optimize the first part to $O(m^3)$.
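The claim about the swap point can be verified exhaustively for small integers: for weights $w_1 < w_2$, edge $1$ precedes edge $2$ exactly while $2x \le w_1 + w_2$ (this is also why the solution below multiplies all weights by $2$, keeping every midpoint integral):

```python
# |w1 - x| <= |w2 - x| holds exactly when 2*x <= w1 + w2 (for w1 < w2)
for w1 in range(30):
    for w2 in range(w1 + 1, 30):
        for x in range(60):
            assert (abs(w1 - x) <= abs(w2 - x)) == (2 * x <= w1 + w2)
```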
|
[
"binary search",
"data structures",
"dfs and similar",
"dsu",
"graphs",
"greedy",
"math",
"sortings",
"trees"
] | 2,400
|
#include <bits/stdc++.h>
#define forn(i, n) for (int i = 0; i < int(n); i++)
using namespace std;
struct edge{
int v, u, w;
};
vector<int> pr, rk;
int getp(int a){
return a == pr[a] ? a : pr[a] = getp(pr[a]);
}
bool unite(int a, int b){
a = getp(a), b = getp(b);
if (a == b) return false;
if (rk[a] < rk[b]) swap(a, b);
rk[a] += rk[b];
pr[b] = a;
return true;
}
int main() {
int n, m;
scanf("%d%d", &n, &m);
pr.resize(n);
rk.resize(n);
vector<edge> es(m);
forn(i, m){
scanf("%d%d%d", &es[i].v, &es[i].u, &es[i].w);
--es[i].v, --es[i].u;
es[i].w *= 2;
}
vector<int> ev(1, 0);
forn(i, m) forn(j, i + 1) ev.push_back((es[i].w + es[j].w) / 2);
sort(ev.begin(), ev.end());
ev.resize(unique(ev.begin(), ev.end()) - ev.begin());
vector<long long> base;
vector<int> cnt;
for (int x : ev){
sort(es.begin(), es.end(), [&x](const edge &a, const edge &b){
int wa = abs(a.w - x);
int wb = abs(b.w - x);
if (wa != wb) return wa < wb;
return a.w > b.w;
});
forn(i, n) pr[i] = i, rk[i] = 1;
long long cur_base = 0;
int cur_cnt = 0;
for (const auto &e : es) if (unite(e.v, e.u)){
cur_base += abs(e.w - x);
cur_cnt += x < e.w;
}
base.push_back(cur_base);
cnt.push_back(cur_cnt);
}
int p, k, a, b, c;
scanf("%d%d%d%d%d", &p, &k, &a, &b, &c);
int x = 0;
long long ans = 0;
forn(q, k){
if (q < p) scanf("%d", &x);
else x = (x * 1ll * a + b) % c;
int y = upper_bound(ev.begin(), ev.end(), 2 * x) - ev.begin() - 1;
ans ^= (base[y] + (n - 1 - 2 * cnt[y]) * 1ll * (2 * x - ev[y])) / 2;
}
printf("%lld\n", ans);
return 0;
}
|
1633
|
F
|
Perfect Matching
|
You are given a tree consisting of $n$ vertices (numbered from $1$ to $n$) and $n-1$ edges (numbered from $1$ to $n-1$). Initially, all vertices except vertex $1$ are inactive.
You have to process queries of three types:
- $1$ $v$ — activate the vertex $v$. It is guaranteed that the vertex $v$ is inactive before this query, and one of its neighbors is active. After activating the vertex, you have to choose a subset of edges of the tree such that each \textbf{active} vertex is incident to \textbf{exactly one} chosen edge, and each \textbf{inactive} vertex is not incident to any of the chosen edges — in other words, this subset should represent a perfect matching on the active part of the tree. If any such subset of edges exists, print the sum of indices of edges in it; otherwise, print $0$.
- $2$ — queries of this type will be asked only right after a query of type $1$, and there will be \textbf{at most $10$} such queries. If your answer to the previous query was $0$, simply print $0$; otherwise, print the subset of edges for the previous query as follows: first, print the number of edges in the subset, then print the indices of the chosen edges \textbf{in ascending order}. The sum of indices should be equal to your answer to the previous query.
- $3$ — terminate the program.
Note that you should solve the problem in online mode. It means that you can't read the whole input at once. You can read each query only after writing the answer for the last query. Use functions fflush in C++ and BufferedWriter.flush in Java languages after each writing in your program.
|
Let's root the tree at vertex $1$ and try to analyze when a tree contains a perfect matching. If we want to find the maximum matching in a tree, we can use some greedy approaches like "take a leaf of the tree, match it with its parent and remove both vertices, repeat this process until only isolated vertices remain". If we are interested in a perfect matching, then this process should eliminate all of the vertices. Let's modify this process a bit by always picking the deepest leaf. If there exists a perfect matching, picking the deepest leaf ensures that the tree always remains a tree and doesn't fall apart, i.e. there will always be one connected component. It means that when we remove the leaf with its parent, this leaf is the only remaining descendant of its parent. It's easy to see that whenever we remove a pair of vertices in this process, for each remaining vertex, the number of its descendants is either left unchanged or decreased by $2$. It means that if a vertex has an even number of descendants, it will have an even number of descendants until it is removed, and the same holds for an odd number of descendants. Let's call the vertices with an even number of descendants (including the vertex itself) even vertices, and all the other vertices odd vertices. A vertex cannot change its status in the process of building the perfect matching. Each leaf is an odd vertex, and if its parent has only one child, this parent is an even vertex. So, when we remove a pair of vertices, one of them (the child) is odd, and the other (the parent) is even. This leads us to another way of building the perfect matching: match each odd vertex with its parent, and make sure that everything is correct. Unfortunately, implementing it directly takes $O(n)$ per query, so we need something faster. We can see that each even vertex has at least one odd child (because if all children of a vertex were even, the number of its descendants, including the vertex itself, would be odd). 
In order to find a perfect matching, we have to make sure that: each even vertex has exactly one odd child; each odd vertex has an even vertex as its parent. All this means is that the number of even vertices should be equal to the number of odd vertices: it cannot be greater since each even vertex has at least one odd child, and if it is smaller, it's impossible to match the vertices. The perfect matching itself consists of edges that connect odd vertices with their parents. Okay, now we need some sort of data structure to maintain the status of each vertex (and the sum of edges that lead to an odd vertex if directed from top to bottom). In our problem, we have to add new leaves to the tree (it happens when a vertex is activated), and this increases the number of descendants for every vertex on the path from the root to this new leaf. So, we need some sort of data structure that supports the operations "add a new leaf" and "flip the status of all vertices on a path". One of the structures that allow this is the Link/Cut Tree, but we can use the fact that the whole tree is given in advance to build a Heavy-Light Decomposition on it, which is much easier to code. Operations on segments of paths can be done with a lazy segment tree, and each vertex then will be added in $O(\log^2 n)$.
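The criterion above can be cross-checked by a slow per-activation recomputation. The following sketch (Python, with hypothetical helper names; the fast solution below maintains the same quantities with HLD and a lazy segment tree) classifies vertices as odd or even in $O(n)$ and returns the answer for the current set of active vertices:

```python
def perfect_matching_sum(adj, edge_id, active, root=0):
    """O(n) reference check of the editorial's criterion: a perfect matching
    of the active part exists iff the number of odd vertices equals the
    number of even ones, and it uses exactly the edges from odd vertices to
    their parents. Returns the sum of chosen edge indices, or 0 if none
    exists. edge_id maps (child, parent) pairs (both orders) to edge index."""
    n = len(adj)
    parent = [-1] * n
    seen = [False] * n
    seen[root] = True
    order = [root]
    stack = [root]
    while stack:                       # iterative DFS over active vertices
        v = stack.pop()
        for u in adj[v]:
            if active[u] and not seen[u]:
                seen[u] = True
                parent[u] = v
                order.append(u)
                stack.append(u)
    size = [1] * n
    for v in reversed(order):          # children appear after their parents
        if parent[v] != -1:
            size[parent[v]] += size[v]
    total, odd, even = 0, 0, 0
    for v in order:
        if size[v] % 2 == 1:
            if parent[v] == -1:        # odd number of active vertices overall
                return 0
            odd += 1
            total += edge_id[(v, parent[v])]
        else:
            even += 1
    return total if odd == even else 0
```

The fast solution keeps the same counts (number of odd vertices, sum of edges into odd vertices) under path flips instead of recomputing them.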
|
[
"data structures",
"divide and conquer",
"interactive",
"trees"
] | 2,800
|
#include <bits/stdc++.h>
using namespace std;
typedef pair<long long, int> val;
#define x first
#define y second
const int N = 200043;
val operator +(const val& a, const val& b)
{
return make_pair(a.x + b.x, a.y + b.y);
}
val operator -(const val& a, const val& b)
{
return make_pair(a.x - b.x, a.y - b.y);
}
val T[4 * N];
val Tfull[4 * N];
int f[4 * N];
val getVal(int v)
{
if(f[v] == 1)
return Tfull[v] - T[v];
return T[v];
}
void push(int v)
{
if(f[v] == 1)
{
f[v] = 0;
T[v] = Tfull[v] - T[v];
f[v * 2 + 1] ^= 1;
f[v * 2 + 2] ^= 1;
}
}
void upd(int v)
{
Tfull[v] = Tfull[v * 2 + 1] + Tfull[v * 2 + 2];
T[v] = getVal(v * 2 + 1) + getVal(v * 2 + 2);
}
void setVal(int v, int l, int r, int pos, val cur)
{
if(l == r - 1)
{
f[v] = 0;
Tfull[v] = cur;
T[v] = cur;
}
else
{
push(v);
int m = (l + r) / 2;
if(pos < m)
setVal(v * 2 + 1, l, m, pos, cur);
else
setVal(v * 2 + 2, m, r, pos, cur);
upd(v);
}
}
void flipColor(int v, int l, int r, int L, int R)
{
if(L >= R) return;
if(l == L && r == R)
f[v] ^= 1;
else
{
push(v);
int m = (l + r) / 2;
flipColor(v * 2 + 1, l, m, L, min(m, R));
flipColor(v * 2 + 2, m, r, max(L, m), R);
upd(v);
}
}
val getVal(int v, int l, int r, int L, int R)
{
if(L >= R) return make_pair(0ll, 0);
if(l == L && r == R) return getVal(v);
int m = (l + r) / 2;
val ans = make_pair(0ll, 0);
push(v);
ans = ans + getVal(v * 2 + 1, l, m, L, min(R, m));
ans = ans + getVal(v * 2 + 2, m, r, max(L, m), R);
upd(v);
return ans;
}
int n;
vector<int> g[N];
int p[N], siz[N], d[N], nxt[N];
int tin[N], timer;
map<pair<int, int>, int> idx;
long long sum;
int cnt;
int active[N];
int active_cnt;
void dfs_sz(int v)
{
if (p[v] != -1)
{
auto it = find(g[v].begin(), g[v].end(), p[v]);
if (it != g[v].end())
g[v].erase(it);
}
siz[v] = 1;
for (int &u : g[v])
{
p[u] = v;
d[u] = d[v] + 1;
dfs_sz(u);
siz[v] += siz[u];
if (siz[u] > siz[g[v][0]])
swap(u, g[v][0]);
}
}
void dfs_hld(int v)
{
tin[v] = timer++;
for (int u : g[v])
{
nxt[u] = (u == g[v][0] ? nxt[v] : u);
dfs_hld(u);
}
}
// [l; r] inclusive
void flipSegment(int l, int r)
{
flipColor(0, 0, n, l, r + 1);
}
// [l; r] inclusive
val get(int l, int r)
{
return getVal(0, 0, n, l, r + 1);
}
void flipPath(int v, int u)
{
for (; nxt[v] != nxt[u]; u = p[nxt[u]])
{
if (d[nxt[v]] > d[nxt[u]]) swap(v, u);
flipSegment(tin[nxt[u]], tin[u]);
}
if (d[v] > d[u]) swap(v, u);
flipSegment(tin[v], tin[u]);
}
val getPath(int v, int u)
{
val res = make_pair(0ll, 0);
for (; nxt[v] != nxt[u]; u = p[nxt[u]])
{
if (d[nxt[v]] > d[nxt[u]]) swap(v, u);
// update res with the result of get()
res = res + get(tin[nxt[u]], tin[u]);
}
if (d[v] > d[u]) swap(v, u);
res = res + get(tin[v], tin[u]);
return res;
}
void activate_vertex(int x)
{
int id = 0;
if(p[x] != -1)
{
id = idx[make_pair(x, p[x])];
val v = getPath(0, p[x]);
flipPath(0, p[x]);
sum -= v.x;
cnt -= v.y;
v = getPath(0, p[x]);
sum += v.x;
cnt += v.y;
}
cnt++;
sum += id;
setVal(0, 0, n, tin[x], make_pair((long long)(id), 1));
active[x] = 1;
active_cnt++;
}
void init_hld(int root = 0)
{
d[root] = 0;
nxt[root] = root;
p[root] = -1;
timer = 0;
dfs_sz(root);
dfs_hld(root);
}
int currentSize[N];
int dfsSolution(int x, vector<int>& edges)
{
if(!active[x]) return 0;
currentSize[x] = 1;
for(auto y : g[x])
if(y != p[x])
currentSize[x] += dfsSolution(y, edges);
if(currentSize[x] % 2 == 1)
edges.push_back(idx[make_pair(x, p[x])]);
return currentSize[x];
}
void outputSolution()
{
vector<int> edges;
if(cnt * 2 == active_cnt)
{
dfsSolution(0, edges);
sort(edges.begin(), edges.end());
}
printf("%d", int(edges.size()));
for(auto x : edges) printf(" %d", x);
puts("");
fflush(stdout);
}
void processQuery(int v)
{
activate_vertex(v);
if(cnt * 2 == active_cnt)
printf("%lld\n", sum);
else
puts("0");
fflush(stdout);
}
int main()
{
scanf("%d", &n);
for(int i = 1; i < n; i++)
{
int x, y;
scanf("%d %d", &x, &y);
--x;
--y;
g[x].push_back(y);
g[y].push_back(x);
idx[make_pair(x, y)] = i;
idx[make_pair(y, x)] = i;
}
init_hld();
activate_vertex(0);
while(true)
{
int t;
scanf("%d", &t);
if(t == 3)
break;
if(t == 2)
outputSolution();
if(t == 1)
{
int v;
scanf("%d", &v);
--v;
processQuery(v);
}
}
}
|
1634
|
A
|
Reverse and Concatenate
|
\begin{quote}
Real stupidity beats artificial intelligence every time.
\hfill — Terry Pratchett, Hogfather, Discworld
\end{quote}
You are given a string $s$ of length $n$ and a number $k$. Let's denote by $rev(s)$ the reversed string $s$ (i.e. $rev(s) = s_n s_{n-1} ... s_1$). You can apply one of the two kinds of operations to the string:
- replace the string $s$ with $s + rev(s)$
- replace the string $s$ with $rev(s) + s$
How many different strings can you get as a result of performing \textbf{exactly} $k$ operations (possibly of different kinds) on the original string $s$?
In this statement we denoted the concatenation of strings $s$ and $t$ as $s + t$. In other words, $s + t = s_1 s_2 ... s_n t_1 t_2 ... t_m$, where $n$ and $m$ are the lengths of strings $s$ and $t$ respectively.
|
If $k = 0$, the answer is $1$. Otherwise, consider two cases. If the string is a palindrome (that is, $s = rev(s)$), then $rev(s) + s = s + rev(s) = s + s$, so both operations replace $s$ with the string $s+s$, which is also a palindrome. Then for any $k$ the answer is $1$. Otherwise $s \ne rev(s)$. Then after the first operation we get either $s + rev(s)$ (which is a palindrome) or $rev(s) + s$ (also a palindrome). Also note that if we apply the operations to two different palindromes of length $x$ any number of times, they cannot become equal, since they do not share the same prefix of length $x$. So, the first operation applied to a non-palindrome produces $2$ different strings, and they remain distinct after all following operations. So the answer is $2$.
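For tiny inputs the case analysis can be verified by brute force; a minimal sketch (Python, illustrative only) enumerates all $2^k$ operation sequences:

```python
def count_results(s, k):
    """Apply exactly k operations in every possible way and count the
    distinct resulting strings (exponential: only for tiny k)."""
    cur = {s}
    for _ in range(k):
        cur = {t + t[::-1] for t in cur} | {t[::-1] + t for t in cur}
    return len(cur)
```

For any non-palindrome and $k \ge 1$ this returns $2$, matching the editorial.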
|
[
"greedy",
"strings"
] | 800
|
q = int(input())
for _ in range(q):
    n, k = map(int, input().split())
    s = input()
    if s == s[::-1] or k == 0:
        print(1)
    else:
        print(2)
|
1634
|
B
|
Fortune Telling
|
\begin{quote}
Haha, try to solve this, SelectorUnlimited!
\hfill — antontrygubO_o
\end{quote}
Your friends Alice and Bob practice fortune telling.
Fortune telling is performed as follows. There is a well-known array $a$ of $n$ non-negative integers indexed from $1$ to $n$. The tellee starts with some non-negative number $d$ and performs one of the two operations for each $i = 1, 2, \ldots, n$, \textbf{in the increasing order of $i$}. The possible operations are:
- replace their current number $d$ with $d + a_i$
- replace their current number $d$ with $d \oplus a_i$ (hereinafter $\oplus$ denotes the bitwise XOR operation)
Notice that the chosen operation may be different for different $i$ and for different tellees.
One time, Alice decided to start with $d = x$ and Bob started with $d = x + 3$. Each of them performed fortune telling and got a particular number in the end. Notice that the friends chose operations independently of each other, that is, they could apply different operations for the same $i$.
You learnt that either Alice or Bob ended up with number $y$ in the end, but you don't know which of the two it was. Given the numbers Alice and Bob started with and $y$, find out who (Alice or Bob) could get the number $y$ after performing the operations. It is guaranteed that on the jury tests, \textbf{exactly one} of your friends could have actually gotten that number.
\textbf{Hacks}
You cannot make hacks in this problem.
|
Notice that the operations $+$ and $\oplus$ have the same effect on the parity: it is inverted if the second argument of the operation is odd, and stays the same otherwise. By induction, we conclude that if we apply the operations to some even number and to some odd number, the resulting numbers will also be of different parity. Therefore, we can determine whether the parity of the input is the same as the parity of the output or the opposite: if the sum of $a$ is even, then the parity does not change, otherwise it does. Thus we can find out the parity of the original number from the parity of the result, and this is enough to solve the problem because the numbers $x$ and $x + 3$ have different parity.
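The parity invariant is easy to check exhaustively on small inputs; a minimal sketch (Python, illustrative only):

```python
def reachable_parities(d, a):
    """Enumerate all 2^n choices of + / xor and collect the parities of the
    results; per the argument above, this set always has exactly one
    element, namely (d + sum(a)) % 2."""
    cur = {d}
    for x in a:
        cur = {v + x for v in cur} | {v ^ x for v in cur}
    return {v % 2 for v in cur}
```

Since $x$ and $x + 3$ have different parities, exactly one of Alice and Bob can reach the parity of $y$.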
|
[
"bitmasks",
"math"
] | 1,400
|
def main():
    n, x, y = map(int, input().split())
    a = list(map(int, input().split()))
    if (sum(a) + x + y) % 2 == 0:
        print('Alice')
    else:
        print('Bob')

for _ in range(int(input())):
    main()
|
1634
|
C
|
OKEA
|
\begin{quote}
People worry that computers will get too smart and take over the world, but the real problem is that they're too stupid and they've already taken over the world.
\hfill — Pedro Domingos
\end{quote}
You work for a well-known department store that uses leading technologies and employs mechanistic work — that is, robots!
The department you work in sells $n \cdot k$ items. The first item costs $1$ dollar, the second item costs $2$ dollars, and so on: $i$-th item costs $i$ dollars. The items are situated on shelves. The items form a rectangular grid: there are $n$ shelves in total, and each shelf contains exactly $k$ items. We will denote by $a_{i,j}$ the price of $j$-th item (counting from the left) on the $i$-th shelf, $1 \le i \le n, 1 \le j \le k$.
Occasionally robots get curious and ponder on the following question: what is the mean price (arithmetic average) of items $a_{i,l}, a_{i,l+1}, \ldots, a_{i,r}$ for some shelf $i$ and indices $l \le r$? Unfortunately, the old robots can only work with whole numbers. If the mean price turns out not to be an integer, they break down.
You care about robots' welfare. You want to arrange the items in such a way that the robots cannot theoretically break. Formally, you want to choose such a two-dimensional array $a$ that:
- Every number from $1$ to $n \cdot k$ (inclusively) occurs exactly once.
- For each $i, l, r$, the mean price of items from $l$ to $r$ on $i$-th shelf is an integer.
Find out if such an arrangement is possible, and if it is, give any example of such arrangement.
|
If $k = 1$, you can put items on the shelves in any order. Otherwise, there are at least 2 items on each shelf. If there are items of different parity on the shelf, it is obvious that there are two neighboring items of different parity, but then the arithmetic mean of these two items won't be whole, which is against the constraints. Therefore, all items on each shelf are of the same parity. Notice that if the number of shelves $n$ is odd (and $k \ge 2$), we cannot arrange the items correctly, because the number of shelves with even items and the number of shelves with odd items must be the same. Let us show that for even $n$ there is always an answer. On the $i$-th shelf we will place items with prices $i, i + n, i + 2 \cdot n, \ldots, i + n \cdot (k - 1)$. We can use the formula for the sum of an arithmetic progression to compute the sum of prices of a subsegment from $(i, l)$ up to $(i, r)$: $sum = i \cdot (r - l + 1) + \frac{n(l - 1) + n(r - 1)}{2} \cdot (r - l + 1) = i \cdot (r - l + 1) + \frac{n}{2} \cdot (l + r - 2) \cdot (r - l + 1) = (r - l + 1) \cdot \left(i + \frac{n}{2} \cdot (l + r - 2)\right)$. The length of the segment ($r - l + 1$) always divides this sum, since $n$ is even. Therefore, this arrangement fits the requirements of the problem.
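A sketch of the construction (Python, mirroring the solution below) together with a check of the property it guarantees:

```python
def build_shelves(n, k):
    """Editorial construction: shelf i gets prices i, i+n, ..., i+n(k-1).
    Returns None when no valid arrangement exists (odd n with k > 1)."""
    if k == 1:
        return [[i] for i in range(1, n + 1)]
    if n % 2 == 1:
        return None
    return [list(range(i, n * k + 1, n)) for i in range(1, n + 1)]
```

Every contiguous segment of every shelf then has an integer mean, exactly as the derivation shows.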
|
[
"constructive algorithms"
] | 1,000
|
def solve():
    n, k = map(int, input().split())
    if k == 1:
        print("YES")
        for i in range(1, n + 1):
            print(i)
        return
    if n % 2 == 1:
        print("NO")
        return
    print("YES")
    for i in range(1, n + 1):
        print(*range(i, n * k + 1, n))

for _ in range(int(input())):
    solve()
|
1634
|
D
|
Finding Zero
|
\textbf{This is an interactive problem.}
We picked an array of whole numbers $a_1, a_2, \ldots, a_n$ ($0 \le a_i \le 10^9$) and concealed \textbf{exactly one} zero in it! Your goal is to find the location of this zero, that is, to find $i$ such that $a_i = 0$.
You are allowed to make several queries to guess the answer. For each query, you can think up three distinct indices $i, j, k$, and we will tell you the value of $\max(a_i, a_j, a_k) - \min(a_i, a_j, a_k)$. In other words, we will tell you the difference between the maximum and the minimum number among $a_i$, $a_j$ and $a_k$.
You are allowed to make no more than $2 \cdot n - 2$ queries, and after that you have two tries to guess where the zero is. That is, you have to tell us two numbers $i$ and $j$ and you win if $a_i = 0$ or $a_j = 0$.
Can you guess where we hid the zero?
Note that the array in each test case is fixed beforehand and will not change during the game. In other words, the interactor is not adaptive.
|
Notice that for any four numbers $a, b, c, d$ we can locate at least two numbers among them that are certainly not zeroes using only four queries as follows. For each of the four numbers, compute its complement, that is, the difference between the maximum and the minimum of the other three numbers: $\bar{a} = \max(b, c, d) - \min(b, c, d)$ and so on. This takes exactly four queries. Now, consider what happens if one of the four numbers was a zero. For instance, if $a = 0, b \le c \le d$, then $\bar{a} = d - b$, $\bar{b} = d$, $\bar{c} = d$, $\bar{d} = c$. Since $d > d - b$ and $d \ge c$, the two largest complements ($\bar{b} = \bar{c} = d$ in this example) are always complements of non-zeroes. Of course, the order of the values could be different, but the numbers with the two largest complements will always be guaranteed non-zeroes. If there is no zero among these numbers, then we can still run this algorithm because it doesn't matter exactly which numbers it will yield: they are all non-zero anyway. Now let's learn how to solve the problem using this algorithm. Start with a "pile" of the first four numbers, apply the algorithm and throw two certain non-zeroes away. Add the next two numbers to the "pile" and drop two non-zeroes again. Repeat this until there are two or three numbers left in the "pile", depending on the parity of $n$. If there are three elements left, add some number that we have already dropped to the pile again and apply the algorithm one last time. If $n$ is even, we have made $\frac{n - 2}{2} \cdot 4 = 2n - 4$ queries. If $n$ is odd, we have made $\frac{n - 3}{2} \cdot 4 + 4 = 2n - 2$ queries. The complexity of this solution is $\mathcal{O}(n)$, and the solution uses no more than $2n - 2$ queries.
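The four-query elimination step can be simulated offline, with the interactor's answer replaced by direct computation (Python, illustrative only):

```python
def two_nonzero_indices(vals):
    """Given four values containing at most one zero, return two indices
    that are guaranteed non-zero: those with the two largest complements
    (max minus min of the other three), per the argument above."""
    comp = []
    for j in range(len(vals)):
        rest = [vals[i] for i in range(len(vals)) if i != j]
        comp.append((max(rest) - min(rest), j))
    comp.sort(reverse=True)
    return comp[0][1], comp[1][1]
```

In the interactive solution each complement costs one query, so one elimination round costs exactly four queries.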
|
[
"constructive algorithms",
"interactive",
"math"
] | 2,000
|
#include <bits/stdc++.h>
#define all(x) (x).begin(), (x).end()
#define len(x) (int) (x).size()
using namespace std;
int get(const vector <int>& x) {
cout << "? " << x[0] + 1 << " " << x[1] + 1 << " " << x[2] + 1 << endl;
int ans;
cin >> ans;
return ans;
}
signed main() {
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
int t;
cin >> t;
while(t --> 0) {
int n;
cin >> n;
pair <int, int> possible = {0, 1};
for (int i = 2; i < n - 1; i += 2) {
vector <pair <int, int>> ch(4);
vector <int> now = {possible.first, possible.second, i, i + 1};
for (int j = 0; j < 4; j++) {
vector <int> x = now;
x.erase(x.begin() + j);
ch[j] = {get(x), now[j]};
}
sort(all(ch));
possible = {ch[0].second, ch[1].second};
}
if (n % 2 == 1) {
int other = 0;
while (possible.first == other || possible.second == other) {
other++;
}
vector <pair <int, int>> ch(4);
vector <int> now = {possible.first, possible.second, n - 1, other};
for (int j = 0; j < 4; j++) {
vector <int> x = now;
x.erase(x.begin() + j);
ch[j] = {get(x), now[j]};
}
sort(all(ch));
possible = {ch[0].second, ch[1].second};
}
cout << "! " << possible.first + 1 << " " << possible.second + 1 << endl;
}
return 0;
}
|
1634
|
E
|
Fair Share
|
\begin{quote}
Even a cat has things it can do that AI cannot.
\hfill — Fei-Fei Li
\end{quote}
You are given $m$ arrays of positive integers. Each array is of even length.
You need to split all these integers into two \textbf{equal} multisets $L$ and $R$, that is, each element of each array should go into one of two multisets (but not both). Additionally, for each of the $m$ arrays, \textbf{exactly half} of its elements should go into $L$, and the rest should go into $R$.
Give an example of such a division or determine that no such division exists.
|
If there is a number that occurs an odd number of times in total, there is no answer. Otherwise, let us construct a bipartite graph as follows. The left part will denote the arrays ($m$ vertices) and the right part will denote the numbers (up to $\sum n_i$ vertices). Each array vertex is connected to all the numbers contained in the array, counted with multiplicity. That is, a vertex $a$ from the left part is connected to a vertex $b$ from the right part $x$ times, where $x$ is the count of occurrences of the number $b$ in the $a$-th array. Notice that all vertices in both parts are of even degree because the length of each array is even and the number of occurrences of each number is even. Therefore this graph has an Eulerian circuit; traverse it and direct every edge along the traversal. Then for edges like $a \rightarrow b$ (going from the left to the right) let us add the number $b$ to $L$, and for edges like $a \leftarrow b$ (going from the right to the left) add $b$ to $R$. This partitioning will obviously be valid. For each vertex on the left, the indegree equals the outdegree and hence each array is split in half, and for each vertex on the right the same condition holds, so each number occurs in $L$ and $R$ the same number of times and thus $L = R$.
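A compact sketch of this reduction (Python; Hierholzer's algorithm on the array-value multigraph; all helper names are ours, not from the solution below):

```python
from collections import Counter

def split_multisets(arrays):
    """Orient each edge of the (array i) -- (value v) bipartite multigraph
    along an Eulerian circuit; array->value edges send the element to L,
    value->array edges send it to R. Returns one 'L'/'R' string per array,
    or None if some value occurs an odd number of times in total."""
    total = Counter()
    for arr in arrays:
        total.update(arr)
    if any(c % 2 for c in total.values()):
        return None
    m = len(arrays)
    vid = {v: m + i for i, v in enumerate(total)}
    adj = [[] for _ in range(m + len(total))]   # entries: (neighbor, edge id)
    edge_pos = []                               # edge id -> (array, position)
    for i, arr in enumerate(arrays):
        for pos, v in enumerate(arr):
            e = len(edge_pos)
            edge_pos.append((i, pos))
            adj[i].append((vid[v], e))
            adj[vid[v]].append((i, e))
    used = [False] * len(edge_pos)
    ptr = [0] * len(adj)
    ans = [['L'] * len(arr) for arr in arrays]
    for start in range(len(adj)):               # one circuit per component
        stack = [start]
        while stack:
            v = stack[-1]
            while ptr[v] < len(adj[v]) and used[adj[v][ptr[v]][1]]:
                ptr[v] += 1
            if ptr[v] == len(adj[v]):
                stack.pop()
                continue
            u, e = adj[v][ptr[v]]
            used[e] = True
            if v >= m:                          # traversed value -> array
                i, pos = edge_pos[e]
                ans[i][pos] = 'R'
            stack.append(u)
    return [''.join(row) for row in ans]
```

Because indegree equals outdegree at every vertex of the circuit, each array is split exactly in half and the two multisets come out equal.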
|
[
"constructive algorithms",
"data structures",
"dfs and similar",
"graph matchings",
"graphs"
] | 2,400
|
#include <bits/stdc++.h>
#define len(x) (int) (x).size()
#define all(x) (x).begin(), (x).end()
#define endl "\n"
using namespace std;
const int N = 4e5 + 100, H = 2e5 + 50;
vector <pair <int, int>> g[N];
string ans[N];
int pos[N];
void dfs(int v) {
if (pos[v] == 0) {
ans[v] = string(len(g[v]), 'L');
}
while (pos[v] < len(g[v])) {
auto [i, ind] = g[v][pos[v]];
if (i == -1) {
pos[v]++;
continue;
}
g[i][ind].first = -1, g[v][pos[v]].first = -1;
if (v < H) {
ans[v][pos[v]] = 'R';
}
pos[v]++;
dfs(i);
}
}
signed main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
int m;
cin >> m;
map <int, int> ind, cnt;
int fre_ind = 0;
vector <vector <int>> nums(m);
for (int i = 0; i < m; i++) {
int n;
cin >> n;
for (int _ = 0; _ < n; _++) {
int x;
cin >> x;
if (!ind.count(x)) {
ind[x] = fre_ind++;
}
x = ind[x];
cnt[x]++;
nums[i].push_back(x);
g[i].emplace_back(x + H, len(g[x + H]));
g[x + H].emplace_back(i, len(g[i]) - 1);
}
}
for (auto [num, cn] : cnt) {
if (cn % 2 == 1) {
cout << "NO" << endl;
return 0;
}
}
for (int i = 0; i < N; i++) {
dfs(i);
}
cout << "YES" << endl;
for (int i = 0; i < m; i++) {
cout << ans[i] << endl;
}
return 0;
}
|
1634
|
F
|
Fibonacci Additions
|
\begin{quote}
One of my most productive days was throwing away 1,000 lines of code.
\hfill — Ken Thompson
\end{quote}
\textbf{Fibonacci addition} is an operation on an array $X$ of integers, parametrized by indices $l$ and $r$. Fibonacci addition increases $X_l$ by $F_1$, increases $X_{l + 1}$ by $F_2$, and so on up to $X_r$ which is increased by $F_{r - l + 1}$.
$F_i$ denotes the $i$-th Fibonacci number ($F_1 = 1$, $F_2 = 1$, $F_{i} = F_{i - 1} + F_{i - 2}$ for $i > 2$), and \textbf{all operations are performed modulo $MOD$}.
You are given two arrays $A$ and $B$ of the same length. We will ask you to perform several Fibonacci additions on these arrays with different parameters, and after each operation you have to report whether arrays $A$ and $B$ are equal modulo $MOD$.
|
Let $C_i = A_i - B_i$. Consider another auxiliary array $D$, where $D_1 = C_1$, $D_2 = C_2 - C_1$, and $D_i = C_i - C_{i - 1} - C_{i - 2}$ for $i > 2$. Notice that arrays $A$ and $B$ are equal if and only if all elements of $D$ are equal to $0$. Let's analyze how Fibonacci addition affects $D$. If Fibonacci addition is performed on array $A$ on a segment from $l$ to $r$, then: $D_l$ will increase by $1$, $D_{r + 1}$ will decrease by $F_{r - l + 2}$, and $D_{r + 2}$ will decrease by $F_{r - l + 1}$. Fibonacci addition on $B$ can be handled in a similar way. Fibonacci numbers modulo $MOD$ can be easily precomputed, and therefore the problem can be solved in linear time.
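The claimed three-point effect on $D$ is easy to verify numerically; a naive reference sketch (Python, illustrative only):

```python
def fib_add(arr, l, r, mod):
    """Naively apply one Fibonacci addition on positions l..r (1-based)."""
    f = [1, 1]                        # F_1 = F_2 = 1
    while len(f) < r - l + 1:
        f.append((f[-1] + f[-2]) % mod)
    for i in range(l - 1, r):
        arr[i] = (arr[i] + f[i - l + 1]) % mod

def diff_transform(c, mod):
    """D_1 = C_1, D_2 = C_2 - C_1, D_i = C_i - C_{i-1} - C_{i-2}."""
    d = [c[0] % mod]
    if len(c) > 1:
        d.append((c[1] - c[0]) % mod)
    for i in range(2, len(c)):
        d.append((c[i] - c[i - 1] - c[i - 2]) % mod)
    return d
```

Applying one Fibonacci addition to a zero array and re-deriving $D$ shows exactly the three predicted non-zero positions: $+1$ at $D_l$, $-F_{r-l+2}$ at $D_{r+1}$, $-F_{r-l+1}$ at $D_{r+2}$.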
|
[
"brute force",
"data structures",
"hashing",
"implementation",
"math"
] | 2,700
|
#include <bits/stdc++.h>
#define all(x) (x).begin(), (x).end()
#define len(x) (int) (x).size()
#define endl "\n"
#define int long long
using namespace std;
const int N = 3e5 + 100;
int MOD;
int fib[N];
vector <int> unfib;
int notzero = 0;
void upd(int ind, int delta) {
notzero -= (unfib[ind] != 0);
unfib[ind] += delta;
if (unfib[ind] >= MOD) {
unfib[ind] -= MOD;
};
notzero += (unfib[ind] != 0);
}
signed main() {
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
cout.tie(nullptr);
int n, q;
cin >> n >> q >> MOD;
fib[0] = fib[1] = 1;
for (int i = 2; i < N; i++) {
fib[i] = (fib[i - 1] + fib[i - 2]) % MOD;
}
vector <int> delta(n);
for (int& i : delta) {
cin >> i;
}
for (int i = 0; i < n; i++) {
int x;
cin >> x;
delta[i] = (delta[i] - x + MOD) % MOD;
}
unfib.resize(n);
unfib[0] = delta[0];
if (len(unfib) >= 2) {
unfib[1] = (delta[1] - delta[0] + MOD) % MOD;
}
for (int i = 2; i < n; i++) {
unfib[i] = ((long long) delta[i] - delta[i - 1] - delta[i - 2] + MOD * 2) % MOD;
}
for (int i = 0; i < n; i++) {
notzero += (unfib[i] != 0);
}
while (q--) {
char c;
int l, r;
cin >> c >> l >> r;
if (c == 'A') {
upd(l - 1, 1);
if (r != n) {
upd(r, MOD - fib[r - l + 1]);
}
if (r < n - 1) {
upd(r + 1, MOD - fib[r - l]);
}
} else {
upd(l - 1, MOD - 1);
if (r != n) {
upd(r, fib[r - l + 1]);
}
if (r < n - 1) {
upd(r + 1, fib[r - l]);
}
}
if (!notzero) {
cout << "YES" << endl;
} else {
cout << "NO" << endl;
}
}
return 0;
}
|
1635
|
A
|
Min Or Sum
|
You are given an array $a$ of size $n$.
You can perform the following operation on the array:
- Choose two different integers $i, j$ $(1 \leq i < j \leq n$), replace $a_i$ with $x$ and $a_j$ with $y$. In order not to break the array, $a_i | a_j = x | y$ must be held, where $|$ denotes the bitwise OR operation. Notice that $x$ and $y$ are non-negative integers.
Please output the minimum sum of the array you can get after using the operation above any number of times.
|
The answer is $a_1 | a_2 | \cdots | a_n$. Here is the proof: Let $m = a_1 | a_2 | \cdots | a_n$. After an operation, the value $m$ won't change. Since $a_1 | a_2 | \cdots | a_n \leq a_1 + a_2 + \cdots + a_n$, the sum of the array has a lower bound of $m$. And this sum can be achieved easily: for all $i \in [1, n - 1]$, set $a_{i+1}$ to $a_i | a_{i+1}$ and $a_i$ to $0$.
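A minimal sketch of the construction (Python, illustrative only):

```python
def minimize_sum(a):
    """Sweep left to right, ORing each element into its right neighbor and
    zeroing it; each step is a valid operation since the pairwise OR is
    preserved, and the final sum equals the OR of the whole array."""
    a = a[:]
    for i in range(len(a) - 1):
        a[i + 1] |= a[i]
        a[i] = 0
    return a
```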
|
[
"bitmasks",
"greedy"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
int main () {
ios::sync_with_stdio(false);
cin.tie(0);
int t;
cin >> t;
while (t--) {
int n;
cin >> n;
int ans = 0;
for (int i = 0, x; i < n; ++i) {
cin >> x;
ans |= x;
}
cout << ans << endl;
}
}
|
1635
|
B
|
Avoid Local Maximums
|
You are given an array $a$ of size $n$. Each element in this array is an integer between $1$ and $10^9$.
You can perform several operations to this array. During an operation, you can replace an element in the array with any integer between $1$ and $10^9$.
Output the minimum number of operations needed such that the resulting array doesn't contain any local maximums, and the resulting array after the operations.
An element $a_i$ is a local maximum if it is strictly larger than both of its neighbors (that is, $a_i > a_{i - 1}$ and $a_i > a_{i + 1}$). Since $a_1$ and $a_n$ have only one neighbor each, they will never be a local maximum.
|
Let's check all elements in $a$ from the left. Once we find that $a_i$ is a local maximum, then we should perform an operation to fix it. There are many ways to do this, but the optimal way is to set $a_{i+1}$ to $\max(a_{i},a_{i+2})$, because this way we prevent $a_i$ and $a_{i+2}$ from being local maximums at the same time. Proof: Let's take all indices of local maximums in the initial array and append them to an empty array $b$ in their original order. For example, if $a=[1,2,1,3,1,1,3,1,4,1,2,1]$, we obtain $b=[2,4,7,9,11]$. Then, we divide $b$ into subarrays, such that $b_i$ and $b_{i+1}$ are in the same subarray if and only if $b_{i+1}-b_i=2$. Using the same example above, we will divide $b$ into $[2,4], [7,9,11]$. To finish our proof, we need two important observations. 1. Any operation cancels at most two local maximums. 2. No operation can cancel two local maximums whose indices are in different subarrays of $b$. So for a fixed subarray, we need at least $\lceil \frac{length}{2} \rceil$ operations to cancel all corresponding local maximums, and the lower bound of the answer is the sum of $\lceil \frac{length}{2} \rceil$ over all subarrays. Since the strategy described above achieves this lower bound, the proof is complete.
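The lower bound from the proof can be computed directly and compared against the greedy; a sketch (Python, illustrative only):

```python
def lower_bound_ops(a):
    """Group the indices of local maximums into runs where consecutive
    indices differ by exactly 2, and sum ceil(len/2) over the runs."""
    peaks = [i for i in range(1, len(a) - 1)
             if a[i] > a[i - 1] and a[i] > a[i + 1]]
    ops, run, prev = 0, 0, None
    for i in peaks:
        if prev is not None and i - prev == 2:
            run += 1
        else:
            ops += (run + 1) // 2    # flush the previous run
            run = 1
        prev = i
    ops += (run + 1) // 2            # flush the last run
    return ops
```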
|
[
"greedy"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
int main () {
ios::sync_with_stdio(false);
cin.tie(0);
int t;
cin >> t;
while (t--) {
int n;
cin >> n;
vector <int> a(n);
for (int i = 0; i < n; ++i) {
cin >> a[i];
}
int ans = 0;
for (int i = 1; i + 1 < n; ++i) {
if (a[i] > a[i - 1] && a[i] > a[i + 1]) {
if (i + 2 < n) {
a[i + 1] = max(a[i], a[i + 2]);
} else {
a[i + 1] = a[i];
}
ans++;
}
}
cout << ans << endl;
for (int i = 0; i < n; ++i) {
cout << a[i] << " \n"[i == n - 1];
}
}
}
|
1635
|
C
|
Differential Sorting
|
You are given an array $a$ of $n$ elements.
You can perform the following operation no more than $n$ times: select three indices $x,y,z$ $(1 \leq x < y < z \leq n)$ and replace $a_x$ with $a_y - a_z$. After the operation, $|a_x|$ needs to be less than $10^{18}$.
Your goal is to make the resulting array \textbf{non-decreasing}. If there are multiple solutions, you can output any. If it is impossible to achieve, you should report it as well.
|
First of all, if $a_{n-1} > a_n$, then the answer is $-1$ since we can't change the last two elements. If $a_n \geq 0$, there exists a simple solution: perform the operation $(i, n - 1, n)$ for each $1 \leq i \leq n - 2$. Otherwise, the answer exists if and only if the initial array is sorted. Proof: Assume that $a_n < 0$ and we can sort the array after $m > 0$ operations. Consider the last operation we performed ($x_m, y_m, z_m$). Since the final array is sorted and $a_n < 0$, all elements must be negative after the last operation, so $a_{z_m} < 0$ must hold before it (the operation doesn't change $a_{z_m}$). But then $a_{x_m} = a_{y_m} - a_{z_m} > a_{y_m}$ while $x_m < y_m$, so the array isn't sorted in the end. By contradiction, we can't perform any operations as long as $a_n < 0$.
|
[
"constructive algorithms",
"greedy"
] | 1,200
|
#include <bits/stdc++.h>
using namespace std;
int main () {
ios::sync_with_stdio(false);
cin.tie(0);
int t;
cin >> t;
while (t--) {
int n;
cin >> n;
vector <int> a(n);
for (int i = 0; i < n; ++i) {
cin >> a[i];
}
if (a[n - 2] > a[n - 1]) {
cout << -1 << endl;
} else {
if (a[n - 1] < 0) {
if (is_sorted(a.begin(), a.end())) {
cout << 0 << endl;
} else {
cout << -1 << endl;
}
} else {
cout << n - 2 << endl;
for (int i = 1; i <= n - 2; ++i) {
cout << i << ' ' << n - 1 << ' ' << n << endl;
}
}
}
}
}
|
1635
|
D
|
Infinite Set
|
You are given an array $a$ consisting of $n$ \textbf{distinct} positive integers.
Let's consider an infinite integer set $S$ which contains all integers $x$ that satisfy at least one of the following conditions:
- $x = a_i$ for some $1 \leq i \leq n$.
- $x = 2y + 1$ and $y$ is in $S$.
- $x = 4y$ and $y$ is in $S$.
For example, if $a = [1,2]$ then the $10$ smallest elements in $S$ will be $\{1,2,3,4,5,7,8,9,11,12\}$.
Find the number of elements in $S$ that are strictly smaller than $2^p$. Since this number may be too large, print it modulo $10^9 + 7$.
|
First of all, let's discuss the problem where $n = 1$ and $a_1 = 1$. For every integer $x$, there is exactly one integer $k$ satisfying $2^k \leq x < 2^{k + 1}$. Let's define $f(x) = k$. Then, it's quite easy to verify that $f(2x + 1) = f(x) + 1$ and $f(4x) = f(x) + 2$. This observation leads to a simple dynamic programming solution: let $dp_i$ be the number of integers $x$ such that $x \in S$ and $f(x) = i$. The base case is $dp_{0} = 1$ and the transition is $dp_i = dp_{i - 1} + dp_{i - 2}$. After computing the $dp$ array, the final answer will be $\sum\limits_{i = 0}^{p - 1}dp_i$. For the general version of the problem, in order not to compute the same number two or more times, we need to delete all "useless" numbers. A number $a_i$ is called useless if there exists an index $j$ such that $a_j$ can generate $a_i$ after a series of operations (setting $a_j$ to $2 a_j + 1$ or $4 a_j$). After the deletion, we can simply do the same thing above, only changing the transition a little bit: $dp_i = dp_{i - 1} + dp_{i - 2} + g(i)$, where $g(i)$ is the number of indices $j$ satisfying $f(a_j) = i$. The final problem is how to find all the useless numbers. For every integer $x$, there are at most $\mathcal{O}(\log x)$ possible "parents" that can generate it. Also, such a "parent" must be smaller than $x$. So, let's sort the array in increasing order. Maintain all useful numbers in a set, and for each $a_i$, we will check whether any of its "parents" exists in the set. Once we confirm that no parent exists, we will append $a_i$ to the set of useful numbers. This works in $\mathcal{O}(n \log n \log C)$. Total complexity: $\mathcal{O}(n \log n \log C + p)$.
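The editorial's DP, without the modulus, as a reference sketch (Python) that can be checked against direct enumeration of $S$ for small $p$:

```python
def count_below_2p(a, p):
    """Drop 'useless' numbers (generable from a smaller kept number), then
    run dp[i] = dp[i-1] + dp[i-2] + (#useful a_j with f(a_j) = i)."""
    useful = set()
    for x in sorted(a):
        y, dead = x, False
        while y > 0:
            if y in useful:
                dead = True
                break
            if y % 2 == 1:
                y //= 2          # inverse of y -> 2y + 1
            elif y % 4 == 0:
                y //= 4          # inverse of y -> 4y
            else:
                break            # y = 2 (mod 4): no possible parent
        if not dead:
            useful.add(x)
    dp = [0] * p
    for i in range(p):
        dp[i] = sum(1 for x in useful if x.bit_length() - 1 == i)
        if i >= 1:
            dp[i] += dp[i - 1]
        if i >= 2:
            dp[i] += dp[i - 2]
    return sum(dp)
```

For $a = [1, 2]$ and $p = 4$ this counts $\{1,2,3,4,5,7,8,9,11,12,15\}$, consistent with the ten smallest elements listed in the statement.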
|
[
"bitmasks",
"dp",
"math",
"matrices",
"number theory",
"strings"
] | 1,800
|
#include <bits/stdc++.h>
using namespace std;
const int mod = 1e9 + 7;
int main () {
ios::sync_with_stdio(false);
cin.tie(0);
int n, p;
cin >> n >> p;
vector <int> a(n);
for (int i = 0; i < n; ++i) {
cin >> a[i];
}
sort(a.begin(), a.end());
set <int> useful;
for (int i = 0; i < n; ++i) {
// walk up a[i]'s unique chain of ancestors; a[i] is useless
// iff some smaller useful number generates it
int x = a[i];
bool flag = false;
while (x > 0) {
if (useful.count(x)) {
flag = true;
break;
}
if (x & 1) {
x >>= 1; // x = 2y + 1 came from y
} else if (x & 3) {
break; // x = 2 (mod 4) has no parent
} else {
x >>= 2; // x = 4y came from y
}
}
if (!flag)
useful.insert(a[i]);
}
vector <int> cnt(30, 0), dp(p);
for (int x : useful) {
cnt[__lg(x)]++;
}
int ans = 0;
for (int i = 0; i < p; ++i) {
if (i < 30) {
dp[i] = cnt[i];
}
if (i >= 1) {
dp[i] += dp[i - 1];
if (dp[i] >= mod) {
dp[i] -= mod;
}
}
if (i >= 2) {
dp[i] += dp[i - 2];
if (dp[i] >= mod) {
dp[i] -= mod;
}
}
ans += dp[i];
if (ans >= mod) {
ans -= mod;
}
}
cout << ans << endl;
}
|
1635
|
E
|
Cars
|
There are $n$ cars on a coordinate axis $OX$. Each car is located at an integer point initially and no two cars are located at the same point. Also, each car is oriented either left or right, and they can move at any constant positive speed in that direction at any moment.
More formally, we can describe the $i$-th car with a letter and an integer: its orientation $ori_i$ and its location $x_i$. If $ori_i = L$, then $x_i$ is decreasing at a constant rate with respect to time. Similarly, if $ori_i = R$, then $x_i$ is increasing at a constant rate with respect to time.
We call two cars \textbf{irrelevant} if they never end up in the same point regardless of their speed. In other words, they won't share the same coordinate at any moment.
We call two cars \textbf{destined} if they always end up in the same point regardless of their speed. In other words, they must share the same coordinate at some moment.
Unfortunately, we lost all information about our cars, but we do remember $m$ relationships. There are two types of relationships:
$1$ $i$ $j$ — the $i$-th car and the $j$-th car are \textbf{irrelevant}.
$2$ $i$ $j$ — the $i$-th car and the $j$-th car are \textbf{destined}.
Restore the orientations and the locations of the cars satisfying the relationships, or report that it is impossible. If there are multiple solutions, you can output any.
Note that if two cars share the same coordinate, they will intersect, but at the same moment they will continue their movement in their directions.
|
First of all, let's discuss in what cases two cars are irrelevant or destined. If two cars move in the same direction, whether they ever share a coordinate depends on their speeds: if the trailing car is faster than the leading one, it catches up, and if it is slower, it never does. If two cars are irrelevant, they must be oriented away from each other; if two cars are destined, they must be oriented towards each other.

In conclusion, if there is a relationship between car $i$ and car $j$, their orientations must differ. So let's build a graph: for every relationship between car $i$ and car $j$, add an undirected edge $(i, j)$. Note that this implies any valid set of relationships must form a bipartite graph. Run a DFS or BFS to find a bipartite coloring. If the graph isn't bipartite, the answer is obviously "NO"; otherwise, the coloring determines each car's orientation.

The next part is how to find where the cars are located. If car $i$ and car $j$ have a relationship and car $i$ is oriented left, the following restrictions arise: 1. if the two cars are irrelevant, $x_i < x_j$ must hold; 2. if the two cars are destined, $x_j < x_i$ must hold. Let's build another graph: for every restriction $x_i < x_j$, add a directed edge from $i$ to $j$. If this graph has one or more cycles, the answer is obviously "NO". Otherwise, run a topological sort and assign each car the index of its vertex in the topological order as its coordinate; every edge then points from a smaller coordinate to a larger one, so all restrictions are satisfied. Total complexity: $\mathcal{O}(n + m)$.
|
[
"2-sat",
"constructive algorithms",
"dfs and similar",
"dsu",
"graphs",
"greedy",
"sortings"
] | 2,200
|
#include <bits/stdc++.h>
using namespace std;
const int N = 200001;
struct edge {
int type, u, v;
};
vector <int> adj[N];
int col[N], topo[N];
void dfs(int v) {
for (int u : adj[v]) if (col[u] == -1) {
col[u] = col[v] ^ 1;
dfs(u);
}
}
bool BipartiteColoring(int n) {
for (int i = 1; i <= n; ++i)
col[i] = -1;
for (int i = 1; i <= n; ++i) if (col[i] == -1) {
col[i] = 0;
dfs(i);
}
for (int i = 1; i <= n; ++i) for (int j : adj[i]) {
if (col[i] == col[j]) {
return false;
}
}
return true;
}
bool TopologicalSort(int n) {
vector <int> in(n + 1, 0);
for (int i = 1; i <= n; ++i) for (int j : adj[i]) {
in[j]++;
}
queue <int> q;
for (int i = 1; i <= n; ++i) if (in[i] == 0) {
q.push(i);
}
int ord = 0;
while (!q.empty()) {
int v = q.front(); q.pop();
topo[v] = ord++;
for (int u : adj[v]) {
in[u]--;
if (in[u] == 0) {
q.push(u);
}
}
}
return ord == n;
}
int main () {
ios::sync_with_stdio(false);
cin.tie(0);
int n, m;
cin >> n >> m;
vector <edge> a(m);
for (int i = 0; i < m; ++i) {
cin >> a[i].type >> a[i].u >> a[i].v;
adj[a[i].u].push_back(a[i].v);
adj[a[i].v].push_back(a[i].u);
}
if (!BipartiteColoring(n)) {
cout << "NO" << endl;
return 0;
}
// col = 0 -> orient left, col = 1 -> orient right
for (int i = 1; i <= n; ++i) {
adj[i].clear();
}
for (edge e : a) {
if (col[e.u] == 1)
swap(e.u, e.v);
if (e.type == 1) {
adj[e.u].push_back(e.v);
} else {
adj[e.v].push_back(e.u);
}
}
if (!TopologicalSort(n)) {
cout << "NO" << endl;
return 0;
}
cout << "YES" << endl;
for (int i = 1; i <= n; ++i) {
cout << (col[i] == 0 ? "L " : "R ") << topo[i] << endl;
}
}
|