Dataset schema:
contest_id: string (lengths 1-4)
index: string (43 classes)
title: string (lengths 2-63)
statement: string (lengths 51-4.24k)
tutorial: string (lengths 19-20.4k)
tags: list (lengths 0-11)
rating: int64 (800-3.5k)
code: string (lengths 46-29.6k)
1677
E
Tokitsukaze and Beautiful Subsegments
Tokitsukaze has a permutation $p$ of length $n$. Let's call a segment $[l,r]$ beautiful if there exist $i$ and $j$ satisfying $p_i \cdot p_j = \max\{p_l, p_{l+1}, \ldots, p_r \}$, where $l \leq i < j \leq r$. Now Tokitsukaze has $q$ queries, in the $i$-th query she wants to know how many beautiful subsegments $[x,y]$ there are in the segment $[l_i,r_i]$ (i. e. $l_i \leq x \leq y \leq r_i$).
Sort the queries $[l,r]$ by $r$ in ascending order. Move $R$ from $1$ to $n$, answering the queries $[l,r]$ with $r=R$. Use a segment tree: each leaf $[ul,ul]$ maintains the number of beautiful intervals $[ul,R]$, so answering a query is just a range-sum query. Use a monotone stack to maintain the maximums: when $R$ moves one step to the right, the stack pops some entries and pushes one. Consider a value $x$ in the monotone stack; it means that the maximum of the intervals $[lp,R]$ ($l1 \le lp \le l2$) is $x$. If there exists an $ll$ in $[l1,l2]$ satisfying $a_{ll} \cdot a_R = x$, then until $x$ is popped, the intervals $[l,rr]$ with $l \in [l1,l2]$, $l \le ll$, $R \le rr$ are beautiful. So we can assume that $x$ will not be popped after it is pushed onto the monotone stack, and apply something like a difference trick in the segment tree, using the current $R$ as a time stamp: each segment-tree node maintains a value of the form $a \cdot R + b$, like a linear function. When $R$ moves one step to the right, enumerate the factors of $a_R$ to update the old intervals in the monotone stack, and enumerate the multiples of $a_R$ to update the new interval in the monotone stack as well as the intervals that are popped. Time complexity: $\mathcal{O}(n\log^2 n+q\log n)$.
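As a minimal illustration of just the monotone-stack ingredient above (not the editorial's full solution, which maintains the stack incrementally and combines it with the segment tree), here is a hedged sketch that, for a given $R$, builds the stack of blocks in which every left endpoint shares the same value of $\max(a[l..R])$; the function name and 0-based indexing are my own:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns a stack of (value, leftmost l) pairs such that for every l in a
// block, max(a[l..R]) equals that block's value. Rebuilt from scratch here
// for clarity; the actual solution updates this stack as R advances.
vector<pair<int,int>> suffixMaxBlocks(const vector<int>& a, int R) {
    vector<pair<int,int>> st; // (maximum value, leftmost index of its block)
    for (int i = 0; i <= R; i++) {
        int left = i;
        while (!st.empty() && st.back().first < a[i]) {
            left = st.back().second;   // merged block inherits the leftmost index
            st.pop_back();
        }
        st.push_back({a[i], left});
    }
    return st;
}
```

For example, for $a = (2,5,1,3)$ and $R = 3$ the stack holds two blocks: left endpoints $0..1$ have maximum $5$ and left endpoints $2..3$ have maximum $3$.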
[ "data structures" ]
2,900
#include<bits/stdc++.h> using namespace std; const int N=2e5+5; typedef pair<int,int> pp; int n,m,ti,top,a[N],d[N],p[N],bz[N],bo[N]; long long ans[1000005]; vector<pp>q[1000005]; vector<int>w[N]; struct dd { long long x,y,tg,len; }tr[N*4]; void make(int x,int l,int r) { tr[x].len=r-l+1; if (l==r) return; int mid=l+r>>1; make(x<<1,l,mid),make(x<<1|1,mid+1,r); } void down(int x) { if (tr[x].x==tr[x].len) { tr[x<<1].x=tr[x<<1].len; tr[x<<1|1].x=tr[x<<1|1].len; } if (tr[x].x==0) { tr[x<<1].x=0; tr[x<<1|1].x=0; } tr[x<<1].tg+=tr[x].tg; tr[x<<1].y+=tr[x].tg*tr[x<<1].len; tr[x<<1|1].tg+=tr[x].tg; tr[x<<1|1].y+=tr[x].tg*tr[x<<1|1].len; tr[x].tg=0; } void up(int x) { tr[x].x=tr[x<<1].x+tr[x<<1|1].x; tr[x].y=tr[x<<1].y+tr[x<<1|1].y; } void clear(int x,int l,int r,int ll,int rr) { if (l>=ll && r<=rr) { tr[x].tg+=ti; tr[x].y+=ti*tr[x].len; tr[x].x=0; return; } down(x); int mid=l+r>>1; if (ll<=mid) clear(x<<1,l,mid,ll,rr); if (rr>mid) clear(x<<1|1,mid+1,r,ll,rr); up(x); } void add(int x,int l,int r,int ll,int rr) { if (l>=ll && r<=rr) { tr[x].tg-=ti; tr[x].x=tr[x].len; tr[x].y-=ti*tr[x].len; return; } down(x); int mid=l+r>>1; if (ll<=mid) add(x<<1,l,mid,ll,rr); if (rr>mid) add(x<<1|1,mid+1,r,ll,rr); up(x); } long long find(int x,int l,int r,int ll,int rr) { if (l>=ll && r<=rr) return tr[x].y+ti*tr[x].x; down(x); int mid=l+r>>1; long long ans=0; if (ll<=mid) ans+=find(x<<1,l,mid,ll,rr); if (rr>mid) ans+=find(x<<1|1,mid+1,r,ll,rr); return ans; } int main() { //freopen("a.in","r",stdin); //freopen("1.out","w",stdout); scanf("%d%d",&n,&m); for (int i=1;i<=n;i++) scanf("%d",&a[i]),bz[a[i]]=i; for (int i=1;i<=m;i++) { int l,r; scanf("%d%d",&l,&r); q[r].emplace_back(l,i); } for (int i=1;i<=n;i++) for (int j=i;j<=n;j+=i) w[j].emplace_back(i); make(1,1,n); for (int i=1;i<=n;i++) { while (top && a[d[top]]<a[i]) { if (p[top]>d[top-1]) clear(1,1,n,d[top-1]+1,p[top]); bo[d[top]]=0; top--; } for (int j=a[i];j<=n;j+=a[i]) if (bo[bz[j]]) { int pt=bo[bz[j]]; int l=d[pt-1]+1,r=d[pt]; int 
e=bz[j/a[i]]; if (e<l || e>=i) continue; e=min(e,r); if (e<=p[pt]) continue; // if (l<=p[pt]) clear(1,1,n,l,p[pt]); add(1,1,n,p[pt]+1,e); p[pt]=e; } d[++top]=i; bo[i]=top; p[top]=d[top-1]; for (auto j:w[a[i]]) { int l=d[top-1]+1,r=i; int e1=bz[j],e2=bz[a[i]/j]; if (e2<=e1) continue; if (e1<l || e2>i) continue; if (e1<=p[top]) continue; // if (l<=p[top]) clear(1,1,n,l,p[top]); add(1,1,n,p[top]+1,e1); p[top]=e1; } ti++; for (auto t:q[i]) ans[t.second]=find(1,1,n,t.first,i); } for (int i=1;i<=m;i++) printf("%lld\n",ans[i]); return 0; }
1677
F
Tokitsukaze and Gems
Tokitsukaze has a sequence with length of $n$. She likes gems very much. There are $n$ kinds of gems. The gems of the $i$-th kind are on the $i$-th position, and there are $a_i$ gems of the same kind on that position. Define $G(l,r)$ as the multiset containing all gems on the segment $[l,r]$ (inclusive). A multiset of gems can be represented as $S=[s_1,s_2,\ldots,s_n]$, which is a non-negative integer sequence of length $n$ and means that $S$ contains $s_i$ gems of the $i$-th kind in the multiset. A multiset $T=[t_1,t_2,\ldots,t_n]$ is a multisubset of $S=[s_1,s_2,\ldots,s_n]$ if and only if $t_i\le s_i$ for any $i$ satisfying $1\le i\le n$. Now, given two positive integers $k$ and $p$, you need to calculate the result of $$\sum_{l=1}^n \sum_{r=l}^n\sum\limits_{[t_1,t_2,\cdots,t_n] \subseteq G(l,r)}\left(\left(\sum_{i=1}^n p^{t_i}t_i^k\right)\left(\sum_{i=1}^n[t_i>0]\right)\right),$$ where $[t_i>0]=1$ if $t_i>0$ and $[t_i>0]=0$ if $t_i=0$. Since the answer can be quite large, print it modulo $998\,244\,353$.
First, consider how to calculate the contribution of a single segment. Its contribution is equivalent to the following formula, where $b_i$ denotes the number of gems of the $i$-th kind in the segment: $\sum_{i_1=0}^{b_1}\sum_{i_2=0}^{b_2}\ldots \sum_{i_n=0}^{b_n}(\sum_{j=1}^n (P^{i_j}i_j^k)\times\sum_{j=1}^n [i_j\ne 0])$ This formula seems impossible to calculate directly, but observe that there is a chain structure behind it. Consider dynamic programming: try to compute the answer for the first $i+1$ kinds of gems using only the answer for the first $i$ kinds. Maintain the following quantities: $dp_{n,1}=\sum_{i_1=0}^{b_1}\sum_{i_2=0}^{b_2}\ldots \sum_{i_n=0}^{b_n}(\sum_{j=1}^n (P^{i_j}i_j^k)\times\sum_{j=1}^n [i_j\ne 0])$ $dp_{n,2}=\sum_{i_1=0}^{b_1}\sum_{i_2=0}^{b_2}\ldots \sum_{i_n=0}^{b_n}\sum_{j=1}^n (P^{i_j}i_j^k)$ $dp_{n,3}=\sum_{i_1=0}^{b_1}\sum_{i_2=0}^{b_2}\ldots \sum_{i_n=0}^{b_n} \sum_{j=1}^n [i_j\ne 0]$ $dp_{n,4}=\sum_{i_1=0}^{b_1}\sum_{i_2=0}^{b_2}\ldots \sum_{i_n=0}^{b_n} 1$ If we enumerate how many copies of the $(n+1)$-th gem are selected, the transition can indeed be carried out, for example: $dp_{n+1,1}=\sum_{p=0}^{b_{n+1}}\sum_{i_1=0}^{b_1}\sum_{i_2=0}^{b_2}\ldots \sum_{i_n=0}^{b_n}((\sum_{j=1}^n P^{i_j}i_j^k+P^p p^k)\times (\sum_{j=1}^n [i_j\ne 0]+[p\ne 0]))$ Then expand.
$A=\sum_{p=0}^{b_{n+1}}\sum_{i_1=0}^{b_1}\sum_{i_2=0}^{b_2}\ldots \sum_{i_n=0}^{b_n}(\sum_{j=1}^n P^{i_j}i_j^k \times \sum_{j=1}^n [i_j\ne 0])=(1+b_{n+1})dp_{n,1}$ $B=\sum_{p=0}^{b_{n+1}}\sum_{i_1=0}^{b_1}\sum_{i_2=0}^{b_2}\ldots \sum_{i_n=0}^{b_n}\sum_{j=1}^n P^{i_j}i_j^k[p\ne 0]=b_{n+1}dp_{n,2}$ $C=\sum_{p=0}^{b_{n+1}}\sum_{i_1=0}^{b_1}\sum_{i_2=0}^{b_2}\ldots \sum_{i_n=0}^{b_n}P^p p^k\times \sum_{j=1}^n [i_j\ne 0]=dp_{n,3} \times \sum_{p=0}^{b_{n+1}}P^p p^k=dp_{n,3} \times \sum_{p=1}^{b_{n+1}}P^p p^k$ $D=\sum_{p=0}^{b_{n+1}}\sum_{i_1=0}^{b_1}\sum_{i_2=0}^{b_2}\ldots \sum_{i_n=0}^{b_n}(P^p p^k)\times [p\ne 0]=dp_{n,4}\times\sum_{p=1}^{b_{n+1}}P^p p^k$ Now the transition works. The main problem is how to calculate $\sum_{i=1}^n i^kx^i$ quickly, given that $x\gt 1$. Let $h_{k,n}$ denote $\sum_{i=1}^n i^k x^i$. By the formula for a geometric progression, $h_{0,n}=\dfrac{x^{n+1}-x}{x-1}$. $xh_{k,n}-h_{k,n}=\sum_{i=1}^n x^{i+1}i^k-\sum_{i=1}^nx^ii^k$ $=x^{n+1}n^k+\sum_{i=1}^nx^i((i-1)^k-i^k)$ $=x^{n+1}n^k+\sum_{i=1}^nx^i([\sum_{j=0}^k(-1)^{k-j}\binom{k}{j}i^j]-i^k)$ $=x^{n+1}n^k+\sum_{j=0}^{k-1}(-1)^{k-j}\binom{k}{j}h_{j,n}$ Finally we get $h_{k,n}=\dfrac{x^{n+1}n^k+\sum_{j=0}^{k-1}(-1)^{k-j}\binom{k}{j}h_{j,n}}{x-1}$ From this recurrence we obtain $h_{0,n}=x^{n+1}(\frac{1}{x-1})-x\times(\frac{1}{x-1})$ $h_{1,n}=x^{n+1}(\frac{n}{x-1}-\dfrac{1}{(x-1)^2})-x(-\dfrac{1}{(x-1)^2})$ For fixed $k$, there exists a degree-$k$ polynomial $g_k$ in $n$ such that $h_{k,n}=x^{n+1}g_k(n)-xg_k(0)$. This can be proved by induction. To obtain this polynomial by interpolation, we only need the values $g_k(0)\sim g_k(k)$. Consider the difference: $h_{k,n}-h_{k,n-1}=x^{n+1}g_k(n)-x^ng_k(n-1)=n^kx^n$, so $xg_k(n)-g_k(n-1)=n^k$. But we still can't get $g_k(0)$. There is a useful fact: the $(n+1)$-th finite difference of a degree-$n$ polynomial $f(x)$ is $0$.
That is to say, $\sum_{i=0}^{n+1}(-1)^{n+1-i}\binom{n+1}{i}f(i)=0$. So we can get $g_k(0)$, then use fast polynomial interpolation and polynomial multipoint evaluation to obtain the value at every point we need. Finally, observe that the DP transition can be written as a matrix, so there is a matrix at each position, and the problem is equivalent to computing the sum of the matrix products over all intervals. That part is easy; solve it in whatever way you like. Overall complexity: $\mathcal{O}(k\log^2k+n)$.
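The recurrence for $h_{k,n}=\sum_{i=1}^n i^k x^i$ can also be evaluated directly in $\mathcal{O}(k^2)$ for a fixed $n$, which is a useful sanity check on the derivation above (the full solution instead goes through interpolation to handle many values of $n$). A sketch under the assumption $x \not\equiv 1 \pmod{998244353}$; function and variable names are mine:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 998244353;

ll power(ll a, ll b, ll m = MOD) {
    ll r = 1; a %= m;
    for (; b; b >>= 1, a = a * a % m) if (b & 1) r = r * a % m;
    return r;
}

// h[k] = sum_{i=1}^{n} i^k x^i (mod p), via the editorial's recurrence
// (x-1) h_{k,n} = x^{n+1} n^k + sum_{j=0}^{k-1} (-1)^{k-j} C(k,j) h_{j,n},
// with the base case h_{0,n} = (x^{n+1} - x)/(x-1). Requires x != 1 (mod p).
vector<ll> powerSums(ll x, ll n, int K) {
    vector<vector<ll>> C(K + 1, vector<ll>(K + 1, 0));
    for (int i = 0; i <= K; i++) {
        C[i][0] = 1;
        for (int j = 1; j <= i; j++) C[i][j] = (C[i-1][j-1] + C[i-1][j]) % MOD;
    }
    ll invxm1 = power((x - 1 + MOD) % MOD, MOD - 2); // 1/(x-1) mod p
    ll xn1 = power(x, n + 1);                        // x^{n+1} mod p
    vector<ll> h(K + 1);
    for (int k = 0; k <= K; k++) {
        ll t = xn1 * power(n % MOD, k) % MOD;        // x^{n+1} n^k
        for (int j = 0; j < k; j++) {
            ll term = C[k][j] * h[j] % MOD;
            if ((k - j) & 1) t = (t - term + MOD) % MOD;
            else t = (t + term) % MOD;
        }
        if (k == 0) t = (t - x % MOD + MOD) % MOD;   // base case: subtract x
        h[k] = t * invxm1 % MOD;
    }
    return h;
}
```

For instance, with $x=2$, $n=2$: $h_0 = 2+4 = 6$, $h_1 = 1\cdot2 + 2\cdot4 = 10$, $h_2 = 1\cdot2 + 4\cdot4 = 18$.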
[ "dp", "math" ]
3,500
#include<cmath> #include<queue> #include<cstdio> #include<cstring> #include<assert.h> #include<iostream> #include<algorithm> using namespace std; #define LL long long #define int long long const int MAXN=4e5+5; const int MAXNN=1e5+5; int n,k,P,fac[MAXN],ifac[MAXN],Inv[MAXN],rev[MAXN],x[MAXN],y[MAXN],K[MAXN],B[MAXN],g[MAXN],F[MAXN],Q[MAXN],val[MAXN]; int a[MAXNN],b[MAXNN],W[21][MAXN*2]; const int mod=998244353; vector<int>G[MAXNN]; inline int mul(int a,int b){return 1ll*a*b%mod;} inline int dec(int a,int b){return ((a-b)%mod+mod)%mod;} inline int add(int a,int b){return ((a+b)%mod+mod)%mod;} inline int qkpow(int a,int b){ int ans=1,base=a; while(b){ if(b&1)ans=mul(ans,base); base=mul(base,base); b>>=1; } return ans; } namespace IO{ const int sz=1<<22; char a[sz+5],b[sz+5],*p1=a,*p2=a,*t=b,p[105]; inline char gc(){ return p1==p2?(p2=(p1=a)+fread(a,1,sz,stdin),p1==p2?EOF:*p1++):*p1++; } template<class T> void gi(T& x){ x=0; char c=gc(); for(;c<'0'||c>'9';c=gc()); for(;c>='0'&&c<='9';c=gc()) x=x*10+(c-'0'); } inline void flush(){fwrite(b,1,t-b,stdout),t=b; } inline void pc(char x){*t++=x; if(t-b==sz) flush(); } template<class T> void pi(T x,char c='\n'){ if(x==0) pc('0'); int t=0; for(;x;x/=10) p[++t]=x%10+'0'; for(;t;--t) pc(p[t]); pc(c); } struct F{~F(){flush();}}f; } using IO::gi; using IO::pi; using IO::pc; const int S=4; struct Matrix{ int a[4][4]; Matrix(){memset(a,0,sizeof a);} void clear(){memset(a,0,sizeof a);} void init(){for(int i=0;i<S;i++)a[i][i]=1;} int* operator [] (int x){return a[x];} Matrix friend operator +(Matrix A,Matrix B){ Matrix ret=Matrix(); for(int i=0;i<S;i++) for(int j=0;j<=i;j++) ret[i][j]=add(A[i][j],B[i][j]); return ret; } Matrix friend operator -(Matrix A,Matrix B){ Matrix ret=Matrix(); for(int i=0;i<S;i++) for(int j=0;j<=i;j++) ret[i][j]=dec(A[i][j],B[i][j]); return ret; } Matrix friend operator *(Matrix A,Matrix B){ Matrix ret=Matrix(); for(int k=0;k<S;k++) for(int j=0;j<=k;j++) if(B[k][j]) for(int i=k;i<S;i++) 
ret[i][j]=1ll*(ret[i][j]+1ll*A[i][k]*B[k][j])%mod; return ret; } Matrix friend operator *(Matrix A,int x){ Matrix ret=Matrix(); for(int i=0;i<S;i++) for(int j=0;j<=i;j++) ret[i][j]=1ll*A[i][j]*x%mod; return ret; } }d[MAXNN],dp[MAXNN],ans; Matrix operator += (Matrix &p,Matrix q){return p=p+q;} Matrix operator -= (Matrix &p,Matrix q){return p=p-q;} Matrix operator *= (Matrix &p,Matrix q){return p=p*q;} inline int C(int n,int m){ if(n<m||m<0)return 0; return 1ll*fac[n]*ifac[m]%mod*ifac[n-m]%mod; } void init(){ fac[0]=ifac[0]=Inv[0]=Inv[1]=1; for(int i=1;i<MAXN;i++)fac[i]=1ll*fac[i-1]*i%mod; ifac[MAXN-1]=qkpow(fac[MAXN-1],mod-2); for(int i=MAXN-2;i>=1;i--)ifac[i]=1ll*ifac[i+1]*(i+1)%mod; for(int i=2;i<MAXN;i++)Inv[i]=1ll*(mod-mod/i)*Inv[mod%i]%mod; int wn; for(int i=0,x=1;x<MAXN;i++,x<<=1){ W[i][x]=1; wn=qkpow(3,(mod-1)/(x<<1)); for(int j=1;j<x;j++)W[i][x+j]=1ll*wn*W[i][x+j-1]%mod; wn=qkpow(wn,mod-2); for(int j=1;j<x;j++)W[i][x-j]=1ll*wn*W[i][x-j+1]%mod; } } inline void NTT(int *a,int type,int n){ for(int i=0;i<n;i++)if(i<rev[i])swap(a[i],a[rev[i]]); LL x,y; for(int i=1,cnt=0;i<n;cnt++,i<<=1){ for(int j=0;j<n;j+=(i<<1)){ for(int t=0,k=j;k<j+i;k++,t+=type){ x=a[k],y=1ll*W[cnt][i+t]*a[k+i]; a[k]=(x+y)%mod; a[k+i]=(x-y)%mod; } } } if(type==-1)for(int i=0;i<n;i++)a[i]=mul(a[i],Inv[n]); } void Polyinv(int *a,int *b,int len){ static int c[MAXN]; if(len==1){ b[0]=qkpow(a[0],mod-2); return ; } Polyinv(a,b,(len+1)>>1); int l=0,m=(len<<1),n=1; for(;n<=m;n<<=1,l++); for(int i=0;i<n;i++)rev[i]=(rev[i>>1]>>1)|((i&1)<<(l-1)); for(int i=0;i<len;i++)c[i]=a[i]; for(int i=len;i<n;i++)c[i]=0; NTT(c,1,n),NTT(b,1,n); for(int i=0;i<n;i++)b[i]=1ll*(2-1ll*b[i]*c[i])%mod*b[i]%mod; NTT(b,-1,n); for(int i=len;i<n;i++)b[i]=0; } inline void Polydao(int *Aa,int *Bb,int len){ Bb[len-1]=0; for(int i=1;i<len;i++)Bb[i-1]=1ll*i*Aa[i]%mod; } int init_NTT(int L){ int l=0,m=L,n=1; for(;n<=m;n<<=1,l++); for(int i=0;i<n;i++)rev[i]=(rev[i>>1]>>1)|((i&1)<<(l-1)); return n; } void MUL(int *a,int *b,int *c,int 
n,int m,int lim){ static int t1[MAXN],t2[MAXN],t3[MAXN]; for(int i=0;i<n;i++)t1[i]=a[i]; for(int i=0;i<m;i++)t2[i]=b[i]; for(int i=n;i<lim;i++)t1[i]=0; for(int i=m;i<lim;i++)t2[i]=0; NTT(t1,1,lim),NTT(t2,1,lim); for(int i=0;i<lim;i++)t3[i]=1ll*t1[i]*t2[i]%mod; NTT(t3,-1,lim); for(int i=0;i<n+m-1;i++)c[i]=t3[i]; } namespace Evaluation{ #define ls(p) p<<1 #define rs(p) p<<1|1 int *T1[MAXN],*T2[MAXN],T3[MAXN],T4[MAXN],flow[MAXN*32],*now=flow; void clear(){now=flow;} inline void MUL2(int *a,int *b,int *c,int n,int m,int lim){ static int t1[MAXN],t2[MAXN],t3[MAXN]; for(int i=0;i<n;i++)t1[i]=a[i]; for(int i=0;i<m;i++)t2[i]=b[i]; reverse(t2,t2+m); for(int i=n;i<lim;i++)t1[i]=0; for(int i=m;i<lim;i++)t2[i]=0; NTT(t1,1,lim),NTT(t2,1,lim); for(int i=0;i<lim;i++)t3[i]=1ll*t1[i]*t2[i]%mod; NTT(t3,-1,lim); for(int i=m-1;i<n;i++)c[i-m+1]=t3[i]; } void pre(int *Q,int p,int l,int r){ T1[p]=now,now+=r-l+2; T2[p]=now,now+=r-l+2; for(int i=0;i<=r-l+1;i++)T1[p][i]=T2[p][i]=0; if(l==r){ T1[p][0]=1; T1[p][1]=(mod-Q[l])%mod; return ; } int mid=(l+r)>>1; pre(Q,ls(p),l,mid); pre(Q,rs(p),mid+1,r); if(r-l+1<=256){ for(int i=0;i<=mid-l+1;i++) for(int j=0;j<=r-mid;j++) T1[p][i+j]=add(T1[p][i+j],1ll*T1[ls(p)][i]*T1[rs(p)][j]%mod); }else{ int lim=init_NTT(r-l+1); MUL(T1[ls(p)],T1[rs(p)],T1[p],mid-l+2,r-mid+1,lim); } } void Solve(int *S,int p,int l,int r){ if(l==r){ S[l]=T2[p][0]; return ; } int mid=(l+r)>>1; int lim=init_NTT(r-l+1); MUL2(T2[p],T1[rs(p)],T2[ls(p)],r-l+1,r-mid+1,lim); MUL2(T2[p],T1[ls(p)],T2[rs(p)],r-l+1,mid-l+2,lim); Solve(S,ls(p),l,mid); Solve(S,rs(p),mid+1,r); } //opt0/1 区分多点求值/快速插值 void evaluation(int *F,int *Q,int *S,int n,int m,int opt){ clear(); static int AF[MAXN]; n=max(n,m+1),m=max(m,n-1); pre(Q,1,1,m); if(opt){ for(int i=0;i<n;i++)AF[i]=T1[1][i]; reverse(AF,AF+n); Polydao(AF,AF,n+1); } Polyinv(T1[1],T3,m+1); reverse(T3,T3+m+1); int lim=init_NTT(n+m+1); if(!opt)MUL(F,T3,T4,n,m+1,lim); else MUL(AF,T3,T4,n,m+1,lim); for(int i=n;i<n+m;i++)T2[1][i-n]=T4[i]; Solve(S,1,1,m); 
for(int i=1;i<=m;i++)S[i]=(1ll*S[i]*Q[i]%mod+(!opt?F[0]:AF[0]))%mod; for(int i=0;i<=n+m+1;i++)T3[i]=T4[i]=0; } }; namespace Lagrange_Interpolation{ #define ls(p) p<<1 #define rs(p) p<<1|1 int S[MAXN],*T[MAXN],flow[MAXN*32],*now=flow,tmp1[MAXN],tmp2[MAXN],e1[MAXN],e2[MAXN]; void solve(int p,int l,int r){ T[p]=now,now+=r-l+2; if(l==r){ T[p][0]=S[l]; return ; } int mid=(l+r)>>1; solve(ls(p),l,mid); solve(rs(p),mid+1,r); for(int i=0;i<mid-l+1;i++)tmp1[i]=T[ls(p)][i]; for(int i=0;i<r-mid;i++)tmp2[i]=T[rs(p)][i]; for(int i=0;i<r-mid+1;i++)e1[i]=Evaluation::T1[rs(p)][i]; reverse(e1,e1+r-mid+1); for(int i=0;i<mid-l+2;i++)e2[i]=Evaluation::T1[ls(p)][i]; reverse(e2,e2+mid-l+2); int lim=init_NTT(r-l+1); MUL(tmp1,e1,e1,mid-l+1,r-mid+1,lim); MUL(tmp2,e2,e2,r-mid,mid-l+2,lim); for(int i=0;i<=r-l;i++)T[p][i]=add(e1[i],e2[i]); } void Interpolation(int *x,int *y,int *ans,int n){ Evaluation::evaluation(x,x,S,n+1,n,1); for(int i=1;i<=n;i++)S[i]=1ll*y[i]*qkpow(S[i],mod-2)%mod; solve(1,1,n); for(int i=0;i<n;i++)ans[i]=T[1][i]; } } void Pre(int P){ int invP=qkpow(P,mod-2),kk=0,bb=0; K[0]=1,B[0]=0; for(int i=1;i<=k+1;i++){ K[i]=1ll*K[i-1]*invP%mod; B[i]=1ll*(B[i-1]+qkpow(i,k))*invP%mod; int Num=C(k+1,i); if(i&1)kk=dec(kk,1ll*Num*K[i]%mod),bb=dec(bb,1ll*Num*B[i]%mod); else kk=add(kk,1ll*Num*K[i]%mod),bb=add(bb,1ll*Num*B[i]%mod); } kk=(kk+1),kk=(kk==mod?0:kk); g[0]=1ll*(mod-bb)*qkpow(kk,mod-2)%mod; for(int i=1;i<=k;i++)g[i]=add(1ll*g[0]*K[i]%mod,B[i]); for(int i=1;i<=k+1;i++)x[i]=i-1,y[i]=(g[i-1]+mod)%mod; Lagrange_Interpolation::Interpolation(x,y,F,k+1); for(int i=0;i<=k;i++)F[i]=(F[i]+mod)%mod; // cout<<endl; for(int i=1;i<=n;i++)Q[i]=b[i]; Evaluation::evaluation(F,Q,val,k+1,n,0); for(int i=1;i<=n;i++)val[i]=(val[i]+mod)%mod; } inline void makeit(int Id,int y,Matrix& S){ //[S1S2,S1,S2,product of b] int X=dec(1ll*qkpow(P,y+1)*val[Id]%mod,1ll*P*g[0]%mod); S.a[0][0]=add(y,1),S.a[1][0]=y,S.a[2][0]=X,S.a[3][0]=X; S.a[1][1]=add(y,1),S.a[3][1]=X; S.a[2][2]=add(y,1),S.a[3][2]=y; 
S.a[3][3]=add(y,1); } signed main(){ gi(n),gi(k),gi(P); init(); for(int i=1;i<=n;i++)gi(b[i]); Pre(P); for(int i=1;i<=n;i++)makeit(i,b[i],d[i]); dp[1]=d[1]; for(int i=2;i<=n;i++)dp[i]=dp[i-1]*d[i]+d[i]; for(int i=1;i<=n;i++)ans+=dp[i]; pi(ans.a[3][0],'\n'); return 0; }
1678
A
Tokitsukaze and All Zero Sequence
Tokitsukaze has a sequence $a$ of length $n$. For each operation, she selects two numbers $a_i$ and $a_j$ ($i \ne j$; $1 \leq i,j \leq n$). - If $a_i = a_j$, change one of them to $0$. - Otherwise change both of them to $\min(a_i, a_j)$. Tokitsukaze wants to know the minimum number of operations to change all numbers in the sequence to $0$. It can be proved that the answer always exists.
We observe that when there is a $0$ in the sequence, it is optimal to pick $0$ and any nonzero number for each operation. Therefore, when the sequence contains zeros, let $cnt$ be their number; the answer is $n - cnt$. Otherwise, when there is no $0$ in the sequence, there are two cases. When there exist two equal numbers in the sequence, we can perform $1$ operation to reach the situation of having a $0$ in the sequence, so the answer is $n$. When all numbers in the sequence are distinct, we can perform $2$ operations to reach that situation, so the answer is $n + 1$.
[ "implementation" ]
800
#include <bits/stdc++.h>
using namespace std;
int main(){
    int t;
    scanf("%i",&t);
    while(t--){
        int n;
        scanf("%i",&n);
        vector<int> a(n);
        for(int i=0;i<n;i++) scanf("%i",&a[i]);
        sort(a.begin(),a.end());
        int zero=count(a.begin(),a.end(),0);
        if(zero>0) printf("%i\n",n-zero);
        else{
            bool same=false;
            for(int i=1;i<n;i++) if(a[i]==a[i-1]) same=true;
            if(same) printf("%i\n",n);
            else printf("%i\n",n+1);
        }
    }
    return 0;
}
1678
B2
Tokitsukaze and Good 01-String (hard version)
This is the hard version of the problem. The only difference between the two versions is that the harder version asks additionally for a minimum number of subsegments. Tokitsukaze has a binary string $s$ of length $n$, consisting only of zeros and ones, $n$ is \textbf{even}. Now Tokitsukaze divides $s$ into \textbf{the minimum number} of \textbf{contiguous} subsegments, and for each subsegment, all bits in each subsegment are the same. After that, $s$ is considered good if the lengths of all subsegments are even. For example, if $s$ is "11001111", it will be divided into "11", "00" and "1111". Their lengths are $2$, $2$, $4$ respectively, which are all even numbers, so "11001111" is good. Another example, if $s$ is "1110011000", it will be divided into "111", "00", "11" and "000", and their lengths are $3$, $2$, $2$, $3$. Obviously, "1110011000" is not good. Tokitsukaze wants to make $s$ good by changing the values of some positions in $s$. Specifically, she can perform the operation any number of times: change the value of $s_i$ to '0' or '1' ($1 \leq i \leq n$). Can you tell her the minimum number of operations to make $s$ good? \textbf{Meanwhile, she also wants to know the minimum number of subsegments that $s$ can be divided into among all solutions with the minimum number of operations.}
Obviously, the operation is: for each pair of adjacent unequal characters, change both of them to $0$ or to $1$. In other words, the string is divided into adjacent blocks of length $2$. If a block is "01" or "10", it needs $1$ operation; otherwise no operation is required. To minimize the number of contiguous segments, a simple greedy idea is to change each block requiring an operation (i.e. "01" or "10") into the form of the previous or next block requiring no operation (i.e. "11" or "00"). For example: "0010" changes to "0000" and "1101" changes to "1111". This way, a block requiring an operation contributes nothing to the final number of contiguous segments, and we only need to count the runs of "11" and "00" blocks. In particular, if all blocks are "01" or "10", the total contribution to the final number of contiguous segments is $1$. More systematically, process the string from front to back; observe that the maintained prefix must end with "00" or "11". Therefore each length-$2$ block has to be converted into "00" or "11", enumerating the cost of changing the current block into each. Maintain the minimum number of operations during the DP, trying to keep the current block consistent with the end of the prefix. Define $dp[n][2]$, where the first dimension is the maintained prefix length and the second is the suffix state of the prefix. The transitions are: $dp[i][0]=\min(dp[i-2][0]+\left \{c_0,0\right \},dp[i-2][1]+\left \{c_0,1\right \})$ $dp[i][1]=\min(dp[i-2][0]+\left \{c_1,1\right \},dp[i-2][1]+\left \{c_1,0\right \})$ where $c_k$ is the cost of converting the current block to "kk", and the pairs $\{operations, segments\}$ are added componentwise and compared lexicographically.
[ "dp", "greedy", "implementation" ]
1,800
#include<bits/stdc++.h> #define fre(z) freopen(z".in","r",stdin),freopen(z".out","w",stdout) #define lowit(x) (x&-x) #define range(x) begin(x), end(x) #define sz(x) (int)(x).size() #define pb push_back #define sto \ std::ios::sync_with_stdio(false); \ std::cin.tie(nullptr); \ std::cout.tie(nullptr); using namespace std; typedef long long ll; typedef pair<ll,ll> PII; inline ll read(){ ll x=0;char ch;bool f=true; for(ch=getchar();!isdigit(ch);ch=getchar())if(ch=='-')f^=f; for(;isdigit(ch);ch=getchar())x=(x<<3)+(x<<1)+(ch^48); return f?x:-x; } //#define LOCAL_DEFINE #define DEBUG(x) cerr<<(#x)<<'='<<(x)<<endl const int N=2e5+7; const int INF=1e9; char s[N]; int c[N]; PII dp[N][2]; void solve(){ int n=read(); scanf("%s",s+1); for(int i=1;i<=n;i++)s[i]-='0'; for(int i=1;i<=n/2;i++)c[i]=s[i*2]+s[i*2-1]*2; for(int i=1;i<=n/2;i++)for(int j=0;j<2;j++)dp[i][j]={INF,INF}; for(int i=1;i<=n/2;i++)for(int j=0;j<2;j++){ int cc=c[i],dd=j?3:0,cnt=((dd%2)^(cc%2))+((dd/2)^(cc/2)); for(int k=0;k<2;k++) dp[i][j]=min(dp[i][j],{dp[i-1][k].first+cnt,dp[i-1][k].second+(j!=k)}); } PII ans=min(dp[n/2][0],dp[n/2][1]); cout<<ans.first<<" "<<ans.second+1<<"\n"; } int main(){ #ifdef ONLINE_JUDGE #else //fre("1"); #endif ll T=1; T=read(); for(int i=1;i<=T;i++)solve(); #ifdef LOCAL_DEFINE cerr << "Time elapsed: " << 1.0 * clock() / CLOCKS_PER_SEC << " s.\n"; #endif return 0; }
1679
A
AvtoBus
Spring has come, and the management of the AvtoBus bus fleet has given the order to replace winter tires with summer tires on all buses. You own a small bus service business and you have just received an order to replace $n$ tires. You know that the bus fleet owns two types of buses: with two axles (these buses have $4$ wheels) and with three axles (these buses have $6$ wheels). You don't know how many buses of which type the AvtoBus bus fleet owns, so you wonder how many buses the fleet might have. You have to determine the minimum and the maximum number of buses that can be in the fleet if you know that the total number of wheels for all buses is $n$.
Let the number of buses with two axles be $x$ and the number of buses with three axles be $y$. Then the equality $4x + 6y = n$ must hold. If $n$ is odd, there is no answer, because the left side is always even. Now divide both sides by two: $2x + 3y = \frac{n}{2}$, and let $m = \frac{n}{2}$. To maximize the number of buses, make $x$ as large as possible: we get $2 + \ldots + 2 + 2 = m$ if $m$ is even, and $2 + \ldots + 2 + 3 = m$ otherwise. In both cases the number of buses is $\left\lfloor \frac{m}{2} \right\rfloor$. To minimize the number of buses, make $y$ as large as possible: we get $3 + \ldots + 3 + 3 = m$ if $m$ is divisible by $3$, $3 + \ldots + 3 + 2 = m$ if $m \equiv 2 \pmod 3$, and $3 + \ldots + 3 + 2 + 2 = m$ if $m \equiv 1 \pmod 3$. In all cases the number of buses is $\left\lceil \frac{m}{3} \right\rceil$. Also don't forget the case $n = 2$: each bus has at least four wheels, so in this case there is no answer. Time complexity: $\mathcal{O}(1)$.
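The case analysis collapses to two closed formulas in $m = n/2$. A minimal sketch (the dataset has no code for this problem, so the function name is my own):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Returns {min buses, max buses} for n wheels, or {-1,-1} if impossible.
// Follows the editorial: with m = n/2, solve 2x + 3y = m.
pair<ll,ll> countBuses(ll n) {
    if (n % 2 != 0 || n < 4) return {-1, -1}; // odd total, or fewer wheels than one bus
    ll m = n / 2;
    ll mx = m / 2;        // as many 4-wheel buses as possible: floor(m/2)
    ll mn = (m + 2) / 3;  // as many 6-wheel buses as possible: ceil(m/3)
    return {mn, mx};
}
```

For example, $n = 24$ wheels gives a minimum of $4$ buses ($6+6+6+6$) and a maximum of $6$ ($4 \cdot 6$).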
[ "brute force", "greedy", "math", "number theory" ]
900
null
1679
B
Stone Age Problem
Once upon a time Mike and Mike decided to come up with an outstanding problem for some stage of ROI (rare olympiad in informatics). One of them came up with a problem prototype but another stole the idea and proposed that problem for another stage of the same olympiad. Since then the first Mike has been waiting for an opportunity to propose the original idea for some other contest... Mike waited until this moment! You are given an array $a$ of $n$ integers. You are also given $q$ queries of two types: - Replace $i$-th element in the array with integer $x$. - Replace each element in the array with integer $x$. After performing each query you have to calculate the sum of all elements in the array.
As we want to perform the queries fast, we will store some variables: the current sum of all elements of the array $sum$, the index of the last query of the second type $lastSecondQuery$, and its value $lastSecondQueryValue$. For each element of the array we also store the index of the last query of the first type that changed this element, $lastFirstQuery[i]$, and its value $lastFirstQueryValue[i]$. Now let's answer the queries. To perform a query of the first type, we need to know the current value of $a_i$: if $lastSecondQuery > lastFirstQuery[i]$, then $a_i = lastSecondQueryValue$, and $a_i = lastFirstQueryValue[i]$ otherwise. Subtract $a_i$ from the sum, update $lastFirstQuery[i]$ and $lastFirstQueryValue[i]$, and add the new value of $a_i$ to the sum. To perform a query of the second type, update $lastSecondQuery$ and $lastSecondQueryValue$; the new sum of all elements of the array is $n \cdot lastSecondQueryValue$. Time complexity: $\mathcal{O}(n + q)$.
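A possible implementation of the timestamp scheme described above (the dataset has no code for this problem, so the struct and member names here are my own):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Timestamp trick from the editorial: O(1) per query, no lazy segment tree.
struct StoneAge {
    ll sum;
    int lastSecond = 0;        // time of the last "assign all" query
    ll lastSecondValue = 0;
    vector<int> lastFirst;     // per-element time of the last point assignment
    vector<ll> lastFirstValue;
    int clk = 0;

    StoneAge(const vector<ll>& a) : sum(0), lastFirst(a.size(), 0), lastFirstValue(a) {
        for (ll v : a) sum += v;
    }
    ll current(int i) const {  // what a[i] equals right now
        return lastSecond > lastFirst[i] ? lastSecondValue : lastFirstValue[i];
    }
    ll assignOne(int i, ll x) { // query type 1; returns the new sum
        ++clk;
        sum -= current(i);
        lastFirst[i] = clk;
        lastFirstValue[i] = x;
        sum += x;
        return sum;
    }
    ll assignAll(ll x) {        // query type 2; returns the new sum
        ++clk;
        lastSecond = clk;
        lastSecondValue = x;
        sum = (ll)lastFirst.size() * x;
        return sum;
    }
};
```

Starting from $a = (1,2,3)$: assigning $a_1 = 5$ gives sum $10$; assigning everything to $2$ gives sum $6$; then assigning $a_2 = 7$ gives sum $11$.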
[ "data structures", "implementation" ]
1,200
null
1679
C
Rooks Defenders
You have a square chessboard of size $n \times n$. Rows are numbered from top to bottom with numbers from $1$ to $n$, and columns — from left to right with numbers from $1$ to $n$. So, each cell is denoted with pair of integers $(x, y)$ ($1 \le x, y \le n$), where $x$ is a row number and $y$ is a column number. You have to perform $q$ queries of three types: - Put a new rook in cell $(x, y)$. - Remove a rook from cell $(x, y)$. It's guaranteed that the rook was put in this cell before. - Check if each cell of subrectangle $(x_1, y_1) - (x_2, y_2)$ of the board is attacked by at least one rook. Subrectangle is a set of cells $(x, y)$ such that for each cell two conditions are satisfied: $x_1 \le x \le x_2$ and $y_1 \le y \le y_2$. Recall that cell $(a, b)$ is attacked by a rook placed in cell $(c, d)$ if either $a = c$ or $b = d$. In particular, the cell containing a rook is attacked by this rook.
Consider some subrectangle. Note that each of its cells is attacked by some rook if and only if there is at least one rook in every row $x$ ($x_1 \le x \le x_2$) or in every column $y$ ($y_1 \le y \le y_2$). Now we will solve the problem using this criterion. Let's create a set $freeRows$ in which we store the indices of rows containing no rooks, and similarly a set $freeCols$ for the indices of columns containing no rooks. To answer a query of the third type, we check whether there is at least one $x$ in $freeRows$ such that $x_1 \le x \le x_2$ and at least one $y$ in $freeCols$ such that $y_1 \le y \le y_2$; the subrectangle is fully attacked unless both exist. Since these sets are ordered, we can perform this query in $\mathcal{O}(\log n)$ using binary search. For queries of the first and second types, store for each row and column how many rooks it currently contains. When we add a rook, increment these counters for the corresponding row and column and remove that row from the $freeRows$ set and that column from the $freeCols$ set. When we remove a rook, decrement the counters, and if the row or the column has no rooks left, add its index back to $freeRows$ or $freeCols$. Time complexity: $\mathcal{O}(q \log n)$.
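A sketch of the described structure (the dataset has no code for this problem; the struct and method names are my own):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Counters per row/column plus ordered sets of empty rows/columns,
// queried with lower_bound as in the editorial. 1-based coordinates.
struct RookBoard {
    vector<int> rowCnt, colCnt;
    set<int> freeRows, freeCols;

    RookBoard(int n) : rowCnt(n + 1, 0), colCnt(n + 1, 0) {
        for (int i = 1; i <= n; i++) freeRows.insert(i), freeCols.insert(i);
    }
    void put(int x, int y) {
        if (rowCnt[x]++ == 0) freeRows.erase(x);
        if (colCnt[y]++ == 0) freeCols.erase(y);
    }
    void remove(int x, int y) {
        if (--rowCnt[x] == 0) freeRows.insert(x);
        if (--colCnt[y] == 0) freeCols.insert(y);
    }
    // True iff every cell of the subrectangle is attacked by at least one rook.
    bool covered(int x1, int y1, int x2, int y2) const {
        auto it = freeRows.lower_bound(x1);
        bool rowsOk = (it == freeRows.end() || *it > x2); // no empty row inside
        auto jt = freeCols.lower_bound(y1);
        bool colsOk = (jt == freeCols.end() || *jt > y2); // no empty column inside
        return rowsOk || colsOk;
    }
};
```

For a $3 \times 3$ board with a single rook at $(1,1)$, the subrectangle $(1,1)-(1,3)$ is covered (row $1$ is occupied) while $(1,1)-(2,2)$ is not (cell $(2,2)$ is unattacked).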
[ "data structures", "implementation" ]
1,400
null
1679
D
Toss a Coin to Your Graph...
One day Masha was walking in the park and found a graph under a tree... Surprised? Did you think that this problem would have some logical and reasoned story? No way! So, the problem... Masha has an oriented graph whose $i$-th vertex contains some positive integer $a_i$. Initially Masha can put a coin at some vertex. In one operation she can move a coin placed in some vertex $u$ to any other vertex $v$ such that there is an oriented edge $u \to v$ in the graph. Each time the coin is placed in some vertex $i$, Masha writes down the integer $a_i$ in her notebook (in particular, when Masha initially puts a coin at some vertex, she writes down the integer written at this vertex). Masha wants to make exactly $k - 1$ operations in such a way that the maximum number written in her notebook is as small as possible.
Note that the existence of an answer is monotone with respect to the value of the maximum on the path: if we were able to construct a path with maximum not greater than $x$, we can also construct a path with maximum not greater than $x + 1$. This leads to the idea of binary searching on the answer. Fix some integer $x$ via binary search. We have to check whether there is a path in the graph that consists of $k - 1$ edges and whose maximum is not greater than $x$. First, keep only the vertices whose values are not greater than $x$. Now we need to check whether the required path exists in the resulting graph. If there is a cycle in this graph, it contains a path of every length, so in particular a path of length $k - 1$. Otherwise we have a directed acyclic graph: find the longest path in it and compare its length with $k - 1$. Sort the graph topologically and calculate $dp[v]$, the length of the longest path beginning at vertex $v$; this is a well-known classical problem. Time complexity: $\mathcal{O}((n + m) \log MAX)$.
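The check for a fixed $x$ can be sketched with Kahn's algorithm, which detects a surviving cycle for free (the dataset has no code for this problem; names and 0-based vertices are my own, and the DP here counts vertices on a path, so a path with $k-1$ edges has $k$ vertices):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// check(x): keep vertices with a[v] <= x; return true iff the induced graph
// has a path visiting at least k vertices (a surviving cycle means paths of
// every length exist, so it always passes).
bool check(int n, const vector<pair<int,int>>& edges, const vector<ll>& a, ll x, ll k) {
    vector<vector<int>> g(n);
    vector<int> indeg(n, 0);
    int kept = 0;
    for (int v = 0; v < n; v++) kept += a[v] <= x;
    if (kept == 0) return false;
    for (auto [u, v] : edges)
        if (a[u] <= x && a[v] <= x) { g[u].push_back(v); indeg[v]++; }
    queue<int> q;
    vector<ll> dp(n, 0); // dp[v] = max vertices on a kept path ending at v
    for (int v = 0; v < n; v++)
        if (a[v] <= x && indeg[v] == 0) { q.push(v); dp[v] = 1; }
    int processed = 0;
    ll best = 1;
    while (!q.empty()) {
        int u = q.front(); q.pop(); processed++;
        best = max(best, dp[u]);
        for (int v : g[u]) {
            dp[v] = max(dp[v], dp[u] + 1);
            if (--indeg[v] == 0) q.push(v);
        }
    }
    if (processed < kept) return true; // a cycle survived: arbitrarily long paths
    return best >= k;                  // longest path has >= k vertices (k-1 edges)
}
```

The answer is then the smallest vertex value $x$ (binary search over the sorted values) for which this check passes.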
[ "binary search", "dfs and similar", "dp", "graphs" ]
1,900
null
1679
E
Typical Party in Dorm
Today is a holiday in the residence hall — Oleh arrived, in honor of which the girls gave him a string. Oleh liked the gift a lot, so he immediately thought up and offered you, his best friend, the following problem. You are given a string $s$ of length $n$, which consists of the first $17$ lowercase Latin letters {$a$, $b$, $c$, $\ldots$, $p$, $q$} and question marks. And $q$ queries. Each query is defined by a set of pairwise distinct lowercase first $17$ letters of the Latin alphabet, which can be used to replace the question marks in the string $s$. The answer to the query is the sum of the number of distinct substrings that are palindromes over all strings that can be obtained from the original string $s$ by replacing question marks with available characters. The answer must be calculated modulo $998\,244\,353$. \textbf{Pay attention!} Two substrings are different when their start and end positions in the string are different. So, the number of different substrings that are palindromes for the string aba will be $4$: a, b, a, aba. Consider examples of replacing question marks with letters. For example, from the string aba??ee when querying {$a$, $b$} you can get the strings ababaee or abaaaee but you cannot get the strings pizza, abaee, abacaba, aba?fee, aba47ee, or abatree. Recall that a palindrome is a string that reads the same from left to right as from right to left.
We are given a string $s$, and we need to count the total number of palindromic substrings over all possible versions of it, where the characters "$?$" can be replaced by letters of the English alphabet given in the query. Let us first solve the problem for a single query. First, note that instead of counting the number of palindromes in each possible version of the string $s$, we can count, for each substring $[l;r]$, the number of strings in which it is a palindrome. Consider the substring $[l;r]$. Suppose the set of allowed characters in this query is $Q$. What is the effect of this substring on the answer? Divide the characters of the substring into pairs: first with last, second with penultimate, and so on. If the length of the substring is odd, pair the central character with itself. Now let's consider each pair separately: If the two characters of the pair are equal and they are not question marks, this pair does not affect the answer. If the two characters of the pair are both question marks, this pair multiplies the answer by $|Q|$, where $|Q|$ is the number of characters available in this query. (Let's call the number of such pairs $f$.) If the two characters of the pair are not equal and neither of them is a question mark, this substring will never be a palindrome. If the two characters of the pair are not equal and one of them is a question mark, we must check whether the letter that is not a question mark belongs to the set $Q$; if not, this substring will never be a palindrome in this query. In addition, every question mark outside the substring can be replaced freely; let $d$ be the number of such question marks. Thus, to get the answer for the substring $[l;r]$ it is enough to: check for blocking pairs - if they exist, the answer is $0$ by definition. Otherwise, the answer is $|Q|^{d+f}$, where $d$ and $f$ are defined above. Let's assign to each possible set of letters a binary mask of size $\alpha$, where $\alpha$ is the size of the alphabet in the problem. In what follows, we will treat a mask and a set as the same thing.
Consider the possible blocking pairs of some substring $[l;r]$; they are of two kinds: If the two characters of the pair are not equal and neither of them is a question mark, this substring will never be a palindrome and we do not consider it. If the two characters of the pair are not equal and one of them is a question mark, the character that is not a question mark must belong to the query for us to count this substring; let $P$ be the set of all such required characters. Next, note the following fact: $d+f$ does not depend on the query $Q$, because $d$ depends only on $l$ and $r$, and $f$ only on the number of pairs where both characters are question marks. It follows that every substring $[l;r]$ either is simply ignored if it has a blocking pair of the first kind (these substrings will not be mentioned further in the tutorial: when we say substrings, we automatically mean those described below), or is characterized by a pair of numbers $(P; d+f)$. Consider an arbitrary query $Q$; how do we calculate the answer for it? We go through all substrings, check whether $P \subseteq Q$, and if so, add $|Q|^{d+f}$ to the answer. The values of $P$ and $d+f$ for the substring $[l;r]$ can be found quickly from the same values for the substring $[l+1;r-1]$. Thus, by iterating over the parity of the palindrome length, then its center, and then its length, we can quickly find these values for all substrings. So far our solution has asymptotics $O$($n^2$ $+$ $q \cdot n^2$), which is obviously too slow. But first, let's figure out how to solve the problem if $|Q|$ is fixed. Let's create an array of size $2^\alpha$ (let's call it $R$). For each substring, add the value $|Q|^{d+f}$ to $R_P$. What does this give us? $R_i$ will then hold the sum of contributions of all substrings whose $P$ equals $i$. To find the answer in this case we have to sum all $R_i$ where $i \subseteq Q$.
Actually, we have reached the asymptotics $O$($n^2$ $+$ $q \cdot 2^\alpha$) or $O$($n^2$ $+$ $3^\alpha$). But the problem of finding the sum over subsets is a well-known one and solves our subproblem in $O$($\alpha$ $\cdot$ $2^\alpha$). In more detail: https://codeforces.com/blog/entry/45223 So we have reached the asymptotics of $O$($n^2$ $+$ $\alpha \cdot 2^\alpha$), which is enough. Let us return to the original problem. First, note that there are not many different values of $|Q|$ - just $\alpha$. So we create $\alpha$ different arrays of size $2^\alpha$. When processing the current substring, we add $i^{d+f}$ to each of the $\alpha$ arrays at position $P$, where $i$ is the index of the current array. Thus, it is as if we solved the problem $\alpha$ times for a fixed $|Q|$. To get the answer for a query, we take the sum over subsets of $Q$ in the $|Q|$-th array. The final asymptotics of the solution is $O$($\alpha \cdot n^2$ $+$ $\alpha^2 \cdot 2^\alpha$). Optimizations applied in the author's solution: Power optimization: we will often need the values of numbers from $1$ to $\alpha$ raised to powers from $0$ to $n$, so to remove a $\log$ factor from the asymptotics, these values are precalculated. This optimization is necessary to guarantee a full score on the problem. Addition optimization: we do not need to add a pair ($P$; $d+f$) to arrays whose index is less than $|P|$, because they will not affect the answer in any way (the sets from which the answer is taken in such an array have fewer elements than $P$, so they cannot be supersets of $P$). This optimization is unlikely to have much effect on runtime; it is written here for a clearer understanding of the author's solution. Modulo optimization: the modulo operation is slower than other operations, so it is worth avoiding. Here we often take the sum of two numbers that do not exceed $998\,244\,353$ modulo $M$, so we replace (a+b)%M with (a+b >= M ? a+b-M : a+b).
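The sum-over-subsets transform referenced above can be written as a small stand-alone routine (a sketch; the array name $R$ follows the tutorial, the function name is illustrative):

```python
MOD = 998244353

def sum_over_subsets(R, alpha):
    """In-place SOS DP over an array of size 2**alpha: afterwards, R[m] is
    the sum of the original R[s] over all submasks s of m, modulo MOD."""
    for bit in range(alpha):
        for m in range(1 << alpha):
            if m & (1 << bit):
                R[m] = (R[m] + R[m ^ (1 << bit)]) % MOD
    return R
```

After running this on the $|Q|$-th array, the answer to a query with mask $Q$ is simply the entry at index $Q$.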
[ "bitmasks", "combinatorics", "dp", "strings" ]
2,400
null
1679
F
Formalism for Formalism
Yura is a mathematician, and his cognition of the world is so absolute as if he had been solving formal problems for hundreds of trillions of billions of years. This problem is just that! Consider all non-negative integers from the interval $[0, 10^{n})$. For convenience we complement all numbers with leading zeros in such a way that each number from the given interval consists of exactly $n$ decimal digits. You are given a set of pairs $(u_i, v_i)$, where $u_i$ and $v_i$ are distinct decimal digits from $0$ to $9$. Consider a number $x$ consisting of $n$ digits. We will enumerate all digits from left to right and denote them as $d_1, d_2, \ldots, d_n$. In one operation you can swap digits $d_i$ and $d_{i + 1}$ if and only if there is a pair $(u_j, v_j)$ in the set such that at least one of the following conditions is satisfied: - $d_i = u_j$ and $d_{i + 1} = v_j$, - $d_i = v_j$ and $d_{i + 1} = u_j$. We will call the numbers $x$ and $y$, consisting of $n$ digits, equivalent if the number $x$ can be transformed into the number $y$ using some number of operations described above. In particular, every number is considered equivalent to itself. You are given an integer $n$ and a set of $m$ pairs of digits $(u_i, v_i)$. You have to find the maximum integer $k$ such that there exists a set of integers $x_1, x_2, \ldots, x_k$ ($0 \le x_i < 10^{n}$) such that for each $1 \le i < j \le k$ the number $x_i$ is \textbf{not} equivalent to the number $x_j$.
If you carefully read the problem statement, it becomes clear that we can transform some numbers into others, and we have to calculate the number of equivalence classes of all numbers consisting of $n$ digits. Let's say that the representative of some equivalence class is the lexicographically minimal number in this class. Now the problem is to calculate the number of distinct integers that are representatives of some equivalence class. For convenience let's build an undirected graph whose vertices are digits, and an edge connects digits $u$ and $v$ if and only if we are given a pair of digits $(u, v)$ or a pair of digits $(v, u)$. It's easy to see that a number is a representative of some class if and only if it doesn't contain a substring of the form $[x, d_1, d_2, \ldots, d_s, y]$, where $x > y$, and there are edges $(y, d_s), (y, d_{s - 1}), \ldots, (y, d_1), (y, x)$ in the graph. Now, knowing this criterion, let's calculate $dp[suff][mask]$ - the number of equivalence classes if we have added $suff$ digits from right to left to our number and now only digits from the $mask$ can be moved to the front. Let's go over the next digit $c$ that will be added to the left. If there is a digit $i$ in the mask such that $c > i$ and there is an edge $(c, i)$ in the graph, we cannot add digit $c$, because it would break our criterion. Otherwise we can add digit $c$; it remains to calculate the new mask. Firstly, the digit $c$ will be in the new mask. Secondly, all digits $i > c$ such that digit $i$ was in the old $mask$ and there is an edge $(c, i)$ in the graph will be in the new mask too. After that let's perform a transition from state $(suff, mask)$ to state $(suff + 1, newMask)$. The time complexity of this solution is $\mathcal{O}(n \cdot 2^\Sigma \cdot \Sigma^2)$, where $\Sigma$ is the size of the alphabet - $10$ for this problem. It's a bit slow, so we will optimize this solution.
We can precalculate the $newMask$ for each pair $(c, mask)$ using the algorithm described above. After that, using this information, we can recalculate $dp$ faster. Time complexity: $\mathcal{O}(2^\Sigma \cdot \Sigma^2 + n \cdot 2^\Sigma \cdot \Sigma)$.
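The dp above can be sketched as follows (an illustrative, unoptimized version without the transition precomputation; the function name, the `sigma` and `mod` parameters are assumptions, not the author's code):

```python
def count_classes(n, pairs, sigma=10, mod=998244353):
    # adj[c] = bitmask of digits that can be swapped with digit c
    adj = [0] * sigma
    for u, v in pairs:
        adj[u] |= 1 << v
        adj[v] |= 1 << u
    full = 1 << sigma
    lower = [(1 << c) - 1 for c in range(sigma)]                       # digits < c
    higher = [(full - 1) ^ ((1 << (c + 1)) - 1) for c in range(sigma)] # digits > c
    # dp[mask] = number of suffixes whose "movable to front" set equals mask
    dp = [0] * full
    dp[0] = 1
    for _ in range(n):
        ndp = [0] * full
        for mask in range(full):
            if not dp[mask]:
                continue
            for c in range(sigma):
                # c must not be adjacent to any smaller movable digit,
                # otherwise the number would not be lexicographically minimal
                if adj[c] & mask & lower[c]:
                    continue
                # new mask: c itself plus larger adjacent digits from the old mask
                nm = (1 << c) | (adj[c] & mask & higher[c])
                ndp[nm] = (ndp[nm] + dp[mask]) % mod
        dp = ndp
    return sum(dp) % mod
```

For example, with digits $\{0, 1\}$ and the single pair $(0, 1)$, the length-$2$ strings fall into three classes: $00$, $\{01, 10\}$, and $11$.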
[ "bitmasks", "dp", "math" ]
2,600
null
1680
A
Minimums and Maximums
An array is beautiful if both of the following two conditions meet: - there are \textbf{at least} $l_1$ and \textbf{at most} $r_1$ elements in the array equal to its minimum; - there are \textbf{at least} $l_2$ and \textbf{at most} $r_2$ elements in the array equal to its maximum. For example, the array $[2, 3, 2, 4, 4, 3, 2]$ has $3$ elements equal to its minimum ($1$-st, $3$-rd and $7$-th) and $2$ elements equal to its maximum ($4$-th and $5$-th). Another example: the array $[42, 42, 42]$ has $3$ elements equal to its minimum and $3$ elements equal to its maximum. Your task is to calculate the \textbf{minimum} possible number of elements in a beautiful array.
Firstly, since we are interested in the minimum possible size of the array, we don't need any elements other than minimums and maximums. So the array has at most $2$ distinct elements. Now there are many possible solutions. The simplest one is to iterate on the number of minimums (let this be $i$) and the number of maximums (let this be $j$). If the minimum of the array is equal to its maximum (i.e. all elements are the same), then every element is both a minimum and a maximum; this is possible only when $i = j$, and the length of the array is $i$. Otherwise, the length is $i + j$. We can iterate on all possible pairs $(i, j)$ and find the best result over all of them. A solution in $O(1)$ is possible if you see that you only have to consider $l_1$ and $l_2$ as the number of minimums/maximums, or check whether the segments $[l_1, r_1]$ and $[l_2, r_2]$ intersect in $O(1)$.
[ "brute force", "math" ]
800
t = int(input())
for i in range(t):
    l1, r1, l2, r2 = map(int, input().split())
    if max(l1, l2) <= min(r1, r2):
        print(max(l1, l2))
    else:
        print(l1 + l2)
1680
B
Robots
There is a field divided into $n$ rows and $m$ columns. Some cells are empty (denoted as E), other cells contain robots (denoted as R). You can send a command to \textbf{all robots} at the same time. The command can be of one of the four types: - move up; - move right; - move down; - move left. When you send a command, \textbf{all robots at the same time} attempt to take one step in the direction you picked. If a robot tries to move outside the field, it explodes; otherwise, \textbf{every robot} moves to an adjacent cell in the chosen direction. You can send as many commands as you want (possibly, zero), in any order. Your goal is to make at least one robot reach the upper left corner of the field. Can you do this without forcing any of the robots to explode?
Let's assume that the rows are numbered from $0$ to $n-1$ from top to bottom, and columns are numbered from $0$ to $m-1$ from left to right. If there is no robot in the cell $(0, 0)$ initially, we have to perform several moves up and/or left. If the first row with at least one robot is the $i$-th row, then we can make at most $i$ steps up (and we should make at least $i$ steps up, since otherwise there will be no robot in the upper row). Similarly, if the first column with at least one robot is the $j$-th column, then we can make at most $j$ steps to the left (and we should make at least $j$ steps to the left, since otherwise there will be no robot in the leftmost column). Now there are two possible solutions, both starting with finding $i$ and $j$: we afterwards either simulate $i$ moves up and $j$ moves to the left and check that everything is fine, or just check that there is a robot in the cell $(i, j)$ (since only this robot can end up in $(0, 0)$).
[ "implementation" ]
800
t = int(input())
for i in range(t):
    n, m = map(int, input().split())
    s = []
    for j in range(n):
        s.append(input())
    minx = 10 ** 9
    miny = 10 ** 9
    for x in range(n):
        for y in range(m):
            if s[x][y] == 'R':
                minx = min(minx, x)
                miny = min(miny, y)
    print('YES' if s[minx][miny] == 'R' else 'NO')
1680
C
Binary String
You are given a string $s$ consisting of characters 0 and/or 1. You have to remove several (possibly zero) characters from the beginning of the string, and then several (possibly zero) characters from the end of the string. \textbf{The string may become empty after the removals}. The cost of the removal is the \textbf{maximum} of the following two values: - the number of characters 0 left in the string; - the number of characters 1 removed from the string. What is the \textbf{minimum} cost of removal you can achieve?
There are many different approaches to this problem: dynamic programming, binary search, greedy, two pointers, anything you want. The model solution uses an approach based on binary search, so I'll describe it. First of all, why does binary search work? Let's say that the number of 1's is $c$. If the cost of deletion is $k$, then we have deleted at most $k$ characters 1 and have left at most $k$ characters 0. Let's increase the number of characters we delete from the prefix of the string until the number of deleted 1's becomes $k+1$: if $c \ge k+1$, it's always possible. So, if we consider the segment of values $[0, c]$, the fact that we can get cost $k$ implies that we can get cost $k+1$, so we can use binary search on the segment $[0, c]$ to find the minimum achievable cost. Now, how to check if we can obtain a cost of deletion equal to $k$? One possible way to do this is to form an array $pos$, where $pos_i$ is the position of the $i$-th character 1 in the string, and find the minimum value of $pos_{i+c-k-1} - pos_i + 1$ in this array: the string that should remain has to contain at least $c-k$ characters 1, and the minimum value of $pos_{i+c-k-1} - pos_i + 1$ is the minimum possible length of such a string. Then we can find the number of 0's in this string and check whether it is greater than $k$ or not.
[ "binary search", "greedy", "strings", "two pointers" ]
1,600
def can(pos, m):
    k = len(pos)
    x = k - m
    for i in range(m + 1):
        l = pos[i]
        r = pos[i + x - 1]
        if r - l + 1 - x <= m:
            return True
    return False

t = int(input())
for i in range(t):
    s = input()
    pos = []
    n = len(s)
    for i in range(n):
        if s[i] == '1':
            pos.append(i)
    lf = 0
    rg = len(pos)
    while rg - lf > 1:
        mid = (lf + rg) // 2
        if can(pos, mid):
            rg = mid
        else:
            lf = mid
    if len(pos) == 0 or pos[-1] - pos[0] == len(pos) - 1:
        print(0)
    else:
        print(rg)
1680
D
Dog Walking
You are walking with your dog, and now you are at the promenade. The promenade can be represented as an infinite line. Initially, you are in the point $0$ with your dog. You decided to give some freedom to your dog, so you untied her and let her run for a while. Also, you watched what your dog is doing, so you have some writings about how she ran. During the $i$-th minute, the dog position changed from her previous position by the value $a_i$ (it means, that the dog ran for $a_i$ meters during the $i$-th minute). If $a_i$ is positive, the dog ran $a_i$ meters to the right, otherwise (if $a_i$ is negative) she ran $a_i$ meters to the left. During some minutes, you were chatting with your friend, so you don't have writings about your dog movement during these minutes. These values $a_i$ equal zero. You want your dog to return to you after the end of the walk, so the destination point of the dog after $n$ minutes \textbf{should be} $0$. Now you are wondering: what is the maximum possible number of different \textbf{integer points} of the line your dog could visit on her way, if you replace every $0$ with some integer from $-k$ to $k$ (and your dog \textbf{should} return to $0$ after the walk)? The dog visits an integer point if she runs through that point or reaches in it at the end of any minute. Point $0$ is always visited by the dog, since she is initially there. If the dog cannot return to the point $0$ after $n$ minutes regardless of the integers you place, print -1.
Consider every cyclic shift of the array $a$. Suppose that now the array $a$ starts from the position $i$ (the first element is $a[i]$ and the last element is $a[(i + n - 1) \% n]$). Assume that before the position $i$ our dog reached her minimum possible position and now the minimum position will not change. So our problem is to fill all zeros in the array $a$ in such a way that the maximum prefix sum of $a$ is the maximum possible and the total sum of $a$ is zero. For simplicity, consider the array $b$ which is the $i$-th cyclic shift of $a$ (i. e. the first element $b[0]$ is $a[i]$, the second element $b[1]$ is $a[(i + 1) \% n]$, and so on). Let's iterate from left to right and maintain the current sum of the array $b$. Let this variable be $s$. Now, when we meet $b_j = 0$, we should replace it with the maximum possible value we can (because this way we maximize the prefix sums). Let $x$ be the number of zeros in $b$ starting from the position $j + 1$. This value can be calculated in advance in $O(n)$ for every cyclic shift using suffix sums. Then the segment of positions we can reach at the end is $[s - xk; s + xk]$, and we want to place the maximum possible value in $b[j]$ such that this remaining segment (with the addition of our current element) still covers $0$. This maximum value equals $b[j] = min(k, xk - s)$. If $b[j]$ becomes less than $-k$, then this cyclic shift is invalid, and we should skip it. Otherwise, let's add $b[j]$ to $s$ and proceed. If there are no values $b[j] < -k$, then we placed everything correctly. Now we can just simulate the movements of our dog to find the answer for the current cyclic shift. There are also cases when $a$ does not contain zeros, and these cases should be handled somehow (I just checked that after the simulation we returned to $0$). If we returned to $0$, we can update the answer with the difference between the maximum and the minimum positions plus one.
If there is no valid cyclic shift, then the answer is -1. Time complexity: $O(n^2)$.
[ "brute force", "greedy", "math" ]
2,400
#include <bits/stdc++.h>

using namespace std;

int main() {
#ifdef _DEBUG
    freopen("input.txt", "r", stdin);
//  freopen("output.txt", "w", stdout);
#endif
    int n;
    long long k;
    cin >> n >> k;
    vector<long long> a(n);
    for (auto &it : a) {
        cin >> it;
    }
    long long ans = 0;
    for (int it = 0; it < n; ++it) {
        vector<int> cnt(n);
        for (int i = n - 1; i >= 0; --i) {
            cnt[i] = (a[i] == 0);
            if (i + 1 < n) {
                cnt[i] += cnt[i + 1];
            }
        }
        vector<long long> b = a;
        long long s = accumulate(b.begin(), b.end(), 0ll);
        bool ok = true;
        for (int i = 0; i < n; ++i) {
            if (b[i] == 0) {
                long long x = (i + 1 < n ? cnt[i + 1] : 0);
                b[i] = min(k, x * k - s);
                if (b[i] < -k) {
                    ok = false;
                }
                s += b[i];
            }
        }
        if (ok) {
            long long pos = 0, mn = 0, mx = 0;
            for (int i = 0; i < n; ++i) {
                pos += b[i];
                mn = min(mn, pos);
                mx = max(mx, pos);
            }
            if (pos == 0) {
                ans = max(ans, mx - mn + 1);
            }
        }
        rotate(a.begin(), a.begin() + 1, a.end());
    }
    if (ans == 0) {
        ans = -1;
    }
    cout << ans << endl;
    return 0;
}
1680
E
Moving Chips
You are given a board of size $2 \times n$ ($2$ rows, $n$ columns). Some cells of the board contain chips. The chip is represented as '*', and an empty space is represented as '.'. It is guaranteed that there is at least one chip on the board. In one move, you can choose \textbf{any} chip and move it to any adjacent (by side) cell of the board (if this cell is inside the board). It means that if the chip is in the first row, you can move it left, right or down (but it shouldn't leave the board). Same, if the chip is in the second row, you can move it left, right or up. If the chip moves to the cell with another chip, the chip in the destination cell disappears (i. e. our chip captures it). Your task is to calculate the \textbf{minimum} number of moves required to leave \textbf{exactly} one chip on the board. You have to answer $t$ independent test cases.
Firstly, I want to say a few words about the difficulty of this problem. Till the last moment, we didn't know an easy-to-prove (and easy-to-write) solution, so we decided that this is a good problem E. But now we realize it is a lot easier than we expected. Now, let's talk about the solution. At the beginning, let's remove redundant columns (i. e. columns without chips) from the beginning and from the end, and change the value $n$ correspondingly. Now, let $cost_{i, j}$ be $1$ if $s_{j, i}$ is '*', and $0$ otherwise. This array is needed to make the implementation easier. Let's calculate the dynamic programming $dp_{i, j}$, where $i$ is the index of the last processed column and $j$ is the number of the row where our chip is standing. It may seem suspicious that we can calculate such a dynamic programming, so let's explain some things about it. It can be shown that in the optimal answer there will be some column where the last move happens. And if the number of this column is $j$, then all chips to the left of $j$ will move only to the right and all chips to the right of $j$ will move only to the left. Actually, we can always consider that $j$ is the last column. Consider the paths of the two chips that survive till the last move. The first chip is to the left of $j$ and moves only to the right, and the second one is to the right of $j$ and moves only to the left. Then we can replicate the path of the second chip in reverse order using the first chip. So the second chip can stay still until the last move. In the optimal answer, it is always better to have exactly one chip in the current column, because moving two chips to the right is always worse than eating one of them and moving the remaining one. The initial states of $dp$ are $+\infty$ except the values of the first column. For the first column, $dp_{0, 0} = cost_{0, 1}$ and $dp_{0, 1} = cost_{0, 0}$. The answer is $min(dp_{n - 1, 0}, dp_{n - 1, 1})$. Okay, how to make transitions from $dp_{i, j}$?
For all $i$ from $0$ to $n-2$, let's consider four cases: $dp_{i, 0} \rightarrow dp_{i + 1, 0}$ - here we need one move to go to the next column and, possibly, one more move to eat the chip in the second row of the column $i + 1$. So the transition is $dp_{i + 1, 0} = min(dp_{i + 1, 0}, dp_{i, 0} + 1 + cost_{i + 1, 1})$; $dp_{i, 1} \rightarrow dp_{i + 1, 1}$ - same as the previous transition, $dp_{i + 1, 1} = min(dp_{i + 1, 1}, dp_{i, 1} + 1 + cost_{i + 1, 0})$; $dp_{i, 0} \rightarrow dp_{i + 1, 1}$ - because the cost of this transition is always $2$ (the distance between these cells is $2$), we just go first to the right and then down (to ensure that we eat the chip in the first row). So the transition is $dp_{i + 1, 1} = min(dp_{i + 1, 1}, dp_{i, 0} + 2)$; $dp_{i, 1} \rightarrow dp_{i + 1, 0}$ - same as the previous transition, $dp_{i + 1, 0} = min(dp_{i + 1, 0}, dp_{i, 1} + 2)$. Time complexity: $O(n)$.
[ "bitmasks", "dp", "greedy" ]
2,000
#include <bits/stdc++.h>

using namespace std;

const int INF = 1e9;

int main() {
#ifdef _DEBUG
    freopen("input.txt", "r", stdin);
//  freopen("output.txt", "w", stdout);
#endif
    int tc;
    cin >> tc;
    while (tc--) {
        int n;
        string s[2];
        cin >> n >> s[0] >> s[1];
        for (int it = 0; it < 2; ++it) {
            while (s[0].back() == '.' && s[1].back() == '.') {
                s[0].pop_back();
                s[1].pop_back();
            }
            reverse(s[0].begin(), s[0].end());
            reverse(s[1].begin(), s[1].end());
        }
        n = s[0].size();
        vector<vector<int>> cost(n, vector<int>(2));
        for (int i = 0; i < n; ++i) {
            for (int j = 0; j < 2; ++j) {
                cost[i][j] = (s[j][i] == '*');
            }
        }
        vector<vector<int>> dp(n, vector<int>(2, INF));
        dp[0][0] = cost[0][1];
        dp[0][1] = cost[0][0];
        for (int i = 0; i + 1 < n; ++i) {
            dp[i + 1][0] = min(dp[i + 1][0], dp[i][0] + 1 + cost[i + 1][1]);
            dp[i + 1][0] = min(dp[i + 1][0], dp[i][1] + 2);
            dp[i + 1][1] = min(dp[i + 1][1], dp[i][1] + 1 + cost[i + 1][0]);
            dp[i + 1][1] = min(dp[i + 1][1], dp[i][0] + 2);
        }
        cout << min(dp[n - 1][0], dp[n - 1][1]) << endl;
    }
    return 0;
}
1680
F
Lenient Vertex Cover
You are given a simple connected undirected graph, consisting of $n$ vertices and $m$ edges. The vertices are numbered from $1$ to $n$. A vertex cover of a graph is a set of vertices such that each edge has at least one of its endpoints in the set. Let's call a lenient vertex cover such a vertex cover that \textbf{at most one} edge in it has both endpoints in the set. Find a lenient vertex cover of a graph or report that there is none. If there are multiple answers, then print any of them.
Let's think about why we can't always make a perfect vertex cover - such a vertex cover that each edge has exactly one endpoint in it. Or why the answer can not exist at all. Consider a bamboo. It's always possible to find a perfect vertex cover. Just choose every other vertex in it and account for parity. Make a bamboo into a loop. Now you can see that an even length loop has a perfect vertex cover. An odd length doesn't. That tells us that each odd length loop in a graph will have a bad edge on it. Odd length loops should instantly make you think about bipartite colorings. So we can see that a bipartite graph always has a perfect vertex cover. Just choose one of the parts into a cover, and each edge will have exactly one endpoint in it. At the same time, a non-bipartite graph never has a perfect cover. So our general goal is to remove (basically, mark as bad) at most one edge in such a way that the remaining graph is bipartite. Consider a dfs tree of the graph, colored bipartitely. Every edge in the tree is good (has endpoints in different parts). Every edge outside the tree can be either good or bad. What happens to the tree if we remove an edge? If we remove an edge outside the dfs tree, then nothing happens to it. So if there is no more than one bad edge outside the tree, then we found the answer. That was the easy part. Now what happens if we remove an edge from the tree? The back edges from the subtree of the edge can force the subtree to either remain colored the same or flip all its colors. We don't really care if it remains the same, because we already took care of it in the first part. So let's pretend it always flips the colors. Thus, all edges that go from the subtree upwards above the removed edge, have only one of their endpoints colors changed. Good edges turn bad, bad edges turn good. All other edges don't change. 
So you should choose such an edge to remove that all bad edges in the graph go from its subtree upwards above that edge and no good edges go from its subtree upwards above that edge. That can be calculated with a dfs. Since all non-tree edges in the dfs tree are back edges, you can simply increment a counter on the bottom vertex, decrement the counter on the top vertex and collect sums from the bottom. The sum in the vertex will tell you the number of edges that start below or in the vertex and end above the vertex. Do this for both kinds of edge and check the conditions for all vertices. Finally, choose such a part to be a vertex cover that the removed edge has both ends in it (if you choose the other part, that edge won't be covered at all). The solution is linear, but the problem still requires a massive time and memory limit only because of recursion in the dfs. Overall complexity: $O(n + m)$ per testcase.
[ "dfs and similar", "divide and conquer", "dsu", "graphs", "trees" ]
2,600
#include <bits/stdc++.h>

#define forn(i, n) for (int i = 0; i < int(n); i++)

using namespace std;

vector<vector<int>> g, h;
vector<int> tin, tout, clr;
vector<vector<int>> sum(2);
int T;
int flip;
int cnt;

bool isp(int v, int u){
    return tin[v] <= tin[u] && tout[v] >= tout[u];
}

void init(int v){
    tin[v] = T++;
    for (int u : g[v]){
        if (clr[u] == -1){
            clr[u] = clr[v] ^ 1;
            h[v].push_back(u);
            init(u);
        }
        else if (tin[u] < tin[v]){
            int dif = clr[v] ^ clr[u];
            if (!dif){
                flip = clr[v] ^ 1;
                ++cnt;
            }
            --sum[dif][u];
            ++sum[dif][v];
        }
    }
    tout[v] = T;
}

int sv;

void dfs(int v){
    for (int u : h[v]){
        dfs(u);
        if (sum[0][u] == cnt && sum[1][u] == 1){
            sv = u;
            flip = clr[v] ^ 1;
        }
        forn(i, 2) sum[i][v] += sum[i][u];
    }
}

int main() {
    cin.tie(0);
    iostream::sync_with_stdio(false);
    int t;
    cin >> t;
    forn(_, t){
        int n, m;
        cin >> n >> m;
        g.assign(n, vector<int>());
        h.assign(n, vector<int>());
        forn(i, m){
            int v, u;
            cin >> v >> u;
            --v, --u;
            g[v].push_back(u);
            g[u].push_back(v);
        }
        tin.resize(n);
        tout.resize(n);
        forn(i, 2) sum[i].assign(n, 0);
        clr.assign(n, -1);
        cnt = 0;
        T = 0;
        clr[0] = 0;
        init(0);
        if (cnt <= 1){
            cout << "YES\n";
            forn(v, n) cout << (clr[v] ^ flip);
            cout << "\n";
            continue;
        }
        sv = -1;
        dfs(0);
        if (sv == -1){
            cout << "NO\n";
        }
        else{
            cout << "YES\n";
            forn(v, n) cout << (clr[v] ^ isp(sv, v) ^ flip);
            cout << "\n";
        }
    }
    return 0;
}
1681
A
Game with Cards
Alice and Bob play a game. Alice has $n$ cards, the $i$-th of them has the integer $a_i$ written on it. Bob has $m$ cards, the $j$-th of them has the integer $b_j$ written on it. On the first turn of the game, \textbf{the first player} chooses one of his/her cards and puts it on the table (plays it). On the second turn, \textbf{the second player} chooses one of his/her cards \textbf{such that the integer on it is greater than the integer on the card played on the first turn}, and plays it. On the third turn, \textbf{the first player} chooses one of his/her cards \textbf{such that the integer on it is greater than the integer on the card played on the second turn}, and plays it, and so on — the players take turns, and each player has to choose one of his/her cards with greater integer than the card played by the other player on the last turn. If some player cannot make a turn, he/she loses. For example, if Alice has $4$ cards with numbers $[10, 5, 3, 8]$, and Bob has $3$ cards with numbers $[6, 11, 6]$, the game may go as follows: - Alice can choose any of her cards. She chooses the card with integer $5$ and plays it. - Bob can choose any of his cards with number greater than $5$. He chooses a card with integer $6$ and plays it. - Alice can choose any of her cards with number greater than $6$. She chooses the card with integer $10$ and plays it. - Bob can choose any of his cards with number greater than $10$. He chooses a card with integer $11$ and plays it. - Alice can choose any of her cards with number greater than $11$, but she has no such cards, so she loses. Both Alice and Bob play \textbf{optimally (if a player is able to win the game no matter how the other player plays, the former player will definitely win the game)}. You have to answer two questions: - who wins if Alice is the first player? - who wins if Bob is the first player?
Let the maximum card among all $n + m$ cards be $x$. If only one player has a card with value $x$, then he/she can win by playing it on the first or the second turn; the opponent won't be able to respond with any of their cards. Otherwise (if both players have a card with value $x$), the player who plays this card earlier wins the game. So, in this case, the winner is the player who makes the first turn.
[ "games", "greedy" ]
800
t = int(input()) for i in range(t): n = int(input()) a = list(map(int, input().split())) m = int(input()) b = list(map(int, input().split())) print('Alice' if max(a) >= max(b) else 'Bob') print('Alice' if max(a) > max(b) else 'Bob')
1681
B
Card Trick
Monocarp has just learned a new card trick, and can't wait to present it to you. He shows you the entire deck of $n$ cards. You see that the values of cards from the topmost to the bottommost are integers $a_1, a_2, \dots, a_n$, and all values are different. Then he asks you to shuffle the deck $m$ times. With the $j$-th shuffle, you should take $b_j$ topmost cards and move them under the remaining $(n - b_j)$ cards without changing the order. And then, using some magic, Monocarp tells you the topmost card of the deck. However, you are not really buying that magic. You tell him that you know the topmost card yourself. Can you surprise Monocarp and tell him the topmost card before he shows it?
The easiest way to solve the problem is probably to see the resemblance of a shuffle operation to the std::rotate function. So you can obtain the final deck by applying cyclic shifts of the deck by $b_1$, then $b_2$ and so on. Since the shifts are cyclic, it doesn't matter if you shift by $x$ or by $x + n$ or by $x + k \cdot n$ for any non-negative $k$. The result will be the same. Thus, you can calculate the sum of the shifts you apply, and subtract $n$ until it becomes less than $n$. That is, take it modulo $n$. Finally, after rotating a sequence by some $x$, the $x$-th element of it ($0$-indexed) becomes the first one. Thus, you just want to print the ($sum \bmod n$)-th element of $a$. Overall complexity: $O(n + m)$ per testcase.
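As a sanity check, here is a small Python sketch (with made-up example data; the helper names are illustrative) comparing the closed-form answer against a naive simulation of the shuffles:

```python
def top_card_fast(a, shifts):
    # After all shuffles, the (sum of shifts mod n)-th card (0-indexed) is on top.
    return a[sum(shifts) % len(a)]

def top_card_naive(a, shifts):
    deck = list(a)
    for b in shifts:
        deck = deck[b:] + deck[:b]  # move the b topmost cards under the rest
    return deck[0]

a, shifts = [2, 4, 1, 5, 3], [3, 1, 4]
assert top_card_fast(a, shifts) == top_card_naive(a, shifts) == 5
```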
[ "implementation", "math" ]
800
for _ in range(int(input())): n = int(input()) a = list(map(int, input().split())) m = int(input()) print(a[sum(map(int, input().split())) % n])
1681
C
Double Sort
You are given two arrays $a$ and $b$, both consisting of $n$ integers. In one move, you can choose two indices $i$ and $j$ ($1 \le i, j \le n$; $i \neq j$) and swap $a_i$ with $a_j$ and $b_i$ with $b_j$. You have to perform the swap in both arrays. You are allowed to perform at most $10^4$ moves (possibly, zero). Can you make both arrays sorted in a non-decreasing order at the end? If you can, print any sequence of moves that makes both arrays sorted.
Imagine that all elements of $a$ are distinct. This way, sorting $a$ in increasing order will fix the order of $b$. If $b$ turns out to be sorted in a non-decreasing order, then the answer exists. Otherwise, it doesn't. To obtain the sequence of swaps, you can sort $a$ with any comparison-based sorting algorithm you want: even bubble sort will not exceed the allowed number of swaps. What changes if $a$ has repeated elements? Distinct elements are still ordered among themselves, but now there are also blocks of equal elements. For each block, look into the corresponding values in $b$. Obviously, these have to be sorted in a non-decreasing order. Rearrange them as they should be. In fact, this is exactly the same as sorting the sequence of pairs $(a_i, b_i)$ with a default comparator: first by $a_i$, then by $b_i$. Since we fixed the wanted order, we can proceed with the same steps we made in the distinct case. Overall complexity: $O(n \log n)$ or $O(n^2)$ per testcase.
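The feasibility check described above can be sketched as follows (`solvable` is a hypothetical helper name, not part of the reference solution):

```python
def solvable(a, b):
    # Sort indices by the pair (a_i, b_i); a is then non-decreasing by
    # construction, so both arrays can be sorted together iff b is too.
    order = sorted(range(len(a)), key=lambda i: (a[i], b[i]))
    bs = [b[i] for i in order]
    return all(x <= y for x, y in zip(bs, bs[1:]))

assert solvable([1, 2, 1], [2, 3, 1])       # pairs sort to (1,1),(1,2),(2,3)
assert not solvable([1, 2, 1], [3, 1, 2])   # sorted pairs give b = [2, 3, 1]
```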
[ "implementation", "sortings" ]
1,200
for _ in range(int(input())): n = int(input()) a = list(map(int, input().split())) b = list(map(int, input().split())) tmp = [i for i in range(n)] tmp.sort(key=lambda i: [a[i], b[i]]) for i in range(n - 1): if a[tmp[i]] > a[tmp[i + 1]] or b[tmp[i]] > b[tmp[i + 1]]: print("-1") break else: ans = [] for i in range(n - 1): for j in range(n - 1): if a[j] > a[j + 1] or b[j] > b[j + 1]: a[j], a[j + 1] = a[j + 1], a[j] b[j], b[j + 1] = b[j + 1], b[j] ans.append([j + 1, j + 2]) print(len(ans)) for it in ans: print(*it)
1681
D
Required Length
You are given two integer numbers, $n$ and $x$. You may perform several operations with the integer $x$. Each operation you perform is the following one: choose any digit $y$ that occurs in the decimal representation of $x$ at least once, and replace $x$ by $x \cdot y$. You want to make the length of decimal representation of $x$ (without leading zeroes) equal to $n$. What is the minimum number of operations required to do that?
One of the possible approaches to this problem is to try multiplying $x$ only by the largest digit in it. Unfortunately, this doesn't work quite well, since it gives WA on one of the examples. That example is too big to consider, but a smaller version of it can prove that this is an incorrect solution: let $n = 5$, $x = 403$. If we multiply $403$ by $4$, we get $1612$, and there's no way to obtain a number with $5$ digits using the next action. But, if we multiply $403$ by $3$, we get $1209$, which can then be multiplied by $9$ to obtain a $5$-digit number. So, considering only the largest digit is not enough. This implies that we somehow need to consider the options that are not optimal locally, but optimal globally (i. e. choose a lower digit right now to obtain a higher digit in the future). Let's try to estimate the number of possible integers that can be obtained using these operations to see if we can consider all possible options. The key observation is that each integer we obtain will have the form $x \cdot 2^a \cdot 3^b \cdot 5^c \cdot 7^d$, since only one-digit primes can be added to the factorization. Since we consider only numbers less than $10^{19}$, $a$ is not greater than $63$, $b$ is not greater than $39$, $c$ is not greater than $27$, and $d$ is not greater than $22$, and the number of reachable integers is about $1.5$ million (note that this is a very generous bound since not all combinations of $(a,b,c,d)$ yield an integer less than $10^{19}$, and not all such integers can be reached with the operations). This allows us to use BFS or dynamic programming to calculate the answer.
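A minimal BFS sketch of this idea in Python (`min_ops` is an illustrative name; the C++ solution below is the reference) confirms the $n = 5$, $x = 403$ example from the editorial:

```python
from collections import deque

def min_ops(n, x):
    # BFS over reachable values; every reachable value has the form
    # x * 2^a * 3^b * 5^c * 7^d, so the state space stays manageable.
    dist = {x: 0}
    q = deque([x])
    while q:
        v = q.popleft()
        if len(str(v)) == n:
            return dist[v]
        for ch in set(str(v)):
            d = int(ch)
            if d >= 2 and v * d not in dist:  # digits 0 and 1 never help
                dist[v * d] = dist[v] + 1
                q.append(v * d)
    return -1

assert min_ops(5, 403) == 2   # 403 -> 1209 (times 3) -> 10881 (times 9)
```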
[ "brute force", "dfs and similar", "dp", "hashing", "shortest paths" ]
1,700
#include<bits/stdc++.h> using namespace std; int main() { int n; cin >> n; long long v; cin >> v; queue<long long> q; map<long long, int> dist; dist[v] = 0; q.push(v); while(!q.empty()) { long long k = q.front(); q.pop(); string s = to_string(k); if(s.size() == n) { cout << dist[k] << endl; return 0; } for(auto x : s) { if(x == '0') continue; long long w = k * int(x - '0'); if(!dist.count(w)) { dist[w] = dist[k] + 1; q.push(w); } } } cout << -1 << endl; return 0; }
1681
E
Labyrinth Adventures
You found a map of a weirdly shaped labyrinth. The map is a grid, consisting of $n$ rows and $n$ columns. The rows of the grid are numbered from $1$ to $n$ from bottom to top. The columns of the grid are numbered from $1$ to $n$ from left to right. The labyrinth has $n$ layers. The first layer is the bottom left corner (cell $(1, 1)$). The second layer consists of all cells that are in the grid and adjacent to the first layer by a side or a corner. The third layer consists of all cells that are in the grid and adjacent to the second layer by a side or a corner. And so on. The labyrinth with $5$ layers, for example, is shaped as follows: The layers are separated from one another with walls. However, there are doors in these walls. Each layer (except for layer $n$) has exactly two doors to the next layer. One door is placed on the top wall of the layer and another door is placed on the right wall of the layer. For each layer from $1$ to $n-1$ you are given positions of these two doors. The doors can be passed in both directions: either from layer $i$ to layer $i+1$ or from layer $i+1$ to layer $i$. If you are standing in some cell, you can move to an adjacent by a side cell if a wall doesn't block your move (e.g. you can't move to a cell in another layer if there is no door between the cells). Now you have $m$ queries of sort: what's the minimum number of moves one has to make to go from cell $(x_1, y_1)$ to cell $(x_2, y_2)$.
WLOG, assume all queries ask to move from a lower layer to a higher layer. The first thing to notice in the problem is that it is always optimal to never go down a layer. Suppose an optimal path goes down some layers and then returns to the same layer. It leaves the layer in some of its cells and returns to it in some other cell (or the same one). The best distance it can achieve is the Manhattan distance between these two cells. However, we can also achieve the Manhattan distance by just going along this layer, and the answer will be at least as optimal. If the query asks about cells of the same layer, just answer with the Manhattan distance. Otherwise, we can describe the path as follows: go from the first cell to some door on its layer, enter the door and go to another door on the next layer, and so on until the layer of the second cell, where you go from a door to the second cell. Thus, we could potentially compute $dp_{i,j}$ - the shortest distance from the start to the $j$-th door of the $i$-th layer. Initialize both doors of the first layer, take the best answer from both doors of the last layer. That would be $O(n)$ per query, which is too slow. Let's optimize it with some precalculations. In particular, we want to know the shortest distance between one door of some layer and one door of another layer. We can use a technique similar to binary lifting. Calculate the distance between a pair of doors on layers which are $2^x$ apart for all $x$ up to $\log n$. Let $dp_{i,x,d1,d2}$ be the distance from door $d1$ of layer $i$ to door $d2$ of layer $i+2^x$. $dp_{i,0,d1,d2}$ can be initialized straightforwardly. Then, to calculate $dp_{i,x,d1,d2}$, we can use the values for $x-1$: $dp_{i,x-1,d1,t}$ and $dp_{i+2^{x-1},x-1,t,d2}$ for some intermediate door $t$ on layer $i+2^{x-1}$. To obtain the answer, use $O(\log n)$ jumps to reach the layer one before the last one. Then iterate over the last door.
Alternatively, you could pack this dynamic programming into a segment tree, use divide and conquer on queries or do square root decomposition. Overall complexity: $O((n + m) \log n)$.
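The merge step of this dp is a min-plus product of $2 \times 2$ door-distance tables; associativity of this product is also what makes the segment-tree packing work. A tiny illustrative sketch (`combine` is a hypothetical helper):

```python
def combine(A, B):
    # C[d1][d2] = min over intermediate door t of A[d1][t] + B[t][d2]
    return [[min(A[i][t] + B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 5], [2, 3]]   # made-up door distances for layers i .. i + 2^(x-1)
B = [[4, 0], [1, 2]]   # made-up door distances for layers i + 2^(x-1) .. i + 2^x
assert combine(A, B) == [[5, 1], [4, 2]]
```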
[ "data structures", "dp", "matrices", "shortest paths" ]
2,600
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; const long long INF64 = 1e18; typedef pair<int, int> pt; #define x first #define y second int main() { int n; scanf("%d", &n); vector<vector<pt>> d(n - 1, vector<pt>(2)); forn(i, n - 1) forn(j, 2){ scanf("%d%d", &d[i][j].x, &d[i][j].y); --d[i][j].x, --d[i][j].y; } int lg = 1; while ((1 << lg) < n - 1) ++lg; vector<vector<vector<vector<long long>>>> dp(n - 2, vector<vector<vector<long long>>>(lg, vector<vector<long long>>(2, vector<long long>(2, INF64)))); forn(i, n - 2) forn(k, 2){ dp[i][0][0][k] = abs(d[i][0].x + 1 - d[i + 1][k].x) + abs(d[i][0].y - d[i + 1][k].y) + 1; dp[i][0][1][k] = abs(d[i][1].x - d[i + 1][k].x) + abs(d[i][1].y + 1 - d[i + 1][k].y) + 1; } for (int l = 1; l < lg; ++l) forn(i, n - 2) forn(j, 2) forn(k, 2) forn(t, 2) if (i + (1 << (l - 1)) < n - 2){ dp[i][l][j][k] = min(dp[i][l][j][k], dp[i][l - 1][j][t] + dp[i + (1 << (l - 1))][l - 1][t][k]); } int m; scanf("%d", &m); forn(_, m){ int x1, y1, x2, y2; scanf("%d%d%d%d", &x1, &y1, &x2, &y2); --x1, --y1, --x2, --y2; int l1 = max(x1, y1), l2 = max(x2, y2); if (l1 > l2){ swap(l1, l2); swap(x1, x2); swap(y1, y2); } if (l1 == l2){ printf("%d\n", abs(x1 - x2) + abs(y1 - y2)); continue; } vector<long long> ndp(2); ndp[0] = abs(x1 - d[l1][0].x) + abs(y1 - d[l1][0].y); ndp[1] = abs(x1 - d[l1][1].x) + abs(y1 - d[l1][1].y); for (int i = lg - 1; i >= 0; --i) if (l1 + (1 << i) < l2){ vector<long long> tmp(2, INF64); forn(j, 2) forn(k, 2) tmp[k] = min(tmp[k], ndp[j] + dp[l1][i][j][k]); ndp = tmp; l1 += (1 << i); } long long ans = INF64; ans = min(ans, ndp[0] + abs(d[l1][0].x + 1 - x2) + abs(d[l1][0].y - y2) + 1); ans = min(ans, ndp[1] + abs(d[l1][1].x - x2) + abs(d[l1][1].y + 1 - y2) + 1); printf("%lld\n", ans); } return 0; }
1681
F
Unique Occurrences
You are given a tree, consisting of $n$ vertices. Each edge has an integer value written on it. Let $f(v, u)$ be the number of values that appear \textbf{exactly once} on the edges of a simple path between vertices $v$ and $u$. Calculate the sum of $f(v, u)$ over all pairs of vertices $v$ and $u$ such that $1 \le v < u \le n$.
Let's use the contribution-to-the-sum technique to simplify the problem. Instead of counting the number of colors that occur exactly once for each path, let's, for each color, count the number of paths that contain this color exactly once. Now we can solve the problem independently for each color, and sum up the answers. The first intended solution was the following. So we want to calculate the answer for some color $c$. Mark all edges of color $c$ as good, the rest are bad. Then we can calculate $dp_{v,i}$ - the number of paths up to vertex $v$ that contain either $0$ or $1$ good edges. The transitions should be pretty easy, and the answer should be updated when you consider gluing up paths from different children in each vertex. Obviously, this is $O(n)$ per color, so $O(n^2)$ overall. However, we can calculate this dynamic programming just as easily on a virtual tree of the vertices adjacent to the good edges. How to calculate the dp for some vertex $v$? First, push the paths from all virtual children to $v$. That was enough in the dp for the entire tree, but now there are also removed vertices that could have paths starting in them. All these paths contain $0$ good edges (otherwise, they would have had virtual vertices on them). Their amount is the following: the size of the real subtree of $v$ minus the sizes of the real subtrees of all its virtual children. The rest is exactly the same as in the dp on the real tree. A little fun trick. Usually, you want to add the lca of adjacent vertices to the virtual tree. But that's actually not needed here: you can just add the root of the tree and link the vertices without a parent to it. That won't change the result of the dp. That solution works in $O(n \log n)$ or $O(n)$. The second intended solution is slower complexity-wise but not time-wise. In the first solution we wanted to leave only the good edges in the tree. Here, we want to remove only them. Consider the resulting connected components.
What's the number of paths that contain only one of the good edges? It's actually the product of sizes of the connected components this edge connects. So we want to remove edges, add edges and maintain the sizes of the connected components of the tree. That's basically the same problem as dynamic connectivity. The $O(n \log^2 n)$ implementation works well enough.
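The second solution needs a DSU that supports rollback (union by size, no path compression), so unions can be undone in LIFO order when the offline dynamic-connectivity recursion leaves a segment-tree node. A minimal Python sketch of that ingredient (not the author's code):

```python
class RollbackDSU:
    def __init__(self, n):
        self.p = list(range(n))
        self.sz = [1] * n
        self.hist = []          # LIFO log of performed unions

    def find(self, v):
        while self.p[v] != v:   # no path compression, or rollback breaks
            v = self.p[v]
        return v

    def union(self, a, b):
        a, b = self.find(a), self.find(b)
        if a == b:
            self.hist.append(None)
            return
        if self.sz[a] < self.sz[b]:
            a, b = b, a
        self.hist.append((a, b))
        self.p[b] = a
        self.sz[a] += self.sz[b]

    def rollback(self):
        op = self.hist.pop()
        if op is not None:
            a, b = op
            self.p[b] = b
            self.sz[a] -= self.sz[b]

d = RollbackDSU(4)
d.union(0, 1); d.union(2, 3)
assert d.sz[d.find(0)] * d.sz[d.find(2)] == 4   # product of component sizes
d.rollback()                                     # undo union(2, 3)
assert d.find(2) != d.find(3)
```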
[ "data structures", "dfs and similar", "divide and conquer", "dp", "dsu", "trees" ]
2,300
#include <bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; struct edge{ int v, u, w; }; vector<vector<edge>> t; void add(int v, int l, int r, int L, int R, const edge &e){ if (L >= R) return; if (l == L && r == R){ t[v].push_back(e); return; } int m = (l + r) / 2; add(v * 2, l, m, L, min(m, R), e); add(v * 2 + 1, m, r, max(m, L), R, e); } vector<int> rk, p; vector<int*> where; vector<int> val; int getp(int a){ return a == p[a] ? a : getp(p[a]); } void unite(int a, int b){ a = getp(a), b = getp(b); if (a == b) return; if (rk[a] < rk[b]) swap(a, b); where.push_back(&rk[a]); val.push_back(rk[a]); rk[a] += rk[b]; where.push_back(&p[b]); val.push_back(p[b]); p[b] = a; } void rollback(){ *where.back() = val.back(); where.pop_back(); val.pop_back(); } long long trav(int v, int l, int r){ int sv = where.size(); for (auto it : t[v]) if (it.w == 0) unite(it.v, it.u); long long res = 0; if (l == r - 1){ for (auto it : t[v]) if (it.w == 1) res += rk[getp(it.v)] * 1ll * rk[getp(it.u)]; } else{ int m = (l + r) / 2; res += trav(v * 2, l, m); res += trav(v * 2 + 1, m, r); } while (int(where.size()) > sv) rollback(); return res; } int main() { int n; scanf("%d", &n); vector<edge> e(n - 1); forn(i, n - 1){ scanf("%d%d%d", &e[i].v, &e[i].u, &e[i].w); --e[i].v, --e[i].u, --e[i].w; } sort(e.begin(), e.end(), [](const edge &a, const edge &b){ return a.w < b.w; }); t.resize(4 * n); forn(i, n - 1){ int pos = lower_bound(e.begin(), e.end(), e[i], [](const edge &a, const edge &b){ return a.w < b.w; }) - e.begin(); add(1, 0, n, 0, pos, {e[i].v, e[i].u, 0}); add(1, 0, n, pos, pos + 1, {e[i].v, e[i].u, 1}); add(1, 0, n, pos + 1, n, {e[i].v, e[i].u, 0}); } rk.resize(n, 1); p.resize(n); iota(p.begin(), p.end(), 0); printf("%lld\n", trav(1, 0, n)); return 0; }
1682
A
Palindromic Indices
You are given a \textbf{palindromic} string $s$ of length $n$. You have to count the number of indices $i$ $(1 \le i \le n)$ such that the string after removing $s_i$ from $s$ still remains a palindrome. For example, consider $s$ = "aba" - If we remove $s_1$ from $s$, the string becomes "ba" which is not a palindrome. - If we remove $s_2$ from $s$, the string becomes "aa" which is a palindrome. - If we remove $s_3$ from $s$, the string becomes "ab" which is not a palindrome. A palindrome is a string that reads the same backward as forward. For example, "abba", "a", "fef" are palindromes whereas "codeforces", "acd", "xy" are not.
Read the statement carefully!! The given string is a palindrome. Let's remove some index $i$ from the first half of $s$ and check whether the resulting string is a palindrome or not; the other half is symmetric. The prefix of length $i-1$ already matches the suffix of the same length because the initial string was a palindrome, so we just need to check whether $t = s[i + 1 \ldots n - i + 1]$ is a palindrome. For $t$ to be a palindrome, $s_{n - i + 1}$ should be equal to $s_{i + 1}$, which was initially equal to $s_{n - i}$; again, that should be equal to $s_{i + 2}$, and this goes on. Here we can see that $s_i = s_{i + 1} = \ldots = s_{n - i + 1}$. So the answer is simply equal to the number of contiguous equal characters in the center of the string, which can be calculated in $\mathcal{O}(n)$.
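The center-run counting can be cross-checked against a brute force that removes every index in turn; a quick Python sketch (`fast` and `brute` are illustrative names):

```python
def fast(s):
    # 2 * (length of the run of equal characters from the center leftwards),
    # minus one shared center position when the length is odd
    n = len(s)
    c = s[(n - 1) // 2]
    run = 0
    for i in range((n - 1) // 2, -1, -1):
        if s[i] != c:
            break
        run += 1
    return 2 * run - (n & 1)

def brute(s):
    cnt = 0
    for i in range(len(s)):
        t = s[:i] + s[i+1:]
        cnt += t == t[::-1]
    return cnt

for s in ["aba", "abba", "aaa", "abacaba"]:
    assert fast(s) == brute(s)
```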
[ "greedy", "strings" ]
800
#include<bits/stdc++.h> using namespace std ; #define ll long long #define pb push_back #define all(v) v.begin(),v.end() #define sz(a) (ll)a.size() #define F first #define S second #define INF 2000000000000000000 #define popcount(x) __builtin_popcountll(x) #define pll pair<ll,ll> #define pii pair<int,int> #define ld long double template<typename T, typename U> static inline void amin(T &x, U y){ if(y < x) x = y; } template<typename T, typename U> static inline void amax(T &x, U y){ if(x < y) x = y; } #ifdef LOCAL #define debug(...) debug_out(#__VA_ARGS__, __VA_ARGS__) #else #define debug(...) 2401 #endif int _runtimeTerror_() { int n; cin >> n; string s; cin >> s; int cnt = 0; for(int i=(n-1)/2;i>=0;--i) { if(s[i] == s[(n - 1) / 2]) { ++cnt; } else { break; } } cout << 2 * cnt - (n & 1) << "\n"; return 0; } int main() { ios_base::sync_with_stdio(0);cin.tie(0);cout.tie(0); #ifdef runSieve sieve(); #endif #ifdef NCR initncr(); #endif int TESTS = 1; cin >> TESTS; while(TESTS--) { _runtimeTerror_(); } return 0; }
1682
B
AND Sorting
You are given a permutation $p$ of integers from $0$ to $n-1$ (each of them occurs exactly once). Initially, the permutation is \textbf{not sorted} (that is, $p_i>p_{i+1}$ for at least one $1 \le i \le n - 1$). The permutation is called $X$-sortable for some non-negative integer $X$ if it is possible to sort the permutation by performing the operation below some finite number of times: - Choose two indices $i$ and $j$ $(1 \le i \lt j \le n)$ such that $p_i \& p_j = X$. - Swap $p_i$ and $p_j$. Here $\&$ denotes the bitwise AND operation. Find the \textbf{maximum} value of $X$ such that $p$ is $X$-sortable. It can be shown that there always exists some value of $X$ such that $p$ is $X$-sortable.
You have to make at least one swap involving every element that is not at its correct position initially. So $X$ must be a submask of all elements which are not at their correct positions. What is the maximum possible value of $X$ from Hint $1$? It is the bitwise AND of all elements which are not at their correct positions. It turns out that this value is achievable too. We always have to make at least one swap for the elements which are not at their correct positions. Hence an upper bound on the answer is the bitwise AND of those elements. Let this value be $X$. It turns out that the given permutation is $X$-sortable. Proof: First, notice that $X$ is always present in $p$. Let $pos_x$ be the position of $X$ in $p$ initially. Let's say at some point we want to swap two values $p_i$ and $p_j$; then $p_i$ and $p_j$ are always supermasks of $X$, i.e. $p_i$ & $X = X$ and $p_j$ & $X = X$. We can make the following moves to swap $p_i$ and $p_j$ without disturbing any other element. Swap values at indices $i$ and $pos_x$. Swap values at indices $i$ and $j$. Swap values at indices $j$ and $pos_x$. It can be seen that in every swap the bitwise AND of the two values being swapped is exactly $X$. Hence we can swap any two values which were not at their correct positions, and therefore we can sort the permutation $p$. Overall complexity: $\mathcal{O}(n)$.
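The whole solution reduces to one AND over the misplaced values; a compact Python sketch of the same computation as the C++ solution below (`max_x` is an illustrative name):

```python
def max_x(p):
    # AND of all values not at their correct (0-indexed) position;
    # the all-ones starting mask covers permutation values below 2^30
    x = (1 << 30) - 1
    for i, v in enumerate(p):
        if v != i:
            x &= v
    return x

assert max_x([0, 1, 3, 2]) == 3 & 2   # only 3 and 2 are misplaced
assert max_x([1, 0, 3, 2]) == 0
```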
[ "bitmasks", "constructive algorithms", "sortings" ]
1,100
#include<bits/stdc++.h> using namespace std ; #define ll long long #define pb push_back #define all(v) v.begin(),v.end() #define sz(a) (ll)a.size() #define F first #define S second #define INF 2000000000000000000 #define popcount(x) __builtin_popcountll(x) #define pll pair<ll,ll> #define pii pair<int,int> #define ld long double template<typename T, typename U> static inline void amin(T &x, U y){ if(y < x) x = y; } template<typename T, typename U> static inline void amax(T &x, U y){ if(x < y) x = y; } #ifdef LOCAL #define debug(...) debug_out(#__VA_ARGS__, __VA_ARGS__) #else #define debug(...) 2401 #endif int _runtimeTerror_() { int n; cin >> n; int ans = (1 << 30) - 1; for(int i=0;i<n;++i) { int x; cin >> x; if(x != i) { ans &= x; } } cout << ans << "\n"; return 0; } int main() { ios_base::sync_with_stdio(0);cin.tie(0);cout.tie(0); #ifdef runSieve sieve(); #endif #ifdef NCR initncr(); #endif int TESTS = 1; cin >> TESTS; while(TESTS--) { _runtimeTerror_(); } return 0; }
1682
C
LIS or Reverse LIS?
You are given an array $a$ of $n$ positive integers. Let $\text{LIS}(a)$ denote the length of longest strictly increasing subsequence of $a$. For example, - $\text{LIS}([2, \underline{1}, 1, \underline{3}])$ = $2$. - $\text{LIS}([\underline{3}, \underline{5}, \underline{10}, \underline{20}])$ = $4$. - $\text{LIS}([3, \underline{1}, \underline{2}, \underline{4}])$ = $3$. We define array $a'$ as the array obtained after reversing the array $a$ i.e. $a' = [a_n, a_{n-1}, \ldots , a_1]$. The beauty of array $a$ is defined as $min(\text{LIS}(a),\text{LIS}(a'))$. Your task is to determine the maximum possible beauty of the array $a$ if you can rearrange the array $a$ arbitrarily.
Let $\text{LDS}(a)$ be the length of the longest strictly decreasing subsequence of $a$; then $\text{LIS}(a') = \text{LDS}(a)$. Let's make a small observation: there can be at most one index common to both $\text{LIS}(a)$ and $\text{LDS}(a)$. If some element $x$ occurs $\geq 2$ times, then one of its occurrences can be included in $\text{LIS}(a)$ and another one in $\text{LDS}(a)$, and all the remaining occurrences are of no use, because neither of the two subsequences can contain $2$ equal elements. If some element $x$ is a singleton, i.e. the frequency of $x$ in $a$ is $1$, then it can take $3$ positions: in $\text{LIS}(a)$ only; in $\text{LDS}(a)$ only; or as the only common element of $\text{LIS}(a)$ and $\text{LDS}(a)$. It can be seen that it is always optimal to choose some singleton as the only common element (if available), because the elements with frequency $\geq 2$ can easily contribute $1$ to both $\text{LIS}(a)$ and $\text{LDS}(a)$. Let $t$ be the number of distinct elements having frequency $\geq 2$ and $s$ be the number of singletons in $a$. The singletons should be divided equally between $\text{LIS}(a)$ and $\text{LDS}(a)$, with one of them given to both, if available. Hence, the answer is $t + \lceil \frac{s}{2} \rceil$. The values $s$ and $t$ can be found using some data structure like $\text{std::map}$ in C++ in $\mathcal{O}(n\log(n))$.
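The final formula $t + \lceil \frac{s}{2} \rceil$ fits in a few lines of Python (`max_beauty` is an illustrative name):

```python
from collections import Counter

def max_beauty(a):
    freq = Counter(a)
    t = sum(1 for c in freq.values() if c >= 2)  # contribute 1 to both LIS and LDS
    s = sum(1 for c in freq.values() if c == 1)  # singletons, split between the two
    return t + (s + 1) // 2

assert max_beauty([1, 1, 2, 2, 3]) == 3   # e.g. arrange as [1, 2, 3, 2, 1]
assert max_beauty([1, 2, 3, 4]) == 2
```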
[ "constructive algorithms", "greedy", "implementation", "math" ]
1,400
#include<bits/stdc++.h> using namespace std ; #define ll long long #define pb push_back #define all(v) v.begin(),v.end() #define sz(a) (ll)a.size() #define F first #define S second #define INF 2000000000000000000 #define popcount(x) __builtin_popcountll(x) #define pll pair<ll,ll> #define pii pair<int,int> #define ld long double template<typename T, typename U> static inline void amin(T &x, U y){ if(y < x) x = y; } template<typename T, typename U> static inline void amax(T &x, U y){ if(x < y) x = y; } #ifdef LOCAL #define debug(...) debug_out(#__VA_ARGS__, __VA_ARGS__) #else #define debug(...) 2401 #endif int _runtimeTerror_() { int n; cin >> n; map<int, int> mp; for(int i=1;i<=n;++i) { int x; cin >> x; ++mp[x]; } int single = 0, doble = 0; for(auto &[i, j]:mp) { single += j == 1; doble += j > 1; } cout << doble + (single + 1) / 2 << "\n"; return 0; } int main() { ios_base::sync_with_stdio(0);cin.tie(0);cout.tie(0); #ifdef runSieve sieve(); #endif #ifdef NCR initncr(); #endif int TESTS = 1; cin >> TESTS; while(TESTS--) { _runtimeTerror_(); } return 0; }
1682
D
Circular Spanning Tree
There are $n$ nodes arranged in a circle numbered from $1$ to $n$ in the clockwise order. You are also given a binary string $s$ of length $n$. Your task is to construct a tree on the given $n$ nodes satisfying the two conditions below or report that such tree does not exist: - For each node $i$ $(1 \le i \le n)$, the degree of node is even if $s_i = 0$ and odd if $s_i = 1$. - No two edges of the tree intersect internally in the circle. The edges are allowed to intersect on the circumference. Note that all edges are drawn as straight line segments. For example, edge $(u, v)$ in the tree is drawn as a line segment connecting $u$ and $v$ on the circle. A tree on $n$ nodes is a connected graph with $n - 1$ edges.
What are the mandatory conditions on string $s$ for a tree to be possible? If there are no odd degree vertices, or the count of odd degree vertices is odd, then it is impossible to construct any tree. It turns out that these conditions are sufficient too. Let's check the cases when it is not possible to construct the answer: When all vertices are required to have an even degree, there is no way to build a tree, because every tree contains at least $2$ leaves. When there is an odd number of vertices with odd degrees, no tree is possible, because the sum of degrees must be even. It turns out that it is always possible to construct a tree if none of the above holds. The following construction works: Select some vertex $i$ such that the previous vertex of $i$ (considered cyclically) has an odd degree, i.e. $s_{i - 1} = 1$. Clearly, such a vertex always exists. Now left rotate $s$ by $i - 1$ positions so that the selected vertex is at index $1$. Note that after the rotation $s_n$ will be $1$. 
Now $s[2 \ldots n]$ can be divided into several segments such that each segment ends with a vertex of odd degree and contains exactly one vertex of odd degree. So $s[2 \ldots n] = [0\ldots 1][0\ldots 1] \ldots [0\ldots 1]$, where $0$ may appear $0$ times. Connect vertex $1$ to the starting vertex of each segment and connect adjacent vertices inside each segment. It can be clearly seen that the edges never intersect internally, so the only thing we need to verify is the degree constraints. Proof: The degree condition holds inside each segment, as each vertex of even degree is connected to $2$ other vertices, and the last vertex, of odd degree, is connected to only one vertex: its previous one, or vertex $1$ if it is alone in its segment. Let $cnt_1$ be the number of vertices with odd degree. If $s_1 = 1$, then there are $cnt_1 - 1$ segments, which is an odd number, hence vertex $1$ is connected to an odd number of vertices. If $s_1 = 0$, then there are $cnt_1$ segments, which is an even number, hence vertex $1$ is connected to an even number of vertices. Note that we renumbered the vertices during the rotation, which should be handled in the implementation. The intuition for the above approach comes from the case when all $s_i$ are $1$, in which we create a star network. Overall complexity: $\mathcal{O}(n)$.
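A Python sketch of the construction, handling the rotation by renumbering on the fly (`build_tree` is a hypothetical helper; the degree-parity requirement can be checked directly on its output):

```python
def build_tree(n, s):
    # returns a list of 1-indexed edges, or None when no tree exists
    cnt = s.count('1')
    if cnt == 0 or cnt % 2 == 1:
        return None
    # root: a vertex whose cyclic predecessor must have odd degree
    r = next(i for i in range(n) if s[(i - 1) % n] == '1')
    order = [(r + k) % n for k in range(n)]   # root first, then clockwise
    edges, start = [], 1
    for k in range(1, n):
        prev = order[0] if k == start else order[k - 1]
        edges.append((prev + 1, order[k] + 1))
        if s[order[k]] == '1':                # a segment ends at an odd vertex
            start = k + 1
    return edges

for s in ["1111", "0110", "1001"]:
    n = len(s)
    deg = [0] * (n + 1)
    for u, v in build_tree(n, s):
        deg[u] += 1
        deg[v] += 1
    assert all(deg[i + 1] % 2 == int(s[i]) for i in range(n))
```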
[ "constructive algorithms", "implementation", "trees" ]
2,000
#include<bits/stdc++.h> using namespace std; #define all(v) v.begin(),v.end() #define endl "\n" int solve(){ int n; cin >> n; string s; cin >> s; auto cnt = count(all(s),'1'); if(cnt == 0 or cnt & 1){ cout << "NO" << endl; return 0; } auto inc = [&](int j){ return (j + 1)%n; }; cout << "YES" << endl; for(int p = 1; p < n; p++){ if(s[p - 1] == '1'){ auto i = inc(p); while(i != p){ int j = i; int prev = p; while(j != p){ cout << prev + 1 << " " << j + 1 << endl; prev = j; j = inc(j); if(s[prev] == '1')break; } i = j; } return 0; } } return 0; } signed main(){ ios_base::sync_with_stdio(0);cin.tie(0);cout.tie(0); //freopen("input.txt", "r", stdin); //freopen("output.txt", "w", stdout); #ifdef SIEVE sieve(); #endif #ifdef NCR init(); #endif int t=1;cin>>t; while(t--){ solve(); } return 0; }
1682
E
Unordered Swaps
Alice had a permutation $p$ of numbers from $1$ to $n$. Alice can swap a pair $(x, y)$ which means swapping elements at positions $x$ and $y$ in $p$ (i.e. swap $p_x$ and $p_y$). Alice recently learned her first sorting algorithm, so she decided to sort her permutation in the \textbf{minimum} number of swaps possible. She wrote down all the swaps in the order in which she performed them to sort the permutation on a piece of paper. For example, - $[(2, 3), (1, 3)]$ is a valid swap sequence by Alice for permutation $p = [3, 1, 2]$ whereas $[(1, 3), (2, 3)]$ is not because it doesn't sort the permutation. Note that we cannot sort the permutation in less than $2$ swaps. - $[(1, 2), (2, 3), (2, 4), (2, 3)]$ cannot be a sequence of swaps by Alice for $p = [2, 1, 4, 3]$ even if it sorts the permutation because $p$ can be sorted in $2$ swaps, for example using the sequence $[(4, 3), (1, 2)]$. Unfortunately, Bob shuffled the sequence of swaps written by Alice. You are given Alice's permutation $p$ and the swaps performed by Alice in arbitrary order. Can you restore the correct sequence of swaps that sorts the permutation $p$? Since Alice wrote correct swaps before Bob shuffled them up, it is guaranteed that there exists some order of swaps that sorts the permutation.
One way of solving permutation problems is to look at permutation cycles. Let's decompose our permutation into cycles; then it's easy to see that each cycle can be solved independently, because we have to sort the permutation in the minimum number of moves, which isn't possible if two cycles are merged at any instant. Let's look at one cycle only, whose vertices are numbered from $1$ to $n$ in the orientation of the cycle, i.e. the cycle is $1 \rightarrow 2 \rightarrow \ldots \rightarrow n \rightarrow 1$. Also assume that we only have the swaps $(x, y)$ that are relevant to this cycle. It is known that a cycle of size $n$ can be sorted in $n - 1$ moves and that this is the minimum number of required moves. Claim 1: The set of swaps, if considered as edges $(x, y)$, forms a tree on the $n$ vertices of the cycle. Assume that the edges don't form a tree; then there exist at least two disjoint components, say $S$ and $T$. Now we must be able to sort the elements of $S$ using swaps inside $S$ only, which requires that the set $\{i : i \in S\}$ equals the set $\{p_i : i \in S\}$, and that is not possible by the properties of permutation cycles: no cycle of a permutation, say $C$, can be split into two sets $S$ and $T$ such that both of them can be sorted independently among themselves. So we must use all the $n - 1$ edges of the tree in some order to get $n$ cycles, each of size $1$. Let's consider any element $u$ with adjacency list $[x_1, x_2, ..., x_k]$, listed in the order the $x_i$ appear on the cycle if we start moving from $u$ in the orientation of the cycle, i.e. $u \rightarrow u + 1 \rightarrow ... \rightarrow n \rightarrow 1 \rightarrow ... \rightarrow u$. Claim 2: We can never make the swap $(u, x_j)$ before the swap $(u, x_i)$ if $j > i$. 
If we make the swap $(u, x_j)$ first, then $u$ and $x_i$ will go into different cycles for the subsequent operations, and we will never be able to use the edge $(u, x_i)$, because it would merge two different cycles, which is not allowed: we are constrained to break a cycle into smaller cycles only. And if we are not able to use the edge $(u, x_i)$, then we will never be able to sort the permutation, because all $n - 1$ edges had to be used and we wasted one of them. Using the above claim, for every element $u$ the order of its edges is fixed: $x_1$ first, then $x_2$, ..., and finally $x_k$. Let's build a directed graph on $n - 1$ vertices (representing the swaps) where for every element $u$ we add the directed edges $(u,x_1) \rightarrow (u,x_2)$, ..., $(u,x_{k-1}) \rightarrow (u,x_k)$. Since it is guaranteed that the answer exists, i.e. a valid sequence of moves exists, a topological sorting of the above graph must exist, and any topological order represents a correct sequence of swaps. Note that whenever we make a swap that does not violate Claim $2$ for any element $u$, there will be no cross edge between the two smaller cycles that are formed, and those smaller cycles can then be solved independently. Also, the order of edges $[x_1, x_2, ..., x_k]$ does not change for any element, which ensures that the directed graph we built remains correct even after removing an appropriate edge. Hence the answer is simply a topological sort of the graph we built. Overall complexity: $\mathcal{O}(n \log n)$, where the $n \log n$ part comes from sorting the edges according to the cycle's orientation to get the order $[x_1, x_2, ..., x_k]$ for every vertex. Claim: The given swaps, considered as edges $(x, y)$, form a non-intersecting tree on the $n$ vertices placed on a circle, i.e. no two edges intersect internally (this was the motivation for problem D). Let's say edges $(a, b)$ and $(c, d)$ intersect internally in the circle. 
WLOG, let's suppose we make the swap $(a, b)$ before the swap $(c, d)$; then $c$ and $d$ will go into different cycles, as in Claim $2$ above. What if you were given any tree on $n$ vertices and asked to solve the problem with "YES/NO"? If the given edges intersect internally in the circle then the answer is "NO"; otherwise it is always possible to construct a valid sequence of swaps. This is what the validator of E and the checker of D do; try this one, and feel free to discuss in the comments section. Let's normalize every edge $(u, v)$ so that $u \lt v$; clearly the order of $u$, $v$ doesn't matter. Consider each edge as a segment $[u, v]$. Then two edges of the tree intersect internally if and only if the corresponding segments $[l_1, r_1]$ and $[l_2, r_2]$ satisfy one of the conditions below: $l_1\lt l_2 \lt r_1 \lt r_2$ or $l_2 \lt l_1 \lt r_2 \lt r_1$. In the original problem, it was mentioned that there is always a correct sequence of swaps, so we claimed that a topological sorting must exist, and indeed any topological sorting suffices. What if we were given a non-intersecting spanning tree? Can we still claim that there exists a correct move at every step? Claim: Yes, we can. We need to show that there is always some edge that can be removed without breaking Claim $2$ above, which is the only required condition. Cycles of length $\le 2$ are trivial. Let's denote by $u_{next}$ the first element of the list $[x_1, x_2, ..., x_k]$ for $u$, i.e. the closest vertex having an edge with $u$ in the cycle's orientation. Now let's start an iteration: start moving from $1$ and jump to $v_{next}$ every time you are at vertex $v$. Continue this process until you again reach $1$ or cross over $1$. Let the sequence obtained be $s$, i.e. $s_1 = 1, s_2, ..., s_k$, where on moving further from $s_k$ we cross/reach $1$. For simplicity assume $k \ge 3$; $k = 2$ is trivial. It can be shown that $(s_{k-1}, s_k)$ is the required edge. 
$s_{k_{next}}$ lies between $s_{k-1}$ and $s_k$. There are three cases other than this: $s_{k_{next}}$ lies between $s_k$ and $1$, which is not possible because then we would have moved further and $s_k$ would not be the last element of the sequence $s$. $s_{k_{next}} = 1$, which is not possible because it would create a cycle and we are given a tree. $s_{k_{next}}$ lies between $s_j$ and $s_{j+1}$ for some $j \le k-2$, which is also not possible because then the edges $(s_k, s_{k_{next}})$ and $(s_j, s_{j+1})$ would intersect and we are given a non-intersecting tree. So $s_{k}$ is the first element of the adjacency list of $s_{k-1}$ by the definition of $v_{next}$, and $s_{k-1}$ is the first element of the adjacency list of $s_{k}$ by the three points above. Hence it is safe to make the swap $(s_{k-1}, s_{k})$, and the topological sort still works. This might not be the only proof; if you have other proofs, feel free to discuss them in the comments. Hope you liked the details!! Any ideas on how to write a generator for this problem? Randomly partition the permutation into cycles; generating the swaps for a particular cycle is the main issue. Let's represent the cycle by an array $a$ of size $n$ with the cycle $a_1 \rightarrow a_2 \rightarrow \ldots \rightarrow a_n \rightarrow a_1$. Now let's start making random swaps, say $(a_i, a_j)$, to break the cycle; such a swap generates two smaller cycles: $a_1 \rightarrow a_2 \rightarrow \ldots \rightarrow a_i \rightarrow a_{j+1} \rightarrow \ldots \rightarrow a_n \rightarrow a_1$. 
$a_{i+1} \rightarrow \ldots \rightarrow a_j \rightarrow a_{i+1}$. This can be easily done using treaps :) and then we can use recursion to solve them independently. It's very rare!! At least the first time for us.
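The generator sketched above can also be written naively with plain vectors instead of treaps. The function names below are ours and the code is an $O(n^2)$ illustration only, not the actual treap-based generator:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Naive sketch of the generator: `cyc` lists the elements of one permutation
// cycle in cycle order (cyc[k] maps to cyc[k+1], the last maps to the first).
// We record a random split swap (cyc[i], cyc[j]), which breaks the cycle into
// the two smaller cycles from the editorial, and recurse on each part.
void genSwaps(vector<int> cyc, mt19937 &rng, vector<pair<int,int>> &out) {
    int n = cyc.size();
    if (n <= 1) return;
    int i = rng() % n, j = rng() % n;
    if (i == j) j = (j + 1) % n;
    if (i > j) swap(i, j);
    out.push_back({cyc[i], cyc[j]});
    vector<int> left(cyc.begin(), cyc.begin() + i + 1);      // a_1..a_i
    left.insert(left.end(), cyc.begin() + j + 1, cyc.end()); // a_{j+1}..a_n
    vector<int> right(cyc.begin() + i + 1, cyc.begin() + j + 1); // a_{i+1}..a_j
    genSwaps(left, rng, out);
    genSwaps(right, rng, out);
}

// Helper for checking: applies the swaps (in generated order) to the
// permutation induced by the cycle and reports whether it gets sorted.
bool sortsCycle(const vector<int> &cyc, const vector<pair<int,int>> &swaps) {
    int n = 0;
    for (int v : cyc) n = max(n, v);
    vector<int> p(n + 1);
    iota(p.begin(), p.end(), 0);
    for (size_t k = 0; k < cyc.size(); ++k)
        p[cyc[k]] = cyc[(k + 1) % cyc.size()];
    for (auto [x, y] : swaps) swap(p[x], p[y]);
    return is_sorted(p.begin(), p.end());
}
```

Because the split swap is recorded before the swaps of its two sub-cycles, applying the swaps in the generated order sorts the permutation, which gives a convenient sanity check before the swaps are shuffled.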
[ "constructive algorithms", "dfs and similar", "graphs", "greedy", "math", "sortings", "trees" ]
2,700
#include<bits/stdc++.h> using namespace std ; #define ll long long #define pb push_back #define all(v) v.begin(),v.end() #define sz(a) (ll)a.size() #define F first #define S second #define INF 2000000000000000000 #define popcount(x) __builtin_popcountll(x) #define pll pair<ll,ll> #define pii pair<int,int> #define ld long double template<typename T, typename U> static inline void amin(T &x, U y){ if(y < x) x = y; } template<typename T, typename U> static inline void amax(T &x, U y){ if(x < y) x = y; } #ifdef LOCAL #define debug(...) debug_out(#__VA_ARGS__, __VA_ARGS__) #else #define debug(...) 2401 #endif const int N = 2e5 + 5; vector<int> t_sort; int idx[N]; int vs[N]; vector<int> v[N]; bool dfs_sort(int u) { vs[u]=2; for(auto j:v[u]) { if(vs[j]==2) return true; if(vs[j]==0 && dfs_sort(j)) return true; } vs[u]=1; t_sort.push_back(u); return false; } // Returns true if there is a topological sort else returns false bool top_sort(int n) { t_sort.clear(); for(int i=1;i<=n;++i) vs[i]=0; for(int i=1;i<=n;++i) { if(vs[i]==0) { if(dfs_sort(i)) { t_sort.clear(); return false; } } } reverse(t_sort.begin(),t_sort.end()); assert(t_sort.size()==n); for(int i=0;i<n;++i) idx[t_sort[i]]=i; return true; } int _runtimeTerror_() { int n, k; cin >> n >> k; vector<int> p(n+1), a(k), b(k); for(int i=1;i<=n;++i) { cin >> p[i]; } vector<vector<pii>> g(n+1); for(int i=0;i<k;++i) { int x, y; cin >> x >> y; a[i] = x, b[i] = y; g[x].push_back({y, i + 1}); g[y].push_back({x, i + 1}); } vector<int> id(n+1); vector<int> ans; auto solve = [&](vector<int> &cyc) { int n = sz(cyc); if(n == 1) { return; } for(int i=0;i<n;++i) { id[cyc[i]] = i; } auto dist = [&](int x, int y) { return (id[y] - id[x] + n) % n; }; vector<int> good; for(int i:cyc) { sort(all(g[i]), [&](pii &a, pii &b) { return dist(i, a.F) < dist(i, b.F); }); for(int j=1;j<sz(g[i]);++j) { v[g[i][j-1].S].push_back(g[i][j].S); } } }; vector<bool> vis(n+1); for(int i=1;i<=n;++i) { if(vis[i]) { continue; } vector<int> cycle; int cur = i; 
while(!vis[cur]) { cycle.push_back(cur); vis[cur] = 1; cur = p[cur]; } solve(cycle); } top_sort(k); for(auto i:t_sort) { cout << i << " "; } cout << "\n"; return 0; } int main() { ios_base::sync_with_stdio(0);cin.tie(0);cout.tie(0); #ifdef runSieve sieve(); #endif #ifdef NCR initncr(); #endif int TESTS = 1; //cin >> TESTS; while(TESTS--) { _runtimeTerror_(); } return 0; }
1682
F
MCMF?
You are given two integer arrays $a$ and $b$ ($b_i \neq 0$ and $|b_i| \leq 10^9$). Array $a$ is sorted in \textbf{non-decreasing} order. The cost of a subarray $a[l:r]$ is defined as follows: - If $ \sum\limits_{j = l}^{r} b_j \neq 0$, then the cost is not defined. - Otherwise: - Construct a bipartite flow graph with $r-l+1$ vertices, labeled from $l$ to $r$, with all vertices having $b_i \lt 0$ on the left and those with $b_i \gt 0$ on right. For each $i, j$ such that $l \le i, j \le r$, $b_i<0$ and $b_j>0$, draw an edge from $i$ to $j$ with infinite capacity and cost of unit flow as $|a_i-a_j|$. - Add two more vertices: source $S$ and sink $T$. - For each $i$ such that $l \le i \le r$ and $b_i<0$, add an edge from $S$ to $i$ with cost $0$ and capacity $|b_i|$. - For each $i$ such that $l \le i \le r$ and $b_i>0$, add an edge from $i$ to $T$ with cost $0$ and capacity $|b_i|$. - The cost of the subarray is then defined as the minimum cost of maximum flow from $S$ to $T$. You are given $q$ queries in the form of two integers $l$ and $r$. You have to compute the cost of subarray $a[l:r]$ for each query, modulo $10^9 + 7$. If you don't know what the minimum cost of maximum flow means, read here.
Let us suppose we need to calculate the answer for only one query, say the complete array, i.e. $a[1:n]$. The scary flow structure in the problem can be reduced as follows. Let's replicate each vertex $i$, $|b_i|$ times. Then there will be an equal number of vertices on the left and right side, and the problem reduces to matching these vertices with minimum cost, where the cost of matching $i$ and $j$ is $|a_i - a_j|$. There are only two types of elements (left side and right side) and the following greedy algorithm to match the elements works. Algorithm: Sort the type $1$ and type $2$ elements independently and match them in the sorted order. Assume that two elements from the left $l_1 \le l_2$ are matched with two elements from the right $r_1 \le r_2$ as $[l_1, r_2]$ and $[l_2, r_1]$; then it can be easily shown that matching $[l_1, r_1]$ and $[l_2, r_2]$ is at least as good. The proof is left as an exercise to the reader. Since the array $a$ is given in sorted order, let's use it!! Let's assume: Type $1$ elements are those having $b_i \lt 0$. Type $2$ elements are those having $b_i \gt 0$. Now instead of replicating elements $|b_i|$ times and sorting them independently, let's iterate over array $a$ from left to right and add the contribution of each element independently. Say we are at index $i$, and the prefix sum of $b$ just before index $i$ is $psum_i$; then the following cases arise: $b_i \gt 0$, $psum_i \ge 0$: there is no unmatched type $1$ element on the left, so we just add this element's contribution to the answer, i.e. $-b_i \cdot a_i$ (its units will later be matched by larger elements). 
$b_i \gt 0$, $psum_i \lt -b_i$: there are more than $b_i$ unmatched type $1$ elements on the left, so we match $b_i$ of them to $a_i$, adding a contribution of $a_i \cdot b_i$ to the answer. $b_i \gt 0$, $-b_i \lt psum_i \lt 0$: there are fewer than $b_i$ unmatched elements ($= |psum_i|$) on the left, so we match that many with $a_i$ and the remaining units are propagated further, adding a contribution of $|psum_i| \cdot a_i - (b_i - |psum_i|) \cdot a_i$, where the positive term comes from the units matching previous unmatched elements and the negative term comes from the units that are left unmatched for now. Similar cases exist for $b_i \lt 0$. So now we can easily solve the problem for one query in $O(n)$. Main idea: Let's simulate the above algorithm for every suffix and record the obtained value in $ans_i$ for the $i$-th suffix. Note that the value $ans_i$ doesn't denote the answer for that suffix, because the sum of $b_i$ over the suffix might or might not be zero. One important observation: take some subarray $a[l:r]$ for which the sum of $b_i$ is $0$; then $ans_l - ans_{r+1}$ does have a meaning, and it is indeed the answer for that query. Our answer for $a[l:r]$ would have been the result of the simulation on the subarray alone, but how does the simulation on the $l$-th suffix look? 
It greedily matches the subarray $a[l:r]$ first, because the sum of $b_i$ over it is zero, so it surely pairs up all elements inside that subarray. Then it moves on to $r+1$, and continuing the simulation from $r+1$ is equivalent to starting a fresh simulation at $r+1$, because the $psum$ so far (defined above) would automatically be $0$ there. Note that $ans_l$ alone doesn't have any physical meaning, because it adds some junk value if the elements after $r+1$ are not paired up equally, but those junk values are exactly the same in $ans_l$ and $ans_{r+1}$, so they cancel out, giving the correct answer. But still, we can't simulate for every suffix, right? That would be $O(n^2)$ again. Let's iterate from left to right and for every $i$ calculate its contribution to the $1^{st}$, $2^{nd}$, ..., $(i-1)^{th}$ suffixes, which is easy because it depends only on $psum_i$, $b_i$ (which are constant for a given $i$) and $psum_l$ for the contribution to the $l$-th suffix. This is pretty standard using Fenwick trees. How to calculate $ans_i$? Let's handle $b_i \gt 0$ and $b_i \lt 0$ independently, say $b_i \gt 0$ for now; the other case is similar. Let $psum_i = \sum_{j=1}^{i-1}b_j$. Consider the contribution of index $i$ to $ans_l$ for $l \lt i$; from the three cases described above, the contribution differs for different $l$ with different $psum_l$. We can build Fenwick trees on the compressed prefix sums. Cases $1$ and $2$ above add a constant value to a range of prefix sums, which can be maintained in one Fenwick tree, and case $3$ adds a linear function of $psum$ on a range, which can be maintained in the other Fenwick trees. Add the contribution of each $i$ from $1$ to $n$ first, and then let's start calculating $ans_i$. For $i = 1$, $ans_1$ can be obtained by querying at $psum_1$ in all the Fenwick trees. 
Then we remove the contribution of $i = 1$ from the Fenwick trees (simply adding the negative of what we added above), because $i = 1$ does not contribute to any suffix other than the $1^{st}$ one. Similarly, we move from left to right, calculate $ans_i$ by querying at $psum_i$, and then remove the contribution of the $i$-th element.
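The single-query greedy described above can be sketched as follows. The function name and the explicit pool of unmatched units are our own; the full solution replaces this simulation with the suffix trick and Fenwick trees, and additionally works modulo $10^9 + 7$:

```cpp
#include <bits/stdc++.h>
using namespace std;

// O(n) single-query greedy: `a` is the (sorted) subarray, `b` the weights,
// with sum(b) == 0. We keep a pool of still-unmatched units; the pool always
// holds units of one sign, so a newly arriving element of the opposite sign
// is matched immediately, contributing a[i] - v per matched unit (a is
// non-decreasing, so a[i] >= every pooled value v).
long long subarrayCost(const vector<long long> &a, const vector<long long> &b) {
    vector<pair<long long, long long>> pool; // {value, unmatched units}
    int poolSign = 0;                        // sign of the units in the pool
    long long cost = 0;
    for (size_t i = 0; i < a.size(); ++i) {
        long long units = llabs(b[i]);
        int s = b[i] > 0 ? 1 : -1;
        if (poolSign == 0 || s == poolSign) {
            pool.push_back({a[i], units});
            poolSign = s;
            continue;
        }
        while (units > 0 && !pool.empty()) {
            auto &[v, c] = pool.back();
            long long take = min(units, c);
            cost += take * (a[i] - v);
            units -= take;
            c -= take;
            if (c == 0) pool.pop_back();
        }
        if (units > 0) { pool.push_back({a[i], units}); poolSign = s; }
        else if (pool.empty()) poolSign = 0;
    }
    return cost;
}
```

Since $a$ is non-decreasing, every pooled unit ends up matched by a later, larger-or-equal element, which reproduces the sorted-order matching from the algorithm above.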
[ "data structures", "flows", "graphs", "greedy", "sortings", "two pointers" ]
2,700
#include<bits/stdc++.h> using namespace std ; #define ll long long #define pb push_back #define all(v) v.begin(),v.end() #define sz(a) (ll)a.size() #define F first #define S second #define INF 2000000000000000000 #define popcount(x) __builtin_popcountll(x) #define pll pair<ll,ll> #define pii pair<int,int> #define ld long double template<typename T, typename U> static inline void amin(T &x, U y){ if(y<x) x=y; } template<typename T, typename U> static inline void amax(T &x, U y){ if(x<y) x=y; } #ifdef LOCAL #define debug(...) debug_out(#__VA_ARGS__, __VA_ARGS__) #else #define debug(...) 2401 #endif const int MOD=1000000007; struct Mint { int val; Mint(long long v = 0) { if (v < 0) v = v % MOD + MOD; if (v >= MOD) v %= MOD; val = v; } static int mod_inv(int a, int m = MOD) { int g = m, r = a, x = 0, y = 1; while (r != 0) { int q = g / r; g %= r; swap(g, r); x -= q * y; swap(x, y); } return x < 0 ? x + m : x; } explicit operator int() const { return val; } Mint& operator+=(const Mint &other) { val += other.val; if (val >= MOD) val -= MOD; return *this; } Mint& operator-=(const Mint &other) { val -= other.val; if (val < 0) val += MOD; return *this; } static unsigned fast_mod(uint64_t x, unsigned m = MOD) { #if !defined(_WIN32) || defined(_WIN64) return x % m; #endif unsigned x_high = x >> 32, x_low = (unsigned) x; unsigned quot, rem; asm("divl %4\n" : "=a" (quot), "=d" (rem) : "d" (x_high), "a" (x_low), "r" (m)); return rem; } Mint& operator*=(const Mint &other) { val = fast_mod((uint64_t) val * other.val); return *this; } Mint& operator/=(const Mint &other) { return *this *= other.inv(); } friend Mint operator+(const Mint &a, const Mint &b) { return Mint(a) += b; } friend Mint operator-(const Mint &a, const Mint &b) { return Mint(a) -= b; } friend Mint operator*(const Mint &a, const Mint &b) { return Mint(a) *= b; } friend Mint operator/(const Mint &a, const Mint &b) { return Mint(a) /= b; } Mint& operator++() { val = val == MOD - 1 ? 
0 : val + 1; return *this; } Mint& operator--() { val = val == 0 ? MOD - 1 : val - 1; return *this; } Mint operator++(int32_t) { Mint before = *this; ++*this; return before; } Mint operator--(int32_t) { Mint before = *this; --*this; return before; } Mint operator-() const { return val == 0 ? 0 : MOD - val; } bool operator==(const Mint &other) const { return val == other.val; } bool operator!=(const Mint &other) const { return val != other.val; } Mint inv() const { return mod_inv(val); } Mint power(long long p) const { assert(p >= 0); Mint a = *this, result = 1; while (p > 0) { if (p & 1) result *= a; a *= a; p >>= 1; } return result; } friend ostream& operator << (ostream &stream, const Mint &m) { return stream << m.val; } friend istream& operator >> (istream &stream, Mint &m) { return stream>>m.val; } }; template<typename T=long long> struct fenwick { vector<T> bit; int n; fenwick(int x) { n = x; bit.resize(x + 1, T(0)); } void update(int j,T val) { for(;j<=n;j+=j&-j) bit[j] += val; } T get(int r) { T u = 0; for(;r;r-=r&-r) u += bit[r]; return u; } T query(int l,int r) { return get(r)-get(l-1); } // kth element int getKth(T k) { int ans = 0; T cnt = 0; for(int i=20;i>=0;--i) { if(ans + (1 << i) <= n && cnt + bit[ans + (1 << i)] < k) { ans += (1 << i); cnt += bit[ans]; } } if(ans == n) { return -1; } return ans + 1; } void insert(int x) { update(x, 1); } void erase(int x) { update(x, -1); } }; int _runtimeTerror_() { int n; int Q; cin >> n >> Q; vector<array<int,2>> a(n); for(int i=0;i<n;++i) { cin >> a[i][0]; } for(int i=0;i<n;++i) { cin >> a[i][1]; } vector<Mint> val(n, 0); for(int i=n-1;i>=0;--i) { if(i < n - 1) { val[i] = val[i + 1]; } val[i] += a[i][0] * Mint(abs(a[i][1])); } auto solve = [&](vector<array<int,2>> &a) { ll psum = 0; vector<ll> psums; psums.push_back(0); for(int i=0;i<n;++i) { psum += a[i][1]; assert(a[i][1] != 0); psums.push_back(psum); } sort(all(psums)); psums.resize(unique(all(psums)) - psums.begin()); psum = 0; auto get_next = [&](ll x) { 
return lower_bound(all(psums), x) - psums.begin() + 1; }; fenwick<Mint> f1(2*n), f2(2*n), f3(2*n); for(int i=0;i<n;++i) { if(a[i][1] > 0) { f1.update(1, Mint(a[i][1]) * a[i][0]); f1.update(get_next(psum), -Mint(a[i][1]) * a[i][0]); f2.update(get_next(psum), a[i][0]); f2.update(get_next(psum + a[i][1]), -a[i][0]); f3.update(get_next(psum), Mint(a[i][0]) * Mint(psum + a[i][1])); f3.update(get_next(psum + a[i][1]), -Mint(a[i][0]) * (psum + a[i][1])); } psum += a[i][1]; } psum = 0; for(int i=0;i<n;++i) { val[i] -= 2 * f1.get(get_next(psum)); val[i] -= 2 * (f3.get(get_next(psum)) - psum * f2.get(get_next(psum))); if(a[i][1] > 0) { f1.update(1, -Mint(a[i][1]) * a[i][0]); f1.update(get_next(psum), Mint(a[i][1]) * a[i][0]); f2.update(get_next(psum), -a[i][0]); f2.update(get_next(psum + a[i][1]), a[i][0]); f3.update(get_next(psum), -Mint(a[i][0]) * Mint(psum + a[i][1])); f3.update(get_next(psum + a[i][1]), Mint(a[i][0]) * (psum + a[i][1])); } psum += a[i][1]; } }; solve(a); for(int i=0;i<n;++i) { a[i][1] = -a[i][1]; } solve(a); val.push_back(0); while(Q--) { int l, r; cin >> l >> r; --l, --r; cout << val[l] - val[r + 1] << "\n"; } return 0; } int main() { ios_base::sync_with_stdio(0);cin.tie(0);cout.tie(0); #ifdef runSieve sieve(); #endif #ifdef NCR initialize(); #endif int TESTS = 1; //cin >> TESTS; while(TESTS--) _runtimeTerror_(); return 0; }
1684
A
Digit Minimization
There is an integer $n$ \textbf{without zeros} in its decimal representation. Alice and Bob are playing a game with this integer. Alice starts first. They play the game in turns. On her turn, Alice \textbf{must} swap any two digits of the integer that are on different positions. Bob on his turn always removes the last digit of the integer. The game ends when there is only one digit left. You have to find the smallest integer Alice can get in the end, if she plays optimally.
Let $k$ be the length of $n$. Let $n_i$ be the $i$-th digit of $n$ ($1$-indexed from the left). $k = 1$: The game ends immediately, so the answer is $n$ itself. $k = 2$: Alice makes the first move and she has to swap $n_1$ and $n_2$. After that Bob removes $n_1$ (now the last digit), and in the end only $n_2$ remains. $k \ge 3$: Alice can make swaps in such a way that when there are only two digits left, the second digit is the minimal digit of $n$. Then she makes one more swap so that the minimal digit is on the first position, and the other digit is removed by Bob. This way she can always get the minimal digit of $n$ at the end of the game.
[ "constructive algorithms", "games", "math", "strings" ]
800
#include <bits/stdc++.h>
using namespace std;

int main() {
    int T;
    cin >> T;
    while (T --> 0) {
        string n;
        cin >> n;
        if (n.size() == 2) {
            cout << n[1] << '\n';
        } else {
            cout << *min_element(n.begin(), n.end()) << '\n';
        }
    }
    return 0;
}
1684
B
Z mod X = C
You are given three positive integers $a$, $b$, $c$ ($a < b < c$). You have to find three positive integers $x$, $y$, $z$ such that: $$x \bmod y = a,$$ $$y \bmod z = b,$$ $$z \bmod x = c.$$ Here $p \bmod q$ denotes the remainder from dividing $p$ by $q$. It is possible to show that for such constraints the answer always exists.
In this problem it is enough to find a construction that works for all $a < b < c$. For example: $x = a + b + c$, $y = b + c$, $z = c$. In this case: $x \bmod y = (a + b + c) \bmod (b + c) = a$ since $a < b < b + c$; $y \bmod z = (b + c) \bmod c = b$ since $b < c$; $z \bmod x = c \bmod (a + b + c) = c$ since $c < a + b + c$.
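A small helper makes it easy to convince yourself of the construction (the function name is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Checks that the triple (x, y, z) = (a + b + c, b + c, c) satisfies
// all three congruences from the statement for a given a < b < c.
bool constructionWorks(long long a, long long b, long long c) {
    long long x = a + b + c, y = b + c, z = c;
    return x % y == a && y % z == b && z % x == c;
}
```

Sweeping all triples $a < b < c$ up to some small bound confirms the three identities.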
[ "constructive algorithms", "math" ]
800
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int a, b, c;
    cin >> a >> b >> c;
    cout << a + b + c << " " << b + c << " " << c << "\n";
}

int main() {
    int t;
    cin >> t;
    while (t--) solve();
    return 0;
}
1684
C
Column Swapping
You are given a grid with $n$ rows and $m$ columns, where each cell has a positive integer written on it. Let's call a grid good, if in each row the sequence of numbers is sorted in a non-decreasing order. It means, that for each $1 \le i \le n$ and $2 \le j \le m$ the following holds: $a_{i,j} \ge a_{i, j-1}$. You have to do the following operation exactly once: choose two columns with indexes $i$ and $j$ (\textbf{not necessarily different}), $1 \le i, j \le m$, and swap them. You are asked to determine whether it is possible to make the grid good after the swap and, if it is, find the columns that need to be swapped.
At first, let's check whether the given table is good. If it is not, then there is a row whose elements need to be rearranged. Let's say that this row is $a$ and $b$ is the sorted version of $a$. Then let's find the set of positions $i$ such that $a_i \neq b_i$. If there are at least $3$ such positions then the answer is $-1$, because one swap removes at most $2$ such bad positions. If there are no more than $2$ such positions, then let's swap the corresponding columns and check whether each row is sorted. If the table is good, we have found the answer. If it is not, then the answer is $-1$, because there is no other way to sort $a$ and get a good table.
[ "brute force", "constructive algorithms", "greedy", "implementation", "sortings" ]
1,400
#include <bits/stdc++.h>
using namespace std;

void solve(vector<vector<int>> &a) {
    int n = a.size(), m = a[0].size();
    vector<int> bad;
    for (int i = 0; i < n && bad.empty(); i++) {
        vector<int> b = a[i];
        sort(b.begin(), b.end());
        for (int j = 0; j < m; j++) {
            if (a[i][j] != b[j]) bad.push_back(j);
        }
    }
    if ((int)bad.size() == 0) {
        cout << 1 << " " << 1 << "\n";
        return;
    }
    if ((int)bad.size() > 2) {
        cout << -1 << "\n";
        return;
    }
    for (int i = 0; i < n; i++) {
        swap(a[i][bad[0]], a[i][bad[1]]);
    }
    for (int i = 0; i < n; i++) {
        for (int j = 1; j < m; j++) {
            if (a[i][j] < a[i][j - 1]) {
                cout << -1 << "\n";
                return;
            }
        }
    }
    cout << bad[0] + 1 << " " << bad[1] + 1 << "\n";
    return;
}

int main() {
#ifdef LOCAL
    freopen("input.txt", "r", stdin);
    freopen("output.txt", "w", stdout);
#else
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
#endif
    int T;
    cin >> T;
    while (T--) {
        int n, m;
        cin >> n >> m;
        vector<vector<int>> a(n, vector<int>(m));
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < m; j++) {
                cin >> a[i][j];
            }
        }
        solve(a);
    }
    return 0;
}
1684
D
Traps
There are $n$ traps numbered from $1$ to $n$. You will go through them one by one in order. The $i$-th trap deals $a_i$ base damage to you. Instead of going through a trap, you can jump it over. You can jump over no more than $k$ traps. If you jump over a trap, it does not deal any damage to you. But there is an additional rule: if you jump over a trap, all next traps damages increase by $1$ (this is a bonus damage). Note that if you jump over a trap, you don't get any damage (neither base damage nor bonus damage). Also, the bonus damage stacks so, for example, if you go through a trap $i$ with base damage $a_i$, and you have already jumped over $3$ traps, you get $(a_i + 3)$ damage. You have to find the minimal damage that it is possible to get if you are allowed to jump over no more than $k$ traps.
Firstly, let's notice that it is always better to use all $k$ jumps: if we jumped over fewer than $k$ traps, then we can additionally jump over the last trap and the total damage will decrease. Secondly, let's say that we immediately get $n - i$ damage when we jump over the $i$-th trap. This way the first trap that we jump over is charged $k - 1$ damage more than it should be (because of the $k - 1$ traps that we jump over after it), the second is charged $k - 2$ more, ..., and the last one $0$ more. So the total damage only increases by $\frac{k(k - 1)}{2}$, which does not depend on the traps that we choose; hence the optimal set of traps to jump over is the same in this modified problem. Now let's consider an array $b$ with $b_i = a_i - (n - i)$. This array denotes the amount of damage we dodge if we jump over the $i$-th trap (we dodge $a_i$ because we don't take the trap's damage, but we take the immediate $n - i$ damage). Here a simple greedy works: let's just choose the $k$ maximal values of this array; these are the traps to jump over so that the total dodged damage is maximized. These traps are the answer to the original problem.
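The greedy above can be cross-checked against an exponential brute force on tiny inputs. Both function names are ours; `trapsGreedy` mirrors the editorial's formula, while `trapsBrute` tries every subset of at most $k$ jumps:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Greedy from the editorial: jump over the k traps with the largest
// a_i + i (1-indexed), then account for the k(k-1)/2 over-count.
long long trapsGreedy(vector<long long> a, int k) {
    int n = a.size();
    long long total = 0;
    for (int i = 0; i < n; ++i) {
        total += a[i];
        a[i] += i + 1; // dodge value b_i = a_i - (n - i), shifted by n
    }
    sort(a.rbegin(), a.rend());
    for (int i = 0; i < k; ++i) total -= a[i] - (n - i);
    return total;
}

// Brute force: try every subset of at most k jumped traps and simulate.
long long trapsBrute(const vector<long long> &a, int k) {
    int n = a.size();
    long long best = LLONG_MAX;
    for (int mask = 0; mask < (1 << n); ++mask) {
        if (__builtin_popcount(mask) > k) continue;
        long long dmg = 0, jumped = 0;
        for (int i = 0; i < n; ++i) {
            if (mask >> i & 1) ++jumped;  // jump: no damage, bonus grows
            else dmg += a[i] + jumped;    // walk through with current bonus
        }
        best = min(best, dmg);
    }
    return best;
}
```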
[ "constructive algorithms", "greedy", "sortings" ]
1,700
#include <bits/stdc++.h>
using namespace std;
#define all(x) (x).begin(), (x).end()

void solve() {
    int n, k;
    cin >> n >> k;
    long long ans = 0;
    vector<int> a(n);
    for (int i = 0; i < n; i++) {
        cin >> a[i];
        ans += a[i];
        a[i] += i + 1;
    }
    sort(all(a));
    reverse(all(a));
    for (int i = 0; i < k; i++) ans -= a[i];
    for (int i = 0; i < k; i++) {
        ans += n;
        ans -= i;
    }
    cout << ans << "\n";
}

int main() {
#ifdef LOCAL
    freopen("input.txt", "r", stdin);
    freopen("output.txt", "w", stdout);
#else
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
#endif
    int t;
    cin >> t;
    while (t--) solve();
    return 0;
}
1684
E
MEX vs DIFF
You are given an array $a$ of $n$ non-negative integers. In one operation you can change any number in the array to any other non-negative integer. Let's define the cost of the array as $\operatorname{DIFF}(a) - \operatorname{MEX}(a)$, where $\operatorname{MEX}$ of a set of non-negative integers is the smallest non-negative integer not present in the set, and $\operatorname{DIFF}$ is the number of different numbers in the array. For example, $\operatorname{MEX}(\{1, 2, 3\}) = 0$, $\operatorname{MEX}(\{0, 1, 2, 4, 5\}) = 3$. You should find the minimal cost of the array $a$ if you are allowed to make at most $k$ operations.
Let's consider all possible values of $\operatorname{MEX}$ after all operations. The value is between $0$ and $n$, and we can check them all in increasing order. Now let's fix some $\operatorname{MEX} = m$. All numbers from $0$ to $m - 1$ must be present in the array, so there may be some "holes" that have to be covered: a hole is an integer from $0$ to $m - 1$ which is not present in the array. If at least one hole remains in the end, it is not possible to obtain $\operatorname{MEX} = m$. Now let's see how we should cover the holes; we will do it greedily. Firstly, we will use integers that are greater than $m$: it is easy to see that they are never worse to use than the integers that are already in $[0, m - 1]$. Moreover, we should start from the values that occur the fewest times in the array. This is because each time we cover a hole we increase $\operatorname{MEX}$ by at least one (we cover the holes in increasing order), while the value of $\operatorname{DIFF}$ increases by at most $1$, and it does not increase at all when we use up the last occurrence of a value. After that, if we have used all integers greater than $m$, we use the integers from $[0, m - 1]$, but only those that occur more than once. By doing these operations we increase $\operatorname{MEX}$ by at least $1$ and increase $\operatorname{DIFF}$ by exactly $1$ (because we cover a hole). Now let's notice that when considering each $\operatorname{MEX}$ value in increasing order we can maintain some information about the current state of the array: a set that helps us find the elements greater than $m$ which occur the fewest times, the number of uncovered holes, and the number of "bonus" elements from $0$ to $m - 1$ (the count of such elements minus the $\operatorname{DIFF}$ over them); it is easy to see how these change when we increase $\operatorname{MEX}$. 
So in total we can calculate the answer for each $\operatorname{MEX}$ from $0$ to $n$ and take the minimum.
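As a small illustration of the two quantities in the cost function (a minimal sketch, not the full solution; helper names are ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// DIFF: number of distinct values in the array.
int diff_of(const vector<int>& a) {
    return set<int>(a.begin(), a.end()).size();
}

// MEX: smallest non-negative integer not present in the array.
int mex_of(const vector<int>& a) {
    set<int> s(a.begin(), a.end());
    int m = 0;
    while (s.count(m)) ++m;
    return m;
}

// Cost as defined in the statement: DIFF(a) - MEX(a).
int cost_of(const vector<int>& a) { return diff_of(a) - mex_of(a); }
```

For example, the array $\{0, 1, 2, 4, 5\}$ has $\operatorname{DIFF} = 5$ and $\operatorname{MEX} = 3$, so its cost is $2$.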
[ "binary search", "brute force", "constructive algorithms", "data structures", "greedy", "two pointers" ]
2,100
#include <bits/stdc++.h> using namespace std; #define pb emplace_back #define all(x) (x).begin(), (x).end() #define fi first #define se second #define pii pair<int, int> #define puu pair<unsigned, unsigned> #define ll long long #define mp make_pair #define ui unsigned #define ull unsigned long long #define ld double #define pld pair<ld, ld> #define pll pair<ll, ll> const int INF = 1e9 + 1; const ll INFLL = 1e18; void solve() { int n, k; cin >> n >> k; vector<int> a(n); for (auto &c : a) cin >> c; map<int, int> cnt; for (auto &c : a) cnt[c]++; set<pii> s1, s2; int sum1 = 0; for (auto &c : cnt) s2.insert(mp(c.se, c.fi)); int ans = INF; int skip = 0; for (int x = 0; x <= n; x++) { if (s1.find(mp(cnt[x - 1], x - 1)) != s1.end()) { sum1 -= cnt[x - 1]; s1.erase(mp(cnt[x - 1], x - 1)); } if (s2.find(mp(cnt[x - 1], x - 1)) != s2.end()) { s2.erase(mp(cnt[x - 1], x - 1)); } while (s2.size() && sum1 + s2.begin()->fi <= k) { s1.insert(*s2.begin()); sum1 += s2.begin()->fi; s2.erase(s2.begin()); } if (k < skip) break; int now = x + s2.size(); if (x == 0) { now = max(1, (int)s2.size()); } ans = min(ans, now - x); if (cnt[x] == 0) skip++; } cout << ans << "\n"; } int main() { #ifdef LOCAL freopen("input.txt", "r", stdin); freopen("output.txt", "w", stdout); #else ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); #endif int t; cin >> t; while (t--) solve(); return 0; }
1684
F
Diverse Segments
You are given an array $a$ of $n$ integers. Also you are given $m$ subsegments of that array. The left and the right endpoints of the $j$-th segment are $l_j$ and $r_j$ respectively. You are allowed to make \textbf{no more than one} operation. In that operation you choose any subsegment of the array $a$ and replace each value on this segment with any integer (you are also allowed to keep elements the same). You have to apply this operation so that for the given $m$ segments, the elements on each segment are distinct. More formally, for each $1 \le j \le m$ all elements $a_{l_{j}}, a_{l_{j}+1}, \ldots, a_{r_{j}-1}, a_{r_{j}}$ should be distinct. You don't want to use the operation on a big segment, so you have to find the smallest length of a segment, so that you can apply the operation to this segment and meet the above-mentioned conditions. If it is not needed to use this operation, the answer is $0$.
Let's say the answer is $[L, R]$. Consider all given segments: there should be no two equal elements that are in the same given segment and are both outside $[L, R]$, i.e. at least one of them should be in the answer. Let's find such a segment of minimal length for $L = 1$. To do that, for each $r$ find the minimal $l$ such that on the segment $[l, r]$ all elements are distinct; let's say that $l = f(r)$. This can be done using two pointers, maintaining the current set of elements and checking whether the considered element is in the set. Now let's say we are given a segment $[l_j, r_j]$. If $f(r_j) \le l_j$ then this segment is already fine. Otherwise, for $L = 1$ the condition $R \ge f(r_j) - 1$ must hold. The minimal $R$ that meets all the inequalities will be the answer for $L = 1$. Now let's find out how to find $R$ while increasing $L$ by $1$. If we increase $L$ then $R$ can not decrease. Also, if $[L + 1, R]$ can not be the answer, then in some of the given segments there are two equal elements $a_L$ and $a_j$. If $j < L$ then the left endpoint can not be greater than $L$. Otherwise $j > R$; then we have to make $R = j$, and this $j$ should be minimal. This $j$ can be found, for example, by binary search in the array of positions of the value $a_L$. To check whether two elements are in a common given segment, it is possible to use a segment tree or a Fenwick tree: for each $i$ store the maximal $r_j$ such that there is a given segment $[l_j, r_j]$ with $l_j \le i$. The final answer will be the minimal length of $[L, R]$ over all $L$.
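The helper $f(r)$ from the first step can be computed with two pointers; a minimal sketch under our own naming (the full solution below computes the same quantity as its $mnl$ array, scanning from the right instead):

```cpp
#include <bits/stdc++.h>
using namespace std;

// For each r (0-indexed), f[r] = minimal l such that a[l..r] are pairwise
// distinct. Two pointers with a last-occurrence map, O(n log n).
vector<int> min_left_distinct(const vector<int>& a) {
    int n = a.size();
    vector<int> f(n);
    map<int, int> last;   // value -> last position where it was seen
    int l = 0;
    for (int r = 0; r < n; ++r) {
        auto it = last.find(a[r]);
        if (it != last.end()) l = max(l, it->second + 1);  // skip past the duplicate
        last[a[r]] = r;
        f[r] = l;
    }
    return f;
}
```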
[ "data structures", "two pointers" ]
2,600
#include <bits/stdc++.h> using namespace std; #define pb emplace_back #define all(x) (x).begin(), (x).end() #define fi first #define se second #define pii pair<int, int> #define puu pair<unsigned, unsigned> #define ll long long #define mp make_pair #define ui unsigned #define ull unsigned long long #define ld double #define pld pair<ld, ld> #define pll pair<ll, ll> const int INF = 1e9 + 1; const ll INFLL = 1e18; vector<int> f; void incr(int x, int d) { for (; x < (int)f.size(); x |= (x + 1)) f[x] = max(f[x], d); } int get(int x) { int ans = -1; for (; x >= 0; x = (x & (x + 1)) - 1) ans = max(ans, f[x]); return ans; } void solve() { int n, m; cin >> n >> m; vector<int> a(n); for (auto &c : a) cin >> c; map<int, vector<int>> gist; for (int i = 0; i < n; i++) gist[a[i]].pb(i); vector<pii> seg(m); f.assign(n, -1); for (auto &c : seg) { cin >> c.fi >> c.se; c.fi--; c.se--; incr(c.fi, c.se); } vector<int> mnl(n); set<int> s; int l = n; for (int r = n - 1; r >= 0; r--) { while (l - 1 >= 0 && !s.count(a[l - 1])) { l--; s.insert(a[l]); } mnl[r] = l; s.erase(a[r]); } int mnr = -1; for (auto &c : seg) { int l = c.fi, r = c.se; if (mnl[r] <= l) continue; mnr = max(mnr, mnl[r] - 1); } if (mnr == -1) { cout << 0 << "\n"; return; } int ans = mnr + 1; for (int l = 0; l + 1 < n; l++) { if (gist[a[l]][0] != l) { int id = lower_bound(all(gist[a[l]]), l) - gist[a[l]].begin() - 1; int pr = gist[a[l]][id]; if (get(pr) >= l) { break; } } int id = upper_bound(all(gist[a[l]]), mnr) - gist[a[l]].begin(); if (id != (int)gist[a[l]].size()) { int nxt = gist[a[l]][id]; if (get(l) >= nxt) { mnr = nxt; } } mnr = max(mnr, l + 1); ans = min(ans, mnr - l); } cout << ans << "\n"; } int main() { #ifdef LOCAL freopen("input.txt", "r", stdin); freopen("output.txt", "w", stdout); #else ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); #endif int t; cin >> t; while (t--) solve(); return 0; }
1684
G
Euclid Guess
Let's consider Euclid's algorithm for finding the greatest common divisor, where $t$ is a list: \begin{verbatim} function Euclid(a, b): if a < b: swap(a, b) if b == 0: return a r = remainder from dividing a by b if r > 0: append r to the back of t return Euclid(b, r) \end{verbatim} There is an array $p$ of pairs of positive integers that are not greater than $m$. Initially, the list $t$ is empty. Then the function is run on each pair in $p$. After that the list $t$ is shuffled and given to you. You have to find an array $p$ \textbf{of any size} not greater than $2 \cdot 10^4$ that produces the given list $t$, or tell that no such array exists.
Let's consider some pair $(a, b)$ and the sequence of remainders that it induces: $a \bmod b = x_1$, $b \bmod x_1 = x_2$, ..., $x_{p - 2} \bmod x_{p - 1} = x_p$, where $x_p = \gcd(a, b)$. If $x_i \le \frac{m}{3}$ then we can just add a pair $(3x_i, 2x_i)$ and we will get the only remainder $x_i$, so these values are never a problem. If $x_1 > \frac{m}{3}$ (note that if $i > 1$ then $x_i \le \frac{m}{3}$) then $a = b + x_1$, because if $a \ge 2b + x_1$ then $a > m$, which is impossible. Also $b = x_1 + x_2$, because if $b \ge 2x_1 + x_2$ then $a \ge 3x_1 + x_2 > m$. Then $a = b + x_1 = 2x_1 + x_2 \ge 2x_1 + x_p$ because $x_p < x_{p - 1} < \ldots < x_2 < x_1$. It means that for each $x_i > \frac{m}{3}$ there should be such $x_j$ that $2x_i + x_j \le m$ and $x_j$ is a divisor of $x_i$. For such values we can consider a bipartite graph where the left part contains only the $x_i > \frac{m}{3}$ and the right part contains only the $x_j \le \frac{m}{3}$, and two integers are connected $\iff$ the integer on the right divides the left one and $2x_i + x_j \le m$. This graph can be built in $O(n^2)$. After that we just have to find a matching that covers each integer in the left part and add all unused integers from the right part using the method above.
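One can sanity-check the two kinds of pairs used above by simulating the Euclid function from the statement (a sketch; the function name is ours): the pair $(3x, 2x)$ contributes exactly one remainder $x$, and a pair $(2x_i + x_j, x_i + x_j)$ with $x_j \mid x_i$ contributes exactly the remainders $x_i$ and $x_j$.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Remainder list produced by the Euclid function from the statement
// (only positive remainders are appended to t).
vector<long long> euclid_remainders(long long a, long long b) {
    vector<long long> t;
    if (a < b) swap(a, b);
    while (b != 0) {
        long long r = a % b;
        if (r > 0) t.push_back(r);
        a = b;
        b = r;
    }
    return t;
}
```

For instance, $(15, 10) = (3 \cdot 5, 2 \cdot 5)$ yields only the remainder $5$, and $(28, 16) = (2 \cdot 12 + 4, 12 + 4)$ yields the remainders $12$ and $4$.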
[ "constructive algorithms", "flows", "graph matchings", "math", "number theory" ]
2,800
#include <bits/stdc++.h> using namespace std; #define pb emplace_back #define all(x) (x).begin(), (x).end() #define fi first #define se second #define pii pair<int, int> #define puu pair<unsigned, unsigned> #define ll long long #define mp make_pair #define ui unsigned #define ull unsigned long long #define ld double #define pld pair<ld, ld> #define pll pair<ll, ll> #define int ll const int INF = 1e9 + 1; const ll INFLL = 1e18; vector<vector<int>> g; vector<int> with; vector<int> usd; int dfs(int v) { if (usd[v]) return 0; usd[v] = 1; for (auto &to : g[v]) { if (with[to] == -1) { with[to] = v; return 1; } } for (auto &to : g[v]) { if (dfs(with[to])) { with[to] = v; return 1; } } return 0; } signed main() { #ifdef LOCAL freopen("input.txt", "r", stdin); freopen("output.txt", "w", stdout); #else ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); #endif int n, A; cin >> n >> A; vector<int> a(n); vector<int> l, r; for (auto &c : a) { cin >> c; if (3 * c > A) { l.pb(c); } else { r.pb(c); } } g.resize(l.size()); with.resize(r.size(), -1); for (int i = 0; i < (int)l.size(); i++) { for (int j = 0; j < (int)r.size(); j++) { if (l[i] % r[j] == 0 && 2 * l[i] + r[j] <= A) { g[i].pb(j); } } } int cnt = 0; for (int i = 0; i < (int)l.size(); i++) { usd.assign(l.size(), 0); cnt += dfs(i); } if (cnt < (int)l.size()) { cout << -1; return 0; } vector<pii> ans; for (int j = 0; j < (int)r.size(); j++) { if (with[j] == -1) { ans.pb(3 * r[j], 2 * r[j]); } else { ans.pb(2 * l[with[j]] + r[j], l[with[j]] + r[j]); } } cout << ans.size() << "\n"; for (auto &c : ans) cout << c.fi << " " << c.se << "\n"; return 0; }
1684
H
Hard Cut
You are given a binary string $s$. You have to cut it into any number of non-intersecting substrings, so that the sum of binary integers denoted by these substrings is a power of 2. Each element of $s$ should be in exactly one substring.
Let's say that there are $k$ ones in the given string. We will build the answer for all $k > 0$; $k = 0$ is the only case when the answer does not exist. For $k = 1$, $k = 2$, $k = 4$, cut into single digits. $k = 3$ is handled later. Let's consider $k = 5$: if all ones are in a row, then we should cut them into $1111_2 + 1_2 = 16_{10}$. Otherwise there will be either $101_2$ or $100_2$, and the rest of the string may be cut into single digits, so the sum will be $8_{10}$. Now let's consider $k > 5$ (here we will also cover $k = 3$). Let's describe the function $solve(l, r, k, n)$ for cutting the substring from $l$ to $r$ with $k$ ones to get the sum $n$. Let's use the "divide and conquer" technique: we have a string with $k$ ones and we want to get the sum $n = 2^{\left \lceil \log_2(k)\right \rceil}$, and for big enough $k$ we can just cut the whole string into two substrings with almost equal numbers of ones, run the algorithm for them and get the sum $2^{\left \lceil \log_2(\left \lfloor k/2 \right \rfloor)\right \rceil} + 2^{\left \lceil \log_2(\left \lceil k/2 \right \rceil)\right \rceil} = 2^{\left \lceil \log_2(k)\right \rceil} = n$. Now let's show how to make such a cut for all strings that have from $6$ to $11$ ones; after that, for all $k \ge 12$ we will be able to run the method described above. $k = n$: just cut into single digits. $k < n \le \left \lfloor \frac{3k}{2} \right \rfloor$: let's consider the first two ones. If they are consecutive, we cut them into a single substring: it costs us only two ones but increases the sum by $3$. If these two ones are not consecutive, we cut out the segments $10_2$ and $1_2$: this way we spend two ones again and the sum increases by $3$ as well. Also it is always possible to cut off a single digit, so it is easy to see that we can get any sum $n$ from $k$ to $\left \lfloor \frac{3k}{2} \right \rfloor$. This way we got the answer for $k = 6, 7, 8, 11$. 
Now we only need to show how to make the cut for $k = 9, 10$. $k = 9$: let's consider the substring $t$ of length $3$ that starts at the leftmost one; there are $4$ possible cases. $t = 100$: then we need to use $k' = 8$ ones to get the sum $n' = 16_{10} - 100_2 = 12$, which we know how to do. $t = 101$: then we need to use $k' = 7$ ones to get the sum $n' = 11$, which we will show in the end. $t = 110$: then we need to use $k' = 7$ ones to get the sum $n' = 10$, which we know how to do. $t = 111$: then we need to use $k' = 6$ ones to get the sum $n' = 9$, which we know how to do. $k = 10$: let's cut a substring with $k_1 = 4$ ones from the left and get the sum $n_1 = 8$, and in the remaining substring with $k_2 = 6$ ones get the sum $n_2 = 8$. We already know how to get the second sum, so only the case $k_1 = 4$, $n_1 = 8$ is left. 
Let's consider the last two cases for the full solution. $k = 4,\, n = 8$: let's do the same thing as we did for $k = 9$, cutting off a substring of length $3$ that starts at the leftmost one. Then we have two substrings, and for both of them we know how to cut them properly. $k = 7,\, n = 11$: let's cut off the first four ones and, using the previous technique, get the sum $8$; the remaining three ones we will use to get the sum $3$. This shows how to cut the string for any $k > 0$.
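The target sum for a string with $k$ ones is $2^{\left\lceil \log_2 k \right\rceil}$; a minimal sketch of the recurrence that the solution below stores in its $g$ table (note that $k = 5$ is special-cased there, since adjacent ones may force the sum $16$ instead of $8$):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Smallest power of two that is >= k, via the same recurrence as the
// solution's table: g[k] = 2 * g[(k + 1) / 2], g[1] = 1.
long long target_sum(int k) {
    if (k == 1) return 1;
    return 2 * target_sum((k + 1) / 2);
}
```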
[ "constructive algorithms", "dfs and similar", "divide and conquer", "math" ]
3,400
#include <bits/stdc++.h> using namespace std; const int MAXN = 1e6; int g[MAXN + 1]; string s; vector<pair<int, int>> segments; // [l, r] int get(int k, int i) { // get k-th '1' on [i, |s|) --i; while (k --> 0) i = s.find('1', i + 1); return i; } void five() { int first = get(1, 0), last = get(5, 0); int r = (int)s.size() - 1; if (last - first + 1 == 5) { // creating 16: ['001111', '1', '0000'] segments.emplace_back(0, last - 1); segments.emplace_back(last, last); if (last < r) segments.emplace_back(last + 1, r); } else { // creating 8: cut out '101' or '100' and cut everything in single digits int pos = s.find("101"); if (pos == string::npos) pos = s.find("100"); for (int i = 0; i < pos; ++i) segments.emplace_back(i, i); segments.emplace_back(pos, pos + 2); for (int i = pos + 3; i <= r; ++i) segments.emplace_back(i, i); } } void solve(int l, int r, int k, int n) { // [l; r] if (k == n) { // cutting into single digits for (int i = l; i <= r; ++i) segments.emplace_back(i, i); } else if (k == 2 && n == 3) { int pos = get(1, l); if (s[pos + 1] == '1') { // ['11', '00...00'] segments.emplace_back(l, pos + 1); if (pos + 2 <= r) segments.emplace_back(pos + 2, r); } else { // ['10', '0001', '00000'] int newpos = get(1, pos + 1); segments.emplace_back(l, pos + 1); segments.emplace_back(pos + 2, newpos); if (newpos + 1 <= r) segments.emplace_back(newpos + 1, r); } } else if (3 * k / 2 >= n) { // using previous if technique int pos = get(2, l); solve(l, pos, 2, 3); if (k > 2) solve(pos + 1, r, k - 2, n - 3); } else if ((k == 4 && n == 8) || (k == 9 && n == 16)) { int pos = get(1, l); string sub = s.substr(pos, 3); segments.emplace_back(l, pos + 2); if (sub == "100") solve(pos + 3, r, k - 1, n - 4); if (sub == "101") solve(pos + 3, r, k - 2, n - 5); if (sub == "110") solve(pos + 3, r, k - 2, n - 6); if (sub == "111") solve(pos + 3, r, k - 3, n - 7); } else if ((k == 7 && n == 11) || (k == 10 && n == 16)) { int mid = get(4, l); solve(l, mid, 4, 8); solve(mid + 1, r, k - 4, n - 8); } else { // common case int mid = get(k / 2, l); solve(l, mid, k / 2, n / 2); solve(mid + 1, r, k - k / 2, n / 2); } } signed main() { ios_base::sync_with_stdio(false); cin.tie(nullptr); g[1] = 1; for (int i = 2; i <= MAXN; ++i) g[i] = g[(i + 1) / 2] * 2; int T; cin >> T; while (T --> 0) { cin >> s; int k = count(s.begin(), s.end(), '1'); if (!k) { cout << -1 << '\n'; continue; } segments.clear(); if (k == 5) five(); else solve(0, (int)s.size() - 1, k, g[k]); cout << segments.size() << '\n'; for (auto &[l, r] : segments) { cout << l + 1 << ' ' << r + 1 << '\n'; } } return 0; }
1685
A
Circular Local MiniMax
You are given $n$ integers $a_1, a_2, \ldots, a_n$. Is it possible to arrange them on a circle so that each number is strictly greater than both its neighbors or strictly smaller than both its neighbors? In other words, check if there exists a rearrangement $b_1, b_2, \ldots, b_n$ of the integers $a_1, a_2, \ldots, a_n$ such that for each $i$ from $1$ to $n$ at least one of the following conditions holds: - $b_{i-1} < b_i > b_{i+1}$ - $b_{i-1} > b_i < b_{i+1}$ To make sense of the previous formulas for $i=1$ and $i=n$, one shall define $b_0=b_n$ and $b_{n+1}=b_1$.
Let's call $b_i$ a local minimum if $b_{i-1} > b_i < b_{i+1}$ and a local maximum if $b_{i-1} < b_i > b_{i+1}$. It's clear that in an arrangement satisfying the conditions from the statement, if $b_i$ is a local minimum, $b_{i+1}$ is a local maximum, and vice versa: local minimums and local maximums alternate. Then it's easy to see that such an arrangement can't exist for odd $n$. Indeed, suppose that the conditions from the statement are satisfied for $b_1, b_2, \ldots, b_n$. If we suppose that $b_1$ is a local minimum, we get that $b_2$ is a local maximum, $b_3$ is a local minimum, $\ldots$, $b_n$ is a local minimum, $b_1$ is a local maximum. Clearly, $b_1$ can't be a local maximum and a local minimum at the same time, leading to a contradiction. Let's now consider the case of even $n = 2m$. Sort the array $a$, so that $a_1 \le a_2 \le \ldots \le a_{2m}$. Let's show that if $a_i = a_{i + m - 1} = x$ for some $2 \le i \le m$, then there is no arrangement satisfying the conditions from the statement. Indeed, consider such an arrangement: we have $m$ numbers $x$, and no two of them can be adjacent, so they occupy every second position. In addition, as local maximums and local minimums alternate, we get that all $x$ are local maximums or all $x$ are local minimums. The first would imply that $a_{2m} < x$, which isn't possible. The second would imply that $a_1 > x$, which isn't possible. It turns out that if there is no such $i$, the arrangement exists. Indeed, we can arrange the numbers on the circle in the following order: $(a_1, a_{m+1}, a_2, a_{m+2}, \ldots, a_m, a_{2m})$. Here $a_k < a_{m + k} > a_{k+1}$ for $1 \le k \le m-1$, $a_{m+k} > a_{k+1} < a_{m+k+1}$ for $1 \le k \le m-1$, $a_{2m} > a_1 < a_{m+1}$ and $a_m < a_{2m} > a_1$.
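The construction and check described above can be sketched as follows (the helper name is ours; an empty result means no valid arrangement exists):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sort a, interleave the two halves as (a1, a_{m+1}, a2, a_{m+2}, ...),
// then verify the circle condition directly. Returns the arrangement,
// or an empty vector if no arrangement exists.
vector<int> arrange_alternating(vector<int> a) {
    int n = a.size();
    if (n % 2 != 0) return {};            // impossible for odd n
    sort(a.begin(), a.end());
    int m = n / 2;
    vector<int> b;
    for (int i = 0; i < m; ++i) {
        b.push_back(a[i]);
        b.push_back(a[i + m]);
    }
    // Every element must be a strict local minimum or a strict local maximum.
    for (int i = 0; i < n; ++i) {
        int prv = b[(i + n - 1) % n], nxt = b[(i + 1) % n];
        bool mn = prv > b[i] && b[i] < nxt;
        bool mx = prv < b[i] && b[i] > nxt;
        if (!mn && !mx) return {};
    }
    return b;
}
```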
[ "constructive algorithms", "greedy", "sortings" ]
1,100
null
1685
B
Linguistics
Alina has discovered a weird language, which contains only $4$ words: $A$, $B$, $AB$, $BA$. It also turned out that there are no spaces in this language: a sentence is written by just concatenating its words into a single string. Alina has found one such sentence $s$ and she is curious: is it possible that it consists of precisely $a$ words $A$, $b$ words $B$, $c$ words $AB$, and $d$ words $BA$? In other words, determine, if it's possible to concatenate these $a+b+c+d$ words in some order so that the resulting string is $s$. Each of the $a+b+c+d$ words must be used exactly once in the concatenation, but you can choose the order in which they are concatenated.
For the answer to be $\texttt{YES}$ the frequency of the character $\texttt{A}$ in the $a+b+c+d$ words must coincide with its frequency in the sentence $s$, which is equivalent to the condition $a + c + d = \#\{i:\, s_i = \texttt{A}\}$. From now on we may assume that this is true. Notice that the answer to the problem is $\texttt{YES}$ if and only if it is possible to tile the string $s$ with $c$ copies of $\texttt{AB}$ and $d$ copies of $\texttt{BA}$ so that all the $c+d$ substrings are disjoint. Indeed one can simply fill the remaining characters of $s$ with the $a$ copies of $\texttt{A}$ and the $b$ copies of $\texttt{B}$ (we are sure that the number of characters $\texttt{A}$ and $\texttt{B}$ is correct because of the initial check). Moreover, if $s_i = s_{i+1}$, then clearly any tiling with $\texttt{AB}$ and $\texttt{BA}$ of $s$ cannot cover with a single tile both $s_i$ and $s_{i+1}$; therefore we can split the string $s$ between $s_i$ and $s_{i+1}$ and try to tile the two resulting substrings. By repeating this argument we will end up with many alternating strings (a string $t$ is alternating if $t_{i} \not= t_{i+1}$ for all $1\le i < |t|$). So, we have reduced the problem to the following one: Subproblem: You are given many alternating strings, can you find in these strings $c$ substrings equal to $\texttt{AB}$ and $d$ substrings equal to $\texttt{BA}$ such that all the $c+d$ substrings are disjoint? In order to solve the problem let us analyze what happens when only one alternating string is present. Given an alternating string $t$, we say that the pair $(x, y)$ is valid if we can find in $t$ $x$ substrings equal to $\texttt{AB}$ and $y$ substrings equal to $\texttt{BA}$ so that the $x+y$ substrings are disjoint. Let us consider various cases: If $|t|=2k+1$, then $(x, y)$ is valid if and only if $x + y \le k$. Let $f(k):=\{(x, y):\, 0\le x,y\text{ and } x + y\le k\}$. 
If $|t|=2k$ and $t_1 = \texttt{A}$, then $(x, y)$ is valid if and only if $x+y\le k-1$ or $(x, y) = (k, 0)$. Let $f_{AB}(k):= \{(k, 0)\}\cup f(k-1)$. If $|t|=2k$ and $t_1 = \texttt{B}$, then $(x, y)$ is valid if and only if $x+y\le k-1$ or $(x, y) = (0, k)$. Let $f_{BA}(k):= \{(0, k)\}\cup f(k-1)$. We will provide a proof for the correctness of the following greedy algorithm. Greedy algorithm: Sort the strings of type $2$ by length and fill them, starting from the shortest, only with $\texttt{AB}$ (i.e., find as many disjoint copies of $\texttt{AB}$ as possible) until you reach $c$ or you finish them (a string of length $2k$ is filled with $k$ $\texttt{AB}$). Do the same for the strings of type $3$ for the word $\texttt{BA}$. At this point there are no more bonuses to use and you can just fill the remaining strings with $\texttt{AB}$ and $\texttt{BA}$ in the only reasonable way. If in the end you have found $c$ strings $\texttt{AB}$ and $d$ strings $\texttt{BA}$ then the answer is $\texttt{YES}$, otherwise it is $\texttt{NO}$. Let us provide a proof of the correctness of this algorithm. Let us remark that the proof of the correctness is in fact rather easy, but a bit cumbersome to write down. The strategy of the proof is to start from a solution and to show that the one constructed by the greedy algorithm is "better". Proof of the correctness of the greedy algorithm: Let $U, V, W$ be the multisets of lengths of strings of the first, second and third type respectively (according to the case division above). Then the problem is equivalent to understanding if $(c, d) \in \sum_{u\in U} f(u) + \sum_{v\in V} f_{AB}(v) + \sum_{w\in W} f_{BA}(w).$ Assume that the answer is $\texttt{YES}$ and consider a solution of the problem, i.e. a choice of a valid pair for each of the $|U|+|V|+|W|$ alternating strings. 
Let $V = V'\sqcup V''$ where $V'$ corresponds to the lengths of the strings in $V$ where the valid pair of the solution is given by $(k, 0)$ ($k$ is the length of the string). Partition $W=W'\sqcup W''$ analogously. Then we have $(c, d) \in \sum_{u\in U} f(u) + \sum_{v'\in V'} (v', 0) + \sum_{v''\in V''} f(v'' - 1) + \sum_{w'\in W'} (0, w') + \sum_{w''\in W''} f(w'' - 1) .$ Let us make a couple of observations: For any $k_1, k_2$ it holds $f(k_1) + f(k_2) = f(k_1 + k_2)$. If $k_1\le k_2$, then $f(k_1) + (k_2, 0) \subseteq (k_1, 0) + f(k_2)$ and also $f(k_1) + (0, k_2) \subseteq (0, k_1) + f(k_2)$. Applying these observations repeatedly gives $\tag{$\star$} (c, d) \in f\Bigg(\sum_{u\in U} u + \sum_{v''\in V\setminus V'} (v'' - 1) + \sum_{w''\in W\setminus W'} (w'' -1)\Bigg) + \bigg(\sum_{v'\in V'} v', 0\bigg) + \bigg(0, \sum_{w'\in W'} w'\bigg).$ Denoting by $s$ the argument of $f$ in $(\star)$, this is equivalent to $\Big(c-\sum V', d-\sum W'\Big) \in f(s),$ and, if additionally the bonus of one more string of length $v''$ is used, $\Big(c-(\sum V' + v''), d-\sum W'\Big) \in f(s-(v''-1)),$
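The first reduction step (splitting $s$ between every pair of equal neighbours into maximal alternating substrings) can be sketched as follows (a minimal sketch with our own helper name); every tile $\texttt{AB}$ or $\texttt{BA}$ must then lie entirely inside one part:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Split s into maximal alternating substrings: inside each part no two
// neighbouring characters are equal, so each part can host disjoint tiles.
vector<string> split_alternating(const string& s) {
    vector<string> parts;
    string cur;
    for (char ch : s) {
        if (!cur.empty() && cur.back() == ch) {  // equal neighbours: cut here
            parts.push_back(cur);
            cur.clear();
        }
        cur += ch;
    }
    if (!cur.empty()) parts.push_back(cur);
    return parts;
}
```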
[ "greedy", "implementation", "sortings", "strings" ]
2,000
null
1685
C
Bring Balance
Alina has a bracket sequence $s$ of length $2n$, consisting of $n$ opening brackets '(' and $n$ closing brackets ')'. As she likes balance, she wants to turn this bracket sequence into a balanced bracket sequence. In one operation, she can reverse any substring of $s$. What's the smallest number of operations that she needs to turn $s$ into a balanced bracket sequence? It can be shown that it's always possible in at most $n$ operations. As a reminder, a sequence of brackets is called balanced if one can turn it into a valid math expression by adding characters + and 1. For example, sequences (())(), (), and (()(())) are balanced, while )(, ((), and (()))( are not.
Let's denote ( by $1$ and ) by $-1$. Then we need to achieve a sequence in which all prefix sums are nonnegative. Let our sequence be $a_1, a_2, \ldots, a_{2n}$, and denote $pref_i = a_1 + a_2 + \ldots + a_i$ for each $0 \le i \le 2n$. Key observation: It's always possible to get a balanced sequence in at most $2$ operations. Proof: Let $0 \le i \le 2n$ be the index for which the value of $pref_i$ is the largest (if there are several such $i$, choose any). Let's reverse segments $a[1:i]$ and $a[i+1:2n]$, getting the sequence $a_i, a_{i-1}, \ldots, a_1, a_{2n}, a_{2n-1}, \ldots, a_{i+1}$. It's easy to show that this sequence is balanced. $a_i + a_{i-1} + \ldots + a_j = pref_i - pref_{j-1} \ge 0$ for any $j \le i$, and $a_j + a_{j-1} + \ldots + a_{i+1} = pref_j - pref_i \le 0$ for any $j \ge i$, so $a_i + a_{i-1} + \ldots + a_1 + a_{2n} + \ldots + a_{j+1} = -(a_j + a_{j-1} + \ldots + a_{i+1}) \ge 0$ for any $j \ge i$. So, all prefix sums are nonnegative, as desired. It remains to check if we can make our sequence balanced in less than $2$ operations. Checking if $0$ operations are enough is trivial: just check if the initial sequence is balanced. Now, let's check if we can make the sequence (which initially isn't balanced) balanced in exactly one operation. Let $l$ be the smallest index for which $pref_l < 0$, and $r$ be the largest such index. Suppose that we will reverse the segment $a[L:R]$. Clearly, $L \le l$, as otherwise, we would have a negative prefix sum. Similarly $R > r$. After reversing, we need to worry only about the $i$-th prefix sum for each $i$ from $L$ to $R-1$, all others will be nonnegative. The $i$-th prefix sum for such $i$ will be equal to $pref_{L-1} + (pref_R - pref_{R + L - 1 - i})$. So, segment $[L, R]$ will be good iff $pref_{L-1} + pref_R \ge pref_i$ for all $L -1 \le i \le R$. 
It's easy to show that if any such segment $[L, R]$ works, then the segment $[L_1+1, R_1]$ also works, where $L_1$ is the index from $[0, l]$ for which $pref_{L_1}$ is the largest, and $R_1$ is the index from $[r+1, 2n]$ for which $pref_{R_1}$ is the largest. Indeed, suppose that there is some $L_1 \le i \le R_1$ such that $pref_{L_1} + pref_{R_1} < pref_i$. If $i \le l$, then $pref_i > pref_{L_1} + pref_{R_1} \ge pref_{L_1}$ (as $pref_{R_1} \ge pref_{2n} = 0$), contradicting the choice of $L_1$. If $i > r$, then similarly $pref_i > pref_{R_1}$, contradicting the choice of $R_1$. If $l < i \le r$, then $i$ is inside any such segment $[L, R]$, and $pref_i > pref_{L_1} + pref_{R_1} \ge pref_{L-1} + pref_R$ for any choice of $L, R$, so no segment works at all. So, it's enough to choose the segment $[L_1+1, R_1]$ and to check if the sequence becomes balanced after reversing it.
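The whole case analysis can be sketched as follows (the helper name is ours; $s$ is assumed to contain equally many brackets of each kind, as in the statement):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimal number of substring reversals (0, 1 or 2) that balance s.
int min_reversals(const string& s) {
    int n = s.size();
    vector<int> pref(n + 1, 0);
    for (int i = 0; i < n; ++i)
        pref[i + 1] = pref[i] + (s[i] == '(' ? 1 : -1);
    int l = -1, r = -1;                         // first / last negative prefix sum
    for (int i = 1; i <= n; ++i)
        if (pref[i] < 0) { if (l == -1) l = i; r = i; }
    if (l == -1) return 0;                      // already balanced
    // Best single-reversal candidate is [L1 + 1, R1], where pref[L1] is
    // maximal on [0, l] and pref[R1] is maximal on [r + 1, n].
    int L1 = 0, R1 = r + 1;
    for (int i = 0; i <= l; ++i) if (pref[i] > pref[L1]) L1 = i;
    for (int i = r + 1; i <= n; ++i) if (pref[i] > pref[R1]) R1 = i;
    for (int i = L1; i <= R1; ++i)
        if (pref[i] > pref[L1] + pref[R1]) return 2;  // two reversals always suffice
    return 1;
}
```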
[ "brute force", "constructive algorithms", "greedy" ]
2,600
null
1685
D1
Permutation Weight (Easy Version)
\textbf{This is an easy version of the problem. The difference between the easy and hard versions is that in this version, you can output any permutation with the smallest weight}. You are given a permutation $p_1, p_2, \ldots, p_n$ of integers from $1$ to $n$. Let's define the weight of the permutation $q_1, q_2, \ldots, q_n$ of integers from $1$ to $n$ as $$|q_1 - p_{q_{2}}| + |q_2 - p_{q_{3}}| + \ldots + |q_{n-1} - p_{q_{n}}| + |q_n - p_{q_{1}}|$$ You want your permutation to be as lightweight as possible. Find any permutation $q$ with the smallest possible weight.
Let's first understand what the minimum possible weight of $q$ is. When can it be $0$? Only when $q_i = p_{q_{i+1}}$ for each $i$. Clearly, such $q$ exists only when $p$ is just one cycle. This gives a hint that we should look at cycles. Consider the splitting of $p$ into cycles (where a cycle is an array $[a_1, a_2, \ldots, a_m]$ such that $p_{a_i} = a_{i \bmod m +1}$ for $1 \le i \le m$). Let's say there are $k$ such cycles. I claim that the answer is $2(k-1)$. You can see the proof in the tutorial of the hard version of this problem. Now, let's provide an example. We will construct a permutation $p'$ as follows: Initially, it's equal to $p$. Then, for each $x$ from $1$ to $n-1$, if $x$ and $x+1$ are currently in different cycles of $p'$, swap them (i.e., exchange the values $x$ and $x+1$ at the positions where they occur in $p'$). One such swap reduces the number of cycles by exactly $1$, so we will do exactly $k-1$ such swaps and $p'$ will consist of exactly one cycle. Next, construct $q$ by the rule $q_i = p'_{q_{i+1}}$ (it's possible as $p'$ is just one cycle). As $|q_i - p_{q_{i+1}}| = |p'_{q_{i+1}} - p_{q_{i+1}}|$, the weight of $q$ is just the sum of $|p_i - p'_i|$. Clearly, one swap increases this value by at most $2$, so in the end it will be at most $2(k-1)$, as desired.
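The construction above can be sketched as follows (1-indexed; helper names are ours): merge the cycles with value swaps, then unroll $q$ from the single remaining cycle via $q_{i+1} = (p')^{-1}(q_i)$.

```cpp
#include <bits/stdc++.h>
using namespace std;

// p is 1-indexed (p[0] unused); returns a 1-indexed q of weight exactly 2(k-1).
vector<int> build_q(vector<int> p) {
    int n = (int)p.size() - 1;
    vector<int> par(n + 1);
    iota(par.begin(), par.end(), 0);
    function<int(int)> find = [&](int v) {
        return par[v] == v ? v : par[v] = find(par[v]);
    };
    for (int i = 1; i <= n; ++i) par[find(i)] = find(p[i]);  // DSU components = cycles
    vector<int> pos(n + 1);                                  // pos[v]: index of value v in p'
    for (int i = 1; i <= n; ++i) pos[p[i]] = i;
    for (int x = 1; x < n; ++x)
        if (find(x) != find(x + 1)) {
            // Swapping the values x and x+1 merges their cycles, adding at most 2 to the weight.
            swap(p[pos[x]], p[pos[x + 1]]);
            swap(pos[x], pos[x + 1]);
            par[find(x)] = find(x + 1);
        }
    // p is now one cycle; unroll q by q_{i+1} = (p')^{-1}(q_i) = pos[q_i].
    vector<int> q(n + 1);
    q[1] = 1;
    for (int i = 1; i < n; ++i) q[i + 1] = pos[q[i]];
    return q;
}
```

For $p = (2, 1, 4, 3)$ (two cycles, $k = 2$) this yields $q = (1, 2, 4, 3)$ with weight $|1-p_2| + |2-p_4| + |4-p_3| + |3-p_1| = 0 + 1 + 0 + 1 = 2 = 2(k-1)$.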
[ "constructive algorithms", "dfs and similar", "dsu" ]
2,800
null
1685
D2
Permutation Weight (Hard Version)
\textbf{This is a hard version of the problem. The difference between the easy and hard versions is that in this version, you have to output the lexicographically smallest permutation with the smallest weight}. You are given a permutation $p_1, p_2, \ldots, p_n$ of integers from $1$ to $n$. Let's define the weight of the permutation $q_1, q_2, \ldots, q_n$ of integers from $1$ to $n$ as $$|q_1 - p_{q_{2}}| + |q_2 - p_{q_{3}}| + \ldots + |q_{n-1} - p_{q_{n}}| + |q_n - p_{q_{1}}|$$ You want your permutation to be as lightweight as possible. Among the permutations $q$ with the smallest possible weight, find the lexicographically smallest. Permutation $a_1, a_2, \ldots, a_n$ is lexicographically smaller than permutation $b_1, b_2, \ldots, b_n$, if there exists some $1 \le i \le n$ such that $a_j = b_j$ for all $1 \le j < i$ and $a_i<b_i$.
We will start by proving that the weight is at least $2(k-1)$ (where $k$ is the number of cycles), and understanding the structure of optimal permutations in the process. Again, consider splitting of $p$ into cycles (where a cycle is an array $[a_1, a_2, \ldots, a_m]$ such that $p_{a_i} = a_{i \bmod m +1}$ for $1 \le i \le m$). Let's say there are $k$ of such cycles. Now, consider a graph $G$ on $n$ nodes, and draw an edge from $q_i$ to $p_{q_{i+1}}$ for each $i$ from $1$ to $n$ (there may be self-loops and multi-edges here). Each node has one incoming and outgoing edge, so the entire graph is split into several cycles. Consider any such cycle $(b_1, b_2, \ldots, b_m)$, where all $b_i$ are distinct. Its contribution to the answer is $|b_1 - b_2| + |b_2 - b_3| + \ldots + |b_m - b_1|$. We will show that: This value is at least $2(m-1)$. Proof: Wlog $b_i$ is the smallest among $b$ and $b_j$ is the largest, with $i<j$. Then $|b_1 - b_2| + |b_2 - b_3| + \ldots + |b_m - b_1| = (|b_i - b_{i+1}| + \ldots + |b_{j-1} - b_j|) + (|b_j - b_{j+1}| + \ldots + |b_{i-1} - b_i|) \ge$ $\le 2|b_i - b_j| \ge 2(m-1)$. This value is at least $2(m-1)$. Proof: Wlog $b_i$ is the smallest among $b$ and $b_j$ is the largest, with $i<j$. Then $|b_1 - b_2| + |b_2 - b_3| + \ldots + |b_m - b_1| = (|b_i - b_{i+1}| + \ldots + |b_{j-1} - b_j|) + (|b_j - b_{j+1}| + \ldots + |b_{i-1} - b_i|) \ge$ $\le 2|b_i - b_j| \ge 2(m-1)$. It's $2(m-1)$ only when the numbers on the cycle are some $m$ consecutive numbers and are increasing on the path from the smallest number to the largest and decreasing on the way back. Proof: It's just the case when all the inequalities in the expression above become equalities. It's $2(m-1)$ only when the numbers on the cycle are some $m$ consecutive numbers and are increasing on the path from the smallest number to the largest and decreasing on the way back. Proof: It's just the case when all the inequalities in the expression above become equalities. 
Now, assign each cycle of $p$ a number from $1$ to $k$ and consider the graph $G_1$ on $k$ nodes: draw an edge between the nodes corresponding to the cycles which $q_i$ and $p_{q_{i+1}}$ belong to. As $q_i$ and $p_{q_i}$ are in the same cycle for each $i$, we get that every two consecutive edges we draw share a node. As we will draw an edge from every cycle, the graph is connected. Each edge in $G_1$ corresponds to an edge in $G$ (the edge between $q_i$ and $p_{q_{i+1}}$ in $G$ corresponds to the edge in $G_1$ between the nodes corresponding to the cycles which $q_i$ and $p_{q_{i+1}}$ belong to). Now, consider any spanning tree of graph $G_1$. Clearly, any cycle of length $m$ in $G$ can contribute at most $m-1$ distinct edges to it (if it contributed $m$ distinct edges, the spanning tree would contain a cycle). As all $k-1$ edges of the spanning tree are covered, the total sum of $(m-1)$ over all cycles in $G$ is at least $k-1$, and therefore the total contribution to the weight is at least $2(k-1)$. Now let's give a characterization of all permutations $q$ which have weight $2(k-1)$. It turns out that they are in correspondence with graphs $G$ on $n$ nodes which satisfy the following conditions:
- Each node has one incoming and one outgoing edge (and therefore the graph is split into cycles).
- If a cycle in $G$ has length $m$, then it consists of $m$ consecutive integers, where the numbers go up from the smallest number to the largest and down on the way back.
- The sum of $m-1$ over all cycles is precisely $k-1$.
- If we draw an edge between two cycles of $p$ whenever an element of the first cycle is connected to an element of the second cycle in $G$, this graph on $k$ nodes is connected.
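The lower bound (and the fact that it is attained) can be sanity-checked by brute force for tiny $n$: over all $q$, the minimum weight equals $2(k-1)$. A minimal sketch, not part of the editorial; the helper names are ours:

```python
from itertools import permutations

def weight(q, p):
    # weight of q w.r.t. p: sum of |q_i - p_{q_{i+1}}| taken cyclically
    # (values are 1-indexed, lists are 0-indexed)
    n = len(q)
    return sum(abs(q[i] - p[q[(i + 1) % n] - 1]) for i in range(n))

def num_cycles(p):
    # number of cycles of the permutation p
    seen, k = [False] * len(p), 0
    for i in range(len(p)):
        if not seen[i]:
            k += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = p[j] - 1
    return k

def min_weight(p):
    n = len(p)
    return min(weight(q, p) for q in permutations(range(1, n + 1)))

# every permutation of length <= 5 attains the bound 2(k-1)
for n in range(1, 6):
    for p in permutations(range(1, n + 1)):
        assert min_weight(p) == 2 * (num_cycles(p) - 1)
```

For instance, $p=(2,3,1)$ has one cycle and minimum weight $0$ (take $q=(1,3,2)$), while the identity on three elements has three cycles and minimum weight $4$.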
Each such graph is a corresponding graph of some optimal permutation $q$; the proof is left for the reader as an exercise. Now, how to solve our problem? Let's build $q$ element by element. The weights of all cyclic shifts of the same permutation are the same, so we start with $q_1 = 1$. Now, the only subproblem we have to be able to solve is to check whether the current prefix of $q$ of length $l$ is a prefix of some permutation with weight $2(k-1)$. So, we have to check if our current $l-1$ edges $q_i \to p_{q_{i+1}}$ can be a subset of some graph $G$ satisfying all the conditions above. Denote these edges $(u_1, v_1), \ldots, (u_{l-1}, v_{l-1})$. If an edge has $u_i<v_i$, consider the segment $[l_i, r_i] = [u_i, v_i]$ and call it a right segment. If an edge has $u_i>v_i$, consider the segment $[l_i, r_i] = [v_i, u_i]$ and call it a left segment. If $u_i = v_i$, call $u_i$ a loop node. We can show that the following criteria are sufficient:
- No two right segments intersect internally.
- No two left segments intersect internally.
- No right/left segment contains a loop node.
- Consider a graph on $k$ nodes, corresponding to the cycles of $p$. For each $1 \le i \le n-1$, if $[i, i+1]$ is contained in some segment, draw an edge between the cycles containing $i$ and $i+1$. Then this graph can't have a cycle (it has to be a forest).
- Consider a graph on $k$ nodes, corresponding to the cycles of $p$. For each $1 \le i \le n-1$, unless $i$ or $i+1$ is a loop node, or $i$ is the right end of two segments, or $i+1$ is the left end of two segments, draw an edge between the cycles containing $i$ and $i+1$. Then this graph must be connected.

These conditions may sound complicated, but they are very simple implications of the conditions on $G$ above. The proof that if these conditions are satisfied then the edges form a subset of some valid $G$ is also left to the reader as an exercise (the tutorial is already too long, sorry). Total complexity is $O(n^3)$, as we do up to $O(n^2)$ checks, and each check takes $O(n)$ time.
[ "constructive algorithms", "greedy" ]
3,500
null
1685
E
The Ultimate LIS Problem
It turns out that this is exactly the $100$-th problem of mine that appears in some programming competition. So it has to be special! And what can be more special than another problem about LIS... You are given a permutation $p_1, p_2, \ldots, p_{2n+1}$ of integers from $1$ to $2n+1$. You will have to process $q$ updates, where the $i$-th update consists in swapping $p_{u_i}, p_{v_i}$. After each update, find any cyclic shift of $p$ with $LIS \le n$, or determine that there is no such shift. (Refer to the output section for details). Here $LIS(a)$ denotes the length of longest strictly increasing subsequence of $a$. \textbf{Hacks are disabled in this problem. Don't ask why}.
Consider element $n+1$. Replace for a while $n+1$ by $0$, each element $\le n$ by $-1$, and each element $\ge n+2$ by $1$. It's well-known that for an array of $n$ ones and $n$ minus ones there exists a cyclic shift such that all its prefix sums are nonnegative (which is equivalent to the sequence of these $1$s and $-1$s being balanced). Consider such a cyclic shift of our array. It's easy to see that in this shift there is no increasing subsequence of length $n+1$ consisting of elements distinct from $n+1$. Indeed, such a subsequence would have $t$ of the $-1$s and then $n + 1 - t$ of the $1$s, which would mean that the $t$-th $1$ goes after the $t$-th $-1$, contradicting the nonnegativity of the prefix sums. So, if there is an increasing subsequence of length $n+1$ in this shift, $n+1$ is in it. Let's say there are $t$ elements $\le n$ before $n+1$ in this subsequence and $n - t$ elements $\ge n+2$ after $n+1$ in it. As the $t$-th $1$ goes before the $t$-th $-1$ in this shift, we get that there are exactly $t$ elements $\ge n+2$ in this shift before $n+1$, and (similarly) exactly $n-t$ elements $\le n$ in this shift after $n+1$. In addition, these two parts form (as ones and minus ones) two separate balanced sequences. Now, consider a shift in which $n+1$ is the first element. It's easy to see that if the condition above holds, then in this shift the ones and minus ones also form a balanced sequence. If the $LIS$ of this shift is $n+1$, it must be the case that $n+1$ and $n$ ones form it, which implies that the elements $n+1, n+2, \ldots, 2n+1$ go clockwise in this permutation. Similarly, after considering a shift in which $n+1$ is the last element, we get that the elements $1, 2, \ldots, n+1$ go clockwise in this permutation. It turns out that if all the conditions above hold, then all shifts have $LIS \ge n+1$. Indeed, consider any shift: the elements to the left of $n+1$ smaller than $n+1$, then $n+1$, then the elements to the right of $n+1$ larger than $n+1$ form an increasing subsequence.
From the fact that the $1$s and $-1$s are balanced, we get that numbers smaller than $n+1$ take at least half of the space before $n+1$, and numbers larger than $n+1$ take at least half of the space after $n+1$, so its length is at least $n+1$. With all this knowledge, how do we solve the problem? We will keep track of all the $1$s and $-1$s with a segment tree supporting queries "smallest prefix sum on a subsegment". When we need to provide the answer after an update, we do the following:
- Check if the circular segment from $n+1$ to $n+1$ is balanced. If not, find the shift that makes the $1$s and $-1$s balanced; its $LIS$ is at most $n$.
- Then, check if $n+1, n+2, \ldots, 2n+1$ go clockwise in the permutation. If not, then the shift in which $n+1$ is the first element has $LIS\le n$.
- Then, check if $1, 2, \ldots, n+1$ go clockwise in the permutation. If not, then the shift in which $n+1$ is the last element has $LIS\le n$.
- Otherwise, output $-1$.

We check if $1, 2, \ldots, n+1$ go clockwise by maintaining the sum $\sum_{i = 1}^{n} (pos(i\bmod (n+1) + 1) - pos(i))\bmod (2n+1)$. If it's $2n+1$, they go clockwise; otherwise not. We can update this sum in $O(1)$ per query. Total complexity is $O(\log n)$ per query.
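The balanced-shift step uses the classic fact that a zero-sum sequence of $\pm 1$ values (here with one $0$ standing for the element $n+1$) has a rotation whose prefix sums are all nonnegative: start right after the position of the minimum prefix sum. A minimal offline sketch of that trick (the function name is ours; the actual solution finds this position with the segment tree under updates):

```python
def balanced_start(c):
    # c sums to 0; return an index r such that the rotation
    # c[r:] + c[:r] has all prefix sums nonnegative.
    # Classic trick: start right after the minimum prefix sum,
    # since every later prefix sum exceeds that minimum.
    best, s, pos = 0, 0, 0
    for i, x in enumerate(c):
        s += x
        if s < best:
            best, pos = s, i + 1
    return pos % len(c)
```

For example, for `c = [-1, 1, -1, -1, 1, 1]` the minimum prefix sum is at index 3, so the rotation starting at index 4, `[1, 1, -1, 1, -1, -1]`, has all prefix sums nonnegative.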
[ "data structures", "greedy" ]
3,500
null
1686
A
Everything Everywhere All But One
You are given an array of $n$ integers $a_1, a_2, \ldots, a_n$. After you watched the amazing film "Everything Everywhere All At Once", you came up with the following operation. In one operation, you choose $n-1$ elements of the array and replace each of them with their arithmetic mean (which doesn't have to be an integer). For example, from the array $[1, 2, 3, 1]$ we can get the array $[2, 2, 2, 1]$, if we choose the first three elements, or we can get the array $[\frac{4}{3}, \frac{4}{3}, 3, \frac{4}{3}]$, if we choose all elements except the third. Is it possible to make all elements of the array equal by performing a finite number of such operations?
Suppose that we did one operation and not all numbers are equal. Let's say that we have $n-1$ numbers $x$ and $1$ number $y$ now, with $x \neq y$. In the next operation, we have two options: to make operation with $n-1$ numbers $x$, or with $n-2$ of $x$ and one $y$. If we go with the first option, we will still have $n-1$ of $x$ and one $y$: numbers won't change. If we go with the second option, we will have one number $x$ and $n-1$ numbers $\frac{(n-2)x + y}{n-1}$, where $\frac{(n-2)x + y}{n-1} = x + \frac{y-x}{n-1} \neq x$. So, we again are in a state where we have $n-1$ copies of one number and one different number. So, if after the first operation not all numbers are equal, they will never be all equal. Therefore, it's enough to check each possible operation in $O(n^2)$ (which can be clearly optimized to $O(n)$). And go watch the movie Everything Everywhere All At Once, it's fantastic!
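The resulting test is tiny: one operation excluding index $j$ makes everything equal exactly when the mean of the other $n-1$ elements equals $a_j$, i.e. $n \cdot a_j = \sum a$. A sketch of the $O(n)$ check (the function name is ours):

```python
def can_equalize(a):
    # One operation excluding a_j makes all elements equal iff
    # (S - a_j) / (n - 1) == a_j, i.e. n * a_j == S.  This also
    # covers the already-equal case (zero operations needed).
    s, n = sum(a), len(a)
    return any(n * x == s for x in a)
```

For the array $[1, 2, 3, 1]$ from the statement this returns `False`; for $[4, 2, 1, 1]$ it returns `True` (exclude the $2$ and average $[4, 1, 1]$).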
[ "greedy" ]
800
null
1686
B
Odd Subarrays
For an array $[b_1, b_2, \ldots, b_m]$ define its number of inversions as the number of pairs $(i, j)$ of integers such that $1 \le i < j \le m$ and $b_i>b_j$. Let's call array $b$ \textbf{odd} if its number of inversions is odd. For example, array $[4, 2, 7]$ is odd, as its number of inversions is $1$, while array $[2, 1, 4, 3]$ isn't, as its number of inversions is $2$. You are given a permutation $[p_1, p_2, \ldots, p_n]$ of integers from $1$ to $n$ (each of them appears exactly once in the permutation). You want to split it into several consecutive subarrays (maybe just one), so that the number of the \textbf{odd} subarrays among them is as large as possible. What largest number of these subarrays may be \textbf{odd}?
Consider any optimal splitting. Clearly, for any subarray $[b_1, b_2, \ldots, b_m]$ which is not odd, we can just split it into $[b_1], [b_2], \ldots, [b_m]$. For any odd subarray $[b_1, b_2, \ldots, b_m]$ with $m \ge 3$, there exists $1 \le i \le m-1$ such that $b_i > b_{i+1}$ (otherwise $b$ is sorted and has no inversions). Then, we can split $b$ into $[b_1], [b_2], \ldots, [b_{i-1}], [b_i, b_{i+1}], [b_{i+2}], \ldots, [b_m]$, where we still have one odd subarray. So, if we can split $p$ into several subarrays such that there are $k$ odd subarrays, we can split it into subarrays of length $\le 2$ with $k$ odd subarrays too. Then, let $dp_i$ denote the largest number of odd subarrays we can get from splitting $p[1:i]$. Then, $dp_i = \max(dp_{i-1}, dp_{i-2} + [p_{i-1} > p_i])$. This $dp$ can be calculated in $O(n)$. It's also easy to show that the following greedy algorithm works: traverse the permutation from left to right; whenever you see two elements $p_{i-1}>p_i$, make a subarray $[p_{i-1}, p_i]$ and proceed from $p_{i+1}$.
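The greedy from the last paragraph fits in a few lines; a sketch (the function name is ours):

```python
def max_odd_subarrays(p):
    # Scan left to right; whenever p[i-1] > p[i], cut out the
    # length-2 subarray [p[i-1], p[i]] (exactly one inversion,
    # hence odd) and continue after it.  Equivalent to the dp.
    ans, i = 0, 1
    while i < len(p):
        if p[i - 1] > p[i]:
            ans += 1
            i += 2
        else:
            i += 1
    return ans
```

For $[4, 3, 2, 1]$ it cuts $[4,3]$ and $[2,1]$ and returns $2$; for the sorted permutation it returns $0$.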
[ "dp", "greedy" ]
800
null
1687
A
The Enchanted Forest
\begin{quote} The enchanted forest got its name from the magical mushrooms growing here. They may cause illusions and generally should not be approached. \hfill —Perfect Memento in Strict Sense \end{quote} Marisa comes to pick mushrooms in the Enchanted Forest. The Enchanted forest can be represented by $n$ points on the $X$-axis numbered $1$ through $n$. Before Marisa started, her friend, Patchouli, used magic to detect the initial number of mushrooms on each point, represented by $a_1,a_2,\ldots,a_n$. Marisa can start out at \textbf{any} point in the forest on minute $0$. Each minute, the following events happen in order: - She moves from point $x$ to $y$ ($|x-y|\le 1$, possibly $y=x$). - She collects all mushrooms on point $y$. - A new mushroom appears on each point in the forest. Note that she \textbf{cannot} collect mushrooms on minute $0$. Now, Marisa wants to know the maximum number of mushrooms she can pick after $k$ minutes.
Consider $k\le n$ and $k>n$ separately. Consider maximizing the initial mushrooms and the additional mushrooms separately. Is there any common strategy? If $k\le n$:
- Consider how to maximize the initial mushrooms she collects. Obviously she will not visit any position more than once, and the answer is $\max\limits_{k\le i\le n}\sum\limits_{j=i-k+1}^ia_j$.
- Consider how to maximize the additional mushrooms she collects. Again she will not visit any position more than once; the point she collects on minute $j$ has grown $j-1$ additional mushrooms, so the answer is $\frac{k(k-1)}{2}$.

We can find that maximizing the two parts shares the same strategy, so add up the answers of the two parts. If $k>n$:
- Consider how to maximize the initial mushrooms she collects. Obviously she can collect all of them. The answer is $\sum\limits_{i=1}^n a_i$.
- Consider how to maximize the additional mushrooms she collects. Let $b_i$ be her position on minute $k-i$ ($0\le i< n$). After she collects the mushrooms on position $b_i$, a mushroom appears on each point, and she can not collect more than $i$ of them. In other words, she leaves at least $\sum\limits_{i=0}^{n-1}(n-i)=\frac{n(n+1)}{2}$ mushrooms in the forest. With $b_i=i+1$, she leaves exactly $\frac{n(n+1)}{2}$ mushrooms in the forest.

We can find that maximizing the two parts shares the same strategy, so add up the answers of the two parts. The time complexity is $O(n)$.
[ "brute force", "greedy" ]
1,600
T=int(input())
for t in range(T):
    n,m=map(int,input().split())
    a=[0]+list(map(int,input().split()))
    for i in range(1,n+1):
        a[i]+=a[i-1]
    if m>n:
        print(a[n]+(m-1+m-n)*n//2)
    else:
        ans=0
        for i in range(n+1):
            if i>=m:
                ans=max(ans,a[i]-a[i-m])
        print(ans+(1+m-1)*(m-1)//2)
1687
B
Railway System
\begin{quote} As for the technology in the outside world, it is really too advanced for Gensokyo to even look up to. \hfill —Yasaka Kanako, Symposium of Post-mysticism \end{quote} This is an interactive problem. Under the direct supervision of Kanako and the Moriya Shrine, the railway system of Gensokyo is finally finished. GSKR (Gensokyo Railways) consists of $n$ stations with $m$ bidirectional tracks connecting them. The $i$-th track has length $l_i$ ($1\le l_i\le 10^6$). Due to budget limits, the railway system \textbf{may not be connected}, though there may be more than one track between two stations. The value of a railway system is defined as the total length of its all tracks. The maximum (or minimum) capacity of a railway system is defined as the maximum (or minimum) value among all of the currently functional system's full spanning forests. In brief, a full spanning forest of a graph is a spanning forest with the same connectivity as the given graph. Kanako has a simulator only able to process no more than $2m$ queries. The input of the simulator is a string $s$ of length $m$, consisting of characters 0 and/or 1. The simulator will assume the $i$-th track functional if $s_i=$ 1. The device will then tell Kanako the maximum capacity of the system in the simulated state. Kanako wants to know the minimum capacity of the system with all tracks functional with the help of the simulator. The structure of the railway system is fixed in advance. In other words, the interactor is not adaptive.
$2m=m+m$. What can we do with the first $m$ queries? We can learn the length of each edge with $m$ queries. Kruskal. We can get the length of each edge using $m$ queries by asking the maximum capacity with only that edge functional. Then, sort the edges in non-decreasing order of length $l$, and ask the maximum capacity of every prefix of this order, denoted $s$, using the remaining $m$ queries. Consider the process of Kruskal's algorithm. The $i$-th edge $(u_i,v_i)$ is in the minimum full spanning forest exactly when there is no path between $u_i$ and $v_i$ in the graph consisting of the previous edges, which is equivalent to $s_i=s_{i-1}+l_i$. Then we know whether each edge is in the minimum full spanning forest, and summing the lengths of those edges gives the answer. The time complexity is $O(m^2)$.
[ "constructive algorithms", "graphs", "greedy", "interactive", "sortings" ]
1,700
n,m=map(int,input().split())
a=[]
for i in range(m):
    print('?','0'*i+'1'+'0'*(m-i-1),flush=1)
    a.append(int(input()))
cur=0
s=['0' for i in range(m)]
for i in range(m):
    x=0
    for j in range(m):
        if a[x]>a[j]:
            x=j
    s[x]='1'
    print('? ',*s,sep='',flush=1)
    c=int(input())
    if cur+a[x]==c:
        cur+=a[x]
    else:
        s[x]='0'
    a[x]=2000000  # mark this edge as processed either way
print('!',cur,flush=1)
1687
C
Sanae and Giant Robot
\begin{quote} Is it really?! The robot only existing in my imagination?! The Colossal Walking Robot?!! \hfill — Kochiya Sanae \end{quote} Sanae made a giant robot — Hisoutensoku, but something is wrong with it. To make matters worse, Sanae can not figure out how to stop it, and she is forced to fix it on-the-fly. The state of a robot can be represented by an array of integers of length $n$. Initially, the robot is at state $a$. She wishes to turn it into state $b$. As a great programmer, Sanae knows the art of copy-and-paste. In one operation, she can choose some segment from given segments, copy the segment from $b$ and paste it into \textbf{the same place} of the robot, replacing the original state there. However, she has to ensure that the sum of $a$ \textbf{does not change} after each copy operation in case the robot goes haywire. Formally, Sanae can choose segment $[l,r]$ and assign $a_i = b_i$ ($l\le i\le r$) if $\sum\limits_{i=1}^n a_i$ does not change after the operation. Determine whether it is possible for Sanae to successfully turn the robot from the initial state $a$ to the desired state $b$ with some (possibly zero) number of operations.
Let $b_i=0$ for convenience (i.e. subtract $b_i$ from $a_i$). The interval selected must satisfy $\sum\limits_{i=l}^r a_i=0$. What does a range sum remind you of? Let $s_i=\sum\limits_{k=1}^i (a_k-b_k)$. The task can be described as: given an array $s$, for some given interval $[l,r]$, if $s_{l-1}=s_r$, we can assign $s_r$ to $s_i$ ($l\le i< r$). The goal is to make $s_i=0$ ($0\le i\le n$). Obviously assigning a non-zero value to $s$ is useless, while assigning $0$ to $s$ does no harm. Therefore, we can repeatedly choose any interval $[l,r]$ satisfying $s_{l-1}=s_r=0$ and assign $0$ to all non-zero $s_i$ ($l\le i< r$) until there is no such interval. We can use a set in C++, a disjoint set union, or a segment tree to find such $i$. As each element can be assigned $0$ at most once, the time complexity is $O((n+m)\log n)$. This problem was Div. 2 D at first.
[ "binary search", "brute force", "data structures", "dsu", "greedy", "sortings" ]
2,500
#include <bits/stdc++.h>
using namespace std;
#define all(x) (x).begin(),(x).end()
typedef long long ll;
int main()
{
    ios::sync_with_stdio(0);cin.tie(0);
    int T;cin>>T;
    while (T--)
    {
        int n,m,i;
        cin>>n>>m;
        vector<ll> a(n+1);
        vector<int> deg(m,2),b(n+1),id(n+1);
        vector<pair<int,int>> p(m);
        vector<vector<int>> e(n+1);
        iota(all(id),0);
        set<int> s(all(id));
        for (i=1;i<=n;i++) cin>>a[i];
        for (i=1;i<=n;i++) cin>>b[i];
        for (i=0;i<m;i++)
        {
            auto &[l,r]=p[i];
            cin>>l>>r;
            e[l-1].push_back(i);
            e[r].push_back(i);
        }
        for (i=1;i<=n;i++) a[i]-=b[i];
        for (i=1;i<=n;i++) a[i]+=a[i-1];
        queue<int> q;
        for (i=0;i<=n;i++) if (!a[i]) q.push(i),s.erase(i);
        while (q.size())
        {
            int x=q.front();q.pop();
            for (int y:e[x]) if (!--deg[y])
            {
                auto [l,r]=p[y];
                auto lt=s.lower_bound(l),rt=s.upper_bound(r);
                for (auto it=lt;it!=rt;++it) q.push(*it);
                s.erase(lt,rt);
            }
        }
        cout<<(s.size()?"NO\n":"YES\n");
    }
}
1687
D
Cute number
\begin{quote} Ran is especially skilled in computation and mathematics. It is said that she can do unimaginable calculation work in an instant. \hfill —Perfect Memento in Strict Sense \end{quote} Ran Yakumo is a cute girl who loves creating cute Maths problems. Let $f(x)$ be the minimal square number \textbf{strictly} greater than $x$, and $g(x)$ be the maximal square number less than or equal to $x$. For example, $f(1)=f(2)=g(4)=g(8)=4$. A positive integer $x$ is cute if $x-g(x)<f(x)-x$. For example, $1,5,11$ are cute integers, while $3,8,15$ are not. Ran gives you an array $a$ of length $n$. She wants you to find the smallest non-negative integer $k$ such that $a_i + k$ is a cute number for any element of $a$.
What is the range of the answer? How to solve it in $O(na_n)$? For any integer $x$, $x$ is cute iff we can find $w$ satisfying $x\in[w^2,w^2+w]$, since then $x-w^2\le w<(w+1)^2-x$. Define $f(x)=w$. It is easy to find that $k\leq a_n^2$, and there are only $a_n$ useful values of $w$ because $w\le a_n$. Enumerate $f(a_1+k)$ ($f(a_1+k)\le a_n$), and calculate the feasible range of $a_i+k$ in order for $1\le i\le n$; it can be shown that this range is always an interval. So we can solve this problem in $O(n a_n)$. We call $i$ a jump if $f(a_{i}+k)\ne f(a_{i-1}+k)$. Assuming $f(a_1+k)=w$, there are no more than $\frac{a_n}{w}$ jumps. We only need to enumerate the jumps to calculate the ranges. We can use a linked list or a set in C++ to maintain it. The time complexity is $O\left(\sum\limits_{w=1}^{a_n} \frac {a_n}{w}\right)=O(a_n\log a_n)$.
[ "binary search", "brute force", "data structures", "dsu", "implementation", "math" ]
2,900
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
#define all(x) (x).begin(),(x).end()
const int N=2e6+2;
vector<int> e[N];
struct Q
{
    int id;
    mutable int len,t;
    bool operator<(const Q &o) const {return id<o.id;}
};
int main()
{
    ios::sync_with_stdio(0);cin.tie(0);
    cout<<setiosflags(ios::fixed)<<setprecision(15);
    int n,i,j;
    cin>>n;
    vector<int> a(n);
    for (int &x:a) cin>>x;
    a.resize(unique(all(a))-a.begin());
    n=a.size();
    set<Q> s;
    for (i=1;i<n;i++) s.insert({i,a[i]-a[i-1],0}),e[a[i]-a[i-1]].push_back(i);
    for (i=1;;i++)
    {
        for (int x:e[i])
        {
            auto it=s.find({x,i,0});
            assert(it!=s.end());
            auto L=it==s.begin()?s.end():prev(it),R=next(it);
            if (L!=s.end()&&L->t&&R!=s.end()&&R->t)
            {
                L->len+=i+R->len;
                s.erase(it);
                s.erase(R);
            }
            else if (L!=s.end()&&L->t)
            {
                L->len+=i;
                s.erase(it);
            }
            else if (R!=s.end()&&R->t)
            {
                R->len+=i;
                s.erase(it);
            }
            else it->t=1;
        }
        if (a[0]<=(ll)i*(i+1)) //[i*i,i*(i+1)]
        {
            ll L=max((ll)a[0],(ll)i*i),R=(ll)i*(i+1);
            int step=i;
            for (auto [id,D,t]:s)
            {
                L+=D;
                if (!t)
                {
                    step=ceil((sqrt(1+4*L)-1)/2);
                    //if (L>(ll)step*(step+1)) ++step;
                    L=max(L,(ll)step*step);
                }
                R=min(R+D,(ll)step*(step+1));
                if (L>R) break;
            }
            if (L<=R)
            {
                cout<<L-a.back()<<endl;
                return 0;
            }
        }
    }
}
1687
E
Become Big For Me
\begin{quote} Come, let's build a world where even the weak are not forgotten! \hfill —Kijin Seija, Double Dealing Characters \end{quote} Shinmyoumaru has a mallet that can turn objects bigger or smaller. She is testing it out on a sequence $a$ and a number $v$ whose initial value is $1$. She wants to make $v = \gcd\limits_{i\ne j}\{a_i\cdot a_j\}$ by \textbf{no more than} $10^5$ operations ($\gcd\limits_{i\ne j}\{a_i\cdot a_j\}$ denotes the $\gcd$ of all products of two distinct elements of the sequence $a$). In each operation, she picks a subsequence $b$ of $a$, and does one of the following: - \textbf{Enlarge}: $v = v \cdot \mathrm{lcm}(b)$ - \textbf{Reduce}: $v = \frac{v}{\mathrm{lcm}(b)}$ Note that she does \textbf{not} need to guarantee that $v$ is an integer, that is, $v$ does \textbf{not} need to be a multiple of $\mathrm{lcm}(b)$ when performing Reduce. Moreover, she wants to guarantee that the total length of $b$ chosen over the operations does not exceed $10^6$. Find a possible operation sequence for her. \textbf{You don't need to minimize anything}.
Consider the Inclusion-Exclusion Principle. Let $f_p(x)$ be the maximum integer satisfying $p^{f_p(x)}\mid x$. For each prime $p$, WLOG assuming $f_p(a_i) \le f_p(a_{i+1})$ ($1\le i<n$), we have $f_p(\gcd\limits_{i\ne j}\{a_ia_j\})=f_p(a_1)+f_p(a_2)$. Consider the Inclusion-Exclusion Principle: $k\text{-th}\min S=\sum\limits_{\varnothing\ne T\subseteq S}(-1)^{|T|-k}\tbinom{|T|-1}{k-1}\max T$. So $f_p(\gcd\{a_ia_j\})=\sum\limits_{\varnothing\ne T\subseteq \{f_p(a)\}}((-1)^{|T|-1}+(-1)^{|T|}(|T|-1))\max T=\sum\limits_{\varnothing\ne T\subseteq \{f_p(a)\}}(-1)^{|T|}(|T|-2)\max T$. Then $\gcd\{a_ia_j\}=\prod\limits_{\varnothing\ne T\subseteq \{a\}}\operatorname{lcm}(T)^{(-1)^{|T|}(|T|-2)}$. We can solve the task by choosing a short subsequence $c$ satisfying $\gcd\{a_ia_j\}=\gcd\{c_ic_j\}$ and enumerating its subsets. To fit in the constraint, the length of $c$ should be no longer than $14$. Think of an easier task: choosing a small subset $g(a)$ satisfying $\gcd\{a\}=\gcd g(a)$. If we can solve it, we can construct $c$ by choosing $g(a)\cup g(a\setminus g(a))$ if $|g(a)|$ does not exceed $7$. First, choose an arbitrary element $x$ in $a$ as the only element of $S$, and factorize $x$ into $\prod\limits_{i=1}^{\omega(x)} p_i^{k_i}$ ($p_i< p_{i+1}$). For each $i$, if $f_{p_i}(S)\ne\min_j f_{p_i}(a_j)$, then add an arbitrary element $y_i$ in $a$ satisfying $f_{p_i}(y_i)=\min_j f_{p_i}(a_j)$ to $S$. Now obviously $\gcd S=\gcd\{a\}$, but $|S|\le\omega(x)+1\le 8$. We can prove that $|S|=8$ and $\gcd(S-\{x\})\ne \gcd\{a\}$ do not hold at the same time, so we can solve the task by choosing $g(a)=\begin{cases}S&(|S|<8)\\S-\{x\}&(|S|=8)\end{cases}$. Consider the necessary condition of $|S|=8\land\gcd(S-\{x\})\ne \gcd\{a\}$: $\exists d\in\text{Prime}, f_{d}(x)<\min\limits_{y\in S-\{x\}}f_{d}(y)$. According to how we choose $y_i$, $d\ne p_i$, and $d\prod\limits_{i=2} ^{7}p_i\mid y_1$, so $d\prod\limits_{i=1} ^{7}p_i\mid y_1p_1$.
Since $2\times3\times5\times7\times11\times13\times17\times19=9699690$ and $y_1\le 10^6$, we get $p_1\ge 11$. But then $x\ge 11\times 13\times 17\times 19\times 23\times 29\times 31>10^6$, a contradiction. So $|S|=8\land\gcd(S-\{x\})\ne \gcd\{a\}$ does not hold. The time complexity is $O(n\log \max a_i+2^{\max\omega(a_i)}\max\omega(a_i)+n\max\omega(a_i))$. Worth mentioning: with this conclusion (that such a small set exists), we can solve it much more easily. Just choose a small set (of size at most $14$) greedily, and enumerate its subsets. There were more than $500$ tests at first.
[ "combinatorics", "constructive algorithms", "greedy", "math", "number theory" ]
3,500
#pragma GCC target("popcnt")
#include <bits/stdc++.h>
using namespace std;
typedef unsigned int ui;
typedef long long ll;
#define all(x) (x).begin(),(x).end()
namespace Prime
{
    typedef unsigned int ui;
    typedef unsigned long long ll;
    const int N=1e6+2;
    ui pr[N],mn[N],phi[N],cnt;
    int mu[N];
    void init_prime()
    {
        ui i,j,k;
        phi[1]=mu[1]=1;
        for (i=2;i<N;i++)
        {
            if (!mn[i])
            {
                pr[cnt++]=i;
                phi[i]=i-1;mu[i]=-1;
                mn[i]=i;
            }
            for (j=0;(k=i*pr[j])<N;j++)
            {
                mn[k]=pr[j];
                if (i%pr[j]==0)
                {
                    phi[k]=phi[i]*pr[j];
                    break;
                }
                phi[k]=phi[i]*(pr[j]-1);
                mu[k]=-mu[i];
            }
        }
        //for (i=2;i<N;i++) if (mu[i]<0) mu[i]+=p;
    }
    vector<pair<ui,ui>> getw(ll x)
    {
        ui i;
        assert((ll)(N-1)*(N-1)>=x);
        vector<pair<ui,ui>> r;
        for (i=0;i<cnt&&pr[i]*pr[i]<=x&&x>=N;i++) if (x%pr[i]==0)
        {
            ui y=pr[i],z=1,tmp;
            x/=y;
            while (x==(tmp=x/y)*y) x=tmp,++z;
            r.push_back({y,z});
        }
        if (x>=N)
        {
            r.push_back({x,1});
            return r;
        }
        while (x>1)
        {
            ui y=mn[x],z=1,tmp;
            x/=y;
            while (x==(tmp=x/y)*y) x=tmp,++z;
            r.push_back({y,z});
        }
        return r;
    }
}
using Prime::pr,Prime::phi,Prime::getw;
using Prime::mu,Prime::init_prime;
const int N=1e6+5;
int a[N];
bool ed[N];
int main()
{
    ios::sync_with_stdio(0);cin.tie(0);
    cout<<setiosflags(ios::fixed)<<setprecision(15);
    int n,i,j;
    init_prime();
    cin>>n;
    vector<vector<pair<ui,ui>>> b(n+1);
    for (i=1;i<=n;i++)
    {
        cin>>a[i];
        b[i]=getw(a[i]);
    }
    if (n==3&&a[1]==6&&a[2]==10&&a[3]==15)
    {
        cout<<"1\n0 3 1 2 3\n";
        return 0;
    }
    if (n==4&&a[1]==2&&a[2]==4&&a[3]==8&&a[4]==16)
    {
        cout<<"2\n0 1 4\n1 1 1\n";
        return 0;
    }
    vector<int> s;
    auto getmin=[&]()
    {
        int i,j,m=0;
        for (i=1;i<=n;i++) if (!ed[i]) break;
        if (i>n) return;
        int x=i;
        vector<int> nm(N,1'000'000'000),id(N);
        vector<vector<int>> occ(N);
        vector<int> flg(n+1);
        set<int> S;
        for (i=1;i<=n;i++) if (!ed[i])
        {
            for (auto [p,t]:b[i])
            {
                occ[p].push_back(i);
                if (nm[p]>t)
                {
                    nm[p]=t;
                    id[p]=i;
                }
            }
            ++m;
            S.insert(i);
        }
        for (i=2;i<N;i++) if (id[i]&&occ[i].size()!=m)
        {
            for (int x:occ[i]) S.erase(x);
            nm[i]=0;id[i]=*S.begin();
            for (int x:occ[i]) S.insert(x);
        }
        vector<int> r;
        for (auto [p,t]:b[x]) if (t!=nm[p]) r.push_back(id[p]);
        vector<ui> mn(N,1'000'000'000),cnt(N),toc(N);
        for (auto [p,t]:b[x]) toc[p]=t;
        for (int x:r) for (auto [p,t]:b[x]) mn[p]=min(mn[p],t),++cnt[p];
        for (i=2;i<N;i++) if (cnt[i]==r.size()&&mn[i]>toc[i]) break;
        if (i<N) r.push_back(x);
        for (int x:r) ed[x]=1,s.push_back(x);
    };
    getmin();getmin();
    sort(all(s));s.resize(unique(all(s))-s.begin());
    n=s.size();assert(n<=14);
    ll D=0;
    for (i=0;i<n;i++) for (j=i+1;j<n;j++) D=gcd(D,(ll)a[s[i]]*a[s[j]]);
    if (D==1) {cout<<"0\n";return 0;}
    vector<pair<int,vector<int>>> ans;
    for (i=1;i<1<<n;i++)
    {
        vector<int> v;
        for (j=0;j<n;j++) if (i>>j&1) v.push_back(s[j]);
        int pc=__builtin_popcount(i);
        pc=pc&1?2-pc:pc-2;
        for (j=1;j<=pc;j++) ans.push_back({0,v});
        pc=-pc;
        for (j=1;j<=pc;j++) ans.push_back({1,v});
    }
    int totsize=0;
    cout<<ans.size()<<'\n';
    for (auto &[x,v]:ans)
    {
        cout<<x<<' '<<v.size();
        for (int x:v) cout<<' '<<x;
        cout<<'\n';
        assert((totsize+=v.size())<=1'000'000);
    }
    //cout<<totsize<<endl;
}
1687
F
Koishi's Unconscious Permutation
\begin{quote} As she closed the Satori's eye that could read minds, Koishi gained the ability to live in unconsciousness. Even she herself does not know what she is up to. \hfill — Subterranean Animism \end{quote} Koishi is unconsciously permuting $n$ numbers: $1, 2, \ldots, n$. She thinks the permutation $p$ is \textbf{beautiful} if $s=\sum\limits_{i=1}^{n-1} [p_i+1=p_{i+1}]$. $[x]$ equals to $1$ if $x$ holds, or $0$ otherwise. For each $k\in[0,n-1]$, she wants to know the number of beautiful permutations of length $n$ satisfying $k=\sum\limits_{i=1}^{n-1}[p_i<p_{i+1}]$.
How does Elegia's mind work? We call a permutation $p$ of length $n-s$ good if $\forall i\in[1,n-s-1],p_i+1\not=p_{i+1}$. If we can calculate $ans_k=\sum\limits_{p\text{ is good}}[\sum\limits_{i=1}^{n-s-1}[p_i<p_{i+1}]=k]$, then we can get the answer easily by binomial inversion. So we only need to focus on how to calculate $ans_k$. For convenience, let $n\rightarrow n-s$. We have: $ans_k = \sum\limits_{i=0}^{k} \binom{n-1}{i} (-1)^i \left\langle\begin{matrix} n-i\\n-k-1 \end{matrix}\right\rangle$ where $\left\langle\begin{matrix} n\\m \end{matrix}\right\rangle$ is the Eulerian number. As is well known, the generating function of the Eulerian numbers is: $\left\langle\begin{matrix} n\\m \end{matrix}\right\rangle = [x^my^n]\dfrac{n!(x-1)}{x-e^{(x-1)y}}$ So we have: $ans_k = \sum\limits_{j=0}^{k} \binom{n-1}{j} (-1)^j \left\langle\begin{matrix} n-j\\n-k-1 \end{matrix}\right\rangle\\ = [x^{n-k-1}] \sum\limits_{j=0}^{k} \binom{n-1}{j} (-1)^j [y^{n-j}]\dfrac{(n-j)!(x-1)}{x-e^{(x-1)y}}\\ = [x^{n-k-1}] \sum\limits_{j=0}^{k} \dfrac{(n-1)!}{j!} (-1)^j (n-j)[y^{n-j}]\dfrac{x-1}{x-e^{(x-1)y}}\\ = [x^{n-k-1}] \sum\limits_{j=0}^{k} \dfrac{(n-1)!}{j!} (-1)^j [y^{n-j-1}]\dfrac{\text{d}}{\text{d}y}\dfrac{x-1}{x-e^{(x-1)y}}\\ =(n-1)![x^{n-k-1}] \sum\limits_{j=0}^{k} \dfrac{(-1)^j}{j!} [y^{n-j-1}] \dfrac{(x-1)^2e^{(x-1)y}}{(x-e^{(x-1)y})^2}\\ =(n-1)![x^{n-k-1}] \sum\limits_{j=0}^{k} [y^j]e^{-y} [y^{n-j-1}] \dfrac{(x-1)^2e^{(x-1)y}}{(x-e^{(x-1)y})^2}\\ =(n-1)![x^{n-k-1}] [y^{n-1}] \dfrac{(x-1)^2e^{(x-2)y}}{(x-e^{(x-1)y})^2}\\ =(n-1)![x^{n-k-1}] [y^{n-1}] \dfrac{(x-1)^2e^{xy}}{(xe^y-e^{xy})^2}\\ =(n-1)![x^{n-k-1}] [y^{n-1}] \dfrac{(x-1)^2e^{-xy}}{(xe^{(1-x)y}-1)^2}\\$ Consider how to calculate $[y^{n-1}] \dfrac{(x-1)^2e^{-xy}}{(xe^{(1-x)y}-1)^2}$.
Let $u=(1-x)y$; we have: $[y^{n-1}] \dfrac{(x-1)^2e^{-xy}}{(xe^{(1-x)y}-1)^2} = (1-x)^{n+1} [u^{n-1}] \dfrac{e^{\frac{-xu}{1-x}}}{(xe^u-1)^2}\\ = (1-x)^{n+1} [u^{n-1}] \dfrac{e^{\frac{-xu}{1-x}}}{(1-xe^u)^2}$ And: $[u^{n-1}] \dfrac{e^{\frac{-xu}{1-x}}}{(1-xe^u)^2}\\= [u^{n-1}] \sum\limits_{i=0} (i+1)x^ie^{(i-\frac{x}{1-x})u}\\= [u^{n-1}]\sum\limits_{i=0}(i+\frac{1}{1-x}+\frac{-x}{1-x})x^ie^{(i-\frac{x}{1-x})u}\\= [u^{n-1}] \sum\limits_{i=0}(i-\frac{x}{1-x})x^ie^{(i-\frac{x}{1-x})u}+ [u^{n-1}] \frac{1}{1-x}\sum\limits_{i=0}x^ie^{(i-\frac{x}{1-x})u}\\= [u^{n}] \sum\limits_{i=0}nx^ie^{(i-\frac{x}{1-x})u}+ [u^{n-1}] \frac{1}{1-x}\sum\limits_{i=0}x^ie^{(i-\frac{x}{1-x})u}\\= [u^{n}] \dfrac{ne^{-\frac{xu}{1-x}}}{1-xe^u}+ [u^{n-1}] \frac{1}{1-x} \dfrac{e^{-\frac{xu}{1-x}}}{1-xe^u}$ So we just need to focus on how to calculate $[u^{n}]\dfrac{e^{-\frac{xu}{1-x}}}{1-xe^u}$. Let $w=e^u-1$; we have: $[u^{n}] \dfrac{(w+1)^{-\frac{x}{1-x}}}{1-x(w+1)}\\ = \sum\limits_{m=0}^n [u^n](e^u-1)^m[w^{m}] \dfrac{(w+1)^{-\frac{x}{1-x}}}{1-x(w+1)}\\ = \sum\limits_{m=0}^n [u^n](e^u-1)^m[w^{m}] \dfrac{\sum\limits_{i=0}w^i\binom{-\frac{x}{1-x}}{i}}{1-x(w+1)}\\ = \sum\limits_{m=0}^n [u^n](e^u-1)^m[w^{m}] \dfrac{\sum\limits_{i=0}w^i\binom{-\frac{x}{1-x}}{i}}{1-xw-x}\\ = \dfrac{1}{1-x} \sum\limits_{m=0}^n [u^n](e^u-1)^m[w^{m}] \dfrac{\sum\limits_{i=0}w^i\binom{-\frac{x}{1-x}}{i}}{1-\frac{x}{1-x}w}\\ = \dfrac{1}{1-x} \sum\limits_{m=0}^n [u^n](e^u-1)^m \sum\limits_{i=0}^m \binom{-\frac{x}{1-x}}{i} (\dfrac{x}{1-x})^{m-i}$ Let $s=\dfrac{x}{1-x}$ and try to calculate $\dfrac{1}{1-x} \sum\limits_{m=0}^n [u^n](e^u-1)^m \sum\limits_{i=0}^m \binom{-s}{i} s^{m-i}$. Note that $[u^n](e^u-1)^m=\dfrac{m!}{n!}\left\{\begin{matrix}n\\m\end{matrix}\right\}$, where $\left\{\begin{matrix}n\\m\end{matrix}\right\}$ is the Stirling number of the second kind; for a fixed $n$ we can calculate all of these in $O(n\log n)$ or $O(n \log^2 n)$. Build a $2 \times 2$ matrix to get $\sum\limits_{i=0}^m \binom{-s}{i} s^{m-i}$. 
Let $M_m=\left[ \begin{matrix} \frac{-s-m}{m+1} & \frac{-s-m}{m+1}\\ 0 & s \end{matrix} \right]$ and we have $\left[ \begin{matrix} \binom{-s}{m} & \sum\limits_{i=0}^m\binom{-s}{i}s^{m-i} \end{matrix} \right] \times \left[ \begin{matrix} \frac{-s-m}{m+1} & \frac{-s-m}{m+1}\\ 0 & s \end{matrix} \right] = \left[ \begin{matrix} \binom{-s}{m+1} & \sum\limits_{i=0}^{m+1}\binom{-s}{i}s^{m+1-i} \end{matrix} \right]$ So we can use divide and conquer to calculate it in $O(n \log^2 n)$.
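The recurrence encoded by $M_m$ can be sanity-checked with exact rational arithmetic for a concrete value of $s$. The helper names below are hypothetical; this is a verification sketch, not part of the solution:

```python
from fractions import Fraction

def binom_neg(s, m):
    # generalized binomial coefficient C(-s, m) = prod_{t<m} (-s - t)/(t + 1)
    r = Fraction(1)
    for t in range(m):
        r = r * (-s - t) / (t + 1)
    return r

def vec(s, m):
    # the row vector [ C(-s, m), sum_{i<=m} C(-s, i) * s^(m-i) ]
    return (binom_neg(s, m),
            sum(binom_neg(s, i) * s ** (m - i) for i in range(m + 1)))

s = Fraction(2, 3)  # an arbitrary test value for s = x/(1-x)
for m in range(6):
    b, t = vec(s, m)
    f = (-s - m) / (m + 1)          # the (-s-m)/(m+1) entries of M_m
    # multiplying [b, t] by M_m gives [b*f, b*f + t*s]
    assert (b * f, b * f + t * s) == vec(s, m + 1)
```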
[ "fft", "math" ]
3,500
#include <cstdio> #include <cstring> #include <algorithm> #include <iostream> #include <chrono> #include <random> #include <functional> #include <vector> #define LOG(FMT...) fprintf(stderr, FMT) using namespace std; typedef long long ll; typedef unsigned long long ull; const int P = 998244353, R = 3; const int BRUTE_N2_LIMIT = 50; int mpow(int x, int k, int p = P) { int ret = 1; while (k) { if (k & 1) ret = ret * (ll) x % p; x = x * (ll) x % p; k >>= 1; } return ret; } int norm(int x) { return x >= P ? x - P : x; } int reduce(int x) { return x < 0 ? x + P : x; } void add(int& x, int y) { if ((x += y) >= P) x -= P; } void sub(int& x, int y) { if ((x -= y) < 0) x += P; } struct Simple { int n; vector<int> fac, ifac, inv; void build(int n) { this->n = n; fac.resize(n + 1); ifac.resize(n + 1); inv.resize(n + 1); fac[0] = 1; for (int x = 1; x <= n; ++x) fac[x] = fac[x - 1] * (ll) x % P; inv[1] = 1; for (int x = 2; x <= n; ++x) inv[x] = -(P / x) * (ll) inv[P % x] % P + P; ifac[0] = 1; for (int x = 1; x <= n; ++x) ifac[x] = ifac[x - 1] * (ll) inv[x] % P; } Simple() { build(1); } void check(int k) { int nn = n; if (k > nn) { while (k > nn) nn <<= 1; build(nn); } } int gfac(int k) { check(k); return fac[k]; } int gifac(int k) { check(k); return ifac[k]; } int ginv(int k) { check(k); return inv[k]; } int binom(int n, int m) { if (m < 0 || m > n) return 0; return gfac(n) * (ll) gifac(m) % P * gifac(n - m) % P; } } simp; const int L2 = 11; struct NTT { int L; vector<int> root; NTT() : L(-1) {} void prepRoot(int l) { L = l; root.resize((1 << L) + 1); int i, n = 1 << L; int *w2 = root.data(); *w2 = 1; w2[1 << l] = mpow(31, 1 << (21 - l)); for (i = l; i; --i) w2[1 << (i - 1)] = (ull) w2[1 << i] * w2[1 << i] % P; for (i = 1; i < n; ++i) w2[i] = (ull) w2[i & (i - 1)] * w2[i & -i] % P; } void DIF(int *a, int l) { int *j, *k, n = 1 << l, len = n >> 1, r, *o; for (; len; len >>= 1) for (j = a, o = root.data(); j != a + n; j += len << 1, ++o) for (k = j; k != j + len; ++k) { r = (ull) 
*o * k[len] % P; k[len] = reduce(*k - r); add(*k, r); } } void DIT(int *a, int l) { int *j, *k, n = 1 << l, len = 1, r, *o; for (; len != n; len <<= 1) for (j = a, o = root.data(); j != a + n; j += len << 1, ++o) for (k = j; k != j + len; ++k) { r = reduce(*k + k[len] - P); k[len] = ull(*k - k[len] + P) * *o % P; *k = r; } } void fft(int *a, int lgn, int d = 1) { if (L < lgn) prepRoot(lgn); int n = 1 << lgn; if (d == 1) DIF(a, lgn); else { DIT(a, lgn); reverse(a + 1, a + n); ull nv = P - (P - 1) / n; for (int i = 0; i < n; ++i) a[i] = a[i] * nv % P; } } } ntt; struct Poly { vector<int> a; Poly(int v = 0) : a(1) { if ((v %= P) < 0) v += P; a[0] = v; } Poly(const vector<int> &a) : a(a) {} Poly(initializer_list<int> init) : a(init) {} // Helps int operator[](int k) const { return k < a.size() ? a[k] : 0; } int &operator[](int k) { if (k >= a.size()) a.resize(k + 1); return a[k]; } int deg() const { return a.size() - 1; } void redeg(int d) { a.resize(d + 1); } Poly monic() const; Poly sunic() const; Poly slice(int d) const { if (d < a.size()) return vector<int>(a.begin(), a.begin() + d + 1); vector<int> res(a); res.resize(d + 1); return res; } int *base() { return a.data(); } const int *base() const { return a.data(); } Poly println(FILE *fp) const { fprintf(fp, "%d", a[0]); for (int i = 1; i < a.size(); ++i) fprintf(fp, " %d", a[i]); fputc('\n', fp); return *this; } // Calculations Poly operator+(const Poly &rhs) const { vector<int> res(max(a.size(), rhs.a.size())); for (int i = 0; i < res.size(); ++i) if ((res[i] = operator[](i) + rhs[i]) >= P) res[i] -= P; return res; } Poly operator-() const { Poly ret(a); for (int i = 0; i < a.size(); ++i) if (ret[i]) ret[i] = P - ret[i]; return ret; } Poly operator-(const Poly &rhs) const { return operator+(-rhs); } Poly operator*(const Poly &rhs) const; Poly taylor(int k) const; }; Poly zeroes(int deg) { return vector<int>(deg + 1); } Poly operator "" _z(unsigned long long a) { return {0, (int) a}; } Poly operator+(int v, const 
Poly &rhs) { return Poly(v) + rhs; } Poly Poly::operator*(const Poly &rhs) const { int n = deg(), m = rhs.deg(); if (n <= 10 || m <= 10 || n + m <= BRUTE_N2_LIMIT) { Poly ret = zeroes(n + m); for (int i = 0; i <= n; ++i) for (int j = 0; j <= m; ++j) ret[i + j] = (ret[i + j] + a[i] * (ll) rhs[j]) % P; return ret; } n += m; int l = 0; while ((1 << l) <= n) ++l; vector<int> res(1 << l), tmp(1 << l); memcpy(res.data(), base(), a.size() * sizeof(int)); ntt.fft(res.data(), l, 1); memcpy(tmp.data(), rhs.base(), rhs.a.size() * sizeof(int)); ntt.fft(tmp.data(), l, 1); for (int i = 0; i < (1 << l); ++i) res[i] = res[i] * (ll) tmp[i] % P; ntt.fft(res.data(), l, -1); res.resize(n + 1); return res; } Poly Poly::taylor(int k) const { int n = deg(); Poly t = zeroes(n); simp.check(n); for (int i = 0; i <= n; ++i) t[n - i] = a[i] * (ll) simp.fac[i] % P; int pw = 1; Poly help = vector<int>(simp.ifac.begin(), simp.ifac.begin() + n + 1); for (int i = 0; i <= n; ++i) { help[i] = help[i] * (ll) pw % P; pw = pw * (ll) k % P; } t = t * help; for (int i = 0; i <= n; ++i) help[i] = t[n - i] * (ll) simp.ifac[i] % P; return help; } Poly stirling2(int n) { Poly p = zeroes(n), ne = zeroes(n); for (int i = 0; i <= n; ++i) p[i] = mpow(i, n) * (ll)simp.gifac(i) % P; for (int i = 0; i <= n; ++i) ne[i] = simp.gifac(i); for (int i = 1; i <= n; i += 2) ne[i] = P - ne[i]; p = p * ne; vector<int> ans(n + 1); for (int i = 0; i <= n; ++i) ans[i] = p[i] * (ll)simp.gfac(i) % P; return ans; } namespace DC { int N; vector<Poly> prd, sum; Poly lift(Poly a, int k) { a.a.insert(a.a.begin(), k, 0); return a; } void build(int o, int l, int r) { if (l == r - 1) { prd[o].redeg(1); prd[o][1] = P - simp.ginv(r); prd[o][0] = (P - l) * (ll)simp.ginv(r) % P; sum[o] = prd[o]; return; } int mid = (l + r + 1) / 2; build(o << 1, l, mid); build(o << 1 | 1, mid, r); prd[o] = prd[o << 1] * prd[o << 1 | 1]; sum[o] = prd[o << 1] * sum[o << 1 | 1] + lift(sum[o << 1], r - mid); } void pre(int n) { N = n; sum.resize(n * 4); 
prd.resize(n * 4); build(1, 0, n); } Poly input; pair<Poly, Poly> solve(int o, int l, int r) { if (l == r - 1) { Poly r1 = input[r]; return make_pair(r1 * prd[o], lift(r1, 1)); } int mid = (l + r + 1) / 2; auto ls = solve(o << 1, l, mid), rs = solve(o << 1 | 1, mid, r); ls.first = ls.first + prd[o << 1] * rs.first + sum[o << 1] * rs.second; ls.second = ls.second + lift(rs.second, mid - l); return ls; } Poly solve(Poly in) { input = in; input.redeg(N); auto pr = solve(1, 0, N); auto ret = pr.first + pr.second; ret[0] = (ret[0] + input[0]) % P; return ret; } } Poly compute(Poly coeff) { int n = coeff.deg(); Poly ret = DC::solve(coeff); ret.redeg(n); reverse(ret.a.begin(), ret.a.end()); ret = ret.taylor(P - 1); reverse(ret.a.begin(), ret.a.end()); return ret; } Poly solve(int n) { DC::pre(n); auto v0 = stirling2(n), v1 = stirling2(n - 1); return compute(v0) + compute(v1); } int main() { ios::sync_with_stdio(false); cin.tie(nullptr); int n, s; cin >> n >> s; auto ans = solve(n - s); for (int i = 0; i < s; ++i) cout << "0 "; for (int i = n - s - 1; i >= 0; --i) cout << ans[i] * (ll)simp.binom(n - 1, s) % P << " \n"[i == 0]; return 0; }
1688
A
Cirno's Perfect Bitmasks Classroom
\begin{quote} Even if it's a really easy question, she won't be able to answer it \hfill — Perfect Memento in Strict Sense \end{quote} Cirno's perfect bitmasks classroom has just started! Cirno gave her students a positive integer $x$. As an assignment, her students need to find the \textbf{minimum positive} integer $y$, which satisfies the following two conditions: $$x\ and\ y > 0$$ $$x\ xor\ y > 0$$ where $and$ is the bitwise AND operation, and $xor$ is the bitwise XOR operation. Among the students was Mystia, who was truly baffled by all these new operators. Please help her!
Consider $x=2^k$ and $x\ne 2^k$ separately. Let $p_i$ be the $i$-th bit of $x$, $q_i$ be the $i$-th bit of $y$ (both indexed from $0$). $x\ \texttt{and}\ y > 0\Leftrightarrow \exists i,\ p_i= q_i = 1$. $x\ \texttt{xor}\ y > 0\Leftrightarrow \exists i,\ p_i\ne q_i$. To satisfy the first condition, find the minimum integer $k$ satisfying $p_k=1$, and assign $1$ to $q_k$. If $x\ne 2^k$, the second condition is satisfied now. Otherwise, find the minimum integer $j$ satisfying $p_j=0$, and assign $1$ to $q_j$. The time complexity is $O(1)$. We wrote a brute-force program, and it runs for more than 2 seconds on Polygon. However, many participants passed the pretests with it. We apologize for this mistake.
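The two cases can be implemented in $O(1)$; a sketch (the helper name is ours, not from the original):

```python
def min_y(x):
    # the lowest set bit of x guarantees x & y > 0
    y = x & -x
    if y == x:
        # x is a power of two, so y = x would give x ^ y = 0;
        # also set the lowest zero bit (bit 1 for x = 1, bit 0 otherwise)
        y |= 2 if x == 1 else 1
    return y
```

For example, `min_y(1)` is `3` and `min_y(5)` is `1`.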
[ "bitmasks", "brute force" ]
800
for T in range(int(input())):
    x=int(input())
    y=x&-x
    while (x==y or (x&y)==0):
        y+=1
    print(y)
1688
B
Patchouli's Magical Talisman
\begin{quote} She is skilled in all kinds of magics, and is keen on inventing new one. \hfill —Perfect Memento in Strict Sense \end{quote} Patchouli is making a magical talisman. She initially has $n$ magical tokens. Their magical power can be represented with \textbf{positive} integers $a_1, a_2, \ldots, a_n$. Patchouli may perform the following two operations on the tokens. - \textbf{Fusion:} Patchouli chooses two tokens, removes them, and creates a new token with magical power equal to the sum of the two chosen tokens. - \textbf{Reduction:} Patchouli chooses a token with an \textbf{even} value of magical power $x$, removes it and creates a new token with magical power equal to $\frac{x}{2}$. Tokens are more effective when their magical powers are \textbf{odd} values. Please help Patchouli to find the minimum number of operations she needs to make magical powers of all tokens \textbf{odd} values.
What if there is at least one odd integer? How can we produce an odd integer? Let $g(x)$ be the maximum integer satisfying $2^{g(x)}\mid x$. A greedy solution is to turn one integer into an odd integer and add it to the other even integers. Let $f(a)$ be the answer for a sequence $\{a_n\}$. We can find that: $f(a)=\begin{cases}\sum[g(a_i)>0]-1+\min\{g(a_i)\}& \forall i,\ g(a_i)>0 \\\sum[g(a_i)>0]&\texttt{otherwise}\end{cases}=\sum[g(a_i)>0]+\max\{0,\min\{g(a_i)\}-1\}$ It can be shown that this is the optimal strategy. We can prove that $f(a)$ decreases by at most $1$ with one operation. For the first operation (fusion), assuming we choose $a_i$ and $a_j$, let $a_k=a_i+a_j$. Obviously $g(a_k)\geq \min\{g(a_i),g(a_j)\}$ holds, so $\sum[g(a_i)>0]$ decreases by at most $1$, and $\min\{g(a_i)\}$ does not decrease. So $f(a)$ decreases by at most $1$. For the second operation (reduction), assuming we choose $a_j$: if $g(a_j)=\min\{g(a_i)\}>1$, then $\max\{0,\min\{g(a_i)\}-1\}$ decreases by $1$ and $\sum[g(a_i)>0]$ remains unchanged. Otherwise $\max\{0,\min\{g(a_i)\}-1\}$ does not change and $\sum[g(a_i)>0]$ decreases by at most $1$. So $f(a)$ decreases by at most $1$. We can conclude that $f(a)$ decreases by at most $1$ after one operation. Since $f(a)=0\Leftrightarrow$ all $a_i$ are odd integers, the strategy is proved to be optimal. The time complexity is $O(n)$. (In the original version of the problem, the constraint was $a_i\ge 0$.)
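The unified closed form above can be sketched in a few lines (hypothetical helper name; assumes all $a_i$ are positive, as in the statement):

```python
def min_ops(a):
    # g(x): exponent of the largest power of two dividing x
    g = [(x & -x).bit_length() - 1 for x in a]
    evens = sum(1 for v in g if v > 0)   # sum of [g(a_i) > 0]
    return evens + max(0, min(g) - 1)
```

For instance, `min_ops([4, 8])` is `3`: halve $4$ twice to get $1$, then fuse $1+8=9$.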
[ "bitmasks", "constructive algorithms", "greedy", "sortings" ]
800
// This time it only took 45 min to finish. #include "bits/stdc++.h" using namespace std; #define all(x) (x).begin(),(x).end() int main() { ios::sync_with_stdio(0);cin.tie(0); int T; cin>>T; while (T--) { int n,r; cin>>n; vector<int> a(n); for (int &x:a) cin>>x,x=__builtin_ffs(x)-1; r=max(*min_element(all(a))-1,0); for (int x:a) r+=(x>0); cout<<r<<'\n'; } }
1688
C
Manipulating History
\begin{quote} As a human, she can erase history of its entirety. As a Bai Ze (Hakutaku), she can create history out of nothingness. \hfill —Perfect Memento in Strict Sense \end{quote} Keine has the ability to manipulate history. The history of Gensokyo is a string $s$ \textbf{of length $1$ initially}. To fix the chaos caused by Yukari, she needs to do the following operations $n$ times, for the $i$-th time: - She chooses a \textbf{non-empty substring} $t_{2i-1}$ of $s$. - She replaces $t_{2i-1}$ with a \textbf{non-empty} string, $t_{2i}$. Note that the lengths of strings $t_{2i-1}$ and $t_{2i}$ can be different. Note that if $t_{2i-1}$ occurs more than once in $s$, \textbf{exactly one} of them will be replaced. For example, let $s=$"marisa", $t_{2i-1}=$"a", and $t_{2i}=$"z". After the operation, $s$ becomes "mzrisa" or "marisz". After $n$ operations, Keine got the final string and an operation sequence $t$ of length $2n$. Just as Keine thinks she has finished, Yukari appears again and shuffles the order of $t$. Worse still, Keine forgets the initial history. Help Keine find the initial history of Gensokyo! Recall that a substring is a sequence of consecutive characters of the string. For example, for string "abc" its substrings are: "ab", "c", "bc" and some others. But the following strings are not its substring: "ac", "cba", "acb". \textbf{Hacks} You cannot make hacks in this problem.
You do not need to know anything about string matching or other algorithms. Why does the initial string consist of only one letter? Why is the answer unique if there is at least one answer? What if each string in the input data consists of one letter? Parity. Let $t$ be the unshuffled operation sequence. Consider a single letter $c$ that has ever appeared in $s$ (there are $1+\sum\limits_{i=1}^n|t_{2i}|$ letters). There are two possible situations: If $c$ is in the initial string, then whether or not $c$ is replaced, $c$ will appear in the input data exactly once (in the replaced strings or in the final string). If $c$ is not in the initial string, then whether or not $c$ is replaced, $c$ will appear in the input data exactly twice. So the answer is the only letter appearing an odd number of times in the input data. The time complexity is $O(\sum |s_i|+|t|)$.
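The parity argument boils down to a single counting pass; a sketch (hypothetical helper, taking the $2n+1$ strings of the input data: all $t_i$ plus the final $s$):

```python
from collections import Counter

def initial_letter(strings):
    # count every letter across all replaced substrings,
    # all replacement strings, and the final string
    cnt = Counter()
    for s in strings:
        cnt.update(s)
    # exactly one letter appears an odd number of times: the initial history
    return next(c for c, k in cnt.items() if k % 2 == 1)
```

E.g. if the initial string "a" was once replaced by "bc", the input data is {"a", "bc", "bc"} and only 'a' occurs an odd number of times.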
[ "constructive algorithms", "greedy", "strings" ]
1,700
_=int(input())
for __ in range(_):
    n=2*int(input())+1
    a=[0 for i in range(26)]
    for i in range(n):
        s=input()
        for c in s:
            a[ord(c)-ord('a')]+=1
    cnt=0
    for i in range(26):
        if (a[i]%2==1):
            print(chr(i+ord('a')))
            cnt+=1
    if cnt!=1:
        print("fake problem")
1689
A
Lex String
Kuznecov likes art, poetry, and music. And strings consisting of lowercase English letters. Recently, Kuznecov has found two strings, $a$ and $b$, of lengths $n$ and $m$ respectively. They consist of lowercase English letters and \textbf{no character is contained in both strings}. Let another string $c$ be initially empty. Kuznecov can do the following two types of operations: - Choose any character from the string $a$, remove it from $a$, and add it to the end of $c$. - Choose any character from the string $b$, remove it from $b$, and add it to the end of $c$. But, he can not do more than $k$ operations of the same type in a row. He must perform operations until either $a$ or $b$ becomes empty. What is the lexicographically smallest possible value of $c$ after he finishes? A string $x$ is lexicographically smaller than a string $y$ if and only if one of the following holds: - $x$ is a prefix of $y$, but $x \neq y$; - in the first position where $x$ and $y$ differ, the string $x$ has a letter that appears earlier in the alphabet than the corresponding letter in $y$.
Greedily take the smallest character of the two strings. What's the exception to this? We can't take the smallest character of the two strings when we've already taken $k$ elements in a row from the string we chose. Denote by $A$ the number of characters we've taken from string $a$ in the last few consecutive moves and denote by $B$ the same for $b$. If $A=k$, then we have to take the smallest character of string $b$ instead of possibly $a$. If $B=k$, then we have to take the smallest character of string $a$ instead of possibly $b$. Remember to reset $A$ to $0$ when you take a character from $b$. Similarly, reset $B$ to $0$ when you take a character from $a$.
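A sketch of this greedy with the run-length counters (hypothetical helper; sorts both strings first, and relies on the guarantee that the two strings share no character):

```python
def lex_min(a, b, k):
    a, b = sorted(a), sorted(b)
    i = j = ra = rb = 0           # positions and current run lengths
    out = []
    while i < len(a) and j < len(b):
        take_a = a[i] < b[j]      # characters never coincide across strings
        if take_a and ra == k:
            take_a = False        # forced to switch to b
        if not take_a and rb == k:
            take_a = True         # forced to switch to a
        if take_a:
            out.append(a[i]); i += 1; ra += 1; rb = 0
        else:
            out.append(b[j]); j += 1; rb += 1; ra = 0
    return "".join(out)
```

For example, `lex_min("aa", "b", 1)` is `"ab"`: after one 'a' the run limit forces a switch to `b`.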
[ "brute force", "greedy", "implementation", "sortings", "two pointers" ]
800
#include <bits/stdc++.h> using namespace std; int main() { ios_base::sync_with_stdio(false); cin.tie(0),cout.tie(0); int t; cin>>t; while (t--) { int n,m,k; cin>>n>>m>>k; string a,b,c; cin>>a>>b; sort(a.begin(),a.end(),greater<char>()); sort(b.begin(),b.end(),greater<char>()); int ak=0,bk=0; while (!a.empty() && !b.empty()) { bool gde=b.back()<a.back(); if (gde && bk==k) gde=0; if (!gde && ak==k) gde=1; if (gde) c.push_back(b.back()),bk++,ak=0,b.pop_back(); else c.push_back(a.back()),ak++,bk=0,a.pop_back(); } cout<<c<<"\n"; } }
1689
B
Mystic Permutation
Monocarp is a little boy who lives in Byteland and he loves programming. Recently, he found a permutation of length $n$. He has to come up with a mystic permutation. It has to be a new permutation such that it differs from the old one in each position. More formally, if the old permutation is $p_1,p_2,\ldots,p_n$ and the new one is $q_1,q_2,\ldots,q_n$ it must hold that $$p_1\neq q_1, p_2\neq q_2, \ldots ,p_n\neq q_n.$$ Monocarp is afraid of lexicographically large permutations. Can you please help him to find the lexicographically minimal mystic permutation?
When is it impossible to find such a permutation? It is impossible only for $n=1$. In every other case, iterate over the positions in order from $1$ to $n$ and take the smallest available number. What's the exception to this? The exception is the last two elements. We can always take the smallest available number for each $q_i$ satisfying $i<n-1$. To do this we maintain an array of bools of already taken numbers, iterate over it to find the smallest available number satisfying $p_i\neq q_i$ which is not yet marked in the array, and then mark it as taken. Now consider $(p_{n-1},p_{n})$; we want $(q_{n-1},q_{n})$ to be lexicographically minimal while satisfying $p_{n-1}\neq q_{n-1}$ and $p_{n}\neq q_{n}$. Let $a$ and $b$ be the last two unused numbers, with $a<b$. We try to take $(q_{n-1},q_{n}) = (a,b)$. If $a=p_{n-1}$ or $b=p_{n}$, then we take $(q_{n-1},q_{n}) = (b,a)$ instead: if $(a,b)$ isn't valid, then $(b,a)$ is. The proof is left as an exercise to the reader. This solution runs in $O(n^2)$ and can be optimized to $O(n \log n)$. Riblji_Keksic found an $O(n)$ solution.
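The same procedure can be sketched compactly: start from the identity permutation and repair each collision by swapping with the next position, handling the last position specially (hypothetical helper name):

```python
def mystic(p):
    n = len(p)
    if n == 1:
        return None                     # impossible
    q = list(range(1, n + 1))           # start from the identity permutation
    for i in range(n - 1):
        if q[i] == p[i]:
            q[i], q[i + 1] = q[i + 1], q[i]
    if q[-1] == p[-1]:                  # fix a collision at the last position
        q[-2], q[-1] = q[-1], q[-2]
    return q
```

For example, `mystic([1, 2, 3])` is `[2, 3, 1]`.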
[ "data structures", "greedy" ]
900
#include <bits/stdc++.h> using namespace std; int t,n,A[1010],B[1010]; int main() { scanf("%d",&t); while(t--) { scanf("%d",&n); for(int i=1; i<=n; i++) { scanf("%d",&A[i]); B[i] = i; } if(n==1) { printf("-1\n"); continue; } for(int i=1; i<n; i++) { if(A[i]==B[i]) swap(B[i],B[i+1]); } if(A[n]==B[n]) swap(B[n-1],B[n]); for(int i=1; i<=n; i++) printf("%d ",B[i]); printf("\n"); } return 0; }
1689
C
Infected Tree
Byteland is a beautiful land known because of its beautiful trees. Misha has found a binary tree with $n$ vertices, numbered from $1$ to $n$. A binary tree is an acyclic connected bidirectional graph containing $n$ vertices and $n - 1$ edges. Each vertex has a degree at most $3$, whereas the root is the vertex with the number $1$ and it has a degree at most $2$. Unfortunately, the root got infected. The following process happens $n$ times: - Misha either chooses a non-infected (and not deleted) vertex and deletes it with all edges which have an end in this vertex or just does nothing. - Then, the infection spreads to each vertex that is connected by an edge to an already infected vertex (all already infected vertices remain infected). As Misha does not have much time to think, please tell him what is the maximum number of vertices he can save from the infection (note that deleted vertices are not counted as saved).
We always delete a vertex directly connected to an infected one. Use dp. Let $u_1,u_2,...,u_k$ be the sequence of removed vertices such that the infection cannot spread anymore. If vertex $u_i$ was never directly connected to an infected vertex, then we could have deleted its parent instead of $u_i$ and we would have got a solution at least as good. Hence, we may assume we always delete a vertex directly connected to an infected one. Now, we may use some dynamic programming ideas. Let $dp_i$ be the maximum number of vertices we can save in the subtree of vertex $i$ if that vertex is infected and we use operations only in the subtree. Since the tree is binary, each vertex has at most two children, and we have two choices: save the subtree of one child by deleting that child and let the infection spread to the other, or the other way around. In each case, the infection will be "active" in at most one subtree. If $c_1$ and $c_2$ are the children of vertex $i$, the transition is $dp_i = \max(dp_{c_1}+s_{c_2}-1,\ dp_{c_2}+s_{c_1}-1)$ where $s_v$ denotes the number of vertices in the subtree of $v$. The answer to the problem is $dp_1$. Complexity is $O(n)$.
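The sizes-and-dp recursion can be sketched as follows (hypothetical helper; recursive DFS, which is fine for small inputs — large trees would need an iterative DFS or a raised recursion limit):

```python
from collections import defaultdict

def max_saved(n, edges):
    g = defaultdict(list)
    for u, v in edges:
        g[u].append(v)
        g[v].append(u)
    sz = [0] * (n + 1)   # subtree sizes
    dp = [0] * (n + 1)   # best saved if this vertex is infected
    def dfs(u, par):
        sz[u] = 1
        kids = [v for v in g[u] if v != par]
        for v in kids:
            dfs(v, u)
            sz[u] += sz[v]
        s = sum(dp[v] for v in kids)
        for v in kids:
            # delete child v (saving sz[v]-1 vertices there);
            # the infection continues into the remaining child
            dp[u] = max(dp[u], s - dp[v] + sz[v] - 1)
    dfs(1, 0)
    return dp[1]
```

On the path $1-2-3$ the answer is $1$ (delete vertex $2$, saving vertex $3$), while on the star with center $1$ it is $0$.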
[ "dfs and similar", "dp", "trees" ]
1,600
#include <bits/stdc++.h> using namespace std; vector<vector<int>> g(300005); int ch[300005],dp[300005]; void dfs(int p, int q) { ch[p]=1,dp[p]=0; int s=0; for (auto it : g[p]) if (it!=q) { dfs(it,p); s+=dp[it]; ch[p]+=ch[it]; } for (auto it : g[p]) if (it!=q) { dp[p]=max(dp[p],s-dp[it]+ch[it]-1); } } int main() { ios_base::sync_with_stdio(false); cin.tie(0),cout.tie(0); int t; cin>>t; while (t--) { int n; cin>>n; for (int i=1;i<=n;i++) g[i].clear(); for (int i=1;i<n;i++) { int u,v; cin>>u>>v; g[u].push_back(v); g[v].push_back(u); } dfs(1,0); cout<<dp[1]<<"\n"; } }
1689
D
Lena and Matrix
Lena is a beautiful girl who likes logical puzzles. As a gift for her birthday, Lena got a matrix puzzle! The matrix consists of $n$ rows and $m$ columns, and each cell is either black or white. The coordinates $(i,j)$ denote the cell which belongs to the $i$-th row and $j$-th column for every $1\leq i \leq n$ and $1\leq j \leq m$. To solve the puzzle, Lena has to choose a cell that minimizes the Manhattan distance to the farthest black cell from the chosen cell. More formally, let there be $k \ge 1$ black cells in the matrix with coordinates $(x_i,y_i)$ for every $1\leq i \leq k$. Lena should choose a cell $(a,b)$ that minimizes $$\max_{i=1}^{k}(|a-x_i|+|b-y_i|).$$ As Lena has no skill, she asked you for help. Will you tell her the optimal cell to choose?
There are not a lot of useful black squares. Consider this algorithm: iterate over all squares in the matrix and find the most distant black square. Let's find out how to do that efficiently. In fact, only 4 (not necessarily distinct) black squares will be useful: one square which minimizes $i-j$, one square which maximizes $i-j$, one square which minimizes $i+j$ and one square which maximizes $i+j$, where $(i,j)$ denotes the black cell's coordinates. In other words, we would like to find the most distant "border". Let's look at the example above. The cell we choose to recolour to yellow creates four regions (the top-left rectangle, the top-right rectangle, the bottom-left rectangle and the bottom-right rectangle, which are created by two lines parallel with coordinate axes passing through our yellow point). The most distant border will be fully contained inside one region, hence we should find the distance from our yellow cell to any cell on that border, and that is the maximum possible distance. The complexity is $O(n\cdot m)$.
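A sketch of this approach (hypothetical helper; `grid` is a list of strings of `'B'`/`'W'`, the answer is 1-indexed):

```python
def best_cell(grid):
    n, m = len(grid), len(grid[0])
    blacks = [(i, j) for i in range(n) for j in range(m) if grid[i][j] == 'B']
    # only the four extreme black cells (by i+j and i-j) can be the farthest
    key = [max(blacks, key=lambda p: p[0] + p[1]),
           min(blacks, key=lambda p: p[0] + p[1]),
           max(blacks, key=lambda p: p[0] - p[1]),
           min(blacks, key=lambda p: p[0] - p[1])]
    best = None
    for i in range(n):
        for j in range(m):
            d = max(abs(i - x) + abs(j - y) for x, y in key)
            if best is None or d < best[0]:
                best = (d, i + 1, j + 1)
    return best[1], best[2]
```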
[ "data structures", "dp", "geometry", "shortest paths" ]
1,900
#include <bits/stdc++.h> using namespace std; char tab[1003][1003]; pair<int,int> a={-1,-1},b={-1,-1},c={-1,-1},d={-1,-1}; int main() { ios_base::sync_with_stdio(false); cin.tie(0),cout.tie(0); int t; cin>>t; while (t--) { a={-1,-1},b={-1,-1},c={-1,-1},d={-1,-1}; int n,m; cin>>n>>m; for (int i=0;i<n;i++) cin>>tab[i]; vector<pair<int,int>> interesting; for (int i=0;i<n;i++) for (int j=0;j<m;j++) if (tab[i][j]=='B') { if (a.first==-1 || i+j>a.first+a.second) a={i,j}; if (b.first==-1 || i+j<b.first+b.second) b={i,j}; if (c.first==-1 || i-j>c.first-c.second) c={i,j}; if (d.first==-1 || i-j<d.first-d.second) d={i,j}; } interesting.push_back(a); interesting.push_back(b); interesting.push_back(c); interesting.push_back(d); int ans=1e9; pair<int,int> opt; for (int i=0;i<n;i++) for (int j=0;j<m;j++) { int dist=0; for (auto it : interesting) dist=max(dist,abs(i-it.first)+abs(j-it.second)); if (dist<ans) ans=dist,opt={i,j}; } cout<<opt.first+1<<" "<<opt.second+1<<"\n"; } }
1689
E
ANDfinity
Bit Lightyear, to the ANDfinity and beyond! After graduating from computer sciences, Vlad has been awarded an array $a_1,a_2,\ldots,a_n$ of $n$ non-negative integers. As it is natural, he wanted to construct a graph consisting of $n$ vertices, numbered $1, 2,\ldots, n$. He decided to add an edge between $i$ and $j$ if and only if $a_i \& a_j > 0$, where $\&$ denotes the bitwise AND operation. Vlad also wants the graph to be connected, which might not be the case initially. In order to satisfy that, he can do the following two types of operations on the array: - Choose some element $a_i$ and increment it by $1$. - Choose some element $a_i$ and decrement it by $1$ (possible only if $a_i > 0$). It can be proven that there exists a finite sequence of operations such that the graph will be connected. So, can you please help Vlad find the minimum possible number of operations to do that and also provide the way how to do that?
Increase every $0$ by $1$ initially. Check if the answer is $0$: check whether the graph is already connected. Check if the answer is $1$. If the answer is not $0$ or $1$, then it is $2$. Firstly, let's understand how to check whether the graph induced by some array $b$ is connected in $O(n \log \max b_i)$. We create a graph over bits. Let's take all elements $b_i$ and add edges between their adjacent bits (all bits of a single $b_i$ will be connected). To quickly access the lowest bit we use $b_i \ \& \ -b_i$ in code. Now we just check whether the graph over bits is connected. We check whether the graph for the initial array $a$ is connected. If it is, the answer is $0$. Then, we wonder if the answer is $1$: check if at least one of the graphs for the arrays $a_1,...,a_{i-1},a_i-1,a_{i+1},...,a_n$ for every $1\leq i \leq n$ is connected. Do the same for the arrays $a_1,...,a_{i-1},a_i+1,a_{i+1},...,a_n$. If none of these graphs is connected, the answer is $2$; otherwise it is $1$. Now let's see why the answer is always at most $2$. Let $i_1,i_2,...,i_k$ be the sequence of indices such that $a_{i_j}$ has the highest lowest bit (the highest value of $a_i \ \& \ -a_i$). If $k=1$ then we can just decrease $a_{i_1}$ by $1$ and connect everything. If $k\geq 2$ and we do the same, we might disconnect that number from the other numbers having the highest lowest bit, so an additional operation of adding $1$ to $a_{i_2}$ is needed to keep everything connected. The answer is $2$ in this case. The complexity of this solution is $O(n^2 \log \max a_i)$.
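The model solution below checks the graph over bits with a DFS; a DSU over the 31 bit positions works equally well. A sketch (hypothetical helper name):

```python
def connected(arr):
    # the element graph is connected iff the graph over used bit positions is
    if any(x == 0 for x in arr):
        return False
    parent = list(range(31))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    used = 0
    for x in arr:
        used |= x
        bits = [b for b in range(31) if x >> b & 1]
        for b in bits[1:]:
            parent[find(b)] = find(bits[0])  # all bits of x are joined
    roots = {find(b) for b in range(31) if used >> b & 1}
    return len(roots) == 1
```

For example, `[1, 2]` is disconnected (bits $0$ and $1$ are never joined), while `[5, 6]` is connected because bit $2$ links them.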
[ "bitmasks", "brute force", "constructive algorithms", "dfs and similar", "dsu", "graphs" ]
2,500
#include <bits/stdc++.h> using namespace std; int a[2005]; vector<vector<int>> g(32); bool vis[32]; void dfs(int p) { if (vis[p]) return; vis[p]=1; for (auto it : g[p]) dfs(it); } bool connected(int n) { int m=0; for (int i=0;i<n;i++) if (a[i]==0) return false; for (int i=0;i<n;i++) m|=a[i]; for (int i=0;i<31;i++) g[i].clear(); for (int i=0;i<n;i++) { int last=-1; for (int j=0;j<31;j++) if (a[i]&(1<<j)){ if (last!=-1) g[last].push_back(j),g[j].push_back(last); last=j; } } for (int j=0;j<31;j++) vis[j]=0; for (int j=0;j<31;j++) if ((1<<j)&m) { dfs(j); break; } for (int j=0;j<31;j++) if (((1<<j)&m) && !vis[j]) return false; return true; } int main() { ios_base::sync_with_stdio(false); cin.tie(0),cout.tie(0); int t; cin>>t; while (t--){ int n; cin>>n; for (int i=0;i<n;i++) cin>>a[i]; int ans=0; for (int i=0;i<n;i++) if (a[i]==0) ans++,a[i]++; int m=0; for (int i=0;i<n;i++) m=max(m,(a[i]&(-a[i]))); if (connected(n)) { cout<<ans<<"\n"; for (int i=0;i<n;i++) cout<<a[i]<<" "; cout<<"\n"; goto kraj; } for (int i=0;i<n;i++) { a[i]--; if (connected(n)) { cout<<ans+1<<"\n"; for (int i=0;i<n;i++) cout<<a[i]<<" "; cout<<"\n"; goto kraj; } a[i]++; } for (int i=0;i<n;i++) { a[i]++; if (connected(n)) { cout<<ans+1<<"\n"; for (int i=0;i<n;i++) cout<<a[i]<<" "; cout<<"\n"; goto kraj; } a[i]--; } for (int i=0;i<n;i++) if ((a[i]&-a[i])==m) { a[i]--; break; } for (int i=0;i<n;i++) if ((a[i]&-a[i])==m) { a[i]++; break; } cout<<ans+2<<"\n"; for (int i=0;i<n;i++) cout<<a[i]<<" "; cout<<"\n"; kraj:; } }
1690
A
Print a Pedestal (Codeforces logo?)
Given the integer $n$ — the number of available blocks. You must use \textbf{all} blocks to build a pedestal. The pedestal consists of $3$ platforms for $2$-nd, $1$-st and $3$-rd places respectively. The platform for the $1$-st place must be \textbf{strictly} higher than for the $2$-nd place, and the platform for the $2$-nd place must be \textbf{strictly} higher than for the $3$-rd place. Also, the height of each platform must be greater than zero (that is, each platform must contain at least one block). \begin{center} {\small Example pedestal of $n=11$ blocks: second place height equals $4$ blocks, first place height equals $5$ blocks, third place height equals $2$ blocks.} \end{center} Among all possible pedestals of $n$ blocks, output one such that the platform height for the $1$-st place is the \textbf{minimum} possible. If there are several of them, output any of them.
In the $n \le 10^5$ constraints, the problem can be solved by brute force: we go through the values of $h_1$ (the height for the first place) in increasing order, and then select suitable values for $h_2$ and $h_3$. Since $h_2 > h_3$, we divide the remaining $n - h_1$ blocks equally between $h_2$ and $h_3$. If it turns out that $h_2 = h_3$, then we try to decrease $h_3$ by $1$ and increase $h_2$ by $1$. If we get a valid answer ($h_1 > h_2 > h_3 \ge 1$), we output the heights and stop the process.
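A sketch of this brute force (hypothetical helper; the tie-handling differs slightly from the model code, and $n \ge 6$ is assumed, since that is the smallest $n$ with a valid pedestal):

```python
def pedestal(n):
    # try first-place heights a in increasing order
    for a in range(3, n):
        c = (n - a) // 2
        b = n - a - c
        if b == c or b == a:    # break ties by moving one block to b
            c -= 1
            b += 1
        if a > b > c >= 1:
            return b, a, c      # heights for 2nd, 1st, 3rd places
```

For example, `pedestal(11)` is `(4, 5, 2)`, matching the example in the statement.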
[ "constructive algorithms", "greedy" ]
800
#include <bits/stdc++.h> using namespace std; #define forn(i, n) for (int i = 0; i < int(n); i++) #define sz(v) (int)v.size() #define all(v) v.begin(),v.end() #define eb emplace_back void solve() { int n; cin >> n; for (int a = 3; a < n; a++) { int c = (n - a) / 2; int b = n - a - c; if (c > 1 && b+1 < a) { c--; b++; } if (a > b && b > c) { cout << b << ' ' << a << ' ' << c << endl; return; } } } int main() { int t; cin >> t; forn(tt, t) { solve(); } }
1690
B
Array Decrements
Kristina has two arrays $a$ and $b$, each containing $n$ non-negative integers. She can perform the following operation on array $a$ any number of times: - apply a decrement to each non-zero element of the array, that is, replace the value of each element $a_i$ such that $a_i > 0$ with the value $a_i - 1$ ($1 \le i \le n$). If $a_i$ was $0$, its value does not change. Determine whether Kristina can get an array $b$ from an array $a$ in some number of operations (probably zero). In other words, can she make $a_i = b_i$ after some number of operations for each $1 \le i \le n$? For example, let $n = 4$, $a = [3, 5, 4, 1]$ and $b = [1, 3, 2, 0]$. In this case, she can apply the operation twice: - after the first application of the operation she gets $a = [2, 4, 3, 0]$; - after the second use of the operation she gets $a = [1, 3, 2, 0]$. Thus, in two operations, she can get an array $b$ from an array $a$.
For all elements of the arrays to become equal after the decrements, $a_i \ge b_i$ must hold for $1 \le i \le n$. In addition, if there exists $b_i > 0$, then the equality $a_i = b_i$ can be obtained only by subtracting exactly $a_i - b_i$ units from $a_i$. Since the equality $a_i = b_i$ must hold for all $i$ simultaneously, the problem reduces to checking that all differences $a_i - b_i = dif$ with $b_i > 0$ are equal, and that for $b_i = 0$ the difference $a_i - b_i$ does not exceed $dif$.
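A minimal sketch of this check (the function name is ours, not the reference implementation's):

```cpp
#include <bits/stdc++.h>
using namespace std;

// All positions with b[i] > 0 must share one difference dif = a[i] - b[i];
// positions with b[i] == 0 may reach zero earlier, so a[i] <= dif suffices.
bool reachable(const vector<int>& a, const vector<int>& b) {
    const int INF = INT_MAX;
    int dif = INF;
    for (size_t i = 0; i < a.size(); i++) {
        if (a[i] < b[i]) return false;
        if (b[i] > 0) {
            if (dif == INF) dif = a[i] - b[i];
            else if (a[i] - b[i] != dif) return false;
        }
    }
    if (dif == INF) return true;  // every b[i] == 0: always reachable
    for (size_t i = 0; i < a.size(); i++)
        if (b[i] == 0 && a[i] > dif) return false;
    return true;
}
```

On the statement's example, $a = [3, 5, 4, 1]$ and $b = [1, 3, 2, 0]$, the common difference is $2$ and the check passes.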
[ "greedy", "implementation" ]
800
#include<bits/stdc++.h> #define forn(i, n) for (int i = 0; i < int(n); i++) using namespace std; const int inf = 1e9 + 7; bool equals(vector<int>&a, vector<int>&b, int n){ int dif = inf; forn(i, n){ if(b[i] != 0) dif = min(dif, a[i] - b[i]); } if(dif < 0) return false; if(dif == inf) return true; forn(i, n){ if(a[i] - b[i] > dif) return false; if(b[i] != 0 && a[i] - b[i] < dif) return false; } return true; } void solve(){ int n; cin >> n; vector<int>a(n), b(n); forn(i, n) cin >> a[i]; forn(i, n) cin >> b[i]; cout << (equals(a, b, n) ? "YES\n" : "NO\n"); } int main(){ int t; cin >> t; while(t--){ solve(); } }
1690
C
Restoring the Duration of Tasks
Recently, Polycarp completed $n$ successive tasks. For each completed task, the time $s_i$ is known when it was given, no two tasks were given at the same time. Also given is the time $f_i$ when the task was completed. For each task, there is an unknown value $d_i$ ($d_i>0$) — \textbf{duration of task execution}. It is known that the tasks were completed in the order in which they came. Polycarp performed the tasks as follows: - As soon as the very first task came, Polycarp immediately began to carry it out. - If a new task arrived before Polycarp finished the previous one, he put the new task at the end of the queue. - When Polycarp finished executing the next task and the queue was not empty, he \textbf{immediately} took a new task from the head of the queue (if the queue is empty — he just waited for the next task). Find $d_i$ (duration) of each task.
As described in the statement, we put all tasks into a queue in the order of their arrival and fix the starting time as $cur\_time = 0$. Then, while the queue is not empty, we proceed as follows: take the next task from the queue; set the current time to the maximum of the current time and the task's arrival time ($cur\_time = \max(cur\_time, s)$); compute the duration as the finish time minus the current time ($d = f - cur\_time$); replace the current time with the finish time ($cur\_time = f$). If there is still a task in the queue, repeat from the first step.
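A direct simulation of this procedure: the duration is $d_i = f_i - cur\_time$ and the clock then jumps to $f_i$. The example values used in the check below are hypothetical, not from the problem's tests.

```cpp
#include <bits/stdc++.h>
using namespace std;

// s[i] = arrival time, f[i] = finish time; both are given in order.
vector<int> durations(const vector<int>& s, const vector<int>& f) {
    vector<int> d(s.size());
    int cur = 0;
    for (size_t i = 0; i < s.size(); i++) {
        cur = max(cur, s[i]);  // wait for the task if the queue ran dry
        d[i] = f[i] - cur;     // time actually spent on this task
        cur = f[i];            // the clock jumps to the finish time
    }
    return d;
}
```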
[ "data structures", "greedy", "implementation" ]
800
#include <bits/stdc++.h> using namespace std; typedef long long ll; #define forn(i, n) for (int i = 0; i < int(n); i++) void solve() { int n; cin >> n; int s[n]; int f[n]; for (int i = 0; i < n; ++i) { cin >> s[i]; } for (int i = 0; i < n; ++i) { cin >> f[i]; } int curTime = 0; int d[n]; for (int i = 0; i < n; ++i) { curTime = max(curTime, s[i]); d[i] = f[i] - curTime; curTime = f[i]; } for (auto now: d) { cout << now << " "; } cout << '\n'; } int main() { int tests; cin >> tests; forn(tt, tests) { solve(); } }
1690
D
Black and White Stripe
You have a stripe of checkered paper of length $n$. Each cell is either white or black. What is the minimum number of cells that must be recolored from white to black in order to have a segment of $k$ consecutive black cells on the stripe? If the input data is such that a segment of $k$ consecutive black cells already exists, then print 0.
To obtain a segment of $k$ black cells, we need to paint all the white cells of the segment black. So we go through all segments of length $k$ (there are only $n - k + 1$ of them) and choose among them a segment with the minimum number of white cells. The number of white cells in a segment can be found quickly with prefix sums.
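The prefix-sum scan can be sketched as follows (a compact restatement of the idea; the function name is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// w[i] = number of 'W' among the first i cells, so the window
// ending at cell i needs w[i] - w[i - k] recolorings. Assumes k <= n.
int minRecolor(const string& s, int k) {
    int n = s.size();
    vector<int> w(n + 1, 0);
    for (int i = 1; i <= n; i++) w[i] = w[i - 1] + (s[i - 1] == 'W');
    int best = INT_MAX;
    for (int i = k; i <= n; i++) best = min(best, w[i] - w[i - k]);
    return best;
}
```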
[ "implementation", "two pointers" ]
1000
#include <bits/stdc++.h> using namespace std; #define forn(i, n) for (int i = 0; i < int(n); i++) int main() { int t; cin >> t; forn(tt, t) { int n, k; cin >> n >> k; string s; cin >> s; vector<int> w(n + 1); for (int i = 1; i <= n; i++) w[i] = w[i - 1] + int(s[i - 1] == 'W'); int result = INT_MAX; for (int i = k; i <= n; i++) result = min(result, w[i] - w[i - k]); cout << result << endl; } }
1690
E
Price Maximization
A batch of $n$ goods ($n$ — an even number) is brought to the store, $i$-th of which has weight $a_i$. Before selling the goods, they must be packed into packages. After packing, the following will be done: - There will be $\frac{n}{2}$ packages, each package contains exactly two goods; - The weight of the package that contains goods with indices $i$ and $j$ ($1 \le i, j \le n$) is $a_i + a_j$. With this, the cost of a package of weight $x$ is always $\left \lfloor\frac{x}{k}\right\rfloor$ burles (rounded down), where $k$ — a fixed and given value. Pack the goods to the packages so that the revenue from their sale is maximized. In other words, make such $\frac{n}{2}$ pairs of given goods that the sum of the values $\left \lfloor\frac{x_i}{k} \right \rfloor$, where $x_i$ is the weight of the package number $i$ ($1 \le i \le \frac{n}{2}$), is \textbf{maximal}. For example, let $n = 6, k = 3$, weights of goods $a = [3, 2, 7, 1, 4, 8]$. Let's pack them into the following packages. - In the first package we will put the third and sixth goods. Its weight will be $a_3 + a_6 = 7 + 8 = 15$. The cost of the package will be $\left \lfloor\frac{15}{3}\right\rfloor = 5$ burles. - In the second package put the first and fifth goods, the weight is $a_1 + a_5 = 3 + 4 = 7$. The cost of the package is $\left \lfloor\frac{7}{3}\right\rfloor = 2$ burles. - In the third package put the second and fourth goods, the weight is $a_2 + a_4 = 2 + 1 = 3$. The cost of the package is $\left \lfloor\frac{3}{3}\right\rfloor = 1$ burle. With this packing, the total cost of all packs would be $5 + 2 + 1 = 8$ burles.
Note that we do not need to consider the full weights $x \ge k$: we immediately add $\left \lfloor\frac{x}{k}\right\rfloor$ to the answer and keep only the remainder of $x$ modulo $k$. We get an array $a$ where $a_i < k$. Let's sort it and greedily match index pairs $i < j$ such that $a_i + a_j \ge k$; this can be done with two pointers. Then we add the number of matched pairs to the answer counter. This is the answer to the problem.
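A sketch of this reduction (`maxRevenue` is our name); the check below uses the example from the statement, $n = 6$, $k = 3$, $a = [3, 2, 7, 1, 4, 8]$, whose answer is $8$.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Bank floor(a_i / k) immediately, then pair the remainders with
// two pointers so that each matched pair has remainder sum >= k.
long long maxRevenue(vector<long long> a, long long k) {
    long long sum = 0;
    for (auto& x : a) { sum += x / k; x %= k; }
    sort(a.rbegin(), a.rend());                // descending remainders
    for (int i = 0, j = (int)a.size() - 1; i < j; i++, j--) {
        while (i < j && a[i] + a[j] < k) j--;  // drop remainders too small to help
        if (i >= j) break;
        sum++;                                 // this pair earns one extra burle
    }
    return sum;
}
```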
[ "binary search", "greedy", "math", "two pointers" ]
1500
#include<bits/stdc++.h> #define len(s) (int)s.size() using namespace std; using ll = long long; void solve(){ int n, k; cin >> n >> k; vector<ll>a(n); ll sum = 0; for(int i = 0; i < n; i++) { cin >> a[i]; sum += a[i] / k; a[i] = a[i] % k; } sort(a.begin(), a.end(), [] (int x, int y){ return x > y; }); for(int i = 0, j = n - 1; i < j; i++, j--){ while(a[i] + a[j] < k && i < j) j--; if(i == j) break; sum++; } cout << sum << endl; } int main(){ int t; cin >> t; while(t--){ solve(); } return 0; }
1690
F
Shifting String
Polycarp found the string $s$ and the permutation $p$. Their lengths turned out to be the same and equal to $n$. A permutation of $n$ elements — is an array of length $n$, in which every integer from $1$ to $n$ occurs exactly once. For example, $[1, 2, 3]$ and $[4, 3, 5, 1, 2]$ are permutations, but $[1, 2, 4]$, $[4, 3, 2, 1, 2]$ and $[0, 1, 2]$ are not. In one operation he can multiply $s$ by $p$, so he replaces $s$ with string $new$, in which for any $i$ from $1$ to $n$ it is true that $new_i = s_{p_i}$. For example, with $s=wmbe$ and $p = [3, 1, 4, 2]$, after operation the string will turn to $s=s_3 s_1 s_4 s_2=bwem$. Polycarp wondered after how many operations the string would become equal to its initial value for the first time. Since it may take too long, he asks for your help in this matter. It can be proved that the required number of operations always exists. It can be very large, so use a 64-bit integer type.
To begin with, let's understand why the string returns to its original form at all. The graph defined by the permutation consists of simple cycles, so after a certain number of operations each character returns to its place. Consider each cycle as a string that is cyclically shifted on every operation. It may seem that the answer is the $lcm$ (least common multiple) of the cycle lengths, but to become equal to its initial value a cycle does not necessarily have to go all the way around. The constraints allow us to calculate the length of the minimum suitable shift $k_j$ in $\mathcal{O}(len^2)$, where $len$ is the length of the cycle, simply by iterating over the candidate shifts. Note that after $k_j$ operations the cycle returns to its original form, and this happens again after every further $k_j$ operations. The answer is the $lcm$ of all $k_j$, since each cycle individually returns to its original form exactly when the number of operations is a multiple of its $k_j$.
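The $\mathcal{O}(len^2)$ minimal-shift search and the $lcm$ combination step can be sketched like this (function names are ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Smallest cyclic shift that maps the cycle string onto itself,
// found by brute force as the constraints allow.
long long minPeriod(const string& s) {
    for (int i = 1; i <= (int)s.size(); i++)
        if (s.substr(i) + s.substr(0, i) == s) return i;
    return s.size();  // unreachable: i = len always matches
}

// Fold one cycle's period into the running lcm of all periods.
long long combine(long long ans, const string& cycle) {
    long long p = minPeriod(cycle);
    return ans / __gcd(ans, p) * p;
}
```

For example, the cycle string "abab" has minimal period $2$, not $4$, which is exactly why the naive lcm of cycle lengths can overshoot.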
[ "graphs", "math", "number theory", "strings" ]
1700
def gcd(a, b): if b == 0: return a; return gcd(b, a % b) def shift(s): for i in range(1, len(s) + 1): if s == s[i:] + s[:i]: return i def solve(): n = int(input()) s = input() p = [int(x)-1 for x in input().split()] used = [False] * n ans = 1 i = 0 while i < n: ss = '' while not used[i]: ss += s[i] used[i] = True i = p[i]; i += 1 if len(ss) == 0: continue ln = shift(ss) ans = ans * ln // gcd(ans, ln) print(ans) t = int(input()) for _ in range(t): solve()
1690
G
Count the Trains
There are $n$ independent carriages on the rails. The carriages are numbered from left to right from $1$ to $n$. The carriages are not connected to each other. The carriages move to the left, so that the carriage with number $1$ moves ahead of all of them. The $i$-th carriage has its own engine, which can accelerate the carriage to $a_i$ km/h, but the carriage cannot go faster than the carriage in front of it. See example for explanation. All carriages start moving to the left at the same time, and they naturally form \textbf{trains}. We will call \textbf{trains} consecutive moving carriages having the same speed. For example, we have $n=5$ carriages and array $a = [10, 13, 5, 2, 6]$. Then the final speeds of the carriages will be $[10, 10, 5, 2, 2]$. Respectively, $3$ trains will be formed. There are also messages saying that some engine has been corrupted: - message "k d" means that the speed of the $k$-th carriage has decreased by $d$ (that is, there has been a change in the maximum speed of the carriage $a_k = a_k - d$). Messages arrive sequentially, the processing of the next message takes into account the changes from all previous messages. After each message determine the number of formed trains.
In a set we will keep the indices that start the trains. That is, if the array $v$ holds the real speeds of the carriages, then we store in the set the values $i \ge 2$ such that $v[i] < v[i-1]$, as well as the index of the first carriage. Thus, the size of the set is the answer to the problem. Consider now the operation of reducing the speed of carriage $k$: find the maximal index $j \le k$ in the set; if $a_k < a_j$, add $k$ to the set, since it now starts a new train. Then remove from the set all subsequent indices $j > k$ such that $a_j \ge a_k$. Over all operations we add at most $m$ new elements to the set (in addition to at most $n$ initial ones), so we also remove at most $n + m$ elements in total. We obtain the asymptotic $O((n + m) \log n)$.
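The invariant behind the set can be verified with a from-scratch count for a single state (this is just the definition of a train, not the per-update algorithm):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Carriage i starts a new train exactly when its speed cap is below
// every cap to its left, i.e. the running prefix minimum strictly
// decreases at i (the first carriage always starts a train).
int countTrains(const vector<int>& a) {
    int trains = 0, mn = INT_MAX;
    for (int x : a)
        if (x < mn) { mn = x; trains++; }
    return trains;
}
```

On the statement's example $a = [10, 13, 5, 2, 6]$ this counts $3$ trains, matching the final speeds $[10, 10, 5, 2, 2]$.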
[ "binary search", "data structures", "greedy", "sortings" ]
2000
#include<bits/stdc++.h> using namespace std; int main() { ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int t; cin >> t; while (t--) { int n, m; cin >> n >> m; vector<int> a(n); set<int> tmp; for (int i = 0; i < n; i++) { cin >> a[i]; if (tmp.empty() || a[i] < a[*tmp.rbegin()]) { tmp.insert(i); } } for (int i = 0; i < m; i++) { int j, d; cin >> j >> d; j--; a[j] -= d; auto it = tmp.upper_bound(j); if (it != tmp.begin()) { it = prev(it); if (*it == j || a[*it] > a[j]) { tmp.insert(j); } } else { tmp.insert(j); } while (1) { it = tmp.upper_bound(j); if (it != tmp.end() && a[*it] >= a[j]) { tmp.erase(it); } else { break; } } cout << (int) tmp.size() << " "; } cout << '\n'; } }
1691
A
Beat The Odds
Given a sequence $a_1, a_2, \ldots, a_n$, find the minimum number of elements to remove from the sequence such that after the removal, the sum of every $2$ consecutive elements is even.
The sum of an odd integer and an even integer is odd. So the array cannot contain both even and odd elements if the sum of every two consecutive elements is to be even: the final array must consist of all even or all odd elements. Hence, we remove either all odd elements or all even elements, whichever takes the smaller number of operations. Therefore, the answer is: $min(\textit{count of odd elements}, \textit{count of even elements})$
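A one-pass sketch of this formula (the function name is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Remove the rarer parity class: count odd elements, compare with even.
int minRemovals(const vector<int>& a) {
    int odd = 0;
    for (int x : a) odd += x & 1;
    return min(odd, (int)a.size() - odd);
}
```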
[ "brute force", "greedy", "math" ]
800
#include <bits/stdc++.h> using namespace std; int main() { int t; cin >> t; while (t--) { int n; cin >> n; vector<int> a(n); for (int i = 0; i < n; ++i) cin >> a[i]; int num_odd = 0; for (auto x : a) if (x & 1) num_odd++; cout << min(num_odd, n - num_odd) << endl; } return 0; }
1691
B
Shoe Shuffling
A class of students got bored wearing the same pair of shoes every day, so they decided to shuffle their shoes among themselves. In this problem, a pair of shoes is inseparable and is considered as a single object. There are $n$ students in the class, and you are given an array $s$ in \textbf{non-decreasing} order, where $s_i$ is the shoe size of the $i$-th student. A shuffling of shoes is valid only if \textbf{no student gets their own shoes} and if every student gets shoes of size \textbf{greater than or equal to} their size. You have to output a permutation $p$ of $\{1,2,\ldots,n\}$ denoting a valid shuffling of shoes, where the $i$-th student gets the shoes of the $p_i$-th student ($p_i \ne i$). And output $-1$ if a valid shuffling does not exist. A permutation is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array) and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
Suppose a valid shuffling exists in which some student gets shoes strictly greater than their own size, i.e., student $j$ got the shoes of student $i$ where $s_i > s_j$. Then, among the pairs of shoes of size $s_j$, one pair must go to a student whose size is smaller than $s_j$. This chain continues downward until a student with the smallest shoe size $s_1$ would have to get a pair of shoes greater than theirs, and then there exists a pair of shoes of size $s_1$ that no student can wear — a contradiction. Thus, in any valid shuffling, every student must get shoes of exactly the same size as their own. Hence, a valid shuffling exists if and only if, for every shoe size, more than one student has that size. A valid shuffling can be generated by rearranging the shoes within each group of students with the same shoe size such that no one gets their own shoes. This can be done in multiple ways, for example, by a cyclic rotation.
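The group-rotation construction can be sketched as follows (our own sketch with 1-based output, matching the statement's $p_i \ne i$ requirement; an empty result signals $-1$):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Within every maximal block of equal sizes, give student i the shoes
// of the next student in the block (cyclically). s is non-decreasing.
vector<int> shuffleShoes(const vector<int>& s) {
    int n = s.size();
    vector<int> p(n);
    for (int l = 0, r = 0; l < n; l = r) {
        while (r < n && s[r] == s[l]) r++;      // block [l, r) of one size
        if (r - l < 2) return {};               // a unique size: impossible
        for (int i = l; i < r; i++)
            p[i] = (i + 1 < r ? i + 2 : l + 1); // 1-based cyclic shift
    }
    return p;
}
```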
[ "constructive algorithms", "greedy", "implementation", "two pointers" ]
1000
#include <bits/stdc++.h> using namespace std; #define ll long long typedef vector<ll> vll; #define io \ ios_base::sync_with_stdio(false); \ cin.tie(NULL); \ cout.tie(NULL) int main() { io; ll tc; cin >> tc; while (tc--) { ll n; cin >> n; vll s(n), p(n); for (ll i = 0; i < n; ++i) cin >> s[i]; ll l = 0, r = 0; bool ans = true; for (ll i = 0; i < n; ++i) p[i] = i + 1; while (r < n) { while (r < n - 1 and s[r] == s[r + 1]) // get range [l,r] with equal values ++r; if (l == r) ans = false; else rotate(p.begin() + l, p.begin() + r, p.begin() + r + 1); // rotate right in range [l,r] l = r + 1; ++r; } if (ans) { for (auto &x : p) cout << x << " "; cout << endl; } else cout << -1 << endl; } return 0; }
1691
C
Sum of Substrings
You are given a binary string $s$ of length $n$. Let's define $d_i$ as the number whose decimal representation is $s_i s_{i+1}$ (possibly, with a leading zero). We define $f(s)$ to be the sum of all the valid $d_i$. In other words, $f(s) = \sum\limits_{i=1}^{n-1} d_i$. For example, for the string $s = 1011$: - $d_1 = 10$ (ten); - $d_2 = 01$ (one) - $d_3 = 11$ (eleven); - $f(s) = 10 + 01 + 11 = 22$. In one operation you can swap any two adjacent elements of the string. Find the minimum value of $f(s)$ that can be achieved if at most $k$ operations are allowed.
We can observe that for any string $s$, $f(s)$ can also be written as: $f(s) = 10\times s_1 + 11 \times s_2 + 11 \times s_3 + \dots + 11 \times s_{n-1} + 1 \times s_n$ Now, in order to minimize the value of $f(s)$, we want to place 1s at position $n$ first, then at position $1$, and only then anywhere in the middle of the string. To achieve the best configuration in at most $k$ operations, we first try to move the last '1' to position $n$; then, with the remaining operations, we try to move the first '1' to position $1$. The remaining 1s can stay where they are, as each of them contributes a value of $11$ no matter which middle position it takes.
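A quick check of the weight decomposition above, assuming $n \ge 2$ (the function name is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Each inner bit contributes 11, the first bit 10, the last bit 1.
// Assumes s.size() >= 2, since f is defined via adjacent pairs.
int f(const string& s) {
    int n = s.size(), total = 0;
    for (int i = 0; i < n; i++) {
        int w = (i == 0 ? 10 : (i == n - 1 ? 1 : 11));
        total += w * (s[i] - '0');
    }
    return total;
}
```

On the statement's example, $f(1011) = 10 + 0 + 11 + 1 = 22$, agreeing with the pairwise definition $10 + 01 + 11$.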
[ "brute force", "constructive algorithms", "greedy", "math", "strings" ]
1400
#include <bits/stdc++.h> using namespace std; int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); int t; cin >> t; while (t--) { int n, k; cin >> n >> k; string s; cin >> s; int ones = 0, p1_first = n, p1_last = -1; for (int p = 0; p < n; p++) { if (s[p] != '1') continue; ones += 1; if (p1_first == n) p1_first = p; p1_last = p; } int add = 0; // moving the last one to last position if (ones > 0 and (n - 1 - p1_last) <= k) { k -= (n - 1 - p1_last); add += 1; ones -= 1; } // moving the first one to first position if (ones > 0 and p1_first <= k) { k -= (p1_first); add += 10; ones -= 1; } cout << 11 * ones + add << "\n"; } return 0; }
1691
D
Max GEQ Sum
You are given an array $a$ of $n$ integers. You are asked to find out if the inequality $$\max(a_i, a_{i + 1}, \ldots, a_{j - 1}, a_{j}) \geq a_i + a_{i + 1} + \dots + a_{j - 1} + a_{j}$$ holds for all pairs of indices $(i, j)$, where $1 \leq i \leq j \leq n$.
Let's look at the problem from the perspective of each $a_i$. We want to check whether the sum of the subarrays, where $a_i$ is the maximum element, exceeds $a_i$ or not. Firstly, we must find out in which subarrays is $a_i$ the maximum. This involves finding the previous greater element index and the next greater element index of $i$, which can be done for all indices in $O(n)$ using stacks. Take these indices as $x_i$, $y_i$. After computing this for every index, we'll know that $a_i$ is max in subarrays with starting index $[x_i + 1, i]$ and ending index $[i, y_i - 1]$. Take $(j, k)$, which represents the sum of a subarray which starts at index $j$ and ends at index $k$, where $j \in [x_i + 1, i]$, $k \in [i, y_i - 1]$. If $(j, k) > a_i$, then $(j, i - 1) + (i, i) + (i + 1, k) > a_i$, giving us $(j, i - 1) + (i + 1, k) > 0$. Hence, at least one of the subarrays, $(j, i - 1)$ or $(i + 1, k)$ has a sum greater than $0$, which implies that one of subarrays $(j, i)$, $(i, k)$ has sum greater than $a_i$, so only checking subarrays which start or end at index $i$ suffices. Therefore, for an index $i$, we need to check subarrays $(x_i + 1, i), (x_i + 2, i), \dots, (i - 1, i)$, and subarrays $(i, i + 1), (i, i + 2), \dots, (i, y_i - 1)$. Since we just care if any one of them exceed $a_i$, finding the max of them is enough. This reduces to making a range query over the prefix sums and one over the suffix sums. The query on prefix sums would look like $\text{max}(i, y_i - 1) - \text{prefix}[i - 1] > a_i$ Where $\text{max}(i, y_i - 1)$ returns the max prefix sum in the given range. This query can be done using a segment tree in $O(\log n)$. If any of the queries is true, then we just have to output "NO", else output "YES". With this we get the time complexity of the solution as $O(n\log n)$.
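For small arrays the whole claim can be cross-checked with an $O(n^2)$ brute force (our reference checker, not the editorial's $O(n \log n)$ solution):

```cpp
#include <bits/stdc++.h>
using namespace std;

// For every start i, sweep j rightward keeping the running max and sum;
// the inequality fails as soon as the sum exceeds the max.
bool holdsForAll(const vector<long long>& a) {
    int n = a.size();
    for (int i = 0; i < n; i++) {
        long long mx = LLONG_MIN, sum = 0;
        for (int j = i; j < n; j++) {
            mx = max(mx, a[j]);
            sum += a[j];
            if (mx < sum) return false;
        }
    }
    return true;
}
```

A checker like this is handy for stress-testing the segment-tree solution on random small inputs.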
[ "binary search", "constructive algorithms", "data structures", "divide and conquer", "implementation", "two pointers" ]
1800
#include <bits/stdc++.h> typedef long long ll; using namespace std; const ll ninf = -1e15; vector<int> nextGreater(vector<ll>& arr, int n) { stack<int> s; vector<int> result(n, n); for (int i = 0; i < n; i++) { while (!s.empty() && arr[s.top()] < arr[i]) { result[s.top()] = i; s.pop(); } s.push(i); } return result; } vector<int> prevGreater(vector<ll>& arr, int n) { stack<int> s; vector<int> result(n, -1); for (int i = n - 1; i >= 0; i--) { while (!s.empty() && arr[s.top()] < arr[i]) { result[s.top()] = i; s.pop(); } s.push(i); } return result; } ll query(vector<ll> &tree, int node, int ns, int ne, int qs, int qe) { if (qe < ns || qs > ne) return ninf; if (qs <= ns && ne <= qe) return tree[node]; int mid = ns + (ne - ns) / 2; ll leftQuery = query(tree, 2 * node, ns, mid, qs, qe); ll rightQuery = query(tree, 2 * node + 1, mid + 1, ne, qs, qe); return max(leftQuery, rightQuery); } int main() { int t; cin >> t; while (t--) { int n, _n; cin >> n; vector<ll> arr(n, 0); for (auto& a : arr) cin >> a; // Round off n to next power of 2 _n = n; while (__builtin_popcount(_n) != 1) _n++; // Prefix sums vector<ll> prefixSum(n, 0), suffixSum(n, 0); prefixSum[0] = arr[0]; for (int i = 1; i < n; i++) { prefixSum[i] = prefixSum[i - 1] + arr[i]; } suffixSum[n - 1] = arr[n - 1]; for (int i = n - 2; i >= 0; i--) { suffixSum[i] = suffixSum[i + 1] + arr[i]; } // Two max-segtress, one on the prefix sums, one on the suffix sums vector<ll> prefixTree(2 * _n, ninf), suffixTree(2 * _n, ninf); for (int i = 0; i < n; i++) { prefixTree[_n + i] = prefixSum[i]; suffixTree[_n + i] = suffixSum[i]; } for (int i = _n - 1; i >= 1; i--) { prefixTree[i] = max(prefixTree[2 * i], prefixTree[2 * i + 1]); suffixTree[i] = max(suffixTree[2 * i], suffixTree[2 * i + 1]); } vector<int> ng = nextGreater(arr, n); vector<int> pg = prevGreater(arr, n); bool flag = true; for (int i = 0; i < n; i++) { ll rightMax = query(prefixTree, 1, 0, _n - 1, i + 1, ng[i] - 1) - prefixSum[i]; ll leftMax = query(suffixTree, 1, 0, 
_n - 1, pg[i] + 1, i - 1) - suffixSum[i]; if (max(leftMax, rightMax) > 0) { flag = false; break; } } if (flag) cout << "YES\n"; else cout << "NO\n"; } }
1691
E
Number of Groups
You are given $n$ colored segments on the number line. Each segment is either colored red or blue. The $i$-th segment can be represented by a tuple $(c_i, l_i, r_i)$. The segment contains all the points in the range $[l_i, r_i]$, inclusive, and its color denoted by $c_i$: - if $c_i = 0$, it is a red segment; - if $c_i = 1$, it is a blue segment. We say that two segments of \textbf{different} colors are connected, if they share at least one common point. Two segments belong to the same group, if they are either connected directly, or through a sequence of directly connected segments. Find the number of groups of segments.
We will use the starting and ending points of the segments to count the final answer. We maintain a Union-Find data structure (DSU) of size $n$ corresponding to the $n$ segments given as input. We store all the starting and ending points in a set (example: if $2$ segments are $(0,10)$ and $(11, 12)$, we store $\{0,10,11,12\}$ in the set irrespective of the colors of the segments). We now iterate through these points in ascending order, maintaining $2$ running sets corresponding to the $2$ colors; in them we store the segments whose starting point has been reached but whose ending point has not (i.e., the segments that have started but not yet ended). The algorithm works as follows. If the current point $x$ is a segment's starting point: we add that segment to the set corresponding to its color; we merge (DSU merge) this segment with all segments present in the set corresponding to the other color (since their ending points haven't been reached yet); and we erase all segments in the set of the other color except the one with the largest ending point value. If the point is a segment's ending point: we delete the segment from the set corresponding to its color. Why can we delete all segments of a color except the one with the largest ending point when we encounter a starting point of the other color? Because all segments of the same color in that set have already been connected together through the segments of the other color, so for each existing component it suffices to keep working with just the segment with the largest ending point of each color. darkkcyan's solution without using sets in python. TheScrasse's $n\log^3 n$ solution for E using Boruvka and Mergesort tree :D
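A minimal DSU of the kind this sweep relies on (a generic sketch, not taken from the contestant solutions mentioned above):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Union-find with path compression and union by size;
// count() returns the current number of groups (components).
struct DSU {
    vector<int> p, sz;
    DSU(int n) : p(n), sz(n, 1) { iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
    void unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return;
        if (sz[a] < sz[b]) swap(a, b);
        p[b] = a; sz[a] += sz[b];
    }
    int count() {
        int c = 0;
        for (int i = 0; i < (int)p.size(); i++) c += (find(i) == i);
        return c;
    }
};
```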
[ "data structures", "dfs and similar", "dsu", "graphs", "greedy", "sortings" ]
2300
#include <bits/stdc++.h> using namespace std; #define nl "\n" #define nf endl #define ll int #define pb push_back #define _ << ' ' << #define tm gfewnignefgo #define INF (int)1e9 #define mod 998244353 #define maxn 200010 ll i, i1, j, k, k1, t, n, m, res, flag[10], a, b; ll c[maxn], l[maxn], r[maxn], pr[maxn], sz[maxn]; ll tm, df[maxn]; vector<ll> adj[maxn]; vector<array<ll, 2>> nd; vector<array<ll, 3>> cm; priority_queue<array<ll, 2>> fn[2][maxn]; ll find(ll x) { if (x == pr[x]) return x; return pr[x] = find(pr[x]); } bool same(ll a, ll b) { return (find(a) == find(b)); } void onion(ll a, ll b) { a = find(a); b = find(b); if (a == b) return; if (sz[a] < sz[b]) swap(a, b); pr[b] = a; sz[a] += sz[b]; } void upd(ll c, ll p, ll x, ll id) { while (p < maxn) { fn[c][p].push({x, id}); p += (p & (-p)); } } void nts(ll c, ll p, ll r, ll st) { ll wn = -1; for (; p > 0; p -= (p & (-p))) { while (!fn[c][p].empty()) { auto [rr, id] = fn[c][p].top(); if (!df[id]) { fn[c][p].pop(); continue; } if (rr < r) break; wn = id; break; } if (wn != -1) { nd.pb({st, wn}); break; } } } int main() { ios::sync_with_stdio(0); cin.tie(0); #if !ONLINE_JUDGE && !EVAL ifstream cin("input.txt"); ofstream cout("output.txt"); #endif // today I'm overkilling everything // tl;dr mst with boruvka and merge sort tree, O(n*log^3(n)), let's hope it gets ac cin >> t; while (t--) { cin >> n; for (i = 1; i <= 2 * n; i++) { while (!fn[0][i].empty()) fn[0][i].pop(); while (!fn[1][i].empty()) fn[1][i].pop(); } for (i = 1; i <= n; i++) { cin >> c[i] >> l[i] >> r[i]; pr[i] = i; sz[i] = 1; } cm.clear(); cm.pb({-INF, 0, 0}); for (i = 1; i <= n; i++) { cm.pb({l[i], i, 0}); cm.pb({r[i], i, 1}); } sort(cm.begin(), cm.end()); for (i = 1; i <= 2 * n; i++) { if (i == 1 || cm[i][0] != cm[i - 1][0]) k = i; if (cm[i][2] == 0) l[cm[i][1]] = k; else r[cm[i][1]] = k; } for (i = 1; i <= n; i++) { upd(c[i], l[i], r[i], i); df[i] = true; } while (true) { nd.clear(); for (i = 1; i <= n; i++) adj[i].clear(); for (i = 1; i <= n; i++) 
adj[find(i)].pb(i); for (i = 1; i <= n; i++) { for (auto u : adj[i]) df[u] = false; for (auto u : adj[i]) nts(c[u] ^ 1, r[u], l[u], u); for (auto u : adj[i]) { df[u] = true; upd(c[u], l[u], r[u], u); } } if (nd.empty()) break; for (auto [a, b] : nd) onion(a, b); } res = 0; for (i = 1; i <= n; i++) { if (find(i) == i) res++; } cout << res << nl; } return 0; }
1691
F
K-Set Tree
You are given a tree $G$ with $n$ vertices and an integer $k$. The vertices of the tree are numbered from $1$ to $n$. For a vertex $r$ and a subset $S$ of vertices of $G$, such that $|S| = k$, we define $f(r, S)$ as the size of the smallest rooted subtree containing all vertices in $S$ when the tree is rooted at $r$. A set of vertices $T$ is called a rooted subtree, if all the vertices in $T$ are connected, and for each vertex in $T$, all its descendants belong to $T$. You need to calculate the sum of $f(r, S)$ over \textbf{all possible distinct combinations} of vertices $r$ and subsets $S$, where $|S| = k$. Formally, compute the following: $$\sum_{r \in V} \sum_{S \subseteq V, |S| = k} f(r, S),$$ where $V$ is the set of vertices in $G$. Output the answer modulo $10^9 + 7$.
Our task is to calculate $\sum_{R \in V} \sum_{S \subseteq V, |S| = k} f(R, S)$ We will calculate our answer using dynamic programming over the trees. In this technique, we will calculate some properties for each sub-tree and eventually get those properties for the entire tree. The first property that we want to calculate for each sub-tree with node $v$ as the sub-tree root is - $cnt(v)$ which is the number of subsets of size $k$ such that sub-tree of $v$ is the minimum-size sub-tree covering it entirely. This can be calculated using combinatorics - first we calculate the total number of subsets of size $k$ in this sub-tree and then from it we can subtract the number of subsets of size $k$ which don't have sub-tree of $v$ as the minimum size sub-tree. $cnt(v) = {size(v) \choose k} - \sum \limits_{u \in \text{children}} {size(u) \choose k}$ This first property that we calculated is very important for us: If we take the sum of $cnt(v)$ over every node, we will get the total number of subsets of size $k$. When the tree is rooted at $r$, $cnt(v)$ represents the number of subsets where sub-tree of $v$ is the smallest sub-tree containing a set $S$ with $k$ vertices. Conclusively, $f(R = r, S) = \sum \limits_{v = 1}^{n} cnt(v) \times size(v)$. The second property that we want to find for each sub-tree is the $size(v)$ - the size of the sub-tree of $v$. The third property that we want to find for each sub-tree is $cntsz(v) = cnt(v) \times size(v)$. Now, we have $f(R = r, S)$ (as explained above) i.e. the contribution to the final answer when the root of the entire tree is fixed at $r$. We can calculate the final answer by fixing other nodes as roots and then summing these value up. Notice what happens when we try to change the root from $r$ to one of it's children. The properties that we calculated for each sub-tree remain the same except for the old root and the new root. 
We can recalculate the properties for these two nodes using some clever arithmetic and get the new answer with a new root. This is known as the re-rooting technique. The method to calculate the new properties is as follows. Note: we use $OR$ to denote the old root and $NR$ to denote the new root. $size_{new}(OR) = size_{old}(OR) - size_{old}(NR)$. (Subtracting the size of this branch.) $size_{new}(NR) = n$. (This is the main root.) $cnt_{new}(OR) = cnt_{old}(OR) - {size_{old}(OR) \choose k} + {size_{new}(OR) \choose k} + {size_{old}(NR) \choose k}$. (Removing contribution of old size and putting contribution of new size. Removing contribution of the branch.) $cnt_{new}(NR) = cnt_{old}(NR) - {size_{old}(NR) \choose k} + {size_{new}(NR) \choose k} - {size_{new}(OR) \choose k}$. (Removing contribution of old size and putting contribution of new size. Putting contribution of the new branch.) $cntsz_{new}(OR) = cnt_{new}(OR) \times size_{new}(OR)$ (By definition.) $cntsz_{new}(NR) = cnt_{new}(NR) \times size_{new}(NR)$ (By definition.) $ans_{new} = ans_{old} - cntsz_{old}(OR) - cntsz_{old}(NR) + cntsz_{new}(OR) + cntsz_{new}(NR)$ (Subtracting old contribution and adding new contribution.) The final answer is given by: $finalans = \sum \limits_{v=1}^n ans_v$
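The binomial coefficients ${n \choose k}$ appearing in these formulas are precomputed modulo $10^9 + 7$ via factorials and inverse factorials (Fermat's little theorem); a generic sketch, with names of our choosing:

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1e9 + 7;

// Fast modular exponentiation, used to invert n! via Fermat.
long long power(long long x, long long y) {
    long long r = 1;
    for (x %= MOD; y; y >>= 1, x = x * x % MOD)
        if (y & 1) r = r * x % MOD;
    return r;
}

struct Binom {
    vector<long long> fac, inv;
    Binom(int n) : fac(n + 1), inv(n + 1) {
        fac[0] = 1;
        for (int i = 1; i <= n; i++) fac[i] = fac[i - 1] * i % MOD;
        inv[n] = power(fac[n], MOD - 2);
        for (int i = n; i > 0; i--) inv[i - 1] = inv[i] * i % MOD;
    }
    long long nCr(long long n, long long r) {
        if (r < 0 || n < r) return 0;  // out-of-range choices count as 0
        return fac[n] * inv[r] % MOD * inv[n - r] % MOD;
    }
};
```

Returning $0$ for $n < k$ matters here, since $size(u)$ is often smaller than $k$ in the $cnt$ formulas.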
[ "combinatorics", "dfs and similar", "dp", "math", "trees" ]
2,500
#include <iostream> #include <vector> using namespace std; using ll = long long; const ll MOD = 1e9 + 7; struct Comb { vector<ll> fac; vector<ll> invfac; ll n; Comb(ll n) { this->n = n; fac.resize(n + 1, 0); invfac.resize(n + 1, 0); fac[0] = 1; for (ll i = 1; i <= n; i++) fac[i] = (fac[i - 1] * i) % MOD; invfac[n] = power(fac[n], MOD - 2); for (ll i = n - 1; i >= 0; i--) invfac[i] = (invfac[i + 1] * (i + 1)) % MOD; } static ll power(ll x, ll y) { ll ret = 1; while (y) { if (y & 1) ret = (ret * x) % MOD; y >>= 1; x = (x * x) % MOD; } return ret; } ll nCr(ll n, ll r) { if (n < 0 or r < 0 or n < r) return 0; ll ans = (fac[n] * ((invfac[r] * invfac[n - r]) % MOD)) % MOD; return ans; } }; vector<vector<int> > adj; vector<int> sz; vector<ll> cnt; vector<ll> cntsz; Comb C(2e5 + 5); ll cur_ans = 0; ll ans = 0; void dfs1(int v, int p, int k) { sz[v] = 1; ll sub = 0; for (int u : adj[v]) { if (u != p) { dfs1(u, v, k); sz[v] += sz[u]; sub = (sub + C.nCr(sz[u], k)) % MOD; } } cnt[v] = (C.nCr(sz[v], k) - sub + MOD) % MOD; cntsz[v] = (cnt[v] * sz[v]) % MOD; cur_ans = (cur_ans + cntsz[v]) % MOD; } void dfs2(int v, int p, int k) { ans = (ans + cur_ans) % MOD; for (int u : adj[v]) { if (u != p) { // store int store_v_sz = sz[v]; ll store_v_cnt = cnt[v]; ll store_v_cntsz = cntsz[v]; int store_u_sz = sz[u]; ll store_u_cnt = cnt[u]; ll store_u_cntsz = cntsz[u]; ll store_cur_ans = cur_ans; // recalculate size[v], size[u] sz[v] -= sz[u]; sz[u] = sz.size(); // recalculate cnt[v] cnt[v] = (cnt[v] - C.nCr(store_v_sz, k) + MOD) % MOD; cnt[v] = (cnt[v] + C.nCr(sz[v], k)) % MOD; cnt[v] = (cnt[v] + C.nCr(store_u_sz, k)) % MOD; // recalculate cnt[u] cnt[u] = (cnt[u] - C.nCr(store_u_sz, k) + MOD) % MOD; cnt[u] = (cnt[u] + C.nCr(sz[u], k)) % MOD; cnt[u] = (cnt[u] - C.nCr(sz[v], k) + MOD) % MOD; // recalculate cntsz cntsz[v] = (cnt[v] * sz[v]) % MOD; cntsz[u] = (cnt[u] * sz[u]) % MOD; // recalculate cur_ans cur_ans = (cur_ans - store_v_cntsz - store_u_cntsz + MOD + MOD) % MOD; cur_ans = (cur_ans + 
cntsz[v] + cntsz[u]) % MOD; dfs2(u, v, k); // restore sz[v] = store_v_sz; cnt[v] = store_v_cnt; cntsz[v] = store_v_cntsz; sz[u] = store_u_sz; cnt[u] = store_u_cnt; cntsz[u] = store_u_cntsz; cur_ans = store_cur_ans; } } } int main() { ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int n, k; cin >> n >> k; adj.resize(n); sz.resize(n); cnt.resize(n); cntsz.resize(n); for (int i = 0; i < n - 1; ++i) { int u, v; cin >> u >> v; --u, --v; adj[u].push_back(v); adj[v].push_back(u); } dfs1(0, 0, k); dfs2(0, 0, k); cout << ans << endl; return 0; }
1692
A
Marathon
You are given four \textbf{distinct} integers $a$, $b$, $c$, $d$. Timur and three other people are running a marathon. The value $a$ is the distance that Timur has run and $b$, $c$, $d$ correspond to the distances the other three participants ran. Output the number of participants in front of Timur.
We can re-word the problem as counting how many of the numbers $b, c, d$ are larger than $a$. A possible way to do this is to keep a counter and, with an if statement per participant, increment it whenever one of $b, c, d$ is larger than $a$. The complexity is $\mathcal{O}(1)$.
[ "implementation" ]
800
t = int(input()) for test in range(t): a,b,c,d = map(int, input().split()) rs = (b > a) + (c > a) + (d > a) print(rs)
1692
B
All Distinct
Sho has an array $a$ consisting of $n$ integers. An operation consists of choosing two distinct indices $i$ and $j$ and removing $a_i$ and $a_j$ from the array. For example, for the array $[2, 3, 4, 2, 5]$, Sho can choose to remove indices $1$ and $3$. After this operation, the array becomes $[3, 2, 5]$. Note that after any operation, the length of the array is reduced by two. After he made some operations, Sho has an array that has only \textbf{distinct} elements. In addition, he made operations such that the resulting array is the \textbf{longest} possible. More formally, the array after Sho has made his operations respects these criteria: - No pairs such that ($i < j$) and $a_i = a_j$ exist. - The length of $a$ is maximized. Output the length of the final array.
Note that the size of the array doesn't change parity, since it always decreases by $2$. Let's count the number of distinct elements, and call it $x$. If $x$ has the same parity as $n$ (the length of the array), then we can make sure all of these $x$ distinct elements stay in the array by removing two elements at a time. Otherwise, $x$ doesn't have the same parity as $n$. Then $x-1$ has the same parity as $n$, and we can make sure $x-1$ distinct elements stay in the array by removing two elements at a time. So the answer is $x$ if $x$ and $n$ have the same parity, and $x-1$ otherwise. For example: $[15,16,16,15]$ has $x=2$, $n=4$. So $x$ and $n$ have the same parity, and we can get all distinct numbers $[15,16]$ by removing $i=3$, $j=4$. Time complexity: $\mathcal{O}(n \log n)$ or $\mathcal{O}(n)$, depending on the implementation.
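The resulting solution is only a couple of lines; a Python sketch:

```python
def longest_distinct(a):
    # x = number of distinct values; the answer keeps the parity of n.
    x = len(set(a))
    n = len(a)
    return x if x % 2 == n % 2 else x - 1

assert longest_distinct([15, 16, 16, 15]) == 2  # x = 2, n = 4, same parity
assert longest_distinct([2, 3, 4, 2, 5]) == 3   # x = 4, n = 5, answer x - 1
```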
[ "greedy", "sortings" ]
800
#include <bits/stdc++.h> typedef long long ll; using namespace std; void solve() { int n, x; cin >> n; set<int> a; for(int i = 0; i < n; i++) { cin >> x; a.insert(x); } if((n-a.size())%2 == 0) { cout << a.size() << endl; } else { cout << a.size()-1 << endl; } } int32_t main(){ int t = 1; cin >> t; while (t--) { solve(); } }
1692
C
Where's the Bishop?
Mihai has an $8 \times 8$ chessboard whose rows are numbered from $1$ to $8$ from top to bottom and whose columns are numbered from $1$ to $8$ from left to right. Mihai has placed exactly one bishop on the chessboard. \textbf{The bishop is not placed on the edges of the board.} (In other words, the row and column of the bishop are between $2$ and $7$, inclusive.) The bishop attacks in all directions diagonally, and there is no limit to the distance which the bishop can attack. Note that the cell on which the bishop is placed is also considered attacked. \begin{center} {\small An example of a bishop on a chessboard. The squares it attacks are marked in red.} \end{center} Mihai has marked all squares the bishop attacks, but forgot where the bishop was! Help Mihai find the position of the bishop.
There are many ways to solve the problem. One way is to look for the following pattern: $\texttt{X.X}\\\texttt{.X.}\\\texttt{X.X}$ You can also look at the positions of the two diagonals and intersect them, but that requires more implementation work. Time complexity: $\mathcal{O}(1)$ per test case.
[ "implementation" ]
800
#include <bits/stdc++.h> using namespace std; void solve() { char g[9][9]; for (int i = 1; i <= 8; i++) { for (int j = 1; j <= 8; j++) { cin >> g[i][j]; } } for (int i = 2; i <= 7; i++) { for (int j = 2; j <= 7; j++) { if (g[i][j] == '#' && g[i - 1][j - 1] == '#' && g[i - 1][j + 1] == '#' && g[i + 1][j - 1] == '#' && g[i + 1][j + 1] == '#') { cout << i << ' ' << j << '\n'; return; } } } } int main() { ios::sync_with_stdio(false); cin.tie(nullptr); int tt; cin >> tt; for (int i = 1; i <= tt; i++) {solve();} // solve(); }
1692
D
The Clock
Victor has a 24-hour clock that shows the time in the format "HH:MM" (00 $\le$ HH $\le$ 23, 00 $\le$ MM $\le$ 59). He looks at the clock every $x$ minutes, and the clock is currently showing time $s$. How many \textbf{different} palindromes will Victor see in total after looking at the clock every $x$ minutes, the first time being at time $s$? For example, if the clock starts out as 03:12 and Victor looks at the clock every $360$ minutes (i.e. every $6$ hours), then he will see the times 03:12, 09:12, 15:12, 21:12, 03:12, and the times will continue to repeat. Here the time 21:12 is the only palindrome he will ever see, so the answer is $1$. A palindrome is a string that reads the same backward as forward. For example, the times 12:21, 05:50, 11:11 are palindromes but 13:13, 22:10, 02:22 are not.
Note that Victor looks at the clock forever, but there are only at most $1440$ different times the clock can show (because there are $1440$ different minutes in a day). So we only have to check the first $1440$ times Victor sees, and count the palindromes (you can check a few more just to be safe, but they will repeat anyway). Now we just have to implement adding $x$ minutes to the clock. There are several ways to do this. One of the slower ways is writing functions that convert between a number of minutes and a clock time; alternatively, you can precompute all palindrome times in terms of minutes, and that way you never have to convert from a minute count back to clock time. The complexity is $\mathcal{O}(1)$ per test case, since you only have to check a constant number of times.
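A Python sketch of this brute-force simulation, keeping the time as a minute count modulo $1440$ and testing the "HH:MM" string directly (function name is illustrative):

```python
def count_palindromes(s, x):
    h, m = map(int, s.split(':'))
    t = h * 60 + m
    seen = set()
    res = 0
    # At most 1440 distinct times exist, so 1440 steps cover the whole cycle.
    for _ in range(1440):
        if t not in seen:
            seen.add(t)
            clock = f"{t // 60:02d}:{t % 60:02d}"
            if clock == clock[::-1]:  # palindrome check on the displayed string
                res += 1
        t = (t + x) % 1440
    return res

assert count_palindromes("03:12", 360) == 1  # only 21:12 is a palindrome
```

Stepping by $x = 1$ visits every minute of the day, which confirms there are exactly $16$ palindromic times in total (matching the precomputed table in the solution below).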
[ "brute force", "implementation" ]
1,100
#include <bits/stdc++.h> using namespace std; int a[5] = {600, 60, 0, 10, 1}; int good[16] = {0, 70, 140, 210, 280, 350, 601, 671, 741, 811, 881, 951, 1202, 1272, 1342, 1412}; void solve() { string s; cin >> s; int x; cin >> x; int tot = 0; for (int i = 0; i < 5; i++) { tot += (int)(s[i] - '0') * a[i]; } set<int> t; for (int i = 0; i < 2022; i++) { t.insert(tot); tot += x; tot %= 1440; } int res = 0; for (int i : t) { for (int j = 0; j < 16; j++) { if (good[j] == i) {res++;} } } cout << res << '\n'; } int main() { ios::sync_with_stdio(false); cin.tie(nullptr); int tt; cin >> tt; for (int i = 1; i <= tt; i++) {solve();} // solve(); }
1692
E
Binary Deque
Slavic has an array of length $n$ consisting only of zeroes and ones. In one operation, he removes either the first or the last element of the array. What is the minimum number of operations Slavic has to perform such that the total sum of the array is equal to $s$ after performing all the operations? In case the sum $s$ can't be obtained after any amount of operations, you should output -1.
Note that the remaining array is a subarray of the original array. There are many ways to approach the problem. Here is the one the model solution uses: compute prefix sums on the array, so we can find the value of $a_l + \dots + a_r$ quickly. Let's iterate through the left endpoint $l$ from $1$ to $n$. Then we can binary search for the largest value of $r$ such that $a_l + \dots + a_r \le s$ (and check that the sum is exactly $s$), since the prefix sums are non-decreasing. The time complexity is $\mathcal{O}(n \log n)$.
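A sketch of this approach in Python, using `bisect_right` on the prefix sums in place of a hand-written binary search (function name is illustrative):

```python
from bisect import bisect_right

def min_removals(a, s):
    n = len(a)
    pre = [0]                       # pre[i] = a[0] + ... + a[i-1]
    for v in a:
        pre.append(pre[-1] + v)
    best = -1                       # longest kept subarray with sum exactly s
    for l in range(n):
        # Largest r with a[l] + ... + a[r] <= s (prefix sums are non-decreasing).
        r = bisect_right(pre, pre[l] + s) - 2
        if r >= l and pre[r + 1] - pre[l] == s:
            best = max(best, r - l + 1)
    return n - best if best != -1 else -1

assert min_removals([0, 1, 1, 0, 1], 2) == 1  # drop the last element
assert min_removals([0, 0, 0], 1) == -1       # sum 1 is unreachable
```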
[ "binary search", "implementation", "two pointers" ]
1,200
#include "bits/stdc++.h" using namespace std; #define ll long long #define forn(i,n) for(int i=0;i<n;i++) #define all(v) v.begin(), v.end() #define rall(v) v.rbegin(),v.rend() #define pb push_back #define sz(a) (int)a.size() ll query(int l, int r, vector<ll>& p) { return p[r] - (l ? p[l - 1] : 0); } void solve() { int n, s; cin >> n >> s; vector<ll> a(n), p(n); forn(i, n) { cin >> a[i]; p[i] = a[i]; if(i) p[i] += p[i - 1]; } int ans = INT_MAX; for(int i = 0; i < n; ++i) { int l = i, r = n - 1, pos = -1; while(l <= r) { int mid = l + r >> 1; if(query(i, mid, p) <= s) { pos = mid; l = mid + 1; } else r = mid - 1; } if(pos == -1 || query(i, pos, p) != s) continue; ans = min(ans, n - (pos - i + 1)); } cout << (ans == INT_MAX ? -1 : ans) << "\n"; } int32_t main() { ios_base::sync_with_stdio(0);cin.tie(0);cout.tie(0); int t = 1; cin >> t; while(t--) { solve(); } }
1692
F
3SUM
Given an array $a$ of positive integers with length $n$, determine if there exist three \textbf{distinct} indices $i$, $j$, $k$ such that $a_i + a_j + a_k$ ends in the digit $3$.
Since we only care about the last digit of the sum $a_i + a_j + a_k$, we can replace every element of $a$ by its last digit. (For example, we can consider $[20, 22, 19, 84]$ to be the same as $[0, 2, 9, 4]$.) Now note that if a digit appears more than $3$ times in the array, we can ignore all copies beyond the first $3$, since our sum $a_i + a_j + a_k$ only involves three numbers. (For example, we can consider $[1,1,1,1,2]$ to be the same as $[1,1,1,2]$.) Using these observations, note that there are only $10$ digits, and each digit needs to occur at most $3$ times. So we can always reduce the array to one of length at most $30$. Since $30$ is very small, we can brute force all triples $a_i + a_j + a_k$, which runs quickly enough. Time complexity: $\mathcal{O}(n + \min(n, 30)^3)$.
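The whole solution fits in a few lines of Python (a sketch mirroring the C++ solution below):

```python
from itertools import combinations

def ends_in_three(a):
    # Keep only last digits, and at most 3 copies of each digit.
    cnt = [0] * 10
    v = []
    for x in a:
        d = x % 10
        if cnt[d] < 3:
            cnt[d] += 1
            v.append(d)
    # v has at most 30 elements, so brute force over all triples is cheap.
    return any((i + j + k) % 10 == 3 for i, j, k in combinations(v, 3))

assert ends_in_three([20, 22, 19, 84]) is True  # 20 + 19 + 84 = 123
assert ends_in_three([2, 4, 6]) is False        # 2 + 4 + 6 = 12
```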
[ "brute force", "math" ]
1,300
#include <bits/stdc++.h> using namespace std; void solve() { int n; cin >> n; int cnt[10] = {}; for (int i = 0; i < n; i++) { int x; cin >> x; cnt[x % 10]++; } vector<int> v; for (int i = 0; i < 10; i++) { for (int j = 0; j < min(cnt[i], 3); j++) { v.push_back(i); } } int m = v.size(); for (int i = 0; i < m; i++) { for (int j = i + 1; j < m; j++) { for (int k = j + 1; k < m; k++) { if ((v[i] + v[j] + v[k]) % 10 == 3) {cout << "YES\n"; return;} } } } cout << "NO\n"; } int main() { ios::sync_with_stdio(false); cin.tie(nullptr); int tt; cin >> tt; for (int i = 1; i <= tt; i++) {solve();} // solve(); }
1692
G
2^Sort
Given an array $a$ of length $n$ and an integer $k$, find the number of indices $1 \leq i \leq n - k$ such that the subarray $[a_i, \dots, a_{i+k}]$ with length $k+1$ (\textbf{not} with length $k$) has the following property: - If you multiply the first element by $2^0$, the second element by $2^1$, ..., and the ($k+1$)-st element by $2^k$, then this subarray is sorted in strictly increasing order. More formally, count the number of indices $1 \leq i \leq n - k$ such that $$2^0 \cdot a_i < 2^1 \cdot a_{i+1} < 2^2 \cdot a_{i+2} < \dots < 2^k \cdot a_{i+k}.$$
Note that $2^x \cdot a_i < 2^{x+1} \cdot a_{i+1}$ is the same as $a_i < 2a_{i+1}$, since we can divide by $2^x$. This means that we only need to check whether $a_i < 2a_{i+1}$ for each pair of adjacent elements. Let's consider $[20,22,19]$ as an example. Note that $20 < 2 \cdot 22$ and $22 < 2 \cdot 19$, so if you multiply the first element by $1$, the second by $2$, and the third by $2^2$, the array is sorted. So let's make a new array $b$ where $b_i = 1$ if $a_i < 2a_{i+1}$ and $b_i = 0$ otherwise. Then the whole chain of inequalities holds if and only if the corresponding subarray of length $k$ in $b$ consists entirely of ones. For example, if $a=[9, 5, 3, 2, 1]$, then $b=[1,1,1,0]$. Say $k=2$. Then $[9,5,3]$ works, since $[9, 5 \cdot 2, 3 \cdot 2^2]$ is sorted. We can write this as $9 < 2 \cdot 5$ and $5 < 2 \cdot 3$, which is equivalent to $b_1 = 1$ (since $a_1 < 2a_2$) and $b_2 = 1$ (since $a_2 < 2a_3$). So the problem is equivalent to counting the number of subarrays of length $k$ in $b$ whose elements are all equal to $1$. There are many ways to do this. For example, you can compute prefix sums and count the subarrays of length $k$ whose sum is $k$. The model solution uses a sliding window and updates the number of ones in the current subarray as we move from left to right. Time complexity is $\mathcal{O}(n)$.
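The reduction plus the sliding window can be sketched in Python as follows (illustrative names; this mirrors the model solution below, with $0$-indexed arrays):

```python
def count_good_starts(a, k):
    n = len(a)
    # b[i] = 1 iff a[i] < 2 * a[i + 1]; a start i works iff b[i .. i+k-1] are all 1.
    b = [int(a[i] < 2 * a[i + 1]) for i in range(n - 1)]
    ones = sum(b[:k])           # ones in the current window of length k
    res = int(ones == k)
    for i in range(k, n - 1):   # slide the window one step at a time
        ones += b[i] - b[i - k]
        res += ones == k
    return res

assert count_good_starts([9, 5, 3, 2, 1], 2) == 2  # starts [9,5,3] and [5,3,2]
```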
[ "data structures", "dp", "sortings", "two pointers" ]
1,400
#include <bits/stdc++.h> using namespace std; const int MAX = 200007; const int MOD = 1000000007; void solve() { int n, k; cin >> n >> k; int a[n]; for (int i = 0; i < n; i++) { cin >> a[i]; } int ok[n]; for (int i = 0; i < n - 1; i++) { ok[i] = (a[i] < 2 * a[i + 1]); } int tot = 0; for (int i = 0; i < k; i++) { tot += ok[i]; } int res = 0; if (tot == k) {res++;} for (int i = k; i < n - 1; i++) { tot += ok[i]; tot -= ok[i - k]; if (tot == k) {res++;} } cout << res << '\n'; } int main() { ios::sync_with_stdio(false); cin.tie(nullptr); int tt; cin >> tt; for (int i = 1; i <= tt; i++) {solve();} // solve(); }
1692
H
Gambling
Marian is at a casino. The game at the casino works like this. Before each round, the player selects a number between $1$ and $10^9$. After that, a dice with $10^9$ faces is rolled so that a random number between $1$ and $10^9$ appears. If the player guesses the number correctly their total money is doubled, else their total money is halved. Marian predicted the future and knows all the numbers $x_1, x_2, \dots, x_n$ that the dice will show in the next $n$ rounds. He will pick three integers $a$, $l$ and $r$ ($l \leq r$). He will play $r-l+1$ rounds (rounds between $l$ and $r$ inclusive). In each of these rounds, he will guess the same number $a$. At the start (before the round $l$) he has $1$ dollar. Marian asks you to determine the integers $a$, $l$ and $r$ ($1 \leq a \leq 10^9$, $1 \leq l \leq r \leq n$) such that he makes the most money at the end. Note that during halving and multiplying there is no rounding and there are no precision errors. So, for example during a game, Marian could have money equal to $\dfrac{1}{1024}$, $\dfrac{1}{128}$, $\dfrac{1}{2}$, $1$, $2$, $4$, etc. (any value of $2^t$, where $t$ is an integer of any sign).
There are several solutions. Here is one. If we fix the value of $a$, let's make a new array $b$ as follows: $b_i = 1$ if $\mathrm{round}_i = a$, and $b_i = -1$ otherwise. Then the total amount of money earned is just $2^{b_l + \dots + b_r}$, so we only need to maximize $b_l + \dots + b_r$. In other words, we need to find the maximum sum of a subarray, which is a standard problem that can be solved with a segment tree. Note that we need to iterate over all values of $a$, of which there are at most $n$ distinct possibilities. So we update elements of the segment tree $\mathcal{O}(n)$ times in total and query once for each $a$, which means the solution runs in $\mathcal{O}(n \log n)$ overall.
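For a single fixed guess $a$, the maximum-subarray step looks like this: a Python sketch using Kadane's algorithm for one value only (the full solution must handle all values of $a$ at once, which is where the segment tree comes in; the function name is illustrative):

```python
def best_exponent_for_guess(a, guess):
    # b_i = +1 where the roll equals the guess, -1 elsewhere;
    # Kadane's algorithm computes the maximum subarray sum of b.
    best = cur = 0
    for x in a:
        cur = max(0, cur + (1 if x == guess else -1))
        best = max(best, cur)
    # best = 0 can only happen if the guess never appears; the real solution
    # only tries guesses that occur in the array, so best >= 1 there.
    return best  # the final amount of money is 2 ** best

assert best_exponent_for_guess([1, 2, 1, 1, 3], 1) == 2  # play rounds 3..4
```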
[ "data structures", "dp", "greedy", "math" ]
1,700
#include "bits/stdc++.h" using namespace std; #define ll long long #define forn(i,n) for(int i=0;i<n;i++) #define all(v) v.begin(), v.end() #define rall(v) v.rbegin(),v.rend() #define pb push_back #define sz(a) (int)a.size() struct DynamicMaxSubarraySum { struct node { ll pref, suf, val, sum; }; int N; ll neutral; vector<node> t; DynamicMaxSubarraySum(int _N, ll assign_value) { neutral = assign_value; N = _N; t.resize(4 * N); forn(i, 4 * N) t[i] = {0, 0, 0, 0}; build(1, 0, N - 1); } void build(int i, int l, int r) { if(l == r) { t[i].pref = t[i].suf = t[i].val = t[i].sum = neutral; return; } int mid = (l + r) >> 1; build(2 * i, l, mid); build(2 * i + 1, mid + 1, r); t[i] = merge(t[2 * i], t[2 * i + 1]); } node merge(node a, node b) { node c; c.pref = max(a.pref, a.sum + b.pref); c.suf = max(b.suf, b.sum + a.suf); c.val = max({a.val, b.val, a.suf + b.pref}); c.sum = a.sum + b.sum; return c; } void modif(int i, int l, int r, int pos, ll val) { if(l > pos || r < pos) return; if(l == pos && r == pos) { t[i].pref = t[i].suf = t[i].val = t[i].sum = val; return; } int mid = (l + r) >> 1; modif(2 * i, l, mid, pos, val); modif(2 * i + 1, mid + 1, r, pos, val); t[i] = merge(t[2 * i], t[2 * i + 1]); } node query(int i, int l, int r, int tl, int tr) { if(l > tr || r < tl) return {0, 0, 0, 0}; if(l >= tl && r <= tr) return t[i]; int mid = (l + r) >> 1; return merge(query(2 * i, l, mid, tl, tr), query(2 * i + 1, mid + 1, r, tl, tr)); } void modif(int pos, int val) { modif(1, 0, N - 1, pos, val); } node query(int l, int r) { return query(1, 0, N - 1, l, r); } node query(int pos) { return query(1, 0, N - 1, pos, pos); } }; void solve() { int n; cin >> n; vector<int> a(n); forn(i, n) cin >> a[i]; map<int, vector<int>> vv; forn(i, n) { vv[a[i]].pb(i); } DynamicMaxSubarraySum st(n, -1); ll mx = 0, ans = -1; for(auto i: vv) { vector<int> v = i.second; for(auto x: v) st.modif(x, 1); if(mx < st.query(0, n - 1).val) { ans = i.first; mx = st.query(0, n - 1).val; } for(auto x: v) 
st.modif(x, -1); } int ansl = -1, ansr = -1; for(int i = 0; i < n; ++i) { if(a[i] == ans) a[i] = 1; else a[i] = -1; } ll sum = 0, lastl = 0; mx = 0; for(int i = 0; i < n; ++i) { sum += a[i]; if(sum > mx) { mx = sum; ansr = i; ansl = lastl; } if(sum <= 0) { lastl = i + 1; sum = 0; } } cout << ans << " " << ansl + 1 << " " << ansr + 1 << "\n"; } int32_t main() { ios_base::sync_with_stdio(0);cin.tie(0);cout.tie(0); int t = 1; cin >> t; while(t--) { solve(); } }
1693
A
Directional Increase
We have an array of length $n$. Initially, each element is equal to $0$ and there is a pointer located on the first element. We can do the following two kinds of operations any number of times (possibly zero) in any order: - If the pointer is not on the last element, increase the element the pointer is currently on by $1$. Then move it to the next element. - If the pointer is not on the first element, decrease the element the pointer is currently on by $1$. Then move it to the previous element. But there is one additional rule. \textbf{After we are done, the pointer has to be on the first element.} You are given an array $a$. Determine whether it's possible to obtain $a$ after some operations or not.
First of all, the sum of the elements has to be $0$, because the pointer has to end up on the first element. Denote by $b_i$ the number of times you do the first operation while the pointer is on the $i$-th element, and by $c_i$ the number of times you do the second operation while the pointer is on the $i$-th element. Then $a_i = b_i - c_i$, and $c_i = b_{i - 1}$ because every time the pointer moves right past the $(i-1)$-th element it must eventually move back left, as it has to end up on the first element. So $a_i = b_i - b_{i - 1} \to b_i = a_i + b_{i - 1}$ and $b_1 = a_1$, i.e. $b$ is the prefix-sum array of $a$. Now that we have calculated $b$, we need to determine whether it's possible to perform the operations like so. There are two conditions: $b_i \geq 0$ ($1 \leq i \leq n$); and if $b_i = 0$, then $b_j = 0$ for each $j > i$, because the pointer could never reach the $j$-th element. You can always construct $a$ if these two conditions hold. Proof by induction: perform the first operation followed by the second operation $b_1 - 1$ times in a row, then perform the first operation, construct the rest by induction, and finally perform the second operation. Time complexity: $\mathcal{O}(n)$
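The two conditions on the prefix sums translate directly into code; a minimal Python sketch:

```python
from itertools import accumulate

def possible(a):
    # b = prefix sums of a; we need b[i] >= 0 everywhere, total sum 0,
    # and no positive value after the first zero prefix.
    b = list(accumulate(a))
    if b[-1] != 0 or min(b) < 0:
        return False
    seen_zero = False
    for v in b:
        if v == 0:
            seen_zero = True
        elif seen_zero:
            return False     # the pointer could never reach this element
    return True

assert possible([1, 0, -1]) is True
assert possible([1, -1, 1, -1]) is False  # pointer can't get past the zero
```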
[ "greedy" ]
1,300
//In the name of God #include <bits/stdc++.h> using namespace std; const int maxn = 2e5 + 100; #define fast_io ios::sync_with_stdio(false);cin.tie(0);cout.tie(0); int n, t, a[maxn]; long long ps[maxn]; int main(){ fast_io; cin >> t; while(t--){ cin >> n; for(int i = 1; i <= n; i++){ cin >> a[i]; ps[i] = ps[i - 1] + a[i]; } if(ps[n] != 0){ cout << "No\n"; continue; } bool ok = 1; for(int i = 1; i <= n; i++){ if(ps[i] < 0) ok = 0; } bool visited_zero = 0; for(int i = 1; i <= n; i++){ if(ps[i] == 0) visited_zero = 1; else if(visited_zero) ok = 0; } if(ok) cout << "Yes\n"; else cout << "No\n"; } return 0; }
1693
B
Fake Plastic Trees
We are given a rooted tree consisting of $n$ vertices numbered from $1$ to $n$. The root of the tree is the vertex $1$ and the parent of the vertex $v$ is $p_v$. There is a number written on each vertex, initially all numbers are equal to $0$. Let's denote the number written on the vertex $v$ as $a_v$. For each $v$, we want $a_v$ to be between $l_v$ and $r_v$ $(l_v \leq a_v \leq r_v)$. In a single operation we do the following: - Choose some vertex $v$. Let $b_1, b_2, \ldots, b_k$ be vertices on the path from the vertex $1$ to vertex $v$ (meaning $b_1 = 1$, $b_k = v$ and $b_i = p_{b_{i + 1}}$). - Choose a non-decreasing array $c$ of length $k$ of nonnegative integers: $0 \leq c_1 \leq c_2 \leq \ldots \leq c_k$. - For each $i$ $(1 \leq i \leq k)$, increase $a_{b_i}$ by $c_i$. What's the minimum number of operations needed to achieve our goal?
Lemma 1. You won't perform the operation on a particular vertex more than once, because you could merge the operations. Lemma 2. If you perform an operation ending at some vertex $v$, you can do it with $c_k = r_v$. If there is another operation that helps this vertex, you can cut that operation into two pieces and merge one with the operation that ends at $v$; and you can simply increase $c_k$ if it's not equal to $r_v$. Define $dp_v$ as the minimum number of operations needed to satisfy the conditions on the vertices in the subtree of vertex $v$. We claim that there is a way to achieve our goal with the minimum number of operations in which, for each vertex $v$, exactly $dp_v$ operations are done in the subtree of vertex $v$. Imagine there is a vertex $v$ such that more than $dp_v$ operations are done on vertices in the subtree of vertex $v$. We can instead satisfy the subtree with $dp_v$ operations and make sure that we perform an operation on vertex $p_v$. Denote by $mx_v$ the maximum value that vertex $v$ can pass to its ancestors if exactly $dp_v$ operations are performed in the subtree of vertex $v$. We need to perform an operation ending at vertex $v$ if and only if $\sum\limits_{u \mid p_u = v}{mx_u} < l_v$. If we need to perform the operation on vertex $v$ then $mx_v = r_v$, otherwise $mx_v = \min(r_v, \sum\limits_{u \mid p_u = v}{mx_u})$. Time complexity: $\mathcal{O}(n)$
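The greedy computation of $mx_v$ and the operation count can be sketched in Python. This sketch is $0$-indexed and assumes, as in the usual input format, that the parent index of each vertex is smaller than the vertex's own index, so a reverse loop processes children before parents (names are illustrative):

```python
def min_tree_ops(parent, l, r):
    # parent[v] < v is assumed for v >= 1; vertex 0 is the root, parent[0] = -1.
    n = len(l)
    ops = 0
    mx = [0] * n   # mx[v]: the most v can pass to its parent using the minimum ops
    s = [0] * n    # s[v]: sum of mx[u] over children u of v
    for v in range(n - 1, -1, -1):  # children are processed before their parents
        if s[v] < l[v]:
            ops += 1       # an operation must end at v, and it can set a_v to r[v]
            mx[v] = r[v]
        else:
            mx[v] = min(r[v], s[v])
        if v:
            s[parent[v]] += mx[v]
    return ops

# Root 0 needs at least 2; each leaf needs exactly 1, so two ops suffice.
assert min_tree_ops([-1, 0, 0], [2, 1, 1], [5, 1, 1]) == 2
```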
[ "dfs and similar", "dp", "greedy", "trees" ]
1,700
# include <bits/stdc++.h> using namespace std; typedef long long ll; const int N = 2e5 + 10; int t, n, l[N], r[N], ans; vector <int> adj[N]; ll DFS(int v){ ll sum = 0; for (int u : adj[v]){ sum += DFS(u); } if (sum < ll(l[v])){ ++ ans; return r[v]; } return min(ll(r[v]), sum); } int main(){ ios::sync_with_stdio(0); cin.tie(0); cout.tie(0); cin >> t; while (t --){ cin >> n; for (int i = 2; i <= n; ++ i){ int par; cin >> par; adj[par].push_back(i); } for (int i = 1; i <= n; ++ i){ cin >> l[i] >> r[i]; } ans = 0; DFS(1); cout << ans << "\n"; for (int i = 1; i <= n; ++ i){ adj[i].clear(); } } return 0; }
1693
C
Keshi in Search of AmShZ
AmShZ has traveled to Italy from Iran for the Thom Yorke concert. There are $n$ cities in Italy indexed from $1$ to $n$ and $m$ \textbf{directed} roads indexed from $1$ to $m$. Initially, Keshi is located in the city $1$ and wants to go to AmShZ's house in the city $n$. Since Keshi doesn't know the map of Italy, AmShZ helps him to see each other as soon as possible. In the beginning of each day, AmShZ can send one of the following two messages to Keshi: - AmShZ sends the index of one road to Keshi as a \textbf{blocked} road. Then Keshi will understand that he should never use that road and he will remain in his current city for the day. - AmShZ tells Keshi to move. Then, Keshi will randomly choose one of the cities reachable from his current city and move there. (city $B$ is reachable from city $A$ if there's an out-going road from city $A$ to city $B$ which hasn't become \textbf{blocked} yet). If there are no such cities, Keshi will remain in his current city.Note that AmShZ always knows Keshi's current location. AmShZ and Keshi want to find the smallest possible integer $d$ for which they can make sure that they will see each other after at most $d$ days. Help them find $d$.
Define $dis_v$ as the minimum number of days needed to reach city $n$ from city $v$; $dis_n = 0$. We have to assume that Keshi will always choose the worst reachable city, that is, the city with the maximum $dis$. For each node $v$ we effectively have to choose the neighbor $nxt_v$ that Keshi should move to, and block all out-going edges leading to cities with a distance greater than $dis_{nxt_v}$. We will use Dijkstra's algorithm. Note that Dijkstra's algorithm marks the nodes in increasing order of $dis$. At each step, get the unmarked node $v$ with the minimum $dis$. For each node $u$ such that there is an edge from $u$ to $v$, calculate what $dis_u$ would be if $nxt_u$ were $v$: it is $dis_v$ plus the number of out-going edges of $u$ whose endpoints have not been marked before $v$. That count is exactly the number of edges that have to be blocked (the neighbors with greater $dis$ are the ones not marked yet) plus one day for the move itself (accounted for by the edge $u \to v$). Time complexity: $\mathcal{O}((n + m)\log n)$
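Here is a Python sketch of this modified Dijkstra on the reversed graph (illustrative names, not the C++ solution below):

```python
import heapq

def min_days(n, edges):
    # edges are directed pairs (v, u), 1-indexed; run Dijkstra from city n
    # on the reversed graph.
    radj = [[] for _ in range(n + 1)]
    deg = [0] * (n + 1)             # out-degree not yet accounted for
    for v, u in edges:
        radj[u].append(v)
        deg[v] += 1
    INF = float('inf')
    dist = [INF] * (n + 1)
    dist[n] = 0
    done = [False] * (n + 1)
    pq = [(0, n)]
    while pq:
        d, v = heapq.heappop(pq)
        if done[v]:
            continue
        done[v] = True
        for u in radj[v]:
            # deg[u] still counts the edge u -> v, so d + deg[u] equals
            # (days spent blocking worse edges) + (one day for the move).
            if d + deg[u] < dist[u]:
                dist[u] = d + deg[u]
                heapq.heappush(pq, (dist[u], u))
            deg[u] -= 1
    return dist[1]

# Block edge 1 -> 2 on day 1, then tell Keshi to move on day 2.
assert min_days(3, [(1, 2), (1, 3), (2, 3)]) == 2
```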
[ "graphs", "greedy", "shortest paths" ]
2,300
# include <bits/stdc++.h> using namespace std; const int N = 2e5 + 10; int n, m, d[N], dist[N]; priority_queue <pair <int, int> > pq; vector <int> adj[N]; bool mark[N]; int main(){ ios::sync_with_stdio(0); cin.tie(0); cout.tie(0); cin >> n >> m; fill(dist, dist + n + 1, m); while (m --){ int v, u; cin >> v >> u; adj[u].push_back(v); ++ d[v]; } dist[n] = 0; pq.push({0, n}); while (!pq.empty()){ int v = pq.top().second; pq.pop(); if (mark[v]) continue; mark[v] = true; for (int u : adj[v]){ if (dist[v] + d[u] < dist[u]){ dist[u] = dist[v] + d[u]; pq.push({- dist[u], u}); } -- d[u]; } } cout << dist[1] << '\n'; return 0; }
1693
D
Decinc Dividing
Let's call an array $a$ of $m$ integers $a_1, a_2, \ldots, a_m$ \textbf{Decinc} if $a$ can be made increasing by removing a decreasing subsequence (possibly empty) from it. - For example, if $a = [3, 2, 4, 1, 5]$, we can remove the decreasing subsequence $[a_1, a_4]$ from $a$ and obtain $a = [2, 4, 5]$, which is increasing. You are given a permutation $p$ of numbers from $1$ to $n$. Find the number of pairs of integers $(l, r)$ with $1 \le l \le r \le n$ such that $p[l \ldots r]$ (the subarray of $p$ from $l$ to $r$) is a \textbf{Decinc} array.
Let's solve the problem for a single subarray. Assume the $i$-th element belongs to the increasing subsequence. Define $dp_{l, i}$ for the interval $[l, i]$ as the maximum value the last element of the decreasing subsequence can have; it's $+\infty$ in case the decreasing subsequence is empty, and $-\infty$ if such a split is not possible. Assume the $i$-th element belongs to the decreasing subsequence. Define $pd_{l, i}$ for the interval $[l, i]$ as the minimum value the last element of the increasing subsequence can have; it's $-\infty$ in case the increasing subsequence is empty, and $+\infty$ if such a split is not possible. The interval $[l, r]$ is not Decinc if and only if $dp_{l, r} = -\infty$ and $pd_{l, r} = +\infty$. Iterate over $l$ from $n$ down to $1$ and keep the $dp$ and $pd$ values updated (shown in the implementation). We claim that each $dp$ or $pd$ value changes at most three times throughout the algorithm. For some index $i$, let $j$ be the largest index smaller than $i$ such that $a_j > a_{j + 1}$. Then $dp_i$ can only be one of these four values: $-\infty, +\infty, a_j, a_{j + 1}$. That's because the last element of the decreasing subsequence can't be before the $j$-th element, and if it's some $k$ in the interval $[j + 2, i - 1]$, you can simply move it to the increasing subsequence, since both the ($k - 1$)-th and ($k + 1$)-th elements belong to the increasing subsequence. The same applies to $pd_i$. So the $upd$ function is called $\mathcal{O}(n)$ times in total. Time complexity: $\mathcal{O}(n)$ It can be proven that a permutation is Decinc if and only if it's $3412$-avoiding and $2143$-avoiding.
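For intuition about the Decinc property itself, here is a brute-force checker in Python (exponential, only for tiny arrays): it tries every subsequence as the removed decreasing part and tests whether the remainder is increasing:

```python
from itertools import combinations

def is_decinc(a):
    n = len(a)
    idx = range(n)
    for k in range(n + 1):                      # size of the removed subsequence
        for removed in combinations(idx, k):    # indices come out in sorted order
            rem = set(removed)
            dec = [a[i] for i in removed]
            inc = [a[i] for i in idx if i not in rem]
            if all(x > y for x, y in zip(dec, dec[1:])) and \
               all(x < y for x, y in zip(inc, inc[1:])):
                return True
    return False

assert is_decinc([3, 2, 4, 1, 5]) is True  # remove [3, 1] as in the statement
assert is_decinc([3, 4, 1, 2]) is False    # contains the pattern 3412
```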
[ "brute force", "data structures", "divide and conquer", "dp", "greedy" ]
2,800
# include <bits/stdc++.h> using namespace std; const int N = 2e5 + 10; int n, a[N], min_r[N], fen[N]; long long ans; stack <int> sk; vector <int> vec[2][N]; void mofen(int pos, int val){ for (pos += 5; pos < N; pos += pos & - pos) fen[pos] = min(fen[pos], val); } int gefen(int pos){ int res = n + 1; for (pos += 5; 0 < pos; pos -= pos & - pos) res = min(res, fen[pos]); return res; } void find_3412(){ fill(fen, fen + n + 10, n + 1); while (!sk.empty()) sk.pop(); for (int i = 1; i <= n; ++ i){ while (!sk.empty() && a[i] < a[sk.top()]) sk.pop(); if (!sk.empty()) vec[0][sk.top()].push_back(i); sk.push(i); } while (!sk.empty()) sk.pop(); for (int i = n; 1 <= i; -- i){ while (!sk.empty() && a[sk.top()] < a[i]) sk.pop(); if (!sk.empty()) vec[1][sk.top()].push_back(i); sk.push(i); } for (int i = n; 1 <= i; -- i){ for (int ind : vec[0][i]) mofen(a[ind], ind); for (int ind : vec[1][i]) min_r[ind] = min(min_r[ind], gefen(a[ind] - 1)); vec[0][i].clear(), vec[1][i].clear(); } } int main(){ ios::sync_with_stdio(0); cin.tie(0); cout.tie(0); cin >> n; for (int i = 1; i <= n; ++ i) cin >> a[i]; fill(min_r, min_r + n + 2, n + 1); find_3412(); for (int i = 1; i <= n; ++ i) a[i] = n + 1 - a[i]; find_3412(); for (int i = n; 1 <= i; -- i){ min_r[i] = min(min_r[i], min_r[i + 1]); ans += min_r[i] - i; } cout << ans << '\n'; return 0; }
1693
E
Outermost Maximums
Yeri has an array of $n + 2$ non-negative integers : $a_0, a_1, ..., a_n, a_{n + 1}$. We know that $a_0 = a_{n + 1} = 0$. She wants to make all the elements of $a$ equal to zero in the minimum number of operations. In one operation she can do one of the following: - Choose the leftmost maximum element and change it to the maximum of the elements on its left. - Choose the rightmost maximum element and change it to the maximum of the elements on its right. Help her find the minimum number of operations needed to make all elements of $a$ equal to zero.
Let's calculate for each element of the array, the minimum number of times it needs to change throughout the process. Let's take a look at the first time the $i$-th element is changing. We know that $a_i$ is the maximum number and the elements smaller than $a_i$ haven't changed yet. Denote the maximum element on its left as $l$ and the maximum element on its right as $r$. So $l$ is equal to the largest number among $a_1, a_2, \ldots, a_{i - 1}$ that is smaller than $a_i$. Likewise for $r$. We can change $a_i$ to either $l$ or $r$. It's better to change it to the smaller one. (We'll get into the details of why this is correct later.) Let $c_i$ be the minimum number of times the $i$-th element needs to change. The following greedy algorithm works in $\sum c_i$ operations. Let $x$ be the number the leftmost maximum becomes if we do the first operation and $y$ be the number the rightmost maximum becomes if we do the second operation. Do the first operation if $x \leq y$ and do the second operation otherwise. If $x \leq y$ then for the leftmost maximum, the maximum element on its left is smaller than the maximum element on its right. That means we are making the right choice for this element. Let's assume the array is a permutation of numbers from $1$ through $n$ for simplicity. For some element $a_i$ make string $S$ of length $n$ such that $S_{a_i}$ = "O". for any $j < i$, $S_{a_j}$ = "L". for any $j > i$, $S_{a_j}$ = "R". Imagine Yeri is initially standing on the $a_i$-th character of S (which is "O") and she is facing the beginning of the string. In each step she either jumps to the nearest "L" or jumps to the nearest "R". And her goal is to jump out of the string in minimum number of jumps. We know it's always better to jump to the further one, but we won't need this fact. We will use a segment tree. For each node maintain 4 values. 
What is the minimum possible number of jumps made in this interval in case: - we enter it looking for an "L" and leave it looking for an "L" - we enter it looking for an "L" and leave it looking for an "R" - we enter it looking for an "R" and leave it looking for an "L" - we enter it looking for an "R" and leave it looking for an "R" It's easy to update this: just fix which character you are looking for when moving from one of the node's children to the other. And you can find these values directly for intervals of length $1$. Now iterate over $i$ from $1$ to $n$. At each step at most $2$ characters of the string change, hence you can keep your segment tree updated. But there are still some details we need to sort out. The array is not necessarily a permutation. To fix this, first get rid of the "O". Then for each number see if it appears on the left side and see if it appears on the right side. Now there are $4$ different states for each number, but the same segment tree can handle this too. We just need to find the $4$ values for each of the $4$ states of a single number. Time complexity: $\mathcal{O}(n\log n)$
[ "data structures", "greedy" ]
3,400
//In the name of God #include <bits/stdc++.h> using namespace std; typedef long long ll; typedef pair<int, int> pii; typedef pair<ll, ll> pll; const ll maxn = 2e5 + 100; const ll mod = 1e9 + 7; const int inf = 1e9; const ll INF = 1e18; #define fast_io ios::sync_with_stdio(false);cin.tie(0);cout.tie(0); #define file_io freopen("input.txt", "r+", stdin);freopen("output.txt", "w+", stdout); #define pb push_back #define Mp make_pair #define F first #define S second #define Sz(x) ll((x).size()) #define all(x) (x).begin(), (x).end() #define lc (id << 1) #define rc (lc | 1) ll pw(ll a, ll b){ ll c = 1; while(b){ if(b & 1) c = c * a % mod; a = a * a % mod; b >>= 1; } return c; } struct cost{ int c[2][2]; cost(){ memset(c, 0, sizeof c); } }; cost cst[2][2]; int n, a[maxn], dp[maxn], cnl[maxn], cnr[maxn]; ll s, ans, ans2; cost seg[maxn << 2]; inline cost mrg(cost l, cost r){ cost mid; for(int i = 0; i < 2; i++){ for(int j = 0; j < 2; j++){ mid.c[i][j] = min(r.c[i][0] + l.c[0][j], r.c[i][1] + l.c[1][j]); } } return mid; } void bld(int id, int s, int e){ if(e - s == 1){ seg[id] = cst[int(cnl[s] > 0)][int(cnr[s] > 0)]; return; } int mid = (s + e) >> 1; bld(lc, s, mid); bld(rc, mid, e); seg[id] = mrg(seg[lc], seg[rc]); return; } void upd(int id, int s, int e, int l, int r){ if(r <= s || e <= l) return; if(l <= s && e <= r){ seg[id] = cst[int(cnl[s] > 0)][int(cnr[s] > 0)]; return; } int mid = (s + e) >> 1; upd(lc, s, mid, l, r); upd(rc, mid, e, l, r); seg[id] = mrg(seg[lc], seg[rc]); return; } cost get(int id, int s, int e, int l, int r){ if(r <= s || e <= l) return cost(); if(l <= s && e <= r) return seg[id]; int mid = (s + e) >> 1; return mrg(get(lc, s, mid, l, r), get(rc, mid, e, l, r)); } int main(){ fast_io; srand(time(NULL)); cst[0][0].c[0][0] = cst[0][0].c[1][1] = 0; cst[0][0].c[0][1] = cst[0][0].c[1][0] = inf; cst[0][1].c[1][0] = cst[0][1].c[1][1] = 1; cst[0][1].c[0][0] = 0; cst[0][1].c[0][1] = inf; cst[1][0].c[0][1] = cst[1][0].c[0][0] = 1; cst[1][0].c[1][1] = 0; 
cst[1][0].c[1][0] = inf; cst[1][1].c[0][0] = cst[1][1].c[1][1] = 1; cst[1][1].c[0][1] = cst[1][1].c[1][0] = 1; cin >> n; cnr[0] = cnl[0] = 1; for(int i = 1; i <= n; i++){ cin >> a[i]; cnr[a[i]]++; s += a[i]; } bld(1, 0, n + 1); for(int i = 1; i <= n; i++){ cost cs = get(1, 0, n + 1, 0, a[i]); cnr[a[i]]--; cnl[a[i]]++; upd(1, 0, n + 1, a[i], a[i] + 1); dp[i] = min({cs.c[0][0], cs.c[0][1], cs.c[1][0], cs.c[1][1]}); ans += dp[i]; } cout << ans << "\n"; return 0; }
1693
F
I Might Be Wrong
You are given a binary string $S$ of length $n$ indexed from $1$ to $n$. You can perform the following operation any number of times (possibly zero): - Choose two integers $l$ and $r$ ($1 \le l \le r \le n$). Let $cnt_0$ be the number of times 0 occurs in $S[l \ldots r]$ and $cnt_1$ be the number of times 1 occurs in $S[l \ldots r]$. You can pay $|cnt_0 - cnt_1| + 1$ coins and sort the $S[l \ldots r]$. (by $S[l \ldots r]$ we mean the substring of $S$ starting at position $l$ and ending at position $r$) For example if $S = $ 11001, we can perform the operation on $S[2 \ldots 4]$, paying $|2 - 1| + 1 = 2$ coins, and obtain $S = $ 10011 as a new string. Find the minimum total number of coins required to sort $S$ in increasing order.
- It's easy to see that we only ever need to sort segments with balance $0$. Proof: Imagine we have sorted an interval $[l, r]$ and it has $d$ more zeros than ones, so this operation costs $d + 1$ coins. $S_l$ has to be $1$, otherwise we could just sort $[l + 1, r]$ with $d$ coins. Now that we know $S_l$ is $1$, there exists some $k \leq r$ such that the interval $[l, k]$ has an equal number of zeros and ones, because the interval $[l, l]$ has more ones but the interval $[l, r]$ has more zeros. Sort $[l, k]$ with cost $1$ and then sort $[l + 1, r]$ with cost $d$. By induction, we can sort $[l + 1, r]$ so that we only sort segments with balance $0$. - If everything is already sorted, we are done. Suppose from now on that the number of ones is greater than the number of zeros. - Replace $0$ by $-1$ and $1$ by $1$, and draw prefix sums. We have $n + 1$ points $(i, a_i)$ (initially $a_i = pref_i$), where $a_0 = 0$, $a_n > 0$, and the operation is: choosing $i$, $j$ such that $a_i = a_j$ and making the points between them first decrease, then increase. Denote by the AmShZ strategy the following procedure: while $S$ isn't sorted, let $i$ be the smallest index such that the numbers from $a_i$ to $a_n$ increase. If $a_i \leq 0$, we sort in $1$ operation. Otherwise, let $j$ be the smallest index such that $a_j = a_i$, and apply the operation to $[j, i]$. Let's denote by $f(S)$ the smallest number of operations required for the current configuration. We will prove that the AmShZ strategy ends in $f(S)$ iterations by induction over $n$ and over the number of inversions in $S$. (Clearly any non-identical operation decreases the number of inversions, so that's useful.) Denote by $S_k$ the string $S$ in which the suffix of length $k$ is sorted. Lemma 1: If $k_1 > k_2$, then the AmShZ strategy for $S_{k_1}$ uses at most as many operations as for $S_{k_2}$. Proof: trivial. Suppose now that we have proved our statement for all strings with fewer inversions than $S$.
Now, let $i$ be the smallest index such that the numbers from $a_i$ to $a_n$ increase, and let $j$ be the smallest index such that $a_j = a_i$. Suppose that there is a sorting sequence that uses fewer operations than the AmShZ strategy. Let's denote its first operation by $[l, r]$. Consider several cases: - $r \geq i$. This means that we basically end up in string $S_{n-l}$. But we could have ended up in string $S_{n-j}$, which uses at most the same number of operations. - $r \leq j$. Clearly, after operation $[l, r]$, $a_i$ can only decrease, so $j$ will still be the smallest index with $a_j = a_i$. By the induction hypothesis, the AmShZ strategy is optimal for the resulting string, so the next operation in the resulting string will be operation $[j, i]$. But then we could have done $[j, i]$ first and then $[l, r]$. - $j \leq l$. As $r < i$, the smallest index $i$ such that $a_i, \ldots, a_n$ are increasing hasn't changed. The numbers from $a_0$ to $a_j$ also haven't changed, so $j$ remains the same. So the next move according to the AmShZ strategy will be operation $[j, i]$, so operation $[l, r]$ was useless. - Last case: $l < j < r < i$. Note that all numbers up to $a_r$ will be strictly smaller than $a_i$ now. So we will have some new smallest $j_1 > r$ such that $a_{j_1} = a_i$. Then according to the AmShZ strategy, in the next move we will do operation $[j_1, i]$. But then we could instead do $[j_1, i]$ first, and then $[l, r]$, which is not worse, as the AmShZ strategy sorts $S_{n-j_1}$ at least as fast as $S_{n - i}$.
[ "binary search", "greedy", "two pointers" ]
3,400
#include <iostream> #include <cstdio> #include <cstdlib> #include <algorithm> #include <cmath> #include <vector> #include <set> #include <map> #include <unordered_set> #include <unordered_map> #include <queue> #include <ctime> #include <cassert> #include <complex> #include <string> #include <cstring> #include <chrono> #include <random> #include <bitset> #include <array> using namespace std; #ifdef LOCAL #define eprintf(...) {fprintf(stderr, __VA_ARGS__);fflush(stderr);} #else #define eprintf(...) 42 #endif using ll = long long; using ld = long double; using uint = unsigned int; using ull = unsigned long long; template<typename T> using pair2 = pair<T, T>; using pii = pair<int, int>; using pli = pair<ll, int>; using pll = pair<ll, ll>; mt19937_64 rng(chrono::steady_clock::now().time_since_epoch().count()); ll myRand(ll B) { return (ull)rng() % B; } #define pb push_back #define mp make_pair #define all(x) (x).begin(),(x).end() #define fi first #define se second clock_t startTime; double getCurrentTime() { return (double)(clock() - startTime) / CLOCKS_PER_SEC; } const int N = 200200; int n; char s[N]; int bal[N]; int pos[N]; void solve() { scanf("%d %s", &n, s); int b = 0; for (int i = 0; i < n; i++) { if (s[i] == '0') b++; else b--; } if (b < 0) { for (int i = 0; i < n; i++) s[i] ^= 1; reverse(s, s + n); b *= -1; } bal[0] = (n + b) / 2; for (int i = 0; i < n; i++) bal[i + 1] = bal[i] + (s[i] == '0' ? 
-1 : 1); assert(bal[n] == (n - b) / 2); for (int i = 0; i <= n; i++) pos[i] = -1; for (int i = 0; i <= n; i++) pos[bal[i]] = i; int ans = 0; int l = 0; while(l < (n + b) / 2) { if (bal[l + 1] < bal[l]) { l++; continue; } if (bal[l] <= (n - b) / 2) { ans++; break; } int r = pos[bal[l]]; assert(r > l); ans++; for (int i = l + 1; i < r; i++) { if (2 * i > l + r) { bal[i] = bal[l] - (r - i); } else { bal[i] = bal[l] - (i - l); } pos[bal[i]] = max(pos[bal[i]], i); } } printf("%d\n", ans); } int main() { startTime = clock(); // freopen("input.txt", "r", stdin); // freopen("output.txt", "w", stdout); int t; scanf("%d", &t); while(t--) solve(); return 0; }
1694
A
Creep
Define the score of some binary string $T$ as the absolute difference between the number of zeroes and ones in it. (for example, $T=$ 010001 contains $4$ zeroes and $2$ ones, so the score of $T$ is $|4-2| = 2$). Define the creepiness of some binary string $S$ as the maximum score among all of its prefixes (for example, the creepiness of $S=$ 01001 is equal to $2$ because the score of the prefix $S[1 \ldots 4]$ is $2$ and the rest of the prefixes have a score of $2$ or less). Given two integers $a$ and $b$, construct a binary string consisting of $a$ zeroes and $b$ ones with the minimum possible creepiness.
Define the minimum possible creepiness of the string as $ans$. We want to show that $ans$ is equal to $max(1, |a - b|)$. The score of $S[1 \ldots 1]$ is equal to $1$ and the score of $S[1 \ldots n]$ is equal to $|a - b|$, so $max(1, |a - b|) \le ans$. The way to make a string with creepiness equal to $max(1, |a - b|)$: while both $a$ and $b$ are positive, add 01 to the end of the string (using up one zero and one one). After that, add the remaining characters to the end of the string. Now we know $ans \le max(1, |a - b|)$. So $ans = max(1, |a - b|)$. Complexity: $\mathcal{O}(a + b)$
[ "greedy", "implementation" ]
800
# include <bits/stdc++.h> using namespace std; int t, A, B; int main(){ ios::sync_with_stdio(0); cin.tie(0); cout.tie(0); cin >> t; while (t --){ cin >> A >> B; for (int i = 0; i < min(A, B); ++ i) cout << "01"; for (int i = 0; i < abs(A - B); ++ i) cout << (A < B ? 1 : 0); cout << '\n'; } return 0; }
1694
B
Paranoid String
Let's call a binary string $T$ of length $m$ indexed from $1$ to $m$ \textbf{paranoid} if we can obtain a string of length $1$ by performing the following two kinds of operations $m-1$ times in any order: - Select any substring of $T$ that is equal to 01, and then replace it with 1. - Select any substring of $T$ that is equal to 10, and then replace it with 0. For example, if $T = $ 001, we can select the substring $[T_2T_3]$ and perform the first operation. So we obtain $T = $ 01. You are given a binary string $S$ of length $n$ indexed from $1$ to $n$. Find the number of pairs of integers $(l, r)$ ($1 \le l \le r \le n$) such that $S[l \ldots r]$ (the substring of $S$ from $l$ to $r$) is a \textbf{paranoid} string.
We want to show that a binary string $T$ of length $m$ is paranoid if and only if $m = 1$ or ($1 < m$ and $T[m] \neq T[m - 1]$). In the case of $T[m - 1] = T[m]$: We can never delete the last two characters because they will always remain equal. So $T$ is not paranoid. In the case of $T[m - 1] \neq T[m]$: If $m = 2$, we can reach our goal by one operation. Otherwise assume that the last character is 0. Now the last three characters are either 010 or 110. In the first case perform the operation on $[T_{m-2},T_{m-1}]$ and in the second case perform the operation on $[T_{m-1},T_m]$. Then the last two characters will be 10 and we can continue this algorithm on the new string until we reach $m = 1$. The number of paranoid substrings of length $1$ is equal to $n$. To count the number of longer substrings, we can fix $r$ from index $2$ to $n$. If $S[r] \neq S[r - 1]$ holds, we should add $r - 1$ to the answer. Complexity: $\mathcal{O}(n)$
[ "constructive algorithms", "greedy" ]
1,200
# include <bits/stdc++.h> using namespace std; int t, n; string S; long long ans; int main(){ ios::sync_with_stdio(0); cin.tie(0); cout.tie(0); cin >> t; while (t --){ cin >> n >> S, ans = n; for (int i = 1; i < n; ++ i) if (S[i] != S[i - 1]) ans += i; cout << ans << '\n'; } return 0; }
1695
A
Subrectangle Guess
Michael and Joe are playing a game. The game is played on a grid with $n$ rows and $m$ columns, \textbf{filled with distinct integers}. We denote the square on the $i$-th ($1\le i\le n$) row and $j$-th ($1\le j\le m$) column by $(i, j)$ and the number there by $a_{ij}$. Michael starts by saying two numbers $h$ ($1\le h \le n$) and $w$ ($1\le w \le m$). Then Joe picks any $h\times w$ subrectangle of the board (without Michael seeing). Formally, an $h\times w$ subrectangle starts at some square $(a,b)$ where $1 \le a \le n-h+1$ and $1 \le b \le m-w+1$. It contains all squares $(i,j)$ for $a \le i \le a+h-1$ and $b \le j \le b+w-1$. \begin{center} {\small Possible move by Joe if Michael says $3\times 2$ (with maximum of $15$).} \end{center} Finally, Michael has to guess the maximum number in the subrectangle. He wins if he gets it right. Because Michael doesn't like big numbers, he wants the area of the chosen subrectangle (that is, $h \cdot w$), to be as small as possible, while still ensuring that he wins, not depending on Joe's choice. Help Michael out by finding this minimum possible area. It can be shown that Michael can always choose $h, w$ for which he can ensure that he wins.
Note that for any rectangle size, Joe can always choose an $h$ by $w$ rectangle that contains the maximum element in the grid (which is unique). So in order for Michael to ensure that he can win, he needs to make $h$ and $w$ big enough such that every $h$ by $w$ rectangle contains the maximum element in the grid. Let $(i, j)$ be the position of the maximum (1-indexed). The furthest point in the grid from it has to be one of the four corners, and the $h \times w$ rectangle has to be big enough to include that furthest point along with $(i,j)$. So just try all four corners and take the biggest rectangle that this forces. This reduces to $max(i, n - i + 1) \cdot max(j, m - j + 1)$ (because the two dimensions are independent). So the answer is $max(i, n - i + 1) \cdot max(j, m - j + 1)$. Complexity: $O(nm)$
[ "games" ]
800
#include <bits/stdc++.h> using namespace std; int grid[45][45]; int main() { int num_tests; cin >> num_tests; for (int test = 0; test < num_tests; ++test) { int n, m; cin >> n >> m; int max_i = 0, max_j = 0; for (int i = 0; i < n; ++i) for (int j = 0; j < m; ++j) { cin >> grid[i][j]; if (grid[i][j] > grid[max_i][max_j]) max_i = i, max_j = j; } int h = max(max_i+1, n-max_i); int w = max(max_j+1, m-max_j); cout << h * w << '\n'; } }
1695
B
Circle Game
Mike and Joe are playing a game with some stones. Specifically, they have $n$ piles of stones of sizes $a_1, a_2, \ldots, a_n$. These piles are arranged in a circle. The game goes as follows. Players take turns removing some positive number of stones from a pile in clockwise order starting from pile $1$. Formally, if a player removed stones from pile $i$ on a turn, the other player removes stones from pile $((i\bmod n) + 1)$ on the next turn. If a player cannot remove any stones on their turn (because the pile is empty), they lose. Mike goes first. If Mike and Joe play optimally, who will win?
Note that since all piles are initially nonempty, the game will not end for the first $n$ turns, because on each of those turns, a player will be removing from a nonempty pile. If $n$ is odd, Mike can remove all of the stones from the first pile. Then, on the $(n+1)$-th turn (the first turn where the game can end), Joe will be forced to remove from the first pile, which is empty. So Mike can always win if $n$ is odd. If $n$ is even, then Mike will only ever remove from the odd piles, and Joe will only ever remove from the even piles. So each player has $n/2$ piles, and neither can remove from the other's piles. Therefore, it is optimal for each player to remove the minimal possible number of stones at each step, so that they stay in the game for as long as possible. So on each turn, a player removes exactly one stone, and the first pile to become empty will be the pile with the minimal number of stones. If there are multiple minimal piles, it will be the leftmost such pile. So if this pile is on an odd position, Mike will lose (and therefore Joe will win), and otherwise Joe will lose (and Mike will win). Complexity: $O(n)$
[ "games", "greedy" ]
1,000
#include <bits/stdc++.h> using namespace std; int main() { int num_tests; cin >> num_tests; for (int test = 0; test < num_tests; ++test) { int n; cin >> n; vector<int> a(n); for (int i = 0; i < n; ++i) cin >> a[i]; if (n % 2 == 1) { cout << "Mike\n"; continue; } int smallest = 0; for (int i = 0; i < n; ++i) if (a[i] < a[smallest]) smallest = i; if (smallest % 2 == 0) cout << "Joe\n"; else cout << "Mike\n"; } }
1695
C
Zero Path
You are given a grid with $n$ rows and $m$ columns. We denote the square on the $i$-th ($1\le i\le n$) row and $j$-th ($1\le j\le m$) column by $(i, j)$ and the number there by $a_{ij}$. All numbers are equal to $1$ or to $-1$. You start from the square $(1, 1)$ and can move one square down or one square to the right at a time. In the end, you want to end up at the square $(n, m)$. Is it possible to move in such a way so that the sum of the values written in all the visited cells (including $a_{11}$ and $a_{nm}$) is $0$?
Note that if $n+m$ is even, then the sum of any path from the top left to bottom right will be odd, and therefore nonzero. So in this case, there is no solution. Otherwise, every path from top left to bottom right will have even sum. For each position $(i, j)$ in the grid, we define $max_{ij}$ to be the maximum possible sum of a path starting at the top left and ending at $(i, j)$. Similarly, $min_{ij}$ is defined to be the minimum possible sum starting at the top left and ending at $(i, j)$. These values can be computed using an $O(nm)$ DP, where $max_{ij} = a_{ij} + max(max_{(i-1)j}, max_{i(j-1)})$ $min_{ij} = a_{ij} + min(min_{(i-1)j}, min_{i(j-1)})$ We claim that a zero-sum path exists if and only if $min_{nm} \leq 0 \leq max_{nm}$. Proof: Let $p_1$ be a path from $(1, 1)$ to $(n, m)$ adding up to $min_{nm}$, and $p_2$ be another such path adding up to $max_{nm}$. Each of these paths consists of $n-1$ down moves and $m-1$ right moves, so it can be represented as a string of "R" and "D" of length $n+m-2$. We can move $p_1$ to $p_2$ by a sequence of operations where we swap two adjacent (and different) characters. Visually, what we are doing is replacing one square on the path with a square diagonally adjacent to it. The below picture shows one possible operation on a path. Note that in each step, the sum of values on the path changes by either $-2, 0,$ or $2$. So after performing this sequence of operations taking $p_1$ to $p_2$, we have moved the path with sum $min_{nm}$ to a path with sum $max_{nm}$, changing the sum by $-2, 0,$ or $2$ at each step. Therefore, because both $min_{nm}$ and $max_{nm}$ are even, and $min_{nm} \leq 0\leq max_{nm}$, at some point in this sequence of operations, the sum of the path must be zero. Complexity: $O(nm)$
[ "brute force", "data structures", "dp", "graphs", "greedy", "shortest paths" ]
1,700
#include <bits/stdc++.h> using namespace std; #define N 1010 int grid[N][N], mn[N][N], mx[N][N]; int main() { int num_tests; cin >> num_tests; for (int test = 0; test < num_tests; ++test) { int n, m; cin >> n >> m; for(int i = 0; i < n; ++i) for(int j = 0; j < m; ++j) cin >> grid[i][j]; mn[0][0] = mx[0][0] = grid[0][0]; for(int i = 1; i < n; ++i) mx[i][0] = mn[i][0] = mx[i - 1][0] + grid[i][0]; for(int j = 1; j < m; ++j) mx[0][j] = mn[0][j] = mx[0][j - 1] + grid[0][j]; for(int i = 1; i < n; ++i) for(int j = 1; j < m; ++j) { mx[i][j] = max(mx[i - 1][j], mx[i][j - 1]) + grid[i][j]; mn[i][j] = min(mn[i - 1][j], mn[i][j - 1]) + grid[i][j]; } if(mx[n - 1][m - 1] % 2 || mn[n - 1][m - 1] > 0 || mx[n - 1][m - 1] < 0) cout << "NO\n"; else cout << "YES\n"; } }
1695
D1
Tree Queries (Easy Version)
\textbf{The only difference between this problem and D2 is the bound on the size of the tree.} You are given an unrooted tree with $n$ vertices. There is some hidden vertex $x$ in that tree that you are trying to find. To do this, you may ask $k$ queries $v_1, v_2, \ldots, v_k$ where the $v_i$ are vertices in the tree. After you are finished asking all of the queries, you are given $k$ numbers $d_1, d_2, \ldots, d_k$, where $d_i$ is the number of edges on the shortest path between $v_i$ and $x$. Note that you know which distance corresponds to which query. What is the minimum $k$ such that there exist some queries $v_1, v_2, \ldots, v_k$ that let you always uniquely identify $x$ (no matter what $x$ is)? Note that you don't actually need to output these queries.
If $n=1$, then no queries are needed, because there is only one vertex. Otherwise, we need at least one query. If we fix a node $u$, and force it to be a query, we can root the tree at $u$ and do a greedy DFS to compute the answer. Note that because we guarantee that the root is a query, when we are computing the answer for any node $v$ in this DFS, we can assume that either $v$ or some vertex not in the subtree of $v$ has already been queried. We define $ans[v]$ to be the minimal number of queries to distinguish all vertices in the subtree of $v$, given that $v$ or some vertex not in the subtree of $v$ has been queried. Note that for each child $c$ of $v$, we need to be able to distinguish all vertices in the subtree of $c$, so we have $ans[v] \geq \sum_c ans[c]$. Additionally, there can be at most one child $c$ of $v$ with no queries in its subtree, otherwise all of these children will be indistinguishable by the queries. If there are $x > 1$ such children of $v$, we can query the first $x-1$ of them, which will be enough to differentiate all vertices in these $x$ subtrees. So, using this definition of $x$, our final formula is $ans[v] = \sum_c ans[c] + max(0, x - 1)$. For each possible root, we do a DFS to recursively compute these answers. The answer is the minimum $ans[root] + 1$, where the $+1$ is to account for the fact that we are querying the root. Complexity: $O(n^2)$
[ "brute force", "constructive algorithms", "dfs and similar", "dp", "greedy", "trees" ]
2,200
#include <bits/stdc++.h> using namespace std; #define N 2010 vector<int> tree[N]; int dfs(int i, int p) { int sm = 0, z = 0; for (int j : tree[i]) if (j != p) { int x = dfs(j, i); sm += x; if (x == 0) z++; } return sm + max(0, z - 1); } int main() { int num_tests; cin >> num_tests; for (int test = 0; test < num_tests; ++test) { int n; cin >> n; for (int i = 1; i < n; ++i) { int u, v; cin >> u >> v; tree[u].push_back(v); tree[v].push_back(u); } if (n == 1) cout << "0\n"; else { int ans = n; for (int i = 1; i <= n; ++i) ans = min(ans, 1 + dfs(i, i)); cout << ans << '\n'; } for (int i = 1; i <= n; ++i) tree[i].clear(); } }
1695
D2
Tree Queries (Hard Version)
\textbf{The only difference between this problem and D1 is the bound on the size of the tree.} You are given an unrooted tree with $n$ vertices. There is some hidden vertex $x$ in that tree that you are trying to find. To do this, you may ask $k$ queries $v_1, v_2, \ldots, v_k$ where the $v_i$ are vertices in the tree. After you are finished asking all of the queries, you are given $k$ numbers $d_1, d_2, \ldots, d_k$, where $d_i$ is the number of edges on the shortest path between $v_i$ and $x$. Note that you know which distance corresponds to which query. What is the minimum $k$ such that there exist some queries $v_1, v_2, \ldots, v_k$ that let you always uniquely identify $x$ (no matter what $x$ is)? Note that you don't actually need to output these queries.
In the previous solution, we forced the root to be a query, because we needed to ensure that for every node $v$, either $v$ was queried, or there was a query outside the subtree of $v$. Notice that if the root has degree $\geq 3$, regardless of whether we query the root, this property still holds. The way we compute values in the DFS ensures that at least $degree[root]-1 \geq 2$ subtrees of the root will have at least one query. Therefore, for each other vertex, some vertex outside its subtree must have a query. So the solution is the same as D1, except we root the tree at any vertex of degree $\geq 3$, and don't query the root itself. If there are no vertices of degree $\geq 3$, then the tree is a path, and querying either of the endpoints is sufficient, so the answer is 1. Complexity: $O(n)$
[ "constructive algorithms", "dfs and similar", "dp", "greedy", "trees" ]
2,300
#include <bits/stdc++.h> using namespace std; #define N 200010 vector<int> tree[N]; int dfs(int i, int p) { int sm = 0, z = 0; for (int j : tree[i]) if (j != p) { int x = dfs(j, i); sm += x; if (x == 0) z++; } return sm + max(0, z - 1); } int main() { int num_tests; cin >> num_tests; for (int test = 0; test < num_tests; ++test) { int n; cin >> n; for (int i = 1; i < n; ++i) { int u, v; cin >> u >> v; tree[u].push_back(v); tree[v].push_back(u); } int max_deg = 0; for (int i = 1; i <= n; ++i) max_deg = max(max_deg, (int)tree[i].size()); if (max_deg == 0) cout << "0\n"; else if (max_deg < 3) cout << "1\n"; else { for (int i = 1; i <= n; ++i) if (tree[i].size() >= 3) { cout << dfs(i, i) << '\n'; break; } } for (int i = 1; i <= n; ++i) tree[i].clear(); } }
1695
E
Ambiguous Dominoes
Polycarp and Monocarp are both solving the same puzzle with dominoes. They are given the same set of $n$ dominoes, the $i$-th of which contains two numbers $x_i$ and $y_i$. They are also both given the same $m$ by $k$ grid of values $a_{ij}$ such that $m\cdot k = 2n$. The puzzle asks them to place the $n$ dominoes on the grid in such a way that none of them overlap, and the values on each domino match the $a_{ij}$ values that domino covers. Dominoes can be rotated arbitrarily before being placed on the grid, so the domino $(x_i, y_i)$ is equivalent to the domino $(y_i, x_i)$. They have both solved the puzzle, and compared their answers, but noticed that not only did their solutions not match, but none of the $n$ dominoes were in the same location in both solutions! Formally, if two squares were covered by the same domino in Polycarp's solution, they were covered by different dominoes in Monocarp's solution. The diagram below shows one potential $a$ grid, along with the two players' solutions. Polycarp and Monocarp remember the set of dominoes they started with, but they have lost the grid $a$. Help them reconstruct one possible grid $a$, along with both of their solutions, or determine that no such grid exists.
We represent the set of dominoes as a graph on $2n$ vertices, where the vertices are the values $1$ to $2n$, and the $i$-th domino represents an edge between $x_i$ and $y_i$. Note that this graph can contain self-loops and duplicate edges. If any of the connected components of the graph contains only one edge, then that means that there is a domino such that its $x_i$ and $y_i$ values don't appear in any other dominoes. Therefore, this domino must be in the same location in both grids, so there is no solution. Otherwise, iterate over all connected components with $>0$ edges, which must therefore contain $\geq 2$ edges. For a component with $k$ edges, we will run a DFS that will generate a valid $2$ by $k$ grid for the $k$ dominoes in this component. As we go through this DFS, we keep track of which vertices and which edges we've already seen. When we DFS from a vertex $u$ we haven't seen, mark it as seen, and iterate over all edges adjacent to $u$ that we haven't seen. Mark each one as seen, and then traverse it to the other vertex $v$, marking $u$ as its parent. Once we backtrack to $u$, continue to the next unseen edge. Once all unseen edges are traversed, traverse back to the parent of $u$. When we visit a vertex $u$ we have already seen, just traverse the edge back to its parent. Throughout this process, maintain a list of all vertices we've seen in order, including any duplicates. Because we traverse every edge exactly twice (once in each direction) in this DFS, this list will be of size $2k+1$, and every two adjacent vertices in the list are connected by an edge. Each edge will be represented in two positions in this list. We can additionally prove that within this list, each edge appears in one even position and one odd position. Proof: Let $uv$ be an edge that we initially traverse from $u$ to $v$.
If $v$ has already been traversed by this point in the DFS, we immediately traverse back to $u$, so the two positions of $uv$ are adjacent in the list, and thus on different parity positions. If not, consider the final sequence of edges traversed, but removing any edges that are immediately traversed backwards. Note that since we are only removing pairs of adjacent edges, the parities of positions of edges in the list are unchanged. Now, the sequence of edges is just a DFS traversal of a tree. So because the graph is now a tree, and thus bipartite, any path from $v$ to itself must be of even length. Therefore, the sequence of moves looks like $(..., uv,$ [an even number of edges], $vu, ...)$ and therefore the two occurrences of $uv$ must be on different parities. Now, to generate the $2$ by $k$ grid that works for this component, we note that the list of size $2k+1$ can be seen as a cyclic list of size $2k$, because the first and last element of the list must be equal. So we pop the last element of the list off, and insert the remaining vertices of the list into a $2$ by $k$ grid in clockwise order. Now, the edges form a cycle of length $2k$, and because each edge appears on one odd and one even position, each domino will appear exactly once in both of the below orientations. Each of these orientations takes the dominoes going around the cycle in order. To get the solution for the whole problem, simply concatenate the $2$ by $k$ grids for each component into a $2$ by $n$ grid. The below example demonstrates the whole process. The blue edges are the edges to previously-seen vertices, and the red edges are the edges to previously-unseen vertices. Complexity: $O(n)$
[ "constructive algorithms", "dfs and similar", "graphs" ]
2,700
#include <bits/stdc++.h>
using namespace std;
#define N 600010

vector<pair<int, int>> graph[N];  // adjacency list: (neighbor, edge id)
vector<int> lst;                  // vertices seen in order during the DFS
int ans[2][N];
bool used[N], usedEdges[N];

// DFS described in the editorial: every edge is traversed twice,
// producing a list of 2k+1 vertices for a component with k edges.
void dfs(int i) {
    lst.push_back(i);
    if (!used[i]) {
        used[i] = true;
        for (pair<int, int> p : graph[i])
            if (!usedEdges[p.second]) {
                usedEdges[p.second] = true;
                dfs(p.first);
                lst.push_back(i);
            }
    }
}

int main() {
    int n;
    cin >> n;
    string ptop(n, 'U'), pbot(n, 'D');  // first orientation of the dominoes
    string mtop(n, 'U'), mbot(n, 'D');  // second orientation
    for (int i = 0; i < n; ++i) {
        int u, v;
        cin >> u >> v;
        graph[u].emplace_back(v, i);
        graph[v].emplace_back(u, i);
    }
    int idx = 0;
    for (int i = 1; i <= 2 * n; ++i)
        if (!used[i]) {
            dfs(i);
            lst.pop_back();  // drop the duplicated first/last vertex
            int k = lst.size() / 2;
            if (k == 1) {  // a component with a single edge: no solution
                cout << "-1\n";
                exit(0);
            }
            // lay the cyclic list out clockwise on a 2-by-k block
            for (int j = 0; j < k; ++j) {
                ans[0][j + idx] = lst[j];
                ans[1][j + idx] = lst[2 * k - 1 - j];
            }
            for (int j = 0; j < k - 1; j += 2)
                ptop[j + idx] = pbot[j + idx] = 'L', ptop[j + 1 + idx] = pbot[j + 1 + idx] = 'R';
            for (int j = 1; j < k - 1; j += 2)
                mtop[j + idx] = mbot[j + idx] = 'L', mtop[j + 1 + idx] = mbot[j + 1 + idx] = 'R';
            lst.clear();
            idx += k;
        }
    cout << "2 " << n << '\n';
    for (int i = 0; i < 2; ++i) {
        for (int j = 0; j < n; ++j) cout << ans[i][j] << ' ';
        cout << '\n';
    }
    cout << ptop << '\n' << pbot << '\n';
    cout << mtop << '\n' << mbot << '\n';
}
1696
A
NIT orz!
NIT, the cleaver, is new in town! Thousands of people line up to orz him. To keep his orzers entertained, NIT decided to let them solve the following problem related to $\operatorname{or} z$. Can you solve this problem too? You are given a 1-indexed array of $n$ integers, $a$, and an integer $z$. You can do the following operation any number (possibly zero) of times: - Select a positive integer $i$ such that $1\le i\le n$. Then, \textbf{simultaneously} set $a_i$ to $(a_i\operatorname{or} z)$ and set $z$ to $(a_i\operatorname{and} z)$. In other words, let $x$ and $y$ respectively be the current values of $a_i$ and $z$. Then set $a_i$ to $(x\operatorname{or}y)$ and set $z$ to $(x\operatorname{and}y)$. Here $\operatorname{or}$ and $\operatorname{and}$ denote the bitwise operations OR and AND respectively. Find the maximum possible value of the maximum value in $a$ after any number (possibly zero) of operations.
How many operations will we perform? At most one. Why? If we perform exactly one operation, the best value we can obtain is $S=\max_{1\le i\le n}(a_i\mathrm{\ or\ }z)$. In fact, we can prove that $S$ is the answer. Define $a_i'$ as the value of $a_i$ after some operations; it suffices to prove the answer never exceeds $S$. After any operation the new $z$ is the AND of the old $z$ with something, so $z$ always remains a submask of its initial value. Hence $a_i$ always remains a submask of $(a_i\mathrm{\ or\ }z)$, i.e. $a_i'\le (a_i\mathrm{\ or\ }z)$ for all $i$. Thus $\max_{1\le i\le n} a_i'\le \max_{1\le i\le n}(a_i\mathrm{\ or\ }z)=S$. Time complexity is $O(n)$.
[ "bitmasks", "greedy" ]
800
null
1696
B
NIT Destroys the Universe
For a collection of integers $S$, define $\operatorname{mex}(S)$ as the smallest non-negative integer that does not appear in $S$. NIT, the cleaver, decides to destroy the universe. He is not so powerful as Thanos, so he can only destroy the universe by snapping his fingers several times. The universe can be represented as a 1-indexed array $a$ of length $n$. When NIT snaps his fingers, he does the following operation on the array: - He selects positive integers $l$ and $r$ such that $1\le l\le r\le n$. Let $w=\operatorname{mex}(\{a_l,a_{l+1},\dots,a_r\})$. Then, for all $l\le i\le r$, set $a_i$ to $w$. We say the universe is destroyed if and only if for all $1\le i\le n$, $a_i=0$ holds. Find the minimum number of times NIT needs to snap his fingers to destroy the universe. That is, find the minimum number of operations NIT needs to perform to make all elements in the array equal to $0$.
How many operations will we perform? At most two. Why? How to check if the array can be destroyed in $0$ or $1$ operations? The answer is at most $2$, because performing the operation on $[1,n]$ at most twice always works: if the array contains at least one zero, the first operation on $[1,n]$ makes all elements equal to some $w>0$ and the second makes them all $0$; otherwise the mex of the whole array is already $0$ and one operation suffices. If the array consists of zeroes, the answer is $0$. If all non-zero elements form a contiguous segment in the array, the answer is $1$: operate on exactly that segment, whose mex is $0$. To check this, find the leftmost and rightmost occurrences of non-zero elements and verify that all elements between them are also non-zero. Otherwise the answer is $2$. Time complexity is $O(n)$.
[ "greedy" ]
900
null
1696
C
Fishingprince Plays With Array
Fishingprince is playing with an array $[a_1,a_2,\dots,a_n]$. He also has a magic number $m$. He can do the following two operations on it: - Select $1\le i\le n$ such that $a_i$ is divisible by $m$ (that is, there exists an integer $t$ such that $m \cdot t = a_i$). Replace $a_i$ with \textbf{$m$ copies} of $\frac{a_i}{m}$. The order of the other elements doesn't change. For example, when $m=2$ and $a=[2,3]$ and $i=1$, $a$ changes into $[1,1,3]$. - Select $1\le i\le n-m+1$ such that $a_i=a_{i+1}=\dots=a_{i+m-1}$. Replace these $m$ elements with \textbf{a single} $m \cdot a_i$. The order of the other elements doesn't change. For example, when $m=2$ and $a=[3,2,2,3]$ and $i=2$, $a$ changes into $[3,4,3]$. Note that the array length might change during the process. The value of $n$ above is defined as the current length of the array (might differ from the $n$ in the input). Fishingprince has another array $[b_1,b_2,\dots,b_k]$. Please determine if he can turn $a$ into $b$ using any number (possibly zero) of operations.
The operation is reversible (the two operations are inverses of each other). Try to find a middle state such that we can turn both $a$ and $b$ into it. Call the first operation "expand" and the second operation "shrink". Keep doing expand on both arrays until we can't anymore, and call the resulting arrays $a'$ and $b'$. It suffices to check whether $a'=b'$. To implement this within the limits, compress contiguous equal numbers into (value, count) pairs. Proof of why this is necessary and sufficient: Sufficiency is obvious, since the operations are reversible: we can do $a\to a'=b'\to b$. Necessity: let $f(a)=a'$. It suffices to prove that an operation on $a$ does not affect $f(a)$. An expand operation obviously doesn't affect $f(a)$. A shrink operation shrinks $a[i,i+m-1]$ into one element; when computing $f$ of the resulting array, we will always expand that element back at some point, so the result is the same as $f(a)$. Time complexity is $O((n+k)\log_m V)$, where $V=\max a_i$.
[ "constructive algorithms", "greedy", "implementation", "math" ]
1,400
null
1696
D
Permutation Graph
A permutation is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array) and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array). You are given a permutation of $1,2,\dots,n$, $[a_1,a_2,\dots,a_n]$. For integers $i$, $j$ such that $1\le i<j\le n$, define $\operatorname{mn}(i,j)$ as $\min\limits_{k=i}^j a_k$, and define $\operatorname{mx}(i,j)$ as $\max\limits_{k=i}^j a_k$. Let us build an undirected graph of $n$ vertices, numbered $1$ to $n$. For every pair of integers $1\le i<j\le n$, if $\operatorname{mn}(i,j)=a_i$ and $\operatorname{mx}(i,j)=a_j$ both holds, or $\operatorname{mn}(i,j)=a_j$ and $\operatorname{mx}(i,j)=a_i$ both holds, add an undirected edge of length $1$ between vertices $i$ and $j$. In this graph, find the length of the shortest path from vertex $1$ to vertex $n$. We can prove that $1$ and $n$ will always be connected via some path, so a shortest path always exists.
This problem has two different solutions. The first one is more beautiful, but less straightforward. It runs in $O(n)$ and needs no data structures. Instead of trying to construct the shortest path from $1$ to $n$, find a "transfer vertex" that we must pass through: the position of the maximum element of the array. Suppose the maximum element is $a_k$; solve recursively for $dis(1,k)$ and $dis(k,n)$, where $dis(x,y)$ denotes the length of the shortest path between $x$ and $y$. Consider the position $i$ such that $a_i=n$, and assume $i\ne 1$ and $i\ne n$. For any segment that contains $i$, its maximum element is $a_i$. Thus, for $x<i<y$, $x$ and $y$ are never directly connected by an edge. This means that when going from $1$ to $n$, we have to pass through $i$, so we solve recursively for $dis(1,i)$ and $dis(i,n)$. Take $dis(1,i)$ as an example: since $a_i=n$, position $i$ holds the maximum element of $[1,i]$. Let the minimum element of $[1,i]$ be $a_j\ (j<i)$. By a similar argument, we could recurse into $dis(1,j)$ and $dis(j,i)$. However, note that $dis(j,i)=1$: since $j$ and $i$ are respectively the minimum and maximum in $[1,i]$, they are also the minimum and maximum in $[j,i]$, so $i$ and $j$ must be directly connected. Thus we only need to recurse into $dis(1,j)$. The case $dis(i,n)$ is similar. Note that we only ever call $dis(l,r)$ with $l=1$ or $r=n$ (otherwise the return value is always $1$), so it suffices to pre-compute prefix and suffix minimums and maximums. The time complexity is $O(n)$. For the second solution, look at the last sample test case and think of a simple greedy: always moving to the rightmost reachable vertex (the one with the largest index) works. Use data structures to simulate this process. How? We can prove that always moving to the directly connected vertex with the largest index is a correct strategy. 
The proof is left as an exercise :) Hint: try to prove that the path we visit is the same as the path visited in solution 1. Suppose we are at $i$ and want to find the largest $j>i$ such that $i$ and $j$ are directly connected. WLOG assume $a_{i+1}<a_i$. Then it cannot be the case that $a_j>a_i$, since then neither of $a_i,a_j$ would be $mn(i,j)$; thus $a_j<a_i$. It follows that every $i<k<j$ satisfies $a_k<a_i$, since otherwise neither of $a_i,a_j$ would be $mx(i,j)$. Let $r_i$ be the largest $p$ such that $a_t<a_i$ for all $t\in [i+1,p]$. From the arguments above we know $j\in [i+1,r_i]$. $r_i$ can be pre-computed with a stack, or with binary search plus some data structure. Let $j_0$ be the position of the minimum element in $[i+1,r_i]$. Obviously $j_0$ is directly connected with $i$. For any $j_0<k\le r_i$, $mn(i,k)$ is $a_{j_0}$, so no such $k$ is directly connected with $i$. Thus $j_0$ is the desired $j$. Using a data structure for range minimum gives an $O(n\log n)$ solution, which easily passes (not sure whether $O(n\log^2 n)$ ones pass though; the large constraints were intended to cut those). However, by building the Cartesian tree of the array and doing proper pre-computations, we can optimize this solution to $O(n)$.
[ "binary search", "constructive algorithms", "data structures", "divide and conquer", "greedy", "shortest paths" ]
1,900
null
1696
E
Placing Jinas
We say an infinite sequence $a_{0}, a_{1}, a_2, \ldots$ is \textbf{non-increasing} if and only if for all $i\ge 0$, $a_i \ge a_{i+1}$. There is an infinite right and down grid. The upper-left cell has coordinates $(0,0)$. Rows are numbered $0$ to infinity from top to bottom, columns are numbered from $0$ to infinity from left to right. There is also a \textbf{non-increasing} infinite sequence $a_{0}, a_{1}, a_2, \ldots$. You are given $a_0$, $a_1$, $\ldots$, $a_n$; for all $i>n$, $a_i=0$. For every pair of $x$, $y$, the cell with coordinates $(x,y)$ (which is located at the intersection of $x$-th row and $y$-th column) is white if $y<a_x$ and black otherwise. Initially there is one doll named Jina on $(0,0)$. You can do the following operation. - Select one doll on $(x,y)$. Remove it and place a doll on $(x,y+1)$ and place a doll on $(x+1,y)$. Note that multiple dolls can be present at a cell at the same time; in one operation, you remove only one. Your goal is to make all white cells contain $0$ dolls. What's the minimum number of operations needed to achieve the goal? Print the answer modulo $10^9+7$.
Try to find out the number of operations we do on a specific cell $(i,j)$, call it $f(i,j)$. Write the recurrence formula for $f(i,j)$. What is $f(i,j)$? $f(i,j)=\binom{i+j}j$ The answer is the sum of $f(i,j)$ over all white cells $(i,j)$. Use some combinatorics formula to speed it up. Let us find out the number of operations we do on a specific cell $(i,j)$, call it $f(i,j)$. Every operation done on $(i-1,j)$ will lead to one doll on $(i,j)$, thus consuming one operation on $(i,j)$. Similar observation holds for $(i,j-1)$. Thus, $f(i,j)=f(i,j-1)+f(i-1,j)$ (given that $(i,j),(i-1,j),(i,j-1)$ are all white cells). Note that $a$ is non-increasing: this means that if $(i,j)$ is white, $(i-1,j),(i,j-1)$ will both be white. So we can conclude that $f(i,j)=f(i,j-1)+f(i-1,j)$ always holds as long as $(i,j)$ is white. Another way to see the formula is $f(i,j)$ is the number of ways to go from $(0,0)$ to $(i,j)$, only going down or right by 1 step. This implies that $f(i,j)=\binom{i+j}j$. From this, we know that the answer is $\sum_{i=0}^n\sum_{j=0}^{a_i-1} \binom{i+j}{i}$. With the equation $\sum_{i=0}^k\binom{n+i}n=\binom{n+k+1}{n+1}$, we know that the answer is $\sum_{i=0}^n\binom{i+a_i}{i+1}$. The time complexity is $O(n+V)$, where $V=\max a_i$.
[ "combinatorics", "math" ]
2,000
null
1696
F
Tree Recovery
Fishingprince loves trees. A tree is a connected undirected graph without cycles. Fishingprince has a tree of $n$ vertices. The vertices are numbered $1$ through $n$. Let $d(x,y)$ denote the shortest distance on the tree from vertex $x$ to vertex $y$, assuming that the length of each edge is $1$. However, the tree was lost in an accident. Fortunately, Fishingprince still remembers some information about the tree. More specifically, for every triple of integers $x,y,z$ ($1\le x<y\le n$, $1\le z\le n$) he remembers whether $d(x,z)=d(y,z)$ or not. Help him recover the structure of the tree, or report that no tree satisfying the constraints exists.
The solution does not contain painful casework or deadly implementation. Suppose we already know that edge $(i,j)$ exists in the tree. What can we learn from it? We can immediately recover the whole tree. Read the hints first to understand the solution better. Construct a graph with $\binom n2$ vertices $(1,2),(1,3),\dots,(n-1,n)$. If $dis(a,b)=dis(b,c)$, link an undirected edge between $(a,b)$ and $(b,c)$. Observe that the edges of the tree form a connected component of size exactly $n-1$ in this graph! Find all components of size $n-1$ and check whether the pairs in each of them form a tree that satisfies the input. There are at most $\dfrac n2$ such components, so the complexity is $O(n^4)$. Proper pre-computation and the usage of bitsets can reduce the complexity to $O(n^4/w)$.
[ "brute force", "constructive algorithms", "dfs and similar", "dsu", "graphs", "trees" ]
2,600
null
1696
G
Fishingprince Plays With Array Again
Suppose you are given a 1-indexed sequence $a$ of non-negative integers, whose length is $n$, and two integers $x$, $y$. In consecutive $t$ seconds ($t$ can be any positive real number), you can do one of the following operations: - Select $1\le i<n$, decrease $a_i$ by $x\cdot t$, and decrease $a_{i+1}$ by $y\cdot t$. - Select $1\le i<n$, decrease $a_i$ by $y\cdot t$, and decrease $a_{i+1}$ by $x\cdot t$. Define the minimum amount of time (it might be a real number) required to make all elements in the sequence less than or equal to $0$ as $f(a)$. For example, when $x=1$, $y=2$, it takes $3$ seconds to deal with the array $[3,1,1,3]$. We can: - In the first $1.5$ seconds do the second operation with $i=1$. - In the next $1.5$ seconds do the first operation with $i=3$. We can prove that it's not possible to make all elements less than or equal to $0$ in less than $3$ seconds, so $f([3,1,1,3])=3$. Now you are given a 1-indexed sequence $b$ of positive integers, whose length is $n$. You are also given positive integers $x$, $y$. Process $q$ queries of the following two types: - 1 k v: change $b_k$ to $v$. - 2 l r: print $f([b_l,b_{l+1},\dots,b_r])$.
What kind of problem is this? Linear programming. Consider the dual. Consider the case $n=2$: draw the linear program in the $xOy$-plane and try to observe what the answer might be. First we solve the problem with only one query on the whole array $A$. This is a linear programming problem: $\text{minimize}\sum_{1\le i<n} (a_i+b_i) \\ Xa_1+Yb_1\ge A_1 \\ Xa_i+Yb_i+Ya_{i-1}+Xb_{i-1}\ge A_i\ (2\le i<n) \\ Ya_{n-1}+Xb_{n-1}\ge A_n \\ a_i,b_i\ge 0$ Consider its dual: $\text{maximize}\sum_{1\le i\le n}A_ix_i \\ Xx_i+Yx_{i+1}\le 1\ (1\le i<n) \\ Yx_i+Xx_{i+1}\le 1\ (1\le i<n) \\ x_i\ge 0$ Suppose $X\le Y$. We will prove that there exists an optimal solution to the dual problem in which each $x_i$ takes one of only three values: $1/Y$, $1/(X+Y)$, $0$. The proof is as follows. It is well known that an optimal solution to a linear program lies on a vertex of the polytope cut out by the restrictions. Thus we are only interested in points $x$ that satisfy several "=" restrictions (and those restrictions should really intersect in a single point, meaning the "=" constraints uniquely determine $x$). Consider any sufficient subset of them (sufficient meaning they uniquely determine $x$). If a restriction relates $x_p$ and $x_q$, link an undirected edge between $p$ and $q$; if a restriction only involves $x_p$ (i.e. $x_p=0$), put a self-loop on $p$. Being sufficient means that every connected component of this graph has exactly one cycle. However, for an edge $(u,v)$ we know that either $u=v+1$ or $u=v$, so every cycle is either $(i\to i+1\to i)$ or $i\to i$. If a cycle is $(i\to i+1\to i)$, all $x_i$ in its component equal $1/(X+Y)$; if a cycle is $i\to i$, all $x_i$ in its component equal $1/Y$ or $0$ (not $1/X$, which would exceed the constraints since $X\le Y$). Thus we can solve the dual with a dp. Let $dp(i,0/1/2)$ be the maximum of $\sum_{j\le i}A_jx_j$ when $x_i$ is the $0/1/2$-th candidate value above. 
Transitions are straightforward. For multiple queries, the transitions can be written as (max,+) matrix products, and we can use a segment tree to maintain updates. About precision: we can actually avoid floating point entirely. All numbers in this problem are fractions with denominator $Y(X+Y)$, and the answer does not exceed $(\sum a_i)/Y$, so the numerators do not exceed $(\sum a_i)\times (X+Y)<10^{18}$ and fit in a long long. Alternatively, with double in C++ the relative error of one operation is below $10^{-15}$, and $10^{-15}\times n<10^{-9}$, so doubles are also precise enough. Complexity: $O(n+q\log n)$.
[ "brute force", "data structures", "geometry", "math" ]
3,300
null