55th International Mathematical Olympiad
Cape Town, South Africa, 2014
Shortlisted Problems with Solutions
Note of Confidentiality
The shortlisted problems should be kept strictly confidential until IMO 2015.
Contributing Countries
The Organising Committee and the Problem Selection Committee of IMO 2014 thank the following 43 countries for contributing 141 problem proposals.
Australia, Austria, Belgium, Benin, Bulgaria, Colombia, Croatia, Cyprus, Czech Republic, Denmark, Ecuador, Estonia, Finland, France, Georgia, Germany, Greece, Hong Kong, Hungary, Iceland, India, Indonesia, Iran, Ireland, Japan, Lithuania, Luxembourg, Malaysia, Mongolia, Netherlands, Nigeria, Pakistan, Russia, Saudi Arabia, Serbia, Slovakia, Slovenia, South Korea, Thailand, Turkey, Ukraine, United Kingdom, U.S.A.
Problem Selection Committee
Johan Meyer
Ilya I. Bogdanov
Géza Kós
Waldemar Pompe
Christian Reiher
Stephan Wagner

Problems
Algebra
A1. Let $z_{0}<z_{1}<z_{2}<\cdots$ be an infinite sequence of positive integers. Prove that there exists a unique integer $n \geqslant 1$ such that
$$z_{n}<\frac{z_{0}+z_{1}+\cdots+z_{n}}{n} \leqslant z_{n+1} .$$
(Austria) A2. Define the function $f:(0,1) \rightarrow(0,1)$ by
$$f(x)= \begin{cases}x+\frac{1}{2} & \text { if } x<\frac{1}{2}, \\ x^{2} & \text { if } x \geqslant \frac{1}{2} .\end{cases}$$
Let $a$ and $b$ be two real numbers such that $0<a<b<1$. We define the sequences $a_{n}$ and $b_{n}$ by $a_{0}=a, b_{0}=b$, and $a_{n}=f\left(a_{n-1}\right), b_{n}=f\left(b_{n-1}\right)$ for $n>0$. Show that there exists a positive integer $n$ such that
$$\left(a_{n}-a_{n-1}\right)\left(b_{n}-b_{n-1}\right)<0 .$$
(Denmark) A3. For a sequence $x_{1}, x_{2}, \ldots, x_{n}$ of real numbers, we define its price as
$$\max _{1 \leqslant i \leqslant n}\left|x_{1}+x_{2}+\cdots+x_{i}\right| .$$
Given $n$ real numbers, Dave and George want to arrange them into a sequence with a low price. Diligent Dave checks all possible ways and finds the minimum possible price $D$. Greedy George, on the other hand, chooses $x_{1}$ such that $\left|x_{1}\right|$ is as small as possible; among the remaining numbers, he chooses $x_{2}$ such that $\left|x_{1}+x_{2}\right|$ is as small as possible, and so on. Thus, in the $i^{\text {th }}$ step he chooses $x_{i}$ among the remaining numbers so as to minimise the value of $\left|x_{1}+x_{2}+\cdots+x_{i}\right|$. In each step, if several numbers provide the same value, George chooses one at random. Finally he gets a sequence with price $G$.
Find the least possible constant $c$ such that for every positive integer $n$, for every collection of $n$ real numbers, and for every possible sequence that George might obtain, the resulting values satisfy the inequality $G \leqslant c D$. (Georgia) A4. Determine all functions $f: \mathbb{Z} \rightarrow \mathbb{Z}$ satisfying
$$f(f(m)+n)+f(m)=f(n)+f(3 m)+2014$$
for all integers $m$ and $n$. (Netherlands)
A5. Consider all polynomials $P(x)$ with real coefficients that have the following property: for any two real numbers $x$ and $y$ one has
$$\left|y^{2}-P(x)\right| \leqslant 2|x| \quad \text { if and only if } \quad\left|x^{2}-P(y)\right| \leqslant 2|y| .$$
Determine all possible values of $P(0)$. (Belgium)
A6. Find all functions $f: \mathbb{Z} \rightarrow \mathbb{Z}$ such that
$$n^{2}+4 f(n)=f(f(n))^{2}$$
for all $n \in \mathbb{Z}$. (United Kingdom)
Combinatorics
C1. Let $n$ points be given inside a rectangle $R$ such that no two of them lie on a line parallel to one of the sides of $R$. The rectangle $R$ is to be dissected into smaller rectangles with sides parallel to the sides of $R$ in such a way that none of these rectangles contains any of the given points in its interior. Prove that we have to dissect $R$ into at least $n+1$ smaller rectangles.
(Serbia)
C2. We have $2^{m}$ sheets of paper, with the number 1 written on each of them. We perform the following operation. In every step we choose two distinct sheets; if the numbers on the two sheets are $a$ and $b$, then we erase these numbers and write the number $a+b$ on both sheets. Prove that after $m 2^{m-1}$ steps, the sum of the numbers on all the sheets is at least $4^{m}$.
(Iran)
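An editorial aside, not part of the shortlist: the bound in C2 is in fact attained with equality by the strategy that always pairs two sheets showing the same number, so it cannot be improved. The helper `pair_equal_rounds` below is a hypothetical name introduced just for this check.

```python
# Illustrative check for C2 (editorial, not part of the official solution):
# pairing equal sheets in m rounds of 2^(m-1) operations each makes every
# sheet show 2^m, so the final sum is exactly 4^m.

def pair_equal_rounds(m):
    """Run m rounds; each round performs 2^(m-1) operations, replacing a
    pair (a, b) by (a+b, a+b). Returns (steps_performed, final_sum)."""
    sheets = [1] * (1 << m)
    steps = 0
    for _ in range(m):
        for i in range(0, len(sheets), 2):
            total = sheets[i] + sheets[i + 1]   # the operation of the problem
            sheets[i] = sheets[i + 1] = total
            steps += 1
    return steps, sum(sheets)

for m in range(1, 7):
    steps, total = pair_equal_rounds(m)
    assert steps == m * 2 ** (m - 1)
    assert total == 4 ** m                      # the stated bound is tight
```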
C3. Let $n \geqslant 2$ be an integer. Consider an $n \times n$ chessboard divided into $n^{2}$ unit squares. We call a configuration of $n$ rooks on this board happy if every row and every column contains exactly one rook. Find the greatest positive integer $k$ such that for every happy configuration of rooks, we can find a $k \times k$ square without a rook on any of its $k^{2}$ unit squares.
(Croatia)
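As an editorial sanity check of C3 (the answer, given later in the solutions, is $k=\lfloor\sqrt{n-1}\rfloor$), a brute force over all happy configurations is feasible for tiny boards. The helper names below are ours, not the official solution's.

```python
# Brute-force check of C3 for small n: the greatest k such that every
# happy configuration leaves some empty k x k square is floor(sqrt(n-1)).
from itertools import permutations
from math import isqrt

def max_empty_square(perm):
    """Largest k such that the happy configuration with a rook at
    (i, perm[i]) leaves some k x k square entirely free of rooks."""
    n = len(perm)
    best = 0
    for k in range(1, n + 1):
        if any(all(not (c <= perm[i] < c + k) for i in range(r, r + k))
               for r in range(n - k + 1) for c in range(n - k + 1)):
            best = k          # an empty k x k square exists
        else:
            break             # if no empty k x k exists, none larger does
    return best

def guaranteed_k(n):
    """Greatest k such that EVERY happy configuration on the n x n board
    contains an empty k x k square."""
    return min(max_empty_square(p) for p in permutations(range(n)))

for n in range(2, 7):
    assert guaranteed_k(n) == isqrt(n - 1)   # matches the known answer
```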
C4. Construct a tetromino by attaching two $2 \times 1$ dominoes along their longer sides such that the midpoint of the longer side of one domino is a corner of the other domino. This construction yields two kinds of tetrominoes with opposite orientations. Let us call them S- and Z-tetrominoes, respectively.

(Figure: S-tetrominoes and Z-tetrominoes)
Assume that a lattice polygon $P$ can be tiled with S-tetrominoes. Prove that no matter how we tile $P$ using only S- and Z-tetrominoes, we always use an even number of Z-tetrominoes. (Hungary) C5. Consider $n \geqslant 3$ lines in the plane such that no two lines are parallel and no three have a common point. These lines divide the plane into polygonal regions; let $\mathcal{F}$ be the set of regions having finite area. Prove that it is possible to colour $\lceil\sqrt{n / 2}\rceil$ of the lines blue in such a way that no region in $\mathcal{F}$ has a completely blue boundary. (For a real number $x,\lceil x\rceil$ denotes the least integer which is not smaller than $x$.)
C6. We are given an infinite deck of cards, each with a real number on it. For every real number $x$, there is exactly one card in the deck that has $x$ written on it. Now two players draw disjoint sets $A$ and $B$ of 100 cards each from this deck. We would like to define a rule that declares one of them a winner. This rule should satisfy the following conditions:
- The winner only depends on the relative order of the 200 cards: if the cards are laid down in increasing order face down and we are told which card belongs to which player, but not what numbers are written on them, we can still decide the winner.
- If we write the elements of both sets in increasing order as $A=\left\{a_{1}, a_{2}, \ldots, a_{100}\right\}$ and $B=\left\{b_{1}, b_{2}, \ldots, b_{100}\right\}$, and $a_{i}>b_{i}$ for all $i$, then $A$ beats $B$.
- If three players draw three disjoint sets $A, B, C$ from the deck, and if $A$ beats $B$ and $B$ beats $C$, then $A$ also beats $C$.
How many ways are there to define such a rule? Here, we consider two rules as different if there exist two sets $A$ and $B$ such that $A$ beats $B$ according to one rule, but $B$ beats $A$ according to the other. (Russia) C7. Let $M$ be a set of $n \geqslant 4$ points in the plane, no three of which are collinear. Initially these points are connected with $n$ segments so that each point in $M$ is the endpoint of exactly two segments. Then, at each step, one may choose two segments $A B$ and $C D$ sharing a common interior point and replace them by the segments $A C$ and $B D$ if none of them is present at this moment. Prove that it is impossible to perform $n^{3} / 4$ or more such moves. (Russia) C8. A card deck consists of 1024 cards. On each card, a set of distinct decimal digits is written in such a way that no two of these sets coincide (thus, one of the cards is empty). Two players alternately take cards from the deck, one card per turn. After the deck is empty, each player checks if he can throw out one of his cards so that each of the ten digits occurs on an even number of his remaining cards. If one player can do this but the other one cannot, the one who can is the winner; otherwise a draw is declared.
Determine all possible first moves of the first player after which he has a winning strategy. (Russia) C9. There are $n$ circles drawn on a piece of paper in such a way that any two circles intersect in two points, and no three circles pass through the same point. Turbo the snail slides along the circles in the following fashion. Initially he moves on one of the circles in clockwise direction. Turbo always keeps sliding along the current circle until he reaches an intersection with another circle. Then he continues his journey on this new circle and also changes the direction of moving, i.e. from clockwise to anticlockwise or vice versa.
Suppose that Turbo's path entirely covers all circles. Prove that $n$ must be odd.
Geometry
G1. The points $P$ and $Q$ are chosen on the side $B C$ of an acute-angled triangle $A B C$ so that $\angle P A B=\angle A C B$ and $\angle Q A C=\angle C B A$. The points $M$ and $N$ are taken on the rays $A P$ and $A Q$, respectively, so that $A P=P M$ and $A Q=Q N$. Prove that the lines $B M$ and $C N$ intersect on the circumcircle of the triangle $A B C$. (Georgia) G2. Let $A B C$ be a triangle. The points $K, L$, and $M$ lie on the segments $B C, C A$, and $A B$, respectively, such that the lines $A K, B L$, and $C M$ intersect in a common point. Prove that it is possible to choose two of the triangles $A L M, B M K$, and $C K L$ whose inradii sum up to at least the inradius of the triangle $A B C$. (Estonia) G3. Let $\Omega$ and $O$ be the circumcircle and the circumcentre of an acute-angled triangle $A B C$ with $A B>B C$. The angle bisector of $\angle A B C$ intersects $\Omega$ at $M \neq B$. Let $\Gamma$ be the circle with diameter $B M$. The angle bisectors of $\angle A O B$ and $\angle B O C$ intersect $\Gamma$ at points $P$ and $Q$, respectively. The point $R$ is chosen on the line $P Q$ so that $B R=M R$. Prove that $B R \parallel A C$. (Here we always assume that an angle bisector is a ray.) (Russia) G4. Consider a fixed circle $\Gamma$ with three fixed points $A, B$, and $C$ on it. Also, let us fix a real number $\lambda \in(0,1)$. For a variable point $P \notin\{A, B, C\}$ on $\Gamma$, let $M$ be the point on the segment $C P$ such that $C M=\lambda \cdot C P$. Let $Q$ be the second point of intersection of the circumcircles of the triangles $A M P$ and $B M C$. Prove that as $P$ varies, the point $Q$ lies on a fixed circle. (United Kingdom) G5. Let $A B C D$ be a convex quadrilateral with $\angle B=\angle D=90^{\circ}$. Point $H$ is the foot of the perpendicular from $A$ to $B D$. The points $S$ and $T$ are chosen on the sides $A B$ and $A D$, respectively, in such a way that $H$ lies inside triangle $S C T$ and
$$\angle C H S-\angle C S B=90^{\circ}, \quad \angle T H C-\angle D T C=90^{\circ} .$$
Prove that the circumcircle of triangle $S H T$ is tangent to the line $B D$. (Iran) G6. Let $A B C$ be a fixed acute-angled triangle. Consider some points $E$ and $F$ lying on the sides $A C$ and $A B$, respectively, and let $M$ be the midpoint of $E F$. Let the perpendicular bisector of $E F$ intersect the line $B C$ at $K$, and let the perpendicular bisector of $M K$ intersect the lines $A C$ and $A B$ at $S$ and $T$, respectively. We call the pair $(E, F)$ interesting, if the quadrilateral $K S A T$ is cyclic.
Suppose that the pairs $\left(E_{1}, F_{1}\right)$ and $\left(E_{2}, F_{2}\right)$ are interesting. Prove that
$$\frac{E_{1} E_{2}}{A B}=\frac{F_{1} F_{2}}{A C} .$$
G7. Let $A B C$ be a triangle with circumcircle $\Omega$ and incentre $I$. Let the line passing through $I$ and perpendicular to $C I$ intersect the segment $B C$ and the arc $B C$ (not containing $A$) of $\Omega$ at points $U$ and $V$, respectively. Let the line passing through $U$ and parallel to $A I$ intersect $A V$ at $X$, and let the line passing through $V$ and parallel to $A I$ intersect $A B$ at $Y$. Let $W$ and $Z$ be the midpoints of $A X$ and $B C$, respectively. Prove that if the points $I, X$, and $Y$ are collinear, then the points $I, W$, and $Z$ are also collinear.
Number Theory
N1. Let $n \geqslant 2$ be an integer, and let $A_{n}$ be the set
$$A_{n}=\left\{2^{n}-2^{k} \mid k \in \mathbb{Z}, 0 \leqslant k<n\right\} .$$
Determine the largest positive integer that cannot be written as the sum of one or more (not necessarily distinct) elements of $A_{n}$. (Serbia) N2. Determine all pairs $(x, y)$ of positive integers such that
$$\sqrt[3]{7 x^{2}-13 x y+7 y^{2}}=|x-y|+1 .$$
N3. A coin is called a Cape Town coin if its value is $1 / n$ for some positive integer $n$. Given a collection of Cape Town coins of total value at most $99+\frac{1}{2}$, prove that it is possible to split this collection into at most 100 groups each of total value at most 1. (Luxembourg) N4. Let $n>1$ be a given integer. Prove that infinitely many terms of the sequence $\left(a_{k}\right)_{k \geqslant 1}$, defined by
$$a_{k}=\left\lfloor\frac{n^{k}}{k}\right\rfloor,$$
are odd. (For a real number $x,\lfloor x\rfloor$ denotes the largest integer not exceeding $x$.) (Hong Kong) N5. Find all triples $(p, x, y)$ consisting of a prime number $p$ and two positive integers $x$ and $y$ such that $x^{p-1}+y$ and $x+y^{p-1}$ are both powers of $p$. (Belgium) N6. Let $a_{1}<a_{2}<\cdots<a_{n}$ be pairwise coprime positive integers with $a_{1}$ being prime and $a_{1} \geqslant n+2$. On the segment $I=\left[0, a_{1} a_{2} \cdots a_{n}\right]$ of the real line, mark all integers that are divisible by at least one of the numbers $a_{1}, \ldots, a_{n}$. These points split $I$ into a number of smaller segments. Prove that the sum of the squares of the lengths of these segments is divisible by $a_{1}$. (Serbia) N7. Let $c \geqslant 1$ be an integer. Define a sequence of positive integers by $a_{1}=c$ and
$$a_{n+1}=a_{n}^{3}-4 c \cdot a_{n}^{2}+5 c^{2} \cdot a_{n}+c$$
for all $n \geqslant 1$. Prove that for each integer $n \geqslant 2$ there exists a prime number $p$ dividing $a_{n}$ but none of the numbers $a_{1}, \ldots, a_{n-1}$. (Austria) N8. For every real number $x$, let $\|x\|$ denote the distance between $x$ and the nearest integer. Prove that for every pair $(a, b)$ of positive integers there exist an odd prime $p$ and a positive integer $k$ satisfying
$$\left\|\frac{a}{p^{k}}\right\|+\left\|\frac{b}{p^{k}}\right\|+\left\|\frac{a+b}{p^{k}}\right\|=1 .$$
(Hungary)
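A small numerical illustration of N8 (an editorial aside, using exact rational arithmetic): for each small pair $(a,b)$ one can simply search for a witness pair $(p,k)$. The helpers `dist` and `witness` are names of ours, not from the official text.

```python
# Editorial check of N8: search for an odd prime p and k >= 1 with
# ||a/p^k|| + ||b/p^k|| + ||(a+b)/p^k|| = 1, using exact Fractions.
from fractions import Fraction
from math import floor

def dist(x):
    """||x||: the distance from the rational x to the nearest integer."""
    f = x - floor(x)          # fractional part in [0, 1)
    return min(f, 1 - f)

def witness(a, b, pmax=60, kmax=8):
    """Return some (p, k) satisfying the N8 equation, or None if the
    bounded search fails."""
    for p in range(3, pmax, 2):
        if any(p % q == 0 for q in range(3, p, 2)):
            continue          # p is not prime
        for k in range(1, kmax):
            u = Fraction(1, p ** k)
            if dist(a * u) + dist(b * u) + dist((a + b) * u) == 1:
                return p, k
    return None

for a in range(1, 8):
    for b in range(1, 8):
        assert witness(a, b) is not None   # a witness exists, as N8 asserts
```

For instance $(a,b)=(1,1)$ is witnessed by $p=3$, $k=1$, since $\frac13+\frac13+\frac13=1$.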
Solutions
Algebra
A1. Let $z_{0}<z_{1}<z_{2}<\cdots$ be an infinite sequence of positive integers. Prove that there exists a unique integer $n \geqslant 1$ such that
$$z_{n}<\frac{z_{0}+z_{1}+\cdots+z_{n}}{n} \leqslant z_{n+1} . \tag{1}$$
(Austria) Solution. For $n=1,2, \ldots$ define
$$d_{n}=\left(z_{0}+z_{1}+\cdots+z_{n}\right)-n z_{n} .$$
The sign of $d_{n}$ indicates whether the first inequality in (1) holds; i.e., it is satisfied if and only if $d_{n}>0$.
Notice that
$$d_{n+1}=\left(z_{0}+z_{1}+\cdots+z_{n+1}\right)-(n+1) z_{n+1}=\left(z_{0}+z_{1}+\cdots+z_{n}\right)-n z_{n+1},$$
so the second inequality in (1) is equivalent to $d_{n+1} \leqslant 0$. Therefore, we have to prove that there is a unique index $n \geqslant 1$ that satisfies $d_{n}>0 \geqslant d_{n+1}$.
By its definition the sequence $d_{1}, d_{2}, \ldots$ consists of integers and we have
$$d_{1}=z_{0}+z_{1}-1 \cdot z_{1}=z_{0}>0 .$$
From $d_{n+1}-d_{n}=\left(\left(z_{0}+\cdots+z_{n}+z_{n+1}\right)-(n+1) z_{n+1}\right)-\left(\left(z_{0}+\cdots+z_{n}\right)-n z_{n}\right)=n\left(z_{n}-z_{n+1}\right)<0$ we can see that $d_{n+1}<d_{n}$ and thus the sequence strictly decreases. Hence, we have a decreasing sequence $d_{1}>d_{2}>\ldots$ of integers such that its first element $d_{1}$ is positive. The sequence must therefore drop to a nonpositive value at some point, and thus there is a unique index $n$, namely the index of the last positive term, satisfying $d_{n}>0 \geqslant d_{n+1}$.
Comment. Omitting the assumption that $z_{1}, z_{2}, \ldots$ are integers allows the numbers $d_{n}$ to be all positive. In such cases the desired $n$ does not exist. This happens for example if $z_{n}=2-\frac{1}{2^{n}}$ for all integers $n \geqslant 0$.
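An editorial aside: the solution's criterion $d_{n}>0 \geqslant d_{n+1}$ makes A1 easy to test numerically on random increasing integer sequences. The helper `unique_indices` below is a hypothetical name introduced for this check.

```python
# Randomized check of A1: for increasing positive integers z_0 < z_1 < ...,
# exactly one n >= 1 satisfies z_n < (z_0 + ... + z_n)/n <= z_{n+1}.
import random

def unique_indices(z, limit):
    """All indices 1 <= n < limit with z_n < (z_0+...+z_n)/n <= z_{n+1};
    equivalently d_n > 0 >= d_{n+1} for d_n = (z_0+...+z_n) - n*z_n."""
    hits = []
    for n in range(1, limit):
        s = sum(z[: n + 1])                   # z_0 + ... + z_n
        if n * z[n] < s <= n * z[n + 1]:      # integer arithmetic, exact
            hits.append(n)
    return hits

random.seed(0)
for _ in range(200):
    z = [random.randint(1, 10)]
    for _ in range(40):
        z.append(z[-1] + random.randint(1, 5))
    assert len(unique_indices(z, 30)) == 1    # exactly one such n
```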
A2. Define the function $f:(0,1) \rightarrow(0,1)$ by
$$f(x)= \begin{cases}x+\frac{1}{2} & \text { if } x<\frac{1}{2}, \\ x^{2} & \text { if } x \geqslant \frac{1}{2} .\end{cases}$$
Let $a$ and $b$ be two real numbers such that $0<a<b<1$. We define the sequences $a_{n}$ and $b_{n}$ by $a_{0}=a, b_{0}=b$, and $a_{n}=f\left(a_{n-1}\right), b_{n}=f\left(b_{n-1}\right)$ for $n>0$. Show that there exists a positive integer $n$ such that
$$\left(a_{n}-a_{n-1}\right)\left(b_{n}-b_{n-1}\right)<0 .$$
(Denmark) Solution. Note that
$$f(x)-x=\frac{1}{2}>0$$
if $x<\frac{1}{2}$ and
$$f(x)-x=x^{2}-x<0$$
if $x \geqslant \frac{1}{2}$. So if we consider $(0,1)$ as being divided into the two subintervals $I_{1}=\left(0, \frac{1}{2}\right)$ and $I_{2}=\left[\frac{1}{2}, 1\right)$, the inequality
$$\left(a_{n}-a_{n-1}\right)\left(b_{n}-b_{n-1}\right)<0$$
holds if and only if $a_{n-1}$ and $b_{n-1}$ lie in distinct subintervals. Let us now assume, to the contrary, that $a_{k}$ and $b_{k}$ always lie in the same subinterval. Consider the distance $d_{k}=\left|a_{k}-b_{k}\right|$. If both $a_{k}$ and $b_{k}$ lie in $I_{1}$, then
$$d_{k+1}=\left|a_{k+1}-b_{k+1}\right|=\left|\left(a_{k}+\tfrac{1}{2}\right)-\left(b_{k}+\tfrac{1}{2}\right)\right|=d_{k} .$$
If, on the other hand, $a_{k}$ and $b_{k}$ both lie in $I_{2}$, then $\min \left(a_{k}, b_{k}\right) \geqslant \frac{1}{2}$ and $\max \left(a_{k}, b_{k}\right)=$ $\min \left(a_{k}, b_{k}\right)+d_{k} \geqslant \frac{1}{2}+d_{k}$, which implies
$$d_{k+1}=\left|a_{k}^{2}-b_{k}^{2}\right|=\left(a_{k}+b_{k}\right) d_{k} \geqslant\left(1+d_{k}\right) d_{k}>d_{k} .$$
This means that the difference $d_{k}$ is non-decreasing, and in particular $d_{k} \geqslant d_{0}>0$ for all $k$. We can even say more. If $a_{k}$ and $b_{k}$ lie in $I_{2}$, then
$$d_{k+2} \geqslant d_{k+1} \geqslant\left(1+d_{k}\right) d_{k} \geqslant\left(1+d_{0}\right) d_{k} .$$
If $a_{k}$ and $b_{k}$ both lie in $I_{1}$, then $a_{k+1}$ and $b_{k+1}$ both lie in $I_{2}$, and so we have
$$d_{k+2}=\left(a_{k+1}+b_{k+1}\right) d_{k+1} \geqslant\left(1+d_{k+1}\right) d_{k+1}=\left(1+d_{k}\right) d_{k} \geqslant\left(1+d_{0}\right) d_{k} .$$
In either case, $d_{k+2} \geqslant d_{k}\left(1+d_{0}\right)$, and inductively we get
$$d_{2 m} \geqslant\left(1+d_{0}\right)^{m} d_{0} .$$
For sufficiently large $m$, the right-hand side is greater than 1, but since $a_{2 m}, b_{2 m}$ both lie in $(0,1)$, we must have $d_{2 m}<1$, a contradiction.
Thus there must be a positive integer $n$ such that $a_{n-1}$ and $b_{n-1}$ do not lie in the same subinterval, which proves the desired statement.
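As an editorial illustration, one can iterate the A2 map on a computer and watch for the first step at which the two orbits move in opposite directions. The helper `first_sign_flip` is a hypothetical name introduced for this sketch.

```python
# Editorial illustration of A2: iterate f on two starting points and find
# the first n with (a_n - a_{n-1})(b_n - b_{n-1}) < 0.

def f(x):
    # the A2 map: a shift on I1 = (0, 1/2), squaring on I2 = [1/2, 1)
    return x + 0.5 if x < 0.5 else x * x

def first_sign_flip(a, b, max_iter=10_000):
    """Smallest n >= 1 such that (a_n - a_{n-1})(b_n - b_{n-1}) < 0,
    i.e. the first step where one sequence rises while the other falls."""
    for n in range(1, max_iter):
        fa, fb = f(a), f(b)
        if (fa - a) * (fb - b) < 0:
            return n
        a, b = fa, fb
    return None

for a, b in [(0.1, 0.4), (0.25, 0.75), (0.3, 0.9), (0.01, 0.02)]:
    assert first_sign_flip(a, b) is not None   # such an n always shows up
```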
A3. For a sequence $x_{1}, x_{2}, \ldots, x_{n}$ of real numbers, we define its price as
$$\max _{1 \leqslant i \leqslant n}\left|x_{1}+x_{2}+\cdots+x_{i}\right| .$$
Given $n$ real numbers, Dave and George want to arrange them into a sequence with a low price. Diligent Dave checks all possible ways and finds the minimum possible price $D$. Greedy George, on the other hand, chooses $x_{1}$ such that $\left|x_{1}\right|$ is as small as possible; among the remaining numbers, he chooses $x_{2}$ such that $\left|x_{1}+x_{2}\right|$ is as small as possible, and so on. Thus, in the $i^{\text {th }}$ step he chooses $x_{i}$ among the remaining numbers so as to minimise the value of $\left|x_{1}+x_{2}+\cdots+x_{i}\right|$. In each step, if several numbers provide the same value, George chooses one at random. Finally he gets a sequence with price $G$.
Find the least possible constant $c$ such that for every positive integer $n$, for every collection of $n$ real numbers, and for every possible sequence that George might obtain, the resulting values satisfy the inequality $G \leqslant c D$. (Georgia) Answer. $c=2$. Solution. If the initial numbers are $1,-1,2$, and -2 , then Dave may arrange them as $1,-2,2,-1$, while George may get the sequence $1,-1,2,-2$, resulting in $D=1$ and $G=2$. So we obtain $c \geqslant 2$.
Therefore, it remains to prove that $G \leqslant 2 D$. Let $x_{1}, x_{2}, \ldots, x_{n}$ be the numbers Dave and George have at their disposal. Assume that Dave and George arrange them into sequences $d_{1}, d_{2}, \ldots, d_{n}$ and $g_{1}, g_{2}, \ldots, g_{n}$, respectively. Put
$$M=\max _{1 \leqslant i \leqslant n}\left|x_{i}\right|, \quad S=\left|x_{1}+x_{2}+\cdots+x_{n}\right|, \quad \text { and } \quad N=\max \{M, S\} .$$
We claim that
$$D \geqslant S, \tag{1}$$
$$D \geqslant \frac{M}{2}, \quad \text { and } \tag{2}$$
$$G \leqslant N=\max \{M, S\} . \tag{3}$$
These inequalities yield the desired estimate, as $G \leqslant \max \{M, S\} \leqslant \max \{M, 2 S\} \leqslant 2 D$. The inequality (1) is a direct consequence of the definition of the price. To prove (2), consider an index $i$ with $\left|d_{i}\right|=M$. Then we have
$$M=\left|d_{i}\right|=\left|\left(d_{1}+\cdots+d_{i}\right)-\left(d_{1}+\cdots+d_{i-1}\right)\right| \leqslant\left|d_{1}+\cdots+d_{i}\right|+\left|d_{1}+\cdots+d_{i-1}\right| \leqslant 2 D,$$
as required. It remains to establish (3). Put $h_{i}=g_{1}+g_{2}+\cdots+g_{i}$. We will prove by induction on $i$ that $\left|h_{i}\right| \leqslant N$. The base case $i=1$ holds, since $\left|h_{1}\right|=\left|g_{1}\right| \leqslant M \leqslant N$. Notice also that $\left|h_{n}\right|=S \leqslant N$.
For the induction step, assume that $\left|h_{i-1}\right| \leqslant N$. We distinguish two cases. Case 1. Assume that no two of the numbers $g_{i}, g_{i+1}, \ldots, g_{n}$ have opposite signs. Without loss of generality, we may assume that they are all nonnegative. Then one has $h_{i-1} \leqslant h_{i} \leqslant \cdots \leqslant h_{n}$, thus
$$\left|h_{i}\right| \leqslant \max \left\{\left|h_{i-1}\right|,\left|h_{n}\right|\right\} \leqslant N .$$
Case 2. Among the numbers $g_{i}, g_{i+1}, \ldots, g_{n}$ there are positive and negative ones.
Then there exists some index $j \geqslant i$ such that $h_{i-1} g_{j} \leqslant 0$. By the definition of George's sequence we have
$$\left|h_{i}\right| \leqslant\left|h_{i-1}+g_{j}\right| \leqslant \max \left\{\left|h_{i-1}\right|,\left|g_{j}\right|\right\} \leqslant N .$$
Thus, the induction step is established. Comment 1. One can establish the weaker inequalities $D \geqslant \frac{M}{2}$ and $G \leqslant D+\frac{M}{2}$ from which the result also follows.
Comment 2. One may ask a more specific question to find the maximal suitable $c$ if the number $n$ is fixed. For $n=1$ or 2 , the answer is $c=1$. For $n=3$, the answer is $c=\frac{3}{2}$, and it is reached e.g., for the collection $1,2,-4$. Finally, for $n \geqslant 4$ the answer is $c=2$. In this case the arguments from the solution above apply, and the answer is reached e.g., for the same collection $1,-1,2,-2$, augmented by several zeroes.
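An editorial aside: the lower-bound example from the solution is easy to replay by machine. The helpers `price`, `dave`, and `george` below are hypothetical names; George's tie-breaking is random in the problem, so this sketch fixes one admissible tie-breaking (first minimiser in list order), which happens to reproduce the run with $G=2D$.

```python
# Editorial replay of the A3 example 1, -1, 2, -2: Dave reaches D = 1,
# and one admissible greedy run of George reaches G = 2.
from itertools import permutations

def price(seq):
    """The price of A3: max over i of |x_1 + ... + x_i|."""
    s, best = 0, 0
    for x in seq:
        s += x
        best = max(best, abs(s))
    return best

def dave(nums):
    """D: the minimum price over all orderings (exhaustive search)."""
    return min(price(p) for p in permutations(nums))

def george(nums):
    """One admissible greedy run: at each step append a remaining number
    minimising the new |partial sum| (ties broken by list order)."""
    rest, s, seq = list(nums), 0, []
    while rest:
        x = min(rest, key=lambda v: abs(s + v))
        rest.remove(x)
        seq.append(x)
        s += x
    return price(seq)

nums = [1, -1, 2, -2]          # the collection from the solution
assert dave(nums) == 1         # attained e.g. by the ordering 1, -2, 2, -1
assert george(nums) == 2       # this tie-breaking reproduces G = 2D
```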
A4. Determine all functions $f: \mathbb{Z} \rightarrow \mathbb{Z}$ satisfying
$$f(f(m)+n)+f(m)=f(n)+f(3 m)+2014 \tag{1}$$
for all integers $m$ and $n$. (Netherlands) Answer. There is only one such function, namely $n \longmapsto 2 n+1007$. Solution. Let $f$ be a function satisfying (1). Set $C=1007$ and define the function $g: \mathbb{Z} \rightarrow \mathbb{Z}$ by $g(m)=f(3 m)-f(m)+2 C$ for all $m \in \mathbb{Z}$; in particular, $g(0)=2 C$. Now (1) rewrites as
$$f(f(m)+n)=g(m)+f(n)$$
for all $m, n \in \mathbb{Z}$. By induction in both directions it follows that
$$f(n+t f(m))=f(n)+t g(m) \tag{2}$$
holds for all $m, n, t \in \mathbb{Z}$. Applying this, for any $r \in \mathbb{Z}$, to the triples $(r, 0, f(0))$ and $(0,0, f(r))$ in place of $(m, n, t)$ we obtain
$$f(0)+f(0) g(r)=f(f(r) f(0))=f(0)+f(r) g(0) .$$
Now if $f(0)$ vanished, then $g(0)=2 C>0$ would entail that $f$ vanishes identically, contrary to (1). Thus $f(0) \neq 0$ and the previous equation yields $g(r)=\alpha f(r)$, where $\alpha=\frac{g(0)}{f(0)}$ is some nonzero constant.
So the definition of $g$ reveals $f(3 m)=(1+\alpha) f(m)-2 C$, i.e.,
$$f(3 m)-\beta=(1+\alpha)(f(m)-\beta) \tag{3}$$
for all $m \in \mathbb{Z}$, where $\beta=\frac{2 C}{\alpha}$. By induction on $k$ this implies
$$f\left(3^{k} m\right)-\beta=(1+\alpha)^{k}(f(m)-\beta) \tag{4}$$
for all integers $k \geqslant 0$ and $m$. Since $3 \nmid 2014$, there exists by (1) some value $d=f(a)$ attained by $f$ that is not divisible by 3 . Now by (2) we have $f(n+t d)=f(n)+t g(a)=f(n)+\alpha \cdot t f(a)$, i.e.,
$$f(n+t d)=f(n)+\alpha t d \tag{5}$$
for all $n, t \in \mathbb{Z}$. Let us fix any positive integer $k$ with $d \mid\left(3^{k}-1\right)$, which is possible, since $\operatorname{gcd}(3, d)=1$. E.g., by the Euler-Fermat theorem, we may take $k=\varphi(|d|)$. Now for each $m \in \mathbb{Z}$ we get
$$f\left(3^{k} m\right)=f\left(m+\left(3^{k}-1\right) m\right)=f(m)+\alpha\left(3^{k}-1\right) m$$
from (5), which in view of (4) yields $\left((1+\alpha)^{k}-1\right)(f(m)-\beta)=\alpha\left(3^{k}-1\right) m$. Since $\alpha \neq 0$, the right hand side does not vanish for $m \neq 0$, wherefore the first factor on the left hand side cannot vanish either. It follows that
$$f(m)=\frac{\alpha\left(3^{k}-1\right)}{(1+\alpha)^{k}-1} m+\beta$$
for all $m \neq 0$; and since $g(0)=\alpha f(0)$ gives $f(0)=\frac{2 C}{\alpha}=\beta$, the same formula holds for $m=0$ as well.
So $f$ is a linear function, say $f(m)=A m+\beta$ for all $m \in \mathbb{Z}$ with some constant $A \in \mathbb{Q}$. Plugging this into (1) one obtains $\left(A^{2}-2 A\right) m+(A \beta-2 C)=0$ for all $m$, which is equivalent to the conjunction of
$$A^{2}=2 A \quad \text { and } \quad A \beta=2 C . \tag{6}$$
The first equation is equivalent to $A \in\{0,2\}$, and as $C \neq 0$ the second one gives
$$A=2 \quad \text { and } \quad \beta=C . \tag{7}$$
This shows that $f$ is indeed the function mentioned in the answer and as the numbers found in (7) do indeed satisfy the equations (6) this function is indeed as desired.
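An editorial aside: with the functional equation as recovered above (with $2C = 2014$), the final check that $n \longmapsto 2n + 1007$ works is a one-liner to verify by machine.

```python
# Editorial check of the A4 answer: f(n) = 2n + 1007 satisfies
# f(f(m)+n) + f(m) = f(n) + f(3m) + 2014 for all integers m, n.

def f(n):
    # the unique solution claimed in A4
    return 2 * n + 1007

for m in range(-100, 101):
    for n in range(-100, 101):
        assert f(f(m) + n) + f(m) == f(n) + f(3 * m) + 2014
```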
Comment 1. One may see that $\alpha=2$. A more pedestrian version of the above solution starts with a direct proof of this fact, that can be obtained by substituting some special values into (1), e.g., as follows.
Set $D=f(0)$. Plugging $m=0$ into (1) and simplifying, we get
$$f(n+D)=f(n)+2 C \tag{8}$$
for all $n \in \mathbb{Z}$. In particular, for $n=0, D, 2 D$ we obtain $f(D)=2 C+D, f(2 D)=f(D)+2 C=4 C+D$, and $f(3 D)=f(2 D)+2 C=6 C+D$. So substituting $m=D$ and $n=r-D$ into (1) and applying (8) with $n=r-D$ afterwards we learn
$$f(r+2 C)+(2 C+D)=f(r-D)+f(3 D)+2 C=f(r)+6 C+D,$$
i.e., $f(r+2 C)=f(r)+4 C$. By induction in both directions it follows that
$$f(n+2 t C)=f(n)+4 t C \tag{9}$$
holds for all $n, t \in \mathbb{Z}$. Claim. If $a$ and $b$ denote two integers with the property that $f(n+a)=f(n)+b$ holds for all $n \in \mathbb{Z}$, then $b=2 a$. Proof. Applying induction in both directions to the assumption we get $f(n+t a)=f(n)+t b$ for all $n, t \in \mathbb{Z}$. Plugging $(n, t)=(0,2 C)$ into this equation and $(n, t)=(0, a)$ into (9) we get $f(2 a C)-f(0)=2 b C=4 a C$, and, as $C \neq 0$, the claim follows.
Now by (1), for any $m \in \mathbb{Z}$, the numbers $a=f(m)$ and $b=f(3 m)-f(m)+2 C$ have the property mentioned in the claim, whence we have
$$f(3 m)-f(m)+2 C=2 f(m), \quad \text { i.e., } \quad f(3 m)=3 f(m)-2 C .$$
In view of (3) this tells us indeed that $\alpha=2$. Now the solution may be completed as above, but due to our knowledge of $\alpha=2$ we get the desired formula $f(m)=2 m+C$ directly without having the need to go through all linear functions. Now it just remains to check that this function does indeed satisfy (1).
Comment 2. It is natural to wonder what happens if one replaces the number 2014 appearing in the statement of the problem by some arbitrary integer $B$.
If $B$ is odd, there is no such function, as can be seen by using the same ideas as in the above solution.
If $B \neq 0$ is even, however, then the only such function is given by $n \longmapsto 2 n+B / 2$. In case $3 \nmid B$ this was essentially proved above, but for the general case one more idea seems to be necessary. Writing $B=3^{\nu} \cdot k$ with some integers $\nu$ and $k$ such that $3 \nmid k$ one can obtain $f(n)=2 n+B / 2$ for all $n$ that are divisible by $3^{\nu}$ in the same manner as usual; then one may use the formula $f(3 n)=3 f(n)-B$ to establish the remaining cases.
Finally, in case $B=0$ there are more solutions than just the function $n \longmapsto 2 n$. It can be shown that all these other functions are periodic; to mention just one kind of example, for any even integers $r$ and $s$ the function
also has the property under discussion.
A5. Consider all polynomials $P(x)$ with real coefficients that have the following property: for any two real numbers $x$ and $y$ one has
$$\left|y^{2}-P(x)\right| \leqslant 2|x| \quad \text { if and only if } \quad\left|x^{2}-P(y)\right| \leqslant 2|y| . \tag{1}$$
Determine all possible values of $P(0)$. (Belgium) Answer. The set of possible values of $P(0)$ is $(-\infty, 0) \cup\{1\}$.
Solution.
Part I. We begin by verifying that these numbers are indeed possible values of $P(0)$. To see that each negative real number $-C$ can be $P(0)$, it suffices to check that for every $C>0$ the polynomial $P(x)=-\left(\frac{2 x^{2}}{C}+C\right)$ has the property described in the statement of the problem. Due to symmetry it is enough for this purpose to prove $\left|y^{2}-P(x)\right|>2|x|$ for any two real numbers $x$ and $y$. In fact we have
$$\left|y^{2}-P(x)\right|=y^{2}+\frac{2 x^{2}}{C}+C \geqslant y^{2}+\frac{x^{2}}{C}+2|x| \geqslant 2|x|,$$
where in the first estimate equality can only hold if $|x|=C$, whilst in the second one it can only hold if $x=0$. As these two conditions cannot be met at the same time, we have indeed $\left|y^{2}-P(x)\right|>2|x|$.
To show that $P(0)=1$ is possible as well, we verify that the polynomial $P(x)=x^{2}+1$ satisfies (1). Notice that for all real numbers $x$ and $y$ we have
$$\left|y^{2}-P(x)\right| \leqslant 2|x| \Longleftrightarrow\left|y^{2}-x^{2}-1\right| \leqslant 2|x| \Longleftrightarrow(|x|-1)^{2} \leqslant y^{2} \leqslant(|x|+1)^{2} \Longleftrightarrow\big||x|-|y|\big| \leqslant 1 .$$
Since this inequality is symmetric in $x$ and $y$, we are done. Part II. Now we show that no values other than those mentioned in the answer are possible for $P(0)$. To reach this we let $P$ denote any polynomial satisfying (1) and $P(0) \geqslant 0$; as we shall see, this implies $P(x)=x^{2}+1$ for all real $x$, which is actually more than what we want.
First step: We prove that $P$ is even. By (1) we have
$$\left|y^{2}-P(x)\right| \leqslant 2|x| \Longleftrightarrow\left|x^{2}-P(y)\right| \leqslant 2|y| \Longleftrightarrow\left|y^{2}-P(-x)\right| \leqslant 2|x|$$
for all real numbers $x$ and $y$. Considering just the equivalence of the first and third statement and taking into account that $y^{2}$ may vary through $\mathbb{R}_{\geqslant 0}$ we infer that
$$[P(x)-2|x|, P(x)+2|x|] \cap \mathbb{R}_{\geqslant 0}=[P(-x)-2|x|, P(-x)+2|x|] \cap \mathbb{R}_{\geqslant 0}$$
holds for all $x \in \mathbb{R}$. We claim that there are infinitely many real numbers $x$ such that $P(x)+2|x| \geqslant 0$. This holds in fact for any real polynomial with $P(0) \geqslant 0$; in order to see this, we may assume that the coefficient of $P$ appearing in front of $x$ is nonnegative. In this case the desired inequality holds for all sufficiently small positive real numbers.
For such numbers $x$ satisfying $P(x)+2|x| \geqslant 0$ we have $P(x)+2|x|=P(-x)+2|x|$ by the previous displayed formula, and hence also $P(x)=P(-x)$. Consequently the polynomial $P(x)-P(-x)$ has infinitely many zeros, wherefore it has to vanish identically. Thus $P$ is indeed even.
Second step: We prove that $P(t)>0$ for all $t \in \mathbb{R}$. Let us assume for a moment that there exists a real number $t \neq 0$ with $P(t)=0$. Then there is some open interval $I$ around $t$ such that $|P(y)| \leqslant 2|y|$ holds for all $y \in I$. Plugging $x=0$ into (1) we learn that $y^{2}=P(0)$ holds for all $y \in I$, which is clearly absurd. We have thus shown $P(t) \neq 0$ for all $t \neq 0$.
In combination with $P(0) \geqslant 0$ this informs us that our claim could only fail if $P(0)=0$. In this case there is by our first step a polynomial $Q(x)$ such that $P(x)=x^{2} Q(x)$. Applying (1) to $x=0$ and an arbitrary $y \neq 0$ we get $|y Q(y)|>2$, which is surely false when $y$ is sufficiently small.
Third step: We prove that $P$ is a quadratic polynomial. Notice that $P$ cannot be constant, for otherwise if $x=\sqrt{P(0)}$ and $y$ is sufficiently large, the first part of (1) is false whilst the second part is true. So the degree $n$ of $P$ has to be at least 1 . By our first step $n$ has to be even as well, whence in particular $n \geqslant 2$.
Now assume that $n \geqslant 4$. Plugging $y=\sqrt{P(x)}$ into (1) we get $\left|x^{2}-P(\sqrt{P(x)})\right| \leqslant 2 \sqrt{P(x)}$ and hence
$$P(\sqrt{P(x)}) \leqslant x^{2}+2 \sqrt{P(x)}$$
for all real $x$. Choose positive real numbers $x_{0}, a$, and $b$ such that if $x \in\left(x_{0}, \infty\right)$, then $a x^{n}<P(x)<b x^{n}$; this is indeed possible, for if $d>0$ denotes the leading coefficient of $P$, then $\lim _{x \rightarrow \infty} \frac{P(x)}{x^{n}}=d$, whence for instance the numbers $a=\frac{d}{2}$ and $b=2 d$ work provided that $x_{0}$ is chosen large enough.
Now for all sufficiently large real numbers $x$ we have
$$a^{1+n / 2} x^{n^{2} / 2}<a(\sqrt{P(x)})^{n}<P(\sqrt{P(x)}) \leqslant x^{2}+2 \sqrt{P(x)}<x^{2}+2 \sqrt{b} x^{n / 2},$$
i.e.
$$a^{1+n / 2} x^{n^{2} / 2}<x^{2}+2 \sqrt{b} x^{n / 2},$$
which is surely absurd. Thus $P$ is indeed a quadratic polynomial. Fourth step: We prove that $P(x)=x^{2}+1$. In the light of our first three steps there are two real numbers $a>0$ and $b$ such that $P(x)=$ $a x^{2}+b$. Now if $x$ is large enough and $y=\sqrt{a} x$, the left part of (1) holds and the right part reads $\left|\left(1-a^{2}\right) x^{2}-b\right| \leqslant 2 \sqrt{a} x$. In view of the fact that $a>0$ this is only possible if $a=1$. Finally, substituting $y=x+1$ with $x>0$ into (1) we get
$$|2 x+1-b| \leqslant 2 x \Longleftrightarrow|2 x+1+b| \leqslant 2 x+2,$$
i.e.,
$$1 \leqslant b \leqslant 4 x+1 \Longleftrightarrow-4 x-3 \leqslant b \leqslant 1,$$
for all $x>0$. Choosing $x$ large enough, we can achieve that at least one of these two statements holds; then both hold, which is only possible if $b=1$, as desired.
Comment 1. There are some issues with this problem in that its most natural solutions seem to use some basic facts from analysis, such as the continuity of polynomials or the intermediate value theorem. Yet these facts are intuitively obvious and implicitly clear to the students competing at this level of difficulty, so that the Problem Selection Committee still thinks that the problem is suitable for the IMO.
Comment 2. It seems that most solutions will in the main case, where $P(0)$ is nonnegative, contain an argument that is somewhat asymptotic in nature showing that $P$ is quadratic, and some part narrowing that case down to $P(x)=x^{2}+1$.
Comment 3. It is also possible to skip the first step and start with the second step directly, but then one has to work a bit harder to rule out the case $P(0)=0$. Let us sketch one possibility of doing this: Take the auxiliary polynomial $Q(x)$ such that $P(x)=x Q(x)$. Applying (1) to $x=0$ and an arbitrary $y \neq 0$ we get $|Q(y)|>2$. Hence we either have $Q(z) \geqslant 2$ for all real $z$ or $Q(z) \leqslant-2$ for all real $z$. In particular there is some $\eta \in\{-1,+1\}$ such that $P(\eta) \geqslant 2$ and $P(-\eta) \leqslant-2$. Substituting $x= \pm \eta$ into (1) we learn
$$\left|y^{2}-P(\eta)\right| \leqslant 2 \Longleftrightarrow\left|\eta^{2}-P(y)\right| \leqslant 2|y| \Longleftrightarrow\left|y^{2}-P(-\eta)\right| \leqslant 2 .$$
But for $y=\sqrt{P(\eta)}$ the first statement is true, whilst the third one is false. Also, if one has not obtained the evenness of $P$ before embarking on the fourth step, one needs to work a bit harder there, but not in a way that is likely to cause major difficulties.
Comment 4. Truly curious people may wonder about the set of all polynomials having property (1). As explained in the solution above, $P(x)=x^{2}+1$ is the only one with $P(0)=1$. On the other hand, it is not hard to notice that for negative $P(0)$ there are more possibilities than those mentioned above. E.g., as remarked by the proposer, if $a$ and $b$ denote two positive real numbers with $a b>1$ and $Q$ denotes a polynomial attaining nonnegative values only, then $P(x)=-\left(a x^{2}+b+Q(x)\right)$ works.
More generally, it may be proved that if $P(x)$ satisfies (1) and $P(0)<0$, then $-P(x)>2|x|$ holds for all $x \in \mathbb{R}$ so that one just considers the equivalence of two false statements. One may generate all such polynomials $P$ by going through all combinations of a solution of the polynomial equation
and a real $E>0$, and setting
for each of them.
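An editorial sanity check of Part I of the A5 solution: on a grid of dyadic rationals (so all floating-point arithmetic below is exact), the two sides of (1) agree for $P(x)=x^{2}+1$, while for a sample negative-constant solution both sides are always false. The helper `related` is a name of ours.

```python
# Editorial check of A5, Part I: P(x) = x^2 + 1 satisfies the equivalence,
# and Q(x) = -(2x^2/C + C) makes both sides of (1) false everywhere.

def related(P, x, y):
    """Truth value of |y^2 - P(x)| <= 2|x| (one side of (1))."""
    return abs(y * y - P(x)) <= 2 * abs(x)

P = lambda x: x * x + 1                  # the solution with P(0) = 1
C = 4.0
Q = lambda x: -(2 * x * x / C + C)       # a sample solution with P(0) = -C

grid = [i / 8 for i in range(-80, 81)]   # dyadic rationals: exact in floats
for x in grid:
    for y in grid:
        assert related(P, x, y) == related(P, y, x)   # (1) holds for P
        assert not related(Q, x, y)                   # both sides false for Q
```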
A6. Find all functions $f: \mathbb{Z} \rightarrow \mathbb{Z}$ such that
$$n^{2}+4 f(n)=f(f(n))^{2} \tag{1}$$
for all $n \in \mathbb{Z}$. (United Kingdom) Answer. The possibilities are:
- $f(n)=n+1$ for all $n$;
- or, for some $a \geqslant 1, \quad f(n)= \begin{cases}n+1, & n>-a, \\ -n+1, & n \leqslant-a ;\end{cases}$
- or $f(n)= \begin{cases}n+1, & n>0, \\ 0, & n=0, \\ -n+1, & n<0 .\end{cases}$
Solution 1.
Part I. Let us first check that each of the functions above really satisfies the given functional equation. If $f(n)=n+1$ for all $n$, then we have
$$f(f(n))^{2}=(n+2)^{2}=n^{2}+4(n+1)=n^{2}+4 f(n).$$
If $f(n)=n+1$ for $n>-a$ and $f(n)=-n+1$ otherwise, then we have the same identity for $n>-a$ and
$$f(f(n))^{2}=f(1-n)^{2}=(2-n)^{2}=n^{2}+4(1-n)=n^{2}+4 f(n)$$
otherwise. The same applies to the third solution (with $a=0$), where in addition one has
$$f(f(0))^{2}=f(0)^{2}=0=0^{2}+4 f(0).$$
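As a sanity check, the verification in Part I can also be done numerically. The snippet below (an illustration only, not part of the proof) tests the functional equation $n^{2}+4 f(n)=f(f(n))^{2}$ on a range of integers for each of the three claimed families:

```python
# Numerical check (not a proof) that each claimed solution family
# satisfies n^2 + 4 f(n) = f(f(n))^2 on a finite range of integers.

def f1(n):                      # first family: f(n) = n + 1 for all n
    return n + 1

def f2_factory(a):              # second family, parameter a >= 1
    return lambda n: n + 1 if n > -a else -n + 1

def f3(n):                      # third family (the a = 0 case with f(0) = 0)
    if n > 0:
        return n + 1
    if n == 0:
        return 0
    return -n + 1

def satisfies(f, lo=-50, hi=50):
    return all(n * n + 4 * f(n) == f(f(n)) ** 2 for n in range(lo, hi + 1))

assert satisfies(f1)
assert all(satisfies(f2_factory(a)) for a in range(1, 6))
assert satisfies(f3)
print("all three families satisfy (1) on [-50, 50]")
```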
Part II. It remains to prove that these are really the only functions that satisfy our functional equation. We do so in three steps:
Step 1: We prove that $f(n)=n+1$ for $n>0$. Consider the sequence $\left(a_{k}\right)$ given by $a_{k}=f^{k}(1)$ for $k \geqslant 0$. Setting $n=a_{k}$ in (1), we get
$$a_{k+2}^{2}=f\left(f\left(a_{k}\right)\right)^{2}=a_{k}^{2}+4 a_{k+1}.$$
Of course, $a_{0}=1$ by definition. Since $a_{2}^{2}=1+4 a_{1}$ is odd, $a_{2}$ has to be odd as well, so we set $a_{2}=2 r+1$ for some $r \in \mathbb{Z}$. Then $a_{1}=r^{2}+r$ and consequently
$$a_{3}^{2}=a_{1}^{2}+4 a_{2}=\left(r^{2}+r\right)^{2}+8 r+4.$$
Since $8 r+4 \neq 0$, $a_{3}^{2} \neq\left(r^{2}+r\right)^{2}$, so the difference between $a_{3}^{2}$ and $\left(r^{2}+r\right)^{2}$ is at least the distance from $\left(r^{2}+r\right)^{2}$ to the nearest even square (since $8 r+4$ and $r^{2}+r$ are both even). This implies that
$$|8 r+4| \geqslant\left(r^{2}+r\right)^{2}-\left(r^{2}+r-2\right)^{2}=4\left(r^{2}+r-1\right)$$
(for $r=0$ and $r=-1$, the estimate is trivial, but this does not matter). Therefore, we have
$$|2 r+1| \geqslant r^{2}+r-1.$$
If $|r| \geqslant 4$, then
$$r^{2}+r-1 \geqslant r^{2}-|r|-1 \geqslant 3|r|-1>2|r|+1 \geqslant|2 r+1|,$$
a contradiction. Thus $|r|<4$. Checking all possible remaining values of $r$, we find that $\left(r^{2}+r\right)^{2}+8 r+4$ is only a square in three cases: $r=-3, r=0$ and $r=1$. Let us now distinguish these three cases:
- $r=-3$, thus $a_{1}=6$ and $a_{2}=-5$. For each $k \geqslant 1$, we have
$$a_{k+2}= \pm \sqrt{a_{k}^{2}+4 a_{k+1}},$$
and the sign needs to be chosen in such a way that $a_{k+1}^{2}+4 a_{k+2}$ is again a square. This yields $a_{3}=-4, a_{4}=-3, a_{5}=-2, a_{6}=-1, a_{7}=0, a_{8}=1, a_{9}=2$. At this point we have reached a contradiction, since $f(1)=f\left(a_{0}\right)=a_{1}=6$ and at the same time $f(1)=f\left(a_{8}\right)=a_{9}=2$.
- $r=0$, thus $a_{1}=0$ and $a_{2}=1$. Then $a_{3}^{2}=a_{1}^{2}+4 a_{2}=4$, so $a_{3}= \pm 2$. This, however, is a contradiction again, since it gives us $f(1)=f\left(a_{0}\right)=a_{1}=0$ and at the same time $f(1)=f\left(a_{2}\right)=a_{3}= \pm 2$.
- $r=1$, thus $a_{1}=2$ and $a_{2}=3$. We prove by induction that $a_{k}=k+1$ for all $k \geqslant 0$ in this case, which we already know for $k \leqslant 2$ now. For the induction step, assume that $a_{k-1}=k$ and $a_{k}=k+1$. Then
$$a_{k+1}^{2}=a_{k-1}^{2}+4 a_{k}=k^{2}+4(k+1)=(k+2)^{2},$$
so $a_{k+1}= \pm(k+2)$. If $a_{k+1}=-(k+2)$, then
$$a_{k+2}^{2}=a_{k}^{2}+4 a_{k+1}=(k+1)^{2}-4(k+2)=(k-1)^{2}-8.$$
The latter can only be a square if $k=4$ (since 1 and 9 are the only two squares whose difference is 8). Then, however, $a_{4}=5$, $a_{5}=-6$ and $a_{6}= \pm 1$, so
$$a_{7}^{2}=a_{5}^{2}+4 a_{6}=36 \pm 4,$$
but neither 32 nor 40 is a perfect square. Thus $a_{k+1}=k+2$, which completes our induction. This also means that $f(n)=f\left(a_{n-1}\right)=a_{n}=n+1$ for all $n \geqslant 1$.
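The finite case analysis above hinges on the claim that $\left(r^{2}+r\right)^{2}+8 r+4$ is a perfect square only for $r \in\{-3,0,1\}$ among $|r|<4$. A short enumeration (a numerical check, not part of the argument) confirms this:

```python
from math import isqrt

# Exhaustive check of the finitely many cases |r| < 4: for which r is
# (r^2 + r)^2 + 8r + 4 a perfect square?

def is_square(v):
    return v >= 0 and isqrt(v) ** 2 == v

good = [r for r in range(-3, 4) if is_square((r * r + r) ** 2 + 8 * r + 4)]
assert good == [-3, 0, 1]
print(good)   # → [-3, 0, 1]
```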
Step 2: We prove that either $f(0)=1$, or $f(0)=0$ and $f(n) \neq 0$ for $n \neq 0$. Set $n=0$ in (1) to get
$$f(f(0))^{2}=4 f(0).$$
This means that $f(0) \geqslant 0$. If $f(0)=0$, then $f(n) \neq 0$ for all $n \neq 0$, since we would otherwise have
$$n^{2}=n^{2}+4 f(n)=f(f(n))^{2}=f(0)^{2}=0.$$
If $f(0)>0$, then we know that $f(f(0))=f(0)+1$ from the first step, so
$$(f(0)+1)^{2}=4 f(0),$$
which yields $f(0)=1$.
Step 3: We discuss the values of $f(n)$ for $n<0$.
Lemma. For every $n \geqslant 1$, we have $f(-n)=-n+1$ or $f(-n)=n+1$. Moreover, if $f(-n)=-n+1$ for some $n \geqslant 1$, then also $f(-n+1)=-n+2$.
Proof. We prove this statement by strong induction on $n$. For $n=1$, we get
$$f(f(-1))^{2}=1+4 f(-1).$$
Thus $f(-1)$ needs to be nonnegative. If $f(-1)=0$, then $f(f(-1))=f(0)= \pm 1$, so $f(0)=1$ (by our second step). Otherwise, we know that $f(f(-1))=f(-1)+1$, so
$$(f(-1)+1)^{2}=1+4 f(-1),$$
which yields $f(-1)=2$ and thus establishes the base case. For the induction step, we consider two cases:
- If $f(-n) \leqslant-n$, then
$$f(f(-n))^{2}=n^{2}+4 f(-n) \leqslant n^{2}-4 n<(n-2)^{2},$$
so $|f(f(-n))| \leqslant n-3$ (for $n=2$, this case cannot even occur). If $f(f(-n)) \geqslant 0$, then we already know from the first two steps that $f(f(f(-n)))=f(f(-n))+1$, unless perhaps if $f(0)=0$ and $f(f(-n))=0$. However, the latter would imply $f(-n)=0$ (as shown in Step 2) and thus $n=0$, which is impossible. If $f(f(-n))<0$, we can apply the induction hypothesis to $f(f(-n))$. In either case, $f(f(f(-n)))= \pm f(f(-n))+1$. Therefore,
$$f(-n)^{2}+4 f(f(-n))=f(f(f(-n)))^{2}=( \pm f(f(-n))+1)^{2},$$
which gives us
$$n^{2} \leqslant f(-n)^{2} \leqslant(|f(f(-n))|+1)^{2}+4|f(f(-n))| \leqslant(n-2)^{2}+4(n-3)=n^{2}-8,$$
a contradiction.
- Thus, we are left with the case that $f(-n)>-n$. Now we argue as in the previous case: if $f(-n) \geqslant 0$, then $f(f(-n))=f(-n)+1$ by the first two steps, since $f(0)=0$ and $f(-n)=0$ would imply $n=0$ (as seen in Step 2) and is thus impossible. If $f(-n)<0$, we can apply the induction hypothesis, so in any case we can infer that $f(f(-n))= \pm f(-n)+1$. We obtain
$$( \pm f(-n)+1)^{2}=f(f(-n))^{2}=n^{2}+4 f(-n),$$
so either
$$(f(-n)+1)^{2}=n^{2}+4 f(-n), \quad \text { i.e., } \quad(f(-n)-1)^{2}=n^{2},$$
which gives us $f(-n)= \pm n+1$, or
$$(f(-n)-1)^{2}=n^{2}+4 f(-n), \quad \text { i.e., } \quad(f(-n)-3)^{2}=n^{2}+8.$$
Since 1 and 9 are the only perfect squares whose difference is 8 , we must have $n=1$, which we have already considered.
Finally, suppose that $f(-n)=-n+1$ for some $n \geqslant 2$. Then
$$f(-n+1)^{2}=f(f(-n))^{2}=n^{2}+4(1-n)=(n-2)^{2},$$
so $f(-n+1)= \pm(n-2)$. However, we already know that $f(-n+1)=-n+2$ or $f(-n+1)=n$, so $f(-n+1)=-n+2$.
Combining everything we know, we find the solutions as stated in the answer:
- One solution is given by $f(n)=n+1$ for all $n$.
- If $f(n)$ is not always equal to $n+1$, then there is a largest integer $m$ (which cannot be positive) for which this is not the case. In view of the lemma that we proved, we must then have $f(n)=-n+1$ for any integer $n \leqslant m$. If $m=-a<0$, we obtain $f(n)=-n+1$ for $n \leqslant-a$ (and $f(n)=n+1$ otherwise). If $m=0$, we have the additional possibility that $f(0)=0, f(n)=-n+1$ for negative $n$ and $f(n)=n+1$ for positive $n$.
Solution 2. Let us provide an alternative proof for Part II, which also proceeds in several steps.
Step 1. Let $a$ be an arbitrary integer and $b=f(a)$. We first concentrate on the case where $|a|$ is sufficiently large.
- If $b=0$, then (1) applied to $a$ yields $a^{2}=f(f(a))^{2}$, thus
$$a= \pm f(0). \tag{2}$$
From now on, we set $D=|f(0)|$. Throughout Step 1, we will assume that $a \notin\{-D, 0, D\}$, thus $b \neq 0$.
2. From (1), noticing that $f(f(a))$ and $a$ have the same parity, we get
$$0 \neq 4|b|=\left|f(f(a))^{2}-a^{2}\right| \geqslant a^{2}-(|a|-2)^{2}=4(|a|-1).$$
Hence we have
$$|f(a)| \geqslant|a|-1 \quad \text { for all } a \notin\{-D, 0, D\}. \tag{3}$$
For the rest of Step 1, we also assume that $|a| \geqslant E=\max \{D+2,10\}$. Then by (3) we have $|b| \geqslant D+1$ and thus $|f(b)| \geqslant D$.
3. Set $c=f(b)$; by (3), we have $|c| \geqslant|b|-1$. Thus (1) yields
$$(|b|-1)^{2} \leqslant c^{2}=f(f(a))^{2}=a^{2}+4 b \leqslant a^{2}+4|b|,$$
which implies
$$|b| \leqslant|a|+3,$$
because $|b| \geqslant|a|-1 \geqslant 9$. Thus (3) can be refined to
$$|a|-1 \leqslant|f(a)| \leqslant|a|+3 \quad \text { for } \quad|a| \geqslant E.$$
Now, from $c^{2}=a^{2}+4 b$ with $|b| \in[|a|-1,|a|+3]$ we get $c^{2}=(a \pm 2)^{2}+d$, where $d \in\{-16,-12,-8,-4,0,4,8\}$. Since $|a \pm 2| \geqslant 8$, this can happen only if $c^{2}=(a \pm 2)^{2}$, which in turn yields $b= \pm a+1$. To summarise,
$$f(a)=1 \pm a \quad \text { for all } a \text { with }|a| \geqslant E. \tag{4}$$
We have shown that, with at most finitely many exceptions, $f(a)=1 \pm a$. Thus it will be convenient for our second step to introduce the sets
$$Z_{+}=\{a \in \mathbb{Z}: f(a)=a+1\}, \quad Z_{-}=\{a \in \mathbb{Z}: f(a)=1-a\}, \quad Z_{0}=\mathbb{Z} \backslash\left(Z_{+} \cup Z_{-}\right).$$
Step 2. Now we investigate the structure of the sets $Z_{+}, Z_{-}$, and $Z_{0}$.
4. Note that $f(E+1)=1 \pm(E+1)$. If $f(E+1)=E+2$, then $E+1 \in Z_{+}$. Otherwise we have $f(1+E)=-E$; then the original equation (1) with $n=E+1$ gives us $(E-1)^{2}=f(-E)^{2}$, so $f(-E)= \pm(E-1)$. By (4) this may happen only if $f(-E)=1-E$, so in this case $-E \in Z_{+}$. In any case we find that $Z_{+} \neq \varnothing$.
5. Now take any $a \in Z_{+}$. We claim that every integer $x \geqslant a$ also lies in $Z_{+}$. We proceed by induction on $x$, the base case $x=a$ being covered by our assumption. For the induction step, assume that $f(x-1)=x$ and plug $n=x-1$ into (1). We get $f(x)^{2}=(x+1)^{2}$, so either $f(x)=x+1$ or $f(x)=-(x+1)$. Assume that $f(x)=-(x+1)$ and $x \neq-1$, since otherwise we already have $f(x)=x+1$. Plugging $n=x$ into (1), we obtain $f(-x-1)^{2}=(x-2)^{2}-8$, which may happen only if $x-2= \pm 3$ and $f(-x-1)= \pm 1$. Plugging $n=-x-1$ into (1), we get $f( \pm 1)^{2}=(x+1)^{2} \pm 4$, which in turn may happen only if $x+1 \in\{-2,0,2\}$. Thus $x \in\{-1,5\}$ and at the same time $x \in\{-3,-1,1\}$, which gives us $x=-1$. Since this has already been excluded, we must have $f(x)=x+1$, which completes our induction.
6. Now we know that either $Z_{+}=\mathbb{Z}$ (if $Z_{+}$ is not bounded below), or $Z_{+}=\left\{a \in \mathbb{Z}: a \geqslant a_{0}\right\}$, where $a_{0}$ is the smallest element of $Z_{+}$. In the former case, $f(n)=n+1$ for all $n \in \mathbb{Z}$, which is our first solution. So we assume in the following that $Z_{+}$ is bounded below and has a smallest element $a_{0}$. If $Z_{0}=\varnothing$, then we have $f(x)=x+1$ for $x \geqslant a_{0}$ and $f(x)=1-x$ for $x<a_{0}$. In particular, $f(0)=1$ in any case, so $0 \in Z_{+}$ and thus $a_{0} \leqslant 0$. Thus we end up with the second solution listed in the answer. It remains to consider the case where $Z_{0} \neq \varnothing$.
7.
Assume that there exists some $a \in Z_{0}$ with $b=f(a) \notin Z_{0}$, so that $f(b)=1 \pm b$. Then we have $a^{2}+4 b=(1 \pm b)^{2}$, so either $a^{2}=(b-1)^{2}$ or $a^{2}=(b-3)^{2}-8$. In the former case we have $b=1 \pm a$, which is impossible by our choice of $a$. So we get $a^{2}=(b-3)^{2}-8$, which implies $f(b)=1-b$ and $|a|=1$, $|b-3|=3$. If $b=0$, then we have $f(b)=1$, so $b \in Z_{+}$ and therefore $a_{0} \leqslant 0$; hence $a=-1$. But then $f(a)=0=a+1$, so $a \in Z_{+}$, which is impossible. If $b=6$, then we have $f(6)=-5$, so $f(-5)^{2}=16$ and $f(-5) \in\{-4,4\}$. Then $f(f(-5))^{2}=25+4 f(-5) \in\{9,41\}$, so $f(-5)=-4$ and $-5 \in Z_{+}$. This implies $a_{0} \leqslant-5$, which contradicts our assumption that $\pm 1=a \notin Z_{+}$.
8. Thus we have shown that $f\left(Z_{0}\right) \subseteq Z_{0}$, and $Z_{0}$ is finite. Take any element $c \in Z_{0}$, and consider the sequence defined by $c_{i}=f^{i}(c)$. All elements of the sequence $\left(c_{i}\right)$ lie in $Z_{0}$, hence it is bounded. Choose an index $k$ for which $\left|c_{k}\right|$ is maximal, so that in particular $\left|c_{k+1}\right| \leqslant\left|c_{k}\right|$ and $\left|c_{k+2}\right| \leqslant\left|c_{k}\right|$. Our functional equation (1) yields
$$c_{k+2}^{2}=c_{k}^{2}+4 c_{k+1}.$$
Since $c_{k}$ and $c_{k+2}$ have the same parity and $\left|c_{k+2}\right| \leqslant\left|c_{k}\right|$, this leaves us with three possibilities: $\left|c_{k+2}\right|=\left|c_{k}\right|$, $\left|c_{k+2}\right|=\left|c_{k}\right|-2$, and $\left|c_{k}\right|-2= \pm 2$, $c_{k+2}=0$. If $\left|c_{k+2}\right|=\left|c_{k}\right|-2$, then $f\left(c_{k}\right)=c_{k+1}=1-\left|c_{k}\right|$, which means that $c_{k} \in Z_{-}$ or $c_{k} \in Z_{+}$, and we reach a contradiction. If $\left|c_{k+2}\right|=\left|c_{k}\right|$, then $c_{k+1}=0$, thus $c_{k+3}^{2}=4 c_{k+2}$. So either $c_{k+3} \neq 0$, or (by maximality of $\left|c_{k+2}\right|=\left|c_{k}\right|$) $c_{i}=0$ for all $i$. In the former case, we can repeat the entire argument with $c_{k+2}$ in the place of $c_{k}$. Now $\left|c_{k+4}\right|=\left|c_{k+2}\right|$ is not possible any more since $c_{k+3} \neq 0$, leaving us with the only possibility $\left|c_{k}\right|-2=\left|c_{k+2}\right|-2= \pm 2$.
Thus we know now that either all $c_{i}$ are equal to 0, or $\left|c_{k}\right|=4$. If $c_{k}= \pm 4$, then either $c_{k+1}=0$ and $\left|c_{k+2}\right|=\left|c_{k}\right|=4$, or $c_{k+2}=0$ and $c_{k+1}=-4$. From this point onwards, all elements of the sequence are either 0 or $\pm 4$. Let $c_{r}$ be the last element of the sequence that is not equal to 0 or $\pm 4$ (if such an element exists). Then $c_{r+1}, c_{r+2} \in\{-4,0,4\}$, so
$$c_{r}^{2}=c_{r+2}^{2}-4 c_{r+1} \in\{-16,0,16,32\},$$
which gives us a contradiction. Thus all elements of the sequence are equal to 0 or $\pm 4$, and since the choice of $c_{0}=c$ was arbitrary, $Z_{0} \subseteq\{-4,0,4\}$.
9. Finally, we show that $4 \notin Z_{0}$ and $-4 \notin Z_{0}$. Suppose that $4 \in Z_{0}$. Then in particular $a_{0}$ (the smallest element of $Z_{+}$) cannot be less than 4, since this would imply $4 \in Z_{+}$. So $-3 \in Z_{-}$, which means that $f(-3)=4$. Then $25=(-3)^{2}+4 f(-3)=f(f(-3))^{2}=f(4)^{2}$, so $f(4)= \pm 5 \notin Z_{0}$, and we reach a contradiction.
Suppose that $-4 \in Z_{0}$. The only possible values for $f(-4)$ that are left are 0 and -4. Note that $4 f(0)=f(f(0))^{2}$, so $f(0) \geqslant 0$. If $f(-4)=0$, then we get $16=(-4)^{2}+0=f(0)^{2}$, thus $f(0)=4$. But then $f(f(-4)) \notin Z_{0}$, which is impossible. Thus $f(-4)=-4$, which gives us $0=(-4)^{2}+4 f(-4)=f(f(-4))^{2}=16$, and this is clearly absurd. Now we are left with $Z_{0}=\{0\}$ and $f(0)=0$ as the only possibility. If $1 \in Z_{-}$, then $f(1)=0$, so $1=1^{2}+4 f(1)=f(f(1))^{2}=f(0)^{2}=0$, which is another contradiction. Thus $1 \in Z_{+}$, meaning that $a_{0} \leqslant 1$. On the other hand, $a_{0} \leqslant 0$ would imply $0 \in Z_{+}$, so we can only have $a_{0}=1$. Thus $Z_{+}$ comprises all positive integers, and $Z_{-}$ comprises all negative integers. This gives us the third solution.
Comment. All solutions known to the Problem Selection Committee are quite lengthy and technical, as the two solutions presented here show. It is possible to make the problem easier by imposing additional assumptions, such as $f(0) \neq 0$ or $f(n) \geqslant 1$ for all $n \geqslant 0$, to remove some of the technicalities.
Combinatorics
C1. Let $n$ points be given inside a rectangle $R$ such that no two of them lie on a line parallel to one of the sides of $R$. The rectangle $R$ is to be dissected into smaller rectangles with sides parallel to the sides of $R$ in such a way that none of these rectangles contains any of the given points in its interior. Prove that we have to dissect $R$ into at least $n+1$ smaller rectangles. (Serbia)
Solution 1. Let $k$ be the number of rectangles in the dissection. The set of all points that are corners of one of the rectangles can be divided into three disjoint subsets:
- $A$, which consists of the four corners of the original rectangle $R$, each of which is the corner of exactly one of the smaller rectangles,
- $B$, which contains points where exactly two of the rectangles have a common corner (T-junctions, see the figure below),
- $C$, which contains points where four of the rectangles have a common corner (crossings, see the figure below).

Figure 1: A T-junction and a crossing
We denote the number of points in $B$ by $b$ and the number of points in $C$ by $c$. Since each of the $k$ rectangles has exactly four corners, we get
$$4 k=4+2 b+4 c.$$
It follows that $2 b \leqslant 4 k-4$, so $b \leqslant 2 k-2$. Each of the $n$ given points has to lie on a side of one of the smaller rectangles (but not of the original rectangle $R$). If we extend this side as far as possible along borders between rectangles, we obtain a line segment whose ends are T-junctions. Note that every point in $B$ can only be an endpoint of at most one such segment containing one of the given points, since it is stated that no two of them lie on a common line parallel to the sides of $R$. This means that
$$2 n \leqslant b.$$
Combining our two inequalities for $b$, we get
$$2 n \leqslant b \leqslant 2 k-2,$$
thus $k \geqslant n+1$, which is what we wanted to prove.
Solution 2. Let $k$ denote the number of rectangles. In the following, we refer to the directions of the sides of $R$ as 'horizontal' and 'vertical' respectively. Our goal is to prove the inequality $k \geqslant n+1$ for fixed $n$. Equivalently, we can prove the inequality $n \leqslant k-1$ for each $k$, which will be done by induction on $k$. For $k=1$, the statement is trivial.
Now assume that $k>1$. If none of the line segments that form the borders between the rectangles is horizontal, then we have $k-1$ vertical segments dividing $R$ into $k$ rectangles. On each of them, there can only be one of the $n$ points, so $n \leqslant k-1$, which is exactly what we want to prove.
Otherwise, consider the lowest horizontal line $h$ that contains one or more of these line segments. Let $R^{\prime}$ be the rectangle that results when everything that lies below $h$ is removed from $R$ (see the example in the figure below).
The rectangles that lie entirely below $h$ form blocks of rectangles separated by vertical line segments. Suppose there are $r$ blocks and $k_{i}$ rectangles in the $i^{\text {th }}$ block. The left and right border of each block has to extend further upwards beyond $h$. Thus we can move any points that lie on these borders upwards, so that they now lie inside $R^{\prime}$. This can be done without violating the conditions; one only needs to make sure that they do not get to lie on a common horizontal line with one of the other given points.
All other borders between rectangles in the $i^{\text {th }}$ block have to lie entirely below $h$. There are $k_{i}-1$ such line segments, each of which can contain at most one of the given points. Finally, there can be one point that lies on $h$. All other points have to lie in $R^{\prime}$ (after moving some of them as explained in the previous paragraph).

Figure 2: Illustration of the inductive argument
We see that $R^{\prime}$ is divided into $k-\sum_{i=1}^{r} k_{i}$ rectangles. Applying the induction hypothesis to $R^{\prime}$, we find that there are at most
$$\left(k-\sum_{i=1}^{r} k_{i}-1\right)+\sum_{i=1}^{r}\left(k_{i}-1\right)+1=k-r$$
points. Since $r \geqslant 1$, this means that $n \leqslant k-1$, which completes our induction.
C2. We have $2^{m}$ sheets of paper, with the number 1 written on each of them. We perform the following operation. In every step we choose two distinct sheets; if the numbers on the two sheets are $a$ and $b$, then we erase these numbers and write the number $a+b$ on both sheets. Prove that after $m 2^{m-1}$ steps, the sum of the numbers on all the sheets is at least $4^{m}$. (Iran)
Solution. Let $P_{k}$ be the product of the numbers on the sheets after $k$ steps. Suppose that in the $(k+1)^{\text {th }}$ step the numbers $a$ and $b$ are replaced by $a+b$. In the product, the number $a b$ is replaced by $(a+b)^{2}$, and the other factors do not change. Since $(a+b)^{2} \geqslant 4 a b$, we see that $P_{k+1} \geqslant 4 P_{k}$. Starting with $P_{0}=1$, a straightforward induction yields
$$P_{k} \geqslant 4^{k}$$
for all integers $k \geqslant 0$; in particular
$$P_{m 2^{m-1}} \geqslant 4^{m 2^{m-1}}=2^{m 2^{m}},$$
so by the AM-GM inequality, the sum of the numbers written on the sheets after $m 2^{m-1}$ steps is at least
$$2^{m} \cdot \sqrt[2^{m}]{P_{m 2^{m-1}}} \geqslant 2^{m} \cdot \sqrt[2^{m}]{2^{m 2^{m}}}=2^{m} \cdot 2^{m}=4^{m}.$$
Comment 1. It is possible to achieve the sum $4^{m}$ in $m 2^{m-1}$ steps. For example, starting from $2^{m}$ equal numbers on the sheets, in $2^{m-1}$ consecutive steps we can double all numbers. After $m$ such doubling rounds we have the number $2^{m}$ on every sheet.
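The doubling strategy from Comment 1 can be simulated directly; the sketch below (an illustration only) confirms that $m 2^{m-1}$ steps suffice to reach the sum $4^{m}$:

```python
# Simulate the doubling strategy: 2^m sheets each showing 1; one "round"
# consists of 2^(m-1) steps that pair the sheets up and double every number.
# After m rounds we have used m * 2^(m-1) steps and the sum is exactly 4^m.

def doubling_rounds(m):
    sheets = [1] * (2 ** m)
    steps = 0
    for _ in range(m):
        for i in range(0, len(sheets), 2):   # one round: 2^(m-1) steps
            s = sheets[i] + sheets[i + 1]
            sheets[i] = sheets[i + 1] = s    # both sheets receive a + b
            steps += 1
    return steps, sum(sheets)

for m in range(1, 7):
    steps, total = doubling_rounds(m)
    assert steps == m * 2 ** (m - 1) and total == 4 ** m
print("sum 4^m reached in m*2^(m-1) steps for m = 1..6")
```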
Comment 2. There are several versions of the solution above. E.g., one may try to assign to each positive integer $n$ a weight $w_{n}$ in such a way that the sum of the weights of the numbers written on the sheets increases, say, by at least 2 in each step. For this purpose, one needs the inequality
$$2 w_{a+b} \geqslant w_{a}+w_{b}+2 \tag{1}$$
to be satisfied for all positive integers $a$ and $b$. Starting from $w_{1}=1$ and trying to choose the weights as small as possible, one may find that these weights can be defined as follows: For every positive integer $n$, one chooses $k$ to be the maximal integer such that $n \geqslant 2^{k}$, and puts
$$w_{n}=k+\frac{n}{2^{k}}=\min _{d \geqslant 0}\left(d+\frac{n}{2^{d}}\right). \tag{2}$$
Now, in order to prove that these weights satisfy (1), one may take arbitrary positive integers $a$ and $b$, and choose an integer $d \geqslant 1$ such that $w_{a+b}=d+\frac{a+b}{2^{d}}$ (this is possible since $a+b \geqslant 2$). Then one has
$$2 w_{a+b}=2+\left(d-1+\frac{a}{2^{d-1}}\right)+\left(d-1+\frac{b}{2^{d-1}}\right) \geqslant 2+w_{a}+w_{b}.$$
Since the initial sum of the weights was $2^{m}$, after $m 2^{m-1}$ steps the sum is at least $(m+1) 2^{m}$. To finish the solution, one may notice that by (2) for every positive integer $a$ one has
$$w_{a} \leqslant 1+\log _{2} a, \quad \text { i.e., } \quad a \geqslant 2^{w_{a}-1}. \tag{3}$$
So the sum of the numbers $a_{1}, a_{2}, \ldots, a_{2^{m}}$ on the sheets can be estimated as
$$\sum_{i=1}^{2^{m}} a_{i} \geqslant \sum_{i=1}^{2^{m}} 2^{w_{a_{i}}-1} \geqslant 2^{m} \cdot 2^{\frac{1}{2^{m}} \sum_{i=1}^{2^{m}} w_{a_{i}}-1} \geqslant 2^{m} \cdot 2^{(m+1)-1}=4^{m},$$
as required. For establishing the inequalities (1) and (3), one may also use the convexity argument, instead of the second definition of $w_{n}$ in (2).
One may check that $\log _{2} n \leqslant w_{n} \leqslant \log _{2} n+1$; thus, in some rough sense, this approach is obtained by "taking the logarithm" of the solution above.
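The claims about the weights can also be checked numerically; the following sketch (an illustration only) verifies inequality (1) and the bounds $\log _{2} n \leqslant w_{n} \leqslant \log _{2} n+1$ on a small range:

```python
from math import log2

# Check of the weight function from Comment 2: w_n = k + n/2^k where k is
# maximal with n >= 2^k.  We test the key inequality (1),
# 2*w(a+b) >= w(a) + w(b) + 2, and the logarithmic bounds.

def w(n):
    k = n.bit_length() - 1          # maximal k with n >= 2^k
    return k + n / 2 ** k

for n in range(1, 200):
    assert log2(n) <= w(n) <= log2(n) + 1 + 1e-12

for a in range(1, 100):
    for b in range(1, 100):
        assert 2 * w(a + b) >= w(a) + w(b) + 2 - 1e-12
print("weight inequalities hold on the tested range")
```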
Comment 3. An intuitive strategy to minimise the sum of numbers is that in every step we choose the two smallest numbers. We may call this the greedy strategy. In the following paragraphs we prove that the greedy strategy indeed provides the least possible sum of numbers.
Claim. Starting from any sequence $x_{1}, \ldots, x_{N}$ of positive real numbers on $N$ sheets, for any number $k$ of steps, the greedy strategy achieves the lowest possible sum of numbers.
Proof. We apply induction on $k$; for $k=1$ the statement is obvious. Let $k \geqslant 2$, and assume that the claim is true for smaller values.
Every sequence of $k$ steps can be encoded as $S=\left(\left(i_{1}, j_{1}\right), \ldots,\left(i_{k}, j_{k}\right)\right)$, where, for $r=1,2, \ldots, k$, the numbers $i_{r}$ and $j_{r}$ are the indices of the two sheets that are chosen in the $r^{\text {th }}$ step. The resulting final sum will be some linear combination of $x_{1}, \ldots, x_{N}$, say, $c_{1} x_{1}+\cdots+c_{N} x_{N}$ with positive integers $c_{1}, \ldots, c_{N}$ that depend on $S$ only. Call the numbers $\left(c_{1}, \ldots, c_{N}\right)$ the characteristic vector of $S$.
Choose a sequence $S_{0}=\left(\left(i_{1}, j_{1}\right), \ldots,\left(i_{k}, j_{k}\right)\right)$ of steps that produces the minimal sum, starting from $x_{1}, \ldots, x_{N}$, and let $\left(c_{1}, \ldots, c_{N}\right)$ be the characteristic vector of $S_{0}$. We may assume that the sheets are indexed in such an order that $c_{1} \geqslant c_{2} \geqslant \cdots \geqslant c_{N}$. If the sheets (and the numbers) are permuted by a permutation $\pi$ of the indices $(1,2, \ldots, N)$ and then the same steps are performed, we can obtain the sum $\sum_{t=1}^{N} c_{t} x_{\pi(t)}$. By the rearrangement inequality, the smallest possible sum can be achieved when the numbers $\left(x_{1}, \ldots, x_{N}\right)$ are in non-decreasing order. So we can assume that also $x_{1} \leqslant x_{2} \leqslant \cdots \leqslant x_{N}$.
Let $\ell$ be the largest index with $c_{1}=\cdots=c_{\ell}$, and let the $r^{\text {th }}$ step be the first step for which $c_{i_{r}}=c_{1}$ or $c_{j_{r}}=c_{1}$. The role of $i_{r}$ and $j_{r}$ is symmetrical, so we can assume $c_{i_{r}}=c_{1}$ and thus $i_{r} \leqslant \ell$. We show that $c_{j_{r}}=c_{1}$ and $j_{r} \leqslant \ell$ hold, too.
Before the $r^{\text {th }}$ step, on the $i_{r}{ }^{\text {th }}$ sheet we had the number $x_{i_{r}}$. On the $j_{r}{ }^{\text {th }}$ sheet there was a linear combination that contains the number $x_{j_{r}}$ with a positive integer coefficient, and possibly some other terms. In the $r^{\text {th }}$ step, the number $x_{i_{r}}$ joins that linear combination. From this point, each sheet contains a linear combination of $x_{1}, \ldots, x_{N}$, with the coefficient of $x_{j_{r}}$ being not smaller than the coefficient of $x_{i_{r}}$. This is preserved to the end of the procedure, so we have $c_{j_{r}} \geqslant c_{i_{r}}$. But $c_{i_{r}}=c_{1}$ is maximal among the coefficients, so we have $c_{j_{r}}=c_{i_{r}}=c_{1}$ and thus $j_{r} \leqslant \ell$.
Either from $c_{j_{r}}=c_{i_{r}}=c_{1}$ or from the arguments in the previous paragraph we can see that none of the $i_{r}{ }^{\text {th }}$ and the $j_{r}{ }^{\text {th }}$ sheets were used before step $r$. Therefore, the final linear combination of the numbers does not change if the step $\left(i_{r}, j_{r}\right)$ is performed first: the sequence of steps
$$S_{1}=\left(\left(i_{r}, j_{r}\right),\left(i_{1}, j_{1}\right), \ldots,\left(i_{r-1}, j_{r-1}\right),\left(i_{r+1}, j_{r+1}\right), \ldots,\left(i_{k}, j_{k}\right)\right)$$
also produces the same minimal sum at the end. Therefore, we can replace $S_{0}$ by $S_{1}$ and we may assume that $r=1$ and $c_{i_{1}}=c_{j_{1}}=c_{1}$.
As $i_{1} \neq j_{1}$, we can see that $\ell \geqslant 2$ and $c_{1}=c_{2}=c_{i_{1}}=c_{j_{1}}$. Let $\pi$ be such a permutation of the indices $(1,2, \ldots, N)$ that exchanges 1, 2 with $i_{1}, j_{1}$ and does not change the remaining indices. Let
$$S_{2}=\left(\left(\pi\left(i_{1}\right), \pi\left(j_{1}\right)\right),\left(\pi\left(i_{2}\right), \pi\left(j_{2}\right)\right), \ldots,\left(\pi\left(i_{k}\right), \pi\left(j_{k}\right)\right)\right).$$
Since $c_{\pi(i)}=c_{i}$ for all indices $i$, this sequence of steps produces the same, minimal sum. Moreover, in the first step we chose $x_{\pi\left(i_{1}\right)}=x_{1}$ and $x_{\pi\left(j_{1}\right)}=x_{2}$, the two smallest numbers.
Hence, it is possible to achieve the optimal sum if we follow the greedy strategy in the first step. By the induction hypothesis, following the greedy strategy in the remaining steps we achieve the optimal sum.
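The Claim can be stress-tested on small instances by comparing the greedy strategy with an exhaustive search over all step sequences; the brute-force check below is an illustration, not a substitute for the proof:

```python
from itertools import combinations
import random

# Brute-force check of the Claim on small instances: among all ways to
# perform k steps, the greedy strategy (always combine the two smallest
# numbers) attains the minimal possible final sum.

def all_final_sums(sheets, k):
    if k == 0:
        return {sum(sheets)}
    sums = set()
    for i, j in combinations(range(len(sheets)), 2):
        nxt = list(sheets)
        s = nxt[i] + nxt[j]
        nxt[i] = nxt[j] = s          # both chosen sheets receive a + b
        sums |= all_final_sums(nxt, k - 1)
    return sums

def greedy_sum(sheets, k):
    sheets = list(sheets)
    for _ in range(k):
        sheets.sort()
        s = sheets[0] + sheets[1]    # combine the two smallest numbers
        sheets[0] = sheets[1] = s
    return sum(sheets)

random.seed(0)
for _ in range(20):
    sheets = [random.randint(1, 9) for _ in range(4)]
    for k in range(1, 4):
        assert greedy_sum(sheets, k) == min(all_final_sums(sheets, k))
print("greedy matches the brute-force minimum on random 4-sheet instances")
```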
C3. Let $n \geqslant 2$ be an integer. Consider an $n \times n$ chessboard divided into $n^{2}$ unit squares. We call a configuration of $n$ rooks on this board happy if every row and every column contains exactly one rook. Find the greatest positive integer $k$ such that for every happy configuration of rooks, we can find a $k \times k$ square without a rook on any of its $k^{2}$ unit squares. (Croatia)
Answer. $\lfloor\sqrt{n-1}\rfloor$.
Solution. Let $\ell$ be a positive integer. We will show that (i) if $n>\ell^{2}$ then each happy configuration contains an empty $\ell \times \ell$ square, but (ii) if $n \leqslant \ell^{2}$ then there exists a happy configuration not containing such a square. These two statements together yield the answer.
(i). Assume that $n>\ell^{2}$. Consider any happy configuration. There exists a row $R$ containing a rook in its leftmost square. Take $\ell$ consecutive rows with $R$ being one of them. Their union $U$ contains exactly $\ell$ rooks. Now remove the $n-\ell^{2} \geqslant 1$ leftmost columns from $U$ (thus at least one rook is also removed). The remaining part is an $\ell^{2} \times \ell$ rectangle, so it can be split into $\ell$ squares of size $\ell \times \ell$, and this part contains at most $\ell-1$ rooks. Thus one of these squares is empty.
(ii). Now we assume that $n \leqslant \ell^{2}$. Firstly, we will construct a happy configuration with no empty $\ell \times \ell$ square for the case $n=\ell^{2}$. After that we will modify it to work for smaller values of $n$.
Let us enumerate the rows from bottom to top as well as the columns from left to right by the numbers $0,1, \ldots, \ell^{2}-1$. Every square will be denoted, as usual, by the pair $(r, c)$ of its row and column numbers. Now we put the rooks on all squares of the form $(i \ell+j, j \ell+i)$ with $i, j=0,1, \ldots, \ell-1$ (the picture below represents this arrangement for $\ell=3$ ). Since each number from 0 to $\ell^{2}-1$ has a unique representation of the form $i \ell+j(0 \leqslant i, j \leqslant \ell-1)$, each row and each column contains exactly one rook.

Next, we show that each $\ell \times \ell$ square $A$ on the board contains a rook. Consider such a square $A$, and consider $\ell$ consecutive rows the union of which contains $A$. Let the lowest of these rows have number $p \ell+q$ with $0 \leqslant p, q \leqslant \ell-1$ (notice that $p \ell+q \leqslant \ell^{2}-\ell$ ). Then the rooks in this union are placed in the columns with numbers $q \ell+p,(q+1) \ell+p, \ldots,(\ell-1) \ell+p$, $p+1, \ell+(p+1), \ldots,(q-1) \ell+p+1$, or, putting these numbers in increasing order,
$$p+1, \ell+(p+1), \ldots,(q-1) \ell+(p+1), q \ell+p,(q+1) \ell+p, \ldots,(\ell-1) \ell+p.$$
One readily checks that the first number in this list is at most $\ell-1$ (if $p=\ell-1$, then $q=0$, and the first listed number is $q \ell+p=\ell-1$ ), the last one is at least $(\ell-1) \ell$, and the difference between any two consecutive numbers is at most $\ell$. Thus, one of the $\ell$ consecutive columns intersecting $A$ contains a number listed above, and the rook in this column is inside $A$, as required. The construction for $n=\ell^{2}$ is established.
It remains to construct a happy configuration of rooks not containing an empty $\ell \times \ell$ square for $n<\ell^{2}$. In order to achieve this, take the construction for an $\ell^{2} \times \ell^{2}$ square described above and remove the $\ell^{2}-n$ bottom rows together with the $\ell^{2}-n$ rightmost columns. We will have a rook arrangement with no empty $\ell \times \ell$ square, but several rows and columns may happen to be empty. Clearly, the number of empty rows is equal to the number of empty columns, so one can find a bijection between them, and put a rook on any crossing of an empty row and an empty column corresponding to each other.
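The construction $(i \ell+j, j \ell+i)$ can be verified mechanically; the snippet below (an illustration only) checks, for small $\ell$, that the configuration is happy and that every $\ell \times \ell$ square contains a rook:

```python
# Check of the construction for n = l^2: rooks at (i*l + j, j*l + i).
# We verify that the configuration is happy and that every l x l square
# contains a rook, for a few small values of l.

def check(l):
    n = l * l
    rooks = {(i * l + j, j * l + i) for i in range(l) for j in range(l)}
    rows = {r for r, c in rooks}
    cols = {c for r, c in rooks}
    assert rows == set(range(n)) and cols == set(range(n))  # happy
    def occupied(r0, c0):
        return any(r0 <= r < r0 + l and c0 <= c < c0 + l for r, c in rooks)
    assert all(occupied(r0, c0)
               for r0 in range(n - l + 1) for c0 in range(n - l + 1))

for l in range(2, 5):
    check(l)
print("construction verified for l = 2, 3, 4")
```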
Comment. Part (i) allows several different proofs. E.g., in the last paragraph of the solution, it suffices to deal only with the case $n=\ell^{2}+1$. Notice now that among the four corner squares, at least one is empty. So the rooks in its row and in its column are distinct. Now, deleting this row and column we obtain an $\ell^{2} \times \ell^{2}$ square with $\ell^{2}-1$ rooks in it. This square can be partitioned into $\ell^{2}$ squares of size $\ell \times \ell$, so one of them is empty.
C4. Construct a tetromino by attaching two $2 \times 1$ dominoes along their longer sides such that the midpoint of the longer side of one domino is a corner of the other domino. This construction yields two kinds of tetrominoes with opposite orientations. Let us call them S- and Z-tetrominoes, respectively.

Assume that a lattice polygon $P$ can be tiled with S-tetrominoes. Prove that no matter how we tile $P$ using only S- and Z-tetrominoes, we always use an even number of Z-tetrominoes.
(Hungary)
Solution 1. We may assume that polygon $P$ is the union of some squares of an infinite chessboard. Colour the squares of the chessboard with two colours as the figure below illustrates.

Observe that no matter how we tile $P$, any S-tetromino covers an even number of black squares, whereas any Z-tetromino covers an odd number of them. As $P$ can be tiled exclusively by S-tetrominoes, it contains an even number of black squares. But if some S-tetrominoes and some Z-tetrominoes cover an even number of black squares, then the number of Z-tetrominoes must be even.
Comment. An alternative approach makes use of the following two colourings, which are perhaps somewhat more natural:

Let $s_{1}$ and $s_{2}$ be the number of $S$-tetrominoes of the first and second type (as shown in the figure above) respectively that are used in a tiling of $P$. Likewise, let $z_{1}$ and $z_{2}$ be the number of $Z$-tetrominoes of the first and second type respectively. The first colouring shows that $s_{1}+z_{2}$ is invariant modulo 2 , the second colouring shows that $s_{1}+z_{1}$ is invariant modulo 2 . Adding these two conditions, we find that $z_{1}+z_{2}$ is invariant modulo 2 , which is what we have to prove. Indeed, the sum of the two colourings (regarding white as 0 and black as 1 and adding modulo 2) is the colouring shown in the solution.
Solution 2. Let us assign coordinates to the squares of the infinite chessboard in such a way that the squares of $P$ have nonnegative coordinates only, and that the first coordinate increases as one moves to the right, while the second coordinate increases as one moves upwards. Write the integer $3^{i} \cdot(-3)^{j}$ into the square with coordinates $(i, j)$, as in the following figure:
| $\vdots$ |  |  |  |  |  |
|---|---|---|---|---|---|
| 81 | $\vdots$ |  |  |  |  |
| -27 | -81 | $\vdots$ |  |  |  |
| 9 | 27 | 81 | $\cdots$ |  |  |
| -3 | -9 | -27 | -81 | $\cdots$ |  |
| 1 | 3 | 9 | 27 | 81 | $\cdots$ |
The sum of the numbers written into four squares that can be covered by an $S$-tetromino is either of the form
$$3^{i} \cdot(-3)^{j} \cdot(1+3-9-27)=-32 \cdot 3^{i} \cdot(-3)^{j}$$
(for the first type of $S$-tetrominoes), or of the form
$$3^{i} \cdot(-3)^{j} \cdot(3-3-9+9)=0,$$
and thus divisible by 32. For this reason, the sum of the numbers written into the squares of $P$, and thus also the sum of the numbers covered by $Z$-tetrominoes in the second covering, is likewise divisible by 32. Now the sum of the entries of a $Z$-tetromino is either of the form
$$3^{i} \cdot(-3)^{j} \cdot(3+9-3-9)=0$$
(for the first type of $Z$-tetrominoes), or of the form
$$3^{i} \cdot(-3)^{j} \cdot(1-3-9+27)=16 \cdot 3^{i} \cdot(-3)^{j},$$
i.e., 16 times an odd number. Thus in order to obtain a total that is divisible by 32 , an even number of the latter kind of $Z$-tetrominoes needs to be used. Rotating everything by $90^{\circ}$, we find that the number of $Z$-tetrominoes of the first kind is even as well. So we have even proven slightly more than necessary.
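The tetromino sums used in Solution 2 can be recomputed mechanically. In the sketch below the cell offsets encode one plausible reading of the four orientations (the original figure is not reproduced here), so the S/Z labels should be treated as an assumption:

```python
# Weighting 3^i * (-3)^j from Solution 2: sums over the four cells of each
# tetromino orientation, placed with its lower-left bounding corner at (i, j).
# The S/Z labels below are an assumed reading of the missing figure.

def weight(i, j):
    return 3 ** i * (-3) ** j

S_horizontal = [(0, 0), (1, 0), (1, 1), (2, 1)]
S_vertical   = [(1, 0), (0, 1), (1, 1), (0, 2)]
Z_horizontal = [(1, 0), (2, 0), (0, 1), (1, 1)]
Z_vertical   = [(0, 0), (0, 1), (1, 1), (1, 2)]

def placed_sum(cells, i, j):
    return sum(weight(i + di, j + dj) for di, dj in cells)

for i in range(4):
    for j in range(4):
        assert placed_sum(S_horizontal, i, j) % 32 == 0  # -32 * 3^i * (-3)^j
        assert placed_sum(S_vertical, i, j) % 32 == 0    # sum is 0
        assert placed_sum(Z_horizontal, i, j) % 32 == 0  # sum is 0
        z = placed_sum(Z_vertical, i, j)
        assert z % 16 == 0 and (z // 16) % 2 != 0        # 16 times an odd number
print("S placements are divisible by 32; one Z orientation gives 16 * odd")
```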
Comment 1. In the second solution, 3 and -3 can be replaced by other combinations as well. For example, for any positive integer $a \equiv 3(\bmod 4)$, we can write $a^{i} \cdot(-a)^{j}$ into the square with coordinates $(i, j)$ and apply the same argument.
Comment 2. As the second solution shows, we even have the stronger result that the parity of the number of each of the four types of tetrominoes in a tiling of $P$ by S - and Z-tetrominoes is an invariant of $P$. This also remains true if there is no tiling of $P$ that uses only S-tetrominoes.
C5. Consider $n \geqslant 3$ lines in the plane such that no two lines are parallel and no three have a common point. These lines divide the plane into polygonal regions; let $\mathcal{F}$ be the set of regions having finite area. Prove that it is possible to colour $\lceil\sqrt{n / 2}\rceil$ of the lines blue in such a way that no region in $\mathcal{F}$ has a completely blue boundary. (For a real number $x$, $\lceil x\rceil$ denotes the least integer which is not smaller than $x$.) (Austria)
Solution. Let $L$ be the given set of lines. Choose a maximal (by inclusion) subset $B \subseteq L$ such that when we colour the lines of $B$ blue, no region in $\mathcal{F}$ has a completely blue boundary. Let $|B|=k$. We claim that $k \geqslant\lceil\sqrt{n / 2}\rceil$.
Let us colour all the lines of $L \backslash B$ red. Call a point blue if it is the intersection of two blue lines. Then there are $\binom{k}{2}$ blue points.
Now consider any red line $\ell$. By the maximality of $B$, there exists at least one region $A \in \mathcal{F}$ whose only red side lies on $\ell$. Since $A$ has at least three sides, it must have at least one blue vertex. Let us take one such vertex and associate it to $\ell$.
Since each blue point belongs to four regions (some of which may be unbounded), it is associated to at most four red lines. Thus the total number of red lines is at most $4\binom{k}{2}$. On the other hand, this number is $n-k$, so
$$n-k \leqslant 4\binom{k}{2}=2 k(k-1), \quad \text { i.e. } \quad n \leqslant 2 k^{2}-k<2 k^{2},$$
and finally $k \geqslant\lceil\sqrt{n / 2}\rceil$, which gives the desired result.
Comment 1. The constant factor in the estimate can be improved in different ways; we sketch two of them below. On the other hand, the Problem Selection Committee is not aware of any results showing that it is sometimes impossible to colour $k$ lines satisfying the desired condition for $k \gg \sqrt{n}$. In this situation we find it more suitable to keep the original formulation of the problem.
- Firstly, we show that in the proof above one has in fact $k=|B| \geqslant\lceil\sqrt{2 n / 3}\rceil$.
Let us make weighted associations as follows. Let a region $A$ whose only red side lies on $\ell$ have $m$ vertices, so that $m-2$ of them are blue. We associate each of these blue vertices to $\ell$, and put the weight $\frac{1}{m-2}$ on each such association. So the sum of the weights of all the associations is exactly $n-k$.
Now, one may check that among the four regions adjacent to a blue vertex $v$, at most two are triangles. This means that the sum of the weights of all associations involving $v$ is at most $1+1+\frac{1}{2}+\frac{1}{2}=3$. This leads to the estimate
$$n-k \leqslant 3\binom{k}{2},$$
or
$$n \leqslant \frac{3}{2} k^{2}-\frac{1}{2} k<\frac{3}{2} k^{2},$$
which yields $k \geqslant\lceil\sqrt{2 n / 3}\rceil$.

- Secondly, we even show that $k=|B| \geqslant\lceil\sqrt{n}\rceil$. For this, we specify the process of associating points to red lines in yet another way.
Call a point red if it lies on a red line as well as on a blue line. Consider any red line $\ell$, and take an arbitrary region $A \in \mathcal{F}$ whose only red side lies on $\ell$. Let $r^{\prime}, r, b_{1}, \ldots, b_{k}$ be its vertices in clockwise order with $r^{\prime}, r \in \ell$; then the points $r^{\prime}, r$ are red, while all the points $b_{1}, \ldots, b_{k}$ are blue. Let us associate to $\ell$ the red point $r$ and the blue point $b_{1}$. One may notice that to each pair of a red point $r$ and a blue point $b$, at most one red line can be associated, since there is at most one region $A$ having $r$ and $b$ as two clockwise consecutive vertices.
We claim now that at most two red lines are associated to each blue point $b$; this leads to the desired bound
$$n-k \leqslant 2\binom{k}{2}=k(k-1), \quad \text { i.e. } \quad n \leqslant k^{2} .$$
Assume, to the contrary, that three red lines $\ell_{1}, \ell_{2}$, and $\ell_{3}$ are associated to the same blue point $b$. Let $r_{1}, r_{2}$, and $r_{3}$ respectively be the red points associated to these lines; all these points are distinct. The point $b$ defines four blue rays, and each point $r_{i}$ is the red point closest to $b$ on one of these rays. So we may assume that the points $r_{2}$ and $r_{3}$ lie on one blue line passing through $b$, while $r_{1}$ lies on the other one.

Now consider the region $A$ used to associate $r_{1}$ and $b$ with $\ell_{1}$. Three of its clockwise consecutive vertices are $r_{1}, b$, and either $r_{2}$ or $r_{3}$ (say, $r_{2}$ ). Since $A$ has only one red side, it can only be the triangle $r_{1} b r_{2}$; but then both $\ell_{1}$ and $\ell_{2}$ pass through $r_{2}$, as well as some blue line. This is impossible by the problem assumptions.
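Each of the three estimates (the one in the solution and the two refinements in this comment) closes with the same algebra: whenever $n-k \leqslant c\binom{k}{2}$ with $c \in\{4,3,2\}$, one obtains $k \geqslant\lceil\sqrt{2 n / c}\rceil$. A short script (ours, purely illustrative) can confirm this implication for small $n$:

```python
# Verify: if n - k <= c * C(k, 2), then k >= ceil(sqrt(2n/c)), for c = 4, 3, 2.
# This is the algebra closing each of the three estimates; illustration only.
import math

def check(c, max_n=500):
    for n in range(3, max_n):
        for k in range(1, n + 1):
            if n - k <= c * k * (k - 1) // 2:
                assert k >= math.ceil(math.sqrt(2 * n / c))

for c in (4, 3, 2):
    check(c)
```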
Comment 2. The condition that the lines be non-parallel is essentially not used in the solution, nor in the previous comment; thus it may be omitted.
C6. We are given an infinite deck of cards, each with a real number on it. For every real number $x$, there is exactly one card in the deck that has $x$ written on it. Now two players draw disjoint sets $A$ and $B$ of 100 cards each from this deck. We would like to define a rule that declares one of them a winner. This rule should satisfy the following conditions:
- The winner only depends on the relative order of the 200 cards: if the cards are laid down in increasing order face down and we are told which card belongs to which player, but not what numbers are written on them, we can still decide the winner.
- If we write the elements of both sets in increasing order as $A=\left\{a_{1}, a_{2}, \ldots, a_{100}\right\}$ and $B=\left\{b_{1}, b_{2}, \ldots, b_{100}\right\}$, and $a_{i}>b_{i}$ for all $i$, then $A$ beats $B$.
- If three players draw three disjoint sets $A, B, C$ from the deck, $A$ beats $B$ and $B$ beats $C$, then $A$ also beats $C$.
How many ways are there to define such a rule? Here, we consider two rules as different if there exist two sets $A$ and $B$ such that $A$ beats $B$ according to one rule, but $B$ beats $A$ according to the other. (Russia) Answer. 100. Solution 1. We prove a more general statement for sets of cardinality $n$ (the problem being the special case $n=100$); the answer is then $n$. In the following, we write $A>B$ or $B<A$ for "$A$ beats $B$".
Part I. Let us first define $n$ different rules that satisfy the conditions. To this end, fix an index $k \in\{1,2, \ldots, n\}$. We write both $A$ and $B$ in increasing order as $A=\left\{a_{1}, a_{2}, \ldots, a_{n}\right\}$ and $B=\left\{b_{1}, b_{2}, \ldots, b_{n}\right\}$ and say that $A$ beats $B$ if and only if $a_{k}>b_{k}$. This rule clearly satisfies all three conditions, and the rules corresponding to different $k$ are all different. Thus there are at least $n$ different rules.
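As a quick illustration (ours, not part of the text), the $k$-th order-statistic rule from Part I can be checked mechanically on random data:

```python
# Illustration: rule number k declares the set with the larger k-th
# smallest element the winner. Spot-check conditions 2 and 3 on random
# disjoint sets of size n = 4; condition 1 holds since only order is used.
import random

def beats(A, B, k):
    return sorted(A)[k - 1] > sorted(B)[k - 1]

n, k = 4, 2
rng = random.Random(0)
for _ in range(2000):
    pool = rng.sample(range(10**6), 3 * n)
    A, B, C = pool[:n], pool[n:2 * n], pool[2 * n:]
    if all(a > b for a, b in zip(sorted(A), sorted(B))):
        assert beats(A, B, k)                    # condition 2
    if beats(A, B, k) and beats(B, C, k):
        assert beats(A, C, k)                    # condition 3
```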
Part II. Now we have to prove that there is no other way to define such a rule. Suppose that our rule satisfies the conditions, and let $k \in\{1,2, \ldots, n\}$ be minimal with the property that
$$A_{k}=\{1,2, \ldots, k, n+k+1, n+k+2, \ldots, 2 n\}<B_{k}=\{k+1, k+2, \ldots, n+k\} .$$
Clearly, such a $k$ exists, since this holds for $k=n$ by assumption. Now consider two disjoint sets $X=\left\{x_{1}, x_{2}, \ldots, x_{n}\right\}$ and $Y=\left\{y_{1}, y_{2}, \ldots, y_{n}\right\}$, both in increasing order (i.e., $x_{1}<x_{2}<\cdots<x_{n}$ and $y_{1}<y_{2}<\cdots<y_{n}$). We claim that $X<Y$ if (and only if; this follows automatically) $x_{k}<y_{k}$.
To prove this statement, pick arbitrary real numbers $u_{i}, v_{i}, w_{i} \notin X \cup Y$ such that
$$u_{i}<y_{i} \text { and } x_{i}<v_{i} \text { for all } i,$$
and
$$u_{1}<\cdots<u_{k-1}<w_{1}<\cdots<w_{n}<u_{k}<\cdots<u_{n}, \qquad v_{1}<\cdots<v_{k}<w_{1}, \qquad w_{n}<v_{k+1}<\cdots<v_{n}$$
(such numbers exist since $x_{k}<y_{k}$), and set
$$U=\left\{u_{1}, \ldots, u_{n}\right\}, \quad V=\left\{v_{1}, \ldots, v_{n}\right\}, \quad W=\left\{w_{1}, \ldots, w_{n}\right\} .$$
Then
- $u_{i}<y_{i}$ and $x_{i}<v_{i}$ for all $i$, so $U<Y$ and $X<V$ by the second condition.
- The elements of $U \cup W$ are ordered in the same way as those of $A_{k-1} \cup B_{k-1}$, and since $A_{k-1}>B_{k-1}$ by our choice of $k$, we also have $U>W$ (if $k=1$, this is trivial).
- The elements of $V \cup W$ are ordered in the same way as those of $A_{k} \cup B_{k}$, and since $A_{k}<B_{k}$ by our choice of $k$, we also have $V<W$.
It follows that
$$X<V<W<U<Y,$$
so $X<Y$ by the third condition, which is what we wanted to prove.

Solution 2. Another possible approach to Part II of this problem is induction on $n$. For $n=1$, there is trivially only one rule in view of the second condition.
In the following, we assume that our claim (namely, that there are no possible rules other than those given in Part I) holds for $n-1$ in place of $n$. We start with the following observation: Claim. At least one of the two relations
$$(\{2\} \cup\{2 i-1 \mid 2 \leqslant i \leqslant n\})<(\{1\} \cup\{2 i \mid 2 \leqslant i \leqslant n\})$$
and
$$(\{2 i-1 \mid 1 \leqslant i \leqslant n-1\} \cup\{2 n\})<(\{2 i \mid 1 \leqslant i \leqslant n-1\} \cup\{2 n-1\})$$
holds. Proof. Suppose that the first relation does not hold. Since our rule may only depend on the relative order, we must also have
$$(\{2\} \cup\{3 i \mid 2 \leqslant i \leqslant n\})<(\{3\} \cup\{3 i-1 \mid 2 \leqslant i \leqslant n-1\} \cup\{3 n-2\}) .$$
Likewise, if the second relation does not hold, then we must also have
$$(\{3\} \cup\{3 i-1 \mid 2 \leqslant i \leqslant n-1\} \cup\{3 n-2\})<(\{3 i-2 \mid 1 \leqslant i \leqslant n-1\} \cup\{3 n-1\}) .$$
Now condition 3 implies that
$$(\{2\} \cup\{3 i \mid 2 \leqslant i \leqslant n\})<(\{3 i-2 \mid 1 \leqslant i \leqslant n-1\} \cup\{3 n-1\}),$$
which contradicts the second condition. Now we distinguish two cases, depending on which of the two relations actually holds: First case: $(\{2\} \cup\{2 i-1 \mid 2 \leqslant i \leqslant n\})<(\{1\} \cup\{2 i \mid 2 \leqslant i \leqslant n\})$. Let $A=\left\{a_{1}, a_{2}, \ldots, a_{n}\right\}$ and $B=\left\{b_{1}, b_{2}, \ldots, b_{n}\right\}$ be two disjoint sets, both in increasing order. We claim that the winner can be decided only from the values of $a_{2}, \ldots, a_{n}$ and $b_{2}, \ldots, b_{n}$, while $a_{1}$ and $b_{1}$ are actually irrelevant. Suppose that this was not the case, and assume without loss of generality that $a_{2}<b_{2}$. Then the relative order of $a_{1}, a_{2}, \ldots, a_{n}, b_{2}, \ldots, b_{n}$ is fixed, and the position of $b_{1}$ has to decide the winner. Suppose that for some value $b_{1}=x$, $B$ wins, while for some other value $b_{1}=y$, $A$ wins.
Write $B_{x}=\left\{x, b_{2}, \ldots, b_{n}\right\}$ and $B_{y}=\left\{y, b_{2}, \ldots, b_{n}\right\}$, and let $\varepsilon>0$ be smaller than half the distance between any two of the numbers in $B_{x} \cup B_{y} \cup A$. For any set $M$, let $M \pm \varepsilon$ be the set obtained by adding/subtracting $\varepsilon$ to all elements of $M$. By our choice of $\varepsilon$, the relative order of the elements of $\left(B_{y}+\varepsilon\right) \cup A$ is still the same as for $B_{y} \cup A$, while the relative order of the elements of $\left(B_{x}-\varepsilon\right) \cup A$ is still the same as for $B_{x} \cup A$. Thus $A<B_{x}-\varepsilon$, but $A>B_{y}+\varepsilon$. Moreover, if $y>x$, then $B_{x}-\varepsilon<B_{y}+\varepsilon$ by condition 2, while otherwise the relative order of the elements in $\left(B_{x}-\varepsilon\right) \cup\left(B_{y}+\varepsilon\right)$ is the same as for the two sets $\{2\} \cup\{2 i-1 \mid 2 \leqslant i \leqslant n\}$ and $\{1\} \cup\{2 i \mid 2 \leqslant i \leqslant n\}$, so that $B_{x}-\varepsilon<B_{y}+\varepsilon$. In either case, we obtain
$$A<B_{x}-\varepsilon<B_{y}+\varepsilon<A,$$
which contradicts condition 3. So we know now that the winner does not depend on $a_{1}, b_{1}$. Therefore, we can define a new rule $<^{*}$ on sets of cardinality $n-1$ by saying that $A<^{*} B$ if and only if $A \cup\{a\}<B \cup\{b\}$ for some $a, b$ (or equivalently, all $a, b$) such that $a<\min A$, $b<\min B$ and $A \cup\{a\}$ and $B \cup\{b\}$ are disjoint. The rule $<^{*}$ satisfies all conditions again, so by the induction hypothesis, there exists an index $i$ such that $A<^{*} B$ if and only if the $i^{\text {th }}$ smallest element of $A$ is less than the $i^{\text {th }}$ smallest element of $B$. This implies that $C<D$ if and only if the $(i+1)^{\text {th }}$ smallest element of $C$ is less than the $(i+1)^{\text {th }}$ smallest element of $D$, which completes our induction.
Second case: $(\{2 i-1 \mid 1 \leqslant i \leqslant n-1\} \cup\{2 n\})<(\{2 i \mid 1 \leqslant i \leqslant n-1\} \cup\{2 n-1\})$. Set $-A=\{-a \mid a \in A\}$ for any $A \subseteq \mathbb{R}$. For any two disjoint sets $A, B \subseteq \mathbb{R}$ of cardinality $n$, we write $A<^{\circ} B$ to mean $(-B)<(-A)$. It is easy to see that $<^{\circ}$ defines a rule to determine a winner that satisfies the three conditions of our problem as well as the relation of the first case. So it follows in the same way as in the first case that for some $i$, $A<^{\circ} B$ if and only if the $i^{\text {th }}$ smallest element of $A$ is less than the $i^{\text {th }}$ smallest element of $B$, which is equivalent to the condition that the $i^{\text {th }}$ largest element of $-A$ is greater than the $i^{\text {th }}$ largest element of $-B$. This proves that the original rule $<$ also has the desired form.
Comment. The problem asks for all possible partial orders on the set of $n$-element subsets of $\mathbb{R}$ such that any two disjoint sets are comparable, the order relation only depends on the relative order of the elements, and $\left\{a_{1}, a_{2}, \ldots, a_{n}\right\}<\left\{b_{1}, b_{2}, \ldots, b_{n}\right\}$ whenever $a_{i}<b_{i}$ for all $i$.
As the proposer points out, one may also ask for all total orders on all $n$-element subsets of $\mathbb{R}$ (dropping the condition of disjointness and requiring that $\left\{a_{1}, a_{2}, \ldots, a_{n}\right\} \leqslant\left\{b_{1}, b_{2}, \ldots, b_{n}\right\}$ whenever $a_{i} \leqslant b_{i}$ for all $i$). It turns out that the number of possibilities in this case is $n!$, and all possible total orders are obtained in the following way. Fix a permutation $\sigma \in S_{n}$. Let $A=\left\{a_{1}, a_{2}, \ldots, a_{n}\right\}$ and $B=\left\{b_{1}, b_{2}, \ldots, b_{n}\right\}$ be two subsets of $\mathbb{R}$ with $a_{1}<a_{2}<\cdots<a_{n}$ and $b_{1}<b_{2}<\cdots<b_{n}$. Then we say that $A>_{\sigma} B$ if and only if $\left(a_{\sigma(1)}, \ldots, a_{\sigma(n)}\right)$ is lexicographically greater than $\left(b_{\sigma(1)}, \ldots, b_{\sigma(n)}\right)$.
It seems, however, that this formulation adds rather more technicalities to the problem than additional ideas.
C7. Let $M$ be a set of $n \geqslant 4$ points in the plane, no three of which are collinear. Initially these points are connected with $n$ segments so that each point in $M$ is the endpoint of exactly two segments. Then, at each step, one may choose two segments $A B$ and $C D$ sharing a common interior point and replace them by the segments $A C$ and $B D$ if none of them is present at this moment. Prove that it is impossible to perform $n^{3} / 4$ or more such moves. (Russia) Solution. A line is said to be red if it contains two points of $M$. As no three points of $M$ are collinear, each red line determines a unique pair of points of $M$. Moreover, there are precisely $\binom{n}{2}<\frac{n^{2}}{2}$ red lines. By the value of a segment we mean the number of red lines intersecting it in its interior, and the value of a set of segments is defined to be the sum of the values of its elements. We will prove that $(i)$ the value of the initial set of segments is smaller than $n^{3} / 2$ and that (ii) each step decreases the value of the set of segments present by at least 2 . Since such a value can never be negative, these two assertions imply the statement of the problem.
To show $(i)$ we just need to observe that each segment has a value that is smaller than $n^{2} / 2$. Thus the combined value of the $n$ initial segments is indeed below $n \cdot n^{2} / 2=n^{3} / 2$.
It remains to establish (ii). Suppose that at some moment we have two segments $A B$ and $C D$ sharing an interior point $S$, and that at the next moment we have the two segments $A C$ and $B D$ instead. Let $X_{A B}$ denote the set of red lines intersecting the segment $A B$ in its interior and let the sets $X_{A C}, X_{B D}$, and $X_{C D}$ be defined similarly. We are to prove that $\left|X_{A C}\right|+\left|X_{B D}\right|+2 \leqslant\left|X_{A B}\right|+\left|X_{C D}\right|$.
As a first step in this direction, we claim that
$$\left|X_{A C} \cup X_{B D}\right|+2 \leqslant\left|X_{A B} \cup X_{C D}\right| . \tag{1}$$
Indeed, if $g$ is a red line intersecting, e.g. the segment $A C$ in its interior, then it has to intersect the triangle $A C S$ once again, either in the interior of its side $A S$, or in the interior of its side $C S$, or at $S$, meaning that it belongs to $X_{A B}$ or to $X_{C D}$ (see Figure 1). Moreover, the red lines $A B$ and $C D$ contribute to $X_{A B} \cup X_{C D}$ but not to $X_{A C} \cup X_{B D}$. Thereby (1) is proved.

Figure 3
Similarly but more easily one obtains
$$\left|X_{A C} \cap X_{B D}\right| \leqslant\left|X_{A B} \cap X_{C D}\right| . \tag{2}$$
Indeed, a red line $h$ appearing in $X_{A C} \cap X_{B D}$ belongs, for similar reasons as above, also to $X_{A B} \cap X_{C D}$. To make the argument precise, one may just distinguish the cases $S \in h$ (see Figure 2) and $S \notin h$ (see Figure 3). Thereby (2) is proved.
Adding (1) and (2) we obtain the desired conclusion, thus completing the solution of this problem.
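The key inequality can also be spot-checked numerically. The sketch below is ours, with helper names of our choosing: it counts, for random point sets in general position, the red lines crossing each segment's interior, and verifies $\left|X_{A C}\right|+\left|X_{B D}\right|+2 \leqslant\left|X_{A B}\right|+\left|X_{C D}\right|$.

```python
# Spot-check of |X_AC| + |X_BD| + 2 <= |X_AB| + |X_CD| on random point sets
# in general position (illustration only). A red line through p, q crosses
# a segment's interior iff its endpoints lie strictly on opposite sides;
# this strict test drops the segment's own red line from each of the four
# counts, so both sides of the inequality decrease by 2 and it is unchanged.
import itertools, random

def side(p, q, r):
    return (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])

def crossings(points, seg):
    u, v = seg
    return sum(1 for p, q in itertools.combinations(points, 2)
               if side(p, q, u) * side(p, q, v) < 0)

rng = random.Random(3)
trials = 0
while trials < 30:
    pts = [(rng.randrange(1000), rng.randrange(1000)) for _ in range(8)]
    if len(set(pts)) < 8:
        continue
    if any(side(p, q, r) == 0 for p, q, r in itertools.combinations(pts, 3)):
        continue                      # enforce: no three points collinear
    A, B, C, D = pts[:4]
    # require that segments AB and CD cross in their interiors
    if side(A, B, C) * side(A, B, D) < 0 and side(C, D, A) * side(C, D, B) < 0:
        trials += 1
        assert (crossings(pts, (A, C)) + crossings(pts, (B, D)) + 2
                <= crossings(pts, (A, B)) + crossings(pts, (C, D)))
```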
Comment 1. There is a problem belonging to the folklore, in the solution of which one may use the same kind of operation:
Given $n$ red and $n$ green points in the plane, prove that one may draw $n$ nonintersecting segments each of which connects a red point with a green point.
A standard approach to this problem consists in taking $n$ arbitrary segments connecting the red points with the green points, and to perform the same operation as in the above proposal whenever an intersection occurs. Now each time one performs such a step, the total length of the segments that are present decreases due to the triangle inequality. So, as there are only finitely many possibilities for the set of segments present, the process must end at some stage.
In the above proposal, however, considering the sum of the Euclidean lengths of the segments that are present does not seem to help much, for even though it shows that the process must necessarily terminate after some finite number of steps, it does not seem to easily yield any upper bound on the number of these steps that grows polynomially with $n$.
One may regard the concept of the value of a segment introduced in the above solution as an appropriately discretised version of Euclidean length suitable for obtaining such a bound.
The Problem Selection Committee still believes the problem to be sufficiently original for the competition.
Comment 2. There are some other essentially equivalent ways of presenting the same solution. E.g., put $M=\left\{A_{1}, A_{2}, \ldots, A_{n}\right\}$, denote the set of segments present at any moment by $\left\{e_{1}, e_{2}, \ldots, e_{n}\right\}$, and call a triple $(i, j, k)$ of indices with $i \neq j$ intersecting if the line $A_{i} A_{j}$ intersects the segment $e_{k}$. It may then be shown that the number $S$ of intersecting triples satisfies $0 \leqslant S<n^{3}$ at the beginning and decreases by at least 4 in each step.
Comment 3. It is not difficult to construct an example where $c n^{2}$ moves are possible (for some absolute constant $c>0$ ). It would be interesting to say more about the gap between $c n^{2}$ and $c n^{3}$.
C8. A card deck consists of 1024 cards. On each card, a set of distinct decimal digits is written in such a way that no two of these sets coincide (thus, one of the cards is empty). Two players alternately take cards from the deck, one card per turn. After the deck is empty, each player checks if he can throw out one of his cards so that each of the ten digits occurs on an even number of his remaining cards. If one player can do this but the other one cannot, the one who can is the winner; otherwise a draw is declared.
Determine all possible first moves of the first player after which he has a winning strategy. (Russia) Answer. All the moves except for taking the empty card.
Solution. Let us identify each card with the set of digits written on it. For any collection of cards $C_{1}, C_{2}, \ldots, C_{k}$ denote by their sum the set $C_{1} \triangle C_{2} \triangle \cdots \triangle C_{k}$ consisting of all elements belonging to an odd number of the $C_{i}$ 's. Denote the first and the second player by $\mathcal{F}$ and $\mathcal{S}$, respectively.
Since each digit is written on exactly 512 cards, the sum of all the cards is $\varnothing$. Therefore, at the end of the game the sum of all the cards of $\mathcal{F}$ will be the same as that of $\mathcal{S}$; denote this sum by $C$. Then the player who took $C$ can throw it out and get the desired situation, while the other one cannot. Thus, the player getting card $C$ wins, and no draw is possible.
Now, given a nonempty card $B$, one can easily see that all the cards can be split into 512 pairs of the form $(X, X \triangle B)$ because $(X \triangle B) \triangle B=X$. The following lemma shows a property of such a partition that is important for the solution. Lemma. Let $B \neq \varnothing$ be some card. Let us choose 512 cards so that exactly one card is chosen from every pair $(X, X \triangle B)$. Then the sum of all chosen cards is either $\varnothing$ or $B$. Proof. Let $b$ be some element of $B$. Enumerate the pairs; let $X_{i}$ be the card not containing $b$ in the $i^{\text {th }}$ pair, and let $Y_{i}$ be the other card in this pair. Then the sets $X_{i}$ are exactly all the sets not containing $b$, therefore each digit $a \neq b$ is written on exactly 256 of these cards, so $X_{1} \triangle X_{2} \triangle \cdots \triangle X_{512}=\varnothing$. Now, if we replace some summands in this sum by the other elements from their pairs, we will simply add $B$ several times to this sum, thus the sum will either remain unchanged or change by $B$, as required.
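Since cards are just subsets of $\{0, \ldots, 9\}$ and $\triangle$ is XOR on 10-bit masks, the lemma lends itself to a direct computational check (ours, purely illustrative):

```python
# Computational check of the lemma: cards are subsets of {0,...,9} encoded
# as 10-bit masks, and the "sum" of a collection of cards is bitwise XOR.
import functools, random

xor_all = lambda cards: functools.reduce(lambda u, v: u ^ v, cards, 0)
assert xor_all(range(1024)) == 0          # the sum of all cards is empty

rng = random.Random(1)
for _ in range(100):
    B = rng.randrange(1, 1024)            # an arbitrary nonempty card B
    pairs = {frozenset((X, X ^ B)) for X in range(1024)}
    assert len(pairs) == 512              # the pairing (X, X ^ B)
    chosen = [rng.choice(sorted(p)) for p in pairs]
    assert xor_all(chosen) in (0, B)      # one card per pair sums to 0 or B
```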
Now we consider two cases. Case 1. Assume that $\mathcal{F}$ takes the card $\varnothing$ on his first move. In this case, we present a winning strategy for $\mathcal{S}$.
Let $\mathcal{S}$ take an arbitrary card $A$. Assume that $\mathcal{F}$ takes card $B$ after that; then $\mathcal{S}$ takes $A \triangle B$. Split all 1024 cards into 512 pairs of the form $(X, X \triangle B)$; we call two cards in one pair partners. Then the four cards taken so far form two pairs $(\varnothing, B)$ and $(A, A \triangle B)$ belonging to $\mathcal{F}$ and $\mathcal{S}$, respectively. On each of the subsequent moves, when $\mathcal{F}$ takes some card, $\mathcal{S}$ should take the partner of this card in response.
Consider the situation at the end of the game. Let us for a moment replace card $A$ belonging to $\mathcal{S}$ by $\varnothing$. Then he would have one card from each pair; by our lemma, the sum of all these cards would be either $\varnothing$ or $B$. Now, replacing $\varnothing$ back by $A$ we get that the actual sum of the cards of $\mathcal{S}$ is either $A$ or $A \triangle B$, and he has both these cards. Thus $\mathcal{S}$ wins.
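The Case 1 strategy is easy to simulate against random play by $\mathcal{F}$. The sketch below is ours (the function name is an invention for the illustration); it checks that $\mathcal{S}$ always ends up holding the common sum $C$.

```python
# Simulate Case 1: F takes the empty card 0 and then plays randomly;
# S follows the pairing strategy and should always hold the common sum C.
import functools, random

def second_player_wins(seed):
    rng = random.Random(seed)
    deck = set(range(1024))
    f_cards, s_cards = [0], []                 # F takes the empty card
    deck.discard(0)
    A = rng.choice(sorted(deck)); s_cards.append(A); deck.discard(A)
    B = rng.choice(sorted(deck)); f_cards.append(B); deck.discard(B)
    s_cards.append(A ^ B); deck.discard(A ^ B) # S answers with A xor B
    while deck:
        x = rng.choice(sorted(deck)); f_cards.append(x); deck.discard(x)
        s_cards.append(x ^ B); deck.discard(x ^ B)  # S takes the partner
    C = functools.reduce(lambda u, v: u ^ v, s_cards)    # common sum
    assert C == functools.reduce(lambda u, v: u ^ v, f_cards)
    return C in s_cards                        # S holds C, hence S wins

assert all(second_player_wins(s) for s in range(20))
```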
Case 2. Now assume that $\mathcal{F}$ takes some card $A \neq \varnothing$ on his first move. Let us present a winning strategy for $\mathcal{F}$ in this case.
Assume that $\mathcal{S}$ takes some card $B \neq \varnothing$ on his first move; then $\mathcal{F}$ takes $A \triangle B$. Again, let us split all the cards into pairs of the form $(X, X \triangle B)$; then the cards which have not been taken yet form several complete pairs and one extra element (card $\varnothing$ has not been taken while its partner $B$ has). Now, on each of the subsequent moves, if $\mathcal{S}$ takes some element from a complete pair, then $\mathcal{F}$ takes its partner. If $\mathcal{S}$ takes the extra element, then $\mathcal{F}$ takes an arbitrary card $Y$, and the partner of $Y$ becomes the new extra element.
Thus, on his last move $\mathcal{S}$ is forced to take the extra element. After that player $\mathcal{F}$ has cards $A$ and $A \triangle B$, player $\mathcal{S}$ has cards $B$ and $\varnothing$, and $\mathcal{F}$ has exactly one element from every other pair. Thus the situation is the same as in the previous case with roles reversed, and $\mathcal{F}$ wins.
Finally, if $\mathcal{S}$ takes $\varnothing$ on his first move, then $\mathcal{F}$ denotes by $B$ any card which has not been taken yet and takes $A \triangle B$. After that, the same strategy as above is applicable.
Comment 1. If one wants to avoid the unusual question about the first move, one may change the formulation as follows. (The difficulty of the problem would decrease somewhat.)
A card deck consists of 1023 cards; on each card, a nonempty set of distinct decimal digits is written in such a way that no two of these sets coincide. Two players alternately take cards from the deck, one card per turn. When the deck is empty, each player checks if he can throw out one of his cards so that for each of the ten digits, he still holds an even number of cards with this digit. If one player can do this but the other one cannot, the one who can is the winner; otherwise a draw is declared.
Determine which of the players (if any) has a winning strategy. The winner in this version is the first player. The analysis of the game from the first two paragraphs of the previous solution applies to this version as well, except for the case $C=\varnothing$ in which the result is a draw. Then the strategy for $\mathcal{S}$ in Case 1 works for $\mathcal{F}$ in this version: the sum of all his cards at the end is either $A$ or $A \triangle B$, thus nonempty in both cases.
Comment 2. Notice that all the cards form a vector space over $\mathbb{F}_{2}$, with $\triangle$ the operation of addition. Due to the automorphisms of this space, all possibilities for $\mathcal{F}$ 's first move except $\varnothing$ are equivalent. The same holds for the response by $\mathcal{S}$ if $\mathcal{F}$ takes the card $\varnothing$ on his first move.
Comment 3. It is not that hard to show that in the initial game, $\mathcal{F}$ has a winning move, by the idea of "strategy stealing".
Namely, assume that $\mathcal{S}$ has a winning strategy. Let us take two card decks and start two games, in which $\mathcal{S}$ will act by his strategy. In the first game, $\mathcal{F}$ takes an arbitrary card $A_{1}$; assume that $\mathcal{S}$ takes some $B_{1}$ in response. Then $\mathcal{F}$ takes the card $B_{1}$ at the second game; let the response by $\mathcal{S}$ be $A_{2}$. Then $\mathcal{F}$ takes $A_{2}$ in the first game and gets a response $B_{2}$, and so on.
This process stops at some moment when in the second game $\mathcal{S}$ takes $A_{i}=A_{1}$. At this moment the players hold the same sets of cards in both games, but with roles reversed. Now, if some cards remain in the decks, $\mathcal{F}$ takes an arbitrary card from the first deck starting a similar cycle.
At the end of the game, player $\mathcal{F}$ 's cards in the first game are exactly player $\mathcal{S}$ 's cards in the second game, and vice versa. Thus in one of the games $\mathcal{F}$ will win, which is impossible by our assumption.
One may notice that the strategy in Case 2 is constructed exactly in this way from the strategy in Case 1 . This is possible since every response by $\mathcal{S}$ wins if $\mathcal{F}$ takes the card $\varnothing$ on his first move.
C9. There are $n$ circles drawn on a piece of paper in such a way that any two circles intersect in two points, and no three circles pass through the same point. Turbo the snail slides along the circles in the following fashion. Initially he moves on one of the circles in clockwise direction. Turbo always keeps sliding along the current circle until he reaches an intersection with another circle. Then he continues his journey on this new circle and also changes the direction of moving, i.e. from clockwise to anticlockwise or vice versa.
Suppose that Turbo's path entirely covers all circles. Prove that $n$ must be odd.
(India)
Solution 1. Replace every cross (i.e. intersection of two circles) by two small circle arcs that indicate the direction in which the snail should leave the cross (see Figure 1.1). Notice that the placement of the small arcs does not depend on the direction of moving on the curves; no matter which direction the snail is moving on the circle arcs, he will follow the same curves (see Figure 1.2). In this way we obtain a set of curves that are the possible paths of the snail. Call these curves snail orbits or just orbits. Every snail orbit is a simple closed curve that has no intersection with any other orbit.

Figure 1.2
We prove the following, more general statement. (*) In any configuration of $n$ circles such that no two of them are tangent, the number of snail orbits has the same parity as the number $n$. (Note that it is not assumed that all circle pairs intersect.)
This immediately solves the problem.
Let us introduce the following operation that will be called flipping a cross. At a cross, remove the two small arcs of the orbits, and replace them by the other two arcs. Hence, when the snail arrives at a flipped cross, he will continue on the other circle as before, but he will preserve the orientation in which he goes along the circle arcs (see Figure 2).

Figure 2 Consider what happens to the number of orbits when a cross is flipped. Denote by $a, b, c$, and $d$ the four arcs that meet at the cross such that $a$ and $b$ belong to the same circle. Before the flipping $a$ and $b$ were connected to $c$ and $d$, respectively, and after the flipping $a$ and $b$ are connected to $d$ and $c$, respectively.
The orbits passing through the cross are closed curves, so each of the $\operatorname{arcs} a, b, c$, and $d$ is connected to another one by orbits outside the cross. We distinguish three cases.
Case 1: $a$ is connected to $b$ and $c$ is connected to $d$ by the orbits outside the cross (see Figure 3.1).
We show that this case is impossible. Remove the two small arcs at the cross, connect $a$ to $b$, and connect $c$ to $d$ at the cross. Let $\gamma$ be the new closed curve containing $a$ and $b$, and let $\delta$ be the new curve that connects $c$ and $d$. These two curves intersect at the cross. So one of $c$ and $d$ is inside $\gamma$ and the other one is outside $\gamma$. Then the two closed curves have to meet at least one more time, but this is a contradiction, since no orbit can intersect itself.

Figure 3.3
Case 2: $a$ is connected to $c$ and $b$ is connected to $d$ (see Figure 3.2). Before the flipping $a$ and $c$ belong to one orbit and $b$ and $d$ belong to another orbit. Flipping the cross merges the two orbits into a single orbit. Hence, the number of orbits decreases by 1.
Case 3: $a$ is connected to $d$ and $b$ is connected to $c$ (see Figure 3.3). Before the flipping the arcs $a, b, c$, and $d$ belong to a single orbit. Flipping the cross splits that orbit in two. The number of orbits increases by 1.
As can be seen, every flipping decreases or increases the number of orbits by one, thus changes its parity.
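The case analysis mirrors a standard fact about permutations: multiplying by a transposition either merges two cycles or splits one, so the cycle count changes by exactly one. A tiny illustration of that parity mechanism (ours, not part of the solution):

```python
# Swapping the images of two elements of a permutation (composition with a
# transposition) changes its number of cycles by exactly 1, merge or split,
# just as flipping a cross changes the number of orbits by exactly 1.
import random

def cycle_count(perm):
    seen, cnt = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            cnt += 1
            x = start
            while x not in seen:
                seen.add(x)
                x = perm[x]
    return cnt

rng = random.Random(2)
for _ in range(200):
    m = 8
    perm = list(range(m))
    rng.shuffle(perm)
    i, j = rng.sample(range(m), 2)
    before = cycle_count(perm)
    perm[i], perm[j] = perm[j], perm[i]   # "flip": swap two outgoing arcs
    assert abs(cycle_count(perm) - before) == 1
```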
Now flip every cross, one by one. Since every pair of circles has 0 or 2 intersections, the number of crosses is even. Therefore, when all crosses have been flipped, the original parity of the number of orbits is restored. So it is sufficient to prove (*) for the new configuration, where all crosses are flipped. Of course also in this new configuration the (modified) orbits are simple closed curves not intersecting each other.
Orient the orbits in such a way that the snail always moves anticlockwise along the circle arcs. Figure 4 shows the same circles as in Figure 1 after flipping all crosses and adding orientation. (Note that this orientation may be different from the orientation of the orbit as a planar curve; the orientation of every orbit may be negative as well as positive, like the middle orbit in Figure 4.) If the snail moves around an orbit, the total angle change in his moving direction, the total curvature, is either $+2 \pi$ or $-2 \pi$, depending on the orientation of the orbit. Let $P$ and $N$ be the number of orbits with positive and negative orientation, respectively. Then the total curvature of all orbits is $(P-N) \cdot 2 \pi$.

Figure 5
Double-count the total curvature of all orbits. Along every circle the total curvature is $2 \pi$. At every cross, the two turnings make two changes with some angles having the same absolute value but opposite signs, as depicted in Figure 5. So the changes in the direction at the crosses cancel out. Hence, the total curvature is $n \cdot 2 \pi$.
Now we have $(P-N) \cdot 2 \pi=n \cdot 2 \pi$, so $P-N=n$. The number of (modified) orbits is $P+N$, which has the same parity as $P-N=n$.
Solution 2. We present a different proof of (*). We perform a sequence of small modification steps on the configuration of the circles in such a way that at the end they have no intersection at all (see Figure 6.1). We use two kinds of local changes to the structure of the orbits (see Figure 6.2):
- Type-1 step: An arc of a circle is moved over an arc of another circle; such a step creates or removes two intersections.
- Type-2 step: An arc of a circle is moved through the intersection of two other circles.

Figure 6.2
We assume that in every step only one circle is moved, and that this circle is moved over at most one arc or intersection point of other circles.
We will show that the parity of the number of orbits does not change in any step. As every circle becomes a separate orbit at the end of the procedure, this fact proves (*).
Consider what happens to the number of orbits when a Type-1 step is performed. The two intersection points are created or removed in a small neighbourhood. Denote some points of the two circles where they enter or leave this neighbourhood by $a, b, c$, and $d$ in this order around the neighbourhood; let $a$ and $b$ belong to one circle and let $c$ and $d$ belong to the other circle. The two circle arcs may have the same or opposite orientations. Moreover, the four end-points of the two arcs are connected by the other parts of the orbits. This can happen in two ways without intersection: either $a$ is connected to $d$ and $b$ is connected to $c$, or $a$ is connected to $b$ and $c$ is connected to $d$. Altogether we have four cases, as shown in Figure 7.

Figure 7
We can see that the number of orbits is changed by $-2$ or $+2$ in the leftmost case, when the arcs have the same orientation, $a$ is connected to $d$, and $b$ is connected to $c$. In the other three cases the number of orbits is not changed. Hence, Type-1 steps do not change the parity of the number of orbits.
Now consider a Type-2 step. The three circles enclose a small, triangular region; by the step, this triangle is replaced by another triangle. Again, the modification of the orbits is done in some small neighbourhood; the structure does not change outside. Each side of the triangle-shaped region can be convex or concave; the number of concave sides can be $0$, $1$, $2$, or $3$, so there are $4$ possible arrangements of the orbits inside the neighbourhood, as shown in Figure 8.
all convex

Figure 8
Denote the points where the three circles enter or leave the neighbourhood by $a, b, c, d$, $e$, and $f$ in this order around the neighbourhood. As can be seen in Figure 8, there are only two essentially different cases; either $a, c, e$ are connected to $b, d, f$, respectively, or $a, c, e$ are connected to $f, b, d$, respectively. The step either preserves the set of connections or switches to the other arrangement. Obviously, in the former case the number of orbits is not changed; therefore we have to consider only the latter case.
The points $a, b, c, d, e$, and $f$ are connected by the orbits outside, without intersection. If $a$ was connected to $c$, say, then this orbit would isolate $b$, so this is impossible. Hence, each of $a, b, c, d, e$, and $f$ must be connected either to one of its neighbours or to the opposite point. If, say, $a$ is connected to $d$, then this orbit separates $b$ and $c$ from $e$ and $f$, therefore $b$ must be connected to $c$ and $e$ must be connected to $f$. Altogether there are only two cases and their reverses: either each point is connected to one of its neighbours, or two opposite points are connected and each of the remaining neighbouring pairs is connected to each other. See Figure 9.

Figure 9
We can see that if only neighbouring points are connected, then the number of orbits is changed by $+2$ or $-2$. If two opposite points are connected ($a$ and $d$ in the figure), then the orbits are re-arranged, but their number is unchanged. Hence, Type-2 steps also preserve the parity. This completes the proof of (*).
Solution 3. As in the previous solutions, we do not need all circle pairs to intersect, but we assume that the circles form a connected set. Denote by $\mathcal{C}$ and $\mathcal{P}$ the sets of circles and their intersection points, respectively.
The circles divide the plane into several simply connected, bounded regions and one unbounded region. Denote the set of these regions by $\mathcal{R}$. We say that an intersection point or a region is odd or even if it is contained inside an odd or even number of circles, respectively. Let $\mathcal{P}_{\text {odd }}$ and $\mathcal{R}_{\text {odd }}$ be the sets of odd intersection points and odd regions, respectively.
Claim. $\left|\mathcal{P}_{\text {odd }}\right|+\left|\mathcal{R}_{\text {odd }}\right| \equiv n \pmod{2}$. (1)
Proof. For each circle $c \in \mathcal{C}$, denote by $R_{c}, P_{c}$, and $X_{c}$ the number of regions inside $c$, the number of intersection points inside $c$, and the number of circles intersecting $c$, respectively. The circles divide each other into several arcs; denote by $A_{c}$ the number of such arcs inside $c$. By double counting the regions and intersection points inside the circles we get $\sum_{c \in \mathcal{C}} R_{c} \equiv\left|\mathcal{R}_{\text {odd }}\right| \pmod{2}$ and $\sum_{c \in \mathcal{C}} P_{c} \equiv\left|\mathcal{P}_{\text {odd }}\right| \pmod{2}$.
For each circle $c$, apply Euler's polyhedron theorem to the (simply connected) regions in $c$. There are $2 X_{c}$ intersection points on $c$; they divide the circle into $2 X_{c}$ arcs. The polyhedron theorem yields $\left(R_{c}+1\right)+\left(P_{c}+2 X_{c}\right)=\left(A_{c}+2 X_{c}\right)+2$, considering the exterior of $c$ as a single region. Therefore, $R_{c}=A_{c}-P_{c}+1$. (2)
Moreover, four arcs start from every intersection point inside $c$, and a single arc starts into the interior from each intersection point on the circle. By double-counting the end-points of the interior arcs we get $2 A_{c}=4 P_{c}+2 X_{c}$, so $A_{c}=2 P_{c}+X_{c}$. (3)
The relations (2) and (3) together yield $R_{c}=P_{c}+X_{c}+1$. (4)
By summing up (4) for all circles we obtain $\sum_{c \in \mathcal{C}} R_{c}=\sum_{c \in \mathcal{C}} P_{c}+\sum_{c \in \mathcal{C}} X_{c}+n$,
which yields $\left|\mathcal{R}_{\text {odd }}\right| \equiv\left|\mathcal{P}_{\text {odd }}\right|+\sum_{c \in \mathcal{C}} X_{c}+n \pmod{2}$.
Notice that in $\sum_{c \in \mathcal{C}} X_{c}$ each intersecting circle pair is counted twice, i.e., for both circles in the pair, so $\sum_{c \in \mathcal{C}} X_{c}$ is even,
which finishes the proof of the Claim.
Now insert the same small arcs at the intersections as in the first solution, and suppose that there is a single snail orbit $b$.
First we show that the odd regions are inside the curve $b$, while the even regions are outside. Take a region $r \in \mathcal{R}$ and a point $x$ in its interior, and draw a ray $y$, starting from $x$, that does not pass through any intersection point of the circles and is neither tangent to any of the circles. As is well-known, $x$ is inside the curve $b$ if and only if $y$ intersects $b$ an odd number of times (see Figure 10). Notice that if an arbitrary circle $c$ contains $x$ in its interior, then $c$ intersects $y$ at a single point; otherwise, if $x$ is outside $c$, then $c$ has 2 or 0 intersections with $y$. Therefore, $y$ intersects $b$ an odd number of times if and only if $x$ is contained in an odd number of circles, so if and only if $r$ is odd.
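Stated compactly (with $k$ denoting the number of circles containing $x$, and with $y$ chosen so that it also avoids the small neighbourhoods where the arcs were modified), the count above reads:

```latex
\#\,(y \cap b) \;=\; \sum_{c \in \mathcal{C}} \#\,(y \cap c) \;\equiv\; k \pmod{2},
```

since each circle containing $x$ meets $y$ exactly once, while every other circle meets $y$ zero or two times.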

Figure 10
Now consider an intersection point $p$ of two circles $c_{1}$ and $c_{2}$ and a small neighbourhood around $p$. Suppose that $p$ is contained inside $k$ circles.
We have four regions that meet at $p$. Let $r_{1}$ be the region that lies outside both $c_{1}$ and $c_{2}$, let $r_{2}$ be the region that lies inside both $c_{1}$ and $c_{2}$, and let $r_{3}$ and $r_{4}$ be the two remaining regions, each lying inside exactly one of $c_{1}$ and $c_{2}$. The region $r_{1}$ is contained inside the same $k$ circles as $p$; the region $r_{2}$ is contained also by $c_{1}$ and $c_{2}$, so by $k+2$ circles in total; each of the regions $r_{3}$ and $r_{4}$ is contained inside $k+1$ circles. After the small arcs have been inserted at $p$, the regions $r_{1}$ and $r_{2}$ get connected, and the regions $r_{3}$ and $r_{4}$ remain separated at $p$ (see Figure 11). If $p$ is an odd point, then $r_{1}$ and $r_{2}$ are odd, so two odd regions are connected at $p$. Otherwise, if $p$ is even, then we have two even regions connected at $p$.

Figure 12
Consider the system of odd regions and their connections at the odd points as a graph. In this graph the odd regions are the vertices, and each odd point establishes an edge that connects two vertices (see Figure 12). As $b$ is a single closed curve, this graph is connected and contains no cycle, so the graph is a tree. Then the number of vertices must be one greater than the number of edges, so $\left|\mathcal{R}_{\text {odd }}\right|=\left|\mathcal{P}_{\text {odd }}\right|+1$. (9)
The relations (1) and (9) together prove that $n$ must be odd.
Comment. For every odd $n$ there exists at least one configuration of $n$ circles with a single snail orbit. Figure 13 shows a possible configuration with 5 circles. In general, if a circle is rotated by $k \cdot \frac{360^{\circ}}{n}$ $(k=1,2, \ldots, n-1)$ around an interior point other than the centre, the circle and its rotated copies together provide a single snail orbit.

Figure 13
Geometry
G1. The points $P$ and $Q$ are chosen on the side $B C$ of an acute-angled triangle $A B C$ so that $\angle P A B=\angle A C B$ and $\angle Q A C=\angle C B A$. The points $M$ and $N$ are taken on the rays $A P$ and $A Q$, respectively, so that $A P=P M$ and $A Q=Q N$. Prove that the lines $B M$ and $C N$ intersect on the circumcircle of the triangle $A B C$. (Georgia) Solution 1. Denote by $S$ the intersection point of the lines $B M$ and $C N$. Let moreover $\beta=\angle Q A C=\angle C B A$ and $\gamma=\angle P A B=\angle A C B$. From these equalities it follows that the triangles $A B P$ and $C A Q$ are similar (see Figure 1). Therefore we obtain
Moreover,
Hence the triangles $B P M$ and $N Q C$ are similar. This gives $\angle B M P=\angle N C Q$, so the triangles $B P M$ and $B S C$ are also similar. Thus we get
Figure 2
Solution 2. As in the previous solution, denote by $S$ the intersection point of the lines $B M$ and $N C$. Let moreover the circumcircle of the triangle $A B C$ intersect the lines $A P$ and $A Q$ again at $K$ and $L$, respectively (see Figure 2).
Note that $\angle L B C=\angle L A C=\angle C B A$ and similarly $\angle K C B=\angle K A B=\angle B C A$. It implies that the lines $B L$ and $C K$ meet at a point $X$ which is symmetric to the point $A$ with respect to the line $B C$. Since $A P=P M$ and $A Q=Q N$, it follows that $X$ lies on the line $M N$. Therefore, using Pascal's theorem for the hexagon $A L B S C K$, we infer that $S$ lies on the circumcircle of the triangle $A B C$, which finishes the proof.
Comment. Both solutions can be modified to obtain a more general result, with the equalities
replaced by
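As a numerical sanity check of G1 (not part of the solutions; the coordinates below are an arbitrary acute triangle), note that $\angle P A B=\angle A C B$ forces triangles $A B P$ and $C B A$ to be similar, so $B P \cdot B C=A B^{2}$, and likewise $C Q \cdot B C=C A^{2}$. The sketch below builds the configuration and checks that $S=B M \cap C N$ lies on the circumcircle:

```python
# Numerical check of G1 for one concrete acute triangle (a sketch, not a proof).
# P, Q on BC are determined by BP*BC = AB^2 and CQ*BC = CA^2 (similar triangles),
# M and N are the reflections of A in P and Q, and S = BM ∩ CN should lie on
# the circumcircle of ABC.
import math

A, B, C = (1.2, 2.1), (0.0, 0.0), (3.0, 0.0)  # an arbitrary acute triangle

def dist2(u, v):
    return (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2

a2, b2, c2 = dist2(B, C), dist2(C, A), dist2(A, B)

t = c2 / a2                       # BP / BC
P = (B[0] + t * (C[0] - B[0]), B[1] + t * (C[1] - B[1]))
s = b2 / a2                       # CQ / CB
Q = (C[0] + s * (B[0] - C[0]), C[1] + s * (B[1] - C[1]))

M = (2 * P[0] - A[0], 2 * P[1] - A[1])   # AP = PM
N = (2 * Q[0] - A[0], 2 * Q[1] - A[1])   # AQ = QN

def intersect(p1, p2, p3, p4):
    """Intersection point of lines p1p2 and p3p4."""
    d1 = (p2[0] - p1[0], p2[1] - p1[1])
    e1 = (p4[0] - p3[0], p4[1] - p3[1])
    den = d1[0] * e1[1] - d1[1] * e1[0]
    u = ((p3[0] - p1[0]) * e1[1] - (p3[1] - p1[1]) * e1[0]) / den
    return (p1[0] + u * d1[0], p1[1] + u * d1[1])

S = intersect(B, M, C, N)

def circumcenter(p, q, r):
    d = 2 * (p[0] * (q[1] - r[1]) + q[0] * (r[1] - p[1]) + r[0] * (p[1] - q[1]))
    pp, qq, rr = dist2(p, (0, 0)), dist2(q, (0, 0)), dist2(r, (0, 0))
    ux = (pp * (q[1] - r[1]) + qq * (r[1] - p[1]) + rr * (p[1] - q[1])) / d
    uy = (pp * (r[0] - q[0]) + qq * (p[0] - r[0]) + rr * (q[0] - p[0])) / d
    return (ux, uy)

O = circumcenter(A, B, C)
assert abs(math.dist(O, S) - math.dist(O, A)) < 1e-9  # S lies on the circumcircle
```

For this particular triangle one finds $S=(1.35,-1.05)$, at the circumradius distance from $O$.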
G2. Let $A B C$ be a triangle. The points $K, L$, and $M$ lie on the segments $B C, C A$, and $A B$, respectively, such that the lines $A K, B L$, and $C M$ intersect in a common point. Prove that it is possible to choose two of the triangles $A L M, B M K$, and $C K L$ whose inradii sum up to at least the inradius of the triangle $A B C$. (Estonia) Solution. Denote
By Ceva's theorem, $a b c=1$, so we may, without loss of generality, assume that $a \geqslant 1$. Then at least one of the numbers $b$ or $c$ is not greater than 1 . Therefore at least one of the pairs $(a, b)$, $(b, c)$ has its first component not less than 1 and the second one not greater than 1 . Without loss of generality, assume that $1 \leqslant a$ and $b \leqslant 1$.
Therefore, we obtain $b c \leqslant 1$ and $1 \leqslant c a$, or equivalently
The first inequality implies that the line passing through $M$ and parallel to $B C$ intersects the segment $A L$ at a point $X$ (see Figure 1). Therefore the inradius of the triangle $A L M$ is not less than the inradius $r_{1}$ of triangle $A M X$.
Similarly, the line passing through $M$ and parallel to $A C$ intersects the segment $B K$ at a point $Y$, so the inradius of the triangle $B M K$ is not less than the inradius $r_{2}$ of the triangle $B M Y$. Thus, to complete our solution, it is enough to show that $r_{1}+r_{2} \geqslant r$, where $r$ is the inradius of the triangle $A B C$. We prove that in fact $r_{1}+r_{2}=r$.

Figure 1
Since $M X \| B C$, the dilation with centre $A$ that takes $M$ to $B$ takes the incircle of the triangle $A M X$ to the incircle of the triangle $A B C$. Therefore $r_{1}=\frac{A M}{A B} \cdot r$, and similarly the dilation with centre $B$ that takes $M$ to $A$ gives $r_{2}=\frac{M B}{A B} \cdot r$.
Adding these equalities gives $r_{1}+r_{2}=r$, as required.
Comment. Alternatively, one can use Desargues' theorem instead of Ceva's theorem, as follows: The lines $A B, B C, C A$ dissect the plane into seven regions. One of them is bounded, and amongst the other six, three are two-sided and three are three-sided. Now define the points $P=B C \cap L M$, $Q=C A \cap M K$, and $R=A B \cap K L$ (in the projective plane). By Desargues' theorem, the points $P$, $Q, R$ lie on a common line $\ell$. This line intersects only unbounded regions. If we now assume (without loss of generality) that $P, Q$ and $R$ lie on $\ell$ in that order, then one of the segments $P Q$ or $Q R$ lies inside a two-sided region. If, for example, this segment is $P Q$, then the triangles $A L M$ and $B M K$ will satisfy the statement of the problem for the same reason.
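The statement of G2 is also easy to test numerically. The sketch below (with an arbitrary triangle and an arbitrary interior point for the common cevian point, both my choices) computes the three inradii and checks that some pair of them sums to at least the inradius of $A B C$:

```python
# Numerical check of G2 for one concrete configuration (a sketch, not a proof).
import math

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
T = (1.6, 1.0)   # an arbitrary interior point; AK, BL, CM all pass through T

def intersect(p1, p2, p3, p4):
    """Intersection point of lines p1p2 and p3p4."""
    d1 = (p2[0] - p1[0], p2[1] - p1[1])
    e1 = (p4[0] - p3[0], p4[1] - p3[1])
    den = d1[0] * e1[1] - d1[1] * e1[0]
    u = ((p3[0] - p1[0]) * e1[1] - (p3[1] - p1[1]) * e1[0]) / den
    return (p1[0] + u * d1[0], p1[1] + u * d1[1])

K = intersect(A, T, B, C)   # K on segment BC
L = intersect(B, T, C, A)   # L on segment CA
M = intersect(C, T, A, B)   # M on segment AB

def inradius(p, q, r):
    # inradius = area / semiperimeter
    area = abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2
    per = math.dist(p, q) + math.dist(q, r) + math.dist(r, p)
    return 2 * area / per

r  = inradius(A, B, C)
r1 = inradius(A, L, M)
r2 = inradius(B, M, K)
r3 = inradius(C, K, L)

# Some pair of the three inradii sums to at least r.
assert max(r1 + r2, r2 + r3, r3 + r1) >= r - 1e-12
```

In this instance $r \approx 1.052$ while the best pair sums to roughly $1.075$, so the inequality holds with a visible margin.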
G3. Let $\Omega$ and $O$ be the circumcircle and the circumcentre of an acute-angled triangle $A B C$ with $A B>B C$. The angle bisector of $\angle A B C$ intersects $\Omega$ at $M \neq B$. Let $\Gamma$ be the circle with diameter $B M$. The angle bisectors of $\angle A O B$ and $\angle B O C$ intersect $\Gamma$ at points $P$ and $Q$, respectively. The point $R$ is chosen on the line $P Q$ so that $B R=M R$. Prove that $B R \| A C$. (Here we always assume that an angle bisector is a ray.) (Russia) Solution. Let $K$ be the midpoint of $B M$, i.e., the centre of $\Gamma$. Notice that $A B \neq B C$ implies $K \neq O$. Clearly, the lines $O M$ and $O K$ are the perpendicular bisectors of $A C$ and $B M$, respectively. Therefore, $R$ is the intersection point of $P Q$ and $O K$.
Let $N$ be the second point of intersection of $\Gamma$ with the line $O M$. Since $B M$ is a diameter of $\Gamma$, the lines $B N$ and $A C$ are both perpendicular to $O M$. Hence $B N \| A C$, and it suffices to prove that $B N$ passes through $R$. Our plan for doing this is to interpret the lines $B N, O K$, and $P Q$ as the radical axes of three appropriate circles.
Let $\omega$ be the circle with diameter $B O$. Since $\angle B N O=\angle B K O=90^{\circ}$, the points $N$ and $K$ lie on $\omega$.
Next we show that the points $O, K, P$, and $Q$ are concyclic. To this end, let $D$ and $E$ be the midpoints of $B C$ and $A B$, respectively. Clearly, $D$ and $E$ lie on the rays $O Q$ and $O P$, respectively. By our assumptions about the triangle $A B C$, the points $B, E, O, K$, and $D$ lie in this order on $\omega$. It follows that $\angle E O R=\angle E B K=\angle K B D=\angle K O D$, so the line $K O$ externally bisects the angle $P O Q$. Since the point $K$ is the centre of $\Gamma$, it also lies on the perpendicular bisector of $P Q$. So $K$ coincides with the midpoint of the arc $P O Q$ of the circumcircle $\gamma$ of triangle $P O Q$.
Thus the lines $O K, B N$, and $P Q$ are pairwise radical axes of the circles $\omega, \gamma$, and $\Gamma$. Hence they are concurrent at $R$, as required.

G4. Consider a fixed circle $\Gamma$ with three fixed points $A, B$, and $C$ on it. Also, let us fix a real number $\lambda \in(0,1)$. For a variable point $P \notin\{A, B, C\}$ on $\Gamma$, let $M$ be the point on the segment $C P$ such that $C M=\lambda \cdot C P$. Let $Q$ be the second point of intersection of the circumcircles of the triangles $A M P$ and $B M C$. Prove that as $P$ varies, the point $Q$ lies on a fixed circle. (United Kingdom) Solution 1. Throughout the solution, we denote by $\Varangle(a, b)$ the directed angle between the lines $a$ and $b$.
Let $D$ be the point on the segment $A B$ such that $B D=\lambda \cdot B A$. We will show that either $Q=D$, or $\Varangle(D Q, Q B)=\Varangle(A B, B C)$; this would mean that the point $Q$ varies over the constant circle through $D$ tangent to $B C$ at $B$, as required.
Denote the circumcircles of the triangles $A M P$ and $B M C$ by $\omega_{A}$ and $\omega_{B}$, respectively. The lines $A P, B C$, and $M Q$ are pairwise radical axes of the circles $\Gamma, \omega_{A}$, and $\omega_{B}$, thus either they are parallel, or they share a common point $X$.
Assume that these lines are parallel (see Figure 1). Then the segments $A P, Q M$, and $B C$ have a common perpendicular bisector; the reflection in this bisector maps the segment $C P$ to $B A$, and maps $M$ to $Q$. Therefore, in this case $Q$ lies on $A B$, and $B Q / A B=C M / C P=$ $B D / A B$; so we have $Q=D$.

Figure 2
Now assume that the lines $A P, Q M$, and $B C$ are concurrent at some point $X$ (see Figure 2). Notice that the points $A, B, Q$, and $X$ lie on a common circle $\Omega$ by Miquel's theorem applied to the triangle $X P C$. Let us denote by $Y$ the symmetric image of $X$ about the perpendicular bisector of $A B$. Clearly, $Y$ lies on $\Omega$, and the triangles $Y A B$ and $X B A$ are congruent. Moreover, the triangle $X P C$ is similar to the triangle $X B A$, so it is also similar to the triangle $Y A B$.
Next, the points $D$ and $M$ correspond to each other in similar triangles $Y A B$ and $X P C$, since $B D / B A=C M / C P=\lambda$. Moreover, the triangles $Y A B$ and $X P C$ are equi-oriented, so $\Varangle(M X, X P)=\Varangle(D Y, Y A)$. On the other hand, since the points $A, Q, X$, and $Y$ lie on $\Omega$, we have $\Varangle(Q Y, Y A)=\Varangle(M X, X P)$. Therefore, $\Varangle(Q Y, Y A)=\Varangle(D Y, Y A)$, so the points $Y, D$, and $Q$ are collinear.
Finally, we have $\Varangle(D Q, Q B)=\Varangle(Y Q, Q B)=\Varangle(Y A, A B)=\Varangle(A B, B X)=\Varangle(A B, B C)$, as desired.
Comment. In the original proposal, $\lambda$ was supposed to be an arbitrary real number distinct from 0 and 1, and the point $M$ was defined by $\overrightarrow{C M}=\lambda \cdot \overrightarrow{C P}$. The Problem Selection Committee decided to add the restriction $\lambda \in(0,1)$ in order to avoid a large case distinction.
Solution 2. As in the previous solution, we introduce the radical centre $X=A P \cap B C \cap M Q$ of the circles $\omega_{A}, \omega_{B}$, and $\Gamma$. Next, we also notice that the points $A, Q, B$, and $X$ lie on a common circle $\Omega$.
If the point $P$ lies on the arc $B A C$ of $\Gamma$, then the point $X$ is outside $\Gamma$, thus the point $Q$ belongs to the ray $X M$, and therefore the points $P, A$, and $Q$ lie on the same side of $B C$. Otherwise, if $P$ lies on the arc $B C$ not containing $A$, then $X$ lies inside $\Gamma$, so $M$ and $Q$ lie on different sides of $B C$; thus again $Q$ and $A$ lie on the same side of $B C$. So, in each case the points $Q$ and $A$ lie on the same side of $B C$.

Figure 3
Now we prove that the ratio
is constant. Since the points $A, Q, B$, and $X$ are concyclic, we have
Next, since the points $B, Q, M$, and $C$ are concyclic, the triangles $X B Q$ and $X M C$ are similar, so
Analogously, the triangles $X C P$ and $X A B$ are also similar, so
Therefore, we obtain
so this ratio is indeed constant. Thus the circle passing through $Q$ and tangent to $B C$ at $B$ is also constant, and $Q$ varies over this fixed circle.
Comment. It is not hard to guess that the desired circle should be tangent to $B C$ at $B$. Indeed, the second paragraph of this solution shows that this circle lies on one side of $B C$; on the other hand, in the limit case $P=B$, the point $Q$ also coincides with $B$.
Solution 3. Let us perform an inversion centred at $C$. Denote by $X^{\prime}$ the image of a point $X$ under this inversion.
The circle $\Gamma$ maps to the line $\Gamma^{\prime}$ passing through the constant points $A^{\prime}$ and $B^{\prime}$, and containing the variable point $P^{\prime}$. By the problem condition, the point $M$ varies over the circle $\gamma$ which is the homothetic image of $\Gamma$ with centre $C$ and coefficient $\lambda$. Thus $M^{\prime}$ varies over the constant line $\gamma^{\prime} \| A^{\prime} B^{\prime}$ which is the homothetic image of $A^{\prime} B^{\prime}$ with centre $C$ and coefficient $1 / \lambda$, and $M^{\prime}=\gamma^{\prime} \cap C P^{\prime}$. Next, the circumcircles $\omega_{A}$ and $\omega_{B}$ of the triangles $A M P$ and $B M C$ map to the circumcircle $\omega_{A}^{\prime}$ of the triangle $A^{\prime} M^{\prime} P^{\prime}$ and to the line $B^{\prime} M^{\prime}$, respectively; the point $Q$ thus maps to the second point of intersection of $B^{\prime} M^{\prime}$ with $\omega_{A}^{\prime}$ (see Figure 4).

Figure 4
Let $J$ be the (constant) common point of the lines $\gamma^{\prime}$ and $C A^{\prime}$, and let $\ell$ be the (constant) line through $J$ parallel to $C B^{\prime}$. Let $V$ be the common point of the lines $\ell$ and $B^{\prime} M^{\prime}$. Applying Pappus' theorem to the triples $\left(C, J, A^{\prime}\right)$ and $\left(V, B^{\prime}, M^{\prime}\right)$ we get that the points $C B^{\prime} \cap J V$, $J M^{\prime} \cap A^{\prime} B^{\prime}$, and $C M^{\prime} \cap A^{\prime} V$ are collinear. The first two of these points are ideal, hence so is the third, which means that $C M^{\prime} | A^{\prime} V$.
Now we have $\Varangle\left(Q^{\prime} A^{\prime}, A^{\prime} P^{\prime}\right)=\Varangle\left(Q^{\prime} M^{\prime}, M^{\prime} P^{\prime}\right)=\Varangle\left(V M^{\prime}, A^{\prime} V\right)$, which means that the triangles $B^{\prime} Q^{\prime} A^{\prime}$ and $B^{\prime} A^{\prime} V$ are similar, and $\left(B^{\prime} A^{\prime}\right)^{2}=B^{\prime} Q^{\prime} \cdot B^{\prime} V$. Thus $Q^{\prime}$ is the image of $V$ under the second (fixed) inversion with centre $B^{\prime}$ and radius $B^{\prime} A^{\prime}$. Since $V$ varies over the constant line $\ell, Q^{\prime}$ varies over some constant circle $\Theta$. Thus, applying the first inversion back we get that $Q$ also varies over some fixed circle.
One should notice that this last circle is not a line; otherwise $\Theta$ would contain $C$, and thus $\ell$ would contain the image of $C$ under the second inversion. This is impossible, since $C B^{\prime} \| \ell$.
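The conclusion of Solution 1, namely that $Q$ stays on the circle through $D$ tangent to $B C$ at $B$, can also be tested numerically. Everything below (the unit circle as $\Gamma$, the positions of $A$, $B$, $C$, the value $\lambda=0.4$, and the two sample positions of $P$) is an arbitrary choice made for this sketch:

```python
# Numerical check of G4 (a sketch, not a proof): for two sample positions of P,
# the second intersection Q of the circumcircles of AMP and BMC lies on the
# fixed circle through D that is tangent to BC at B.
import math

def on_unit(deg):
    return (math.cos(math.radians(deg)), math.sin(math.radians(deg)))

A, B, C = on_unit(110), on_unit(215), on_unit(325)   # fixed points on Γ
lam = 0.4
D = (B[0] + lam * (A[0] - B[0]), B[1] + lam * (A[1] - B[1]))  # BD = λ·BA

def dist2(u, v):
    return (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2

def circumcenter(p, q, r):
    d = 2 * (p[0] * (q[1] - r[1]) + q[0] * (r[1] - p[1]) + r[0] * (p[1] - q[1]))
    pp, qq, rr = dist2(p, (0, 0)), dist2(q, (0, 0)), dist2(r, (0, 0))
    return ((pp * (q[1] - r[1]) + qq * (r[1] - p[1]) + rr * (p[1] - q[1])) / d,
            (pp * (r[0] - q[0]) + qq * (p[0] - r[0]) + rr * (q[0] - p[0])) / d)

def reflect(p, u, v):
    """Reflection of p in the line through u and v."""
    dx, dy = v[0] - u[0], v[1] - u[1]
    t = ((p[0] - u[0]) * dx + (p[1] - u[1]) * dy) / (dx * dx + dy * dy)
    f = (u[0] + t * dx, u[1] + t * dy)
    return (2 * f[0] - p[0], 2 * f[1] - p[1])

# Centre Z of the circle through D tangent to BC at B: Z = B + t·n with n ⊥ BC
# and |Z - D| = |Z - B|.
n = (-(C[1] - B[1]), C[0] - B[0])
t = -dist2(B, D) / (2 * (n[0] * (B[0] - D[0]) + n[1] * (B[1] - D[1])))
Z = (B[0] + t * n[0], B[1] + t * n[1])

for deg in (30.0, 265.0):                 # two sample positions of P on Γ
    P = on_unit(deg)
    M = (C[0] + lam * (P[0] - C[0]), C[1] + lam * (P[1] - C[1]))  # CM = λ·CP
    O1 = circumcenter(A, M, P)            # centre of ω_A
    O2 = circumcenter(B, M, C)            # centre of ω_B
    Q = reflect(M, O1, O2)                # second intersection of ω_A and ω_B
    assert abs(math.dist(Z, Q) - math.dist(Z, B)) < 1e-9
```

The second intersection point of the two circumcircles is obtained by reflecting their known common point $M$ in the line joining the two centres, which avoids solving a quadratic.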
G5. Let $A B C D$ be a convex quadrilateral with $\angle B=\angle D=90^{\circ}$. Point $H$ is the foot of the perpendicular from $A$ to $B D$. The points $S$ and $T$ are chosen on the sides $A B$ and $A D$, respectively, in such a way that $H$ lies inside triangle $S C T$ and
Prove that the circumcircle of triangle $S H T$ is tangent to the line $B D$. (Iran) Solution. Let the line passing through $C$ and perpendicular to the line $S C$ intersect the line $A B$ at $Q$ (see Figure 1). Then
which implies that the points $C, H, S$, and $Q$ lie on a common circle. Moreover, since $S Q$ is a diameter of this circle, we infer that the circumcentre $K$ of triangle $S H C$ lies on the line $A B$. Similarly, we prove that the circumcentre $L$ of triangle $C H T$ lies on the line $A D$.

Figure 1
In order to prove that the circumcircle of triangle $S H T$ is tangent to $B D$, it suffices to show that the perpendicular bisectors of $H S$ and $H T$ intersect on the line $A H$. However, these two perpendicular bisectors coincide with the angle bisectors of angles $A K H$ and $A L H$. Therefore, in order to complete the solution, it is enough (by the bisector theorem) to show that
We present two proofs of this equality.
First proof. Let the lines $K L$ and $H C$ intersect at $M$ (see Figure 2). Since $K H=K C$ and $L H=L C$, the points $H$ and $C$ are symmetric to each other with respect to the line $K L$. Therefore $M$ is the midpoint of $H C$. Denote by $O$ the circumcentre of quadrilateral $A B C D$. Then $O$ is the midpoint of $A C$. Therefore we have $O M \| A H$ and hence $O M \perp B D$. This together with the equality $O B=O D$ implies that $O M$ is the perpendicular bisector of $B D$ and therefore $B M=D M$.
Since $C M \perp K L$, the points $B, C, M$, and $K$ lie on a common circle with diameter $K C$. Similarly, the points $L, C, M$, and $D$ lie on a circle with diameter $L C$. Thus, using the sine law, we obtain
which finishes the proof of (1).

Figure 3
Second proof. If the points $A, H$, and $C$ are collinear, then $A K=A L$ and $K H=L H$, so the equality (1) follows. Assume therefore that the points $A, H$, and $C$ do not lie on a line and consider the circle $\omega$ passing through them (see Figure 3). Since the quadrilateral $A B C D$ is cyclic,
Let $N \neq A$ be the intersection point of the circle $\omega$ and the angle bisector of $\angle C A H$. Then $A N$ is also the angle bisector of $\angle B A D$. Since $H$ and $C$ are symmetric to each other with respect to the line $K L$ and $H N=N C$, it follows that both $N$ and the centre of $\omega$ lie on the line $K L$. This means that the circle $\omega$ is an Apollonius circle of the points $K$ and $L$. This immediately yields (1).
Comment. Either proof can be used to obtain the following generalised result: Let $A B C D$ be a convex quadrilateral and let $H$ be a point in its interior with $\angle B A C=\angle D A H$. The points $S$ and $T$ are chosen on the sides $A B$ and $A D$, respectively, in such a way that $H$ lies inside triangle $S C T$ and
Then the circumcentre of triangle $S H T$ lies on the line $A H$ (and moreover the circumcentre of triangle $S C T$ lies on $A C$).
G6. Let $A B C$ be a fixed acute-angled triangle. Consider some points $E$ and $F$ lying on the sides $A C$ and $A B$, respectively, and let $M$ be the midpoint of $E F$. Let the perpendicular bisector of $E F$ intersect the line $B C$ at $K$, and let the perpendicular bisector of $M K$ intersect the lines $A C$ and $A B$ at $S$ and $T$, respectively. We call the pair $(E, F)$ interesting, if the quadrilateral $K S A T$ is cyclic.
Suppose that the pairs $\left(E_{1}, F_{1}\right)$ and $\left(E_{2}, F_{2}\right)$ are interesting. Prove that $\frac{E_{1} E_{2}}{A B}=\frac{F_{1} F_{2}}{A C}$.
(Iran) Solution 1. For any interesting pair $(E, F)$, we will say that the corresponding triangle $E F K$ is also interesting.
Let $E F K$ be an interesting triangle. Firstly, we prove that $\angle K E F=\angle K F E=\angle A$, which also means that the circumcircle $\omega_{1}$ of the triangle $A E F$ is tangent to the lines $K E$ and $K F$.
Denote by $\omega$ the circle passing through the points $K, S, A$, and $T$. Let the line $A M$ intersect the line $S T$ and the circle $\omega$ (for the second time) at $N$ and $L$, respectively (see Figure 1).
Since $E F \| T S$ and $M$ is the midpoint of $E F$, $N$ is the midpoint of $S T$. Moreover, since $K$ and $M$ are symmetric to each other with respect to the line $S T$, we have $\angle K N S=\angle M N S=\angle L N T$. Thus the points $K$ and $L$ are symmetric to each other with respect to the perpendicular bisector of $S T$. Therefore $K L \| S T$.
Let $G$ be the point symmetric to $K$ with respect to $N$. Then $G$ lies on the line $E F$, and we may assume that it lies on the ray $M F$. One has
(if $K=L$, then the angle $K L A$ is understood to be the angle between $A L$ and the tangent to $\omega$ at $L$ ). This means that the points $K, G, E$, and $S$ are concyclic. Now, since $K S G T$ is a parallelogram, we obtain $\angle K E F=\angle K S G=180^{\circ}-\angle T K S=\angle A$. Since $K E=K F$, we also have $\angle K F E=\angle K E F=\angle A$.
After having proved this fact, one may finish the solution by different methods.

Figure 2
First method. We have just proved that all interesting triangles are similar to each other. This allows us to use the following lemma.
Lemma. Let $A B C$ be an arbitrary triangle. Choose two points $E_{1}$ and $E_{2}$ on the side $A C$, two points $F_{1}$ and $F_{2}$ on the side $A B$, and two points $K_{1}$ and $K_{2}$ on the side $B C$, in a way that the triangles $E_{1} F_{1} K_{1}$ and $E_{2} F_{2} K_{2}$ are similar. Then the six circumcircles of the triangles $A E_{i} F_{i}$, $B F_{i} K_{i}$, and $C E_{i} K_{i}(i=1,2)$ meet at a common point $Z$. Moreover, $Z$ is the centre of the spiral similarity that takes the triangle $E_{1} F_{1} K_{1}$ to the triangle $E_{2} F_{2} K_{2}$. Proof. Firstly, notice that for each $i=1,2$, the circumcircles of the triangles $A E_{i} F_{i}, B F_{i} K_{i}$, and $C K_{i} E_{i}$ have a common point $Z_{i}$ by Miquel's theorem. Moreover, we have $\Varangle\left(Z_{i} F_{i}, Z_{i} E_{i}\right)=\Varangle(A B, C A), \quad \Varangle\left(Z_{i} K_{i}, Z_{i} F_{i}\right)=\Varangle(B C, A B), \quad \Varangle\left(Z_{i} E_{i}, Z_{i} K_{i}\right)=\Varangle(C A, B C)$. This yields that the points $Z_{1}$ and $Z_{2}$ correspond to each other in similar triangles $E_{1} F_{1} K_{1}$ and $E_{2} F_{2} K_{2}$. Thus, if they coincide, then this common point is indeed the desired centre of a spiral similarity.
Finally, in order to show that $Z_{1}=Z_{2}$, one may notice that $\Varangle\left(A B, A Z_{1}\right)=\Varangle\left(E_{1} F_{1}, E_{1} Z_{1}\right)=$ $\Varangle\left(E_{2} F_{2}, E_{2} Z_{2}\right)=\Varangle\left(A B, A Z_{2}\right)$ (see Figure 2). Similarly, one has $\Varangle\left(B C, B Z_{1}\right)=\Varangle\left(B C, B Z_{2}\right)$ and $\Varangle\left(C A, C Z_{1}\right)=\Varangle\left(C A, C Z_{2}\right)$. This yields $Z_{1}=Z_{2}$.
Now, let $P$ and $Q$ be the feet of the perpendiculars from $B$ and $C$ onto $A C$ and $A B$, respectively, and let $R$ be the midpoint of $B C$ (see Figure 3). Then $R$ is the circumcentre of the cyclic quadrilateral $B C P Q$. Thus we obtain $\angle A P Q=\angle B$ and $\angle R P C=\angle C$, which yields $\angle Q P R=\angle A$. Similarly, we show that $\angle P Q R=\angle A$. Thus, all interesting triangles are similar to the triangle $P Q R$.

Figure 4
Denote now by $Z$ the common point of the circumcircles of $A P Q, B Q R$, and $C P R$. Let $E_{1} F_{1} K_{1}$ and $E_{2} F_{2} K_{2}$ be two interesting triangles. By the lemma, $Z$ is the centre of any spiral similarity taking one of the triangles $E_{1} F_{1} K_{1}, E_{2} F_{2} K_{2}$, and $P Q R$ to some other of them. Therefore the triangles $Z E_{1} E_{2}$ and $Z F_{1} F_{2}$ are similar, as well as the triangles $Z E_{1} F_{1}$ and $Z P Q$. Hence $\frac{E_{1} E_{2}}{F_{1} F_{2}}=\frac{Z E_{1}}{Z F_{1}}=\frac{Z P}{Z Q}$.
Moreover, the equalities $\angle A Z Q=\angle A P Q=\angle A B C=180^{\circ}-\angle Q Z R$ show that the point $Z$ lies on the line $A R$ (see Figure 4). Therefore the triangles $A Z P$ and $A C R$ are similar, as well as the triangles $A Z Q$ and $A B R$. This yields $\frac{Z P}{Z Q}=\frac{C R}{A C} \cdot \frac{A B}{B R}=\frac{A B}{A C}$, since $B R=C R$,
which completes the solution.
Second method. Now we will start from the fact that $\omega_{1}$ is tangent to the lines $K E$ and $K F$ (see Figure 5). We prove that if $(E, F)$ is an interesting pair, then $A B \cdot A F+A C \cdot A E=2 A B \cdot A C \cos \angle A$. (1)
Let $Y$ be the intersection point of the segments $B E$ and $C F$. The points $B$, $K$, and $C$ are collinear, hence applying Pascal's theorem to the degenerate hexagon $A F F Y E E$, we infer that $Y$ lies on the circle $\omega_{1}$.
Denote by $Z$ the second intersection point of the circumcircle of the triangle $B F Y$ with the line $B C$ (see Figure 6). By Miquel's theorem, the points $C, Z, Y$, and $E$ are concyclic. Therefore we obtain $B F \cdot B A=B Y \cdot B E=B Z \cdot B C$ and $C E \cdot C A=C Y \cdot C F=C Z \cdot C B$, so $B F \cdot B A+C E \cdot C A=B C \cdot(B Z+Z C)=B C^{2}$.
On the other hand, $B C^{2}=A B^{2}+A C^{2}-2 A B \cdot A C \cos \angle A$, by the cosine law. Hence $(A B-A F) \cdot A B+(A C-A E) \cdot A C=A B^{2}+A C^{2}-2 A B \cdot A C \cos \angle A$,
which simplifies to the desired equality (1). Let now $\left(E_{1}, F_{1}\right)$ and $\left(E_{2}, F_{2}\right)$ be two interesting pairs of points. Then we get $A B \cdot A F_{1}+A C \cdot A E_{1}=2 A B \cdot A C \cos \angle A=A B \cdot A F_{2}+A C \cdot A E_{2}$,
which gives the desired result.

Figure 6
Third method. Again, we make use of the fact that all interesting triangles are similar (and equi-oriented). Let us put the picture onto a complex plane such that $A$ is at the origin, and identify each point with the corresponding complex number.
Let $E F K$ be any interesting triangle. The equalities $\angle K E F=\angle K F E=\angle A$ yield that the ratio $\nu=\frac{K-E}{F-E}$ is the same for all interesting triangles. This in turn means that the numbers $E$, $F$, and $K$ satisfy the linear equation $K=\mu E+\nu F$, where $\mu=1-\nu$. (2)
Now let us choose the points $X$ and $Y$ on the rays $A B$ and $A C$, respectively, so that $\angle C X A=\angle A Y B=\angle A=\angle K E F$ (see Figure 7). Then each of the triangles $A X C$ and $Y A B$ is similar to any interesting triangle, which also means that
Moreover, one has $X / Y=\overline{C / B}$. Since the points $E, F$, and $K$ lie on $A C, A B$, and $B C$, respectively, one gets
for some real $\rho, \sigma$, and $\lambda$. In view of (3), the equation (2) now reads $\lambda B+(1-\lambda) C=K=$ $\mu E+\nu F=\rho B+\sigma C$, or
Since the nonzero complex numbers $B$ and $C$ have different arguments, the coefficients in the brackets vanish, so $\rho=\lambda$ and $\sigma=1-\lambda$. Therefore,
Now, if $\left(E_{1}, F_{1}\right)$ and $\left(E_{2}, F_{2}\right)$ are two distinct interesting pairs, one may apply (4) to both pairs. Subtracting, we get
Taking absolute values provides the required result.

Figure 7
Comment 1. One may notice that the triangle $P Q R$ is also interesting.
Comment 2. In order to prove that $\angle K E F=\angle K F E=\angle A$, one may also use the following well-known fact:
Let $A E F$ be a triangle with $A E \neq A F$, and let $K$ be the common point of the symmedian taken from $A$ and the perpendicular bisector of $E F$. Then the lines $K E$ and $K F$ are tangent to the circumcircle $\omega_{1}$ of the triangle $A E F$.
In this case, however, one needs to deal with the case $A E=A F$ separately.
Solution 2. Let $(E, F)$ be an interesting pair. This time we prove that
As in Solution 1, we introduce the circle $\omega$ passing through the points $K, S$, $A$, and $T$, together with the points $N$ and $L$ at which the line $A M$ intersects the line $S T$ and the circle $\omega$ for the second time, respectively. Let moreover $O$ be the centre of $\omega$ (see Figures 8 and 9). As in Solution 1, we note that $N$ is the midpoint of $S T$ and show that $K L \| S T$, which implies $\angle F A M=\angle E A K$.

Figure 9
Suppose now that $K \neq L$ (see Figure 8). Then $K L \parallel S T$, and consequently the lines $K M$ and $K L$ are perpendicular. This implies that the lines $L O$ and $K M$ meet at a point $X$ lying on the circle $\omega$. Since the lines $O N$ and $X M$ are both perpendicular to the line $S T$, they are parallel to each other, and hence $\angle L O N=\angle L X K=\angle M A K$. On the other hand, $\angle O L N=\angle M K A$, so we infer that the triangles $N O L$ and $M A K$ are similar. This yields
If, on the other hand, $K=L$, then the points $A, M, N$, and $K$ lie on a common line, and this line is the perpendicular bisector of $S T$ (see Figure 9). This implies that $A K$ is a diameter of $\omega$, which yields $A M=2 O K-2 N K=2 O N$. So also in this case we obtain
Thus (5) is proved.
Let $P$ and $Q$ be the feet of the perpendiculars from $B$ and $C$ onto $A C$ and $A B$, respectively (see Figure 10). We claim that the point $M$ lies on the line $P Q$. Consider now the composition of the dilatation with factor $\cos \angle A$ and centre $A$, and the reflection with respect to the angle bisector of $\angle B A C$. This transformation is a similarity that takes $B, C$, and $K$ to $P, Q$, and $M$, respectively. Since $K$ lies on the line $B C$, the point $M$ lies on the line $P Q$.

Figure 10
Suppose that $E \neq P$. Then also $F \neq Q$, and by Menelaus' theorem, we obtain
Using the similarity of the triangles $A P Q$ and $A B C$, we infer that
The last equality holds obviously also in case $E=P$, because then $F=Q$. Moreover, since the line $P Q$ intersects the segment $E F$, we infer that the point $E$ lies on the segment $A P$ if and only if the point $F$ lies outside of the segment $A Q$.
Let now $\left(E_{1}, F_{1}\right)$ and $\left(E_{2}, F_{2}\right)$ be two interesting pairs. Then we obtain
If $P$ lies between the points $E_{1}$ and $E_{2}$, we add the equalities above, otherwise we subtract them. In any case we obtain
which completes the solution.
G7. Let $A B C$ be a triangle with circumcircle $\Omega$ and incentre $I$. Let the line passing through $I$ and perpendicular to $C I$ intersect the segment $B C$ and the arc $B C$ (not containing $A$) of $\Omega$ at points $U$ and $V$, respectively. Let the line passing through $U$ and parallel to $A I$ intersect $A V$ at $X$, and let the line passing through $V$ and parallel to $A I$ intersect $A B$ at $Y$. Let $W$ and $Z$ be the midpoints of $A X$ and $B C$, respectively. Prove that if the points $I, X$, and $Y$ are collinear, then the points $I, W$, and $Z$ are also collinear.
Solution 1. We start with some general observations. Set $\alpha=\angle A / 2, \beta=\angle B / 2, \gamma=\angle C / 2$. Then obviously $\alpha+\beta+\gamma=90^{\circ}$. Since $\angle U I C=90^{\circ}$, we obtain $\angle I U C=\alpha+\beta$. Therefore $\angle B I V=\angle I U C-\angle I B C=\alpha=\angle B A I=\angle B Y V$, which implies that the points $B, Y, I$, and $V$ lie on a common circle (see Figure 1).
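The first observation above holds for any triangle, before the collinearity assumption is made. As a quick numerical sanity check (not part of the proof), the sketch below constructs $I$, $V$, and $Y$ for one generic triangle in coordinates and verifies that $B$, $Y$, $I$, $V$ lie on a common circle; all helper names are ours.

```python
# Numeric check: B, Y, I, V are concyclic for a generic triangle ABC.
import math

def sub(p, q): return (p[0] - q[0], p[1] - q[1])
def dot(p, q): return p[0] * q[0] + p[1] * q[1]
def cross(p, q): return p[0] * q[1] - p[1] * q[0]

def solve2(a11, a12, a21, a22, b1, b2):
    # solve the 2x2 linear system (a11 a12; a21 a22)(x, y) = (b1, b2)
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

A, B, C = (0.3, 1.1), (0.0, 0.0), (1.4, 0.0)
a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)
I = ((a * A[0] + b * B[0] + c * C[0]) / (a + b + c),
     (a * A[1] + b * B[1] + c * C[1]) / (a + b + c))      # incentre

# circumcentre: intersect the perpendicular bisectors of AB and AC
O = solve2(B[0] - A[0], B[1] - A[1], C[0] - A[0], C[1] - A[1],
           (dot(B, B) - dot(A, A)) / 2, (dot(C, C) - dot(A, A)) / 2)
R2 = dot(sub(A, O), sub(A, O))

# line through I perpendicular to CI, intersected with the circumcircle
dx, dy = sub(I, C)
d = (-dy, dx)                                             # CI rotated by 90 degrees
p = sub(I, O)
qa, qb, qc = dot(d, d), 2 * dot(p, d), dot(p, p) - R2
r = math.sqrt(qb * qb - 4 * qa * qc)
cands = [(I[0] + t * d[0], I[1] + t * d[1])
         for t in ((-qb - r) / (2 * qa), (-qb + r) / (2 * qa))]
# V lies on the arc BC not containing A, i.e. on the far side of line BC
V = next(P for P in cands
         if cross(sub(C, B), sub(P, B)) * cross(sub(C, B), sub(A, B)) < 0)

# Y: line through V parallel to AI, intersected with line AB
s, _ = solve2(I[0] - A[0], A[0] - B[0], I[1] - A[1], A[1] - B[1],
              A[0] - V[0], A[1] - V[1])
Y = (V[0] + s * (I[0] - A[0]), V[1] + s * (I[1] - A[1]))

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def concyclic_residual(*pts):
    # the 4x4 determinant with rows (x, y, x^2 + y^2, 1) vanishes (up to
    # rounding) exactly when the four points lie on one circle
    m = [[q[0], q[1], dot(q, q)] for q in pts]
    res, sign = 0.0, 1.0
    for i in range(4):                 # expand along the all-ones column
        res += sign * det3([m[j] for j in range(4) if j != i])
        sign = -sign
    return res

residual = abs(concyclic_residual(B, Y, I, V))
print(residual)
```

The residual comes out at floating-point noise level, consistent with the angle chase $\angle B I V=\alpha=\angle B Y V$.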
Assume now that the points $I, X$ and $Y$ are collinear. We prove that $\angle Y I A=90^{\circ}$. Let the line $X U$ intersect $A B$ at $N$. Since the lines $A I, U X$, and $V Y$ are parallel, we get
implying $N X=X U$. Moreover, $\angle B I U=\alpha=\angle B N U$. This implies that the quadrilateral $B U I N$ is cyclic, and since $B I$ is the angle bisector of $\angle U B N$, we infer that $N I=U I$. Thus in the isosceles triangle $N I U$, the point $X$ is the midpoint of the base $N U$. This gives $\angle I X N=90^{\circ}$, i.e., $\angle Y I A=90^{\circ}$.

Figure 1
Let $S$ be the midpoint of the segment $V C$. Let moreover $T$ be the intersection point of the lines $A X$ and $S I$, and set $x=\angle B A V=\angle B C V$. Since $\angle C I A=90^{\circ}+\beta$ and $S I=S C$, we obtain
which implies that $T I=T A$. Therefore, since $\angle X I A=90^{\circ}$, the point $T$ is the midpoint of $A X$, i.e., $T=W$.
To complete our solution, it remains to show that the intersection point of the lines $I S$ and $B C$ coincides with the midpoint of the segment $B C$. But since $S$ is the midpoint of the segment $V C$, it suffices to show that the lines $B V$ and $I S$ are parallel.
Since the quadrilateral $B Y I V$ is cyclic, $\angle V B I=\angle V Y I=\angle Y I A=90^{\circ}$. This implies that $B V$ is the external angle bisector of the angle $A B C$, which yields $\angle V A C=\angle V C A$. Therefore $2 \alpha-x=2 \gamma+x$, which gives $\alpha=\gamma+x$. Hence $\angle S C I=\alpha$, so $\angle V S I=2 \alpha$.
On the other hand, $\angle B V C=180^{\circ}-\angle B A C=180^{\circ}-2 \alpha$, which implies that the lines $B V$ and $I S$ are parallel. This completes the solution.
Solution 2. As in Solution 1, we first prove that the points $B, Y, I, V$ lie on a common circle and $\angle Y I A=90^{\circ}$. The remaining part of the solution is based on the following lemma, which holds true for any triangle $A B C$, not necessarily with the property that $I, X, Y$ are collinear.
Lemma. Let $A B C$ be a triangle inscribed in a circle $\Gamma$ and let $I$ be its incentre. Assume that the line passing through $I$ and perpendicular to the line $A I$ intersects the side $A B$ at the point $Y$. Let the circumcircle of the triangle $B Y I$ intersect the circle $\Gamma$ for the second time at $V$, and let the excircle of the triangle $A B C$ opposite to the vertex $A$ be tangent to the side $B C$ at $E$. Then
Proof. Let $\rho$ be the composition of the inversion with centre $A$ and radius $\sqrt{A B \cdot A C}$, and the symmetry with respect to $A I$. Clearly, $\rho$ interchanges $B$ and $C$.
Let $J$ be the excentre of the triangle $A B C$ opposite to $A$ (see Figure 2). Then we have $\angle J A C=\angle B A I$ and $\angle J C A=90^{\circ}+\gamma=\angle B I A$, so the triangles $A C J$ and $A I B$ are similar, and therefore $A B \cdot A C=A I \cdot A J$. This means that $\rho$ interchanges $I$ and $J$. Moreover, since $Y$ lies on $A B$ and $\angle A I Y=90^{\circ}$, the point $Y^{\prime}=\rho(Y)$ lies on $A C$, and $\angle J Y^{\prime} A=90^{\circ}$. Thus $\rho$ maps the circumcircle $\gamma$ of the triangle $B Y I$ to a circle $\gamma^{\prime}$ with diameter $J C$.
Finally, since $V$ lies on both $\Gamma$ and $\gamma$, the point $V^{\prime}=\rho(V)$ lies on the line $\rho(\Gamma)=A B$ as well as on $\gamma^{\prime}$, which in turn means that $V^{\prime}=E$. This implies the desired result.

Figure 3
Now we turn to the solution of the problem. Assume that the incircle $\omega_{1}$ of the triangle $A B C$ is tangent to $B C$ at $D$, and let the excircle $\omega_{2}$ of the triangle $A B C$ opposite to the vertex $A$ touch the side $B C$ at $E$ (see Figure 3). The homothety with centre $A$ that takes $\omega_{2}$ to $\omega_{1}$ takes the point $E$ to some point $F$, and the tangent to $\omega_{1}$ at $F$ is parallel to $B C$. Therefore $D F$ is a diameter of $\omega_{1}$. Moreover, $Z$ is the midpoint of $D E$. This implies that the lines $I Z$ and $F E$ are parallel.
Let $K=Y I \cap A E$. Since $\angle Y I A=90^{\circ}$, the lemma yields that $I$ is the midpoint of $X K$. This implies that the segments $I W$ and $A K$ are parallel. Therefore, the points $W, I$ and $Z$ are collinear.
Comment 1. The properties $\angle Y I A=90^{\circ}$ and $V A=V C$ can be established in various ways. The main difficulty of the problem seems to be finding out how to use these properties in connection with the points $W$ and $Z$.
In Solution 2 this principal part is more or less covered by the lemma, for which we have presented a direct proof. On the other hand, this lemma appears to be a combination of two well-known facts; let us formulate them in terms of the lemma statement.
Let the line $I Y$ intersect $A C$ at $P$ (see Figure 4). The first fact states that the circumcircle $\omega$ of the triangle $V Y P$ is tangent to the segments $A B$ and $A C$, as well as to the circle $\Gamma$. The second fact states that for such a circle, the angles $B A V$ and $C A E$ are equal.
The awareness of this lemma may help a lot in solving this problem; so the Jury might also consider a variation of the proposed problem, for which the lemma does not seem to be useful; see Comment 3.

Comment 2. The proposed problem stated the equivalence: the point $I$ lies on the line $X Y$ if and only if $I$ lies on the line $W Z$. Here we sketch the proof of the "if" part (see Figure 5).
As in Solution 2, let $B C$ touch the circles $\omega_{1}$ and $\omega_{2}$ at $D$ and $E$, respectively. Since $I Z \parallel A E$ and $W$ lies on $I Z$, the line $D X$ is also parallel to $A E$. Set $P=U V \cap X D$ and $Q=U V \cap A E$. Then the triangles $X U P$ and $A I Q$ are similar. Moreover, the line $D X$ is symmetric to $A E$ with respect to $I$, so $I P=I Q$. Thus we obtain
So the pairs $I U$ and $P V$ are harmonic conjugates, and since $\angle U D I=90^{\circ}$, we get $\angle V D B=\angle B D X=$ $\angle B E A$. Therefore the point $V^{\prime}$ symmetric to $V$ with respect to the perpendicular bisector of $B C$ lies on the line $A E$. So we obtain $\angle B A V=\angle C A E$.
The rest can be obtained by simply reversing the arguments in Solution 2. The points $B, V, I$, and $Y$ are concyclic. The lemma implies that $\angle Y I A=90^{\circ}$. Moreover, the points $B, U, I$, and $N$, where $N=U X \cap A B$, lie on a common circle, so $I N=I U$. Since $I Y \perp U N$, the point $X^{\prime}=I Y \cap U N$ is the midpoint of $U N$. But in the trapezoid $A Y V I$, the line $X U$ is parallel to the sides $A I$ and $Y V$, so $N X=U X$. This yields $X=X^{\prime}$.
The reasoning presented in Solution 1 can also be reversed, but it requires a lot of technicalities. Therefore the Problem Selection Committee proposes to consider only the "only if" part of the original proposal, which is still challenging enough.
Comment 3. The Jury might also consider the following variation of the proposed problem.
Let $A B C$ be a triangle with circumcircle $\Omega$ and incentre $I$. Let the line through $I$ perpendicular to $C I$ intersect the segment $B C$ and the arc $B C$ (not containing $A$) of $\Omega$ at $U$ and $V$, respectively. Let the line through $U$ parallel to $A I$ intersect $A V$ at $X$. Prove that if the lines $X I$ and $A I$ are perpendicular, then the midpoint of the segment $A C$ lies on the line $X I$ (see Figure 6).

Figure 7
Since the solution contains the arguments used above, we only sketch it. Let $N=X U \cap A B$ (see Figure 7). Then $\angle B N U=\angle B A I=\angle B I U$, so the points $B, U, I$, and $N$ lie on a common circle. Therefore $I U=I N$, and since $I X \perp N U$, it follows that $N X=X U$. Now set $Y=X I \cap A B$. The equality $N X=X U$ implies that
and therefore $Y V \parallel A I$. Hence $\angle B Y V=\angle B A I=\angle B I V$, so the points $B, V, I, Y$ are concyclic. Next we have $I Y \perp Y V$, so $\angle I B V=90^{\circ}$. This implies that $B V$ is the external angle bisector of the angle $A B C$, which gives $\angle V A C=\angle V C A$. So in order to show that $M=X I \cap A C$ is the midpoint of $A C$, it suffices to prove that $\angle V M C=90^{\circ}$. But this follows immediately from the observation that the points $V, C, M$, and $I$ are concyclic, as $\angle M I V=\angle Y B V=180^{\circ}-\angle A C V$. The converse statement is also true, but its proof requires some technicalities as well.
Number Theory
N1. Let $n \geqslant 2$ be an integer, and let $A_{n}$ be the set
$$A_{n}=\left\{2^{n}-2^{k} \mid k \in \mathbb{Z},\ 0 \leqslant k<n\right\}.$$
Determine the largest positive integer that cannot be written as the sum of one or more (not necessarily distinct) elements of $A_{n}$.
(Serbia)
Answer. $(n-2) 2^{n}+1$.
Solution 1.
Part I. First we show that every integer greater than $(n-2) 2^{n}+1$ can be represented as such a sum. This is achieved by induction on $n$.
For $n=2$, the set $A_{n}$ consists of the two elements 2 and 3. Every positive integer $m$ except for 1 can be represented as the sum of elements of $A_{n}$ in this case: as $m=2+2+\cdots+2$ if $m$ is even, and as $m=3+2+2+\cdots+2$ if $m$ is odd.
Now consider some $n>2$, and take an integer $m>(n-2) 2^{n}+1$. If $m$ is even, then consider
By the induction hypothesis, there is a representation of the form
for some $k_{i}$ with $0 \leqslant k_{i}<n-1$. It follows that
giving us the desired representation as a sum of elements of $A_{n}$. If $m$ is odd, we consider
By the induction hypothesis, there is a representation of the form
for some $k_{i}$ with $0 \leqslant k_{i}<n-1$. It follows that
giving us the desired representation of $m$ once again.
Part II. It remains to show that there is no representation for $(n-2) 2^{n}+1$. Let $N$ be the smallest positive integer that satisfies $N \equiv 1\left(\bmod 2^{n}\right)$, and which can be represented as a sum of elements of $A_{n}$. Consider a representation of $N$, i.e.,
where $0 \leqslant k_{1}, k_{2}, \ldots, k_{r}<n$. Suppose first that two of the terms in the sum are the same, i.e., $k_{i}=k_{j}$ for some $i \neq j$. If $k_{i}=k_{j}=n-1$, then we can simply remove these two terms to get a representation for
as a sum of elements of $A_{n}$, which contradicts our choice of $N$. If $k_{i}=k_{j}=k<n-1$, replace the two terms by $2^{n}-2^{k+1}$, which is also an element of $A_{n}$, to get a representation for
This is a contradiction once again. Therefore, all $k_{i}$ have to be distinct, which means that
On the other hand, taking (1) modulo $2^{n}$, we find
Thus we must have $2^{k_{1}}+2^{k_{2}}+\cdots+2^{k_{r}}=2^{n}-1$, which is only possible if each element of $\{0,1, \ldots, n-1\}$ occurs as one of the $k_{i}$. This gives us
In particular, this means that $(n-2) 2^{n}+1$ cannot be represented as a sum of elements of $A_{n}$.
Solution 2. The fact that $m=(n-2) 2^{n}+1$ cannot be represented as a sum of elements of $A_{n}$ can also be shown in other ways. We prove the following statement by induction on $n$:
Claim. If $a, b$ are integers with $a \geqslant 0, b \geqslant 1$, and $a+b<n$, then $a 2^{n}+b$ cannot be written as a sum of elements of $A_{n}$.
Proof. The claim is clearly true for $n=2$ (since $a=0, b=1$ is the only possibility). For $n>2$, assume that there exist integers $a, b$ with $a \geqslant 0, b \geqslant 1$ and $a+b<n$ as well as elements $m_{1}, m_{2}, \ldots, m_{r}$ of $A_{n}$ such that
We can suppose, without loss of generality, that $m_{1} \geqslant m_{2} \geqslant \cdots \geqslant m_{r}$. Let $\ell$ be the largest index for which $m_{\ell}=2^{n}-1$ (setting $\ell=0$ if $m_{1} \neq 2^{n}-1$). Clearly, $\ell$ and $b$ must have the same parity. Now
and thus
Note that $m_{\ell+1} / 2, m_{\ell+2} / 2, \ldots, m_{r} / 2$ are elements of $A_{n-1}$. Moreover, $a-\ell$ and $(b+\ell) / 2$ are integers, and $(b+\ell) / 2 \geqslant 1$. If $a-\ell$ was negative, then we would have
thus $n \geqslant a+b+1 \geqslant 2^{n}$, which is impossible. So $a-\ell \geqslant 0$. By the induction hypothesis, we must have $a-\ell+\frac{b+\ell}{2} \geqslant n-1$, which gives us a contradiction, since
Considering the special case $a=n-2, b=1$ now completes the proof.
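The answer $(n-2) 2^{n}+1$ can also be cross-checked by brute force for small $n$: a dynamic program marks every sum of elements of $A_{n}$ up to a bound, and we confirm that the claimed value is the largest non-representable number. This is a sanity check only, and the helper name `representable_up_to` is ours.

```python
# Brute-force check of the answer (n-2)*2^n + 1 for small n.
def representable_up_to(elements, limit):
    # reach[m] is True iff m is a sum of one or more elements (reach[0] = empty sum)
    reach = [False] * (limit + 1)
    reach[0] = True
    for m in range(1, limit + 1):
        reach[m] = any(m >= e and reach[m - e] for e in elements)
    return reach

results = {}
for n in range(2, 6):
    A = [2 ** n - 2 ** k for k in range(n)]
    answer = (n - 2) * 2 ** n + 1
    reach = representable_up_to(A, answer + 2 ** n)
    # the answer must not be a sum, while every number in a window of length
    # 2^n above it is; since 2^(n-1) lies in A_n, the window propagates upward
    results[n] = (not reach[answer]) and all(
        reach[m] for m in range(answer + 1, answer + 2 ** n + 1))
print(results)
```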
Solution 3. Denote by $B_{n}$ the set of all positive integers that can be written as a sum of elements of $A_{n}$. In this solution, we explicitly describe all the numbers in $B_{n}$ by an argument similar to the first solution.
For a positive integer $n$, we denote by $\sigma_{2}(n)$ the sum of its digits in the binary representation. Notice that every positive integer $m$ has a unique representation of the form $m=s 2^{n}-t$ with some positive integer $s$ and $0 \leqslant t \leqslant 2^{n}-1$.
Lemma. For any two integers $s \geqslant 1$ and $0 \leqslant t \leqslant 2^{n}-1$, the number $m=s 2^{n}-t$ belongs to $B_{n}$ if and only if $s \geqslant \sigma_{2}(t)$.
Proof. For $t=0$, the statement of the Lemma is obvious, since $m=2 s \cdot\left(2^{n}-2^{n-1}\right)$. Now suppose that $t \geqslant 1$, and let
be its binary expansion, where $\sigma=\sigma_{2}(t)$. If $s \geqslant \sigma$, then $m \in B_{n}$ since
Assume now that there exist integers $s$ and $t$ with $1 \leqslant s<\sigma_{2}(t)$ and $0 \leqslant t \leqslant 2^{n}-1$ such that the number $m=s 2^{n}-t$ belongs to $B_{n}$. Among all such instances, choose the one for which $m$ is smallest, and let
be the corresponding representation. If all the $\ell_{i}$ 's are distinct, then $\sum_{i=1}^{d} 2^{\ell_{i}} \leqslant \sum_{j=0}^{n-1} 2^{j}=2^{n}-1$, so one has $s=d$ and $t=\sum_{i=1}^{d} 2^{\ell_{i}}$, whence $s=d=\sigma_{2}(t)$; this is impossible. Therefore, two of the $\ell_{i}$ 's must be equal, say $\ell_{d-1}=\ell_{d}$. Then $m \geqslant 2\left(2^{n}-2^{\ell_{d}}\right) \geqslant 2^{n}$, so $s \geqslant 2$.
Now we claim that the number $m^{\prime}=m-2^{n}=(s-1) 2^{n}-t$ also belongs to $B_{n}$, which contradicts the minimality assumption. Indeed, one has
so
is the desired representation of $m^{\prime}$ (if $\ell_{d}=n-1$, then the last summand is simply omitted). This contradiction finishes the proof.
By our lemma, the largest number $M$ which does not belong to $B_{n}$ must have the form
$$m_{t}=\left(\sigma_{2}(t)-1\right) 2^{n}-t$$
for some $t$ with $1 \leqslant t \leqslant 2^{n}-1$, so $M$ is just the largest of these numbers. For $t_{0}=2^{n}-1$ we have $m_{t_{0}}=(n-1) 2^{n}-\left(2^{n}-1\right)=(n-2) 2^{n}+1$; for every other value of $t$ one has $\sigma_{2}(t) \leqslant n-1$, thus $m_{t} \leqslant\left(\sigma_{2}(t)-1\right) 2^{n} \leqslant(n-2) 2^{n}<m_{t_{0}}$. This means that $M=m_{t_{0}}=(n-2) 2^{n}+1$.
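The lemma of Solution 3 lends itself to a direct empirical check (a sanity check, not a proof): for small $n$, membership in $B_{n}$ as computed by brute force agrees with the criterion $s \geqslant \sigma_{2}(t)$. The helper name `reachable` is ours.

```python
# Check: m = s*2^n - t (s >= 1, 0 <= t <= 2^n - 1) is a sum of elements
# of A_n iff s >= popcount(t), for all m up to a bound.
def reachable(n, limit):
    A = [2 ** n - 2 ** k for k in range(n)]
    reach = [False] * (limit + 1)
    reach[0] = True
    for m in range(1, limit + 1):
        reach[m] = any(m >= e and reach[m - e] for e in A)
    return reach

ok = True
for n in (2, 3, 4):
    limit = (n + 2) * 2 ** n            # covers s = 1, ..., n + 2
    reach = reachable(n, limit)
    for m in range(1, limit + 1):
        t = (-m) % 2 ** n               # unique t with 0 <= t < 2^n
        s = (m + t) // 2 ** n
        ok = ok and (reach[m] == (s >= bin(t).count("1")))
print(ok)
```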
N2. Determine all pairs $(x, y)$ of positive integers such that
$$\sqrt[3]{7 x^{2}-13 x y+7 y^{2}}=|x-y|+1. \tag{1}$$
Answer. Either $(x, y)=(1,1)$ or $\{x, y\}=\left\{m^{3}+m^{2}-2 m-1,\ m^{3}+2 m^{2}-m-1\right\}$ for some positive integer $m \geqslant 2$.
Solution. Let $(x, y)$ be any pair of positive integers solving (1). We shall prove that it appears in the list displayed above. The converse assertion that all these pairs do actually satisfy (1) either may be checked directly by means of a somewhat laborious calculation, or it can be seen by going in reverse order through the displayed equations that follow.
In case $x=y$ the given equation reduces to $x^{2 / 3}=1$, which is equivalent to $x=1$, whereby we have found the first solution.
To find the solutions with $x \neq y$ we may assume $x>y$ due to symmetry. Then the integer $n=x-y$ is positive and (1) may be rewritten as
Raising this to the third power and simplifying the result one obtains
To complete the square on the left hand side, we multiply by 4 and add $n^{2}$, thus getting
This shows that the cases $n=1$ and $n=2$ are impossible, whence $n>2$, and $4 n+1$ is the square of the rational number $\frac{2 y+n}{n-2}$. Consequently, it has to be a perfect square, and, since it is odd as well, there has to exist some nonnegative integer $m$ such that $4 n+1=(2 m+1)^{2}$, i.e.
Notice that $n>2$ entails $m \geqslant 2$. Substituting the value of $n$ just found into the previous displayed equation we arrive at
Extracting square roots and taking $2 m^{3}+3 m^{2}-3 m-2=(m-1)\left(2 m^{2}+5 m+2\right)>0$ into account we derive $2 y+m^{2}+m=2 m^{3}+3 m^{2}-3 m-2$, which in turn yields
Notice that $m \geqslant 2$ implies that $y=\left(m^{3}-1\right)+(m-2) m$ is indeed positive, as it should be. In view of $x=y+n=y+m^{2}+m$ it also follows that
and that this integer is positive as well.
Comment. Alternatively one could ask to find all pairs $(x, y)$ of (not necessarily positive) integers solving (1). The answer to that question is a bit nicer than the answer above: the set of solutions is now described by
where $m$ varies through $\mathbb{Z}$. This may be shown using essentially the same arguments as above. We finally observe that the pair $(x, y)=(1,1)$, that appears to be sporadic above, corresponds to $m=-1$.
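The "laborious calculation" verifying the answer family can be delegated to a short script (a sanity check only; `is_solution` is our helper name): cubing both sides of (1), each claimed pair must satisfy $7 x^{2}-13 x y+7 y^{2}=(|x-y|+1)^{3}$.

```python
# Verify that the claimed solutions of N2 satisfy the cubed equation.
def is_solution(x, y):
    return 7 * x * x - 13 * x * y + 7 * y * y == (abs(x - y) + 1) ** 3

pairs = [(1, 1)]
for m in range(2, 30):
    x = m ** 3 + 2 * m ** 2 - m - 1
    y = m ** 3 + m ** 2 - 2 * m - 1
    pairs += [(x, y), (y, x)]       # the equation is symmetric in x and y
all_ok = all(is_solution(x, y) for x, y in pairs)
print(all_ok)
```

For instance $m=2$ gives $\{x, y\}=\{13,7\}$, and indeed $7 \cdot 169-13 \cdot 91+7 \cdot 49=343=7^{3}=(|13-7|+1)^{3}$.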
N3. A coin is called a Cape Town coin if its value is $1 / n$ for some positive integer $n$. Given a collection of Cape Town coins of total value at most $99+\frac{1}{2}$, prove that it is possible to split this collection into at most 100 groups each of total value at most 1.
(Luxembourg)
Solution. We will show that for every positive integer $N$ any collection of Cape Town coins of total value at most $N-\frac{1}{2}$ can be split into $N$ groups each of total value at most 1. The problem statement is the particular case $N=100$.
We start with some preparations. If several given coins together have a total value of the form $\frac{1}{k}$ for a positive integer $k$, then we may merge them into one new coin. Clearly, if the resulting collection can be split in the required way then the initial collection can also be split.
After each such merging, the total number of coins decreases, thus at some moment we come to a situation when no more merging is possible. At this moment, for every even $k$ there is at most one coin of value $\frac{1}{k}$ (otherwise two such coins may be merged), and for every odd $k>1$ there are at most $k-1$ coins of value $\frac{1}{k}$ (otherwise $k$ such coins may also be merged).
Now, clearly, each coin of value 1 should form a single group; if there are $d$ such coins then we may remove them from the collection and replace $N$ by $N-d$. So from now on we may assume that there are no coins of value 1.
Finally, we may split all the coins in the following way. For each $k=1,2, \ldots, N$ we put all the coins of values $\frac{1}{2 k-1}$ and $\frac{1}{2 k}$ into a group $G_{k}$; the total value of $G_{k}$ does not exceed
It remains to distribute the "small" coins of values which are less than $\frac{1}{2 N}$; we will add them one by one. In each step, take any remaining small coin. The total value of coins in the groups at this moment is at most $N-\frac{1}{2}$, so there exists a group of total value at most $\frac{1}{N}\left(N-\frac{1}{2}\right)=1-\frac{1}{2 N}$; thus it is possible to put our small coin into this group. Acting so, we will finally distribute all the coins.
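The splitting procedure can be sketched in exact rational arithmetic. This is only an illustration under the solution's preconditions: the collection is assumed already merged (at most one coin of each even denominator, at most $k-1$ coins of denominator $k$ for odd $k>1$) and to contain no coins of value 1; the function name `split` is ours.

```python
# Sketch of the grouping from the solution, with exact fractions.
from fractions import Fraction

def split(coins, N):
    """coins: list of denominators n (coin of value 1/n); total value at most
    N - 1/2; assumed merged and free of value-1 coins."""
    groups = [[] for _ in range(N)]
    totals = [Fraction(0)] * N
    small = []
    for n in coins:
        if n <= 2 * N:
            k = (n + 1) // 2            # values 1/(2k-1) and 1/(2k) go to G_k
            groups[k - 1].append(n)
            totals[k - 1] += Fraction(1, n)
        else:
            small.append(n)
    for n in small:                      # 1/n < 1/(2N): the emptiest group fits
        i = min(range(N), key=lambda j: totals[j])
        groups[i].append(n)
        totals[i] += Fraction(1, n)
    assert all(t <= 1 for t in totals), "some group exceeded total value 1"
    return groups
```

The `min` step realises the averaging argument: since the running total is at most $N-\frac{1}{2}$, some group has value at most $1-\frac{1}{2N}$, so the next small coin fits.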
Comment 1. The algorithm may be modified, at least the step where one distributes the coins of values $\geqslant \frac{1}{2 N}$. One different way is to put into $G_{k}$ all the coins of values $\frac{1}{(2 k-1) 2^{s}}$ for all integer $s \geqslant 0$. One may easily see that their total value also does not exceed 1.
Comment 2. The original proposal also contained another part, suggesting to show that a required splitting may be impossible if the total value of the coins is at most 100. There are many examples of such a collection, e.g. one may take 98 coins of value 1, one coin of value $\frac{1}{2}$, two coins of value $\frac{1}{3}$, and four coins of value $\frac{1}{5}$.
The Problem Selection Committee thinks that this part is less suitable for the competition.
N4. Let $n>1$ be a given integer. Prove that infinitely many terms of the sequence $\left(a_{k}\right)_{k \geqslant 1}$, defined by
$$a_{k}=\left\lfloor\frac{n^{k}}{k}\right\rfloor,$$
are odd. (For a real number $x$, $\lfloor x\rfloor$ denotes the largest integer not exceeding $x$.)
(Hong Kong)
Solution 1. If $n$ is odd, let $k=n^{m}$ for $m=1,2, \ldots$. Then $a_{k}=n^{n^{m}-m}$, which is odd for each $m$.
Henceforth, assume that $n$ is even, say $n=2 t$ for some integer $t \geqslant 1$. Then, for any $m \geqslant 2$, the integer $n^{2^{m}}-2^{m}=2^{m}\left(2^{2^{m}-m} \cdot t^{2^{m}}-1\right)$ has an odd prime divisor $p$, since $2^{m}-m>1$. Then, for $k=p \cdot 2^{m}$, we have
where the congruences are taken modulo $p$ (recall that $2^{p} \equiv 2 \pmod{p}$, by Fermat's little theorem). Also, from $n^{k}-2^{m}<n^{k}<n^{k}+2^{m}(p-1)$, we see that the fraction $\frac{n^{k}}{k}$ lies strictly between the consecutive integers $\frac{n^{k}-2^{m}}{p \cdot 2^{m}}$ and $\frac{n^{k}+2^{m}(p-1)}{p \cdot 2^{m}}$, which gives
We finally observe that $\frac{n^{k}-2^{m}}{p \cdot 2^{m}}=\frac{\frac{n^{k}}{2^{m}}-1}{p}$ is an odd integer, since the integer $\frac{n^{k}}{2^{m}}-1$ is odd (recall that $k>m$ ). Note that for different values of $m$, we get different values of $k$, due to the different powers of 2 in the prime factorisation of $k$.
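The construction of Solution 1 is easy to exercise numerically (a spot check, not part of the proof; `smallest_odd_prime_factor` is our helper name): for several even $n$ and several $m$, pick an odd prime $p$ dividing $n^{2^{m}}-2^{m}$, set $k=p \cdot 2^{m}$, and confirm that $\left\lfloor n^{k} / k\right\rfloor$ is odd.

```python
# Spot check of the Solution 1 construction for even n.
def smallest_odd_prime_factor(x):
    # x is odd and > 1 here, so this always returns an odd prime
    d = 3
    while d * d <= x:
        if x % d == 0:
            return d
        d += 2
    return x

found = []
for n in (2, 4, 6):
    for m in (2, 3, 4):
        val = n ** (2 ** m) - 2 ** m
        while val % 2 == 0:              # strip all factors of 2
            val //= 2
        p = smallest_odd_prime_factor(val)   # exists since 2^m - m > 1
        k = p * 2 ** m
        found.append((n ** k // k) % 2 == 1)
print(all(found))
```

For example $n=2, m=2$ gives $2^{4}-4=12$, $p=3$, $k=12$, and $\lfloor 4096 / 12\rfloor=341$, which is odd.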
Solution 2. Treat the (trivial) case when $n$ is odd as in Solution 1. Now assume that $n$ is even and $n>2$. Let $p$ be a prime divisor of $n-1$. Proceed by induction on $i$ to prove that $p^{i+1}$ is a divisor of $n^{p^{i}}-1$ for every $i \geqslant 0$. The case $i=0$ is true by the way in which $p$ is chosen. Suppose the result is true for some $i \geqslant 0$. The factorisation
together with the fact that each of the $p$ terms between the square brackets is congruent to 1 modulo $p$, implies that the result is also true for $i+1$.
Hence $\left\lfloor\frac{n^{p^{i}}}{p^{i}}\right\rfloor=\frac{n^{p^{i}}-1}{p^{i}}$, an odd integer for each $i \geqslant 1$.
Finally, we consider the case $n=2$. We observe that $3 \cdot 4^{i}$ is a divisor of $2^{3 \cdot 4^{i}}-4^{i}$ for every $i \geqslant 1$: trivially, $4^{i}$ is a divisor of $2^{3 \cdot 4^{i}}-4^{i}$, since $3 \cdot 4^{i}>2 i$. Furthermore, since $2^{3 \cdot 4^{i}}$ and $4^{i}$ are both congruent to 1 modulo 3, we have $3 \mid 2^{3 \cdot 4^{i}}-4^{i}$. Hence, $\left\lfloor\frac{2^{3 \cdot 4^{i}}}{3 \cdot 4^{i}}\right\rfloor=\frac{2^{3 \cdot 4^{i}}-4^{i}}{3 \cdot 4^{i}}=\frac{2^{3 \cdot 4^{i}-2 i}-1}{3}$, which is odd for every $i \geqslant 1$.
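The closed form for the $n=2$ case can be spot-checked directly for the first few $i$ (e.g. $i=1$: $\lfloor 2^{12}/12\rfloor = 341 = (2^{10}-1)/3$, which is odd):

```python
# Check floor(2^(3*4^i) / (3*4^i)) = (2^(3*4^i - 2i) - 1) / 3 and its oddness.
checks = []
for i in range(1, 6):
    k = 3 * 4 ** i
    q = 2 ** k // k
    checks.append(q == (2 ** (k - 2 * i) - 1) // 3 and q % 2 == 1)
print(all(checks))
```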
Comment. The case $n$ even and $n>2$ can also be solved by recursively defining the sequence $\left(k_{i}\right)_{i \geqslant 1}$ by $k_{1}=1$ and $k_{i+1}=n^{k_{i}}-1$ for $i \geqslant 1$. Then $\left(k_{i}\right)$ is strictly increasing and it follows (by induction on $i$) that $k_{i} \mid n^{k_{i}}-1$ for all $i \geqslant 1$, so the $k_{i}$ are as desired.
The case $n=2$ can also be solved as follows: Let $i \geqslant 2$. By Bertrand's postulate, there exists a prime number $p$ such that $2^{2^{i}-1}<p \cdot 2^{i}<2^{2^{i}}$. This gives
Also, we have that $p \cdot 2^{i}$ is a divisor of $2^{p \cdot 2^{i}}-2^{2^{i}}$, hence, using (1), we get that
which is an odd integer.
Solution 3. Treat the (trivial) case when $n$ is odd as in Solution 1. Let $n$ be even, and let $p$ be a prime divisor of $n+1$. Define the sequence $\left(a_{i}\right)_{i \geqslant 1}$ by
Recall that there exists $a$ with $1 \leqslant a<2^{i}$ such that $a p \equiv-1\left(\bmod 2^{i}\right)$, so each $a_{i}$ satisfies $1 \leqslant a_{i}<2^{i}$. This implies that $a_{i} p+1<p \cdot 2^{i}$. Also, $a_{i} \rightarrow \infty$ as $i \rightarrow \infty$, whence there are infinitely many $i$ such that $a_{i}<a_{i+1}$. From now on, we restrict ourselves only to these $i$.
Notice that $p$ is a divisor of $n^{p}+1$, which, in turn, divides $n^{p \cdot 2^{i}}-1$. It follows that $p \cdot 2^{i}$ is a divisor of $n^{p \cdot 2^{i}}-\left(a_{i} p+1\right)$, and we consequently see that the integer $\left\lfloor\frac{n^{p \cdot 2^{i}}}{p \cdot 2^{i}}\right\rfloor=\frac{n^{p \cdot 2^{i}}-\left(a_{i} p+1\right)}{p \cdot 2^{i}}$ is odd, since $2^{i+1}$ divides $n^{p \cdot 2^{i}}$, but not $a_{i} p+1$.
N5. Find all triples $(p, x, y)$ consisting of a prime number $p$ and two positive integers $x$ and $y$ such that $x^{p-1}+y$ and $x+y^{p-1}$ are both powers of $p$.
(Belgium)
Answer. $(p, x, y) \in\{(3,2,5),(3,5,2)\} \cup\left\{\left(2, n, 2^{k}-n\right) \mid 0<n<2^{k}\right\}$.
Solution 1. For $p=2$, clearly all pairs of two positive integers $x$ and $y$ whose sum is a power of 2 satisfy the condition. Thus we assume in the following that $p>2$, and we let $a$ and $b$ be positive integers such that $x^{p-1}+y=p^{a}$ and $x+y^{p-1}=p^{b}$. Assume further, without loss of generality, that $x \leqslant y$, so that $p^{a}=x^{p-1}+y \leqslant x+y^{p-1}=p^{b}$, which means that $a \leqslant b$ (and thus $p^{a} \mid p^{b}$).
Now we have
We take this equation modulo $p^{a}$ and take into account that $p-1$ is even, which gives us
If $p \mid x$, then $p^{a} \mid x$, since $x^{(p-1)^{2}-1}+1$ is not divisible by $p$ in this case. However, this is impossible, since $x \leqslant x^{p-1}<p^{a}$. Thus we know that $p \nmid x$, which means that
By Fermat's little theorem, $x^{(p-1)^{2}} \equiv 1(\bmod p)$, thus $p$ divides $x+1$. Let $p^{r}$ be the highest power of $p$ that divides $x+1$. By the binomial theorem, we have
Except for the terms corresponding to $k=0, k=1$ and $k=2$, all terms in the sum are clearly divisible by $p^{3 r}$ and thus by $p^{r+2}$. The remaining terms are
which is divisible by $p^{2 r+1}$ and thus also by $p^{r+2}$,
which is divisible by $p^{r+1}$, but not $p^{r+2}$ by our choice of $r$, and the final term -1 corresponding to $k=0$. It follows that the highest power of $p$ that divides $x^{p(p-2)}+1$ is $p^{r+1}$.
On the other hand, we already know that $p^{a}$ divides $x^{p(p-2)}+1$, which means that $a \leqslant r+1$. Moreover,
Hence we either have $a=r$ or $a=r+1$. If $a=r$, then $x=y=1$ needs to hold in the inequality above, which is impossible for $p>2$. Thus $a=r+1$. Now since $p^{r} \leqslant x+1$, we get
so we must have $x=p-1$ for $p$ to divide $x+1$. It follows that $r=1$ and $a=2$. If $p \geqslant 5$, we obtain
a contradiction. So the only case that remains is $p=3$, and indeed $x=2$ and $y=p^{a}-x^{p-1}=5$ satisfy the conditions.
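The answer set is easy to confirm numerically (a sanity check, with our helper names `is_power_of` and `works`): both conditions hold for $(3,2,5)$ and $(3,5,2)$, every triple $\left(2, n, 2^{k}-n\right)$ works, and a small brute-force search for $p=3$ and $p=5$ turns up nothing else.

```python
# Numerical check of the claimed answer set for N5.
def is_power_of(m, p):
    while m % p == 0:
        m //= p
    return m == 1

def works(p, x, y):
    return is_power_of(x ** (p - 1) + y, p) and is_power_of(x + y ** (p - 1), p)

ok1 = works(3, 2, 5) and works(3, 5, 2)          # 4 + 5 = 3^2, 2 + 25 = 3^3
ok2 = all(works(2, n, 2 ** 4 - n) for n in range(1, 2 ** 4))
# small brute force: no further solutions with x, y <= 200 for p = 3 or 5
extra = [(p, x, y) for p in (3, 5) for x in range(1, 201) for y in range(1, 201)
         if works(p, x, y) and (p, x, y) not in {(3, 2, 5), (3, 5, 2)}]
print(ok1, ok2, extra)
```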
Comment 1. In this solution, we are implicitly using a special case of the following lemma known as "lifting the exponent":
Lemma. Let $n$ be a positive integer, let $p$ be an odd prime, and let $v_{p}(m)$ denote the exponent of the highest power of $p$ that divides $m$.
If $x$ and $y$ are integers not divisible by $p$ such that $p \mid x-y$, then we have
Likewise, if $x$ and $y$ are integers not divisible by $p$ such that $p \mid x+y$, then we have
Comment 2. There exist various ways of solving the problem involving the "lifting the exponent" lemma. Let us sketch another one.
The cases $x=y$ and $p \mid x$ are ruled out easily, so we assume that $p>2, x<y$, and $p \nmid x$. In this case we also have $p^{a}<p^{b}$ and $p \mid x+1$.
Now one has
so by the lemma mentioned above one has $p^{a-1} \mid y-x$ and hence $y=x+t p^{a-1}$ for some positive integer $t$. Thus one gets
The factors on the left-hand side are coprime. So if $p \mid x$, then $x^{p-2}+1 \mid p-t$, which is impossible since $x<x^{p-2}+1$. Therefore, $p \nmid x$, and thus $x \mid p-t$. Since $p \mid x+1$, the only remaining case is $x=p-1, t=1$, and $y=p^{a-1}+p-1$. Now the solution can be completed in the same way as before.
Solution 2. Again, we can focus on the case that $p>2$. If $p \mid x$, then also $p \mid y$. In this case, let $p^{k}$ and $p^{\ell}$ be the highest powers of $p$ that divide $x$ and $y$ respectively, and assume without loss of generality that $k \leqslant \ell$. Then $p^{k}$ divides $x+y^{p-1}$ while $p^{k+1}$ does not, but $p^{k}<x+y^{p-1}$, which yields a contradiction. So $x$ and $y$ are not divisible by $p$. Fermat's little theorem yields $0 \equiv x^{p-1}+y \equiv 1+y(\bmod p)$, so $y \equiv-1(\bmod p)$ and for the same reason $x \equiv-1(\bmod p)$.
In particular, $x, y \geqslant p-1$ and thus $x^{p-1}+y \geqslant 2(p-1)>p$, so $x^{p-1}+y$ and $y^{p-1}+x$ are both at least equal to $p^{2}$. Now we have
These two congruences, together with the Euler-Fermat theorem, give us
Since $x \equiv y \equiv-1(\bmod p), x-y$ is divisible by $p$, so $(x-y)^{2}$ is divisible by $p^{2}$. This means that
so $p^{2}$ divides $(x+y-2)(x+y+2)$. We already know that $x+y \equiv-2(\bmod p)$, so $x+y-2 \equiv$ $-4 \not \equiv 0(\bmod p)$. This means that $p^{2}$ divides $x+y+2$.
Using the same notation as in the first solution, we subtract the two original equations to obtain
The second factor is symmetric in $x$ and $y$, so it can be written as a polynomial in the elementary symmetric polynomials $x+y$ and $x y$ with integer coefficients. In particular, its value modulo $p^{2}$ is determined by the two congruences $x y \equiv 1\left(\bmod p^{2}\right)$ and $x+y \equiv-2\left(\bmod p^{2}\right)$. Since both congruences are satisfied when $x=y=-1$, we must have
which simplifies to $y^{p-2}+y^{p-3} x+\cdots+x^{p-2}-1 \equiv-p\left(\bmod p^{2}\right)$. Thus the second factor in (1) is divisible by $p$, but not $p^{2}$.
This means that $p^{a-1}$ has to divide the other factor $y-x$. It follows that
Since $x \equiv-1(\bmod p)$, the last factor is $x^{p-3}-x^{p-4}+\cdots+1 \equiv p-2(\bmod p)$ and in particular not divisible by $p$. We infer that $p^{a-1} \mid x+1$ and continue as in the first solution.
Comment. Instead of reasoning by means of elementary symmetric polynomials, it is possible to provide a more direct argument as well. For odd $r,(x+1)^{2}$ divides $\left(x^{r}+1\right)^{2}$, and since $p$ divides $x+1$, we deduce that $p^{2}$ divides $\left(x^{r}+1\right)^{2}$. Together with the fact that $x y \equiv 1\left(\bmod p^{2}\right)$, we obtain
We apply this congruence with $r=p-2-2 k$ (where $0 \leqslant k<(p-2) / 2$ ) to find that
Summing over all $k$ yields
once again.
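As a sanity check on the congruence analysis above (the problem under discussion seeks triples $(p, x, y)$ with $x^{p-1}+y$ and $x+y^{p-1}$ both powers of $p$), a brute-force search over small odd primes confirms that every solution found satisfies $x \equiv y \equiv-1(\bmod p)$. The search bounds below are arbitrary and chosen only for illustration.

```python
# Search for (p, x, y) with x^(p-1) + y and x + y^(p-1) both powers of the
# odd prime p; by the analysis above, every hit must satisfy
# x ≡ y ≡ -1 (mod p).  Bounds are arbitrary.

def is_power_of(m, p):
    while m % p == 0:
        m //= p
    return m == 1

solutions = []
for p in (3, 5, 7):
    for x in range(1, 200):
        for y in range(1, 200):
            if is_power_of(x**(p - 1) + y, p) and is_power_of(x + y**(p - 1), p):
                assert x % p == p - 1 and y % p == p - 1
                solutions.append((p, x, y))
print(solutions)
```

Within these bounds the search finds the pair $x=2$, $y=5$ (and its mirror) for $p=3$: indeed $2^{2}+5=9$ and $2+5^{2}=27$.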
N6. Let $a_{1}<a_{2}<\cdots<a_{n}$ be pairwise coprime positive integers with $a_{1}$ being prime and $a_{1} \geqslant n+2$. On the segment $I=\left[0, a_{1} a_{2} \cdots a_{n}\right]$ of the real line, mark all integers that are divisible by at least one of the numbers $a_{1}, \ldots, a_{n}$. These points split $I$ into a number of smaller segments. Prove that the sum of the squares of the lengths of these segments is divisible by $a_{1}$.

(Serbia)

Solution 1. Let $A=a_{1} \cdots a_{n}$. Throughout the solution, all intervals will be nonempty and have integer end-points. For any interval $X$, the length of $X$ will be denoted by $|X|$.
Define the following two families of intervals:

$$\mathcal{S}=\{[x, y]: x<y \text { are consecutive marked points }\}$$

$$\mathcal{T}=\{[x, y]: x<y \text { are integers with } 0 \leqslant x \leqslant A-1, \text { and no point is marked in }(x, y)\}$$
We are interested in computing $\sum_{X \in \mathcal{S}}|X|^{2}$ modulo $a_{1}$. Note that the number $A$ is marked, so in the definition of $\mathcal{T}$ the condition $y \leqslant A$ is enforced without explicitly prescribing it.
Assign weights to the intervals in $\mathcal{T}$, depending only on their lengths. The weight of an arbitrary interval $Y \in \mathcal{T}$ will be $w(|Y|)$, where

$$w(d)= \begin{cases}1 & \text { if } d=1, \\ 2 & \text { if } d \geqslant 2 .\end{cases}$$
Consider an arbitrary interval $X \in \mathcal{S}$ and its sub-intervals $Y \in \mathcal{T}$. Clearly, $X$ has one sub-interval of length $|X|$, two sub-intervals of length $|X|-1$ and so on; in general $X$ has $|X|-d+1$ sub-intervals of length $d$ for every $d=1,2, \ldots,|X|$. The sum of the weights of the sub-intervals of $X$ is $\sum_{Y \in \mathcal{T}, Y \subseteq X} w(|Y|)=\sum_{d=1}^{|X|}(|X|-d+1) \cdot w(d)=|X| \cdot 1+((|X|-1)+(|X|-2)+\cdots+1) \cdot 2=|X|^{2}$. Since the intervals in $\mathcal{S}$ are non-overlapping, every interval $Y \in \mathcal{T}$ is a sub-interval of a single interval $X \in \mathcal{S}$. Therefore,
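The identity $\sum_{d=1}^{L}(L-d+1) w(d)=L^{2}$ used above (with $w(1)=1$ and $w(d)=2$ for $d \geqslant 2$, as read off from the computation) is easy to confirm mechanically:

```python
# An interval X of integer length L has L - d + 1 sub-intervals of length d;
# with weights w(1) = 1 and w(d) = 2 for d >= 2, the weight total equals L^2.

def w(d):
    return 1 if d == 1 else 2

for L in range(1, 100):
    assert sum((L - d + 1) * w(d) for d in range(1, L + 1)) == L * L
print("weight identity verified for L = 1..99")
```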
For every $d=1,2, \ldots, a_{1}$, we count how many intervals in $\mathcal{T}$ are of length $d$. Notice that the multiples of $a_{1}$ are all marked, so the lengths of the intervals in $\mathcal{S}$ and $\mathcal{T}$ cannot exceed $a_{1}$. Let $x$ be an arbitrary integer with $0 \leqslant x \leqslant A-1$ and consider the interval $[x, x+d]$. Let $r_{1}$, $\ldots, r_{n}$ be the remainders of $x$ modulo $a_{1}, \ldots, a_{n}$, respectively. Since $a_{1}, \ldots, a_{n}$ are pairwise coprime, the number $x$ is uniquely identified by the sequence $\left(r_{1}, \ldots, r_{n}\right)$, due to the Chinese remainder theorem.
For every $i=1, \ldots, n$, the property that the interval $(x, x+d)$ does not contain any multiple of $a_{i}$ is equivalent to $r_{i}+d \leqslant a_{i}$, i.e. $r_{i} \in\left\{0,1, \ldots, a_{i}-d\right\}$, so there are $a_{i}-d+1$ choices for the number $r_{i}$ for each $i$. Therefore, the number of remainder sequences $\left(r_{1}, \ldots, r_{n}\right)$ that satisfy $[x, x+d] \in \mathcal{T}$ is precisely $\left(a_{1}+1-d\right) \cdots\left(a_{n}+1-d\right)$. Denote this product by $f(d)$.
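This counting argument can be verified by brute force on a small instance; the moduli $(5,6,7)$ below are an arbitrary pairwise coprime choice.

```python
# Count x in {0, ..., A-1} for which the open interval (x, x+d) contains no
# multiple of any a_i, and compare with f(d) = (a_1 + 1 - d)...(a_n + 1 - d).
from math import prod

a = [5, 6, 7]                     # arbitrary pairwise coprime moduli
A = prod(a)
for d in range(1, min(a) + 1):
    count = sum(1 for x in range(A)
                if all(x % ai + d <= ai for ai in a))
    assert count == prod(ai + 1 - d for ai in a)
print("f(d) confirmed for a =", a)
```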
Now we can group the last sum in (1) by length of the intervals. As we have seen, for every $d=1, \ldots, a_{1}$ there are $f(d)$ intervals $Y \in \mathcal{T}$ with $|Y|=d$. Therefore, (1) can be continued as
Having the formula (2), the solution can be finished using the following well-known fact:

Lemma. If $p$ is a prime, $F(x)$ is a polynomial with integer coefficients, and $\operatorname{deg} F \leqslant p-2$, then $\sum_{x=1}^{p} F(x)$ is divisible by $p$.

Proof. Obviously, it is sufficient to prove the lemma for monomials of the form $x^{k}$ with $k \leqslant p-2$. Apply induction on $k$. If $k=0$ then $F=1$, and the statement is trivial.
Let $1 \leqslant k \leqslant p-2$, and assume that the lemma is proved for all lower degrees. Then
Since $0<k+1<p$, this proves $\sum_{x=1}^{p} x^{k} \equiv 0(\bmod p)$.

In (2), by applying the lemma to the polynomial $f$ and the prime $a_{1}$, we obtain that $\sum_{d=1}^{a_{1}} f(d)$ is divisible by $a_{1}$. The term $f(1)=a_{1} \cdots a_{n}$ is also divisible by $a_{1}$; these two facts together prove that $\sum_{X \in \mathcal{S}}|X|^{2}$ is divisible by $a_{1}$.
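Both the lemma and the sharpness of its degree bound (for $k=p-1$, Fermat's little theorem gives $\sum_{x=1}^{p} x^{p-1} \equiv p-1(\bmod p)$) are easy to confirm numerically; the list of primes below is arbitrary.

```python
# Check: for a prime p and 0 <= k <= p-2, sum_{x=1}^{p} x^k ≡ 0 (mod p),
# while for k = p-1 the sum is ≡ -1 (mod p), so the degree bound is sharp.
for p in (3, 5, 7, 11, 13):
    for k in range(p - 1):
        assert sum(x**k for x in range(1, p + 1)) % p == 0
    assert sum(x**(p - 1) for x in range(1, p + 1)) % p == p - 1
print("lemma verified for p in (3, 5, 7, 11, 13)")
```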
Comment 1. With suitable sets of weights, the same method can be used to sum up other expressions on the lengths of the segments. For example, $w(1)=1$ and $w(k)=6(k-1)$ for $k \geqslant 2$ can be used to compute $\sum_{X \in \mathcal{S}}|X|^{3}$ and to prove that this sum is divisible by $a_{1}$ if $a_{1}$ is a prime with $a_{1} \geqslant n+3$. See also Comment 2 after the second solution.

Solution 2. The conventions from the first paragraph of the first solution are still in force. We shall prove the following more general statement:

( $\boxplus$ ) Let $p$ denote a prime number, let $p=a_{1}<a_{2}<\cdots<a_{n}$ be $n$ pairwise coprime positive integers, and let $d$ be an integer with $1 \leqslant d \leqslant p-n$. Mark all integers that are divisible by at least one of the numbers $a_{1}, \ldots, a_{n}$ on the interval $I=\left[0, a_{1} a_{2} \cdots a_{n}\right]$ of the real line. These points split $I$ into a number of smaller segments, say of lengths $b_{1}, \ldots, b_{k}$. Then the sum $\sum_{i=1}^{k}\binom{b_{i}}{d}$ is divisible by $p$.
Applying ( $\boxplus$ ) to $d=1$ and $d=2$ and using the equation $x^{2}=2\binom{x}{2}+\binom{x}{1}$, one easily gets the statement of the problem.
To prove $(\boxplus)$ itself, we argue by induction on $n$. The base case $n=1$ follows from the known fact that the binomial coefficient $\binom{p}{d}$ is divisible by $p$ whenever $1 \leqslant d \leqslant p-1$.
Let us now assume that $n \geqslant 2$, and that the statement is known whenever $n-1$ rather than $n$ coprime integers are given together with some integer $d \in[1, p-n+1]$. Suppose that the numbers $p=a_{1}<a_{2}<\cdots<a_{n}$ and $d$ are as above. Write $A^{\prime}=\prod_{i=1}^{n-1} a_{i}$ and $A=A^{\prime} a_{n}$. Mark the points on the real axis divisible by one of the numbers $a_{1}, \ldots, a_{n-1}$ green and those divisible by $a_{n}$ red. The green points divide $\left[0, A^{\prime}\right]$ into certain sub-intervals, say $J_{1}, J_{2}, \ldots$, and $J_{\ell}$.
To translate intervals we use the notation $[a, b]+m=[a+m, b+m]$ whenever $a, b, m \in \mathbb{Z}$. For each $i \in\{1,2, \ldots, \ell\}$ let $\mathcal{F}_{i}$ be the family of intervals into which the red points partition the intervals $J_{i}, J_{i}+A^{\prime}, \ldots$, and $J_{i}+\left(a_{n}-1\right) A^{\prime}$. We are to prove that
is divisible by $p$. Let us fix any index $i$ with $1 \leqslant i \leqslant \ell$ for a while. Since the numbers $A^{\prime}$ and $a_{n}$ are coprime by hypothesis, the numbers $0, A^{\prime}, \ldots,\left(a_{n}-1\right) A^{\prime}$ form a complete system of residues modulo $a_{n}$. Moreover, we have $\left|J_{i}\right| \leqslant p<a_{n}$, as in particular all multiples of $p$ are green. So each of the intervals $J_{i}, J_{i}+A^{\prime}, \ldots$, and $J_{i}+\left(a_{n}-1\right) A^{\prime}$ contains at most one red point. More precisely, for each $j \in\left\{1, \ldots,\left|J_{i}\right|-1\right\}$ there is exactly one amongst those intervals containing a red point splitting it into an interval of length $j$ followed by an interval of length $\left|J_{i}\right|-j$, while the remaining $a_{n}-\left|J_{i}\right|+1$ such intervals have no red points in their interiors. For these reasons
So it remains to prove that
is divisible by $p$. By the induction hypothesis, however, it is even true that both summands are divisible by $p$, for $1 \leqslant d<d+1 \leqslant p-(n-1)$. This completes the proof of ( $\boxplus$ ) and hence the solution of the problem.
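Both ($\boxplus$) and the original statement can be checked on a small instance; $\left(a_{1}, a_{2}, a_{3}\right)=(5,6,7)$ is an arbitrary admissible choice ($a_{1}=5$ is prime, $n=3$, so ($\boxplus$) applies for $d=1,2$, and $a_{1} \geqslant n+2$ holds for the sum of squares).

```python
# Mark the multiples of 5, 6, 7 on [0, 210], collect the segment lengths,
# and check that sum C(b_i, d) for d = 1, 2 and sum b_i^2 are all divisible
# by p = 5, as (boxplus) and the original problem assert.
from math import comb, prod

a = [5, 6, 7]
p, n = a[0], len(a)
A = prod(a)
marks = sorted({x for ai in a for x in range(0, A + 1, ai)})
lengths = [marks[i + 1] - marks[i] for i in range(len(marks) - 1)]

for d in range(1, p - n + 1):                     # (boxplus) for d = 1, 2
    assert sum(comb(b, d) for b in lengths) % p == 0
assert sum(b * b for b in lengths) % p == 0       # the original statement
print("checked for a =", a)
```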
Comment 2. The statement ( $\boxplus$ ) can also be proved by the method of the first solution, using the weights $w(x)=\binom{x-2}{d-2}$.
N7. Let $c \geqslant 1$ be an integer. Define a sequence of positive integers by $a_{1}=c$ and
for all $n \geqslant 1$. Prove that for each integer $n \geqslant 2$ there exists a prime number $p$ dividing $a_{n}$ but none of the numbers $a_{1}, \ldots, a_{n-1}$.

(Austria)

Solution. Let us define $x_{0}=0$ and $x_{n}=a_{n} / c$ for all integers $n \geqslant 1$. It is easy to see that the sequence $\left(x_{n}\right)$ thus obtained obeys the recursive law
for all integers $n \geqslant 0$. In particular, all of its terms are positive integers; notice that $x_{1}=1$ and $x_{2}=2 c^{2}+1$. Since
holds for all integers $n \geqslant 0$, it is also strictly increasing. Since $x_{n+1}$ is by (1) coprime to $c$ for any $n \geqslant 0$, it suffices to prove that for each $n \geqslant 2$ there exists a prime number $p$ dividing $x_{n}$ but none of the numbers $x_{1}, \ldots, x_{n-1}$. Let us begin by establishing three preliminary claims.

Claim 1. If $i \equiv j(\bmod m)$ holds for some integers $i, j \geqslant 0$ and $m \geqslant 1$, then $x_{i} \equiv x_{j}\left(\bmod x_{m}\right)$ holds as well.

Proof. Evidently, it suffices to show $x_{i+m} \equiv x_{i}\left(\bmod x_{m}\right)$ for all integers $i \geqslant 0$ and $m \geqslant 1$. For this purpose we may argue for fixed $m$ by induction on $i$ using $x_{0}=0$ in the base case $i=0$. Now, if we have $x_{i+m} \equiv x_{i}\left(\bmod x_{m}\right)$ for some integer $i$, then the recursive equation (1) yields
which completes the induction.

Claim 2. If the integers $i, j \geqslant 2$ and $m \geqslant 1$ satisfy $i \equiv j(\bmod m)$, then $x_{i} \equiv x_{j}\left(\bmod x_{m}^{2}\right)$ holds as well.

Proof. Again it suffices to prove $x_{i+m} \equiv x_{i}\left(\bmod x_{m}^{2}\right)$ for all integers $i \geqslant 2$ and $m \geqslant 1$. As above, we proceed for fixed $m$ by induction on $i$. The induction step is again easy using (1), but this time the base case $i=2$ requires some calculation. Set $L=5 c^{2}$. By (1) we have $x_{m+1} \equiv L x_{m}+1\left(\bmod x_{m}^{2}\right)$, and hence
which in turn gives indeed $x_{m+2} \equiv 2 c^{2}+1 \equiv x_{2}\left(\bmod x_{m}^{2}\right)$.

Claim 3. For each integer $n \geqslant 2$, we have $x_{n}>x_{1} \cdot x_{2} \cdots x_{n-2}$.

Proof. The cases $n=2$ and $n=3$ are clear. Arguing inductively, we assume now that the claim holds for some $n \geqslant 3$. Recall that $x_{2} \geqslant 3$, so by monotonicity and (2) we get $x_{n} \geqslant x_{3} \geqslant x_{2}\left(x_{2}-2\right)^{2}+x_{2}+1 \geqslant 7$. It follows that
which by the induction hypothesis yields $x_{n+1}>x_{1} \cdot x_{2} \cdots x_{n-1}$, as desired.
Now we direct our attention to the problem itself: let any integer $n \geqslant 2$ be given. By Claim 3 there exists a prime number $p$ appearing with a higher exponent in the prime factorisation of $x_{n}$ than in the prime factorisation of $x_{1} \cdots x_{n-2}$. In particular, $p \mid x_{n}$, and it suffices to prove that $p$ divides none of $x_{1}, \ldots, x_{n-1}$.
Otherwise let $k \in\{1, \ldots, n-1\}$ be minimal such that $p$ divides $x_{k}$. Since $x_{n-1}$ and $x_{n}$ are coprime by (1) and $x_{1}=1$, we actually have $2 \leqslant k \leqslant n-2$. Write $n=q k+r$ with some integers $q \geqslant 0$ and $0 \leqslant r<k$. By Claim 1 we have $x_{n} \equiv x_{r}\left(\bmod x_{k}\right)$, whence $p \mid x_{r}$. Due to the minimality of $k$ this entails $r=0$, i.e. $k \mid n$.
Thus from Claim 2 we infer
Now let $\alpha \geqslant 1$ be maximal with the property $p^{\alpha} \mid x_{k}$. Then $x_{k}^{2}$ is divisible by $p^{\alpha+1}$ and by our choice of $p$ so is $x_{n}$. So by the previous congruence $x_{k}$ is a multiple of $p^{\alpha+1}$ as well, contrary to our choice of $\alpha$. This is the final contradiction concluding the solution.
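The displayed recursive law (1) was lost in this transcription; the values $x_{1}=1$, $x_{2}=2 c^{2}+1$ and the congruence $x_{m+1} \equiv 5 c^{2} x_{m}+1\left(\bmod x_{m}^{2}\right)$ used above are all consistent with the recursion $x_{n+1}=c^{2} x_{n}^{3}-4 c^{2} x_{n}^{2}+5 c^{2} x_{n}+1$. Taking that recursion as an assumption, the conclusion can be spot-checked numerically; a primitive prime divisor of $x_{n}$ is certified without factoring by stripping from $x_{n}$ every prime it shares with $x_{1} \cdots x_{n-1}$.

```python
# Spot-check: every x_n (n >= 2) has a prime divisor dividing none of
# x_1, ..., x_{n-1}.  ASSUMED recursion (reconstructed, see lead-in):
#   x_{n+1} = c^2 (x_n^3 - 4 x_n^2 + 5 x_n) + 1.
from math import gcd

for c in range(1, 6):
    x = [0, 1]                                   # x_0 = 0, x_1 = 1
    for _ in range(6):
        t = x[-1]
        x.append(c * c * (t**3 - 4 * t**2 + 5 * t) + 1)
    assert x[2] == 2 * c * c + 1                 # matches the text
    for n in range(2, len(x)):
        earlier = 1
        for xi in x[1:n]:
            earlier *= xi
        r = x[n]
        while (g := gcd(r, earlier)) > 1:        # strip shared primes
            r //= g
        assert r > 1, (c, n)                     # a primitive prime remains
print("primitive prime divisor confirmed for c = 1..5, n = 2..7")
```

For instance, with $c=1$ the sequence starts $0,1,3,7,183$, and $183=3 \cdot 61$ contributes the primitive prime $61$ even though it shares the factor $3$ with $x_{2}$.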
N8. For every real number $x$, let $|x|$ denote the distance between $x$ and the nearest integer. Prove that for every pair $(a, b)$ of positive integers there exist an odd prime $p$ and a positive integer $k$ satisfying
(Hungary)

Solution. Notice first that $\left\lfloor x+\frac{1}{2}\right\rfloor$ is an integer nearest to $x$, so $|x|=\left|\left\lfloor x+\frac{1}{2}\right\rfloor-x\right|$. Thus we have
For every rational number $r$ and every prime number $p$, denote by $v_{p}(r)$ the exponent of $p$ in the prime factorisation of $r$. Recall the notation $(2 n-1)!!$ for the product of all odd positive integers not exceeding $2 n-1$, i.e., $(2 n-1)!!=1 \cdot 3 \cdots(2 n-1)$.

Lemma. For every positive integer $n$ and every odd prime $p$, we have
Proof. For every positive integer $k$, let us count the multiples of $p^{k}$ among the factors $1,3, \ldots$, $2 n-1$. If $\ell$ is an arbitrary integer, the number $(2 \ell-1) p^{k}$ is listed above if and only if
Hence, the number of multiples of $p^{k}$ among the factors is precisely $m_{k}=\left\lfloor\frac{n}{p^{k}}+\frac{1}{2}\right\rfloor$. Thus we obtain
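The resulting formula, $v_{p}\left((2 n-1)!!\right)=\sum_{k \geqslant 1}\left\lfloor\frac{n}{p^{k}}+\frac{1}{2}\right\rfloor$ (only the terms with $p^{k} \leqslant 2 n$ are nonzero), can be confirmed by direct computation; the bounds below are arbitrary.

```python
# Check v_p((2n-1)!!) = sum_k floor(n / p^k + 1/2) for odd primes p.
# floor(n/q + 1/2) is computed in exact integer arithmetic as (2n + q) // (2q).

def vp(m, p):
    """Exponent of the prime p in the integer m >= 1."""
    e = 0
    while m % p == 0:
        m //= p
        e += 1
    return e

for p in (3, 5, 7):
    for n in range(1, 200):
        double_factorial = 1
        for odd in range(1, 2 * n, 2):
            double_factorial *= odd
        predicted, q = 0, p
        while q <= 2 * n:                 # terms with p^k > 2n vanish
            predicted += (2 * n + q) // (2 * q)
            q *= p
        assert vp(double_factorial, p) == predicted
print("double-factorial valuation verified for p in (3, 5, 7), n < 200")
```

For example, $9!!=945=3^{3} \cdot 35$, and indeed $\left\lfloor\frac{5}{3}+\frac{1}{2}\right\rfloor+\left\lfloor\frac{5}{9}+\frac{1}{2}\right\rfloor=2+1=3$.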
In order to prove the problem statement, consider the rational number
Obviously, $N>1$, so there exists a prime $p$ with $v_{p}(N)>0$. Since $N$ is a ratio of two odd numbers, $p$ is odd.
By our lemma,
Therefore, there exists some positive integer $k$ such that the integer
is positive, so $d_{k} \geqslant 1$. By (2) we have
Since $|x|<\frac{1}{2}$ for every rational $x$ with odd denominator, the relation (3) can only be satisfied if all three signs on the right-hand side are positive and $d_{k}=1$. Thus we get
as required.

Comment 1. There are various choices for the number $N$ in the solution. Here we sketch such a version.
Let $x$ and $y$ be two rational numbers with odd denominators. It is easy to see that the condition $|x|+|y|+|x+y|=1$ is satisfied if and only if
where $\{x\}$ denotes the fractional part of $x$. In the context of our problem, the first condition seems easier to deal with. Also, one may notice that
where
Now it is natural to consider the number
since
One may see that $M>1$, and that $v_{2}(M) \leqslant 0$. Thus, there exist an odd prime $p$ and a positive integer $k$ with
In view of (4), the last inequality yields
which is what we wanted to obtain.

Comment 2. Once one tries to prove the existence of suitable $p$ and $k$ satisfying (5), it seems somehow natural to suppose that $a \leqslant b$ and to add the restriction $p^{k}>a$. In this case the inequalities (5) can be rewritten as
for some positive integer $m$. This means exactly that one of the numbers $2 a+1,2 a+3, \ldots, 2 a+2 b-1$ is divisible by some number of the form $p^{k}$ which is greater than $2 a$.
Using more advanced techniques, one can show that such a number $p^{k}$ exists even with $k=1$. This was shown in 2004 by Laishram and Shorey; the methods used for this proof are elementary but still quite involved. In fact, their result generalises a theorem of Sylvester which states that for every pair of integers $(n, k)$ with $n \geqslant k \geqslant 1$, the product $(n+1)(n+2) \cdots(n+k)$ is divisible by some prime $p>k$. We would like to mention here that Sylvester's theorem itself does not seem to suffice for solving the problem.
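The statement of the problem can also be verified exhaustively for small pairs $(a, b)$. Note that any odd $q=p^{k}$ with $q \geqslant 2(a+b)$ gives $|a / q|+|b / q|+|(a+b) / q|=2(a+b) / q<1$ (equality would force $q=2(a+b)$, which is even), so every witness satisfies $p^{k}<2(a+b)$ and a finite search suffices.

```python
# Exhaustive search for a witness (p, p^k) of the statement for small (a, b),
# using exact rational arithmetic; by the remark above, every witness has
# p^k < 2(a+b), so the loop bounds are exhaustive for 1 <= a, b <= 10.
from fractions import Fraction

def dist(num, q):
    """Distance from num/q to the nearest integer."""
    f = Fraction(num, q) % 1
    return min(f, 1 - f)

def witness(a, b, primes=(3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    """Return some (p, p^k) with ||a/p^k|| + ||b/p^k|| + ||(a+b)/p^k|| = 1."""
    for p in primes:
        q = p
        while q < 2 * (a + b):
            if dist(a, q) + dist(b, q) + dist(a + b, q) == 1:
                return p, q
            q *= p
    return None

for a in range(1, 11):
    for b in range(1, 11):
        assert witness(a, b) is not None, (a, b)
print("witness found for all 1 <= a, b <= 10")
```

For example, $(a, b)=(6,3)$ requires the somewhat larger prime $p=13$: $\left|\frac{6}{13}\right|+\left|\frac{3}{13}\right|+\left|\frac{9}{13}\right|=\frac{6}{13}+\frac{3}{13}+\frac{4}{13}=1$.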