Title: The discrete generalized exchange-driven system

URL Source: https://arxiv.org/html/2408.00345


License: CC BY-NC-SA 4.0. arXiv:2408.00345v2 [math.CA] 29 Aug 2025

The discrete generalized exchange-driven system

P.K. Barik: Instituto de Matemáticas, Universidad de Granada, Rector López Argüet, S/N, 18001, Granada, Spain; Departamento de Matemática Aplicada, Universidad de Granada, Avenida de Fuentenueva S/N, 18071, Granada, Spain; and Department of Mathematics, BITS-Pilani, Dubai Campus, P.O. Box 345055, Dubai, United Arab Emirates. barik@dubai.bits-pilani.ac.in

F.P. da Costa: Univ. Aberta, Dep. of Sciences and Technology, Rua da Escola Politécnica 141-7, P-1269-001 Lisboa, Portugal; and Univ. Lisboa, Instituto Superior Técnico, Centre for Mathematical Analysis, Geometry and Dynamical Systems, Av. Rovisco Pais, P-1049-001 Lisboa, Portugal. fcosta@uab.pt

J.T. Pinto: Univ. Lisboa, Instituto Superior Técnico, Dep. of Mathematics and Centre for Mathematical Analysis, Geometry and Dynamical Systems, Av. Rovisco Pais, P-1049-001 Lisboa, Portugal. jpinto@tecnico.ulisboa.pt

R. Sasportes†: Univ. Aberta, Dep. of Sciences and Technology, Rua da Escola Politécnica 141-7, P-1269-001 Lisboa, Portugal; and Univ. Lisboa, Instituto Superior Técnico, Centre for Mathematical Analysis, Geometry and Dynamical Systems, Av. Rovisco Pais, P-1049-001 Lisboa, Portugal. rafael.sasportes@uab.pt

(Date: First version: September 7, 2024; Revised: July 17, 2025)

Abstract.

We study a discrete model for generalized exchange-driven growth in which the particle exchanged between two clusters is not restricted to be of size one. This class of models includes as special cases the usual exchange-driven growth system and the coagulation-fragmentation system with binary fragmentation. Under reasonably general conditions on the rate coefficients we establish the existence of admissible solutions, meaning solutions that are obtained as appropriate limits of solutions to finite-dimensional truncations of the infinite-dimensional ODE system. For these solutions we prove that, in the class of models we call isolated, both the total number of particles and the total mass are conserved, whereas in the models we call non-isolated only the mass is conserved. Additionally, under more restrictive growth conditions on the rate coefficients we obtain uniqueness of solutions to the initial value problems.

Key words and phrases: exchange-driven cluster growth, aggregation kinetics, ordinary differential equations

1991 Mathematics Subject Classification: Primary 34A12; Secondary 34A34, 34A35, 92E20

† Our friend and co-worker Rafael Sasportes (1960–2024) died while this paper was in the final stages of preparation. We dedicate it to his memory.

Research partially supported by Fundação para a Ciência e a Tecnologia (Portugal) through project CAMGSD UID/04459/2023. Corresponding author: P. K. Barik

1. Introduction

The dynamics of growth processes of aggregates, or clusters, is ubiquitous across the natural world. For instance: in chemistry, polymerization processes [10]; in astrophysics, the creation of stars and planets [24]; in the physics of the atmosphere, the formation of clouds [23]; in solid state physics, deposition processes [21]; in biology, the aggregation of red blood cells [15]; in ecology, the grouping behavior of animals [9]; in economics, the merger of enterprises [3]; and many more.

By assigning at each time a non-negative number to each cluster size, representing its concentration in the system, the evolution of the cluster size distribution can be mathematically described by differential equations. One well-known mathematical model is the Smoluchowski coagulation equation, see [26], in which two clusters interact, leading to coalescence and the formation of a larger-sized cluster, and its generalization by inclusion of the possibility of fragmentation of the clusters (see, e.g., [4, 5, 8] and references therein). In the early 2000s several authors introduced a different growth model, known as the Exchange-Driven Growth model (EDG), arising, inter alia, as the mean field limit for zero-range processes in non-equilibrium statistical physics, and also in the modelling of several social phenomena including migration, population dynamics and wealth exchange (see e.g. [6, 20, 17, 19]). The mechanism underlying this model involves the exchange of a single unit of mass (a monomer, as it is usually called in the coagulation literature) from one cluster to another whenever they come into interaction.

The first mathematical study of the EDG equation was done by Emre Esentürk in [12] where the existence and uniqueness of solutions to the EDG equations is discussed. In addition, the gelation transition, instantaneous gelation phenomenon and the existence of local solutions for different classes of interaction rates are also addressed. The large time behaviour of solutions was also investigated for different classes of reaction rates, see [13, 14]. In the recent work [25] Schlichting investigates the well-posedness of solutions to the EDG equation in which the conditions on initial data are relaxed when compared to those in [12]. Additionally, qualitative aspects such as the long-time behavior of the solution are also discussed. Furthermore, a study of self-similar solutions is carried out, specifically focusing on the product interaction rate, as detailed in [11].

In the present paper our primary goal is to propose a new, more general, discrete model for exchange-driven growth of clusters and to start its mathematical study. This new model will be referred to as the discrete generalized exchange-driven growth model (DGED, for short). As explained in the next section, the DGED model allows not only for the exchange of a monomer between two interacting clusters, but also for the transfer of a bigger chunk of one cluster to the other. Thus, the DGED model includes as a special case the EDG model referred to above. In a certain sense, the usual coagulation-fragmentation equations (with binary fragmentation) can also be considered a special case of the DGED equations.

The outline of this article is as follows. In sections 2 and 3, we introduce the DGED model for two cases that, in section 2, are called isolated and non-isolated, and some relevant mathematical notions and results that will be needed later in this work. In addition, all the main results of our paper are stated in section 3. In section 4 we consider a truncated finite dimensional version of the DGED system. This is a finite dimensional ordinary differential equation system, for which the existence, uniqueness, and non-negativity of solutions to Cauchy problems are automatically obtained from the standard theory. Here we prove some properties of its solutions that will play an important role in the next section. In section 5 we establish the existence of mild solutions to Cauchy problems for the DGED system, i.e., continuous solutions of the integral version of the differential equations system. This is done by the use of Helly’s theorem and a diagonal argument to establish the existence of a function which is the limit of a (sub)sequence of solutions to truncated systems as the dimension of the truncation grows to infinity, and then by proving that this function is indeed a (mild) solution of the DGED system using appropriate bounds (obtained in the previous two sections) and the bounded and monotone convergence theorems. Still in section 5 we prove that for the isolated case solutions obtained in this way (which are called admissible) conserve two important quantities, which have the “physical” interpretation of being the total number of clusters, and the total mass of the system, whereas in the non-isolated case only mass is conserved. 
In the short section 6 we prove that, under slightly more restrictive conditions on the rate coefficients of the DGED system and assuming conservation of the total cluster mass (which is true for admissible solutions in both the isolated and non-isolated cases), each component of the mild solution proved to exist in section 5 is indeed continuously differentiable. In section 7 we prove a uniqueness result under a more restrictive growth condition on the rate coefficients, applying standard techniques used in coagulation-type systems. We conclude the paper with a section of final remarks where we discuss some possible future work related to the long-term behavior of solutions of this model.

2. The generalized exchange-driven growth model

Consider a population of particles with sizes described by p ∈ ℕ = {0, 1, 2, …}. For each size p and each time t, let c_p(t) be the concentration of particles of size p (or p-clusters, for short) at time t. Then we assume that a chunk of size k ⩽ p can be detached from this particle and attached to another of size q, schematically represented by

⟨p⟩ + ⟨q⟩ → ⟨p−k⟩ + ⟨q+k⟩.   (1)

Figure 1 gives a pictorial illustration of this process.

[Figure 1 image omitted.] Figure 1. Reaction scheme of the DGED model considered in this paper. A chunk of size k in a particle of size p is transferred to a particle of size q to produce a particle of size p−k and another of size q+k. The rate coefficient for this reaction is a(p,q;k) ⩾ 0.

Assuming the validity of the mass action law of chemical kinetics, the rate of production of ( 𝑝 − 𝑘 ) -clusters and ( 𝑞 + 𝑘 ) -clusters due to the reaction between 𝑝 -clusters and 𝑞 -clusters, according to the mechanism displayed in Figure 1, is equal to 𝑎 ​ ( 𝑝 , 𝑞 ; 𝑘 ) ​ 𝑐 𝑝 ​ ( 𝑡 ) ​ 𝑐 𝑞 ​ ( 𝑡 ) , where the rate coefficient is 𝑎 ​ ( 𝑝 , 𝑞 ; 𝑘 ) .

Hereafter a(p,q;k) will denote the rate of the reaction in which a k-cluster is detached from a p-cluster and attaches itself to a q-cluster. Clearly, we must have p, q, k ∈ ℕ with p ⩾ 1 and 1 ⩽ k ⩽ p, because the case k = 0 is the absence of reaction and k > p does not make physical sense, as we cannot detach from a given cluster a part bigger than the entire cluster. However, we will allow k = p in (1), which means that the entire p-cluster is attached to the q-cluster in the reaction

⟨p⟩ + ⟨q⟩ → ⟨0⟩ + ⟨q+p⟩.

This is very much like the usual coagulation reaction (see, e.g., [8]) but for the consideration of the “void”, or “empty”, cluster ⟨ 0 ⟩ .

In a similar way, if we consider q = 0 the reaction scheme (1) becomes

⟨p⟩ + ⟨0⟩ → ⟨p−k⟩ + ⟨k⟩,

and this is a kind of fragmentation of the non-void cluster. Figure 2 illustrates the domain of 𝑎 ​ ( 𝑝 , 𝑞 ; 𝑘 ) and the regions just described.

[Figure 2 image omitted.] Figure 2. Domain of the rate coefficients a(p,q;k): region of ℕ^3 bounded by the planes q = 0 (corresponding to binary fragmentation) and k = p (corresponding to coagulation).

Allowing the kind of reactions in which void clusters are destroyed by reaction with other clusters means that one is considering the class of so-called active exchange-driven models (in the classification of Esenturk and Connaughton [13]). This is what will be done in this paper.

The way the dynamics of the void clusters is considered provides a further classification of the models under consideration: two natural choices correspond to open, or non-isolated, and to isolated thermodynamic systems. In the first case we can assume that the cluster system is embedded in an infinite bath of void particles such that the concentration of ⟨0⟩ particles remains constant in time, i.e., ċ_0(t) = 0 for all t; in the second case one assumes that the dynamics of the cluster system, including the void particles, is such that the total number of clusters is conserved (at least at a formal level), and so there is the need for a dynamic equation describing the evolution of the concentration of clusters ⟨0⟩ compatible with this assumption. This same distinction can be seen in [22] for the role of the monomers (⟨1⟩ particles) in the context of the dynamics of cluster equations of Becker–Döring type.

Returning to the above reaction schemes involving the void cluster, we note that its presence can have two distinct effects according to whether the system is isolated or not. In the non-isolated case, where the concentration of ⟨0⟩ is constant, the reactions are exactly the coagulation and the binary fragmentation reactions considered in coagulation-fragmentation studies. Thus, for that case, the rate coefficients a(p,q;p) and a(p,0;k) correspond to reactions of coagulation and binary fragmentation, respectively, and all the other cases (with 0 < k < p) correspond to genuine exchange of pieces of the p-cluster to the q-cluster. In the case of the isolated model, where the concentration of ⟨0⟩ is not necessarily constant, the mass action law for the reaction corresponding to the destruction of void clusters produces a nonlinear differential equation and the parallel with the usual fragmentation equations is less straightforward.

The DGED system describes, for each 𝑖 ∈ ℕ , the time evolution of the concentration of the 𝑖 -clusters and is obtained by keeping track of the various ways an 𝑖 -cluster can be formed or destroyed by the kinetics of the type (1). As in previous studies of exchange-driven kinetics, such as [12, 14, 25], we also consider clusters of size zero, the void, or empty, clusters. As we shall see this is rather convenient, enabling, in the isolated model, the existence of an additional conservation law, and, in the non-isolated model, the recovery of the usual discrete coagulation-fragmentation model as a particular case.

In order to introduce the DGED system we have to consider the various ways that lead to the creation and the destruction of an 𝑖 -cluster, which we do next, starting with the case of the isolated system.

In scheme (1), one i-cluster is created if (i) p − k = i, or if (ii) q + k = i. Likewise, one i-cluster is destroyed if (iii) q = i, or if (iv) p = i. In the case of creation (i), for every k ∈ ℕ⁺ := ℕ ∖ {0}, we have to consider all the index combinations for which p = i + k and q = j, for any j ∈ ℕ. Therefore, in this case, the formation of an i-cluster proceeds through the scheme (1) illustrated in Figure 1 and, by the mass action law, its contribution to the global rate of change of c_i(t) is given by

Q_{1,i}(c(t)) := ∑_{k=1}^{∞} ∑_{j=0}^{∞} a(i+k, j; k) c_{i+k}(t) c_j(t).   (2)

For the destruction case (iii), we have to consider all possible index combinations with q = i, that is, j := p ⩾ k ⩾ 1. The corresponding contribution to the global rate of change of c_i(t) is thus

Q_{2,i}(c(t)) := − ∑_{k=1}^{∞} ∑_{j=k}^{∞} a(j, i; k) c_j(t) c_i(t).   (3)

For the creation case (ii), we have to consider all possible index combinations for which q = i − k, that is, for 1 ⩽ k ⩽ i we have p ⩾ k, which, by calling j := p, leads to the contribution

Q_{3,i}(c(t)) := ∑_{k=1}^{i} ∑_{j=k}^{∞} a(j, i−k; k) c_j(t) c_{i−k}(t).   (4)

Finally, for the destruction case (iv), we have to consider the index combinations for which k ⩽ p = i and q ∈ ℕ. By calling j := q, the corresponding contribution to the rate equation is

Q_{4,i}(c(t)) := − ∑_{k=1}^{i} ∑_{j=0}^{∞} a(i, j; k) c_j(t) c_i(t).   (5)

Clearly the processes leading to Q_{3,i} and Q_{4,i} cannot be in operation when i = 0, because they require the consideration of clusters with size smaller than i. Hence we define Q_{3,0} = Q_{4,0} = 0.

If the system is not isolated but instead the number density of 0-clusters is kept constant at some fixed value c_0(t) = c_{00}, the equation for the evolution of c_0(t) becomes simply

ċ_0 = 0.   (6)

Hence, for this case Q_{j,0} = 0 for all j = 1, 2, 3, 4.

Thus, in all cases, the Discrete Generalized Exchange-Driven system (DGED) can be written as

ċ_i = ∑_{j=1}^{4} Q_{j,i}(c),  i ∈ ℕ.   (7)

We are particularly interested in the study of Cauchy problems for (7) with the initial condition

c_i(0) = c_{0i} ⩾ 0, for i ∈ ℕ.   (8)

We remark that all terms Q_{j,i} contain terms of the type a(p,0;p) c_p(t) c_0(t), which correspond to reactions

⟨p⟩ + ⟨0⟩ → ⟨0⟩ + ⟨p⟩,

that, in fact, do not correspond to any change in the cluster size distribution. They are included in each of the Q_{j,i} for notational convenience only and, in fact, they cancel each other in (7) (the j = 0 terms in Q_{1,0} cancel the j = k terms in Q_{2,0}, and the k = i = j term in Q_{3,i} cancels the term with k = i and j = 0 in Q_{4,i}). However, when taken in isolation each Q_{j,i} contains these spurious, unphysical, contributions and, since in the definition of solution one requires that each Q_{j,i}(c(·)) be integrable in (0,T) (see Definition 3.1), we need to make some assumption on the coefficients a(p,0;p). Given the discussion above we define them as

a(p,0;p) = 0, ∀p ∈ ℕ⁺.   (9)
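The bookkeeping behind the four terms Q_{1,i}–Q_{4,i} can be sanity-checked numerically on a finite truncation. The sketch below is not from the paper: the truncation size N, the random rates, and the per-reaction loop are our own choices. It builds rate coefficients supported on {q + k ⩽ N}, enforces (9), and verifies that the right-hand side of (7) formally conserves both the total number of clusters and the total mass in the isolated case:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6  # truncation size (assumption for this sketch)

# Rate coefficients a(p, q; k): nonzero only for 1 <= k <= p <= N and
# q + k <= N, so no cluster leaves the truncated size range.
a = np.zeros((N + 1, N + 1, N + 1))
for p in range(1, N + 1):
    for q in range(0, N + 1):
        for k in range(1, p + 1):
            if q + k <= N:
                a[p, q, k] = rng.random()
for p in range(1, N + 1):
    a[p, 0, p] = 0.0  # condition (9)

def rhs(c):
    """Right-hand side of the truncated system (7), isolated case,
    written per reaction <p>+<q> -> <p-k>+<q+k> instead of via Q_{1..4}."""
    dc = np.zeros(N + 1)
    for p in range(1, N + 1):
        for q in range(0, N + 1):
            for k in range(1, p + 1):
                if q + k > N:
                    continue  # rate is zero outside the truncation
                flux = a[p, q, k] * c[p] * c[q]  # mass action law
                dc[p] -= flux       # destruction case (iv)
                dc[q] -= flux       # destruction case (iii)
                dc[p - k] += flux   # creation case (i)
                dc[q + k] += flux   # creation case (ii)
    return dc

c = rng.random(N + 1)
dc = rhs(c)
print(abs(dc.sum()))                       # time derivative of the cluster number: ~0
print(abs((np.arange(N + 1) * dc).sum()))  # time derivative of the mass: ~0
```

Each reaction destroys one p-cluster and one q-cluster and creates one (p−k)-cluster and one (q+k)-cluster, so both sums vanish up to rounding; this is the formal version of the conservation laws stated later for the isolated case.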

We will also assume that the following symmetry relations always hold:

a(k,j;k) = a(j,k;j), ∀j, k ∈ ℕ⁺,   (10)

a(i,0;k) = a(i,0;i−k), ∀k ∈ {1, …, i−1}, i ∈ ℕ⁺.   (11)

Condition (10) is due to the fact that, as pointed out above, each rate coefficient a(i,j;k) with i = k corresponds to the rate of a coagulation reaction of a k-cluster with a j-cluster (to produce a (j+k)-cluster), and thus this is the usual symmetry assumption reflecting the fact that a coagulation reaction of a k-cluster with a j-cluster is the same as the reaction of a j-cluster with a k-cluster. Similarly, (11) corresponds to the symmetry of the rates of the reaction ⟨i⟩ + ⟨0⟩ → ⟨i−k⟩ + ⟨k⟩, when interpreted as a fragmentation of the i-cluster produced by the shedding off of a k-cluster, or of an (i−k)-cluster.

Observe that if the rate coefficients satisfy a(i,j;k) = 0 if k ≠ 1, we obtain the exchange-driven growth system introduced in [6] and studied mathematically in [11, 12, 13, 14, 25], with rate coefficients K(i,j). In this case,

∀i, k ∈ ℕ⁺, j ∈ ℕ,  a(i,j;k) = K(i,j) δ_{k,1},   (12)

where δ_{k,1} is the Kronecker δ-symbol, equal to 1 if k = 1 and zero otherwise.

As was already pointed out above, it is interesting to observe that the DGED system formally reduces to the standard discrete coagulation-fragmentation equations in the non-isolated case, provided the rate coefficients satisfy the conditions

a(k,j;k) ⩾ 0,   (13)

a(i,0;k) ⩾ 0,   (14)

with all the other cases equal to zero.

In fact, observe that with these conditions we get

Q_{1,i}(c) = ∑_{k=1}^{∞} a(i+k,0;k) c_{i+k} c_0 (using (14)),   (15)

Q_{2,i}(c) = − ∑_{k=1}^{∞} a(k,i;k) c_k c_i (using (13)),   (16)

Q_{3,i}(c) = ∑_{j=i}^{∞} a(j,0;i) c_j c_0 + ∑_{k=1}^{i} a(k,i−k;k) c_k c_{i−k} (using (13), (14)),   (17)

Q_{4,i}(c) = − ∑_{k=1}^{i} a(i,0;k) c_i c_0 − ∑_{j=0}^{∞} a(i,j;i) c_i c_j (using (13), (14)).   (18)

Now, due to (9), the terms with j = i in the first sum in (17), with k = i in the second sum in (17) and in the first sum in (18), and with j = 0 in the second sum in (18) are all equal to zero, and, using the symmetry conditions (10) and (11),

∑_{j=1}^{∞} a(i,j;i) c_i c_j = ∑_{j=1}^{∞} a(j,i;j) c_i c_j (using (10)),   (19)

∑_{j=i+1}^{∞} a(j,0;i) c_j c_0 = ∑_{j=i+1}^{∞} a(j,0;j−i) c_j c_0 (using (11)) = ∑_{k=1}^{∞} a(i+k,0;k) c_{i+k} c_0,   (20)

then, putting together (15)–(20), we obtain, for i ⩾ 1,

∑_{j=1}^{4} Q_{j,i}(c) = ∑_{k=1}^{i−1} a(k,i−k;k) c_k c_{i−k} − ∑_{k=1}^{i−1} a(i,0;k) c_i c_0 + 2 ∑_{j=1}^{∞} a(i+j,0;j) c_{i+j} c_0 − 2 ∑_{j=1}^{∞} a(i,j;i) c_i c_j,

and so we can write the DGED system as

ċ_i = (1/2) ∑_{k=1}^{i−1} W_{k,i−k}(c) − ∑_{k=1}^{∞} W_{i,k}(c),  i ∈ ℕ⁺,   (21)

where W_{i,j}(c) := 2 a(i,j;i) c_i c_j − 2 a(i+j,0;j) c_{i+j} c_0. Clearly, if our system is immersed in an infinite particle bath of 0-cluster particles (which means that the concentration of 0-particles is kept constant) and if we define a_{i,j} := 2 a(i,j;i) and b_{i,j} := 2 a(i+j,0;j) c_0, then (21) becomes the usual discrete coagulation-fragmentation equation [8]. Therefore, in this case we have

∀i, j, k ∈ ℕ⁺, k ⩽ i,  a(i,j;k) = (1/2) a_{i,j} δ_{k,i},   (22)

∀i, k ∈ ℕ⁺, k ⩽ i,  a(i,0;k) c_0 = (1/2) b_{i−k,k}.

(23)

3. Mathematical setting and main results

In this section we introduce the notion of solution to the initial value problem (7), (8) that we consider in this work, as well as other concepts that will be useful in what follows. We also introduce here the hypotheses we will consider for the rate coefficients a(i,j;k).

We start by introducing the concentration moments. For r ⩾ 0, we denote by 𝒫_r(t) the r-th moment of the solution c(t) at a given time t:

𝒫_r(t) := ∑_{i=0}^{∞} i^r c_i(t).   (24)

In the cases r = 0 and r = 1, the moments have a natural physical interpretation: 𝒫_0(t) represents the total number of particles in the system, while 𝒫_1(t) is the system's total mass. This is the reason why, for these types of models, it is anticipated that both these moments are invariant under time evolution.

The above-mentioned physical interpretation suggests that, analogously to previous works on coagulation-type systems, it is natural to work in the Banach space

X_{0,1} := { c : ℕ → ℝ | ‖c‖ := ∑_{i=0}^{∞} (1+i) |c_i| < ∞ }.   (25)

Note that we can write ‖c‖ = ‖c‖_{ℓ^1} + ‖(i c_i)‖_{ℓ^1}, where ‖u‖_{ℓ^1} is the usual ℓ^1 norm of the sequence u = (u_i) : ℕ → ℝ.

Also due to the physical interpretation of c = (c_i) as a sequence of concentrations, we will be exclusively interested in solutions in the non-negative cone of X_{0,1},

X_{0,1}^+ := X_{0,1} ∩ { c = (c_i) | c_i ⩾ 0 }.   (26)

We are now ready to state the following definition of solution.

Definition 3.1.

Let T ∈ (0, +∞] and let c_0 = (c_{0i}) ∈ X_{0,1}^+ be a sequence of nonnegative real numbers. A (mild) solution to (7), (8) on [0,T) is a sequence of nonnegative continuous functions c = (c_i) : [0,T) → X_{0,1}^+ such that, for each i ∈ ℕ and t ∈ (0,T), the following holds:

(i) c_i ∈ C^0([0,T)),

(ii) Q_{j,i}(c(·)) ∈ L^1(0,t), j ∈ {1,2,3,4},

(iii) c_i(t) = c_{0i} + ∫_0^t ∑_{j=1}^{4} Q_{j,i}(c(s)) ds.

3.1. Rate Coefficients: Growth Assumptions and Examples

As stated previously, we assume conditions (9), (10) and (11) throughout the paper. With respect to the bounds on the rate coefficients, we assume the existence of positive constants C and 𝒬, which for convenience we take to satisfy C, 𝒬 ⩾ 1, as well as non-negative numbers q_{i,k}, with 1 ⩽ k ⩽ i, satisfying

∀i ∈ ℕ⁺,  ∑_{k=1}^{i} k (i−k+1) q_{i,k} ⩽ 𝒬 i,   (27)

such that, for all integers i, k with 1 ⩽ k ⩽ i, and all j ∈ ℕ⁺,

a(i,j;k) ⩽ C (i−k+1) (j+k) q_{i,k}.   (28)

We can see 𝑞 𝑖 , 𝑘 as measuring the ease with which a cluster of size 𝑘 can be detached from a cluster of size 𝑖 : the smaller the value of 𝑞 𝑖 , 𝑘 is, the smaller is the rate coefficient 𝑎 ​ ( 𝑖 , 𝑗 ; 𝑘 ) , and so the harder it is for a 𝑘 -cluster to take part in the exchange reaction ⟨ 𝑖 ⟩ + ⟨ 𝑗 ⟩ → ⟨ 𝑖 − 𝑘 ⟩ + ⟨ 𝑗 + 𝑘 ⟩ .

One important bound for our subsequent arguments comes from the fact that, for every i ∈ ℕ⁺,

i ∑_{k=1}^{i} q_{i,k} = min_{1⩽k⩽i} { k(i−k+1) } ∑_{k=1}^{i} q_{i,k} ⩽ ∑_{k=1}^{i} k(i−k+1) q_{i,k} ⩽ 𝒬 i,

thus resulting in ∑_{k=1}^{i} q_{i,k} ⩽ 𝒬 and, in particular, for every pair of integers i, k such that 1 ⩽ k ⩽ i,

q_{i,k} ⩽ 𝒬.   (29)

We next exhibit some examples showing that (27) and (28) are fulfilled by several well-known cases in the literature.

Example 1. Consider the exchange-driven growth system with kernel satisfying K(i,j) ⩽ C_0 i j, for i, j ∈ ℕ⁺. By (12), we have, for all i, j ∈ ℕ⁺,

a(i,j;k) = K(i,j) δ_{k,1} ⩽ C_0 i j δ_{k,1} ⩽ C_0 i (j+1) δ_{k,1},

which is (28) with C = C_0 and q_{i,k} = δ_{k,1}, observing that (27) is also satisfied since

∑_{k=1}^{i} k(i−k+1) q_{i,k} = ∑_{k=1}^{i} k(i−k+1) δ_{k,1} = i,

so that we can take 𝒬 = 1 in this case.

Example 2. In fact, Example 1 is a particular case of the more general situation in which there is an upper bound k̄ ⩾ 1 on the number of particles exchanged between two reacting clusters of any sizes, that is,

a(i,j;k) = 0, if k > k̄.

We now show that, for this case, the existence of a constant C̄ > 0 such that, for i, j, k ∈ ℕ⁺,

a(i,j;k) ⩽ C̄ i j,  1 ⩽ k ⩽ min(i, k̄),   (30)

is equivalent to the existence of C ⩾ 1 such that (28) is satisfied with q_{i,k} = 𝟙_{{1,…,k̄}}(k), for which (27) is true. First, it is easy to see that this choice of q_{i,k} verifies (27):

∑_{k=1}^{i} k(i−k+1) 𝟙_{{1,…,k̄}}(k) = ∑_{k=1}^{min(i,k̄)} k(i−k+1) ⩽ k̄^2 i.

Furthermore, on the one hand, for any i, j, k ∈ ℕ⁺ such that 1 ⩽ k ⩽ min(i, k̄), we have

(i−k+1)(j+k) ⩽ i j (1 + k/j) ⩽ (1 + k̄) i j,

and on the other hand we also have, in case i ⩾ k̄,

(i−k+1)(j+k) ⩾ i j (1 − (k−1)/i) ⩾ i j (1 − (k̄−1)/k̄) = (1/k̄) i j,

and in case i < k̄,

(i−k+1)(j+k) ⩾ j ⩾ (1/k̄) i j.

Therefore, if condition (28) holds with the above choice of q_{i,k}, which also verifies (27), then condition (30) is verified with C̄ = (1+k̄) C. Conversely, if the rate coefficients verify (30), then conditions (27), (28) hold with the above choice of q_{i,k} and C = k̄ C̄.
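The two-sided estimate just used, (1/k̄) i j ⩽ (i−k+1)(j+k) ⩽ (1+k̄) i j for 1 ⩽ k ⩽ min(i, k̄), can be confirmed by a brute-force check over a finite range. This is a sketch of ours, not part of the paper; the sample bound k̄ = 4 and the sampled ranges are arbitrary:

```python
# Exhaustive check of the two-sided bound from Example 2 on a finite range.
kbar = 4  # hypothetical bound on the exchanged chunk size
for i in range(1, 60):
    for j in range(1, 60):
        for k in range(1, min(i, kbar) + 1):
            v = (i - k + 1) * (j + k)
            assert i * j <= kbar * v, (i, j, k)        # lower bound (1/kbar)*i*j <= v
            assert v <= (1 + kbar) * i * j, (i, j, k)  # upper bound
print("two-sided bound verified on the sampled range")
```

Integer arithmetic is used throughout (the lower bound is checked as i j ⩽ k̄ v), so no floating-point tolerance is needed.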

Example 3. An example of a strictly positive rate coefficient a(i,j;k) without a fixed upper bound k̄ on the size of the clusters being exchanged is the constant kernel a(i,j;k) = 1, for which we have

1 = ((i−k+1)(j+k+1)) / ((i−k+1)(j+k+1)) ⩽ 2 (i−k+1)(j+k) · (1/((i−k+1)k)) =: 2 (i−k+1)(j+k) q_{i,k},

so that (28) holds with C = 2 and q_{i,k} := 1/((i−k+1)k). Clearly, the symmetry conditions (10) and (11), as well as the bound (27), are trivially satisfied.

Example 4. A less trivial example of the sort considered in Example 3, but such that the rate coefficients are unbounded, is

a(i,j;k) = ((i−k+1)(j+k+1)) / (1 + (i−k)k),

for which checking the symmetry and growth conditions is also easily done.
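For the reader's convenience, here is a quick exact-arithmetic check (our own sketch, not in the paper) that the kernel of Example 4 satisfies the symmetries (10) and (11), and that (28) and (27) hold with the hypothesized choices q_{i,k} = 1/(1+(i−k)k), C = 2 and 𝒬 = 3, mirroring Example 3; the sampled ranges are arbitrary:

```python
from fractions import Fraction

def a(i, j, k):
    # Example 4 kernel: a(i, j; k) = (i-k+1)(j+k+1) / (1 + (i-k)k)
    return Fraction((i - k + 1) * (j + k + 1), 1 + (i - k) * k)

# Symmetry (10): a(k, j; k) = a(j, k; j).
for j in range(1, 25):
    for k in range(1, 25):
        assert a(k, j, k) == a(j, k, j)

# Symmetry (11): a(i, 0; k) = a(i, 0; i-k).
for i in range(2, 25):
    for k in range(1, i):
        assert a(i, 0, k) == a(i, 0, i - k)

# Growth bound (28) with q_{i,k} = 1/(1 + (i-k)k) and C = 2 (our guesses).
for i in range(1, 25):
    for j in range(1, 25):
        for k in range(1, i + 1):
            q = Fraction(1, 1 + (i - k) * k)
            assert a(i, j, k) <= 2 * (i - k + 1) * (j + k) * q

# Bound (27) appears to hold with Q = 3 for this choice of q_{i,k}.
for i in range(1, 120):
    s = sum(Fraction(k * (i - k + 1), 1 + (i - k) * k) for k in range(1, i + 1))
    assert s <= 3 * i

print("Example 4: (10), (11), (27) and (28) hold on the sampled range")
```

Exact rational arithmetic (`fractions.Fraction`) avoids any floating-point tolerance in the comparisons.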

Example 5. Finally, in this example let us consider the coagulation-fragmentation case with an upper bound on the coagulation rate coefficients of the type a_{i,j} ⩽ C_1 (i+j), by (22). For this case we have

a(i,j;k) = (1/2) a_{i,j} δ_{i,k} ⩽ (C_1/2) (i+j) δ_{i,k},

which is again (28), this time taking C = max(1, C_1/2) and q_{i,k} = δ_{i,k}, remarking also that

∑_{k=1}^{i} k(i−k+1) q_{i,k} = ∑_{k=1}^{i} k(i−k+1) δ_{i,k} = i,

so that we can again take 𝒬 = 1. Remark that no bounds are imposed on the coefficients a(i,0;j), which allows us to consider any choice of fragmentation coefficients b_{i−j,j}.

Therefore, from these examples (in particular from Examples 1 and 5) we conclude that the exchange-driven growth system with a kernel with a multiplicative bound and the coagulation-fragmentation system with a coagulation kernel with a linear additive bound are both particular cases of the discrete generalized exchange-driven system with bounds of the type (27)–(28).
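The algebra reducing the DGED system to the coagulation-fragmentation form (21) under conditions (13)–(14) can also be verified numerically. The following sketch is ours (the truncation size and the random symmetric rates are arbitrary choices, not from the paper): it builds rates of the special form (22)–(23), computes the right-hand side of (7) by direct per-reaction bookkeeping with c_0 held constant, and compares the result with formula (21):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8  # truncation size (assumption for this sketch)

# Symmetric coagulation rates a_{i,j} (zero when i + j > N) and
# fragmentation rates a(i, 0; k), symmetric under k <-> i-k, cf. (10)-(11).
coag = rng.random((N + 1, N + 1))
coag = coag + coag.T
frag = np.zeros((N + 1, N + 1))
for i in range(2, N + 1):
    for k in range(1, i):
        if frag[i, k] == 0.0:
            frag[i, k] = frag[i, i - k] = rng.random()

# DGED rates of the special form (22)-(23): only k = i (coagulation) and
# q = 0 with 1 <= k <= i-1 (fragmentation) are nonzero; (9) holds.
a = np.zeros((N + 1, N + 1, N + 1))
for i in range(1, N + 1):
    for j in range(1, N + 1):
        if i + j <= N:
            a[i, j, i] = coag[i, j] / 2
    for k in range(1, i):
        a[i, 0, k] = frag[i, k]

def rhs_general(c):
    # direct bookkeeping of scheme (1); c_0 held fixed (non-isolated case)
    dc = np.zeros(N + 1)
    for p in range(1, N + 1):
        for q in range(0, N + 1):
            for k in range(1, p + 1):
                if q + k > N:
                    continue
                f = a[p, q, k] * c[p] * c[q]
                dc[p] -= f
                dc[q] -= f
                dc[p - k] += f
                dc[q + k] += f
    dc[0] = 0.0
    return dc

def rhs_cf(c):
    # coagulation-fragmentation form (21) with W as defined in the text
    def W(i, j):
        return 2 * a[i, j, i] * c[i] * c[j] - 2 * a[i + j, 0, j] * c[i + j] * c[0]
    dc = np.zeros(N + 1)
    for i in range(1, N + 1):
        dc[i] = 0.5 * sum(W(k, i - k) for k in range(1, i)) \
                - sum(W(i, k) for k in range(1, N - i + 1))
    return dc

c = rng.random(N + 1)
print(np.max(np.abs(rhs_general(c) - rhs_cf(c))))  # ~0 (rounding only)
```

The two right-hand sides agree up to rounding error, as the term-by-term rearrangement leading to (21) predicts.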

3.2. Main results

In this subsection, we state the main results proved in this paper for the initial value problem (7)–(8), considering both the isolated and non-isolated cases.

Let us first state the existence result for the isolated case.

Theorem 3.2.

Let c_0 = (c_{0i})_{i⩾0} ∈ X_{0,1}^+. Assume the rate coefficients satisfy conditions (27)–(28). Then the initial value problem (7), (8) for the isolated case has at least one global solution c defined on the interval [0, ∞).

The next theorem establishes mass and particle number conservation in the isolated system.

Theorem 3.3.

Let c_0 ∈ X_{0,1}^+, and let c = (c_i)_{i⩾0} be an admissible solution to (7), (8) in the isolated case. Then, for all t ⩾ 0,

𝒫_0(t) := ∑_{i=0}^{∞} c_i(t) = ∑_{i=0}^{∞} c_{0i} =: 𝒫_0(0)   (31)

and

𝒫_1(t) := ∑_{i=0}^{∞} i c_i(t) = ∑_{i=0}^{∞} i c_{0i} =: 𝒫_1(0).   (32)

Similarly to Theorem 3.2, we now state the existence of global solutions for the non-isolated case.

Theorem 3.4.

Let c_0 = (c_{0i})_{i⩾0} ∈ X_{0,1}^+. Assume the rate coefficients satisfy conditions (27)–(28). Then the initial value problem (7), (8) for the non-isolated case has at least one global solution c defined on the interval [0, ∞).

The following result establishes the mass conservation property in the non-isolated case:

Theorem 3.5.

Let c_0 ∈ X_{0,1}^+, and let c = (c_i)_{i⩾0} be an admissible solution to (7), (8) in the non-isolated case. Then, for all t ⩾ 0, 𝒫_1(t) = 𝒫_1(0).

We next state the regularity of the solution in the classical sense:

Theorem 3.6 (Regularity of solutions).

Suppose that the rate coefficients satisfy (27)–(28), for all j ∈ ℕ. If c = (c_i) is a mild solution of (7)–(8) on [0,T), in either the isolated or the non-isolated case, and if ∑_{i=1}^{∞} i c_i(·) is constant in [0,T), then, for each i ∈ ℕ,

c_i ∈ C^1([0,T), ℝ_0^+).

We finally state a partial uniqueness result:

Theorem 3.7.

Suppose that the rate coefficients satisfy (28) and (121), with α ∈ [0, 1/2). Let T ∈ (0, +∞) and c_0 ∈ X_{0,1}^+. Then the initial value problem (7)–(8) has a unique solution on [0, T].

3.3. Preliminary results

Prior to establishing the existence result in section 5, we need to revisit some notation and obtain some inequalities that will be needed later. Let us first recall the space ℰ from [18]:

ℰ is the set of non-negative convex functions σ ∈ C^1([0,∞)) ∩ W^{2,∞}_{loc}(0,∞) with σ(0) = 0, σ′(0) ⩾ 0 and σ′ concave. Assume furthermore that these functions satisfy the following condition:

lim_{r→∞} σ′(r) = lim_{r→∞} σ(r)/r = ∞.   (33)

Again from [18], define ℰ_1 as follows:

ℰ_1 is the set of all non-negative convex functions σ ∈ C^2([0,∞)) with σ(0) = 0, σ′(0) = 0, and σ′ convex and satisfying the so-called Δ_2-condition, namely, there is a constant A_σ ⩾ 0 such that

σ′(2x) ⩽ A_σ σ′(x),  x ∈ [0,∞).   (34)

As an illustration of these concepts consider the following example: if σ(x) = x^p, then σ ∈ ℰ when p ∈ (1,2], and σ ∈ ℰ_1 if p ⩾ 2.

Instrumental to the proofs of our subsequent results is the following inequality, which we recall from [18, Lemma 3.2], valid for all σ ∈ ℰ ∪ ℰ_1:

(i+j) (σ(i+j) − σ(i) − σ(j)) ⩽ m_σ (i σ(j) + j σ(i)),  ∀i ⩾ 0, j ⩾ 1.   (35)

Here, m_σ ⩾ 0 is a constant that, in case σ ∈ ℰ, can be taken as m_σ = 2. We have, as a consequence,

Lemma 3.8.

Assume $\sigma \in \mathcal{E}_1 \cup \mathcal{E}$. Then, for all $i, j, k \in \mathbb{N}^+$ such that $k \leqslant i$, we have

$$(j+k)\bigl(\sigma(i-k) + \sigma(j+k) - \sigma(i) - \sigma(j)\bigr) \leqslant m_\sigma \bigl(j\,\sigma(k) + k\,\sigma(j)\bigr).$$

Proof.

First, write

$$\sigma(i-k) + \sigma(j+k) - \sigma(i) - \sigma(j) = \bigl(\sigma(j+k) - \sigma(j) - \sigma(k)\bigr) - \bigl(\sigma(i) - \sigma(i-k) - \sigma(k)\bigr). \tag{36}$$

For each $a \geqslant 0$ define $\phi_a(x) := \sigma(x) - \sigma(a) - \sigma(x-a)$. Then, for any $x \geqslant a$, by the convexity of $\sigma$ we have

$$\phi_a'(x) = \sigma'(x) - \sigma'(x-a) \geqslant 0.$$

But then, since $\phi_a(a) = 0$, we conclude that $\phi_a(x) \geqslant 0$. Therefore,

$$\sigma(i) - \sigma(i-k) - \sigma(k) = \phi_{i-k}(i) \geqslant 0.$$

By using this in (36), together with (35), the proof is completed. ∎
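As a quick numerical sanity check (our addition, not part of the paper), the snippet below verifies inequality (35) with $m_\sigma = 2$ for the representative choice $\sigma(x) = x^{3/2} \in \mathcal{E}$ over a finite grid of integers:

```python
def sigma(x, p=1.5):
    # sigma(x) = x**p with p in (1, 2], which belongs to the class E.
    return x ** p

# Largest violation of (35) with m_sigma = 2 over the grid
# 0 <= i <= 60, 1 <= j <= 60; a violation would make `worst` positive.
worst = max(
    (i + j) * (sigma(i + j) - sigma(i) - sigma(j))
    - 2 * (i * sigma(j) + j * sigma(i))
    for i in range(0, 61)
    for j in range(1, 61)
)
print(worst)  # non-positive: (35) holds on the grid
```

The grid check is of course no proof, but it makes the role of the constant $m_\sigma = 2$ concrete.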

We also have the following important property:

Lemma 3.9.

Let $\sigma \in \mathcal{E}$. Then, there are constants $\eta > 0$ and $M_0 > 0$ such that, for any integers $p \geqslant M_0$ and $k \in [1, p-1]$,

$$\sigma(p) - \sigma(p-k) - \sigma(k) \geqslant \eta\, \frac{\sigma(p-1)}{p-1}.$$

Proof.

Define

$$\phi_p(k) := \sigma(p-k) + \sigma(k), \qquad k \in [1, p].$$

Then $\phi_p(k) = \phi_p(p-k)$ and also, if $1 \leqslant k \leqslant p/2$, then $p/2 \leqslant p-k \leqslant p-1$. Hence, by the convexity of $\sigma$, we have that $\sigma'(p-k) \geqslant \sigma'(k)$, and therefore,

$$\phi_p'(k) = -\sigma'(p-k) + \sigma'(k) \leqslant 0, \qquad \phi_p'(p-k) \geqslant 0.$$

This implies that $\max_{k \in [1,p-1]} \phi_p(k) = \phi_p(1) = \phi_p(p-1)$. Thus,

$$\min_{k \in [1,p-1]} \bigl(\sigma(p) - \sigma(p-k) - \sigma(k)\bigr) = \sigma(p) - \sigma(p-1) - \sigma(1). \tag{37}$$

By the Lagrange theorem and the convexity of $\sigma$ we can infer

$$\sigma(p) - \sigma(p-1) \geqslant \sigma'(p-1).$$

On the other hand, again by the Lagrange theorem and the convexity of $\sigma$,

$$\sigma(p-1) = \sigma(p-1) - \sigma(0) \leqslant \sigma'(p-1)\,(p-1),$$

and therefore,

$$\sigma(p) - \sigma(p-1) - \sigma(1) \geqslant \frac{\sigma(p-1)}{p-1} - \sigma(1) = \frac{\sigma(p-1)}{p-1}\left(1 - \sigma(1)\,\frac{p-1}{\sigma(p-1)}\right).$$

Fix $\eta \in (0,1)$. Since $\sigma \in \mathcal{E}$, there is $M_0 > 0$ such that

$$p \geqslant M_0 \implies \sigma(1)\,\frac{p-1}{\sigma(p-1)} \leqslant 1 - \eta,$$

so that

$$\sigma(p) - \sigma(p-1) - \sigma(1) \geqslant \eta\, \frac{\sigma(p-1)}{p-1}.$$

By using this inequality in (37) the lemma is proved. ∎
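For a concrete illustration (our addition): with $\sigma(x) = x^{3/2}$ one has $\sigma(1)(p-1)/\sigma(p-1) = (p-1)^{-1/2}$, so the choice $\eta = 1/2$ works with $M_0 = 5$. The snippet below checks the lemma's inequality on a finite range:

```python
def sigma(x):
    # sigma(x) = x**1.5 belongs to E (see the example after (34)).
    return x ** 1.5

eta, M0 = 0.5, 5  # sigma(1)*(p-1)/sigma(p-1) = (p-1)**-0.5 <= 1/2 once p >= 5

# Smallest slack in sigma(p) - sigma(p-k) - sigma(k) >= eta*sigma(p-1)/(p-1),
# over M0 <= p <= 80 and 1 <= k <= p-1; the lemma predicts it stays positive.
min_gap = min(
    sigma(p) - sigma(p - k) - sigma(k) - eta * sigma(p - 1) / (p - 1)
    for p in range(M0, 81)
    for k in range(1, p)
)
print(min_gap > 0)
```

As in the proof, the minimum over $k$ is attained at the endpoints $k = 1$ and $k = p-1$.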

4. The truncated system

We will consider $N$-truncated systems obtained from (7) by assuming that no clusters of size bigger than $N$ exist initially, nor can they be formed by the time evolution. This corresponds to modifying the rate coefficients by setting them to zero whenever the reaction in question involves clusters larger than $N$. An equivalent and perhaps more transparent way of expressing this is to modify the sums in $Q_{j,i}$ appropriately and to consider the finite-dimensional system with $i \in \{0, 1, \ldots, N\}$. It is not hard to see that, in the case of isolated systems, the following system fulfils the condition above, and so we will call it the $N$-truncated discrete generalized exchange-driven system ($N$-DGED for short):

$$\begin{cases}
\dot{c}_0 = \displaystyle\sum_{j=1}^{2} Q^N_{j,0}(c), & \\[1ex]
\dot{c}_i = \displaystyle\sum_{j=1}^{4} Q^N_{j,i}(c), & i \in \{1, \ldots, N-1\}, \\[1ex]
\dot{c}_N = \displaystyle\sum_{j=3}^{4} Q^N_{j,N}(c), &
\end{cases} \tag{38}$$

where

$$Q^N_{1,i}(c) := \sum_{k=1}^{N-i} \sum_{j=0}^{N-k} a(i+k, j; k)\, c_{i+k}\, c_j, \tag{39}$$

$$Q^N_{2,i}(c) := -\sum_{k=1}^{N-i} \sum_{j=k}^{N} a(j, i; k)\, c_j\, c_i, \tag{40}$$

$$Q^N_{3,i}(c) := \sum_{k=1}^{i} \sum_{j=k}^{N} a(j, i-k; k)\, c_j\, c_{i-k}, \tag{41}$$

$$Q^N_{4,i}(c) := -\sum_{k=1}^{i} \sum_{j=0}^{N-k} a(i, j; k)\, c_j\, c_i, \tag{42}$$

and

$$c_i^N(0) = c_{0i} \geqslant 0, \qquad i \in \{0, \ldots, N\}. \tag{43}$$

In the non-isolated case the $N$-DGED system is the same as (38) except for the $c_0$-equation, which is replaced by $\dot{c}_0 = 0$, or, equivalently, we define, for all $j$ and $N$,

$$Q^N_{j,0} = 0. \tag{44}$$

Being an ordinary differential equation in $\mathbb{R}^{N+1}$ with a polynomial vector field, the existence and uniqueness of solutions to the initial value problem are ensured by the standard Picard–Lindelöf theorem (see, e.g., [16, Theorem I-1-4]). It is also easy to conclude by standard arguments (see, e.g., the proof of Theorem III-4-5 in [16]) that $\mathbb{R}^{N+1}_+$ is invariant for the local flow associated with (38), which means that nonnegative initial data have unique nonnegative local solutions. The following result is important for the remaining analysis.

Proposition 4.1.

Let $c^N = (c_i^N)_{0 \leqslant i \leqslant N}$ be any solution of (38)–(42) in the isolated case. Then, for every sequence $(g_i)$ we have

$$\frac{d}{dt} \sum_{i=0}^{N} g_i\, c_i^N = \sum_{k=1}^{N-1} \sum_{i=k}^{N} \sum_{j=0}^{N-k} \bigl(g_{j+k} + g_{i-k} - g_j - g_i\bigr)\, a(i,j;k)\, c_i^N c_j^N. \tag{45}$$

Proof. Let us first take the summation of the quantity $g_i\, \dot{c}_i^N$ from $i = 0$ to $i = N$. Then, from (38), it can be inferred that

$$\begin{aligned}
\frac{d}{dt} \sum_{i=0}^{N} g_i\, c_i^N
&= g_0 \sum_{k=1}^{N} \sum_{j=0}^{N-k} a(k,j;k)\, c_k^N c_j^N - g_0 \sum_{k=1}^{N} \sum_{j=k}^{N} a(j,0;k)\, c_j^N c_0^N \\
&\quad + \sum_{i=1}^{N-1} \sum_{k=1}^{N-i} \sum_{j=0}^{N-k} g_i\, a(i+k,j;k)\, c_{i+k}^N c_j^N - \sum_{i=1}^{N-1} \sum_{k=1}^{N-i} \sum_{j=k}^{N} g_i\, a(j,i;k)\, c_j^N c_i^N \\
&\quad + \sum_{i=1}^{N-1} \sum_{k=1}^{i} \sum_{j=k}^{N} g_i\, a(j,i-k;k)\, c_j^N c_{i-k}^N - \sum_{i=1}^{N-1} \sum_{k=1}^{i} \sum_{j=0}^{N-k} g_i\, a(i,j;k)\, c_i^N c_j^N \\
&\quad + g_N \sum_{k=1}^{N} \sum_{j=k}^{N} a(j,N-k;k)\, c_j^N c_{N-k}^N - g_N \sum_{k=1}^{N} \sum_{j=0}^{N-k} a(N,j;k)\, c_j^N c_N^N.
\end{aligned} \tag{46}$$

Next, we simplify some terms on the right-hand side of (46), starting with the third sum. Changing the order of summation of the sums in $i$ and $k$ and renaming $i+k \mapsto i$, it can be rewritten as

$$\begin{aligned}
\sum_{i=1}^{N-1} \sum_{k=1}^{N-i} \sum_{j=0}^{N-k} g_i\, a(i+k,j;k)\, c_{i+k}^N c_j^N
&= \sum_{k=1}^{N-1} \sum_{i=1}^{N-k} \sum_{j=0}^{N-k} g_i\, a(i+k,j;k)\, c_{i+k}^N c_j^N \\
&= \sum_{k=1}^{N-1} \sum_{i=k+1}^{N} \sum_{j=0}^{N-k} g_{i-k}\, a(i,j;k)\, c_i^N c_j^N \\
&= \sum_{k=1}^{N-1} \sum_{i=k}^{N} \sum_{j=0}^{N-k} g_{i-k}\, a(i,j;k)\, c_i^N c_j^N - g_0 \sum_{k=1}^{N-1} \sum_{j=0}^{N-k} a(k,j;k)\, c_k^N c_j^N.
\end{aligned} \tag{47}$$

In the same vein, the fourth sum in (46) can be rearranged by changing the order of the sums twice, first exchanging the $i$ and $k$ sums and then the $i$ and $j$ sums, and finally swapping the notation $i \leftrightarrow j$, giving

$$\sum_{i=1}^{N-1} \sum_{k=1}^{N-i} \sum_{j=k}^{N} g_i\, a(j,i;k)\, c_j^N c_i^N = \sum_{k=1}^{N-1} \sum_{i=k}^{N} \sum_{j=0}^{N-k} g_j\, a(i,j;k)\, c_j^N c_i^N - g_0 \sum_{k=1}^{N-1} \sum_{i=k}^{N} a(i,0;k)\, c_0^N c_i^N. \tag{48}$$

Similarly, the fifth sum can be rearranged by exchanging the $i$ and $k$ sums, then introducing $i' = i-k$, changing again the order of the $i'$ and $j$ sums, and finally renaming $j \mapsto i$ and $i' \mapsto j$. In the end we get

$$\sum_{i=1}^{N-1} \sum_{k=1}^{i} \sum_{j=k}^{N} g_i\, a(j,i-k;k)\, c_j^N c_{i-k}^N = \sum_{k=1}^{N-1} \sum_{i=k}^{N} \sum_{j=0}^{N-k} g_{j+k}\, a(i,j;k)\, c_i^N c_j^N - g_N \sum_{k=1}^{N-1} \sum_{i=k}^{N} a(i,N-k;k)\, c_i^N c_{N-k}^N. \tag{49}$$

Furthermore, the sixth sum can be written in simplified form by changing the order of the $i$ and $k$ sums:

$$\sum_{i=1}^{N-1} \sum_{k=1}^{i} \sum_{j=0}^{N-k} g_i\, a(i,j;k)\, c_i^N c_j^N = \sum_{k=1}^{N-1} \sum_{i=k}^{N} \sum_{j=0}^{N-k} g_i\, a(i,j;k)\, c_i^N c_j^N - \sum_{k=1}^{N-1} \sum_{j=0}^{N-k} g_N\, a(N,j;k)\, c_N^N c_j^N. \tag{50}$$

Let us now incorporate the results obtained in equations (47)–(50) into equation (46). The revised expression after these substitutions is

$$\begin{aligned}
\frac{d}{dt} \sum_{i=0}^{N} g_i\, c_i^N
&= \sum_{k=1}^{N-1} \sum_{i=k}^{N} \sum_{j=0}^{N-k} \bigl[g_{i-k} + g_{j+k} - g_i - g_j\bigr]\, a(i,j;k)\, c_i^N c_j^N \\
&\quad + g_0 \sum_{k=1}^{N} \sum_{j=0}^{N-k} a(k,j;k)\, c_k^N c_j^N - g_0 \sum_{k=1}^{N} \sum_{j=k}^{N} a(j,0;k)\, c_j^N c_0^N \\
&\quad - g_0 \sum_{k=1}^{N-1} \sum_{j=0}^{N-k} a(k,j;k)\, c_k^N c_j^N + g_0 \sum_{k=1}^{N-1} \sum_{i=k}^{N} a(i,0;k)\, c_0^N c_i^N \\
&\quad - \sum_{k=1}^{N-1} \sum_{i=k}^{N} g_N\, a(i,N-k;k)\, c_i^N c_{N-k}^N + \sum_{k=1}^{N-1} \sum_{j=0}^{N-k} g_N\, a(N,j;k)\, c_N^N c_j^N \\
&\quad + \sum_{k=1}^{N} \sum_{j=k}^{N} g_N\, a(j,N-k;k)\, c_j^N c_{N-k}^N - \sum_{k=1}^{N} \sum_{j=0}^{N-k} g_N\, a(N,j;k)\, c_j^N c_N^N.
\end{aligned} \tag{51}$$

Furthermore, we can rewrite (51) in the following concise form:

$$\begin{aligned}
\frac{d}{dt} \sum_{i=0}^{N} g_i\, c_i^N
&= \sum_{k=1}^{N-1} \sum_{i=k}^{N} \sum_{j=0}^{N-k} \bigl[g_{i-k} + g_{j+k} - g_i - g_j\bigr]\, a(i,j;k)\, c_i^N c_j^N \\
&\quad + g_0\, a(N,0;N)\, c_N^N c_0^N - g_0\, a(N,0;N)\, c_N^N c_0^N \\
&\quad + g_N\, a(N,0;N)\, c_N^N c_0^N - g_N\, a(N,0;N)\, c_0^N c_N^N \\
&= \sum_{k=1}^{N-1} \sum_{i=k}^{N} \sum_{j=0}^{N-k} \bigl[g_{i-k} + g_{j+k} - g_i - g_j\bigr]\, a(i,j;k)\, c_i^N c_j^N.
\end{aligned}$$

This completes the proof of Proposition 4.1. ∎

It is easy to observe that the right-hand side of (45) is identically zero when $g_i = i$ and when $g_i = 1$, thus proving the following two conservation laws:

Corollary 4.2.

All nonnegative solutions to (38) in the isolated case conserve the total number of initial clusters $\|c(0)\|_{\ell^1}$ and the initial mass $\|(i\, c_i(0))\|_{\ell^1}$, when we consider a solution $c^N$ of (38) as an $\ell^1$-valued function by defining $c_j^N \equiv 0$ for all $j \geqslant N+1$.

Remark 4.3.

A consequence of the previous corollary is that any nonnegative solution of the truncated system (38) is globally defined in $\mathbb{R}_0^+$.
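The two conservation laws can be observed directly on the vector field of the truncated system. The sketch below implements (38)–(42) for a small $N$ with a hypothetical constant kernel $a(i,j;k) \equiv 1$ (our choice, purely for illustration) and checks that the right-hand side annihilates the weights $g_i = 1$ and $g_i = i$:

```python
import random

N = 6

def a(i, j, k):
    # Hypothetical constant kernel; any nonnegative choice works for
    # the two conservation identities checked below.
    return 1.0

def rhs(c):
    """Right-hand side of the N-truncated system (38)-(42), isolated case."""
    def Q1(i):
        return sum(a(i + k, j, k) * c[i + k] * c[j]
                   for k in range(1, N - i + 1) for j in range(0, N - k + 1))
    def Q2(i):
        return -sum(a(j, i, k) * c[j] * c[i]
                    for k in range(1, N - i + 1) for j in range(k, N + 1))
    def Q3(i):
        return sum(a(j, i - k, k) * c[j] * c[i - k]
                   for k in range(1, i + 1) for j in range(k, N + 1))
    def Q4(i):
        return -sum(a(i, j, k) * c[j] * c[i]
                    for k in range(1, i + 1) for j in range(0, N - k + 1))
    dc = [0.0] * (N + 1)
    dc[0] = Q1(0) + Q2(0)
    for i in range(1, N):
        dc[i] = Q1(i) + Q2(i) + Q3(i) + Q4(i)
    dc[N] = Q3(N) + Q4(N)
    return dc

random.seed(0)
c = [random.random() for _ in range(N + 1)]
dc = rhs(c)
number_rate = sum(dc)                             # g_i = 1: total cluster number
mass_rate = sum(i * d for i, d in enumerate(dc))  # g_i = i: total mass
```

Both rates vanish up to floating-point error, mirroring the cancellation in (45).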

In the non-isolated case there is a similar result, whose proof proceeds in the same way and will be omitted.

Proposition 4.4.

Let $c^N = (c_i^N)_{0 \leqslant i \leqslant N}$ be any solution of (38)–(43) in the non-isolated case. Then, for every sequence $(g_i)$ we have

$$\begin{aligned}
\frac{d}{dt} \sum_{i=0}^{N} g_i\, c_i^N
&= \sum_{k=1}^{N-1} \sum_{i=k}^{N} \sum_{j=0}^{N-k} \bigl(g_{j+k} + g_{i-k} - g_j - g_i\bigr)\, a(i,j;k)\, c_i^N c_j^N \\
&\quad - g_0 \left( \sum_{k=1}^{N} \sum_{j=0}^{N-k} a(k,j;k)\, c_k^N c_j^N - \sum_{k=1}^{N} \sum_{i=k}^{N} a(i,0;k)\, c_i^N c_0^N \right).
\end{aligned} \tag{52}$$

Remark 4.5.

Taking $g_i = i$ in (52) it is easily seen that solutions of the DGED system in the non-isolated case conserve the initial mass and, as a consequence, any nonnegative solution of the truncated system (38) is globally defined in $\mathbb{R}_0^+$ also in the non-isolated case. However, taking $g_i = 1$ in (52) we immediately conclude that the total number of initial clusters is no longer conserved in this case.
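Remark 4.5 can likewise be checked by hand on a tiny instance (our addition): with $N = 2$, a hypothetical kernel $a \equiv 1$, and $c = (1, 2, 3)$, replacing the $c_0$-equation of (38) by $\dot{c}_0 = 0$ gives $\dot{c} = (0, -2, 1)$, so the mass rate $1 \cdot (-2) + 2 \cdot 1$ vanishes while the cluster-number rate equals $-1 \neq 0$:

```python
N = 2
c = [1.0, 2.0, 3.0]

def a(i, j, k):
    # Hypothetical constant kernel, for illustration only.
    return 1.0

def Q1(i):
    return sum(a(i + k, j, k) * c[i + k] * c[j]
               for k in range(1, N - i + 1) for j in range(0, N - k + 1))
def Q2(i):
    return -sum(a(j, i, k) * c[j] * c[i]
                for k in range(1, N - i + 1) for j in range(k, N + 1))
def Q3(i):
    return sum(a(j, i - k, k) * c[j] * c[i - k]
               for k in range(1, i + 1) for j in range(k, N + 1))
def Q4(i):
    return -sum(a(i, j, k) * c[j] * c[i]
                for k in range(1, i + 1) for j in range(0, N - k + 1))

# Non-isolated case: the c_0 equation of (38) is replaced by dc_0/dt = 0.
dc = [0.0, Q1(1) + Q2(1) + Q3(1) + Q4(1), Q3(2) + Q4(2)]
mass_rate = sum(i * d for i, d in enumerate(dc))  # conserved
number_rate = sum(dc)                             # not conserved
```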

The following two lemmas about properties of the solutions to the truncated system, in which we assume the conditions (27)-(28), will be essential for the proof of the existence theorem in the next section. They use, in a crucial way, the inequalities proved in lemmas 3.8 and 3.9.

Lemma 4.6.

Let $c^N = (c_i^N)_{0 \leqslant i \leqslant N}$ be a solution to (38)–(43) and $\sigma \in \mathcal{E} \cup \mathcal{E}_1$. Assume $\sum_{i=0}^{N} \sigma(i)\, c_{0i}^N$ is finite. Then, for each $T \geqslant 0$, we have for all $t \in [0,T]$,

$$\sum_{i=0}^{N} \sigma(i)\, c_i^N(t) \leqslant \gamma_T \sum_{i=0}^{N} \sigma(i)\, c_{0i}^N, \tag{53}$$

and, for integers $M, N$ such that $M_0 \leqslant M \leqslant N$, with $M_0$ as in Lemma 3.9,

$$0 \leqslant \int_0^T \sum_{(p,k) \in \mathcal{J}_0^N} \frac{\sigma(p-1)}{p-1}\, a(p,0;k)\, c_0^N(t)\, c_p^N(t)\, dt \leqslant \gamma_T \sum_{i=0}^{N} \sigma(i)\, c_{0i}^N, \tag{54}$$

where

$$\mathcal{J}_0^N := \bigl\{(p,k) \in (\mathbb{N}^+)^2 : M_0 \leqslant p \leqslant N,\ 1 \leqslant k \leqslant p-1\bigr\},$$

and $\gamma_T > 0$ is a constant only depending on $C$, $m_\sigma$, $\mathcal{P}_1(0)$, $\mathcal{Q}$, $T$.

Proof.

For $1 \leqslant i \leqslant N$, it can be deduced from Proposition 4.1, by setting $g_i := \sigma(i)$, that

$$\frac{d}{dt} \sum_{i=0}^{N} \sigma(i)\, c_i^N(t) = \sum_{k=1}^{N-1} \sum_{i=k}^{N} \sum_{j=0}^{N-k} \tilde{\sigma}(i,j,k)\, a(i,j;k)\, c_i^N(t)\, c_j^N(t), \tag{55}$$

where

$$\tilde{\sigma}(i,j,k) := \sigma(j+k) + \sigma(i-k) - \sigma(j) - \sigma(i). \tag{56}$$

Since there are no upper bounds on the rate coefficients of the type $a(p,0;k)$, the terms involving this type of coefficients have to be tackled differently from the others.

For $j \in \mathbb{N}^+$, by (28), (56) and Lemma 3.8, we obtain

$$\tilde{\sigma}(i,j,k)\, a(i,j;k) \leqslant C\, (i-k+1)(j+k)\, q_{i,k}\, \tilde{\sigma}(i,j,k) \leqslant C\, m_\sigma \bigl(j\,\sigma(k) + k\,\sigma(j)\bigr)(i-k+1)\, q_{i,k}, \tag{57}$$

so, by applying (57) to the terms not involving the rate coefficients of the type $a(p,0;k)$, we have, for $N \geqslant 2$,

$$\sum_{k=1}^{N-1} \sum_{i=k}^{N} \sum_{j=1}^{N-k} \tilde{\sigma}(i,j,k)\, a(i,j;k)\, c_i^N(t)\, c_j^N(t) \tag{58}$$

$$\leqslant C\, m_\sigma \Biggl( \sum_{k=1}^{N-1} \sum_{i=k}^{N} \sum_{j=1}^{N-k} j\,\sigma(k)\,(i-k+1)\, q_{i,k}\, c_i^N(t)\, c_j^N(t) + \sum_{k=1}^{N-1} \sum_{i=k}^{N} \sum_{j=1}^{N-k} k\,\sigma(j)\,(i-k+1)\, q_{i,k}\, c_i^N(t)\, c_j^N(t) \Biggr) \tag{59}$$

$$=: C\, m_\sigma\, (S_1 + S_2). \tag{60}$$

We now estimate $S_1$ and $S_2$ separately. By using the convexity of $\sigma$, which entails that $k \mapsto \sigma(k)/k$ is increasing, hypothesis (27), and Corollary 4.2, we obtain

$$S_1 = \sum_{k=1}^{N-1} \sum_{i=k}^{N} \sum_{j=1}^{N-k} \frac{\sigma(k)}{k}\, \frac{i}{\sigma(i)} \left( \frac{k(i-k+1)}{i}\, q_{i,k} \right) \bigl(j\, c_j^N(t)\bigr) \bigl(\sigma(i)\, c_i^N(t)\bigr) \leqslant \mathcal{P}_1(0)\, \mathcal{Q} \sum_{i=0}^{N} \sigma(i)\, c_i^N(t). \tag{61}$$

Also,

$$S_2 = \sum_{k=1}^{N-1} \sum_{i=k}^{N} \sum_{j=1}^{N-k} \left( \frac{k(i-k+1)}{i}\, q_{i,k} \right) \bigl(i\, c_i^N(t)\bigr) \bigl(\sigma(j)\, c_j^N(t)\bigr) \leqslant \mathcal{P}_1(0)\, \mathcal{Q} \sum_{j=0}^{N} \sigma(j)\, c_j^N(t). \tag{62}$$

By applying (58)–(62) to (55) we obtain

$$\frac{d}{dt} \sum_{i=0}^{N} \sigma(i)\, c_i^N(t) \leqslant K \sum_{i=0}^{N} \sigma(i)\, c_i^N(t) + \sum_{k=1}^{N-1} \sum_{i=k}^{N} \tilde{\sigma}(i,0,k)\, a(i,0;k)\, c_0^N(t)\, c_i^N(t), \tag{63}$$

where $K := 2\, C\, \mathcal{Q}\, m_\sigma\, \mathcal{P}_1(0)$. But, according to the proof of Lemma 3.8, for $1 \leqslant k \leqslant i$,

$$\tilde{\sigma}(i,0,k) = -\bigl(\sigma(i) - \sigma(i-k) - \sigma(k)\bigr) \leqslant 0,$$

so from (63) we obtain

$$\frac{d}{dt} \sum_{i=0}^{N} \sigma(i)\, c_i^N(t) \leqslant K \sum_{i=0}^{N} \sigma(i)\, c_i^N(t), \tag{64}$$

and (53) follows by Gronwall's inequality.
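For completeness (our addition), the Gronwall step here is the standard differential form: writing $u_N(t) := \sum_{i=0}^{N} \sigma(i)\, c_i^N(t)$, inequality (64) reads $u_N' \leqslant K\, u_N$, hence

$$u_N(t) \leqslant u_N(0)\, e^{Kt} \leqslant e^{KT}\, u_N(0), \qquad t \in [0,T],$$

so one admissible choice in (53) is $\gamma_T = e^{KT}$, which depends only on $C$, $m_\sigma$, $\mathcal{P}_1(0)$, $\mathcal{Q}$ and $T$ through $K = 2\, C\, \mathcal{Q}\, m_\sigma\, \mathcal{P}_1(0)$.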

In order to prove (54), we integrate both sides of (63) and take into account (53) and the sign of each term, thus getting

$$\int_0^T \Biggl| \sum_{k=1}^{N-1} \sum_{i=k}^{N} \tilde{\sigma}(i,0,k)\, a(i,0;k)\, c_0^N(t)\, c_i^N(t) \Biggr|\, dt \leqslant \sum_{i=0}^{N} \sigma(i)\, c_i^N(0) + K \int_0^T \sum_{i=0}^{N} \sigma(i)\, c_i^N(t)\, dt \leqslant \bar{\gamma}_T \sum_{i=0}^{N} \sigma(i)\, c_i^N(0), \tag{65}$$

where $\bar{\gamma}_T := 1 + K\, T \exp(KT)$. Remarking that $\tilde{\sigma}(i,0,i) = 0$, we can write, for $N \geqslant 2$,

$$\int_0^T \Biggl| \sum_{k=1}^{N-1} \sum_{i=k}^{N} \tilde{\sigma}(i,0,k)\, a(i,0;k)\, c_0^N(t)\, c_i^N(t) \Biggr|\, dt = \int_0^T \Biggl| \sum_{i=1}^{N} \sum_{k=1}^{i-1} \tilde{\sigma}(i,0,k)\, a(i,0;k)\, c_0^N(t)\, c_i^N(t) \Biggr|\, dt. \tag{66}$$

Since $-\tilde{\sigma}(i,0,k) = |\tilde{\sigma}(i,0,k)| = \sigma(i) - \sigma(i-k) - \sigma(k)$, the conclusion is obtained from (65) and (66) by application of Lemma 3.9 and redefining the constant $\gamma_T$. ∎

Lemma 4.7.

Let $c^N = (c_i^N)_{0 \leqslant i \leqslant N}$ be a solution to (38) and $T \in (0, +\infty)$. There exists a positive constant $\Gamma_{2,i}(T)$ depending only on $C$, $\mathcal{Q}$, $\|c_0\|$, $i$ and $T$ such that, for each $i \in \mathbb{N}$,

$$\left| \frac{d c_i^N}{dt} \right|_{L^1(0,T)} \leqslant \Gamma_{2,i}(T).$$

Proof.

We present the proof for the isolated case and will comment on the non-isolated case at the end.

By Corollary 4.2, the following expressions can be deduced:

$$\sum_{i=0}^{N} i\, c_i^N(t) = \sum_{i=0}^{N} i\, c_{0i}^N \leqslant \sum_{i=0}^{\infty} i\, c_{0i}^N = \|(i\, c_{i0})\|_{\ell^1} \leqslant \|c_0\|, \tag{67}$$

and then

$$\sum_{i=0}^{N} c_i^N(t) \leqslant \|c_0\|, \qquad \sum_{i=0}^{N} i\, c_i^N(t) \leqslant \|c_0\|. \tag{68}$$

Also recall that the numbers 𝑞 𝑖 , 𝑘 satisfy estimate (29).

Now, consider system (38). Let $\hat{Q}^N_{j,i}$, $j = 1, 2, 3, 4$, be the sums (39)-(42) with the terms of the type $a(p,0;k)\, c_p^N c_0^N$ removed. Hence, considering that $\hat{Q}^N_{2,0} = \hat{Q}^N_{3,1} = 0$, we can write

$$Q^N_{1,i}(c^N) = \hat{Q}^N_{1,i}(c^N) + \sum_{k=1}^{N-i} a(i+k,0;k)\, c_{i+k}^N c_0^N, \qquad i \in \{0, \ldots, N-1\}, \tag{69}$$

$$Q^N_{2,i}(c^N) = (1 - \delta_{i,0})\, \hat{Q}^N_{2,i}(c^N) - \delta_{i,0} \sum_{k=1}^{N} \sum_{j=k}^{N} a(j,0;k)\, c_j^N c_0^N, \qquad i \in \{0, \ldots, N-1\}, \tag{70}$$

$$Q^N_{3,i}(c^N) = (1 - \delta_{i,1})\, \hat{Q}^N_{3,i}(c^N) + \sum_{j=i}^{N} a(j,0;i)\, c_j^N c_0^N, \qquad i \in \{1, \ldots, N\}, \tag{71}$$

$$Q^N_{4,i}(c^N) = \hat{Q}^N_{4,i}(c^N) - \sum_{k=1}^{i} a(i,0;k)\, c_0^N c_i^N, \qquad i \in \{1, \ldots, N\}, \tag{72}$$

where $\delta_{i,\ell}$ is the Kronecker symbol. Therefore, after some rearrangements, we can rewrite the truncated system (38) in the following form:

$$\dot{c}_0^N = \hat{Q}^N_{1,0}(c^N) - \sum_{k=1}^{N} \sum_{j=k+1}^{N} a(j,0;k)\, c_j^N c_0^N, \tag{73}$$

$$\dot{c}_i^N = \sum_{j=1}^{4} \hat{Q}^N_{j,i}(c) + 2 \sum_{j=i+1}^{N} a(j,0;i)\, c_j^N c_0^N - \sum_{k=1}^{i-1} a(i,0;k)\, c_i^N c_0^N, \qquad \text{for } i \in \{1, \ldots, N-1\}, \tag{74}$$

$$\dot{c}_N^N = \sum_{j=3}^{4} \hat{Q}^N_{j,N}(c) - \sum_{k=1}^{N-1} a(N,0;k)\, c_N^N c_0^N. \tag{75}$$

Since no upper bounds are imposed on the fragmentation-type coefficients 𝑎 ​ ( 𝑝 , 0 ; 𝑘 ) , the terms involving these must be estimated in a separate way, as in [18].

We first proceed to the estimation of the $\hat{Q}^N_{j,i}$ terms. We remark that we use in this work the convention that a sum whose lower index is greater than its upper index is zero. Taking into account (39), we have, for $i = 0, \ldots, N-1$,

$$\begin{aligned}
\hat{Q}^N_{1,i}(c^N)
&\leqslant C \sum_{k=1}^{N-i} \sum_{j=1}^{N-k} (i+1)(j+k)\, q_{i+k,k}\, c_{i+k}^N c_j^N \\
&\leqslant C\, \mathcal{Q} \sum_{k=1}^{N-i} \sum_{j=1}^{N-k} \bigl((i+k)\, c_{i+k}^N\bigr)\bigl(j\, c_j^N\bigr) + C \sum_{k=1}^{N-i} \sum_{j=1}^{N-k} (i+1)\, k\, q_{i+k,k}\, c_{i+k}^N c_j^N \\
&\leqslant C\, \mathcal{Q}\, \|(i\, c_{i0})\|_{\ell^1}^2 + C\, \mathcal{Q}\, (i+1)\, \|(i\, c_{i0})\|_{\ell^1}\, \|c_0\|_{\ell^1} \\
&\leqslant 2\, C\, \mathcal{Q}\, (i+1)\, \|c_0\|^2,
\end{aligned} \tag{76}$$

where we have used (29) which, in this case, implies that $k\, q_{i+k,k} \leqslant \mathcal{Q}\, (i+k)$.

By (40) we have, for $i = 1, \ldots, N-1$,

$$\begin{aligned}
|\hat{Q}^N_{2,i}(c^N)|
&\leqslant C \sum_{k=1}^{N-i} \sum_{j=k}^{N} (j-k+1)(i+k)\, q_{j,k}\, c_j^N c_i^N \\
&\leqslant C \sum_{j=1}^{N} \sum_{k=1}^{j} (j-k+1)\, q_{j,k}\, c_j^N\, \bigl(i\, c_i^N\bigr) + C \sum_{j=1}^{N} \sum_{k=1}^{j} (j-k+1)\, k\, q_{j,k}\, c_j^N c_i^N \\
&\leqslant C\, \mathcal{Q}\, \|(i\, c_{i0})\|_{\ell^1}^2 + C\, \mathcal{Q}\, \|(i\, c_{i0})\|_{\ell^1}\, \|c_0\|_{\ell^1} \leqslant 2\, C\, \mathcal{Q}\, \|c_0\|^2,
\end{aligned} \tag{77}$$

where again we have used (27), which this time implies that $(j-k+1)\, k\, q_{j,k} \leqslant \mathcal{Q}\, j$.

By (41), for $i = 2, \ldots, N$,

$$\hat{Q}^N_{3,i}(c^N) \leqslant C \sum_{k=1}^{i-1} \sum_{j=k}^{N} (j-k+1)\, i\, q_{j,k}\, c_j^N c_{i-k}^N \leqslant C\, \mathcal{Q}\, i\, \|(i\, c_{i0})\|_{\ell^1}\, \|(c_{i0})\|_{\ell^1} \leqslant C\, \mathcal{Q}\, i\, \|c_0\|^2. \tag{78}$$

Finally, from (42), for each $i = 1, \ldots, N$ we obtain

$$\begin{aligned}
|\hat{Q}^N_{4,i}(c^N)|
&\leqslant C \sum_{k=1}^{i} \sum_{j=1}^{N-k} (i-k+1)(j+k)\, q_{i,k}\, c_j^N c_i^N \\
&\leqslant C \sum_{k=1}^{i} \sum_{j=1}^{N-k} (i-k+1)\, q_{i,k}\, \bigl(j\, c_j^N\bigr)\, c_i^N + C \sum_{k=1}^{i} \sum_{j=1}^{N-k} (i-k+1)\, q_{i,k}\, k\, c_j^N c_i^N \\
&\leqslant 2\, C\, \mathcal{Q}\, \|c_0\|^2,
\end{aligned} \tag{79}$$

where, once more, we have used (27) in a way similar to the cases above.

To proceed to the estimates on the other terms, in the case $i = 1, \ldots, N-1$, we first integrate (74) to get

$$\begin{aligned}
c_i^N(T) - c_i^N(0)
&= \int_0^T \sum_{j=1}^{4} \hat{Q}^N_{j,i}(c^N(t))\, dt \\
&\quad + 2 \int_0^T \sum_{j=i+1}^{N} a(j,0;i)\, c_j^N(t)\, c_0^N(t)\, dt - \int_0^T \sum_{k=1}^{i} a(i,0;k)\, c_i^N(t)\, c_0^N(t)\, dt,
\end{aligned}$$
where we take $\hat{Q}^N_{3,1} \equiv 0$. Considering the signs of the $\hat{Q}^N_{j,i}$, we can infer that

$$0 \leqslant 2 \int_0^T \sum_{j=i+1}^{N} a(j,0;i)\, c_j^N(t)\, c_0^N(t)\, dt \leqslant c_i^N(T) + \int_0^T \sum_{j \in \{2,4\}} |\hat{Q}^N_{j,i}(c^N(t))|\, dt + \int_0^T \sum_{k=1}^{i} a(i,0;k)\, c_i^N(t)\, c_0^N(t)\, dt. \tag{80}$$

On the other hand, calling $\alpha(i) := \sum_{k=1}^{i} a(i,0;k)$, we obtain, for the last term in (80),

$$\int_0^T \sum_{k=1}^{i} a(i,0;k)\, c_i^N(t)\, c_0^N(t)\, dt \leqslant \alpha(i)\, \|c_0\|^2\, T. \tag{81}$$

Therefore, taking into account (68), (77), (79), (80) and (81), we obtain

$$\int_0^T \sum_{j=i+1}^{N} a(j,0;i)\, c_j^N(t)\, c_0^N(t)\, dt \leqslant \gamma, \tag{82}$$

for $\gamma := \tfrac{1}{2}\, \|(c_{i0})\|_{\ell^1} + 2\, C\, \mathcal{Q}\, \|(i\, c_{i0})\|_{\ell^1}^2\, T + \tfrac{\alpha(i)}{2}\, \|(c_{i0})\|_{\ell^1}^2\, T$. Then, using estimates (81) and (82), together with (76)-(79), in equation (74), we conclude that, for $i = 1, \ldots, N-1$,

$$|\dot{c}_i^N|_{L^1(0,T)} \leqslant \left( 9\, C\, \mathcal{Q}\, (i+1) + \tfrac{3}{2}\, \alpha(i) \right) \|c_0\|^2\, T + \tfrac{1}{2}\, \|c_0\|,$$

which proves our claim for $i = 1, \ldots, N-1$. We omit the proof for the cases $i = 0$ and $i = N$, since they are easier and follow exactly the same lines. For $i \geqslant N+1$ it is trivial.

For the non-isolated case the only difference is that all $Q^N_{j,0}$ are identically zero, instead of (69) and (70) with $i = 0$, and so the estimates above still hold. ∎

5. Existence for the general exchange-driven system

5.1. The isolated case (7)

In this section we prove the existence of solutions for the initial value problem (7), (8) in the isolated system case.

Proof of Theorem 3.2: By using a refined version of the de la Vallée Poussin theorem as in [18, Section 4], we can conclude that our hypothesis on $c_0$ implies that there is $\sigma_0 \in \mathcal{E}$ such that

$$\mathcal{S}_0 := \sum_{i=0}^{\infty} \sigma_0(i)\, c_{i0} < \infty. \tag{83}$$

Then, since for any $N \in \mathbb{N}^+$,

$$\sum_{i=0}^{N} \sigma_0(i)\, c_{i0}^N = \sum_{i=0}^{N} \sigma_0(i)\, c_{i0} \leqslant \mathcal{S}_0,$$

we conclude, by using Lemma 4.6, that, for each $T \geqslant 0$, we have, for all $t \in [0,T]$,

$$\sum_{i=0}^{N} \sigma_0(i)\, c_i^N(t) \leqslant \gamma_T\, \mathcal{S}_0. \tag{84}$$

It follows from Lemma 4.7 that, for each $i \in \mathbb{N}$, $(c_i^N)_{N \geqslant i}$ is a bounded sequence in $L^\infty(0,T) \cap W^{1,1}(0,T)$. Therefore, we can apply the Helly selection theorem, together with a diagonal argument, to guarantee the existence of a subsequence $(c_i^N)_{N \geqslant i}$ (not relabelled) and a sequence $c = (c_i)_{i \in \mathbb{N}}$ of nonnegative functions of locally bounded variation such that, for each $i \in \mathbb{N}$ and $t \in [0,\infty)$,

$$\lim_{N \to +\infty} c_i^N(t) = c_i(t). \tag{85}$$

Also, it follows from the nonnegativity of the $c_i^N$, (67), (68), and (85), that, for each $t \in [0,+\infty)$, we have $c(t) \in X_{0,1}^+$ and

$$\|c(t)\| \leqslant \|c_0\|. \tag{86}$$

Let $T > 0$. Fixing any integer $M \geqslant 1$, the sum in (84) restricted to $i \leqslant M$ satisfies the same bound; taking the limit $N \to +\infty$ in $c_i^N(t)$ in this finite sum, we obtain the same inequality with $c_i^N$ replaced by $c_i$. Then, letting $M \to +\infty$, we infer that

$$\sum_{i=0}^{\infty} \sigma_0(i)\, c_i(t) \leqslant \gamma_T\, \mathcal{S}_0, \qquad t \in [0,T]. \tag{87}$$

Now we prove that $c$ is indeed a solution to (7). According to Definition 3.1 it is necessary that, for $j \in \{1,2,3,4\}$ and $i \in \mathbb{N}^+$, $Q_{j,i}(c(\cdot)) \in L^1(0,T)$. Similarly to the proof of Lemma 4.7, we define the quantities $\hat{Q}_{j,i}(c)$, $i \in \mathbb{N}$, $j \in \{1,2,3,4\}$, as the corresponding $Q_{j,i}(c)$ with the terms of the type $a(p,0;k)\, c_p c_0$ removed. We recall that this procedure is forced upon us by the fact that the growth bound (28) on the rate coefficients $a(i,j;k)$ does not necessarily hold when $j = 0$.

We claim that, for each $i \in \mathbb{N}^+$ and $j \in \{1,2,3,4\}$,

$$\hat{Q}_{j,i}(c) \in L^1(0,T). \tag{88}$$

Define first

$$\Phi(\beta) := \sum_{(i,j,k) \in \mathcal{I}} a(i,j;k)\, \beta_i \beta_j,$$

where $\mathcal{I} := \{(i,j,k) \in (\mathbb{N}^+)^3 : k \leqslant i\}$, with domain $D_\Phi$ consisting of the sequences $\beta \in X_{0,1}^+$ for which the above triple series is convergent. We prove that, in fact, $D_\Phi = X_{0,1}^+$. Let $\eta \in \mathbb{N}^+$. Then, for each $\beta \in X_{0,1}^+$, by (27) and (28), we have

$$\begin{aligned}
\sum_{i=1}^{\eta} \sum_{j=1}^{\eta} \sum_{k=1}^{i} a(i,j;k)\, \beta_i \beta_j
&\leqslant C \sum_{i=1}^{\eta} \sum_{j=1}^{\eta} \sum_{k=1}^{i} (i-k+1)(j+k)\, q_{i,k}\, \beta_i \beta_j \\
&= C \sum_{i=1}^{\eta} \sum_{j=1}^{\eta} \sum_{k=1}^{i} (i-k+1)\, q_{i,k}\, \beta_i\, (j\, \beta_j) + C \sum_{i=1}^{\eta} \sum_{j=1}^{\eta} \sum_{k=1}^{i} (i-k+1)\, k\, q_{i,k}\, \beta_i \beta_j \\
&\leqslant C\, \mathcal{Q} \left( \sum_{i=1}^{\eta} i\, \beta_i \right) \left( \sum_{j=1}^{\eta} j\, \beta_j \right) + C\, \mathcal{Q} \left( \sum_{i=1}^{\eta} i\, \beta_i \right) \left( \sum_{j=1}^{\eta} \beta_j \right) \\
&\leqslant 2\, C\, \mathcal{Q}\, \|\beta\|^2.
\end{aligned}$$

By letting $\eta \to \infty$, we conclude that the above triple series is convergent and, therefore, $\beta \in D_\Phi$. Now observe that, for each $j \in \{1,2,3,4\}$ and $i \in \mathbb{N}^+$, the indices in the double sum in $\hat{Q}_{j,i}$ run over a subset of $\mathcal{I}$, so that, for each $\beta \in X_{0,1}^+$, $\hat{Q}_{j,i}(\beta)$ is well defined and

$$|\hat{Q}_{j,i}(\beta)| \leqslant \Phi(\beta) \leqslant 2\, C\, \mathcal{Q}\, \|\beta\|^2.$$

Hence, by (86),

$$\int_0^T |\hat{Q}_{j,i}(c(t))|\, dt \leqslant 2\, C\, \mathcal{Q}\, \|c_0\|^2\, T, \tag{89}$$

and our claim (88) is proved.

Our next claim is that, for $j \in \{1,2,3,4\}$, $i \in \mathbb{N}^+$, and also for $(j,i) = (1,0)$,

$$\lim_{N \to +\infty} |\hat{Q}^N_{j,i}(c^N) - \hat{Q}_{j,i}(c)|_{L^1(0,T)} = 0. \tag{90}$$

We first rearrange the sums that define $\hat{Q}^N_{j,i}$ and rewrite them, for each sequence $\beta$, as

$$\hat{Q}^N_{j,i}(\beta) = \sum_{(p,q) \in \mathcal{J}^N_{j,i}} \sum_{k \in \mathcal{K}^N_{j,i}(p,q)} a(p,q;k)\, \beta_p \beta_q,$$

where, for each integer $N \geqslant 2$,

$$\begin{aligned}
\mathcal{J}^N_{1,i} &:= \{(p,q) \in \mathbb{N}^2 : i+1 \leqslant p \leqslant N,\ 1 \leqslant q \leqslant N+i-p\}, & i \in \mathbb{N}, \\
\mathcal{J}^N_{2,i} &:= \{(p,q) \in \mathbb{N}^2 : 1 \leqslant p \leqslant N,\ q = i\}, & i \in \mathbb{N}^+, \\
\mathcal{J}^N_{3,i} &:= \{(p,q) \in \mathbb{N}^2 : 1 \leqslant q \leqslant i-1,\ i-q \leqslant p \leqslant N\}, & i \in \mathbb{N}_2, \\
\mathcal{J}^N_{4,i} &:= \{(p,q) \in \mathbb{N}^2 : p = i,\ 1 \leqslant q \leqslant N-1\}, & i \in \mathbb{N}^+,
\end{aligned}$$

and, for each integer $N \geqslant 2$ and $(p,q) \in \mathcal{J}^N_{j,i}$,

$$\begin{aligned}
\mathcal{K}^N_{1,i}(p,q) &:= \{p-i\}, & i \in \mathbb{N}, \\
\mathcal{K}^N_{2,i}(p,q) &:= \{k \in \mathbb{N} : 1 \leqslant k \leqslant \min(p, N-i)\}, & i \in \mathbb{N}^+, \\
\mathcal{K}^N_{3,i}(p,q) &:= \{i-q\}, & i \in \mathbb{N}_2, \\
\mathcal{K}^N_{4,i}(p,q) &:= \{k \in \mathbb{N} : 1 \leqslant k \leqslant \min(N-q, i)\}, & i \in \mathbb{N}^+.
\end{aligned}$$

Observe that, for any integer $N \geqslant 2$, $\mathcal{J}^N_{j,i} \subsetneq \mathcal{J}^{N+1}_{j,i}$. Define $\mathcal{J}_{j,i} := \bigcup_{N=2}^{\infty} \mathcal{J}^N_{j,i}$. Since, for each $(p,q) \in \mathcal{J}_{j,i}$, there is $N_0$ such that, for all $N \geqslant N_0$, $(p,q) \in \mathcal{J}^N_{j,i}$, we can define $\mathcal{K}_{j,i}(p,q) := \bigcup_{N=N_0}^{\infty} \mathcal{K}^N_{j,i}(p,q)$. It is not difficult to check that

$$\hat{Q}_{j,i}(\beta) = \sum_{(p,q) \in \mathcal{J}_{j,i}} \sum_{k \in \mathcal{K}_{j,i}(p,q)} a(p,q;k)\, \beta_p \beta_q,$$

for any $\beta \in X_{0,1}$.

Let $M, N$ be integers such that $2 \leqslant M \leqslant N$, and consider the following inequality:

$$\begin{aligned}
|\hat{Q}^N_{j,i}(c^N) - \hat{Q}_{j,i}(c)|_{L^1(0,T)}
&\leqslant \sum_{(p,q) \in \mathcal{J}^M_{j,i}} \sum_{k \in \mathcal{K}^M_{j,i}(p,q)} a(p,q;k)\, |c_p^N c_q^N - c_p c_q|_{L^1(0,T)} \\
&\quad + \Biggl| \sum_{(p,q) \in \mathcal{J}^N_{j,i} \setminus \mathcal{J}^M_{j,i}} \sum_{k \in \mathcal{K}^N_{j,i}(p,q)} a(p,q;k)\, c_p^N c_q^N \Biggr|_{L^1(0,T)} \\
&\quad + \Biggl| \sum_{(p,q) \in \mathcal{J}_{j,i} \setminus \mathcal{J}^M_{j,i}} \sum_{k \in \mathcal{K}_{j,i}(p,q)} a(p,q;k)\, c_p c_q \Biggr|_{L^1(0,T)}.
\end{aligned} \tag{91}$$

Since the first sum runs over a fixed finite set of indices, by applying the Lebesgue bounded convergence theorem we can conclude that, for $j \in \{1,2,3,4\}$ and $i \in \mathbb{N}^+$, also for $(j,i) = (1,0)$, and excluding $(j,i) = (3,1)$,

$$\lim_{N \to +\infty} \sum_{(p,q) \in \mathcal{J}^M_{j,i}} \sum_{k \in \mathcal{K}^M_{j,i}(p,q)} a(p,q;k)\, |c_p^N c_q^N - c_p c_q|_{L^1(0,T)} = 0. \tag{92}$$

Since, in all cases, $\mathcal{K}^N_{j,i}(p,q) \subseteq \{k \in \mathbb{N} : 1 \leqslant k \leqslant p\}$, we obtain

$$\begin{aligned}
\sum_{k \in \mathcal{K}^N_{j,i}(p,q)} a(p,q;k)
&\leqslant \sum_{k=1}^{p} a(p,q;k) \leqslant C \sum_{k=1}^{p} (p-k+1)(q+k)\, q_{p,k} \\
&= C \left( q \sum_{k=1}^{p} (p-k+1)\, q_{p,k} + \sum_{k=1}^{p} (p-k+1)\, k\, q_{p,k} \right) \\
&\leqslant 2\, C\, \mathcal{Q}\, p\, q.
\end{aligned} \tag{93}$$

Also we have, for $\max(2, i) \leqslant M < N$,

$$\begin{aligned}
\mathcal{J}^N_{2,i} \setminus \mathcal{J}^M_{2,i} &= ([M+1, N] \cap \mathbb{N}) \times \{i\}, \\
\mathcal{J}^N_{3,i} \setminus \mathcal{J}^M_{3,i} &= ([M+1, N] \cap \mathbb{N}) \times ([1, i-1] \cap \mathbb{N}), \\
\mathcal{J}^N_{4,i} \setminus \mathcal{J}^M_{4,i} &= \{i\} \times ([M, N-1] \cap \mathbb{N}).
\end{aligned}$$

The case $j = 1$ has to be tackled in a different way. By using (93), we have, for $j \in \{2,3,4\}$,

$$\begin{aligned}
\Biggl| \sum_{(p,q) \in \mathcal{J}^N_{j,i} \setminus \mathcal{J}^M_{j,i}} \sum_{k \in \mathcal{K}^N_{j,i}(p,q)} a(p,q;k)\, c_p^N c_q^N \Biggr|_{L^1(0,T)}
&\leqslant 2\, C\, \mathcal{Q}\, \Biggl| \sum_{(p,q) \in \mathcal{J}^N_{j,i} \setminus \mathcal{J}^M_{j,i}} (p\, c_p^N)(q\, c_q^N) \Biggr|_{L^1(0,T)} \\
&\leqslant 2\, C\, \mathcal{Q}\, i\, \|c_0\|\, \Biggl| \sum_{p=M}^{N} p\, c_p^N \Biggr|_{L^1(0,T)} \\
&\leqslant 2\, C\, \mathcal{Q}\, i\, \|c_0\|\, \sup_{p \geqslant M} \frac{p}{\sigma_0(p)}\, \Biggl| \sum_{p=M}^{N} \sigma_0(p)\, c_p^N \Biggr|_{L^1(0,T)}.
\end{aligned}$$

Therefore, there is a constant $C_0 > 0$ only depending on $i$, $T$, $\mathcal{S}_0$ and $c_0$, such that

$$\Biggl| \sum_{(p,q) \in \mathcal{J}^N_{j,i} \setminus \mathcal{J}^M_{j,i}} \sum_{k \in \mathcal{K}^N_{j,i}(p,q)} a(p,q;k)\, c_p^N c_q^N \Biggr|_{L^1(0,T)} \leqslant C_0 \sup_{p \geqslant M} \frac{p}{\sigma_0(p)}. \tag{94}$$

The last term in (91) can be dealt with in the same way, to conclude that

$$\Biggl| \sum_{(p,q) \in \mathcal{J}_{j,i} \setminus \mathcal{J}^M_{j,i}} \sum_{k \in \mathcal{K}_{j,i}(p,q)} a(p,q;k)\, c_p c_q \Biggr|_{L^1(0,T)} \leqslant C_0 \sup_{p \geqslant M} \frac{p}{\sigma_0(p)}. \tag{95}$$

By (91), (92), (94) and (95) we have

$$\limsup_{N \to +\infty} |\hat{Q}^N_{j,i}(c^N) - \hat{Q}_{j,i}(c)|_{L^1(0,T)} \leqslant 2\, C_0 \sup_{p \geqslant M} \frac{p}{\sigma_0(p)},$$

for all $M \geqslant \max(2, i)$. Since $\sigma_0 \in \mathcal{E}$, the right-hand side of the above estimate converges to zero as $M \to +\infty$, therefore proving (90) for $j \in \{2,3,4\}$.

For $j = 1$, we consider the following decomposition, for $i \in \mathbb{N}$:

$$\mathcal{J}^N_{1,i} \setminus \mathcal{J}^M_{1,i} = J_I \cup J_{II},$$

where (see Figure 3)

$$\begin{aligned}
J_I &:= \{(p,q) \in \mathbb{N}^2 : M+1 \leqslant p \leqslant N,\ 1 \leqslant q \leqslant N+i-p\}, \\
J_{II} &:= \{(p,q) \in \mathbb{N}^2 : i+1 \leqslant p \leqslant M,\ M+1+i-p \leqslant q \leqslant N+i-p\}.
\end{aligned}$$

Figure 3. Regions $J_I$ and $J_{II}$ defined in the text.

Then,

$$\begin{aligned}
\Biggl| \sum_{(p,q) \in J_I} \sum_{k \in \mathcal{K}_{1,i}(p,q)} a(p,q;k)\, c_p^N c_q^N \Biggr|_{L^1(0,T)}
&= \Biggl| \sum_{p=M+1}^{N} \sum_{q=1}^{N+i-p} a(p,q;p-i)\, c_p^N c_q^N \Biggr|_{L^1(0,T)} \\
&\leqslant C\, (i+1)\, \Biggl| \sum_{p=M+1}^{N} \sum_{q=1}^{N+i-p} (q+p-i)\, q_{p,p-i}\, c_p^N c_q^N \Biggr|_{L^1(0,T)} \\
&\leqslant 2\, C\, \mathcal{Q}\, \|c_0\|\, (i+1)\, \Biggl| \sum_{p=M+1}^{N} p\, c_p^N \Biggr|_{L^1(0,T)} \\
&\leqslant 2\, C\, \mathcal{Q}\, \|c_0\|\, (i+1)\, \sup_{p \geqslant M+1} \frac{p}{\sigma_0(p)}\, \Biggl| \sum_{p=M+1}^{N} \sigma_0(p)\, c_p^N \Biggr|_{L^1(0,T)},
\end{aligned}$$

so that there is $C_1 > 0$ only depending on $i$, $T$, $\mathcal{S}_0$ and $c_0$ such that, for all $M \geqslant i+1$,

$$\Biggl| \sum_{(p,q) \in J_I} \sum_{k \in \mathcal{K}^N_{j,i}(p,q)} a(p,q;k)\, c_p^N c_q^N \Biggr|_{L^1(0,T)} \leqslant C_1 \sup_{p \geqslant M+1} \frac{p}{\sigma_0(p)}. \tag{96}$$

Similarly, it is easy to obtain

$$\Biggl| \sum_{(p,q) \in J_I} \sum_{k \in \mathcal{K}_{j,i}(p,q)} a(p,q;k)\, c_p c_q \Biggr|_{L^1(0,T)} \leqslant C_1 \sup_{p \geqslant M+1} \frac{p}{\sigma_0(p)}. \tag{97}$$

The estimate over $J_{II}$ is more involved:

$$\begin{aligned}
\Biggl| \sum_{(p,q) \in J_{II}} \sum_{k \in \mathcal{K}_{1,i}(p,q)} a(p,q;k)\, c_p^N c_q^N \Biggr|_{L^1(0,T)}
&= \Biggl| \sum_{p=i+1}^{M} \sum_{q=M+1+i-p}^{N+i-p} a(p,q;p-i)\, c_p^N c_q^N \Biggr|_{L^1(0,T)} \\
&\leqslant C\, (i+1)\, \Biggl| \sum_{p=i+1}^{M} \sum_{q=M+1+i-p}^{N+i-p} (q+p-i)\, q_{p,p-i}\, c_p^N c_q^N \Biggr|_{L^1(0,T)} \\
&\leqslant C\, \mathcal{Q}\, (i+1)\, \sup_{r \geqslant M+1} \frac{r}{\sigma_0(r)}\, \Biggl| \sum_{p=i+1}^{M} \sum_{q=M+1+i-p}^{N+i-p} \sigma_0(q+p-i)\, c_p^N c_q^N \Biggr|_{L^1(0,T)},
\end{aligned}$$

where we have used the fact that, for $(p,q) \in J_{II}$,

$$r := q+p-i \geqslant M+1.$$

Using the monotonicity of $\sigma_0$ and the inequality (35) with $m_{\sigma_0} = 2$, we get

$$\sigma_0(q+p-i) \leqslant \sigma_0(q+p) \leqslant 3\, \sigma_0(p) + 3\, \sigma_0(q).$$

This implies that

$$\Biggl| \sum_{p=i+1}^{M} \sum_{q=M+1+i-p}^{N+i-p} \sigma_0(q+p-i)\, c_p^N c_q^N \Biggr|_{L^1(0,T)} \leqslant 3\, \Biggl| \sum_{p=i+1}^{M} \sum_{q=M+1+i-p}^{N+i-p} \bigl(\sigma_0(p) + \sigma_0(q)\bigr)\, c_p^N c_q^N \Biggr|_{L^1(0,T)} \leqslant 6\, \mathcal{S}_0\, \|c_0\|\, T.$$

Hence, there is a constant $C_2 > 0$ only depending on $i$, $T$, $\mathcal{S}_0$ and $\|c_0\|$, such that

$$\Biggl| \sum_{(p,q) \in J_{II}} \sum_{k \in \mathcal{K}^N_{j,i}(p,q)} a(p,q;k)\, c_p^N c_q^N \Biggr|_{L^1(0,T)} \leqslant C_2 \sup_{r \geqslant M+1} \frac{r}{\sigma_0(r)}, \tag{98}$$

and, obtained in the same way,

$$\Biggl| \sum_{(p,q) \in J_{II}} \sum_{k \in \mathcal{K}_{j,i}(p,q)} a(p,q;k)\, c_p c_q \Biggr|_{L^1(0,T)} \leqslant C_2 \sup_{r \geqslant M+1} \frac{r}{\sigma_0(r)}. \tag{99}$$

As before, by (92), (96), (97), (98) and (99), we obtain, for $i \in \mathbb{N}$,

$$\limsup_{N \to +\infty} |\hat{Q}^N_{1,i}(c^N) - \hat{Q}_{1,i}(c)|_{L^1(0,T)} \leqslant C_3 \sup_{p \geqslant M+1} \frac{p}{\sigma_0(p)},$$

for some constant $C_3 > 0$ only depending on $i$, $T$, $\mathcal{S}_0$ and $c_0$. Since $\sigma_0 \in \mathcal{E}$, the right-hand side of the above estimate converges to zero as $M \to +\infty$, therefore proving (90) for $j = 1$.

Now we tackle the terms whose rate coefficients are of fragmentation-type, namely: 𝑎 ​ ( 𝑝 , 0 ; 𝑘 ) . Define 𝐴 𝑗 , 𝑖 𝑁 as the sum of terms in 𝑄 𝑗 , 𝑖 𝑁 which are of this type. According to (69)-(72) we have, for 𝑁 ⩾ 𝑖 + 2 ,

$$A^N_{1,i}(c^N)=(1-\delta_{i,0})\sum_{k=1}^{N-i}a(i+k,0;k)\,c^N_{i+k}c^N_0,\qquad i\in\{0,1,\ldots,N-1\},\tag{100}$$

$$A^N_{2,i}(c^N)=-\delta_{i,0}\sum_{k=1}^{N-1}\sum_{j=k+1}^{N}a(j,0;k)\,c^N_j c^N_0,\qquad i\in\{0,1,\ldots,N-1\},\tag{101}$$

$$A^N_{3,i}(c^N)=\sum_{j=i+1}^{N}a(j,0;i)\,c^N_j c^N_0,\qquad i\in\{1,\ldots,N\},\tag{102}$$

$$A^N_{4,i}(c^N)=-\sum_{k=1}^{i-1}a(i,0;k)\,c^N_0 c^N_i,\qquad i\in\{1,\ldots,N\},\tag{103}$$

where $\delta_{i,0}$ is the Kronecker symbol. Here we used Definition (9), by which all coefficients of the type $a(k,0;k)$ are zero. Correspondingly, we define $A_{j,i}$ as the sums of the terms in $Q_{j,i}$ with rate coefficients of the same type, which are obtained from (100)-(103) by letting $N\to\infty$ in the upper limits of the sums. Our next claim is that $A_{j,i}(c)\in L^1(0,T)$ and

$$\lim_{N\to+\infty}\big|A^N_{j,i}(c^N)-A_{j,i}(c)\big|_{L^1(0,T)}=0.\tag{104}$$

Observe that, for each $i=1,\ldots,N$, the sum in (103) runs over a finite set of indices and that

$$\int_0^T\big|A^N_{4,i}(c^N(t))-A_{4,i}(c(t))\big|\,dt=\alpha(i)\int_0^T\big|c^N_0(t)c^N_i(t)-c_0(t)c_i(t)\big|\,dt,$$

with $\alpha(i):=\sum_{k=1}^{i-1}a(i,0;k)$, and hence our claim is easily proved for $j=4$ by the bounded convergence theorem. For the other cases we write, for $j=1,2,3$,

$$A^N_{j,i}(c^N)=\pm\,c^N_0\sum_{(p,k)\in\mathcal{I}^N_{j,i}}a(p,0;k)\,c^N_p,$$

where

$$\mathcal{I}^N_{1,i}:=\{(p,k)\in\mathbb{N}^2:\ i+1\leqslant p\leqslant N,\ k=p-i\},\qquad i\in\{1,\ldots,N-1\},$$

$$\mathcal{I}^N_{2,0}:=\{(p,k)\in\mathbb{N}^2:\ 2\leqslant p\leqslant N,\ 1\leqslant k\leqslant p-1\},$$

$$\mathcal{I}^N_{3,i}:=\{(p,k)\in\mathbb{N}^2:\ i+1\leqslant p\leqslant N,\ k=i\},\qquad i\in\{1,\ldots,N\}.$$

Observe that $\mathcal{I}^N_{j,i}\subsetneq\mathcal{I}^{N+1}_{j,i}$. Also, define $\mathcal{I}_{j,i}:=\bigcup_{N=i+1}^{\infty}\mathcal{I}^N_{j,i}$, and consider the sets

$$\mathcal{J}^N:=\{(p,k)\in(\mathbb{N}^+)^2:\ 1\leqslant p\leqslant N,\ 1\leqslant k\leqslant p-1\},$$

and $\mathcal{J}:=\bigcup_{N=1}^{\infty}\mathcal{J}^N$. Let us also recall the sets $\mathcal{J}^N_0$ from Lemma 4.6 and consider $\mathcal{J}_0:=\bigcup_{N=M_0+1}^{\infty}\mathcal{J}^N_0$. Then the set $\mathcal{J}\setminus\mathcal{J}_0$ is finite. Therefore,

$$\sum_{(p,k)\in\mathcal{J}\setminus\mathcal{J}_0}a(p,0;k)\,c_0 c_p\in L^1(0,T).\tag{105}$$

On the other hand, by Lemma 4.6 and the fact that $\sigma_0(p-1)/(p-1)\geqslant 1$ for $p\geqslant M_0+1$, we have, for fixed $\mu\geqslant M_0+1$,

$$\int_0^T\sum_{(p,k)\in\mathcal{J}^{\mu}_0}a(p,0;k)\,c^N_0(t)\,c^N_p(t)\,dt\leqslant\gamma_T\sum_{i=0}^{\infty}\sigma_0(i)\,c_{0i}.$$

Letting $N\to+\infty$ and using the bounded convergence theorem, we can replace $c^N_p$ by $c_p$ in the previous inequality. Then, letting $\mu\to+\infty$ and using the monotone convergence theorem, we obtain

$$\sum_{(p,k)\in\mathcal{J}_0}a(p,0;k)\,c_0 c_p\in L^1(0,T).\tag{106}$$

By (105) and (106), we conclude that

$$\sum_{(p,k)\in\mathcal{J}}a(p,0;k)\,c_0 c_p\in L^1(0,T).$$

Since, for $j=1,2,3$ and the corresponding indices $i$, we have $\mathcal{I}_{j,i}\subset\mathcal{J}$, we can conclude that

$$A_{j,i}(c)\in L^1(0,T).\tag{107}$$

This, together with (88), allows us to conclude that

$$Q_{j,i}(c)\in L^1(0,T),\qquad j\in\{1,2,3,4\}\ \ (\text{or }j\in\{1,2\}\text{ if }i=0),$$

which is condition (ii) of Definition 3.1.

The proof of (104) proceeds in a way similar to that of claim (90), by considering the inequality corresponding to (91), but now with the $\hat{Q}$'s replaced by the $A$'s and with the obvious replacements on the right-hand side. So, for each pair of integers $M,N$ with $i+1\leqslant M\leqslant N$, we have

$$\mathcal{I}^N_{1,i}\setminus\mathcal{I}^M_{1,i}=\{(p,k)\in\mathbb{N}^2:\ M+1\leqslant p\leqslant N,\ k=p-i\},\qquad i\in\{1,\ldots,N-1\},$$

$$\mathcal{I}^N_{2,0}\setminus\mathcal{I}^M_{2,0}=\{(p,k)\in\mathbb{N}^2:\ M+1\leqslant p\leqslant N,\ 1\leqslant k\leqslant p-1\},$$

$$\mathcal{I}^N_{3,i}\setminus\mathcal{I}^M_{3,i}=\{(p,k)\in\mathbb{N}^2:\ M+1\leqslant p\leqslant N,\ k=i\},\qquad i\in\{1,\ldots,N\}.$$

By the bounded convergence theorem, for each choice of $M$ we have

$$\lim_{N\to+\infty}\sum_{(p,k)\in\mathcal{I}^M_{j,i}}a(p,0;k)\,\big|c^N_p c^N_0-c_p c_0\big|_{L^1(0,T)}=0.\tag{108}$$

With reference to Lemma 4.6, we have, for $j=1,2,3$ and $i$ in the corresponding sets: if $N\geqslant M\geqslant M_0$, then $\mathcal{I}^N_{j,i}\setminus\mathcal{I}^M_{j,i}\subsetneq\mathcal{J}_0$, so that (54) holds with $\mathcal{J}_0$ replaced by $\mathcal{I}^N_{j,i}\setminus\mathcal{I}^M_{j,i}$; that is, there is a constant $C(T)>0$ such that

$$\int_0^T\sum_{(p,k)\in\mathcal{I}^N_{j,i}\setminus\mathcal{I}^M_{j,i}}\frac{\sigma_0(p-1)}{p-1}\,a(p,0;k)\,c^N_0(t)\,c^N_p(t)\,dt\leqslant C(T).\tag{109}$$

For $j=1,2,3$ and $(p,k)\in\mathcal{I}^N_{j,i}\setminus\mathcal{I}^M_{j,i}$, we have $p\geqslant M+1$ and therefore, given $\varepsilon\in(0,1)$, for $M$ sufficiently large we have

$$\inf_{(p,k)\in\mathcal{I}^N_{j,i}\setminus\mathcal{I}^M_{j,i}}\frac{\sigma_0(p-1)}{p-1}\geqslant\frac{\sigma_0(M)}{M}\geqslant\frac{2C(T)}{\varepsilon},$$

which, by (109), allows us to conclude that

$$\Big|\sum_{(p,k)\in\mathcal{I}^N_{j,i}\setminus\mathcal{I}^M_{j,i}}a(p,0;k)\,c^N_0 c^N_p\Big|_{L^1(0,T)}\leqslant\frac{\varepsilon}{2}.\tag{110}$$

Proceeding similarly, we obtain

$$\Big|\sum_{(p,k)\in\mathcal{I}_{j,i}\setminus\mathcal{I}^M_{j,i}}a(p,0;k)\,c_0 c_p\Big|_{L^1(0,T)}\leqslant\frac{\varepsilon}{2}.\tag{111}$$

From (108), (110) and (111), we obtain

$$\limsup_{N\to+\infty}\big|A^N_{j,i}(c^N)-A_{j,i}(c)\big|_{L^1(0,T)}\leqslant\varepsilon,$$

from which, letting $\varepsilon\to 0$, we obtain (104).

From (90) and (104) we conclude that, for $j\in\{1,2,3,4\}$, $i\in\mathbb{N}^+$, and also for $(j,i)=(1,0)$,

$$\lim_{N\to+\infty}\big|Q^N_{j,i}(c^N)-Q_{j,i}(c)\big|_{L^1(0,T)}=0.\tag{112}$$

Then, from the integrated version of the truncated system (38)-(42), together with Corollary 4.2, (85), (86) and (112), it is straightforward to obtain that $c$ satisfies condition (iii) of Definition 3.1. The continuity of $c(\cdot)$ is a consequence of this. □

In analogy with what is sometimes the practice in the coagulation-fragmentation literature [7], we shall call admissible any solution of the initial value problem (7), (8) obtained as a limit of solutions of (38)–(43) as $N\to\infty$, in the sense used in the above proof of Theorem 3.2.

The proof that, in the isolated case, admissible solutions of (7) conserve the moments (24) with $r=0$ (total number of clusters) and $r=1$ (total mass of clusters) is easily done using the fact that, if an initial condition $c_0$ is in $X^+_{0,1}$, then it satisfies (83) for some $\sigma_0\in\mathcal{E}$, and so it is slightly more regular; this extra regularity is inherited by the admissible solution (recall Lemma 4.6 and (84)).

Proof of Theorem 3.3: Let $T>0$ and $2\leqslant L<N$. Using Corollary 4.2 we write, for all $t\in[0,T)$,

$$\begin{aligned}
\mathcal{P}_0(t)-\mathcal{P}_0(0)&=\sum_{i=0}^{N}(c_i(t)-c_{0i})+\sum_{i=N+1}^{\infty}c_i(t)-\sum_{i=N+1}^{\infty}c_{0i}\\
&=\sum_{i=0}^{N}\big(c_i(t)-c^N_i(t)\big)+\sum_{i=0}^{N}\big(c^N_i(t)-c_{0i}\big)+\sum_{i=N+1}^{\infty}c_i(t)-\sum_{i=N+1}^{\infty}c_{0i}\\
&=\sum_{i=0}^{L}\big(c_i(t)-c^N_i(t)\big)+\sum_{i=L+1}^{\infty}c_i(t)-\sum_{i=L+1}^{N}c^N_i(t)-\sum_{i=N+1}^{\infty}c_{0i}.
\end{aligned}$$

Thus, from the fact that $c_0\in X_{0,1}$,

$$\begin{aligned}
|\mathcal{P}_0(t)-\mathcal{P}_0(0)|&\leqslant\sum_{i=0}^{L}|c_i(t)-c^N_i(t)|+\sum_{i=L+1}^{\infty}|c_i(t)|+\sum_{i=L+1}^{N}|c^N_i(t)|+\sum_{i=N+1}^{\infty}|c_{0i}|\\
&\leqslant\sum_{i=0}^{L}|c_i(t)-c^N_i(t)|+\sum_{i=L+1}^{\infty}|c_i(t)|+\frac{1}{L+1}\sum_{i=L+1}^{N}|i\,c^N_i(t)|+\sum_{i=N+1}^{\infty}|c_{0i}|.
\end{aligned}$$

Now, letting $N\to\infty$ and using (85) and (87), it follows that

$$|\mathcal{P}_0(t)-\mathcal{P}_0(0)|\leqslant\sum_{i=L+1}^{\infty}|c_i(t)|+\frac{1}{L+1}\,\gamma_T\,\mathcal{S}_0,\tag{113}$$

and letting $L\to\infty$ we conclude that $|\mathcal{P}_0(t)-\mathcal{P}_0(0)|=0$, thus proving (31).

To prove (32) we repeat the computations above, now for $\mathcal{P}_1(t)-\mathcal{P}_1(0)$. We get

$$\begin{aligned}
|\mathcal{P}_1(t)-\mathcal{P}_1(0)|&\leqslant\sum_{i=0}^{L}i\,|c_i(t)-c^N_i(t)|+\sum_{i=L+1}^{\infty}|i\,c_i(t)|+\sum_{i=L+1}^{N}|i\,c^N_i(t)|+\sum_{i=N+1}^{\infty}|i\,c_{0i}|\\
&\leqslant\sum_{i=0}^{L}i\,|c_i(t)-c^N_i(t)|+\sum_{i=L+1}^{\infty}|i\,c_i(t)|+\frac{L+1}{\sigma_0(L+1)}\sum_{i=L+1}^{N}|\sigma_0(i)\,c^N_i(t)|+\sum_{i=N+1}^{\infty}|i\,c_{0i}|,
\end{aligned}$$

and now, repeating the process of letting first $N\to\infty$, then $L\to\infty$, and using (87) and (33), we conclude (32). □

5.2. The non-isolated case

In the non-isolated case, Theorems 3.2 and 3.3 can be stated almost verbatim (the only difference being the lack of conservation of the cluster number density) and proved in essentially the same way, the only change being that the terms $Q_{j,0}$ are now all zero. For completeness we state them now, omitting the proofs.

6. A regularity result

Throughout this work we have considered conditions (27)-(28), the only condition imposed on the fragmentation-type rate coefficients $a(p,0;k)$ being their nonnegativity, besides the "physical" conditions (9)-(11). This lack of upper bounds prevented us from using the Ascoli-Arzelà theorem in the proof of the existence theorems. It also prevented us from obtaining uniform convergence properties for the series defining each $Q_{j,i}(c(\cdot))$ for the solution $c$ obtained in the existence theorem, and therefore from obtaining more regularity for this solution than its continuity. However, if we extend the upper bound (28) to $j=0$, we obtain the following regularity result which, in particular, proves that the solution constructed in Theorem 3.2 is a solution of (7)-(8) in the classical sense:

Proof of Theorem 3.6: Similarly to the function $\Phi$ introduced in the proof of Theorem 3.2, we define the function $\Psi$ by

$$\Psi(\beta):=\sum_{(i,j,k)\in\mathcal{I}_0}a(i,j;k)\,\beta_i\beta_j,$$

where

$$\mathcal{I}_0:=\{(i,j,k)\in\mathbb{N}^3:\ 1\leqslant k\leqslant i\},$$

the domain of which, $D_\Psi$, is the subset of $X^+_{0,1}$ formed by the elements $\beta$ for which the above series converges. Proceeding as in the proof of Theorem 3.2, we obtain, for any integer $\eta\geqslant 2$,

$$\sum_{i=1}^{\eta}\sum_{j=0}^{\eta}\sum_{k=1}^{i}a(i,j;k)\,\beta_i\beta_j\leqslant 2\,C\,\mathcal{Q}\,\Big(\sum_{i=1}^{\eta}i\,\beta_i\Big)\Big(\sum_{j=1}^{\eta}j\,\beta_j+\sum_{j=0}^{\eta}\beta_j\Big).\tag{114}$$

This shows that, in fact, $D_\Psi=X^+_{0,1}$. On the other hand, from the assumption that $\sum_{i=1}^{\infty}i\,c_i(t)$ is constant for $t\in[0,T)$, Dini's theorem allows us to conclude that this series of nonnegative functions is uniformly convergent on any compact subset of $[0,T)$. Hence $\sum_{i=0}^{\infty}c_i(t)$ is also uniformly convergent on compact subsets of $[0,T)$. This, together with (114), ensures that the same is true for the series defining $\Psi(c(\cdot))$. Since, for $j=1,2,3,4$ with $i\in\mathbb{N}^+$, and for $j=1,2$ with $i=0$, the indices of the sums defining $Q_{j,i}(c)$ run over subsets of $\mathcal{I}_0$, we conclude that these series are also uniformly convergent on any compact subset of $[0,T)$ and therefore, by the continuity of $c$, that $Q_{j,i}(c(\cdot))$ is continuous in $[0,T)$. But then, by (iii) in Definition 3.1, the thesis of the theorem is obtained. □

Remark 6.1.

We remark that the proof presented above is valid for both the isolated and non-isolated cases as it is based on the bound (114) and the conservation of mass (together with Dini’s theorem), both of which hold true in both regimes.

Remark 6.2.

In the non-isolated case discussed in Sections 2 and 3, when the DGED system reduces to the standard coagulation-fragmentation equations, imposing the conditions of Theorem 3.6 gives, for the fragmentation coefficients,

$$b_{j,k}=2\,a(j+k,0;k)\,c_0\leqslant 2\,C\,(j+1)\,k\,q_{j+k,k}\,c_0\leqslant K\,j\,k,$$

with $K=2\,(C+1)\,\mathcal{Q}\,c_0$, and we recover Theorem 5.2 of [1].

7. Uniqueness of solutions

In this section we prove the partial uniqueness result for (7)–(8) stated in Theorem 3.7. The proof is based on ideas commonly used in studies of coagulation-type equations (see, e.g., [1, 18]), so we will skip most of the details of the computations. The idea is the following: we assume that, for some $T\in(0,\infty)$, the initial value problem (7)–(8) has two solutions $c=(c_i)$ and $d=(d_i)$. Defining $x:=c-d$, we prove that, on $[0,T]$, this function satisfies a differential inequality of the form

$$\psi(t)\leqslant K\int_0^t\psi(s)\,ds,$$

for some constant $K>0$, where $\psi(t):=\sum_{i=0}^{\infty}(1+i^{\alpha})|x_i(t)|$. From this, by Gronwall's inequality, we get $x_i(t)\equiv 0$, and hence uniqueness follows.

To implement this idea we need an evolution equation for solutions of (7) similar to (45) in Proposition 4.1, which was valid for solutions of the truncated system (38)–(42). This is stated in the next proposition, whose proof is similar to that of Proposition 4.1 and is left to the reader.

Proposition 7.1.

Let $c=(c_i)_{i\geqslant 0}$ be any solution of (7) in the isolated case. Then, for every $n\in\mathbb{N}$ and every sequence $(g_i)$, we have

$$\sum_{i=0}^{n}g_i c_i(t)=\sum_{i=0}^{n}g_i c_i(0)+\int_0^t\sum_{k=1}^{n}\sum_{i=k}^{n}\sum_{j=0}^{n-k}(g_{j+k}+g_{i-k}-g_j-g_i)\,a(i,j;k)\,c_i(s)c_j(s)\,ds\tag{115}$$

$$\qquad+\int_0^t\sum_{j=1}^{4}R_{j,n}(c(s))\,ds,\tag{116}$$

where

$$R_{1,n}(c(\cdot)):=\sum_{k=1}^{n}\sum_{i=n+1}^{\infty}\sum_{j=0}^{n-k}g_{j+k}\,a(i,j;k)\,c_i c_j,\tag{117}$$

$$R_{2,n}(c(\cdot)):=\Big(\sum_{k=1}^{\infty}\sum_{i=k}^{n+k}\sum_{j=0}^{\infty}-\sum_{k=1}^{n}\sum_{i=k}^{n}\sum_{j=0}^{n-k}\Big)g_{i-k}\,a(i,j;k)\,c_i c_j,\tag{118}$$

$$R_{3,n}(c(\cdot)):=-\Big(\sum_{k=1}^{\infty}\sum_{i=k}^{\infty}\sum_{j=0}^{n}-\sum_{k=1}^{n}\sum_{i=k}^{n}\sum_{j=0}^{n-k}\Big)g_j\,a(j,i-k;k)\,c_j c_{i-k},\tag{119}$$

$$R_{4,n}(c(\cdot)):=-\sum_{k=1}^{n}\sum_{i=k}^{n}\sum_{j=n-k+1}^{\infty}g_i\,a(i,j;k)\,c_i c_j.\tag{120}$$

Observe that in (115), when $k=n$, the only term of the triple sum has $(i,j,k)=(n,0,n)$ and so, either from $g_{j+k}+g_{i-k}-g_j-g_i=g_n+g_0-g_0-g_n=0$, or by (9), its contribution to (115) is identically zero. This means that the sum in $k$ can be taken just from $1$ to $n-1$, exactly as in (45).

We will assume that the rate coefficients $a(i,j;k)$ satisfy the following: there exist a constant $\alpha\in[0,\tfrac{1}{2})$ and a positive constant $C$ such that, for all integers $i,k$ with $1\leqslant k\leqslant i$, and all $j\in\mathbb{N}$, we have

$$a(i,j;k)\leqslant C\,(i-k+1)^{\alpha}\,(j^{\alpha}+k^{\alpha})\,q_{i,k},\tag{121}$$

where the $q_{i,k}$ satisfy condition (27) used in the existence result.

Note that condition (121) is more restrictive than (28), not only because the constant $\alpha$ is smaller than $1$, but also because it is assumed to hold for the case $j=0$, which we had not assumed in (28) (recall the discussion in Section 6).

We also observe that from (27) and (121) it follows that

$$\sum_{k=1}^{i}k^{\alpha}(i-k+1)^{\alpha}\,q_{i,k}\leqslant\mathcal{Q}\,i^{\alpha}.\tag{122}$$

In fact, from $\min_{1\leqslant k\leqslant i}\{k(i-k+1)\}=i$, we have

$$\mathcal{Q}\,i\geqslant\sum_{k=1}^{i}k(i-k+1)\,q_{i,k}=\sum_{k=1}^{i}\big(k(i-k+1)\big)^{1-\alpha}\,k^{\alpha}(i-k+1)^{\alpha}\,q_{i,k}\geqslant i^{1-\alpha}\sum_{k=1}^{i}k^{\alpha}(i-k+1)^{\alpha}\,q_{i,k},$$

from which (122) readily follows.
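The two elementary facts behind (122) are easy to check numerically. The sketch below uses a hypothetical kernel $q_{i,k}=1/(k(i-k+1))$, chosen only so that condition (27) holds with $\mathcal{Q}=1$; this kernel and the value of $\alpha$ are illustrative assumptions, not the paper's hypotheses.

```python
# Sanity check of min_{1<=k<=i} k(i-k+1) = i and of inequality (122),
# for the hypothetical kernel q_{i,k} = 1/(k(i-k+1)), which satisfies
# condition (27), sum_k k(i-k+1) q_{i,k} <= Q*i, with Q = 1.

def q(i, k):
    return 1.0 / (k * (i - k + 1))

alpha = 0.4  # any alpha in [0, 1/2)
Q = 1.0

for i in range(1, 200):
    # the minimum of k(i-k+1) over 1 <= k <= i is attained at k = 1 (or k = i)
    assert min(k * (i - k + 1) for k in range(1, i + 1)) == i
    # left-hand side of (122)
    lhs = sum(k**alpha * (i - k + 1)**alpha * q(i, k) for k in range(1, i + 1))
    assert lhs <= Q * i**alpha + 1e-12
print("min_{1<=k<=i} k(i-k+1) = i and (122) hold for i = 1,...,199")
```

Each summand equals $(k(i-k+1))^{\alpha-1}\leqslant i^{\alpha-1}$, so the $i$ terms sum to at most $i^{\alpha}$, exactly as the derivation above predicts.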

We can now prove the uniqueness result stated in Theorem 3.7.

Proof of Theorem 3.7: Suppose $c=(c_i)$ and $d=(d_i)$ are two solutions of (7)–(8) on $[0,T]$, and let $x:=c-d$. Since $c(0)$ and $d(0)$ are both equal to the initial condition $c_0$, we have $x(0)=0$. Also, for all $i$ and $j$ it is clear that $c_i c_j-d_i d_j=c_i x_j+d_j x_i$. Observing that, for every absolutely continuous function $u(t)$, the function $t\mapsto|u(t)|$ is also absolutely continuous and $\frac{d}{dt}|u(t)|=\operatorname{sgn}(u(t))\frac{du}{dt}$ almost everywhere (where $\operatorname{sgn}(\cdot)$ denotes the sign function), we can use Proposition 7.1 to write

$$\sum_{i=0}^{n}(1+i^{\alpha})|x_i(t)|=\int_0^t\sum_{k=1}^{n}\sum_{i=k}^{n}\sum_{j=0}^{n-k}\tilde{g}(i,j;k)\,a(i,j;k)\,\big(c_i(s)x_j(s)+d_j(s)x_i(s)\big)\,ds\tag{123}$$

$$\qquad+\int_0^t\sum_{j=1}^{4}\Delta R_{j,n}(s)\,ds,\tag{124}$$

where $\tilde{g}(i,j;k):=g_{j+k}+g_{i-k}-g_j-g_i$, with $g_i=(1+i^{\alpha})\operatorname{sgn}(x_i)$, and $\Delta R_{j,n}(\cdot):=R_{j,n}(c(\cdot))-R_{j,n}(d(\cdot))$. Writing $x_\ell=\operatorname{sgn}(x_\ell)|x_\ell|$ and remembering that the sign function takes only the values $-1$, $0$ and $1$, we can write

$$\tilde{g}(i,j;k)\,x_j=(g_{j+k}+g_{i-k}-g_j-g_i)\,x_j\leqslant\big(2+(j+k)^{\alpha}+(i-k)^{\alpha}-j^{\alpha}+i^{\alpha}\big)|x_j|;$$

now, using the concavity of $u\mapsto u^{\alpha}$ and the fact that, for $0\leqslant u\leqslant i$, the function $u\mapsto u^{\alpha}+(i-u)^{\alpha}$ has a maximum at $u=i/2$, whose value is $2^{1-\alpha}i^{\alpha}$, we get

$$\tilde{g}(i,j;k)\,x_j\leqslant\big(2+j^{\alpha}+k^{\alpha}+(i-k)^{\alpha}-j^{\alpha}+i^{\alpha}\big)|x_j|\leqslant\big(2+(2^{1-\alpha}+1)\,i^{\alpha}\big)|x_j|\leqslant(2+3\,i^{\alpha})|x_j|.\tag{125}$$

Analogously,

$$\tilde{g}(i,j;k)\,x_i\leqslant\big(2+(j+k)^{\alpha}+(i-k)^{\alpha}+j^{\alpha}-i^{\alpha}\big)|x_i|\leqslant\big(2+j^{\alpha}+k^{\alpha}+(i-k)^{\alpha}+j^{\alpha}-i^{\alpha}\big)|x_i|\leqslant(2+2\,j^{\alpha}+k^{\alpha})|x_i|.\tag{126}$$

Plugging (121), (122), (125) and (126) into (123), we can bound the integrand in (123) from above by

$$7\,C\,\mathcal{Q}\,\sup_{t\in[0,T]}\big(\|c(t)\|+\|d(t)\|\big)\sum_{i=0}^{n}(1+i^{\alpha})|x_i(s)|.\tag{127}$$

To estimate the terms $\Delta R_{j,n}$ we use the same tools as above (the assumption $\alpha\in[0,\tfrac{1}{2})$, (121), (122), the concavity of $i\mapsto i^{\alpha}$, and the values of the sign function), recalling that $|x_\ell|=|c_\ell-d_\ell|\leqslant c_\ell+d_\ell$, to obtain

Δ ​ 𝑅 1 , 𝑛

⩽ 6 ​ 𝐶 ​ 𝑄 ( 𝑛 + 1 ) 1 − 2 ​ 𝛼 ​ sup 𝑡 ∈ [ 0 , 𝑇 ] ( ‖ 𝑐 ​ ( 𝑡 ) ‖ + ‖ 𝑑 ​ ( 𝑡 ) ‖ ) 2 ,

(128)

Δ ​ 𝑅 2 , 𝑛
⩽ 12 ​ 𝐶 ​ 𝑄 ( 𝑛 + 1 ) 1 − 2 ​ 𝛼 ​ sup 𝑡 ∈ [ 0 , 𝑇 ] ( ‖ 𝑐 ​ ( 𝑡 ) ‖ + ‖ 𝑑 ​ ( 𝑡 ) ‖ ) 2 +

+ 4 ​ 𝐶 ​ 𝑄 ​ ∑ 𝑖

1 𝑛 ∑ 𝑗

𝑛 − 𝑖 + 1 ∞ 𝑖 2 ​ 𝛼 ​ ( 𝑐 𝑖 + 𝑑 𝑖 ) ​ 𝑗 2 ​ 𝛼 ​ ( 𝑐 𝑗 + 𝑑 𝑗 )

(129)

Δ ​ 𝑅 3 , 𝑛
⩽ 4 ​ 𝐶 ​ 𝑄 ( 𝑛 + 1 ) 1 − 𝛼 ​ sup 𝑡 ∈ [ 0 , 𝑇 ] ( ‖ 𝑐 ​ ( 𝑡 ) ‖ + ‖ 𝑑 ​ ( 𝑡 ) ‖ ) 2 +

+ 8 ​ 𝐶 ​ 𝑄 ​ ∑ 𝑖

1 𝑛 ∑ 𝑗

𝑛 − 𝑖 + 1 𝑛 𝑖 2 ​ 𝛼 ​ ( 𝑐 𝑖 + 𝑑 𝑖 ) ​ 𝑗 2 ​ 𝛼 ​ ( 𝑐 𝑗 + 𝑑 𝑗 ) ,

(130)

Δ ​ 𝑅 4 , 𝑛
⩽ 4 ​ 𝐶 ​ 𝑄 ​ ∑ 𝑖

1 𝑛 ∑ 𝑗

𝑛 − 𝑖 + 1 ∞ 𝑖 2 ​ 𝛼 ​ ( 𝑐 𝑖 + 𝑑 𝑖 ) ​ ( 𝑐 𝑗 + 𝑑 𝑗 ) .

(131)

For $s\in[0,T]$ we have $\sum_{i=0}^{n}(1+i^{\alpha})|x_i(s)|\leqslant\|x(s)\|\leqslant\sup_{s\in[0,T]}\|x(s)\|<\infty$, and so the bound (127) and the monotone convergence theorem imply that, as $n\to\infty$, the integral (123) is bounded above by

$$7\,C\,\mathcal{Q}\,\sup_{t\in[0,T]}\big(\|c(t)\|+\|d(t)\|\big)\int_0^t\sum_{i=0}^{\infty}(1+i^{\alpha})|x_i(s)|\,ds.$$

In the same way, from (128)–(131) and the assumption $\alpha<\tfrac{1}{2}$, we conclude, using the dominated convergence theorem, that the integral (124) converges to zero as $n\to\infty$.

Hence, we can now pass to the limit $n\to\infty$ in (123)–(124), obtaining, for $t\in[0,T]$, the inequality

$$\sum_{i=0}^{\infty}(1+i^{\alpha})|x_i(t)|\leqslant 7\,C\,\mathcal{Q}\,\sup_{t\in[0,T]}\big(\|c(t)\|+\|d(t)\|\big)\int_0^t\sum_{i=0}^{\infty}(1+i^{\alpha})|x_i(s)|\,ds,$$

and thus Gronwall's lemma implies that

$$\sum_{i=0}^{\infty}(1+i^{\alpha})|x_i(t)|=0,\qquad\forall t\in[0,T],$$

whence $x_i(t)\equiv 0$, and uniqueness is proved. □

The reader may have noted that the above proof is valid for both the isolated and the non-isolated cases.

8. Final Remarks

In this paper we introduced a new model for exchange-driven growth of clusters made up of a discrete number of particles. This model allows for the exchange of a number $k$ of particles between two clusters, thus generalizing the exchange-driven model existing in the literature, in which only a single particle ($k=1$) can be exchanged in each reaction between clusters.

The mathematical description of the time evolution of the concentration $c_i(t)$ of clusters made of $i\geqslant 0$ identical particles is the system of ordinary differential equations (7), which we called the Discrete Generalized Exchange-Driven system (DGED). In this paper, after introducing the model and relating it to other cluster models (exchange-driven, coagulation-fragmentation), we proved an existence result under reasonably general conditions on the rate coefficients, and two conservation results. Under a slightly more restrictive hypothesis on the rate coefficients, we proved uniqueness of solutions to the initial value problem. Also under more restrictive assumptions than those used for the existence proof, we proved a regularity result.

One aspect that we did not touch upon in this study is the long-time behaviour of solutions, which we expect to tackle in future work. In this direction, proving that the set of solutions constitutes a semigroup in an appropriate topology will be crucial. Furthermore, existing studies of the exchange-driven and coagulation-fragmentation systems suggest that, under appropriate conditions, DGED can have nonzero equilibria, and that the convergence of solutions to these equilibria as $t\to\infty$ can be studied using a Lyapunov function. We conclude this paper with some further comments about this aspect.

System (7) can be rewritten in a way that highlights the balance of reversible chemical reactions between clusters: grouping together $Q_{1,i}(c)+Q_{2,i}(c)$ we model reactions of the type $\langle i+k\rangle+\langle j\rangle\rightleftharpoons\langle i\rangle+\langle j+k\rangle$, and looking at $Q_{3,i}(c)+Q_{4,i}(c)$ we are considering the reactions $\langle i-k\rangle+\langle j\rangle\rightleftharpoons\langle i\rangle+\langle j-k\rangle$. Thus, we can write (7) as follows:

$$\dot{c}_i=\sum_{k=1}^{i}W_{i-k;k}(c)-\sum_{k=1}^{\infty}W_{i;k}(c),\tag{132}$$

with

$$W_{p;k}(c):=\sum_{q=k}^{\infty}\omega(q,p;k)(c),\tag{133}$$

and

$$\omega(q,p;k)(c):=a(q,p;k)\,c_q c_p-a(p+k,q-k;k)\,c_{p+k}c_{q-k},\tag{134}$$

where the first sum on the right-hand side of (132) arises from $Q_{3,i}(c)+Q_{4,i}(c)$ and the second from $Q_{1,i}(c)+Q_{2,i}(c)$.

Writing the equations in this form immediately shows that if each of the above reversible chemical reactions is in equilibrium, then all $\omega(q,p;k)(c)=0$ and we are at an equilibrium of the ordinary differential equation system. This situation corresponds to the occurrence of so-called microscopic reversibility, and is a physically natural assumption, translated in mathematical terms into the following detailed balance condition: there exists a positive sequence $\mathcal{O}=(\mathcal{O}_i)$, with $\mathcal{O}_0=1$, such that $\omega(q,p;k)(\mathcal{O})=0$ for all $q,p,k$. Clearly, under this condition the sequence defined by $\bar{c}_i=\mathcal{O}_i z^i$ is an equilibrium solution of (7) whenever $z>0$ is such that $\bar{c}\in X_{0,1}$.
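A toy numerical illustration of this mechanism, under hedged assumptions: take the hypothetical kernel $a(i,j;k)=2^{-k}$ for $1\leqslant k\leqslant i$ (independent of $i$ and $j$) and $\mathcal{O}_i=x^i$ with $\mathcal{O}_0=1$; these satisfy $a(q,p;k)\mathcal{O}_q\mathcal{O}_p=a(p+k,q-k;k)\mathcal{O}_{p+k}\mathcal{O}_{q-k}$, so every $\omega(q,p;k)$ of (134) vanishes along $\bar{c}_i=\mathcal{O}_i z^i$.

```python
# Toy check of the detailed-balance mechanism: with the hypothetical kernel
# a(i,j;k) = 2**(-k) (independent of i and j) and O_i = x**i (so O_0 = 1),
# the sequence cbar_i = O_i * z**i annihilates every omega(q,p;k) of (134).
# (This toy kernel ignores the structural condition a(k,0;k) = 0 of (9),
# which is harmless for illustrating detailed balance.)

x, z = 0.6, 0.8  # illustrative parameters

def a(i, j, k):
    return 2.0 ** (-k) if 1 <= k <= i else 0.0

def cbar(i):
    return x ** i * z ** i   # candidate equilibrium: O_i z^i with O_i = x^i

def omega(q, p, k):
    # omega(q,p;k)(cbar) as in (134)
    return a(q, p, k) * cbar(q) * cbar(p) - a(p + k, q - k, k) * cbar(p + k) * cbar(q - k)

for q in range(1, 15):
    for p in range(15):
        for k in range(1, q + 1):
            assert abs(omega(q, p, k)) < 1e-15
print("omega(q,p;k)(cbar) = 0 for all tested (q,p,k)")
```

Both terms of each $\omega$ reduce to $2^{-k}(xz)^{p+q}$, which is exactly the cancellation the detailed balance condition encodes.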

From previous works on cluster dynamics equations of exchange-driven [14] or coagulation-fragmentation [8] type with detailed balance assumptions, we might expect that, under appropriate conditions, the function

$$V(c):=\sum_{i=0}^{\infty}c_i\Big(\log\frac{c_i}{\mathcal{O}_i}-1\Big)\tag{135}$$

can serve as the foundation from which to construct a Lyapunov function for (7) in an appropriate topology of the phase space 𝑋 0 , 1 . Clearly, this is the case when the rate coefficients 𝑎 ​ ( 𝑖 , 𝑗 ; 𝑘 ) are such that (7) becomes one of the above-mentioned cluster systems.

In the case of the generalized exchange-driven system studied in the present paper, computing formally, we have

$$\frac{d}{dt}V(c(t))=\sum_{i=0}^{\infty}\dot{c}_i(t)\log\frac{c_i(t)}{\mathcal{O}_i},$$

and substituting (132)–(134) into this expression we formally get

$$\frac{d}{dt}V(c(t))=\sum_{j=0}^{\infty}\sum_{i=1}^{\infty}\sum_{k=1}^{i}\omega(i,j;k)(c(t))\,\log\frac{c_{j+k}(t)\,\mathcal{O}_j}{c_j(t)\,\mathcal{O}_{j+k}}.\tag{136}$$

Making rigorous these formal computations, studying the sign of (136), and hopefully using these tools in the analysis of the dynamic behaviour of solutions to (7) is clearly the subject of another work.

References

[1] J. M. Ball and J. Carr, The discrete coagulation-fragmentation equations: existence, uniqueness, and density conservation, J. Stat. Phys., 61 (1990) 203–234.
[2] J. M. Ball, J. Carr, and O. Penrose, The Becker-Döring cluster equations: basic properties and asymptotic behaviour of solutions, Commun. Math. Phys., 104 (1986) 657–692.
[3] Z. Banakar, M. Tavana, B. Huff, D. Di Caprio, A bank merger predictive model using the Smoluchowski stochastic coagulation equation and reverse engineering, Int. J. Bank Marketing, 36 (4) (2018) 634–662.
[4] J. Banasiak, W. Lamb, and Ph. Laurençot, Analytic Methods for Coagulation-Fragmentation Models, volume I, Monographs and Research Notes in Mathematics, CRC Press, New York, 2019.
[5] J. Banasiak, W. Lamb, and Ph. Laurençot, Analytic Methods for Coagulation-Fragmentation Models, volume II, Monographs and Research Notes in Mathematics, CRC Press, New York, 2019.
[6] E. Ben-Naim and P. L. Krapivsky, Exchange-driven growth, Phys. Rev. E, 68 (2003) 031104.
[7] J. Carr, Asymptotic behaviour of solutions to the coagulation-fragmentation equations. I. The strong fragmentation case, Proc. Roy. Soc. Edinburgh Sect. A, 121 (1992) 231–244.
[8] F. P. da Costa, Mathematical aspects of coagulation-fragmentation equations, in: Mathematics of Energy and Climate Change, J.-P. Bourguignon, R. Jeltsch, A. A. Pinto, and M. Viana, eds., Springer-Verlag, Cham, 2015, pp. 83–162.
[9] P. Degond, J.-G. Liu, R. L. Pego, Coagulation-fragmentation model for animal group-size statistics, J. Nonlinear Sci., 27 (2017) 379–424.
[10] P. G. J. van Dongen and M. H. Ernst, Kinetics of reversible polymerization, J. Stat. Phys., 37 (1984) 301–324.
[11] C. Eichenberg and A. Schlichting, Self-similar behavior of the exchange-driven growth model with product kernel, Comm. Partial Diff. Eqs., 46 (2021) 498–546.
[12] E. Esenturk, Mathematical theory of exchange-driven growth, Nonlinearity, 31 (2018) 3460–3483.
[13] E. Esenturk and C. Connaughton, Role of zero clusters in exchange-driven growth with and without input, Phys. Rev. E, 101 (2020) 052134.
[14] E. Esenturk and J. Velazquez, Large time behavior of exchange-driven growth, Discr. Cont. Dyn. Syst., 41 (2021) 747–775.
[15] R. D. Guy, A. L. Fogelson, J. P. Keener, Fibrin gel formation in a shear flow, Math. Med. Biol., 24 (2007) 111–130.
[16] P.-F. Hsieh and Y. Sibuya, Basic Theory of Ordinary Differential Equations, Universitext, Springer-Verlag, New York, 1999.
[17] J. Ke and Z. Lin, Kinetics of migration-driven aggregation processes with birth and death, Phys. Rev. E, 67 (2002) 031103.
[18] Ph. Laurençot, The discrete coagulation equations with multiple fragmentation, Proc. Edin. Math. Soc., 45 (2002) 67–82.
[19] F. Leyvraz and S. Redner, Scaling theory for migration-driven aggregate growth, Phys. Rev. Lett., 88 (2002) 068301.
[20] S. Ispolatov, P. L. Krapivsky, and S. Redner, Wealth distributions in models of capital exchange, Eur. Phys. J. B, 2 (1998) 267.
[21] P. A. Mulheran, Theory of cluster growth on surfaces, in: Metallic Nanoparticles, J. A. Blackman (ed.), Handbook of Metal Physics, Vol. 5, Elsevier, Amsterdam, 2008, pp. 73–111.
[22] O. Penrose, Metastable states for the Becker-Döring cluster equations, Commun. Math. Phys., 124 (1989) 515–541.
[23] H. R. Pruppacher and J. D. Klett, Microphysics of Clouds and Precipitation, 2nd edition, Atmospheric and Oceanographic Sciences Library, volume 18, Springer, Dordrecht, 2010.
[24] V. Safronov, Evolution of the Protoplanetary Cloud and Formation of the Earth and the Planets, Israel Program for Scientific Translations, Jerusalem, 1972.
[25] A. Schlichting, The exchange-driven growth model: basic properties and longtime behavior, J. Nonl. Sci., 30 (2020) 793–830.
[26] M. Smoluchowski, Versuch einer mathematischen Theorie der Koagulationskinetik kolloider Lösungen, Z. Phys. Chem., 92 (1917) 129–168.