| { |
| "astrophysics": { |
| "train": { |
| "total_tokens": 786012597, |
| "example": "# Dark energy evolution from quantum gravity\n\nChristof Wetterich\n\nIf an ultraviolet fixed point renders quantum gravity renormalizable, the effective potential for a singlet scalar field -the cosmon -can be computed according to the corresponding scaling solution of the renormalization group equations. We associate the largest intrinsic mass scale generated by the flow away from the fixed point with the scale of present dark energy density or even smaller. This results in a highly predictive scenario for the evolution of dynamical dark energy. It solves the cosmological constant problem dynamically, and may be called \"quantum gravity quintessence\". A first setting without quantum scale symmetry violation in the neutrino sector could explain the present amount of dark energy, but fails for the constraints on its time evolution. In contrast, a logarithmic scale symmetry violation in the beyond standard model sector responsible for the neutrino masses induces a non-vanishing cosmon-neutrino coupling in the Einstein frame. This yields a cosmology similar to growing neutrino quintessence, which could be compatible with present observations. The small number of unknown parameters turns the scaling solution for quantum gravity into a fundamental explanation of dynamical dark energy which can be falsified.\n\nAlready in the first paper on dynamical dark energy or quintessence [1] quantum scale symmetry [2] has played a central role. In quantum gravity the fate of quantum scale symmetry is closely related to the presence of an ultraviolet fixed point of the renormalization group equation, which may be asymptotically safe [3,4] or asymptotically free [5][6][7][8]. For a world living exactly on the fixed point quantum scale symmetry is typically an exact symmetry. In contrast, for a crossover between two fixed points, or in case of a flow away from the ultraviolet fixed point due to a relevant parameter, an intrinsic mass scale yields an explicit breaking of quantum scale symmetry or dilatation symmetry. This intrinsic mass scale can be viewed as dimensional transmutation of running couplings, somewhat similar to the confinement scale in quantum chromodynamics. The largest intrinsic mass scale sets the overall scale of the model and has no directly observable meaning. Only dimensionless ratios, as particle masses or field values in units of this mass scale, are observable. In quantum gravity the largest intrinsic mass scale is often associated with the Planck mass. There is no need for this, however, since a dynamical Planck mass may be given by the value of a scalar field rather than being an intrinsic scale. We propose to associate the largest intrinsic mass scale with a much smaller scale. It may be of the order of a few meV -a scale characteristic for the masses of neutrinos and the present dark energy density, or even much smaller. This simple assumption leads to a highly predictive setting for dynamical dark energy [9,10] that we call \"quantum gravity quintessence\".\n\nThe basic reason for predictivity is the observation that for momentum scales or field values much larger than the intrinsic scale the latter becomes negligible. In this case all properties can be computed from the scaling solution of the renormalization flow. Scaling solutions are particular solutions of systems of non-linear differential equations. Their existence imposes severe constraints which encode properties that may not be recognized by too simple \"naturalness arguments\" for the role of quantum fluctuations. The renormalization group equations or flow equations deal already with the quantum effective action which includes all effects of quantum fluctuations. The flow equations, and therefore their particular scaling solutions, are due to quantum effects.\n\nIn functional renormalization the dimensionless couplings are not only functions of a renormalization scale k. They are also functions of the cosmon field χ. For example, the dimensionless effective potential of the cosmon u = V (χ, k)/k 4 depends, in general, both on k and on χ. For a scaling solution u is only a function of the dimensionless ratio χ/k. Once this function is computed, all cosmon self-interactions, quartic, sextic and so on, are given. Similarly, the coefficient of the curvature scalar F (effective squared Planck mass) defines a dimensionless ratio f = F/k 2. For the scaling solution f is only a function of χ/k. The same holds for the coefficient of the kinetic term of the cosmon K. If u, f and K can be computed, the cosmological field equations derived by variation of this quantum effective action lead to typical models of \"variable gravity\" [11].\n\nAlready the first computations of models of a singlet scalar field coupled to the metric (\"dilaton quantum gravity\") have revealed a simple behavior for large χ/k [12,13]. The function f increases quadratically with χ, f ∼ χ 2 /k 2, such that F = ξχ 2. The dimensionless coupling ξ is the non-minimal coupling of the scalar field to gravity. It is found not to vanish. On the other hand, for the scaling solution the potential settles to a constant for large χ, u(χ/k) → u ∞, V = u ∞ k 4. The scaling solution describes a crossover from an ultraviolet fixed point for χ/k → 0 to an infrared fixed point for χ/k → ∞. For a scaling solution describing a crossover the intrinsic scale is set by k. The intrinsic scale has a dominant effect for the scalar potential V ∼ k 4, while for the coefficient of the curvature scalar it remains subdominant,\n\nThe present cosmological epoch is very close to the infrared fixed point, since the ratio χ/k has grown to huge values ∼ 10 30 during the long history of the universe. This has crucial consequences for predictivity. First, the effects of the metric fluctuations are tiny, being suppressed by powers of k 2 /χ 2. (An exception is the contribution to a constant term in the scalar effective potential.) Second, for momenta or field values much larger than the neutrino mass the intrinsic scale k becomes negligible. In this momentum range one obtains the scale invariant standard model [1,14,15], with all mass scales (e.g. Fermi scale, confinement scale) proportional to χ. This proportionality is a simple consequence of quantum scale symmetry which becomes exact for k → 0. The particles and couplings of the standard model are well known, such that the renormalization group equations can be computed without additional unknown parameters. The functional flow equations coincide with the ones from perturbation theory.\n\nA loophole for predictivity arises from possible beyond standard model (BSM) physics. Even though the BSM sector may not involve additional light particles, it manifests itself through the masses of neutrinos. They are due to dimension five operators which involve a heavy scale where the symmetry B-L is spontaneously broken. We explore here two cases. For the first, the neutrino masses are exactly proportional to χ. In this case the model is very predictive for cosmology. The field equations for the cosmon admit solutions for which the present amount of dark energy can be obtained. This is already rather remarkable since no small dimensionless parameter is present, and the tiny ratio V /F 2 ∼ 10 -120 arises from the dynamical increase of χ/k over the history of the universe. The detailed equation of state is not compatible with observation, however.\n\nFor our second case we consider the possibility that quantum scale symmetry violation in the BSM-sector induces an additional dependence of neutrino masses on χ/k. This dependence will become computable only once a given BSM-sector is assumed. At the present stage, we parameterize the logarithmic running in the BSM-sector by a single free parameter, such that the model still remains highly predictive. The outcome is a model close to growing neutrino quintessence [16,17] which may well be compatible with present observation.\n\n$$F ∼ ξχ 2 + f ∞ k 2.$$\n\n## Cosmon potential from quantum gravity\n\nQuantum gravity computations of the flowing effective action by use of functional flow equations typically yield for a range of fields relevant for our purpose the quantum effective action according to the scaling solution,\n\nThe central point for predictivity is the computation of u(χ) according to the scaling solution of quantum gravity. We are interested in very large values of χ 2 /k 2. In this range the gravitational fluctuations decouple effectively for the flow equation for ξ(χ) and K(χ). Their contribution is suppressed by k 2 /χ 2, reflecting the inverse squared Planck mass. Both quantities reach constant values which are determined by the flow for smaller χ 2 /k 2 where the gravitational fluctuations still matter, and fluctuations for unknown particles beyond the standard model could play a role. Overall, the understanding of the flow of K and ξ has not yet reached the same reliability as for u.\n\nFor the computation of observational consequences it is most suitable to work in the Einstein frame with a fixed (reduced) Planck mass M = 2.436 • 10 18 GeV. The field equations follow then from a quantum effective action\n\nHere the \"wave function renormalization\" or \"kinetial\" Z is, in general, a function of φ. The Weyl transformation of eq. ( 1) to the Einstein frame rescales the metric for constant ξ by\n\nThis results in\n\nwith scalar field χ and φ related by\n\nFor constant ξ one has\n\nWe take here a constant Z as one of the parameters of our model. If u(χ) reaches for asymptotically large χ a constant u ∞ and ξ is constant, the potential U E (φ) decreases exponentially,\n\nThis amounts to a possible dynamical solution of the cosmological constant problem [1] for cosmologies where φ diverges in the infinite future. At present, φ/M is large but finite, resulting in very small U E as needed for an explanation of the present value of the dark energy density. For the scaling solution the Weyl transformation to the Einstein frame eliminates the renormalization scale k. For cosmology one can directly employ the field equations derived from the effective action (2).\n\nFunctional flow of the cosmon potential\n\nThe \"scaling frame\" (1) is the one for which a suitable approximation (truncation) to the exact functional flow equation [18][19][20][21][22] is evaluated. The corresponding functional flow equation for the dimensionless potential u = V (χ)/k 4 describes its dependence on the renormalization scale k, which is given by an infrared cutoff such that in a Euclidean setting only fluctuations with squared momentum q 2 larger than k 2 are included,\n\nThis flow equation holds at fixed ρ = χ 2 /(2k 2 ). The term -4u reflects the denominator in the ratio u = V /k 4, the term -2ρ∂ ρ results from the transition of the flow at fixed χ to fixed ρ, and the last term c U describes the effect of fluctuations on the k-dependence of V (χ). One finds [23,24] c U = 1 128π 2 NS (ρ) + 2 NV (ρ) -2 NF (ρ) + Ng (ρ), (9) with NS, NV, and NF the effective numbers of scalars, gauge bosons and Weyl-fermions, which depend on ρ through ρ-dependent particle masses. (For computations of the flow of the scalar potential within various approximations and truncations see refs. [25][26][27][28][29][30][31].) The contributions from non-gravitational fluctuations (matter fluctuations) are the standard result for functional flow equations [18,19], which are well tested in a large variety of applications [20]. They include one-loop perturbation theory. In a version of gauge invariant flow equations [21] the contribution from metric fluctuations can be approximated [32] by\n\nFor the large values of ρ relevant here v is tiny, resulting in constant Ng = 2, corresponding to the two massless graviton degrees of freedom.\n\nIn the flow equation ( 9) the contributions of bosons are positive, while the contributions of fermions are negative. This reflects the well known signs of the fluctuation contributions to the cosmological constant -the scalar potential can be seen as a field-dependent cosmological constant. These different signs will play an important role for the present note since they are responsible for a negative potential U E (φ) in a certain range of φ.\n\nThe effective particle numbers N involve threshold functions which account for the decoupling of heavy particles once their squared mass m 2 is larger than the cutoff k 2. Details of the threshold functions depend on the precise form of the cutoff. For the Litim cutoff [33] or the simplified flow equation [34] one has for the fermions\n\nHere the sum over f runs over all Weyl-(or equivalently Majorana-) fermions with mass m f. The fermion masses depend on the scalar field χ and we write\n\nwith the effective Yukawa couplings h f. Exact quantum scale symmetry implies constant h f. For example, the effective Yukawa coupling h e for the electron is of the form h e = y e φ 0 (χ)/χ, with vacuum expectation value of the Higgs doublet φ 0 (χ) proportional to χ and y e the Yukawa coupling to the Higgs doublet. The electroweak gauge hierarchy requires a tiny value of φ 0 /χ, implying a very small effective Yukawa coupling h e. From the measured ratio of electron mass m e to Planck mass M one infers\n\nIn order to compute the relevant value of m2 for a given stage of the cosmological evolution we need the relevant value of ρ. The units of k are arbitrary. We choose units which identify the present value of the potential with the present observed dark energy density, which obtains for a present Hubble parameter h = 0.7 and dark energy fraction Ω h,0 = 0.7 as\n\nFor realistic cosmology with U E (t 0 )/M 4 ≈ 10 -120 the present value of χ/k must be very large according to\n\n) With u(t 0 ) a few times (128π 2 ) -1, see below, this amounts to a very large present value of ρ somewhere around ρ(t 0 ) ≈ 10 58. This large value will be related later to the increase of ρ(t) over a huge time period in Planck units -it is a consequence of the huge age of our universe.\n\nWe may estimate the present value of the dimensionless neutrino mass mν,\n\nThis leads to the interesting conclusion that the present epoch of cosmology coincides roughly with the epoch when the neutrino fluctuations decouple from the flow of the effective potential. It may therefore not be surprising that close to the present epoch important qualitative changes can occur in the evolution of dynamical dark energy. On the other hand, in the present epoch electrons and all other charged fermions have already decoupled from the flow. For the boson fluctuations only the photon, the cosmon and the graviton contribute to the flow of the potential in the range of ρ relevant for the present cosmological epoch. Photons are massless, resulting in NV = 1. For the cosmon fluctuations the mass term is given by the second derivative of the effective potential with respect to χ,\n\nwhere primes denote derivatives with respect to ρ, u ′ = ∂ ρu. This completes the flow generator c U for the range of ρ relevant for cosmology close to the present epoch.\n\nFor earlier epochs ρ(t) is smaller than ρ(t 0 ). Electron fluctuations matter for a range of ρ smaller than h -2 e. For even much smaller ρ one has to include fluctuation contributions from muons and pions, and so on.\n\n$$Γ k = x g ′ - ξ 2 χ 2 R ′ + u(χ)k 4 + 1 2 K∂ µ χ∂ µ χ.(1)$$\n\n$$Γ = x √ g - M 2 2 R + U E (φ) + 1 2 Z∂ µ φ∂ µ φ. (2$$\n\n$$)$$\n\n$$g (E) µν = ξχ 2 M 2 g ′ µν.(3)$$\n\n$$U E (φ) = u(χ)k 4 M 4 ξ 2 χ 4,(4)$$\n\n$$φ = 4M ln χ k.(5)$$\n\n$$Z(φ) = 116$$\n\n$$K(χ) ξ + 6.(6)$$\n\n$$U E = u ∞ M 4 ξ 2 exp - φ M.(7)$$\n\n$$k∂ k u = -4u + 2ρ∂ ρu + 4c U.(8)$$\n\n$$Ng = 5 1 -v + 1 1 -v/4 -4, v = u ξ ρ.(10)$$\n\n$$NF = f 1 + m2 f -1, m2 f = m 2 f (ρ) k 2. (11$$\n\n$$)$$\n\n$$m f = h f χ, m2 f (ρ) = 2h 2 f (ρ)ρ,(12)$$\n\n$$h e = √ ξm e M.(13)$$\n\n$$u(t 0 )k 4 = U E (t 0 ) = (2.229 • 10 -3 eV) 4.(14)$$\n\n$$M 2 = ξχ 2 (t 0 ), U E (t 0 ) = u(t 0 )M 4 ξ 2 exp - φ(t 0 ) M = u(t 0 )M 4 4ξ 2 ρ(t 0 ) -2. (15$$\n\n$$m2 ν (t 0 ) = u(t 0 ) m ν 2.229 • 10 -3 eV 2. (16$$\n\n$$)$$\n\n$$NS = 1 1 + u ′ + 2ρu ′′,(17)$$\n\n## Scaling solution for the cosmon potential\n\nThe differential equation ( 8) admits a scaling solution for k∂ k u = 0. It is given by a solution of the non-linear differential equation\n\nThe solution of this non-linear differential equation has to exist for the whole range 0 ≤ ρ < ∞. Combined with a similar equation for f (ρ) this imposes severe constraints, such that the only scaling solutions found have constant u for ρ → ∞. With boundary condition u(ρ → ∞) = u ∞ the scaling solution reads [10]\n\nwith integrated threshold function obeying the differential equation\n\nIt describes the effective decoupling of fluctuations with m2 f ≫ 1. In the range of ρ relevant for present cosmology only the neutrinos contribute effectively in the sum over fermions. For the cosmon contribution we employ\n\nand infer that the scalar mass is tiny in the relevant range of ρ, NS = 1. The factor 5 in eq. ( 19) simply counts the number of massless bosonic degrees of freedom. (We have neglected in the flow equation mixing effects between metric-and scalar fluctuations [35]. They do not alter the conclusion Ng + NS = 3 for the relevant range of large ρ. The scaling solution for u should be seen together with a scaling solution for the dimensionless coefficient of the curvature scalar f. The solution f = 2ξ ρ for large ρ is already incorporated in our ansatz (1). In particular, there is no combined scaling solution for u and f which is compatible with u(ρ → ∞) ∼ ρ2.)\n\nThe scaling solution (19) can be translated to the potential in the Einstein frame ( 4)\n\nwith u ∞ = 5/(128π 2 ) ≈ 0.004. Here U 0 is a free constant that specifies the definition of φ and will be chosen as U 0 = (2.229 • 10 -3 eV) 4. The constant φ accounts for the dominant part of φ(t 0 ) and absorbs ξ,\n\nThe present value φ(t 0 ) is given by\n\nwhere a realistic cosmology requires 0 < u φ(t 0 ) < u ∞. With eqs. ( 4), ( 18) the extrema for U E occur for the scaling solution at c U = 0, or\n\nBesides the overall exponential factor the φdependence of U E (φ) is governed by\n\nwith (2ρ = exp(φ/2M ))\n\nThe Weyl transformation to the Einstein frame also rescales the fermion masses, which obey now\n\nThe function\n\nbecomes a constant if the Yukawa coupling h f does not depend on ρ,\n\nIn the Einstein frame the renormalization scale k is no longer present. We first concentrate on the case of constant h f and discuss a possible φ-dependence in the second part. In this case one finds for the integrated threshold function\n\nThe decoupling for large m2 f ≫ 1 follows from\n\nAs φ decreases the dimensionless fermion masses mf get smaller and t u approaches one. For small enough φ this leads to a negative value of u(φ), and therefore to a negative potential U E (φ). This negative region sets in once the largest neutrino mass term m2 ν (φ) is small enough. For the range of φ for which m2 e ≫ 1, m2 ν ≪ 1 one has Nf = 3 and therefore u(φ) = -1/(128π 2 ). The value of φ where the potential switches from negative to positive values depends on the generation structure of the neutrino masses. For degenerate neutrino masses it is given by t u m2 ν ( φ) = 5/6 or m2 ν ≈ 1/12. For hierarchical neutrino masses, m2\n\n1, m2 2 ≪ m2 3, m 3 = 0.06 eV, the switch of sign occurs for t u m2 3 = 1/2. For hierarchical neutrino masses we plot the potential U E (φ) for the scaling solution and constant Yukawa couplings in fig. 1. Units of U E are given by U 0, and ln ρ-c ρ. The maximum of the potential occurs for values rather close to the present dark energy density. For degenerate neutrino masses the potential maximum occurs for smaller x and is higher. For a given set of neutrino masses the potential U E for the scaling solution with constant Yukawa couplings is a parameter free function of the field variable (φφ)/M. It only depends on the masses of neutrinos. In addition, the dynamics depends on the kinetial Z. (We could use a canonical scalar field σ. This would lead to a one-parameter family of potentials\n\nIn the Einstein frame the fermion masses are constant in case of field-independent Yukawa couplings.\n\nEarly cosmology for the scaling potential For constant particle masses the evolution of the energy density of particles (radiation plus matter) follows the standard conservation laws. For a homogeneous isotropic universe the cosmological field equations read, with t cosmic time, a(t) the scale factor and H(t) = ∂ t ln a the Hubble parameter,\n\nwith\n\nand ρ E the energy density of matter and radiation in the Einstein frame (without the contribution of the cosmon).\n\nOne has n = 4 for radiation domination, n = 3 for matter domination, and more generally n is a smooth function of particle masses and temperature interpolating between the limits. We are interested in solutions of these field equations for a recent cosmological epoch for which ρ E is dominated by non-relativistic matter, n = 3. For this purpose we need for some suitable time t in the initial values φ(t in ), ∂ t φ(t in ) and ρ E (t in ). We choose φ(t in ) in a range where neutrinos are the only fermions contributing to the flow equation for the cosmon potential. For establishing a range of reasonable values for φ(t in ) and ∂ t φ(t in ) we need to understand the qualitative features of the cosmological evolution of the scalar field prior to t in. For realistic cosmologies the contributions of the scalar potential and kinetic energies will be very small at t in, such that the dominant cosmology near t in is matter dominated with\n\n). The outcome of the investigation of early cosmology is rather simple. It establishes that for a suitable range of Z the initial value φ(t in ) is large enough such that x is on the right of the maximum in fig. 1. This amounts to a type of \"thawing dark energy\" [1,36] for which φ remains constant for a long period in the radiation and matter dominated epochs. The gradient of the potential induces a change of φ only recently. We next present some details leading to this conclusion.\n\nWhile we focus in this note on late cosmology, our model actually covers the whole history of the universe. For small ρ the scaling solution for u may involve particles beyond the standard model, as for grand unified theories. The overall features are rather independent of the precise particle content. An early inflationary epoch requires positive U E for a range of small χ. For the scaling solution of the flow equations the effects of additional bosons beyond the standard model have to turn c U positive, c.f. eq. ( 9). This is typically the case for grand unified models, see ref. [10] for a detailed discussion of the inflationary epoch. It is followed by an epoch of kination [1], for which the energy density of the universe is dominated by the kinetic energy of the scalar field.\n\nNeglecting ρ E and U E the solution of eq. ( 33) reads\n\nwhere the kination epoch starts at t kin with a value φ(t kin ) = φ kin. For this epoch one has\n\nThis factor governs the relative importance of the potential U E. In particular, for Z < 1/6 the potential becomes more and more negligible during the epoch of kination.\n\nIn contrast, during kination the energy density in radiation or matter decreases slower than the scalar kinetic energy density,\n\nand will finally overwhelm the latter. If the potential remains negligible the epoch of kination ends once ρ E starts to dominate. For the radiation and matter dominated epochs the kinetic and potential energy of the scalar field make only a small contribution to H 2, such that the overall cosmology follows the standard picture. As long as U E can be neglected the approximate solution becomes\n\nand the relative importance of the scalar kinetic energy decreases\n\nThe scalar field almost stops its evolution, such that the ratio of potential to kinetic energy of the scalar field increases.\n\nAs long as the gradient of the potential can be neglected the overall picture of the evolution of the scalar field is rather simple. During the kination epoch φ increases logarithmically until it almost settles at φ r at some time tr after the onset of radiation domination. If φ r is sufficiently below the value φ max where U E has its maximum, the value of the scalar field will start to decrease once the term ∂U E /∂φ > 0 becomes important. On the other hand, for φ r > φ max the scalar field increases and the potential U E remains positive for all t > tr. A positive present value of U E is needed for any realistic cosmology. A viable solution is therefore φ r > φ max.\n\nThe value φ r is determined by the value of the scalar field at the end of inflation and the duration of the kination epoch. During kination the scalar field changes by\n\nThis change has to be large enough such that φ r > φ max.\n\nOn the other hand, for very large φ r /M the cosmon contribution to the energy density remains very small even at the present epoch. In this case the model cannot account for dark energy. These constraints limit the allowed range for Z.\n\nThe measured amplitude of the primordial fluctuations yields information about the value φ at the time of decoupling of the primordial density fluctuations,\n\nwith r ≲ 0.05 the tensor to scalar ratio. Parameterizing the kinetic energy at the beginning of kination, 2, in terms of the potential energy at decoupling,\n\none obtains\n\nHere ∆trans accounts for details at the end of inflation and reads in our parameterization, for r = 1/36, ∆trans = ln\n\nUp to smaller quantitative details the value φ r where the scalar field settles after the beginning of radiation domination depends on the two parameters Z and ρ\n\nr /1 GeV. We may roughly estimate the value of φ r needed for a realistic cosmology by associating U E (φ r ) with the present dark energy density U 0 = 2.229\n\nThis yields, up to shifts of the order of a few,\n\nFor ρ\n\n1/4 r = 10 6 GeV (1 GeV) one finds typical values Z = 0.022 (0.057). We emphasize that no particular fine tuning of Z is needed for realistic cosmology. A change of a factor 10 4 in U E (ρ r ) corresponds to an additive change of φ/M by 9.2 or a relative change ∆Z/Z of around 0.4 √ Z. The values of Z found obey Z < 1/6, such that the potential remains indeed negligible during kination.\n\n$$ρ∂ ρu = 2(u -c U ). (18$$\n\n$$)$$\n\n$$u = 5 128π 2 - 1 64π 2 f t u m2 f,(19)$$\n\n$$ρ∂ ρt u = 2t u - 2 1 + m2 f.(20)$$\n\n$$u ′ = - 1 64π 2 f ∂ ρt u m2 f (21) = 1 32π 2 ρ f 1 1 + m2 f -t u m2 f,$$\n\n$$U E = u(φ)M 4 ξ 2 exp - φ M = U 0 u(φ) u ∞ exp - φ - φ M,(22)$$\n\n$$φ M = ln u ∞ M 4 ξ 2 U 0. (23$$\n\n$$)$$\n\n$$φ(t 0 ) - φ M = ln u φ(t 0 ) u ∞ -ln U E (t 0 ) U 0,(24)$$\n\n$$ν 1 1 + m2 ν = 5 2. (25$$\n\n$$)$$\n\n$$u(φ) u ∞ = 1 - 2 5 f t u m2 f (φ),(26)$$\n\n$$m2 f (φ) = h 2 f (φ) exp φ 2M = c f (φ) exp φ - φ 2M. (27$$\n\n$$)$$\n\n$$m f (φ) = h f (φ)M/ ξ. (28$$\n\n$$)$$\n\n$$c f (φ) = h 2 f (φ) h 2 f φ(t 0 ) m 2 f (t 0 ) √ u ∞ √ U 0,(29)$$\n\n$$c f,0 = m f (t 0 ) 9 • 10 -3 eV 2. (30$$\n\n$$)$$\n\n$$t u ( m2 f ) = 1 -2 m2 f -2 m4 f ln m2 f 1 + m2 f.(31)$$\n\n$$t u m2 f ≫ 1 = 2 3 m2 f. (32$$\n\n$$)$$\n\n$$x = (φ -φ)/(2M ) = -4 -3 -2 -10$$\n\n$$V (σ) = U E φ(σ), with φ -φ = σ/ √ Z.)$$\n\n$$(∂ 2 t + 3H∂ t )φ = - 1 Z ∂U E ∂φ, ∂ t ρ E = -nHρ E,(33)$$\n\n$$H 2 = 1 3M 2 U E + Z 2 (∂ t φ) 2 + ρ E,(34)$$\n\n$$ρ E (t in ) = 3M 2 H 2 (t in ) = 4M 2 /(3t 2 in$$\n\n$$H = 1 3t, φ M = 2 3Z ln t t kin + φ kin M,(35)$$\n\n$$H -2 exp - φ M ∼ t 2- √ 2 3Z. (36$$\n\n$$)$$\n\n$$ρ E ∼ t -n 3,(37)$$\n\n$$H = 2 nt, φ = c n t -6 n, φ = c n n n -6 t n-6 n + φ r,(38)$$\n\n$$H -2 φ2 ∼ t 2(n-6) n. (39$$\n\n$$)$$\n\n$$∆φ M = 2 3Z ln t r t kin = 2 3Z ln H kin H r = 8 3Z ln ρ kin ρ r 1/4. (40$$\n\n$$)$$\n\n$$3.56 • 10 -8 r = u(φ d ) ξ 2 exp - φ d M,(41)$$\n\n$$ρ kin = (Z/2)(∂ t φ(t in ))$$\n\n$$ρ kin = ūkin ξ 2 exp - φ d M M 4,(42)$$\n\n$$φ r M = 1 - 1 √ 6Z ln 10 9 + 8 3Z ln 2.4 • 10 18 - 8 3Z ln ρ 1/4 r 1 GeV + ∆trans.(43)$$\n\n$$u d ξ 2 + 1 √ 6Z ln ūkin ξ 2. (44$$\n\n$$)$$\n\n$$1/4$$\n\n$$• 10 -3 eV 4, u ∞ ξ 2 exp - φ r M M 4 = U 0.(45)$$\n\n$$φ r M = 277 = 21 + 61 √ Z - 8 3Z ln φ 1/4 r 1 GeV. (46$$\n\n$$)$$\n\n## Dynamical dark energy\n\nWe next consider the evolution of the cosmon field in late cosmology, when the gradient of the potential starts to play a role. It is convenient to employ variables\n\nfor which the scalar field equation reads\n\n(48) Taking a y-derivative of the equation for H 2 yields\n\nWe therefore have two field equations ( 48), (49) for the functions x(y) and\n\nwith\n\nNote that U 0 and H 0 are not parameters of the model. They only define the convention which fixes the additive constants in the definition of the variables x and g. We choose H 2 0 such that U 0 /(3M 2 H 2 0 ) = 0.7. For a realistic cosmology we have to require that the cosmon energy density ρ h = U E + (Z/2)(∂ t φ) 2 remains positive for all time. The monotonic decrease of ρ h implies that negative ρ h cannot turn positive again. The evolution of Ω h obeys\n\nThe equation of state of the cosmon dark energy is defined by\n\nThe condition for realistic cosmology requires for all t the relation Ω K + Ω V > 0 and therefore w h > -1. For all epochs with U E < 0, Ω V < 0 one has w h > 1. We can write eq. ( 52) in terms of w h as\n\nThe condition for realistic cosmology is equivalent to finite w h for all time.\n\nFor the scaling solution one has\n\n(55) Particular solutions with a static field occur for x(y) = x max, with c U (x max ) = 0 corresponding to the maximum of U E (φ). For the Hubble parameter one has in this case the standard solution with a cosmological constant\n\nwhich corresponds to 3M 2 H 2 = ρ E + λ max. This particular \"cosmological constant solution\" requires tuning of parameters or initial conditions such that φ r = φ max. It is not compatible with observation since the value U E (φ max ) is larger than observed.\n\nWe have solved the system of evolution equations numerically choosing hierarchical neutrino masses m ν = (0.0005, 0.008, 0.058) eV and Z = 0.022. The maximum of U E occurs for x max ≈ -4.042199. We take at y in = ln 1 500 the initial values ∂ y x(y in ) = 0, g(y in ) = 9.28777. If we start with x(y in ) slightly above x max the scalar field has changed only little at y = 0. Thus dark energy is dominated by the cosmon potential and one finds for a typical example the present values for dark energy Ω h (0) = 0.922 and w h (0) = -0.983. The value of Ω h (0) is too high as compared to the observed value Ω h (0) ≈ 0.7. This is due to the fact that U E (x max ) is larger than U 0. Increasing further x(y in ), the present scalar kinetic energy and w h increase, resulting for example in w h (0) = -0.822, Ω h (0) = 0.92. We plot for this case the evolution of Ω h, Ω K and Ω E in fig. 2. In the future for y > 0 a scaling solution [1,37] with constant Ω h = 3Z and w h = 0 will be reached. As one further increases x(y in ) slightly, dark energy becomes important earlier, and also the decrease of Ω h and increase of Ω K begin at smaller y. One can tune x(y in ) in order to obtain a given Ω h (0). For x(y in ) = -4.04219658 the evolution is shown for Ω h, Ω K and Ω E in fig. 3. The present dark energy fraction Ω h (0) = 0.7 would be consistent with observation. At y = 0 the dark energy (blue curve) is dominated by the cosmon kinetic energy (orange curve), resulting in an equation of state w h (0) ≈ 1. This is not compatible with observation. The present value of the Hubble parameter obtains from g(0) = 0.0022 very close to H 0. The present dark energy fraction is Ω h = 0.7, with equation of state w h (0) close to one. The dynamics is similar to Fig. 2, but shifted in y.\n\nWe draw three main conclusions from this investigation. First, for the scaling solution of quantum gravity with constant neutrino masses in the Einstein frame the overall features of cosmology look qualitatively similar to a typical universe with thawing dark energy, provided Z is in a suitable range. Second, the precise values of Ω h (0) and w h (0) conflict with observation. Third, with our given assumptions the scaling solution of quantum gravity is predictive. Besides the neutrino masses and Z there is no free parameter available which could influence the outcome. For larger neutrino masses and somewhat different Z the incompatibility with observation does not change.\n\nThere are several ways how the cosmology corresponding to the scaling solution of quantum gravity can be modified. First for the scaling solution the effective neutrino Yukawa couplings h ν (ρ) may depend on ρ nontrivially. We will see that this can lead to growing neutrino quintessence [16,17].\n\nSecond, the solution of the flow equation may deviate from the scaling solution. This implies the presence of relevant parameters and intrinsic mass scales associated to them. The largest intrinsic mass scale may be denoted by k. As a consequence, the scaling solution holds to a good approximation for k ≫ k, while substantial deviations from the scaling solution occur for k ≤ k. The maximal intrinsic mass scale is a free parameter which sets the overall scale. We take it in the vicinity of 2•10 -3 eV. The effective squared Planck mass will then be dominated by M 2 ≈ ξχ 2, with only a tiny correction ∼ k2. For the scaling potential U = uk 4 one expects more substantial modifications, for example by an additional constant k4. A discussion of this possibility is postponed to future work.\n\n$$y = ln a, ∂ t = H∂ y, x = 1 2M (φ -φ) = ln ρ -c ρ, Ω V = U E 3M 2 H 2 = U 0 3M 2 H 2 u(x) u ∞ exp(-2x), Ω K = Z(∂ t φ) 2 6M 2 H 2 = 2Z 3 ∂ y x 2, Ω E = ρ E 3M 2 H 2, Ω E + Ω V + Ω K = 1, Ω h = Ω V + Ω K,(47)$$\n\n$$∂ 2 y + 3 + ∂ y ln H ∂ y x = 3 2Z 1 - 1 2 ∂ x ln u Ω V.$$\n\n$$∂ y ln H = - n 2 Ω E + 2Z ∂ y x 2 = - 1 2 n(1 -Ω V ) + (6 -n)Ω K. (49$$\n\n$$)$$\n\n$$g(y) = ln H H 0,(50)$$\n\n$$Ω V = U 0 3M 2 H 2 0 u(x) u ∞ exp -2(g + x).(51)$$\n\n$$∂ y Ω h = -∂ y Ω E = Ω E nΩ V -(6 -n)Ω K = Ω E n(1 -Ω E ) -6Ω K. (52$$\n\n$$)$$\n\n$$w h = Ω K -Ω V Ω K + Ω V = -1 + 2Ω K Ω h.(53)$$\n\n$$∂ y Ω h = (1 -Ω h )Ω h n -3(1 + w h ).(54)$$\n\n$$∂ x ln u = 2 -2c U /u, or ∂ 2 y + (3 + ∂ y g)∂ y x = U 0 c U (x) 2ZM 2 H 2 0 u ∞ exp -2(g + x).$$\n\n$$λ max = U 0 u(x max ) u ∞ exp(-2x max ),(56) namely ḡ$$\n\n$$(y) = 1 2 ln λ max + ρ E,0 exp(-ny) 3M 2 H 2 0,(57)$$\n\n## Scale symmetry violation in the neutrino sector\n\nLet us consider the possibility that the effective neutrino Yukawa coupling h ν (ρ) shows a non-trivial dependence on ρ. The masses of the left-handed neutrinos of the standard model of particle physics arise from nonrenormalizable couplings, which are sensitive to some \"beyond standard model\" (BSM) sector. A non-trivial ρ-dependence of the scaling solution in the BSM-sector, beyond the proportionality of mass scales to χ according to quantum scale symmetry, can result in non-trivial h ν (ρ). Thus ∂ ρh ν (ρ) ̸ = 0 indicates a violation of quantum scale symmetry for the scaling solution in the BSMsector.\n\nIf the characteristic χ-dependent mass scales in the BSM-sector are much larger than the Fermi scale (expectation value of the Higgs doublet φ 0 ∼ χ), the neutrino masses are small as compared to the electron mass due to some \"seesaw mechanism\" or \"cascade mechanism\". They are suppressed either by the mass of a \"right-handed\" or \"sterile\" neutrino [38][39][40], or by the mass of a scalar triplet [41,42]. If one of these masses is not exactly proportional to χ, or dimensionless couplings are not independent of χ, the quantum scale symmetry in the BSM-sector is violated. We make here the simplified assumption that this scale symmetry violation is common to all three neutrino masses and parameterize\n\nwith constant b ν. Here φ 0 is the doublet expectation value, φ 2 0 = εχ 2, ε ≪ 1, and m B-L (χ) = g B-L (χ)χ is the effective heavy mass scale associated to B -L-violating effects in the BSM-sector. The effective coupling g B-L (χ) incorporates the details of the mass-generation for neutrinos. For the scaling solution one has\n\nsuch that a non-trivial ρ-dependence arises from the nontrivial ρ-dependence of g B-L (ρ). Typically, g B-L is some dimensionless combination of couplings, as the Yukawa coupling of the right-handed neutrinos, or quartic scalar couplings which determine the mass of the heavy triplet, a cubic coupling between the triplet and two doublets, or ratios of various expectation values. For a scaling solution which has not yet settled at some quantum scale invariant limit one could expect some logarithmic dependence of g B-L on χ/k, which we parameterize in the range of interest for ρ by\n\n(This parameterization is not supposed to be valid for ρ → 0 or in a range where g B-L (ρ) would vanish.) Translating to φ yields\n\nWe consider positive c B-L and g B-L such that the neutrino masses in the Einstein frame increase with increasing φ.\n\n$$m ν = b ν φ 2 0 (χ) m B-L (χ) = b ν εχ 2 g B-L (χ)χ,(58)$$\n\n$$h ν (ρ) = b ν ε g B-L (ρ),(59)$$\n\n$$g B-L (ρ) = ḡB-L -c B-L ln χ k.(60)$$\n\n$$h ν (φ) = 4b ν εM 4ḡ B-L M -c B-L φ. (61$$\n\n$$)$$\n\n## Cosmon-neutrino coupling\n\nThe φ-dependence of the neutrino masses in the Einstein frame induces an effective coupling between the cosmon and neutrinos [16,37],\n\nwith\n\nFor a slow running, c B-L ≪ ḡB-L, the unknown parameter φ c /M may be in the range of the present value of φ/M, i.e. φ c close to φ. In the limit c B-L → 0,\n\nx c → ∞ one has β → 0 and recovers the case of the field-independent Yukawa coupling discussed previously. The cosmon-neutrino coupling has been employed in several models of dynamical dark energy with mass-varying neutrinos [43][44][45][46][47][48][49][50].\n\nThe individual neutrino masses obey\n\nIf we take for φ 0 the present value of φ, and correspondingly for x 0, the free parameters µ ν correspond to the present neutrino masses. In principle, one can choose x 0 arbitrarily. The free parameters in the neutrino sector are then x c and the three constant masses µ ν, which together with x 0 define the neutrino masses for a given value of φ. In other words, x c is the only additional free parameter for this scenario. With h ν (φ) = m ν (φ) √ ξ/M one inserts for the scaling potential (19)\n\nA non-vanishing cosmon-neutrino coupling β has several effects on the cosmology of the scaling solution. First, the evolution equation of the cosmon contains an additional term proportional to the trace of the energymomentum tensor of the neutrinos\n\nSecond, the energy-momentum tensor of the neutrinos is not conserved, according to [16,51],\n\nEnergy is exchanged between the neutrino-and cosmonsector and only the combined energy momentum tensor for the scalar field and neutrinos is covariantly conserved,\n\nThird, a variable neutrino mass influences the time when neutrinos get non-relativistic, i.e. when the neutrino pressure p ν differs substantially from ρ ν /3. For negative β and increasing neutrino masses neutrinos become non-relativistic later as compared to the case of constant neutrino masses. Only for non-relativistic neutrinos the modifications in eqs. (66), (67) matter. Fourth, the scaling solution for the cosmon potential is modified since m2 f (φ) involves an additional factor\n\nThe integrated threshold functions obey now the relation\n\nwith\n\nIt has the limiting behavior (with constant b -)\n\nWe solve eq. ( 70) numerically. For x c = 0.216, x 0 = 0.08, µ 1 = 0.002 eV, µ 2 = 0.01 eV, µ 3 = 0.24 eV, μν = 0.024 eV we plot the effective cosmon potential in fig. to larger x and its height is lower. For larger µ ν the potential maximum occurs for smaller x and its height increases.\n\nFifth, the additional cosmon-mediated attractive force between neutrinos accelerates the growth of fluctuations in the neutrino-sector. In our approximation all these effects are governed by a single new parameter x c, which determines the function β(x) according to eq. ( 62).\n\n$$β = - ∂ ln m ν ∂φ M = - M φ c -φ = - 1 2(x c -x),(62)$$\n\n$$φ c M = 4ḡ B-L c B-L, x c = φ c - φ 2M. (63$$\n\n$$)$$\n\n$$m ν (φ) = µ ν (φ c -φ 0 ) φ c -φ = µ ν (x c -x 0 ) x c -x. (64$$\n\n$$)$$\n\n$$m2 ν (x) = u ∞ µ 4 ν U 0 x c -x 0 x c -x 2 exp(x).(65)$$\n\n$$Z ∂ 2 t + 3H∂ t φ = - ∂U E ∂φ + β M (ρ ν -3p ν ).(66)$$\n\n$$∂ t ρ ν + 3H(ρ ν + p ν ) = - β M (ρ ν -3p ν )∂ t φ. (67$$\n\n$$)$$\n\n$$∂ t (ρ ν + ρ h ) = -3H ρ ν + p ν + Z(∂ t φ) 2. (68$$\n\n$$)$$\n\n$$h 2 ν (φ) h 2 ν (φ 0 ) = 4ḡ B-L M -c B-L φ 0 4ḡ B-L M -c B-L φ(t) 2 = φ c -φ 0 φ c -φ(t) 2 = x c -x 0 x c -x 2. (69$$\n\n$$)$$\n\n$$∂ x t (ν) u = 2t (ν) u - 2 1 + a ν (x c -x) -2 e x,(70)$$\n\n$$a ν = - u ∞ µ 4 ν U 0 (x c -x 0 ) 2. (71$$\n\n$$)$$\n\n$$t (ν) u (x → -∞) = 1 -b -e 2x, t (ν) u (x → x c ) = 2e -xc 3a ν (x c -x) 3. (72$$\n\n$$)$$\n\n## Cosmology for growing neutrino quintessence\n\nFor cosmology we have to take the effects of the neutrino fluid into account, with\n\nHere Ω m and Ω γ are the fractions of matter and photons, corresponding to the energy densities with the usual scaling behavior ρ m ∼ a -3, ρ γ ∼ a -4. For the evolution of ρ ν or Ω ν we need the effective equation of state in the neutrino-sector w ν = p ν /ρ ν. In addition to eq. ( 67) or (68) we will use\n\nwith total number density of neutrinos n ν ∼ a -3, and average neutrino mass mν,\n\nIn early cosmology the neutrino masses are negligible, w ν = 1/3, and ρ ν ∼ T 4 ν is given by the effective neutrino temperature T ν. For the present epoch of cosmology p ν is of the same order as the energy density of photons and therefore negligible, w ν ≈ 0.\n\nFor the present cosmological epoch we may combine the cosmon and neutrino energy density into a common effective dark energy density ρ d,\n\nNeglecting p ν one has for late cosmology, with μν =\n\nThe equation of state for this combined effective dark energy w d obeys for non-relativistic neutrinos\n\nIt is restricted by w d > -1, coming close to (-1) if T and ρ ν are small as compared to U E. This combined effective equation of state can be compared to the one for the scalar field (53).\n\nFor a numerical solution we employ the field equation\n\nwhere we insert\n\nHere Ω V and Ω K are given by eqs. ( 51) and ( 47), respectively. For the scaling solution we employ\n\nThe factor 1-3w ν suppresses the effects of the cosmonneutrino coupling as long as the neutrinos are relativistic. Using n ν ∼ a -3 we employ\n\nwhere T γ,0 = 2.348 • 10 -4 eV is the present photon temperature and c ν ≈ 0.548, see below. The evolution of the neutrino fraction obeys\n\n(84) In turn, we can combine eq. ( 82) with Ω ν in order to extract the neutrino equation of state w ν in eq. ( 80),\n\nFor the photon fraction we employ\n\nAt this point all quantities for the system of cosmic equations (79), (80), (84) are well defined. This system is solved numerically.\n\n$$Ω ν = ρ ν 3M 2 H 2, Ω m + Ω V + Ω K + Ω ν + Ω γ = 1. (73)$$\n\n$$ρ ν -3p ν = mν n ν,(74)$$\n\n$$mν = 1 3 ν m ν. (75$$\n\n$$)$$\n\n$$ρ d = ρ ν + ρ h = ρ ν + U E + Z 2 ∂ t φ 2. (76$$\n\n$$)$$\n\n$$1 3 ν µ ν, ∂ t ρ d = -3H μν n ν (φ c -φ 0 ) φ c -φ + Z ∂ t φ 2 = -3(1 + w d )Hρ d. (77$$\n\n$$)$$\n\n$$w d = T -U E T + U E + ρ ν, T = Z 2 ∂ t φ 2, ρ ν = φ c -φ 0 φ c -φ μν n ν. (78$$\n\n$$)$$\n\n$$∂ 2 y x + (3 + ∂ y g)∂ y x = 3 2Z 1 - 1 2 ∂ x ln u Ω V - Ω ν (1 -3w ν ) 2(x c -x),(79)$$\n\n$$∂ y g = - 3 2 1 + w ν Ω ν + Ω K -Ω V - 1 2 Ω γ. (80$$\n\n$$)$$\n\n$$1 - 1 2 ∂ x ln u Ω V = 0.7 c U u ∞ exp -2(g + x). (81$$\n\n$$)$$\n\n$$Ω ν (1 -3w ν ) = mν n ν 3M 2 H 2 (82) = μν B 3M 2 H 2 0 exp -(3y + 2g) x c -x 0 x c -x, with B = n ν (y in ) exp(3y in ) = 4c ν 11 T 3 γ,0,(83)$$\n\n$$∂ y Ω ν = -Ω ν 4 + 2∂ y g -1 + ∂ y x x c -x 1 -3w ν.$$\n\n$$w ν Ω ν = 1 3 Ω ν - 1 3 1 -3w ν Ω ν.(85)$$\n\n$$Ω γ = π 2 T 4 γ,0 45M 2 H 2 0 exp (-(4y + 2g).(86)$$\n\n## Initial conditions\n\nWe need to determine the initial conditions, which we choose here at a in = 1/3000, y in = -8.0 near matterradiation equality. The initial neutrino number density n ν (y in ) is given by (ζ(3) = 1.202)\n\nThe neutrino temperature T ν is related to the photon temperature T γ\n\nThis yields eq. ( 83) and implies for the evolution equation (79) of the scalar field\n\nwith\n\nFor the initial value of g(y in ) we need the contribution of photons and neutrinos to the energy density at y in, as determined for an early epoch where neutrino masses are negligible by\n\nwith matter-radiation equality at a eq = 1/3390. For y in = ln(1/3000) = -8.0 this yields the total radiation fraction Ω r (y in ) = 0.47. The initial value for g obeys\n\nHere Ω d,0 is the present total dark energy fraction which sums the cosmon potential and kinetic energy, as well as the energy density of neutrinos. For Ω d,0 = 0.7 this amounts to g(y in ) = 11.708. The value of g(y in ) has to be adapted to match the outcome of Ω d,0. This tuning can be avoided by a more direct formula in eq. ( 97) below.\n\nFor the initial value of Ω ν we use that the neutrino masses are negligible in early cosmology such that\n\n,\n\nand Ω ν (y) = c νγ a eq (1 + c νγ )(e y + a eq ).\n\nFor y in = -8 one finds the initial value Ω ν (y in ) = 0.19.\n\n$$n ν (y in ) = c ν T 3 ν (y in ), c ν = 9ζ(3) 2π 2 = 0.548.(87)$$\n\n$$T ν = 411$$\n\n$$1 3 T γ = 411$$\n\n$$1 3 T γ,0 exp(-y).(88)$$\n\n$$Ω ν (1 -3w ν ) = D μν (x c -x 0 ) 1 eV exp -(3y + 2g) (x c -x),(89)$$\n\n$$D = 4c ν T 3 γ,0 (1 eV) 33M 2 H 2 0 = 0.0657.(90)$$\n\n$$Ω r (a) = Ω γ (a) + Ω ν (a) = a eq a + a eq,(91)$$\n\n$$g(y in ) = - 3y in 2 + 1 2 ln 1 -Ω d,0 - 1 2 ln 1 -Ω r (y in ) -Ω h (y in ),(92)$$\n\n$$Ω ν Ω γ = c νγ =21 8 4 11 4 3$$\n\n$$Ω ν = c νγ 1 + c νγ Ω r,(93)$$\n\n## Evolution of energy densities\n\nAs an alternative numerical setting we follow directly the evolution of ρ m, ρ γ and ρ ν in addition to the evolution of the scalar field given by eq. ( 79). This yields the evolution of the critical density 3M 2 H 2 = ρ m + ρ γ + ρ ν + ρ h and therefore g(y). Since in contrast to the energy fractions Ω j the densities ρ j vary by orders of magnitude the coincidence of the numerical results for both systems of differential equations can be used for an estimate of the numerical error which turns out to be very small.\n\nThe evolution equation (79) for g implies that ρ m, as defined by Ω m = 1 -(Ω h + Ω ν + Ω γ ), scales ∼ a -3. For numerical robustness we can implement directly this relation by\n\nwith ρ m,0 determined from the scale factor at radiationmatter equality as\n\nFor the epoch when the neutrino masses are negligible the ratio ρ ν /ρ γ is fixed. For our model ρ h is negligible around matter-radiation equality or at last scattering. The Hubble parameter is then determined for this epoch.\n\nFor the physics around last scattering relevant for the observed CMB-anistropies the two quantities T γ,0 and a eq determine ρ m (y), ρ γ (y), ρ ν (y) and H(y). With a given baryon ratio Ω b /Ω m also the electron energy density ρ e (y) is known. Combining the measured value of T γ,0 with the value of a eq from Planck-data ensures that for our choice of parameters all physics around last scattering is compatible with constraints from CMB-data. This extends up to the time when neutrino masses begin to matter. One may choose parameters such that the distance to the last scattering surface is the one measured by the Planck collaboration. For the CMB observations the difference between our model and an ΛCDM model fitting the Planck data concerns then only the physics in the late epoch when dark energy starts to matter.\n\nFor neglected neutrino masses one can infer an expression for g(y),\n\nWe actually use this expression for the determination of g(y in ), which yields results very close to eq. (92). For a numerical solution of the system of differential equations we treat x(y in ) as a free parameter, set by the evolution of the scalar field prior to y in. We find that the evolution is rather independent of the initial condition for ∂ y x(y in ). After a rather short initial period ∂ y x settles at a scaling behavior (see below). We take (∂ y x)(y in ) = 0 as second initial condition. The initial conditions for g(y in ) and Ω ν (y in ) are fixed by eqs. (97) (94).\n\n$$ρ m (y) = ρ m,0 exp(-3y),(95)$$\n\n$$ρ m,0 = (1 + c νγ )ρ γ,0 a -1 eq, ρ γ,0 = π 2 15 T 4 γ,0.(96)$$\n\n$$g(y) = 1 2 ln ρ m,0 e -3y + ρ γ,0 (1 + c νγ )e -4y + ρ h -ln 3M 2 H 2 0.(97)$$\n\n## Example for small neutrino masses\n\nThe detailed behavior of the solution of the cosmological field equations for our model of growing quintessence depends on the parameters in a sensitive way. We present here an example for small hierarchical neutrino masses. The parameters are chosen as\n\nFor dark energy this results in the present energy fractions\n\nand equation of state\n\nFor the Hubble parameters one finds\n\nand the distance to the last scattering surface in units of the one measured by the PLANCK-collaboration is given by r d r\n\nFor the neutrino masses one has\n\nx(0) = -0.0013, mν (0) = 0.015 eV.\n\nThese values are, however, only a snapshot of fast oscillations as a function of redshift. For example, one has at redshift z = 1 an equation of state very close to -1, w d (z = 1) = -0.995, and much smaller neutrino masses mν (z = 1) = 0.004 eV. We describe in the following these oscillations in detail.\n\nIn fig. 5 we display the evolution of the dimensionless scalar field x as a function of y = ln a. The neutrino- 98), i.e. Z = 0.022, xc = 0.216, with neutrino masses determined by μν = 0.024 eV, x0 = 0.08. For x(yin) = -0.15 at yin = -ln(3000) the cosmon neutrino coupling is small initially and induces a slow decrease of x. As the universe expands the relative importance of the potential gradient increases, reversing the evolution of x towards a rapid increase near y = -0.5. Subsequently, the scalar field follows an oscillatory approach towards xc. induced force ∼ β first drives x slowly towards smaller values. The values of x shown in the figure correspond to the tail of the scalar potential at the right of the maximum in fig. 4. At some moment the gradient of the potential overtakes and x is pushed to larger values again. The competition between the neutrino-induced and gradient force, which have opposite sign, results in an oscillatory behavior. For time increasing to the future (y > 0) the scalar field essentially approaches a constant. The scalar potential U E (x) therefore becomes almost constant, which can be identified with an effective cosmological constant.\n\nThe decrease of x before the potential becomes important can be understood by solving its evolution equation neglecting the contribution from the potential\n\nFor the matter dominated epoch one has ∂ y g = -3/2 and, with Ω m,0 ≈ 0.3,\n\nresulting in\n\nThis is the damped motion of a particle in a potential\n\nThe \"energy\" decreases according to\n\nFor regions where ∂ 2 y x can be neglected the approximate solution reads\n\nwith x m = x(y m ) and y m denoting the onset of matter domination.\n\nFor the radiation dominated epoch one has g = -2y + const, such that the additional factor exp(y) in eq. ( 104) suppresses the driving force for large negative y. A numerical solution of eq. ( 104) shows a smooth transition from constant x to the solution (110). As a consequence, the value of x reached after the onset of radiation domination will be lowered by an amount ∆x due to the evolution in the matter dominated epoch. If ∆x is too large, the scalar field evolves to values x < x max on the left of the maximum of the potential. In this case it will quickly decrease further as soon as the potential becomes important. No realistic cosmology is found in this case. It is possible that the requirement that x stays larger than x max leads to restrictions on the neutrino masses for which a realistic cosmology can be obtained. We leave this question for further investigation.\n\nFor the present example the oscillations set in only in the future. For other parameters they start already in the past, such that the present cosmology undergoes rapid changes in certain quantities. This is the reason why in our plots we also show the evolution in the future, y > 0. This gives a flavor of what can happen for other parameter choices in the present and recent past. The evolution of the scalar field is reflected in the evolution of the different energy densities or corresponding energy fractions Ω m, Ω d, Ω h, Ω K and Ω ν shown in fig. 6. In early cosmology matter and radiation dominate. For the range y ≈ -1 shown in the figure the universe is matter dominated with Ω m (green) close to one. At this time the cosmon potential starts to play a role, inducing an increase of Ω V. As long as the field value changes slowly the kinetic energy fraction Ω K (orange) remains small, such that Ω h ≈ Ω V. Also the neutrino energy fraction Ω ν (red) is small, such that the curves for Ω h (blue) and Ω d (violet) almost coincide. Subsequently, the potential gradient induces a more rapid increase of x. This is the moment when dark energy \"thaws\", as visible in the increase of the kinetic energy fraction Ω K (orange). The thawing is stopped, however, once x approaches x c. The strong cosmon-neutrino coupling β counteracts the further increase of x and reverses the sign of ∂ y x. With a decreasing neutrino force the scalar kinetic energy fraction decreases (sharp drop in Ω K, orange). The cosmon field decreases again, until the potential gradient takes over. The oscillations of x are visible in the oscillations of Ω h (blue). The sharp drops in Ω h are partly compensated by sharp peaks in Ω ν which occur whenever x is close to x c. While Ω h and Ω ν oscillate strongly due to the oscillating scalar field x, the combined dark en-ergy fraction Ω d is a more smooth function. For the parameters chosen Ω d reaches a value Ω d (0) = 0.695. In summary, the thawing is not smooth as for β = 0. The increase of Ω d, and the corresponding decrease of Ω m, show structures which clearly distinguish the evolution from a cosmological constant.\n\nIn fig. 7 we show the equation of state for the scalar field energy density w h (orange), as well as the effective equation of state w d for the combined neutrino-cosmon fluid (blue), as a function of y.\n\nFor the scalar field Fig. 8. Equation of state w as function of redshift z. We display w h for the scalar field (orange) and the effective equation of state w d (blue). This is compared with a w0 -waparameterization [52] (green) with values w0 = -0.73, wa = -1.05 from a fit to DESI data [53]. One observes rough agreement for values z < 0.5 where dark energy is most important. Values w < -1 are not reached in our model.\n\nw h starts in early cosmology with positive values corresponding to the dominance of the kinetic energy. Once the potential becomes important w h turns negative, being close to w h = -1 for y > -3. Subsequently, the oscillations of Ω K and Ω V are reflected in an oscillating equation of state. For our example the first oscillation peak occurs in the near future. The equation of state differs substantially from w d = -1 only for redshift z < 0.4. We compare in Fig. 8 the redshift dependence of our example with a Chevallier-Polarski [52] parameterization w = w 0 + w a (1 -a) with values w 0 = -0.73, w a = -1.05, corresponding to a fit for the DESI data [53].\n\nThe behavior for z ≤ 0.4 is similar, but our model cannot describe a phantom regime with w d < -1.\n\nIn Fig. 9 we display the average neutrino mass. Once x becomes close to x c (in our example in the future) the evolution of the neutrino masses is strongly oscillating, with rather sharp peaks at the turning points for the evolution of the scalar field near x c. The present average neutrino mass for our particular set of parameters and initial conditions is mν = 0.015 eV. The precise value is highly sensitive to the details in view of the strong oscillations. During structure formation the neutrino masses have been much smaller. Despite the oscillatory behavior the evolution of the Hubble parameter is rather smooth. For our parameters and initial conditions the present value is found h(0) = 0.683. For other settings h(0) may come out larger than 0.7. Still, the detailed evolution of the Hubble parameter shows interesting features. In Fig. 10 we show the Hubble parameter for our example divided by the one for a ΛCDM model with the same value of the present dark energy fraction Ω d (0). We compare it with a similar ratio for a Chevallier-Polarski parameterization of w(a) with w 0 = -0.73, w a = -1.05. Both show the enhancement between redshifts z = 0 and z = 0.8 observed by the DESI-collaboration [53].\n\nFor other values of the neutrino masses or other values of the parameters x c and x(t in ) the fast oscillations often occur already in the recent past for a < 1, y < 0. From the overall oscillating picture it is evident that the detailed value of observables today depends in this case sensitively on the value of x c and the initial value x(t in ). In the plane (x c, x(t in )) one finds a line for which the present combined dark energy fraction Ω d amounts to Ω d = 0.7. The present distribution of this combined dark energy on the cosmon potential, cosmon kinetic energy and neutrino energy density is parameter-dependent. This is seen by the fast oscillations between these components, and reflected in the oscillating equation of state. Furthermore, the oscillations depend strongly on the assumed neutrino masses as encoded in μ and x 0. For larger μ the oscillations get slower. The present fraction of neutrino energy density and kinetic cosmon energy become larger and one starts to see more pronounced oscillations in Ω h and even Ω d. A detailed search in parameter space will be necessary in order to find out if there exists a parameter region for which the model is compatible with all present observations, possibly overcoming the tensions in the cosmological constant model. This investigation is not the purpose of the present note. Furthermore, we should mention that in growing neutrino quintessence the neutrino fluctuations grow non-linear in a recent cosmological epoch. The neutrinos form very large lumps, which may render the cosmic neutrino background observable by large scale inhomogeneities in the gravitational potential. Rather large backreaction effects are possible. Also the locally observed Hubble parameter may deviate from its cosmological average, with possible implications for the Hubble tension. For small neutrino masses these effects are suppressed by the small neutrino fraction Ω ν. Nevertheless, they could play a certain role for a detailed quantitative analysis. We refer to refs [54][55][56][57][58][59][60] for a detailed discussion of these issues.\n\n$$Z = 0.022, x c = 0.216, x 0 = 0.08, μ = 0.024 eV, µ 1 = 0.002 eV, µ 2 = 0.01 eV, µ 3 = 0.06 eV, y in = -8.0, x(y in ) = -0.15, ∂ y x(y in ) = 0.(98)$$\n\n$$Ω d (0) = 0.695, Ω h (0) = 0.694, Ω ν (0) = 0.001,(99)$$\n\n$$w d (0) = -0.770, w h (0) = -0.771.(100)$$\n\n$$h(0) = 0.683, g(0) = -0.025,(101)$$\n\n$$(Planck) d = 0.997.(102)$$\n\n$$∂ 2 y x + (3 + ∂ y g)∂ y x = -A(x c -x) -2 exp -(3y + 2g),(104) with$$\n\n$$A = 3D 4Z μν (x c -x 0 ) 1 eV. (105$$\n\n$$)$$\n\n$$g ≈ - 3 2 y + 1 2 ln Ω m,0,(106)$$\n\n$$∂ 2 y x + 3 2 ∂ y x = -Ã(x c -x) -2, Ã = AΩ m,0.(107)$$\n\n$$Ṽ (x) = Ã x c -x.(108)$$\n\n$$E(y) = 1 2 (∂ y x) 2 + Ṽ (x), ∂ y E = - 3 2 (∂ y x) 2. (109$$\n\n$$)$$\n\n$$x(y) = x c -(x c -x m ) 3 -2 Ãy m + 2 Ãy 1/3,(110)$$\n\n## Conclusions\n\nIn this note we have investigated the consequences of the scaling solution for quantum gravity for the evolution of dynamical dark energy. Our main assumption is that the largest intrinsic mass scale produced by the flow of dimensionless couplings away from the scaling solution is of the order of a few meV or smaller, rather than the Planck scale as often assumed. This leads to models of variable gravity and a scale symmetric standard model. The scaling solution of quantum gravity predicts a very light scalar field -the cosmon -as the pseudo Goldstone boson of spontaneously broken quantum scale symmetry. For suitable parameters, i.e. an appropriate range for Z, the evolution of the cosmon field induces dynamical dark energy. This is a striking prediction. It will be crucial to find out if the required value of Z is compatible with the scaling solution for the scalar kinetic term.\n\nThe second central outcome states that the scaling solution of quantum gravity is highly predictive for cosmology. This is due to the fact that the particles of the standard model and their interactions are well known, such that the flow equations contain essentially no free parameters for the relevant values of the cosmon field. Without a violation of quantum scale symmetry in the beyond standard model sector the time evolution of dynamical dark energy is not compatible with the precise cosmological constraints. In contrast, the simple assumption of a slow logarithmic running in the beyond standard model sector leads to models of growing neutrino quintessence. The rich and interesting phenomenology of these models may well be compatible with observation. For further progress in this direction one needs to identify which type of fluctuations lead to a scale violation in the beyond standard model sector.\n\nWe have focused in this note on exact scaling solutions of quantum gravity according to fundamental scale invariance [61]. It is remarkable that this very restrictive setting may lead to a cosmology compatible with observation. Relevant parameters for the flow away from the scaling solution could induce a small number of additional parameters for the cosmon potential, which will need to be investigated. For example, a shift of u by a constant could render u positive for all values of χ, resulting in positive U E for all values of the scalar field. This would still predict dynamical dark energy, but change the characteristics of its evolution.\n\nOur discussion of late dynamical dark energy has to be combined with an investigation of the inflationary epoch. Inflation is also predicted by the scaling solution of quantum gravity. The details of the inflationary epoch will depend, however, on unknown particles with high masses, as for grand unified models. For a given particle content the scaling solution of quantum gravity is very predictive for inflation as well. The kinetial of the cosmon, as reflected by Z, stops to evolve for large χ once the metric fluctuations and heavy particles have decoupled. The value of Z then depends on assumptions for unknown particles, whose fluctuations determine its flow for small χ. The values of Z, x c and x(y in ) are, in principle, calculable from the scaling solution of a model which covers all values of χ. We do not want here to stick to a definite setting for the particle physics in the ultraviolet limit. Then Z, x c and y(t in ) are effectively free parameters, in addition to the neutrino masses.\n\nIt is often believed that quantum gravity effects only affect very early cosmology. In contrast, our findings reveal that quantum gravity can also be very predictive for late cosmology. The severe constraints on the existence of scaling solutions for all values of the cosmon field fix the scaling solution for the cosmon potential. This potential is a key ingredient for the dynamics of dark energy. It can no longer be assumed ad hoc for phenomenological purposes, but rather becomes a calculable quantity. We hope that a large and fruitful research field emerges from this combination of cosmology with quantum gravity.\n\n## References\n\n1. Wetterich (1988) \"Cosmology and the fate of dilatation symmetry\" *Nuclear Physics B*\n\n2. Wetterich (2019) \"Quantum scale symmetry\"\n\n3. Weinberg (1980) \"Ultraviolet divergences in quantum theories of gravitation\"\n\n4. Reuter (1998) \"Nonperturbative evolution equation for quantum gravity\" *Phys. Rev. D*\n\n5. Stelle (1977) \"Renormalization of higher-derivative quantum gravity\" *Phys. Rev. D*\n\n6. Fradkin, Tseytlin (1982) \"Renormalizable asymptotically free quantum theory of gravity\" *Nucl. Phys. B*\n\n7. Avramidy, Barvinsky (1985) \"Asymptotic freedom in higher-derivative quantum gravity\" *Phys. Lett. B*\n\n8. Sen, Wetterich, Yamada \"Asymptotic freedom and safety in quantum gravity\" *JHEP*\n\n9. Wetterich (2022) \"The Quantum Gravity Connection between Inflation and Quintessence\" *Galaxies*\n\n10. Wetterich \"Quantum gravity and scale symmetry in cosmology (2023)\"\n\n11. Wetterich (2014) \"Variable gravity universe\" *Phys. Rev. D*\n\n12. Henz, Pawlowski, Wetterich (2017) \"Scaling solutions for dilaton quantum gravity\" *Physics Letters B*\n\n13. Henz, Pawlowski, Rodigast et al. (2013) \"Dilaton quantum gravity\"\n\n14. Shaposhnikov, Zenhäusern (2009) \"Scale invariance, unimodular gravity and dark energy\" *Physics Letters B*\n\n15. Shaposhnikov, Zenhäusern (2009) \"Quantum scale invariance, cosmological constant and hierarchy problem\" *Physics Letters B*\n\n16. Wetterich (2007) \"Growing neutrinos and cosmological selection\" *Physics Letters B*\n\n17. Amendola, Baldi, Wetterich (2008) \"Quintessence cosmologies with a growing matter component\" *Physical Review D*\n\n18. Wetterich (1993) \"Exact evolution equation for the effective potential\" *Physics Letters B*\n\n19. Reuter, Wetterich (1994) \"Effective average action for gauge theories and exact evolution equations\" *Nucl. Phys. B*\n\n20. Dupuis, Canet, Eichhorn et al. (2021) \"The nonperturbative functional renormalization group and its applications\" *Physics Reports*\n\n21. Wetterich (2018) \"Gauge invariant flow equation\" *Nuclear Physics B*\n\n22. Wetterich \"Simplified functional flow equation (2024)\"\n\n23. Pawlowski, Reichert, Wetterich et al. (2019) \"Higgs scalar potential in asymptotically safe quantum gravity\" *Physical Review D*\n\n24. Wetterich (2020) \"Effective scalar potential in asymptotically safe quantum gravity\"\n\n25. Dou, Percacci (1998) \"The running gravitational couplings\" *Classical and Quantum Gravity*\n\n26. Narain, Percacci (2010) \"Renormalization group flow in scalar-tensor theories: I\" *Classical and Quantum Gravity*\n\n27. Percacci, Vacca (2015) \"Search of scaling solutions in scalar-tensor gravity\" *The European Physical Journal C*\n\n28. Donà, Eichhorn, Labus et al. (2016) \"Asymptotic safety in an interacting system of gravity and scalar matter\" *Physical Review D*\n\n29. Eichhorn, Hamada, Lumma et al. (2018) \"Quantum gravity fluctuations flatten the planckscale higgs potential\" *Physical Review D*\n\n30. Eichhorn, Pauly (2021) \"Constraining power of asymptotic safety for scalar fields\" *Physical Review D*\n\n31. Laporte, Pereira, Saueressig et al. (2021) \"Scalar-tensor theories within asymptotic safety\" *Journal of High Energy Physics*\n\n32. Wetterich (2017) \"Graviton fluctuations erase the cosmological constant\" *Physics Letters B*\n\n33. Litim (2001) \"Optimized renormalization group flows\" *Physical Review D*\n\n34. Wetterich \"Simplified functional flow equation (2024)\"\n\n35. Wetterich, Yamada (2019) \"Variable planck mass from the gauge invariant flow equation\" *Physical Review D*\n\n36. Linder (2007) \"The dynamics of quintessence, the quintessence of dynamics\" *General Relativity and Gravitation*\n\n37. Wetterich (1994) \"The cosmon model for an asymptotically vanishing time-dependent cosmological \"constant\"\n\n38. Minkowski (1977) \"µ→eγ at a rate of one out of 109 muon decays?\" *Physics Letters B*\n\n39. Yanagida (1979) \"Horizontal gauge symmetry and masses of neutrinos\"\n\n40. Gell-Mann, Ramond, Slansky (1979) \"Complex Spinors and Unified Theories, Conf. Proc. C\"\n\n41. Magg, Wetterich (1980) \"Neutrino mass problem and gauge hierarchy\" *Physics Letters B*\n\n42. Lazarides, Shafi, Wetterich (1981) \"Proton lifetime and fermion masses in an so(10) model\" *Nuclear Physics B*\n\n43. Gu, Wang, Zhang (2003) \"Dark energy and neutrino mass limits from baryogenesis\" *Phys. Rev. D*\n\n44. Fardon, Nelson, Weiner (2004) \"Dark energy from mass varying neutrinos\" *Journal of Cosmology and Astroparticle Physics*\n\n45. Bi, Feng, Li et al. (2005) \"Cosmological evolution of interacting dark energy models with mass varying neutrinos\" *Physical Review D*\n\n46. Brookfield, Van De Bruck, Mota et al. (2005) \"Cosmology of mass-varying neutrinos driven by quintessence: Theory and observations\"\n\n47. Brookfield, Van De Bruck, Mota et al. (2006) \"Cosmology with massive neutrinos coupled to dark energy\" *Phys. Rev. Lett*\n\n48. Afshordi, Zaldarriaga, Kohri (2005) \"Instability of dark energy with mass-varying neutrinos\" *Phys. Rev. D*\n\n49. Bjaelde, Brookfield, Van De Bruck et al. (2008) \"Neutrino dark energy-revisiting the stability issue\" *Journal of Cosmology and Astroparticle Physics*\n\n50. Ichiki, Keum (2008) \"Primordial neutrinos, cosmological perturbations in interacting dark-energy model: Cmb and lss\" *Journal of Cosmology and Astroparticle Physics*\n\n51. Wetterich (1988) \"Cosmologies with variable Newton's \"constant\" *Nuclear Physics B*\n\n52. Chevallier, Polarski (2001) \"Accelerating universes with scaling dark matter\" *International Journal of Modern Physics D*\n\n53. Adame (2024) \"Desi 2024 vi: Cosmological constraints from the measurements of baryon acoustic oscillations\"\n\n54. Mota, Pettorino, Robbers et al. (2008) \"Neutrino clustering in growing neutrino quintessence\" *Physics Letters B*\n\n55. Wintergerst, Pettorino, Mota (2010) \"Wetterich, Very large scale structures in growing neutrino quintessence\" *Physical Review D*\n\n56. Pettorino, Wintergerst, Amendola (2010) \"Wetterich, Neutrino lumps and the cosmic microwave background\" *Physical Review D*\n\n57. Baldi, Pettorino, Amendola et al. (2011) \"Oscillating non-linear large-scale structures in growing neutrino quintessence: Oscillating structures and growing neutrinos\" *Monthly Notices of the Royal Astronomical Society*\n\n58. Ayaita, Weber, Wetterich (2012) \"Structure formation and backreaction in growing neutrino quintessence\" *Physical Review D*\n\n59. Ayaita, Baldi, Führer et al. (2016) \"Nonlinear growing neutrino cosmology\" *Physical Review D*\n\n60. Casas, Pettorino, Wetterich (2016) \"Dynamics of neutrino lumps in growing neutrino quintessence\" *Physical Review D*\n\n61. Wetterich (2021) \"Fundamental Scale Invariance\" *Nuclear Physics B*<|endoftext|>" |
| }, |
| "test": { |
| "total_tokens": 85257653, |
| "example": "# Development and validation of a cryogenic far-infrared diffraction grating spectrometer used to post-disperse the output from a Fourier transform spectrometer \n\nAlicia Anderson, David Naylor, Brad Gom, Matthew Buchan, Adam Christiansen, Ian Veenendaal, Affiliations\n\n## Abstract\n\nRecent advances in far-infrared detector technology have led to increases in raw sensitivity of more than an order of magnitude over previous state-of-the-art detectors. With such sensitivity, photon noise becomes the dominant noise component, even when using cryogenically cooled optics, unless a method of restricting the spectral bandpass is employed. The leading instrument concept features reflecting diffraction gratings, which post-disperse the light that has been modulated by a polarizing Fourier transform spectrometer (FTS) onto a detector array, thereby reducing the photon noise on each detector. This paper discusses the development of a cryogenic (4 K) diffraction grating spectrometer that operates over the wavelength range of 285 to 500 μm and was used to post-disperse the output from a room-temperature polarizing FTS. Measurements of the grating spectral response and diffraction efficiency are presented as a function of both wavelength and polarization to characterize the instrumental performance.\n\n## I. INTRODUCTION\n\nAs radiation travels through the universe, it experiences losses due to absorption and scattering from dust grains. Since the grain sizes are <1 μm, 1 observations at far-infrared (FIR) wavelengths (>30 μm) experience minimal losses and, thus, can probe obscured regions more efficiently. Spaceborne FIR spectroscopic observations provide a unique means of addressing some of the leading questions in modern astrophysics, from the formation of stars and planets in molecular clouds in our own galaxy 2 to the evolution of galaxies over cosmic time. 3 The Fourier transform spectrometer (FTS), with broad spectral coverage (the multiplex advantage 4 ), high throughput, and high and variable resolution, has been the instrument of choice for astronomical spectroscopy in the FIR (AKARI, 5 Herschel 6 ). The exquisite sensitivity of modern superconducting detectors, having noise equivalent powers (NEPs) of ∼10 -19 W/\n\n## √\n\nHz (an improvement of over two orders of magnitude from those used on Herschel) necessitates that the next generation of spaceborne FIR observatories employ hybrid spectrometers. With photon noise now the dominant driver for spectrometer design, the previous multiplex advantage of FTS becomes a disadvantage unless the spectral bandwidth is restricted.\n\n## Review of Scientific Instruments\n\n## ARTICLE pubs.aip.org/aip/rsi\n\nThe solution adopted for the SPICA SAFARI 7 and PRIMA 8 instruments is to post-disperse the output from a FTS using diffraction grating spectrometers. Previously, grating spectrometers have been used alone to provide low-resolution broad-band spectroscopy in the FIR. 9,10 In order to achieve the high dispersion required for the postdispersed polarizing Fourier transform spectrometer (PDPFTS) concept, equivalent to a resolving power of R ∼ 300, each diffraction grating must operate at high angles of incidence. At such angles, the grating response exhibits a strong polarization dependence, having high and uniform efficiency (∼80%) for TM polarized light but lower and variable efficiency (10-40%) for TE polarized light. A FTS based on the Martin-Puplett polarizing interferometer 11 uses the polarizing encoding properties of the interferometer to optimally couple to the TM grating mode. When the FTS is scanned over its full optical displacement, each of the detectors measures a high resolution interferogram convolved with the grating spectral response function (SRF) for that particular detector/grating combination. Upon Fourier transformation, an individual interferogram yields a small bandwidth, high resolution spectrum. By stitching together the spectra from individual detectors, one FTS scan produces a high-resolution spectrum of the entire wavelength range. We refer to this system as a post-dispersed polarizing Fourier transform spectrometer (PDPFTS). 12 In this paper, we describe the design and performance of a cryogenic grating spectrometer that has been developed to explore the challenges of the PDPFTS technique.\n\n## II. GRATING THEORY\n\nThe diffraction grating concept was first described in 1786 by Hopkinson and Rittenhouse, 13 who observed diffraction through a series of parallel wires. Fraunhofer extended this principle and ruled grooves onto an optical glass, which he used to study the solar spectrum. 14 The ruling process was completely transformed by Rowland, who developed several ruling engines and was able to create grating structures with resolving powers of ∼150 000. 15,16 With reference to Fig. 1, when monochromatic light of wavelength λ is incident on a diffraction grating, it is diffracted into a discrete angle given by the grating equation, 17\n\nwhere m is the order of diffraction. The right side of the equation represents the path difference between the light reflecting from adjacent grooves of the grating. 18 α and β are the angles of the incident and diffracted light, measured with respect to the grating normal, and d is the spacing between adjacent grooves. The mounting configuration chosen for the grating spectrometer is the Czerny-Turner monochromator, as displayed in Fig. 1. In this configuration, the grating equation can be rewritten as\n\nwhere ϕ is the deviation angle, and θ is the angle of incidence minus the deviation angle.\n\n$$mλ = d(sin α + sin β), (1$$\n\n$$)$$\n\n$$mλ = 2d(sin (θ -ϕ) cos ϕ), (2$$\n\n$$)$$\n\n## A. Resolving power\n\nFor a generic spectrometer, the spectral resolving power is given by\n\nwhere Δλ is the spectral resolution, often defined by the Rayleigh criterion. 17 At a given wavelength, the resolution of a grating spectrometer is determined by the widths of the entrance and exit slits, as discussed below.\n\nA linespread calculation is used to determine the convolution of the image of the entrance slit with the exit slit. As light passes through an entrance slit, it will form an image of the slit that will be magnified differently in the horizontal and vertical directions, a phenomenon referred to as anamorphic magnification.\n\nIn the dispersion direction, the tangential magnification of the width of the entrance slit is given by\n\nwhere w is the width of the entrance slit, w ′ is the width of the image of the entrance slit, and r and r ′ are the focal lengths of the entrance and exit optics, respectively. 18 To optimize the spectral resolution, the width of the exit slit, w ′′, should be chosen to match w ′. When w ′′ is less than w ′, the limiting resolution of the grating is achieved, but at a cost of lower throughput as the exit slit blocks some of the light from reaching the detector. Contrarily, when w ′′ is greater than w ′, the exit slit width limits the resolution. The resolution of a grating spectrometer limited by the slit widths can be expressed as 18\n\n$$R = λ Δλ, (3$$\n\n$$)$$\n\n$$χ T = w ′ w = r ′ r cos α cos β, (4$$\n\n$$)$$\n\n## Δλ\n\nReview of Scientific Instruments ARTICLE pubs.aip.org/aip/rsi where max(w ′′, w ′ ) denotes the maximum between the exit slit, w ′′, and w ′. For the grating designed in this study, w ′′ was chosen to match w ′ at the center of the band, 392.5 μm. The theoretical resolving power falls into two regimes:\n\nIn the top case, the system is entrance slit limited, and in the bottom case, it is exit slit limited. This analysis assumes that the system can be understood in terms of geometrical optics, which is a fair approximation since the components are oversized with respect to the wavelength of light (λ/d ≳ 10). Equation (6) has been used to model the theoretical resolving power of the grating described in this paper.\n\n$$= max (w ′′, w ′ )d cos (θ -ϕ) mr ′,(5)$$\n\n$$R = ⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ λmr dw cos (θ + ϕ) for w ′ < w ′′ (λ < 392.5 μm) λmr ′ dw ′ cos (θ -ϕ) for w ′ > w ′′ (λ > 392.5 μm). (6$$\n\n$$)$$\n\n## B. Efficiency\n\nWhen observing faint astronomical sources, it is important to maximize the energy diffracted into the order of interest. This can be accomplished by a process known as blazing, where the grooves of a grating are shaped to maximize efficiency at a particular wavelength. The blaze angle is given by 18 mλB = 2d sin θB, (7) where θB is the blaze angle labeled in Fig. 1. λB is the wavelength where the efficiency is maximized, and in our case, it is chosen to be the band center at 392.5 μm.\n\nTheoretical modeling of the grating efficiency requires knowledge of the electromagnetic fields exterior to, within, and at the boundary surface of the grating substrate. In principle, these fields can be determined by solving Maxwell's equations at the boundary. However, the diffraction formulas derived from Kirchhoff's approximations are not valid when the groove spacing is on the order of the wavelength of radiation, 17 as is the case for this study. Rigorous theoretical models have been developed over the past 70 years, [20][21][22][23][24] although solving Maxwell's equations for a boundary surface with an arbitrary profile is computationally challenging. In 1980, the problem was simplified by Chandezon et al., 25 who described a method that applies a translation of the coordinate system to map the grating profile to a planar surface, greatly simplifying the boundary conditions. This technique exploits the periodic nature of the profile to implement Fourier analysis in solving Maxwell's equations, reducing the calculation to an eigenvalue problem. 25 The methods discussed by Li et al. 26 were used to produce a theoretical polarizationsensitive efficiency model of the diffraction grating presented in this paper.\n\n## III. DESIGN OF THE GRATING MODULE\n\nThe grating for the PDPFTS was designed to operate over a wavelength range from 285 to 500 μm, chosen to match available test equipment. The theoretical resolving power, R, was calculated using Eq. ( 6) under the assumption that the spectrometer was slit width limited. The width of the exit slit, w ′′, was chosen to achieve a resolving power of R ∼ 100 at the middle of the wavelength range (392.5 μm), and the grooves were blazed to maximize the efficiency 7), the corresponding blaze angle is 39.4 ○. The specifications of the grating, designed to operate in first order, are listed in Table I. The grating was fabricated from Rapidly Solidified Aluminum 6061 (RSA6061) and ruled with a single point diamond machine under a specialized thermalization process to minimize internal stress.\n\nThe design of the grating spectrometer builds on the work of Veenendaal et al., 27,28 who developed a cryogenic post-dispersing grating to sort the output from a Fabry-Pérot interferometer. In the new design, the grating is mounted in a monolithic aluminum enclosure on a pivot driven by a cryogenic stepper motor through worm gear reduction. The monolithic design and material choice for the enclosure ensure that the system maintains alignment (e.g., gear drive and optics) as the system is cooled to cryogenic temperatures. As envisaged, the far-infrared post-dispersed polarizing FTS instrument concept will incorporate several stationary diffraction gratings to distribute the signal from a polarizing FTS across different spectral bands of interest onto an array of ultra-sensitive detectors. However, since we did not have access to a detector array, the grating needed to rotate to change the angle of incidence/diffraction and scan the wavelength range of interest using a single detector.\n\nThe monolithic enclosure that houses the grating is shown schematically in Fig. 2. Improvements from the previous design include a monolithic grating enclosure and shields (teal, 1), which have a highly reflective exterior to reduce absorption and minimize thermal loading as the system is cooled to 4 K. The inside surfaces of the enclosure were coated with epoxy and sprinkled with carborundum particles to reduce reflectivity, thereby mitigating stray light. We adopted the method that was developed to coat optical components at the Herschel Space Observatory. 29 Other notable improvements include a larger diameter low-pass filter (50 mm) (brown, 2), a new diffraction grating (yellow, 3) that features a plane mirror mounted to the rear side (red, 4) free to rotate 360 ○, and a retractable baffle (magenta, 5) to block stray light within the grating enclosure from reaching the detector. The new 50 mm windows have custom low-pass filters with a cut-off frequency of 35 cm -1. An additional low-pass filter is mounted at the interface between the 4 and 0.3 K enclosures to further limit stray light contaminating the detector signal. The exit slit (orange, 6) is mounted on the feedhorn of a 0.3 K composite bolometer detector (purple, 7). The cryogenic stepper motor (blue, 8) drives the worm and gear system and rotates the grating around the axis indicated by the black arrow. The grating assembly is clamped to the 4 K baseplate of a test-facility cryostat. 30,31 A. Rear mounted mirror\n\n## Review of Scientific Instruments\n\nThe derivation of the diffraction efficiency for a blazed reflection grating was discussed in Sec. II B. While these models provide a means to model the efficiency response, they do not account for manufacturing imperfections. There is literature available with efficiency measurements of various diffraction grating geometries; [32][33][34] however, the grating used in this study was custom-made to operate over far-infrared wavelengths and in a cryogenic environment. Thus, the efficiency could not have been characterized prior to receiving it because that would require an extensive suite of test equipment. We devised a simple method of determining the grating efficiency by mounting a mirror on the rear of the grating saddle. When the system is coupled to an FTS and the mirror is inserted into the optical path by rotating the grating 180 ○, a single measurement of the entire band is obtained, which serves to calibrate the efficiency of the grating as a function of wavelength.\n\n## B. Source module\n\nTo produce realistic synthetic astronomical spectra, the source module should consist of a broad-band emission source (continuum) and a narrow-band line source (spectral feature). The source module used for the results presented in Secs. IV and V includes a commercial blackbody (300-1200 K) and two quasi-monochromatic line sources. The line sources are custom built and feature two THz photomixers, 35,36 each illuminated by two independently tunable, continuous-wave infrared lasers operating in the 1550 nm band. The beat frequency between the two lasers modulates the conductivity of the photomixer material, and the embedded antenna generates THz radiation, which is coupled to free space by a hyper hemispherical silicon lens. The lens sits adjacent to the photomixing module, allowing the focus to be placed 22.3 ± 1.2 mm behind the face of the lens, which facilitates the mounting of the device. The output from the lens is a cone of radiation with a wavelengthdependent 12-15 ○ opening angle. However, at the wavelengths used in the results section, the device always has a 15 ○ output angle. With the current suite of lasers, this optical heterodyne technique allows us to produce quasi-monochromatic lines in the region from 0.2 to 1.5 THz whose line widths are two orders of magnitude below the resolution of the PDPFTS-amply sufficient to explore its performance.\n\n## C. Calibration FTS\n\nA room temperature polarizing calibration FTS (cFTS), provided by Blue Sky Spectroscopy Inc., 37 was used for the measurements presented in this paper. The optical configuration of the cFTS is a Martin-Puplett interferometer, MPI, and it was developed as a calibration facility to characterize THz sources (line or continuum), components (filters, samples, etc.), and detectors (single-pixel or arrays). Due to these applications, the spectrometer operates over a broad spectral range with high throughput and high spectral resolution power, as listed in Table II. In normal operating conditions, the chamber of the cFTS, including the path of the translating mirror up to the maximum optical path difference (OPD) of 32 cm, is evacuated to <10 mTorr, which prevents significant signal depletion due to molecular absorption from air. In addition, the input polarizer, polarizing beamsplitter, and output analyzer of the cFTS provide a combined efficiency of >95% over the entire spectral range. 38 A key advantage of coupling a polarizing FTS to a diffraction grating spectrometer is the ability to orient the output analyzer to match the more efficient TM mode of the grating.\n\n## IV. RESULTS I: EFFICIENCY\n\nFigure 3 shows the optical configuration for the results presented in this section of the paper. The dashed lines indicate the separation between room temperature (300 K) components, the cryogenic (4 K) grating assembly, and the composite bolometer detector chamber (300 mK). Low-pass optical filters 39 are mounted to the 100 and 4 K thermal shields and the grating and detector entrance apertures to reduce thermal loading and limit the bandwidth of radiation reaching the detector. Figure 4 shows a CAD schematic and image of the optical components mounted in the cryostat. The source was placed at the input of the FTS at room-temperature, and the output of the FTS was brought to a focus on the cryogenic entrance slit (1) using a custom f /690 ○ off-axis parabolic mirror (OAP). After passing through the slit, the beam was collimated by a f /615 ○ OAP (2), reflected by a flat mirror (3) toward a pendulum mirror (4), before passing through a 58 cm -1 low-pass filter (5) and into the grating module. Light from the grating was then measured by the composite bolometer detector (6).\n\nMeasurements were obtained with the hybrid roomtemperature/cryogenic PDPFTS described above. The source module, comprising the blackbody and tunable photomixer(s), operated at atmospheric pressure a short distance (∼ 8.6 cm) in front of the evacuated FTS. To study the instrument line shape (ILS) of the PDPFTS, both emission and absorption line measurements were obtained. By differing spectroscopic measurements of the blackbody and photomixer from those of the blackbody alone, it is possible to probe the ILS of the PDPFTS using the quasi-monochromatic photomixer sources. The left column of Fig. 5 shows the PDPFTS spectra of the blackbody source with the photomixer tuned to ∼32.2 cm -1 (blue) in emission (bottom) and absorption (top). The photomixer was subsequently turned off while the spectra of the blackbody alone were measured (red). The difference between these measurements is shown as the green points in the right panel, which have been fitted with the sinc function (blue). Table III presents the FWHM and center frequency extracted from the sinc fits to both the absorption and emission lines.\n\n## Review of Scientific Instruments\n\n## TABLE III.\n\nFWHM and center frequency extracted from the sinc fits to the emission and absorption photomixer lines in Fig. 5. The fitted values are compared with the theoretical resolution of the FTS as determined by the maximum OPD. these measurements, a calibrated cavity blackbody source with a slit geometry matched to the entrance slit of the grating was placed at the input to the FTS. When the mirror on the rear side of the grating was placed in the optical path of the incident beam, a scan of the FTS was used to obtain a measurement of the entire spectrum. The grating was then rotated into the path, and the incident angle, θ, was varied to change the wavelength being measured by the detector. At each grating angle, the FTS was scanned five times, and the interferograms were averaged, phase-corrected, and Fourier transformed to yield spectra at each grating position. Fulton et al. 40 discuss the challenges of phase correcting postdispersed data, which contain only a few points-roughly ten or fewer, depending on the spectral bandwidth presented by the diffraction grating and the resolution of the FTS. Since the instrumental phase of the FTS is common to all grating measurements across the band, they can be combined to determine the phase correction function more robustly. 40 The phase-corrected grating spectra and corresponding mirror spectra are shown in Fig. 6. The top panel shows FTS spectra polarized perpendicular to the grating grooves, i.e., in the dispersion direction (s-polarized). The bottom panel shows FTS spectra polarized parallel to the grating grooves (p-polarized). The varying background in the mirrored spectrum (black curve) in each plot is due to molecular absorption within the ∼ 8.6 cm of the atmospheric path and channel fringes caused by the vacuum chamber windows. A closer inspection in Fig. 6 shows that these channel fringes are also present in the individual grating scans.\n\n## FWHM (cm\n\nWhen the flat mirror is inserted into the optical path, radiation from across the entire spectral band falls onto the detector. By comparison, when the grating is in the optical path, the bandwidth of the signal is significantly reduced (i.e., less than 1% of the power received with the mirror in position). Similar to all bolometer detectors, the responsivity is a function of radiant loading and will be different when viewing the entire spectral band compared to viewing a narrow spectral region. The nonlinear response of the bolometer has been well studied under various conditions of radiant power loading. 41 By measuring the bolometer voltage and bias current while loaded with the wide spectral range and the narrow range, estimates of the radiant loading in each case can be inferred using the known performance of the detector. This allowed us to apply a first-order correction to the measured spectra to account for the nonlinear response. The corrected data are shown in Fig. 6, from which the grating efficiency, as a function of both wavelength and polarization, could be determined. Figure 7 presents the efficiency measurements calculated for each polarization state by comparing the amplitude of the grating scan to the signal amplitude of the mirror data at the same wavelength position. The data were corrected for the nonlinear response of the detector and multiplied by a scaling factor, which comprises all unknown efficiency losses between the grating and mirror systems. Thus, the measurements shown in Fig. 7 were taken to probe the general trend of the grating efficiency; they are not a measurement of the absolute efficiency, which would require a more detailed analysis of the potential coupling inefficiencies and a more robust determination of the detector response factor. The measurements are compared with the theoretical model for both polarization states. 25 The s-polarization (TM mode) diffraction efficiency is shown to be greater than the p-polarization (TE mode) efficiency for a significant portion of the grating band, where λ > 330 μm. We were able to reproduce the models well, although we suspect the deviations from the theoretical model at short wavelengths are likely due to machining imperfections (periodicity, groove spacing), which have a more significant impact on the theoretical curves when the wavelength approaches the groove spacing (d = 312 μm). To our knowledge, these represent the first measurements reporting the diffraction efficiency of a blazed grating as a function of wavelength and polarization at cryogenic temperatures and far-infrared wavelengths. Ultimately, the goal of this work is to implement a fully cryogenic PDPFTS instrument. Building on the results from the previous section, the final experimental configuration presented in this paper sought to combine a cryogenic line source with the grating and bolometer detector. This configuration serves to deploy three out of the four instrument modules at cryogenic temperatures. In this configuration, we are able to evaluate the performance of the diffraction grating spectrometer and the line source in a fully cryogenic environment. The performance metric we explored was the variation in resolving power as a function of wavelength.\n\n## Review of Scientific Instruments\n\n## Review of Scientific Instruments\n\nThe specifications of the cryogenic testing configuration are listed in Table IV. The differences between this design and the previous one (see Table I) are displayed with bold text. Figure 8 shows a schematic of the optical design for the results presented in this section. A tunable line source was provided by a lowtemperature grown gallium arsenide (LTG GaAs) terahertz photomixer. 42 and the grating disperses the light onto the exit slit located on the bolometer feedhorn (7).\n\nThe photomixer used in these measurements employed two 780 nm continuous-wave laser diodes 43 operating at slightly different frequencies. An AC photocurrent is induced in the photomixer at the optical beat frequency when a bias voltage is applied. By modulating the bias voltage with an arbitrary waveform generaan amplitude modulated optical signal from the photomixer is produced. A state-of-the-art low-noise differential pre-amplifier described in Ref. 41 measures the detected signal from opposite sides of the symmetric bolometer element in a fully differential configuration to eliminate common-mode noise. The differential AC signal was measured by a model SR830 digital signal processing (DSP) lock-in amplifier (LIA) with the reference signal provided by the waveform generator. The output from the LIA was digitized and recorded as a function of the grating position.\n\nFor the measurements presented in this section, the photomixer was tuned across the wavelength range from 285 to 479 μm. At each setting of the photomixer, the grating was scanned in 0.06 ○ increments (0.58-0.40 μm) around the corresponding photomixer wavelength to determine the spectral response function (SRF) of the grating. The far-infrared wavelength emitted by the photomixer was determined using a wavemeter 44 to measure the wavelengths of the individual 780 nm lasers, whose difference corresponded to the expected output wavelength. The normalized grating SRFs are shown in Fig. 10. Each SRF was fitted with a Gaussian function,\n\nto determine the center wavelength, λc, and full-width-halfmaximum, Δλ. The experimental resolving power, R, was determined using Eq. ( 3).\n\nFigure 11 shows data from three grating SRFs along with the best-fit Gaussian profile for each dataset. From left to right, the data shown were collected at photomixer wavelengths of 304, 394, and 474 μm. It is evident from Fig. 11 that there are extraneous contributions in the wings of the profiles, which we attribute to stray reflections reaching the detector. To illustrate this effect, Fig. 12 shows all grating SRFs transposed to the same angular scale and over-plotted around 0 ○. Features outside of the expected grating profile appear at approximately the same angular positions (-1 ○, -0.7 ○, and 0.6 ○ ), irrespective of the different photomixer wavelengths. These features have been traced to stray reflections from mounting components within the spectrometer that will be mitigated by applying absorbing coatings to the components and do not significantly impede our ability to recover the resolving power as a function of wavelength. Figure 13 shows the resolving power measured from all grating SRFs. The data are compared with the theoretical slit-width limited resolving power in Eq. ( 6). The data follow the overall trend of the theoretical curve. Even with a slightly lower resolving power, these measurements show that the grating succeeds as a post-dispersing module by restricting the spectral band of radiation viewed by the detector, achieving the target R ∼ 100 at the band center.\n\n$$f (λ) = A 0 exp ( -(λ -λc) 2 Δλ 2 ) + A 1 (8$$\n\n$$)$$\n\n## VI. CONCLUSIONS AND FUTURE WORK\n\nThe design and performance of a cryogenic far-infrared grating spectrometer have been presented. We have described a novel method to measure the grating efficiency curve as a function of both polarization state and wavelength at cryogenic temperatures and farinfrared wavelengths using a polarizing FTS. The results show that the diffraction grating spectrometer produced a spectral response that agrees well with the theory. We have independently verified the spectral resolving power of the diffraction grating in a fully cryogenic (4 K) environment with the second set of measurements. These results have shown the first successful integration of a cryogenic line source with our diffraction grating spectrometer and bolometer detector.\n\nWhile the results presented in this paper demonstrate the performance of the grating spectrometer, the final PDPFTS configuration will employ a cryogenic polarizing FTS to couple with the cryogenic source, grating, and bolometer. The fully cryogenic configuration benefits by eliminating chamber windows and, thus, channel fringes, as shown in the data presented in Fig. 6. Figure 14 shows a schematic of the cryogenic PDPFTS that is currently under development. The scanning FTS mechanism (FTSM) is provided by ABB Inc., 45 and the source module is comprised of the cryogenic photomixer described in this paper (blue, 1) coupled to a cryogenic blackbody source (red, 2) by a linear polarizer (orange, 3). Auxiliary optics (yellow, 4) couple light from the source into the FTSM (green, 5) and then into the grating spectrometer (teal, 6), which directs light toward the bolometer detector (purple, 7). The culmination of this project will be achieved upon the successful integration of the cryogenic source module and FTSM to realize the first fully cryogenic PDPFTS.\n\n## References\n\n1. Dyson, Williams (2021) \"The Physics of the Interstellar Medium\"\n\n2. Kamp (2021) *Publ. Astron. Soc. Aust*\n\n3. Spinoglio (2021) *Publ. Astron. Soc. Aust*\n\n4. Davis, Abrams, Brault (2001) \"Fourier Transform Spectrometry\"\n\n5. Murakami (2007) *Publ. Astron. Soc. Jpn*\n\n6. Pilbratt (2010) *Astron. Astrophys*\n\n7. Roelfsema (2018) *Publ. Astron. Soc. Aust*\n\n8. Bradford, Glenn, Rocca et al. (2022) \"Optimization of instrumentation for a cryogenic far-infrared probe mission\"\n\n9. Bradford (2004) \"Z-Spec, a broadband millimeter-wave grating spectrometer: Design, construction, and first cryogenic measurements\"\n\n10. Bradford (2006) \"BLISS for SPICA: Far-IR spectroscopy at the background limit\"\n\n11. Martin, Puplett (1970) *Infrared Phys*\n\n12. Naylor (2022) \"Development of a cryogenic far-infrared postdispersed polarizing Fourier transform spectrometer\"\n\n13. Hopkinson, Rittenhouse (1786) *Am. Philos. Soc*\n\n14. Fraunhofer (1823) *Ann. Phys*\n\n15. Rowland (1882) \"LXI. Preliminary notice of the results accomplished in the manufacture and theory of gratings for optical purposes\" *London, Edinburgh Dublin Philos. Mag. J. Sci*\n\n16. Rowland (1883) *Am. J. Sci. s*\n\n17. Born, Wolf (1997) \"Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light\"\n\n18. Palmer (2014) \"Diffraction Grating Handbook\"\n\n19. Czerny, Turner (1930) *Z. Phys*\n\n20. Meecham (1956) *J. Appl. Phys*\n\n21. Stroke (1960) *Rev. Opt*\n\n22. Pavageau, Bousquet (1970) *Opt. Acta*\n\n23. Maystre, Petit (1971) *Nouv. Rev. Opt. Appl*\n\n24. Mcphedran, Waterworth (1973) *Opt. Acta*\n\n25. Chandezon, Raoult, Maystre (1980) *Rev. Sci. Instrum*\n\n26. \"© Author(s) 2024 Review of Scientific Instruments ARTICLE pubs\"\n\n27. Li, Chandezon, Granet et al. (1999) *Appl. Opt*\n\n28. Veenendaal (2016) \"A cryogenic test facility\"\n\n29. Veenendaal (2020) *Rev. Sci. Instrum*\n\n30. Hargrave (2020)\n\n31. Veenendaal (2019) \"A novel cryogenic Fabry-Pérot interferometer for far-infrared astronomy\"\n\n32. Veenendaal (2016) \"Performance of a cryogenic test facility for 4 K interferometer delay line investigations\"\n\n33. Loewen, Nevière, Maystre (1977) *Appl. Opt*\n\n34. Michels, Mikes, Hunter (1974) *Appl. Opt*\n\n35. Meekins, Kowalski, Cruddace (1989) *Appl. Opt*\n\n36. Makiwa (2011) \"Performance characterization of a millimeter-wave photomixer\"\n\n37. Naylor, Gom, Van Der Wiel et al. (2013) *Can. J. Phys*\n\n38. Ade, Pisano, Tucker et al. (2006) \"A review of metal mesh filters\"\n\n39. Fulton, Naylor, Huber et al. (2021) \"Overcoming processing challenges for a post-dispersed Fourier transform Spectrometer\"\n\n40. Naylor, Gom, Ade et al. (1999) *Rev. Sci. Instrum*\n\n41. \"PH780DBR Series High-Power Single-Frequency Laser Diode\"\n\n42. (2004) \"WA-1000/WA-1500 Wavemeter-Laser Wavelength Meters\"\n\n43. Cournoyer (2020) \"Design of a novel cryogenic stiffness-compensated reactionless scan mechanism for the Fourier transform spectrometer of SPICA SAFARI instrument\"<|endoftext|>" |
| } |
| }, |
| "biology": { |
| "train": { |
| "total_tokens": 1755701072, |
| "example": "# In Vitro Antiproliferative and Antioxidant Effects of Extracts from Rubus caesius Leaves and Their Quality Evaluation\n\nDaniel Grochowski, Roman Paduch, Adrian Wiater, Adrianna Dudek, Mabgorzata Pleszczynska, Monika Tomczykowa, Sebastian Granica, Paulina Polak, Michab Tomczyk\n\n## Abstract\n\nThe present study was performed to evaluate the effect of different extracts and subfractions from Rubus caesius leaves on two human colon cancer cell lines obtained from two stages of the disease progression lines HT29 and SW948. Tested samples inhibited the viability of cells, both HT29 and SW948 lines, in a concentration-dependent manner. The most active was the ethyl acetate fraction which, applied at the highest concentration (250 𝜇g/mL), decreased the viability of cells (HT29 and SW948) below 66%. The extracts and subfractions were also investigated for antioxidant activities on DPPH and FRAP assays. All extracts, with the exception of water extract at a dose of 250 𝜇g/mL, almost totally reduced DPPH. The highest Fe 3+ ion reduction was shown for the diethyl and ethyl acetate fractions. It was more than 6.5 times higher (at a dose 250 𝜇g/mL) as compared to the control. The LC-MS studies of the analysed preparations showed that all samples contain a wide variety of polyphenolics, among which ellagitannins turned out to be the main constituents with dominant ellagic acid, sanguiin H-6, and flavonol derivatives.\n\n## 1. Introduction\n\nDrugs of natural origin have been used throughout history to cure or prevent diseases. Modern phytotherapy is engaged in the production of remedies from materials derived from plants and their use in effective and safe therapy. Their main action could be aimed at three aspects: cytostatic activity, especially when therapy concerns tumour tissue, and antiinflammatory and antioxidative or free radical reduction actions. With all this in mind, we have tried to evaluate the cytotoxic and antioxidant activities of Rubus caesius extracts on two human colon cancer cell lines obtained from two stages of disease progression. Additionally, the full phytochemical profile of all the investigated extracts obtained from R. caesius leaves based on the HPLC-DAD-MS n method has been characterized for the first time. R caesius is a wellknown shrub (dewberry) extending from Europe to Siberia, but it can also be found in the United States. Folk medicine attributes many virtues to R. caesius. Further studies are required to confirm the pharmacological relevance of the findings, but now there are great expectations for its wide therapeutic application [1].\n\n## 2. Materials and Methods\n\n## 2.1. Plant Material and Preparation of Extracts and Their\n\nFractions. The leaves from wild species of R. caesius were collected during June-July 2012-2014 from Puszcza Knyszyńska, near Bialystok, Poland. A voucher specimen of plant RC-11027 has been deposited in the Herbarium of the Department of Pharmacognosy, Medical University of Białystok, Poland. All plant samples, extracts, and fractions were prepared according to previously described methods [2]. Yields are as follows: RC1, 83 mg; RC2, 79 mg; RC3, 101 mg; RC4, 9 mg; RC5, 28 mg; RC6, 96 mg. 3 Analysis. The HPLC-DAD-MS 3 analysis was performed using similar conditions described previously [2]. HPLC analyses of samples were carried out on a reversed-phase Kinetex XB-C18, 100 mm × 2.1 mm × 1.7 𝜇m column (PHENOMENEX, USA). Compounds were analysed in negative and positive ion modes (the MS 2 152, -162, and -176 amu). In the case of the detection of one of the neutral loss masses MS 3 fragmentation was performed. Analysis was carried out using scan from 𝑚/𝑧 70 to 2.200.\n\n## 2.2. HPLC-DAD-MS\n\n## 2.3. Cell Cultures.\n\nTwo human colon tumour cell lines were used. HT29 (ATCC5 HTB-386) and SW948 (ATCC CCL-2376) cell lines representing early and late stages of tumour development were cultured as monolayers in 25 mL culture flasks (NUNC, Rochester, USA). All cell lines were maintained in RPMI 1640 medium supplemented with 10% FBS (foetal bovine serum) (v/v) and antibiotics (100 U/mL penicillin, 100 𝜇g/mL streptomycin) (SIGMA, St. Louis, MO, USA) at 37 ∘ C in a humidified atmosphere with 5% CO 2.\n\n## 2.4. MTT Assay.\n\nThe MTT assay is based on the conversion of a yellow tetrazolium salt by viable cells to purple crystals of formazan. The reaction is catalysed by mitochondrial succinyl dehydrogenase. Cell sensitivity to R. caesius extracts was analysed in a spectrophotometric 3-(4,5-dimethylthiazol-2yl)-2,5-diphenyltetrazolium bromide (MTT) test according to Mosmann [3].\n\n## 2.5. Neutral Red (NR) Uptake Assay.\n\nThe NR cytotoxicity assay is based on the uptake and lysosomal accumulation of the supravital dye, Neutral Red. Dead or damaged cells do not take up the dye. The method was used as described earlier [4].\n\n2.6. Nitric Oxide (NO) Measurement. Nitrate, a stable end product of NO, was determined in culture supernatants by a spectrophotometric method based on the Griess reaction. The course of the procedure has been described previously [5].\n\n## 2.7. DPPH\n\n• Free Radical Scavenging Test. The free radical scavenging activity of extracts was analysed by the 1,1diphenyl-2-picrylhydrazyl (DPPH) assay. The test is based on the ability of antioxidants to reduce the stable dark violet radical DPPH • (SIGMA, USA) to the yellow diphenylpicrylhydrazine. The methodology has been described in our previous study [5].\n\n## 2.8. Ferric-Reducing Antioxidant Power (FRAP) Assay.\n\nThe FRAP method was used to determine the antioxidative capacity of the tested extracts. The procedure has been described earlier [4].\n\n## 2.9. Statistical Analysis.\n\nThe biological experiments were repeated three times. The data were analysed using one-way ANOVA followed by Dunnett's multiple comparison post hoc test. Only results with significance of 𝑝 ≤ 0.05 were considered significant.\n\n## 3. Results and Discussion\n\nMany species classified to the genus Rubus have been recognized as potential agents with significant effects on human health [6][7][8][9][10]. In the present work we selected leaves of blackberry R. caesius (dewberry) species traditionally used as a remedy to treat many diseases, among them gastrointestinal bleeding and diarrhoea [1,11]. More recently, Dudzińska and coauthors indicated that the extracts obtained from dewberry leaves demonstrate antiplatelet activities in whole blood, where neutrophils play a pivotal role in mediating their effects on platelets. Although these extracts do not hamper the neutrophil oxidative metabolism and do not influence the expression of neutrophil adhesive receptors, they demonstrate an ability to lower the reactive oxygen level produced by neutrophils [12]. According to the reviewed literature, little is known about the potential antiproliferative and antioxidant activity of dewberry's leaves which encouraged us to investigate this plant growing in Poland. In addition, there is no solid evidence describing the chemical composition of the species.\n\nFor the first time, we initiated a detailed phytochemical analysis of secondary metabolites and confirmed the presence of derivatives of quercetin and kaempferol, as well as ellagitannins [1,11]. The fingerprints of the analysed R. caesius extracts were established using the HPLC-DAD-MS 3 method. The analysis revealed the presence of thirty-five constituents (Figure 1) comprising ellagitannins and their derivatives, phenolic acids, as well as flavonoids. In the RC1 (water), RC2 (50% methanol), and RC3 (methanol) extracts ellagic acid [22] and sanguiin H-6 [23] were detected as the dominating constituents. The subfractions RC4 (diethyl ether), RC5 (ethyl acetate), and RC6 (n-butanol) contained a wide variety of phenolic acids [ 1 contains detailed UV-Vis and MS data for all the detected compounds together with their preliminary or full identification. These phytoconstituents express reductive activity on free radicals and may limit the appearance of mutations or even participate in DNA repair [24]. There are a few reports concerning the antitumour activity of other Rubus leaves extracts, but no data are available supporting the extracts from dewberry leaves [15,16,22,25,26]. Previous studies on in vitro models suggest that berry from Rubus species may influence colorectal cancer cell survival in concert terms proliferation and apoptosis [17]. Komes and coworkers also revealed that infusion from R. fruticosus leaves may induce cytotoxic action against human colon cells, depending of time and concentration [26]. On the other hand, cancer development is closely associated with inflammation and mutatory microenvironments containing free radicals. In another study, a triterpenoid-rich fraction from R. coreanus has been shown to express strong antiinflammatory activity towards injured colonic tissue [27]. Therefore, we decided to aim our study at the cytotoxic and reduction activity of R. caesius leaves in human colon carcinoma cells. Studies on the biological activity of different extracts and subfractions obtained from dewberry were based on two analyses (MTT and NR assays) which were performed on two human colon tumour cell lines HT29 (Duke's A) and SW948 (Duke's C). They were selected to show the reactivity of the early stage of this tumour development. Our study revealed that the tested R. caesius extracts expressed no cytotoxic activity. Samples RC2, RC5, and RC6 in a range of concentrations up to 200 𝜇g/mL induced mitochondrial action about 20% above the HT29 cells control (Figure 2). The SW948 cell line was more sensitive to the activity of the tested samples (RC1-3, RC5) than the HT29 line. The inhibitory effect was concentration-dependent. On the other hand, samples RC4 and RC6 induced succinyl dehydrogenase activity. The tested extracts inhibited, in a concentrationdependent manner, the viability of cells of both HT29 and SW948 lines. The most active was RC5, which, applied at the highest concentration (250 𝜇g/mL), decreased the viability of cells (HT29 and SW948) below 66% (Figure 3). The butanolic fraction (RC6), which at the highest concentration did not decrease the viability of cells below 87%, was less active. Our tests revealed that R. caesius extracts possess reductive activity. All extracts, with the exception of RC1 at a dose of 250 𝜇g/mL, almost totally reduced DPPH (Figure 4). RC1 at this concentration reduced only half of the radical. Fractions RC4 and RC5 at the lowest concentrations used expressed strong antioxidative activity. IC 50 values of the tested samples' activity and their comparison to the Trolox action are presented in Table 2. Antioxidant activity of the selected extracts was also determined by the FRAP method, which is based on the analysis of the Fe 3+ ions reduction ability of the tested compounds. The highest Fe 3+ ion reduction was shown for the RC4 and RC5 extracts. It was more than 6.5 times higher (at a dose 250 𝜇g/mL) as compared to the control (Figure 5). This result is comparable to 155 𝜇g/mL of ascorbic acid reductive activity. Similarly to DPPH, the lowest reductive action was shown for the RC1 extract. At the highest concentration applied, its activity was only 2.2 times (activity corresponding to 53 𝜇g/mL of ascorbic acid) stronger than the control.\n\nWe showed that the tested extracts (RC1-RC3) and subfractions (RC4-RC6) decreased the viability of cells, acting cytotoxically on tumour cells, and simultaneously expressed strong reductive activity. Our results are in agreement with studies by Durgo et al. [25], showing that red raspberry leaf extract expresses cytotoxic and antioxidative effects in the human colon adenocarcinoma (SW480) cell line. This activity was assigned mainly to polyphenolic compounds present in the plant material. Our results also confirmed results of Dai and coauthors, who revealed that the extracts from blackberry significantly limited HT29 human colon tumour cells growth, and the effect was dependent on the concentration applied. This effect was closely connected with the high content of anthocyanins [22]. Moreover, it was shown that acetone extract of R. fairholmianus roots influenced human colon tumour cell morphology and reduced their viability via limitation of the intracellular ATP pool and changes in cells' metabolic activity. As a consequence, depleted ATP quantity decreased the tumour cell proliferation rate and stimulated their death, mainly in the apoptotic pathway [28]. Furthermore, extracts from lyophilized fruits of R. occidentalis may modulate host immune system processes by impacting on the function and viability of activated human CD4+ and CD8+ T lymphocytes [13]. It may indirectly influence tumour cell development and further metastasis. Analyses in this immunological direction have also been expanded with the use of R. coreanus extracts loaded in gelatin nanoparticles. They were used as transport vehicles for the plant extracts and resulted in the significant enhancement of T, B, and NK cells' functionality in all areas of their immune activity [14]. Interesting results using cold water extracts of fresh fruits of R. caesius were shown by Turker et al. [29], who found a 100% antitumour efficiency of these extracts on cancer cells. In another study, similar effect was also supported by its antioxidative action with a relatively low IC 50 value of 5 𝜇g/mL [19]. Similarly, the shoot extracts of R. idaeus were found to be a source of sanguiin H-6 and ellagic acid, which exhibit antioxidative as well as cytotoxic activity [18]. Lee and coauthors additionally revealed that sanguiin H-6 induces morphological changes in tumour cells which are similar to apoptotic features. However, this compound does not affect the cancer cell cycle. In general, the molecular pathway of sanguiin H-6 activity is mediated by MAPK p38 and BID cleavage with the participation of caspase-8 [20]. Kim and coworkers have shown that the aqueous extract of the incompletely ripened fruit of R. coreanum inhibits cell proliferation and stimulates apoptosis in HT29 cells and that this may be mediated by its ability to activate the caspase-3 pathway [30]. In other study, Bowen-Forbes and coworkers showed that fruit extracts obtained from some Rubus species also exhibited great potential to inhibit colon, breast, lung, and gastric cancer cell growth. The authors speculate that the anticancer effect may partially depend on inhibitory action on cyclooxygenase-2 (COX-2) functionality. Moreover, due to high anthocyanin content, it may also strongly influence the oxidative condition in the tumour cell microenvironment [23].\n\nBesides the general activity of Rubus extracts on tumour cells, it was shown that the range of such action is based on the horticultural parameters of the plant material. Production factors, both genetic and environmental, determine the usefulness of plants as a material for specific destiny, for example, chemoprevention. Therefore, the degree of inhibition of human colon tumour cell proliferation depends not only on general active phytoconstituents presence, but also on their specific composition which is dependent on cultivar, production site, or stage of maturity [21].\n\nGenerally, plant extracts have many biological activities directly aimed at cell morphology and proliferation or indirectly by possessing reductive feature which influence inflammatory state modulating immune system reactivity. In our study R. caesius leaf extract revealed tumour cell growth limiting activity, on both the morphology and metabolism levels. Moreover, its antioxidative activity may be connected with colon origin tumour cells growth reduction.\n\n## References\n\n1. Rejewska, Sikora, Tomczykowa et al. (2013) \"Rubus caesius\" *Pharmacognosy Communications*\n\n2. Tomczyk, Pleszczyńska, Wiater et al. (2013) \"In vitro anticariogenic effects of Drymocallis rupestris extracts and their quality evaluation by HPLC-DAD-MS 3 analysis\" *Molecules*\n\n3. Mosmann (1983) \"Rapid colorimetric assay for cellular growth and survival: application to proliferation and cytotoxicity assays\" *Journal of Immunological Methods*\n\n4. Paduch, Woźniak, Niedziela et al. (2014) \"Assessment of eyebright (Euphrasia officinalis L.) extract activity in Evidence-Based Complementary and Alternative Medicine relation to human corneal cells using in vitro tests\" *Balkan Medical Journal*\n\n5. Paduch, Woźniak (2015) \"The effect of Lamium album extract on cultivated human corneal epithelial cells (10.014 pRSV-T)\" *Journal of Ophthalmic and Vision Research*\n\n6. Patel, Rojas-Vera, Dacke (2004) \"Therapeutic constituents and actions of Rubus species\" *Current Medicinal Chemistry*\n\n7. Rocabado, Bedoya, Abad et al. (2008) \"Rubus-a review of its phytochemical and pharmacological profile\" *Natural Product Communications*\n\n8. Holst, Haavik, Nordeng (2009) \"Raspberry leaf-should it be recommended to pregnant women?\" *Complementary Therapies in Clinical Practice*\n\n9. Gouveia-Figueira, Castilho (2015) \"Phenolic screening by HPLC-DAD-ESI/MS n and antioxidant capacity of leaves, flowers and berries of Rubus grandifolius Lowe\" *Industrial Crops and Products*\n\n10. Li, Du, He (2015) \"Chemical constituents and biological activities of plants from the genus Rubus\" *Chemistry & Biodiversity*\n\n11. Gudej, Tomczyk (2004) \"Determination of flavonoids, tannins and ellagic acid in leaves from RubusL. species\" *Archives of pharmacal research*\n\n12. Dudzinska, Bednarska, Boncler et al. (2016) \"The influence of Rubus idaeus and Rubus caesius leaf extracts on platelet aggregation in whole blood. Cross-talk of platelets and neutrophils\" *Platelets*\n\n13. Mace, King, Ameen (2014) \"Bioactive compounds or metabolites from black raspberries modulate T lymphocyte proliferation, myeloid cell differentiation and Jak/STAT signaling\" *Cancer Immunology and Immunotherapy*\n\n14. Seo, Choi, Lee (2011) \"Enhanced immunomodulatory activity of gelatin-encapsulated Rubus coreanus Miquel nanoparticles\" *International Journal of Molecular Sciences*\n\n15. George, Parimelazhagan, Kumar et al. (2015) \"Antitumor and wound healing properties of Rubus ellipticus Smith\" *Journal of Acupuncture and Meridian Studies*\n\n16. Zhang, Lu, Jiang et al. (2015) \"Bioactivities and extraction optimization of crude polysaccharides from the fruits and leaves of Rubus chingii Hu\" *Carbohydrate Polymers*\n\n17. Brown, Gill, Mcdougall et al. (2012) \"Mechanisms underlying the anti-proliferative effects of berry components in in vitro models of colon cancer\" *Current Pharmaceutical Biotechnology*\n\n18. Krauze-Baranowska, Głód, Kula (2014) \"Chemical composition and biological activity of Rubus idaeus shootsa traditional herbal remedy of Eastern Europe\" *BMC Complementary and Alternative Medicine*\n\n19. Conforti, Marrelli, Carmela (2011) \"Bioactive phytonutrients (omega fatty acids, tocopherols, polyphenols), in vitro inhibition of nitric oxide production and free radical scavenging activity of non-cultivated Mediterranean vegetables\" *Food Chemistry*\n\n20. Lee, Ko, Kim (2016) \"Inhibition of A2780 human ovarian carcinoma cell proliferation by a Rubuscomponent, sanguiin H-6\" *Journal of Agricultural and Food Chemistry*\n\n21. Johnson, Bomser, Scheerens et al. (2011) \"Effect of black raspberry (Rubus occidentalis L.) extract variation conditioned by cultivar, production site, and fruit maturity stage on colon cancer cell proliferation\" *Journal of Agricultural and Food Chemistry*\n\n22. Dai, Patel, Mumper (2007) \"Characterization of blackberry extract and its antiproliferative and anti-inflammatory properties\" *Journal of Medicinal Food*\n\n23. Bowen-Forbes, Zhang, Nair (2010) \"Anthocyanin content, antioxidant, anti-inflammatory and anticancer properties of blackberry and raspberry fruits\" *Journal of Food Composition and Analysis*\n\n24. Rajendran, Ho, Williams et al. (2011) \"Dietary phytochemicals, HDAC inhibition, and DNA damage/repair defects in cancer cells\" *Clinical Epigenetics*\n\n25. Durgo, Belščak-Cvitanović, Stančić et al. (2012) \"The bioactive potential of red raspberry (Rubus idaeus L.) leaves in exhibiting cytotoxic and cytoprotective activity on human laryngeal carcinoma and colon adenocarcinoma\" *Journal of Medicinal Food*\n\n26. Komes, Belščak-Cvitanović, Ljubičić (2014) \"Formulating blackberry leaf mixtures for preparation of infusions with plant derived sources of sweeteners\" *Food Chemistry*\n\n27. Shin, Cho, Choi (2014) \"Anti-inflammatory effect of a standardized triterpenoid-rich fraction isolated from Rubus coreanus on dextran sodium sulfate-induced acute colitis in mice and LPS-induced macrophages\" *Journal of Ethnopharmacology*\n\n28. George, Tynga, Abrahamse (2015) \"In vitro antiproliferative effect of the acetone extract of Rubus fairholmianus gard. Root on human colorectal cancer cells\" *BioMed Research International*\n\n29. Turker, Yildirim, Karakas (2012) \"Antibacterial and antitumor activities of some wild fruits grown in Turkey\" *Biotechnology and Biotechnological Equipment*\n\n30. Kim, Lee, Shin et al. (2005) \"Induction of apoptosis by the aqueous extract of Rubus coreanum in HT-29 human colon cancer cells\" *Nutrition*<|endoftext|>" |
| }, |
| "test": { |
| "total_tokens": 194923400, |
| "example": "# Recombination in Hepatitis C Virus: Identification of Four Novel Naturally Occurring Inter-Subtype Recombinants\n\nWeifeng Shi, Ines Freitas, Chaodong Zhu, Wei Zheng, William Hall, Desmond Higgins\n\n## Abstract\n\nRecombination in Hepatitis C virus (HCV) is considered to be rare. In this study, we performed a phylogenetic analysis of 1278 full-length HCV genome sequences to identify potential recombination events. Nine inter-genotype recombinants were identified, all of which have been previously reported. This confirms the rarity of inter-genotype HCV recombinants. The analysis also identified five inter-subtype recombinants, four of which are documented for the first time (EU246930, EU246931, EU246932, and EU246937). Specifically, the latter represent four different novel recombination types (6a/6o, 6e/ 6o, 6e/6h, and 6n/6o), and this was well supported by seven independent methods embedded in RDP. The breakpoints of the four novel HCV recombinants are located within the NS5B coding region and were different from all previously reported breakpoints. While the locations of the breakpoints identified by RDP were not identical, they are very close. Our study suggests that while recombination in HCV is rare, this warrants further investigation.\n\n## Introduction\n\nHepatitis C Virus (HCV) belongs to the family Flaviviridae and was first identified in 1989 [1]. It is a major cause of the liver diseases: chronic hepatitis, cirrhosis, and hepatocellular carcinoma. HCV is an enveloped virus with a positive-sense, singlestranded RNA genome of approximately 9400 bp in length [2].\n\nThe HCV genome has one open reading frame encoding a polyprotein of about 3,000 amino acids, and this is processed to produce three structural (core, E1, E2) and seven non-structural proteins (p7, NS2, NS3, NS4A, NS4B, NS5A, NS5B) [3].\n\nSimilar to many RNA viruses, HCV exhibits high genetic heterogeneity and to date seven genotypes have been identified. Different genotypes diverge by at least 30% over the complete genome [4]. In addition, HCV has also been further classified into numerous subtypes (http://hcv.lanl.gov/content/sequence/ HCV/classification/genotable.html). Subtypes can diverge by as much as 20%, but within subtype variation is usually less than 10% [4]. To date, genotype 1 includes 13 subtypes (subtypes 1a to 1m). The numbers of subtypes for genotypes 2, 3, and 4 were 18, 11, and 18, respectively. Genotypes 5 and 7 have only a single subtype, 5a and 7a. However, it is likely that more subtypes might be found for these genotypes due to continuous efforts to sequence more viral genomes. Genotype 6 has the largest number of reported subtypes with a total of 21.\n\nRecombination is an important evolutionary process for many viruses, such as human immunodeficiency virus [5] and hepatitis B virus (HBV) [6,7]. However, recombination is considered to be rare in HCV [8,9]. This is supported by the finding that HCVinfected cells can rarely be superinfected by another HCV of a different group or subtype, in vivo [10]. However, HCV superinfection or co-infection is known to occur [11][12][13][14][15] and recombination, while rare, would be expected to occur.\n\nRecently, Gonzalez-Candelas et al. classified HCV recombination events into three types: inter-genotype recombination, intersubtype recombination, and intra-patient/intra-subtype recombination [9]. So far, seven inter-genotype recombination types (2k/ 1b, 2i/6p, 2b/1b, 2/5, 2b/6w, 3a/1b and 2a/1a) and three intersubtype recombination types (1b/1a, 1a/1c and 4a/4d) have been described, based on analysis of either full-length or partial genome sequences [9]. Specifically, the 2k/1b recombinants have been demonstrated in Russia [16], Georgia, Estonia [17], Ireland [18], Uzbekistan [19], and Cyprus [20], and these are still circulating within Europe [21,22]. While it remains to be established, Morel et al. have suggested that genetic recombination may have important implications for HCV diagnosis, therapy, and epidemiology [21].\n\nTo date, only a few recombinants have been identified by analysis of a large number of complete genome sequences, and many recombination events have been identified by analyses of partial genome sequences [9]. It might be expected that, analysis of partial genome sequences could underestimate both the true level of recombination in HCV and may not provide an accurate identification of the breakpoints involved [9,21]. In the present study, we have carried out an analysis of HCV recombination using the large number (n = 1278) of all available full length detailed genome sequences.\n\n## Datasets and Methods\n\n1278 nucleotide sequences of HCV were downloaded from the Los Alamos HCV database (http://hcv.lanl.gov/content/index) on October 5 th, 2011. The full length HCV genome is approximately 9600 bp in size. However, only the coding regions (approximately 9000 bp) are used in our analysis. In addition, a virus sequence of canine origin [23] was downloaded from GenBank and included as an outgroup in the phylogenetic analyses.\n\nThe DNA sequences were initially translated into protein sequences. The protein sequences were aligned using Clustal Omega [24] and the alignment was adjusted manually in Bioedit [25]. The DNA sequence alignment was then made using the protein alignment as a template. The DNA alignment was 9270 bp in length. We applied four strategies to subdivide the alignment into sub-datasets (Table S1). The first strategy subdivides the full-length genome alignment into 15 sub-datasets, with the first 14 sub-datasets 600 bp long and the last one 870 bp long. The second strategy cuts the alignment into 18 sub-datasets, with the first 17 sub-datasets 500 bp long and the last one 770 bp long. The third strategy splits the alignment into 23 sub-datasets, with the first 22 sub-datasets 400 bp long and the last one 470 bp long. The last strategy subdivides the alignment into 31 subdatasets, with the first 30 sub-datasets 300 bp long and the last one 270 bp long. Phylogenetic analysis of the whole genome alignment and all of the sub-datasets was carried out using RAxML [26] under the GTRCAT approximation [27] and random starting trees. 1000, 600, 500, 400 and 300 rapid bootstrap replicates were performed for the full-length genome dataset, sub-datasets split using the first, second, third and fourth subdivision strategies, respectively. All other parameters were set to default. All of the trees are available on request from the authors. Trees were visualized using Dendroscope [28].\n\nInformation on these sequences, including genotype, subtype, and recombination, was downloaded from the database. This information was validated using the phylogenetic tree, constructed using the whole genome sequences, to correct potential genotype or subtype misclassifications and was used as background information. Subtyping information from each phylogenetic tree, constructed using the sub-datasets, was compared to the background information, on an individual basis. For each sequence, if all the information was concordant with the background information, this suggested the virus is not a recombinant. However, if a discrepancy between the background information and subtypes derived from the sub-datasets was identified, these sequences were analyzed further using multiple, independent computational methods described below.\n\nBecause the putative novel recombinants belonged to genotype 6, only the sequences of genotype 6 HCV (n = 77) were used for verification. All the potential novel putative recombinants were verified in a single run using the program RDP 3 [29]. The methods used included RDP [30], GENECONV [31], BootScan [32], Maxchi [33], Chimaera [34], SiSscan [35] and 3Seq [36]. The breakpoints were also defined by RDP. Similarity between the recombinants and their possible major and minor parents was estimated using Bioedit. BootScan, embedded in Simplot [37], was used to visualize the relationships among the recombinants and their possible parents, with a sequence (AF064490) from genotype 5 serving as an outgroup.\n\nTo further verify these recombination events, we extracted the NS5B genes of genotype 6 HCV from the whole alignment and split the alignment into two sub-alignments according to the breakpoints identified: the non-recombinant region and the recombinant region. We constructed phylogenetic trees using the non-recombinant NS5B gene regions and the recombinant regions, respectively. This was performed using PhyML [38]. To test the alternative topologies derived, we performed the Kishino-Hasegawa (KH) test [39] and calculated expected likelihood weights [40] using Tree-Puzzle [41].\n\nIn addition, to exclude the possibility that the detected recombination events are caused by lack of phylogenetic signals in the 39-end of genotype 6 HCV, we used the likelihood mapping method [42], implemented in Tree-Puzzle, to test whether the datasets used for detecting recombination events are suitable for phylogenetic analysis. Three models (HKY [43], TN [44] and GTR [45]) were used, respectively. Similarly, only the NS5B genes of genotype 6 HCV were used in this analysis.\n\n## Results\n\n## Phylogenetic Analysis of the Full-length Genome Sequences\n\nPhylogenetic analysis of the 1278 full-length genome sequences supports the current classification of HCV into seven genotypes, 1-7 (File S1). The number of sequences belonging to genotype 1 was 993, accounting for approximately 78% of the whole dataset, while that of genotype 2 was 116 (9%). Genotypes 3-7 included 33, 47, 5, 77 and 1 sequence, respectively.\n\n## Inter-genotype Recombination\n\nBy comparing phylogenetic signals from different subdivided fragments of the full-length genome sequences, we identified nine inter-genotype HCV recombinants. They belong to five recombination types, 2/5 (n = 2), 2b/6w (n = 1), 2b/1a (n = 1), 2b/ 1b (n = 1), and 2k/1b (n = 4), respectively (Table S2). All of these have been previously described [9]. No novel inter-genotype recombinants were found. Inter-subtype Recombination Phylogenetic trees constructed using different sequence fragments can be used to find potential inter-subtype recombination events. In all, five inter-subtype recombinants were identified. The 1a/1c recombinant (AY651061) has already been reported [46] and was not further studied. The remaining four sequences, EU246930, EU246931, EU246932 and EU246937, are shown for the first time to be recombinants. These four sequences were isolated from Vietnam and Thailand and have been reported to belong to subtypes 6a, 6e, 6e and 6n, respectively [47]. Phylogenetic analysis of the full-length genome sequences confirmed this subtype classification (data not shown). However, phylogenetic trees estimated using the 600 bp (n = 15), 500 bp (n = 18), 400 bp (n = 23) and 300 bp (n = 31) fragments were consistent and demonstrated that EU246930, EU246931, EU246932 and EU246937 are 6a/6o, 6e/6h, 6e/6o, and 6n/6o recombinants, respectively (Table 1).\n\nFigures 1 and2 demonstrate how potential recombination events are identified from the trees. Figures 1 and2 present the genotype 6 lineages of the phylogenetic trees constructed using the first fragment (600 bp in length) and the last fragment (827 bp in length) in the first sub-division strategy. In Figure 1, EU246930 (6a) is clustered within a lineage of 6a sequences and the bootstrap support value for this lineage is 84%. EU246931 and EU246932 (6e) fall within a cluster of 6e, with a bootstrap value of 92%, while EU246937 belongs to subtype 6n with a bootstrap value of 91%. However, in Figure 2, different phylogenetic relationships are found. EU246930 (6a), EU246932 (6e) and EU246937 (6n) are clustered with a lineage of subtype 6o sequences and the bootstrap support is 98%, while EU246931 (6e) forms a separate lineage with D84265 (6h) with 100% bootstrap support. Employing this approach, we analyzed all the trees and summarized the discordant phylogenetic signals suggesting evidence of recombination.\n\nFurther verification of these four recombinants was performed using RDP (Table 2). The four inter-subtype recombination events are supported by seven methods with significant p values (Table 2). The relationships within the recombinants with the potential major and minor parents identified by RDP were visualized using BootScan (Figure 3), which confirmed the recombination events.\n\nPhylogenetic analyses and BootScan analysis indicate that the breakpoints of the four recombinants are located within the NS5B region (Table 1, Figure 3). Breakpoints of the four recombinants defined by RDP are consistent with the result obtained by phylogenetic analysis. However, the locations are not exactly the same in each case and the length of the recombined segments ranged from 620 bp for EU246932 to 729 bp for EU246930 (Table 3).\n\nResults from the KH test and ELW were consistent and both of them supported that the phylogenies derived the non-recombinant region and the recombinant region were significantly different (Table 4).\n\nLikelihood mapping analysis of NS5B gene sequences of genotype 6 HCV using different models were congruent. All of them showed that the tree-likeness of the NS5B gene was very high, with the sum of A 1, A 2, and A 3 ranging from 95.5% to 96.0% (Table 5). In contrast, the value of A 7, which is evidence to support the star-likeness, was relatively small, ranging from 1.2% to 1.8%. In particular, the recombinant regions (8358-9099) also displayed very high probability of tree-likeness (Table 5).\n\n## Discussion\n\nRecombination in HCV has been considered a rare event. This is supported by the observation of superinfection exclusion, where an established virus infection prevents or interferes with subsequent infection by a second virus [10]. The first naturally occurring inter-genotype HCV recombinant was identified in 2002 [16]. This recombinant became established and is still circulating in some European countries [21,22]. So far, seven inter-genotype recombination types have been described [9]. Here, we identify nine inter-genotype recombinants and they belong to five inter-genotype recombination types, 2/5, 2b/6w, 2b/1a, 2b/1b and 2k/1b, respectively. All of these have been previously reported and no new or novel inter-genotype recombinants are found in this analysis. Sequence similarity between the recombinants and their major and minor parents is estimated only using the recombined regions. The major and minor parents of the recombinants are identified by RDP. doi:10.1371/journal.pone.0041997.t003 So far, only one subtype of genotype 5, 5a, has been identified. The breakpoint of the 2/5 recombinants is identified to be at or near the NS2/NS3 junction, between residues 3420 and 3440 [48]. Our results confirm this finding. However, the sequence divergence between the 2/5 recombinants and 5a from position 3421 to the end of the genome is 34.5% (1%, standard deviation), which is higher than the 20% cutoff used to define a subtype. Therefore, it is likely that the 2/5 recombinants are derived from a putative subtype of genotype 5, rather than 5a. Further collecting and sequencing more HCV samples of genotype 5 is needed to reveal the real phylogenetic diversity of HCV and to trace the most likely parents of the 2/5 recombinants.\n\nIn our work, five inter-subtype recombinants were found through large-scale phylogenetic analyses. The 1a/1c recombinant sequence was identified in India and has already been reported [46]. However, the remaining four recombinants are described here for the first time. These recombination events were well supported by various recombination detection methods and were shown not to result from the lack of phylogenetic signal in the 39end of HCV genomes. Specifically, they represent four novel intersubtype recombination types, 6a/6o, 6e/6o, 6e/6h and 6n/6o, respectively.\n\nAlthough only a few HCV recombinants have been described, current evidence suggests that the NS2/NS3 junction may be a hotspot for HCV recombination [9]. However, a breakpoint has also been identified within NS5B [49] and this is mapped to position 8046 in our alignment which is different from the breakpoints identified in our study (8245, 8356, 8358 and 8372, respectively). Notably, while the breakpoints of the recombinants identified in our study are not identical, they are very close. At present, it is impossible to determine whether these recombinants have arisen from single or multiple recombination events.\n\nTwo previous studies have also shown that recombination can happen within a single subtype or a patient [50,51]. Sentandreu et al. analyzed 17712 sequences from 136 serum samples derived from 111 patients and found approximately 11% of the samples were potential recombinant sequences [51]. On this basis, they concluded that recombination should be considered as a potentially important molecular mechanism for HCV to generate novel genetic variants. However, because our dataset has approximately 1300 sequences, it is extremely difficult to study detailed phylogenetic relationships for each sequence within a subtype using our approach and therefore we did not investigate intrasubtype recombination.\n\nThe subdivision of the whole HBV genome into numerous subdatasets has previously termed ''fragment typing'' and has been used to identify putative HBV recombinants [52]. We have also recently used a similar approach to detect HBV recombination [7]. In this work, we applied four strategies to split the whole genome alignment into sub-datasets of different lengths, with different start and end points. The results obtained from the four strategies are broadly in agreement. Therefore, we consider our approach to be very robust for the detection of inter-genotype and inter-subtype HCV recombinants and this is particularly useful when large datasets with thousands of genome sequences are involved. However, this has two limitations. First, it may not be effective for the detection of small recombined fragments of less than 100 bp, because the shorter the alignment is, the lower the power and sensitivity of the phylogenetic analysis. Second, it is difficult to detect intra-subtype recombination using this method. For some subtypes, such as 1a and 1b where there are a few hundred sequences available, it is difficult to detect the incongruent phylogenetic signals by ''eyeballing'' the trees. In these cases, methods that are able to automatically detect potential recombination events, such as RDP, as used in this study, should be employed.\n\nPrevious computer simulation studies and empirical data have shown that different recombination detection methods have distinct features and no single method is best for all situations [34,53]. In this work, seven methods used for verification of the results obtained from phylogenetic analyses. These methods are based on different rationales and have been classified into different classes [34,53]. For example, RDP, BootScan and SiSscan are phylogeny-based, while GENECONV, Maxchi and Chimaera are substitution-based. Because all of these methods detected the four sequences as recombinants, this provides very convincing evidence that these recombinants have been properly designated.\n\nIn conclusion, we have performed a large scale phylogenetic analysis of 1278 full-length genome sequences to detect putative inter-genotype and inter-subtype recombinants. No new or novel inter-genotype recombinants were found. However, we have identified for the first time four novel inter-subtype recombinants. Our studies suggest that HCV recombination and its implications for both pathogenesis and clinical outcomes certainly warrant further study.\n\n## Supporting Information\n\n## References\n\n1. Choo, Kuo, Weiner et al. (1989) \"Isolation of a cDNA clone derived from a blood-borne non-A, non-B viral hepatitis genome\" *Science*\n\n2. Simmonds (2004) \"Genetic diversity and evolution of hepatitis C virus-15 years on\" *J Gen Virol*\n\n3. Dubuisson (2007) \"Hepatitis C virus proteins\" *World J Gastroentero*\n\n4. Smith, Pathirana, Davidson et al. (1997) \"The origin of hepatitis C virus genotypes\" *J Gen Virol*\n\n5. Burke (1997) \"Recombination in HIV: an important viral evolutionary strategy\" *Emerg Infect Dis*\n\n6. Simmonds, Midgley (2005) \"Recombination in the genesis and evolution of hepatitis b virus genotypes\" *J Virol*\n\n7. Shi, Carr, Dunford et al. (2012) \"Identification of Novel Inter-genotypic Recombinants of Human Hepatitis B Viruses by Largescale Phylogenetic analysis\" *Virology*\n\n8. Yun, Lara, Johansson et al. (1996) \"Discrepancy of hepatitis C virus genotypes as determined by phylogenetic analysis of partial NS5 and core sequences\" *J Med Virol*\n\n9. Gonzalez-Candelas, Lopez-Labrador, Bracho (2011) \"Recombination in hepatitis C virus\" *Viruses*\n\n10. Tscherne, Evans, Hahn et al. (2007) \"Superinfection exclusion in cells infected with hepatitis C virus\" *J Virol*\n\n11. Matsubara, Sumazaki, Shin et al. (1996) \"Genotyping of hepatitis C virus: coinfection by multiple genotypes detected in children with chronic posttransfusion hepatitis C\" *J Pediatr Gastr Nutr*\n\n12. Toyoda, Fukuda, Hayakawa et al. (1998) \"Characteristics of patients with chronic infection due to hepatitis C virus of mixed subtype: prevalence, viral RNA concentrations, and response to interferon therapy\" *Clin Infect Dis*\n\n13. Giannini, Giannelli, Monti et al. (1999) \"Prevalence of mixed infection by different hepatitis C virus genotypes in patients with hepatitis C virus-related chronic liver disease\" *J Lab Clin Med*\n\n14. Asselah, Vidaud, Doloy et al. (2003) \"Second infection with a different hepatitis C virus genotype in a intravenous drug user during interferon therapy\" *Gut*\n\n15. Schijman, Colina, Mukomolov et al. (2004) \"Comparison of hepatitis C viral loads in patients with or without coinfection with different genotypes\" *Clin Diagn Lab Immun*\n\n16. Kalinina, Norder, Mukomolov et al. (2002) \"A natural intergenotypic recombinant of hepatitis C virus identified in St\" *J Virol*\n\n17. Tallo, Norder, Tefanova et al. (2007) \"Genetic characterization of hepatitis C virus strains in Estonia: fluctuations in the predominating subtype with time\" *J Med Virol*\n\n18. Moreau, Hegarty, Levis et al. (2006) \"Serendipitous identification of natural intergenotypic recombinants of hepatitis C in Ireland\" *Virol J*\n\n19. Kurbanov, Tanaka, Avazova et al. (2008) \"Detection of hepatitis C virus natural recombinant RF1_2k/1b strain among intravenous drug users in Uzbekistan\" *Hepatol Res*\n\n20. Demetriou, Kyriakou, Kostrikis (2011) \"Near-full genome characterisation of two natural intergenotypic 2k/1b recombinant Hepatitis C virus isolates\" *Adv Virol*\n\n21. Morel, Fournier, Francois et al. (2011) \"Genetic recombination of the hepatitis C virus: clinical implications\" *J Viral Hepatitis*\n\n22. Raghwani, Thomas, Koekkoek et al. (2011) \"The origin and evolution of the unique HCV circulating recombinant form 2k/1b\" *J Virol*\n\n23. Kapoor, Simmonds, Gerold et al. (2011) \"Characterization of a canine homolog of hepatitis C virus\" *P Natl Acad Sci*\n\n24. Sievers, Wilm, Dineen et al. (2011) \"Fast, scalable generation of high-quality protein multiple sequence alignments using Clustal Omega\" *Mol Syst Biol*\n\n25. Hall (1999) \"BioEdit: a user-friendly biological sequence alignment editor and analysis program for Windows 95/98/NT\" *Nucl Acids Symp Ser*\n\n26. Stamatakis, Ludwig, Meier (2005) \"Raxml-iii: A fast program for maximum likelihood-based inference of large phylogenetic trees\" *Bioinformatics*\n\n27. Stamatakis (2006) \"Phylogenetic models of rate heterogeneity: A high performance computing perspective\"\n\n28. Huson, Richter, Rausch et al. (2007) \"Dendroscope: An interactive viewer for large phylogenetic trees\" *BMC Bioinformatics*\n\n29. Martin, Lemey, Lott et al. (2010) \"Rdp3: A flexible and fast computer program for analyzing recombination\" *Bioinformatics*\n\n30. Martin, Rybicki (2000) \"RDP: detection of recombination amongst aligned sequences\" *Bioinformatics*\n\n31. Padidam, Sawyer, Fauquet (1999) \"Possible emergence of new geminiviruses by frequent recombination\" *Virology*\n\n32. Martin, Posada, Crandall et al. (2005) \"A modified bootscan algorithm for automated identification of recombinant sequences and recombination breakpoints\" *AIDS Res Hum Retroviruses*\n\n33. Smith (1992) \"Analyzing the mosaic structure of genes\" *J Mol Evol*\n\n34. Posada, Crandall (2001) \"Evaluation of methods for detecting recombination from DNA sequences: Computer simulations\" *P Natl Acad Sci*\n\n35. Gibbs, Armstrong, Gibbs (2000) \"Sister-Scanning: a Monte Carlo procedure for assessing signals in recombinant sequences\" *Bioinformatics*\n\n36. Boni, Posada, Feldman (2007) \"An exact nonparametric method for inferring mosaic structure in sequence triplets\" *Genetics*\n\n37. Lole, Bollinger, Paranjape et al. (1999) \"Full-length human immunodeficiency virus type 1 genomes from subtype cinfected seroconverters in india, with evidence of intersubtype recombination\" *J Virol*\n\n38. Guindon, Gascuel (2003) \"A simple, fast, and accurate algorithm to estimate large phylogenies by maximum likelihood\" *Syst Biol*\n\n39. Kishino, Hasegawa (1989) \"Evaluation of the maximum likelihood estimate of the evolutionary tree topologies from DNA sequence data, and the branching order in Hominoidea\" *J Mol Evol*\n\n40. Strimmer, Rambaut (2002) \"Inferring confidence sets of possibly misspecified gene trees\" *Proc R Soc Lond B*\n\n41. Schmidt, Strimmer, Vingron et al. (2002) \"TREE-PUZZLE: maximum likelihood phylogenetic analysis using quartets and parallel computing\" *Bioinformatics*\n\n42. Strimmer, Haeseler (1997) \"Likelihood-mapping: A simple method to visualize phylogenetic content of a sequence alignment\" *Proc Natl Acad Sci*\n\n43. Hasegawa, Kishino, Yano (1985) \"Dating of the human-ape splitting by a molecular clock of mitochondrial DNA\" *J Mol Evol*\n\n44. Tamura, Nei (1993) \"Estimation of the number of nucleotide substitutions in the control region of mitochondrial DNA in humans and chimpanzees\" *Mol Biol Evol*\n\n45. Tavare (1986) \"Some Probabilistic and Statistical Problems in the Analysis of DNA Sequences\"\n\n46. Ross, Verbeeck, Viazov et al. (2008) \"Evidence for a complex mosaic genome pattern in a full-length hepatitis C virus sequence\" *Evol Bioinform*\n\n47. Noppornpanth, Poovorawan, Lien et al. (2008) \"Complete genome analysis of hepatitis C virus subtypes 6t and 6u\" *J Gen Virol*\n\n48. Legrand-Abravanel, Claudinon, Nicot et al. (2007) \") recombinant of hepatitis C virus\"\n\n49. Colina, Casane, Vasquez et al. (2004) \"Evidence of intratypic recombination in natural populations of hepatitis C virus\" *J Gen Virol*\n\n50. Moreno, Casane, Lopez (2006) \"Evidence of recombination in quasispecies populations of a Hepatitis C Virus patient undergoing anti-viral therapy\" *Virol J*\n\n51. Sentandreu, Jimenez-Hernandez, Torres-Puente et al. (2008) \"Evidence of recombination in intrapatient populations of hepatitis C virus\" *PloS One*\n\n52. Yang, Xing, Deng et al. (2006) \"Identification of hepatitis b virus putative intergenotype recombinants by using fragment typing\" *J Gen Virol*\n\n53. Posada (2002) \"Evaluation of methods for detecting recombination from DNA sequences: Empirical data\" *Mol Biol Evol*<|endoftext|>" |
| } |
| }, |
| "cyber": { |
| "train": { |
| "total_tokens": 720587575, |
| "example": "# Pitch Imperfect: Detecting Audio Deepfakes Through Acoustic Prosodic Analysis\n\nKevin Warren, Daniel Olszewski, Seth Layton, Carrie Gates\n\n## Abstract\n\nAudio deepfakes are increasingly indifferentiable from organic speech, often fooling both authentication systems and human listeners. While many techniques use low-level audio features or optimization black-box model training, focusing on the features that humans use to recognize speech will likely be a more long-term robust approach to detection. We explore the use of prosody, or the high-level linguistic features of human speech (e.g., pitch, intonation, jitter) as a more foundational means of detecting audio deepfakes. We develop a detector based on six classical prosodic features and demonstrate that our model performs as well as other baseline models used by the community to detect audio deepfakes with an accuracy of 93% and an EER of 24.7%. More importantly, we demonstrate the benefits of using a linguistic features-based approach over existing models by applying an adaptive adversary using an L∞ norm attack against the detectors and using attention mechansisms in our training for explainability. We show that we can explain the prosodic features that have highest impact on the model's decision (Jitter, Shimmer and Mean Fundamental Frequency) and that other models are extremely susceptible to simple L∞ norm attacks (99.3% relative degradation in accuracy). While overall performance may be similar, we illustrate the robustness and explainability benefits to a prosody feature approach to audio deepfake detection.\n\n## I. INTRODUCTION\n\nRecent advances in audio deepfake generation techniques make creating human-sounding audio of anyone's voice more accessible and rapidly producible. While deepfakes can have enormous positive benefits [1], [2], they also present the potential for misuse in cases such as fraud [3], [4] and disinformation [5], [6]. Society's exposure to deepfakes has grown in recent years as the success of deepfakes in social media has increased their visibility. In response, there has been a large effort by researchers to explore potential options for detecting such threats. Early deepfake detection techniques exploits black-box machine learning (ML) techniques and lowlevel flaws (e.g., unusual spectral correlations [7], abnormal noise level estimations [8], and unique cepstral patterns [9]) to develop defenses.\n\nRecent work by Warren et al. studying the ways that human's percieve audio deepfakes demonstrates that humans and current machine learning models detect deepfakes differently and have separate trade-offs in performance [10]. They assert that these differences require that both humans and models are needed together for the detection process. Because of this, we take a different approach than the previous detection techniques and create a model that mimics the human's side in the detection process. In this paper, we explore a detection approach that focuses on the higher-level linguistc features that humans use when classifying deepfakes. We characterize prosody, the features of speech that are related to intonation, rhythm, and stress. While work has been done to improve prosody in deepfakes [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], characterizing prosody based on any single metric remains an open challenge in applied linguistics [24] and faults in prosody are the most common reason people believe a piece of audio is a deepfake [10].\n\nWe explore the viability of prosodic features as a deepfake detection approach and highlight the benefits this approach has over existing models. Our efforts produce the following contributions:\n\n• Evaluate Prosody-Based Deepfake Detection: We develop a classifier using six prosodic features (average and standard deviation of the mean-F 0, jitter, shimmer, average and standard deviation of harmonic to noise ratio (HNR)). We train/test our classifier on the ASVspoof2021 dataset [25] and show that we discriminate deepfakes with 93% accuracy. Additionally, we demonstrate similar performance to baseline models across all standard ML performance metrics. • Provide Decision Explainability: We implement an attention mechanism into our training process to determine which prosodic features have the largest influence on the model's classification. We show that three features (e.g., jitter, shimmer, and mean-F 0 ) have the largest impact on the model's decision. • Characterize an Adaptive Adversary: No prior work considers an adaptive adversary and its impacts on detection accuracy. We run an L ∞ norm adversary [26] against the best available baseline and show a 99.3% decrease in accuracy with minimal perturbation. This demonstrates that miniscule amounts of white noise can bypass other detectors, while our approach is not susceptible to such attacks. This paper uses the linguistic characteristics of generated speech to aid in the detection of deepfakes. We anticipate that as audio deepfake generators improve and eliminate the flaws that many current detectors build on, overcoming the task of mimicking prosody will be a far greater challenge. This work serves as the beginning of a long-term effort to consider the intersection of linguistics and speech to protect against deepfake threats.\n\nThis paper is organized as follows: Section II presents related work; Section III gives a background on prosody;\n\nSection IV provides a deepfake taxomony; Section V discusses our research questions; Section VI details the dataset used; Section VII presents our model compared to 4 baseline detectors; Section VIII contextualizes our model decisions and tests against an adaptive adverary; Section IX discusses related challenges; and Section X concludes.\n\n## II. RELATED WORK\n\nThe emergence and advancement of raw audio generation techniques have vastly improved the quality of audio attempting to sound organic/natural to the human ear [27], [28], [29], [30]. Deepfake audio aims to impersonate real people to make the differentiation between deepfake and human speech difficult [31], [32]. The potential for dangerous applications of fake audio has created the need for automated demarcation of humans from deepfakes.\n\nAudio deepfake detection was associated with spoof detection for automatic speaker verification (ASV) systems and spawned challenges such as ASVspoof2015 [33] and ASVspoof2019 [34]. However, the term \"audio deepfake\" evolved to aim to fool humans. This evolution spurred the ASVspoof2021 DeepFake [25] and Audio Deep Synthesis Detection [35] challenges. Current detection methods primarily employ complex Neural Networks [36], [37], [38], [39], [40], [41], [42], [43], [44]. These models generally focus on lowlevel features (e.g., spectrogram, MFCC, and CQCC).\n\nRecent work has explored the comparison between the abilities for humans to act as deepfake detectors [45], [46], [47], [48]. While humans do not perform as well as most detection models, Warren et al. [10] demonstrate that models do not strictly improve upon human performance, but rather have a difference in the way that they detect, and that both are necessary in the detection process. They demonstrate that humans are more sensitive to false negative decisions (i.e., believing deepfakes are humans), while models are sensitive to false positive decisions (i.e., believing humans are deepfakes). Several of these studies [10], [46] show that humans rely on linguistic features like prosody, pace, disfluencies and accents to aid in their decision process. Our work aims to explore the ability for models to detect using the same features as humans.\n\n## III. PROSODIC ANALYSIS A. Prosody Elements\n\nStructured language is defined not just by the words we use, but also by the acoustic features of our voices, known as prosody. Prosody is a linguistic catch-all for the suprasegmental parts of speech (e.g., stress, pitch, and tone). In linguistics, prosodic analysis is generally used to capture abnormal variations in prosody, voice quality, and pronunciations to diagnose pathological speech [49], [50]. For connected speech (i.e., conversations), we analyze prosody through a prosodic feature set and a voice quality feature set.\n\nThe prosodic features that are most consistently used in linguistics are pitch, length (i.e., duration of syllable sounds), and loudness. For this paper, we focus on the prosodic feature of pitch and its related concept of intonation which helps describe the complexity of human speech. Aspects of 'pitch' and 'tone' are the prosodic features that humans focus on for detection [10]. Voice quality is represented by another set of features that classify the raspy and airy nature of human voices. The voice quality measurements that are used within acoustic studies are jitter, shimmer, and harmonics-to-noise ratio (HNR). Some studies consider jitter and shimmer to be prosody-based instead of voice quality depending on their use in either voice sounds or speech sounds. All of these features (e.g., pitch, jitter, shimmer, and HNR) will serve as the basis of our prosodic analysis and will help determine vocal abnormalities for the detection of synthetically generated speech.\n\n1) Fundamental Frequency and Pitch: Fundamental frequency (F 0 ), the acoustic measurement of pitch, is a basic feature that describes human speech. Frequency is the number of times a sound wave repeats during a given period and the fundamental frequency is the lowest frequency of a voice signal [51]. Similarly, pitch is defined as our brain's perception of the fundamental frequency. The difference in the two features is most apparent when it comes to phantom fundamentals. A phantom fundamental is a phenomenon in which the brain perceives harmonics of the fundamental frequency as the existence of the fundamental frequency even if it is missing or removed from the voice signal (e.g., spectral filtering and frequency modulation). The fundamental frequency for male speakers is typically around 125 Hz, while female speakers tend to average around 210 Hz [52]. Whenever the fundamental frequency is present, the fundamental frequency and pitch refer to the same value.\n\n2) Intonation: The rise and fall of a person's voice (i.e., melodic patterns) refer to the prosodic feature called intonation. Varying tones help to give meaning to an utterance, allowing a person to stress certain parts of speech and express the desired emotion. A classic example of this is in the English language where one can turn any statement into a question by adding a rising intonation at the trail of the sentence. Without emotion, conversations become lifeless and unengaging, which is why varying intonation helps us indicate liveliness and makes speech sound more natural. A shift from a rising tone to a falling one is known as peaking intonation and the inverse is called dipping intonation. The more frequently these appear in speech, the less monotone a person will sound.\n\n3) Jitter and Shimmer: Voiced speech comes from a fluctuating organic source, making it quasi-periodic, and creating measurable differences in the oscillation of the signal. Jitter is the frequency variation between two cycles (i.e., period length), and shimmer measures the amplitude variation of a sound wave. Jitter comes from lapses in control of our vocal cord vibrations and people diagnosed with speech pathologies generally have higher amounts of jitter in their voice. The jitter levels in a person's voice are a representation of how \"hoarse\" their voice sounds [53]. Shimmer, however, corresponds to the presence of breathiness or noise emissions in our speech [51]. Both of these features capture the subtle inconsistencies that are present in human speech.\n\n4) Harmonic to Noise Ratio (HNR): Harmonic to noise ratio is the ratio of periodic and non-periodic components within a segment of voiced speech [54]. The HNR of a speech sample is commonly referred to as harmonicity and measures the efficiency of a person's speech. With respect to prosody, HNR denotes the texture (i.e., softness or roughness) of a person's sound. The combination of jitter, shimmer, and HNR quantifies an individual's voice quality. Harmonicity is another measurement that can help determine speech pathology or asthenia (i.e., abnormal physical weakness) in the voice. Most of the sounds that are made during human speech are associated with high HNR values. People, however, have varying degrees of lung capacity and strength in their vocal cords which makes HNR dependent on each individual, even for the same sound.\n\n## B. Acoustic Prosodic Analysis\n\nThe prosodic features presented in Section III-A are the main elements that constitute traditional prosodic analysis, which has appeared in linguistics for decades [55]. Acoustic prosodic analysis uses combinations of these prosody features to diagnose speech pathologies and to better understand the limitation in expressiveness and deficits in communications for people with a variety of behavioral and intellectual issues. This analysis looks for abnormalities and deviations in prosodic features from the expected range to determine issues with an individual's voice and compares those results with various known speech pathology deviations. By treating deepfakes as speech pathologies or disorders, we can leverage the use of acoustic prosodic analysis in deepfake detection to differentiate real and fake speech. Using various combinations of these features helps to describe the complex patterns and properties of an individual's voice.\n\n## IV. TAXONOMY OF FAKE SPEECH\n\nThe term \"audio deepfake\" can refer to different types of adversarial audio. To clarify the multitude of threats, we introduce a taxonomy to characterize deepfake attacks by their objective, generation method, and intended recipient.\n\n## A. Objective\n\nWhile deepfakes can be created for many purposes, the objective of the attack affects both its threat level and impact. While training and generation generally require specific voices, the target of a deepfake can either be claiming to impersonate a specific person or to generate a generic untargeted voice.\n\na) Targeted Deepfake: Targeted deepfakes use Deep Neural Networks (DNNs) to create audio based on a specified individual. These systems follow a standard encoder, synthesizer, and vocoder model seen in Figure 1. The encoder is designed to model a speaker's voice. It uses a set of utterances to create a distinctive profile of a speaker's voice called a speaker embedding. The more samples given to the encoder, the more accurate the embedding will be, with diminishing returns. Using the speaker embedding and an input text, the synthesizer generates a Mel spectrogram for the given text. The Mel spectrogram uses frequencies converted into the Mel scale, which is a logarithmic scale designed to mimic the human ear's perception of sound. Some recent synthesizers can generate the spectrogram from just a character sequence or phonetic spellings [56]. The vocoder converts the Mel spectrogram into the corresponding audio waveform. While the synthetic speech generation process is constantly changing with new tools, these three components are fundamental to the framework. The quality of the models in each tool is dependent on the quantity of training data and the complexity of the structure. The trade-off for better models is an increase in time and training resources.\n\nUsing these models, an adversary can create a sample for malicious purposes (e.g., a targeted audio deepfake). For example, a person may try to get a target to transfer money on fake orders made in the voice of the CEO [57]. Targeted audio has also been used for benign purposes in the medical field (e.g., recreating the voice of someone who has lost the ability to speak [1]) and is emerging as a possible future alternative in cinematography (e.g., recreating audio of deceased actors/actresses or developing movies with completely digital versions of actors/actresses).\n\nb) Untargeted Deepfake: Untargeted deepfakes also use neural networks, typically a single vocoder, to create fake organic-sounding audio. While untargeted audio can also be based on an individual's voice, it differs in that the sample is not claiming to be the person that it was trained on. For example, the voice of Siri is based on a voice actor, but the assistant uses the sample to sound more human, not impersonate the actor [58]. This aims at giving consumers the feeling of having a real conversation and not that they are talking to a real person.\n\nAdversarially, untargeted attacks can be used for spam in scenarios where the identity of the speaker does not need to be verified to elicit a response. For example, police dispatch often react to incoming calls without verifying the caller if the situation presents itself as dire.\n\n## B. Generation Methodology\n\na) Fully Generative Audio: Fully generative audio is a classification of fake speech that takes a representation of a person's voice and creates a speech sample from scratch. This process comes from recent advancements in machine learning (ML) and DNNs. Hidden layers in DNNs allow complicated tasks such as speech synthesis to work, but have difficulty learning specific attributes.\n\nIf speech synthesis needs to learn specific features such as prosody, the process is mainly trial and error of training data, hyperparameters, and activation functions.\n\nb) Voice Conversion: Voice conversion is a technique that is distinguished by its use of samples from two speakers: a source speaker and a target speaker as shown in Figure 2. Voice conversion uses a transfer function to convert the spectral features of the source speaker to closely match those of the target speaker. Modern VC systems will typically use the converted spectral features with a vocoder to generate the final fake speech sample. Generally, voice conversion outputs are noisier than targeted deepfakes since the system has to account for the background noise in the source sample. Since this generation method requires a source speaker, it is only a targeted objective.\n\n## C. Intended Recipient\n\nDeepfake samples are aimed at fooling either a machine or a human. The output requirements change depending on the target. Thus, designating the intended recipient is crucial for allocating appropriate tasks.\n\na) Machine Recipient: Audio deepfakes were originally focused, in practice, on targeting machine systems such as automatic speaker verification (ASV) systems. This focus was driven by the low audio quality of early deepfakes. Machine learning models were able to craft samples that were optimized against the models of an ASV program, as these samples did not need to succeed in human authentication. This means that deepfake models only needed to make audio whose quality and imitation were good enough to bypass the ASV.\n\nb) Human Recipient: Over time, improved deepfake audio quality output has shifted the focus of deepfakes from fooling machines to fooling humans. With the addition of better vocoders and natural language processing models, deepfakes have become eerily believable to the average listener. These deepfakes cause social issues and impose uncertainty on the authenticity of videos and audio clips. Unlike machine-targeted deepfakes, these require sufficient quality for a human to believe the source.\n\nDetection tasks in this space focus on forensic applications on questionable outlets for media, social media forums, and social engineering attacks. This type of detection has become popularized with recent world events such as the war in Ukraine and the growing trend of faking high-ranked political officials and business professionals [59].\n\nV. RESERCH QUESTIONS AND APPROACH Based on the taxonomy, we design our experiments and reserach quesitons around deepfakes designed to fool a person.\n\nWe aim to answer the following research questions: RQ1 Viability of Prosody for Detection: Do prosodic features robustly distinguish between deepfake and human generated audio?\n\nRQ2 Benefits of Using A Prosody Detector: What are the benefits to using a prosody detector approach over the standard deepfake detection model? Similar to a user study, we perform our testing from a forensic postprocessing standpoint. The model, like a user, processes the entire audio sample and makes one classification of human or computer generated for the entire clip. Additionally, this approach aims to justify the classification of the detector and explain which features (i.e., prosody elements) impact the decision.\n\n## VI. DATASET\n\nFor training and evaluating our model, we focus on the widely used ASVspoof2021 dataset [25]. This dataset contains samples that are clearly defined for our task and represents the community standard used for deepfake detectors. Additionally, ASVspoof2021 provides a set of baseline models tested against the dataset for comparison.\n\n## A. ASVspoof2021:\n\nThe ASVspoof dataset iterations are considered state-ofthe-art for each subsequent ASVspoof challenge (2015-2023), and as such we aim to use one of the recent dataset releases. Thus we use the ASVspoof2021 dataset to train and validate our model. ASVspoof2021 contains three datasets: training, validation, and evaluation. The ASVspoof2021 dataset contains physical access data, logical access data, and deepfake data. The physical access (PA) data contains spoofing attacks performed at the sensor level, such as replay attacks, and rely on weakness in the automatic speech recognition system hardware. The logical access (LA) data focuses on spoofing attacks generated by textto-speech (TTS) and voice conversion (VC) aimed at speaker verification. The deepfake audio also focuses on TTS and VC audio with two differences: (1) the use of more generalized compression algorithms and (2) the focus on audio forensics removing the use of ASV systems. The PA/LA datasets are focused more on bypassing speaker verification systems, while the deepfake set only focuses on whether the audio is human generated. Thus, we focus on the deepfake data which contains a collection of 'bonafide' and synthetic audio samples that were processed with various lossy codecs. The source data for the ASVspoof2021 evaluation dataset is a combination of the previous year's logical accessed data along with many other new sources for a significantly larger evaluation set over previous challenge iterations. This results in attack audio generated with more than 100 different audio spoofing algorithms. The training and validation sets contain different samples generated from the same 20 speakers, however, the evaluation set does not distinguish between different speakers for their audio files. The total number of samples in each set are training: 22,800 deepfakes and 2,580 bonafide; validation: 22,296 deepfakes and 2,548 bonafide; and evaluation: 589,212 deepfakes and 22,617 bonafide.\n\n## VII. PROSODY DEEPFAKE DETECTOR\n\nWe develop a deepfake detection model based on the elements of prosodic analysis as defined in Section III-A to explore an alternative approach to deepfake detection systems. Our data pipeline is as follows: extract pitch prosodic features from speech samples, standardize the extracted feature with standard scaling, and then predict with our model, as shown in Figure 3. The six prosody elements we use are the average and standard deviation of the mean-F 0 of the window, jitter, shimmer, and the average and standard deviation of HNR of the window. We perform a hyper-parameter search to find the best parameters for extracting pitch from the audio, the optimal window size to calculate our features over, and the best model architecture.\n\n## A. Feature Extraction\n\nOur feature extraction process collects the common prosodic features which can be measured using speech analysis tools. We extract the prosodic parameters discussed in Section III using Parselmouth [60], a Python interface for the state-of-the-art acoustic analysis tool called Praat [61]. Figure 3 shows the feature extractor in the pipeline. For full equations on any of our measured features (e.g., HNR, jitter, and shimmers) refer to the Appendix.\n\n1) Measured Prosodic Features: We start by collecting the mean fundamental frequency (mean F 0 ) and standard deviation of F 0. To determine the pitch range (i.e., minimum allowed F 0 and the maximum allowed F 0 ) required for analysis, we refer to the standard values set to 75 Hz for min F 0 and 500 Hz for max F 0. We do not adjust these values per audio file and assume that all voiced speech in any given sample falls within this range.\n\nThe fundamental frequency sequence used to get the mean F 0 is a series of F 0 values sampled with respect to time. These F 0 values are shown in Figure 4 as the dots that make up the black lines. The F 0 sequences of human and deepfake speech are similar, but even for the same sentence and speaker they are not the same. Intonation is the changes to the F 0 sequence over the length of the audio sample. The differences in F 0 sequence are demonstrated in Figure 4 with the fake audio sample being shorter and words such as \"he\" where the organic audio has a dipping intonation versus peaking intonation in the fake. These distinctions demonstrate that synthetic audio generates pitch without perfectly mimicking the correct F 0 sequence.\n\nWe also collect measurements of jitter and shimmer for each sample. We focus on only collecting the values for local jitter and local shimmer as these are commonly used when determining voice quality. The local version of jitter and shimmer look exclusively at consecutive intervals or periods and determines the average difference between them.\n\nThe last feature extracted is the harmonicity (HNR). We collect both the mean value and standard deviation of the HNR throughout the audio sample. We assume a standard minimum 2) Data Scaling: After we extract the features, we preprocess the data by standardizing the data with Min-Max scaling. Standardizing the data ensures that no feature influences the model more than another strictly due to its magnitude. We standardize by scaling the values between 0 and 1 by taking the difference of the feature value from the minimum and diving that by the range of the feature values. Formally, given a data matrix X, we scale each feature column, x, such that for any x i in x,\n\n.\n\n(1)\n\n3) Praat Parameter Search: While the tools used in applied linguistics are good for diagnosing speech problems, the standard parameters used by the tools are not suited for deepfake detection because they assume an organic source. Some of the abnormalities that can exist in generated speech are ignored or dropped by these tools since they could not be naturally produced by humans. Due to this, we do not use the default parameters when calculating the F 0 sequences for the pitch objects. Instead we perform a parameter search to explore the feature space and determine the best combination of values to use in calculating the pitch.\n\nWe focus our search around four parameters: silence threshold, octave cost, octave jump cost, and voiced/unvoiced cost. These are four of the input parameters for determining F 0 and the parameters that have the closest relation to speech. The silence threshold gives the amplitude bound for what is determined as silence. The octave cost determines how much high-frequency values for F 0 are favored in the case of perfectly periodic signals, while the octave jump cost represents the amount of large frequency jumps that should not be ignored in the signal. The voiced/unvoiced cost figures out the sensitivity to transitions between voiced and unvoiced signals. For each of these features, we collect data using a time-series analysis by windowing the audio sample. When training our detector, we test window sizes at 50, 100, 200,and 500 ms non-overlapping window frames.\n\nTo determine general bounds for the parameters, we start our search by processing all of our training data using a grid search for each of the four parameters. We determine which parameter values between zero and one returned a large number of NaN values, signifying inappropriate values for one or more parameters. After determining the bounds, we ran a randomized search of values within those bounds and trained models using an LSTM architecture with three layers (LSTM-64 nodes, LSTM-32 nodes, and Dense-32 nodes). We trained 2,200 models for 200 epochs under an ADAM optimizer with a learning rate of 0.0001, and determined the best parameters optimized over the Equal Error Rate (EER). We choose to use EER to allow direct comparison between our system and existing detectors.\n\n$$x scaled i = x i -min(x) max(x) -min(x)$$\n\n## B. Final Architecture Selection & Training\n\nUsing the optimal value combination for feature extraction, we test five variations on the model architecture. We modify the number of layers and the number of nodes in each layer, but maintain a sigmoid output layer. For an outline of the five model architectures we test, please refer to the Appendix. The five models we consider are: We train each model for 200 epochs with an ADAM optimizer that has a learning rate of 0.0001. The performance of each model can be seen in Figure 5. Our best model (Model B) is optimized for performance by a hyperparameter search over the pitch calculation parameters and model architecture.\n\nWe then train our best model using the ASVspoof2021 training data. We perform a frame-level feature extraction where we give voiced frames the extracted prosodic feature values and unvoiced frames a zero value. Each sample within a batch is also zero padded with unvoiced frames to the longest sample length, which is the same methodology as the ASVspoof2021 baselines.\n\n## C. Final Model Evaluation\n\nTo determine the overall performance of our prosody model, we process the ASVspoof2021 deepfake evaluation dataset using the same data processing methodology we train with. To show our model's performance, we use the Equal Error Rate. The EER is defined as the model's probability threshold at which the false negatives are equal to the false positives. For this metric, an EER closer to zero is indicative of a stronger model. The ASVspoof2021 competition continues to use this metric despite its deprecation to have a common metric between all challenges.\n\nThe ASVspoof2021 challenge is an anonymous submission competition. The only required metric for reporting is EER. Submissions to the challenge are not required to publish documentation. This makes verifying and reproducing the top results infeasible. Because of the requirements of the competition, these models also do not report standard performance metrics such as precision, recall, or F 1 -score. The ASVspoof2021 challenge does not require adversarial testing on the submitted detectors and does not give robustness guarantees on their performance. The challenge does provide access to four baseline models which are used as benchmarks for the competitors. Using these baselines, we can recreate those models in order to test against a larger set of performance metrics for comparing results.\n\nAs previously stated, ASVspoof2021 provides four baseline models. Nautsch et al. [62] uses a Gaussian Mixture Model (GMM) to detect deepfakes. They implement an LFCC method and a CQCC model for Baseline-01 and Baseline-02, respectively. Baseline-03 by Wang et al. [63] constructs a Light Convolutional Network (LCNN) which operates on an MFCC. The best model is Baseline-04, a RawNet2 model [64]. The baseline models 01-04 achieve EERs of 25.56%, 25.25%, 23.48%, and 22.38%, respectively [25]. Overall, our 24.7% EER is directly in the middle of the four baselines.\n\nSeveral prior studies have demonstrated that single metrics, especially EER, do not give the complete representation of a model's performance and inherently hide the system's trade-offs when implemented [65], [66]. To better compare and understand our results, we train versions of the baseline models and calculate additional performance metrics for comparison such as accuracy, percision, recall, and F 1 -score. These metrics are provided in addition to the EER in Table I. We use the reported EER values from the competition to validate that our trained models perform the same as the ones used in the competition. As seen in Table I, our prosody-based model also performs as well as the baseline models in all of the other standard machine learning performance metrics.\n\n## VIII. ADVERSARIAL TESTING AND EXPLAINABILITY\n\nWhile the prosody model has a similar performance to other detetors, we explore some of the benefits that a prosodybased approach has over other detectors. Some limitations to standard deep learning models are their black-box nature that minimizes explainability and increases their susceptibilty to adversarial optmization attacks. In this section, we demonstrate the explainability of individual influential features on the model decisions and test our model under adversarial conditions.\n\n## A. Attention\n\nWe implement attention mechanisms to explain what features are most affecting the classification. Attention mechanisms, first proposed by Bahdanau et al. [67], seek so as to allow networks broader access to better infer long-term effects in a sequence. These attention mechanisms specifically provide hidden states of a sequence to all of the subsequent layers. One can use the attention vector to weigh the importance of each input sequence. Weng et al. [68] provide a thorough analysis of this attention mechanism and its variants. We integrate Self-Attention by Cheng et al. which compares the input sequence against itself to understand which part of the input sequence greatly influenced the prediction [69].\n\nIn our case, we can retrain our model with a Self-Attention layer after our input layer. For each sample in the development set, we compute the attention vector which tells us which time-slice affected the classification the most. This time-slice consists of the input prosodic features calculated in that TABLE I: Results of the training, validation, and evaluation experiments of our best model (Model B) on ASVspoof2021. Each experiment represents the classification of audio in the ASVspoof2021 deepfake track training, validation, and evaluation datasets respectively. Baselines 01-04 are the baseline models that were provided for comparison in the ASVspoof2021 challenge. Due to reporting rules of the competition, the standard performance metrics are not reported and the only metric available for each baseline is the EER. Due to this limitation, we report the measured performance metrics for each model after reproducing each baseline model. interval. Because we trained on prodosic features, we can isolate the differences in feature values that occur during the interval. We visualize the differences for the Jitter, Shimmer, and Mean F 0 in the most important time-slice, as calculated by the attention mechanism, between the bonafide and deepfake samples in Figure 6. We see that for Jitter and Shimmer, the bonafide samples on average have smaller values. For Mean F 0, the deepfake samples have smaller values.\n\nFinding 2. Unlike standard black-box machine learning models, we are able to use attention mechanisms to determine which prosodic elements influenced the detection decision. This adds a layer of explainability to our model that is missing from other detectors. (RQ2)\n\n## B. Adversarial Experiments\n\nWhile we have explored the general performance of the prosody model, we also need to test the robustness of the model. Previous attempts at audio deepfake detection do not consider the robustness to adversarial samples against their model. We use the best performing baseline model, RawNet2, to demonstrate the brittle nature of their results in the presense of an adaptive adversary.\n\nThe Iterative Least-Likely Method [26] takes the gradient of the loss function with respect to the input vector and successively applies perturbations in the direction of the gradient. Formally, given an input X with label y true, we can construct X adv that has the target label y tfoot_0 by iteratively applying to X adv 0 = X:\n\nwhere Clip constrains the perturbation to an L ∞ ball around X with radius ϵ, α is the magnitude of the perturbation, and ∇ X is the gradient of the loss function J with respect to the input.\n\nFormally, the perturbation, δ, required to turn a benign feature vector, x, with label, y 0, into an adversarial sample is defined as,\n\nwhere ϵ is the magnitude of the perturbation, ∇ x is the gradient of the loss function, L, with model parameters θ. FGSM Experiments: We use the pitch calculations as defined by Boersma [70]. The pitch calculations use a non-differentiable, path-finding algorithm. This inhibits any gradient-based attacks, and thus, our system cannot be successfully attacked by a gradient based adaptive adversary. Conversely, the baseline models are fully differentiable, thus we can create a simple gradient-based adaptive adversary for their models.\n\nTo create the adversarial sample, we let X i,j be the matrix of the audio file data with i representing the time domain and j representing the number of channels of the audio file. We choose ϵ to minimize the overall perturbance of the audio sample (Figure 7). We choose ϵ ∈ {0.001, 0.0015, 0.002, 0.0025, 0.005}, as X i,j ∈ [-1, 1]∀i, j. Thus, each ϵ is perturbing less than 0.5% of each data point. We set α to 0.001. Essentially, we are adding small amounts of white noise to the samples.\n\nWe sample 2800 audio samples from the ASVspoof2021 evaluation data set. Then, we craft adversarial samples for the Baseline-RawNet2 for each sample using the five different ϵs. Even with the largest amount of added white noise, we still 0.0010 0.0015 0.0020 0.0025 0.0030 0.0035 0.0040 0.0045 0.0050 2.5\n\n$$X adv N +1 = Clip X,ϵ {X adv N + αsign(∇ X J(X adv N, y t ))},$$\n\n$$η = ϵ • sign(∇ x L(θ, x, y)),$$\n\n## Iterations\n\nAverage Minimum Perturbations for Fig. 7: The average number of steps to successfully craft the adversarial sample with label y t for each ϵ. We see that the average minimum perturbations converges as we let ϵ get larger. This shows that our adversarial attacks are convergent. For ϵ = 0.001, it takes significantly more steps (average 22 steps), but for the larger ϵs, we see that it takes less than 10 steps to craft an adversarial sample. With ϵ = 0.005, the average needed perturbations is only 3 steps. maintain an average waveform amplitude distribution analysis signal to noise ratio (WADA-SNR) [71] > 41dB, which is considered excellent quality audio [72].\n\nWe calculate the accuracy of RawNet2 against the samples at each value of ϵ. For the smallest ϵ of 0.001, the algorithm took an average of 22 iterations to find an adversarial sample. This continues to decrease as we increase ϵ to 0.005, showing that our adversarial attack converges. We see in Figure 8 that RawNet2 decreases in accuracy to 0.7% at ϵ = 0.005, a 99.3% relative decrease in accuracy. These results call into question the robustness of previous ASVspoof results, and future competitions should directly incorporate adaptive adversaries as a means of more fully characterizing detector effectiveness. Finding 3. While other models which are susceptible to small changes in the sample (e.g., Gaussian noise), our models collects specific prosodic features from the audio which is not affected by such changes. This makes the approach of using prosody features for detection less susceptible to simple adaptive attacks. (RQ2)\n\n## C. Lend Us Your Ear\n\nWe strongly encourage the reader to visit our companion website. 2 The website contains two examples of deepfake audio and the targeted adversarial samples. The targeted audio samples have very little difference between the original deepfake and the adversarial sample that RawNet2 misclassifies. This As shown in Section VIII, there are several benefits over existing techniques when using a prosody feature detection approach. By preprocessing the input sample to extract the prosodic features, our model is resistent to optimization attacks on a sample. This requires the adversary to target the specific features that our model detects on (i.e., prosodic features), which as previously discussed in an ongoing field of research in both linguisitcs (e.g., understanding percieved prosody) and machine learning (e.g., properly injecting prosody into deepfakes). Additionally, by targeting the model at desired feature sets instead of allowing the model to freely train on input samples, we remove the ambiguity on the model's decision making. This is particularly important in cases where decisions have to be justified (e.g., removing content from social media or a business rejecting an incoming customer call) and is a necessary consideration for deploying these systems in industry.\n\n## B. Defense Robustness\n\nCurrent systems identify unnatural artifacts in deepfake audio files such as spectral noise and distortion. As synthetic audio advances, these generation problems will disappear and the quality of the audio file will also improve. Unlike these systems, our approach is not dependent on artifacts in the signal, but rather looks exclusively at the linguistic features of speech itself. Research within the applied linguistics community\n\n## C. Prosodic Edge Cases\n\nWhile we look at the high-level trends of prosody in human speech, there are human voices that do not fall within the norm for prosodic features. Some people naturally speak in a monotone voice or exhibit speech pathologies, making their prosodic features differ from other humans and sometimes come across as fake. Since our model looks solely at the prosody of a speaker, these unique individuals could be misclassified as synthetic.\n\n## D. Data Limitations\n\nOne of the largest limitations to the performance of our system is the type of data that is being produced for current datasets. Short two to five-second audio clips do not encapsulate how most people would experience a deepfake attack and limits the amount of prosodic information that is available. Weaponized deepfakes would resemble more of a conversation or a descriptive command rather than bursts of quick declarations. When more practical, lengthy deepfakes are compiled into datasets, our system will be able to pull more prosodic data from samples and better designate the linguistic trends in each sample. However, we use the ASVspoof2021 data for our experiments since that is the standard currently used by the community. X. CONCLUSION Advancements in audio deepfakes are making them increasingly indifferentiable and a growing concerns for not only the security community but also to the broader society. Current detection approaches are likely to become obsolete as deepfakes continue to become more realistic, which reinforces the need for the community to consider many different approaches to deepfake detection. In this paper, we evaluate the use of prosodic acoustic analysis as a means of detecting deepfakes and demonstrate that this approach contains comparable performance (e.g., 93% accuracy and 24.7% EER) to the currently baselines for deepfake detection. Additionally, we discuss the benefits to our prosody approach over the current baselines with the application of attention mechanisms for explainability and implementing the first adaptive adversary using an L ∞ norm test to show the baseline's susceptibility to such attacks compared to our model. These results signify the beginning of exploration into alternative options for deepfake detection and a long-term effort to consider the intersection of linguistics and speech to protect against deepfake threats.\n\n## References\n\n1. Mills, Bunnell, Patel (2014) \"Towards Personalized Speech Synthesis for Augmentative and Alternative Communication\" *Augmentative and Alternative Communication*\n\n2. Leviathan, Matias (2018) \"Google duplex: an ai system for accomplishing real-world tasks over the phone\"\n\n3. Flitter, Cowley (2023) \"Voice Deepfakes Are Coming for Your Bank Balance\" *The New York Times*\n\n4. Hernandez (2023) \"That panicky call from a relative? It could be a thief using a voice clone, FTC warns\"\n\n5. Satariano, Mozur (2023) \"The People Onscreen Are Fake. The Disinformation Is Real\" *The New York Times*\n\n6. Verma, Oremus (2023) \"AI voice clones mimic politicians and celebrities, reshaping reality\"\n\n7. Albadawy, Lyu, Farid (2019) \"Detecting AI-Synthesized Speech Using Bispectral Analysis\"\n\n8. Pan, Zhang, Lyu (2012) \"Detecting splicing in digital audios using local noise level estimation\"\n\n9. Balamurali, Lin, Lui et al. (2019) \"Toward robust audio spoofing detection: A detailed comparison of traditional and learned features\" *IEEE Access*\n\n10. Warren, Tucker, Crowder et al. \"Better Be Computer or I'm Dumb\": A Large-Scale Evaluation of Humans as Audio Deepfake Detectors\"\n\n11. Sun, Zhang, Weiss et al. (2020) \"Generating Diverse and Natural Text-to-Speech Samples Using a Quantized Fine-Grained VAE and Autoregressive Prosody Prior\"\n\n12. Sun, Zhang, Weiss et al. (2020) \"Fully-Hierarchical Fine-Grained Prosody Modeling For Interpretable Speech Synthesis\"\n\n13. Skerry-Ryan, Battenberg, Xiao et al. (2018) \"Towards End-to-End Prosody Transfer for Expressive Speech Synthesis with Tacotron\"\n\n14. Karlapati, Abbas, Hodari et al. (2021) \"Prosodic Representation Learning and Contextual Sampling for Neural Text-to-Speech\"\n\n15. Hodari, Moinet, Karlapati et al. (2021) \"Camp: A Two-Stage Approach to Modelling Prosody in Context\"\n\n16. Chen, Deng, Wang et al. (2021) \"Speech Bert Embedding for Improving Prosody in Neural TTS\" *IEEE ICASSP*\n\n17. Zhang, Qin, Lee \"Learning Syllable-Level Discrete Prosodic Representation for Expressive Speech Generation\"\n\n18. Yang, Yang, Wu et al. \"Exploiting Deep Sentential Context for Expressive End-to-End Speech Synthesis\"\n\n19. Fu, Tao, Wen et al. (2021) \"Bi-Level Style and Prosody Decoupling Modeling for Personalized End-to-End Speech Synthesis\"\n\n20. Hono, Tsuboi, Sawada et al. \"Hierarchical Multi-Grained Generative Model for Expressive Speech Synthesis\"\n\n21. Łańcucki (2021) \"Fastpitch: Parallel Text-to-Speech with Pitch Prediction\"\n\n22. Aggarwal, Cotescu, Prateek et al. (2020) \"Using Vaes and Normalizing Flows for One-Shot Text-To-Speech Synthesis of Expressive Speech\" *IEEE ICASSP*\n\n23. Valle, Shih, Prenger et al. (2020) \"Flowtron: an Autoregressive Flow-based Generative Network for Text-to-Speech Synthesis\"\n\n24. Peppe (2009) \"Why is prosody in speech-language pathology so difficult?\" *International Journal of Speech-Language Pathology*\n\n25. Yamagishi, Wang, Todisco et al. (2021) \"ASVspoof 2021: accelerating progress in spoofed and deepfake speech detection\" *arXiv, Tech. Rep*\n\n26. Kurakin, Goodfellow, Bengio (2016) \"Adversarial machine learning at scale\"\n\n27. Van Den Oord, Dieleman, Zen et al. (2016) \"Wavenet: A generative model for raw audio\"\n\n28. Van Den Oord, Li, Babuschkin et al. (2018) \"Parallel wavenet: Fast high-fidelity speech synthesis\"\n\n29. Baird, Jørgensen, Parada-Cabaleiro et al. (2018) \"The perception of vocal traits in synthesized voices: Age, gender, and human likeness\" *J. Audio Eng. Soc*\n\n30. Kim, Gil Lee, Song et al. (2019) \"Flowavenet : A generative flow for raw audio\"\n\n31. Lorenzo-Trueba, Fang, Wang et al. (2018) \"Can we steal your vocal identity from the internet?: Initial investigation of cloning obama's voice using gan, wavenet and lowquality found data\"\n\n32. Saunders (2019) \"Detecting deep fakes with mice : Machines vs biology\"\n\n33. Wu, Kinnunen, Evans et al. (2015) \"ASVspoof 2015: the first automatic speaker verification spoofing and countermeasures challenge\"\n\n34. Todisco, Wang, Vestman et al. (2019) \"ASVspoof 2019: Future Horizons in Spoofed and Fake Audio Detection\"\n\n35. Yi, Fu, Tao et al. (2022) \"Add 2022: the first audio deep synthesis detection challenge\"\n\n36. Wang, Juefei-Xu, Huang et al. (2020) \"Deepsonar: Towards effective and robust detection of ai-synthesized fake voices\"\n\n37. Wang, Yamagishi (2021) \"Investigating self-supervised front ends for speech spoofing countermeasures\" *arXiv*\n\n38. Wijethunga, Matheesha, Noman et al. (2020) \"Deepfake Audio Detection: A Deep Learning Based Solution for Group Conversations\"\n\n39. Jiang, Zhu, Peng et al. (2020) \"Self-supervised spoofing audio detection scheme\"\n\n40. Subramani, Rao (2020) \"Learning efficient representations for fake speech detection\"\n\n41. Zhang, Yi, Zhao (2021) \"Fake speech detection using residual network with transformer encoder\"\n\n42. Khalid, Kim, Tariq et al. (2021) \"Evaluation of an audiovideo multimodal deepfake dataset using unimodal and multimodal detectors\"\n\n43. Tak, Todisco, Wang et al. (2022) \"Automatic speaker verification spoofing and deepfake detection using wav2vec 2.0 and data augmentation\"\n\n44. Martín-Doñas, Álvarez (2022) \"The vicomtech audio deepfake detection system based on wav2vec2 for the 2022 add challenge\"\n\n45. Müller, Pizzi, Williams (2022) \"Human Perception of Audio Deepfakes\"\n\n46. Mai, Bray, Davies et al. (2023) \"Warning: Humans Cannot Reliably Detect Speech Deepfakes\" *PLOS ONE*\n\n47. Wenger, Bronckers, Cianfarani et al. \"Hello, It's Me\": Deep Learning-based Speech Synthesis Attacks in the Real World\"\n\n48. Mukhopadhyay, Shirvanian, Saxena (2015) \"All Your Voices Are Belong to Us: Stealing Voices to Fool Humans and Machines\"\n\n49. Jongman (2013) \"Acoustic phonetics\"\n\n50. Diehl, Watson, Bennetto et al. (2009) \"An acoustic analysis of prosody in high-functioning autism\" *Applied Psycholinguistics*\n\n51. Teixeira, Oliveira, Lopes (2013) \"Vocal Acoustic Analysis -Jitter, Shimmer and HNR Parameters\" *Procedia Technology*\n\n52. Traunmüller, Eriksson (1995) \"The frequency range of the voice fundamental in the speech of male and female adults\"\n\n53. Susana Finger, Cielo, Schwarz (2009) \"Acoustic vocal measures in women without voice complaints and with normal larynxes\" *Brazilian Journal of Otorhinolaryngology*\n\n54. Murphy, Akande (2005) \"Cepstrum-based estimation of the harmonics-to-noise ratio for synthesized and human voice signals\"\n\n55. Kreiman, Gerratt, Gabelman (2002) \"Jitter, shimmer, and noise in pathological voice quality perception\" *The Journal of the Acoustical Society of America*\n\n56. Wang, Skerry-Ryan, Stanton et al. (2017) \"Tacotron: Towards end-to-end speech synthesis\"\n\n57. Stupp (2019) \"Fraudsters Used AI to Mimic CEO's Voice in Unusual Crime\" *Wall Street Journal*\n\n58. Ravtz (2013) \"I'm the original voice of Siri\" *CNN*\n\n59. Allyn (2022) \"Deepfake video of zelenskyy could be 'tip of the iceberg' in info war, experts warn\"\n\n60. Jadoul, Thompson, De Boer (2018) \"Introducing Parselmouth: A Python interface to Praat\" *Journal of Phonetics*\n\n61. Boersma, Weenink \"Praat: doing phonetics by computer [Computer program]\"\n\n62. Nautsch, Wang, Evans et al. (2021) \"Asvspoof 2019: spoofing countermeasures for the detection of synthesized, converted and replayed speech\" *IEEE Transactions on Biometrics, Behavior, and Identity Science*\n\n63. Wang, Yamagishi (2021) \"A comparative study on recent neural spoofing countermeasures for synthetic speech detection\" *arXiv*\n\n64. Tak, Patino, Todisco et al. (2021) \"End-to-end anti-spoofing with rawnet2\" *IEEE ICASSP*\n\n65. Layton, Tucker, Olszewski et al. \"SoK: The Good, The Bad, and The Unbalanced: Measuring Structural Limitations of Deepfake Datasets\"\n\n66. Sugrim, Liu, Mclean et al. (2019) \"Robust performance metrics for authentication systems\"\n\n67. Bahdanau, Cho, Bengio (2014) \"Neural machine translation by jointly learning to align and translate\"\n\n68. Weng (2018)\n\n69. Cheng, Dong, Lapata (2016) \"Long short-term memory-networks for machine reading\" *arXiv*\n\n70. Boersma (1993) \"Accurate short-term analysis of the fundamental frequency and the harmonics-to-noise ratio of a sampled sound\"\n\n71. Kim, Stern (2008) \"Robust signal-to-noise ratio estimation based on waveform amplitude distribution analysis\"\n\n72. Solutions \"What is signal to noise ratio and how to calculate it\"\n\n73. Nooteboom (1997) \"The prosody of speech: melody and rhythm\"\n\n74. João, Teixeira, Lopes (2013) \"Vocal Acoustic Analysis -Jitter, Shimmer and HNR Parameters | Elsevier Enhanced Reader\" *Science Direct*<|endoftext|>" |
| }, |
| "test": { |
| "total_tokens": 80635024, |
| "example": "# PRIVACY-PRESERVING SECURITY INFERENCE TOWARDS CLOUD-EDGE COLLABORATIVE USING DIFFERENTIAL PRIVACY *\n\nYulong Wang, Xingshu Chen, Qixu Wang\n\n## Abstract\n\nCloud-edge collaborative inference approach splits deep neural networks (DNNs) into two parts that run collaboratively on resource-constrained edge devices and cloud servers, aiming at minimizing inference latency and protecting data privacy. However, even if the raw input data from edge devices is not directly exposed to the cloud, state-of-the-art attacks targeting collaborative inference are still able to reconstruct the raw private data from the intermediate outputs of the exposed local models, introducing serious privacy risks. In this paper, a secure privacy inference framework for cloud-edge collaboration is proposed, termed CIS, which supports adaptively partitioning the network according to the dynamically changing network bandwidth and fully releases the computational power of edge devices. To mitigate the influence introduced by private perturbation, CIS provides a way to achieve differential privacy protection by adding refined noise to the intermediate layer feature maps offloaded to the cloud. Meanwhile, with a given total privacy budget, the budget is reasonably allocated by the size of the feature graph rank generated by different convolution filters, which makes the inference in the cloud robust to the perturbed data, thus effectively trade-off the conflicting problem between privacy and availability. Finally, we construct a real cloud-edge collaborative inference computing scenario to verify the effectiveness of inference latency and model partitioning on resource-constrained edge devices. Furthermore, the state-of-the-art cloud-edge collaborative reconstruction attack is used to evaluate the practical availability of the end-to-end privacy protection mechanism provided by CIS.\n\n## 1 Introduction\n\nWith the advent of the Internet of Everything and the fifth-generation communication era, the decentralized and fragmented data generated by network edge devices is growing exponentially, and the demand for data transmission bandwidth is increasing [1]. Meanwhile, new scenarios such as industrial Internet and autonomous driving have created new demands for real-time data processing and security and privacy that traditional centralized cloud computing architectures can no longer effectively address [2,3]. To address these challenges, edge computing has emerged with the advantage of being closer to the data side, sinking part of the computing and storage tasks from the center to the edge and breaking the bottleneck of the traditional network schemes through cloud-edge collaboration [4].\n\nNevertheless, deploying some computationally intensive tasks in resource-constrained edge devices still faces significant computational latency and energy consumption. Kang et al. designed a lightweight cloud-edge collaborative inference framework, Neurosurgeon [5], to fine-grainedly partition the DNN network for the variation of data size and computation in each layer of the neural network. As shown in Figure 1 for the AlexNet network as an example, the latency and output data size of each layer exhibit a large heterogeneity, which means that layers with higher latency may not necessarily output a larger amount of data. Based on this observation, Neurosurgeon reduces the total end-to-end execution latency as well as the energy consumption of edge devices by dividing the DNN into two parts, and offloading the computationally intensive part to the server at a lower transmission cost (only the intermediate results of the layer where the partition point is located need to be transmitted). Subsequently, more and more research on collaborative inference based on model partitioning [6,7,8] have been proposed to further improve the performance and efficiency of such approaches, while ignoring the potential security and privacy issues.\n\nThe work of He et al. demonstrates the feasibility of data privacy attacks against cloud-edge collaborative inference systems [9], even if the cloud only receives intermediate results offloaded by edge devices instead of the original data. The untrustworthy cloud can still easily and accurately recover sensitive original data from intermediate results by means of white-box and black-box attacks. In addition, the overfitting of the model also provides a side channel of data privacy leakage for cloud-edge collaborative inference [10]. The membership inference attack proposed by Shokri et al. [11] can be exploited by malicious edge nodes to infer whether a certain data exists in the training set by querying the results of the black-box inference service.\n\nTo address these challenges, cryptography-based protocols [12,13] have also been proposed to protect data privacy in the inference phase. However, complex and frequent cryptographic computations introduce significant computational and communication overheads while preserving privacy, which are infeasible to deploy on devices with constrained computational resources such as IoT devices. In addition to this, some scholars have leveraged differential privacy as a lightweight privacy-preserving strategy to achieve privacy preservation in machine learning by adding quantifiable noise to the model or output results through provable mechanisms. Among them, Mireshghallah et al. proposed the Cloak framework [14], which maximizes privacy by employing an optimized Laplace distribution for obfuscation before sending the privacy data to the cloud, which maximizes privacy by minimizing the mutual information between the original input and the data sent to the cloud. However, direct perturbation of the original image can significantly damage the availability of the image, which in turn degrades the inference accuracy severely. Wang et al. [15] proposed to add fine-calibrated noise to the intermediate output to achieve a differential privacy-preserving framework, which designs a noise training method to mitigate the impact of noise perturbation on inference accuracy. Similar to this work, an end-to-end collaborative inference privacy-preserving framework, Shredder [16], proposed by Mireshghallah et al. significantly reduces the amount of information in the communication data by learning to add noise to the distribution without changing the structure and weights of the pre-trained network, while being able to maintain inference accuracy. In summary, the above studies do not reasonably combine cloud-edge collaborative inference with privacy-preserving mechanisms to effectively trade-off the conflicting problems between computational latency, privacy and availability. Therefore, in this paper, we propose a novel cloud-edge collaborative security inference framework, CIS (Collaborative Inference Shield), which aims to maximize the privacy strength with minimal impact on DNN accuracy, while being able to effectively tradeoff between usability and privacy of cloud-edge collaborative inference. The main contributions are summarized as follows.\n\n• A secure privacy inference framework CIS for cloud-edge collaboration is proposed. CIS supports adaptively partitioning the network for collaborative inference based on dynamically changing network bandwidth, aiming to fully release the computational power of edge devices. Meanwhile, the selection of a partition point fully considers the amount of information in offloading intermediate data and can effectively trade-off the total inference latency with the privacy of sensitive data at the edge.\n\n• CIS provides a way to achieve differential privacy protection by adding refined noise to the intermediate layer feature maps offloaded to the cloud. Meanwhile, with a given total privacy budget, the budget is reasonably allocated by the size of the feature graph rank generated by different convolution filters, which makes the inference in the cloud robust to the perturbed data, thus effectively trade-off the conflicting problem between privacy and availability.\n\n• We construct realistic cloud-edge collaborative inference computing scenarios to evaluate the effectiveness of inference latency and model partitioning on resource-constrained edge devices. Also, state-of-the-art cloud-edge collaborative reconstruction attacks from internal and external adversaries are used to evaluate the practical usability of the end-to-end privacy protection mechanisms provided by CIS. 2 Preliminaries and Related Works\n\n$$I n p u t C o n v M a x C o n v M a x C o n v C o n v C o n v M a x F l a t t$$\n\n## 2.1 Differential privacy\n\nDifferential Privacy (DP) [17], proposed by Dwork in 2006 as an alternative reliable privacy model, has been considered as a promising privacy-preserving strategy in machine learning in recent years.The definition of DP is based on a rigorous theoretical foundation that Privacy preservation in machine learning is achieved by adding quantifiable noise to the model or output results through provable mechanisms, and an elegant trade-off between privacy strength and usability can be made by adjusting the privacy budget [18].\n\nDifferential privacy guarantees that queries and accesses of any random algorithm on two adjacent datasets have similar output distributions, and an attacker cannot infer private information about an individual in the results of any query. A formal definition of differential privacy is shown below:\n\nThen the random mechanism M is claimed to satisfy -differential privacy, where the parameter is the privacypreserving budget, and the smaller the, the higher the privacy-preserving strength.\n\nBefore introducing the specific privacy protection mechanism, we give the definition of global sensitivity: Definition 2. (global sensitivity [19]) With a query function f : N |X | → R k for any two adjacent data sets D and D, the global sensitivity is defined as\n\nwhere • is the distance metric, usually l 1 and l 2 norm.\n\nBased on the above mentioned definitions of differential privacy and global sensitivity, The Laplace Mechanism implements the -differential privacy-preserving mechanism by adding random noise obeying the Laplace distribution (Lap\n\nto the query result, as defined below.\n\nDefinition 3. (Laplace mechanism [19] Given any query function f : N |X | → R k with global l 1 sensitivity s, the Laplace mechanism is defined as\n\nwhere Y i is drawn from Lap s of independent identically distributed random variables, and the Laplace mechanism satisfies -differential privacy.\n\nIn addition to this, differential privacy has a very important property, namely the post-processing property [19]. After any processing of the output of a randomized algorithm satisfying differential privacy, the post-processing property still guarantees privacy protection of the same privacy strength. This property enables the application and generalization of differential privacy to complex machine learning algorithms with the following properties: In fact, a complex computational task often does not correspond to a complex privacy-preserving mechanism, but rather the privacy budget is rationally allocated to the various steps of the complex task. The combination theorem of differential privacy gives a solution in this case for computing the privacy-preserving strength and performance of the entire complex computational task. Theorem 1. (Parallel combination theorem [20]) Suppose that given a data set D with k mutually disjoint subset divisions\n\n$$s (f, • ) = max d(D,D )=1 f (D) -f (D ) (2$$\n\n$$)$$\n\n$$(x | b) = 1 2b exp -|x| b )$$\n\n$$M L ((x, f (•), )) = f (x) + (Y 1, Y 2, • • •, Y k )(3)$$\n\n$${D 1, D 2, • • •, D k }, the privacy mechanism M i : N |X | → R i is performed separately for each subset, satisfying i -differential privacy for any i ∈ [k]. Then the mechanism M [k] = {M 1 (D 1 ), M 2 (D 2 ), • • •, M k (D k )} : N |X | → k i=1 R i is satisfying max i -difference Privacy.$$\n\n## 2.2 Cloud-edge collaborative inference and privacy enhancement\n\nDue to the constraints of limited computing resources and energy consumption of edge devices, edge devices need to make decisions to offload a portion of computing tasks to the cloud to collaboratively complete tasks [21]. The cloud-edge collaborative inference research proposed in recent years research [5,22,23,24,7] overcome the significant communication overhead and potential privacy leakage by cutting the model efficiently, where the former part of the computation remains on the edge device and the resulting intermediate computation results are offloaded to the cloud to complete the remaining computational tasks. Further, Hu et al [8] propose the model partitioning method for directed acyclic graph (DAG) topological DNN models, but it is inefficient for extensive DNN model partitioning. To address the shortcomings of offline partitioning methods, [24,25] proposed an adaptive online partitioning method to achieve better results by adjusting the partitioning strategy in real time. [26,27] studied the energy consumption issue under collaborative inference based on model partitioning method to achieve the lowest energy consumption while guaranteeing the delay requirement.\n\nHowever, the work of Zecheng He et al. [9,28] shows that an untrusted cloud can still easily and accurately recover sensitive data from intermediate values in a limited attack background, without even accessing the edge model. Accordingly, many scholars have started to conduct a lot of research on security privacy protection for collaborative inference at the cloud edge. Accordingly, many scholars have started to conduct a lot of research on security privacy protection for collaborative inference at the cloud edge. The most representative studies are adding quantifiable noise satisfying differential privacy to the intermediate output values to achieve data privacy protection for edge devices [16,29,30]. However, these perturbations eventually lead to degradation of model inference accuracy, and the balance between good privacy and usability still faces a great challenge.\n\n## 3 Problem Statement\n\nThis section analyzes and elaborates on the system model and the threat model faced by the CIS framework, and specifies the design goals of the solution in this section.\n\n## 3.1 System Model\n\nIn this work, we consider how to tackle the conflicting problems between collaborative inference performance, privacy and availability in the scenario of cloud-side collaborative deep learning inference oriented. As shown in the left side of Figure. 2, the system model of CIS proposed in this work consists of three main components: Edge device layer E, Cloud layer C, and Transmission layer T. Initially, in the collaborative inference architecture, the original n-layer The selection of the DNN network partition point needs to consider many different factors to determine the optimal strategy. The most important performance metric is the overall inference delay, which mainly consists of the inference computation time of the edge devices, the inference computation time on the cloud, and the network transmission time. The CIS system model proposed in this work will consider the variation of network communication quality in real scenarios, the amount of computation in different layers, and the size of data output from different layers to select the optimal model cutting position to maximize the overall system performance. In addition, another issue that needs to be addressed in the CIS system model is the privacy of the data.\n\n$$DNN network f Θ = {f θ1 • f θ2 • • • • f θn } is cut into two parts at layer m: f device Θ l = {f θ1 • f θ1 • • • • f θm } and f cloud Θr = {f θm+1 • f θm+2 • • • • f θn },$$\n\n## 3.2 Threat Model\n\nAs illustrated in the right half of We assume that the E is trustworthy and its will not disclose any information to other computing entities voluntarily. However, the C is an honest-but-curious entity, i.e., although C is curious about the private data of the edge devices, it will strictly follow the predefined computation protocols and will not interfere with the whole collaborative inference computation process. Similarly, a malicious edge node M adheres strictly to the predefined computation protocols and is more restrictive, and does not have any prior knowledge except through legitimate query requests. However, these advanced adversaries from external and internal entities can still perform other computations to infer or reconstruct the input data privacy of the edge device [9]. Specifically, the model of internal and external threats to CIS and the associated assumptions are as follows:\n\n• White-box Reconstruction Attack, WRA: It is assumed that C does not have any a priori information about the original privacy data x except for the acquisition of the intermediate output v of the partition layer uploaded by E. In addition, the cloud-edge collaborative inference service needs to share the same deep inference network, so it is also assumed that C has information about the network structure and parameters f Θ of the model. The white-box reconstruction attack can be formally defined as\n\n• Black-box Inverse-Network attack, BINA: Assume that M does not have any a priori information about the original private data x, as well as no a priori information about the network and parameters f Θ of the model, other than the ability to use the cloud-edge collaborative inference service through legitimate query requests normally. However, the intermediate data uploaded to the edge devices are available via bypass. the BINA attack can be formally defined as\n\nby training an inverse model f -1 Θ l to reconstruct the input data from the intermediate results.\n\n$$x = W RA f device Θ l (x), f Θ.$$\n\n$$x = f -1 Θ l f device Θ l (x)$$\n\n## 3.3 Design Objectives\n\nAccording to the system model and threat model proposed above, the goal of CIS is to design and implement a cloud-edge collaborative inference framework with high privacy, high precision and low latency, which can resist advanced threats from internal and external. Combined with security requirements, the specific design goals of CIS are as follows:\n\n(1) Inference delay: CIS supports adaptively cutting the network to achieve collaborative inference according to the dynamically changing network bandwidth in order to fully utilize the computing power of edge devices and minimize the total delay of cloud-side collaborative inference.\n\n(2) Privacy: CIS supports adding refined noise based on differential privacy to effectively resist advanced black-box and white-box reconstruction attacks, and the impact of privacy protection on model accuracy is within a reasonable range.\n\n(3) Usability: The protection mechanism and model partitioning mechanism of CIS can be easily applied to existing deep networks without any structural and parametric modification to the network model.\n\n## 4 Proposed Method 4.1 Cloud-Edge collaborative inference acceleration based on model partitioning\n\nTo formally define the model partition problem, the original DNN model can be transformed into a directed linked list L = {V, E}, where V = { 1, 2, • • •, n } denotes the definition of each layer of the DNN model as the set of vertices in L, E denotes the set of dependencies between layers, and i, j ∈ E denotes that the computed output data of i layer will be transferred to j layer as input. On the basis of the formal definition of DNN as a directed linked list L, we will give some other important definitions. Definition 4. (Model partitioning problem) Given a n-layer DNN model with a directed chain list L = {V, E}, the model partitioning problem can be defined as taking some vertex m in V as the partition point, and the edge m, m+1 is cut and L is split into two parts, where L edge = { 1, 2, • • •, m } denotes the layer that will perform the computation at the edge device, L cloud = { m+1, m+2, • • •, n } denotes the layer that will be offloaded to the cloud to perform the computation.\n\nThe latency of cloud-edge collaborative inference mainly includes computation latency and transmission latency, both of which can be obtained by monitoring computation and network resources respectively in the offline configuration stage, and subsequently computed by analyzing the fixed model layer by layer, as shown in the upper left part of Figure. \n3. The transmission latency includes the latency of uploading data T t up and the latency of returning results T t down. The latency of transmitting data from the edge devices to the cloud server through the wireless network depends on the data rate of the network R (t), which can be calculated by the Shannon-Hartley theorem [31] as follows.\n\nwhere B w and g (t) denote the bandwidth and flat fading channel gain at the instantaneous t moment, respectively; P (t) denotes the transmission power of the edge device, σ 2 denotes the noise power of the edge device, and I (t) denotes the inter-area interference power.\n\nThus, the specific definition of transmission latency for cloud-edge collaborative inference can be expressed as follows: Definition 5. (Transmission Latency) Given a n-layer DNN model with a directed linked list L = {V, E}, a partition point of m with the output size of D m, and D r being the data size of the inference result. The transmission inference of the cloud-edge collaboration is defined as follows:\n\nAnother important percentage of inference latency is the computation latency of each layer of the DNN. Since the network structure and the number of parameters of the model during DNN inference are frozen, and the input size of the edge devices is also fixed in size. Therefore, we refer to the method mentioned in the literature [32] to estimate the computational latency of each layer by counting the data volume of floating-point operations per second (FLOPs) of different types of layers in DNN. The amount of FLOPs computed for the convolutional layer F conv and the fully connected layer F f ully can be expressed as:\n\nwhere H, W and C in denote the height, width and number of channels of the input feature map, respectively. K denotes the size of the convolution kernel, and C out denotes the number of channels of the output convolution layer. I and O denote the input and output dimensions of the fully connected layer, respectively. Note that the activation layer is assumed to be a Rectified linear unit ReLU, which has negligible execution time compared to the dot product computation of the convolution and fully connected layers. Definition 6. (Computation Latency ) Given the i th layer of a DNN model, according to the type of the layer (convolutional layer or fully-connected layer), its execution delay on edge devices and cloud servers can be expressed as:\n\nwhere P edge and P cloud denote the floating-point computing power of the edge device or cloud server, respectively, which can be obtained from the CPU or GPU specifications.\n\nBased on the above definitions and analysis, CIS can obtain statistics on edge computation latency, transmission latency and cloud computation latency for each layer in the offline configuration phase:\n\nAs shown in the lower left part of 3, two virtual vertices e and c are constructed to represent the edge layer and the cloud service layer, respectively. The above offline statistical latency information is combined with the DNN model directed chain table L (defined in Eq.4) and converted into a directed acyclic graph G = {V, E }, where E = E ∪ { e, i, i, c } n i=1. Towards the collaborative inference scenario, the optimal segmentation point selection for DNN networks needs to satisfy the minimization of the total inference delay, defined as follows: Definition 7. (Total Inference Latency ) Given a DNN model DAG graph G = {V, E } with the partition point m for n layers, then the total inference latency for cloud-edge collaboration inference is defined as follows:\n\nwhere t e i and t c j denote the computational latency of the corresponding layers on the edge devices and cloud servers, respectively.\n\nComprehensive analysis of the above, the model partition for collaborative inference in CIS is shown in algorithm 1. First, by monitoring the network transmission rate and analyzing the fixed properties of the model in the offline configuration phase, the computational delay of each layer in the edge devices and cloud servers, respectively, and the transmission delay of different layers can be predicted in advance (lines 1-3 of the algorithm 1). In line 4 of the algorithm 1, the original DNN network can be converted into a weighted directed acyclic graph G = {V, E } based on these statistical information. Finally, each layer is considered as a partition layer to count the total inference delay separately, and the layer with the minimum total inference delay is selected as the optimal splitting layer (lines 6-12 of the algorithm 1), which will be deployed in the cloud-edge collaborative inference environment respectively. It is worth noting that the dynamically changing network bandwidth affects the selection of optimal segmentation points. Therefore, CIS will constantly monitor the changes of network resources and can implement adaptive partitioning of DNN models to maximize the inference performance of cloud-edge collaboration.\n\nAlgorithm 1 Model partition algorithm for cloud-edge collaborative inference.\n\nif T total < T best then 10:\n\nend if 13: end for 14: return T best, best\n\n$$R (t) = B w log 2 1 + P (t) g (t) σ 2 + I (t)(4)$$\n\n$$T t = T t up + T t down = D m R (t) + D r R (t)(5)$$\n\n$$F conv = 2HW C in K 2 + 1 C out (6) F f ully = (I • O + O • (I -1)) = (2I -1) O(7)$$\n\n$$t e i = (F conv |F f ully ) /P edge (8) t c i = (F conv |F f ully ) /P cloud(9)$$\n\n$$T c edge = {t e 1, t e 2, • • •, t e n }, T t = {t t 1, t t 2, • • •, t t n },and$$\n\n$$T c cloud = {t c 1, t c 2, • • •, t c n }.$$\n\n$$T total = T t ( m ) + i∈Ledge t e i + j ∈L cloud t c j (10$$\n\n$$)$$\n\n$$Input: f Θ = {f θ1 • f θ2 • • • • f θn }, n-layer DNN network; D = {D 1, D 2, • • •, D n },$$\n\n$$T c edge = {t e 1, t e 2, • • •, t e n } ← f c edge (f Θ, P edge ) 2: T c cloud = {t c 1, t c 2, • • •, t c n } ← f c cloud (f Θ, P cloud ) 3: T t = {t t 1, t t 2, • • •, t t n } ← f t (D, R (t)) 4: G = {V, E } ← DAG f Θ, T c edge, T c cloud, T t 5: T best = +∞ 6: for i = 1 to n do 7: L edge = { 1, 2 • • • i }, L cloud = { i+1, i+2 • • • n } ← Cut (G, best ) 8: T total = T t ( i ) + i∈Ledge t e i + j ∈L cloud t c j 9:$$\n\n$$T best = T total 11: best ← i 12:$$\n\n## 4.2 A privacy-enhancing mechanism for cloud-edge collaborative inference\n\nAs described earlier in the threat model, CIS employs a cloud-edge collaborative inference schema to keep sensitive data from edge devices out of the local area, mitigating privacy security issues to some extent. However, it still faces white-box and white-black reconstruction attacks by advanced internal and external adversaries through the intermediate layer output results and legitimate queries. Therefore, as shown in the right part of Figure. 3, we propose a privacyenhancing mechanism for cloud-edge collaboration, Collaborative-DP, which injects refined Laplace noise satisfying -DP when the edge devices upload the intermediate output of the partition layer, thus satisfying the privacy-preserving enhancement of cloud-edge collaborative inference and minimize the degradation of inference accuracy.\n\nFirst of all, as shown in the Laplace mechanism satisfying -DP given by the definition 3, the added noise needs to be sampled from the distribution Y ∼ Lap s, where is the privacy budget and s is the global sensitivity.\n\nHowever, in the cloud-edge collaborative inference scenario, the global sensitivity of the partition layer m is difficult to estimate without any a priori bound. Overly conservative estimation of s will add too much noise to the output representation and will reduce the accuracy of subsequent inference in the cloud. Similar to the related work [33,15], Collaborative-DP employs an norm clipping of the uploaded intermediate results to a fixed bound as a way to estimate the global sensitivity. Specifically, for sensitive input x from any edge device, an infinite norm is applied to clip the intermediate output result v m = f device Θ l (x) of the partition layer m separately for each channel:\n\n...... Given a total privacy budget, it is crucial to further refine the allocation of privacy budget and generation noise, which has a performance impact on inference accuracy. Inspired by the work of Lin et al. [34] that the average rank of feature maps generated by a single filter is always the same, regardless of the number of image batches received by the DNN. Consequently, a small batch of input images (g ≈ 500) can be utilized to accurately estimate the expectation of the feature map rank. The high rank of the feature map (i.e., the defined intermediate layer output v m ) reflects the magnitude of the amount of information extracted by the different convolution filters in the current partition layer m. Therefore, the proposed Collaborative-DP allocates the privacy budget by the ratio of the rank of the feature submap v m [i] from different channels to the rank of all feature maps. Specifically, feature submaps with higher rank contribute more to the inference accuracy of the model and can be assigned a higher privacy budget (corresponding to less noise), thus achieving a tradeoff between privacy and availability for collaborative inference. The schematic of privacy budget allocation based on feature graph rank is shown in Fig. 4, and the related calculations are as follows:\n\n, where\n\nwhere Rank (•) estimates the expectation of the feature map rank by taking g inputs, v m [i] denotes the feature map generated by the ith filter (k in total), and SVD (•) denotes the rank of the feature map obtained by singular value decomposition.\n\nComprehensive analysis above, we give the detailed design of the adaptive privacy preservation cloud-edge collaborative inference algorithm, termed Collaborative-DP. The first line of the algorithm indicates that the privacy budget allocation based on the feature map rank is implemented according to the formula 12; subsequently, the refined noise is added to the intermediate output by the edge devices at the partition layer in proportion to the privacy budget (lines 3-7 of the algorithm); finally, line 9 of the algorithm indicates that the remote cloud receives the intermediate results with the added noise and then completes the remaining inference network and outputs the results.\n\n$$v m [i] = v m [i] max 1, vm[i] ∞ Cm, f or i = 1, 2, • • •, k, v m ∈ R k * w * h(11)$$\n\n$$i = • Rank (v m [i]) k j=1 Rank (v m [j])$$\n\n$$Rank (v m [i]) ≈ 1 g g t=1 SVD (v m [i], t), k i=1 i =(12)$$\n\n## Algorithm 2 Adaptive privacy preserving cloud-edge collaborative inference algorithm (Collaborative-DP).\n\nInput: x, the sensitive data input from the edge device; D = {x i } g i=1, the training set; f device Θ l (•), the network layer executed by the edge device; f cloud Θr (•), the network layer executed by the cloud server; m, the network partition layer (noise addition layer); the number of convolution filters in m ; C m, the clipping threshold; { i } k i=1, the privacy budget assigned to different feature submaps. Output: y, inference result.\n\nNext, it is necessary to prove whether the algorithm 2 satisfies the strict differential privacy guarantee. Theorem 2. Adaptive privacy preserving cloud-edge collaborative inference algorithm (Collaborative-DP) satisfies k i=1 i -differential privacy.\n\nProof. Initially, we consider the privacy-preserving case of a single convolutional filter f ilter i. We use f i θm (•) to denote the ith convolutional computation function of the partition layer m. Assuming that D and D are adjacent data sets, the Laplace mechanism can be defined as the random function\n\n, where the global sensitivity |∆s| = 2C m. Therefore, for any output data point t ∈ R w * h of the random function M L, we have\n\nHence the convolution computation of a single filter f ilter i in the algorithm is satisfying i -differential privacy. As shown in Figure 4, all the convolutional computations of the partition layer m can be considered as a set of privacy mechanisms performed sequentially on the same data source v m-1. Therefore, the combination mechanism (theorm 1) of the partition layer\n\n$$1: { i } k i=1 ← AllocateBudget D, f device Θ l (•) 2: v m ← f device Θ l (x) 3: for i = 1 to k do 4: v m [i] ← v m [i] / max 1, vm[i] ∞ Cm 5: v m [i] ← v m [i] + Laplace 2Cm i I 6: end for 7: y ← f cloud Θr (v m ) 8: return y$$\n\n$$M i x, f i θm (•), i = f i θm (x)+Laplace 2Cm i I$$\n\n$$P r (M i (D)) = t P r (M i (D )) = t = w * h j=1 exp -i|tj -f i θm (D) j | 2Cm exp -i|tj -f i θm (D ) j | 2Cm = exp i - w * h j=1 t j -f i θm (D) j - w * h j=1 t j -f i θm (D ) j 2C m ≤ exp i f i θm (D) -f i θm (D ) 1 2C m = exp ( i )(13)$$\n\n$$M [k] = {M 1, M 2, • • •, M k } is satisfied by k i=1 i -differential privacy.$$\n\n## 5 Evaluation\n\nIn this section, the experimental setup of the CIS system is first presented, including the environment configuration and performance comparison baseline, the evaluation metrics, and the models and datasets used. Subsequently, CIS will be evaluated in several aspects such as inference latency and accuracy, privacy strength and availability, and success rate against different types of attacks, respectively.\n\n## 5.1 Experiment Setup\n\n## 5.1.1 Experimental environment and configuration\n\nThis work builds a real hardware experimental platform to evaluate the feasibility of the CIS system, where the edge device is a Jetson NANO mobile platform developed by NVIDIA, equipped with a 64-bit quad-core ARM A57@1.43GHz CPU, 128-core NVIDIA Maxwell @921MHz GPU, and 4GB 64-bit LPDDR4 @1600MHz memory. The cloud server is equipped with a 64-bit 10-core Intel Xeon(R) W-2255 @3.70GHZ CPU, a GForce RTX 2080Ti GPU with 12GB memory, and 64GB of RAM. Communication between the cloud server and the edge devices uses a point-to-point WiFi connection, and network traffic simulation is controlled by the Linux Traffic Control tool, which can simulate network scenarios with different network bandwidth, communication quality, and latency.\n\n## 5.1.2 Models and data sets\n\nTo evaluate the performance of the cloud-edge collaborative security inference algorithm, the experiments use three chain topology DNN models: AlexNet, VGG16, and MobileNet v1, and the models are modified accordingly to be adapted and deployed in the CIS framework. Regarding the dataset, the CIFAR-10 [35] dataset is used for all model training and inference in our experiments to compare and validate the accuracy of our proposed methods. The dataset CIFAR-10 contains 50,000 RGB training images of size 32 × 32 and 10,000 test images for 10 classes. (c) Neurosurgeon [5]: The first proposed method for collaborative inference between edge devices and clouds based on DNN model partitioning, the total inference delay generated by DNN networks in collaboration at the cloud and edge.\n\n## 5.1.3 Baseline and Evaluation Metrics\n\n(2) Defensive performance of reconstruction attacks White-box Reconstruction Attack (WRA), based on the attack method provided by He et al [9], an internal adversary from the cloud reconstructs the input data of the original sensitive edge device using the network structure and parameters of the shared model, as well as the uploaded intermediate output information.\n\nBesides, we consider another Black-box Inverse-Network Attack (BINA) [9] from an external threat compared to the white-box reconstruction attack, where an adversary from the external does not have any a priori information about the network and parameters of the model and can only train an inverse model to reconstruct the input data from intermediate results through legitimate query requests using the Cloud Edge collaborative inference service.\n\nIn addition to the visualization of the reconstructed images to validate the proposed privacy-preserving mechanism, MSE, SSIM, and PSNR metrics, which typically measure the difference between the original and reconstructed images, are used to quantify the effectiveness of the reconstruction attack.\n\n(a) Mean Squared Error (MSE): MSE measures the similarity between two images by calculating the cumulative squared error of the pixel values. the lower the MSE value, the higher the similarity between the two images (A and B, image pixel size m • n). The specific calculation is as follows:\n\n(b) Structural similarity (SSIM): SSIM is a perception-based metric that measures the similarity between two images based on structural information. The specific calculation is as follows:\n\nwhere µ A and µ B denote the mean values of pixels in images A and B, respectively. σ 2 A and σ 2 B denote the variance, and σ AB denotes the covariance. In addition, C 1 and C 2 are constants and the value of SSIM is between [0, 1], and higher SSIM values indicate higher similarity between the two images.\n\n(c) Peak signal-to-noise ratio (PSNR): PSNR measures the similarity of two images by the peak error. the larger the PSNR value, the higher the image similarity. The specific calculation is as follows:\n\n$$M SE (A, B) = 1 m • n m,n i,j=1.1 A (i, j) -B (i, j) 2(14)$$\n\n$$SSIM (A, B) = (2µ A µ B + C 1 ) (2σ AB + C 2 ) (µ 2 A + µ 2 B + C 1 ) (σ 2 A + σ 2 B + C 2 )(15)$$\n\n$$P SN R (A, B) = 10 log 10 255 2 M SE (A, B)(16)$$\n\n## 5.2 Inference Latency Performance Analysis\n\nFirst, for the analysis of inference delay performance, our target networks are selected as AlexNet, VGG16, and MobileNet v1. The number of layers of the corresponding networks is given in Table 1. Meanwhile, in order to simulate the dynamically changing network quality between the real edge and the cloud, three networks with different conditions are simulated by the Linux Traffic Control tool, respectively, as shown in Table 1 below. Subsequently, the proposed CIS cloud-side collaborative inference system achieves inference acceleration in comparison with several other approaches, including device-only, cloud-only, and Neurosurgeon, under different network quality conditions, given in Figure 5. It can be clearly found in Figure 5(c) that with poor network quality (uplink bandwidth as low as 0.15Mpbs), the transmission latency generated by sending raw data to the cloud is much larger than the computation latency in edge devices, and the inference speedup of cloud-only is significantly lower than that of device-only for DNN models of different sizes baseline. For the CIS and Neurosurgeon methods with the model partitioning mechanism, the inference speedup ratios for different models are still slightly higher than the baseline device-only method even with low network transmission quality, reaching 1.15x ∼1.65 for CIS and 0.88x ∼1.34x for Neurosurgeon.\n\nWith the improvement of network transmission quality, the bottleneck of computational performance on edge devices is gradually amplified, while the powerful computational power of cloud servers is released with the benefit of network quality improvement. As a result, the inference acceleration ratios of cloud-only, CIS, and Neurosurgeon are increased exponentially. When the network transmission rate reaches 4Mps, the inference speedup ratios of cloud-only reaches 7.12x, CIS reaches 13.56x and Neurosurgeon reaches 10.06x. Finally, when the network transmission rate reaches 15Mpbs, the ratio of transmission delay to total inference delay decreases further, and the inference speedup ratio of cloud-only is significantly better than other methods, reaching 17.89x for the MobileNet model. Moreover, Neurosurgeon's inference speedup ratio degrades compared to the network transmission rate of 4 Mbps. Nevertheless, CIS still slightly outperforms cloud-only in the inference speedup ratio for the AlexNet model.\n\nIn a word, with the continuous development of computing capability of edge devices, the computational schema of cloud-edge collaborative inference can make up for the computational latency bottleneck of cloud-only mode under the complex and variable network transmission quality. Meanwhile, the security and privacy of data is further enhanced by ensuring that the original data does not leave the device.\n\n## 5.3 Defensive performance of white-box reconstruction attacks\n\nWe utilize the regularized Maximum Likelihood Estimation (rMLE)-based white-box reconstruction attack (WRA) [9] to verify the defense performance of the proposed Collaborative-DP. Specifically, given the intermediate output results and the inference network and parameters f device Θ l (x), the similarity of the reconstructed input x to the original input x (indicating the posterior information observed by the adversary from the intermediate results) is measured by the Euclidean distance (ED), and the total variation ( TV) to represent the prior information of the original input, as follows:\n\n= arg min Before further verifying the defense performance of the Collaborative-DP algorithm in CIS against WRA attacks, we give a visualization of the privacy budget allocation based on the rank of feature submaps (as in Eq. 12) in Collaborative-DP. As shown in Fig. 6, the first row shows the original CIFA-10 input images, and the feature submaps output by the collaborative inference network (VGG-16) at the partition layer sorted by their rank, in that order. Given the total privacy budget, Collaborative-DP allocates the budget reasonably according to the rank order, and the lower the feature submaps of lower rank contain lower amount of available information, the privacy budget allocated for them is correspondingly lower, and the added noise becomes higher (X-axis direction in Figure 6). As the total privacy budget keeps decreasing (in the Y-axis direction in Fig. 6), the noise added to the corresponding individual feature submaps also increases significantly, corresponding to the increasing intensity of privacy. Even so Collaborative-DP's rank-based budget allocation mechanism still ensures that feature submaps with high rank have relatively high availability.\n\nNext, the defense effect of Collaborative-DP algorithm in CIS against WRA attack with different privacy protection strengths is given in Fig. 7, where the target model is VGG-16 as well as CIFAR-10 as the target dataset. Analyzed from a visual perspective, the WRA attack is very effective in reconstructing the input image on the edge device almost completely. As the given total privacy budget decreases gradually from high, the noise addition of Collaborative-DP to the intermediate output increases and the attack effect of WRA decreases gradually. When ∈ [10,30], the content of the target data can no longer be clearly distinguished from the visual point of view.\n\nIn addition to visualizing the reconstructed images of WRA attacks, MSE, SSIM and PSNR metrics are used in Fig. 8 to quantify the effectiveness of defense against WRA attacks. Also, Collaborative-DP is compared with Non-DP without added noise (as the baseline algorithm) and the plain noise addition strategy [36] proposed by Ryu et al. (which we named Native-DP for comparison purposes), respectively. For the MSE metric, a lower value means the closer the reconstructed image is to the original image, and the more effective the reconstructed attack is. When 10 < < 50, the MSE value of Native-DP is much lower than that of Collaborative-DP, implying that Collaborative-DP can achieve better defense against reconstruction attacks with a smaller privacy budget. Contrary to the MSE metric, a larger PSNR value indicates that the reconfiguration attack generates a higher quality image and a more effective attack. It can be observed very directly in Fig. 8 that the PSNR value of Collaborative-DP is smaller than that of Native-DP in all privacy budget ranges, which also indicates that Collaborative-DP has higher defense performance against WRA attacks with the same privacy budget. Finally, the SSIM metric measures the similarity between the reconstructed image and the original image by structural similarity, and higher values indicate that the reconstructed attack is more effective. In\n\n$$x = arg min x ED (x, x) + λT V (x)(17)$$\n\n$$x f device Θ l (x) -f device Θ l (x)$$\n\n## LowRank High Rank\n\n## High privacy budget\n\n## Low privacy budget Low noise ratio\n\nHigh noise ratio Fig. 8, it can be found that Native-DP is superior to Collaborative-DP algorithm under SSIM metric. A reasonable explanation is that since the SSIM metric reflects the properties of the object structure in the scene from the perspective of image composition, while Collaborative-DP differs from Native-DP by assigning privacy budget based on the size of the feature subgraph rank. The structural properties of some feature subgraphs are preserved to some extent while trade-offs are made between privacy strength and availability.\n\n## 5.4 Defensive performance of Black-box Inverse-Network Attack\n\nIn this subsection, compared to the white-box reconstruction attack, we consider another black-box attack from an external threat with more restrictive conditions. Specifically, an adversary from external does not have any a priori information about the network and parameters f Θ of the model, and can only use the cloud-edge collaborative inference service through legitimate query requests. Suppose an external adversary can get the intermediate results computed from the edge device with arbitrary input x, and then reconstruct the input data from the intermediate results by training an inverse model f -1 Θ l, as follows: where {x i } m i=1 is the training set generated for the inverse model g, and the output {f Θ l (x i ), x i } obtained by legitimate request is used as the sample to train the inverse model g and the parameters f -1 Θ l. An external adversary can then reconstruct the sensitive data x = f -1 Θ l (v m ) by the inverse model and the obtained intermediate result v m. Similarly, the defense effectiveness of Collaborative-DP algorithm in CIS against BINA attack under different privacy protection strengths is given in Fig. 9, where the target model is VGG-16 as well as CIFAR-10 as the target dataset. First, analyzing from the visual perspective, although the data reconstructed by BINA can still be distinguished by the naked eye, the attack is less effective than the white-box attack model of WRA. This stems from the restriction of the attack condition that the external adversary lacks prior knowledge of the model and strongly relies on the generated training set {f Θ l (x i ), x i } to fit the constructed inverse model parameters f -1 Θ l. However, both Collaborative-DP and Native-DP methods add noise (v = f Θ l (x) + noise) to the intermediate results (also the training set generated by the inverse network), and the overfitted inverse model is very sensitive to noise. Therefore, it can be found in Figure 9 that when the privacy budget < 500, the Collaborative-DP algorithm has been able to resist the BINA attack very well. The different quantitative metrics given in Figs.?? also corroborate the above analysis. Under the condition of = 100 the Collaborative-DP algorithm makes the MSE metrics fast approaching with 20,000 while the MSE and SSIM evaluation metrics are also much lower than the baseline method of Non-DP without adding noise. Based on the above analysis, the defense performance of Collaborative-DP algorithm against BINA attack is already excellent when the privacy budget < 500.\n\n$$f -1 Θ l = arg min g 1 m m i=1 g (f Θ l (x i )) -x i 2(18$$\n\n## 5.5 Analysis of inference accuracy\n\nThe above analysis of the security defense performance of Collaborative-DP facing different attack scenarios is presented. However, blindly achieving high strength privacy protection capability by reducing the privacy budget can heavily sacrifice the accuracy of model inference. Therefore, in this subsection, experiments are conducted to analyze how much impact on the inference accuracy of Collaborative-DP can be caused by different privacy budgets, and how to pick the appropriate privacy budget to achieve the trade-off between privacy and usability.\n\nAs can be seen in the following figure 11, Collaborative-DP has a much lower impact on the accuracy of the original network than the Native-DP approach through the refined privacy budget allocation mechanism. When > 10, the prediction accuracy of the model under Collaborative-DP protection has reached 82.64%, which is slightly lower than the 86.69% of Native-DP and much higher than the 56.56% of Native-DP. And the Native-DP mechanism can reach the best balance of privacy and usability only when > 30. As the previous analysis of the defense performance of the attacks, the minimum privacy budgets to effectively defend against WRA attacks and BINA attacks are 30 and 500, respectively, which are much higher than the privacy budgets required for the availability equilibrium point of Collaborative-DP and Native-DP. Thus, the mechanisms of both Collaborative-DP and Native-DP can protect against advanced threats from internal and external while guaranteeing the accuracy of model prediction, and Collaborative-DP requires less privacy budget. Moreover, after > 10 of Collaborative-DP, the prediction accuracy of the network has stabilized and cannot overlap with the accuracy of the original Non-DP no matter how much the privacy budget is scaled up. A reasonable explanation is that a certain degree of accuracy loss is caused by the fact that Collaborative-DP uses a fixed threshold C m for feature mapping to clipping in order to estimate the global sensitivity.\n\n## 6 Conclusion\n\nIn this paper, a secure privacy inference framework for cloud-edge collaboration is proposed, which supports adaptively partitioning the network according to the dynamically changing network bandwidth and fully releases the computational power of edge devices. Meanwhile, the partition point is selected with full consideration of the amount of information of intermediate results that need to be uploaded, and refined noise is added to them to achieve a differential privacy protection mechanism. Finally, a realistic cloud-edge collaborative inference computing scenario is constructed to evaluate the effectiveness of inference latency and model partitioning on resource-constrained edge devices. Meanwhile, state-of-the-art cloud-edge collaborative inference reconstruction attacks are employed to evaluate the practical usability of the end-to-end privacy-preserving mechanism of CIS.\n\n## References\n\n1. Mao, You, Zhang et al. (2017) \"A survey on mobile edge computing: The communication perspective\" *IEEE communications surveys & tutorials*\n\n2. Wang, Wei, Kong et al. (2019) \"Ecass: Edge computing based auxiliary sensing system for self-driving vehicles\" *Journal of Systems Architecture*\n\n3. Kuang, Ma, Li et al. (2021) \"Cooperative computation offloading and resource allocation for delay minimization in mobile edge computing\" *Journal of Systems Architecture*\n\n4. Siriwardhana, Porambage, Liyanage et al. (2021) \"A survey on mobile augmented reality with 5g mobile edge computing: architectures, applications, and technical aspects\" *IEEE Communications Surveys & Tutorials*\n\n5. Kang, Hauswald, Gao et al. (2017) \"Neurosurgeon: Collaborative intelligence between the cloud and mobile edge\" *ACM SIGARCH Computer Architecture News*\n\n6. Li, Zeng, Zhou et al. (2019) \"On-demand accelerating deep neural network inference via edge computing\" *IEEE Transactions on Wireless Communications*\n\n7. Zhang, Chen, Xu (2021) \"Autodidactic neurosurgeon: Collaborative deep inference for mobile edge intelligence via online learning\"\n\n8. Hu, Bao, Wang et al. (2019) \"Dynamic adaptive dnn surgery for inference acceleration on the edge\"\n\n9. He, Zhang, Lee (2020) \"Attacking and protecting data privacy in edge-cloud collaborative inference systems\" *IEEE Internet of Things Journal*\n\n10. Yeom, Giacomelli, Fredrikson et al. (2018) \"Privacy risk in machine learning: Analyzing the connection to overfitting\"\n\n11. Shokri, Stronati, Song et al. (2017) \"Membership inference attacks against machine learning models\"\n\n12. Liu, Juuti, Lu et al. (2017) \"Oblivious neural network predictions via minionn transformations\"\n\n13. Gilad-Bachrach, Dowlin, Laine et al. (2016) \"Cryptonets: Applying neural networks to encrypted data with high throughput and accuracy\"\n\n14. Mireshghallah, Taram, Jalali et al. (2020) \"A principled approach to learning stochastic representations for privacy in deep neural inference\"\n\n15. Wang, Zhang, Bao et al. (2018) \"Not just privacy: Improving performance of private deep learning in mobile cloud\"\n\n16. Mireshghallah, Taram, Ramrakhyani et al. (2020) \"Learning noise distributions to protect inference privacy\"\n\n17. Dwork, Mcsherry, Nissim et al. (2006) \"Calibrating noise to sensitivity in private data analysis\"\n\n18. Zhao, Chen (2022) \"A survey on differential privacy for unstructured data content\" *ACM Computing Surveys (CSUR)*\n\n19. Dwork, Roth (2014) \"The algorithmic foundations of differential privacy\" *Foundations and Trends® in Theoretical Computer Science*\n\n20. Frank, Mcsherry (2009) \"Privacy integrated queries: an extensible platform for privacy-preserving data analysis\"\n\n21. Mach, Becvar (2017) \"Mobile edge computing: A survey on architecture and computation offloading\" *IEEE Communications Surveys & Tutorials*\n\n22. Surat Teerapittayanon, Mcdanel, Kung (2017) \"Distributed deep neural networks over the cloud, the edge and end devices\"\n\n23. Jong Hwan Ko, Na, Amir et al. (2018) \"Edge-host partitioning of deep neural networks with feature space encoding for resource-constrained internet-of-things platforms\"\n\n24. Zhang, Zhang, Qian et al. (2021) \"Deepslicing: collaborative and adaptive cnn inference with low latency\" *IEEE Transactions on Parallel and Distributed Systems*\n\n25. Banitalebi-Dehkordi, Vedula, Pei et al. (2021) \"Auto-split: a general framework of collaborative edge-cloud ai\"\n\n26. Susmita, Manasi, Sharmin Snigdha et al. (2020) \"Neupart: Using analytical models to drive energy-efficient partitioning of cnn computations on cloud-connected mobile clients\" *IEEE Transactions on Very Large Scale Integration (VLSI) Systems*\n\n27. Xu, Zhao, Liang et al. (2020) \"Energy-aware inference offloading for dnn-driven applications in mobile edge clouds\" *IEEE Transactions on Parallel and Distributed Systems*\n\n28. He, Zhang, Lee (2019) \"Model inversion attacks against collaborative inference\"\n\n29. Wang, Tan, Li et al. (2020) \"Differential privacy preservation in interpretable feedforward-designed convolutional neural networks\"\n\n30. Mao, Yi, Li et al. (2018) \"Learning from differentially private neural activations with edge computing\"\n\n31. Goldsmith (2005) \"Wireless communications\"\n\n32. Mohammed, Joe-Wong, Babbar et al. (2020) \"Distributed inference acceleration with adaptive dnn partitioning and offloading\"\n\n33. Abadi, Chu, Goodfellow et al. (2016) \"Deep learning with differential privacy\"\n\n34. Lin, Ji, Wang et al. (2020) \"Hrank: Filter pruning using high-rank feature map\"\n\n35. Krizhevsky, Hinton (2009) \"Learning multiple layers of features from tiny images\"\n\n36. Ryu, Zheng, Gao et al. (2022) \"Can differential privacy practically protect collaborative deep learning inference for iot? Wireless Networks\"<|endoftext|>" |
| } |
| }, |
| "nuclear": { |
| "train": { |
| "total_tokens": 1052192860, |
| "example": "# Upper limits for the production of the η-mesic Helium in the dd → 3 Henπ 0 and dd → 3 Hepπ -reactions *\n\nMagdalena Skurzok, Wojciech Krzemień, Oleksandr Rundel, Pawel Moskal\n\n## Abstract\n\nWe performed a search for 4 He-η bound state in dd → 3 Henπ 0 and dd → 3 Hepπ -reactions with the WASA-at-COSY facility using a ramped beam technique. The measurement was carried out with high statistics and high acceptance. The signature of η-mesic nuclei was searched for by the measurement of the excitation functions in the vicinity of the η production threshold for each of the considered channels. We did not observe the narrow structure which could be interpreted as a bound state. The preliminary upper limits of the total cross sections for the bound state production and decay varies from 21 nb to 36 nb for the dd → 3 Henπ 0 channel, and from 5 nb to 9 nb for the dd → 3 Hepπ -channel for the bound state width ranging from 5 to 50 MeV.\n\n## 1. Introduction\n\nSince Haider and Liu postulated a possible existence of η-mesic nuclei [1], many experimental groups performed measurements dedicated to search for the new kind of nuclear matter in which the η meson is bound within a nucleus via the strong interaction. However, till now, none of the experiments have brought the clear evidence for the bound state existence. The status of the search was recently described in the following reviews [2][3][4][5][6][7][8]. Some of the experiments set the upper limits for the bound state production in several processes. COSY-11 [9][10][11] group estimated the upper limit of total cross section for dp → ( 3 He-η) bound → pppπ -process to the value of 270 nb and for dp → ( 3 He-η) bound → 3 Heπ 0 to the value 70 nb. COSY-GEM measurement of p 27 Al → 3 Hepπ -X brought the upper limit of the total cross section for ( 25 Mg-η) bound production equal to 0.46 ± 0.16(stat) ± 0.06(syst) nb [12]. The WASA measurement in 2008 results in the upper limit of of the total cross section for the ( 4 He-η) bound creation in dd → 3 Hepπ -reaction, which varies from 20 nb to 27 nb for the range of the bound state width from 5 MeV to 35 MeV [13,14]. The measurement carried out two years later permitted to lower the upper bound for the cross section of dd → ( 4 He-η) bound → 3 Hepπ -process down to the value of few nanobarns. Additionally, the upper limit of the preliminary total cross section was determined for the first time for the ( 4 He-η) bound production in dd → 3 Henπ 0 reaction [15]. This paper presents the preliminary results obtained for the aforementioned processes.\n\n## 2. Experimental results\n\nIn November 2010, WASA-at-COSY Collaboration carried out the experiment dedicated for the search for 4 He-η bound states in dd → 3 Henπ 0 and dd → 3 Hepπ -reactions. The ramped beam technique was used to vary the momentum continuously from 2.127 GeV/c to 2.422 GeV/c, which corresponds to a range of excess energies Q from -70 to 30 MeV [16,17]. The detailed description of the WASA experimental setup is presented in [18].\n\nAnalysis for the dd → 3 Henπ 0 and dd → 3 Hepπ -reactions were carried out independently. Next, the set of the cross-check tests was performed to assure the consistency at the PID level. The 3 He ions and nucleon-pion pairs were identified in the Forward and Central Detector, respectively. The deposited energy patterns in thick scintillator layers of the Forward Hodoscope was used to identify the 3 He ions (the ∆E-E method). The neutral pion π 0 was reconstructed based on the invariant mass of two gamma quanta while the neutron was identified via the missing mass technique [15]. The proton and π -identification was based on the measurement of the energy loss in the thin Plastic Scintillator Barrel combined with the energy deposited in the Electromagnetic Calorimeter [13].\n\nThe events which may correspond to the bound states production were selected using criteria based on Monte Carlo simulations for the η-mesic nuclei production and decay. We apply the cuts in the momentum of 3 He in the CM frame, nucleon CM kinetic energy, pion CM kinetic energy and the opening angle between nucleon-pion pair in the CM. The region rich in signal corresponds to the momenta of the 3 He in the range p cm 3 He ∈ (0.07, 0.2) GeV/c. For this region the excitation function was obtained by normalizing the events selected in individual excess energy intervals by the corresponding integrated luminosities (the detailed description of the luminosity determination one can find in Ref. [15,19]) and corrected for acceptance and efficiency. The excitation function does not reveal the resonance-like structure, which could be the signature of the η-mesic nuclei existence [15], however the interpretation of the results is still in progress. So far, the upper limit of the total cross section for the dd → ( 4 He-η) bound → 3 Henπ 0 and dd → ( 4 He-η) bound → 3 Hepπ -processes was determined on the 90% confidence level. Preliminary, the upper limits were obtained by the fit of the sum of the polynomial and Breit-Wigner functions to the experimentally determined excitation functions. It varies from 21 to 36 nb for the first channel and from 5 to 9 nb for the second channel for the bound state width ranging from 5 to 50 MeV (See Fig. 1). A possible broad state in the case of dd → ( 4 He-η) bound → 3 Henπ 0 reaction cannot be excluded by the current data set [15]. The kinematic region, where we expect the evidence of the signal from the bound state corresponding to 3 He momenta in the CM system in range p cm 3 He ∈ (0.3, 0.4) GeV/c, cannot be fully described only by the combination of the considered background processes (see left panel of Fig. 2). In contrast, as shown in the right panel of Fig. 2 the experimental excitation function is very well fitted by the background contributions for the region where the signal is not expected.\n\n## 3. Conclusion and Perspectives\n\nThe excitation functions were determined for dd → ( 4 He-η) bound → 3 Hepπ -and dd → ( 4 He-η) bound → 3 Henπ 0 processes, however none of them reveal any direct narrow structure which could be signature of the bound state with width less than 50 MeV. The interpretation of the resuts is still in progress. So far preliminary upper limit of the total cross section for the η-mesic 4 He formation and decay was estimated. In case of dd → ( 4 He-Fig. 2. Preliminary experimental excitation functions (red circles) fitted with two background reactions: dd → 3 Henπ 0 (green squares) and dd → 3 HeN * → 3 Henπ 0 (magenta squares). A sum of both background contributions is shown as blue triangles. Left and right panels show results for the regions rich in signal and poor in signal, respectively. The figure is adopted from [15]. η) bound → 3 Hepπ -reaction we obtained the preliminary upper limit of the total cross section in order of few nb which is about four times lower in comparison with the result obtained from 2008 data [13]. Comparing to theoretically estimated value [20], the obtained upper limit value does not exclude the existence of the bound state. The excitation function for the reaction dd → ( 4 He-η) bound → 3 Henπ 0 was obtained for the first time in the experiment. The obtained upper limit is here by factor of five larger than predicted value therefore, we can conclude, that the current measurement does not exclude the existence of bound state also in this process [21]. Moreover, the excitation function obtained for this reaction is a subject of interpretation of few theoretical groupsfoot_0 with respect to very wide ( 4 He-η) bound or 3 He-N * bound state [21].\n\nIn May 2014, we extended the search for to 3 He-η sector [22]. We chose processes corresponding to the three mechanisms: (i) absorption of the η meson by one of the nucleons, which subsequently decays into N * -π pair e.g.: pd → ( 3 He-η) bound → pppπ -, (ii) decay of the η -meson while it is still \"orbiting\" around a nucleus e.g.: pd → ( 3 He-η) bound → 3 He6γ or pd → ( 3 Heη) bound → 3 He2γ reactions and (iii) η meson absorption by few nucleons e.g.: pd → ( 3 He-η) bound → ppn or pd → ( 3 He-η) bound → pd. Almost two weeks of measurement with an average luminosity of about 6•10 30 cm -2 s -1 allowed to collect a world largest data sample for 3 He-η. The data analysis is in progress.\n\nThe search for η and η -mesic bound states is carried out also by other international collaborations, e.g. at J-PARC [23,24] and at GSI [25,26]. In parallel, several theoretical studies are ongoing [3,20,[27][28][29][30][31][32][33][34].\n\n## 4. Acknowledgements\n\n## References\n\n1. Haider, Liu (1986) *Phys. Lett. B*\n\n2. Machner (2015) *J. Phys., G*\n\n3. Kelkar (2013) *Rept. Prog. Phys*\n\n4. Kelkar (2015) *Acta Phys. Polon. B*\n\n5. Haider, Liu (2015) *Int. J. Mod. Phys. E*\n\n6. Krusche, Wilkin (2014) *Prog. Part. Nucl. Phys*\n\n7. Bass (2016) *Acta Phys. Pol. B*\n\n8. (2015) \"print; 55 Cracow School of Theoretical Physics\"\n\n9. Smyrski (2007) *Phys. Lett. B*\n\n10. Krzemień (2009) *Int. J. Mod. Phys. A*\n\n11. Moskal, Smyrski (2010) *Acta Phys. Pol. B*\n\n12. Budzanowski (2009) *Phys. Rev. C*\n\n13. Adlarson (2013) *Phys. Rev. C*\n\n14. Krzemien (2011)\n\n15. Skurzok (2015)\n\n16. Skurzok, Moskal, Krzemien (2012) *Prog. Part. Nucl. Phys*\n\n17. Krzemien, Moskal, Skurzok (2015) *Acta Phys. Pol. B*\n\n18. Adam (2004)\n\n19. Skurzok, Krzemien (2015) *Acta Phys. Pol. B*\n\n20. Wycech, Krzemien (2014) *Acta Phys. Pol. B*\n\n21. Kelkar, Bedoya, Ferro (2015)\n\n22. Moskal, Krzemien, Skurzok (2014)\n\n23. Fujioka (2010)\n\n24. Fujioka (2012) *J. Phys. Conf. Ser*\n\n25. Yoshiki (2014) \"Proceedings of the 20th International Conference on Particles and Nuclei\"\n\n26. Fujioka (2015) *Hyperfine Interact*\n\n27. Bass, Thomas (2014) *Acta Phys. Pol. B*\n\n28. Hirenzaki (2010) *Acta Phys. Pol. B*\n\n29. Hirenzaki, Nagahiro (2014) *Acta Phys. Pol. B*\n\n30. Friedman, Gal, Mares (2013) *Phys. Lett. B*\n\n31. Wilkin (2007) *Phys. Lett. B*\n\n32. Nagahiro (2013) *Phys. Rev. C*\n\n33. Niskanen (2015) *Phys. Rev. C*\n\n34. Wilkin (2016) *Acta Phys. Pol. B*<|endoftext|>" |
| }, |
| "test": { |
| "total_tokens": 116735520, |
| "example": "# The Weak Parity-Violating Pion-Nucleon Coupling\n\nE Henley, W-Y Hwang, L Kisslinger\n\n## Abstract\n\nWe use QCD sum rules to obtain the weak parity-violating pion-nucleon coupling constant f πN N. We find that f πN N ≈ 2 × 10 -8, about an order of magnitude smaller than the \"best estimates\" based on quark models. This result follows from the cancellation between perturbative and nonperturbative QCD processes not found in quark models, but explicit in the QCD sum rule method. Our result is consistent with the experimental upper limit found from 18 F parity-violating measurements.\n\nIn this Letter, we use the method of QCD sum rules with the electroweak and QCD Lagrangians to predict the weak parity-violating (PV) pion-nucleon coupling constant, f πN N. The theoretical prediction of f πN N is an important and challenging problem. Todate, the most accurate PV experiments have only shown 1,2) that the upper limit for the magnitude of this coupling constant is 3-5 times smaller than the \"best value\" predicted by DDH 3) on the basis of a quark model and somewhat smaller than that in a similar calculation carried out more recently. 4) Since that time others have tried to estimate f πN N by means of chiral soliton models 5,6) and QCD sum rules. 7) This coupling is of particular interest because of its sensitivity to the neutral current contribution of weak nonleptonic processes at low energies. 2) QCD sum rules have been shown to be able to reproduce known properties of the nucleon, e.g., µ p, µ n, g A, and of other hadrons. 8) However, they have rarely (if ever) been used to predict unknown properties. Keeping terms in the operator product expansion (OPE) up to dimension 5, we show that there are two main terms in the sum rule for f πN N : the unit operator and a dimension D=3 susceptibility. By using an analogous sum rule for the strong coupling constant, g πN N, to evaluate this susceptibility, we are able to determine the weak coupling f πN N. An important aspect of the present work is that we demonstrate that there is a cancellation between perturbative and nonperturbative QCD modifications of the weak process.\n\nWe employ a two point function for the nucleon in an external pionic field. Our current is the usual one 9)\n\nwhere ǫ abc is the antisymmetric tensor, C is the charge conjugation operator, and a, b, c are color indices. The neutron currents are similar, with the interchange of d ↔ u.\n\nSince the most general weak PV π-N coupling is 2,3,10)\n\nonly charged pions can be emitted or absorbed. For definiteness, we consider the absorption of a π + so that an initial neutron is converted to a proton, and the correlator we consider is\n\nThe general form of Π for the parity-violating pion-nucleon coupling, as dictated by relativistic invariance, is\n\nwith p ≡ γ µ p µ.\n\nThe phenomenological evaluation of the correlator is carried out by deriving a dispersion relation for Π through the insertion of a complete set of physical intermediate states of spin 1 2 in the expression of Eq. 3. Using the usual terminology, we refer to this as the right-hand side (RHS). We only use the sum rule for Π e, since the sum rule for Π o is not as stable. One finds for the parity-violating part of Eqs. 3,4:\n\nM is the nucleon mass; and the parameter λ N is related to the amplitude for finding three quarks in a nucleon at one point and has been determined in a number of sum-rule calculations. 8) The double pole term, corresponding to the insertion of the one-nucleon intermediate state in Eq. (3), has contributions both from the weak pion-nucleon vertex and the parity violation in the nucleon state itself. As will be shown below, in our microscopic calculation using the two-point form only Z 0 -quark loops in the nucleon correlator give the parity-violating vertex correction. As is usual in the method, the physical property of interest, f πN N, is obtained by treating the double-pole term explicitly, while the continuum and excited states are included in the numerical analysis via a parameterization, as discussed below.\n\nThe microscopic evaluation of Π, based on QCD and electroweak theory (the so- The propagators in coordinate space corresponding to the three diagrams of Fig. 2 are:\n\nwith χ π g πq π j < qq >≡< qiτ j γ 5 q > π and m π 0 < qq > π j ≡< q iγ 5 g c τ j σ • Gq > π. Here g πq is the pion-quark coupling, which is not explicitly used in the present calculation, and G represents the gluon field. The susceptibility χ π enters in the evaluation of both strong and weak pion-nucleon coupling constants, while m π 0 enters only for the weak one. We will discuss the treatment of these parameters below. We only consider the even sum rule, namely that for Π e ; that for Π o involves further unknown susceptibilities. The evaluation of the diagrams is straightforward.\n\nFor the weak Hamiltonian, we take\n\nwhere θ C is the Cabibbo angle and A u, A d, B u, B d, are given by\n\nwith θ W the Weinberg angle. This is the standard model Hamiltonian, which we use for the main part of the calculation. We then discuss the QCD effects on our results.\n\nSince momentum can be transferred in the weak point-like interaction, shown by wavy lines representing Z 0 in the figures, there is an additional integral to be carried out in the evaluation of Π. For example, we obtain for Fig. 1a Π\n\nwhere D = 4 -ǫ is the dimension. There is no PV contribution from Figs. (1c) and (1d), and the sum of Figs. (1b) and (1e) vanish. The integrals in Eq. ( 9) are evaluated by standard Feynman techniques, with dimensional regularization. The result is\n\nWe regularize the diagram using mass, vertex, and pion-quark vertex counter terms, leading to the one-loop corrections to our diagram shown in Fig. 3. The lowest dimension pion-quark vertex and mass renormalization diagrams for f πN N are shown in Figs. 3a-c.\n\nIn our approximation of a contact weak interaction, the contribution of Figs. 3a-c vanish under a Borel transformation. The mechanism of Figs. 3d and 3e do not appear in the external field method. The only nonvanishing diagrams in the infinite Z-mass limit are those shown in Figs. 3f and3g. With a minimal subtraction scheme we obtain an additional composite current, which we call η V :\n\nwith\n\nThis current is used for the vertex regularization shown in Figs. 3f and3g. These vertex corrections give the contribution\n\nCombining Eqs. (10,12) and taking the Borel transform one obtains for the regularized\n\nwhere M B is the Borel mass. The other diagrams can be evaluated in the same manner.\n\nThe results from the processes of Figs.\n\nwhere a = -(2π) 2 < qq > and λ2 N = (2π) 4 λ 2 N /g πq. We do not include gluon condensate diagrams for f πN N ; they are of the same order or smaller than uncertainties of our calculation. The factors containing L, L = 0.621 ln(10M B ), give the evolution in Q 2 arising from the anamolous dimensions, and the E i (M 2 B ) functions take into account excited states to ensure the proper large-M 2 B behavior. The last line in Eq. ( 14) is the Borel transform of the double-pole term from the phenomenological (right-hand) side, Eq. ( 5). The direct proportionality to sin 2 θ W should be noted.\n\nFinally, by explicit calculation or Fierz reordering, we can show that the contribution for W ± exchanges vanish. Thus, as required by symmetries 3,10), we find no charged current contribution to the weak PV pion-nucleon vertex; such a contribution requires strangeness-changing currents and would thus be reduced by sin 2 θ C ≈ 0.05. Since we neglect strangeness in the nucleon and strangeness-changing currents, we obtain no contribution.\n\nAs we shall demonstrate below, the first two terms in the theoretical form for Π e given in Eq. ( 14) are of opposite sign and tend to cancel. This is a crucial point. For this reason it is essential to either determine the value of the susceptibility χ π from g πN N or to eliminate it from our equations. We do both as an aid in determining the stability of our solutions. First, we determine χ π directly in terms of g πN N [as a function of the Borel mass] by using the sum rule for the strong coupling, which is analogous to Eq. ( 14), and attempt to use the result to determine f πN N. Second, we eliminate χ π from the PV and strong coupling sum rules and find that we can determine f πN N in terms of g πN N. Details are given below.\n\nWe use the correlator given by the two-point function of Eq. ( 3) for the strong as well as the weak interaction. The general form differs from Eq.( 4) by the presence of a γ 5 in each term. The phenomenological (RHS) for the strong pion-nucleon coupling is now given by\n\nUnlike the weak PV pion-nucleon coupling, the evaluation of the strong one leads to a problem in that there is no double pole on the right-hand (dispersion relation) side.\n\nHowever, as shown by Reinders et. al. 11), the value of the coupling constant g πN N found in this way is virtually the same as that found by means of a 3-point function, which circumvents the lack of a double pole problem.\n\nKeeping terms up to D=6, shown in Fig. 4, for the theoretical side (LHS), and taking the Borel transform we obtain the sum rule for the strong pion-nucleon coupling:\n\nwhere < g 2 c G 2 > is the gluonic condensate. Before we discuss our detailed evaluation of the sum rules to obtain our estimate of f πN N, let us discuss the structure of Eqs. ( 14) and ( 16). First, as we discuss below, if we use the method of Ref. (12) [which uses arguments of PCAC within the sum rule context] to evaluate χ π we find that χ π a=-88 GeV 2. With this value, the χ π term dominates both Eqs. (14,16) with the result that g πN N ≃ 155 [in contrast to the experimental value of 13.5]. With this value of χ π we find that f πN N ≥ 10 -6, at least an order of magnitude larger than experiment.\n\nSecondly, since χ π is the only unknown in Eq. ( 16), we can estimate the vacuum susceptibility using the experimental value of g πN N = 13.5: this gives χ π a ≃-1.88 GeV 2, two orders of magnitude smaller than the value given by the method of Ref. (12) [see discussion below]. With this value one finds that the first two terms in Eq. ( 14), the leading terms for f πN N, almost cancel. Note that the second term involving χ π enters with the opposite sign in the two equations for f πN N and g πN N, respectively. This is the source of the very small parity-violating pion-nucleon coupling in comparison with quark model: there is a cancellation between the dimension zero model-like term using perturbative quark propagators and the vacuum pion susceptibility term.\n\nHowever, we find that the sum rule obtained for f πN N [Eq.( 14)], using the value of\n\nB ) extracted from Eq. ( 16), is not stable in M B. Therefore we cannot obtain a reliable estimate of f πN N by this method.\n\nWe find that we can obtain a satisfactory sum rule to determine f πN N by eliminating χ π from both Eq. ( 14) and Eq. ( 16) by taking derivatives with respect to M 2 B. With this procedure, and taking the ratio of the weak to the strong sum rule we obtain the new sum rule for the weak in terms of the strong coupling constant:\n\nwhere c w = G F sin 2 θ W ( 17 3 -γ)/(24π 2 ) = 5.5 × 10 -8 GeV -2. The sum rule is quite stable with a plateau in M 2 B in the region expected, as shown in Fig. 5. Because of the strong cancellation between the first two terms in Eq. ( 14) [dimension 0 and dimension 2 terms], the dimension four term with the unknown parameter m π 0 is important for the final numerical value of f πN N. We have taken m π 0 = 0 in Fig 5. Guided by the value of the parameter m 0 needed in the nucleon sum rule 8), we evaluate the sum rule given in Eq. 15 with m π 0 taken over the range 0.0 to +0.8 GeV. From this procedure we find:\n\nFor negative values of m π 0 the value of f πN N becomes smaller and even negative, but we did not find stable solutions for sizable negative values of this unknown parameter. To be consistent with the neglect of gluon condensate terms we quote as our central value of f πN N that with m π 0 = 0, shown in Fig. 5, as\n\nThis coupling constant is an order of magnitude smaller than the \"best values\" of Refs. 3 and 4. As emphasized earlier, this result follows from the cancellation of the two leading terms in Eq. ( 14). The first LHS term in that equation, a unit dimension term which would correspond to a quark model type calculation, gives a value for f πN N ≈ 2 × 10 -7, similar to the quark model value. The second term, involving the nonperturbative QCD vacuum susceptibility, χ π, strongly cancels the first term. Because of this cancellation, we cannot expect Eq. ( 19) to be very accurate, but we find a clear explanation for the small value of f πN N, consistent with experiment. 1,2) The results given in Eqs. 18 and 19 have been obtained using the Hamiltonian of the standard model (see Eqs. Since the same parameter appears in all terms, this gives the overall uncertainty arising from strong interaction modifications. Therefore, the main conclusion of our work is not changed.\n\nThere are two relevant features that we would like to point out. The first one is that the use of pseudovector coupling also circumvents the problem of a lack of double pole for the strong interaction constant. For the Lagrangian\n\nwe can treat ∇ µ φ π as a constant external axial vector field. The QCD sum rule is then identical to our calculation of g A. 8) At the quark level, we have\n\nwhere f π is the pion decay constant. From our previous result for g A, 1) we then obtain\n\nwhich is just the Goldberger-Treiman relation.\n\nAs a second feature we wish to attempt an independent estimate of χ π. For this purpose we first use PCAC to obtain\n\nwhere we take the π-quark coupling to be unity in this discussion. We then use the work of Belyaev and Kogan 12), which assumes saturation of a sum by one pion states:\n\nAs described above, the value of χ π obtained in this manner is more than an order of magnitude larger than that found by using the value of g πN N from experiment. Once more we point out that if we use it in Eq. ( 16) we find an order of magnitude discrepancy with the strong coupling constant, g πN N. Furthermore, it is clear that this value of χ π is inconsistent with Eq. ( 14), since by eliminating it with derivatives with respect to the Borel mass we obtain results an order of magnitude different than with its use. We conclude that Eq. ( 24) cannot be correct. We are not certain where the method of Belyaev and Kogan errs, but we believe that it is suspect. Note that χ π = (f π /m q ) χπ ∼\n\n$$η p (x) = ǫ abc [u aT (x)Cγ µ u b (x)]γ 5 γ µ d c (x), ηp (y) = ǫ abc [ū b (y)γ ν C ūaT (y)] dc (y)γ ν γ 5,(1)$$\n\n$$H P V (πNN) = f πN N √ 2 ψ(τ × φ π ) 3 ψ,(2)$$\n\n$$Π = i d 4 xe ix•p < 0|T [η p (x)η n (0)]|0 > π +.(3)$$\n\n$$Π P V = Π e 1 + Π o p,(4)$$\n\n$$Π P V e (p 2 ) RHS = λ 2 N f πN N (p 2 + M 2 ) (p 2 -M 2 ) 2 + continuum.(5)$$\n\n$$S ab 5a = i τ • π 4π 2 x 2 g πq γ 5 δ ab S ab 5b = - i 24 τ • π g πq χ π < qq > δ ab γ 5, S ab 5c = i 3 • 2 7 m π 0 < qq > g πq τ • πx 2 γ 5,(6)$$\n\n$$H w = G F √ 2 (J µ J † µ + N µ N † µ ) with J µ = ūγ µ (1 -γ 5 )d cos θ C N µ = ūγ µ (A u + B u γ 5 )u + d γ µ (A d + B d γ 5 )d,(7)$$\n\n$$A u = 1 2 (1 - 8 3 sin 2 θ W ), A d = - 1 2 (1 - 4 3 sin 2 θ W ), B u = -B d = - 1 2,(8)$$\n\n$$1a e = -2 6 G F sin 2 θ W g πq d D k 1 d D k 2 d D k 3 [k 1 • (p -k 2 -k 3 )(p -k1 -k3 ) k2 + ǫ 4 {2(k 2 • (p -k 3 ) k1 (p -k3 ) + 2(p -k 1 -k 3 ) • (p -k 2 -k 3 ) k2 k1 + -3(p -k1 -k3 ) k2 k1 (p -k2 -k3 )}] [(2π) 3D k 2 1 k 2 2 k 2 3 (p -k 1 -k 2 ) 2 (p -k 2 -k 3 ) 2 ] -1,(9)$$\n\n$$Π 1a e (p 2 ) = - G F sin 2 θ W g πq 3 2 2 7 π 6 p 6 ln(-p 2 )( 1 ǫ + 15 2 - 3 2 γ).(10)$$\n\n$$η V (p) = ǫ abc [u aT (k 1 )Cγ µ u b (k 2 )]γ 5 Γ µ V d c (k 3 ), Γ µ V = 4G F sin 2 (θ W ) 3 2 (4π) 2 (q 2 ) -ǫ/2 (qq µ -q 2 γ µ ),(11)$$\n\n$$k 1 = p -k 2 -k 3 and q = k 2 + k 3.$$\n\n$$Π 1a e(V ) (p 2 ) = G F sin 2 θ W g πq 3 2 2 7 π 6 p 6 ln(-p 2 )( 1 ǫ + 14 3 -γ).(12)$$\n\n$$diagram 1a Π 1a e(R) (p 2 ) = G F sin 2 θ W g πq 3 2 2 7 π 6 ( 17 3 -γ)M 8 B. (13$$\n\n$$)$$\n\n$$Π P V e (M 2 B ) = G F sin 2 θ W ( 17 3 -γ) 24π 2 M 4 B [M 4 B L -4/9 E 3 + 2 3 χ π aL -4/9 M 2 B E 2 + 1 2 m π 0 a E 1 L -4/9 ] = f πN N λ2 N e -M 2 /M 2 B (2 M 2 M 2 B -1)(14)$$\n\n$$Π s e (p 2 ) RHS = λ 2 N g πN N M 2 (p 2 -M 2 ) γ 5 + continuum. (15$$\n\n$$)$$\n\n$$g πN N λ2 N e -M 2 /M 2 B = M 6 B L -4/9 E 2 -M 4 B χ π aL 2/9 E 1 + 4 3 a 2 L 4/9 + < g 2 c G 2 > E 0 M 2 B 8 -< g 2 c G 2 > E 0 M 2 B ( 13 8 -ln M 2 ), (16$$\n\n$$)$$\n\n$$χ π (M 2$$\n\n$$f πN N g πN N = c w M 2 N (M 2 N -4M 2 B )(E 3 M 4 B + 1 2 am π 0 E 1 ) (2M 4 N + 3M 4 B -9M 2 N M 2 B )(12E 2 M 4 B + 3 < G 2 > E 0 ),(17)$$\n\n$$f πN N ≈ (1.9 to 2.4) × 10 -8 f or m π 0 = (0 to 0.8)GeV. (18$$\n\n$$)$$\n\n$$f πN N ≈ 1.9 × 10 -8.(19)$$\n\n$$L πN N = g ′ πN N m π ψN iγ µ γ 5 τ • ψ N ∇ µ φ π(20)$$\n\n$$L πqq = 1 2f π ψq iγ µ γ 5 τ ψ q ∇ µ φ π,(21)$$\n\n$$g ′ πN N m π = g A 2f π, g πN N = g ′ πN N 2M m π = g A M f π(22)$$\n\n$$< 0|ū iγ 5 u -d iγ 5 d|π 0 > = -f π m 2 π √ 2m q e -iq•x ≡ χπ φ π < qq > e -iq•x,(23)$$\n\n$$< 0|q iγ 5 τ 3 q|0 > π = -i √ 2 φ π d 4 xe iQ•x < 0|ū iγ 5 u -d iγ 5 d|π >< π|q iγ 5 τ 3 q|0 > Q→0 = i √ 2 φ π f 2 π m 2 π 2m 2 q ≡ χ π φ π < qq >(24)$$\n\n## 20 χπ\n\nIn conclusion, we find that the weak PV pion-nucleon coupling due to neutral currents is as small as that due to charged currents, ∼ 2 × 10 -8. This result agrees with the conclusion of the chiral soliton model of Kaiser and Meissner 5), but not that of Kaplan and Savage 6). Our result also disagrees with quark model calculations 3,4) and with a previous QCD sum rule calculation. 7) If the coupling is as small as we estimate, it cannot be separated from the charged current contribution and thus cannot be found experimentally; and it is unlikely that the anapole will be seen. 13) Although we have omitted gluon condensate corrections to the PV correlator, our result is sufficiently small that these corrections will not alter our conclusion. Finally, we point out that in the two-point QCD sum rule method used here, the small value of f πN N which we obtained is the result of a cancellation between a process which can be treated in quark models and a vacuum process identified in the method of QCD sum rules.\n\n13. See e.g., M.J. Musolf et al., Phys. Repts. 239 (1994) 1\n\n## References\n\n1. Barnes (1978) *Phys. Rev. Lett*\n\n2. Evans (1985) *Phys. Rev. Lett*\n\n3. Bini (1985) *Phys. Rev. Lett*\n\n4. Adelberger, Haxton, Lang (1985) *Ann. Rev. Nucl. Part. Sci*\n\n5. Desplanques, Donoghue, Holstein (1980) *Ann. Phys. (NY)*\n\n6. Dubovik, Zenkin (1986) *Ann. Phys. (NY)*\n\n7. Kaiser, Meissner (1989) \"1648 and U. Meissner, Mod\" *Nucl. Phys. A*\n\n8. Kaplan, Savage (1993) *Nucl. Phys. A*\n\n9. Khatsimovskii (1985) *Sov. J. Nucl. Phys*\n\n10. Reinders, Rubinstein, Yazaki et al. (1983) \"For references to more recent work see\" *Nucl. Phys. B*\n\n11. Ioffe (1981) *Nucl. Phys. B*\n\n12. (1983) *Z. Phys. C*\n\n13. Henley (1969) *Ann. Rev. Nucl. Sci*\n\n14. Reinders, Rubinstein, Yazaki (1983) *L.J. Reinders, Acta Phys. Polon. B*\n\n15. Belyaev, Kogan (1984) *Phys. Lett*<|endoftext|>" |
| } |
| }, |
| "all": { |
| "total_tokens_train": 4314494104, |
| "total_tokens_test": 477551597, |
| "tokenizer": "EleutherAI/gpt-neo-125M", |
| "vocab_size": 50257, |
| "max_length": -1, |
| "column": "category", |
| "labels": [ |
| "astrophysics", |
| "biology", |
| "cyber", |
| "nuclear" |
| ], |
| "length_strategy": "none" |
| } |
| } |